Liveschool guest blogger Mark Smith (of techno-duo Gardland) recently dropped by Rainbow Chan’s home studio to discuss her unique compositions, as well as the creative process and production techniques behind them.
Sydneysider Rainbow Chan is something of a 360-degree artist.
Her work is intricately detailed and expertly arranged, yet floats light on a warm cloud of pop. Her productions are executed with a clinical finesse born from a childhood grounded in the rigours of classical music education – but her tracks come across like an old friend, inviting and generous. The elegantly allusive visual aesthetic found in her videos and artwork draws inspiration from her cultural heritage, providing a charming and disarming veneer through which to access her music. So by 360-degree artist I mean she’s got all the angles covered.
Coming off the back of her appearance at the Opera House for Vivid as part of the Since I Left You tribute to the Avalanches, Rainbow talked through how to seamlessly combine acoustic and digital source material, how vocal techniques can supersede complex production techniques and ways to build a symbiotic relationship between songcraft and laptop.
What’s your musical background? Did you start with learning a specific instrument?
I grew up with older sisters who played piano. My mum really wanted me to play piano when I was young but I felt like it was forced upon me, so I ended up taking an interest in the saxophone at primary school, playing that in band for a while, so I was classically trained. Eventually I started to teach myself piano and get lessons, then I taught myself guitar and from then on I was picking up any kind of weird instrument… my latest acquisition was the harp. But then I got into electronic music in uni and that’s kind of where I’ve been most comfortable now, trying to explore that more and more.
For you, computers became involved after you learnt all these instruments – how was it initially bringing those two formats together? When you’ve had so much experience playing an instrument the concept of it being recorded and organised can seem like a new thing.
I didn’t start out making music on the computer but I knew I wanted to get there. I just didn’t have the right tools at that point in my life so I started working with computers on a very basic level. I would record my folky pop songs using a guitar and a bad Casio keyboard with the internal microphone of my sister’s iMac, back in 2003 or whatever… and then it was all recorded on one track with lots of layers in iMovie, basically the most crap thing you could do. But that was my introduction into translating live organic sounds into something that would be on the computer. Things started to change a couple of years later when I started working with loop pedals and got myself a sampler. At that point I wanted to incorporate the organic and live sounds more and more into an electronic background of beats. Then once I started getting into hardware, I could mix that with the software. To this day I’m trying to get better and better at using software and make sure that I keep the live element in there as well. Just chasing a balance between the two.
With your writing process how much of a track do you consider ‘done’ before it goes into the computer? Do you feel you’re a songwriter first, or is this process inextricably linked to working on the computer?
It depends on the actual song. I’ve written songs in one sitting, say on a piano; I guess those are the more ‘song’-like outcomes. And then I’d take it to the computer and match it up to existing loops I’ve already created that would work well in that song. But I’ve actually found the most successful songs are the ones I do simultaneously, so I’ll start with a beat and then I’ll get the keyboard out and build a dialogue between the stuff on the computer and what I’m playing live. Also I’ve found it really fun and a great challenge to make music only on my computer using field recordings I’ve collected or bits of audio from past sessions, making tracks in weird transitional spaces like airports or hotels. Having the tools on your computer, your headphones and being in a completely isolated bubble and forcing yourself to make a piece of music using that. I’ve found that really challenging but also exciting because it’s moving away more from my songwriting/performer/singer background. But ultimately they go hand in hand for me I think.
So you like to explore the extremes of both sides.
I’ve found sometimes if I’ve written a piece of music that’s really production focused and uses hardly any organic sounds – except for maybe my voice, I like to then go back and try to figure out how to play it on the piano while singing it also. So for me with the classical background and what I’m doing now with electronic stuff, it’s just back and forth between the two. I really enjoy both disciplines and they do work hand in hand. They’re not mutually exclusive at all.
An interesting tension I find in your work as Rainbow Chan is the relationship and transitions between the electronic and organic elements. Is that something you struggle with or is it just part of your process to be able to smooth out the inconsistencies between the different types of source material?
I’m attracted to electronic sounds that are organic, or organic sounds that have been warped or digitised to the point where you’re not quite sure what the source is. If I do make something purely on the computer I try to feed it back in to a ‘live piece’, like a tape for instance, or if I’m working with live instruments I’ll route it through the effects on a Roland SP-404. So it’s about blending. I gravitate toward grainy sounds that wash over everything because it helps to make the whole a bit more cohesive. And because I do use my voice a lot it works as a glue between the live instrumentation and the electronic elements because in the end all the sounds that are not my voice are there to support what is happening vocally and lyrically. Perhaps that’s why there is some sort of balance there.
I wanted to ask about the production technique on your multi-tracked vocals, particularly how you achieve a sense of width and depth. I was wondering if you work with an engineer or mixer because they’re extremely well done.
With the stuff that I’ve released, the answer is no. I’ve mixed everything myself though there has been professional mastering. I listen to a lot of vocal driven music. I sang in a choir as a kid for about a decade so maybe my ears are attuned to harmonies and how to blend different vocal sounds. In terms of mixing, I’ve read up recently on things you should do regarding certain frequencies, but I’ve never been really nerdy or tech savvy about that, I just go with my ears to try to make it work.
With these big choral sections are you modulating your vocals with effects at all, or is it more about the doubling, tripling, quadrupling of your voice and the beating between the frequencies that creates the width? Because a lot of people have to do this artificially with chorus or by messing with the phase to achieve an expansive vocal field.
No I don’t do that at all. It’s all live. In one song I did 20 vocal layers just live because I wanted to have that big choral sound, but I didn’t really want to do it on the computer. I mean I have done stuff with parallel compression and delay so there is a bit of processing, but generally when I am working with vocals it’s more of a live thing. Trying to work with harmonies and different ways of singing a particular line – for instance singing one with a bit more force and another with a head voice, using different vibratos to get a set of slightly different phrases and shapes in the timbre of your voice to create those little subtleties that contribute to that overall washy effect.
I really like when instrumental technique inherently bypasses things to do with production, so that the production comes from the performance of the individual elements rather than doing it technically – like the standard vocal double trick of taking one track, tripling it, changing their delays by a few milliseconds, spreading them over the stereo spectrum.
I think also that I’ve discovered some of these tricks through weird computer processes. Maybe I’ve accidentally moved a track a little bit off, creating a chorusing effect, and thought ‘oh, I can try to do this live now’, so again those little happy accidents on the computer can be translated into the live performance aspect – the classic blending of human and machine.
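As a side note for producers following along, the ‘artificial double’ the interviewer describes – one take copied, each copy nudged by a few milliseconds and panned apart – can be sketched in a few lines of Python. This is a rough illustration only (numpy is assumed, and the delay and pan values are arbitrary placeholders, not a recipe):

```python
import numpy as np

SR = 44100  # sample rate in Hz

def double_track(mono, delays_ms=(0.0, 11.0, 23.0), pans=(0.0, -0.7, 0.7)):
    """Copy a mono take, delay each copy by a few milliseconds,
    and spread the copies across the stereo field."""
    max_delay = int(max(delays_ms) / 1000 * SR)
    out = np.zeros((len(mono) + max_delay, 2))
    for ms, pan in zip(delays_ms, pans):
        offset = int(ms / 1000 * SR)
        angle = (pan + 1) * np.pi / 4          # equal-power pan law
        out[offset:offset + len(mono), 0] += mono * np.cos(angle)
        out[offset:offset + len(mono), 1] += mono * np.sin(angle)
    return out / len(delays_ms)                # rough gain compensation

# a one-second 220 Hz test tone standing in for a vocal take
t = np.linspace(0, 1, SR, endpoint=False)
stereo = double_track(np.sin(2 * np.pi * 220 * t))
```

Because every copy carries the identical performance, the result reads as width rather than as separate takes – the quality Rainbow gets for free by actually singing each layer live.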
It sounds like you’ve got a symbiotic relationship between these disparate processes that people generally tend to separate.
Everyone should try to be a cyborg.
I find your instrumentation pretty interesting. Sometimes you’ll have a synth that will be in a ’90s West Coast hip-hop vein and then have it paired with a melodica – there always seems to be quite different materials which connote different forms of music, yet they amalgamate into a seamless whole. You’ve got a certain flair in that respect.
I think maybe those pairings are semi-accidental as well. From what I’ve come across, producers or people that are really into electronic music can be very niche-y. They get good at a particular sound and then they almost become dictated by what is demanded by that genre or whatever musical wave they think they’re part of. Whereas I’m a bit more on the outside of that, so I can go and take little bits from here and there. Perhaps this comes back to having a fairly diverse musical background. I try to keep my mind open and I’m attracted to certain sounds rather than certain genres. I don’t try to perfect one particular thing – maybe this causes some faux pas but that’s just my interest and I think ultimately that makes it more… not genuine, maybe personal… I’m not trying to appease anyone or appeal to any particular movement, I’m just making things that appeal to me.
With your work as Chunyin the sound sources are more overtly electronic than your Rainbow Chan stuff but the palette you’re drawing from is quite broad. Can you give an overview of where you like to get your materials from?
I really like grainy sounds, anything that has a rubbing, frictional quality – whether it’s sampling me scrunching up a biscuit packet or stepping on pebbles – those kinds of things where you can hear a physical action or shape. I was making a lot of the Chunyin stuff when I was travelling in Hong Kong and Japan, so I worked quite a lot with MIDI. I’ve been listening a lot to a Japanese composer called Joe Hisaishi, who composed a lot for Miyazaki films in the 80s and 90s, and to lots of electronic scores from films by another Japanese director called Takeshi Kitano – he uses a lot of vocal pads, so I was trying to explore that in the Chunyin stuff. I was limited because I didn’t have a microphone but I still wanted to have some kind of vocal element. I knew before starting the project that I didn’t want to sing live so I was trying out all these weird computerised vocal sounds. I really like sampling old folky instruments like weird harps. Once I sampled this old music box at these markets in London. I use things that you don’t particularly consider useful, but you can pitch them afterwards and produce them into something very musical. Everything has pitch, even if you hit a table – then if you think about it harmonically as well you can explore interesting elements in mundane sounds.
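That ‘everything has pitch’ idea is easy to demonstrate: take any sampled hit, find its strongest frequency, and you know how far to repitch it to land on a musical note. A minimal Python sketch (numpy is assumed; the ‘table hit’ here is a synthetic stand-in, and real percussive sources would want a more robust pitch detector):

```python
import numpy as np

SR = 44100  # sample rate in Hz

def dominant_pitch(signal):
    """Find the strongest frequency in a recording via the FFT."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / SR)
    return freqs[np.argmax(spectrum)]

def semitones_to(target_hz, source_hz):
    """How far to repitch the sample to land on a target note."""
    return 12 * np.log2(target_hz / source_hz)

# stand-in for a sampled 'table hit': a decaying sine near 317 Hz
t = np.linspace(0, 0.5, SR // 2, endpoint=False)
hit = np.sin(2 * np.pi * 317 * t) * np.exp(-6 * t)

f0 = dominant_pitch(hit)         # roughly 317 Hz for this stand-in
shift = semitones_to(440.0, f0)  # semitones up to A4 (440 Hz)
```

Apply the computed shift in any sampler and the ‘mundane’ sound sits on the grid of your harmony, which is exactly what repitching a music box or a biscuit packet amounts to.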
Can you talk us through that synth you’ve got upstairs?
I found it in a second-hand shop down the road. It’s a Korg Sigma. Pretty sure it’s from the 80s. Half of it is just standard organ sounds, the other half is synth sounds. You can blend the two with white noise and other digital sounding patches or go with pure flute and clarinet sounds, change the attack and decay. There’s also a couple of amazing joysticks that can create some cool portamento shapes. This is a synth I’ve used a lot in my music and I’ll probably keep using it. It’s got pretty cheesy sounds, like a great porno slap bass patch, and can sound a bit Gameboy-ish here and there. I’ve been pairing it up now with the Roland SP-404, feeding it through different effects, trying to see how far I can push that particular synth, because it’s only monophonic so no harmonies are possible.
As soon as I saw it I was like ‘what is that?’ The bi-timbral blending between the synth section and the instrument emulation section can get you some pretty strange combinations of sounds.
It also goes out of tune really easily, which actually creates some interesting things. I’ve found when I’ve doubled lines and laid down the second track it’d be at a different pitch and it would create some nice vibrations against the original line, totally by accident. Computer stuff I feel can be very clinical, which if you want to use it for a particular purpose can be very interesting, but when you’ve got hardware and synths where you don’t know exactly what it’s going to do, that’s when you can really create some interesting and unrepeatable things. Even within your own music, sometimes there’s something you can’t ever replicate because you don’t know how the hell it happened.
When you’re programming your beats, do you tend to do that by hand in an unquantised way, or do you like to click them in on a piano roll?
I used to do it a lot purely by hand using the Roland SP-404 but I think with the stuff I’m working on now and also the last EP I released, I’m trying to refine the beats a little bit more. Maybe because texturally there are a lot of things happening already in my music, a lot of elements just flowing over the top, I’ve found that actually making the beat and the percussion a bit neater has helped to make it all sit better. I’m happier with my production now after refining that. I tried to not make it too quantised. With the principal stuff, kick and snare, I’ll quantise that, but then I’ll have little elements here and there that I’ll play by hand, or I’ll stretch it out so that it doesn’t sit perfectly in the grid. I think it can become a bit formulaic with certain producers or sounds that are coming out at the moment. Things can tend to sound a little bit robotic, which isn’t necessarily bad, it’s just not something that I want to do particularly. So I’m trying to make sure that there is a more human sounding thing amongst the strict metric elements as well.
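The partial quantising described here – principal hits snapped hard to the grid, auxiliary percussion left with a little human slop – could be sketched like so. This is a hypothetical illustration: the event names, grid size and slop amount are all assumptions, with times given in beats:

```python
import random

def partial_quantize(events, grid=0.25, strict=("kick", "snare"), slop=0.02):
    """Snap the principal hits dead on the grid; let everything else
    drift a few hundredths of a beat either side of its grid position."""
    out = []
    for name, time in events:
        snapped = round(time / grid) * grid
        if name in strict:
            out.append((name, snapped))
        else:
            out.append((name, snapped + random.uniform(-slop, slop)))
    return out

# a loosely played bar: kick and snare get locked, hats keep some drift
pattern = [("kick", 0.02), ("hat", 0.49), ("snare", 1.03), ("hat", 1.51)]
humanised = partial_quantize(pattern)
```

The same idea sits behind most DAWs’ quantise-strength and groove controls: the skeleton of the beat is metrically exact, while the detail around it keeps the timing fingerprints of a human performance.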
You tend to loop things a lot in a live context. Is that something that comes out of pragmatism regarding having to work on your own or is there something you particularly like about that workflow?
I’ve actually started to loop a little bit less lately because now that I’ve started to make my productions a little more intricate it is hard to achieve that complexity with a basic loop. So now I work with a backing track and loop certain layers on top of that, and I’ve started working with a live drummer on a Roland SPD. It did start out as a pragmatic approach, with playing solo and looping being an easy way to translate those live sounds into a slightly more electronic context, mixing it with beats on the sampler. But I also really like the sound of one layer being repeated then multi-tracked. Maybe again it’s got something to do with my interest in layering vocals or sounds upon sounds. When stacking things on top of each other accidental swing rhythms can happen, weird polyrhythmic accents can happen – that really interests me as well. I like when you slightly miss the loop, causing a hiccupping effect. I think those accidents are where the magic happens. It’s obviously been recorded live and is kind of screwed up, but it sounds better for it. You can try to replicate it on a computer but I feel like when you simply do it with a loop pedal it’s more exciting and improvised.
Our guest blogger Mark Smith is one half of ex-Sydney (now Berlin) techno duo Gardland. Look out for their next release out very soon on RVNG Intl. Before doing this Rainbow Chan interview, Mark also recently interviewed Dro Carey (aka Tuff Sherm) for Liveschool.