When presented with the opportunity to fill a two-hour slot on the radio, we thought setting it up and performing would be a straightforward task, and to a certain extent it was.
We had not done an entirely digital gig since the early 1990s, and back then we did it using two Alesis MMT-8 sequencers, a Roland R-8 drum machine and a Roland D-110 synth, plus some effects. This was entirely different.
|The Alesis MMT-8 Sequencer|
I came up with five alternative methods of working with Ableton Live that we could use for the show; some were more complex and demanding, others basic and simple. We decided that simplicity should be at the core of the performance, along with the ability to improvise, so our final choice of method lay somewhere in the middle.
We decided on two 40-minute sets (the rest of the available time on the show would be filled with DJing). The sets would be a selection of our best tracks along with some new material. Each set would be constructed in such a way that we would be able to manipulate the tracks as they played. Both of us would be able to add dubs to the channels we controlled, and we would also be able to mute or delete things, and/or add new parts and sounds on the fly. This would give the set the live feel that we felt was essential and would provide a decent representation of the way that we create and improvise.
It would run very much like the way we generate new jams: two laptops running Ableton, linked wirelessly using the Macs' AirPort networking. The analogue outputs from each Mac would then be sent to a mixing desk and on to the computer running the radio station software.
All I needed to do to prepare was take the Ableton project file from each track's final mix and divide it in two, one half for Andi and one for myself. Simple.
Not! Once I started this process it got instantly more complex. Once we had agreed upon the two set lists I decided to build each set by starting with the introduction and adding each song one by one. It soon became evident that things would not work out as simply as we had hoped. Each song had a unique build, which meant that there were few common elements across each set, so how to link the songs together for performance?
I decided that I should be thinking as though I were working with a live band at a live venue. I needed similar restrictions instead of the infinite possibilities of the digital realm. Often, when doing front-of-house engineering for live bands you have to compromise to meet the technical restrictions of the venue. There are only so many channels on a mixing desk, and only so much outboard equipment.
In our digital set-up, we have a huge amount of channels available (more than any hardware mixing desk), but to have every single one of the instruments, loops and samples that we used in each song on its own unique channel with its own unique processing would create a vast and unmanageable project. So, the idea of working more like a live band arose to simplify things and create a project we could realistically operate live.
I built a channel list, much like one that would be derived at a live venue, with a drum kit broken down into its separate elements (bass drum, snare, hat, toms, cymbals; the only difference with us is that we often use two or three bass drums, two snares and two hats, no toms and plenty of percussion), bass, guitar, synths and so on. Then there were channels for samples, field recordings and loops. Instruments and sounds would be put into the channel that nearest suited them.
For the live performance we would be working with the Korg nanoKONTROL units again, so it made sense to build the two sets with around eight channels for each of us; then we would have a direct correlation between the software and the hardware, as well as keeping things relatively simple. The last thing you want at a gig is hundreds of settings and parameters to remember. Too many and things will definitely go wrong at some point.
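The one-fader-per-channel idea can be sketched as a simple lookup table. This is only an illustration: the channel names and CC numbers below are hypothetical, not the actual assignments we used on the night.

```python
# Hypothetical one-to-one mapping between eight Ableton channels and the
# controls on a Korg nanoKONTROL. CC numbers here are illustrative only.
CHANNELS = ["kick", "snare", "hats", "perc", "bass", "synth", "samples", "loops"]

def build_mapping(first_fader_cc=2, first_knob_cc=14, first_button_cc=23):
    """Assign one fader (effect amount), one knob (e.g. filter frequency)
    and one button (channel mute) per channel, in order."""
    mapping = {}
    for i, name in enumerate(CHANNELS):
        mapping[name] = {
            "fader_cc": first_fader_cc + i,    # continuous controller
            "knob_cc": first_knob_cc + i,      # continuous controller
            "button_cc": first_button_cc + i,  # toggle: mute on/off
        }
    return mapping

mapping = build_mapping()
```

The point of keeping the layout this regular is exactly the one above: with one fader, one knob and one button per channel there is almost nothing to remember under pressure.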
|Simon's Arrangement page 1:|
Another issue was that Andi works with Ableton using a completely different method to me. I tend to line up a series of tracks (say, bass, drums, synth) left to right across the screen and process each channel with effects independently. Andi tends to pile up samples in one channel and send them through effects, then another group in another channel, so that different sounds get processed dependent on which track he hurls them at. We both use effects on the returns too (usually an echo and a reverb).
When I divided the sets into two (one for each of us) the two different working methods meant that I had a lot more channels than Andi, and Andi's effects could not be loaded in the same way for the set as he had for each song. The same was true for a lot of my channels. This is where the compromise began.
|Andi's Arrange page 1:|
For each track of each song I decided what its essential "movement" might be for the live aspect of the gig. What I mean here is that I might choose a filter's frequency parameter as the one that would be manipulated live across that channel for that song (and possibly for the duration of the set); there would be far too many to cope with otherwise. Once that decision was made for each channel of each song, I exported each track (or parts of it) as an audio file (with the other, non-live effects printed in), then imported that file back into Ableton and added the "live" effect to its own channel. One or two effects were automated so that they would switch themselves on and off at the right time for specific tasks in a track, but nearly all of the essential (and most audible) effects were operated live.
After a while of building the sets in this way a pattern began to emerge which enabled me to make a decision as to which effects would suit each channel for the entire set. I set up Andi's channels in a similar way, but left him to make his own decisions regarding effects choices.
Doing things this way also helped to reduce the load on each laptop's CPU during the performance, which was an important consideration for us (my laptop is an ageing PPC Mac, an iBook G4). The downside was that each song generated quite a number of these audio files, and very soon I realised I was eating hard drive space at an alarming rate, so each file was painstakingly converted to mp3.
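The conversions were done one file at a time; a batch sketch of the same chore might look like this. It assumes ffmpeg is installed and that the exports all sit in one folder, which is a hypothetical layout, not how my project was actually organised.

```python
import subprocess
from pathlib import Path

def mp3_commands(folder, bitrate="192k"):
    """Build one ffmpeg command per WAV file in `folder` (non-recursive)."""
    cmds = []
    for wav in sorted(Path(folder).glob("*.wav")):
        mp3 = wav.with_suffix(".mp3")
        # -y overwrites an existing mp3; -b:a sets the audio bitrate
        cmds.append(["ffmpeg", "-y", "-i", str(wav), "-b:a", bitrate, str(mp3)])
    return cmds

def convert_all(folder):
    # Runs each conversion in turn; ffmpeg must be on the PATH.
    for cmd in mp3_commands(folder):
        subprocess.run(cmd, check=True)
```

Separating command-building from running makes it easy to eyeball the list before committing an hour of CPU time on an old laptop.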
|Ableton Export: note Analysis button|
If you decide to work this way, I can offer you an important tip. Ableton likes small bits of audio (i.e. loops) but does not cope so well with large audio files, so when you export files for re-importing, make sure you export an "analysis" file with them. As I found out to my cost, if you import a long audio file into your tune without the analysis file, Ableton will take a stab at getting it in time with your song, but it will most likely make mistakes: sections of your file get unexpectedly time-stretched. Sometimes this can sound good (and I did leave one or two in; check out Brian's voice in "Brian"), but mostly it just sounds wrong, and you can end up with large sections of your tracks way out of time. If the analysis file is there, this problem does not arise.
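A quick pre-gig check for this is easy to script. This is only a sketch, and it assumes Live's convention of saving the analysis next to the sample as a sidecar named `<filename>.asd` (e.g. `pad.wav` gets `pad.wav.asd`), which is how my copy behaves.

```python
from pathlib import Path

# Audio formats we care about; anything else in the folder is ignored.
AUDIO_EXTS = {".wav", ".aif", ".aiff"}

def missing_analysis(folder):
    """List audio files in `folder` that have no Ableton .asd sidecar,
    so they can be re-exported (with analysis) before the gig."""
    missing = []
    for audio in sorted(Path(folder).iterdir()):
        if audio.suffix.lower() in AUDIO_EXTS:
            sidecar = audio.parent / (audio.name + ".asd")
            if not sidecar.exists():
                missing.append(audio.name)
    return missing
```

Run it over the set's sample folder and anything it reports is a file that Live may try to re-warp on its own, with the out-of-time results described above.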
After all the new files were in place, I then used the "crop" facility to get rid of any silent space on the audio files to further save hard drive space (or so I thought) and to help our ability to "read" the arrangement as it progressed through the set. We would be able to see where parts dropped in and out of the tunes.
The trouble with the hard drive space-saving idea was that although Ableton does get rid of the silent sections, it does so by creating a new WAV file. Arg! I did not know this until I began using this method, so all the painstaking mp3 conversions were a complete waste of time!
Eventually I had it all sorted: a Live set for each of the two performances. I had overlapped each song and created a tempo map so that as we went from song to song it would sound like it was being mixed by a DJ, but (unlike DJ mixing) we could overlap one channel at a time (i.e. decide to have the bass part enter before anything else if we wanted, then a synth, and so on), which meant far more creativity in the mix. We could improvise all the way through, using mute controls to silence parts (and enhance others) and effects to twist and mutate things on the fly. Andi even decided to drop in loops as we played.
We did two rehearsals before the radio gig itself, and things went fine. The idea had worked. The computers coped fairly well, except that at one point they lost time with each other (which led me to think that I should not totally trust AirPort for sync purposes; its reliability seems to change depending on where you are working. Connecting with an ethernet cable is probably safer).
We both set up our nanoKONTROLs to do things the way we wanted. The faders and knobs controlled effects parameters; buttons controlled effects on/off and channel mutes. If we needed to operate other controls we just went for the trackpad on our laptops. I covered my controller in stickers so I could remember what did what.
I had decided to steer away from any live level alterations. This is where there is a real difference from a live gig. In the gig situation, what you hear from the PA system is what most of your audience is hearing, so you can make alterations with confidence. In our situation you are closer to mixing in the recording studio: your audience will be listening on their own speakers or headphones wherever they are, so your mix levels have to be as near to perfect as you can get. I balanced all the levels on my studio monitors as best I could beforehand, knowing that there would be some shift because of the live treatments we would apply, but hoping to contain any wild leaps in volume (that might be caused by improvisation and effects) with limiting and compression set in strategic places. I did at times alter levels in the mix, but I knew this should be done with extreme caution, especially as we would be performing on monitors we had not heard before, so it would be impossible to judge frequencies (and therefore levels) correctly. We had to trust the monitor mix I had set up previously on the studio monitors I knew.
In the end it worked as well as we could have hoped for. It definitely felt like a true live performance, and technically there were no major faults. We had great fun. Cash and the children were filming with video cameras and moving lights about, David and Fred Aylwood turned up (old friends from the 1990s - David was in Baby Trio and used to play percussion sometimes with Best Foot Forward, now plays in "Blurt"; Fred performed as "Les" in Vic Reeves' Big Night Out) and gave us a positive boost.
It turned out that there was a web cam there when we performed, so our audience got a slow lo-res view of us doing our thing. Not much to see though, two blokes twiddling knobs is not the greatest visual entertainment. We wrote things on bits of paper and held them up to try and liven it up a bit, but in future we must improve that side. Really we were too busy doing our musical thing, we need others to help with the visuals. We have been preparing some video, and hopefully we will have some VJ activity for the future. Any offers of help would be most welcome, so please drop us a line if you are interested or know someone who is.
The weirdest part was the silence afterwards. Normally when you do a live performance you get audience feedback. This was very different. There were one or two positive comments that came through on the live text thing on the radio station, and both of us received a couple of texts to our mobiles, but that was it. Silence. We have no real idea of how it all went down, or even if many people tuned in. Very strange. We felt it went well, and I suppose we have to cling to that.
We had ideas for the DJ side too. Originally we were going to play all home-made stuff, some of it going back some way, but I did not have the time to prepare it alongside everything else.
Another idea was to play music that reflected our influences or related somehow to what we did as X-Amount. We also needed to please the show's regular listeners, so another compromise was met. I played some recent electronic stuff, one home-made tune ("Spdaz", one of the Dazman's creations from the "Nicky's House" album of 2010) and some older tunes: "Rema Rema", one of Andi's favourites from the post-punk early 1980s, which was put together by and featured Marco Pirroni, who went on to join Adam and the Ants; and The Pop Group's "Thief Of Fire" from the brilliant "Y" album.
Andi and I are both long-time fans of Mark Stewart's work, and you can hear the influence of mad dub too (done here by the genius Dennis Bovell). Then Can's "Vitamin C": Andi played me this and I have been hooked ever since, thanks to him and the Dazman. The first track in the mix is Four Tet's remix of Rocketnumbernine, which I discovered through seeing this uplifting video:
With that, I leave you until next time. Many thanks for reading, I hope this was of some interest or use. Please e-mail us or leave a comment (you can do this anonymously if you prefer). We would appreciate some feedback.