Genuary 2024 pt. 2
Welcome to the overview of the second week of my Genuary 2024 challenge. It’s been a while, as I’ve been very busy, but I still wanted to keep you updated on the thought process behind the development.
The first week ended by setting up the Follow Action algorithm for the drum track. We have a melody running, a harmony-based accompaniment using bowed guitar sounds, and drums made up of glitch sounds and driven by randomness.
This week starts by trying to control the drums’ activity and create some space. Not only do they feel a bit overcrowded, but also very mechanical, since the intensities are very similar throughout. Using Ableton’s MIDI editor we can add randomness and variation to both the velocity and whether each note plays at all. This way it will sound softer (the velocity randomization can only make a note the value it’s set at OR less, never more) and with more gaps in between (each note’s probability used to be 100% and is now lower, so the clip will almost always play fewer notes, and never more). It sounds way better and I’m happy with that.
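To make that concrete, here’s a rough Python sketch of what those two MIDI tools are doing conceptually. This is just an illustration, not how Live implements them internally, and the function and parameter names are made up:

```python
import random

def humanize(notes, velocity_range=30, note_chance=0.8):
    """Conceptual sketch of velocity range + note probability:
    each note may be dropped entirely, and its velocity can only be
    reduced by a random amount, never increased."""
    out = []
    for start, pitch, velocity in notes:        # note = (start_beat, pitch, velocity)
        if random.random() > note_chance:       # probability below 100%: note may be skipped
            continue
        velocity -= random.randint(0, velocity_range)  # the set value OR less, never more
        out.append((start, pitch, max(1, velocity)))
    return out

# Example: a bar of 16th-note hi-hats at velocity 100
pattern = [(i * 0.25, 42, 100) for i in range(16)]
print(humanize(pattern))
```

Run it a few times and you get a different, always slightly softer and sparser bar, which is exactly the kind of variation the drums needed.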
The title for this week’s second video is “For every action there is an equal and opposite reaction”, a reference to Newton’s third law of motion. This is because we use an LFO to control the panning of two tracks: the melody and the harmony. One of the most important aspects of a good mix is how your material is distributed in the stereo image. Before we added this modulator, they were both centered and fighting for the same space. Now, not only do we have movement, but we ensure they always move in opposite directions, so they stay balanced and create more space for the rest.
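The underlying idea is simple enough to sketch: one LFO, with the second track reading the inverted value. Again, this is only a conceptual Python illustration of the modulation, not the Max for Live LFO itself:

```python
import math

def lfo_pan(beat, rate_bars=1.0, beats_per_bar=4):
    """Sine LFO mapped to pan position: -1 = hard left, +1 = hard right."""
    phase = (beat / (rate_bars * beats_per_bar)) * 2 * math.pi
    return math.sin(phase)

# One LFO drives both tracks; the harmony simply takes the inverted value,
# so the two parts always sit on opposite sides of the stereo image.
for beat in range(8):
    melody_pan = lfo_pan(beat)
    harmony_pan = -melody_pan
    print(f"beat {beat}: melody {melody_pan:+.2f}, harmony {harmony_pan:+.2f}")
```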
Taking inspiration from the previous day, in this one we also add movement to the drum instruments. I specifically didn’t want the snares and kicks to move: for me it’s important to keep them centered, as they are one of the cornerstones of the whole piece. But we do have the hi-hats and misc percussion panning across the stereo image. To do so, we use the Auto Pan Audio Effect. On the hi-hats it’s set to a sine wave whose cycle is a whole bar, whereas on the misc percussion Ableton’s Auto Pan is set to Random. We can have different instruments moving in different directions for two reasons: we are not using the track’s pan, which would move all the Drums at once, and we are placing the Auto Pan on the individual chains inside the Drum Rack.
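If it helps to picture the routing, here’s a small sketch of the per-chain behavior. The chain names and functions are hypothetical; it only illustrates why the kick and snare stay put while the hats and percussion move independently:

```python
import math
import random

def hat_pan(beat, beats_per_bar=4):
    """Sine Auto Pan whose full cycle lasts one bar (the hi-hat setting)."""
    return math.sin(2 * math.pi * beat / beats_per_bar)

def perc_pan(_beat):
    """Random Auto Pan: a new position each time (the misc percussion setting)."""
    return random.uniform(-1.0, 1.0)

# Kicks and snares stay centered; only the individual chains get an Auto Pan,
# so the Drum Rack as a whole never moves as one block.
pan_per_chain = {"kick": lambda b: 0.0, "snare": lambda b: 0.0,
                 "hihat": hat_pan, "perc": perc_pan}
for beat in range(4):
    print({name: round(fn(beat), 2) for name, fn in pan_per_chain.items()})
```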
In this video we look elsewhere for inspiration. Specifically, sounds from a bird feeder station in Alabama.
Using live sound coming from across the ocean is possible, in this case, by using internal audio routing. This can be done with your soundcard, if it has a loopback function, or with dedicated software such as BlackHole on Mac or VoiceMeeter on Windows, which is what I’m using here. There’s something unpredictable about including this type of resource that I really like. You never know what’s coming: a bird chirping, people talking, or just the ambient sounds of the surrounding environment. It also has a musical function, because the continuous sound (it was raining that day. Great!) serves as the background for the rest of the instruments.
Still using the live bird feed, we add an additional layer. The goal here is to heavily process the sound from the birds and route it through a delay. Now, if we did this the whole time, it would be very tiring and noticeable. For that reason, I added an EQ and a Gate. The Gate Audio Effect in Ableton Live only lets sound through if it goes above the set threshold. Hopefully, every time there is a loud bird chirp, it will open the Gate and that chirp - and only that chirp (because the Gate closes afterwards) - will be processed.
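In its simplest form, a gate is just a threshold test on the incoming level. The sketch below is a deliberately stripped-down Python illustration (a real gate like Live’s also has attack, hold and release times, which this ignores):

```python
def gate(samples, threshold=0.5):
    """Toy gate: only audio above the threshold passes on to the
    delay/processing chain; everything quieter is muted."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# A loud chirp (0.9, 0.8) passes and gets processed; the quiet rain bed does not.
feed = [0.1, 0.05, 0.9, 0.8, 0.1, 0.02]
print(gate(feed))   # [0.0, 0.0, 0.9, 0.8, 0.0, 0.0]
```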
It adds even more unpredictability to our piece.
I’ve always been a big fan of musique concrète and field recordings. I love using sounds collected from all around. One of the main sources of inspiration available is radio, specifically when it comes to the human voice. There’s a website I use all the time called Radio Garden, which allows you to listen to radio stations from all around the world. Some of my favorites are those where you can hear different languages and cultures. There’s something about the human voice that is irreplaceable.
In this video, I browse around some of my favorite radio stations in search of a sample of spoken word in a foreign language. I found it in a Greek poetry radio station, so I sampled it to use later on.
The final video for this week is actually a continuation of the previous one. I take the recording from the Greek radio station and use it inside the Granulator II Max for Live device. The goal wasn’t to use the dry recording of the voice, but rather to heavily process it. And granular synthesis is not only a staple of generative music, but also a very versatile technique that adds variety and dimension to any sound.
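For anyone unfamiliar with the technique, here is a toy Python version of the core idea behind granular synthesis: many short, windowed grains taken from random positions in the source and overlapped back together. It is only a sketch of the principle, not what Granulator II actually does under the hood:

```python
import math
import random

def granulate(sample, grain_len=2048, n_grains=200):
    """Toy granular pass: pick short grains from random positions in the
    source and overlap-add them, turning the original voice into a texture."""
    out = [0.0] * (n_grains * grain_len // 2 + grain_len)
    for g in range(n_grains):
        src = random.randint(0, max(0, len(sample) - grain_len))
        dst = g * grain_len // 2                      # 50% overlap between grains
        for i in range(grain_len):
            if src + i < len(sample):
                env = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)  # Hann window
                out[dst + i] += sample[src + i] * env
    return out

# Demo on a second of noise standing in for the sampled voice
texture = granulate([random.uniform(-1, 1) for _ in range(48000)])
```

Because the grain positions are random, every pass produces a slightly different texture, which is exactly why the technique sits so well in a generative piece.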
And that’s it. Our piece is growing day by day and becoming multi-layered and complex.
If you have any questions or you’d like something clarified or expanded on, please contact me.
If you prefer to watch the videos on YouTube, here’s the link for the playlist.
See you soon,
Óscar