Categories
Uncategorised

Surround sound

Surround sound audio systems, while becoming increasingly popular, date back to the 1930s and 40s. Initially conceived for use in cinemas with Disney’s ‘Fantasia’, the core idea was to use several sound channels to immerse the audience in the film. While mono utilises one channel and stereo utilises two (left and right), surround sound utilises many, positioned around the listener to create space and depth. While this new technology was very interesting, the Broadway Theater in New York and the Carthay Circle Theater in Los Angeles were the only two locations screening ‘Fantasia’ that had the capabilities for the technology, and it fell into disuse.

However, surround audio made a comeback with ‘Dolby Stereo’ in 1975, which introduced the now-standard format of a centre channel, a left and a right channel, and then side and rear channels (most commonly seen in 5.1 and 7.1 formats). At this stage most big blockbusters, such as Star Wars, were being mixed for surround sound; however, it wasn’t until 1982 that surround sound technology was available for commercial and home settings. In 2012 Pixar released the first Dolby Atmos film, ‘Brave’, heralding 3D audio.

Musically, Pink Floyd played the first ever gig with quadraphonic audio in London in 1967, placing four speakers in the corners of the concert hall. Since then, surround sound audio has only grown in popularity, with most big modern album releases getting a surround sound mix in addition. However, I think surround sound audio is somewhat limited for two reasons. Firstly, not everyone has the means to physically listen to surround sound mixes: a pair of stereo headphones is a lot cheaper than a surround sound audio system, and surround sound can’t be listened to on the go. Secondly, only three streaming platforms currently support surround sound audio: Apple Music, Amazon Music and Tidal.

A brief history of surround sound (no date) KEF US. Available at: https://us.kef.com/blogs/news/a-brief-history-of-surround-sound (Accessed: 26 April 2024). 

The History of Surround Sound (no date) Official Fluance Blog. Available at: https://blog.fluance.com/history-surround-sound/ (Accessed: 26 April 2024). 


The Future of Mastering

Mastering is the process of applying the finishing touches to a whole musical project, usually by adjusting levels from track to track (in the context of an EP or album), applying stereo enhancement, and so on. At the moment it’s standard for a track to be mastered to around -14 LUFS for digital distribution; however, if previous trends are anything to go by, this could get louder. As discussed in my previous blog on ‘the loudness wars’, the evolution of the mastering process has focused on taking full advantage of newer technologies, most recently the move to digital streaming.

In this sense I don’t think stereo mastering can evolve much further, aside from getting louder. However, I believe both the process of mastering and its end result could change drastically. Michael Romanowski currently works to create ‘immersive audio’ spatial mixes, using surround sound systems such as Dolby Atmos. While I think this will remain outside the average music listener’s reach for a while, I think it’s an interesting artistic path that will certainly become more common as time passes.

Similarly, online AI tools for mastering, such as LANDR, have become all the more common. I believe that without intervention, AI mastering could become much more widespread, as to many people mastering can seem like some strange unknown science. While I’m unsure about my thoughts on AI’s use in music, my opinion recently became a lot more positive. I attended a talk on AI by Eric Drass (and a few other speakers I can’t recall), where the point was made that 150 years ago, when the camera was invented, painters were terrified that their medium was done for; yet today photography and fine art thrive alongside each other. It was upon hearing this analogy that my view on the use of AI changed somewhat, and while I still believe it can be used problematically, it is ultimately a new technological tool (like the camera) that we can use to help us.

Ultimately I believe a simple, loud-ish stereo mix will remain the most prominent outcome of mastering for the foreseeable future. While new tools such as AI may arrive and other listening formats may be invented, I think it ultimately depends on accessibility for the average listener. That usually comes down to speakers or headphones, meaning simple mono and stereo mixes will remain important for a while to come.

What is mastering? (no date) What Is Mastering? Available at: https://www.izotope.com/en/learn/what-is-mastering.html (Accessed: 19 April 2024). 

AI & Automated Mastering: What to know (no date) iZotope. Available at: https://www.izotope.com/en/learn/ai-mastering.html (Accessed: 19 April 2024). 

Hillmayer, M. (2024) Mastering spatial audio. Available at: https://www.soundonsound.com/techniques/mastering-spatial-audio (Accessed: 19 April 2024). 

Latest works (no date) shardcore. Available at: https://www.shardcore.org/spx/ (Accessed: 19 April 2024). 


Loudness Wars

The ‘loudness wars’ were a phenomenon that came to the forefront of music discourse in 2008 with the record ‘Death Magnetic’, though they can be dated back to the 1940s. With the popularity of jukeboxes in the 40s, a bar owner would set the volume and leave it, meaning any louder song would stand out; this was the cornerstone of the loudness wars. Over the years, new technology has allowed music to be made louder and louder. By 2008 Rick Rubin was one of the worst culprits for the unnecessary compression and noise that typified the loudness wars, with the most obvious examples being ‘Death Magnetic’ and ‘Californication’.

If you listen closely to the guitar intro, a lot of noise can be heard.

The compression on ‘Death Magnetic’ introduced huge amounts of noise and digital distortion on the CD, and it was criticised by almost everyone who listened. At this point it was clear that this extremity was unpopular, and when Guns N’ Roses were mixing ‘Chinese Democracy’ they rejected the louder mix in favour of a quieter yet more dynamic one. By now, most streaming platforms apply loudness normalisation and will automatically turn down music that’s too loud. This, to some degree, levels the playing field, as now everyone knows what degree of loudness is acceptable before the penalty kicks in.
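Conceptually, this normalisation is just a static gain applied to bring a track’s integrated loudness to the platform’s target (around -14 LUFS on several services). A minimal sketch of the idea, taking loudness figures as inputs (real platforms measure LUFS with K-weighting per ITU-R BS.1770, which is omitted here, and their upward-gain policies vary):

```python
def normalisation_gain_db(track_lufs: float, target_lufs: float = -14.0) -> float:
    """Static gain (in dB) a platform applies to hit its loudness target.

    A track mastered louder than the target simply gets turned down,
    erasing any advantage from a crushed master.
    """
    return target_lufs - track_lufs

# A crushed -7 LUFS master gets turned down by 7 dB...
print(normalisation_gain_db(-7.0))   # -7.0
# ...while a dynamic -16 LUFS master may be turned up (platform-dependent)
print(normalisation_gain_db(-16.0))  # 2.0
```

The upshot is that past a certain point, extra loudness in the master buys nothing: the platform cancels it out, while the dynamic-range damage remains.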

This will certainly affect my future work: while I do appreciate a loud track, I value dynamics within my music more. Being the loudest song played might jump out on the radio, for example, but a listener actively engaging and turning your song up is a much more impressive achievement. That, in combination with the preservation of dynamic range, is much more appealing.

Metallica Death magnetic – how to lose the Loudness War (2008) YouTube. Available at: https://www.youtube.com/watch?v=DRyIACDCc1I (Accessed: 18 April 2024). 

(No date) Gateway Mastering & DVD. Available at: https://web.archive.org/web/20090131045144/http://gatewaymastering.com/gateway_LoudnessWars.asp (Accessed: 18 April 2024). 

Frampton, T. (2024) Mastering audio for SoundCloud, iTunes, Spotify, Amazon Music and YouTube. Mastering The Mix. Available at: https://www.masteringthemix.com/blogs/learn/76296773-mastering-audio-for-soundcloud-itunes-spotify-and-youtube (Accessed: 18 April 2024). 


History and Development of EQ

EQ’s original use was purely as a corrective tool. For television dramas, a set had to have several mics to pick up the actors as they moved around; however, for several reasons (spacing of the actors, mic type) the frequency response of the dialogue would differ noticeably from mic to mic. This is where the term ‘equalisation’ stems from: its original purpose was to make each mic’s frequency response similar enough that the change in audio source wouldn’t be too noticeable. This principle was then applied to audio engineering work and used creatively in the studio. While the initial EQ controls were simple, usually a high and low filter (a circuit made up of a resistor and capacitor bleeding certain frequencies to ground), a ‘midlift’ filter was quickly created, focusing in on the midrange of a recording. One extreme example of the ‘midlift’ filter in use is the progressively more ‘scooped’ metal guitars of the 80s and 90s.
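Those early resistor-capacitor filters have a corner frequency set entirely by the two component values. A quick sketch of the maths (the component values below are purely illustrative):

```python
import math

def rc_cutoff_hz(resistance_ohms: float, capacitance_farads: float) -> float:
    """Corner (-3 dB) frequency of a first-order RC filter: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * resistance_ohms * capacitance_farads)

# e.g. a 10 kilohm resistor with a 100 nF capacitor bleeding highs to ground
fc = rc_cutoff_hz(10_000, 100e-9)
print(f"cutoff = {fc:.0f} Hz")  # cutoff = 159 Hz
```

Swapping either component rescales the corner frequency inversely, which is why early desks offered only a handful of fixed filter positions rather than a continuous sweep.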

Pantera were infamous for their use of scooped guitars, a combination of Big Muff fuzz pedals and post-processing mid scoops.

Dynamic EQ is another EQ technique that works somewhat like a compressor: when the incoming signal hits a certain threshold, the dynamic part of the EQ kicks in, affecting only the desired frequency range. For example, this could be useful if a certain vocal passage in a song is brighter than the other sections and needs to be kept consistent with them.
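The idea above can be sketched in a few lines of numpy/scipy: isolate a band, follow its envelope, and apply compressor-style gain reduction to that band only when the envelope exceeds a threshold. The band limits, threshold, and ratio here are illustrative, and the envelope follower is deliberately crude compared to a real plugin:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def dynamic_eq(signal, sr, band=(4000.0, 8000.0), threshold=0.1, ratio=4.0,
               window_ms=10.0):
    """Attenuate one frequency band only when its envelope exceeds a threshold."""
    # Isolate the target band (e.g. a harsh high-mid region)
    sos = butter(2, band, btype="bandpass", fs=sr, output="sos")
    band_sig = sosfilt(sos, signal)

    # Crude envelope follower: moving RMS over a short window
    win = max(1, int(sr * window_ms / 1000.0))
    env = np.sqrt(np.convolve(band_sig**2, np.ones(win) / win, mode="same"))

    # Compressor-style gain: above threshold, excess is divided by the ratio
    gain = np.ones_like(env)
    over = env > threshold
    gain[over] = (threshold + (env[over] - threshold) / ratio) / env[over]

    # Recombine the untouched residual with the gain-ridden band
    return (signal - band_sig) + band_sig * gain
```

Below the threshold the band passes through untouched, which is exactly what separates a dynamic EQ from a static cut: quiet cymbal passages keep their sheen, and only the harsh peaks get pulled down.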

In my music I initially use EQ as a corrective tool, cutting low end on guitar to let the bass sit better, and techniques like that. Creatively I use EQ in a similar vein to Queens of the Stone Age and Deftones, boosting midrange on guitars and ‘telephone EQing’ vocals. Dynamic EQ sounds like a very useful tool for something such as cymbals, which often run the risk of sounding sharp or harsh at points.

Gill, C. (2010) Dimebag Darrell: Reinventing the squeal. Guitar World. Available at: https://www.guitarworld.com/features/dimebag-darrell-reinventing-squeal (Accessed: 08 April 2024). 

Valentine, E. (2023) Making records with Eric Valentine – QOTSA – No One Knows. YouTube. Available at: https://www.youtube.com/watch?v=RmIyIPItlG0 (Accessed: 08 April 2024).