Mixing music is one of those elusive skills, because every mix turns out a little different, even when you've done nearly the same thing to each song.

Some songs just end up sounding a bit different from the others, and that's part of the magic.

In this tutorial, I'm going to show you how I learned to mix a song for a client. In the very first section, I briefly outline how the person you're working with can send the files to you in an email.

Click the link here to skip right to the YouTube video tutorial, but keep reading for the text walk-through.

The most important thing to know right away is that the person you're working with has to understand how to actually record music and send it across the internet.

In this article, How To Collaborate With Other Music Producers, I've outlined how to export each track on its own in 'Solo' mode, gather up all of the files individually, put them in a zip file, and then send them using Gmail or another email service.

I recommend checking out that article if you want to learn how to do that.

Here's How To Mix

So, you've opened the zip file and loaded all of the music into your DAW.

The first thing I do is listen to the song as it is, without any changes to plug-ins or volume settings.

My personal choice is to get the bass guitar/808s handled first, because the low end is perhaps the most intrusive frequency range. Listen to the music while slowly moving the fader until the volume sounds right.

Move down each software instrument track one at a time, perhaps moving to the drums, guitars, and then the vocals, in that order. Once the volume is set where you want it, you can start adding plug-ins and EQ to clean things up.

At this stage, it's really up to you to mix the volume of the instrument tracks to your liking.
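All of the fader moves in this walk-through are measured in decibels, which is a logarithmic scale. As a minimal sketch of the math behind that (not anything specific to Garageband), here's how dB values map to the linear amplitude multipliers a DAW actually applies:

```python
import math

def db_to_gain(db: float) -> float:
    """Convert a fader value in decibels to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def gain_to_db(gain: float) -> float:
    """Convert a linear amplitude multiplier back to decibels."""
    return 20 * math.log10(gain)

# Pulling a track down by 6 dB roughly halves its amplitude.
print(round(db_to_gain(-6.0), 3))  # ~0.501
print(round(gain_to_db(0.5), 1))   # ~-6.0
```

This is why a -2.3dB trim is a subtle move while -12.0dB is a big one: every 6dB down halves the signal again.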
No one can show you the exact steps for lowering and raising the volumes, because it's up to you to figure out what you think sounds good.

Background Vocals

For the background vocals, I dropped the level to around -2.3dB, then added a Noise Gate, a Channel EQ, and a bit of Stereo Delay.

Noise Gate

You have to be careful with a noise gate, because you can end up eliminating desirable transients.

A transient is a short-lived part of the audio signal that often gives body and character to a sound. In some cases, transients are intrusive and should be removed, but often they're part of what makes the sound good.

It's one of the reasons I typically don't add a noise gate to the guitar tracks, especially in the case of punk rock, because the "messiness" of the guitar tone is part of what makes it punk.

However, noise gating is a very common practice among metal guitar players and others.

I set the Noise Gate on the Background Vocals to -35dB. This eliminates any undesirable sounds without pulling the life out of the track.

Channel EQ

For the Channel EQ, I cleaned up the vocals using a Garageband preset titled "Soften Background Vocals."

Many of the presets you have access to are actually pretty good, especially in Garageband and Logic Pro X.

When dealing with male vocals, a common practice among mixers, from what I understand, is to boost the frequencies between 5,000Hz and 20,000Hz and cut the low frequencies. It really depends on the person's voice.

A man's voice typically has more low end, which is one of the reasons you boost the higher frequencies and cut the lower ones. For female vocals, I imagine it's probably the opposite, although I could be wrong.
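To make the noise gate idea concrete, here's a bare-bones sketch of what a gate at -35dB is doing under the hood. This is a simplification I wrote for illustration: a real gate (including Garageband's) adds attack and release smoothing so the on/off transitions aren't audible.

```python
def noise_gate(samples, threshold_db=-35.0):
    """Zero out any sample whose level falls below the threshold.
    A real gate smooths the transitions; this shows only the core idea."""
    threshold = 10 ** (threshold_db / 20)  # convert dB to a linear amplitude
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Quiet bleed between phrases is silenced; the sung notes pass through.
signal = [0.5, 0.003, -0.4, 0.01, 0.0002]
print(noise_gate(signal))  # [0.5, 0.0, -0.4, 0.0, 0.0]
```

You can see why the threshold matters: set it too high and the quiet tail of a real note gets chopped off along with the noise, which is the "pulling the life out" problem mentioned above.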
Stereo Delay

I used the Stereo Delay very minimally, because only minor adjustments are needed to dramatically change the sound for the better. In the image below, I used the 1/4 Delay setting, then set the 'left mix' and 'right mix' both to 30%.

The added delay has the effect of making the vocals less dry.

I also set the Ambience and Reverb to around half-way, which is plenty of reverb.

It's important not to use too much reverb on your instrument tracks, because then everything sounds "washed out"; in other words, there are so many effects that they become distracting or saturate the mix.

Bass

Channel EQ and Noise Gate

The next instrument I mixed was the bass guitar, which I set around -12.0dB in total volume, along with a Channel EQ.

I set the Noise Gate to around -30dB.

In this particular song, I used the "E-Bass EQ" preset, which boosts the frequencies pretty much right across the board, with an added boost to the area between 900 and 1,000Hz.

The bass is an instrument that a lot of people like to mix way down, for whatever reason.

I find the bass sounds best when you can actually hear it in the mix, but in popular music it seems like it's almost always quite quiet, with the exception of hip-hop, which emphasizes the bass frequencies.

Drums

Channel EQ

When I mixed the drums, I ended up using the preset "Refresh Drums." You can see what this preset looks like below.

Initially, I had a compressor on the drums, because I was trying to even out all of the sounds of the kit; however, the client used a Drummer track, which means all of the kit pieces come bounced together.

When I loaded it into Garageband again to do the final mixing, I noticed there was too much compression on the track, and it sounded like the audio wave was hitting the ceiling, so to speak.
For that reason, I had to go back and turn the compressor right off.

When it comes to mixing a Drummer track, there really isn't much that can be done to it, because in an actual studio, from what I understand, there would be individual microphones on each part of the kit, and you could mix them all together afterward.

This isn't possible when using an automated Drummer track, because the kit comes in as one combined sound.

There isn't much you can do with the mix other than a little compression and a bit of EQ, and that's it. Perhaps a noise gate, but that's not really needed either, considering it's an automated Drummer track and not a real recording.

Guitar Solo

For the guitar solo, I used a compressor as well as an EQ, and I found that the compressor really brought the guitar into the track. In the image below, I adjusted the compressor's settings to:

Threshold: -14.5dB, Ratio: 2.1:1, Attack: 23.0ms, Gain: +1.0dB.

Channel EQ

For the Channel EQ, I used the preset "Picked Electric," which you can see in the image below:

The "Picked Electric" preset works well because it cuts the lower-mid frequencies, to avoid any potential muddiness, and boosts the higher frequencies to make the guitar sound more "crunchy" and "biting," so to speak.

There are always a ton of mid-range and low-mid frequencies in music, so it's important to pay close attention to them, because it's very easy to have too much of them, and then the music doesn't sound great.

The volume is set at +1.5dB.

It was a bit of a challenge to have the vocals and the guitar solo play at the same time, which they do in the song.

Both the solo and the vocals compete for the same frequency range, so I found that balancing the two was a bit challenging.
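To show what those compressor numbers mean, here's a sketch of the static compression curve using the threshold, ratio, and makeup gain from the guitar-solo settings above. This is my own simplified illustration, not Garageband's actual algorithm: the attack time (how quickly the compressor reacts) is left out so the core math stays visible.

```python
def compress_db(level_db, threshold_db=-14.5, ratio=2.1, makeup_db=1.0):
    """Static compressor curve: anything above the threshold is scaled
    down by the ratio, then makeup gain is added back on top."""
    if level_db > threshold_db:
        level_db = threshold_db + (level_db - threshold_db) / ratio
    return level_db + makeup_db

# A loud -4.0 dB peak is pulled down toward the threshold...
print(round(compress_db(-4.0), 2))   # -8.5
# ...while a quiet passage below the threshold just gets the makeup gain.
print(round(compress_db(-20.0), 2))  # -19.0
```

Loud peaks come down, quiet parts come up slightly, and the overall dynamic range shrinks, which is what "brought the guitar into the track" describes.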
However, I think the "Picked Electric" preset played a nice role in balancing the two sounds out so they worked together.

Main Left and Right Guitar

For the two guitars, I panned one to the left and the other to the right, which is a common tactic for mixing guitars. There's a noise gate set to -50dB on each one, as well as the "Clean Up Guitar" preset for the EQ.

The style of the song is punk rock, so the guitar tone is incredibly important. Most punk rock songs have that biting, "crunchy" guitar tone, whose frequencies typically lie between 500Hz and 2,000Hz.

I'll also cut the sub frequencies within the guitar tracks to create room for the bass guitar.

Main Vocals

To the client, the main vocals are pretty much always the most important thing, because of the way popular music emphasizes singing over the other instruments.

Initially, I used Stereo Delay, reverb, and ambience on the vocals, and I had them turned down a bit so they would sit nicely in the mix; however, the client wanted them turned up.

I had to go back and change them because they didn't like it, although, in my opinion, the first mix was far superior to the second one.

Frankly, that's how this goes. At the end of the day, the client is paying you to do work for them, not the other way around.

Truthfully, if they tell you to do something, you just have to swallow your pride and do it, even if you know your way sounds better.

Otherwise, if you get your way, there's a good chance they won't be happy with it, will harbor some kind of resentment, and will no longer want to work with you.
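For the curious, here's roughly what a pan knob computes when you push one guitar left and the other right. This is a generic constant-power pan law I'm using as an illustration; I don't know which exact law Garageband applies internally.

```python
import math

def pan(sample, position):
    """Constant-power pan: position -1.0 is hard left, +1.0 is hard right.
    Returns (left, right) so overall loudness stays roughly even as the
    sound moves across the stereo field."""
    angle = (position + 1) * math.pi / 4  # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

left, right = pan(1.0, -1.0)  # hard-left guitar: all signal in the left channel
print(round(left, 3), round(right, 3))  # 1.0 0.0
l, r = pan(1.0, 0.0)          # centered: equal level in both channels
print(round(l, 3), round(r, 3))         # 0.707 0.707
```

Hard-panning the two rhythm guitars keeps them out of each other's way and leaves the center of the stereo image free for the vocals, bass, and drums.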
While this may not be the option with the most integrity, it's kind of what you have to do to keep working.

Compression on Vocals

For the vocals, I used the Studio Vocal preset, whose parameters you can see in the image below.

I have the noise gate set to -32dB, the Channel EQ set to "Vocal Refresh," and the reverb and ambience dials set to 4.

I had to be careful with this particular track, because the vocal performance was slightly out of tune at times, and when you use pitch correction on that, it ends up creating the "robot" sound, as people like to call it.

Truthfully, people believe that Autotune is some kind of magic software that fixes terrible vocals and makes them amazing, but from what I understand, that's not how it really works.

Pitch correction/Autotune only makes good vocals sound better.

If the vocal performance isn't great, no amount of pitch correction is going to fix it.

Pitch Correction

As I mentioned above, from what I've learned so far about music production, especially vocals, autotune is really a tool that only improves an already good performance.

In all probability, Ariana Grande is an extremely good singer; it's not all just some studio tactic that makes her sound the way she does.

The same can be said for the clients you work with.

Pitch correction software will make minor adjustments that make them sound just a little better; it won't be the thing that makes or breaks your vocal tracks, despite what a lot of people on the internet tell you.

With that said, there is a possibility that software exists that can turn a terrible vocalist into Ariana Grande, although I don't know what it is or whether it exists.

The pitch correction software in Garageband is pretty basic; it just has a check-box to turn it on and off, as well as a slider you can turn up from 0 to 100.
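To get an intuition for what that 0-100 slider controls, here's a toy sketch of the core idea: pull a sung frequency toward the nearest in-tune note, by an amount set by the slider. This is my own illustration of the concept, not Garageband's implementation, and real pitch correction works on a whole audio stream, not single frequencies.

```python
import math

A4 = 440.0  # tuning reference, in Hz

def correct_pitch(freq_hz, strength=67):
    """Pull a sung frequency toward the nearest equal-tempered semitone.
    'strength' mimics the 0-100 slider: 0 leaves the pitch untouched,
    100 snaps it fully onto the grid (the 'robot' setting)."""
    semis = 12 * math.log2(freq_hz / A4)     # distance from A4 in semitones
    target = A4 * 2 ** (round(semis) / 12)   # nearest in-tune frequency
    return freq_hz + (target - freq_hz) * strength / 100

# A note sung about 30 cents sharp of A4 is nudged most of the way back...
print(round(correct_pitch(447.7, strength=67), 1))   # 442.5
# ...while 100 snaps it all the way, which is where the robot sound comes from.
print(round(correct_pitch(447.7, strength=100), 1))  # 440.0
```

This is why 67 worked better than 100 on this track: partial correction preserves the natural drift in the performance while still tightening the tuning.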
What you can also do to correct imperfections in the vocal track is zoom all the way in on it, then select and delete whatever sound is ruining the mix.

For instance, from what I've been told, deleting the inhales before a person starts singing is something you can do with this tactic.

Another way to do this is to use a noise gate, but with a noise gate there's a risk you'll eliminate desired frequencies and sounds.

What I did for this particular track is turn the pitch correction to around 67, because anything above that was creating the "robot" effect, as people on the internet call it.

It's a lot more common for me to turn the pitch correction up to 75-77, but that wasn't the case on this track.

Moreover, you have to know the key signature of the song, which you then select in the top-center of Garageband's interface.

The song was in Bb Major, which, I believe, is a relatively common key signature for a popular song.

If the client doesn't know what key the song is in, it's as simple as grabbing your guitar or piano, finding a note that matches the song, and then playing the notes of the major scale up from that note.

Vocal EQ

For the vocal EQ, I used the preset "Male Vocal Refresh," which essentially does the most common thing that's done to male vocals: drop out the frequencies below 100Hz and add a boost to the frequencies between 1,000Hz and 20,000Hz.

If you google how to EQ vocals, you'll likely find many other blog posts describing the same tactic for male vocals.

Now that we've done the bulk of the mixing, it's time to move on to the final stage of the process.

Mastering

I've written an entire article about this process, so I won't be as thorough in this part. You can click this link here to read that article.
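The "play the major scale up from a matching note" trick follows a fixed whole-step/half-step pattern, so it can be sketched in a few lines. Note names here use flats only, which happens to match Bb major exactly:

```python
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole-whole-half-whole-whole-whole-half

def major_scale(root):
    """List the seven notes of the major scale built on the given root."""
    i = NOTES.index(root)
    scale = [root]
    for step in MAJOR_STEPS[:-1]:  # the final half step just returns to the octave
        i = (i + step) % 12
        scale.append(NOTES[i])
    return scale

print(major_scale("Bb"))  # ['Bb', 'C', 'D', 'Eb', 'F', 'G', 'A']
```

If the melody's notes all fall inside that list, you've found the key, and that's what you'd select in the top-center of Garageband's interface before enabling pitch correction.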
Before getting into this final step, ensure you don't have any plug-ins running on your music's master channel. It's fine to have plug-ins running on all of the software instrument tracks, but not on the main master channel.

Once you're at this stage, you can export the song using the Share button: choose Export to Disk, then the AIFF option, which is a high-quality lossless file format.

After you've exported the song to your desktop, drag and drop it back into a new Garageband project, and you can begin making the final tweaks.

When I'm in the final stage of the mixing process, I'll typically add three plug-ins on top of it: Compression, Channel EQ, and a Limiter.

For compression, I'll use the "Platinum Analog Tape" preset. Its parameters look like what you can see below.

However, I did make minor adjustments to the compressor, including decreasing the threshold as well as decreasing the ratio.

If you want to read more about the compressor, I suggest you check out this article here. In layman's terms, the ratio is how hard the compressor is working, and the threshold is the point at which the compressor kicks in.

Channel EQ

For the Channel EQ, I scooped out the sub-frequencies between 20Hz and 40Hz, and then I also scooped the frequencies between 10,000Hz and 20,000Hz.

I also cut the EQ by -2.0dB at 417Hz as well as 1,160Hz. That's all one really has to do when it comes to the final stage of EQ. As a general rule, less is more when it comes to EQ.

You don't have to spend a ton of time subtracting and adding EQ all over the place, and if you feel that's necessary, it's probably because there's something wrong with your original mix.

For instance, if there's too much low-end in your mix, don't bother trying to fix it in the mastering stage.
Go back to the original mix and decrease the volume of whatever instrument is causing the excess low-end, whether it's the bass guitar, the boutique 808s, or the kick drum. You can read more about using Channel EQ in this article here.

Limiter

The stock limiter in Garageband really only has two parameters, the output level and the gain, so making adjustments to it is very straightforward.

What I usually do is increase the gain by a small amount, around +2.0dB, and then set the output level to -0.1dB or -0.2dB.

The limiter acts as a ceiling, so it stops the signal from passing a certain point.

From what I understand, 0dB is the point of digital distortion, so setting the output at -0.1dB and the gain at +2.0dB will be enough limiting.

Remember, a limiter is essentially a compressor with the ratio turned up super high. If you want to read more about the limiter, I suggest you read this article here.

Other Important Things To Remember

Regarding the master volume on the final track, I'll increase it by +2.0dB and that's it. Any more than that, and it starts to get too loud, in my opinion.

Also, make sure you've turned off the Auto-Normalize function in Garageband's Preferences, within the Advanced Settings tab. This is the reason many people's exported tracks are far too quiet and they can't figure out why.

When I export the original mix as an AIFF file, I'll have the Master Volume set at +0.0dB. I've never heard anyone else mention this, but I find that if I export the original mix at, say, +4.0dB, the track ends up distorted in the final mastering stage.

YouTube Video

https://www.youtube.com/watch?v=NT4ayp9ZHPU&feature=youtu.be

Conclusion

I hope this was helpful to you. I'd appreciate it if you shared it on social media.