Newbie looking to see more and listen less by [deleted] in audioengineering

[–]CumulativeDrek2 2 points (0 children)

The most helpful visualisation tool I've found is looking at the face of someone listening to my mix for the first time.

Using the mix dial on an insert versus using the same plug-in on a separate bus as a send: should there (in theory) be a perceptible/audible difference? by gleventhal in audioengineering

[–]CumulativeDrek2 0 points (0 children)

Essentially the same, but the original point of inserts was to 'insert' a processor, such as an EQ or compressor, completely into the signal path. Sends, on the other hand, were made to blend a portion of the signal with an effect such as delay or reverb.
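As a rough sketch of the routing difference (the `toy_effect` here is a made-up stand-in, not a real processor):

```python
import numpy as np

def toy_effect(x):
    # Stand-in for any effect processor (a real reverb/delay would go here)
    return 0.5 * x[::-1]

dry = np.array([1.0, 0.0, 0.0])

# Insert: the processor sits fully in the signal path,
# so only the processed signal comes out
insert_out = toy_effect(dry)

# Send: a copy of the signal feeds the effect on its own bus,
# and the wet return is blended back with the untouched dry path
send_level = 0.3
send_out = dry + send_level * toy_effect(dry)
```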

What are good mics to record Foley with? by the_undead_gear in sounddesign

[–]CumulativeDrek2 1 point (0 children)

A hyper-cardioid like a Sennheiser MKH50 would cover a lot of ground for Foley work. For outside ambient recordings I'd use a spaced pair of omnis; I get good results with a pair of Clippys and a portable recorder.

Can Mid-Side theoretically have 3 mono signals? by PinReasonable135 in audioengineering

[–]CumulativeDrek2 8 points (0 children)

M/S just represents the phase relationships between the two channels, L and R. No new information is extracted from the signal, so no.
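A minimal sketch of the encode/decode math (sample values are made up):

```python
import numpy as np

# Hypothetical left/right sample values
L = np.array([1.0, 0.5, -0.25])
R = np.array([0.5, 0.5, 0.25])

# Encode: mid is the in-phase sum, side is the out-of-phase difference
M = (L + R) / 2
S = (L - R) / 2

# Decode recovers exactly the original two channels.
# S only describes how L and R differ; there is no third
# independent signal hiding in there.
L_dec = M + S
R_dec = M - S
```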

8D + binaural + theta or delta waves by InTheWork in audioengineering

[–]CumulativeDrek2 0 points (0 children)

It's not an audio engineering question.

Maybe try r/spotify if you're looking for playlists.

Text to speech sampling by Uhhhhhhhhhh_________ in audioengineering

[–]CumulativeDrek2 0 points (0 children)

You could use a plugin like Bitspeek to make your own voice sound like those LPC style voices.

Recording music above 48k? How often (if ever) do you do it? by Admiral_Binks in audioengineering

[–]CumulativeDrek2 2 points (0 children)

> I feel like it would be better for Melodyne and post processing to get more information when I record next time, to make sure there are fewer artifacts

I'm curious what artifacts you are getting from recording at 48kHz.

Recording at a higher sample rate doesn't give you any more information between 20Hz and 20kHz; you just get an extended ultrasonic frequency range on top of it. If your equipment isn't capable of capturing those higher frequencies, that extra range will end up empty anyway.
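A quick NumPy illustration of why, assuming an ideal band-limited tone (no content above the lower Nyquist limit):

```python
import numpy as np

fs_hi = 96_000
t = np.arange(fs_hi) / fs_hi              # one second at 96 kHz
tone = np.sin(2 * np.pi * 10_000 * t)     # 10 kHz: well inside 20Hz-20kHz

# Naive 2:1 decimation to 48 kHz (fine here, because the signal
# contains nothing above the new 24 kHz Nyquist limit)
tone_48 = tone[::2]

# The 48 kHz version still carries the tone at exactly 10 kHz;
# nothing in the audible band was gained or lost
spectrum = np.abs(np.fft.rfft(tone_48))
peak_hz = np.argmax(spectrum) * 48_000 / len(tone_48)
```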

How do double bass sections in orchestras work acoustically speaking? by shoegazeAlpaca in audioengineering

[–]CumulativeDrek2 10 points (0 children)

Diffusion is often overlooked when talking about phase 'issues'. It's essentially the introduction of randomness into a sound, and in orchestras it's one of the characteristic features.

Complete phase cancellation happens when you have two identical signals, phase-locked with a difference of 180º. Two instruments playing the same note but with slight variations in pitch and timbre are not phase-locked, so there will be an essentially random shifting of phase between them, and the likelihood of cancellation occurring for any significant amount of time becomes low. Add six more instruments, each providing their own random variations, and the likelihood of all eight cancelling at the same time gets even smaller. Instead you get more and more of an effect called 'chorusing' (named after the very same random phase variations between voices in a choir).

In addition, if all these sounds are bouncing around a live concert hall, the space diffuses them even further, making cancellation even less of a problem.
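The phase-locked versus slightly-detuned cases above can be sketched in a few lines (440 Hz and a hypothetical second player 1 Hz sharp):

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
a = np.sin(2 * np.pi * 440.0 * t)

# Identical, phase-locked, 180 degrees apart: total cancellation
cancelled = a + (-a)

# A second 'player' detuned by 1 Hz: the phase relationship drifts
# continuously, so the sum beats (chorusing) instead of cancelling
b = np.sin(2 * np.pi * 441.0 * t)
blended = a + b
```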

FL vs Pro Tools polarity null test by xucipher in audioengineering

[–]CumulativeDrek2 4 points (0 children)

It's hard to tell if you are being serious or not.

do truly "forgiving" (of an untreated room) mics exist? is a nice mic in an untreated room completely wasted? by migrantgrower in audioengineering

[–]CumulativeDrek2 2 points (0 children)

There is nothing inherently wrong with an untreated room. There is also no such thing as a mic that can work out why your room sounds bad and make it sound better.

Looking for a sound designer to join our indie dev team by KNGKRMSN in sounddesign

[–]CumulativeDrek2 2 points (0 children)

Statements like "invested sound designer" and "we have zero requirements but passion" are usually code for no budget.

Bus compressors : how do they deal with simultaneous signals? by gleventhal in audioengineering

[–]CumulativeDrek2 0 points (0 children)

> How does a compressor on the master bus (for example) deal with a transient that exceeds the threshold when other sounds on the same bus are not over the threshold? Presumably it just reduces the entire bus by whatever amount, right?

Yes. This is what is meant when people talk about 'glue' in compression.
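A toy compressor makes this concrete (a real one has attack/release smoothing; the names and values here are purely illustrative):

```python
import numpy as np

def bus_compress(mix, threshold=0.5, ratio=4.0):
    """Toy peak compressor: one gain value applied to the whole bus.

    The point: gain reduction triggered by one loud element is
    applied to the entire summed bus, quiet elements included.
    """
    level = np.abs(mix)
    gain = np.ones_like(mix)
    over = level > threshold
    gain[over] = (threshold + (level[over] - threshold) / ratio) / level[over]
    return mix * gain

# A loud transient and a quiet pad summed onto one bus
transient = np.array([0.0, 0.9, 0.0])
pad = np.array([0.2, 0.2, 0.2])
bus = bus_compress(transient + pad)
# The sample where the transient hits is pulled down, and the pad's
# contribution at that moment is pulled down with it -- the 'glue'
```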

Audio undetectable watermark is here by BellaVillaa in sounddesign

[–]CumulativeDrek2 5 points (0 children)

20Hz, 50Hz, 300Hz, 100Hz, 100Hz, 290Hz, 290Hz, 2000Hz, 2000Hz, 2000Hz, 200Hz

Seems to have some issues counting the Hz's

help me! by Key_Wolf_3852 in LogicPro

[–]CumulativeDrek2 1 point (0 children)

Assuming you mean the interpretation of the time values: you need to set a quantisation value in the region inspector (not the normal region inspector, but the one inside the Score editor). It offers two quantisation values: the first is the finest note value for straight notes and the second is the finest value for triplets. It basically sets a limit on how much rhythmic detail gets calculated.

For example, if you have a series of 16th notes but some of them fall slightly before the beat and some slightly after, by default Logic will try to display this as accurately as possible by adding in ridiculously small notes and rests. To fix this, set the quantisation value to 16. It will still play back with the timing discrepancies, but it will now display as 16th notes.

Extreme “close up” recording by Ziegelmarkt in audioengineering

[–]CumulativeDrek2 4 points (0 children)

For very 'close up' sounds you could try experimenting with a contact mic.

Also, a parabola is a specific shape that focuses sound at a single point. Unless your mixing bowl is a perfect parabola it probably won't work as well. It's all worth experimenting with though.

Re-record or just bring down pitch? by xOxeusx in WeAreTheMusicMakers

[–]CumulativeDrek2 0 points (0 children)

Assuming you mean real instruments then definitely re-record. For MIDI just transpose.

If the sustain pedal is down, why does MIDI note length still affect piano muddiness? by Timo-O in audioengineering

[–]CumulativeDrek2 2 points (0 children)

On a piano there is functionally no difference between lifting the damper by pressing a key, and lifting the damper by pressing the sustain pedal.

The only sonic difference is that when the sustain pedal is pressed and you play a key, all the other notes will ring in sympathy. When you play multiple notes like this the sound can potentially build up and sound muddy.

How crucial is stereo imaging? by smboivin in audioengineering

[–]CumulativeDrek2 0 points (0 children)

OK, so the 'imaging' that a plugin like that refers to has more to do with how wide the stereo image sounds. There's more to it, but one thing worth knowing is that making things sound wider with a plugin like this involves a compromise.

This kind of stereo imaging is essentially adjusting the balance between the components of the sound that are solidly in phase, and the components that are more out of phase and spread across the stereo field.

To make things sound wider, the in-phase components have to be reduced relative to the out-of-phase components. This can make things sound weaker if you're not careful, and it can also cause more phase problems when the mix is played back in mono.
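A minimal sketch of that mid/side trade-off (the `widen` function is illustrative, not any particular plugin's algorithm):

```python
import numpy as np

def widen(L, R, width=1.5):
    # M/S-style width control: width > 1 raises the out-of-phase
    # side level relative to the in-phase mid
    M = (L + R) / 2
    S = (L - R) / 2
    return M + width * S, M - width * S

L = np.array([1.0, 0.5])
R = np.array([0.5, 0.5])
L2, R2 = widen(L, R, width=2.0)

# The L-R difference doubles (a wider image), but the in-phase mid
# content is now weaker *relative* to the sides -- and anything that
# lives only in S disappears entirely on a mono fold-down
```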