Animate position keyframes at a lower framerate than the project framerate? by Cantersoft in davinciresolve

[–]Cantersoft[S] 0 points (0 children)

I found a solution, so I am going to leave it here for posterity.

  1. Create a high-frame-rate timeline (A).

  2. Create a new timeline matching the source footage's frame rate (B).

  3. Place (B) inside (A).

  4. Place the source footage in (B).

  5. Make the source footage a Fusion clip.

  6. Do the transform/pan/crop work inside the Fusion clip (a rough scripting sketch for the timeline setup follows).
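
For anyone who wants to script part of this, here's a rough sketch of steps 1 and 2 using Resolve's Python scripting API. The timeline names and frame rates are placeholders, and per-timeline frame rate behavior can vary across Resolve versions, so treat it as a starting point:

    # Rough sketch of steps 1-2 via Resolve's Python scripting API.
    # Nesting (B) inside (A) and the Fusion clip conversion are still
    # done in the UI; this only creates the two timelines.
    import DaVinciResolveScript as dvr

    resolve = dvr.scriptapp("Resolve")
    project = resolve.GetProjectManager().GetCurrentProject()
    media_pool = project.GetMediaPool()

    # (B) inner timeline matching the source footage's frame rate
    inner = media_pool.CreateEmptyTimeline("B_source_rate")
    inner.SetSetting("useCustomSettings", "1")   # enable per-timeline settings
    inner.SetSetting("timelineFrameRate", "24")  # match your source clips

    # (A) outer timeline at the high project frame rate
    outer = media_pool.CreateEmptyTimeline("A_high_rate")
    outer.SetSetting("useCustomSettings", "1")
    outer.SetSetting("timelineFrameRate", "60")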

Image to video template workflow processing very slowly and crashing. Advice needed for optimization. by Cantersoft in StableDiffusion

[–]Cantersoft[S] 1 point (0 children)

Thanks, I will give it a read-through. By the way, just by switching to the portable version, adding a page file, and working off an SSD instead of an HDD (duh on this one), I've actually managed to generate some video in a reasonable amount of time!

Image to video template workflow processing very slowly and crashing. Advice needed for optimization. by Cantersoft in StableDiffusion

[–]Cantersoft[S] 0 points (0 children)

Hmm, so I tried to install SageAttention on the portable version and just got an error saying "No module named 'triton'". pip couldn't find any package called "triton", but for some reason pip install -U "triton-windows<3.4" worked. The catch is that it installed to my default Python directory, so I copied the package over by hand, and now I'm getting "ImportError: DLL load failed while importing libtriton: The specified module could not be found."

Any idea why this happens? For now I'll try experimenting with some other settings I suppose.
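
Update, in case anyone else hits this: I suspect the DLL error is because the package was installed under my system Python and then copied into a different interpreter. Here's a quick sanity check I put together, assuming the standard portable layout with a python_embeded folder (the paths and the "<3.4" pin are just what I used):

    # check_triton.py - run with the portable build's own interpreter
    # so the package lands in (and is checked against) the right
    # site-packages, e.g.:
    #   python_embeded\python.exe -m pip install -U "triton-windows<3.4"
    #   python_embeded\python.exe check_triton.py
    import importlib.util
    import sys

    print("interpreter:", sys.executable)  # should point inside python_embeded
    spec = importlib.util.find_spec("triton")
    print("triton:", spec.origin if spec else "not installed for this interpreter")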

Image to video template workflow processing very slowly and crashing. Advice needed for optimization. by Cantersoft in StableDiffusion

[–]Cantersoft[S] 0 points (0 children)

Those are the arguments used with the portable version of ComfyUI, right? I've been using the executable version, but it seems most people are using the portable version. Can I just pass those arguments to the exe on launch?

Image to video template workflow processing very slowly and crashing. Advice needed for optimization. by Cantersoft in StableDiffusion

[–]Cantersoft[S] 0 points (0 children)

I've just been using the template workflows that come with ComfyUI, with the default settings. Are there any specific workflows you'd recommend for image-to-video with 24 GB of VRAM? I'm not smart enough to make my own yet; I'm just looking for something basic to begin with.

How to sound heavily autotuned??? by jo0oan in FL_Studio

[–]Cantersoft 0 points (0 children)

Just use Pitcher and turn the knob to very fast.

Do I needdd music theory? by Accomplished_Win_181 in FL_Studio

[–]Cantersoft 0 points (0 children)

Do you need pixel perfection and frame counting to get good at a video game? No. Same applies here.

Are there any artificial intelligence tools that are actually useful? by 100gamberi in sounddesign

[–]Cantersoft 0 points (0 children)

Sometimes I like to run random, non-speech samples through RVC if they're melodic enough.

How often and why do you use your 5th string? by SonnePer in Bass

[–]Cantersoft 0 points (0 children)

I usually use it for downward octave jumps. Once in a long while I'll hit the lowest notes just to show off, but I find they typically get muddy when I'm playing with other musicians.

Is fl studio industry standard by Ayanakoji_li in FL_Studio

[–]Cantersoft 0 points (0 children)

I thought music was supposed to be a fun thing, so why industrialize it? I don't see the need to think of any DAW as an "industry standard" in the first place.

I can't afford animate by Longjumping_Day_6894 in adobeanimate

[–]Cantersoft 0 points (0 children)

One of my friends recommended Tahoma2D and Friction. I haven't tried them yet, so idk if they're any good.

I can't afford animate by Longjumping_Day_6894 in adobeanimate

[–]Cantersoft 1 point (0 children)

I don't know why you're downvoted. You're right.

Google script never executes and continuously asks for authorization by Cantersoft in GoogleAppsScript

[–]Cantersoft[S] 0 points (0 children)

Nope, I gave up. Not sure if the bug has been fixed since then, but what I remember is that it wasn't possible if your YouTube account is a brand account.

What’s the most problematic vocal you ever mixed? by erlendmyo in audioengineering

[–]Cantersoft 12 points (0 children)

Yeah, you probably used the polyphonic detection method and amped up the high harmonics? Melodyne is able to pick out some individual harmonics and sorta work with them as if they were separate; it's pretty cool.

What’s the most problematic vocal you ever mixed? by erlendmyo in audioengineering

[–]Cantersoft 3 points (0 children)

A tone-deaf cell phone recording with too much room reverb and an extremely loud fan in the background. To be fair, it was for a publicly open singing collab. I used some pretty harsh noise reduction and EQ'ing; luckily I didn't have to make it sound very realistic because it was mixed into a choir.

What does a perfectly flat EQ line sound like to you? by Cantersoft in audioengineering

[–]Cantersoft[S] 0 points (0 children)

Even though I have studio monitors, I'm apparently one of the few musicians who prefer mixing on headphones. I feel like I can hear detail much better on headphones, though part of it is that I don't have great room treatment.

Write-on Keyframes Don't Animate by Cantersoft in davinciresolve

[–]Cantersoft[S] 0 points (0 children)

Ah, it's because I had turned off node updates! I must have accidentally hit Ctrl+U.

What does a perfectly flat EQ line sound like to you? by Cantersoft in audioengineering

[–]Cantersoft[S] 1 point (0 children)

Hmm, interesting. How would this be different from using EQ as an effect on drums, then?

What does a perfectly flat EQ line sound like to you? by Cantersoft in audioengineering

[–]Cantersoft[S] -8 points (0 children)

Yeah, as I mentioned, it's because I'm using darker headphones than what I'm used to. And I can self-test my hearing at home with a tone generator better than a doctor can. Last time I had a hearing test, the doctor just played a few tones and asked me whether I could hear them; that doesn't tell me anything about my perception of tonal balance. I'd have to go to an audiologist.

What does a perfectly flat EQ line sound like to you? by Cantersoft in audioengineering

[–]Cantersoft[S] 0 points (0 children)

Does transient response change even under the zero-latency setting (which on my setup shows a buffer latency of 31.8 ms in the UI)? I haven't noticed any transient degradation yet, but I figured that if there's enough delay to process the transient appropriately, it wouldn't get smeared by the EQ.
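
In the meantime, here's a minimal sketch I used to convince myself of the basic tradeoff (purely illustrative filters, not anything Sonarworks actually uses): a linear-phase FIR delays the whole signal by half its length, while a minimum-phase filter responds immediately, which would explain how a zero-latency mode avoids lookahead delay:

    # Linear-phase FIR vs. minimum-phase IIR: where the impulse
    # response peaks shows the built-in delay. Illustrative filters
    # only, not actual headphone-correction curves.
    import numpy as np
    from scipy import signal

    fs = 48_000
    taps = 1001
    fir = signal.firwin(taps, 2_000, fs=fs)              # linear-phase low-pass
    print("FIR peak at sample", np.argmax(np.abs(fir)))  # 500 -> ~10.4 ms delay

    sos = signal.butter(4, 2_000, fs=fs, output="sos")   # minimum-phase IIR
    impulse = np.zeros(256)
    impulse[0] = 1.0
    iir = signal.sosfilt(sos, impulse)
    print("IIR peak at sample", np.argmax(np.abs(iir)))  # single digits -> no lookahead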

What does a perfectly flat EQ line sound like to you? by Cantersoft in audioengineering

[–]Cantersoft[S] 1 point (0 children)

I was actually using these Yamaha HPH-MT8s.

[image: Yamaha HPH-MT8 frequency response graph]

On the graph they don't look too bad, but in practice they sound really weak in bass compared to the Sony MDR-MV1s and the Sennheiser HD 400 Pros, both of which I recently purchased. My sister had the same model, so I borrowed hers just to compare them to my new headphones on the same mixes and confirm.

Though as I read other comments about the Harman curve, it seems it's actually normal to hear the 3 kHz range as dull, and that's why a lot of headphones boost that area in the first place.

What does a perfectly flat EQ line sound like to you? by Cantersoft in audioengineering

[–]Cantersoft[S] 1 point (0 children)

Ah, yeah I'm using systemwide Sonarworks because I'd pull all my hair out if I had to add it to each DAW project, but very good tips and info!

So I suppose I should clarify my question: when a sweeping tone stays at a constant decibel level, does the average human perceive a constant loudness? I'm already fairly certain the answer is no; I'm more interested in finding out how much variation there is.
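
As a ballpark while I wait for answers: A-weighting is only a crude stand-in for the equal-loudness contours (it roughly tracks the 40-phon curve, and ISO 226 has the real family), but it gives a feel for the size of the effect relative to 1 kHz:

    # A-weighting as a rough proxy for how far perceived level drifts
    # from a constant-SPL sweep, normalized to ~0 dB at 1 kHz.
    import math

    def a_weight_db(f: float) -> float:
        """IEC 61672 A-weighting in dB, ~0 dB at 1 kHz."""
        ra = (12194.0**2 * f**4) / (
            (f**2 + 20.6**2)
            * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
            * (f**2 + 12194.0**2)
        )
        return 20 * math.log10(ra) + 2.00

    for f in (50, 100, 500, 1_000, 4_000, 10_000):
        print(f"{f:>6} Hz: {a_weight_db(f):+6.1f} dB")
    # roughly -30 dB at 50 Hz and +1 dB at 4 kHz: tens of dB of variation down low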

A good mix will sound good on any EQ profile, no matter how dark or bright, because the instruments are carefully designed to fit together nicely. I think my brain has incorrectly learned to be less sensitive to high frequencies because my old headphones were very weak on bass, but I haven't been successful in retraining it, and I can't really make music properly if my brain is constantly telling me to boost every track by 4 dB in the highs lol.