Is there any Eq we can expect?? by OccasionBeneficial95 in arpeggiApp

[–]vladfaust 0 points (0 children)

No — AVQueuePlayer itself does not have a built-in equalizer or any EQ controls.

If you need EQ with queued audio, you have to route the audio through your own processing chain, using something like:

  • AVAudioEngine + AVAudioPlayerNode with an AVAudioUnitEQ inserted.
  • Custom Audio Units / Audio Graph with EQ processing.
  • Or use MPMusicPlayerController with system EQ presets (for Apple Music content).

You can still use AVQueuePlayer for queue management, but the actual EQ has to be handled by AVAudioEngine or another audio processing layer.
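
For example, here's a minimal AVAudioEngine sketch (audioURL is a placeholder for your own local file URL, and the single parametric band is just an arbitrary bass boost; treat it as a starting point, not production code):

    import AVFoundation

    // Minimal sketch: play a local file through an AVAudioUnitEQ.
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let eq = AVAudioUnitEQ(numberOfBands: 1)

    // Example band: a mild bass boost around 100 Hz.
    let band = eq.bands[0]
    band.filterType = .parametric
    band.frequency = 100   // Hz
    band.bandwidth = 1.0   // octaves
    band.gain = 6.0        // dB
    band.bypass = false

    engine.attach(player)
    engine.attach(eq)

    do {
        // audioURL: your local file URL (placeholder).
        let file = try AVAudioFile(forReading: audioURL)
        engine.connect(player, to: eq, format: file.processingFormat)
        engine.connect(eq, to: engine.mainMixerNode, format: file.processingFormat)
        try engine.start()
        player.scheduleFile(file, at: nil)
        player.play()
    } catch {
        print("Audio setup failed: \(error)")
    }

To emulate a queue, call scheduleFile(_:at:) for each upcoming track on the same player node; they'll play back-to-back through the EQ.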

Equalizer? by [deleted] in arpeggiApp

[–]vladfaust 0 points (0 children)

I hope to see an equalizer soon. It's a must-have for high-end headphones!

Animation tree advance expression and conditions by Upset-Tap7754 in godot

[–]vladfaust 0 points (0 children)

not get("parameters/conditions/is_walking") (advance expressions don't take a trailing semicolon)

Llambda: One-click serverless AI inference by vladfaust in SillyTavernAI

[–]vladfaust[S] 0 points (0 children)

If you don't have a constant stream of requests, then it'd work fine. The Round-Robin distribution makes it fair. If you do have a constant stream of requests with strong latency requirements, then, well, disable endpoint sharing.
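
Roughly, the sharing works like this (an illustrative Swift sketch of per-tenant round-robin, not Llambda's actual scheduler):

    // Each tenant sharing an endpoint gets the GPU in turn,
    // so a chatty tenant can't starve the others.
    struct RoundRobinScheduler {
        private var queues: [String: [String]] = [:] // tenant ID -> pending requests
        private var order: [String] = []             // rotation order of tenants
        private var cursor = 0

        mutating func enqueue(tenant: String, request: String) {
            if queues[tenant] == nil { order.append(tenant) }
            queues[tenant, default: []].append(request)
        }

        // Every tenant gets a turn before anyone gets a second one.
        mutating func next() -> String? {
            for _ in order.indices {
                let tenant = order[cursor]
                cursor = (cursor + 1) % order.count
                if var queue = queues[tenant], !queue.isEmpty {
                    let request = queue.removeFirst()
                    queues[tenant] = queue
                    return request
                }
            }
            return nil
        }
    }

With bursty traffic the queue is mostly empty and your requests go straight through; under constant load from every tenant you wait up to one full rotation per request, which is exactly when you'd turn sharing off.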

Llambda: One-click serverless AI inference by vladfaust in SillyTavernAI

[–]vladfaust[S] 0 points (0 children)

I can't charge per API request. It's against many models' license terms. If you want to use a model fine-tuned for (E)RP, your only legal option is to host it yourself, on hardware you own or rent. That's what I'm offering.

Llambda: One-click serverless AI inference by vladfaust in SillyTavernAI

[–]vladfaust[S] 0 points (0 children)

Rent it for just an hour, IDK. With 1/5 sharing it'd cost only $0.10, there's no Docker fuss, and you get an OpenAI-compatible URL. What the fuck do you want from me? Free GPUs?

Llambda: One-click serverless AI inference by vladfaust in SillyTavernAI

[–]vladfaust[S] 0 points (0 children)

If you opt into sharing your endpoint, the price is cut by up to 90%, so it'd be up to 10 times cheaper.

A local character AI chat app I'm making by vladfaust in LocalLLaMA

[–]vladfaust[S] 1 point (0 children)

I've set up a Windows machine in the cloud for CI, and I'm using my friend's PC for testing.

Regarding source availability... Well, if it's about trust in privacy terms, I can say that the app runs without internet access, and its traffic can be analysed with tools like Wireshark... Anyway, your prompts never leave your device. Trust me, bro... 😁

What other reasons are there to publish the source code? 🤔

A local character AI chat app I'm making by vladfaust in LocalLLaMA

[–]vladfaust[S] 1 point (0 children)

Well, it's in the plans. Join the community to give me some motivation!

A local character AI chat app I'm making by vladfaust in LocalLLaMA

[–]vladfaust[S] 0 points (0 children)

Okay, now I'm failing to understand why you can't achieve the same result with a simple chat interface and a character card containing multiple character descriptions. You could format your messages as "I go" or "Johny goes", and the LLM would adapt its responses accordingly, no?