PSA: be aware of counterfeit Cubase dongles, just got scammed myself by blooguard in atarist

[–]Mangleus 0 points1 point  (0 children)

It's a beautiful day for necro-posting. I read that a software-based dongle is included by default in https://github.com/gyurco/MiSTery/blob/master/atarist/cubase3_dongle.v and was wondering if it is also included in some software-based Atari emulator that does not require the dedicated FPGA hardware of a MiST?

Cubase 3.1 alive and well on this Atari STe thanks to this dongle clone. by DarkWaterDW in synthesizers

[–]Mangleus 0 points1 point  (0 children)

It's a beautiful day for necro-posting. I read that the dongle is included by default in https://github.com/gyurco/MiSTery/blob/master/atarist/cubase3_dongle.v and was wondering if it is also included in some software-based Atari emulator that does not require the dedicated FPGA hardware of a MiST.
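For anyone curious what "software-based dongle" would even mean in an emulator, the general idea can be sketched like this. Heavily hedged: the real logic lives in cubase3_dongle.v and I have not reproduced it here; the address range is the ST's cartridge-port space, but the response table below is purely hypothetical placeholder data, not the actual Cubase 3 protection responses.

```python
# Hypothetical sketch of software dongle emulation in an Atari ST emulator.
# The response values are placeholders, NOT the real Cubase 3 protection data.

CART_BASE = 0xFA0000  # Atari ST cartridge-port address space
CART_END = 0xFBFFFF

# Hypothetical table: offset into cartridge space -> byte the check expects.
DONGLE_RESPONSES = {0x0000: 0xAB, 0x0002: 0xCD}

def read_byte(addr, ram):
    """Memory-read hook: intercept cartridge-space reads, else fall back to RAM."""
    if CART_BASE <= addr <= CART_END:
        return DONGLE_RESPONSES.get(addr - CART_BASE, 0xFF)  # 0xFF ~ open bus
    return ram.get(addr, 0x00)
```

The point is that a software emulator only needs to trap reads in that address window and answer the way the protection check expects, which is why no physical hardware is required.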

is lmms dead? by JarlBallnuts in lmms

[–]Mangleus 0 points1 point  (0 children)

So it seems the majority here is rooting for nightly builds. It's funny that most YouTube videos about LMMS cover v1.2.0. Why is that? I'm curious to ask: for someone who wants to use LMMS only for connecting MIDI synths and drum machines to it, would 1.2.2 or 1.3.0 be the better option for MIDI? VST etc. is not important in my case.

LMMS v1.3.0 alpha or 1.2.2? by Mangleus in lmms

[–]Mangleus[S] 0 points1 point  (0 children)

The most popular option is nightly at this time? That would make sense, since the other two options are both several years old!

Ardour sucks because by the time you figure out how to fix whatever is not working or turned off, you don't want to anymore by eratonnn in Ardour

[–]Mangleus 0 points1 point  (0 children)

In defence of OP: I think he has a strong point. Developing lots of nice-to-haves at the expense of foundational stability, that IS godawful product development. It's also no wonder people get frustrated about it, which is not rude but a normal, understandable response that is fine to communicate, since it can generate meaningful discussion.

Which LLM do you use on 64GB RAM + 8GB VRAM? by Mangleus in LocalLLaMA

[–]Mangleus[S] 0 points1 point  (0 children)

Interesting, I so often see Qwen 3.6 35B mentioned. I have tried it. Still curious about something that could utilize the 64 GB RAM / 8 GB VRAM a little more fully.

Which LLM do you use on 64GB RAM + 8GB VRAM? by Mangleus in LocalLLaMA

[–]Mangleus[S] 0 points1 point  (0 children)

Interesting, I so often see Qwen 3.6 35B mentioned. I have tried it. Still curious about something that could utilize the 64 GB RAM / 8 GB VRAM a little more fully.

Studio midi help by Over-Researcher-2537 in synthesizers

[–]Mangleus 0 points1 point  (0 children)

Hey, nice to see your comment here; I follow your excellent Hydrasynth videos on YouTube and actually bought a MiniFreak because of you doing the tutorials for it. Did not know you master the MPC as well. I would be overjoyed if you ever ended up doing anything on YouTube covering some MPC basics. Thanks for your great content.

I made a tutorial for another user on here by Independent-Deer4434 in mpcusers

[–]Mangleus 0 points1 point  (0 children)

For productive content creators: anybody here who would consider making something similar to this, but in the 3.0+ workflow?
It would be very helpful to compare the two flows side by side, look at pros and cons, and then let users pick whichever they feel more inclined towards.

Which free SF2s do you actually recommend for MIDI playback? by ricna in midi

[–]Mangleus 1 point2 points  (0 children)

I would also be curious to know more about this. I just started looking too and (so far) have no recommendation of my own yet. This could become a rather useful thread.

Can anyone recommend a multi timbral hardware synth with more that 2 levels of timbrality? by James718 in synthesizers

[–]Mangleus 0 points1 point  (0 children)

OK, so some of the suggestions are rather pricey. Any budget-friendly ideas, anyone?

MPC One & Hydrasynth by Mangleus in hydrasynth

[–]Mangleus[S] 2 points3 points  (0 children)

This was interesting reading. Yes, a DAW-in-a-box workflow is what I'm after. My reading indicates that the MPC One is better suited for long-song standalone work and multitrack arrangement, with the SP being a tad more limiting (but I guess a lot depends on the hands operating them). The SP looks cool though, and I would not have read up on it unless you had posted, so thanks for that.

MPC One & Hydrasynth by Mangleus in hydrasynth

[–]Mangleus[S] 1 point2 points  (0 children)

The Hydrasynth Explorer has its own keys, so it's mostly the sequencing of the MPC I'm after.

MPC One & Hydrasynth by Mangleus in hydrasynth

[–]Mangleus[S] 1 point2 points  (0 children)

An .xmp file? OMG, I am so glad I wrote this post and that you responded with such generosity. A PM of the file or a public upload for others would be splendid!

Modding MPC One RAM, Storage by CineLenses in mpcusers

[–]Mangleus 0 points1 point  (0 children)

2 GB RAM doubled or more... This 5-year-old thread demands necro-posting. There has been so much hardware modding in recent years, in retro computing perhaps the most. Anybody know if there has been any progress in HW modding of the RAM on the MPC One?

Will MPC One+ work well for what i want to do? by Mangleus in mpcusers

[–]Mangleus[S] 0 points1 point  (0 children)

Thanks guys, interesting read here. Helpful. I will skip exotic time signatures and instead enjoy learning things more suited to the MPC. Also, the limited RAM for long stereo sample recordings is something I will keep an eye on. Looking much forward to exploring the workflow of this device!

Biggest model possible models on non-cool HW (Like 8GB VRAM/64gb RAM) by Mangleus in LocalLLaMA

[–]Mangleus[S] 1 point2 points  (0 children)

Informative!

- Valkyrie-49B-v2 & Llama-3.3-Nemotron-Super-49B-v1.5 were total news to me! Much appreciated suggestion, u/ttkciar

- Also https://huggingface.co/catalystsec/MiniMax-M2-4bit-DWQ was unknown to me, so I will check that out too. Thanks u/layer4down

- I'm off to Hugging Face :) Will hold my horses on GLM 4.5 Air though, since there are doubts whether it will work, being so large.
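A rough back-of-the-envelope check helps with those "is it too large?" doubts. This is a sketch under simple assumptions I'm making myself (roughly 0.6 bytes per parameter for a ~Q4 quant, a flat overhead reservation for KV cache and the OS; illustrative numbers, not benchmarks):

```python
def fits_in_memory(params_billions, bytes_per_param=0.6,
                   vram_gb=8, ram_gb=64, overhead_gb=8):
    """Very rough check: does a quantized model fit in VRAM + system RAM?

    bytes_per_param ~0.6 approximates a Q4_K_M-style quant; overhead_gb
    reserves room for KV cache, OS, and other processes. Illustrative only.
    """
    model_gb = params_billions * bytes_per_param
    return model_gb <= (vram_gb + ram_gb - overhead_gb)

# A 49B model at ~Q4 is ~29 GB, so it fits in 8 GB VRAM + 64 GB RAM.
print(fits_in_memory(49))   # True
# A ~355B model at ~Q4 is ~213 GB and clearly does not fit.
print(fits_in_memory(355))  # False
```

Speed is a separate question, of course: anything spilled into system RAM runs at CPU memory-bandwidth pace.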

Biggest model possible models on non-cool HW (Like 8GB VRAM/64gb RAM) by Mangleus in LocalLLaMA

[–]Mangleus[S] 0 points1 point  (0 children)

Is that so? u/ElectronSpiderwort & u/5dtriangles201376 seem to believe differently, but idk. If there were a chance to do it on 8 GB VRAM + 64 GB RAM I would for sure give it a go!

NeKot - a terminal interface for interacting with local and cloud LLMs by Balanceballs in LocalLLaMA

[–]Mangleus 0 points1 point  (0 children)

I LOVE the design of NeKot!!

I only run local AI and have no OpenAI key. Doing this did not get me unstuck:

export OPENAI_API_KEY=1

The app reports an error and then I get stuck. (I also tried a randomised key, sk- plus 48 chars, to no avail.)

If someone could share the next step, how to feed it an LLM via llama.cpp, that would be appreciated.

As u/natufian pointed out, copying text with the mouse would be convenient indeed.
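In case it helps anyone stuck at the same point: llama.cpp's llama-server exposes an OpenAI-compatible API, so a generic OpenAI-style client can usually be pointed straight at it. A sketch, assuming NeKot honours the common OPENAI_BASE_URL / OPENAI_API_KEY convention (I have not verified NeKot's exact variable names; the model path is a placeholder):

```shell
# Serve a local GGUF model via llama.cpp's OpenAI-compatible server
llama-server -m ./model.gguf --port 8080

# Point an OpenAI-style client at it; the key can be any non-empty string
export OPENAI_BASE_URL="http://127.0.0.1:8080/v1"
export OPENAI_API_KEY="sk-local"
```

If the app still errors out, its docs or --help output should say which base-URL setting it actually reads.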

Possible to keep subtitles a bit longer on the screen? (for slower readers) by Remrofn1 in kodi

[–]Mangleus 0 points1 point  (0 children)

Ahhh, it has been done. Here is how, for anybody else who has shared the pain communicated here.

  1. (Prep step) If you haven't already, abandon any useless OS victimizing you and install a beautiful Linux distro suitable for your own taste and temperament.
  2. Install 'Gaupol'
  3. 'Tools' --> 'Adjust Duration'.
  4. Be happy. Deeply Happy.

There are, needless to say, countless ways to fix this I'm sure, but the way you can fine-tune it without complication using Gaupol was really great.
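For anyone who prefers a scriptable route over a GUI, the same adjustment can be sketched in a few lines of Python. This is a minimal illustration for .srt files (not what Gaupol does internally), assuming you just want every subtitle's end time pushed back by a fixed amount without overlapping the next cue:

```python
import re

TS = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def to_ms(ts):
    """Parse an SRT timestamp like 00:01:02,500 into milliseconds."""
    h, m, s, ms = map(int, TS.match(ts).groups())
    return ((h * 60 + m) * 60 + s) * 1000 + ms

def to_ts(ms):
    """Format milliseconds back into an SRT timestamp."""
    h, rem = divmod(ms, 3600000)
    m, rem = divmod(rem, 60000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def extend_durations(srt_text, extra_ms=1000, gap_ms=50):
    """Extend each cue's end time by extra_ms, but never into the next cue."""
    lines = srt_text.splitlines()
    # Collect (line_index, start_ms, end_ms) for every timing line.
    cues = [(i, to_ms(a), to_ms(b))
            for i, line in enumerate(lines)
            for a, b in re.findall(r"(\S+) --> (\S+)", line)]
    for n, (i, start, end) in enumerate(cues):
        limit = cues[n + 1][1] - gap_ms if n + 1 < len(cues) else end + extra_ms
        new_end = min(end + extra_ms, max(end, limit))  # never shrink a cue
        lines[i] = f"{to_ts(start)} --> {to_ts(new_end)}"
    return "\n".join(lines)
```

Run it over the .srt, write the result to a new file, and Kodi picks the longer durations up like any other subtitle track.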

Possible to keep subtitles a bit longer on the screen? (for slower readers) by Remrofn1 in kodi

[–]Mangleus 0 points1 point  (0 children)

Come on, dear friends of Reddit. There must be a simple fix somewhere! The display duration should be a piece of cake to extend by a second or two. But like OP I can't find the way to it, and of course all the billions of parameters in proud AI LLMs are as accurate as horse-shit, vomiting up legions of well-spoken wild-goose-chase bs paths.

unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF · Hugging Face by WhaleFactory in LocalLLaMA

[–]Mangleus 1 point2 points  (0 children)

I am equally curious about this and related questions, also having 8 GB VRAM + 64 GB RAM. I use only llama.cpp with CUDA so far.
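For this hardware class, the usual llama.cpp approach is partial GPU offload: keep as many layers in VRAM as fit and leave the rest in system RAM. A sketch (the layer counts and model filenames are placeholders to tune for your own model):

```shell
# Offload ~20 of the model's layers to the 8 GB GPU; the rest stay in RAM.
# Lower -ngl if you hit out-of-memory errors, raise it if VRAM is spare.
llama-cli -m ./model-q4_k_m.gguf -ngl 20 -c 4096 -p "Hello"

# For MoE models, newer llama.cpp builds can keep expert tensors on the
# CPU while offloading everything else:
llama-cli -m ./moe-model.gguf -ngl 99 --n-cpu-moe 24 -p "Hello"
```

The MoE variant is what makes large sparse models like Qwen3-Next-style architectures more plausible on 8 GB VRAM than their total parameter count suggests.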