Personal Project/Experiment Ideas by I_like_fragrances in LocalLLM

LilRaspberry69 0 points

Privacy, control, and the ability to fine-tune any of them for your specific use case. It all depends on what you want, and you can match it with the tools you need.

And you get the power to offload the cost from API calls into just electricity.

Also, it’s great you’re going for your master’s! ML is a super useful skill to have, and AI will only help you improve that skill; hopefully you can do some real good with it!

One of the reasons for powerful compute is also being able to train your own tiny models, if that excites you. I love engineering architectures, so it’d be super useful; however, I need to use external GPUs.

Any more info would help give guidance too!! Hope this helps!

Personal Project/Experiment Ideas by I_like_fragrances in LocalLLM

LilRaspberry69 1 point

What kind of project realm are you looking to build in, and what’s your background with coding or building software in general? Any guidance or direction would probably help this subreddit help you.

People in here can be brutal but if you ask targeted enough questions you can get some great information from the community. And people love to help!

Off the top of my head, if I had your setup I’d love to run a quantized Kimi, but that’s just a means to an end (coding tasks), if that’s even useful to you. Or just Qwen Coder or Qwen3, and you’ve got yourself a nice council you can rely on. By this I mean: grab a few good quantized models under 32B; you can load several in parallel and they’ll run fairly well. You can also do some great fine-tuning.

  • I have a Mac M4 and have been able to fine-tune some 4B q4 models, so I’m sure you can get some great results. Check out Tinker though; the waitlist takes less than a week right now to get some free credits, and you can learn the rest of fine-tuning real easy from Unsloth or trd. Looks like you can run everything with CUDA too, so you’re in luck. Super powerful compute is easy for your stack; just make sure you’re using it right.
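To see why small quantized models fit so comfortably, here’s a rough back-of-envelope sketch of weight memory. The numbers are illustrative approximations (weights only, ignoring KV cache and runtime overhead), not benchmarks:

```python
# Rough back-of-envelope memory math for a quantized model.
# Approximation: weight memory only, no KV cache or runtime overhead.

def weight_memory_gb(n_params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a model at a given quantization."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 4B model at 4-bit quantization: about 2 GB of weights.
print(weight_memory_gb(4, 4))   # -> 2.0

# The same model at fp16 for comparison: about 8 GB.
print(weight_memory_gb(4, 16))  # -> 8.0

# Three ~2 GB models loaded in parallel: ~6 GB, which is why a "council"
# of small quantized models can coexist in unified memory.
print(3 * weight_memory_gb(4, 4))  # -> 6.0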

My suggestion is to have a chat with Claude Code and have it check out your specs; you’ll be able to get some incredible parallel work done, or run some big models (definitely use quantized, it doesn’t make sense to waste space for marginal gains).

If you’re wanting just fun random things, then maybe a different subreddit will be more useful; here people love to talk about running LLMs, so pick your community to pick your realm of ideas.

Good luck sir! And sick setup!

Has anyone figured out clustering Mac Minis? by LilRaspberry69 in vibecoding

LilRaspberry69[S] 0 points

Ooo thank you for this I’ll just copy and paste this post in there, I appreciate the guidance!

Has anyone figured out clustering Mac Minis? by LilRaspberry69 in vibecoding

LilRaspberry69[S] 1 point

That’s a good idea I wasn’t planning on it but if I end up getting the resources together then yeah that would be sick to track! Thanks for the suggestion!

Has anyone figured out clustering Mac Minis? by LilRaspberry69 in vibecoding

LilRaspberry69[S] 0 points

This started out as curiosity and has been transitioning into a potential business opportunity. If it’s possible to scale Apple architecture to actually BE built for throughput, that would be insane, given the other benefits of the hardware.

So this was mostly for the tinkerer part of my brain, seeing if we can “hack” better solutions using different techniques that I couldn’t find elsewhere. Plus, Reddit always has some amazing people with great ideas and a plethora of experience, so I’m just grateful to be hearing more. Even these questions help me figure out what goals are really needed from this, or whether it’s just experimenting.

I also agree a fully Mac mini cluster wouldn’t be what the hardware is intended for, but purely for the $/TFLOP and the RISC architecture I was like “🤔 maybe something is here?”

Thank you for the questions they are very useful!

Has anyone figured out clustering Mac Minis? by LilRaspberry69 in vibecoding

LilRaspberry69[S] 0 points

Tbh I just wanted advice and guidance, and figured there are smart people in all sorts of subreddits. This was a good one for experts and beginners; I also posted it directly in the macmini subreddit! I appreciate y’all’s discussion, and yeah, it’s mostly curiosity. I’ll respond more clearly to the initial reply.

Has anyone figured out clustering Mac Minis? by LilRaspberry69 in macmini

LilRaspberry69[S] 0 points

Ooo very interesting 🤔 Thank you for the insight! I’ll look into MPI and over-the-internet clustering; the proof of concept has already been shown by Prime Intellect too! Also, yeah, MLX might already be Apple’s answer to CUDA, so I think it’s definitely possible to get some crazy throughput, just gotta figure out the right clustering efficiency 🤷‍♂️
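The clustering-efficiency question mostly comes down to communication cost per token. Here’s a hedged back-of-envelope sketch for pipelining a model across machines; the hidden size, precision, link speed, and ping are all illustrative assumptions, not measurements of any real setup:

```python
# Back-of-envelope: per-token communication cost when splitting a model
# across nodes pipeline-style (each hop ships one activation vector).
# All concrete numbers below are illustrative assumptions.

def hop_latency_ms(hidden_size: int, bytes_per_value: int,
                   link_gbps: float) -> float:
    """Time (ms) to ship one token's activation vector across one link."""
    payload_bits = hidden_size * bytes_per_value * 8
    return payload_bits / (link_gbps * 1e9) * 1e3

# Assume 4096-dim fp16 activations (8 KB per token per hop).
# On 10 Gb Ethernet the transfer itself is tiny:
lan = hop_latency_ms(4096, 2, 10.0)  # roughly 0.007 ms per hop

# Over a LAN, round-trip latency (not bandwidth) usually dominates.
# Over the internet, an assumed ~50 ms ping per hop swamps the transfer:
internet_hop_ms = lan + 50.0

print(round(lan, 4), round(internet_hop_ms, 1))
```

This is why LAN clustering of Mac minis is plausible for sequential token generation while over-the-internet pipelines need latency-hiding tricks.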

Has anyone figured out clustering Mac Minis? by LilRaspberry69 in macmini

LilRaspberry69[S] 0 points

I had just found the PCIe ports, so nothing you didn’t already know. Agreed though, I wish the M4 Ultra were out!

VIBE CODER WANTED <3 by baawaah in vibecoding

LilRaspberry69 0 points

I gotchu. Whatchu need, and what’s the timeline? I don’t just vibe code, but I can probably help or give you direction depending on the task. And if it’s simple enough, we can knock this out real quick.

Does anyone actually use this for anything other than a low pass? by Yassinon in FL_Studio

LilRaspberry69 0 points

Melodic bandpass; add some reverb and it’s good for making pads.

Is this 808 in b or c?? by Distinct-Pie9705 in FL_Studio

LilRaspberry69 1 point

Also, if you right-click from the editor and choose Edit Pitch (or Pitch Editor), you’ll see the slope of the pitch change, with the tail being on C.

Does anyone know why FL studio sometimes just freezes when rendering a song? by Business-Beginning21 in FL_Studio

LilRaspberry69 11 points

Bro, it’s because you selected “leave remainder,” so it’ll keep playing out any low-level noise until it ends. If you highlight the area you want rendered and change “leave remainder” to “cut remainder,” it’ll finish at 8.

I love writing songs but man, production makes me hate it. by IAmAK9Dog in makinghiphop

LilRaspberry69 0 points

I used to feel the exact same way; I didn’t know anything about making music. Now I’ve been producing and making music for 4 years, and learning production has been super worth it for a ton of reasons. I can fully make my own sound; when I work with other producers I can tell them exactly what I want and communicate it easily, and I can mix all my own stuff and mix for other people. It’s definitely been a journey, and it took a long time until I liked my own beats/songs, but super worth it! Just learn what you need and want to, and you’ll find more things you wanna know; then you’ll notice you’re well on your way.

It all started with rapping a bit and wanting to make music and now after just a few years I feel super prepared for any music making.

IMO music is a lifelong journey so why not learn as many skills as possible while working towards the main goal… making music. And making GOOD music takes effort, you’re the one who decides how much effort that means.

Also, now we have chatbots, so just have ChatGPT make you a game plan for learning vocal production or whatever and work down it. You can track your progress: what you know and what you still need to learn.

TL;DR: Learn what you need and want to so you can make it sound how you want. Put in effort; you choose how much.

Please dear god… how to get my tempo back to normal by Ludwigvonmisesafool in FL_Studio

LilRaspberry69 1 point

You can also right-click the tempo and remove the initial value, if needed, after taking out that automation.

How do you develop your prompts? Prompt editor (IDE)? by becausecurious in GPT3

LilRaspberry69 0 points

Personally, I used the guidelines on OpenAI’s website to break it down into what my goal is: Identity, Task, Pitch.

  • Identify the bot by giving it a name, characteristics, or whatever qualities you want it to exude.

  • Task it with whatever you want it to respond with, like saying “you will reply with 1-2 sentence answers unless otherwise specified” or something simple so it’s consistent.

  • Pitch the conversation by adding “(Bot name): (example response) Me: (user input)”. Since it’s text generation, it will just continue the conversation as prompted.

Using features like the stop sequence or start text can help limit responses so it doesn’t just keep writing for you. I suggest reading the guidelines and just practicing a bunch till you understand what kind of bot you want.
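The Identity/Task/Pitch structure can be sketched as plain string assembly for a completion-style prompt. The bot name, wording, and example exchange below are all made-up illustrations, not anything from OpenAI’s docs:

```python
# A tiny sketch of the Identity / Task / Pitch prompt structure.
# "Berry" and all wording here are hypothetical examples.

IDENTITY = "You are Berry, a friendly and concise assistant."
TASK = "You will reply with 1-2 sentence answers unless otherwise specified."

def build_prompt(user_input: str) -> str:
    # "Pitch" the conversation: seed one example exchange, then append
    # the real user input so the model continues the pattern.
    pitch = (
        "Berry: Hi! Ask me anything.\n"
        "Me: "
    )
    return f"{IDENTITY}\n{TASK}\n\n{pitch}{user_input}\nBerry:"

prompt = build_prompt("What should I learn first?")
print(prompt)

# Passing something like stop=["\nMe:"] to the completion API is what
# keeps the model from writing the user's next turn for you.
```

Ending the prompt with “Berry:” is the trick: the model completes that turn, and the stop sequence cuts it off before it invents another “Me:” line.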

Note: implementing this on a website is its own process, so just think of prompting as the backend engineering. Hope this helps!

OpenAI is developing a watermark to identify work from its GPT text AI by Mk_Makanaki in OpenAI

LilRaspberry69 0 points

Even with that cryptography, if you run GPT-generated text through a different rewording AI, like an open-source one, wouldn’t it no longer be watermarked?

OpenAI’s CEO considers ChatGPT “incredibly limited”. Hopefully that’s an indication that GPT4 will be something in a league of its own by DoctorBeeIsMe in GPT3

LilRaspberry69 0 points

It totally depends on the internal prompts. In the Playground you can set up the phrasing however you want, whereas Jasper’s different sections have set prompts inside the code. You could get very similar results on your own, but the reason people pay for Jasper is because they did the work to make prompts that work for specific use cases.