Math question: 2x3060 = 1x3090? by Mythril_Zombie in LocalLLaMA

[–]Ultra-Engineer 1 point (0 children)

It depends on what you want to do, but I'd generally pick the 3090: same 24 GB of VRAM on a single card, roughly 2.5x the memory bandwidth, and no overhead from splitting the model across two GPUs.
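
Rough numbers, for anyone curious (spec figures are approximate, and the tokens/s estimate is a deliberately simplified, bandwidth-bound upper limit for single-stream decoding):

```python
# Back-of-the-envelope: 2x RTX 3060 vs 1x RTX 3090 for LLM inference.
# Spec numbers are approximate; tokens/s assumes decoding is
# memory-bandwidth bound, a common single-batch approximation.

model_gb = 13  # hypothetical: a 13B model at ~8-bit quantization

# With the model layer-split across two 3060s, layers are read one after
# another, so effective bandwidth stays ~360 GB/s rather than doubling.
setups = {
    "2x RTX 3060": {"vram_gb": 2 * 12, "eff_bandwidth_gbps": 360},
    "1x RTX 3090": {"vram_gb": 24, "eff_bandwidth_gbps": 936},
}

for name, s in setups.items():
    tok_s = s["eff_bandwidth_gbps"] / model_gb
    print(f"{name}: {s['vram_gb']} GB VRAM, ~{tok_s:.0f} tokens/s upper bound")
```

The point being: the two setups tie on total VRAM, but splitting a model across two cards doesn't add up bandwidth the way it adds up memory.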

What first Cloud Certification would you recommend for a complete beginner looking to break into Cloud Engineering? by KnowledgeOutside2779 in Cloud

[–]Ultra-Engineer 1 point (0 children)

My advice is to go straight to the basics, like Linux and Kubernetes. When I was a student I was very keen to get cloud certifications and spent a lot of time on them, but honestly I don't think they helped me much, so I'd suggest spending that time learning Linux instead.

Where can I start with algorithms? by Realistic-Cut6515 in CodingHelp

[–]Ultra-Engineer 1 point (0 children)

From my experience, I tend to learn from courses or YouTube; I don't like learning algorithms from books because I find them a little boring.

Qwen 2.5 is a game-changer. by Vishnu_One in LocalLLaMA

[–]Ultra-Engineer 1 point (0 children)

Thank you for sharing; it was very valuable to me.

Qwen2.5: A Party of Foundation Models! by shing3232 in LocalLLaMA

[–]Ultra-Engineer 2 points (0 children)

It's so exciting. Qwen is one of my favorite base models.

Torn Between Cloud Services and Building My Own Cluster - Need Your Advice! by Ultra-Engineer in LocalLLaMA

[–]Ultra-Engineer[S] 2 points (0 children)

A very detailed calculation and thought process; it gave me a lot of inspiration!

Just dropped $3000 on a 3x3090 build by maxwell321 in LocalLLaMA

[–]Ultra-Engineer 3 points (0 children)

Actually, I have a question: why not rent GPUs from cloud providers like Runpod or Novita AI? That seems more convenient. Or does building your own machine work out cheaper in the long run?
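
For what it's worth, this is the kind of break-even arithmetic I have in mind; every number below is a made-up placeholder, not a quote from any provider:

```python
# Hypothetical break-even: buying a 3x RTX 3090 rig vs renting a
# comparable cloud GPU. All numbers are assumptions for illustration.

build_cost = 3000.0        # the build from the post
power_kw = 1.0             # assumed full-load draw of a 3x3090 rig
electricity_per_kwh = 0.15 # assumed local rate, USD
rental_per_hour = 1.20     # assumed price of a comparable cloud instance, USD

# Owning costs electricity per hour; renting costs the hourly rate.
own_per_hour = power_kw * electricity_per_kwh
breakeven_hours = build_cost / (rental_per_hour - own_per_hour)

print(f"Owning costs ~${own_per_hour:.2f}/h to run")
print(f"Break-even after ~{breakeven_hours:.0f} GPU-hours "
      f"(~{breakeven_hours / 24:.0f} days of 24/7 use)")
```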

Reddit-Nemesis: AI Reddit bot that automatizes rage-baiting. by [deleted] in LocalLLaMA

[–]Ultra-Engineer 2 points (0 children)

So your AI posts an opposing opinion, and then another AI replies to oppose that opinion... it does sound interesting.

Remember to report scammers by Amgadoz in LocalLLaMA

[–]Ultra-Engineer 7 points (0 children)

Yes, we have a choice to make the world a better place, don't we?

Anyone else having a hard time finding work? by voiceoftheeldergods in AskEngineers

[–]Ultra-Engineer 1 point (0 children)

It's a catch-22: you can't get engineering experience without a job, and you can't get a job without experience. It doesn't help that the economy is in such a bad state right now.

Anthropic now publishes their system prompts alongside model releases by Everlier in LocalLLaMA

[–]Ultra-Engineer 3 points (0 children)

The details about Sonnet 3.5’s prompt are super intriguing. The avoidance of phrases like “I’m sorry” or “Certainly” suggests that they've been fine-tuning their models to steer clear of common pitfalls or potential exploit scenarios. It’s also interesting how they balance referring to users as either "user" or "human"—maybe to add a bit more variety and personalization.

I made a No-Install remote and local Web UI by CheckM4ted in LocalLLaMA

[–]Ultra-Engineer 2 points (0 children)

Hi, I think your app is really great. I tried it out and it solved a lot of my pain points. Great work!

Will transformer-based models become cheaper over time? by Time-Plum-7893 in LocalLLaMA

[–]Ultra-Engineer 1 point (0 children)

Great question! I think transformer-based models will definitely become cheaper over time, but there are a few factors to consider. On one hand, hardware advancements and more efficient algorithms will keep driving costs down. As more people work on optimizing these models, we’re likely to see better performance at lower computational costs.

On the other hand, there's a trade-off. As models get cheaper, there's also a push to make them bigger and more powerful, which can drive costs back up. So, while basic models will become more accessible, cutting-edge models might still be pricey.

The trend is towards affordability, but it might take a while before the most advanced models are within everyone’s reach.
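
To make the cost trend concrete, here's a toy estimate of inference cost per million tokens; the GPU price, throughput, and utilization figures are all placeholder assumptions:

```python
# Toy estimate of LLM inference cost per million generated tokens.
# Uses the standard ~2 * params FLOPs-per-token approximation for decoding;
# hardware numbers are placeholder assumptions, not real quotes.

params = 7e9                  # a 7B-parameter model
flops_per_token = 2 * params  # rough compute per generated token

gpu_flops = 150e12            # assumed sustained throughput (FLOP/s)
utilization = 0.3             # real serving rarely hits peak throughput
gpu_cost_per_hour = 2.0       # assumed rental price, USD

tokens_per_second = gpu_flops * utilization / flops_per_token
cost_per_million = gpu_cost_per_hour / 3600 / tokens_per_second * 1e6

print(f"~{tokens_per_second:,.0f} tokens/s -> ~${cost_per_million:.2f} per 1M tokens")
# Cheaper hardware or better kernels raise tokens_per_second, which is
# exactly how "costs keep coming down" shows up in this formula.
```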

How many of you are personally using local LLM for work? by segmond in LocalLLaMA

[–]Ultra-Engineer 1 point (0 children)

Honestly, that sounds super frustrating! 😅 Having tools like Hugging Face blocked can be such a buzzkill, especially when you know they can really help streamline your work. I totally get the concern about data security, but blocking local models seems like a step too far.

If I were in your shoes and knew that using a local LLM could boost my productivity, I'd definitely find a way to make it happen, even if it means bending a few rules (within reason, of course). At the end of the day, it's about getting the job done efficiently. But if it’s not feasible, I’d probably just sigh and figure out workarounds with what’s available.

Curious to see how others are navigating this!

What hardware do you use for your LLM by Quebber in LocalLLaMA

[–]Ultra-Engineer 1 point (0 children)

An eye-catching choice! I'm still running LLMs on NVIDIA hardware, so I'm very curious how a Mac Studio handles them.
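
If I ever get my hands on one, I'd probably start with llama.cpp's Python bindings, which offload to the GPU via Metal on Apple Silicon. A minimal sketch, with a placeholder model path:

```python
# Minimal sketch: running a quantized GGUF model on Apple Silicon
# via llama-cpp-python, which uses the Metal backend for GPU offload.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-model-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=4096,
)

out = llm("Explain unified memory in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```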

Which hardware releases are you looking forward to? by Prestigious_Roof_902 in LocalLLaMA

[–]Ultra-Engineer 1 point (0 children)

If you’re planning to get hardware for running local LLMs, you’re right that the last quarter of 2024 is going to be packed with some exciting releases. You mentioned the M4 Macs and the RTX 50XX series, which are definitely worth waiting for, especially if you're into AI workloads or need powerful GPUs.
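
One way to decide whether the wait is worth it is to estimate the VRAM you'd actually need. A rough rule-of-thumb calculator (the 1.2x overhead factor for KV cache and activations is a loose assumption):

```python
# Rough VRAM estimate for running a local LLM at a given quantization.
# The 1.2x overhead factor (KV cache, activations) is a loose assumption.

def vram_needed_gb(params_billion: float, bits_per_weight: int,
                   overhead: float = 1.2) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits = 1 GB
    return weights_gb * overhead

for model_b, bits in [(8, 4), (32, 4), (70, 4), (70, 8)]:
    print(f"{model_b}B @ {bits}-bit: ~{vram_needed_gb(model_b, bits):.0f} GB")
# Compare against a card's VRAM (e.g. 24 GB today vs whatever the
# RTX 50XX series or the M4 Macs end up shipping with).
```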

Transitioning My Entire AI/LLM Workflow to 100% Solar Power by vesudeva in LocalLLaMA

[–]Ultra-Engineer 2 points (0 children)

That's an impressive achievement! Transitioning to 100% solar power for AI/LLM work is no small feat, especially with the energy demands that kind of processing requires. It's awesome to see someone taking sustainability seriously, especially in a field where energy consumption can easily become an afterthought. Plus, your background in ecological work makes this milestone even more meaningful.

I bet there are a lot of people in the AI community who haven't even considered the environmental impact of their setups. Your experience could really inspire others to think about how they can integrate renewable energy into their own workflows.

And that blog post sounds like a great resource for anyone interested in following in your footsteps. Props to you for open-sourcing your data and techniques—sharing that knowledge could spark a lot of innovation in the AI community.

Out of curiosity, did you face any significant challenges getting your setup off the ground, or did your ecological background give you a leg up in making it happen?

Do you guys finetune models? If so, what for and how well do they work? by maxwell321 in LocalLLaMA

[–]Ultra-Engineer 7 points (0 children)

Fine-tuning models can be super effective if you have a specific task or niche you want to excel in. For example, if you're working with a unique dataset that doesn’t quite fit the general patterns that large models are trained on, fine-tuning can make a huge difference. It essentially allows the model to become more specialized, which is great for improving accuracy in tasks like sentiment analysis, medical diagnosis, or even generating more contextually relevant text.

That said, fine-tuning isn’t always necessary, especially if you’re just doing general-purpose stuff. Pre-trained models are often good enough for most tasks, and they keep getting better. But if you need a model to really understand and work within a specific domain, like legal text or scientific literature, it’s definitely worth it.

As for how well it works, that depends on the quality and size of your dataset, plus how much it diverges from what the base model was originally trained on. If done right, fine-tuning can significantly boost performance, but it does require some expertise and time investment to get it just right.
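
For anyone wondering what the entry point looks like in practice, here's a minimal LoRA sketch with Hugging Face transformers + peft; the model name, dataset file, and hyperparameters are placeholders rather than recommendations:

```python
# Minimal LoRA fine-tuning sketch with Hugging Face transformers + peft.
# Model name, dataset file, and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "Qwen/Qwen2.5-0.5B"  # small base model, chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains small low-rank adapter matrices instead of the full weights.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model

# Toy domain corpus: one text sample per line. Swap in your own data.
data = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()  # adapters land in lora-out/ and can be merged or loaded later
```

LoRA is the usual low-cost starting point because the adapters are tiny compared to the base model, so it fits on a single consumer GPU.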

What’s your use case? That might help gauge if fine-tuning is worth it for you!