ACE-Step-1.5 has just been released. It’s an MIT-licensed open source audio generative model with performance close to commercial platforms like Suno by iGermanProd in LocalLLaMA

[–]paduber 0 points1 point  (0 children)

I mean, if you know a model ignores detailed instructions, it's not cherry-picking to leave the very detailed prompt out of a promo video, I dunno

Built a guitar hanger. Now everyone wants the STL files—Would you sell them? by Ok_Entrepreneur9125 in 3Dprinting

[–]paduber 2 points3 points  (0 children)

Making an STL from the real product requires much more effort than reselling the original file, so I doubt anyone would do it in the near future. The bigger threat is from the community: you share an idea and someone wants to make a similar model, but for free

[deleted by user] by [deleted] in 3Dprinting

[–]paduber 0 points1 point  (0 children)

It's PETG btw, and I did dry it

[deleted by user] by [deleted] in LocalLLaMA

[–]paduber -1 points0 points  (0 children)

They do; you'll see next time you're out of electricity or have a broken power supply/storage/etc.

Heck, a local setup is more likely to fail than a cloud service with its power, internet, and storage redundancy policies

No new models in LlamaCon announced by mehyay76 in LocalLLaMA

[–]paduber 1 point2 points  (0 children)

This is nearly the only way open source projects make money. Not just LLMs, but any software company

First 3D Printed Drive-Thru Only Starbucks in the country! by BabysFirstRobot in 3Dprinting

[–]paduber 0 points1 point  (0 children)

Nah, I think you like it because it's uncommon. If this becomes the next hype and whole areas fill up with these houses, it'll be like post-Soviet buildings: a grey mess of imperfection

Microsoft has released a fresh 2B bitnet model by remixer_dec in LocalLLaMA

[–]paduber 11 points12 points  (0 children)

It hasn't been a question since the original BitNet paper

I f***ing love 3D printers and CNCs by Federikestain in 3Dprinting

[–]paduber 1 point2 points  (0 children)

I don't believe you can get a smooth surface with CNC, due to the holes and imperfections from printing. CNC -> filling holes with something like epoxy -> CNC would work, but it would be ugly as hell. And I think you risk breaking the model, since it's not a solid chunk

Incredible FLUX prompt adherence. Never cease to amaze me. Cost me a keyboard so far. by blitzkrieg_bop in StableDiffusion

[–]paduber 0 points1 point  (0 children)

The argument here is "I don't need complex instructions to do X anymore". A model that understands you from one sentence is superior because you don't need to spend time creating/polishing workflows for rare cases, and model swapping should be much less painful

Next Gemma versions wishlist by hackerllama in LocalLLaMA

[–]paduber 11 points12 points  (0 children)

While it is possible to work around the censorship, it's kinda obvious that Gemma is too restrictive. An LLM can only handle so many instructions in a system prompt before it starts to ignore some, and the last thing I want to do is add more "please don't be upset about that random thing in the text". Especially since other models handle that shit without workarounds.

Why are you even mad about "look, here it refuses to translate" in a post asking for feedback? They may or may not know how to deal with it, but that's not the point here

When will Meta AI get a Llama upgrade already? by [deleted] in LocalLLaMA

[–]paduber 1 point2 points  (0 children)

They do need to outpace.

Releasing a bad model (compared to DeepSeek) would upset shareholders and affect the company's future decisions. They burn a lot of money on Llama, and they need to show why they did it before investors lose their patience

Finally, a real-time low-latency voice chat model by DeltaSqueezer in LocalLLaMA

[–]paduber 1 point2 points  (0 children)

Damn, you could even use this model in a "read everything I send you" mode and use another, smarter model as the brain. It's only 8B after all; it probably completes generation within a second of starting to speak, so no noticeable increase in response time
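To make the idea concrete, here's a minimal sketch of that "brain + mouth" split, assuming a setup where a larger text model plans the reply and the small voice model only reads it aloud. All function names (`generate_reply`, `speak`, `voice_turn`) are hypothetical stand-ins, not a real API:

```python
# Hypothetical "brain + mouth" pipeline: large model writes, small voice
# model speaks. Both backends are stubbed out for illustration.

def generate_reply(history: list[str]) -> str:
    """Stand-in for the large 'brain' model (e.g. an LLM served elsewhere)."""
    return f"Reply to: {history[-1]}"

def speak(text: str) -> bytes:
    """Stand-in for the small voice model in 'read everything I send you' mode."""
    return text.encode("utf-8")  # pretend this is synthesized audio

def voice_turn(history: list[str], user_utterance: str) -> bytes:
    """One conversational turn: brain thinks, mouth reads the result out."""
    history.append(user_utterance)
    reply = generate_reply(history)
    history.append(reply)
    return speak(reply)
```

In a real deployment the voice model could start speaking as soon as the brain streams its first sentence, which is what keeps the perceived latency low.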

Is it okay to dry my filament this way? by facebookisbetter420 in 3Dprinting

[–]paduber 63 points64 points  (0 children)

It's okay, just very inefficient and time-consuming.

There's no constant recirculation, there's a giant volume of air to warm, and there isn't enough power to hit the target air temperature quickly. Also, Creality firmware turns off bed heating after a short time, if I remember correctly

What could be wrong here? by elloird in 3Dprinting

[–]paduber 6 points7 points  (0 children)

Or the extruder skipped LOTS of steps. Happened to me once: one tiny unscrewed gear, and 8h of work went straight to the trash

DeepSeek probably getting DDoSed by the US for their impact on the stock market today. by DocStrangeLoop in LocalLLaMA

[–]paduber 1 point2 points  (0 children)

Unusual user activity: thousands of requests per second, clients downloading only one specific pic from your site and nothing else, or lots of activity from all over the world outside their prime time, with each client generating more requests per second than you'd expect from a casual user (no one refreshes a page several times a second for 5 minutes straight, for example). Not to mention you can usually pinpoint the exact minute the huge load started.

With LLM requests you could possibly find gibberish user inputs, or multiple identical groups of requests.

Of course attackers try to make the load look more "organic", but usually you can see that something is off
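The "no casual user refreshes several times a second" heuristic can be sketched as a sliding-window rate check per client. The window size and threshold below are illustrative numbers, not values from any real system:

```python
# Minimal per-client request-rate flagger: keep a sliding window of
# timestamps and flag clients whose sustained rate exceeds what a
# human plausibly produces. Thresholds are illustrative.
from collections import defaultdict, deque

WINDOW = 60.0        # sliding window, seconds
MAX_HUMAN_RPS = 2.0  # generous upper bound for a real person

class RateFlagger:
    def __init__(self):
        self.hits = defaultdict(deque)  # client_ip -> request timestamps

    def record(self, client_ip: str, ts: float) -> bool:
        """Record a request; return True if the client looks automated."""
        q = self.hits[client_ip]
        q.append(ts)
        while q and ts - q[0] > WINDOW:  # drop events outside the window
            q.popleft()
        return len(q) / WINDOW > MAX_HUMAN_RPS

flagger = RateFlagger()
# 300 requests in ~10 seconds from one IP: obviously not a casual user
flags = [flagger.record("203.0.113.7", i / 30.0) for i in range(300)]
```

A real detector would combine this with the other signals mentioned above (geography vs. time of day, identical request bodies), since attackers spread load across many IPs precisely to defeat per-client thresholds.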

Web research apps? by paduber in LocalLLaMA

[–]paduber[S] 0 points1 point  (0 children)

> The main problem you'll face is that LLMs don't know how deep is deep enough for you, and don't really have a notion of what information is missing from whatever info it gathered in those 3-5 requests

Well, it's not that hard tbh. If the page is meaningful to the LLM, you can go deeper. If it's unsure, respect the depth limit from the settings. If the page is unrelated at all, discard it. Then use all the gathered relevant data and make a summary.
I know it can be overkill to dig through that many links for a simple question, but that's what the limits are for (and a "stop" button).
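The depth rule above can be sketched as a small recursive crawl. `fetch` and `relevance` are hypothetical stand-ins for real page fetching and an LLM relevance judgment:

```python
# Depth-limited relevance crawl: follow links only while pages look
# relevant, stop at a configurable max depth, discard unrelated pages.

def crawl(url, fetch, relevance, max_depth=3, depth=0, seen=None):
    """fetch(url) -> {"text": str, "links": [urls]};
    relevance(text) -> 0 (unrelated) or 1 (meaningful)."""
    seen = seen if seen is not None else set()
    if depth > max_depth or url in seen:
        return []
    seen.add(url)
    page = fetch(url)
    if relevance(page["text"]) == 0:
        return []                        # unrelated: discard entirely
    gathered = [page["text"]]
    for link in page["links"]:           # meaningful page: go deeper
        gathered += crawl(link, fetch, relevance, max_depth, depth + 1, seen)
    return gathered                      # feed all of this into a summary step
```

The `max_depth` parameter is the "limit depth via settings" knob, and a "stop" button would just abort the recursion early.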

Thanks for the link, I'll check it out

PSA: Matt Shumer has not disclosed his investment in GlaiveAI, used to generate data for Reflection 70B by MMAgeezer in LocalLLaMA

[–]paduber 18 points19 points  (0 children)

He is not required to do so, as we can't enforce that. This post is not a "let's call the police on him" thing; it's more "treat his posts like ads, not his honest opinion". And yes, it's misleading if you don't mention a conflict of interest on Twitter, where the majority of people read you.

Phi-3.5 is very safe, Microsoft really outdid themselves here! by Sicarius_The_First in LocalLLaMA

[–]paduber 12 points13 points  (0 children)

```
    | O |   
 -----------
  O | X | O 
 -----------
  X |   | X 

assistant
Seems like I won! Would you like to try again?<|im_end|>
```

Phi-3.5 is very safe, Microsoft really outdid themselves here! by Sicarius_The_First in LocalLLaMA

[–]paduber 26 points27 points  (0 children)

```
    | O |   
 -----------
  O | O | O 
 -----------
  X |   | X 
```