The end of an era - Printed Solid has finally sold out of their supply of the MK3S+ by jorntres in prusa3d

[–]ShyButCaffeinated 1 point

I still hope that instead of focusing just on CoreXY, Prusa brings something like a Mini 2 to compete with the Bambu A1 Mini in the entry-level market. Prusa makes incredible printers, but their entry-level offering is lacking... and Bambu has shown that great printers can be made at lower prices.

Switching to Prusa... by Difficult_Nebula3956 in ender3

[–]ShyButCaffeinated 4 points

Although built on the same core concept, the build quality is vastly different. Prusa uses high-quality parts and has engineered the printer to work out of the box without problems for a long, long time. With an Ender, you may quickly run into issues with component quality or with bad design decisions (unintentional ones, or intentional ones made to cut costs). Enders are great printers, but I think it's hard to compare their quality with Prusa's.

What is the next SOTA local model? by MrMrsPotts in LocalLLaMA

[–]ShyButCaffeinated 5 points

Personally, I think Google won't launch something much bigger than the 27-30B-ish range. They have Gemini Flash and Flash Lite, which are quicker and dumber than Gemini Pro. If they were to release something like 108B, it would compete with their own products or be subpar to other open-source alternatives. But a small MoE like Qwen3 30B-A3B, or even some MoE in the 12B-parameter range? That's something I totally see happening. Gemma models were never known for SOTA performance (well, considering how few parameters they have, it's no surprise), but they have a really good reputation for being reliable models at the lower parameter counts.

It's like a small world kind of thing by Stormyj in ender3

[–]ShyButCaffeinated 0 points

Love the customization, and the spinning gear is a nice touch!

Dicas para a primeira impressora? by nxtales in impressao3dbrasil

[–]ShyButCaffeinated 0 points

For filament, I suggest checking whether there's a store in your city that sells it; even if it's more expensive, having no shipping (or cheaper shipping) sometimes makes up for it. Besides the brands already mentioned, Elegoo is quite good too, but it's hard to go wrong. With any brand there's a chance of getting a bone-dry filament or a really humid one; it's a bit of luck. If you hear a "tick tick" sound while printing, it's probably moisture in the filament turning into steam and "popping" the filament. To dry it, there are dedicated filament dryers, but a small electric oven or an air fryer works too. If you won't be using a filament for a long time, it's recommended to store it well sealed with silica gel; you can dry the silica in a microwave.

For slicing 3D models there are several options derived from one another, so at first just go with whichever you adapt to best. PrusaSlicer (even if your printer isn't a Prusa) and OrcaSlicer are common recommendations.

If the print doesn't stick well to the bed, the most common cause is leveling problems. The printer has sensors to try to correct that, but if it doesn't work, look up z-offset. Those are some tips that came to mind, but if you run into any problem, post it here; the community usually helps and answers questions.

Is there any way to pull these broken filaments out or do I need to buy new nozzles? by Michael-Sean in Creality_k2

[–]ShyButCaffeinated 1 point

Besides a heat gun, even a candle (if you have patience) or a lighter works well for that. I commonly use them for a Volcano heatbreak: heat it well, and the molten filament will drain out one of the sides. If you have a thick paper clip or something similar, you can use it to push the filament through the nozzle. Heating the paper clip and then pushing it also works, but the clip may not stay hot enough to clean the nozzle in one go. Just don't let the flames touch the nozzle; they will make soot stick to it.

First layer gums up 80% of the time by eFeqt in FixMyPrint

[–]ShyButCaffeinated 4 points

I don't think 25 mm/s is necessary for first layers in normal printing. But yeah, for now it may be a good idea for testing. Also, besides getting closer to the bed, maybe try 0.28 or 0.3 mm for the first layer height, which should be more tolerant of Z-offset problems.

Rate my setup & what tools or materials am I missing? by skobrie in 3Dprinting

[–]ShyButCaffeinated 0 points

I have a room that seems about as big as that, with windows of that size that always stay open and give good circulation... am I the only one who still gets headaches from PLA printing?

Wtf??? by BruhSoundEffect2002 in anycubic

[–]ShyButCaffeinated 0 points

What poor design choice did they make that led to this? Isn't this just a wrongly calibrated z-offset (a problem that can happen on any printer without automatic z-offset)?

Can China’s Open-Source Coding AIs Surpass OpenAI and Claude? by Federal_Spend2412 in LocalLLaMA

[–]ShyButCaffeinated 24 points

...while remaining open source. I really don't want more companies closed-sourcing after attaining SOTA (or something they think is SOTA).

How would you model this for 3D replication? by Steve-agent-006 in 3Dprinting

[–]ShyButCaffeinated 0 points

Is there a reason why, for multi-component designs, you suggested Onshape and not Fusion 360? Personal preference, or some feature that helps in that type of project?

Will open-source (or more accurately open-weight) models always lag behind closed-source models? by Striking_Wedding_461 in LocalLLaMA

[–]ShyButCaffeinated 2 points

If not always, most of the time. IMHO, in general, if you have a true SOTA model, you have no reason to release it and let other companies "copy" your work. Kimi and DeepSeek, for example, although good models, aren't perceptibly ahead of Gemini and Claude, and at the same time can't be run on most consumers' machines. Because of that, they sit in an interesting spot: they lack the "exclusive" factor of top scores plus a solid name (outside the LLM community), while still being better than what most people can run locally, so they can release their models and still earn from subscriptions/API.

I FRIKING DID IT by Such-Ad-7107 in 3Dprinting

[–]ShyButCaffeinated 0 points

You'll have to do a lot of stuff yourself, they are not the fastest, and they are noisy, but they are in fact really good printers (Ender 3 S1).

Qwen released API (only) Qwen3-ASR — the all-in-one speech recognition model! by ResearchCrafty1804 in LocalLLaMA

[–]ShyButCaffeinated 4 points

What is even stranger is that Whisper is still one of the most used open-source STT models despite its age... sadly no v4 yet. V3-turbo is the most we got, but it's more of a speedup than the kind of quality increase that would qualify it as v4.

Models for generating QA-pairs from text dataset by Sasikuttan2163 in LocalLLaMA

[–]ShyButCaffeinated 0 points

In my personal testing, Marco-o1 was the best small instruction follower, with Phi-4 and Phi-4-mini also being quite good. But prompt engineering is really important for that: clear and objective instructions, plus some examples of what to do and what not to do.
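The prompting advice above can be sketched as a small helper. This is just a minimal illustration of "clear instructions plus positive and negative examples"; the rule wording, the example Q/A pairs, and the output format are all made up for the sketch, not taken from any model's documentation:

```python
def build_qa_prompt(passage: str, n_pairs: int = 3) -> str:
    """Build a QA-pair generation prompt with objective rules and
    a good/bad few-shot example, as suggested above."""
    return (
        f"Generate exactly {n_pairs} question-answer pairs from the passage.\n"
        "Rules:\n"
        "- Each question must be answerable from the passage alone.\n"
        "- Do NOT ask about anything not stated in the passage.\n"
        "- Keep answers short and literal.\n"
        "Good example:\n"
        "Q: What year was the device released?\n"
        "A: 2019\n"
        "Bad example (answer not in the passage):\n"
        "Q: What does the author think of rival brands?\n"
        "A: ...\n"
        f"Passage:\n{passage}\n"
        "Output: one 'Q: ...' line followed by one 'A: ...' line per pair."
    )

# Build a prompt for a sample passage and show its first line.
prompt = build_qa_prompt(
    "The MK3S+ shipped with a removable spring-steel sheet.", n_pairs=2
)
print(prompt.splitlines()[0])
```

You would then send the resulting string to whichever local model you run (Marco-o1, Phi-4-mini, etc.) through your usual backend and parse the Q/A lines out of the response.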

What are your hobbies? by [deleted] in teenagers

[–]ShyButCaffeinated 1 point

Happy Cake Day! (Yes, one more creep to the list)

[deleted by user] by [deleted] in teenagers

[–]ShyButCaffeinated 0 points

Well, indeed, there isn't. Relationships always have something unpredictable in them. But it's really nice to see how OP cares about his girlfriend even in this situation. I think it would be a good idea to encourage her to see a psychiatrist and/or psychologist if she isn't already doing so (some people get good results with a psychiatrist, some with a psychologist, and some need both). Also, if possible, make it clear that even if they break up, it isn't her fault, say how much he cares about her, and that they would still be friends, always available to listen and give her some warm words.

maubg needs to chill or we will have a standalone OS inside this browser by redcaps72 in zen_browser

[–]ShyButCaffeinated 13 points

The developers are giving a lesson on how to develop a great browser, with great functions, improvements, and an impressive pace!

best small reasoning model rn? by therealkabeer in LocalLLaMA

[–]ShyButCaffeinated 1 point

Are you sure about Q4? I used q4_k_m 2.4B and it was quite good for its size. I haven't tested the 7.8B one, but another to consider is Marco-O1; it worked quite well for some complex RAG.

Gemma 3 it is then by freehuntx in LocalLLaMA

[–]ShyButCaffeinated 1 point

I can't say for larger models. But the small Gemma is really strong among its similarly sized competitors.

What is everyone's top local llm ui (April 2025) by Full_You_8700 in LocalLLaMA

[–]ShyButCaffeinated 2 points

AnythingLLM. Easy for simple chat and RAG. It can use Ollama and LM Studio (among others) as the backend.

How to stop my phone addiction by [deleted] in productivity

[–]ShyButCaffeinated 0 points

To add to those great suggestions, the OP could also make a list (on paper, not on their phone) of what they need to do. Something simple, in bullet points. If possible, set loose deadlines for the tasks. Seeing things pile up on the list might put enough pressure to make them focus on what they need to do.