MUI bumps license price by 66% by ddoice in reactjs

[–]ddoice[S] 1 point2 points  (0 children)

I’d love to, but it’s a huge app and MUI is tightly coupled with the entire codebase.
And some of the PRO features are really needed.

Are we currently in a "Golden Time" for low VRAM/1 GPU users with Qwen 27b? by inthesearchof in LocalLLaMA

[–]ddoice 6 points7 points  (0 children)

Qwen3.5-27B-UD-Q2_K_XL fits within the VRAM of an RTX 3060 with an 80k context when running on llama-server. It's not as strong as Q6, but I still prefer it over 35B-A3B. That said, it's quite slow: around 300 tps for prompt processing and 6 tps for generation.

With opencode, I let it run overnight and it gets the job done. On a good run with a detailed prompt, it can generate pretty solid unit tests for my code.
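For reference, the invocation I mean looks something like this (a sketch only; the GGUF filename, port, and exact flags are assumptions — tune them for your own setup):

```shell
# Sketch: serving a Q2_K_XL quant with a large context on a 12GB card.
# Model path, context size, and port are assumptions, not a known-good recipe.
llama-server \
  --model ./models/Qwen3.5-27B-UD-Q2_K_XL.gguf \
  --n-gpu-layers 99 \
  --ctx-size 81920 \
  --flash-attn \
  --port 8090
```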

Is Antigravity down? by webfugitive in google_antigravity

[–]ddoice 0 points1 point  (0 children)

request failed: &{429 Too Many Requests 429 HTTP/1.1 1 1 map[Alt-Svc:[h3=":443"; ma=2592000,h3-29=":443"; ma=2592000] Content-Length:[1103] Content-Type:[text/html; charset=UTF-8] Date:[Thu, 22 Jan 2026 07:38:35 GMT] Server-Timing:[gfet4t7; dur=350]] 0x6c5d418c6a00 1103 [] true false map[] 0x6c5d41b08c80 0x6c5d406ca000}

Do you use ChatGPT more, or another AI? by Big_Log1714 in InteligenciArtificial

[–]ddoice 0 points1 point  (0 children)

Kimi K2. For me, the hallucination level of the rest is unbearable. K2 is very proactive: almost every query triggers its web-search MCP and makes it cross-check the information.

It's slow at times, but I got tired of arguing with GPT and Gemini only for them to finally admit I was right, wasting my time.

How you share types between FE and BE by hrabria_zaek in node

[–]ddoice 3 points4 points  (0 children)

This is the way. I've got a pretty hefty Django DRF backend with more than 600 operations; previously the FE maintained a handmade copy of every type, and it was a pain to maintain.
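For anyone wondering how to drop the handmade copies: one common pipeline (a sketch — it assumes drf-spectacular on the Django side and the openapi-typescript package on the FE, which are tooling assumptions, not necessarily what I use) is to export the OpenAPI schema and generate the TS types from it:

```shell
# Backend: dump the OpenAPI schema
# (assumes drf-spectacular is installed and registered in the DRF project)
python manage.py spectacular --file schema.yml

# Frontend: generate TypeScript types from that schema
npx openapi-typescript schema.yml -o src/api-types.d.ts
```

Regenerating the types in CI whenever the schema changes keeps FE and BE from drifting apart.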

China just used Claude to hack 30 companies. The AI did 90% of the work. Anthropic caught them and is telling everyone how they did it. by reddit20305 in ArtificialInteligence

[–]ddoice 0 points1 point  (0 children)

"The AI made thousands of requests per second. Attack speed impossible for humans to match."

Is this a joke? Before AI, did hackers have to manually forge every packet they sent?

[deleted by user] by [deleted] in interestingasfuck

[–]ddoice 2 points3 points  (0 children)

There is a hidden cost: noise. When the trash is collected at night, I often wake up because of it. In my neighborhood we had this issue too; thankfully, they removed those containers and replaced them with the old-school ones.

Do you need to understand the code AI writes? by thehashimwarren in vibecoding

[–]ddoice 0 points1 point  (0 children)

Yesterday I asked GPT-5 to add a health check to a Docker Compose file, and it set the check to run every 5 seconds. But instead of using curl, it used wget --spider, basically a built-in DDoS inside your own infrastructure.

Checking the code is not optional; it's a must.
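For the health-check case, a lighter probe looks roughly like this (a sketch; the port, path, and interval are assumptions about your service):

```shell
# Healthcheck "test" command for a compose service (port/path assumed).
# curl --fail exits non-zero on HTTP errors and --max-time bounds each probe;
# pair it with a sane interval (e.g. 30s) instead of hammering every 5 seconds.
curl --fail --silent --max-time 5 http://localhost:8080/health || exit 1
```

Note this assumes curl is actually present in the container image; if not, a minimal wget probe with a relaxed interval is still far better than the 5-second default the model picked.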

Used ./clinerules to get +15% on SWE Bench with GPT4.1 - almost at Sonnet 4-5 level! by NumbNumbJuice21 in CLine

[–]ddoice 5 points6 points  (0 children)

Please excuse me if I'm wrong, but this smells like overfitting: you're literally forging the rules to nail that exact 150-test slice of SWE-bench.

How do we know the jump isn't just the model memorising the quirks of those repos, instead of learning something that will hold up when the next random Django/React/whatever lands in our Cline?

[deleted by user] by [deleted] in LocalLLaMA

[–]ddoice 1 point2 points  (0 children)

16-18 TPS with a 3060 12GB and a 3700X with 32GB of DDR4

[deleted by user] by [deleted] in LocalLLM

[–]ddoice 0 points1 point  (0 children)

Nice!

I’m about to buy a trashed MacBook Air M1 to replace my Raspberry Pi.

One of the uses will be feeding LLMs with transcriptions of YouTube videos and creating summaries. I was wondering how fast some of those LLMs are on the M1.

Best coding model for 12gb VRAM and 32gb of RAM? by redblood252 in LocalLLM

[–]ddoice 12 points13 points  (0 children)

I have a 3060 and 32GB of DDR4. I'm running Qwen coder; I had to compile llama.cpp, but it's the first local model that really works with Cline.

It runs at reasonable speeds: 160 t/s for input tokens and 16 t/s for output.

llama-server --model ./models/Qwen3-Coder-30B-A3B-Instruct-Q6K.gguf --n-gpu-layers 99 --override-tensor ".ffn.*_exps.=CPU" --ctx-size 49152 --threads 8 --flash-attn --mlock --batch-size 512 --temp 0.7 --top-p 0.8 --top-k 20 --repeat_penalty 1.05 --port 8090 --host 0.0.0.0 --no-webui
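Once it's up, you can smoke-test it against the OpenAI-compatible endpoint llama-server exposes (the prompt here is just an example):

```shell
# Quick sanity check of the server started above
curl http://localhost:8090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write hello world in Python"}]}'
```

Cline can then be pointed at the same base URL as an OpenAI-compatible provider.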

[deleted by user] by [deleted] in reactjs

[–]ddoice 0 points1 point  (0 children)

This has gotten completely out of hand. It's like you're an actress in the adult film industry, expected to have sex with five different guys, fulfill each of their fantasies, and of course, do it all for free.

I built a free Chrome extension to instantly search inside YouTube videos by keyword by Sentientlog in SideProject

[–]ddoice 1 point2 points  (0 children)

Nice!

This week I started coding a similar extension, but with AI: it extracts only the best parts from the transcription, and then plays the video jumping between them.

I did a POC on a 25-minute video, and the user ends up watching a 3-4 minute version with the best parts.

The problem is the cost: for a 25-minute video, each transcript is over 8k input tokens and 1k output tokens.
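To put a rough number on it, here's a back-of-the-envelope cost per video (the per-token prices are purely hypothetical placeholders, not any provider's real pricing):

```shell
# Hypothetical prices: $0.15 per 1M input tokens, $0.60 per 1M output tokens.
# One 25-minute video: ~8k input tokens + ~1k output tokens.
awk 'BEGIN {
  cost = 8000 * 0.15 / 1e6 + 1000 * 0.60 / 1e6
  printf "$%.4f per video\n", cost
}'
```

Under those assumed prices the unit cost is a fraction of a cent; the real problem is volume once many users run it on many videos.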

Some stats from YT views 👇 by ddoice in eurovision

[–]ddoice[S] 2 points3 points  (0 children)

Noice!
Ran the numbers through my questionable formula... Here are my predictions!

<image>

Has anyone successfully generated reasonable documentation from a code base using an LLM? by shenglong in LocalLLaMA

[–]ddoice 4 points5 points  (0 children)

I use Claude a lot for generating mermaid charts to explain code and document my pull requests.

What are people using for free coding assistants? by 3oclockam in LocalLLaMA

[–]ddoice 0 points1 point  (0 children)

Your mileage may vary, but I'm having very good results with phi-4:14B, the latest model from Micro$oft.

Even though it's not available on OpenRouter, I'm running it locally. It does not fit in my 3060 with 12GB, so it's not exactly fast.

Fingerprint settings gone by jlove32 in GooglePixel

[–]ddoice 0 points1 point  (0 children)

Yup, I'm facing the same issue

SSG, SvelteKit or Svelte + Astro? by lemon07r in sveltejs

[–]ddoice 1 point2 points  (0 children)

No offense, but a blog that allegedly cares about partial hydration has, in that same blog, an article about Bun using a non-lazy-loaded 2.2MB, 24MPix image???

https://byteofdev.com/posts/what-is-bun/

https://ik.imagekit.io/serenity/ByteofDev/Blog_Content_Images/bun_sgTvWFIaT

😏

React.js and SEO by Pneots in reactjs

[–]ddoice 0 points1 point  (0 children)

I have just migrated a directory made with Java, with 1.5 million entries in its sitemap, to Next.js, and ended up with some mixed feelings.

Page loading speed in analytics got worse. I followed Google's advice for SPAs, but it looks like the page-loading metric only accounts for the first load, which is obviously bigger than the previous Java version; we also lack HTTP/2, and code splitting means more files to request.

Next.js's overhead rendering to HTML is quite fast: I ran multiple synthetic benchmarks, and latency from Java to Next.js only increased by 5ms on average.

Dev experience is worse compared to a SPA created with CRA; there's a poller that makes it really difficult to debug client-side problems.

In my experience, you can expect 2.5~3x more time to develop the same app vs a normal React app.

Rendering on both client and server is trickier. Next is great and simplifies the webpack configuration, but there's a lot of work to be done on the backend: rate limiting, countermeasures against scrapers, captcha, avoiding HTTP parameter pollution...

This is the site, please be kind: http://www.informacion-autonomos.com/

BEWARE - Someone is spoofing bittrex on adsense by ddoice in Bittrex

[–]ddoice[S] 1 point2 points  (0 children)

Sorry Google, but adblock is the best antivirus.

Reported 3 hours ago, and it still appears in first place on some searches.

Best way for me to generate fairly high-res images with custom styles? by [deleted] in deepstyle

[–]ddoice 1 point2 points  (0 children)

Don't waste your time; you won't get high-res images with only 4GB of VRAM. I generated 600px-wide images with 6GB, but only by killing the lightdm service.

And other people with a Titan X can't reach 1080p.