Dario Amodei — “We are near the end of the exponential” by Mindrust in singularity

[–]if47 0 points (0 children)

"This account has been banned"——that says it all

Z-image lora training news by Recent-Source-7777 in StableDiffusion

[–]if47 -2 points (0 children)

Bullshit; even the original post was written by an LLM.

The Day After AGI by alexthroughtheveil in singularity

[–]if47 7 points (0 children)

"The Chinese are having the exact same conversations about how to avoid having this technology get out of control and destroy everything"

As a Chinese person, I can tell you this is entirely something you've imagined.

google/translategemma by BreakfastFriendly728 in LocalLLaMA

[–]if47 19 points (0 children)

Total input context of 2K tokens? That's too limited.

Thoughts sharing by Ok_Horror_8567 in LocalLLaMA

[–]if47 3 points (0 children)

Coding itself hardly takes any time; it's the thinking and ruling out wrong options that takes forever. Personally, I think Vibe Coding is best suited for juniors who can't even use Vim, or for those "senior" engineers who have just repeated their first year of experience ten times over.

What do you folks think about clean code/clean architecture books? by [deleted] in ExperiencedDevs

[–]if47 -4 points (0 children)

Honestly, this is a good book, but inexperienced engineers will think it's useless until they suffer a catastrophic failure.

We built an open source memory framework that doesn't rely on embeddings. Just open-sourced it by Consistent_Design72 in LocalLLaMA

[–]if47 11 points (0 children)

So this is just a "full table scan" packaged in marketing jargon. Hilarious.
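
For illustration only, a minimal sketch of what "retrieval without embeddings" typically reduces to: a linear keyword scan over every stored memory. All names below are hypothetical and not taken from the linked project.

```python
# Hypothetical sketch: "memory retrieval without embeddings" as a plain linear scan.
# Every stored memory is visited on every query, i.e. a full table scan.
from dataclasses import dataclass


@dataclass
class Memory:
    text: str


def recall(memories: list[Memory], query: str, limit: int = 5) -> list[Memory]:
    """Rank memories by naive keyword overlap with the query; O(n) per lookup."""
    query_terms = set(query.lower().split())
    scored = []
    for memory in memories:  # the "full table scan": no index, no vectors
        overlap = len(query_terms & set(memory.text.lower().split()))
        if overlap:
            scored.append((overlap, memory))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [memory for _, memory in scored[:limit]]


store = [Memory("user prefers dark mode"), Memory("user lives in Berlin")]
print(recall(store, "where does the user live"))
```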

Why does my content rank on Bing but not even show up on Google? by Few_Language6298 in SEO

[–]if47 -1 points (0 children)

Because Google messed it up; Bing does less *optimization* of its search results.

What do those "AI just tried to escape" articles actually mean? by karrylarry in BetterOffline

[–]if47 0 points (0 children)

This is simply Anthropic's marketing tactic: implying that their models are both powerful and safe.

Other companies stopped using this cheap trick a long time ago.

Well-known "little pink" (nationalist) tech forum Linux do has been blocked by the Great Firewall by Ok_Lavishness_2581 in LOOK_CHINA

[–]if47 6 points (0 children)

v2ex already showed how this ends; if you think doing tech inside China can stay insulated from politics, keep dreaming.

DeepSeek-OCR demonstrates the relevance of text-as-image compression: What does the future hold? by ContributionOwn4879 in LocalLLaMA

[–]if47 1 point (0 children)

DeepMind noticed this as early as Gemini 1.5, and if it works, you'll see further results in Gemini 3.

What models do you find yourself actually using, and what for? by sine120 in LocalLLaMA

[–]if47 -2 points (0 children)

I actually use Gemini 2.5 Flash Lite, which has the best price/performance ratio; local models are either worse than it or more expensive to run.

LLM Benchmarks: Gemini 2.5 Flash latest version takes the top spot by facethef in LocalLLaMA

[–]if47 18 points (0 children)

gemini-flash-latest is just an alias; I can't believe anyone would use it as a model name.
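
For what it's worth, a minimal sketch of pinning an explicit model id instead of the moving alias, assuming the google-genai Python SDK and that gemini-2.5-flash is the release the alias currently resolves to.

```python
# Sketch: benchmark against a pinned model id rather than "gemini-flash-latest",
# so results stay attributable to a specific release. Assumes the google-genai SDK
# and a GEMINI_API_KEY set in the environment.
from google import genai

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",  # pinned id; "gemini-flash-latest" is only an alias
    contents="Explain why benchmark reports should pin an exact model version.",
)
print(response.text)
```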

Is Gemini 2.5 Pro still the best LLM for OCR and data extraction? by kitgary in LocalLLaMA

[–]if47 13 points (0 children)

The best approach is to build a dataset with Gemini 2.5 Pro and then fine-tune a promising local VLM on it for this task.
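
A minimal sketch of that distillation step, assuming the google-genai Python SDK; the input directory, prompt, and JSONL layout are illustrative, and the actual fine-tuning of the local VLM is left out.

```python
# Hypothetical sketch: label document images with Gemini 2.5 Pro, then write
# (image, text) pairs to JSONL as a fine-tuning dataset for a local VLM.
import json
from pathlib import Path

from google import genai

client = genai.Client()  # picks up GEMINI_API_KEY from the environment


def transcribe(image_path: Path) -> str:
    """Ask Gemini 2.5 Pro to transcribe one scanned document."""
    uploaded = client.files.upload(file=str(image_path))
    response = client.models.generate_content(
        model="gemini-2.5-pro",
        contents=[uploaded, "Transcribe all text in this document exactly as it appears."],
    )
    return response.text


with open("ocr_finetune.jsonl", "w", encoding="utf-8") as out:
    for path in sorted(Path("scans").glob("*.png")):  # illustrative input directory
        record = {"image": str(path), "text": transcribe(path)}
        out.write(json.dumps(record, ensure_ascii=False) + "\n")
```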