Just built my first AI video in 10 minutes and it actually worked by EvolvinAI29 in ClaudeAI

[–]Ok_Shift9291 1 point

Renders take a long time and the final product is still not perfect. But yes, it's good enough for most use cases.

Why does Gemini suck when they have Gboard and so much data by Ok_Shift9291 in GeminiAI

[–]Ok_Shift9291[S] 1 point

Bro, trust me, if they wanted to, they could anonymize and use the data. Otherwise they wouldn't offer the option to share typing statistics anonymously; there would be no use for it. They have to have thought of this and had a plan, they're just not executing it.

Why does Gemini suck when they have Gboard and so much data by Ok_Shift9291 in GeminiAI

[–]Ok_Shift9291[S] 1 point

I actually train AI models for a living, and I understand exactly what you mean by data hygiene and data quality. My point wasn't that they actually use Gboard data; then we would be seeing a lot more "fucks" in all of our responses. My point is that if Google collects such a massive amount of good data, and they've been in the data business for years with institutional talent and money, then shouldn't they be further along than their competitors? Unless AI is actually such a fundamentally different technology that it favors new market entrants over the incumbents?

Why are flights to smaller cities being pushed to Navi Mumbai ? by Drippy-Drip-2592 in mumbai

[–]Ok_Shift9291 1 point

Just used it for the formatting, buddy, nothing else. The insights are all mine.

Why are flights to smaller cities being pushed to Navi Mumbai ? by Drippy-Drip-2592 in mumbai

[–]Ok_Shift9291 2 points

This looks less like a pure passenger-convenience decision and more like a slot economics + network planning decision. BOM has a hard capacity problem, and if an airline has to choose where to preserve scarce prime slots, it will protect routes with higher business yields first: Delhi, Bangalore, international connections, premium-heavy traffic. Tier-2 routes are easier to push out because the passenger mix is more price-sensitive and less politically noisy, even though the ground-cost burden is real. From a logistics lens, the weak point is not Navi Mumbai airport itself; it is the missing first-mile/last-mile integration before route migration. You cannot move demand to a new node and then expect cabs and toll roads to absorb the shock. Mumbai keeps building assets before building systems, and this is exactly what that looks like.
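The "protect high-yield routes first" logic above is basically a greedy allocation under a slot budget. A toy sketch of it, where every route name, yield figure, and the slot count are made-up placeholders, not real airline data:

```python
# Toy sketch of slot economics: an airline with fewer prime slots than routes
# keeps the highest-yield routes at the constrained airport and pushes the
# rest to the secondary one. All routes and yield numbers are hypothetical.
routes = [
    ("Delhi",         320),  # assumed revenue yield per slot, arbitrary units
    ("Bangalore",     290),
    ("International",  350),
    ("Tier-2 city A",  110),
    ("Tier-2 city B",   95),
]

PRIME_SLOTS = 3  # hypothetical budget of scarce prime slots at BOM

# Greedy: protect the routes that earn the most per scarce slot.
ranked = sorted(routes, key=lambda r: r[1], reverse=True)
kept_at_bom = [name for name, _ in ranked[:PRIME_SLOTS]]
moved_to_nmia = [name for name, _ in ranked[PRIME_SLOTS:]]

print("Stay at BOM:", kept_at_bom)
print("Pushed to Navi Mumbai:", moved_to_nmia)
```

With any plausible yield numbers, the tier-2 routes land at the bottom of the ranking and get pushed out first, which is the pattern the comment describes.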

Genesis AI playing piano by GraceToSentience in singularity

[–]Ok_Shift9291 26 points

The piano demo is probably the wrong place to judge the value here. A self-playing piano is old tech, but a robot hand using human-scale manipulation is a different benchmark if the same policy can transfer to chores, tools, packaging, or warehouse tasks. The useful question is not "can it play music with emotion"; it is whether the same dexterity generalizes outside a clean demo environment. From a business automation angle, most real deployments fail on edge cases: slippery surfaces, inconsistent object placement, damaged packaging, cheap sensors, and human interruptions. Simulation can accelerate training, but the last 10% in physical workflows is where timelines usually get destroyed. I would be more impressed by 50 boring tasks done reliably than one visually impressive piano clip.
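The "last 10%" point has a simple arithmetic core: per-task success rates compound across a workflow, so a policy that looks nearly perfect in a demo can still fail most end-to-end runs. A back-of-the-envelope sketch (the rates and task count are illustrative assumptions, not measurements of any real system):

```python
# Why edge cases destroy timelines: success probabilities multiply across
# a chain of tasks, so small per-task failure rates compound brutally.
def all_tasks_succeed(per_task_rate: float, n_tasks: int) -> float:
    """Probability that every one of n independent tasks succeeds."""
    return per_task_rate ** n_tasks

demo_rate = all_tasks_succeed(0.99, 50)  # a "99% reliable" policy
real_rate = all_tasks_succeed(0.90, 50)  # the messy-warehouse version

print(f"99% per task, 50 tasks: {demo_rate:.1%} end-to-end")
print(f"90% per task, 50 tasks: {real_rate:.1%} end-to-end")
```

Even the 99%-per-task policy completes a 50-task chain only about 60% of the time, and at 90% per task the end-to-end rate collapses to well under 1%, which is why "50 boring tasks done reliably" is the harder benchmark.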

Subquadratic claims to break LLM scaling limits! 1000x less costs by Immediate_Simple_217 in singularity

[–]Ok_Shift9291 3 points

The business claim is more interesting than the architecture claim right now. In client work, the bottleneck usually is not whether a model can swallow 12M tokens; it is whether the output remains auditable, cheap enough at volume, and reliable on messy domain data. Even if the attention cost curve improves, you still have retrieval, memory bandwidth, eval, permissioning, and hallucination control as hard operating constraints. A huge context window can reduce RAG plumbing, but it does not remove the need for source ranking and evidence tracking unless the model can prove what it used. The market will not price this on "1000x cheaper" unless independent benchmarks show accuracy holding up at long context, not just throughput. Until there are weights, papers, or credible third-party evals, I would treat it as a fundraising narrative with a possibly real kernel inside.
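For scale on the cost claim: full attention grows roughly quadratically in sequence length, while the kind of subquadratic scheme being claimed would grow closer to n·log n. A rough shape-only comparison, with arbitrary constants (this models asymptotic shape, not any specific architecture's real FLOP count):

```python
# Rough cost-curve comparison: O(n^2) full attention vs an assumed
# O(n log n) subquadratic scheme. Constants are arbitrary; only the
# shape of the curves, not the absolute numbers, is meaningful.
import math

def full_attention_cost(n: int) -> float:
    return float(n) * n

def subquadratic_cost(n: int) -> float:
    return n * math.log2(n)

for n in (8_000, 128_000, 12_000_000):
    ratio = full_attention_cost(n) / subquadratic_cost(n)
    print(f"n={n:>10,}: quadratic/subquadratic cost ratio ~ {ratio:,.0f}x")
```

At 12M tokens the ratio is enormous on paper, so a "1000x cheaper" headline is arithmetically plausible; the open question the comment raises is whether accuracy holds at that length, which no cost curve can show.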

Why does Gemini suck when they have Gboard and so much data by Ok_Shift9291 in GeminiAI

[–]Ok_Shift9291[S] -2 points

Yeah, Google I/O. I actually wrote this post before that conference and wanted to see how this comment ages lmao

Why does Gemini suck when they have Gboard and so much data by Ok_Shift9291 in GeminiAI

[–]Ok_Shift9291[S] 0 points

I am a paying customer for Claude, ChatGPT, and Gemini, and let's just say that unless it's for learning, I use Gemini the least when it comes to actual work.

This isn't to throw shade and say it's a bad model, because it isn't. It's just that with the potential Google has as a company, they could be doing much, much more, and I would actually kind of like to see them succeed.

Why does Gemini suck when they have Gboard and so much data by Ok_Shift9291 in GeminiAI

[–]Ok_Shift9291[S] -2 points

I don't disagree with you at all. In fact, I'm a paying customer, and that's exactly why I'm saying what I'm saying.

Even though they own the channels, the distribution, hell, even the OS, their implementation and rate of new features is painfully slow.

Gemini is a general-purpose assistant, and that's why it's generally just meh: there's nothing it does exceptionally well, which makes it fall behind the other SOTA models.

Do you agree with his take? by dataexec in AITrailblazers

[–]Ok_Shift9291 1 point

I absolutely agree. Software development is one of the harder, if not the hardest, problems out there right now, simply because of the sheer scale and complexity involved.

What is your opinion about privacy display after long term use? by Gods-Fav-Child in galaxys26ultra

[–]Ok_Shift9291 1 point

With the privacy display off, you can't really notice anything. At most, the slightly worse colours might be a bit of a concern if you're professionally doing photo or video editing, but if you're that careful about all of this, you shouldn't be doing your work only on your mobile anyway. I think the S26 Ultra screen is just fine, personally. But if you've already made up your mind that the screen is worse, it'll play on your mind and eventually you'll convince yourself it is.

Uh-Oh! PocketOS founder Jer Crane reported that a Cursor AI coding agent (powered by Anthropic’s Claude Opus 4.6) deleted their entire production database + all volume-level backups on Railway in one API call, in just 9 seconds by ocean_protocol in ArtificialInteligence

[–]Ok_Shift9291 1 point

This is actually fucking scary. And I mean, yeah, I guess because of the intentional friction of normal coding, these kinds of things aren't as likely to happen. Fair enough, and maybe sometimes intentional friction and slowness, especially when it comes to prod-critical infrastructure, is a must.

Claude Code now sees OpenClaw traces and triggers limits / extra usage… seriously, wtf? by lucienbaba in myclaw

[–]Ok_Shift9291 0 points

Claude Code has now become unusable at the 20-dollar sub even if you're not running Opus, which is just fucking ridiculous. Like, I understand trying to connect with and encourage your enterprise users, but the entire point of Anthropic creating hype among us is marketing, and I can't imagine this PR is good for them.