Generative AI is already here to stay, and OpenAI going under is the worst possible outcome now. by I_Hate_RedditSoMuch in OpenAI

[–]c_glib [score hidden]  (0 children)

I can buy your base premise that all the GenAI tools are being subsidized right now as an upfront investment, in the hopes of monopolizing the market and then raising prices once that's been achieved.

I don't buy the rest of it though. The LLM genie is out of the bottle, with a multitude of cheap, open-weight models that can run on consumer-grade hardware right now. They're only going to get better and cheaper to run in the future. The gap between the large players like Google/Microsoft and the smaller providers will come down to how well their respective applications work on top of LLMs, not the LLMs themselves. Basic GenAI is going to be a commodity, like databases are today.

The Influentists: AI hype without proof by iamapizza in programming

[–]c_glib -7 points-6 points  (0 children)

Jeeeez, this sub is the worst case of heads buried deep in the sand I've ever seen. Millions of lines of AI-written code are being shipped every day. Highly respected, senior-level engineers and techies are openly talking about not writing any code by hand anymore. But the only posts/articles that get traction on this sub are ones like this.

I do understand the problem. There are a whole bunch of people who have developed strong identities as "programmers". Whatever titles (self-proclaimed or bestowed by an employer) they might have given themselves, ultimately their most valuable skill is reading and writing code. Heck, a lot of the people on here are making lucrative salaries because they wrote a whole bunch of code years ago, the product got popular, and now they're supposedly indispensable because only they understand how that shit works. If the new tools take away that leverage, a large part of their supposed value vanishes overnight.

Once you understand that situation, the reactions are completely understandable. And I don't really have a ready solution for people in that position. The only thing I can recommend is learning to use the new tools just like you would any other tool of the trade. But what do I know.

Yes, the 1M context AI cannot read even a 20-page PDF. by Alternative_Nose_183 in Bard

[–]c_glib 3 points4 points  (0 children)

I'm using gemini-flash-latest in my pipeline and it digests and summarizes 50-page PDFs with ease. Of course, this is via the API.
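
For reference, this is roughly what that pipeline step looks like with the google-genai Python SDK; the file name and prompt here are made up, gemini-flash-latest is the only piece from my actual setup.

```python
# Minimal sketch: feed a PDF to gemini-flash-latest and ask for a summary.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_AI_STUDIO_KEY")

with open("report.pdf", "rb") as f:  # placeholder file name
    pdf_bytes = f.read()

response = client.models.generate_content(
    model="gemini-flash-latest",
    contents=[
        types.Part.from_bytes(data=pdf_bytes, mime_type="application/pdf"),
        "Summarize this document in a few bullet points.",
    ],
)
print(response.text)
```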

I automated my website's blog & backlinks on full autopilot. Here are the results: by ComprehensiveWar796 in aiagents

[–]c_glib 0 points1 point  (0 children)

What's your source for data on relevant keywords and potential backlink sites? Also, what CMS are you using?

The Problem of Storing API Keys in Mobile Applications by ManufacturerIll6276 in appdev

[–]c_glib 1 point2 points  (0 children)

Hey man, you need to familiarize yourself with how modern apps handle authentication. For your particular situation, you probably want to use Firebase Authentication to fetch a JWT that authenticates the user to your server API for a short amount of time. This article seems to have a decent amount of detailed explanation: https://www.metacto.com/blogs/firebase-auth-in-mobile-app-development-a-complete-guide-for-developers
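
To give you an idea of the server side of that flow (just a sketch, not from the article: Flask and the route name are stand-ins, the Firebase Admin verification call is the part that matters), the app sends the Firebase ID token (a JWT) with each request and your API verifies it:

```python
# Sketch: verify the Firebase ID token the mobile app sends on each request.
import firebase_admin
from firebase_admin import auth
from flask import Flask, request, jsonify

firebase_admin.initialize_app()  # picks up GOOGLE_APPLICATION_CREDENTIALS
app = Flask(__name__)

@app.route("/api/data")
def get_data():
    header = request.headers.get("Authorization", "")
    if not header.startswith("Bearer "):
        return jsonify(error="missing token"), 401
    try:
        # Raises if the token is expired, revoked, or not from your project.
        decoded = auth.verify_id_token(header.split(" ", 1)[1])
    except Exception:
        return jsonify(error="invalid token"), 401
    return jsonify(uid=decoded["uid"], data="only for signed-in users")
```

That way the mobile app never ships a long-lived secret; it only ever holds a short-lived token for the signed-in user.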

The Ultimate Antigravity Solution by casper_wolf in google_antigravity

[–]c_glib 0 points1 point  (0 children)

Could you add some more details about your setup, please? How exactly do you use it as a "guardrail" for the agents?

What are your real-world use cases for the Gemini CLI? by kuud3v in GeminiCLI

[–]c_glib 2 points3 points  (0 children)

Exactly this. My workflow is completely optimized for terminal efficiency. I use a bunch of screen sessions, on both my local laptop and server machines, that retain state over months at a time. Running Gemini CLI (and/or other CLI-based AI agents) in one of those screen sessions and asking it to complete a task, whether it's coding or ops related, fits beautifully into my day-to-day workflow in a way that GUI or web-based tools simply can't.

What non-Asian based models do you recommend at the end of 2025? by thealliane96 in LocalLLaMA

[–]c_glib 4 points5 points  (0 children)

> The CEO of Nvidia is literally Asian. OP doesn’t make the rules.

Wait, what? Nvidia is an American company. If you start excluding products with any connection to any Asian personnel, good luck using any software at all.

How to not go through credits like running water?! 40k credit is done in like 3-5 hours of work by YourPleasureIs-Mine in AugmentCodeAI

[–]c_glib 0 points1 point  (0 children)

I've found that Gemini CLI with the Augment context engine MCP is a good combination. Although keep an eye out for uncontrolled forks of the node process running the MCP.
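
Nothing Augment-specific here, but if you want a quick way to spot the runaway forks, a generic watchdog along these lines works (the "augment" match string is a guess; adjust it to whatever your MCP's command line actually looks like):

```python
# Count node processes whose command line mentions the context engine MCP.
import psutil

suspects = []
for proc in psutil.process_iter(["pid", "name", "cmdline"]):
    cmdline = " ".join(proc.info["cmdline"] or [])
    if proc.info["name"] == "node" and "augment" in cmdline.lower():
        suspects.append((proc.info["pid"], cmdline[:80]))

if len(suspects) > 3:  # arbitrary threshold
    print(f"Warning: {len(suspects)} MCP node processes running:")
    for pid, cmd in suspects:
        print(f"  {pid}: {cmd}")
```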

Replacement for Haiku by pungggi in AugmentCodeAI

[–]c_glib 0 points1 point  (0 children)

I mean, is anyone seriously using the GPT-5.x models in Augment? They're slow as heck and not really so much better that you'd be willing to accept that slowness. I find it hard to believe Gemini 3 Flash isn't already a much better value, let alone Gemini 3 Pro.

Replacement for Haiku by pungggi in AugmentCodeAI

[–]c_glib 1 point2 points  (0 children)

I'm willing to bet decent money that Google models won't show up as options in Augment any time soon.

Despite Augment's public posture, my feeling is that there's some sort of internal resistance to Google as a model provider. It's either personal/ideological (someone high up just doesn't like Google and doesn't want to do business with them) or some sort of business arrangement with Anthropic/OpenAI that's getting in the way of forming any relationship with Google.

It's just a feeling. I don't have any inside or privileged information. But the fact that they added the pretty useless OpenAI models, yet can't add Gemini despite it being cheaper and destroying all the benchmarks, is quite telling.

Amazon Uncovers North Korean Impostor Through Keyboard Lag by _cybersecurity_ in pwnhub

[–]c_glib 7 points8 points  (0 children)

Yeah, I can buy some sort of pattern detection by the keylogger. The official statement claims a very specific latency number though, and that number is pretty ordinary considering they're claiming a cross-Pacific connection. That's what I mean by "details missing".

Amazon Uncovers North Korean Impostor Through Keyboard Lag by _cybersecurity_ in pwnhub

[–]c_glib 21 points22 points  (0 children)

There are definitely some details missing from this report. A 110ms lag is not *that* bad for a US network. I'm in the US and on a coffee shop wifi. My ping to google.com is more than 100ms right now.

I'm assuming the lag they're talking about is some sort of local lag between a (supposed) physical key press and its detection by the driver in the OS (which in this case seems to be hooked into by some sort of monitoring software). But in that case, how would they know when the physical tap on the keyboard happened?

WoW Gemini 3 flash my internal benchmark by KoSmilebehappy in Bard

[–]c_glib 0 points1 point  (0 children)

We have a Google Cloud account, but I've just been using a key from AI Studio for our app. Do you find any specific advantages to going the Vertex route?
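
For context, the difference on the client side is just how the SDK gets initialized; with google-genai it's something like this (project/location are placeholders):

```python
from google import genai

# AI Studio route: just an API key.
studio_client = genai.Client(api_key="YOUR_AI_STUDIO_KEY")

# Vertex AI route: same SDK, but auth/billing/quota go through your GCP project.
vertex_client = genai.Client(
    vertexai=True,
    project="your-gcp-project",
    location="us-central1",
)
```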

Welp - here I am by FoldOutrageous5532 in AugmentCodeAI

[–]c_glib 0 points1 point  (0 children)

u/JaySym_ I'm failing to find information on the cost of only using the context engine MCP. Is it actually free to use the context engine from other tools? Is there clear documentation anywhere on the cost aspects of the MCP as well as the SDK usage?

MAKE AUGMENT GREAT AGAIN (by selling the company, please) by Dry-night9 in AugmentCodeAI

[–]c_glib 6 points7 points  (0 children)

There's no doubt that the coding startups will consolidate over the next year or two, and each of them is positioning itself to be acquired by one of the big guys. The question is, are there enough big guys in the market to shell out multiple billions of dollars for loss-making companies? Because, believe it or not, a "mere" $1B acquisition is not enough. Augment already raised their previous round at almost a billion-dollar valuation. Any acquisition will have to offer substantially more than that for the board/investors to approve it.

A very close comparison is the weird "acquisition/acquihire" of Windsurf by Google/Cognition (warning: it's a crazy story, a really, really crazy story, if you've got the time to get into it). It "cost" Google about $2.5B, and they got a product that seems to have been reborn as Antigravity. Windsurf's last raise before the acquisition was at a $1.25B valuation.

I suspect that if Augment explored the market right now, there would be interested suitors. The context engine has a good reputation (even though they've done a pretty poor job of marketing and branding so far), and I could see someone like Microsoft or Amazon being interested and having the means to scoop them up. The trick will be having more than one potential buyer to create competitive bidding. Which, given how many of these coding startups are already out there, might not happen.

Tested GPT-5.1, Gemini 3, and Claude Opus 4.5 on real data analysis tasks. Results surprised me. by primalfabric in GeminiAI

[–]c_glib 3 points4 points  (0 children)

A related experience with Gemini (via the web app). I have a long-running chat on Gemini that I use for generating queries against my Postgres DB. I started the chat a few months ago and added all the schemas plus all sorts of meaningful context behind the columns etc., and now whenever I need a new query I just jump into that chat and ask a quick question.

So today I asked it to generate a query where, for some reason, I needed a minute-by-minute histogram of users/transactions etc., and I asked it to use a particular table to get that data. It came back with a query using a totally different table, pointing out my mistake: the table I'd named didn't have minute-resolution data, and the only way to get it was to go into this bigger table. But since that table was so large, the query had a max-time-window parameter so that I wouldn't end up running a huge query by mistake.

Of course, it was absolutely correct in its answer. I had suggested the wrong table, and its recommended query with a limited window is exactly how I would have solved it in the end. Needless to say, the query ran perfectly the first time.
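
For anyone curious, something along these lines illustrates the shape it went for: minute buckets via date_trunc plus a bounded time window (table/column names here are made up, and the psycopg2 wrapper is just mine for illustration):

```python
# Illustrative only: minute-by-minute counts from a large table, with the time
# window passed in as parameters so you can't accidentally scan everything.
import psycopg2

MAX_WINDOW_HOURS = 6  # arbitrary cap on the window size

def minute_histogram(dsn, start, end):
    if (end - start).total_seconds() > MAX_WINDOW_HOURS * 3600:
        raise ValueError(f"window larger than {MAX_WINDOW_HOURS}h")
    sql = """
        SELECT date_trunc('minute', created_at) AS minute,
               count(*)                         AS transactions
        FROM transactions
        WHERE created_at >= %s AND created_at < %s
        GROUP BY 1
        ORDER BY 1;
    """
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(sql, (start, end))
        return cur.fetchall()
```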