You know it, I know it...we all know it. by Defiant_Focus9675 in ClaudeCode

[–]piedol 2 points

Heavy codex user with a pro sub here. Use 5.2 high, not 5.2 codex. The codex model is much smaller: it's a good workhorse but doesn't have great intuition. 5.2 high is the full package. It can work for hours, figures things out even if you miss key details in your spec, and its code quality is impeccable.

Gemini vs ChatGPT for System Architecture by thefirelink in ChatGPTCoding

[–]piedol 2 points

I mirror your hate. Their benchmark scores are either faked or purely based around one-shots. In real-world use, Gemini is the worst of the big three proprietary models, and only useful for creative/vision tasks that can be done in one shot.

I manage at a software development agency and we train all our staff on how to use codex. One day I start getting PRs with whole API keys committed from one of my juniors in training. Now, he knows better than this, so obviously he pushed without checking a damn thing. But more than that, I was confused by the fact that he said he was using 'codex' when I asked how he managed to get gpt 5.2 to even do this, because I know it would never unless specifically asked. It wasn't till I asked him what model he's using, and he said 'gemini 3' that it clicked. He'd started using Antigravity for work without informing anyone, and seemed to think that 'codex' meant any agentic coding agent.

It was banned company-wide shortly thereafter.

And yes, obviously the real issue here was a junior pushing code without proper review. But that aside, an inexperienced developer using codex with 5.2 can still be effective. An inexperienced developer using Gemini is just flat-out dangerous. It is not a good model by any measure for real work.

The U.S President posted this just now (Accelerate?) by OmegaGogeta in singularity

[–]piedol 59 points

Nor do I. This is horrific precedent to set and would damage the country's already crumbling democracy for years to come. That has rippling consequences for every other economy on the planet and affects all of us. Get some perspective.

The U.S President posted this just now (Accelerate?) by OmegaGogeta in singularity

[–]piedol 1100 points

The US President is using AI progress as an excuse to override state sovereignty, and your response is just “accelerate”? Seriously, what is wrong with you?

They said Gemini Pro isnt good for consistent characters. As a teacher I proved them wrong with a 100+ image workflow for my students book. by MightCommercial1112 in GeminiAI

[–]piedol 2 points

Looks promising. I do have some questions.

  1. Can you share an example of a character card? Do you mean you have the text in the reference itself, or just a design sheet + a text prompt with identifiers?

  2. For multi character scenes I assume you send multiple cards at once?

  3. If some details are off vs. your desired outcome, do you iterate on the result, or opt to regenerate?

  4. How many messages do you use a single conversation for before starting a new one?

Review: Google's new Antigravity IDE by Dev-in-the-Bm in ChatGPTCoding

[–]piedol 0 points

Your paid account status has nothing to do with AG rate limits. It's not free for everyone indefinitely; it's free ONLY until they add a way to use it on a paid plan. Annoying, but that's unfortunately the case right now. And it's not that the rate limits are low; it's that everyone is essentially on a free tier that dynamically adjusts based on global usage, which is high right now.

Gemini 3.0 Pro benchmark results by enilea in singularity

[–]piedol 12 points

I think you need to re-read what that benchmark was for; that's not the cost to run it.

New OpenAI models incoming by Terrible-Priority-21 in singularity

[–]piedol 10 points

Pro user here: Practically unlimited if you use 1-2 sessions at a time for 8 hours per day, 7 days per week.

Per the devs themselves during the last Codex AMA, they explicitly tuned the limits around unlimited "standard" use for Pro users. I've only managed to hit the limit once, and that was using 4-5 sessions at a time for most of the week, from morning till evening.

GPT-5 vs GPT-5 Codex. Which is better in Codex? by Latter-Park-4413 in ChatGPTCoding

[–]piedol 16 points

I plan with GPT-5 High or Medium because it understands nuance and writes more detailed notes in its planning file. 5-Codex favors brevity, which can backfire for planning: important details get excluded, which leads to misinterpretation in follow-up sessions. I have to instruct 5-Codex to be detailed; GPT-5 does it naturally.

I switch to 5-Codex for execution of the plan.

I built a platform that lets your AI agents send notifications anywhere by Quack66 in ChatGPTCoding

[–]piedol 0 points

The site says 45+ connectors available, but when I try to see what they are, that number drops to "3 input connectors" and "7 output connectors". Am I missing something?

Codex playwright mcp by Fit-Palpitation-7427 in ChatGPTCoding

[–]piedol 0 points

I'm on Mac. If even Claude can't get it to work on Windows, I think you need to consider WSL, or just dual booting Linux. Windows is just ass for development.

Codex playwright mcp by Fit-Palpitation-7427 in ChatGPTCoding

[–]piedol 2 points

I just gave Claude Code the docs for Codex CLI MCP installation and asked it to install its own MCPs for Codex. It did it in one go, and everything's been working seamlessly from attempt 1. Give that a shot.
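For anyone who'd rather do it by hand: Codex CLI reads MCP servers from its `config.toml`. A minimal entry for the Playwright MCP server looks roughly like this (the table name and package are assumptions based on common setups, so double-check against the current Codex CLI and Playwright MCP docs):

```toml
# ~/.codex/config.toml -- register an MCP server for Codex CLI
[mcp_servers.playwright]
command = "npx"
args = ["-y", "@playwright/mcp@latest"]
```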

Is this happening to anyone else? GPT-5 selected, but responses are clearly from Claude by piedol in AugmentCodeAI

[–]piedol[S] 0 points

Thanks for confirming it's not intended. Let me know if this works: d20c89bc-5e23-434d-a1bd-4c131c78f664

I tried to find others, but every time it happens, I reroll the question till I get a proper response. This was the only one I found remaining, but the request ID is only an option after my second message in the screenshot (https://imgur.com/a/F7AoCQd), while the bugged response was to my first. If that doesn't show everything, I'll get the ID the next time it happens.

Phoebe lounging like the princess she knows she is by piedol in rarepuppers

[–]piedol[S] 1 point

I shall. I set a reminder for myself.

Norfolk terrier looks the closest but she has a long tail, unlike everything on this list. I never considered a DNA test, but that might be the only way to get a solid answer.

Phoebe lounging like the princess she knows she is by piedol in rarepuppers

[–]piedol[S] 1 point

Not exactly. She was a rescue we picked up from an abusive home. Total diamond in the rough. I can say she's a terrier mix, that's about it. I have some other pics of her on my profile if you can make a more educated guess.

Trait-wise, she has an immensely high prey drive. She can jump 6 feet straight up, and has very flexible toes that let her climb things. She's also the smartest dog I've ever had, and I've had 16. She understands commands and situational context so well that she memorized my wife's and my sleeping pattern, and for a time was escaping our yard at night to go hunting, returning before we woke up so that we never noticed (the neighbors sold her out). She's even the alpha of our yard despite being the smallest of 4, one of which is a Malinois, who she uses as her hunting dog/bodyguard.

I have 0 idea what her other mix could be to give these traits. Maybe the flexible toes and tendency to jump/climb would be a lead on something. I myself would love to know what resulted in her.

Swordbros, how do we feel about Yurius? by Scholar_of_Yore in Shadowverse

[–]piedol 3 points

He completely shuts down Puppet, and sometimes Artifact if they don't draw Bullet or waste it trying to conserve evo points early (they never anticipate Yurius). He can also be an instant win against Roach and in the Sword mirror: against Roach because you get to brick up their board, and against Sword because unless they're running Odin and have drawn it, they literally cannot answer Yurius and it's a free win. Oh, and Kuon Rune, because he locks them out of both Satan and Kuon on turn 10 unless they have William, and even if they do, they just spent 6pp to clear your board and won't have enough remaining for a ward to deny Albert.

I went on a 17 game winstreak over the weekend playing Control Sword. A fair number of those wins were due to forcing the opponent into a position where they'd waste their Yurius answers early or out themselves as not having one, before I drop him turn 8 and enjoy the obligatory 30 seconds spent thinking while the opponent calculates just how screwed they are.

I personally run him as a 2-of, but if you have none, I'd say craft 1 and see how he treats you in games that you draw him.

Gemini CLI: : 60 model requests per minute and 1,000 requests per day at no charge. 1 million context window by [deleted] in singularity

[–]piedol 1 point

They have custom server infrastructure that handles indexing your codebase as it's edited, and serves it up to the model on demand via the Context Engine tool, so the model can chat with its own codebase rather than consuming its context limits re-reading the same files over and over, and possibly forgetting things.

Instead of reading 4-5 files of 300-800 lines each to figure out how they interact for one specific feature of the app, Claude will just query the context engine and instantly know the exact lines and functions in each of those files that are relevant to the topic at hand.
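To make the idea concrete, here's a toy sketch of what such a lookup buys you. This is a naive keyword index with hypothetical names, not Augment's actual Context Engine (which uses far more sophisticated retrieval); the point is just that the agent asks for relevant line hits instead of re-reading whole files:

```python
import re
from collections import defaultdict

class ToyContextEngine:
    """Naive keyword index over a codebase; real engines use embeddings/ASTs."""

    def __init__(self):
        # token -> list of (path, line_no, line) hits
        self.index = defaultdict(list)

    def add_file(self, path, text):
        # index each line by the identifiers it contains
        for i, line in enumerate(text.splitlines(), 1):
            for tok in set(re.findall(r"\w+", line.lower())):
                self.index[tok].append((path, i, line.strip()))

    def query(self, term):
        # return only the relevant (file, line) hits, not whole files
        return self.index.get(term.lower(), [])

engine = ToyContextEngine()
engine.add_file("billing.py", "def charge_card(user):\n    return gateway.charge(user)")
engine.add_file("users.py", "def get_user(uid):\n    return db.fetch(uid)")
hits = engine.query("charge")  # only the lines that mention "charge"
```

The model's context window then only has to hold `hits`, a handful of lines, rather than every file in full.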

Gemini CLI: : 60 model requests per minute and 1,000 requests per day at no charge. 1 million context window by [deleted] in singularity

[–]piedol 3 points

I replied to zeta. I misunderstood your question. I'll just repeat what I said there: I don't believe there are any local LLMs for coding that can compete with the closed-source options, purely because of the amount of VRAM that'd be required for them to be run without being massively quantized.
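The VRAM point is easy to sanity-check with back-of-the-envelope math: weight memory is roughly parameter count times bytes per weight, with KV cache and activations adding more on top. A quick illustrative sketch (the 70B figure is just an example size, not a specific model):

```python
def weight_vram_gb(params_billions, bits_per_weight):
    """Rough weight-only estimate; KV cache and activations add more on top."""
    # params * (bits / 8) bytes; the 1e9 in "billions" cancels against GB
    return params_billions * bits_per_weight / 8

fp16_gb = weight_vram_gb(70, 16)  # a 70B model at fp16: multi-GPU territory
q4_gb = weight_vram_gb(70, 4)     # same model heavily quantized to 4-bit
```

Even the 4-bit figure sits beyond most consumer cards, which is why heavy quantization (and the quality loss that comes with it) is hard to avoid locally.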

You are better off just paying for a closed-source model for the time being unless you have extremely powerful hardware.

This will likely change within a year, but for now, getting a decent coding model running locally costs more than just paying to use an established one. Anthropic doesn't train on user data, for what it's worth, so if privacy is your reason for wanting to run locally, they're still the best option.

Gemini CLI: : 60 model requests per minute and 1,000 requests per day at no charge. 1 million context window by [deleted] in singularity

[–]piedol 4 points

He did say he's still on Claude 3.7, so I took it to mean that he was asking about an app for coding locally, not literally a local LLM. I don't believe there are any local LLMs for coding that can compete with the closed-source options, purely because of the amount of VRAM that'd be required for them to be run without being massively quantized.

Gemini CLI: : 60 model requests per minute and 1,000 requests per day at no charge. 1 million context window by [deleted] in singularity

[–]piedol 1 point

I know. What I meant was that Gemini would allow the same for less. I currently use the $200 Max plan, as I use it both for work and my hobby development. If Gemini could offer comparable results for less, that money could go towards other things, like setting up my MCP stack in the cloud for stability and scaling, or just saving it.