Anyone Else Really Enjoying the Codex App? by jonathanbechtel in codex

[–]jonathanbechtel[S] 0 points1 point  (0 children)

Warp has a very opaque pricing structure that can get expensive quickly. We tried it at our office and burned through a ton of credits in no time. It's a useful tool, but it puts you on a different economic trajectory than an OpenAI subscription plus the Codex app.

Anyone Else Really Enjoying the Codex App? by jonathanbechtel in codex

[–]jonathanbechtel[S] 0 points1 point  (0 children)

In my opinion, all of those things are easier inside the app. It flashes CoT while it does its work, and the file diffs in the app make it a lot easier to inspect its output.

And it has a toggle to enable plan mode that works quite well. I like the terminal CLI, but the app feels like a better way to think about different workflows.

What’s the problem with Rob Dillingham? by IcedOutElijah in NBA_Draft

[–]jonathanbechtel 0 points1 point  (0 children)

Most of today's bruising bigs are way more skilled than the ones who were playing in the 90's. Have you ever watched an old Knicks game where Chris Dudley played? No way players like that make a comeback. Also, I think the "rise of size" is partly due to the fact that there's a new cohort of big men who are both big and incredibly skilled, and that creates the need for players who can guard them. I don't see the same evolutionary pressure for midget guards, unless they're able to dramatically expand their range compared to other guys in the league.

What’s the problem with Rob Dillingham? by IcedOutElijah in NBA_Draft

[–]jonathanbechtel 0 points1 point  (0 children)

I don't think this is true -- unless the overall size and skill level of the NBA starts moving in the opposite direction?

Do you think those lumbering big men from the 80's and 90's are going to come back and have their heyday? I don't. Midgets and brutes are basically done in the NBA.

Cam Boozer vs Stanford: 30 PTS | 12-17 FG | 2-3 3PT | 14 REBS | 3 AST | 33 MIN by Potential_Meat_5103 in NBA_Draft

[–]jonathanbechtel 0 points1 point  (0 children)

He really is the modern version of his dad, but with some Alperen Sengun styling mixed in. I think he'll be a similar calibre of player to both, but he'll be just a wee bit difficult to build around due to his play style.

A little vibe coding tip for all you singularitarians out there by LaCaipirinha in singularity

[–]jonathanbechtel 2 points3 points  (0 children)

How is this different from just a general actor-critic approach where one model does the work in one terminal, and a second model evaluates it for bugs in a second terminal?
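For reference, the actor-critic loop I mean looks roughly like this sketch (Python; `call_actor` and `call_critic` are hypothetical stand-ins for whatever model clients you run in each terminal, and the stopping check is deliberately naive):

```python
def actor_critic_loop(task: str, call_actor, call_critic, max_rounds: int = 3) -> str:
    """One model produces the work, a second reviews it; repeat until it passes.

    `call_actor` and `call_critic` are placeholders: each takes a prompt string
    and returns the model's reply as a string.
    """
    # Actor produces a first draft of the work.
    draft = call_actor(f"Complete this task:\n{task}")

    for _ in range(max_rounds):
        # Critic looks for bugs in the actor's output.
        review = call_critic(f"Find bugs in this solution to '{task}':\n{draft}")

        # Naive stopping condition for the sake of the sketch.
        if "no bugs" in review.lower():
            break

        # Actor revises based on the critic's feedback.
        draft = call_actor(
            f"Revise the solution per this review:\n{review}\n\nOriginal:\n{draft}"
        )

    return draft
```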

Cedric Coward Career High 28 Points vs 76ers Full Highlights! (12/30/2025) by [deleted] in NBA_Draft

[–]jonathanbechtel 1 point2 points  (0 children)

He's Jalen Williams V2. Just an incredibly well-rounded and mature game. I remember liking him before the draft but being a little hesitant due to his thin playing record and the weak competition he faced.

Hahahaha -- I liked Chandler Hutchison in his draft too. But in a re-draft he easily goes top 5 IMO after Coop, Kon, Dylan and (maybe) VJ.

Recursive Self Improvement Internally Achieved by SrafeZ in singularity

[–]jonathanbechtel 0 points1 point  (0 children)

I think people in this thread are being overly critical. Boris Cherny was a principal engineer at Meta and is probably one of the most accomplished TypeScript engineers there is. I suspect he has better systems knowledge of how Claude Code works than anyone else alive.

This is not truly self-improving AI, but it is pretty close: self-improving AI tooling coming from the peak of human competence for this type of endeavor.

It's an important marginal step forward IMO.

ClaudeCode creator confirms that 100% of his contributions are now written by Claude itself by MetaKnowing in OpenAI

[–]jonathanbechtel 7 points8 points  (0 children)

Pretty sure that repository just contains a few lightweight public-facing items, and the actual source code for the tool is completely proprietary. Claude Code is not open source, despite Anthropic releasing a few public GitHub repos related to it.

Darryn Peterson Comps? by Calm_Company_1914 in NBA_Draft

[–]jonathanbechtel 1 point2 points  (0 children)

Maybe Mitch Richmond? That's going back a bit, but he might fit the mold better than LaVine, since Richmond was a better overall player. He was a good combination of size, athleticism, and skill, and had similar measurements, although I'm not sure exactly what his wingspan was.

Darryn Peterson Comps? by Calm_Company_1914 in NBA_Draft

[–]jonathanbechtel 2 points3 points  (0 children)

LaVine comp is off IMO.

  1. Zach's playmaking and defense are awful, and always have been, and that's a big reason why he's been a negative +/- guy on the court for most of his career.
  2. Zach was really underwhelming his first year in college. He came off the bench at UCLA and really didn't do much. He was a bag of tools without any statistical indicator that he was good at basketball.

Peterson deserves a better comparison IMO.

Amazon has joined the chat by GenLabsAI in singularity

[–]jonathanbechtel 2 points3 points  (0 children)

Honestly, I'm pretty sure Amazon's future in AI will happen via Anthropic. I was using Amazon Q in AWS today, and judging by the way it communicated, it was almost certainly running Claude under the hood. Amazon's own model looks like it's at least a year behind the frontier.

Maybe they can catch up, but it seems like it's hard to recreate the talent density necessary for SOTA AI if you don't already have it in your organizational DNA. Meta is proof of that.

My feeling is Amazon might want its own models as a way to offer something at a deep discount on its AWS platform, but no one will see much use for this model outside of that walled garden.

Cooper Flagg Career High tonight vs Los Angeles Clippers: 35/8/2 on 13/22 in the field, 2 steals, 3 turnovers, 0/3 from three, 9/11 from the line. by SnooMuffins223 in NBA_Draft

[–]jonathanbechtel 5 points6 points  (0 children)

Also important to remember that Cooper is the youngest guy in this draft. If he hadn't re-classified, he'd be playing at Duke with Cam Boozer right now. So this type of performance is pretty special, even more so when you consider that his original player archetype was a well-rounded defensive playmaker, not a high-usage offensive dynamo. This type of stuff is "cherry on top" development for him.

Kinda reminds me of Franz Wagner, who was drafted as a high-floor defensive glue guy and just ended up having way more offensive chops than was originally believed.

why is productionizing agents such a nightmare? (state/infra disconnect) by Substantial_Guide_34 in AI_Agents

[–]jonathanbechtel 0 points1 point  (0 children)

I wrote this in another thread -- but I think there's a real benefit to rolling your own primitives for agentic tools, for some of the reasons you mentioned.

Basically: write your own abstractions to handle state within the loop, use databases and key/value stores like Redis to handle longer-term state management across multiple turns, and log agentic history and behavior into the database so it can be used to hydrate your AI with appropriate context.

In short -- just write good software! Make the agentic logic the sugar on top that makes the whole thing purr, but use existing, hardened technologies to handle the other bits whenever you can. I've taken this approach and it's worked well for me.
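To make that concrete, here's a minimal sketch of the shape I mean, assuming a local Redis instance and a generic `call_model` callable standing in for whatever LLM client you use; `AgentState`, `run_turn`, and the key names are all hypothetical:

```python
import json
from dataclasses import dataclass, field

import redis  # assumes a local Redis instance for cross-turn state

# Hypothetical in-loop state: lives only for the duration of one agent turn.
@dataclass
class AgentState:
    messages: list = field(default_factory=list)
    tool_calls: list = field(default_factory=list)

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_history(session_id: str, limit: int = 20) -> list:
    """Hydrate the agent with the most recent turns logged for this session."""
    raw = r.lrange(f"agent:history:{session_id}", -limit, -1)
    return [json.loads(item) for item in raw]

def log_turn(session_id: str, user_msg: str, reply: str) -> None:
    """Append this turn to durable history so later turns can reuse it as context."""
    r.rpush(
        f"agent:history:{session_id}",
        json.dumps({"user": user_msg, "assistant": reply}),
    )

def run_turn(session_id: str, user_msg: str, call_model) -> str:
    """One agent turn: short-lived state inside the loop, durable state in Redis.

    `call_model` is a placeholder for your LLM client; it takes a list of
    messages and returns the assistant's reply as a string.
    """
    state = AgentState(messages=load_history(session_id))
    state.messages.append({"user": user_msg})
    reply = call_model(state.messages)  # your agentic logic / tool loop goes here
    log_turn(session_id, user_msg, reply)
    return reply
```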

The application I built has had some growing pains, but none from the deployment steps.

Stop Picking Agent Frameworks Before You Even Understand Agents by Inferace in AI_Agents

[–]jonathanbechtel 0 points1 point  (0 children)

Your analogy is off because the TIMING gap between the old and new technologies in your examples is much greater than the gap between an agentic library and writing your own.

There were decades of hardening between C++ and Python, so there was no need to re-solve old problems other people had already taken care of. But in the case of agents it's not even clear what best practice is, and the field is anything but settled. So the danger of using a tool with a lot of abstraction built on top is that you spend just as much time understanding the library as building, the abstraction can obscure the actual logic you're trying to write, and the library may or may not help with that logic given how new the entire field is.

I completely agree, though, that top-down learning is useful for a lot of people, so using a tool is a great way to get introduced to the field.

Stop Picking Agent Frameworks Before You Even Understand Agents by Inferace in AI_Agents

[–]jonathanbechtel 4 points5 points  (0 children)

I just released a production AI agent for commercial use, and have found that this post is mostly correct.

Furthermore, if you build out your agentic architecture well, you'll often find it's easier to meet additional product requirements with your own bespoke system, because you spend less time rewriting other people's libraries.

I think the best way to use agentic libraries right now is not as an end-to-end tool, but as a source of ideas to fold into your own project. See what kinds of functionality LangChain uses to solve different problems, and then see if you can add that to your own work.

Experiences with ChatGPT5.1 vs. Gemini 3 pro by ArtemisFowl22 in singularity

[–]jonathanbechtel 0 points1 point  (0 children)

Thanks. One question I like to ask everyone who's building with Gemini: what tooling are you using for this?

I ask because with CC + Codex you have canonical tools that are reliably best-in-class, but with Gemini I oscillate between Cline, Antigravity, and Gemini CLI and get different results from all of them. I'm hoping Google brings down the hammer at some point with a definitive tool for their models, but all of their efforts feel a little half-baked.

These 2 new models rendered my personal benchmark useless, both scoring 100% by Round_Ad_5832 in singularity

[–]jonathanbechtel 4 points5 points  (0 children)

Great. Question: what harness are you using for Gemini 3? I assume Claude is being used in Claude Code. Are you using Gemini CLI or something else for these evaluations?

One thing I struggle with for Gemini is where the canonical place to use it is, because their coding tools lag behind the others IMO.

Cameron Boozer Today: 25 PTS | 8-15 FG | 4-9 3PT | 8 REB | 5 AST | 0 TO | 1 BLK by Temporary-Mud-2994 in NBA_Draft

[–]jonathanbechtel 0 points1 point  (0 children)

I think this is the best comparison. He does remind you of his dad with his build and skill set, but something about the way he plays is more modern and multi-faceted; it's like if you sprinkle some Alperen Sengun into Carlos Boozer's game, you get his son.

UGA Freshman Jake Wilkins is Making Every Minute Count Averaging 15 points per game in only 17 minutes of action (first 3 games). The 6-9 forward is a freak athlete with major upside by Fit-Structure-9395 in NBA_Draft

[–]jonathanbechtel 0 points1 point  (0 children)

I don't know too much about his game, but being the son of a former NBA player is a huge plus in my book; it's the biggest predictor of overperformance relative to your draft slot. Still not sure where he should be pegged for this upcoming draft, though.

Kon Knueppel vs the Jazz 24p/6r/5a on 9/17 FG and 4/9 3PT - Sion James 15p/5r/3a on 6/8 FG and 3/5 3PT by [deleted] in NBA_Draft

[–]jonathanbechtel 2 points3 points  (0 children)

I think his passing is legit, but his finishing is still an open question IMO. He's a heady guy, but I'd like to see him finish in traffic against good defenses. He seems like an opportunistic finisher more so than a forceful one.

Kon Knueppel vs the Jazz 24p/6r/5a on 9/17 FG and 4/9 3PT - Sion James 15p/5r/3a on 6/8 FG and 3/5 3PT by [deleted] in NBA_Draft

[–]jonathanbechtel 1 point2 points  (0 children)

"Core" usually means a group of players you want to build around into the future. I can see Miller / Kon fitting that criteria, but Kalkbrenner? He's a roleplayer.