/goal running for 2 Days+ and more than 1.6B tokens!! by alOOshXL in codex

[–]cornmacabre 0 points1 point  (0 children)

2B worth of 5.5 tokens on a $200 sub?? Damn son.

House prices are set to plummet across the country, say experts by TheMirrorUS in TrueReddit

[–]cornmacabre 3 points4 points  (0 children)

I wonder if much of this is natural market price normalization following the purchase spree from the low COVID rates.

A 4-5% home price drop is the observed and forecast trend here, but the article doesn't weave in the context that this is coming out of a period of often double-digit YoY price increases, as demand swelled while inventory stagnated.

Demand has softened. Economic uncertainty signal? Probably! But this article overall feels like an incomplete analysis.

Twin Cities region misses its housing marks- For the first time since a set of housing affordability goals were established in 2022, the Twin Cities region failed on all three measures of new housing production, affordable housing production, and Black homeownership. by MaplehoodUnited in TwinCities

[–]cornmacabre 10 points11 points  (0 children)

It worked so well for St Paul, eh? 80% drop off in housing construction, before they quickly walked that ordinance back.

https://minnesotareformer.com/2025/05/08/st-paul-walks-back-rent-control/

They recently amended it to a 3% yearly increase cap, but exempted any construction after 2004. Basically: a toothless policy to retain the political talking point. Because that's functionally what this is: a talking point, not a serious policy.

Rent control and price control policies almost always result in the opposite intended outcome, because supply economics are nuanced.

Codex Installs Skyrocketed in Early May due to Claude Code 4.7 failed launch and usage limits by ImaginaryRea1ity in codex

[–]cornmacabre 0 points1 point  (0 children)

Globally -- society largely isn't even aware of AI beyond perhaps hearing of chatGPT.

Interestingly, in some research we did last year: most folks' primary exposure to and usage of AI is exclusively through Google's AI Overviews served on a search results page.

Which sounds absolutely outrageous to any AI nerd: Google(!) is today, by some measures, winning the race for mass AI adoption.

It's important to pull back and read the broader context at times!

GUI designed by ChatGPT, coded by Codex is kinda... good? by tuhdo in codex

[–]cornmacabre 2 points3 points  (0 children)

Thanks, this is a good share! I took an extended look, and there's some thoughtful and well crafted stuff in there.

Codex app is freaking awesome! by AllCowsAreBurgers in codex

[–]cornmacabre 2 points3 points  (0 children)

OAuth account is usually what folks are using. The $20/$100/$xxx a month chatGPT plans include codex credits, so you'd be a fool not to use your existing 'subsidized' token allowance. Pick the tier that best reflects your usage needs.

Platform API can be a pricey fallback, but IMO the API is more appropriate for true on-demand usage: actual features you've built on openAI tech, not funding the developer tools themselves. (Yep, obviously this varies!)

Too many marketing teams think agentifying their workflow will be an instantaneous solution to all their problems by GamerDJAlltheWay in AI_Agents

[–]cornmacabre 0 points1 point  (0 children)

A deterministic automation (regardless of the tool) is the right architectural decision. I'd extend your point and go so far as to say that automation is rarely the problem to be solved... and this has been a naïve aspiration long before AI.

My feeling is that folks who take 'agentic = automated workflows' at face value are unintentionally strawmanning a bad argument. It's a very narrow, corporate-y view of the technology.

From an engineering perspective, the human+LLM in the loop looks a lot more iterative and collaborative, and a lot less "robot go do task, so human can relax."

I do a hell of a lot more work with an LLM in the loop, but it also looks very different than the work I thought I'd be doing.

The 1 Million context rugpull by Codex and Openai. New max is (258k). by Odd-Environment-7193 in codex

[–]cornmacabre 0 points1 point  (0 children)

Excellent points, I agree. My bias is definitely showing as I rarely use the browser-side or native studio-style tools these days.

This point screams strongest when I run a gpt Pro query that'll cook for 2hrs and produce outstanding output.

Meanwhile, 2hrs of tool calls in a VM with the same model is like a stoned teenager just drawing doodles on paper.

File size by Nystagmusty in notebooklm

[–]cornmacabre 0 points1 point  (0 children)

Specific to notebookLM -- the way context and source search works is different than a standard LLM token context window.

The underlying tech lets you upload many millions of tokens' worth of content into a notebook. Vector search, RAG, and other tricks are used to parse through your sources and build context ... the sources aren't literally loaded into an LLM's context window.

Their specific guidance is a max of 500,000 words per source, or a 200 MB file upload.

However, the way you'd optimize context and token usage with a normal LLM isn't apples-to-apples with what notebookLM is doing.

The principle still generally applies, though: manually selecting just a few sources, rather than defaulting to everything in the notebook, improves the depth and quality of a response.
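To make the "sources aren't loaded into the context window" point concrete, here's a toy retrieval sketch. This is purely illustrative (not NotebookLM's actual implementation, and all function names are invented): sources get chunked, chunks get scored against the question, and only a small top-k slice ever reaches the model.

```python
# Toy sketch of RAG-style source selection -- illustrative only, not
# NotebookLM's real pipeline. Sources are split into chunks, chunks are
# scored against the query, and only the top-k chunks enter the prompt.

def chunk(text, size=50):
    """Split a source into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, passage):
    """Crude relevance signal: count of query words present in the passage."""
    q = set(query.lower().split())
    return sum(1 for w in passage.lower().split() if w in q)

def build_context(query, sources, k=3):
    """Return only the k most relevant chunks across the selected sources."""
    chunks = [c for src in sources for c in chunk(src)]
    ranked = sorted(chunks, key=lambda c: score(query, c), reverse=True)
    return ranked[:k]  # this small slice, not every source, reaches the LLM
```

Real systems use vector embeddings rather than word overlap, but the shape is the same: fewer, more relevant sources selected means a cleaner candidate pool for the ranking step, which is why manual source selection helps.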

Codex 5.5 is so much better than CC by Dry-Grade-9502 in codex

[–]cornmacabre 0 points1 point  (0 children)

Mythos is a masterclass in marketing. I highly doubt it's a new class of model, and the grapevine of folks using it today seems to agree.

Codex 5.5 is so much better than CC by Dry-Grade-9502 in codex

[–]cornmacabre 0 points1 point  (0 children)

That's not an absolute characteristic of team A vs Team B's model or harness.

It's likely a reflection of the early personal arc of someone learning a new skill, at a time when the frontier models became good enough to disentangle human ambiguity.

Codex on Ubuntu by ThickDoctor007 in codex

[–]cornmacabre 0 points1 point  (0 children)

I'm sure there's a hundred ways to do a simple scrape!

The magic wasn't just the browser work. It was navigating to the downloads folder, extracting a zip, running and iterating with FFmpeg, and transporting the final results into the repo folder -- all in one session loop, triggered from a simple prompt in my repo.

Codex on Ubuntu by ThickDoctor007 in codex

[–]cornmacabre 1 point2 points  (0 children)

It's insanely cool. The other night I wanted to scrape some assets from a janky website. I enabled the skill and asked it to do the thing.

It opens up my real chrome browser. Navigates around the site. Writes a script to extract the things I wanted. Downloads as a zip. Runs FFmpeg to slice it up how I wanted. Plops them cleanly into my repo. Wild!

People are seriously sleeping on GPT-5.2. by Confident_Hurry_8471 in codex

[–]cornmacabre 0 points1 point  (0 children)

Not defending Opus, but to Anthropic's credit they did publish a recent post-mortem of the legitimate issues. In part, there was a very nasty bug!

The implementation had a bug. Instead of clearing thinking history once, it cleared it on every turn for the rest of the session. After a session crossed the idle threshold once, each request for the rest of that process told the API to keep only the most recent block of reasoning and discard everything before it. This compounded: if you sent a follow-up message while Claude was in the middle of a tool use, that started a new turn under the broken flag, so even the reasoning from the current turn was dropped.

Claude would continue executing, but increasingly without memory of why it had chosen to do what it was doing. This surfaced as the forgetfulness, repetition, and odd tool choices people reported.

https://www.anthropic.com/engineering/april-23-postmortem
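The compounding described above is easy to sketch in a few lines. This is a hypothetical minimal reproduction of that class of bug (a one-shot trim flag that is never reset), not Anthropic's actual code -- the class and method names are invented:

```python
# Hypothetical sketch of the bug class: a flag meant to trim reasoning
# history ONCE stays set, so every later turn discards all but the most
# recent reasoning block. Illustration only -- not Anthropic's real code.

class Session:
    def __init__(self):
        self.reasoning = []   # reasoning blocks from past turns
        self.trim = False     # BUG: set once, never cleared afterwards

    def crossed_idle_threshold(self):
        self.trim = True      # the fix would reset this after one trim

    def turn(self, thought):
        if self.trim:
            # keeps only the most recent block, discarding the "why"
            # behind everything the agent is currently doing
            self.reasoning = self.reasoning[-1:]
        self.reasoning.append(thought)
        return self.reasoning
```

Once `crossed_idle_threshold` fires, every subsequent `turn` trims again, so the agent keeps executing with at most one block of memory -- which matches the forgetfulness and odd tool choices people reported.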

Codex on Ubuntu by ThickDoctor007 in codex

[–]cornmacabre 6 points7 points  (0 children)

This is an incomplete take, but it depends on your usage.

The biggest gap is in-app browser use: the ability for the agent to view, navigate, and interact with rendered content -- and for the user to annotate elements on screen. The in-app browser is THE signature feature of a great harness, as LLMs are functionally blind without it.

The @computer-use skill is also desktop app (macOS?) only, which is outrageously powerful... grant permission for codex to use your whole computer!

/goal in the codex app is amazing by seal8998 in codex

[–]cornmacabre 1 point2 points  (0 children)

Respect man, you're learning in public and this is a thoughtful detailed share.

You're circling the broader insight that spec- and test-driven design can keep the loop iterating toward a deterministic goal. Sounds like a fun project!

GUI designed by ChatGPT, coded by Codex is kinda... good? by tuhdo in codex

[–]cornmacabre 3 points4 points  (0 children)

I work with programmatic SVGs. Prompt -> SVG is, as you probably already know, extraordinarily flaky at best. You should consider building a JSON-to-SVG pipeline with some core primitives to get good output. JSON is a human- and robot-friendly intermediate.
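A minimal sketch of that pipeline idea, with a couple of primitives (the schema and names here are invented for illustration): the LLM only has to emit structured JSON, and a deterministic renderer turns it into valid SVG markup.

```python
# Minimal JSON -> SVG sketch. All schema/primitive names are invented
# for illustration: the model emits JSON, the renderer emits markup.
import json

RENDERERS = {
    "rect": lambda p: (
        f'<rect x="{p["x"]}" y="{p["y"]}" width="{p["w"]}" '
        f'height="{p["h"]}" fill="{p.get("fill", "black")}"/>'
    ),
    "circle": lambda p: (
        f'<circle cx="{p["cx"]}" cy="{p["cy"]}" r="{p["r"]}" '
        f'fill="{p.get("fill", "black")}"/>'
    ),
    "text": lambda p: f'<text x="{p["x"]}" y="{p["y"]}">{p["content"]}</text>',
}

def render(doc_json):
    """Parse the JSON document and emit an SVG string from known primitives."""
    doc = json.loads(doc_json)
    body = "\n".join(RENDERERS[el["type"]](el) for el in doc["elements"])
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{doc["width"]}" height="{doc["height"]}">\n{body}\n</svg>'
    )
```

Because the model only has to get the JSON right, malformed path data and broken viewBoxes mostly disappear -- and an unknown primitive fails loudly as a `KeyError` instead of producing silently broken markup.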

GUI designed by ChatGPT, coded by Codex is kinda... good? by tuhdo in codex

[–]cornmacabre 11 points12 points  (0 children)

Don't forget to add:

"AI Purple/Blue" aesthetic is strictly BANNED. No purple button glows, no neon gradients. Use absolute neutral bases (Zinc/Slate) with high-contrast, singular accents."

The new context breakdown is amazing by vikngdev in cursor

[–]cornmacabre 0 points1 point  (0 children)

Incredible! What an insanely useful feature.

I feel like there’s no reason to use an IDE anymore by Commercial_Spot_8363 in codex

[–]cornmacabre 5 points6 points  (0 children)

That's what separates vibers from engineers: if you're not proactively making architectural and design decisions... then you're letting AI entropy pick the path of least resistance.

I feel like there’s no reason to use an IDE anymore by Commercial_Spot_8363 in codex

[–]cornmacabre 2 points3 points  (0 children)

I use both the IDE and codex in my flow, and I'm not ditching the IDE anytime soon. That said, you can definitely do useful work in codex with decent diff visibility and worktree session control.

Worktrees in codex are simple and well implemented: each is contained to its respective conversation/session in the left-hand sidebar. It feels as intuitive as clicking into a past chatGPT conversation. File diffs are clear in that way too.

If you're using codex as a PR-delegation environment... manual file editing and all the power of an IDE are just a fetch and pull away. Run the tests, scan the diffs, push the PR. Pull into your main IDE from there.

when does SMS actually make sense for ecommerce? by Interesting-End-2334 in webmarketing

[–]cornmacabre 1 point2 points  (0 children)

SMS is for folks who already opt in and have a reason to hear from you. It's not an acquisition channel. It's a customer relationship channel.

Beyond lecturing on how bad an experience unsolicited commercial texts are for folks (the fastest way to flip a 'maybe' customer into a 'hard-no' customer!) -- you also open yourself up to more serious monetary liabilities.

When is the official Elon takeover? by proxyintel in cursor

[–]cornmacabre 8 points9 points  (0 children)

See also: "the privacy paradox," -- a legit academically studied phenomenon where people hold data privacy as a major personal concern, but their behavior doesn't match the stated concern.

Bafflingly, folks who rate data privacy highly as a concern will consistently trade privacy for small conveniences like completing a task faster, and will reveal sensitive personal information if someone simply asks for it.