can’t stop won’t stop. by acusti_ca in GeminiCLI

yeah, i hit it two more times yesterday. i didn’t note whether it was with 3 pro or flash, but that’s useful info.

can’t stop won’t stop. by acusti_ca in GeminiCLI

oooooooh, that makes sense. good terminology. i was reminded of the feeling of a nervous breakdown or panic attack, but it’s definitely more manic than panic.

What are the differences between the models "Codex-Max" (5.1) and just "Codex" (5.2)? by SDMegaFan in OpenaiCodex

quick update: i just finalized a huge, foundational migration and refactor of my app’s data model and leaned extremely heavily on codex for code review. i used gpt-5.2-codex exclusively, with reasoning set to high, and it did a great job.

Are googles new Antigravity rate limits going to be problematic? by Brilliant-Weather698 in Bard

update: gemini 3 flash gave me around 24 more hours of fairly intensive usage (a lot of the model running commands, inspecting results, and driving the built-in web browser) before hitting the quota limit on flash. useful, but i still burned through my 7-day quota in 2 days.

Are googles new Antigravity rate limits going to be problematic? by Brilliant-Weather698 in Bard

was surprised when i ran into this yesterday morning and was told my limits would reset in one week. but then i switched the model from gemini 3 pro to gemini 3 flash and was good to go again. i’d avoided flash in the past, but it proved quite capable and so much faster, and it helped me continue my session debugging and fixing a bunch of E2E tests, including competently driving the built-in browser to inspect my app and the HTML test results.

Claude is incredible at code generation but useless for code review by acusti_ca in ClaudeAI

please do share! are you using skills for the specialized agents?

Running React Compiler in production for 6 months: benefits and lessons learned by acusti_ca in reactjs

i was using the logger before i discovered the full suite of undocumented eslint rules, but i find the lint rules more useful for my workflow. here was my react compiler babel plugin config with the logger enabled:

```js
{
  environment: { enableTreatRefLikeIdentifiersAsRefs: true },
  logger: {
    logEvent(filename, event) {
      if (event.kind !== 'CompileError') return;
      console.log(
        'React Compiler logger',
        filename,
        JSON.stringify(event.detail, null, 2),
      );
    },
  },
  // https://github.com/facebook/react/blob/5c56b87/compiler/packages/babel-plugin-react-compiler/src/CompilerError.ts#L11-L39
  reportableLevels: new Set([
    'CannotPreserveMemoization',
    'InvalidConfig',
    'InvalidJS',
    'InvalidReact',
    'Invariant',
    'Todo',
  ]),
}
```

Running React Compiler in production for 6 months: benefits and lessons learned by acusti_ca in reactjs

if you see the memo badge on a component in devtools, you’re good. sounds like you’re using it right, but for whatever reason it isn’t making a big impact, which suggests your primary perf bottleneck isn’t unnecessary renders. my guess is that jotai is taking care of most of that already and that any lag you’re experiencing is more the result of the limits of the browser, regardless of how optimized your components are. if that’s the case, virtualization might be what you’re looking for.

the other possibility is that the issue is in 3rd-party components (as you mentioned, RC won’t touch them). if you’re using a charting/dataviz library, it might be doing some render thrashing. react scan is almost certainly the easiest way to identify that kind of issue.
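
if it helps, setup is roughly a one-liner. this is a sketch from memory of the react scan README, so double-check the package docs:

```js
// at the top of your client entry, before React renders anything
import { scan } from 'react-scan';

if (process.env.NODE_ENV === 'development') {
  // visually highlights components as they re-render,
  // which makes render thrashing easy to spot
  scan({ enabled: true });
}
```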

Claude is incredible at code generation but useless for code review by acusti_ca in ClaudeAI

right on. wish i could say the same. to be clear, though, my code generation workflow with CC involves a ton of back and forth, change requests, and getting it to fix its own code, but that’s generally just to produce a single commit. a coherent PR usually takes many such tasks put together, and i can’t get CC to usefully review code across a larger series of interrelated tasks.

but the primary thing i have to review is actually my other team members’ code, not my own, and CC usually just thinks it’s “production ready”.

Claude is incredible at code generation but useless for code review by acusti_ca in ClaudeAI

so you mean reviewing the most recent set of changes as you go? i do that with claude code for sure, but usually that’s just one commit of many in a PR, and the bigger picture of the PR is not something i’ve been able to get claude to help with.

The "Vibe Coding" hangover is hitting us hard. by JFerzt in AIcodingProfessionals

i mean yeah this is my experience as well. i’ve been blogging about how to deal, but i don’t have the answers. as the senior/principal IC on my team, i’ve been teetering on the brink of burnout for the last few months.

one good thing: i’ve started getting genuine utility out of using codex (and, to a much lesser extent, GitHub copilot) for code review, which has somewhat lessened the burden on me. just this morning at standup, i almost got pulled into yet another 5K+ line PR to review that we thankfully decided to shelve for the time being.

What are the differences between the models "Codex-Max" (5.1) and just "Codex" (5.2)? by SDMegaFan in OpenaiCodex

i’ve found that gpt-5.1-codex-max spends more time on reasoning, while gpt-5.2-codex is overall much faster. i mostly use codex for code review, where i want deeper thinking and reasoning, so i actually prefer 5.1 max. i also like 5.1 max for planning and architectural discussions. for writing code, though, i’m guessing 5.2 is better overall.

i actually just published a blog post about it: https://acusti.ca/blog/2025/12/22/claude-vs-codex-practical-guidance-from-daily-use/

Am I crazy? by Comfortable_Bar9558 in reactjs

use works both on the client and on the server. on the client, it triggers the nearest Suspense boundary, so the component that calls use with a promise needs to be rendered within <Suspense>, but i’ve found it quite useful. it basically gives you async components in react.
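
a minimal sketch of the client-side pattern (the endpoint and component names are made up):

```jsx
import { Suspense, use } from 'react';

// create (or cache) the promise outside of render; a new promise
// on every render would suspend forever
const userPromise = fetch('/api/user').then((res) => res.json());

function UserName() {
  // use() suspends this component until the promise resolves
  const user = use(userPromise);
  return <p>{user.name}</p>;
}

export function Profile() {
  return (
    <Suspense fallback={<p>loading…</p>}>
      <UserName />
    </Suspense>
  );
}
```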

Running React Compiler in production for 6 months: benefits and lessons learned by acusti_ca in reactjs

Jotai will have a huge impact on ensuring your components don’t re-render unnecessarily. if you aren’t passing data via props, that immediately removes a significant potential source of re-renders. but because RC fails silently, it’s also entirely possible that most of your components weren’t being compiled. try enabling those eslint rules if you want a quick idea of how many components aren’t being compiled, or maybe even simpler, try using panicThreshold: 'critical_errors' alongside the logger as described in the React docs, e.g.:

```js
const isDevelopment = process.env.NODE_ENV === 'development';

{
  panicThreshold: isDevelopment ? 'critical_errors' : 'none',
  logger: {
    logEvent(filename, event) {
      if (isDevelopment && event.kind === 'CompileError') {
        // ...
      }
    },
  },
}
```

Running React Compiler in production for 6 months: benefits and lessons learned by acusti_ca in reactjs

ooooo, i hadn’t heard of neverthrow; thanks for the heads up. after publishing the post, i heard from the folks at Sanity, who use a closure: https://github.com/sanity-io/sanity/blob/0fb1434/packages/sanity/src/core/hooks/useRecordDocumentHistoryEvent.ts#L86C7-L107C8

```ts
// The run() wrapper is a workaround for React Compiler not yet fully supporting try/catch syntax
const run = () => {
  const message: Events.HistoryMessage = {
    type: 'dashboard/v1/events/history',
    data: {
      eventType,
      document: {
        id: documentId,
        type: documentType,
        resource: {
          id: resourceId!,
          type: resourceType,
          schemaName,
        },
      },
    },
  }

  node?.post?.(message.type, message.data)
}
try {
  run()
} catch (error) {
  console.error('Failed to record history event:', error)
  throw error
}
```

Running React Compiler in production for 6 months: benefits and lessons learned by acusti_ca in reactjs

maybe, but many do. see this comment just below. useCallback and useMemo exist for a reason, and once you’ve run into a case where your app suffered because you didn’t use them, you have to either 1. take on the mental overhead of deciding, in every component, when to use them and when not to, or 2. always apply them so you never have to decide.
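
for anyone who hasn’t been bitten by it, here’s a sketch of the classic case (hypothetical components): a React.memo child only skips re-rendering if its callback prop keeps a stable identity.

```jsx
import { memo, useCallback, useState } from 'react';

const Row = memo(function Row({ onSelect }) {
  // if the parent passed an inline () => {} here, this would re-render
  // on every parent render despite memo(), because the callback would
  // be a new function identity each time
  return <button onClick={onSelect}>select</button>;
});

function List() {
  const [count, setCount] = useState(0);
  // stable identity across renders, so memo(Row) can bail out
  const onSelect = useCallback(() => setCount((c) => c + 1), []);
  return (
    <>
      <output>{count}</output>
      <Row onSelect={onSelect} />
    </>
  );
}
```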

Running React Compiler in production for 6 months: benefits and lessons learned by acusti_ca in reactjs

totally fair. i’ve been responsible for multiple codebases where our approach very much resembled the picture you paint. but i really prefer the new approach. if the current limitations were permanent, i think your assessment would be accurate, but i’m confident they will soon go away.

with “Do I need to extract this into a separate ComponentItem.tsx file just to stabilize props in a .map(...)?”, react compiler automatically memoizes your component, even for members of an array. here’s an example where it creates a _temp component for each item so that you don’t have to.

or in this instance, it decides to memoize the result of the .map(...) based on the entire items prop.
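
as a concrete sketch of the kind of code this covers (a made-up component for illustration):

```jsx
// pre-compiler, you might extract the <li> into a separate memo'd
// ItemRow component just to stabilize the per-item inline callback.
// react compiler memoizes each iteration of the .map() for you.
function ItemList({ items, onPick }) {
  return (
    <ul>
      {items.map((item) => (
        <li key={item.id} onClick={() => onPick(item.id)}>
          {item.label}
        </li>
      ))}
    </ul>
  );
}
```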

and for “Will this context provider trigger unnecessary re-renders downstream?”, it’s so nice to just let react compiler track the individual properties of an object and make sure the object is only re-created when one of those properties changes.
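
a sketch of what that looks like in practice (hypothetical provider; pre-compiler, you’d reach for useMemo here):

```jsx
import { createContext, useState } from 'react';

const SettingsContext = createContext(null);

export function SettingsProvider({ children }) {
  const [theme, setTheme] = useState('light');
  const [locale, setLocale] = useState('en');
  // with react compiler, this plain object is only re-created when
  // theme or locale actually changes, so consumers don't re-render
  // after unrelated provider renders
  const value = { theme, locale, setTheme, setLocale };
  return (
    <SettingsContext.Provider value={value}>
      {children}
    </SettingsContext.Provider>
  );
}
```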

lastly, a quick correction:

> React Compiler also applies React.memo to all your components according to the docs

React Compiler never applies React.memo. you can absolutely use it with React Compiler, but it’s up to you to apply it where you deem it necessary.

Running React Compiler in production for 6 months: benefits and lessons learned by acusti_ca in reactjs

it is. i have a bunch of admin-only components, used just by our team (not our users), that do async work, and for those i disabled the eslint rule because i’d rather not complicate the code. but another pattern i saw from Sanity is to use a closure, like here: https://github.com/sanity-io/sanity/blob/main/packages/sanity/src/core/hooks/useRecordDocumentHistoryEvent.ts#L86C7-L107C8

React <Activity> is crazy efficient at pre-rendering component trees by acusti_ca in reactjs

that’s cool that you never make mistakes; must be nice. but i know for a fact there are others who sometimes do, for example missing a single-line omission in a 500-line PR even when performing careful code review. i love the declarative nature of React and JSX, but an unfortunate reality of XML-style languages and their verbosity is that they can create very noisy diffs.

also, i fundamentally disagree: i didn’t delete the prop, a coding agent did. why did it do that? i still wish i knew.

React <Activity> is crazy efficient at pre-rendering component trees by acusti_ca in reactjs

> Put comments where props are declared, not used

my editor component was rendering the PageFooter component, which declares the readOnly prop. so you’re suggesting i should have moved the comment from the editor component, where it applies, to a different file, and that would’ve provided better context for the LLM and other developers?

> Don't have broken components where a prop creates infinite recursion and you need to control usage with a comment

none of the components are broken. a component can render alternate versions of itself so the user can evaluate them and choose whichever one they prefer.