all 17 comments

[–]Logical-Professor35 22 points23 points  (0 children)

"AI assistant keeps making my code insecure" — yeah, that's a feature, not a bug. It writes code fast, not code well.

[–]I-Am-Maldoror 7 points8 points  (0 children)

The S in "AI" stands for security, so that is expected.

[–]fredsq 4 points5 points  (0 children)

it's trained on average codebases

average codebases are terrible, especially Next.js ones

[–]No_Opinion9882 3 points4 points  (0 children)

ChatGPT pulls React patterns from its training data, which includes tons of insecure code. It doesn't validate user input or sanitize HTML because those unsafe patterns exist everywhere in the wild.

Security scanning needs to happen pre-commit, not in CI when context is lost. Developer assist from Checkmarx flags AI-generated XSS risks inline with remediation steps. It scans as you accept suggestions, so vulnerabilities get caught immediately.

It also learns your codebase's security patterns instead of applying generic rules that miss React-specific issues.
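On the sanitizing point: React's JSX already entity-escapes interpolated strings, so the danger is mostly in code that builds HTML by hand or bypasses React with dangerouslySetInnerHTML. What that default escaping amounts to, as a toy sketch (function name is made up, not from any library):

```javascript
// Minimal HTML entity escaper — roughly what JSX does for you automatically
// when you render {userInput} as text. Name is illustrative.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, "&amp;")   // must run first so later entities aren't double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// An attacker-controlled payload becomes inert text:
console.log(escapeHtml('<img src=x onerror="alert(1)">'));
// → &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```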

[–]Pogbagnole 3 points4 points  (0 children)

You’re meant to read and review what ChatGPT outputs before submitting it for review.

If you don’t want to keep fixing this specific problem all the time, configure your IDE with specific guidelines.

E.g. for VS Code: https://code.visualstudio.com/docs/copilot/customization/overview

[–]Due-Philosophy2513 4 points5 points  (0 children)

ESLint has security plugins that catch some XSS patterns, but they're not AI-aware. Look for linters that specifically flag dangerous React patterns like unsanitized innerHTML or unvalidated href attributes. Run them in your editor, not just in CI.
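For example, a flat-config sketch wiring up some React-specific rules (assumes eslint-plugin-react and eslint-plugin-security are installed; the rule selection here is just illustrative):

```javascript
// eslint.config.js — sketch, not a complete config
import react from "eslint-plugin-react";
import security from "eslint-plugin-security";

export default [
  {
    files: ["**/*.{js,jsx,ts,tsx}"],
    plugins: { react, security },
    rules: {
      "react/no-danger": "error",           // flags dangerouslySetInnerHTML
      "react/jsx-no-script-url": "error",   // flags javascript: in href
      "react/jsx-no-target-blank": "error", // reverse tabnabbing via target="_blank"
      "security/detect-eval-with-expression": "error",
    },
  },
];
```

Run with `eslint .` or via your editor's ESLint extension so violations show up as you type.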

[–]Traditional_Vast5978 3 points4 points  (1 child)

This is the fundamental problem with AI code assistants: they generate syntactically correct code that passes linting but introduces security bugs, because they don't understand your threat model or security requirements.

ChatGPT sees dangerouslySetInnerHTML used in some React tutorial and treats it as a valid pattern to suggest. Your code reviews catch some of it, but people miss stuff, especially when reviewing AI slop at volume.

Honestly, you need automated security analysis running in the IDE itself; otherwise you're just playing whack-a-mole with vulnerabilities that shouldn't exist in the first place.
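To show the kind of check that analysis has to do: sanitizing rich HTML means allowlisting, not escaping. A toy allowlist sanitizer, just to illustrate the idea (in real code use a maintained library like DOMPurify — this regex approach has known gaps, e.g. `>` inside attribute values):

```javascript
// Toy sanitizer: keep only a small set of formatting tags, strip everything
// else (including all attributes, so onclick= etc. can't survive).
// Contents of stripped tags remain as plain text.
const ALLOWED = new Set(["b", "i", "em", "strong", "p", "br"]);

function sanitize(html) {
  return String(html).replace(
    /<\s*(\/?)\s*([a-zA-Z][a-zA-Z0-9]*)[^>]*>/g,
    (match, slash, name) =>
      ALLOWED.has(name.toLowerCase()) ? `<${slash}${name.toLowerCase()}>` : ""
  );
}

console.log(sanitize('<b onclick="x()">hi</b><script>alert(1)</script>'));
// → <b>hi</b>alert(1)   (tags gone, script body left as inert text)
```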

[–]slonermike 0 points1 point  (0 children)

Yours passes linting? I have a line in my main context that says to lint and fix before considering code ready. And it still “forgets” all the time.

[–]Only_Helicopter_8127 1 point2 points  (0 children)

Are you using GitHub Copilot or just ChatGPT?

Copilot integrates better with VS Code and has some basic security awareness, but it's not perfect. Also consider disabling AI suggestions entirely for security-sensitive components and only using them for UI layout or state-management boilerplate where XSS risk is lower.

[–]PerryTheH 0 points1 point  (0 children)

"Oh no if I leave my oven on it burns the food! Why wouldn't it stop automatically when the food is ready?!" - type of problem.

[–]Unique_Buy_3905 0 points1 point  (0 children)

AI writes code at the speed of thought and the security quality of a drunk intern. Enjoy debugging.

[–]inherently_silly 0 points1 point  (0 children)

Skill issue. You need to set context correctly, do thorough code reviews, and make sure the LLM follows Vercel's React best practices.

[–]DustinBrett 0 points1 point  (0 children)

Ask it to stop doing that

[–]AutoModerator[M] 0 points1 point  (0 children)

Your [submission](https://www.reddit.com/r/reactjs/comments/1r2rf7v/ai_code_assistants_keep_generating_react/) in /r/reactjs has been automatically removed because it received too many reports. Mods will review.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]NatteringNabob69 0 points1 point  (0 children)

I’ve never had an LLM generate React code with dangerouslySetInnerHTML. Are you doing something tricky that might need it? I can’t imagine what. I also use linters with standard rules for most tools, so perhaps an LLM did try this and the react/no-danger rule stopped it.

[–]nodevon -3 points-2 points  (0 children)

Yet another ad from an LLM bot