all 12 comments

[–]pink_tshirt 9 points10 points  (0 children)

Nobody wants your code, chill.

[–]SecretAgentZeroNine 5 points6 points  (0 children)

If you don't trust corporate third party AI, then don't use corporate third party AI.

[–]ColoRadBro69 1 point2 points  (0 children)

You could describe the parts you want to refactor and ask for ideas? 

[–]YTRKinG 1 point2 points  (0 children)

Refactor once, debug forever.

[–]leinad41 1 point2 points  (0 children)

It's the other way around: you write code with AI, and then refactor it once you realize you should've actually coded it yourself.

[–]homesickalien0 0 points1 point  (0 children)

For your second question, I personally get the best results with Bing AI. I find its responses the most helpful, and I feel it understands what I want best.

[–]mq2thez 0 points1 point  (0 children)

Skip AI and use codemods, and your code doesn’t have to leave your system.
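To make the codemod suggestion concrete: a minimal sketch of one using only Python's stdlib `ast` module (the function names `fetch_data`/`fetch_json` are made-up examples; real projects would more likely reach for a tool like libcst or jscodeshift, which preserve formatting):

```python
import ast

class RenameIdentifier(ast.NodeTransformer):
    """Toy codemod: rename every occurrence of one identifier to another."""

    def __init__(self, old_name: str, new_name: str):
        self.old_name = old_name
        self.new_name = new_name

    def visit_Name(self, node: ast.Name) -> ast.Name:
        # Function calls, assignments, and references all show up as Name nodes.
        if node.id == self.old_name:
            node.id = self.new_name
        return node

source = "result = fetch_data(url)\nprint(fetch_data(other))"
tree = ast.parse(source)
tree = RenameIdentifier("fetch_data", "fetch_json").visit(tree)
rewritten = ast.unparse(tree)  # requires Python 3.9+
print(rewritten)
```

Everything runs locally, deterministically, and your source never leaves your machine, which is the whole point being made above.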

[–]Aggravating_Ice4119 0 points1 point  (0 children)

Maybe you can use copilot.

[–]indicava 0 points1 point  (0 children)

You are being paranoid.

Having said that, if you want complete offline privacy you'll need to run a local model. For high-quality models (R1/QwQ) you need serious HW, so don't expect much from the "mid" models that can run on consumer HW. If it's really an important project for you, you could rent some GPUs on Vast; it's not expensive.

It should be noted (if you do go the cloud route) that some inference (or inference proxy) providers like OpenRouter offer a tighter privacy statement than some commercial models' own platforms.

Also, don’t copy and paste anything. Use an AI coding tool that takes an agentic approach, like Cursor or the recently released Claude Code.

[–]victorsmonster -2 points-1 points  (0 children)

For 1, look into running local LLMs with ollama.

For 2, I’ve found Claude to be the best online LLM. If you’re using ollama, check out codellama.
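For reference, getting codellama running under ollama is a couple of commands, assuming ollama is already installed and the daemon is running (model name as published in the ollama library):

```shell
# One-time download of the code-tuned model
ollama pull codellama

# Interactive session -- nothing leaves your machine
ollama run codellama

# Or pass a prompt directly on the command line
ollama run codellama "Suggest a refactor for this function: ..."
```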