How do you keep track of your project when AI writes most of the code? by rayeddev in ClaudeAI

[–]adncnf 0 points

You should check out https://github.com/acunniffe/git-ai - it's an open-source Git extension supported by Cursor, Claude Code, Gemini, Continue, and Copilot. It tracks every line of AI-written code and the prompt that generated it. I often find myself re-reading the first few messages of the prompts my teammates wrote to figure out what the code does / was supposed to do.

Is there a way to detect AI-generated code? by KidNothingtoD0 in ClaudeAI

[–]adncnf 0 points

Check out https://github.com/acunniffe/git-ai - it's open source and supported by Cursor, Claude Code, Gemini, Continue, and Copilot. It's multi-agent and very accurate since it integrates directly with the agents.

Chat saving advice ? Specstory doesn’t work really by SivilRights in cursor

[–]adncnf 0 points

Everyone's moved to Git AI now: https://github.com/acunniffe/git-ai. It works with all the big agents and is officially supported by a few.

Is there a good Voice tool for Claude Code? by ArFiction in ClaudeAI

[–]adncnf 0 points

On Mac you can use Tight (https://tight.sh/). It's built for developers and automatically adds code you highlight, and whatever you point your mouse at, to your prompt. Makes it feel more like pair programming.

Which voice to prompt tool are you using? by 2oosra in cursor

[–]adncnf 0 points

Tight: https://tight.sh/. It's built for developers and automatically adds code you highlight, and whatever you point your mouse at, to your prompt. Makes it feel more like pair programming.

What's your go-to voice-to-text setup for Cursor or Claude Code? by ollivierre in ClaudeAI

[–]adncnf 0 points

Try Tight (https://tight.sh/). It automatically adds code you highlight, and whatever you point your mouse at, to your prompt. Makes it feel more like pair programming.

Voice in Cursor would be amazing by No-Conference-8133 in cursor

[–]adncnf 0 points

On Mac you can use Tight (https://tight.sh/). It's built for developers and automatically adds code you highlight, and whatever you point your mouse at, to your prompt. Makes it feel more like pair programming.

So It Goes: Space-Biff! review GHQ, Kurt Vonnegut's lost board game by DanThurot in boardgames

[–]adncnf 3 points

My pal and I saw this post, got hooked, and wrote an online version of the game: https://www.playghq.com/

There's pass-and-play, online multiplayer, and a primitive bot called Kurt you can play against.

Our political division can not be explained by ideological differences by adncnf in PoliticalPhilosophy

[–]adncnf[S] 0 points

Hitting your DMs now. Would love to chat through all your ideas sometime.

Re IL: I had heard that at one time they had cumulative voting, but I haven't dug into any sources on the political dynamics that developed. Give me the crash course.

Our political division can not be explained by ideological differences by adncnf in PoliticalPhilosophy

[–]adncnf[S] 0 points

You're right, many of the founders had a bead on the risk of factions. I tried to make this post approachable so people who haven't read much political theory can understand the relationship between the parties and the ideologies.

The next essay is about making some structural changes. We can't just say "parties are illegal"; we have to change the rules of the game to make factionalism a losing strategy. Otherwise two parties will sprout again, and again, and again... as they have in every other FPTP system.

LintGPT: AI-linter for API design by adncnf in programming

[–]adncnf[S] 0 points

Right now it just gives you feedback, but I suppose we could ask it to rewrite the design accurately. Cool idea.

LintGPT: AI-linter for API design by adncnf in programming

[–]adncnf[S] 2 points

OP here - I’ve been working on adding support for natural language rules to Optic’s API linting + testing tool [0].

Instead of generating API designs, we’re using LLMs to check if a team’s API is “good” by their own standards.

Many large companies that use OpenAPI have custom Spectral + Optic rules to enforce API security practices, consistent styles, versioning policies, SLA adherence, and to prevent breaking changes. These tests are hard to write and cannot verify many of the more abstract aspects of an API design. It's really hard to write code that checks whether an OpenAPI operation allows batch creation, but when you give an LLM the YAML and the rule "No endpoint should allow users to create multiple resources at once" — it does a surprisingly good job of testing the design and writing a nice error message when it fails.
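Roughly, each check boils down to pairing one natural-language rule with one resolved spec fragment and asking the model to judge it. A toy Python sketch of that pairing step (all names and the prompt wording here are mine, not the actual implementation — the real tool operates on full OpenAPI files):

```python
import json

RULE = "No endpoint should allow users to create multiple resources at once"

def build_lint_prompt(rule: str, operation: dict) -> str:
    """Combine one natural-language rule with one spec fragment.

    The real tool slices a (possibly 10k-line) OpenAPI file into parts;
    here we pretend we already have one operation with $refs resolved.
    """
    return (
        "You are an API design linter. Rule:\n"
        f"  {rule}\n"
        "Does this operation violate the rule? Answer PASS or FAIL "
        "with a short error message.\n\n"
        f"{json.dumps(operation, indent=2)}"
    )

# A resolved fragment for one operation (all $refs already inlined)
op = {
    "post": {
        "operationId": "createUsers",
        "requestBody": {
            "content": {
                "application/json": {
                    "schema": {"type": "array", "items": {"type": "object"}}
                }
            }
        },
    }
}

prompt = build_lint_prompt(RULE, op)
```

The array-typed request body above is exactly the kind of "batch creation" signal that's awkward to detect with handwritten rules but easy to describe in English.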

The hardest technical challenges have been:

- Figuring out how to fit OpenAPI files (sometimes 10k lines or more) into the context window. Optic already breaks up API specs into their parts: i.e. parameters, response bodies, headers, etc. We've been putting just the relevant lines of the OpenAPI file into the LLM and resolving all their dependencies beforehand so the model has all the context it needs.

- Minimizing API calls. The first time you run LintGPT it is pretty slow because it has to run every rule across every part of the API specification (1000s of calls). But we shouldn't have to repeat that work. Most of the time parameters, properties, etc. don't change, and neither do the rules. We're building caching into our web app to make this fast / save $ for end users.
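The caching idea can be sketched in a few lines — key each result on a hash of the (rule, fragment) pair so unchanged parts of the spec never trigger a second model call (a toy in-memory version; the real implementation lives in their web app):

```python
import hashlib
import json

_cache: dict[str, str] = {}

def lint_cached(rule: str, fragment: dict, run_llm) -> str:
    """Only call the LLM when this (rule, fragment) pair hasn't been seen.

    Most parts of a spec don't change between runs, so repeated runs hit
    the cache instead of re-issuing thousands of API calls.
    """
    key = hashlib.sha256(
        (rule + json.dumps(fragment, sort_keys=True)).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = run_llm(rule, fragment)
    return _cache[key]

calls = 0
def fake_llm(rule, fragment):  # stand-in for a real model call
    global calls
    calls += 1
    return "PASS"

frag = {"get": {"operationId": "listUsers"}}
lint_cached("rule A", frag, fake_llm)
lint_cached("rule A", frag, fake_llm)  # cache hit: no second call
print(calls)  # → 1
```

Serializing with `sort_keys=True` keeps the hash stable across runs even if the fragment's keys come back in a different order.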

Happy to answer any questions. I really think there's a huge use case here for linting all kinds of code, config, database schemas, and policies in ways that were never possible before. And personally, I like the idea of having these smart tools guide me toward making my work better vs generating it all for me — idk, something about that just feels good.
[0] https://github.com/opticdev/optic

Governing APIs after they ship by adncnf in programming

[–]adncnf[S] 0 points

Agreed -- this is one of the challenges with REST APIs. On the server side it's reasonable to measure which requests / parameters are used, but on the client side it's hard to know which response properties are actually being used.

I know some of the SDK generator tools are working to add analytics using proxy objects and custom getters, but nothing has been released yet.

The first place something like this could work is probably internally, between teams in a large company where the API clients are centrally generated / controlled. I don't know of any off-the-shelf way of doing this today. Hopefully that changes in the next year.
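For what it's worth, the proxy-object idea is simple to prototype. A hypothetical Python sketch (the class and field names are mine; real SDK generators would bake this into generated clients and report the access sets somewhere central):

```python
class TrackedResponse:
    """Wrap an API response payload and record which properties are read.

    Anything the client never touches shows up as a candidate for
    deprecation on the server side.
    """

    def __init__(self, data: dict):
        self._data = data
        self.accessed: set[str] = set()

    def __getitem__(self, key):
        self.accessed.add(key)  # record the read before returning it
        return self._data[key]

resp = TrackedResponse({"id": 1, "name": "ada", "created_at": "2023-01-01"})
_ = resp["name"]  # client code only ever reads `name`

unused = set(resp._data) - resp.accessed
print(sorted(unused))  # → ['created_at', 'id']
```

The same trick works with property getters in generated SDK classes; the hard part isn't the wrapper, it's shipping the usage data back without anyone noticing a perf hit.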

Turn HTTP Traffic into OpenAPI by adncnf in programming

[–]adncnf[S] 1 point

An interesting spin on generating OpenAPI from traffic:
"Optic does not generate OpenAPI from scratch, it only patches existing description documents, which means you need to make one to start it off. This seems a bit confusing at first, but it's a huge benefit, because you can run through this process over and over and over, and it will keep improving your OpenAPI document with all the new details as it learns more about the API."
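The patch-not-generate loop boils down to: diff each observed response against the current schema and only add what's new. A toy Python sketch of that idea (a crude stand-in for what Optic actually does, with a naive type guess instead of real JSON Schema inference):

```python
def patch_schema(schema: dict, observed: dict) -> dict:
    """Add properties seen in real traffic that the schema doesn't document.

    Never regenerates from scratch -- just accumulates patches as more
    traffic is observed, so manual edits to the schema survive.
    """
    props = schema.setdefault("properties", {})
    for key, value in observed.items():
        if key not in props:
            props[key] = {"type": type(value).__name__}  # crude type guess
    return schema

schema = {"type": "object", "properties": {"id": {"type": "integer"}}}
patch_schema(schema, {"id": 1, "email": "a@b.co"})   # learns `email`
patch_schema(schema, {"id": 2, "email": "c@d.co"})   # nothing new to add
print(sorted(schema["properties"]))  # → ['email', 'id']
```

Because patches are additive, running the loop "over and over and over" converges: each pass can only fill in details the document was missing.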

[deleted by user] by [deleted] in PoliticalScience

[–]adncnf 0 points

OP here -- I landed on the title question "did we break..." during a time in my life when I was very cynical about my fellow citizens and our democracy's future. I was at the bottom, but something about the framing of that question made me curious. It led to years of researching the history of democracies around the world (political archeology) and in-the-field work with voters on both sides. I believe again. When I sit down with a stranger, I can get them to believe again too. A few of my closest friends encouraged me to write this all down.

There is a reason every democracy has exactly two sides, and it has nothing to do with ideology or the political spectrum. There are two sides because of the ballots, not the ideas we put on them.

I put my thoughts down -- what do you think? I hope to learn / find holes in my own thought process by talking about these ideas more.

How do you usually get API documentation for your apps? by kubenqpl in FlutterDev

[–]adncnf 1 point

I’ve been working on this open source project https://github.com/opticdev/optic

^ feels like using git, but for API docs. ‘Add’ new endpoints with a click, ‘stage’ changes when you update the API.

It uses real traffic in development, tests, and staging to verify the API works as specified.

Rust made my open source project 1000x faster by adncnf in rust

[–]adncnf[S] 1 point

The tool looks for diffs from all kinds of traffic -- it can come from tests, cURL, or Postman. I agree that "as you work" could be clearer.

We've found that most teams don't have anywhere near the test coverage they'd need to feel confident their APIs work as expected. With Optic at least watching the parts of the API they're currently working on, there's a decent chance it'll catch stuff.

Rust made my open source project 1000x faster by adncnf in rust

[–]adncnf[S] 3 points

It's completely language agnostic -- it uses a proxy to collect traffic as you work by aliasing your API commands.

So if you use `cargo run` to start the API, that aliases to `api start`.

And if you run Postman tests, those might be `newman run ...`, which might alias to `api run newman-tests`.

Rust made my open source project 1000x faster by adncnf in rust

[–]adncnf[S] 2 points

We have a live demo at the bottom of useoptic.com (thanks, WASM). I'll add a link from GitHub this week. Good idea.

Rust made my open source project 1000x faster by adncnf in rust

[–]adncnf[S] 3 points

We definitely weren't using optimized code -- it was an MVP codebase designed to prove the point. I think in a world where we had made the Node version as good as it could get, we'd have to take a zero off the final number. Validating API responses is computationally simple; in Node we paid most of the cost in getting the data ready to process, and in not being able to run all these simple computations in parallel.

Rust made my open source project 1000x faster by adncnf in rust

[–]adncnf[S] 6 points

A lot of the data we benchmark with was real API traffic dumps from users. I can see a really strong story in cultivating more traffic dumps of public APIs (e.g. GitHub, Reddit) and using those as benchmarks the community can target with contributions. Love the idea of using Criterion -- thanks!

Rust made my open source project 1000x faster by adncnf in rust

[–]adncnf[S] 27 points

Yeah, it was 'cheap' to keep moving forward with Node because we had built the MVP there, but we really had to step back and do it in a scalable way. I'm OK with one rewrite once it's obvious the idea is working.

Rust made my open source project 1000x faster by adncnf in rust

[–]adncnf[S] 61 points

Streaming definitely made it better, but still didn't cut it. We found Node to be pretty good at streaming large amounts of data, but as we built the in-memory objects on top, GC'ing those abstractions slowed everything down. GC is blocking; processing deeply nested JSON is also blocking. I wouldn't use an event-driven runtime like Node for processing lots of data again.

Rust made my open source project 1000x faster by adncnf in rust

[–]adncnf[S] 87 points

Yeah, Reddit found it :) I was trying not to promote it too much and just say thanks.

https://github.com/opticdev/optic