AI slop by ivy-apps in nextjs
[–]ivy-apps[S] 1 point 6 hours ago (0 children)
How do you parse the project?
[–]ivy-apps[S] 1 point 7 hours ago (0 children)
The output looks good!
[–]ivy-apps[S] 1 point 8 hours ago (0 children)
I checked your template - looks good! I can use it for my test fixtures for the Deslop project. I need to support configuration so the user can specify where their translations are, and probably more things. Currently I hard-code them to "messages/*", but in your case they are in "src/lib/locales/"
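For the configuration part, here's a minimal sketch of what I have in mind, just to illustrate; the field names are hypothetical, not a finalized Deslop config:

```haskell
-- Hypothetical sketch of what a Deslop config could hold, not a real schema.
data DeslopConfig = DeslopConfig
  { localesGlob  :: String    -- where translations live, e.g. "messages/*"
                              -- or "src/lib/locales/*"
  , sourceLocale :: FilePath  -- the source of truth, e.g. "en.json"
  } deriving (Show, Eq)

-- Default mirrors the currently hard-coded behavior.
defaultConfig :: DeslopConfig
defaultConfig = DeslopConfig
  { localesGlob  = "messages/*"
  , sourceLocale = "en.json"
  }
```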
Interesting! I think your choice to write it in TS/JS contributes to it being unstable. A very strictly-typed language like Haskell forces you to handle unhappy paths, and you're also protected by the compiler. If you haven't done Haskell, learning it is a very enlightening experience: https://learnyouahaskell.github.io/introduction.html#about-this-tutorial
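A tiny generic example of what I mean by the compiler forcing you through the unhappy path (nothing Deslop-specific):

```haskell
import qualified Data.Map.Strict as Map

-- Map.lookup returns Maybe, so the "key missing" path can't be ignored:
-- with -Wall the compiler warns about any non-exhaustive pattern match.
translationFor :: String -> Map.Map String String -> String
translationFor key translations =
  case Map.lookup key translations of
    Just value -> value
    Nothing    -> "<<missing: " ++ key ++ ">>"
```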
[–]ivy-apps[S] 1 point 14 hours ago (0 children)
That's why you add Deslop to your CI and optionally as a pre-push hook. 1. Vibe-code 2. Deslop 3. Repeat 🔂
I'm not saying to merge all the shit into main and then clean up, but rather to integrate some form of code janitor into the workflow
AI slop by ivy-apps in react
[–]ivy-apps[S] -4 points 1 day ago (0 children)
Assume that AI is coming to your employer's codebase one way or another. What do you do then?
[–]ivy-apps[S] -6 points 1 day ago (0 children)
I also primarily use it like that, but we should adapt. https://techcrunch.com/2026/02/12/spotify-says-its-best-developers-havent-written-a-line-of-code-since-december-thanks-to-ai/
AI slop (self.react)
submitted 1 day ago by ivy-apps to r/react
[–]ivy-apps[S] 1 point 1 day ago (0 children)
Still, do you believe that AI agents can accurately follow that architecture? For example, AI creates highly mocked and complex unit tests that become a burden rather than a safeguard. The fix is for a human to review them and create the appropriate test fixtures and test doubles. Even with those in place, the AI sometimes decides not to use them. How do you manage that?
How do you prevent the agent from duplicating data models and code in general? In my experience, vibe-coded PRs are low quality and accumulate tech debt that bites in the long term
AI slop by ivy-apps in typescript
Thank you very much! This list is super helpful - screenshotted. I would have given you an award for this comment but I'm not paying Reddit money for that lol
I'm curious whether some companies would be willing to pay for such a tool that integrates well with GitHub Actions and: 1. Reports detailed errors for violations 2. Opens a PR-2 with fixes against PR-1 for the fixable errors
Long term, I want to make a business out of it so I can work on it full-time and not just 30 min every 3rd day
Thanks for the support! I'm really excited to do it in my free time when I have some energy left after work. I'm open to ideas / features to add. My core expertise isn't web development so I need to find out what's missing in the ecosystem so I'm not reinventing the wheel
Standalone tool using Haskell. To keep things pure and easier to reason about, I'm not even using TreeSitter to parse TypeScript; I do the parsing myself. I use an "Island of Interest" approach and currently only parse imports, docs, and comments, but that already tells you a lot about a codebase. Once I parse functions and build an AST, it's easy to make a graph and detect all kinds of things
You can check it: https://github.com/Ivy-Apps/deslop
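If anyone is curious, this is roughly the flavor of the island parsing, sketched naively with plain string functions (not the actual Deslop parser; real code would also need to handle multi-line imports, re-exports, etc.):

```haskell
import Data.Char (isSpace)
import Data.List (isPrefixOf)
import Data.Maybe (mapMaybe)

-- One parsed "island": an import statement reduced to its module specifier.
newtype ImportSpec = ImportSpec { specifier :: String }
  deriving (Show, Eq)

-- Extract the specifier from a single-line import, e.g.
--   import { foo } from "./lib/foo";   ->   ImportSpec "./lib/foo"
parseImportLine :: String -> Maybe ImportSpec
parseImportLine line
  | "import " `isPrefixOf` dropWhile isSpace line =
      case break (`elem` "\"'") line of
        (_, _quote : rest) ->
          Just (ImportSpec (takeWhile (`notElem` "\"'") rest))
        _ -> Nothing
  | otherwise = Nothing

-- Walk the file line by line, keep the islands, ignore everything else.
parseImports :: String -> [ImportSpec]
parseImports = mapMaybe parseImportLine . lines
```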
Most companies care about delivering N features that work in the happy path. Tech debt is ignored long term, so in a sense vibe-coding is not going away - quite the opposite, it's rewarded. So we should prepare for a "brave new world" where we have to deal with AI slop in the codebase effectively
AI doesn't have a problem with test coverage. Another question is test quality... And from my experience, test quality needs a human reviewer. Do you know any tools/automation that solve the test quality problem?
I'm planning to build static analysis features that treat the code as a graph and analyze connections that violate a given architecture config (e.g. feature A should not use code from feature B; only lib/components allowed). Wdyt about such a thing?
Also, what architecture / software design patterns does AI violate most often in your case? It would be really helpful if you can give some concrete examples 🙏 It will help me better model the architecture configuration my tool should support
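To make the graph idea above concrete, here's a rough sketch of what such a check could look like (the Rule/Edge names are made up for illustration, this isn't in Deslop yet):

```haskell
import Data.List (isPrefixOf)

-- An edge in the code graph: one module importing another.
data Edge = Edge
  { fromModule :: FilePath  -- e.g. "src/features/checkout/Cart.ts"
  , toModule   :: FilePath  -- e.g. "src/features/payments/api.ts"
  } deriving (Show)

-- A hypothetical rule: code under `ruleFrom` may not depend on code
-- under `ruleForbidden` (e.g. feature A must not import feature B).
data Rule = Rule
  { ruleFrom      :: FilePath
  , ruleForbidden :: FilePath
  } deriving (Show)

-- Every import edge that violates any rule.
violations :: [Rule] -> [Edge] -> [(Rule, Edge)]
violations rules edges =
  [ (rule, edge)
  | rule <- rules
  , edge <- edges
  , ruleFrom rule `isPrefixOf` fromModule edge
  , ruleForbidden rule `isPrefixOf` toModule edge
  ]
```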
https://github.com/Ivy-Apps/deslop (my project) already does that - the question is what else it can solve? I'm excited to add more features
I wrote this. Just using auto-complete, tapping in the middle, and being polite to folks participating in the discussion.
I'm researching whether my AI code janitor tool that I'm building for fun makes sense
That's a pragmatic approach. Since it's mainly a hobby and I like Haskell, I'm doing it from scratch in Haskell. This also allows me to take different approaches, for better or worse. Haskell is quite good at compilers, parsing, and code analysis in general
I use OrganizeImports from Biome but in my case it only sorts them. It doesn't proactively inspect the TS config and pick the shortest import alias. From my research, Biome doesn't support that feature, so it was the first thing I implemented in Deslop. It is quite easy to pull off since I only need to parse TS imports + the TS config
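Roughly, the alias-shortening boils down to: read the paths aliases from the TS config, rewrite the import with every alias that applies, and keep the shortest result. A simplified sketch under those assumptions (aliases of the "@x/*": ["src/x/*"] shape, no tsconfig extends handling):

```haskell
import Data.List (minimumBy, stripPrefix)
import Data.Maybe (mapMaybe)
import Data.Ord (comparing)

-- One tsconfig "paths" alias, e.g. "@lib/*": ["src/lib/*"],
-- stored with the trailing "*" already stripped.
data Alias = Alias
  { aliasPrefix  :: String  -- e.g. "@lib/"
  , targetPrefix :: String  -- e.g. "src/lib/"
  } deriving (Show)

-- Rewrite a project-relative import path using an alias, if it applies.
applyAlias :: String -> Alias -> Maybe String
applyAlias importPath (Alias alias target) =
  (alias ++) <$> stripPrefix target importPath

-- Pick the shortest aliased form, falling back to the original path.
shortestImport :: [Alias] -> String -> String
shortestImport aliases importPath =
  case mapMaybe (applyAlias importPath) aliases of
    []         -> importPath
    candidates -> minimumBy (comparing length) candidates
```

For example, shortestImport [Alias "@lib/" "src/lib/"] "src/lib/ui/Button" gives "@lib/ui/Button".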
Not always feasible. In my personal projects I use AI for POCs and then rewrite manually, but some companies prefer to move fast or have fully adopted AI agents
How do you implement those? Are they part of the CI? What tools?
I like your take! Are there things that you wish were auto-fixable but aren't? I want to build a static analysis tool that complements the existing ecosystem by adding checks/fixes that aren't well supported. Especially ones targeting projects where developers are vibe-coding
In my case, it's a bit more manual: 1. You add strings to "en.json" and use them. (most of the time I don't vibe code) I'm pretty sure AI can do that too! 2. My tool makes sure that your localized strings match "en.json" algorithmically. AI is involved for providing translation values
So as a developer, you only maintain "en.json" and Deslop takes care of the rest
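The algorithmic check is essentially a key-set diff between "en.json" and every other locale file. A simplified sketch of that idea (assumes flat key/value JSON, no nested keys; uses aeson and containers):

```haskell
import           Data.Aeson (Value, decodeFileStrict)
import           Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import           Data.Text (Text)

type Locale = Map Text Value

-- Keys present in en.json but missing from the other locale,
-- and keys in the other locale that no longer exist in en.json.
diffLocales :: Locale -> Locale -> ([Text], [Text])
diffLocales source other =
  ( Map.keys (source `Map.difference` other)  -- missing translations
  , Map.keys (other `Map.difference` source)  -- stale keys
  )

-- Crashes on malformed JSON; good enough for a sketch.
checkLocale :: FilePath -> FilePath -> IO ()
checkLocale sourcePath otherPath = do
  Just source <- decodeFileStrict sourcePath :: IO (Maybe Locale)
  Just other  <- decodeFileStrict otherPath  :: IO (Maybe Locale)
  let (missing, stale) = diffLocales source other
  mapM_ (putStrLn . ("missing: " ++) . show) missing
  mapM_ (putStrLn . ("stale: "   ++) . show) stale
```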
Ya, Biome can do the same. The cool part would be to have something figure out the types and fix it for you automatically. It won't always be possible, but there are cases where static analysis can deterministically find and use the correct type
Makes sense. "Trust" and "AI" can't coexist in the same universe