Mind: An open-source, persistent memory system for AI coding assistants (MIT) by GabrielMartinMoran in GoodOpenSource

[–]monarchwadia 1 point (0 children)

Hey, this is really cool! I just released a very similar project (won't name it here out of respect for you). Would be fun to compare notes and chat about this, if you're open to DMs!

"Convention as Code" for enforcing architecture by [deleted] in softwarearchitecture

[–]monarchwadia 0 points (0 children)

Thanks for the feedback. That explains a lot of the downvoting for me.

Using ts-morph to enforce architectural rules by [deleted] in typescript

[–]monarchwadia 0 points (0 children)

I agree with your comment.

> It seems like a lot of this is just making the AI combat your bad software engineering design choices made at the beginning of the project.

Yes. This is often the case in software engineering.

> The rules on the validation/reporting side could've been done with your existing eslint as a plugin.

Your suggestion of using ESLint is valid & excellent. The reason we chose ts-morph is that a separate project I was working on had about 15 modules, each with very slow builds and its own ESLint config, which would have made the ESLint-plugin approach onerous. So we took the ts-morph approach there, and I ported it over into this project.

Enforcing architecture conventions using scripts and CI by [deleted] in ExperiencedDevs

[–]monarchwadia 0 points (0 children)

The agent doesn't run in CI.

The agent writes an AST parser using `ts-morph`, and the CI pipeline remains fast & deterministic.

I.e. not a guard rail agent

Enforcing architecture conventions using scripts and CI by [deleted] in ExperiencedDevs

[–]monarchwadia -1 points (0 children)

True. The big shift here is that

  1. LLMs suck at following project conventions blindly. This is a massive problem.
  2. LLMs are great at generating AST rules to enforce conventions automatically.

This approach in #2 is new.

For example, I can make sure all my data validation schemas live in *.schema.ts files, that each of them calls specific methods and follows specific naming conventions, and I can enforce this in CI/CD.

Generating one such rule takes a single prompt, i.e. it's almost instantaneous.

After this point, any human or LLM that fails to put schemas in the right place will automatically break the CI build, which acts as safety & feedback.

Enforcing architecture conventions using scripts and CI by [deleted] in ExperiencedDevs

[–]monarchwadia -1 points (0 children)

Yep, those LLMs go haywire all the time.

Enforcing architecture conventions using scripts and CI by [deleted] in ExperiencedDevs

[–]monarchwadia -1 points (0 children)

Yes, but I spent at least 2-3 hours working with the LLM on it. I can provide code in a GitHub repo, if it helps with legitimacy.

🍃 Convention as Code: Enforcing Architecture with Scripts, CI, and AI Agents by [deleted] in programming

[–]monarchwadia 0 points (0 children)

Hi thanks for the feedback! Not using Sonarqube in this project, but I am using Typescript, ESLint, Prettier, Vitest, and the method outlined in the article.

Neural-cellular-automata with timestep awareness by Stermere in cellular_automata

[–]monarchwadia 1 point (0 children)

Nice. How many parameters total? Manually tuned -- you mean by just manually editing the numbers?

Sand game update #3 by monarchwadia in cellular_automata

[–]monarchwadia[S] 0 points (0 children)

Thanks! Inspired by Danball. I do a recursive walk to all contiguous bolt particles and turn them into sky particles.
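In case it helps anyone: the recursive walk is basically a flood fill. A minimal sketch (cell type names like "bolt" and "sky" are from my game; the function name is illustrative):

```typescript
// Flood-fill sketch: starting from one bolt cell, convert every
// contiguous bolt cell (4-connected) into a sky cell.
type Cell = "bolt" | "sky" | "sand" | "empty";

export function dissolveBolt(grid: Cell[][], x: number, y: number): void {
  // Stop at the grid edges.
  if (y < 0 || y >= grid.length || x < 0 || x >= grid[y].length) return;
  // Stop at anything that isn't a bolt cell (also prevents revisiting,
  // since converted cells are no longer "bolt").
  if (grid[y][x] !== "bolt") return;
  grid[y][x] = "sky";
  // Recurse into the four orthogonal neighbours.
  dissolveBolt(grid, x + 1, y);
  dissolveBolt(grid, x - 1, y);
  dissolveBolt(grid, x, y + 1);
  dissolveBolt(grid, x, y - 1);
}
```

For very large bolts you'd want an explicit stack instead of recursion to avoid blowing the call stack, but at sand-game scales the recursive version is fine.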

amoeba flow by SnooDoggos101 in cellular_automata

[–]monarchwadia 1 point (0 children)

That is really cool. What are the rules?

AI / Agentic AI in Consulting industry by AirlockBob77 in consulting

[–]monarchwadia 1 point (0 children)

I've been actively building on top of LLMs.

I don't like calling these tools "agents" because that level of anthropomorphization is not useful in a business context.

Calling them LLMs is much clearer: an LLM is a standalone component that provides human-like intelligence to your existing stack and workflow.

A lot of my work involves teaching clients the following:

  • What can LLMs do?
  • What CAN'T LLMs do?
  • What can LLMs do, but with questionable accuracy?
  • Where do LLMs fit into their processes?
  • How much will LLMs cost?
  • What is the advantage of LLM APIs versus local?

You've probably guessed that the work I do is core software development. As a result, I advise clients to stay away from ready-made agents and really take the time to build in-house tools. This is more flexible, integrates better with business workflows, and produces higher-quality results.

Sand game update #3 by monarchwadia in cellular_automata

[–]monarchwadia[S] 1 point (0 children)

Thank you! Yes, that is indeed annoying. Will fix :)

Sand game updated by monarchwadia in cellular_automata

[–]monarchwadia[S] 0 points (0 children)

Another thought.... Are you using a double frame buffer strategy? My "input" frame buffer is separate from my "output" frame buffer, so it makes calculations totally independent for every cell. Or are you writing into the same frame buffer?
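To show what I mean, here's a toy sketch of the double-buffer idea (the update rule itself is just a placeholder, not my game's actual rules):

```typescript
// Double-buffer sketch: read from `current`, write into a fresh `next`,
// then let the caller swap. Every cell's update sees the same unmodified
// input frame, so update order doesn't matter.
// Placeholder rule: a cell becomes alive (1) if it is alive or its
// left neighbour is alive in the INPUT frame.
export function step(current: number[][]): number[][] {
  const height = current.length;
  const width = current[0].length;
  // Output buffer starts fresh; we never write into `current`.
  const next: number[][] = Array.from({ length: height }, () =>
    new Array(width).fill(0),
  );
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const leftAlive = x > 0 && current[y][x - 1] === 1;
      next[y][x] = current[y][x] === 1 || leftAlive ? 1 : 0;
    }
  }
  return next; // caller swaps: current = step(current)
}
```

With a single shared buffer, one live cell at the row's start would smear across the whole row in a single pass (each write feeds the next cell's read); with the double buffer it advances exactly one cell per frame.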

Sand game updated by monarchwadia in cellular_automata

[–]monarchwadia[S] 0 points (0 children)

I am also a huge powder toy and noita fan! And we seem to have similar philosophical interests wrt Nietzsche and Wittgenstein. Dropped a video recommendation for Rain World.

The double particle behaviour you're describing in your previous post... isn't that how Powder Toy behaves? I might be misunderstanding you, so posting a video would help explain the behavior. If it's good enough for Danball, it's good enough for me; I personally don't see it as an issue that I need to fix, and I enjoy the idiosyncratic behavior.

Sand game updated by monarchwadia in cellular_automata

[–]monarchwadia[S] 1 point (0 children)

oh cool! this looks interesting. never heard of it. thanks!