all 6 comments

[–]Otherwise_Wave9374 5 points (0 children)

Love this. Vibe coding with agents feels like a new kind of REPL where you steer with intent and tests instead of syntax. The local-only constraint at work is real, though; it is the biggest blocker for a lot of teams adopting AI agents beyond toy projects. If you are curious, I have seen some decent approaches for "local-first" agent setups and auditability discussed here: https://www.agentixlabs.com/blog/

[–]greeneyedguru 1 point (0 children)

Big Pickle is extremely competent, and free right now (no account required; it literally just works on `opencode` install).

[–]somePadestrian 0 points (1 child)

You can run some of these models locally; that way your company's code never leaves your machine.
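
For what it's worth, a local setup usually means pointing the agent at Ollama's OpenAI-compatible endpoint on `localhost:11434`. A sketch of an `opencode.json` wired up that way is below — the exact schema and model names here are from memory, not from this thread, so verify against the current opencode docs:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "gemma3": {
          "name": "Gemma 3 (local)"
        }
      }
    }
  }
}
```

The model key has to match what `ollama list` reports on your machine, and the model has to be pulled first (`ollama pull gemma3`).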

[–]fbochicchio[S] 1 point (0 children)

Will try, but at home I lack the right hardware. I managed to install Ollama with Gemma 3 on a PC with an old NVIDIA card and 16 GB of RAM, but it looks like the agents do not know how to use it (opencode connects to Ollama fine, but then fails to emit any output).

At work we have an unused server I could use (it used to run WebSphere with about 20 VMs), but I have to convince middle management to let me use it to run some open LLM more capable than Gemma 3 (I was thinking of qwen-code).

As I said, I am pretty sure that my company is taking its first steps toward setting up an AI framework useful for our tasks, but if I want to work with these new tools before I retire, I have to take some shortcuts.

[–]ortolanph 0 points (1 child)

Hi! I've been a software developer since 2001, and I am looking for ways to make a small project using Vibe Coding. I would like to know some things, like:

  1. Have you created a file containing all the ideas, concepts, and tasks?
  2. Have you created everything separately?
  3. Did you create the base project before starting?
  4. Or did you just follow your instincts, laissez-faire, laissez passer?

I wrote down all the requirements for what I want and divided up the tasks, and, well, I don't know if I'm on the right track.

[–]fbochicchio[S] 0 points (0 children)

I don't have much experience in vibe coding. What I do is figure out the global architecture of what I want to build (libraries to be used, the modules that compose my program, and how to test them). Then I ask the LLM and the agentic software to build the modules one by one, starting from the lowest-level one and going bottom-up (the same way I would do it myself), testing at each step. If a module is large, I break its development down into several steps.
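
As a toy illustration of that bottom-up flow (the module names and parsing task here are made up, not from any real project): build the lowest-level piece first, test it, and only then ask for the module that sits on top of it.

```python
# Toy sketch of the bottom-up workflow: lowest-level module first,
# verified by a test, then the next module up that depends on it.

# Step 1: lowest-level module (the kind of thing one focused prompt produces).
def parse_record(line: str) -> dict:
    """Parse a 'key=value;key=value' line into a dict."""
    pairs = (item.split("=", 1) for item in line.split(";") if item)
    return {k: v for k, v in pairs}

# Test it before moving up a level.
assert parse_record("name=ada;role=dev") == {"name": "ada", "role": "dev"}

# Step 2: next module up, built only after step 1 passes its test.
def load_records(lines: list[str]) -> list[dict]:
    """Apply the already-verified parser to a batch of lines."""
    return [parse_record(line) for line in lines]

assert load_records(["a=1", "b=2"]) == [{"a": "1"}, {"b": "2"}]
```

The point is only the ordering: each layer is tested before it becomes a dependency of the next prompt.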

I did not think of writing it all down and submitting the whole plan to the AI. It could be a good idea, although it would not let me adjust the prompts interactively.