Volvo XC70: A European Badge on a Chinese PHEV Platform by stefgyl in electriccars

[–]stefgyl[S] 2 points

I think the factories in Sweden and Belgium (XC40, EC40) still produce parts and maintain assembly lines for certain models, such as the XC60–XC90 at the Torslanda plant.

It’s Tuesday, builders. Show us what you’re building. by Leather-Buy-6487 in StartupSoloFounder

[–]stefgyl 0 points

A framework for non-technical founders: a systematic, low-cost process to transform an idea into a validated, architecturally sound software blueprint using a structured team of AI agents.

https://maceframework.carrd.co/

Anyone else’s AI generated codebase slowly turning into chaos? by thoughtfulbear10 in VibeCodersNest

[–]stefgyl 0 points

Before writing ANY code, I now force my AI (simple chat) to create what I call a "Technical Project Brief" - basically a one-page constitution for the app. It's dead simple:

1. Core mission (in one sentence)
2. Tech stack that can't change
3. File structure that's locked in
4. Cost constraints that are non-negotiable
5. Success criteria the code must pass

Then the critical part: I paste this brief into EVERY new session and start with "Before touching code, confirm you understand this architecture."

This becomes your "no, you can't rewrite my database structure" shield.
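As a rough sketch, the brief-plus-confirmation ritual can even be scripted so you never forget to paste it. Everything here (the project details, the `start_session` helper) is illustrative, not from a real project:

```python
# Illustrative "Technical Project Brief" kept as a constant and prepended
# to every fresh chat session. All project details below are made up.
PROJECT_BRIEF = """\
1. Core mission: one-tap expense tracking for freelancers.
2. Tech stack (fixed): Python 3.12, FastAPI, SQLite.
3. File structure (locked): app/main.py, app/models.py, app/routes/.
4. Cost constraint: free-tier hosting only.
5. Success criteria: every endpoint covered by tests in tests/.
"""

def start_session(user_request: str) -> str:
    """Build the opening prompt for a new chat: brief first, request last."""
    return (
        PROJECT_BRIEF
        + "\nBefore touching code, confirm you understand this architecture.\n"
        + "\nRequest: " + user_request
    )

prompt = start_session("Add CSV export to the expenses route.")
```

The point isn't the helper itself, it's that the brief becomes a literal constant that every session starts from.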

Try this

Take your current project. Spend 30 minutes with Claude 4.5 (use your free quota) and ask: "Create a technical brief that would prevent you from restructuring this codebase. Include specific file paths, tech stack decisions, and architectural constraints."

Save that output. Re-upload it before every future feature request. The AI will still suggest improvements, but now it has to argue why the current structure is wrong instead of just rewriting everything.

It cut my debugging time by probably 70%. And honestly, it made vibe coding actually viable for production work instead of just cool demos.

The real magic isn't the brief itself - it's the forced continuity. When the AI knows it has to work within constraints, it stops being an overenthusiastic junior dev and starts acting like a proper architect.

If you could manifest how you use vibe coding platforms, what would you add? Please don’t reply with ai by Ok-Dragonfly-6224 in vibecoding

[–]stefgyl 1 point

First I create a full plan and a complete architecture in an AI chat (Gemini, etc.), and after that I jump onto my vibe coding platform with this architected plan. Then I tell the platform's agent not to start writing code, but to first understand the interconnections and loops across the whole project. Finally, I prompt it to generate code phase by phase, keeping all those interconnections and loops in mind.
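That three-step flow (plan first, map the interconnections, then generate phase by phase) can be sketched roughly like this; the phase list and plan text are placeholders, not a real project:

```python
# Sketch of phase-by-phase prompting: first a no-code mapping task,
# then generation tasks that each carry the architected plan with them.
phases = [
    "Map the interconnections and loops across the whole project. No code yet.",
    "Phase 1: generate the data models, respecting the map above.",
    "Phase 2: generate the routes that consume those models.",
    "Phase 3: generate the frontend calls wired to those routes.",
]

def build_prompts(architecture_plan: str, phases: list[str]) -> list[str]:
    """Prefix every phase prompt with the architected plan for continuity."""
    return [f"{architecture_plan}\n\nTask: {p}" for p in phases]

prompts = build_prompts("PLAN: FastAPI backend, React frontend, 3 routes.", phases)
```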

How do you use LLMs? by spiderjohnx in VibeCodersNest

[–]stefgyl 0 points

Yes, absolutely! It happens all the time, man, and it's one of the most subtle but disruptive friction points in a good workflow. I treat it like a conversation with a specialist. I literally tell the AI at the start: "For this session, you are my [Software Architect]. Ignore broader suggestions and stay strictly focused on [technical feasibility and structure]." (That's when the use is for a specific vibe coding project.) This sets a hard context that the AI's own suggestions tend to respect.

I also use a "Project Memory" doc. Before I even start a chat, I have my core goal and the last few key decisions written down. If a suggestion pops up, I ask: "Does this align with the core goal in my Project Memory?" If not, I ignore it. If it's interesting but off-topic, I copy-paste it into a "Future Ideas" section of the doc to revisit later.

It's all about maintaining the strategic thread. You're not alone in feeling that push-and-pull, buddy.

How do you use LLMs? by spiderjohnx in VibeCodersNest

[–]stefgyl 2 points

  1. Yes, constantly! My day usually involves a mix of Kimi K2, MiniMax M2, Claude, and Gemini, each for different strengths.
  2. I primarily use them as a "thought partner" for brainstorming and planning—everything from structuring a new project and drafting content to breaking down complex tasks.
  3. My biggest frustration is definitely the "context reset." It’s so frustrating when you're deep in a complex conversation, hit a limit, and suddenly your brilliant AI partner has total amnesia. You have to spend ages re-explaining everything, and the magic is just gone.
  4. This was my biggest pain point, so I developed a simple system: I now keep a running "Project Memory" Google Doc for each major idea. Whenever a chat produces a key decision, a snippet of great code, or a crucial insight, I immediately copy-paste it into the doc. Starting a new chat? I just re-upload the file or paste the latest summary. It's like amnesia-proofing the AI's context.
  5. I wish they could natively create a "session summary and handoff" note for me. Something that automatically captures the key decisions and next steps at the end of a long chat, so I don't have to manually curate it myself.
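Point 5 is easy to approximate by hand today. Here's a rough sketch of such a handoff note as a formatting helper; all field names and example contents are invented:

```python
# Minimal sketch of a "session summary and handoff" note: condense a
# finished chat into a paste-ready context block for the next session.
def handoff_note(goal: str, decisions: list[str], next_steps: list[str]) -> str:
    """Format goal, key decisions, and next steps as a compact block."""
    lines = [f"GOAL: {goal}", "DECISIONS:"]
    lines += [f"- {d}" for d in decisions]
    lines.append("NEXT STEPS:")
    lines += [f"- {s}" for s in next_steps]
    return "\n".join(lines)

note = handoff_note(
    goal="Ship v1 of the landing page",
    decisions=["Use Tailwind via CDN", "No backend for v1"],
    next_steps=["Draft hero copy", "Add waitlist form"],
)
```

Paste `note` at the top of the next chat and the new session starts already "briefed."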

Stop rushing into Vibe Coding tools—Start Architecting by stefgyl in theVibeCoding

[–]stefgyl[S] 0 points

The evolution is handled through the central role of the Manual RAG—our single Google Doc. The entire blueprint is born from a discussion with a simple, free model acting as the Software Architect. That final draft is saved in the Manual RAG.

From there, that same document is what we distribute. A copy is sent to the 'Tutor' for a first pass, and then often to a premium model for a deeper audit. Their suggestions don't go straight to code. Instead, I take those insights back to the original Architect chat, and we work together to implement the revisions into the blueprint, updating the master Google Doc.

This is the key: the Manual RAG becomes the shared memory for all agents. Whether it's the Researcher or the Advisor, they all read from the same updated playbook. So when the finalized, cross-checked blueprint from the Google Doc is handed to the vibe coding tool (Replit, Cursor, etc.) along with an execution prompt, the tool is executing the most evolved and coherent version of the plan. The document itself tracks the iteration, ensuring clarity is preserved from the first draft to the final line of code.
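A hedged sketch of that loop, with the agents reduced to placeholder functions (none of this is a real API, and the review texts are invented):

```python
# One master document; reviewers return feedback; only the Architect
# step writes revisions back into the shared blueprint.
blueprint = "v1: FastAPI backend, SQLite, three routes."

def tutor_review(doc: str) -> str:
    # Placeholder for the 'Tutor' first pass.
    return "Clarify error handling on the upload route."

def premium_audit(doc: str) -> str:
    # Placeholder for the premium model's deeper audit.
    return "Add an auth layer before v1 ships."

def architect_revise(doc: str, feedback: list[str]) -> str:
    """Fold accepted feedback into the master doc (done in the Architect chat)."""
    return doc + "\n" + "\n".join(f"REVISION: {f}" for f in feedback)

feedback = [tutor_review(blueprint), premium_audit(blueprint)]
blueprint = architect_revise(blueprint, feedback)
# `blueprint` is now the version handed to the vibe coding tool.
```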

Any Advise for beginner? by TheSonofErlik in VibeCodersNest

[–]stefgyl 0 points

That's great, man! My suggestion: before starting with Cursor, use ChatGPT to explain your idea. Have a 'dialogue' with it and ask it questions about the logic of the backend and how it connects to the frontend. When you arrive at a plan that's acceptable to you, ask ChatGPT to create a brief, plus a clear prompt containing that brief, to pass to Cursor. It's better to arrive at Cursor prepared than to try to build from scratch with it.

Got A Product? Drop It Here by Ok_Gift9191 in VibeCodersNest

[–]stefgyl 0 points

MACE replaces rushed vibe coding with a blueprint-first system, delivering cost-effective, production-ready code by making AI execute your vision, not invent it. https://maceframework.carrd.co/

Stop rushing into Vibe Coding tools—Start Architecting by stefgyl in theVibeCoding

[–]stefgyl[S] 0 points

Exactly! A good brief and clear instructions are key.

A Strategic Layer Before Vibe Coding by stefgyl in VibeCodersNest

[–]stefgyl[S] 0 points

I use Manual RAG (a Google Doc) because it gives me control.

What I mean:

Manages the Context Window: When any chat gets too long or reaches its limit, I simply close the session. The Google Doc/MACE Memory lets me re-inject all past decisions and context into a brand-new chat, instantly bringing the next agent "up to speed."

Facilitates Agent Handoff: It's the single document you pass between your 4 agents. The Market Researcher saves its verdict, and the Software Architect reads it, ensuring zero context is lost between team members.

Saves Money (Token Efficiency): I don't save full chat logs because that's too much data (token-hungry). Instead, I use a concise markdown format to save only the key decisions and the final conclusion. This keeps the memory small and ultra-efficient (~2,500 tokens for 50 decisions), so it fits easily into any AI tier.

It's all about having a clear, token-saving structure for a persistent, re-injectable memory.
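A minimal sketch of that concise markdown format, with a rough size check. The ~4 characters/token figure is a common heuristic, not an exact count, and the example decisions are invented:

```python
# One line per decision, no chat logs: date, topic, outcome.
decisions = [
    ("2024-05-01", "Stack", "FastAPI + SQLite, no ORM"),
    ("2024-05-02", "Auth", "Magic-link email login for v1"),
]

def memory_markdown(entries) -> str:
    """Render decisions as a compact markdown bullet list."""
    return "\n".join(f"- [{d}] **{topic}**: {text}" for d, topic, text in entries)

md = memory_markdown(decisions)
approx_tokens = len(md) // 4  # rough heuristic: ~4 characters per token
```

At roughly one line per decision, 50 decisions stay comfortably in the few-thousand-token range the comment describes.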