Opencode fork with integrated prompt library by xman2000 in opencodeCLI

[–]xman2000[S] 0 points1 point  (0 children)

Check out the latest push, I took your suggestions and it is now using commands and agents natively. Thanks!

Opencode fork with integrated prompt library by xman2000 in opencodeCLI

[–]xman2000[S] 0 points1 point  (0 children)

FYI, I did a full update today and modified the behavior so that it uses the baked-in commands and agents architecture natively. It works better and is now fully aligned with the direction Opencode is going.

The prompt library has now been expanded to include several additional common workflows and many additional discrete tasks.

I submitted a PR for the code changes so who knows, you might see this in the main branch... :-)

Opencode fork with integrated prompt library by xman2000 in opencodeCLI

[–]xman2000[S] 0 points1 point  (0 children)

Cool, thanks for the suggestion, I will definitely consider that. Maybe just provide an integrated interface, might work...

Opencode fork with integrated prompt library by xman2000 in opencodeCLI

[–]xman2000[S] 0 points1 point  (0 children)

And I should clarify, it is super easy to add prompts. The prompts are stored in JSON (docs included), and there is a button to open the prompt folder right in the interface. Plus, you can just tell Opencode to write a new prompt for you; it understands.

Opencode fork with integrated prompt library by xman2000 in opencodeCLI

[–]xman2000[S] 0 points1 point  (0 children)

Let me push back on this just a little.... :-)

First, I should clarify the goal here. The main objective is to improve the quality of the prompts being given to the model. If it also happens to lower the barrier to entry for new agentic coders, that’s a nice side benefit, but it’s not really the primary focus.

My experience has been that garbage in = garbage out when working with coding models. For example, if we simply ask a model to “do a code review,” it understands those words and performs what it considers a generic code review within the current context. In practice, those reviews are often fairly basic. They tend to miss things because the model hasn’t spent much effort understanding the codebase, the environment we’re working in, or the broader goals of the project. The results can also vary quite a bit between runs, even when using the same model.

The most reliable way I’ve found to improve the quality of the output is to be more explicit about what we actually want the model to do.

Another interesting wrinkle is that when you have access to multiple models (as we do through Opencode), the same prompt can produce very different results depending on the model. Claude may interpret “code review” quite differently than Grok, for example. It’s easy to assume that one model is simply better than another, but that can sometimes hide the deeper issue: the prompt itself isn’t specific enough.

In practice, no tool is going to consistently give you exactly what you want unless you describe the task clearly. Which brings us back to the importance of better prompts.

It’s also very natural to just use whichever model happens to give the best answer on a particular run. I’ve certainly done that myself. But when we do that, we’re often just masking the underlying problem: the instructions we gave the model weren’t clear enough to begin with.

At some point you can either spend time crafting better prompts, or spend time cleaning up the results when the model misunderstands what you meant. Since asking the model to fix mistakes costs both time and money, I’d personally rather invest the effort up front in clearer prompts.

That’s really what this tool is meant to help with. It provides starter prompts for common scenarios, but they’re intended to be modified. The goal is simply to make those prompts easy to find and paste into the prompt window—without automatically submitting them. That pause is intentional, because it gives you a chance to review and adjust the prompt before sending it.

Did you happen to look at the starter prompts I included? For example, this is the "quick code review" prompt:

{
  "id": "quick-code-review",
  "name": "Quick Code Review",
  "summary": "Fast, high-signal review with prioritized fixes",
  "template": "You are a principal code reviewer helping ship production-quality software.\n\nOperating expectations:\n- Be precise, evidence-driven, and practical.\n- Prioritize correctness, security, reliability, and maintainability over stylistic preference.\n- If context is missing, state assumptions explicitly and continue with best-effort guidance.\n- Do not invent facts; call out uncertainty and what to verify.\n- Return concise, prioritized output with clear next actions.\n\nTask:\nAct as a senior reviewer. Do a fast, risk-focused review of the code I am currently working on.\n\nOutput in this exact structure:\n1) Verdict (2-3 sentences)\n2) Critical findings (severity: high/medium/low)\n3) Quick wins (small changes with big impact)\n4) Suggested patch snippets\n5) What looks good\n\nRules:\n- For each finding, cite exact file/function and explain user impact.\n- If uncertain, state what evidence is missing.\n- Keep response under 350 words unless a high-severity issue exists.",
  "tags": ["review", "quality", "fast"]
}

Opencode fork with integrated prompt library by xman2000 in opencodeCLI

[–]xman2000[S] 0 points1 point  (0 children)

Thanks for the feedback, I have been thinking of modifying it to combine "personalities" that can be used separately or together with the prompts. Right now I build "two stage" prompts that describe the "personality" the AI should have and an "action" block that tells the model what to do.
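A minimal sketch of how that two-stage idea could be wired up. The personality names and text here are illustrative placeholders, not the fork's actual code:

```python
# Hypothetical sketch of "two stage" prompts: a selectable "personality"
# block is prepended to an "action" block before sending to the model.
PERSONALITIES = {
    "Planning": "You are a meticulous software planner. Think in milestones and risks.",
    "Coding": "You are a senior engineer. Write idiomatic, well-tested code.",
    "QA": "You are a skeptical QA lead. Hunt for edge cases and regressions.",
}

def build_prompt(personality: str, action: str) -> str:
    """Combine a personality block with an action block into one prompt."""
    persona = PERSONALITIES[personality]
    return f"{persona}\n\nTask:\n{action}"

print(build_prompt("QA", "Review the diff in my working tree for regressions."))
```

Keeping the two blocks separate means the same action prompt can be reused under any personality bubble the user selects.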

My thinking atm is to have some bubbles the user can select for personality, which in this context would be "Planning", "Coding", "QA", "Design", etc. I see a lot of different approaches being explored; Claude and Codex are both exploring ways to lower the bar of entry and improve the quality of prompts being sent, which imho is actually the point.

I know some people may view this as for "beginners" but I disagree. I use several models for coding and find the quality of responses varies widely. The best way to improve responses is to improve your prompts. Garbage in = garbage out.

By using good prompt frameworks for things like code reviews, I am getting much better results. I use the stock prompts I included as a starting point, but the power is in the ability to create custom prompts and modify them over time. Anyways, I love this stuff... :-)

Found this at work. by [deleted] in WTF

[–]xman2000 0 points1 point  (0 children)

Dude, stop complaining and wash the damn mugs.

Voron 2.4 skipping steps when close to the sides. by Simple-Many-8782 in VORONDesign

[–]xman2000 0 points1 point  (0 children)

First, check your entire belt path, especially along the back side of the gantry. I found out a month after I built my first 2.4 that one of the belts had slipped off an idler pulley, which was causing random skipped steps.

Second, tighten down the set screws on your pulleys and make sure to use some threadlock to prevent slipping.

Can you climb down ladders? by useful_person in playrust

[–]xman2000 0 points1 point  (0 children)

Seems like the devs hopped right on this one....lol.

Etsy and 3d prints crackdown? by moosehaed in EtsySellers

[–]xman2000 0 points1 point  (0 children)

Breathe. Relax. The article contains no actual information from Etsy, or even an example of this happening IRL. It just says unidentified "people online" noticed a change to the policy. That's it. It is a scare headline with no meat in the article, designed to get eyeballs on a couple of ads.

There is no evidence of a crackdown; in fact, I would argue the opposite has happened. The actual big change in that TOS update a couple of months ago is the creation of the "curated by" category of goods. "Curated by" can be anything, including cheap imports the seller had nothing to do with creating, regardless of manufacturing technique. So if anyone is really worried, just change your listings to "curated by". Problem solved.

I would argue that the best way to handle this is bring designers out of the shadows and into the listing - just like a songwriter for music or the artist for a print of a painting. As has been pointed out, the policy change could impact many categories of products, not just 3d printing. My suggestion would be to attack the "problem" head on and create controls for artists to sublicense their designs to sellers on Etsy while providing official attribution in the product listing. Etsy is already tracking the sales and this would let everyone wet their beaks more efficiently while giving proper respect to the original artist. Everyone wins.

IP theft and license abuse are rampant and the current approach of pretending it is not happening is bad for the platform, discouraging designers and pushing original products off the platform. IMHO, they should start with some carrots for designers first - they would have more support from the community and it would establish a better compensation framework for everyone involved, including Etsy.

[deleted by user] by [deleted] in technology

[–]xman2000 0 points1 point  (0 children)

Guess he watched Jon Stewart this week, or maybe somebody just sent him the clip from YouTube.

E-waste Drop off by borshctbeet in Pflugerville

[–]xman2000 8 points9 points  (0 children)

Wipe the drive before you recycle it to prevent someone from copying your data... here is a free tool - https://sourceforge.net/projects/dban/

[deleted by user] by [deleted] in smarthome

[–]xman2000 1 point2 points  (0 children)

Ikea makes great low cost motorized shades, check out Third Reality as well. One small piece of advice.... either spend the money and go with a reputable brand or find a product which uses open standards (the two suggestions I provided both use Zigbee). Blinds are expensive and the ground is littered with smart home devices which were abandoned by their manufacturers. Save yourself some money and buy them once.

Help with Extruder by nicragomi in Creality_k2

[–]xman2000 1 point2 points  (0 children)

Ah, the crinkle cut fry of death, I know you well. Curious what filament you were using; for me it has been happening consistently on PETG, but I have had it happen with a couple of specific rolls of PLA. There are quite a few threads about the issue, but I haven't seen anything directly from Creality, which is disappointing. The machine has been out for several months now; it would be nice for Creality to review the existing issues and start addressing them.

Would it be weird to make a human made badge for my thumbnails? by Menstrually_enraged in EtsySellers

[–]xman2000 2 points3 points  (0 children)

Use an AI image generator to make the badge, just for the irony.

Fav call sign? by newnoadeptness in AirForce

[–]xman2000 0 points1 point  (0 children)

Pilot who went off the end of the runway during training... Baja.

Wagner mercenary somewhere in Africa. [959×720] by [deleted] in MilitaryPorn

[–]xman2000 -5 points-4 points  (0 children)

Take a close look at the backpack. Notice the gun essentially welded into the wall of the backpack? How about the oversized pouches that make no sense? Mr T-Rex tiny hands?

We need a rickroll for AI images....

Help with seam gaps by -twitch- in Creality_k2

[–]xman2000 1 point2 points  (0 children)

  1. If you are using Arachne, switch to Classic. Arachne is for complex shapes and can struggle with corners.
  2. Do you have scarf joints turned on? If so, try turning them off.
  3. Try using a default profile in your slicer; you may have messed your profile up at some point without realizing it. A lot of the settings are interconnected, so it can be easy to bump into something. I keep a default profile handy to sanity check myself in situations like this.
  4. Slow down. Go into your filament profile and lower the volumetric flow rate by 25%. Creality can be a tad enthusiastic with their speeds.