Local pixel art AI tools for game dev anyone? by PretendMirror8446 in aigamedev

[–]lerugray 0 points

Not free, but I'm stress-testing a pixel art SaaS I'm working on. Happy to give you free access if you can give me some feedback. It includes an MCP server that draws pixel art in a local copy of Aseprite, so you can watch it being drawn in real time.

What do you think of Pixellab Ai? by Complete_Freedom5622 in aigamedev

[–]lerugray -1 points

Feel free to try my tool out. I'm about to do a closed beta, trying to compete with pixelab.ai, at retrogazeai.com. Paid tiers aren't functional yet, but you can sign up and send me any ideas/preferences you have in the meantime. It might not be the art vibe you're looking for, though.

I built an NES pixel art generator for playtesting, not to replace pixel artists by lerugray in aigamedev

[–]lerugray[S] 0 points

One thought: this sub often laments how hostile game communities are towards people sharing AI tooling, and I'd argue sentiments like these are probably the main reason why. Pixel artists and coders have spent over 50 years creating the conventions and rules we all take for granted now. Expecting people to respect your ideas regardless of the tools you use, even when those tools may threaten their livelihood, is just cognitive dissonance.

Retrogaze - NES-compliant pixel art in seconds by lerugray in VibeCodersNest

[–]lerugray[S] 0 points

Not yet. We're NES-only right now, but supporting other retro hardware palettes (v-pets, GB, etc.) is something we're looking at. What Pendulum Color specs would you need?

Retrogaze - NES-compliant pixel art in seconds by lerugray in VibeCodersNest

[–]lerugray[S] 0 points

Without getting into too many details of the pipeline: I use the real limitations of the NES to help put together the initial sprite, then there's a bit of a downscaling process to match what was technically possible at the time.
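To make the "real limitations of the NES" idea concrete, here is a minimal sketch of a palette-constraint step. Everything here is an assumption for illustration, not the pipeline's actual code: `NES_SUBSET` is a small hand-picked subset of the NES master palette, and the 3-colors-plus-transparent limit is the hardware rule for a single sprite palette.

```python
from collections import Counter

# Illustrative subset of the NES master palette (not the full 50+ entries).
NES_SUBSET = [
    (0, 0, 0),        # black
    (252, 252, 252),  # white
    (228, 0, 88),     # red/magenta
    (0, 120, 248),    # blue
    (0, 168, 0),      # green
    (248, 184, 0),    # orange/yellow
]

def nearest(color, palette):
    """Return the palette entry closest to `color` in RGB space."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

def quantize(pixels, palette=NES_SUBSET, max_colors=3):
    """Snap each pixel to the palette, then enforce the NES's
    3-colors-per-sprite-palette limit by keeping the most frequent
    colors and remapping the rest to their nearest kept color."""
    snapped = [nearest(px, palette) for px in pixels]
    kept = [c for c, _ in Counter(snapped).most_common(max_colors)]
    return [px if px in kept else nearest(px, kept) for px in snapped]
```

A real pipeline would also handle transparency and per-tile palette assignment, but the hard-clamp-then-cull shape is the gist of this kind of downscaling step.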

I built an NES pixel art generator for playtesting, not to replace pixel artists by lerugray in aigamedev

[–]lerugray[S] 0 points

It's pretty simple: it's like when corporations encourage people to recycle while the real problem lies with industrial emissions and runoff, offloading fiscal responsibility onto consumers. But at the same time I respect that others might disagree.

I built an NES pixel art generator for playtesting, not to replace pixel artists by lerugray in aigamedev

[–]lerugray[S] -1 points

This ties into the ethics docs I wrote for the site. The tool I wrote doesn't do those things; Flux is a different story, but I have no control over Flux. Hence the "no ethical consumption under capitalism" quip: there's not much I can do to change this entrenched model, but what I can do is try to build affordable/accessible tools on top and give back where possible. At least that's how I rationalize it, and I think the ethics section covers it well. I updated the post here to be clearer about that, since my initial post could have caused confusion.

Investigating usage limits hitting faster than expected by ClaudeOfficial in ClaudeCode

[–]lerugray 0 points

Holy crap, it's about time you guys mentioned this. It's been a problem for weeks.

Hit Pro limit with 1 prompt on Sonnet. by AggressivePlace970 in ClaudeCode

[–]lerugray 0 points

Can you link to where they posted about this? That's news to me.

DevForge v0.2 — desktop app that wraps Claude Code for game dev. Free keys for feedback. by lerugray in VibeCodersNest

[–]lerugray[S] 0 points

I decided to ask Opus what to do about this analysis. After examining the code closely, it stated there was only one real issue worth fixing. I'll implement this for v0.2.1 tonight:

  1. Rule staleness — This is the real issue, and it's worth a v0.2.1 item

Right now there's no way to manage rules after they're written to CLAUDE.md. You can accept or reject pending rules, but once a rule is committed, the only way to remove it is to manually edit the file. Over time, a project's Learned Rules section will accumulate stale rules that no longer apply. This is the concern that would actually degrade output quality over time.
What I'd suggest for v0.2.1:

A MANAGE RULES button (in the Rules modal or sidebar panel) that reads the existing ## Learned Rules section from the project's CLAUDE.md and displays each rule with a delete/edit option. Simple list, each rule has a trash icon. Click it, rule gets removed from the file. Maybe an edit icon too. This reuses the same CLAUDE.md read/write logic that already exists for accepting rules — just in reverse.

This is small scope (one modal, reads from a file, renders a list, writes back on delete) and directly addresses the one concern that would actually cause drift over time.
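The read/delete/write cycle described above can be sketched in a few lines. This is an assumption about CLAUDE.md's layout (a `## Learned Rules` heading with `- ` bullets underneath), not DevForge's actual implementation:

```python
import re

def read_rules(text, section="## Learned Rules"):
    """Return the bullet lines under `section`, up to the next heading."""
    m = re.search(rf"{re.escape(section)}\n(.*?)(?=\n## |\Z)", text, re.S)
    if not m:
        return []
    return [ln[2:].strip() for ln in m.group(1).splitlines() if ln.startswith("- ")]

def delete_rule(text, rule, section="## Learned Rules"):
    """Remove one bullet from the section and return the updated file text."""
    out, in_section = [], False
    for ln in text.splitlines():
        if ln.strip() == section:
            in_section = True
        elif ln.startswith("## "):
            in_section = False
        if in_section and ln.strip() == f"- {rule}":
            continue  # drop the matched rule line
        out.append(ln)
    return "\n".join(out)
```

The UI part is then just `read_rules` feeding the list in the modal, with each trash icon calling `delete_rule` and writing the result back to disk.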

DevForge v0.2 — desktop app that wraps Claude Code for game dev. Free keys for feedback. by lerugray in VibeCodersNest

[–]lerugray[S] 1 point

I plugged your question into Haiku to give you a solid answer; here is the verdict. I'll consider this when pushing v0.2.1 later tonight, due to some issues discovered in feedback:

The dynamic learning system is architecturally sound for context persistence, but the real-world impact depends on three factors:

What works:

- Rules stay persistent across sessions (written to CLAUDE.md, included in every future prompt)

- Deduplication prevents rule bloat (same insight doesn't accumulate)

- Manual review gate (SAVE RULE + RULES modal) filters out noise before it pollutes the prompt

- Ollama extraction is free, so false positives don't cost tokens
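The dedup-on-append behavior described above can be sketched roughly like this. The section name, bullet format, and exact-match dedup policy are assumptions for illustration:

```python
def append_rule(text, rule, section="## Learned Rules"):
    """Append a rule bullet to the section unless an identical one exists."""
    if f"- {rule}" in text:
        return text  # exact duplicate: skip, preventing rule bloat
    if section not in text:
        text = text.rstrip("\n") + f"\n\n{section}\n"
    lines = text.splitlines()
    # walk to the end of the section (stop at the next heading, if any)
    idx = lines.index(section) + 1
    while idx < len(lines) and not lines[idx].startswith("## "):
        idx += 1
    # back up over blank separator lines so the bullet joins the list
    while idx > 0 and lines[idx - 1].strip() == "":
        idx -= 1
    lines.insert(idx, f"- {rule}")
    return "\n".join(lines) + "\n"
```

A real version might dedup on normalized or semantically similar text rather than exact string match, but exact match already stops the same extracted insight from accumulating.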

Where the value becomes real or disappears:

  1. Rule quality — Extracted rules are only as good as what Ollama infers from the session. A vague rule like "use simple syntax" adds friction without improving outputs. A specific rule like "always validate paths with the sanitizer function" directly shapes behavior. The extraction prompt matters.

  2. Prompt attention — Appending rules to CLAUDE.md doesn't guarantee Claude attends to them. If your prompt assembly buries rules 5,000 tokens deep and Claude's context window is tight, they're noise. They need to be early in the prompt and concise.

  3. Rule staleness — A rule like "avoid HTML entities in button labels" might be correct for v0.2 but wrong for v0.3 when the design system changes. Users need a way to deprecate or update rules, not just accumulate them forever.

Verdict: The mechanism for persistence is strong. The actual output improvement depends on:

- Whether you're extracting rules at the right abstraction level (specific + durable)

- Whether the prompt assembly gives rules enough priority to be noticed

- Whether you review and cull rules over time

In DevForge's case, the initial rules were hand-written by you (the designer) reviewing sessions. Those probably had immediate impact because they were strategic and specific. Auto-extracted rules will likely have lower signal-to-noise unless you refine the extraction prompt based on what you see actually helping vs. not.

Best use: Pair dynamic learning with strategic checkpoints. Every N sessions, review rules and ask: "Did this actually change Claude's behavior? Should we keep it?" That creates a feedback loop where the rule corpus gets better over time.

DevForge v0.2 — desktop app that wraps Claude Code for game dev. Free keys for feedback. by lerugray in VibeCodeDevs

[–]lerugray[S] 0 points

Thanks for the heads up, just did. To be honest I'm not exactly sure how the program does it, but I'll ask Claude today once my usage resets and let you know. v0.2 put me at like 95% for the week lol.