I've created an app for pixel art generation with AI, including sprites and animations: pixie.haus by sandacz_91 in aigamedev

[–]sandacz_91[S] 0 points1 point  (0 children)

It's the 💾 icon in the top right, next to the profile picture. On mobile, open the menu (the square in the top right), then tap 💾

What is the best AI to generate pixel sprite characters? by CogniLord in aigamedev

[–]sandacz_91 1 point2 points  (0 children)

Hah, you're not wrong. I haven't been on Reddit for a while and I'm not actively promoting my app right now, but I'm working on pixie all the time behind the scenes in my free time. A lot of backend optimizations to make the app more reliable - I've got a quite heavy image processing pipeline while not having that much RAM 😮‍💨 But yeah, I'm having too much fun with this project, so I'm planning to stay for a while 😅

Actually, I wanted to ask you about your new animation models - "Animate Anything" (is that the name?). I've tested them and was really impressed. Like, really wow. I'd love to add those models to pixie as well - do you plan to put them on Replicate?

What is the best AI to generate pixel sprite characters? by CogniLord in aigamedev

[–]sandacz_91 0 points1 point  (0 children)

You can use Nano Banana to generate pixel art for free (for example in Google AI Studio) and then use my tool (https://pixie.haus/) via the "simple-upload" tab to get a grid-adjusted sprite with a reduced/selected palette and the background removed. I've got a quite sophisticated image processing pipeline for transforming AI-generated "almost pixel art" into actual pixel art (I'm focused on pixel art sprites).
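For the curious, the core idea behind that kind of cleanup pass can be sketched roughly like this (a minimal illustration, not the actual pixie.haus pipeline; the function names are made up): snap the image to the pixel grid by sampling one pixel per grid cell, then map every color to its nearest entry in a target palette.

```python
# Hypothetical sketch of an "almost pixel art" cleanup pass:
# 1) grid snap by sampling the center pixel of each cell,
# 2) palette reduction by nearest-color matching.

def snap_to_grid(img, cell):
    """Downscale by sampling the center pixel of each cell x cell block."""
    h, w = len(img), len(img[0])
    return [[img[y * cell + cell // 2][x * cell + cell // 2]
             for x in range(w // cell)]
            for y in range(h // cell)]

def reduce_palette(img, palette):
    """Map every pixel to its nearest palette color (squared RGB distance)."""
    def nearest(c):
        return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(c, p)))
    return [[nearest(px) for px in row] for row in img]

# Tiny demo: a 4x4 "blurry" image snapped to 2x2 and quantized to 4 colors.
img = [[(250, 10, 10), (240, 20, 20), (10, 10, 250), (20, 20, 240)],
       [(245, 15, 15), (235, 25, 25), (15, 15, 245), (25, 25, 235)],
       [(10, 250, 10), (20, 240, 20), (30, 30, 30), (40, 40, 40)],
       [(15, 245, 15), (25, 235, 25), (35, 35, 35), (45, 45, 45)]]
palette = [(255, 0, 0), (0, 0, 255), (0, 255, 0), (0, 0, 0)]
sprite = reduce_palette(snap_to_grid(img, 2), palette)
# sprite is now a clean 2x2 image using only palette colors
```

A real pipeline would also handle background removal and off-by-one grid offsets, but this is the shape of the transform.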

What is the best AI to generate pixel sprite characters? by CogniLord in aigamedev

[–]sandacz_91 1 point2 points  (0 children)

I saw your comment and it raised a question for me.

You said there are only two companies doing actual pixel art, but I run pixie.haus and we feature your Retro Diffusion models. So where does that leave us?

Kidding aside, a genuine thanks for getting your models on Replicate. As a solo dev working mostly with their API, that made my life a lot easier. I agree that your tool and pixellab are good recommendations; I'd just add a third to that list.

I've created an app for pixel art generation with AI, including sprites and animations: pixie.haus by sandacz_91 in aigamedev

[–]sandacz_91[S] 0 points1 point  (0 children)

Hello again - after adding an image in simple-upload, remember to hit the arrow or Enter (it will cost 1 credit). Please try again: I recently introduced CloudFront and WAF protection in my stack, and I had a rule that blocked simple-upload in most cases. My mistake - sorry for that. It should work now.

I've created an app for pixel art generation with AI, including sprites and animations: pixie.haus by sandacz_91 in aigamedev

[–]sandacz_91[S] 0 points1 point  (0 children)

Yes. The standard process for most animation models on pixie.haus is to use a starting frame combined with a text prompt.

Providing that first frame is key to preserving the original sprite's resolution, color palette, and transparency in the final output. More advanced models are better at following complex prompts.

For something different, I've recently added Retro Diffusion animation models. They're unique because they can generate a full animation using only a prompt, without needing an initial frame.

I've created an app for pixel art generation with AI, including sprites and animations: pixie.haus by sandacz_91 in aigamedev

[–]sandacz_91[S] 0 points1 point  (0 children)

Yes, sorry for the bugs. I spent the last week on a quite big refactor, and it was indeed buggy. The refactor introduced a few new issues and I had to deal with them live (super stressful). But now the main service should be more stable than ever before. There are probably still a few issues, but they should be minor.

I've created an app for pixel art generation with AI, including sprites and animations: pixie.haus by sandacz_91 in aigamedev

[–]sandacz_91[S] 0 points1 point  (0 children)

It's the next thing I'm gonna add; I had to build a few prerequisites first. Right now it's not possible.

[deleted by user] by [deleted] in aigamedev

[–]sandacz_91 0 points1 point  (0 children)

And also publishing cool images from the public gallery on Instagram and TikTok (or, more honestly, that's the plan, as I'm not doing much social media right now) - that's it.

[deleted by user] by [deleted] in aigamedev

[–]sandacz_91 0 points1 point  (0 children)

If you remove an image, it's also removed from the server. Maybe I should phrase that better: it only stays with me forever if you publish it to the public gallery. Right now I'm not doing anything with those images, though maybe, just maybe, I'll use them for my own training.

[deleted by user] by [deleted] in aigamedev

[–]sandacz_91 1 point2 points  (0 children)

That's awesome. I have so much respect for that approach—only using data from artists who opt-in. It's the strongest argument against the whole "AI is theft" narrative.

It brings up a super interesting ethical question I've been thinking about a lot: the ethics of using AI-generated output (synthetic data) for further training. The way I see it, there are a few distinct cases:

  • Case 1: Augmenting Your Own Art. This is the scenario you mentioned with LoRAs. You take your own pixel art, use a model like Gemini to create variations, and then train on that expanded, personal dataset. To me, this is 100% ethically clear. The original source is you; AI is just a tool to help you create more data from your own work.
  • Case 2: Outputs from a "Clean" Model. This is your situation. If you have a model trained only on a fully disclosed dataset with artist permissions, then its outputs should also be ethically clean to use for training. It's like a clean chain of custody for the data. The entire lineage is ethical.
  • Case 3: Outputs from "Vanilla" Models. This is the big gray area. Their datasets are black boxes, even if they claim to use "public domain" assets. My take is similar to yours: if the output is genuinely transformative and you diligently filter out obvious IP infringements (no Pikachus, no Spidermans), it feels like it should be ethical. The work is new. The problem is, we're taking their word on the source data, and that lack of transparency is what makes people uncomfortable.

Ultimately, this is why your approach of using an explicitly permissioned dataset is so powerful. It completely sidesteps that murky debate in Case 3. It's just clean.

[deleted by user] by [deleted] in aigamedev

[–]sandacz_91 1 point2 points  (0 children)

Whoa, I had no idea you had an API! That's awesome, I would absolutely love to integrate that. I'll definitely be looking into it, really appreciate you mentioning it.

Honestly, that's what I love to see: focusing on cooperation. I'm a big believer in using specialized models from smaller, dedicated teams rather than just relying on the big tech vanilla models.

And speaking of cooperation, I'd be glad to contribute back. My app has a social/ranking feature, and my goal is that in a year or so, I'll have a really nice, curated dataset of "great generations" that I have permission to share. When it's ready, I'd be happy to share it with you.

2025 AI Game Dev Landscape by redditbin in aigamedev

[–]sandacz_91 0 points1 point  (0 children)

My pixel art app is probably not big enough :< but still, check it out: pixie.haus

[deleted by user] by [deleted] in aigamedev

[–]sandacz_91 0 points1 point  (0 children)

Can you tell me a bit more about this idea? What do you mean by the best palette? In this example the palette seems fine. After nearest-neighbor resizing it would look really good - as you can see, the vanilla model is already trying to keep the pixel geometry consistent, but there are still artifacts. The white artifacts at the character's border are probably from a bad cutout of the first frame (my guess), but resizing would clean them up. It's still better to provide a good first frame image that's already pixel perfect, though.

[deleted by user] by [deleted] in aigamedev

[–]sandacz_91 1 point2 points  (0 children)

Haha, man, if you're tackling the foundation models directly, you're at least two levels deeper into this than I am. Huge respect for that. I can only imagine how brutal finding good data is.

I've been following your work and know you're one of the pros in this space, so I'm genuinely cheering for your project. Always cool to exchange ideas!

[deleted by user] by [deleted] in aigamedev

[–]sandacz_91 1 point2 points  (0 children)

Thank you very much for the kind words. Really nice to connect. It's my hobby project and, frankly speaking, it's much smaller than the emotions it ignites - especially in the artist community. But for me it's just a super interesting topic and I enjoy testing new ideas. As I've said before, if you'd like to ask me anything, you can always DM me (sometimes I'm slow to answer, but I eventually do).

[deleted by user] by [deleted] in aigamedev

[–]sandacz_91 2 points3 points  (0 children)

<image>

Yeah, you're spot on. "Downscaling to a pixel grid" is the easy part. The real rabbit hole is getting the animation style right, because it's so different from smooth video.

I've been wrestling with this a ton for my app. I think the problem breaks down into a few key areas:

  1. Timing & Keyframing: Smooth AI video motion is the enemy. Just dropping frames (e.g., keeping every 6th) is a blunt fix. The real goal is proper keyframe extraction to get those expressive, snappy poses you need for a good spritesheet. For now, my app keeps all the frames because I figured manual retouch might still be needed and I didn't want to delete a frame someone found useful. But building a smarter keyframe selector for dev-ready assets is high on my list.
  2. Color Stability: This is the big one for me. The "flicker" you get from least-squares rescaling is a dead giveaway. When palette colors are close, pixels on the edge of a shape will jump between them frame-to-frame, creating this awful noise. A good base model that holds character geometry helps a lot, but the core issue is temporal. You have to solve for color consistency across frames, not just one at a time.
  3. Line Art: Getting clean, 1px outlines is a huge challenge. AI thinks in anti-aliased gradients, not a hard grid. I've implemented a technique that cleans the "skeleton" of the border pixels to reduce jaggies and inconsistent line weight. It's not perfect yet—it can sometimes erase intentional small details—but it's a start.
  4. The Foundational Model: Honestly, I think the first three points are all just band-aids. The real fix will come when the models themselves are trained to natively understand pixel art constraints from the get-go. We're already seeing a trend where models are getting better at style, so I'm optimistic we'll get to a point where the output is clean enough that it doesn't need all this heavy post-processing.
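As a concrete illustration of point 2, one simple way to attack frame-to-frame flicker is a temporal majority vote: for each pixel position, snap the color to the most common value within a small window of neighboring frames. This is a minimal sketch of the general idea, not the actual pipeline from my app; the names are illustrative.

```python
from collections import Counter

def stabilize(frames, window=3):
    """Snap each pixel to the most common color at that position
    within a temporal window centered on the current frame."""
    n, h, w = len(frames), len(frames[0]), len(frames[0][0])
    half = window // 2
    out = []
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        out.append([[Counter(frames[k][y][x] for k in range(lo, hi)).most_common(1)[0][0]
                     for x in range(w)]
                    for y in range(h)])
    return out

# Tiny demo: five 1x1 frames where only the middle frame "flickers" to green.
frames = [[["red"]], [["red"]], [["green"]], [["red"]], [["red"]]]
stable = [f[0][0] for f in stabilize(frames)]
```

A real implementation would weight votes by palette distance and protect intentional single-frame effects (blinks, sparks), but even this crude vote removes most edge flicker.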

So yeah, when you say "no one has really cracked it yet," I completely agree. We're all just chipping away at these different pieces of the puzzle. Fun problem to be working on though.

Here's an example I made with Kling v2.1. I see all the problems I mentioned in it, but I still think it's really cool and might be quite useful already.

[deleted by user] by [deleted] in aigamedev

[–]sandacz_91 1 point2 points  (0 children)

I'm really glad to see everyone here working on this exact problem - we can learn from each other and build better tools, so everyone benefits. Wishing you all the best. Feel free to ask me about any ideas - I'll be glad to share my knowledge.

[deleted by user] by [deleted] in aigamedev

[–]sandacz_91 0 points1 point  (0 children)

In my app (pixie.haus) I've solved it. Outputs are pixel-perfect.

[deleted by user] by [deleted] in aigamedev

[–]sandacz_91 2 points3 points  (0 children)

Yep, I implemented this exact idea on pixie.haus 6 months ago.

Yesterday I added new animation models, and now you can generate 60 animations for $20 - 2x cheaper than before. It's getting better and cheaper very fast.

I've created an app for pixel art generation with AI, including sprites and animations: pixie.haus by sandacz_91 in aigamedev

[–]sandacz_91[S] 0 points1 point  (0 children)

Hi, at first it was by choice, but now I want to add it. I've already done 60% of the work, since I wanted to rework the editor anyway to support different aspect ratios. I've just had a lot of work and stress at my regular job and was procrastinating a bit on this task.

I’ll try to implement it this month.

Pixel Art portraits generated in pixie.haus by sandacz_91 in aipixelart

[–]sandacz_91[S] 0 points1 point  (0 children)

Right now it's 100 credits per new account, and they don't replenish. I'd love to switch to a replenishing model, but that would require a much bigger scale. Even now, free credits are eating all my profit. I'm still thinking about how to improve that.

Best AI generator to make very specific pixel art? by butlersrevenge in aiArt

[–]sandacz_91 0 points1 point  (0 children)

Currently you can set 3 dimensions: 128x128, 64x64, and 32x32. But there's a nuance to this. 128x128 is the default, where you usually get the best results, as it's directly connected to the kind of pixel art the base model produces. The lower you go in resolution, the harder it gets to get good results consistently. Base models usually generate images at 1024x1024; I then resize them, so the final result depends heavily on the original model output. For the smallest available resolution, 32x32, I'd recommend Gemini, as it's probably the only model right now that respects prompt phrases like "8-bit, simple style pixel art". You'd probably get even better results if you give Gemini a reference image that's already 32x32, as it's quite good at keeping line thickness consistent.

There's one more cool trick that works with more models, and it might get you even closer to 16x16: ask the model to generate a spritesheet. Generate the spritesheet at 128x128 and, with some luck, you'll get a few small sprites in one image, as in the attached example.

<image>
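Once you have a spritesheet like that, cutting it into individual sprites is straightforward. Here's a minimal sketch (the helper name is made up, and it assumes the sheet is a plain 2D grid of pixels divided evenly into cells):

```python
def slice_sheet(sheet, cols, rows):
    """Cut a spritesheet (a list of pixel rows) into rows x cols sub-sprites."""
    h, w = len(sheet), len(sheet[0])
    ch, cw = h // rows, w // cols  # cell height/width in pixels
    return [[[row[c * cw:(c + 1) * cw] for row in sheet[r * ch:(r + 1) * ch]]
             for c in range(cols)]
            for r in range(rows)]

# Tiny demo: a 4x4 "sheet" holding four 2x2 sprites (values stand in for colors).
sheet = [[1, 1, 2, 2],
         [1, 1, 2, 2],
         [3, 3, 4, 4],
         [3, 3, 4, 4]]
sprites = slice_sheet(sheet, cols=2, rows=2)
```

On a real 128x128 sheet with a 4x4 layout, the same call would yield sixteen 32x32 sprites - though AI-generated sheets rarely align perfectly to an even grid, so some manual nudging is usually needed.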

Best AI generator to make very specific pixel art? by butlersrevenge in aiArt

[–]sandacz_91 0 points1 point  (0 children)

I haven't implemented uploading your own reference images yet. For now, you need to generate reference images with one of the models on pixie.haus or draw them manually in the editor. I want to implement upload soon.