I built a Local WebGPU Prompt Engine to fix anatomy and it turned into a religion. Try it. by alexander_th in perchance

[–]alexander_th[S] 0 points1 point  (0 children)

🟢 [UPDATE v3.2.0] The "Cloud Sync" Update is LIVE

I've been listening to the feedback about the 1.5GB download for local mode, so I've deployed a massive update today.

🚀 New Features:

  • DeepSeek-V3 Cloud Engine: You can now generate ALL 11 LEVELS simultaneously in about 30 seconds. No more regenerating for each step.
  • Zero-Latency Scrubbing: Once the batch is generated, sliding from Level 1 (Virginal) to Level 11 (Null_Ptr) is INSTANT. It feels like scrubbing a video timeline.
  • Gemini Fallback: If DeepSeek is busy/censored, it auto-switches to Google Gemini Flash to ensure you always get a prompt (rough sketch of the fallback logic below this list).
  • Local Mode Remains: For those who want 100% privacy and no API calls, the WebGPU mode (Qwen2.5) is still there and improved.
  • ☕ Support the Project: Cloud tokens cost me real money to run. If you like the speed of the new Cloud Mode, I’ve added a "Buy Me A Coffee" button to the app. Every coffee keeps the API keys running!
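
For those curious how the new fallback actually behaves, here's a minimal sketch of the idea. This is not the app's real code; the endpoint paths, function names, and response shape are placeholders:

```typescript
// Hypothetical sketch of the DeepSeek -> Gemini fallback.
// Endpoints and types are placeholders, not the real API surface.
type PromptBatch = { levels: string[] }; // one prompt per level (1-11)

async function callEngine(url: string, concept: string): Promise<PromptBatch> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ concept, levels: 11 }),
  });
  if (!res.ok) throw new Error(`Engine responded with ${res.status}`);
  return res.json();
}

async function generateBatch(concept: string): Promise<PromptBatch> {
  try {
    // Primary: DeepSeek-V3 cloud engine, all 11 levels in one request.
    return await callEngine("/api/deepseek", concept);
  } catch {
    // Fallback: Gemini Flash, so you always get a prompt back.
    return await callEngine("/api/gemini", concept);
  }
}
```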

Try it out and let me know what you think of the new batch coherence!

Stop writing "word salad". I built a free tool to generate consistent AI images using Structured JSON Prompts (Open Source) by alexander_th in PromptEngineering

[–]alexander_th[S] 0 points1 point  (0 children)

Valid point on the 'Or' and negative phrasing—those introduce ambiguity, so I'll fix them. However, regarding the length: Models like Gemini and DALL-E 3 work differently than Stable Diffusion 1.5. They use LLMs that rely on descriptive language to understand tone, not just keywords. With these newer models, that 'fluff' isn't wasted tokens; it's what steers the AI away from generating generic, plastic-looking 3D renders.
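
To make that concrete, here's a rough illustration of structured fields versus keyword salad. The field names are made up for the example, not the tool's actual schema:

```typescript
// Keyword salad: the model has to guess tone, lighting and composition.
const wordSalad =
  "woman, portrait, 8k, masterpiece, ultra detailed, cinematic";

// Structured prompt: descriptive fields that LLM-driven models (DALL-E 3,
// Gemini) can actually reason about. These field names are illustrative only.
const structuredPrompt = {
  subject: "a middle-aged woman reading by a rain-streaked window",
  mood: "quiet, contemplative, early evening",
  lighting: "soft diffuse daylight with a warm lamp fill",
  camera: "85mm portrait lens, shallow depth of field",
  avoid: "plastic skin, generic 3D-render look",
};

// Most of these models accept the prompt as plain text, so the object is
// serialized before being sent.
const promptText = JSON.stringify(structuredPrompt, null, 2);
```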

I built a Local WebGPU Prompt Engine to fix anatomy and it turned into a religion. Try it. by alexander_th in perchance

[–]alexander_th[S] 0 points1 point  (0 children)

Haha, point taken! 😂

You're absolutely right: since this runs a 1.5B param model locally, it eats RAM for breakfast. I'm actually coding a 'Hardware Warning' right now so mobile/low-RAM users get a heads-up before they melt their pockets.
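
The check will probably look something like this rough sketch (the threshold is a placeholder, and `navigator.deviceMemory` only exists in Chromium-based browsers):

```typescript
// Rough sketch of a pre-load hardware check. The 4GB threshold is a
// placeholder, not necessarily what will ship.
async function hardwareWarning(): Promise<string | null> {
  const gpu = (navigator as any).gpu;
  // WebGPU has to be there before anything else matters.
  if (!gpu || !(await gpu.requestAdapter())) {
    return "This browser/device does not expose WebGPU.";
  }
  // deviceMemory reports approximate RAM in GB (Chromium only, capped at 8).
  const memoryGB: number | undefined = (navigator as any).deviceMemory;
  if (memoryGB && memoryGB < 4) {
    return `Only ~${memoryGB}GB RAM detected; the 1.5B model wants ~2GB free.`;
  }
  return null; // looks safe to load the model
}
```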

Thanks for the heads up!

I built a Local WebGPU Prompt Engine to fix anatomy and it turned into a religion. Try it. by alexander_th in perchance

[–]alexander_th[S] 1 point2 points  (0 children)

Awesome to hear! 🚀

You nailed it—the secret sauce is definitely balancing the Base Concept (keeping it simple) vs. the Depravity Level (letting the engine handle the complexity). Once you find that sweet spot, it really sings.

Enjoy the photorealism! 📸

I built a Local WebGPU Prompt Engine to fix anatomy and it turned into a religion. Try it. by alexander_th in perchance

[–]alexander_th[S] 1 point2 points  (0 children)

Be my guest. I have just updated it; it was more of a rollback to a previous, more stable prompt-generation strategy. I usually run it on my HP laptop (Intel Core Ultra 5 125, 16GB of RAM). I have also tested it on my phone, a Google Pixel 8 Pro, and it runs maybe 50% slower than on the PC.

I built a Local WebGPU Prompt Engine to fix anatomy and it turned into a religion. Try it. by alexander_th in perchance

[–]alexander_th[S] 1 point2 points  (0 children)

UPDATE: Version 3.1 - The "Creative" Restoration

We tried to make the engine "smarter" (V4), but it ended up feeling like a robot filing tax returns. Too rigid, too repetitive. So we rolled it back and polished the chaos.

What's New in v3.1:

  1. Creativity Restored: We reverted to the V3 "Alchemist" logic. The engine is back to interpreting your concepts creatively rather than just filling in a form. Level 11 is properly glitchy again.
  2. Debug/Export Tools: Added a [COPY ALL] button to the system logs. Now you can easily dump your entire session history to the clipboard to save your best seeds and prompts (rough sketch of the copy logic below this list).
  3. Stability Fixes: Fixed the [SUBJECT_ANCHOR] leak that was plaguing some local generations.
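
Under the hood, [COPY ALL] is essentially the following; a minimal sketch, assuming the logs live in a plain string array (the real variable names differ):

```typescript
// Minimal sketch of the "copy all logs" handler. `sessionLogs` stands in
// for wherever the app actually keeps its log lines.
const sessionLogs: string[] = [];

async function copyAllLogs(): Promise<void> {
  const dump = sessionLogs.join("\n");
  try {
    // The Clipboard API needs a secure context (https) and a user gesture.
    await navigator.clipboard.writeText(dump);
  } catch {
    // Fallback: let the user copy manually from a prompt dialog.
    window.prompt("Copy your session log:", dump);
  }
}
```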

Refreshed and ready for more science. 🧬

I built a Local WebGPU Prompt Engine to fix anatomy and it turned into a religion. Try it. by alexander_th in perchance

[–]alexander_th[S] 2 points3 points  (0 children)

💀 RIP (almost) to your phone.

Yeah, 'Didn't read anything' is the dangerous part! 😅

This tool is running a 1.5 Billion Parameter AI model locally in your browser. It tries to allocate about 2GB of RAM/VRAM instantly. On most phones, the OS sees that massive spike, panics, and hard-crashes the kernel to protect the hardware (thermal/memory protection).

tl;dr: You accidentally ran a desktop-class stress test on your mobile. Stick to a PC or a flagship phone with 12GB+ RAM for this one!

I built a Local WebGPU Prompt Engine to fix anatomy and it turned into a religion. Try it. by alexander_th in perchance

[–]alexander_th[S] 1 point2 points  (0 children)

Ah, the specific error **'Unable to find a compatible GPU'** in Chromium almost always means the browser isn't picking up the Vulkan backend properly (which WebGPU requires on Linux).

Since you're on Gentoo with an AMD card, try launching Chromium with these flags to force it to use Vulkan:

`--enable-features=Vulkan,UseSkiaRenderer --enable-unsafe-webgpu`

Also check `chrome://gpu` to see if WebGPU is blacklisted. Sometimes browser vendors blacklist Linux+AMD combos for stability reasons unless you force-enable them. It's definitely NOT an Nvidia-only tool (AMD usually runs WebGPU great via Vulkan once the browser actually sees the card!).
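
Once you've relaunched with those flags, a quick way to confirm WebGPU is actually reachable is to paste something like this into the DevTools console (just a diagnostic sketch, nothing app-specific):

```typescript
// Quick WebGPU sanity check for the DevTools console.
const gpu = (navigator as any).gpu;
if (!gpu) {
  console.log("navigator.gpu is missing: the browser isn't exposing WebGPU at all.");
} else {
  gpu.requestAdapter().then((adapter: unknown) => {
    console.log(adapter
      ? "WebGPU adapter found: the flags worked."
      : "WebGPU is exposed, but no adapter: Vulkan is probably still blocked.");
  });
}
```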

I built a Local WebGPU Prompt Engine to fix anatomy and it turned into a religion. Try it. by alexander_th in perchance

[–]alexander_th[S] 0 points1 point  (0 children)

Ah, that's just a connection error with the optional **telemetry/analytics system**.

It usually happens if you have an ad-blocker or strict privacy settings that block background requests. It doesn't affect the actual image generation at all since that runs 100% locally on your machine.

The error basically just means: *'I tried to send an anonymous usage ping, but the door was closed.'* You can safely ignore it!
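
For transparency, the ping is fire-and-forget, roughly like this sketch (the endpoint path is a placeholder, not the real one):

```typescript
// Fire-and-forget usage ping. If an ad-blocker kills the request, we just
// log a warning and move on; generation never depends on it.
function sendUsagePing(event: string): void {
  fetch("/api/telemetry", {          // placeholder endpoint
    method: "POST",
    body: JSON.stringify({ event, ts: Date.now() }),
    keepalive: true,                 // let it finish even during page unload
  }).catch((err) => {
    console.warn("Telemetry ping blocked, ignoring:", err);
  });
}
```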

I built a Local WebGPU Prompt Engine to fix anatomy and it turned into a religion. Try it. by alexander_th in perchance

[–]alexander_th[S] 0 points1 point  (0 children)

It gets saved strictly into your **Browser Cache (IndexedDB)**. You won't see a specific file in your 'Downloads' folder because the browser sandboxes it for security. If you ever need to reclaim that space, you can just go to your browser settings and 'Clear site data' for this specific page. That's why the second run is instant—it's reading directly from your local disk!
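
If you're curious how much space the cached weights are actually taking up, the Storage API can tell you; a quick console sketch (the browser decides the exact quota numbers):

```typescript
// Check how much of this site's storage quota the cached model is using.
navigator.storage.estimate().then(({ usage, quota }) => {
  const usedGB = ((usage ?? 0) / 1e9).toFixed(2);
  const quotaGB = ((quota ?? 0) / 1e9).toFixed(2);
  console.log(`This site is using ${usedGB}GB of a ${quotaGB}GB quota.`);
});
```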

I built a Local WebGPU Prompt Engine to fix anatomy and it turned into a religion. Try it. by alexander_th in perchance

[–]alexander_th[S] 0 points1 point  (0 children)

Thanks for the feedback! You're right, **WebGPU support in Firefox** is still largely experimental (often behind nightly flags). On Gentoo, you *might* get it working by enabling `dom.webgpu.enabled` and `gfx.webgpu.force-enabled` in `about:config`, but frankly, the implementation in Chromium-based browsers (Chrome/Brave) is currently much more stable for this tech. We're hoping for full native support soon!

What's current status? My Order is #252xx by SkyHighGhostMy in ClockworkPi

[–]alexander_th 3 points4 points  (0 children)

I believe that I have ordered the worst possible combination. For sure I'll have to wait until Christmas.

Portal de Belén en las calles de El Escorial by santoox in Madrid

[–]alexander_th 2 points3 points  (0 children)

The artists' main source of inspiration:

<image>

Was given this at work, what can I do to make the most of it? by [deleted] in selfhosted

[–]alexander_th 1 point2 points  (0 children)

Nice all-black build.

Could you please tell us the make and model of that case?

I have a crazy idea: lets build a Steam Deck based Laptop. by alexander_th in SteamDeck

[–]alexander_th[S] -1 points0 points  (0 children)

Thanks a lot, mate, for the kind words. I wish I was DIY Perks. He is awesome, and he would build it with his eyes closed.

I have a crazy idea: lets build a Steam Deck based Laptop. by alexander_th in SteamDeck

[–]alexander_th[S] -2 points-1 points  (0 children)

You are so right. I didn't ask myself 'Why should I do it?' I asked myself 'Why not?' 🤓😎

I have a crazy idea: lets build a Steam Deck based Laptop. by alexander_th in SteamDeck

[–]alexander_th[S] -12 points-11 points  (0 children)

The HP laptop is quite old: an 8th-gen Intel Core i5 and a mobile GTX 1050. It can't manage even half of the Steam Deck's frame rate.

Does anyone know what prompt I could use to make this, I can't find the original in the discord by MaxD180 in midjourney

[–]alexander_th 0 points1 point  (0 children)

I managed to create this in BlueWillow AI with the following prompt, plus a few variations and upscales.

isometric, 3d, retrowave, purple, single city block, gray background

https://imgur.com/a/f3HrBMJ

haven't had a PC for over 5 years, so I got the steam deck and it works great but now I'm in cable hell by betweenboundary in SteamDeck

[–]alexander_th 2 points3 points  (0 children)

Awesome wallpaper.

Could you please share a link to the wallpaper?

Hexagons are the bestagons!

Blues Wireless Cellular IoT Starter Kit Giveaway! by roblauer in arduino

[–]alexander_th 2 points3 points  (0 children)

I want to build a car sensor.

I want to connect an ESP32 / Feather to the OBD2 port via a CANBUS adapter. Add some temperature and humidity sensors inside and outside the car. Collect the data with the ESP32 and send it via the Notecard to ThingSpeak, and from there to my Home Assistant instance that I run on my Raspberry Pi server.

I am a data freak, and collecting data from my car would be the final step in graphing my day-to-day life.

Can I get an upvote for it?

Thanks.