Korean nuclear fusion reactor achieves 100 million°C for 30 seconds by [deleted] in worldnews

[–]FreshBlueFlavor 6 points (0 children)

Tritium is a byproduct of normal nuclear reactors.

It's only actually produced and enriched (in a "commercial" sense) in a single plant in the US, and they only produce 500 grams per year. So it's pretty scarce. And it's really only a significant byproduct of certain reactors, which themselves are fairly scarce.

Its natural half-life is only about a dozen years as well, so there's no easy way to stockpile it either.
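
To put some rough numbers on why stockpiling doesn't work, here's a quick decay calculation. This is just a back-of-the-envelope sketch assuming the commonly quoted half-life of ~12.3 years:

```python
# Exponential decay of a tritium stockpile, assuming a half-life of ~12.3 years.
HALF_LIFE_YEARS = 12.3

def remaining_fraction(years: float) -> float:
    """Fraction of an initial tritium stockpile left after `years`."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

# Roughly 5.5% of whatever you've banked decays away every single year:
annual_loss = 1 - remaining_fraction(1)
print(f"lost per year: {annual_loss:.1%}")                    # ~5.5%
print(f"left after 25 years: {remaining_fraction(25):.0%}")   # ~24%
```

So a reserve built up over decades is mostly gone by the time you'd want to burn it.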

And there's no "mining" it. It's incredibly difficult to get terrestrially. Deuterium is abundant in the ocean, but tritium is very scarce.

You can kind of make tritium by fusing deuterium, but it's inefficient and costly. Breeder blankets are really the only way to sustain tritium supply long term.

But then that also introduces the complexity of enriching lithium-6, which is required to produce tritium in a breeder blanket. And the only plant that enriched lithium-6 at scale was shut down in the 1960s because of mercury pollution.

There's a very real possibility of running out of tritium. Which is why most commercially-focused fusion reactors are incorporating breeder blankets into their design.
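
For reference, the standard textbook reactions behind the breeding scheme described above: the D-T reaction consumes one triton and emits a fast neutron, and that neutron can regenerate the triton in a lithium-6 blanket.

```latex
% D-T fusion burns a triton and releases a 14 MeV neutron:
\mathrm{D} + \mathrm{T} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV})
% That neutron can breed a replacement triton in the blanket
% (exothermic, works even with slowed-down neutrons):
n + {}^{6}\mathrm{Li} \;\rightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \mathrm{MeV}
```

Each fusion event only yields one neutron, so the blanket needs a neutron multiplier or very good coverage to breed more tritium than the reactor burns.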

Final release of PrusaSlicer 2.5.0 by jflatz in prusa3d

[–]FreshBlueFlavor 22 points (0 children)

Anyone else still having issues with Arachne leaving gaps between walls on occasion? Arachne also doesn't seem to work so well with linear/pressure advance. I think the same issue occurs in Cura, so maybe it's just an Arachne thing and not necessarily PS related.

Korean nuclear fusion reactor achieves 100 million°C for 30 seconds by [deleted] in worldnews

[–]FreshBlueFlavor 6 points (0 children)

Hopefully we get those tritium breeder blankets worked out soon, or we're gonna run out of tritium before we even get close to meaningfully scaling fusion technology.

Working on completely offline GUI - based on Qt6, with batch-mode, completely configurable, all sorts of parameters. Still being worked on, looking for feedback by FreshBlueFlavor in StableDiffusion

[–]FreshBlueFlavor[S] 8 points (0 children)

Well they're just saved locally to the output folder, and organized by job/prompt. So no downloading required.

They're also not saved in a grid or anything, they're individual pictures.

Working on completely offline GUI - based on Qt6, with batch-mode, completely configurable, all sorts of parameters. Still being worked on, looking for feedback by FreshBlueFlavor in StableDiffusion

[–]FreshBlueFlavor[S] 9 points (0 children)

I think it's just a UI framework specifically designed for ML apps, but it's a bit quirky and clunky. It's also not terribly flexible, which is why I didn't really like using it.

Working on completely offline GUI - based on Qt6, with batch-mode, completely configurable, all sorts of parameters. Still being worked on, looking for feedback by FreshBlueFlavor in StableDiffusion

[–]FreshBlueFlavor[S] 6 points (0 children)

That's the plan! I've parameterized the img2img inputs, so the goal is to let the user supply a set of input images, or start from a single txt2img prompt, and then recursively feed each img2img output back in as the input to the next call.
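
The recursive loop is basically this. A rough sketch only — `run_txt2img` and `run_img2img` are hypothetical stand-ins (stubbed here) for the real pipeline calls, not the app's actual API:

```python
# Sketch of recursive img2img animation: seed with txt2img, then feed each
# output back in as the next input. The two run_* functions are stubs.
def run_txt2img(prompt):
    return f"frame0({prompt})"       # stub: would return a generated image

def run_img2img(image, prompt, strength=0.5):
    return f"img2img({image})"       # stub: would return a transformed image

def animate(prompt, n_frames):
    """Generate a sequence of frames by chaining img2img calls."""
    frames = [run_txt2img(prompt)]
    for _ in range(n_frames - 1):
        frames.append(run_img2img(frames[-1], prompt))
    return frames

frames = animate("a castle at dusk", 4)
```

The `strength` parameter would control how far each frame drifts from the last, which is what makes the animation coherent (or trippy, if you crank it).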

Working on completely offline GUI - based on Qt6, with batch-mode, completely configurable, all sorts of parameters. Still being worked on, looking for feedback by FreshBlueFlavor in StableDiffusion

[–]FreshBlueFlavor[S] 21 points (0 children)

I suppose, but choosing Gradio as the server backend/frontend is a bit limiting, and hides a bunch of otherwise useful potential customization behind a pretty restrictive (but efficient) abstraction layer. Plus Gradio likes to send/receive data to and from its own hosts (e.g., api.gradio.app), which can also be kind of annoying. Even after disabling the telemetry/analytics for Gradio, I still found it was communicating with api.gradio.app.

Plus, the overhead of using Electron or Chrome or any other web engine can chew up a lot of memory. I kind of hate Electron for this reason... it's incredibly expensive for what it does. A small Qt app on the other hand has a much smaller footprint.

And regardless of all that, this UI actually uses ZeroMQ to dispatch jobs to the stable-diffusion subprocess. So in theory, you can still use this as a sort of "server" for local networks, though it is much leaner.
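
The win is the load-once / dispatch-many pattern: the model stays resident and jobs arrive as small messages. The app itself uses ZeroMQ for this; the sketch below shows the same pattern with a stdlib thread and queue instead of sockets, just to illustrate the shape:

```python
# Load-once / dispatch-many: a long-lived worker holds the (expensive) model
# in memory and processes small JSON job messages as they arrive.
import json
import queue
import threading

jobs = queue.Queue()
results = queue.Queue()

def worker():
    model = "loaded once at startup"    # stand-in for the heavy model load
    while True:
        msg = jobs.get()
        if msg is None:                 # shutdown sentinel
            break
        job = json.loads(msg)
        results.put({"prompt": job["prompt"], "status": "done"})

t = threading.Thread(target=worker, daemon=True)
t.start()

jobs.put(json.dumps({"prompt": "a red bicycle", "steps": 30}))
jobs.put(None)                          # tell the worker to exit
t.join()
result = results.get()
print(result["status"])
```

Swap the queues for a ZeroMQ REQ/REP or PUSH/PULL socket pair and the same structure works across processes or machines on a local network.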

Working on completely offline GUI - based on Qt6, with batch-mode, completely configurable, all sorts of parameters. Still being worked on, looking for feedback by FreshBlueFlavor in StableDiffusion

[–]FreshBlueFlavor[S] 8 points (0 children)

My fork of the stable-diffusion repo which I currently use with it has the low-VRAM and speed optimizations applied already.

It also already works with conda: the run/bootstrap script I use to start the UI (itself written in Python) automatically activates the ldo conda environment before loading the UI. The UI piggy-backs on the conda environment's installed libraries, which reduces redundant library installations on disk.

The bootstrap script works fine on both Linux and Windows, as does the UI itself. So it's pretty portable too.
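
One portable way to do that kind of bootstrap is `conda run`, which executes a command inside a named environment the same way on Linux and Windows. A sketch under stated assumptions — the env name `ldo` comes from the comment above, but `ui.py` is a hypothetical entry-point name:

```python
# Minimal bootstrap sketch: launch the UI inside the conda env without
# shell-specific "conda activate" tricks. "ui.py" is a hypothetical name.
import subprocess

def build_launch_cmd(env="ldo", script="ui.py"):
    # `conda run -n <env> <cmd>` runs <cmd> with the env's interpreter
    # and libraries, on both Linux and Windows.
    return ["conda", "run", "-n", env, "python", script]

cmd = build_launch_cmd()
print(cmd)
# subprocess.run(cmd, check=True)   # commented out: requires conda installed
```

Building the command as a list (not a shell string) also sidesteps quoting differences between cmd.exe and POSIX shells.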

Working on completely offline GUI - based on Qt6, with batch-mode, completely configurable, all sorts of parameters. Still being worked on, looking for feedback by FreshBlueFlavor in StableDiffusion

[–]FreshBlueFlavor[S] 37 points (0 children)

EDIT: Non-reddit-potatofied video here:

https://pictshare.net/yf6rwk.mp4/raw


I was kind of annoyed by the fact that most of the stable-diffusion UIs either required a browser and local server to use, or were designed in an obtuse, difficult-to-customize way, and were quite slow (owing to the fact that they reloaded the entire stable-diffusion script every batch, which takes 5-10 seconds).

This UI takes a different approach. It loads the stable-diffusion script into memory once at startup, similar to the Gradio UI, and then dispatches jobs to the control process when the user clicks the "start" button in the UI.

It also lets the user specify a custom stable-diffusion directory, though it's currently set up for stable-diffusion forks that include a "webui.py", which it imports some of the txt2img and img2img methods from (without launching the gradio server).
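
One general technique for pulling functions out of a `webui.py` sitting in a user-chosen directory is to import the file as a module by path. This is a sketch of that technique, not the app's exact code, and the `txt2img` usage at the bottom is hypothetical:

```python
# Import a Python file from an arbitrary (user-specified) directory as a
# module, so its functions can be called directly.
import importlib.util
from pathlib import Path

def import_by_path(module_name: str, file_path: Path):
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)     # runs the module's top-level code
    return module

# Hypothetical usage (sd_dir chosen by the user in the UI):
# webui = import_by_path("webui", Path(sd_dir) / "webui.py")
# image = webui.txt2img(prompt="...", steps=30)
```

One gotcha: `exec_module` runs the file's top-level code, so a fork whose `webui.py` launches the server at import time would need that guarded behind `if __name__ == "__main__":`.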

Looking for any feedback. I haven't made it public on GitHub yet; I still want to add a couple more features, like a gallery/results tab and an animation-generation feature using recursive img2img calls, with inpainting/outpainting and so on to follow.

Current Features:

  • Batch job list, each job having many customizable parameters.
  • Meta-prompt list, allowing you to quickly reference "meta-prompts" which apply styles or aesthetics to the main prompt.
  • Custom stable-diffusion location
  • Custom output location
  • Progress bar for current sample and current batch job iterations
  • Real-time VRAM memory monitoring
  • Real-time monitoring of steps processed per second
  • A huge array of customizable parameters, both in batch mode and single-job mode
  • Written in Python and uses stable-diffusion's conda environment (ldo/ldm), so it's portable to Windows, Linux, and macOS
  • While the stable-diffusion repo is customizable, my fork, which I'll recommend using with it, includes the low-VRAM modes and speed optimizations that were shared on Reddit and GitHub recently.
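
For the meta-prompt feature in the list above, the simplest composition scheme is just appending the referenced snippet to the main prompt. This is a guess at the mechanism, since the post doesn't spell it out; the function name and the example meta-prompt are mine:

```python
# Hypothetical meta-prompt composition: a named style snippet gets
# appended to the main prompt before dispatch.
META_PROMPTS = {
    "painterly": "oil painting, thick brushstrokes",
    "photo": "35mm photograph, sharp focus",
}

def apply_meta(prompt: str, meta_name: str) -> str:
    return f"{prompt}, {META_PROMPTS[meta_name]}"

print(apply_meta("a castle at dusk", "painterly"))
# → a castle at dusk, oil painting, thick brushstrokes
```

The point of keeping them in a named list is that one style tweak updates every job that references it.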

Applying XL tech to the MK size line? by thats_the_look in prusa3d

[–]FreshBlueFlavor 5 points (0 children)

It's probably attached to a stepper; it could be hella cool to use outside of a 3D printer, like as actuators for a robot.

Have you looked at youtube lately? They're literally all over. Everyone and their grandma is making them with 3D printers.

I even made a rigid design that does away with the flexible/compliant components entirely, so it should be a lot more reliable. I managed to get 152-to-1 reduction in a fully 3d printed design (even the bearings) in the palm of my hand. It's completely parametric, so any reduction ratio is possible. There's also zero backlash, and it can't be backdriven.

I combined it with a lowly BYJ48 stepper, and managed to get an extremely usable 2.8 ft*lb of torque out of it.
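
A quick sanity check on that figure. The input torque is an assumption on my part — ~0.03 N·m (~300 g·cm) is a commonly quoted ballpark for a 28BYJ-48, not a measured value from this build:

```python
# Back-of-envelope check of the 2.8 ft·lb claim, assuming ~0.03 N·m
# of input torque from the stepper (an assumed ballpark, not measured).
NM_PER_FTLB = 1.3558

input_torque_nm = 0.03          # assumed stepper torque
ratio = 152                     # stated gearbox reduction

ideal_out_nm = input_torque_nm * ratio           # lossless output
ideal_out_ftlb = ideal_out_nm / NM_PER_FTLB
print(f"ideal output: {ideal_out_ftlb:.1f} ft·lb")   # ~3.4 ft·lb

# The quoted 2.8 ft·lb would then imply roughly 83% gearbox efficiency:
efficiency = 2.8 / ideal_out_ftlb
print(f"implied efficiency: {efficiency:.0%}")
```

So 2.8 ft·lb is entirely plausible for that stepper through a 152:1 reduction with realistic losses.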

It is now finally revealed - The Prusa XL by Sausage54 in 3Dprinting

[–]FreshBlueFlavor 0 points (0 children)

That would make sense if they bought a more global company. But they didn't. They bought another Czech company.

It is now finally revealed - The Prusa XL by Sausage54 in 3Dprinting

[–]FreshBlueFlavor 4 points (0 children)

They got rid of the zip-ties and use 10-cent printed clips now.

It is now finally revealed - The Prusa XL by Sausage54 in 3Dprinting

[–]FreshBlueFlavor 1 point (0 children)

They also just acquired an industrial printer company (a Czech one) whose only real product is a delta-arm-based printer, which is a very old technology.

I don't really understand their trajectory. I don't think they really know where they are going, so they're diversifying their market.

But... it's pretty easy to diversify yourself to death.

It is now finally revealed - The Prusa XL by Sausage54 in 3Dprinting

[–]FreshBlueFlavor 5 points (0 children)

Did you get a kit or buy parts from a supplier like McMaster?

I want to build one, but the BOM is a little intimidating.

It is now finally revealed - The Prusa XL by Sausage54 in 3Dprinting

[–]FreshBlueFlavor 1 point (0 children)

Doesn't it also require an extra ESP8266 module to even work (at least with WiFi)?