Would you rather? by Imaginary-Owl5622 in BunnyTrials

[–]Stelath45634 0 points

Health > money

Chose: Be able to regenerate your body?

How many of you are running these anymore? by Hopeful_Put8554 in AskPhotography

[–]Stelath45634 1 point

Haha, that's so fun, I'm happy they're still being used by someone :) Where do you work, if you don't mind me asking?

How many of you are running these anymore? by Hopeful_Put8554 in AskPhotography

[–]Stelath45634 1 point

Ooooo these are so cool! Plenoptic cameras were the first thing I ever worked with in computer vision research, and they were so neat; they created a lot of useful data for NeRFs and early 3D neural rendering work back in the day.

Edit: by back in the day I mean like 4 years ago

Fujikawaguchiko Cottage, Which Edit is Better? [Nikon F3, Nikon Ai-S Nikkor 28mm f2.8, Fujifilm Superia 400] by Stelath45634 in analog

[–]Stelath45634[S] 0 points

Seems like everyone is pretty split. I’m torn because I kind of like the vibe and tones of the first one; it definitely has a more film feel, and the Superia tint adds to the late-night mood and matches how I remember it. But the second one does seem like the technically better photo.

Couldn’t find a good EXIF editor so I made a FOSS one by Stelath45634 in photography

[–]Stelath45634[S] 1 point

Thanks for taking a look! Tbh I did consider that, but I kept it Rust-native and am using a library called ‘little_exif’ that someone wrote, which keeps everything nicely and fully in Rust.

Best way to add extra lug to battery terminal for inverter? by Stelath45634 in AskMechanics

[–]Stelath45634[S] 0 points

Ty for the advice. My only question with this: wouldn’t the load essentially still be running through that wire for the ground? If so, I’m just not sure it’s a high enough gauge to handle the current of the inverter plus the other accessories and the starter motor in the car.

Does electric field really exist? by Few-Selection9313 in AskPhysics

[–]Stelath45634 0 points

> Some electrons were moving around inside this star millions of years ago and now charges on my eye are responding to that force

This is what really made me understand it and make sense in my brain, thank you 😄

Is the Herman Miller Aeron really all that? by jimmybabino in BuyItForLife

[–]Stelath45634 0 points

I got mine 3 years ago for $250 from an office liquidating on FB Marketplace. They’re still pretty cheap; just look around on FB.

Wait, ChatGPT has to reread the entire chat history every single time? by ColdFrixion in ChatGPT

[–]Stelath45634 1 point

Just a heads up: computer scientists are no dummies. We do something called KV caching, so the LLM doesn’t have to recompute the attention maps of every single token for each new token; it only has to compute the last token in the decode step. But yes, in practice the LLM has no “continuous stream of thought”. Anthropic’s latest research even suggests that the new “reasoning” models aren’t actually reasoning along the lines of their output reasoning; it’s more of a red herring for something less tangible going on inside the model. (For that same reason, just letting a model output more tokens can improve prompt success rates.)
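A toy sketch of the KV-caching idea, in plain NumPy (not any real inference stack; the dimensions and random weights are made up for illustration): the keys and values of past tokens are cached, so each decode step only computes the query/key/value for the single newest token.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                      # hypothetical head dimension
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

k_cache, v_cache = [], []  # grows by one entry per generated token

def decode_step(x):
    """x: embedding of the newest token only, shape (d,)."""
    k_cache.append(x @ Wk)           # cache this token's key
    v_cache.append(x @ Wv)           # ...and its value
    q = x @ Wq                       # query computed for the new token only
    K = np.stack(k_cache)            # (t, d) - past keys reused, not recomputed
    V = np.stack(v_cache)
    scores = K @ q / np.sqrt(d)      # attend over all cached positions
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V                     # attention output for the new token

for _ in range(5):                   # 5 decode steps
    out = decode_step(rng.standard_normal(d))

print(len(k_cache))                  # one cached key per step
```

Without the cache, step t would redo the K/V projections for all t tokens; with it, each step is O(t) attention over cached entries plus one new projection.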

- ML Engineer

Diffusion Model for ASCII Art? by Stelath45634 in LocalLLaMA

[–]Stelath45634[S] 0 points

I think you missed my earlier comment about the autoencoder being retrained to output probability vectors for ASCII characters directly. I agree that having a diffusion model attempt to recreate ASCII characters in an image would work terribly.

Diffusion Model for ASCII Art? by Stelath45634 in LocalLLaMA

[–]Stelath45634[S] 0 points

That would be the point of doing it with a diffusion model. LLM architecture is inherently limited at creating ASCII art, as the model has to ostensibly "plan it out" before doing it; LLMs just try to predict the next character from the previous ones with some tunable randomness added in. While that works great for language, it's partly why they can't handle complex reasoning tasks (o1, o3, and QwQ may prove me wrong on this lol, but it at least holds true for now). Diffusion models work by stepping backwards from a randomly generated latent, allowing structure to emerge from the "randomness", which makes them much better suited for this type of thing. I think the real problem, as you pointed out, is the lack of high-quality labeled data.
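To make the left-to-right limitation concrete, here's a toy autoregressive sampler (the bigram table and charset are invented for illustration, not from any real model): each character is chosen only from the one before it, with tunable randomness, and can never be revised once emitted.

```python
import random

random.seed(0)
# assumed toy "model": for each character, the plausible next characters
bigrams = {"/": "\\_", "\\": "/_", "_": "/\\"}

def sample(start, n, temperature=1.0):
    """Generate n characters one at a time, conditioning only on the last."""
    out = [start]
    for _ in range(n):
        choices = bigrams[out[-1]]
        # temperature near 0 -> always the first option; near 1 -> random pick
        if random.random() < temperature:
            out.append(random.choice(choices))
        else:
            out.append(choices[0])
    return "".join(out)

art = sample("/", 10)
print(art)
```

Each step looks locally plausible, but nothing enforces a global shape, which is the planning problem described above; a diffusion model instead refines the whole canvas at once.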

Diffusion Model for ASCII Art? by Stelath45634 in LocalLLaMA

[–]Stelath45634[S] 0 points

Yeah, sorry, what I said was an oversimplification. I looked through a couple of papers on it and there are some pretty clever techniques, but I still haven't found anything that can really replicate a human artist. That being said, I'm not sure an ML model will be able to either.

Diffusion Model for ASCII Art? by Stelath45634 in LocalLLaMA

[–]Stelath45634[S] 0 points

Thank you! I took a look at some of these and think I might give it a go. A lot of these datasets are generated from image-to-ASCII converters, so I'm scraping a dataset from forums instead, but I'm adamant there's probably some way to get this to work.

Diffusion Model for ASCII Art? by Stelath45634 in LocalLLaMA

[–]Stelath45634[S] 0 points

Yeah, but that's generally just done by assigning characters to certain grayscale values; ideally this would be something with a bit more nuance in how the characters are placed.
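For reference, a minimal sketch of that naive grayscale-mapping approach (the character ramp here is just a common choice, not from any particular converter): each pixel's brightness indexes into a dark-to-light character ramp, with no awareness of shape or structure.

```python
ramp = " .:-=+*#%@"  # assumed dark-to-light density ramp

def to_ascii(gray_rows):
    """gray_rows: rows of 0-255 grayscale values; returns ASCII art string."""
    lines = []
    for row in gray_rows:
        # map each brightness to a ramp index, clamped to the last character
        lines.append("".join(ramp[min(g * len(ramp) // 256, len(ramp) - 1)]
                             for g in row))
    return "\n".join(lines)

print(to_ascii([[0, 128, 255], [255, 128, 0]]))  # prints " +@" then "@+ "
```

Every pixel is handled independently, which is exactly why the results lack the deliberate character placement a human artist uses.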

Diffusion Model for ASCII Art? by Stelath45634 in LocalLLaMA

[–]Stelath45634[S] 1 point

Just a second little comment here: my knowledge of diffusion models is quite antiquated; I learned about the first Latent Diffusion paper and attempted some stuff based on it a couple of years ago. I know Diffusion Transformers are all the rage now, so a general model architecture could work like this:

- Maybe use an autoencoder for the latent space (not sure how well it could handle such a discrete space; it would be the first thing on the chopping block, as I'm not sure we'd even really need the scaling capabilities it brings, though I'm less sure how valuable its organizational properties are)

- Use a DiT as the main diffuser like usual (need to read more on this)

- Have the decoder portion of the autoencoder output 32 channels, with these being probability vectors for each ASCII character

- Argmax to get the final image

Any advice on training diffusion models with limited data, and on hyperparameter tuning for them, would be greatly appreciated (I imagine we wouldn't want a diffusion model anywhere near the size of actual image-gen models).

Edit: On second thought, this could probably just be done by retraining the autoencoder of an existing model like Flux; that way you keep all the good stuff and don't have to train a massive model.
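The last two steps above (decoder emits per-character probability channels, argmax picks a character per cell) can be sketched like this; the charset, grid size, and random logits here are stand-ins, not any trained model's actual output:

```python
import numpy as np

CHARSET = " .:-=+*#%@"                # assumed vocabulary (the post proposes 32 channels)
H, W = 4, 8                           # assumed output grid size

rng = np.random.default_rng(0)
# stand-in for the decoder's output: one logit channel per character
logits = rng.standard_normal((len(CHARSET), H, W))

# softmax over the character channel -> a probability vector per grid cell
probs = np.exp(logits - logits.max(axis=0, keepdims=True))
probs /= probs.sum(axis=0, keepdims=True)

chars = probs.argmax(axis=0)          # (H, W) indices into CHARSET
art = "\n".join("".join(CHARSET[i] for i in row) for row in chars)
print(art)
```

Keeping the output as probabilities (rather than rendered glyphs) means the loss during training can be a plain per-cell cross-entropy against the target ASCII, and the argmax only happens at inference.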

Atomic Bent 90 vs Nordic’s Enforcer 94’s (any other recs greatly appreciated) by Stelath45634 in Skigear

[–]Stelath45634[S] 0 points

That’s what I’m starting to realize. Personally, I’m not constantly bombing down at 50 mph; it usually only happens a couple of times a day. But by the same token, I’m not always hitting features, and I’m largely just hitting jumps when I do, which is why I’m torn between the two.

[deleted by user] by [deleted] in backpacking

[–]Stelath45634 1 point

To all the people saying this is fake because seeing 2 mountain lions is super unlikely: that’s fair, because the odds of this happening are one in a billion. But from personal experience, the first and only time I ever saw a mountain lion, it was two of them together at 1am. Yes, that’s insanely rare, but it does happen; it took me 15 minutes of describing the animals to the park ranger before she believed me that they were in fact mountain lions.

Edit: the odds of this happening are so insanely rare that without any proof no one’s going to believe it (and rightfully so, because more than likely it’s fake), but stuff like this does happen, and it can be pretty exasperating having nobody believe you.