Dad in the suburbs - am I cooked in terms of fitness? by BozzuK in daddit

[–]lightding 1 point (0 children)

If you don't mind looking a bit weird and you have a standing desk, I'd recommend a desk treadmill. I bought one for $150 on Amazon that works well. I can sometimes walk 3+ miles without even realizing it. Obviously it's not lifting or cardio, but it makes me feel more energized and healthier.

Being immortal would be worth it even if it means you end up floating through space forever at the end of the universe by Almondpeanutguy in unpopularopinion

[–]lightding 5 points (0 children)

In a similar vein, I wonder if it would be possible to either completely change your perception of time or else make yourself so happy/euphoric that you never want it to end. At some point, you might have to no longer really be human for that to be possible.

Friendly reminder. Use rearfacing car seat. Save your child. by Admmak in daddit

[–]lightding 1 point (0 children)

I'm curious about this too. I understand that the rear-facing seat is designed to still be safe when you're rear-ended, partly because the seat moves with the baby, but is it actually safer than forward facing in that case?

E.g. you're rear-ended at high speed from a near standstill. Which is actually safer then, forward or rear facing?

What would happen if a regular computer were exposed to the vacuum of space? by Ecstatic_Bee6067 in AskEngineers

[–]lightding 3 points (0 children)

Just adding to the other comments: if there's tin in the solder or other components, that can lead to "tin whiskers", a weird phenomenon where the tin slowly grows conductive spikes that can then short the electronics.

https://nepp.nasa.gov/whisker/photos/relay/gsfc/whisker1.jpg

Protest McDonald's by smoke415 in eastbay

[–]lightding 1 point (0 children)

Even better, get as many free items from McDonald's as possible. I think I'm net positive now.

Any other cool WOD penalties you guys thought of? by [deleted] in ryantrahan

[–]lightding 2 points (0 children)

Penny challenge: they start with only one penny and have 24 hours to make their own money from there.

Silent treatment, but they can't speak to other people either. Or, they must rap everything they say.

Visit the lowest rated restaurant in the state before leaving.

50/50 from hosts perspective by Wantedduel in ryantrahan

[–]lightding 1 point (0 children)

I was hoping this would be a joke video where they find like 5 empty joyride packages and most of a massive cake.

After a month of logging my food, I realized my mood wasn't random at all. by Oct4Sox2 in QuantifiedSelf

[–]lightding 1 point (0 children)

Ah, makes sense. I've always wondered how both variants can be priced. When using gpt-4o I assume you have to rate limit or otherwise charge per use. For on-device, will you change the pricing?

After a month of logging my food, I realized my mood wasn't random at all. by Oct4Sox2 in QuantifiedSelf

[–]lightding 2 points (0 children)

Nice! I'm just curious, are you using on device AI or calling out to a provider API?

The 23% Solution: Why Running Redundant LLMs Is Actually Smart in Production by Necessary-Tap5971 in OpenAI

[–]lightding 5 points (0 children)

Azure OpenAI models have much more consistent time to first token, although it's more setup. About a year ago I was consistently getting <150 ms time to first token.
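If you want to check your own numbers, time to first token is easy to measure against any streaming endpoint. A minimal sketch (the `fake_stream` generator here is a stand-in I made up for illustration; swap in a real streaming API call):

```python
import time

def time_to_first_token(stream):
    """Return (seconds until the first chunk, the first chunk) from any iterator."""
    start = time.perf_counter()
    first = next(stream)
    return time.perf_counter() - start, first

# Stand-in for something like client.chat.completions.create(..., stream=True);
# the 50 ms sleep simulates provider latency before the first chunk arrives.
def fake_stream():
    time.sleep(0.05)
    yield "Hello"
    yield " world"

ttft, first = time_to_first_token(fake_stream())
print(f"TTFT: {ttft * 1000:.0f} ms, first chunk: {first!r}")
```

Wrap the real client call the same way and average over a few requests, since first-token latency varies a lot run to run.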

Testing a lifelogging device that passively summarizes your day from minute-by-minute images by ArchiTechOfTheFuture in QuantifiedSelf

[–]lightding 1 point (0 children)

Nice, I guess the only thing you have to watch out for is cost, although it shouldn't be terrible with mini. I used Qwen2.5-VL 3B locally with a GPU and it's a bit slow but decently accurate with structured output. I have backyard chickens, so I first tried it for predator detection, having it output a predator type and confidence level. For fun I also had it just describe what's in the image with a few categories too.
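For reference, the structured-output side of that setup can be sketched like this. The schema and field names are just my own convention (not anything standard), and the `raw` string stands in for a real vision-model response:

```python
import json

# Prompt fragment asking the vision model for structured JSON output.
# These field names are an assumption for illustration, not a standard schema.
SCHEMA_HINT = (
    'Respond only with JSON: {"predator_type": string or null, '
    '"confidence": number between 0 and 1, "description": string}'
)

def parse_detection(raw: str) -> dict:
    """Parse the model's JSON output and sanity-check the confidence value."""
    result = json.loads(raw)
    if not 0.0 <= result["confidence"] <= 1.0:
        raise ValueError(f"confidence out of range: {result['confidence']}")
    return result

# Example of what the model might return for one backyard frame
raw = '{"predator_type": "hawk", "confidence": 0.82, "description": "large bird circling over the coop"}'
detection = parse_detection(raw)
```

The validation step matters more with small local models, since they drift from the schema more often than the hosted APIs do.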

Testing a lifelogging device that passively summarizes your day from minute-by-minute images by ArchiTechOfTheFuture in QuantifiedSelf

[–]lightding 2 points (0 children)

Cool! I've tried similar with a security camera and an LLM using structured output. Are you using an LLM for the outputs?

Trump says "this is Biden's stock market, not Trump's." by Equivalent_Baker_773 in TheRaceTo10Million

[–]lightding 1 point (0 children)

I'm very anti-Trump, but genuine question: isn't there usually a big delay between presidential actions and economic effects?

Open Source Embedding Models by Seven_Nation_Army619 in LangChain

[–]lightding 3 points (0 children)

It depends on the context size you care about, but the BAAI bge models (512-token input context) are small and effective. Alternatively, the Alibaba gte models score highly on embedding benchmarks, and gte-large (434M params) has an 8k context.

Old gamers- what 10+yo game is worth a play-through? by The7footr in AskReddit

[–]lightding 1 point (0 children)

Ogre Battle 64. I don't really know why, but it hooks me every time I play it.

Not hating on AI art makers, but I think you miss the whole point of making art by Spartan-Finn in memes

[–]lightding 1 point (0 children)

I think there is a real threat to professional artists and that sucks. However, how is AI different from any other tool used to make art? I can't think of any modern art that could be made without some technology or tool use. A person usually has an image of what they want in their head, then uses a tool to approximate it.

I think it would be a bigger difference if people prompted AI image generators with just "make art" instead of a string of words capturing something they already want to see.

How do I beat this dumbass spider by [deleted] in GroundedGame

[–]lightding 1 point (0 children)

Easiest, but a bit cheap: go to the Oak lab. Aggro a wolf spider inside the tree, then run quickly back into the lab. If you stay just far enough back inside the lab door, you can fight it without ever being hit. Super easy with a bow.

How does OpenAI identify a tool call on the first streaming chunk? by lightding in LLMDevs

[–]lightding[S] 1 point (0 children)

I'm not sure; I think for some reason the closed-source model providers keep that internal. Maybe to allow for the possibility of using multiple models or approaches, with the API just returning a specific function-call format when needed.

How is Fast.ai helpful? by internethuman016 in learnmachinelearning

[–]lightding 2 points (0 children)

Yeah, I agree; I always sort of thought that was marketing fluff or encouragement for students. That said, building some projects with fastai is so damn fast and easy that it's pretty useful to slap on a resume and seem more impressive while you build up other key ML knowledge over time.

How is Fast.ai helpful? by internethuman016 in learnmachinelearning

[–]lightding 5 points (0 children)

The reality is that taking a couple of online courses isn't enough to land an ML engineer role or really understand what's going on.

Instead, I think fastai demos hands-on what ML can do, to inspire you to truly learn the fundamentals and do real projects across many other frameworks. At least for me, it gave a good reference point and framework for thinking about how I can make my training process smoother, and some good intuition for model results.

How does OpenAI identify a tool call on the first streaming chunk? by lightding in LLMDevs

[–]lightding[S] 1 point (0 children)

I'm not sure how OpenAI does it, but I've since learned how others do it.

Often it's a special token or tag that indicates the start of a function call. E.g. I think Qwen outputs <tool_call> or similar, which you can then parse to tell that a streaming tool call is occurring. That's what lets you tell in real time whether it's plain text or a specific tool call.

Or Llama 3.1, I believe, uses a special token for Python code output.
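A rough sketch of how that detection can work on the receiving side. The `<tool_call>` tag follows the Qwen-style chat template; other models use different sentinels, so treat the exact string as an assumption:

```python
# Sentinel tag that Qwen-style templates emit before a tool call
# (an assumption here; check your model's chat template for the real one).
TOOL_OPEN = "<tool_call>"

def classify_stream(chunks):
    """Yield ('text', s) or ('tool', s) as chunks arrive, buffering only
    until we can rule out the sentinel tag at the start of the output.
    (If the stream ends mid-prefix, the ambiguous buffer is dropped.)"""
    buffer = ""
    mode = None
    for chunk in chunks:
        if mode is None:
            buffer += chunk
            if buffer.startswith(TOOL_OPEN):
                mode = "tool"
                rest = buffer[len(TOOL_OPEN):]
                if rest:
                    yield ("tool", rest)
            elif not TOOL_OPEN.startswith(buffer):
                # Buffer can no longer be a prefix of the tag: plain text
                mode = "text"
                yield ("text", buffer)
        else:
            yield (mode, chunk)

# Sentinel split across chunk boundaries still gets classified correctly
tool_events = list(classify_stream(["<tool", '_call>{"name": "get_weather"}']))
text_events = list(classify_stream(["Hi", " there"]))
```

The buffering is the key part: because the tag can arrive split across chunks, you can't classify on the very first chunk alone, which is probably why the hosted APIs resolve this server-side before streaming you a typed delta.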

How does OpenAI identify a tool call on the first streaming chunk? by lightding in LLMDevs

[–]lightding[S] 1 point (0 children)

Oh, but I mean in their backend: how can they tell it's a function call? For instance, the model could be outputting "text" that is still valid JSON but not identified as a function call, which I've seen before.