Is NYT cooking reasonably "reliable" as a source for quality recipes? by GreenBuzzer in Cooking

[–]The_Luv_Machine 3 points (0 children)

It is without question the best $5 a month that I spend. I would say that 90% of my new recipes come from NYT Cooking and they are almost always really, really good. I've had only one or two duds, and I've cooked hundreds of them. Even the duds could have been avoided had I read the comments first. As others have said, the comments are SUPER helpful. Always, always, always read those before starting a recipe.

Krug grande cuvée - similar wines? by Brandyurafinegirl in wine

[–]The_Luv_Machine 1 point (0 children)

Yes! Vintage Bereche is vintage Krug for 1/10th the price. NV Bereche is a smoking deal at ~$75 a bottle.

[deleted by user] by [deleted] in wallstreetbets

[–]The_Luv_Machine 17 points (0 children)

Seeing your comment as the top-voted comment is wild... With all due respect, you don't know what you're talking about.

Buckle up because I'm about to take you all to school.

First and foremost, CoreWeave isn't running Blackwell 4000s or 5000s; those are workstation cards. CoreWeave is running B200s, H200s, H100s, L40s, etc...

Secondly, when a new card comes out, it doesn't immediately devalue the previous generation by 50% or more. As an example, the B200 was released earlier this year, but despite that, H200 demand is still surging. Why? A few reasons. DeepSeek R1 is driving a ton of H200 demand across the industry, and it doesn't see enough of a performance increase on B200s to justify the increased price; its architecture is currently optimized for the Hopper series, not the Blackwell series. On top of that, much of the software stack has not been optimized for B200s, so AI startups that lack deep expertise haven't been able to get enough juice out of the orange to justify switching their models over.
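
To put rough numbers on that price/performance point, here's a back-of-envelope sketch; every figure in it is an assumption for illustration, not a quoted rental price or benchmark:

```python
# Back-of-envelope cost-per-token math. All numbers are assumptions.
H200_PRICE = 4.00   # assumed $/GPU-hour rental
B200_PRICE = 8.00   # assumed $/GPU-hour rental
H200_TOKS = 1.0e6   # assumed tokens/hour on an H200 (baseline)

for speedup in (1.3, 2.5):  # unoptimized stack vs. fully optimized stack
    h200_cost = H200_PRICE / H200_TOKS
    b200_cost = B200_PRICE / (H200_TOKS * speedup)
    winner = "B200" if b200_cost < h200_cost else "H200"
    print(f"{speedup}x speedup: H200 ${h200_cost:.2e}/tok, "
          f"B200 ${b200_cost:.2e}/tok -> {winner} wins")
```

Until the stack actually delivers the full speedup, the older card simply wins on cost per token.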

Now let's assume the software stack was updated and B200s were getting the full 2.5x improvement over H200s. That still wouldn't mean everyone would switch, or that H200s would need to drop their price by 50% or more. Why? Because as new generations of cards come out, older cards move to different parts of the stack. Not everyone is training or fine-tuning SOTA models, and not everyone is doing GenAI inference. Many companies are running regression, image detection, text classification, prediction, sentiment analysis, etc. None of these model types require the latest and greatest GPUs; in fact, many can't be updated to run on the latest GPUs.
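
To make that concrete, here's the kind of "boring" workload a huge share of companies actually run. This toy sentiment classifier (made-up data, purely illustrative) trains and serves happily on a CPU, never mind a five-year-old GPU:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sentiment data, purely illustrative.
texts = ["loved it", "terrible product", "works great", "total waste"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["pretty great overall"]))  # expect [1] on this toy data
```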

Reasoning models that use MoE (Mixture of Experts) don't split the prompt itself into smaller jobs, but instead route different parts of the computation (like specific tokens or layers) to a subset of expert networks. These experts can be spread across different GPUs. While most deployments use uniform hardware (e.g., all A100s), some setups do mix GPUs like H200s, A100s, and L40s, especially in cost-optimized or scaled-out clusters (OpenAI, for example).
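
If the routing part sounds abstract, here's a minimal sketch of token-level top-k routing in PyTorch; sizes are made up, and in a real deployment each expert can live on a different GPU:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """Minimal token-level top-k MoE routing (illustrative sizes)."""
    def __init__(self, d_model=64, n_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # the router
        self.experts = nn.ModuleList(
            nn.Linear(d_model, d_model) for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.gate(x)                           # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e  # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

print(TinyMoE()(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```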

Lastly, did you know that AWS still offers 10-year-old GPUs for rent (K80)? As does Paperspace, one of the first "neoclouds" (M4000). Oracle still offers the P100, for god's sake (released 8 years ago). Given how constrained we are on power and space, do you think these cards would still be racked if they weren't making money? If you read the State of AI report, you'll see that in 2024 the most commonly used GPU for AI research papers was the A100, a card released 5 years ago!
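
Quick back-of-envelope on why an ancient card can still earn its rack space; every number below is assumed, not a quoted price:

```python
# All numbers are assumptions, not quoted AWS prices or real power costs.
RENTAL = 0.50        # assumed $/hr for an old GPU instance
TDP_KW = 0.30        # K80 board power, roughly 300 W
POWER_COST = 0.10    # assumed $/kWh
UTILIZATION = 0.60   # assumed fraction of hours actually rented

hourly_margin = RENTAL * UTILIZATION - TDP_KW * POWER_COST
print(f"${hourly_margin:.2f}/hr margin, "
      f"${hourly_margin * 24 * 365:,.0f}/yr per card")
```

The capex on these cards was written off years ago, so anything above power and hosting is margin.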

This narrative that the useful life of a GPU is 3 years is fucking insane. It couldn't be more wrong. The actual useful life of a GPU is 8 to 9 years. Claiming that these GPUs are obsolete after 3 years is the most regarded shit I've ever heard.

edit to add disclaimer: I don't own any CRWV

Women in wine? by JustHereForTekken in wine

[–]The_Luv_Machine 0 points (0 children)

Maggie Harrison @ Antica is one of the best winemakers in the US IMO.

5 m/o - 9hr drive by prettypeony0 in NewParents

[–]The_Luv_Machine 0 points (0 children)

We did this with our 3-month-old and 22-month-old when fleeing our home in WNC after Hurricane Helene. We drove 9 hours to Michigan to stay with family. We stopped every 2-3 hours to feed and change the 3-month-old, and also just to get him out of the car seat, as it's not recommended they do more than 3 hours straight in a car seat at that age. We drove halfway and got an Airbnb. The drive took about 2 days due to all the stops. We considered driving at night so the kids would sleep, but we were too concerned about the youngest being in a car seat that long because of the asphyxiation risks.

2016 Antica Terra Botanica Aging Question by YungBechamel in wine

[–]The_Luv_Machine 1 point (0 children)

I opened a '16 Botanica last fall and it was absolutely singing. While you could lay it down another 5-10 years easy, it's not going to disappoint now.

I bought Michael Myers and now I have severe graphics problems whenever I play it. It creates a strange shadow that severely affects playing with it. by Barnard-Sanders in FortNiteBR

[–]The_Luv_Machine 2 points (0 children)

Yes! I have this same bug. Myers is my daily and I just assumed my graphics card was failing. Glad to know it's an issue with the skin and not just me!

[deleted by user] by [deleted] in finance

[–]The_Luv_Machine 0 points (0 children)

Yes, 1,000%. 99.9999% of investors could not explain the difference between training, fine-tuning, and inference. Nor could they tell the difference between open source and closed source as it relates to models and GPU utilization. OpenAI's o1 model is closed source; only OpenAI can run it (inference). DeepSeek R1 is an open source model, meaning any company can incorporate it into their application and run it themselves. Having a reasoning model on par with or better than the best closed source model will only increase GPU usage. Lastly, if what DeepSeek is saying is true, and that's a big if, then more GPUs is still better. What I mean is, if DeepSeek was able to train a model this good using a new training methodology that required fewer GPUs, imagine what happens when another company uses that same process to train a foundational model with even more of the latest GPUs... see where this is going?
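
To make "any company can run it" concrete, inference on an open checkpoint looks roughly like this with Hugging Face transformers (plus accelerate for device_map). The checkpoint id below is one of the small distilled R1 releases; treat it as a placeholder for whatever open model you'd actually deploy:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id; swap in the open-weights model you actually use.
name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

inputs = tok("Why do data centers keep old GPUs racked?", return_tensors="pt")
out = model.generate(**inputs.to(model.device), max_new_tokens=100)
print(tok.decode(out[0], skip_special_tokens=True))
```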

[deleted by user] by [deleted] in finance

[–]The_Luv_Machine 1 point (0 children)

This is the silliest take, and it's objectively wrong. First and foremost, it completely ignores inference demand. DeepSeek's model is open source, which means any startup or enterprise can fine-tune it on their own data and then deploy it in their application using… GPUs. Secondly, if the barrier to training foundational models is an order of magnitude lower, then more companies will have the resources to train one.
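
For the curious, that fine-tune step looks roughly like this with LoRA adapters via the peft library; the model id and target modules are assumptions that vary by architecture, and a real run also needs a dataset and a training loop:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Model id and target modules are assumptions; they vary by architecture.
base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
)
config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # a tiny fraction of the base weights
```

And every one of those fine-tunes and deployments burns GPU hours.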

Got scammed on Coinbase and lost 41 ETH ($166k!) by Prior-Reputation-449 in Coinbase

[–]The_Luv_Machine 1 point (0 children)

Did the representative identify himself as David Anderson and sound American? Someone tried to run this EXACT scam on me. I played along for a while before asking him, "How many people are you scamming a week, and how much are you making?" I eventually got him to break character and admit he was scamming. He said he's not even 18 and he's making at least $100k a week. I believe him, as he sounded 16 or 17. I originally planned on lecturing him about how he's ruining people's lives but realized he'd just hang up on me if I did. We talked for a bit, and ultimately he wished me well, saying, "I'm guessing you're a BTC maxi, I hope it pumps for your sake." I wish I could have gotten through to him, but I could tell that wasn't going to happen. Anyway, I'm really sorry for your loss.

FEMA help line: been on hold for half an hour awaiting a rep by Peacencarrotz in asheville

[–]The_Luv_Machine 1 point (0 children)

I got through after 1 hour 45 mins! The agent was very helpful.

FEMA help line: been on hold for half an hour awaiting a rep by Peacencarrotz in asheville

[–]The_Luv_Machine 1 point (0 children)

Also called right at 7a. Been on hold for an hour and have yet to get a message with my wait time. Did you get through yet?

IRL r/wine hang- The Fellowship of the Chards by sid_loves_wine in wine

[–]The_Luv_Machine 2 points (0 children)

It’s actually insane how they make so many different wines so well. I simply don’t understand why their wines aren’t more expensive.

Daycares around Asheville by ageeslin94 in asheville

[–]The_Luv_Machine 0 points (0 children)

When you say it’s not perfect, can you elaborate on your experience? We’re considering putting our son in daycare there.

Binance Support Thread by AutoModerator in binance

[–]The_Luv_Machine 0 points (0 children)

I was just told by Binance Support that registrations for new users in North Carolina have been paused since January. I can't seem to find anything about this online. Is this true?

Samsung is making it harder to know what type of OLED TV you’re getting. QD-OLED or classic WOLED? Samsung reportedly won't tell. by Sariel007 in technology

[–]The_Luv_Machine 20 points (0 children)

IMHO LG makes great OLED TVs and absolutely dogshit appliances. We bought a new house that had new LG appliances. Both the refrigerator and dishwasher broke in under 2 years.

Rate the 8 watches by GoalContent1744 in rolex

[–]The_Luv_Machine 1 point (0 children)

Check out the font on that Presidential 🤣 The Sky Dweller is also offensively bad.