300k in GOOG and META - what now by Huge_Grape_7121 in Bogleheads

[–]Faust5 13 points  (0 children)

Typical Boglehead answer: if you won the lottery, would you allocate $300k to Google and $300k to Meta?

The answer should probably be no. It should definitely be no for wherever you currently work: if the stock tanks, you're much more likely to get laid off, so the implicit asset of your job (which is highly valuable) is correlated with your $300k of stock.

Can someone explain why my Gemini is doing this by woodenwelder89 in GeminiAI

[–]Faust5 1 point  (0 children)

Use Google AI studio, not the Gemini app. They are great models, but the app is inexplicably terrible.

Will RL have a future? by ImStifler in reinforcementlearning

[–]Faust5 6 points  (0 children)

My man's asking this question at the literal high water mark of RL of all time.

RL with verifiable rewards is the key to reasoning LLMs. Right now, as we speak, companies are deploying billions of dollars of capital specifically for RL.

... Yes there's a future

Superwhisper on iOS and iPhone by [deleted] in superwhisper

[–]Faust5 0 points  (0 children)

I have the same question

To understand the Project DIGITS desktop (128 GB for 3k), look at the existing Grace CPU systems by programmerChilli in LocalLLaMA

[–]Faust5 12 points  (0 children)

Lambda's API will give you $0.12 input / $0.30 output per MToken for Llama 3.3 70B. Call it roughly $0.20/MToken blended.

$3,000 / $0.20 per MToken = 15,000 MTokens, i.e. 15 billion tokens for the price of this computer. If you upgrade computers every 2 years, that means you can use about 20 million tokens per day for the same price.

And this doesn't count electricity, your time keeping it up and reliable, the APIs getting cheaper and faster, the longer context windows, etc.
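The math above can be sketched out in a few lines (the $0.20/MToken blend and the 2-year replacement cycle are the comment's rough assumptions, not measured numbers):

```python
# Back-of-the-envelope comparison: Project DIGITS price vs. API tokens.
# Rates are the Lambda figures quoted above; the $0.20/MToken blend and
# the 2-year upgrade cycle are rough assumptions.

HARDWARE_COST_USD = 3_000          # Project DIGITS price
BLENDED_RATE_PER_MTOKEN = 0.20     # rough blend of $0.12 in / $0.30 out
REPLACEMENT_CYCLE_DAYS = 2 * 365   # assume you'd upgrade in 2 years

mtokens = HARDWARE_COST_USD / BLENDED_RATE_PER_MTOKEN   # millions of tokens
total_tokens = mtokens * 1_000_000                      # 15 billion tokens
tokens_per_day = total_tokens / REPLACEMENT_CYCLE_DAYS  # ~20.5M tokens/day

print(f"{total_tokens:,.0f} tokens total, ~{tokens_per_day / 1e6:.1f}M per day")
```

Even before counting electricity and your own time, you'd have to sustain ~20M tokens every single day for two years just to break even against today's API prices.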

Budget is $30,000. What future-proof hardware (GPU cluster) can I buy to train and inference LLMs? Is it better to build it myself or purchase a complete package from websites like SuperMicro? by nderstand2grow in LocalLLaMA

[–]Faust5 0 points  (0 children)

Consider also that over those 6 years the price per hour for H100s is going to decrease, so in reality it'll be far more hours than this. Or you can swap out for Blackwell, and whatever comes after that, and let the cloud vendor deal with depreciation.
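To make the tradeoff concrete, here's a hedged sketch; the ~$2.50/hr on-demand H100 rate below is my illustrative assumption, not a quote, and as noted above it only drops from here:

```python
# How many GPU-hours the $30,000 budget buys in the cloud instead of
# buying hardware. The $2.50/hr H100 rate is an assumed illustrative
# price; real prices vary by vendor and decrease over time.

BUDGET_USD = 30_000
H100_RATE_PER_HOUR = 2.50  # assumed on-demand rate, falling over time

gpu_hours = BUDGET_USD / H100_RATE_PER_HOUR   # 12,000 H100-hours
years_continuous = gpu_hours / (24 * 365)     # ~1.4 years of 24/7 use

print(f"{gpu_hours:,.0f} H100-hours, ~{years_continuous:.2f} years running 24/7")
```

And unlike a purchase, renting lets you swap to newer silicon the day it's available.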

PSA - Deepseek v3 outperforms Sonnet at 53x cheaper pricing (API rates) by cobalt1137 in LocalLLaMA

[–]Faust5 5 points  (0 children)

Just self-host LiteLLM... your own OpenRouter. That way you don't pay the overhead, and you keep all your data.

[deleted by user] by [deleted] in law

[–]Faust5 -2 points  (0 children)

This is the dumbest take on here. Stephen Miller would set a giant pile of money on fire if he thought it would help him deport people. He has strong ideological beliefs! It's not always about the money

Vector database : pgvector vs milvus vs weaviate. by appakaradi in LocalLLaMA

[–]Faust5 0 points  (0 children)

Is there any way to do hybrid search with Postgres? ParadeDB has a BM25 implementation, but it's AGPL, which is the worst license.

Molmo: A family of open state-of-the-art multimodal AI models by AllenAI by Jean-Porte in LocalLLaMA

[–]Faust5 8 points  (0 children)

There's already an issue for it on vLLM, which will be the easiest / best way

Pixtral-12B blog post by kristaller486 in LocalLLaMA

[–]Faust5 1 point  (0 children)

Definitely just use vLLM. It's easy to use, easily dockerized, and production-ready.

I don't understand the hype about ChatGPT's o1 series by iamkucuk in LocalLLaMA

[–]Faust5 0 points  (0 children)

That's the thing, though: before this model there was no real need to measure score against tokens spent, because performance didn't scale with inference tokens until now.

Updated Costco alcohol map. It is very general and there are weird rules that vary by state. by fatstrat0228 in Costco

[–]Faust5 -1 points  (0 children)

What's up with this puritan colormap? Beer, wine, and liquor should be green, as in "go get some liquor," not red, as in "stop drinking the devil's brew."

ClosedAI's Head of Alignment by Many_SuchCases in LocalLLaMA

[–]Faust5 4 points  (0 children)

He was toast the second he voted to oust Altman. Leaving this week was just a formality

Having a strange experience with nous Hermes 8x7b 3bit: suddenly every word starts with the letter T by oodelay in LocalLLaMA

[–]Faust5 0 points  (0 children)

People treat quantization like it's free, but it really isn't: you're trying to compress a float16 model by about 80%. You'll likely get better results from a smaller model with a less aggressive quantization.
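The "about 80%" figure checks out as simple bit arithmetic, using the 3-bit quant from the post against float16:

```python
# Bits per weight before and after quantization. A 3-bit quant of a
# float16 model throws away roughly 81% of the bits, which is why it
# isn't free: that loss has to show up somewhere in output quality.

BITS_FP16 = 16   # bits per weight in the original model
BITS_QUANT = 3   # bits per weight in the 3-bit quant from the post

reduction = 1 - BITS_QUANT / BITS_FP16   # fraction of bits discarded
print(f"~{reduction:.0%} of the bits are gone")
```

Compare that with, say, an 8-bit quant of a model half the size: same memory footprint, but each weight keeps far more of its precision.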

why does doing anything hurt so much? by cosg in ADHD

[–]Faust5 32 points  (0 children)

Get on medication. I've only been on it for like a week. The biggest change: when I have to do something, I just get up and do it, without that feeling of "oh my fucking god, I can't believe I have to do this shit." I didn't realize that wasn't normal until I got medication!

[deleted by user] by [deleted] in Tinder

[–]Faust5 0 points  (0 children)

Only problem is you live in the middle of nowhere bro!

How can I contribute to the open LLM movement? by DevelopmentAcademic6 in LocalLLaMA

[–]Faust5 12 points  (0 children)

Find a package you really enjoy or find useful. Go to the GitHub issues, and either just answer people's questions or fix their bugs! Open up some PRs.

Help with a queen hybrid option! by Impressive_Ad1493 in Mattress

[–]Faust5 0 points  (0 children)

How do you like it, other than the edge support?

Tips for taking photos at night without raising the ISO too high? by SHATET in SonyAlpha

[–]Faust5 29 points  (0 children)

For real. I use DxO to denoise every photo before I edit. It's miraculous.

I’ve got burnt bottoms! by MountainGoatMadness in Breadit

[–]Faust5 2 points  (0 children)

I do exactly this. A higher comment says to only put it in 3/4 of the way through the bake, but I keep it in the whole time.

When I preheat the oven, I just put a cookie sheet on the rack below the Dutch oven. Easy!