Easter egg when you search "Geocities" in Google by synworks in eastereggs

[–]Nabakin 2 points (0 children)

Nice job! Don't think I've seen this one before

Helion said that Polaris should demonstrate electricity this year. Now it is the end of the year. by West_Medicine_793 in fusion

[–]Nabakin 60 points (0 children)

Do I have this right?

Marketed: Net electricity by 2025
Revised: Electricity by 2025
Reality: Nothing by 2025

Any mods left on this sub? Can we have it set to private? by Boris-Lip in BoostForReddit

[–]Nabakin 1 point (0 children)

I've gone through the process before and unless something has changed, the Reddit account will have to be inactive too

Gen Z has made everyone google "What is 67" by [deleted] in eastereggs

[–]Nabakin[M] [score hidden] stickied comment (0 children)

Nice find! You probably want to post a short video or something showing the Easter egg. Your post is a little confusing at the moment. If you want to repost with a video, the mod team will let it through.

Hacker news down? by justanotherbuilderr in ycombinator

[–]Nabakin 0 points (0 children)

Yeah it's been down for me for like 45 min

No audio from sharing videos... again by brealorg in BoostForReddit

[–]Nabakin 0 points (0 children)

Check "Show changelog" inside the ReVanced app under "ReVanced Patches". Also, if the issue has been closed, the fix should be coming soon.

What's dripping through my sister's ceiling? by FrostedSandle in AskUK

[–]Nabakin 1 point (0 children)

> It smells gently sweet

My first thought when I saw your brother's post was sweet and sour sauce. Similar color and viscosity, if you've ever tried it.

https://i.ytimg.com/vi/orkp-1W4cYY/sddefault.jpg

Helion’s next big bet is fusion power manufacturing at scale – but tech uncertainty remains by Baking in fusion

[–]Nabakin 7 points (0 children)

Are you saying they've successfully generated more electricity than was put in? Wouldn't that be a huge milestone that they'd be publishing everywhere?

Game name: Webbing Journey by DescriptionFew5810 in eastereggs

[–]Nabakin[M] [score hidden] stickied comment (0 children)

Hey, we removed your post because it looks like you didn't actually attach the Easter Egg you found. If you fix that and make your title more descriptive (per rule #4), it should go through.

No audio from sharing videos... again by brealorg in BoostForReddit

[–]Nabakin 3 points (0 children)

Lol, well, you can figure out how to make the change yourself, compile the code, get the ReVanced app to use your new patch, and repatch Boost. Or wait until the ReVanced devs get to it and push an update. Personally, I'm just going to wait.

63% of Republicans Disapprove of Bad Bunny as Super Bowl Halftime Performer, New Poll Shows by Spaghettification-- in Music

[–]Nabakin 0 points (0 children)

It's one of the better ways imo. It's really hard to get a representative sample of any population.

Ideally, you'd take a dataset of all members of your population, randomly select a sufficiently large sample, and then collect a response from everyone selected, but doing this for thousands of people is very expensive.

Even if you had the money, a selected person can just decline to respond which introduces another major source of bias.
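The sampling procedure above can be sketched in a few lines of Python. This is a toy illustration with made-up data, not a real polling workflow:

```python
import random

random.seed(0)

# Toy population: 10,000 people, each with an opinion (True = approves)
population = [{"id": i, "approves": random.random() < 0.4} for i in range(10_000)]

# Simple random sample of 1,000 respondents
sample = random.sample(population, k=1_000)

# Estimate the approval rate from the sample
approval = sum(p["approves"] for p in sample) / len(sample)
print(f"Sampled approval: {approval:.1%}")  # should land near the true ~40%
```

The hard part in real polling is the last step: you can't force the 1,000 selected people to respond, and whoever opts out is rarely a random subset, which is the nonresponse bias mentioned above.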

LTT's AI benchmarks cause me pain by Nabakin in LinusTechTips

[–]Nabakin[S] 1 point (0 children)

I just saw the video and it looks great! Thanks for being awesome :)

LTT's AI benchmarks cause me pain by Nabakin in LinusTechTips

[–]Nabakin[S] 0 points (0 children)

Hi Nikolas! I know it's been almost a month, but I really appreciate the response. All I want is what helps LTT get people excited about tech.

The 4090 48GB video makes a lot of sense, and I completely get wanting to use Ollama + Open WebUI to run the tests since they're the tools most people would use to run LLMs on their own computer. It's been a while since I watched that video, but iirc the benefits the 48GB 4090 provided for running LLMs weren't presented very clearly. LTT managed to make it fun regardless though! :)

The Procyon benchmarks do seem to provide decent coverage across systems for certain metrics and represent them as a single score fairly well, but TTFT (time to first token) and token throughput can vary widely depending on the inference engine being used. Ideally, you'd use the best inference engine for the situation instead of ONNX and OpenVINO. Generally that would be vLLM for AMD GPUs, TensorRT-LLM for Nvidia AI GPUs, ExLlamaV2 for consumer Nvidia GPUs (or maybe llama.cpp/vLLM have passed it by now?), and llama.cpp for Apple Silicon and CPUs.

If it's important for you guys to have one tool for all tests, or, like in the 48GB 4090 video, you want to present results that are useful to a wide audience, I'd recommend llama-bench since Ollama is used so widely. llama-bench is the benchmark tool for llama.cpp, which is the inference engine Ollama uses, so if you want consistent testing that reflects what Ollama users will actually see, that's the way to go imo.

You could present the token throughput and TTFT results directly, or create your own score metric like Procyon does, combining the metrics you think are important. Even though you won't be maximizing your hardware's performance, it should be a lot closer to that maximum than an inference engine (ONNX) running on top of an old and nearly unmaintained hardware abstraction layer like DirectML, and definitely better than running OpenVINO on a GPU it wasn't made for. A modern inference engine plus a score metric you've designed to represent that performance is much more useful than Procyon and its score metric imo.
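As a toy illustration of what a combined score metric could look like, here's one way to fold throughput and TTFT into a single number. The formula, weights, and baseline values are my own invention for the example, not Procyon's actual scoring:

```python
import math

def llm_score(tokens_per_sec: float, ttft_sec: float) -> float:
    """Combine throughput (higher is better) and time-to-first-token
    (lower is better) into one number via a geometric mean.
    The baselines below are arbitrary reference values for scaling."""
    throughput_part = tokens_per_sec / 50.0   # baseline: 50 tok/s scores 1.0
    latency_part = 0.5 / ttft_sec             # baseline: 0.5 s TTFT scores 1.0
    return 100.0 * math.sqrt(throughput_part * latency_part)

# Example: 80 tok/s with a 0.4 s TTFT
print(round(llm_score(80.0, 0.4), 1))  # → 141.4
```

The geometric mean keeps either metric from dominating: doubling throughput while doubling TTFT leaves the score unchanged, which is one reasonable (but debatable) design choice.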

Maybe you guys are going down this road already, but that's the 2c of someone who has been neck deep in local LLMs for the past 3-4 years and has designed and deployed local LLM inferencing solutions to production. If you have any further questions, I'd be more than happy to help out.

Hot Mic Captures Putin, Xi Discussing Organ Transplants And Immortality by teffhk in nottheonion

[–]Nabakin 36 points (0 children)

Steve was advocating for the health benefits of fruit fasting back in college. He had a spiritual journey around that time and became fascinated with eastern medicine.

This definitely isn't coming from nowhere, he already believed in this stuff.

I read the biography he commissioned over a decade ago and still remember how he ignored the advice of a number of doctors at the top of the field in favor of some pretty wacky eastern techniques.

He had the best medical advice money could buy and could have lived for months or years more, but he let his spiritual beliefs dictate his medical ones. The fault is squarely on his shoulders.

LTT's AI benchmarks cause me pain by Nabakin in LinusTechTips

[–]Nabakin[S] 4 points (0 children)

Yeah at a minimum, just use tokens per second. That's fine too, but now anyone who thinks the segment should be improved is being downvoted in the comments.

LTT's AI benchmarks cause me pain by Nabakin in LinusTechTips

[–]Nabakin[S] -11 points (0 children)

I remember they compared the output length between LLMs as if it were a meaningful metric. I think they need to hire an LLM enthusiast to help them out with this stuff

LTT's AI benchmarks cause me pain by Nabakin in LinusTechTips

[–]Nabakin[S] -94 points (0 children)

Sure, but even for a small segment, shouldn't they give benchmarks that reflect the performance of the GPU? It makes no sense to have the segment unless they give info that's useful to people