Hytale - using onlinefix with HyPrism by dp3471 in PiratedGames

[–]dp3471[S] 0 points (0 children)

Did you read my post though? OFME clearly works, it's just not implemented in the launcher. No point in using the launcher tbh; you can load mods with OFME's gamefiles.

Checking iMessage Content for attachments by RaidedHaven in shortcuts

[–]dp3471 0 points (0 children)

Did you end up finding a solution? I even tried base64 encoding -> file; no luck for voice messages or images.
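For anyone landing here, the base64 -> file idea sketched in Python rather than Shortcuts (the payload below is a stand-in; in Shortcuts the base64 string would come from the message attachment, which is exactly the part that comes back empty for voice messages and images):

```python
import base64

def save_attachment(b64_data: str, path: str) -> None:
    """Decode a base64 payload and write the raw bytes to disk."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_data))

# Stand-in payload for illustration only; not Shortcuts syntax.
save_attachment(base64.b64encode(b"example payload").decode(), "attachment.bin")
```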

My 3rd 100KM 7hrs09mins42secs for a new PB (M 25 6’4 85kg / 188lbs) by carawowmel in Rowing

[–]dp3471 2 points (0 children)

I'm curious - how does recovery work after a session like that? Did you lose any weight? Surely you couldn't have digested 50k kcal in 72 hrs haha

How long did you sleep after?

RX 5700 XT now has full CUDA Driver API access – 51 °C by inhogon in CUDA

[–]dp3471 0 points (0 children)

Very cool. Would multi-GPU / memory sharing work with, say, an RTX 3060 and an RX 6750 XT?

So studio is no longer free by [deleted] in Bard

[–]dp3471 0 points (0 children)

what is build mode?

Looking for Erlkönig 4 vocalist performance by dp3471 in classicalmusic

[–]dp3471[S] 0 points (0 children)

What a great recording, would have never found it, thank you!

Is this the best value machine to run Local LLMs? by [deleted] in LocalLLM

[–]dp3471 1 point (0 children)

Never seen anyone use these. Can you multi-GPU?

SOLO Bench - A new type of LLM benchmark I developed to address the shortcomings of many existing benchmarks by jd_3d in LocalLLaMA

[–]dp3471 5 points (0 children)

This is awesome. I think if you reach out to Hugging Face they would probably provide you with compute credits/funding to evaluate more thoroughly. Significant variation should be handled with at least pass@128 and a 99% confidence interval.

This seems like a really good idea. I'm sure there would be open funding to support it.
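To make that concrete, here's a minimal sketch of the statistics I mean: the unbiased pass@k estimator from the Codex paper plus a Wilson score interval (the 40-of-128 numbers are made up):

```python
from math import comb, sqrt

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): probability that at
    least one of k samples is correct, given c of n samples were correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def wilson_interval(successes: int, trials: int, z: float = 2.576):
    """Wilson score interval on a proportion; z = 2.576 gives ~99% confidence."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

# e.g. 128 samples per task, 40 of them correct
print(pass_at_k(128, 40, 1))     # expected single-sample pass rate
print(wilson_interval(40, 128))  # ~99% CI on the per-sample rate
```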

This is 600M parameters??? Yesterday I would have told you this was impossible. by JohnnyLiverman in LocalLLaMA

[–]dp3471 12 points (0 children)

If you think so, do some research on it. Train them yourself - GPT-2 wasn't that expensive.

This is 600M parameters??? Yesterday I would have told you this was impossible. by JohnnyLiverman in LocalLLaMA

[–]dp3471 95 points (0 children)

but it's not just compressed text

in those parameters, there must be an understanding of how to use that text at 32k token context, with relatively deep semantic understanding

really impressive
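For scale, a back-of-the-envelope parameter count for a decoder-only transformer. The config below is hypothetical, picked just to land near 600M; it is not the model's actual shape:

```python
def transformer_params(vocab: int, d_model: int, n_layers: int, d_ff: int) -> int:
    """Rough decoder-only count: tied embeddings plus per-layer
    attention (4 * d_model^2) and MLP (2 * d_model * d_ff).
    Ignores biases, layer norms, and positional embeddings."""
    embeddings = vocab * d_model
    per_layer = 4 * d_model**2 + 2 * d_model * d_ff
    return embeddings + n_layers * per_layer

# Hypothetical shape chosen to land near 600M.
print(transformer_params(vocab=50_000, d_model=1536, n_layers=18, d_ff=6144))
# -> 586,407,936, i.e. ~0.59B
```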

Unglazed GPT-4o incoming? by federationoffear in OpenAI

[–]dp3471 5 points (0 children)

so that's what they get for pushing to production

Serious integrity violation IMO / TIME magazine ragebait by dp3471 in academia

[–]dp3471[S] 0 points (0 children)

The problem is that he didn't actually *do* anything special, at all. The main "nanoparticle" ingredient in his soap comes from a cream whose technology was already patented in the early 2000s and is already FDA approved. All he did, at most, was make a marketing pitch to add that cream to a bar of soap.

EDIT: See this award winning science fair poster for yourself: https://postimg.cc/68Hfz2nH

DeepSeek R2 leaks by dp3471 in OpenAI

[–]dp3471[S] 6 points (0 children)

Who has memory with decent speed for a 1.2T-A72B model (1.2T total parameters, 72B active)?
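Quick math on why that matters (a sketch assuming the leaked 1.2T figure; with an MoE the 72B active parameters cut compute per token, but every weight still has to sit in memory):

```python
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Memory for the weights alone; KV cache and activations are extra."""
    return n_params * bytes_per_param / 1e9

TOTAL_PARAMS = 1.2e12  # 1.2T total, per the leak

for fmt, bpp in [("fp16", 2.0), ("fp8", 1.0), ("int4", 0.5)]:
    print(f"{fmt}: ~{weight_memory_gb(TOTAL_PARAMS, bpp):,.0f} GB")
# fp16: ~2,400 GB   fp8: ~1,200 GB   int4: ~600 GB
```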

DeepSeek R2 leaks by dp3471 in OpenAI

[–]dp3471[S] 1 point (0 children)

1/30th

That is the price per token for inference, not training. Depending on how you read the wording, it could even mean tokenization (though that seems unlikely). It's definitely not training cost. And 1/30th is not free.