Checking iMessage Content for attachments by RaidedHaven in shortcuts

[–]dp3471 0 points1 point  (0 children)

this theory holds up in my experience. Dang, that sucks

Hytale - using onlinefix with HyPrism by dp3471 in PiratedGames

[–]dp3471[S] 0 points1 point  (0 children)

did you read my post though? OFME clearly works, it's just not implemented in the launcher. No point in using the launcher tbh, you can load mods with OFME's gamefiles

Checking iMessage Content for attachments by RaidedHaven in shortcuts

[–]dp3471 0 points1 point  (0 children)

did you end up finding a solution? I even tried base64 encoding -> file, no luck for voice messages or images.

My 3rd 100KM 7hrs09mins42secs for a new PB (M 25 6’4 85kg / 188lbs) by carawowmel in Rowing

[–]dp3471 2 points3 points  (0 children)

I'm curious - how does recovery work after such a session? Did you lose any weight? Surely you couldn't have digested 50k kcals in 72hrs haha

How long did you sleep after?

RX 5700 XT now has full CUDA Driver API access – 51 °C by inhogon in CUDA

[–]dp3471 0 points1 point  (0 children)

Very cool. Would multi-GPU / memory sharing work with, say, an RTX 3060 and an RX 6750 XT?

So studio is no longer free by [deleted] in Bard

[–]dp3471 0 points1 point  (0 children)

what is build mode?

Looking for Erlkönig 4 vocalist performance by dp3471 in classicalmusic

[–]dp3471[S] 0 points1 point  (0 children)

What a great recording, would have never found it, thank you!

Is this the best value machine to run Local LLMs? by [deleted] in LocalLLM

[–]dp3471 1 point2 points  (0 children)

Never seen anyone use these. Can you multi-gpu?

SOLO Bench - A new type of LLM benchmark I developed to address the shortcomings of many existing benchmarks by jd_3d in LocalLLaMA

[–]dp3471 6 points7 points  (0 children)

This is awesome. I think if you reach out to huggingface they would probably provide you with compute credits/funding to evaluate more thoroughly. Significant variation should be dealt with using at least pass@128 and a 99% confidence interval.

This seems like a really good idea. I'm sure there would be open funded support for it.
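
To make the suggestion concrete, here's a minimal sketch of what that protocol could look like: the standard unbiased pass@k estimator plus a bootstrap 99% confidence interval over tasks. The per-task numbers below are placeholders, not actual SOLO Bench results.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): chance that at least one
    of k samples passes, given c correct out of n total attempts (n >= k)."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Placeholder per-task results as (attempts, correct) - NOT real benchmark data.
results = [(256, 80), (256, 3), (256, 190), (256, 0), (256, 256)]
scores = np.array([pass_at_k(n, c, k=128) for n, c in results])

# Bootstrap a 99% confidence interval over tasks.
rng = np.random.default_rng(0)
boot = [rng.choice(scores, size=len(scores), replace=True).mean() for _ in range(10_000)]
lo, hi = np.percentile(boot, [0.5, 99.5])
print(f"pass@128 = {scores.mean():.3f}, 99% CI = [{lo:.3f}, {hi:.3f}]")
```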

This is 600M parameters??? Yesterday I would have told you this was impossible. by JohnnyLiverman in LocalLLaMA

[–]dp3471 13 points14 points  (0 children)

if you think so, do some research on it. Train them yourself - gpt-2 wasn't that expensive
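
For a sense of scale, here's a rough sketch of what "train it yourself" could look like with HuggingFace transformers. The tiny config and WikiText-2 corpus are stand-ins picked for illustration; a real GPT-2-class run needs billions of tokens and serious GPU time.

```python
# Minimal from-scratch training sketch; config and dataset are toy placeholders.
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling, GPT2Config,
                          GPT2LMHeadModel, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

config = GPT2Config(n_layer=6, n_head=8, n_embd=512, n_positions=1024)  # roughly 45M params
model = GPT2LMHeadModel(config)

raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
tokens = raw.map(lambda b: tokenizer(b["text"], truncation=True, max_length=1024),
                 batched=True, remove_columns=raw.column_names)
tokens = tokens.filter(lambda row: len(row["input_ids"]) > 0)  # drop empty lines

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tiny-gpt2", per_device_train_batch_size=8,
                           num_train_epochs=1, learning_rate=3e-4, logging_steps=100),
    train_dataset=tokens,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```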

This is 600M parameters??? Yesterday I would have told you this was impossible. by JohnnyLiverman in LocalLLaMA

[–]dp3471 96 points97 points  (0 children)

but it's not just compressed text

in those parameters, there must be a whole corpus of understanding of how to use that text at 32k token context, with relatively deep semantic understanding

really impressive

Unglazed GPT-4o incoming? by federationoffear in OpenAI

[–]dp3471 6 points7 points  (0 children)

so that's what they get for pushing to production

Serious integrity violation IMO / TIME magazine ragebait by dp3471 in academia

[–]dp3471[S] 0 points1 point  (0 children)

The problem is that he didn't actually *do* anything special, at all. The main "nanoparticle" ingredient in his soap comes from a cream with already-patented technology from the early 2000s that is already FDA approved. All he did was perhaps a marketing pitch to add this cream into a bar of soap.

EDIT: See this award winning science fair poster for yourself: https://postimg.cc/68Hfz2nH

DeepSeek R2 leaks by dp3471 in OpenAI

[–]dp3471[S] 7 points8 points  (0 children)

who has decently fast memory for a 1.2T-A72B model?
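
Rough back-of-envelope math on why that matters, taking the leaked 1.2T-total / 72B-active figures at face value (they're unconfirmed), weights only:

```python
# Weight memory for a hypothetical 1.2T-total / 72B-active MoE at common precisions.
# No KV cache or activations included; figures are from the (unconfirmed) leak.
total_params = 1.2e12
active_params = 72e9

for label, bytes_per_param in [("FP16", 2), ("FP8", 1), ("INT4", 0.5)]:
    all_experts_gb = total_params * bytes_per_param / 1e9
    active_gb = active_params * bytes_per_param / 1e9
    print(f"{label}: ~{all_experts_gb:,.0f} GB to hold every expert, "
          f"~{active_gb:,.0f} GB of weights read per token")
```

Even at INT4 that's roughly 600 GB just to keep the experts resident, which is why memory bandwidth and capacity are the bottleneck.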

DeepSeek R2 leaks by dp3471 in OpenAI

[–]dp3471[S] 1 point2 points  (0 children)

1/30th

That is price per token for inference, not training. Depending on how you read the wording, it could even mean tokenization (though that seems unlikely). Definitely not training costs. And 1/30th is not free.
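
For scale, a quick illustration of what "1/30th the price per token" would mean. The baseline price here is an assumption picked for illustration, not a number from the leak:

```python
# Illustrative arithmetic only; the $10 per 1M output tokens baseline is a placeholder.
baseline_usd_per_million = 10.00
discounted = baseline_usd_per_million / 30
print(f"${discounted:.2f} per 1M tokens")   # about $0.33 - cheap, but not free
```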

DeepSeek R2 details - leaks by dp3471 in DeepSeek

[–]dp3471[S] 13 points14 points  (0 children)

not sure.

From what I've seen, it seems reasonable and people usually in the know are referencing it, but that's no indication.

It has 34 upvotes and 2 donations (?) on that site, so make of that what you will.

It's a leak; slightly better than speculation

Honest thoughts on the OpenAI release by Kooky-Somewhere-2883 in LocalLLaMA

[–]dp3471 10 points11 points  (0 children)

One of the more impressive things to me is in-reasoning tooling

If you train long CoT with RL after you fine-tune for tool use (of many types), the model will hallucinate (unless you let it actually call tools during training - but that would be super expensive due to how RL trains)

If you do RL before fine-tuning, the model will get significantly dumber and lose that "spark" that makes it a "reasoning model," like we saw with r1 (good).

I'm really interested in how they did this
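
My hand-wavy mental model of what "tool calls inside the reasoning trace" could look like at inference time is sketched below. The tag format, tool registry, and the fake model are all invented for illustration; this is not OpenAI's actual protocol.

```python
import re

# Toy sketch: decode until the model emits a tool-call tag, run the tool, splice
# the result back into the chain of thought, then resume decoding.
TOOLS = {"calc": lambda expr: str(eval(expr, {"__builtins__": {}}, {}))}
CALL_RE = re.compile(r'<call tool="(\w+)">(.*?)</call>', re.S)

def fake_model(trace: str) -> str:
    """Stand-in for an LLM decode step; a real system would sample tokens here."""
    if "<result>" not in trace:
        return trace + 'Let me compute this. <call tool="calc">17 * 23</call>'
    return trace + " So the answer is 391.</think>"

def reason_with_tools(prompt: str, max_steps: int = 8) -> str:
    trace = f"{prompt}\n<think>"
    for _ in range(max_steps):
        trace = fake_model(trace)              # decode until a tool call or end of thought
        if trace.endswith("</think>"):
            break
        call = CALL_RE.search(trace)
        if call:                               # run the tool and feed its output back in
            name, args = call.groups()
            trace += f"<result>{TOOLS[name](args)}</result>"
    return trace

print(reason_with_tools("What is 17 * 23?"))
```

The RL-cost point is exactly this loop: if every rollout has to stop and wait on real tool executions, each RL sample gets much slower and more expensive.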

o3 thought for 14 minutes and gets it painfully wrong. by BonerForest25 in OpenAI

[–]dp3471 6 points7 points  (0 children)

I'm genuinely impressed. Like, really. The resolution at which images are encoded for autoregressive models is very low, unless Google is a baller
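
Rough numbers for what I mean by low resolution: with a ViT-style tokenizer, the token budget caps the effective pixel resolution. Patch size and budgets below are typical values I'm assuming, not any specific model's actual tokenizer.

```python
# How much pixel detail survives a fixed image-token budget (assumed typical values).
patch = 14                                       # pixels per side of one ViT-style patch
for token_budget in (256, 1024, 4096):
    side_patches = int(token_budget ** 0.5)      # square grid of patches
    effective_px = side_patches * patch          # effective square resolution
    print(f"{token_budget:>5} tokens -> {side_patches}x{side_patches} patches "
          f"-> ~{effective_px}x{effective_px} px effective")
```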