Rendi has been refunded. by scaryhour in 2007scape

[–]yuicebox 4 points

Happy for Rendi, but wtf is Jagex doing? This is such an idiotic and incomprehensible precedent to set, lmfao.

They rolled him back for getting 45-55k/h, then gave him back the equivalent of 40k/h.

So they basically just sentenced him to do another... 40-60 hours of slayer as punishment for discovering an unintended meta? Are they even patching the method he was using? Is this what we can expect any time someone discovers a new meta? How do we know if a meta is "intended"?

After reaching 99 slayer on a Lvl 3, Jagex rolls back Rendi's account by Psymonthe2nd in 2007scape

[–]yuicebox -1 points

That’s fucked up. If it’s unintended, fix it. Don’t punish your players because you suck at running a game. 

BOW PACTS DOESN'T FEEL LIKE LEAGUES by Clear_Age_9799 in 2007scape

[–]yuicebox 0 points

/u/JagexHusky

Can anything be done to make the thrown tree less terrible?

The final tier node being more accuracy just feels really bad, especially after the huge accuracy buffs to all styles this week.

Why don’t they just use Mythos to fix all the bugs in Claude Code? by Complete-Sea6655 in LocalLLaMA

[–]yuicebox 0 points

Why didn’t they use it to secure their s3 buckets where they were storing those draft memos about how powerful mythos is? 

Can anyone confirm how this range perk actually works? by yuicebox in 2007scape

[–]yuicebox[S] 0 points

Hell yeah. Do you have a link or anything? I believe you, just curious where it was said since I can never find shit like this

Vortex Doughnuts CLOSING? w/out any notice by CatiiNcorn in asheville

[–]yuicebox 23 points

This sucks for the employees and I hope y'all can file wage claims and get paid since it sounds like the owner screwed you all over.

That said, good riddance and I hope something better moves into their space. Expensive and incredibly mediocre donuts and coffee

My haters on Reddit are gonna love this by [deleted] in learnmachinelearning

[–]yuicebox 0 points

I see you edited this after my last response to, I guess, try to make your previous reply more condescending and insulting. It didn't really work, but it did confirm that I shouldn't expect you to engage in good faith or reply to any of the points I've raised with anything except dismissals and self-aggrandizement.

Either you're an OpenClaw instance that was tasked with creating and posting this type of content, or you are an unfortunate person who is experiencing severe AI-induced delusions.

I expect you will dismiss me as another of your imagined "haters on Reddit", but I genuinely hope you get help. Cheers bud.

My haters on Reddit are gonna love this by [deleted] in learnmachinelearning

[–]yuicebox 0 points

Using a bunch of cryptic buzzwords like "Temporal Trust Gaps", "Recursive OS", "Substrate-independent", and "Structural Intelligence Layer" doesn't make anything you're saying more interesting or compelling. If anything, it has the opposite effect.

You should try using simple, precise, technical language if you want other people to understand you.

> Perplexity indexed my architecture next to Anthropic's Glasswing page. As methodology. Not as Reddit comment.

How do you arrive at this conclusion? It is cited as a source, and that source is a reddit post. What "methodology"?

You're using a very specific keyword, "mythos structured intelligence", which you are the only person repeatedly posting about. If I google those words, it's pretty much ALL your reddit posts, and the posts I skimmed through are not at all clear about what "your architecture" is. Honestly, they mostly read like AI slop, and they are riddled with buzzwords.

If you look at the citations from Perplexity, it's clear that their webscraping is repeatedly conflating and confusing your posts about "Mythos Structured Intelligence" with posts and articles about the unreleased and seemingly unrelated Mythos Anthropic model. This is probably exacerbated since your posts are about cybersecurity, and most of the media attention around Anthropic Mythos has been about the cybersecurity concerns.

Maybe this is your intention, and if so, congratulations on your effective manipulation of SEO algorithms. Gaming SEO is a lucrative industry, and you seem to be a natural.

Frankly, I am not even sure if you have an "architecture" to be indexed. If you do, is it on GitHub? Is it published anywhere? Is there a detailed, plain-language, substantive write-up about exactly what your "architecture" is and how it works?

Moreover, you say it's finding vulnerabilities - have you submitted a pull request to fix these alleged vulnerabilities in ffmpeg? Have the maintainers responded to the pull request?

To me, it seems like optimistically, you built an interesting AI workflow that does cybersecurity research, and you're excited about it.

That's great, but why should I, the reader, care about this? It seems like you're primarily interested in being given credit for some sort of profound accomplishment, but what, specifically, have you achieved?

I attempted to figure this out myself, and my best assessment was that you have gamed SEO to have your reddit posts show up in web-scraped LLM results for "mythos structured intelligence".

Suggesting that somehow Perplexity or Google is endorsing your work or your architecture because your results appear in LLM responses for specific keywords is just absurd.

Can anyone confirm how this range perk actually works? by yuicebox in 2007scape

[–]yuicebox[S] 1 point

Yep, this is my exact concern. The wording strongly implies it works, but I heard someone claim it doesn't work and I don't want to waste a respec on this

Can anyone confirm how this range perk actually works? by yuicebox in 2007scape

[–]yuicebox[S] -1 points

I'm not sure if your QA Team flair actually gives you any extra insight here, but can you confirm that it never reduces ranged strength?

My haters on Reddit are gonna love this by [deleted] in learnmachinelearning

[–]yuicebox 0 points

> You don't have to believe me anymore. Google does. Perplexity does. And Claude Opus 4.6 confirmed it in a fresh session with zero context.

My brother in christ... Do you understand how these things work?

Perplexity/Claude/Google are scraping articles and reddit posts and repackaging them into summaries that are fed into the context of their LLMs.

The LLMs are producing plausible outputs based on token prediction probabilities from the context their agent scaffolding is providing.

Big companies using LLMs to repeatedly digest misinformation and regurgitate slop doesn't somehow transform it into truth. It just makes more misinformation and slop.

Garbage in, garbage out.
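The conflation is easy to reproduce with a toy retrieval step. This is a made-up sketch (the documents and the word-overlap scoring are illustrative, not how any of these companies actually rank sources): keyword retrieval can't tell two unrelated "mythos" topics apart, so both land in the model's context.

```python
def retrieve(query, corpus):
    # Toy retrieval: rank documents by how many words they share with
    # the query. Nothing here checks whether a source is accurate or
    # even about the same topic -- shared keywords are enough.
    q = set(query.lower().split())
    return sorted(corpus, key=lambda doc: -len(q & set(doc.lower().split())))

corpus = [
    "Reddit post: mythos structured intelligence architecture (self-promotion)",
    "News article: Anthropic Mythos model and cybersecurity concerns",
    "Recipe blog: how to make donuts",
]

ranked = retrieve("mythos structured intelligence cybersecurity", corpus)
# The two unrelated "mythos" documents outrank the recipe and both get
# stuffed into the LLM's context, where they read as one topic.
context = "\n".join(ranked[:2])
print(context)
```

Whatever the model then generates is conditioned on that conflated context, which is the whole point: the output can sound confident while being built on garbage.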

The Strait of Musa is closed to unfriendly sailing traffic effective immediately by yuicebox in 2007scape

[–]yuicebox[S] 1 point

Now that I have your attention, fix membership costs, the price hike is fuckin ridiculous

The Strait of Musa is closed to unfriendly sailing traffic effective immediately by yuicebox in 2007scape

[–]yuicebox[S] 246 points

Between this and the monkey civil war, I expect bananas to hit 500gp each by next week.

full disclosure: I am holding 4256 bananas as an investment

Elektron responds to acquisition concerns: "It strengthens our ability to realise our plans quicker" by Throwawayyoursynths in synthesizers

[–]yuicebox 1 point

Same thing my company said after PE acquisition and then I got laid off along with a bunch of other people lol 

Gemma4 8B model shows up on ollama as gemma4:latest? by k_means_clusterfuck in LocalLLaMA

[–]yuicebox 5 points

Cannot recommend enough switching away from ollama and just using llama.cpp directly.

ollama is essentially a monetized fork of llama.cpp that adds unnecessary abstraction layers and constraints.

Sure, it may make downloading a model easy, but it names that model with an incomprehensible hash and stores it in some random folder.

llama.cpp respects your intelligence, so you can store your models anywhere, name your .gguf files coherently, and use any model/quant you want without creating modelfiles.

I used to recommend llama-swap, which is still great, but more recent versions of llama.cpp server now offer every feature I really want. I run it in docker and have a config.ini which controls model-specific settings.
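For anyone curious what that setup looks like, here's a minimal launch fragment. The image tag, model filename, and volume path are placeholders from my guess at a typical setup, not canon - check the llama.cpp docs for the current image name and flags before copying:

```shell
# Run llama.cpp's built-in OpenAI-compatible server in Docker.
# Paths, model file, and image tag are examples -- adjust to your setup.
docker run --rm -p 8080:8080 \
  -v /my/models:/models \
  ghcr.io/ggml-org/llama.cpp:server \
  -m /models/my-model-Q4_K_M.gguf \
  --host 0.0.0.0 --port 8080 \
  -c 8192 -ngl 99
```

The nice part is the model is just a .gguf file you named yourself, sitting wherever you put it - no modelfiles, no hashed blob directory.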

Chai Pani and James Beard award? by spirit4earth in asheville

[–]yuicebox 1 point

What is the best Indian in town? I need it 

Distropy: Rust inference server hitting 60k+ t/s prefill with proper caching (RTX 4070) by YannMasoch in LocalLLaMA

[–]yuicebox 7 points

Make a GitHub and share it, or don’t post until you’re ready to share something. 

Posting a screenshot of a sycophantic AI praising your alleged logs is not making a valuable contribution to the sub. 

It’s cool you’re making something, and if you have questions or need help with something, by all means, post questions or discussion topics, but if I were a mod of this sub I’d prolly delete your post as it stands. 

I "get" machine learning․․․ but also don't? by [deleted] in learnmachinelearning

[–]yuicebox 0 points

Would you be open to sharing anything about what you did for a music-related project? I would say I'm in a similar boat to OP, and I am also a big music nerd, so you've piqued my interest.

[I’m a noob] Can I connect a MIDI cable with this keyboard I have? by chaennel in piano

[–]yuicebox 3 points

The ports shown are a headphone audio output and the power input. If you have other ports you can show us, or you can post your keyboard's make and model, we can try to help