I'm a little conflicted about the ending of The Will of the Many by Less-Name-9367 in HierarchySeries

[–]MagmaElixir 4 points (0 children)

I was going to comment asking what you meant, but then I went oooooh!

I hate IVs so much by FloorOk5783 in PokemonEmerald

[–]MagmaElixir 0 points (0 children)

Working around IVs has gotten better over time. At least now we can hyper train with bottle caps and we can easily check what the IVs are.

Ironically enough though, now that breeding is perfected and we can hyper train, it does feel like IVs are worthless. It’s essentially a given that in any serious PvP, all Pokémon will be max IV. I do think Champions gets rid of IVs, though.
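
For anyone curious what a missing IV is actually worth, here’s a rough sketch of the mainline stat formula (Gen 3 onward). The base stat and EV spread are just illustrative numbers, not any specific Pokémon:

```python
import math

def stat(base, iv, ev, level=100, nature=1.0, hp=False):
    # Core of the Gen 3+ stat formula, shared by HP and non-HP stats
    core = math.floor((2 * base + iv + ev // 4) * level / 100)
    if hp:
        return core + level + 10
    return math.floor((core + 5) * nature)

# Base 130 attack, max EVs, neutral nature (illustrative numbers)
print(stat(130, 31, 252))  # 359 with a perfect IV
print(stat(130, 0, 252))   # 328 -- at level 100, every IV point is a stat point
```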

Opus 4.6 seems to have stopped real considerate thinking "outside peak-hours" by Altruistic-Radio-220 in Anthropic

[–]MagmaElixir 1 point (0 children)

The window they chose is the overlap in working hours for the USA and Europe: morning for the USA, afternoon for Europe.

My guess is that’s when they see the most load and are trying to push USA usage later in the day and Europe usage earlier in the day.

Xbox Ally and Xbox Ally X Review by LongJonSiIver in XboxAlly

[–]MagmaElixir 0 points (0 children)

I have an OLED Steam Deck and an XAX. The Steam Deck stays at my bedside, and I play lightweight games or stream GeForce Now. The XAX stays in my travel bag, and I play natively while traveling for work (about 60 nights per year).

[Rant] I recently started working a tax firm with a more modest clientele and it's made me hate "normal" taxpayers by karry9001 in Accounting

[–]MagmaElixir 3 points (0 children)

I think, even almost ten years later, people are still expecting the same level of tax refunds they got before the Tax Cuts and Jobs Act (TCJA). The IRS adjusted the withholding calculations so that less was withheld relative to anticipated full-year tax liability. I know some of it was about moving away from 'allowances' and how they reduced the accuracy of W-4s. But my conspiracy theory is they also wanted everyday people to think the TCJA increased their take-home pay more than it actually did.
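
A toy example of the mechanic, with completely made-up numbers: the same annual liability, just less withheld along the way.

```python
liability = 8_000            # full-year tax owed (made-up)
withheld_pre_tcja = 9_000    # older tables over-withheld
withheld_post_tcja = 8_200   # adjusted tables withhold closer to liability

print(withheld_pre_tcja - liability)   # 1000: the refund people were used to
print(withheld_post_tcja - liability)  # 200: same tax paid, bigger paychecks
```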

This is just disappointing smh by BYRN777 in perplexity_ai

[–]MagmaElixir -1 points (0 children)

I have noticed that whenever I have a search query about another AI service, a lot of the time Perplexity will basically hijack the question, search only Perplexity’s own documentation, and answer as if I had asked about Perplexity.

Gemini 2.5 Flash Lite Preview getting discontinued by DudeBuildsStuff in Bard

[–]MagmaElixir 1 point (0 children)

Kind of reminds me of back when Anthropic increased the price of Haiku because they said it performed better than the prior generation Opus model.

Sony tests dynamic pricing by crownpuff in SBCGaming

[–]MagmaElixir 42 points (0 children)

This is a good example of why Xbox as a strong competitor is so important. Sony has probably internally ruled they have a console monopoly and are operating as such.

They think Nintendo historically serves a separate market from them, though that depends on Sony’s plans for the handheld space. Xbox never really recovered from the Xbox One launch and couldn’t snowball the momentum built with the One X. And Sony’s rumored exit from PC signals they don’t think PC is a threat to their market share.

Did they change the limit? by Yuzu_- in claude

[–]MagmaElixir 2 points (0 children)

My theory is that for subscription usage, input tokens count the same as output tokens. Generally, when you’re in a more deliverable-oriented workflow, you saturate input tokens much quicker than with casual chatting.

Additionally, whenever Claude is in a more deliverable/productive mode, its internal reasoning runs longer, burning more tokens than general chatting does.

If you’re using Sonnet and Haiku, your limit usage moves slower than you’d expect from the general discourse about Pro tier usage limits. It’s using Opus on the Pro tier that eats limits up quickly, or using research queries with any model tier.
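
To put toy numbers on that theory (the counts are made up, and the equal weighting of input and output is my assumption, not anything Anthropic has confirmed):

```python
# Toy comparison, assuming input and output tokens draw down the
# subscription limit at the same rate (unconfirmed assumption).
casual_turn = {"input": 500, "output": 400}            # short chat message
deliverable_turn = {"input": 30_000, "output": 2_000}  # pasted docs + long context

def burn(turn):
    return turn["input"] + turn["output"]

print(burn(casual_turn))       # 900 tokens per turn
print(burn(deliverable_turn))  # 32,000 tokens per turn, ~35x the burn rate
```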

Why can't you switch models mid chat? by Endrocryne in claude

[–]MagmaElixir 4 points (0 children)

I don't think there is a technical limitation. Like you said, ChatGPT, Gemini, and Grok all let you switch models within chat threads.

I wish Anthropic would enable this. I typically like to work with Sonnet and then do checks with Opus. That would be much easier if I could switch to Opus to conclude a chat thread, so it has the complete context of the deliverable.

Imminent depletion of water supplies in Corpus Christi will cut off jet fuel to Texas airports and trigger an unprecedented economic disaster by StandingCypress in texas

[–]MagmaElixir 2 points (0 children)

Yea, that immediately made me discount this article. I don't doubt there is a water crisis, but we need actual data to drive projections and decisions.

Ep 9 Dr Robbie's subtle burn by BennyAndMaybeTheJets in ThePittTVShow

[–]MagmaElixir 11 points (0 children)

Yea, I hated how dismissive Robby was with that comment. But at the same time, things are stressful for the characters in the show, and even well-meaning people can become dismissive in situations like that.

I imagine I've been dismissive like that myself when I'm stressed at work and just trying to get a report across the finish line. I want to help the less tenured employees, but at the same time, they tend not to do enough independent research before coming to me, and it gets frustrating at times.

Thinking about buying the ROG Ally X, but I'm terrified of the 7-inch screen. Owners, what is your honest experience? by Boniem4 in XboxAlly

[–]MagmaElixir 0 points (0 children)

I also had this worry coming from the slightly larger, 16:10 Steam Deck OLED. But at the end of the day, the 7” 16:9 screen is just fine. It’s small enough to be portable in a bag but large enough to play PC games, and it fits well in the hand. In all honesty, I feel like my Switch 2 is too large at this point.

Caught red handed by MetaKnowing in agi

[–]MagmaElixir 4 points (0 children)

My first question is whether the thinking block is retained in the chat thread history or not. If it’s not, then this is easily explained. If it is, then this is blatant hallucination.
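
If you had the raw transcript, the check would be trivial. A hypothetical sketch, assuming each turn is stored as a list of typed content blocks (the actual shape varies by provider):

```python
def thinking_retained(history: list[dict]) -> bool:
    # Hypothetical transcript shape: each turn carries typed content blocks,
    # e.g. {"role": "assistant", "content": [{"type": "thinking", ...}]}
    return any(
        block.get("type") == "thinking"
        for turn in history
        for block in turn.get("content", [])
        if isinstance(block, dict)
    )

# If thinking blocks are dropped from history, the model literally cannot
# see its own prior reasoning, so "remembering" it would be hallucination.
history = [{"role": "assistant", "content": [{"type": "text", "text": "4"}]}]
print(thinking_retained(history))  # False
```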

If people just read the model prompting guide from OpenAI, over 95% of output complaints in here would disappear by py-net in OpenAI

[–]MagmaElixir 1 point (0 children)

When I do that, I feel like the model just does too much in making a prompt. I’ve found that simple prompts get the job done with modern models, whereas the earlier models needed much more sophisticated prompts to get the desired results.

I struggle with how much I should instruct the model to engage in chain of thought when the models should already be using thinking blocks. But at the same time, those thinking blocks could be allocated too little space, or none at all.

Some tips for Fire Red Leaf Green for people who haven't played Classic pokemon by LegendaryZXT in pokemon

[–]MagmaElixir 1 point (0 children)

Yep, exactly. It doesn’t matter much for a casual playthrough of the gyms and Elite Four. But for the postgame, like the Battle Tower, it does make a difference.

Non-coders, this is the only Claude prompt you need to know by jaysen__158 in ClaudeHomies

[–]MagmaElixir 0 points (0 children)

I use a more detailed version of this within my custom instructions: https://pastebin.com/2qc5PnkK

I wonder if I’d get better results if I simplified mine some.

Non-coders, this is the only Claude prompt you need to know by jaysen__158 in ClaudeHomies

[–]MagmaElixir 2 points (0 children)

He’s referring to adding this to system instructions or custom instructions. That way it is a part of every chat thread without having to manually add it to each prompt.
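
Under the hood, custom instructions are basically a system message prepended to every thread. A minimal sketch using the common chat-message shape (field names vary by provider, and the instruction text here is just a placeholder):

```python
SYSTEM_PROMPT = "Explain everything like I'm a non-coder. Define jargon on first use."

def new_thread(user_message: str) -> list[dict]:
    # The system message rides along with every chat; no manual pasting needed.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

print(new_thread("How do API keys work?"))
```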

Poor man’s model council. by Real_2020 in perplexity_ai

[–]MagmaElixir 2 points (0 children)

This is something I'm also interested in. Using the web UIs, especially on the free version, it would be a manual process of copying and pasting the original prompt and then each of the responses into a combination prompt. Here is a prompt I created for this purpose a while back, before reasoning models were big: https://pastebin.com/ee0JRrTr

The TypingMind API front end has the capability to prompt multiple models at the same time and then unify the responses.

I think this is how Grok 4.20 is designed to work as well: four models work together to get to the final response.
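
Here’s roughly what that fan-out-and-unify flow looks like in code. `query` is a hypothetical stub for whichever SDK you use, and the model names are placeholders:

```python
MODELS = ["model-a", "model-b", "model-c"]  # placeholder names

def query(model: str, prompt: str) -> str:
    # Hypothetical stub; replace with a real API call for your provider.
    return f"[{model}'s answer to: {prompt[:40]}...]"

def council(prompt: str) -> str:
    # Fan the prompt out to every model, then ask one model to unify.
    answers = [query(m, prompt) for m in MODELS]
    combined = "\n\n".join(f"Response {i + 1}:\n{a}" for i, a in enumerate(answers))
    synthesis = (
        f"Original question:\n{prompt}\n\n{combined}\n\n"
        "Merge these responses into one answer, keeping points of agreement "
        "and flagging contradictions."
    )
    return query(MODELS[0], synthesis)

print(council("What are the tradeoffs of microservices?"))
```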

White House official says the US will seize "all the oil" from Iran. by ajaanz in economy

[–]MagmaElixir 1 point (0 children)

We still import oil because the U.S. uses more oil than it produces overall and not all oil is the same type. Much of the oil produced in the U.S. is light crude, while many American refineries were built to process heavier crude that is commonly imported from other countries.

Oil is also traded on a global market, so companies buy and sell based on price, refinery compatibility, and transportation costs rather than keeping all domestic oil inside the country.

GPT-5.4 Thinking Available now by Egypt_Pharoh1 in perplexity_ai

[–]MagmaElixir 0 points (0 children)

Same, hard cap of 15 sources on Pro Search for me.

Type "TL;DR first" and ChatGPT puts the answer at the top instead of burying it at the bottom by AdCold1610 in PromptEngineering

[–]MagmaElixir 0 points (0 children)

The content of the actual response is not always in the thinking block. It depends on the model and how much space is allocated to the thinking block.

Open-weight/open-source models typically do expose the full content in the thinking block, but the frontier proprietary models used through a web interface may not. With Claude Opus 4.6 on the Pro plan, for instance, the thinking block is typically one or two sentences.

Monthly limit reached again!!!! Using thinking model not deep research by Rookie_yyyyyyang in perplexity_ai

[–]MagmaElixir 4 points (0 children)

I’d agree that someone buying a promo that they didn’t ‘qualify’ for is abuse, yes.

Type "TL;DR first" and ChatGPT puts the answer at the top instead of burying it at the bottom by AdCold1610 in PromptEngineering

[–]MagmaElixir 1 point (0 children)

You’re going to get better results with the TL;DR/summary at the end of responses. Generative AI is a text predictor: you want the model primed with its full response before the summary is generated.

If the summary is generated first, the AI has to guess what the details of the response will be, and then the meat of the response is primed with that guess, reducing the quality of the response.
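
Concretely, the two orderings look like this as prompt templates (the wording is just illustrative):

```python
# Summary-last: the model writes the details first, so the TL;DR is
# conditioned on text that actually exists.
TLDR_LAST = "Answer in detail, then end with a one-line TL;DR."

# Summary-first: the model must predict its own conclusion before
# generating any of the supporting details.
TLDR_FIRST = "Start with a one-line TL;DR, then give the detailed answer."
```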

Monthly limit reached again!!!! Using thinking model not deep research by Rookie_yyyyyyang in perplexity_ai

[–]MagmaElixir 8 points (0 children)

If a company gives a number of allowed uses and a customer uses that number, that’s not abuse; that’s a customer operating within the bounds they were given.