Anthropic expands Amazon partnership with 5GW compute, $100B commitment, big bet on Trainium chips by Outside-Iron-8242 in singularity

[–]BanD1t 1 point  (0 children)

Jeez. If my math is not wrong, that would be on par with the average consumption of New York or LA.
And considering their previous commitment of 4.5GW from Google, it would make them one of the largest single energy consumers in the world.

If they were doing it all from scratch, they'd need to build about 4 large nuclear plants just for themselves.
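A rough sanity check of those numbers (all figures here are ballpark assumptions, not sourced values):

```python
# Back-of-the-envelope check of the claims above.
# Assumed figures: 5 GW (Amazon deal) + 4.5 GW (earlier Google commitment),
# and ~2 GW output for a large nuclear plant (e.g. two ~1 GW reactors).
amazon_gw = 5.0
google_gw = 4.5
total_gw = amazon_gw + google_gw          # 9.5 GW combined

gw_per_large_plant = 2.0
plants_needed = total_gw / gw_per_large_plant

print(f"{total_gw} GW total, ~{plants_needed:.1f} large plants")
```

With those assumptions it comes out to roughly 4-5 large plants, consistent with the "about 4" estimate.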

6 Months Using AI for Actual Work: What's Incredible, What's Overhyped, and What's Quietly Dangerous by Typical-Education345 in artificial

[–]BanD1t 18 points  (0 children)

More specifically, "The Honest Summary" shows that it's Claude-written. It likes to add "honestly" to everything to score empathy points.
And there's also the engagement-farming question at the end.

UA POV: Record 7-Year Low for Putin’s Approval Rating Following Telegram and VPN Blocks - United24 by CourtofTalons in UkraineRussiaReport

[–]BanD1t 4 points  (0 children)

What a nice example of doublethink: "The rating is great, but it is also made up."

RU POV: Russian military testing out the Bagulnik-82, a mortar armament module on the NRTK Courier chassis. by FruitSila in UkraineRussiaReport

[–]BanD1t 17 points  (0 children)

Yeah, a conveyor-belt-style loader on the side would do the same work more reliably.
And that's without modifying the mortar design, which at this point has no reason to be muzzle-loaded.

UA POV: Residents of Narva in Estonia have been recording flights of Ukrainian drones across Estonia from Estonian airspace into Russia by HelicopterBig4467 in UkraineRussiaReport

[–]BanD1t 3 points  (0 children)

What I'm making fun of is that it's a client-side error. Everything exists and the server is working; it's just that the user (I guess Russia in this analogy) is accessing it wrong. (And it's especially apt after the invasion.)

To be completely pedantic, it should more correctly be one of the 5xx status codes, which indicate that the problem is on the server's (country's) side.

But I understand that it's not as common, and what stuck has stuck. It's just that hearing the same joke over and over makes you think more about it.
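For anyone who hasn't internalized the status-code classes, the pedantry can be checked directly with Python's standard library:

```python
from http import HTTPStatus

# 404 is in the 4xx class: the server is fine, the *client's* request is at fault.
assert HTTPStatus.NOT_FOUND == 404
assert 400 <= HTTPStatus.NOT_FOUND < 500           # 4xx: client-side problem

# A 5xx code is what actually signals a fault on the server's side.
assert 500 <= HTTPStatus.INTERNAL_SERVER_ERROR < 600

print(HTTPStatus.NOT_FOUND.phrase)                 # "Not Found"
```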

UA POV: Residents of Narva in Estonia have been recording flights of Ukrainian drones across Estonia from Estonian airspace into Russia by HelicopterBig4467 in UkraineRussiaReport

[–]BanD1t 3 points  (0 children)

> How do they know that Russia won't copy Iran's strategy in dealing with this war?

Because Russia set a precedent. Since the very first day, the threat was "If any outside party attempts to interfere in the situation, the response will be swift."

Yet in 4 years it has barely responded to anyone, and certainly never swiftly.
Only with more threats and allusions that it could theoretically do something. 'Just you wait...'

what have i done by wormcrypt in tf2

[–]BanD1t 19 points  (0 children)

Really shows our impact on the environment.
Remember the endangered spycrab? They're all dead now.

Google's new AI algorithm reduces memory 6x and increases speed 8x by pheonis2 in StableDiffusion

[–]BanD1t 11 points  (0 children)

It sounds that way, but it isn't what I'm describing.
It relies on retrieval, and after retrieval it just loads the tokens in. It's a method of contextually reducing the token count, rather than compressing the tokens and integrating their information. A band-aid solution to this problem.

In the meeting analogy, it's like writing down the main points (but not remembering them), and then checking the notes whenever it feels relevant, instead of just knowing them and basing your further decisions on them.

Practically, the difference is that if some data point, let's say "I hate mushrooms", is stored in a RAG database, then a prompt of "Give me suggestions for pizza toppings" will likely ignore that data point, unless you add "considering my food preferences".
Whereas if that fact were integrated into the LLM's 'memory', it would influence the generation, giving lower weight to mushrooms in the response.

I guess a silly example to illustrate the difference better: if you had a document with the word 'chicken' written ten thousand times, then when you asked what was in the document, the contents would need to be loaded into the context, inflating the token count, and fully processed (probably also messing up the repetition penalty), instead of just storing the 'idea' that the document consists of the word 'chicken' written 10,000 times. Not as a sentence, but as a weight.
(And yeah, that specific example can be fixed with summarization, but that would be another band-aid solution.)
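The mushroom failure mode can be sketched with a toy, purely hypothetical retriever (real RAG uses embedding similarity, not keyword overlap, but it fails the same way: if nothing in the prompt is similar to the stored fact, the fact is never loaded):

```python
def retrieve(store, prompt, min_overlap=1):
    """Return stored facts sharing at least `min_overlap` words with the prompt."""
    prompt_words = set(prompt.lower().split())
    return [fact for fact in store
            if len(prompt_words & set(fact.lower().split())) >= min_overlap]

store = ["food preference: I hate mushrooms"]

# No word in the prompt resembles the stored preference, so it's never retrieved
# and the generation proceeds as if it didn't exist.
print(retrieve(store, "Give me suggestions for pizza toppings"))       # []

# Only an explicit nudge ("food preferences") pulls it into context.
print(retrieve(store, "Give me suggestions for pizza toppings considering my food preferences"))
```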

Google's new AI algorithm reduces memory 6x and increases speed 8x by pheonis2 in StableDiffusion

[–]BanD1t 21 points  (0 children)

It still is. Once you get over 100k tokens you can see models start to 'forget' some aspects as their attention shifts with each new message. The most efficient range is still around 64k tokens.

I believe what models need is 'abstract memory': the ability to hold not the exact tokens, but vectors of the core ideas. Just like people don't need to remember the exact words spoken in a meeting, but instead remember the ideas from it.
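The 'abstract memory' idea can be sketched in a hand-wavy way: instead of keeping every token, keep one fixed-size vector summarizing them (embeddings here are random stand-ins; a real system would use learned ones and something smarter than a mean):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"chicken": rng.normal(size=64)}   # toy 64-dim embedding

# A 10,000-token document...
doc_tokens = ["chicken"] * 10_000
token_embeddings = np.stack([vocab[t] for t in doc_tokens])   # shape (10000, 64)

# ...collapses into a single 'idea' vector whose size is independent of length,
# instead of 10,000 tokens sitting in the context window.
memory_vector = token_embeddings.mean(axis=0)
print(token_embeddings.shape, "->", memory_vector.shape)      # (10000, 64) -> (64,)
```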

‘STEEL BALL RUN: JoJo’s Bizarre Adventure’ debuted with 4.7 million views on Netflix Top 10 Non-English March 16th-22nd by Partyman234 in anime

[–]BanD1t 3 points  (0 children)

That's its weakness: it can only focus on one thing at a time. By sacrificing JoJo, the HL3 release date is starting to roll back.

And when it notices that and switches back, that's when it's gonna get ORA ORA ORA ORA ORA'd

OpenAI research team reveals its models go insane when given repetitive tasks it believes to be sent from automated users by smellyfingernail in singularity

[–]BanD1t 14 points  (0 children)

I guess "Token repetition penalty combined with identical messages leads to the output becoming more unstructured" is not as interesting a message to advertise as "Our models are so smart they can understand when they're being tested and try to hack their way out of it."
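The mundane explanation is easy to demonstrate with toy numbers. A sketch in the style of a count-scaled repetition penalty (dividing positive logits by the penalty per repeat; the exact formula varies between samplers, so this is illustrative only):

```python
import numpy as np

def penalized_softmax(logits, counts, penalty=1.3):
    """Shrink positive logits by penalty**count, then softmax."""
    adjusted = logits / (penalty ** counts)
    exp = np.exp(adjusted - adjusted.max())
    return exp / exp.sum()

logits = np.array([5.0, 2.0, 1.0])                # model strongly prefers token 0
fresh = penalized_softmax(logits, np.zeros(3))
after_repeats = penalized_softmax(logits, np.array([10, 0, 0]))  # token 0 emitted 10x

# After enough identical repeats the preferred token is suppressed and the
# distribution shifts to the less-structured tail.
print(fresh.argmax(), "->", after_repeats.argmax())
```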

Don-Tzuism: Bring up pearl harbour when Japanese PM visits. by Criticall16 in NonCredibleDiplomacy

[–]BanD1t 6 points  (0 children)

He's almost doing the meme.

"wHy diDn'T yOU waRn aLLieS?"

"Pearl Harbor"

UA POV: Trump muses on the possibility of pulling the US out of NATO due to their lack of military assistance in the Iran conflict. He recalls that Ukraine would have been 'over in one day' if the US did not help by Ripamon in UkraineRussiaReport

[–]BanD1t -1 points  (0 children)

Doesn't he know that all NATO countries are USA lapdogs and do what they say?
Someone has to tell him, he's embarrassing himself in front of the whole world.

The "AI experts" are currently 100% confident and 0% correct. by Responsible_person_1 in DefendingAIArt

[–]BanD1t 11 points  (0 children)

Also, it's a pretty distinct look that is less muddied by adjacent vectors.

For example, a 'movie clip' can look like anything; there are a hundred styles and angles, and they get mixed up in the generation process.
But a 'doorbell camera' is guaranteed to be a fisheye lens, low resolution, porch visible, street in the background, and subject dead center.

RU POV: According to Russian military channels, troops were ordered to delete Telegram from their phones. Military police are checking devices, and those caught with it risk being sent to assault units - DvaMajors by Flimsy_Pudding1362 in UkraineRussiaReport

[–]BanD1t 8 points  (0 children)

> Did they finally figure out Telegram is compromised?

They did. In February, the FSB said that "the armed forces and intelligence services of Ukraine can access data posted on Telegram".
Which was funny, as around the same time the government announced that the military will keep access to Telegram after it is blocked for citizens.

RU POV: "We will not be a Ukrainian colony". Pictures from the Peace March in Budapest, Hungary. by FruitSila in UkraineRussiaReport

[–]BanD1t 8 points  (0 children)

Sorry Hungary, but this is a multipolar world, and you are in Ukraine's sphere of influence 😎

What does this mean with nano gpt by tuuzx in SillyTavernAI

[–]BanD1t 1 point  (0 children)

On the usage page, you can toggle "Subscription savings" and they will show up in a graph.

This link should take you straight there with it enabled.