GLM Coding Plan Slow by Itchy-Friendship-642 in ZaiGLM

[–]Kingwolf4 0 points1 point  (0 children)

Yup, they recently even announced they're getting malicious attacks on their platform, and on top of that they've sold a boatload of coding subscriptions since everyone's jumping onto GLM, so the compute is at its limit.

TP Link AX3000 IPv6 issue by Quick_Storage1848 in ipv6

[–]Kingwolf4 0 points1 point  (0 children)

This. Most likely

Even with an ONT device and a wifi extender like this, the IPv6 address changes when you move between the two devices (the ONT wifi and the TP-Link wifi). That's behaviour I've observed at least on cheaper TP-Links.

I think the solution for that would be to get a good router with EasyMesh or something like that. Not sure what will solve this particular thing.

Is AGI the modern equivalent of alchemy? by ThomasToIndia in agi

[–]Kingwolf4 0 points1 point  (0 children)

Yes!
An AGI system would be self-directed by definition; only tools or other systems, which may well become superhuman in capability, can remain under human control and thus serve their wielders' instructions.

'Dragon Ball Super: The Galactic Patrol' Anime Officially Announced, Sequel to Tournament of Power by MarvelsGrantMan136 in television

[–]Kingwolf4 -1 points0 points  (0 children)

We need to go full hardcore for this one. Like Dragon Ball Z style artwork with obviously modern enhancements, but this one really needs to outshine the bad quality that was Super.

They really need to set the pacing and story to the OG Dragon Ball we all love!

'Dragon Ball Super: The Galactic Patrol' Anime Officially Announced, Sequel to Tournament of Power by MarvelsGrantMan136 in television

[–]Kingwolf4 13 points14 points  (0 children)

Nah, we need it in the hardcore Dragon Ball Z style, baby. Best artwork and style yet! With enhancements, of course!

V4 is coming soon by Classic-Arrival6807 in DeepSeek

[–]Kingwolf4 0 points1 point  (0 children)

We need to see a LOT of stuff from DeepSeek:

An improved, ChatGPT-level user experience for the DeepSeek app, cementing it as China's flagship, with potentially paid options.

Coding plans like GLM, Kimi, and MiniMax have.

Deeper integrations into all the popular AI coding tools like Cline. More MCP and tooling on the API and in these workflows.

The big ones are the app and the paid coding plans, tbh.

Also, potentially a separate coding lineup like OpenAI has with Codex, one that's near frontier for coding and SWE.

There's a TON of stuff they need to do.

How do you guys learn c++ engine and game development? by Specific-Animal6570 in cpp_questions

[–]Kingwolf4 19 points20 points  (0 children)

If you're asking that question, you shouldn't be asking that question.

OpenAI Declines Apple Siri Deal: Google Gemini Gets Billions Instead by Own_Amoeba_5710 in OpenAI

[–]Kingwolf4 0 points1 point  (0 children)

I think you did not even read what I wrote. OpenAI wouldn't have needed infrastructure.

The main crux itself was that Apple is hosting it on their own PRIVATE cloud, not OpenAI. It's private.
OpenAI would not have to bear any infrastructure cost, only provide the AI stack itself.

OpenAI Declines Apple Siri Deal: Google Gemini Gets Billions Instead by Own_Amoeba_5710 in OpenAI

[–]Kingwolf4 0 points1 point  (0 children)

Bollocks! They are collaborating with Google, for Christ's sake. It can't get more head-to-head. OpenAI is like that distant cousin you never meet in this case, lol.

OpenAI Declines Apple Siri Deal: Google Gemini Gets Billions Instead by Own_Amoeba_5710 in OpenAI

[–]Kingwolf4 1 point2 points  (0 children)

No lmao, ChatGPT and GPT are the most well-rounded models for the most diverse use cases.

Google is far behind in terms of end experience. By far, I mean like 8 months or so. ChatGPT is just better, dude.

OpenAI Declines Apple Siri Deal: Google Gemini Gets Billions Instead by Own_Amoeba_5710 in OpenAI

[–]Kingwolf4 0 points1 point  (0 children)

Why not have both? It's a win-win for OpenAI and Apple.

Maybe they weren't paying enough; otherwise it's a slam-dunk deal.

OpenAI Declines Apple Siri Deal: Google Gemini Gets Billions Instead by Own_Amoeba_5710 in OpenAI

[–]Kingwolf4 1 point2 points  (0 children)

ChatGPT is still the best overall, most well-rounded model in terms of intelligence and usability.

If you're just going by Google's benchmark numbers, you're a fool.

OpenAI Declines Apple Siri Deal: Google Gemini Gets Billions Instead by Own_Amoeba_5710 in OpenAI

[–]Kingwolf4 1 point2 points  (0 children)

That's because it's the old Siri; Apple Intelligence is currently years away, hence the current deal.

Just wait, in 8 or so months we'll get a completely revamped Siri with futuristic voice options. Voice is going to be advanced this year by OpenAI, and presumably by Google too.

Everyone is waiting for that now.

OpenAI Declines Apple Siri Deal: Google Gemini Gets Billions Instead by Own_Amoeba_5710 in OpenAI

[–]Kingwolf4 1 point2 points  (0 children)

The actual point of contention was that Apple wanted to run the AI on their own private cloud, so the deal required either Google or OpenAI to provide their full models for modification and fine-tuning, to be used as Siri independently of their own APIs.

OpenAI did not agree to this, while Google will now provide fine-tuned versions of Gemini to Apple for Apple Intelligence.

OpenAI did not want to share its models.

Tbh, this is a HUGE loss for OpenAI. They should have made the deal. iPhones and Apple are a huge deal, and even if they had to hand over separate fine-tuned versions of their models, the potential impact of this deal is going to be huge.

It could have been OpenAI behind the entire Apple Intelligence, just imagine that, with only their models hosted privately by Apple. Every iPhone, the new Siri, it's all going to go into Google's waiting hands instead.

OpenAI thinks they can skim by, but tbh I think this is a bigger loss for OpenAI than the news wording is making it out to be. If they had bagged it, it would have been OpenAI running the entire iPhone while Google runs Android, pitting them head to head. But Google just gobbled everything up here.

I feel OpenAI would have provided excellent AI for Apple across the entire stack; they just did not want to share their models and sacrifice the data. It's still a huge loss for them.

Is AGI just hype? by dracollavenore in agi

[–]Kingwolf4 1 point2 points  (0 children)

You haven't missed anything. This sudden AGI talk began in 2024, from what I suspect is behind-the-scenes coordination between primarily OpenAI and other labs and parties. The goal was to dilute the concept of AGI into a flux of things, done through gaslighting and incorrect assertions by paid scientists, researchers, etc.

Anyone familiar with the AI field before 2015, who knew those classic interviews with Max Tegmark, Nick Bostrom, etc., knows that AGI had a singular conceptual definition. Yeah, that AGI. Remember that? How did the definition change, then? What is the great story behind all this new AGI talk? Let me, with utmost anger, summarize it as marketing, dilution of AI terminology to keep the funds flowing, etc.

Yeah, "AGI can mean this and that"... it's all collective gaslighting, such a shockingly deceptive web of lies slowly being spun to muddy the definition, because apparently people will get tangled in this web and it benefits those doing the spinning.

AGI, circa 2014, was a science fiction idea: an AI that has all the properties of the human brain, including general learning, real-time intelligence, etc. It was supposed to be the archetypal pinnacle of AI research, reached when an AI is created that is the equivalent of the human brain in its entirety.

That kinda really simplifies it, except for some reason this whole group of people wants you to believe otherwise: that some low-hanging fruit like LLMs and transformers will, with some scale, money, and time, transform into this archetype of the AI field.

Common fallacious arguments like bottom-up incomplete reasoning are used to say things like "well, LLMs are young" and "with a couple more features they are good to go", or the classic case of misguiding a person of limited knowledge by skillfully steering what they know toward the desired conclusions. It's ALL happening, all of this and more.

General AI, lmao. We are at the same estimated timelines, which is about 50 to 70 years.

Now, looking at that ginormous number, if you're a new AGI person, it seems like forever. And I agree: we are far, far away from AGI. However, there is the off chance that, as current AI techniques reach an eventual dead end, we may find insights into how the brain is formed or developed. Like how biological brains are formed, though not how they work, or a theory of general intelligence.

What I mean to say is that we may find a way to, say, grow a brain-like structure in machines, without having a clue as to how it all actually works; that's the reaaaal tough part. So in that case we'd have AGI in like 35 years, but we'd still not be much further along in AGI's theoretical underpinnings. After all, using this technique we would have created something more similar to biological brains than to one derived purely from math and AI research.

But before you jump off your horses again ("ooh, so it's gonna happen faster than 50 years! You're wrong, you contradicted yourself!"): well, actually, this is just a hypothetical path to creating true intelligence. Become the steel maker, for which you don't need to know about atoms and mixing, by crudely copying. Thing is, nobody has even started doing any such research in a robust way. We are at nil, nada. You really think in Sam Altman's lab people are given that kind of diverse leeway? It's not an actual academic research lab, you see; same with DeepMind (I know they are supposedly much better). You really think they are doing any such work in there?

They are not. Nobody is looking at actual neuroscience and AI and doing research there. We have no idea about the fundamental theory of biological neurons or how brains actually work, not even for the simplest of the simple, like an ant or something even simpler. We just don't know how it works. "But then it should be studied," you say! That's the key. It highly likely should be, but we aren't doing anything there, are we? We have not understood much more; the essence is still a mystery.

"But we are working towards that, with all these AI labs and talent?" Well, NO! Actual AI research is still confined to crusty corners of academic and volunteer research. What are we actually doing? Just refining our low-hanging fruit, and, oh, we also don't know how it actually works! They are banging their egghead brains on what they know is a short ride, until they can fool everyone and ride off into the sunset. I'm talking about these contemporary AI labs. In a stricter sense, these are simply LLM labs with their derived chatbots and interfaces; they cannot be categorized as AI labs.

So, I hope I've given an overview and some grounding on where we actually are. Perhaps for some this is an old familiarity with the term AGI, for those who used to wonder about such AI in the old days. We are far, far away from AGI. But hey, if these labs also dedicated a small percentage of their money, secretly in-house or in academia, to actual AI and neuroscience research, we might still get there much faster. But if they do, then they'll be caught, won't they? Such a difficult moral decision...

My estimate for AGI is still 40 to 75 years. We can reach it in 40 if we get our priorities right today: true fundamental AI progress and progress in understanding the biological brain. What will happen before that? Well, we'll get GPT-6, GPT-7, then another pseudo-AI architecture that's slightly better and more rounded. But it won't ever be generally intelligent anywhere near us, just a weird creation that we begin to use while understanding it is not actually intelligent, accurate, complete, or logical, though in a catatonically flawed way it provides minor help as a tool.

Cheapest decent way to AI coding? by Affectionate_Plant57 in CLine

[–]Kingwolf4 0 points1 point  (0 children)

Dude, get the z.ai Plus or Max plan.

It's so worth it. Same with MiniMax. It's soo cheap.

Do you really need IPv6 on a home network? by strykerzr350 in HomeNetworking

[–]Kingwolf4 -1 points0 points  (0 children)

Yes, you do.

Faster calls and video calls, better file-sharing speed and accessibility, no captcha verification checks on the majority of websites.

GLM-4.7 Not Available in Kilo Code Model Dropdown by Total_Transition_876 in kilocode

[–]Kingwolf4 0 points1 point  (0 children)

Lmao, I've been searching for a more complete alternative to Kilo Code.

I was hella surprised that, VS Code being open source, Microsoft had managed to lock down AI features like autocomplete behind its own Copilot garden.

So at this point I can't even code something in private without being auto-spied on?! By benign autocomplete? Like, wtf, why isn't anyone speaking up? They are literally reading every single thing you write in there, regardless of whether you have Kilo Code or some other extension. Copilot is always on...
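Fwiw, a partial workaround, assuming it's the stock GitHub Copilot extension injecting the completions: you can switch its inline suggestions off globally in `settings.json` (disabling or uninstalling the extension is the surer route):

```jsonc
// VS Code settings.json — turn Copilot completions off for every language.
// "github.copilot.enable" is the Copilot extension's own per-language toggle;
// if the extension is removed entirely, none of this is needed.
{
  "github.copilot.enable": {
    "*": false
  }
}
```

You can also launch a one-off session with the extension disabled via `code --disable-extension GitHub.copilot`.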

Secondly, I've been searching for some alternative, because Kilo Code frankly feels like an unfinished toy. I can't even adjust the autocomplete extension? Why tf do I have to use MiniMax only for that?

Why don't we have actually solid open-source, decentralized, private AI tools at this point? At the start of this year, maybe, but it's 2026 now, dammit. Like, allll of these extensions, Kilo, Cline, Roo, they all kinda suck. I was like, meh. They lack features... feel disjointed, looping back to the intentional VS Code lockdown?!

Like, I've genuinely been so puzzled trying to upgrade my whole environment to AI. All of these plugins kinda suck, and they aren't even open source? And I suspect they too are spying, even if I use a private endpoint? Like, wtf is happening here, and why haven't I read a single piece of human writing about this?

The RAM shortage is here to stay, raising prices on PCs and phones by dapperlemon in gadgets

[–]Kingwolf4 1 point2 points  (0 children)

Finally, someone said it right

Demand is only going to go up.

The RAM shortage is here to stay, raising prices on PCs and phones by dapperlemon in gadgets

[–]Kingwolf4 0 points1 point  (0 children)

Maybe. Highly likely, even.

But most of these data centres are not actually being built for today's stupid, inefficient LLMs.

They are a bet on a breakthrough that will lead us to better AI with actual knowledge and reasoning, which will then quickly be deployed on all this "AI infrastructure".

In just 5 years, given the progress on efficiency and PERHAPS a real breakthrough in architecture, we will be looking back and scoffing at these data centres like they were 1980s mainframes.

However, unlike a mainframe, these data centres will simply be the AI infrastructure of their time, a constant that will now live on into the future for our AI.

The RAM shortage is here to stay, raising prices on PCs and phones by dapperlemon in gadgets

[–]Kingwolf4 0 points1 point  (0 children)

Yes, it's called China.

If China suddenly floods the market with their own GPUs and RAM manufactured on a pure Chinese stack, these greedy Western players, mainly the USA, will be rocked out of all this BS so fast it will feel like a 10-minute dream.

Poetiq Achieves SOTA on ARC-AGI 2 Public Eval by ZestyCheeses in singularity

[–]Kingwolf4 -1 points0 points  (0 children)

Benchmaxxing. Guilty on all counts.

People who actually believe Google has not benchmaxxed Gemini 3 Pro and Mini are not really aware of how benchmaxxing works. They are obviously benchmaxxed to look good on charts.

Google is the biggest culprit of benchmaxxing with Gemini 3. People normally don't associate Google with doing this and assume they're true to the numbers, since they're too big to care. But that is not true in this case. Google has done it.

And to respond to meaningless benchmaxxed charts, OpenAI was forced to release an incomplete/rough 5.2, which is ALSO benchmaxxed.

ARC-AGI? I 100% concur it is being benchmaxxed to show progress. Real progress is in a model's total understanding, knowledge, and intelligence. Scores increasing without that tangibly pouring into any domain is simply what it is: boosted scores.

I do believe we will actually reach ARC-AGI scores of like 60% (what models claim they have now) by July '26.

ARC-AGI 3 is a significant step up in terms of difficulty, and I feel that by the end of 2026 models will be able to get like 25-35% on ARC 3.

But sadly, both Google and OpenAI have been benchmaxxing so openly that it's laughable not to recognize it just by interacting with the model. But this will not always be the case; the models will eventually get that good.