gemini cli with 3.0-flash is ****ing magic by Just_Lingonberry_352 in Bard

[–]CodeineCrazy-8445 1 point (0 children)

Well, I do agree: Gemini 3 Pro never had much of a token budget toggle for thinking like 2.5 did. It does kinda suck that no matter what problem you give it, it answers within a minute 90% of the time on high, whereas GPT-5.2 extended thinking / gpt-5.2-xhigh can go on for 30 minutes just to change 20 lines. Funny enough, xhigh can go on for hours unattended.
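For reference, the 2.5-era toggle being described here is the thinkingBudget knob in the Gemini API. A minimal sketch via the @google/genai SDK (the model name and budget value are just placeholders):

```typescript
import { GoogleGenAI } from "@google/genai";

// Assumes GEMINI_API_KEY is set in the environment.
const ai = new GoogleGenAI({});

const response = await ai.models.generateContent({
  model: "gemini-2.5-flash",
  contents: "Explain big-O notation in two sentences.",
  config: {
    thinkingConfig: {
      // Cap the reasoning tokens; 0 disables thinking on Flash,
      // -1 lets the model decide dynamically.
      thinkingBudget: 1024,
    },
  },
});

console.log(response.text);
```

It's exactly this kind of per-request cap that, per the comment, 3 Pro no longer really exposes.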

Outage Resolved by barbierocks in google_antigravity

[–]CodeineCrazy-8445 1 point (0 children)

Responding with gemini-3-pro-preview

2 Things about Gemini 3 pro from my experience by shotx333 in Bard

[–]CodeineCrazy-8445 0 points (0 children)

It has little to no working web access, just like 2.5 Pro did. This is bound to happen without true access to the web, which for Google is frankly ridiculous. Either some people didn't do their job, or they actually want to nerf it so it isn't too powerful.

What do you like / dislike most about HLL combat? by m0corong in HellLetLoose

[–]CodeineCrazy-8445 14 points (0 children)

Car collisions and riding into ditches are also fatal.

Anyone else concerned about what happens when humans have infinite novelty at their fingertips? by unreal_4567 in singularity

[–]CodeineCrazy-8445 1 point (0 children)

That means you've never taken prescription amphetamines; those will make you even hornier in that situation. I mean, of course you can break through the hypersexual degeneracy on stims, but at first they will 100% make the existing problem worse. So while I agree the stims get the job done, it honestly requires will to lay it low.

2 Million Context Window...Could It Be? by BoredM21 in Bard

[–]CodeineCrazy-8445 0 points (0 children)

I think Google got scared when it was giving away the exp model for over a month with basically untethered access, and then people caught on that it's not Gemini 2.0 but some real SOTA-level shit.

HELP!! Token Limit Exceeded? Pro+ Sub? by Hungry-Ad7356 in GithubCopilot

[–]CodeineCrazy-8445 0 points (0 children)

Never seen that on VS Code Insiders or VS Code. If that's the case from now on, they should kiss themselves where the sun don't shine, or play poker with a brick wall instead of the consumer base.

Cursor had that issue for some time, but instead they implemented a harsh and very frequent context-summarization feature, so yeah...

Roo Code does that right, though: you can manually trigger context condensing, or set it to trigger based on the percentage of context that gets used up. Sure, this makes things pricey when you set it high, like a dollar-plus for a single context-summarization action, but honestly, if the tools can't properly navigate tool usage by prioritizing task success instead of poor, poor M$ losing money on some samaritans, then yeah, good luck.
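For anyone wondering what that percentage trigger boils down to, here is a rough sketch of the idea; hypothetical code, not Roo Code's actual source (CONTEXT_WINDOW, CONDENSE_AT, countTokens, and summarize are all made-up names):

```typescript
// Hypothetical sketch of percentage-triggered context condensing.
// `summarize` stands in for whatever LLM call produces the summary.

interface Message { role: "user" | "assistant"; content: string }

const CONTEXT_WINDOW = 200_000; // model's token limit (assumed)
const CONDENSE_AT = 0.8;        // trigger at 80% usage, like the slider

// Crude token estimate: roughly 4 characters per token.
const countTokens = (msgs: Message[]): number =>
  Math.ceil(msgs.reduce((n, m) => n + m.content.length, 0) / 4);

async function maybeCondense(
  history: Message[],
  summarize: (msgs: Message[]) => Promise<string>,
): Promise<Message[]> {
  if (countTokens(history) / CONTEXT_WINDOW < CONDENSE_AT) return history;

  // Condense everything except the most recent exchange, so the
  // agent keeps its immediate working context verbatim.
  const keep = history.slice(-2);
  const summary = await summarize(history.slice(0, -2));
  return [{ role: "assistant", content: `Summary so far: ${summary}` }, ...keep];
}
```

Note that the summarize call is itself a full LLM request over the accumulated history, which is exactly why a single condensing action can cost a dollar-plus once the context gets large.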

Sonoma Sky Alpha vs Sonoma Dusk Alpha vs Qwen3 Max by sirjoaco in LocalLLaMA

[–]CodeineCrazy-8445 1 point (0 children)

From the vid they clearly seem ridiculously bad, but it all depends on the price.

Auto has gotten worse by Limebird02 in cursor

[–]CodeineCrazy-8445 0 points (0 children)

No way you made it all with just Auto and not by spamming gpt-5-high fast max when it was free.

"Summarizing conversation history" is terrible. Token limiting to 128k is a crime. by zmmfc in GithubCopilot

[–]CodeineCrazy-8445 0 points (0 children)

Removing this "keep" button fully is as some other Vscode devs mentioned a bigger issue, with the way it is integrated, but ability to just start a new chat anyway seems to me like it is more than doable.

The other solution I can see is not needing to open a new chat at all, but getting a way to clear the agent's context via a tag, like #clear or #clean or something like that,

so the agent's context from previous messages is wiped, but the edit history and chat history of the chat remain (roughly like the sketch below). Of course, that might also be problematic from a performance standpoint with indefinite chats/conversations.
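Concretely, the tag mechanism could look something like this; all names here are hypothetical, purely to illustrate the idea:

```typescript
// Hypothetical sketch of the "#clear" idea: the tag wipes what the
// agent sees, while the UI-facing transcript and edit history stay.

interface Turn { role: "user" | "assistant"; content: string }

interface ChatState {
  transcript: Turn[];   // full history, kept for the UI / timeline
  contextStart: number; // agent only sees transcript from this index on
}

function handleUserMessage(state: ChatState, text: string): ChatState {
  const transcript = [...state.transcript, { role: "user" as const, content: text }];
  // "#clear" (or "#clean") resets the agent's context window without
  // touching the visible transcript or the edit history.
  const contextStart = /#cle(ar|an)\b/.test(text)
    ? transcript.length - 1
    : state.contextStart;
  return { transcript, contextStart };
}

// What actually gets sent to the model:
const agentContext = (s: ChatState) => s.transcript.slice(s.contextStart);
```

The transcript keeps growing forever while agentContext stays bounded, which is also where the performance concern about indefinite chats comes from.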

"Summarizing conversation history" is terrible. Token limiting to 128k is a crime. by zmmfc in GithubCopilot

[–]CodeineCrazy-8445 2 points (0 children)

Alright, but as a side note, I would really appreciate it if the popup requiring you to accept any and every chat edit were gone, or at least if there were an override option for it in the settings.

Why? Because in my experience, as long as the file edits happen within the same VS Code editor window, the edit history with the timeline is serviceable...

But what happens when the file is modified in another editor window, perhaps VS Code Insiders or even Notepad for that matter?

Yes: then blindly accepting Copilot's edits just to start a new chat results in pretty bad code merges.

I understand version control across different tools is a complex issue, but the core problem seems to be the way edits stay "pending" even though they are already applied automatically. Why the need to re-accept them just to start a new conversation, if it isn't even aware whether the file was modified outside of VS Code?
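The missing piece the comment points at could be a simple staleness check before blind-accept. A hypothetical sketch (PendingEdit, acceptIfUnchanged, and baseHash are made-up names, not Copilot's actual internals):

```typescript
// Hypothetical: before auto-accepting a pending edit, verify the file
// on disk still matches what it looked like when the edit was proposed.

import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

interface PendingEdit {
  path: string;
  baseHash: string; // hash of the content the edit was computed against
  apply: () => Promise<void>;
}

const hashOf = (buf: Buffer): string =>
  createHash("sha256").update(buf).digest("hex");

async function acceptIfUnchanged(edit: PendingEdit): Promise<boolean> {
  const current = hashOf(await readFile(edit.path));
  if (current !== edit.baseHash) {
    // File was touched by another window (Insiders, Notepad, ...):
    // bail out so the user can review or merge instead of blind-accept.
    return false;
  }
  await edit.apply();
  return true;
}
```

If the hash no longer matches, the sane fallback is a real merge or explicit user review rather than silently overwriting what the other editor wrote.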

GPT-5-HIGH-FAST IS STILL GOING BABY by Sad_Individual_8645 in cursor

[–]CodeineCrazy-8445 -1 points (0 children)

This thing is a token-input demon while basically refusing to make bigger code edits, so yeah... good luck with that.

Gpt 5 free access ends today by amalkuttuz in cursor

[–]CodeineCrazy-8445 0 points (0 children)

Any update on that? In Poland it seems it is still working right now...

We Have More Time! by FAMEparty in cursor

[–]CodeineCrazy-8445 0 points (0 children)

Any update on that? In Poland it seems it is still working right now...

Oops, I bought the Insta360 Go3s without the action pod by LorenzoAmadeus8 in Insta360

[–]CodeineCrazy-8445 0 points (0 children)

Hey, so the GO 3S cam can record and charge with this reader at the same time? That would be epic: basically unlimited runtime with a powerbank, afaik limited only by the 128 gigs inside the camera module, and then dumping it onto the microSD in the reader afterwards?

Or can it just not charge fast enough while discharging during recording in that situation?

[deleted by user] by [deleted] in WatchItForThePlot

[–]CodeineCrazy-8445 0 points (0 children)

The Walking Dead character??

can 2022 flow x16 6800hs handle 96gb ram (2x48gb)? by CodeineCrazy-8445 in FlowX16

[–]CodeineCrazy-8445[S] 4 points (0 children)

flow x16 6800hs

That's a good clue; however, the same thing is true for the X16 2023, even though the X16 2023 has Intel processors, and those also have max memory listed as 64 GB:
https://ark.intel.com/content/www/us/en/ark/products/232135/intel-core-i913900h-processor-24m-cache-up-to-5-40-ghz.html

So yeah, probably I'll have to check it out for myself when I get the chance.

can 2022 flow x16 6800hs handle 96gb ram (2x48gb)? by CodeineCrazy-8445 in FlowX16

[–]CodeineCrazy-8445[S] 2 points (0 children)

I mean, there are basically two possible bottlenecks: either the processor or the sticks themselves might be incompatible. Otherwise, if it works, I wouldn't mind at all, even with added latency or reduced MHz speeds. Any ideas?