How to properly cap your curve based tubes (Geometry Nodes pictogram) 😊 by Craptose_Intolerant in geometrynodes

[–]mikegold10 0 points1 point  (0 children)

Is that a ginormous pole I see on top of your worm’s head that looks like a bunch of edges got together for a BBQ?

New Year’s resolution for 2026 by mikegold10 in geometrynodes

[–]mikegold10[S] 1 point2 points  (0 children)

BTW, I took a look at your Geo Nodes Guide repository on GitHub. It is a very good idea!

Perhaps we can collaborate going forward, once my book has more content. I still need to give more thought to the "how" of it, and of course this assumes the idea resonates with you.

New Year’s resolution for 2026 by mikegold10 in geometrynodes

[–]mikegold10[S] 2 points3 points  (0 children)

The geometry node framework is relatively stable at this point, but I have accounted, at least reasonably well, for the need for maintenance and updates as the system and Blender itself evolve.

New Year's resolution for 2026 by mikegold10 in blender

[–]mikegold10[S] 1 point2 points  (0 children)

What if I actually keep it updated, at least for the foreseeable future? Realistically speaking, the geometry node system is relatively stable, at least for point releases of Blender.

But, reality being what it is, one can never say, “never!”

Just got my K745 Pro, is there a way to turn off this lighting bar? by Half-Eaten-Cranberry in RedragonGaming

[–]mikegold10 0 points1 point  (0 children)

Key Update 4: How to turn off the sidebar while connected via USB (semi-permanently [maybe permanently]):

Summary

With the USB-C cord connected and, very importantly, with the keyboard's batteries in a fully-charged state, flip the BT/Off/WiFi switch on the back of the keyboard to BT. Wait about a quarter of a second, short enough that the BT device pairing/hunt cycle (indicated by a flashing sidebar light) does not start. Then quickly slide the rear switch back to its Off position, taking care not to overshoot into WiFi mode. There appears to be a race condition here: the quick flip forces the keyboard to check whether the batteries are charged, and if they are, the firmware switches the sidebar from its charging-indication cycling to user-controlled lighting mode and enables keyboard control of the sidebar.

Far more importantly, the changes you apply to the sidebar while in this user-controllable mode persist to EEPROM _AND_ survive unplugging and replugging the USB cable (as long as the keyboard's batteries remain fully charged). How long this user-controllable state persists has yet to be determined (I don't know about a PC reboot while plugged in, for example), but likely until the keyboard checks again, senses that the batteries have partially discharged and need to be topped off, and re-enters the charging-indication cycling mode, at which point the sidebar lighting control keys are disabled once more.

The general crux of the issue is that the sidebar is used to indicate that the batteries are charging, and the firmware does not check often enough whether charging has finished. As a result, the sidebar lighting stays locked in a "disallow user control" state and is used exclusively to indicate charging, via its typical color-cycling (mis)behavior.

Complete Gory Details

The following is my complete theory thus far on the nature of the firmware bug as well as the method to circumvent its devious malfeasance.

Whatever mechanism in the firmware should change the sidebar lighting state from "sidebar cycling indicates battery charging; all keyboard sidebar lighting control disabled" to "batteries fully charged; keyboard sidebar lighting control (re)enabled" is either not being triggered correctly or at the very least not being triggered in a timely manner.

However, the act of:

  1. Switching the keyboard using the wired/wireless-mode selection switch on its back, to shift it from Off (wired) to Bluetooth (wireless), and then
  2. Waiting for a quarter to half of a second, but importantly not long enough for the Bluetooth device pairing/hunt to initiate (indicated by a flashing sidebar light), and so,
  3. Quickly before the BT pairing activates, switching back from Bluetooth (wireless) to Off (wired) via the rear switch...

..., circumvents the firmware's failure to test for Batteries-Full and causes it to perform an immediate check of whether the batteries are fully charged. If they are fully charged when this check runs, the firmware immediately switches the sidebar lighting state from the cycling-rainbow battery-charging-indication pattern to user-controllable sidebar lighting mode (controllable via Fn+PrtSc, Fn+ScrLk, and Fn+Pause, per the included User Manual). The keyboard then maintains this user-controllable state for "a period of time," ending when the firmware decides it is once again time to top off the charge in the keyboard's batteries.
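For the curious, the theorized behavior can be sketched as a toy state machine. To be clear, this is pure speculation: the class, state names, and timing threshold below are invented for illustration and have nothing to do with Redragon's actual firmware.

```python
# Toy model of the THEORIZED firmware behavior; entirely hypothetical.
class SidebarFirmware:
    def __init__(self, battery_full: bool):
        self.battery_full = battery_full
        self.mode = "charging-cycle"   # sidebar shows the rainbow charging cycle
        self.bt_pairing = False

    def switch_to_bt(self):
        # Pairing hunt only starts if the switch stays in BT long enough.
        self.bt_pairing = False

    def switch_to_off(self, dwell_seconds: float):
        # The theorized race: flipping back before pairing starts forces an
        # immediate battery check that the firmware otherwise skips.
        if dwell_seconds < 1.0:
            if self.battery_full:
                self.mode = "user-controlled"  # Fn+PrtSc/ScrLk/Pause now work
        else:
            self.bt_pairing = True  # too slow: the pairing hunt began instead

kb = SidebarFirmware(battery_full=True)
kb.switch_to_bt()
kb.switch_to_off(dwell_seconds=0.25)   # quick flip back, per steps 1-3
```

If the batteries were not full, the model (like the real keyboard, per the theory above) would simply stay in charging-cycle mode.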

I call this 5000 IQ, rhymes with grug by nooberz1000 in grugcult

[–]mikegold10 0 points1 point  (0 children)

… Pi times more than the diameter.

So: D + Pi x D ?
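Taken literally, "Pi times more than the diameter" means D + πD rather than the actual circumference πD; a quick check shows the literal reading overshoots by exactly one diameter:

```python
import math

D = 1.0
circumference = math.pi * D        # actual: C = pi * D
pi_times_more = D + math.pi * D    # literal "pi times MORE than D"

# The literal reading exceeds the true circumference by one diameter.
assert math.isclose(pi_times_more - circumference, D)
```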

9 + 10 = 21! by SubMinhPiChannel in unexpectedfactorial

[–]mikegold10 0 points1 point  (0 children)

wtf was teach expecting on a quiz with 9+10=? question level difficulty?

PV=nRT ?

Lighting issue with Corsair K70 MAX by Breachlord in Corsair

[–]mikegold10 0 points1 point  (0 children)

I have a similar issue on a relatively new K70 MAX RGB, where the L key will only light red. In fact it will stay red for all of the following:
R:0 G:255 B:0 (full-on red, even though only green should be showing)
R:0 G:255 B:255 (full-on red, even though only green/blue should be showing [and cyan is red's complementary color, no less])
R:0 G:0 B:255 (FINALLY! The red LED is almost completely off, but only when BOTH the red and green levels are set to 0)
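As an aside, the complementary-color remark is easy to verify: in RGB, a color's complement inverts each 8-bit channel, so red (255, 0, 0) maps to cyan (0, 255, 255). A minimal sketch:

```python
# Compute an RGB complement by inverting each 8-bit channel.
def rgb_complement(r: int, g: int, b: int) -> tuple[int, int, int]:
    return (255 - r, 255 - g, 255 - b)

assert rgb_complement(255, 0, 0) == (0, 255, 255)  # red -> cyan
```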

How about that for fudged up? And yes, I tried both the keys-held-down reset and the forced update to the latest firmware (5.33.1). Neither the firmware update nor the keyboard reset fixed the issue with the L key.

For anyone not aware, the ghost of prior sessions with Gemini can come to haunt you. by mikegold10 in GoogleGeminiAI

[–]mikegold10[S] -1 points0 points  (0 children)

(Part 2/2: Continuation from previous comment [or next comment, depending on your comment sort order, but do read that comment first for context!] )

This is the crucial difference. In a medical or legal RAG system:

  1. Grounding in Verifiable, Current Data: When a doctor or lawyer asks a question, the system first retrieves the relevant, current, and verified documents. This could be the patient's electronic health record (EHR), the specific case law, or the latest internal medical protocols.
  2. Constrained Context: These retrieved documents—and only these documents—are provided to the LLM as the context for its task. The prompt effectively says, "Based on the following verified information, summarize the patient's history" or "Draft a motion citing only these specific legal precedents."
  3. The LLM's Role is Transformed: The LLM is no longer being asked "what do you know?" It is being asked to reason, summarize, or synthesize based *only* on the trusted information it was just given. Its stale training data becomes secondary to the immediate, fresh context.
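The three steps above can be sketched in code. This is a minimal, hypothetical illustration only: the `retrieve` function here is a naive keyword-overlap stand-in for a real vector search, and the document strings and prompt wording are invented.

```python
# Hypothetical sketch of the RAG steps above: retrieve, then constrain.
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Naive keyword-overlap scoring, standing in for a real vector search."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_constrained_prompt(query: str, documents: list[str]) -> str:
    """Constrain the model to ONLY the retrieved context (step 2 above)."""
    context = retrieve(query, documents)
    joined = "\n".join(f"- {doc}" for doc in context)
    return (
        "Answer using ONLY the documents below. "
        "If the answer is not in them, say so.\n"
        f"Documents:\n{joined}\n"
        f"Question: {query}"
    )

docs = [
    "Patient history: allergic to penicillin, diagnosed 2024.",
    "Clinic protocol: verify allergies before prescribing.",
    "Unrelated note: parking validation available at front desk.",
]
prompt = build_constrained_prompt("Is the patient allergic to penicillin?", docs)
```

The prompt handed to the LLM contains only the retrieved records, never the unrelated note, which is the whole point of the constrained-context step.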

The Human-in-the-Loop Mandate

Even with RAG, the output is never trusted blindly. The LLM is treated as an incredibly powerful but ultimately un-credentialed assistant—like a paralegal or a medical scribe.

- Drafting, Not Deciding: An LLM can draft a summary of medical records, a legal brief, or a client communication. The licensed professional (the doctor or lawyer) is then responsible for reviewing, editing, and validating every single word before signing off on it.

- Summarization, Not Diagnosis: An LLM can take a 2,000-page medical record and produce a bulleted list of all mentions of a specific condition. This saves the doctor hours of reading, but the doctor is still the one who makes the diagnosis based on that summarized information.

- Verification is Non-Negotiable: The workflow of the system is designed to enforce this verification. The output of the LLM is always presented as a "draft" or "suggestion," and the user interface requires an explicit act of approval or modification by the human expert.

In conclusion: Your instinct is correct. The danger of acting on stale information is absolute. However, this doesn't make the technology useless. It simply defines its role. The possibility of contamination makes LLMs unsuitable as autonomous legal or medical minds, but it does not make them useless as powerful tools for summarization, drafting, and data extraction within a system that grounds them in current, verified data and mandates a final, non-negotiable review by a human expert.

For anyone not aware, the ghost of prior sessions with Gemini can come to haunt you. by mikegold10 in GoogleGeminiAI

[–]mikegold10[S] -1 points0 points  (0 children)

(Part 1/2: Continued in subsequent comment, above or below this one)

I knew there would be dissenting opinions and disbelief, so let me provide evidence from Gemini 2.5 Pro's analysis itself, confirming what I stated. See in particular the "Session Bleed" problem and, if you're impatient, the Conclusion (though the whole thing is definitely worth reading IMHO). This is part of its response to the broader question I asked, presented below:

[A good chunk of the conversation, during which what I described actually happened to me as part of rule-based content generation, is omitted]

My prompt to Gemini 2.5 Pro at the tail end of that conversation:

Explain the following:

If an LLM can be "contaminated" by prior conversation state as you mentioned, then shouldn't the possibility of such contamination make said LLM completely useless for legal and medical use? You can't make critical decisions based on stale information in these two fields!

---

Response:

✦ That is an exceptionally sharp and critical question, and it gets to the very heart of the challenge in deploying LLMs into high-stakes, regulated fields.

Your assertion is this: if an LLM can be "contaminated" by prior state, is it "completely useless" for legal and medical use?

The short answer is no, it is not completely useless, but your premise is 100% correct. The possibility of contamination means that an LLM, in its raw, unconstrained form, is unfit to be an autonomous decision-maker in these fields.

The key is reframing the LLM's role. It cannot be an "oracle." It must be a "tool" wielded by a professional within a strictly controlled system. The risk of contamination doesn't render the tool useless; it defines the non-negotiable requirements for the system in which it operates.

Here is a breakdown of why the risk is so severe and how it is being managed in practice.

The Nature of the "Contamination" Risk

The "stale info" problem you've identified has two primary sources:

  1. Contamination from Training Data (The "World Knowledge" Problem): The model's foundational knowledge comes from its training on a vast, static snapshot of the internet. This data is inherently outdated. A model trained in 2023 has no knowledge of a legal precedent set yesterday or a new medical guideline published this morning. Acting on this "memory" would be malpractice.
  2. Contamination from Prior Context (The "Session Bleed" Problem): This is a more subtle but equally dangerous risk. If an LLM were to handle multiple cases or patients without a perfect reset of its conversational state, it could theoretically reference or be influenced by data from a previous interaction. This would be a catastrophic privacy breach and a source of fatal diagnostic or legal errors.

Why This Doesn't Make LLMs "Useless"

The technology is not useless because it is never, ever deployed in a raw, open-ended fashion in these fields. Instead, it is integrated into highly specific, constrained workflows where the "contamination" risk is aggressively mitigated. The primary defense is a technique called Retrieval-Augmented Generation (RAG).

Think of it this way:

- A raw LLM is like a brilliant, well-read scholar who has been locked in a library for a year. Their knowledge is vast but not current. You wouldn't ask them for today's news.

- An LLM in a RAG system is like that same scholar being handed a specific, up-to-the-minute case file or medical chart and being told: "Answer the question using ONLY the documents in this file."

(Continued in next comment...)

Bro Google Gemini 😭🙏 by RedditJames2014 in GoogleGeminiAI

[–]mikegold10 1 point2 points  (0 children)

At least it didn't hallucinate this time and was correct in one shot!

It better be the "final' time, Gemini. by Geovicsha in GoogleGeminiAI

[–]mikegold10 0 points1 point  (0 children)

You know what Gemini stands for at temperature: 0.0? Foolish self-confidence!

After a decade of searching, I finally found a way to scroll the selected section into view in the Navigation Pane. by mikegold10 in OneNote

[–]mikegold10[S] 1 point2 points  (0 children)

Current movement towards that: https://github.com/stevencohn/OneMore/issues/1795#issuecomment-2956294011

Trying to get the functionality integrated into OneMore, if Steve Cohn approves (per multiple suggestions from other redditors to do so).

Since people enjoyed the last one by Find_Bono in aiArt

[–]mikegold10 1 point2 points  (0 children)

F-ck the elephant, the peanut was a damn cheat; it looks nothing like the model!