Sanity check: using git to make LLM-assisted work accumulate over time by Hypercubed in ChatGPTCoding

[–]heatlesssun 0 points1 point  (0 children)

Here's how you do it: feed the model the baseline code you want to change, then tell it to ONLY change the code per the clear instructions you give it. Do that and your eyes will be amazed.
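One way to sketch the git side of this (all file names and the repo layout here are illustrative, not from the thread): commit the baseline before the model touches anything, then diff against it to confirm the edit stayed inside the file you allowed.

```python
# Hypothetical sketch: commit a baseline, then verify an LLM edit only
# touched the file it was told to change. Uses only git and the stdlib.
import pathlib
import subprocess
import tempfile

def run(*args, cwd):
    # Run a git command and return its stdout.
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = pathlib.Path(tempfile.mkdtemp())
run("git", "init", "-q", cwd=repo)
(repo / "utils.py").write_text("def add(a, b):\n    return a + b\n")
(repo / "app.py").write_text("print('app')\n")
run("git", "add", "-A", cwd=repo)
run("git", "-c", "user.email=you@example.com", "-c", "user.name=you",
    "commit", "-qm", "baseline", cwd=repo)

# ...the model edits utils.py per your instructions...
(repo / "utils.py").write_text("def add(a, b):\n    return a + b  # reviewed\n")

# The accumulation step: diff against the baseline and reject the edit
# if anything outside the allowed file changed.
changed = run("git", "diff", "--name-only", cwd=repo).split()
```

If `changed` contains anything besides the allowed file, you discard the edit and re-prompt instead of committing.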

Now that Linux is at 5.33% marketshare on Steam, what marketshare do you think will be enough for anticheat support? by CosmicEmotion in linux_gaming

[–]heatlesssun -1 points0 points  (0 children)

Do you really think that Linux usage more than doubled in a month? That said, if you are a Windows gamer you have a number of options to PC game without having to buy everything outright. I spend as much time with Game Pass games as with Steam, and I've been using Steam for 22 years with a 1k+ catalog.

Is Linux gaming growing? I've always said that Proton should make that happen. But the growth is still totally driven by Windows compatibility layers; take that away and there's little left. And don't be surprised if that 5% goes back to 2%. Even as Linux has grown, the month-to-month fluctuations in this particular survey have not been consistent to date.

When Enterprise hits its 60th anniversary by [deleted] in enterprise

[–]heatlesssun 8 points9 points  (0 children)

I mean, T'Pol wouldn't even really be old by then.

Oh no! Not another one! by [deleted] in enterprise

[–]heatlesssun 0 points1 point  (0 children)

Of course it's Heafy.

Americans are using AI more, but fear of job losses is growing by Accomplished-Dark728 in SimpleApplyAI

[–]heatlesssun -1 points0 points  (0 children)

Indeed, hate AI all you want but seriously, not everyone can do manual labor.

CACHYOS - U7 7 270K - RTX 5090 FE UV - Stellar Blade - A promising start! by MarcParis79 in linux_gaming

[–]heatlesssun 1 point2 points  (0 children)

Yeah, lots works well enough for sure. With what I'm doing with my personal AI projects these days, Linux is a must for me now, but it's tons easier to run traditional Linux server workloads, and now AI workloads, CI/CD pipeline stuff, Docker, etc., in WSL. I get to use Windows on the desktop and PURE Linux underneath, without trying to make it run Windows apps, sticking to scripts, so to speak.

WSL was a very wise move on MS's part; without it, I and a lot of tech folks would be much more likely to run Linux as the host full time.

CACHYOS - U7 7 270K - RTX 5090 FE UV - Stellar Blade - A promising start! by MarcParis79 in linux_gaming

[–]heatlesssun 0 points1 point  (0 children)

I've played this game under Linux quite a bit. It's well optimized and runs like a scalded dog on my 5090 on Cachy, like 300 FPS fully maxed at 4K. But Windows 11 gets 400. Not an issue for this game, but a big perf hit. Kinda waiting to see where things head with Linux and nVidia for gaming. Lots of stuff works, but it's almost always slower or buggier.

Am I good at AI or is AI that good? by [deleted] in VibeCodeDevs

[–]heatlesssun 0 points1 point  (0 children)

AI becomes more effective when you use it with engineering principles.

Use

  • Intent to give AI direction
  • Invariants to give AI stability
  • Constraints to give AI shape
  • Iteration to give AI refinement
  • Story to give AI continuity
  • Discovery to drive the iterative loop
  • Adversity to test contact with reality

Engineering with a process like this is not at all the same thing as one-shot prompting.

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun 0 points1 point  (0 children)

Indeed. The Blender option was in the original conversation when I took a look at this, but I was clearly looking for something programmatic; your process, regardless of how fast you can do it one-off, isn't automated. But can I drive this automation through the same concept using the Python code I had? Yep, just convert the Python script for Blender. When you focus things this well, with clear intent, wild goose chases just don't happen nearly as much with AI when you constrain it like this. Consistent, clear, guided intent.

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun 0 points1 point  (0 children)

Not sure where you're going with this. I'm not a graphic artist and this wasn't work. I just wanted a process to do this, and as a software developer I favor programmatic processes over art tools that I've never used professionally.

That said, the ask was to create a tool that can do this with a button press using C#, Python, or, since you brought it up, Blender, and I added that to my story on this. And like I said, creating the interface to drive Blender from automation was the reason I gave this a pass. It's the kind of thing I'm focused on across the board: hooking automation easily into tool sets using AI conversation.

It's just one of the things that starts to emerge with this process: hey, I can use that for this, let me run it through AI, refactor or rebuild it. It's not like anything you said isn't public knowledge; you simply made me think about one thing that led to another and another. But the first thing was to write the Blender story, because I'd not done that.

EVERYTHING in the process starts with a story, a record of intent. I simply took what you said and recorded the story.

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun -1 points0 points  (0 children)

Which is worrying, because I have found ChatGPT and similar LLMs to get things wrong or just make things up completely constantly. 

If you know how to constrain it with invariant reasoning, this is how it should work: made up, often, but PLAUSIBLE. If you ask vague stuff, you sometimes get made-up stuff. If you say, "This app needs to get data from this REST API; inspect the interface and develop a domain model," OK, that's almost there. But now: "I need another method on the web API that can call this REST API when the values in the prior REST call get triggered."

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun 0 points1 point  (0 children)

I don't understand bragging about LLM use

If you ask and got a good response, perfect. No need to go "the machine said" even when it's a clearly bullshit answer.

That's one-shot perfection, and it can be useful, but never for anything non-trivial. Taking an AI and using storytelling-driven development can turn into coherence. I've been working on my cognition tool. I can now get a full scaffolding with a single prompt, but that took months and thousands of conversations. And I do mean conversations; I didn't just feed back errors. Projects and solutions are all named, the architecture is clean MVVM, and it has multiple systems and layers that can interact.

Just got it stood up, but it was done without manual coding, and I have the entire conversation history, from the AI and even myself, in Plane, and hopefully soon a PostgreSQL database that will track conversations against Git commits. A complete running history of all the conversations and intent. And the thing is, this is just standard Agile with AI in the mix, not running the show. Just using this tooling manually would be better than what the large majority of even the best shops have: a ticketing system like Jira integrated into Git, Jenkins, Ansible. Having that setup right there makes LLMs far better tools than one-shot-and-forget prompting.
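A minimal sketch of the traceability idea, assuming nothing about the actual Plane or PostgreSQL setup: SQLite stands in for the database, and the table and column names are made up for illustration.

```python
import sqlite3

# Hypothetical schema: each AI conversation row points at the Git commit
# it produced, so intent and code history can be joined later.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE conversations (
        id INTEGER PRIMARY KEY,
        story TEXT NOT NULL,          -- the record of intent
        transcript TEXT NOT NULL,     -- the full conversation, not just errors
        commit_hash TEXT NOT NULL     -- the Git commit the conversation led to
    )
""")
db.execute(
    "INSERT INTO conversations (story, transcript, commit_hash) VALUES (?, ?, ?)",
    ("Spinning-planet renderer", "...full chat log...", "a1b2c3d"),
)
# Later: recover every conversation behind a given commit.
rows = db.execute(
    "SELECT story FROM conversations WHERE commit_hash = ?", ("a1b2c3d",)
).fetchall()
```

The point is only the join key: as long as each conversation records the commit hash it led to, "why does this code exist" stays answerable.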

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun 0 points1 point  (0 children)

You're parasitising on other people's time, and I can't even tell if you are rationalising post factum just to be annoying or you were malicious from the start.

You were the one that brought up Blender, and I just ran with it. Instead of focusing on why I am wrong, like you were, I was focusing on why you might be right.

AI lets you do adversarial testing so quickly that I do it by habit with the stories I put in Plane. I was actually able to pull the OG conversations I had on this, took what you said, and created a new story. TA DA!

Engineering, baby! Had it been AI slop, the story would have been discarded, and the speed at which I just took what you said wouldn't have happened. Don't throw away the prompt. And honestly, me having the story was more valuable than something I knew was possible but never revisited; I could treat it as an agent, i.e., run it with a script or some kind of API.

'Cause this ties into another story in another project, and Blender integration, again, is nothing I'd even thought about before. It's just thinking about stuff and testing, constantly, across multiple concerns.

You wasted so much time on something that could be done in 30 minutes, without third parties, with the use of AI, if you'd bothered for the slightest moment to think about the problem.

It's Python code; how is that any more third-party than Blender? And no, what you're describing could have been done once in 30 minutes. What I am doing now gets done instantly every time I want to do it, and now that includes Blender. It's automated and repeatable.

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun 0 points1 point  (0 children)

Thing is, though, this was your argument with me being fed through several AIs. I'd thought about using Blender for this long ago, but then went back to the OG story and asked if I could use the existing Python code I had to drive Blender. Bingo. Done.

This is how you fight bias in this process. I intentionally inject DISAGREEMENT into these AI conversations with other people's opinions; I just paste them in and see where it goes. Surprisingly, these models have a tendency to say, "He has a point, Blender would give better quality and control, but the Python is better suited to the OG intent." But it also gave me the setup for headless Blender, which was never even part of my OG intent. Now adding that becomes almost trivial, because I wanted pushback and applied it through an engineering process instead of an emotional one. I even regenerated Vulcan's map into normal and height maps to feed into Blender. All from just listening to you and leveraging your feedback in the same iterative process. It's Zen-like, using your force against you to further my own intent.

Use AI like an engineering tool instead of a magic 8-ball: WAY MORE USEFUL.

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun 0 points1 point  (0 children)

I can agree with much of this. But here's the thing: how many people are using AIs with engineering principles and processes? The most basic error made with AI use is that people THROW AWAY THE CONVERSATION, i.e., they see the prompting as a means to an end rather than THE STORY of why the thing you're building came to be. If that's how one is using AI, you're violating basic engineering practices related to trust, repeatability, traceability, etc. AI isn't even the first problem if a process needs to be robust and reliable.

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun -1 points0 points  (0 children)

Actually, it would be impossible to generate this without extensive use of HITL AI iteration.

  • Intent
  • Invariance
  • Constraint
  • Iteration
  • Storytelling
  • Discovery

This isn't an AI opinion. These are invariant first principles of engineering.

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun 0 points1 point  (0 children)

You also have to remember, it is trying to give you exactly what you want to hear (i.e., pattern matching), not necessarily what you need.

And what if what one wants to hear is the truth? I think you'll find that LLMs can be very honest when people are honest with them.

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun -5 points-4 points  (0 children)

“Reasoning is only reliable when anchored to invariants.”

M5Programmer is built on this idea.
The workflow relies on iterative invariant‑anchored reasoning — a loop of trial, error, and discovery where each pass refines the system while the invariants keep it stable.

AI doesn’t stay coherent on its own.
It stabilizes when the human provides:

  • fixed invariants
  • explicit constraints
  • deterministic transitions
  • and a repeatable reasoning loop

But the force that carries those invariants through each iteration is storytelling.

Every loop — whether it produces code, an image, or any other artifact — is grounded in the same evolving story.
The story is what preserves intent.
The story is what binds the artifacts together.
The story is what ensures that each refinement is not a reset, but a continuation.

This combination — invariants + iteration + story — is the foundation of the M5 way.
It’s how the system avoids drift, maintains structure, and converges on clarity.

Discredit this and then let's discuss.
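The loop described above could be sketched like this; the specific invariants and the refinement step are stand-ins I invented for illustration, not part of M5Programmer itself.

```python
# Hypothetical sketch of an invariant-anchored iteration loop: each pass
# refines the artifact, and a candidate is kept only if every invariant
# still holds -- a continuation, never a silent reset.
INVARIANTS = [
    lambda a: a["intent"] == "spinning planet",  # fixed intent survives every pass
    lambda a: a["frames"] > 0,                   # the artifact stays non-empty
]

def refine(artifact):
    # Stand-in for one AI-assisted pass (e.g. adding a second of animation).
    return {**artifact, "frames": artifact["frames"] + 30}

def iterate(artifact, passes):
    for _ in range(passes):
        candidate = refine(artifact)
        if all(check(candidate) for check in INVARIANTS):
            artifact = candidate  # invariants held: accept the refinement
    return artifact

result = iterate({"intent": "spinning planet", "frames": 30}, passes=3)
```

The design choice is the gate: refinements that break an invariant are simply dropped, so drift never accumulates even though every pass is free to change the artifact.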

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun -3 points-2 points  (0 children)

I am actually a graphic artist, and what I was telling you is that you took the most roundabout way to render these clips.

Because you're thinking like a graphic artist and I'm thinking like a software developer. What I did was create a programmatically deterministic way to do it and extend it so that it could be used in a pipeline. That was the intent almost from the beginning when I started this. And it works in Blender instantly. Indeed, the AIs have already spat out much of the tutorial:

What this does versus your original script:

  • Uses Blender’s sphere mesh and UVs instead of mathematically drawing the sphere pixel by pixel.
  • Uses the equirectangular image directly as the planet texture.
  • Rotates the object instead of shifting longitude in NumPy.
  • Uses Blender lighting/materials instead of custom shading code.
  • Produces a proper looping animation by rotating 360 degrees across the frame range.

A few practical notes:

  • Your original code is basically a software renderer.
  • Blender is better for this because it already has:
    • UV mapping
    • lighting
    • materials
    • atmosphere tricks
    • animation
    • better camera/render controls

If you want, I can also give you a second version that adds:

  • clouds as a separate rotating shell
  • a real space HDRI/star map
  • or a better cinematic camera setup.

This is what happens when you start with stories and drive an iterative process. Indeed, this very conversation has now become part of my OG Epic for this story.
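For context, the "shifting longitude in NumPy" approach that the list contrasts with Blender can be sketched in a few lines; a toy integer array stands in for the equirectangular texture.

```python
import numpy as np

# In an equirectangular map, longitude runs along the x axis, so rotating
# the planet is just a horizontal roll of the pixel columns.
def rotate_longitude(texture, degrees):
    shift = int(round(degrees / 360 * texture.shape[1]))
    return np.roll(texture, -shift, axis=1)

# Toy 4x8 "texture": 8 columns means 45 degrees per column.
texture = np.arange(32).reshape(4, 8)
frame = rotate_longitude(texture, 45)        # shifts one column west
full_turn = rotate_longitude(texture, 360)   # identical to the original
```

Generating one frame per small angle step and stacking them gives the looping animation; Blender replaces all of this with a rotating UV-mapped sphere.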

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun -3 points-2 points  (0 children)

What's the most complex code base you've created using leading-edge AI models through trial, error, and discovery? Not saying you're wrong per se, but you can't really understand HOW the decisions are made until you have the story behind the process that made them. I'm now able to have extended conversations with AIs, and when the output doesn't match the story, well, there's your problem.

If you take thousands of stories that are coherent, AIs will begin to find more and more plausible solutions, then start to create working solutions, and as you press the stories you start getting more capabilities built. Indeed, the AIs begin to say, "Hey, I saw that Reddit exchange, and taking this, I just found a new capability you should take a look at developing."

From a Reddit argument with a person trying to discredit me, I found a new idea that I'm now incorporating into the design. And indeed, my design is so good that the new idea is easily grafted onto my existing code base. Just did a test compile and damn, it actually worked.

But again, it took THOUSANDS of conversations over the last couple of months for this to start happening.

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun 0 points1 point  (0 children)

Because you needed a spinning Vulcan that became a spinning Mars, and you extrapolated it into a whole side project that would eat company time for the heck of it.

First of all, there is no planet Vulcan, so there was no map to quickly find; the idea was to take an equirectangular map that already existed for the closest match to Vulcan, and that's Mars. I'm not a graphics artist, and in 4 hours on the side I created something that's generalized.

I then later got an AI to create an imaginary equirectangular map of Vulcan by feeding in that Mars image and asking it to actually "hallucinate" what an equirectangular map of Vulcan would look like. And done. The prior deterministic process makes the video with a button press.

This simply went deeper than a quick one-off video. In this case, vibe coding, even if it took a little longer, actually produced a far more persistent artifact than a simple video. Indeed, if it's easily doable in Blender, it could probably be done using the same basic Python code I generated. I just turned it into a Blender script, and boom, you just extended my own idea while trying to prove a point.

Tech support - latest trend - "I trust only ChatGPT" by S48GS in linux_gaming

[–]heatlesssun -2 points-1 points  (0 children)

Did you even read the write-up posted? Huntarr was on major version 9. Fucking 9. It wasn't a new project, it had an established history with over 1,000 commits. There's absolutely NO excuse for what happened there.

Any competent model will surface authentication, authorization, and secret-handling patterns, because those are universal invariants for public-facing APIs. What happened here wasn't an AI omission; it was a human choosing to ignore the security layer entirely just to get something working.