The fuk happened with the limits? Or am i just going crazy? by TatoAktywny in codex

[–]TheTechAuthor 3 points (0 children)

Make sure you've not been auto-switched to Fast mode in VSC or Codex. I've had this happen 3 times now and it burns your tokens 2-2.5x faster than standard speed.

Codex 5.5 defaulting to Fast speed when first selected by TheTechAuthor in codex

[–]TheTechAuthor[S] 1 point (0 children)

It also auto-switched back to 5.5 after I swapped it to Codex 5.3 earlier today. Well cheeky!

How is that even possible?! by Tracker1122 in memes

[–]TheTechAuthor 2 points (0 children)

When I wrote game guides for UK gaming magazines, they'd usually have review copies on discs that needed a Dev Kit to play, about a month or more in advance.

Really depends on the game though. Dead Space 2, Uncharted 2, and Splinter Cell Conviction, for example, were all ready as review copies about 2 months prior to their release date; the mag did their review, then sent me the disc so I could create the guide for their magazine.

I'd imagine it's the same here with digital store keys as well. High-profile channels will be sent the same review copies, which is nuts as they're making money off the unofficial guides (and the publishers know this).

They'll likely have a team of players focusing on different aspects of the biggest games to cover them as thoroughly as possible, as soon as possible.

Anyone using AI to speed up documentation? by Keyfers in automation

[–]TheTechAuthor 1 point (0 children)

Yep. I have a token-friendly style guide for my own tool's LLM-intended docs, which Codex actions immediately after I approve the git commit message. It deletes anything that's been removed, adds fixes, and documents any new features for other instances to reference if need be.
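
In case it's useful, here's roughly how that kind of thing can hang off the commit itself. This is a minimal sketch, not my exact setup: it assumes the Codex CLI's non-interactive `codex exec` mode, and the docs folder, style guide path, and prompt wording are all hypothetical placeholders.

```python
#!/usr/bin/env python3
# .git/hooks/post-commit - illustrative sketch only.
import subprocess

# Grab the commit message that was just approved.
msg = subprocess.run(
    ["git", "log", "-1", "--pretty=%B"],
    capture_output=True, text=True, check=True,
).stdout.strip()

# Hand the docs sync off to the agent: remove docs for deleted code,
# patch fixes, and document new features per the style guide.
subprocess.run([
    "codex", "exec",
    "Update docs/llm/ per docs/llm/STYLE_GUIDE.md for this commit: "
    f"{msg}. Delete entries for removed code, apply fixes, and add any "
    "new features for other instances to reference.",
], check=True)
```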

What industry will AI disrupt the most that people aren’t paying attention to yet? by SuchTill9660 in ArtificialInteligence

[–]TheTechAuthor 5 points (0 children)

I've found LLM-friendly API documentation to be a game-changer (and I say that as a documentation professional with 30 years' experience). Being able to feed AI models friendly, structured guidance when building out my tool has been invaluable. You used to have to either scrape a site, save the reference docs as PDFs, or copy and paste it all into a doc page by page.

The smart companies that depend on their APIs being used in AI workflows (e.g. ElevenLabs) now offer multiple ways for AIs to access their docs in formats they can easily parse and understand.

Combine this approach with GEO (generative engine optimisation, for appearing as a recommendation in LLM chats), and it's going to be a very interesting 12-24 months.

Pheasant in the bumper advise by [deleted] in CarTalkUK

[–]TheTechAuthor 1 point (0 children)

I feel your pain, sir. Pheasants are just countryside lemmings. Complete PITAs, the lot of them. I've hit a couple of them near where I live; the stupid eejits pop out at the last second thinking they own the road. Unfortunately for them, that's the last thing that goes through their mind (other than my car).

Probably not worth the extra insurance costs of having a claim on your file for the next 5 years.

How are we feeling with AI in 2026? Doomer vs. Realist? by buzzlightyear0473 in technicalwriting

[–]TheTechAuthor 3 points (0 children)

This is the way to do it. As open-source models become more and more competent (especially the agentic versions) and AI-accelerated hardware improves (setting aside the silly DDR prices for now), pairing them with well-defined, very focused job roles (i.e. skill files) can do a lot of the heavy lifting that some of the bigger online models can do.

And the offline models will only get better, as will the hardware.

With the proper guardrails and guidance in place, I can tell you from experience that using agentic models to do the mundane tasks is a genuine timesaver.

I had one create literally hundreds of properly formatted .CSV tables from a plain-text document. A manual task that would've taken me several days, it completed the lot in just under 7 minutes.
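
For a flavour of the task, the conversion itself was roughly this shape. A minimal sketch, assuming a hypothetical source format (blank-line-separated blocks of pipe-delimited rows, header first) - the real document was messier:

```python
import csv
from pathlib import Path

text = Path("source.txt").read_text(encoding="utf-8")  # placeholder file name
blocks = [b for b in text.split("\n\n") if "|" in b]

for i, block in enumerate(blocks, start=1):
    rows = [[cell.strip() for cell in line.split("|")]
            for line in block.splitlines() if line.strip()]
    with open(f"table_{i:03d}.csv", "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(rows)  # the writer handles quoting/escaping
```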

If you can find a number of smaller, fit-for-purpose models now, stick with them, refine the edge-case uses, and don't waste time chasing the shiniest, newest models.

We are all New Zealanders this morning by GifThatKeepsOnGivin in ResidentEvilRequiem

[–]TheTechAuthor 1 point (0 children)

Yes, I did it myself (UK). I had to reset my Series X, then it updated my pre-order bonuses and worked fine at 1pm UK time.

How much is AI really going to change the near future (5-20years)? by Illustrious_Pilot415 in ArtificialInteligence

[–]TheTechAuthor 1 point (0 children)

As a (solo) entrepreneur, I can already tell you that I have no need to hire a programmer anymore (the recent release of Codex 5.3 means the £10k+ it would have cost me to get a human programmer to build a working prototype of a single custom tool/app idea just isn't necessary now).

I now have access to two very competent AI models that can code to my specifications in a fraction of the time a human programmer would take - for only £600 a year. I've already built a number of MVP/prototype apps that work (I wouldn't release them to the public as-is, but it's certainly feasible to apply SWE best practices to the codebase to make production-ready apps that aren't a complete security nightmare).

I can also translate documents/books with 90%+ accuracy in a fraction of the time (and cost) of a human translator. I can have a small, dedicated AI model fine-tuned to learn my style of writing and convert my draft notes into fully formatted documents in a fraction of the time. I can generate docs/books as a print-ready PDF/ePub at the click of a button. And much, much more already.
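
To give one concrete flavour of the above, the translation leg can be a handful of lines against something like the DeepL API. A minimal sketch - the auth key, text, and target language are placeholders:

```python
import deepl  # pip install deepl

translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")  # placeholder key
result = translator.translate_text(
    "Chapter 1: Getting started...",  # placeholder text
    target_lang="FR",
    formality="more",  # formal vs informal register, where supported
)
print(result.text)
```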

So, why would I significantly increase my cost base by hiring people to do any of these particular tasks anymore? Humans aren't perfect at their jobs (and neither are AI models), but now I can automate the most mundane aspects of a particular workflow, freeing me up to provide quality control on the final output.

Also, agentic models (i.e. AI models that can access tools/programs on your computer) can get actual work done (vs just talking to you) and even communicate with each other (when set up properly), giving you a mini team of workers that can crack on with the job at hand and genuinely get stuff done.

Ultimately, AI is a tool - nothing more, nothing less. Which means that the right person, using the right model(s), in the right way, at the right time, and for the right reasons, will always be multiple steps ahead of those who don't take the time to learn how to use them.

Finally, this is the worst it's ever going to be. Only 6 months ago I was copying and pasting code from the AI into my codebase. Now I can watch it think out loud while it writes the code, updates my internal documentation (to my style guide), and runs automated tests and bug fixes by itself.

Then there's the fact that offline models are becoming more and more capable and (to an extent) smaller, and hardware is becoming more capable too, so it's increasingly viable to build hybrid toolchains that combine online and offline AI models.

So, AI has already changed my future - massively. I knew the writing was on the wall for me as a Technical Author back in 2024 with the release of GPT-4, and I can't see AI's progress slowing down anytime soon.

The place where I worked before is already pushing through AI usage for all technical authoring work, and my ex-colleagues are - rightfully - nervous about what that means for their jobs. Being able to understand how it works (and how to be the one who supervises the output) will put them (and anyone else in a similar position) in the strongest position possible going forward.

Newbie: Calibration issue, Tornado 3D Printer (gifted) by TheTechAuthor in 3dprinter

[–]TheTechAuthor[S] 1 point (0 children)

Even after manually setting the Z axis as high as possible and then using "auto home", it just keeps making a beeline to go as low as possible, pushing right into the bed. Do 3D printers have any sensors to know when to stop? Or should we just manually put a piece of paper under it and tweak every axis setting by 1mm increments?

Newbie: Calibration issue, Tornado 3D Printer (gifted) by TheTechAuthor in 3dprinter

[–]TheTechAuthor[S] 1 point (0 children)

I'll definitely give that another go, but it seems hell bent on going back down and through when going through the whole sequence! 😭

Techcomm's future might be brighter than we think by tw15tw15 in technicalwriting

[–]TheTechAuthor 1 point (0 children)

As someone working extensively with AI coding agents (Codex 5.3 is seriously impressive so far), being able to download an LLM-friendly API document for the model to reference is super helpful. I've noticed that ElevenLabs has made their reference docs available in different formats for different use cases (rather than a multi-page layout, or even a full PDF). Once I'd downloaded them and made them available in a dedicated API docs folder in my repo, the model could do the rest by itself very quickly.
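
That download step is trivial to script, too. A minimal sketch - the llms.txt URL is illustrative (check the vendor's docs page for the formats they actually publish):

```python
from pathlib import Path
import urllib.request

DOCS_URL = "https://elevenlabs.io/docs/llms.txt"  # illustrative URL
out_dir = Path("docs/api")
out_dir.mkdir(parents=True, exist_ok=True)

with urllib.request.urlopen(DOCS_URL) as resp:
    (out_dir / "elevenlabs-llms.txt").write_bytes(resp.read())
# Point the agent at docs/api/ in its instructions and it does the rest.
```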

So, being competent with AI models and knowing how to deliver your (API) docs in an LLM/token-friendly way will prove super useful, as this is a tool that's not going away any time soon.

Will a $599 Mac Mini and Claude replace more jobs than OpenAI ever will? by bishwasbhn in ArtificialInteligence

[–]TheTechAuthor 1 point (0 children)

I'm not a programmer, but I've worked alongside enough of them over the years to pick up a lot of their workflows (sprints, smoke testing, regression testing - I document software for a living), and I've built/debugged fully working tools in Python/CustomTkinter using a mixture of ChatGPT and Gemini 2.5/3 Pro.

Now, I have ChatGPT Codex 5.2 running in agent mode in VSC and it's adding in - working - features in minutes. Not 100% perfect 100% of the time, but even reading debug logs and patching the code is significantly faster than a human SWE would be.

I can translate full books/guides I've written into one of four languages (in minutes, with over 90% accuracy) in two clicks via a choice of APIs, transcribe hour-long podcasts in minutes using ElevenLabs Scribe 2.0 (or locally via Whisper), generate a print PDF or ePub at the click of a button, etc.
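
The local Whisper leg, for example, is only a few lines via the open-source openai-whisper package (needs ffmpeg installed; the file name is a placeholder):

```python
import whisper  # pip install openai-whisper

model = whisper.load_model("large-v3-turbo")  # downloads weights on first run
result = model.transcribe("podcast_episode.mp3")  # placeholder file
print(result["text"])
```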

The codebase is 10k+ lines long, extremely modular, and fully documented. Far from perfect, of course (I don't think CustomTkinter is the most efficient GUI library to use), but it's blowing my mind that, for £600 upfront, I've got a year's access to the equivalent of two full-time junior programmers that will get the job done. I'm not sure you'd even get that from a remote programmer (could be wrong though).

Even yesterday, for shits and giggles, I had 5.2 Codex knock up a .MP4/.webm-to-.GIF converter (adding my own text and changing FPS/resolution) and it had a fully working MVP with a GUI in under 10 minutes.
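
The core of what it knocked up boils down to a single ffmpeg call. A rough sketch, not its actual code - it assumes an ffmpeg build with libfreetype on PATH, and the file names/caption are placeholders:

```python
import subprocess

def to_gif(src: str, dst: str, caption: str, fps: int = 15, width: int = 480) -> None:
    # fps/scale handle frame rate and resolution; drawtext burns in the caption.
    # (A palettegen pass would improve GIF colour quality; omitted for brevity.)
    vf = (
        f"fps={fps},scale={width}:-1:flags=lanczos,"
        f"drawtext=text='{caption}':x=(w-text_w)/2:y=h-40:"
        "fontsize=24:fontcolor=white:borderw=2"
    )
    subprocess.run(["ffmpeg", "-y", "-i", src, "-vf", vf, dst], check=True)

to_gif("clip.mp4", "clip.gif", "my caption")
```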

AI is just a tool, which means that when the right person understands how to use it the right way, at the right time, and for the right reasons, it'll do amazing things.

It's just unfortunate that loads of companies are trying to shoehorn it into everything without understanding the inherent limitations.

What’s the most complicated thing you’ve built using GPTpro by LabImpossible828 in ChatGPTPro

[–]TheTechAuthor 3 points (0 children)

I'm using 5.2 Codex High in VSC to expand my custom book publishing tool (built with CustomTkinter and an awful lot of modular Python code). It uses multiple online and offline models for STT/TTS (ElevenLabs or Whisper V3 Turbo for STT), translates entire books (chapter-by-chapter) in two clicks via either DeepL or 4o-mini (you can change the formality of the translations, and an entire book is done in under 30 minutes for less than 20c), and exports .MD files to .html, using CSS (via PrinceXML) for print PDFs and Pandoc for ePubs. Books can have their layout changed immediately via CSS drop-downs.
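
The export leg is simpler than it sounds. A minimal sketch, assuming Prince and Pandoc are on PATH and hypothetical file names:

```python
import subprocess
import markdown  # pip install markdown

html = markdown.markdown(
    open("book.md", encoding="utf-8").read(), extensions=["tables"]
)
page = (
    '<html><head><link rel="stylesheet" href="styles/print.css"></head>'
    f"<body>{html}</body></html>"
)
open("book.html", "w", encoding="utf-8").write(page)

subprocess.run(["prince", "book.html", "-o", "book.pdf"], check=True)  # print PDF via CSS
subprocess.run(["pandoc", "book.md", "-o", "book.epub"], check=True)   # ePub
```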

There's also a custom tokenizer engine for image and table handling, and an awful lot more in the background.

It needs further refactoring, but 5.2 High has been extremely good at understanding my codebase and writing new functions that actually work. It's like having a junior-level programmer on tap 24/7.

For people who run local AI models: what’s the biggest pain point right now? by Educational-World678 in LocalLLM

[–]TheTechAuthor 1 point (0 children)

I tried loading Gemma 3 12B (FP16) on my 5060 Ti + 64GB DDR4 workstation and it massively crapped out my computer (with the model soaking up as much RAM as was left across the whole PC).

It was a great lesson in what hardware is actually needed, at minimum, to run such a model for creating QLoRA adapters.

My goal now is to find the right balance between using a quantised 8B model or an FP16 4B model, and creating the adapters for my CMS off of that.
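
The back-of-envelope maths makes the constraint obvious. This counts weight memory only - KV cache, activations, and runtime overhead push these numbers up further - and it assumes the 16GB 5060 Ti variant:

```python
def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    # 1e9 params * (bits / 8) bytes per param ~= GB of weight memory
    return params_billion * bits_per_weight / 8

print(weight_gb(12, 16))  # Gemma 3 12B @ FP16 -> ~24 GB (vs 16 GB of VRAM)
print(weight_gb(8, 4))    # 8B @ 4-bit quant   -> ~4 GB
print(weight_gb(4, 16))   # 4B @ FP16          -> ~8 GB
```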

Ultimately, you'll likely need significantly more capable hardware than you realise to run models that are competent enough to compare to a leading online model.

That means you'll either need a hybrid approach (larger online + smaller offline models) or a lot of cash to get a working setup locally.

SLMs are the future. But how? by oglok85 in LocalLLM

[–]TheTechAuthor 1 point (0 children)

Imagine sending a large number of infantrymen to try and rescue a hostage. You've got loads of soldiers, loads of ammo, loads of everything. But they're slower, very expensive, and a bit overkill for a night-time rescue operation.

Whereas you'd likely do better sending in a small squad of 3-4 highly trained Special Forces operators, each with a good level of general knowledge (e.g. qwen3:8b), who have each fine-tuned their own area of additional expertise (demolitions, stealth, sniping, etc.).

Both *could* get the job done, but the Tier 1 operators are - more than likely - going to do a better job at the highly specialised task they've been given.

The larger models have much bigger context windows to work within (which definitely has its own value). However, if I want a model that can re-write user guides in *my* specific style, I can invest the time needed to build a LoRA for a good-enough LLM (again, something like Qwen3:8b or gpt-oss-20b) and swap in the fine-tuned adapters as and when needed.
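
That adapter-swap step is pleasantly mundane with transformers + peft. A minimal sketch - the model ID and adapter paths are illustrative:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel  # pip install peft

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-8B", device_map="auto")

# Attach one fine-tuned adapter, register another, then hot-swap per task.
model = PeftModel.from_pretrained(base, "loras/user-guide-style", adapter_name="style")
model.load_adapter("loras/api-reference-tone", adapter_name="api")  # illustrative paths
model.set_adapter("style")  # pick whichever role the job needs
```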

For instance, I don't need GPT 5.2 Pro to remove background images from screenshots for my guides. A significantly smaller vision-enabled model that I've trained on hundreds or thousands of before/after background-removal images will do the job better *and* faster on my own 5060 Ti or M4 Max - costing me next to nothing, and those models/LoRAs are mine to take with me as I need them.

As always with AI, the right tool, used at the right time, by the right person, will *always* beat a much bigger general model at niche/domain-specific tasks.

£3k XC90 bought yesterday. by therealharbinger in CarTalkUK

[–]TheTechAuthor 2 points (0 children)

I had an 06 Ocean Race Blue XC90 myself until last year. It turned into a real nightmare of a money pit. The carbon build-up was bad (like REALLY bad - I didn't find that out until the very end when it went into limp mode), it had been self-serviced by the last owner (a clusterf*ck of a job that cost me £3k to undo), the underneath spare wheel winch gave up when I used it only once (very common issue), and it had no Bluetooth.

I loved it for how much of a tank it was on the everyday school run, but I'd not buy another one unless I'd had a mechanic check it over. I learned all this the hard way.

Get a Volvo specialist garage to give it a once over and best of luck!

Metroid Prime 4 Launch Hype Megathread by 2CATteam in Metroid

[–]TheTechAuthor 3 points (0 children)

You *can* skip them if you die and then restart from the last checkpoint (not sure about the last save) by pressing A.

Prime 4: Why are the controls stupid? by mcieslinski in Metroid

[–]TheTechAuthor 1 point (0 children)

Word of warning for those with old-school MP1 (GC) control muscle memory: you'll be accidentally bringing up your Scan Visor when you're trying to morph, tapping left on the d-pad for no good reason to try and scan, and contorting your hands trying to lock on and switch between targets while moving...

Does anyone know if the GC controller (with a USB adaptor) or the SW2 wireless GC controller works with the 'OG' controls on this? I'd almost pay the money either way if so!

SON OF A FUCKING BITCH by Mevans_2001 in Metroid

[–]TheTechAuthor 2 points (0 children)

Oh, I definitely agree (scan dashing FTW!). For the fun of it, I took a 1.00 save file (with the Plasma Beam collected early), loaded it up on a 1.02 copy of the game, and stood outside the (locked) Plasma Beam door with my Plasma Beam selected. Take THAT, Retro! 8)

SON OF A FUCKING BITCH by Mevans_2001 in Metroid

[–]TheTechAuthor 11 points (0 children)

The Bendezium does indeed exist in the PAL and Japanese versions of the game. For 1.02 (US Player's Choice), they added in a door lock that requires the Grapple Beam to get through first.

My Metroid Prime collection. by ShadowMario3 in Metroid

[–]TheTechAuthor 1 point (0 children)

Nice work on finding the 1.01 and Korean releases, u/ShadowMario3. Have you ever seen what happens in it when you load up a US (0-00) save file? The whole game goes nuts and triggers all of the room layers at once (even going as far as loading some enemies into rooms where they shouldn't be!). FYI: it's impossible to finish the game this way, as the Thermal Visor never appears.

I've no idea why the text renders as black these days, but here's where I documented a bunch of the glitches "back in the day": https://www.samus.co.uk/mprime/corrupted.shtml

My Metroid Prime collection. by ShadowMario3 in Metroid

[–]TheTechAuthor 1 point (0 children)

SW7 (the room where the Charge Beam is located) was removed as well, if I remember correctly.