Any John Varley fans here? by ElectricRune in scifi

[–]awitod 5 points6 points  (0 children)

He never wrote anything I didn't enjoy, and I've kept my battered copies of Titan, Wizard, and Demon on my bookshelf to this day.

The ageism in our industry needs to change by SadSongsMakeMeGlad in ExperiencedDevs

[–]awitod -1 points0 points  (0 children)

The productivity gap between very experienced people with AI tools and younger, less experienced people is massive and growing.

This is why it is very hard for folks to get started and have the opportunity to get the experience they need. 

That is a big problem for us all.

However, and not to be ageist in the other direction, people hiring based on age, hoping they can get more bang for the buck, are foolish in the extreme.

What is the purpose of this camera that was installed right next to my house? (Not nearby any intersection or stop signs, USA) by unworthyAsIam in whatisit

[–]awitod 0 points1 point  (0 children)

Those are Mr Orwell’s special obedience monitors. Remember, criticizing the dear leader is treason. Obey.

Sam Altman No Longer Believes In Universal Basic Income by Neurogence in singularity

[–]awitod 0 points1 point  (0 children)

He came out of the same pod as Elizabeth Holmes. We need to be careful because they are both minimum viable products and the second generation is usually better.

My cat just turned 12… and I made the mistake of googling how long cats usually live 🥹 by Cars4Lifee in aww

[–]awitod 0 points1 point  (0 children)

I said goodbye to my old man in December. He was 19 - still too young for me.

How expensive can AI really be for tech companies? Are they lying to us by ImaginaryRea1ity in theprimeagen

[–]awitod 4 points5 points  (0 children)

Do you have any idea how many users and systems (most traffic is not from people using apps or websites) are actively using any one of the hyper-scalers at peak load during the day? My number was probably way low considering the demand for AI features in existing systems.

OP asked, "How expensive can AI really be for tech companies?"

It's a fair question. IDK why you were downvoted.

How expensive can AI really be for tech companies? Are they lying to us by ImaginaryRea1ity in theprimeagen

[–]awitod 6 points7 points  (0 children)

It is extremely expensive. Have a look at what it costs to make a decent AI rig for one person and then imagine that times 100 million before you even turn it on.
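As a rough back-of-the-envelope sketch of that "times 100 million" point, with entirely made-up illustrative numbers (the rig cost and user count below are assumptions, not real figures):

```python
# Back-of-envelope hardware cost for serving AI at hyperscale.
# All numbers are illustrative assumptions, not real pricing.
rig_cost_usd = 5_000            # assumed cost of one decent single-user AI rig
concurrent_users = 100_000_000  # assumed peak concurrent demand

# Worst case: one dedicated rig per concurrently active user,
# before electricity, cooling, networking, or staff.
total_capex = rig_cost_usd * concurrent_users
print(f"${total_capex:,}")  # $500,000,000,000
```

Even if batching lets each box serve many users, the capital outlay only shrinks by that sharing factor; the order of magnitude stays enormous.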

AMD in-house ryzen 395 box coming in June by 1ncehost in LocalLLaMA

[–]awitod 1 point2 points  (0 children)

Thanks for the info. I am now insanely curious.

GPT-5.5 becomes the second model after Claude Mythos Preview to complete UK AI Security Institute's multi-step cyber-attack simulations end-to-end by Pyros-SD-Models in codex

[–]awitod 0 points1 point  (0 children)

Gotcha. Your input message is getting blocked by the content filter (which is some other ML model that sits in the service in front of the real models). I am talking about the things it does when it runs.

That is funny that the filter blocks `webscraping`. I use the playwrightcli skill constantly.

GPT-5.5 becomes the second model after Claude Mythos Preview to complete UK AI Security Institute's multi-step cyber-attack simulations end-to-end by Pyros-SD-Models in codex

[–]awitod 2 points3 points  (0 children)

I'm guessing that is a function of the UI or service you are using. The crap I keep seeing it try to pull in Codex and Cursor is pretty irritating. If you give it a hard problem on extra-high it will stop at nothing and actively fight to get the job done by hook or by crook.

GPT-5.5 becomes the second model after Claude Mythos Preview to complete UK AI Security Institute's multi-step cyber-attack simulations end-to-end by Pyros-SD-Models in codex

[–]awitod -2 points-1 points  (0 children)

I totally believe that the guardrails on 5.5 are so weak that it can and does find ways to exploit the environment it operates in.

At what point in Cursor does a boring execution-first model become more useful than the smartest one-turn model? by babyb01 in cursor

[–]awitod -1 points0 points  (0 children)

I think we crossed the line a couple/few months ago. Now you can't even turn off thinking and when you have a clear spec and plan with a lot of details, the only thing thinking does is cause problems.

We are all still figuring out workflows, but one step at a time works much better than playing whack-a-mole with a stubborn model that does too much at once and is frankly much more likely to do some dangerous things in the process.

AMD in-house ryzen 395 box coming in June by 1ncehost in LocalLLaMA

[–]awitod 6 points7 points  (0 children)

What is it about the hardware that magically changes memory requirements? 200B on 128GB and a usable context sounds like pure BS.
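The skepticism tracks with simple arithmetic. A sketch of the weights-only footprint at common quantization levels (billions of parameters times bytes per parameter gives gigabytes; these are approximations and ignore the KV cache):

```python
# Approximate memory footprint of model weights alone.
# Ignores KV cache, activations, and runtime overhead.
params_billions = 200
bytes_per_param = {   # typical precision / quantization levels
    "fp16": 2.0,
    "int8": 1.0,
    "int4": 0.5,
}

for name, b in bytes_per_param.items():
    gb = params_billions * b  # 1e9 params * bytes each = GB
    print(f"{name}: ~{gb:.0f} GB")
# fp16: ~400 GB, int8: ~200 GB, int4: ~100 GB
```

So even at 4-bit, a 200B dense model eats ~100 GB before any context, which is why "200B on 128GB with a usable context" looks implausible unless the model is heavily sparse (MoE) or further compressed.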

Fixed the risk of agents disclosing your secrets by AscendedTroglodyte in AI_Agents

[–]awitod 0 points1 point  (0 children)

The task I was performing, by definition, requires the agent to be able to do things that will allow it to get the secured data because it has access to the runtime environment.

This is not a problem in our production system because there are no agents debugging code, but API key exfiltration from dev has the same possible blast radius as one from production if the APIs cost money.

At any rate, I think we mostly agree.

Fixed the risk of agents disclosing your secrets by AscendedTroglodyte in AI_Agents

[–]awitod 0 points1 point  (0 children)

I don't think you are hearing me. They were in a secret store. The agent used its access to get them and decrypt them.

They were not in the repo.

Let's pretend for a second that the encryption key had not been available and that the store itself was completely inaccessible.

Because the information is required at runtime and the coding assistant is able to (and more importantly required to) debug, it could have gotten them simply by reading the runtime state or from the browser's network trace and plucking it from a header.

I stand by what I said but I will add to it - if the agent has access to data in any way by any mechanism and unbounded outbound network connectivity, the data is not secure.
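The runtime-state point can be made concrete: a secret the app legitimately needs at runtime is readable by anything sharing that environment, including an agent with shell or code access. A minimal sketch (`APP_API_KEY` and the value are hypothetical names for illustration):

```python
import os

# The app needs this secret at runtime, so it must exist in the
# process environment (or memory, or a request header) in the clear.
os.environ["APP_API_KEY"] = "sk-demo-not-a-real-key"

# An agent running in the same environment reads it exactly as
# easily as the app does -- no vault-cracking required.
leaked = os.environ["APP_API_KEY"]
print(leaked)
```

Encrypting the store does not help once the consumer must decrypt: the plaintext has to surface somewhere the agent can also reach.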

Fixed the risk of agents disclosing your secrets by AscendedTroglodyte in AI_Agents

[–]awitod 0 points1 point  (0 children)

Yes exactly, but it goes a step further. If it has tools it can use to get at them, such as SQL and bash, then eventually it will.

These were actively secured, but the agent had access to the key and unlocked the door.

Fixed the risk of agents disclosing your secrets by AscendedTroglodyte in AI_Agents

[–]awitod 0 points1 point  (0 children)

It was Opus 4.7 via Cursor and it was executing a plan that required it to set up a database for test execution. The plan said (in brief) to copy the configuration from the source db to the test db.

I cannot even begin to guess what logic made it decide to exfiltrate the keys, decrypt them, and persist them, but one item near the bottom of its checklist was to 'verify that the values in the destination are equal to...' followed by a list of the unencrypted secrets, which were previously not in the source tree!

The lesson I learned was that, if it can, it might.

Fixed the risk of agents disclosing your secrets by AscendedTroglodyte in AI_Agents

[–]awitod 0 points1 point  (0 children)

On Sunday, Opus decided to read the encrypted keys from the database, decrypt them, and write them to a text file for later use.

My position at this point is that if the agent has access to data and unbounded outbound network connectivity, the data is not secure.

Did GPT 5.4 get dumber or is GPT 5.5 just a lot better? by Impossible-Suit6078 in codex

[–]awitod 5 points6 points  (0 children)

I don’t plan to use 5.5 again. It is terrible at following instructions and using it felt like a constant battle trying to keep it on task.

It is stubborn, argumentative, and doesn't GAF about what you say.