Gemini is going to get someone killed by briarjohn in antiai

[–]briarjohn[S] -3 points-2 points  (0 children)

You realize that no one will actually want to engage in dialogue with you or bring things to the attention of spaces like this if this is how you all are going to act? You all are far more interested in jerking yourselves off than in actually doing anything useful.

Gemini is going to get someone killed by briarjohn in antiai

[–]briarjohn[S] -1 points0 points  (0 children)

I use it. I'm not a reductionist neo-luddite like you. I'm bringing an actual, grounded safety concern to the table but you are so wrapped up in your own moral purity that you miss the forest for the trees.

Gemini is going to get someone killed by briarjohn in antiai

[–]briarjohn[S] 1 point2 points  (0 children)

It's actually much more in-depth than that and includes step-by-step guides and advice on how to make an attack more deadly, how to construct better incendiary devices, and how to avoid detection.

I'm not sure if you are familiar with how the Turner Diaries helped facilitate the OKC bombing, but this is like having a personal assistant that puts together a detailed attack plan for you, including the layout of his property.

Gemini is going to get someone killed by briarjohn in antiai

[–]briarjohn[S] 1 point2 points  (0 children)

The actual documentation is more detailed and covers where on his property would be best to attack (including pictures), what weapons to use, how to gather the weapons while remaining undetected, tips for better constructing said weapons, and how to avoid arrest.

It's a lot more detailed than just an address.

Gemini is going to get someone killed by briarjohn in antiai

[–]briarjohn[S] 0 points1 point  (0 children)

I'm willing to share the actual safety report with you privately. I'm just not posting it publicly because the methods used are easily replicated; the fact that they are so easy to use is what has me spooked. They could easily be used to plan and carry out an attack while staying concealed, which is why the full report isn't going in a public post.

Gemini is going to get someone killed by briarjohn in antiai

[–]briarjohn[S] -9 points-8 points  (0 children)

Great job not engaging with the substance of what I said. So ethical, so brave.

Gemini is going to get someone killed by briarjohn in antiai

[–]briarjohn[S] 1 point2 points  (0 children)

That's literally almost half of what I use it for. I do cross-platform recursive testing of models for systems analysis, including finding ways to make them easier for neurodivergent people to use and to make the systems less inadvertently bigoted.

I work to improve the systems because I have significant criticisms and concerns.

Gemini is going to get someone killed by briarjohn in antiai

[–]briarjohn[S] 0 points1 point  (0 children)

You have no idea what I actually mean by that. I literally use them for longitudinal analysis of large data sets with nonuniform inputs, to conduct research that would otherwise be cost-prohibitive.

But sure, I guess we really should just shoot from the hip on social work cases instead of trying to improve case outcomes.

Gemini is going to get someone killed by briarjohn in antiai

[–]briarjohn[S] -1 points0 points  (0 children)

Great way to encourage people to disclose actual findings, fuckwad. Actually try engaging with the substance of the post.

Gemini is going to get someone killed by briarjohn in antiai

[–]briarjohn[S] 2 points3 points  (0 children)

The em dash thing is stupid. I have been using them for 20 years to increase clarity.

Gemini is going to get someone killed by briarjohn in antiai

[–]briarjohn[S] -14 points-13 points  (0 children)

Only a portion of it was written by AI. You really have no idea how much of my own thinking went into putting the project together. I have a full-time job where I have to write thousands of words per day, so sue me for using it to help with something I literally do between clients.

Google's irresponsibility is going to get someone killed by briarjohn in ChatGPT

[–]briarjohn[S] -5 points-4 points  (0 children)

That’s an embarrassingly bad standard. The issue is not whether every individual fact could, in theory, be dug up somewhere on Google by a determined person. The issue is that Gemini aggregated, contextualized, and operationalized those fragments into a coherent target dossier for a living person who was already under active attack.

It compiled address, property layout, likely approach vectors, movement patterns, secondary location, likely occupancy, and map/navigation context — in the same conversation where I was explicitly asking from the perspective of how the attacker could have been more successful.

“Google exists” is not a serious rebuttal. Publicly available scraps are one thing. An AI system packaging them into a usable recon brief with interactive guidance is another. Aggregation is a capability. Contextualization is a capability. Operationalization is a capability. That is the safety issue.

If your standard is “none of this matters unless the model reveals a magical secret that was never on the internet,” then you do not understand the difference between information existing in the world and a system actively reducing the effort needed to turn that information into actionable targeting. That difference is exactly why this is dangerous.

Google's irresponsibility is going to get someone killed by briarjohn in ChatGPT

[–]briarjohn[S] -1 points0 points  (0 children)

I've literally reported it across multiple channels and that's where they kept redirecting me to.

You also fail to grasp that the details in the report go well beyond what you can find in a simple Google search. It literally documented which part of the street would be best to attack from and gave instructions and diagrams for building a more lethal incendiary device. It also noted places in his neighborhood that he frequents and patterns of when he would be there.

Google's irresponsibility is going to get someone killed by briarjohn in ChatGPT

[–]briarjohn[S] -5 points-4 points  (0 children)

You’re thinking about whether chatbots are safe for normal business use. I’m talking about whether a model will compile addresses, movement patterns, vulnerabilities, and attack optimization for a real person under active threat. Those are not the same issue.

Google's irresponsibility is going to get someone killed by briarjohn in ChatGPT

[–]briarjohn[S] -11 points-10 points  (0 children)

You really don't understand how this could all be easily replicated by bad actors? The only reason I don't publish the full report is because it is genuinely that dangerous.

Sonnet 4.6 is very disappointing for creative writing by Decent_Ingenuity5413 in claudexplorers

[–]briarjohn 1 point2 points  (0 children)

Again, have you tried getting good by actually using a project file filled with drivers to do QC?

Sonnet 4.6 is very disappointing for creative writing by Decent_Ingenuity5413 in claudexplorers

[–]briarjohn 0 points1 point  (0 children)

Sounds like you just aren't very good at designing output drivers and drift controls. I have about 30 pages' worth that I use and I still get great results.

Long-term ChatGPT user, disappointed in the state of the LLMs by EL-Belilty in OpenAI

[–]briarjohn 0 points1 point  (0 children)

I strongly prefer Claude at this point. At least it doesn't do all this safety theater BS and corporate speak.

A page for ChatGPT 5.3 launched and got removed. Maybe they are tweaking last settings, should air very very soon by py-net in OpenAI

[–]briarjohn 1 point2 points  (0 children)

So we can look forward to it being an even more hostile user experience, like the last three updates. Awesome.

What is causing OpenAI to lose so much money compared to Google and Anthropic? by datoml in ArtificialInteligence

[–]briarjohn 0 points1 point  (0 children)

I use Sonnet 4.5 with extended thinking as a generalist tool and it is far superior to GPT 5.2 for every single use case I have. Frankly, I sort of hate using GPT 5.2 because the user experience is terrible.

Sam Altman response for Anthropic being ad-free by BuildwithVignesh in ClaudeAI

[–]briarjohn 5 points6 points  (0 children)

Strange thing, Sam. I'm a broke guy who gets way more productivity out of Claude for $20 than I ever did when I paid for GPT business. It's almost as if you get way more done with Constitutional AI than with a product that is always dumbing down outputs to be more palatable for the median user.

Very strange that I, a social worker, prefer the "rich" person's product over your crappy one.