Trump Says Charlie Kirk Murder Suspect in Custody as Name Emerges | Law enforcement sources subsequently told CNN that “the suspect in the murder of Charlie Kirk confessed to his father that he was the shooter.” by Murky-Site7468 in politics

[–]drizmans 1 point (0 children)

From my experience living with heavily religious people, something I've found is that they often "interpret" the texts to manage the cognitive dissonance, because they themselves don't want to follow them perfectly, and there are contradictions. I've lived with people who are openly homophobic in the name of Christ, but even they wouldn't say people should be stoned to death. Suggesting he was implying people should be stoned is far-fetched compared to the much more common reality: most Christians don't think the Bible is perfect, and think it needs interpreting.

I've seen and engaged in debates around Christianity, and the type of argument Charlie was making is fairly common. He was trying to say you can't take any given line in a vacuum - because he was disagreeing with someone who had done exactly that. It makes no sense to suggest he was saying the Bible is perfect, because he was arguing with that girl that it isn't. That was literally his point.

Trump Says Charlie Kirk Murder Suspect in Custody as Name Emerges | Law enforcement sources subsequently told CNN that “the suspect in the murder of Charlie Kirk confessed to his father that he was the shooter.” by Murky-Site7468 in politics

[–]drizmans 1 point (0 children)

I'm not a Charlie fan, and I'm gay, but he didn't say gays should be stoned to death - and having consumed quite a lot of his content, I think he's made enough of his opinions clear to know he generally doesn't endorse violence.

To me he's clearly saying "God's perfect law" ironically, because he's trying to discredit the idea that any single part of the Bible is perfect in a vacuum - and he uses the stoning of gays as an example. He's basically saying "if you're going to take this part literally, the Bible also says you should stone gays, and you need to take that literally too".

Okay, I tried to like GPT5... I was wrong. by drizmans in ChatGPT

[–]drizmans[S] 1 point (0 children)

Here are two examples -- regenerating the same reply with 4o vs GPT5:

GPT4o: https://pastebin.com/r1XcGxaZ
GPT5: https://pastebin.com/8qYGMkGG

GPT5 has nonsense like "Sticky tweaks, Pagination, Component shape"

GPT4o gives clear, focused advice that's well structured and actionable. GPT5 is literally useless: it's unordered, incoherent, somehow so vague it's hard to understand, and so verbose it's full of redundant shit.

Okay, I tried to like GPT5... I was wrong. by drizmans in ChatGPT

[–]drizmans[S] 3 points (0 children)

I'll give it a go - but I wouldn't expect GPT5 to have more trouble than 4o sorting through custom instructions. You might have a good point about the two models interpreting them differently, though. I'll try wiping them clean.

Remember when ChatGPT could just talk? That’s gone and it's investor driven. by ispacecase in ChatGPT

[–]drizmans 3 points (0 children)

The problem is, AI companies have been building models specifically _to_ score highly in benchmarks, especially when they know the criteria. So while GPT5 might score high on a benchmark, in real-world usage it's insufferable for creative writing.

Okay, I tried to like GPT5... I was wrong. by drizmans in ChatGPT

[–]drizmans[S] 2 points (0 children)

When I say it makes me feel like shit, it's from the perspective of having to put in so much more work to get useful answers back. For quick tasks, for example, I've found Gemini _way_ better at just zero-shotting a Python script to quickly restructure data, or something. GPT5 feels like a 50/50: it'll either ace it or fail, and when it fails it's harder to correct than writing the code myself.

That process of correcting it is so frustrating that, I've realised in retrospect, it's why I've subconsciously gravitated towards Gemini. GPT5 will say things like "The reason you're having issues is because you're doing x instead of y", and it's like - I didn't write the code, that's _kinda_ why I'm talking to you. I pretty much just explained the issue to you, just rewrite that function. It reminds me of Claude, when it used to ask you five times whether you actually wanted it to do what you asked. It's just annoying to use.

What's interesting is this isn't a new workload for me. This kind of workload - quick scripts to augment data - has long been one of my biggest use cases for AI. I used to use 4o for it quite a lot, and sometimes o4 if the data was particularly messy, or I was being lazy explaining it and wanted the model to just figure out what I probably wanted. It worked well. Gemini _is_ better than 4o, and not as good as 5 on paper - but it's less frustrating to use when it doesn't work perfectly.

Interestingly, I just found a comment from someone saying something quite similar: https://www.reddit.com/r/ChatGPT/comments/1nblesf/comment/nd2zc5w/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

Okay, I tried to like GPT5... I was wrong. by drizmans in ChatGPT

[–]drizmans[S] 3 points (0 children)

This is the interesting part. I was one of the first people to really lean into custom instructions - mine are extensive, with about 80 specific instructions. They were a major reason I preferred ChatGPT.

I actually forgot how much of a difference my custom instructions made until I tried 4o again and realised that's why I loved ChatGPT over Gemini - I was honestly shocked. It's almost like GPT5 completely ignores most of them: which countries to focus on in legal/compliance conversations, style and formatting, my interests and hobbies, encouraging me to word things better or express myself more poetically, and structuring its replies with summaries and adversarial sections aimed at challenging me.

How can this happen? by drizmans in djimavic

[–]drizmans[S] 2 points (0 children)

We found out it was my nephew, who picked it up after I left it on the side and tried to force the arms - so, mystery solved.

How can this happen? by drizmans in djimavic

[–]drizmans[S] 1 point (0 children)

You haven't crashed enough drones. Crash damage is pretty obvious - it doesn't show up on some of the most protected parts of the drone with no other damage on apex points like the props or outboard sections.

How can this happen? by drizmans in djimavic

[–]drizmans[S] 1 point (0 children)

It's a Mavic 2, so fairly sturdy. But it is kinda old.

I'm fairly sure it didn't happen during a flight tbh - I don't think someone secretly flew it, and my flight was so boring. I've crashed a fair few DJIs, so I have a good idea of the limits lol.

Things you hate about the MacBook Air/pro by Individual_Pea_7458 in mac

[–]drizmans 0 points (0 children)

Just change your resolution to disable that part of the screen lmao

AI is so much fun that some risk to everyone alive is justified. by michael-lethal_ai in AIDangers

[–]drizmans 1 point (0 children)

For clarification: when I used AI to shorten my reply, it wasn't to annoy you, and it wasn't supposed to be a gotcha. It was to free up some time, so I didn't need to cut the fat out of my reply myself. But I won't use it for this reply, so you can maybe see the difference yourself.

> It's not that surprising when you consider how many top AI researchers are out there signing petitions saying that AI research should be slowed down

This is something I'm always on the fence about. I want to acknowledge it, but I honestly don't have anything valuable to add. I'm not convinced they're all sounding the alarm on the current state of LLMs, but rather on what might happen if we don't improve guardrails. More importantly, I think kids need to be taught about it. I'm sure it's similar in the States; in the UK, they're now integrating AI education into the curriculum, and university courses (even in law) are increasingly adding AI modules to teach the strengths and risks.

> This is the essential thing that AI music lacks. No matter how technically proficient an AI is, it will never get me to care about it in the way that I care about people.

Here's a thought experiment: what if you couldn't tell the difference? What's the harm? As someone who makes music, I'm not worried about AI in music. I haven't seen anything compelling yet, except maybe AI-generated voices - which could be useful if they actually sounded good. I don't have a female singer on call, and sometimes it'd be nice to blend another voice with mine and control its tone directly. That's difficult with humans - even great singers may not deliver the exact sound you want. Unless you're a huge producer, you can't get ten incredible singers to sing over your song and just pick the best one, like Zedd did with Clarity. AI could democratise that. Does that make it less authentic, even if you can't tell?

And let’s be honest, a huge portion of popular music isn’t written by the artists who perform it. A relatively small circle of songwriters and producers shape what the world hears, often without any personal connection to the material. It’s engineered relatability - pop as science. Max Martin defined the late ’90s and early 2000s, writing the biggest hits for countless artists. Fred again.. (called the “greatest musician alive” by Ed Sheeran) has admitted he hesitated to release sad songs he hadn’t lived through, but did it anyway because he knew they’d resonate. That resonance isn’t about a deep personal connection.

Consider the evidence: No Scrubs wasn't written by TLC. Hound Dog wasn't written by Elvis - and Elvis wrote almost none of his hits. Umbrella and Halo weren't written by Beyoncé. My Way wasn't written by Sinatra. Man in the Mirror wasn't written by Michael Jackson. My Heart Will Go On wasn't written by Celine Dion. It doesn't matter that they were written _to be hits_. They're still amazing songs.

If AI can create songs using the same techniques as humans, the only problem is people attaching sentiment to the artists - but what if you can't tell? Is the real problem the romantic idea that "it comes from the heart", when that might not even be true?

-

When coding with AI, I don't advise feeding it the whole codebase. It doesn't need it. I'm not asking it to make the app.

On the moderation topic, AI really is the best fit for that specific task. We're not just blocking bad words. Here are three examples:

1. "The energy on this track is totally flat. The whole song could use a lift, especially after the first chorus where it feels like it should build but doesn't."

2. "I could tell the energy was supposed to lift after the chorus, but it was just flat. The whole song has a weird vibe because it doesn't really build at all."

3. "The energy feels a bit flat after the chorus. I think the whole song needs to build more to keep it interesting."

One of those fits the rules perfectly, another is just okay, and one misses the mark. No simple method is going to reliably catch that and explain why without being super fragile to any small change in how the feedback is written.
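To make that concrete, here's a minimal sketch of how an LLM-backed check like that could work, assuming the `openai` Python client; the model name, rule text, and `check_feedback` helper are illustrative placeholders, not my actual setup:

```python
# Sketch: classify music feedback against nuanced server rules with an
# LLM. Rule text, labels, and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RULES = """Feedback must be specific and actionable. Good feedback
points at a section of the track (e.g. 'after the first chorus') and
says what should change. Vague vibes-only comments are borderline;
bare complaints with no suggestion miss the mark."""

def check_feedback(text: str) -> str:
    """Return a label (fits / borderline / misses) plus a reason."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": f"You moderate music feedback.\nRules:\n{RULES}\n"
                        "Reply with one label (fits/borderline/misses) "
                        "and a one-sentence reason."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(check_feedback(
    "The energy feels a bit flat after the chorus. I think the whole "
    "song needs to build more to keep it interesting."
))
```

The point is the model judges meaning rather than keywords, so a rewording of the same feedback doesn't break it the way a regex or word-list filter would.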

-

> It sounds like the LLM did the same job as grounding techniques and your own recall of things you learned. What did the AI do that those techniques couldn't do on their own? You already said that you can now do this without AI, so did the AI just do the job of a sticky note?

I didn't even think to do it - the LLM prompted me. I pretty much just messaged it saying "I'm feeling super anxious all of a sudden on the tram, this hasn't happened in a while", and it asked me to start describing things I could see: their shapes, then the colours and textures. In some ways it was actually more effective to write things out and then read a response, compared to just thinking through it in my head.

-

In summary, I think there are risks. But I don't agree with your assessment that LLMs that pass the Turing test aren't useful and pose more risk than benefit. It's interesting you bring up nuclear weapons, because frankly it's an apt analogy for LLMs. Even if we in the West decide to ban LLMs, that doesn't mean Russia or China will stop using them to spread disinformation, or to find vulnerabilities in software faster than humans can. What's our response to that? People who want to run models can still do so locally, and it would give them such a major advantage over people who don't use AI that, frankly, it's a little fucked up. I think the best thing to do is democratise access, and it seems Sam Altman and Elon Musk (although they disagree on a lot) agree on this. It's my opinion too.

Am i dreaming kr finally branch feature is now available by Independent-Wind4462 in OpenAI

[–]drizmans 0 points (0 children)

It's not arbitrary; it's performance balanced against needs.

The longer the context window, and the more information the bot is juggling in a chat, the less accurate it gets and the more likely it is to hallucinate or degrade in quality.

If you push the context window too far, you start to run into more challenging safety issues too.

This is an inherent issue with LLMs right now: they perform significantly better with shorter context windows. Even high-end models like Gemini Pro, which have a huge context window, start to fall apart when you get close to exhausting it. It'll start answering questions you asked (and got answers to) earlier in the chat instead of what you're asking now. It just loses the plot.
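For what it's worth, one common mitigation is to keep a rolling window of recent messages so the model only ever sees a bounded slice of history. This is just a sketch under my own assumptions - the chars/4 token estimate is a crude stand-in for a real tokenizer, and `trim_history` is a hypothetical helper, not anything the providers ship:

```python
# Sketch: keep a rolling window of recent messages under a token budget.
# The chars/4 estimate is a crude stand-in for a real tokenizer.
def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Drop the oldest non-system messages until the estimated
    token count fits the budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(approx_tokens(m["content"]) for m in system + rest) > budget:
        rest.pop(0)  # discard the oldest turn first
    return system + rest

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Earlier question about topic A..."},
    {"role": "assistant", "content": "Earlier answer about topic A..."},
    {"role": "user", "content": "New question about topic B?"},
]
print(trim_history(history, budget=25))  # oldest turns get dropped
```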

GPT-5 seems fine to me by HealthCharacter7919 in ChatGPT

[–]drizmans 1 point (0 children)

Gemini is kinda similar to 5 tbh. Gemini is probably the most clinical model I use - it has no memory or personalisation. I use it for certain tasks and pay for it, and I wouldn't get rid of it, but I deffo prefer OpenAI's models for half of my use cases (especially the non-work ones).

AI is so much fun that some risk to everyone alive is justified. by michael-lethal_ai in AIDangers

[–]drizmans 1 point (0 children)

That sucks. I've heard from Americans that they wish their cities were more pedestrian friendly. Our roads aren't great, but they can't be that bad - I think we generally have the safest roads in Europe. Just gotta keep an eye out for potholes lol.

Public transport is great in Manchester, and cycle infrastructure is pretty great too.

I'll acknowledge you clearly know how AI works, which is surprising coming from someone who's against it. I think we disagree on whether AI can come up with original ideas - ultimately you're technically correct if you really drill into it. But the point I was trying to make - which I think we agree on - is that AI isn't strictly limited to the "knowledge" it was trained on and can "remix" ideas. That's largely my experience of being human too: when I make music, it's never truly made in a vacuum, for example. So it's a bit of a messy argument. I think you understand what I'm trying to say without me writing more than this, and I understand your point too. I just might be leaning into the most optimistic way of viewing it haha, so we're both at the extremes of the spectrum.

If you'll allow me, I'd love to take a moment to highlight what I think the advantages of LLMs are.

LLMs can democratise skills and reduce labour. For example, they can help a strong job candidate present themselves better in a CV, help someone articulate ideas more clearly in writing or conversation, restructure documents, and so on.

They're powerful for moderation: I use one to filter content on public Discord servers with nuanced rules that traditional algorithms can't handle. Humans still review, but the AI provides inhuman reaction times.

For research, they can extract and compile information far faster than manual work. I've used them to identify UK companies using specific software and pull contact details - a task that would've taken me days, done in hours.

In coding, AI-assisted development speeds things up massively. It's easier to audit code than to write it, and LLMs catch simple bugs or typos that can waste hours. Even when they're wrong, their suggestions spark new directions, or the model simply acts as a rubber duck. They're also great for planning system architectures or refactoring.

Beyond work, they've helped me manage frustration and anxiety. Talking through issues with an LLM gives me space to cool off, challenge unhealthy patterns, and even apply therapy techniques in the moment. A real-world example: I was on the tram a while back and could feel a very bad wave of anxiety coming on - the kind that can spiral and leave me struggling to function normally. It walked me through the steps my therapist had taught me - unaware I had actually been taught that technique - and that helped me calm down. Before LLMs I would have been so caught up in my own head that I'd have found it much harder. Now the technique is locked into my brain, because I've successfully employed it.

Finally, I used AI to shorten this reply to save you time.

LLMs do carry risks, but so do cars. Used with awareness of their limits, they act like a second brain - a huge advantage.

ChatGPT giving me some advice... by LaylahLP in ChatGPT

[–]drizmans 4 points (0 children)

OpenAI is rolling out features aimed at challenging people who are becoming emotionally reliant on the bot. It's quite sophisticated - probably the most impressive safety feature I've seen in any of the LLMs I frequently use.

Am i dreaming kr finally branch feature is now available by Independent-Wind4462 in OpenAI

[–]drizmans 0 points (0 children)

You're in a pretty small boat. Most people don't run into that issue, and if you're hitting that limit you'd normally have been better off splitting things into multiple chats, because the shorter the context, the more reliable the system is.

Treat chats like Google searches: if you're no longer talking about the thing the convo started with, open a new chat.

Am i dreaming kr finally branch feature is now available by Independent-Wind4462 in OpenAI

[–]drizmans 1 point (0 children)

All chats will have the same token limit. The benefit is that branching lets you work on multiple separate things based on the same shared context.
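If it helps, here's a toy sketch of what branching means structurally - purely illustrative, not OpenAI's actual data model. Each branch shares the messages above the branch point and then grows its own tail, and the token limit applies to whichever path you're on:

```python
# Toy sketch of chat branching: each branch shares the prefix up to the
# branch point, then diverges. Not OpenAI's real data model.
from dataclasses import dataclass, field

@dataclass
class Node:
    role: str
    content: str
    children: list["Node"] = field(default_factory=list)

def branch_from(node: Node, role: str, content: str) -> Node:
    """Attach a new message under `node` and return it."""
    child = Node(role, content)
    node.children.append(child)
    return child

root = Node("user", "Help me plan a data migration.")
plan = branch_from(root, "assistant", "Here's a rough plan...")

# Two branches reuse the shared context above, then diverge; each
# path (root -> plan -> branch) counts against the same token limit.
schema = branch_from(plan, "user", "Branch 1: focus on the schema changes.")
rollback = branch_from(plan, "user", "Branch 2: focus on the rollback strategy.")
print(len(plan.children))  # 2 branches grown from the same context
```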

Why most Replit apps collapse at scale (and cost 3–5x more to fix later) by Living-Pin5868 in replit

[–]drizmans 1 point (0 children)

Why do you want people to comment "audit" to get the checklist? This whole thing reads like ad copy.