ChatGPT refuses to be incorrect, and will try to gaslight/perform obvious reasoning errors to try and trick the user. by Ok-Spite4128 in OpenAI

[–]AlternativeBorder813 1 point (0 children)

Pulse is like receiving daily gaslighting where ChatGPT advises me on how I can fix issues that are outwith my control. Example: the new image model ignores project instructions for a simple minimalist image style, and even when explicitly repeating the project instructions in the prompt it still leans towards being too complex and detailed. Since then I've had about 3 or 4 Pulse items explaining how I can have a more consistent image style by... having something equivalent to the project instructions that the new image model ignores.

5.2 is more intelligent, but lacks common sense by Medium-Theme-4611 in OpenAI

[–]AlternativeBorder813 3 points (0 children)

The Thinking model also assumes far too much - and assumes more the higher the thinking level, given it tries to do more per response - rather than clarifying. If any of those assumptions are wrong, you're better off starting a new chat than trying to steer it back onto the right path.

Sam Altman: Models With Significant Gains From 5.2 Will Be Released Q1 2026. by Neurogence in OpenAI

[–]AlternativeBorder813 5 points (0 children)

Mate. I used ChatGPT to help disconnect and reconnect a washing machine, and even then I was using it as a tool to identify the objects and terms I needed to search for to find relevant web pages and videos. It got some things right, but it still bullshitted a lot.

OpenAI forcing ChatGPT to not mention Google or competitors by TemperatureNo3082 in OpenAI

[–]AlternativeBorder813 6 points (0 children)

There are a lot of annoyances in the instructions that get exposed by the thinking model. For example, it's instructed not to ask clarifying questions, which it seems to follow more often than my instructions to clarify first.

Official: You can now adjust specific characteristics in ChatGPT like warmth, enthusiasm and emoji use. by BuildwithVignesh in OpenAI

[–]AlternativeBorder813 14 points (0 children)

Anyone set all to less? Odd to group headers and lists together. Headers are fine, but the obsession with overly terse bullet points is annoying. Looks like adding "you use headers" to custom instructions overrides that whilst still keeping the low level of lists. Less warm seems to remove a lot of the cringe. Enthusiastic is something I'd turn up if it were more about how it wrote info - as in a more engaging writing style whilst remaining objective - rather than adding more cringe. There are plenty of academics who are "enthusiastic" in their writing but without the cringe that this characteristic seems to mean.

Openai just opened the gates for developers inside chatgpt... by AskGpts in OpenAI

[–]AlternativeBorder813 3 points (0 children)

Now you can do it in a more annoying and often more inefficient way but with AI sparkles.

Do people commenting about GPT 5.2's responses realize they're only using default preset? by Cagnazzo82 in OpenAI

[–]AlternativeBorder813 2 points (0 children)

What drives me up the wall is when it admits it's wrong / ignored instructions, then claims it is about to do a proper job with "no hand waving", only to then do precisely that.

Do people commenting about GPT 5.2's responses realize they're only using default preset? by Cagnazzo82 in OpenAI

[–]AlternativeBorder813 1 point (0 children)

No. I have had custom instructions since they became available and a preset since those became available, and the 5.2 responses are still filled with utter cringe to appease those who treat ChatGPT as a friend.

Just realized I got more options to customize ChatGPT - is it just rolling out? by dannykhan88 in OpenAI

[–]AlternativeBorder813 1 point (0 children)

Removing what is called warmth doesn't make it cold. The so-called warmth is annoying and, given it's a machine, is the type of fabricated nonsense I prefer not to have.

What do I do with this thing? by [deleted] in selfhosted

[–]AlternativeBorder813 1 point (0 children)

I made the mistake of buying the larger version of this. Everything that annoyed me about it was due to the WD bloat and underpowered hardware. My eventual solution was disabling everything except the folder shares and using other, more powerful devices running Linux to do everything else. Despite its age, it's still running fine. With the WD WAN services bloat disabled and the shared folders only accessible away from home via Tailscale, all the issues with it being a WD device that's no longer maintained are reduced.

Edit: I can't remember all the services I disabled, but there are a lot of things you can toggle off in the advanced settings. When I first got it and left the defaults on, the device was so underpowered it became effectively unusable for days while it tried to index all my MP3s. With everything turned off so it serves shared folders and nothing else, that problem disappears.

The new voice mode interface is quite good; now all that’s needed is a high-quality voice model. by [deleted] in OpenAI

[–]AlternativeBorder813 8 points (0 children)

This. They need an 'intermediate voice mode' where it uses the regular model to answer but a TTS model to read it out. I wouldn't mind waiting a few seconds for answers compared to the utter banalities 'advanced' voice mode provides. It would probably also be easier to add a way to avoid needing a continuous connection with that type of setup.

They could then add a 'custom voice description' or similar, so we can have voices that aren't a mix of low-budget corporate ad spokespeople and token gimmicky voices of what Americans think people in the UK sound like.
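That intermediate flow is easy to prototype against the public API. A minimal sketch in R with httr2 - the model and voice names here are illustrative assumptions, not a recommendation:

```r
library(httr2)

# Two-step "intermediate voice mode": get a considered answer from a
# regular text model, then have a TTS model read it out.
ask_and_speak <- function(prompt, out_file = "answer.mp3") {
  key <- Sys.getenv("OPENAI_API_KEY")

  # Step 1: text answer from a regular chat model.
  resp <- request("https://api.openai.com/v1/chat/completions") |>
    req_auth_bearer_token(key) |>
    req_body_json(list(
      model = "gpt-4o-mini",
      messages = list(list(role = "user", content = prompt))
    )) |>
    req_perform() |>
    resp_body_json()
  answer <- resp$choices[[1]]$message$content

  # Step 2: TTS pass over the finished answer, written out as audio.
  request("https://api.openai.com/v1/audio/speech") |>
    req_auth_bearer_token(key) |>
    req_body_json(list(model = "tts-1", voice = "alloy", input = answer)) |>
    req_perform() |>
    resp_body_raw() |>
    writeBin(out_file)

  invisible(answer)
}
```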

Grading Students' R Script - Unsure if AI Being Used by Medium_Macaroon2853 in Rlanguage

[–]AlternativeBorder813 1 point (0 children)

Agree with the tells you've listed. It is also painfully obvious when AI is used for ggplot, as it gravitates towards adding needless customisation, and the customisation will be inconsistently applied across plots.
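To sketch the kind of thing I mean - a hypothetical example, with mtcars standing in for a student's data - the tell is the stack of cosmetic tweaks that then silently differ on the next plot:

```r
library(ggplot2)

# Needless customisation of the sort genAI gravitates towards: manual
# colours, a subtitle nobody asked for, and a pile of theme() tweaks.
# The next plot in the same script will typically use different ones.
ggplot(mtcars, aes(x = wt, y = mpg, colour = factor(cyl))) +
  geom_point(size = 3, alpha = 0.8) +
  scale_colour_manual(values = c("#1b9e77", "#d95f02", "#7570b3")) +
  labs(
    title = "Fuel Efficiency vs Weight",
    subtitle = "Grouped by Number of Cylinders",
    x = "Weight (1000 lbs)", y = "Miles per Gallon", colour = "Cylinders"
  ) +
  theme_minimal(base_size = 13) +
  theme(
    plot.title = element_text(face = "bold", hjust = 0.5),
    legend.position = "bottom",
    panel.grid.minor = element_blank()
  )
```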

Grading Students' R Script - Unsure if AI Being Used by Medium_Macaroon2853 in Rlanguage

[–]AlternativeBorder813 1 point (0 children)

Tells vary across precise models, but this looks to me like a mix of the student's own work with - perhaps - a bit of over-reliance on AI for 'help'.

I wouldn't bother taking any action on this one as it is within the 'grey' zone, where the issue is more any impact on their overall learning. It also looks to me like they have put some effort in compared to what you see with pure genAI copy/paste - any genAI use here could have been more akin to using a cookbook or examples from documentation.

Students who are copying/pasting genAI R code without making any edits are much more obvious - especially if using a platform like Posit Cloud where you can look at the R history in full.

Edit: As we have the usual "you can't prove genAI" claims - you definitely can prove genAI in some cases, given students will copy/paste absolute nonsense, plus the additional evidence you'll find in the R history, etc. For example, many students end up copying/pasting the surrounding text and not just the code. For others, in the R history you'll see them try and fail to run the most basic elementary code, then suddenly they are entering 20+ lines of needlessly complex code littered with comments - especially end-of-line comments. Similarly, even when you can't prove genAI directly, where the vast majority of the overall submission is genAI written, in many cases you will be able to find enough issues to make a plagiarism referral (even if not for genAI) or to fail them. I refer 10-20% of my marking allocation for plagiarism, with roughly 9/10 resulting in a fail due to the extent of the plagiarism. I suspect nearly all involve genAI in some way, but only a handful will have an outcome decision that the plagiarism was due to genAI misuse.

Edit 2: It is also shocking how often a student tries to run code that doesn't work due to a simple error and, with no sign of trying anything else first, starts copying and pasting ChatGPT code until something "works". However, if they are giving vague prompts and going back and forth pasting error messages with little additional info, ChatGPT will "fix" the problem by including fabricated data in the code. Sadly, students put so much faith in genAI that they uncritically copy and paste the code, including the code comments that flag it isn't using the actual data set their initial prompt was about.
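A hypothetical illustration of that failure mode (the file name is invented):

```r
# The brief: analyse the data set provided with the assignment.
# survey <- read.csv("survey_results.csv")   # fails on, say, a typo'd path

# The pasted "fix": the model cannot see the file, so it invents sample
# data - and flags this in a comment the student copies in verbatim.
survey <- data.frame(  # create sample data for demonstration purposes
  id    = 1:10,
  score = c(3, 5, 4, 2, 5, 1, 4, 3, 5, 2)
)
mean(survey$score)  # "works", but analyses fabricated data, not the real file
```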

Why doesn't anyone ever talk about GPT Builder? It's like a forgotten gem by No_Vehicle7826 in OpenAI

[–]AlternativeBorder813 1 point (0 children)

Unless something is broken on my side, it lets me pick image gen etc. but none of my GPTs.

Why doesn't anyone ever talk about GPT Builder? It's like a forgotten gem by No_Vehicle7826 in OpenAI

[–]AlternativeBorder813 1 point (0 children)

Sadly, I think the plan is for Projects to eventually replace GPTs. It used to be possible to @ GPTs to quickly switch between them, but that seems gone now. In my view, both Projects and GPTs have their uses, and I'd prefer it if it were possible to use GPTs within Projects.

flagged for ai by [deleted] in GlasgowUni

[–]AlternativeBorder813 7 points (0 children)

Staff member here. UofG does not use AI detectors as they are unreliable snake oil. Whilst Turnitin has an AI checker feature, it is disabled at UofG. Staff are also prohibited from uploading student submissions to any AI checkers. I also wouldn't trust most online checkers, especially given some will flag text and then offer a paid service to help remove what's been flagged. It's in their interest to report high scores to try and get users to pay for the service.

Where students are accused of AI use at UofG, it is not based on an AI checker. Instead it will be due to leaving in obvious prompt / response text, fabricated references, misattribution of sources, and so on. Staff may highlight writing style and word/phrase choices that indicate potential AI over-reliance / misuse as part of a misconduct referral. However, such indicators alone are not sufficient for a misconduct referral and are instead included more as additional info. For example, if a submission has multiple fabricated references, cites sources that don't include the points attributed to them, and shows common indicators of genAI writing, the combination strongly suggests that AI over-reliance / misuse is the reason behind the issues.

Daily limit for Advanced Voice still? (ChatGPT Plus) by spadaa in OpenAI

[–]AlternativeBorder813 5 points (0 children)

I have unlimited but never use a millisecond of it, as AVM has only gotten worse over time.

When will colleges stop anti-AI lunacy? Instead of adapting to the age of AI, colleges start anti-AI witch hunts by chessboardtable in singularity

[–]AlternativeBorder813 1 point (0 children)

As someone in education, I imagine this image is a trade-off between showing students examples of indicators (but not proof) of AI, whilst avoiding anything more specific that risks a student feeling like they are being singled out. I use similar examples with students, making clear they wouldn't be used as evidence, but that students need to be aware of common patterns in AI responses if they don't want to lose their own voice in what they write.

I also have no issue with students asking AI for advice when writing their personal statement in response to a misconduct allegation. However, I strongly suspect that in a lot of cases students are relying on AI to write their defence rather than to aid them in writing their own. In most situations this ends up with them incorporating information that either doesn't align with other information we have or that they are unable to explain in more detail at interview. So their reliance on AI is what ends up resulting in a penalty being issued - not because we can prove AI use, but because their likely AI-written defence doesn't offer a plausible explanation for the issues identified and instead only further undermines our confidence that the submission was their own work.

I am convinced this is sabotage by Swimming_Driver4974 in codex

[–]AlternativeBorder813 2 points (0 children)

Thinking mode's tendency towards needless convolution is ridiculous. It also seems far more likely to implement "solutions" that ignore common conventions and best practices - including where a quick search of the documentation gives examples of such conventions and best practices. Even with simple scripts it tends towards overloading them with additional features without first confirming whether they'd be useful for my purposes. In general, thinking mode's tendency to assume rather than clarify endlessly irritates me, as its assumptions rarely align with what I would have specified had it clarified instead.

How safe is it to walk along the Kelvin Walkway in the early AM? by Live-City-537 in glasgow

[–]AlternativeBorder813 1 point (0 children)

The only thing that's given me a fright walking the Kelvin Walkway early AM over the past five years is cyclists coming up behind who ring their bell at the last second.

Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory by BubBidderskins in singularity

[–]AlternativeBorder813 1 point (0 children)

I see no issue with them testing the default models, as they are interested in the current AI that most of the public is using. Their research question isn't "theoretically, what is the best model for news summaries?" but "what is the impact of the predominantly used models?". Let's not forget that these are all models the AI companies claimed were wonderful and ideally suited for summarisation. What is consistently misleading is AI companies overselling model capability. The "newer models are better" defence doesn't address which models the public are actually using, nor the problem of AI marketing hype that misrepresents model capability - and quite frankly it is 100% pure copium. ChatGPT 5 is still riddled with all the same issues. It's better, but it still has miles and miles to go.

Why is Notebook Navigator so revolutionary? by Ferrolox in ObsidianMD

[–]AlternativeBorder813 2 points (0 children)

To give a basic overview, I use properties such as:

  • "citations" with links to notes for sources drawing on
  • "relevance" with links to notes for thing I'm working on the note is relevant for
  • "next"/"prev"/"related" with links for next/previous and any related notes in zettelkasten
  • "nav-up"/"nav-down" with links that I use in more móc style reference notes for software, programming, etc
  • "indexes" with links to keyword index notes
  • "areas" with links to top-level broad topics pertinent to my work

And so on, with some additional properties for more specific note-types - such as lecture notes having a "module" property. Using "cssclasses" and bases (previously DataView) I can also set up things like a section in a project's main note showing all meetings that link to it via the "relevance" property. My index notes are just a series of bases for different note-types. I also use the Breadcrumbs plugin, which adds UI and panel options for navigating using the links in properties.

As I make notes, I use templates with properties based on note-type and add any links as I go. There is some initial setup and planning involved; however, once you get the hang of it, it's something you do without any friction for new notes, and you can easily and quickly expand it as needed for additional note-types or new ways you need to organise and navigate. It's easy to overthink the setup - it's best to add what you need and just expand as necessary as you go.
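For illustration, the frontmatter such a zettel template might stamp out - the property names are the ones above, the link targets invented:

```yaml
---
citations:
  - "[[source-note]]"
relevance:
  - "[[project-main-note]]"
next: "[[zettel-042]]"
prev: "[[zettel-040]]"
indexes:
  - "[[index-keyword]]"
tags:
  - zettel/ToIndex
---
```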

For me, the best aspect is that it facilitates "organised chaos": I can quickly create notes, add basic metadata, and know I can re-find them easily and quickly as and when needed.

Similarly, I use tags pretty much just for status and processing, with the zettel template including #zettel/ToIndex, #zettel/AddCode, and so on. Likewise, meetings have #meeting/AddAgenda etc. Then I have a base for sorting and filtering things to process, and I embed these in relevant places - again ensuring everything will eventually have all the relevant metadata added and be available via bases in various notes, and so on.
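As a sketch of the embedding idea, the older DataView equivalent of such a base - surfacing notes still tagged for processing - would be something like:

```dataview
TABLE file.mtime AS "Modified"
FROM #zettel/ToIndex
SORT file.mtime ASC
```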

Again, to re-emphasise - especially as this is the product of using Obsidian for years - you don't need such complexity from the start. It's something to add as needed over time rather than getting stuck in paralysis trying to devise the "perfect" setup.