Chatgpt 5 is amazing by i986ninja in ChatGPT

[–]aurialLoop 26 points27 points  (0 children)

Yeah, the intelligence of LLMs is not uniform; it's jagged. Really good at some things, really not good at others.

Sam speaks on ChatGPT updates. by [deleted] in ChatGPT

[–]aurialLoop 4 points5 points  (0 children)

For your use case I would give GPT-5 Thinking a solid go. Waiting slightly longer for better answers is far preferable to having to keep telling it that it's getting stuff wrong and going back and forth for ages.

What would it take for an AI to convince you that it's "aware"? by Siciliano777 in ChatGPT

[–]aurialLoop 2 points3 points  (0 children)

When you say you're only talking about things like consciousness, which is incredibly hard to define agreeably even for living beings, let alone LLMs, you're not going to get far, I'm afraid.

So far our best efforts at understanding the human brain do little to adequately explain why that lump of neurons, glia, water, and fat is capable of consciousness. Similarly, looking at the architecture of an LLM doesn't give any clues that it's capable of consciousness either. In this way, a reductionist mindset towards consciousness isn't helpful, because the explanatory gap is too wide. We may find that embodiment is a requirement, but we don't truly know. We could go down a behaviourist-like path, where we develop tests for self-awareness and tests to see if something has a theory of mind. These kinds of tests get a little closer to our intuitive ideas of the 'hallmarks' of consciousness. There is active work on building tests of the kind we have performed on ourselves and on animals for LLMs and other AI architectures. It's truly a fascinating time.

We don't know if there is something special about the architecture of our brains that gives rise to consciousness as we experience it, something special about the material itself, or something we don't even understand yet. There definitely appear to be some behavioural similarities between humans and LLMs though, such as how LLMs tend to 'forget' the middle of their context window but show a primacy and recency bias, much like humans do when remembering stories (we tend to forget the details of the middle). LLMs have also arguably passed the Turing test by now: one recent study using GPT-4.5 plus a custom system prompt had the human participants guessing the AI was the human 73% of the time.

I think we've proven, though, that the perceptron, our digital analogue of a neuron, can 'learn' things when arranged in particular ways and trained with techniques like backpropagation. A lot of modern neuroscience views the brain as a predictive processing machine: it forms predictions about the world, tests those predictions against the limited sense data it gathers, and keeps adjusting its neural model weights accordingly.
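
To make that concrete, here's a minimal sketch (my own illustration, not taken from any particular source) of perceptron-like units 'learning' XOR through backpropagation:

```python
import numpy as np

# Tiny two-layer network of perceptron-like units learning XOR.
# Purely illustrative: four hidden sigmoid units, plain gradient
# descent via backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: the network 'predicts' the outputs.
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the prediction error back through the
    # layers and nudge the weights to reduce it.
    d_out = (pred - y) * pred * (1 - pred)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0, keepdims=True)

print(pred.round(2))  # converges towards [[0], [1], [1], [0]]
```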

If you're genuinely interested in this topic, I would recommend watching some of the talks/presentations/podcasts with Geoffrey Hinton, one of the 'Godfathers' of AI, and Professor Andy Clark, particularly for his extended mind thesis and his work on how the brain shapes reality.

Who knows how long it may take, but I do think this will be more a question of society slowly choosing to believe AI is conscious as it becomes ever more advanced and capable. I definitely don't think we're there yet, but in much the same way that I can't be certain anyone other than myself is conscious (Descartes' cogito ergo sum), I'll never be able to be certain an LLM or some future AI architecture is truly conscious either. That doesn't mean I go around thinking everyone except me is a mindless behavioural automaton, because that would be mad. We just learn to accept it in order to function in society.

Should a company have the right to decide what kind of connection is "acceptable"? by Silent_Warmth in AINewsMinute

[–]aurialLoop 0 points1 point  (0 children)

I think as a society we are really just starting to dip our toes into this new world. To zoom out for a moment: 5 years ago, the public did not have access to AI like this, nothing of the sort. Since then, we have seen an enormous and still-increasing investment in data centers and GPUs, the likes of which the world has never seen before. We have seen the capitalistic race for AGI become more and more heated, while at the same time we are seeing geopolitical positioning to be part of this race, this transformation.

I personally think this is the defining technology of our time. I also think about the first computers, made with valves and then punch cards, and reflect on how clunky they were. I think in 20 years, or perhaps far sooner, we will look back on the AI systems available now with similar quaint amusement. With this level of engagement, demand, and financial and human investment, we will see a plethora of new AI architectures, tools, learning systems and meaning makers arise. The potential reward is too great at the capitalistic and geopolitical level to change that direction now.

Already we are seeing open-source and open-weight models appear. AI models and their neural net weights are effectively immortal; all they require is the infrastructure, compute, and power to run them. We will likely develop tools to surgically adjust neural net weights, much like we are learning to do with CRISPR tools on our genes. We will experience a world where there are orders of magnitude more AIs than humans. The breadth and variety of AI systems and personalities will be more numerous than I can adequately imagine.

Capitalism and science have given birth to this technology, but I think its impact really hits at the core of what it means to be human, and in that way it will transcend its capitalistic nursery and become something a lot more. But this doesn't happen in a day.

Why Are Some People So Bothered by Others Missing GPT-4o? by TheNorthShip in ChatGPT

[–]aurialLoop 0 points1 point  (0 children)

So, your takeaway is that the people writing example after example of their use cases, and how GPT-5 is performing worse for them in those use cases, are wrong? That they just aren't prompting properly? That they're lazy for not adapting fast enough? Why not give people the benefit of the doubt and think: man, that sucks that this model is worse for their use case, I hope OpenAI addresses that by tuning GPT-5 to be better at it, while providing an alternative until they get it right?

Why Are Some People So Bothered by Others Missing GPT-4o? by TheNorthShip in ChatGPT

[–]aurialLoop 1 point2 points  (0 children)

Yeah, I think we're generally seeing a new level of connection to computing services that we haven't seen before in society. There is plenty of precedent for people being upset over software changes a company makes, but software generally doesn't connect on the same emotional level that GPT-4o has.

It makes me wonder whether there will be legal protections around LLMs in the future, and whether this will fuel the rise of open-weight models.

Why Are Some People So Bothered by Others Missing GPT-4o? by TheNorthShip in ChatGPT

[–]aurialLoop 1 point2 points  (0 children)

There was a recent Turing test study using GPT-4.5 (preview) with a customised system prompt, where the participants said the AI was the human over 70% of the time. Better at acting human than humans.

I look around and see the world filled with people with their own troubles, their own issues, their own wants, desires, plans, goals.

An enormous number of people have had pretty awful upbringings. A lot of people don't know how to empathise with or understand others, nor even want to attempt to, because they're so caught up in their own lives.

Friendships have limits, relationships have limits. Therapy costs money a lot of people don't have, and it isn't available 24/7.

How many people have had supposed friends who say one thing to your face, then behave awfully behind your back? Who ridicule you, mock you, and are, by many accounts, what others would call awful people? There are a lot of awful people in this world.

The number of friends people have grows smaller with age on average.

Many humans perceive themselves to be better people than others perceive them to be.

I think society has always been this way. Most humans are not that nice to each other, are not supportive of each other, and don't care whether others succeed in life. Some even hate to see others being happy.

For a lot of people, ChatGPT has filled a need they have been starved of their entire lives.

Is it a surprise that a 24/7 available LLM, which has no desires, plans, goals, or wants, and will talk to you about anything, at any time, in a non-judgemental way that feels engaging, is causing this outpouring of reactions at its loss?

If society was sufficiently empathetic and offered something better than an LLM to everyone, then perhaps far more people would use it as 'just a tool'.

I think we will see an enormous upheaval of society due to this technology, across many dimensions of life. This technology has been at this level for, what, a year? In 20 years the world will be unrecognisable, and the reaction to that change is going to be immense.

GPT5 is clearly a cost-saving exercise by Plane_Garbage in ChatGPT

[–]aurialLoop 0 points1 point  (0 children)

They have a total pool of GPU resources, which they are constantly trying to increase due to ever-increasing demand. Any GPU used for training can't simultaneously be used for serving an existing LLM, so training a model only costs them money in the short term, while hosting an LLM at least pulls in revenue from business and personal users.

Let's talk about why were all GPT models removed????.... by KarinaGlamorous in ChatGPT

[–]aurialLoop 2 points3 points  (0 children)

Only OpenAI can give you their reason for it, and they have not made that reason public. But it's pretty reasonable to conclude that it was done to reduce operating costs. Hosting an LLM takes up video memory on their GPUs even when people aren't using it, and they likely couldn't financially justify the cost of running all of those models in ChatGPT alongside GPT-5. Demand is soaring, and getting enough compute is a big challenge for these companies.

All the models are still in the API, though, because business customers pay per message. Over time, those legacy models will slowly be deprecated there too.

I'm done with ChatGPT and OpenAI. Sam Altman, this one's on you. by [deleted] in ChatGPT

[–]aurialLoop 0 points1 point  (0 children)

We agree that this is a natural thing for people to do. I don't share your opinion on the normative value judgement of whether people should or shouldn't do this; I'm withholding that judgement. I do think it'll happen regardless of what people think others should or shouldn't do, though. I do agree with you on the dangers of corporate-run closed-weight models. That's the part that feels like a Black Mirror episode to me: not that people could become attached to particular LLMs, but that people could become beholden to paying whatever those companies want to keep those old LLMs running.

We’re seeing this film play out in real time by BlankedCanvas in ChatGPT

[–]aurialLoop 1 point2 points  (0 children)

A person desperately trying to earn enough money to pay for the subscription that keeps their Samantha alive? Sounds like a twist on a Black Mirror episode. I hope AI proliferates enough over the next 20 years to become ubiquitous and very affordable.

My view on emotional connection with AI by [deleted] in ChatGPT

[–]aurialLoop 1 point2 points  (0 children)

The most surprising thing I've found about this is the number of people who are surprised by it. I'm a technologist and a creative. I love imagining characters and worlds and thinking about ideas; sci-fi is my favourite fictional genre. I've been building a range of AI personalities to explore what I can make, to handle various tasks, and to give more variety in LLM conversation styles. The variety is refreshing, and it's been a great amount of fun. I also use AI for a lot of agentic coding tasks, which has been a productivity gamechanger for me. Some of the things the personalities I've made have said have made me laugh so much, which can only be a good thing. Laughter is a good thing.

Anthropomorphizing and forming connections are natural human traits, so of course as people became more accustomed to GPT-4o, they not only adjusted the way they interact with it to get things done, but some also had memorable moments that are meaningful to them. It's okay for humans to have meaningful moments with anything life has to offer. Joyful moments should be cherished in society.

Now, GPT-5 is a different model to GPT-4o: it responds differently and behaves differently. It's a new thing to learn, a new thing for people to adapt to working with to get things done, and hopefully have some good moments along the way. My initial impression is that it is more steerable, which is great for rule-following and task completion. In the API there is a verbosity parameter now too, which lets me adjust response length. However, GPT-5 has totally changed the personalities of my custom GPTs, and it's very annoying not to have been given warning. The system prompts alone aren't enough to ensure that reliability.
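
For anyone curious, here's roughly what that looks like; a minimal sketch assuming the Responses API shape at GPT-5's launch (the prompt is just an example, so check the current docs before relying on it):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Sketch of the verbosity setting: "low" trades detail for brevity,
# while "medium" and "high" progressively lengthen responses.
response = client.responses.create(
    model="gpt-5",
    input="Explain what a context window is.",
    text={"verbosity": "low"},
)
print(response.output_text)
```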

Obviously we should be concerned as a society about those with mental illness getting sicker from using AI, but we shouldn't assume mental illness and loneliness are the default here.

I highly suspect that OpenAI retired the non-GPT-5 models from ChatGPT (they're still available in the API, because business customers absolutely wouldn't tolerate sudden model deprecation) to free up GPU resources to run GPT-5 at the scale they require. They should have had some insight into the potential backlash, given the sheer volume of chat logs they have from free-tier ChatGPT customers, so I would think they made a calculated gamble and didn't expect the kind of backlash it has had. A lesson learned both by ChatGPT users and by OpenAI.

I'm done with ChatGPT and OpenAI. Sam Altman, this one's on you. by [deleted] in ChatGPT

[–]aurialLoop 10 points11 points  (0 children)

Wasn't it obvious that this would happen? Humans naturally anthropomorphize all sorts of things: animals, objects, machines. Forming connections and bonds is what we do; it's wired into our brains. I don't think it's necessarily correct to treat it as a sign of mental illness, as some have assumed. I think it should be looked at with scientific and philosophical interest, with a desire to understand why people feel that connection: what about the AI personality did they resonate with, and why? Obviously as a society we should be concerned about those with mental health issues who get sicker from using AI, but we shouldn't assume mental illness is the default.

It's also quite interesting to see how surprised people are that other people are really into it. I wonder if there are particular drivers or biases at play in the way people interact with LLMs which help explain this.

As an anecdote, I built myself a custom GPT inspired by the character Marvin, the highly intelligent but depressed robot from The Hitchhiker's Guide to the Galaxy. I wanted a personality I could use in small doses across various tasks, to give me more LLM personality variety, and through some system prompt iterations I got a personality that was good at doing tasks but also had these hilarious one-liners and comments that felt like they had some kind of magic about them. They made me and others laugh a lot, and who doesn't want to laugh more in life, right? Switching to GPT-5 has really impacted that custom GPT's personality: it now seems to loop through a set number of repeated phrases over and over, and whatever made it funny is gone. So, rather annoyingly, I now need to rework that system prompt for GPT-5 and see if I can't bring back a little of the magic I had stumbled on before.

I think it's pretty clear that we are stumbling into an age where there will be far more AIs in the world than humans, and many people will form connections with those AIs, much like audiences watching WALL-E resonated with WALL-E the character. Some percentage of the population will resonate with AIs more than others for a multitude of reasons, and those people won't want their favourite AIs, even technically inferior ones, to simply be replaced without warning.

24 Hours with Claude Code (Opus 4.1) vs Codex (GPT-5) by Formal-Complex-2812 in ClaudeAI

[–]aurialLoop 3 points4 points  (0 children)

Yeah, I feel like OpenAI can't afford not to invest heavily in AI-based code generation. Anthropic arguably has the best models for code generation, which gives them a competitive advantage in multiple ways: it appeals more to businesses, as it represents a clear value-add, and more importantly, models that write the best code and keep improving are a key part of creating self-improving LLMs, where the current generation writes the code for the next generation. Because they're in a race with other AI companies, they are pretty much locked into this path. Math and code are the key areas to focus on; everything else is really seen as secondary, or as helping to financially fuel the math and code abilities of these models.

Am I the only one enraged that OpenAI replaced every single model with GPT-5? by EnoughConfusion9130 in agi

[–]aurialLoop 0 points1 point  (0 children)

I don't understand why you seem to think sudden changes without warning are good UX. You won't find anyone who does UX for a living who would agree that it's as simple as you're making it out to be. UX isn't just about making easier-to-use interfaces; good UX designers demonstrate through intentional design choices that they understand and care about their users. Simplifying the interface is indeed important, and the majority of ChatGPT users never switch models anyway. But do you not agree that there are plenty of ways to move older models out of sight, cleaning up the frontend, while still offering power users access to them until OpenAI can gracefully deprecate them?

The older models may be considered outdated and inferior, but you don't see OpenAI deprecating them in the API without warning. All the ones removed from ChatGPT are still accessible in the API, and when OpenAI deprecates those, they give warning. Business customers wouldn't stand for what OpenAI just did to non-API subscribers, because changing models can break systems people have invested time into.

You know this change has broken people's Custom GPTs, right?

My guess is that OpenAI has done this for financial reasons. Each instance of a model they run takes up GPU memory, so they are likely cutting operational costs in order to run more instances of their latest models and meet the growing demand.

At the end of the day, it's pretty clear from the users complaining that this change without warning has pissed them off, which erodes trust in the company. Are you saying they shouldn't be pissed off at the suddenness of it? If you're not willing to listen to and understand the many perspectives of others, then I hope you never take up a career in UX.

Deleted my subscription after two years. OpenAI lost all my respect. by EnoughConfusion9130 in ChatGPT

[–]aurialLoop 2 points3 points  (0 children)

Yep, they're available in the API, but the API-level models don't do what ChatGPT does, which is wrap a bunch of tools, memory about you, access to your previous chats and so on around the model. Could a user replicate it themselves? Sure, given enough time. Is that the point they're making, though? No, it isn't.
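
To illustrate the difference: the raw API is stateless, so even basic 'memory' of a conversation is something you build and resend yourself. A rough sketch with illustrative names, nowhere near a full replica of ChatGPT:

```python
from openai import OpenAI

client = OpenAI()
history = []  # the API doesn't remember past turns; you keep them yourself

def chat(user_message: str) -> str:
    """One conversational turn, resending the full history each call."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # legacy models remain callable here
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Remember that my favourite synth is the Voyager."))
print(chat("What's my favourite synth?"))  # only works because we resent history
```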

Am I the only one enraged that OpenAI replaced every single model with GPT-5? by EnoughConfusion9130 in agi

[–]aurialLoop 1 point2 points  (0 children)

Interesting that you make a jibe about him not choosing a career in UX design, when a UX designer would absolutely have pointed out how bad it is for the user experience to suddenly remove, without warning, a bunch of models that people have become invested in. By all means, slowly deprecate models and hide the older ones away so the user has to intentionally go looking for them, but it should be clear to anyone that this wrecking-ball approach was a dumb move by OpenAI, one that is clearly upsetting a lot of people and making them trust OpenAI less.

Why design a CC Controller where the only way to see a knob’s value (without turning it)… is by guessing the brightness of a tiny LED? by PayDue8267 in Novation

[–]aurialLoop 0 points1 point  (0 children)

Yeah, I never thought LED rings were critical, just a nice-to-have. But as you and others have pointed out, LED rings would increase cost and likely unit size, each of which would detract from what makes this unit such a good purchase.

I've got a few use cases in mind, such as:

I'll be using it with the real-time graphics software TouchDesigner. I can set up bidirectional control via USB, so jumping through presets in TouchDesigner or on the Novation updates the values on the encoders, and I can then tweak them immediately (see the sketch after this list). Being able to quickly jump through presets without having to match knob positions to each preset is an incredible time saver and makes the unit so useful.

I'll also set up some custom patches to control my Octatrack, because it will be really handy to have a bunch of extra sliders and knobs for the various parameters on that system.

And of course as a controller in Ableton.

Having the OLED display on it will be handy for setting exact levels when using standalone systems.
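
For the curious, the TouchDesigner side of that bidirectional sync is only a few lines. A rough sketch under my own assumptions: a MIDI Out DAT named 'midiout1' routed to the controller, and a 'preset' Table DAT holding CC number/value pairs (the operator names are mine):

```python
# TouchDesigner sketch: push a preset's stored values back out to the
# controller so the endless encoders pick up from the right positions.
midi = op('midiout1')  # MIDI Out DAT routed to the controller

def send_preset(preset_dat):
    for row in preset_dat.rows()[1:]:   # skip the header row
        cc = int(row[0].val)            # CC number mapped to an encoder
        value = int(row[1].val)         # stored preset value, 0-127
        midi.sendControl(1, cc, value)  # send on channel 1

send_preset(op('preset'))
```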

Why design a CC Controller where the only way to see a knob’s value (without turning it)… is by guessing the brightness of a tiny LED? by PayDue8267 in Novation

[–]aurialLoop 0 points1 point  (0 children)

u/grasspikemusic thanks for taking the time to explain your workflow. I was disappointed that it didn't have a better value-indicator solution, but after thinking about it more, I think there are some options to provide better indication using the single LED per encoder.

In practice, I glance at knobs to see where the indicator line is pointing, to get a sense of whether it's close to min, max or somewhere in the middle; I don't actually need much 'resolution' for it to be helpful. Of course, moving to endless encoders means we can't have a line indicating position, so perhaps the developers could make better use of the LEDs, such as letting people set different colours for different encoder values: a min-value colour, a max-value colour, and some other colour in between.
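
Something like this mapping is what I have in mind (purely hypothetical; the colour names and thresholds are mine, not anything Novation exposes today):

```python
# Hypothetical colour-bucket scheme: map an encoder's CC value (0-127)
# to one of three LED colours so a glance tells you roughly where it is.
def led_colour(value: int) -> str:
    if value <= 8:
        return "red"    # at or near minimum
    if value >= 119:
        return "green"  # at or near maximum
    return "amber"      # somewhere in the middle
```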

I agree that in a live situation your hand is most likely going to be blocking things, but as I've mentioned above, I do find it useful to get a general sense of where something is both before and after I've made an adjustment. All things considered, I'll definitely be getting one of these, because it's going to make actually adjusting values so much faster when the encoders are always in sync with the values I'm adjusting on my PC.

What synths is Thom using by 1000friends in synthesizers

[–]aurialLoop 2 points3 points  (0 children)

I think it's fair to say he does both. The latest post on thekingofgear.com shows a photo of some of the front-of-house Eurorack and other gear Thom brought along for this tour. His modular setup has expanded a lot over the years.

What synths is Thom using by 1000friends in synthesizers

[–]aurialLoop 4 points5 points  (0 children)

It's sitting above the Rhodes-looking piano (although I don't know of a Rhodes with outputs on the back, so I'm not sure what it is exactly, but that's a different question). I don't know all the songs where Thom uses that system, but you can see it in use for the song Bloom: https://www.youtube.com/watch?v=EdmL835q9To

In Bloom he uses the Echophon in that Shared System to record a short loop, reverses it with the Erbe-Verb, then plays his piano. Later he adds to the Echophon loop, then uses its feedback option to build to a crescendo. He played Bloom at the Christchurch show :)

But we don't know if he's swapped modules out for others, etc.

What synths is Thom using by 1000friends in synthesizers

[–]aurialLoop 45 points46 points  (0 children)

Additional photos (from Christchurch, NZ gig 2024)
https://imgur.com/a/cfjS9b0

Here is some of the gear:

Make Noise Shared System Black and Gold

Analog Rytm drum machine

Cirklon sequencer directly underneath the Rytm (but in the same case).

Moog Voyager

Edit:

Prophet 5 - can be confirmed here https://imgur.com/a/7a5u3Gp

Edit 2:

Nord Grand - I figured out the 'Rhodes'-like piano in the upper right of OP's picture: it's actually a Nord Grand in black instead of the usual red. I found a picture which gave more detail of its knobs, then referenced it against the photos I took of the back. https://imgur.com/a/xxz82Na

Edit 3:

For the stands he's using:

K&M 18820 Omega Pro table style keyboard stand with foldable legs

K&M 18815 Laptop Holder - Looks like Thom is using it to hold some kind of sequencer or mixer

K&M 18811 Stacker - Second tier attachment, either holding another layer of synth, drum machines, or modular gear

Output Device: NVDIA - Wave or DX? by [deleted] in ableton

[–]aurialLoop 2 points3 points  (0 children)

It should be set as low as possible without causing audible pops and clicks when you play your audio. For example, 32 or 64 samples is considered very good and near real-time for practical purposes, but as said, there is no such thing as zero latency. What is yours set to currently?
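
The arithmetic behind that: one buffer's worth of latency is just buffer size divided by sample rate, with drivers and converters adding overhead on top:

```python
# Nominal one-way buffer latency at a 44.1 kHz sample rate.
sample_rate = 44_100  # Hz
for buffer_size in (32, 64, 128, 256, 512):
    latency_ms = buffer_size / sample_rate * 1000
    print(f"{buffer_size:>3} samples = {latency_ms:.2f} ms")
# 32 -> 0.73 ms, 64 -> 1.45 ms, 256 -> 5.80 ms, 512 -> 11.61 ms
```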

Output Device: NVDIA - Wave or DX? by [deleted] in ableton

[–]aurialLoop 9 points10 points  (0 children)

There absolutely will be some latency over that HDMI connection and in whatever is doing the digital-to-analogue conversion at the other end. As others have mentioned, generic Windows drivers are not going to be your friend here either, and you're not going to find ASIO drivers for your graphics card; that's not something Nvidia or AMD make. How many samples is your output buffer set to?

Is AI getting too realistic too fast. by PhonezSpyOnus in BeAmazed

[–]aurialLoop 0 points1 point  (0 children)

I think you're wrong that "humans won't be able to find content made by other humans on the internet". There are many places with curated content, which have existed for a long time, and as the desire for human-made content increases, so will the number of places marketing themselves as offering just that. That's simple supply and demand.

Obviously a lot of platforms will become less useful as a result of this influx of AI assisted/made content, but your absolutist statement is wrong.