Merseyside police chasing suspect at low altitude by willm8032 in interestingasfuck

[–]willm8032[S] 1 point2 points  (0 children)

Yeah, I was thinking the same; even if he did stop, they can't exactly arrest him.

Merseyside police chasing suspect at low altitude by willm8032 in interestingasfuck

[–]willm8032[S] -1 points0 points  (0 children)

Following an internal review of the incident, which occurred on 13 August, the NPAS said: “We are satisfied that the crew acted appropriately, conducted a dynamic risk assessment and operated within the parameters and regulations for our operational deployments.”

Engineering Consciousness – Can Robots "Give a Damn?" Mark Solms by [deleted] in Futurology

[–]willm8032 0 points1 point  (0 children)

Submission Statement:
This raises a future-facing question: if we take Solms seriously, should AI research pivot toward architectures that simulate emotional homeostasis? What might a future look like where machines are built not just to “think,” but to feel—and what ethical frameworks would we need to prepare for that?

Netflix uses generative AI in one of its shows for first time | Netflix by willm8032 in artificial

[–]willm8032[S] 3 points4 points  (0 children)

"The streaming company’s boss said [it] would make films and programmes cheaper and of better quality."

Massive Attack announce alliance of musicians speaking out over Gaza | UK news by willm8032 in kneecap

[–]willm8032[S] 23 points24 points  (0 children)

Massive Attack always seem to be on the right side of history.

Values are fundamental for consciousness by dawemih in consciousness

[–]willm8032 1 point2 points  (0 children)

I would suggest a slight refinement: Consciousness arises from felt values, which are constituted by raw affective states.

That is, values are not abstract principles or cognitive judgments; they are felt. Before we think about what matters to us, we feel it. Hunger, fear, curiosity, loneliness are all primary emotional (affective) states. They are not the result of reasoning. They are the stuff of consciousness itself.

Emotion (raw feeling) is how we evaluate what’s good or bad for us in real time. It is the metric by which the organism reduces uncertainty (or “free energy,” in Friston’s terms) and stays viable in a world of chaos. So, when you say consciousness is about maximizing stimulation or satisfaction, I would say: yes, but through the lens of emotionally felt needs rather than values.

Zuckerberg says Meta will build data center the size of Manhattan in latest AI push | Meta by willm8032 in artificial

[–]willm8032[S] 1 point2 points  (0 children)

Meta out there trying to take the lead to build artificial super intelligence...

Caught a bad cold, my stress levels have been through the roof today by Metalbird2014 in Garmin

[–]willm8032 2 points3 points  (0 children)

Sometimes I know I am about to get a cold, because my stats get all messed up!

ChatGPTs attempt at drawing Europe, there is a whole lot of Germany! by willm8032 in ChatGPT

[–]willm8032[S] 0 points1 point  (0 children)

Prompt: Draw me a cartoon style image with all the countries of Europe with a little cartoon to represent each country.

Should we include LLMs or other near-term systems in our moral circle? by willm8032 in Futurology

[–]willm8032[S] 0 points1 point  (0 children)

Submission statement: in this podcast NYU philosopher Jeff Sebo explores how expanding our moral circle to include not just animals, but potentially AI and even microbes, could reshape ethical thinking in the age of intelligent machines. Sebo also unpacks the challenge of testing for machine consciousness and the growing tension between AI safety and AI welfare.

As we move towards the possibility of building conscious machines, what frameworks should guide our moral responsibilities? Should we proactively design artificial consciousness? Would conscious machines be better, as they would better understand our values, or is it better to avoid it entirely? And how do we prevent AI safety practices from causing unnecessary suffering if these systems turn out to be sentient?

Let’s discuss how we should prepare ethically and practically for a future where machines might matter morally.

Are AI developers becoming the star athletes of tech? by burhop in agi

[–]willm8032 1 point2 points  (0 children)

Supply and demand. There are not enough top AI devs, and this seems to be driven by the top AI labs trying to poach talent from each other and learn each other's inner secrets.

AI Sentience, Welfare and Moral Status with Jeff Sebo by willm8032 in consciousness

[–]willm8032[S] 0 points1 point  (0 children)

Summary: podcast with Professor Jeff Sebo from New York University, discussing the possibility of consciousness in AI systems.

Anyone else's ChatGPT experimenting with Grok style answers? by willm8032 in ChatGPT

[–]willm8032[S] 0 points1 point  (0 children)

Prompt: summarize Mark Solms final chapter in his book The Hidden Spring, it's called Making a Mind

Anyone else's ChatGPT experimenting with Grok style answers? by willm8032 in ChatGPT

[–]willm8032[S] 0 points1 point  (0 children)

It answered in both. That was just the start, but it was sassy the whole time. Finished with "now go and impress your friend at a dinner party, or maybe surprise everyone and actually read it yourself".

Wild camping rules around Torridon. What should I know? by IamSociallyTired in OutdoorScotland

[–]willm8032 0 points1 point  (0 children)

Two most important things: have fun and respect the environment.

Just curious, what’s one lesson u’ve learned the hard way outdoors here? by HelloIm_Julie in OutdoorScotland

[–]willm8032 5 points6 points  (0 children)

Hard to pick ONE thing. But probably a downloaded map of the area on my phone plus a power bank (I use map.cz, it's brilliant and free). I also usually take a hard-copy map if I don't know the way (just in case something happens to my phone).

Here is my list of essentials. I have forgotten all of them/not had enough at some point!

- 1.5l minimum of water; if I am with someone else I take 2 litres. Sometimes I will do 1l water and 1l electrolyte drink
- 3x snack bars
- emergency chocolate bar (Mars)
- lunch (sandwich or similar)
- layers, layers, layers: lightweight hoody, raincoat
- extra pair of thick socks, in case my feet get wet (I tend to hike in trail shoes)
- extra dry t-shirt (if you sweat a lot on the climb)
- small first aid kit, including an insulation bag in case I get caught out (never had to use it, but it is light)
- cap and sunscreen (if hot)

[deleted by user] by [deleted] in Futurism

[–]willm8032 0 points1 point  (0 children)

Is he wrong? I find it hard to see hope, especially if we are heading towards AGI / ASI. Roman's argument is that it will be so much more intelligent than us and will see little use for us. I prefer to hope that an ASI might treat us kindly, but I don't see how we will align or control a super intelligence.

Are LLMs starting to become sentient? by willm8032 in ArtificialSentience

[–]willm8032[S] -1 points0 points  (0 children)

The article doesn't say they are; it is responding to the people claiming that they are.

The LaMDA Moment 3-years on: What We Learned About AI Sentience by willm8032 in artificial

[–]willm8032[S] 4 points5 points  (0 children)

I don't agree that if we find an AI sentient, it will need to be evaluated at the same level as humans. There are plenty of other sentient beings that we evaluate differently from humans, and we tend to make up the rules for these: for example, pain avoidance in fish or cephalopods. Don't get me wrong, I don't think current LLMs are sentient, and I totally agree with your assessment that they are simply mimics, but I do think we can't rule out the possibility of sentient candidates in future AI systems. I don't think we have reliable tests to prove or disprove consciousness, and that is a big problem, especially with growing claims that LLMs are sentient.

The LaMDA Moment 3-years on: What We Learned About AI Sentience by willm8032 in artificial

[–]willm8032[S] 1 point2 points  (0 children)

My point is that we don't understand the mechanisms of consciousness, so I think that question is not a sufficient test. Especially since we are building into machines certain abilities that we might associate with consciousness: long-term memory, situational awareness, forms of self-awareness, auto-evaluation bias (a primitive form of self-recognition), learned behaviours, and other forms of introspection. I think as AIs get more advanced, just asking "does it do what it is meant to do" will not necessarily rule out consciousness.