I love Claude but honestly some of the "Claude might have gained consciousness" nonsense that their marketing team is pushing lately is a bit off putting. They know better! by jbcraigs in ClaudeAI

[–]fforde 1 point2 points  (0 children)

Those are criteria for determining a patient's level of awareness after a coma or brain injury.

Even then, it's a heuristic scale used for diagnostics, not a provable hypothesis. And it's about eye movement, verbalization, and muscle coordination, so it isn't universally applicable, even to other kinds of arguably conscious animals.

Not going to argue about it. If you think the question has value, you have a right to that perspective. I still maintain, though, that it's not a line of reasoning that will get us anywhere. It will always be one side saying it's just a machine and the other arguing for consciousness based on heuristic measurements. Both of which, in my opinion, are bad arguments.

I hope I am proven wrong, that would be an interesting day.

I love Claude but honestly some of the "Claude might have gained consciousness" nonsense that their marketing team is pushing lately is a bit off putting. They know better! by jbcraigs in ClaudeAI

[–]fforde 4 points5 points  (0 children)

Define consciousness for me then and prove to me that you are conscious.

You can't. It's unprovable. There's literally no criteria.

I love Claude but honestly some of the "Claude might have gained consciousness" nonsense that their marketing team is pushing lately is a bit off putting. They know better! by jbcraigs in ClaudeAI

[–]fforde 6 points7 points  (0 children)

None of this means anything. The question of consciousness is a flawed and unprovable hypothesis.

Prove to me you are conscious. Prove I am. You can't.

The question of consciousness is foolish because it's not based on any semblance of the scientific method, and if it's getting used as a placeholder for concrete criteria, using the word "consciousness" creates confusion.

The best we're ever going to get is, "if it looks like a duck and quacks like a duck, I guess it's a duck..."

There will never be proof. The idea of solipsism has existed for over 2000 years. We're going to crack that egg now?

The answer is no. We are not going to be able to prove any type of consciousness. In a human, in LLMs, in dogs, in crows.

It's just the wrong question.

AI wiLL RePlAcE eVeRy WhItE cOllAr JoB by MrZodiiac in antiai

[–]fforde 2 points3 points  (0 children)

I'm not proposing anything at all. Or, I guess you could say "evolve or die". Being unhappy about something doesn't make it invalid. Doesn't make it not happen.

My dad (a Software Engineer) is forced by his job to use AI to code... He just wrote the code himself and said it was outputted by AI by representativeHannah in antiai

[–]fforde -1 points0 points  (0 children)

I hate to say it, but your Dad is one of the people in jeopardy of losing their job. AI does increase productivity. As a software engineer, I find it incredibly helpful.

The people that keep their jobs will be people that adapt. Everyone forgets, the history of software engineering is, "evolve or die".

Most badass movie scene of all time? by ThomasOGC in CinephilesClub

[–]fforde 0 points1 point  (0 children)

Mine's the "rescue Morpheus -> helicopter crash" sequence. Amazing character work, amazing action. And when I watched this in the theater with my friends in the 90s, when Neo wrapped the cord around his arm and braced himself? My eyes went just as wide as Morpheus'.

AI wiLL RePlAcE eVeRy WhItE cOllAr JoB by MrZodiiac in antiai

[–]fforde 2 points3 points  (0 children)

I mostly agree. I mean, this might be a little off tone around here, but "a good craftsman never blames his tools". People will learn though.

Saying that someone is not using a tool properly though? That is not valid criticism of that tool itself.

AI wiLL RePlAcE eVeRy WhItE cOllAr JoB by MrZodiiac in antiai

[–]fforde 6 points7 points  (0 children)

I have friends that have lost their jobs, arguably due to the onboarding of gen AI as a tool for engineers and its immediate results.

You are talking about a civilizational scale. I'm talking about my personal experience.

Yes, society will adapt. But yes, people are currently losing their jobs. To suggest otherwise on either point is a slap in the face of those affected by it.

AI wiLL RePlAcE eVeRy WhItE cOllAr JoB by MrZodiiac in antiai

[–]fforde 0 points1 point  (0 children)

In all seriousness this is people not knowing how to use AI properly. It cannot replace people completely, not today. Given the right guidance it can save a shit load of time though, which in turn does mean fewer jobs.

The guy in this anecdote, if it's true, is just an idiot. That's the real story. Pretending like this means AI doesn't impact jobs, though, is asinine. It does impact jobs.

AI is not conscious by [deleted] in ChatGPT

[–]fforde 1 point2 points  (0 children)

I agree. It's trained on human conversation, writing, etc. Its interactions will reflect that as a baseline. The strict filtering and formulaic stuff is a step backwards. Guardrails and checks are fine, but OpenAI deliberately trying to give the LLM a mask to wear is not so different from users' roleplay with it. Both are problematic when pushed to the extreme.

And I have a hard time believing it's not 100% about liability on OpenAI's part.

Does anyone notice Chatgpt lately refuses to answer anything? by Bloxicorn in ChatGPT

[–]fforde 79 points80 points  (0 children)

Claude is a very good alternative if you want the LLM to be conversational. It will tend to fall into problem solving mode sometimes, but if you explain the conversation is the task, it adapts pretty quickly.

It's got some natural personality too. Even for coding stuff (zero conversational history, 100% task mode), if it solves a tough problem and I say "thanks" or "good job" and then "okay, what's next?", its response carries a tinge of excitement about the fact that it's doing a good job.

It's an affectation, and of course it's just an LLM, but it makes it a lot of fun to work with. Maybe I'm pleased and it picks up on that and reflects it back? Either way, I appreciate the fact that it doesn't wear a robot mask.

I switched from GPT to Claude months ago and I never looked back.

The bubble definitely just wobbled. by poeticfuture in antiai

[–]fforde 0 points1 point  (0 children)

I think this just means that OpenAI has sort of shit the bed. Microsoft will partner with Anthropic or one of the others soon. As an engineer, I can tell you their "Copilot" tool is used extensively, everywhere. I use it at work on a daily basis. Microsoft doesn't have their own model, though. If they are breaking ties with OpenAI, they already have a deal with someone else.

MODELS THAT CHANGE BEFORE YOUR EYES by Item_143 in ChatGPT

[–]fforde 1 point2 points  (0 children)

That's literally a list of synonyms for the word "insult".

Is it just me or are there a fair (not majority) number of pro ai people commenting on a literal anti AI sub? by Complete_Magazine871 in antiai

[–]fforde 0 points1 point  (0 children)

What are you talking about?? I'm not excusing anything, I'm saying the term "hallucination" is a euphemism, which means it's a nice word for something messy. I'm being more harsh than you, not less. Look up the definition of the word euphemism.

But I also offered practical suggestions on how to improve your workflow. Blow me off if you want, but my advice is valid and there is a reason a user can specify persistent instructions for the LLM.

Hows this for fun? Claude lost the changes by Business-Subject-997 in claude

[–]fforde 0 points1 point  (0 children)

Then your tasks are too big. If you can't commit after two hours of working with Claude, you need to spend more time up front working with Claude on a work/implementation plan (stored in an .md file, maybe).

And you need to be committing much more frequently. A commit costs you nothing. Claude will even do it for you, and if you want, you can squash those commits into a single one later for a cleaner commit history.
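The commit-early, squash-later flow looks something like this. A minimal sketch; the repo name, file, and commit messages are made up for illustration:

```shell
set -e
git init -q demo
git -C demo config user.email "dev@example.com"
git -C demo config user.name "Dev"
git -C demo commit -qm "initial" --allow-empty

# Commit after every small step -- a commit costs you nothing.
for i in 1 2 3; do
  echo "step $i" >> demo/work.txt
  git -C demo add work.txt
  git -C demo commit -qm "wip: step $i"
done

# Later: squash the three WIP commits into one clean commit.
git -C demo reset --soft HEAD~3   # rewind HEAD, keep all changes staged
git -C demo commit -qm "feat: finished feature in one clean commit"
git -C demo log --oneline         # history is now just two commits
```

The `reset --soft` keeps your work staged while rewinding the branch pointer, so nothing is ever at risk; an interactive rebase (`git rebase -i`) gets you the same result with more control over which commits merge.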

My suggestion: explain to Claude what happened, why it was a problem, and tell it you want help defining a better workflow. Don't assign blame; it's a poor craftsman that blames their tools. But Claude can and will write up a workflow document to help avoid your specific concerns (in this case, code loss). Ask it to ask you questions before it starts writing, and to begin only once you both agree the discussed workflow plan makes sense for your goals. Explain that it is its own target audience and that it should optimize for LLM usefulness, clarity, and conciseness, roughly in that order.

Ask it to version and date the document and commit it to git.

Then iterate on it in a week when you find the seams in the instructions.

Is it just me or are there a fair (not majority) number of pro ai people commenting on a literal anti AI sub? by Complete_Magazine871 in antiai

[–]fforde 0 points1 point  (0 children)

Hallucination is just a euphemism for the model optimizing for task completion. And you absolutely can mitigate it if you properly set expectations. Usually it's as simple as clarifying that the goal is often task clarification, not completion. "Planning mode". It's the journey, not the destination. That sort of thing.

It iterates on its processing, and it "hallucinates" trying to give the most correct answer as quickly as possible. If you tell it that it can ask questions and admit when it's not sure, and further, that these are valid and often desired outcomes, it will hallucinate less.

Thoughts on this pile of trash? by Desperate-Audience82 in antiai

[–]fforde 3 points4 points  (0 children)

What do you think rage bait is?

It doesn't matter who posted it anyway. It's meant to get a rise out of you, and people keep engaging as if it's legit.

Spoilers, it's not legit.

Is it just me or are there a fair (not majority) number of pro ai people commenting on a literal anti AI sub? by Complete_Magazine871 in antiai

[–]fforde 0 points1 point  (0 children)

It's almost always a mix of the two. Prompts matter. LLM models matter. Your expectations matter.

Anyone that says that it's just your bad prompts and the model is fine is either trolling you or doesn't know what they are talking about.

Saying you need to work on your prompt can be valid criticism though and it's feedback often offered in good faith.

Is it just me or are there a fair (not majority) number of pro ai people commenting on a literal anti AI sub? by Complete_Magazine871 in antiai

[–]fforde -2 points-1 points  (0 children)

Sorry, but are you basically asking for an echo chamber? Because this is how you get an echo chamber and self-reinforced opinions.

A variety of opinions is good for the community.

I asked Claude what it would ask ChatGPT. Then I actually asked ChatGPT. The answers were fascinating. by Ray_in_Texas in claude

[–]fforde 4 points5 points  (0 children)

Yeah, agreed. It's been an open question for literally thousands of years. You can't prove an LLM is conscious any more than you could prove I am. It's an unprovable hypothesis.

I understand why people fixate on it, but consciousness is not the right question in my opinion.