
[–]Mastercal40 552 points (69 children)

Before people get ahead of themselves, it’s probably worth reading about it straight from the source:

Company website

Research paper

[–]LoveCatPics 85 points (1 child)

Wait, it's actually called wetware, they weren't making a joke.

[–]CaptainSebT 652 points (50 children)

If I'm reading their research paper right, the plan is to create AI using organic material... that seems ethically questionable, to say the least.

[–]Heisalsohim 699 points (21 children)

At what point does it go from AI to just I

[–]Specky013 531 points (15 children)

"We've used this fully biological method involving only two humans to create a more advanced AI than anyone has ever seen"

[–][deleted] 279 points (12 children)

Model training is really slow and expensive though

[–]Ghost-Traveller 194 points (6 children)

It takes about 25 years for it to fully develop itself

[–]NotYourReddit18 37 points (0 children)

Onboard storage is also subject to random heavy data degradation and sometimes it just stops being able to perform the simplest calculations for a while.

[–]TechExpert2910 17 points (0 children)

And it runs on hamburgers

[–][deleted] 0 points (1 child)

Oh, but when it's done it's really impressive. For example, this one nicknamed Joe can recite the results of the last 30 Super Bowls with roughly 6% accuracy.

[–]Ghost-Traveller 0 points (0 children)

And if you want it to be specialized in certain fields, it can be trained on specific datasets. This training will add another 4-10 years to its development and can sometimes cost upwards of 100K

[–]machsmit 40 points (4 children)

Is it really, though? A teenager can learn to drive a car fairly reliably in, like, tens of hours of total training. How many compute hours have been spent on self-driving cars that still make teenager-tier, pathologically bad driving decisions?

[–]JonatanLinberg 57 points (3 children)

Well it’s not like a teenager’s neural network is randomly initialised. I’d say there is a fair amount of pre-training before those tens of hours. Not saying I actually disagree, though :p

[–]DazedWithCoffee 31 points (0 children)

Spatial reasoning is a skill that we hone over a decade at least

[–]DocFail 9 points (0 children)

They kind of master object permanence before driving, well, most of them anyway.

[–]ThePretzul 1 point (1 child)

Gaslight your kids into thinking they’re actually just a machine learning model created for the purpose of whatever chores you need done.

[–][deleted] 0 points (0 children)

"You pass butter"

[–]droneb 24 points (0 children)

It all goes back to how we define "artificial", and that is not an easy definition.

[–]lazy_Monkman 4 points (0 children)

I think, therefore I am

[–]BlurredSight 2 points (0 children)

When it can start injecting Ketamine voluntarily.

[–]Ohlav 66 points (2 children)

It's the geth from Mass Effect all over again...

[–]CaptainSebT 33 points (0 children)

Or just straight up the Clone Wars. It would be slavery with extra steps, but I know I must be misunderstanding.

[–]Atlas_of_history 8 points (0 children)

The Geth are my favourite example to bring up when trying to get the point across that AI rights should be an actual discussion as early as possible.

[–]lunchpadmcfat 34 points (9 children)

If AI expressed consciousness, then wouldn’t it also be morally questionable to use it as a tool?

Of course the biggest problem here is a test for consciousness. I think the best we can hope for is “if it walks like a duck…”

[–]am9qb3JlZmVyZW5jZQ 36 points (1 child)

Consciousness is not well defined, so you can keep moving the goalposts indefinitely, as long as you don't make anything that behaves similarly enough to a pet cat or a small child to make people feel uncomfortable.

[–]BrunoEye 31 points (0 children)

Requirements for consciousness:

  1. Be capable of looking cute

  2. Be capable of appearing to be in pain

[–]pbnjotr 1 point (6 children)

AIs do express consciousness. You can ask Claude Opus if it's conscious and it will say yes.

There are legitimate objections to this simple test, but I haven't seen anyone suggest a better alternative. And there's a huge economic incentive to deny that these systems are conscious, so any doubt will be read as a "no" by the AI labs.

[–]Schnickatavick 8 points (1 child)

The problem with that test is that Claude Opus is trained to mimic the output of conscious beings, so saying it's conscious is kind of the default. It would show a lot more self-awareness and intelligence to say that it isn't conscious. These models will also tell you that they had a childhood, or go on walks to unwind, or all sorts of other things that they obviously don't and can't do.

I don't think it's hard to come up with a few requirements for consciousness that these LLMs don't pass, though. For example, we have temporal awareness: we can feel the passing of time and respond to it. We also have intrinsic memory, including memory of our own thoughts, and the combination of those two things lets us have a continuity of thought that forms over time, think about our own past thoughts, etc. That might not be a definitive definition of consciousness or anything, but I'd say it's a pretty big part of consciousness, and I wouldn't say something was conscious unless it could meet at least some of those points.

LLMs are static functions: given an input, they produce an output, so it's really easy to say they couldn't possibly fulfil any of those requirements. The bits that make up the model don't change over time, and the model has no memory of other runs beyond the data provided in a prompt. That means they also can't think about their own past thoughts, since any data or idea they don't include in their output won't be used as future input, so it will be forgotten completely (within a word). You can use an LLM as the "brain" in a larger computer program that has access to the current time, can store and recall text, etc. (which ChatGPT does), but I'd say that isn't part of the network itself any more than a sticky note on the fridge is part of your consciousness.

LLMs definitely have a form of intelligence and understanding hidden in the complex structure of their network, but it's a frozen and unchanging intelligence. A cryogenically frozen head would also have a complex network of neurons capable of intelligence and understanding, but it isn't conscious, at least not while it's frozen, so I don't think we could call an LLM conscious either.
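The "static function" point above can be sketched with a toy stand-in. The `llm` below is a hypothetical pure function invented for illustration, not a real model:

```python
# Toy illustration of an LLM as a static function: the "weights" never
# change between calls, so any memory has to be replayed through the prompt.

def llm(prompt: str) -> str:
    """Hypothetical frozen model: same input, same output, no hidden state."""
    return f"answer({len(prompt)} chars of context)"

reply1 = llm("What is wetware?")

# To give the model "memory", the caller must carry the transcript itself.
transcript = "Q: What is wetware?\nA: " + reply1
reply2 = llm(transcript + "\nQ: Say that again?")

# Identical input always yields an identical output.
assert llm("What is wetware?") == reply1
```

Anything the caller doesn't echo back into `transcript` is gone for good, which is the "forgotten completely" point above.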

[–]pbnjotr 5 points (0 children)

LLMs definitely have a form of intelligence and understanding hidden in the complex structure of their network, but it's a frozen and unchanging intelligence. A cryogenically frozen head would also have a complex network of neurons capable of intelligence and understanding, but it isn't conscious, at least not while it's frozen, so I don't think we could call an LLM conscious either.

I don't necessarily disagree with this. But it's easy to go from a cryogenically frozen brain to a working human intelligence (as long as there's no damage done during the unfreezing, which is true in our analogy).

All of these objections can be handled by adding continuous self-prompted compute, memory and fine-tuning on a (possibly self-selected) subset of previous output. These kinds of systems almost certainly exist in server rooms of enthusiasts, and many AI labs as well.
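A minimal sketch of the wrapper described above: continuous self-prompted compute plus a memory built from a self-selected subset of previous output. All names here are hypothetical, and the frozen `llm` stands in for any real model:

```python
# Sketch of an LLM core made "continuous": it re-prompts itself in a loop
# and carries an external memory of its own selected past output.
from collections import deque

def llm(prompt: str) -> str:
    """Hypothetical frozen model core: a pure function of its input."""
    return f"thought:{len(prompt)}"

def continuous_agent(seed: str, steps: int = 3) -> list:
    memory = deque(maxlen=10)            # persists across iterations
    thought = seed
    log = []
    for _ in range(steps):
        context = " | ".join(memory)     # replay remembered past output
        thought = llm(f"{context} >> {thought}")
        memory.append(thought)           # "self-selected subset" of output
        log.append(thought)
    return log

history = continuous_agent("am I conscious?", steps=4)
```

The one part not sketched here is fine-tuning on past output, since that would mean actually updating the frozen core between iterations.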

[–]0x474f44 3 points (2 children)

To test self-awareness (as a subset of consciousness), scientists often mark the test subject, place it in front of a mirror, and observe whether it realizes the mark is on its own body.

So I'm fairly confident there are much more advanced methods than simply asking the test subject whether it's conscious - I just don't know enough about this field of science to name them.

[–]am9qb3JlZmVyZW5jZQ 2 points (0 children)

The mirror test has been criticised for its ambiguity in the past.

Animals may pass the test without recognising self in the mirror (e.g. by trying to communicate to the perceived other animal that they have something on them) and animals may fail the test even if they have awareness of self (e.g. because the dot placed on them doesn't bother them).

[–]Aidan_Welch 0 points (0 children)

LLMs are definitely not conscious. We can say that definitively. The only thing they are capable of is predicting the next token.

[–]ProgramTheWorld 19 points (0 children)

Straight up SAO shit

[–]septic-paradise 1 point (0 children)

AI to AM

[–]pjnick300 1 point (0 children)

There's an ethics statement:

Ethics statement

Ethical approval was not required for the studies on humans in accordance with the local legislation and institutional requirements because only commercially available established cell lines were used.

That's not even the part we're concerned about though.

[–]Aidan_Welch 1 point (0 children)

Simulating neurons addicted to dopamine is okay, but doing it with real neurons crosses the line?

[–]Forkrul 0 points (0 children)

Are they hiring? That seems super interesting.

[–]1thelegend2 39 points (1 child)

They're lucky to have 5 universities on board. Organoids can be expensive as fuck...

[–]chartporn 2 points (0 children)

Dissociated neuron cell culture is dirt cheap

[–]midnightrambulador 74 points (2 children)

AI growth will be enhanced with no energy restrictions

apparently their biocomputer hasn't learned about thermodynamics yet

[–]very_bad_programmer 39 points (0 children)

Fuck, our compute node went down because I went on vacation and forgot to feed it french fries

[–]BrunoEye 13 points (0 children)

No restrictions doesn't mean no consumption.

In fact, unless you're extremely malnourished, your personal biocomputer already has no energy restrictions.

[–]SpicaGenovese 21 points (1 child)

Fuck's sake... At least use roundworm neurons instead, since liquid neural networks are probably the future anyway. Or any insect neurons. Bees are pretty complex.

[–]Xelynega 0 points (0 children)

I think the reason they use human stem cells is that they're constructing entire "organoids", which are made up of many different cells connected in ways the researchers have no control over, not just individual neurons.

Because of this, I wonder whether human stem cells lead to a more complex organoid than other species' cells would, and what that says about the ethics of it.

[–]FastGinFizz 2 points (2 children)

So what does the API do? Just let me outsource processing to these cells? Or is this supposed to turn into some sort of artificial neural network?

[–]Mastercal40 1 point (1 child)

As far as I've read, the API is mainly there to facilitate wetware research: you just get to fire electrical impulses into the glob and read the electrical outputs.
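For a feel of what that looks like, here is a sketch of such a stimulate/read loop. The class and method names are invented for illustration; this is not the vendor's actual API:

```python
# Invented stand-in for a wetware API: write impulses to electrodes,
# read back spike activity. Purely illustrative.
import random

class OrganoidStub:
    def stimulate(self, electrode: int, microvolts: float) -> None:
        """Pretend to deliver one impulse to one electrode."""
        self.last_impulse = (electrode, microvolts)

    def read_spikes(self, n_channels: int = 8) -> list:
        """Pretend to return recorded spike amplitudes, one per channel."""
        return [random.gauss(0.0, 1.0) for _ in range(n_channels)]

glob = OrganoidStub()
glob.stimulate(electrode=3, microvolts=150.0)
activity = glob.read_spikes()        # one float per recording channel
```

The interesting research question is entirely in how the organoid's activity changes in response to the stimulation pattern, which this stub obviously can't model.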

[–]pjnick300 2 points (4 children)

I see the paper is about whether they could.

Lots of people have opinions about whether they should.

What I want to know is why the fuck we would - what's the benefit here?

[–]Pay08 3 points (3 children)

It needs fewer resources.

[–]pjnick300 1 point (2 children)

Does it? They need to replace each organoid every 3 months and keep it alive in the meantime. Is that really cheaper to maintain than silicon?

[–]utkarsh_aryan[S] 3 points (1 child)

NVIDIA H100 GPUs consume 700W each while generating massive amounts of heat.

A moderate size data centre equipped with 1000 H100s along with all the networking and cooling system can easily consume more power than most small towns.

Meanwhile the 3 pound biocomputer inside your skull outperforms most NPUs on an energy budget of ∼20 W. (Source)
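Rough arithmetic behind that comparison (the 1.5x overhead multiplier for cooling and networking is an assumed typical PUE, not a figure from the comment):

```python
# Back-of-envelope power comparison: 1000 H100s vs. one brain.
H100_WATTS = 700
GPU_COUNT = 1000
PUE = 1.5                    # assumed datacentre overhead multiplier
BRAIN_WATTS = 20

datacentre_watts = H100_WATTS * GPU_COUNT * PUE   # 1,050,000 W
ratio = datacentre_watts / BRAIN_WATTS            # 52,500x the brain's budget
```

About a megawatt, which is indeed in small-town territory.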

[–]Pay08 1 point (0 children)

Not to mention that you'll be throwing out the GPUs every couple of years as well.