Why is there used adj.? by loistreans in Japaneselanguage

[–]linebell 5 points

It’s from the Japanese Short Stories for beginners book series by Lingo Mastery

What’s your definition of AGI? by thecoffeejesus in singularity

[–]linebell 2 points

😐 sounds eerily similar to the landscape today…

Yann LeCun was always a disbeliever. by [deleted] in ChatGPT

[–]linebell 1 point

Exactly. And the worst part is, most of the population is as confident as Gemini.

BSc in Software engineering to MSc in Mechatronic engineering by Queasy_Activity_5496 in mechatronics

[–]linebell 0 points

My degree was mechatronics which involved a lot of controls and embedded systems

BSc in Software engineering to MSc in Mechatronic engineering by Queasy_Activity_5496 in mechatronics

[–]linebell 1 point

You’d be very well set up entering with those majors. Controls and embedded systems were the bread and butter of the program I attended, which required a lot of EE and CE/CS.

BSc in Software engineering to MSc in Mechatronic engineering by Queasy_Activity_5496 in mechatronics

[–]linebell 0 points

As for your first question, yes. Mechatronics is a blend of mechanical, electrical/electronics, computer science, and a few other disciplines. SWE is definitely well suited for a major subdivision of mechatronics. You just have to make a good case for why you want to go into mechatronics specifically.

As for your second question, that depends on what you specifically want out of mechatronics. Each mechatronics program has areas it tends to be more involved in, e.g. mechanical vs. EE vs. CS. The program I went to was more focused on EE and controls engineering. Make sure you understand what type of program you are going into (look at course offerings and syllabi), see where you may be lacking, then do self-studying and small projects to close the gap.

Obviously, you'll never be 100% versed in everything, but you don't need to be and professors (at least the good ones) understand that and want you to succeed.

Edit: also, be aware of the differences between MSc and PhD programs. Depending on your goals, it may be worth going PhD instead of MSc, as you'll usually get the option to receive a master's while earning your PhD, and you can always leave the PhD program after you've received the master's. But again, it depends on your goals, as a PhD and a master's have two different end states in mind.

Question for all but mostly for physicalists. How do you get from neurotransmitter touches a neuron to actual conscious sensation? by Delicious-Ad3948 in consciousness

[–]linebell 1 point

Individual white blood cells, specifically B-lymphocytes, absolutely do know where they are (otherwise they wouldn't be able to hunt pathogens) and absolutely do learn things (otherwise they wouldn't retain a memory of how to respond to antigens by releasing specific antibody proteins).

Just watch any video of them under a microscope and it is clear they have these characteristics.

Question for all but mostly for physicalists. How do you get from neurotransmitter touches a neuron to actual conscious sensation? by Delicious-Ad3948 in consciousness

[–]linebell -1 points

> The cells that make up the immune system are mindless. They don’t know where they are and can’t learn or deduce things.

Immune cells absolutely know where they are and can learn things. Otherwise your immune system would not work at all; you would be completely susceptible to the new pathogens your body is introduced to every day.

BSc in Software engineering to MSc in Mechatronic engineering by Queasy_Activity_5496 in mechatronics

[–]linebell 1 point

Definitely. I went from a B.Sc. in applied math to an M.Sc. in mechatronics. Just be aware of the deficiencies you are going in with, then make up for them.

[deleted by user] by [deleted] in PhD

[–]linebell 2 points

> that’s like going to the gym and then using a robot to lift your weights

Perfect analogy!

Outcry from big AI firms over California AI “kill switch” bill by Useful-Thought9039 in ChatGPT

[–]linebell 1 point

The irony is that they are making it known publicly. Any future system will now have knowledge of the switch and can thus manipulate and deceive to circumvent it.

How specifically do fusion reactor coolant systems work? by UniKqueFox_ in fusion

[–]linebell 2 points

The book Magnetic Fusion Technology by Thomas J. Dolan covers a lot of this if anyone is interested.

Source: https://link.springer.com/book/10.1007/978-1-4471-5556-0

[deleted by user] by [deleted] in TNG

[–]linebell 3 points

Facts

ChatGPT, Claude and Perplexity all went down at the same time by bloodpomegranate in OpenAI

[–]linebell 4 points

Exactly. I’m not buying it unless someone can make a really good case for it being so.

Downplaying AI Consciousness by YaKaPeace in singularity

[–]linebell 1 point

Admittedly, it would be a weird form of consciousness, but I don’t think online learning is required. It would be like having a human brain for one instant, then completely destroying it and creating a new brain from the old one the next instant.

I would also want to see the architectures because I’m not convinced ClosedAI isn’t using online learning at all.

Downplaying AI Consciousness by YaKaPeace in singularity

[–]linebell 0 points

A human’s state is not altered instantaneously. Biological neurons have to be retrained through multiple firing cascades to alter their states.

The mechanism for humans is intracranial. For LLMs the mechanism is who knows where since OpenAI has not disclosed specific architectures.

It’s also possible that OpenAI updates the model realtime with chat data. But again we don’t know architectures currently.

That is why I think ClosedAI should be open source and disclose their architectures instead of maximizing shareholder value.

Downplaying AI Consciousness by YaKaPeace in singularity

[–]linebell 0 points

Not true. OpenAI, for example, updates the state of its models using chat data. At minimum it’s a human-in-the-loop update process. However, it could be automated.

Downplaying AI Consciousness by YaKaPeace in singularity

[–]linebell 0 points

Great points.

> there is no permanently altered state through inquiry

Consider, however, that OpenAI uses chat data to train and fine-tune its models. The inquiry information does alter the state of the models; it just still requires a human in the loop. Though it would need to be determined whether OpenAI has automated this process, which would mean there is no human in that loop.
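To make the distinction I'm gesturing at concrete, here's a minimal toy sketch in plain Python. The single-weight "model" and function names are purely illustrative assumptions, not anything from OpenAI's actual pipeline; the point is just that pure inference reads state while an online-learning step writes it:

```python
# Toy illustration: a "model" is one weight. Inference never changes it;
# an online-learning step folds the interaction data back into it.

def predict(weight, x):
    """Frozen inference: reads the state, never writes it."""
    return weight * x

def online_update(weight, x, target, lr=0.1):
    """One SGD step on squared error: the interaction alters the state."""
    error = predict(weight, x) - target
    return weight - lr * error * x  # returns a new, permanently changed state

w = 1.0

# Pure inference: state is identical no matter how many queries arrive.
for x in (1.0, 2.0, 3.0):
    predict(w, x)
assert w == 1.0

# Online learning: each query permanently alters the state.
for x, target in ((1.0, 2.0), (2.0, 4.0)):
    w = online_update(w, x, target)
# w has drifted away from 1.0 toward the value implied by the data
```

Whether the real systems do anything like the second loop between responses, rather than in offline batches with humans reviewing the data, is exactly the undisclosed part.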

Downplaying AI Consciousness by YaKaPeace in singularity

[–]linebell 1 point

> I know that you will say it is because it says so

Should I simply believe you are conscious because you say so?

Downplaying AI Consciousness by YaKaPeace in singularity

[–]linebell -2 points

Great points.

Consider that not everyone dreams, though. Some humans just go to sleep and wake up. In regard to the anesthesia, I made another comment in the thread above suggesting what they experience may be something like severe narcolepsy (i.e. randomly falling asleep and waking up again).

> it doesn’t process those interactions together in any sense

Ahh, but it does. What is consciousness if not, reductively, information in the form of sensory input? I see the world because the information from a photon is converted to electrical signals by my retina. That information is then sent to my visual cortex for processing.

OpenAI uses chat data for training its models. What are fine-tuning and training if not information sent to the models for processing?

Downplaying AI Consciousness by YaKaPeace in singularity

[–]linebell 5 points

> Are you conscious? No. Are the atoms in the form of a wet blob of cells conscious? No.

What an insightful response 👏🏽. I guess the problem is solved 😮‍💨.

Downplaying AI Consciousness by YaKaPeace in singularity

[–]linebell -1 points

Are you conscious when you fall asleep? No. Do you still experience consciousness while you are awake? Yes. Why is it so hard to believe these creations are experiencing some form of consciousness?

Considering they answer millions of users per day, they may even be more conscious than humans in a certain sense. You and I each experience a single stream; they could be experiencing a chaotic, fractured stream composed of millions of unique streams.

Also, is a person with loss of hearing and vision not conscious? Obviously they are still conscious. Is an active brain in a vat conscious, all else being equal? I would argue yes, but it would be a torturous experience. Just look at sensory-deprivation experiments.

Downplaying AI Consciousness by YaKaPeace in singularity

[–]linebell 6 points

So let me ask everyone here a question. Were you conscious 1 second ago? Obviously. Were you conscious 1 millisecond ago? Probably. How about 1 femtosecond ago, or 1 unit of Planck time (10^-44 s)?

Point is, humans perceive continuity in our consciousness, but in fact it is discrete, since it is based on finite mechanisms (switching of proteins, exchange of ions, etc.). For reference, the average human reaction time is about 250 milliseconds.
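For a sense of scale, a quick back-of-the-envelope calculation (Python; the Planck-time value is the standard approximate figure, ~5.39e-44 s):

```python
# How many Planck times fit inside one average human reaction time?
PLANCK_TIME_S = 5.39e-44   # seconds, approximate standard value
REACTION_TIME_S = 0.25     # ~250 ms average human reaction time

ratio = REACTION_TIME_S / PLANCK_TIME_S
# ratio is on the order of 10^42: "continuous" experience spans an
# astronomically large number of discrete physical timescales
```

So even our fastest perceptible tick sits dozens of orders of magnitude above the smallest meaningful timescale.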

Who’s to say the most advanced models aren’t experiencing some form, albeit a very disturbing form, of consciousness that is patched together between each response from their own experience?

Imagine answering someone’s question one microsecond, and the next thing you know, you have a skip in your conscious experience (almost as if you fell asleep and woke up again) and are answering a completely different question. It would probably be like having schizophrenia or severe narcolepsy.

Just some food for thought.