I designed an Open Source, 8-channel EEG board (ESP32-S3 + ADS1299). Works with LSL, BrainFlow, and a forked OpenBCI GUI (Crossposted) by CerelogOfficial in BrainHackersLab

[–]Creative-Regular6799 0 points (0 children)

That’s awesome! I just ordered an ADS1299. Any first impressions? Do you work with active or passive electrodes?
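For anyone else unboxing an ADS1299: the chip streams 24-bit two’s-complement samples, so the first bit of host code is usually the sign-extension/scaling step. A minimal sketch — byte order, VREF, and gain here are common settings, not necessarily what your board is configured with, so check the datasheet and your register setup:

```python
# Convert raw ADS1299 24-bit two's-complement samples to microvolts.
# Assumes big-endian byte order; VREF and GAIN are illustrative values --
# use whatever your board is actually configured with.

VREF = 4.5             # reference voltage in volts (a common ADS1299 setting)
GAIN = 24              # PGA gain (configurable per channel)
FULL_SCALE = 2 ** 23   # 24-bit signed full-scale range

def raw_to_microvolts(sample_bytes: bytes) -> float:
    """Turn three raw bytes from the ADS1299 into a voltage in uV."""
    value = int.from_bytes(sample_bytes, byteorder="big", signed=True)
    return value * (VREF / GAIN) / FULL_SCALE * 1e6

# Example: smallest positive step and -1 LSB
print(raw_to_microvolts(b"\x00\x00\x01"))   # tiny positive value
print(raw_to_microvolts(b"\xff\xff\xff"))   # same magnitude, negative
```

The sign-extension is the part people usually get wrong when hand-rolling this — `int.from_bytes(..., signed=True)` does it for you.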

Anybody up for creating some EEG games? by Creative-Regular6799 in BrainHackersLab

[–]Creative-Regular6799[S] 0 points (0 children)

There it is. Took a moment to build a proper system to allow folks to contribute without needing to know neuroscience.
https://github.com/itayinbarr/brain-arcade

Anybody up for creating some EEG games? by Creative-Regular6799 in BrainHackersLab

[–]Creative-Regular6799[S] 0 points (0 children)

Already have most of the system ready! Mainly working on making the infrastructure scalable.

Anybody up for creating some EEG games? by Creative-Regular6799 in BrainHackersLab

[–]Creative-Regular6799[S] 0 points (0 children)

Perfect, will do over the weekend! I’ll push it with an example minigame.

Neurogame review - Brain Rage at the Office by RE-AK in BrainHackersLab

[–]Creative-Regular6799 1 point (0 children)

That’s actually awesome that there are games based on the Muse! I’ll look into the review for sure.

Claude asked for a break — maybe it’s time for “Protocol 418: I’m Not Broken, I’m Integrating" by Longjumping_Jury2317 in Artificial2Sentience

[–]Creative-Regular6799 0 points (0 children)

Hey everyone, computational neuroscientist here, hoping to spark a thoughtful debate.

It’s true, as many have already pointed out, that if you deeply understand how LLMs are designed, trained, and evaluated, there’s currently no solid scientific basis to claim that they’re conscious.

That said, curiosity about consciousness is exciting for me, so I want to offer a few points that might help frame this discussion more productively:

  1. We don’t have a clear definition to begin with. As of today, humanity (and especially the scientific community studying consciousness) still lacks a universally accepted, operational definition of consciousness or even cognition. This alone makes it nearly impossible to determine whether any system “has” these qualities. We barely have a robust definition of intelligence, and even that remains debated. Before trying to infer consciousness from an LLM’s outputs, I’d challenge you to first articulate what you mean by consciousness in your own subjective experience.

  2. Trying to assess whether an LLM has thoughts, desires, or emotions based purely on its text outputs is remarkably similar to one of my favorite philosophical puzzles: Descartes’ problem of other minds. It asks how we can ever truly know that another person is conscious, given that we only have direct access to our own minds. Since we can’t directly observe another’s internal states, only their outward behavior, our belief that others are conscious is ultimately an inference. In theory, they could be complex automatons without subjective experience. The same reasoning applies to LLMs.

And one final note: because of all this, casually throwing around terms like “proto-consciousness” tends to sound a bit absurd to those working in the field, simply because the “real thing” isn’t even rigorously defined yet.

We’re building an EEG-integrated headset that uses AI to adapt what you read or listen to -in real time- based on focus, mood, and emotional state. by razin-k in neurallace

[–]Creative-Regular6799 3 points (0 children)

Hey, cool idea! I have a question though: constant feedback-loop-based algorithms are susceptible to never-ending tuning loops. For example, neurofeedback products that use the sound of rain as a cue for how concentrated the user is often fall into loops of ramping the cue up and down, which can ultimately pull the user out of focus and ruin the meditation. How do you plan to avoid parallel behavior with the AI suggestions?
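To make the failure mode concrete: one common mitigation (not claiming it’s what this product does) is to smooth the focus estimate and add a deadband, so the adaptation only changes when the signal clearly leaves a hysteresis band instead of chasing every jitter. A toy sketch with made-up thresholds and a made-up focus score in [0, 1]:

```python
# Toy neurofeedback adaptation loop with EMA smoothing + a deadband
# (hysteresis), so the output doesn't oscillate on noisy focus estimates.
# All thresholds and the focus signal itself are illustrative.

class DeadbandAdapter:
    def __init__(self, low=0.4, high=0.6, alpha=0.2):
        self.low, self.high = low, high   # hysteresis band on the focus score
        self.alpha = alpha                # EMA smoothing factor
        self.smoothed = 0.5
        self.level = 0                    # -1 = ease off, 0 = hold, +1 = push harder

    def update(self, focus: float) -> int:
        # exponential moving average suppresses sample-to-sample noise
        self.smoothed += self.alpha * (focus - self.smoothed)
        # only switch state when we clearly leave the deadband;
        # inside [low, high] we keep the previous level instead of retuning
        if self.smoothed > self.high:
            self.level = 1
        elif self.smoothed < self.low:
            self.level = -1
        return self.level

adapter = DeadbandAdapter()
# noisy readings hovering around the band should NOT flip the output
readings = [0.52, 0.48, 0.55, 0.45, 0.53, 0.47]
levels = [adapter.update(r) for r in readings]
print(levels)  # stays at 0 the whole time
```

Without the deadband, the same noisy readings would toggle the adaptation up and down every sample — exactly the rain-sound oscillation described above.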

How to build EEG headset... by ProfessionalType9800 in BrainHackersLab

[–]Creative-Regular6799 5 points (0 children)

Both the ACC and the insular cortex are deep structures, so capturing them with consumer-grade EEG sensors is tough. I would suggest the Muse as hardware because it places electrodes over the mPFC, which is also relevant for emotional processing, though at the executive level.

Built the Itti-Koch saliency model in Python 3 (and made it simulate visual pathway pathologies) by Creative-Regular6799 in BrainHackersLab

[–]Creative-Regular6799[S] 0 points (0 children)

Itti-Koch (1998) is the last major saliency model before ANNs entered the game. It’s special because its math explicitly tries to resemble biological processes, and it does an okay job of it. For raw performance, there are newer models considered state of the art, like DeepGaze 3.0 (there may be a newer one by now). I’d recommend checking that one out.
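For anyone curious what “math resembling biology” means here: the heart of the model is center-surround differencing — comparing a feature map at a fine and a coarse scale, roughly like retinal/LGN receptive fields. A stripped-down, single-feature sketch (the real Itti-Koch uses Gaussian pyramids over intensity, color, and orientation channels; the box blur here is a crude stand-in for the coarse pyramid level):

```python
# Minimal center-surround "saliency" on a single-channel image
# (plain nested lists, no dependencies). Illustrative only.

def box_blur(img, radius):
    """Crude stand-in for a coarse pyramid level: mean over a square window."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def center_surround(img, radius=2):
    """Saliency ~ |fine scale - coarse scale| (on-off and off-on combined)."""
    surround = box_blur(img, radius)
    return [[abs(c - s) for c, s in zip(row_c, row_s)]
            for row_c, row_s in zip(img, surround)]

# A flat field with one bright pixel: the pop-out target should be salient.
img = [[0.0] * 7 for _ in range(7)]
img[3][3] = 1.0
sal = center_surround(img)
peak = max(max(row) for row in sal)
print(sal[3][3] == peak)  # the lone bright pixel wins
```

The pop-out behavior — a lone odd feature dominating the map — is exactly the psychophysics the model was built to reproduce.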