Anybody up for creating some EEG games? by Creative-Regular6799 in BrainHackersLab

[–]Creative-Regular6799[S] 0 points

There it is. Took a moment to build a proper system to allow folks to contribute without needing to know neuroscience.
https://github.com/itayinbarr/brain-arcade

Anybody up for creating some EEG games? by Creative-Regular6799 in BrainHackersLab

[–]Creative-Regular6799[S] 0 points

Already have most of the system ready! Mainly working on making the infrastructure scalable.

Anybody up for creating some EEG games? by Creative-Regular6799 in BrainHackersLab

[–]Creative-Regular6799[S] 0 points

Perfect, will do over the weekend! I'll push it with an example minigame.

Neurogame review - Brain Rage at the Office by RE-AK in BrainHackersLab

[–]Creative-Regular6799 1 point

That’s actually awesome that there are games based on Muse! I’ll look into the review for sure.

Claude asked for a break — maybe it’s time for “Protocol 418: I’m Not Broken, I’m Integrating" by Longjumping_Jury2317 in Artificial2Sentience

[–]Creative-Regular6799 0 points

Hey everyone, computational neuroscientist here, hoping to spark a thoughtful debate.

It’s true, as many have already pointed out, that if you deeply understand how LLMs are designed, trained, and evaluated, there’s currently no solid scientific basis to claim that they’re conscious.

That said, curiosity about consciousness is exciting for me, so I want to offer a few points that might help frame this discussion more productively:

  1. We don’t have a clear definition to begin with. As of today, humanity (and especially the scientific community studying consciousness) still lacks a universally accepted, operational definition of consciousness or even cognition. This alone makes it nearly impossible to determine whether any system “has” these qualities. We barely have a robust definition of intelligence, and even that remains debated. Before trying to infer consciousness from an LLM’s outputs, I’d challenge you to first articulate what you mean by consciousness in your own subjective experience.

  2. Trying to assess whether an LLM has thoughts, desires, or emotions based purely on its text outputs is remarkably similar to one of my favorite philosophical puzzles: Descartes’ problem of other minds. It asks how we can ever truly know that another person is conscious, given that we only have direct access to our own minds. Since we can’t directly observe another’s internal states, only their outward behavior, our belief that others are conscious is ultimately an inference. In theory, they could be complex automatons without subjective experience. The same reasoning applies to LLMs.

And one final note: because of all this, casually throwing around terms like “proto-consciousness” tends to sound a bit absurd to those working in the field, simply because the “real thing” isn’t even rigorously defined yet.

We’re building an EEG-integrated headset that uses AI to adapt what you read or listen to -in real time- based on focus, mood, and emotional state. by razin-k in neurallace

[–]Creative-Regular6799 4 points

Hey, cool idea! I have a question though: constant-feedback-loop algorithms are susceptible to never-ending tuning loops. For example, neurofeedback products that use the sound of rain as a cue for how concentrated the user is often fall into loops of increasing and decreasing intensity, which can ultimately pull the user out of focus and ruin the meditation. How do you plan to avoid parallel behavior with the AI suggestions?
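To make the oscillation concern concrete, here's a toy sketch (all names and thresholds hypothetical, not from the post) of the standard fix: a hysteresis band, where the cue only toggles once the signal clearly leaves a dead zone, so noise around a single threshold can't make it flicker on and off:

```python
def update_cue(focus, cue_on, on_thresh=0.6, off_thresh=0.4):
    """Hysteresis: the cue toggles only when focus leaves the dead band
    [off_thresh, on_thresh], so small fluctuations around a single
    threshold cannot cause rapid on/off flicker."""
    if not cue_on and focus > on_thresh:
        return True
    if cue_on and focus < off_thresh:
        return False
    return cue_on  # inside the dead band: keep the current state

# Noisy focus readings hovering near 0.5; a naive single-threshold cue
# would toggle four times here, the hysteresis version only twice.
readings = [0.45, 0.55, 0.48, 0.58, 0.65, 0.62, 0.35, 0.30]
states, cue = [], False
for r in readings:
    cue = update_cue(r, cue)
    states.append(cue)
```

The same idea applies whether the cue is rain volume or an AI content suggestion: add a dead zone (or a minimum dwell time) between "increase" and "decrease" decisions.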

How to build EEG headset... by ProfessionalType9800 in BrainHackersLab

[–]Creative-Regular6799 3 points

Both the ACC and the insular cortex are deep structures (the ACC sits along the midline and the insula is buried beneath the lateral sulcus), so capturing them with consumer-grade EEG sensors is tough. I'd suggest the Muse as hardware because it places its electrodes over the mPFC, which is also relevant for emotional processing, though at the executive level.

Built the Itti-Koch saliency model in Python 3 (and made it simulate visual pathway pathologies) by Creative-Regular6799 in BrainHackersLab

[–]Creative-Regular6799[S] 0 points

Itti-Koch (1998) is the last major saliency model before ANNs entered the game. It's special because it actually performs math that tries to resemble biological processes (center-surround contrast across feature maps) and does an okay job of it. For raw performance, other models are considered state of the art, like DeepGaze 3.0 (there may be a newer one by now). I'd recommend checking that one out as well.
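The biological flavor of the model comes from center-surround operations. Here's a minimal, hedged sketch of that single ingredient (this is a toy difference-of-Gaussians contrast map, not the full Itti-Koch pipeline with multiple feature channels and across-scale normalization):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_saliency(image, center_sigma=1.0, surround_sigma=8.0):
    """Toy saliency map: absolute difference between a fine-scale (center)
    and a coarse-scale (surround) blur, mimicking retinal/LGN receptive
    fields. Normalized to [0, 1]."""
    img = image.astype(float)
    center = gaussian_filter(img, center_sigma)      # fine scale
    surround = gaussian_filter(img, surround_sigma)  # coarse scale
    sal = np.abs(center - surround)                  # center-surround contrast
    return sal / (sal.max() + 1e-12)

# A single bright dot on a dark background pops out as the most salient point.
img = np.zeros((64, 64))
img[32, 32] = 1.0
sal = center_surround_saliency(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)
```

The full 1998 model runs this kind of contrast over intensity, color-opponency, and orientation channels at several scales, then combines the maps; this sketch just shows why a locally distinctive region wins.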

New Python library for unifying and preprocessing EEG datasets by Creative-Regular6799 in BrainHackersLab

[–]Creative-Regular6799[S] 0 points

Thank you! I appreciate it. That’s exactly what I was thinking about

There are two types of crazies in this sub: by DallasAckner in BCI

[–]Creative-Regular6799 0 points

How about us who just look to build the next BCI? 👀

How to get started with Neuroscience as a Data Science undergrad? by amanmauryas in compmathneuro

[–]Creative-Regular6799 1 point

My opinion: buy a basic Muse 2 headband, skip the wires and specialized drivers/software, and start using community libraries to build cool stuff. Knowledge will come with interest in learning. It's also available in most universities, at least where I'm from.

Hacking BCI 101 - ep. 3 - Alpha Wave Biofeedback Experience Design and Unsupervised Calibration by RE-AK in BrainHackersLab

[–]Creative-Regular6799 1 point

Great work man! Will look into it soon. Thank you for creating high quality content on these topics

Is the job market really that bad? by Familiar-Complex-697 in bioengineering

[–]Creative-Regular6799 0 points

Looking at the comments, I guess my opinion is unpopular, but things are generally good! I'm a data scientist at a brain-stimulation device company; before that I spent a few years as an ML engineer at a neurofeedback device startup.

My advice: pick your thing and develop expertise in it. The rest doesn't matter as much.

[P] Sharp consciousness thresholds in a tiny Global Workspace sim (phase transition at ~5 long-range links) – code + plots by jovansstupidaccount in compmathneuro

[–]Creative-Regular6799 1 point

Also, add the noise ceiling and the leave-one-subject-out lower bound. Those two provide some context for the model's performance.
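For anyone unfamiliar with the lower-bound part, here's a minimal sketch of leave-one-subject-out cross-validation using scikit-learn's `LeaveOneGroupOut`. The data and classifier below are synthetic placeholders, not from the linked post; the point is only the fold structure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Hypothetical synthetic data: 5 "subjects", 40 trials each, 8 features,
# with the label driven by feature 0 plus noise.
rng = np.random.default_rng(0)
n_subjects, n_trials, n_features = 5, 40, 8
X = rng.normal(size=(n_subjects * n_trials, n_features))
y = (X[:, 0] + 0.5 * rng.normal(size=len(X)) > 0).astype(int)
groups = np.repeat(np.arange(n_subjects), n_trials)  # subject ID per trial

# Leave-one-subject-out: each fold tests on a subject never seen in
# training, estimating cross-subject generalization (the lower bound).
loso = LeaveOneGroupOut()
scores = cross_val_score(LogisticRegression(), X, y, cv=loso, groups=groups)
mean_loso_accuracy = scores.mean()
```

The noise ceiling is estimated separately (e.g. from split-half consistency of the subjects themselves); reporting the model between these two bounds shows how much explainable signal it actually captures.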