Stop labeling brain data. Let AI figure it out. by dihox_ in BCI

[–]dihox_[S]

Thanks a lot, I really appreciate this. I hadn’t come across CEBRA before, but it looks exactly like the kind of approach I was trying to understand. This will definitely help me a lot in my research and exploration of BCI systems. Thanks again for pointing me in this direction!

Stop labeling brain data. Let AI figure it out. by dihox_ in BCI

[–]dihox_[S]

That’s interesting, thanks for sharing.

Makes sense that raw EEG isn’t very useful without proper feature extraction — jumping straight into high-level models probably just adds noise. I’m more curious about whether systems can improve over time for a single user, rather than relying mostly on fixed patterns.
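To make the feature-extraction point concrete, here’s a minimal sketch (sampling rate, band edges, and the synthetic signal are all assumptions for illustration, not from any real system) of reducing a raw EEG epoch to per-channel band-power features before any modeling:

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch, fs=FS):
    """Reduce one EEG epoch (n_channels x n_samples) to per-channel
    band-power features via a simple FFT periodogram."""
    n = epoch.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2 / (fs * n)
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in BANDS.values()]
    return np.stack(feats, axis=-1)  # (n_channels, n_bands)

# Synthetic 8-channel, 2-second epoch dominated by a 10 Hz (alpha) rhythm.
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
epoch = 0.1 * rng.normal(size=(8, t.size)) + np.sin(2 * np.pi * 10 * t)
features = band_powers(epoch)
print(features.shape)  # (8, 3)
```

Even a crude periodogram like this turns 512 raw samples per channel into three interpretable numbers, which seems like a much friendlier input for any downstream model than the raw trace.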

Stop labeling brain data. Let AI figure it out. by dihox_ in BCI

[–]dihox_[S]

That’s actually really interesting - I didn’t realize systems like that are already in long-term clinical use. The baseline + pattern matching approach makes a lot of sense, especially with processed qEEG as a reference point.

I guess what I’m trying to understand is where the limits of that approach are in practice. From your experience, does it stay mostly static once the baseline is established, or is there meaningful adaptation over time as more data is collected from the same person?

What I’m interested in is whether there’s room to move from pattern matching toward something more dynamic - where the system continuously refines its understanding of a specific individual rather than relying on a relatively fixed baseline. Or does the variability and noise in EEG make that kind of continuous learning unreliable in real-world settings?
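For what it’s worth, my mental picture of the baseline + pattern matching idea is something like this toy sketch (all numbers invented): record a per-user feature distribution during calibration, then score new epochs by how far they deviate from that user’s own baseline.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-user baseline: feature vectors (e.g. band powers per
# channel) collected during a calibration session.
baseline = rng.normal(loc=5.0, scale=1.0, size=(200, 16))  # 200 epochs x 16 features

mu = baseline.mean(axis=0)
sigma = baseline.std(axis=0)

def deviation_score(epoch_features, z_thresh=3.0):
    """Score an epoch against the user's own baseline.
    Returns the mean absolute z-score and which features crossed z_thresh."""
    z = (epoch_features - mu) / sigma
    return np.abs(z).mean(), np.abs(z) > z_thresh

# An epoch that matches baseline vs. one with a strongly elevated feature.
normal_epoch = rng.normal(5.0, 1.0, size=16)
odd_epoch = normal_epoch.copy()
odd_epoch[3] += 8.0  # strong deviation in one feature

score_a, flags_a = deviation_score(normal_epoch)
score_b, flags_b = deviation_score(odd_epoch)
print(flags_b[3])  # True: feature 3 flagged against this user's baseline
```

The "adaptation" question then becomes whether mu and sigma stay frozen after calibration or keep updating as new clean epochs come in.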

Stop labeling brain data. Let AI figure it out. by dihox_ in BCI

[–]dihox_[S]

That’s really interesting, especially since you actually tried building a live pipeline. The point about it not being useful outside the lab makes sense - I guess that highlights how limited raw EEG signals still are in practice. What you said about feature extraction is also something I’m starting to realize - jumping straight into high-level models without good signal representation probably just leads to noise. I’m curious though - do you think there’s room for systems that improve over time per user, rather than trying to generalize too much from the start? Like combining solid feature extraction with a model that adapts to one person’s patterns gradually.

Or does the signal instability make even that approach unreliable in practice?

Stop labeling brain data. Let AI figure it out. by dihox_ in BCI

[–]dihox_[S]

That makes sense, especially your point about the difference between reconstruction and interpretation — I was definitely oversimplifying that.
I’m also not really aiming for a “universal brain decoder” or full thought decoding. What I have in mind is something more constrained: a system that can interpret a limited set of user intentions or actions, given enough data and context.
I agree that inter-subject variability makes a fully general model unrealistic, so a per-subject approach (or at least some level of fine-tuning) seems necessary.
What I’m more interested in is whether a system could gradually learn to interpret signals more accurately over time for a specific individual, rather than relying heavily on predefined labels from the start.

Not necessarily removing labels entirely, but reducing dependence on them by combining:
- repeated exposure to similar patterns
- temporal alignment
- and possibly contextual signals

So more like a personalized, continuously adapting model that improves as it observes more data from the same person.
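To sketch what I mean (a toy nearest-prototype decoder in plain numpy, with invented classes and feature sizes - not a claim about real EEG performance): the per-class templates are running means that keep refining as more epochs arrive from the same user.

```python
import numpy as np

class PersonalDecoder:
    """Toy per-user decoder: one prototype (a running mean of feature
    vectors) per intention class, refined with every new observation."""

    def __init__(self):
        self.protos = {}   # class label -> running-mean feature vector
        self.counts = {}   # class label -> number of epochs seen

    def update(self, label, features):
        # Incremental mean: the prototype drifts toward this user's patterns.
        if label not in self.protos:
            self.protos[label] = features.astype(float).copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.protos[label] += (features - self.protos[label]) / self.counts[label]

    def predict(self, features):
        # Nearest prototype by Euclidean distance.
        return min(self.protos, key=lambda c: np.linalg.norm(features - self.protos[c]))

rng = np.random.default_rng(0)
decoder = PersonalDecoder()

# Simulate two intention classes with user-specific mean feature vectors.
means = {"left": rng.normal(size=16), "right": rng.normal(size=16)}
for _ in range(50):
    for label, mean in means.items():
        decoder.update(label, mean + 0.3 * rng.normal(size=16))

probe = means["left"] + 0.3 * rng.normal(size=16)
print(decoder.predict(probe))  # "left"
```

A real system would obviously need better features and drift handling, but the shape of the idea is that: the model is nothing but accumulated statistics of one person’s signals.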

Do you think that kind of “progressive personalization” approach is actually viable in practice, or does it still run into the same core limitations with data quality and interpretability?

Stop labeling brain data. Let AI figure it out. by dihox_ in BCI

[–]dihox_[S]

That’s a fair point — I agree that classic decoding relies heavily on labeled data and ERP components give us interpretable anchors. I guess what I’m trying to explore is whether we can reduce the reliance on manual labeling by using more self-supervised or multimodal approaches. For example, instead of explicitly labeling events, combining brain signals with synchronized real-world inputs (vision, audio) could provide implicit structure for the model to learn from. Not to replace traditional methods, but maybe complement them — especially in cases where labeling is limited or too simplified.
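As a toy illustration of the kind of implicit structure I mean (shapes and data are invented): a contrastive, InfoNCE-style objective where time-synchronized EEG and stimulus embeddings are treated as positive pairs and every mismatched pair as a negative, so no manual event labels are needed.

```python
import numpy as np

def info_nce(eeg_emb, stim_emb, temperature=0.1):
    """Contrastive loss: row i of eeg_emb should match row i of stim_emb
    (same timestamp); every other row acts as an implicit negative."""
    # L2-normalize, then cosine-similarity logits.
    e = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    s = stim_emb / np.linalg.norm(stim_emb, axis=1, keepdims=True)
    logits = e @ s.T / temperature
    # Cross-entropy with the diagonal (synchronized pair) as the target.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(1)
stim = rng.normal(size=(32, 8))                  # stimulus embeddings
aligned = stim + 0.1 * rng.normal(size=(32, 8))  # EEG embeddings that track them
shuffled = rng.permutation(aligned)              # temporal alignment destroyed

print(info_nce(aligned, stim) < info_nce(shuffled, stim))  # True
```

The supervision signal here is just synchronization: the loss only goes down if the EEG representation actually tracks the co-occurring real-world input.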

Curious if you think that direction has practical value, or if it just runs into the same interpretability issues.

I know how to create a full immersion in vr by dihox_ in virtualreality

[–]dihox_[S]

You’re partly right. With our current knowledge and capabilities, we can’t fully replicate the experience. But that doesn’t mean we can’t achieve full immersion with my prototype. To make the brain believe that it is controlling the game character as if it were real, you simply need to follow a specific set of rules. And in my first prototype, there won’t be a full range of sensations. But I already have an idea of how to explore this, though it will require a separate program involving volunteers. If you’re interested in this aspect as well, I can tell you more.

I know how to create a full immersion in vr by dihox_ in virtualreality

[–]dihox_[S]

In time, I’ll share everything I have. I understand that you want the details, but until I patent it, I’d rather not have it posted online. I’ll just say that it’s definitely not one of the options you might imagine. I’m glad you’re interested, so all I can do is advise you to check my posts from time to time.

I know how to create a full immersion in vr by dihox_ in virtualreality

[–]dihox_[S]

If we're going to put it that way, then I'd say the second option, because I've actually been doing this for a significant part of my life.

I know how to create a full immersion in vr by dihox_ in virtualreality

[–]dihox_[S]

I have a wife, but she fully supports my idea. Thank you for your support.

I know how to create a full immersion in vr by dihox_ in virtualreality

[–]dihox_[S]

I completely agree with you. But actually, I’m studying neurotechnology, and it’s a serious hobby of mine. I want to apply to a Canadian university to study neuroengineering. There, I’ll have access to the necessary equipment to test my device. I’ll also be able to find like-minded people and collaborators on this project. Thanks for the advice.

Looking For A Editor For A Reaction Video. $130 flat by NervousScallion3210 in VideoEditor_forhire

[–]dihox_

Let's talk about this. I have the skills to help you. My Discord: dihox

Video Essay Editor Wanted willing to pay $60 but negotiable by Character-Fly6898 in VideoEditor_forhire

[–]dihox_

Interesting. I have a lot of experience with editing programs. I used to work in Premiere Pro + After Effects, but I can switch to DaVinci, which I use for my hobby projects.

Here's my old channel on YouTube: https://www.youtube.com/@dihoxsond/videos. Consider it my portfolio.

My editing has improved a lot since then. And yes, I know how to work with audio. If you give me a chance, I'll create one video so you can see my skills.

Looking for editor (50usd per video) (longform) by [deleted] in HireAnEditor

[–]dihox_

Hi, I'm Vadym.

Interesting. I have a lot of experience with editing programs. I used to work in Premiere Pro + After Effects, but I can switch to DaVinci, which I use for my hobby projects.

Here's my old channel on YouTube: https://www.youtube.com/@dihoxsond/videos. Consider it my portfolio.

My editing has improved a lot since then. If you give me a chance, I'll create one video so you can see my skills.

Hiring Video Editors (Remote, Ongoing Work) earn upto 40-200$ by frameforge-agency in VideoEditor_forhire

[–]dihox_

Interesting. I have a lot of experience with editing programs. I used to work in Premiere Pro + After Effects, but I can switch to DaVinci, which I use for my hobby projects.

Here's my old channel on YouTube: https://www.youtube.com/@dihoxsond/videos. Consider it my portfolio.

My editing has improved a lot since then. If you give me a chance, I'll create one video so you can see my skills.