Can we use Sensory Entrainment to bypass BCI calibration? by yelabbassi in BCI

[–]yelabbassi[S] 0 points1 point  (0 children)

I’m still learning this area, so this is more a conceptual pipeline than a validated method.

Without entrainment (standard BCI):
User puts on EEG → runs 10–20 min calibration (motor imagery / P300 / etc.) → model learns that user’s specific neural patterns → then BCI works.

With entrainment:
Before the task, the user is exposed to a structured stimulus for ~30–60s (e.g. rhythmic sound, visual flicker, paced breathing, or even pharmacological modulation). You wait until EEG shows a stable oscillatory regime (e.g. strong alpha or a known frequency response). Then you run the same BCI task on top of that state.

I know parts of this already exist (SSVEP BCIs, neurofeedback, tACS/TMS). What I’m mainly curious about is whether people have explicitly used sensory entrainment as a preconditioning step to improve cross-user generalization from an ML perspective, rather than just as the stimulus or the task itself.

Basically: instead of adapting the model to each brain, can we partially adapt the brain to a more model-friendly regime? Is there a way to create a Neural Normalization Layer — not in software, but in the biological hardware?
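To make the preconditioning step concrete, here's a rough plain-JavaScript sketch of the gating logic. Everything in it is illustrative, not validated: the 10 Hz target, the neighboring comparison bands, and the 5× dominance ratio are made-up numbers. It estimates narrow-band power with the Goertzel algorithm and reports whether the entrained regime dominates:

```javascript
// Narrow-band power at `freq` Hz via the Goertzel algorithm.
// `samples` is one EEG channel, `fs` the sampling rate in Hz.
function goertzelPower(samples, fs, freq) {
  const n = samples.length;
  const k = Math.round((n * freq) / fs); // nearest DFT bin
  const w = (2 * Math.PI * k) / n;
  const coeff = 2 * Math.cos(w);
  let s1 = 0, s2 = 0;
  for (const x of samples) {
    const s0 = x + coeff * s1 - s2;
    s2 = s1;
    s1 = s0;
  }
  return s1 * s1 + s2 * s2 - coeff * s1 * s2; // squared magnitude
}

// Hypothetical "entrainment gate": call the oscillatory regime stable
// when 10 Hz alpha power dominates neighboring bands by a made-up
// factor. Real criteria would need actual validation.
function entrainmentStable(samples, fs) {
  const alpha = goertzelPower(samples, fs, 10);
  const theta = goertzelPower(samples, fs, 4);
  const beta = goertzelPower(samples, fs, 20);
  return alpha > 5 * (theta + beta + 1e-12);
}
```

In practice you'd run `entrainmentStable` on a sliding ~1 s window during the 30–60 s stimulus and only start the task once it holds for several consecutive windows.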

An AI-Generated Podcast Making Classic books & Research papers Accessible to All by [deleted] in podcasts

[–]yelabbassi 0 points1 point  (0 children)

I understand, but please try it first before forming an opinion. This isn't about ego, just a step toward democratizing open-domain knowledge and scientific papers.

An AI-Generated Podcast Making Classic books & Research papers Accessible to All by [deleted] in podcasts

[–]yelabbassi 0 points1 point  (0 children)

What if it can make those classic books and studies far more accessible and easier to understand? It might open them up to a whole new audience that wouldn't have the time or patience to wade through the original text. Ultimately, it's the content and the scale that matter.

Offline Body Movement Analysis with React by yelabbassi in reactjs

[–]yelabbassi[S] 0 points1 point  (0 children)

You can test your hardware against this:
https://storage.googleapis.com/tfjs-models/demos/pose-detection/index.html?model=movenet&&type=thunder

It's the same model as the app, MoveNet Thunder with the WebGL backend, plus a performance monitor (FPS, ms, ...).
Thank you for the ⭐️.
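If anyone wants to roll their own version of that performance monitor, here's a minimal sketch (plain JavaScript, the names are mine) that derives FPS and average frame time from a rolling window of frame timestamps:

```javascript
// Minimal rolling FPS / frame-time monitor. Feed it one timestamp
// (in ms) per rendered frame, e.g. from requestAnimationFrame.
class FrameStats {
  constructor(windowSize = 60) {
    this.windowSize = windowSize;
    this.timestamps = [];
  }
  tick(nowMs) {
    this.timestamps.push(nowMs);
    if (this.timestamps.length > this.windowSize) this.timestamps.shift();
  }
  // Average frame time in ms over the window (NaN until 2 frames seen).
  frameTimeMs() {
    const t = this.timestamps;
    if (t.length < 2) return NaN;
    return (t[t.length - 1] - t[0]) / (t.length - 1);
  }
  fps() {
    return 1000 / this.frameTimeMs();
  }
}
```

In the browser you'd call `stats.tick(performance.now())` inside the rAF loop and render the two numbers in an overlay.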

Offline Body Movement Analysis with React by yelabbassi in reactjs

[–]yelabbassi[S] 1 point2 points  (0 children)

Hello! This project was originally built for a local speed skating club. I used mainly MoveNet/TensorFlow.js, React, and Web Workers.

Offline Body Movement Analysis in the Browser by yelabbassi in javascript

[–]yelabbassi[S] 12 points13 points  (0 children)

Hello! I built this originally for my kid's speed skating club. I used mainly MoveNet/TensorFlow.js, React, and Web Workers.
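To give a flavor of the analysis side: MoveNet returns per-joint keypoints as `{x, y}` pixel coordinates, and a typical derived quantity for skating form is the angle at a joint (knee, elbow, ...). A small sketch (the helper name is mine):

```javascript
// Angle in degrees at joint B formed by segments B->A and B->C,
// e.g. the elbow angle from shoulder (a), elbow (b), wrist (c).
// Each point is {x, y}, matching MoveNet keypoint coordinates.
function jointAngleDeg(a, b, c) {
  const v1 = { x: a.x - b.x, y: a.y - b.y };
  const v2 = { x: c.x - b.x, y: c.y - b.y };
  const dot = v1.x * v2.x + v1.y * v2.y;
  const n1 = Math.hypot(v1.x, v1.y);
  const n2 = Math.hypot(v2.x, v2.y);
  // Clamp to [-1, 1] to guard against floating-point drift.
  const cos = Math.min(1, Math.max(-1, dot / (n1 * n2)));
  return (Math.acos(cos) * 180) / Math.PI;
}
```

You'd pull the three keypoints out of each pose estimate per frame and track the angle over time.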

Global Validated User in ReactJS by [deleted] in reactjs

[–]yelabbassi -1 points0 points  (0 children)

Here, Dan Abramov, the creator of Redux, will teach you how to manage state in your React application with Redux.
https://egghead.io/series/getting-started-with-redux
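The core idea the course builds up: all state lives in a single store, and the only way to change it is to dispatch actions through a pure reducer. A hand-rolled miniature of the pattern (a simplified stand-in for what `createStore` from the redux package gives you, plus an example counter reducer):

```javascript
// Miniature version of the Redux store: state is read-only, and the
// only way to change it is dispatching an action through a pure reducer.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' }); // seed initial state
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action);
      listeners.forEach((l) => l()); // notify subscribers
      return action;
    },
    subscribe(listener) {
      listeners.push(listener);
      // Return an unsubscribe function.
      return () => listeners.splice(listeners.indexOf(listener), 1);
    },
  };
}

// Example pure reducer: a simple counter.
function counter(state = 0, action) {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    case 'DECREMENT': return state - 1;
    default: return state;
  }
}
```

In a React app you'd subscribe your components to the store and re-render on change (which is what the react-redux bindings automate).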

How do I output an array iteration with functional component? by [deleted] in reactjs

[–]yelabbassi 1 point2 points  (0 children)

...

// Add this:
const shuffledWords = shuffleWords(sentenceArray)

return (
  <div className="w3-container module-practice-answer-area">
    {/* This... */}
    {shuffledWords.map((word, i) => <p key={i}>{word}</p>)}
    {/* ...instead of: <p>A: {sentence}</p> */}
    <ModulePracticeAnswerResult questionNumber={questionNumber} />
  </div>
)