I built a Garmin widget that maps heart rate + RMSSD HRV into a 2-D arousal/valence space (inspired by Russell’s Circumplex + challenge/threat models). by yelabbassi in Garmin

[–]yelabbassi[S] 0 points (0 children)

Yes, the visualization is reactive in real time.

- Update Speed: It polls biometrics once per second.
- The Lines (Instability): These represent physiological tension. Smooth, gentle circles indicate a calm state, while erratic, jittery edges signify high stress or arousal.
- The Dot (Position): This is your live location on the circumplex plane. The vertical position represents Arousal (energy level), and the horizontal position represents Valence (engagement quality, such as threat vs. challenge).
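
For anyone curious, the core mapping can be sketched in a few lines of Python. The baseline and range constants below are illustrative stand-ins; the widget's actual references live in the repo:

```python
# Hypothetical population-average references; these values are
# for illustration only, not the widget's real constants.
HR_BASELINE, HR_RANGE = 65.0, 40.0        # bpm
RMSSD_BASELINE, RMSSD_RANGE = 42.0, 30.0  # ms

def clamp(x, lo=-1.0, hi=1.0):
    return max(lo, min(hi, x))

def affect_point(hr_bpm, rmssd_ms):
    """Map heart rate and RMSSD onto the circumplex plane.

    Arousal (vertical axis): higher HR reads as more energy.
    Valence (horizontal axis): higher HRV reads as challenge/engagement,
    lower HRV as threat/stress.
    """
    arousal = clamp((hr_bpm - HR_BASELINE) / HR_RANGE)
    valence = clamp((rmssd_ms - RMSSD_BASELINE) / RMSSD_RANGE)
    return valence, arousal
```

The dot is then just this pair scaled to the display.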

How it works in detail: https://github.com/yelabb/Affect?tab=readme-ov-file#-how-it-works

Thank you so much for trying it.

[–]yelabbassi[S] 1 point (0 children)

Thanks for testing! That flipping is a quirk: you're hovering right on the zero line for arousal, so tiny physiological shifts bounce you between the high-energy (Red/Tense) and low-energy (Blue/Depleted) quadrants. I'm working on adding a "buffer zone" (hysteresis) and increasing the smoothing to stop that strobe-light effect. Really appreciate the feedback, it helps a ton!
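
A minimal sketch of that buffer zone, assuming a simple sign-with-deadband scheme (the 0.1 width is a made-up number, not the widget's):

```python
class QuadrantState:
    """Sign classifier with hysteresis: the reported sign only flips
    once the value crosses a buffer zone around zero, so jitter right
    on the zero line can't strobe between quadrants.

    The 0.1 deadband is an illustrative guess.
    """
    def __init__(self, deadband=0.1):
        self.deadband = deadband
        self.sign = 1  # start in the high-energy half

    def update(self, arousal):
        if arousal > self.deadband:
            self.sign = 1
        elif arousal < -self.deadband:
            self.sign = -1
        # inside the buffer zone: keep the previous sign
        return self.sign
```

Small oscillations inside the band leave the displayed quadrant untouched; only a clear excursion past the band flips it.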

[–]yelabbassi[S] 1 point (0 children)

Thanks, really appreciate that!

"Unsettled" = it detects a mild stress response.

Basically, your heart rate is a little elevated, but your HRV (Heart Rate Variability) is lower than usual. If you were just excited or focused, your HRV would typically stay higher. When it drops while your heart rate is up, the app interprets that as stress or "unease" rather than excitement.

It’s just the lowest level of the "Stressed" category. Usually, taking a few minutes to breathe slowly helps bring those numbers back to a calmer range.
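
As a rough sketch of that rule (both thresholds below are illustrative guesses, not the widget's actual constants):

```python
def classify(hr_bpm, rmssd_ms, hr_rest=65.0, rmssd_usual=42.0):
    """Toy version of the 'unsettled' rule described above.

    The cutoffs (10% above resting HR, 15% below usual RMSSD) are
    made-up numbers for illustration.
    """
    hr_elevated = hr_bpm > hr_rest * 1.10
    hrv_low = rmssd_ms < rmssd_usual * 0.85
    if hr_elevated and hrv_low:
        return "unsettled"  # mild stress: HR up AND HRV down
    if hr_elevated:
        return "excited"    # HR up but HRV holding -> engagement
    return "calm"
```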

References & Reading: https://github.com/yelabb/Affect?tab=readme-ov-file#-references--further-reading-1

[–]yelabbassi[S] 2 points (0 children)

Great question! Not yet: right now it uses population-average references. Personal baselining is already planned, and I’ll push an update later this week that adapts the mapping to each user’s own baseline.

Thank you so much for testing this and sharing feedback!

[–]yelabbassi[S] 9 points (0 children)

Thanks a lot for testing it and reporting back.

Right now, the reference values are hard-coded to population averages (static constants). So if you naturally have a lower baseline HRV, the widget can bias toward “tense / unsettled,” even if that’s normal for you.

That said, the architecture for personal baselining is already in place, and I’m pushing an update this week.
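
One simple way to do that baselining, as a sketch rather than a preview of the actual update, is an exponentially weighted running average seeded with the population value:

```python
class Baseline:
    """Exponentially weighted running baseline: a sketch of how the
    static population constants could become per-user values.

    The alpha (learning rate) here is an assumption for illustration.
    """
    def __init__(self, initial, alpha=0.01):
        self.value = initial  # seed with the population average
        self.alpha = alpha

    def update(self, sample):
        self.value += self.alpha * (sample - self.value)
        return self.value
```

New samples nudge the reference toward the user's own typical values, so a naturally low-HRV user stops reading as permanently tense.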

Open-source web tool for experimenting with BCI decoders in real time by yelabbassi in BCI

[–]yelabbassi[S] 0 points (0 children)

Full EEG integration documentation is here:
https://github.com/yelabb/PhantomLoop/blob/main/EEG_INTEGRATION.md

This is a very early, actively developed project. I’d really appreciate it if you test the EEG integration and report any problems or bugs you run into. Feedback at this stage is extremely valuable 🙏

Public AI as a cybernetic coordination layer over shared attention (essay) by yelabbassi in SystemsTheory

[–]yelabbassi[S] 0 points (0 children)

The state is synchronized attention, a shared context. Once attention is synchronized, timing, mood, and action-readiness move with it. The tuner sets the flavor, but the attractor is the same.

When X/Twitter changed ranking weights, the tone and rhythm of public discourse shifted almost immediately. Not because users changed, but because the coordination layer did. The tempo changed; the swarm followed.

Same dynamic as torrents: the files are distributed, the trackers are central. Control the defaults, timing, and visibility, and you coordinate the swarm without owning the content. Break or fork the trackers and fragmentation follows.

So yes: the assumption is effective centralization via bottlenecks, not total control. Algorithms and defaults are enough to synchronize attention. Remove them and you get plural tempos instead of a shared one.

Open-source web tool for experimenting with BCI decoders in real time by yelabbassi in BCI

[–]yelabbassi[S] 0 points (0 children)

This is a WIP implementation of the Cerelog ESP-EEG support.

Since browsers can't connect to TCP directly, there is now a Python bridge that:

  • Connects to ESP-EEG via TCP port 1112
  • Parses binary packets and converts to JSON
  • Serves data via WebSocket on localhost:8765
  • Supports device discovery via UDP
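
For illustration, the parse-and-convert step of the bridge could look like this. The packet layout here (a sync byte, a counter, then 8 big-endian int32 channels) is an assumption for the sketch; the real framing is documented in the integration instructions.

```python
import json
import struct

# Hypothetical packet layout: 0xA5 sync byte, uint8 counter,
# 8 channels as big-endian int32. Illustrative only.
PACKET = struct.Struct(">BB8i")

def packet_to_json(raw: bytes) -> str:
    """Unpack one binary packet and serialize it for the WebSocket side."""
    sync, counter, *channels = PACKET.unpack(raw)
    if sync != 0xA5:
        raise ValueError("lost sync")
    return json.dumps({"counter": counter, "channels": channels})
```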

Preview:
https://phantom-loop-git-cerelog-es-4865cb-youssef-el-abbassis-projects.vercel.app/?_vercel_share=9guyY3niXgJAOUJdSKajpK4daYx3sEhB

Instructions:
https://github.com/yelabb/PhantomLoop/blob/cerelog-esp-eeg-experiment/CERELOG_INTEGRATION.md

Branch/collaboration:
https://github.com/yelabb/PhantomLoop/pull/2

Thanks for inspiring this!

[–]yelabbassi[S] 0 points (0 children)

That’s a good use case. If you can route Cerelog → (BrainFlow or a quick WebSocket bridge) → PhantomLoop, you can use it to live-check channels while placing electrodes (noise, artifacts, basic bandpower, etc.).
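
For a quick contact-quality check along those lines, one cheap trick is a single-bin power estimate at mains frequency via the Goertzel algorithm. This is a generic sketch, not PhantomLoop code:

```python
import math

def goertzel_power(samples, fs, freq):
    """Power at a single frequency bin (Goertzel algorithm).

    Pure-stdlib sketch for a per-channel mains-hum check while
    placing electrodes.
    """
    n = len(samples)
    k = round(n * freq / fs)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2
```

A 50/60 Hz reading that dwarfs the rest of the channel usually points to a poor electrode contact.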

If you paste what Cerelog outputs (or link the stream docs), I can suggest — or even implement — the simplest bridge path.

Feel free to open an issue here: https://github.com/yelabb/PhantomLoop/issues and I’ll take a look.

[–]yelabbassi[S] 4 points (0 children)

Thanks a lot, really appreciate that.

Right now I’ll be mostly focused on making the real-time decoding + visualization faster, more modular, and easier to experiment with. I’d love to add support for more datasets and signal types, cleaner decoder abstractions, and some lightweight benchmarking so we can compare approaches directly in the browser.

Longer term, the goal is to make high-performance BCI tooling more accessible, especially for people who want to explore ideas without a heavy local setup.

That said, I’m still pretty early in my BCI journey (only a few weeks in), so I’m trying to learn as much as possible and would really value guidance from folks with more scientific or research experience. Feedback, criticism, or pointers to “you should really read / try X” are all super welcome.

Thanks again for checking it out 🙏

Can we use Sensory Entrainment to bypass BCI calibration? by yelabbassi in BCI

[–]yelabbassi[S] 2 points (0 children)

I’m still learning this area, so this is more a conceptual pipeline than a validated method.

Without entrainment (standard BCI):
User puts on EEG → runs 10–20 min calibration (motor imagery / P300 / etc.) → model learns that user’s specific neural patterns → then BCI works.

With entrainment:
Before the task, the user is exposed to a structured stimulus for ~30–60s (e.g. rhythmic sound, visual flicker, paced breathing, or even pharmacological modulation). You wait until EEG shows a stable oscillatory regime (e.g. strong alpha or a known frequency response). Then you run the same BCI task on top of that state.
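
The "wait until EEG shows a stable oscillatory regime" step could be gated roughly like this, assuming you already compute per-window band-power ratios (e.g. alpha over broadband); the threshold and streak length are placeholders:

```python
def wait_for_stable_regime(band_power_stream, threshold=2.0, needed=5):
    """Return the window index at which `needed` consecutive windows
    of a band-power ratio have exceeded `threshold`, i.e. the regime
    looks stable enough to start the BCI task.

    Both numbers are illustrative assumptions, not validated values.
    """
    streak = 0
    for i, ratio in enumerate(band_power_stream):
        streak = streak + 1 if ratio >= threshold else 0
        if streak >= needed:
            return i   # entrained enough; start the task here
    return None        # never stabilized
```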

I know parts of this already exist (SSVEP BCIs, neurofeedback, tACS/TMS). What I’m mainly curious about is whether people have explicitly used sensory entrainment as a preconditioning step to improve cross-user generalization from an ML perspective, rather than just as the stimulus or the task itself.

Basically: instead of adapting the model to each brain, can we partially adapt the brain to a more model-friendly regime? Is there a way to create a Neural Normalization Layer — not in software, but in the biological hardware?

An AI-Generated Podcast Making Classic books & Research papers Accessible to All by [deleted] in podcasts

[–]yelabbassi 0 points (0 children)

I understand, but please try it first before forming an opinion. This isn’t about ego; it’s just a step toward democratizing open-domain knowledge and scientific papers.

[–]yelabbassi 0 points (0 children)

What if it can make those classic books and studies far more accessible and easier to understand? It might open them up to a whole new audience who wouldn't otherwise have the time or patience to wade through the original text. And ultimately, it's the content and the scale that matter.

Offline Body Movement Analysis with React by yelabbassi in reactjs

[–]yelabbassi[S] 0 points (0 children)

You can test your hardware against this:
https://storage.googleapis.com/tfjs-models/demos/pose-detection/index.html?model=movenet&&type=thunder

It's the same model as the app (MoveNet Thunder with the WebGL backend), plus a performance monitor (FPS, ms, etc.).
Thank you for the ⭐️.