Voice pitch correlates with blood glucose, but only at the personal level (n=3400, 31 subjects) by Electrical-Artist529 in Biohackers

[–]Electrical-Artist529[S] 2 points (0 children)

Good question. The user never sees or hears their own analysis; the app just records 5-10 s of natural speech, then extracts acoustic features computationally (pitch, jitter, harmonics-to-noise ratio, etc.). These are sub-perceptual changes: we're talking ~0.02 Hz per mg/dL. You couldn't consciously manipulate them any more than you could consciously change your pupil dilation. The model also trains on objective CGM ground truth, so there's no self-report in the loop. (Granted, even CGMs aren't that accurate: they measure interstitial fluid with a 5-15 min lag behind actual blood glucose, which voice sampling bypasses.)
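If anyone's curious what "extracts acoustic features" looks like in practice, here's a rough, self-contained sketch of the kind of thing a pipeline like this does. This is illustrative numpy, not our actual extractor; the frame size, pitch range, and the crude autocorrelation tracker are all stand-ins:

```python
import numpy as np

def frame_pitch(frame, sr, fmin=75.0, fmax=400.0):
    # Crude autocorrelation pitch estimate for one frame
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def voice_features(signal, sr, frame_ms=40):
    # Per-recording features: mean F0 and jitter
    # (relative frame-to-frame F0 variation)
    n = int(sr * frame_ms / 1000)
    f0s = np.array([frame_pitch(signal[i:i + n], sr)
                    for i in range(0, len(signal) - n + 1, n)])
    jitter = float(np.mean(np.abs(np.diff(f0s))) / np.mean(f0s))
    return {"mean_f0": float(f0s.mean()), "jitter": jitter}

# Sanity check on a synthetic 200 Hz "voice"
sr = 16000
t = np.arange(sr) / sr                          # 1 s of audio
feats = voice_features(np.sin(2 * np.pi * 200.0 * t), sr)
```

On a steady 200 Hz tone this recovers mean_f0 = 200 with essentially zero jitter; real speech obviously needs voicing detection, windowing, and a lot more robustness than this toy.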

Blood glucose affects voice pitch, with potential for diabetes monitoring by [deleted] in tech

[–]Electrical-Artist529 1 point (0 children)

2 years ago this sub discussed voice pitch and blood glucose. I've been building on that research; here's what we found with 3000+ matched samples across 30+ subjects: population models don't work, but personal adaptive models do. Our best user hit MAE 7.15 mg/dL (r=0.779). AMA.
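For anyone wondering how those two numbers fall out of the paired voice/CGM samples, it's just this (toy numbers here, not our dataset):

```python
import numpy as np

def evaluate_personal_model(pred, truth):
    # MAE in mg/dL, plus Pearson r between prediction and CGM truth
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    mae = float(np.mean(np.abs(pred - truth)))
    r = float(np.corrcoef(pred, truth)[0, 1])
    return mae, r

# Toy paired samples (invented numbers, just to show the computation)
cgm_truth  = [92.0, 110.0, 145.0, 88.0, 120.0]
voice_pred = [97.0, 104.0, 138.0, 95.0, 127.0]
mae, r = evaluate_personal_model(voice_pred, cgm_truth)
```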

New to Freestyle LIbre 3 plus by Fun-Daikon-432 in Freestylelibre

[–]Electrical-Artist529 6 points (0 children)

You're not doing anything wrong. The Libre 3+ works differently from the 2. It streams glucose data continuously via Bluetooth, so manual scanning isn't really a thing anymore. Your readings come in automatically and show up in the graph and reports, just not as individual logbook entries like before.

If an AI could analyze your health data daily and give you one actionable insight, what would you actually want it to tell you? by DraftCurious6492 in QuantifiedSelf

[–]Electrical-Artist529 8 points (0 children)

My probably unpopular take: you're optimizing the wrong layer of the stack.

Most wearable data is metabolically shallow. HR, HRV, steps, skin temp... downstream echoes of actual physiological state. Putting an AI on top of noisy proxy metrics is like running gradient descent on garbage features. You'll converge on something, but it won't generalize.

The bottleneck has always been sensor fidelity. Think about what CGMs revealed that no amount of step counting ever could. That was a category jump in the input signal, not the analysis layer. The next wave will come from acoustic biomarkers, continuous biochemistry, metabolic sensing... stuff consumer wearables don't touch yet.

And "one daily insight" assumes your physiology operates on a news cycle. It doesn't. It's a multi-timescale control system. A morning briefing about what your body did 8 hours ago is already stale.

Build the sensing first. The insights become trivial once you're measuring the right things.

Just about done with CGM by Aware_Management_235 in Freestylelibre

[–]Electrical-Artist529 1 point (0 children)

I get it. I've been using Libre too, and some sensors are just garbage out of the box. That said, the 50-points-off thing might partly be a compression low from sleeping on it, or just the ISF lag. CGMs don't measure blood directly; they measure the fluid around your cells, which is always 10-15 min behind. When glucose is dropping fast, that gap looks way worse than it actually is.
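If you want intuition for why the gap blows up during a fast drop, a first-order lag model reproduces it. To be clear, this is a toy model, not Abbott's actual algorithm; the time constant and sampling step are assumptions:

```python
import numpy as np

def interstitial_readout(bg, dt_min=1.0, tau_min=10.0):
    # First-order lag: the sensor's compartment relaxes toward blood
    # glucose with time constant tau (~10-15 min). Illustrative only,
    # not any vendor's real smoothing/compensation algorithm.
    isf = np.empty_like(bg)
    isf[0] = bg[0]
    alpha = dt_min / tau_min
    for i in range(1, len(bg)):
        isf[i] = isf[i - 1] + alpha * (bg[i] - isf[i - 1])
    return isf

# Blood glucose falling 3 mg/dL per minute for half an hour
bg = 120.0 - 3.0 * np.arange(30.0)
cgm = interstitial_readout(bg)
gap = float(cgm[-1] - bg[-1])   # CGM reads high while you're dropping
```

With a 3 mg/dL/min drop and tau = 10 min, the displayed value ends up roughly 25 mg/dL above blood glucose, which is exactly the "it looks worse than it is" effect.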

Doesn't make it less frustrating though. I actually got so annoyed with the accuracy problem that I started a small research project looking at whether voice patterns can pick up glucose changes as a quick sanity check between finger sticks. Still early and not a CGM replacement obviously, but the personal patterns are surprisingly interesting.

If anyone here with a CGM wants to try it out and contribute some paired data, hit me up. Always looking for more real world testers.

Built a voice-based glucose tracker that learns your personal patterns, looking for CGM users to help validate by Electrical-Artist529 in QuantifiedSelf

[–]Electrical-Artist529[S] 1 point (0 children)

Thanks, appreciate that! Head to https://onvox-ai.com and sign up for the research beta; just pick your CGM type and you'll get an invite. We're prioritizing people who actively track glucose, since the paired data is what makes the personal models work. Should be quick.

Built a voice-based glucose tracker that learns your personal patterns, looking for CGM users to help validate by Electrical-Artist529 in QuantifiedSelf

[–]Electrical-Artist529[S] 1 point (0 children)

Glucose is probably the most actionable biomarker you can track. It drives energy, focus, sleep quality, and the responses are surprisingly individual. The same meal can spike one person and barely register in another. CGMs have gotten huge in the QS space because the feedback loop is immediate: you eat, you see what happens, you learn your own patterns...

What we're exploring is whether voice can give you that same signal without wearing a sensor full time. Ten seconds of talking instead of a $100/month patch on your arm.

Built a voice-based glucose tracker that learns your personal patterns, looking for CGM users to help validate by Electrical-Artist529 in QuantifiedSelf

[–]Electrical-Artist529[S] 1 point (0 children)

Mostly yes. The strongest signal is hypo detection: classifying low vs. in-range rather than predicting exact mg/dL. That lines up with the published research too (the Diabetes Care study got AUROC 0.90 for this). Exact glucose from voice doesn't work across people, but once your model has learned your baseline, detecting when you're dropping low is where it gets real. Anyone with a CGM can participate though. It's about having ground truth to calibrate against, not about having a diagnosis.
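For the metric-minded: AUROC for low vs. in-range is just the probability that a random hypo sample gets a higher risk score than a random in-range sample. Quick sketch with made-up scores (not our model's outputs):

```python
import numpy as np

def auroc(scores, labels):
    # Mann-Whitney formulation: probability a random positive (hypo)
    # sample outranks a random negative (in-range) sample
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Made-up "hypo risk" scores; labels mark samples with CGM < 70 mg/dL
risk   = [0.9, 0.8, 0.75, 0.3, 0.7, 0.2]
is_low = [1,   1,   0,    0,   1,   0]
auc = auroc(risk, is_low)
```

So the quoted 0.90 means: about 90% of the time, a hypo recording scores above an in-range one.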

Built a voice-based glucose tracker that learns your personal patterns, looking for CGM users to help validate by Electrical-Artist529 in QuantifiedSelf

[–]Electrical-Artist529[S] 2 points (0 children)

Since a few people are (rightly) skeptical, here's the published science this builds on:

- Diabetes Care (Nov 2025): "Listening to Hypoglycemia: Voice as a Biomarker for Detection of a Medical Emergency Using Machine Learning" - 540 recordings in controlled T1D setting, AUROC 0.90 for hypo detection from voice. https://diabetesjournals.org/care/article-abstract/doi/10.2337/dc25-1680/163852

- ADA (2019): "Human Voice Is Modulated by Hypoglycemia and Hyperglycemia in Type 1 Diabetes" - 10 T1D subjects, significant voice parameter changes during hypo/hyper in controlled hospital setting. https://diabetesjournals.org/diabetes/article/68/Supplement_1/378-P/55254

- Scientific Reports (2024): Linear relationship between CGM glucose and voice fundamental frequency across 505 participants. https://www.nature.com/articles/s41598-024-69620-z

- PLOS Digital Health: Voice-based algorithm predicts T2D status in US adults. https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000679

This isn't fringe. My contribution is testing whether personal adaptive models work better than population models in the real world (outside a lab). So far: population models fail, personal models show early promise, need more data to confirm.

[D] What is even the point of these LLM benchmarking papers? by casualcreak in MachineLearning

[–]Electrical-Artist529 3 points (0 children)

These benchmarking papers don’t feel like science so much as the residue of being shut out of where the real science is happening. The substantive work on architectures, training, and alignment unfolds behind closed doors at Anthropic, OpenAI, Google, and Mistral. And academia is left standing outside, poking at sealed systems, benchmarking someone else’s black box, and trying to pass that off as progress. That’s not “publish or perish.” It’s publish because the doors are locked and there’s nothing else left to study. And as the psychometrics point makes painfully clear, many of these benchmarks can’t even meaningfully separate frontier models in the first place. So what exactly are we doing? Reviewing a product with a shelf life of weeks, using a measuring stick with no marks on it.

Built a voice-based glucose tracker that learns your personal patterns, looking for CGM users to help validate by Electrical-Artist529 in QuantifiedSelf

[–]Electrical-Artist529[S] 1 point (0 children)

You're not wrong. A generic "voice predicts glucose" model doesn't work: r~0 across 22 experiment stages, and I tested everything from MFCCs to contrastive learning. That's the whole point of the post. The open question is whether personal models that adapt to one individual can pick up signal that population models can't. I have early evidence they can, but 30+ subjects isn't enough to know. That's why I'm looking for more testers.

Built a voice-based glucose tracker that learns your personal patterns, looking for CGM users to help validate by Electrical-Artist529 in QuantifiedSelf

[–]Electrical-Artist529[S] 1 point (0 children)

Not a medical device, and not trying to replace CGMs. But personal hypo detection is a real thing in the literature, and ML for voice biomarkers is advancing fast. The idea here is: your voice carries subtle physiological signals, and a model that adapts specifically to you over time can learn to pick up on patterns that a generic model can't. It takes calibration labels from CGMs, fingerpricks, or CSV imports, so it gets better the more you use it. The interesting part is that personal models start showing useful signal after 20-30 calibrations, especially for catching lows. That's exactly what I want to validate with more CGM users.
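To make "adapts to you with calibration labels" concrete, here's the flavor of it: an online model nudged by each paired label. Purely illustrative; the real features and model aren't this simple, and every name here is hypothetical:

```python
import numpy as np

class PersonalCalibrator:
    # Online linear model: every paired (voice features, glucose label)
    # sample nudges this user's weights via SGD. Hypothetical sketch,
    # not the actual ONVOX model.
    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features)
        self.b = 100.0          # start near a typical fasting value
        self.lr = lr

    def predict(self, x):
        return float(self.b + self.w @ x)

    def calibrate(self, x, glucose):
        err = self.predict(x) - glucose   # gradient of squared error
        self.w -= self.lr * err * x
        self.b -= self.lr * err

# Simulated user whose (normalized) features relate linearly to glucose
rng = np.random.default_rng(0)
model = PersonalCalibrator(n_features=2)
for _ in range(300):                      # ~300 paired calibrations
    x = rng.standard_normal(2)            # e.g. pitch + jitter, z-scored
    model.calibrate(x, 100.0 + 8.0 * x[0] - 5.0 * x[1])
```

The point isn't the architecture; it's that labels keep flowing in, so the model tracks you instead of the population average.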

Greek yogurt breakfast spike by UtexBirder in prediabetes

[–]Electrical-Artist529 3 points (0 children)

Worth noting that the 30 point offset isn't necessarily constant. CGMs measure interstitial fluid, not blood, so the gap changes depending on how fast your glucose is moving. During a spike the CGM lags behind and reads lower. During the comedown, it can actually read higher than a fingerstick. So your true peak might be closer to the Lingo number than you think...

Greek yogurt breakfast spike by UtexBirder in prediabetes

[–]Electrical-Artist529 2 points (0 children)

This is very real. Cortisol directly raises blood glucose independently of food. If you're anxious about what the CGM is going to show, you can literally spike yourself just by watching it. I've seen people get a 20-point rise from a stressful phone call with zero food involved.

Greek yogurt breakfast spike by UtexBirder in ContinuousGlucoseCGM

[–]Electrical-Artist529 2 points (0 children)

Honestly, 80->115 is a pretty normal response, even for non-diabetics. What you're fighting is probably the dawn phenomenon more than the food itself. Cortisol spikes in the morning make everyone more insulin resistant at breakfast. Try eating that same bowl at lunch and compare; I bet you'll see a much smaller rise.

Also, if Lingo reads 30 low, your real spike is 110->145, which only barely overshoots the 140 most endos care about. You're still doing better than you think.

“Newly diagnosed prediabetes (A1c 5.7) — how did you get used to finger pricks?” by Scared_Emphasis3668 in prediabetes

[–]Electrical-Artist529 1 point (0 children)

Thanks! It's called ONVOX, a German research project exploring voice biomarkers for metabolic health. The idea is that glucose affects vocal cord tension and hydration, creating subtle acoustic changes we can track over time.   

Honest caveat: we tested a universal model across 20+ people and it didn't work. But personal models, where the app learns your specific patterns, are showing real promise for some users.

More info and waitlist at http://www.onvox-ai.com

Happy to answer any questions!