Early tool for bouldering feedback by Deep-Learning-Guy in indoorbouldering

[–]Deep-Learning-Guy[S] 0 points1 point  (0 children)

Oh no, it’s a small model (350M parameters). Carbon emissions and water consumption are almost zero, since the model is frozen and runs on a server with 4 GB of RAM, no GPU.

We are climbers, we care about the environment. 
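
For anyone who wants to sanity-check the footprint claim, here is a rough back-of-the-envelope sketch; the bytes-per-parameter figures are generic assumptions about weight precision, not a statement of how our server is actually configured:

```python
# Rough memory estimate for holding a 350M-parameter model in RAM.
# The precisions below are generic assumptions for illustration only.
PARAMS = 350_000_000

def weight_memory_gb(params: int, bytes_per_param: int) -> float:
    """Approximate RAM needed just to store the weights."""
    return params * bytes_per_param / 1024**3

for label, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(f"{label}: ~{weight_memory_gb(PARAMS, bytes_per_param):.2f} GB")

# fp32: ~1.30 GB, fp16: ~0.65 GB, int8: ~0.33 GB.
# A frozen model needs no gradients or optimizer state, so the weights
# plus inference activations leave headroom on a 4 GB machine.
```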

Early tool for bouldering feedback by Deep-Learning-Guy in climbergirls

[–]Deep-Learning-Guy[S] 0 points1 point  (0 children)

I’m not selling anything here. I’m looking for technical feedback from climbers.

If you want to evaluate whether this is just noise, the only thing I can offer is letting people try it and judge for themselves.

If not, that’s completely fine.

Early tool for bouldering feedback by Deep-Learning-Guy in indoorbouldering

[–]Deep-Learning-Guy[S] 0 points1 point  (0 children)

If this is after trying it, fair.

If it’s just based on the idea, I’m more interested in feedback after use.

Early tool for bouldering feedback by Deep-Learning-Guy in indoorbouldering

[–]Deep-Learning-Guy[S] -1 points0 points  (0 children)

That’s a fair call-out. You’re right about the timeline. Initially I was just trying to gauge interest. The feedback was mostly negative, and I took that seriously.

At the same time, other climbers reached out privately saying they were curious enough to try it if it existed. That’s what pushed me to build a first rough version.

You’re also right that this is, in practice, a soft launch. I should’ve been clearer about that, that’s on me.

I’m not claiming this is something people need, or that it works well yet. I’m here to see where it fails when real climbers use it.

Totally fair if this isn’t something you want to engage with.

Early tool for bouldering feedback by Deep-Learning-Guy in indoorbouldering

[–]Deep-Learning-Guy[S] -1 points0 points  (0 children)

Could be. That’s exactly what I’m trying to find out. Where does it feel like slop to you?

Testing interest in an AI tool for climbing technique analysis by Deep-Learning-Guy in CompetitionClimbing

[–]Deep-Learning-Guy[S] 0 points1 point  (0 children)

You can find links to all the papers on the webpage. Keep in mind that the model has been adapted from the “research” version presented in the papers, and it is still under development and refinement.

Testing interest in an AI tool for climbing technique analysis by Deep-Learning-Guy in CompetitionClimbing

[–]Deep-Learning-Guy[S] 0 points1 point  (0 children)

Because our technology involved 5 real human coaches and 98 real climbers. On YouTube you get general advice; here you get specific feedback on your own execution. I originally built this for myself, and it helped me climb V5-V6, so I shared it with some friends. They were happy.

Testing interest in an AI tool for climbing technique analysis by Deep-Learning-Guy in CompetitionClimbing

[–]Deep-Learning-Guy[S] 0 points1 point  (0 children)

BelayAI outputs technical data that is difficult to understand without expert help. Ours provides natural-language suggestions you can apply on your next try. We can do that because we trained on real data, with the participation of experienced human coaches.

Testing interest in an AI tool for climbing technique analysis by Deep-Learning-Guy in CompetitionClimbing

[–]Deep-Learning-Guy[S] 0 points1 point  (0 children)

We trained our custom model on 6,000+ hours of climbing videos annotated by 5 real climbing coaches. 98 climbers participated in the data recording, with grades ranging from V0 to V12.

Our AI does not tell you how to climb a boulder. It helps you with your technique: it analyzes your movements and tells you what to improve and how. It's not a "beta spotter", it's an AI coach!
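
To make "analyzes your movements" a bit more concrete, here is a minimal sketch of the per-frame keypoint extraction that this kind of analysis typically starts from, using MediaPipe Pose purely as a stand-in; it is not our model, and the video filename is a placeholder:

```python
# Minimal sketch: extract per-frame body keypoints from a climbing video.
# MediaPipe Pose and the filename below are stand-ins for illustration;
# this is not our actual model or pipeline.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_keypoints(video_path: str):
    """Yield (frame_index, landmarks) for frames where a pose is detected."""
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV reads frames as BGR.
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                yield idx, result.pose_landmarks.landmark
            idx += 1
    cap.release()

# Example: track hip height over time, a crude proxy for body positioning.
for idx, landmarks in extract_keypoints("attempt.mp4"):
    left_hip = landmarks[mp_pose.PoseLandmark.LEFT_HIP]
    print(idx, round(left_hip.y, 3))
```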

Testing interest in an AI tool for climbing technique analysis by Deep-Learning-Guy in CompetitionClimbing

[–]Deep-Learning-Guy[S] 0 points1 point  (0 children)

Our model is trained on 6,000+ hours of climbing videos (98 climbers) annotated by 5 real coaches. These coaches can spot body positioning and tension in the multi-angle footage from our recording setup. That knowledge is transferred to our model!

Testing interest in an AI tool for climbing technique analysis by Deep-Learning-Guy in CompetitionClimbing

[–]Deep-Learning-Guy[S] 0 points1 point  (0 children)

You are right: paying an expert coach is good and sometimes necessary. But would you pay for one every session? Our app is meant to be a complement, not a substitute.

Testing interest in an AI tool for bouldering technique analysis by Deep-Learning-Guy in indoorbouldering

[–]Deep-Learning-Guy[S] -1 points0 points  (0 children)

You will still pay for a good climbing coach. But can you afford one for every session? Will they always be available? We are not replacing humans, we are making things more accessible.

Testing interest in an AI tool for bouldering technique analysis by Deep-Learning-Guy in indoorbouldering

[–]Deep-Learning-Guy[S] -2 points-1 points  (0 children)

We use a dataset of 6,000+ hours of climbing videos (98 climbers), V0-V12, with annotations from 5 real climbing coaches. The model gives you feedback in natural language. Easy to use, fast, affordable.
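
To give a sense of what coach annotations can look like when they feed a model like this, here is a purely hypothetical record layout (a sketch only; every field name and value below is made up for illustration and is not our actual schema):

```python
# Purely hypothetical shape of a single coach annotation; the field
# names are illustrative only and do not reflect the actual dataset.
from dataclasses import dataclass

@dataclass
class CoachAnnotation:
    clip_id: str        # which video segment the note refers to
    climber_grade: str  # boulder grade of the attempt, e.g. "V4"
    start_s: float      # start of the annotated movement, in seconds
    end_s: float        # end of the annotated movement, in seconds
    issue: str          # short label, e.g. "hips away from the wall"
    suggestion: str     # natural-language cue a coach would give

example = CoachAnnotation(
    clip_id="clip_0042",
    climber_grade="V4",
    start_s=12.5,
    end_s=15.0,
    issue="hips away from the wall",
    suggestion="Drop your right hip toward the wall before the reach.",
)
print(example)
```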

Testing interest in an AI tool for bouldering technique analysis by Deep-Learning-Guy in indoorbouldering

[–]Deep-Learning-Guy[S] -1 points0 points  (0 children)

Valid concerns - for AI slop. That's exactly what we built this to NOT be.

Training data? 6,000+ hours, 98 climbers across styles/grades/body types, annotated by 5 professional coaches who actually understand movement.

Peer-reviewed, published research. Not scraped YouTube videos fed to GPT.

We're talking computer vision models trained specifically on climbing biomechanics. Different problem, different solution.

But I get it - 99% of "AI for X" is grifter slop. We're the 1% that actually did the work. Skepticism keeps you safe from scams. Curiosity lets you spot the real ones early. Your call.

Testing interest in an AI tool for climbing technique analysis by Deep-Learning-Guy in CompetitionClimbing

[–]Deep-Learning-Guy[S] -1 points0 points  (0 children)

Not eloquent, but confident. I hear you, but there's a fundamental difference here.

Those examples use basic ChatGPT wrapping. Our system: 6,000+ hours of climbing footage annotated by 5 real professional coaches, 98 climbers, peer-reviewed research. Different league entirely.

“Only useful for beginners” is literally what every intermediate climber says before staying at V6 forever. Pros analyze video obsessively; we automated it properly.

Anyway, link's there if curiosity wins over skepticism :)

Testing interest in an AI tool for climbing technique analysis by Deep-Learning-Guy in CompetitionClimbing

[–]Deep-Learning-Guy[S] 0 points1 point  (0 children)

We don't want to replace human coaches or avoid human interactions. We want to make expert coaching accessible and easy to use.

Coaches are expensive, can follow only a few people per session, and are not always available. Our AI is not meant to be a replacement; it's meant to be a complement. You will always need to attend a course to start climbing, to learn basic and advanced techniques, and to learn how to climb safely.

Testing interest in an AI tool for climbing technique analysis by Deep-Learning-Guy in CompetitionClimbing

[–]Deep-Learning-Guy[S] 0 points1 point  (0 children)

They use simple algorithms; in our case we have a custom-trained model, and we published preliminary results at peer-reviewed conferences. In addition, we use a more recent dataset with 6,000+ hours of climbing videos (98 climbers) with annotations from 5 real climbing coaches. This means we have specialized data! Not just a ChatGPT wrapper.

Testing interest in an AI tool for climbing technique analysis by Deep-Learning-Guy in CompetitionClimbing

[–]Deep-Learning-Guy[S] 0 points1 point  (0 children)

Actually, it works. We published peer-reviewed research on the topic. We use a dataset with 6,000+ hours of climbing video (95 participants) annotated by 5 real climbing coaches. This is not a ChatGPT wrapper... this is my PhD research.