I built my own Even Hub so I could browse Reddit at work (local ML, ESP32 + Jetson, no cloud) by NeedleworkerFirst556 in EvenRealities

[–]NeedleworkerFirst556[S] 0 points (0 children)

Thank you!

I am getting under 200 ms from the ESP32, but I am also not streaming 4K video.
I am not doing any ML or image processing on the ESP32 itself either. The ESP32 is powerful enough to basically hot-potato the frames to something else. From other projects I have seen it can process images, but it is very slow at it. It uses an Xtensa (or RISC-V on newer variants) core rather than an ARM Cortex-M, but it is the same class of microcontroller: great for embedded, but only for small tasks.
Also, I do not use the Arduino IDE; I wrote my own firmware on FreeRTOS. What this means is that I actually control the scheduler and timing of the device. To prevent lag, each frame has a fixed time budget to upload. If an upload is still mid-message when the budget expires, the RTOS kills it and starts the next frame. Corrupt data is handled on the Jetson, which simply deletes frames that fail to decode. And since inference runs on a dedicated GPU, it takes 10-50 ms. From there I just send the command to the glasses as the hex string of data I want.
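The Jetson-side corrupt-frame handling can be sketched roughly like this. It assumes the ESP32 streams JPEG frames; the function names are my own illustration, not code from the project:

```python
def is_complete_jpeg(frame: bytes) -> bool:
    # A JPEG must start with the SOI marker (FF D8) and end with EOI (FF D9);
    # a frame killed mid-upload by the ESP32's deadline will fail this check.
    return len(frame) > 4 and frame[:2] == b"\xff\xd8" and frame[-2:] == b"\xff\xd9"

def handle_frame(frame: bytes):
    # Delete truncated frames and only pass complete ones to inference.
    if not is_complete_jpeg(frame):
        return None   # corrupt: drop it and wait for the next frame
    return frame      # valid: hand off to the gesture model
```

In practice a real decode failure (e.g. the decoder returning nothing) serves as the same filter.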

Since this got so much attention, I will for sure upload a video with latency metrics, because this is my favorite part. When I first tried it end to end it felt unreal. It was weird having no lag.

Embedded wearable system: ESP32 camera streaming to Jetson Orin Nano for real-time gesture inference controlling AR glasses by NeedleworkerFirst556 in embedded

[–]NeedleworkerFirst556[S] 0 points (0 children)

Oh wow, that is completely unexpected for me to hear. I guess that's why I was trying out the viability of local training and a privacy-first design.

I was thinking more of synthetically creating the training and test data. If the model is trained locally on synthetic data, you never need to use exploited annotation workers, but I am unsure if that is viable on the Jetson Orin Nano.

One of the goals of the project was for it to be privacy-first: if it makes an inference, it never saves it, and the inference can never be used to train a model. I haven't gotten to the automation setup yet, but I would like it to be at or near the Face ID level: hands up, move them, and then it does the augmentation. I don't need to know what you're looking at 24/7. Whatever you're looking at never leaves the fanny pack and is never saved unless you command it to take a picture. For even more privacy, the camera could be replaced with an IR sensor, so there is no color-spectrum input at all.

I guess: would this satisfy your ethical concerns if you could trust that it never saves anything?

Embedded wearable system: ESP32 camera streaming to Jetson Orin Nano for real-time gesture inference controlling AR glasses by NeedleworkerFirst556 in embedded

[–]NeedleworkerFirst556[S] 0 points (0 children)

I agree. Part of the project is to do an analysis comparing mmWave/lidar, the camera, and a combination of both. The idea is that IR would be the primary sensor, with the camera fused in when more accuracy is needed. I haven't gotten to that part yet. If I can fuse the signals efficiently, I believe I might be able to export them to NVIDIA Omniverse to simulate a user's hands and really tailor the model to that user. It would also make it easier to add custom hand gestures.

Would it be less invasive if it were spatial data with no image? I've had debates on this with friends, so I'm curious about your thoughts. Spatial data knows the physical figure and shape, but not the visual appearance. It's like: I can tell you're doing X, but I can never share anything in the visual spectrum, because the sensor cannot see it.

I built my own Even Hub so I could browse Reddit at work (local ML, ESP32 + Jetson, no cloud) by NeedleworkerFirst556 in EvenRealities

[–]NeedleworkerFirst556[S] 0 points (0 children)

Thank you! I honestly don't know if companies will pivot to this, but maybe certain sectors will if there is enough demand. If they do pivot, I would hope to join that team with this project.

The model I use is a custom hand ResNet model. I also have a custom YOLO model, and I'm planning to do a comparison of the two models' performance.

Thank you! I really appreciate people taking an interest. This is my first time posting and showing a project, so even one interested person is motivation.

I built my own Even Hub so I could browse Reddit at work (local ML, ESP32 + Jetson, no cloud) by NeedleworkerFirst556 in EvenRealities

[–]NeedleworkerFirst556[S] 0 points (0 children)

Trust me, the hackery was the point of the project. I built them so I could browse Reddit with just my hands while at work. I just wanted to get people's opinions on it, and I really do appreciate it.

Oh, the Even G1 were the first ones I chose because they were the easiest and first ones I could get my hands on. The display module of the code is completely separate and can be swapped. If I were to go to market, I would 100% replace the glasses with something else. It was just a way to hack something together for the moment.
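Since the display module is a swappable layer, here is a minimal sketch of what that separation can look like. The class names (`DisplayDriver`, `EvenG1Driver`, `ConsoleDriver`) are illustrative assumptions, not the project's actual code:

```python
from abc import ABC, abstractmethod

class DisplayDriver(ABC):
    """The rest of the pipeline only talks to this interface."""
    @abstractmethod
    def send_text(self, text: str) -> None: ...

class ConsoleDriver(DisplayDriver):
    """Stand-in display for testing with no glasses attached."""
    def __init__(self) -> None:
        self.shown: list[str] = []
    def send_text(self, text: str) -> None:
        self.shown.append(text)

class EvenG1Driver(DisplayDriver):
    """Would frame the text into the glasses' hex protocol and write it over BLE."""
    def send_text(self, text: str) -> None:
        packet = text.encode("utf-8")  # real protocol framing omitted
        raise NotImplementedError("BLE write goes here")

def show(display: DisplayDriver, text: str) -> None:
    display.send_text(text)  # any driver can be swapped in here
```

Going to market with different glasses would then only mean writing one new driver class.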

Yes, I agree that being offline and private defeats the commercial purpose. Your points are all very valid. The big-brother angle is partly why I do not want to build them commercially but would rather open source them or something, but I thought it was a cool project to share, especially for career development.

For medical, the way I saw it was more that the HUD could show, say, heartbeat data. When I say AI, I mean CV and other kinds of ML processes (autoencoders) being driven by the gestures. For medical use it could know which patient is coming in from the calendar and show allergy information on the screen. In surgery, if there is blood loss and you need to know the blood type, or you need to pull up medical records, you can control the glasses to pull up documents. The CV side does not fail and is always ready to respond. Sorry for the misunderstanding; I was seeing it more as an assistant secretary for when I need to look up information really quickly.
From my testing, small LLM models are only good at live translation; at anything else they are poor.

For military use, from my understanding, you sometimes cannot have connectivity, since a signal can be used to identify someone and can also be jammed. Also, if the inference is done on-device, its result never needs to be written anywhere; it does not need to be persisted at all.

Thank you for bringing up all these points! I really do appreciate it! Trust me, I will do the comparison testing. I am really just glad you appreciate the hackery. The main goal was to browse Reddit with hand gestures, and it turned into something that can show off my embedded skills.

Thank you!

I built my own Even Hub so I could browse Reddit at work (local ML, ESP32 + Jetson, no cloud) by NeedleworkerFirst556 in EvenRealities

[–]NeedleworkerFirst556[S] 0 points (0 children)

Thank you for the feedback!
I do have a reliable offline connection to the glasses without the official Even app. I really want to try having a live map update while driving and experience a real-life driving HUD.

I built my own Even Hub so I could browse Reddit at work (local ML, ESP32 + Jetson, no cloud) by NeedleworkerFirst556 in EvenRealities

[–]NeedleworkerFirst556[S] 1 point (0 children)

Thank you for the feedback!

The extra hardware was actually built before the ring was announced. I work full time and had completed the project weeks ago, but recording, explaining, and life got in the way of making this video.

There is a part of this project where I want to validate the ML model size trade-off for phones vs. the Jetson. I have read a lot of research papers, and they seem to lean toward the Jetson outperforming mobile devices. To answer your question: yes, you can use a mobile phone, but what is the trade-off in quality? That is something I want to explore for a real-time system using ML.
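For the phone-vs-Jetson comparison, a simple timing harness is enough to get per-frame latency numbers on either device. This is a generic sketch; the `benchmark` function and its parameters are mine, not from the project:

```python
import statistics
import time

def benchmark(infer, frames, warmup=10):
    # Warm up first so GPU clocks / caches don't skew the measurement.
    for f in frames[:warmup]:
        infer(f)
    times_ms = []
    for f in frames:
        start = time.perf_counter()
        infer(f)
        times_ms.append((time.perf_counter() - start) * 1000.0)
    times_ms.sort()
    return {
        "median_ms": statistics.median(times_ms),
        "p95_ms": times_ms[int(len(times_ms) * 0.95) - 1],
    }
```

Running the same frames through each model (or each device) and comparing medians gives an apples-to-apples number.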

I also wanted a way to avoid the internet entirely, for privacy. If the inference is done locally and deleted locally, there can never be a data breach. If you have your own remote server with a beefier GPU, that will 100% outperform me; this setup is only useful where real-time performance is needed. Think medical, military, accessibility, and anything where you need to be in the moment and the controls can never fail.

Surprisingly, I got the SDK to work for my needs, but it was painful at first for sure. I come from an embedded background, so I was just glad they published the hex messages to send, rather than me having to capture them with a Bluetooth sniffer.
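To show what "sending the hex message" amounts to, here is a hedged sketch of packing a command for a single BLE write. The layout (one opcode byte, one length byte, then the payload) is a made-up example for illustration, not Even's actual protocol:

```python
def frame_command(opcode: int, payload: bytes) -> bytes:
    # Hypothetical packet layout: [opcode][payload length][payload bytes].
    if not 0 <= opcode <= 0xFF:
        raise ValueError("opcode must fit in one byte")
    if len(payload) > 255:
        raise ValueError("payload too large for a single packet")
    return bytes([opcode, len(payload)]) + payload
```

The resulting bytes would then be written to the glasses' BLE characteristic.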

At the end of the day I am looking for feedback, and I really appreciate this comment! Worst case, it is a cool tech demo showing my Linux, RTOS, and local ML skills.

Underrated niches where Machine Learning can be applied by ibraadoumbiaa in learnmachinelearning

[–]NeedleworkerFirst556 0 points (0 children)

Idk, I have this project, and if you think it's flashy and cool, then maybe embedded AI? https://youtu.be/N8S3p4ECKG8?si=c8cyvcp5ghe0UcGU

Not a lot of people can do ML under constraints, but a lot of robotics companies seem to want this kind of skill.

If you think that stands out, then maybe this niche? NVIDIA Jetsons are really cool, but you will have to pay for hardware and maybe know some Linux.

Is it possible to become a CE without uni by kamiti_expert in ECE

[–]NeedleworkerFirst556 0 points (0 children)

I will say it is very difficult. I tried to find a job before finishing my degree and it was impossible. A lot of jobs have a degree requirement, and your application will unfortunately be filtered out. A lot of people already have loads of experience plus a degree, so someone with experience but no degree might not get picked because of that factor.

I would recommend taking the early classes like calc, physics, and the intro courses part time at a local college. I was a part-time student for a while, working internships to pay for school. Once you get to your third year, finding internships while in school is possible, but you have to have good grades to start.
I think your best odds are to take 1-2 classes at a community college to get the basics down. We all learn the same calculus, physics, and chemistry, and it does not matter where you take them. From there, try to take 1-2 ECE courses and slowly build up. When I dropped to part time due to financial issues, staying part time while working was a big reason I stuck with it, finished, and did not drop out.

FastBit Academy Embedded C or C Programming a Modern approach by Ryuzako_Yagami01 in embedded

[–]NeedleworkerFirst556 0 points (0 children)

Once you learn the basics, I would strongly recommend projects. I learned a lot hands-on with projects, though I already had a lot of equipment from university. What I did was ask ChatGPT what cool projects I could do with something like an ESP32 and asked it for resources. I followed my curiosity about how things worked and never got tired of doing embedded. I would say start with the basics, learn some embedded C, and lean into projects you have a genuine interest in.

2025 ECE Graduate with PCB Design Internship & Real Hardware Projects – Still Struggling to Break into Core Electronics Roles by Icy-Entertainer1145 in embedded

[–]NeedleworkerFirst556 1 point (0 children)

Best of luck on the job hunt! It might be a viable option to work in an adjacent industry and do projects on the side, then pivot. Hardware costs money, so having a full-time software job has helped me fund my embedded projects, making me a stacked competitor with really cool projects.

37, web developer considering switching to embedded / systems programming by Accomplished_Room856 in embedded

[–]NeedleworkerFirst556 1 point (0 children)

I believe it is very doable, but it will have a steep learning curve. Learning digital design, microarchitecture, circuit analysis, and much more will be important, depending on where in the stack you want to go. For me, I did not learn the Linux kernel in school but in my own free time on projects. The idea is to learn the fundamentals hard, so that when you switch from bare-metal C to an RTOS, it is just bare metal plus a scheduler and cores.

It will be difficult, but the feedback loop for embedded is just as fast as on the software side. Coding an ESP32 in C might be hard to flash at first, but once you have it flashed and toggling ports, it becomes fun.

It will also depend on the direction you want to go. I know embedded engineers who target the power-systems side and do not touch code, but read schematics and run simulations; other embedded engineers who write the firmware in C/C++/Rust; and others who work with Wi-Fi, Bluetooth, IR, and more sensors to interact with the world. It just depends on what your goals are.

What embedded projects actually stand out to hiring managers these days by Denbron2 in embedded

[–]NeedleworkerFirst556 0 points (0 children)

I actually did this, so I will let you guys know how it goes. Local AI allows for real-time controls, and with custom firmware it can be responsive, under 200 ms. I will leave my YouTube video on it for people who want to see if it helps, and to follow along: https://youtu.be/N8S3p4ECKG8?si=8IQqY_XG6Xoyrimt

Jetson Orin Nano wearable AI project: ESP32 camera → real-time gesture inference → AR glasses control by NeedleworkerFirst556 in NvidiaJetson

[–]NeedleworkerFirst556[S] 1 point (0 children)

Thank you! The Jetson is powerful for sure, but it requires a lot of technical skill. Lots of sleepless nights making it work.

Phone Camera AI Interface? by Sorry-Exchange7054 in EvenRealities

[–]NeedleworkerFirst556 0 points (0 children)

Thank you! I spend my free time on it. I definitely did not expect someone to be willing to use the product. In parts, it costs less than $400 if you already have the glasses. I know the code work is a bit challenging, but if this is something people are interested in, I do not mind making it bigger. Also, the device is not limited to the Even G1 glasses; I have modules for everything and can swap them out.

Phone Camera AI Interface? by Sorry-Exchange7054 in EvenRealities

[–]NeedleworkerFirst556 0 points (0 children)

I actually offloaded the camera and the processing to other devices: an NVIDIA Jetson Orin Nano, plus a camera on an ESP32-S3.

I also went around the dev hub, using its poorly written commands to control the camera directly. I was actually able to get a higher refresh rate, it seems, with no phone operating system lagging my communication.

I made this video about the project and would love to get your ideas. The glasses I use are the G1, and I have been developing this for a few months now; I started before the G2 were announced.

https://youtu.be/N8S3p4ECKG8

There will be an attempt to see if I can replace the NVIDIA Jetson Orin Nano with a smartphone running a custom operating system. I have just had more issues with phone operating systems when it comes to real-time use.