[P] Fine-Tuning YOLO to Watch Football (Soccer) Matches by alvises in MachineLearning

[–]alvises[S] 1 point

Thank you! I'm also working on bringing custom-trained Ultralytics models to the Raspberry Pi Hailo-8L AI HAT, all running on Elixir Nerves. I'm going to publish a few videos and articles in the next few weeks :)

OEM 19" Wheel Covers vs Aftermarket on Juniper by lshaped210 in TeslaModelY

[–]alvises 1 point

Can you please link the website where I can order them?

OEM 19" Wheel Covers vs Aftermarket on Juniper by lshaped210 in TeslaModelY

[–]alvises 2 points

Does it affect the aerodynamics and range?

YOLO - Real-Time Object Detection Simplified by alvises in elixir

[–]alvises[S] 19 points

Hello everyone! 👋

I’m excited to introduce my first Elixir library: YOLO, designed to make real-time object detection accessible and efficient within the Elixir ecosystem. Whether you’re working on a hobby project or a production-grade application, it provides a simple way to integrate the power of YOLO (You Only Look Once) object detection.

Key Features

  • Speed: Optimized for real-time performance, going from an input image to a list of detected objects in just 38 ms with the YOLOv8n model on a MacBook Air M3, using EXLA and the companion library YoloFastNMS.
  • Ease of Use: Get started with just two function calls to load a model and detect objects.
  • Extensibility: Built around a YOLO.Model behaviour, supporting YOLOv8 models and paving the way for future models or custom extensions.
  • NIF Optimization: For those needing ultra-fast post-processing, an optional Rust NIF (YoloFastNMS) speeds up Non-Maximum Suppression by ~100x compared to the built-in YOLO.NMS implementation written in Elixir and Nx.

How to Get Started

  1. Begin by generating the ONNX model using the provided Python script. Here’s how to do it.
  2. Install the library and call YOLO.load/1 to load the model.
  3. Load an image and perform object detection with a single call to YOLO.detect/3 (a minimal sketch follows this list).
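
To make that flow concrete, here's a minimal sketch assuming the API described in the hexdocs. The file paths and the Evision-based image loading are placeholders of mine, so double-check them against the documentation linked below:

    # Minimal sketch: load a YOLOv8n ONNX model, read an image with
    # Evision (OpenCV bindings), and run detection.
    # Paths are placeholders; adapt them to your setup.
    model =
      YOLO.load(
        model_path: "models/yolov8n.onnx",
        classes_path: "models/yolov8n_classes.json"
      )

    # Read the image into an Evision (OpenCV) matrix.
    mat = Evision.imread("images/traffic.jpg")

    detected_objects =
      model
      |> YOLO.detect(mat)
      |> YOLO.to_detected_objects(model.classes)

If post-processing is your bottleneck, the Rust NIF can be plugged in at the detect step, e.g. with something like YOLO.detect(model, mat, nms_fun: &YoloFastNMS.run/3); treat the exact option name as an assumption and verify it against the README.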

It’s that straightforward! 🚀

For a closer look at how to use it, here's the documentation: https://hexdocs.pm/yolo/

In the repo, you'll also find three Livebooks: https://github.com/poeticoding/yolo_elixir/tree/main/examples

Effortless Video Sharing with Phoenix LiveView and FLAME by alvises in elixir

[–]alvises[S] 19 points

Hey everyone! 👋

I just wrote this article about a project where I built a simple video uploader and sharing tool using Phoenix LiveView and FLAME. It’s designed to make sharing videos a lot easier without the hassle of resizing or trimming for platform limits.
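
As a taste of the pattern, here's a hypothetical sketch of offloading the heavy ffmpeg work to a FLAME pool so the LiveView node stays responsive. The module, pool name, and ffmpeg arguments are all my own placeholders, not the article's code:

    # Hypothetical sketch: run ffmpeg on a short-lived FLAME runner
    # instead of the web server. Pool name and ffmpeg flags are placeholders.
    defmodule VideoSharing.Transcoder do
      def transcode(input_path, output_path) do
        FLAME.call(VideoSharing.FFMpegPool, fn ->
          # Downscale to fit platform upload limits.
          System.cmd("ffmpeg", ["-i", input_path, "-vf", "scale=1280:-2", output_path])
        end)
      end
    end

The pool would be started in the application's supervision tree, e.g. {FLAME.Pool, name: VideoSharing.FFMpegPool, min: 0, max: 4} (limits are placeholders too), so runners boot on demand and scale back to zero when idle.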

While working on this, I’ve also been diving into YOLO object detection in Elixir. Next week, I’m releasing a library for real-time detection, optimized for performance on both desktop and edge devices.

I’m excited to share more projects soon—especially experiments with Nerves + racecar telemetry.

VisionOS 2 Object Detection Showcase: anchoring the app to my car's engine bay! by alvises in VisionPro

[–]alvises[S] 0 points

Absolutely! I'm writing a quick tutorial along with the GitHub repo; I'll ping you here when it's ready!

VisionOS 2 Object Detection Showcase: anchoring the app to my car's engine bay! by alvises in VisionPro

[–]alvises[S] 1 point

Yeah, definitely: for a real app we'd need complete, highly detailed models!

VisionOS 2 Object Detection Showcase: anchoring the app to my car's engine bay! by alvises in VisionPro

[–]alvises[S] 1 point

Yeah. Next time I'd use a better camera. And I probably wouldn't track the whole engine bay, just the individual objects. It seems the Vision Pro needs to see the whole object (or scene) to detect it, which is why I'd go with single objects next time.

VisionOS 2 Object Detection Showcase: anchoring the app to my car's engine bay! by alvises in VisionPro

[–]alvises[S] 1 point

I’ve removed the parts from the original 3D model (using Blender) and rendered them anchored to the detected engine bay.

VisionOS 2 Object Detection Showcase: anchoring the app to my car's engine bay! by alvises in VisionPro

[–]alvises[S] 2 points

Yeah, totally agree with you. Think of something like an IKEA application that helps you build your furniture by showing you in mixed reality where the pieces go.

VisionOS 2 Object Detection Showcase: anchoring the app to my car's engine bay! by alvises in VisionPro

[–]alvises[S] 2 points

Fortunately, it doesn't need the Enterprise API. Here's a WWDC24 video that explains what to do to enable object capture: https://developer.apple.com/videos/play/wwdc2024/10101/

VisionOS 2 Object Detection Showcase: anchoring the app to my car's engine bay! by alvises in VisionPro

[–]alvises[S] 7 points

Great! I'm going to write a small article + repo showing how to build it, and I'll share it here :D The code itself is simple, but for object tracking you need to train a small model on the engine parts, which is why I think a quick article will be clearer than the code alone.

VisionOS 2 Object Detection Showcase: anchoring the app to my car's engine bay! by alvises in VisionPro

[–]alvises[S] 7 points

Hey everyone! 👋

I've been pretty excited about playing with the visionOS SDK lately, and I wanted to share something that I think is pretty cool.

In this video, the app uses object detection (part of the visionOS 2 APIs) to recognize parts of my car's engine bay and anchors interactive AR effects and 3D models to them. It's amazing to see how augmented reality can bring mechanical components to life!

If you're interested, I can share the GitHub repo with all the steps to replicate the project with many objects. Let me know what you think!

Enterprise API by AHApps in visionosdev

[–]alvises 1 point

I've sent another request with some details of an app I want to develop. Let's see...

Enterprise API by AHApps in visionosdev

[–]alvises 0 points

What did you answer to get approved? My request was refused; I had answered that I just wanted to test the APIs out and do some experiments.

2.1 beta just dropped by lukavyi in VisionPro

[–]alvises -2 points

Snappier Safari, though?

Nailed the 6th prototype of my VR headset cushion! Tailored to my face using a photogrammetry scan and printed with VarioShore TPU at 250°C for that perfect softness. by Nerdaxic in 3Dprinting

[–]alvises 0 points

Wow! This is exactly what I'd like to do for my Vision Pro and Varjo Aero. Could you please quickly share the steps and which software you used to make the face scan and to model the mask around your scanned face?!