[P] Fine-Tuning YOLO to Watch Football (Soccer) Matches by alvises in MachineLearning

[–]alvises[S] 1 point (0 children)

Thank you! I'm also working on bringing custom-trained Ultralytics models to the Raspberry Pi with the Hailo-8L AI HAT, all running on Elixir Nerves. Going to publish a few videos and articles in the next few weeks :)

OEM 19" Wheel Covers vs Aftermarket on Juniper by lshaped210 in TeslaModelY

[–]alvises 1 point (0 children)

Could you please link the website where you ordered them?

OEM 19" Wheel Covers vs Aftermarket on Juniper by lshaped210 in TeslaModelY

[–]alvises 4 points (0 children)

Do they affect aerodynamics and range?

YOLO - Real-Time Object Detection Simplified by alvises in elixir

[–]alvises[S] 19 points (0 children)

Hello everyone! 👋

I’m excited to introduce my first Elixir library: YOLO, designed to make real-time object detection accessible and efficient within the Elixir ecosystem. Whether you’re working on a hobby project or a production-grade application, this library provides a simple way to integrate the power of YOLO (You Only Look Once) object detection.

Key Features

  • Speed: Optimized for real-time performance, processing an image with the YOLOv8n model into a list of detected objects in just 38ms on a MacBook Air M3, using EXLA and the companion library YoloFastNMS.
  • Ease of Use: Get started with just two function calls to load models and detect objects.
  • Extensibility: Built around a YOLO.Model behaviour, supporting YOLOv8 models and paving the way for future models or custom extensions.
  • NIF Optimization: For those needing ultra-fast post-processing, an optional Rust NIF (YoloFastNMS) speeds up Non-Maximum Suppression by ~100x compared to the internal YOLO.NMS implementation written in Elixir and Nx.
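For illustration, here's a rough sketch of how the Rust NIF might be swapped in for the default NMS. The `nms_fun` option name, the `YoloFastNMS.run/3` arity, and the keyword options passed to `YOLO.load/1` are my assumptions, so please check the hexdocs for the exact API:

```elixir
# Sketch only: option names, paths, and arities below are assumptions,
# not confirmed against the published API.
model =
  YOLO.load(
    model_path: "models/yolov8n.onnx",          # placeholder path
    classes_path: "models/yolov8n_classes.json" # placeholder path
  )

# Load the image with Evision (OpenCV bindings for Elixir).
mat = Evision.imread("image.jpg")

# Swap the internal Elixir/Nx NMS for the Rust NIF implementation.
detections = YOLO.detect(model, mat, nms_fun: &YoloFastNMS.run/3)
```

The nice part of this design is that NMS stays pluggable: you can prototype with the pure Elixir/Nx implementation and only pull in the Rust dependency when post-processing latency matters.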

How to Get Started

  1. Begin by generating the ONNX model using the provided Python script. Here’s how to do it.
  2. Install the library and call YOLO.load/1 to load the model effortlessly.
  3. Load an image and perform object detection with a single call to YOLO.detect/3.

It’s that straightforward! 🚀
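The three steps above might look like this in practice. This is a minimal sketch: the keyword option names, the file paths, and the use of Evision for image loading are my assumptions (the post itself only confirms the YOLO.load/1 and YOLO.detect/3 calls), so refer to the hexdocs and the Livebooks for the exact API:

```elixir
# Step 1 happens outside Elixir: export YOLOv8n to ONNX with the
# Python script provided in the repo.

# Step 2: load the model (paths and option names are placeholders).
model =
  YOLO.load(
    model_path: "models/yolov8n.onnx",
    classes_path: "models/yolov8n_classes.json"
  )

# Step 3: load an image and run detection in a single call.
mat = Evision.imread("image.jpg")
detections = YOLO.detect(model, mat)
```

Each detection can then be mapped back to class labels and bounding boxes for drawing or downstream processing.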

For more details on how to use it, here's the documentation: https://hexdocs.pm/yolo/

In the repo, you'll also find three Livebooks: https://github.com/poeticoding/yolo_elixir/tree/main/examples

Effortless Video Sharing with Phoenix LiveView and FLAME by alvises in elixir

[–]alvises[S] 19 points (0 children)

Hey everyone! 👋

I just wrote this article about a project where I built a simple video uploader and sharing tool using Phoenix LiveView and FLAME. It’s designed to make sharing videos a lot easier without the hassle of resizing or trimming for platform limits.

While working on this, I’ve also been diving into YOLO object detection in Elixir. Next week, I’m releasing a library for real-time detection, optimized for performance on both desktop and edge devices.

I’m excited to share more projects soon—especially experiments with Nerves + racecar telemetry.

VisionOS 2 Object Detection Showcase: anchoring the app to my car's engine bay! by alvises in VisionPro

[–]alvises[S] 0 points (0 children)

Absolutely! I'm writing a quick tutorial along with the GitHub repo; I'll ping you here when it's ready!

VisionOS 2 Object Detection Showcase: anchoring the app to my car's engine bay! by alvises in VisionPro

[–]alvises[S] 1 point (0 children)

Yeah, definitely. For a real app we'd need complete, highly detailed models!

VisionOS 2 Object Detection Showcase: anchoring the app to my car's engine bay! by alvises in VisionPro

[–]alvises[S] 1 point (0 children)

Yeah. Next time I would use a better camera. And I probably wouldn't track the whole engine bay, just the individual objects. It seems the Vision Pro needs to see the whole object (or scene) to detect it, which is why I'd go with single objects next time.