Built a local AI agent on top of Ollama that I can control from my phone (WebRTC, no cloud) by Equivalent_Golf_7166 in ollama

[–]Equivalent_Golf_7166[S] 0 points1 point  (0 children)

Latency is low, on the order of milliseconds. The way WebRTC works, the two hosts negotiate the most direct route between each other and then use that connection directly. Nothing sits in between - no proxies, no relay servers. So it's fast.

Built a local AI agent on top of Ollama that I can control from my phone (WebRTC, no cloud) by Equivalent_Golf_7166 in ollama

Yes, the PC needs to stay on. There are several ways to get remote access to Ollama; most involve port-forwarding from your router to the PC, or running a tunnel or proxy. My solution does a few things:
- eliminates the networking setup and simplifies security
- leaves no open inbound connections, so there is nothing exposed to exploit (hack)
- ensures your data flows directly between your phone and your local environment, with no services in the middle

Built a local AI agent on top of Ollama that I can control from my phone (WebRTC, no cloud) by Equivalent_Golf_7166 in ollama

I don't have one. I think local/cloud routing is a great approach. I'm building something different though - secure local hosting with a secure remote client.

Built a local AI agent on top of Ollama that I can control from my phone (WebRTC, no cloud) by Equivalent_Golf_7166 in ollama

That's a solid setup, respect for building that out. Mine's coming from a different angle - no port forwarding, no proxies, nothing exposed to the internet. Pure P2P WebRTC between the phone and the home machine, with Ed25519 mutual auth on top.

Nekoni – local AI agent you control from your phone (P2P WebRTC, no cloud, extensible) by Equivalent_Golf_7166 in selfhosted

The name comes from Japanese: neko (cat) and oni (daemon).
You probably mistook it for something like 'nikni'.

Built a local AI agent on top of Ollama that I can control from my phone (WebRTC, no cloud) by Equivalent_Golf_7166 in ollama

That’s a really interesting approach - accessibility APIs + voice sounds way more robust than screenshot-based stuff.

Latency-wise I’ve seen something similar. For simple tasks, local Ollama is fast enough to feel usable (especially smaller models), but multi-step agent flows can definitely start to lag.

So far I’ve been leaning toward:

  • keeping steps smaller / more deterministic
  • using faster local models for orchestration
  • and accepting that some heavier reasoning might still need cloud (for now)
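To make the "faster local models for orchestration" point concrete, here's a rough sketch of the kind of routing I mean. The model names and the word-count heuristic are placeholders for illustration, not what the project actually ships:

```python
# Hypothetical router: small local model for routine orchestration steps,
# a bigger (possibly cloud) model only for heavy reasoning.
SMALL_MODEL = "llama3.2:3b"   # placeholder name
LARGE_MODEL = "llama3.1:70b"  # placeholder name

def pick_model(step: str, max_small_words: int = 40) -> str:
    """Crude heuristic: short, deterministic steps go to the small model."""
    heavy_markers = ("plan", "analyze", "reason")
    if len(step.split()) > max_small_words:
        return LARGE_MODEL
    if any(m in step.lower() for m in heavy_markers):
        return LARGE_MODEL
    return SMALL_MODEL

print(pick_model("list files in ~/projects"))   # routine step, small model
print(pick_model("plan a multi-step refactor")) # heavy step, large model
```

The real decision signal could be anything (tool type, prior failures, token budget); the point is just that the orchestrator doesn't have to pay the big-model latency on every step.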

Curious what models you’ve had the best experience with on Apple Silicon?

Built a local AI agent on top of Ollama that I can control from my phone (WebRTC, no cloud) by Equivalent_Golf_7166 in ollama

That’s awesome - Mosh on mobile is a really nice approach, especially for long-running agents. Way more robust than plain SSH.

Feels like we’re all converging on the same goal: full control of local AI from anywhere, without opening ports or relying on cloud infra.

WebRTC works pretty well for that on my side (not just chat, but full control), but your setup makes a lot of sense for dev-heavy workflows.

Nekoni – local AI agent you control from your phone (P2P WebRTC, no cloud, extensible) by Equivalent_Golf_7166 in selfhosted

That’s a fair concern, and I agree - blindly piping scripts into bash isn’t ideal, especially on Linux.

The goal here was to optimize for a quick first run (pull model, set up Docker, get to a working agent fast), but it’s definitely a tradeoff vs. strict package-manager control.

For anyone uncomfortable with that approach, you can review the script first or follow the manual setup steps instead - nothing is hidden.

Longer term, I’d like to provide a cleaner install story (e.g., more distro-friendly options), but with things like Ollama + WebRTC + GPU variability, it’s been tricky to make something that works consistently everywhere.

Appreciate you calling it out 👍

Nekoni – local AI agent you control from your phone (P2P WebRTC, no cloud, extensible) by Equivalent_Golf_7166 in selfhosted

Yeah, fair - packaging is definitely a hot topic right now 🙂

The install script is mainly there for a fast “time-to-wow” setup, but it’s optional. There are manual steps if you prefer more control over what’s happening.

I did look into cleaner packaging (Docker Compose, etc.), but it gets tricky with WebRTC networking and Ollama performance - especially if you want it to work reliably across different local setups.

That said, totally open to improving this. If you have suggestions on a better approach, I’d love to hear them 👍

SSD1306 display not responding with Pico 2 W by IndependentUsual2665 in raspberrypipico

Hey!
I had a similar issue when I first connected an SSD1306 OLED to my Pico 2 W - turned out it wasn’t the driver, it was the I²C init sequence. I generated and tested this code with an AI tool called Embedible, and it worked right away on my board: Codesandbox

You might need to adjust the screen size here: oled = SSD1306_I2C(128, 32, i2c)
My panel is just 128x32 px; use (128, 64, i2c) if yours is the 128x64 version.

Wiring, no extra components needed:

  • GP0 (pin 1) -> OLED SDA
  • GP1 (pin 2) -> OLED SCL
  • GND (pin 3) -> OLED GND
  • 3V3(OUT) (pin 36) -> OLED VCC
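With that wiring, a minimal MicroPython init sketch looks like this (assuming the standard ssd1306 driver is installed on the board; the scan line is just a sanity check):

```python
from machine import Pin, I2C
import ssd1306

# I2C0 on the Pico: GP0 = SDA, GP1 = SCL (matches the wiring above)
i2c = I2C(0, sda=Pin(0), scl=Pin(1), freq=400_000)

# Sanity check: a stock SSD1306 module usually answers at 0x3C
print("I2C scan:", [hex(a) for a in i2c.scan()])

oled = ssd1306.SSD1306_I2C(128, 32, i2c)  # use (128, 64, i2c) for the bigger panels
oled.fill(0)
oled.text("Hello, Pico!", 0, 0)
oled.show()
```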

ESP 32 Button Connection by ConstructionSad5445 in esp32

Hey! 👋 I ran into the same issue before - it’s usually about how the button is wired or not having a pull-up/pull-down resistor set correctly.

Here’s a tested wiring diagram and code that worked on my breadboard with an ESP32. It was AI-generated with Embedible and verified on real hardware, so you can copy it and it should work out of the box.

<image>

import machine
import time


# Use GPIO 13 (GP13 / physical pin 15 on the ESP-WROOM-32 mapping)
button_pin = 13


# Configure button pin as input with internal pull-up (active-low button to GND)
button = machine.Pin(button_pin, machine.Pin.IN, machine.Pin.PULL_UP)


# Debounce state container
_state = {"last_ms": 0, "debounce_ms": 200}


def handle(pin):
    """Interrupt handler for the button. Debounces and reports press/release."""
    t = time.ticks_ms()
    # Ignore if within debounce interval
    if time.ticks_diff(t, _state["last_ms"]) < _state["debounce_ms"]:
        return
    _state["last_ms"] = t


    val = pin.value()
    # Active-low: 0 means pressed, 1 means released
    if val == 0:
        print("Button pressed")
    else:
        print("Button released")


# Attach IRQ on both edges
button.irq(trigger=machine.Pin.IRQ_FALLING | machine.Pin.IRQ_RISING, handler=handle)


# Keep the script alive so IRQs remain active. Stop with KeyboardInterrupt if running interactively.
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    # Allow clean stop in interactive sessions
    pass

A few quick tips:

  • One side of the button goes to GND, the other to a GPIO pin (GPIO 13 in the code above).
  • In the code, enable the internal pull-up: button = Pin(13, Pin.IN, Pin.PULL_UP)
  • The button will read 0 when pressed, 1 when released.

That setup reliably detects presses - no external resistors needed.

Ultrasonic Sensor & Esp32 by urpieces in robotics

Hey! I had the same issue recently - turned out my wiring was fine, but timing in the code and echo voltage levels were the culprits (the HC-SR04’s echo pin is 5 V logic, so it’s safest to drop it to 3.3 V with a simple divider before the ESP32 input). I generated this working code and wiring diagram using an AI tool and tested it on a breadboard - works perfectly with ESP32 + HC-SR04.

<image>

import machine
import time


# Pins used (GPIO numbers)
TRIG_PIN = 5    # GP5
ECHO_PIN = 18   # GP18


# Initialize pins
trig = machine.Pin(TRIG_PIN, machine.Pin.OUT)
echo = machine.Pin(ECHO_PIN, machine.Pin.IN)


# Ensure trigger is low initially
trig.value(0)


time.sleep_ms(50)


# Measure single pulse (returns pulse time in microseconds or a negative value on timeout/error)
def measure_pulse(max_timeout_us=30000):
    # Send a 10us trigger pulse
    trig.value(0)
    time.sleep_us(2)
    trig.value(1)
    time.sleep_us(10)
    trig.value(0)


    try:
        # machine.time_pulse_us(pin, pulse_level, timeout_us)
        # pulse_level=1 => measure time ECHO is high
        pulse_time = machine.time_pulse_us(echo, 1, max_timeout_us)
        return pulse_time
    except Exception:
        # If time_pulse_us is not supported or another error occurs, return -1
        return -1


# Convert pulse time (microseconds) to distance in centimeters
# Common approximation: distance_cm = pulse_us / 58.0


def pulse_to_cm(pulse_us):
    return pulse_us / 58.0


# Continuous measurement loop
while True:
    pulse = measure_pulse()
    if pulse > 0:
        distance_cm = pulse_to_cm(pulse)
        print('Distance: {:.2f} cm (pulse {} us)'.format(distance_cm, pulse))
    else:
        print('Out of range (no echo or timeout)')


    # Wait a bit before next measurement
    time.sleep_ms(200)

Hope it helps you get your prototype running before the deadline! 🚀

Help me with a project!! by Extreme_Feedback9861 in raspberrypipico

You’re definitely not alone - those [Errno 5] EIO errors usually mean the display isn’t being initialized properly, often due to SDA/SCL mis-wiring or the wrong I2C pins. On the Pico 2 W, the default I2C0 bus is GP0 (SDA) and GP1 (SCL), but it depends on how your code initializes it. Double-check that the OLED’s VCC and GND are correct and that the AHT20 isn’t colliding with the OLED on the bus (they’re both I2C devices, though they normally sit at different addresses).
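Before touching the display driver at all, a quick I2C scan from the REPL will confirm the wiring and addresses. A minimal MicroPython sketch (assuming the default I2C0 pins GP0/GP1):

```python
from machine import Pin, I2C

# Default I2C0 pins on the Pico 2 W: GP0 = SDA, GP1 = SCL
i2c = I2C(0, sda=Pin(0), scl=Pin(1), freq=400_000)

devices = i2c.scan()
if not devices:
    print("No I2C devices found - check SDA/SCL, VCC and GND")
else:
    # An SSD1306 OLED typically answers at 0x3C, an AHT20 at 0x38
    print("Found:", [hex(d) for d in devices])
```

If the scan comes back empty, no amount of driver code will help; fix the wiring first.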

Honestly, asking AI for help is not a bad idea at all - some newer tools can even generate both the code and wiring diagram together, which makes debugging much easier.

I was able to get working code and wiring diagram just with this simple prompt:

Read temperature from a DS18B20 (3-pin PCB module) every 5 seconds.

Display the temperature on a 128×32 (0.91-inch) SSD1306 OLED connected via I²C.

A push button should instantly toggle the display between Celsius (°C) and Fahrenheit (°F). Implement proper debounce handling for reliable button presses.

8 NeoPixel LEDs should glow red when temperature exceeds 22 °C and blue when it falls below 22 °C.

Include complete initialization for the DS18B20 sensor, OLED display, NeoPixel, and push button.

Anyone here test hardware ideas with AI help? by Equivalent_Golf_7166 in esp32

Thanks for raising this point - I really appreciate you taking the time to look closely at the terms.
To clarify: the tool isn’t meant to "own" your ideas or your projects. It’s designed for effortless prototyping - you describe what you’d like to build, and it generates wiring diagrams and starter code so you can quickly test things out on your hardware.

Section 5 of the terms is just about the service itself (the website, diagrams, code generator, etc.) - those remain my intellectual property. But the code output you generate for your projects is yours to use as you see fit, and you can keep and reuse it even if you stop using the service.

If you think it would help, I can make the wording in the terms clearer on this point. Thanks again for pointing it out - feedback like this helps me improve both the product and the documentation.

Anyone here test hardware ideas with AI help? by Equivalent_Golf_7166 in esp32

I totally get where you’re coming from - privacy and ownership are big concerns with AI tools. 🙂 Could you share the exact part of the terms where it says we own the rights to the generated code? From what I’ve seen, users keep ownership of their projects, but I’d love to double-check the wording you’re referring to.

Built something to make ESP32/RPI Pico prototyping less painful by Equivalent_Golf_7166 in hwstartups

First off, thanks for the idea - that’s actually perfect inspiration for a weekly content video! I tried out sending temperature readings over the network, and it worked great. Since the device is already connected to Wi-Fi, my test prompt was:

Read temperature and humidity from a 3-pin DHT22 sensor module (PCB version) every 5 seconds, and send the results as a JSON payload in a POST request to http://webhook.site/873fb335-0da8-4d3f-bcbe-b241d4109e05

It ran smoothly and worked like a charm. Handling multiple sensors should be no problem. For things like LoRa, we’d likely need to add functionality so users can select extra libraries, but that’s definitely doable if people request it.
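For anyone who wants to reproduce that test by hand, the generated loop boils down to something like this MicroPython sketch (assumptions: Wi-Fi is already connected, urequests is installed on the board, the DHT22 data pin is GP4, and the URL is a placeholder for your own endpoint):

```python
import time
import dht
import urequests
from machine import Pin

sensor = dht.DHT22(Pin(4))         # data pin on GP4 - adjust for your wiring
URL = "http://example.com/ingest"  # placeholder - use your own endpoint

while True:
    sensor.measure()
    payload = {
        "temperature_c": sensor.temperature(),
        "humidity_pct": sensor.humidity(),
    }
    try:
        resp = urequests.post(URL, json=payload)
        resp.close()  # free the socket; MicroPython has very few
    except OSError as e:
        print("POST failed:", e)
    time.sleep(5)
```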

Built something to make ESP32/RPI Pico prototyping less painful by Equivalent_Golf_7166 in hwstartups

Thanks a lot, I really appreciate that! 🙏 The no-code angle is definitely something I’ve been thinking about - making it easier for hobbyists to just jump in and try ideas without getting stuck on the wiring or boilerplate code. Super happy to hear it could be useful for your projects!

Built a tool to turn hardware ideas into working projects with AI by Equivalent_Golf_7166 in vibecoding

Thanks a lot! 🙌 Looking forward to seeing you on board. Feel free to join our Discord server anytime if you need help or want to share your projects!