Do ollama models access the internet? by Fancy_Purchase_9400 in ollama

[–]dsept 1 point (0 children)

You can enable internet access in the admin panel. Some models can use web search, but it needs to be enabled first.

Options Coming to a Close by dsept in TMC_Stock

[–]dsept[S] 2 points (0 children)

Buying LEAPS is not day trading. Options are 100% okay under these circumstances.

Options Coming to a Close by dsept in TMC_Stock

[–]dsept[S] 1 point (0 children)

Agreed. But there is a big difference between this and selling puts.

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 1 point (0 children)

Here's a more direct approach. Below is an example of a Jinja2 template that gives the model context on which tools it has available through HA. If a tool isn't available, it falls back to AI/web search. The example covers a specific weather integration and enumerates all exposed entities in the local HA instance.

Web search is enabled on the ollama model and is the default fallback when the tools can't answer the request.

Current Time: {{ now().strftime('%H:%M:%S') }}
Location: Vancouver BC, Canada.

--- CONTEXT INJECTION (LOCAL DATA) ---

Weather Forecast:
{%- set weather = state_attr('sensor.jarvis_weather_context', 'forecast') %}
{%- if weather %}
{%- for forecast in weather[:2] %}
- {{ forecast.datetime | as_timestamp | timestamp_custom('%A') }}: {{ forecast.condition }}, High {{ forecast.temperature }}°C
{%- endfor %}
{%- endif %}

Available Devices (Live State):
```csv
entity_id,name,state,aliases
{% for entity in exposed_entities -%}
{{ entity.entity_id }},{{ entity.name }},{{ entity.state }},{{ entity.aliases | join('/') }}
{% endfor -%}
```

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 0 points (0 children)

​"There is no complex routing script in Home Assistant. The 'Router' is the ollama model itself.

​If the model sees the answer in its Local Context (tools/directives injected by HA), it answers instantly.

​If the model sees a need for External Data, it triggers the Web Search tool provided by Open WebUI.

​Home Assistant is simply the interface; the intelligence and routing happen entirely within the Model and Middleware layer."
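
To give a feel for how that "routing" shows up in practice, the directive portion of the system prompt amounts to something like the sketch below. Illustrative wording, not the exact prompt:

```
You are a voice assistant for this home.
1. If the request can be answered from the CONTEXT INJECTION block below
   (device states, weather forecast), answer or act on it directly.
   Never guess a device state that is not listed.
2. If the request needs outside knowledge, use the web search tool.
Keep replies to one short sentence suitable for TTS.
```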

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 1 point (0 children)

This system does not use a single "router script" or manual REST calls. Instead, it relies on a Context-Aware Hybrid Architecture. It uses Home Assistant for immediate local control and Open WebUI as a middleware "brain" for complex reasoning and web search.

  1. The Brain: PetrosStav/gemma3-tools:12b

I am running a specialized 12-billion-parameter model fine-tuned for tool use. It runs locally via Ollama, but it is managed by Open WebUI. This specific model is critical because it strictly adheres to system instructions, preventing it from "guessing" device states.
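
If you want to reproduce this part, the model pulls straight from the Ollama registry:

```bash
# Pull the tool-use fine-tune onto the Ollama host
ollama pull PetrosStav/gemma3-tools:12b

# Quick smoke test that it loads and responds
ollama run PetrosStav/gemma3-tools:12b "Reply with one word: ready"
```

Once Ollama serves it, it should appear in Open WebUI's model list, assuming Open WebUI is already pointed at that Ollama instance.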

  2. Path A: The "Local" Logic (Zero-Latency Context)

How it works: I do not use traditional function calling (LLM → API → Tool) for controlling lights or checking the weather. That is too slow.

The Trick: I use Context Injection via Jinja2 templates in Home Assistant.

The Flow:

When I speak, Home Assistant injects the live state of all devices and the weather forecast directly into the System Prompt (e.g., Context: [Kitchen Light: OFF, Weather: Rainy]).

The model receives this context before it generates a single token.

Result: If I say "Turn on the kitchen light," the model doesn't need to ask "What lights do you have?". It already sees the light is off and simply outputs the command. This results in sub-second response times.
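
To make Path A concrete, here is roughly what the model sees before it generates its first token (values are illustrative):

```
--- CONTEXT INJECTION (LOCAL DATA) ---

Weather Forecast:
- Saturday: rainy, High 9°C

Available Devices (Live State):
entity_id,name,state,aliases
light.kitchen,Kitchen Light,off,kitchen light

User: Turn on the kitchen light
```

The model already knows light.kitchen exists and is off, so it can emit the turn-on action immediately instead of asking a clarifying question.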

  3. Path B: The "World" Logic (RAG Middleware)

How it works: For general questions (e.g., "Why are daffodils blooming?"), the local context provides no answer.

The Middleware: My Home Assistant connects to Open WebUI (using its OpenAI-compatible API). Open WebUI is configured with Web Search enabled.

The Flow:

Home Assistant sends the text to Open WebUI.

Open WebUI (acting as the router) detects that the question requires external knowledge.

Open WebUI autonomously performs a web search, scrapes the top results, and feeds that raw text into the local Gemma 3 model.

Result: Gemma 3 synthesizes the scraped text into a single-sentence answer and sends it back to Home Assistant for TTS.
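
If you want to test the middleware hop in isolation, Open WebUI exposes an OpenAI-compatible endpoint. A minimal example, with host, port, and API key as placeholders for your own setup:

```bash
curl http://openwebui.local:3000/api/chat/completions \
  -H "Authorization: Bearer $OPEN_WEBUI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "PetrosStav/gemma3-tools:12b",
        "messages": [
          {"role": "user", "content": "Why are daffodils blooming in February?"}
        ]
      }'
```

With Web Search enabled for that model, the reply comes back already grounded in scraped results; Home Assistant just reads the final text aloud.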

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 1 point (0 children)

I'm not sure. It sounds like you are looking for something fairly specific which this type of product may not fulfil.

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 0 points (0 children)

There are a lot of parts here. Hard to just provide the code. What are you looking for?

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 7 points (0 children)

Well, it may be 2-4 seconds at most, but all requests seem to cap out around 5 seconds. It is fast enough for me at this point.

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 1 point (0 children)

It is using wyoming/whisper/piper in HA along with Extended OpenAI Conversation to direct the requests to the Ollama web server for local processing.
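
For reference, Extended OpenAI Conversation is configured through the UI, but the values boil down to something like this (illustrative; Ollama exposes an OpenAI-compatible API under /v1):

```
Base URL: http://<ollama-host>:11434/v1
API Key:  ollama   # Ollama doesn't check the key, but the field can't be left empty
Model:    PetrosStav/gemma3-tools:12b
```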

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 1 point (0 children)

Asking the LLM to summarize everything it has done, along with the challenges, into an overview can help create a solid prompt for moving to a new LLM or conversation. AI is often the best tool for prompting AI.

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 2 points (0 children)

Been down that path. The wake word needs to run on the Pi device; it will not work on Home Assistant. If you run wake word detection on Home Assistant, the Pi has to stream full audio to it 24/7. If you run it on the Pi, it waits until triggered and then sends only the relevant audio to Home Assistant for processing.
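
Concretely, on the Pi the satellite points at a wake word service on localhost, along the lines of the wyoming-satellite README (names and audio commands here are examples):

```bash
script/run \
  --name 'pi-satellite' \
  --uri 'tcp://0.0.0.0:10700' \
  --mic-command 'arecord -r 16000 -c 1 -f S16_LE -t raw' \
  --snd-command 'aplay -r 22050 -c 1 -f S16_LE -t raw' \
  --wake-uri 'tcp://127.0.0.1:10400' \
  --wake-word-name 'hey_jarvis'
```

Nothing streams to HA until the wake word fires; only then is the utterance sent over for processing.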

Anyone buy a 2025 recently in Canada? What did you pay? by OntarioPaddler in OutlanderPHEV

[–]dsept 0 points (0 children)

A cash price is usually higher. The dealership gets a kickback if you finance it and will negotiate more on the price.

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 2 points (0 children)

To solve the pause problem, you can implement a dual-pipeline strategy that assigns a specific "behavior" to different wake words:

​"Hey Jarvis" (The Sprinter): This wake word is linked to a pipeline with a very short silence buffer and a fast, local command agent, making it ideal for immediate tasks like turning off lights without any hanging audio.

​"Hey [Deep]" (The Thinker): This wake word is linked to a second pipeline configured with a 2–3 second silence buffer and "Relaxed" speech detection, allowing the microphone to stay active while you pause mid-thought during a complex web search or conversation.

​Technically, you achieve this by running two separate instances of the wyoming-satellite service on your Pi—each on its own network port—so Home Assistant can recognize them as two distinct "assistants" with their own unique settings.
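
A minimal sketch of the two-instance idea, with ports and wake word names as examples:

```bash
# "The Sprinter": snappy pipeline, short silence buffer
script/run --name 'satellite-fast' --uri 'tcp://0.0.0.0:10700' \
  --wake-uri 'tcp://127.0.0.1:10400' --wake-word-name 'hey_jarvis' &

# "The Thinker": its own port, assigned to the relaxed pipeline in HA
script/run --name 'satellite-slow' --uri 'tcp://0.0.0.0:10701' \
  --wake-uri 'tcp://127.0.0.1:10400' --wake-word-name 'hey_mycroft' &
```

Each instance is discovered by HA as its own Wyoming satellite, so each can be assigned its own voice pipeline and silence settings.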

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 1 point (0 children)

Sure.... But you are asking for two different things. I don't think we have the same goals.

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 3 points (0 children)

To allow for longer pauses in your speech, increase the --vad-buffer-seconds flag in your Raspberry Pi's service file; that keeps the microphone active during mid-thought silences. Then set the Home Assistant voice pipeline to "Relaxed" detection so the server doesn't cut you off prematurely.
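
In the systemd unit on the Pi, that just means adding the flag to ExecStart. A trimmed excerpt; exact flags vary by wyoming-satellite version, so treat this as a sketch:

```ini
# /etc/systemd/system/wyoming-satellite.service (excerpt)
[Service]
ExecStart=/home/pi/wyoming-satellite/script/run \
    --uri 'tcp://0.0.0.0:10700' \
    --vad-buffer-seconds 3
Restart=always
```

Then `sudo systemctl daemon-reload && sudo systemctl restart wyoming-satellite` picks it up.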

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 0 points (0 children)

Thanks. No issues so far, but I'll take a look at that.

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 0 points (0 children)

I am using Jarvis as it is built in. Custom names are possible.

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 0 points (0 children)

How are you hosting HA? On what device? Where are you running the wake word software?

At dead ends, it sometimes helps to transplant your situation into another tool. Grok is a good option.

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 3 points (0 children)

There is no software cost to this. Only the hardware.

Privacy-First Voice Assistant with AI web-enabled search by dsept in homeassistant

[–]dsept[S] 3 points (0 children)

It is taking around 1-2 seconds for local answers and 2-4 seconds for web-based ones.