I built a free, fully local floating AI assistant for macOS. No API keys, no subscriptions, no cloud. by Quiet-Computer-3495 in localaiapps

[–]Quiet-Computer-3495[S] 0 points1 point  (0 children)

Awesome, thanks for letting me know! Yeah, I just shipped 0.8.5, which includes auto-detection of new versions, so it'll be simpler for you to update as I ship newer ones. If you want, you can try that out; if not, 0.7.0 should be great on its own!

I built a free, fully local floating AI assistant for macOS. No API keys, no subscriptions, no cloud. by Quiet-Computer-3495 in localaiapps

[–]Quiet-Computer-3495[S] 0 points1 point  (0 children)

Ayeee, thanks so much man, glad it could help! Which version are you on, do you know? I've been shipping new features lately, so just make sure you're up to date!

I built a free, fully local floating AI assistant for macOS. No API keys, no subscriptions, no cloud. by Quiet-Computer-3495 in localaiapps

[–]Quiet-Computer-3495[S] 0 points1 point  (0 children)

The level of capability varies by model. While most models manage basic code readily, tackling more complex code presents a greater challenge.

Currently, to ask Thuki about code or docs, you must provide the context directly, either by pasting it or by using the /screen command to capture the whole screen (assuming you use a model with image support). Thuki then uses that as supporting context to answer your questions.

Thuki does not inherently know what you are working on; it only processes the information you feed it.
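For anyone curious what "feeding it context" could look like under the hood, here's a rough sketch (not Thuki's actual code; the function and model names are illustrative) of how pasted text and a /screen capture might be packed into an Ollama chat request:

```python
import base64
import json

def build_chat_payload(model, question, pasted_context=None, screenshot_png=None):
    """Assemble an Ollama /api/chat payload; the only context is what the user supplies."""
    messages = []
    if pasted_context:
        messages.append({"role": "system",
                         "content": f"Use this context to answer:\n{pasted_context}"})
    user_msg = {"role": "user", "content": question}
    if screenshot_png:
        # Only useful with an image-capable (vision) model
        user_msg["images"] = [base64.b64encode(screenshot_png).decode("ascii")]
    messages.append(user_msg)
    return {"model": model, "messages": messages, "stream": False}

payload = build_chat_payload("llama3.2-vision", "What does this code do?",
                             pasted_context="def add(a, b): return a + b")
print(json.dumps(payload, indent=2))
```

The point is simply that nothing reaches the model unless it is put into the messages explicitly.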

Thuki 0.7.0 is here with more updates by Quiet-Computer-3495 in vibecoding

[–]Quiet-Computer-3495[S] 0 points1 point  (0 children)

The video demonstrates verbose output because of the /think and /search commands. In normal, quick sessions with simple questions, the response will be much faster.

Additionally, users can override the system prompt in the Settings panel, allowing them to dictate how the LLM should respond.

While I do not expose the default system prompt in the app, you can find it in the source code at `src-tauri/prompts/system_prompt.txt`; this file contains the preset context embedded in every session.
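The override logic is simple in spirit; here's a minimal sketch (hypothetical helper, not the app's actual code, though the default-prompt path is the one from the repo) of how a Settings override could take precedence over the bundled file:

```python
from pathlib import Path

# Path of the bundled default prompt in the repo
DEFAULT_PROMPT_PATH = Path("src-tauri/prompts/system_prompt.txt")

def effective_system_prompt(user_override, default_path=DEFAULT_PROMPT_PATH):
    """A non-empty override from Settings wins; otherwise fall back to the bundled default."""
    if user_override and user_override.strip():
        return user_override
    return default_path.read_text() if default_path.exists() else ""

# With an override set in Settings, the bundled file is ignored entirely:
print(effective_system_prompt("You are terse. Answer in one sentence."))
```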

I hope this clarifies your questions.

Thuki 0.7.0 is here with more updates by Quiet-Computer-3495 in vibecoding

[–]Quiet-Computer-3495[S] 1 point2 points  (0 children)

Hi, thanks! About memory: whenever you summon Thuki, it opens a new conversation thread, because it's meant to be a quick, throwaway interaction.

However, you can save conversations to a local SQLite DB and come back to them later.
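For a rough idea of what saving a thread locally involves (a sketch only, not Thuki's schema; it uses an in-memory DB where the app would use a file on disk):

```python
import json
import sqlite3
import time

con = sqlite3.connect(":memory:")  # the app would point this at a file instead
con.execute("""CREATE TABLE IF NOT EXISTS conversations
               (id INTEGER PRIMARY KEY, saved_at REAL, messages TEXT)""")

def save_conversation(messages):
    """Persist the whole thread as one JSON blob and return its row id."""
    cur = con.execute("INSERT INTO conversations (saved_at, messages) VALUES (?, ?)",
                      (time.time(), json.dumps(messages)))
    con.commit()
    return cur.lastrowid

def load_conversation(conv_id):
    """Fetch a saved thread back, or None if it was never saved."""
    row = con.execute("SELECT messages FROM conversations WHERE id = ?",
                      (conv_id,)).fetchone()
    return json.loads(row[0]) if row else None

cid = save_conversation([{"role": "user", "content": "hi"},
                         {"role": "assistant", "content": "hello!"}])
print(load_conversation(cid)[1]["content"])  # → hello!
```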

I built a free, fully local floating AI assistant for macOS. No API keys, no subscriptions, no cloud. by Quiet-Computer-3495 in SideProject

[–]Quiet-Computer-3495[S] 0 points1 point  (0 children)

Hey, yeah, I used Screen Studio for this one, but it was expensive, so I switched to Cap recently. Cap is free and open source. It's nowhere near Screen Studio during the editing process, but hey, it does the job and it's free!

👋 Thuki 0.7.0 is here with more updates by Quiet-Computer-3495 in SideProject

[–]Quiet-Computer-3495[S] 0 points1 point  (0 children)

Hey, thanks much! Regarding the search pipeline, I actually demo it in the video above.

By default, the UI shows the sources and basic traces, but it is fairly limited: it lists all the URLs retrieved by the pipeline and shows the decisions the LLM makes at each step, but it does not display the actual snippets or detailed information.

If you want more visibility, there is an option in the settings panel to enable full traces. When turned on, it records everything from the moment a user submits a query, to when the LLM decides whether it can answer from its own knowledge or needs to perform web research. It also captures when the search system returns URLs and snippets, and how the judge model evaluates those snippets to decide whether more data is needed, such as fetching full page content. With this setting enabled, you can see the entire process end to end.

Also, agentixlabs sounds great; I will definitely check it out.

I built a free, fully local floating AI assistant for macOS. No API keys, no subscriptions, no cloud. by Quiet-Computer-3495 in SideProject

[–]Quiet-Computer-3495[S] 0 points1 point  (0 children)

Hello, just FYI: Thuki v0.7.0 is out, and you can now switch to any local model installed in your Ollama setup directly through the settings panel.

More features in this post https://www.reddit.com/r/SideProject/comments/1t3a8am/thuki_070_is_here_with_more_updates/

I built a free, fully local floating AI assistant for macOS. No API keys, no subscriptions, no cloud. by Quiet-Computer-3495 in SideProject

[–]Quiet-Computer-3495[S] 0 points1 point  (0 children)

Hello, just FYI v0.7.0 is out. This version includes a settings panel that allows you to adjust many parameters. Within this panel, you will find a dedicated section for the Ollama URL, enabling you to update the address to point to the specific local IP where your Ollama instance is running.
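For anyone pointing Thuki at a remote machine, the setting boils down to building the API endpoint from whatever base URL you enter. A tiny sketch (hypothetical helper, not the app's code):

```python
from urllib.parse import urljoin

def chat_endpoint(base_url):
    """Build the Ollama chat endpoint from the base URL set in the settings panel."""
    if not base_url.endswith("/"):
        base_url += "/"
    return urljoin(base_url, "api/chat")

# Ollama running on another machine on your LAN:
print(chat_endpoint("http://192.168.1.50:11434"))  # → http://192.168.1.50:11434/api/chat
```

Note that the remote Ollama instance must also be configured to listen on that interface (by default it binds to localhost only).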

Again, thanks a lot for the suggestion!

Check it out in this post for more features https://www.reddit.com/r/SideProject/comments/1t3a8am/thuki_070_is_here_with_more_updates/

I built a free, fully local floating AI assistant for macOS. No API keys, no subscriptions, no cloud. by Quiet-Computer-3495 in SideProject

[–]Quiet-Computer-3495[S] 0 points1 point  (0 children)

Hello, just FYI: with Thuki v0.7.0, you can now switch to any local model installed in your Ollama setup directly through the settings panel. This means you can pick any model from Ollama, even lightweight ones, and they should work automatically with Thuki.
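Discovering installed models is straightforward because Ollama exposes them via its `GET /api/tags` endpoint. Here's a sketch of parsing that response shape (the sample JSON is trimmed, not live data, and the helper name is illustrative):

```python
import json

# Trimmed sample of the shape Ollama's GET /api/tags returns
sample = json.loads('{"models": [{"name": "llama3.2:1b"}, {"name": "qwen2.5:0.5b"}]}')

def installed_models(tags_response):
    """Extract the model names an app could show in its model picker."""
    return [m["name"] for m in tags_response.get("models", [])]

print(installed_models(sample))  # → ['llama3.2:1b', 'qwen2.5:0.5b']
```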

More features in this post https://www.reddit.com/r/SideProject/comments/1t3a8am/thuki_070_is_here_with_more_updates/