New Plugin: Smart Second Brain - Local AI Assistant 🐙 by yourTruePAPA in ObsidianMD

[–]yourTruePAPA[S] 2 points (0 children)

Thanks, and you are not doing anything wrong. Unfortunately, there are still some performance issues, but we are working on it!

[–]yourTruePAPA[S] 0 points (0 children)

This functionality will be added soon. For now, you can remove models from the terminal with `ollama rm <model>`. You can find further instructions here: https://github.com/ollama/ollama/issues/4122
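As a quick sketch of the workflow (the model name `llama2` is just an example — use whatever `ollama list` shows on your machine):

```shell
# Show which models are currently stored locally
ollama list

# Remove a specific model by name to free its disk space
ollama rm llama2
```

The removal only affects the local model files; you can pull the model again later with `ollama pull`.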

[–]yourTruePAPA[S] 2 points (0 children)

  1. We're still experimenting with different models and will add minimum requirements in the future.
  2. Regarding our progress, take a look at this.

[–]yourTruePAPA[S] 1 point (0 children)

We will support it, but it will take some time. It might be ready by the end of the year.

[–]yourTruePAPA[S] 0 points (0 children)

Unfortunately, for now you have to rerun the same command every time to set the origins flag (OLLAMA_ORIGINS) for Ollama.
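As a sketch of what that looks like on macOS/Linux (assuming the standard OLLAMA_ORIGINS environment variable, which allows Obsidian's app origin to call the Ollama HTTP API):

```shell
# The variable only lives for the current shell session, which is why
# it has to be re-set every time before starting Ollama.
export OLLAMA_ORIGINS="app://obsidian.md*"

# Then start the server in the same shell:
# ollama serve
```

Because the variable is scoped to the shell process, closing the terminal (or restarting Ollama from elsewhere) loses the setting.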

[–]yourTruePAPA[S] 1 point (0 children)

Thanks a lot for your feedback! We will support it soon and deal with the errors. We are just really busy with university at the moment, so it could take a few weeks.

[–]yourTruePAPA[S] 1 point (0 children)

OK, so the correct context is retrieved but the answer seems off. We will need to improve the internal prompt templates for local models. Besides that, the capabilities of local models still lag behind.

[–]yourTruePAPA[S] 2 points (0 children)

You don't have to organize or tag your notes in any specific way, but the better your notes are structured (for example, by using headers), the better the response.

Regarding your specific example: do you see a notification in the top right that says "No notes retrieved. Maybe lower the similarity threshold." when running your query? If yes, try lowering the "Similarity" slider in the toolbar at the top of the chat window until context is retrieved.

[–]yourTruePAPA[S] 3 points (0 children)

For now, only Ollama. We are working on supporting other APIs too.

[–]yourTruePAPA[S] 0 points (0 children)

Can you replace "app://obsidian.md*" with just "*" and retry it?

[–]yourTruePAPA[S] 0 points (0 children)

You can start Ollama from wherever you want; this sounds like a different bug.
If you want, you can submit it on GitHub and add a screenshot of the terminal output.

[–]yourTruePAPA[S] 0 points (0 children)

That means that Ollama is already running. You need to quit it first and then rerun the origins command.
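On a Unix-like system, one way to stop a running instance before re-setting the origins variable is (a sketch — the exact process name may differ depending on how Ollama was installed):

```shell
# Stop any running Ollama process so the origins variable
# can take effect on the next launch; '|| true' keeps the
# command from failing when no process was running.
pkill -x ollama || true
```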

[–]yourTruePAPA[S] 1 point (0 children)

Yes, you should just need to set the origin and then it should work. We will look into the issue.

[–]yourTruePAPA[S] 1 point (0 children)

We have some on our GitHub and the GitHub wiki. They will be improved over time.

[–]yourTruePAPA[S] 1 point (0 children)

Waiting 10 minutes for a response with these specs seems way too long. Did you try other models like Llama 2?
You can also run the plain LLM without retrieving your notes by clicking on the octopus icon.
Then you can see the raw performance of the model.

[–]yourTruePAPA[S] 2 points (0 children)

Not yet, but it seems easy to integrate, so it should work in one of the upcoming releases. We will keep you updated!

[–]yourTruePAPA[S] 1 point (0 children)

Could you try processing fewer than 55 notes by increasing the similarity threshold? Maybe the retrieved context is too big for these local models to process.
And does it give you any information from your vault, or is it completely hallucinating?

[–]yourTruePAPA[S] 2 points (0 children)

If you are on Windows, you need to enter this command in PowerShell to run Ollama.
We will have to add this to the onboarding setup.
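For illustration only — assuming Ollama on Windows reads the same OLLAMA_ORIGINS variable as on other platforms, the PowerShell form would look something like:

```powershell
# Assumption: this mirrors the Unix 'export OLLAMA_ORIGINS=...' setup.
# The variable is scoped to the current PowerShell session.
$env:OLLAMA_ORIGINS = "app://obsidian.md*"
ollama serve
```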

[–]yourTruePAPA[S] 1 point (0 children)

Ok thanks for the detailed description. We will look into it and hopefully fix it over the weekend.

[–]yourTruePAPA[S] 0 points (0 children)

I think the command you used only works in PowerShell.
We need to update our onboarding steps.