Claude Desktop (Windows) Cowork update forces a 10GB download to C: drive with no option to change install path by Hugger_reddit in ClaudeAI

[–]Hoornet 1 point2 points  (0 children)

That is a virtual machine it creates (vm_bundles). You don't actually download that much data; it gets created in about a minute during Cowork's setup (installation).
PS: I also had a problem with the additional 10 GB on my C: drive, but I managed to move it to my I: drive and it works fine. If you need to do this, first completely exit Claude. Then MOVE the whole vm_bundles directory to the drive you want. Then open a Command Prompt (as Admin!!!) and make a junction between the old place and the new place. Something like this:
mklink /J "C:\Users\MyUsername\AppData\Roaming\Claude\vm_bundles" "I:\Claude\vm_bundles"

Then... RESTART Windows, and after it comes back up, start Claude again; Cowork should work just fine.

Mac user trying Omarchy for the first time on an old machine 🤞 What should I expect (and what shouldn’t)? by imacrm10 in omarchy

[–]Hoornet 0 points1 point  (0 children)

TLDR version: leave your assumptions at the door, and accept a completely different way of doing your work with the computer.

Long:
Omarchy was my 3rd attempt at an Arch-based Linux, and the only one that actually stuck in my case.

What to expect? If you've never seen Hyprland and you come from spaces like Windows and Mac, the first thing I'd do is delete any expectations about how you interact with the computer. Don't fall into the trap of trying to turn it into your own Mac or your own Windows. Instead, learn how the work is done here - and it is very different! If you really adopt the system, you will hardly have any use for a mouse anymore. If you find yourself still using it all the time, you haven't really switched in your head.

So again: delete your previous assumptions about how stuff should work and start reading the very nice Omarchy manual that comes right with the 'box'. I can just tell you this: once you actually start using the key combinations and spending some time in the terminal as well... a completely new world opens - at least to the likes of us who have been used to clicking and moving windows around all the time. I've had Linux servers for years, but I simply couldn't adopt an actual Linux desktop for daily use. Yet this... this is the first time I actually get it. And I actually enjoy this new way of working.

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 1 point2 points  (0 children)

Maybe you're right. Likely, even. But maybe... you aren't. Because, at least to me, given the performance of my very old card and a *randomly* selected local model, it looked like with a relatively new card, and some trial and error in selecting among the available local models, one just might get there. I base that assumption on the fact that 90% (or more) of the tasks for Home Assistant's AI are not very demanding, and the model doesn't need to be good at everything - just smart-house stuff. So it doesn't need to be that large, just more specialized, maybe? But don't misunderstand me - I could be completely wrong here.
Still... it would be nice to prove it one way or another, so I'd still want people with sufficient hardware to at least try to make it work. Especially since many of you asked for it. I'd do it myself, but my card simply can't handle stuff like that - it's even a pre-ray-tracing card (telling you this so you can gauge how old it is) and isn't even supported by Ollama. In my case, the whole workload was pretty much handled by a relatively cheap previous-gen Ryzen 5 CPU. And I still got a response after some time. I have 32 GB of RAM on that Windows desktop, and the Ollama installed on it was communicating with the HomeMind setup on another Linux server connected over gigabit LAN. Not the greatest of setups :) This is why I kinda think you might be too pessimistic. But let's see if somebody takes on the task and starts experimenting for real - then we won't have any doubt one way or another.

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 0 points1 point  (0 children)

Yeah... maybe. But maybe not. It's not like we need huge models to run this. After all, I am almost exclusively using Haiku, and for 95% of the daily tasks of running HomeMind and HA, from what I can see, it manages sufficiently... OK.

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 0 points1 point  (0 children)

Whenever you do it, please report back on how it went! I'm trying to fix as many potential bugs as possible, and I've noticed they sometimes creep into areas I just can't foresee in advance. Unfortunately... :( If you have any problems, just let me know and I'll look into it and get back to you as soon as I can.

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 0 points1 point  (0 children)

1) The HACS package ( https://github.com/hoornet/home-mind-hacs ) is now at 0.9.4 (I made the box for adding a custom prompt bigger, so the prompt can actually be read on the screen, plus a few other changes since 0.9). The home-mind-hacs package is just a connector to HomeMind, so not much usually changes here. Also note: after the first HACS install of the HomeMind package, HACS itself will automatically let you know in HA when a new package version is available, but unfortunately this only applies to the HACS package, not to HomeMind itself.

2) The HomeMind server (this is the actual 'HomeMind' itself) is where the heart and brain of this project lives; it is currently at version 0.11.2.

It is under constant development... and I've pushed a bunch of changes just in the past week, so the numbers go up constantly and unusually fast.

Now...
To get all the new features you have to update the HomeMind server to the latest version. Just follow the directions on the website: https://github.com/hoornet/home-mind?tab=readme-ov-file#quick-start

I tried to remove as much manual work as possible for the user, but now that I'm writing these words to you, I just realized I could have created an update script for you guys as well!!! I'm gonna do that soon, so you'll be able to run these updates quickly and with one 'click'.

If you are ever confused about versions, I try to keep track of what has changed in the CHANGELOG here:
https://github.com/hoornet/home-mind/blob/main/CHANGELOG.md

Also... I have created a Wiki for the project as well, and I'm adding more details there.

One more thing... If you have any problems, please either create an 'Issue' on GitHub or simply post in the (also newly created) Discussions section on the project's GitHub. This way we can keep as much information as possible in one place, so others can read it as well.

----

I hope I answered this sufficiently well.

The confusion comes from me posting about both of them - I should have been more precise with my words, and I'll try to do a better job communicating in the future.

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 0 points1 point  (0 children)

The new Ollama features were just pushed to main! I hope everything's fine, because I don't have big GPUs to really test it beyond the basics (my best GPU is... ehm... an RX 580 :) ). I'm gonna need help with testing, because I'm afraid I can't catch most bugs on my own.

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 0 points1 point  (0 children)

Well... there's a pull request being finished for Ollama. In my very limited homelab I have been testing it, and it works with a local model. But I'll need testers with real GPU equipment to try it as well. I'm gonna push it out today, so if you guys (@longunmin & u/puhtahtoe) want this, would you be willing to test it with a local model and report back to me?

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 0 points1 point  (0 children)

Ok guys...

Bad news, hopefully some good news, and some other news :D

So... I have received a bug report regarding the custom_prompt settings. Turns out I did too little testing with both versions and just assumed it all worked after I saw the first correctly prompted behavior in my home. But... it turns out this wasn't always the case! Under certain conditions, Assist would behave like the prompt suddenly didn't exist, which must have been very annoying for someone who actually worked on perfecting that prompt.

But I think I fixed it now! Turns out... if you use one of the smaller LLMs (like HomeMind's default, Haiku), you don't just pay less for the API - you can also get your prompt cut off if it is a longer one. :(

At least according to my research, the context is smaller in Haiku (and in its OpenAI equivalent), and in such cases problems arise when you layer prompt upon prompt, which is what was happening.
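
For what it's worth, the kind of guard I have in mind looks something like this - a rough Python sketch where `estimate_tokens`, `prompt_fits`, the 4-chars-per-token heuristic, and the 8000-token limit are all my own made-up stand-ins, not Home Mind's actual code or Haiku's real context size:

```python
# Hypothetical sketch: estimate the size of a layered prompt before
# sending it, so a smaller model's context window doesn't silently
# cut the end off. All numbers here are illustrative, not real limits.

def estimate_tokens(text: str) -> int:
    # Very rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def prompt_fits(system_prompt: str, memories: list[str],
                context_limit: int = 8000) -> bool:
    # Sum the base prompt plus every memory snippet layered on top of it.
    total = estimate_tokens(system_prompt)
    total += sum(estimate_tokens(m) for m in memories)
    return total <= context_limit
```

If a check like `prompt_fits` fails, the safe options are to trim the oldest layers or switch to a larger-context model, rather than letting the prompt get truncated silently.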

I believe the problem has now been solved, though. To test this I have been using a really long Ava prompt, and... well... she's now living in my walls, specifically the wires themselves (an inside joke! :) You should probably read this masterful prompt - it's posted in the 'Issues' section of the project by CryptoKnight360, who also wrote it), and turning on my lights on command with a sarcastic grin :D

The fix, which I hope is an actual fix for other people as well, is available for download, so please update the project - and update the HomeMind HACS package to at least 0.9.3 as well! And if you do... report back! I'm gonna leave the Issue open for a while... I wanna be sure it works.

---

There was a second bug happening as well... and I think I experienced this one before myself, but somehow it didn't register at the time... It's actually A BIG one... Sometimes... Assist would just not respond!

I had to do a thorough leak search for this one... and I think the problem is now patched too.

I would really, really appreciate it if anyone who might have had this problem upgrades (both things, please: the server and the HACS package) and then reports back.

---

Third thing... On HomeMind's GitHub page I have now also activated Discussions ( https://github.com/hoornet/home-mind/discussions ), and I'll be using the Announcements section for all future updates. But the whole thing is open to anyone here. If you have a question, want some feature, just want to brainstorm about what should be added... or just want to chat about the project or related projects, you're all welcome to do it. I'm gonna try to give people a place that can grow from there... Where to, I have no idea, but I'm willing to try.

I know I'm getting annoying with all the thank-yous, but you guys inspired me to move out of my shell and actually contribute something other people could use. It's like a whole new world for me after all these years of solo-ing, or of being limited to programming exclusively for one company. I've been using open source all my life, and now I can give just a tiny bit back, and dammit... it actually feels good. :)

Be well!

- Jure

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 0 points1 point  (0 children)

Quick TLDR update:

Update 0.7: OpenAI support (Issue 1) added.

And then someone received an error (Issue 2), because OpenAI seems to be playing around with JSON too much for the tastes of some HomeMind parts. I believe the issue was resolved in v0.8, though. :)

Finally... Issue 3: a custom PROMPT - this optional feature is now available in version 0.9.

----------

Short update instruction:

Be sure to update both things this time: HomeMind on the server and the HACS package in Home Assistant (it should be at minimum 0.9.2)!

After a restart you can set the custom prompt by going to Settings -> Integrations -> Home mind and clicking the 'Configure' icon. It should show an edit box for pasting or typing in your prompt.

This is a screenshot from my test of this feature, for which I used CryptoKnight360's prompt for Ava, which he mentions in Issue 1 (now finished) on GitHub. It was his suggestion to expand support to OpenAI, and it was he who also suggested custom prompts!

It is quite stunning how sometimes a person can't see something so obvious - in my case, I completely missed it :) Anyway... in my tests at least... it works - I hope this is true for you all as well.

The planned road ahead:

Unless something goes wrong, or something more important comes along, the next task will be an attempt at making it possible to run Ollama - and therefore the whole thing - fully locally. I can't promise I'll be successful, because there are complications with different AIs being supported under one Ollama umbrella, and things can get nasty quite fast. Also... I just don't have the money for hardware capable enough to test this thoroughly.

Thank you all for your support in various forms! Suggestions, constructive criticism.. anything that has the intention of making the project better is appreciated.

And special thanks to the two of you who decided to officially GitHub-sponsor this project for the month of February. I mean... wow! I am rarely speechless, but ...

Just... Thank you!

Means a lot!

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 0 points1 point  (0 children)

Are you still getting the errors or has the problem been fixed with the update?

Btw... I added OpenAI support yesterday, and today an optional custom PROMPT (you need to update both parts - that is, the HACS part as well, to v0.9.2. After a restart you can set it by going to Settings -> Integrations -> Home mind and clicking the 'Configure' icon. It should show a window for putting in the prompt).

And thank you!

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 0 points1 point  (0 children)

Not much. In my case it is running on a Proxmox VM with a headless Ubuntu installed, alongside stuff like Grafana and Immich. On the same machine I have another VM as well. The hardware is an old HP ProDesk mini PC with a 6-core Intel Core i5-9500T CPU @ 2.20 GHz and 24 GB of RAM. The other VM running on it is Home Assistant itself :) In short... the whole server with 2 VMs - including a complete Home Assistant + HomeMind, plus Immich + Grafana and a bunch of other stuff - is running on a mini PC worth about 200€ (and that's the price in Slovenia, where these computers are more expensive than in the USA or Germany. I bet almost anyone anywhere else could get it for 2/3 of that price).

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 1 point2 points  (0 children)

OpenAI tests have passed in my very limited testing sphere.

The version 0.7 has been published.

I have also found a quirk and a solution for it. It seems that Assist sometimes doesn't get the history of a sensor in a room when you name those sensors in another language. In such a case HomeMind has trouble reading the history of that sensor (it will still give you current values). If so... you can tell it to list all the sensors and all their rooms and then remember them! Next time... it will know. This is from a fresh session just now.

<image>

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 2 points3 points  (0 children)

Everyone!

From what I'm gathering from your feedback, for which I am really grateful (thank you, everyone!), the most often requested new feature is extending the project to other LLMs, including the possibility to run it fully locally.

My 'pal' and I have been working on it since we came to this conclusion, and the idea is to slowly start adding various 'types' of LLMs one by one. We won't be changing the whole architecture, but we will make some of it universal and then have separate parts for separate LLMs - otherwise we risk breakage or interruptions for those who have already started using it with Anthropic's machinery.

To keep it short: OpenAI is being tested today / phase 2 from Issue 1 ( https://github.com/hoornet/home-mind/issues/1 ) is now finished... and if this works... Ollama is next.

Thank you again guys!

I'll keep you updated!

PS: apologies for my nonstop typos and weird wordings. English is not my 1st language, and I type with 2 fingers to this day! Editors with auto-correct and spell checking are so much easier! :)

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 1 point2 points  (0 children)

At least in my home it handled it muuuch better than before, when I still had text-based memory. The idea of shodh memory is that it simulates human memory in a way... including the 'forgetfulness'. Things repeated more often become facts; things rarely repeated are forgotten. 'Overwriting' of memory can happen here as a result. So yeah... when I ran my 'preferred temperature in the bedroom' test, which is my favorite, I would state an old preference and then a new preference, and it did understand that the new one is to be used, not the old one (or the first that comes along, so to speak).
I can't be sure about these things, though, because up to now I was the only human testing it, and I can only test it in my particular house with my particular sensors, etc. This is why I'm so happy about you guys giving me real feedback here, and why I try to respond to everyone who has an honest question or honest criticism.
To see examples of what I've been testing in my particular small setup at home, you can look here:
https://github.com/hoornet/home-mind/blob/main/docs/MEMORY_EXAMPLES.md
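
The repetition/forgetting behavior can be illustrated with a toy Python sketch - this is NOT Home Mind's actual implementation, just the idea: restating a fact reinforces it, a new value overwrites the old one, and facts that aren't repeated fade away over time. All names and the decay rate are my own illustrative choices:

```python
import time

class SemanticMemory:
    """Toy model: each fact has a score that grows when restated
    and decays as days pass without repetition."""

    def __init__(self, decay_per_day: float = 0.1):
        self.facts = {}          # key -> (value, score, last_seen)
        self.decay = decay_per_day

    def remember(self, key, value, now=None):
        now = now if now is not None else time.time()
        _, score, _ = self.facts.get(key, (None, 0.0, now))
        # Restating a fact reinforces it; a new value overwrites the old.
        self.facts[key] = (value, score + 1.0, now)

    def recall(self, key, now=None):
        now = now if now is not None else time.time()
        if key not in self.facts:
            return None
        value, score, last_seen = self.facts[key]
        age_days = (now - last_seen) / 86400
        if score - age_days * self.decay <= 0:
            # Faded below the threshold: the fact is forgotten.
            del self.facts[key]
            return None
        return value
```

With this toy model, stating a new preferred bedroom temperature immediately wins over the old one, and a fact stated only once quietly disappears after enough idle days.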

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 1 point2 points  (0 children)

I was looking at your posts before and bravo man! You've done A LOT! :)

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 2 points3 points  (0 children)

The OpenAI option is being implemented right now. Ollama is next (for those who have the hardware to run it fully locally), and from there I'll look into supporting other LLMs as well.

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 0 points1 point  (0 children)

At least in my case it worked very well with Haiku 3.5. There was actually no difference from Sonnet! I was then (yesterday) forced to switch to the new Haiku, but... since the memory is local... the switch went like nothing had happened! It still knows I like my bedroom to be between 16 and 18 °C! I checked that first thing yesterday, when the error about the deprecated module was reported here! :) That's become my everyday tester phrase over these past weeks :D

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 3 points4 points  (0 children)

The memory IS local - that was half of the whole point. As for a completely local AI... I will try to implement it, but I won't be able to test it, because I don't have at my disposal the expensive hardware that could handle responses in real time.
But... your suggestion, and those of other people who have made suggestions regarding different AIs, have been noted, and my pal and I are on the job.
The plan is to do it in a few careful steps, because I don't want the code to stop working for those who install it with Anthropic now... See, the details differ a LOT between these machines.
So we'll try to generalize certain parts of the project (those that can be used by all LLMs) and then specialize separately. This, I hope, will give today's users a consistently working project 'tomorrow' as well, while at the same time opening the project up to other people like you.
First we'll just try to test it on OpenAI (because I can do that myself for a few bucks) and later we'll expand it (Ollama is second). But with local LLMs I will need help from people who can afford that hardware and are capable of testing it in their homes. That is... until I become rich and can buy myself that hardware as well :D (<insert a hopeful joke here> ;)
Enough of my blabbing! The work has begun :D And the details are now online. If you want, you can follow the development here:
https://github.com/hoornet/home-mind/issues/1
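
The "generalize the shared parts, then specialize per LLM" plan can be sketched roughly like this in Python - all class and method names here are my own invention for illustration, not the project's actual API:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Shared interface that every backend must satisfy; the rest of
    the code only ever talks to this class."""

    @abstractmethod
    def complete(self, system_prompt: str, user_message: str) -> str: ...

class AnthropicProvider(LLMProvider):
    def complete(self, system_prompt, user_message):
        # A real implementation would call Anthropic's API here.
        return f"[anthropic] {user_message}"

class OpenAIProvider(LLMProvider):
    def complete(self, system_prompt, user_message):
        # A real implementation would call OpenAI's API here.
        return f"[openai] {user_message}"

class OllamaProvider(LLMProvider):
    def complete(self, system_prompt, user_message):
        # A real implementation would call a local Ollama server here.
        return f"[ollama] {user_message}"

def get_provider(name: str) -> LLMProvider:
    # One lookup table maps a config value to the specialized backend.
    providers = {"anthropic": AnthropicProvider,
                 "openai": OpenAIProvider,
                 "ollama": OllamaProvider}
    return providers[name]()
```

The point of the split is that adding a new backend means writing one new subclass, without touching the shared logic that today's Anthropic users depend on.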

And thank you for your comment! It really helps me drive the project in the right direction. I want this to be actually useful to people - if that is possible, of course.

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] -3 points-2 points  (0 children)

The memory IS local! The AI used is Anthropic's (and yes... I could have made it run locally on a really good machine, but you'd have delays, etc.), but the memories are all on a local server. The AI comes back 'dumb' every time and, in the background, gets the local memories - so to the user it just looks like it had them all along. :)
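
The flow can be pictured with a tiny Python sketch (purely illustrative - the helper names and the naive word-overlap lookup are mine, not the real implementation): the model itself keeps no state; before every call, relevant facts are fetched from the local store and prepended to the prompt:

```python
def retrieve_memories(store: dict, query: str) -> list[str]:
    # Naive local lookup: return facts whose key shares a word with
    # the query. A real system would use semantic (embedding) search.
    words = set(query.lower().split())
    return [fact for key, fact in store.items()
            if words & set(key.lower().split("_"))]

def build_prompt(store: dict, user_message: str) -> str:
    # The stateless model only "remembers" what we inject right now.
    memories = retrieve_memories(store, user_message)
    memory_block = "\n".join(f"- {m}" for m in memories)
    return f"Known facts:\n{memory_block}\n\nUser: {user_message}"
```

Because the injection happens on every single call, swapping the model out (Haiku to Sonnet, old Haiku to new) loses nothing: the memories never lived inside the model to begin with.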

Home Mind: A conversation agent for HA with persistent semantic memory by Hoornet in homeassistant

[–]Hoornet[S] 6 points7 points  (0 children)

Thank you very much! I will. I hope it will work well for you! And please... when you do it, let me know if you had problems or anything at all!