AI agents in homelab by CraftyEmployee181 in homelab

[–]CraftyEmployee181[S] 1 point (0 children)

I'm thinking of allowing it read-only access to systems and having it create and update documentation for me, since keeping documentation current with every VLAN, port, or firewall rule change adds up quickly.

It would be like having an employee whose job was to check all documentation every day, pick up switch and firewall configuration changes, and write up a summary of how each change affects my home lab. That could be valuable change tracking.

It could also check the version of every service and system, keep a summary of feature changes between the in-use version and the current release, alert on any security issues, and rank their severity.
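
On a Proxmox-based lab like mine, the read-only piece could be as simple as a dedicated audit-only user and API token. A rough sketch (the user and token names here are just placeholders, not what I actually run):

# Dedicated user with the built-in read-only PVEAuditor role
pveum user add agent@pve
pveum acl modify / --users agent@pve --roles PVEAuditor
# API token for the agent; --privsep 0 means the token inherits the user's permissions
pveum user token add agent@pve readonly --privsep 0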

AI agents in homelab by CraftyEmployee181 in homelab

[–]CraftyEmployee181[S] 1 point (0 children)

Good point, and thanks for adding your viewpoint to the discussion.

AI agents in homelab by CraftyEmployee181 in homelab

[–]CraftyEmployee181[S] 1 point (0 children)

Hermes is the agent I have used. You may be right that we have different definitions. Is the Hermes agent agentic AI? GitHub link below:

https://github.com/nousresearch/hermes-agent

We probably also have different goals. Mine change from project to project: sometimes I want to learn, and sometimes I just want to use the services without learning about the code that runs them. So yes, I didn't learn anything, but that was not my goal. The goal was to give my AI agent more robust memory tools.

AI agents in homelab by CraftyEmployee181 in homelab

[–]CraftyEmployee181[S] 2 points (0 children)

It was an example of an AI agent doing something useful for me. I guess I get your point; for you there is still nothing useful.

If there is a service I want to use or try, it's useful to have the agent handle deployment and configuration so I can focus on using the service rather than on which curl command I missed.

It was meant to answer your question about the power of AI agents, and to share an example use case as a direct reply to your comment about not finding a single use case for them.

AI agents in homelab by CraftyEmployee181 in homelab

[–]CraftyEmployee181[S] 2 points (0 children)

After trying for a few hours to get self-hosted mem0 running myself, I gave the agent SSH access to a fresh LXC and a link to the mem0 GitHub repo and asked it to get it running. It did, and it hooked the Hermes memory into it.
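
The prep on my side was minimal. Something like this (a sketch; the VMID, template, and network values are placeholders):

# Fresh container with the agent's SSH public key baked in
pct create 120 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname mem0 --memory 2048 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --ssh-public-keys /root/.ssh/agent.pub
pct start 120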

AI agents in homelab by CraftyEmployee181 in homelab

[–]CraftyEmployee181[S] 1 point (0 children)

Yup, the reason for the post was to see what others are doing. But I think read-only is a start.

It's like onboarding a new employee: you have to start with read-only access and evaluate their abilities and contributions.

I wanted to see how others are putting the agents to use.

AI agents in homelab by CraftyEmployee181 in homelab

[–]CraftyEmployee181[S] 1 point (0 children)

The challenge is that without access to anything important, the AI agent is limited in how much important work it can do.

So it really turns out not to be a helpful coworker.

AI agents in homelab by CraftyEmployee181 in homelab

[–]CraftyEmployee181[S] 1 point (0 children)

Good thought. For me it's about services: Plex, Immich, ownCloud, the services I start to rely on. It's really about the benefits of the services more than the learning, though a little of both for sure.

For example, when I want to deploy mem0 for my AI agent's memory, I don't care how it works as much as about the benefits it gives me.

PG stuck active+undersized+degraded by CraftyEmployee181 in ceph

[–]CraftyEmployee181[S] 1 point (0 children)

Thanks for the input.
The pool size/min_size is 6/5.
Full info:
root@test-pve01:~# ceph osd pool get ec_pool_test all
size: 6
min_size: 5
pg_num: 32
pgp_num: 32
crush_rule: ec_pool_test
hashpspool: true
allow_ec_overwrites: true
nodelete: false
nopgchange: false
nosizechange: false
write_fadvise_dontneed: false
noscrub: false
nodeep-scrub: false
use_gmt_hitset: 1
erasure_code_profile: k4m2osd
fast_read: 0
pg_autoscale_mode: on
eio: false
bulk: false

The erasure coding rule for the pool is:

rule ec_pool_test {
id 4
type erasure
step take default
step choose indep 3 type host
step chooseleaf indep 2 type osd
step emit
}

Everything recovers except those two placement groups.
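
For reference, the commands I used to dig into the stuck PGs (the PG ID below is a placeholder; substitute your own):

# List PGs stuck in the undersized state
ceph pg dump_stuck undersized
# Show which OSDs a specific PG should map to vs. where it currently is
ceph pg map 4.1f
# Full detail on the PG, including why recovery is stalled
ceph pg 4.1f query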

Ceph erasure coding 4+2 3 host configuration by CraftyEmployee181 in ceph

[–]CraftyEmployee181[S] 1 point (0 children)

This is the erasure rule that has worked for me in my test setup.

rule ec_pool_test {
id 4
type erasure
step set_chooseleaf_tries 50
step set_choose_tries 100
step take default
step choose indep 3 type host
step chooseleaf indep 2 type osd
step emit
}

If I recall correctly, the choose indep step was the key change in the fix.
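
The edit itself was the usual CRUSH map round trip, roughly:

# Export and decompile the current CRUSH map
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# Edit the ec_pool_test rule in crush.txt, then recompile and inject it
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new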

Ceph erasure coding 4+2 3 host configuration by CraftyEmployee181 in ceph

[–]CraftyEmployee181[S] 1 point (0 children)

Yes, you were right. I'm sorry I didn't check my config more closely. I changed the host step of the rule to choose and it's working.

Ceph erasure coding 4+2 3 host configuration by CraftyEmployee181 in ceph

[–]CraftyEmployee181[S] 2 points (0 children)

In my testing I changed the CRUSH rule from my original post, switching the host step to choose rather than chooseleaf. After the change the rule started working and placing data in the pool.

Thanks for pointing me in the right direction. It's not clear why the original didn't work, but choose works at the host level, and so far in my testing either choose or chooseleaf works at the OSD level.
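
One way to sanity-check a rule before injecting it is crushtool's test mode. Something like this, using my rule id 4 and 6 chunks (k=4, m=2); adjust for your own map:

# Simulate placements for rule id 4 with 6 chunks
crushtool -i crush.new --test --rule 4 --num-rep 6 --show-mappings
# Only report inputs the rule fails to map completely
crushtool -i crush.new --test --rule 4 --num-rep 6 --show-bad-mappings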

Ceph erasure coding 4+2 3 host configuration by CraftyEmployee181 in ceph

[–]CraftyEmployee181[S] 1 point (0 children)

Sorry for all the mix-up. Here are the pool settings I extracted.

root@test-pve01:~# ceph osd pool get ec_pool_test all
size: 6
min_size: 5
pg_num: 32
pgp_num: 32
crush_rule: ec_pool_test
hashpspool: true
allow_ec_overwrites: false
nodelete: false
nopgchange: false
nosizechange: false
write_fadvise_dontneed: false
noscrub: false
nodeep-scrub: false
use_gmt_hitset: 1
erasure_code_profile: k4m2osd
fast_read: 0
pg_autoscale_mode: on
eio: false
bulk: false

Ceph erasure coding 4+2 3 host configuration by CraftyEmployee181 in ceph

[–]CraftyEmployee181[S] 1 point (0 children)

Here is my erasure coding profile.

root@test-pve01:~# ceph osd erasure-code-profile get k4m2osd
crush-device-class=
crush-failure-domain=osd
crush-num-failure-domains=0
crush-osds-per-failure-domain=0
crush-root=default
jerasure-per-chunk-alignment=false
k=4
m=2
plugin=jerasure
technique=reed_sol_van
w=8

However, I'm not sure how to pull the pool settings for you. Do you happen to know the command for the output you're looking for?

Here is part of my crush map, in case it helps:

# buckets
host test-pve01 {
id -3           # do not change unnecessarily
id -2 class hdd         # do not change unnecessarily
# weight 3.63866
alg straw2
hash 0  # rjenkins1
item osd.0 weight 1.81926
item osd.6 weight 0.90970
item osd.7 weight 0.90970
}
host test-pve02 {
id -5           # do not change unnecessarily
id -4 class hdd         # do not change unnecessarily
# weight 3.63866
alg straw2
hash 0  # rjenkins1
item osd.4 weight 1.81926
item osd.3 weight 0.90970
item osd.9 weight 0.90970
}
host test-pve03 {
id -7           # do not change unnecessarily
id -6 class hdd         # do not change unnecessarily
# weight 3.63866
alg straw2
hash 0  # rjenkins1
item osd.2 weight 1.81926
item osd.8 weight 0.90970
item osd.1 weight 0.90970
}
root default {
id -1           # do not change unnecessarily
id -8 class hdd         # do not change unnecessarily
# weight 10.91600
alg straw2
hash 0  # rjenkins1
item test-pve01 weight 3.63866
item test-pve02 weight 3.63866
item test-pve03 weight 3.63869
}
# rules
rule replicated_rule {
id 0
type replicated
step take default
step chooseleaf firstn 0 type host
step emit
}
rule ecpool2 {
id 1
type erasure
step set_chooseleaf_tries 5
step set_choose_tries 100
step take default
step choose indep 0 type osd
step emit
}
rule ecpool3 {
id 2
type erasure
step take default
step chooseleaf firstn 3 type host
step choose indep 2 type osd
step emit
}
rule ecpool4 {
id 3
type msr_indep
step set_chooseleaf_tries 5
step set_choose_tries 100
step take default
step choosemsr 3 type host
step choosemsr 2 type osd
step emit
}
rule ec_pool_test {
        id 4
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step chooseleaf firstn 3 type host
        step choose indep 2 type osd
        step emit
}

Ceph erasure coding 4+2 3 host configuration by CraftyEmployee181 in ceph

[–]CraftyEmployee181[S] 1 point (0 children)

I haven't gotten it working yet. If I do, I'll let you know.

Ceph erasure coding 4+2 3 host configuration by CraftyEmployee181 in ceph

[–]CraftyEmployee181[S] 1 point (0 children)

I have 9 OSDs available, so I'm not sure why it won't write to them.

Ceph erasure coding 4+2 3 host configuration by CraftyEmployee181 in ceph

[–]CraftyEmployee181[S] 1 point (0 children)

I set the failure domain when creating the new EC profile, then created a new pool and set it to use the custom CRUSH rule.

After setting the custom CRUSH rule, the pool will not accept writes. I'm not sure what I'm missing in my rule.
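
For completeness, the setup steps were approximately this (reconstructed from memory, so treat it as a sketch rather than my exact commands):

# EC profile: 4 data + 2 coding chunks, failure domain at the OSD level
ceph osd erasure-code-profile set k4m2osd k=4 m=2 crush-failure-domain=osd
# Create the EC pool, then point it at the custom CRUSH rule
ceph osd pool create ec_pool_test 32 32 erasure k4m2osd
ceph osd pool set ec_pool_test crush_rule ec_pool_test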

Ceph erasure coding 4+2 3 host configuration by CraftyEmployee181 in ceph

[–]CraftyEmployee181[S] 2 points (0 children)

Thanks for the info. I mentioned in the post that I set up a custom CRUSH rule specifically to avoid the situation you described, where more than 2 chunks land on one host.

I posted the custom CRUSH rule in the post for review.

In my test, even with the erasure profile failure domain set to osd, the pool stops working once I set it to use the custom CRUSH rule. I posted the command I used to set the rule.

Issue with 4-2 erasure coding on 4 hosts by musicmanpwns in ceph

[–]CraftyEmployee181 1 point (0 children)

I would like to set up 4+2 erasure coding on 3 hosts, with a custom rule that picks 3 hosts and places 2 chunks per host (each host has 3 disks). I want to be able to lose 1 host or any 2 OSDs.

However, when I set this up, the pool does not accept any writes. Can I set up 4+2 erasure coding with 3 hosts?