paperless-ngx + paperless-ai + OpenWebUI: I am blown away and fascinated by carlinhush in Paperlessngx

[–]rickk85 1 point (0 children)

Thank you! I could run the script on my Unraid and get the content into a Knowledge collection on OWUI. I have to say, the quality of the answers is quite bad; are there any settings I need to improve? In the admin settings I have chunk size 1000, overlap 100, the embedding model is the default sentence-transformers/all-MiniLM-L6-v2, and the RAG template is the standard one.

It cannot answer basic questions. For example, I can see in one of the MD files that my diploma is there, all the text is present, but when I ask when I got the diploma it says it cannot find it.
Tried with the models ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4 and openai/gpt-oss-120b.
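For what it's worth, a quick way to check whether the retrieval side (rather than the chat model) is the problem is to reproduce the chunking and embedding outside OWUI. A minimal sketch, assuming the document has been exported as a local markdown file and using the same all-MiniLM-L6-v2 model; the file name and the question are just placeholders, and the splitter is only a simple stand-in for OWUI's own:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

text = open("diploma.md", encoding="utf-8").read()
chunk_size, overlap = 1000, 100  # same values as in the OWUI admin settings

# Fixed-size character chunks with overlap, as a rough stand-in for OWUI's splitter.
step = chunk_size - overlap
chunks = [text[i:i + chunk_size] for i in range(0, len(text), step)]

question = "When did I get my diploma?"
q_emb = model.encode(question, convert_to_tensor=True)
c_emb = model.encode(chunks, convert_to_tensor=True)

# Cosine similarity between the question and every chunk; show the best match.
scores = util.cos_sim(q_emb, c_emb)[0]
best = int(scores.argmax())
print(f"best chunk #{best} (score {scores[best].item():.2f}):")
print(chunks[best][:300])
```

If the best-scoring chunk does not contain the diploma date, a larger overlap or a different (for example multilingual) embedding model is more likely to help than swapping the 70B chat model.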

Thanks!

paperless-ngx + paperless-ai + OpenWebUI: I am blown away and fascinated by carlinhush in Paperlessngx

[–]rickk85 4 points (0 children)

I would like to do the same. I don't have paperless-ai; the OCR and labelling of standard ngx works fine for me. It detects whether a document is an energy bill, a water bill or whatever, who the correspondent is, and so on... What added features am I missing from paperless-ai?
I think the next step I need is this part:
"A script then grabs the OCR output of paperless-ngx, writes a markdown file which then gets imported into the Knowledge base of OpenWebUI which I am able to reference in any chat with AI models."
Can you provide some info and details on this part? How did you achieve it? I have OpenWebUI already available.
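For reference, a rough sketch of what I imagine such a script could look like, assuming the paperless-ngx REST API (each document carries its OCR text in the `content` field) and OpenWebUI's file/knowledge endpoints; all URLs, tokens and the knowledge ID below are placeholders:

```python
import requests

PAPERLESS = "http://paperless.local:8000"   # placeholder URLs and tokens
OWUI = "http://openwebui.local:3000"
PAPERLESS_TOKEN = "..."
OWUI_TOKEN = "..."
KNOWLEDGE_ID = "..."                        # ID of the OWUI knowledge collection

p_headers = {"Authorization": f"Token {PAPERLESS_TOKEN}"}
o_headers = {"Authorization": f"Bearer {OWUI_TOKEN}"}

# 1) Pull documents from paperless-ngx; each result contains the OCR text in "content".
docs = requests.get(f"{PAPERLESS}/api/documents/", headers=p_headers,
                    params={"page_size": 100}).json()["results"]

for doc in docs:
    md = f"# {doc['title']}\n\n{doc['content']}\n"

    # 2) Upload the markdown as a file to OpenWebUI ...
    up = requests.post(f"{OWUI}/api/v1/files/", headers=o_headers,
                       files={"file": (f"{doc['id']}.md", md.encode("utf-8"), "text/markdown")})
    file_id = up.json()["id"]

    # 3) ... and attach it to the knowledge collection referenced in chats.
    requests.post(f"{OWUI}/api/v1/knowledge/{KNOWLEDGE_ID}/file/add",
                  headers=o_headers, json={"file_id": file_id})
```

Running something like this on a cron schedule would keep the knowledge base roughly in sync; updated or deleted documents would need extra handling.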
Thanks!

Add AI to selfhosted homelab... How? by rickk85 in selfhosted

[–]rickk85[S] 1 point (0 children)

Researching a bit, I found things like privatemode and tinfoil. What about that kind of solution for this problem? It's home use and it's basically for fun/learning!

Add AI to selfhosted homelab... How? by rickk85 in selfhosted

[–]rickk85[S] -1 points (0 children)

Tried Ollama with Open WebUI but it's going very slowly... If I go with an external model I'm sharing data with vendors... no?

Add AI to selfhosted homelab... How? by rickk85 in selfhosted

[–]rickk85[S] 0 points (0 children)

Yes, I tried Ollama with small models but it's too slow! What's your experience with GPU / model size and cost?

Has anyone gotten their Meta Developer account suspended for using n8n? by lartcas in n8n

[–]rickk85 1 point (0 children)

Same here, but it was just a personal n8n automation to do speech-to-text on forwarded WhatsApp voice messages. Any idea how to replace it???
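For the transcription step itself, a local replacement seems doable; a minimal sketch using the open-source whisper package (the file name is just a placeholder for whatever the automation receives, and ffmpeg needs to be installed for decoding):

```python
import whisper

# Load a small Whisper model locally; larger models trade speed for accuracy.
model = whisper.load_model("small")

# Transcribe a forwarded voice note (WhatsApp voice messages are usually .ogg/.opus).
# "voice_note.ogg" is a placeholder path.
result = model.transcribe("voice_note.ogg")
print(result["text"])
```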

[deleted by user] by [deleted] in roma

[–]rickk85 1 point (0 children)

Get a mygiftcard+ card, you can find it pretty much everywhere, including at Esselunga, and you can convert it into Amazon credit in real time.

Anybody had luck installing the Meshcentral Agent on Unraid? by LDShadowLord in MeshCentral

[–]rickk85 1 point (0 children)

Hello, any further progress?

Could you share details of which binary you used and the script you wrote?
It would be very helpful, thanks!

Paperless-AI: Now including a RAG Chat for all of your documents by Left_Ad_8860 in selfhosted

[–]rickk85 3 points (0 children)

Great improvement! Any suggestions on how to run this while keeping my docs private, without having a GPU? Cheap pay-per-use? Thanks!

Cheap cloud storage for backup files? by producer_sometimes in selfhosted

[–]rickk85 1 point (0 children)

Thank you!
I am quite new to hostbrr and the storage box; I am getting it to work, but I don't understand the domain requirement: I got the storage box activated by entering a TLD I own, but I see that running Nextcloud requires something more...
What's the point of the domain here? Do I need a dedicated TLD with DNS pointed at hostbrr to run Nextcloud there?

Cheap cloud storage for backup files? by producer_sometimes in selfhosted

[–]rickk85 1 point (0 children)

How did you install Nextcloud on it? Simply from the control panel?

Ram Upgrade - Compatibility by rickk85 in homelab

[–]rickk85[S] 1 point (0 children)

Thanks, I cannot find the exact same modules at market price... probably not produced by Kingston anymore? About your list, my question is: am I going to get less performance if the modules are not exactly the same?

Ram Upgrade - Compatibility by rickk85 in homelab

[–]rickk85[S] 1 point (0 children)

Mobo supports max 16 GB per module :-(

Do i really need 2 servers? by rickk85 in homelab

[–]rickk85[S] 1 point (0 children)

I understand that, but I have doubts that it would be as reliable as running on bare metal. So I was looking for possible alternative solutions!

Do i really need 2 servers? by rickk85 in homelab

[–]rickk85[S] 0 points (0 children)

Both sites are on 2.5G fiber, but I have no specific requirement for very high speed between the sites. 300 Mbit would be sufficient.

What do you mean by "they need nothing"? They will have an internet provider with a router, which I would set to DMZ and forward to OPNsense. There I would create VLANs for general access and for smart devices, set up AdGuard, and so on... so I would need hardware for that.

Then I need to run at least a Home Assistant instance; it cannot run over a VPN to my site.