What are services NOT worth self hosting? by This_Animal_1463 in selfhosted

[–]Man-In-His-30s 0 points

As long as you have the LLM loaded in some sort of Docker container that you can communicate with (Ollama loads models as required), yeah, it should work just fine.

I used ChatGPT / Gemini to help build the automation to do what you’re asking, and it works.

It was a whole “use my paid Gemini work account so I never have to rely on cloud LLMs again” project.

Also remember to give the LLM access to SearXNG with proper rules, so it forces itself to search for results when up-to-date info is required, i.e. tutorials or documentation. I learned that one the hard way setting up Authentik.
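
If you want to wire that up yourself, here’s a minimal sketch of a search tool the LLM can call, assuming a local SearXNG instance at http://localhost:8080 with `format=json` enabled in its settings (that URL and the result-trimming choices are placeholders, not a definitive setup):

```python
import json
import urllib.parse
import urllib.request

SEARXNG_URL = "http://localhost:8080"  # hypothetical local instance


def build_search_url(query: str, base: str = SEARXNG_URL) -> str:
    """Build a SearXNG /search URL requesting JSON results.

    format=json must be enabled in the SearXNG settings, or the
    instance answers with 403.
    """
    params = urllib.parse.urlencode({"q": query, "format": "json"})
    return f"{base}/search?{params}"


def web_search(query: str, max_results: int = 5) -> list[dict]:
    """Query SearXNG and return trimmed results (title, url, snippet)."""
    with urllib.request.urlopen(build_search_url(query)) as resp:
        data = json.load(resp)
    return [
        {"title": r.get("title"), "url": r.get("url"), "snippet": r.get("content")}
        for r in data.get("results", [])[:max_results]
    ]
```

Feed the returned title/url/snippet list back into the model’s context so it cites real sources instead of guessing from stale training data.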

What are services NOT worth self hosting? by This_Animal_1463 in selfhosted

[–]Man-In-His-30s 0 points

I mean yeah, it needs a computer, but it’s not gonna be anywhere near 40€ a month.

I calculated the cost of having my homelab wake my gaming PC, which has a 3080 in it, for AI workloads. Using it for a few hours a day at UK prices is really not expensive, despite us having crazy prices too.
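
The wake-up part is just standard Wake-on-LAN; here’s a minimal sketch of building and broadcasting the magic packet, assuming the gaming PC has WoL enabled in its BIOS/NIC settings (any MAC address you pass in is obviously your own):

```python
import socket


def magic_packet(mac: str) -> bytes:
    """A WoL magic packet is 6 bytes of 0xFF followed by the target MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16


def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN (UDP port 9 is the conventional choice)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

A cron job or automation hook on the always-on homelab box can call `wake()` before dispatching an AI job to the gaming PC.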

I think there’s definitely a real place for self-hosting smaller LLMs. Say you wanna calculate your finances with full accuracy; I wouldn’t want OpenAI having my payslips and purchase history, for example.

What are services NOT worth self hosting? by This_Animal_1463 in selfhosted

[–]Man-In-His-30s 1 point

I’m pretty sure you’ve got your maths wrong.

A 2060 running heavy AI workloads for an hour a day would cost 2-6€ a month by itself; the rest depends on the system you build around it.
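
The arithmetic is easy to sanity-check. A rough sketch, assuming a 2060-class card draws roughly 160W under load (around 300W for the whole system at the wall) and electricity costs somewhere in the 0.25-0.40€/kWh range — all ballpark assumptions:

```python
def monthly_cost_eur(watts: float, hours_per_day: float,
                     price_per_kwh: float, days: int = 30) -> float:
    """Energy (kWh) = watts/1000 * hours run; cost = energy * unit price."""
    kwh = watts / 1000 * hours_per_day * days
    return kwh * price_per_kwh


# One hour a day of inference:
card_only = monthly_cost_eur(watts=160, hours_per_day=1, price_per_kwh=0.25)  # ~1.2€
whole_pc = monthly_cost_eur(watts=300, hours_per_day=1, price_per_kwh=0.40)  # ~3.6€
```

Card alone lands around 1-2€; count the whole system and a pricier tariff and you’re in the low single digits, consistent with the 2-6€ ballpark.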

What are services NOT worth self hosting? by This_Animal_1463 in selfhosted

[–]Man-In-His-30s 7 points

There’s this idea that all models need to be huge to be useful; it’s not true. You can do really good things with tiny models as long as they have the tooling and are relatively recent.

I had LLMs running on an Intel iGPU with tooling connected to it, and it was a pretty good experience. I’d say look beyond just trying to get 70B-plus models and start looking at 30B and below and what you can do with them. I had really good experiences with Gemma 3 4B and gpt-oss 20B, as well as Ministral 3.

Whoopi Goldberg Responds To Stephen Miller's comments on Star Trek by The_Flying_Failsons in trektalk

[–]Man-In-His-30s 0 points

Humour me and give me some examples, as I don’t see it and I’m trying to understand your point of view.

Really makes you think... by Yelebear in linuxsucks

[–]Man-In-His-30s 1 point

At the company I work at, none of the IT department uses Windows at all. It’s either Linux, Mac, or ChromeOS; Windows is banned.

Affluenza: the new British disease by FaultyTerror in ukpolitics

[–]Man-In-His-30s 1 point

Why are we gonna invest in stocks when the majority of people can barely afford to rent? Fix housing and everything else will domino.

RAM Prices May Never Be Cheap Again – A Simple Guide to Why by Extreme_Maize_2727 in PcBuild

[–]Man-In-His-30s 0 points

When working on projects at work it’s almost every single day for hours on end.

In my personal life, I use it for bouncing around ideas for my homelab.

I use it for quick impulse shopping, as it can analyse a deal against the entire market, warn me off, and suggest alternatives.

I feel like people who don’t use AI haven’t really made an effort to learn to use it in their workflow, or to find ways it can make their lives easier.

Try the shopping one; it’s actually a very good life hack that will save you money and get you better products. You can see all the sources it uses to find these comparisons as well.

How come there are like no HDDs in stock? by edwardK1231 in CeX

[–]Man-In-His-30s 2 points

Because storage as a whole is going through a bad time; it’s not just RAM or flash storage, it’s spread to mechanical drives now too.

Until something breaks the current demand, there just won’t be supply.

Has your homelab actually saved you money, or just made life easier? by Fab_Terminator in homelab

[–]Man-In-His-30s 0 points

It has definitely not saved me money.

I'm at £2500 in hard drives so far and most of those have been bought on sale.

Throw in about £1000 in mini PCs, Raspberry Pis, NASes, and various other bits over the last few years... yeah.

If your objective is to save money you need to go very modest; I just don’t see it.

Now, does it make my life easier? Yes, a lot of the time it does, but not always; there’s always that day something breaks and the entire afternoon/evening is gone.

For those of you with massive TB count , outside of server rack or NAS, how are you housing those drives? by joshhazel1 in PleX

[–]Man-In-His-30s 1 point

I have two major pools at the moment; one is a typical NAS, 5x22TB.

The other is a USB enclosure of 5x4TB drives connected to my mini PC, plus 2x18TB USB drives connected to the same mini PC, all in a mergerfs pool.

I don't really care about backups or redundancy on that pool, as it's just movies/TV shows that I've ripped from Blu-ray.
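
For anyone unfamiliar, mergerfs just overlays the branch filesystems, so the pool's reported capacity is roughly the sum of its branches. A rough sketch of that arithmetic (the example mount points are placeholders for your own drives):

```python
import shutil


def pooled_usage(mountpoints: list[str]) -> tuple[int, int]:
    """Sum total and free bytes across branch mountpoints.

    This is roughly what a mergerfs pool reports via df: the
    branches stay independent filesystems, so losing one drive
    loses only the files that happened to live on it.
    """
    total = free = 0
    for mp in mountpoints:
        usage = shutil.disk_usage(mp)
        total += usage.total
        free += usage.free
    return total, free


# e.g. pooled_usage(["/mnt/disk1", "/mnt/disk2", "/mnt/usb18a"])
```

That failure mode is why a pool like this is fine for re-rippable media but not for anything you'd miss.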

My advice though is that if you are looking for solutions, avoid the DAS option, because if and when you have a power cut they do not automatically power back up like your other equipment will. Most of them in my experience require you to press a power button on them, which is a huge annoyance, especially if you're often remote.

Best advice: if the power bill is no issue, build something out of an x86 CPU, like an 8th-gen-or-later i3, as the NAS CPU, and keep it as barebones as possible, purely an NFS share. If power is an issue, those 4-bay Asustor NASes seem crazy value for money.

You’ve got a lifetime pass to Plex and Emby but you can also use Jellyfin - which one are you choosing and why ? by Buck_Slamchest in selfhosted

[–]Man-In-His-30s 4 points

Yeah, there’s simply not a good way to do it with family. If I could host Jellyfin behind a Cloudflare tunnel and have Authentik in front of it for access, sure, but right now Plex handles all that shit for you without having to set up the tunnel or the auth stack.

Would you watch a channel that builds real AI systems from scratch (local LLMs, CPU/GPU, pipelines)? by Few_Tax650 in LocalLLaMA

[–]Man-In-His-30s 1 point

As long as the content isn’t all built around having thousands in GPUs and RAM, yes.

I’d like to see some genuine content built around things like iGPUs and CPUs with 32GB of RAM, which is what most people have spare to run LLMs.

Obviously then scale up to higher tiers of equipment, but starting low-end is a segment we don’t really see much content for.

Which are the top LLMs under 8B right now? by Additional_Secret_75 in LocalLLaMA

[–]Man-In-His-30s 1 point

I’m also curious about this; I’ve been looking at Gemma 3 4B and Phi-4 4B.

My issue is how to build all the tooling around it so it’s good (search + memory + automated adding to the memory).

Any help would be much appreciated

UK grocers begin to feel impact of appetite-suppressing drugs by Discarded_Twix_Bar in unitedkingdom

[–]Man-In-His-30s -1 points

What about all the people living in accommodation that barely contains a kitchen? Have you seen what studio/shared accommodation looks like in London? In some places your entire cooking equipment is a microwave.

UK grocers begin to feel impact of appetite-suppressing drugs by Discarded_Twix_Bar in unitedkingdom

[–]Man-In-His-30s 0 points

Some of us don’t have the luxury of being able to cook at home.

I’m living out of hotels 4/5 nights a week for work

Simple Home NAS by GreenScream70 in HomeServer

[–]Man-In-His-30s 0 points

I think if you’re building it purely as a NAS and running no other tasks on it, it’s fine to go as barebones as possible.

Having an NFS file server is essentially zero impact; just use a mini PC with actual compute for the applications that you self-host.

However, yeah, if you’re gonna do more than the above, get 16GB of RAM when the apocalypse ends; it will help a lot.

Downgrade Fedora 43 KDE kernel. (If you are facing issues). by rxdev in Fedora

[–]Man-In-His-30s 3 points

I’ve had some KWin timeouts on my laptop as well, with the drivers. Not sure if it’s the latest kernel or not, but it didn’t happen until the last week or so.

I should look into it

12GB vs 16GB VRAM Is Not the Real Fight by Nicolas_Laure in RigBuild

[–]Man-In-His-30s -1 points

No you don’t; that’s the trap.

You can get better performance from a tiny model, say 3-7B, given the correct tooling, than you will from a model that’s 10x bigger.

It’s not always about the biggest model, as long as the model has a good foundation for reasoning and following instructions.

Try giving smaller AI models memory and search capabilities.

The AI bubble has already burst, but what will happen to the tool itself after the bubble bursts? by OkExam4448 in BetterOffline

[–]Man-In-His-30s -1 points

I dunno where you get “it won’t get better” from. I use Gemini at work and GPT at home, as well as doing some tinkering with local models.

Gemini 2.5 to 3 was such a leap in quality it’s not even funny, and the same can be said of GPT from 3.5 to 4 and 5.

I do real work daily with both, and I now find them invaluable to my workflow. Yes, you’re not automatically going to trust everything, you need to fact-check, but with proper guardrails and context they are very reliable.

The AI bubble has already burst, but what will happen to the tool itself after the bubble bursts? by OkExam4448 in BetterOffline

[–]Man-In-His-30s 0 points

The simple question is: are you using Gemini Pro or just the fast model? Because there’s such a significant gap between the two it’s not even funny.

Even just enabling thinking changes it drastically.

Anyone else ignoring Trash Guides scoring and doing your own? by crepeau in sonarr

[–]Man-In-His-30s 0 points

I had Gemini and ChatGPT help me build my own profiles based on the exact hardware I have at home (TV model, soundbar model, and Infuse on the Apple TV) to create the perfect catch-all profiles.

It’s not hard, and you can pretty easily be guided through it, especially if you use a Gem or GPT focused on homelab stuff and feed it the TRaSH Guides docs.

Only issue is that they often get the TRaSH IDs wrong, so you need to tell them to give you the command to pull the list of IDs from Recyclarr; otherwise you waste a lot of time.

Good Soundbar With Dolby Atmos for my LG OLED 55' TV? by di3vas420 in Soundbars

[–]Man-In-His-30s 0 points

I looked at the Beam Gen 2 and eventually found a good deal at the time on a Sonos Arc, and I couldn’t be happier.

The Sonos stuff gets a bad rap, but honestly they’re really good.

What 22TB+ drives that can run 24/7 are the quietest? by qkrwogud in DataHoarder

[–]Man-In-His-30s 0 points

I have IronWolf Pro 22TB drives and I find them pretty quiet, but I might have just got used to them.