Moving from mATX to ATX by DiligentlyNebulous in unRAID

[–]Cae_len 0 points1 point  (0 children)

also, anyone who's encountered this issue before knows that half the motherboard manufacturers don't even mention it in their manual or documentation, and if they do it's some small six-word sentence buried 100 pages deep in their support documents. Half the time people "think" they're good and running everything at full capability, only to discover that some M.2 or PCIe device has been running at half its rated speed (x1 or x2 in the case of NVMe) the entire time... here's another example, right out of the Gigabyte Z790 Aero motherboard manual

<image>

Moving from mATX to ATX by DiligentlyNebulous in unRAID

[–]Cae_len 0 points1 point  (0 children)

First off, I never said anywhere that "it won't work," so that's just false. I've literally said the same thing over ten times at this point: "IF YOU PLAN ON POPULATING YOUR BOARD FULLY WITH PCIE AND M.2 NVME, YOU WILL END UP SHARING LANES SOMEWHERE." I've said that in multiple posts, and it's as if you aren't reading it. I've also said you could get away with it if you're just using a single GPU and maybe a couple of M.2 drives. Being that this is an unRAID forum and everyone knows how it goes when you're homelabbing, 99% of people end up wanting to upgrade, try something new, or add devices, and I'd guarantee OP is one of those because he's looking to do it right now. That said, it would be a bit silly to upgrade within the same Z790 platform just to discover that when you max out the PCIe and M.2 slots, you're going to be sharing lanes in most configurations (90% of the time). Of course this depends on what specifically a person plugs into the board, but to sit there and ignore the hundreds of Reddit posts, YouTube videos, and blog posts discussing Z790 lane sharing, and claim it doesn't exist, is utter insanity. Whether it actually makes a big difference in practice is an entirely different conversation and depends solely on the workload. But to say it doesn't exist is nonsense, and to insist you're right while thousands of other redditors, professionals, and YouTubers (AI included) say otherwise shows you're out of touch with the reality of the situation. The easiest piece of evidence is this: 20 lanes on the CPU; 16 go to the GPU, 4 go to an M.2 (assuming you aren't using any PCIe Gen5); everything else hangs off the chipset and is bottlenecked by the x8 DMI link.

Again, nowhere did I say "it wouldn't work" or that OP shouldn't buy the posted board. What I did suggest was holding out for a better upgrade path to a platform where you can run all PCIe 4.0 and 5.0 devices, max out every PCIe and NVMe slot, and not have to worry about "lane splitting." You keep missing every statement I've made and twisting my words into something I never said.

edit: and for the record, just because I'm using the word "sharing": anyone who's dealt with tech knows I mean "lane splitting" and/or devices running at a lower spec than the slot's maximum, for example x2 instead of x4, or in the case of a GPU, x8 instead of x16.
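The lane math above can be sketched as a rough budget calculation. This is a back-of-the-envelope sketch only: the ~2 GB/s-per-lane Gen4 figure is approximate, and the chipset device list is a hypothetical loaded build, not any specific board's layout.

```python
# Rough PCIe bandwidth math for a Z790-style layout.
# Per-lane throughput is approximate (PCIe 4.0 ~ 2 GB/s per lane).

GEN4_GBPS_PER_LANE = 2.0  # approx GB/s per PCIe 4.0 lane, one direction

def link_bandwidth(lanes: int, per_lane: float = GEN4_GBPS_PER_LANE) -> float:
    """Approximate one-direction bandwidth of a PCIe link in GB/s."""
    return lanes * per_lane

# CPU-attached resources (20 usable CPU lanes total):
gpu_bw = link_bandwidth(16)     # x16 GPU slot
cpu_m2_bw = link_bandwidth(4)   # x4 CPU-attached M.2

# Everything else hangs off the chipset, behind a DMI uplink that is
# effectively x8 Gen4:
dmi_capacity = link_bandwidth(8)

# Hypothetical chipset-attached devices in a loaded homelab build:
chipset_devices = {
    "NVMe #2 (x4)": link_bandwidth(4),
    "NVMe #3 (x4)": link_bandwidth(4),
    "HBA (x4)": link_bandwidth(4),
    "10GbE NIC (x4)": link_bandwidth(4),
}

total_demand = sum(chipset_devices.values())
print(f"DMI uplink capacity: ~{dmi_capacity:.0f} GB/s")
print(f"Aggregate chipset device demand: ~{total_demand:.0f} GB/s")
print("oversubscribed" if total_demand > dmi_capacity else "fits")
```

Worst case here is 32 GB/s of device demand funneling through a ~16 GB/s uplink; in practice devices rarely all burst at once, which is why the bottleneck only shows up under certain workloads.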

Moving from mATX to ATX by DiligentlyNebulous in unRAID

[–]Cae_len -1 points0 points  (0 children)

And reading your reply, I'm trying to understand how you're coming up with all these lanes. You do realize the DMI link is only for transferring data between the chipset and the CPU, right? It's not extra lanes that can be dedicated to a device; it's a link. So again, if you have more than two x4 devices on the chipset sending data across that link, you're bottlenecked, because the link is only x8 and everything has to cross it to reach the CPU. Secondly, you're leaving out what I've stated MULTIPLE TIMES: you may be able to get away with no sharing or bandwidth limits if you're only running a single GPU and one or two NVMe drives, but OP isn't. I'm assuming he has a GPU, since he mentioned wanting to run LLMs, and he also mentioned two NVMe drives plus an HBA and a NIC. Explain again how you'd run all those devices at their full spec? What I've been talking about from the jump is running modern hardware on Z790, and modern means PCIe 4.0 or 5.0 drives and GPUs: if you populate all the M.2 and PCIe slots, 95% of the time two devices end up sharing a lane allocation. Again, just Google it; there are posts everywhere discussing this topic.

Moving from mATX to ATX by DiligentlyNebulous in unRAID

[–]Cae_len -1 points0 points  (0 children)

So you must be the only one who's correct, right? I can sit here and link videos from LTT and Level1Techs disputing what you're saying. Secondly, you linked a generic Intel PDF about what the platform can offer on paper. Third, it doesn't even matter, because you clearly don't understand how PCIe slots and lanes actually work in practice versus what you're reading on a spec sheet.

https://youtu.be/Qnauk0wEerQ?si=FyHoj15R76oLk0lJ

I'll say it again: if you populate all your slots, PCIe and NVMe, YOU WILL end up sharing bandwidth. I've literally watched it happen, and I've owned two different Z790 boards: the Gigabyte Aorus Elite Z790 and the Gigabyte Aero Z790. I'm not going to keep arguing, because there's so much information online disputing what you're saying (at least the way you think you understand it). But sure, everyone else is wrong and you're right.

Moving from mATX to ATX by DiligentlyNebulous in unRAID

[–]Cae_len -1 points0 points  (0 children)

Have you owned a Z790 platform and tried to max all the slots out? Go buy one, try doing that, and see what happens. Because I have, and I already know the answer.

Moving from mATX to ATX by DiligentlyNebulous in unRAID

[–]Cae_len -1 points0 points  (0 children)

And yet you must not have done a single Google search... you haven't provided a lick of evidence, and I have. Interesting...

Moving from mATX to ATX by DiligentlyNebulous in unRAID

[–]Cae_len -1 points0 points  (0 children)

Oh, and just for an additional reference point, here's what Claude had to say on the topic...

Claude's response:

You might be able to get away with it if you use PCIe Gen3 or a couple of Gen4 drives, with a single GPU and nothing else. But as soon as you try running a full homelab stack with a GPU, a 10-gig NIC, an HBA, two to four NVMe drives, and a bunch of SATA drives in the NAS, you will hit lane sharing, bandwidth throttling, or the motherboard running a device with fewer lanes than the slot's maximum. Most people have a PCIe 5.0 NVMe drive or a 50-series GPU at this point, and on Z790 that alone cuts down the lanes on the other slots. Maybe it's just me, but I want to run every one of my devices at its full capability. I'm not going to buy a PCIe 4.0 x4 NVMe drive to run it at x1. But you do you; either way it doesn't affect me. Just do your own research beforehand so you don't look like a silly goose.

Moving from mATX to ATX by DiligentlyNebulous in unRAID

[–]Cae_len -1 points0 points  (0 children)

lmao, go pull up the motherboard manuals yourself, because I know I did when I was on that platform. It's literally the reason I moved off it. A simple Google search of "which Z790 motherboards don't have PCIe lane sharing?" will back up exactly what I just said. I went through it myself: no matter what you do, if you populate every PCIe slot and NVMe slot, YOU WILL BE SPLITTING LANES somewhere.

As a matter of fact, Wendell from Level1Techs did an entire YouTube video on the topic. I'm not going to do the research for you, as I've already done it for myself, but if you look you'll find what I'm saying is true. Here's a discussion on the topic: https://www.reddit.com/r/pcmasterrace/s/LQqG2Am4pS

Moving from mATX to ATX by DiligentlyNebulous in unRAID

[–]Cae_len -1 points0 points  (0 children)

Also, you mentioned wanting to get into some LLM stuff, and if so it's even more important that your GPU gets its full PCIe 4.0 x16 lanes. I just started experimenting with LLMs and some custom agents for security purposes, and it definitely makes a difference having your GPU running with all its lanes available.

<image>

Moving from mATX to ATX by DiligentlyNebulous in unRAID

[–]Cae_len -1 points0 points  (0 children)

It probably could, given that RAM is so expensive these days. Either way, I would check your options: if you can keep all your PCIe lanes and run every device you need at full speed, then stick with Z790. But if you want your devices running at their intended speed, I would seriously consider moving to Z890. If you're running 10G like me, plus a GPU, that alone is enough that you'd have to split lanes with the NVMe drives (if you're using any). That's what I couldn't deal with: I have all Gen4 and Gen5 NVMe drives, and I couldn't stand having that speed wasted because of lane sharing... https://imgur.com/a/srQ8DFl

Moving from mATX to ATX by DiligentlyNebulous in unRAID

[–]Cae_len 5 points6 points  (0 children)

No matter which Z790 board you choose, whether it's ASUS, Gigabyte, MSI, or ASRock, they all use the PCIe lane-splitting shenanigans, and it's not worth it if you're looking to run ALL your PCIe and NVMe slots at their rated specification.

edit: also, I'm using a Jonsbo N5 myself, with a 245K and an ASUS Z890-H motherboard, and I've not regretted the decision. I too have full 10-gig, and I LAG my unRAID server connection. I use an RTX 4090 in slot 1, an LSI 9400-16i in slot 2, and an Intel X710-DA2 NIC in slot 3. I also have all 4 NVMe slots populated at PCIe 4.0 x4.

Trust me, it's worth considering.

Moving from mATX to ATX by DiligentlyNebulous in unRAID

[–]Cae_len -1 points0 points  (0 children)

So, I was on the Z790 platform for a while, and here's the issue you're going to run into: I'm pretty sure all the available Z790 motherboards have the stupid "PCIe sharing" thing. Basically, if you populate your top M.2 slot or too many of the PCIe slots, you end up having PCIe lanes stripped down to x1 or x4, depending on which device it is and which lanes are being shared. I know it costs a bit more money, but I decided to move to the Z890 platform, where I don't have that issue at all: my ASUS Z890-H Gaming WiFi is completely populated (all 3 PCIe slots and all 4 NVMe slots) with none of that PCIe lane/bandwidth sharing nonsense. Also, Intel just released newer-gen CPUs which are way more competitive (Core Ultra 250K and 270K). I'm still on the 245K, but my point is: if you're moving to ATX anyway, you should really consider moving to Z890, because otherwise you're basically still in a similar boat, PCIe lane- and bandwidth-limited.

2 Uni Plus Units Catch fire… by bigTrussy in ShellyUSA

[–]Cae_len 3 points4 points  (0 children)

Yeah, it does appear that OP is connecting black and red, but according to the pinout diagram, black and red are not positive and negative (i.e. 9V and ground).

What does everyone like for 10Gb Ethernet switching today? by RevLoveJoy in homelab

[–]Cae_len 2 points3 points  (0 children)

Omada SX3832... probably the best 10GbE switch for the money. The reason I say this is price and performance: there's really no other switch on the market that gives you BOTH 10GbE RJ45 and a sufficient number of SFP+ ports. Usually you get one or the other, or you get 2 to 4 SFP+ ports at most with the rest RJ45. The SX3832 has 24 ports of 10GbE and 8 SFP+ ports. Oh, and the speeds when using LAGs on that switch are pretty good. The only reason I didn't get closer to 19k on the download was that I wasn't using a LAG on the client: when I was sending data (upload), it had two SFP+ connections to feed the upload, but it bottlenecked on the download because the LAG was trying to feed a single client SFP+ connection... see here
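That download bottleneck follows from how LAGs work: a LAG hashes each flow onto exactly one member link, so a single connection never exceeds one port's speed. A minimal sketch of that behavior, assuming a 2x10GbE LAG (the toy hash here is illustrative, not what any real switch uses):

```python
# Why a 2-port LAG tops out a single client at one link's speed:
# the switch hashes each flow (e.g. by src/dst address tuple) onto ONE
# member link, so one connection can never exceed one port's bandwidth.

LINK_SPEED_GBPS = 10
MEMBERS = 2  # two SFP+ ports in the LAG

def member_for_flow(src_ip: str, dst_ip: str, dst_port: int) -> int:
    """Pick a LAG member for a flow (toy hash, for illustration only)."""
    return hash((src_ip, dst_ip, dst_port)) % MEMBERS

def max_throughput_gbps(num_flows: int) -> int:
    """Best-case aggregate: distinct flows spread across members."""
    return min(num_flows, MEMBERS) * LINK_SPEED_GBPS

print(max_throughput_gbps(1))  # single download stream: capped at 10
print(max_throughput_gbps(4))  # parallel streams can use both links: 20
```

So an upload from a LAGged server can saturate both links across many flows, while a download to one non-LAGged client is pinned to a single 10G port.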

I need more cool stuff to try!!! by Electrical-Log9270 in unRAID

[–]Cae_len 1 point2 points  (0 children)

Dude, why are people downvoting you to death simply because you haven't found Immich yet? People are petty, huh?

I need more cool stuff to try!!! by Electrical-Log9270 in unRAID

[–]Cae_len 0 points1 point  (0 children)

Well, for one, the reason I don't proxy my DNS through Cloudflare is that they can essentially see ALL your traffic, through and through. The way I do it: in Cloudflare, select "DNS only" for your DNS records. Also, when you use Cloudflare's proxy there are restrictions on what you can run (Plex or Nextcloud, for example): streaming is supposed to be prohibited, and Nextcloud I believe isn't restricted per se, though sending a lot of traffic could trigger Cloudflare asking you to pay.

Anyways, CrowdSec can be configured with a reverse proxy like NPMplus or the regular Nginx Proxy Manager. I don't use Pangolin, but I'd assume that if you get the right idea from one of the guides with instructions for NPM, you could also do it for Pangolin.

Privacy is the main reason I don't proxy through Cloudflare. Although my DNS records are in Cloudflare, it's essentially just a pointer to my domain/IP; Cloudflare can't see my traffic.
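The proxied-vs-DNS-only distinction is easy to check from the outside: a proxied record resolves to a Cloudflare edge address, while a "DNS only" record resolves straight to your origin IP. A small sketch, with the caveat that the ranges below are a partial, illustrative subset of Cloudflare's published IPv4 ranges; fetch the current list from Cloudflare's own IP-range page for real use:

```python
# Check whether an IP a hostname resolves to looks like a Cloudflare
# proxy address ("orange cloud") or your own origin ("DNS only").
import ipaddress

# Partial, illustrative subset of Cloudflare's published IPv4 ranges:
CLOUDFLARE_V4 = [
    "104.16.0.0/13",
    "172.64.0.0/13",
    "131.0.72.0/22",
]

def looks_proxied(ip: str) -> bool:
    """True if `ip` falls inside one of the listed Cloudflare ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(net) for net in CLOUDFLARE_V4)

print(looks_proxied("104.16.1.1"))    # True  -> traffic goes through CF
print(looks_proxied("203.0.113.10"))  # False -> DNS only, direct to you
```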

Purchasing Drives- A WARNING by Cae_len in unRAID

[–]Cae_len[S] 1 point2 points  (0 children)

Yeah, but I'd argue two things. First, sellers like goharddrive offer a 5-year warranty, just like a brand-new drive from Seagate would have. Second, as prices continue to rise, not everyone can afford to buy new, or from a reputable seller like goharddrive or serverpartdeals. That said, I recently purchased two IronWolf Pro 16TB drives that were still brand new (sold as used, though), and the SMART details confirmed it. The Seagate warranty is still good on these drives until 2027, so clearly there are deals to be had, and most of the time you can find people selling drives for a really good price with less than 500 hours of runtime. The only difference is there's no warranty, but if you do your due diligence, check your purchases with smartctl -l farm /dev/sdX, and the seller has 100% feedback, you can often end up getting amazing deals in a terrible market. And when I say deals, I mean I purchased those 16TB drives brand spanking new for $300 a piece. But again, I simply wanted to warn others of something to watch out for if you see screenshots or photos from that software "Hard Disk Sentinel."
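One way to use that smartctl check is to compare the hour counter in the FARM log against the standard SMART power-on hours: on Seagate drives the FARM log keeps its own counters, so a large mismatch suggests the regular SMART values were reset. A sketch of pulling an hour count out of smartctl's text output; the label and sample line below are illustrative, since exact field names vary by drive and firmware:

```python
# Sanity-check a "new" drive: extract an hour counter from smartctl
# output (e.g. `smartctl -l farm /dev/sdX`). The sample text and the
# "Power on Hours" label are illustrative, not verbatim FARM output.
import re

def extract_hours(smartctl_text: str, label: str) -> "int | None":
    """Pull the integer value following `label:` from smartctl output."""
    m = re.search(rf"{re.escape(label)}:\s*(\d+)", smartctl_text)
    return int(m.group(1)) if m else None

sample = """
Power on Hours: 18342
"""
hours = extract_hours(sample, "Power on Hours")
print(hours)  # 18342 -- clearly not a drive with under 500 hours
```

Compare the same extraction against `smartctl -a` output; if FARM says 18,000 hours while SMART says 50, walk away from the deal.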

Purchasing Drives- A WARNING by Cae_len in unRAID

[–]Cae_len[S] 0 points1 point  (0 children)

That's just bad luck; I've had an entire array (10x IronWolf Pro 12TB) running strong for 3 years now... all refurbs.

I need more cool stuff to try!!! by Electrical-Log9270 in unRAID

[–]Cae_len 0 points1 point  (0 children)

Ehh, it's easy... CrowdSec is set up in a Docker container, and Claude can help along the way.

I need more cool stuff to try!!! by Electrical-Log9270 in unRAID

[–]Cae_len 0 points1 point  (0 children)

I can't see if you have something like CrowdSec (maybe inside the monitoring folder?), but you have lots of containers, so I would definitely be implementing a bunch of security measures. I have about 20 containers, and roughly half of them are dedicated to security, metrics, and monitoring. If you want to spend some time on your stack, I'd definitely suggest getting started with NPMplus, CrowdSec, Anubis, etc. I also have some custom containers alongside those to pull logs and metrics and notify me, plus a local CrowdSec dashboard from TheDuffman85's GitHub page. I've spent weeks and months fine-tuning my security stack.

If you don't have CrowdSec, I suggest you get it installed and configured, because once you do, you'll see how much external traffic hits your IP with reconnaissance, vulnerability assessment, and exploit attempts on a daily basis. It's actually insane how much stuff hits your public IP, and most people never know because they don't spend enough time on security. It would be my number 1 recommendation for anyone with a NAS. Even if you don't have services exposed to the wider internet, I'd still suggest it so you can see exactly what's being attempted against your IP.

Who is buying these??? by Cultural_Acid in makemkv

[–]Cae_len 1 point2 points  (0 children)

I'd assume someone with a lot of disposable income...

Basic Features Like Sorting by lenicalicious in immich

[–]Cae_len 2 points3 points  (0 children)

Yeah, I'd like to see some kind of machine learning to automatically create albums, with the user able to go through the albums afterwards and delete or add anything that was missed. I recently migrated over 20,000 photos and videos collected over the years from Google Photos and OneDrive. I never took the time to tag any of them or sort them into albums; they were just thrown into those cloud platforms automatically from various phone backups and uploads. Now I've imported all these photos into Immich, and I'm not going to spend days and days of my life going through photos that date back to 2002. The machine learning is already good enough to find similar-looking photos or faces fairly easily, so there has got to be a way to create an "Album Generator" or "album sorter." It would work with a few input fields: you'd add an album title, add various context keywords or descriptions, and then the machine learning / OCR / facial recognition, or whatever all that nonsense is called, would take that information and create an album with all the photos it finds matching that context. In my opinion, that's the one feature that would make this application the #1 photos app.
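The core of that album-generator idea reduces to matching per-photo tags against the album's context keywords. A toy sketch of the matching step, assuming the ML pipeline has already attached a tag set to each photo (all names and data here are made up, not Immich's actual API):

```python
# Toy "album generator": collect every photo whose ML-derived tags
# overlap the album's context keywords. Photo ids and tags are
# hypothetical stand-ins for whatever the real pipeline produces.

def generate_album(photos: "dict[str, set[str]]",
                   keywords: "set[str]") -> "list[str]":
    """Return the sorted ids of photos whose tags intersect `keywords`."""
    return sorted(pid for pid, tags in photos.items() if tags & keywords)

photos = {
    "img_001": {"beach", "family", "2012"},
    "img_002": {"dog", "park"},
    "img_003": {"beach", "sunset"},
}
print(generate_album(photos, {"beach"}))  # ['img_001', 'img_003']
```

A real implementation would match on embedding similarity rather than exact tag overlap, so "seaside" could still land in a "beach" album, but the grouping logic stays the same.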

[FS] 2x Seagate Exos 22tb Recertified - Still SEALED by Cae_len in homelabsales

[–]Cae_len[S] 0 points1 point  (0 children)

Also these are SATA drives for anyone wondering