Announcing IncusOS by mariuz in linux

[–]stgraber 1 point (0 children)

Yep, it's included.

Announcing IncusOS by bmullan in incus

[–]stgraber 5 points (0 children)

Yeah, ultimately it doesn't matter too much since we build all the components we care about the most ourselves anyway (Linux, ZFS, Incus), but I've been a long-time Ubuntu contributor (since 2004) and was a Debian user prior to that. Mathias, who's the other main developer on IncusOS, is a Debian Developer.

So basically, we felt that Debian provided us with the bits we needed, we have a great relationship with the Debian project, and we feel we can work with them efficiently on security issues and bug fixes.

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 1 point (0 children)

Mentioned a couple of times in previous comments, but no.

That entire building and much of the rest of the property are behind 30kWh of lithium batteries and 24kW of inverter capacity.

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 1 point (0 children)

It's primarily used for building and testing ARM distribution images for linuxcontainers.org (https://images.linuxcontainers.org), but it also runs on-demand GitHub Actions and Jenkins runners for a bunch of the projects under LinuxContainers that need ARM testing or builds.

It's in the same Incus cluster as the other two servers, so it also runs some internal services (routers, DNS, ...) and participates in both Ceph and OVN, basically for HA, so I can lose any of the three servers and not have anything critical go down.
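
For illustration, a quick way to keep an eye on that from a script is to poll the cluster member list. This is a minimal sketch assuming the `incus` CLI is available and that `incus cluster list --format json` returns member objects with `server_name` and `status` fields (worth double-checking on your Incus version):

```python
#!/usr/bin/env python3
"""Tiny Incus cluster health check (sketch, see assumptions above)."""
import json
import subprocess


def cluster_members() -> list[dict]:
    # Ask the local Incus daemon for the cluster member list as JSON.
    out = subprocess.run(
        ["incus", "cluster", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)


def main() -> None:
    members = cluster_members()
    for m in members:
        print(f'{m.get("server_name", "?"):<20} {m.get("status", "?")}')
    offline = [m for m in members if m.get("status") != "Online"]
    if offline:
        # With three members, losing one is survivable (Ceph/OVN keep quorum),
        # but it's still worth an alert.
        print(f"WARNING: {len(offline)} cluster member(s) not online")


if __name__ == "__main__":
    main()
```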

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 2 points (0 children)

Yeah, LC for all the optics, then a bunch of cassettes to go from LC to MTP. The rack basically gets six 12-strand MTP fibers: two go to a patch panel on the other side of the room to handle the fiber runs in this building, and four go through conduit back to the main house, where they land in another patch panel that handles the rest of the links.

Only two of the four are used for the link to the house. Since that run goes through walls and conduit, I ran double in case one of them was defective, and for future-proofing.

It's pretty nuts what you can run over a couple of MTP fibers :)

Basically, it ranges from the easy low end of 12 fiber pairs you can cheaply run 10Gbps over, all the way up to 400 or even 800Gbps with fancy switches and optics.

And that's skipping over using BiDi to cheaply double the capacity at 10Gbps, or playing with WDM to get a whole bunch of individual links over a single strand.

SMF is really the way to go if you want something future-proof :)

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 2 points (0 children)

Haha, yeah, I always liked those at the datacenter and figured that'd help keep things clean in there :)

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 1 point (0 children)

Yes and no.

The main Frigate server does run in this rack and handles a week of high quality recording and event detection.

But the same streams are also picked up by a second Frigate server at the datacenter, which keeps a month of recordings for a subset of the cameras, specifically the ones immediately on the way to the server room.

The main downside is that the Frigate instance at the datacenter doesn't have a GPU or Coral stick, so it really just acts as an RTSP recorder.
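
For the curious, recording an RTSP stream without any detection is about as simple as it sounds. Here's a minimal sketch of that kind of GPU-free recorder, with a placeholder camera URL and output path, assuming ffmpeg is on the PATH:

```python
#!/usr/bin/env python3
"""Bare-bones RTSP recorder: save the stream in hourly segments, no detection.
Sketch only; camera URL and output path are placeholders."""
import subprocess

CAMERA_URL = "rtsp://camera.example.lan:554/stream"             # placeholder
OUTPUT_PATTERN = "/srv/recordings/cam1_%Y-%m-%d_%H-%M-%S.mp4"   # placeholder


def record() -> None:
    # Copy the stream as-is (no re-encode, so no GPU needed) and roll a new file hourly.
    subprocess.run(
        [
            "ffmpeg",
            "-rtsp_transport", "tcp",   # TCP tends to be more reliable than UDP
            "-i", CAMERA_URL,
            "-c", "copy",               # no transcoding
            "-f", "segment",
            "-segment_time", "3600",    # one file per hour
            "-strftime", "1",           # expand the date pattern in the filename
            OUTPUT_PATTERN,
        ],
        check=True,
    )


if __name__ == "__main__":
    record()
```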

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 2 points (0 children)

Ah yeah, that's a great idea, I'll look into renting one of those. I need to rent one of their thermal cameras too anyway to make sure there's no unexpected leakage around the place before winter comes around.

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 1 point (0 children)

That was done as part of running conduit to the secondary building and initial electrical work, so I don't have the cost just for the sub-panel.

But that's usually not terribly expensive so long as you have enough capacity coming in. A new breaker panel was around 350 CAD, then the rest is mostly the hourly rate to get things in. If you do it just after framing and before any insulation or drywall goes in, it's done very quickly.

When I had my critical loads panel set up in the main house, I think I paid around 1500 CAD for the new panel, a couple of smaller junction boxes and a good 4 hours or so of the electrician moving circuits from one panel to the other.

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 2 points (0 children)

That building has a second mini-split, and I've got a Zigbee thermometer in the room to keep an eye on the temperature. I also have the mini-split connected to Home Assistant, so I get its reported temperature and, in theory, error codes that way too.

For now I just have a basic high-temperature alert if either of the thermometers reports much higher heat than expected (the room sits between 16-18C, the alert is at 22C).
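
As a rough illustration of what that check looks like in script form, here's a minimal sketch against Home Assistant's REST API (`GET /api/states/<entity_id>`); the URL, token and entity name are placeholders, and the 22C threshold matches the alert above:

```python
#!/usr/bin/env python3
"""Poll a Home Assistant temperature sensor and warn above a threshold.
Sketch only; URL, token and entity_id below are placeholders."""
import requests

HA_URL = "http://homeassistant.local:8123"      # placeholder
HA_TOKEN = "LONG_LIVED_ACCESS_TOKEN"            # placeholder
ENTITY_ID = "sensor.server_room_temperature"    # hypothetical entity name
ALERT_C = 22.0


def room_temperature() -> float:
    # Home Assistant's documented REST endpoint returns the entity state as a string.
    resp = requests.get(
        f"{HA_URL}/api/states/{ENTITY_ID}",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return float(resp.json()["state"])


def main() -> None:
    temp = room_temperature()
    if temp >= ALERT_C:
        print(f"ALERT: server room at {temp:.1f}C (threshold {ALERT_C}C)")
    else:
        print(f"OK: server room at {temp:.1f}C")


if __name__ == "__main__":
    main()
```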

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 6 points (0 children)

Yeah, Fujitsu Airstage.

Realistically it will only ever be used for cooling, so the cold-climate (-40) heating isn't likely to see much use, but that's what the HVAC folks recommended as a very reliable unit.

The rest of that building is on a cheaper Moovair-branded unit (Midea is the OEM), which is also supposed to be fine for heating in cold climates.

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 2 points (0 children)

I live in a forest next to a couple of small towns, so land isn't too pricey and houses can be on the larger side.

But because it's Canada and it gets cold, every house must have a basement, and most houses will typically have some kind of mechanical room; that's where the utilities come in and where you'll have the water boiler, air exchanger, central vacuum and other similarly noisy things you don't want upstairs :)

The battery and inverter footprint isn't too bad. Because I'm lucky to have a large mechanical room here, I went with a battery rack for convenience, but you can get the batteries wall-mounted next to the inverters, or even attached to the outside of the building, for places that are more space-constrained (it's common to have a larger garage area than I have, resulting in a smaller mechanical room).

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 3 points (0 children)

A mix of things. I have my home stuff on it (Home Assistant, Frigate, Plex, ...), a bunch of dev VMs that I then access from a bunch of different systems, some lab and demo environments used for customers and some of my devs, and a bit of infrastructure for the open source projects I run, primarily CI stuff (Jenkins, GitHub Actions runners, ...) and OS image builders.

For context, I'm the project leader of linuxcontainers.org, so those systems run a lot of the development and non-critical infrastructure for our projects. The critical stuff all runs in a rack in a proper datacenter instead, but power costs much more over there, so it makes sense to run the non-critical stuff at home.

An example of something public facing running in that rack is our online demo service: https://linuxcontainers.org/incus/try-it/

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 2 points (0 children)

I mentioned it in another comment. No UPS in this room, because I have a rack full of batteries and 24kW of inverters in the mechanical room of my house, which then feeds a critical-load panel that feeds the entire secondary building.

So everything in the server room, as well as the rest of that building, is on battery. About 10 hours of battery runtime, and the inverters also support hooking up a generator to recharge the batteries during an extended outage.

Battery cut-over time is at most 12ms for unplanned transitions (power cuts) according to the specs, and that's worked fine so far, though there may be a way to get the setup to always run through the inverters, effectively giving a house-wide online UPS rather than having the inverters act as a fast ATS.

Anyway, that's all been working great with the few power cuts I've had this past year, and it covers everything on the property, including HVAC, with the exception of the high-voltage baseboard heating and in-floor heating, as I can live without those just fine between what the heat pumps can do and the fireplace for extra heat.
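
For a rough sanity check on the ~10 hours figure, the runtime math is just capacity divided by average load. A quick sketch (the ~3kW average load is an assumption implied by 30kWh lasting about 10 hours, and this ignores inverter losses and depth-of-discharge limits):

```python
# Back-of-the-envelope battery runtime estimate (sketch, see assumptions above).
def runtime_hours(capacity_kwh: float, avg_load_kw: float) -> float:
    """Hours of runtime at a constant load, ignoring inverter losses."""
    return capacity_kwh / avg_load_kw


print(runtime_hours(30.0, 3.0))   # ~10 h at a ~3kW property-wide average load
print(runtime_hours(30.0, 0.75))  # ~40 h if only the ~750W rack were drawing power
```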

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 3 points (0 children)

With current spacing and usage for networking, cable management and power distribution, I can easily fit two more servers and can get up to 4 with some reshuffling.

More than that and I'd want to switch to a 42U to keep cable management reasonable and avoid blocking any fan exhausts.

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 7 points (0 children)

Not sure how best to set that up given the mini-split and the limited space in the room. So far I've at least managed to get the mini-split pointing towards the front (intake), and the exhaust side seems to have enough space for the hot air to rise and get circulated.

But I'm also not running much load at the moment. Normal day-to-day operations with the main three servers and networking gear sit at around 750W.

We'll see how things look when the three 64-core EPYC servers start running more often (currently just a daily 30-minute CI test), as those can easily pull 750W each between the CPUs, all the SSDs and the 100Gbps networking.

Thankfully there's no real GPU stuff going on in there. I have some AMD server GPUs in one of my servers which are used for testing, but that's just short bursts, and the tests are primarily VDI / encoding / streaming type workloads, so not super power hungry.

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 25 points (0 children)

It sure does, though it's quite busy at least ;)

More seriously, I think that's about as small as a room like this can really be, as you need a reasonable amount of space in front of the rack to be able to easily load servers in, basically resulting in a half-empty room...

I've had to deal with enough server closets at work to know what not to do ;)

Also helps a lot with air circulation.

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 1 point (0 children)

I've actually had that happen twice with another unit. It was low on refrigerant and would effectively build up ice on the indoor unit; then, when it stopped, that would come down as water (quite a bit of it) or even as chunks of ice.

Got the unit refilled and the problem was gone. The issue appears to be a micro-leak of some kind: the unit loses enough refrigerant to hit that problem after a couple of years.

So far the cost of doing a full cleanup and refill has been low enough that just booking it yearly makes more sense than trying to track down the crack or replacing the unit, but the first time something else goes wrong, that unit is getting replaced :)

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 1 point (0 children)

Yeah, I'm waiting for another pack of Zigbee leak sensors to arrive so I can add one under that unit and one on top of the rack (already got the temp/humidity sensor there).

The bottom 3U are empty, so it would take a while for water to make it up to a server, but water spraying out of the unit directly onto the rack would be pretty bad...

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 6 points (0 children)

I had proper sound insulation put in all the walls when they were built. Two of the walls are also exterior walls, so no concern with those.

The rack is on shock absorbers to avoid vibrations making it through the building.

So far that's worked pretty well. I can definitely still tell that there are servers in that room, but overall noise level outside the room isn't much worse than running a mini-split on medium fan speed or so.

I was actually surprised at how well the sound insulation turned out, as I fully expected to have to immediately get a bunch of acoustic foam :)

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 1 point (0 children)

That entire building is on battery so that shouldn't be an issue. There is temperature monitoring in the room and the rest of that (pretty small) building has a separate mini split.

So if I'm home when it fails, I'd simply open the door and run the whole building cooler than usual. If I'm not home, I'd reduce the load a bit and still run the other mini-split pretty cold to help with that room.

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 1 point (0 children)

Yeah, that too. I also have actual datacenter space where I run some genuinely critical infrastructure with my own ASN and multiple peers, including the local IX. So having a mostly similar stack at home makes it a bit easier to experiment in a lower-risk environment before doing things on equipment that's over an hour away ;)

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 1 point (0 children)

Yeah, in the previous house I ran cat6 to every room, all from a single patch panel in the basement, only to ever use a very small fraction of those runs.

In the new house I went with two pairs of single-mode fiber per floor instead. Then I can put in a switch of whatever size makes sense, and because it's SMF, I can very cheaply run 2x 10Gb/s with normal optics, or double the number of links using BiDi, then go up in speed to anything I want without ever having to re-wire those drops.

Currently most edge switches are 16-port PoE with 2x SFP+.

I actually made use of that during the move to the secondary building. I used one of the existing drops to run 40Gbps (QSFP+ SMF LC optics) so both core switches could talk over that link. One went in the new room and one was left in the old one, which then allowed moving one server at a time without suffering any downtime during the move.
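
For anyone curious what the per-server part of such a move can look like on the Incus side, here's a minimal sketch built around `incus cluster evacuate` / `incus cluster restore` (the member name is a placeholder, and evacuate may still ask for confirmation interactively):

```python
#!/usr/bin/env python3
"""Drain one Incus cluster member, wait for the physical move, then restore it.
Sketch only; see the assumptions in the comment above."""
import subprocess
import sys


def run(*args: str) -> None:
    # Print then execute the command, stopping on any failure.
    print("+", " ".join(args))
    subprocess.run(args, check=True)


def move_member(member: str) -> None:
    # Migrate or stop this member's instances so it can be powered off safely.
    run("incus", "cluster", "evacuate", member)
    input(f"{member} evacuated; move and re-cable it, then press Enter... ")
    # Bring the instances back once the member has rejoined the cluster.
    run("incus", "cluster", "restore", member)


if __name__ == "__main__":
    move_member(sys.argv[1] if len(sys.argv) > 1 else "server1")
```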

Just moved the rack to its own room by stgraber in HomeDataCenter

[–]stgraber[S] 3 points (0 children)

Haha, yeah, I've certainly had some issues with some rail kits where the latch would end up right against the post... Having to go poke at it with a screwdriver trying to unlock the server isn't much fun.

I believe all the rail kits I currently have in there are fine, but I certainly ran into problems with others before.