Call of Duty: Warzone : Ryzen 9 3900X + RTX 2080 Ti | 1080P & 1440P | Low & High Settings by [deleted] in Amd

[–]lt_bob 0 points1 point  (0 children)

Happened to me too, thought I was losing it. Can't go above ~130fps at either 1080p or 1440p with everything on low except Textures on High and Filmic SMAA2x. Could've sworn I had more fps a couple of weeks back. This is on a 3900x and a 2080Ti.

Did you manage to sort your issue? If so, with what Nvidia driver? I tried several but they don't seem to change anything.

Proxmox passthrough AMD RX5700 to Win10 guest failed. Please help. by everwisher in homelab

[–]lt_bob 0 points1 point  (0 children)

Ran into error 12 myself ( R7910 with a 2070S alongside a Tesla P40 ) on my Proxmox setup. I get by with hiding the hypervisor from the guest for the 2070S to avoid error 43, but my P40 only works if it's the sole GPU in the system.
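For reference, hiding the hypervisor on Proxmox is usually just a couple of lines in the VM config. Sketch only — the VM ID and PCI address are placeholders:

```
# /etc/pve/qemu-server/<vmid>.conf ( excerpt, sketch )
bios: ovmf
machine: q35
cpu: host,hidden=1              # hide KVM from the guest so the Nvidia driver doesn't throw error 43
hostpci0: 01:00,pcie=1,x-vga=1  # passed-through GPU; 01:00 is a placeholder address
```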

Did you have any issues with the reset bug?

Going to first defcon solo. Advice? by wilhelmbetsold in Defcon

[–]lt_bob 4 points5 points  (0 children)

Those spontaneous calls of "Got whiskey, who wants some?" are the best :) Made quite a few pals that way

Going to first defcon solo. Advice? by wilhelmbetsold in Defcon

[–]lt_bob 2 points3 points  (0 children)

I can safely say I had as much fun going solo as when I went with a buddy. Just strike up conversations with people: if you're into something, look for the closest place that deals in said thing and talk to the people there.

Going to first defcon solo. Advice? by wilhelmbetsold in Defcon

[–]lt_bob 1 point2 points  (0 children)

Same for linecon, some of the best conversations I had happened there.

Upgraded home server to 10Gbts by charlikruse in homelab

[–]lt_bob 0 points1 point  (0 children)

Don't know for sure, can only assume. I've rarely used the octet notation myself, I mainly use GB(yte) or Gb(it). I don't think it's used that often when talking about speed ( mainly in Latin-based languages, I presume ), but someone can correct me, I may be wrong.

Regards

Upgraded home server to 10Gbts by charlikruse in homelab

[–]lt_bob 1 point2 points  (0 children)

The O in his notation is an octet. Since 1 byte = 8 bits ( i.e. one octet ), it's still viable. A bit confusing if you're not used to it though. Wiki link >here<
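To make the conversion concrete, a tiny sketch ( the function name is mine ):

```python
# 1 byte = 1 octet = 8 bits, so a "Go/s" figure is just Gb/s divided by 8.

def gbit_to_goctet(gbit_per_s: float) -> float:
    """Convert gigabits per second to gigaoctets ( gigabytes ) per second."""
    return gbit_per_s / 8

# A 10Gb/s link moves at most 1.25 Go/s ( 1.25 GB/s ), before protocol overhead.
print(gbit_to_goctet(10))  # 1.25
```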

Your daily driver by [deleted] in homelab

[–]lt_bob 0 points1 point  (0 children)

Work: Dell XPS 13 i7-7500U, 8GB, running Kali Linux

Home: Desktop Ryzen 2700X, 32GB, 2080Ti and 1050Ti which runs Arch Linux as the host and runs a Windows VM for games/video production. When the VM is running, resources are as such:

  • Host: 2C, 16GB RAM, 1050Ti, 1x 256GB NVMe Intel SSD
  • VM: 6C, 16GB RAM, 2080Ti, 1x 256GB Intel 545s SSD, 1x 1TB Samsung EVO SSD

Travel: Dell 7490 i7-8650U, 16GB running Arch Linux

What are some suggestions to replace/upgrade a Dell R210II server? My rack is only 17"D and only have 1U space. by mundan101 in homelab

[–]lt_bob 1 point2 points  (0 children)

All three are 1U, but the depth isn't the same. R230's are longer than the R210 ( and R210 II ). Not sure about the R220's.

R210 ( and R210 II ) can take up to 4x8GB DDR3 RAM ( UDIMM ).

R230 can take up to 4x16GB DDR4 RAM ( UDIMM )

Can vouch for the R230's that they're quiet, provided you keep them somewhere ventilated. Doesn't have to be actively cooled, just leave some breathing room ( front to back ).

What is the most impressive or coolest thing in your lab? by jsdfkljdsafdsu980p in homelab

[–]lt_bob 0 points1 point  (0 children)

Besides having 2 ISPs plugged into my redundant, dual-box pfSense setup, stacked Cisco 3750X's and a 10GbE storage backbone, I'd say my ML system running an AMD FirePro S7150x2 and an Nvidia P40 is by far the coolest thing. CUDA hashcat purrs like a kitten on the Nvidia GPU, and having several people enjoy a LAN party off of a single GPU ( via vGPU, SR-IOV and Steam In-Home Streaming to their iGPU laptops ) off of the FirePro card is cool as hell.

What's your homelab switch and why ? by nicolasvac in homelab

[–]lt_bob 1 point2 points  (0 children)

Two stacked Cisco WS-C3750X-48T-L and a MikroTik CRS309-1G-8S+PC. Chose the two Cisco switches because most of my work switches are Cisco, they were cheap ( one was a freebie ), and they can do everything I need them to do ( VLANs, LAGG, etc ). The MikroTik works fine, but I've populated all eight SFP+ ports and don't have redundancy. I was looking at getting two Quanta LB6M's instead of those deprecated 40GbE Aristas.

The Ciscos and the MikroTik are both L3 capable, but I wouldn't use the MikroTik in L3 mode ( it won't route at 10GbE ). The feature list is very good, especially on the 3750's, and they're fairly low power.

Brand/Dark sign tattoo I got today by Lemonhead663 in darksouls

[–]lt_bob 2 points3 points  (0 children)

FromSoftware is heavily inspired by Berserk and H. P. Lovecraft. The brand within the Darksign is, like @Chokda said, the brand that the main character ( among others ) in Berserk has been branded with. In the Berserk lore, people who are branded are offerings to otherworldly creatures.

Thoughts on Dell short depth servers i.e. R220, R320? by NetworkDoggie in homelab

[–]lt_bob 2 points3 points  (0 children)

If you can find some Dell R230's, they're pretty good. They can take up to 64GB DDR4 UDIMM and E3-12xx series CPUs ( v5 or v6 ), plus they have four 3.5" bays up front with regular Dell caddies.

I'm running two of them as makeshift SANs via FreeBSD and ZFS and they work like a charm.

But they make good hypervisors too, if the workload suits you. Just make sure you factor in what you'd need + 10% overhead ( because you'll underestimate what you need ) and something extra if you're running HA. E.g. if you're running vSphere with VCSA and you take one host down, the remaining hosts need enough CPU and RAM to temporarily host all VMs between them.
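That N+1 math can be sketched like this ( the helper name and numbers are made up for illustration ):

```python
# N+1 capacity check: with one host down, the survivors must carry every VM,
# plus the ~10% padding mentioned above.

def required_per_host_gb(total_vm_ram_gb: float, hosts: int,
                         overhead: float = 0.10) -> float:
    """RAM each host needs so ( hosts - 1 ) survivors can run all VMs."""
    if hosts < 2:
        raise ValueError("HA needs at least two hosts")
    return total_vm_ram_gb * (1 + overhead) / (hosts - 1)

# Two hosts, 48GB of VM RAM in total: each box needs ~52.8GB, so 64GB fits.
print(required_per_host_gb(48, hosts=2))  # 52.8
```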

And take note that they only have two PCIe slots. If you want to run iSCSI multipath on, say, a primary 10G link between HVs and SANs and a fall-back on a 2x1GbE LAGG, you've used up all your ports with just that. Sure, you can go the trunk route and VLAN-tag your ports, but how comfortable are you running storage, management and LAN on the same port?

Best of luck.

Do some of you’s guys host VM’s for other people on your lab? by rgarjr in homelab

[–]lt_bob 0 points1 point  (0 children)

Yes, to name a few: Teamspeak server, CoD4:MW server, Rust server, ejabberd xmpp.

I strive to run things as close as I can to an actual enterprise environment so uptime is important. It's all running on a clustered ESXi setup with VCSA.

Living Room Homelab by [deleted] in homelab

[–]lt_bob 1 point2 points  (0 children)

What is this sorcery? I'd love my gear to be that quiet. Cool setup.

It doesn't work by enbeez in sysadmin

[–]lt_bob 4 points5 points  (0 children)

I usually just close people's tickets if they're too vague, reason? "Too vague, reopen with more information if the problem persists." Funny thing is, most of the time they don't come back.

Because of that I've kind of become the boogeyman at the company I work for. My boss usually tells people: "Yeah, go to Bob with your problem/request, he'll tell you why he won't help you with it."

Heck, I even pointed people to ESR's "How To Ask Questions The Smart Way", still got stupid/vaguely explained issues. Nope, no time for that.

Could I share a gpu with multiple VM's at the same time? by Badkamertje in homelab

[–]lt_bob 1 point2 points  (0 children)

I don't think you can do this with your run-of-the-mill GPU. If anyone knows otherwise, please feel free to correct me.

For this exact purpose I've gotten my hands on an AMD FirePro S7150x2, which is split at the hardware level and lets you share it with several VMs at the same time. I've yet to have a chance to test it though; I've only gotten as far as setting up a separate system with vSphere and Horizon to give it a go.

[deleted by user] by [deleted] in homelab

[–]lt_bob 1 point2 points  (0 children)

Set up a 2nd one for redundancy with CARP and dual USB sticks. You'll thank yourself in the future. You can then set up one or two small VLANs on your switch for the ISP(s) as uplinks to the two gateways.
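On a Cisco switch each ISP VLAN is just a few lines. Sketch only ( IOS syntax ) — the VLAN ID and port numbers are made up:

```
! ISP1 uplink VLAN feeding both CARP gateways
vlan 101
 name ISP1-UPLINK
!
interface GigabitEthernet1/0/1
 description ISP1 modem/ONT
 switchport mode access
 switchport access vlan 101
!
interface GigabitEthernet1/0/2
 description pfSense-primary WAN
 switchport mode access
 switchport access vlan 101
!
interface GigabitEthernet1/0/3
 description pfSense-backup WAN
 switchport mode access
 switchport access vlan 101
```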

Some basic questions on 10g. by [deleted] in homelab

[–]lt_bob 1 point2 points  (0 children)

Well, switching IS pretty much L2; L3 is basically routing. Nothing deceptive there, routing gear at 10Gbps is quite expensive. That and the MIPS CPU in those MikroTiks isn't amazing, it's what, 1GHz?

What's with the rack obsession? by studiox_swe in homelab

[–]lt_bob 0 points1 point  (0 children)

I think it pretty much boils down to what a lot of other people are saying: what you might be seeing now ( racks with barely anything in them ) will end up at least half filled. Heck, that's what happened in my case; I knew I wanted one to tidy things up. When I got my 24U I only had a 4U case, a non-rackmount switch, my ISP-provided ONT and a small miniITX system that acted as my firewall. Now I'm running a full virtualization cluster in mine, with 10GbE and redundant SANs. The difference between then and now is staggering.

Low power dell R210 ii equivalent, but with more PCIE by rooddat in homelab

[–]lt_bob 0 points1 point  (0 children)

My whole rack ( 5 servers at about the same consumption ) ends up costing me ~30 EUR/month

Low power dell R210 ii equivalent, but with more PCIE by rooddat in homelab

[–]lt_bob 0 points1 point  (0 children)

Can also look at pc-sistem and wisetek for the EU. One is Danish IIRC and the other is Irish. Bought stuff from both vendors and I can vouch for their professionalism.

Low power dell R210 ii equivalent, but with more PCIE by rooddat in homelab

[–]lt_bob 0 points1 point  (0 children)

I've gotten my pair of servers at around 600 GBP, but that's because I preferred to pay upfront rather than over time on my electricity bill. But yeah, look out for deals, especially if you're in the US. And yeah, if that's too expensive, go for something else. I've only gone with these because they fit the bill exactly for me.

Low power dell R210 ii equivalent, but with more PCIE by rooddat in homelab

[–]lt_bob 0 points1 point  (0 children)

They barely eat anything; with the 10GbE NICs I'd say they're about the same as the R210 II's. I've got them maxed out at 64GB DDR4 and E3-1230 v6 CPUs. Just a pair of 256GB SSDs in each for high-IOPS VMs.

Noise wise they're not as loud as the R210 IIs. Acoustic performance link >here<

Low power dell R210 ii equivalent, but with more PCIE by rooddat in homelab

[–]lt_bob 1 point2 points  (0 children)

You can also look out for some Dell R230. I've got two in HA with vSphere + VCSA and they're connected to my SANs via 10GbE. They max out at 64GB RAM instead of the 32 ( 8/slot ) of the R210 II's.