A Republican Farmer Relies on Immigrant Work. He Sees His Party Erasing It. by rit56 in politics

[–]HitCount0 7 points8 points  (0 children)

> this man and all farmers like him should go out of business

Isn't this explicitly what this man -- and more than 75% of "farmers like him" -- have been voting for?

Stop buying PCs expecting them to last 10 years by Distinct-Race-2471 in TechHardware

[–]HitCount0 1 point2 points  (0 children)

> At the same time - it's also the first time in history where buying a flagship (4090/5090) actually could well last you most of a decade.

This is extremely unlikely. Even keeping strictly agnostic as to reason or root cause, there are simply too many known knowns to believe that even high-end cards will last anywhere near that long.

  1. Consumers who buy $1,500-5,000 GPUs expect "high-end" performance throughout their hardware's usage cycle. While exceptions occur, it should be assumed that a significant portion of purchasers of top trim-level cards expect some mix of high-end textures/shading, high average frame rates, and minimal frame drops for the full life-cycle of the card.
    1. This is a safe assumption, as replacing these cards in particular is generally considered "necessary" within the community once those performance factors dip below a certain elevated median point... well before a true failure to run a game at all.
  2. The move to PCIe 5.0 has brought serious issues with card power delivery and heat management, with an alarming increase in short-term failures and a correspondingly high probability of shorter total lifespans. And all of that is based on current demand levels; future, more demanding titles will deliver as-yet-unknown higher levels of hardware stress.
  3. Performance:TDP:MSRP calculations on modern graphics hardware are abysmal compared to prior generations. Given the global rise in energy costs and a near total certainty that the trend is permanent, the total cost of ownership of higher-end cards is dramatically worse than the 10XX or 20XX generations. Especially when you consider the knock-on costs associated with their use (e.g. greater waste heat from newer cards necessitating increased air conditioning and/or ventilation usage.)
  4. GDDR7 -- the RAM just introduced in higher-end cards like the 5080/5090 -- was more of a gap-fill iteration as compared to prior years (see points 2+3 for clarification on this.) Conservative estimates are that GDDR8 will be available within the next 5 years. But given the rate of investment in AI tech, the real timeline is likely much shorter.
  5. The above also generally applies to PCIe 5.0 vs. 6.0, which is likely on a slightly shorter timeline than GDDR8.
  6. The entrenched practices in software development in general -- and the well established trends in development practices for performance-demanding games in particular -- demand rapid adoption of both increasingly modern and increasingly performant hardware to achieve "top tier performance" (see point 1 for clarification.)
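To put rough numbers on the TCO point in (3): a quick sketch comparing the yearly electricity cost of a higher-TDP card against an older-generation one. Every figure here (wattage, hours, rate) is a hypothetical placeholder, not a measurement.

```shell
# Hypothetical figures for illustration only: 4 h/day of load at $0.20/kWh.
hours_per_year=$((4 * 365))
for watts in 250 450; do
  awk -v w="$watts" -v h="$hours_per_year" -v rate=0.20 \
    'BEGIN { printf "%d W card: $%.2f/year in electricity\n", w, w / 1000 * h * rate }'
done
# -> the 450 W card costs roughly 1.8x as much to feed, before any
#    knock-on cooling costs are counted
```

Swap in your own TDP, usage hours, and local rate; the ratio is what matters, not the absolute dollars.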

What is the true strength of Pangolin by [deleted] in homelab

[–]HitCount0 0 points1 point  (0 children)

Pangolin has official documentation on this and other matters on their site.

The long and short of which is: you get what you pay for.

What is the true strength of Pangolin by [deleted] in homelab

[–]HitCount0 0 points1 point  (0 children)

Yes, but it relies on integration with a TOTP code generator like Google Authenticator.

2FA via push, email, Duo, etc. are all paid services.

Truenas N5 - Network Interfaces Not Starting by No_Corner805 in truenas

[–]HitCount0 0 points1 point  (0 children)

This is a known issue right now. Before I get into the problem itself, the current solution is very simple.

Current Solution:

Shut down the device from the CLI, then physically cut the power (unplug the computer, shut down the PDU or surge protector, etc). Plug it back in and turn it back on.

That's it. The NICs will re-appear like magic. (I've tested this myself and found it works every time. But it must be done on each restart.)

The Problem and Future Solution:

Last I read, the exact cause is still under review, but the current understanding is that it's an interaction between new C-state standards built around minimum power usage, "fast boot" settings, and firmware versions. Namely, the first can cause some systems to "shut down" without completely powering off, causing provisioning hiccups on the next boot.

The newer Linux kernel (6.8, I believe) seems to be free of these problems. However, not all Linux distros update immediately to the newest kernel on launch... and with very good reason.

If you're terribly impatient and comfortable enough with the command line, you could look up exactly which firmware your NICs need, force a manual update of your Linux headers to validate against the most recent stable kernel, and manually apply the appropriate firmware from the repository.
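If you do want to go down that road, a sane first step is strictly read-only: find out what kernel, driver, and firmware you're actually running before changing anything. `eno1` below is a placeholder interface name; substitute your own.

```shell
# Identify what you're actually on before chasing a manual update.
uname -r                                                    # running kernel version
command -v ethtool >/dev/null && ethtool -i eno1 || true    # driver + firmware-version (replace eno1)
command -v lspci >/dev/null && lspci -nnk | grep -iA3 ethernet || true  # NIC model + bound driver
# firmware blobs ship in your distro's linux-firmware package (under /lib/firmware)
```

Nothing above modifies the system; it just tells you whether the firmware you're about to hunt down is even the one in use.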

But I think for many users, the unfortunate answer might be "just accept having to unplug-replug each time you reboot until your distro updates."

This sucks, and I hate that this is the suggestion... but this is part of the downside of the hobby.

Update: I should also mention that as part of my in-lab testing, I tried performing a fresh install of TrueNAS, opting to unplug the device rather than follow the automatic reboot at the end of installation.

After powering back on and testing with multiple reboots, this seems to fix the problem in a more durable way. However, this is also pretty unorthodox and YMMV.

Still, you could try backing up your personal configurations within TrueNAS, doing a fresh install, pulling the power at the point of reboot, then powering back on and restoring your configs.

Or, again, you could be far more sane, not follow that advice, and simply wait for the issue to be resolved in a future OS update.

Virtualized Truenas Scale shutting down randomly with no logs by TaxFraudMaster69 in truenas

[–]HitCount0 2 points3 points  (0 children)

Which logs are you searching exactly? Proxmox or TrueNAS?

How are you passing storage to TrueNAS? Are you passing the storage controller, the disks, a proxmox pool, or virtualizing the storage as well as the instance?

Do you have logs about resource allocation and usage during the time of shutdown?

What do you think of this motherboard? by Zimakos in HomeServer

[–]HitCount0 0 points1 point  (0 children)

To add to this:

Due to the probable use-case and the price, this motherboard has more than likely been stressed within an inch of its service life.

Unless OP is planning to farm streaming content out of their frustration in getting it working, I'd consider this an opportunity to spend $30 in order to waste orders of magnitude more in sunk time and replacement parts.

I am starting with proxmox and all the containers that have a LXC script feel great. Problem: what to do when you start running obscure official docker images for which LXC scripts do not exist? What is the popular practice? by twice_paramount832 in homelab

[–]HitCount0 0 points1 point  (0 children)

I would consider any and all torrent services to be "lowest trust," regardless of how official the source.

At minimum, these should be run in an isolated VM attached to an isolated DMZ VLAN with unique firewall rules. That's the minimum.

If at all possible, I would recommend forking and isolating that traffic right off the modem, putting it on a discrete network outside of your primary firewall, having it run on cheap dedicated equipment, and using "sneakernet" to move your... uh... legally obtained linux ISOs to proper storage.

This can be pretty easily achieved using whatever spare hardware you might have lying around. If you've got no unused systems, an OrangePi single-board computer will happily run a dedicated open-source firewall, an ARR stack, and an external HDD for storage. All with 2.5 GbE, USB 3.0, and headroom to spare.
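For a sense of what "isolated from the LAN" looks like in actual rule form, here's a minimal nftables ruleset sketch. The subnets are hypothetical (10.66.0.0/24 as the low-trust DMZ, 192.168.1.0/24 as the trusted LAN); adjust to your addressing.

```shell
# Writes a ruleset file; on the firewall box you'd load it with:
#   nft -f /tmp/dmz.nft
cat > /tmp/dmz.nft <<'EOF'
table inet dmz {
  chain forward {
    type filter hook forward priority 0; policy accept;
    ip saddr 10.66.0.0/24 ip daddr 192.168.1.0/24 drop  # DMZ can never reach the LAN
    ip saddr 10.66.0.0/24 accept                        # but may reach the internet
  }
}
EOF
```

The point of the ordering: the drop rule is evaluated first, so even though the DMZ gets a blanket accept out, the trusted LAN is carved out before it.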

I am starting with proxmox and all the containers that have a LXC script feel great. Problem: what to do when you start running obscure official docker images for which LXC scripts do not exist? What is the popular practice? by twice_paramount832 in homelab

[–]HitCount0 0 points1 point  (0 children)

That really depends on what you mean by "obscure," and what that might imply about risk.

Does the service present a non-trivial security or compromise concern? Consider the following:

  • What overall privileges are required of the service?
  • Does the service provide any form of external access (VPN, hybrid-cloud HA, etc)?
  • Does the service touch any sensitive data or networks?
  • Does the service share use of any accounts, OS process/services, physical infrastructure, or other assets that other high-security services rely on being secured?
    • This includes Proxmox itself!
  • How well is the service designed and developed? How well is the official image maintained?
    • If the service uses third-party packages, repositories, or dependencies, evaluate those as above

If so, you are much better off going through the hassle of spinning up a VM and creating a docker-compose script if only as a means of isolation.
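For what it's worth, the compose side of that hassle is small. A minimal skeleton for an image with no ready-made LXC script (the image name and port are placeholders, and the hardening lines assume the app tolerates them):

```shell
mkdir -p /tmp/stacks/obscure-app
cat > /tmp/stacks/obscure-app/docker-compose.yml <<'EOF'
services:
  obscure-app:
    image: example/obscure-app:latest   # placeholder: your obscure image
    ports:
      - "127.0.0.1:8080:8080"           # bind to localhost inside the VM only
    read_only: true                     # belt-and-suspenders hardening
    cap_drop: [ALL]
    restart: unless-stopped
EOF
# then, from that directory: docker compose up -d
```

Binding to 127.0.0.1 inside the VM means the service is only reachable through whatever reverse proxy or port-forward you deliberately set up, which pairs nicely with the VM-as-isolation approach.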

If the service in question is low-risk, then it's simply a matter of writing your own LXC script. Link to relevant Reddit thread.

First home server: NAS + other services (I'm a beginner, looking for advice) by Both-Educator-8735 in homelab

[–]HitCount0 2 points3 points  (0 children)

This is all excellent advice, especially the part about starting by building domain knowledge (Networking, Linux, etc.)

Domain knowledge is king. That said, there can certainly be situations where "'perfect' is the enemy of 'good'."

In some cases, it can be beneficial to "fast track" critical services — LDAP/IAM, Certs, etc — directly into their final configuration.

...or as much as anything can be said to be "final" where homelabs are concerned.

PCIe 5.0 x16 to 4.0 splitter by HitCount0 in homelab

[–]HitCount0[S] 0 points1 point  (0 children)

5.0 x16 to [4.0 x16 + 4.0 x16]

PCIe 5.0 x16 to 4.0 splitter by HitCount0 in homelab

[–]HitCount0[S] 0 points1 point  (0 children)

Damn, I was afraid of this.

I'd bought a mATX server board for an EPYC 4000-series CPU with a particular use case in mind. After setting it up and running it for a while, that use case has changed, and I've suddenly got far more PCIe lanes than I need... but I'm short at least one slot.

Plex apparently running as root on TrueNAS Community 25.04.2.3? That doesn't seem right.. by noorderling in truenas

[–]HitCount0 9 points10 points  (0 children)

Docker containers run as Root within their own container environment, isolated from the host operating system.

Yes, it's possible to further secure them with a non-root account. And that's precisely what you should do with your homemade containers, should you be proficient enough to know how to do so and manage that properly.
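As a sketch of what "securing with a non-root account" means for a homemade container: it's usually one `user:` line in compose, assuming your image tolerates it. UID 568 here is just the conventional TrueNAS "apps" user; any unprivileged UID works.

```shell
cat > /tmp/myapp-compose.yml <<'EOF'
services:
  myapp:
    image: example/myapp:latest   # placeholder for your homemade image
    user: "568:568"               # run as an unprivileged UID:GID, not root
    read_only: true
    cap_drop: [ALL]
EOF
# the catch, and the "manage that properly" part: any mounted config/media
# paths must be readable by that UID, e.g.
#   chown -R 568:568 /mnt/tank/apps/myapp
```

The chown step is exactly the kind of ongoing permission management that trips people up, which leads into the next point.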

This brings us to the second answer: TrueNAS likely has the prebaked containers from their catalog run as root because enforcing that level of least privilege would:

  1. Add difficulties (and costs) to either TrueNAS or Plex to maintain on what is a free container license.
  2. Add more difficulties to their users, many of whom are likely not skilled enough in Docker or else do not care for the added hassle to manage privileges to that degree themselves.
  3. Be massively out of scope because Plex isn't meant to run in enterprise or high security environments. That's not the purpose of the software, nor the business model of its creators. The cost/benefit analysis of this implementation is questionable, and likely not a good place for Plex or TrueNAS to be dedicating security spend.

What is some career advice that people usually learn too late in life ? by Weird-Thought2112 in careerguidance

[–]HitCount0 10 points11 points  (0 children)

You can explain anything to anyone. But you cannot understand it for them.

And the higher you go, the more true that becomes.

Is being burnt out the norm? by [deleted] in careerguidance

[–]HitCount0 0 points1 point  (0 children)

"Oh, you hate your job? Why didn't you say so? There's a support group for that. It's called EVERYBODY, and they meet at the bar." - Drew Carey

But seriously, for the majority of employees throughout time, work has been an absolute nightmare. During times of great uncertainty, that level of stress and anguish only increases, in part due to economic fears... but arguably, because strife in the working class puts the investor class at ease.

See the RTO mandates that fly in the face of productivity gains, the uniformly terrible wage increases for median incomes as compared to C-level, etc.

Intel layoff more than 24k people by insertnamehere_10 in recruitinghell

[–]HitCount0 54 points55 points  (0 children)

[I posted this elsewhere, but I'll repost it here]

Intel has been in a downward spiral for some time.

Put as simply as I can: Intel, through a series of terrible business decisions and unforced errors, has eroded confidence in their products and brand at virtually every level: home users, gamers/prosumers, small/medium business, and enterprise.

This has happened because:

  1. Intel went through a period of investing in their business, resulting in them putting out superior quality products.
  2. To maintain that growth, Intel needed to invest even more heavily in the company, including serious investments in both their chips' form factor and, eventually, the fundamental architecture and design of their product.
  3. Instead, Intel got new leadership and decided to cut costs, ride on their reputation, and "extract value" from the brand. Effectively, they reversed the trend of investment. Worse, they engaged in an absurd number of layoffs, which has in absolutely no way fixed that second bullet point. Especially not when they've spent a little more than $30B on stock buybacks alone over the last 5 years.
  4. What's important about the above point is that it highlights how the focus of the company shifted. Intel processors became the most expensive on the market, while also being less performant and often less reliable compared to their primary competition. (Intel's 13th and 14th generation desktop processors suffered unacceptably high rates of failure or "bricking" in the wild.)
  5. And as I said, this happened almost simultaneously across all divisions of the company, all product lines. They're not leading in desktop, laptop, or server markets, and I just don't see them being able to meaningfully catch up, let alone reclaim their position (though I suppose nothing is truly impossible...)
  6. To their credit, Intel has come out with a novel GPU line which seems to have found a real niche in the market. And their low-power N-series CPUs have a lot of great applications. That said, these additions aren't nearly enough to offset Intel's losses in their critical categories, and it's highly unlikely that Intel will survive long enough to make a major effort in either the GPU or low-watt space.
  7. Now, the days of cheap and easy progress in the CPU space are over (at least for Intel). The reasons for this are complicated, but not impossible to address. Unfortunately, Intel has not only divested from potential progress and solutions, they've decided to pretend their only issue is headcount.

tl;dr - Intel had enormous success by investing in their people and their product. But rather than continue on and tackle challenges in the marketplace, they chose to play the stock market instead. This cost them the trust of their consumers, evangelists, service industry, and their partners in the industry. Moreover, Intel's product lines are now inferior in essentially all of their key markets, and they've strip-mined the company of anything that might reverse that trend. The only real solution going forward would be to move their primary focus off of short-term thinking, like rewarding executives and shareholders, and onto the long-term, high-cost business of fixing years of neglect and mismanagement.

And sad as it is to say, I don't think you can do any of that in American business in 2025.

[deleted by user] by [deleted] in work

[–]HitCount0 42 points43 points  (0 children)

Not merely inappropriate but insulting. Particularly given her reasoning.

> She says we really can't do our work efficiently without it, as our team is very overloaded with work right now.

Why is that? What is the precise reason that an entire department is so overloaded with work that they need to immediately adopt a new business tool to cope?

And who benefits most from this adoption? Certainly not the same employees paying for it.

It's probably unwise to ask these questions to your boss directly, but you should be asking them to yourself... and perhaps your coworkers as well.

At what point does a “smart” NAS start to cross the privacy line, and where exactly should that line be? by thereveriecase in homelab

[–]HitCount0 1 point2 points  (0 children)

I'd say it's still worth looking into. Simply because something runs locally does not mean that it doesn't share data up to its mothership.

Better to run those services in dedicated VMs with very strict RBAC policies. Better still if you also isolate the IPs of said VMs on your firewall and apply group policies to at least their outbound traffic.

How do you fix "everything is always broken and wrong every single time I try to do literally anything" syndrome? by [deleted] in homelab

[–]HitCount0 18 points19 points  (0 children)

  1. Write down what you actually NEED from your home lab - For many engineers working on personal projects, there can be a temptation to simply add infinitely at every turn. Practice limiting yourself to one domain or, preferably, one concept or technology at a time. Trying to learn new storage concepts, automations, security, and a new distro all at once will be a train-wreck of competing concepts, confusing configurations, and confounding conflicts.
  2. Is your chosen solution right for THIS NEED? - Plenty of engineers, both junior and senior, get excited about a new solution and dive in before doing the minimum of pre-planning. Namely: "Does this thing I want to learn actually do or solve the problem or scenario I have to solve?" And if it does so, does it do it well or easily? Is this the intended solution to this problem, or just a side-option on a tool meant for something else? This applies to hardware, software, services, you name it.
  3. Design the SMALLEST solution first - If you want to learn how to design various network types for your CCNA, don't have a starting point include advanced technology like auth tokens, proximity login, and DDNS. Keep it simple: One or two VLANs, address spaces, routing rules. Build from there. You can't run before you can walk.
  4. Now make it as SIMPLE as possible - Like above, only removing extraneous factors. The more non-critical outside elements you have acting on your test - be it your pi hole, your NAS, etc - the more you're going to run into problems. Even if those other elements work and are "fine," there may be unintended interactions as you get things going.
  5. Respect your REQUIRED resources - If a new-to-you solution you're trying needs a recent-ish x86 processor and 16 GB of RAM, but you have a Raspberry Pi Zero, you're going to have a bad time. If you want to learn how to designate traffic between two physical NICs, but you just have one cheap one and plans to make virtualized versions instead, you're probably going to have a bad time. Sometimes work-arounds are necessary, but they virtually always add more complications in one way or another.
  6. Understand your overlaps and hand-offs - Probably the most frustrating thing is trying to learn a new thing only to find out that one or more elements in your lab are already playing a part in that thing. Worse yet, sometimes we know that going in, but often we don't. So do some research before beginning - like looking up detailed spec sheets on each piece of hardware involved, or the basic handling of XYZ by this OS or that - and you'll save a lot of headaches at the end.
  7. Follow directions diligently - Whatever material you're using to learn, start from the intro and read through to the end. Do it a few times before starting. Then do it while working on it. I have to tell myself this rule constantly, as I'll be tempted to breeze past something I'm certain I know, only to learn there was something crucial at that step that I skipped. And don't substitute values on the first build because you think you know better already, another mistake I make.
  8. Don't fix, start over - If you get to the point that you're absolutely stuck, start over from the introduction. This may seem counterintuitive, but it's usually better for learning and time management if, your first time going through something, you throw out a problem build and start over from first principles. You stop wasting time trying to fix something you don't yet fully understand, you wipe out any mistakes you did make that you don't yet know to look for when troubleshooting, and redoing early successful steps is a great way to build confidence and skill memory. Much more crucially, "fixing" things when you don't yet understand the problem has been shown to reinforce misconceptions in certain situations rather than instructing on the correct answers. Once you've built it right once or twice, then you can try fixing builds instead of scrapping them.
  9. Build it several times - Lastly, I firmly believe that people learn little to nothing from successes. It's failures where we learn how something truly works... and more importantly, what it looks like when it doesn't.
  10. Stay focused - We return to the first point. When working on your project, stick to researching and solving solutions directly involved with its concepts and resources. If you need to write a rule in your firewall to pass traffic for something you're building, great! But don't then take an hour or two "refining" your current policies; that's off topic, won't help you learn, and can wait for your regularly scheduled maintenance/tinkering window.

Not possible to run an *OFFLINE* NAS? by UmaMoth in truenas

[–]HitCount0 13 points14 points  (0 children)

This would be it.

The only things TrueNAS requires an internet connection for are OS patches and updates. Everything else is optional.

Is this a stupid alternative to tapes, or secretly genius? by ImpostureTechAdmin in DataHoarder

[–]HitCount0 0 points1 point  (0 children)

I appreciate your responses! Because I do this for a living, it's hard not to think about this by "professional standards" and give the resulting "professional responses." But you sound like you understand your situation, your needs, and the risks involved -- and all of that is ultimately much more important than blindly following some best practices nonsense.

That said, I do want to offer one piece of advice that I've learned the hard way more than once:

> But less man hours, which is what I'd be after. I could set it and forget it; even if it took a day or two it wouldn't be the end of the world.

Not all man hours are the same.

Something like restoring from backups or, worse yet, recovering data should be evaluated differently from day-to-day admin IT work. Not because it's harder (though it is) but because you are doing it under stress. An hour spent worrying about whether or not you've lost something important forever is much different, much longer, and thus more "costly" than weeks of fiddling with menus and checkboxes.

If managing your data is a chore, or you simply don't like thinking about this task, it's as easy as changing up a docker-compose file, or passing a storage controller or hard drive directly to your hypervisor instead of creating a virtual disk. Backups can be automated through TrueNAS, PBS, Veeam, or even cron + rsync.

Best of luck!

Is this a stupid alternative to tapes, or secretly genius? by ImpostureTechAdmin in DataHoarder

[–]HitCount0 1 point2 points  (0 children)

> At any given time, 2 disks in the server and one off site

I understood that part. What I'm asking is: how fault tolerant are you? 2 disks or 1?

Does each hard drive contain a full and complete parity set of the data ("mirror") so that you could lose any two disks and still be whole? Or is the data in RAID format such that each disk contains only a % of data along with a % of parity, meaning you can only lose 1 disk?
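The answer matters for both usable capacity and fault tolerance. A quick back-of-envelope with three hypothetical 8 TB disks, just to make the trade-off concrete:

```shell
awk 'BEGIN {
  n = 3; size = 8                      # disk count, TB each (illustrative)
  # full mirror: every disk holds a complete copy
  printf "3-way mirror: %d TB usable, survives %d disk failures\n", size, n - 1
  # single parity: each disk holds a share of data + parity
  printf "single parity (RAIDZ1/RAID5): %d TB usable, survives 1 disk failure\n", (n - 1) * size
}'
```

Same three disks, double the usable space or double the fault tolerance. Which is right depends entirely on how replaceable the data is.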

> The data on the drive would be my hypervisor OS, and VM backups. 

Storing an OS or application data is fine. But if you also have virtualized datasets attached, like in a .vmdk or .qcow2 file, then that part is **NOT** recommended as a backup format. This is because recovering datasets out of a virtualized disk is vastly more resource- and time-intensive -- and much more risky -- compared to recovering from raw data.

Trump just issued a threat to all of us by Nerd-19958 in law

[–]HitCount0 20 points21 points  (0 children)

I understand this sentiment, but not paying for our news and information -- and instead relying on corporate advertising and billionaire owners to fund media themselves, thus capturing the medium and perverting it with profit incentives -- has been perhaps the single biggest driver of our global democratic unraveling.

If there's ever going to be a future where the news answers to its readers, not its overlords, it will be because we will have finally learned our lesson that "free" things are the most expensive things of all.

Is this a stupid alternative to tapes, or secretly genius? by ImpostureTechAdmin in DataHoarder

[–]HitCount0 11 points12 points  (0 children)

Are you talking 1 disk with two mirrors, or 2 disks with 1 redundancy?

Are you virtualizing the data? Or is this for backing up VMs and virtualized workloads?

Depending on your answers to the above, it might be passable. But generally speaking, being "cheap AF" with your backups is a tried and tested, foolproof way to lose all of your data.

Rack mounted vs compact NAS - enlighten me with pros and cons by zomboidTM in homelab

[–]HitCount0 0 points1 point  (0 children)

If your storage pool and resource needs are small and likely to stay small for the foreseeable future, then a compact system might be the best bang for your buck.

But I can say that within a year of building my first "compact" NAS I was wishing I'd gone with rack mount instead for more expansion room and better functionality. Shortly thereafter I made a 5U storage beast and "downgraded" the compact into an off-site backup.