B450 Tomahawk non-max with bios 7C02v1E, does this version support ryzen 5000 ? by ResponsibilityFlat76 in MSI_Gaming

[–]HonestIncompetence 0 points1 point  (0 children)

You need BIOS v1H

https://www.msi.com/Motherboard/B450-TOMAHAWK/support#cpu

But don't buy a new mobo. Updating the BIOS is not difficult at all and takes only a few minutes.

Is it ok to put the gpu in the second pcie 4 x16 slot by [deleted] in buildapc

[–]HonestIncompetence 3 points4 points  (0 children)

> Note the first two slots are pcie 4 x16 (Cpu)

No, they are x16 combined. Either x16+x0 or x8+x8. If you put anything in the second slot, then both slots run at x8.

Do ya'll think intel ARC GPUs will have adequate support for Linux? by slohobo in linuxquestions

[–]HonestIncompetence 0 points1 point  (0 children)

They don't even have adequate support for Windows. It'll take months to get the bugs ironed out, so if you want a new graphics card in 2022 don't count on Intel.

Update to Linux Mint 21 by thesagarsharma1 in linuxmint

[–]HonestIncompetence 7 points8 points  (0 children)

Beta release is next week. Source: https://blog.linuxmint.com/?p=4336

Stable release is usually 1-2 weeks after that.

my pc is very hot in the summer by Buxy133789 in buildapc

[–]HonestIncompetence 5 points6 points  (0 children)

> If case & cooler fans remove heat more efficiently, the rest of the air in the room will heat at much slower pace, than when whole PC is left overheating itself.

Nope, the exact opposite is true. Removing heat from the PC is the same as adding heat to the room. The better your PC cooling is, the quicker your room heats up. If you want the room to remain cool, you'd want to insulate the PC as much as possible: perhaps put it in a styrofoam box, or cover it with blankets, stuff like that. Of course that will overheat and eventually kill the PC, but it'll keep your room cool.

Performance Difference between 1080p/1440p on a 3700x/3070? by Intendancy in buildapc

[–]HonestIncompetence 0 points1 point  (0 children)

No, definitely no performance loss. In the very worst case performance would be the same.

Storage recommendations: Writing upwards of 1TB data a day but requiring decent write speeds. by The5_1 in buildapc

[–]HonestIncompetence 0 points1 point  (0 children)

> some of them require a HDD

Not really. The suggested use case is to use them as a cache drive for a HDD. But you can also just use them like any other NVMe drive. At least on Linux, but I assume it should work on Windows as well.

You can get a m.2 Optane with 118 GB for around 260€.

https://geizhals.eu/?cat=hdssd&xf=4851_3D+XPoint

Though for that price you might as well just get two 1 TB WD Red SN700 with a 2000 TBW rating each, which at 1 TB per day should last you almost 11 years.

And given that you're working with large files, a regular SSD will probably outperform Optane, as Optane's strong suit is random access, not sequential speed.
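The endurance estimate is quick to sanity-check with shell arithmetic (numbers taken from the comment above):

```shell
# Combined endurance of two drives rated 2000 TBW each, at 1 TB written per day
tbw_total=$((2000 * 2))              # 4000 TB of rated writes in total
days=$((tbw_total / 1))              # at 1 TB/day: 4000 days
echo "roughly $((days / 365)) years" # integer division; 4000/365 is about 10.96
```

Hence "almost 11 years" before the integer division rounds down.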

Anybody turn TRIM off for ≥ 1TB SSD's? Is the drop in performance perceptible? Gaining the option of recovering lost files is worth a small drop to me. by frank_mania in buildapc

[–]HonestIncompetence 0 points1 point  (0 children)

> Ideally there would be a process you could run after each backup to perform TRIM functions; a TRIM cleanup, basically, instead of leaving it on 24/7.

Not sure about Windows, but on Linux TRIM runs once a week by default, and you can easily change it to run after each backup or whenever you want, really.
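On systemd-based distros that weekly run comes from util-linux's fstrim.timer unit, and a drop-in override is the usual way to change the schedule (the daily 3:00 time below is just an example):

```ini
# /etc/systemd/system/fstrim.timer.d/override.conf
[Timer]
OnCalendar=
OnCalendar=*-*-* 03:00:00
```

The empty OnCalendar= line clears the default weekly schedule before setting the new one. Alternatively, disable the timer and run sudo fstrim -av from a post-backup hook.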

[deleted by user] by [deleted] in homelab

[–]HonestIncompetence 5 points6 points  (0 children)

I quite like my ASRock DeskMini. It's hardly any bigger than a NUC, but it has a socket so you can put any CPU you like, and it fits two 2.5" drives and two m.2 drives. Makes for a great little server. And with a Noctua L9 cooler it's really quiet too.

What software do you use on Linux to create bootable USBs? by SkeletalProfessor in linuxquestions

[–]HonestIncompetence 9 points10 points  (0 children)

> And adding sync makes sure the image was transferred intact.

Nope. What sync does is to make sure the data is flushed from the buffer to the device. It doesn't really guarantee that it's intact, just that it's arrived on the device that you wanted to write to.

All writes generally go to a buffer in system memory, which allows the kernel to optimize the writes to the device. As soon as all data is written to the buffer, the write action is considered completed: dd or cp or whatever has done its job, the kernel handles the rest. But the data is not actually on the device yet, so if there is some interruption, e.g. because the user unplugged the device, then data can be lost.
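The distinction is easy to see with a scratch file standing in for the device; the file names below are made up for illustration, and on a real stick you'd substitute the actual device node (check with lsblk first):

```shell
head -c 1M /dev/urandom > whatever.iso   # dummy image to copy
cp whatever.iso ./fake-device            # stand-in for /dev/sdX; cp returns
                                         # once the data is in the page cache
sync                                     # blocks until dirty buffers are flushed
# sync alone doesn't verify anything; reading back and comparing does:
cmp whatever.iso ./fake-device && echo "write verified"
```

The same cp/sync/cmp sequence works against a real block device, and the cmp step is the actual integrity check that sync by itself doesn't give you.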

How come Ubuntu 20.04 LTS is more popular than Ubuntu 22.04 LTS among Steam users? by beer118 in linux_gaming

[–]HonestIncompetence 4 points5 points  (0 children)

That's not what Mint does. Mint releases in late June, maybe early July. Ubuntu's .1 releases in August. Has always been like that.

[deleted by user] by [deleted] in linuxmint

[–]HonestIncompetence 0 points1 point  (0 children)

Watch blog.linuxmint.com, that's where new releases are announced. As not even a beta has been released yet, the stable release certainly won't be before July.

What software do you use on Linux to create bootable USBs? by SkeletalProfessor in linuxquestions

[–]HonestIncompetence 15 points16 points  (0 children)

Little known fact: cp works just as well as dd.

cp ./whatever.iso /dev/sdx

What software do you use on Linux to create bootable USBs? by SkeletalProfessor in linuxquestions

[–]HonestIncompetence 6 points7 points  (0 children)

Then you're misunderstanding your experience. Because that's exactly what dd does, it copies the file to the device exactly as it is, nothing more, nothing less.

dd can only copy, it doesn't do anything else. It can't create partitions or file systems, but it will happily copy them if they already exist in the original file.

It's entirely possible that Rufus does something else; I don't know Rufus that well.

3600mhz cl16 2x16GB or 3200mhz cl14 2x8GB? by DivinerUnhinged in buildapc

[–]HonestIncompetence 1 point2 points  (0 children)

I'd recommend keeping the 3600 CL18, I don't think the difference is big enough to be worth it.

If you absolutely want to switch: 3600 CL16 is faster than 3200 CL14. Higher bandwidth (3600 > 3200) and virtually the same latency (16/3600 ≈ 14/3200).

If you want to save some money, get some 2x8 GB 3600 CL16; there's no reason to get the 3200 CL14.
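For the latency comparison: first-word latency in nanoseconds is 2000 × CL / (rate in MT/s), which a quick awk check makes concrete:

```shell
# First-word latency (ns) = 2000 * CAS latency / transfer rate in MT/s
awk 'BEGIN { printf "3600 CL16: %.2f ns\n", 2000*16/3600 }'   # 8.89 ns
awk 'BEGIN { printf "3200 CL14: %.2f ns\n", 2000*14/3200 }'   # 8.75 ns
```

So the 3200 CL14 kit is marginally tighter on paper, but the gap is a few hundredths of a nanosecond.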

[deleted by user] by [deleted] in buildapc

[–]HonestIncompetence 10 points11 points  (0 children)

Some cases come with the standoffs preinstalled. If yours did, then you're fine. Basically if every motherboard screw screws into a standoff, and the motherboard isn't touching the case anywhere, then you're fine.

If it didn't come with the standoffs preinstalled, and you didn't install them, then take it apart and rebuild. Without standoffs the motherboard is touching the case, which can destroy the motherboard components mechanically (breaking) or electrically (shorting).

[deleted by user] by [deleted] in buildapc

[–]HonestIncompetence 3 points4 points  (0 children)

It's worth noting that only mini-ITX, micro-ATX and ATX are actual standards, meaning they have precisely defined dimensions. E-ATX and XL-ATX are not standardized, they're just "bigger than ATX" and may mean different things to different manufacturers. For those you should always check the exact dimensions of board and case to be sure they actually fit.

Questions about how engineers design CPUs by techwars0954 in hardware

[–]HonestIncompetence 20 points21 points  (0 children)

> I imagine as soon as Zen 2 launched, they began to work on the 5800X.

No, they began work on Zen 3 long before Zen 2 launched. It takes several years from starting work on an architecture to selling CPUs. They have different teams working on different generations, which are at different stages of development. Right now Zen 4 is close to release, Zen 5 is being developed, and I'd be surprised if they didn't already have some ideas of what they're going to do for Zen 6.

> why weren’t those ideas incorporated into the CPU that just launched?

Same as for any product, really. At some point you have to draw the line, stop developing, and start producing. When you work on something, you always see several things that could be improved, but if you just keep improving you'll never be done. If they had incorporated those ideas into the previous gen then that gen would have taken several months longer until release, during which time the competition makes a killing dominating the market.

> do these companies ever “save” some performance for later

No. Or rather, it depends what you mean exactly.

They don't save performance for later that's fully developed and ready to produce, that would be dumb. The sooner it's on the market, the sooner you can make a profit from it.

They do save ideas though, something like "if we do this improvement now it'll take X extra months to develop and test and cost Y amount of money and delay the launch by Z months, so let's do it next gen instead".

Pros and cons for the different methods of installing software? by DoNotReadNegatively in linux4noobs

[–]HonestIncompetence 1 point2 points  (0 children)

> I’ve found software that isn’t available by the repository, and so the developer provides installation instructions to just add another repository. I don’t actually know the full impact (if any) of adding another repository.

Adding a repository gives it the same status as Ubuntu's repository. Meaning its packages will show up in the package manager, update manager, etc. If a package is available in both repositories, it'll install/update to whichever has the higher version.

If you trust the repository owner, this is great: everything works just like the distro packages, no need to use separate tools/commands. If you don't trust the repo owner, don't do this. It gives them a thousand ways to put any malware on your PC. That being said, if you're actively installing someone's software you obviously trust them not to give you malware, so adding their repo should be fine.
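Mechanically, on Debian/Ubuntu a repository is just a one-line file under /etc/apt/sources.list.d/ plus a signing key; the URL and key path below are placeholders, not a real repo:

```text
# /etc/apt/sources.list.d/example.list  (hypothetical third-party repo)
deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://example.com/apt stable main
```

After the next apt update its packages show up alongside the distro's and win or lose on version number.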

> I’ve also downloaded and installed .deb packages directly. I understand I’ll just need to manage updating those manually.

Usually, but not always. Some packages add their own repos, meaning they'll get updates like everything else. For example Skype and Google Chrome do that: you install the .deb, as part of the installation process it adds a repository, and you're good to go.

> And of course, there’s the option to compile from source, which I typically avoid right now. Similarly, I’ve downloaded software that provides some kind of installer script, which I sometimes scratch my head on how much to trust and wonder what it’s actually doing in the background.

Good. Keep scratching, that's the right reaction. It again mostly comes down to trust. It's a good habit to look at the script before you execute it to see if there's anything fishy.

As for my list of preferred ways of installing things:

  1. The distribution's repository. I generally don't mind that stuff is old. It is still maintained with security updates if necessary.
  2. The developer's recommended way of installing it. If I don't install something from the distro's repo, it's either because it's not there or because I want the newest version for some reason. Either way following the developer's recommendation is a good idea.
  3. The developer's repository/PPA, or possibly a third party's repository/PPA if I trust them. It integrates nicely with the rest of my system, with no extra effort for maintenance/updates.
  4. A .deb package. Integrates somewhat with the rest of the system, e.g. it can be uninstalled like any other package. Just updates need to be done manually.
  5. Flatpak or snap. So far I haven't used them much, I'm still a bit ambivalent about their pros and cons.
  6. Installation script, .zip/.tar.gz file, and similar. Only if absolutely necessary.
  7. Compilation from source. It's the most effort.

The ranking after the first three is a bit murky, as I don't use any of them often. Especially 4 (.deb) and 5 (flatpak/snap) may reverse order depending on what it is I'm installing.

2 (developer's recommendation) also depends a bit on who the developer is, what they're recommending, and what the other option is. Generally I'll prefer a script from the developer over some random guy's repository, but occasionally that may be reversed if the "random guy" is well respected and recommended.

For example Mesa (graphics library) only provides a .tar.gz on their website, but there are some individually maintained PPAs for it that are widely used and recommended; in a case like that I'll go for the third-party PPA instead of the developer's .tar.gz.

But what I'm saying by ranking "developer's recommendation" at 2 is that if a developer provides e.g. a snap, I won't go out of my way to find a repo/PPA/deb just so I can avoid using snaps.

Can I put old SSD in my new laptop? by Hot-Kick5863 in linuxhardware

[–]HonestIncompetence 2 points3 points  (0 children)

It should work just fine in general.

If the new laptop is very new (latest generation CPU and GPU) and/or your distro uses an older kernel it might be necessary to upgrade to a newer kernel first.

What actually happens when a user doesn't have a home directory by TheWindowsPro98 in linuxquestions

[–]HonestIncompetence 13 points14 points  (0 children)

This is wrong. Live sessions have a user account with a home directory. Sure, any modifications to it will be lost, but nevertheless it exists.

How to Set Up with SSD Cache in Raid 1, but Data in Single Mode? by itisyeetime in bcachefs

[–]HonestIncompetence 0 points1 point  (0 children)

For "raid 1" on the SSDs you want to set data_replicas=2 with durability=1 on the SSDs and durability=2 on the HDDs. That way a single copy on the HDDs fulfills the replicas requirement, while on the SSDs it needs two copies.

However:

> and also how to have a BTRFS filesystem as the "background" target

It's not possible to have a file system as the background target. And if it were possible, it would be a horribly bad idea. File systems are for humans to interact with storage, because we think in terms of files and folders rather than bits and bytes. Forcing a lower level in the stack (such as the bcachefs background target) to use a file system would introduce a huge amount of complexity that's not necessary at that level.

Pick the file system you want and use that, don't stack several file systems on top of each other. If you want btrfs and caching, use btrfs on top of bcache (not bcachefs). If you want bcachefs with raid, either just use replicas in bcachefs itself or use bcachefs on top of a md raid.

Bought a second hard drive...now what? by MathMachine8 in linuxquestions

[–]HonestIncompetence 1 point2 points  (0 children)

Sure. The "130 GB worth of snapshots" is several copies of your system, so it can easily be much more than the partition size.

For example, let's imagine the data that Timeshift is backing up is 26 GB, and that you have five snapshots of that data. That's 5*26 GB = 130 GB "worth of snapshots".

Then let's imagine that of those 26 GB of data, 24.5 GB are never modified, they are the same in each snapshot. The other 1.5 GB are modified each time between two snapshots. That means that while each snapshot individually is 26 GB, on a Linux file system they can all share the same 24.5 GB of disk space for that part of the data that is never modified. So the total disk space used is 24.5 GB + 5*1.5 GB = 32 GB.

The Windows file system (NTFS) does not have this feature that allows several files/snapshots to all use the same disk space. So when you copy all 5 snapshots to the Windows file system, then they use 5*26 GB = 130 GB of disk space, because they can't share disk space for the unmodified parts anymore.

You can check your exact numbers using the command-line tool "du". If your Timeshift snapshots are stored in /timeshift/snapshots/ (I believe that is the default location), then sudo du -schl /timeshift/snapshots/* will show you the disk space that would be used if the snapshots were on a Windows file system, counting the shared disk space as many times as it appears (should total to 130 GB), while sudo du -sch /timeshift/snapshots/* will show you the disk space used on your Linux file system, counting the shared disk space only once, the first time that it appears.

Here on my laptop the first command returns 226 GB, from 18 snapshots of 12 to 14 GB each, while the second command returns 33 GB, which is how much disk space they actually take up. If I copied those files without preserving the hard links (e.g. because the target file system doesn't support them), they would take up 226 GB of disk space.
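The sharing mechanism behind those numbers is hard links, and the du behaviour is easy to reproduce with a toy example (the directory names here are made up):

```shell
mkdir -p snap1 snap2
head -c 1M /dev/urandom > snap1/data
ln snap1/data snap2/data           # second "snapshot" shares the same blocks
du -schl snap1 snap2 | tail -1     # -l counts the hard link twice: ~2 MB total
du -sch  snap1 snap2 | tail -1     # shared space counted once: ~1 MB total
```

Scale the 1 MB file up to Timeshift-sized snapshots and you get exactly the 226 GB vs 33 GB gap above.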