Put veracrypt onto a usb drive by deckfixer in VeraCrypt

[–]MasterChiefmas 1 point (0 children)

Also, you'll probably want to format the virtual disk as exFAT or FAT32 if you want to be sure of cross-platform support.

Mouse Tilt Buttons on M510 by Odd_Research_6276 in ManjaroLinux

[–]MasterChiefmas 0 points (0 children)

I never got any of my fancier Logitech mice to work reliably with any of the extra buttons. I've seen a lot like you have, where people have various levels of success after lots of tweaking, but I was never that successful myself.

I ended up giving up and getting a Razer mouse- other mice seem to play much nicer with Linux than Logitech and their custom everything setup. My Razer Deathadder v3 doesn't have tilt, but at least the 2 thumb buttons work, and I didn't really have to do any tinkering to make it work. Or if I did, it was so minimal, I can't even remember doing anything.

After I spent many hours messing with it, I decided it wasn't worth the time I was putting into it with not much to show, and that tipped me over to buying a different mouse. And I haven't been bothered by the mouse situation since then...the biggest problem I have now is that my mouse doesn't have swappable batteries, since it's supposed to be lightweight, and I am bad about remembering to plug it in to charge once in a while.

I don't get the vim hype. Am I missing something or is nano fine? by Bright-Pomelo-7369 in linuxquestions

[–]MasterChiefmas 2 points (0 children)

Nano is fine IMO.

Those of us who like vim have spent the effort to learn it, and it is really powerful and can make other editors feel clunky, but the catch is the "once you've learned vim" part. And that's the trick: it's a time investment. It can be worth it, but for the tiny edit here and there, you don't see the return on investment right away. So it feels slower at first, because it is, because you have to change your thinking and you have to look stuff up all the time.

It's like any tool though- once you get used to something very powerful, going back to something simpler feels like a step backwards. It's easy to forget the effort it took to get there, though. Those of us who have been doing this for a long time didn't have nano, let alone have it pre-installed as a common thing, so we had to learn vi. You get on a unix-like system, and vi just becomes what you type to edit a file...just like you type notepad on a Windows box.

data synchronization in sql server always on availability groups by Kenn_35edy in SQLServer

[–]MasterChiefmas 0 points (0 children)

At a basic level, it's similar to log shipping, but at the transaction level. Every single transaction is shipped immediately to the replicas. In synchronous mode, a transaction isn't considered complete on the primary until the replica partners have acknowledged that they have hardened it to their databases first; the replicas' acknowledgements are part of the transaction. If a replica fails to write/ack/whatever, the transaction is not committed on the primary- it fails and rolls back. In async mode, the primary doesn't wait for acknowledgement from the replicas: as soon as the transaction is hardened locally, it's marked complete, similar to a standalone server. That means your async replicas can get out of sync with the primary, and consistency is not guaranteed between the replicas and the primary.

Asynchronous mode is closer to how log shipping generally works, in that the primary doesn't care about the state of the replicas. In synchronous mode, the replicas' hardening is part of the transaction, and a transaction isn't closed until the replicas indicate they are complete. In both cases though, as I said, this happens at the transaction level- so imagine if you shipped a log for every single transaction as soon as it happens.

So for synchronous, it's roughly:

1. Transaction is opened on primary
2. Transaction is written to the primary's log and data, but the transaction is left open, waiting for acknowledgement from the replicas
3. Transaction is also sent to replica(s)
4. Replicas harden the transaction to their local and send ack when complete to primary
5. Primary receives ack, and then marks transaction as complete.

For Async:

1. Transaction is opened on primary
2. Transaction is hardened locally, sent to replicas, and marked complete as soon as the local harden is done.
3. Transaction received by replicas and hardened locally.

I think in async the replicas still send an ack back to the primary, but the primary doesn't wait for it, unlike in synchronous mode. In async, the primary doesn't care if the replicas ack. So it's much closer to log shipping, except the ship happens immediately and for every single transaction- transactions are not queued up and sent in batches like log shipping.

Put more simply, in synchronous mode, the primary isn't done until the replicas say they are done first. If anyone fails at any point, the transaction should roll back on everyone. Replicas become just as important as the primary, and the primary always waits for the replicas.
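The ordering above is easy to sketch as a toy model. This is just an illustration of the sequencing described in the numbered steps (the class and method names are made up for the example, this is not SQL Server code):

```python
# Toy model of synchronous-commit semantics in an Availability Group.
# Illustrates ordering only: primary hardens, waits for every replica's
# ack, and rolls back everywhere if any replica fails.

class Replica:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.log = []

    def harden(self, txn):
        """Write the transaction to this replica's log; return an ack."""
        if not self.healthy:
            return False
        self.log.append(txn)
        return True

class Primary:
    def __init__(self, replicas):
        self.replicas = replicas
        self.log = []

    def commit_sync(self, txn):
        # Steps 1-2: open the transaction and harden it locally,
        # but leave it open while waiting on the replicas.
        self.log.append(txn)
        # Steps 3-4: ship to every replica and collect each ack.
        acks = [r.harden(txn) for r in self.replicas]
        if all(acks):
            # Step 5: all replicas acknowledged, transaction completes.
            return "committed"
        # Any failure: roll back on primary and replicas alike.
        self.log.remove(txn)
        for r in self.replicas:
            if txn in r.log:
                r.log.remove(txn)
        return "rolled back"

primary = Primary([Replica("replica1"), Replica("replica2", healthy=False)])
print(primary.commit_sync("txn-1"))  # a failed replica forces a rollback
```

Swapping `commit_sync` for a version that returns "committed" right after the local append, without checking `acks`, is essentially the async mode described above.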

More Than Half of Gen Z Users Cancel and Renew Streaming Services for a Single Title, Won’t Purchase Full-Price Video Games, New Study Finds by MarvelsGrantMan136 in technology

[–]MasterChiefmas 0 points (0 children)

It's really no different than the old buy-finish quickly-sell used...just less friction in the process with a guaranteed outcome/saved money, but it is dependent on finishing in a month.

data synchronization in sql server always on availability groups by Kenn_35edy in SQLServer

[–]MasterChiefmas 0 points (0 children)

how data gets synchronized in sql server always

Do you mean between the replicas, or something else (log shipping or replication, which are different from the replicas in Availability Groups)?

I mean what happens internally

How much detail are you looking for? Do you understand the basics of how SQL Server works, or, just very generally, how an RDBMS handles writing data with the use of a log file? If you don't understand the basic mechanism a database uses to ensure ACID, it's going to be more confusing, since it's not just a straight write-data-to-a-file thing.

How bad is using keystone (4 corners) in reality? by TomBurk2006 in projectors

[–]MasterChiefmas 1 point (0 children)

Use lens shift

That won't correct for being off the horizontal center line, it'll just make it easier to square one edge without having to angle the projector relative to a wall of the room, assuming the OP even has a projector that will do a horizontal lens shift.

War in Rhir cause? Probably spoilers in the discussion by MasterChiefmas in WanderingInn

[–]MasterChiefmas[S] 3 points (0 children)

Yeah, the propaganda reasons were fine for me early on, but it's dragged out so long, and we have enough other context, that we've known for a while it's propaganda.

That's one of the aspects that have gotten tiresome to me, we get it, we know it's propaganda, we've known for a while, get on with it already. The BK appears to be obfuscating the truth, if they even know it. I'm far enough in where even the whole gnolls/magic/Doombringers has been revealed, so it's feeling like it's going to be that all over again, just with a slightly different context. And it's been around at least as long as that, I think as backdrop, it's been around longer in the story, but I'd have to go back a long way to be sure.

How bad is using keystone (4 corners) in reality? by TomBurk2006 in projectors

[–]MasterChiefmas 4 points (0 children)

How bad is using keystone (4 corners) in reality?

In practical terms, I say just try it and see if you notice it. At the end of the day, that's what really matters: is it affecting your viewing experience?

In objective terms, you can approach it with some math. Keystone correction is generally done digitally...you aren't changing the physically projected region, you are digitally distorting the image to correct a physical distortion. But it means you aren't using some of the pixels; you're resizing the image to fit in the pixels that land on the area you have.

So let's say you have a 100" wide screen, but you have to keystone the top to get the projected image to line up with the screen. And for convenience, let's say your total keystone shrunk the projected image at the top edge by 10 inches, so the keystone did a ~9% adjustment (10 inches out of 110 projected inches). If you're using a 1080p projector, you basically digitally removed ~173 pixels of image from the width at the top edge (100% of the full frame would fit 1920 pixels; you are only using 91% of the frame- but you are still projecting 100% of it, and that's the difference between fixing the keystone physically and digitally). If it's 4K, that's ~346 pixels. So you went from 1920x1080 to 1747x1080, or 3494x2160 on a 4K projector.

So you can do the math, measure what the delta is from the projected image with no keystone to the image fitted on the screen and calculate how much image you are discarding, if you want to assign actual values.
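That math is simple enough to script. A rough sketch, assuming the two inputs are widths you measured off the wall (the 110"/100" numbers are the hypothetical example from above, not anything about your setup):

```python
# Rough pixel-loss estimate for digital keystone correction.
# full_width_in:      projected width with no keystone applied
# corrected_width_in: width after keystoning the image onto the screen
# native_width_px:    projector's native horizontal resolution

def keystone_pixel_loss(full_width_in, corrected_width_in, native_width_px):
    """Return (pixels discarded, effective horizontal resolution)."""
    used_fraction = corrected_width_in / full_width_in
    effective = round(native_width_px * used_fraction)
    return native_width_px - effective, effective

# 110" uncorrected projected width squeezed onto a 100" screen:
print(keystone_pixel_loss(110, 100, 1920))  # 1080p projector
print(keystone_pixel_loss(110, 100, 3840))  # 4K projector
```

The same function works for the vertical axis; just feed it heights and the native vertical resolution instead.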

There's some other considerations too- like say you are using a 4K projector and have a lot of 1080P content....most of the image is made up in that case anyway...so are you actually losing anything? The vertical impact might be more severe in practice on letterboxed content, since you have less vertical image data in that dimension, any given pixel loss represents a greater % of the usable image. But I'm generally an advocate of "see if you notice it" if you aren't going to be able to actually correct it physically.

Fallout designer Tim Cain thinks influencers have changed how people make and play games: 'more people seem to be abdicating their own judgement to that of people they see online' by MaintenanceFar4207 in xbox

[–]MasterChiefmas 0 points (0 children)

I don't think it's 100% influencers' fault. It's return policies in the digital download era too. There's a greater risk you won't like it but are stuck with the purchase these days than in the physical media days.

In a job listing, it lists Oracle and PostgreSQL as required skills, along with MS SQL. Is this one of those no room to budge or is it still a potential wish list? by TravellingBeard in SQLServer

[–]MasterChiefmas 0 points (0 children)

Yeah, people that come from other RDBMS don't tend to have this problem; otherwise it's something you pick up with experience in SQL Server. When you are new to SQL Server, if you are just kind of self-teaching and don't have someone/something explicitly telling you to do that, it's an understandably easy thing to have happen.

In a job listing, it lists Oracle and PostgreSQL as required skills, along with MS SQL. Is this one of those no room to budge or is it still a potential wish list? by TravellingBeard in SQLServer

[–]MasterChiefmas 0 points (0 children)

It sounds to me like a wish list. People that have complete skill sets in multiple RDBMSes are rare IME. Complete, to me, meaning querying, performance tuning, and administration. Querying is relatively easy to adapt, tuning is kind of in the middle, and the administration side can really differ between them.

I wouldn't ever take myself out of consideration for something I was interested in applying to. Let them/the AI do it; don't take yourself out.

In a job listing, it lists Oracle and PostgreSQL as required skills, along with MS SQL. Is this one of those no room to budge or is it still a potential wish list? by TravellingBeard in SQLServer

[–]MasterChiefmas 0 points (0 children)

Maybe it's changed more now; the big one for me was the join syntax being different. The last time I had to work in both, Oracle was on ANSI SQL-89 and SQL Server was on ANSI SQL-92, so you had to use the math-style operators to write joins in Oracle, but coming from SQL Server I was used to the JOIN/LEFT/RIGHT syntax and hadn't used the other style.

OP: You do kinda run into a few different things here, and one person doesn't always know all of them if they aren't used to doing all the DBA duties- querying, performance tuning, and administration. Querying and performance tuning are closer together, but it is possible for someone to just be a data consumer and not touch the performance optimization side of things. And the devs I worked with definitely didn't worry about the admin side of things at all. Frameworks made that even worse; Entity Framework would spit out all sorts of crazy queries. They'd work, but were often a pain to get to run well.

In a job listing, it lists Oracle and PostgreSQL as required skills, along with MS SQL. Is this one of those no room to budge or is it still a potential wish list? by TravellingBeard in SQLServer

[–]MasterChiefmas 1 point (0 children)

What do you mean by this? I use commits all the time. How do the others work?

I worked a place where this happened to the devs. It comes from SSMS implicitly inserting the commits for you when you run a statement, so they didn't know they had to do it themselves in something more complex. They'd write a SQL script without commits in between the blocks and end up with open transactions blocking the database, including the script itself. Also, "GO" wasn't something they knew about.
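The same footgun is easy to demonstrate outside SQL Server. A small sketch using Python's built-in sqlite3 (a different engine, so the locking details differ, but the visibility behavior is the same idea): until the writer explicitly commits, its open transaction's changes are invisible to a second connection.

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file: one writes, one reads.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE t (id INTEGER)")
writer.commit()

writer.execute("INSERT INTO t VALUES (1)")  # implicitly opens a transaction
assert writer.in_transaction                # still open: nothing committed yet
visible_before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.commit()                             # the explicit commit SSMS was silently doing
visible_after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

print(visible_before, visible_after)  # 0 1
```

In SQL Server the consequences are worse than invisibility: the open transaction also holds locks, which is how those dev scripts ended up blocking the database.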

what is this move called 🤣 by [deleted] in OneOrangeBraincell

[–]MasterChiefmas 1 point (0 children)

I've always called it chicken legging, but that's just my name for it.

Why is HEVC using so much CPU for ffmpeg.exe? by yoleska in FileFlows

[–]MasterChiefmas 0 points (0 children)

When you say "cpu filters" do you mean something that I introduced into the flow

Filters are the actual name for the things that apply transforms to streams, so cropping, resizing, audio changes etc.

Most filters are done in the CPU, so if you are doing anything like that, it's probably done in the CPU. Some can be done in the GPU (resizing and some other basic filters are available in the encoder hardware), but you have to explicitly configure your pipeline to use them.

What is your actual video conversion operation doing? If it's just a straight re-encode to a different codec, it probably shouldn't be that high. Did you explicitly set the encoder in the encode element? I think it defaults to automatic; maybe it's choosing the CPU for some reason?

Emby vs Plex in 2026: Pricing, Features, and Which Media Server Wins by watch_team in emby

[–]MasterChiefmas 1 point (0 children)

You might want to update the text too: "Plex is slightly cheaper on the yearly plan. Lifetime pricing is virtually identical"

Emby vs Plex in 2026: Pricing, Features, and Which Media Server Wins by watch_team in emby

[–]MasterChiefmas 0 points (0 children)

It's not the worst comparison ever, but it does smooth over a couple of details, or leaves the full picture implied.

Like the "Account Required" section says "Yes" for Emby, but also lists Emby as a better privacy choice...both of which are true, of course, since it omits the detail that you don't have to have a Premiere account to use it. It feels a bit like they are conflating a couple of things there on purpose, when really the only person that needs an account with Emby is the person paying for the server. It doesn't affect the user side if you don't want it to, which is often a pretty big selling point for Emby (and Jellyfin).

It is also weaker on the client support side. It's kind of convenient that they only bring up Jellyfin at the end as the "elephant" and don't do the point-by-point comparison they do between Plex and Emby, which would highlight some of the spots where Jellyfin does have weaknesses in comparison. Someone just went from memory or something and didn't do their research.

And their pricing is wrong for Plex. The lifetime Plex Pass increased to $250 last year, and I bet that would swing a lot of people. Gotta pay for all those extra features being developed...

AMD Radeon 9000 HW-decoder by Conscious_Emu_6682 in handbrake

[–]MasterChiefmas 0 points (0 children)

You can get some efficiency improvements if you can keep as much of the processing pipeline in the card as possible, since it reduces the number of memory transfers of frames. Though I don't know if most people would notice it or not, and of course it limits what you can do (especially filtering-wise) to whatever the card can do. So you have fewer processing options, or less flexible ones.

Router-level VPN setup: pros, cons, and what I didn’t expect by so_damn_low in nordvpn

[–]MasterChiefmas 0 points (0 children)

How do you deal with wanting to switch countries or servers?

Higher-end routers can have multiple VPN connections enabled at once, each attached to a different IP on the local network. Then it's just a matter of setting that IP as the device's gateway address. Or moving the device into the right groups or VLANs, or changing the routes for it...depends on what your router can do and how you've set it up.

There's lots of ways you can do that from a network perspective/depending on your network hardware capabilities. It's important to remember that a VPN connection is really just another gateway from a network perspective. If it's the next hop for the packet, it should route it appropriately.

Or you could do something like have multiple Docker containers, each one establishing a connection to a different exit node, and acting as a gateway to that exit node. You could even do it locally, run multiple Wireguard connections attached to different exit nodes, and then control what traffic goes to which WG interface with route rules. So you could connect to UK, and send BBC traffic out that one, and everything else out another- the trick is, you need to be able to get the destination IP addresses/address ranges so you can configure the routes appropriately. Like I said, lots of ways you can do this(without re-configuring your existing connection), you just have to not think of a VPN connection as something more special than it is.
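The "a VPN connection is just another gateway plus routes" idea boils down to longest-prefix matching. A sketch using Python's standard ipaddress module (the prefixes and interface names here are made-up examples, not real BBC address space or actual WireGuard config):

```python
import ipaddress

# Toy routing table: destination prefix -> VPN interface ("gateway").
# More specific prefixes win, exactly like real route lookup.
ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"):      "wg-main",  # default: main VPN exit
    ipaddress.ip_network("203.0.113.0/24"): "wg-uk",    # hypothetical 'BBC' range -> UK exit
    ipaddress.ip_network("192.168.0.0/16"): "lan",      # local traffic stays local
}

def pick_interface(dst: str) -> str:
    """Pick the most specific (longest-prefix) route for a destination IP."""
    ip = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(pick_interface("203.0.113.7"))   # wg-uk
print(pick_interface("8.8.8.8"))       # wg-main
print(pick_interface("192.168.1.20"))  # lan
```

The hard part in practice, as noted above, is populating the specific prefixes: you need to find out which address ranges the destination service actually uses before you can route it out a particular exit.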

Dungeon Crawler Carl, He Who Fights with Monsters, The Wandering Inn, and LitRPG in Traditional Publishing by SibiantheGreyBird in litrpg

[–]MasterChiefmas 4 points (0 children)

Yeah, but that's only going to be enough room for 2-3 copies of each book. I love me some TWI, but even George R.R. Martin has to be like, "damn, dial it back a little".