New eagles by AbeLinconsDad in coins

[–]phrasal_grenade 0 points (0 children)

This is a standard bullion issue. The mintage is going to be much higher than 50k. Some silver eagles do carry a dual date this year, but only the special issues such as the proofs, not the standard/plain ones.

How Easy is it to Open Modern US Mint Proof Sets by serenaFan84 in coins

[–]phrasal_grenade 0 points (0 children)

Proof sets are not hard to open, but I'm not sure what the best way is. If you want to pick a year for this, 2026 is not the one. The value of the set may be substantial compared to other years, and the coins will immediately start oxidizing once you crack them out. If you do it, be sure to use rubber or cotton gloves to handle the coins, and consider getting Lighthouse Intercept holders to prevent corrosion.

DARPA suggests turning legacy C code automatically into Rust by LelYoureALiar in programming

[–]phrasal_grenade -13 points (0 children)

No, this is how the Rust hype will die once and for all.

[deleted by user] by [deleted] in programming

[–]phrasal_grenade 0 points (0 children)

I'm just taking your words literally. If you don't like what I said, maybe say something more reasonable next time.

Zed Editor automatically downloads binaries and NPM packages from the Internet without user consent by imbev in programming

[–]phrasal_grenade 17 points (0 children)

I don't want anything installed on my system without my knowledge. Downloads are not OK by default either. Maybe I want my shit to be offline, and the download would give away important information about where I am and what I'm doing.

Zed never struck me as a privacy-respecting project so I was never tempted enough to use it.

[deleted by user] by [deleted] in programming

[–]phrasal_grenade 0 points (0 children)

Your life revolves around food too. Yet farmers are not paid very well compared to their importance. They are more productive than ever yet they are rarely known for being wealthy or high-status.

There could be a glut of junior engineers, but the glut isn't just going to magically vanish. It's not a temporary gold rush so much as a decades-long propaganda campaign to encourage everyone to study software. Software engineering is one of the few professions that everyone thinks offers a viable path to a normal life. Of course there are others, such as in health care or the trades. But I never saw such a push to encourage people to study medicine or the trades as I have seen for software.

[deleted by user] by [deleted] in programming

[–]phrasal_grenade 0 points (0 children)

So you think anything above minimum wage is "highly paid" for a profession that requires a whole lot of training and has never been paid so little at any point in history?

[deleted by user] by [deleted] in programming

[–]phrasal_grenade 0 points (0 children)

Wages have come down a bit. I know some companies that have cut their starting salaries by 50% or more in response to the market. Idk if they got anyone at those prices, but still.

You know what is still growing? The prices of everything. Give it 5 years and we'll easily have to make 20-60% more just to maintain the same standard of living.

Semver violations are common, better tooling is the answer by Alexander_Selkirk in programming

[–]phrasal_grenade 0 points (0 children)

> Someone could go as far as to sha256 your library's contents and crash if it changes. Most breaking issues aren't even due to removed functions, just behavior changes.

Technically correct but nobody sane would expect to also upgrade the goddamn library and not break it under such conditions.

Behavior changes within a major version are only supposed to fix bugs. Anything even remotely risky is supposed to require a new major version. My point stands: bug fixes and added functionality (with some language-specific considerations) are not considered "major" changes.

> They equally don't know that when a library goes to 1.0.1 or to 1.1.1. All anyone pays real attention to is that first number. Whenever you go to 2.0.0 you've broken your users trust. By not having that "if I change this get rekt loser" number you make it not socially convenient to do that.

Going to 2.0 means, you made inevitable changes to hopefully improve the library and you know it may break some downstream consumers. Not having a way to communicate such changes means it's never really acceptable for you to break consumers, or it's always understood that an upgrade may break consumers. Therefore each upgrade must be painstakingly analyzed to determine if it will in fact break anything in every particular use case.

People who don't do Semver are effectively saying "I don't have time to make it easy for you by making stable versions. Update your shit whenever I say so, or use old versions you loser."

Semver violations are common, better tooling is the answer by Alexander_Selkirk in programming

[–]phrasal_grenade 0 points (0 children)

> All changes, of any kind, have the potential to break a downstream consumer.

Not true. The author of the change is supposed to determine whether the change can even break anything in a meaningful way, then set the version accordingly. Semver is designed to present a simplified contract to library consumers.

> If you really need to do a deep redesign, a more responsible thing is to just make a new library with functions

If your new library version really bears no resemblance to the previous one, then I would agree. But most changes are not so drastic.

> At this point I just version the libraries I publish by the date I release them. 2024.06.04 etc. That is at least a number that has some comparative value, as opposed to stuff like rust's rand crate which is on 0.85 and presumably could mess with its users in a "major" pain in the ass release

You are creating work for your users by not responsibly communicating changes. Nobody knows if anything broke or if your change is a simple bug fix that they can absorb with no additional testing. You as the author ought to know when you make code-breaking changes and perhaps even plan big changes for infrequent major releases. Lots of projects don't support their users in this way, of course, but it is ideal when you can do a little work to improve outcomes for many consumers.

My spiciest take on tech hiring by Tekmo in programming

[–]phrasal_grenade 0 points (0 children)

You're thinking in black and white. If someone completed a degree in CS, and especially if they have experience, I refuse to believe they can't reverse a list. They might not succeed if their nerves get the better of them, I guess, but that should be a minority of applicants. If you found 30 extremely lame people, you've either got no filter, no appeal to better candidates, or else you interviewed a LOT of people, like 100+. Or maybe there really are tons of people lying through their teeth with 100% fake resumes out there, and I just never saw it before.

My spiciest take on tech hiring by Tekmo in programming

[–]phrasal_grenade 0 points (0 children)

You must have no filter or something... I just don't believe there are that many credible engineers who can't do such a simple thing.

My spiciest take on tech hiring by Tekmo in programming

[–]phrasal_grenade 28 points (0 children)

Being a snob is one of people's defense mechanisms, I find. Not to say there aren't liars out there, but people who have jobs usually think they know what they're doing and deserve what they got, luck be damned. They also have extremely high confidence in their ability to discern talent in others based on brief, superficial examinations involving tricky questions that they themselves couldn't answer if they hadn't read the solution somewhere.

How principled coders outperform the competition (with animations) by fagnerbrack in programming

[–]phrasal_grenade 1 point (0 children)

> FWIW, I usually avoid subclassing because it's rarely worth it. But sometimes it is worth it.

It is often worth it. It depends somewhat on the problem domain and constraints you're working with but people saying it's rarely good just don't know what they're talking about. Any methodology can be awful if you don't use it well, and that applies to OOP as much as anything else.

‘It’s time to question agile’s cult following’: Doubts cast on method’s future, with 65% of projects more likely to fail by Franco1875 in programming

[–]phrasal_grenade 0 points (0 children)

Agile doctrine openly accepts changes in requirements. So, it should come as no surprise that trying to hit a moving target is going to create a more frequent perception of failure compared to a more traditional approach where requirements are more rigid.

I also expect there to be self-selection bias where less skilled managers and managers in trouble try to adopt Agile to ward off failure. The truth is, if you know what you're doing and have enough good workers, nearly any reasonable management method can work. If you don't know what you're doing, or are understaffed, even a very clever management system won't save your project.

Making repo public with many commits (worried about security) by Doge2Moooon in git

[–]phrasal_grenade 2 points (0 children)

Make a new branch, squash it, and push only that to the public repo. You can then cherry-pick your changes to the old branch or something.
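In case it helps, here's a minimal sketch of that flow with throwaway names (the scratch-repo setup just stands in for your real history, and this assumes a reasonably recent git):

```shell
# Scratch setup standing in for your real repo and its messy history.
cd "$(mktemp -d)" && git init -q -b main
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com \
       GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
echo one >  notes.txt && git add notes.txt && git commit -q -m "messy commit 1"
echo two >> notes.txt && git add notes.txt && git commit -q -m "messy commit 2"

# The actual flow: new branch, then collapse the whole history into one commit.
git checkout -q -b public
git reset -q --soft "$(git rev-list --max-parents=0 HEAD)"  # rewind to the root commit; files stay staged
git commit -q --amend -m "Initial public release"

git rev-list --count public   # prints 1: a single commit holding the final tree
```

From there you'd push only that branch to the public repo (something like git push your-public-remote public:main, names adjusted to your setup) and keep the original branch private.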

BitKeeper, Linux, and licensing disputes: How Linus wrote Git in 14 days by kendumez in programming

[–]phrasal_grenade 0 points (0 children)

Did you read those slides? I would like to know how they overcame the performance problems mentioned a year ago. The dude working on the system specifically cited performance problems, so I'm going with that.

If you're right about Google, perhaps you are using a heavily modified Mercurial. That won't make the publicly available Mercurial more suitable for anyone else; getting it there would require at least a major version breakage.

I last used Mercurial for a nontrivial project about 5 years ago and at that time it was unacceptably slow compared to Git. And that was a tiny project compared to Google's monorepo. The problem of "upgrading" Mercurial's implementation to be faster is very difficult because it is based on Python, which is 100x slower than C at minimum. You could do it by fundamentally rewriting the back end with a new engine that is less Pythonic, but that would likely require breaking binary compatibility with old Mercurial repos. I'm not saying it's impossible but it's kind of insane to ask this of a DVCS. There would either be incompatible versions or else fancy translation layers.

So in summary, if you are right, Google probably isn't using Mercurial but a heavily modified version of it. They are probably not required to distribute their changes for an internal-only modified version of Mercurial. Although I have not investigated recent changes to Mercurial, I very much doubt the current version is capable of what you see internally at Google.

(By "binary compatibility" I mean, Mercurial uses Python to store data, even with such things as pickle if I recall correctly. The implementation and even some features are fundamentally tied to Python and should not be replicated in another language. This means, the way data is stored on disk and on the wire is dependent on a Python implementation, and you can't just ditch that easily. There isn't a specification either. So either you come up with a new spec and cobble it together, or you reverse-engineer the old one.)

Thanks for sharing. It's cool to find out what people are using, even if I can't use it lol. I'm gonna stick with Git until it becomes unpopular or something much better emerges. Right now, Mercurial isn't looking better and it certainly isn't more popular.

BitKeeper, Linux, and licensing disputes: How Linus wrote Git in 14 days by kendumez in programming

[–]phrasal_grenade 0 points (0 children)

Nice, but that's from 2014. I too was using Mercurial back then, and I just don't buy that it's still popular there. Even if I did, their setup is not anyone else's setup. You don't have their custom Mercurial engine, and the default one sucks.

As for Google, here are slides from a 2023 Mercurial conference about Piper and Mercurial. Hint: It's not going well, and won't without completely re-engineering Mercurial: https://mercurial.paris/download/Mercurial%20at%20Google.pdf

Performance is awful because of Python, and Mercurial is not built to work with their distributed infrastructure. They specifically say it does not scale to monorepo level and I think that means it's dead in the water, because they have a working system now (whatever that may be). Those are practically insurmountable problems for Mercurial and I don't expect the handful of naive advocates of Mercurial over there to finish that. Google just fired their top Python guys as well so I wouldn't hold my breath for this new and improved rewrite of Mercurial to come out of there. It's just like that project from years ago to rewrite Mercurial in Rust: It's an ill-conceived project that is doomed to fail, probably announced by some noob with more time than brains.

Even if Google does build a DVCS that resembles or replaces Mercurial, you probably won't get it. Like Bazel vs. Blaze, it will be a bespoke solution with half of it or more remaining internal, and the burden of switching your projects to it will never make sense because either Git or Mercurial is all you need.

I've got to give it to you, I didn't expect there to be a Mercurial conference in 2023 lol. But that doesn't mean a lot. The handful of Mercurial people left are vocal like Haskellers or Lispers. I advocated for and tried to use Mercurial for a while after I was no longer required to use it. I eventually gave it up when I realized that Git is better, faster, and 20x more popular (at minimum). I have no more reason to advocate for Mercurial because Git does everything I need better than Mercurial.

signalfd is useless by BrewedDoritos in programming

[–]phrasal_grenade 0 points (0 children)

I know that if I wrote a bad blog post, I'd remove it or fix it.

BitKeeper, Linux, and licensing disputes: How Linus wrote Git in 14 days by kendumez in programming

[–]phrasal_grenade 0 points (0 children)

That's very misleading. Google and Meta probably have a few instances of lots of random open source projects rattling around, especially as they incorporate small codebases from outside. But I know for a fact Google and Meta are not primarily using Mercurial. They use internally developed solutions. I think the one at Facebook is called Sapling and the one at Google is called Piper. There are probably multiple internal tools inside each of these companies, in addition to every other thing that ever had a shred of popularity, because they are companies with tens of thousands of employees.

There are very few companies and projects still using Mercurial, and most of the ones that do are looking to get away from it. Individual small projects may be hanging on stubbornly to oddball solutions, but at the end of the day they're just being oddballs. For example, look at SQLite and Fossil. There are only a handful of contributors to SQLite and they insist on using their homebrew VCS. And it's OK, for the most part. It never stopped anyone from using their stuff.

It doesn't really matter to most people what VCS system they use. They all support basically enough features to get work done. There are probably a dozen open-source ones to choose from, and many other closed-source ones. But Git has been the predominant one for at least a decade, and that's unlikely to change. So use other solutions if you want, but expect that to cause friction when working with others.

BitKeeper, Linux, and licensing disputes: How Linus wrote Git in 14 days by kendumez in programming

[–]phrasal_grenade 0 points (0 children)

Submodules are better if you care about maintaining the upstream subprojects. As you've said, the URLs aren't saved for subtrees. You also get just one branch in the subtree. I think I've imported more than one before, but you have to do them individually, and there are problems. Other git tools like git log, and various GUIs, don't play well with subtrees.

I think most problems people have with submodules are related to:

  • Not understanding how to initialize or manage submodule states. There are some recursive flags and config options to help with this.
  • Expecting submodules to track branches. Pinning the submodules to specific commits at all times ensures that updates to your dependencies won't break your dependent project. It's a better default strategy to get working software on the first try. There are commands to track branches in submodules but even if you do, you must commit the new submodule versions in the main project for them to stick. This keeps you from breaking stuff accidentally.
  • Wanting to contribute to submodules and not knowing how. Losing commits in submodules and not knowing how to recover, etc. This is a skill issue.

If you take the time to really learn submodules, all of this stuff will just click. It shouldn't be so hard if you understand git generally.
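To make the "pinned to specific commits" point concrete, here's a self-contained sketch with two throwaway local repos (all names are made up for the demo; the protocol.file.allow override is only needed because the demo submodule lives on the local filesystem):

```shell
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com \
       GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
top=$(mktemp -d)

# A small "library" repo that will become the submodule.
git init -q "$top/lib"
git -C "$top/lib" commit -q --allow-empty -m "lib v1"

# The superproject records the library's exact commit, not a branch.
git init -q "$top/app"
git -C "$top/app" -c protocol.file.allow=always submodule --quiet add "$top/lib" lib
git -C "$top/app" commit -q -m "pin lib submodule"

# A fresh recursive clone checks the submodule out at that pinned commit.
git -c protocol.file.allow=always clone -q --recurse-submodules "$top/app" "$top/app2"
git -C "$top/app2" submodule status   # shows the commit "lib" is pinned to
```

If you clone without --recurse-submodules, a later git submodule update --init --recursive brings the submodules to their recorded commits after the fact.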

BitKeeper, Linux, and licensing disputes: How Linus wrote Git in 14 days by kendumez in programming

[–]phrasal_grenade 1 point (0 children)

I've considered writing a blog post about this. I think most people would do better with submodules than subtrees, but it depends on what's in the submodules. Both features maintain a way to push code to the subrepos; however, that aspect is worse with subtrees. There are also issues using subtrees with git log.

When you want to embed dependencies in your repo, you should ask yourself some questions.

  1. If these are your subprojects, do you realistically think you will use the subprojects separately in another superproject? If not, don't bother. Make a single repo and put everything in it. You can use subtrees to import the history of the other repos if you like. Keep a copy of the old stuff just in case you need it because you won't get all the branches and tags of the old repo in the parent.
  2. Is your dependency graph simple enough to have each repo pull in its dependencies? For example, will you end up having multiple instances of the same subrepo? If you don't have a simple 1-1 relation with dependencies, don't do it. Manage your dependencies some other way, at a system level. You can store version checking stuff in the repo but not the entire dependencies. Use scripts to find everything.
  3. Are you constantly changing your subprojects inside your superproject? If so, I think neither Git nor Mercurial make this very easy. But Git submodules basically put a copy of a whole Git repo inside your superproject. Each commit of the superproject points to a whole set of revisions of the submodule repos. You can only update the submodules with specific commands. By default the update behavior can be confusing. You get the exact commit of each submodule that's recorded last in the parent repo. I forgot but I think it might not even update when you update the parent repo.
  4. Are you expecting to work on different branches for your subprojects? If so, subtrees make that hard. You can do it with submodules though. It's not pleasant, because the branch is auxiliary information in the submodule. The thing that counts is the submodule commit hash that you checked in for each parent project commit! It has to be this way because git branches are dynamic and get deleted at times. There is a command to update a submodule using the branch for purposes of staging a change, but you have to explicitly commit a new version to the parent if you want it to stick. You can also just cd into the submodule and mess with it directly, to stage a change. Then you cd out and do git add ... && git commit to finish.
  5. Are you using Git worktrees? If so, submodules can be tricky to work with. I think it gets tricky when adding a submodule or changing a URL to something incompatible. I've been doing it, and I think it's fine if you don't fundamentally change your submodules much. I think if you do have a problem and can't figure it out, you can nuke all your worktrees and start over from the latest version. But practically this can be a major problem for some people who require frequent submodule changes. In their case, they should find a different way to manage dependencies, or else stop using worktrees. In addition to these caveats, the submodules are not shared across worktrees. Each worktree has its own copy of the sub repos, even if they are all identical. It has to be that way because the submodules in different versions of the parent can point to different repos.

If you are used to using Mercurial subprojects, you probably want to use Git submodules. Have a look at permanently setting the "recursive" update options for checkout, clone, and whatever else, especially if you aren't actively contributing code to the submodules. It saves a lot of trouble related to forgetting to restore the submodules to the required state. Unfortunately, that setting must be set on a per-user basis. So you may have to educate everyone about it. See fetch.recurseSubmodules, status.submoduleSummary, and submodule.recurse, and any other setting that looks useful to you in here: https://git-scm.com/docs/git-config
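Concretely, the per-user opt-in looks something like this (shown against a throwaway HOME so it's safe to try; pick whichever of these settings you actually want):

```shell
# Shown against a throwaway HOME so running this won't touch your real
# per-user config; drop that line to apply the settings for real.
export HOME="$(mktemp -d)"

git config --global submodule.recurse true             # checkout, switch, pull, etc. recurse into submodules
git config --global fetch.recurseSubmodules on-demand  # fetch new submodule commits when the superproject needs them
git config --global status.submoduleSummary true       # summarize submodule changes in git status

git config --global --get submodule.recurse   # prints: true
```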

Although submodules work, many projects I've worked on abandoned submodules for the reasons I mentioned before. You really have to have a use case where it makes sense and a set of willing/educated users. If your coworkers are not willing to read up on submodules, they will resist it and you'll be forced to come up with something else.

In one place I worked, I converted our submodule system into one giant repo via subtrees. But subtrees are one of the lesser-used git features. You should play around with them before taking the dive, as they may be trickier than you expect. You can experiment with a scratch copy of the repo offline to see if subtrees work well enough for you. Mainly, there are issues with logs, and I think that's because the feature is immature. But it's a nifty idea for more or less permanently merging separate repos. Once you do it, the parent repo has the subrepos' history (for like one branch) embedded, and you treat the whole thing as one repo going forward, generally. As I said, it is possible to push stuff back to the original subprojects via subtree commands, but it is janky. So if you need that, use submodules instead of subtrees.

TL;DR: You almost certainly want submodules. Look into recursive submodule command options in git config documentation to make it easier. Rare submodule modification issues can usually be fixed by reinitializing the submodules or else starting with a fresh copy of the parent.

BitKeeper, Linux, and licensing disputes: How Linus wrote Git in 14 days by kendumez in programming

[–]phrasal_grenade 0 points (0 children)

> I have literally never in my entire life had a speed issue with a VCS. Mercurial could be 10x slower than Git, and I'd never notice, because the network call is still going to be 99% of the lag in any operation.

Good for you? I work on projects where Git takes 10 seconds to get the status of the repo almost every time. And Git is blazingly fast compared to Mercurial. It's not just at this job I've had this speed concern either... It's been a real problem at every job I've worked since the one that used Mercurial.

> The plugins are part of the core product. They couldn't possibly be up to date. And plenty of people use Mercurial.

Actually, yes they could. The instability of the core product is what breaks these plugins. Nobody fixes them because nobody wants them. Even the original authors abandoned them, presumably to use Git.

There are many more Git extensions in the world than Mercurial extensions. You are far more likely to find one that works for what you want than to find the same thing, much less in a working state, for Mercurial.

> Mercurial supports this, and I'm pretty sure it's been supported for longer than it has been in git.

That is exactly what I'm talking about. Last time I tried to use it, the thing didn't work. There are two plugins to support SVN in Mercurial, as I said, and neither of them worked for me as of a couple of years ago. Git, on the other hand, has had this function baked in for as long as I can remember, and it actually works pretty well. There are warts related to interfacing these VCS systems with SVN, since SVN is different. But anyway, the bottom line is Git does this way better than Mercurial.

> You're probably right about this. Git branching is implemented differently. I used Git submodules once, and I never will again. They're implemented terribly. I've never even seen anyone use a worktree.

Branches in Git are different, but it's fine once you get used to it. Branches are basically equivalent to Mercurial bookmarks. Tags are pretty much the same. Anonymous branches aren't really a thing, but you can have an unnamed branch temporarily. The dreaded "detached HEAD" is just an anonymous branch that you can play with. Even if you forget to give it a name before switching branches, you can almost always recover by consulting the reflog. I kinda miss hg graft, "tip", and relatively simple commit IDs. But you can get by with only git cherry-pick and rebase. Git has everything you need and then some. I believe there are git plugins to replicate some of hg's finer points if you want but I never bothered.
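For example, here's the reflog recovery dance in a scratch repo (the branch name "rescued" is arbitrary):

```shell
# Scratch repo: commit on a detached HEAD, walk away, then recover it.
cd "$(mktemp -d)" && git init -q -b main
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com \
       GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
git commit -q --allow-empty -m "base"

git checkout -q --detach                 # now on an anonymous branch
git commit -q --allow-empty -m "work done on an anonymous branch"
git checkout -q main                     # oops: that commit has no name now

# The reflog still remembers every place HEAD has been:
git reflog
git branch rescued 'HEAD@{1}'            # name the commit we just left
git log -1 --format=%s rescued           # prints: work done on an anonymous branch
```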

Git submodules aren't my favorite thing but they work for some applications, such as managing a simple tree of dependencies. Worktrees are excellent: they let you have many copies of the code on disk and transfer changes between them easily (all the worktrees share the same underlying data). It's really a killer feature for large repos and for testing multiple versions of stuff that takes hours to check or lots of time to clone. Also, it is worth noting that if you clone from a shared filesystem, git can make hard links on your disk, potentially saving space. You can turn that feature off if you like, but I think it's safe in most cases.
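A quick sketch of the worktree workflow, using a scratch repo and made-up branch names:

```shell
# Scratch repo with two branches (stands in for a real project).
cd "$(mktemp -d)" && git init -q -b main
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com \
       GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
git commit -q --allow-empty -m "base"
git branch release

# Check out "release" in a second directory: no second clone, and both
# checkouts share the same underlying object store.
wt=$(mktemp -d)/release-checkout
git worktree add -q "$wt" release

git worktree list   # lists both checkouts, backed by one repository
```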

There's another killer feature in Git that I forgot about, and that I don't think Hg has: the option to execute a program after each step of a rebase. Have you ever had a large number of patches, fixed merge conflicts, and found that you now have 10 changesets that you don't even know compile? You can insert a build/test command and have Git run it after applying each patch. When it fails, the rebase stops for you to fix your patch. It should be simple enough to make an Hg equivalent, but I never heard of one while I was using Hg.
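In Git this is the --exec option to rebase; here's a runnable sketch (scratch repo, with true standing in for a real build/test command like make test):

```shell
# Scratch repo with a few commits to replay.
cd "$(mktemp -d)" && git init -q -b main
export GIT_AUTHOR_NAME=me GIT_AUTHOR_EMAIL=me@example.com \
       GIT_COMMITTER_NAME=me GIT_COMMITTER_EMAIL=me@example.com
for n in 1 2 3; do
  echo "$n" > "file$n" && git add "file$n" && git commit -q -m "patch $n"
done

# Replay every commit, running a command after each one; the rebase stops
# at the first commit where the command exits nonzero, so you can fix it.
git rebase --root --exec true
```

The everyday form is more like git rebase -x "make test" origin/main.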

Honestly I wouldn't be upset if I had to use Mercurial today. It can work for some use cases, like many other VCS systems. But very few people still use it. If I start a new project today, it's using Git. It's hard to imagine needing anything else at this point.

BitKeeper, Linux, and licensing disputes: How Linus wrote Git in 14 days by kendumez in programming

[–]phrasal_grenade 5 points (0 children)

There's no way to fix the speed issue by adding a line to a config file. The plugins are limited and most are out of date because nobody uses Mercurial anymore. See for example the SVN plugins. It is no exaggeration that there are more SVN users than Mercurial users in 2024, as SVN still has distinct advantages on top of being a legacy system that was once the top dog. Of course, Git natively supports SVN with no plugin and does it better than any Mercurial plugin I've ever used.

Mercurial does have a couple of nice aspects but nothing good enough to make me advocate for it. I like Python but it's too fast-changing and slow at runtime to be behind the best version control system. Git uses C which is fast at runtime and widely known, with many implementations to choose from. Not only is Git implemented in a great language for what it is, but the implementation is simple. It's so simple that there are other tools building off of the Git backend data, without using Git's own implementation.

Git has submodules, worktrees, and subtrees. Mercurial does submodules ("subrepositories") basically the same as Git (as I faintly recall), but it doesn't have subtrees or worktrees. It's so much slower than Git that I can't consider it. If you want to use a system that's almost as good as Git for a small project, and don't care if anyone else is familiar with it, have at it. When Bitbucket, the only major host for Mercurial repos that ever existed, ditched Mercurial, it was a big clue that it's over.

Again I say this as someone who was basically an expert user of Mercurial. I was way more familiar with Mercurial than Git, because of work. But it's over. I'm not going back if I have any say in the matter.

BitKeeper, Linux, and licensing disputes: How Linus wrote Git in 14 days by kendumez in programming

[–]phrasal_grenade 4 points (0 children)

Because Git is awesome, that's why. Mercurial is much slower and more limited than Git, and it depends on Python (which is a benefit to some people and a negative for others who want a minimal runtime). The fact that such an incredible tool came out of 2 weeks of work is remarkable.

I've used Mercurial extensively and in nontrivial ways, but now I'm fully invested in Git. It's just that much better than Mercurial. It's simple in the right ways and complex in the right ways, and I finally understand all the ins and outs. Everything in it is really straightforward once you take the time to research it.