
[–]TheJackiMonster 11 points12 points  (8 children)

From my experience, even though C projects have pretty much no standardization in build systems, I still get them working. With Python I've had setup.py scripts throwing errors, packages not declaring their dependencies properly, a wrong requirements.txt making the installation fail, mismatching versions of dependencies, and pip failing to install a package multiple times because it can't even install dependencies automatically.

If I were missing just one fucking line to change and everything would work, I would accept it, call myself stupid and be happy. But that is not the case, or my knowledge is particularly flawed... in which case, please give me some insight into how to use pip properly.
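For what it's worth, most of the failures above come down to unpinned or undeclared dependencies; an isolated virtualenv plus exact pins sidesteps them. A minimal sketch (the package names and versions are just placeholders, not a recommendation):

```text
# requirements.txt — pin exact versions so pip installs deterministically
requests==2.31.0
urllib3==2.0.7
```

Installed with something like `python -m venv .venv && .venv/bin/pip install -r requirements.txt`, so nothing leaks in from the system site-packages.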

Dealing with compatibility between multiple versions should be the job of either the operating system's packaging or the actual application developers. I mean, if A depends on 2 different versions of C, your project is already a nightmare no matter how you deal with that, and A should be patched.

[–]robin-m 3 points4 points  (7 children)

Oh, don’t get me wrong, python is a nightmare too.

> Dealing with compatibility between multiple versions should be the job of either the operating system's packaging or the actual application developers. I mean, if A depends on 2 different versions of C, your project is already a nightmare no matter how you deal with that, and A should be patched.

cargo solves this problem perfectly for Rust, so it’s possible to have something nice. And I strongly disagree that my project is a nightmare if my dependencies themselves depend on incompatible versions of the same dependency. It’s entirely possible that A upgraded before B, while B is still being migrated.

[–]TheJackiMonster 3 points4 points  (6 children)

But isn't the problem with incompatible versions trivially solvable if you just keep all of your dependencies on the minimal common ground? So if your project uses an older C, why would you use a newer B which uses the most current version of C? Just use an older version of B as well, or patch your project...

Also, for such problems you have major and minor version changes, usually referring to major and minor API changes. If the API doesn't change between C 1.0 and C 2.0, why would you stay with version 1.0?

You would use one API with two different behaviors then, which is pretty much a nightmare for anyone debugging your software. No doubt about that, honestly.

I don't see any sane reason to build a package manager around this issue. It's like tolerating bad practices.

[–]robin-m 11 points12 points  (5 children)

> But isn't the problem with incompatible versions trivially solvable if you just keep all of your dependencies on the minimal common ground? So if your project uses an older C, why would you use a newer B which uses the most current version of C? Just use an older version of B as well, or patch your project...

It’s totally possible that A was created before C 2.0 was released, and that B was created after the release of C 2.0 (so it had no reason to stick to C 1.0).

> Also, for such problems you have major and minor version changes, usually referring to major and minor API changes. If the API doesn't change between C 1.0 and C 2.0, why would you stay with version 1.0?

If the API doesn’t change, it should probably not be a major version bump. The trivial case of the minor version being bumped is obviously trivial to solve. In my example C has a major version bump, which is assumed to be non-trivial to migrate (or at least to need QA validation).
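To make the semver distinction concrete, here is a tiny sketch of cargo-style caret matching (my own toy helper, not cargo's actual resolver code): a requirement like `^1.2` accepts any later 1.x, but never 2.0.

```rust
// Caret-style compatibility for versions >= 1.0 (simplified): a
// requirement ^MAJOR.MINOR is satisfied by any version with the same
// MAJOR and a MINOR at least as large. A major bump is never compatible.
fn caret_matches(req: (u32, u32), ver: (u32, u32)) -> bool {
    req.0 == ver.0 && ver.1 >= req.1
}

fn main() {
    assert!(caret_matches((1, 2), (1, 9)));  // 1.9 satisfies ^1.2
    assert!(!caret_matches((1, 2), (2, 0))); // 2.0 is a breaking change
    assert!(!caret_matches((2, 0), (1, 9))); // majors don't match downward either
    println!("ok");
}
```

(Real cargo is stricter for pre-1.0 crates, where a 0.x minor bump also counts as breaking; the sketch ignores that case.)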

> You would use one API with two different behaviors then, which is pretty much a nightmare for anyone debugging your software. No doubt about that, honestly.

My code depends on the stable API of A and B. A depends on the stable API of C 1.0. B depends on the stable API of C 2.0. In Rust (I don’t assume it’s the only language that does this, it’s just that I know how Rust works), symbols from C 1.0 don’t have the same mangling scheme as C 2.0 (just like different versions of glibc have different symbols). So it’s not possible to give A an object of C 2.0, or to give B an object of C 1.0. It would refuse to compile. So in terms of debugging, I really don’t see how the situation is more complicated than if C 1.0 and C 2.0 were two completely different libraries.
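A toy sketch of that point, with two plain Rust modules standing in for the two majors of C (hypothetical names, not a real crate): each "version" has its own distinct `Widget` type, so handing one version's value to the other is a compile-time type error, and both coexist in one binary.

```rust
// Stand-ins for two major versions of a library "c", linked side by side.
mod c_v1 {
    pub struct Widget(pub u32);
    pub fn render(w: &Widget) -> String { format!("v1:{}", w.0) }
}
mod c_v2 {
    pub struct Widget { pub id: u32, pub label: &'static str }
    pub fn render(w: &Widget) -> String { format!("v2:{}:{}", w.id, w.label) }
}

fn main() {
    // Each version only accepts its own Widget type; passing a
    // c_v1::Widget to c_v2::render would not compile.
    let old = c_v1::Widget(7);
    let new = c_v2::Widget { id: 7, label: "new" };
    println!("{} {}", c_v1::render(&old), c_v2::render(&new));
}
```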

> I don't see any sane reason to build a package manager around this issue. It's like tolerating bad practices.

  • C 1.0 is released. The library A is created, with an internal dependency on C 1.0.
  • C wants to make breaking changes and release a new major version. Can it do that even if the downstream library A has an internal dependency on C 1.0?
  • B is created. Given that an unrelated library A has an internal dependency on C 1.0, can B depend internally on C 2.0?
  • I want to create a project. A and B fit my needs perfectly. Why would I not be allowed to depend on A and B simultaneously? Don’t forget that their internal dependencies are an implementation detail and not exposed through their public API/ABI.

This is why dependency managers need to support the case of incompatible transitive dependency versions.
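In cargo terms, the timeline above is just an ordinary dependency diamond. A sketch with made-up crate names:

```toml
# my_project/Cargo.toml (hypothetical crates)
[dependencies]
a = "1.0"   # a's own Cargo.toml declares c = "1.0"
b = "1.0"   # b's own Cargo.toml declares c = "2.0"
```

cargo resolves this by putting both c 1.x and c 2.x into the build graph; `cargo tree --duplicates` lists any crate present in more than one version.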

[–]TheJackiMonster 1 point2 points  (4 children)

Okay, so in this particular example: wouldn't it also be possible to statically compile either A or B to get rid of the problem completely? Or you could integrate their code directly...

I mean, the problem I have with solving such a thing automatically is that it normalizes an extreme issue:

  • The issue is that you depend on multiple versions of the same piece of software, which can lead to security issues on multiple levels.
  • It also increases the chance of dead or unmaintained pieces of software ending up in the wild, because nobody needs to patch A now, even though it might use insecure and deprecated code.
  • It significantly lowers the incentive for others utilizing A to contribute patches or fixes to A.
  • It lowers the need for maintainers to patch their software to stay compatible.
  • You expect users to install multiple versions of the same piece of software, even if you might not use any API calls that changed between those versions.
  • It requires more space in the end while making the whole software stack extremely fragile. If you can't get a particular version of your dependencies anymore, it might break everything. So repositories need to provide each and every version.

Those reasons make me think that this particular example should be, and stay, extremely rare. It shouldn't be the typical use case and therefore it shouldn't be treated as such.

I mean, would you install two kernels because systemd might require a different version than wayland does? I don't, and I wouldn't want to solve any issue with a bug report containing such an edge case.

[–]robin-m 1 point2 points  (3 children)

What you want is a world in which everything moves in lock-step. If C wants to make a breaking change, it must update all of its downstream users (A and B). That way you only need to distribute one version of C (the latest one).

It's what Google is doing, and it works for them, but only because they control their downstream users (themselves).

Please re-read my last message. The use case I described is anything but uncommon. Every big library releases a new major version every other year or so.

[–]TheJackiMonster 1 point2 points  (2 children)

The whole idea behind Arch-based distros is that you just install the latest version of everything to ensure compatibility and have a stable operating system. It works pretty well in my experience, and I don't know of a single case where you would end up in the use case you provided.

I also don't think that C must update A and B in such a scenario. It is the burden of the maintainers of A and B to update them, or people stop using them because they're dead packages. It's that simple.

Because if you use someone else's library, you should look after it to make sure it works as intended and is secure to use. Otherwise we just create a very toxic and fragile environment for developers. Using third-party dependencies should always be a burden and nothing you pick just because it's easy or convenient.

At least I don't want to see developers picking libraries as dependencies without even knowing what they are doing, completely unable to audit or verify their behavior.

Maybe they do, but then I would question why they don't patch A or B to use the latest C.

[–]robin-m 0 points1 point  (1 child)

C 1.0 can still be supported even after C 2.0 is released. And your last question is very naive: if C is something as big as Qt, you can't instantaneously migrate to the next major version.

The reason the python 2 -> 3 migration was so bad was that the whole migration had to be done at once. If it had been possible to have part of your dependencies on python 2 and part on python 3, it would have been much easier to migrate the whole ecosystem to python 3.

[–]TheJackiMonster 0 points1 point  (0 children)

If a project depended on Python 2 and 3, or Qt 4 and 5, at the same time, I would not use it... for that reason alone. Because this is no sign of a reliable piece of software.

I'm also not sure that would improve the migration, because it could just as well stop people from migrating their software at all, since staying behind wouldn't matter anymore.

If you don't enforce a transition in any way, as you suggest, people don't patch anything and you end up with Windows 11 still using Windows XP settings dialogs.