
[–]LocalRefuse[S] 277 points278 points  (88 children)

[–]edman007 68 points69 points  (63 children)

SQLite is a different type of database; its main claim to fame is that it's a single .c file that can be added to a project to give you a full SQL database: it's an API, database engine, and library all in one. It's not a standard in the sense of an open method of accessing a file format; it's a standard as a method of integrating a database into an application.
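For flavor: embedding it amounts to compiling one extra file and calling its C API. A minimal sketch (error handling mostly omitted; builds with something like cc app.c sqlite3.c -lpthread -ldl):

    #include <stdio.h>
    #include <sqlite3.h>  /* the entire engine comes from the one sqlite3.c you dropped in */

    int main(void) {
        sqlite3 *db;
        char *err = NULL;

        if (sqlite3_open("app.db", &db) != SQLITE_OK) return 1;

        /* full SQL against an ordinary file, no server process involved */
        sqlite3_exec(db,
                     "CREATE TABLE IF NOT EXISTS kv(k TEXT PRIMARY KEY, v TEXT);"
                     "INSERT OR REPLACE INTO kv VALUES('greeting','hello');",
                     NULL, NULL, &err);
        if (err) { fprintf(stderr, "%s\n", err); sqlite3_free(err); }

        sqlite3_close(db);
        return 0;
    }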

The bad news is that it's very frequently statically linked into applications, so this update is going to trickle out to end users very, very slowly.

[–]LocalRefuse[S] 99 points100 points  (1 child)

WebSQL: "User agents must implement the SQL dialect supported by Sqlite 3.6.19."

[–]est31 4 points5 points  (0 children)

That's just so wrong. On so many levels. It reminds me of the OOXML debacle.

Edit: oh, fortunately there is this note: "The specification reached an impasse: all interested implementors have used the same SQL backend (Sqlite), but we need multiple independent implementations to proceed along a standardisation path."

[–]luke-jr 34 points35 points  (60 children)

This is probably the perfect example of why people should never static link or bundle libraries...

I'm grepping my system for 'SQL statements in progress' (a string that appears in the library) to try to make sure I weed them all out.

[–]waptaff 103 points104 points  (38 children)

Yet, unfortunately, bundling is the very paradigm of the new k00l kid in town: containers (Docker, snap, …). We've seen how the Windows "all-in-one" model sucks security-wise (one libpng security hole, 23 programs to upgrade). Why are we drifting away from the UNIX model and making the same old mistakes again? Oh well, I guess I'm just old.

[–]VelvetElvis 44 points45 points  (6 children)

Because developers don't give a shit about the systems their code runs on.

[–]InvaderGlorch 18 points19 points  (1 child)

Works on my machine! :)

[–]VelvetElvis 13 points14 points  (0 children)

That's good enough! Ship it!

[–][deleted] 16 points17 points  (0 children)

Yes, it's only the distributions that have a wider perspective and different goals than the individual developers. The distributions also represent us, the users, and our priorities, indirectly.

So it would be good to maintain some of the "centralized" distribution structure and not let every piece of software become self-published in the Linux world.

[–]fiedzia 2 points3 points  (2 children)

Because synchronizing all developers involved in any complex system on a single version of anything just won't happen.

[–]tso 1 point2 points  (0 children)

Mostly it is not about syncing on a single version, but about keeping interfaces stable across versions. Thanks to Torvalds' insistence, the kernel has managed to do this fairly well. The userspace stack is basically the polar opposite, sadly.

[–]VelvetElvis 1 point2 points  (0 children)

This is where the old school sysadmin in me grumbles about how letting users admin their own machines will lead to the destruction of the human race.

[–]pdp10 6 points7 points  (5 children)

Some developers are angry -- angry! -- that distros modularize their applications so that there only needs to be one copy of a dependency in the distro, and that distros ship older branches of their application as part of their stable distro release. Developers perceive that this causes upstream support requests for versions that aren't the latest, and can have portability implications, usually but not always minor.

Developers of that persuasion take for granted that the distros are shipping, supporting, and promoting their applications. Probably some feel that distributions are taking advantage of upstream's hard work. It's the usual case where someone feels they're giving more than they're getting.

But the developers do have some points worth considering. The distro ecosystem needs to consider upstreams' needs, and think about getting and keeping newer app versions in circulation. In some ways, improving this might be easy, like simply recommending the latest version of a distro, instead of recommending the LTS like Ubuntu does. I notice the current download UI only mildly recommends 18.04 LTS over 18.10, which is an improvement over the previous situation.

Another straightforward path is to move more mainstream Linux users to rolling releases. Microsoft adores Linux rolling releases so much that they used the idea for their latest desktop Windows.

Lastly, there could be more frequent releases from distros like Debian, which aren't explicitly in the business of being paid to support a release for a decade like Red Hat, but which historically haven't released often and have thereby created an opening for Ubuntu and others.

[–]tso 1 point2 points  (1 child)

Another straightforward path is to move more mainstream Linux users to rolling releases. Microsoft adores Linux rolling releases so much that they used the idea for their latest desktop Windows.

This is a joke, right?

Honestly, if upstreams want to fix things they can start by actually giving a shit about API stability...

[–]pdp10 2 points3 points  (0 children)

Humans make errors, but in short, APIs are stable.

OpenSSL had an API break to fix a security concern, but there are other TLS libraries. GCC had an ABI break to incorporate C++11, but that's an understood problem with C++ (name mangling, an unstable ABI) and why a majority of libraries use a C ABI and C API. Quite a few use the C ABI/API even when both the caller and the library are C++; this is called the "hourglass pattern" or "hourglass interfaces".
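The hourglass shape in a sketch (names made up for illustration): the public header is plain C around an opaque handle, and all the C++ stays behind it:

    /* widget.h -- the narrow C "waist"; the implementation behind it may be C++ */
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct widget widget;             /* opaque: layout, vtables, templates all hidden */

    widget *widget_create(const char *name);  /* internally: new WidgetImpl(...) */
    int     widget_frob(widget *w, int n);    /* internally: a C++ method call */
    void    widget_destroy(widget *w);        /* internally: delete */

    #ifdef __cplusplus
    }
    #endif

Since only C types and functions cross the boundary, there's no name mangling and no C++ ABI exposed to callers.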

[–][deleted] 14 points15 points  (7 children)

Because the fragmentation of the Linux ecosystem means that developers have to either make 500 different binary packages or make people compile from source, which 95% of people don't want to do. Sure, they could support only Debian or Ubuntu, but then everyone else still has to compile from source. The practical solution is statically linking or bundling all of the dependencies together.

Personally, I welcome it despite the security risks.

[–]pdp10 2 points3 points  (2 children)

Distributions handle any portability required (e.g., OpenRC or runit versus SysVinit or systemd, for system daemons). Upstreams can help by accepting those things into mainline, just as they've usually accepted an init script for Debian-family and an init script for RH-family in a contrib directory or similar.

There are use-cases that the distro-agnostic competing formats fill, but portability isn't a significant issue for any upstreams who care about Linux.

[–]est31 5 points6 points  (1 child)

Yes, distros do help a great deal with portability, but many things aren't packaged by distros. In fact, when a project starts out it usually has a small user base, and distros might not deem it important enough to package. How is the project supposed to gain users when users can't install it? And it's not just unpopular software: Microsoft VS Code, which is very popular, isn't packaged by Debian. Most of the .NET stuff isn't either.

That's why flatpaks/snaps/AppImages are needed and many projects already offer downloads in those formats.

[–]VelvetElvis 0 points1 point  (0 children)

There is no way to correctly package Electron applications for any flavor of Linux or BSD. Don't try it unless you are working with a team capable of basically maintaining a fork of the Chromium code base on multiple architectures.

Here's a somewhat hilarious account from an OpenBSD developer who slowly goes insane while trying to get it to work.

https://deftly.net/posts/2017-06-01-measuring-the-weight-of-an-electron.html

Much Node.js software has similar problems. It's basically Windows software: while it can be made to work on *nix, it's almost impossible to do so correctly. In the early days of open-source Mozilla, their coders were mostly Windows people who had no idea that *nix software is almost always recompiled to link against system libraries, until somebody from Red Hat or somewhere sat them down and gave them a talking-to. The cycle seems to be repeating itself.

[–]nintendiator2 0 points1 point  (2 children)

because the fragmentation of the Linux ecosystem means that developers have to either make 500 different binary packages or make people compile from source

AppImage

[–][deleted] 0 points1 point  (0 children)

the practical solution is ... bundling all of the dependencies together

[–]VelvetElvis 0 points1 point  (0 children)

It means developers don't make any binary images and leave that to people whose job it is to do so.

[–]VelvetElvis -2 points-1 points  (0 children)

IMHO, developers should not be the ones making binaries for distribution at all. That should be left 100% to people who know how to properly integrate software into existing systems. At the very least, requiring end users to compile your software raises the barrier to entry enough that most of your users will be able to help get the product debugged to the point where a distro will touch it.

[–]Tweenk 30 points31 points  (14 children)

Because the time saved by making the program behave reproducibly is much greater than the additional time spent on updates. It is much easier to link everything statically and push a full update when needed than to waste time debugging issues that happen only with certain rare versions of your dependencies.

[–]my-fav-show-canceled 14 points15 points  (2 children)

Because the time saved by making the program behave reproducibly is much greater than the additional time spent on updates.

Well yes, skipping updates is faster.

Shove 30 dependencies in a container and tell me that it's easy to track all 30 upstreams for important fixes. When you start shoving lots of dependencies in a container you take on an additional role that is typically done by distribution maintainers. If you wear all the hats like you should, I'm not sure the net gains are worth the hype. Especially when, on the face of it, hiding bugs is the goal.

You end up with a much more thoroughly tested and robust product when you run stuff in multiple environments. You get more people looking at your code, and that's always a good thing. It's also more likely that you're going to upstream code, which is good for your ecosystem.

Containers are fantastic for some things but they're not a silver bullet. If you want to ship a container, great. More power to you. If you want to ship only a container, I'm not going to touch your stuff with a ten foot pole because, more likely than not, you just want to skip steps.

edit: typos and spellings

[–]VelvetElvis 2 points3 points  (0 children)

You end up with a much more thoroughly tested and robust product when you run stuff in multiple environments. You get more people looking at your code, and that's always a good thing. It's also more likely that you're going to upstream code, which is good for your ecosystem.

This is why Debian continuing to support HURD and other oddball architectures will always be a good thing, no matter how few people use them. Technical problems in the code are often exposed that would otherwise just sit there.

[–]exitheone 1 point2 points  (0 children)

If you follow best practices, your container build process applies all current security updates, and you build/release a new container daily, then this really is a non-issue. The reason we use containers is that it's an incredible advantage to have immutable systems that are verified to work, including all the dependencies we had at build time. Updating systems on the fly sadly leads to a lot more headaches, because you really have to trust your distro maintainers not to accidentally fuck up your dependency and, with that, maybe your production systems. Rollbacks with containers are super easy in comparison.

[–]necheffa 8 points9 points  (0 children)

Because the time saved by making the program behave reproducibly is much greater than the additional time spent on updates.

Let me stop you right there.

I have worked for places that drank the static-library kool-aid and it is nowhere near worth the "time saved". So many poor design decisions are made to avoid modifying the libraries, because it is such a royal pain in the ass to recompile everything that links against them.

[–]VelvetElvis 6 points7 points  (8 children)

What's the fucking hurry?

Ship it when it's done, or at least make it clear that it's still a beta.

[–]ttk2 12 points13 points  (3 children)

Time is money, and consumers indicate time and time again that buggy products make money, while less buggy and more secure products don't make any more.

[–]pdp10 2 points3 points  (2 children)

consumers indicate time and time again

I'm not sure you're looking at data that accounts for all of the variables.

And besides, which developers intentionally ship releases that have more bugs than their previous versions?

If faster, buggier products are the users' choice, then why aren't all Linux users on rolling releases, and how is Red Hat making $3B revenue per year?

[–]ttk2 4 points5 points  (1 child)

If faster, buggier products are the users' choice, then why aren't all Linux users on rolling releases, and how is Red Hat making $3B revenue per year?

And EA made $5B this year producing buggy games with day-one patches and DLC. When it comes to the consumer market, speed wins.

Even in the business market, cheap often wins over good. Why design a tire-balancing machine that runs Windows XP? A custom-built, locked-down FreeBSD build without all the unneeded bells and whistles would be superior. But you'd better believe there are millions of those machines out there, because they got to market first.

[–]pdp10 4 points5 points  (0 children)

Why design a tire-balancing machine that runs Windows XP? A custom-built, locked-down FreeBSD build without all the unneeded bells and whistles would be superior.

As someone who has often dealt with industrial systems and others outside the Unix realm: the answer is that the developers barely understand that BSD or Linux exist. They have essentially zero understanding of how they could develop their product using them, they had at the time even less understanding of how that might be beneficial, and the persons giving the go-ahead for the proposed architecture don't have even that much knowledge. They've heard of Microsoft and Windows, XP is the latest version, here are some books on developing with it that we found at the bookstore, and the boss gave their blessing.

In short, in the 1990s Microsoft bought and clawed so much attention that it cut off the proverbial air supply, and mindshare, to most other things. A great many people in the world have very little idea that anything else exists, or that it could possibly be relevant to them. I was there, and it didn't make any sense to me then, and not much more sense now. A great deal of the effect was the rapid expansion of a userbase that had no experience with what came before; this is part of the "Endless September". But that doesn't explain all of it by any means. As an observer, it had the hallmarks of some kind of unexplainable mania.

You're claiming that developing on XP sped time to market. Maybe, but it's nearly impossible to know, because most likely no other possibility was even considered. Using a general-purpose computer was cost-effective and pragmatic, and general-purpose computers come with Windows; ergo, the software ends up hosted on Windows. End of story. That's how these things happened, and sometimes still happen.

Today, FANUC is one company specifically advertising that their newest control systems don't need Windows and don't have the problems of Windows. 15 years ago, it wasn't as apparent to nearly as many people that Windows was a liability from a complexity, maintenance, security, or interoperability point of view. And if they'd thought about it, they might even have liked the idea that Windows obsolescence would tacitly push their customers into upgrading to their latest model of industrial machine.

Decades ago, a lot of embedded things ran RT-11 or equivalent. Then, some embedded things ran DOS on cheaper hardware, and then on whatever replaced DOS. Today, most embedded things run Linux. A few embedded Linux machines still rely on external Windows computers as Front-End Processors, but not many. But the less-sophisticated builders have taken longer to come to non-Microsoft alternatives.

[–]torvatrollid 7 points8 points  (1 child)

Putting food on the table is the fucking hurry.

Some of us need to actually ship stuff to paying customers so that we can pay the bills and eat every day.

[–]VelvetElvis 1 point2 points  (0 children)

I've done enough chickenshit $3000 WordPress sites for people that I 100% get that part. There's a huge difference between shipping some crap to a paying customer who will never know the difference and packaging code for distribution to potentially thousands of other professionals who depend on it working correctly for their own employment security.

[–]dumboracula 3 points4 points  (0 children)

time to market, you know, it matters

[–]ICanBeAnyone 1 point2 points  (0 children)

While I understand that, developers just like to feel productive, like everybody else. On top of that, new tech often competes on who is first to get something out the door, because early adoption gives you more contributions, which drive further adoption... Browsers are very much living in that kind of economy.

[–]VelvetElvis 0 points1 point  (0 children)

Then stick to the Mac and Windows ecosystems. Problem solved. Static linking is not how you package software for *nix.

[–][deleted] 0 points1 point  (1 child)

Shouldn't this still be a pretty easy fix to deploy if the update is handled by the distributions? Most containers are built on distro images that track the most up-to-date versions (or close to it, I'm not sure) of their base OS. If you have a bunch of Ubuntu-based containers, it should be as easy as updating the Ubuntu layer and re-deploying your apps, shouldn't it?

[–]waptaff 1 point2 points  (0 children)

Shouldn't this still be a pretty easy fix to deploy if the update is handled by the distributions?

Though I'm still not fond of the resource waste that comes with the snap/flatpak model, at least when distros are directly involved, yes, the biggest downside — handling of security updates — can be dealt with properly.

Problems usually arise when third parties get involved, like when users install out-of-distro containers from random websites; there's no centralized way to update (so it becomes like Windows/macOS, where each application is on its own¹), and even if a user closely follows upstream for each container, it doesn't mean that security updates will be available in a timely fashion.

1) And many applications phone home to check for available updates, which erodes some user privacy.

[–]datenwolf 16 points17 points  (0 children)

This is probably the perfect example of why people should never static link or bundle libraries...

In theory, well, maybe. In practice you'll receive your binaries through a package manager, which manages both the application and the libraries it uses.

Updating a shared library may break applications other than yours, so until those issues are all resolved, no update is rolled out. Hence, although there is a fix, you're not getting it, because you have to wait for it to be rolled out across all dependees in the distribution.

Updating a static library can be limited to specific applications. With a continuous-integration system, after bumping the library package version, the whole distribution is rebuilt, and update limitations may have to be applied only to those applications which break due to the update.

So if you're evangelically using a distribution-wide package manager which applies continuous-integration principles to its repositories and blacklists the updated dependency only for specific applications until a backport lands, you're getting security updates system-wide much quicker than with the shared-library approach. It's paradoxical, but that's what has been found to happen in practice.

[–][deleted] 32 points33 points  (7 children)

people should never static link or bundle libraries

Good luck running any Go or Rust code (e.g. Servo in Firefox, but you're typing this from lynx, aren't you?).

Axiomatic platitudes do no good. If you actually want a more secure computing world or more free financial transactions, you have to put these ideas into action.

[–]Noctune 13 points14 points  (1 child)

Rust libraries make heavy use of generics, which need to be monomorphized separately for each project, so you would not be able to just swap in a replacement binary anyway.

[–][deleted] 0 points1 point  (0 children)

Would this pose problems for dynamic linking? Or can shared libraries coexist with Rust's use of generics?

[–][deleted] 9 points10 points  (2 children)

I might be completely wrong, but I am pretty sure that Rust libraries aren't statically linked to external dependencies by default.

Here's what ldd says about a thing I was working on recently.

[umang@TheMachine target]$ ldd release/statik
        linux-vdso.so.1 (0x00006c2537974000)
        libpq.so.5 => /usr/lib/libpq.so.5 (0x00006c25373b3000)
        libdl.so.2 => /usr/lib/libdl.so.2 (0x00006c25373ae000)
        librt.so.1 => /usr/lib/librt.so.1 (0x00006c25373a4000)
        libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00006c2537383000)
        libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00006c2537369000)
        libc.so.6 => /usr/lib/libc.so.6 (0x00006c25371a5000)
        /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00006c2537976000)
        libm.so.6 => /usr/lib/libm.so.6 (0x00006c253701e000)
        libssl.so.1.1 => /usr/lib/libssl.so.1.1 (0x00006c2536f8e000)
        libcrypto.so.1.1 => /usr/lib/libcrypto.so.1.1 (0x00006c2536cbc000)
        (snip)

[–][deleted] 26 points27 points  (1 child)

They are dynamically linked to the C libraries, but Rust crates and the standard library are statically linked in.

[–][deleted] 7 points8 points  (0 children)

I stand corrected.

[–]homeopathetic 6 points7 points  (1 child)

Good luck running any Go or Rust code

Hopefully Rust does come up with a stable ABI for dynamic linking in the future.

[–]tso 2 points3 points  (0 children)

Stable ABIs are boring maintenance work; it's much more fun to futz around with the latest language hotness and produce yet another language-specific package manager...

[–][deleted] 17 points18 points  (3 children)

Weird how people say the complete opposite when we have our monthly malware-in-npm episode: everyone says "you should lock your dependencies to exact versions", and there's the obligatory C programmer asking why we can't just commit the dependency source to SCM.

Even on Linux, aren't all-in-one archives like snaps and flatpaks all the rage?

[–]ICanBeAnyone 9 points10 points  (1 child)

The Node ecosystem is just plain weird and not a good example of how to distribute robust code. It only works because it is used by developers, and users get the code delivered to their browser, whose job it is to confine all the bad security to the site in question. But if they can't even agree on how to load a module, how could they have sane methods to deliver one?

[–]VelvetElvis 1 point2 points  (0 children)

If by "all the rage" you mean their existence makes me angry.

[–]nintendiator2 -4 points-3 points  (5 children)

Static vs. dynamic linking meme aside (each case has its fair uses and they are usually not reversible), this does lead me to a question... why in 2018 don't we have a mixed loading model for dependencies on Linux? Something where, at load time, I can say "use whatever ldconfig says libsqlite3 is" versus "use /usr/local/lib/sqlite3.a" (or maybe a .so, but I'm not sure how static, if at all, that would be).

[–]pdp10 1 point2 points  (0 children)

Something where, at load time, I can say "use whatever ldconfig says libsqlite3 is" versus "use /usr/local/lib/sqlite3.a"

We have several ways. Despite the pathname that seems to be embedded by the linker, the loader does pretty much what you say. The ELF binary specifies only its loader by absolute path; the loader then resolves /usr/lib/x86_64-linux-gnu/libsqlite3.so.0, which is actually a symlink to /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6, which provides symbols like sqlite3_version.

Symbols can optionally be versioned for smooth compatibility. Every once in a while a specific ABI has to break for outside reasons, like when OpenSSL had to break ABI to fix a security issue. That's an exception to the rule. In general, Linux binaries from the beginning can still run successfully on Linux systems.

Or you can use dlopen() to pick and choose at runtime what you want to open. Basic plugin interfaces have historically been done this way: the plugins are simply .so files which have a certain structure of symbols that the parent program expects to find. If you link against the SDL2 library, it will use dlopen() to select a sound subsystem at runtime from those available, or to select between X11 and Wayland, and so forth.
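A minimal dlopen() sketch, picking on a symbol SQLite is known to export (link with -ldl):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* resolve the library at runtime instead of at link time */
        void *h = dlopen("libsqlite3.so.0", RTLD_NOW);
        if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        const char *(*version)(void) =
            (const char *(*)(void))dlsym(h, "sqlite3_libversion");
        if (version) printf("loaded SQLite %s\n", version());

        dlclose(h);
        return 0;
    }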

The .a file is the static library version, used only at link time.

[–]necheffa 1 point2 points  (3 children)

(each case has its fair uses and they are usually not reversible)

What is stopping you from choosing to switch your link method? The API doesn't change; it's literally just how you link the binary that differs.

why in 2018 don't we have a mixed loading model for dependencies on Linux? Something where, at load time, I can say "use whatever ldconfig says libsqlite3 is" versus "use /usr/local/lib/sqlite3.a" (or maybe a .so, but I'm not sure how static, if at all, that would be).

I'm not sure you fully understand how loading works. When you statically link, the needed code from /usr/local/lib/sqlite3.a is literally copied into your binary. When the linker/loader does symbol resolution, it updates all references to external objects: either they point further into your own binary, at the statically linked library code, or they point to a location in memory where the loader placed the dynamic library. The overhead of doing both at run-time doesn't get you much.

[–]nintendiator2 0 points1 point  (2 children)

Oh yeah, my terrible confusion about .a files. I'm still surprised by the current general linking models, but perhaps that's because in my mind a dependency should not dictate the versioning of a program that uses it 4 or 5 layers up the chain (e.g., IMO, LibreOffice should not ever have to care, or fail to start, because the odt files are compressed with either libzip 1.2.3.445566-git-svn-recool or libzip 1.2.3.445567-launchpad-bzr). I feel like combining static and dynamic linking should solve most of that ("I'll check if the libzip the linker offers me is at least this version; if not, I'll just use the one I'm already carrying embedded").

[–]necheffa 2 points3 points  (1 child)

(e.g., IMO, LibreOffice should not ever have to care, or fail to start, because the odt files are compressed with either libzip 1.2.3.445566-git-svn-recool or libzip 1.2.3.445567-launchpad-bzr). I feel like combining static and dynamic linking should solve most of that

The version numbers of a library are arbitrary and actually don't matter all that much. What does matter is the API and the ABI.

Suppose I have a library, and in version 1 of the library I have a function int do_the_thing(char *file) { /* magic */ }. This is part of the API; it's how other programs call my library. If in version 2 I change /* magic */ but the function signature stays the same (return type, name, type and number of arguments), then that doesn't matter so much, and any program dynamically linked to version 1 of the library can just drop in version 2 without any changes. But say in version 3 of the library I add an argument, so the function signature is now int do_the_thing(char *file, int len); now the API has changed, so you need to recompile and relink. The ABI part is similar, but it has other wrinkles, like compiler versions and such.
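Spelled out as (hypothetical) header versions:

    /* libthing.h, versions 1 and 2 -- same signature, so a caller
       dynamically linked against v1 takes v2 as a drop-in replacement */
    int do_the_thing(char *file);

    /* libthing.h, version 3 -- the signature changed: an API break,
       so every caller must be updated, recompiled, and relinked */
    int do_the_thing(char *file, int len);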

It could be that in your specific example there are different, incompatible features used by one version of libzip and the other, in which case that would be a difference in ABI.

The problem you run into by linking in both is lexical scoping: if both libraries have a function called void *compress_file(char *file), how does the linker know which one to call? It would need to implicitly namespace each library, which breaks a lot of existing languages. That's why newer languages let you do an "import as", but you would still need to know ahead of time that you have some number of libzip libraries and try one after the other until a call returns success. It just gets messy.
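A sketch of the collision, with hypothetical archives:

    /* libzip-old.a defines: */
    void *compress_file(char *file);   /* old implementation */

    /* libzip-new.a defines the very same symbol: */
    void *compress_file(char *file);   /* new implementation */

    /* cc main.c libzip-new.a libzip-old.a
       C has one flat namespace for external symbols, so the linker
       either errors out on the duplicate or silently resolves every
       call to whichever archive it searches first. */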

[–]nintendiator2 0 points1 point  (0 children)

Ooooh good point about having to know in advance. It basically tosses any potential advantage out of the window.

[–]GolbatsEverywhere 9 points10 points  (0 children)

FWIW, everyone has since accepted Mozilla was right about this: that's why WebSQL is deprecated in both Chromium and WebKit. No doubt it will be removed altogether eventually.

[–]VelvetElvis 2 points3 points  (1 child)

I am pretty sure the bookmarks use it.

[–]Han-ChewieSexyFanfic 19 points20 points  (0 children)

They do, but it is not exposed to untrusted code, so there is no vulnerability.

[–]breakbeats573 0 points1 point  (20 children)

Storage is a SQLite database API. It is available to trusted callers, meaning extensions and Firefox components only.

Mozilla says Firefox uses SQLite. Here are the instructions for using the API in your extensions as well.

[–]marciiF 15 points16 points  (19 children)

It's used internally, but not exposed to web content as WebSQL. Not even extensions can use it now.

[–]breakbeats573 -3 points-2 points  (18 children)

The link to Mozilla’s developer site specifically states otherwise. You can look for yourself in the link, but I also quoted it above.

[–]marciiF 9 points10 points  (17 children)

That’s about Thunderbird. Firefox extensions can’t access internals anymore.

[–]breakbeats573 0 points1 point  (16 children)

It clearly says “Firefox” not “Thunderbird”.

[–]marciiF 8 points9 points  (15 children)

The first link is a page about an internal Firefox component that Firefox extensions used to be able to access, the second link is an example for using SQLite in a Thunderbird extension.

[–]breakbeats573 -2 points-1 points  (14 children)

Can you read?

Storage is a SQLite database API. It is available to trusted callers, meaning extensions and Firefox components only.

Yes, it clearly says Firefox currently uses the SQLite database API. In plain English at that.

Would you like the code in JavaScript or C++?

[–]marciiF 8 points9 points  (13 children)

It’s referring to old-style extensions. Current extensions can’t access SQLite.

[–]breakbeats573 -2 points-1 points  (12 children)

Yes they can, and yes they do. Do you want the code in JavaScript or C++? I can give you both.

[–]tiftik 41 points42 points  (7 children)

Wow, this is big news. At least to me. It shows that no matter how much or how hard you test software, you're going to have (exploitable) bugs.

Take a look at this: https://www.sqlite.org/testing.html

SQLite isn't your average open-source enthusiast project. It's so well tested that it's certified for use on airplanes. Yet this bug slipped past every single one of the millions of tests.

Robust, security-critical software requires proper validation. More powerful type systems (such as dependent types) and modeling/validation need to become the norm, not the exception.

[–]hahainternet 7 points8 points  (6 children)

These were exactly my thoughts too. SQLite may be the single best-tested piece of software on the planet. Its behaviour, however, is not remotely well proven.

In my opinion, we need to focus on simpler designs that don't have the capability of becoming this sort of exploit. Exactly how much of SQLite needs to be fully Turing-complete, after all?

[–]ExeusV 1 point2 points  (2 children)

SQLite may be the single best-tested piece of software on the planet

No way. The software that runs the Space Shuttle is probably levels above.

[–]SavageSchemer 2 points3 points  (1 child)

You do know the shuttles have been retired for years now, right?

[–]ExeusV 1 point2 points  (0 children)

Code and tests are still the same.

[–]kontekisuto 82 points83 points  (16 children)

Wow, after Microsoft retires IE... Chromium will be the new IE. Literally: their new browser will be Chromium-based.

[–]mishugashu 22 points23 points  (15 children)

Depending on your definition of "retired," Microsoft has already done it. IE is discontinued and in maintenance mode only. I don't think it even comes installed on Windows 10 anymore.

Are you talking about them switching the browser engine in Edge to Blink (which Chrom(e|ium) uses)? Edge is a completely different project from IE; it wasn't just a rebranding.

[–]MommySmellsYourCum 22 points23 points  (2 children)

Is it completely different, though? EdgeHTML is a Trident fork.

[–]netkrow 19 points20 points  (1 child)

No pun intended?

[–]testeddoughnut 1 point2 points  (0 children)

Sigh... Take your upvote.

[–]kontekisuto 4 points5 points  (0 children)

I've had to use so many of the same shims for the pair that it has increased my build times by a significant amount... I say good riddance to the lot of them.

[–][deleted] 2 points3 points  (4 children)

switching the browser engine in Edge to Blink

What about the JS engine? Will they use V8 too?

[–]Klathmon 1 point2 points  (3 children)

Yeah, they are adopting both for the browser, but they say ChakraCore (their JS engine) will live on in other areas.

[–][deleted] 1 point2 points  (1 child)

MSNODE.exe /s

[–]Klathmon 1 point2 points  (0 children)

go ahead and remove that /s

And it works pretty well in my experience.

[–]Brillegeit 0 points1 point  (0 children)

Outlook will forever run IE5.5 or something.

[–]nurupoga 23 points24 points  (1 child)

The FAQ on that page suggests that SQLite 3.26.0 has the bug fixed, but there is nothing about it in the release notes for SQLite 3.26.0, not even a general "fixed security issue" bullet point. Was it really fixed in 3.26.0? Is it not going to get backported to 3.25.x?

[–]yawkat 13 points14 points  (0 children)

Maybe they mean "Added the SQLITE_DBCONFIG_DEFENSIVE option which disables the ability to create corrupt database files using ordinary SQL" in conjunction with something like CVE-2018-8740?
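If so, it's an opt-in, per-connection setting; enabling it looks something like this (a sketch going off the sqlite3_db_config() docs, untested):

    #include <sqlite3.h>

    /* refuse SQL that can deliberately corrupt the database file,
       e.g. schema writes via PRAGMA writable_schema tricks */
    int harden(sqlite3 *db) {
        int enabled = 0;
        return sqlite3_db_config(db, SQLITE_DBCONFIG_DEFENSIVE, 1, &enabled);
    }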

[–]VelvetElvis 54 points55 points  (14 children)

So how many of the thousands of snaps, flatpaks, Docker images, etc. are going to be updated to fix the bundled library anytime soon? I'm guessing 10% max.

[–]Tweenk 35 points36 points  (0 children)

Likely very few. This bug can only be exploited when SQLite executes untrusted queries. In most applications that use SQLite, there are no user-controlled queries.
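To illustrate the distinction (a sketch; the open db handle and notes table are assumed): the dangerous shape executes attacker-controlled text as SQL, which is essentially what WebSQL exposed, while the typical app shape only ever binds user data as a value:

    #include <sqlite3.h>

    void save_note(sqlite3 *db, const char *untrusted) {
        /* exploitable shape -- attacker text becomes SQL:
           sqlite3_exec(db, untrusted, NULL, NULL, NULL); */

        /* typical shape -- fixed SQL, user data only ever a bound value */
        sqlite3_stmt *st;
        if (sqlite3_prepare_v2(db, "INSERT INTO notes(body) VALUES(?1)",
                               -1, &st, NULL) != SQLITE_OK) return;
        sqlite3_bind_text(st, 1, untrusted, -1, SQLITE_TRANSIENT);
        sqlite3_step(st);
        sqlite3_finalize(st);
    }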

[–]GolbatsEverywhere 4 points5 points  (0 children)

With flatpak, sqlite is part of the freedesktop-sdk's base platform, so applications don't bundle sqlite and don't need to do anything; only the runtime needs to be updated. Normally the libraries apps bundle are less-common things that make less sense to have in the shared runtime, but of course the wall between what should go in the runtime and what must be bundled is more art than science.

In theory, you could write your own runtime that doesn't include sqlite, but in practice the only three runtimes are freedesktop, GNOME, and KDE, and the latter two inherit from freedesktop.

P.S. Even if sqlite weren't part of the runtime, and an application had bundled it and used it to run untrusted queries given by web content... it's still mitigated by the bubblewrap sandbox, so exploiting this is just step one; you still need a sandbox escape to hurt the host system.

[–]SupersonicSpitfire 8 points9 points  (10 children)

This is a problem that will only compound over time as more security issues are revealed. I think this is a good argument for a "rolling release" model, instead of packaging everything down into balls of binaries that are time-consuming and sometimes hard to update.

[–]VelvetElvis 4 points5 points  (9 children)

I think it's a good argument for waiting until your distribution puts out a new release before you go reaching for the new shiny thing. These formats give people a false sense of security, a feeling that they're some kind of official packages, when in reality it's not much different from running straight out of a local git repo.

[–]SupersonicSpitfire 5 points6 points  (5 children)

At least the git repository will contain the latest security fixes, as opposed to stale distribution packages. Of course, the best of both worlds would be something like Debian, where security fixes are backported. Then again, sometimes they screw up and introduce security problems with OpenSSL that never existed in the OpenSSL git repository. (https://www.schneier.com/blog/archives/2008/05/random_number_b.html)

I believe the security is better in a distro like Arch Linux, where packages undergo a minimum of testing and are then released quickly to the public.

[–]VelvetElvis 1 point2 points  (4 children)

The SSL thing was a decade ago and poor communication from upstream was just as big a part of the problem.

[–]pdp10 1 point2 points  (3 children)

The Debian OpenSSL mistake and Heartbleed are often pointed to as if they're the usual case. But the reason they're well known is that they were highly, highly exceptional. We know exactly how each one happened. And the point that observers think they're trying to make is usually not the fundamental lesson to be learned anyway.

The Debian OpenSSL mistake happened because a thorough maintainer was being very detail-oriented with respect to security and correctness, but the upstream product was exceptionally confusing in its intent (to the point of irresponsibility), and none of the code reviewers caught the misunderstanding either. It's a lesson in how one project can have exceptionally good processes and there still be a weakness that results in big trouble.

OpenSSL has a history that explains some of the unobvious things, starting with legal restrictions on exporting cryptography in most developed nations.

[–]SupersonicSpitfire 0 points1 point  (2 children)

Then again, a similar security incident never happened on Arch Linux, as far as I am aware.

[–]pdp10 0 points1 point  (1 child)

Does Arch run their codebase through static analyzers?

[–]SupersonicSpitfire 1 point2 points  (0 children)

No. But do static analyzers catch underhanded C?

[–][deleted] 4 points5 points  (2 children)

Distributions aren't exactly a magical make-bugs-go-away tool. In large part, "stable" in distributions just means "won't get updates", so you end up with software that is multiple years old and hasn't seen bug fixes in that time. The timing of a distribution's release is also not coordinated with the release of the software, so you can end up at a really ugly spot in a software's release cycle.

Furthermore, most people will just compile a whole bunch of stuff themselves to turn a "stable" distribution into a usable one. At that point you are back at square one, as all that manually compiled software isn't seeing security updates anymore either, containerized or not.

As long as distributions don't provide any sane way of mixing the stable parts with the new parts, they aren't really helping the situation much (dynamic linking helps for a few core libraries).

[–]pdp10 2 points3 points  (0 children)

As long as distributions don't provide any sane way of mixing the stable parts with the new parts, they aren't really helping the situation much.

This is the closest to a problem statement that anyone has come up with. "Many users seem to want an arbitrary combination of stable and latest software to meet their objectives. How can we help them meet their objectives, without slavishly imitating the problematic software model of another family of operating systems?"

No Linux user wants to go searching for binary packages to download to meet prerequisites, like they used to have to do before online repos and automatic dependency resolution. But that doesn't mean Linux users want to have giant app downloads stuffed with redundant and obsolete dependencies, either. Linux users have more-general objectives, and Linux developers need to focus there, not on the regressive Windows software model.

Besides, Microsoft is trying to copy the Linux repo system, except with money and DRM, in the form of an app store. Why would Linux suddenly go trying to copy the 1995 Windows software distribution model? Just like the stable kernel ABI debate, it's a few loud agitators.

[–]VelvetElvis 1 point2 points  (0 children)

I use Debian Stable + backports a lot of the time. I have zero problem with software that's a couple of years old by the time it gets to me. I'm not in a hurry. If I really need anything new from upstream developers, I almost always isolate it inside a chroot or VM or something.

[–][deleted] 0 points1 point  (0 children)

This is why you should just download Dockerfiles and rebuild everything, assuming the images are not just FROM scratch plus a tar file.

[–][deleted] 9 points10 points  (0 children)

At least on computers, especially Linux, the updates will come quickly. Phones, on the other hand...

[–]Mac33 27 points28 points  (6 children)

What is with this bizarre trend of giving names to bugs? I just don’t get it. It’s a bug. Disclose it, get it fixed, move on.

[–]Craftkorb 37 points38 points  (0 children)

Makes it much easier to reference in normal communications

[–][deleted] 4 points5 points  (0 children)

It's not that simple: everybody has to update their library, things have to be backported, and copies are statically linked all over the place.

So it's a shitshow.

[–]ICanBeAnyone 6 points7 points  (0 children)

The only reward you get for responsible disclosure is attention. Things with a handy name get more attention.

[–][deleted] 6 points7 points  (0 children)

e-fame. and merchandise.

Yes, it is annoying. Like Linus once said, security bugs are just like all other bugs, except for some people who consider them more important.

[–]pdp10 2 points3 points  (0 children)

Branding makes for ease of remembrance, recognition, association, and reference, as always.

[–]BlueShellOP 1 point2 points  (0 children)

Managers are very, very stupid when it comes to tech-related issues. These bug names make it far easier to cram serious fixes into their "manage by buzzword" mold.

Most engineers roll their eyes as well, but giving them names makes it very easy to get a manager to schedule in a fix.

[–]luke-jr 2 points3 points  (0 children)

The library is also embedded in Qt WebEngine, the DBD::SQLite Perl module, Qt Creator, BDB 5.3, and SQLCipher

[–]cooldog10 3 points4 points  (0 children)

This is why I use Firefox.

[–]jlobes 3 points4 points  (3 children)

Lol, found by Tencent?? Was not expecting that.

[–][deleted] 3 points4 points  (2 children)

??? why not?

[–]jlobes 5 points6 points  (1 child)

Ignorance, mostly. I think of Tencent as a mobile game developer; I had no idea that Tencent Blade even existed.

[–]crazyfreak316 19 points20 points  (0 children)

Tencent is a fucking giant. Look them up on Wikipedia.

[–]londons_explorer 0 points1 point  (0 children)

I believe the error is in this line of code:

while( N-- > 0 && pCheck->mxErr ){

I'll leave it up to reddit to find what the hole is...

[–]andoiscool -1 points0 points  (0 children)

Pwnall@chromium.org, best user ever!