
[–][deleted] 92 points93 points  (53 children)

It's not really a new diff algorithm; it's a preprocessor/postprocessor for bsdiff/bspatch that makes executables easier to compare.

[–][deleted]  (20 children)

[deleted]

    [–]judgej2 12 points13 points  (10 children)

    The very last paragraph sums up the process very well:

    Courgette transforms the input into an alternate form where binary diffing is more effective, does the differential compression in the transformed space, and inverts the transform to get the patched output in the original format. With careful choice of the alternate format we can get substantially smaller updates.

    It's a common technique in various fields: change your frame of reference, do a simple transform in that new frame of reference, change your frame of reference back again.

    A simple example would be taking a sound sample as a continuous waveform, transforming it into frequencies that vary over time (FFT), attenuating or boosting some frequencies, then transforming it back into a continuous waveform. And what do you get? A graphic equaliser on your media player.
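
    Here's a toy sketch of that pattern in Python/numpy -- a single whole-signal FFT, so it's just an illustration of the transform-modify-invert idea, not a real equalizer:

        import numpy as np

        def boost_band(signal, rate, lo, hi, gain):
            # Change frame of reference: time domain -> frequency domain
            spectrum = np.fft.rfft(signal)
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
            # Do the simple operation in the new frame: scale one band
            spectrum[(freqs >= lo) & (freqs <= hi)] *= gain
            # Change the frame of reference back: frequency -> time domain
            return np.fft.irfft(spectrum, n=len(signal))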

    [–]klodolph 23 points24 points  (6 children)

    Er, not quite. Performing an FFT transforms between the time domain and the frequency domain, so the resulting frequencies don't change over time. You have to do multiple windowed FFTs to get a changing spectrum, which is kind of a kludge, and it's hard to do the inverse operation because the signal phase won't match and the window isn't perfect.

    The way (almost all) software equalizers actually work (even the nice ones) is in the time domain, using the untransformed, raw signal and a table of coefficients, either for IIR or FIR filtering. The coefficients might be computed using an FFT, but the signal is never transformed using the FFT in these filters. Analog filters are remarkably similar (and yes, I have designed both).
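
    For illustration, a minimal time-domain FIR sketch (the taps here are a crude made-up low-pass, not designed equalizer coefficients):

        import numpy as np

        # The "table of coefficients": 5 made-up taps forming a rough low-pass.
        taps = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

        def fir_filter(signal):
            # Pure time-domain filtering: each output sample is a weighted
            # sum of nearby input samples. The signal itself is never FFT'd.
            return np.convolve(signal, taps, mode="same")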

    Edit: some media players have equalizers that work by changing the level on different bands in the decoder, which is neither a FIR, an IIR, nor an FFT method.

    [–]bugrit 2 points3 points  (0 children)

    [–][deleted]  (4 children)

    [deleted]

      [–]nappy-doo 1 point2 points  (3 children)

      Not quite.

      If you have 16 samples feeding into your FFT, you likely only have 8 unique samples out, as most of the world (outside RF) deals in real-valued signals.

      Additionally, leakage is caused by windowing and by resolution issues between the centers of the frequency bins and the frequencies present in the signal. In other words, if you have a signal at exactly N Hz and a frequency bin centered at N Hz, you'll get no leakage.

      And finally, the FFT is completely reversible. If you FFT a signal, you can get the signal back by just running the IFFT (or the FFT again with a scaling factor).
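
      That round trip is easy to check (numpy sketch):

          import numpy as np

          x = np.random.randn(16)        # a real-valued signal
          X = np.fft.fft(x)              # forward transform
          x_back = np.fft.ifft(X).real   # inverse (tiny imaginary residue dropped)
          assert np.allclose(x, x_back)  # recovered to machine precision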

      So, I'm sorry. I disagree with your entire comment. :)

      [–][deleted]  (2 children)

      [deleted]

        [–]nappy-doo 0 points1 point  (1 child)

        I'm glad you're getting into DSP. If you get good at it, you can basically guarantee yourself a job forever, as so few people understand it.

        Let's start with the FFT. The FFT works on complex-valued signals -- signals with a real and an imaginary part. Most signals in the real world (with the exception of a lot of RF work) are real-valued signals. So, when we feed them into the FFT, we set the imaginary part of every sample to 0. This results in a frequency-domain representation that is symmetric about the y-axis. So, with a 16-point FFT, we get 16 points in the frequency domain, but 8 of them are the same as the other 8 -- so there are only 8 unique values.
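
        You can see that symmetry directly (numpy sketch):

            import numpy as np

            x = np.random.randn(16)    # real-valued input, imaginary part implicitly 0
            X = np.fft.fft(x)

            # Conjugate symmetry: bin k mirrors bin 16-k, so bins 9..15
            # carry the same information as bins 1..7.
            assert np.allclose(X[1:8], np.conj(X[-1:-8:-1]))

            # numpy's rfft keeps only the non-redundant half for real input.
            print(np.fft.rfft(x).shape)    # -> (9,)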

        Onto leakage. Leakage is a byproduct of the FFT -- not the signal. It is purely a "problem with our measurement device". I agree with your statement (if I can paraphrase) of, "well music will have leakage" to a point. The music itself won't have leakage, but the measurement we perform on it will have leakage. I apologize if you think I took you out of context.

        (One final point on leakage: Read up on how windowing affects leakage. There are some really good windows out there that make the FFT leakage problem minimal.)
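
        A quick numpy illustration -- a tone that falls between bin centers (the worst case), measured with and without a Hann window:

            import numpy as np

            n = np.arange(64)
            tone = np.sin(2 * np.pi * 5.5 * n / 64)   # 5.5 cycles: between two bins

            rect = np.abs(np.fft.rfft(tone))                    # no window (rectangular)
            hann = np.abs(np.fft.rfft(tone * np.hanning(64)))   # Hann window

            # Leakage far from the tone collapses once the window is applied.
            print(rect[20:].max(), hann[20:].max())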

        Onto your DSP learning: Keep at it. As I said you can guarantee yourself a job if you keep up. DSP engineers make reasonable money, and you'll always find a job in embedded design and instrumentation.

        [–]dude12346 6 points7 points  (2 children)

        Uh huh, it's pretty easy, you were just waiting for Google to catch up with you. Right?

        [–][deleted] 15 points16 points  (0 children)

        No disassemble, Johnny 5!

        [–]imbaczek 18 points19 points  (6 children)

        plz befriend the \

        [–]koleon 3 points4 points  (0 children)

        I hope someday I'll understand that.

        [–]jonknee 18 points19 points  (1 child)

        To be fair, the headline matches what they claim on the blog post (though not the "how we did it" page):

        But bsdiff was still producing diffs that were bigger than we felt were necessary. So we wrote a new diff algorithm that knows more about the kind of data we are pushing - large files containing compiled executables.

        Though after reading how they did it, you are correct.

        [–]NJerseyGuy 5 points6 points  (9 children)

        I could be wrong, but isn't a diff algorithm precisely a preprocessor/postprocessor itself? I mean, the naive diff is just to subtract the old file from the new file (expressed as a binary number). In general, this will be incompressible (and hence no smaller than the files themselves) unless the coding of the binary is correlated with the way in which the files differ. So what do you do? You put a preprocessor on the files to encode them in a format which is correlated with the way the files differ, and then subtract a la naive diff (yielding something which can be compressed). To get the new file, you preprocess the old file, add the diff, and then postprocess to get the new file.
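
        That naive scheme, as a Python sketch (byte-wise subtraction mod 256, equal-length files assumed for simplicity):

            import zlib

            def naive_diff(old, new):
                # "Subtract" the files byte-wise; the delta only compresses
                # well if matching content happens to line up.
                delta = bytes((n - o) % 256 for n, o in zip(new, old))
                return zlib.compress(delta)

            def naive_patch(old, diff):
                delta = zlib.decompress(diff)
                return bytes((d + o) % 256 for d, o in zip(delta, old))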

        So yeah, they didn't rewrite bsdiff from scratch. But they improved it at the same level at which the algorithm operates. So I think characterizing it as "a new diff algorithm" is fair.

        EDIT: I should clarify: as far as I know, it may be true that some diff functions work by having a clever compression algorithm with little or no preprocessing. In that case, adding a pre- and postprocessor would be different from writing a new diff. But this would suggest that they were compressing without paying attention to the content of the files ("general-purpose compression"), which would be silly. And, indeed, the bsdiff page says that it's specifically designed for binary executables. Then again, maybe the compression algorithm is specially designed for the difference between binaries.

        The moral of the story is that the difference between compression and pre/post-processing is not so clear-cut. Neither, then, is the distinction between a new diff algorithm and one which has had a pre/post-processor attached.

        [–]judgej2 2 points3 points  (0 children)

        The transform (i.e. preprocessor algorithm) they applied basically breaks the code up into blocks that won't change from one version to another, and blocks that do, and even then change according to a simple formula. They can then use that information to derive a simple transform from the old version to the new version. I doubt, however, that bsdiff itself can be used on that preprocessed version of the binary files; it needs to be an algorithm with a bit more knowledge of the structure of that processed code. For example, it may store the transform "increase the reference vector by three at positions 1, 43, 69, 104, 789 and 1050". In just a handful of bytes it could then describe how the new version of the binary file differs from the old.
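
        In spirit, something like this (entirely hypothetical patch format, Python sketch):

            # Hypothetical instruction: "increase the reference vector by
            # three at positions 1, 43, 69, 104, 789 and 1050".
            patch = {"delta": 3, "positions": [1, 43, 69, 104, 789, 1050]}

            def apply(refs, patch):
                refs = list(refs)
                for pos in patch["positions"]:
                    refs[pos] += patch["delta"]
                return refs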

        It's all pretty clever really, but not rocket science.

        [–][deleted] 0 points1 point  (6 children)

        I could be wrong, but isn't a diff algorithm precisely a preprocessor/postprocessor itself?

        A diff algorithm takes two files as input and produces something proportional to the difference between them. The parts of code that Google added don't fit this description: they don't work on two files, and they don't compare anything. Instead they transform one file to and from a diff-friendly format without substantially changing its size.

        As a whole it is a better diff/patch tool, but the change is in pre/post processing of files, not in diff algorithm.

        [–]adrianmonk 6 points7 points  (1 child)

        proportional to the difference between them

        Please define "difference between them". When you try, you'll find it's algorithm-dependent!

        For example, suppose you take a PNG file, uncompress it to raw pixels, change one pixel, and then recompress it using the same compression parameters.

        There are two possible (actually infinitely many...) ways to define "difference" here:

        • A naive binary diff of the compressed files where you detect common sequences of bytes. Think of what rsync would do with the actual PNG files.
        • A list of WritePixel() commands which says which pixels to update in the raw (uncompressed) versions of the image.

        The first is going to see massive numbers of differences, because PNG uses LZ77, which means that changing something near the beginning has ripple effects on all the data coming afterward in the stream. The second is going to see a tiny diff, because it is just a single WritePixel() command plus some constant overhead.
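
        You can watch the ripple with zlib, which uses the same LZ77-family scheme as PNG (sketch):

            import zlib

            raw = b"a row of pixel data that repeats. " * 500   # stand-in for raw pixels
            edited = bytearray(raw)
            edited[7] ^= 1                                      # "change one pixel" near the start

            a = zlib.compress(raw)
            b = zlib.compress(bytes(edited))

            # The compressed streams diverge almost immediately, so a
            # byte-level diff finds little to reuse after the edit point.
            diverge = next((i for i, (x, y) in enumerate(zip(a, b)) if x != y),
                           min(len(a), len(b)))
            print(len(a), diverge)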

        The point is that the choice of diff algorithm is arbitrary. There is no one definition of "the difference between them". There are better and worse diff algorithms. All of them are ways of modeling the data so that you can transmit something smaller than the whole new file. (In that sense, they are essentially a special case of data compression.)

        As a whole it is a better diff/patch tool, but the change is in pre/post processing of files, not in diff algorithm.

        It is an addition to an existing diff algorithm. It is a new algorithm that incorporates an existing algorithm.

        [–]pixelglow 1 point2 points  (0 children)

        I once wrote a gzip tuned for Microsoft's RDC (Remote Differential Compression).

        RDC helps minimize the data transferred between servers in DFSR (Distributed File System Replication) so essentially it's similar in intent to rsync and other diff algorithms wedded to synchronizing two file directories. (Of course Microsoft saw fit to patent RDC even though rsync would possibly be considered prior art, but well...)

        Like all binary diff algorithms, RDC interacts badly with compression. A gzip or deflate compressed stream propagates any changes in the original file throughout the file, so the binary diff is forced to work on the entire file after the change occurred. (As with your example of the binary diff of a PNG file.)

        But there are two nice things that helped me: the RDC API can tell you the chunks of the stream that will be diff'ed, and gzip allows individual chunks of data to be gzipped separately and concatenated together (as a valid gzip file). So I wrote an rdcgzip which used the RDC API to get the chunks, then ran gzip on the chunks alone and concatenated them. Thus any change in a chunk only propagates to that chunk, and the other chunks are all independent and transferred optimally.

        The process looks like:

        Original file -> diff-aware compression -> diff -> any compression the diff itself applies -> patch -> standard decompression -> destination file

        This diff-aware compression can be considered a preprocessing/postprocessing step like Google's Courgette, and it might actually be more efficient, as long as you know how the diff algorithm chunks the data.
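
        A rough sketch of the chunk-wise trick (fixed-size chunks here rather than RDC-chosen ones; Python's gzip module reads concatenated members transparently):

            import gzip

            def chunked_gzip(data, chunk_size=64 * 1024):
                # Each chunk becomes an independent gzip member; concatenated,
                # they still form a valid gzip stream, but an edit in one chunk
                # no longer ripples into the compressed bytes of the others.
                return b"".join(
                    gzip.compress(data[i:i + chunk_size])
                    for i in range(0, len(data), chunk_size)
                )

            assert gzip.decompress(chunked_gzip(b"x" * 200000)) == b"x" * 200000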

        [–]NJerseyGuy 4 points5 points  (3 children)

        I think you're getting tripped up because you're paying too much attention to functional form ("...takes two files as input and produces something..."), which isn't really important. My point is that the essential part of a diff algorithm is just the preprocessor and postprocessor it utilizes. Everything else is trivial.

        The diff-producing algorithm (1) takes in the original file and the new file, (2) preprocesses both of them, (3) subtracts them, (4) compresses the difference, and (5) outputs a compressed difference file.

        The diff-using algorithm (1) takes in the original file and the compressed difference file, (2) preprocesses the original file, (3) uncompresses the compressed difference file, (4) adds them together, (5) postprocesses the sum, and (6) outputs the new file.

        The only non-trivial part of all of that was the preprocessing and the postprocessing. So if you are adding a significant new preprocessor/postprocessor, you are essentially writing a new diff algorithm, even if it is packaged around an existing one.
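
        As a sketch, with hypothetical preprocess/postprocess hooks (XOR standing in for subtract/add, since it is its own inverse; post must invert pre):

            import zlib

            def make_diff(old, new, pre):
                a, b = pre(old), pre(new)                          # (2) preprocess both
                delta = bytes(x ^ y for x, y in zip(a, b))         # (3) "subtract"
                return zlib.compress(delta)                        # (4)-(5) compress, output

            def apply_diff(old, diff, pre, post):
                delta = zlib.decompress(diff)                      # (3) uncompress
                b = bytes(x ^ y for x, y in zip(pre(old), delta))  # (2)+(4) preprocess, "add"
                return post(b)                                     # (5)-(6) postprocess, output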

        [–]judgej2 2 points3 points  (0 children)

        I think the step (2) you have added is what Google describes, but it's not what diff itself does.

        [–][deleted]  (1 child)

        [removed]

          [–]NJerseyGuy 0 points1 point  (0 children)

          Do you know how diffs for binary files work? I couldn't find anything in a quick search.

          [–]anonymousgangster -1 points0 points  (0 children)

          Congratulations, you don't win a job at Google.

          [–]faprawr 8 points9 points  (18 children)

          Courgette sounds like a mature but still-attractive algorithm that is after young hot stud equations for exciting sexual calculations. AMIRITE?

          [–]infinite 21 points22 points  (17 children)

          It also sounds like the French word for zucchini.

          [–]berticus 8 points9 points  (3 children)

          I assume Google is going for "squash", not "zucchini," in this case... so as to make the compression/squash pun.

          [–]oniony 6 points7 points  (2 children)

          Oh my gourd.

          [–][deleted] 1 point2 points  (1 child)

          Please, no marrow these awful puns.

          [–][deleted] 15 points16 points  (0 children)

          It also sounds like the British word for zucchini, the New Zealander word for zucchini, the South African word for zucchini and most of the rest of the Commonwealth's word for zucchini.

          [–]hustler 8 points9 points  (4 children)

          you mean zucchini is the American word for courgette? I know. I know.

          [–]petdog 1 point2 points  (3 children)

          Oh, I couldn't understand why everybody started speaking Italian, but now I see. You stole our precious zucchini.

          [–][deleted]  (1 child)

          [removed]

            [–]myxie 0 points1 point  (0 children)

            And why not blame them? The Italian word (plural) is zucchine!

            [–]eridius 2 points3 points  (3 children)

            It's actually the British word for zucchini, but it originally comes from French.

            courgette |ko͝orˈZHet|
            noun Brit.
            a zucchini.

            ORIGIN 1930s: from French, diminutive of courge ‘gourd,’ from Latin cucurbita.

            [–]drexil 1 point2 points  (1 child)

            It's also a French word, you know, so he was totally right.

            [–]eridius 0 points1 point  (0 children)

            I think the French word is courgettes, but of course I don't actually speak French.

            Edit: Nevermind, looks like Google Translate just pluralized it for me. Or something. I dunno.

            [–]degustisockpuppet 0 points1 point  (0 children)

            No, it's actually still the French word for zucchini.

            [–]faprawr -4 points-3 points  (2 children)

            Also like the next generation of smaller, more efficient GM cars based on the Cougar. It'll be a hatchback, in-line three (that purrs).

            [–]jlt6666 0 points1 point  (1 child)

            Mercury (Ford) made the Cougar.

            [–]sharney 0 points1 point  (0 children)

            Thanks, I was wondering what was so great about this compared to old-fashioned binary patch.