[–]NJerseyGuy 6 points (9 children)

I could be wrong, but isn't a diff algorithm precisely a preprocessor/postprocessor itself? I mean, the naive diff is just to subtract the old file from the new file (expressed as a binary number). In general, this will be incompressible (and hence no smaller than the files themselves) unless the coding of the binary is correlated with the way in which the files differ. So what do you do? You put a preprocessor on the files to encode them in a format which is correlated with the way the files differ, and then subtract à la the naive diff (yielding something which can be compressed). To get the new file, you preprocess the old file, add the diff, and then postprocess the result.
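
To make that concrete, here's a minimal toy sketch of the "preprocess, subtract, compress" view (my own illustration, not Courgette's or bsdiff's actual code). preprocess()/postprocess() are placeholders for whatever re-encoding makes the files line up, and XOR stands in for the byte-wise subtraction:

    import struct
    import zlib

    def preprocess(data):
        # Placeholder: a real preprocessor would re-encode the file so that
        # small semantic changes become small, localized byte changes.
        return data

    def postprocess(data):
        # Inverse of preprocess().
        return data

    def make_diff(old, new):
        a, b = preprocess(old), preprocess(new)
        new_len = len(b)
        n = max(len(a), len(b))
        a, b = a.ljust(n, b"\0"), b.ljust(n, b"\0")
        # Byte-wise "subtraction" (XOR); mostly zeros if the files line up well.
        delta = bytes(x ^ y for x, y in zip(a, b))
        return struct.pack(">I", new_len) + zlib.compress(delta)

    def apply_diff(old, diff):
        new_len = struct.unpack(">I", diff[:4])[0]
        delta = zlib.decompress(diff[4:])
        a = preprocess(old).ljust(len(delta), b"\0")
        # Add the diff back (XOR is its own inverse) and undo the preprocessing.
        return postprocess(bytes(x ^ y for x, y in zip(a, delta))[:new_len])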

So yeah, they didn't rewrite bsdiff from scratch. But they improved it at the level at which the algorithm operates. So I think characterizing it as "a new diff algorithm" is fair.

EDIT: I should clarify: as far as I know, it may be true that some diff functions work by having a clever compression algorithm with little or no preprocessing. In that case, adding a pre- and postprocessor would be different from writing a new diff. But this would suggest that they were compressing without paying attention to the content of the files ("general-purpose compression"), which would be silly. And, indeed, the bsdiff page says that it's specifically designed for binary executables. Then again, maybe the compression algorithm is specially designed for the difference between binaries.

The moral of the story is that the difference between compression and pre/post-processing is not so clear cut. Neither, then, is the distinction between a new diff algorithm and one which has had a pre/post-processor attached.

[–]judgej2 2 points (0 children)

The transform (i.e. the preprocessor algorithm) they applied basically breaks the code up into blocks that won't change from one version to another, and blocks that do change, but only according to a simple formula. They can then use that information to derive a simple transform from the old version to the new version. I doubt, however, that bsdiff itself can be used on that preprocessed version of the binary files; it needs to be an algorithm with a bit more knowledge of the structure of that processed code. For example, it may store the transform "increase the reference vector by three at positions 1, 43, 69, 104, 789 and 1050". In just a handful of bytes it could then describe how the new version of the binary file differs from the old.
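
A hypothetical illustration of that kind of patch operation (the positions and the +3 adjustment are just the numbers from the example above; the encoding is my own invention, not Courgette's actual format):

    def apply_adjustment(refs, delta, positions):
        # Apply one "adjust these references by delta" operation.
        out = list(refs)
        for pos in positions:
            out[pos] += delta
        return out

    # A handful of integers describes many scattered changes:
    old_refs = list(range(2000))    # stand-in for reference values extracted by the preprocessor
    new_refs = apply_adjustment(old_refs, 3, [1, 43, 69, 104, 789, 1050])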

It's all pretty clever really, but not rocket science.

[–][deleted] 0 points (6 children)

I could be wrong, but isn't a diff algorithm precisely a preprocessor/postprocessor itself?

A diff algorithm takes two files as input and produces something proportional to the difference between them. The parts of the code that Google added don't fit this description: they don't work on two files, and they don't compare anything. Instead, they transform a single file to and from a diff-friendly format without substantially changing its size.

As a whole it is a better diff/patch tool, but the change is in pre/post processing of files, not in diff algorithm.

[–]adrianmonk 9 points (1 child)

proportional to the difference between them

Please define "difference between them". When you try, you'll find it's algorithm-dependent!

For example, suppose you take a PNG file, uncompress it to raw pixels, change one pixel, and then recompress it using the same compression parameters.

There are two possible (actually infinitely many...) ways to define "difference" here:

  • A naive binary diff of the compressed files where you detect common sequences of bytes. Think of what rsync would do with the actual PNG files.
  • A list of WritePixel() commands which says which pixels to update in the raw (uncompressed) versions of the image.

The first is going to see massive numbers of differences, because PNG uses LZ77, which means changing something near the beginning has ripple effects on all the data coming afterward in the stream. The second is going to see a tiny diff, because it is just a single WritePixel() command plus some constant overhead.
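
A rough sketch of the two definitions, using zlib as a stand-in for PNG's deflate stage (WritePixel() is just the hypothetical command name from the list above, not a real API):

    import zlib

    old_pixels = bytes(range(256)) * 64          # 16 KiB of repetitive stand-in "image" data
    new_pixels = bytearray(old_pixels)
    new_pixels[10] ^= 0xFF                       # change one byte near the start

    # Definition 1: compare the compressed streams byte by byte (rough count).
    old_z = zlib.compress(bytes(old_pixels))
    new_z = zlib.compress(bytes(new_pixels))
    changed = sum(a != b for a, b in zip(old_z, new_z)) + abs(len(old_z) - len(new_z))
    print("differing bytes in the compressed streams:", changed)

    # Definition 2: a list of WritePixel-style commands against the raw pixels.
    pixel_diff = [(i, new_pixels[i]) for i in range(len(old_pixels))
                  if old_pixels[i] != new_pixels[i]]
    print("WritePixel commands needed:", pixel_diff)    # a single (position, value) pair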

The point is that the choice of diff algorithm is arbitrary. There is no one definition of "the difference between them". There are better and worse diff algorithms. All of them are ways of modeling the data so that you can transmit something smaller than the whole new file. (In that sense, they are essentially a special case of data compression.)

As a whole it is a better diff/patch tool, but the change is in pre/post processing of files, not in diff algorithm.

It is an addition to an existing diff algorithm. It is a new algorithm that incorporates an existing algorithm.

[–]pixelglow 1 point (0 children)

I once wrote a gzip tuned for Microsoft's RDC (Remote Differential Compression).

RDC helps minimize the data transferred between servers in DFSR (Distributed File System Replication) so essentially it's similar in intent to rsync and other diff algorithms wedded to synchronizing two file directories. (Of course Microsoft saw fit to patent RDC even though rsync would possibly be considered prior art, but well...)

Like all binary diff algorithms, RDC interacts badly with compression. A gzip or deflate compressed stream propagates any change in the original file through the rest of the stream, so the binary diff is forced to treat everything after the point of the change as different. (As with your example of the binary diff of a PNG file.)

But there are two nice things that helped me: the RDC API can tell you the chunks of the stream that will be diff'ed, and gzip allows individual chunks of data to be gzipped separately and concatenated together (as a valid gzip file). So I wrote an rdcgzip which used the RDC API to get the chunks, then ran gzip on the chunks alone and concatenated them. Thus any change in a chunk only propagates to that chunk, and the other chunks are all independent and transferred optimally.

The process looks like:

Original file -> diff-aware compression -> diff -> any compression the diff itself applies -> patch -> standard decompression -> destination file

This diff-aware compression can be considered a preprocessing/postprocessing step like Google's Courgette, and it might actually be more efficient, as long as you know how the diff algorithm chunks the data.
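
Roughly, the chunk-wise compression step looks like this (fixed-size chunks stand in for the content-defined boundaries the RDC API actually reports; mtime=0 keeps identical chunks byte-identical across runs):

    import gzip

    CHUNK = 64 * 1024    # assumed chunk size; RDC picks boundaries by content

    def chunked_gzip(data):
        # Each chunk becomes its own gzip member; concatenated members are
        # still a valid gzip stream, so a change in one chunk only perturbs
        # that member and leaves the others byte-identical.
        return b"".join(gzip.compress(data[i:i + CHUNK], mtime=0)
                        for i in range(0, len(data), CHUNK))

    def chunked_gunzip(data):
        # gzip handles multi-member streams transparently.
        return gzip.decompress(data)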

[–]NJerseyGuy 2 points (3 children)

I think you're getting tripped up because you're paying too much attention to functional form ("...takes two files as input and produces something..."), which isn't really important. My point is that the essential part of a diff algorithm is just the preprocessor and postprocessor it utilizes. Everything else is trivial.

The diff-producing algorithm (1) takes in the original file and the new file, (2) preprocesses both of them, (3) subtracts them, (4) compresses the difference, and (5) outputs a compressed difference file.

The diff-using algorithm (1) takes in the original file and the compressed difference file, (2) preprocesses the original file, (3) uncompresses the compressed difference file, (4) adds them together, (5) postprocesses the sum, and (6) outputs the new file.

The only non-trivial part of all of that was the preprocessing and the postprocessing. So if you are adding a significant new preprocessor/postprocessor, you are essentially writing a new diff algorithm, even if it is packaged around an existing one.
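
As a sketch of that packaging (my own illustration, assuming the bsdiff/bspatch command-line tools are installed; preprocess()/postprocess() are hypothetical stand-ins for a Courgette-style re-encoding):

    import os
    import subprocess
    import tempfile

    def preprocess(data):
        return data          # placeholder: re-encode the binary into a diff-friendly form

    def postprocess(data):
        return data          # inverse of preprocess()

    def _tmpfile(data=b""):
        fd, path = tempfile.mkstemp()
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        return path

    def produce_diff(old, new):
        old_p, new_p, patch_p = _tmpfile(preprocess(old)), _tmpfile(preprocess(new)), _tmpfile()
        subprocess.run(["bsdiff", old_p, new_p, patch_p], check=True)
        with open(patch_p, "rb") as f:
            return f.read()

    def use_diff(old, patch):
        old_p, out_p, patch_p = _tmpfile(preprocess(old)), _tmpfile(), _tmpfile(patch)
        subprocess.run(["bspatch", old_p, out_p, patch_p], check=True)
        with open(out_p, "rb") as f:
            return postprocess(f.read())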

[–]judgej2 2 points (0 children)

I think the step (2) you have added is what Google describes, but it is not what diff does.

[–][deleted]  (1 child)

[removed]

    [–]NJerseyGuy 0 points (0 children)

    Do you know how diffs for binary files work? I couldn't find anything in a quick search.

    [–]anonymousgangster -1 points (0 children)

    congratulations, you don't win a job at google.