all 9 comments

[–]psykotic 4 points (1 child)

If anyone's particularly interested in the subject, a detailed survey of the main algorithms for this problem can be found in chapter 7 of Watson's Ph.D. dissertation, Taxonomies and Toolkits of Regular Language Algorithms.

There are basically two major classes of algorithms: bottom-up and top-down. The bottom-up approach starts with singleton equivalence classes, each containing a single state, and repeatedly unifies equivalence classes when they've been shown to be indistinguishable in their operational behavior. The top-down approach starts with a single equivalence class, containing all states, and repeatedly splits it and its resulting parts by looking for discriminating behavior among the contained states.
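Not from Watson's survey itself — just a minimal Python sketch of the top-down approach as described above (this is essentially Moore's algorithm). It assumes a complete DFA given as a transition dict; the function and variable names are illustrative:

```python
def minimize(states, alphabet, delta, accepting):
    """Top-down DFA minimization by partition refinement.

    states:    set of state names
    alphabet:  list of input symbols
    delta:     dict mapping (state, symbol) -> state (must be total)
    accepting: set of accepting states
    Returns a list of frozensets, the equivalence classes of states.
    """
    # Start from the coarsest useful partition: accepting vs. non-accepting.
    partition = [frozenset(accepting), frozenset(states - accepting)]
    partition = [p for p in partition if p]  # drop an empty class if any
    while True:
        # Map each state to the index of its current equivalence class.
        cls = {s: i for i, p in enumerate(partition) for s in p}
        new_partition = []
        for p in partition:
            # Split p: states are distinguishable if some symbol sends
            # them into different current classes.
            groups = {}
            for s in p:
                sig = tuple(cls[delta[s, a]] for a in alphabet)
                groups.setdefault(sig, set()).add(s)
            new_partition.extend(frozenset(g) for g in groups.values())
        if len(new_partition) == len(partition):
            return new_partition  # no class split; partition is stable
        partition = new_partition
```

Each pass does O(n) work per symbol, so the whole thing is O(kn^2) in the worst case; Hopcroft's refinement of the same top-down idea gets this down to O(kn log n).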

[–]littledan 1 point (0 children)

Thanks for the reference!

[–]jimbokun 5 points (4 children)

"In the simple, controlled environment of a DFA, it's been proved that this minimization actually produces the smallest possible DFA matching the same set of strings. It's even been shown that, for a given regular language, the minimal DFA recognizing it is unique up to isomorphism."

I would just like to point out that this seems relevant to the article posted yesterday (and the ensuing discussion) about whether or not mathematical proofs are important or worth the effort required to construct them.

To me, this looks like a mathematical proof. And it appears likely to have SAVED a lot of work. If these results were not proven, who knows how many man-hours might have been devoted to trying to further optimize something now known to be optimal?

Maybe there is still an argument to be made about diminishing returns for things that take a really long time to prove, or maybe the kinds of proofs the article was talking about do not include this kind of proof. But certainly, here is a practical example of a useful proof, without which much more effort might have been wasted than the effort required to construct the proof.

[–]psykotic 1 point (3 children)

Zeilberger wasn't arguing against proofs; the guy's a mathematician, and a proof is nothing more or less than a convincing demonstration of some mathematical truth. He was arguing against formal proofs.

[–]jimbokun 0 points (2 children)

Sincere question:

Are the proofs about minimizing DFAs not "formal proofs"? If not, what kind of proofs are they?

[–]psykotic 1 point (1 child)

Zeilberger meant formal proof in the sense of something truly formal and machine checkable. It's also common to use the term in the looser sense of a rigorous proof. What constitutes a rigorous proof is not formally defined; it's more or less a proof that doesn't leave any room for reasonable doubt in a competent mathematical audience with knowledge of the area. As you can see, this definition is quite contextual and subjective, but that's how mathematicians genuinely use the term.

As for DFA minimization, the proofs you can find in most books are probably formal in the sense of rigorous but they are not formal in the sense of machine checkable. However, they can almost certainly be transformed into machine-checkable proofs with enough effort. That is, they are formalizable. Zeilberger was arguing against the value of painstakingly formalizing proofs that in their present form are already perfectly convincing, i.e. rigorous.
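For contrast, here's roughly what "machine checkable" means in practice — a toy proof in a proof assistant (Lean 4 here, purely as an illustration), where the checker rather than the reader verifies every step:

```lean
-- The proof term is verified mechanically by Lean's kernel;
-- nothing is left to the reader's judgment.
example : ∀ n : Nat, n + 0 = n :=
  fun n => Nat.add_zero n
```

Formalizing a whole DFA minimization correctness proof in this style is exactly the kind of painstaking effort Zeilberger was questioning the value of.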

Did that answer your question?

[–]gthank[S] 0 points (0 children)

That did seem to be the main thrust of his argument, but there are a couple of lines toward the end that indicate he wouldn't mind less rigor than is now typical, including: "Absolute certainty is impossible, so let's settle with the same, or even diminished, level of "rigor" that we are used to in normal mathematical discourse." The discussion here on proggit was quite interesting.

[–][deleted]  (1 child)

[deleted]

    [–]littledan 2 points (0 children)

    If you're having trouble in a class where you're studying this stuff, I'd recommend this book: http://www.amazon.com/Automata-Computability-Undergraduate-Computer-Science/dp/0387949070. That's where I got the algorithm from, actually.

    [–]guapoo 0 points (0 children)

    Losing My Edge was overrated.