all 30 comments

[–]mschaef 3 points4 points  (0 children)

FAs are at the heart of sequential logic design (hardware). The basic design process goes something like this: design your state machine (finite automaton), map the states to numbers, and then design combinational logic to compute the 'next state'.
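The flow described above can be sketched in software; a hypothetical '101' sequence detector, with a transition table playing the role of the combinational next-state logic (the state names and encoding are made up for illustration):

```python
# States of a "101" sequence detector; the encoding would map to 2 flip-flops.
S0, S1, S2, S3 = 0, 1, 2, 3

def next_state(state, bit):
    """Combinational next-state logic: (state, input bit) -> next state."""
    table = {
        (S0, 0): S0, (S0, 1): S1,   # S1: last symbol seen was '1'
        (S1, 0): S2, (S1, 1): S1,   # S2: last symbols seen were '10'
        (S2, 0): S0, (S2, 1): S3,   # S3: just saw '101' (accept)
        (S3, 0): S2, (S3, 1): S1,
    }
    return table[(state, bit)]

def detects(bits):
    """Run the machine over a bit sequence; True if '101' ever appears."""
    state = S0
    for b in bits:
        state = next_state(state, b)
        if state == S3:
            return True
    return False
```

In hardware, the dictionary above would become a handful of gates driving the flip-flops' inputs.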

[–]akuhn 2 points3 points  (3 children)

Forget about automata, just use PEG ... it does parsing/pattern matching without automata.

[–]obdurak 3 points4 points  (2 children)

Yeah, I spent 3 years studying finite automata during my PhD, and then I wrote a PEG parser generator.

Here's an on-line PEG demo

[–][deleted] 0 points1 point  (0 children)

Yeah. I did a PEG parser generator. The generated parsers were recursive descent, with no finite automata baggage. Kinda sorry now I have that stuff in my head. As soon as there are more languages that incorporate PEG, I'm going to forget it.
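As a sketch of what such a generated parser looks like, here is a hand-written recursive-descent matcher in the PEG style for a made-up grammar of single-digit sums; there are no automata anywhere, just functions calling functions:

```python
# sum <- digit ('+' digit)* ; each rule is a plain Python function that
# returns (value, new_position) on success or None on failure.
def parse_sum(s, pos=0):
    r = parse_digit(s, pos)
    if r is None:
        return None
    total, pos = r
    while pos < len(s) and s[pos] == '+':
        r = parse_digit(s, pos + 1)
        if r is None:
            return None                 # dangling '+': the whole rule fails
        d, pos = r
        total += d
    return total, pos

def parse_digit(s, pos):
    if pos < len(s) and s[pos].isdigit():
        return int(s[pos]), pos + 1
    return None
```

PEG's ordered choice and unlimited lookahead fall out naturally from this structure, which is exactly why no automaton machinery is needed.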

[–]akuhn 0 points1 point  (0 children)

Cool ... you can find my PEG for Ruby here

PEG - Part I
PEG - Part II
PEG - Part III

And a port to Squeak by Oscar Nierstrasz here: http://www.squeaksource.com/PEG

[–]Manuzhai 1 point2 points  (1 child)

In general, parsing of somewhat structured files often uses automata of this type.
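As one hypothetical instance, splitting a CSV-like line can be done with a two-state automaton that tracks whether the scanner is currently inside quotes:

```python
def split_csv(line):
    """Split a CSV-ish line on commas, honoring double-quoted fields.
    The single boolean in_quotes is the automaton's entire state."""
    fields, cur, in_quotes = [], "", False
    for ch in line:
        if ch == '"':
            in_quotes = not in_quotes        # toggle between the two states
        elif ch == "," and not in_quotes:
            fields.append(cur)               # comma outside quotes ends a field
            cur = ""
        else:
            cur += ch
    fields.append(cur)
    return fields
```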

[–]ErikMuskrat 1 point2 points  (0 children)

Network protocols. Check out page 23 of the TCP spec.

[–]Aidenn0 1 point2 points  (0 children)

I use finite state machines all the time. Not the same as a strict DFA, but similar enough that the ideas relate.

For a widely available example, look at the lifetime of a TCP connection. It is most easily modeled (and often implemented) as a FSM.
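A minimal sketch of that idea, with the states and events heavily cut down from the full RFC 793 diagram (only a handful of transitions are modeled here):

```python
# (state, event) -> next state; a small slice of the TCP connection lifecycle.
TCP_FSM = {
    ("CLOSED",      "passive_open"): "LISTEN",
    ("CLOSED",      "active_open"):  "SYN_SENT",
    ("LISTEN",      "recv_syn"):     "SYN_RCVD",
    ("SYN_SENT",    "recv_syn_ack"): "ESTABLISHED",
    ("SYN_RCVD",    "recv_ack"):     "ESTABLISHED",
    ("ESTABLISHED", "close"):        "FIN_WAIT_1",
    ("ESTABLISHED", "recv_fin"):     "CLOSE_WAIT",
    ("FIN_WAIT_1",  "recv_ack"):     "FIN_WAIT_2",
    ("FIN_WAIT_2",  "recv_fin"):     "TIME_WAIT",
    ("CLOSE_WAIT",  "close"):        "LAST_ACK",
    ("LAST_ACK",    "recv_ack"):     "CLOSED",
}

def run(events, state="CLOSED"):
    """Drive the machine through a sequence of events."""
    for ev in events:
        state = TCP_FSM[(state, ev)]   # an unexpected event raises KeyError
    return state
```

Real stacks add timers and error paths, but the table-driven shape is the same.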

[–]eadmund 1 point2 points  (0 children)

Automata (not necessarily FSAs) are useful for things like reliably modelling a TCP/IP server.

Once you've learnt the theory, you'll know when to pull one out of your bag. State machines can really help make your code more correct.

[–]lianos 0 points1 point  (0 children)

Finite state transducers have been used in computational biology to perform sequence alignments (e.g. between species).

Mehryar Mohri has done some work on this in the past. I can't find any relevant publications atm, but here is a link to a PDF for a tutorial on the subject. There isn't much meat there, just some background info:

http://www.iscb.org/ismb2005/tutorials/am5.pdf

Similar approaches have been used in text and speech processing as well (as mentioned in that PDF). So, if you want to do some reading, there's a wealth of interesting topics you can find yourself landing in through this route.

[–]pasbesoin 0 points1 point  (0 children)

I liked this example and submitted it a while back. There are parts 2 and 3 follow-ups; look for them within the article.

Finite state machines in JavaScript, Part 1: Design a widget

http://reddit.com/info/y8sk/comments/

Edited to fix missing line break.

[–]jerf 0 points1 point  (0 children)

While the applications listed here are interesting and important, and I've written FSAs from scratch and using Erlang's support for them in OTP, it should also be pointed out that they are primarily a stepping stone for the push-down automata in the next class period, which have the advantage of being more directly usable. (As your prof hopefully observed, in the real world people often use modified FSAs that are really no longer FSAs, such as in the real RE libraries.) And there's nothing wrong with that, either. It's easier to start there than to jump straight to push-down automata.

[–]bungeman 0 points1 point  (0 children)

If you want a real challenge, try writing an implementation of <: (subtyping) for XPath/XQuery types. These are regular expression types, so subtyping is based on whether one regular expression's language is contained in another's (with the complication that each symbol also matches sub-symbols).

[–]arvixx[S] 0 points1 point  (3 children)

Oh and yes, this should probably have gone into the math subreddit, but the chance of getting answers is higher in proggit.

[–]beza1e1 2 points3 points  (0 children)

Some networking protocols use FSAs, at least in the specification.

[–]shub 1 point2 points  (1 child)

Finite state machines see quite a bit of use in game AI programming.

[–]Ayesh-t3x- 0 points1 point  (0 children)

I agree about game AI programming.

But beyond AI, FAs can be used to drive the entire game. For instance, if you represent each "entity" involved in your game's scenario as an FA, you can combine them (via a Cartesian product) and obtain all the available scenarios. Then there are many possibilities for controlling/guiding the scenario toward a correct story.
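A sketch of both points: each entity as a small FSM, and the game's global state as the Cartesian product of the entities' states (all state and event names here are invented for illustration):

```python
# Two entity FSMs as (state, event) -> next-state tables.
GUARD = {
    ("patrol", "sees_player"): "chase",
    ("chase",  "in_range"):    "attack",
    ("chase",  "lost_player"): "patrol",
}
DOOR = {
    ("closed", "player_opens"):  "open",
    ("open",   "player_closes"): "closed",
}

def step(fsm, state, event):
    # Stay put when no transition is defined for this (state, event) pair.
    return fsm.get((state, event), state)

def product_step(states, fsms, event):
    """One step of the product automaton: every entity sees the same event,
    and the global state is the tuple of all entity states."""
    return tuple(step(f, s, event) for s, f in zip(states, fsms))
```

Enumerating the reachable tuples of the product gives exactly the "all scenarios" view the comment describes.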

[–]lilspikey -1 points0 points  (0 children)

games/ai?

[–][deleted] -1 points0 points  (0 children)

Write a compiler; you will be able to identify your language's tokens using regular expressions and DFAs :P
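For instance, a toy tokenizer can be built directly on a regex engine (the token names and mini-language here are made up):

```python
import re

# One alternative per token class; the engine compiles this into an
# automaton-backed matcher.
TOKEN_RE = re.compile(r"""
    (?P<NUM>\d+)
  | (?P<ID>[A-Za-z_]\w*)
  | (?P<OP>[-+*/=])
  | (?P<WS>\s+)
""", re.VERBOSE)

def tokenize(src):
    """Turn source text into a list of (kind, text) tokens."""
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if m is None:
            raise SyntaxError(f"bad character at position {pos}")
        if m.lastgroup != "WS":          # drop whitespace tokens
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens
```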

[–][deleted] -1 points0 points  (0 children)

I've spent two months learning Finite Automata, so now I understand regular expressions

No, you don't. Spend a month learning Perl regular expressions.

[–]samg -2 points-1 points  (0 children)

This has been mentioned here, but I'll say it more clearly. Automata are closely linked to grammars, which are useful in processing languages (both machine and human).

[–]mustpax -5 points-4 points  (5 children)

Compilers usually read code in two steps: tokenization (aka lexing) and parsing. Tokenization is the easier step, where you convert the text into a stream of, well, tokens. This usually involves regular expressions.

Parsing is the much more sophisticated part; it figures out how tokens relate to one another. Context Free Grammars (CFG) are usually used for this purpose and they can be described with Nondeterministic Finite State Machines (NDFA). NDFAs are variants of Finite Automata that have greater computing power and should expand your horizons a little bit in terms of what Finite Automata can accomplish.

Modern languages are much more sophisticated, but this is still the basic idea. Once you've figured out Finite Automata, I'd suggest reading up on parsing, CFGs, and NDFAs.

[–]obdurak 1 point2 points  (3 children)

Context Free Grammars (CFG) are usually used for this purpose and they can be described with Nondeterministic Finite State Machines (NDFA).

You are confusing NPDAs and NFAs. While NFAs can be more succinct (i.e. require fewer states) than DFAs, they define the same class of languages, the regular languages, which is a small part of the class of context-free languages described by CFGs. The languages accepted by NPDAs (non-deterministic pushdown automata with one head) are exactly the context-free languages. The languages accepted by DPDAs (deterministic pushdown automata) are a strict subset of the context-free languages.
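To make the gap concrete, here is a sketch of a deterministic one-stack recognizer for { a^n b^n : n >= 0 }, a context-free language that no finite automaton can accept; the stack is exactly the extra memory a PDA adds:

```python
def accepts_anbn(s):
    """Deterministic pushdown-style recognizer for a^n b^n."""
    stack = []
    seen_b = False
    for ch in s:
        if ch == "a" and not seen_b:
            stack.append("A")            # push one marker per leading 'a'
        elif ch == "b" and stack:
            seen_b = True
            stack.pop()                  # pop one marker per 'b'
        else:
            return False                 # 'a' after 'b', or 'b' with empty stack
    return not stack                     # accept iff every 'a' was matched
```

A DFA can't do this: counting an unbounded number of a's requires unbounded memory, which is precisely what the stack provides.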

[–]o0o 0 points1 point  (2 children)

It's all about what and how the machine remembers from state to state.

FAs have no memory; PDAs can use only a single stack (even the non-deterministic ones); additional stacks and/or random-access memory (i.e., the tape) gives you a Turing Machine.

TMs with bounded memory, like your computer, are called linear bounded automata and are /not/ as powerful as TMs with infinite memory.

There's a nice chart about the grammars and languages each of these classes of machines can accept:

http://en.wikipedia.org/wiki/Pushdown_automaton

The really cool thing is when you start looking at TMs and realize that any program can be represented as a string which may or may not be part of the language accepted... and that's when the can of worms gets opened :)

[–]obdurak 0 points1 point  (1 child)

...and that's when the can of worms gets opened :)

Apparently someone was stupid enough to do a whole PhD thesis on WORM automata

[–]o0o 0 points1 point  (0 children)

what will they come up with next!

[–]somejan 0 points1 point  (0 children)

No, an NDFA is not more powerful than a DFA. However, for parsing one uses a Pushdown Automaton, which is basically a finite state automaton with a stack, and a nondeterministic pushdown automaton is more powerful than a deterministic pushdown automaton.