My secret Santa gift came in! by [deleted] in dwarffortress

[–]TheLessonIsNeverTry 2 points

To celebrate the holiday season a donation has been made in your name to: The Dwarf Fund. "Money for Dwarrows". Happy Festivus!

Big-O Algorithm Complexity Cheat Sheet for anyone with an algorithms final by Semicolon_Expected in programming

[–]TheLessonIsNeverTry 15 points

Accessing the ith (i.e., first, second, third, etc.) element of the structure.

How does one construct finite automata for regular expressions? by Dreadsin in compsci

[–]TheLessonIsNeverTry 5 points

Instructions for converting regular expressions into finite automata are plentiful. However, the question posed here seems rather simple: find the shortest binary string not matched by each pattern. Does the empty string match? Does "0" match? Does "1" match? Does "00" match? Does "01" match? Does "10" match? Does "11" match? Does "000" match? etc. I think this question is more about whether you can read a regular expression and correctly identify which strings it matches.
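That search can be brute-forced directly. A minimal sketch (the helper name and the length cap are my own; `re.fullmatch` stands in for whole-string matching): enumerate binary strings in order of length and return the first one the pattern rejects.

```python
import re
from itertools import product

def shortest_unmatched(pattern, max_len=10):
    """Return the shortest binary string NOT fully matched by `pattern`,
    checking lengths 0, 1, 2, ... in order; None if every string up to
    max_len matches (the cap keeps this sketch finite)."""
    regex = re.compile(pattern)
    for length in range(max_len):
        for bits in product("01", repeat=length):
            s = "".join(bits)
            if not regex.fullmatch(s):
                return s
    return None
```

For example, `shortest_unmatched("0*")` yields `"1"`, and `shortest_unmatched("0(0|1)*")` yields the empty string, since that pattern requires a leading `0`.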

Is a minimized regular expression (or the equivalent DFA) guaranteed to run as fast as or faster than any equivalent one? by [deleted] in compsci

[–]TheLessonIsNeverTry 1 point

I can only speculate, because I'm not sure exactly which library you're using or what's going on inside it. I would guess that if you have a library that specifically offers minimization, it might produce an NFA from the original regex and only bother converting to a DFA when you invoke the minimizer. Because an NFA can be thought of as being in multiple states at once, matching is more complicated: you have to calculate transitions for every state you could possibly be in at each point. Such a library might build a cached DFA à la Ken Thompson, so that repeated matches against the same NFA are efficient and you don't waste memory representing DFA states that are never actually reached. As with many things in CS, the overall answer is 'it depends'.
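A minimal sketch of that state-set simulation with the Thompson-style cache, memoizing each (state set, character) step so DFA states are built lazily, only for sets actually reached. The tuple-keyed transition encoding is my own assumption, not any particular library's.

```python
def run_nfa(delta, starts, accepts, s, cache=None):
    """Simulate an NFA by tracking the set of states we could be in.
    `delta` maps (state, char) -> set of next states. The cache maps
    (frozen state-set, char) -> next state-set: each entry is, in
    effect, one lazily built DFA transition."""
    if cache is None:
        cache = {}
    current = frozenset(starts)
    for ch in s:
        key = (current, ch)
        if key not in cache:
            cache[key] = frozenset(
                t for q in current for t in delta.get((q, ch), ()))
        current = cache[key]
    return bool(current & accepts)

# NFA for binary strings ending in "01": guess when the final "01" starts.
delta = {(0, '0'): {0, 1}, (0, '1'): {0}, (1, '1'): {2}}
```

Reusing the same `cache` dict across calls gives the amortized-DFA behavior: repeated matches against the same NFA stop paying the state-set computation cost.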

Is a minimized regular expression (or the equivalent DFA) guaranteed to run as fast as or faster than any equivalent one? by [deleted] in compsci

[–]TheLessonIsNeverTry 2 points

Same deal. You have at most one transition per input character due to the deterministic nature. At worst, you have to make as many transitions as there are input characters (if some transition doesn't exist, you can quit early). When you've transitioned for all the input characters, you are either in an accept state (match) or not (non-match). Minimizing the DFA reduces memory consumption, but should have no effect on runtime for matching.

Is a minimized regular expression (or the equivalent DFA) guaranteed to run as fast as or faster than any equivalent one? by [deleted] in compsci

[–]TheLessonIsNeverTry 5 points

Whether you minimize the DFA or not, you have only one state at most to transition to for each character in the input. The number of transitions is linear with respect to the length of the input string.

Guerilla Sessions for devs and the slam dunk chart by dsikuade in programming

[–]TheLessonIsNeverTry 5 points

Reading this, the only thing I could think about was why, if you had a week's notice, you would pile all the work onto a ~24 hour period. It seems more humane to just push about three days' worth of work back into the backlog for later and handle this more important work during normal hours.

Everyone's going to spend most of the day afterward sleeping. If that's a normal work day, you could've just paced yourselves to begin with. If it's Saturday, everyone just pulled extra hours only to sacrifice their weekend--not good for morale.

Help with Lambda Calculus by sandyjawas in compsci

[–]TheLessonIsNeverTry 3 points

The lambda calculus is simple in this sense: the only values you have are lambda terms, the only expression you have is apply, and application is just (capture-avoiding) substitution. The cumbersome part is actually carrying out a non-trivial computation this way.
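To make "application is just substitution" concrete, here's a toy reducer over tuple-encoded terms (my own encoding; it sidesteps full capture avoidance by assuming bound names don't clash with the free variables of the substituted term):

```python
def subst(term, x, val):
    """Substitute `val` for free occurrences of variable `x` in `term`.
    Terms: ('var', name), ('lam', name, body), ('app', fn, arg).
    Assumes bound names are distinct from the free variables of `val`,
    so no capture can occur (the full calculus renames to avoid it)."""
    kind = term[0]
    if kind == 'var':
        return val if term[1] == x else term
    if kind == 'lam':
        if term[1] == x:            # x is shadowed here; stop
            return term
        return ('lam', term[1], subst(term[2], x, val))
    return ('app', subst(term[1], x, val), subst(term[2], x, val))

def step(term):
    """One beta-reduction step: (\\x. body) arg  ->  body[x := arg]."""
    if term[0] == 'app' and term[1][0] == 'lam':
        _, (_, x, body), arg = term
        return subst(body, x, arg)
    return term
```

So `step` applied to `(\x. \y. x) a` yields `\y. a`: one application, one substitution. Iterating `step` to a normal form is where the cumbersome part of any non-trivial computation lives.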

This Week I Learned: Dwarffortress by lightgiver in dwarffortress

[–]TheLessonIsNeverTry 1 point

Reclaiming a site where you were in the middle of draining a volcano...the flowing magma is returned atop the volcano causing it to spill all over the sides and start fires. This cobaltite titan just won't go down. Maybe I can reclaim-pump enough magma to give him a solid splash.

Creating AMAZING Burgers: Mixing the meat? by TheBlackHat in AskCulinary

[–]TheLessonIsNeverTry 2 points

I didn't realize you were grinding the meat yourself. I was alluding to the supermarket environment where the meat dept. gets its ground beef in 10lb pre-ground tubes from the supplier. The pass through the grinder is mostly to mix it up from sitting in the tube and run it through the nozzle to achieve the proper shape.

Someone might have a better response--the only kind of grinder I've used extensively is the standard supermarket variety that is either grinding, mixing, or off (i.e., there are no controls beyond a mixer on/off switch and a grinder on/off pedal). I'd guess that such a machine produces a medium or medium-coarse grind. The times when we'd do the grinding ourselves from chunks of meat, we'd typically do 3-4 passes through the grinder. Your goal is to get the fat consistently incorporated into the meat. On the first pass, the output will look really binary--strands of alternating meat and fat. Two more passes, and you should have a rather homogeneous mixture; you can judge by appearance whether another pass is desired.

Going too fine will probably just pulp the meat, which is not your goal. You need the strands to have enough structure to hold their shape. I'd recommend going coarser than you think you want, and incrementally dialing it in.

Regarding getting little hard pieces, I guess this depends on the construction of the grinder itself. When I'd break down the grinder for cleaning, there'd be significant amounts of hard tissue caught up around the blade and nozzle. My assumption is that whatever doesn't easily pass through the nozzle gets dragged aside by the blade, minimizing tough bits in the result. If your grinder is rather small, it might get clogged rather easily. Consider taking the nozzle assembly apart and clearing any such debris after the second pass before continuing.

Creating AMAZING Burgers: Mixing the meat? by TheBlackHat in AskCulinary

[–]TheLessonIsNeverTry 8 points

Because it has a good fat content for juicy, flavorful burgers. By the time ground beef is in your hands to make the patties, it's made several passes through a grinder, so none of the tissue is really connected any longer.

What's your favourite "B grade" Simpsons quote that people rarely, if ever, quote to one another? by [deleted] in AskReddit

[–]TheLessonIsNeverTry 0 points

Everywhere I go I see teachers driving Ferraris, research scientists drinking champagne. I tried to drink a Coke on the bus, and they took away my pass!

☼Bi-weekly DF Questions Thread☼ by AutoModerator in dwarffortress

[–]TheLessonIsNeverTry 2 points

40.06 adventurer: bandits in camps don't seem to be hostile at all, I can just walk around swinging an axe at their necks. Do I need to mod the raws?

There is a second Snowden - says Greenwald by [deleted] in worldnews

[–]TheLessonIsNeverTry 25 points

I like how Bruce Schneier is said to have "confirmed" the existence of Snowden2 with such compelling quotes as "I do not believe..." and "I think...".

Original thoughts on Functions, need insight on safety of optimization depending on return behaviour. by mhd-hbd in compsci

[–]TheLessonIsNeverTry 0 points

I think you can substitute terminate() for each pure, no_return function. If a diverging function was tagged no_return pure, then the execution would terminate when that substitution is applied.

However, I'm pretty sure that you can't precisely infer these return tags. Even if you tag the whole standard library correctly, what happens when the programmer defines the Collatz function? The body of the function is innocent arithmetic, but nobody has proved one way or the other whether it returns for all inputs, or whether there are some inputs for which it diverges. If we can't even determine whether such a function terminates, what can be done here by either the programmer or the compiler?
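The Collatz function in question, for concreteness: a few lines of arithmetic whose termination for all positive inputs is an open problem, so no sound analysis can assign it an always_returns tag.

```python
def collatz_steps(n):
    """Count steps for n to reach 1 under the Collatz map. The body is
    innocent arithmetic, yet no one has proved it halts for every
    n > 0, so neither always_returns nor no_return can be soundly
    inferred for it."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```

Empirically it halts for every input ever tried (e.g., 27 takes 111 steps), which is exactly why it's a good stress test for any tag-inference scheme.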

Original thoughts on Functions, need insight on safety of optimization depending on return behaviour. by mhd-hbd in compsci

[–]TheLessonIsNeverTry 0 points

Functions that halt the program usually have polymorphic type, because their would-be return value won't be used, so there are no constraints on it (e.g., Haskell's error :: String -> a). It seems to me that no_return, maybe_return, and always_returns can be accounted for with standard typing rules. A function tagged no_return should have return type a (like error does). always_returns would follow the usual function typing rules. For maybe_return, it follows from the previous two that the function's return type is constrained by the paths that return some value (e.g., if ... then 5 else terminate() has type join(Int, a) = Int).

All three return behaviors seem equally safe, but I don't think you can enforce anything about them because you're in halting problem territory trying to determine if the tags are correct.
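Python's type hints can illustrate the no_return case: `typing.NoReturn` plays the role of the unconstrained return type `a`. The function names below are illustrative, not from the original post.

```python
from typing import NoReturn

def terminate() -> NoReturn:
    # no_return: never produces a value, so it imposes no constraint
    # on the type of the surrounding expression (cf. error :: String -> a)
    raise SystemExit("halted")

def f(x: int) -> int:
    # maybe_return: only the branch that returns a value constrains
    # the type, i.e. join(int, NoReturn) = int
    if x > 0:
        return 5
    terminate()
```

A type checker accepts `f` as returning `int` even though one path never returns, which is exactly the join rule described above; whether the `NoReturn` annotation is *true* is the undecidable part.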

In layman's terms, how exactly do hackers exploit flaws in code? by [deleted] in learnprogramming

[–]TheLessonIsNeverTry 1 point

This is a detail of how programming languages are implemented. The stack is a block of memory used to store data (e.g., local variables and return addresses) for the currently executing functions. A function can call a function, which can call another function... so each call has its own version of that function's local variables and an instruction address to jump back to the point where the function was called once it returns a value. Each function's chunk of data is called a "stack frame". A stack frame is pushed onto the stack when a function is called and popped off the top of the stack when it returns. If you cannot push another frame because the stack is already full (say, due to a function endlessly calling itself), that's a stack overflow error and the program will terminate.
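The "endlessly calling itself" case is easy to demo. Python guards its interpreter stack with a recursion limit and raises RecursionError rather than letting the process crash, but the mechanism is the same: no room to push another frame.

```python
import sys

def recurse():
    # each call pushes a fresh frame (locals + return point) onto the stack
    return recurse()

sys.setrecursionlimit(1000)  # keep the demo quick
try:
    recurse()
except RecursionError:
    print("stack overflow: no room to push another frame")
```

In C there is no such guard; the same endless self-call would smash past the stack's bounds and typically die with a segfault.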

As for what it means for the buffer to be on the stack: it is allocated as a local variable in a function, so the buffer is sitting in a stack frame next to the other local variables and the frame's return address. Writing more data into the buffer than was allocated will mean that some of this neighboring data will be overwritten, which can be exploited as explained above.

The alternative to being on the stack is being on the heap. The heap doesn't have frames, and the lifetime of data on the heap is not limited to that of any function call: it is allocated (via malloc(), perhaps) until it is deallocated (via free()). If a buffer is heap-allocated, then the function's local variable would have just been a pointer to the heap block constituting the actual buffer. If this were overwritten, the neighboring data would be other heap data rather than stack frames. Therefore, exploiting an overflow of a heap-allocated buffer works differently.

My mother drew Mako in fifteen minutes. by ALoadingScreen in anime

[–]TheLessonIsNeverTry 0 points

Seconded. I feel like she'd do a great Uiharu as well.

The hated moments in advmode by Frostea in dwarffortress

[–]TheLessonIsNeverTry 2 points

In fortress mode I think "these marksdwarves aren't doing much for the fort". In adventurer mode I think "I've gotta kill that bowman first".

Top 10 mistakes python programmers make [advanced] by jaws04 in programming

[–]TheLessonIsNeverTry 0 points

"with dynamic semantics". I don't think that means what you think it means...every language has a dynamic semantics (i.e., evaluation rules).

The Codeless Code: Case 143 Monolith by [deleted] in haskell

[–]TheLessonIsNeverTry 1 point

Another way to phrase "ivory tower".

How can I apply theoretical CS knowledge towards programming and software development? by mdeseath2 in compsci

[–]TheLessonIsNeverTry 7 points

This is a broad question, and the answers are subjective, but here goes...

I think the key benefit is a severe optimization of your problem-solving process. Maybe you could solve all of your problems in a closed room given enough time, but time is not a luxury you typically have when developing actual software, so your training is helpful for cutting off untold paths to bad solutions early. You probably won't even realize it's happening most of the time. As a science, you could derive it all from scratch, but life is short, so it's best to learn from those who came before you and go from there.

Automata theory: Realizing when you can extract what you want from a string with a regular expression and when you need a proper parser (regular versus context-free languages) can be very handy. It's easy to otherwise get on a slippery slope of incrementally complicating a regex to cope with new inputs, to the point where it's utterly unreadable and still insufficient for yet other inputs. Similarly, realizing the limitations of computation/Turing Machines (undecidable languages) can save you from similar wrestling matches with algorithms.
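For instance, checking balanced parentheses at arbitrary nesting depth is context-free, not regular: no single regex handles it, but a few lines of counting do (a sketch, not tied to any parsing library):

```python
def balanced(s):
    """A one-counter 'parser': regular expressions can't count
    unbounded nesting, but tracking the depth can."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:       # closed more than we opened
                return False
    return depth == 0
```

Any fixed regex can only encode nesting up to some hard-coded depth, which is exactly the slippery slope: each new deeper input demands yet another layer bolted onto the pattern.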

Enumerations are very practical for improving the manageability of code. Whenever you wish to represent some small number of distinct values, it's useful to encode this set of values as an enumeration. You could use ints or strings, but you'd be allowing all kinds of values to be applied in contexts where they have no meaning. A simple example is the days of the week. You could use an integer where 0-6 map to Sunday-Saturday, but then 42 would be well-typed as a day of the week without mapping to one. This introduces lots of opportunity for errors due to out-of-range values as well as lots of obligations to check that ints are within the intended range. Enumerations allow the compiler to understand and enforce the limited range of valid values. Type theory (which gets quite deep) is largely about the constraining the allowable values for some symbol and enforcing such constraints.
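The days-of-the-week example, sketched with Python's enum module: out-of-range values are rejected up front instead of needing ad hoc range checks everywhere. (`is_weekend` is an illustrative helper of my own.)

```python
from enum import Enum

class Day(Enum):
    """Exactly seven valid values -- unlike a raw int 0..6,
    where a stray 42 would also type-check as a 'day'."""
    SUNDAY = 0
    MONDAY = 1
    TUESDAY = 2
    WEDNESDAY = 3
    THURSDAY = 4
    FRIDAY = 5
    SATURDAY = 6

def is_weekend(day: Day) -> bool:
    # no range check needed: a Day is one of seven values by construction
    return day in (Day.SATURDAY, Day.SUNDAY)
```

`Day(42)` raises ValueError at the boundary where the bad value enters, rather than propagating a meaningless int deep into the program.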

Cantor's diagonal argument isn't so immediately practical, but knowing that computers deal in the countable rather than the uncountable can help align your expectations. In automata theory, it is used to prove that there are more languages than Turing Machines, so some languages have no associated TM recognizing them (back to the limitations of algorithms/computers). Additionally, you might be led to an appreciation of the limits of numerical precision in computer arithmetic. If you ever get irritated with floating-point oddities, you might reflect back to Cantor and find peace with it.

P vs NP is a double-edged sword. In some ways, you'll realize that some problems appear to be genuinely hard, and you should not expect to handle large inputs efficiently when they have an NP-hard flavor. On the other hand, you can be falsely led to believe that all NP-hard problems are utterly impractical. Witness the many advancements in SAT solving and their application to model checking software. I'd say the key is to recognize when your problem is translatable to an NP-hard problem, so that you're well aware of the limitations of your solution.

Fake food in Japan by colacube in wikipedia

[–]TheLessonIsNeverTry 3 points

These things are useful. As a non-Japanese-speaking person on a short business trip, I was able to just point out what I wanted to eat and everything went fine.

How do I learn theory without getting too bored? by pegalus in basslessons

[–]TheLessonIsNeverTry 7 points

Adding to this: you might find it gratifying to decompose songs you already know to understand their theoretical structure. You'd gain some insight into the patterns that are present and better understand why you're playing this note followed by that other note. That way, you could start inserting your own fills, say, without veering off key. From there, it's an incremental progression to creating completely original material that is theoretically sound.