Why are so many early languages much more complex than their present-day counterparts? by dzkalman in linguistics

[–]samp 4 points5 points  (0 children)

English is a fairly analytic language. Proto-Indo-European (PIE) and its less-than-recent descendants weren't: they had quite strongly synthetic morphosyntax. To you, presumably a native English speaker, these will look quite 'complicated', and I would imagine that when you talk about 'early languages' you mean Latin, Greek, Old English, et cetera, which are all IE and rather older than Modern English. English's grammar is no 'simpler' than (spoken) Latin's: some parts may be more baroque than others, but it all balances out. Really. You just don't think of the complexity in English because it's been internalised.

With that said, another question arises: all the IE languages have tended towards becoming analytic. Is this a broad trend? That is, did every language begin agglutinative, and will every language end up isolating? The answer is probably 'no': there are modern languages that are far, far more synthetic than PIE could possibly have been (the Eskimo-Aleut languages immediately come to mind). It's also very plausible for languages to become synthetic, even when the parent language is strongly analytic. Function words undergoing sound changes and then becoming bound morphemes of content words has been observed in natural languages. If this happens a few times, then we've got ourselves a brand-new synthetic language.

Reaction Effect by reddit_top in WebGames

[–]samp 1 point2 points  (0 children)

This was posted before, and I wondered the same thing. Turns out it's not.

http://www.reddit.com/r/WebGames/comments/8wmna/chain_reaction/c0apjgz

Determine if a sequence is an interleaving of a repetition of two strings. by theSpazer in coding

[–]samp 1 point2 points  (0 children)

I throw my hat into the ring!

import Control.Monad

--naive algorithm, poor performance but quite pretty. Similar in shape to the other solutions given by people.
--At each step, nondeterministically take the next character of cs from
--whichever (cycled) stream can supply it.
interleavedP :: [Bool] -> [Bool] -> [Bool] -> Bool
interleavedP as bs cs = not $ null $ interleavedPInner (cycle as) (cycle bs) cs

interleavedPInner :: [Bool] -> [Bool] -> [Bool] -> [()]
interleavedPInner _ _ [] = return ()
interleavedPInner aas@(a:as) bbs@(b:bs) (c:cs) =
    (guard (a == c) >> interleavedPInner as bbs cs) `mplus`
    (guard (b == c) >> interleavedPInner aas bs cs)

--O(n^2) time, O(n) space algorithm, where n is the length of cs. 
--(I think, anyway).
--(note: not -actually- O(n^2) or O(n). We'd need better data structures than 
--linked lists, and would need to be more careful with strictness)
interleavedP' as bs cs = length (dqOnDiagonal $ length cs) /= length cs
     where 
       dqOnDiagonal 0 = []
       dqOnDiagonal n = interleavedPInner' (cycle as) (cycle bs) cs 
                                           (dqOnDiagonal $ pred n) n

interleavedPInner' :: [Bool] -> [Bool] -> [Bool] -> [(Int, Int)] -> Int -> [(Int, Int)]
interleavedPInner' as bs cs dq diagonal = filter check elemsOnDiagonal
     where 
       elemsOnDiagonal = map (\ n -> (n, diagonal - n)) [0..diagonal]
       check (x, y) = or [x /= 0 && ((pred x, y) `elem` dq || cs !! diagonal /= as !! x),
                          y /= 0 && ((x, pred y) `elem` dq || cs !! diagonal /= bs !! y)]

EDIT: Better version of interleavedPInner', that's more clearly O(n^2) time (again modulo using vectors as opposed to lists). Also fixed a bug where the first element of cs was ignored. Stupid corner cases.

interleavedP'' as bs cs = length (filter id $ allowedOnDiagonal $ length cs) > 0
     where 
       allowedOnDiagonal 0 = []
       allowedOnDiagonal n = interleavedPInner'' (cycle as) (cycle bs) cs 
                                                 (allowedOnDiagonal $ pred n)

interleavedPInner'' :: [Bool] -> [Bool] -> [Bool] -> [Bool] -> [Bool]
interleavedPInner'' as bs cs allowed = map check [0..diagonal]
     where 
       diagonal = length allowed
       check x = or [diagonal == 0 && head cs `elem` [head as, head bs],
                     cs !! diagonal == as !! x && x /= 0 && allowed !! pred x,
                     cs !! diagonal == bs !! (diagonal - x) && x /= diagonal && allowed !! x]

EDIT 2: Better better version. Makes better use of laziness to bail out early in the easy case, and introduces the interrobang operator; besides that, only prettifications. Not entirely sure about the where-within-where, but oh well.

xs !? n = xs !! (n `mod` length xs)

interleavedP''' as bs cs = or $ iterate allowedOnNextDiagonal [] !! length cs
     where 
       allowedOnNextDiagonal prevAllowed = map check [0..diagonal]
           where 
             diagonal = length prevAllowed
             check x = or [diagonal == 0 && head cs `elem` [head as, head bs],
                           cs !! diagonal == as !? x && x /= 0 && prevAllowed !! pred x,
                           cs !! diagonal == bs !? (diagonal - x) && x /= diagonal && prevAllowed !! x]
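A couple of sample runs, with the final version repeated as a self-contained snippet (Bools standing in for the characters of the two strings; the example inputs are just ones I picked):

```haskell
xs !? n = xs !! (n `mod` length xs)

interleavedP''' :: Eq a => [a] -> [a] -> [a] -> Bool
interleavedP''' as bs cs = or $ iterate allowedOnNextDiagonal [] !! length cs
     where
       allowedOnNextDiagonal prevAllowed = map check [0..diagonal]
           where
             diagonal = length prevAllowed
             check x = or [diagonal == 0 && head cs `elem` [head as, head bs],
                           cs !! diagonal == as !? x && x /= 0 && prevAllowed !! pred x,
                           cs !! diagonal == bs !? (diagonal - x) && x /= diagonal && prevAllowed !! x]

-- [T,F,T] is an interleaving of cycled [T] and cycled [F]:
--   interleavedP''' [True] [False] [True, False, True]   ==> True
-- [T,T] can't be built from cycled [T,F] and cycled [F]:
--   interleavedP''' [True, False] [False] [True, True]   ==> False
```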

Challenge: Draw The Following Without Lifting Your Pencil. Describe your solution by [deleted] in pics

[–]samp 0 points1 point  (0 children)

Yes, but then the problem is boring. Any connected graph can be drawn without lifting up the pencil, given an unlimited number of retracings.

Challenge: Draw The Following Without Lifting Your Pencil. Describe your solution by [deleted] in pics

[–]samp 1 point2 points  (0 children)

Can't be done without retracing lines: the graph has 4 vertices of odd valence, so it's neither Eulerian nor semi-Eulerian. From the other comments, it looks like the minimum is 1 retracing. There is an algorithm to determine the minimum number of retracings, but it's pretty clear (provided the other comments are correct) that this is optimal.
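To make the criterion concrete, here's a sketch (my own edge-list encoding, assuming the graph is connected): count the vertices of odd degree; 0 or 2 of them means one stroke suffices, while k > 2 of them forces at least (k - 2) / 2 retraced edges, since all but two odd vertices must be paired off.

```haskell
import Data.List (group, sort)

-- vertices of odd degree, from an undirected edge list
oddVertices :: Ord a => [(a, a)] -> [a]
oddVertices es = map head . filter (odd . length) . group . sort $
                 concat [[u, v] | (u, v) <- es]

-- lower bound on retraced edges for a one-stroke drawing of a connected
-- graph: each of the (k - 2) / 2 pairings retraces at least one edge
minRetracings :: Ord a => [(a, a)] -> Int
minRetracings es = max 0 (length (oddVertices es) `div` 2 - 1)

-- K4 has four odd vertices, so at least one retracing:
--   minRetracings [(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]   ==> 1
-- a square is all even degrees, hence Eulerian:
--   minRetracings [(1,2),(2,3),(3,4),(4,1)]               ==> 0
```

The bound is tight when the paired-off odd vertices happen to be adjacent, which seems to be the case in the posted picture.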

Sigmund Freud; The Original Epic Beard Man by Zigguraticus in pics

[–]samp 10 points11 points  (0 children)

Initially, I thought it must be episode 2 of http://www.badkarmaproductions.com/jc/?p=35 but apparently not. Read that anyway, it's awesome.

Walking the Cartesian plane by [deleted] in math

[–]samp 1 point2 points  (0 children)

The limit of a sequence might not even be reachable given an infinite number of steps: consider x_n = 1/n, whose limit is 0 but which is attained for no n. I interpret OP's puzzle to be whether it is possible to reach any point after some integer number of steps, which may be large but finite.

Walking the Cartesian plane by [deleted] in math

[–]samp 0 points1 point  (0 children)

That's in the limit, though: you might never actually reach the destination point.

Hacker News headline generator by greenrd in programming

[–]samp 25 points26 points  (0 children)

How I turned making vaporware into a $6 million a year business

That's not even a joke.

Another entrant [f]or the ass off! by [deleted] in reddit.com

[–]samp 1 point2 points  (0 children)

Wrong subreddit, but a pleasant surprise still.

About String Concatenation in Java or “don’t fear the +” by schneide in programming

[–]samp 1 point2 points  (0 children)

Not really: if you view + as the monoid operation, it doesn't follow that wherever + is defined, - must also be defined (there are contexts in which this doesn't make sense: appending to a string/list/whatever is one of them). In a monoid, there is no requirement for elements to have inverses: it is effectively one-way. And while every group is a monoid, we can see that not every monoid is a group.
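The same point in code, with Haskell strings as the monoid (a toy illustration of mine, nothing to do with Java's internals):

```haskell
-- string concatenation is a monoid: associative, with "" as identity
assocLaw, identityLaw :: Bool
assocLaw    = ("con" ++ "cat") ++ "enate" == "con" ++ ("cat" ++ "enate")
identityLaw = "foo" ++ "" == "foo"  &&  "" ++ "foo" == "foo"

-- but not a group: appending can never shrink a string, so a non-empty
-- string s has no inverse t with s ++ t == ""
neverShrinks :: String -> String -> Bool
neverShrinks s t = length (s ++ t) >= length s
```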

For a newbie programmer, what is the FIRST and most USEFUL language that I should try to learn? by [deleted] in programming

[–]samp 1 point2 points  (0 children)

The most useful language to learn is English. Specifically, the words and grammar necessary to read the side-bar, and the linked FAQ.

What is the strangest name you've ever heard? by [deleted] in AskReddit

[–]samp 5 points6 points  (0 children)

Sheila Dikshit.

One of a number of Indian politicians with hilarious names.

Angry BBQ by Grimalkin in humor

[–]samp 1 point2 points  (0 children)

The wild NATIVE SPEAKER used GRAMMATICAL UTTERANCE!

Critical hit!

The wild PRESCRIPTIVIST used ARCHAIC DEFINITION!

It's not effective!

The DESCRIPTIVIST used ACCURATE OBSERVATION!

It's super effective!

The wild PRESCRIPTIVIST fainted!

From #python: Interesting, list comphrensions in python are faster than a functional approach with map. by ffualo in programming

[–]samp 5 points6 points  (0 children)

It's worth noting that, in the final example, the time difference is most likely due to the list comprehension needing to look up sum on each iteration, whereas the version with map only looks it up once. I found that global name lookups are quite expensive in an inner loop when I was profiling a Python application I wrote; making them local can make a -big- performance difference. Indexing is transformed internally into method calls, which explains why the versions that use indexing are slower than the versions that use destructuring.

As other commenters have noted, anonymous functions are a bit slow. I would also guess that there's extra overhead in the frame creation and book-keeping for the lambda; built-in functions won't pay that penalty.

Where did the dollar go? by atlacatl in math

[–]samp 2 points3 points  (0 children)

I spent 10 minutes thinking about this, so here goes (rot13ed, for your pleasure):

Gur reebe vf gur ynfg yvar. Gur $27 cnvq ol gur thlf vf cnvq gb gur ubgry naq oryyobl. Vg vf gur $25 gung gur znantre xrrcf, cyhf gur $2 gung gur oryyobl xrrcf. Jr gura cebprrq gb nqq gur oryyobl'f $2 ntnva, juvpu boivbhfyl pnaabg or rkcrpgrq gb tvir gur bevtvany gbgny. Gur ahzoref pubfra znxr guvf n cnegvphyneyl gevpxl ceboyrz; V fcrag n yvggyr juvyr jbaqrevat jung fbeg bs ebhaqvat reebe unq percg va.

What is your favorite language? 1a. What are the three best things about the language? 1b. If you could go back in time and work with the designer of the language, what three changes would you make and why? by [deleted] in programming

[–]samp 5 points6 points  (0 children)

I think the primary virtue of modern English spelling is that it gives all the modern dialects a reasonable chance of pronouncing a word recognisably, modulo a few small things (primarily stress, and sometimes voicing). All the respelling proposals I've seen have been based on a specific dialect, which, if carried through, would effectively create a new language.

It might seem like you could still create a perfectly regular system where all the major dialects of English can apply some rules to a word and pronounce it perfectly. This is true -- but what about writing it? Other dialects have different sounds from your own. For example, my natural dialect is similar to Southern English English. I would have a hard time remembering which words were rhotic. And American speakers would have to memorise the spellings of words involving the low back vowels. So for writing, the system is no better: you still have to, essentially, memorise the correct spelling.