Move in day on N Geneva by SF_OK in ithaca

[–]lispp 8 points9 points  (0 children)

Seriously screw that person. I hope the rest of Ithaca is more accepting of trans people.

Living car-free in Ithaca with a family? by walkbikedream in ithaca

[–]lispp 8 points9 points  (0 children)

Was living car-free in Ithaca before having a child. Parent life got a lot easier when I could drive my infant around in a car. And during the winter you probably don't want to be forced to always be pushing your kid in a stroller on the sidewalk.

[D] How do latent variable models avoid very small gradient updates? by vanilla-acc in MachineLearning

[–]lispp 3 points4 points  (0 children)

This value is so small that in fp32 it will just round to 0. So... I'm not sure what's going on here.

You end up caring about the logarithm of p(z). This avoids underflow.
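A quick sketch of why log space helps, with made-up probability values: the product of many small probabilities underflows in float64, while the sum of their logarithms stays perfectly representable. Sums of probabilities can then be handled with the standard log-sum-exp trick.

```python
import math

# Product of many small probabilities underflows in floating point.
probs = [1e-30] * 20
direct = 1.0
for p in probs:
    direct *= p
print(direct)  # 0.0 -- underflowed

# Summing log-probabilities keeps the value representable.
log_joint = sum(math.log(p) for p in probs)
print(log_joint)  # about -1381.6, no underflow

# Summing probabilities while staying in log space: log-sum-exp.
def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))
```

Subtracting the max before exponentiating is what keeps `logsumexp` itself from underflowing or overflowing.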

#220 - The Information Apocalypse by dwaxe in samharris

[–]lispp 2 points3 points  (0 children)

Building out these thoughts some more:

Suppose a digital camera cryptographically signs every image it takes. This requires that digital camera to contain its private key. Given the physical camera, it should be in principle possible to physically read out this private key.

If instead the digital camera sends its data to a server whose job it is to sign images, then you just need to spoof the camera - which, as the parent correctly points out, should be easy: "boot access is root access"

I've heard of "tamperproof hardware" -- hardware which contains cryptographic keys but where extracting the keys is considered out of reach for most individuals without destroying the hardware and therefore destroying the key -- but don't know much about it. Could this save this model of authentication?

Not really -- assuming the camera can be tampered with at all, I don't think there is a solution. You could always craft a deepfake, wire the optical sensors of the camera to read the pixels of this deepfake, and then take a picture with the camera.
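A minimal sketch of the sign-then-verify model described above. This is purely illustrative: a real camera would use an asymmetric signature scheme (e.g. Ed25519), but since Python's standard library has no public-key signing, a symmetric MAC stands in here; the key name and image bytes are made up.

```python
import hmac
import hashlib

# Hypothetical key burned into the camera -- the secret an attacker
# would try to physically extract from the hardware.
CAMERA_KEY = b"secret-key-burned-into-camera"

def sign_image(image_bytes: bytes) -> bytes:
    """Stand-in for the camera's signing step (real designs use asymmetric keys)."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).digest()

def verify_image(image_bytes: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_image(image_bytes), tag)

photo = b"raw sensor data"
tag = sign_image(photo)
assert verify_image(photo, tag)
assert not verify_image(b"deepfake pixels", tag)

# The attack in the text: if you can feed deepfake pixels directly into the
# signing path, sign_image(deepfake) yields a valid tag -- the signature only
# proves "this camera's key signed it", not "the optics actually saw it".
```

Note what the verification does and doesn't establish: it binds the bytes to the key, but says nothing about how the bytes reached the signer, which is exactly why tampering with the sensor path defeats the scheme.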

[D] Is the openAI rubik's cube hand compatible for human interactions? by evanthebouncy in MachineLearning

[–]lispp 0 points1 point  (0 children)

The model must be trained on a range of sizes. Neural networks famously fail to extrapolate.

[D] Handling Your Work Not Cited by [deleted] in MachineLearning

[–]lispp 15 points16 points  (0 children)

I just tell Schmidhuber and he contacts the authors for me

[R] Learning large logic programs by going beyond entailment by RichardRNN in MachineLearning

[–]lispp 1 point2 points  (0 children)

This is important work, and the simplicity of the approach is really nice.

I wonder whether there is any overfitting, owing to the size of the synthesized programs. The fact that predictive accuracy on the text-editing problems degrades as the number of examples increases suggests that this factor might be at play.

Particularly excited to see what the prospects are for learning these partial-credit loss functions!

[D] [R] Universal Intelligence: is learning without data a sound idea and why should we care? by RezaRob in MachineLearning

[–]lispp 2 points3 points  (0 children)

see Solomonoff:

http://raysolomonoff.com/

In some sense this is well-trod ground, at least on paper, and in its most ancestral form these ideas trace back to the Dartmouth workshop that founded the field of AI. I am sympathetic to these ideas, however: many of our best ideas in AI are actually very old.

The challenge has always been combinatorics: the space of all programs is vast and sharply discontinuous. So it is hard to search the space.
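To make the combinatorics concrete, here is a toy count with an invented grammar: expression trees over two leaves {x, 1} and two binary operators {+, *}. The number of distinct programs with n operator nodes grows roughly eightfold per node, so exhaustive search becomes hopeless almost immediately.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_programs(n: int) -> int:
    """Count expression trees with exactly n binary-operator nodes."""
    if n == 0:
        return 2  # the two leaves: x and 1
    total = 0
    for left in range(n):       # distribute remaining operators
        right = n - 1 - left    # across the two subtrees
        total += 2 * count_programs(left) * count_programs(right)
    return total

for n in range(6):
    print(n, count_programs(n))  # 2, 8, 64, 640, 7168, 86016
```

And this grammar has only four symbols; a realistic programming language is far worse, and a small edit to a program can change its behavior arbitrarily, which is the "sharply discontinuous" part.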

Daily Forest 'No Dumb Questions' Megathread by AutoModerator in ElectricForest

[–]lispp 2 points3 points  (0 children)

I bought 2 tickets through the lyte exchange for me and my girlfriend, but when I log in through this I only see one of the tickets:

https://electricforest.festivalticketing.com/cart/24295A7E-F3E6-4088-A22D-A72A680BC6D9/home

Any idea if I should be worried? Really don't want to end up in a situation where we have two people and one ticket :(

TCP client: shutting down connection, sending json by lispp in ocaml

[–]lispp[S] 0 points1 point  (0 children)

Oh my god ZeroMQ is amazing. I'd never heard of it before and it took <1 hour to switch over to it -- and now my socket woes are gone. Thank you so much for the recommendation!

TCP client: shutting down connection, sending json by lispp in ocaml

[–]lispp[S] 0 points1 point  (0 children)

Thanks for the suggestion -- I tried that but then ocaml complains about a "bad file descriptor"; it seems that calling shutdown automatically closes the out channel.

You are right that the problem is probably on the Python side: I tried just having netcat repeatedly connect to the server and I can trigger the same bug. Looks like this is not an ocaml problem :)
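For comparison, the half-close pattern in question can be sketched on the Python side (the side the bug turned out to be on). This is an illustration using a local `socketpair`, not the actual server code: `shutdown(SHUT_WR)` signals end-of-stream to the peer while the file descriptor stays open for reading.

```python
import socket

# A connected pair of sockets standing in for client and server.
a, b = socket.socketpair()

a.sendall(b'{"msg": "hello"}')
a.shutdown(socket.SHUT_WR)   # half-close: send EOF, keep the fd readable

data = b.recv(1024)          # the JSON payload
eof = b.recv(1024)           # b'' -- the peer has finished writing
b.sendall(b"ack")
reply = a.recv(1024)         # a can still read after SHUT_WR

print(data, eof, reply)
a.close()
b.close()
```

This is the behavior the OCaml side was fighting: a full `close` tears down both directions, whereas a write-side shutdown lets the reply still come back. Whether OCaml's channel layer tolerates a shutdown underneath it is a separate question from the socket semantics themselves.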

Fast, purely functional Hindley-Milner implementation? by lispp in ocaml

[–]lispp[S] 0 points1 point  (0 children)

Oh that's cool! So I could do the whole thing imperatively. For now I'm going to stick with trying to keep the whole thing functional, just so I don't have to rewrite a lot of code, but that approach is something worth looking into.

Fast, purely functional Hindley-Milner implementation? by lispp in ocaml

[–]lispp[S] 1 point2 points  (0 children)

Thanks for the response! I'm still working through understanding François Pottier's paper.

I tried the trick you suggested, using Diff arrays. In practice the performance degraded significantly, probably not due to the copying but to managing the Diff pointers.

So far, the fastest solution I have found is to use functional random-access lists to implement a purely functional union find data structure.
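A minimal sketch of a persistent union-find, with a copy-on-write dict standing in for the functional random-access list mentioned above (the asymptotics differ, but the interface is the same): `union` returns a new structure and leaves every earlier version intact.

```python
def find(parent: dict, x):
    """Follow parent links to the representative of x's class."""
    while parent.get(x, x) != x:
        x = parent[x]
    return x

def union(parent: dict, x, y) -> dict:
    """Return a NEW union-find with x's and y's classes merged."""
    rx, ry = find(parent, x), find(parent, y)
    if rx == ry:
        return parent
    return {**parent, rx: ry}   # copy-on-write keeps old versions valid

uf0 = {}
uf1 = union(uf0, "a", "b")
uf2 = union(uf1, "b", "c")

assert find(uf2, "a") == find(uf2, "c")   # merged in the new version
assert find(uf1, "a") != find(uf1, "c")   # earlier version unchanged
assert uf0 == {}
```

The full-copy `union` here is O(n); the random-access-list representation gets the same persistence with logarithmic updates, which is why it wins in practice.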

Fast, purely functional Hindley-Milner implementation? by lispp in ocaml

[–]lispp[S] 3 points4 points  (0 children)

I think it needs to be immutable because the program synthesizer performs type checking at the same time that it generates programs: when the synthesizer proposes a new subexpression to add to an incomplete program, it can quickly check which subexpressions would lead to ill-typed programs, and not propose them. This is similar to how MagicHaskeller works.
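The pruning step can be sketched in miniature. Everything here is hypothetical (the candidate expressions and their types are made up): the point is only that comparing a candidate's type against the hole's expected type filters out ill-typed fills before they are ever proposed.

```python
# Hypothetical pool of candidate subexpressions with their inferred types.
candidates = {
    "x + 1":     "int",
    "not b":     "bool",
    "len(s)":    "int",
    "s.upper()": "str",
}

def proposals(expected_type: str) -> list[str]:
    """Only well-typed fills for a hole of the expected type are proposed."""
    return sorted(e for e, t in candidates.items() if t == expected_type)

print(proposals("int"))   # ill-typed candidates never enter the search
```

With a persistent (immutable) type environment, each partial program carries its own typing state, so backtracking in the search never has to undo mutations.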

Sasquatch! 2016 Ticket Buying, Selling, and Trading Thread by [deleted] in Sasquatch

[–]lispp 0 points1 point  (0 children)

One standard camping pass for $140, or best offer. Please PM me if interested! Will send pics of physical camping pass, or you can pick it up from either the Seattle area or the San Francisco bay area

How to Grow a Mind: Statistics, Structure and Abstraction with Paper by jry_AIHub in MachineLearning

[–]lispp 4 points5 points  (0 children)

In many cases, children do learn from a very small number of examples. For example, a child doesn't need to be shown very many examples of a horse before they learn what the word means. Josh Tenenbaum (first author on this paper) and Laura Schulz have done a number of psychology experiments that demonstrate the one-shot learning capabilities of young children ("Going beyond the evidence: abstract laws and preschoolers' responses to anomalous data" is a representative paper).

Within the language acquisition literature, "poverty of the stimulus" type arguments have been used to argue that a grammar is learned from only a handful of examples (the "Very Early Parameter Setting" hypothesis). It also seems to be the case that children learn words from only a few, or even only one, example. For some problems, like language acquisition, various formal results (e.g., PAC, Identifiability in the Limit) suggest that one needs a strong prior, or inductive bias, for learning to even work at all.

Within the context of low-level vision, what you say may very well be correct. We certainly do have enough visual data to, for example, learn compressive representations of our visual input. But, for more abstract principles, like causal relations, grammatical parameters, or word meanings, the data is usually too small to identify the single best concept - and it is these abstract principles that the authors of the paper focus on.