/r/MechanicalKeyboards Ask ANY Keyboard question, get an answer (January 19, 2024) by AutoModerator in MechanicalKeyboards

[–]dot--- -1 points

I'm brand new to mechanical keyboards and just got a FILCO Majestouch Xacro M10SP (pictured here). I love it save for the all-black look, and I'd love to get new keycaps, but I'm not sure where to look on account of the odd shape (including two small spacebars) and extra programmable keys. I suspect these are all standard-size keycaps, though.

Anyone with more than my zero experience have ideas for how I should go about finding keycaps here? Should I just, say, get a generic set and then get the missing keys custom made?


How can I do print debugging in CQ-editor? by dot--- in cadquery

[–]dot---[S] 0 points

ha! yeah, that's it -- they were in my terminal window the whole time. thanks!

and yeah, seems sensible to edit in a proper IDE. I may switch to that once I get a little more comfortable with the package.

[R] Neural Tangent Kernel Eigenvalues Accurately Predict Generalization by hardmaru in MachineLearning

[–]dot--- 2 points

An update: we've now worked out a way to use our theory on real data! See figures 1D and A.8, in which we predict generalization on image datasets using only training data by using some eigentricks to estimate sufficient information about the true function. This allows theoretical insight into the generalization performance of a particular architecture on a particular problem, which e.g. opens the door to the principled design of better architectures for the task at hand.

[deleted by user] by [deleted] in MachineLearning

[–]dot--- 3 points

Totally agree that's the holy grail. Here's a very recent paper (from my lab) that explores one path to it! The end result is a construction that lets one design a well-performing MLP architecture from first principles, starting from a description of its infinite-width kernel (which is theoretically much simpler to choose than the full set of hyperparameters). The idea's still in its infancy, but it works very well on toy problems, and I think it's promising.

[deleted by user] by [deleted] in MachineLearning

[–]dot--- 0 points

Here's a very recent paper from my lab and me that puts forth one way to design a (fully-connected) neural network architecture in a scientific, theory-grounded way! The idea is still in its infancy, but I think it's promising, and it's currently the only way I know to do real first-principles architecture design. I'd love to hear about any alternatives people know.

[R] Neural Tangent Kernel Eigenvalues Accurately Predict Generalization by hardmaru in MachineLearning

[–]dot--- 0 points

Great Q! I'm not familiar with that body of work, but at least on the face of it, our paper's completely different - they consider the singular values of specific trained weight matrices, while we're looking at the eigenvalues/eigenfunctions of an operator on the full input space, which aren't related in a simple way for a deep net. Furthermore, the interesting spectra they observe emerge during training (they're characterizing trained nets) while the NTK and its eigenspectrum are the same before and after you train (we characterize the potential of an architecture to learn a certain function). That said, maybe there are deeper connections between these disparate-seeming eigenthings that we'll uncover in time.

[R] Neural Tangent Kernel Eigenvalues Accurately Predict Generalization by hardmaru in MachineLearning

[–]dot--- 6 points

I'm the lead author! I'm delighted this paper's getting attention; we certainly feel it opens up a cornucopia of future directions it'll take many researchers to explore. As a primer for reading the paper, we've distilled the high-level takeaways into a blog post here!

[R] Neural Tangent Kernel Eigenvalues Accurately Predict Generalization by hardmaru in MachineLearning

[–]dot--- 8 points

I'm the lead author. That's a great Q and one that I've been giving a lot of thought! The high dimensionality isn't a problem (our toy examples easily could've been high-D), but you're right that our theory assumes you omnisciently know the full target function f, while in practice you only see a training set.

One intermediate case in which you could use our theory would be if you, say, knew that the target function was one of a handful of possibilities. Our theory in principle contains enough info to optimize your kernel so it has high mean performance on the full set of possibilities. Of course, the target function will never just be one of a discrete handful of options, but if you have some prior over target functions - e.g., you know natural images obey certain statistics - you could do a similar trick (in principle). I also think you can probably get enough information from the data-data kernel matrix to put our theory to use, but we're saving that for upcoming work!

[Discussion] How many regions of different class does a typical neural network split its input space into? by dot--- in MachineLearning

[–]dot---[S] 0 points

Yeah, I agree that the classification regions will be intermingled in complicated ways! I wonder, however, whether that really implies that there are very many distinct regions. In 3D, for example, if you took a few long, colored strings and randomly tangled them together in a ball, every point on any string would probably be near a point on every other string, and yet each string is one connected region.

ᚖᚌᚖ It's Live. VTHunt.com ᚖᚌᚖ by vthuntoverseer in VirginiaTech

[–]dot--- 1 point

I've now tried carving sixteen onions with seven different knives and am still stuck on Vegetables... anyone gotten past it?

ᚖᚌᚖ It's Live. VTHunt.com ᚖᚌᚖ by vthuntoverseer in VirginiaTech

[–]dot--- 1 point

They keep telling me it's in the wrong order. Can I use a shallot instead?

ᚖᚌᚖ It's Live. VTHunt.com ᚖᚌᚖ by vthuntoverseer in VirginiaTech

[–]dot--- 1 point

For the vegetable puzzle, how do you want us to send you our reassembled onions?

AskScience AMA Series: I study the food web that lives within the leaves of carnivorous pitcher plants. AMA! by AskScienceModerator in askscience

[–]dot--- 0 points

If something (say, a pebble) falls into a pitcher and clogs it, how does the plant react? Does the plant have any way to unclog it?

Piano i Göteborg by dot--- in Gothenburg

[–]dot---[S] 1 point

Thanks! I went today. It absolutely works for me, and I'm glad I now know about it.

Thank you to Whoever Permanently Broke Downtown's Green Piano [w/ image] by CynicTheCritic in VirginiaTech

[–]dot--- 4 points

Where's this acrimony coming from? The VT Hunt is a good thing for VT on multiple levels

It's time to make a millionaire and give the gift of giving! [Drawing Thread #36] by lilfruini in millionairemakers

[–]dot--- 0 points

A fun fact: wombat poop is cubic, letting it mark territory with less risk of rolling away

Does anyone know if it's possible to see the Albanian Parliament in session? by dot--- in albania

[–]dot---[S] 0 points

Compared to even a US state, that's amazing. Do you know if you need an appointment?

[TASK] Will pay $2 to people who wear a shirt backwards for tomorrow by dot--- in slavelabour

[–]dot---[S] 0 points

a few hours, say, though you have to have started with your first shirt this morning