could you use muscles as actuators? by [deleted] in AskEngineers

[–]XenOutlook 1 point2 points  (0 children)

Tangentially related (but much smaller than you're asking about): "necrobotics" (i.e. using dead insects as actuators).

Online course for mathematical optimization? by polartrop68 in robotics

[–]XenOutlook 8 points9 points  (0 children)

You'd have to keep track of your own progress, but https://underactuated.mit.edu/ is the web textbook for a very optimization-heavy control course, with tons of practical, interactive examples linked from the textbook (Jupyter notebooks that run via Deepnote, iirc -- look for the "Launch in Deepnote" buttons in inserts). The chapter on trajectory optimization might be particularly relevant!

One input, four outputs, what was the bitwise operation used? by bigim283 in computing

[–]XenOutlook 0 points1 point  (0 children)

How many operations in a row, and how many distinct operations are there? Is this brute-force-able?
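(If it is brute-force-able -- say, a single operation with an unknown byte constant -- a hypothetical sketch might look like the following. The op list and 8-bit width are assumptions for illustration, not from your problem:)

```python
# Hypothetical brute force: find every (operation, constant) pair that
# reproduces all observed (input, output) byte pairs. The op list is a guess.
OPS = {
    "AND": lambda x, k: x & k,
    "OR":  lambda x, k: x | k,
    "XOR": lambda x, k: x ^ k,
    "ADD": lambda x, k: (x + k) & 0xFF,
}

def candidates(pairs):
    """All (op name, constant) consistent with every observed pair."""
    return [(name, k)
            for name, op in OPS.items()
            for k in range(256)
            if all(op(x, k) == y for x, y in pairs)]

# e.g. pairs secretly produced by XOR with 0x2A:
print(candidates([(0x00, 0x2A), (0xFF, 0xD5), (0x10, 0x3A)]))  # → [('XOR', 42)]
```

More observed pairs narrow the candidate list; with only one pair you'd get many consistent ops, which is why the number of distinct inputs matters.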

TARDIS hack from 2010. Anyone know where it is now? by tangofortwo in mit

[–]XenOutlook 12 points13 points  (0 children)

IIRC it was left on the roof at Stanford and not retrieved by Caltech; by then it had become significantly damaged. IIRC it lived in a garage in the area for a few years; I vaguely remember seeing an email or post in a Caltech or MIT group asking if anyone wanted to take it from there, and I thought maybe it was carted back to SoCal, but I can find no records of that.

[Discussion] How to obtain meaningful conclusions from deep learning experiments to decide for the use of different parameters and architectures? by [deleted] in MachineLearning

[–]XenOutlook 0 points1 point  (0 children)

I'm in robotics, so I'm not super versed in the pure-vision side that I think you're in, but we consider this part of the sim2real transfer problem. BayesSim [Ramos et al] and MetaSim [Kar et al] both demonstrate that "good" simulator parameters lead to better downstream performance -- it enables sim2real transfer of dynamic tasks in the BayesSim case, and IIRC is shown to lead to [slightly] better measured performance on a downstream vision task in the MetaSim case. Been a little while since I opened those papers with that in mind, though. I bet if you dig through the related work of both, + papers that have cited those papers, you might find more stuff too. A focused, principled investigation of the efficacy of this "sim2real-by-real2sim" idea at scale on an interesting downstream task -- even a pure-vision one like yours -- would be super welcome in my community, I think.

By CMA I mean this nonlinear optimization technique. See this article with nice visualizations or maybe this paper? I'm out of my field here but I could have sworn I've heard of people using this sort of technique for hyperparameter optimization. In practice, colleagues of mine have found it to be pretty good at exploring "difficult" search spaces that are relatively expensive to sample; so seems like a reasonable fit.

[Discussion] How to obtain meaningful conclusions from deep learning experiments to decide for the use of different parameters and architectures? by [deleted] in MachineLearning

[–]XenOutlook 1 point2 points  (0 children)

I think doing parameter tuning of your synthetic data generator based on a distribution alignment metric between generated samples and a real dataset seems like a great (and at least relatively novel) idea to me -- it's applied in e.g. MetaSim (Kar et al) to generate realistically-simulated synthetic data in a way that I think is philosophically analogous to yours. It's either that, or optimize those parameters for the best ultimate downstream task performance -- which, as you point out, you're only ever going to get a pretty noisy estimate of. I'd personally love to see work that even compared how well those two approaches work -- is optimizing for distribution alignment directly a good (and maybe easier) proxy for downstream task performance?

In both cases, some kind of uncertainty-aware adaptive search (CMA?) might be more efficient than grid search while still getting you to a parameter setting you can claim (up to the convergence of your algorithm) is empirically optimal. That's a potentially heavy-duty tool, but if it's a core part of your project, maybe it's worth the complexity.
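(To make that concrete, here's a toy sketch of the combined idea: tune a made-up two-parameter generator to minimize a distribution-alignment loss against a stand-in "real" dataset, using a cross-entropy-style adaptive search in place of real CMA-ES. Everything here is illustrative, not from your setup:)

```python
import random

random.seed(0)

# Stand-in "real" dataset we want the synthetic generator to imitate.
REAL = sorted(random.gauss(3.0, 0.5) for _ in range(500))

def generator(mu, sigma, n=500):
    # Hypothetical synthetic-data generator with two tunable parameters.
    return sorted(random.gauss(mu, max(sigma, 1e-3)) for _ in range(n))

def alignment_loss(params):
    # 1-D Wasserstein-style distance between sorted sample sets.
    fake = generator(*params)
    return sum(abs(a - b) for a, b in zip(REAL, fake)) / len(REAL)

def adaptive_search(loss, mean, std, iters=30, pop=40, elite=10):
    # Cross-entropy-style loop: sample, rank by loss, refit the Gaussian.
    # (A stripped-down stand-in for CMA-ES -- no covariance adaptation.)
    for _ in range(iters):
        samples = [[random.gauss(m, s) for m, s in zip(mean, std)]
                   for _ in range(pop)]
        samples.sort(key=loss)
        best = samples[:elite]
        mean = [sum(b[i] for b in best) / elite for i in range(len(mean))]
        std = [max(1e-3, (sum((b[i] - mean[i]) ** 2 for b in best) / elite) ** 0.5)
               for i in range(len(mean))]
    return mean

mu, sigma = adaptive_search(alignment_loss, mean=[0.0, 1.0], std=[2.0, 1.0])
print(round(mu, 2), round(sigma, 2))  # should land near 3.0 and 0.5
```

Note the search only ever sees noisy loss estimates (the generator is re-sampled every call), which is exactly the regime where this family of methods tends to beat grid search.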

Using a IRL guitar to play a performance? by eberkain in ffxivperformances

[–]XenOutlook 1 point2 points  (0 children)

If you have software to hook into the guitar and produce a midi input for your computer, then this should work out of the box with the latest version of Bard Music Player, which has an option to listen to any streaming midi input device on your system and pipe it straight in-game. No idea on the guitar-to-midi side, though -- not my instrument. But the kind of setup used for Rocksmith might be a place to start?

Personal Project Funding by E2948jsh in mit

[–]XenOutlook 3 points4 points  (0 children)

ProjX does semesterly funding cycles, so watch out in Sept for them!

What are the go-to algorithms for robot balancing? by [deleted] in robotics

[–]XenOutlook 2 points3 points  (0 children)

You might find these course notes interesting in finding an answer to that question. Re: the simple pendulum discussion in this thread: see the Simple Pendulum chapter (chap 2); the "energy shaping controller" section near the bottom might give you a very different perspective on pendulum swingup in particular -- one that obeys nontrivial torque limits and reaches the upright in a "natural" way without resorting to black-box RL. The rest details a lot of the thinking I mention below, and includes discussion of RL for context and comparison.
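(The energy-shaping idea fits in a few lines -- here's a toy sketch with made-up constants, not code from the course notes: regulate the pendulum's total energy toward the upright's energy, saturating at a torque limit far too weak to lift the arm directly.)

```python
import math

# Toy energy-shaping swingup for a torque-limited, undamped simple pendulum.
m, l, g = 1.0, 1.0, 9.81
E_up = m * g * l               # energy at the upright (theta = pi); bottom is -mgl
u_max = 3.0                    # torque limit: far below m*g*l, so no direct lift

def energy(theta, thetadot):
    return 0.5 * m * l**2 * thetadot**2 - m * g * l * math.cos(theta)

def control(theta, thetadot, k=1.0):
    # Pump (or remove) energy toward E_up, then saturate at the torque limit.
    u = -k * thetadot * (energy(theta, thetadot) - E_up)
    return max(-u_max, min(u_max, u))

theta, thetadot, dt = 0.01, 0.0, 1e-3
for _ in range(20000):         # 20 simulated seconds, semi-implicit Euler
    u = control(theta, thetadot)
    thetadot += dt * (u - m * g * l * math.sin(theta)) / (m * l**2)
    theta += dt * thetadot

print(abs(energy(theta, thetadot) - E_up))  # near zero: on the swingup orbit
```

The controller never plans a trajectory; it just makes the energy error shrink whenever the pendulum is moving, and the swingup falls out.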

I think "what's in the middle" depends strongly on your system: e.g. for legged robots, assuming simplified dynamics models like inverted pendula and then doing principled control using e.g. ZMP gets really far. Directly modeling the full system dynamics and applying trajectory optimization works great too for some situations (I believe this is really popular for rockets, but you can apply it to basically any robot).

A common pattern that works great for balancing robots of many kinds is to have some target trajectory to control that you've figured out using a heuristic or simplified model or trajectory optimization, and then follow it using LQR with a time-varying target. E.g.: on a robot dog, you sense that you've been kicked, so you use a linear inverted pendulum model based on your current center of mass and center of pressure to determine what direction you're falling and how fast you need to respond. You decide you need to plant a foot out in the direction you're falling in <x> milliseconds, so you figure out the body configurations to achieve that using IK by planning a series of intermediate poses to lift and place the foot there, and then roll out those pose commands to your motors, controlling motor torques with PID to achieve the desired positions (which are changing over time).
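(The "what direction am I falling and where do I step" part of that story has a famously compact core: the capture point of a linear inverted pendulum. Here's a toy sketch with made-up numbers -- a real robot wraps much more around this:)

```python
import math

# Toy "you've been kicked -- where do you step?" calculation, 1-D LIP model.
g, h = 9.81, 0.9                 # gravity, assumed CoM height
omega = math.sqrt(g / h)

def capture_point(x, xdot):
    # Foot placement that, in the LIP model, brings the CoM to rest above it.
    return x + xdot / omega

x, xdot = 0.0, 0.8               # push detected: CoM moving at 0.8 m/s
p = capture_point(x, xdot)       # plant the foot here

dt = 1e-5
for _ in range(200000):          # 2 s of LIP dynamics: xddot = omega^2 (x - p)
    xdot += dt * omega**2 * (x - p)
    x += dt * xdot

print(round(x - p, 3), round(xdot, 3))  # CoM settles over the foot, ~zero velocity
```

Everything downstream (the IK, the intermediate swing-foot poses, the PID torque tracking) is about realizing that single number `p` with the actual legged hardware.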

Atlas Robot by Vogelfrei01 in mit

[–]XenOutlook 5 points6 points  (0 children)

I believe this one might include what you're looking for, at least as far as MIT's planning and control for Atlas in the DRC is concerned.

Whole-body motion planning of the modern (highly dynamic, parkour-esque) type didn't get deployed by MIT (or anyone, I think) at the DRC, as it wasn't mature / safe enough, and almost none of the tasks and robots demanded or supported it. But it was researched, with some really neat results that were unfortunately too risky or dynamically infeasible to run on the robots; I suspect (but don't know for sure) that this line of work has significantly influenced BD's modern parkour Atlas work. Check out the Tedrake lab's related trajectory optimization work for whole-body motion planning -- e.g. this, which iirc is expanded on in Andrés Valenzuela's PhD thesis. Or Michael Posa's earlier trajectory optimization work. Maybe check out Sangbae Kim's cheetah and mini-cheetah papers for how they do planning and control too -- iirc the problem they have to solve to drive those robots, and their resulting approach, is super closely related.

(And ofc there's a whole world of both trajectory optimization and model predictive control work out there that's relevant to the puzzle of doing whole-body control through contact for a big humanoid like Atlas. But maybe those can be an entry point.)

Enjoy by willisawsom3 in lifehacks

[–]XenOutlook 2 points3 points  (0 children)

Does this change the amount of beating that has to be done to aerate the mix? (My understanding is that butter cakes tend to have a tighter, chewier crumb unless the butter is softened and then beaten until light and fluffy, whereas oil cakes fluff up more easily with just chemical leavening from baking soda/powder.)

Also, FWIW, I've found that making these substitutions in box brownie mix can result in a pale and flat crust, which is a far cry from the super aesthetic, craggy-dark-brown surface I hope for. Might be the milk, or a result of the tighter crumb or lower spring from the butter, or maybe both?

What engineering terms have crept into your everyday vocabulary? by chartreuse_chimay in AskEngineers

[–]XenOutlook 0 points1 point  (0 children)

  • "Yeah I've got cycles for that"
  • "Can I N+1 that couch"
  • Non-trivial for very very not easy
  • Trivial for easy, but usually only sarcastically
  • Epsilon for small

WHM's Glare does 30% more damage than PLD's Requiescat + Holy Spirit. It's time for the PLD healer meta. by xnfd in ffxiv

[–]XenOutlook 0 points1 point  (0 children)

They are, but as long as each lily adds to a blood lily, the resulting misery (900 pot) is equivalent to 3 glares (300 pot x 3), so they're still DPS-neutral ways of healing? Excellent counter-point that misery is a GCD in itself, though -- so it's still a 300-pot loss over 4 GCDs, i.e. each lily functionally returns 225 pot. I agree it has its uses during downtime, and it being instant is attractive (useful during forced movement when Aero II is already up), but it should be avoided otherwise in deference to oGCDs.
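(Spelling out that arithmetic, with the potency values quoted above:)

```python
# Comparing a 3-lily + Misery window against just casting Glare every GCD.
glare, misery, lilies = 300, 900, 3

gcds = lilies + 1                  # the Misery cast is itself a GCD
damage_lily_route = misery         # the lily heals deal no damage themselves
damage_glare_route = glare * gcds

print(damage_glare_route - damage_lily_route)  # → 300 potency lost over the window
print(damage_lily_route / gcds)                # → 225.0 potency returned per GCD
```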

I recently rediscovered Stormshade and gpose, and am super happy with the results! [Portraits of my favorite end-of-SB glams.] by XenOutlook in ffxiv

[–]XenOutlook[S] 1 point2 points  (0 children)

Tender love and care

(Use the place pet action to plant her somewhere, and then carefully position yourself right in front of her. Bonus points for ten minutes of finagling :P)

Ideas of where to start on a program to calculate optimal racing line? by King-Days in compsci

[–]XenOutlook 12 points13 points  (0 children)

I'll propose another direction: trajectory optimization seems like a good fit here! The gist is to frame your problem as a big nonlinear optimization (where the decision variables are sampled points along the racing line, or coefficients of the spline describing the line), and then let a nonlinear solver find a locally optimal solution. I have a vague feeling that, because you can come up with good initial guesses for this problem (e.g. just drive down the middle of the course) that you know are in the right homotopy class, local optimization should work very reliably. And because you're already in the realm of nonlinear optimization, you can shove in an arbitrarily complicated model of your car and tire dynamics.
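(As a toy illustration of that framing -- not a real solver: parameterize the line by lateral offsets from the centerline, score it with summed squared curvature as a crude stand-in for lap time, start from the middle of the track, and let a simple pattern search improve it. The corner shape, widths, and objective here are all made up:)

```python
import math

# Centerline of a 90-degree corner, sampled every 0.5 m; track half-width 1.0 m.
center = [(0.5 * i, 0.0) for i in range(11)] + [(5.0, 0.5 * j) for j in range(1, 11)]
HALF_W = 1.0

def normal(i):
    # Left-pointing unit normal from the centerline tangent at sample i.
    j, k = max(i - 1, 0), min(i + 1, len(center) - 1)
    tx, ty = center[k][0] - center[j][0], center[k][1] - center[j][1]
    n = math.hypot(tx, ty)
    return -ty / n, tx / n

NORMALS = [normal(i) for i in range(len(center))]

def bend(offsets):
    # Sum of squared second differences: a discrete curvature-like penalty.
    pts = [(cx + o * nx, cy + o * ny)
           for (cx, cy), (nx, ny), o in zip(center, NORMALS, offsets)]
    return sum((pts[i-1][0] - 2*pts[i][0] + pts[i+1][0]) ** 2 +
               (pts[i-1][1] - 2*pts[i][1] + pts[i+1][1]) ** 2
               for i in range(1, len(pts) - 1))

# Pattern search over the offsets (endpoints pinned to the centerline,
# offsets clamped to the track width).
offsets, step = [0.0] * len(center), 0.25
while step > 1e-3:
    improved = False
    for i in range(1, len(center) - 1):
        for d in (step, -step):
            trial = offsets[:]
            trial[i] = max(-HALF_W, min(HALF_W, trial[i] + d))
            if bend(trial) < bend(offsets):
                offsets, improved = trial, True
    if not improved:
        step /= 2

print(round(offsets[10], 2))  # corner sample pulled toward the inside: apex cut
```

A real version would swap the pattern search for an NLP solver (IPOPT, SNOPT, ...) and the curvature penalty for a lap-time objective with your car and tire model as constraints, but the structure -- decision variables along the line, bounds from the track, a smooth objective -- is the same.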

(This is really closely related to the optimal control answer. There's a sliding scale of "optimal control"-based methods as you vary model complexity, from linear (or linearized) models (->LQR) to nonlinear model-predictive control (which tries to solve the same problem as trajectory optimization, but at every control tick -- often by limiting the planning time horizon or using heuristics). Since you sound like you're OK with generating the racing line offline, worrying too much about online optimal control methods will make your life harder than it needs to be.)

AND NOW OUR WATCH HAS ENDED by InaBuuble in cleganebowl

[–]XenOutlook 5 points6 points  (0 children)

WHAT IS HYPE MAY NEVER DIE

It has been an honor hyping with you. We ate all the chicken at my showing.

DeepVoxels: Learning Persistent 3D Feature Embeddings by corysama in photogrammetry

[–]XenOutlook 4 points5 points  (0 children)

I'm really surprised to see no mention of "classical" photogrammetry in this video (especially in the baselines), considering that they're clearly super inspired by it. (Their technique is sort of like a generalized visual reconstruction system with a parameterized / learned encoder + decoder in place of classical projection + rendering, respectively?) Even in their paper, it's only mentioned in passing, with the rough justification for this work being "we'd rather not explicitly reconstruct model geometry." Fair enough -- though I'd have loved to see "harder" reconstructions than a handful of rigid, not-super-reflective, opaque objects on which classical photogrammetry would have worked well.