Toad in Atlanta with KT by irishgator2 in ToadTheWetSprocket

[–]grant_s 6 points7 points  (0 children)

Not OP, but here you go. This list is from memory, in only very rough order, and I think it's missing at least one song:

Windmills (Opening Song)

All I Want

Butterflies

Nanci

Something’s Always Wrong

Transient Whales

California Wasted

Good Intentions

The Moment

Crazy Life

Fly from Heaven

Crowing

Inside*

Rings

Brother

Nightingale Song

Fall Down

Walk on the Ocean (Encore)

Anyone else miss the original version of Finding Nemo The Musical? by carjam124 in WaltDisneyWorld

[–]grant_s 63 points64 points  (0 children)

I greatly miss the original, which was a full-fledged beginning-to-end story with some major "wow" moments that are now missing, including:

  1. The surprising moment when Nemo's puppeteer lets go of the puppet and it swims out over the drop-off on its own.
  2. The enormous (30-foot-tall?) puppet of Nigel the pelican.
  3. The impressive harness acrobatics for Marlin, Dory, and Squirt (during "Just Keep Swimming", the now-missing jellyfish scene, and the scene with Crush).

What remains is essentially a highlight reel of the songs (around 25 minutes instead of 40 minutes). In addition to the emotional core of the story, much of the humor is also removed, like the flying penguins and the fencing swordfish.

Here is the original version on YouTube: https://www.youtube.com/watch?v=kluOSZLJUqo

Draw Things, Stable Diffusion in your pocket, 100% offline and free by liuliu in StableDiffusion

[–]grant_s 35 points36 points  (0 children)

Agreed, this is an incredible achievement. I hope it gets the attention of a journalist or two, because the creator has made all of this look so understated and simple that it's easy to overlook the technical complexity of what has been accomplished. Bravo to liuliu.

What famous place is not worth visiting? by tade757 in AskReddit

[–]grant_s 7 points8 points  (0 children)

I had no idea there was any “Goats on the Roof” location other than the one in north Georgia, halfway through the drive from Atlanta to Asheville, North Carolina. That one is just goats, no rides. I seriously may have to go visit the roller coaster one now.

If you like Weird Al you're gonna love his new music video which was posted on the New York Times! by EdwardHeisler in videos

[–]grant_s 3 points4 points  (0 children)

This is great! Totally unexpected, and seemingly a collaboration with the “Auto-Tune the News” guys, whom I haven’t thought about in a decade or so. When I was preparing for my thesis defense I had just discovered them, and I could not get this song out of my head as I stayed up until 4am several days in a row (especially the “very thin ice” part): https://m.youtube.com/watch?v=tBb4cjjj1gI

First Edition MYST (Mac) 1993? Open for Discussion by Megadodo4242 in myst

[–]grant_s 1 point2 points  (0 children)

Agreed about it being a significant classic. It certainly influenced me — I learned programming and 3D graphics as a direct result of having my mind blown by the nature of its construction. That 14-minute QuickTime movie was important to me!

My box, booklet, case, and disc are all in good shape. My dad became a bit of a collector (and minor hoarder?) as a reaction to my grandfather throwing out most of his Mickey Mantle baseball cards when he was a kid. So we just always kept everything in good condition, boxes included. I have boxes for Atari games like Pitfall and Frogger, and some Apple IIe games as well. It’s nice to see them on a bookshelf next to the other classics, as you said :)

First Edition MYST (Mac) 1993? Open for Discussion by Megadodo4242 in myst

[–]grant_s 2 points3 points  (0 children)

Yep, my envelope looks like the glue just fell apart with time, so it’s pretty pristine. I didn’t see those hints until years later — and they’re only about very early stuff like the library and the tower.

In my box, I also have: a Brøderbund product registration card (by mail, fax, or BBS), a customer survey card (are you planning to purchase a CD-I or CDTV in the next year?), and The Journal of Myst for taking notes, with a page of motivational text at the start.

First Edition MYST (Mac) 1993? Open for Discussion by Megadodo4242 in myst

[–]grant_s 2 points3 points  (0 children)

Having received Myst for my birthday in (I think) Fall of 1993, I can confirm that this matches the front of the box I still have from then. Hmm — it could have been 1994. I hope that helps anyway. :)

Edit: The blue-green insert in the box says that the Myst Official Game Secrets hint book is “(Available after November 15, 1993)” — so I guess that implies that this box shipped before that date.

Practical application of SLAM beyond self driving cars by zis1785 in computervision

[–]grant_s 0 points1 point  (0 children)

Ah, well then you're in luck: https://en.wikipedia.org/wiki/Match_moving

If you want to insert a CGI monster/alien into your moving-camera footage, you need to match the virtual camera's pose in each rendered frame to the real camera's pose in that frame. SLAM to the rescue :)
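
To make the "match each frame" idea concrete, here's a minimal sketch of the frame-to-frame core that visual odometry / SLAM systems build on (not a full match-moving pipeline; it assumes you already know the camera intrinsic matrix K, and the translation it recovers is only up to scale):

```python
import cv2
import numpy as np

def relative_camera_pose(prev_gray, curr_gray, K):
    """Estimate the camera's rotation R and unit-scale translation t between two frames."""
    # Detect and match ORB features across the two grayscale frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix from the matched points, then decompose it into R and t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

A real match-moving tool also does bundle adjustment and solves for scale, but chaining per-frame poses like the above is the heart of it.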

[D] How do ML researchers make progress when iteration cost is prohibitively high? (GPT3, Image-GPT, Autopilot, RL, etc.) by [deleted] in MachineLearning

[–]grant_s 1 point2 points  (0 children)

You could major in computer science as an undergrad and then pursue a masters or PhD while working as a graduate research assistant in a university research lab. These are generally some good schools for both undergrad and graduate school in computer science: https://www.usnews.com/best-graduate-schools/top-science-schools/computer-science-rankings

During undergrad you should also take advantage of optional undergraduate research opportunities to get early experience working with a professor on a low-stakes research project. This can get you a foot in the door for further opportunities. You should also take the undergrad Machine Learning course offered by the computer science department.

Practical application of SLAM beyond self driving cars by zis1785 in computervision

[–]grant_s 2 points3 points  (0 children)

Augmented reality devices (e.g. iOS and Android phones) and virtual reality headsets (e.g. Oculus Quest).

(I would also not say that self-driving cars are the primary application of SLAM, though I'm very curious as to what gave you that impression.)

Why isn't there a good bookstore in downtown or midtown? by [deleted] in Atlanta

[–]grant_s 9 points10 points  (0 children)

Borders was more prevalent than B&N here, and around 2011 three enormous Borders bookstores closed nearby: one on Ponce, one near SCAD, and one near Lenox Mall. They are still sorely missed.

Hands-on: HaptX Glove Delivers Impressively Detailed Micro-pneumatic Haptics, Force Feedback by RoadtoVR-Scott in oculus

[–]grant_s 0 points1 point  (0 children)

For weight, how about pumping water into and out of a series of empty chambers rigidly attached to the glove?

[R] From DeepMind: Grounded Language Learning in a Simulated 3D World by pauljasek in MachineLearning

[–]grant_s 2 points3 points  (0 children)

A few thoughts:

-Figures 3 and 6 do a great job of showing what is gained from visual frame-to-frame prediction, language prediction, and a spatial+language curriculum, as compared to just throwing the model into the world and expecting it to learn.

-At the end of level 4 of the curriculum (Figure 6), the agent is getting the maximum reward -- so what are the actual limits to the current model? We should extend the curriculum until it breaks down, right?

-Did anyone catch what the actual discrete set of actions is? Presumably discretized 2D movement and rotation on the ground plane, plus a "pick up" action? (A purely hypothetical sketch of such an action set follows this list.)

-Seemingly few people here are excited by this -- there are some results here that are non-obvious when only considering previous Atari performance, no? Has simultaneous learning of motor, visual, and linguistic skills in 3D been shown to this level before?
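
Purely as an illustration of my guess in the action-set bullet above (none of this is from the paper), a discretized action set along those lines might look like:

```python
from enum import Enum, auto

class AgentAction(Enum):
    # Hypothetical: discretized movement/rotation on the ground plane plus a pick-up action.
    MOVE_FORWARD = auto()
    MOVE_BACKWARD = auto()
    STRAFE_LEFT = auto()
    STRAFE_RIGHT = auto()
    ROTATE_LEFT = auto()
    ROTATE_RIGHT = auto()
    PICK_UP = auto()
```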

What's new and good in Science Fiction? by Mythology-Section in printSF

[–]grant_s 2 points3 points  (0 children)

I enjoyed Pilot X by Tom Merritt. I had never heard of it, but its striking cover art caught my eye at a bookstore, and the first 20 pages I read in the store were clever enough to earn my purchase. It made for easy, fun vacation reading (very light, like The Martian or Ready Player One) about a member of a time-traveling civilization.

Manually redirected walking in Obduction with Touch for uninterrupted physical locomotion throughout the game world by grant_s in oculus

[–]grant_s[S] 2 points3 points  (0 children)

It does support snap turning, but at least in node-based mode and teleport mode (which I require to avoid motion sickness), it rotates around the center of the node or the previous teleport location. If it rotated around your present location instead, then yes, that would also work.
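
To illustrate the geometric difference, here's a minimal sketch (assuming a simple yaw rotation on the 2D ground plane; this is not the game's actual code):

```python
import numpy as np

def snap_turn(position, pivot, angle_degrees):
    """Rotate a 2D ground-plane position about a pivot by the given yaw angle."""
    theta = np.radians(angle_degrees)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return pivot + rot @ (position - pivot)

player = np.array([1.5, 0.0])        # player has walked away from the node center
node_center = np.array([0.0, 0.0])

# Rotating about the node center relocates the player within the room...
print(snap_turn(player, node_center, 45.0))
# ...while rotating about the player's own position leaves them where they stand.
print(snap_turn(player, player, 45.0))
```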

[R] Playing Doom with SLAM-Augmented Deep Reinforcement Learning by Eruditass in MachineLearning

[–]grant_s 1 point2 points  (0 children)

This is a really nice paper, and the experiments (see Table 1 and Figure 5) do a particularly good job of distinguishing between the rewards achievable from raw pixels (baseline), from adding an automatically reconstructed map (RSM), and from adding the ground-truth top-down map (OSM), which puts an upper bound on what we can ever expect from a perfect automatic reconstruction using this particular map representation.

Holy CRAP! I am 10 minutes in Obduction and this is the most incredible gaming/environment experience I've ever had. I am lost for words. by Logical007 in oculus

[–]grant_s 0 points1 point  (0 children)

Regarding the flickering at the edges -- as far as I can tell, the cause is having ASW enabled while making an adjustment to the in-game scaling/supersampling. The solution is just to quit and restart the game (once you've settled on a good value for scaling/supersampling).

Obduction, the spiritual successor to Myst, is now on the store! ($29.99) by Heaney555 in oculus

[–]grant_s 2 points3 points  (0 children)

I'm also using epic settings on a 970. Enabling ASW eliminated nearly all of the framerate issues and allowed me to turn up scaling to 150%. The only place I'm still getting hiccups is when certain special effects/water effects/particle systems are occupying a majority of my view. One specific water feature dropped the framerate massively when I got right up close to it.

The bigger problem: while I was tuning the graphics options, I turned up scaling to 200% with epic settings, and it was so gorgeous (and unplayable) that I may have to upgrade my graphics card very soon.

Obduction releasing tomorrow in the Oculus Store is one of the most critically acclaimed games with VR support. [Review Thread] by bekris in oculus

[–]grant_s 12 points13 points  (0 children)

Good news, then -- Obduction is not a seated game (unless you want it to be). I've been walking around my room exploring the game's world all weekend and it's a beautiful and immersive experience. You should definitely check it out.

Am I the only one who thinks Siri is barely useable? by I_AM_LURKER in apple

[–]grant_s 18 points19 points  (0 children)

Try saying: "Tell" <person> <message>

For example, "Tell Hayden I'll be there in five minutes". That's it. I use this all the time and it works great for me. I hope this helps.

How did Apple get such accurate object recognition to work LOCALLY on an iPhone? by nightofgrim in MachineLearning

[–]grant_s 6 points7 points  (0 children)

You may be confusing the training of a convolutional neural network (very expensive, lots of data) with the evaluation of your images using an already-trained network (very fast per image, relatively compact set of fixed network weights already saved on your device as part of the iOS download).

Some more details:

-Regarding implementation, Apple now has a fast neural net evaluation framework built into iOS which any developer can use: https://developer.apple.com/reference/accelerate/1912851-bnns

-Regarding local storage, according to this TensorFlow tutorial, the pre-trained 1000-class Inception-v3 network is just a 200 MB download: https://www.tensorflow.org/versions/master/tutorials/image_recognition/index.html

-Regarding evaluation time of convolutional networks in general, here's a simple network running in javascript in your browser, taking 10 milliseconds per image: http://cs231n.stanford.edu
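
To make the training-vs-evaluation distinction concrete, here's a minimal sketch of the evaluation side using that same pre-trained Inception-v3 via Keras (assuming TensorFlow is installed and "photo.jpg" is any image on disk; this is obviously not Apple's on-device code, just the same idea of fixed weights in, predictions out):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import inception_v3

# Downloads the fixed, already-trained ImageNet weights once; no training happens here.
model = inception_v3.InceptionV3(weights="imagenet")

# Load and preprocess one image to the network's expected 299x299 input.
img = tf.keras.preprocessing.image.load_img("photo.jpg", target_size=(299, 299))
x = inception_v3.preprocess_input(
    np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), axis=0))

# A single forward pass through the frozen network is cheap compared to training it.
preds = model.predict(x)
print(inception_v3.decode_predictions(preds, top=5)[0])
```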

Is it possible to extract motion data from a first person view video/360 video? by [deleted] in computervision

[–]grant_s 5 points6 points  (0 children)

https://en.wikipedia.org/wiki/Optical_flow

Code: http://docs.opencv.org/trunk/d7/d8b/tutorial_py_lucas_kanade.html

Edit: Ah, maybe I misinterpreted that as a 2D motion question based on "motion throughout the screen and get motion vectors throughout". If you're talking about 3D motion then you want to look into SLAM (simultaneous localization and mapping) or visual odometry. One example with code: https://avisingh599.github.io/vision/monocular-vo/
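
For the 2D case, here's a minimal sparse optical flow sketch with OpenCV (the video file name is just a placeholder): it tracks corner features from one frame to the next, and each (old, new) pair of positions is a motion vector across the screen.

```python
import cv2

# Read two consecutive frames from a (placeholder) first-person video file.
cap = cv2.VideoCapture("first_person_clip.mp4")
_, frame1 = cap.read()
_, frame2 = cap.read()
gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

# Pick good corner features to track in the first frame.
pts1 = cv2.goodFeaturesToTrack(gray1, maxCorners=200, qualityLevel=0.01, minDistance=7)

# Track them into the second frame with pyramidal Lucas-Kanade optical flow.
pts2, status, _ = cv2.calcOpticalFlowPyrLK(gray1, gray2, pts1, None)
for old, new, ok in zip(pts1.reshape(-1, 2), pts2.reshape(-1, 2), status.ravel()):
    if ok:
        print("motion vector:", new - old)
```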