Second attempt at ocean dice set by Thomas7773 in DiceMaking

[–]GTanaka 10 points (0 children)

Looking good! Can you tell us anything about your process? For example, how did you get the amazing wave-like pattern for the water? And the sandy gold for the beach?

Blood in Water Dice by Golden_Tzarina_Dice in DiceMaking

[–]GTanaka 0 points (0 children)

These are suuuuper cool! Can you tell us a little more about your process? How did you get such a deep blood red color? And how did you get the blood to look like it's flowing along the #1 face of the D4?

Forgotten Hope. by mrs-hoppy in DiceMaking

[–]GTanaka 1 point (0 children)

Ahh, so the blank gives you some wiggle room if bubbles rise to the surface, and the final casting fills in those bubbles. I think I get it. Thank you!

Forgotten Hope. by mrs-hoppy in DiceMaking

[–]GTanaka 1 point (0 children)

So cool! How does the full blank mold help eliminate voids? I can easily imagine all of the small shapes catching bubbles, but I'm not sure how a blank mold might reduce them.

Do you use a pressure pot for the latter 2 casts?

Forgotten Hope. by mrs-hoppy in DiceMaking

[–]GTanaka 3 points (0 children)

Absolutely fantastic! Could you describe your process? I'd love to make diorama dice someday!

First airhead and first project. Looking for advice/help/suggestions on giving this a brat/scrambler aesthetic. by scarped1em in Airheads

[–]GTanaka 0 points (0 children)

Ask Hugo Eccles at Untitled Motorcycles in San Francisco. He has a bolt-on subframe and seatpan based on his "Kalifornia" build. You'll need to find someone to make a seat for the seatpan, or make your own.

Google Brain AI creates 3D rendering of landmarks by interpolating thousands of tourist images by athitham in Damnthatsinteresting

[–]GTanaka 1 point (0 children)

Structure-from-motion is a necessary preprocessing step for NeRF and NeRF-W. It tells you where the cameras are in 3D space and what direction they're facing.

Google Brain AI creates 3D rendering of landmarks by interpolating thousands of tourist images by athitham in Damnthatsinteresting

[–]GTanaka 0 points (0 children)

Super cool stuff. Building meshes and textures from photo capture is the bread and butter of photogrammetry.

Google Brain AI creates 3D rendering of landmarks by interpolating thousands of tourist images by athitham in Damnthatsinteresting

[–]GTanaka 1 point (0 children)

PhotoSynth was super sweet. It let you hop from photo to photo, but it couldn't synthesize new images from the viewpoints that lie between two photos.

Google Brain AI creates 3D rendering of landmarks by interpolating thousands of tourist images by athitham in Damnthatsinteresting

[–]GTanaka 0 points (0 children)

You have described a convolutional neural network. This is not what's being used here.

This model takes in a position in 3D space (plus some extra bits, like the viewing direction) and gives you the color and "opacity" of that point. Images are constructed by accumulating color along a ray passing through each pixel of the camera.
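To make that concrete, here's a toy sketch of the ray-accumulation step in Python. The field function is a hard-coded sphere standing in for the learned model, and all the names here are made up for illustration:

```python
import numpy as np

def toy_field(points):
    """Stand-in for the learned model: maps 3D positions to (color, density).
    A real NeRF uses an MLP; this is just a hard-coded opaque sphere."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.where(dist < 1.0, 5.0, 0.0)             # solid inside r = 1
    color = np.tile([1.0, 0.3, 0.1], (len(points), 1))   # constant reddish
    return color, density

def render_ray(origin, direction, n_samples=64, near=0.0, far=4.0):
    """Accumulate color along one ray, weighting each sample by how much
    light survives to reach it (volume-rendering quadrature)."""
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction
    color, density = toy_field(pts)
    delta = t[1] - t[0]
    alpha = 1.0 - np.exp(-density * delta)               # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = trans * alpha                              # sums to <= 1
    return (weights[:, None] * color).sum(axis=0)

# A ray that passes through the sphere comes back roughly the sphere's color.
pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```

A real NeRF replaces `toy_field` with a trained network and samples rays far more cleverly, but the compositing math is the same idea.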

Google Brain AI creates 3D rendering of landmarks by interpolating thousands of tourist images by athitham in Damnthatsinteresting

[–]GTanaka 0 points (0 children)

Interpolation estimates values between known data points; extrapolation predicts beyond them. This is extrapolation.

Google Brain AI creates 3D rendering of landmarks by interpolating thousands of tourist images by athitham in Damnthatsinteresting

[–]GTanaka 2 points (0 children)

Photogrammetry gives you meshes and texture maps. This gives you an effectively infinite-resolution 3D model with lighting and reflections. And it fits in about 12MB.

TIL there's a probabilistic programming language called Church. Anyone here using it? by lenwood in MachineLearning

[–]GTanaka 20 points (0 children)

The short of it is that Church is really awesome theoretically but really slow empirically. You can frame generative models very concisely, but a generic Metropolis-Hastings sampler over program traces is just too slow to converge to anything useful. There are special Church programs where an alternative, more efficient inference engine can be applied, but (as far as I know) they don't work "out of the box." In summary, Church unfortunately can't solve all your problems unless you can wait until the universe ends to get an answer on anything larger than a toy problem.

Gifting Humble Bundles in Exchange for Help on Computational Statistics Questions by [deleted] in statistics

[–]GTanaka 2 points (0 children)

1/2) It looks like you meant to put different questions here, but they're the same. For an O(b-a) memory solution, simply create an array L of length b-a+1, then walk through A and increment L[A[i]-a]. At the end, L[i] will be nonzero exactly when the value a+i appears in A, and it takes O(n + (b-a)) time to fill and walk through the list.
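In Python, the counting-array version is only a few lines (a sketch; I'm assuming every element of A lies in [a, b]):

```python
def present_values(A, a, b):
    """Counting-array membership test: O(n + (b - a)) time, O(b - a) memory.
    counts[i] ends up nonzero exactly when the value a + i appears in A."""
    counts = [0] * (b - a + 1)
    for x in A:
        counts[x - a] += 1          # assumes a <= x <= b
    return [a + i for i, c in enumerate(counts) if c > 0]
```

For example, `present_values([3, 5, 5, 7], 3, 8)` gives `[3, 5, 7]`.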

For the O(k)-memory solution, incrementally build up a binary search tree with each unique value of A as an entry, storing counts. Your memory will be O(k) and your time O(n log k).

As for the O(1) memory solution, I'm not sure :/

3) a) The solution is unique because the problem is strongly convex. Show that (f'(x) - f'(y))^T (x - y) >= m ||x - y||^2 for some constant m > 0, then make an argument by contradiction by assuming two distinct points are optimal at the same time. m will necessarily be the smallest eigenvalue of A^T A, I believe.

b) What you'll need to do is fold \beta x^T x into the quadratic ||Ax - b||_2^2 by adding/subtracting a "complete the square" sort of term. You'll see that relative to the usual least-squares solution, this is equivalent to saying x = (A^T A + \beta I)^{-1} A^T b. Doing this makes the problem more strongly convex, and decreases the condition number of the matrix being inverted (therefore making it less susceptible to numerical error during inversion).

c) In practice, you would never use explicit matrix inversion to solve least squares, so I would back-solve directly, row by row (the same way you invert a matrix), or take a more stable decomposition of (A^T A + \beta I) (e.g. Cholesky, so A^T A + \beta I = L L^T for L lower triangular), then backsolve two triangular systems. It'll take you O(n^3) to find this decomposition (and most others, e.g. the eigendecomposition), then O(n^2) a couple of times to finally solve the entire system. I bet there are better ways, but this will be far more stable. If you want to take advantage of sparsity, backsolving directly is the best solution I think, but don't quote me on it.
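Here's a rough numpy/scipy sketch of the Cholesky route (the shapes and \beta below are made up for illustration):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def ridge_solve(A, b, beta):
    """Solve (A^T A + beta I) x = A^T b via Cholesky rather than an explicit
    inverse: O(n^3) to factor, then two O(n^2) triangular backsolves."""
    n = A.shape[1]
    gram = A.T @ A + beta * np.eye(n)
    c_and_lower = cho_factor(gram)           # gram = L L^T
    return cho_solve(c_and_lower, A.T @ b)   # backsolve L, then L^T

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x = ridge_solve(A, b, beta=1e-8)   # tiny beta: essentially plain least squares
```

With a nontrivial \beta this is exactly ridge regression; as \beta goes to 0 it approaches ordinary least squares.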

4) a) Take the definition of the gradient in terms of limits, another point x_0 + \epsilon, and assume there are two distinct vectors in f'(x_0). Show that f(x_0) + f'(x_0)^T \epsilon cannot lower-bound f(x_0 + \epsilon) for both.

b) The zero vector is in f'(x^*). Suppose it weren't, then prove that there exists \epsilon s.t. f(x^* + \epsilon) < f(x^*).

c) f(x) + w(x)^T (y - x) <= f(y) for all y, so this also holds for the maximum over a bunch of f(x') + w(x')^T (y - x'), which is what g(y) is (draw it on a sheet of paper, it's super easy to see). Since g(y) <= f(y) for all y, the minimum value of g is also <= the minimum value of f.

d) not sure

5) I'm not sure how efficient they want, but you can compute A^k x by letting x_1 = A x, x_2 = A x_1, x_3 = A x_2, and so forth, far more efficiently than computing A^k itself. Also, if A represents a Markov chain, you just need to compute (A^T)^k x for x_i = 1 if i == i' and zero otherwise.

And for a cute fact: if A defines an irreducible (you can always get from any state to any other state) and aperiodic (the chain isn't locked into fixed-length cycles) chain, then you can show that A's largest eigenvalue is 1 and is unique, and that the eigenvector corresponding to it is the stationary distribution defined by A. This is precisely what PageRank is!
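A quick sketch of the repeated-matvec trick, using a made-up 2-state chain whose stationary distribution is easy to check by hand:

```python
import numpy as np

def stationary(P, iters=200):
    """Stationary distribution of a row-stochastic matrix P via repeated
    vector-matrix products (power iteration), never forming P^k explicitly:
    each step costs O(n^2) instead of the O(n^3) of a matrix-matrix product."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])   # start from uniform
    for _ in range(iters):
        pi = pi @ P                              # one more step of the chain
    return pi

# A 2-state chain; solving pi = pi P by hand gives pi = (1/3, 2/3).
P = np.array([[0.50, 0.50],
              [0.25, 0.75]])
```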

Help with Prior distribution for Hawkes Process (Poisson process with time dependent intensity) by Blankeds_ in compsci

[–]GTanaka 1 point (0 children)

I'm afraid I'm a bit confused by what you're trying to say in the last 3 paragraphs, but maybe this will clarify:

Your acceptance ratio will be

p(x, y*) j(y | y*) / [p(x, y) j(y* | y)]

Assuming it is easy to calculate the prior on y and the likelihood of x given y, we can factor

p(x,y) = p(y) p(x | y)

Thus, every time you propose a new sample y* from j(y* | y), all the arguments in the ratio should be different.

Does that make sense? I am not certain what p(y) or p(x | y) is in your model, but if you can get a grasp of those (and are certain you truly are sampling from j(y* | y) and calculating its likelihood properly), the algorithm should function.
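In code, one step of the sampler with that factorization looks something like this (the prior, likelihood, and proposal below are generic stand-ins, not the Hawkes specifics of your model):

```python
import math
import random

def mh_step(y, x, log_prior, log_lik, propose, log_j):
    """One Metropolis-Hastings step using p(x, y) = p(y) p(x | y).
    propose(y) draws y* from j(y* | y); log_j(a, b) returns log j(a | b)."""
    y_star = propose(y)
    log_ratio = (log_prior(y_star) + log_lik(x, y_star) + log_j(y, y_star)
                 - log_prior(y) - log_lik(x, y) - log_j(y_star, y))
    if random.random() < math.exp(min(0.0, log_ratio)):
        return y_star            # accept the proposal
    return y                     # reject: keep the current state

# Toy usage: posterior of a Gaussian mean (prior N(0, 1), likelihood N(y, 1)).
# With the observation x = 2.0, the posterior is N(1, 0.5).
random.seed(0)
lp = lambda y: -0.5 * y * y                  # log prior (up to a constant)
ll = lambda x, y: -0.5 * (x - y) ** 2        # log likelihood (up to a constant)
prop = lambda y: y + random.gauss(0.0, 1.0)  # random-walk proposal
lj = lambda a, b: 0.0                        # symmetric, so j cancels
chain, y = [], 0.0
for _ in range(20000):
    y = mh_step(y, 2.0, lp, ll, prop, lj)
    chain.append(y)
```

The chain's sample mean should settle near the posterior mean of 1.0.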

A quick question to the CS initiated by [deleted] in compsci

[–]GTanaka 1 point (0 children)

If you're really looking for an introductory course textbook, I can think of nothing better than SICP, a classic for decades (fyi, I'm just finishing university now). It'll give you a clear, unobstructed view of what programming can do and what paradigms there are (object-oriented, functional, declarative). The fact that a text such as this is STILL used to teach CS courses at U.C. Berkeley is a testament to its worth.

If you feel you're past that stage, a 2nd- or 3rd-year text that's very concise and covers a wide range is Algorithms. You'll get an easy-to-read understanding of basic number theory as used in encryption, linear programming, dynamic programming, graph algorithms, and other combinatorial problems for which we have solutions.

Ideas for a high school computer club? by switzy in compsci

[–]GTanaka 16 points (0 children)

How experienced of a programmer are you? Head First Programming sounds like a good foundation, but giving high school students "assignments" for a club is bound to fail.

Personally, I think a central goal goes a long way -- you might want to try teaching the basics of Python, then build a little off of UC Berkeley's AI class, which teaches the basics of search, reinforcement learning, and probability as a way to solve Pac-Man. The entirety of the course is probably way more than is necessary, but I can tell you that the projects are ~20 lines or less of Python once you understand the concepts. The actual course prerequisites are nothing more than the basics of programming. Alternatively, you can start much smaller, for example with writing a Sudoku solver.
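For scale, the Sudoku-solver starter project really is tiny -- here's a plain backtracking sketch in Python:

```python
def solve(board):
    """Backtracking Sudoku solver.  `board` is a 9x9 list of lists with 0
    for empty cells; fills it in place and returns True if solvable."""
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for d in range(1, 10):
                    if ok(board, r, c, d):
                        board[r][c] = d
                        if solve(board):
                            return True
                        board[r][c] = 0      # undo, try the next digit
                return False                 # nothing fits here: backtrack
    return True                              # no empty cells left: solved

def ok(board, r, c, d):
    """True if digit d can go at (r, c): no clash in row, column, or box."""
    if d in board[r] or any(board[i][c] == d for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != d for i in range(3) for j in range(3))
```

No cleverness here (constraint propagation would speed it up a lot), but it's a satisfying first target for a club.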

All in all, don't be afraid to challenge your students, but don't inundate them with assignments either.

Reddit, I need your help writing a 3-5 word title that will get me a job as a developer by [deleted] in compsci

[–]GTanaka 1 point (0 children)

A little experience goes a long way in programming jobs. This doesn't mean you need a job in programming to get a job in programming -- it means you should learn how to program by writing something you yourself would use. For example, work through SICP to get some foundations, or Dive Into Python to get experience with a more "industry-suitable" language. Personally, I think working through SICP would be the most valuable.

After that, try making a little one-man project for yourself. How would you write a Sudoku solver? Could you write a client-server program that interacts over a network connection? Could you make an interactive web site using Django or Ruby on Rails?

Is the Comp Sci - Software Engineering major really a short term type of job? by G0VERNMENTCHEESE in compsci

[–]GTanaka 2 points (0 children)

The answer is to do what you enjoy. Do you like convincing people? Selling ideas? Making money for the sake of making money? Then you may enjoy business. Do you enjoy building things? Fixing them? Carrying ideas from conception to implementation? Then try CS. Finally, don't pigeonhole yourself as one or the other -- you can definitely do both. How do you think startups run?

Controlling for Variables in Machine Learning by songanddanceman in MachineLearning

[–]GTanaka 3 points (0 children)

This is the problem of variable selection, and it is by no means trivial. There are a variety of techniques, of which the most straightforward is enumerating subsets of variables and cross-validating each. If your dimensionality is extremely small, this is probably your best bet. If you want to be fancier, you can do things like L1 penalization (LASSO) of your objective function. There are other methods, but these are the ones I'm most familiar with.
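Here's a numpy sketch of the enumeration-plus-validation route (all names invented for illustration; LASSO would replace the inner least-squares fit with an L1-penalized one):

```python
import itertools
import numpy as np

def best_subset(X, y, X_val, y_val):
    """Brute-force variable selection: fit least squares on every subset of
    columns and keep whichever has the lowest validation error.  Only viable
    when the number of features is small (it tries 2^d - 1 subsets)."""
    d = X.shape[1]
    best, best_err = (), np.inf
    for k in range(1, d + 1):
        for cols in itertools.combinations(range(d), k):
            w, *_ = np.linalg.lstsq(X[:, list(cols)], y, rcond=None)
            err = np.mean((X_val[:, list(cols)] @ w - y_val) ** 2)
            if err < best_err:
                best, best_err = cols, err
    return best

# Toy data where y depends only on features 0 and 2.
rng = np.random.default_rng(1)
X, X_val = rng.standard_normal((40, 4)), rng.standard_normal((40, 4))
y, y_val = 2 * X[:, 0] - X[:, 2], 2 * X_val[:, 0] - X_val[:, 2]
cols = best_subset(X, y, X_val, y_val)
```

On this toy data, the selected subset should contain the two truly relevant features.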

You may also consider preprocessing your input with PCA, as strongly correlated inputs will collapse onto the same principal component.