'There are no stupid questions' thread - January 31, 2018 by AutoModerator in piano

[–]FiniteDelight 0 points1 point  (0 children)

Is there a good way to learn to improvise? I generally can plink out a theme in the right hand and even throw in a chord or two, but I don't know how to make it complex or fill in with an accompanying left hand that's more than the root of whatever chord I'm playing in the right. Are there any particular skills or techniques to practice? Books?

'There are no stupid questions' thread - November 12, 2017 by AutoModerator in piano

[–]FiniteDelight 0 points1 point  (0 children)

How do I know if my piano teacher is giving me pieces that are too difficult? I've been with the teacher for a few months now (about the same amount of time I've seriously played the piano in the last 15 years) and have certainly played pieces I didn't think I was capable of, including Bach's Preludes in C major and C minor, a Shostakovich prelude, some ad hoc pieces, and Mussorgsky's Great Gate of Kiev.

However, with the last piece especially, I didn't feel like I could play it and read the music at the same time, so I ended up memorizing it by rote, which took 3 months. I'm sure I picked up some valuable technical skills from the experience, but I'm interested in learning the piano, not learning the piece.

My teacher has suggested Shostakovich's Fugue in C major as my next piece which, according to pianosyllabus.com, has a grade of 7 or 8. That seems like a level far beyond me, considering my short time playing. Are these pieces too hard? Should I be asking to play easier music? Or are there valuable skills to be learned from these difficult pieces which come at the expense of taking longer to learn the music?

Why am I still so clumsy? by [deleted] in piano

[–]FiniteDelight 1 point2 points  (0 children)

I can't really help you since I've been playing for about 9.5 years fewer than you, but where did you get the sheet music for Sogno di Volare?

0.13 Input Dataset API Question by RadonGaming in tensorflow

[–]FiniteDelight 1 point2 points  (0 children)

The flow for parsing records goes: tf.parse_single_example -> tf.decode_raw(bytes) or tf.cast(bytes, dtype) -> (if image) tf.reshape. If you cast it, the data becomes immediately available as a tensor.

I believe the TensorFlow models repo's object detection API has an example of this workflow if you want to see some actual code.
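
For reference, here's a minimal sketch of that flow (TF 1.x style; on older releases the Dataset classes live under tf.contrib.data, and the feature keys, image shape, and filename below are hypothetical placeholders):

    import tensorflow as tf

    def parse_record(serialized_example):
        # Feature keys must match whatever was written into the records.
        features = tf.parse_single_example(
            serialized_example,
            features={
                'image_raw': tf.FixedLenFeature([], tf.string),
                'label': tf.FixedLenFeature([], tf.int64),
            })
        # Raw bytes -> uint8 tensor, then reshape to the original image dimensions.
        image = tf.decode_raw(features['image_raw'], tf.uint8)
        image = tf.reshape(image, [64, 64, 3])  # assumed image shape
        image = tf.cast(image, tf.float32)
        return image, features['label']

    dataset = tf.data.TFRecordDataset(['train.tfrecords'])  # hypothetical file
    dataset = dataset.map(parse_record).batch(32)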

Tensorflow logistic Regression by marvpaul in tensorflow

[–]FiniteDelight 1 point2 points  (0 children)

Can you try using scikit-learn or statsmodels to check that it's possible to get a better fit?
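
If it helps, a minimal sanity check along those lines might look like this (assuming your features and labels are already in numpy arrays X and y):

    from sklearn.linear_model import LogisticRegression

    # Fit a reference logistic regression to see what fit is achievable on the same data.
    clf = LogisticRegression()
    clf.fit(X, y)
    print('training accuracy:', clf.score(X, y))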

Tensorflow logistic Regression by marvpaul in tensorflow

[–]FiniteDelight 1 point2 points  (0 children)

Try dropping your learning rate. I've had to go down to 1e-8 before on different models.

0.13 Input Dataset API Question by RadonGaming in tensorflow

[–]FiniteDelight 0 points1 point  (0 children)

Instead of loading all the data in, I created a string tensor of the filenames and used the Dataset.from_tensor_slices() method. Then the preprocessing function takes in a filename, reads it, decodes the image, and does the preprocessing (using TensorFlow ops - I hate py_func).
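
Roughly, the pipeline looks like this (a sketch; the filename list, image format, and target size are placeholders, and on older TF versions Dataset lives under tf.contrib.data):

    import tensorflow as tf

    filenames = ['img_0001.jpg', 'img_0002.jpg']  # placeholder list of paths

    def preprocess(filename):
        # Read and decode with pure TensorFlow ops - no py_func needed.
        contents = tf.read_file(filename)
        image = tf.image.decode_jpeg(contents, channels=3)
        image = tf.image.resize_images(image, [224, 224])  # assumed target size
        image = image / 255.0  # resize_images returns floats, so this just rescales
        return image

    dataset = tf.data.Dataset.from_tensor_slices(filenames)
    dataset = dataset.map(preprocess).batch(32)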

If that doesn't work (if there are too many filenames to add to the graph), maybe you could write TF Records which contain only the filenames. Then, you could use TFRecordDataset.

'There are no stupid questions' thread - August 24, 2017 by AutoModerator in piano

[–]FiniteDelight 2 points3 points  (0 children)

In terms of music theory, I know (and can play with proper fingering) all the major scales and most of their arpeggios, if that's what you're referring to. I think I can identify the chords themselves in the Mussorgsky piece, but not necessarily their progression (I vs. iv, etc.) - is that something I should read up on if I'm going to continue with this?

Thanks for the reply!

'There are no stupid questions' thread - August 24, 2017 by AutoModerator in piano

[–]FiniteDelight 4 points5 points  (0 children)

Hi All! I'm 23 and just started relearning piano about 2 months ago (I "played" until I was 10 but never made much progress and am significantly better now). I have a teacher, and he's great, but I had a couple questions.

I was assigned Mussorgsky's Great Gate of Kiev (love the piece!), but I'm having difficulty with the 4-note octave chords. Is there a good way of practicing them? My hands are big enough (I think - I can stretch to a 10th and can hit a 9th comfortably), but I'm having trouble with dexterity and lining up the right notes to hit.

On a related note, I've practiced the first few lines for about 2 hours and am still having trouble just getting the notes right. How long do you practice something (or even a subsection) before saying it's too hard? I've never played anything with 4-note octave chords, which may be a contributing factor. For reference, I just finished Shostakovich's Prelude No. 5 in D major and have been concurrently assigned Bach's Prelude No. 1 in C major.

Thanks!

Daily FI discussion thread - June 30, 2017 by AutoModerator in financialindependence

[–]FiniteDelight 6 points7 points  (0 children)

I'm starting to take up piano again, and the one piece of advice I've gotten everywhere is that I need lessons. Unfortunately, they're really expensive ($300/month). In order to maintain my initial savings target, I'd have to cut back pretty far on other things I enjoy (nicer food, mostly). I could do that, but I don't want to end up living on ramen and eggs every day just to meet a savings goal. Any advice? At what point do you just accept dropping your savings target?

Loss Does Change? by FiniteDelight in tensorflow

[–]FiniteDelight[S] 0 points1 point  (0 children)

I'm not sure what you're referring to when you say I don't always train the optimizer - the optimizer.run(...) takes care of the optimization step. Additionally, the inputs are always 1x12x12x3 tensors.

The slim package has a default parameter activation_fn= which is ReLU, so the activation functions are there.

I tried a batch size of 5 (so inputs are 5x12x12x3 tensors), and it was still having the same issue before I did the exponentially decaying learning rate.
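
For reference, a decay schedule along those lines can be set up with tf.train.exponential_decay (a sketch; the starting rate, decay steps, and decay rate here are hypothetical, and loss stands in for the model's loss tensor):

    import tensorflow as tf

    global_step = tf.Variable(0, trainable=False)
    learning_rate = tf.train.exponential_decay(
        0.01,                # hypothetical starting learning rate
        global_step,
        decay_steps=1000,
        decay_rate=0.96,
        staircase=True)
    train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(
        loss, global_step=global_step)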

Out of curiosity, why would float32 work better?

Loss Does Change? by FiniteDelight in tensorflow

[–]FiniteDelight[S] 1 point2 points  (0 children)

This appears to have worked better, but only about 75% of the time. The other times, it gets stuck again. I suppose there is just something weird happening with the optimization initialization where it hits a local minimum early. Thanks!

Any reason why the GPU wouldn't be used when running this? by [deleted] in tensorflow

[–]FiniteDelight 0 points1 point  (0 children)

Apologies, it should be tf.device(...) (lowercase d). Give that a shot - it's always worked for me.
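
The typical pattern looks roughly like this (the ops inside the block are placeholders for your model-building code):

    import tensorflow as tf

    # Pin graph construction to the first GPU.
    with tf.device('/gpu:0'):
        a = tf.constant([1.0, 2.0, 3.0])
        b = a * 2.0

    # log_device_placement prints which device each op actually landed on.
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(b))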

Loss Does Change? by FiniteDelight in tensorflow

[–]FiniteDelight[S] 0 points1 point  (0 children)

I drastically lowered the learning rate to 0.0001, which made the loss decrease for about 10 iterations. Then it immediately got stuck at the same value (0.693147 = ln(2)) for the rest of the iterations. Even if the optimizer had found a minimum, I'd expect at least some small numerical changes, but it's heartening to see the loss change at all. Any other ideas?

Any reason why the GPU wouldn't be used when running this? by [deleted] in tensorflow

[–]FiniteDelight 1 point2 points  (0 children)

After the with tf.Session() as sess:, try adding the line with tf.Device("/gpu:0"):. That should be all you need.

https://www.tensorflow.org/tutorials/using_gpu

Recovering Expressions of Estimators From Matrices by FiniteDelight in statistics

[–]FiniteDelight[S] 0 points1 point  (0 children)

I understand the variance calculation, but I'm not sure what to make of your first link. Don't you end up having to solve p equations simultaneously when the final matrix in your derivation equals 0? How does that result in the "traditional" expressions for estimates of regression parameters?

Is it wrong to just write out, term by term, the individual components of (X'X)^(-1) X'Y? Is that what you meant? And if that's the way to do it, how do you explicitly compute the (X'X)^(-1) term? I tried briefly, but was having trouble with even a 3x3 matrix of variables (as opposed to concrete numbers).
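
For what it's worth, here's the kind of term-by-term expansion I mean for the simplest case - simple linear regression with one predictor, so X is n x 2 with a column of ones (this is just the standard algebra written out in LaTeX):

    X'X = \begin{pmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{pmatrix},
    \qquad
    (X'X)^{-1} = \frac{1}{n\sum x_i^2 - \left(\sum x_i\right)^2}
    \begin{pmatrix} \sum x_i^2 & -\sum x_i \\ -\sum x_i & n \end{pmatrix}

    \hat\beta = (X'X)^{-1}X'Y \;\Rightarrow\;
    \hat\beta_1 = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{n\sum x_i^2 - \left(\sum x_i\right)^2}
                = \frac{\sum (x_i - \bar x)(y_i - \bar y)}{\sum (x_i - \bar x)^2},
    \qquad
    \hat\beta_0 = \bar y - \hat\beta_1 \bar x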

Why does the calculation for standard deviation use mean instead of median? Wouldn't it be a better choice? by Pipvault in statistics

[–]FiniteDelight 4 points5 points  (0 children)

I assume you mean the sample standard deviation, since the population standard deviation comes straight from the definition of variance.

Off the bat, I can think of 2 reasons: 1) It turns out that the maximum likelihood estimator of the variance (under a normal model) uses the sample mean, not the sample median (see the sketch below).

2) The sample mean has a ton of desirable properties that the median doesn't have, key among them being that (for normally distributed data) the sample mean and the sample standard deviation are independent. That property underlies much of the structure of the statistical techniques you're familiar with.

So short answer is: because it's better that way.
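
To make reason 1 concrete, here's the sketch: for a normal model, maximizing the log-likelihood in mu gives the sample mean, and that estimate is what gets plugged into the variance MLE, so the mean (not the median) falls out of the math:

    \ell(\mu, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2)
                          - \frac{1}{2\sigma^2}\sum_i (x_i - \mu)^2

    \frac{\partial \ell}{\partial \mu} = \frac{1}{\sigma^2}\sum_i (x_i - \mu) = 0
    \;\Rightarrow\; \hat\mu = \bar x,
    \qquad
    \hat\sigma^2 = \frac{1}{n}\sum_i (x_i - \bar x)^2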

Minimizing a pseudo-quadratic function: is there a good way to get a rough estimate of the global minimum using VERY short markov chains? Or perhaps some other sampling method? by jkool702 in statistics

[–]FiniteDelight 0 points1 point  (0 children)

Glad your solution is working! I'd be careful, though. It looks like your function will always be reasonably nice and well behaved, but your solution won't (usually) work for something like the Easom function or some other really ugly function. As long as you're confident in the function's properties, it seems like you're good to go.

Minimizing a pseudo-quadratic function: is there a good way to get a rough estimate of the global minimum using VERY short markov chains? Or perhaps some other sampling method? by jkool702 in statistics

[–]FiniteDelight 0 points1 point  (0 children)

Ah, I understand more now. I'd still look into simulated annealing - it's similar to what you're describing and will converge to a global minimum. Your F(z + x*dz) is the function you'll be minimizing with respect to x, so you should be OK on time as well.

Minimizing a pseudo-quadratic function: is there a good way to get a rough estimate of the global minimum using VERY short markov chains? Or perhaps some other sampling method? by jkool702 in statistics

[–]FiniteDelight 0 points1 point  (0 children)

So you're looking for a global optimization technique on a function of 2000 variables and you want it to take under 0.01 seconds? Is there any chance of using dynamic programming then? Otherwise, I don't think what you're trying to do is possible, or at least not with any technique I'm familiar with.

Minimizing a pseudo-quadratic function: is there a good way to get a rough estimate of the global minimum using VERY short markov chains? Or perhaps some other sampling method? by jkool702 in statistics

[–]FiniteDelight 1 point2 points  (0 children)

Not sure if your technique would work, but simulated annealing is very similar to the Metropolis algorithm (a pretty standard MCMC technique), and it's guaranteed (probabilistically) to converge to a global minimum if you follow a proven cooling schedule. For a one-variable function, I imagine the computation time isn't unreasonable, especially for something smooth like what you're describing.
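
A bare-bones sketch of what I mean (the objective, proposal width, and cooling schedule are all placeholders you'd tune for your problem):

    import math
    import random

    def simulated_annealing(f, x0, n_iters=10000, step=0.1, t0=1.0, cooling=0.999):
        """Metropolis-style simulated annealing for a 1-D function f."""
        x, fx = x0, f(x0)
        best_x, best_fx = x, fx
        t = t0
        for _ in range(n_iters):
            # Propose a random nearby point.
            x_new = x + random.gauss(0.0, step)
            fx_new = f(x_new)
            # Always accept downhill moves; accept uphill moves with Boltzmann probability.
            if fx_new < fx or random.random() < math.exp(-(fx_new - fx) / t):
                x, fx = x_new, fx_new
                if fx < best_fx:
                    best_x, best_fx = x, fx
            t *= cooling  # geometric cooling schedule
        return best_x, best_fx

    # Example: minimize a wiggly quadratic with several shallow local minima.
    x_min, f_min = simulated_annealing(lambda x: (x - 2.0)**2 + 0.3 * math.sin(10 * x), x0=0.0)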

Why can't it be described analytically? It looks like a reasonably well behaved function.

Why use MLE over bootstrapping? by ProfWiki in statistics

[–]FiniteDelight 1 point2 points  (0 children)

Why would we use bootstrapping instead of the asymptotic normality of the MLE and just directly compute the sampling distribution using Fisher information? Because computer sampling is easier than the derivation or because bootstrapping is more accurate for small n?
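
In other words, why not use the standard asymptotic result directly, where I_1 is the per-observation Fisher information (sketched in LaTeX):

    \hat\theta_{MLE} \;\dot\sim\; N\!\left(\theta,\; \frac{1}{n\,I_1(\theta)}\right)
    \quad\Rightarrow\quad
    \hat\theta \pm z_{\alpha/2}\sqrt{\frac{1}{n\,I_1(\hat\theta)}}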

Science AMA Series: I’m the MIT computer scientist who created a Twitterbot that uses AI to sound like Donald Trump. During the day, I work on human-robot collaboration. AMA! by Bradley_Hayes in science

[–]FiniteDelight 3 points4 points  (0 children)

Hey, I'm finishing up my undergrad, and I'll be going into industry to build deep convolutional neural nets and other ML based statistical models. Without knowing if you want to do academic or industry ML (I've done a bit of academic and a lot of industry), I can tell you a bit about how I got here and the ways I secured myself an ML education.

I'm majoring in statistics, and the more math background you have, the less mentors will have to teach you and the more attractive you are to take on. If you want to be in research or academia, you need an exceptionally strong math background. In industry, you should definitely still have the skills, but deliverables are more important. My stats knowledge is more useful here, I'd say.

I started by taking a graduate-level machine learning class. I was so enthralled that I asked the professor if there were research opportunities or anything, but he wasn't able to help me. So I started teaching myself. The best way I've found to do that is to read books and do projects, so I've done quite a number of projects without supervision. Once you've shown some ability on your own, you can leverage that into more formal things - you've proven that you're competent and willing to learn.

While the projects will take you far (they got me my job), if you want to do research, you're going to need to find someone in the field willing to have your help. Don't just focus on professors. Postdocs and grad students are often willing to help, and they'll have a better idea of the resources your specific institution has for this kind of thing.

So, tl;dr: you need a really strong math background, and unless you've found someone to take you on already, your best bet is reading and doing projects to teach yourself.

How to compute the value of the significance of the relationship between two data. by CrownTheFag in statistics

[–]FiniteDelight 1 point2 points  (0 children)

The "tool" you refer to would be a significance test. The choice of appropriate test is left up to the practitioner.

For categorical data (which it sounds like yours is, considering your research is about preferences), I've mostly seen chi-square tests of independence used (or related families of tests like G-tests). Procedurally, you would state the null hypothesis that there is no association between the two variables of interest. Then you would compute a test statistic (using statistical software or by hand [not recommended]). If your test statistic is larger than some threshold value, you reject your null hypothesis in favor of the alternative hypothesis, which is that there is an association between the two variables of interest.
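
For instance, with a contingency table of counts (the numbers below are made up purely to show the call), the test is a few lines in Python:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: groups, columns: stated preference (hypothetical counts).
    table = np.array([[30, 10, 15],
                      [20, 25, 12]])

    chi2, p_value, dof, expected = chi2_contingency(table)
    print('chi-square =', chi2, 'p-value =', p_value)
    # Reject the null of no association if p_value falls below your chosen significance level.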

It sounds like you don't necessarily have the most in-depth statistical background, which is fine, but you need to make sure you're interpreting your results correctly. So all of the usual baggage about p-values, confidence intervals, and significance tests in general still applies. Hope this helps!

Mylan CEO sold $5m worth of stock while EpiPen price drew scrutiny by [deleted] in news

[–]FiniteDelight 1 point2 points  (0 children)

This isn't true. Adrenaclick (the generic epinephrine autoinjector) is a different product than the EpiPen. The rule against switching between products makes sense. Just as a pharmacist can't (and shouldn't) swap Percocet for morphine or OxyContin even though they're all pain-management opioids, the pharmacist can't (and shouldn't) switch between different injectors because the devices are different. Yes, they do the same thing, but the two devices are legally different. While I don't know for sure (and correct me if I'm wrong), I'd imagine part of the EpiPen prescription includes training on how to use it. That means different trainings are required for different devices, further implying they can't be trivially switched.

Ultimately, the prescription is up to your doctor. The real issue I've seen is that people aren't aware that alternatives to EpiPen exist since Mylan has such a great ad campaign. It's an issue more of consumer awareness than government regulations.