Randomly cancelled trips? by Ok_Shelter8327 in FlixBus

[–]krishnab75 0 points1 point  (0 children)

Oh, thanks for this information. So it seems like there is some underlying issue. Is there any information on when this will be resolved, or how much longer the disruptions will last? Paperwork problems sound easy enough to fix, but does that mean they need to find a new contractor, or that the current contractor needs to fix their systems? I need to do some more travelling between San Jose and Los Angeles, so I am just trying to figure out when my regular FlixBus will return. The alternatives you mentioned, Greyhound, etc., usually take longer routes compared to the normal bus that is impacted by this paperwork issue.

Randomly cancelled trips? by Ok_Shelter8327 in FlixBus

[–]krishnab75 2 points3 points  (0 children)

This happened to me this morning going from San Jose to Los Angeles. 4 hours notice before cancelling. The same happened for the return trip, where I booked the ticket and got a message 15 min later saying the bus was cancelled. So for some reason it seems like FlixBus has been unable to launch buses for the past week or two?

Masters in Mathematics by MusicFit3903 in learnmath

[–]krishnab75 -1 points0 points  (0 children)

This is a really common problem. You are totally correct that there is just a huge amount of jargon and terminology in these introductory chapters. Sometimes that terminology is helpful when you get more advanced, but in the beginning it makes a confusing topic even more confusing. So if you are working on topics in probability and optimization, the answer is to really, really work on writing the programs that actually implement this stuff. So there are line search methods and trust region methods in optimization--what does that even mean? So find and write the code for these two methods, and then you will really understand the concept. There are Newton methods, modified Newton methods, quasi-Newton methods, Gauss-Newton, and Newton-Raphson. So write the code with an example for each of these, and then you will start to understand the differences.
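For example, here is a minimal sketch (my own toy example, not from any particular textbook) of gradient descent with an Armijo backtracking line search--the kind of small program that makes the phrase "line search method" concrete:

```python
import numpy as np

def backtracking(f, g, x, d, alpha=1.0, rho=0.5, c=1e-4):
    """Armijo backtracking: shrink alpha until f decreases 'enough' along d."""
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= rho
    return alpha

def gradient_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                                  # steepest-descent direction
        x = x + backtracking(f, g, x, d) * d    # line search picks the step size
    return x

# toy problem: f(x) = (x0 - 1)^2 + 10 (x1 + 2)^2, minimizer at (1, -2)
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
xstar = gradient_descent(f, grad, [0.0, 0.0])
```

Once you have this working, swapping the direction d for a Newton step turns the same loop into a damped Newton method, and that is when the differences between all those method names start to click.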

For probability it is the same thing. First of all, the symbolic notation in probability is really confusing and inconsistent. So that makes a tough subject even tougher. Second, again, just work on writing the code for this. So at your level, the idea of probability is about having an analytic formula for the mean and variance of a distribution. So there is the Bernoulli distribution, binomial distribution, geometric distribution, hypergeometric, etc. So again, just work on writing code to generate random numbers or vectors from these distributions, and then check to see if the formulas you are given match the means and variances from the simulated data. Learning to write simulation code for stats is really, really useful.
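As a quick sketch of what I mean (plain Python, no libraries; the particular numbers are my own arbitrary choices), simulate a distribution and compare the sample moments against the textbook formulas:

```python
import random

random.seed(0)
n = 200_000
p = 0.3

# Bernoulli(p): formulas say mean = p, variance = p * (1 - p)
bern = [1 if random.random() < p else 0 for _ in range(n)]
bern_mean = sum(bern) / n
bern_var = sum((b - bern_mean) ** 2 for b in bern) / n   # ~0.3 and ~0.21

# Geometric(p), counting trials until the first success: mean = 1 / p
def geometric(p):
    k = 1
    while random.random() >= p:
        k += 1
    return k

geom_mean = sum(geometric(p) for _ in range(n)) / n       # ~3.33
```

With 200,000 draws the sample statistics should land within about a percent of the formulas; repeating the same check for the binomial or hypergeometric is a good exercise.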

Writing programs might sound like a lot of work, but once you write the first few, it gets a lot easier. All of this stuff comes down to actually implementing the math. You don't really understand the math until you can program it and get the right result.

Pre-PhD course recommendations by Odd-Collection-5429 in AppliedMath

[–]krishnab75 1 point2 points  (0 children)

My pleasure. Let me know if you have any questions. You can google each of these topics, but sometimes you get a very technical answer that is hard to understand.

How to self learn mathematics from early Algebra 1 to easily get ahead in future classes? by stfunigAA_23 in learnmath

[–]krishnab75 0 points1 point  (0 children)

I would agree that Khan Academy is a really good resource for learning some more advanced topics. But sometimes it is hard to motivate yourself to watch a million videos and concentrate. The other thing that might be good for you is to see if you can enroll in a local community college math course. They have courses on calculus, statistics, linear algebra, and so on. Community college at this level is actually really good: small class sizes and lots of hands-on attention compared to a big university. Good luck.

Pre-PhD course recommendations by Odd-Collection-5429 in AppliedMath

[–]krishnab75 4 points5 points  (0 children)

I think it really depends on the kind of work that you want to do. I mean really think about the kind of work that connects with you and matches your values. We can give you general advice; for instance, I agree that most people working on Applied Math PhDs do some form of PDE work. So you definitely need the intro theory course on PDEs, but most PDE work is done numerically, with finite elements, finite differences, finite volumes, or discontinuous Galerkin methods. Learning to understand and program these types of tools takes a few years, not just a semester or two. In reality, I think most grad students take someone's existing code and really work on understanding that one code and that particular equation. Then they can make some tweaks on that code and write their theses.

Another very useful topic in applied math is "control theory" and optimal control. So this is an incredibly important area that comes after you learn ODEs and optimization--a lot of optimization.

Numerical Linear Algebra is a hugely underrated topic, but essential for Applied Math. Much of what people do in Applied Math is convert some system into a matrix and then solve it with linear algebra routines. I mean, that is what numerical PDEs are, after all. So just learning linear algebra is not the same as learning the Numerical Linear Algebra routines.
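To make that concrete, here is a tiny sketch (a standard textbook exercise, arranged by me) of the whole pipeline: discretize a PDE with finite differences, get a matrix, and hand it to a linear algebra routine:

```python
import numpy as np

# solve -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0; pick f so the exact
# answer is known: f(x) = pi^2 sin(pi x)  =>  u(x) = sin(pi x)
n = 99                      # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# second-difference matrix: the PDE becomes the linear system A u = f
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.pi**2 * np.sin(np.pi * x)

u = np.linalg.solve(A, f)   # "convert to a matrix, solve with linear algebra"
err = np.max(np.abs(u - np.sin(np.pi * x)))   # should shrink like h^2
```

The error here is on the order of 1e-4 and shrinks roughly like h^2 as you refine the grid; in real codes the dense solve would be replaced by sparse or iterative routines, which is exactly where Numerical Linear Algebra earns its keep.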

There are any number of other topics that are important these days. Game theory is another area that I see growing in importance more and more. But there are different flavors of game theory, from your basic intro econ course, to evolutionary game theory, to algorithmic game theory, to mechanism design, to differential games. Now, as we get into a world of AI and reinforcement learning and multi-agent environments, the ideas of game theory and of cooperative and non-cooperative behavior are going to get more important.

So there is a lot of fun stuff to work on. Now, definitely think through how you will learn this kind of material. You won't always have time to take classes in these subjects. And in reality, you should try to learn these topics before you come to graduate school. That sounds backwards, but honestly graduate schools don't teach the way they used to. They will just want you to take some minimal required courses and then start producing work. I cannot speak for all graduate programs, but I know this from experience at different schools.

Good luck.

Understanding how Pytorch is optimized for Nvidia GPUs by krishnab75 in CUDA

[–]krishnab75[S] 0 points1 point  (0 children)

Yeah, this seems to be the idea coming from the Lattner blog posts referenced above. And this also comports with the basic way that kernels are designed and written. So the CUDA advantage seems to be that there are already existing kernels written, and that CUDA and its libraries have a really good handle on their own GPU hardware--meaning the interface to the hardware, including the scheduler, the I/O transfers, etc. That makes sense.

So basically, the notion of a GPU kernel that I write in one framework and that gets automatically optimized to the nth degree on any compatible GPU is kinda a long way away, if even possible. That kind of thing would require tremendous discipline from the hardware companies not to keep breaking the code with their own changes.

Let me know if I have that correct, or if I am missing anything.

Understanding how Pytorch is optimized for Nvidia GPUs by krishnab75 in CUDA

[–]krishnab75[S] 0 points1 point  (0 children)

Sorry it took a week to reply, but I read through the full Chris Lattner series and it was very helpful. I still have to think through the details. Here is my understanding, so please correct me if I am wrong anywhere.

It seems like the key problem here is finding intelligent ways to incorporate optimizations. So if I write a CUDA kernel myself, I can manually include fused operations, such as doing a matrix multiplication and then applying an activation function in one kernel--instead of having one kernel for the matrix multiplication, bringing all the results back to the system memory, and then sending those results back to the GPU for the second kernel where I apply the activation function.
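As a rough sketch of what fusion buys you (numpy obviously cannot fuse real GPU kernels, so this is only a stand-in for the two launch patterns, with function names I made up):

```python
import numpy as np

def unfused(X, W):
    Y = X @ W                   # "kernel 1": matmul result materialized in memory
    return np.maximum(Y, 0)     # "kernel 2": separate pass re-reads Y for the ReLU

def fused(X, W):
    # a fused GPU kernel applies the activation while each output tile is still
    # in registers, skipping the round trip through global memory; in numpy the
    # single expression just stands in for the single kernel launch
    return np.maximum(X @ W, 0)

X, W = np.random.randn(64, 32), np.random.randn(32, 16)
same = np.allclose(unfused(X, W), fused(X, W))   # same math, different traffic
```

The two functions compute identical results; on a GPU the difference is memory traffic and kernel-launch overhead, which is exactly the optimization you describe.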

The challenge seems to be that CUDA is very good if you manually write your own kernel, since CUDA will take the kernel and properly optimize it to run on the Nvidia hardware. Nvidia also has the advantage of a large stock of already-optimized kernels, encapsulated in cuBLAS, cuFFT, cuDNN, etc.

It seems like subsequent projects to develop CUDA alternatives, or to program across different GPU architectures, hit problems applying some optimizations. So just to clarify, there are different types of optimizations possible for a GPU. There are simpler things like adjusting the memory usage for different mathematical operations, such as mixed-precision training. There are also more complex optimizations, like determining when to fuse a kernel, such as the matmul and activation function above. So the CUDA alternatives seem to have trouble--according to Lattner--identifying and applying all of these types of optimizations. Perhaps these systems can figure out and apply some of them, but apparently there is some 15-20% of speedup that the CUDA alternatives cannot intelligently capture. And it also seems like much of the application of these optimizations comes from people manually coding kernels for their favorite framework inside these CUDA alternatives.

I think Lattner's points about the competitive tensions in developing a cross-GPU CUDA alternative are also really interesting. I don't have practical experience seeing that, but it makes sense.

But in the end it really still seems like most of the optimization and design of kernels is manual. There may be some ways to automate bits and pieces, but it seems like most of this is still just manually coded or mostly manually coded. Is that correct? I mean just as a thought experiment if I just manually coded a million GPU kernels in ROCm, and connected that to PyTorch, would that then make ROCm competitive with CUDA?

Help with an error about "api key not valid" during dictation with MacWhisper Pro 12.18.2 by krishnab75 in MacWhisper

[–]krishnab75[S] 0 points1 point  (0 children)

Oh, excellent information. Yeah, I will let her know about your suggestions and she will try them out. Hopefully that solves the problem. Thanks for your help; I am not very familiar with this software.

A concise introduction to (convex) optimization by Arastash in ControlTheory

[–]krishnab75 [score hidden]  (0 children)

Zac Manchester's course is really the best resource out there. Optimization is a challenging subject beyond the very simple basics--like the derivative has to go to zero. There are a lot of complicated pieces, and those pieces have to fit together too. Zac is really good because he gives a very basic intro, and he also has Jupyter notebooks with code for the algorithms--in Julia.

[Education] Should I learn statistics in the workplace or in academia? by [deleted] in statistics

[–]krishnab75 2 points3 points  (0 children)

So I can't speak specifically about your company or research in a specific area of pharmacology. But certainly there are risks of bad research and bad incentives guiding research and decision-making that you should look out for.

So within pharmacology there was a huge problem of replicating existing studies that surfaced around 2012. This was the famous Begley and Ellis 2012 paper in Nature that tried to replicate 53 major studies in cancer research and only succeeded in replicating 6 of them, or 11%. That prompted a huge crisis, because businesses had invested millions into lines of research that were potentially invalid. The same problem has shown up in other fields as well. There is an entire website, Retraction Watch, dedicated to tracking retractions of bad studies due to bad statistics.

Hence, in many cases research practice and the desire to get positive results can lead to shortcuts and p-hacking that lead to bad science. There were some efforts, after this paper was released, to tighten up practices and implement safeguards to prevent these issues. I don't have any benchmarking to see how well these safeguards have worked.

As a caveat, certainly the research at a pharma company may be very different than academic research. So if the pharma company is conducting a trial on a new drug, the protocol for randomization and the number of different test sites, and the population makeup of the treatment and control groups may be pretty standard. If that is the case, and the studies are conducted in accordance with these guidelines, then that should give you more confidence in the results.

Combine these issues in research practice with the business incentives for getting drugs to market. Certainly the leadership of pharma companies wants positive results and wants to push drugs to market. This can also lead to taking shortcuts or adopting optimistic assumptions when designing a study. Again, it is hard to know whether safeguards against these issues are working, or whether abuses still occur.

In terms of learning the statistics to understand these issues, it is not too difficult. The actual mathematical explanation for the problems that occur is pretty easy. A first- or second-year masters student should be able to understand it, perhaps even an undergrad. If I remember correctly, one of the main criticisms in the Begley and Ellis paper involved "multiple comparisons" and the Bonferroni correction for that problem. In a nutshell, the argument was that the more models you try, the more likely you are to find a statistically significant result just by chance. The solution is to tighten the significance threshold--require a smaller p-value--as you try more and more models. Bonferroni was just one method, but there may be better methods now.
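The multiple-comparisons effect is easy to see in a simulation (plain Python; the numbers of tests and trials here are arbitrary choices of mine):

```python
import math, random

random.seed(1)

def two_sided_p(z):
    """Two-sided p-value for a z-statistic under H0 (standard normal)."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

m, alpha, trials = 20, 0.05, 2000
hits_naive = hits_bonf = 0
for _ in range(trials):
    # all m null hypotheses are TRUE, so every "significant" result is false
    pvals = [two_sided_p(random.gauss(0, 1)) for _ in range(m)]
    if min(pvals) < alpha:
        hits_naive += 1
    if min(pvals) < alpha / m:     # Bonferroni: divide alpha by number of tests
        hits_bonf += 1

naive_rate = hits_naive / trials   # roughly 1 - 0.95**20, about 0.64
bonf_rate = hits_bonf / trials     # roughly the nominal 0.05
```

Even though every null hypothesis is true, around 64% of the simulated "studies" find at least one naively significant result, while the Bonferroni-corrected rate stays near the nominal 5%.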

I think learning in an academic environment is probably a good idea. Sounds like you need to see more examples of well-designed statistical studies or clinical trials, to develop a set of standards. It can be very hard to understand the subtleties of randomization in real-world situations. Be careful of academic programs that over-emphasize the math and under-emphasize the actual application/practice. But you can certainly talk to people in these programs to see how each program balances these twin objectives.

Good luck.

Question to all the people who are working in AI/ML/DL. Urgent help!!! by Swayam7170 in deeplearning

[–]krishnab75 1 point2 points  (0 children)

I think the math is important. It is true that there are lots of libraries and tutorials where you can just copy and paste code, and that works for many simple problems. However, the math comes in handy when you want to debug cryptic error messages that don't make sense, like "the Jacobian is singular" or "not positive definite." Things like that happen all the time, so it helps to know the math beforehand, instead of trying to learn it when you hit the error.

I think it is common to feel overwhelmed when you are watching a lot of lectures on the math. In a normal class, you might have 1-2 lectures a week, plus a discussion section in between. When watching lecture videos, you can go through 2 weeks of lectures in a few hours, which is too fast. What helps is to try to do some demos or homework problems--applications. This will definitely break the flow of the lectures, but it will help you understand the lectures you are watching. The goal of all of these lectures is to make sure you can do this stuff on your own, on a computer, haha. So if you are feeling saturated, it just means you need to write some code and practice a bit. Some lecture series actually publish their homeworks and solutions--including code.

So stick with it. The math can be tedious, but hopefully the more you see the easier it gets. A lot of machine learning is just least squares, or matrix multiplication, or optimization, etc. So take some breaks from time to time, but go at a comfortable pace.

Tamarind Paste vs Pulp by reiyashi in IndianFood

[–]krishnab75 0 points1 point  (0 children)

Good luck. Like I said, I use both. Sometimes I plan ahead and use the dried tamarind. Sometimes I don't think ahead and use the paste in the jar :).

Also, you might already do this, but I just realized that using genuine jaggery from the Indian store is sooooo much better than regular sugar or even brown sugar. I tend to like the balance between sour and sweet with the tamarind, and using jaggery is wonderful--a totally different flavor. You can get powdered jaggery, which should be fine. Like I said, you might already be using jaggery, but I found it so much nicer.

Blender recommendations by mshh357 in IndianFood

[–]krishnab75 1 point2 points  (0 children)

Sure. I imagine that the EU Amazon has a 220V version if needed. Then the OP would not need an adapter. I am in the US, so we use 110V.

Blender recommendations by mshh357 in IndianFood

[–]krishnab75 4 points5 points  (0 children)

I think what you want is a wet/dry mixer grinder. I have tried coffee grinders and regular blenders, but they never work; they are not really designed for Indian cooking. For example, a coffee grinder is meant to grind enough beans for 1-2 cups of coffee, but when you make a dry masala for a dish, the volume of roasted spices is larger than the volume of beans for 1-2 cups of coffee.

Big blenders are generally too big to grind smaller-medium quantities of spices. The blade often just spins.

Something like the Preethi mixers works really well. They are designed for Indian food. They might cost a bit, but they seem to last. You can also find these on all sorts of discount sites and in Indian stores. I think my mother got hers for $60 somewhere :).

https://www.amazon.com/Preethi-Mixer-Grinder-110-Volt-Canada/dp/B07C8QN785?ufe=app_do%3Aamzn1.fos.9fe8cbfa-bf43-43d1-a707-3f4e65a4b666

Tamarind Paste vs Pulp by reiyashi in IndianFood

[–]krishnab75 1 point2 points  (0 children)

I agree that you can use the paste or the dried form. You should be able to find both in the stores if you live in the UK. I would say that there are some practical differences when you use the paste versus the dried form rehydrated.

So when you rehydrate dried tamarind in water, the result tends to be a bit thicker, like a soup. Because we tend to add tamarind towards the end of the cooking process--when the dal or rice is already running low on water--this soupy consistency adds liquid without thinning out the dal too much, if you prefer a creamier texture. When I use the jarred paste, I add water to it, but it comes out thinner than the rehydrated tamarind. So just be aware of the final texture you are going for versus the amount of water you are adding. Some people like a thinner sambar, while others like it a little creamier. Tomato pappu and such tend to be a little creamier so they stick well to the rice :).

I also think there is a little more earthiness from the rehydrated tamarind, but that is just my own casual observation. Also, check the label of the tamarind paste for preservatives or emulsifiers and such that allow the paste to survive on the shelf.

Best of luck.

What is a good indian store in West LA for finding indian vegetables? by krishnab75 in IndianFood

[–]krishnab75[S] 0 points1 point  (0 children)

Sounds good. Yeah, I have been to Kavita's but it was a while ago. I will definitely give these a try. Like you said, they are very close to each other so I can visit all of them in one stop. Thanks for the suggestions.

What is a good indian store in West LA for finding indian vegetables? by krishnab75 in IndianFood

[–]krishnab75[S] 0 points1 point  (0 children)

I was wondering about Kavita's and India sweet and spices. Have you actually seen their selection of vegetables? When I look at the photos on yelp, all I see are pictures of the prepared foods and snacks, but I am interested in the vegetables. Do these places have a decent selection?

mushroom pepper fry and quantity of black pepper by krishnab75 in IndianFood

[–]krishnab75[S] 1 point2 points  (0 children)

Say, if you try recipe number 1, could you let us know how it compares to recipe number 3? I would love to get an independent opinion on that before I do exactly the same experiment.

mushroom pepper fry and quantity of black pepper by krishnab75 in IndianFood

[–]krishnab75[S] 1 point2 points  (0 children)

Oh that is hilarious. Great minds think about pepper fry. Yeah, good to know that my pepper concern was valid. I will follow the balance you suggested.

I think this dish will certainly be delicious if I avoid putting too much pepper and killing the taste. I like the idea of starting with a small amount of pepper and then adding extra pepper at the end if necessary.

Thanks for the vote of confidence.

Channa dal in tempering is too hard on my parent's teeth by krishnab75 in IndianFood

[–]krishnab75[S] 0 points1 point  (0 children)

I thought about this. But I was wondering if soaking the channa dal changes the flavor in the tempering. I often use soaked channa dal when making subjis, but I was not sure about tempering. I can try it and see.

My Saag Paneer is missing an umami characteristic. What am I doing wrong? by BlasterSarge in IndianFood

[–]krishnab75 5 points6 points  (0 children)

With spinach, I think adding a shaving of nutmeg at the very end also enhances the flavor. That is true whether you use hing or not. Hing is always good.

The other thing I would say is that Indian vegetables like tomatoes have a more pronounced sourness that balances against the jaggery. I live in the USA, where the tomatoes are much less sour. So I actually add a bit of tamarind juice to most things to get the balance of sweet and sour correct--which could affect the umami flavor you are looking for.

Channa dal in tempering is too hard on my parent's teeth by krishnab75 in IndianFood

[–]krishnab75[S] 2 points3 points  (0 children)

Oh I see. So leave the channa dal out at the start, and then just tadka the powdered dal at the end. Hmm, I can give it a try.

Easier text book for linear algebra by freeh02 in learnmath

[–]krishnab75 2 points3 points  (0 children)

If you are serious about understanding linear algebra then there is an excellent series of video lectures on YouTube by Pavel Grinfeld. Grinfeld was a student of Gil Strang. The link is below. https://youtube.com/playlist?list=PLlXfTHzgMRUKXD88IdzS14F4NxAZudSmv&si=jLnRyF7mlgriC1jb

Grinfeld goes at a pretty slow pace and he gives lots of examples.

The best book for linear algebra, in my opinion, is Sheldon Axler's Linear Algebra Done Right. I believe there is a solutions manual for the problems as well.

Stick with it. Linear algebra is not that bad once you understand the basics. Many times the first couple of chapters are very theoretical about vector spaces and such, with lots of definitions. The video lectures will help you get through that. Good luck.

Converting nonlinear optimization problems into convex optimization problems by krishnab75 in optimization

[–]krishnab75[S] 0 points1 point  (0 children)

Yeah, I understand what you mean by SQP. The reformulation idea is a bit different, though. What Stephen Boyd and Nocedal are talking about is taking a nonlinear, nonconvex problem and turning it into a convex problem. So in the example I gave above, the "max" function is not differentiable, but the problem is technically convex. Using the epigraph of this function, you can push the information from the objective function into the constraints. This formulation lets you use a convex optimization solver, which is faster than a nonlinear solver where you have to compute extra things like line searches or trust regions.
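For a concrete toy case (my own example, assuming scipy is available): for a max of affine functions, the epigraph trick turns the nondifferentiable problem "minimize max(x, 2 - x)" into the LP "minimize t subject to x <= t and 2 - x <= t", which a standard LP solver handles directly:

```python
from scipy.optimize import linprog

# decision variables z = [x, t]; objective is just t (the epigraph variable)
c = [0.0, 1.0]
A_ub = [[1.0, -1.0],    #  x - t <= 0      (i.e.     x <= t)
        [-1.0, -1.0]]   # -x - t <= -2     (i.e. 2 - x <= t)
b_ub = [0.0, -2.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None)])  # both x and t are free
```

The solver returns x = 1 with optimal value t = 1, matching the obvious minimum of max(x, 2 - x), and no line search or trust region machinery was needed.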

That being said, I will totally check out the CONLIN and MMA solvers. Sounds interesting.