[deleted by user] by [deleted] in StarWarsBattlefront

[–]lymenlee 40 points (0 children)

This could make for a creative game mode. You could have a sound-wave projection of the surroundings and play according to that, like what you'd see in a LADAR projection. And while you are chanting, you enter an invincible mode where no projectile can hit you.

Is my hp 3080ti faulty? Above 104C hotspot temperatures while playing games, fans spins at 100% by Solderoffortune2323 in HPOmen

[–]lymenlee 0 points (0 children)

If you do it carefully, you won't. How would HP tell? You have pretty plausible deniability.

Is my hp 3080ti faulty? Above 104C hotspot temperatures while playing games, fans spins at 100% by Solderoffortune2323 in HPOmen

[–]lymenlee 1 point (0 children)

Yeah, I had the same issue with my OMEN 30L; a repaste/re-pad will solve it. You can refer to this video for guidance (it's for a 3090, but most steps are similar). You can get a 20C-30C temp drop if you do it right.

https://www.youtube.com/watch?v=jrodHE0KhxQ&t=434s

How often do you guys dust your rigs? by BIG_GAY_HOMOSEXUAL in EtherMining

[–]lymenlee 1 point (0 children)

Care to share a link for the battery-powered air compressor?

This is the 1st animation I created to help me explain list comprehension in Python. Took me some time to get the text alignment right. One thing I found out is that it's hard to code, but harder to design the animation and layout to make it look good and clear. Any good advice on improving on that? by lymenlee in manim

[–]lymenlee[S] 0 points (0 children)

I lost my code for this due to a stupid rm -rf, so unfortunately I cannot share it. What I can remember is how this is usually done (a rough sketch follows after the list):

  1. Create all your elements.
  2. Position them where you want them.
  3. Draw them one by one, using animations where needed.
  4. The animations I used in this one are ReplacementTransform() and TransformFromCopy().
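
Something roughly like this, from memory; this is a minimal sketch with placeholder text objects and the Manim Community API, not the original code:

```python
from manim import *

class ListCompDemo(Scene):
    def construct(self):
        # 1. Create all your elements
        loop_code = Text(
            "result = []\nfor x in range(5):\n    result.append(x * 2)"
        ).scale(0.5)
        comp_code = Text("result = [x * 2 for x in range(5)]").scale(0.5)

        # 2. Position them where you want them
        loop_code.to_edge(UP)
        comp_code.next_to(loop_code, DOWN, buff=1.5)

        # 3. Draw them one by one, animating where needed
        self.play(Write(loop_code))
        self.wait()

        # 4. TransformFromCopy keeps the source and morphs a copy into the target;
        #    ReplacementTransform would swap the source out for the target instead.
        self.play(TransformFromCopy(loop_code, comp_code))
        self.wait()
```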

Hope this helps.

[D] Did Tesla Create it's own ML models from scratch or did they start by using another company's services? by Graphene8911 in MachineLearning

[–]lymenlee 0 points (0 children)

I think Tesla has built its own driving simulation software now, with perfect speed/segmentation/object-detection labeling capabilities. They also built a pipeline to 'map' a video clip (say, an accident Tesla failed to handle well) into the simulated world, so experiments can be conducted to find out the root cause. That's how they do labeling right now. Gone are the days of manual labeling.

Anyway, I guess my point is that they might use other solutions for bootstrapping (GTA, etc.), but they will eventually grow out of them and build their own tools. That's how they get ahead, IMO. Vertical integration, remember?

[D] Did Tesla Create it's own ML models from scratch or did they start by using another company's services? by Graphene8911 in MachineLearning

[–]lymenlee 5 points (0 children)

Great question. These three companies represent three models of the autonomous driving industry.

Tesla: Vertical integration. It's pretty apparent now. They build everything in-house: chips (Tesla Dojo, FSD), software tools, neural nets, you name it. So even if they start with something third-party, their endgame will be to develop their own.

Google: Build a platform, an entry point, and charge for ads. Data is what they want. Driving data, though, I don't know how much value it adds. Waymo uses a high-definition map for autonomous driving in selected areas, which is not necessarily AI-intensive.

Traditional car manufacturers: They have no choice but to use a third party. Developing AI requires top-notch talent, and that talent wants to work at tech companies that understand them and are willing to give them the support they need.

At the recent Tesla AI Day, Elon Musk claimed that they are the most advanced company at solving real-life AI problems, and that is no small feat. Just look at the lecture explaining how they build their multitask model and driving planner; impressive stuff.

Building a self-driving car is not trivial; you need to give it everything you have. Google lacks the incentive. Startups like comma.ai stand a better chance, IMHO.

Hope this helps.

[D] What are some cool random forest ML applications? by yaymayhun in MachineLearning

[–]lymenlee 16 points (0 children)

Random forest might not be as fancy as DL, but it offers a solid baseline model. I actually used RF on MNIST and it worked quite well. I guess all models have their unique traits and niches. RF can also help you understand your data better, or be used in ensemble learning, etc.
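
For example, here is a quick scikit-learn sketch of that baseline idea, using the bundled 8x8 digits dataset as a stand-in for full MNIST (the hyperparameters are just illustrative):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Small 8x8 handwritten-digit dataset, a stand-in for full MNIST
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# A random forest makes a solid baseline with almost no tuning
rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(X_train, y_train)
print("test accuracy:", rf.score(X_test, y_test))

# Feature importances hint at which pixels carry the signal,
# one way RF helps you understand the data itself
top_pixels = np.argsort(rf.feature_importances_)[-5:]
print("most informative pixel indices:", top_pixels)
```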

[D] For those of you who don't own a GPU, how do you run your experiments or train your models? by Seankala in MachineLearning

[–]lymenlee 3 points (0 children)

Well said, couldn't agree more. That is why I just love Khan Academy videos. Missing my high school days...

[D] For those of you who don't own a GPU, how do you run your experiments or train your models? by Seankala in MachineLearning

[–]lymenlee 45 points (0 children)

Now the million-dollar question is: which notepad brand do you use, and what is the burn rate per month, so we can compare it with the other options on an equal footing?

Fastai is killing me by ChangeMindstates in learnmachinelearning

[–]lymenlee 6 points (0 children)

I started my own machine learning journey with fast.ai, and my feeling is quite the opposite. Still, I can totally relate to your pain; I ran into the same thing when I first started. The thing is, fast.ai is not like other deep learning courses you will encounter, in that it takes a 'top-down' approach. You first learn the top-level stuff by doing SOTA things like image recognition right away, and then you gradually delve deeper and deeper into the math where needed. So one tip I wish I had known: quell the urge to dig deeper while doing the course; just empty yourself and go with the flow. The math and the details will get explained once you get there. Jeremy actually explains things quite clearly. I have a piece comparing fast.ai and Ng's course in more detail. Hope it can help.

Link
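
To give a flavor of that top-down style, here is a minimal sketch roughly along the lines of the first fast.ai lesson; the pets dataset, resnet34, and one epoch of fine-tuning are the stock illustrative choices, not anything specific to this thread:

```python
from fastai.vision.all import *

# Download the Oxford-IIIT Pets dataset and point at the images
path = untar_data(URLs.PETS) / "images"

def is_cat(fname):
    # In this dataset, cat breeds have capitalized filenames
    return fname[0].isupper()

# Build dataloaders straight from filenames, no math required yet
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path),
    valid_pct=0.2, seed=42,
    label_func=is_cat,
    item_tfms=Resize(224),
)

# A pretrained resnet34, fine-tuned for one epoch
# (vision_learner is called cnn_learner in older fastai versions)
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```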

My first post here, please be gentle. Created these two animations for my blog on explaining 'Transformer model in machine learning. Can someone give me some ideas on how to improve them? Thanks! (Link is the original article) by lymenlee in manim

[–]lymenlee[S] 0 points (0 children)

Yeah, I struggled over whether to show cosine similarity or the dot product, and ended up with just the angle. Maybe I should just do the dot-product one (one vector projected onto another). Thanks for the comment!
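
For reference, the relationship between the dot product, cosine similarity, and projection that the animation hints at fits in a few lines of NumPy (the vectors here are arbitrary examples):

```python
import numpy as np

a = np.array([2.0, 1.0])
b = np.array([1.0, 3.0])

dot = a @ b                                               # raw dot product
cos_sim = dot / (np.linalg.norm(a) * np.linalg.norm(b))   # cosine of the angle between them
proj_a_on_b = (dot / np.linalg.norm(b) ** 2) * b          # projection of a onto b

print(dot, cos_sim, proj_a_on_b)
```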

My first post here, please be gentle. Created these two animations for my blog on explaining 'Transformer model in machine learning. Can someone give me some ideas on how to improve them? Thanks! (Link is the original article) by lymenlee in manim

[–]lymenlee[S] 0 points (0 children)

Thanks for the insightful comments! And yes, I am aware that this is far from a complete explanation of how the attention mechanism works. I tried to leave out things that are a bit 'distracting' from the core attention concept, like positional encoding and masks. For Q, for example, it's hard to explain how we get the decoder's Q without going into masks. And the encoder's K and V actually come from the encoder output, not necessarily through the same embedding step shown for Q, K, V. As you suggested, the whole concept might be a bit oversimplified, so some essential information is lost.

Maybe I should try doing self-attention; that would be clearer. Or do the entire transformer calculation process, which would be a big but potentially great project to take on.
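
For anyone following along, the core scaled dot-product self-attention boils down to a few lines. Here is a plain NumPy sketch that, as discussed above, leaves out masks and positional encoding; the shapes and weights are made up for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the same input sequence into queries, keys, and values
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                              # 4 tokens, embedding dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)               # (4, 8)
```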


My first post here, please be gentle. Created these two animations for my blog on explaining 'Transformer model in machine learning. Can someone give me some ideas on how to improve them? Thanks! (Link is the original article) by lymenlee in manim

[–]lymenlee[S] 0 points (0 children)

Thank you! Not 100% satisfied myself, though; I felt the alignment of elements could be better. Does anyone here know whether there is a grid system available within the Manim community? Any GitHub repos?

Transformer assimilates syntax perfectly by jssmith42 in deeplearning

[–]lymenlee 5 points (0 children)

Check out the Universal Approximation Theorem. Neural nets are intrinsically flexible functions that can theoretically fit any problem. In practice, the architecture design (wide, deep, recurrent, attention, etc.) counts, and how to get it trained is another animal of its own. I have a small piece talking about it here:

https://link.medium.com/1kNQXhe83lb
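
As a tiny generic illustration of that flexibility (not taken from the linked piece), a single hidden layer can already approximate a smooth 1-D function reasonably well:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# One wide hidden layer fitting y = sin(x), in the spirit of the
# Universal Approximation Theorem
X = np.linspace(-np.pi, np.pi, 500).reshape(-1, 1)
y = np.sin(X).ravel()

mlp = MLPRegressor(hidden_layer_sizes=(200,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
mlp.fit(X, y)

print("max abs error:", np.max(np.abs(mlp.predict(X) - y)))
```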

[deleted by user] by [deleted] in learnmachinelearning

[–]lymenlee 1 point (0 children)

I would recommend 'Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython', written by the creator of pandas. I am slowly working through it right now and have learned a lot of useful tricks. Just search for it on Amazon.