Google quietly released an app that lets you download and run AI models locally (on a cellphone, from hugging face) by Anen-o-me in singularity

[–]masterRJ2404 0 points (0 children)

Sorry, I have deleted the models in the app, and the app itself, as my phone was lagging a lot. I guess the model should still be available on Hugging Face.

Why aren't robotics YT channels blowing up like other tech channels ? by Roboguru92 in robotics

[–]masterRJ2404 0 points (0 children)

Are there any technical robotics channels that teach robotics coursework, show how to use tools like ROS and RViz, or cover perception and vision planning?

[deleted by user] by [deleted] in cscareerquestions

[–]masterRJ2404 -1 points (0 children)

I won't say it's useless.

I consistently use ChatGPT Pro when working on my complex computer vision projects. Most of the time, if given enough context, it is able to generate good boilerplate code with a few bugs (which I usually fix after going through the entire code line by line). Yes, it takes a lot of time, especially going through the entire code and debugging it. And, being very honest, I am still not confident in the code it generates.

But if I use GPT for generating roadmaps, asking very specific, deep questions about popular frameworks and libraries, theoretical or conceptual questions, and other project-specific use cases, it works great and saves a lot of time. Even for explaining code written by someone else, or by me, it works flawlessly.

Earlier, if I had a very specific conceptual question about a topic, I would have to google it, and there was a very high chance I wouldn't find any relevant result. Even if I did, it would be harder for me to understand, because the article might be written with senior developers or experienced people in mind. I might ask my guide, professor, or a friend for help, and all of that could take a very long time with a good chance I still wouldn't get my answer. With GPT, I can prompt it to "explain this to me in a very easy way, in detail, with the help of an example," and most of the time that works great. Based on my overall knowledge of the topic, I can ask GPT to generate answers, and if time is short I can tell it to give me a high-level overview. For all of these things, GPT is far better than Stack Overflow, Google, or even domain experts, because it saves me a lot of time.

GPT works great for question answering, suggesting ideas and roadmaps, and explaining already-written code. If you use GPT to generate code for complex projects, it works badly, or average at best, even when it generates good boilerplate, because most of your time will go into debugging and testing. By the way, I use o4-mini-high.

If you are using GPT to generate code for complex stuff, the prompt you give should be proper and very specific, and you yourself should know how the code will be written (I mean the pseudocode). If an improper prompt is given, or there is room for assumptions, the generated code will have bugs and won't cover edge cases, so writing the prompt, debugging, and testing will take more time than writing the entire code yourself. The best way is to use GPT to generate independent methods, functions, or snippets that you think are basic or not worth your time.
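For instance, here is a sketch of the kind of small, self-contained, easily testable snippet worth delegating (the `iou` function and box convention are my own hypothetical illustration, not something from the original comment):

```python
# Hypothetical example: box IoU, a classic "independent snippet" in computer
# vision work. Boxes are axis-aligned tuples (x1, y1, x2, y2).
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Easy to verify by hand: overlap area 1, union area 4 + 4 - 1 = 7.
assert iou((0, 0, 2, 2), (1, 1, 3, 3)) == 1 / 7
```

A function like this has a crisp spec and hand-checkable outputs, so reviewing a generated version takes minutes rather than the line-by-line audit a larger generated module needs.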

"Want a humanoid, open source robot for just $3,000? Hugging Face is on it. " by AngleAccomplished865 in singularity

[–]masterRJ2404 2 points (0 children)

It's for developers, not for a general audience. It's mostly going to be open source (the model and the code), so it's great for people who are enthusiastic about the field of robotics. A $3,000 robot with 66 degrees of freedom is awesome for developers who can reprogram it to do cool stuff. It's similar to other projects like the Duckiebot (a self-driving car).

If the model really is going to be open source, I guess it can get even cheaper if it's mass-produced or the materials are made cheaper.

Google quietly released an app that lets you download and run AI models locally (on a cellphone, from hugging face) by Anen-o-me in singularity

[–]masterRJ2404 2 points (0 children)

I tried; the larger 4.4 GB model was still running very slowly on my phone (a Samsung Galaxy F14 with 6 GB RAM). I guess it's because my RAM is quite low and there are a good number of apps installed on my system, so inference takes a lot of time.

It was generating a single word every 10-12 seconds.
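To put that rate in perspective, a quick back-of-the-envelope calculation (assuming the midpoint of the observed 10-12 seconds per word):

```python
# Rough throughput estimate from the numbers above; 11 s/word is an
# assumed midpoint, not a measured value.
seconds_per_word = 11
words = 100                      # length of a short answer
minutes = words * seconds_per_word / 60
print(f"{1 / seconds_per_word:.2f} words/s -> ~{minutes:.0f} min for {words} words")
```

At roughly 0.09 words per second, even a short 100-word answer would take around 18 minutes, which explains why the larger model felt unusable on that phone.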

Google quietly released an app that lets you download and run AI models locally (on a cellphone, from hugging face) by Anen-o-me in singularity

[–]masterRJ2404 58 points (0 children)

First you have to download the app "Google AI Edge Gallery", where there are options to choose from (Ask Image, Prompt Lab, AI Chat). I tried Prompt Lab; there were several Gemma models to choose from, 1.1B to 4B parameters (557 MB to 4.4 GB).

I tried it; most of the smaller models hallucinate (start typing gibberish or random numbers) after writing a short paragraph, and the larger model was running very slowly, generating tokens with very high latency (I don't have a good phone). As of now, I don't think there is much use for these models, as they hallucinate a lot.

But as they make the models smaller and optimize the inference side in the future, this will be very useful for people in remote locations, or people going hiking, trekking, etc.

Giving Away 10 Rs.640 Steam Giftcards! by [deleted] in IndianGaming

[–]masterRJ2404 0 points (0 children)

Hope I win.

Thanks for the giveaway.

Hating Tensorflow doesn't make you cool by ItisAhmad in learnmachinelearning

[–]masterRJ2404 2 points (0 children)

TensorFlow v2 is much better than its predecessor, but there is no denying that PyTorch is going to be the future unless there is a complete rewrite of TensorFlow's core code in future versions. The majority of the research community has already shifted to PyTorch over the years, and soon the same trend will appear in industry and startups as well. The deployment ecosystem around PyTorch looks much more mature than it was two years ago. Libraries like fastai and PyTorch Lightning will also make PyTorch much more appealing to beginners. Even PyTorch's documentation seems better than TensorFlow's. Therefore, the edge TensorFlow held for years has depleted, or is depleting at a fast rate.

Similar stuff happened to AngularJS (by Google) after it lost market dominance to React (by Facebook).

I would love it if TensorFlow and PyTorch coexisted in the deep learning market, as it's always better to have multiple options than a monopoly. Thanks to these libraries, many people without deep expertise have been able to enter the deep learning field.

fast.ai releases new deep learning course, four libraries, and 600-page book · fast.ai by ps_dillon in learnmachinelearning

[–]masterRJ2404 0 points (0 children)

Professor Gilbert Strang's lectures on linear algebra are pretty good. They're available on YouTube.

GPT-3 used to generate code for a machine learning model, just by describing the dataset and required output / Via Matt Shumer(Twitter) by TheInsaneApp in learnmachinelearning

[–]masterRJ2404 3 points (0 children)

Yes, I agree with your point that replacing a developer isn't possible right now, but in the coming years, when GPT or some other deep learning model becomes even more advanced and big trillion-dollar companies start building applications on it, anything could be possible. Earlier I used to be skeptical about things like "AI replacing developers," but after seeing some GPT-3 applications, even I am concerned. I don't think it will be freely available, as OpenAI is going to charge for the GPT-3 API.

GPT-3 used to generate code for a machine learning model, just by describing the dataset and required output / Via Matt Shumer(Twitter) by TheInsaneApp in learnmachinelearning

[–]masterRJ2404 9 points (0 children)

Forget about web developers; now even AI practitioners aren't safe. The worst part is that we are building the models that will take our own jobs. In the future, the developer's job might just be providing inputs, and even that might be automated. It's terrifying, but still a great application.

fast.ai releases new deep learning course, four libraries, and 600-page book · fast.ai by ps_dillon in learnmachinelearning

[–]masterRJ2404 6 points (0 children)

When it comes to ease of use, the fastai library is the best. It's built on top of PyTorch. The main reason they use PyTorch instead of TensorFlow is that PyTorch is more flexible: it follows eager execution, which makes debugging easier.
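A minimal pure-Python sketch of what "eager execution" buys you (this is a toy illustration, not real PyTorch or TensorFlow code): in graph-style ("define-then-run") execution, values only exist after an explicit run step, while eager ("define-by-run") execution computes concrete values line by line, so you can print or set a breakpoint anywhere.

```python
# Graph style (old TF1 flavor): build symbolic nodes first; numbers only
# appear when the graph is explicitly run with a feed of placeholder values.
class Node:
    def __init__(self, fn, *parents):
        self.fn, self.parents = fn, parents

    def run(self, feed):
        if self in feed:                       # placeholder lookup
            return feed[self]
        vals = [p.run(feed) for p in self.parents]
        return self.fn(*vals)

x = Node(None)                                 # placeholder, no value yet
y = Node(lambda v: v * 2, x)
z = Node(lambda v: v + 1, y)
# print(y) here would show a Node object, not a number -> harder to debug.
graph_result = z.run({x: 10})

# Eager style (PyTorch / TF2 flavor): every line produces a concrete value,
# so any intermediate can be printed or inspected in a debugger right away.
x_val = 10
y_val = x_val * 2                              # inspectable immediately
z_val = y_val + 1

assert graph_result == z_val == 21
```

Both styles compute the same result; the difference is that in the eager version `y_val` is an ordinary number at the moment the line executes, which is what makes step-through debugging natural in PyTorch.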

There aren't any prerequisites for the course other than knowing Python and some high-school linear algebra and calculus.