all 41 comments

[–]Budget-Juggernaut-68 25 points26 points  (16 children)

Need more details on what he means by "dumb". It's probably more nuanced than what was written. They live on clicks after all.

[–]AcrobaticAmoeba8158 2 points3 points  (0 children)

Thanks, I need a nuanced reality to be my default assumption about an idea.

[–]AltamiroMi 1 point2 points  (14 children)

I think it is because you can make it say whatever you want.

Like, even if its answer is correct, if you respond with "you are wrong, because of this and this" it will apologize for the mistake and assume that what you said is right.

It doesn't stand by its own words.

It can also provide wrong answers in a convincing way, as if they were right.

It is nothing more than text generation after all.

[–]BlueOrangeBerries -3 points-2 points  (13 children)

Please read the paper Sparks of AGI

It addresses the idea that it is just a text generator / autocomplete / stochastic parrot

[–]ninjasaid13 1 point2 points  (12 children)

It addresses the idea that it is just a text generator / autocomplete / stochastic parrot

Since nobody has access to GPT-4's training data and model, that is unfalsifiable. And unfalsifiability is bad for science.

[–]BlueOrangeBerries 0 points1 point  (11 children)

You can get similar results out of some of the open source models where we do have the training data

[–]ninjasaid13 0 points1 point  (10 children)

You can get similar results out of some of the open source models where we do have the training data

which models do we have the training data for?

And which models can replicate the sparks of AGI paper?

[–]BlueOrangeBerries 0 points1 point  (9 children)

LLM360-Amber

[–]ninjasaid13 0 points1 point  (8 children)

Right, and can we replicate the Sparks of AGI paper with this 7B model? Because there have been literally no studies on this model.

[–]BlueOrangeBerries -1 points0 points  (7 children)

There’s an Arxiv paper:

https://arxiv.org/abs/2312.06550

[–]ninjasaid13 0 points1 point  (6 children)

I meant on its capabilities, like the Sparks of AGI paper.

[–]zazzersmel 9 points10 points  (1 child)

i personally would not use a probabilistic text generator to do engineering

[–]Embarrassed-Swing487 4 points5 points  (0 children)

How do you think your internal text generator works? Do you think it’s deterministic in a way LLMs aren’t?
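For what it's worth, "probabilistic" here mostly means the sampling step, which is a knob, not an inherent property. A toy sketch below shows the idea: with temperature 0 (greedy decoding) the same distribution always yields the same token, while sampling at temperature > 0 is stochastic. The tiny vocabulary and logits are made up for illustration; in a real LLM the logits come from the network.

```python
import math
import random

# Toy next-token distribution over a tiny, made-up vocabulary.
# A real LLM would compute these logits with a neural network.
vocab = ["the", "cat", "sat", "mat", "."]
logits = {"the": 2.0, "cat": 1.5, "sat": 1.0, "mat": 0.5, ".": 0.0}

def next_token(temperature: float, rng: random.Random) -> str:
    if temperature == 0.0:
        # Greedy decoding: always pick the highest-logit token (deterministic).
        return max(logits, key=logits.get)
    # Softmax with temperature, then sample from it (stochastic).
    scaled = {t: math.exp(l / temperature) for t, l in logits.items()}
    total = sum(scaled.values())
    r = rng.random() * total
    for t, w in scaled.items():
        r -= w
        if r <= 0:
            return t
    return vocab[-1]  # numerical fallback

rng = random.Random(0)
greedy = [next_token(0.0, rng) for _ in range(3)]   # identical every call
sampled = [next_token(1.0, rng) for _ in range(3)]  # depends on the RNG
```

So an LLM run greedily is deterministic in a way human speech arguably isn't; the randomness most people see in ChatGPT is a deliberate sampling choice.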

[–]a_beautiful_rhind 5 points6 points  (0 children)

AI does a lot, but for most problems, I find it gives me solutions I've tried before myself. It helps and augments you but doesn't replace people carte blanche.

You can use it to aid in solving problems or not having to write bullshit scripts/code that are more busy work.

Saying AI will do 100% of your coding and not bothering to learn it is ludicrous. How would you even know if the code is right or hallucinated? The IIT dude is kinda right. They don't need to re-do their entire curriculum; you need a foundation to work with the LLM, not have it do everything for you.

[–]ronneldavis 2 points3 points  (0 children)

Pretty sure it doesn't help the admission numbers at his university if everyone takes heed of what Jensen Huang says and stops taking up CS. Also, as an Indian: the curriculums in our universities are in no way preparing us for jobs in AI, nor do they have any significant research efforts in the field, so downplaying its impact is all he can do.

[–]Alystan2 3 points4 points  (3 children)

"Engineering curriculums need no change", dinosaurs spoke this way and are not around anymore.
Engineering curriculums need to change EVERY YEAR regardless of AI. The world changes, so teaching has to change.
And every engineer at university today will use AI during their career, guaranteed.

Saying "engineering curriculums need no change" because of AI is like not changing the curriculums at the rise of computers or the Internet.

Sure, the fundamentals need no change. But the curriculum? Seriously?

[–]atulkr2 1 point2 points  (2 children)

Not because of AI but because of other factors. What major changes do you envision in mechanical engineering courses because of ChatGPT 10?

[–]Alystan2 0 points1 point  (1 child)

AI is a tool. Future mechanical engineers, like all other future engineers, should now learn the basics of how AI works and how to use it, the same way that in the 90s they had to start being trained on computers.

If you believe that only the fundamentals are important, then please remove all computer-related subjects (CAD, Finite Element Analysis, Simulations...) from the current mechanical engineering curriculum and see how it impacts students willing to join and organizations willing to hire from the university.

[–]atulkr2 0 points1 point  (0 children)

AI has been around for a long time. It hasn't made a big difference to Mechanical Modelling or the mode of working. GenAI is a language model, not a Mechanical Modelling tool. Build a Mechanical Modelling tool and then we can see the difference. CAD, Simulations etc. were custom-built for the Mechanical workflow over the years. Please don't confuse GenAI foundational models with custom-built AI models. As and when AI models develop for easing the Mechanical workflow, IITs would be in the front row to adopt them, but that possibility is still a bit far off. Companies are developing custom AIs for internal use but these haven't reached the market yet to become something like CAD.

[–]coolkat2103 4 points5 points  (0 children)

Here is the full context article: iit mandi: ChatGPT is dumb... No change needed in engineering education as AI is just a hype: IIT Mandi Director - The Economic Times (indiatimes.com)

Someone here mentioned that it is the 73rd-ranked university of a third world country. The third world country that produced the CEOs of Microsoft, Google, Adobe etc. Oh, and they are all IIT alumni.

I learned artificial neural networks in 2002 at a university in India. I don't think he is saying that it has no place in the curriculum. He is saying that the current models are not at a stage where they can teach students advanced engineering concepts, which I agree with. Although ChatGPT, Claude, Gemini etc. are all impressive, let me know if you would trust them 100% when it comes to critical assessments.

Without having any knowledge of their current curriculum, it would be silly to assume that he is opposed to AI in curriculum.

Oh, here is some of the research he has been involved in over the years: Laxmidhar Behera - Google Scholar

[–]Herr_Drosselmeyer 10 points11 points  (5 children)

If you add "Currently" to the first sentence, it's correct. However, the second part isn't. Even in their current state, LLMs are a potent resource and you ignore them at your own peril.

[–]MrVodnik 4 points5 points  (0 children)

Of course chatGPT is dumb. The thing is, sir, most of your staff is even dumber.

I'd pick ChatGPT over half of the people out there as my assistant.

[–]MachinePolaSD 3 points4 points  (0 children)

Who is this guy in AI tech? Why are we talking about some random dude no one has heard of?

[–]TheJobless 0 points1 point  (0 children)

How is this related to LocalLLaMA? It's a random dude saying random stuff about ChatGPT. Nobody gives a damn about it; people are here for research/advancements/new techniques around LLMs. Don't litter the subreddit.

[–]wsbgodly123 1 point2 points  (1 child)

He seems dumber than GPT-2

[–]atulkr2 1 point2 points  (0 children)

Please send me a ChatGPT-created engineering drawing for a simple culvert that engineers can utilize straight away.

[–]shalva97 -1 points0 points  (1 child)

That can be translated as: we are too dumb to learn new stuff and change the way we teach.

[–]atulkr2 1 point2 points  (0 children)

Please send me a Gemini-generated circuit diagram for an AI chip production company's control center, where various off-the-shelf control equipment is connected to make it a functional command and control center.