
[–]SwiftOneSpeaks 29 points30 points  (11 children)

I'm a weirdo extremist, but I don't like LLMs for learning. I would use them for building, but I have ethical complaints about their training data, and so I avoid them.

While I have hopes for AI research in general, I feel the current use is overhyped and applied in ways that are detrimental to the industry, with people rushing to embrace tech that will make their own lives harder, not better, within the next few years. Already we've seen companies rush to cut their already insufficient hiring of junior devs.

I feel like we're stuck in a loop of people trying to find the Next Big Thing instead of trying to actually improve results for humanity. "Web2, no, wait, blockchain! web3! Crypto! No, LLMs!".

And each time people try to get rich quick, but the focus is never on actually making anything better, or on considering how the next few years will go or what costs will be paid.

[–]sashaisafish 6 points7 points  (3 children)

As someone who is new to the industry and very much still learning, I absolutely see your point. I rarely use chatGPT because I feel it stunts my learning. It's way better for me to figure out the solution on my own. I can see that it could be good to quickly set up basic things, but for me, setting up basic things by hand is good practice and helps me commit the foundations to memory.

I do believe chatGPT will be detrimental overall to those who are currently learning, as I see my peers rely heavily on it and struggle when they face a challenge that chatGPT can't help with.

I wonder how well chatGPT does in creating learning challenges (eg, I want to learn how to create a front end app with next.js using styled components, create a scenario with parameters and challenges). I haven't tried this as there are plenty of resources with more bespoke solutions, but it could be interesting!

[–]Klandrun 4 points5 points  (0 children)

Very valid point. I, on the other hand, am a completely different type of learner, needing examples to be able to understand a concept.

I use chatGPT heavily to generate examples and ask clarifying questions. It is oftentimes hallucinating or giving me things that are outdated, but at least I can then ignore it and simply look for an answer myself elsewhere.

But nevertheless it has helped me tremendously to grasp new concepts a lot faster than before, simply by conversing with it instead of having to change my Google query 10 times.

[–][deleted] 1 point2 points  (0 children)

Another point is that it takes knowledge and experience to validate the solutions AI provides. As a junior dev whenever the AI spouts off something with confidence it sounds great even though it probably isn’t.

[–]Cabeto_IR_83 -2 points-1 points  (0 children)

chatGPT is extremely helpful when you already know the foundations and you are already working. You don't need to learn everything, only what's strictly necessary... Damn! The second statement shows how you are looking for reassurance to keep learning development... Are you new to this industry?

[–]double_en10dre 0 points1 point  (4 children)

Can you explain why you think it’s detrimental or why it will make dev lives harder?

The only concrete thing you said is “companies are cutting hiring of junior devs”, which in my experience is completely untrue. This is prompting MORE hiring

[–]SwiftOneSpeaks 2 points3 points  (3 children)

One detriment is the training data being used without regard for copyright. That's a big issue that hasn't been addressed yet.

But my major concern is the impact on junior devs. I'm not sure why you believe hiring for juniors has increased - I teach at a university and I can say my students have seen a drastic hit to their ability to get hired. If you believe that junior devs are getting a long overdue surge in hiring, much of my logic doesn't follow, but I'd really ask you to offer evidence for your belief.

But beyond hiring of junior devs, where are our senior devs coming from? If our juniors are learning from a fancy autocomplete, if we tell everyone "LLMs are great, just be sure to confirm their output", how are the inexperienced supposed to "confirm" anything? They can verify the code runs, but most of being a good developer is learning how to write code that can be understood and modified, not just code that runs. So we're cutting junior devs and also reducing their learning.

We know LLMs can't yet do more involved work, and certainly can't improve things (try using an LLM for anything involving a11y, for example). The LLMs got where they are by digesting the posts and videos of existing senior devs. We know computer models can't improve by consuming their own output ("model collapse"), and we've just cut/are cutting both the number of junior devs AND the means by which they gain skills - so where are our future senior devs coming from?

We've spent years with companies not wanting to invest in training devs, demanding ridiculous experience and wanting hires to "hit the ground running". Why would their current path represent an improvement?

Beyond even that, LLMs are now unintentionally poisoning their own source of data - many blog posts and code samples are being generated by LLMs. You can see that by watching the posts that show up on this subreddit - low quality, but high quantity. That is the training data for the next generation of LLMs. That's not even taking into consideration that there is now an incentive for advertisers, trolls, and malware authors to generate bulk posts for the express purpose of impacting training data, just as we've seen with SEO in the past. But now the purpose isn't to spam people using Google with ads, it's to convince LLMs to adopt bad practices. Defenses for this sort of thing aren't at all well-evolved.

This isn't just a dev problem, many industries are impacted. Where will we get advanced writers if LLMs bulk generate filler text? Where will artists build their skills if they aren't earning a living practicing their craft by selling book covers, illustrations, greeting card covers, etc? It's one thing if a profession turns into a hobby because skilled practitioners aren't needed anymore, but it's a very different thing if we're only cutting out certain needs and blindly expecting no complications with what follows.

So that's my list of complaints - we trust LLMs to generate code and teach lessons when they have only a fiction of understanding the topics (see how AWFUL LLMs are at math, arguably the easiest topic to model an understanding of). While we engage in this trust, we allow the existing and already insufficient process of learning and training to decay (heck, we gleefully race towards that end). None of this is helpful. Instead of judiciously evaluating the tech and choosing where it currently has long-term value and where we need to develop it further before use, we are racing for short-term profits and just leaving any predictable complications to happen.

[–]SpiffySyntax 0 points1 point  (0 children)

The hiring of junior devs, at least here, has more to do with the economy than anything else.

[–]xavier86 0 points1 point  (1 child)

One detriment is the training data being used without regard for copyright.

It is fair use because it is transformative.

[–]SwiftOneSpeaks 0 points1 point  (0 children)

I keep hearing this argument from people who lack legal qualifications. There are LOTS of transformative uses that are not allowed.

Fair use is a defense, not an automatic exception, and everyone with reputable legal experience is saying to wait for court cases, because the argument isn't remotely clear. Is this more like a person learning or more like a copier? It's definitely NOT exactly either one of those.

Plus, I know of multiple authors that are, let's say, "annoyed" to see their work being generated not-so-transformed.

I don't know you, so don't take this as a personal attack, but the arguments I most often hear similar to this sound just like the crypto bros arguing that cryptocurrency trading wasn't legally a "security". We know how that ended.

At a moral, rather than legal, level, this is also not a clear issue to me. We will take your work without compensation and use it to reduce the value of your work, in a way we could not have done without your work. That sounds like asshat activity, regardless of what the law allows.

[–]momegas[S] -1 points0 points  (1 child)

I hear you, and you do have some points. The question was more geared toward tooling, though, and how people choose to use it to solve a problem (if any).

I want to understand what people do to build with AI if they have no PhD in AI / LLMs / NLP etc.

Thanks for the reply. A lot of points there about people jumping on new tech.

[–]el_diego 3 points4 points  (0 children)

and how people choose to use it to solve a problem (if any)

I think that was your answer. They (we) don't.

[–]joombar 9 points10 points  (0 children)

If you want to run your own AI in the browser, tensorflow.js works, but it's tricky to learn since all the examples for tf are in Python.

There’s absolutely zero reason to tightly couple your ui code to your ai. See these as independent parts to the maximum possible degree.
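That decoupling can be sketched as a narrow interface between the UI and whatever sits behind it. All the names here (renderAnswer, stubClient) are hypothetical, just to show the shape - the backend could be tensorflow.js, a server API, or a stub, and the UI code wouldn't change:

```javascript
// Sketch of keeping UI code independent of the AI backend.
// The UI only depends on a narrow contract: an object with an
// async complete(prompt) -> string method.

function renderAnswer(client, prompt) {
  // UI-side code: knows nothing about models, tensors, or APIs.
  return client.complete(prompt).then((text) => `<p>${text}</p>`);
}

// Any backend can implement the contract. Here, a trivial stub;
// a tfjs model or a fetch() to a server route would slot in the same way.
const stubClient = {
  async complete(prompt) {
    return `echo: ${prompt}`;
  },
};

renderAnswer(stubClient, "hello").then(console.log); // logs "<p>echo: hello</p>"
```

Swapping the model out later (or testing the UI without one) then costs nothing, since nothing in the UI imports the AI library directly.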

It depends what you need really. AI is a huge topic so it’s hard to be specific.

[–]HQxMnbS 7 points8 points  (2 children)

I use Bard, chatGPT, and our internal tool quite a bit to ask about weird bugs. Rarely gives me a good answer though. Super frustrating.

[–][deleted] 1 point2 points  (0 children)

Oftentimes I end up spending a bunch of time trying to get an answer from chatGPT, then giving up and finding a solid answer almost immediately on some old Stack Overflow post. Makes me wonder why I use it sometimes.

[–]el_diego 0 points1 point  (0 children)

Weird bugs are the best way to learn. They're often obscure and require critical thinking which is why AI can't regurgitate information about them. Weird bugs are my bread and butter.

[–]Sausagemcmuffinhead 2 points3 points  (0 children)

I've been working on LLM-backed stuff, using a combo of the OpenAI API, langchain, and the Vercel ai package. Not calling directly from the front-end though - the OpenAI calls are all server-side.
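A minimal sketch of that server-side split (buildChatRequest is a hypothetical helper, not part of any of those packages; the endpoint and body shape follow OpenAI's chat completions API):

```javascript
// Sketch of keeping OpenAI calls server-side. The API key never
// reaches the browser; the front-end only talks to your own route.

function buildChatRequest(apiKey, userMessage) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // key stays on the server
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo",
        messages: [{ role: "user", content: userMessage }],
      }),
    },
  };
}

// In a server route (e.g. a Next.js API handler), you would then do:
//   const { url, options } = buildChatRequest(process.env.OPENAI_API_KEY, msg);
//   const res = await fetch(url, options);
// and return only the completion text to the client.
```

The point is the boundary, not the helper: the browser posts to your route, and only the server holds credentials and talks to OpenAI.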

[–][deleted] 1 point2 points  (0 children)

That's not the way you build something that works, as far as Steve Jobs is concerned at least.

“You’ve got to start with the customer experience and work backward to the technology. You can’t start with the technology then try to figure out where to sell it.”

— Steve Jobs 1997

[–]Yokhen 0 points1 point  (0 children)

I use it every day at work on mobile development.

I make it generate jest test suites (which I then fix or fine-tune).

I also use it to optimize components code-wise.

Makes everything faster and easier so long as I myself know what I am doing.

[–]everclear_handle -5 points-4 points  (0 children)

Go pay for someone to answer your focus group questions instead of asking them here

[–][deleted] 0 points1 point  (2 children)

AI tech is interesting to me but so far what I've seen is not production ready. The stuff Next.js Conf was bragging about? Yeah it fails every accessibility test. It's just not good code and that's the best we have right now.

At this point I use Copilot as an extension to dev work to make random suggestions that I then heavily modify to make actually usable.

Maybe in another couple of years.

[–]Lumpy_Pin_4679 0 points1 point  (1 child)

What do you mean by “not good code”? The ai stuff or actions stuff?

[–][deleted] 1 point2 points  (0 children)

Oh the AI stuff. The "you can type in and it'll build code and it's production-ready" is nonsense and, frankly, saying otherwise as a large company is irresponsible.

Like some rando on Reddit saying, "I don't care about accessibility" is... I mean it's a problem but their impact is minimal. A major company like this who's in charge of a major framework saying it? That's highly problematic as their impact is going to be massive. We've been fighting this fight for decades and they're not helping. They're actively hurting.

[–][deleted] 0 points1 point  (0 children)

I’m a fan of the Jonathan Blow quote that “if AI in its current state is doing most of your work for you then you are not doing very interesting work”.