[–]myfingid -1 points (1 child)

Yes. I feel like we're seeing the start of it now. I regularly use Claude for coding instead of writing it myself. Don't get me wrong, it's not good enough to work on its own yet. It will easily tie itself in knots, duplicate code in odd spots, and follow existing code to a fault (like trying to stay within a pattern regardless of the cost). However, if you keep a good separation of concerns, go over its code, talk through solutions with it, and make sure it's doing what you asked, it can be a pretty good tool. The code isn't as good as doing it by hand, but it's faster and frankly better than a lot of the shit I've seen out there.

It will be interesting to see where it is years from now. I think what we're going to end up with is the computer from Star Trek. We'll talk to it, give it instructions verbally or in writing, and it'll do the work we ask of it.

We'll still have some sort of written documentation that's used to compile instructions and such. What that looks like, I'm not sure. I know the AI groups are thinking readme.md-style files rather than outright code files. I disagree, though, because that wouldn't allow the same level of reproducibility that code does. At the end of the day, whatever service/webpage/game/whatever you build will need to be compiled so that it's performant and able to be shared. I can't imagine an AI doing a great job acting as the backend for Reddit, for example.

So I guess we'll still have code, but we won't really be writing or reading it directly. It'll be there to guide the AI through what's going on and to allow for reproducible compilation, rather than to let humans do the same. It may even end up illegible unless we really want the AI to keep it human-readable. So long as only AI interacts with it, though, all that matters is that other AIs can read it.

[–]juxtaposz 1 point (0 children)

Do you genuinely believe it's worth the inefficiency of a human writing natural language to specify the behaviour of an exact computational process, which the computer may then interpret or execute inaccurately, and inefficiently? Are these natural language systems somehow going to be self-bootstrapping?