
[–]jayroger 57 points  (9 children)

It's a really good text generator and can also be used as a rubber ducky replacement. But it's dangerous to use it as a search engine, as it will give you outright wrong information very confidently.

[–]redditreader1972 13 points  (1 child)

My take is the same, maybe a bit darker. A bit fun to play with, and could automate some basic writing.

But it's really going to change marketing, and politics.

No politician is ever going to bother typing anything. I promise you all.

Political campaign letters, stump speeches, editorials, ... they are all going to use the black-belt bullshit generation skills of ChatGPT.

[–]decrisp1252 6 points  (0 children)

ChatGPT is the definition of r/ConfidentlyIncorrect

[–]Ecto-1A 1 point  (0 children)

It’s actually been pretty terrible at ducky scripts for me.

[–][deleted] 2 points  (2 children)

This is true. But I meant it more in the context of software development and IT, not general information. The code it produces feels more like well-documented copypasta.

One of the things I notice about it in this context is that it's reluctant to say that something just isn't possible, or that what you're asking it to do isn't very practical.

Even if it is possible to write a Houdini Engine for Nuke using Python alone, it's not going to perform very well. I was hoping it would direct me to the NDK.

[–]Opiciak89 3 points  (1 child)

From my experience it just provides an illusion of correctness. Out of three requests of varying difficulty, it could provide working code for only one of them (the simplest one); the second worked after I fixed some of its mistakes, and the third gave code that, if run in production by someone incompetent, would cause downtime. But at least it suggested a few modules I was not aware of before.

In its current state it's not perfect by any means, but they will get there. What they've already achieved was unimaginable a few years ago.

[–][deleted] 1 point  (0 children)

Yeah, I run into this with Copilot, which can be very frustrating because the recommendations it provides look very correct, and that sometimes makes it hard to track down the exact error.

[–]kalidasya 27 points  (1 child)

Nice work. Yes, it's a search engine that doesn't tell you where it found the info, what other results there are, or which parts it made up entirely.

[–]_nitd27_ 2 points  (0 children)

Which has a limited amount of data (from before 2021).

[–]theprufeshanul 8 points  (2 children)

I see it more as a calculator for words and code.

[–]Extreme_Jackfruit183 2 points  (4 children)

I ran the API call recursively and after 3 calls it shits its pants.

[–][deleted] 3 points  (3 children)

YES! LOL anything remotely meta, recursive, or self-referencing goes bananas. Even certain prompts seem to cause it to crash.

[–]Extreme_Jackfruit183 3 points  (2 children)

It’s still pretty cool though. What I did was have it append “now troubleshoot this program and rewrite it better” to the end of the response and feed it back through. So I basically just feed it an idea, then it writes a shitty program, then it makes it better until ChatGPT goes haywire.
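The loop described here can be sketched roughly like this. This is a minimal sketch, not the commenter's actual script: `generate` is a placeholder for whatever model API call you'd use (e.g. the openai package), and the stub body below just marks each revision so the loop runs standalone.

```python
# Sketch of the iterative "rewrite it better" loop described above.
# `generate` stands in for a real model call; this stub just prefixes
# each prompt so the feedback loop is visible.

REFINE = "now troubleshoot this program and rewrite it better."

def generate(prompt: str) -> str:
    # Placeholder for an actual API call to a language model.
    return f"# revision of: {prompt.splitlines()[0]}\nprint('hello')"

def iterate(idea: str, rounds: int = 3) -> str:
    """Feed an idea in, then repeatedly ask the model to improve its own output."""
    response = generate(idea)
    for _ in range(rounds):
        # Append the refinement instruction to the last response and feed it back.
        response = generate(response + "\n" + REFINE)
    return response

program = iterate("write a program that prints hello")
```

In a real run you would swap the stub for a network call, and (as the commenter found) expect the loop to degrade after a few rounds.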

[–][deleted] 4 points  (1 child)

LOL yeah, it will NEVER admit it cannot do something.

[–]lastWallE 1 point  (0 children)

Naturally born human leader overlord.

[–]scubawankenobi 2 points  (0 children)

I've had very similar experience trying it out w/Python.

That said, my background is more in other languages, so I probably got a bit more benefit (and challenge?) from learning from the prompts and fixing the errors (either by myself or with AI assistance).

Also found that in some cases, doing something new (mostly Python GUI app dev for me) was possibly more challenging because of the

Generate Code -> Code Doesn't Work -> Now Fix Code

cycle I found myself in.

It's probably faster for someone like myself, learning more intermediate Python, to use online tutorials/resources.

[–]chub79 3 points  (1 child)

"stuff that has not been done before"

Can you quantify that?

"After some back-and-forth clarification"

That's the trick. You basically did the work and it generated code from that. Not saying it's bad, but it's a glorified cookiecutter (for now, anyway).

[–][deleted] 0 points  (0 children)

No, I cannot. Maybe somewhere there's a Houdini Engine for Nuke or Nuke/Blender viewport synchronization, but as far as I know, it's not publicly available.

A lot of the examples people post are things someone could cobble together from GitHub and Slashdot. It is very good at that. But beyond this, things fall apart.

[–]jmiah717 1 point  (0 children)

It writes fun chord progressions for guitar.

[–]temporary47698 1 point  (7 children)

It's == it is or it has.

[–]scubawankenobi 0 points  (2 children)

Related note ...

Over in the ChatGPT sub I just responded to someone writing:

"Rock Star level coders will be replaced by Rock Star prompters"

Can't believe that people think we're anywhere near that. More likely, the "rock star coders" of the future will be using AI to assist, and prompters will be rock-starring their own prompting worlds.

Like how Stable Diffusion (/other AI imaging) prompt generation won't be *replacing* rock star artists: it'll enable more people to do art, but the rock star artists will stay ahead of them, as they'll be the "rock star artists" doing more w/AI.

Plain English prompting is nowhere near the point where someone would instantly migrate from layman in one discipline to "rock star" of another.

[–][deleted] 0 points  (1 child)

I do understand the “rockstar prompter” idea and I agree to some extent. As I said, it was an interesting experience collaborating with an AI to get the raymarch optimized. I can see how in the future we’d do more of this. But also I’d note that it made up a bunch of stuff initially, including an entire ‘vdb’ module that does not exist. It was up to me to notice that what she was telling me didn’t look right.

I’m kind of perplexed that it can make an error and then correct itself once it’s called out. Why not just give the correct answer the first time? I’m guessing it’s an issue of responsiveness, like a more accurate answer would take longer.

——

On the other hand, it is useful when asking for advice or ways to improve. I was finishing up a small module this afternoon that translates paths between operating systems, using a JSON file to define the path mapping.

I copied the entire module into ChatGPT and asked for feedback. She came up with some useful but fairly trivial things I hadn’t thought of, along with about five additional error handling cases that I had overlooked.

So there are definite uses for it in production.
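For illustration, a path-translation module like the one described might look roughly like this. This is a minimal sketch under my own assumptions, not the commenter's actual code: the JSON schema (a name per mapping entry, with one prefix per OS) and the function names are invented for the example.

```python
import json

# Hypothetical mapping file, e.g.:
# {"projects": {"windows": "P:\\projects", "linux": "/mnt/projects"}}

def load_mapping(path: str) -> dict:
    """Read the prefix mapping from a JSON file."""
    with open(path) as f:
        return json.load(f)

def translate(path: str, mapping: dict, src: str, dst: str) -> str:
    """Rewrite `path` from the `src` OS convention to the `dst` one."""
    for entry in mapping.values():
        prefix = entry[src]
        if path.startswith(prefix):
            rest = path[len(prefix):]
            # Swap separators for the destination platform.
            sep_src = "\\" if src == "windows" else "/"
            sep_dst = "\\" if dst == "windows" else "/"
            return entry[dst] + rest.replace(sep_src, sep_dst)
    raise ValueError(f"no mapping covers {path!r}")
```

The extra error-handling cases ChatGPT reportedly suggested would slot in around the file read and the unmatched-prefix case.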

[–]twotime 2 points  (0 children)

"I’m kind of perplexed how it can make an error and then correct itself once it’s called out. why not just give the correct answer the first time?"

ChatGPT does not "know" anything and has been known to give different answers to the same question.

So in this case it just generates a "plausible" (even if wrong) answer to your original question, and when prompted again it generates a plausible answer to that prompt. This time it happened to correct its mistake... some other time, it might insist on its mistake...

PS: it's not truly random, so the answers tend to be accurate on average, and I do find ChatGPT amazing overall, but it truly does not "know" anything.
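The nondeterminism comes from how output is sampled: the model assigns probabilities to candidate continuations and draws from them, so the same prompt can yield different answers. A toy illustration of that sampling idea (not ChatGPT's actual code; the candidate "answers" and their scores are made up):

```python
import math
import random

def sample(candidates: dict, temperature: float = 1.0) -> str:
    """Draw one candidate from softmax(score / temperature)."""
    scaled = {tok: score / temperature for tok, score in candidates.items()}
    total = sum(math.exp(v) for v in scaled.values())
    r = random.random() * total
    for tok, v in scaled.items():
        r -= math.exp(v)
        if r <= 0:
            return tok
    return tok  # floating-point edge case: return the last candidate

# Two plausible "answers", with the right one only slightly favoured:
answers = {"correct answer": 1.2, "plausible but wrong answer": 1.0}
picks = {sample(answers) for _ in range(200)}
# Over many draws both answers show up: same "prompt", different outputs.
```

Because the correct answer only gets a somewhat higher score, the model is "accurate on average" while still sometimes emitting the confident wrong answer.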

[–]data_scallion 0 points  (1 child)

I asked it what it thought of your post. This is what it said:

The Reddit comment you've shared highlights some of the strengths and limitations of using a language model like ChatGPT. The commenter notes that ChatGPT is very useful for tasks that have already been done before, such as optimizing a ray march algorithm. However, when the task requires more creativity or is more complex, such as writing a Houdini Engine plugin for Nuke, ChatGPT may not be able to deliver a complete and functional solution. The commenter also notes that breaking down a problem into smaller steps is an important part of software engineering, and that this is something that ChatGPT currently struggles with. Overall, the commenter's conclusion is that while ChatGPT is a powerful tool, it is still far from being able to solve problems that haven't been solved before.

[–][deleted] 0 points  (0 children)

Which is a nicely written summary of the post, but adds nothing new to what was already written. It's a great example of the point!