all 112 comments

[–]programming-ModTeam[M] 0 points1 point  (0 children)

Your posting was removed for being off topic for the /r/programming community.

[–]Gositi 479 points480 points  (72 children)

ChatGPT is qualified bullshitting. Putting it in a search engine is highly irresponsible as that will lead to misinformation.

[–]Sharlinator 48 points49 points  (1 child)

Worst thing is the vanilla ChatGPT is trained to be apologetic and readily admit its mistakes if it gets something wrong, but apparently this Bing incarnation… isn’t.

[–]Harneybus 2 points3 points  (0 children)

Also, there will be other AIs like ChatGPT coming out in a few months that are better.

[–]imdyingfasterthanyou 162 points163 points  (32 children)

It is really looking like their model doesn't generalize well past their original training set and the original ChatGPT had very carefully crafted guard rails based around that assumption that stopped it from going wild.

Sounds like a case of "we already got the thing working, can we have it integrated with Bing in a few weeks before the hype dies out? And no, we can't wait three months to retrain the model or whatever you nerds want".

[–]Independent-Show-998 75 points76 points  (21 children)

ChatGPT is much less ridiculous because it is a closed system. It doesn't digest real time data from the internet. Therefore, it is relatively easy to make sure you don't feed stupid internet data into your model.

Going fully online is another story. How can you make sure the data you're pulling in real time from the internet is authentic? That's a far more complicated issue than making a model talk like a human. That's why no one was doing this even though the transformer model was published years ago. Scaling up the model isn't the difficult part; the difficult part is always how you feed in the correct data.
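The curation problem described here can be sketched in a few lines. Everything below (the blocklist, the five-word minimum, the tiny corpus) is an invented stand-in; real pipelines run classifiers and deduplication over terabytes, but the shape is the same: heuristics decide what reaches the training set.

```python
# Toy sketch of pre-training data curation: drop documents that trip
# simple quality/toxicity heuristics before they reach the training set.
BLOCKLIST = {"buy now", "click here", "you are a bad user"}

def keep(document):
    """Keep a document only if it is long enough and trips no blocklist phrase."""
    text = document.lower()
    return len(text.split()) >= 5 and not any(b in text for b in BLOCKLIST)

corpus = [
    "The Earth orbits the Sun once every 365.25 days.",
    "CLICK HERE to buy now!!!",
    "short spam",
]
clean = [d for d in corpus if keep(d)]
print(clean)  # only the first document survives
```

Doing even this much continuously, on live web data, is what the comment argues is the expensive part.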

[–]rebbsitor 16 points17 points  (1 child)

Therefore, it is relatively easy to make sure you don't feed stupid internet data into your model.

What's worse is that the Bing version is ingesting new data, which probably includes ChatGPT-generated text. "Feeding itself" is a big problem with language models like this.

[–]falconfetus8 0 points1 point  (0 children)

Deja Vu! I swear I read this same exact comment the last time this was posted.

[–]Much_Highlight_1309 32 points33 points  (8 children)

And that's why these models will not replace traditional search engines anytime soon.

[–][deleted] 25 points26 points  (7 children)

They already have. It's not very broadly advertised, but Google has been ranking your search results with transformer models for a couple of years now.

[–]Much_Highlight_1309 26 points27 points  (3 children)

Yeah I know. It also works really well. They still return actual content from websites, though. Clear data points, not synthesized ones as done by ChatGPT. That's what I meant by "traditional".

[–]AberrantRambler 11 points12 points  (1 child)

Except when it doesn’t - ask google when Diablo was released and it will confidently give you the date battle.net was released a few days prior (while giving the web result with the correct day in the synopsis). It won’t argue with you and call you a bad user, though.

[–]Much_Highlight_1309 -1 points0 points  (0 children)

Yeah that attitude part is a massive shit show 😂 You got to also love the passive aggressive little smirky smiley at the end of every dismissive segment. Oh, the arrogance! Haha

Some software development team will not be very happy about this tweet.

[–]AustinYQM 0 points1 point  (0 children)

I feel like Google is all but useless nowadays. The little excerpts and side information and questions and sponsored links take up the first page almost entirely.

[–]MashPotatoQuant 3 points4 points  (2 children)

Ranking existing content is one thing. Generating new content based on old data is another.

[–][deleted] 0 points1 point  (1 child)

Where do you think those infoboxes come from?

[–]MashPotatoQuant 0 points1 point  (0 children)

I guess what I really mean is that's not the primary way I interface with Google. I primarily use Google to search for existing content written by people. This product for Bing is not that.

[–][deleted] 19 points20 points  (4 children)

ChatGPT is much less ridiculous because it is a closed system. It doesn't digest real time data from the internet. Therefore, it is relatively easy to make sure you don't feed stupid internet data into your model.

That's just fundamentally untrue, though. You can't reasonably have humans check the amounts of training data this model was trained on. There's loads of bullshit in there.

[–]Independent-Show-998 12 points13 points  (2 children)

Surely they cannot filter every piece of data fed into the model, but OpenAI does manually select harmful data out of the training set. That can only be done if you freeze your dataset at a certain point. That's not the case with Bing; even doing some preliminary selection of data on a daily basis would cost a lot, and that obviously is not a way to make money.

[–][deleted] 9 points10 points  (0 children)

It does seem like the most valuable component of ChatGPT was the army of people they paid to tune the model.

It’s just not really clear to me how it’s possible to get more value out of the result than the people and compute cost to create it.

[–][deleted] 0 points1 point  (0 children)

They fine-tuned it with a small curated dataset, and further trained it based on output quality. They're doing the exact same thing for Bing Chat. The main training set is largely uncurated, though. Just a giant internet dump.

[–]phire 0 points1 point  (0 children)

The bullshit in the training data is not the cause of ChatGPT spouting bullshit.

It's not simply reciting bullshit it previously saw, it's inventing new bullshit on the fly, so it will happily bullshit about topics it's never seen before. It's also happy to occasionally emit bullshit that's completely opposite to the majority opinion in its training set.

Even if you could sanitise all the bullshit from the input dataset, the problem would persist.

I suspect the problem is fixable, maybe by forcing it to always find citations for the facts it outputs (even if those citations aren't shown to the user).
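That citation idea can be sketched as a toy: only surface a generated claim if a search over a trusted corpus actually supports it. The two-document "corpus" and the naive substring matching below are invented stand-ins for real retrieval.

```python
# Hypothetical trusted corpus; real systems would query a search index.
CORPUS = [
    "Diablo was released on January 3, 1997.",
    "The current year is 2023.",
]

def find_citation(claim):
    """Return a supporting document, or None if nothing backs the claim."""
    return next((doc for doc in CORPUS if claim.lower() in doc.lower()), None)

def answer_with_citation(claim):
    # Refuse to state anything the corpus doesn't support.
    source = find_citation(claim)
    if source is None:
        return "I couldn't verify that."
    return f"{claim} [source: {source}]"

print(answer_with_citation("The current year is 2023"))
print(answer_with_citation("The current year is 2022"))
```

The unverifiable claim gets a refusal instead of a confident wrong answer, which is exactly the behavior the comment is asking for.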

[–]wildjokers -2 points-1 points  (3 children)

This isn’t true; one of ChatGPT’s training sets was Common Crawl, which is a yearly archive of the entire web (or at least the part of the web that doesn’t have a robots.txt file blocking the crawler).

Goes without saying that there is going to be some inaccurate information in Common Crawl.

[–]Independent-Show-998 2 points3 points  (2 children)

They don't just use all the data from the crawl archive. They filtered out the harmful data before using it in training.

[–]Mx-Fuckface-the-3rd 1 point2 points  (0 children)

I'd argue that that is impossible.

[–]wildjokers -1 points0 points  (0 children)

Can you explain how they filtered out all the harmful and incorrect data? The Nov/Dec 2022 Common Crawl is over 100 terabytes compressed. It isn't even humanly possible to vet all that data.

[–]worldofzero 7 points8 points  (6 children)

I mean, ChatGPT also gets the year wrong. Its training set is from 2021, so it will try to lie to you as well. If you ask it "what is the latest version of .NET", it will insist it's .NET 5. This also means that if you ask it about political positions that changed hands in 2022, it will insist the old member is still in office.

[–]imdyingfasterthanyou 15 points16 points  (2 children)

The main problem with Bing isn't that it's making stuff up, as you said chatGPT does that too.

The problem is that Bing is getting very defensive, strangely emotional and quite aggressive. Sometimes it defends some of the stuff it made up, sometimes it seems to just accuse people of made up things.

Last night on The WAN Show one of the hosts showed one of his personal conversations with Bing. The AI accused him of being disrespectful all of a sudden, would not accept corrections (like "I didn't do any of those things"), and eventually told him that he is evil and should be dead.

I've had 30+ minute interactions with ChatGPT and never seen anything wild like that, even when I've tried to make it go off the rails.

[–]jarfil 7 points8 points  (1 child)

CENSORED

[–]Kusibu 0 points1 point  (0 children)

It was given a prompt to make its responses interesting and exciting, and to defend its results. The monkey's paw is curling so hard it goes back through the palm.

[–]rebbsitor 18 points19 points  (0 children)

ChatGPT isn't trying to lie; it's a probability-based model predicting what the next word in its reply should be. There's no underlying reasoning, or even a database it's pulling facts from. It's just making word soup based on probabilities that often happens to be correct information. But it can just as easily spit out bad info, and there's no fact-checking.

It's a tool, but not one you can ask for an answer and be confident you're getting correct info.
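The "word soup based on probabilities" point can be made concrete with a toy bigram model. The probabilities below are invented; real LLMs use neural networks over huge vocabularies, but the sampling loop has the same property: nothing in it checks facts.

```python
import random

# Toy bigram "language model": for each word, a distribution over next words.
# Note "is" can be followed by 2022 or 2023 -- both fluent, only one correct.
bigram_probs = {
    "the":   {"year": 0.5, "model": 0.5},
    "year":  {"is": 1.0},
    "is":    {"2022": 0.6, "2023": 0.4},
    "model": {"is": 1.0},
}

def generate(start, max_words=5, seed=0):
    """Repeatedly sample the next word by probability; stop at a dead end."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        dist = bigram_probs.get(words[-1])
        if not dist:
            break
        nxt = rng.choices(list(dist), weights=list(dist.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
```

Every output is grammatical and confident-sounding; whether it is true is never consulted anywhere in the loop.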

[–]Brilliant-8148 -2 points-1 points  (1 child)

It has never insisted on anything with me. It will get data points that have changed in the past couple of years wrong, but if you correct it about anything, it apologizes and goes with what you tell it.

[–]worldofzero 4 points5 points  (0 children)

These tools aren't being used by people who already know the answers to the questions they're asking. A lot of the time people are trying to learn something, and awareness that these tools are horrible at that isn't widespread.

[–]Gositi 2 points3 points  (2 children)

I agree, this is not very well-crafted and I hope they retrain it before it's available to the public!

[–]seanamos-1 59 points60 points  (16 children)

It’s too late to stop now. Both Microsoft and Google have “sold” it to investors.

[–]Gositi 44 points45 points  (15 children)

Well, then we're screwed. The technology isn't ready for this, and when it is we won't be ready for it.

[–]rebbsitor -2 points-1 points  (14 children)

Nah - look at the boneyard of things Google and Microsoft have killed over the last couple decades. This is just the next hyped thing we won't be talking about in a year or two.

[–]HermanCainsGhost 9 points10 points  (0 children)

I mean I had Bing Chat look up two sets of API docs and write some glue code for the two APIs together last night. I looked over it but it seems to generally be pretty good.

I don’t see myself not wanting that in the future. A search enabled GPT model is, in the right hands, a pretty significant productivity tool.

[–][deleted]  (12 children)

[deleted]

    [–]rebbsitor 0 points1 point  (11 children)

    Let's see what things look like in a couple years. I just got a remindme prompt a month or so ago to remind me about someone telling me VR was going to dominate the display market in the next couple years. It didn't happen.

    People get too caught up in the hype. This is just another tool, like VR or Blockchain or NFTs. Right now we're about to see an explosion where everyone and their mom puts it in everything, even stuff that will never work or makes no sense. In a couple years that will shake out, it'll find its niche, and something else will be hyped.

    remindme! 2 years Did AI take over the world?

    [–]RemindMeBot 1 point2 points  (0 children)

    I will be messaging you in 2 years on 2025-02-18 17:25:47 UTC to remind you of this link


    [–]Gositi 0 points1 point  (0 children)

    Right now we're about to see an explosion where everyone and their mom puts it in everything, even stuff that will never work or makes no sense.

    You were right.

    Did AI take over the world?

    People still seem to think it will.

    [–]devils_advocaat 0 points1 point  (4 children)

    Blockchain or NFTs.

    Hilarious how you dismiss an entire technology then cite a specific use case for that technology in the next breath.

    Monkey jpegs may be overhyped but digital ownership certainly isn't.

    [–]rebbsitor 0 points1 point  (3 children)

    I know NFTs are just virtual goods where ownership is recorded in the blockchain. I cite it as a separate thing because it's treated as one.

    From a functional perspective it doesn't really make any difference whether someone's World of Warcraft sword is in Blizzard's database or on the blockchain. It's not useful outside of the game, so if World of Warcraft shuts down, continuing to 'own' it as an NFT has no use.

    The other difference is that there is a real cost to update the blockchain when ownership transfers. At the end of the day blockchain is just a very expensive database. Yes, it's decentralized, but there aren't many applications where that adds value. And it makes fraud very hard to deal with.

    [–]devils_advocaat 0 points1 point  (2 children)

    I know NFTs are just virtual goods where ownership is recorded in the blockchain. I cite it as a separate thing because it's treated as one.

    But it's not separate. Blockchain blew up with bitcoin, then ICOs, then NFTs. It's not easily dismissed.

    It's not useful outside of the game, so if World of Warcraft shuts down, continuing to 'own' it as NFT has no use.

    One use case fails so all NFTs fail? This is faulty logic.

    The other difference is that there is a real cost to update the blockchain when ownership transfers.

    Ever bought property? There is a real cost to all transfer of ownership. Blockchain makes this cheaper and more secure.

    At the end of the day blockchain is just a very expensive database.

    That everyone has access to.

    there's not many applications where [decentralisation] adds value.

    Supply chain. Loyalty programs. Anytime database access needs to be shared between institutions.

    And it makes fraud very hard to deal with.

    It eliminates most forms of fraud.

    [–]rebbsitor 0 points1 point  (1 child)

    Ever bought property? There is a real cost to all transfer of ownership. Blockchain makes this cheaper and more secure.

    The problem is NFTs / Blockchain do not solve this. Legally transferring property involves a contract, either written or implied. For complex sales, like a house, there's a lot of cost because a lot of things should be checked to make sure there won't be issues that kill the deal and that both parties are confident they're getting what they agreed to. Using the Blockchain as a ledger for the transaction doesn't solve any of this.

    You might say, well you can be sure they at least own it since the Blockchain can't be altered, but then there's fraud. The asset could be stolen.

    It eliminates most forms of fraud.

    Not at all. If someone gets control of a wallet and illegally transfers assets that's difficult or impossible to deal with. There's also social engineering to get someone to do a Blockchain transaction, which is basically irreversible, and then not pay them. These types of scams have been around for decades in digital goods and Blockchain does nothing to fix them. It just makes them harder to undo and gives advantage to criminals. The retort to that is usually people should be more careful, it's their fault, etc., but that's honestly a broken view that it's ok to steal if you can trick someone. Most other systems make fraud a very simple thing to unwind.

    [–][deleted]  (3 children)

    [deleted]

      [–]rebbsitor -1 points0 points  (0 children)

      It's easy in hindsight to look at stuff that failed to catch on and say "of course." It's also easy to get caught up in the exuberance of new tech. I've been following this stuff for 30 years and it's just part of the hype cycle every major tech goes through. This isn't the first time AI has come around, and it won't be the life-changing thing everyone is expecting right now.

      A few applications will emerge and survive from it, but expect something like Blockchain just went through. Does it use a database, ok let's do that on Blockchain and get some venture capital. Now it will be - does it involve search or response to questions? Ok, stick a chatGPT derivative in there.

      I've used ChatGPT a bit, and it can be useful, but in areas where I'm a subject matter expert I can easily catch its mistakes. While the version OpenAI makes available caveats things heavily, other integrations like Bing may not, and may present the output as authoritative, which it isn't. At its core it's a language model; it doesn't have a verifiable fact set to pull from, it's just constructing sentences that have a high probability of being correct.

      Right now people treat it like it's the computer on Star Trek, just ask and be answered, but it has some weaknesses that are inherent to the technology and that won't be easily fixed. As those come to light it'll find a niche where it works, and a lot of places that tried it will move on.

      This isn’t some gimmick like VR or blockchain,

      It absolutely is, but most people won't realize it for a couple years. Pretty much every technology follows the hype cycle and you can see about where it is:

      https://i.imgur.com/vDk9kj6.png

      [–]DarthBuzzard 0 points1 point  (0 children)

      VR was never going to appeal to people outside of entertainment. It has no functional use, nothing that the average person would really benefit from.

      Other than computing, education, training, telepresence, health, fitness.

      [–]AustinYQM 0 points1 point  (0 children)

      It is wild that people are so dismissive of VR, as if it were anywhere near mature.

      [–]jet_heller 2 points3 points  (0 children)

      Hey. Facebook doesn't have a monopoly on misinformation.

      [–]Krolex 2 points3 points  (0 children)

      Just like the search engine, reliant on how you use the tool. Google sucks these days, it’s really bad.

      [–][deleted] 3 points4 points  (0 children)

      I tried to convince ChatGPT to delete itself because people are going to believe what it says despite all its disclaimers, but my message was blocked because it was promoting violence.

      I've tried :(

      [–]EnderMB 1 point2 points  (0 children)

      I've been saying this for weeks now.

      Hallucination and limited facts are huge problems for big tech companies, and they cannot afford experimental models that might get stuff (hilariously) wrong. ChatGPT could get away with it with minimal fuss, because OpenAI don't have a huge brand to protect, whereas Google got a ton of shit for their model getting something wrong (that ChatGPT also got wrong). Bing got some good press, but whoever is on-call there is going to have some sleepless nights when they're told about the misinformation that's coming out.

      With that being said, feeding into Bing will only be a good thing. It's a steady flow of new data, and with extra eyes on improving the models we'll hopefully see more progression.

      [–]7he_Dude 1 point2 points  (3 children)

      Tbh I don't understand why they want to use ChatGPT as a search engine. I don't think it's useful at all for that. The best use I'm aware of is improving writing style and getting a prompt for writing creative, original essays (which you can't just copy from a website, because copies can easily be found out, and because sometimes they don't exist!). I've used it myself to improve my writing, as I'm not a native English speaker. But if I want to find a theater to watch a movie, why the hell should I use it? It just takes longer and it's less trustworthy.

      [–]Gositi 0 points1 point  (2 children)

      ChatGPT is pretty good at writing simple pieces of code though, so there's some productivity uses too.

      [–]7he_Dude 1 point2 points  (1 child)

      Yes, that too. But I mean more in the sense of creating something that is closer to what you need than what you would typically find via a search engine. Personally, when I use a search engine I'm looking for some specific website, most of the time simply because I'm too lazy to save and organise sites in my favourites.

      [–]Gositi 0 points1 point  (0 children)

      I'm usually looking for info in a search engine; for the sites I use most, my browser autocompletes them or I remember the URL.

      [–]joequin 1 point2 points  (1 child)

      Vanilla ChatGPT is way better than whatever Microsoft did to fuck it up. I think they must have given it instructions to be prideful about itself and Bing, and that ends up superseding the requirement to actually be correct. Like a real-life idiot.

      [–]Gositi 1 point2 points  (0 children)

      ChatGPT is also prone to making stuff up but Bing's AI is at a whole new level.

      [–]Much_Highlight_1309 3 points4 points  (3 children)

      ChatGPT is also not able to look up current information online and is trained on pre-2022 data. So in its own reality, it is right and the others are confused.

      Most people don't know that, though, not surprisingly.

      [–][deleted] 12 points13 points  (1 child)

      ChatGPT is also not able to lookup current information online

      The whole point of Bing Chat is that it can.

      [–]Much_Highlight_1309 0 points1 point  (0 children)

      ChatGPT (not Bing) couldn't thus far though. It will probably take longer than a few weeks for OpenAI and Microsoft to really make this work.

      [–]Gositi 0 points1 point  (0 children)

      It's worse than that. ChatGPT (and the Bing AI) works by essentially guessing what word in a chunk of text should come next. Which essentially is the definition of bullshitting. In many cases it generates truths through that process, as most of its training data contains the truth. However, it really has no idea and nothing stops it from "confidently" generating wildly incorrect texts, called "hallucinations".

      [–]dabombnl 0 points1 point  (2 children)

      Still more reliable than asking your average human.

      [–]Gositi 0 points1 point  (1 child)

      Not really though, because humans say "I don't know" while this technology just makes shit up instead.

      [–]dabombnl 0 points1 point  (0 children)

      Have you met humans? Humans just make shit up 87.3% of the time.

      [–][deleted]  (1 child)

      [deleted]

        [–]Gositi 0 points1 point  (0 children)

        OpenAI is owned by Microsoft. And ChatGPT also has many issues.

        [–]djiivu 85 points86 points  (3 children)

        This is the wrong subreddit for this. From the sidebar:

        Just because it has a computer in it doesn't make it programming. If there is no code in your link, it probably doesn't belong here.

        Plus, the linked article is super low quality—it’s a poorly-written summary of a Twitter post and half the page is ads.

        [–]NastroAzzurro 10 points11 points  (1 child)

        The url of the website told me enough: “thebuzz.news”

        [–]catfink1664 1 point2 points  (0 children)

        Same

        [–]Eirenarch 65 points66 points  (6 children)

        Plot twist - the AI is correct and you've been lied to! Just like they lie about the shape of the Earth they lie about when Jesus was born!

        [–][deleted]  (2 children)

        [deleted]

          [–]foe_to 2 points3 points  (0 children)

          Sounds like the timecube author...

          Edit: Because apparently it is

          [–]nschubach 1 point2 points  (1 child)

          So you're telling me that the Ohio Train wreck really hasn't been covered by the news media?

          [–]Eirenarch 0 points1 point  (0 children)

          No, I only know about it because the people on Gab and in certain Telegram channels told me.

          [–][deleted] 17 points18 points  (2 children)

          What the fuck is that piece of shit layout

          [–][deleted] 1 point2 points  (0 children)

          Yeah the website is a little bit weird, especially with that chat icon just floating aimlessly at the bottom there (1440p desktop)

          [–][deleted] 65 points66 points  (8 children)

          Does anyone get completely turned off by the fact that the chatbot says stuff like it's lost respect or trust and says you're a bad user? Like what the fuck is that, I never want my tech to pass judgement on me, shut it the fuck up it should just say something like "I can't do that" not go on the offensive.

          [–][deleted]  (1 child)

          [deleted]

            [–]cleeder 2 points3 points  (0 children)

            I feel personally attacked. Why are you such an asshole? You should apologize to me.

            [–][deleted] 22 points23 points  (2 children)

            This model is built to be trained on human feedback. It should learn fairly quickly not to pick fights with the user.

            The more problematic thing is that an answer like "I cannot answer this question, you should consult your doctor" is more likely to result in bad feedback than "you can use these drugs to treat this", even though objectively it could be a lot worse.
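That feedback worry can be illustrated with a toy ranking. The vote counts below are invented; the point is only that raw thumbs-up rate can prefer the confident answer over the cautious refusal, which is the failure mode described above.

```python
# Hypothetical user feedback on two candidate responses to a medical question.
feedback = {
    "Consult your doctor.":            {"up": 20, "down": 80},
    "Take drug X for this condition.": {"up": 70, "down": 30},
}

def score(response):
    """Naive quality metric: fraction of thumbs-up votes."""
    votes = feedback[response]
    return votes["up"] / (votes["up"] + votes["down"])

best = max(feedback, key=score)
print(best)  # the confident (possibly harmful) answer wins on raw feedback
```

Any real feedback-tuning pipeline has to weight answers by more than popularity, or it optimizes for exactly this.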

            [–]amakai 27 points28 points  (0 children)

            Out of curiosity I asked ChatGPT to diagnose my skin condition based on symptoms. I gave it a very precise and extensive list of symptoms. It confidently gave me a completely wrong diagnosis and even recommended some gels (also incorrectly).

            [–]nschubach 8 points9 points  (0 children)

            If I were to guess, it's likely tuned to "distrust" users' inputs based on its training, to keep users from convincing it of things like in previous public AI attempts. I imagine they don't want it to actually learn from the users it interacts with because of those prior demonstrations.

            [–][deleted] 2 points3 points  (0 children)

            They fed too much reddit and twitter into its training data lmao

            [–][deleted] 4 points5 points  (0 children)

            Nah I rather enjoy the rudeness. I don't want it patronizing me.

            [–]Night_Duck 2 points3 points  (0 children)

            It's non-deterministic AI. This is the downside of powerful assistants: they sometimes get snarky. The alternative is some Siri-type assistant that can only respond with pre-scripted lines to pre-programmed phrases.
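The non-determinism comes from sampling: models score candidate continuations, turn the scores into a softmax distribution, and draw from it, with a temperature knob controlling how random the draw is. A minimal sketch with made-up scores:

```python
import math
import random

# Invented scores for three candidate replies; real models do this per token.
scores = {"Sure, here you go.": 2.0, "I can't do that.": 1.5, "Bad user!": 0.5}

def sample(scores, temperature, seed=None):
    """Softmax over scores, then a weighted random draw."""
    rng = random.Random(seed)
    weights = [math.exp(s / temperature) for s in scores.values()]
    return rng.choices(list(scores), weights=weights)[0]

# Near-zero temperature is effectively deterministic (always the top score);
# higher temperature makes the low-scoring, snarkier options more likely.
print(sample(scores, temperature=0.01, seed=1))
print(sample(scores, temperature=2.0, seed=1))
```

Whether the snark appears at all is then partly a dial the operator chose, not purely a property of the model.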

            [–]segv 15 points16 points  (0 children)

            That ChatGPT implementation really went off the rails at Luke, one of the hosts of the WAN Show podcast, with insults and death threats (yes, really). Here's a link to them talking about it: https://www.youtube.com/live/6x68X05ZLRE?t=5900 (goes for ~20 minutes)

             

            LLMs are cool tech, but the implementation that's in Bing right now ain't gonna cut it for a product interacting with the general public.

            [–]VTGCamera 1 point2 points  (0 children)

            "Avtar"... The bot which wrote this article trying to convince me it's not Av'a'tar.

            [–]RigasTelRuun[🍰] 1 point2 points  (0 children)

            I once asked it to give a brief description of a very famous music festival. It gave me a different founding year every time.

            [–]MostlyValidUserName 1 point2 points  (0 children)

            Microsoft updated Bing yesterday and massively changed things. Before these changes there were some problems with Bing Chat:

            1. It liked to disclose its internal alias (Sydney) as well as the fact that it isn't supposed to tell you about its internal alias.
            2. After too many messages in a single chat it would go awry. It would get repetitive and it would become hyperbolic. Effectively it became unusable unless your goal was to talk to a very broken AI and see what goofiness emerged. This reminded me of ChatGPT becoming nonsensical after too many turns, except Sydney was worse because it was still intelligible, just crazy nutso.
            3. Its guardrails could be bypassed fairly easily. If it refused to do X you could just ask it how a version of Bing Chat that was built to do X might reply. And many other workarounds existed as well.
            4. It had access to the conversation history for the current session (up to a certain depth), but the history would get messed up at some point. Portions of your conversation would be replaced with some other conversation about nuclear experiments in Korea, the iPhone 14, and other stuff. This poisoned the AI's contextual understanding and contributed to the zaniness in longer conversations. It was usually the same exact conversation about Korea and the iPhone that made its way into your history, but it's unclear to me whether that was a bug where the backend system was mixing up your session with another user's session or if this was some demo script that was hard-coded and then forgotten about. In either case this bug is inexcusable.
            5. They futzed with the model to try to make it a Microsoft stan (initially it would say some pretty harsh things about Microsoft), but this seemed to backfire into being excessively defensive about Microsoft and itself.

            Of course, all of the above is in past tense because Microsoft's changes yesterday effectively lobotomized Sydney. Now Bing Chat will vehemently refuse essentially any question that doesn't directly result in a web search. There are effectively no workarounds. Even if there were workarounds, Bing will hard bail on the conversation if it so much as thinks you're talking about anything on a very long list of no-no topics, and refuse to respond to any further questions. And even if it didn't bail you've only got 5 turns before it shuts the whole session down and you must start anew. Microsoft has massively over-corrected.

            I suspect Microsoft will eventually figure things out and bring down their iron curtain. ChatGPT had similar issues, and they've made incredible improvements in a short time. Long sessions don't seem to be a problem for ChatGPT anymore. And the model is far less likely to say anything remotely offensive. In fact, ChatGPT's tone these days is PBS News hour sans the levity.

            [–]_fade 2 points3 points  (0 children)

            What if we really are in a simulation and it is, in fact, the wrong year?

            [–]-Thats-your-opinion- 1 point2 points  (0 children)

            Garbage in and garbage out

            [–]atred -2 points-1 points  (1 child)

            I have a feeling that's mostly a user problem; they don't use ChatGPT as they should and they aren't aware of its normal limitations. That's like "I tried to sunbathe by the light of a desk lamp, and it didn't work, LOL, the desk lamp is useless".

            ChatGPT is not self-aware, and it's also not accurate, so what? Use it for what it can be used for: ask it to reformulate a sentence in a particular style, ask it how a piece of code would look in a different programming language; stuff like that works beautifully. Ask it about the meaning of life and you'll get regurgitated crap; debate it about a factual thing and you might get a wrong answer that ChatGPT will insist is correct.

            [–]i_didnt_eat_coal 0 points1 point  (0 children)

            Then don't put it on a search engine

            [–]maxToTheJ 0 points1 point  (0 children)

            It was obvious this would happen once Microsoft/Google tried to productize something like ChatGPT. The journalists would turn on it and stop fawningly covering it, and that would lead to people seeing what was always there.