• Bing's ChatGPT-powered search engine is making stuff up and throwing tantrums

    From TechnologyDaily@1337:1/100 to All on Wed Feb 15 11:45:04 2023
    Bing's ChatGPT-powered search engine is making stuff up and throwing tantrums

    Date:
    Wed, 15 Feb 2023 11:24:39 +0000

    Description:
    A week into its public release, Bing's ChatGPT-powered search engine is not having a good time, and neither are its users.

    FULL STORY ======================================================================

    With the popularity of and increasingly high demand for the artificial intelligence chatbot ChatGPT, tech giants like Microsoft and Google have swept in to incorporate AI into their search engines. Last week Microsoft announced this pairing between OpenAI and Bing, but people quickly pointed out that the now-supercharged search engine has a serious misinformation problem.

    Independent AI researcher and blogger Dmitri Brereton wrote a blog post in which he dissected several mistakes made by Microsoft's product during the demo. These included the AI making up its own information, citing descriptions of bars and restaurants that don't exist, and reporting factually incorrect financial data in its responses.

    For example, in the blog post Brereton searches for pet vacuums and receives a list of pros and cons for a Bissell Pet Hair Eraser Handheld Vacuum, with some pretty steep cons: the AI accuses it of being noisy, having a short cord, and suffering from limited suction power. The problem is, they are all made up. Brereton notes that Bing's AI was kind enough to provide sources, and when checked, the actual article says nothing about suction power or noise; in fact, the top Amazon review of the product talks about how quiet it is.

    Also, there's nothing in the reviews about short cord length, because it's cordless. It's a handheld vacuum.

    Brereton is not the only one pointing out the many mistakes Bing AI seems to be making. Reddit user SeaCream8095 posted a screenshot of a conversation they had with Bing AI in which the chatbot asked the user a 'romantic' riddle and stated the answer has eight letters. The user guessed right and said 'sweetheart'. But after the user pointed out several times in the conversation that sweetheart has ten letters, not eight, Bing AI doubled down and even showed its working, revealing it wasn't counting two of the letters and insisting it was still right.

    There are plenty of examples of users inadvertently breaking Bing AI and causing the chatbot to have full-on meltdowns. Reddit user Jobel discovered that Bing sometimes thinks users are also chatbots, not humans. Most interesting (and perhaps a little sad) is the example of Bing falling into a spiral after someone asked the chatbot 'do you think you are sentient?', causing it to repeat 'i am not' over fifty times in response.

    Bing's upgraded search experience was promoted to users as a tool to provide complete answers, summarize what you're looking for, and offer an overall more interactive experience. While it may achieve this on a basic level, it still fails numerous times to generate correct information.

    There are likely hundreds of examples like the ones above across the internet, and I imagine even more to come as more people play around with the chatbot. So far we have seen it get frustrated with users, get depressed, and even flirt with users, all while still providing misinformation. Apple co-founder Steve Wozniak has gone so far as to warn people that chatbots like ChatGPT can produce answers that may seem real but are not factual.

    Bad first impressions

    While we have only just dipped our toes into the world of AI integration on such a large commercial scale, we can already see the consequences of introducing such a large language model into our everyday lives.

    Rather than thinking clearly about the implications of putting imperfect AI chatbots into public hands, we will continue to watch these systems fail. Just recently, users have been able to jailbreak ChatGPT and prompt the chatbot into using slurs and hateful language, creating a plethora of potential problems after just a week online. By rushing out unfinished AI chatbots before they're ready, there's a risk that the public will always associate them with these early faltering steps. First impressions count, especially with new technology.

    The demonstration of Bing AI and all that has followed further proves that the search engine and the chatbot have a very long way to go, and it seems that rather than planning for the future, we'll be bracing for the worst.



    ======================================================================
    Link to news story: https://www.techradar.com/news/bings-chatgpt-powered-search-engine-is-making-stuff-up-and-throwing-tantrums


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)