
Snapchat’s new AI chatbot exposed for using slurs, gaslighting users

If you haven’t heard already, Snapchat is getting in on the recent AI chatbot craze with its own model known as “My AI.” Now, users are reporting some bizarre and inappropriate behaviour from the chatbot, including its use of racial slurs and even instances of it pleading with users to turn themselves in to the authorities.

Screenshots posted to Twitter show My AI responding with an anti-Black slur when asked to create an acronym from the first letter of each word in a sentence the user provided. The chatbot then tried to backtrack, stating that its own answer violated the company's policy against hateful content.

Although concerning, this looks more like a case of users baiting My AI into saying something controversial than a genuine flaw in the chatbot. Many similar instances have surfaced online, including cases of Snapchat's AI 'gaslighting' users.

The first time that the “My AI” conversation is opened, users must acknowledge a disclaimer about the bot’s capabilities and limitations. It reads, “My AI may use information you share to improve Snap’s products and to personalize your experience, including ads. My AI is designed to avoid biased, incorrect, harmful, or misleading responses, but it may not always be successful, so don’t rely on its advice.” 

The strange responses from the bot have taken over the internet, with cases of My AI pleading with users to turn themselves in when they confess to murders and even reacting harshly to bomb threats.

Snapchat's My AI was developed using OpenAI's ChatGPT, which has been known to regularly get facts wrong and accidentally spread misinformation. OpenAI CEO Sam Altman went on record to say that ChatGPT is a "horrible product."

While the chatbot's unreliability is concerning, Snapchat is also being questioned on why it would bring the product to its audience, which often consists of minors. In most of the cases above, the bot was deliberately fed inappropriate and fake prompts in order to elicit a response in kind.

A Snapchat spokesperson said that users who intentionally misuse the service could be temporarily restricted from using the bot.

Image credit: Shutterstock

Source: @tracedontmiss Via: Vice


Google Bard gets update page detailing new changes and additions

Google’s experimental Bard conversational AI chat service got a new updates page to let people know what’s new.

Available at 'bard.google.com/updates,' the page details "experiment update[s]" to Bard. Currently, the page only shows the inaugural update from April 10th, 2023, which lists three additions, each with a description of the change and a 'why' section.

The April 10th update adds the experiment updates page (duh). Google says it added the page so “people will have an easy place to see the latest Bard updates for them to test and provide feedback.”

Along with that, the update adds additional suggested Search topics when people click the 'Google it' button. Google says this will help people explore a wider range of interests and related topics.

Finally, Google says it boosted Bard’s capabilities in math and logic because “Bard doesn’t always get it right.” The company says it’s working toward higher-quality responses in those areas.

Though the information is somewhat vague, it’s good to see Google doing more to communicate updates and changes to Bard. Having transparency on what’s new (and why Google chose to do something) is a big win for Bard users and AI in general.

The new update page comes after Google promised to improve Bard following a lacklustre launch.

Source: Google Via: Engadget


Take ChatGPT for a spin and see what it can do

Are you tired of boring, robotic chatbots that always seem to be one step behind in your conversations? Well, fear not, because ChatGPT is here to save the day!

At least, so says ChatGPT about itself in response to prompts I gave it. I've finally had a chance to play around with the new chatbot developed by OpenAI and see what it can do. ChatGPT, short for Chat Generative Pre-trained Transformer, runs on OpenAI's GPT-3.5 family of large language models and launched as a prototype in November 2022. It's currently free to use, and OpenAI plans to monetize it in the future.

ChatGPT has so far proven somewhat impressive in its ability to generate detailed responses to a myriad of queries, although it’s not always factually accurate. When playing around with ChatGPT, I found myself swinging back and forth between being impressed and being disappointed.

For example, I asked it to write a MobileSyrup story about itself, which generated the following:

And when I asked it to make that response funnier, it gave me this:

Both responses are fine, but neither is particularly mind-blowing in my book. Moreover, the "funny" response wasn't all that funny.

I also asked ChatGPT to generate a review of the iPhone 14, but it told me the iPhone 14 didn’t exist. I think it messed up the response because, as indicated by a warning on the main ChatGPT page, it has “limited knowledge of world and events after 2021,” and the iPhone 14 came out earlier this year.

Other prompts I gave to ChatGPT included asking it whether iPhone or Android was better, to which it spat out what I think is a reasonable comparison between the two. I was also pleasantly surprised when ChatGPT was able to generate several ideas for Magic: The Gathering Commander decks. However, the suggestions were somewhat basic, and when I asked for a decklist based on one of the suggestions, there wasn’t much synergy, and it provided incorrect information about some of the cards. You can view those prompts below:

I tried a few other prompts with ChatGPT as well, ranging from complex questions, like asking for solutions to the housing crisis, to simpler stuff, like fun activities to do with an eight-month-old. When it came to suggesting ideas or information, ChatGPT generally did okay, as long as I kept an eye out for inaccuracies. However, when ChatGPT did miss, it would miss hard. For example, I asked what impact ChatGPT will have on education, and it responded with, "I don't know what ChatGPT is." Neat.

Ultimately, I’m interested to see what comes of ChatGPT, but I think so far it’s somewhat overhyped. I’m sure it’ll be a powerful tool eventually, but for now, it still needs some work.

How to try ChatGPT out for yourself

Want to try out ChatGPT for yourself? It’s actually pretty easy to get started. Here’s what to do:

  • Head to the OpenAI website and click ‘Try’ at the top of the page (or just click this link).
  • You’ll be prompted to sign in with your OpenAI account. If you don’t have one, you can make one for free.
  • Once signed in, you should see the chat interface along with a spot to enter text at the bottom. You can then start entering prompts.
  • Conversations are stored on the side of the page so you can return to them later.

That's all you need to do to try ChatGPT. It's worth noting that you might not be able to access it right away; after I made an OpenAI account last week, I had to wait several days because there wasn't enough capacity for me to use ChatGPT. However, since getting access, I haven't had any issues using it.
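
The steps above cover ChatGPT's web interface. If you'd rather experiment with the same GPT-3.5 model family from code, OpenAI also offers a separate, paid API (distinct from the free ChatGPT site). Below is a minimal Python sketch, not an official example: it assumes you've installed the openai package (pip install openai) and set an OPENAI_API_KEY environment variable with a key from your account page, and the prompt text is just an illustration.

    import os
    import openai  # pip install openai

    # Assumes your API key is stored in the OPENAI_API_KEY environment variable.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    # text-davinci-003 is one of the GPT-3.5 family models available via the API;
    # note this is not the ChatGPT web bot itself.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Write a short, funny MobileSyrup-style blurb about ChatGPT.",
        max_tokens=150,
        temperature=0.7,
    )

    # The API returns a list of completions; print the text of the first one.
    print(response["choices"][0]["text"].strip())

Unlike the web chat, the API doesn't keep conversation history for you, so a script like this has to resend any earlier context it wants the model to remember.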


Google engineer suspended after claiming the LaMDA chatbot achieved sentience

Google suspended one of its engineers, Blake Lemoine, after he claimed the company’s ‘LaMDA’ chatbot system had achieved sentience.

The search giant believes that Lemoine violated the company’s confidentiality policies and placed him on paid administrative leave. Lemoine reportedly invited a lawyer to represent LaMDA, short for Language Model for Dialogue Applications. Additionally, Lemoine reportedly spoke to a representative from the U.S. House Judiciary Committee about alleged unethical activities at Google.

Lemoine works for Google’s Responsible AI organization and was testing whether LaMDA generated discriminatory language or hate speech — something big tech chatbots have had a tendency to do.

Instead, Lemoine believes he found sentience, based on responses LaMDA generated about rights and the ethics of robotics. According to The Verge, Lemoine shared a document with executives titled “Is LaMDA Sentient?” in April. The document, which you can read here, contains a transcript of Lemoine’s conversations with LaMDA (after Lemoine was placed on leave, he published the transcript via his Medium account as well).

In another Medium post, Lemoine shared a list of people he had consulted to help “guide” his investigations. The list included U.S. government employees.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Google spokesperson Brian Gabriel told The Washington Post.

Gabriel went on to explain that AI systems like LaMDA can “imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”

Google CEO Sundar Pichai first introduced LaMDA at the company’s 2021 I/O developer conference. At the time, Pichai said the company planned to embed LaMDA in its products like Search and Google Assistant. The Post also cited a Google paper about LaMDA from January that warned people might share personal thoughts with chatbots that impersonate humans, even when users know the chatbot isn’t human.

The Post interviewed a linguistics professor, who said it wasn't right to equate convincing written responses with sentience. Still, some of the responses shared by LaMDA are admittedly creepy, regardless of whether you believe it's sentient.

The main takeaway here should be that there’s a need for increased transparency and understanding around AI systems. Margaret Mitchell, the former co-lead of Ethical AI at Google, told the Post that transparency could help address questions about sentience, bias, and behaviour. Mitchell also warned of the potential harm something like LaMDA could cause to people who don’t understand it.

Moreover, the focus on sentience may distract from other, more important ethics conversations. Timnit Gebru, an AI ethicist whom Google fired in 2020 (the company says she resigned), tweeted as much, suggesting that discussions of sentience derailed more pressing topics.

Source: The Verge, The Washington Post