Categories
Mobile Syrup

Snapchat’s new AI chatbot exposed for using slurs, gaslighting users

If you haven’t heard already, Snapchat is getting in on the recent AI chatbot craze with its own model known as “My AI.” Now, users are reporting some bizarre and inappropriate behaviour from the chatbot, including its use of racial slurs and even instances of it pleading with users to turn themselves in to the authorities.

Screenshots were posted to Twitter showing My AI responding with an anti-Black slur when asked to create an acronym with the first letter of the user’s corresponding sentence. The chatbot then tried to backtrack by stating that its own answer was against the company’s policy towards hateful content.

Although concerning, this looks more like a case of users baiting My AI into saying something controversial rather than a genuine problem with the chatbot. Many similar instances have surfaced online, with cases of Snapchat’s AI ‘gaslighting’ users.

The first time that the “My AI” conversation is opened, users must acknowledge a disclaimer about the bot’s capabilities and limitations. It reads, “My AI may use information you share to improve Snap’s products and to personalize your experience, including ads. My AI is designed to avoid biased, incorrect, harmful, or misleading responses, but it may not always be successful, so don’t rely on its advice.” 

The strange responses from the bot have taken over the internet, with cases of My AI pleading with users to turn themselves in when they confess to murders and even reacting harshly to bomb threats.

Snapchat’s My AI was developed using OpenAI’s ChatGPT, which has been known to get facts wrong regularly and spread misinformation accidentally. OpenAI founder Sam Altman went on record to say that ChatGPT is a “horrible product.”

While the chatbot’s reliability is a concern, Snapchat is also facing questions about why it brought the product to an audience that largely consists of minors. That said, in most of the above cases, users fed the bot inappropriate or fabricated prompts specifically to elicit these responses.

A Snapchat spokesperson said that users who intentionally misuse the service could be temporarily restricted from using the bot.

Image credit: Shutterstock

Source: @tracedontmiss Via: Vice

OpenAI will reward you with up to $20,000 for finding ChatGPT bugs

OpenAI is committed to making the ChatGPT experience better for all users. The platform has announced a new bug bounty program that tasks the public with finding bugs in ChatGPT, including vulnerabilities and security flaws.

“We are inviting the global community of security researchers, ethical hackers, and technology enthusiasts to help us identify and address vulnerabilities in our systems,” wrote OpenAI. “We are excited to build on our coordinated disclosure commitments by offering incentives for qualifying vulnerability information. Your expertise and vigilance will have a direct impact on keeping our systems and users secure.”

OpenAI is partnering with Bugcrowd, a crowdsourced cybersecurity platform, to manage the submission of bugs and the eventual reward process. The bounty program is open to all, and rewards range from $200 to $20,000 USD (about $269 to $26,876 CAD) for low-severity and exceptional discoveries, respectively.

ChatGPT has experienced several bugs in the past. In a recent incident, the entire system went offline after users reported seeing titles of chats they weren’t a part of. Further, Twitter user @rez0__ discovered over 80 secret plugins while hacking ChatGPT.

It’s worth noting that not all issues reported to the company will be eligible for a reward. OpenAI has stated that issues such as jailbreaking or getting the model to say or pretend to do anything negative will not qualify. It remains to be seen how successful OpenAI’s Bug Bounty Program will be in mitigating security risks for ChatGPT. However, the initiative highlights the importance of cybersecurity and OpenAI’s commitment to making AI safe for all.

Click here to participate in OpenAI’s Bug Bounty Program.

Source: OpenAI

Microsoft rolling out AI Image Creator tool to Edge users globally

Microsoft is launching its Image Creator tool to Edge users around the globe. The tool gives users access to the DALL-E-powered tech via the Bing chatbot.

“Image Creator is now available on desktop for Edge users around the world,” confirms Microsoft. Once accessible, Image Creator is integrated into Edge’s sidebar. To get started, users must manually add the Image Creator tool to their sidebar: open the sidebar, click the ‘+’ button and select the tool.

From there, users can type in what they’d like generated as an image. Bing will take that request and attempt to match the prompt as accurately as possible, creating four high-res images. Once complete, users can download the images or upload them to social media.

Microsoft first discussed integrating OpenAI’s DALL-E into Edge and Bing Chat last month. The company claims it is using the “very latest DALL-E models.” In a blog post, Microsoft says the tool can be useful for creating social posts, PowerPoint images, and additions to documents.

Previously, the Image Creator tool was exclusively available to Bing Chat preview users. However, it is now rolling out to users on a wider scale. As of the time of writing, the Image Creator feature isn’t accessible via Edge on my devices. However, the rollout may be sporadic.

Source: The Verge

Canada’s Privacy Commissioner investigating ChatGPT

The popular AI chatbot ChatGPT is facing a new round of scrutiny, and this time it’s coming from Canada’s privacy watchdog.

The Office of the Privacy Commissioner of Canada says it has launched an investigation into the company that runs ChatGPT, OpenAI, following a complaint alleging the company is collecting and using personal information without user consent.

“AI technology and its effects on privacy is a priority for my Office,” Philippe Dufresne, Canada’s Privacy Commissioner, said in a press release. “We need to keep up with — and stay ahead of — fast-moving technological advances, and that is one of my key focus areas as Commissioner.”

Sam Altman currently serves as OpenAI’s CEO, but a number of players are responsible for its founding, including Elon Musk.

This isn’t the first time the company has faced pushback from government organizations. Italy recently outright banned ChatGPT.

Image credit: Shutterstock

Source: Office of the Privacy Commissioner of Canada

Apple co-founder, Musk and more call for AI development pause

Elon Musk and other well-known AI researchers have penned an open letter addressed to AI labs globally to pause the development of large-scale AI systems.

“Pause Giant AI Experiments” is the title of the open letter, and it is co-signed by Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, Pinterest co-founder Evan Sharp, Stability AI CEO Emad Mostaque, and several other notable AI researchers. 

The letter says that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.” It argues that working on such technology requires labs to take meticulous precautions, and that the work should be planned for and managed with “commensurate care and resources.”

However, that level of planning and management is not happening, and rather, AI labs are in a race to develop and deploy the large-scale AI systems that “no one – not even their creators – can understand, predict, or reliably control.”

Therefore, the letter suggests AI labs should “immediately pause” the training of AI systems more powerful than GPT-4 for at least six months. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

The letter also quotes OpenAI and its recent statement regarding artificial general intelligence, stating that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

The letter offers a solution: a set of shared safety protocols for advanced AI design and development should be put in place, rigorously audited and overseen by independent outside experts. This does not mean that AI development halts; it means development takes a temporary pause until stronger safety protocols can be put in place.

You can check out the full letter here.

Image credit: Shutterstock

Source: Future of Life

New ChatGPT browser plugin allows it to retrieve post-2021 information

San Francisco-based OpenAI has announced that it has begun to gradually roll out plugins for ChatGPT.

One of the new plugins offers access to third-party knowledge sources and databases from within the chatbot. It’s worth noting that the plugin will initially be available in alpha to developers and subscribers of OpenAI’s premium ChatGPT Plus plan. “We plan to roll out larger-scale access over time,” wrote OpenAI. 

The new plugin allows the chatbot to browse the internet, expanding the amount of content and information it can discuss, going beyond the training models to new information from the present day.

One of the main drawbacks of ChatGPT is that it is trained on information until the year 2021. The new web browser plugin can retrieve recent information, and as an example, OpenAI asked ChatGPT a question about the recent Oscars.

OpenAI acknowledges that a web-enabled ChatGPT could perform undesirable behaviours, such as sending fraudulent and spam emails, bypassing safety restrictions, and increasing the capabilities of bad actors. Nevertheless, the company has implemented several safeguards to prevent such behaviours. “This scopes the browsing plugin to be useful for retrieving information, but excludes ‘transactional’ operations such as form submission, which have more surface area for security and safety issues,” wrote OpenAI.

“Browsing retrieves content from the web using the Bing search API. As a result, we inherit substantial work from Microsoft on (1) source reliability and truthfulness of information and (2) ‘safe-mode’ to prevent the retrieval of problematic content. The plugin operates within an isolated service, so ChatGPT’s browsing activities are separated from the rest of our infrastructure.”

In addition to the browser plugin, OpenAI is also introducing a “Code Interpreter” plugin that can run Python code, handle file uploads to ChatGPT, and let users download the results. According to OpenAI, use cases where the code interpreter is especially useful include:

  • Solving mathematical problems, both quantitative and qualitative
  • Doing data analysis and visualization
  • Converting files between formats
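
To make the last use case concrete, here is a minimal sketch of the kind of task a code interpreter would run on an uploaded file: converting a CSV to JSON. The file contents and names here are hypothetical, standing in for whatever a user uploads.

```python
# Sketch: the sort of file-format conversion a code interpreter handles.
# csv_text stands in for the contents of a hypothetical uploaded file.
import csv
import io
import json

csv_text = "name,score\nAda,90\nAlan,85\n"

# Parse the CSV into a list of dicts, then serialize it as JSON.
rows = list(csv.DictReader(io.StringIO(csv_text)))
json_text = json.dumps(rows, indent=2)
print(json_text)
```

In the actual plugin, the model writes and runs code like this itself in a sandbox, then offers the resulting file back for download.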

OpenAI aims to develop more plugins and wants to bring them to a broader audience in the future. Check out OpenAI’s blog post about the plugins here.

Image credit: OpenAI, Shutterstock

Source: OpenAI

You can now ask Bing Chat to generate images using DALL-E

Microsoft’s Bing Chat is getting a visual upgrade courtesy of OpenAI’s DALL-E.

Announced in a blog post, Bing Chat will gain the ability to generate images using DALL-E, a generative image generator built by OpenAI. Microsoft didn’t provide specifics on the version of DALL-E used in Bing Chat, but it did tell TechCrunch that it was using the “very latest DALL-E models.”

Called the ‘Bing Image Creator,’ the feature is rolling out slowly to the Bing Chat preview. Users can access it through Bing Chat’s ‘Creative’ mode, and it’ll eventually expand to ‘Balanced’ and ‘Precise’ too, though it’s not clear how these modes will impact image generation (if at all). Plus, Edge browser users will get access to it in the sidebar. Of course, it’s not available for everyone yet, so don’t sweat if you can’t get Bing Chat to generate pictures for you right away.

When you ask Bing to create an image, it will generate four high-res images using DALL-E, though they sport a Bing logo in the bottom corner.

And in an effort to prevent Bing Image Creator from going off the rails like Bing Chat did in the early days, Microsoft preemptively added safeguards:

“We have ensured OpenAI’s safeguards, plus additional protections, have been incorporated into Image Creator. For example, we have put controls in place that aim to limit the generation of harmful or unsafe images. When our system detects that a potentially harmful image could be generated by a prompt, it blocks the prompt and warns the user.”

You can learn more about Bing Image Creator here.

Source: Microsoft Via: TechCrunch

Microsoft unveils Copilot AI-powered assistant in Word, Excel, more

Microsoft unveiled its AI-powered ‘Copilot’ for Microsoft 365 apps at an event on March 16th. Copilot will exist as an assistant with Microsoft’s apps like Word and Excel.

“It works alongside you, embedded in the Microsoft 365 apps you use every day — Word, Excel, PowerPoint, Outlook, Teams and more — to unleash creativity, unlock productivity and uplevel skills,” wrote Microsoft’s corporate vice president of modern work and business applications, Jared Spataro, in a blog post.

Users can summon Copilot to handle a variety of tasks, such as providing information about an upcoming Teams meeting or creating a 10-slide PowerPoint presentation based on a Word document. Per Microsoft’s blog, here are some of the things Copilot can do in:

  • Word – Copilot can help you draft and edit documents
  • PowerPoint – Copilot helps make presentations from a simple prompt
  • Excel – Copilot can analyze trends and make visualizations
  • Outlook – Copilot can help “clear out your inbox in minutes”
  • Teams – Copilot can summarize key discussion points, including who said what and suggest action items
  • And more…

Microsoft stressed that users are “always in control” when using Copilot and can decide what to keep, modify, or discard. In the blog, Spataro was also quick to say that Copilot will get things wrong but promised it will “always put you further ahead.”

Copilot leverages OpenAI’s GPT-4, though Spataro wrote that Microsoft did more than just embed it in Microsoft 365. Copilot combines “the power of LLMs, including GPT-4, with the Microsoft 365 apps and your business data in the Microsoft Graph.”

Along with Copilot, Microsoft announced ‘Business Chat’ will work across all the Microsoft 365 apps and data, leveraging the company’s ‘Graph’ to bring everything into a single chat interface.

These new features and changes sound ambitious, and it remains to be seen how well they work in the real world. Moreover, there remain many legitimate concerns about AI — Microsoft’s rush to integrate AI into products doesn’t help, especially as the company shutters teams dedicated to responsible AI.

That said, Copilot won’t be available immediately. In the blog post, Spataro said the company will share more about pricing and details “in the coming months.”

Images credit: Microsoft

Source: Microsoft Via: The Verge

OpenAI’s new GPT-4 can perceive text and images

OpenAI today introduced GPT-4, a more advanced model that can take in and generate up to 25,000 words of text, roughly eight times more than GPT-3.5, which powers ChatGPT.

According to OpenAI, GPT-4 can “solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem solving abilities.” The current GPT-3.5 powering ChatGPT can only take text prompts as input, whereas GPT-4 can accept images as inputs and generate captions, classifications, and analyses. “While less capable than humans in many real-world scenarios, [GPT-4] exhibits human-level performance on various professional and academic benchmarks.”

In an example on the company’s website, we see an image with eggs, flour and milk alongside text that reads “What can I make with these ingredients?” The language model replied with:

There are many options for what you can make with these ingredients! Some possibilities include:

  • Pancakes or waffles
  • Crepes
  • French toast
  • Omelette or frittata
  • Quiche
  • Custard or pudding
  • Cake or cupcakes
  • Muffins or bread
  • Cookies or biscuits

These are just a few examples, but the possibilities are endless!

Another example of the model being able to take images as input can be seen below:

The new model is also reportedly smarter than GPT-3.5. The model exhibits human-level performance in professional and academic benchmarks. For example, “it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%,” wrote OpenAI.

OpenAI also evaluated GPT-4 on traditional benchmarks designed for machine learning models. It found that GPT-4 considerably outperformed existing large language models.

Further, unlike ChatGPT, which has a classic personality with a fixed verbosity, tone and style, GPT-4 lets developers prescribe the AI’s style and task by describing those directions in the system message. “System messages allow API users to significantly customize their users’ experience within bounds,” wrote OpenAI.
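
As a rough sketch of how that works, a developer sets the style in a system message before the user’s prompt. The request shape below follows OpenAI’s chat API of the time; the persona text and question are made up for illustration.

```python
# Sketch: prescribing the model's style via a system message.
# The persona and user question are illustrative, not from OpenAI's docs.
messages = [
    {"role": "system",
     "content": "You are a terse assistant. Answer in one sentence."},
    {"role": "user",
     "content": "What does GPT-4 add over GPT-3.5?"},
]

# With the `openai` Python package installed and an API key configured,
# the request would then look roughly like:
# response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
```

The system message sits outside the user’s conversation, which is what lets the developer, rather than the end user, fix the tone and task “within bounds.”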

For now, ChatGPT Plus subscribers will get GPT-4 access on chat.openai.com with a usage cap. The company says that it will adjust the exact usage cap depending on demand and system performance in practice, but expects to be severely capacity constrained, with the capacity issues easing off over the coming months.

Read more about the advanced large language model here.

Image credit: OpenAI

Source: OpenAI

OpenAI releases its own AI-written text detector

After a 22-year-old from Toronto made a third-party ChatGPT detector tool to determine whether a piece of text was written by a human or an AI, OpenAI is itself now introducing a new “AI Classifier” for “indicating AI-written text.”

“We’re launching a classifier trained to distinguish between AI-written and human-written text,” reads the tool’s page on OpenAI’s website. The company says that while it is impossible to “reliably” detect all AI-written text, it believes that good classifiers can inform mitigations for false claims that AI-written text was written by a human.

The company makes it very clear that its classifier is not fully reliable. In its tests, the classifier correctly identified 26 percent of AI-written text, while it incorrectly labeled human-written text as AI-written nine percent of the time. “Our classifier’s reliability typically improves as the length of the input text increases,” said OpenAI. Texts with fewer than 1,000 words are more likely to be labelled incorrectly.
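
Those two rates are worth putting together. The quick back-of-the-envelope calculation below uses the article’s 26 percent true-positive and nine percent false-positive rates; the 50/50 split between AI-written and human-written texts is an assumption for illustration, not something OpenAI reported.

```python
# What the reported rates imply on a hypothetical 50/50 corpus.
tpr = 0.26      # AI-written text correctly flagged (from the article)
fpr = 0.09      # human-written text wrongly flagged (from the article)
ai_share = 0.5  # assumed mix of AI-written texts, not from the article

# Overall fraction of texts the classifier flags as AI-written.
flagged = ai_share * tpr + (1 - ai_share) * fpr

# Of the flagged texts, how many really are AI-written (precision).
precision = (ai_share * tpr) / flagged

print(f"share of texts flagged: {flagged:.3f}")
print(f"chance a flagged text is really AI-written: {precision:.3f}")
```

Under this assumed mix, a flag is right only about three times out of four, which is consistent with OpenAI’s warning not to treat the tool as a primary decision maker.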

OpenAI also says that the tool should not be used as a “primary decision making tool,” and instead, should complement other ways of determining whether the piece of text is human-written or not. Further, OpenAI recommends using the classifier only for English text. “It performs significantly worse in other languages and it is unreliable on code.”

The company is making the classifier publicly available to collect feedback from users, while it continues to improve the tool. Try out the classifier here.

Image credit: Shutterstock

Source: OpenAI