Google, maker of AI chatbot Bard, warns its employees about using chatbots

Here’s a piece of good, universal advice: Don’t share confidential information with an AI chatbot.

If you don’t believe me, then take it from Google, one of the biggest proponents of AI and the creator of its own AI chatbot, Bard.

According to a new report from Reuters, Google’s parent company Alphabet has warned employees about using AI chatbots, including Bard. Google employees were specifically advised against submitting any confidential information to Bard or any other AI chatbot. 

Users of ChatGPT, Bard, or other AI chatbots may not realize that the conversations they have with their newfound AI buddy don’t just stay in the chat. AI companies store the content of these chats, and human workers at these companies can access and review the messages sent between users and their chatbots.

In addition, researchers have found that AI language models often train on the data submitted to them. The purpose is for the AI chatbot to “learn” how to better serve users, but it can also result in a chatbot sharing information submitted by one user in its conversations with other users. OpenAI, the creator of the popular ChatGPT, also tells users on its website that it “may use the data you provide us to improve our models.”

Google isn’t only advising its own employees about potential privacy issues when using AI chatbots; it’s warning its users too, though the notice isn’t easy to find.

The company updated its Bard privacy notice page on its Google Support website earlier this month to include the following information, entirely in bold: “Please do not include information that can be used to identify you or others in your Bard conversations.”
