Meta Study Shows Public Attitude Towards AI Chatbots

Written by Deep Shikha, Content Writer

Fact-Checked by Kate Richards, Content Manager

Meta Platforms Inc. released a study on April 3 examining how people perceive AI chatbots such as ChatGPT and how they believe these bots should interact with humans. Participants’ views were tallied before and after deliberations with AI experts. Overall, agreement that “AI has a positive impact” increased after the educational sessions and discussions about the technology.

The research started in October 2023, when Meta collaborated with Stanford’s Deliberative Democracy Lab and the Behavioral Insights Team on a Community Forum on Generative AI. The study used Deliberative Polling to gather opinions on AI from 1,545 people across Brazil, Germany, Spain, and the USA.

After a weekend of discussions, participants were also more aware that AI chatbots might reflect biases present in their training data, and they acknowledged concerns about data privacy and security. In short, a weekend of deliberation helped people understand both the benefits and the possible risks.

The Community Forum revealed that while people generally see AI chatbots as a positive step forward, they emphasize the need for developers to prioritize transparency and control.

Most participants agreed that AI chatbots should learn from past interactions to improve future responses, provided users are aware of and consent to sharing this information. Support for this view grew through deliberation. There was also a notable shift towards accepting human-like capabilities in AI chatbots, contingent on user awareness.

According to Bloomberg, the Meta report often mentioned “transparency,” indicating users are fine with chatbots if they’re aware they’re interacting with one. Clearly marking or introducing chatbots is a simple but crucial detail that can significantly influence people’s attitudes. This could be why Meta is focusing on labeling AI-generated content in user feeds.

While most participants already had a positive outlook on AI, one topic drew a negative response both before and after the discussions: making AI as human-like as possible. Participants from all four countries disagreed that AI chatbots should be made as human-like as possible, especially if the user isn’t informed. This suggests that while many see the benefits of AI, there is also a sense that it threatens the authenticity of our own interactions.
