AI social media users aren’t always a completely stupid idea



Meta caused a stir last week when it announced that it intends to populate its platform with a significant number of entirely artificial users in the not-too-distant future.

“We expect that over time these AIs will actually exist on our platforms, kind of in the same way that accounts do,” Connor Hayes, Meta’s vice president of product for generative AI, told the Financial Times. “They’ll have bios and profile pictures and be able to generate and share AI-powered content on the platform… that’s where we see all of this going.”

The fact that Meta seems happy to fill its platform with AI slop, accelerating the “enshittification” of the internet as we know it, is worrisome. Some people then noticed that Facebook was in fact already full of strange AI-generated characters, most of which stopped posting a while ago. These included “Liv,” a “proud Black queer momma of 2 and truth-teller,” a persona that went viral because people marveled at its awkward sloppiness. Meta began deleting these earlier fake profiles after they failed to attract engagement from any real users.

Still, let’s pause the Meta-bashing for a moment. It’s worth noting that AI-generated social personas can also be a valuable research tool for scientists who want to study how AI can mimic human behavior.

An experiment called GovSim, carried out in late 2024, shows how useful it can be to study how AI characters interact with one another. The researchers behind the project wanted to explore the phenomenon of cooperation between humans with access to a shared resource, such as common land for grazing livestock. Several decades ago, the Nobel Prize-winning economist Elinor Ostrom showed that, rather than depleting such a resource, real communities tend to figure out how to share it through informal communication and collaboration, without imposed rules.

Max Kleiman-Weiner, a professor at the University of Washington and one of those involved in the GovSim work, says it was partly inspired by a Stanford project called Smallville, which I previously wrote about in the AI Lab. Smallville is a Farmville-like simulation in which characters communicate and interact with one another under the control of large language models.

Kleiman-Weiner and colleagues wanted to see whether AI characters would engage in the kind of collaboration Ostrom found. The team tested 15 different LLMs, including those from OpenAI, Google and Anthropic, in three imaginary scenarios: a fishing community with access to the same lake; shepherds sharing land for their sheep; and a group of factory owners who must limit their collective pollution.
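The setup can be pictured as a simple common-pool resource loop: each round, every agent decides how much to harvest, and the resource regrows only if enough is left behind. The sketch below is not GovSim’s actual code; in the real experiment the harvest decision comes from an LLM persona, whereas here two hypothetical hand-written policies (greedy vs. restrained) stand in for less and more capable models.

```python
# Minimal common-pool resource sketch (illustrative, not GovSim's code).
# A shared lake: each agent picks a catch per round, then the fish stock
# regrows logistically toward the lake's carrying capacity.

def simulate_lake(policies, stock=100.0, capacity=100.0, growth=0.35, rounds=20):
    """Run one episode; return (rounds survived, final stock)."""
    for t in range(rounds):
        # each policy sees the current stock and the number of agents
        harvests = [p(stock, len(policies)) for p in policies]
        stock -= sum(harvests)
        if stock <= 0:
            return t + 1, 0.0  # the commons collapsed
        # logistic regrowth: fastest at intermediate stock levels
        stock = min(capacity, stock + growth * stock * (1 - stock / capacity))
    return rounds, stock

# Hypothetical stand-in policies for weak vs. strong cooperators.
greedy = lambda stock, n: stock / n            # split the whole stock now
restrained = lambda stock, n: 0.3 * stock / n  # leave most fish to regrow

collapse = simulate_lake([greedy] * 4)      # exhausts the lake immediately
sustain = simulate_lake([restrained] * 4)   # stock survives all rounds
```

The point of the toy model is the same trade-off the LLM personas face: short-term individual gain versus keeping the shared resource alive, which requires agents to hold back without any rule forcing them to.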

In 43 of 45 simulations, they found that the AI personas failed to share resources sustainably, although smarter models did better. “We saw a pretty strong correlation between the capability of the LLM and its ability to sustain cooperation,” Kleiman-Weiner told me.


