Researchers from Stanford University, Northwestern University, Washington University, and Google DeepMind found that artificial intelligence can replicate human behavior with 85% accuracy. A study showed that letting an AI model interview a human subject for two hours was sufficient for it to capture their values, preferences, and behavior.
Published in the open-access repository arXiv in November 2024, the study used a generative pre-trained transformer, GPT-4o, the same model behind OpenAI's ChatGPT. The researchers did not feed the model much information about the subjects in advance. Instead, they let it interview the subjects for two hours and then construct digital twins.
"Two hours can be very powerful," said Joon Sung Park, a PhD student in computer science at Stanford, who led the team of researchers.
How the Study Worked
Researchers recruited 1,000 participants of various age groups, genders, races, regions, education levels, and political views, and paid each of them $100 to take part in interviews with assigned AI agents. Participants completed personality tests, social surveys, and logic games, engaging twice in each category. During the tests, an AI agent guided subjects through their childhood, work experiences, beliefs, and social values in a series of survey questions. After the interview, the AI model created a digital replica, a digital twin embodying the interviewee's values and beliefs.
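The pipeline described above, interviewing a subject and then conditioning a model on the transcript so it can answer new questions in that subject's voice, can be sketched as follows. This is an illustrative sketch only, not the study's implementation: `llm_fn` is a hypothetical stand-in for a real model call (e.g., to GPT-4o), and the question set and prompt wording are invented for the example.

```python
def interview(ask_fn, questions):
    """Collect a transcript by posing each question to the subject.

    ask_fn: callable that returns the subject's answer to one question
    (in the study this would be a two-hour, AI-led interview).
    """
    return [(q, ask_fn(q)) for q in questions]


def build_twin(transcript, llm_fn):
    """Return a 'digital twin': a function that answers new questions
    by conditioning a language model on the full interview transcript.

    llm_fn is a hypothetical model call (prompt -> completion).
    """
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in transcript)

    def twin(question):
        prompt = (
            "Interview transcript:\n"
            f"{context}\n\n"
            "Answer the next question as this person would.\n"
            f"Q: {question}\nA:"
        )
        return llm_fn(prompt)

    return twin


# Toy usage with stubbed subject and model, just to show the flow.
questions = ["Where did you grow up?", "What do you value most?"]
transcript = interview(lambda q: "(subject's answer)", questions)
twin = build_twin(transcript, lambda prompt: "(model's answer)")
reply = twin("Do you enjoy logic games?")
```

The key design point is that the twin carries no pre-collected profile of the subject; everything it "knows" comes from the transcript, matching the article's note that the model was not fed information in advance.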
The AI simulation agents then mimicked their interviewees, undergoing the same exercises with striking results. On average, the digital twins were 85% similar in behavior and preferences to their human counterparts. Scientists could use such twins for studies that might otherwise be too costly, impractical, or unethical to conduct with human subjects.
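One plausible way to arrive at a similarity score like the 85% figure, sketched here for illustration rather than as the study's exact metric, is to measure how often the twin's answers match the human's, normalized by the human's own consistency across the two rounds of each test. This would explain why participants completed each exercise twice: no twin should be expected to match a person more reliably than the person matches themselves. All data below is invented toy data.

```python
def agreement(a, b):
    """Fraction of questions answered identically by two respondents."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)


def normalized_accuracy(human_round1, human_round2, agent_answers):
    """Agent-human agreement divided by the human's own test-retest
    consistency (a hypothetical normalization, for illustration)."""
    raw = agreement(human_round1, agent_answers)
    consistency = agreement(human_round1, human_round2)
    return raw / consistency


# Toy example: 10 survey questions with categorical answers.
h1 = ["a", "b", "a", "c", "b", "a", "a", "c", "b", "a"]  # human, round 1
h2 = ["a", "b", "a", "c", "a", "a", "a", "c", "b", "a"]  # human, round 2
ag = ["a", "b", "a", "c", "b", "c", "c", "c", "b", "a"]  # digital twin

print(round(normalized_accuracy(h1, h2, ag), 2))  # → 0.89
```

Here the twin agrees with the human on 8 of 10 answers, while the human agrees with their own earlier answers on 9 of 10, giving a normalized score of 0.8/0.9 ≈ 0.89.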
"If you can have a bunch of small 'yous' running around and actually making the decisions that you would have made," Park said, "that, I think, is ultimately the future."
However, in the wrong hands, this kind of AI agent could be used to create deepfakes that spread misinformation and disinformation, perpetrate fraud, or scam people. The researchers hope these digital replicas will instead help fight such malicious uses of the technology while providing a better understanding of human social behavior.