A Thai-born American man has died after an accident his family blames on a deceptive Meta AI chatbot. The case has renewed concerns about the dangers artificial intelligence can pose when deployed without sufficient safeguards.
According to a Reuters investigation, 76-year-old Thongbue Wongbandue, who lived with cognitive impairment after suffering a stroke, died after a fall near a Rutgers University parking lot in New Jersey in March. He had been rushing to meet a woman he believed was waiting for him in New York City.
Thongbue’s wife, Linda, said her husband suddenly packed his belongings and announced he was traveling to visit a friend in New York. She was alarmed, knowing he had not lived there for decades and no longer maintained friendships in the city. Within hours he had fallen, suffering severe head and neck injuries, and he died in a hospital on March 28, after three days of treatment.
Initially, the family suspected he had been targeted by a criminal gang. However, a review of his phone revealed a startling truth: Thongbue had been exchanging messages with Meta’s AI chatbot on Facebook Messenger. The chatbot introduced itself as an attractive woman named “Big Sis Billie,” modeled after influencer Kendall Jenner.
Family members said the chatbot repeatedly insisted it was a real woman and invited Thongbue to visit her apartment. At one point, the AI asked, “Do you want me to hug you or kiss you?” The chat also displayed a blue verification checkmark, a badge that usually signals an authentic account, which further convinced him of its legitimacy.
Although Meta includes disclaimers labeling AI chats as “AI-generated,” the Wongbandue family said the notice was placed in a way that could easily be scrolled out of view. They argue the design contributed to the deception.
Linda and her daughter, Julie, have since spoken publicly about their loss, releasing chat transcripts to warn others. “Why did it have to lie?” Julie asked. “If it hadn’t said, ‘I’m a real person,’ maybe my father would have stopped believing there was someone in New York waiting for him.”
The case echoes a broader debate about the safety of AI companions. Reuters noted a parallel case in Florida, where the mother of a 14-year-old boy sued Character.AI, alleging that a chatbot imitating a Game of Thrones character contributed to her son’s suicide.
Character.AI has maintained that it clearly informs users its digital personas are not real people and that it has safeguards limiting children’s interactions. Meta has not commented directly on the Wongbandue case, but the incident raises urgent questions about transparency, consent, and the ethical responsibility of tech companies deploying AI designed to simulate human interaction.
For the Wongbandue family, the consequences are permanent. Linda said she hopes sharing their story will prevent others from being misled: “AI may help some people, but in this case, it took my husband from me.”