- Two men were found dead at separate motels after drinking beverages a woman allegedly laced with prescription drugs.
- Seoul police say her repeated ChatGPT questions about mixing tranquilizers with alcohol show she knew the combination could be lethal.
- Investigators say her chatbot search history proves intent, making it central to their upgraded murder charges.
South Korean police have upgraded charges against a 21-year-old woman to murder after revealing a series of disturbing queries she apparently entered into ChatGPT before two men were found dead in separate motel rooms.
Investigators in Seoul say the suspect, identified only as Kim, repeatedly asked the AI chatbot, in varying phrasings, what happens when sleeping pills are mixed with alcohol, at what point the combination becomes dangerous, and when it becomes fatal. Police now claim those queries show she knew the risks long before she served the drugged drinks that left two men dead and another unconscious.
Authorities had originally arrested Kim in February on the lesser charge of inflicting bodily injury resulting in death, a charge that typically applies when someone causes a fatal injury without intent to kill. That changed when digital forensics teams scrutinized her phone. The combination of her previous statements and the precise wording of her ChatGPT questions convinced investigators that she was not merely reckless or ignorant. It formed the backbone of a revised case that now alleges deliberate, premeditated poisoning.
According to police accounts, the first suspected killing happened on January 28, when Kim checked in to a motel with a man in his 20s and left two hours later. Staff discovered his body the next day. On February 9, an almost identical sequence played out at another motel with another man in his 20s. In both cases, police say the victims consumed alcoholic beverages prepared by Kim into which investigators believe she had dissolved prescription tranquilizers.
Detectives uncovered an earlier, non-fatal attempt involving Kim’s then-partner, who later recovered. After he regained consciousness, investigators say, Kim began preparing stronger concoctions with significantly higher drug doses. ChatGPT’s role became central to the case once her phone records were decoded. The searches investigators highlighted were not broad or vague. According to the authorities, they were specific, repetitive and fixated on mortality.
Police say that means she knew what could happen, and that it changes the story from an accidental overdose to a planned, researched poisoning. Kim reportedly told investigators she mixed the sedatives into drinks but claimed she did not expect the men to die. Police counter that her digital behavior contradicts that account. They have also suggested that her actions after the two motel deaths further undermine her claim. According to officials, she removed only the empty bottles used in the mixtures before leaving the motel rooms, taking no steps to call for help or alert authorities. Detectives interpret this as an attempt at a cover-up rather than panic or confusion.
One of the most striking elements of the case, beyond the violence itself, is the way generative AI fits into the timeline of the investigation. For years, police have relied on browser histories, text logs and social media messages to determine intent. The presence of chatbot interactions adds a new category of evidence. Unlike a traditional search engine, ChatGPT can provide personal guidance in the form of a conversation. When someone asks a question about injury, the phrasing and follow-ups can reveal not only curiosity but persistence.
For ordinary people who use AI casually, the case serves as a reminder that digital footprints can take on a life of their own. As more people turn to chatbots for everything from homework help to medical questions, law enforcement agencies around the world are beginning to examine how to handle these conversations during investigations. Some countries already treat log files from AI services no differently than browser data. Others still weigh concerns about privacy and legal boundaries.
Although the events themselves are tragic, they highlight a new reality: AI interactions can now form part of the evidence in serious criminal cases. Here, police believe the ChatGPT inquiries help paint a clear picture of intent. The courts will ultimately determine how far those questions go toward proving guilt. For the public, the outcome could shape how people think about the privacy, retention and potential consequences of their conversations with AI.