Microsoft Hosted Explicit Videos of This Startup Founder for Years. Here’s How She Got Them Taken Down

Breeze Liu’s online nightmare started with a phone call. In April 2020, a college classmate rang Liu, then 24 years old, to tell her an explicit video of her was on PornHub under the title “Korean teen.” Liu alleges it had been filmed without her permission when she was 17 and uploaded without her consent.

Over time, the video mushroomed and multiplied: It was saved, posted on other porn websites, and, Liu claims, used to create intimate deepfake videos that spread across the web. The impact on Liu was profound. “I honestly had to struggle with suicidal thoughts, and I almost killed myself,” she says.

Wiping around 400 nonconsensual images and videos from the web would require a multiyear, intercontinental effort. During this time, Liu went from working as a venture capitalist to starting her own company to help fight digital abuse. But when dealing with her own content, the entrepreneur faced a wall of silence and continual delays from one of the internet’s biggest gatekeepers: Microsoft.

As images and videos are published on websites like PornHub, they’re often hosted on cloud infrastructure. A series of emails related to Liu’s case reviewed by WIRED, plus interviews with a French victims’ aid organization and other advocates working with her, show how Microsoft, despite repeated pleas, did not remove about 150 explicit images of Liu stored on its Azure cloud services. While other companies took down hundreds of images, Liu and a colleague say Microsoft took action only after the two confronted a senior member of the tech giant’s safety team at a content moderation conference.

The drawn-out process, which prolonged the emotional toll on Liu, provides a detailed illustration of the difficulties victims and survivors of intimate image abuse face in trying to erase the content from the web. Liu’s ordeal also highlights the void some victims fall into when their age in the imagery is disputed or hard to discern, an overlooked problem that may become more pressing as nudify apps spread in high schools.

“It’s almost impossible for ordinary people to navigate the complex system and do damage control,” Liu says. While she has shared parts of her struggle before, many of the details in this story, including the resolution of Microsoft’s prolonged failure to remove the intimate images, have not been previously reported. “We’re facing an extremely broken system, and this is a global issue,” Liu says. “This is a huge problem.”

Courtney Gregoire, Microsoft’s chief digital safety officer, says the company has learned from miscommunications in Liu’s case and doesn’t want anyone to go through the agonizing experience she did. “This content is a priority area where we endeavor to be actioning within 12 hours,” Gregoire says. For Liu, the grueling process took eight months.

After being alerted to the first nonconsensual video of her online in April 2020, Liu says, it took her a whole day to calm down. On May 14 of that year, she contacted the Berkeley Police Department in California, where she lived at the time. The state considers it a misdemeanor to share real or spoofed intimate images of someone without their consent while knowing it would cause them distress. Detectives obtained search warrants for some websites but couldn’t identify the people responsible for uploading the content “based on the limited information retained by the internet sites and the overseas nature of the accounts involved,” department spokesperson Byron White says.
Liu says she had a suspicion about who was responsible for the uploads, but the detectives weren’t sure how to prove it.

Liu contacted the Cyber Civil Rights Initiative, a US-based nonprofit that helps tackle abusive content. While the organization found another webpage with violative content of her, the entrepreneur says it couldn’t aid her with takedown requests because she appeared as though she might be under the age of 18 in the images. The Cyber Civil Rights Initiative declined to comment about Liu but confirmed that it is legally barred from reviewing sexual imagery of minors.

By this point, it had been months since Liu first learned of the images and videos, and she needed a break. The original video hadn’t been quickly removed, and Liu suspected it had already spread far and wide. She felt powerless. She decided to let the case go, citing the stress of the pandemic, her frantic work schedule in venture capital, and the toll on her health. Embarrassed and terrified, she told her story to only one confidant: her cat.

“I always had this gut feeling that there’s more, but I was not mentally stable enough to handle any more brutal truths,” Liu says, adding that she did not feel comfortable searching for the images herself. “You don’t want to see that of yourself.” That changed after Liu left her VC job in 2022 and decided to fully pursue her own startup, Alecto AI, which aims to develop face-recognition tools to help people find and remove nonconsensual images that have been shared on digital platforms.

It took about three years before Liu was ready to revisit efforts to get the explicit content taken down. Toward the end of 2023, she enlisted the help of her Alecto AI cofounder, Andrea Powell, a longtime advocate for victims of trafficking and abuse. That October, they sought out a researcher at a victim helpline funded by the UK government. The researcher’s manual and automated image searches turned up 832 links appearing to show Liu in intimate situations. “I couldn’t even look at the file, because that was just too much,” Liu says. She dialed up her therapist while a friend downloaded the spreadsheet of URLs for her. “She wasn’t even clicking into the content; she was just looking at the name of the URLs and she started crying,” Liu says.

Powell says the links contained “violent Asian-centric” language. But the UK-funded helpline can’t help victims abroad with takedowns, leaving Liu stranded with the spreadsheet. She thought about using StopNCII, a popular tool that uses matching algorithms to find abusive images, but felt it wasn’t a good fit. Because such tools match digital fingerprints of images that have already been reported, she feared StopNCII might not be able to spot potential deepfakes, which are newly generated files.

Liu then contacted the FBI, which preserved some of the links as evidence but, in her view, made no further progress toward arresting the original uploader. The agency declined WIRED’s request for comment.

At one point, Liu turned to the National Center for Missing & Exploited Children, or NCMEC, a nonprofit established by the US Congress to address child sexual abuse imagery, to see if it could help. But the nonprofit could not engage, Liu says. Even though Liu looked young in the content, she could not prove she was under 18 at the time, a prerequisite for NCMEC to pursue takedowns.
Lauren Coffren, NCMEC’s executive director, says the organization relies on partner law enforcement agencies to assess the age of victims. Age-borderline cases are rare, Coffren says, but “that stinks for a survivor” who should qualify for the organization’s help. “It speaks to just how difficult it is for survivors to be able to navigate this.”

Liu felt stuck between groups that seemingly couldn’t pursue takedowns for her. And she was tired of being judged. “What difference would it make if I was 17 or just two days over 18?” she says. “The damage for me is the same. It’s beyond frustrating.”

That’s when Powell, at the Paris Peace Forum, appealed to a French victims’ aid organization, Point de Contact, which assists people in reporting illegal content. “I was sort of threatening that I just fly Breeze to France and make the case she’s French,” Powell says. Ultimately, on November 13, 2023, Point de Contact agreed to step up where other organizations had not. In the following days, emails show, the hotline analyzed the URLs, and by the middle of December it had started to send legal takedown demands to hosting providers. Liu was overcome—progress at last. “I’m literally shaking as I’m typing this,” Liu wrote in an email as the work began.

Etienne Dirani, operations manager at Point de Contact, says it found 395 nonconsensual images in the links Liu provided. The majority of the remaining 437 had already been deleted or made inaccessible; others did not clearly identify Liu or did not depict her intimately. Dirani says “tens” of unique images were published across multiple websites and that Point de Contact’s investigation at the time didn’t find any content “likely” generated by AI.

Some companies and hosting providers moved quickly; by the first week of January 2024, 155 URLs were dead. Microsoft, according to emails from Point de Contact to Liu, requested additional identifying information, such as her full name and social media handles, so the company could verify the content was associated with her. Liu provided these details, including a copy of her passport, but nothing happened.

More than a month later, a few dozen additional pieces of content had been removed. Of the 202 that remained online, 142 were hosted on Microsoft’s Azure services. A Point de Contact investigator emailed Liu, alleging that “Microsoft’s abuse team did not answer our notification emails” and saying the team was trying some of its individual contacts at the company. Around that time, a frustrated Liu mentioned Microsoft’s slow response in an interview with The Street. An unnamed Microsoft spokesperson told the news outlet that the company was investigating and noted that any potential violations of its acceptable use policy are taken seriously. Yet again, no action followed.

“The main issue we had was the lack of response from Microsoft,” Dirani says. He alleges that Microsoft’s abuse team believed it needed more information, but he says the company never communicated what that information was. Even a higher-up contact wouldn’t give a straight answer on what additional details could trigger a takedown. Point de Contact tried to “push” Microsoft more, including opening a new case, according to Dirani. “We were sending reminders of all the URLs that were still online,” he says. “But unfortunately, even the new reports we sent were not responded to.”

As months went by, Liu could do little but wonder how many people were encountering the violative imagery of her each day. She was terrified that her career would be derailed, and she was generally disheartened.
Powell says she was in touch with Microsoft’s director of public policy for digital safety, Liz Thomas, at the time, but was told it was difficult to verify that the content showed Liu.

However, in late July last year, Powell and Liu concocted a plan to speak with Thomas at a San Francisco hotel hosting TrustCon, a conference for people working on online trust and safety issues. They hadn’t registered for the event, but Powell located Thomas at the hotel’s public bar with a group of colleagues. Once the colleagues left, Powell approached Thomas and pressed for action, pointing toward Liu as she made her case. Seeing Liu in the same room made an impact.

Within days of the event, a Microsoft staffer emailed Powell that the case had been escalated, and the 142 URLs with Liu’s image started disappearing. “I don’t ideally want to be chasing trust and safety people up and down the halls at TrustCon to deal with a case,” Powell says. “But it was what had to be done.”

At the start of last August, Point de Contact told WIRED that only two images on four different Microsoft servers remained. “We deeply regret that this issue took almost 10 months of communication between the victim, Microsoft and us as an NGO to be resolved,” the NGO said in an email at the time.

Microsoft digital safety chief Gregoire says Liu’s situation has spurred her team to try to improve its reporting processes and relationships with victim aid groups. Point de Contact initially flagged links over which the company didn’t have control, according to Gregoire. She declined to elaborate on the circumstances. Dirani says this explanation was never communicated to him, and it remains unclear why the links were not “actionable.”

Only after Powell cornered Thomas over Liu’s case did Microsoft obtain the URLs upon which it could act. “We’re thankful, to be perfectly honest, to the spontaneous connection at TrustCon,” Gregoire says. But such an encounter shouldn’t be needed again: Point de Contact now has a more direct way to stay in touch, she says.

Other victim aid groups say their relationships with tech giants remain challenging. Last year, a WIRED investigation revealed that executives at Google rejected numerous ideas raised by staff and outside advocates aimed at proactively countering access to problematic imagery in search results. Some survivors have found that the fastest way to get content removed is by filing copyright claims, a tactic those working in the online safety industry say is inadequate.

The lack of consistency in policies and processes among tech companies contributes to delays in securing takedowns, according to Emma Pickering, the head of technology-facilitated abuse at Refuge, the UK’s largest domestic abuse organization. “They all just respond however they choose to—and the response usually is incredibly poor,” she says. (Google introduced new policies in July 2024 to accelerate removals.)

Pickering claims Microsoft, in particular, has been difficult. “I’ve recently been told if I want to engage with them, we need to provide evidence that we use their platform and we promote them,” she says, adding that Refuge is trying to engage with as many tech platforms as possible.

Microsoft’s Gregoire says she will look into these concerns and is open to dialogue. The company hopes to stem the need for takedowns, in part, by scaring off perpetrators.
This past December, Microsoft sued a group of 10 unknown individuals who allegedly circumvented safeguards on Azure and used an AI tool to generate offensive images, including some Gregoire described as sexually harmful. “We don’t want our services to be abused to cause harm,” she says.

For Liu, the challenges haven’t ended. Videos and images depicting her naked remain available on at least one self-styled “free porn” website, according to links reviewed by WIRED. She has also had to pour her savings into developing Alecto AI because investor support has been lackluster. Some investors allegedly told her not to use her own experience in her pitch. Liu says that when she pitched one pair of investors, a man and a woman who were considering backing her, they burst into laughter at the idea of building a business around using AI to detect online image abuse. Even telling them that she had almost killed herself after being victimized did little to sway them, Liu says.

In December 2024, more than four and a half years after her nightmare began, Liu found a glimmer of hope. A proposal she has advocated for in the US Congress, which would require websites to remove unwanted explicit images within 48 hours, nearly ended up on then-President Joe Biden’s desk. It was ultimately shelved, but real progress had never felt so close. Liu and a bipartisan group of over 20 lawmakers haven’t given up; in January, they reintroduced the proposal, which threatens potential penalties of up to $50,000 per violation. Despite objections from rights groups worried about over-censorship, the bill passed the Senate last week. Even Microsoft has gotten behind it.

If you or someone you know needs help, call 1-800-273-8255 for free, 24-hour support from the National Suicide Prevention Lifeline. You can also text HOME to 741-741 for the Crisis Text Line. Outside the US, visit the International Association for Suicide Prevention for crisis centers around the world.
Source: https://www.wired.com/story/deepfake-survivor-breeze-liu-microsoft/