FTC logs $2.1B in social media scam losses as TAKE IT DOWN deadline lands
The FTC says social media scams cost Americans $2.1 billion in 2025, eight times the 2020 total. Seventeen days from now, covered platforms must have TAKE IT DOWN Act notice-and-removal processes in place for AI-generated intimate imagery and deepfakes, with the FTC as enforcer.
The Federal Trade Commission published fresh consumer-fraud numbers last week showing that scams traced to social media platforms drained roughly $2.1 billion from U.S. consumers in 2025 — an eight-fold jump from 2020 and the largest dollar share of any contact channel the agency tracks. The release landed in the same week the agency closed comments on the implementation framework for the TAKE IT DOWN Act, whose covered-platform compliance deadline arrives on May 19, 2026 — seventeen days from publication of this story.
The combination is more than calendar coincidence. The same engagement-graph plumbing that delivers a fake investment pitch in an Instagram reel is the surface area Congress just told the FTC to police for non-consensual intimate imagery (NCII) and harmful deepfakes. AI policy and fraud enforcement, long discussed as separate beats, are about to share an inbox.
What the FTC actually said
According to the agency’s April 28 press release ↗, nearly 30 percent of people who reported a scam loss in 2025 said the scam started on a social media platform. Investment scams accounted for $1.1 billion of the $2.1 billion total. Shopping scams were the most-reported category — over 40 percent of victims said they lost money after ordering something they saw in an in-feed advertisement. The accompanying Data Spotlight ↗ breaks losses out by platform: Facebook leads, with WhatsApp and Instagram next. Meta-owned properties together account for roughly $1.4 billion of the reported total, per TechCrunch’s reporting on the dataset ↗.
The FTC release does not itself attribute the surge to generative AI. It is careful to frame the cause as “easy access to billions of people from anywhere in the world” combined with the same ad-targeting tools legitimate businesses use. That framing matters: the FTC is not (yet) declaring AI-driven scams a distinct enforcement category, even as its parallel Operation AI Comply sweeps continue to bring cases against companies that use the “AI” label to dress up classic deceptive-claim schemes.
What changes on May 19
The TAKE IT DOWN Act ↗, signed into law on May 19, 2025, gave covered platforms one year to stand up a notice-and-removal process for non-consensual intimate visual depictions, including AI-generated ones. That clock runs out this month.
In concrete terms, by May 19, 2026, any “covered platform” — broadly, public-facing websites and apps that host user-generated content — must:
- Provide a clearly identified mechanism for any individual (or their authorized representative) to submit a takedown request for an intimate visual depiction of themselves, including computer-generated depictions intended to appear realistic.
- Remove the depiction, and make reasonable efforts to identify and remove identical copies, within 48 hours of receiving a valid request.
Failure to comply is itself treated as a violation of Section 5 of the FTC Act — i.e., as an unfair or deceptive act or practice.
The criminal provisions of the law — making it a federal crime to knowingly publish such material — took effect immediately on signing in 2025. The platform-side obligations were the part deferred to give product teams time to build. Orrick’s legal summary ↗ is one of the clearer rundowns of the operational requirements.
The FTC’s enforcement reach here is broader than the law’s narrow subject matter suggests. Once a takedown system exists, its handling of adjacent harms — synthetic voice scams, fraudulent product endorsements impersonating real people, deepfaked celebrity investment pitches — becomes evidence the agency can use in unfairness cases that piggyback on the same intake plumbing. The $2.1 billion fraud figure gives that argument a headline number to point at.
The reaction
Industry pushback on the implementation rule has focused on three issues. The Computer & Communications Industry Association and the Center for Democracy & Technology, in separate comment letters during the rulemaking window, argued that the 48-hour removal window combined with broad “reasonable efforts to identify copies” language creates incentives for over-removal of lawful speech, particularly satire and journalism that incorporates synthetic imagery. Smaller hosting providers raised the cost-of-compliance question: a notice-and-removal pipeline that handles spoofing, identity verification of complainants, and an appeals path is not a weekend project.
Civil-society groups on the other side — the National Center for Missing & Exploited Children, the Cyber Civil Rights Initiative, and a coalition of state attorneys general — have generally supported the law and pushed the FTC to interpret “reasonable efforts” expansively to include hash-matching across the platform’s content library. The state AGs’ angle is partly self-interested: many states already have NCII statutes, and a strong federal floor reduces forum-shopping by hosts.
The administration has not signaled any softening. The continued cadence of Operation AI Comply cases — with the FTC bringing additional actions throughout 2025 against AI-branded business-opportunity schemes — suggests the agency intends to use both authorities together.
What an AI product team should do this quarter
If you ship a product that hosts user-generated images or video, or that generates them, three concrete items belong on the May–June roadmap:
- Stand up the intake form, even if minimal: a web form, an authenticated email address, and a documented internal SLA targeting under 48 hours from receipt to removal decision. Keep an auditable log of every request and disposition; the FTC’s preferred enforcement evidence will be timing data. A minimal record schema is sketched after this list.
- Decide your hash-matching posture. If you are a generative model provider whose outputs are being weaponized, the question is whether you fingerprint outputs at generation time. If you are a host, the question is whether you maintain a perceptual-hash index of removed content (a toy version appears in the second sketch after this list). Either posture is defensible; doing nothing is not.
- Re-read your ad-targeting and creator-monetization policies. The $2.1 billion FTC dataset is going to be cited in every state-AG and class-action complaint over a platform’s handling of ad fraud for the next two years. Investment-scam ads, in particular, are the soft spot.
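For the first item, here is a minimal sketch of what an auditable intake record could look like, in Python. The schema, field names, and JSONL log file are assumptions for illustration, not a regulatory template; the point is that every request carries a receipt timestamp, a computed 48-hour deadline, and a recorded disposition, so the timing data the FTC will want can be reconstructed later.

```python
# Illustrative sketch only: field names and the JSONL audit file are
# assumptions, not a prescribed format under the TAKE IT DOWN Act.
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # the statutory removal window

@dataclass
class TakedownRequest:
    request_id: str
    content_url: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    resolved_at: datetime | None = None
    disposition: str | None = None  # e.g. "removed", "rejected_invalid"

    @property
    def deadline(self) -> datetime:
        """Latest moment a removal decision stays inside the 48-hour window."""
        return self.received_at + REMOVAL_WINDOW

    def resolve(self, disposition: str, log_path: str = "takedown_audit.jsonl") -> None:
        """Record the decision and append one audit line to an append-only log."""
        self.resolved_at = datetime.now(timezone.utc)
        self.disposition = disposition
        entry = asdict(self)
        entry["within_sla"] = self.resolved_at <= self.deadline
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry, default=str) + "\n")

# Usage: log receipt immediately, then record the disposition once decided.
req = TakedownRequest(request_id="TDR-0001", content_url="https://example.com/img/123")
req.resolve("removed")
```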
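For the second item, a toy perceptual-hash index, assuming only Pillow; the average-hash function, distance threshold, and linear scan below are deliberate simplifications. Production systems generally use sturdier fingerprints (pHash, PDQ) and a nearest-neighbor index, but the shape of the “reasonable efforts to identify copies” machinery is the same: fingerprint removed content, then screen uploads against the index.

```python
# Toy perceptual-hash index: an illustration of the concept, not a
# production matcher. Requires Pillow (pip install Pillow).
from PIL import Image

HASH_SIZE = 8          # 8x8 grayscale thumbnail -> 64-bit hash
MATCH_THRESHOLD = 10   # max Hamming distance treated as a copy (tunable assumption)

def average_hash(path: str) -> int:
    """Classic aHash: downscale, grayscale, threshold each pixel at the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

class RemovedContentIndex:
    """Fingerprints of content already removed under valid takedown requests."""

    def __init__(self) -> None:
        self._hashes: dict[int, str] = {}  # fingerprint -> originating request id

    def add(self, path: str, request_id: str) -> None:
        self._hashes[average_hash(path)] = request_id

    def match(self, path: str) -> str | None:
        """Return the originating request id if the file is a near-duplicate."""
        h = average_hash(path)
        for known, request_id in self._hashes.items():
            if hamming(h, known) <= MATCH_THRESHOLD:
                return request_id
        return None
```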
For policy and GRC leads, the work is mapping which of your products fall under the TAKE IT DOWN Act’s “covered platform” definition (it is broader than people assume — internal collaboration tools with public-facing components can qualify), and aligning the takedown SLA with whatever you have promised in your existing trust-and-safety policy. Conflicts there will be the easiest source of FTC unfairness theories.
The seed for this story was a ThreatsDay Bulletin from The Hacker News ↗ — a roundup that buried the FTC release between an SMS-blaster arrest and a Roblox account-hacking ring. The juxtaposition is the point. The fraud volume the bulletin describes is the political tailwind behind the regulatory deadline that lands in two and a half weeks.
Sources
- FTC press release: “New FTC Data Show People Have Lost Billions to Social Media Scams” ↗ — primary FTC announcement of the 2025 numbers.
- FTC Data Spotlight: Reported losses to scams on social media eight times higher than in 2020 ↗ — the underlying breakdown by platform and scam type.
- TechCrunch coverage of the FTC release ↗ — independent reporting that calls out the Meta-property concentration.
- S.146 — TAKE IT DOWN Act, 119th Congress (full text) ↗ — primary statutory text of the law whose platform-compliance deadline is May 19, 2026.
- Orrick legal analysis: “TAKE IT DOWN Act Becomes Law” ↗ — outside-counsel summary of the platform obligations and FTC enforcement posture.
- The Hacker News, ThreatsDay Bulletin (Apr 30, 2026) ↗ — the original news roundup that flagged the FTC release alongside the week’s other security stories.