
New York Passes RAISE Act to Curb AI-Driven Disasters

PLUS: Meta Sues Maker of AI “Nudify” App

New York Passes Bill to Curb AI-Driven Disasters

New York has passed the Responsible AI Safety and Education (RAISE) Act, landmark legislation targeting frontier AI models (those developed using more than $100 million in compute). The law mandates transparency, safety protocols, and incident reporting to prevent potentially catastrophic outcomes, defined as incidents causing 100 or more injuries or over $1 billion in damages, and grants authorities the power to impose penalties of up to $30 million for non-compliance.

Key Points:

  1. Safety & Transparency Requirements - Covered AI developers (such as OpenAI, Google, and Anthropic) must publish safety and security plans, submit to third-party audits, and disclose serious incidents in a timely manner, promoting accountability.

  2. High-Stakes Compliance - The act applies only to models trained with more than $100 million in compute. Companies that fail to comply could face civil penalties of up to $30 million, or a percentage of the compute investment.

  3. Debate Between Safety and Innovation - While AI safety advocates—including Nobel laureate Geoffrey Hinton and Yoshua Bengio—support the law, critics argue it may stifle innovation or drive companies out of New York. Startups and VCs warn compliance burdens could limit experimentation.

Conclusion
As the first state-level legislation of its kind, the RAISE Act signals a major shift toward legally enforced AI governance. By balancing high-risk oversight with a focus on transparency, New York is blazing a regulatory trail—though whether it will inspire or deter broader innovation depends on how Governor Hochul acts and how tech firms respond.

Meta Files Lawsuit Against Maker of "Nudify" App Technology

Meta has taken legal action against Joy Timeline HK Limited, the Hong Kong-based developer behind CrushAI, an AI-powered app that generates nude images by digitally removing clothing from photos. The lawsuit, filed in Hong Kong, targets the app’s aggressive advertising. In early 2025 alone, CrushAI allegedly ran over 8,000 ads on Facebook and Instagram—even after Meta repeatedly removed them.

Key Points:

  1. Policy Violations & Ad-Evasion - Joy Timeline used multiple ad accounts, frequently changed domain names, and even created pages to circumvent Meta’s ad review systems—all to promote its AI “nudify” services.

  2. Widespread Reach & Harm - Ads targeted men in the US, UK, Canada, Australia, and Germany with promises to “erase clothes.” These services are often linked to nonconsensual imagery, blackmail, sextortion—and expose minors to risk.

  3. New Countermeasures - Beyond the lawsuit, Meta is rolling out enhanced detection technology to flag not just explicit imagery but also disguised “nudify” ad networks. It is also sharing more than 3,800 URLs with partners via the Tech Coalition’s Lantern program.

Conclusion

Meta’s lawsuit and new enforcement tactics signal an escalation in the fight against nonconsensual AI-generated intimate content. As “nudify” apps morph and evade detection, strong collaboration across platforms and potential legislative backup will be crucial to protect users—especially minors—from digital exploitation.

🚀 Other AI updates to look out for

  • 🎓 Anthropic Launches Free AI Fluency Course
    Anthropic released a 12-lesson “AI Fluency” course that goes beyond prompting basics: you’ll build a real project and receive a certificate upon completion. Ideal for learners at any level.

  • 🤖 v0’s Captcha Contest Is Pure Chaos
    AI startup v0 is running a competition to build the most ridiculous captcha. Submissions include gems like “Are you human? Or are you dancer?” The winner gets $1,000 in credits.

Thank you for reading.