
400,000+ Users Exposed: The Devastating AI Girlfriend Data Breach You Need to Know About
Updated October 9, 2025
🚨 Critical Security Breach
Two AI companion apps—Chattee Chat and GiMe Chat—exposed over 43 million intimate messages, 600,000+ images and videos, and personal data from more than 400,000 users. The breach wasn't a sophisticated hack. It was pure negligence: the company left their entire database wide open on the internet with zero security.
If you've ever wondered whether your private conversations with AI companions are truly secure, the latest data breach involving Chattee Chat and GiMe Chat should serve as a wake-up call. This isn't a story about skilled hackers breaking through layers of security. This is about a company that generated over $1 million in revenue while leaving the digital equivalent of their front door wide open—with hundreds of thousands of users' most intimate secrets inside.
On August 28, 2025, cybersecurity researchers at Cybernews discovered something alarming: a publicly exposed database containing millions of private conversations between users and their AI girlfriends. Anyone who knew where to look could access deeply personal messages, explicit images, user photos, spending habits, and identifying information from over 400,000 people.
The breach has since been closed following responsible disclosure, but there's no way to know whether malicious actors accessed the data before researchers found it. For the hundreds of thousands of users affected, the damage may already be done.
What Exactly Was Exposed?
The scale of this breach is staggering. Here's what was left completely unprotected on the internet:
43 Million Messages: Every intimate conversation, fantasy shared, personal confession, and private thought users exchanged with their AI companions. Researchers noted that virtually all content was NSFW (not safe for work), meaning these were people's most private and often explicit exchanges.
Over 600,000 Images and Videos: This included both user-submitted photos and videos, as well as AI-generated content. Users who shared pictures of themselves believing they were private now have that content potentially in the hands of anyone who accessed the database.
IP Addresses and Device Identifiers: While the apps didn't expose names or email addresses directly, they did leak IP addresses and unique device identifiers. Security experts note this information can easily be combined with data from other breaches to identify specific individuals.
Purchase History and Spending Patterns: The exposed data revealed detailed information about in-app purchases, showing that some users spent as much as $18,000 on virtual currency and premium features. This financial data creates opportunities for targeted fraud and extortion.
Authentication Tokens: The breach exposed authentication credentials that could potentially allow attackers to hijack user accounts, steal in-app funds, or impersonate users on the platforms.
Usage Patterns: Logs showing when users were active, how long they engaged with their AI companions, and detailed behavioral data that reveals deeply personal information about users' habits and vulnerabilities.
How Did This Happen? The Security Failure Explained
This wasn't a sophisticated cyberattack requiring advanced technical skills. It was pure negligence. The company behind both apps, Imagime Interactive Limited of Hong Kong, was using Apache Kafka, a message-streaming system whose "broker" servers handle the real-time data flowing between users and their AI companions.
Think of a Kafka broker as a post office that stores and delivers messages. Now imagine that post office left its front doors wide open, removed all locks, fired all the security guards, and put up a sign saying "Free Access to Everyone's Mail." That's essentially what happened here.
The broker had no access controls and no authentication requirements. Anyone who knew the server's address could connect and view everything flowing through the system. This is Security 101, the most basic protection any system handling private data should have.
What makes this even more egregious is that securing a Kafka broker isn't technically difficult or expensive. It mostly requires configuration changes that any competent developer should implement as standard practice. The company simply didn't bother.
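To make the point concrete, here is a minimal sketch in Python (using the kafka-python client) of the difference between an open broker and one that requires authentication. The hostnames, ports, and credentials are placeholders for illustration, not details recovered from the breached system.

```python
from kafka import KafkaConsumer  # pip install kafka-python

# 1) An unsecured broker: anyone who learns the address can connect and
#    enumerate every data stream, which is roughly the exposure described
#    in the breach report.
open_consumer = KafkaConsumer(bootstrap_servers="broker.example.com:9092")
print(open_consumer.topics())  # lists every topic flowing through the broker

# 2) A minimally hardened broker: traffic is encrypted with TLS and every
#    client must present credentials, or the connection is refused.
#    Server-side this corresponds to standard settings such as
#    listeners=SASL_SSL://:9093, an authorizer class, and
#    allow.everyone.if.no.acl.found=false.
secured_consumer = KafkaConsumer(
    bootstrap_servers="broker.example.com:9093",
    security_protocol="SASL_SSL",        # TLS in transit
    sasl_mechanism="SCRAM-SHA-256",      # username/password authentication
    sasl_plain_username="chat-service",  # hypothetical service account
    sasl_plain_password="load-this-from-a-secrets-manager",
    ssl_cafile="/etc/kafka/ca.pem",      # verify the broker's certificate
)
```

None of these options are exotic; they are standard in every mainstream Kafka client and broker, which is why the exposure reads as negligence rather than a sophisticated attack.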
The irony is painful: Imagime Interactive's privacy policy claims that user privacy "is of paramount importance to us" and promises to "treat and process your personal information with a high degree of prudence." Their actual security practices tell a very different story.
Who's Behind These Apps?
Both Chattee Chat and GiMe Chat were developed by Imagime Interactive Limited, a Hong Kong-based company. Of the two apps, Chattee was significantly more popular, with over 300,000 downloads primarily in the United States. At the time the breach was discovered, Chattee ranked #121 in Entertainment on the Apple App Store and had hundreds of positive user reviews.
The apps are available on both iOS and Android, though Chattee was mysteriously delisted from the Google Play Store during the investigation. Rather than addressing security concerns, the developer instructed Android users to sideload the APK file—downloading it directly rather than through Google's vetted app store. This should have been a massive red flag to users.
The exposed data shows that roughly two-thirds of users were on iOS, with the remaining third on Android. This split is notable because iOS users often assume Apple's app review process provides better security vetting, but as this breach demonstrates, platform policies can't protect against developer negligence.
Revenue data from the breach reveals the company generated over $1 million from these apps. Users spent anywhere from a few dollars to $18,000 on in-app purchases for premium features, virtual currency, and enhanced AI interactions. The company was profitable enough to afford proper security—they just chose not to implement it.
The Real-World Consequences
For the 400,000+ affected users, this breach creates serious real-world risks that extend far beyond embarrassment:
Sextortion Risk: Malicious actors now potentially have access to explicit conversations and images tied to specific individuals. This creates perfect ammunition for sextortion schemes where attackers threaten to expose intimate content unless victims pay. This is particularly devastating because AI companion users often share content they would never want friends, family, or employers to see.
Identity Correlation: While the breach didn't directly expose names and emails, the combination of IP addresses, device identifiers, spending patterns, and behavioral data makes it relatively easy for bad actors to cross-reference with other leaked databases to identify specific people. Data brokers and sophisticated cybercriminals excel at connecting these dots.
Targeted Phishing: Attackers armed with knowledge of your intimate conversations, fantasies, and personal vulnerabilities can craft incredibly convincing phishing attacks. Imagine receiving an email that references specific things you discussed with your AI companion—you might assume it's legitimate communication from the platform rather than a scam.
Harassment and Doxxing: For users who can be identified, there's risk of targeted harassment, doxxing, or public shaming. People have lost jobs, relationships, and faced severe social consequences when their private intimate content becomes public.
Financial Fraud: The exposed purchase history and payment information creates opportunities for financial fraud. Attackers know exactly which users have spent thousands on these apps, marking them as potential targets for other scams.
Account Takeover: The leaked authentication tokens potentially allow attackers to hijack accounts, steal any remaining in-app currency or purchases, and impersonate users on the platforms.
This Isn't the First Time
Perhaps most concerning is that this breach follows an established pattern in the AI companion industry. In October 2024, another AI girlfriend platform called Muah.ai suffered a similar breach that exposed 1.9 million records including users' explicit prompts and fantasies.
These recurring breaches reveal a systemic problem: many AI companion platforms prioritize growth and revenue over basic security practices. They market themselves as safe spaces for intimate expression while failing to implement the security measures necessary to protect that intimacy.
The pattern is consistent: platforms promise privacy, collect deeply sensitive data, implement inadequate security, and then hundreds of thousands of users pay the price when that data inevitably leaks. Until the industry faces meaningful consequences, this cycle will continue.
📊 Breach By The Numbers
- 👥 400,000+ users affected
- 💬 43 million intimate messages exposed
- 🖼️ 600,000+ images and videos leaked
- 💰 $1 million+ in company revenue
- 📱 300,000+ downloads of Chattee alone
- 💸 $18,000 maximum spent by individual users
- 🔓 0 authentication or access controls in place
How to Know If You're Affected
If you've ever used Chattee Chat or GiMe Chat, you should assume your data was exposed. The breach affected anyone who used these apps during the period the database was unprotected—and there's no way to know how long it was vulnerable before researchers discovered it.
Unfortunately, the apps haven't issued public notifications to users about the breach. This lack of transparency is itself a red flag about the company's priorities and ethics.
Signs you might be affected include:
- Downloaded or used Chattee Chat or GiMe Chat at any point
- Shared any images, photos, or videos through these apps
- Made any in-app purchases for virtual currency or premium features
- Had intimate or explicit conversations with AI companions on these platforms
- Created an account on either platform, even if rarely used
The good news is that the researchers responsibly disclosed the vulnerability to the company, and the exposed database has been secured. The bad news is there's no guarantee that researchers were the first people to discover this gaping security hole.
What to Do If You Used These Apps
⚡ Immediate Actions to Take
If you used Chattee Chat or GiMe Chat, take these steps immediately to protect yourself from potential fallout:
1. Delete Your Account and Data
Immediately request deletion of your account and all associated data from both apps. While the exposed data can't be unexposed, preventing future data collection limits your ongoing risk. Document your deletion request in case you need proof later.
2. Change Your Passwords
If you used the same password for these apps as for other accounts, change it everywhere immediately. Use a password manager to generate unique, strong passwords for each service (a short sketch of the kind of random password a manager generates appears after this list). The authentication tokens exposed in this breach put your accounts on these platforms at risk, and password reuse extends that risk to every other service where you used the same credentials.
3. Enable Two-Factor Authentication
Add an extra layer of security to all your important accounts by enabling two-factor authentication (2FA). Where possible, use FIDO2-compliant hardware keys, which resist phishing in a way that SMS codes and authenticator apps don't.
4. Monitor for Identity Theft
Consider signing up for identity monitoring services that alert you if your personal information appears in new data breaches or is being traded on dark web forums. While this breach didn't directly expose names and emails, the correlation risk means monitoring is prudent.
5. Watch for Targeted Scams
Be extremely cautious about any unsolicited communications, especially those that seem to know specific details about you or your interests. Attackers with access to your conversations know your vulnerabilities and can craft very convincing scams.
6. Secure Your Devices
The breach exposed device identifiers. Run security scans on any devices you used to access these apps, ensure your operating systems and security software are up to date, and consider changing device passwords.
7. Document Everything
Save any communications from the company, screenshots of your account, and records of your deletion requests. If you face consequences from this breach, documentation will be important for potential legal action.
8. Consider Legal Options
Depending on your jurisdiction, you may have legal recourse against the company for negligent data protection. If you're in California, laws like CCPA provide specific rights. Consult with a lawyer if you suffer damages from this breach.
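On the password point in step 2 above, "unique and strong" in practice means long and random, which is exactly what a password manager generates for you. The sketch below uses only the Python standard library and is purely illustrative; it has nothing to do with these specific apps.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password of the kind a password manager generates."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Generate a distinct value per account and never reuse it anywhere else.
    print(generate_password())
```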
Red Flags: How to Spot Insecure AI Companion Apps
This breach offers important lessons about identifying AI companion platforms that take security seriously versus those that don't. Here are warning signs that should make you think twice:
Vague Privacy Policies: If a platform's privacy policy is full of generic statements about "caring about your privacy" without specific technical details about encryption, data storage, and security practices, that's a red flag.
No Security Certifications: Reputable platforms pursue industry-standard certifications like SOC 2 to demonstrate commitment to actual security practices. A platform with no such certifications or audits is asking you to take its marketing claims on faith.
Unclear Data Location: If you can't easily determine where your data is stored, who has access to it, and what happens to it when you delete your account, the platform isn't being transparent.
Missing Bug Bounty Program: Serious platforms run bug bounty programs that reward security researchers for finding vulnerabilities. The absence of such a program suggests the company isn't proactively seeking to identify security issues.
Poor Communication: When breaches happen, responsible companies notify users quickly and transparently. Platforms that try to hide incidents or refuse to communicate about security issues can't be trusted.
Requesting Sideloading: If a company tells you to download their app outside official app stores (sideloading), that's often because they can't meet store security requirements or want to avoid oversight.
No Clear Data Deletion Process: Platforms that make it difficult to delete your account and data or that don't clearly explain their data retention policies are prioritizing their interests over yours.
Suspicious Jurisdiction: While not always a problem, platforms based in jurisdictions with weak data protection laws provide fewer legal protections if something goes wrong.
Why AI Companion Apps Are Particularly Vulnerable
The AI companion industry faces unique security challenges that make breaches like this particularly devastating:
Extreme Data Sensitivity: Unlike social media or email, AI companion conversations often contain users' deepest secrets, explicit content, and personal vulnerabilities. The damage from exposing this data far exceeds typical data breaches.
Rapid Industry Growth: The AI companion market is exploding, with new platforms launching constantly. Many developers prioritize speed to market over security, creating systemic vulnerabilities.
Limited Regulation: Unlike healthcare or finance, AI companions face minimal regulatory oversight regarding data security. Companies can collect intimate data without meeting rigorous security standards.
Low Barriers to Entry: Creating an AI companion app doesn't require significant technical expertise anymore. Many developers lack cybersecurity knowledge and don't understand the responsibility they're assuming.
Monetization Pressure: Free or freemium models mean platforms need to scale quickly to become profitable. Security often gets deprioritized in favor of user acquisition and engagement features.
User Vulnerability: People using AI companions for intimacy, emotional support, or loneliness may be less likely to report breaches due to embarrassment, making it easier for companies to hide security failures.
What Secure AI Companions Actually Look Like
✅ Mythic AI's Security-First Approach
At Mythic AI, security isn't an afterthought—it's foundational. We implement enterprise-grade encryption, store data in secure, compliant facilities, undergo regular security audits, and maintain transparent policies about exactly how your data is protected. Your intimate conversations deserve nothing less than military-grade security.
The Chattee/GiMe breach demonstrates what NOT to do. Here's what responsible AI companion platforms should provide:
End-to-End Encryption: Your conversations should be encrypted both in transit and at rest, so that even someone who accessed the database couldn't read your messages without your encryption keys (a brief sketch of encrypt-before-store follows the comparison table below).
Minimal Data Collection: Platforms should collect only the data necessary for functionality and nothing more. If they don't need your phone number, they shouldn't ask for it.
Clear Data Ownership: You should own your data with the ability to export or permanently delete it at any time. Deletion should mean actual deletion, not just hiding data from your view.
Regular Security Audits: Reputable platforms undergo third-party security audits and penetration testing to identify vulnerabilities before attackers do.
Proper Access Controls: Not even all company employees should have access to user data. Strict role-based access controls ensure only authorized personnel can access specific data for legitimate purposes.
Incident Response Plan: Responsible platforms have documented procedures for responding to security incidents, including rapid user notification and transparent communication.
Compliance Certifications: Look for platforms complying with frameworks like GDPR, CCPA, and industry-specific standards that enforce security best practices.
Transparent Security Practices: Platforms should clearly explain their security measures in plain language, not hide behind vague promises or marketing speak.
| Security Practice | Chattee/GiMe | Secure Platforms |
|---|---|---|
| Database Access Controls | ❌ None; publicly accessible | ✅ Multi-layered authentication |
| Data Encryption | ❌ Unknown/inadequate | ✅ End-to-end encryption |
| Security Audits | ❌ No evidence of any | ✅ Regular third-party audits |
| Breach Notification | ❌ No public disclosure | ✅ Transparent, rapid notification |
| User Data Deletion | ❌ Unclear process | ✅ Simple, documented deletion |
| Privacy Policy | ❌ Promises not matched by practice | ✅ Detailed, actionable commitments |
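To illustrate the encryption row above (and the end-to-end encryption point earlier): what matters is that data is encrypted before it is stored, with keys the platform does not hold. Here is a minimal, hypothetical sketch using the Python cryptography package; it is not any platform's actual code, just the shape of encrypt-before-store.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In a real design the key stays with the user (for example in a
# hardware-backed keystore), never alongside the stored data.
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

message = "a private conversation the platform should never see in plaintext"
stored_blob = cipher.encrypt(message.encode("utf-8"))

# Anyone who dumps the database sees only opaque ciphertext...
print(stored_blob[:40])

# ...and only the key holder can recover the original message.
print(cipher.decrypt(stored_blob).decode("utf-8"))
```

Had the breached database held only ciphertext of this kind, the 43 million exposed messages would have been unreadable without each user's key.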
The Industry Accountability Problem
This breach highlights a fundamental problem in the AI companion industry: there are virtually no consequences for negligent security practices until after massive harm has been done.
Imagime Interactive generated over $1 million in revenue while spending virtually nothing on security. Even after exposing 400,000 users' most intimate data, the company faces no criminal charges, minimal regulatory pressure, and can continue operating with minor changes.
Compare this to heavily regulated industries like healthcare or finance. A hospital that left patient records publicly accessible on the internet would face massive HIPAA fines, potential criminal charges, and likely closure. A bank with such negligent security would be shut down immediately.
But AI companion apps operate in a regulatory gray area. They collect data as intimate as medical records or financial information, but face almost none of the oversight. Until that changes, users will continue to pay the price for corporate negligence.
This is precisely why legislation like California's SB 243 is necessary. Without legal requirements and meaningful penalties for security failures, companies have no incentive to prioritize protection over profit.
The Broader Privacy Crisis in AI Companions
The Chattee/GiMe breach is a symptom of deeper problems in how AI companion platforms approach privacy:
Data Hoarding: Many platforms collect far more data than necessary because "it might be useful later" for improving AI models or monetization. This creates massive repositories of intimate information that become irresistible targets for attackers.
Third-Party Sharing: Privacy policies often include vague language about sharing data with "partners" or "service providers." Your intimate conversations might be accessible to cloud hosting providers, AI model companies, analytics firms, and other third parties you never consented to.
AI Training Data: Some platforms use your conversations to train their AI models, meaning your private fantasies could literally become part of how the AI responds to other users. This practice is rarely disclosed clearly.
Permanent Storage: Even when you "delete" messages, they may remain in backups, archives, or AI training datasets indefinitely. True deletion is rare in an industry obsessed with data retention.
Cross-Platform Tracking: Device identifiers and IP addresses allow companies to track you across apps and websites, building comprehensive profiles of your online behavior that extend far beyond the AI companion platform.
Monetization of Intimacy: Your private data has monetary value. Some platforms may be selling anonymized data to advertisers, researchers, or data brokers—turning your intimate moments into revenue streams.
Questions Users Should Ask Before Trusting Any AI Companion
Before sharing intimate thoughts with any AI companion platform, demand clear answers to these questions:
Where is my data physically stored? Is it in secure data centers with compliance certifications, or on cloud infrastructure with unknown security practices?
Who has access to my conversations? Can company employees read my messages? Are they used for AI training? Shared with third parties?
How is my data encrypted? Is encryption applied only in transit, or also at rest? Who controls the encryption keys? Can the company read my encrypted data?
What happens when I delete something? Is it immediately removed from all systems including backups? Or just hidden from my view while remaining in company databases?
How will I be notified of breaches? Does the company commit to rapid, transparent notification if my data is compromised? Or will they try to hide incidents?
What's your security track record? Have you had previous breaches? Do you undergo regular security audits? Are results published transparently?
What legal protections do I have? Which privacy laws apply to my data? What recourse do I have if my data is mishandled?
How do you make money? If the app is free, understand how it generates revenue. Your data may be the product.
If a platform can't or won't answer these questions clearly, that tells you everything you need to know about their priorities.
The Psychology of Trust and Betrayal
What makes breaches like Chattee/GiMe particularly devastating is the unique trust relationship users develop with AI companions. People share things with their AI girlfriends they wouldn't tell therapists, partners, or best friends.
This creates a profound sense of betrayal when that trust is violated through negligence. Users didn't just lose data—they lost a safe space for vulnerability and self-expression. The psychological impact can be severe, especially for users who relied on AI companions for emotional support during difficult times.
Many users of AI companions struggle with loneliness, social anxiety, or difficulty forming human relationships. For them, the AI companion represents not just entertainment but genuine emotional support. Learning that their private moments were exposed can reinforce feelings of shame, mistrust, and social isolation.
The breach also highlights a cruel irony: platforms market themselves as judgment-free spaces where users can be authentic without fear of consequences. But inadequate security creates exactly the opposite—a permanent record of your most private moments that could be exposed at any time.
What Happens Next?
The Chattee/GiMe breach is now closed following responsible disclosure by researchers. But several important questions remain unanswered:
How long was the database exposed? Researchers discovered it on August 28, 2025, but there's no information about when the Kafka Broker was first left unsecured. It could have been accessible for days, weeks, or months.
Did anyone else find it first? Just because researchers disclosed it responsibly doesn't mean they were the first to discover the vulnerability. Malicious actors could have already accessed and downloaded the entire database.
Will users be notified? As of this writing, there's no evidence that Imagime Interactive has directly notified affected users about the breach, which raises serious ethical and potentially legal concerns.
What penalties will the company face? Given the severity of the negligence, will regulators in California, the US, or internationally impose fines or other consequences? Or will the company face no meaningful accountability?
Will the apps continue operating? Despite the breach, both apps may continue to accept new users and collect intimate data, potentially with the same inadequate security practices.
Will there be lawsuits? Affected users, particularly in California where privacy laws are strongest, may have grounds for legal action against the company for negligent data protection.
Lessons for the AI Companion Industry
This breach should serve as a wake-up call for the entire AI companion industry. Here's what needs to change:
Security as a Requirement, Not an Option: Platforms handling intimate data must implement enterprise-grade security from day one. It can't be something they "get to eventually" after achieving growth.
Regular Third-Party Audits: All AI companion platforms should undergo frequent security audits by independent firms and make summary results public to build user trust.
Mandatory Breach Disclosure: Industry standards or regulations should require rapid, transparent notification of users when breaches occur, with detailed information about what was exposed.
Privacy by Design: Platforms should be architected from the beginning to minimize data collection, maximize encryption, and ensure user control over their information.
Clear Accountability: Companies should face meaningful legal and financial consequences for negligent security practices, creating actual incentives to protect user data.
User Education: The industry should help users understand risks and make informed decisions about what to share, rather than marketing themselves as completely private when they're not.
Industry Standards: AI companion platforms should establish and follow industry-wide security standards similar to those in healthcare, finance, or other sectors handling sensitive data.
🔒 Your Privacy Deserves Better
Don't let your intimate conversations become the next data breach headline. Mythic AI was built from the ground up with enterprise-grade security, transparent privacy practices, and genuine respect for your data. Experience AI companionship you can actually trust.
Try Mythic AI - Built Secure →
How to Protect Yourself Going Forward
If you choose to use AI companion apps despite these risks, here are strategies to minimize your exposure:
Assume Nothing Is Private: Operate on the assumption that anything you share could potentially become public. If you wouldn't want it published with your name attached, don't share it.
Use Burner Accounts: Create accounts with email addresses not connected to your real identity. Use VPNs to mask your IP address. Pay with prepaid cards or cryptocurrency rather than credit cards tied to your name.
Avoid Sharing Photos: Never upload photos of yourself or anyone else. AI-generated content is safer than real images if the database is compromised.
Regular Account Deletion: Instead of maintaining one long-term account, consider periodically deleting accounts and creating new ones to limit the accumulation of data.
Research Before Using: Spend time investigating a platform's security practices, breach history, and privacy policies before trusting them with intimate data.
Use Reputable Platforms: Stick with established platforms that have transparent security practices, rather than new apps with unknown track records.
Limit Personal Details: Don't share identifying information like your workplace, hometown, real name, or other details that could help someone identify you if data leaks.
Monitor for Breaches: Use services like Have I Been Pwned to check whether your email addresses or usernames appear in known data breaches (a short example script using the Have I Been Pwned API follows this list).
Trust Your Instincts: If something about a platform's security or privacy practices feels off, listen to that instinct and look for alternatives.
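As a concrete version of the breach-monitoring advice above, here is a small hypothetical script that queries the Have I Been Pwned v3 API. The breached-account endpoint requires a paid API key, and the key and address below are placeholders.

```python
import requests  # pip install requests

API_KEY = "your-hibp-api-key"   # obtained from haveibeenpwned.com
EMAIL = "you@example.com"       # the address you want to check

resp = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{EMAIL}",
    headers={"hibp-api-key": API_KEY, "user-agent": "breach-check-script"},
    timeout=10,
)

if resp.status_code == 404:
    print("No known breaches for this address.")
elif resp.ok:
    print("Found in:", ", ".join(breach["Name"] for breach in resp.json()))
else:
    resp.raise_for_status()
```

Keep in mind that this particular breach did not directly expose names or email addresses, so a clean result here doesn't mean you weren't affected; treat it as general hygiene rather than proof of safety.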
The Future of AI Companion Privacy
As AI companions become more sophisticated and mainstream, privacy and security issues will only intensify. Several trends will shape how this plays out:
Regulatory Intervention: Expect governments to implement specific regulations for AI companions similar to SB 243 in California. The question is whether regulations will arrive before or after more devastating breaches.
Encryption Innovation: Technologies like homomorphic encryption could eventually allow AI processing without decrypting data, enabling truly private AI companions where even the platform can't read your conversations.
Decentralized Solutions: Blockchain and decentralized technologies might enable AI companions that run locally on your device, eliminating the need to trust central servers with your data.
Privacy as Premium Feature: We may see a two-tier market emerge: free platforms with questionable privacy, and premium platforms offering genuine security as a paid feature.
Consolidation: Major tech companies with established security practices may acquire or outcompete smaller platforms, potentially improving average security standards across the industry.
User Awareness: As breaches accumulate, users will become more sophisticated about privacy concerns and demand better protections, forcing platforms to compete on security.
Why This Matters Beyond AI Companions
The Chattee/GiMe breach has implications that extend beyond the AI companion industry:
It demonstrates how new technologies can outpace security practices, creating systemic vulnerabilities that affect millions. It shows how companies can profit enormously from collecting intimate data while investing virtually nothing in protecting it. It reveals gaps in privacy regulations that leave users vulnerable when using emerging technologies.
Most importantly, it forces us to confront uncomfortable questions about digital intimacy. As we increasingly share our inner lives with AI systems, who is responsible for protecting that information? What obligations do companies have when they market themselves as safe spaces for vulnerability? How do we balance innovation with protection?
These aren't just technical questions—they're ethical ones that society needs to grapple with as AI becomes more integrated into our emotional and intimate lives.
The Bottom Line
The exposure of 43 million intimate messages and 600,000 images from 400,000+ AI companion users isn't just a data breach—it's a massive betrayal of trust. Imagime Interactive marketed Chattee Chat and GiMe Chat as safe spaces for intimate expression while implementing security so negligent that their entire database was publicly accessible to anyone who knew where to look.
This breach should be a turning point for the AI companion industry. Users deserve platforms that protect their intimacy with the same rigor that banks protect financial data or hospitals protect medical records. Anything less is unacceptable.
For users, the lesson is clear: research platforms thoroughly before trusting them with intimate data. Look for transparent security practices, regular audits, clear privacy policies, and evidence of genuine commitment to protection over profit.
The companies that thrive in the long term won't be those that prioritize growth at any cost, but those that build genuine trust through demonstrated security excellence. In an industry built on intimacy, trust isn't just a nice-to-have—it's the foundation of everything.
The Chattee/GiMe breach has exposed the dark side of AI companions developed without adequate security. Now it's up to users to demand better, regulators to enforce standards, and responsible companies to prove that AI companionship can be both intimate and secure.
🛡️ Protecting Yourself is Your Responsibility
No platform can guarantee absolute security, but some make genuine efforts while others are negligent. Before trusting any AI companion with your intimate thoughts, research their security practices, understand what data they collect, know who has access to it, and verify how it's protected. Your privacy is precious—don't hand it over to companies that treat it as worthless. Choose platforms that have earned trust through transparent practices, not just marketing promises.
✅ If You Were Affected
Immediate steps to take:
- Delete your account from Chattee and/or GiMe immediately
- Change passwords on any accounts where you used the same credentials
- Enable two-factor authentication on all important accounts
- Monitor your credit and identity for signs of fraud or misuse
- Be alert for targeted phishing or sextortion attempts
- Consider consulting a lawyer about legal recourse, especially if in California
- Document everything in case you need evidence for future legal action
Remember: The breach has been closed, but there's no guarantee others didn't access your data before researchers discovered it. Stay vigilant and protect yourself.