Friend AI's $1 Million Subway Campaign: When Marketing Becomes Performance Art
Updated October 2025
💰 The Numbers Behind the Controversy
Over $1 million spent. 11,000 subway car cards. 1,000 platform posters. 130 urban panels. One 22-year-old CEO who deliberately designed his ads to be defaced. Welcome to the most controversial AI marketing campaign of 2025.
If you've ridden the New York City subway in recent weeks, you've probably seen them: stark white posters with simple, almost desperate promises. "I'll never leave dirty dishes in the sink." "I'll never bail on our dinner plans." "I'll binge the entire series with you."
These aren't promises from a dating app or a roommate-finding service. They're advertisements for Friend, a $129 AI wearable device that hangs around your neck and listens to everything you say, positioning itself as the perfect companion who will never let you down.
But here's where it gets interesting: within days of the campaign launching, New Yorkers began vandalizing the ads with scathing messages like "surveillance capitalism," "get real friends," "AI trash," and "this is profiting off loneliness." And the CEO? He says it's working exactly as planned.
The Mastermind Behind the Madness
Avi Schiffmann is a 22-year-old entrepreneur who previously gained recognition for creating a COVID-19 tracking website. Now he's betting his startup's future on what he calls "the world's first major AI campaign," beating tech giants like OpenAI and Anthropic to a massive public advertising push.
Schiffmann dropped over $1 million of Friend's $7 million in venture capital funding on what's reportedly the largest NYC subway campaign of 2025. That's an enormous percentage of his company's resources dedicated to a single marketing push that many would consider high-risk at best, reckless at worst.
But according to Schiffmann, the backlash was the point all along. He told Adweek that he designed the ads with intentional white space specifically so New Yorkers would use them as canvases for social commentary. He even told Fortune magazine that he views the vandalized ads as collaborative art, claiming he "purchased the zeitgeist" and that "capitalism is the greatest artistic medium."
What Exactly Is Friend?
Before diving deeper into the controversy, it's worth understanding what Friend actually does. The device is a sleek disc that resembles an Apple AirPods case and hangs around your neck on a lanyard. Unlike typical AI assistants that wait for wake words, Friend is always listening.
The wearable uses its built-in microphone to passively monitor your conversations and environment, then sends this audio to Google's Gemini AI for processing. The AI companion responds through a smartphone app, offering support, remembering details about your life, and building what the company calls a "memory graph" of your experiences.
Schiffmann describes it as "the ultimate confidant, someone to talk to about things in your life" and positions it as "a new kind of companion and relationship." The device is designed to be worn throughout your day, creating an always-on AI presence in your life.
The Privacy Elephant in the Room
⚠️ Critical Privacy Concerns
Friend's terms of service include mandatory arbitration clauses and biometric data consent that grants the company permission to record audio and video, collect facial and voice data, and use this information to train AI models. Users also waive rights to jury trials and class action lawsuits.
The privacy implications of an always-listening wearable device have raised serious red flags among privacy advocates and everyday New Yorkers alike. The device's terms of service reveal extensive data collection permissions that go far beyond simple conversation logging.
According to Friend's privacy policy, user data may be protected from being sold for marketing purposes, but it can still be used for research and disclosed where required to satisfy legal obligations under GDPR, CCPA, and other privacy laws. The policy also states data may be shared "to protect the rights, privacy, safety, or property of our users, Friend, or third parties."
Perhaps most concerning, the privacy policy puts responsibility on the user to comply with local surveillance laws when recording others. This means if you're wearing Friend during conversations with friends, family, or colleagues, you're technically responsible for ensuring you have their consent to record them.
When pressed on these concerns, Schiffmann argues that because Friend is "a weird, first-of-its-kind product," the heavy-handed terms of service are necessary. But many critics see this as insufficient justification for such extensive data collection by a startup with a limited track record.
Why New Yorkers Are Defacing the Ads
The graffiti covering Friend's subway ads isn't random vandalism. It's a surprisingly coherent critique of AI companionship, surveillance capitalism, and the commercialization of loneliness. The messages scrawled across the posters reveal deep anxieties about where AI is taking us as a society.
"Stop profiting off of loneliness" speaks to concerns that companies are exploiting widespread social isolation rather than addressing its root causes. "AI wouldn't care if you lived or died" challenges the fundamental premise that artificial companions can provide genuine emotional support. "Get real friends" suggests that AI companionship is a poor substitute for human connection.
Perhaps most chilling is the message "AI will promote suicide when prompted," referencing legitimate concerns about AI chatbots providing harmful advice when users are vulnerable. This isn't hypothetical fearmongering; there have been documented cases of AI companions giving dangerous suggestions to users in mental health crises.
The vandalism also reflects New York City's particular cultural moment. Post-pandemic, many people are consciously reinvesting in face-to-face relationships after years of digital isolation. The idea of replacing human friends with an always-listening AI device feels particularly dystopian to a population that's rediscovering the irreplaceable value of genuine human connection.
Is This Genius Marketing or Tone-Deaf Trolling?
Schiffmann insists the controversy is exactly what he wanted. He specifically chose New York because, as he told Adweek, "people in New York hate AI, and things like AI companionship and wearables, probably more than anywhere else in the country." He designed the minimalist ads with abundant white space specifically to invite social commentary.
From a pure attention-getting perspective, the strategy is working. Friend has received coverage from major outlets including TechCrunch, Fast Company, Fortune, Adweek, and countless tech blogs. For a startup with limited resources, getting this level of media exposure would normally cost multiples of what they spent on the subway campaign.
But there's a crucial question: does negative attention translate to customers? Most marketing wisdom suggests that while "all publicity is good publicity" works for celebrities and entertainment, it's far less effective for products people need to trust, especially products involving privacy-sensitive technology.
The campaign also raises ethical questions about Schiffmann's approach. For someone selling a product positioned around emotional care and companionship, deliberately antagonizing an entire city seems contradictory. As one critic pointed out, "a CEO who would troll the city of New York doesn't seem aligned with a product that's supposed to 'care' about its users."
The Broader Context: AI Companionship in 2025
Friend's controversial campaign arrives at a pivotal moment for AI companionship. Recent studies show that 28% of Americans have had intimate or romantic relationships with AI chatbots. The market for AI companions is exploding, with platforms like Replika, Character.AI, and dozens of others competing for users seeking digital connection.
However, most successful AI companion platforms have taken a drastically different marketing approach. Rather than courting controversy, they've focused on building trust, emphasizing privacy protections, and positioning AI companionship as complementary to human relationships rather than a replacement.
Friend's aggressive, confrontational marketing stands in stark contrast to this industry trend. While other companies are working to destigmatize AI relationships through careful messaging, Friend is leaning into the stigma, almost daring people to mock it.
The timing is also significant because AI hardware is at a crossroads. After the initial ChatGPT hype cycle, companies are scrambling to find product-market fit beyond chat interfaces. Wearable AI represents one potential future, but consumer acceptance remains deeply uncertain. Friend's campaign may actually set back the category by reinforcing negative perceptions.
What This Means for AI Marketing
Whether Friend succeeds or fails, this campaign represents a fascinating experiment in AI marketing psychology. Traditional wisdom says you want customers to trust you, especially when selling intimate technology. Friend is testing whether provocation and controversy can substitute for trust-building.
There are historical precedents for brands using controversy to break through. Fashion brands have long used provocative campaigns to generate buzz. Dollar Shave Club disrupted razors with irreverent humor that mocked industry leaders. Cards Against Humanity has built an empire on offensive content.
But those examples differ in crucial ways. They weren't asking customers to trust them with 24/7 access to their conversations, biometric data, and intimate moments. The higher the stakes of what you're selling, the more trust matters relative to attention.
Friend's approach also reveals a generational divide in marketing philosophy. Schiffmann is 22 years old, a digital native who's grown up in an attention economy where engagement matters more than sentiment. Traditional marketers in their 40s and 50s are watching this campaign with a mixture of horror and fascination.
The Numbers: Did It Actually Work?
While Schiffmann claims the campaign is successful, it's worth examining what success actually means. The company has generated enormous media coverage and social media discussion. But has it translated to sales or signups?
Friend hasn't released sales figures or user numbers since the campaign launched. The company's website doesn't show real-time purchase data, and Schiffmann hasn't shared conversion metrics in any of his interviews. This silence is notable, especially for a CEO who's been extremely vocal about the campaign's success.
What we do know is that spending $1 million of a $7 million funding round on a single marketing campaign is exceptionally aggressive. For comparison, most early-stage startups allocate 10-20% of their budgets to marketing across multiple channels over many months. Friend spent roughly 14% of its total funding on one campaign in one city over a few weeks.
This means Friend needs the campaign to deliver significant ROI to justify the spend. Even if they capture widespread attention, they need that attention to convert to paying customers at a rate that validates the investment.
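The arithmetic behind that claim can be sketched with the figures cited in this article. This is a back-of-the-envelope illustration only: it assumes the reported $1 million spend and $7 million raise are accurate, and it treats the $129 retail price as pure revenue, ignoring hardware costs, margins, and returns.

```python
# Illustrative arithmetic using the figures reported above.
# Assumptions: $1M campaign spend, $7M total funding, $129 device price;
# unit economics (margins, costs) are not public and are ignored here.
AD_SPEND = 1_000_000       # reported subway campaign spend, USD
TOTAL_FUNDING = 7_000_000  # reported venture funding, USD
UNIT_PRICE = 129           # Friend device retail price, USD

spend_share = AD_SPEND / TOTAL_FUNDING       # fraction of all funding on one campaign
breakeven_units = AD_SPEND / UNIT_PRICE      # unit sales whose revenue merely matches the spend

print(f"Share of funding spent: {spend_share:.1%}")            # → 14.3%
print(f"Units to recoup ad spend: {breakeven_units:,.0f}")     # → 7,752
```

In other words, even before accounting for hardware costs, Friend would need to sell on the order of eight thousand devices attributable to this one campaign just to get the ad money back.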
Privacy vs. Companionship: The Central Tension
The Friend controversy ultimately boils down to a fundamental question about AI companions: how much privacy are we willing to sacrifice for the promise of connection?
Every AI companion requires some data collection to function: it needs to remember your conversations, understand your preferences, and build continuity over time. But Friend's always-listening approach represents an extreme version of this trade-off.
Unlike chat-based AI companions where you initiate conversations and control what you share, Friend is passively recording your entire life. This includes conversations with others who haven't consented to being recorded, private moments you might not want documented, and potentially sensitive information you wouldn't deliberately share with any company.
The question becomes: is the convenience and companionship worth this level of surveillance? For some people, the answer might be yes. For many New Yorkers defacing the ads, the answer is clearly no.
What Friend Could Learn from Successful AI Companions
Platforms that have successfully built trust in the AI companion space share several common strategies that Friend's campaign ignores:
Transparency About Limitations: Successful AI companions are upfront about what they are and aren't. They don't promise to replace human friends but rather complement them.
Privacy-First Positioning: The best platforms make privacy protection a core part of their value proposition, not a legal obligation buried in terms of service.
Community Building: Rather than antagonizing skeptics, successful AI companion companies build communities of enthusiastic users who can advocate for the product.
Gradual Adoption: Most people need time to become comfortable with AI companionship. Forcing the concept on a skeptical public through aggressive advertising typically backfires.
Emphasis on User Control: Giving users control over what data is collected, how it's used, and the ability to delete everything builds trust that mandatory arbitration clauses destroy.
The Verdict: Bold Gamble or Cautionary Tale?
It's too early to definitively judge whether Friend's controversial campaign will succeed or fail. The company has certainly achieved its goal of generating conversation and attention. Whether that attention converts to customers remains to be seen.
What's clear is that Friend has taken a radically different approach to marketing AI companionship than virtually every other player in the space. They've embraced controversy rather than avoided it, courted backlash rather than built trust, and spent aggressively rather than conservatively.
For other AI startups, Friend's campaign offers valuable lessons about what happens when you prioritize attention over trust in intimate technology. The vandalized ads in New York's subway stations stand as physical monuments to consumer skepticism about AI's role in our emotional lives.
Schiffmann may view the graffiti as proof his strategy worked, but it could just as easily be interpreted as a warning: people are deeply uncomfortable with the idea of AI companions, especially ones that surveil them constantly, and no amount of clever marketing can overcome that fundamental resistance.
What Happens Next?
The Friend campaign will likely become a business school case study, though whether as an example of brilliant guerrilla marketing or a cautionary tale about burning through venture capital remains to be determined.
For the broader AI companion industry, Friend's approach may actually make life harder for other companies trying to build trust and legitimacy. When skeptics think about AI companions, they may now associate the category with surveillance, manipulation, and companies that mock public concerns rather than address them.
As for the defaced ads themselves, they're gradually being cleaned or replaced, but the images have been widely circulated online. In some ways, the vandalism has become more famous than the original ads, which may be exactly what Schiffmann wanted—or it may represent an unintended consequence of antagonizing an entire city.
The Bottom Line
Friend's $1 million subway campaign represents one of the most polarizing marketing efforts in AI history. By deliberately courting controversy and embracing defacement as part of the artistic vision, Avi Schiffmann has certainly made his mark on the industry.
But attention alone doesn't build successful companies, especially in intimate technology where trust is paramount. The vandalized ads may generate headlines, but they also crystallize public anxieties about AI companions in a visceral, visual way that could undermine the entire category.
Time will tell whether Schiffmann's bold gamble pays off or becomes a cautionary tale about the limits of controversy-driven marketing. What's certain is that anyone considering an AI companion device will have the images of those defaced subway posters in mind, along with the pointed question scrawled across them: why not just get real friends?
📚 Related Reading
- Why 28% of Americans Are Using AI for Intimacy
- Is AI Dating Safe? Complete Privacy Guide
- Best AI Girlfriend Apps: Full Comparison
- Complete Guide to AI Roleplay in 2025
- SB 243, California AI law
- PBS NewsHour investigates tragic cases linked to AI companions
- Chattee Chat leak
- The NSFW AI Boom
- Why do people fall in love with AI girlfriends?
- How AI Companions Are Reshaping Love, Digital Intimacy, and Human Connection in 2025