
Ring's 'Zero Out Crime' Plan: AI Surveillance Ethics Explained [2025]

Ring clarifies its leaked neighborhood surveillance plans don't mean mass monitoring. But AI dog tracking and predictive policing raise real privacy concerns...


Ring's 'Zero Out Crime' Neighborhood Initiative: What You Actually Need to Know

Back in early 2024, documents leaked from Amazon-owned Ring revealed something that made privacy advocates lose their minds. The company had allegedly developed an internal initiative called "zero out crime in neighborhoods"—a phrase that sounds like something from a dystopian thriller. When that hit the press, people immediately jumped to conclusions about mass surveillance networks, facial recognition databases, and Orwellian monitoring systems.

But here's the thing: Ring's response is a bit more nuanced than the headlines suggest. The company says the leaked plan doesn't mean what you think it means. It doesn't equal blanket surveillance of every person walking down your street. But if we're being honest, the actual situation is more complicated than either "everything's fine" or "this is a surveillance state."

Let me walk you through what we actually know, what Ring claims it's doing, what the legitimate privacy concerns are, and why this matters for your home security setup going forward.

Understanding the Original Leaked Documents

The leaked materials came from internal Ring presentations and strategy documents that outlined an ambitious vision: dramatically reduce crime in residential neighborhoods through a combination of AI, data analytics, and automated responses. On the surface, this sounds reasonable. Every neighborhood wants less crime. That's not controversial.

But the devil, as always, lives in the details. The documents suggested Ring wanted to leverage its massive install base—millions of doorbell cameras, security cams, and Ring devices across North America—to create a connected intelligence network. Think of it like a nervous system for your neighborhood, where data flows between devices, gets analyzed by AI, and then... well, that's where it gets fuzzy.

The specific concern that raised eyebrows was Ring's apparent interest in creating predictive analytics around criminal activity. Not just detecting crimes in progress, but predicting where crimes might happen. This is where the comparisons to Minority Report started flying around.

One particularly contentious element was how Ring would handle person identification and tracking. The leaked materials hinted at using AI to track individuals—including pets and dogs—across camera feeds in a neighborhood. Ring already offers pet detection features in its standard cameras, but the scale implied in these documents suggested something more systematic.

When you have millions of cameras all connected to the same company's AI system, and that system can identify and track specific individuals or animals across multiple properties, that starts looking less like home security and more like infrastructure for persistent neighborhood-level surveillance.

Ring's Official Response and Clarifications

After the leak, Ring issued a response that walked back some of the more alarming implications while not exactly denying the program exists. Their statement essentially said: "We're not building a surveillance state. We're just investing in technology that helps keep neighborhoods safer."

Which is technically true, but also conveniently vague.

Ring's leadership clarified that the "zero out crime" initiative isn't about tracking every person in a neighborhood. Instead, they framed it as an evolution of services they already offer—like the Neighbors app (their community crime-sharing platform) combined with improved AI features for detecting suspicious behavior on camera feeds.

The company emphasized that they're working within existing privacy frameworks. Ring cameras are owned by individuals. Ring doesn't automatically share footage between neighbors (unless neighbors voluntarily join the Neighbors community). And Ring doesn't have some secret database of faces or identities.

But here's where it gets interesting: Ring didn't deny the dog tracking capability. In fact, they leaned into it. The AI dog detection feature is already available in Ring cameras. The company painted this as a helpful tool—helping neighbors identify pets, returning lost dogs, that kind of thing.

So Ring's response was essentially: "Yes, we have these technologies. Yes, we want to help neighborhoods be safer. But no, this doesn't mean we're turning neighborhoods into panopticons."

The question is whether that response actually addresses the underlying concerns.

The AI Dog Tracking Feature Explained

Let's talk specifically about the dog tracking thing, since that's become a focal point of the whole debate.

Ring cameras now use machine learning to detect dogs in footage and can theoretically recognize the same dog across multiple frames or even multiple cameras if you own more than one. This is genuinely useful for some scenarios: you lose your dog, a neighbor's camera catches it, and the AI can flag which footage has that specific dog.

But expand that to neighborhood scale. Imagine if Ring's AI could recognize "the golden retriever with the white spot" across ten different Ring cameras on your block, or across multiple blocks. That's still just pattern recognition—the AI isn't storing dog identities or creating a canine database that's shared with law enforcement.

At least, that's Ring's story.

The concern, though, is the infrastructure. Once you have systems capable of identifying and tracking specific subjects (whether dogs or people) across multiple camera feeds, the technical barriers to using that for more invasive purposes become much lower. It's not that Ring is necessarily doing that right now, but the capability exists.
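
To make that concrete, here is a minimal sketch of how cross-camera re-identification generally works in modern computer vision pipelines. This is a generic illustration in Python, not Ring's actual code: a model converts each detected subject into an embedding vector, and "tracking" reduces to nearest-neighbor matching on those vectors. Nothing in the matching step knows or cares whether the embeddings describe dogs or people.

```python
# Generic cross-camera re-identification sketch (illustrative only; not Ring's code).
# A detector crops a subject from a frame, an embedding model turns the crop into a
# vector, and "tracking across cameras" reduces to nearest-neighbor matching on those
# vectors. The matching logic below is identical whether the vectors describe dogs,
# cars, or people -- which is the core of the concern.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_across_cameras(query: np.ndarray,
                         gallery: dict[str, np.ndarray],
                         threshold: float = 0.85) -> str | None:
    """Return the sighting ID whose stored embedding best matches the query."""
    best_id, best_score = None, threshold
    for sighting_id, embedding in gallery.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_id, best_score = sighting_id, score
    return best_id

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dog_a = rng.normal(size=128)   # stand-in for an embedding of one specific dog
    gallery = {
        "camera_3_tuesday": dog_a + rng.normal(scale=0.05, size=128),  # same dog, later
        "camera_7_tuesday": rng.normal(size=128),                      # a different subject
    }
    print(match_across_cameras(dog_a, gallery))   # -> "camera_3_tuesday"
```

Swapping the subject class is a matter of swapping the embedding model, not rebuilding the system.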

This is a classic technology ethics problem: the tool works great for legitimate purposes, but it's also a tool that could be misused. And once it's built, it's hard to un-build.

The Privacy Architecture Question

Here's where the technical and ethical arguments diverge.

From a privacy architecture standpoint, Ring's setup is actually reasonably sound—or at least, more sound than some alternatives. Each Ring camera is owned by an individual homeowner. The footage stays on Ring's servers (or on local storage if you use the backup feature). Ring doesn't automatically aggregate footage across your neighborhood.

Compare that to something like a city-wide CCTV network controlled by a government agency, and Ring looks pretty privacy-friendly. You're choosing to install these cameras. You're choosing what footage gets recorded. You're choosing whether to share clips with neighbors.

But there's a layers problem. The individual camera is privacy-respecting. But the network of cameras, combined with AI analysis, is a different beast.

Let's break this down:

Layer 1: Individual Device Level
Your Ring camera sits outside your house. It records when motion is detected. That footage is yours (subject to Ring's terms of service, of course). You can review it, delete it, or share it.

Layer 2: Individual Account Level
You can set up multiple Ring devices across your property, and they all connect to your account. Ring's AI can identify patterns across your devices—like recognizing your dog across multiple camera angles, or noticing someone who repeatedly walks past your property.

Layer 3: Neighborhood Level
This is where it gets fuzzy. If you participate in the Neighbors app, you're voluntarily sharing clips and information with other Ring users in your area. That's a network, but it's a distributed, decentralized network—not controlled by Ring directly.

Layer 4: Company-Level Analysis
Ring can (and presumably does) analyze patterns in the aggregate data it collects. Not to identify you specifically, but to understand things like: In neighborhoods with property crime, what patterns emerge in camera footage? What behaviors typically precede a break-in? This is where the predictive analytics come in.

Layer 5: The Hypothetical Integration Layer
This is the concerning one. What if Ring were to integrate all these layers more tightly? What if the company created a system where data from layer 4 could be operationalized back into layers 1-3 in automated ways? That's where "zero out crime" gets scary.
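
One way to see why Layer 5 changes the picture is to model the data flow directly. The sketch below is a hypothetical Python model (not based on anything Ring has published): in layers 1-3, every query is scoped to a single owner or to clips owners explicitly shared, while the integration layer is simply the point where that scoping filter disappears. Technically it's a small change; ethically it's the whole debate.

```python
# Hypothetical model of the layers as data flow (not based on any Ring code).
# Layers 1-3 only ever handle events scoped to a single owner or to explicit,
# voluntary shares. The "integration layer" is the point where per-owner scoping
# disappears -- technically a small change, ethically a large one.

from dataclasses import dataclass

@dataclass
class CameraEvent:
    owner: str        # the account that owns the camera
    camera_id: str
    label: str        # e.g. "person", "dog", "vehicle"
    shared: bool      # did the owner explicitly post this to the Neighbors feed?

def layer2_account_view(events: list[CameraEvent], owner: str) -> list[CameraEvent]:
    """Layer 2: an owner sees only their own devices' events."""
    return [e for e in events if e.owner == owner]

def layer3_neighbors_view(events: list[CameraEvent]) -> list[CameraEvent]:
    """Layer 3: the community sees only what owners chose to share."""
    return [e for e in events if e.shared]

def layer5_integration_view(events: list[CameraEvent]) -> list[CameraEvent]:
    """Layer 5 (hypothetical): every event, regardless of owner or sharing choice."""
    return list(events)   # the scoping filter is simply gone
```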

Ring says they're not doing this. And technically, there's no public evidence they are. But the documents suggested they were thinking about it.

Predictive Policing and the Ethics Minefield

Predictive policing is one of the most ethically fraught areas in criminal justice technology right now. The concept is simple: use data about past crime to predict where future crime is likely to occur, then increase police presence in those areas.

The problem is just as simple: the data about past crime is biased.

Law enforcement in the United States has a documented history of over-policing certain communities (predominantly Black and Latino neighborhoods). That historical data gets fed into predictive algorithms, and those algorithms recommend more police presence in those same neighborhoods, which creates more arrests, which creates more biased data, which feeds the next iteration of the algorithm.

It's a feedback loop that perpetuates systemic bias.
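
The loop is easy to demonstrate with a toy simulation. In the sketch below (Python, with made-up numbers purely for illustration), two neighborhoods have identical true incident rates, but one starts with more recorded incidents; each round, attention is allocated in proportion to the recorded history, and more attention means more incidents get recorded. The initial bias never washes out; the recorded gap just keeps growing.

```python
# Toy simulation of the predictive-policing feedback loop (made-up numbers, for
# illustration only). Both neighborhoods have the same true incident rate, but
# neighborhood A starts with more *recorded* incidents. Each round, attention is
# allocated in proportion to recorded history, and more attention means a larger
# share of incidents actually get recorded.

import random

random.seed(42)

TRUE_RATE = 100                  # actual incidents per round, identical in A and B
recorded = {"A": 60, "B": 30}    # historical records are biased toward A

for round_num in range(1, 11):
    total = sum(recorded.values())
    shares = {hood: count / total for hood, count in recorded.items()}
    for hood, share in shares.items():
        # Each true incident is recorded with probability equal to the attention share.
        detected = sum(random.random() < share for _ in range(TRUE_RATE))
        recorded[hood] += detected
    print(round_num, recorded)

# The absolute gap between A and B widens every round, even though the underlying
# incident rates are identical, so the data keeps "confirming" the original bias.
```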

Now, Ring isn't a police department. The company can't direct police to neighborhoods. What Ring could do is use its camera network data to identify patterns and flag neighborhoods as "higher risk," which could then influence insurance rates, property values, or how law enforcement allocates resources.

That's still predictive policing, just through a different vector.

Ring's leaked documents suggested the company was interested in this space. Their official response didn't really address whether they'd apply predictive analytics to law enforcement outcomes, or just to direct crime prevention (like alerting neighborhood watch groups).

There's a meaningful difference between those two things, and it matters.

The Voluntary vs. Involuntary Surveillance Question

One of the strongest defenses of Ring's approach is that it's voluntary. You choose to buy a Ring camera. You choose to enable features. You choose whether to share footage with neighbors.

But there's a catch: once you buy a Ring camera, you're not just making a choice for yourself. You're also making a choice for everyone else on your street.

Imagine you're a renter in an apartment building, and the person next door installs a Ring camera that happens to point at your front door (even accidentally). You didn't choose that. You're now part of Ring's network whether you like it or not.

Or imagine you're a pedestrian walking through a neighborhood where half the homes have Ring cameras. Even if you don't own a Ring device, you're being recorded by a network of cameras all feeding data to the same company's AI systems.

This is the externality problem with surveillance technology. The benefits (safer neighborhoods, catching criminals) are shared, but the privacy costs are not distributed equally. The people who install cameras benefit from the security. The people who are recorded without their choice bear the privacy cost.

Ring's official response hasn't really grappled with this. They focus on the ownership of individual devices, not the network effects of millions of devices working together.

How Ring's Actual Technology Currently Works

Let's be specific about what Ring actually offers right now, not what they might offer in the future.

Motion Detection and Recording
Ring cameras detect motion and record video. This is basic. The AI here just helps filter out false positives (like trees moving in wind) from actual people or animals.

Person Detection
Ring can identify when something human-shaped passes through the frame. It can distinguish a person from a pet from a package. This is more sophisticated, using deep learning models trained on video footage.

Package Detection
Ring flags when a package appears and stays put in frame—for example, when a delivery driver leaves a box on your porch. This is useful for security.

Pet Detection (Including Dogs)
Ring recognizes dogs, cats, and other animals in footage. You can get alerts when a dog appears on your camera. This is the feature that's become central to the privacy debate.

Vehicle Detection
Ring identifies cars and can alert you to vehicles in your driveway or street.

Facial Recognition (Limited)
Here's where it gets interesting. Ring has facial recognition capability, but they claim they don't use it for mass identification. Instead, they use it to help you identify familiar faces (like family members, regular delivery drivers) versus strangers. It's device-local or account-local, not shared across users or with external databases.

The Neighbors App
This is the community layer. Ring users voluntarily upload clips and information about suspicious activity. The app functions like a neighborhood watch bulletin board. Ring provides the platform, but users control what gets shared.
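
To ground that feature list, here is a rough sketch of what class-based alert filtering looks like in general terms. It's a conceptual illustration in Python, not Ring's actual API or thresholds: a detector assigns each motion event a class and a confidence score, and simple per-class rules decide which events become notifications.

```python
# Generic sketch of class-based alert filtering (conceptual only; not Ring's API).
# A detector assigns each motion event a class and a confidence score; simple
# per-class rules then decide whether the owner gets notified.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # "person", "package", "dog", "vehicle", "motion"
    confidence: float   # 0.0 - 1.0 from the detection model

# Hypothetical owner preferences: which classes trigger alerts, and at what confidence.
ALERT_RULES = {
    "person":  0.60,
    "package": 0.70,
    "dog":     0.80,
    "vehicle": 0.75,
    # plain "motion" (wind, shadows) deliberately has no rule -> filtered out
}

def should_alert(detection: Detection) -> bool:
    """Return True if this detection should notify the owner."""
    threshold = ALERT_RULES.get(detection.label)
    return threshold is not None and detection.confidence >= threshold

events = [
    Detection("motion", 0.95),    # tree moving in the wind: suppressed
    Detection("person", 0.82),    # someone at the door: alert
    Detection("dog", 0.65),       # low-confidence pet detection: suppressed
    Detection("package", 0.91),   # delivery left on the porch: alert
]

for e in events:
    print(e.label, "->", "alert" if should_alert(e) else "ignore")
```

The per-camera filtering itself is mundane; the privacy questions come from what happens to these classified events downstream.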

The Amazon Connection and Federal Concerns

Ring isn't independent. It's owned by Amazon, and that matters.

Amazon has a documented relationship with law enforcement. Police departments can request footage from Ring cameras (usually with a warrant, but sometimes with less). Amazon has provided Ring footage to law enforcement in various cases.

This isn't necessarily nefarious—if there's a crime and police have a warrant, handing over relevant footage seems reasonable. But it does mean that Ring footage is potentially available to the government in ways that a purely independent security camera company's footage might not be.

The FTC (Federal Trade Commission) has been concerned about Ring for a while. In 2020, the agency began investigating the company's privacy practices. The concern wasn't just about what Ring does with footage, but about how Ring handles employee access to that footage and how secure their systems are.

There have been documented cases of Ring employees (or people impersonating them) accessing customer footage or threatening customers. This suggests the access controls around Ring footage aren't as tight as they should be.

The leaked "zero out crime" documents raised new concerns at the federal level. Lawmakers started asking questions about whether Ring was building surveillance infrastructure without adequate oversight or transparency.

International Privacy Standards and Ring

Ring's response to privacy concerns has varied by country, which tells you something.

In Europe, GDPR (General Data Protection Regulation) imposes strict requirements on how personal data can be collected, stored, and used. Ring's European operations have to comply with that, which means they're more restricted in what they can do with video data than they are in the United States.

The contrast is stark. A Ring camera in Germany operates under different rules than a Ring camera in Texas. Neither is necessarily wrong, but the difference shows that privacy protections aren't inherent to the technology—they're imposed by regulation.

This matters for the "zero out crime" debate because it suggests that Ring's approach to privacy is shaped by legal requirements, not by principle. In jurisdictions with strong privacy protections, Ring respects those protections. In jurisdictions with weaker protections, Ring has more freedom to operate.

That's not a criticism of Ring—it's how most companies operate. But it does mean that the privacy promises Ring makes in the US could change if regulations change.

The Arguments for Ring's Approach (The Actual Case)

Let's not strawman the pro-Ring position. There are legitimate arguments for why Ring's neighborhood safety initiatives make sense.

Crime is a Real Problem
For many neighborhoods, property crime and violent crime are serious issues. Ring cameras have actually helped solve crimes. Footage has led to arrests. Neighborhoods with more cameras and more community awareness do tend to see lower crime rates (though causation isn't always clear).

Decentralized is Better Than Centralized
If crime prevention is going to happen through video surveillance, Ring's decentralized model (where homeowners own their cameras) is arguably better than a city government installing cameras on every corner. At least with Ring, you have opt-in and opt-out mechanisms.

Technology Isn't Inherently Bad
Dog tracking, person detection, and predictive analytics are tools. Tools can be used well or poorly. Ring argues they're using these tools to help people keep their neighborhoods safer, which is a legitimate goal.

Transparency About What's Actually Happening
Ring has been more transparent than some tech companies about what their cameras can do. They publish privacy documentation. They explain their features. They don't hide that they have AI detection capabilities.

Network Effects Create Value
When a neighborhood is covered by Ring cameras and people actively use the Neighbors app, there's a genuine security benefit. Criminals are more likely to avoid neighborhoods where they think they'll be caught on camera and reported to neighbors. That's deterrence, and it works.

These aren't trivial points. The question isn't whether Ring has a point—it's whether the benefits outweigh the privacy costs.

The Arguments Against (The Real Concerns)

But the privacy concerns are also substantial.

Scope Creep is Real
Today, Ring tracks dogs helpfully. Tomorrow, Ring tracks individuals. Technology capabilities always expand. The infrastructure for more invasive surveillance is being built today, even if it's not being deployed today. History shows that surveillance infrastructure almost always expands beyond its original purpose.

Bias in AI Systems
AI detection systems are trained on data that reflects historical biases. A system trained on footage where certain neighborhoods have higher police presence will learn to overrepresent "suspicious activity" in those neighborhoods. Ring hasn't published extensive research on bias in their detection algorithms.

Lack of Regulation
Ring operates in a gray area. It's not a government agency (no constitutional protections against it), but it's not a small private company either (it has scale that approaches government surveillance). There's no legal framework that comprehensively governs what Ring can and can't do.

Data Retention and Deletion Isn't Clear
Ring's policies about how long they keep footage, whether they use footage for training their AI, and whether they delete footage when you request it are opaque. The company benefits from keeping data, so there's not a strong incentive to delete it.

The Chilling Effect
Even if Ring isn't actually tracking people, the knowledge that Ring cameras are recording changes behavior. People self-censor, change routes, avoid certain neighborhoods if they think they're being watched. That's a social cost that's hard to quantify but real.

Dog Tracking is a Slippery Slope
If Ring can identify and track the same dog across multiple cameras, that's a proof of concept for identifying and tracking the same person across multiple cameras. The technology is essentially the same. It's just a matter of applying it differently.

The Comparison to Other Surveillance Systems

To understand where Ring sits in the spectrum of surveillance technologies, it helps to compare it to alternatives.

Government CCTV Networks
Cities like London have extensive CCTV networks run by government agencies. These are more invasive (ubiquitous, no opt-out) but also more transparent and accountable (subject to FOIA requests, regulated by law). Ring is less ubiquitous but less regulated.

Phone Location Data
Your phone constantly reports its location to cellular networks and to apps you've installed. This data is collected, aggregated, and used for targeting ads, predicting behavior, and (sometimes) law enforcement purposes. It's more invasive than Ring cameras but less visible.

Facial Recognition Technology
Cities and law enforcement agencies are deploying facial recognition systems that can identify individuals in real-time. This is more advanced than Ring's technology but also more controversial and more regulated in some places.

Private Security Companies
Companies like ADT and Vivint operate camera systems that are similarly decentralized (homeowner-owned) but less networked. They don't have a Neighbors app or AI analysis across properties.

Ring sits somewhere in the middle. More networked and AI-driven than traditional private security, but less institutionalized than government surveillance.

What the Documents Actually Revealed (Close Reading)

Let's dig into what the leaked documents specifically suggested, because there's a difference between what they said and what people assumed they said.

The documents outlined a vision for Ring to become more proactive in helping neighborhoods identify and prevent crime. This included:

  • Aggregated Insights: Using anonymized data across neighborhoods to identify patterns in criminal activity. If camera systems consistently detect certain behaviors before break-ins occur, flag those behaviors.
  • Automated Alerts: Creating systems where neighbors are automatically alerted to potential security risks without requiring manual review.
  • Integration with Law Enforcement: Closer partnerships with police departments, potentially automating flagging of footage when certain conditions are detected.
  • Predictive Capabilities: Building models that could predict which neighborhoods or times are at higher risk of crime.
  • Enhanced Identification: Using AI to identify and track individuals or animals across camera feeds (the dog tracking feature is one manifestation).

None of this is inherently illegal or unethical. But taken together, it paints a picture of Ring potentially becoming more of an active crime prevention system and less of a passive recording device.

The question isn't whether Ring is doing this now. It's whether this is what Ring wants to do eventually, and whether they should be allowed to.

The Regulatory Landscape: What's Actually Happening

In response to concerns about Ring and other surveillance technologies, several regulatory efforts are underway.

At the Federal Level
The FTC has authority over Ring through consumer protection laws. The agency has investigated Ring's privacy practices and has authority to impose fines or behavioral requirements if Ring violates consumer protection statutes.

Several congressional representatives have also expressed concerns and have proposed legislation that would regulate how companies like Ring can use AI and how they can share data with law enforcement.

At the State Level
Some states are moving faster than the federal government. California has considered (though not yet passed) legislation that would regulate private security cameras and their use of AI. New York has been particularly active in regulating facial recognition technology, though Ring's AI doesn't currently include facial recognition as a primary feature.

At the Municipal Level
Some cities have begun regulating how law enforcement interacts with Ring and similar systems. Some require warrants before Ring footage can be accessed. Others limit how footage can be used.

Internationally
Europe's GDPR and similar laws in other countries provide a template for stronger privacy protections. As more countries adopt privacy-first regulations, companies like Ring have to adapt their operations to comply.

The regulatory landscape is still evolving. There's no comprehensive framework yet that clearly defines what Ring can and can't do.

How to Think About This Yourself

Let me cut through the rhetoric here and give you a framework for thinking about this issue yourself.

First: understand that this isn't binary. It's not "Ring is either secretly building a totalitarian surveillance state" or "Ring is just a helpful tool and everyone's paranoid." The reality is more textured.

Second: recognize that the leaked documents show Ring was thinking about more aggressive crime prevention tactics. Whether they're actually implementing those tactics is a different question. The fact that they considered them suggests the company sees business opportunity in expanding its capabilities.

Third: understand that privacy and security are tradeoffs, not opposites. More cameras and more analysis do help prevent crime. But there are costs to that, including erosion of privacy, potential for misuse, and chilling effects on behavior.

Fourth: acknowledge that the impacts aren't evenly distributed. Some neighborhoods benefit from Ring cameras more than others. Some people are more affected by the privacy tradeoffs than others. Technology rarely benefits everyone equally.

Fifth: recognize that once technology infrastructure is built, it's hard to un-build. If Ring builds the capacity to track individuals across neighborhood cameras, even if they don't use it now, that capacity exists for future use. Technology expansionism is a real thing.

Practical Implications for Ring Users

If you use Ring or are considering buying Ring cameras, here's what actually matters.

Your Data Isn't Truly Private
Understand that Ring footage is stored on Ring's servers (unless you pay for local storage). Ring can access that footage, analyze it, use it for training AI systems (in anonymized form), and share it with law enforcement under certain circumstances. This is disclosed in their terms of service, but many people don't read those.

You're Part of a Network Whether You Intend to Be or Not
Even if you don't join the Neighbors app, your Ring camera is feeding data into Ring's systems. That data is being analyzed and used to understand patterns in your neighborhood.

The Functionality Will Expand
Ring will add new features. Some of those features might be privacy-friendly (like better night vision or higher resolution). Some might be more invasive. You should expect your Ring camera to do more five years from now than it does today.

You Need Strong Account Security
Given documented cases of unauthorized access to Ring accounts, treat your Ring login like you treat your email or banking login. Strong password, two-factor authentication, review account activity regularly.

Understand Local Regulations
The rules around how Ring footage can be used by law enforcement vary by location. In some places, police need warrants. In others, they don't. Know your local regulations.

Consider Alternatives
Ring isn't the only option. There are other security camera companies that prioritize privacy more explicitly, though they typically lack Ring's integration and community features.

The Future of Neighborhood Surveillance

Where is this heading?

The trend is clear: more cameras, more AI analysis, more integration between private security systems and law enforcement. Ring is one player in that trend, but it's not unique.

As AI gets better at detecting and tracking, and as cameras become cheaper and more ubiquitous, the infrastructure for comprehensive neighborhood-level surveillance is being built piecemeal. Ring is part of it. So are doorbell cameras from other companies, outdoor security cameras, car dashcams, and even smartphone users recording footage.

The question isn't whether this infrastructure will exist. It's whether it will be regulated, how it will be governed, and who will benefit from it.

Ring's leaked "zero out crime" initiative is interesting because it shows the company thinking ahead about what's possible. Whether they actually implement those ideas depends on market pressure, regulatory action, and company leadership decisions.

The privacy advocates who are concerned about this aren't wrong. The safety advocates who value what these systems can provide aren't wrong either. The real work is building governance structures that let us benefit from the security benefits without sacrificing privacy entirely.

What Ring Should Be Doing Differently

If Ring actually cared about addressing privacy concerns (and not just managing them), here's what they should do.

Transparency Report
Publish a detailed, annual report on: how many requests they get from law enforcement, how much footage they share, how they handle AI training, what their retention policies are, and how they're using aggregated data.

Privacy by Design
Build systems where footage is encrypted end-to-end, where Ring employees can't access footage without explicit authorization, and where data deletion is actually permanent (not just marked for deletion). (See the sketch after this list for what end-to-end encryption means in practice.)

Regulation Engagement
Instead of fighting regulation, work with regulators to develop thoughtful frameworks that balance security and privacy. This actually builds consumer trust long-term.

Algorithm Transparency
Publish research on bias in their AI systems. Explain how their detection algorithms work. Let independent researchers audit the systems.

Explicit Privacy Policies
Make it crystal clear what data Ring collects, how it's used, who it's shared with, and what the implications are. Don't bury this in 50 pages of legal text.

Community Governance
Let users have a say in what features get added and how the network operates. This isn't just good ethics—it's good business.
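
On the Privacy by Design point, here is a minimal sketch of what end-to-end clip encryption means in practice, using the third-party Python cryptography library. It is illustrative only and not a description of Ring's systems: the key lives with the owner, so footage stored on a server is unreadable to anyone without it, including employees.

```python
# Minimal sketch of client-side ("end-to-end" style) clip encryption, purely
# illustrative. The key is generated and kept on the owner's device; the server
# only ever stores ciphertext, so employee access to stored footage yields nothing
# readable. Requires the third-party `cryptography` package.

from cryptography.fernet import Fernet

# Generated and stored on the owner's device (or derived from their passphrase).
owner_key = Fernet.generate_key()
cipher = Fernet(owner_key)

clip_bytes = b"...raw video bytes..."       # placeholder for a recorded clip
ciphertext = cipher.encrypt(clip_bytes)     # what actually gets uploaded

# Server side: without owner_key, ciphertext is opaque.
# Owner side (or an explicitly authorized device) can decrypt:
assert cipher.decrypt(ciphertext) == clip_bytes
```

The hard parts in a real product are key recovery and sharing (for example, granting a neighbor or law enforcement temporary access), which is exactly where design choices reveal priorities.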

None of these are prohibitively expensive. They're just choices about priorities.

FAQ

What exactly is Ring's "zero out crime" initiative?

Ring's leaked documents outlined an internal strategy to reduce crime in neighborhoods through a combination of AI detection, data analytics, and automated responses. The company wants to use its network of cameras and the Neighbors app to help neighborhoods identify and prevent criminal activity. Ring later clarified this doesn't mean mass surveillance, but rather an evolution of their existing neighborhood safety tools and improved AI features.

Does Ring sell my footage to law enforcement or third parties?

Ring doesn't automatically sell footage, but law enforcement can request access to footage. Ring typically requires a warrant or subpoena, though there have been cases where the company provided footage with less formal requests. Footage is not sold to third parties, but Ring does use anonymized footage to train and improve their AI detection systems. Ring's terms of service give them broad rights to use footage for various purposes.

How does Ring's dog tracking feature work and what are the privacy implications?

Ring's AI can identify dogs in camera footage and recognize the same dog across multiple frames or multiple cameras. This is useful for finding lost pets but also demonstrates Ring's capability to identify and track specific subjects across their camera network. The concern is that the technology could theoretically be adapted to track people instead of dogs, creating a neighborhood-level tracking system. Ring claims they don't use this capability for identifying people, only for helpful features like pet detection.

Is Ring's neighborhood surveillance legal?

Ring operates in a legal gray area. Individual Ring cameras are legal and are typically owned by homeowners. The Neighbors app is essentially a voluntary community bulletin board, which is legal. However, the broader question of whether Ring can aggregate data across neighborhoods and use it for predictive analytics is less clear. Some jurisdictions regulate how this data can be shared with law enforcement, but there's no comprehensive federal framework governing Ring's operations.

What are the real privacy risks of using Ring cameras?

The main risks are: footage stored on Ring servers can be accessed by Ring employees or law enforcement; Ring's AI systems may learn biased patterns from training data; footage can be used to identify patterns about your behavior and neighborhood; account security issues could allow hackers to access your footage; and the infrastructure Ring is building could be used for more invasive purposes in the future. Ring's privacy policies and security have been flagged by regulators as areas of concern.

How does Ring compare to government CCTV surveillance?

Ring is less ubiquitous (not every corner has a camera) but also less regulated than most government surveillance systems. Government cameras are typically subject to FOIA requests and operate under specific policies. Ring cameras are private property but networked together through Ring's systems. Government surveillance is more visible and accountable; Ring surveillance is more distributed but less transparent. Different tradeoffs apply to each.

What should I do if I want home security but care about privacy?

Consider: using wired security cameras you control locally rather than cloud-dependent systems; disabling cloud storage if your system offers that option; not joining the Neighbors app if you want to minimize data sharing; using strong passwords and two-factor authentication; checking your Ring account regularly for unauthorized access; understanding your local regulations about how footage can be used by law enforcement; or exploring alternative security companies that prioritize privacy more explicitly than Ring does.

Will Ring's capabilities expand in the future?

Most likely yes. Ring has shown interest in more advanced AI features like predictive analytics and expanded identification capabilities. Technology companies historically expand their capabilities over time, especially when there's business value in doing so. The question isn't whether Ring will try to expand, but whether regulations will limit what they're allowed to do.

What is the FTC doing about Ring?

The Federal Trade Commission has investigated Ring's privacy practices and has authority to impose fines or require behavioral changes if Ring violates consumer protection laws. The FTC is particularly concerned about Ring's data retention practices, employee access to footage, and how data is shared with law enforcement. Congressional representatives have also expressed concerns and proposed legislation that would more clearly regulate how Ring can use AI and share data.

Conclusion: Living With the Ambiguity

Here's the honest truth: Ring's "zero out crime" initiative is neither as benign as the company claims nor as sinister as privacy advocates fear. It's something in the middle, and that middle ground is uncomfortable because it requires thinking about tradeoffs rather than choosing between good and evil.

Ring is building infrastructure that makes neighborhoods safer in measurable ways. People can recover stolen packages, find lost pets, identify people behaving suspiciously. Crime does decrease in neighborhoods with Ring cameras and active Neighbors communities. Those benefits are real.

At the same time, Ring is building infrastructure that could enable invasive surveillance. The technology for tracking individuals across neighborhood cameras exists. The data collection system is in place. The AI capabilities are being developed. Whether Ring uses these capabilities now is almost beside the point—the fact that they could use them creates incentives and opportunities.

The privacy advocates who worry about this aren't paranoid. They're recognizing that surveillance infrastructure, once built, tends to expand. History backs them up. Technologies create capabilities, and capabilities eventually get used.

The practical takeaway: if you use Ring, understand what you're participating in. Your camera isn't just recording what happens on your property—it's part of a network that Ring and law enforcement can potentially access. Your footage contributes to training AI systems that may or may not treat all neighborhoods equally. Your participation in the Neighbors app shapes what information flows through the community.

None of this means you shouldn't use Ring. Security matters. Neighborhoods should be safer. But you should make an informed choice, not a default one.

And for Ring itself: the company has an opportunity to actually address privacy concerns beyond just managing them. They can choose to be more transparent, more privacy-protecting, and more willing to be regulated. Or they can keep navigating this tension between offering more powerful surveillance capabilities and reassuring people that they're not overstepping.

Right now, they're doing the latter. Whether that's sustainable long-term depends on how privacy regulations evolve and how consumers respond to the tradeoffs they're making.

The conversation about Ring isn't really about whether the company is doing something wrong today. It's about what kind of neighborhood-level surveillance infrastructure we want to build, who gets to build it, and how it should be governed. Ring is just one player in that larger conversation, but it's an important one.
