Introduction: The Deal That Brings Mass Surveillance to the Border
In late 2025, the US Customs and Border Protection agency did something that seemed inevitable once you understand how technology works in government: it signed a contract to tap into one of the most controversial facial recognition databases in America. For $225,000 a year, Border Patrol's intelligence units would gain access to Clearview AI's system, which compares photos against more than 60 billion images scraped from the internet without permission.
Let that sink in for a moment. Sixty billion images. Scraped from social media, mugshot databases, driver's license photos, and countless other sources. All converted into biometric templates without anyone's knowledge or consent. Now it's connected to one of the most powerful enforcement agencies in the country.
This isn't some theoretical future where governments have unprecedented surveillance powers. This is happening right now. CBP's Border Patrol Intelligence division and the National Targeting Center already have access. These aren't specialized units that only investigate major terrorism cases. They're the everyday intelligence teams that analyze data to identify people they consider security threats.
What makes this particularly significant is the scope. The contract doesn't specify what photos agents can upload, whether searches include US citizens, or how long images and search results get stored. It's intentionally vague language that gives agencies flexibility to expand use cases after the deal is already signed.
The privacy implications are staggering, but they're not the only concern. The technology itself has real limitations that nobody's talking about enough. When the National Institute of Standards and Technology tested facial recognition systems on border crossing photos that weren't taken specifically for face recognition, error rates exceeded 20 percent. Sometimes much higher. This isn't like matching a high-quality visa photo. This is matching travel photos, surveillance footage, and candid images captured in chaotic border environments.
Meanwhile, civil liberties groups and Congress are finally paying attention. Senator Ed Markey introduced legislation to ban ICE and CBP from using facial recognition altogether. The DHS recently released an AI inventory acknowledging these systems exist. For years, the expansion of biometric surveillance at the border happened mostly in the shadows. Now it's becoming impossible to ignore.
This article breaks down everything about the Clearview AI deal: how it works, what the privacy implications really are, why the technology has serious accuracy problems, and what Congress might actually do about it. If you work in tech, policy, privacy advocacy, or just care about what your government is doing with your data, you need to understand this.
TL;DR
- The Contract: CBP signed a $225,000 annual deal with Clearview AI, giving Border Patrol intelligence units access to facial recognition across 60+ billion images
- The Data: Clearview scrapes photos from the internet without consent, converting them to biometric templates that agents can search against
- The Scope: The agreement is vague about what images agents can upload, whether US citizens can be searched, and how long data is retained
- The Problems: Facial recognition systems have error rates exceeding 20% on non-controlled images like border photos, creating false matches
- The Response: Congress introduced legislation to ban ICE and CBP from using facial recognition, citing concerns about mass surveillance expansion
- Bottom Line: The Clearview deal represents a major expansion of government biometric surveillance with minimal transparency or safeguards
What Exactly Is Clearview AI and How Did It Become So Powerful?
Clearview AI started in 2017 as a private company with an unusual approach to facial recognition. While most facial recognition companies license their own datasets or work with government records, Clearview decided to build its database by scraping the internet at massive scale. We're talking about photos from social media sites, news outlets, mugshot databases, driver's license records, and thousands of other sources.
The company's pitch was simple: we'll build the largest facial recognition database in the world. They claimed it would help law enforcement find missing persons, identify suspects, and prevent crimes. It sounded good in principle. In practice, it meant taking billions of photos of people without their permission and converting those photos into biometric data.
Clearview didn't ask permission. It didn't notify people that their photos were being used. The people in those photos had no idea their faces were being catalogued and indexed for identification purposes. This scraping happened at a scale that most people can't even comprehend. Over 60 billion images. That's roughly eight images for every person on Earth.
What makes Clearview different from other facial recognition companies is that it doesn't control the underlying data collection. It just scrapes whatever's publicly available. That's both a feature and a massive problem. The feature is that it can build an enormous database quickly without needing to negotiate with government agencies or social media companies. The problem is that it's ethically questionable, legally murky, and raises serious questions about whether public availability equals permission to use.
When Clearview launched, law enforcement agencies loved it. Here was a tool that could supposedly identify anyone by photo, tapping into a database larger than any government agency had built. Police departments in major cities started using it. ICE used it. FBI used it. By 2020, it became clear that Clearview had essentially created a mass surveillance infrastructure that most of the public didn't even know existed.
Then the backlash started. Privacy advocates sued. New York banned government agencies from using it. Several cities followed. Clearview paid settlements and adjusted its model, but it never stopped doing what it does best: building surveillance databases from internet scraping. The company pivoted to government contracts where it could operate with more protection and less public scrutiny.
The CBP deal is the latest chapter in Clearview's expansion into federal agencies. It's also one of the clearest examples of how surveillance technology spreads through government: one contract at a time, with minimal transparency, and growing usage that often exceeds the original intent.
The $225,000 Contract: Breaking Down the CBP-Clearview Agreement
The actual contract between CBP and Clearview AI tells you a lot about how government agencies approach surveillance technology. It's straightforward and businesslike, but also deliberately vague in ways that matter.
First, the money. $225,000 for one year of access. That's not huge in government spending terms, but it's not pocket change either. For that price, CBP gets "access to 60+ billion publicly available images" and the ability to use Clearview's matching algorithms. The contract language specifies that the service would be used for "tactical targeting" and "strategic counter-network analysis."
That's important language. This isn't a tool reserved for investigating specific terrorism cases or major crimes. It's being embedded into daily intelligence operations. The phrase "tactical targeting" means agents are using it as part of their routine work. "Strategic counter-network analysis" suggests they're using it to build networks of associations between people.
The contract acknowledges that analysts will be handling sensitive personal data, including biometric identifiers. It requires nondisclosure agreements for contractors with access. But here's what it doesn't specify:
What kinds of photos can agents upload? The contract doesn't say. Can they upload selfies that someone tweeted? Photos from arrested people? Photos of someone's kid from Instagram? Unclear.
Can searches include US citizens? The contract doesn't explicitly address this. Given that Clearview's database includes photos scraped from US-based social media and US driver's license records, the answer is almost certainly yes. But the contract doesn't confirm or deny it.
How long are uploaded images and search results retained? The contract doesn't specify retention periods. This matters because uploaded images could theoretically be matched against future database additions, creating permanent surveillance records.
Who else gets access to the tool? The contract identifies CBP's headquarters intelligence division (INTEL) and the National Targeting Center. But it's unclear whether these units can share results with ICE, local law enforcement, or international partners.
The vagueness isn't accidental. Government contracts often intentionally avoid specific limitations so agencies can expand usage without renegotiating. Once CBP has Clearview access integrated into its systems, using it more broadly becomes easier. Each expansion happens quietly. By the time anyone notices, it's normalized.
CBP claims that its intelligence units draw from "a variety of sources," including commercially available tools and publicly available data. The Clearview deal is framed as just another source. But combining Clearview with CBP's existing systems creates something much more powerful than any single tool.
CBP's Existing Surveillance Infrastructure: The Systems That Connect
Understanding what Clearview can actually do for CBP requires understanding the systems already in place. CBP isn't just adding a facial recognition tool to a basic filing system. It's plugging Clearview into a sophisticated intelligence infrastructure that already tracks millions of people.
The Automated Targeting System is the foundation. This is CBP's main database that links biometric data, watch lists, and enforcement records. It's been operating for years, growing with each border crossing, arrest record, and investigation. The ATS doesn't just record crossing information. It builds networks of associations, creating profiles of people connected to individuals of interest.
The Traveler Verification System conducts face comparisons at ports of entry and other border screenings. CBP publicly states that TVS doesn't use information from "commercial sources or publicly available data." This is interesting language. It might suggest that Clearview access would go to a different system, likely the Automated Targeting System.
That's actually more concerning. The ATS isn't just used for border enforcement. It connects to ICE operations far inland. When ICE conducts raids in interior cities, they're often using data from the ATS. The Clearview connection means those operations could potentially use facial recognition built on scraped internet photos.
CBP also maintains intelligence divisions at the headquarters level that focus on longer-term network analysis. These are the units specifically mentioned in the Clearview contract. They're not processing routine border crossing data. They're analyzing patterns, building associations, and identifying what they classify as security threats or criminal networks.
When you map out these systems together, Clearview becomes a connecting tissue. It can match unidentified photos against CBP's existing databases. It can identify people in surveillance footage. It can help agents recognize individuals they've never formally encountered before.
CBP calls this approach "data-driven." Civil liberties groups call it mass surveillance. The truth is probably somewhere in between, but it's leaning toward the surveillance side.
The Clearview Database: Where Those 60 Billion Photos Actually Come From
One of the most important things to understand about Clearview is where the photos actually come from. The company doesn't produce these images. It scrapes them from the internet. That distinction matters because it shows how Clearview is different from typical government databases.
Clearview's original scraping targets included:
Social Media Platforms: Clearview scraped photos from Facebook, Instagram, TikTok, LinkedIn, and other social networks. People posted these photos believing they were sharing with friends or followers. They had no idea their face was being catalogued by a facial recognition company.
Mugshot Databases: Public criminal records often include mugshots. Clearview scraped these, though many states and jurisdictions have different policies about public access to mugshots.
Driver's License Records: Not all states' driver's license photos are accessible through scraping, but some jurisdictions have looser controls. Clearview obtained driver's license data where possible.
News and Media Sites: When news outlets publish photos of people, Clearview scraped them. This includes mugshots, protest photos, accident scenes, and any other publicly published images.
Government Databases: Immigration records, visa photos, and other government images accessible through FOIA or public records became part of the database.
The sheer scale is what makes this different from traditional intelligence databases. CBP's own biometric records might contain 50 million photos. Clearview's database has 60 billion. That's more than a thousand times larger. Most of those images are of people who've never been arrested, never crossed a border, and never consented to facial recognition.
Clearview's response to criticism has always been the same: these photos are publicly available. But publicly available doesn't mean people consented to facial recognition use. A photo on your Instagram account is publicly available in the sense that anyone with internet access can see it. But Instagram's terms of service don't explicitly permit facial recognition companies to scrape and biometrically index your photo.
The practical result is that Clearview created a surveillance database encompassing billions of people around the world without their knowledge or permission. When CBP gets access to this database, every agent with credentials can potentially identify any person whose photo is stored online.
Clearview claims to update and expand the database continuously. That means the 60 billion figure is already outdated. The real number is probably higher. And CBP's access isn't a one-time snapshot. It's ongoing access to an ever-expanding surveillance database.
Technical Limitations Nobody's Really Talking About: The Accuracy Problem
Here's something that doesn't get enough attention in coverage of this deal: facial recognition systems, especially when used on real-world photos, are significantly less accurate than people think.
The National Institute of Standards and Technology conducted extensive testing of facial recognition systems from multiple vendors, including Clearview AI. The results are sobering. When systems are tested on high-quality visa-like photos taken in controlled conditions, they perform reasonably well. But real-world photos? That's a different story.
Border crossing photos present specific challenges. These aren't studio portraits taken with perfect lighting and positioning. They're photos captured in busy border environments. People are wearing hats, sunglasses, scarves. Lighting is inconsistent. Angles are awkward. Some photos were captured by surveillance cameras at oblique angles.
NIST found that on border photos "not originally intended for automated face recognition," error rates often exceeded 20 percent. Some algorithms performed even worse. We're talking about false positives, missed matches, and misidentifications that could have real consequences.
This creates a fundamental problem with how facial recognition is actually used in practice. A 20 percent error rate doesn't sound catastrophic until you realize what it means operationally.
If a system returns candidate matches with a 20 percent error rate, roughly one in five results points to the wrong person. If CBP is using Clearview to generate candidate lists for human review, that false positive rate becomes a real problem: an agent sees a match, it looks plausible, and they act on it.
NIST identified the core dilemma: facial recognition systems can't reduce false matches without also increasing the rate at which they fail to recognize the correct person. This is a mathematical trade-off. You can't have perfect precision and perfect recall simultaneously. You have to choose which type of error you're willing to tolerate.
NIST recommends that agencies operate these systems in "investigative" mode, where results show ranked candidates for human review rather than a single confirmed match. But there's a catch. When the system is searching for someone not already in the database, every "match" is guaranteed to be wrong. The system will still generate results. It will rank them by confidence. Agents will review them. And they'll all be false positives.
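The trade-off NIST describes can be illustrated with a toy simulation. The score distributions below are invented for illustration and have nothing to do with Clearview's actual algorithm; the point is only the shape of the problem: raising the match threshold cuts false matches but inevitably raises missed matches.

```python
import random

random.seed(0)

# Toy similarity scores. Genuine pairs (same person) score higher on
# average than impostor pairs (different people), but the distributions
# overlap -- that overlap is exactly the dilemma NIST describes.
genuine = [random.gauss(0.75, 0.12) for _ in range(10_000)]
impostor = [random.gauss(0.45, 0.12) for _ in range(10_000)]

def error_rates(threshold):
    """False-match rate and false-non-match rate at a given cutoff."""
    false_match = sum(s >= threshold for s in impostor) / len(impostor)
    false_non_match = sum(s < threshold for s in genuine) / len(genuine)
    return false_match, false_non_match

for t in (0.55, 0.65, 0.75):
    fmr, fnmr = error_rates(t)
    print(f"threshold={t:.2f}  false matches={fmr:.1%}  missed matches={fnmr:.1%}")
```

Run it and you'll see the two error rates move in opposite directions as the threshold rises. Whoever configures the system is choosing, implicitly or explicitly, which kind of mistake to make more often.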
CBP hasn't publicly explained how it intends to configure and use Clearview. Whether it's following NIST's recommendations is unknown. The contract doesn't specify parameters for accuracy thresholds or false positive rates.
Expansion Beyond the Border: Where CBP and ICE's Surveillance Reaches
One reason the Clearview deal matters so much is that CBP and ICE don't just operate at the border. They operate throughout the country. Immigration and Customs Enforcement conducts raids and investigations in interior cities, often using CBP intelligence data.
The geography of enforcement has been expanding for years. ICE agents conduct workplace raids, hospital checks, and neighborhood operations in cities thousands of miles from the border. When they do, they're often working with information from CBP's intelligence databases.
The Automated Targeting System links these operations together. A person identified through border surveillance might later be targeted in an ICE interior operation. Information flows from border intelligence units to local ICE agents. The Clearview connection means that flow of information could now include facial recognition matches against billions of internet photos.
CBP's intelligence headquarters unit, specifically mentioned in the Clearview contract, isn't focused on routine border processing. It's analyzing networks and building profiles on individuals and groups that it considers threats. The people targeted in these analyses might be:
Border crossers with suspected gang affiliations: ICE uses gang databases and associates gang members with particular organizations. A facial recognition match against Clearview data could trigger additional investigation.
Individuals with prior immigration violations: CBP intelligence tracks people with previous immigration encounters. These individuals can be more heavily scrutinized in future interactions.
People connected to persons of interest: Network analysis builds chains of association. If you're connected to someone CBP is targeting, you become more likely to be investigated.
Asylum seekers or migrants with specific nationalities: Intelligence units track migration patterns by nationality. Clearview access could enable more detailed tracking.
The point is that Clearview isn't just for border security in the traditional sense. It's infrastructure for a much broader surveillance apparatus that reaches Americans and foreign nationals throughout the country.
International Implications: How This Affects People Globally
Clearview's database includes photos of people worldwide. CBP's access to this database has international implications that extend far beyond the US border.
When CBP uses Clearview, it gains the ability to identify people from virtually any country whose photos appear online. This affects asylum seekers who might have social media photos. It affects political activists whose photos have been published by international news outlets. It affects refugees whose images have been shared by humanitarian organizations.
CBP can use this identification capability to:
Track asylum seekers before they even present themselves: If someone's photo is on a humanitarian organization's website or circulated through migrant networks, CBP could potentially identify them and have information prepared before they arrive for an asylum interview.
Connect individuals across multiple border crossings: People who cross at different locations or with different identities can be connected through facial recognition.
Share intelligence with other countries: CBP can potentially share information derived from Clearview searches with foreign governments as part of intelligence sharing agreements.
Many countries with authoritarian governments have expressed interest in facial recognition systems. If CBP shares intelligence with foreign partners, people fleeing persecution could be identified and potentially caught.
The international implications are particularly concerning for journalists, activists, and political opponents of authoritarian regimes who might be crossing borders or seeking asylum in the US.
Congressional Response: What Lawmakers Are Actually Trying to Do
Senator Ed Markey introduced legislation that would simply ban ICE and CBP from using facial recognition technology altogether. No carve-outs for specific circumstances. No regulations on how it's used. Just a prohibition.
This is significant because it represents a shift in how Congress is thinking about biometric surveillance. Rather than trying to regulate the technology through accuracy standards or oversight frameworks, Markey's legislation takes the position that mass facial recognition surveillance is incompatible with basic privacy rights.
The bill's language reflects concerns about how facial recognition has been deployed without clear limits or transparency:
"Embedding surveillance in enforcement operations": Congressional critics argue that facial recognition is becoming routine infrastructure rather than a tool for specific investigations. Once it's embedded, expansion becomes easier.
"No public consent or disclosure": Most Americans don't know that CBP has access to facial recognition built on billions of scraped photos. Congress is concerned about the lack of transparency.
"Insufficient safeguards": The existing legal framework for how CBP uses surveillance technology doesn't adequately protect civil liberties. There are no clear standards for accuracy, no restrictions on who can be searched, no transparency about retention.
Markey's approach is more aggressive than regulatory approaches that other lawmakers have proposed. Some members of Congress have suggested instead that agencies should be required to:
- Publish accuracy metrics for facial recognition systems used on border photos
- Get a warrant before using facial recognition on US citizens
- Disclose how long data is retained
- Conduct impact assessments on civil liberties before deploying systems
- Provide transparency reports on how many searches are conducted
The challenge with regulatory approaches is that they assume facial recognition can be made acceptable with the right safeguards. Markey's legislation assumes it can't.
Privacy Advocates' Concerns: Why This Deal Is Different
Privacy organizations have been critical of facial recognition for years. But there's something about the CBP-Clearview deal that seems to have escalated the concern level.
The Electronic Frontier Foundation, the Center for Democracy and Technology, and other groups have pointed to several specific problems:
Scale and Scope: Previous facial recognition deployments, while concerning, were more limited in scope. A police department using facial recognition on local surveillance cameras affects people in that city. CBP's access to Clearview means agents can potentially identify anyone whose photo appears online. That's a different category of mass surveillance.
Lack of Due Process: CBP doesn't need to get a warrant to search Clearview. There's no judicial review of whether the search is justified. An agent can search someone's photo for any reason or no reason. That's fundamentally different from a warrant-based search.
Targeting of Vulnerable Populations: Immigration enforcement disproportionately affects undocumented immigrants and asylum seekers. These populations are already vulnerable. Adding facial recognition surveillance makes them even more vulnerable.
Mission Creep Risk: Once the system is in place, expanding its use becomes easier. CBP started with border enforcement. Now it's intelligence analysis. Eventually it could be routine screening of everyone.
Racial Justice Implications: Facial recognition systems have documented accuracy disparities, particularly for people of color. Studies show these systems have higher false positive rates for Black faces, Asian faces, and other non-white populations. Using these systems in enforcement operations amplifies existing discrimination.
Privacy advocates also point to the history of surveillance expansion. When new surveillance tools are deployed in law enforcement, they almost always expand beyond their original intended use. Phone records that were meant for terrorism investigations became available to local police. Surveillance cameras installed at borders ended up monitoring protests. CCTV systems justified for counter-terrorism were used to identify jaywalkers.
The CBP-Clearview deal follows this pattern. It starts as a tool for intelligence analysis of specific security threats. Within a few years, it could be routine screening of everyone crossing the border.
The Privacy-Security Debate: Is There Actually a Trade-Off?
Government officials often frame surveillance as a necessary trade-off: accept some loss of privacy to gain security benefits. The CBP argument for Clearview essentially boils down to this: facial recognition helps us identify threats faster.
But this framing assumes the premise is true. Does facial recognition actually make us more secure? Or does it just give the appearance of security while eroding privacy?
The security case for Clearview would be something like: if a known security threat tries to cross the border, Clearview's massive database helps us identify them. This could prevent crimes or terrorism.
That sounds reasonable. But it doesn't address several problems:
First, the massive database includes billions of innocent people. The signal-to-noise ratio is terrible. If you're searching for security threats in a database of 60 billion photos, most of your results are going to be innocent people.
Second, facial recognition can create false alarms that consume significant resources. An agent gets an alert that someone matching a threat profile entered the country. They launch an investigation. It's a false positive. Resources are wasted. Meanwhile, actual threats might not be caught because resources were allocated to false alarms.
Third, if actual threats know that facial recognition is being used, they can take countermeasures. Basic disguises, sunglasses, altered appearance all reduce facial recognition effectiveness. The people most motivated to evade detection are likely to be somewhat successful.
Fourth, mass surveillance creates a chilling effect on legitimate behavior. People know they're being monitored. They change their behavior accordingly. This can discourage asylum seekers from seeking protection, migrants from pursuing better lives, and activists from engaging in protected speech.
The privacy-security trade-off assumes that security benefits are real and substantial. But in many cases, the security benefits of mass surveillance are overstated while the privacy costs are understated.
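The signal-to-noise point above can be made concrete with rough base-rate arithmetic. Every number here is invented to show the shape of the problem, not a real CBP or Clearview figure: even with a generously low per-photo error rate, a search across a mostly innocent database of this size flags overwhelmingly innocent people.

```python
# Illustrative base-rate arithmetic -- all figures are hypothetical.
database_size = 60_000_000_000    # photos searched
actual_threats = 1_000            # hypothetical genuine targets among them
false_positive_rate = 0.001       # a generously low 0.1% per-photo error
true_positive_rate = 0.99         # assume real targets are almost always found

false_alarms = (database_size - actual_threats) * false_positive_rate
true_hits = actual_threats * true_positive_rate

# Of everything the system flags, what fraction is actually a threat?
precision = true_hits / (true_hits + false_alarms)
print(f"flagged photos:   {false_alarms + true_hits:,.0f}")
print(f"real threats hit: {true_hits:,.0f}")
print(f"share of flags that are real threats: {precision:.6%}")
```

Under these assumptions the system flags tens of millions of photos to catch fewer than a thousand real targets: the overwhelming majority of flags are false alarms, which is the resource-drain problem described above.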
Past Failures: How Facial Recognition Has Been Misused Before
If this is your first time thinking about facial recognition and government surveillance, it might seem like a reasonable tool. If you've been paying attention to how these systems have actually been used, you'd be concerned.
The history of facial recognition deployment in law enforcement includes documented mistakes and misuse:
Detroit Police Wrongful Arrest: In 2020, Robert Williams was arrested in Detroit based on a facial recognition match. The match was wrong. Williams had never been arrested, had no connection to the crime. He spent 30 hours in police custody before being released. The facial recognition system had made a false identification, and police acted on it without proper verification.
ICE Surveillance of Immigrants: ICE has used facial recognition to identify undocumented immigrants in interior cities. These operations have led to arrests of people at courthouses, hospitals, and public spaces. Facial recognition enabled these enforcement operations that wouldn't have been possible without the technology.
NYPD Surveillance of Protests: New York Police Department used facial recognition to identify people at racial justice protests. Critics pointed out that this had a chilling effect on protest participation.
China's Model: Internationally, China's mass facial recognition system is the most comprehensive example of what happens when facial recognition is deployed without privacy protections. The system is used to monitor political opponents, religious minorities, and dissidents.
These examples matter because they show that once facial recognition systems are deployed, they tend to be used in ways that disproportionately affect vulnerable populations. Civil liberties protections that exist on paper don't always prevent misuse in practice.
The Broader Context: How Surveillance Has Evolved at the Border
The CBP-Clearview deal isn't the first facial recognition system deployed at the border. Understanding how surveillance at the border has evolved over time helps explain why this deal is significant.
Border surveillance has been increasing for decades. The US-Mexico border now has:
Sensor arrays and surveillance cameras: Thousands of cameras monitoring border areas, many using automated detection systems.
Biometric collection systems: Fingerprints, iris scans, and facial photos collected from everyone crossing the border.
License plate readers: Automated systems that capture and store license plate information from vehicles crossing.
Drone surveillance: Unmanned aircraft conducting continuous monitoring of border areas.
Database integration: Information from all these systems flowing into centralized databases that support intelligence analysis.
Each of these systems represents a layer of surveillance. Combined, they create a comprehensive surveillance infrastructure at the border. Clearview is another layer.
What's different about Clearview is that it connects border-based surveillance to the broader internet. It's not just monitoring people who are currently crossing or trying to cross. It's monitoring anyone whose photo appears online.
This represents an evolution from border-specific surveillance to mass population surveillance that happens to be deployed at border agencies.
Data Retention: What Happens to Your Information
One of the most underappreciated aspects of surveillance systems is data retention. Even if a search is legitimate, keeping data indefinitely creates privacy problems.
The Clearview contract doesn't specify how long CBP retains:
Photos uploaded by agents: If an agent uploads a photo for matching, how long is it stored? Can it be matched against new photos added to Clearview's database? Can other agents access it?
Search results: If an agent searches for someone and gets a match, how long is that result retained? Is it logged? Can it be accessed later?
Metadata about searches: Who searched for whom? When? From what device? This metadata can be as revealing as the search results themselves.
Without clear retention policies, data can persist indefinitely. Metadata from searches can be used to build behavioral profiles of how CBP agents use the system. Uploaded photos can be cross-matched against future database additions.
Historically, government agencies have been reluctant to delete data. Once information exists in a system, deleting it is administratively burdensome. It's easier to just keep it. The longer data is retained, the more ways it can potentially be misused.
Technical Architecture: How Clearview Actually Integrates with CBP Systems
The actual technical integration of Clearview with CBP systems is important but largely unknown. The contract doesn't provide implementation details. But we can infer from how CBP's other systems work.
Clearview probably provides:
API access: Clearview likely offers an API that CBP systems can call to search the database. An agent using CBP's internal tools could initiate a Clearview search without leaving the CBP interface.
Batch processing capability: The ability to submit multiple photos for matching at once, useful for intelligence analysis work that processes many photos.
Historical data access: CBP can probably search Clearview's database with historical photos, not just real-time images.
Result ranking and scoring: Clearview returns ranked candidates with confidence scores. Higher confidence scores get investigated first.
The technical architecture matters because it determines how frequently the system can be used and how easy it is to integrate into routine workflows. If searching Clearview requires special authorization and significant effort, it will be used less frequently. If it's integrated into standard intelligence analysis tools, agents will use it routinely.
Given that this is a $225,000 contract, it probably includes integrated API access and query limits high enough for routine use. That means agents can use it regularly without special approval.
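As a rough illustration of what an integrated workflow might look like from an analyst's tool, here is a minimal sketch. The response shape, field names, and scores are invented for illustration; Clearview's actual API is not public, and nothing here represents its real interface.

```python
from dataclasses import dataclass

# Hypothetical sketch: turning raw facial-recognition API results into
# a ranked candidate list for analyst review. All names and values are
# illustrative assumptions, not Clearview's real API.

@dataclass
class Candidate:
    source_url: str    # where the matched photo was originally found
    confidence: float  # similarity score in [0, 1]

def rank_candidates(raw_results: list) -> list:
    """Sort raw results by confidence, highest first, so analysts
    review the strongest matches before weaker ones."""
    candidates = [Candidate(r["url"], r["score"]) for r in raw_results]
    return sorted(candidates, key=lambda c: c.confidence, reverse=True)

# Simulated response for one uploaded probe photo.
mock_response = [
    {"url": "https://example.com/a", "score": 0.62},
    {"url": "https://example.com/b", "score": 0.91},
    {"url": "https://example.com/c", "score": 0.48},
]

ranked = rank_candidates(mock_response)
print([c.confidence for c in ranked])  # [0.91, 0.62, 0.48]
```

The point of the sketch is the workflow shape: if this ranking happens inside the tools agents already use, a Clearview search becomes one more routine step rather than a special request.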
Comparison to Other Countries: How US Border Surveillance Stacks Up
Understanding how US border surveillance compares to other countries provides perspective on whether Clearview is part of a broader international trend or uniquely American.
Canada: The Canada Border Services Agency has facial recognition capability at major entry points. It's integrated into the travel system for verifying identity. It's less expansive than CBP's planned use because it's primarily for identity verification of people presenting documents, not for intelligence analysis.
European Union: The EU has facial recognition systems for border control, but they're constrained by stronger privacy regulations. The systems are primarily used for identity verification, not intelligence gathering.
China: The government operates the world's largest facial recognition system with hundreds of millions of cameras. It's used for mass surveillance across the country. It's also used to monitor border areas, political opponents, and religious minorities.
Russia: Border surveillance includes facial recognition integrated with visa and passport systems.
The CBP-Clearview approach is notable because it combines facial recognition with intelligence analysis on a massive scale and with relatively limited legal constraints. It's somewhere between the regulated European approach and the mass surveillance approach of authoritarian countries.
The trajectory concerns many observers because it looks like the US is moving toward the authoritarian model, just more slowly.
The Role of Contractors: Who Actually Controls Access?
The Clearview contract includes provisions for nondisclosure agreements for contractors with access. This raises an important question: who actually has access to this system?
CBP is a government agency, but it works with contractors and intelligence partners. The contract mentions that contractors need NDAs, which implies contractors will have access. That means private companies and private individuals with security clearances can search Clearview's database as part of government investigations.
This outsourcing of surveillance capability is significant because:
Private companies aren't subject to all government rules: Contractors operate under different rules than government employees. While they sign NDAs, their accountability is different.
Information can flow to corporate partners: Intelligence obtained through Clearview searches can potentially be shared with other contractors or government partners.
There's less transparency: Public records requests for information about how government employees use surveillance tools are sometimes granted. Requests for information about contractor access are often denied on proprietary grounds.
The contractor question matters because it suggests the surveillance network is even broader than just government employees. Private sector workers with security clearances could be using Clearview.
Future Expansion: Why Today's Limits Probably Won't Hold
If history is any guide, the Clearview contract will be followed by expansion. That's not speculation. It's how surveillance systems actually evolve.
The first expansion will probably be justified on security grounds. A few successful cases where Clearview helped identify threats will be publicized. Those successes will be used to justify broader access. More units will get access. More search types will be authorized.
The second expansion will probably lower the threshold for use. Maybe agents will start using it for lower-level investigations, not just intelligence analysis.
The third expansion will probably be to integrate it with other systems. Rather than searching Clearview separately, searches might happen automatically against all travel photos.
Each expansion will be small enough to avoid major controversy. Each will be justified on security or efficiency grounds. Within five years, facial recognition matching against billions of internet photos could be routine for border enforcement.
The history of surveillance technology shows this pattern. The Patriot Act started with wiretapping suspected terrorists. It ended up enabling warrantless phone records collection on millions of Americans. Surveillance camera networks started for traffic monitoring. They became tools for tracking people. GPS tracking started for criminals. It expanded to routine law enforcement.
Once surveillance infrastructure is in place, constraints on how it's used tend to erode over time.
FAQ
What is Clearview AI and why do privacy advocates criticize it?
Clearview AI is a facial recognition company that built its database by scraping over 60 billion photos from the internet without permission. Privacy advocates criticize it because the system enables identification of virtually anyone whose photo appears online without their consent or knowledge, and because it's been deployed by law enforcement agencies without clear legal restrictions or transparency.
How does facial recognition technology work in practice at the border?
Facial recognition systems convert a photo into a mathematical representation (a biometric template) that can be compared against a database of similar templates. When an agent searches with a photo, the system returns candidates ranked by similarity. The agent reviews the results to determine whether there's a match. Error rates can exceed 20% on real-world border photos, meaning false identifications are common.
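As a toy illustration of that ranking step, here is a sketch using cosine similarity between short made-up template vectors. Real systems use proprietary models with much larger embeddings; the names and numbers here are invented.

```python
import math

# Toy template matching: each "template" is a short embedding vector,
# and similarity is cosine similarity. All vectors are made up.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(probe, database):
    """Compare a probe template against every enrolled template and
    return (identity, similarity) pairs, best match first."""
    scored = [(name, cosine_similarity(probe, template))
              for name, template in database.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

database = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}
probe = [0.88, 0.15, 0.25]  # template extracted from an uploaded photo
results = search(probe, database)
print(results[0][0])  # person_a -- a human still has to review the match
```

Note that the system always returns a "best" candidate, even when the right answer is that the person isn't in the database at all. That is why human review matters.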
What specific privacy concerns does the CBP-Clearview deal create?
The contract doesn't specify what photos agents can upload, whether US citizens can be searched, or how long data is retained. This vagueness means the system could potentially be used to identify Americans, including protesters, political opponents, or journalists. The lack of transparency means the public doesn't know how frequently the system is used or how many people are identified through it.
How does facial recognition data collected at the border connect to interior enforcement?
CBP shares intelligence with ICE through systems like the Automated Targeting System. Information and photos obtained at the border can flow to ICE, which uses them to conduct enforcement operations in interior cities. Facial recognition matches obtained through Clearview searches become part of this intelligence that can follow people across the country.
What does "tactical targeting" mean in the CBP-Clearview contract?
Tactical targeting indicates that facial recognition isn't reserved for specific investigations but is embedded into daily intelligence operations. Agents can use it routinely as part of their normal work identifying people of interest, mapping networks, and supporting enforcement operations. This is different from a tool used only for specific high-priority cases.
Has Congress attempted to restrict facial recognition use by border agencies?
Yes. Senator Ed Markey introduced legislation that would ban ICE and CBP from using facial recognition technology altogether, citing concerns about mass surveillance without adequate safeguards or transparency. Other lawmakers have proposed regulatory approaches requiring accuracy disclosures, warrants for US citizen searches, and transparency reports.
What are the accuracy limitations of facial recognition systems used on border photos?
The National Institute of Standards and Technology tested facial recognition systems and found error rates exceeded 20% on border photos taken in real-world conditions (not controlled studio settings). The systems can't reduce false positives without increasing missed matches. This trade-off means systems must operate in investigative mode with human review of results.
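The trade-off between false positives and missed matches can be illustrated with a toy calculation. The similarity scores below are invented; real distributions come from large evaluations like NIST's.

```python
# Toy illustration of the threshold trade-off. "Genuine" scores come
# from photo pairs of the same person; "impostor" scores come from
# pairs of different people. All values are made up.

genuine  = [0.92, 0.85, 0.78, 0.70, 0.55]  # same-person pairs
impostor = [0.65, 0.60, 0.40, 0.30, 0.20]  # different-person pairs

def rates(threshold):
    """Return (false-match rate, missed-match rate) at a threshold."""
    false_match = sum(s >= threshold for s in impostor) / len(impostor)
    missed      = sum(s < threshold for s in genuine) / len(genuine)
    return false_match, missed

# Raising the threshold cuts false matches but misses more true ones.
print(rates(0.50))  # (0.4, 0.0): permissive -- catches everyone, 40% false matches
print(rates(0.75))  # (0.0, 0.4): strict -- no false matches, misses 40% of true ones
```

No threshold makes both numbers zero at once, which is why these systems have to run in investigative mode with a human checking the candidate list.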
Could facial recognition data be shared with foreign governments?
Yes. CBP shares intelligence with international partners as part of bilateral agreements. If facial recognition matches obtained from Clearview searches are included in intelligence sharing, individuals could potentially be identified and located by foreign governments, which is particularly concerning for asylum seekers fleeing persecution.
What does the history of surveillance expansion tell us about future Clearview use?
Historically, surveillance tools expand beyond their original intended use. Systems justified for terrorism investigation become tools for routine law enforcement. Cameras installed at borders end up monitoring protests. The pattern suggests Clearview access will likely expand from current intelligence analysis uses to broader enforcement operations over time.
What can individuals do if concerned about facial recognition surveillance?
You can contact congressional representatives to support restrictions on facial recognition use. You can minimize your digital footprint by reducing the number of photos of yourself published online. You can support privacy organizations working to restrict these systems. You can educate others about how facial recognition surveillance actually works and the risks it creates.
The Future of Biometric Surveillance: What Comes Next
The CBP-Clearview deal is likely just the beginning. The infrastructure for comprehensive biometric surveillance is being built in pieces. Facial recognition is one component. Other biometric systems are expanding too.
The Department of Homeland Security operates iris recognition systems at airports. CBP collects fingerprints at the border. These systems, combined with facial recognition, create multi-modal biometric identification. A person can be identified through multiple biometric systems simultaneously.
The trajectory is clear: the US is building the technical infrastructure for comprehensive biometric identification of anyone entering or exiting the country. Within a decade, crossing a border could involve automated identification through facial recognition, iris scanning, and fingerprinting, with results flowing instantly to intelligence databases.
Most of this is happening without significant public debate or democratic deliberation. Each contract is negotiated individually. Each system is justified on security grounds. Public awareness lags behind technical capability.
The question Congress and the public need to address is whether this level of biometric surveillance is compatible with the rights and freedoms the country is supposed to represent. That's the real debate that needs to happen.
Conclusion: Why the Clearview Deal Matters for Everyone
The CBP-Clearview contract might seem like a technical detail buried in federal procurement. It's not. It's a major milestone in the expansion of government surveillance infrastructure.
For decades, surveillance technology expanded quietly. Each new tool was justified on security grounds. Each seemed reasonable in isolation. Taken together, they've created a system where government agencies can potentially identify, track, and target people with unprecedented precision.
The Clearview deal matters because it makes this surveillance capability visible. It's no longer theoretical. CBP actually has access to facial recognition built on billions of internet photos. Agents can use it today. The only questions are how they're using it, how often, and whether they'll expand its use.
The privacy implications are significant. Billions of people have their photos in Clearview's database without knowing it or consenting to it. CBP can now search those photos against intelligence data. That's mass surveillance by any reasonable definition.
The accuracy limitations are also significant. With error rates exceeding 20% on real-world photos, facial recognition systems will create false matches that cause harm. Innocent people will be investigated or detained based on misidentifications.
The expansion risks are real. History shows surveillance systems expand beyond their original intended use. What starts as intelligence analysis could become routine border screening.
But there's also opportunity. The visibility of this deal creates an opening for actual regulation and restriction. Congress can act. States can restrict cooperation with federal surveillance. Public pressure can force transparency and limits.
Senator Markey's legislation to ban facial recognition use by ICE and CBP is the most direct approach. It assumes that some technologies are incompatible with civil liberties, and facial recognition surveillance is one of them.
Whether Congress takes action depends partly on public awareness and pressure. Most Americans don't know about the CBP-Clearview deal. Once they do, many will be concerned about what it means for their privacy.
The deal is a test case. How Congress and the public respond to it will signal whether we're serious about protecting privacy in an age of biometric surveillance, or whether we're willing to accept comprehensive surveillance in exchange for promised security benefits.
That's a choice that should be made explicitly, through democratic processes, with full public understanding. Not buried in federal contracting documents and covered only by specialized tech and policy media.
The Clearview deal forces this conversation. What matters now is whether anyone's actually listening.
![CBP's Clearview AI Deal: What Facial Recognition at the Border Means [2025]](https://tryrunable.com/blog/cbp-s-clearview-ai-deal-what-facial-recognition-at-the-borde/image-1-1770830004756.jpg)