
Tesla and Waymo Robotaxis' Hidden Workforce: Remote Operators Revealed [2025]

Government documents expose the human operators behind Tesla and Waymo robotaxis. Learn about remote assistance programs, safety protocols, and what it means...


The Hidden Backbone of Self-Driving Cars

Imagine this: You're sitting in a sleek robotaxi rolling through downtown San Francisco. The car glides smoothly through traffic, makes turns with precision, and stops at red lights. Everything feels autonomous, intelligent, almost magical. But what you don't see is someone thousands of miles away—maybe in Phoenix, maybe in Manila—watching your car's feed on a monitor, ready to step in the moment the software gets confused.

This is the uncomfortable truth about robotaxis that nobody really talks about. Self-driving vehicles, despite years of hype and billions in funding, still need human babysitters. Not always. Not constantly. But frequently enough that this hidden workforce has become absolutely critical to the entire autonomous vehicle ecosystem.

For years, Tesla and Waymo have been vague about these remote assistance programs. Company executives downplay their importance, investors gloss over them in earnings calls, and the general public rarely hears about them. The narrative promoted is one of pure autonomy: robots that drive themselves without human intervention. But that's not quite the full story.

Recent government filings have changed this. Waymo submitted detailed documentation to Senator Ed Markey's office. Tesla provided information to the California Public Utilities Commission. Together, these documents paint a more complete picture of what's really happening when a robotaxi needs help. And the picture is complicated.

The question isn't whether these cars can drive themselves—they obviously can most of the time. The real question is: what happens in the moments when they can't? How many humans are required to keep thousands of vehicles safe on public roads? Where are those humans located? What's their training like? And most importantly, if a human makes a mistake while controlling a remote vehicle, who's responsible if someone gets hurt?

These aren't academic questions. They're safety-critical issues that will determine whether autonomous vehicles can be trusted in cities across America and the world. The details matter because unlike a human driver who makes real-time decisions behind the wheel, remote operators are making safety-critical choices with latency, limited information, and the constant pressure that any mistake could be catastrophic.

What "Remote Assistance" Actually Means

Let's start with clarity. "Remote assistance" doesn't mean the same thing across the industry, and that's part of the problem. There's a spectrum of human intervention, and different companies position themselves at different points on that spectrum.

At one end, you have cars that are genuinely autonomous for 99% of the time, where human operators only step in during edge cases they've literally never encountered before. At the other end, you have cars that are semi-autonomous, with humans ready to take over instantly if anything goes wrong. The terminology companies use—"remote operators," "remote assistance agents," "safety drivers"—can obscure where exactly they fall on this spectrum.

Waymo's definition is instructive. In their filing with Senator Markey, the company states that remote assistance agents "provide advice and support to the Waymo Driver but do not directly control, steer, or drive the vehicle." This is important language. Waymo is explicitly denying that their cars are remote-controlled vehicles. The humans aren't pilots. They're advisors.

How does this actually work? When a Waymo encounters a situation it's uncertain about, the vehicle's software recognizes it can't proceed safely. Instead of guessing or taking a potentially dangerous action, the car essentially raises its hand and asks for help. This request goes to a remote assistance center. A human operator sees the situation on their screen and provides advice or data that helps the vehicle make a better decision. The vehicle's software then uses that information to decide what to do next.

The crucial part: the vehicle can accept or reject the human's input. This isn't a human manually steering the car. It's more like a person answering a question that the car's AI asks.
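To make the advisory model concrete, here is a minimal sketch of how such a request/advice loop could be structured, based only on Waymo's public description that operators "provide advice" and the vehicle can accept or reject it. All class names, fields, and the fallback behavior are illustrative assumptions, not Waymo's actual interface.

```python
from dataclasses import dataclass
from enum import Enum

class Proposal(Enum):
    PROCEED = "proceed"
    REROUTE = "reroute"
    WAIT = "wait"

@dataclass
class AssistanceRequest:
    vehicle_id: str
    scene_summary: str               # what the vehicle is uncertain about
    candidate_actions: list[Proposal]

@dataclass
class OperatorAdvice:
    suggested_action: Proposal
    note: str

def resolve(request: AssistanceRequest, advice: OperatorAdvice) -> Proposal:
    """The vehicle, not the operator, makes the final call.

    The operator's suggestion is treated as one more input: if the onboard
    planner judges the suggested action unacceptable given its own sensor
    data, it falls back to the most conservative option (hypothetical rule).
    """
    if advice.suggested_action in request.candidate_actions:
        return advice.suggested_action   # accept the advice
    return Proposal.WAIT                 # reject it and stay stopped

# Example: the car is unsure whether a double-parked truck will move.
req = AssistanceRequest("veh-042", "double-parked truck blocking lane",
                        [Proposal.WAIT, Proposal.REROUTE])
adv = OperatorAdvice(Proposal.REROUTE, "truck has hazards on, likely parked")
print(resolve(req, adv))   # Proposal.REROUTE
```

The key design point the sketch captures is that the human output is advisory data fed into the planner, not a steering command.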

Tesla's approach appears somewhat different, at least based on what's been disclosed publicly. Tesla has mentioned using "chase cars"—actual vehicles following the robotaxis, driven by humans who can intervene if needed. This is a more hands-on form of safety monitoring. Additionally, Tesla's filing to California's Public Utilities Commission mentions "remote operators" based in Austin and the Bay Area, but provides frustratingly few details about how these operators actually function or how often they're needed.

The distinction matters enormously. With Waymo's advisory model, a single human operator can potentially handle multiple vehicles simultaneously, providing guidance when needed. With Tesla's chase car model, you need a dedicated human driver for direct intervention, which limits scalability. This is why Waymo can operate 3,000 robotaxis with only about 70 remote assistance agents on duty at any given time—a ratio that would be impossible if humans were directly controlling the vehicles.

But here's what's concerning about these descriptions: they're mostly self-reported. The companies are telling us what they do, but there's limited independent verification. The government documents provide some structure and oversight, but they're not rigorous audits of actual operations.

The Scale of the Problem: Numbers That Don't Add Up

Waymo says it has roughly 3,000 robotaxis in operation across six metropolitan areas. At any given moment, about 70 remote assistance agents are available to help. That's a ratio of about 43 vehicles per human operator.

Think about what that number implies. If each operator can handle multiple vehicles simultaneously, it means the vast majority of the time, these vehicles are operating completely autonomously. They're making decisions, navigating traffic, avoiding obstacles, all without human intervention. The 70 agents are there for edge cases—the unusual situations the software encounters.
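As a quick sanity check on what that ratio implies, the arithmetic below uses only the figures Waymo disclosed (3,000 vehicles, 70 agents on duty); the per-vehicle request rate and handling time are assumptions chosen purely to show why the true rate must be low.

```python
# Figures from Waymo's filing: ~3,000 vehicles, ~70 agents on duty at once.
vehicles = 3_000
agents_on_duty = 70
print(f"{vehicles / agents_on_duty:.0f} vehicles per agent")   # ~43

# Illustrative only: if every vehicle asked for help once per hour and each
# request took 2 minutes to handle, 70 agents could not keep up.
minutes_needed_per_hour = vehicles * 1 * 2       # 6,000 operator-minutes
minutes_available_per_hour = agents_on_duty * 60 # 4,200 operator-minutes
print(minutes_needed_per_hour <= minutes_available_per_hour)   # False
```

The staffing level is only consistent with requests being far rarer than once per vehicle-hour, which is exactly the figure the company declines to publish.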

But here's what we don't know: how often do those edge cases actually occur? Waymo doesn't disclose this. In their filing, they don't say "our vehicles call for help X times per day" or "0.1% of driving decisions involve remote assistance." This opacity makes it impossible for regulators or the public to assess how autonomous these vehicles truly are.

There's a world of difference between vehicles that need help 1% of the time and vehicles that need help 10% of the time. If it's the latter, then calling them "autonomous" becomes a bit of a marketing stretch. They're semi-autonomous, supported by human operators.

Tesla is even more opaque. The company doesn't disclose how many robotaxis it operates (estimates range from hundreds to possibly 2,000), how many remote operators it employs, or how frequently they're called upon. The company's filing mentions that remote operators exist in Austin and the San Francisco Bay Area, that they're US-based, and that they have valid driver's licenses. But the operational specifics remain secret.

Elon Musk has claimed that Tesla's operations are becoming less dependent on human intervention, with some vehicles now operating without chase cars following them. But without data on intervention rates, it's impossible to verify this claim. It could be true that Tesla's autonomy is improving. Or it could be that the company is simply reducing supervision without necessarily improving the underlying technology.

This data void creates a credibility problem. When companies make sweeping claims about their autonomous vehicles being safe and operational, but won't disclose the actual metrics around human intervention, skepticism is warranted. It's like a pharmaceutical company saying a drug is safe and effective but refusing to publish safety data.

Where Are These Operators Located?

One of the most striking revelations in these government filings is geographic. Waymo has admitted that approximately 50% of its remote assistance workers are based in the Philippines. This means that thousands of robotaxis in San Francisco, Phoenix, Austin, Los Angeles, and Atlanta are being monitored by people sitting in call centers halfway around the world.

Waymo argues this isn't a problem. The company states that these overseas operators are licensed drivers in the Philippines, have been trained on US traffic laws and road rules, and undergo the same background checks and drug testing as US-based operators. The company emphasizes that highly complex situations—collisions, police interactions, emergency situations—are handled by trained US-based teams, not overseas workers.

But there's an elephant in this room. These workers are in a different time zone. There's latency in their connection. They may have less familiarity with specific US cities and intersections. And perhaps most importantly, they're substantially lower-cost employees, which is almost certainly why Waymo chose this arrangement. The company can operate its robotaxi service more profitably by outsourcing routine remote assistance to cheaper overseas labor.

There's no evidence that this has caused problems yet. Waymo's robotaxis haven't been involved in major incidents traceable to remote operator error. But the setup creates potential vulnerabilities. What happens when there's a network delay? What if an operator isn't familiar with a specific intersection and gives bad advice? What if a critical oversight occurs because a sleepy 3 AM operator in Manila made a mistake while watching a 6 PM rush hour situation in San Francisco?

Regulatory frameworks haven't caught up to this reality. There's no clear rule saying remote operators must be in the same country as the vehicles, or that they must be employees rather than contractors, or that they must have specific training beyond a driver's license. Waymo's arrangement is likely legal, but whether it's safe is an open question.

Tesla has taken a different approach, at least for now. The company explicitly requires that remote operators be "located domestically." This is presented as a competitive advantage over Waymo. Dzuy Cao, Tesla's AI technical program manager, wrote that the company "requires that its remote operators be located domestically," framing this as part of Tesla's commitment to quality and American jobs.

But here's the tension: if Tesla is genuinely trying to avoid overseas labor costs, its robotaxi service will be more expensive to operate per vehicle than Waymo's. That could affect pricing for consumers or profitability for the company. Or Tesla might use fewer remote operators relative to vehicle count, relying more heavily on chase cars or on autonomous capabilities that may not be as mature.

Neither company has disclosed enough information to know which is true.

Safety Protocols and Worker Vetting

Both companies claim extensive safety protocols for their remote operators. But again, these claims need scrutiny.

Waymo says all remote assistance workers undergo drug and alcohol testing when hired, and that 45% are randomly tested every three months. These are basic workplace safety standards, not exceptional measures. Most transportation companies with safety-sensitive roles do similar screening.

The company also mentions that operators are "trained on US road rules," but doesn't specify the depth of this training. Is it a one-day orientation? A week-long course? Ongoing training with regular assessments? We don't know.

Tesla states that remote operators undergo "extensive" background checks and drug and alcohol testing, with valid US driver's licenses. Again, these are baseline standards, not distinctive. The word "extensive" is vague and unquantified.

Neither company has disclosed:

  • How long the initial training program is
  • What qualifications are required beyond a valid driver's license
  • Whether operators receive ongoing training or only initial training
  • What testing or certification must be passed
  • What happens if an operator makes a mistake
  • How errors are tracked and analyzed
  • Whether operators receive feedback on their decisions
  • What the turnover rate is
  • Whether operators work full-time or part-time
  • What the typical shift length is

These details matter because they determine whether the workforce is well-trained and stable, or whether it's a rotating door of minimally qualified workers. Given that Waymo uses overseas contractors, there may be additional turnover challenges. And given that neither company discloses these metrics, we can't assess the true quality of their remote assistance programs.

There's also a question about liability and accountability. If a remote operator gives bad advice that contributes to an accident, who's responsible? Is it the operator personally? The company? Both? The government filings don't address this at all.

The Technical Challenge: Knowing When to Ask for Help

One of the hardest problems in autonomous driving isn't about the technology that drives the car. It's about the technology that decides when to ask for help.

Philip Koopman, an autonomous vehicle safety researcher at Carnegie Mellon University, has emphasized that self-driving systems need to know their own limitations. If a system is uncertain about a situation, it should recognize that uncertainty and request human assistance. If a system doesn't recognize its own limitations, it might confidently make wrong decisions while humans remain unaware there's even a problem.

This is the crux of why remote assistance programs are safety-critical. An autonomous vehicle that never asks for help is either fully capable of handling every situation it encounters, or it's occasionally making mistakes without anyone knowing. The latter scenario is terrifying. If a car plows through an intersection it has misread, never asking for help and with no human safety net, people could get hurt.

So Waymo's approach of having vehicles request assistance when uncertain is theoretically sound. The car says, "I'm not sure what to do here. I need advice." A human looks at the situation and provides guidance. The car decides what to do with that guidance.

But this system only works if the decision-making algorithm is well-calibrated. It needs to ask for help frequently enough that humans catch genuine problems, but not so frequently that the system is basically remote-controlled. It needs to ask for the right kind of help—not just "I'm confused, tell me what to do," but specifically, "I'm unsure about this intersection because X and I need guidance on Y."

Waymo's system seems to be designed this way. Tesla's approach is less clear from the public filings. The company mentions remote operators but doesn't explain the technical mechanism by which they're invoked.

There's also a deeper problem here: what about the situations where a vehicle should ask for help but doesn't? These are cases where the autonomous system is confidently wrong. The car thinks it understands the situation, makes a decision, and that decision is incorrect. If no human is monitoring the specific vehicle at that moment, or if the monitoring is via a chase car that's looking for obvious problems but not the subtle ones, these mistakes go unnoticed until something bad happens.

Real-World Failures: When the System Gets Confused

The most visible failures of autonomous vehicles in recent years actually highlight the importance of remote assistance, even if the companies don't always frame them that way.

Take the December 2024 power outage in San Francisco. Traffic lights went out across a large area. Multiple Waymo robotaxis became confused, unable to navigate intersections without functioning signals. Some were trapped in intersections, unsure whether it was safe to proceed. Human intervention was required to get the vehicles out of these situations safely.

This isn't a black mark against Waymo's technology per se. It's actually evidence that their remote assistance program worked as intended. When the vehicles encountered a situation they couldn't handle—intersections without signals—they essentially froze, waited for guidance, and got it. The system didn't try to power through and hope for the best. It stopped and asked for help.

But the incident also revealed a gap: traffic lights are such a fundamental part of urban driving that the system should probably have contingencies for when they fail. Most human drivers can navigate an intersection with broken signals by treating it like a four-way stop. Waymo's vehicles apparently couldn't do this without remote assistance.

Then there's the school bus issue in Austin, Texas. Multiple incidents were reported where Waymo vehicles illegally passed school buses that were stopped and loading or unloading passengers. This is an extremely dangerous and illegal maneuver. The vehicles apparently didn't recognize school buses or misunderstood the rules around them.

These failures led to a software recall. But here's the question that haunts this situation: did remote operators miss these violations in real time, or did they observe them but fail to flag the pattern before regulators got involved? If remote operators were monitoring these vehicles, why didn't they catch the school bus problem earlier?

The filings don't answer this. It's another gap in transparency.

Then there are the ongoing issues with unprotected left turns, incorrect lane changes, and slow response to emergency situations that various autonomous vehicle companies have experienced. Some of these have been caught and corrected by remote operators. Others have made it through to accidents.

Waymo's track record is actually relatively good compared to other autonomous vehicle programs. The company has been cautious, has expanded gradually, and has built systems with multiple safety layers. But even Waymo isn't perfect, and that's partly because even the best autonomous systems sometimes need human judgment.

The Liability and Accountability Problem

Here's a scenario that should keep regulators up at night: A Waymo robotaxi is involved in an accident in Phoenix. A pedestrian is injured. Investigation reveals that the vehicle's remote assistance system requested help from an operator. That operator, based in Manila, looked at the situation and gave advice. The vehicle followed that advice, and the accident happened.

Who's liable? Is it the remote operator, who made a mistake while advising? Is it Waymo, for using an overseas contractor for safety-critical operations? Is it the vehicle's AI system, for not correctly interpreting the operator's advice? Is it the operator's employer in the Philippines, if Waymo uses a contractor?

The legal landscape here is uncharted territory. Traditional product liability law is designed for products with clear manufacturing defects or design flaws. Autonomous vehicles are different. They're partially autonomous, partially human-controlled, operating in complex environments with many possible failure modes.

Waymo's filing mentions that highly complex situations including collisions and interactions with law enforcement are handled by US-based teams, not overseas contractors. But that doesn't fully resolve the liability issue. Even routine assistance requests could potentially affect safety, and if a Philippines-based contractor gives bad advice in a routine situation that contributes to an accident, the liability chain becomes murky.

Tesla's domestic-only approach might offer some liability advantages, at least in theory. If all remote operators are based in the US and are Tesla employees, the chain of responsibility is clearer. But the company hasn't disclosed enough to know whether it's actually structuring its workforce in a way that ensures clear accountability.

This is an area where regulation could and should step in. Requirements could be established for:

  • Where remote operators must be located
  • Whether they must be employees or can be contractors
  • What training and certification they must have
  • What liability framework applies when they give advice that contributes to accidents
  • How error rates must be tracked and reported
  • What incidents trigger investigations or recalls

But as of now, this regulatory framework doesn't exist, and the companies are largely self-governing.

The Economics of Remote Assistance

Waymo's decision to use overseas contractors makes economic sense. A remote assistance operator in the Philippines can be paid a fraction of what a US-based operator would earn for the same work. If an operator in Manila earns $8,000-$12,000 per year with benefits, while a US operator would earn $35,000-$50,000 per year, the savings add up quickly when scaled across thousands of vehicles.

Consider the math: Waymo operates roughly 3,000 vehicles with 70 operators on duty at any given time. If we assume roughly 100-150 total operators working in shifts, and half are overseas, that's 50-75 overseas-based operators. If each costs $10,000 per year versus $40,000 per year for a US-based operator, Waymo saves roughly $1.5 million to $2.25 million annually just on this outsourcing decision.

For a company investing billions in autonomous vehicle development, $2 million annual savings might seem trivial. But it's not about the absolute number. It's about margins. If Waymo is operating thousands of robotaxis, even small per-vehicle cost savings compound across the entire fleet. This is a business model optimization.
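The savings estimate above follows directly from the assumed per-operator costs; the short calculation below just makes that arithmetic explicit (all dollar figures are the article's assumptions, not disclosed numbers).

```python
# Rough labor-cost comparison using the assumed figures above.
cost_overseas_usd = 10_000    # assumed annual cost per overseas operator
cost_domestic_usd = 40_000    # assumed annual cost per US-based operator

saving_per_operator = cost_domestic_usd - cost_overseas_usd
low, high = 50 * saving_per_operator, 75 * saving_per_operator
print(f"${low:,} to ${high:,} saved per year")   # $1,500,000 to $2,250,000
```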

Tesla's stated commitment to domestic operators increases operating costs. But the company frames this as a quality and accountability advantage. There's a legitimate argument here: US-based operators are more familiar with US traffic laws and norms, have fewer communication barriers, and create a simpler liability structure.

But let's be honest: if Tesla can achieve better margins using overseas contractors, and if those contractors prove to be as safe and effective, the company's competitive advantage from domestic-only operators is purely reputational. Once Waymo (or another competitor) proves that overseas-based remote assistance is safe, or once regulators fail to crack down on it, Tesla's competitive advantage dissolves.

This creates perverse incentives. There's a rush to outsource remote assistance to the cheapest global labor market while claiming this choice is about safety and quality. Once one major player does it successfully, others will follow.

Training and Expertise: The Unknown Factor

What's the expertise level of these remote operators? That's perhaps the most important unknown in these government filings.

Waymo claims its overseas operators are licensed drivers in the Philippines, trained on US road rules. But having a driver's license and knowing US traffic laws is far from the same as being qualified to advise autonomous vehicles in real-time.

Consider what's required:

  • Understanding the autonomous vehicle's capabilities and limitations
  • Recognizing what information the vehicle needs to make a safe decision
  • Communicating that information clearly through a digital interface
  • Making decisions quickly, often with incomplete information
  • Understanding edge cases and unusual traffic situations
  • Speaking English fluently (for US operations)
  • Managing multiple situations simultaneously
  • Understanding liability and safety protocols

Are overseas operators trained in all of this? The filings don't say. It's possible they receive comprehensive training and are highly capable. It's also possible they receive minimal training and are basically there to click buttons and relay information when the AI asks.

Waymo emphasizes that complex situations are handled by US-based experts. This suggests a tiered system: routine queries handled by overseas operators, complex situations by US-based experts. If this is accurate, it provides some safeguards. The overseas operators are doing the straightforward stuff, and humans with deeper expertise handle the edge cases.

But what defines "complex"? A collision is clearly complex. A police interaction is clearly complex. But what about an ambiguous intersection scenario? A pedestrian in an unexpected location? A broken traffic signal? Without clear definitions and without oversight of how situations are categorized, it's hard to know if the tiering system actually works as intended.
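To see why the categorization question matters, consider how crude a real triage rule could be in practice. The sketch below is a hypothetical keyword-based router, not Waymo's actual mechanism; it shows how easily an "ambiguous but consequential" situation can fall on the routine side of the line.

```python
# Hypothetical triage rule: route clearly safety-sensitive categories to the
# US-based specialist team, everything else to the general assistance pool.
US_TEAM_KEYWORDS = {"collision", "police", "emergency", "injury"}

def route_request(description: str) -> str:
    text = description.lower()
    if any(keyword in text for keyword in US_TEAM_KEYWORDS):
        return "us_specialist_team"
    return "general_remote_assistance_pool"

print(route_request("Police officer directing traffic at intersection"))
# -> us_specialist_team
print(route_request("Pedestrian standing in travel lane, signal dark"))
# -> general_remote_assistance_pool, despite being safety-relevant
```

However the real system draws this line, the second example illustrates the concern: whoever defines "complex" effectively decides which situations the most experienced operators ever see.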

There's also the issue of decision fatigue and attention. An operator working an overnight shift, monitoring multiple vehicles, will inevitably have periods of lower attention. Decision quality degrades under fatigue. Neither company discloses shift lengths, break policies, or any metrics around operator wellbeing.

Regulatory Response: The Missing Piece

What's remarkable about the government filings is how minimal the regulatory response has been. Senator Markey requested information, and Waymo provided it. California asked for details, and Tesla complied. But there's been no enforcement action, no requirement for transparency, no regulatory standards set.

Compare this to the aviation industry. Commercial aircraft have strict rules about who can be in the flight deck, how they're trained, how they're supervised, what records must be kept. The Federal Aviation Administration has authority and uses it.

The National Highway Traffic Safety Administration (NHTSA) has begun establishing standards for autonomous vehicles, but the focus has been on technical safety and testing, not on human oversight systems. There's no "remote assistance operator certification" requirement, no standards for training, no oversight of overseas outsourcing, no rules about when human intervention is required.

This is a regulatory gap that will likely get filled eventually, but probably only after something goes wrong. A serious accident involving a remote operator error would immediately trigger new regulations. Until then, the industry is largely self-governing.

What should regulation look like? Some possibilities:

  • Remote operators working with safety-critical systems must be certified and continuously trained
  • Companies must track and report intervention rates and error rates
  • Serious incidents involving remote operator error must be investigated
  • Remote operators cannot be located in countries without adequate labor standards or safety oversight
  • Operators working on safety-critical systems must be employees, not contractors
  • There must be clear liability frameworks established before systems are deployed
  • Vehicles must fail safely if they can't reach a remote operator

None of these are technically difficult to implement. They're regulatory choices, not technical constraints.

The Future of Remote Assistance: Automation All the Way Down

There's an interesting wrinkle in all of this. As autonomous vehicle technology improves, the need for remote assistance should theoretically decrease. Eventually, the system might become so capable that human intervention is almost never needed—perhaps only for true edge cases that occur once in millions of miles.

But there's an alternative scenario. As autonomous vehicles expand to more complex environments—more cities, more weather conditions, more edge cases—the absolute number of situations requiring remote assistance might increase, even if the percentage of situations decreases. A system that handles 99% of situations autonomously but operates thousands of vehicles could still require a substantial remote assistance workforce.
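A rough calculation shows how these two effects can offset each other; the mileage and request-rate figures below are purely illustrative assumptions.

```python
# Illustrative: a 10x drop in the per-mile assistance rate can be cancelled
# out by a 10x larger fleet, leaving the absolute workload unchanged.
miles_per_vehicle_per_day = 150   # assumed

for fleet_size, requests_per_1000_miles in [(3_000, 10), (30_000, 1)]:
    daily = fleet_size * miles_per_vehicle_per_day * requests_per_1000_miles / 1000
    print(f"{fleet_size:,} vehicles -> {daily:,.0f} assistance requests/day")
# 3,000 vehicles at 10 req/1,000 mi  -> 4,500/day
# 30,000 vehicles at 1 req/1,000 mi  -> 4,500/day
```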

Waymo seems to be on the path of improving autonomy. The company is expanding to new cities, presumably with improved versions of its system. Eventually, remote assistance might become a rare backup rather than a regular need.

But what about Tesla? The company's approach to autonomy has been different: more data-driven, less reliant on hand-coded rules, more willing to deploy partially-autonomous systems and improve them through real-world learning. This approach might require more remote assistance for longer, as the system learns edge cases through accumulated experience rather than through engineering-driven development.

There's also a darker possibility: remote assistance doesn't fade away, but just becomes less visible. Companies might transition from having humans actively monitoring and advising vehicles to using AI systems to handle the advice-giving. Instead of a human remote operator deciding what to tell a confused vehicle, a second AI system would make that decision.

If this happens, we've essentially removed the human from the safety-critical loop entirely. The monitoring AI would still need oversight, but the direct human judgment would be gone. Whether that's safer or less safe depends entirely on how good the AI is. And we'd have no way to know, because the decision-making would be opaque to everyone except the companies developing it.

Consumer Implications: What This Means for Riders

If you're considering using a Waymo or Tesla robotaxi, what should you know about remote assistance?

First, understand that these vehicles are not fully autonomous in the way the marketing suggests. They're highly automated, but they depend on human oversight. That's not necessarily bad—it might actually be safer than truly autonomous systems. But it's important to understand the true nature of what you're using.

Second, understand that your data is being observed. When a vehicle calls for remote assistance, an operator is looking at camera feeds from the vehicle, seeing the road, seeing passengers, seeing everything the vehicle sees. This data is being transmitted to call centers, potentially overseas. There are privacy implications here that neither company has fully addressed.

Third, understand that the safety of the system depends partly on factors you can't see or control. The training and competence of remote operators, the responsiveness of the communication system, the decision-making protocols—these all affect safety. And consumers have essentially no visibility into any of this.

What should consumers demand?

  • Transparency about when and how often remote assistance is used
  • Clear disclosure about where remote operators are located
  • Information about the training and qualifications of operators
  • Published safety metrics and incident rates
  • Clear liability frameworks so you know who's responsible if something goes wrong
  • Data privacy protections since video from inside the vehicle is being transmitted

None of these are being provided by either company, despite operating commercial services that transport paying passengers.

The Broader Autonomy Narrative

The remote assistance issue reveals something important about the entire autonomous vehicle industry: the narrative doesn't always match the reality.

The story that's been sold for the last decade is that autonomous vehicles would eliminate human drivers. We'd have robots, not humans, operating transportation. This would be safer, cheaper, and more efficient.

What's actually happening is more complex. Humans aren't being eliminated. They're being relocated, reorganized, and deprioritized, but they're still critical. Instead of drivers in vehicles, we have operators in call centers. Instead of one human per vehicle, we have dozens of humans supporting thousands of vehicles. But the humans are still there, making safety-critical decisions.

This isn't inherently bad. A human operator in a call center can make better decisions than a driver in a vehicle in some ways. They're not fatigued by driving all day. They can consult information systems and talk to other experts. They can focus entirely on decision-making without managing a vehicle.

But the current model has problems. The humans are often invisible, undercompensated, and their role in safety is downplayed by companies trying to maintain the "autonomous" narrative. Workers in the Philippines are bearing the responsibility for keeping vehicles safe on US roads, with minimal compensation and no public recognition.

A more honest narrative would be: autonomous vehicles are currently semi-autonomous systems that combine AI with human oversight. The AI handles routine driving. The humans handle exceptional cases. This partnership between AI and humans is probably safer than either alone. But it requires that the humans be well-trained, well-supported, and properly held accountable.

That's not the narrative being sold, because it doesn't sound as exciting as "fully autonomous vehicles."

Policy Recommendations: What Should Change

Based on the information revealed in these government filings, several policy changes would improve safety and transparency.

Transparency Requirements: Companies should be required to disclose:

  • Number of vehicles in operation
  • Number of remote assistance requests per vehicle per day
  • Error rates and incident rates for remote operators
  • Locations where remote operators are based
  • Training requirements and ongoing training programs
  • Liability structures for situations involving remote operator advice

Operator Standards: Remote assistance operators should be subject to:

  • Certification requirements demonstrating competence
  • Continuing education and training
  • Defined shift lengths with mandatory breaks
  • Fatigue monitoring and management
  • Clear accountability frameworks

Oversight Mechanisms: Regulators should establish:

  • Regular audits of remote assistance programs
  • Investigation of serious incidents involving remote operator decisions
  • Public reporting on safety metrics
  • Requirements for independent safety reviews

Data Protection: Users of autonomous vehicles should have:

  • Clear notice that video data is being transmitted to remote operators
  • Opt-out rights for certain types of remote assistance (where feasible)
  • Data minimization requirements (only collect what's necessary)
  • Protections against data misuse or unauthorized access

Liability Clarity: Legal frameworks should establish:

  • Clear responsibility allocation when remote operators contribute to incidents
  • Insurance requirements for remote assistance programs
  • Standards for what constitutes negligent remote assistance
  • Mechanisms for injured parties to hold operators accountable

None of these are radical or unprecedented. They're adapted from existing frameworks in aviation, transportation, and other safety-critical industries.

International Considerations

Using overseas workers for safety-critical systems raises international questions that the government filings don't address.

Waymo is operating in the Philippines through contractors. This creates a situation where a US company is outsourcing safety-critical work to the Philippines, where labor is cheaper and regulations might be less stringent. Is this ethical? Legally, yes, probably. Ethically, it's murkier.

There's an argument that having global operators is fine if they're properly trained and the system works. There's another argument that safety-critical work should stay within the country operating the vehicles, under the regulatory jurisdiction of that country's safety authorities.

There are also practical concerns. What happens if there's a conflict between the Philippines government and the US government about labor standards, data protection, or liability? What if Philippine laws change regarding contractor rights or data transfer? What if there's a network outage affecting Philippines-based operators?

Waymo's strategy might work for a while, but it's fragile in ways that a fully domestic operation isn't.

The Competitive Dynamics

Tesla's decision to use only domestic operators, contrasted with Waymo's use of overseas contractors, could be a competitive differentiator. But only if consumers and regulators care.

If regulators don't enforce standards, and if accidents don't reveal problems with overseas operators, then there's no competitive advantage. Waymo saves money on operator costs and undercuts Tesla on pricing. Tesla can't compete on price but might sell on the "made in America" angle.

But this could change. If a serious accident involves an overseas operator, suddenly Tesla's domestic-only policy becomes a major advantage. The optics of "our vehicles are overseen by US workers" versus "our vehicles are overseen by contractors in developing countries" would shift dramatically.

Conversely, if Waymo's system proves to be just as safe as Tesla's, with no incidents traceable to overseas operators, then the domestic-only policy becomes a costly inefficiency. Other companies would adopt Waymo's model and undercut everyone on price.

The long-term competitive outcome will be determined partly by safety records and partly by regulatory decisions.

Technical Deep Dive: Latency and Real-Time Decision Making

One technical issue that the government filings don't address but is critically important: latency.

When a vehicle in San Francisco requests advice from an operator, whether in Arizona or the Philippines, there's a time delay. The video feed must be transmitted. The operator must see it, understand it, and respond. The response must be transmitted back. The vehicle's AI must process it. All of this takes time.

Over a distance within the US, with good internet, this might be 100-300 milliseconds. Over the Pacific to the Philippines, on less reliable networks, it could be 300-500 milliseconds or worse during peak usage.

Half a second might not sound like much. But in traffic, half a second is significant. A pedestrian can move. A vehicle can cross an intersection. Traffic conditions can change. The advice the operator sends based on seeing a situation half a second ago might not apply to the situation that exists right now.
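A back-of-the-envelope latency budget makes the difference tangible. The one-way figures below are taken from the ranges cited above; the helper function and the 30 mph reference speed are assumptions for illustration, and human review time would come on top of the network round trip.

```python
def round_trip_ms(one_way_ms: int, extra_ms: int = 0) -> int:
    """Network round trip: out and back, plus optional congestion overhead."""
    return 2 * one_way_ms + extra_ms

domestic = round_trip_ms(75)             # ~150 ms, within the 100-300 ms range above
transpacific = round_trip_ms(200, 100)   # ~500 ms, the high end of the overseas estimate

speed_mps = 13.4  # roughly 30 mph
for label, ms in [("domestic", domestic), ("transpacific", transpacific)]:
    print(f"{label}: {ms} ms -> vehicle travels {speed_mps * ms / 1000:.1f} m")
# domestic: 150 ms -> 2.0 m;  transpacific: 500 ms -> 6.7 m
```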

Waymo's system accounts for this by having the vehicle request advice only when it's certain it needs it and can safely wait for an answer. The vehicle doesn't request help for an imminent collision risk. It requests help for ambiguous situations where waiting for advice is safer than deciding alone.

But this design choice means the remote assistance system only works for certain types of problems, not for real-time emergency situations. Which means the vehicle's AI has to be capable of handling most emergencies autonomously. Which means the remote assistance network is really just for edge cases and ambiguous situations.

Tesla's chase car model avoids latency entirely—the human is right there, can see everything the vehicle sees, and can intervene instantly. But it's much less scalable. Tesla has explicitly recognized this and moved toward remote operators for situations where latency is less critical.

This is an engineering trade-off that isn't discussed in the government filings.

Cultural and Organizational Factors

How do different companies' approaches to remote assistance reflect deeper organizational differences?

Waymo, owned by Alphabet (Google's parent company), has a data-driven culture. The company collects vast amounts of data, builds sophisticated models, and makes decisions based on evidence. Using overseas contractors fits this culture—it's a rational optimization that maximizes efficiency.

Tesla, under Elon Musk's leadership, has a different culture. The company emphasizes domestic manufacturing, has been critical of outsourcing, and has promoted buying American. Using domestic remote operators fits this narrative.

But beneath the culture are the economics. Waymo needs to prove its robotaxi business can be profitable. Cutting operator costs helps. Tesla has enormous resources and can afford the higher cost of domestic operators—for now.

It's worth considering that these organizational choices might persist even if better alternatives emerge. Waymo might continue with overseas operators because that's how the system was built and is working. Tesla might continue with domestic operators because it aligns with the brand and company identity.

Incident Investigation and Learning

What happens when something goes wrong? Neither company has clearly explained their incident investigation and learning processes.

When a Waymo robotaxi is involved in an accident, does the company investigate whether remote assistance was requested? Does it analyze what the operator recommended and whether the advice was followed? Does it use these incidents to train operators and improve the system?

Most industries with safety-critical operations have formal incident investigation processes. The National Transportation Safety Board investigates aviation accidents. The Coast Guard investigates maritime accidents. There's a systematic approach to understanding failures and preventing recurrence.

Autonomous vehicles don't have this yet. When a robotaxi is in an accident, the investigation (if any) is usually by local law enforcement, not by a specialized autonomous vehicle accident investigation board. Important technical details might not be analyzed. Lessons might not be systematically learned.

This is another regulatory gap that should be filled.

Conclusion: The Hidden Workforce That Makes Robotaxis Possible

The government documents revealing details about Tesla and Waymo's remote assistance programs expose a truth that's been obscured by industry marketing: autonomous vehicles, as they currently exist, depend fundamentally on human workers.

These workers are mostly invisible. They work in call centers, often in other countries, making safety-critical decisions that enable the "autonomous" vehicles operating in major cities. They're not celebrated or recognized. Their role in safety is downplayed by companies trying to maintain the narrative of fully autonomous systems.

But they're essential. Without them, these vehicles couldn't operate safely on public roads. That's not an indictment of the technology—human-AI collaboration might actually be safer than purely autonomous systems. But it's important to be honest about what's actually happening.

The revelation that roughly 50% of Waymo's remote assistance workers are based in the Philippines is significant not because overseas workers are inherently less capable, but because it shows companies will optimize for cost and efficiency in areas consumers can't see. The choice likely made sense economically. But it creates risks and accountability gaps that regulation hasn't yet addressed.

Tesla's domestic-only policy is a different choice, but without transparency about how many operators they actually employ or how often they intervene, it's hard to say whether it's actually safer or just more expensive.

Moving forward, several things need to change:

  1. Companies must be transparent about their remote assistance programs, including intervention rates, operator locations, and training standards.

  2. Regulators must establish clear standards for remote operators, oversight mechanisms, and incident investigation processes.

  3. Liability frameworks must be clarified so responsibility is clearly assigned when remote operators contribute to incidents.

  4. The narrative around autonomous vehicles must become more honest about the role of human oversight and control.

  5. Workers in these remote assistance roles must be properly trained, compensated, and supported.

Right now, we have a system where significant safety responsibility is placed on workers who have minimal oversight, limited transparency around their role, and unclear accountability. That's not sustainable. Either the technology improves to the point where remote assistance becomes genuinely optional, or the systems supporting remote assistance operators must become far more robust and transparent.

The government filings represent a small step toward transparency. But it's just the beginning. Much more information needs to be public for regulators and consumers to properly assess the safety of these systems.

The robotaxis that glide silently through city streets look autonomous. But they're actually being watched by thousands of people in call centers around the world, ready to step in the moment the AI gets confused. That's the real story behind the self-driving car revolution. Not robots replacing humans, but humans and robots learning to work together—with the humans often hidden from view.
