Elon Musk's Self-Driving Promises: A Pattern of Delays
Elon Musk has a habit. He makes a bold prediction about Tesla's autonomous capabilities, sets an impossibly tight deadline, and then quietly moves the goalposts when the deadline passes. If you've been paying attention to Tesla's Full Self-Driving development over the past several years, you've seen this movie play out repeatedly.
Last year, he promised that by the end of 2025, Tesla would have unsupervised Full Self-Driving available to the general public. He also claimed the company would launch a robotaxi service that would reach 50 percent of the US population. Neither happened. Instead, here we are in 2026, and Tesla's FSD remains exactly what it's been for years: a Level 2 "supervised" driving system that requires drivers to keep their hands on the wheel and stay engaged with the road. According to Electrek, Tesla's current robotaxi service, operating only in Austin and San Francisco, still requires a human employee sitting in either the driver's seat or front passenger seat with a "kill switch" ready if anything goes wrong. It's the opposite of the autonomous vision Musk has been painting.
Now, Musk has moved the metric again. Tesla needs roughly 10 billion miles of driving data before it can achieve truly safe unsupervised self-driving, he says. As of now, Tesla has logged a little over 7 billion miles. That's where the accountability gap opens up. Why promise a major milestone by the end of 2025 when, by his own new yardstick, Tesla was still billions of miles short? The pattern suggests either he didn't think the deadline through, or he was willing to overpromise knowing the timeline would slip. As reported by Electrek, this isn't a one-time mistake. It's a recurring pattern that has frustrated investors, customers, and regulators alike. Understanding why these deadlines keep failing requires looking at both the technical realities of autonomous driving and the business incentives that push Musk to make increasingly aggressive claims.
The Historical Pattern of Broken Promises
The Full Self-Driving saga didn't start in 2025. It goes back further, much further. In fact, the disconnect between Musk's promises and reality has been the defining characteristic of Tesla's autonomous driving journey.
Back in 2016, when Musk released Tesla's Master Plan Part Deux, he claimed the company would need about 6 billion miles of data before "true self-driving" could be approved by regulators. At the time, that seemed like a reasonable engineering benchmark. The company was years away from that milestone, so the goalposts felt credible.
But here's the problem: Musk kept moving that target. When 2016 turned into 2017, and 2017 turned into 2018, the promises didn't slow down. They accelerated. In 2019, Musk claimed Tesla would have a million robotaxis on the road by 2020. That didn't happen. In 2020, he promised fully autonomous driving would arrive within two years. It didn't. By 2021, he was saying unsupervised FSD was "imminent." Still no.
Each time a deadline passed, Musk offered a fresh explanation. Sometimes it was regulatory delay. Sometimes it was the complexity being underestimated. Sometimes it was just that Tesla was "close but not quite ready." The goalpost always moved just far enough to keep the dream alive while pushing expectations into the future.
This pattern creates a peculiar dynamic. Tesla's customers have essentially been paying for Full Self-Driving functionality that doesn't actually exist yet. Owners have spent thousands of dollars apiece, and billions collectively, on a package sold on the promise of capabilities that still haven't shipped.
Why 7 Billion Miles Suddenly Became 10 Billion
When Musk first announced the 10 billion miles requirement, it felt like a new number pulled from nowhere. But there's logic buried in the decision, even if it's not the logic Musk wants to admit.
The jump from the original 6 billion miles target to 10 billion miles represents a significant acknowledgment that the problem is harder than previously thought. Machine learning systems require exponentially more training data to handle edge cases, rare scenarios, and unusual driving conditions. The difference between 6 billion and 10 billion miles isn't just a 67 percent increase in data. It's the difference between thinking you understand a problem and realizing you need to handle vastly more complexity.
Tesla's neural network approach to autonomous driving relies on processing massive amounts of real-world driving data. The system learns patterns from what has happened in actual vehicles, then attempts to generalize those patterns to new situations. But rare events—the scenarios that cause accidents or require complex judgment calls—don't show up with equal frequency in training data. A child running into the street, a tire blowout, a flooded road, an aggressive driver cutting you off in an unusual way.
These edge cases demand more data, not less. And Tesla's dataset, while enormous, is heavily weighted toward routine driving rather than the most dangerous scenarios. By moving to 10 billion miles, Musk was essentially admitting that 7 billion wasn't enough. But he framed it as a new insight rather than a previous miscalculation. This sleight of hand allows Tesla to continue claiming progress ("we've refined our requirements") while resetting expectations. As noted by Zacks, Tesla's FSD has approached 7 billion miles, with 2.5 billion on urban streets.
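To make the imbalance concrete, here is a minimal back-of-envelope sketch. The per-million-mile event rates are invented for illustration; Tesla publishes no comparable figures.

```python
# Back-of-envelope: how many examples of a rare scenario a fleet dataset
# contains at different total mileages. All event rates are hypothetical.

def expected_examples(fleet_miles: float, events_per_million_miles: float) -> float:
    """Expected number of recorded examples of a scenario in the dataset."""
    return fleet_miles * events_per_million_miles / 1_000_000

hypothetical_rates = {
    "child running into the street": 0.5,     # assumed events per million miles
    "tire blowout at highway speed": 2.0,
    "flooded roadway": 1.0,
    "routine lane change": 50_000.0,          # common maneuvers dwarf rare events
}

for fleet_miles in (7e9, 10e9):
    print(f"\nFleet total: {fleet_miles / 1e9:.0f} billion miles")
    for scenario, rate in hypothetical_rates.items():
        count = expected_examples(fleet_miles, rate)
        print(f"  {scenario}: ~{count:,.0f} recorded examples")
```

Even billions of fleet miles yield only a few thousand examples of the rarest scenarios while routine maneuvers number in the hundreds of millions, and that imbalance is what an extra 3 billion miles only partially addresses.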
There's another possibility worth considering: the 10 billion mile target is partially cover for other problems that have nothing to do with data. Regulatory concerns, liability issues, and business model uncertainty might be the real blockers. By anchoring the conversation to a data metric, Musk keeps the focus on engineering rather than on the more complicated questions about whether Tesla should be liable for a fully autonomous system.
The Difference Between Level 2 and Level 5: More Than Marketing
Tesla's current Full Self-Driving is classified as a Level 2 autonomous system under industry standards. This matters more than most people realize, because the difference between Level 2 and Level 5 (fully autonomous) isn't just incremental improvement. It's fundamentally different in scope, liability, and engineering challenge.
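For reference, here is a rough paraphrase of the SAE J3016 automation levels and who is responsible at each one. The wording is a simplified summary, not the standard's official text.

```python
# Simplified paraphrase of the SAE J3016 driving automation levels.

SAE_LEVELS = {
    0: ("No automation",          "human drives; system can only warn or briefly assist"),
    1: ("Driver assistance",      "human drives; system helps with steering OR speed"),
    2: ("Partial automation",     "system steers and manages speed; human must supervise constantly"),
    3: ("Conditional automation", "system drives in limited conditions; human must take over on request"),
    4: ("High automation",        "system drives in limited conditions; no human fallback needed"),
    5: ("Full automation",        "system drives anywhere, in any condition; no human needed at all"),
}

for level, (name, who_is_responsible) in SAE_LEVELS.items():
    print(f"Level {level}: {name:<24} {who_is_responsible}")
```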
At Level 2, the driver remains responsible for the vehicle. Tesla calls the shots on what the car does, but the human is still accountable for outcomes. If something goes wrong, the driver is expected to have been paying attention and should have intervened. This structure protects Tesla from some liability because the company can argue that the driver failed to supervise properly.
Achieving Level 5 flips this entirely. The car becomes responsible for everything. There's no driver supervision. No human intervention. The vehicle must handle every scenario a human driver would encounter, including edge cases humans rarely think about. And when something goes wrong, the blame lands squarely on the company that built the system.
This liability distinction matters enormously. Tesla has been willing to let FSD exist as a Level 2 system because that structure lets the company avoid full responsibility for failures. When Tesla owners have died in accidents or been injured while using Autopilot, Tesla has fought in court to argue that drivers were not properly attentive. Sometimes the company wins these cases.
But move FSD to Level 5, and that defense evaporates. The company becomes liable for basically any accident that happens when the system is engaged. Insurance companies would need to recalculate their models. Regulatory bodies would need to establish approval processes. Tesla would face potential criminal liability if a death occurred.
It's possible—even likely—that Tesla's engineering team has the capability to get closer to Level 5 than current deployments suggest. But the liability implications might be holding back a public announcement. Musk's claims about data requirements could be genuine technical needs, or they could be a convenient public-facing explanation for what is actually a legal and business strategy decision.
The distinction matters because it changes how you should interpret Musk's promises. When he says Tesla needs 10 billion miles, he might be talking about engineering. Or he might be giving the company time to figure out how to handle the liability problem.
Waymo's Different Approach: Why It's Ahead
Understanding Tesla's struggles becomes easier when you look at what Waymo is doing differently. Waymo, owned by Google's parent company Alphabet, took a fundamentally different approach to autonomous driving.
Waymo started by developing autonomous capabilities in controlled environments. The company began with geofenced areas and predetermined routes, then gradually expanded. They developed their own hardware stack, including custom sensors and computing equipment. They created detailed digital maps of the areas where they operate. And crucially, Waymo took responsibility for what their vehicles do.
Waymo's robotaxi service doesn't operate like Tesla's. When a Waymo vehicle is moving autonomously, it's fully autonomous. There's no safety driver with a kill switch. There's no human employee in the vehicle ready to take over. The car is responsible, and Waymo is responsible, period.
This is why Waymo has been able to move faster in some respects, despite operating in fewer cities. The company committed to doing the thing properly rather than doing it partially while maintaining liability escape routes. According to BBC News, Waymo's approach has allowed it to deploy unsupervised robotaxis that work reliably in their operating areas.
Tesla's strategy is the opposite. Instead of building a custom sensor suite, Tesla relies on cameras and neural networks. Instead of geofencing early deployments, Tesla claims the system will work everywhere eventually. Instead of taking liability, Tesla maintains that the human driver is responsible.
Each choice has tradeoffs. Tesla's approach is cheaper to deploy at scale because it doesn't require custom hardware in every vehicle. Waymo's approach is more conservative but more reliable in the deployments Waymo has chosen.
But here's the crucial bit: Waymo's unsupervised robotaxis are already in commercial service. Tens of thousands of people in San Francisco and other cities use them weekly. The system works. It's limited in geographic scope, but within that scope, it delivers on the promise.
Tesla hasn't achieved this yet. The robotaxi service requires human supervision. The FSD system requires driver engagement. The promises remain promises.
The companies are on different timelines partially because they made different engineering choices. But they're also on different timelines because they defined their goals differently. Waymo aimed for reliability in a limited area. Tesla aims for ubiquity eventually. Ubiquity is harder.
The Data Collection Question: How Much Is Enough?
Musk claims Tesla needs 10 billion miles of data. But here's a question worth asking: how did he arrive at that number, and is it actually grounded in engineering reality or is it a convenient large number?
In machine learning, data requirements follow certain mathematical patterns. A system with more complexity, more edge cases, and more variability needs exponentially more training data to achieve the same level of accuracy. Autonomous driving has enormous complexity. The number of possible scenarios is essentially infinite. Weather conditions, traffic patterns, road conditions, driver behavior, pedestrian behavior, construction, accidents, debris, animals, and countless other variables all interact.
To train a system that handles all these scenarios with high reliability, you'd theoretically need data that covers all significant combinations of these variables. That's a staggering amount of data.
But here's where the problem gets interesting: you don't actually need to collect all that data explicitly. Machine learning systems can learn to generalize. A system trained on 5 billion miles of data can sometimes handle scenarios it hasn't explicitly seen, if those scenarios are close enough to things in the training data.
The question isn't whether 10 billion miles is the exact right number. It's whether there's a meaningful difference between 7 billion and 10 billion that justifies the timeline slippage. And that's harder to answer because Tesla doesn't publish detailed data about what's in their dataset, what's missing, or what specific improvements come from each additional billion miles.
It's possible that Tesla's researchers have genuinely discovered that an additional 3 billion miles is critical to reaching safety targets. It's also possible that some of that gap is due to problems with data quality, data imbalance, or validation issues rather than quantity alone.
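One way to sanity-check whether a few billion extra miles even matters is to ask how much exposure it takes just to demonstrate a safety level statistically. Below is a minimal sketch using the "rule of three" approximation (with zero observed events, the 95 percent upper bound on the event rate is roughly 3 divided by the exposure) and a rough US human fatality rate as the baseline. Neither figure comes from Tesla, and real validation would also involve injuries and disengagements, not just fatalities.

```python
# Rough sketch: miles of failure-free driving needed to statistically bound a
# failure rate below a target, using the "rule of three" (95% upper bound when
# zero events are observed is about 3 / exposure). Baseline is an approximate
# US human-driver fatality rate, used only for illustration.

HUMAN_FATALITY_RATE_PER_MILE = 1.3 / 100_000_000   # ~1.3 fatalities per 100M miles (approx.)

def miles_needed(target_rate_per_mile: float) -> float:
    """Failure-free miles needed for a 95% confidence bound below the target rate."""
    return 3.0 / target_rate_per_mile

for safety_multiple in (1, 2, 10):   # "as safe as", "2x safer", "10x safer" than the baseline
    target = HUMAN_FATALITY_RATE_PER_MILE / safety_multiple
    print(f"{safety_multiple}x safer than the human baseline: "
          f"~{miles_needed(target) / 1e9:.1f} billion failure-free miles")
```

Even on these simplified assumptions, demonstrating a large safety margin takes miles on the order of billions, which is at least consistent with a target in the 10 billion range, though it doesn't explain why the target was 6 billion in 2016.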
When Musk announced the number, the company didn't provide specifics about:
- How current safety metrics were calculated
- What specific accident scenarios Tesla is trying to prevent
- How safety improvements scale with additional data
- What percentage of the dataset consists of rare events versus common driving
- How the company validates that additional data actually improves performance
Without those details, 10 billion miles is just a goalpost. And it might be a goalpost that moves again once 10 billion is approached.
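The third bullet is arguably the crux. Many machine learning error metrics follow an approximate power law in dataset size, which implies diminishing returns from each additional billion miles. The constants below are invented purely to illustrate the shape of that curve; Tesla publishes nothing comparable.

```python
# Hypothetical illustration of diminishing returns from additional training data.
# Assumes error ~ k * data^(-alpha); the exponent is invented for illustration.

def relative_error(miles: float, alpha: float = 0.3) -> float:
    """Error relative to a 6-billion-mile baseline under the assumed power law."""
    return (miles / 6e9) ** (-alpha)

for miles in (6e9, 7e9, 10e9, 20e9):
    print(f"{miles / 1e9:>4.0f}B miles -> error at {relative_error(miles):.0%} of the 6B baseline")
```

If something like this shape holds, going from 7 to 10 billion miles buys a real but modest improvement, which is exactly why the undisclosed details of what is being measured, and against what threshold, matter more than the headline number.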
The Business Model Question: Who's Paying for Development?
Here's an uncomfortable truth about Tesla's Full Self-Driving strategy: customers are essentially funding its development through subscription payments and vehicle purchase premiums.
When someone buys a Tesla and pays for the Full Self-Driving package, they're paying for something that doesn't yet exist in its final form. Tesla has collected billions of dollars from customers who bought FSD before the feature was complete, betting on future capability improvements.
This creates an interesting incentive structure. Musk has financial motivation to keep people believing in the FSD vision even when progress is slower than promised. If he admits the timeline is indefinite, FSD sales might dry up. If he keeps making promises with future deadlines, customers stay engaged and keep paying.
Tesla's revenue from FSD is not trivial. Roughly 20 to 30 percent of Tesla owners have purchased Full Self-Driving at some point. That's hundreds of thousands of vehicles generating recurring subscription revenue, plus the one-time purchase revenue from new buyers.
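To get a feel for the order of magnitude, here is a rough, explicitly hypothetical calculation. The fleet size, take rates, and prices are assumptions chosen for illustration, not Tesla disclosures; the two take-rate shares simply sum to the rough 20 to 30 percent range mentioned above.

```python
# Back-of-envelope FSD revenue sketch. Every input below is an assumption.

fleet_size           = 6_000_000   # assumed Teslas on the road
purchase_take_rate   = 0.15        # assumed share that bought FSD outright
purchase_price_usd   = 8_000       # assumed one-time price
subscriber_share     = 0.10        # assumed share paying monthly
monthly_price_usd    = 99          # assumed subscription price

one_time_revenue = fleet_size * purchase_take_rate * purchase_price_usd
annual_recurring = fleet_size * subscriber_share * monthly_price_usd * 12

print(f"Cumulative one-time FSD revenue: ~${one_time_revenue / 1e9:.1f} billion")
print(f"Annual subscription revenue:     ~${annual_recurring / 1e9:.2f} billion")
```

Even with fairly conservative inputs, the totals land in the billions, which is why the incentive to keep the FSD narrative alive is so strong.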
The more honest version of this story would acknowledge that FSD is fundamentally harder than initially expected and that deployment will take years longer than first promised. But that honesty would reduce FSD revenue in the near term.
The current approach—making promises, updating timelines, collecting revenue—lets Tesla continue funding development while maintaining customer interest. It's not fraud, technically, because the feature does exist in some form, even if it's not the fully autonomous version customers might be waiting for.
But it does create misaligned incentives. Tesla profits from believing in the FSD vision, whether or not that vision reaches the form Musk originally described.
Regulatory Uncertainty: The Silent Factor
Musk sometimes frames Tesla's delays as purely technical. But regulatory uncertainty is almost certainly playing a role, even though it's not discussed publicly as much.
No major regulatory body has approved a fully autonomous Level 5 vehicle for widespread deployment on public roads. This isn't because it's technically impossible. It's because the legal and liability frameworks don't exist yet. How do you insure an autonomous vehicle? Who is liable if it causes an accident? How do you test whether a system is safe enough for public roads?
These questions don't have settled answers. Different regulatory jurisdictions are approaching them differently. California has specific rules about autonomous vehicle testing and deployment. Federal regulators are still developing frameworks. Insurance companies are struggling to understand the liability implications.
Tesla's approach has been to push forward with its Level 2 system while building regulatory relationships. But moving to a fully autonomous system would require navigating entirely different regulatory waters.
Waymo, by contrast, has explicitly engaged with regulators. The company works with jurisdictions to get approval for specific routes and operating conditions. This is slower but cleaner legally.
It's possible that Tesla is waiting not just for technology maturity but for regulatory clarity. Once that clarity emerges, deployment might accelerate. Or regulators might impose requirements that require additional engineering work.
Musk's data-focused explanation sidesteps this regulatory complexity. It's a much easier story to tell: "We need more data, we're collecting it, we'll be ready when we hit the target." The regulatory story is messier: "We don't know what regulators will require, we're trying to figure it out, and the timeline depends on decisions we don't control."
Why Artificial Timelines Don't Work for Deep Tech
The recurring deadline failures point to a deeper issue with how Musk approaches long-term technical problems. He tends to underestimate how long truly difficult engineering takes.
This isn't unique to Tesla. It's a common problem in the startup and tech world. Founders, especially ambitious ones, tend to be optimistic about timelines because pessimism kills funding and excitement. But in deep tech—the kind where you're pushing against fundamental limitations in physics, materials, or algorithms—that optimism often collides with reality.
Full Self-Driving is genuinely hard. Not just hard in a way that means "it'll take us a while." Hard in a way that means we don't fully understand whether it's possible at all using the approaches Tesla has chosen.
The neural network approach Tesla relies on has impressive capabilities. It can handle a huge range of driving scenarios. But it hasn't fundamentally solved the problem of reliable decision-making in edge cases, or of learning rare events from finite data.
Machine learning systems trained on billions of examples still sometimes fail on scenarios that a human driver would handle easily. They make errors that seem nonsensical until you understand the training data distribution. They get confident about wrong answers. They struggle with distribution shift (when real-world conditions differ from training conditions).
These aren't problems that disappear once you hit a certain amount of data. They're problems baked into the approach. You can mitigate them, engineer around them, and test extensively. But they don't go away.
Musk's deadline-driven approach works well for manufacturing challenges, where you can engineer your way to a solution through enough effort and resources. It works less well for fundamental research problems, where you sometimes hit walls that effort alone can't overcome.
Full Self-Driving might end up being a research problem disguised as an engineering problem. In which case, Musk's usual approach of "we'll make this happen by quarter four" is structurally mismatched to the problem.
The Autonomy Comparison: Where Different Companies Stand
Looking at the landscape of autonomous vehicle development, the companies are at dramatically different stages.
Waymo is actually deploying unsupervised robotaxis in multiple cities. The service works but is limited in scope, constrained by weather, geography, and route availability. Within those constraints, the system is reliable and doesn't require human supervision.
Cruise, once General Motors' autonomous division, had a much more aggressive deployment plan. The company tested unsupervised robotaxis in San Francisco. But after a series of incidents, including one where a robotaxi dragged a pedestrian, Cruise pulled back significantly. The company is now more cautious and has lost momentum relative to Waymo.
Argo AI, the autonomous driving company backed by Ford and Volkswagen, shut down operations entirely in 2022, with its backers writing off billions in investment. They concluded that the technical and business challenges were too significant.
Tesla is in a unique position. The company has billions of miles of real-world driving data from customers. It has a massive fleet that can collect more data. It has the brand and capital to fund development. But it also has the liability shelter of the Level 2 framework, which means there's less pressure to solve the problem than companies deploying truly autonomous vehicles.
So here's the paradox: Tesla has the best position from a data-collection perspective but perhaps the weakest incentive to actually solve the full autonomy problem quickly. Waymo has less data but stronger incentive to make its current approach work because it's already deployed.
These different positions suggest different timelines. Waymo might have a better chance of expanding its capabilities in the next 2-3 years because it's already operating at a higher level of autonomy. Tesla might improve FSD gradually but without the pressure of actual deployment pushing rapid iteration.
The Customer Trust Problem
Repeated broken promises create a trust problem that extends beyond just the autonomy timeline. When Musk says something will happen by a specific date, and it doesn't, it affects how people evaluate his other claims.
Tesla customers who bought FSD are in a strange position. They've paid for something that doesn't exist in the promised form. The feature they paid for—unsupervised self-driving—hasn't materialized. Some percentage of FSD buyers are probably frustrated. Some might be angry.
But they're also sunk-cost committed. They've already paid, often thousands of dollars. Walking away means accepting the loss. Continuing with Tesla means maintaining hope that FSD will eventually deliver.
This is terrible for customer relationships but potentially good for Tesla's revenue, at least in the short term. Customers who are sunk-cost committed are more likely to stay and continue paying for subscriptions.
But it creates a long-term problem. Eventually, either FSD delivers at the promised level, or customers realize it won't and demand refunds or stop buying new Teslas with FSD. The company can only float the promise so long before reality catches up.
With each missed deadline, that reckoning gets closer. Musk and Tesla are essentially in a race: either deliver the autonomous capability that was promised, or deal with increasingly frustrated customers and potential legal issues from buyers who claim they paid for capabilities that don't exist.
What Would Actually Prove Progress?
Here's a useful framework for evaluating Tesla's claims going forward: what would actually demonstrate that meaningful progress is being made?
Simple metrics like "miles driven" are necessary but not sufficient. Miles tell you that the system is being tested, but they don't tell you whether the system is getting meaningfully safer or more capable.
Better metrics would include:
- Safety improvement over time: How much do actual safety metrics improve with each billion additional miles?
- Edge case handling: How well does the system handle rare events? Are edge cases being solved or just ignored in testing?
- Supervised-to-unsupervised ratio: What percentage of current FSD operation happens without any human intervention? Is this growing?
- Real-world deployment: Are the robotaxis actually operating completely autonomously without human employees in the vehicles?
- Transparency about failures: When FSD makes mistakes, how is Tesla analyzing and learning from those mistakes?
Right now, Tesla doesn't provide most of these metrics publicly. The company reports miles driven and claims safety improvement, but granular details are sparse.
Without better visibility into these metrics, it's impossible to independently verify whether the progress is real or whether the additional data is actually translating into capability improvements.
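As one illustration of what more useful reporting could look like, here is a hedged sketch of a miles-between-interventions metric, one of the disclosures listed above. The drive log entries are invented; Tesla releases no per-drive data of this kind.

```python
# Sketch of a "miles between interventions" metric from per-drive logs.
# The log entries are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Drive:
    miles: float
    interventions: int   # times a human had to take over

hypothetical_logs = [
    Drive(miles=12.4, interventions=0),
    Drive(miles=55.0, interventions=1),
    Drive(miles=8.2, interventions=0),
    Drive(miles=230.1, interventions=3),
]

total_miles = sum(d.miles for d in hypothetical_logs)
total_interventions = sum(d.interventions for d in hypothetical_logs)
clean_drives = sum(1 for d in hypothetical_logs if d.interventions == 0)

if total_interventions:
    print(f"Miles per intervention: {total_miles / total_interventions:.1f}")
else:
    print(f"No interventions across {total_miles:.1f} miles")
print(f"Drives with zero interventions: {clean_drives / len(hypothetical_logs):.0%}")
```

Published quarter over quarter, with a clear definition of what counts as an intervention, a trend in numbers like these would say far more about progress than a cumulative mileage total.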
The Path Forward: What Realistic Timelines Look Like
If we're honest about autonomous driving, realistic timelines are measured in decades, not quarters.
Full autonomy—Level 5 across all conditions and geographies—is at least 10 years away, possibly much longer. Partial autonomy in specific conditions (Level 4) might be achievable sooner in specific use cases like robotaxis on predetermined routes, which is what Waymo is doing.
But the kind of autonomy Musk originally promised—cars that drive themselves anywhere, anytime, without human supervision—requires solving problems we haven't even fully identified yet.
This doesn't mean progress won't happen. It just means the progress will be slower and messier than Musk's typical timelines suggest.
Tesla will probably improve FSD. The system will likely become more capable. But that improvement will come in increments, with occasional setbacks, regulatory challenges, and edge cases that surprise everyone.
The honest version of this story sounds like: "We're building toward autonomy. We're making progress. But the problem is harder than we thought, and we don't know exactly when we'll get there."
That's not the story Musk wants to tell. So instead, we get deadlines and goalposts and promises that slip year after year.
For customers and investors, the lesson is simple: when evaluating Musk's promises about FSD timelines, add significant buffer to whatever date he gives. And if he offers a specific metric (like 10 billion miles), understand that the metric might change once it's approached.
The Liability Elephant in the Room
There's one thing that almost nobody discusses publicly when talking about Tesla's FSD timeline: liability insurance and legal responsibility.
A fully autonomous vehicle needs insurance that covers scenarios where the company—not the driver—is responsible for accidents. This creates exposure that traditional vehicle insurance doesn't contemplate. An autonomous vehicle insured as fully responsible for its actions is a different risk profile than a supervised system where the driver shares liability.
Insurance companies would need to develop entirely new models for autonomous vehicle liability. They'd need to understand failure modes, safety statistics, and risk profiles that don't yet exist. This is not a simple engineering problem. It's an actuarial and regulatory problem.
Tesla's current approach avoids this problem by keeping the driver in the loop (legally if not mechanically). This lets the company use existing insurance frameworks while pushing autonomous capability forward.
But it also means moving to full autonomy requires solving a non-technical problem: the liability and insurance framework.
Musk's public focus on data collection sidesteps this entirely. It sounds like a pure engineering problem when it's actually an insurance and legal problem. That problem might be harder to solve than gathering 10 billion miles of data.
Looking Ahead: What Comes Next
The pattern suggests three possible futures for Tesla's FSD:
Scenario 1: Incremental Improvement: FSD gradually improves, becoming more capable but never reaching the unsupervised full autonomy Musk promised. This is probably the most likely scenario. The company continues collecting data, releases incremental updates, and eventually achieves something useful at Level 2/3 but never gets to Level 5.
Scenario 2: Breakthrough: New approaches, either in software or hardware, unlock genuine autonomy sooner than expected. Tesla makes a meaningful jump in capability and actually deploys something close to what was originally promised. This is possible but seems less likely given how consistent the limitations have been.
Scenario 3: Regulatory Lock-In: Regulators establish clear rules about autonomous vehicle liability and deployment. Tesla either complies and accelerates development, or finds that compliance requires significant additional work. This changes the timeline substantially in either direction.
Regardless of which scenario plays out, the pattern of missing deadlines and moving goalposts is unlikely to change. Musk will likely continue to make aggressive timeline claims. When those deadlines slip, new targets will be announced.
For Tesla customers, investors, and observers, the lesson is to evaluate FSD based on what it does today, not on what it promises to do tomorrow. Today it's a useful driver assistance system. Tomorrow's promises have a track record of slipping.
That's not a failure of engineering. It's a failure of timeline estimation. But for the people who bought FSD years ago waiting for the promised capability, that's a distinction without much practical difference.
Conclusion: The Pattern Won't Break
Elon Musk's track record with Full Self-Driving timelines tells a clear story. The story isn't about engineering failure, because Tesla's technology has improved and the company continues making progress. The story is about estimation failure and incentive misalignment.
Musk consistently underestimates how long genuinely difficult technical problems take. He makes public commitments to timelines that internal engineering teams probably know are unrealistic. And when those timelines slip, the company resets expectations for a future date rather than acknowledging that the problem is fundamentally harder than originally thought.
This pattern works for Tesla as long as customers and investors continue believing in the vision. FSD remains a source of subscription revenue and premium pricing. The robotaxi promise attracts excitement that supports Tesla's stock valuation. The narrative that autonomy is "just around the corner" keeps people engaged.
But like all delayed promises, there's an endpoint. Either Tesla delivers something that resembles what was originally promised, or customers eventually conclude the promise will never be fulfilled. Every missed deadline makes the second outcome more likely, and either way the damage to trust has already been done.
Waymo's slower, more conservative approach might seem less exciting. But the company is actually deploying unsupervised autonomous vehicles today. Tens of thousands of people are using them weekly. The promise has delivered, even if the scope is limited.
Tesla's more ambitious vision—truly autonomous vehicles everywhere—remains a vision rather than reality. After a decade of promises, slipped deadlines, and increasingly uncertain timelines, the credibility gap has grown wide.
For anyone evaluating Tesla's future claims about Full Self-Driving, the lesson is straightforward: wait for deployment, not promises. History suggests that promises will continue to move forward while actual capability arrives on a slower timeline.
The questions worth asking aren't about when the 10 billion mile milestone will be reached. The questions are: why does the company keep missing timelines, why do the metrics keep changing, and how long can this pattern continue before reality catches up?
Those questions don't have happy answers. They just have honest ones.