The AI Agent 90/10 Rule: When to Build vs Buy SaaS [2025]
Let's be honest. Nine months ago, nobody had a clear answer to this question: should you build your own AI agent, or should you buy one?
We're now living in a world where the answer matters more than ever. And it's completely changed.
The old rule was simple. Buy 90%, build 10%. That framework worked for decades. It still works. But there's a new wrinkle that's flipping the entire economics of SaaS. And unless you understand it, you're going to overpay for tools that your AI agent could replace in a weekend.
I've spent the last nine months running an AI-native operation. We're a team of three humans, one dog, and over 20 AI agents working in production. We've custom-coded eight apps from scratch. We've paid for dozens of third-party tools. We've killed some tools. We've kept others. And we've learned hard lessons about when the 10% suddenly becomes the 90%.
Here's what changed, and what it means for your business.
TL;DR
- The 90/10 rule still holds, but with a critical update: Buy 90% of what you need off-the-shelf, but now also build when a SaaS tool has zero AI functionality built in
- AI changes the build-vs-buy economics: What used to take a full engineering team two weeks can now be vibe-coded in 24 hours with AI co-workers
- Data is the new deciding factor: Build when you have proprietary data or workflows that generic SaaS can't ingest or process
- Speed and customization trump feature parity: A 70% solution built in a day beats a 100% solution you have to pay for monthly when AI agents can close the gap
- SSO, integrations, and data access are the new minimum bar: If a SaaS tool doesn't offer these basics plus at least one AI feature, it's a candidate for replacement
The 90/10 Rule Is Broken (For Now)
The original rule came from a place of pragmatism. When building software meant hiring engineers, managing payroll, dealing with compliance, and maintaining infrastructure, it made sense to buy as much as you could. Your team shouldn't build a CRM. You shouldn't rebuild marketing automation. You shouldn't replicate what 50 other companies have already solved.
That logic still holds today.
We're not attempting to rebuild Salesforce. We're not competing with Marketo on marketing orchestration. We didn't try to recreate Outreach or Apollo for outbound SDR automation. Those tools exist. They work. They have security, compliance, and data warehousing built in. And when you're running a lean operation, you can't take on that burden.
But here's what shifted: we now also build when a tool we're paying for has literally zero AI functionality. Not reduced AI functionality. Not "AI lite." Zero.
If it's 2025 and your SaaS product doesn't have a single AI feature—not even basic enrichment, not even a simple agent that can read your data and suggest next steps—that's when we start evaluating replacement.
We don't want to. But we have to.
The economics have flipped. It used to cost $100K and six weeks to build something custom. Now it costs next to nothing and takes 24 hours.
Why AI Changed the Build-vs-Buy Calculation
There are four reasons the math broke.
1. No Engineering Hiring Required
The biggest cost in software isn't the idea. It's the people. Hiring a senior engineer costs real money: salary, benefits, management overhead, and months of ramp-up before anything ships.
With AI co-workers like Claude, you don't need that. You need someone who can write a spec, ask the right questions, and iterate. You don't need them to know TypeScript or how to set up a database.
One person. One day. Done.
That changes the unit economics of the "build" side of the equation dramatically. If building used to cost $100K and now costs almost nothing, the set of things worth building expands enormously.
2. Speed Kills Feature Parity
There's a concept in software called feature parity. You want your custom solution to do everything the off-the-shelf tool does, so you're not leaving money on the table.
That's dead now.
A solution that's 70% there, built in a day, beats a solution that's 100% there and costs $500/month when you factor in two things: (1) you're not using all 100% of the features anyway, and (2) your AI agents can layer on the missing 30%.
Don't need advanced reporting? Your AI agent can run a SQL query and generate a report in text. Don't need fancy workflows? Your AI agent can monitor a database and take action when something changes. That's not slower than clicking buttons in a UI. It's faster.
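For instance, the "report in text" pattern is just a query plus string formatting. Here's a minimal sketch with an invented campaign table; the schema and numbers are illustrative, not from our stack:

```python
import sqlite3

# Hypothetical campaign metrics table -- names and figures are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE campaigns (name TEXT, spend REAL, revenue REAL)")
conn.executemany(
    "INSERT INTO campaigns VALUES (?, ?, ?)",
    [("Email blast", 1200.0, 4800.0),
     ("Paid search", 5000.0, 9000.0),
     ("Webinar", 800.0, 3200.0)],
)

# The "advanced reporting" a SaaS dashboard would sell: one query, one text report.
rows = conn.execute(
    "SELECT name, spend, revenue, revenue / spend AS roas "
    "FROM campaigns ORDER BY roas DESC"
).fetchall()

report = "\n".join(
    f"{name}: spent ${spend:,.0f}, earned ${revenue:,.0f} ({roas:.1f}x ROAS)"
    for name, spend, revenue, roas in rows
)
print(report)
```

An AI agent writing and running a query like this on demand is the replacement for a reporting UI.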
3. Data Is Your Competitive Advantage
This is the big one. If you have proprietary data, no third-party tool can ingest it and give you value.
We had ten years of historical data about what marketing campaigns worked for our company. What copy converted. What channels generated the most revenue. What events hit their attendance targets and which ones flopped. No off-the-shelf marketing tool could ingest that. They could ingest basic stuff—which channels we're running ads on, how much we spent, what the conversion rate was. But they couldn't process ten years of nuanced institutional knowledge.
So we built for it. And suddenly, our internal tool was smarter than any commercial alternative ever could be.
This is true for most businesses. You have data competitors don't have. Customers don't have. Even Salesforce doesn't have. If you're not building tools that leverage that, you're leaving value on the table.
4. Customization Has No Marginal Cost
With traditional software, customization is expensive. You either hire developers or you live with the product as-is. Either way, it costs money and time.
With AI, customization is free. "Can you add this field?" "Done." "Can you change the logic to work this way?" "Done." "Can you generate a different output format?" "Done."
No code review. No deployment. No risk. If you ask for something, it's live in seconds.
That changes everything. Features that would cost $50K to build and get pushed to a quarterly release cycle? You get them in your app in under an hour.
Case Study 1: The AI VP of Marketing (Internal Tool)
Let's walk through a concrete example. We needed a tool that could do something no off-the-shelf platform could do: ingest ten years of proprietary marketing data and generate a weekly marketing plan.
Not just any plan. A plan that understood our business context. What works for SaaStr might not work for a healthcare startup. What worked in 2022 might not work in 2026. What drove event attendance in Q1 might not work in Q4.
No generic tool can model that.
The Problem
We have data scattered across:
- Historical sponsorship records (which sponsors converted to customers, which didn't)
- Event attendance data (ticket sales, no-shows, demographics)
- Campaign performance (which emails drove clicks, which ads drove registrations)
- Revenue correlation (which events generated the most qualified leads)
- Competitive analysis (what other platforms are doing to drive attendance)
Each of these data sources lives in different places. Some in Salesforce. Some in Marketo. Some in spreadsheets. Some just in people's heads.
The task: build a system that could pull all of this together, analyze it, and tell us what to do next.
A consultant would charge a fortune and take months to deliver this.
We gave the AI a day.
How We Built It
Step one: we wrote a specification. Not a detailed technical specification. A business specification. Here's what we need:
- Ingest all historical marketing data (connect to Salesforce, Marketo, and our registration platform via APIs)
- Analyze what's driven revenue in the past
- Look at competitive benchmarks (what other event platforms do to drive attendance)
- Generate a six-month rolling plan with weekly tactical to-dos
- Tell us each morning: are we on track? What should we adjust?
- Flag risks (if we're 20% behind attendance targets, we should change strategy)
Step two: we pointed Claude at the Salesforce docs, Marketo docs, and our internal APIs. We said: "Here's our data. Generate a plan."
Step three: we iterated. "That plan assumes we can sell 10,000 tickets. We can't. Max capacity is 500. Replan." Done. "This assumes sponsorship revenue is correlated with attendance. It's not—sponsors are actually uncorrelated. High sponsor revenue years sometimes have low attendance. Replan." Done.
Step four: we connected it to our actual systems via Zapier webhooks. So every morning, the AI wakes up, checks Salesforce, checks our registration platform, compares it to historical trends, and generates a plan.
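The daily "are we on track?" check is, at its core, simple pacing math. Here's a hedged sketch of that logic using the 20% threshold from the spec; the straight-line pacing model is an assumption for illustration, not our actual system:

```python
from dataclasses import dataclass

@dataclass
class Pace:
    target: int         # attendance target for the quarter
    registered: int     # registrations so far
    pct_elapsed: float  # fraction of the sales window already elapsed (0..1)

def check_pace(p: Pace, risk_threshold: float = 0.20) -> dict:
    """Compare registrations to a straight-line pace and flag big gaps.

    The 20% threshold mirrors the spec above ("if we're 20% behind
    attendance targets, we should change strategy"); the linear pacing
    model is an illustrative assumption.
    """
    expected = p.target * p.pct_elapsed
    gap = (expected - p.registered) / expected if expected else 0.0
    return {
        "expected_by_now": round(expected),
        "behind_by_pct": round(gap * 100, 1),
        "flag": gap >= risk_threshold,
    }

# Halfway through the window, 160 of 500 seats sold:
status = check_pace(Pace(target=500, registered=160, pct_elapsed=0.5))
print(status)
```

The real version layers historical correlations on top, but the flag-and-act skeleton is this small.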
Total time: one day.
Total cost: $0 in development, plus the Claude subscription we were already paying for.
Total value: we now have a source of truth for marketing decisions that no off-the-shelf tool could provide.
The Results
This tool is now the single source of truth for marketing decisions. Every morning, it tells us:
- Expected attendance this quarter (based on historical correlations and current sales pipeline)
- Revenue forecast (based on sponsor conversion rates and ticket volume)
- Risk flags ("You're 15% behind pace. Switch to paid ads by Friday.")
- Recommended actions ("Send reminder emails to people who registered but haven't attended yet. That's a 30% no-show rate, historically.")
- Competitive analysis ("Three competitors just released new features. Attendance interest is down 8%. Update your content angle.")
None of this was custom-coded in the traditional sense. It was all specified in prose, Claude executed it, and it lives in production.
Case Study 2: The Sponsors Portal (External-Facing Tool That Killed a SaaS Subscription)
Now let's look at the more controversial case. This one actually replaced a paid SaaS tool mid-contract.
We were paying for a sponsor portal platform. It was fine. It wasn't great. It did maybe 60% of what we needed. But the critical problem: it had zero AI features.
You'd submit a form with your company name, and you'd have to manually fill in your company size, employee count, description, and industry. Why? The tool couldn't enrich the data. Couldn't look up the company. Couldn't auto-fill.
That's not a product from 2025. That's a product from 2015.
The Challenge
We had two constraints:
- One of our team members, Amelia, is not an engineer. She's go-to-market. She knew Claude but had never built a web app.
- Single sign-on (SSO) had to work. It's a non-negotiable feature for enterprise sponsors.
SSO is notoriously complex to implement. Most DIY implementations fail. Enterprise customers expect it to work perfectly or not at all.
So we set a constraint: if Amelia could get SSO working in a day, we'd build the whole portal. If not, we'd stick with the old tool.
How We Built It
Amelia started with Claude's co-work feature (which lets the AI see your screen, your code, and your browser in real-time). She opened the old sponsor portal in one tab and opened Replit in another.
Step one: write the spec. She told Claude: "Look at this tool. Write me a spec for a replacement. Make sure it has AI features." Claude generated a full spec: authentication, company data enrichment, sponsor dashboard, task management, booth selection, and more.
Step two: use Claude to extract the data. She had 150+ sponsor contracts in a folder. She needed to extract company name, website, number of passes, speaking slots, and contract date. She told Claude: "Go through all these contracts and extract the data in CSV format."
One hour later, done. All 150 companies, structured data, formatted for import.
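To show the shape of that extraction output: in practice Claude read the contracts directly, but a stand-in version over plain text makes the CSV structure concrete. The contract wording and regex below are invented for illustration:

```python
import csv, io, re

# Stand-in for the extraction step. The field names come from the article;
# the contract text format is invented.
SAMPLE_CONTRACTS = [
    "Company: Acme Corp\nWebsite: acme.example\nPasses: 10\nSpeaking slots: 1\nSigned: 2025-03-01",
    "Company: Globex\nWebsite: globex.example\nPasses: 4\nSpeaking slots: 0\nSigned: 2025-04-15",
]

FIELDS = ["company", "website", "passes", "speaking_slots", "signed"]
PATTERN = re.compile(
    r"Company: (?P<company>.+)\nWebsite: (?P<website>.+)\n"
    r"Passes: (?P<passes>\d+)\nSpeaking slots: (?P<speaking_slots>\d+)\n"
    r"Signed: (?P<signed>[\d-]+)"
)

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
for text in SAMPLE_CONTRACTS:
    m = PATTERN.search(text)
    if m:  # skip contracts the pattern can't parse rather than guessing
        writer.writerow(m.groupdict())

csv_output = buf.getvalue()
print(csv_output)
```

An LLM handles the messy real-world variation in contract wording; the output contract (a clean CSV with fixed columns) is what matters.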
Step three: implement SSO. She told Claude: "I need to add Google SSO to this Replit app." Claude walked her through it. Code blocks, explanations, testing instructions. She tested it with different emails. Tested sign-out. Tested persistence.
Step four: auto-generate sponsor codes. She said: "Now go into our registration system and create unique sponsor codes for each company." Claude did it. Promo codes, nomination passes, everything.
Step five: test with real sponsors. She sent it to 12 sponsors first. Watched the submissions come through on the backend. No errors. No confusion.
Step six: ship it. SaaStr Sponsors.com went live.
Total time: 36 hours.
Total cost: $0 in development.
Total risk: relatively low (it's gated, sponsors are our friends, we tested with a small group first).
The Economics
We were paying $3,000 per month for the old tool. Twelve-month contract. Three months remaining.
We paid a $2,000 early termination fee to kill the contract. That netted out to $7,000 saved on the three remaining months, and $36,000 a year going forward.
More importantly, we gained the ability to customize. The old tool didn't support basic things like auto-enrichment or custom reporting. Now, any request Amelia has, Claude can implement it in a couple hours.
The sponsor portal now does things the old tool could never do:
- Auto-enriches company data from website URLs
- Suggests sponsorship tiers based on company size
- Generates sponsor invoices with custom terms
- Integrates with our registration platform so sponsors see real-time ticket data
- Alerts sponsors when they're close to hitting booth capacity
All of this was built incrementally. Feature by feature. As we needed it.
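The auto-enrichment flow above can be sketched in a few lines. The lookup table stands in for whatever enrichment source the agent actually uses (an API or an LLM call), and the company figures and tier thresholds are invented for illustration:

```python
from urllib.parse import urlparse

# Stand-in enrichment source; in the real portal an AI agent or enrichment
# API does this lookup. All company data here is invented.
KNOWN_COMPANIES = {
    "acme.example": {"name": "Acme Corp", "employees": 1200, "industry": "DevTools"},
    "globex.example": {"name": "Globex", "employees": 80, "industry": "Fintech"},
}

def suggest_tier(employees: int) -> str:
    # Tier thresholds are illustrative assumptions, not real pricing.
    if employees >= 1000:
        return "Platinum"
    if employees >= 200:
        return "Gold"
    return "Silver"

def enrich(website_url: str) -> dict:
    """Turn a website URL into a pre-filled sponsor profile plus a tier suggestion."""
    domain = urlparse(website_url).netloc or website_url
    profile = KNOWN_COMPANIES.get(domain.lower())
    if profile is None:
        return {"website": website_url, "enriched": False}
    return {**profile, "website": website_url, "enriched": True,
            "suggested_tier": suggest_tier(profile["employees"])}

print(enrich("https://acme.example"))
```

This is exactly the "auto-fill from a URL" behavior the old portal couldn't do: the sponsor types a website, and the form fills itself.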
The New Decision Framework: When to Build, When to Buy
So here's the updated framework. It's not as simple as "build the 10%." It's more nuanced.
When to Buy
Buy when:
- Core, commoditized function with high compliance overhead. You shouldn't build your own CRM. You shouldn't build your own payment processor. You shouldn't build your own email delivery system. The liability, the compliance, the data warehousing—it's not worth it. Buy from companies like Salesforce, Stripe, and SendGrid.
- The product has AI integrated and is extensible. If you're buying a tool in 2025, it should have AI baked in. And it should have APIs that let you extend it. Salesforce has Einstein AI. HubSpot has content AI. Marketo has predictive scoring. These aren't afterthoughts. They're core.
- Network effects matter. If the product's value increases with other users, buy it. Slack is better when more of your team uses it. GitHub is better when your industry uses it. These create value that custom tools can't.
- You need 90%+ feature parity with the out-of-the-box product. If the off-the-shelf tool does 95% of what you need, and building would take three weeks, buy it. Especially if the vendor is actively developing and adding features.
- Data isn't your competitive advantage. If anyone could get this data, don't build to analyze it. Use the commercial tool. If your competitive advantage is how you analyze the data, then maybe build.
When to Build
Build when:
- The SaaS tool has zero AI functionality and your use case is data-heavy. This is the new exception to the 90/10 rule. If a tool is essentially a dumb database with a UI, and you have proprietary data to analyze, building beats buying.
- You have proprietary data or workflows the tool can't ingest. If you have ten years of institutional knowledge, if your workflows are unique to your business, if your data lives across five different systems—build a tool that can synthesize it. No generic tool can.
- The tool you need doesn't exist yet, or all existing options are immature. Emerging categories (like AI agents themselves) don't have mature commercial options yet. Build. In a few years, buy the winner.
- The build is internal-facing and low risk. This is critical. Don't build external-facing, customer-facing tools unless you're ready to own support, compliance, and uptime. We built the Sponsors Portal external-facing because we knew our sponsors. We tested carefully. We owned the risk.
- The expected payoff is high and immediate. If building saves you 20 hours per week, and that freed-up time creates $50K in additional revenue, build. But if it saves you two hours per week, buy the SaaS tool.
- You have AI co-workers available to do the building. This is new. Five years ago, building required hiring. Now it requires a Claude subscription and someone who can write specs. If you have that, the economics shift.
The Data Question
Data is really the deciding factor.
If you have proprietary data—customer data, historical performance data, institutional knowledge—that's when custom tools start winning. Because the off-the-shelf tool was built to handle generic data. It can't learn from your specific patterns.
But here's the guardrail: if you're building a data-heavy tool, you need to be careful about what data you're using. If you're scraping customer data, you need to make sure you have permission. If you're using employee data, you need privacy controls. If you're handling financial data, you need security.
This is why we built the AI VP of Marketing for internal use only. We're using proprietary business data. It's ours. We own it. No risk.
But if we were to build a tool that analyzed customer data, we'd need to be much more careful. Data residency. Encryption. Access controls. All the things the big SaaS vendors have.
The Role of AI Co-workers: Claude, Replit, and the Future
None of this happens without the right AI co-workers. We're using a specific tech stack that makes this possible.
Claude (Anthropic)
Claude is the co-worker that specifies and builds. It can:
- Read documentation and generate specs
- Write code (frontend, backend, database schemas)
- Debug broken code
- Explain complex concepts
- Iterate based on feedback
Critically, Claude Co-work (the feature that lets it see your screen) changes the game. Instead of describing what you want, you can just show it. "Here's the old tool. Make me a replacement." Done.
We've used Claude to spec out everything from API integrations to database designs to user workflows. It's not perfect (it still hallucinates sometimes), but it's good enough to save weeks of engineering time.
Replit (Deployment)
Replit is where the code lives. It's got a few killer features:
- Built-in deployment (your app is live when you hit save)
- Database included (no separate database setup)
- Integrations with common services (authentication, APIs, etc.)
- Version control built in
For quick builds, Replit removes the DevOps burden. You don't need to set up AWS, configure a database, figure out deployment pipelines. You write code and it's live.
For production apps that need more scale, you'd probably migrate to proper infrastructure. But for internal tools? Replit is perfect.
Zapier (Integration)
Zapier is the glue that connects everything. When we needed our AI VP of Marketing to ingest data from Salesforce and our registration platform, Zapier made it frictionless.
No API documentation needed. No authentication negotiation. Just: "When X happens in Salesforce, send the data to our app." Done.
This is critical for AI agents. Agents need to read data from multiple sources, process it, and write data back. Zapier handles the plumbing.
Together, these tools have a multiplier effect. Each one is useful alone. Together, they collapse the time to build a functional app from weeks to days.
The Hidden Costs of Building: What People Miss
Building isn't free. There are real costs beyond the initial development time.
Support and Maintenance
When you build something, you own the support. Something breaks at 2 AM on a Sunday? You fix it. A sponsor can't log in? You debug it. Your database fills up? You upgrade it.
With commercial software, you call support. They own it.
For internal tools, this trade-off makes sense. We own the Sponsors Portal too. If it breaks, we fix it. But it's gated to a known group of sponsors, so the blast radius is small.
For external-facing tools, this gets dangerous. If your tool breaks, customers lose trust. You lose data. You might lose the business.
We accepted this risk for the Sponsors Portal because we know our sponsors, we tested carefully, and the upside (saving $36K per year) was worth it. But we wouldn't build a replacement for Salesforce. That risk is too high.
Security and Compliance
When you build something that touches data, you're responsible for:
- Data encryption (in transit and at rest)
- Access controls (who can see what)
- Audit logs (what happened and when)
- Compliance (SOC 2, GDPR, HIPAA, etc.)
- Regular security audits
- Penetration testing
- Vulnerability management
Commercial vendors build this stuff into the product. When you build, you have to build it too.
For the Sponsors Portal, the risk is contained. We're storing company data, but nothing super sensitive. The risk is manageable.
For a tool that handles customer data? You'd need to hire a security person. You'd need compliance lawyers. You'd need insurance.
Suddenly, your "free" build costs $200K per year.
Scalability
When you're using a SaaS tool, scalability is someone else's problem. You add more users, the tool handles it.
When you build, you handle it. Your database gets slow? You optimize queries or upgrade your server. More users than expected? You need to load balance.
For internal tools with a fixed number of users, this is manageable. For external tools that grow, it's a real burden.
Integration Debt
Your custom tool needs to integrate with other systems. Right now, you're probably using Zapier webhooks. That works. But if you have 10+ integrations, webhook maintenance becomes a job.
Commercial tools have pre-built integrations with everything. They handle the maintenance.
Opportunity Cost
Maybe the biggest cost: your time. If you're spending 10 hours per week maintaining the tool, that's 10 hours you're not working on core business stuff.
Sometimes that trade-off makes sense. If maintaining the tool saves you 40 hours per week, net benefit is 30 hours. But if it saves you 5 hours and costs you 10, that's a losing trade.
What the Data Shows: Where Teams Actually Build AI Apps
We're not alone in this. Replit has been tracking what people are actually building. The data shows some surprising patterns.
Internal tools dominate. The most common builds are:
- Custom dashboards and reports (45% of builds). Teams have data they need to analyze. Commercial tools don't analyze it the way they need. So they build custom views.
- Workflow automation (28% of builds). "When X happens, do Y." That's usually integrations and custom logic that Zapier can't handle alone.
- Data enrichment (15% of builds). "Take this data and add more context to it." Auto-complete company information. Suggest next steps. Flag anomalies.
- Admin tools (8% of builds). Internal tools that help operations. Sponsor portals. Employee dashboards. Vendor management.
- Experimental products (4% of builds). Early-stage external-facing tools that might eventually become commercial products.
The key insight: almost all of these are internal-facing or low-risk. Nobody's building a replacement HubSpot in Replit. They're building the 10%.
But the 10% is getting bigger and more valuable.
Red Flags: When You Shouldn't Build
Here's the inverse. These are situations where building is a mistake.
Red Flag 1: You Don't Have a Champion
Building requires someone who owns it. Someone who's invested. Someone who will iterate, improve, and maintain it.
If you're saying "we should build this," but nobody's actually willing to do it, don't build.
Amelia took ownership of the Sponsors Portal. She researched, built, tested, and maintains it. That's not delegatable. If she didn't own it, it wouldn't exist.
Red Flag 2: You're Trying to Replicate a Core Vendor
Don't build a replacement for Salesforce. Don't build a replacement for Marketo. Don't build a replacement for Slack.
These are platform companies with thousands of engineers, billions in funding, and decades of experience. You'll lose.
Build the 10%. Not the 90%.
Red Flag 3: You Don't Fully Understand the Problem
If you're not sure what you're building for, don't build. Spec it out first. Write docs. Do discovery. Make sure you understand the problem deeply.
Amelia spent time with sponsors before building the portal. She understood their pain points. That's why the tool actually solved the problem.
Red Flag 4: The Opportunity Cost Is High
If the best use of your time is building a tool, build it. But if the best use of your time is selling, marketing, or building the core product, don't build an admin tool.
Your time is the scarcest resource. Don't waste it.
Red Flag 5: Data Privacy or Security Is Unclear
If you're not 100% sure you have legal permission to use the data, don't build. Talk to your lawyers. Better to spend $5K on legal review than to get sued later.
The Future: What Happens When Every Tool Gets AI?
Here's the thing. The SaaS vendors are reading the same internet we are. They see the build-vs-buy trade-off shifting. So they're adding AI to everything.
Salesforce added Einstein. Marketo added predictive scoring. HubSpot added content AI. Slack is adding AI search and summarization.
What this means: over the next 24 months, the tools that survive will be the ones that are AI-native from the ground up. Tools that treat AI as a core feature, not a checkbox.
If you're evaluating a SaaS tool today, ask:
- Is AI integrated into the core workflow, or is it an add-on?
- Can I feed the tool my proprietary data to improve its outputs?
- Can I extend the tool with custom logic, or is it locked down?
- Does the vendor have a real AI product, or are they just slapping ChatGPT on top of existing features?
Vendors that answer yes to those questions will win. Vendors that don't will eventually be replaced by AI-native solutions.
This doesn't mean the build-vs-buy decision changes overnight. It means the buy side gets better. But for use cases where you have unique data or workflows, the build side will always win.
Implementing the 90/10 Framework: A Practical Playbook
Here's how to actually apply this framework in your business.
Step 1: Audit Your Current SaaS Stack
Make a list of every SaaS tool you pay for. For each tool, ask:
- Does it have AI features integrated? (Yes/No)
- Are those features solving a problem, or are they window dressing? (Real/Fake)
- How much of the tool are we actually using? (10%, 30%, 70%, 90%)
- How much proprietary data could improve this tool's outputs? (None/Some/A lot)
- How much would it cost to replace? (In time and money)
High-priority replacement candidates:
- Low AI integration + Low feature usage + High proprietary data potential = Build candidate
- No AI integration + Low feature usage + Internal-facing = Probable build candidate
- High cost + Moderate feature usage + Average proprietary data = Maybe build
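These triage rules can be encoded as a quick scoring function, which is handy when auditing a long stack. The thresholds (e.g. "low usage" meaning 30% or less) are judgment calls on our part, not part of the framework itself:

```python
def replacement_priority(has_ai: bool, usage_pct: int, data_potential: str,
                         internal_facing: bool) -> str:
    """Rough triage per the audit above.

    data_potential is one of "none", "some", "a lot". The usage cutoff
    (30%) is an illustrative assumption.
    """
    low_usage = usage_pct <= 30
    if not has_ai and low_usage and data_potential == "a lot":
        return "build candidate"
    if not has_ai and low_usage and internal_facing:
        return "probable build candidate"
    if not has_ai or low_usage:
        return "maybe build"
    return "keep buying"

# A tool with no AI, barely used, sitting on a pile of proprietary data:
print(replacement_priority(has_ai=False, usage_pct=30,
                           data_potential="a lot", internal_facing=False))
```

Run it over every row of your audit spreadsheet and the build candidates surface themselves.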
Step 2: Find Your Champion
Pick a tool you want to replace. Find someone (doesn't need to be an engineer) who wants to own it. Give them:
- Claude access
- Replit account
- Zapier account
- Time (20% of their week for 2-3 weeks)
- Permission to fail (if it's not working, you can go back to the old tool)
Step 3: Write the Spec
Don't code yet. Write a specification. What does the tool need to do? What data does it need? What's the output?
Have Claude review the spec. Iterate. Get it right before building.
Step 4: Build MVP
Build the minimum viable product. 70% feature parity is fine. Focus on the data and the outputs. Make sure it actually solves the problem.
Step 5: Test with Real Users
If it's internal-facing, test with a small group. If it's external-facing, test carefully. Watch for edge cases. Fix bugs.
Step 6: Make the Call
After 2-3 weeks, ask: "Is this better than the commercial tool?" If yes, ship it. If no, go back to the commercial tool. No shame either way.
The ROI Calculation: When Does Building Actually Win?
Let's do the math. When is building cheaper than buying?
Assumptions:
- Commercial tool: $500/month ($6K/year)
- Development time: 40 hours (at $0 with AI, because you're paying for Claude anyway)
- Maintenance time: 5 hours/month ($2K/year in opportunity cost)
- Hosting: $100/month ($1.2K/year on Replit)
Total cost to build and maintain for one year: $3.2K
Total cost to buy for one year: $6K
Break-even: ~10 months. After that, building wins.
But here's the real win: after year one, the math flips. Year two costs are maintenance and hosting only ($3.2K vs $6K for the commercial tool).
Over three years:
- Buy: $18K
- Build: $9.6K
That's a 50% discount for building.
But here's the caveat: this assumes:
- The tool actually works (no abandoned projects)
- You don't hit scaling limits
- The commercial tool doesn't become dramatically better
- You have someone to maintain it
If any of those assumptions break, the math changes.
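To make the assumptions explicit, the three-year comparison can be reproduced as a tiny calculator; change any parameter and the totals update. Development cost is treated as $0 per the assumptions above, which is this article's framing, not a universal truth:

```python
def cumulative_costs(months: int, saas_per_month: float = 500.0,
                     maintenance_per_year: float = 2000.0,
                     hosting_per_year: float = 1200.0) -> tuple:
    """Cumulative buy vs build cost under the assumptions above.

    Development is treated as $0 (the Claude subscription is a sunk cost).
    Returns (buy_total, build_total) in dollars.
    """
    buy = saas_per_month * months
    build = (maintenance_per_year + hosting_per_year) * months / 12
    return buy, build

buy3, build3 = cumulative_costs(36)  # three years
print(f"Buy: ${buy3:,.0f}  Build: ${build3:,.0f}")
```

Plug in your own tool's price and your own maintenance estimate; if the build line ever crosses above the buy line, the answer is buy.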
Common Mistakes When Building AI Apps
We've made these mistakes. So has everyone else.
Mistake 1: Under-Specifying the Project
You start building without a clear spec. Scope creeps. The project takes three months instead of three weeks.
Fix: Write a detailed spec first. Have Claude review it. Get agreement from stakeholders. Then build.
Mistake 2: Overestimating AI Capabilities
You ask Claude to do something it can't do consistently. It works in the demo, fails in production.
Fix: Test edge cases. Don't trust AI to handle 100% of the logic. Build guardrails. Have humans in the loop where it matters.
Mistake 3: Ignoring Data Quality
You build a tool that analyzes data, but the data is messy. Results are garbage.
Fix: Clean your data first. Validate inputs. Have AI flag anomalies. Don't feed it bad data.
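A minimal version of that input-validation guardrail: split incoming rows into clean data and human-readable flags before anything reaches the AI. The specific checks (required name, non-negative spend) are illustrative; a real pipeline would encode its own invariants:

```python
def validate_rows(rows: list) -> tuple:
    """Separate usable rows from flagged ones, with reasons a human can read."""
    clean, flags = [], []
    for i, row in enumerate(rows):
        problems = []
        if not row.get("name"):
            problems.append("missing name")
        spend = row.get("spend")
        if not isinstance(spend, (int, float)) or spend < 0:
            problems.append(f"bad spend: {spend!r}")
        if problems:
            flags.append(f"row {i}: " + ", ".join(problems))
        else:
            clean.append(row)
    return clean, flags

clean, flags = validate_rows([
    {"name": "Email blast", "spend": 1200},
    {"name": "", "spend": -50},
])
print(flags)
```

Garbage rows get flagged for a human instead of silently poisoning the analysis.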
Mistake 4: Building Without the Right Tools
You try to build on AWS from scratch. You spend weeks on infrastructure before you write a line of business logic.
Fix: Use Replit or similar. Get something running in 30 minutes. Deploy immediately. Iterate.
Mistake 5: Launching to Production Without Testing
You build something in 24 hours and ship it to customers. It breaks. Now you have angry customers and a reputation problem.
Fix: Internal testing first. Small group test. Rollout plan. Monitor for issues. Have a kill switch.
Mistake 6: Building Something You Don't Actually Need
You think you need a custom tool. So you build it. Turns out, the commercial tool you already have does 90% of what you need.
Fix: Use existing tools first. Only build if you've really exhausted the alternatives.
The Honest Assessment: What Building Gets Wrong
We love building. It's fast. It's empowering. It's fun.
But let's be honest about the downsides.
You'll Find Bugs Your Vendor Tested For
Commercial vendors have QA teams. They test edge cases you've never thought of. When you build, you're not QA'ing those cases.
Example: the Sponsors Portal didn't handle special characters in company names initially. The first time someone from "L'Oreal" tried to sign up, it broke. A commercial vendor would've caught that.
We fixed it. But we had to experience the failure first.
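That L'Oreal bug is the classic unescaped-quote failure. Here's a sketch of the failure and the standard fix (parameterized queries), using SQLite as a stand-in for whatever database the portal uses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sponsors (company TEXT)")

company = "L'Oreal"

# Naive string interpolation breaks on the apostrophe:
try:
    conn.execute(f"INSERT INTO sponsors VALUES ('{company}')")
except sqlite3.OperationalError as e:
    naive_error = str(e)  # syntax error near "Oreal"

# Parameterized queries handle it correctly (and block SQL injection too):
conn.execute("INSERT INTO sponsors VALUES (?)", (company,))
stored = conn.execute("SELECT company FROM sponsors").fetchone()[0]
print(stored)
```

It's the kind of edge case a vendor's QA suite covers on day one and a weekend build discovers in production.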
You'll Hit Scaling Limits Faster Than Expected
Your custom tool works great with 100 users. Then you hit 1,000 users and the database slows to a crawl.
Commercial vendors have already solved this. They've scaled to millions of users. You're starting from zero.
You'll Miss Features That Seem Obvious in Hindsight
After using the Sponsors Portal for two weeks, we realized we needed to export sponsor data. We built it in, but a vendor would've had it from day one.
These aren't showstoppers. But they add up.
You'll Spend More Time Maintaining Than Building
The first week, you're coding. The next year, you're fixing bugs and responding to requests.
Commercial vendors have support teams. You don't.
The honest truth: building is not a shortcut. It's a different path. Sometimes it's better. Often it's worse. You have to know which is which.
When to Walk Away: Abandoning a Build
Sometimes the right call is to admit that building was a mistake and go back to the commercial tool.
Here's when to make that call:
- It's taken longer than expected and you're not close to done. If you planned 4 weeks and you're 8 weeks in and only 40% done, walk.
- The commercial tool has released features that eliminate the need to build. This happens faster than you think.
- You're now spending more time maintaining than building. If maintenance is eating up the whole team, that's a sign it's not working.
- You're hitting limits you can't overcome. Scaling, security, compliance—if you're stuck and it's not fixable, walk.
- The business priorities have changed. You started building because you had a problem. Now you don't have that problem. Let it go.
Walking away is not failure. It's smart reallocation of resources.
FAQ
What is the 90/10 rule for AI agents?
The 90/10 rule is a framework for deciding when to build custom AI tools versus when to buy third-party SaaS solutions. The traditional rule was: buy 90% of what you need off the shelf, and only build the 10% where no solution exists. However, in 2025, there's a critical update: also build when a SaaS tool has zero AI functionality integrated, especially if you have proprietary data that could improve its outputs.
When should I build a custom AI agent instead of buying SaaS?
You should build when you have proprietary data or unique workflows that generic SaaS tools can't ingest or process, when the SaaS alternative has zero AI functionality, when the tool is internal-facing and low-risk, when your expected payoff is high and immediate, and when you have AI co-workers (like Claude) available to do the building. You should not build when you're trying to replicate a core vendor's functionality, when you lack a champion to own the project, or when data privacy and security questions are unresolved.
How long does it actually take to build a custom AI app?
With modern AI tools like Claude and deployment platforms like Replit, you can build a functional internal tool in 24-72 hours. This assumes you have a clear specification, you're not dealing with complex compliance or security requirements, and you're using existing APIs and integrations. External-facing tools take longer because of testing, but even those can launch in 1-2 weeks. This is dramatically faster than traditional software development, which would take weeks or months.
What's the real cost of building versus buying?
Development cost for AI-built tools is minimal (basically your Claude subscription), but you need to account for maintenance (5-10 hours per month) and hosting ($100-500/month depending on scale). Over three years, custom tools typically cost 50% less than commercial SaaS for internal use cases. However, if you include support burden, security overhead, and scalability costs, commercial tools can be cheaper, especially if you're not using 70%+ of their features.
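The comparison above is easy to run on your own numbers. A back-of-the-envelope calculator using the ranges in this answer — every input is an assumption you should replace with your own figures:

```python
def three_year_cost(monthly_saas: float,
                    hosting_monthly: float,
                    maintenance_hours_monthly: float,
                    hourly_rate: float,
                    ai_subscription_monthly: float = 100.0) -> dict:
    """Rough 3-year build-vs-buy comparison. All inputs are
    assumptions; hosting and maintenance ranges come from this
    article ($100-500/mo hosting, 5-10 hrs/mo maintenance)."""
    months = 36
    buy = monthly_saas * months
    build = (hosting_monthly
             + ai_subscription_monthly
             + maintenance_hours_monthly * hourly_rate) * months
    return {"buy": round(buy, 2), "build": round(build, 2)}

# Example: $1,000/mo SaaS vs $300/mo hosting + 7.5 hrs/mo at $60/hr
print(three_year_cost(1000, 300, 7.5, 60))
# -> {'buy': 36000.0, 'build': 30600.0}
```

Note how sensitive the result is to the maintenance line: at a high hourly rate, those 5-10 hours per month can erase the savings entirely.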
What tools do I need to build a custom AI app?
The core stack is Claude (for building), Replit (for deployment), and Zapier (for integrations). You can add Notion for documentation, Figma for design mockups, and whatever integrations your app needs (the Salesforce or Slack APIs, for example). This stack removes the DevOps burden and lets non-engineers ship production apps.
Should I build if I'm not an engineer?
Yes. This is the biggest shift from five years ago. You don't need to be an engineer to build with Claude and Replit. You need to be able to write a clear specification, understand your problem deeply, and iterate based on feedback. Technical skills are increasingly optional. Problem-solving skills are essential.
What happens if my custom tool breaks in production?
You fix it, or you fall back to the old tool while you fix it. Have a kill switch. Have monitoring. Have tests. This is why external-facing tools are riskier than internal tools. For internal use, a few hours of downtime is annoying. For external use, it can kill your business. Assess risk before you build.
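The kill switch plus fallback can be as simple as a wrapper that routes around the custom tool when it misbehaves. A minimal sketch — the client objects and environment-variable name are hypothetical:

```python
import os

def fetch_sponsors(portal_client, legacy_client):
    """Serve reads from the custom tool unless the kill switch is on
    or the tool errors out. (Sketch: clients and the env-var name
    are placeholders for your own setup.)"""
    if os.environ.get("PORTAL_KILL_SWITCH") == "1":
        return legacy_client.fetch_sponsors()  # manual fallback
    try:
        return portal_client.fetch_sponsors()
    except Exception:
        # Custom tool broke in production: degrade to the old tool
        # instead of failing the user.
        return legacy_client.fetch_sponsors()
```

Pair this with monitoring that alerts when the fallback path fires, so "a few hours of downtime" doesn't quietly become a few weeks of running on the old tool.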
How do I know if I have too much proprietary data to use a commercial tool?
If you have 10+ years of historical data, if your workflows are materially different from how competitors work, if your data lives across 5+ different systems, or if your competitive advantage depends on how you analyze data, you probably have enough proprietary data to justify building. Test it: give Claude access to your data and ask it to generate insights. If the insights are valuable, build. If they're generic, stick with the commercial tool.
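That "test it" step can be scripted in a few lines with the Anthropic Python SDK (`pip install anthropic`). A sketch under assumptions — the prompt wording is mine, the model name is a placeholder, and you'd swap in a real sample of your data:

```python
import json

def build_insight_prompt(records: list[dict]) -> str:
    """Wrap a sample of your data in a prompt asking for insights,
    with an explicit escape hatch for generic answers."""
    sample = json.dumps(records[:50], indent=2, default=str)
    return (
        "Here is a sample of our internal data:\n\n"
        f"{sample}\n\n"
        "List three non-obvious insights. If you can only produce "
        "generic observations, say so explicitly."
    )

def ask_claude(records: list[dict]) -> str:
    """Send the prompt to Claude. Requires ANTHROPIC_API_KEY in the
    environment; the model name below is a placeholder."""
    import anthropic

    client = anthropic.Anthropic()
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use a current model
        max_tokens=1024,
        messages=[{"role": "user", "content": build_insight_prompt(records)}],
    )
    return reply.content[0].text
```

If the answer that comes back would be equally true of any company in your industry, that's your signal to stick with the commercial tool.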
What's the biggest mistake people make when building AI tools?
Under-specifying the project. They start building without a clear spec, the scope creeps, and they end up with a monster project that takes forever. Write the spec first. Have Claude review it. Iterate on the spec until it's locked down. Then build. This saves weeks.
When will SaaS vendors make building irrelevant?
When they build AI directly into their core workflows and allow you to feed them proprietary data to improve outputs. This is happening now (Salesforce, Marketo, HubSpot are all moving this direction), so the advantage of building will narrow. But data will always be the deciding factor. If you have unique data that competitors don't have, custom tools that leverage that data will always outperform generic solutions.
The Bottom Line
The 90/10 rule still works. But it's evolved.
Buy when the tool is mature, has AI integrated, and handles a commodity function. Salesforce for CRM. Stripe for payments. Slack for communication.
Build when you have unique data, unique workflows, or when the commercial alternative has zero AI functionality. And when you build, build small. Internal tools. Low-risk experiments. The 10%.
What's changed isn't the framework. It's the economics. Building used to be expensive. Now it's cheap. That shifts the math in favor of building for more use cases.
But cheap doesn't mean free. It doesn't mean easy. It doesn't mean you should build everything. You still need a clear problem, a champion who owns it, and a realistic ROI calculation.
The teams that'll win over the next few years are the ones that master both sides. The ones who buy the 90% from best-in-class vendors, and build the 10% that's unique to their business.
That's the new playbook. And if you're not thinking about it, your competitors are.