
The Dependencies You Haven't Mapped: What Happens to Your Business When SaaS and AI Providers Go Down
BCDR

Mahesh Chandran
CEO, Dataring
Ask a business unit leader "how many SaaS tools does your team use?" and you'll usually get a rough answer in the low teens. Ask their IT team the same question with visibility into actual usage and the answer is usually five to ten times higher. The average mid-sized organization runs on several hundred SaaS applications, a substantial fraction of which were adopted by business teams directly, without IT approval, to solve specific departmental problems.
Every one of those tools is a dependency. Every dependency is a potential point of failure. And almost none of them have been mapped, stress-tested, or protected against the inevitable outages that occur across the modern SaaS and AI landscape.
This post introduces a practical framework for visualizing your team's invisible dependencies — the Dependency Chain Map — and gives you a specific action plan for identifying and protecting the dependencies that matter most. It's aimed at business unit leaders who own the tools their teams use but have never examined what happens when those tools fail.
Your business runs on someone else's computers
The shift to SaaS was largely complete by about 2020. Most business operations now run on tools hosted, maintained, and controlled by third parties: Salesforce for CRM, HubSpot for marketing, Slack for communication, Notion for documentation, DocuSign for contracts, QuickBooks or NetSuite for finance, Workday for HR, Monday or Asana for project management, Zoom for meetings, and dozens of specialized tools beneath those.
The value proposition is obvious: you don't have to run the infrastructure, manage updates, or employ the engineers needed to keep systems running. The trade-off is equally obvious but rarely discussed: you have also lost direct control over everything. Your ability to function as a business depends on the reliability of vendors you don't manage, running on infrastructure you don't see, in regions you may not know about.
For most business leaders, this is invisible until something breaks. And something breaks more often than people think. Major SaaS providers experience serious outages multiple times per year. Cloud providers have regional incidents regularly. And as AI services become embedded in everyday business tools, the reliability ceiling is dropping — because AI services, even from the best providers, deliver meaningfully lower uptime than traditional SaaS.
The Dependency Chain Map
The Dependency Chain Map is a simple framework for making your invisible dependencies visible. It works at four levels.
Level 1: Business Process. The work your team actually does. "Generate monthly customer invoices." "Respond to Tier 1 support tickets." "Close an enterprise sale." "Execute the quarterly financial close."
Level 2: SaaS Tools. The tools each process depends on. Monthly invoicing might depend on your ERP for billing data, your CRM for customer records, your tax service for rate calculations, your banking portal for payment confirmation, and an email tool to send the invoice PDFs.
Level 3: Cloud Provider. Each SaaS tool runs on underlying cloud infrastructure. Slack runs primarily on AWS, as does HubSpot; Microsoft 365 runs on Azure; Salesforce's newer Hyperforce deployments run on hyperscale clouds such as AWS. Most SaaS vendors publish this information in their security or trust portals. If multiple tools you depend on run on the same cloud provider, you have concentration risk — one cloud outage can break many of your tools simultaneously.
Level 4: Geographic Region. Within each cloud provider, data and compute live in specific regions. If your SaaS tools all serve their European customers from the same AWS region in Ireland or Frankfurt, a single regional incident can take down multiple tools at once — even though they're nominally "different" vendors.
Walking through this chain for a single critical process usually produces an uncomfortable realization: your team's work depends on more things than you thought, and more of those things share underlying infrastructure than you realized.
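If it helps to see the four levels side by side, here is a minimal sketch of the map as a simple data structure, along with the concentration check described in Levels 3 and 4. Everything in it (tool names, providers, regions) is an illustrative placeholder, not a claim about any real vendor's infrastructure.

```python
from collections import Counter

# One entry per SaaS tool: the processes it supports (Level 1),
# the tool itself (Level 2), its cloud provider (Level 3),
# and the region serving you (Level 4). All values are placeholders.
dependency_map = {
    "CRM":     {"processes": ["Close an enterprise sale"],             "cloud": "AWS",   "region": "eu-west-1"},
    "Billing": {"processes": ["Generate monthly customer invoices"],   "cloud": "AWS",   "region": "eu-west-1"},
    "Chat":    {"processes": ["Respond to Tier 1 support tickets"],    "cloud": "AWS",   "region": "us-east-1"},
    "Docs":    {"processes": ["Execute the quarterly financial close"],"cloud": "Azure", "region": "westeurope"},
}

# Level 3 concentration: how many tools share each cloud provider?
by_cloud = Counter(t["cloud"] for t in dependency_map.values())
# Level 4 concentration: how many tools share a provider + region pair?
by_region = Counter((t["cloud"], t["region"]) for t in dependency_map.values())

for cloud, n in by_cloud.items():
    share = n / len(dependency_map)
    flag = "HIGH concentration" if share > 0.5 else "ok"
    print(f"{cloud}: {n}/{len(dependency_map)} tools ({share:.0%}) - {flag}")

for (cloud, region), n in by_region.items():
    if n / len(dependency_map) > 0.5:
        print(f"Single-region risk: {n} tools in {cloud}/{region}")
```

A spreadsheet works just as well; the point is simply to count how many of your critical tools share a provider, or a provider and region pair.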
The lesson of the 2024 CrowdStrike incident
On July 19, 2024, a routine software update from the cybersecurity vendor CrowdStrike crashed approximately 8.5 million Windows devices worldwide in a single morning. The cause was a configuration error in an update file, not an attack. The consequences were nonetheless catastrophic: Delta cancelled over 7,000 flights in the days that followed, hospitals cancelled surgeries, payment terminals stopped working, and the losses to Fortune 500 companies alone were estimated at $5.4 billion.
The lesson for business leaders is not "CrowdStrike is bad." The lesson is that a single trusted vendor pushing a routine update can take down critical operations for thousands of organizations simultaneously — and the failure cascades through dependency chains that most affected organizations had never mapped. The airlines and hospitals and banks that were hit did not have CrowdStrike on their disaster risk registers. Why would they? It was a background vendor doing routine work. But the dependency was there, invisibly, and the consequences of its failure were enormous.
Every business has vendors like this — tools that feel routine and invisible until they fail in unexpected ways.
The SaaS outage scenarios you should think through
Rather than try to protect against every possible failure, start by walking through a small number of realistic scenarios and asking what your team would actually do. The goal isn't to solve each scenario. The goal is to discover which ones you have no answer for.
Scenario 1: Slack or Teams is down for 24 hours. Where does your team communicate? How do managers coordinate their teams? How do customer escalations get routed? For most modern organizations the answer is "nowhere, we can't." There's usually a token email fallback that nobody actually uses, and most teams haven't coordinated real-time work over email in years.
Scenario 2: Salesforce is down for 8 hours during the last week of the quarter. Your sales team can't see their pipeline, can't update opportunities, can't access customer contact details, can't pull quotes, can't see commitment history, and can't run the reports leadership is asking for. What do they do? How do you communicate the situation to leadership and to customers? Which deals are at risk? How do you reconstruct what happened during the outage once systems come back?
Scenario 3: Google Workspace or Microsoft 365 is down for 6 hours. Email, calendar, shared documents, and video conferencing all freeze at the same time. How does your team operate? This scenario is rarer than the others but happens: major Microsoft 365 outages have occurred multiple times in recent years, each affecting millions of users at once.
Scenario 4: Your payment or billing provider (Stripe, Adyen, Chargebee, etc.) is down for 2 hours during business hours. Transactions fail, customers see errors, and revenue stops. What do you tell customers? How do you estimate the impact? What do you do about transactions that were in flight when the outage started?
Walking through these scenarios with your team for 30 minutes each usually reveals gaps you had no idea existed. Those gaps are the point of the exercise.
The AI dependency layer
A newer and less-understood risk is the rising dependence on AI services. Over the past two years, AI has been embedded into nearly every SaaS tool: Salesforce has Einstein, HubSpot has Breeze (formerly ChatSpot), Microsoft has Copilot, Google has Gemini, Notion has AI features, Slack has AI summaries. Many organizations also use dedicated AI platforms directly — OpenAI, Anthropic, Google's Gemini API, Azure OpenAI — for customer support automation, content generation, internal chatbots, document analysis, and decision support.
Two characteristics make AI dependencies especially important to map. First, AI services are typically less reliable than traditional SaaS. The major LLM providers currently run at roughly 99.3% uptime in practice, which sounds high but translates to about five hours of downtime per month — roughly seven times the downtime of a standard cloud VM running at 99.9%. Teams that build critical workflows on AI services are often building on infrastructure that is nearly an order of magnitude less reliable than what they're used to.
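As a quick sanity check on those numbers, converting an uptime percentage into expected downtime takes a few lines. The 99.3% and 99.9% figures below are the ones quoted above, not guarantees from any specific provider:

```python
HOURS_PER_MONTH = 730  # average month: 365.25 days * 24 hours / 12

def downtime_hours_per_month(uptime_pct: float) -> float:
    """Expected monthly downtime implied by an uptime percentage."""
    return HOURS_PER_MONTH * (1 - uptime_pct / 100)

ai = downtime_hours_per_month(99.3)  # ~5.1 hours/month
vm = downtime_hours_per_month(99.9)  # ~0.73 hours (~44 minutes)
print(f"99.3% uptime -> {ai:.1f} h/month of downtime")
print(f"99.9% uptime -> {vm * 60:.0f} min/month of downtime")
print(f"ratio: {ai / vm:.1f}x")      # ~7x more downtime
```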
Second, AI dependencies are often invisible. If your customer support team uses a tool that generates draft responses with AI, many of them don't think of themselves as "using AI" — they think of themselves as using their normal support tool, which happens to suggest responses. When the underlying AI service has an outage, the support tool's experience degrades in ways that can be confusing: suggestions stop appearing, or they appear but are bad, or the tool throws intermittent errors. The team doesn't know they've lost access to an AI service because they didn't know they were using one.
The practical question for business leaders: for each workflow that benefits from AI, what is the manual fallback? How long does the manual version take compared to the AI-assisted version? Has anyone tested the fallback? For most organizations, the honest answers are "we don't know," "much longer," and "no."
Building your Dependency Chain Map
Here is a practical, 90-minute exercise to build a Dependency Chain Map for your function.
Step 1: Start with your five most critical business processes. Use the list from your Minimum Viable Business Canvas if you've built one, or quickly identify five processes that your team cannot afford to lose for more than a day.
Step 2: For each process, list every SaaS tool it touches. Not just the main tool — all the tools. A sales process typically touches CRM, email, calendar, call recording, contract management, e-signature, payment processing, and an analytics dashboard. That's eight tools, minimum, for one process.
Step 3: For each tool, look up its underlying cloud provider. Most major vendors publish this in their trust portal or security documentation. If you can't find it in 60 seconds, send a one-line email to your account manager: "Which cloud provider and region hosts our instance?" They'll know.
Step 4: Look for concentration. Count how many of your critical tools run on each cloud provider. If more than half your stack is on a single provider, you have meaningful concentration risk. If more than half your stack is in a single region of a single provider, you have high concentration risk. This isn't necessarily bad — it's just information — but it should inform your risk planning.
Step 5: For each critical tool, fill out a Vendor Continuity Card. A Vendor Continuity Card is a one-page summary with: vendor name, what it does, which processes depend on it, cloud provider and region, vendor's published RTO/RPO (from their SLA), your manual workaround if the vendor is down, who on your team knows how to execute the workaround, and when the workaround was last tested. Most of these fields will be blank the first time you fill them out. The blanks are the work. (A minimal template sketch follows Step 6 below.)
Step 6: Identify the three biggest gaps. Don't try to fix everything. Pick the three dependencies where an outage would be most damaging and where you currently have no plan, and focus remediation there first.
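For Step 5, one way to keep yourself honest about the blanks is to write the Vendor Continuity Card as a structured record. A minimal sketch, assuming nothing beyond the fields listed above; the example vendor is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VendorContinuityCard:
    vendor: str
    what_it_does: str
    dependent_processes: list[str]
    cloud_provider: Optional[str] = None          # blank until you look it up
    region: Optional[str] = None
    published_rto: Optional[str] = None           # from the vendor's SLA
    published_rpo: Optional[str] = None
    manual_workaround: Optional[str] = None
    workaround_owner: Optional[str] = None        # who can execute it
    workaround_last_tested: Optional[str] = None  # date, or None if never

    def blanks(self) -> list[str]:
        """The unfilled fields -- i.e. the work still to do."""
        return [name for name, value in vars(self).items() if value is None]

# Example: a card for a hypothetical billing vendor, mostly blank on day one.
card = VendorContinuityCard(
    vendor="ExampleBilling",
    what_it_does="Generates and sends customer invoices",
    dependent_processes=["Generate monthly customer invoices"],
)
print("Still blank:", card.blanks())
```

Rerunning blanks() each quarter turns the card into a standing to-do list for that vendor.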
Questions to ask before adopting any new SaaS or AI tool
The best time to address SaaS continuity risk is before signing. Once you're locked into a tool, migration is painful and often impossible within the timeframe that a crisis allows. Before adopting any new tool, get clear answers to the following questions from the vendor's sales or security team:
Can we export our data in standard, non-proprietary formats? If the vendor only offers export in their own format, you have no real exit plan — you can't move the data anywhere useful.
What is the vendor's published uptime for the past 12 months? Not their SLA target — their actual measured uptime. Serious vendors publish this on a status or trust page.
Where is our data hosted? Can we choose the region? For GCC organizations this is especially important given data residency requirements.
What are the contractual data export procedures and timelines at contract termination? Specifically: how many days do you have to export after cancellation, and does the vendor charge for the export?
What happens to our data if the vendor is acquired or shuts down? Most contracts have change-of-control provisions that are worth reading.
For AI tools: what happens when the model is unavailable? Is there a fallback? What does the degraded experience look like? Have you ever experienced an extended outage, and how did customers cope?
Does the vendor's disaster recovery plan cover our data, or only their infrastructure? This is the question vendors least like answering, and the answer is often "only their infrastructure" — meaning if the vendor loses your data due to a disaster, their responsibility ends at rebuilding the servers, not at restoring your information.
Making dependency mapping a habit
A one-time Dependency Chain Map is useful. An ongoing practice is transformative.
Add dependency review to your quarterly business planning. Whenever your team adopts a new tool, update the map as part of the adoption decision, not after the fact. When IT announces that a tool is being replaced, update the map to reflect the new dependency. When a critical vendor has an outage that affects your team, document what happened, what the team did, and what you would do differently — then update the Vendor Continuity Card for that vendor.
This ongoing hygiene is not glamorous, but it's the difference between a team that handles outages smoothly and a team that scrambles every time a trusted vendor fails.
The bigger picture
Most business continuity programs focus on the big, dramatic failure modes: regional cloud outages, ransomware, physical destruction of infrastructure. These matter. But for the average business unit leader in 2026, the more frequent and more disruptive failures are smaller and quieter: a SaaS vendor outage that lasts two hours, an AI service degradation that lasts a day, a credential issue that locks your team out of a tool for a morning. None of these make the news. All of them stop work.
The business leaders who cope with these disruptions well are the ones who have done the mapping exercise, who know their dependency chain, who have identified their concentration risks, and who have at least one manual workaround for each critical tool. None of this requires IT infrastructure. It requires 90 minutes with your team and a willingness to look at the tools you use every day with a more skeptical eye.
If you want help running this exercise with your team, Dataring's BCDR consulting practice facilitates dependency mapping sessions as part of broader continuity engagements. Get in touch to schedule a working session, or explore our comprehensive guide to cloud disaster recovery in the Middle East for the underlying infrastructure context.




