

Most enterprise AI initiatives stall somewhere between proof-of-concept and production—McKinsey reports that over 80% of adopters see no material earnings impact from gen AI despite widespread adoption. The gap isn't usually the AI itself—it's the months of DevOps work, the security reviews that never end, and the creeping realization that you've locked yourself into a platform that might not exist in three years.
AI agents change the equation by handling complex workflows autonomously, but only when the underlying framework actually supports how enterprises operate. This guide covers what makes AI agents different from traditional automation, the maturity levels worth understanding, and the infrastructure requirements that separate pilots from production systems.
AI agents are software systems that can reason through problems, plan their own approach, and take actions without someone spelling out every step. When you give an AI agent a goal like "find all overdue invoices and send reminders to the right contacts," the agent figures out which systems to check, what data to pull, and how to format the messages. Traditional automation tools follow scripts. AI agents follow objectives.
The difference between AI agents and tools like chatbots or robotic process automation comes down to how they handle ambiguity. A chatbot matches keywords to pre-written responses. An RPA bot clicks through the same sequence every time, and breaks when anything changes. An AI agent interprets what you're trying to accomplish and works backward to determine what actions will get you there.
Three capabilities separate AI agents from simpler automation:
- Reasoning and planning: breaking a goal into ordered steps rather than following a fixed script.
- Taking action: connecting to real systems to retrieve data, update records, and trigger workflows.
- Adapting to feedback: evaluating what happened and adjusting course when results are unexpected.
AI agents combine several technical components to function inside organizations. Each component handles a different part of the problem.
AI agents break down big goals into smaller tasks through a process called chain-of-thought reasoning. If you ask an agent to "prepare the quarterly sales report," it recognizes that this involves pulling data from the CRM, grouping figures by region, comparing results against targets, and formatting everything into a readable document. The agent sequences these steps and handles dependencies between them.
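The sequencing step above can be sketched in code. This is an illustrative dependency-ordering sketch, not a real agent framework: the step names and the plan for the sales-report goal are hypothetical, and a production agent would generate the plan with a model rather than hardcode it.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    depends_on: list = field(default_factory=list)

def topological_order(steps):
    """Order steps so every dependency runs before the step that needs it."""
    ordered, done, remaining = [], set(), list(steps)
    while remaining:
        progressed = False
        for step in remaining[:]:
            if all(dep in done for dep in step.depends_on):
                ordered.append(step)
                done.add(step.name)
                remaining.remove(step)
                progressed = True
        if not progressed:
            raise ValueError("Circular dependency among steps")
    return ordered

# Hypothetical plan an agent might produce for "prepare the quarterly sales report"
plan = [
    Step("format_report", depends_on=["compare_to_targets"]),
    Step("pull_crm_data"),
    Step("group_by_region", depends_on=["pull_crm_data"]),
    Step("compare_to_targets", depends_on=["group_by_region"]),
]

for step in topological_order(plan):
    print(step.name)  # pull_crm_data first, format_report last
```

The point of the sketch is the separation of concerns: the agent decides *what* the steps and dependencies are, and a deterministic scheduler decides *in what order* they run.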
An agent becomes useful when it can actually do things in your systems. This means connecting to platforms like Salesforce, SAP, internal databases, Slack, and custom tools your team has built. Through these connections, the agent retrieves information, updates records, sends messages, and triggers workflows. Without access to real systems, an agent is just a chatbot with better language skills.
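One common pattern for giving an agent that kind of reach is a tool registry: capabilities are registered by name, and the agent invokes them without knowing the underlying API. The tool names and stub functions below are invented for illustration; real integrations would call actual Salesforce, SAP, or Slack clients.

```python
# Minimal tool-registry sketch. Names and return values are illustrative stubs.
TOOLS = {}

def tool(name):
    """Register a function as a capability the agent can invoke by name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("crm.lookup_contact")
def lookup_contact(invoice_id):
    # In production this would query the CRM; stubbed here.
    return {"invoice": invoice_id, "email": "ap@example.com"}

@tool("messaging.send")
def send_message(to, body):
    # In production this would hit a messaging API; stubbed here.
    return f"sent to {to}: {body}"

def invoke(name, **kwargs):
    """The agent calls tools by name, keeping planning decoupled from APIs."""
    return TOOLS[name](**kwargs)

contact = invoke("crm.lookup_contact", invoice_id="INV-104")
print(invoke("messaging.send", to=contact["email"], body="Invoice INV-104 is overdue"))
```

Because the agent only ever sees tool names and arguments, swapping a connector (say, moving from one CRM to another) doesn't change the agent's reasoning layer.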
Effective agents maintain two types of memory. Short-term memory tracks the current conversation or task. Long-term memory stores information from previous interactions, user preferences, and patterns the agent has learned over time.
The term "context window" refers to how much information an agent can consider at once. A larger context window means the agent can work with more background information, which matters when tasks involve long documents or complex histories.
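The two-tier memory idea can be sketched as follows. This is a toy model: the word count stands in very roughly for a model's token-based context window, and the class, budget, and stored facts are all invented for illustration.

```python
from collections import deque

class AgentMemory:
    """Sketch of two-tier agent memory. The 'budget' crudely models a
    context window by counting words; real systems count tokens."""
    def __init__(self, context_budget=50):
        self.short_term = deque()   # turns in the current conversation
        self.long_term = {}         # persisted facts and preferences
        self.context_budget = context_budget

    def remember_turn(self, text):
        self.short_term.append(text)
        # Evict the oldest turns once the rough word count exceeds the budget.
        while sum(len(t.split()) for t in self.short_term) > self.context_budget:
            self.short_term.popleft()

    def store_fact(self, key, value):
        self.long_term[key] = value

    def build_context(self):
        """Assemble what the model would see: durable facts plus recent turns."""
        facts = [f"{k}: {v}" for k, v in self.long_term.items()]
        return facts + list(self.short_term)

mem = AgentMemory(context_budget=8)
mem.store_fact("preferred_format", "PDF")
mem.remember_turn("User asked for the Q3 sales report")
mem.remember_turn("Agent pulled CRM data for EMEA and APAC")
print(mem.build_context())  # the older turn was evicted to fit the budget
```

The takeaway is the asymmetry: short-term memory is bounded and constantly trimmed to fit the context window, while long-term memory persists outside it and gets selectively re-injected.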
Agents operate in a loop: observe the current state, take an action, evaluate what happened, then adjust. If an API call fails or returns unexpected data, a well-designed agent tries an alternative approach instead of simply stopping. This feedback loop is what allows agents to handle real-world messiness.
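The observe-act-evaluate loop above can be sketched as a fallback runner: try a primary action, and if it fails or returns nothing useful, move to an alternative instead of stopping. The action functions here are hypothetical stands-ins for a real API call and a cached copy.

```python
# Observe-act-evaluate sketch: candidate actions are tried in order until
# one succeeds. Action names and payloads are illustrative.
def run_with_fallback(actions, max_attempts=3):
    errors = []
    for attempt, action in enumerate(actions[:max_attempts], start=1):
        try:
            result = action()            # act
            if result is not None:       # evaluate the outcome
                return {"ok": True, "result": result, "attempts": attempt}
        except Exception as exc:
            errors.append(str(exc))      # observe the failure, then adjust
    return {"ok": False, "errors": errors}

def primary_api():
    # Simulates the API failure described above.
    raise ConnectionError("upstream timeout")

def cached_copy():
    # Alternative data source the agent falls back to.
    return {"invoices": ["INV-104", "INV-221"]}

outcome = run_with_fallback([primary_api, cached_copy])
print(outcome)  # succeeds on the second attempt via the cache
```

A brittle script would have crashed at the first `ConnectionError`; the loop structure is what lets the agent recover and still produce a result.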
Traditional automation tools work well for specific, predictable tasks. They hit limits when processes involve variability, exceptions, or unstructured information.
Robotic process automation requires someone to define every click, every field, every decision branch before the bot runs. When a form layout changes or an unexpected popup appears, the bot breaks. AI agents handle variability because they understand the goal, not just the steps. An agent can navigate a redesigned interface or work around an error message because it knows what it's trying to accomplish.
Keyword-matching chatbots recognize phrases and return canned responses. They handle FAQs reasonably well but fall apart with anything complex or multi-step. AI agents understand intent, maintain context across a conversation, and execute workflows that span multiple systems.
The practical difference: a chatbot says "Here's our return policy." An agent says "I've processed your return, updated your account, and scheduled the pickup for Thursday."
Traditional automation struggles with documents, emails, images, and natural language. AI agents can read a contract, extract key terms, compare them against company policy, and flag potential issues. They work with the messy, unstructured information that makes up most of what enterprises actually deal with.
| Capability | Traditional RPA | AI Agents |
|---|---|---|
| Handles unstructured data | No | Yes |
| Adapts to exceptions | No | Yes |
| Requires explicit rules | Yes | No |
| Multi-step reasoning | No | Yes |
Not all AI agents do the same things. A maturity framework helps clarify what's realistic for different use cases and where organizations typically start.
Level 1 agents retrieve and summarize information from enterprise knowledge bases. An internal search assistant that answers "What's our policy on vendor contracts over $50,000?" by finding and synthesizing relevant documents falls into this category. These agents represent the lowest complexity and often the best starting point for organizations new to agentic AI.
Level 2 agents perform defined tasks across systems. They schedule meetings, generate reports, update records, and coordinate handoffs between departments. The agent follows established patterns but handles execution autonomously, freeing humans from repetitive coordination work.
Level 3 agents independently plan, execute, and adapt complex workflows with minimal human involvement. An agent at this level might manage an entire customer onboarding process, making judgment calls about exceptions and escalating only when genuinely necessary. Most enterprises aren't here yet—Deloitte found only 11% actively use agents in production—but the capability exists.
Deploying agents in regulated enterprises requires specific infrastructure. Without these foundations, agents either can't access what they need or create unacceptable risks.
Agents are only as useful as the data they can reach. Most enterprises have information scattered across dozens of systems that don't naturally talk to each other. Effective frameworks provide unified access to fragmented data sources, so agents work with complete pictures instead of partial views.
Enterprise agents require robust identity management, secrets handling, and network policies. The agent accesses only what it's authorized to access, and every action remains traceable. Platform-wide access controls become essential when agents operate across sensitive systems containing customer data, financial records, or proprietary information.
The AI landscape changes quickly. Frameworks that integrate multiple AI and data tools without locking you into a single vendor's ecosystem provide flexibility to adopt better solutions as they emerge. When a new model outperforms what you're currently using, you can swap components without rebuilding your entire stack.
Agent workloads can spike unpredictably. Autoscaling, GPU orchestration, and intelligent resource management ensure agents have the compute they require without wasting resources during quiet periods. For organizations running multiple agent types across different use cases, multi-cluster orchestration keeps everything coordinated.
Knowing what typically goes wrong helps teams plan realistically.
Older infrastructure often lacks modern APIs, making it difficult for agents to connect. Organizations frequently need middleware, custom connectors, or phased modernization to bring legacy systems into an agent-accessible architecture. The 30-year-old mainframe running core operations won't suddenly speak REST.
When agents access sensitive information, organizations require audit trails showing exactly what data was used, when, and for what purpose. Tracking data flow becomes critical for compliance and for debugging when something produces unexpected results.
Agent automation that spans multiple departments encounters different systems, processes, ownership, and stakeholders. The technical integration is often simpler than the organizational coordination required to make cross-functional agents work smoothly.
Autonomous systems in regulated industries require guardrails—Gartner predicts that over 40% of agentic AI projects will be canceled by 2027 for lack of adequate risk controls. The question isn't whether to have oversight, but where and how much.
Every agent action gets logged in a way that can't be altered after the fact. This creates accountability and provides the documentation regulators expect. When an agent makes a decision that affects a customer or a financial outcome, you can trace exactly what happened and why.
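One way to make a log tamper-evident is hash chaining: each entry commits to the hash of the previous one, so altering any past entry breaks verification. The sketch below shows the idea with Python's standard `hashlib`; the field names and agent actions are illustrative, and a production system would also persist entries to append-only storage.

```python
import hashlib
import json

class AuditLog:
    """Hash-chained log sketch: each entry commits to its predecessor,
    so rewriting history is detectable. Field names are illustrative."""
    def __init__(self):
        self.entries = []

    def record(self, actor, action, detail):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"actor": actor, "action": action, "detail": detail, "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        prev = "genesis"
        for e in self.entries:
            body = {k: e[k] for k in ("actor", "action", "detail", "prev")}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("invoice-agent", "send_reminder", "INV-104 to ap@example.com")
log.record("invoice-agent", "update_record", "INV-104 marked contacted")
print(log.verify())                      # True: chain is intact
log.entries[0]["detail"] = "tampered"
print(log.verify())                      # False: the chain no longer matches
```

The same structure is what lets you hand a regulator a trace and demonstrate it wasn't edited after the fact.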
Agent permissions reflect data sensitivity and user roles. An agent helping with HR tasks doesn't have access to financial systems. An agent processing customer requests doesn't see internal strategic documents. Permissions follow the same principles that govern human access.
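A minimal version of that scoping is a per-agent allowlist checked on every access. The agent names, resource names, and scope table below are invented for illustration; real deployments would back this with the organization's identity provider rather than a dictionary.

```python
# Least-privilege sketch: each agent carries the same scoped permissions a
# human in that role would. Agent and resource names are illustrative.
AGENT_SCOPES = {
    "hr-assistant": {"hr_records", "payroll_calendar"},
    "support-agent": {"tickets", "order_history"},
}

def authorize(agent, resource):
    """Allow access only if the resource is in the agent's scope.
    Every decision, granted or denied, is logged for audit."""
    allowed = resource in AGENT_SCOPES.get(agent, set())
    print(f"{agent} -> {resource}: {'granted' if allowed else 'denied'}")
    return allowed

authorize("hr-assistant", "hr_records")        # granted
authorize("hr-assistant", "financial_ledger")  # denied
```

Note that the default is denial: an unknown agent, or an unknown resource, gets nothing.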
Certain decisions warrant human review before execution. Defining where humans approve or override agent actions balances efficiency with appropriate oversight. A Level 2 agent might execute routine tasks autonomously while flagging anything above a certain dollar threshold for human approval.
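The dollar-threshold pattern described above can be sketched in a few lines. The threshold value and action shape are illustrative assumptions; a real system would route pending items into an approval queue with notifications rather than just returning a status.

```python
# Escalation sketch for a Level 2 agent: routine actions execute
# autonomously, anything above a dollar threshold waits for a human.
# The threshold and the action dictionary shape are illustrative.
APPROVAL_THRESHOLD_USD = 10_000

def route_action(action):
    """Return 'executed' for routine actions, 'pending_approval' otherwise."""
    if action.get("amount_usd", 0) > APPROVAL_THRESHOLD_USD:
        return {"status": "pending_approval", "action": action}
    return {"status": "executed", "action": action}

print(route_action({"type": "refund", "amount_usd": 250}))
print(route_action({"type": "contract_payment", "amount_usd": 48_000}))
```

The useful property is that the oversight rule lives in one place: tightening or loosening the threshold changes agent behavior everywhere without touching the agent's planning logic.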
Proprietary platforms create dependency on a single vendor's roadmap, pricing decisions, and technical limitations. Open architectures offer an alternative path.
Organizations in critical infrastructure industries—banking, healthcare, energy, manufacturing—often find that flexibility matters more than convenience. The ability to swap tools, change providers, or bring capabilities in-house provides long-term strategic value.
For industries where data sensitivity and regulatory requirements are highest, infrastructure control isn't a nice-to-have. It's a prerequisite for deploying agents at all.
Enterprise-grade AI agent platforms can be deployed within private infrastructure, allowing organizations in regulated industries to maintain complete data isolation while running advanced AI capabilities. The agents run on your hardware, behind your firewall.
Deployment timelines vary based on infrastructure complexity, data accessibility, and organizational readiness. Platforms that automate MLOps and DevOps reduce implementation time significantly compared to building from scratch.
Industries with complex workflows and strict compliance requirements see significant value from AI agents. Healthcare, financial services, manufacturing, energy, and logistics organizations benefit from agents that can operate within secure boundaries while handling the variability these industries encounter daily.
By deploying AI agent platforms on private infrastructure, enterprises ensure that proprietary data processed by LLMs never leaves their governance boundary. The models run locally or within your cloud environment, not on third-party servers.
Tool-agnostic orchestration platforms enable AI agents to work across the entire AI and data ecosystem. Teams can leverage open-source models, commercial APIs, and internal tools without being locked into a single vendor's offerings.
