One of the most common mistakes organizations make when approaching AI is starting from a place that is too broad to be actionable. Statements like “we need to integrate AI” carry the right level of urgency, but very little direction. They resemble earlier conversations about adopting the internet or moving to the cloud, where the technology itself became the focus rather than the problems it was meant to solve.
AI is no different. It is not a strategy in itself, but a tool that can be applied in many different ways. Without a clear structure around how it is introduced, organizations tend to oscillate between two ineffective extremes: systems that are technically impressive but disconnected from real workflows, and systems that are powerful but introduce unclear or unmanaged risk.
A more effective approach is to decompose AI into a small number of distinct concerns that can be reasoned about independently and then recombined into a coherent whole. In practice, this comes down to three areas: the model and how it is accessed, the data that gives it context, and the harness through which people interact with it. Thinking in terms of these layers makes it easier to balance innovation with the kinds of controls that are required in a large organization.

At the foundation of any AI system is a model, which provides the underlying intelligence. It is technically straightforward to integrate directly with a model provider such as Anthropic or OpenAI, and many teams begin there. However, direct integration places the responsibility for security, privacy, and compliance squarely on the organization consuming the model.
Each request must be evaluated in terms of what data is being sent, whether it contains sensitive information, how that information is handled by the provider, and whether it is stored or used for training. While this can be managed in a limited context, it becomes increasingly difficult as more teams, use cases, and model providers are introduced.
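To make the tradeoff concrete, a direct integration is often only a few lines, which is exactly why the governance questions are easy to skip. A minimal sketch using the Vercel AI SDK's Anthropic provider (the model ID and environment variable here are placeholders):

```ts
import { generateText } from "ai";
import { createAnthropic } from "@ai-sdk/anthropic";

// A per-team provider credential: the calling code owns every question
// about what data leaves the organization and how the provider handles it.
const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

const { text } = await generateText({
  model: anthropic("claude-sonnet-4-5"), // placeholder model ID
  prompt: "Summarize this incident report for the on-call engineer.",
});
```

Each such call site is a separate place where data handling has to be reasoned about, which is manageable for one team and increasingly difficult for fifty.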
AI gateways address this problem by introducing a consistent control layer between internal systems and external models. Services such as Amazon Bedrock, Azure AI, and Vercel AI Gateway allow organizations to define security and routing policies in a single place, rather than re-implementing them for each integration. In doing so, they make it possible to rely on well-understood, enterprise-grade patterns for identity, logging, and compliance.
This approach has a practical benefit beyond security. By standardizing how models are accessed, it becomes much easier to work with multiple providers over time without introducing fragmentation or duplicating effort. The gateway effectively separates the choice of model from the rest of the system, which is an important property in a space that is evolving as quickly as this one.
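As a sketch of this pattern, assuming the gateway exposes an OpenAI-compatible endpoint (a common convention; the URL and routing label below are hypothetical), the same SDK can be pointed at the gateway instead of a provider:

```ts
import { generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

// All model traffic flows through the internal gateway, which applies
// identity, logging, and routing policy in one place.
const gateway = createOpenAI({
  baseURL: "https://ai-gateway.internal.example.com/v1", // hypothetical internal endpoint
  apiKey: process.env.GATEWAY_TOKEN, // an internal credential, not a provider key
});

const { text } = await generateText({
  // The model ID is a routing label the gateway resolves to a concrete
  // provider, so changing vendors is gateway configuration, not a code change.
  model: gateway("anthropic/claude-sonnet"),
  prompt: "Summarize this incident report for the on-call engineer.",
});
```

Because call sites never name a concrete provider, swapping or A/B-testing models becomes an operational decision rather than a migration.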

If models provide general capability, data provides relevance. Without access to internal systems, documentation, and workflows, AI systems tend to produce output that is generic and of limited practical value. At the same time, expanding access to data without clear constraints can introduce significant risk, particularly in environments with sensitive or regulated information.
Rather than introducing a new permission model specifically for AI, a more sustainable approach is to build on top of the controls that already exist. OAuth-based integrations are a common starting point, as they allow systems to act on behalf of a user and inherit their permissions. In a system like Jira, for example, this means that the AI can only access issues that the user themselves is allowed to see, which aligns its behavior with established governance.
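A minimal sketch of this on-behalf-of pattern, assuming the user's OAuth access token is available from the session (the endpoint shape follows Jira Cloud's REST API, but the details here are illustrative):

```ts
// Retrieval runs with the user's own token, so the AI can only see
// issues the user is already permitted to see.
async function searchIssuesAsUser(userAccessToken: string, jql: string): Promise<unknown> {
  const url =
    `https://api.atlassian.com/ex/jira/${process.env.JIRA_CLOUD_ID}` + // cloud ID from the OAuth flow
    `/rest/api/3/search?jql=${encodeURIComponent(jql)}`;

  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${userAccessToken}`, Accept: "application/json" },
  });

  if (!response.ok) {
    // A 403 here is the permission model working as intended,
    // not an error to route around with a privileged service account.
    throw new Error(`Jira search failed: ${response.status}`);
  }
  return response.json();
}
```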
This pattern can be extended through additional mechanisms such as role-based access control, scoped connectors that limit access to specific projects or datasets, and data loss prevention techniques that identify and redact sensitive information before it is processed. Audit logging and runtime access patterns, in which data is retrieved on demand rather than copied into a separate store, further reinforce this model by maintaining visibility and reducing the risk of unintended exposure.
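As one sketch of the data loss prevention step, a redaction pass can run before anything is sent upstream. The patterns below are simplistic stand-ins for a real DLP service, and the audit hook is a hypothetical placeholder:

```ts
// Simplistic stand-ins for real DLP classifiers: redact obvious
// identifiers before text is sent to a model.
const DLP_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]"],
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[REDACTED_EMAIL]"],
];

function redact(text: string): { clean: string; redactions: number } {
  let clean = text;
  let redactions = 0;
  for (const [pattern, replacement] of DLP_PATTERNS) {
    clean = clean.replace(pattern, () => {
      redactions += 1;
      return replacement;
    });
  }
  return { clean, redactions };
}

// Hypothetical audit hook: record who sent a request and how much was
// redacted, without persisting the raw content itself.
function auditModelRequest(userId: string, redactions: number): void {
  console.log(JSON.stringify({ event: "model_request", userId, redactions, at: new Date().toISOString() }));
}
```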
Taken together, these approaches reinforce a simple principle: the introduction of AI should not require rethinking how data access is governed. Instead, it should respect and reuse the systems that are already in place, adding only the additional safeguards needed to account for how the data is being used.

The final layer is the harness, which represents the tools and interfaces through which users interact with AI. This includes everything from developer-oriented frameworks to more packaged copilots and assistants. While this layer is often where adoption is most visible, it is also where previously established controls can be inadvertently bypassed.
Some harnesses integrate tightly with their own model providers and data sources, routing requests through infrastructure that is not visible or configurable by the organization using them. In such cases, even a well-designed gateway and data strategy can be undermined, as requests may no longer pass through the intended control points.
For this reason, it is important to select harnesses that allow organizations to bring their own model access and data integrations. In practice, this means being able to direct all model traffic through the chosen gateway, to rely on existing data connectors, and to avoid hidden or implicit dependencies on external systems.
Tools such as LangChain, LlamaIndex, and the Vercel AI SDK are often used in this context because they are designed to be composable and transparent. More opinionated platforms can also be used effectively, but they require closer evaluation to ensure they align with the organization’s architecture and control model.
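To make "bring your own model access and data integrations" concrete, here is a sketch that wires the earlier pieces together using the Vercel AI SDK's tool-calling API (`gateway` and `searchIssuesAsUser` are the sketches from the previous sections; `parameters` and `maxSteps` reflect the v4-era API, which later releases rename):

```ts
import { generateText, tool } from "ai";
import { z } from "zod";

// The harness composes the other two layers explicitly: model traffic
// goes through the gateway, data access uses the user's own credentials.
async function answerWithJiraContext(userAccessToken: string, question: string) {
  return generateText({
    model: gateway("anthropic/claude-sonnet"), // hypothetical routing label
    tools: {
      searchIssues: tool({
        description: "Search Jira issues visible to the current user",
        parameters: z.object({ jql: z.string().describe("A JQL query") }),
        execute: async ({ jql }) => searchIssuesAsUser(userAccessToken, jql),
      }),
    },
    maxSteps: 3, // let the model search first, then answer
    prompt: question,
  });
}
```

Nothing in this harness depends on a hidden provider or an implicit data copy, which is precisely the property to look for when evaluating more opinionated platforms.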

Integrating AI at scale is less about choosing a single platform and more about establishing a set of boundaries within which different tools and use cases can operate. By separating concerns into model access, data context, and user-facing harnesses, organizations can make deliberate decisions at each layer without losing sight of the overall system.
This separation makes it possible to move forward in a way that is both practical and sustainable. Models can evolve without requiring wholesale changes to applications, data can remain governed by existing policies, and harnesses can be introduced or replaced based on the needs of specific teams.
In that sense, the goal is not simply to adopt AI, but to integrate it in a way that aligns with how the organization already manages risk, builds software, and delivers value. When these layers are treated as part of a cohesive system rather than isolated decisions, the balance between innovation and security becomes much easier to achieve.