Complexity and Context

AI and Consumer Proximity


I was recently thinking about why we get such different reports about the benefits of AI in software development. One person’s transcendent experience is another person’s waste of time.

I think there are lots of reasons, but one that seems particularly relevant in the work I do is the relationship between the code being written and how close it is to the consumer.

Approaching the Consumer

The closer we get to the consumer when developing code (think “frontend”), the more commonality there tends to be. This commonality is often abstracted into frameworks, libraries and patterns that are well known. As you go deeper into the stack—toward backend systems, infrastructure, and integrations—complexity increases because of the proliferation of options and possible solutions. Whereas on the frontend, all solutions in web development have to reduce themselves to a form that can be displayed efficiently in a web browser, the backend opens into a limitless set of solutions for the same problem. This can lead to a loss of consistency and a divergence from well-known patterns.

This model is useful when thinking about AI’s effectiveness across different parts of the stack. It helps explain why AI is generally more competent at building and modifying large-scale solutions on the frontend but tends to hit speed bumps when modifying or extending backend systems, especially legacy ones.

AI’s Sweet Spot: The Frontend

On the frontend, complexity is often abstracted through popular frameworks and well-documented component libraries. Developers (and AI) benefit from clear interfaces, strong type definitions, and abundant public documentation. Even custom packages tend to follow predictable patterns (Yes, I know, this is absolutely not always the case).

As a result, tools like AI code assistants perform impressively here. They’re able to tap into a rich corpus of open-source examples and best practices to scaffold or even fully implement UI components, styling logic, and data bindings.
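To make this concrete, here is a minimal sketch of the kind of predictable, typed component contract an assistant can work with. The names (`ButtonProps`, `renderButton`) are hypothetical, and rendering to an HTML string stands in for what a real framework would do, but the shape of the pattern is the point:

```typescript
// Hypothetical button component following the common props-interface pattern.
// The union type constrains the variant, so both humans and AI can see
// exactly what inputs are valid.
interface ButtonProps {
  label: string;
  variant?: "primary" | "secondary";
  disabled?: boolean;
}

// Render to an HTML string; a real framework would return a virtual node,
// but the contract a tool has to learn is the same.
function renderButton({ label, variant = "primary", disabled = false }: ButtonProps): string {
  const classes = ["btn", `btn-${variant}`, disabled ? "btn-disabled" : ""]
    .filter(Boolean)
    .join(" ");
  return `<button class="${classes}"${disabled ? " disabled" : ""}>${label}</button>`;
}

console.log(renderButton({ label: "Save" }));
// <button class="btn btn-primary">Save</button>
```

Because the interface spells out every option, an assistant (or a new teammate) can scaffold variations of this component without ever seeing the rest of the codebase.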

Infinite Context: The Backend

The backend is more of a wild west. For example, there are companies that build backend services that mostly act as proxies to third-party systems, adding layers like caching, normalization, or aggregation. But these services often sit behind obscure APIs, proprietary vendor docs, or SSO-locked portals. There’s little for AI to draw on unless that third-party system is well-known and well-documented (like Stripe or Adyen).
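As a sketch of that proxy-with-caching shape (all names here are hypothetical, and the vendor client is stubbed out since the real one would sit behind a proprietary API):

```typescript
// Hypothetical proxy service over a third-party system: it adds an
// in-memory cache and normalizes the vendor's response shape.
type VendorCustomer = { cust_id: string; full_nm: string }; // vendor's wire format
type Customer = { id: string; name: string };               // our normalized shape

// Stub for the real vendor client hidden behind an obscure API.
async function fetchFromVendor(id: string): Promise<VendorCustomer> {
  return { cust_id: id, full_nm: `Customer ${id}` };
}

const cache = new Map<string, Customer>();

async function getCustomer(id: string): Promise<Customer> {
  const hit = cache.get(id);
  if (hit) return hit; // caching layer: skip the vendor call entirely

  const raw = await fetchFromVendor(id);
  const normalized: Customer = { id: raw.cust_id, name: raw.full_nm }; // normalization layer
  cache.set(id, normalized);
  return normalized;
}
```

The code itself is trivial; the difficulty an AI (or a new hire) faces is everything the code doesn't show: what the vendor actually returns in edge cases, which fields are safe to cache, and why the normalization looks the way it does.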

Over time, the patterns used to perform these integrations sprawl into a wide variety of solutions, yielding a “pattern” that is difficult to encapsulate and for the AI to cling to when developing new code. I’ve found that in these cases, there’s too much real-world context that is not evident in the code the AI has access to.

In short: the more obscure or proprietary your backend, the more an AI (or a human) will struggle to understand and extend it.

Not Just an AI Problem

This isn’t unique to AI. Humans also rely on context clues to understand unfamiliar systems. Without good documentation or consistent design patterns, we resort to trial and error, debugging, and relying on teammates with institutional knowledge.

Frontend code, with its culture of open-source tools and public best practices, is more forgiving in this regard. Backend codebases that diverge from convention—whether due to legacy constraints or ad-hoc architectural choices—make onboarding and maintenance harder for everyone.

Why This Disparity Exists

It makes sense: the closer your code is to the consumer, the more it must conform to familiar structures. That’s why we have component libraries, design systems, and accessibility standards—they’re there to help developers create predictable and polished user-facing experiences. And those same tools help AI do the same.

Backends, meanwhile, often exist in isolation. As long as they “work,” no one notices when they drift from best practices—until something breaks or becomes hard to scale. And sometimes, those decisions are the result of real trade-offs: tight timelines, legacy infrastructure, or integration with brittle third-party tools.

My Takeaways

Understanding all this helps me plan my work: where can I use AI most effectively, and how do I work it into the estimates I provide to stakeholders? The temptation to use a broad brush and claim that AI will speed up development by X% across the board is dangerous and wrong. Recognizing how it can and can’t help matters once you get down to the planning phase of a project.

Like most things, understanding how to apply AI to programming is nuanced. On top of that, it’s a moving target: the models keep getting better, and the prompting engines (Cursor, Windsurf, Copilot) keep changing the way they gather context. Finally, the use of MCP servers to pull context in different ways changes the efficacy of any prompt. But certain truths remain somewhat constant.