Complexity and Context

Frontend vs. Backend: Code Complexity, AI, and the Consumer Proximity Triangle


There’s a relationship between code complexity and the closeness of that code to the consumer. In my experience, this relationship resembles a triangle—where the tip of the triangle represents the consumer touchpoint and the base of the triangle includes the backend services and data that serve as its foundation.

The Consumer Proximity Triangle

The higher up you go in the triangle (closer to the consumer), the more “relatable” and structured the code tends to be. Frontend code often uses standardized frameworks, is publicly documented, and benefits from community-driven conventions (I won’t get into the powder keg that is frontend framework complexity). As you go deeper—toward backend systems, infrastructure, and integrations—complexity increases as consistency erodes and code diverges from well-known patterns.

This model is useful when thinking about AI’s effectiveness across different parts of the stack. It helps explain why AI is generally more competent at writing frontend code than backend logic, particularly in distributed systems.

AI’s Sweet Spot: The Frontend

On the frontend, complexity is often abstracted through popular frameworks and well-documented component libraries. Developers (and AI) benefit from clear interfaces, strong type definitions, and abundant public documentation. Even custom packages tend to follow predictable patterns.

As a result, tools like AI code assistants perform impressively here. They’re able to tap into a rich corpus of open-source examples and best practices to scaffold or even fully implement UI components, styling logic, and data bindings.
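To make that concrete, here is the kind of frontend code an AI assistant tends to scaffold well: a small typed component with explicit props and conventional class names. The component and prop names are hypothetical, but the shape mirrors patterns documented across countless public component libraries—exactly the corpus these tools draw on.

```typescript
// A hypothetical badge component: typed props with a predictable shape.
// Patterns like this are documented everywhere in open-source UI libraries,
// so both humans and AI can infer the conventions at a glance.
interface BadgeProps {
  label: string;
  variant?: "info" | "warning" | "error";
}

function renderBadge({ label, variant = "info" }: BadgeProps): string {
  // Map each variant to a conventional BEM-style CSS class name.
  const className = `badge badge--${variant}`;
  return `<span class="${className}">${label}</span>`;
}

console.log(renderBadge({ label: "New" }));
// <span class="badge badge--info">New</span>
```

Nothing here is surprising, and that’s the point: the interface tells you almost everything you need to know before reading the implementation.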

The Backend: A Context Desert

The backend is more of a Wild West. In many cases—like at my current company—backend services act as proxies to third-party systems, adding layers like caching, normalization, or aggregation. But these services often sit behind obscure APIs, proprietary vendor docs, or SSO-locked portals. There’s little for AI to draw on unless that third-party system is well-known and well-documented (like Stripe or Adyen).
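A minimal sketch of that proxy pattern, with entirely hypothetical names (no real vendor SDK is referenced): wrap an idiosyncratic third-party API with an in-memory cache and normalize its fields into your own model.

```typescript
// Hypothetical vendor response: cryptic field names, cents instead of dollars.
interface VendorPayment { PMT_ID: string; AMT_CENTS: number; CCY: string }

// Our own normalized domain model.
interface Payment { id: string; amount: number; currency: string }

type VendorFetch = (id: string) => Promise<VendorPayment>;

// Build a proxy that caches and normalizes calls to the vendor.
function makePaymentProxy(fetchFromVendor: VendorFetch) {
  const cache = new Map<string, Payment>();

  return async function getPayment(id: string): Promise<Payment> {
    const cached = cache.get(id);
    if (cached) return cached; // serve from cache, skip the vendor call

    const raw = await fetchFromVendor(id);
    // Normalize the vendor's idiosyncratic field names and units.
    const payment: Payment = {
      id: raw.PMT_ID,
      amount: raw.AMT_CENTS / 100,
      currency: raw.CCY,
    };
    cache.set(id, payment);
    return payment;
  };
}
```

The caching and normalization themselves are trivial. The hard part is that the meaning of fields like `AMT_CENTS` often lives in a vendor portal behind SSO, not in anything an AI (or a new hire) can search.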

In short: the more obscure or proprietary your backend, the more an AI (or a human) will struggle to understand and extend it.

Not Just an AI Problem

This isn’t unique to AI. Humans also rely on context clues to understand unfamiliar systems. Without good documentation or consistent design patterns, we resort to trial and error, debugging, and relying on teammates with institutional knowledge.

Frontend code, with its culture of open-source tools and public best practices, is more forgiving in this regard. Backend codebases that diverge from convention—whether due to legacy constraints or ad-hoc architectural choices—make onboarding and maintenance harder for everyone.

Why This Disparity Exists

It makes sense: the closer your code is to the consumer, the more it must conform to familiar structures. That’s why we have component libraries, design systems, and accessibility standards—they help developers create predictable and polished user-facing experiences. And those same tools give AI the structure it needs to do likewise.

Backends, meanwhile, often exist in isolation. As long as they “work,” no one notices when they drift from best practices—until something breaks or becomes hard to scale. And sometimes, those decisions are the result of real trade-offs: tight timelines, legacy infrastructure, or integration with brittle third-party tools.

A Strange New Possibility

With AI, we’re entering a strange new place where starting from scratch might actually be the better path.

Historically, greenfield development was risky and expensive. But with AI, starting fresh might actually reduce risk: the code AI generates at the outset is often cleaner, more standardized, and easier to maintain than legacy code that’s been patched over for years.