Principle 4: AI Must Be Made Local to Be Useful


What this principle governs and why it matters

AI systems arrive as general-purpose tools. They're trained on broad datasets, designed to handle diverse tasks, and marketed on their versatility. This generality is genuinely impressive—the same system can draft emails, explain scientific concepts, write code, and analyze legal documents. But generality comes with a cost that becomes visible only when you try to use these systems for real work in specific contexts.

The cost is this: a system that can do anything adequately often does nothing particularly well. Generic capability translates into generic output. The AI can write a marketing email, but not one that sounds like your company. It can analyze a contract, but not against your risk standards. It can summarize research, but not in the way your team needs to make decisions. The gap between "capable of the task" and "useful for your task" is where most AI value gets lost.

This principle governs the process of closing that gap: taking a general-purpose system and adapting it to the specific users, domains, and environments where it will actually operate. It matters because this adaptation is not optional if you want results worth acting on. The organizations extracting real value from AI are not the ones using it out of the box; they're the ones who have done the work to make it local.


The consultant's two clients

A management consultant works with two mid-sized companies in the same industry, helping both improve their internal communications. Both companies have agreed to experiment with AI to help draft internal announcements, policy updates, and team communications. The consultant sets up similar workflows for each, using the same underlying AI system.

At the first company, the experiment sputters. The AI-generated drafts feel off. They're too formal for a culture that prides itself on directness. They use corporate language that employees have learned to associate with bad news or bureaucratic distance. The communications team spends so much time revising the tone that they question whether the AI saves any time at all. After a few weeks, usage drops. The tool sits idle.

At the second company, the consultant tries something different. Before generating any drafts, she spends time collecting examples—actual internal communications that landed well, messages that employees responded to positively, the specific phrases and structures that reflect how this company talks to itself. She builds a reference document that captures not just what the company says but how it says it: the level of formality, the use of humor, the way difficult news gets framed, the cultural references that resonate.

She feeds this context into every interaction with the AI. The prompts don't just ask for "an internal announcement about the new policy." They ask for an announcement in the style of these examples, for this audience, reflecting these values. The system is the same, but the inputs are localized.
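
To make this concrete, here is a minimal sketch of what feeding context into every interaction can look like in practice. The code is illustrative only: the file layout, the function name, and the commented-out call_model helper are assumptions made for the sketch, not any particular product's API.

    # Sketch only: assemble a style-grounded prompt from gathered examples.
    # The directory layout and the call_model helper are hypothetical.
    from pathlib import Path

    def build_localized_prompt(task: str, examples_dir: str,
                               audience: str, values: str) -> str:
        """Embed the company's own voice in every request."""
        # Reference communications that landed well, gathered ahead of time.
        examples = [p.read_text() for p in sorted(Path(examples_dir).glob("*.txt"))]
        example_block = "\n\n---\n\n".join(examples)
        return (
            "You are drafting an internal communication.\n"
            "Match the tone, structure, and formality of these examples:\n\n"
            f"{example_block}\n\n"
            f"Audience: {audience}\n"
            f"Values to reflect: {values}\n\n"
            f"Task: {task}"
        )

    prompt = build_localized_prompt(
        task="Announce the new remote-work policy.",
        examples_dir="reference/internal_comms",
        audience="All employees; the culture prizes directness",
        values="Candor, respect for people's time, no corporate euphemisms",
    )
    # draft = call_model(prompt)  # hypothetical model call

The point is not the particular code but the pattern: the request never travels alone. It carries the company's voice with it.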

The difference is immediate. Drafts still need editing, but they start much closer to usable. The communications team finds itself refining rather than rewriting. The AI begins to feel less like a generic tool and more like a junior writer who has actually absorbed the company's voice. Usage increases. The value compounds.

Same consultant. Same AI system. Same type of task. The difference was localization—the work of adapting the general-purpose tool to the specific environment where it would operate.


The principle, unpacked

Maximum value emerges when AI systems are adapted to specific users, domains, and environments rather than used as generic tools.

This principle is easy to agree with in the abstract and demanding to honor in practice. Localization requires effort. It requires understanding your own context well enough to articulate it, gathering examples and reference material, building prompts and workflows that embed this context into every interaction. This is real work, and it comes before the AI produces anything useful.

The temptation is to skip this work. The AI is capable out of the box—why not just use it? The answer is that "capable" and "useful" are not the same thing. A capable system produces plausible output. A useful system produces output that fits your specific needs, that reflects your standards, that can be acted upon without extensive rework. The distance between these two states is the localization gap, and crossing it requires deliberate investment.

Localization operates at multiple levels, and understanding these levels helps clarify what the work actually involves.

At the domain level, localization means embedding the specific knowledge, terminology, and standards of your field. A healthcare organization needs AI that understands medical terminology, regulatory constraints, and the particular sensitivities of patient communication. A financial services firm needs AI that reflects compliance requirements, risk frameworks, and the precision that financial communication demands. This domain knowledge doesn't come from the model's general training—or rather, it comes only in diluted, generic form. Making it specific requires feeding the system examples, guidelines, and reference material that reflect how your domain actually operates.

At the organizational level, localization means capturing the particular way your company thinks, speaks, and works. Every organization has its own culture, its own voice, its own implicit standards for what good work looks like. These are rarely written down comprehensively. They exist in accumulated examples, in the preferences of key decision-makers, in the feedback that has shaped how people communicate over time. Localizing AI to an organization means making this tacit knowledge explicit enough that the system can reflect it.

At the individual level, localization means adapting to the specific needs, preferences, and working patterns of particular users. One person might need AI that helps with first drafts they'll heavily revise. Another might need AI that produces near-final copy. One might prefer detailed explanations; another might want brevity. These individual differences matter because the ultimate test of usefulness is whether the output helps this person do their work better. A system that's well-adapted to organizational norms but poorly adapted to individual needs will still feel generic to the people using it.
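
One way to picture how the three levels fit together is as layered context that accompanies every request. The sketch below is an illustration under assumed names, not a prescribed data model, and the example content is invented.

    # Sketch only: the three localization levels as layered context.
    # All field names and example content here are assumptions.
    from dataclasses import dataclass

    @dataclass
    class LocalizationContext:
        domain: str        # field-specific terminology, standards, constraints
        organization: str  # house voice, implicit quality bar, framing norms
        individual: str    # this user's preferences and working patterns

        def render(self) -> str:
            return (
                f"Domain context:\n{self.domain}\n\n"
                f"Organizational context:\n{self.organization}\n\n"
                f"Individual context:\n{self.individual}\n"
            )

    ctx = LocalizationContext(
        domain="Healthcare: plain-language patient materials; respect regulatory limits.",
        organization="Warm but direct voice; difficult news framed early, never buried.",
        individual="Rough first drafts are fine; the user revises heavily.",
    )

    def localize(request: str, ctx: LocalizationContext) -> str:
        # The same generic request becomes specific once wrapped in context.
        return ctx.render() + "\nRequest: " + request

Wrapped this way, a generic request stops being generic: the same underlying system now answers within the domain, in the organization's voice, for this particular user.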

The compounding nature of localization is worth emphasizing. When you invest in making AI local, the returns accumulate over time. The examples you gather become a library. The prompts you develop become templates. The understanding of what works in your context deepens with each iteration. Early investment pays dividends in every subsequent interaction.

Conversely, the failure to localize has compounding costs. Every time someone uses a generic prompt and gets generic output, they either waste time revising or—worse—ship something that doesn't quite fit. The gap between what the AI produces and what the organization needs becomes a persistent friction, a tax on every interaction. People start to conclude that AI isn't useful for their work, when the real problem is that no one did the work to make it useful.

There's an important distinction here between localization and fine-tuning. Fine-tuning means further training a model on your own data, a technical process that requires significant expertise and resources. Localization, as this principle uses the term, is broader and more accessible. It includes fine-tuning where that's feasible, but it also includes practices available to any user: building context-rich prompts, developing libraries of examples, creating templates that embed domain knowledge, structuring workflows that incorporate organizational standards. You don't need to retrain the model to make it local. You need to surround it with enough context that its general capabilities get channeled toward your specific needs.
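
To show how accessible this end of the spectrum is, here is a sketch of a small prompt-template library: no retraining, just stored context that compounds with use. The directory name and template fields are assumptions made for the illustration.

    # Sketch only: a prompt-template library that grows over time.
    # The directory name and template fields are assumptions.
    import json
    from pathlib import Path

    LIBRARY = Path("prompt_library")

    def save_template(name: str, instructions: str, examples: list) -> None:
        """Persist a context-rich template so the localization work compounds."""
        LIBRARY.mkdir(exist_ok=True)
        (LIBRARY / f"{name}.json").write_text(
            json.dumps({"instructions": instructions, "examples": examples}, indent=2)
        )

    def load_template(name: str, task: str) -> str:
        """Rehydrate a stored template into a ready-to-send prompt."""
        spec = json.loads((LIBRARY / f"{name}.json").read_text())
        shots = "\n\n".join(spec["examples"])
        return (f"{spec['instructions']}\n\nReference examples:\n{shots}"
                f"\n\nTask: {task}")

    save_template(
        "policy_announcement",
        instructions="Direct tone; lead with what changes for the reader; no jargon.",
        examples=["<a past announcement that landed well>"],
    )
    prompt = load_template("policy_announcement", "Announce the updated travel policy.")

Each saved template is localization work done once and reused many times, which is exactly the compounding described above.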

This has implications for how organizations should think about AI adoption. The common approach is to select a tool, deploy it broadly, and hope that usage patterns emerge. This approach systematically undervalues localization. It treats AI as a commodity—something you acquire and distribute—rather than as a capability that must be cultivated in context.

A better approach treats localization as a core part of the adoption process. Before deploying an AI system, invest in understanding the specific domains, organizational norms, and individual needs it must serve. Gather the examples, build the templates, develop the prompts. Train people not just in how to use the tool but in how to make it local to their work. This investment pays for itself many times over, but only if it's treated as essential rather than optional.


The question that remains

There's a tension at the heart of this principle that's worth naming directly. Localization takes effort, and effort is scarce. The appeal of AI is partly that it promises to reduce effort—to do quickly what would otherwise take time. If making AI useful requires significant upfront investment, some of that appeal diminishes. The economics still work, but they work differently than the marketing suggests.

This tension is real, and organizations should be honest about it. The question is not whether localization costs effort but whether the effort is worth it. For shallow, occasional uses of AI—one-off questions, quick brainstorms, tasks where generic output is acceptable—the investment in localization may not pay off. Using the tool out of the box is fine because the stakes are low.

But for the uses that matter most—the recurring tasks that consume significant time, the high-stakes outputs that carry the organization's reputation, the workflows where AI could provide real leverage—localization is where the value lives. Skip it, and you're stuck with generic capability that never quite meets your needs. Invest in it, and the system becomes genuinely useful, adapted to your context in ways that compound over time.

The question that remains is whether you'll treat localization as part of the work or as an obstacle to getting started. Whether you'll invest in understanding your own context well enough to convey it, or whether you'll accept the generic output that generic usage produces.

AI systems are built to be general. Making them useful is the work of making them specific. That work doesn't do itself.