Combinatorial Innovation

5 min read | Topic: Cultural Adaptation & Affordances

Combinatorial innovation is the discipline of asking, “What new affordances emerge when we connect A to B to C?” Large language models on their own afford fluent text. Add information retrieval, and that text begins to carry your organization’s voice, grounding answers in your policies, products, and history. Add machine and human evaluation steps, and each cycle improves in accuracy, quality, and impact.

An earlier section suggested reframing work as loops instead of lines. In combinatorial innovation we reframe generative AI as systems instead of just LLMs. You are no longer simply “using a tool.” You are choosing and arranging layers of capability that can include a model, a memory, a simulator, an evaluator, and a growing number of other components (the latest is “world models”) to fit various use cases.

Affordances as Compounding Building Blocks

As discussed in the previous section, in classic design language, an affordance is what an environment or object offers you: a handle affords pulling, a button affords pressing, a search bar affords querying. One of the challenges of generative AI is extending this idea from the physical world into the cognitive one.

A language model affords drafting and rephrasing; it turns half-formed ideas into full paragraphs, or rough notes into polished arguments. A retrieval layer affords grounding those words in something concrete: your brand guidelines, your legal constraints, your catalog of products and past campaigns. A simulator affords the chance to test your ideas against “what if” worlds, to see how a promotion might perform during a heatwave, a major sports event, or a school holiday. An evaluation harness affords systematic learning; it lets you see patterns across many attempts, highlighting which versions performed better and why.

Each of these is valuable alone, but the power appears in their interactions. A lone LLM is entertaining but unreliable. Attach retrieval, and it gains access to the enterprise library: the system starts speaking in your brand’s idiom and respecting your constraints. Add simulation, and it can try out its ideas in synthetic but realistic contexts, testing headlines, offers, and tones before you expose real customers. Add evaluation, and the whole ensemble becomes a lab, able to try hundreds of variations and learn from them in an afternoon. You are not merely adding capabilities; you are expanding the space of actions available, the verbs you and your tools can perform together.
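The layering described above can be sketched in a few lines of code. Everything here is a stand-in: a real stack would call an LLM API and a vector store, and the brand facts and function names are hypothetical.

```python
# Toy sketch of compounding layers: model -> model + retrieval -> + evaluation.
# Every component is a deterministic stub standing in for a real service.

BRAND_FACTS = {
    "tone": "warm, direct",
    "banned": ["guarantee", "free money"],  # illustrative policy terms
}

def model(prompt: str) -> str:
    """Stand-in LLM: fluent but ungrounded."""
    return f"Draft copy for: {prompt}"

def retrieve(prompt: str) -> dict:
    """Stand-in retrieval layer: returns grounding context for the prompt."""
    return BRAND_FACTS

def grounded_model(prompt: str) -> str:
    """Model + retrieval: the draft now carries brand constraints."""
    facts = retrieve(prompt)
    return model(prompt) + f" [tone: {facts['tone']}]"

def evaluate(text: str) -> bool:
    """Evaluation layer: reject drafts that violate policy."""
    return not any(term in text.lower() for term in BRAND_FACTS["banned"])

# Each layer adds a verb: generate, then generate + ground, then + check.
candidates = [grounded_model(p) for p in ["spring sale", "loyalty offer"]]
approved = [c for c in candidates if evaluate(c)]
```

The point of the sketch is structural: each function is one affordance, and composing them yields behavior none of them has alone.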

As AI systems have developed, it has become natural to give them memory and state. That shifts the interaction from one-shot prompts to multi-step agents that can plan, act, observe, and adapt over time. Now you can hand off parts of a process to a system that does not reset after every sentence. Plug these agents into your CRM, content systems, data warehouses, and communication platforms, and you are no longer just accelerating isolated tasks. You are redesigning how a team functions, who does what, and how decisions flow.

At each stage, the key question is not “What can this model do?” but “What new affordances appear when I connect this model to this data, this context, this workflow, and this human judgment?”

From LLMs to Systems

Many of us grew up in an “apps” world: a specific tool handled a specific job. You opened a presentation program for slides, a spreadsheet for numbers, a chat client for quick messages. Each application was an island with its own habits and limits. We started out with LLMs (ChatGPT in particular) framed as a kind of app that could do something specific.

Generative AI encourages a “systems” mindset. Instead of thinking, “Which app should I use?”, you begin to think, “Which layers of capability do I need, and how should I combine them?” At the core is a general-purpose language model that provides fluent language and basic reasoning. Wrapped around it is a retrieval index of enterprise information, or context, so any new idea can be aligned with your business reality. Tool use provides access to all of those old isolated “apps,” chaining together the work to be done. Evaluation guardrails, including policy checks and safety filters, increase the accuracy, quality, and reliability of the outcomes.
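One way to make “which layers, and how combined” concrete is to treat each layer as a plain function over shared state, so composing a system is just choosing a list. This is a minimal sketch under that assumption; the index contents, stage names, and policy term are all invented for illustration.

```python
# Each capability layer is a function over a shared state dict.
# Assembling a system means choosing which stages to run, and in what order.

def retrieve_stage(state):
    """Retrieval index: attach enterprise context to the request."""
    index = {"pricing question": "context: 2024 price book"}  # toy index
    state["context"] = index.get(state["prompt"], "")
    return state

def generate_stage(state):
    """Core model: stand-in for an LLM call using prompt + context."""
    state["text"] = f"answer to '{state['prompt']}' using [{state['context']}]"
    return state

def guardrail_stage(state):
    """Policy check: flag outputs that mention banned terms."""
    state["approved"] = "confidential" not in state["text"]
    return state

def run_system(prompt, stages):
    """Run the chosen layers in order, threading state through them."""
    state = {"prompt": prompt}
    for stage in stages:
        state = stage(state)
    return state

result = run_system(
    "pricing question",
    [retrieve_stage, generate_stage, guardrail_stage],
)
```

Swapping a stage in or out changes what the system affords without touching the other layers, which is the systems-over-apps point in miniature.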

Adaptability: Seeing and Using New Affordances

There is another layer to this story: you, and your capacity to adapt. A new affordance only matters if you can perceive it and weave it into your practice. Two people can sit in front of the same AI system and inhabit entirely different worlds.

One sees a slightly smarter autocomplete. They use it as a search engine or answer machine, write simple prompts, accept or reject a few completions, and then go back to working as they always have. The other sees a combinatorial engine. They wonder what would happen if they fed it prior customer feedback, or if they asked it to generate scenarios for best-case and worst-case campaigns, or if they asked it to develop a multi-year analysis from a company’s public financial statements. The second person is not necessarily more technically skilled, but they carry a different mental model: they expect that new connections are possible and that each connection might unlock a fresh affordance.

Adaptability in this era looks less like mastering a static tool and more like cultivating the habit of noticing new verbs as they appear. One month, your system gains the ability to browse the web; another month, it can call internal APIs or run code; later, it can remember context across sessions and act as a long-lived agent. Each new ability is a potential affordance. The adaptable person keeps asking: “If this is now possible, how should my loops change? What should I stop doing manually? What can I now test that was impossible last year?” In that sense, your role is shifting from tool operator to system composer.

In most companies, affordances are unevenly distributed. A data science team might have access to rich datasets and powerful models, but no direct line to the people running campaigns. A marketing team might have licenses for AI tools, but no time, coaching, or psychological safety to experiment. Governance may exist as slides and policy documents, while the tools themselves remain blind to those constraints. The result is a collection of disconnected islands of capability.

Combinatorial innovation at the organizational scale starts by asking what affordances already exist but remain trapped in one corner of the company, and which missing affordances are blocking critical loops. Perhaps you have excellent customer feedback data but no retrieval layer that exposes it at the point of content creation. Perhaps your risk team has a clear playbook, but nothing in the system that automatically checks drafts against it. When you begin connecting these pieces you create the equivalent of a subway system within your organization. Just as public transit reshapes how a city can be used, a well-designed AI stack reshapes how work can flow across teams. A lesson discovered by one group can be turned into a shared evaluation harness that benefits everyone.

Practicing Combinatorial Innovation in Your Own Work

You do not need to be a machine learning engineer to adopt this mindset. You can start with the loops you already own. Pick a recurring piece of work, such as writing proposals, triaging support tickets, or developing content, and sketch how it currently flows. What triggers it? What information do you gather? What decisions do you make? What outputs do you produce? How do you hear back about whether it worked?

Once you’ve mapped the line, look for ways to turn it into a loop. Where could a language model help by drafting, summarizing, or classifying? Where could retrieval help by bringing in your best past examples, your policies, or your customer history? Where might simulation matter, even in a crude form, by running through optimistic, pessimistic, and typical cases? Where could an explicit checklist become the basis of an evaluation layer that scores outputs before they go out?
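The last question above, turning a checklist into an evaluation layer, is the easiest to prototype. Here is a sketch in which the rules are illustrative placeholders; your own checklist would supply the real ones.

```python
# An explicit checklist expressed as (rule name, predicate) pairs,
# turned into a scoring function that runs before anything goes out.

CHECKLIST = [
    ("no unfilled template slots", lambda t: "{name}" not in t),
    ("within length limit",        lambda t: len(t) <= 500),
    ("has a call to action",       lambda t: "reply" in t.lower()
                                             or "book" in t.lower()),
]

def score(draft: str):
    """Score a draft against the checklist; return (score, failed rules)."""
    failed = [name for name, ok in CHECKLIST if not ok(draft)]
    return len(CHECKLIST) - len(failed), failed

draft = "Hi Dana, thanks for your note. Reply any time to book a demo."
points, failures = score(draft)
```

Because the rules are named, a failing draft tells you which norm it broke, which is what makes the checklist a learning loop rather than a gate.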

Then, rather than trying to overhaul everything, design a small system around a specific pain point. If your follow-up messages are generic, you might combine an LLM, a library of your best previous emails, and a simple guideline for personalization to create a better loop. If you feel you never learn from failed campaigns, you might build a ritual where, after each one, you feed the data and your own reflections into a model that helps surface patterns and frame new hypotheses. Run this system for a short period, notice what becomes easier and what new mistakes appear, and then adjust. The goal is not a perfect architecture; it is the habit of upgrading systems rather than just swapping tools.
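The follow-up-message example can be reduced to a few lines: an LLM stand-in, a one-item library of past emails, and a single personalization guideline. Every name and string here is hypothetical.

```python
# Minimal follow-up-email loop: best past email as a template, one
# personalization rule, and a stub where a real LLM call would rewrite it.

BEST_EMAILS = [
    "Hi {name}, great speaking with you about {topic}. Shall we set a time?",
]

def personalize(template: str, name: str, topic: str) -> str:
    """Apply the guideline: always use the customer's name and topic."""
    return template.format(name=name, topic=topic)

def draft_followup(name: str, topic: str) -> str:
    """Stand-in for an LLM call that rewrites the best past email."""
    return personalize(BEST_EMAILS[0], name, topic)

msg = draft_followup("Dana", "onboarding")
# The loop: send, record whether it drew a reply, and revise BEST_EMAILS.
```

The architecture is deliberately crude; the habit it builds, closing the loop by feeding results back into the library, is the point.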

The Core Question

Combinatorial innovation suggests that the true frontier is not any single technology but the space of possible combinations among them. Affordances are the verbs your environment offers, and generative AI is rapidly expanding that verb set: generate, retrieve, simulate, critique, plan, execute. Your adaptability rests on your willingness and ability to see those verbs, connect them, and turn them into new loops of value.

Samantha’s real skill is not prompt wizardry or deep knowledge of model internals. It is the regular, almost quiet habit of asking, “Given what this system now affords, what could we do together that we couldn’t do last month?” In a world where technologies compound, the most important thing is that your own way of working compounds as well.