Why OpenAI & Co. Won’t Solve Procurement - Or Your Other Business Problems
A lot of the AI conversation still sounds like a race for bigger brains. New model. Bigger benchmark. Longer context window. Better reasoning. And yes, the models are getting better. Fast. But in the real world, inside actual businesses, that is not the hardest part anymore.
The hardest part is something less glamorous: giving the model the right context at the right moment. That is where business value is won or lost.
Because solving real business problems with AI is not the same as answering trivia, summarizing an article, or drafting a generic email. In business, the model is rarely asked to solve a problem in the abstract. It is asked to solve a problem inside a very specific reality: a company, a supplier, a contract, a history, a set of decisions, a set of consequences.
And that reality is mostly invisible to the base model.
What the models already know
To be fair, foundation models already cover a surprising amount.
They understand language. They can structure an argument. They can draft an email. They can explain cost drivers. They often know the rough economics of industries, materials, commodities, and manufacturing processes. They have internalized an enormous amount of public information from the open world. (Anthropic)
That is why they feel so powerful right out of the box.
If you ask a modern model to help write a supplier email, explain what affects resin prices, summarize a payment term, or propose a negotiation tactic, it will often produce something that looks competent.
Sometimes very competent.
But anyone who has actually done this work knows the same thing almost immediately:
Competence is not enough.
Because in procurement, in negotiations, in supplier strategy, the best decisions are almost never made from general knowledge alone.
They are made from context.
What humans use that models do not have
Before a good buyer starts negotiating, they have already processed a large amount of information. Not formally. Often not consciously. But it is there.
They know the relationship dynamics.
They remember escalations and reliability gaps.
They recall small issues that became major problems.
They remember unresolved quality and weak clauses.
They know where goodwill vs. frustration sits.
And then they add the numbers.
How has spend evolved?
What changed in cost drivers or BOM?
Which arguments hold up—and which don’t?
Are price increases justified by inputs?
Is dependency or supplier power increasing?
Are deviations (invoices, payments) growing?
This is how strong preparation actually works. It is not just intelligence. It is memory, structure, pattern recognition, and judgment built on proprietary company experience.
That is the missing ingredient in most AI discussions.
The most valuable knowledge in a company is usually not public
I call this proprietary information. It is the information that OpenAI, Anthropic, and every other model provider could not have learned from the public internet because it does not exist in the public record.
Your contracts, purchase orders, invoices.
Your complaints, internal emails, meeting notes, quality incidents.
Your accumulated scars. 😉
This is the knowledge that actually decides outcomes.
It is also why the conversation around AI is often misunderstood. People talk as if the key question is whether the base model is smart enough.
In many business use cases, that is only half the problem.
The other half is whether the model can see what your best people would look at before making a decision.
There are two ways to make AI better at business tasks
If a model lacks this proprietary context, there are broadly two ways to improve it.
One is to train or fine-tune on company-specific data.
The other is to provide the necessary information as context at the moment the problem is being solved. The industry talks a lot about training, and eventually that will matter more. But for most companies, that is not where the practical frontier is today.
Today, the frontier is context.
Because training on company-specific data is hard. It requires clean data, enough examples, clear feedback loops, and usually years of accumulation before the full potential becomes available. And most companies are understandably uncomfortable with the idea of their sensitive information contributing to systems that might be shared more broadly across customers or environments.
Context is different.
It allows a company to use its proprietary information now, on demand, for a specific task, without waiting to build a large training corpus first.
That is why so much of modern AI application design is really an attempt to answer one question:
How do we get the right information in front of the model, in the right form, for the exact decision that needs to be made?
This is why context engineering matters more than people think
There is a reason terms like RAG (retrieval-augmented generation), MCP (Model Context Protocol), agents, and tool use have become so central.
They are all, in one way or another, methods for improving how a model accesses and uses context.
Not because the model is useless without them.
But because even a very strong model performs poorly if it sees the wrong slice of reality.
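To make that concrete, here is a minimal sketch of what selecting the right slice of reality can look like. It is illustrative only: the keyword-overlap scoring stands in for a real embedding-based retriever, and the document types, names, and prompt format are assumptions, not a description of any particular product.

```python
# Minimal sketch of retrieval-augmented context assembly (illustrative only).
# Real systems use embeddings and a vector store; a simple keyword-overlap
# score stands in for retrieval here so the example stays self-contained.

from dataclasses import dataclass


@dataclass
class Document:
    source: str  # e.g. "contract", "email", "invoice"
    text: str


def score(query: str, doc: Document) -> int:
    """Count shared words between query and document (stand-in for a real retriever)."""
    return len(set(query.lower().split()) & set(doc.text.lower().split()))


def build_context(query: str, documents: list[Document], top_k: int = 3) -> str:
    """Select the most relevant slices of company history and assemble them as context."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    return "\n\n".join(f"[{d.source}] {d.text}" for d in ranked[:top_k])


# Usage: the assembled context is placed in front of the task before calling the model.
docs = [
    Document("contract", "Price adjustment clause tied to the resin index, renegotiated 2023."),
    Document("email", "Supplier confirmed late deliveries in Q3 and offered a credit note."),
    Document("invoice", "Three invoices deviated from agreed payment terms last quarter."),
]
context = build_context("prepare resin price negotiation", docs)
prompt = f"Context:\n{context}\n\nTask: Draft negotiation arguments for the upcoming review."
```

The point is not the retrieval trick itself. The point is that the model only ever reasons over whatever this step hands it.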
This is still an active area of research. One recent paper is a good example: it found that for non-reasoning models, simply repeating the prompt could improve performance, a reminder that even basic changes in how context is presented can matter materially. (arXiv)
That may sound banal. It is not.
It tells us something important: the challenge is not only model capability. It is also context utilization.
And that means the real moat for many AI applications will not be just model access. It will be the system around the model: how data is collected, cleaned, structured, retrieved, compressed, and assembled into a useful decision frame. Anthropic makes this point directly in its own writing on “context engineering,” describing the problem as curating what should go into a limited context window at each step of an agent’s work. (Anthropic)
That is not something a frontier model company can solve generically for every business on earth.
The companies closest to the domain are in the best position to solve it.
The hard part is not only storing data. It is preparing it.
Inside businesses, useful context does not arrive in one convenient format.
Some of it is neatly structured: spend cubes, invoice lines, purchase orders, payment terms, commodity indices.
Some of it is not: PDFs, contracts, supplier emails, escalation notes, meeting minutes, handwritten comments in spreadsheets, plain text fragments copied between systems.
Historically, that was a huge barrier. If data was unstructured, it was hard to operationalize at scale.
That changes in an AI-native world.
Now unstructured information is not a dead end. It is raw material.
A contract clause in a PDF can be extracted, normalized, and later reused. A messy email thread can be summarized into a relationship signal. A free-text complaint can become part of a quality pattern. An invoice dispute can become evidence, not just noise.
In other words, AI is not only the consumer of context. It also helps build the context.
That is one of the most important shifts underway.
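A minimal sketch of what that "building" step can look like, assuming a hypothetical record schema and a placeholder llm_complete function standing in for whatever model API is actually used:

```python
# Illustrative sketch: turning an unstructured supplier email into a structured,
# reusable "relationship signal" record. The schema and the llm_complete callable
# are assumptions for the example, not a specific product's API.

import json
from typing import Callable

SIGNAL_SCHEMA = {
    "supplier": "string",
    "issue_type": "delivery | quality | pricing | other",
    "severity": "low | medium | high",
    "summary": "one sentence",
}


def extract_signal(email_text: str, llm_complete: Callable[[str], str]) -> dict:
    """Ask the model to normalize a messy email into a structured record."""
    prompt = (
        "Extract a JSON object with the keys "
        f"{list(SIGNAL_SCHEMA)} from this supplier email. Return only valid JSON.\n\n"
        f"Email:\n{email_text}"
    )
    raw = llm_complete(prompt)
    # The parsed record can then be stored next to spend, contract, and invoice data.
    return json.loads(raw)
```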
Then comes the real work: building the context for the task
Collecting data is not enough.
Even if all relevant information exists somewhere in the company, it still does not mean the model can use it well.
For a real task, context has to be built.
Take negotiation preparation.
No human wants to read every invoice, every email, every deviation report, every contract revision, and every note from the past three years from scratch. And no model should receive that entire mountain of information either.
The problem is not access. It is distillation.
What matters right now?
What failed repeatedly?
Which arguments are defensible?
Where is leverage—and risk?
What should be said now vs. later?
That is the real product challenge.
You take a mountain of operational and commercial history, break it into chunks, extract the “so what,” and compile a decision-ready picture. The end result should feel less like a database dump and more like the preparation of a very strong senior buyer: concise, evidence-based, and useful in the moment.
Sometimes all of that work must collapse into five lines before an email is sent.
And if those five lines are good, the outcome changes.
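A minimal sketch of that distillation step, again with llm_complete as a placeholder for the actual model call and the prompts as illustrative assumptions:

```python
# Map-reduce style preparation: extract the "so what" from each chunk of supplier
# history, then compress the findings into a short, decision-ready briefing.

from typing import Callable


def distill_briefing(
    history: list[str],
    goal: str,
    llm_complete: Callable[[str], str],
    max_lines: int = 5,
) -> str:
    """Per-chunk findings first, then one compact brief the buyer can act on."""
    findings = [
        llm_complete(f"Goal: {goal}\nWhat matters in this excerpt? One sentence:\n{chunk}")
        for chunk in history
    ]
    merge_prompt = (
        f"Goal: {goal}\n"
        f"Compress these findings into at most {max_lines} decision-ready lines, "
        "each grounded in the evidence above:\n- " + "\n- ".join(findings)
    )
    return llm_complete(merge_prompt)
```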
This is not replacing the buyer. It is changing where the buyer spends time.
In theory, buyers already have the skills to do this work. They analyze costs and spend, compare suppliers, run tenders, and come up with arguments.
A buyer is often responsible for multiple categories, millions in spend, and hundreds of supplier relationships. Every day becomes a resource allocation exercise.
What is worth deep analysis?
Where is full preparation justified?
How much effort is realistic right now?
So what happens in practice is obvious: only a subset of the possible preparation gets done.
Not because buyers do not know better. Because they do not have infinite time. That is where AI becomes truly useful. Not by pretending to be a magical negotiator.
But by helping condense a large body of evidence into arguments faster, more consistently, and sometimes more completely than a rushed human workflow would allow.
Less time collecting and cleaning data
Less time reconstructing history
More time deciding actions
More time on relationships, leverage, and judgment
The companies that will win are the ones that treat context as infrastructure
I suspect this is where the market is heading.
Yes, company-specific model training will eventually become more important. For some workflows, it will absolutely improve outcomes. But most companies are still years away from having enough clean, labeled, feedback-rich internal data to unlock that fully.
In the meantime, there is enormous value in a more practical path:
Do not start by training on customer data. Start by perfecting the context built from it.
That means building the pipelines, methods, representations, retrieval systems, and preparation logic that a strong business analyst or operator would use before taking action. It means treating context not as an afterthought, but as infrastructure. Because the model is only one part of the system. The rest of the system decides whether the model sees a generic problem or the actual one. And in business, the actual problem is always where the money is.
That is the work we are doing
We are building for a world where AI helps solve business problems the same way strong operators do: by combining reasoning with proprietary context.
In our case, that means bringing together contracts, spend data, purchase orders, invoices, and the messy operational history around supplier relationships.
Not just to summarize it. But to turn it into useful preparation. The kind that helps a purchaser negotiate better, source better, and make better decisions.
Because in the end, what great buyers do is actually very simple to describe: They condense reality into arguments.
We think AI can help them do that faster. And maybe, in some cases, better.

