If you take a classic operations research problem from a textbook, one that asks the reader to convert a written scenario into a mathematical formulation with an objective function and constraints, and you give that text to ChatGPT, Gemini, or Grok, it will produce the formulation in seconds. Clean. Structured. Correct.
So the obvious question comes up: Why teach or learn linear programming if AI can formulate it instantly, while students, and sometimes even professors, may take hours?
The question is valid. But it misunderstands what is actually happening.
The LLM is not inventing the logic. It is translating it
The textbook problem is already perfectly engineered. The language is carefully chosen. The structure is embedded in the wording. The objective is implicit, and sometimes explicit. The constraints are clean. The ambiguity has already been removed by the author of the problem.
The mathematical model AI generates is simply a translation from English to mathematics.
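To see how mechanical that translation is, consider a hypothetical textbook-style product-mix scenario (the numbers below are invented for illustration). Once the English is clean, the move to code is almost automatic, sketched here with SciPy's `linprog`:

```python
# Hypothetical textbook wording: "A workshop makes chairs and tables. A chair
# earns $30 profit and needs 2 labor hours and 1 board; a table earns $70 and
# needs 4 labor hours and 3 boards. With 40 labor hours and 24 boards
# available, how many of each should it make to maximize profit?"
from scipy.optimize import linprog

c = [-30, -70]            # linprog minimizes, so negate the profits
A_ub = [[2, 4],           # labor hours used per chair / table
        [1, 3]]           # boards used per chair / table
b_ub = [40, 24]           # available labor hours and boards
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)    # optimal plan and maximum profit
```

Every coefficient above was already sitting in the English sentence; nothing had to be discovered, only transcribed.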
The real intelligence lives upstream
Let me give you a real example: a complex flight and maintenance planning problem I worked on. At first glance, it looks like a standard optimization setup:
- Flight demand across a planning horizon
- Maintenance station capacity by period
- Aircraft with residual flight time
- Aircraft with residual maintenance time
An AI system may be able to write the mathematical formulation once you describe it clearly. Decision variables, objective function, flow balance constraints, capacity limits. That part is not the bottleneck.
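To make that concrete, here is a deliberately tiny sketch of the kind of LP an AI can write once the statement is clean. All numbers are invented, and the flow balance is collapsed into a per-aircraft residual-hours budget, so this is an illustration of the formulation step, not the actual model:

```python
from scipy.optimize import linprog

T, demand = 3, [10, 8, 12]        # planning periods and flight-hour demand
residual = [18, 20]               # residual flight hours per aircraft
cost = [1.0, 1.2]                 # operating cost per flight hour
maint = {(0, 1)}                  # aircraft 0 is in maintenance in period 1
CAP = 12                          # max hours one aircraft can fly per period

idx = lambda i, t: i * T + t      # flatten x[i][t] into one variable vector
n = 2 * T
c = [cost[i] for i in range(2) for _ in range(T)]

A_ub, b_ub = [], []
for t in range(T):                # cover demand in every period
    row = [0.0] * n
    for i in range(2):
        row[idx(i, t)] = -1.0     # -x[0][t] - x[1][t] <= -demand[t]
    A_ub.append(row); b_ub.append(-demand[t])
for i in range(2):                # residual flight-time budget per aircraft
    row = [0.0] * n
    for t in range(T):
        row[idx(i, t)] = 1.0
    A_ub.append(row); b_ub.append(residual[i])

bounds = [(0, 0) if (i, t) in maint else (0, CAP)
          for i in range(2) for t in range(T)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(round(res.fun, 2))          # minimum total operating cost
```

Ten lines of data, twenty lines of bookkeeping. The hard part was deciding which of these quantities exist at all.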
But the real challenge was never writing the equations. The real challenge was defining the problem correctly. What exactly are we optimizing? Profit? On-time performance? Maintenance risk exposure? Fleet utilization balance?
Should residual flight time be treated as a hard constraint or a soft penalty?
How do we model uncertainty in demand?
Do we allow maintenance deferrals?
How do human crews and regulatory limits interact with aircraft availability?
What happens when maintenance capacity fluctuates unexpectedly?
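The first of those choices is easy to show in miniature. With invented numbers, the same residual-flight-time limit behaves very differently as a hard constraint versus a soft penalty:

```python
from scipy.optimize import linprog

# Toy numbers (assumptions, not from any real fleet): one aircraft with 10
# residual flight hours must cover 14 hours of demand this period.
residual, demand = 10, 14

# Hard constraint: x <= residual and x >= demand. No feasible plan exists.
hard = linprog(c=[1.0], A_ub=[[1.0], [-1.0]], b_ub=[residual, -demand])
print(hard.status)   # linprog status 2 means infeasible

# Soft penalty: allow an overrun s >= 0 via x - s <= residual, priced in the
# objective. Variables are [x, s]; minimize flying cost + penalty * overrun.
penalty = 5.0
soft = linprog(c=[1.0, penalty],
               A_ub=[[1.0, -1.0],   # x - s <= residual
                     [-1.0, 0.0]],  # x >= demand
               b_ub=[residual, -demand])
print(soft.x)        # flies the demand, reports the overrun explicitly
```

The mathematics is trivial either way. Knowing whether the operation treats that limit as law or as a negotiable cost is the part no prompt can supply.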
None of that comes pre-packaged in English. It comes from understanding the operation at a meta level.
You need to walk through the system. Talk to planners. Understand regulatory rules. See where bottlenecks actually occur. Identify which constraints are structural and which are political. Decide what trade-offs the business is actually willing to make.
Only after that thinking can you write a clean paragraph describing the problem. And once that paragraph is clean (with a feasible-to-solve problem hiding in it), AI can translate it beautifully into mathematics.
This distinction matters in all domains
This distinction matters beyond aviation. In supply chains, AI can optimize routing once the network is defined.
In manufacturing, it can schedule production once constraints are formalized.
In finance, it can allocate portfolios once risk models are specified.
But it does not decide what the network should look like. It does not decide which risks matter. It does not define the business objective. It does not determine what deserves optimization in the first place.
Training your brain to see structure in messy systems
We are entering an era where solving equations is cheap. Defining the right equation is expensive. Learning linear programming is not about manually deriving constraints faster than an LLM. It is about training your brain to see structure in messy systems. To think in terms of scarcity, trade-offs, flows, coupling constraints, and unintended consequences.
It teaches you how to compress reality into a model without losing what actually matters.
AI can translate your thinking into formal structure.
It can code the solver.
It can even suggest improvements within a defined framework.
But it does not stand inside the messy intersection of humans, machines, regulations, incentives, and uncertainty and decide what the real problem is.
That responsibility is still human.
And in business, the person who defines the problem still defines the game.
Feb. 24, 2026
Pasadena
Javad Seif