My Education in the Green Lumber Fallacy
I had no way of knowing it at the time, but my first serious contact with the Green Lumber Fallacy happened in my last year of MBA school (ironically, Nassim Taleb, who popularized the term in his book Antifragile, might argue that business school is entirely an exercise in the Green Lumber Fallacy).
For part of our senior capstone class on strategy, we broke into groups and ran competing corporations through a turn-based computer simulation. You would spend the class week deciding the next market positioning moves you would make, without knowing how the other groups would make changes to their positioning. The entire class entered all the data points at once, and the computer would spit out that turn’s results.
The simulation result determined your grade in the class in a competitive stack-rank (best group = A, second-best group = B, etc.). I was trying to preserve a shot at being valedictorian, so the stack-ranked grading caused me more than a little heartburn.
We had very smart people in our group. Our strategy was to simply out-think the other groups. Predictably, our first turn went disastrously.
We reasoned out our strategy as best we could: “If we go to market with X more products in our line, it should put pressure on Y competitor trying to appeal to Z segment…”, etc. Despite all the intelligent narrative arguments that went into our decisions, our group started at the rear of the pack.
Throwing Pasta at the Wall
And then one night, in my despair, I started playing around with the simulator mechanism and running some tests. I didn’t have the benefit of knowing what the other teams would do next, but the mechanism would still let me test out certain scenarios to a degree. Eventually, I just started pulling all the levers that I could—pricing, the number of products, component quality—just to see what would happen.
I began to realize that I wasn’t dealing with a business in any real sense; I was dealing with a simulator. Simulators use rudimentary algorithms that are quite divorced from narratives that would make sense in a business context.
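To make the point concrete, here is a hypothetical sketch of the kind of rudimentary scoring rule a classroom simulator might use. The function name, levers, and weights are all invented for illustration; the point is that once results come from a simple formula, sweeping the levers beats reasoning about competitors.

```python
# Hypothetical sketch of a classroom market simulator's scoring rule.
# The weights and levers are invented for illustration -- no real
# simulator is being reverse-engineered here.

def market_share(price: float, n_products: int, quality: float) -> float:
    """Score a team's positioning with a simple weighted formula.

    A real business would never behave this linearly, which is exactly
    why narrative reasoning about the market can lose to brute-force
    testing of the levers (price, product count, component quality).
    """
    return -2.0 * price + 1.5 * n_products + 3.0 * quality

# Trial-and-error: sweep every combination of levers and keep the best.
candidates = [(p, n, q) for p in (8, 10, 12)
                        for n in (2, 4, 6)
                        for q in (0.5, 1.0)]
best = max(candidates, key=lambda c: market_share(*c))
print(best)  # the settings that "did work" -- no narrative required
```

Against a mechanism like this, the winning move is exhaustive tinkering, not a clever story about segment Z.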
So our group turned our strategy on its head and incorporated as much scenario trial-and-error as we possibly could. We became less concerned with what should work and more concerned with what did work. And we won.
Cookbooks Aren’t Written By Otolaryngologists
The story behind the Green Lumber Fallacy comes from a book called What I Learned Losing a Million Dollars, by Jim Paul and Brendan Moynihan. The story goes that a trader named Joe Siegal was, at the time, a phenomenally successful trader in a commodity called green lumber. He made a killing while the authors of the book lost a fortune.
We come to find out that the whole time Siegal was making money, he assumed that "green lumber" took its name from being (for some reason) painted green, rather than from the wood having been freshly cut.
In other words, Siegal had no knowledge of the differentiated nature of the commodity he was using to make a killing.
Taleb, whose book Antifragile coined the term for the fallacy, noted that most of the traders he met early in his career were not well-heeled Ph.D.s in economics and geopolitics, but in his words, "very, very street." The top trader of Swiss francs was not only not Swiss but probably could not point to Switzerland on a map. Taleb notes:
The fact that predicting the order flow in lumber and the usual narrative had little to do with the details one would assume from the outside are important. People who do things in the field are not subjected to a set exam; they are selected in the most non-narrative manner—nice arguments don’t make much difference. Evolution does not rely on narratives, humans do. Evolution does not need a word for the color blue.
Green Lumber and Backwards Causality
The Green Lumber Fallacy is a logical mistake wherein we assume that the best outcomes generally flow from our academic or scientific understanding of the forces at play. More often than you realize, this is backward. Successful and innovative outcomes usually arise from trial-and-error and are then rationalized post hoc with appeals to underlying principles.
To illustrate the point: how many times have you seen a cookbook written by an ear, nose, and throat doctor? Interesting though it may be to know how our taste buds and olfactory senses communicate with our brain, that medical knowledge is not important to the exercise of writing a cookbook.
Sure, you may argue, it could be helpful to know that we perceive sweetness, saltiness, bitterness, etc., but one does not author a cookbook from that starting point. We arrive at the recipes by trial and error, and then afterward we explain their success by appealing to the workings of our senses.
Can More Information Be…Harmful?!
In the age of big data, we've grown used to thinking that more information always leads to better decisions. This fallacy has become much worse as AI has improved. Instead of evaluating new sets of data for their potential decision-support value, we simply throw them into the neural network with everything else and let the AI sort it out.
Beyond a certain threshold of actionability, more information leads to worse decisions, because it increases the amount of noise relative to the signal.
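This noise effect is easy to demonstrate with a toy experiment (the numbers and setup below are invented for illustration): predict a label from one informative feature, then pad each observation with irrelevant random features and watch a simple nearest-neighbor decision rule get worse.

```python
import random

random.seed(0)

def make_data(n, n_noise):
    """Observations with one informative feature plus n_noise random ones."""
    data = []
    for _ in range(n):
        signal = random.random()          # the one informative feature
        label = signal > 0.5              # ground truth depends only on it
        noise = [random.random() for _ in range(n_noise)]
        data.append(([signal] + noise, label))
    return data

def nn_accuracy(train, test):
    """Classify each test point by its nearest training neighbor."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    correct = 0
    for feats, label in test:
        nearest = min(train, key=lambda t: dist(t[0], feats))
        correct += (nearest[1] == label)
    return correct / len(test)

accs = {}
for n_noise in (0, 20):
    train, test = make_data(200, n_noise), make_data(100, n_noise)
    accs[n_noise] = nn_accuracy(train, test)
print(accs)  # accuracy drops once the irrelevant features swamp the signal
```

With zero noise features, the decision rule is nearly perfect; with twenty irrelevant ones, the distance calculation is dominated by noise and accuracy falls, even though strictly "more information" was provided.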
As a marketer, I frequently work with market personas—constructs that help organizations understand the common needs of their average buyer. Sometimes these can contain useful information: prioritized buying needs, pricing considerations, and even relevant psychological information.
Marketing personas are fraught with problems on a good day. But their worst defect is that marketers overdevelop their personas to the point of parody. The creative teams, needing something to do, try to become the FBI profilers of their customers. You’ve seen some of the excesses of this I’m sure: marketers will develop mocked-up living rooms of someone named Suzie Soccermom, who is a “go-getter” and “loves yoga,” but “never has enough ‘me’ time.”
In a wasteful exercise to ostensibly "get into the minds" of customers, they drown their marketing personas in useless information that actually interferes with product and marketing decisions.
In this Fast Company article, Bob Nease says “…chances are you won’t be better able to predict the outcome than had you made the same adjustments without the data findings.” But even that might be too generous. Superfluous information uses a portion of our processing bandwidth, like a television on in the background when you’re cramming for an exam.
Or, as Taleb himself puts it, “More data—such as paying attention to the eye colors of the people around when crossing the street—can make you miss the big truck.”
Narratives Aren’t Just Useless, They’re Distracting
What does this have to do with the Green Lumber Fallacy? It turns out that narratives, like the eye color of the people around when crossing the street, are superfluous information that only increases noise.
If you sit down to play a couple of hours (or days) worth of Call of Duty, does it help your video game skill to know exactly how the console is processing the signals from the controller and adjusting the image on the screen? Does it help to know how the underlying software is coded? The only reason video game developers might also be superior players is that they, too, may spend hundreds of hours testing their own work.
Now, you may ask yourself, well what kind of idiot would ever have believed that understanding the inner workings of a video game would help me become a better player?
But in other domains, we very often assume that our narrative knowledge advances our mastery. Investment speculators often try to make their profits on increasingly complex narratives to explain why certain securities might increase or decrease in value.
In economics, scientists like Daniel Kahneman have given us a much better idea of the bias components of human decision-making. But this understanding does not make us any better at predicting macroeconomic trends. Expertise in the workings of the underlying system does not increase our mastery of its practice. Trying to force this information into a predictive model would only make the model more wrong, because the information contains no predictive power.
Entrepreneurship and the Green Lumber Fallacy
So why do entrepreneurs have to worry about the Green Lumber Fallacy?
Because all founders have a narrative in their head explaining why the market should demand their product or service.
As a startup mentor with the 1871 business incubator, I've worked with scores of founders on their product-market fit. Very frequently, a founder tries to execute a "vision" for a product or service as if it were divinely presented to them in a dream. They invest months of time and thousands of dollars in an untested, unvalidated Minimum Viable Product because it conforms to their vision of what should work. Then they drop it on the world and are shocked when no one wants it.
It doesn’t stop at product-market fit. There is a lot of “information” circling the startup world about how the startup process itself should work. Some founders get the idea that if they do all the right things, read technique books like The Lean Startup, attend all the right classes, then they will succeed with certainty.
Understanding Taleb’s principles of antifragility, with their emphasis on trial-and-error and tinkering, is key for startup founders. Strong startups are aware of those factors to which they are most fragile, and never stop testing for increased market traction. I wrote this guide for building antifragile startups in order to help founders stop obsessing over the noise of narrative and instead set up obsessive testing and tinkering regimes.
The Green Lumber Fallacy, and its implied reliance on explanatory narrative, is possibly the most distracting and misleading force in the world of entrepreneurship. It doesn’t matter why a startup should work. Successful founders first figure out a system that does work, and only later worry about the “why.”