The Map Has Blind Spots
Move 37 was not a parlor trick. It was a warning. Physician-developers need to be ready for the moment AI starts finding medically important patterns our inherited maps never taught us to see.
What Move 37 teaches physician-developers about the limits of expertise and the future of AI in medicine.
In March 2016, during Game 2 of the AlphaGo versus Lee Sedol match, the machine made a move no professional Go player would have chosen.
It placed a stone on the fifth line from the edge of the board. AlphaGo’s own policy network estimated there was roughly a 1-in-10,000 chance a human would play it. The commentators assumed it was a mistake.
It was not a mistake.
It was Move 37.
About 100 moves later, that stone was exactly where it needed to be to help win the game.
That moment matters for a reason that has nothing to do with Go.
The Experts Were Not the Problem
The easy story is that the commentators were wrong because they were limited.
That is not the story.
They were wrong because they were experts.
They had spent years inside a map of strategy built through centuries of human play, human teaching, and human pattern recognition. Their expertise was real. Their instincts were real. Their map was detailed.
The map still had blind spots.
Lee Sedol saw that immediately. After the move, he said it felt creative and beautiful.
He was right about the beauty. He was wrong about the explanation.
AlphaGo did not intend beauty. It did not decide to be creative. It optimized against a reward signal through self-play and explored strategic territory no human teacher had fully mapped.
Move 37 looked creative because it emerged from outside the inherited boundaries of human expertise.
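The dynamic described above, a policy with strong inherited priors rediscovering an undervalued option through reward-driven play, can be sketched in miniature. This is a toy bandit-style loop, not AlphaGo; the moves, win rates, and priors are invented for illustration.

```python
import random

# Hypothetical hidden win rates: move "D" is actually best,
# but the inherited priors below treat it as a mistake.
HIDDEN_WIN_RATE = {"A": 0.50, "B": 0.48, "C": 0.45, "D": 0.70}

def train(episodes=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # "Inherited map": D starts with a low value estimate.
    value = {"A": 0.5, "B": 0.5, "C": 0.5, "D": 0.1}
    counts = {m: 0 for m in value}
    for _ in range(episodes):
        # Occasional exploration lets the policy test moves
        # its priors dismiss.
        if rng.random() < epsilon:
            move = rng.choice(list(value))
        else:
            move = max(value, key=value.get)
        reward = 1.0 if rng.random() < HIDDEN_WIN_RATE[move] else 0.0
        counts[move] += 1
        # Incremental average: estimate drifts toward the observed win rate.
        value[move] += (reward - value[move]) / counts[move]
    return value

learned = train()
# With enough play, the estimate for the dismissed move overtakes the priors.
```

The point of the sketch is the one in the text: nothing here "intends" creativity. An optimization loop with exploration simply corrects an inherited map wherever the reward signal disagrees with it.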
That is the part medicine needs to sit with.
Medicine Trusts the Map
We should.
The medical map was built carefully. It comes from trials, reviews, guidelines, bedside observation, and hard-won clinical judgment accumulated over decades. That map has saved lives. It deserves respect.
But the map is still a human artifact.
Researchers decide which variables to measure. Trial designers decide which endpoints matter. Clinicians decide what is worth noticing. Committees decide what deserves recommendation. Every one of those decisions is shaped by prior theory, prior experience, and prior assumptions.
The map reflects the path we walked.
That does not make it wrong.
It means there may be territory outside it.
The uncomfortable possibility is not that medicine is false. The uncomfortable possibility is that there are clinically meaningful patterns we have not seen because our methods, training, and attention were never pointed there in the first place.
That is the real lesson of Move 37.
The Blind Spots Are Already Showing Up
We do not need to speculate. Early examples are already here.
AlphaFold changed protein structure prediction by uncovering structure at a scale and speed that traditional experimental workflows could not reach alone.
AI systems analyzing ECGs have surfaced signals tied to low ejection fraction, hyperkalemia, and other clinically relevant states from waveforms physicians were never trained to read that way.
Deep learning models analyzing retinal fundus photographs have inferred cardiovascular risk, kidney disease, and anemia from images that most clinicians never treated as that kind of systemic sensor.
The signal was there.
Human expertise had not fully learned how to see it.
That distinction matters.
These are not just faster versions of old workflows. They suggest there are features in biological data that exceed the pattern vocabulary medicine inherited from human-to-human training alone.
That is what a blind spot looks like when it starts to close.
This Is Where Physician-Developers Matter
The framing goes wrong if we treat this as a reason to hand medicine over to the machine.
Go had a clean objective. Win the game.
Medicine does not.
The real reward signal in medicine is crowded and morally loaded. Reduce suffering. Extend life. Preserve function. Respect autonomy. Allocate resources justly. Avoid harm. No single metric captures that fully, and no model should be trusted to optimize it without human judgment.
This is where physician-developers have a specific responsibility.
We are the people who can recognize both sides of the problem at once.
We can see why a model finding an unexpected pattern might matter. We can also see why a pattern is not the same thing as a clinical decision. We understand pathophysiology, dataset shift, subgroup failure, validation design, workflow consequences, and the moral weight of acting on a prediction at the bedside.
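One of those checks, subgroup failure, is worth making concrete. The sketch below uses entirely hypothetical records to show how a model that looks strong overall can still perform poorly for one subgroup, which is exactly why a pattern is not yet a clinical decision.

```python
# Hypothetical records: (subgroup, model_prediction, true_label).
records = [
    ("adult", 1, 1), ("adult", 0, 0), ("adult", 1, 1), ("adult", 0, 0),
    ("adult", 1, 1), ("adult", 0, 0), ("adult", 1, 1), ("adult", 0, 0),
    ("elderly", 1, 0), ("elderly", 0, 1), ("elderly", 1, 1), ("elderly", 0, 0),
]

def accuracy_by_subgroup(rows):
    # Tally correct predictions per subgroup, then divide by group size.
    totals, hits = {}, {}
    for group, pred, label in rows:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (pred == label)
    return {g: hits[g] / totals[g] for g in totals}

overall = sum(p == y for _, p, y in records) / len(records)
per_group = accuracy_by_subgroup(records)
# The headline number hides the weaker performance in the smaller subgroup.
```

Here the overall accuracy is about 83%, while the elderly subgroup sits at 50%, no better than a coin flip. A single aggregate metric would have hidden that entirely.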
The machine finds the blind spot.
The physician decides what to do with it.
That is not a concession to human pride. It is the architecture required for trustworthy AI in medicine.
I believe this strongly enough that it shapes why I build. Physician-developers should not be brought in after the model exists to offer a few comments on safety and implementation. We need to be inside the system from the start.
We need to help define the target.
We need to help choose the data.
We need to write the validation logic.
We need to be the people in the room when the model finds something impressive, ambiguous, or dangerous.
Move 37 worked because the context was right.
A beautiful move in the wrong context loses the game.
The Question That Should Keep Us Awake
AlphaGo found a move human experts had classified as wrong for centuries.
Now ask the harder question.
What has medicine classified as irrelevant, ineffective, or impossible to see that an AI system might identify as its own Move 37?
I do not know.
That is exactly why this moment matters.
Maps do not show you what lies beyond their edges. They show you what has already been walked. Some blank spaces are blank for good reason. Others are blank because no one had the tools, scale, or framing to explore them.
We are entering a period when machines will start pointing at those spaces.
Some of what they find will be noise.
Some will be traps.
Some will be clinically useless curiosities.
And some may change medicine.
The job of physician-developers is not to worship those findings or fear them. The job is to evaluate them, test them, contextualize them, and decide which ones deserve to cross the boundary into care.
This is the work.
The map has blind spots.
We are about to start finding them.