Why I’m Bullish on Doctors Who Code: In Agreement with Robert Wachter


By Chukwuma Onyeije, MD | January 19, 2026
Maternal-Fetal Medicine Specialist | Founder, CodeCraftMD


Medicine’s Inflection Point

Every generation of physicians inherits a moment that reshapes practice. Anesthesia in the 1840s. Antibiotics in the 1940s. CT imaging in the 1970s. Electronic health records in the 2000s. Each arrived with skepticism, uneven implementation, and legitimate safety concerns.

Artificial intelligence in healthcare now sits in that lineage.

Public conversation around medical AI, however, has become dominated by anxiety — fear of replacement, fear of algorithmic error, fear of loss of clinical autonomy. In a recent New York Times essay, Dr. Robert Wachter, Chair of the Department of Medicine at UCSF and author of The Digital Doctor, argues that this fear, while understandable, is miscalibrated. His central thesis is disarmingly simple:

AI does not need to be perfect. It only needs to be better than the alternative.

I agree — strongly. And I would go further: the determining factor in whether AI improves medicine or destabilizes it will be whether physicians step into the role of builders rather than bystanders.

That conviction is why I remain deeply bullish on Doctors Who Code.


The Curbside Consult Has Already Changed

For decades, physicians relied on the curbside consult — a hallway conversation with a trusted colleague when a case felt uncertain. Wachter notes that today, many of those consults are already happening with AI systems. ChatGPT for differential diagnosis. Evidence synthesis tools like UpToDate’s AI features. Clinical summarizers in Epic and Cerner. Ambient scribes from Nuance DAX and Abridge.

This is not speculative future medicine. It is present practice.

From My Practice: A Real Example

Last month in my maternal-fetal medicine clinic, I used an AI clinical decision support tool to help stratify preeclampsia risk in a patient with lupus nephritis — a complex intersection of autoimmune disease and pregnancy physiology. The tool synthesized recent literature on antiphospholipid (aPL) antibodies, complement levels, and the biomarker placental growth factor (PlGF) faster than I could have manually reviewed the evidence.

Did I trust it blindly? No. I validated every recommendation against UpToDate, recent SMFM guidelines, and my clinical judgment. But the tool accelerated evidence retrieval from hours to minutes.

That’s the reality of modern high-risk obstetrics. AI is already in the exam room.

The adoption curve is already underway. The only unresolved question is who designs, evaluates, and governs these clinical AI tools. If physicians remain passive consumers, the architecture of medical AI will be shaped primarily by technologists, venture capital incentives, and enterprise IT administrators. If physicians become builders, clinical reality remains embedded in the system’s design.


The “Man Bites Dog” Fallacy in AI Fear

A striking insight from Wachter’s essay is how society responds to machine error versus human error in healthcare. Rare AI failures generate headlines. Human error, though statistically more common, rarely does.

Yet when an AI chatbot provides harmful medical advice, it becomes a congressional hearing. When a physician misses a diagnosis due to alert fatigue or a fragmented EHR, it becomes a closed-door M&M conference.

This is the classic “man bites dog” phenomenon: the unusual event draws attention, distorting risk perception.

The correct comparison is not AI versus perfection. It is AI versus current practice — with its cognitive overload, fragmented records, documentation burden, and fatigue-driven error. When framed honestly, the bar AI must clear is achievable — and in many domains, already surpassed.


Why Physician-Builders Matter: The Technical-Clinical Translation Gap

Here lies the central point: medical AI cannot be safely built in isolation from medical practice.

Clinical workflows. Liability realities. Patient trust. Documentation norms. Consent structures. Diagnostic uncertainty. Medicolegal risk. These are not abstractions. They are lived environments. Physicians uniquely understand them.

What Physician-Developers Bring to Healthcare AI

As both a maternal-fetal medicine specialist and the founder of CodeCraftMD, a HIPAA-compliant medical billing automation platform now in development, I’ve seen firsthand what happens when physicians participate in software development:

  1. Clinical edge-case identification: We catch scenarios non-clinical developers never anticipate (twin gestations with selective IUGR, anyone?)
  2. Workflow reality checks: We know the difference between “works in demo” and “works at 8pm on call”
  3. Medicolegal awareness: We understand documentation requirements, standard-of-care expectations, and liability exposure
  4. Patient communication nuances: We recognize when automation helps versus when human explanation is non-negotiable

Doctors Who Code exists precisely because the next era of medicine requires bilingual professionals — fluent in both clinical reasoning and computational systems. Physician-builders serve as translators between care realities and technical design. They identify where automation helps, where oversight is required, and where empathy cannot be coded.

This is not a hobbyist niche. It is emerging professional stewardship.


Agreement With Wachter’s Guardrails

Crucially, Wachter is not proposing reckless deployment of medical AI. He emphasizes:

  • Strict oversight for autonomous clinical systems
  • Transparency in algorithmic decision-making
  • Robust consent and privacy protections
  • Rigorous evaluation in high-stakes patient-facing tools
  • FDA regulation where appropriate

I share these caveats completely. Guardrails are necessary. Regulation is appropriate. Patient trust is non-negotiable.

But paralysis is also a risk. Demanding mythical perfection before adoption simply preserves a deeply imperfect system. A “walk before run” strategy — starting with ambient documentation, clinical note summarization, and administrative task automation — is precisely where physician-builders can contribute safely today.

The FDA’s recent framework for clinical decision support software provides exactly this kind of risk-stratified approach.
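
To make the administrative end of that spectrum concrete, here is a minimal Python sketch of the kind of low-stakes automation I mean: flagging incomplete billing claims for human review before submission. Every field name and code below is illustrative, invented for this example; a real system would validate against payer rules and live EHR data.

    # Toy illustration of "walk before run" automation: flag incomplete
    # billing claims for a human to review. Nothing here decides anything
    # clinical. Field names and codes are hypothetical.
    REQUIRED_FIELDS = ["patient_id", "date_of_service", "cpt_code", "icd10_code"]

    def flag_incomplete_claims(claims):
        """Return claims missing required fields, annotated for human review."""
        flagged = []
        for claim in claims:
            missing = [f for f in REQUIRED_FIELDS if not claim.get(f)]
            if missing:
                flagged.append({**claim, "missing_fields": missing})
        return flagged

    claims = [
        {"patient_id": "A123", "date_of_service": "2026-01-12",
         "cpt_code": "59400", "icd10_code": "O10.911"},
        {"patient_id": "B456", "date_of_service": "2026-01-14", "cpt_code": ""},
    ]

    for claim in flag_incomplete_claims(claims):
        print(f"Needs human review: {claim['patient_id']} missing {claim['missing_fields']}")

The point is the pattern, not the particulars: the software surfaces work for a human, and the human keeps the final decision.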


The Alternative Is Not Neutral

This is perhaps Wachter’s most important observation: rejecting AI is not choosing safety. It is choosing the status quo.

A status quo defined by cognitive overload, fragmented records, crushing documentation burden, and fatigue-driven error.

AI will not fix all of this. But it is already fixing parts of it. And that matters.


Doctors Who Code as a Professional Ethic

Each major medical advance required physicians to adapt. We learned:

  • Sterile technique (1860s-1880s)
  • Radiographic interpretation (1900s-1920s)
  • Evidence-based medicine protocols (1990s-present)
  • EHR navigation (2000s-2010s)

Computational literacy now joins that lineage.

In this era, professionalism includes participating in tool-building, algorithmic auditing, and continuous refinement. Not because every physician must become a software engineer — but because medicine cannot outsource its future infrastructure entirely to outsiders.

Physician-builders safeguard alignment between technological power and clinical purpose.

That is the ethos behind Doctors Who Code.


A Near-Future That Is Already Arriving

Imagine a 2026 clinic visit (because it’s already happening):

  • An AI scribe (Nuance DAX, Commure Ambient, Abridge) captures the conversation in real time
  • A record summarizer (Epic’s Cosmos) condenses years of chart data into digestible timelines
  • An evidence engine (UpToDate AI, Isabel DDx) proposes differential diagnoses with literature support
  • A guideline synthesizer (MDCalc Plus) drafts a treatment plan aligned with specialty society recommendations

The physician reviews, questions, adjusts, explains, comforts, and decides. Judgment remains human. Accountability remains human. Empathy remains human.

Augmentation, not replacement.

That world is not distant. It is within reach — as Wachter notes — provided physicians help shape it.


Frequently Asked Questions

Do physicians really need to learn to code?
Not every physician needs to write production software, but understanding basic programming logic, APIs, data structures, and algorithmic thinking helps physicians evaluate AI tools, identify implementation risks, and communicate effectively with development teams.
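
As a small taste of that algorithmic thinking: even a simple clinical rule of thumb becomes unambiguous once written as code, which is exactly the precision you need to interrogate a vendor’s tool. A toy Python sketch, with thresholds shown for illustration rather than as clinical guidance:

    # Illustration only: a clinical rule of thumb expressed as code.
    # Thresholds are shown for the example, not as clinical guidance.
    def needs_bp_recheck(systolic, diastolic):
        """Flag a blood pressure reading for repeat measurement."""
        return systolic >= 140 or diastolic >= 90

    for systolic, diastolic in [(118, 76), (142, 88), (138, 95)]:
        status = "recheck" if needs_bp_recheck(systolic, diastolic) else "ok"
        print(f"{systolic}/{diastolic}: {status}")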

What programming languages should doctors learn first?
Python is ideal for healthcare applications due to its extensive medical/scientific libraries (pandas for data analysis, FHIR parsers for EHR integration, scikit-learn for machine learning). SQL is essential for database queries. JavaScript helps with web-based clinical tools.
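
For a sense of what pandas makes easy, here is a minimal sketch that summarizes lab values per patient; the data are fabricated for illustration:

    # Minimal pandas sketch: summarize fabricated lab values per patient.
    import pandas as pd

    labs = pd.DataFrame({
        "patient": ["A", "A", "B", "B"],
        "test": ["creatinine"] * 4,
        "value": [0.7, 1.1, 0.9, 0.8],
    })

    # Mean and max creatinine per patient, in one line
    print(labs.groupby("patient")["value"].agg(["mean", "max"]))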

How can I get started with medical AI development?
Begin with courses on Python for healthcare, explore FHIR (Fast Healthcare Interoperability Resources) standards, experiment with APIs from your EHR vendor, and join communities like Doctors Who Code for peer support and project collaboration.
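
As a hedged first exercise, the sketch below queries the public HAPI FHIR test server, which hosts synthetic data (never send real patient information to a test server, and note the endpoint may change):

    # Starter sketch: read a few synthetic Patient resources from the
    # public HAPI FHIR R4 test server.
    import requests

    BASE = "https://hapi.fhir.org/baseR4"  # public test endpoint

    resp = requests.get(f"{BASE}/Patient", params={"_count": 3}, timeout=30)
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR Bundle resource

    for entry in bundle.get("entry", []):
        patient = entry["resource"]
        names = patient.get("name") or [{}]
        print(patient["id"], names[0].get("family", "<no family name>"))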

Is AI going to replace doctors?
No credible healthcare AI expert believes this. AI excels at pattern recognition, data synthesis, and task automation. Medicine requires judgment under uncertainty, empathetic communication, ethical decision-making in grey zones, and accountability — all irreducibly human functions.


Closing

I agree with Dr. Wachter: AI does not need to be perfect. It needs to be better than the alternative — and accompanied by thoughtful physician oversight.

But I will add this: the safest medical AI future is one in which physicians are present in the builder’s seat.

If we do not build the tools of medicine’s future, others will — without us in the room. Without our clinical reality-testing. Without our patient-centered values. Without our understanding of what “first, do no harm” means in practice.

That is why I remain bullish on Doctors Who Code.


About the Author

Dr. Chukwuma Onyeije is a board-certified Maternal-Fetal Medicine specialist and Medical Director at Atlanta Perinatal Associates, where he manages complex high-risk pregnancies. He is the founder of CodeCraftMD, a HIPAA-compliant AI-powered medical billing platform, and writes about the intersection of medicine and technology at lightslategray-turtle-256743.hostingersite.com. Dr. Onyeije holds clinical faculty appointments and has presented on healthcare AI implementation at national conferences. He can be reached at info@codecraftmd.com.

Disclosures: Dr. Onyeije is the founder of CodeCraftMD, a medical technology company developing AI-powered clinical tools. He has no financial relationships with the AI companies mentioned in this article.