Carbon Is Harder Than Silicon: Why the Future of AI Depends More on Human Systems Than Hardware

The one about the socio-technical model applied to AI.

Much of today’s AI conversation centers on tangible constraints — chip shortages, compute capacity, and the growing energy demands of model training. Those are real limits, and they’ll shape what’s technically possible for years.

But anyone who has lived through an EHR rollout or a “clinical decision support” pilot knows that the harder part often begins after the technology is installed. Once the silicon is humming, we still have to integrate it into the living fabric of care — how clinicians think, how teams coordinate, how patients experience the system.

That’s where the carbon-based agents come in. Not as obstacles, but as the essential medium through which any digital innovation actually becomes care.

The Sittig and Singh (2010) socio-technical model still provides one of the clearest guides here. It lays out eight interdependent dimensions — from technical infrastructure and clinical content to workflow, organizational culture, and external environment. It’s a reminder that safety, quality, and adoption emerge not from the technology itself, but from how these layers interact in practice.

The same logic applies to AI. A model can perform flawlessly in validation but fail to add value at the bedside if it doesn’t align with clinical priorities, decision rhythms, or accountability structures. These systems don’t just plug into existing teams; they subtly reshape how roles are defined, how authority flows, and how judgment is shared.

So yes, the field will need more chips, more power, and more scalable infrastructure. But the real breakthroughs will come from designing AI that supports the people who deliver care and, ultimately, the carbon-based agent at the center of their mission: the patient.

Switches, Boxes, and Steam — Three Questions for AI in Healthcare

The one where 3 books answer 3 questions about AI in healthcare

Every few years a new wave hits healthcare IT. Some reshape the shoreline; some barely ripple. Lately I’m leaning toward a simple view: Gen AI is a real step forward—but the changes that endure will come from infrastructure and incentives. In a system as regulated and closely watched as health care, the roads matter more than the horsepower.

1) Who will benefit?

History says advantage concentrates around control points—the places where compute, data, and distribution meet. In today’s terms, that means model providers with scale (e.g., OpenAI, Anthropic, Google/DeepMind), the clouds that host them (e.g., AWS, Azure, Google Cloud), platforms already embedded in clinical workflows (e.g., Epic, Oracle Health/Cerner), and companies that own the last mile to clinicians and patients (e.g., large health systems). Because they sit at the “master switch,” these players have significant influence in picking winners and losers.

But the circle does widen. A second group tends to capture durable value: the people and teams who complement those control points—clinical data stewards, evaluation and safety engineers, product integrators who turn models into reliable steps inside prior auth, triage, charting, imaging, and revenue cycle. As individuals, entrepreneurs who see where the network is going (and get there early) tend to do well: they make the glue, the adapters, the “boring” parts that let many models work safely across many contexts and control points.

As general models compete and capital expands compute availability, the answer to “who benefits” may depend less on who has the smartest model than we thought. In healthcare, the answer is probably closer to: “where do you sit relative to distribution and standards?”

2) How will they benefit?

Consider what happened when shipping settled on a standard metal box—a 20- or 40-foot container with identical corner fittings. That one decision let cranes, ships, trains, and trucks handle cargo without repacking. Risk fell. Costs fell 90%. And the work moved: less muscle on the pier; more planning and throughput management at distribution centers and rail yards. Jobs followed the flow.

AI may trace a similar trajectory. As models ship in consistent packages—stable interfaces and licenses, plus companion evidence (safety and efficacy, provenance, evaluation coverage, known hazards)—risk drops across the chain. This chain, though, moves bits rather than atoms: workflows are increasingly digital, and model swaps produce more predictable outcomes. Capital becomes willing to fund further scale because components are modular, productized, and auditable.

As for the work, I expect it to move from first-pass busywork to the “inland” roles that plan, do, study, and act in service of the learning health care system. Technical roles in platform and orchestration, evaluation and red-teaming, data lineage and governance, and enablement and change management will blossom. Existing roles will morph.

On the admin side, copy-paste, phone calls, and status-chasing give way to flow coordination, exception desks, audit/QA, and patient navigation (think schedulers → access-ops coordinators; coders → utilization and compliance analysts).

On the clinical side, keystrokes give way to judgment—ambient draft review, rare-case adjudication and roster reviews, care-plan design, patient counseling, and system-of-care safety and stewardship.

And on the patient/caregiver side, the role shifts from passive data source to co-steward of context, from recipient of transactions to navigator. There will be more need for controlling consent and sharing, supplying high-signal inputs (PROMs [patient-reported outcome measures], home-device streams, life context, and values), correcting records and attaching verifiable documents, flagging errors or preference mismatches, and (for caregivers) supporting therapy reconciliation, adherence, and follow-up.

3) What is our responsibility?

I see two obligations emerging from this and running in parallel.

First: transparency as a design choice. The industrial age didn’t compound because of the steam engine alone; it compounded because we paid for disclosure. Blueprints were made public, and builders turned them into businesses. In AI, the equivalent is releasing portable, trustworthy manifests with every meaningful update—lineage, test coverage, failure modes, guardrails—so others can evaluate, integrate, insure, and, when appropriate, improve. Procurement and reimbursement will come to prefer systems that arrive with real evidence, much as they now do for conventional therapeutics (i.e., medications) through trials and pharmacovigilance.
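To make the idea concrete, here is a minimal sketch of what such a machine-readable model manifest and its check might look like. The field names (lineage, eval_coverage, failure_modes, guardrails) follow the list above but are illustrative assumptions, not an existing standard, and the model name is hypothetical.

```python
# Illustrative sketch: a model "manifest" as plain data, plus a check that
# a manifest carries the evidence fields a buyer or integrator would expect.
# Field names are assumptions drawn from the essay, not a real standard.

REQUIRED_FIELDS = {"model", "version", "lineage", "eval_coverage",
                   "failure_modes", "guardrails"}

def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of problems; an empty list means the manifest passes."""
    problems = [f"missing field: {f}"
                for f in sorted(REQUIRED_FIELDS - manifest.keys())]
    # Evaluation coverage is reported per task as a fraction of cases covered.
    for task, cov in manifest.get("eval_coverage", {}).items():
        if not (0.0 <= cov <= 1.0):
            problems.append(f"eval_coverage[{task}] out of range: {cov}")
    return problems

example = {
    "model": "triage-draft-assistant",  # hypothetical model name
    "version": "2.1.0",
    "lineage": {"base_model": "general-llm",
                "fine_tune_data": "de-identified notes"},
    "eval_coverage": {"triage": 0.92, "charting": 0.85},
    "failure_modes": ["rare presentations", "ambiguous abbreviations"],
    "guardrails": ["human sign-off required", "no autonomous ordering"],
}

print(validate_manifest(example))  # prints []: all required fields present
```

The point is less the particular fields than the habit: evidence travels with the artifact, in a form that downstream evaluators, insurers, and integrators can check automatically.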

Second: we have to uplift the people, not just the pipes. Standards don’t only move information; they move jobs. Containerization made ports safer and faster, but it also displaced longshoremen and pushed opportunity inland. Healthcare will feel a similar migration as routine drafting and triage shrink. We will move only as fast as we give the workforce a glide path. That means making the new roles visible; creating portable credentials for evaluation, operations, governance, and enablement; and retraining with intentionality. We won’t succeed if we treat the players as frozen in place. We need to help the team skate to where the puck is going.

Bringing it together

If advantage tends to form at the control switches, and if standards are what turn demos into networks, then the next phase’s winners are (1) the builders who make AI reliable, swappable, and evidenced, and (2) the organizations that invest in the people who run that learning system well. Gen AI “thoughtpower”—like the horsepower that came before—opens the door. The plumbing and the social compact (transparency plus worker mobility) will determine how far we get through it.

Want to go deeper?

These ideas are an amalgam of a few books that have been highly influential in my own thinking. Please let me know if you have encountered others that belong in this collection!

Tim Wu, The Master Switch — Why open eras often consolidate around control points, and what that means for innovation and competition.

Marc Levinson, The Box — How a universal container standard reshaped costs, jobs, and geography—useful for thinking about AI packaging and “inland” roles.

William Rosen, The Most Powerful Idea in the World — The case that incentives for disclosure (the patent bargain) made progress compound—and why entrepreneurs matter for carrying blueprints into the world.

Forget Big Data. It’s Time to Talk About Small Data.

With all of the talk of “big data,” it can be hard to remember that there was ever any other kind of data. If you’re not talking about big data — you know, the 4 V’s: volume, variety, velocity, and veracity — you should go back to running your little science fair experiments until you’re ready to get serious. Prevalent though this message may be, it has, at least in health care, stunted our ability to focus on and capture the hidden 5th V of big data: value.

Continue reading

Symcat: Data-Driven Symptom Checker

Problem

When people get sick, they have several options for obtaining health care. These include going to the emergency room or an urgent care center, or calling a doctor or nurse. However, 80% of people experiencing symptoms start with an Internet search. Unfortunately, searching on Google offers spotty results and frequently leads to undue concern. For example, one is 1000x more likely to encounter “brain tumor” in web search results for “headache” than to ever have the disease. Undue concern is a contributor to the 40% of emergency room visits and 70% of physician visits that are considered inappropriate.

Continue reading

Doctors or Algorithms: Who Will Win?

A recent TechCrunch article instigated some debate as to who will win the title of “Medical Expert:” physicians or algorithms. As a medical student with a background in engineering and machine learning, my perspective has led to a somewhat conflicted opinion. I have, on the one hand, seen how powerful algorithms can be, even in the medical domain, and on the other, watched and learned from master clinicians in medical school.

Continue reading on the Symcat blog: Doctors or Algorithms: Who Will Win?

Towards an Intelligent Stethoscope

Introduction

Though in some ways replaced by ultrasound technology, cardiac auscultation—using a stethoscope to listen to a patient’s heart—remains an important, cost-effective screening modality for recognizing heart disease, and is of particular importance in several clinical scenarios. Less emphasis has been placed on training US clinicians in auscultation, however, making it something of a “lost art.” This may delay a patient’s diagnosis of heart disease. Continue reading