It’s been a while, partly because these posts can still take a while for me to write. I wanted to experiment with putting a few thoughts down more informally (read: no links), and I originally intended to elaborate on one of the often-overlooked problems with applying advanced statistical methods/ML/AI/”cognitive computing” to health care. That will have to wait, though, because I’m realizing there is some important background I would like to elaborate on first. I’ll preface this by admitting that I am no scholar of innovation, but I do consider myself a student. My thinking begins with a few practical (and very much borrowed) theories of innovation.
With all of the talk of “big data,” it can be hard to remember that there was ever any other kind of data. If you’re not talking about big data — you know, the 4 V’s: volume, variety, velocity, and veracity — you should go back to running your little science fair experiments until you’re ready to get serious. Prevalent though this message may be, it has, at least in health care, stunted our ability to focus on and capture the hidden 5th V of big data: value.
It is hard to overstate just how much of a currency data has become in medicine. Whether we are talking about evidence-based medicine, precision medicine, or genomics, the ability to collect and distill data into information, transform it into knowledge, and use that knowledge to drive effective action is at the heart of what modern medicine seeks to accomplish. The centrality of data to this process has created well-entrenched stakeholders, which is why it comes as no surprise that the conversation around openly sharing research data after publication has shifted into controversial territory.
This post also appeared on KevinMD.
Software has opinions. No, I’m not talking about opinions on the next presidential election or opinions about flossing before or after brushing. Software has opinions about how data should be displayed, opinions about users’ comfort with the mouse, even, in some cases, opinions about what you should have for dinner (see your local on-demand food ordering service).
We tend to view software as a tool that is either good or bad. Good when it lets us do what we want with as little frustration as possible and bad when it doesn’t. Maybe we should be a little nicer to software.
I enjoy a good brainteaser, one that you really have to concentrate on, with enough revelations built in to make the end result a satisfying accomplishment. Here are some of my favorites. I made the answer text white so that you can’t see it unless you highlight it (click and drag over it).
Question: If you place 3 points randomly on the perimeter of a circle, what is the probability that all 3 lie on the same semi-circle?
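If you want to check your answer numerically before peeking at the hidden text, here is a quick Monte Carlo sketch (the function names are my own, not from the post). It places three uniformly random points on a circle and tests whether some point has the other two within half a turn counterclockwise of it, which is equivalent to all three sharing a semicircle:

```python
import math
import random

def all_on_semicircle(angles):
    """True if all points (angles in radians) fit within one semicircle.

    They fit exactly when, for some anchor point a, every point lies
    within pi radians counterclockwise of a.
    """
    return any(
        all((b - a) % (2 * math.pi) <= math.pi for b in angles)
        for a in angles
    )

def estimate(trials=200_000, seed=0):
    """Estimate the probability by simulation."""
    rng = random.Random(seed)
    hits = sum(
        all_on_semicircle([rng.uniform(0, 2 * math.pi) for _ in range(3)])
        for _ in range(trials)
    )
    return hits / trials
```

Calling `estimate()` will of course spoil the puzzle, so run it only after committing to a guess.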
Testing design assumptions with users is a critical ingredient in user-centered design. In Symcat’s early stages (ca 2012), we thought, for better or worse, that we would identify some eligible test users through Craigslist NYC. We were surprised by just how many people were willing to participate and collected some pretty interesting data in the process. I just stumbled upon it and I suspect much of it is still relevant, so I thought I would share. Get ready for some graphs.