The Office of the National Coordinator for Health Information Technology (ONC) is a small agency in the Department of Health and Human Services (HHS) whose mission is to increase the interoperability and use of electronic health records and health IT. We don't have the funding and personnel of larger agencies, and, for the most part, this is fine: the entrenched industry stakeholders already know what's happening at ONC and keep up with our policies, toolkits, and initiatives.
But to be truly innovative, we need input from more than just the big stakeholders, particularly in this age of smartphones and apps. We also need innovation from people and entities that are completely new to health IT. If they don't know about ONC and aren't up to speed on the newest policies, tools, and initiatives we offer, they will face reengineering and hurdles down the line as they develop new products and enter the marketplace.
Enter prize competitions.
We created a prize challenge program to help address this innovator and startup issue. One of the benefits of a challenge is that it taps expertise you didn't know existed and puts you in contact with innovators you otherwise would not have engaged. Having these "outsiders" work on problems presented by ONC would help bring them into the fold. At the beginning, my colleagues and I didn't know much about the actual mechanics of running challenges, so we brought on private sector contractors with the relevant experience to help us get things up and running. We also had to create a structure that would accommodate different kinds of challenges with their unique goals and outputs.
For all the preparation and research we did, the only way to really do this, and to get better at it, was to dive right in and start running challenges. With each one, we learned something new, which in turn shaped the next one, and the next. Though we were not consciously attempting it, we were engaged in human-centered design.
Our primary consumers were the innovators we were trying to get more involved with ONC, and each challenge was a test of how well we were connecting with them. Were the incentives to participate appropriate for the work? Was the problem defined well enough? Were we targeting the right groups and networks?
We ran almost 20 challenges over the first three years of the program, and each one increased our sample size and contributed to our lessons learned. We partnered a lot with fellow HHS agencies and initiatives, like the National Cancer Institute (NCI), Office of Minority Health (OMH), and the Million Hearts program. This was key for a few reasons: it brought in fresh eyes and new ideas at a time when they could make the most difference; it helped spread the word about challenges and seed interest in them throughout HHS; and it gave added credibility to these early efforts.
We quickly learned a couple of things. First, the size of the awards didn't matter as much as we thought. Second, poor or muddled problem definitions ultimately led to poor challenges.
One thing we kept hearing from our participants was that even with these interesting new products and the distinction of having won a government challenge, our innovators were still having trouble entering the market. And innovations that don't last or aren't used can't really be called innovative. This led us to the Market R&D Pilot Challenge, which was specifically designed to address these barriers to market entry. It required groups with innovative products to team up with providers, who would together run a pilot of the new product.
Having learned that one of these barriers was a simple lack of access to the right decision-makers, we jump-started the process by holding three matchmaking sessions where potential pilot hosts met with up to 12 innovators, speed-dating style. All told, over 180 businesses attended these sessions in San Francisco, Washington, D.C., and New York City, demonstrating that we had tapped into a real need. We also required submissions to include components such as business plans and detailed pilot budgets, which helped us gauge the likelihood of success during the pilot and beyond. Since then, we've required submission components like these in virtually all of our challenges.
In closing, I want to emphasize the importance of consumer testing and continuous learning in whatever form they take. Beyond the direct improvements they drive, taking the time to listen to your customers' underlying issues can yield new insights that take you in entirely different and rewarding directions.
Adam Wong is a senior innovation analyst in the Office of the National Coordinator for Health Information Technology at the Department of Health and Human Services. He also served as content coordinator for the federal government’s online Challenges and Prizes Toolkit.