Medidata: Designing & Building a Chatbot in 28 Hours
Overview
Using design to increase protocol compliance in remote clinical trials.
The Challenge
When Medidata announced their annual hackathon, I decided to test my skills in a competitive environment for the first time. I arrived in New York with no team and no plan, but I knew I wanted to do something centered on the patient experience. I hit it off with Ramiro, a full-stack developer who wanted to build something using text and voice. We ran a short design-thinking session on what we could offer patients with that technology and landed on a chatbot advocate to track patient symptoms. Spoiler alert: we took second place out of 12 teams.
Team
My teammate was an experienced developer with a focus on backend and API design; he picked the Google API and implemented the backend. I managed the design and implementation of the UI, as well as writing the branching logic and content for the chatbot's responses.
Audience
Participants of clinical trials tracking symptoms or events at home.
Constraints
We had 28 hours to design, build, and present a working prototype. The judges prioritized projects that integrated with the Medidata platform, pushing data to and pulling data from Medidata's electronic data capture (EDC) system.
Process
We wrote up our concept: a chatbot with both voice and text input and output, to support the broadest possible range of user abilities. We wanted it to be responsive, to work on any device with a screen and a microphone, and to use natural language processing (NLP).
We split off to work individually: while Ramiro spun up a repo and started on the app architecture, I started on the design. I began by defining the chatbot's attributes, since they would shape the patient experience: personal but not informal, trustworthy but not authoritative, warm but not cheerful. A healthcare app has to feel safe before people will share private medical data, so I wanted a tone that encouraged disclosure without feeling artificial, and that worked equally well written and spoken. An unduly cheerful chatbot could upset a cancer patient by being too glib when congratulating them on taking a daily dose of an experimental drug with heavy side effects.
I continued by writing an outline of the content and branching logic before starting any UI design. I then switched to layout, producing a wireframe to hand to Ramiro, and went back to content to write the actual copy.
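The branching-logic outline can be sketched as a simple dialog tree. This is a hypothetical reconstruction for illustration — the node names, prompts, and `advance` helper are invented, not our actual implementation:

```javascript
// Hypothetical sketch of a branching-logic outline: each node holds the
// bot's prompt and maps a normalized patient answer to the next node.
const dialog = {
  start: {
    prompt: "Are you a morning person or a night person?",
    next: { morning: "morningDose", night: "eveningDose" },
  },
  morningDose: {
    prompt: "I'll check in each morning. Did you take your dose today?",
    next: { yes: "logDose", no: "missedDose" },
  },
  eveningDose: {
    prompt: "I'll check in each evening. Did you take your dose today?",
    next: { yes: "logDose", no: "missedDose" },
  },
  logDose: { prompt: "Thanks, I've recorded that. How are you feeling?", next: {} },
  missedDose: { prompt: "That's okay. Would you like a reminder later?", next: {} },
};

// Advance the conversation: return the next node id, or null when the
// answer doesn't match a branch (the real bot offered suggestions here).
function advance(nodeId, answer) {
  const node = dialog[nodeId];
  return node.next[answer] ?? null;
}
```

Keeping the tree as plain data made it easy to write and revise the copy separately from the code that walks it.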
Once I handed the content over to Ramiro, I switched context to add detail to the UI. Design imputes value, but it is moot if the product does not work, so we had agreed to implement first with a simple UI and layer on embellishments if we had time. Instead of handing over polished designs, I styled the app directly in the development branch with HTML, CSS, and JavaScript. Prototyping in code let me skip a pixel-perfect mockup that might have left too little time to implement. Coding the design worked well for such a small team: Ramiro and I both committed to the repo and pulled each other's work as we went.
Solution
The final result was a warm but professional chatbot named Advocate. The bot could listen to voice input, read text typed into the chat log, and respond by speaking while simultaneously adding the same content to the chat log in writing. The app opened by asking the patient whether they were a morning person or a night person, to help establish when they would take their medication. The natural language processing accepted "yeah" or "yup" as a "yes," and in the rare cases where the app did not understand the patient's answer, it offered possible options for what the patient might have meant. Most importantly, the results were pushed to Rave, Medidata's EDC, as a daily form submission. Ramiro and I were proud that a team of two earned second place in the competition.
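The answer handling described above — mapping "yeah" or "yup" to "yes," and falling back to suggestions when nothing matches — might look something like this. The `INTENTS` table and `interpret` function are illustrative assumptions, not the Google NLP service we actually used:

```javascript
// Hypothetical sketch of answer normalization: map common affirmations
// and negations to canonical intents; when nothing matches, return the
// expected options so the bot can read them back to the patient.
const INTENTS = {
  yes: ["yes", "yeah", "yup", "sure", "ok"],
  no: ["no", "nope", "nah"],
};

function interpret(utterance, expected = ["yes", "no"]) {
  const text = utterance.trim().toLowerCase();
  for (const intent of expected) {
    if ((INTENTS[intent] || [intent]).includes(text)) {
      return { intent, suggestions: null };
    }
  }
  // No match: hand back the expected options so the bot can ask,
  // "Did you mean one of: yes, no?"
  return { intent: null, suggestions: expected };
}
```

In our build this normalization lived behind the NLP service, but the behavior the patient saw was the same: loose answers were accepted, and unrecognized ones were met with a short list of options instead of an error.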
Retrospective
I was fortunate to pair with a reliable technical partner who had explored the APIs he was interested in before the event started, and I was able to communicate my vision concisely to someone I had only just met, which let us build quickly together. Another team of four asked us to join them to compete with the group of six; we declined, preferring a well-executed small scope over the risk of a larger project failing. The team that took first place was composed of engineers and a UX designer who had worked together daily before the hackathon started.
We only scratched the surface of what was possible with NLP. I am happy with how I started with a content outline, switched to UX design, came back to finish the content, then moved to UI design; it was a useful feedback cycle for a product that required both high-quality content and a polished UI. My decision to skip a high-fidelity mockup paid off: I added detail as I went, first getting the elements on screen, then adding color and shape, then layering on animation and finesse as time allowed. The team with the best design barely implemented their product and had to finish their presentation from comps because they ran out of time.
Also, a pro tip from Ramiro: record a video ahead of the exhibition in case the technology fails in front of the audience (CTO included). Luckily our presentation went off without issue, and we got a great response when the audience saw the chatbot perform live.