UX Design Lead · 0→1 · AI Product
I led the end-to-end design of Chronicle — a first-of-its-kind AI-powered surgical video management tool for the UW Department of Otolaryngology. From blank page to high-fidelity prototype in one quarter, the project won the Brave Award for Innovation in AI out of 44 competing capstone teams.
The brief
Dr. Lingga Adidharma, an Otolaryngology surgery resident at UW Medicine, had developed a patent-pending AI algorithm capable of distilling a six-hour endoscopic surgery into a 15-minute highlight reel. The algorithm was already achieving 84% accuracy. But there was no interface. No way for a surgeon to actually use it.
She brought us in as a design consultancy team to answer one question:
"How might digital imagery improve surgeons' recall, collaboration, and time in order to improve patient outcomes?"
— Our design question

Discovery
I structured a three-pronged research approach. Rather than jumping straight to the algorithm UI, I needed to understand the full texture of a surgeon's day — their relationship with documentation, memory, and time pressure.
We reviewed academic literature on AI-assisted surgical video analysis, conducted a legal scan across HIPAA, consumer protection, and patient privacy requirements, and benchmarked existing surgical video editing software. This gave us the technical and regulatory landscape before speaking to anyone.
Two of us went on-site at Harborview Medical Center. We observed the operating room environment, mapped the physical space, and shadowed Dr. Adidharma through handoffs and consultations. The goal: understand the end-to-end workflow, not just the documentation moment.
We conducted 1:1 interviews with experienced surgeons across otolaryngology, focusing on note-taking practices, memory challenges, and current use of digital imagery. Each session surfaced increasingly sharp insights about where the real friction lived.
"I don't have time to be Ernest Hemingway here."
— Experienced surgeon, in-depth interview

What we found
Surgeons constantly trade accuracy for speed. Patient care always interrupts documentation. Tasks are manually completed with no digital assistance.
Written notes are interpreted differently by different readers. "Airway dilated to 2mm" communicates far less than a picture of an almost-closed airway. Templates bias notes toward generic descriptions.
Without video, patients undergo repeated scans. Text-based notes may not convey the urgency of a condition even when they're accurate. Video accelerates consultation on complex cases.
Operative notes are often completed after the fact — sometimes days later. Multiple similar surgeries in a single day cause patient conditions to blur together in memory.
"Reading 'the airway is dilated to 2 millimeters' is a lot harder to understand than a picture of the airway almost closed."
— Surgery resident, contextual inquiry

Definition
Research insights alone don't make decisions — principles do. I drove the team to translate our findings into five design principles we'd revisit every week to resolve debates and anchor our choices.
Seamless fit into existing workflows. Key tasks in minutes, not hours. Holistic: pre, during, and postoperative.
Involve surgeons throughout. Account for generational differences in tool preferences and comfort with AI.
Prioritize functionality and tangible benefits. Identify optimal media types for specific clinical use cases.
Build a deployable MVP. Offer a clear roadmap of future enhancements rather than speculative features.
Design as if training a skilled human assistant — outcome-focused, not technology-showcasing. Surgeon autonomy is non-negotiable.
My design scope
I mapped the full system — from uploading raw surgery video all the way through storing and sharing clips — then made a deliberate call about where design effort would have the highest impact. I scoped our MVP to the three moments where surgeon judgment was most active: Review, Select, and Publish.
Design process
I structured a tight feedback loop: weekly syncs with Dr. Adidharma, rapid parallel prototyping across the team, and deliberate pressure on the AI interaction model. I personally owned translation from concept to pixel across every iteration.
Each designer independently sketched the key interaction moments — then we converged. This surfaced assumptions fast. I mapped two storyboard scenarios: post-op note-taking and care team handoff. We even used AI to generate the visual panels.
We explored three structural variants for the core screen: how to lay out the highlight reel, clip list, and operative notes simultaneously. I pushed for a side-by-side split panel model based on sponsor feedback that "viewing text and video together keeps me organized."
The critical design problem was how to represent AI-generated vs. surgeon-generated content clearly. I introduced the sparkle/person icon system for tags and explicit AI disclaimers on video. Peer feedback identified navigation confusion and a need for stronger visual hierarchy.
Ran prototype testing with Dr. Adidharma using the AI Trust Score framework. Key finding: the right rail lacked affordances for understanding clip states. The MVP use case crystallized as academic and teaching, with a clear upgrade path to direct patient care.
Converged on the final design system: dark navigation, color-coded surgical phase timeline, Proposed/Confirmed/All clip tabs, AI responsibility modal at publish. Every element traces back to a research finding.
The product
The primary workspace pairs the AI-generated highlight reel with a structured clip list. Surgeons see the full video timeline color-coded by surgical phase — Approach, Removal, Reconstruction — so spatial memory is preserved. The split-panel was a deliberate decision driven by sponsor feedback: text and video side-by-side reduces cognitive overhead.
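To make the phase-coded timeline concrete, here is a minimal sketch of the underlying data model. The phase names come from the case study; the interface, field names, and sample durations are illustrative assumptions, not the shipped implementation.

```typescript
// Hypothetical data model for the color-coded surgical phase timeline.
// Phase names are from the case study; everything else is assumed.
type SurgicalPhase = "Approach" | "Removal" | "Reconstruction";

interface PhaseSegment {
  phase: SurgicalPhase;
  startSec: number; // offset into the full recording, in seconds
  endSec: number;
  color: string;    // swatch used to render this band on the timeline
}

// Find which phase a timestamp falls in, so clicking the timeline
// can jump the highlight reel to the right surgical context.
function phaseAt(segments: PhaseSegment[], tSec: number): SurgicalPhase | null {
  const hit = segments.find((s) => tSec >= s.startSec && tSec < s.endSec);
  return hit ? hit.phase : null;
}

// Example: a six-hour procedure split into three phases (durations invented).
const timeline: PhaseSegment[] = [
  { phase: "Approach", startSec: 0, endSec: 5400, color: "#4C9AFF" },
  { phase: "Removal", startSec: 5400, endSec: 14400, color: "#FF8B00" },
  { phase: "Reconstruction", startSec: 14400, endSec: 21600, color: "#36B37E" },
];
```

Keeping the timeline as explicit segments means the UI can render the full six-hour context even when the highlight reel shows only fifteen minutes, which is what preserves the surgeon's spatial memory.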
Tags solve two problems at once: they make clips searchable across procedures, and they make AI involvement legible. I designed a two-icon system — a sparkle (✦) for AI-generated tags, a person icon for surgeon-added ones. Surgeons can add, remove, and modify any tag. The AI is an assistant, not an authority.
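The two-icon system can be sketched as a provenance field on every tag. The "AI is an assistant, not an authority" rule falls out of one design choice, shown here as an assumption: any surgeon edit reassigns authorship. Type and function names are hypothetical.

```typescript
// Hypothetical model of the two-icon tag system: "ai" tags render with
// the sparkle (✦), "surgeon" tags with a person icon. Names assumed.
type TagSource = "ai" | "surgeon";

interface ClipTag {
  label: string;
  source: TagSource;
}

function tagIcon(tag: ClipTag): string {
  return tag.source === "ai" ? "✦" : "👤";
}

// Assumed rule: once a surgeon modifies a tag, it becomes their tag.
// Provenance never silently stays "ai" after human editing.
function editTag(tag: ClipTag, newLabel: string): ClipTag {
  return { label: newLabel, source: "surgeon" };
}
```

Storing provenance as data, rather than as a styling detail, is what makes AI involvement legible everywhere the tag appears: search results, clip lists, and the published record all read from the same field.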
The most critical UX decision in the project: before any AI-generated content enters the patient record, the surgeon must actively confirm they have reviewed it. This wasn't an afterthought — it was a design principle from week one. The modal shows exactly what will be shared and requires an explicit acknowledgment.
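The "no passive defaults" rule behind the modal can be expressed as a single guard. This is a sketch under assumed names, not the product's actual logic: the point is that the acknowledgment flag can only be set by an active click, and publishing is structurally impossible without it.

```typescript
// Hypothetical sketch of the publish gate. AI-generated content cannot
// reach the patient record without explicit surgeon acknowledgment.
interface PublishRequest {
  clipIds: string[];
  containsAiContent: boolean;
  surgeonAcknowledged: boolean; // set only by an active click in the modal
}

function canPublish(req: PublishRequest): boolean {
  // No passive defaults: AI content requires an explicit confirmation.
  if (req.containsAiContent && !req.surgeonAcknowledged) {
    return false;
  }
  // Nothing to publish is also a blocked state.
  return req.clipIds.length > 0;
}
```

Because the check lives at the publish boundary rather than in the modal's UI code, no alternate path (keyboard shortcut, API call, future feature) can route AI content into the record unreviewed.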
Responsible AI
Working in a HIPAA-adjacent context with AI-generated medical content forced the team to make ethics explicit in every screen — not just in a legal appendix. I drove the integration of responsible AI principles directly into the interaction model.
AI-suggested clips and tags carry a distinct visual marker (✦). Surgeon-generated content uses a person icon. No ambiguity about what the machine produced.
The publish modal requires an active confirmation before any AI-generated content reaches the patient record. No passive defaults. The surgeon is the author.
We explicitly designed for the risk that AI omits something important — and resisted the temptation to let surgeons over-rely on the highlight reel as a substitute for their own memory.
We explicitly established that compliance with HIPAA and other patient-data and ePHI regulations was the sponsor's legal responsibility — and designed around that boundary rather than pretending it didn't exist.
Outcome
Chronicle went from a raw algorithm with no interface to a high-fidelity prototype that could be shown to investors and used to pitch the pre-startup. The sponsor, Dr. Adidharma, used the prototype directly in subsequent presentations and conversations with UW stakeholders.
Future roadmap
Reflections
The algorithm was technically impressive but completely invisible to users without a design layer. The insight that drove everything: a tool isn't a product until a human can trust it enough to use it.
The responsible AI principles we debated in week two ended up as actual UI elements — the publish modal, the tag icon system, the AI disclaimers on video. Good ethics is good design.
No amount of secondary research would have surfaced "I don't have time to be Hemingway here." Being in the hospital, talking to surgeons under real pressure — that's where the real design brief lived.
Dr. Adidharma was simultaneously our user, our subject matter expert, and our critic. Weekly syncs kept us calibrated. Her feedback made the designs sharper at every iteration — when we leaned into that dynamic instead of presenting to her, outcomes improved.
Choosing to focus on Review → Select → Publish and leave Upload, Storage, and Sharing to future sprints wasn't a compromise — it was what made the prototype credible. Scope selection is itself a design skill.
We used AI to generate storyboard visuals, refine language, and accelerate iteration — while being explicit about where human review was non-negotiable. A useful model for how to ship AI-assisted work responsibly.
"Having objective information is much more helpful than subjective information."
— Experienced surgeon, in-depth interview

The sentence that grounded the entire product.