UX Design Lead · 0→1 · AI Product

Chronicle: Making surgical memory objective.

I led the end-to-end design of Chronicle — a first-of-its-kind AI-powered surgical video management tool for the UW Department of Otolaryngology. From blank page to high-fidelity prototype in one quarter, the project won the Brave Award for Innovation in AI out of 44 competing capstone teams.

Client
UW Dept. of Otolaryngology &
Head and Neck Surgery
My Role
Design Lead — Research,
IA, Interaction, Visual Design
Team
4 UX practitioners
(Microsoft, Amazon, AT&T)
Timeline
One quarter (Spring 2024)
UW HCDE MS Capstone
Output
Figma prototype + design system
UI for the patent-pending AI algorithm
🏆
Brave Award — Innovation in AI

Awarded to Chronicle out of 44 capstone projects for pushing the boundaries of responsible AI integration in high-stakes medical contexts. The only project to treat surgeon autonomy and AI transparency as explicit core product principles.

A powerful algorithm with nowhere to live.

Dr. Lingga Adidharma, an otolaryngology surgery resident at UW Medicine, had developed a patent-pending AI algorithm capable of distilling a six-hour endoscopic surgery into a 15-minute highlight reel. The algorithm was already achieving 84% accuracy. But there was no interface. No way for a surgeon to actually use it.

She brought us in as a design consultancy team to answer one question:

"How might digital imagery improve surgeons' recall, collaboration, and time in order to improve patient outcomes?"

— Our design question
6h
Raw surgery video
15m
AI highlight reel
84%
Algorithm accuracy
0
Existing UI

Before we designed anything,
we went to the OR.

I structured a three-pronged research approach. Rather than jumping straight to the algorithm UI, I needed to understand the full texture of a surgeon's day — their relationship with documentation, memory, and time pressure.

01

Secondary research

Reviewed academic literature on AI-assisted surgical video analysis, conducted a legal scan across HIPAA, consumer protection, and patient privacy law, and benchmarked existing surgical video editing software. This gave us the technical and regulatory landscape before speaking to anyone.

02

Contextual inquiry at Harborview Hospital

Two of us went on-site at Harborview Medical Center. We observed the operating room environment, mapped the physical space, and shadowed Dr. Adidharma through handoffs and consultations. The goal: understand the end-to-end workflow, not just the documentation moment.

03

In-depth interviews with surgeons

Conducted 1:1 interviews with experienced surgeons across otolaryngology. We focused on note-taking practices, memory challenges, and current use of digital imagery. Each session surfaced increasingly sharp insights about where the real friction lived.

"I don't have time to be Ernest Hemingway here."

— Experienced surgeon, in-depth interview

Four compounding pain points.

Time vs. accuracy

Notes are a compromise

Surgeons constantly trade accuracy for speed. Patient care always interrupts documentation. Documentation tasks are completed manually, with no digital assistance.

Subjectivity

Language fails where images don't

Written notes are interpreted differently by different readers. "Airway dilated to 2mm" communicates far less than a picture of an almost-closed airway. Templates bias notes toward generic descriptions.

Imagery gap

Missing visuals cause repeated procedures

Without video, patients undergo repeated scans. Text-based notes may not convey the urgency of a condition even when they're accurate. Video accelerates consultation on complex cases.

Memory

Notes written days later

Operative notes are often completed after the fact — sometimes days later. Multiple similar surgeries in a single day cause patient conditions to blur together in memory.

"Reading 'the airway is dilated to 2 millimeters' is a lot harder to understand than a picture of the airway almost closed."

— Surgery resident, contextual inquiry

Five principles to keep us honest.

Research insights alone don't make decisions — principles do. I drove the team to translate our findings into five design principles we'd revisit every week to resolve debates and anchor our choices.

Efficient + integrated

Seamless fit into existing workflows. Key tasks in minutes, not hours. Holistic: pre-, intra-, and postoperative.

Surgeon-centric

Involve surgeons throughout. Account for generational differences in tool preferences and comfort with AI.

Utility over aesthetics

Prioritize functionality and tangible benefits. Identify optimal media types for specific clinical use cases.

Practical first

Build a deployable MVP. Offer a clear roadmap of future enhancements rather than speculative features.

Don't put AI on a pedestal

Design as if training a skilled human assistant — outcome-focused, not technology-showcasing. Surgeon autonomy is non-negotiable.

From end-to-end to where it mattered most.

I mapped the full system — from uploading raw surgery video all the way through storing and sharing clips — then made a deliberate call about where design effort would have the highest impact. I scoped our MVP to the three moments where surgeon judgment was most active:

Upload video → Generate highlight reel → Review clips → Select + bookmark → Tag + publish → Share + store

Design focus: Review clips → Select + bookmark → Tag + publish

Five rounds of iteration.
One surgeon. Weekly.

I structured a tight feedback loop: weekly syncs with Dr. Adidharma, rapid parallel prototyping across the team, and deliberate pressure on the AI interaction model. I personally owned translation from concept to pixel across every iteration.

R1

Lo-fi sketches + storyboards

Each designer independently sketched the key interaction moments — then we converged. This surfaced assumptions fast. I mapped two storyboard scenarios: post-op note-taking and care team handoff. We even used AI to generate the visual panels.

R2

First wireframes — structure before style

We explored three structural variants for the core screen: how to lay out the highlight reel, clip list, and operative notes simultaneously. I pushed for a side-by-side split-panel model based on sponsor feedback that "viewing text and video together keeps me organized."

R3

Mid-fidelity — validating the AI touchpoints

The critical design problem was how to represent AI-generated vs. surgeon-generated content clearly. I introduced the sparkle/person icon system for tags and explicit AI disclaimers on video. Peer feedback identified navigation confusion and a need for stronger visual hierarchy.

R4

Evaluative testing + AI Trust Score

Ran prototype testing with Dr. Adidharma using the AI Trust Score framework. Key finding: the right rail lacked affordances for understanding clip states. The MVP use case crystallized around academic and teaching scenarios, with a clear upgrade path to direct patient care.

R5

High-fidelity — Chronicle

Converged on the final design system: dark navigation, color-coded surgical phase timeline, Proposed/Confirmed/All clip tabs, AI responsibility modal at publish. Every element traces back to a research finding.

Chronicle — four key surfaces.

Core interaction

Clip Manager: the surgical command center

The primary workspace pairs the AI-generated highlight reel with a structured clip list. Surgeons see the full video timeline color-coded by surgical phase — Approach, Removal, Reconstruction — so spatial memory is preserved. The split-panel layout was a deliberate decision driven by sponsor feedback: text and video side-by-side reduces cognitive overhead.

  • Phase-color-coded timeline strips for immediate orientation
  • Smart video search by keyword, tag, or clip label
  • Draft / In Progress status always visible in the header
  • One-click publish when ready
[Clip Manager screen: Chronicle nav (Home, Library, Clip Manager), patient and procedure header, Draft in Progress status, smart video search, phase-coded timeline (00:02–15:40), bookmarked clips under Proposed / Confirmed / All tabs, Publish button]
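To make the interaction model concrete, here is a minimal TypeScript sketch of how the Clip Manager's data could be structured. All names here (Clip, SurgicalPhase, filterByTab, searchClips) are illustrative assumptions, not Chronicle's actual implementation:

```typescript
// Illustrative data model for the Clip Manager (hypothetical names).

type SurgicalPhase = "Approach" | "Removal" | "Reconstruction";
type ClipState = "proposed" | "confirmed";

interface Clip {
  id: string;
  label: string;        // e.g. "Initial Incision"
  phase: SurgicalPhase; // drives the color-coded timeline strip
  startSec: number;
  endSec: number;
  state: ClipState;     // Proposed (AI-suggested) vs. Confirmed (surgeon-approved)
  tags: string[];
}

// The Proposed / Confirmed / All tabs are simple filters over one clip list.
function filterByTab(clips: Clip[], tab: ClipState | "all"): Clip[] {
  return tab === "all" ? clips : clips.filter((c) => c.state === tab);
}

// Smart video search matches a keyword against clip labels and tags.
function searchClips(clips: Clip[], query: string): Clip[] {
  const q = query.toLowerCase();
  return clips.filter(
    (c) =>
      c.label.toLowerCase().includes(q) ||
      c.tags.some((t) => t.toLowerCase().includes(q))
  );
}
```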
AI transparency

Tags: structured, but flexible

Tags solve two problems at once: they make clips searchable across procedures, and they make AI involvement legible. I designed a two-icon system — a sparkle (✦) for AI-generated tags, a person icon for surgeon-added ones. Surgeons can add, remove, and modify any tag. The AI is an assistant, not an authority.

  • AI-suggested tags distinguished visually from manual ones
  • Tags are searchable across the full surgical library
  • Taxonomy is structured but extensible — surgeons add their own
  • Tags appear in the publish confirmation for final review
[Tag panel: AI-suggested tags marked ✦ (e.g. Obstruction, Robotic Assistance, Suction Used), surgeon-added tags marked ⊙ (e.g. Resident Demo, Key Teaching Moment), + Add Tag control, AI-generated vs. surgeon-added legend]
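A sketch of how tag provenance could be modeled so AI involvement stays legible. The names (Tag, TagSource, editTag) are hypothetical, shown only to illustrate the principle:

```typescript
// Hypothetical tag model: provenance is a first-class field, never inferred.

type TagSource = "ai" | "surgeon";

interface Tag {
  label: string;     // e.g. "Obstruction", "Key Teaching Moment"
  source: TagSource; // renders ✦ for AI tags, a person icon for surgeon tags
}

// Surgeons can modify any tag; editing reassigns ownership to the surgeon,
// so a changed AI suggestion is no longer presented as the machine's output.
function editTag(tag: Tag, newLabel: string): Tag {
  return { label: newLabel, source: "surgeon" };
}
```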
Responsible AI

Publish modal: the surgeon stays in control

The most critical UX decision in the project: before any AI-generated content enters the patient record, the surgeon must actively confirm they have reviewed it. This wasn't an afterthought — it was a design principle from week one. The modal shows exactly what will be shared and requires an explicit acknowledgment.

  • Full preview of highlight reel and all bookmarked clips
  • Explicit acknowledgment of AI-generated nature
  • Back button to make changes before final commit
  • Prevents passive over-reliance on AI output
[Publish modal: "You are using AI Generated Content" heading, notice that the AI generated titles and tags and can make mistakes, acknowledgment copy, preview of Highlight Reel (15:42) and Bookmarked Clips (3 clips, 8 tags), Back and Continue buttons]
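The gating logic behind the modal could be as small as the sketch below; PublishRequest and canPublish are assumed names, included here to make the "no passive defaults" rule explicit:

```typescript
// Hypothetical publish gate: AI content cannot reach the patient record
// without an active, surgeon-initiated acknowledgment.

interface PublishRequest {
  highlightReelId: string;
  clipIds: string[];
  containsAiContent: boolean;
  surgeonAcknowledgedAi: boolean; // set only by the modal's "Continue" action
}

function canPublish(req: PublishRequest): boolean {
  // No passive defaults: unreviewed AI content blocks publishing.
  return !req.containsAiContent || req.surgeonAcknowledgedAi;
}
```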

We treated ethics as a design constraint,
not a disclaimer.

Working in a HIPAA-adjacent context with AI-generated medical content forced the team to make ethics explicit in every screen — not just in a legal appendix. I drove the integration of responsible AI principles directly into the interaction model.

AI transparency

Every AI touch is labeled

AI-suggested clips and tags carry a distinct visual marker (✦). Surgeon-generated content uses a person icon. No ambiguity about what the machine produced.

Surgeon autonomy

The surgeon approves everything

The publish modal requires an active confirmation before any AI-generated content reaches the patient record. No passive defaults. The surgeon is the author.

Error awareness

Omission is a risk we named

We explicitly designed for the risk that AI omits something important — and resisted the temptation to let surgeons over-rely on the highlight reel as a substitute for their own memory.

Privacy

HIPAA scoped to sponsor

We explicitly established that compliance with HIPAA and related patient-data and ePHI regulations was the sponsor's legal responsibility, and designed around that boundary rather than pretending it didn't exist.

From proof-of-concept to
investor-ready design.

Chronicle went from a raw algorithm with no interface to a high-fidelity prototype that could be shown to investors and used to pitch the nascent startup. The sponsor, Dr. Adidharma, used the prototype directly in subsequent presentations and conversations with UW stakeholders.

44
Competing projects
1st
Brave Award winner
5
Design iterations
3
Surgeons interviewed

What we left on the table (intentionally).

Upload, storage, and sharing flows, plus the direct patient-care use case, were consciously deferred to the roadmap so the MVP could go deep on the review-to-publish core.

What leading this taught me
about AI product design.

Capability ≠ interface

The algorithm was technically impressive but completely invisible to users without a design layer. The insight that drove everything: a tool isn't a product until a human can trust it enough to use it.

Ethics goes in the interaction, not the appendix

The responsible AI principles we debated in week two ended up as actual UI elements — the publish modal, the tag icon system, the AI disclaimers on video. Good ethics is good design.

User research in high-stakes domains is irreplaceable

No amount of secondary research would have surfaced "I don't have time to be Ernest Hemingway here." Being in the hospital, talking to surgeons under real pressure — that's where the real design brief lived.

The sponsor relationship was itself a design problem

Dr. Adidharma was simultaneously our user, our subject matter expert, and our critic. Weekly syncs kept us calibrated. Her feedback made the designs sharper at every iteration — when we leaned into that dynamic instead of presenting to her, outcomes improved.

MVP scope is a UX decision

Choosing to focus on Review → Select → Publish and leave Upload, Storage, and Sharing to future sprints wasn't a compromise — it was what made the prototype credible. Scope selection is itself a design skill.

AI used to create, not just describe

We used AI to generate storyboard visuals, refine language, and accelerate iteration — while being explicit about where human review was non-negotiable. A useful model for how to ship AI-assisted work responsibly.

"Having objective information is much more helpful than subjective information."

— Experienced surgeon, in-depth interview. The sentence that grounded the entire product.