Tuesday 12pm, 4 October 2016
Authoring Illustrations of Human Movements by Iterative Physical Demonstration | VidCrit: Video-Based Asynchronous Video Review
Peggy Chi | Amy Pavel
PhD Students - UC Berkeley
Authoring Illustrations of Human Movements by Iterative Physical Demonstration
Illustrations of human movements are used to communicate ideas and convey instructions in many domains, but creating them is time-consuming and requires skill. We introduce DemoDraw, a multi-modal approach to generate these illustrations as the user physically demonstrates the movements. In a Demonstration Interface, DemoDraw segments speech and 3D joint motion into a sequence of motion segments, each characterized by a key pose and salient joint trajectories. Based on this sequence, a series of illustrations is automatically generated using a stylistically rendered 3D avatar annotated with arrows to convey movements. During demonstration, the user can navigate using speech and amend or re-perform motions if needed. Once a suitable sequence of steps has been created, a Refinement Interface enables fine control of visualization parameters. In a three-part evaluation, we validate the effectiveness of the generated illustrations and the usability of DemoDraw. Our results show 4- to 7-step illustrations can be created in 5 to 10 minutes on average.
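To give a flavor of the segmentation step, the sketch below splits a stream of 3D joint positions into motion segments by detecting low-motion frames (candidate key poses). This is a simplified illustration, not DemoDraw's actual method (which also uses the speaker's narration); the energy threshold and toy data are assumptions.

```python
# Hypothetical sketch: segment joint-motion frames at low-energy pauses.
# Each frame is a list of (x, y, z) joint positions; the threshold value
# and the toy data below are illustrative assumptions, not DemoDraw's.

def motion_energy(frames):
    """Per-frame sum of squared joint displacements between consecutive frames."""
    energies = [0.0]  # the first frame has no predecessor
    for prev, cur in zip(frames, frames[1:]):
        energies.append(sum((a - b) ** 2
                            for ja, jb in zip(prev, cur)
                            for a, b in zip(ja, jb)))
    return energies

def segment_motion(frames, threshold=0.01):
    """Group frame indices into motion segments separated by pause frames."""
    segments, current = [], []
    for i, energy in enumerate(motion_energy(frames)):
        if energy < threshold:      # near-still frame: a candidate key pose
            if current:
                segments.append(current)
                current = []
        else:
            current.append(i)
    if current:
        segments.append(current)
    return segments

# Toy example: two joints, a burst of motion, a pause, another burst.
still = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
moved = [(0.5, 0.0, 0.0), (1.5, 1.0, 1.0)]
frames = [still, moved, still, still, still, moved, still]
print(segment_motion(frames))  # two motion segments separated by the pause
```

Each returned segment could then be summarized by its ending key pose and the trajectories of the joints that moved most, mirroring the abstract's description.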
VidCrit: Video-Based Asynchronous Video Review
Video production is a collaborative process in which stakeholders regularly review drafts of the edited video to indicate problems and offer suggestions for improvement. Although practitioners prefer in-person feedback, most reviews are conducted asynchronously via email due to scheduling and location constraints. The use of this impoverished medium is challenging for both providers and consumers of feedback. We introduce VidCrit, a system for providing asynchronous feedback on drafts of edited video that incorporates favorable qualities of an in-person review. This system consists of two separate interfaces: (1) A feedback recording interface captures reviewers' spoken comments, mouse interactions, hand gestures and other physical reactions. (2) A feedback viewing interface transcribes and segments the recorded review into topical comments so that the video author can browse the review by either text or timelines. Our system features novel methods to automatically segment a long review session into topical text comments, and to label such comments with additional contextual information. We interviewed practitioners to inform a set of design guidelines for giving and receiving feedback, and based our system's design on these guidelines. Video reviewers using our system preferred our feedback recording interface over email for providing feedback due to the reduction in time and effort. In a fixed amount of time, reviewers provided 10.9 (σ=5.09) more local comments than when using text. All video authors rated our feedback viewing interface as preferable to receiving feedback via email.
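As a rough illustration of how a transcribed review might be segmented into topical comments, the sketch below merges time-stamped utterances separated by short pauses and starts a new comment after a long pause. VidCrit's actual segmentation and labeling methods are more sophisticated; the pause threshold and sample transcript are assumptions for illustration only.

```python
# Hypothetical sketch: group a time-stamped review transcript into comments
# by splitting at long pauses. The threshold and sample data are illustrative
# assumptions, not VidCrit's actual algorithm.

def group_comments(utterances, pause_threshold=2.0):
    """utterances: list of (start_sec, end_sec, text) tuples in time order.
    Returns a list of merged comments, each a dict with start, end, text."""
    comments = []
    for start, end, text in utterances:
        if comments and start - comments[-1]["end"] <= pause_threshold:
            # Short gap: continue the current topical comment.
            comments[-1]["text"] += " " + text
            comments[-1]["end"] = end
        else:
            # Long gap (or first utterance): start a new comment.
            comments.append({"start": start, "end": end, "text": text})
    return comments

transcript = [
    (0.0, 1.5, "The opening shot runs a bit long."),
    (2.0, 3.0, "Maybe trim the first two seconds."),
    (8.0, 9.5, "The music here feels too loud."),
]
for comment in group_comments(transcript):
    print(comment)  # first two utterances merge; the third stands alone
```

Each comment's start time could then serve as the timeline anchor the author clicks to jump to the relevant moment in the review or in the draft video.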
Pei-Yu (Peggy) Chi is a research scientist at Google. She received her PhD in Computer Science from UC Berkeley, where she worked with Bjoern Hartmann. Peggy develops interactive systems that support users’ creativity and learning activities. Her research has received a Best Paper Award at ACM CHI, a Google PhD Fellowship in Human-Computer Interaction, a Berkeley Fellowship for Graduate Study, and an MIT Media Lab Fellowship with ITRI.
Amy Pavel is a fourth-year PhD student advised by professors Bjoern Hartmann (Berkeley) and Maneesh Agrawala (Stanford) and supported by an NDSEG award. Her research focuses on building tools for searching and browsing within long videos.