Tuesday 12pm, 17 October 2017
Shot Orientation Controls for 360 Video • Bifröst: Visualizing Embedded Systems Behavior
Amy Pavel and Will McGrath
PhD Candidates - UC Berkeley and Stanford University
Amy Pavel on "Shot Orientation Controls for Interactive Cinematography with 360 Video"
Virtual reality filmmakers creating 360° video currently rely on cinematography techniques that were developed for traditional narrow field of view film. They typically edit together a sequence of shots so that they appear at a fixed orientation irrespective of the viewer’s field of view. But because viewers set their own camera orientation, they may miss important story content while looking in the wrong direction. We present new interactive shot orientation techniques that are designed to help viewers see all of the important content in 360° video stories. Our viewpoint-oriented technique reorients the shot at each cut so that the most important content lies in the viewer’s current field of view. Our active reorientation technique lets the viewer press a button to immediately reorient the shot so that important content lies in their field of view. We present a 360° video player which implements these techniques and conduct a user study which finds that users spend 5.2-9.5% more time viewing (manually labeled) important points of the scene with our techniques compared to traditional fixed-orientation cuts. In practice, 360° video creators may label important content, but we also provide an automatic method for determining important content in existing 360° videos.
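The core of the viewpoint-oriented technique reduces to an angular offset applied at each cut: rotate the incoming shot so the labeled important content lands at the viewer's current heading. A minimal sketch of that yaw calculation (the function names and degree-based convention here are illustrative assumptions, not the paper's actual implementation):

```python
import math

def wrap_angle(a):
    """Wrap an angle in degrees to the range [-180, 180)."""
    return (a + 180.0) % 360.0 - 180.0

def viewpoint_oriented_offset(important_yaw, viewer_yaw):
    """At a cut, return the yaw offset (degrees) to rotate the new shot
    so the important content lands at the viewer's current heading."""
    return wrap_angle(viewer_yaw - important_yaw)

# Example: important content sits at 90° in the new shot while the
# viewer happens to be looking at -30°.
offset = viewpoint_oriented_offset(90.0, -30.0)

# Applying the offset places the important content at the viewer's heading.
assert math.isclose(wrap_angle(90.0 + offset), -30.0)
```

The active-reorientation variant described above would apply the same offset on a button press rather than only at cuts.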
About Amy: Amy is a 5th year PhD student in Computer Science. Her dissertation research focuses on developing new text-based interfaces for navigating videos. Her projects include interfaces for exploring educational lecture videos, films, and video critiques. More recently, she has worked to understand how people view and interact with 360° videos. She is advised by professors Björn Hartmann at UC Berkeley and Maneesh Agrawala at Stanford, and her research is supported by an NDSEG fellowship.
Will McGrath on "Bifröst: Visualizing and Checking Behavior of Embedded Systems across Hardware and Software"
The Maker movement has encouraged more people to start working with electronics and embedded processors. A key challenge in developing and debugging custom embedded systems is understanding their behavior, particularly at the boundary between hardware and software. Existing tools such as step debuggers and logic analyzers only focus on software or hardware, respectively. This paper presents a new development environment designed to illuminate the boundary between embedded code and circuits. Bifröst automatically instruments and captures the progress of the user's code, variable values, and the electrical and bus activity occurring at the interface between the processor and the circuit it operates in. This data is displayed in a linked visualization that allows navigation through time and program execution, enabling comparisons between variables in code and signals in circuits. Automatic checks can detect low-level hardware configuration and protocol issues, while user-authored checks can test particular application semantics. In an exploratory study with ten participants, we investigated how Bifröst influences debugging workflows.
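The user-authored checks described above are essentially predicates over the captured trace of code and circuit activity. As a hypothetical illustration only (the event types and function names below are invented for this sketch and are not Bifröst's actual API), a check that flags writes to a pin before it has been configured as an output might look like:

```python
# Hypothetical trace-checking sketch; TraceEvent and
# check_write_after_config are illustrative, not Bifröst's real API.
from dataclasses import dataclass

@dataclass
class TraceEvent:
    time_us: int   # timestamp of the captured event, in microseconds
    kind: str      # e.g. "pin_config" or "pin_write"
    pin: int       # processor pin number
    value: str     # e.g. "OUTPUT" or "HIGH"

def check_write_after_config(trace, pin):
    """Return every write to `pin` that occurs before it is configured
    as an output anywhere in the captured trace."""
    configured = False
    violations = []
    for ev in sorted(trace, key=lambda e: e.time_us):
        if ev.kind == "pin_config" and ev.pin == pin and ev.value == "OUTPUT":
            configured = True
        elif ev.kind == "pin_write" and ev.pin == pin and not configured:
            violations.append(ev)
    return violations

trace = [
    TraceEvent(10, "pin_write", 13, "HIGH"),    # write before configuration
    TraceEvent(20, "pin_config", 13, "OUTPUT"),
    TraceEvent(30, "pin_write", 13, "LOW"),     # fine: pin now configured
]
assert len(check_write_after_config(trace, 13)) == 1
```

The paper's automatic checks target low-level configuration and protocol issues of exactly this flavor, while user-authored checks test application-specific semantics.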
About Will: Will McGrath is a PhD candidate studying Human Computer Interaction at Stanford University and is advised by Prof. Bjoern Hartmann of UC Berkeley. Will graduated from Purdue University with a degree in Computer Engineering before starting his PhD. His research interests include tools to help Makers build and program electronics, IoT / ubiquitous computing, and interaction techniques.