Keynotes

25 Years of Chasing the Light

Paul Debevec
Chief Research Officer, Netflix’s Eyeline Studios

Abstract:
Compositing actors into virtual environments has been a visual effects challenge since the earliest days of cinema. Achieving believable integration requires both high-quality matting, so that the edges between subject and background are clean, and accurate lighting, so that the subject appears plausibly illuminated by the surrounding environment. First introduced in 2000, the light stage was developed to solve this lighting problem: it enables actors to be illuminated by images of the virtual world around them, an approach analogous to image-based lighting for CGI elements. Over the past 25 years, light stages have undergone continuous change: from a single light bulb to an array of strobes to a sphere of LEDs to a wall of panels, and from head-sized to body-sized to scene-scale systems. Along the way, the technology has enabled advances in 3D scanning, relighting in post, and volumetric capture. This talk will trace the evolution of the light stage through its many reimaginings, highlight key production applications, and explore where this technology might lead next.

Biography:
Paul Debevec received his degrees in Computer Engineering and Mathematics from the University of Michigan in 1992 and earned a Ph.D. in Computer Science from the University of California, Berkeley in 1996. He is currently the Chief Research Officer at Netflix’s Eyeline Studios, an Adjunct Research Professor at the University of Southern California, and a Governor of the Visual Effects Branch of the Academy of Motion Picture Arts and Sciences. His contributions to visual effects and virtual production have been featured in films such as The Matrix, Avatar, and Gravity. His work has been recognized with two Academy Awards, the SMPTE Progress Medal, and a Lifetime Achievement Emmy Award. More information: www.debevec.org

Sketching the Future: Democratising Creative Control via Sketching

Prof. Yi-Zhe Song
Co-Director, Surrey's People-Centred AI Institute

Abstract:
Human-AI creative collaboration has been fundamentally transformed by our ability to communicate visual intent. This keynote traces the evolution of sketch-based capabilities as a democratising force in AI-powered creative systems, from early recognition challenges to today's generative frontiers.
Beginning with the first deep learning system to surpass human performance in sketch recognition, I will discuss how understanding the unique properties of sketches established the theoretical frameworks that now underpin modern visual control mechanisms. In particular, I will examine how the inherent abstraction in sketching correlates with semantic understanding, making it an ideal modality for human-AI communication across expertise levels.
The talk will highlight key milestones in this evolution: from developing fine-grained visual understanding through sketch-based image retrieval systems to creating multimodal frameworks that combine sketches with text, photos, and 3D representations, where complementary modalities achieve precision exceeding any single modality alone. I will conclude by showcasing how my lab intends to continue this journey of democratising AI through recent work on DemoFusion and NitroFusion, which directly addresses not only control but also accessibility.
Looking forward, I will outline a vision for truly accessible creative AI systems where sketching serves as an intuitive control interface, enabling diverse users across technical backgrounds to harness powerful AI capabilities.

Biography:
Yi-Zhe Song is Professor of Computer Vision and Machine Learning at the Centre for Vision, Speech and Signal Processing (CVSSP) and co-director of the Surrey People-Centred AI Institute. As founder and leader of the SketchX Lab (est. 2012), he has driven groundbreaking research in sketch understanding, including the first deep neural network to surpass human performance in sketch recognition (BMVC 2015 Best Paper Award). His work spans fine-grained sketch-based image retrieval, domain generalisation, and bridging sketch with mainstream computer vision, with recent contributions in sketch-based object recognition earning a Best Paper nomination at CVPR 2023. He serves as Associate Editor for IEEE TPAMI and IJCV and has been Area Chair for ECCV, CVPR, and ICCV. Prof. Song established and directs Surrey's MSc in AI programme, following a similar initiative he created at Queen Mary University of London.