Final Events Schedule Information
Transparency and Openness in HCI research: What? Why? How?
Organizer(s)
- Lonni Besançon, lonni.besancon@gmail.com, Media and Information Technology, Linköping University.
Abstract
The scientific method and toolbox are constantly evolving, providing us with guidelines, frameworks, and tools to make our results more robust, reliable, and reusable. The replication crisis in psychology, a methodological crisis highlighting the difficulty of reproducing the results of past research, has led many disciplines to adopt new methods and transparency standards for their results and processes. These new methods can and should be adopted in HCI research to foster better scrutiny, reproducibility, and reuse of our results and methods. In this tutorial, I will introduce the “What? Why? How?” of transparency and the main underlying concepts of Open Science. Participants can expect to be introduced to new concepts and methods of Open Science that can easily be applied to HCI research, such as pre-registration, Open Data, Open Source, Registered Reports, and Open Review. The suitability of all these concepts with respect to the multidisciplinary nature of HCI research will be explained based on recently published materials. Participants will also be invited to try to draft the pre-registration of their next experiment through a more hands-on and participatory component. Finally, participants will be given an overview of other concepts and platforms that could further increase the robustness of HCI research, such as post-publication peer review. This final part will also interactively invite participants to provide post-publication assessments of real published papers.
Schedule
Tuesday, May 30th, 13:00 to 16:30. Break 14:30-15:00.
Extending the Omniverse
Organizer(s)
- Mathew Schwartz, cadop@njit.edu, New Jersey Institute of Technology
- Brandon Haworth, bhaworth@uvic.ca, University of Victoria
Abstract
An introduction to creating NVIDIA Omniverse Extensions by developing a crowd simulator. We will use Python and the NVIDIA Omniverse Create tool. A computer and registration are required. Omniverse requires an NVIDIA RTX graphics card; limited cloud access may be available.
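For a sense of what extension development involves, here is a minimal sketch of the entry point that Omniverse Kit extensions implement, based on the standard omni.ext template; the crowd-simulation details are hypothetical placeholders, not the workshop's actual code.

    # Minimal Omniverse Kit extension skeleton (sketch; the crowd-sim
    # logic itself is a hypothetical placeholder).
    import omni.ext

    class CrowdSimExtension(omni.ext.IExt):
        """Lifecycle hooks that Kit calls when the extension is (un)loaded."""

        def on_startup(self, ext_id):
            # Typically: build UI, subscribe to update events, and spawn
            # the agents the crowd simulator will step each frame.
            print(f"[{ext_id}] crowd-sim extension starting up")

        def on_shutdown(self):
            # Release subscriptions and clean up any created prims.
            print("crowd-sim extension shutting down")

An extension like this is declared in an extension.toml manifest and enabled from the Extensions window inside Omniverse Create.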
Link
https://sites.google.com/view/extending-the-omniverse/
Schedule
Tuesday, May 30th, 13:00 to 16:30. Break 14:30-15:00.
Early Career Researcher Meet & Greet
Organizer(s)
- Sowmya Somanath, sowmyasomanath@uvic.ca, University of Victoria
- Fateme Rajabiyazdi, fateme.rajabiyazdi@carleton.ca, Carleton University
Abstract
This will be a mentoring and networking event that will bring together early career researchers (ECRs) and established researchers (ERs) to discuss questions related to career journeys. This will include discussions of topics such as tenure, identifying and scoping research visions for NSERC Discovery grants, work-life balance, sabbaticals, and more. This event is open to all faculty, Postdoctoral Fellows, and senior PhD students on the job market.
Schedule
Tuesday, May 30th, 13:00 to 16:00. Break 14:30-15:00.
Link
https://sites.google.com/view/gi23ecrmeetgreet/home
HCI Curriculum Workshop: Canadian Edition
Organizer(s)
- Regan Mandryk, regan.mandryk@gmail.com, University of Victoria
- Olivier St-Cyr, Faculty of Information, University of Toronto
Abstract
Do you teach HCI and want to spend less time preparing while still delivering engaging content? Would you like access to a network of experts to help you design and deliver lectures and assignments? Would an exam question bank, assignment bank, or lecture bank be helpful for you? Have you created course content that other HCI instructors would benefit from?
At this HCI curriculum workshop, we will develop a repository for collectively sharing modules from HCI courses that can benefit instructors regardless of the level (e.g., 3rd year, 4th year, graduate) or department (e.g., CS, iSchool, Engineering) of their HCI course. Given that we have all been working on creating flipped classes and video lectures over the last years, and given how HCI instruction has evolved since the last offering of the Canadian HCI Curriculum Workshop, it is time to refresh our collective shared resources and work together to efficiently and effectively deliver HCI course content developed and shared by domain experts.
Schedule
Thursday, June 1st, 13:00 to 16:30. Break 14:30-15:00.
Cloud Rendering, Simulation and Animation
Organizer(s)
- Paul Kry, McGill University, Canada, kry@cs.mcgill.ca
- Steven Yuan, Huawei Technologies Canada Co. Ltd., steven.yuan1@huawei.com
Abstract
Cluster- and cloud-based systems promise to leverage pooled computing resources for real-time rendering and animation, and to cost-effectively share results with massive numbers of users in an instant. Our workshop aims to bring together researchers and practitioners from academia and industry to discuss the challenges of such systems as well as possible directions for tackling them.
Schedule
Thursday, June 1st, 13:00 to 17:00.
13:00-13:30: Li Li, Huawei Canada;
Title: Cloud-based Real-time Rendering and Simulation: Opportunities and Technical Challenges
Abstract: Real-time media applications require increasingly high-quality rendering and simulation, imposing greater demands on algorithm selection and optimization for application developers. The cost pressure on users to purchase higher-configuration hardware also continues to increase. Cloud-based real-time rendering and simulation offers a new approach: cloud computing can provide more flexible and elastic hardware, enough computing and storage capacity for massive data, and the ability to aggregate a large number of user computations. This talk will describe the opportunities and technical challenges of cloud-based real-time rendering and simulation, including on-demand resource allocation, distributed computation, multi-user aggregation, and data-driven computation. We hope that more academic and industrial resources can join together to solve these problems.
Bio: Li Li is the Leader of Cloud Rendering Technology Research at Huawei Canada. With over 20 years of experience in software platform and cloud service design at Huawei, Li has successfully led the technical design of multiple commercial real-time middleware platforms, cloud service software, and public cloud architecture. As a research leader in cloud rendering and digital human technology, Li is committed to advancing the development of these technologies and their application in real-world scenarios.
13:30-14:00: Jozef Hladký, Huawei Germany;
Title: Cloud Rendering Pipelines: Challenges and Research Opportunities
Abstract: Streaming rendering pipelines promise to be the ultimate solution for supporting any level of 3D content on any client device, worldwide. By leveraging the processing power of the cloud and offering vast parametrization at its core, the envisioned pipeline delivers highly responsive, high-fidelity visuals even to thin mobile client devices, regardless of their computing capabilities and network connection speed. To make this vision a reality, novel solutions are needed and many challenges have to be conquered: hiding network latency, achieving efficient server work distribution, lifting mobile hardware constraints, lowering network bandwidth requirements through efficient compression, and handling disocclusions, view-dependent shading effects, and framerate upsampling, just to name a few. In my talk, I will introduce variations of cloud rendering pipelines based on streamed content, analyze a few state-of-the-art methods for cloud rendering, and point out various exciting directions where novel research can be executed: caching of visual effects in surface-space and world-space for multi-viewer reuse, joint geometry+appearance scene representations, texture atlas handling, and multi-GPU computing.
Bio: Jozef Hladky is a Senior Researcher and Engineer at Huawei, Germany. He obtained his doctoral degree at the Max Planck Institute for Informatics in Saarbrücken, Germany. His research focuses on hiding network latency via novel view synthesis in the context of decoupled streaming rendering pipelines with thin clients. During his PhD studies, Jozef did two research internships: at NVIDIA he conducted research on joint geometry and appearance scene representations suitable for streaming to thin clients, and at Meta Reality Labs he worked on optimizations for decoupled streaming pipeline prototypes targeting virtual reality head-mounted displays. His work on visibility computations, texture atlas synthesis, and novel 3D scene representations has been published in Transactions on Graphics and Computer Graphics Forum and was presented at the SIGGRAPH Asia, Eurographics, and EGSR conferences.
14:00-14:30: Paul Lalonde, NVIDIA;
Title: GeForce NOW – Streaming Graphics for the Masses
Abstract: GeForce NOW is NVIDIA’s cloud-based game streaming service, delivering real-time PC gameplay from the cloud to your laptop, desktop, Mac, Chromebook, SHIELD TV, select Samsung and LG TVs, iPhone, iPad, and Android devices. In this talk, I will present an overview of the service and of the challenges that we have overcome to deliver high-quality gaming from the cloud. I will discuss how the networking latency environment has changed, as well as how a GeForce NOW game instance differs from “just a PC in the cloud”. I will close by motivating much higher-frequency GPU-to-GPU and system-to-system usages than are typically considered feasible in real-time applications, showing how modern network interfaces can enable these transactions at 1-1.5 microsecond latencies.
Bio: Paul Lalonde is a Distinguished Engineer at NVIDIA, where he leads architecture for GeForce NOW. Prior to NVIDIA, Paul worked at Google, Microsoft, Intel, and Electronic Arts, always on real-time graphics and systems issues ranging from game engines to GPU design to AR/VR, and most recently game streaming. Paul holds a Ph.D. from the University of British Columbia.
Refreshments will be available throughout the workshop, to be taken whenever convenient.
14:30-15:00: Feng Xie, Activision Blizzard;
Title: Real-time Film Quality Rendering using GPU accelerated Cloud Computing
Abstract: High-fidelity photorealistic rendering effects are commonplace in film rendering. While we have seen many advances in this domain in real-time rendering for games and other 3D real-time applications, most real-time rendering still uses rasterization-based approaches to deliver rich visual experiences. Recent advances in computing hardware have seen increased application of real-time ray tracing in games and other real-time graphics applications, including hybrid rasterization rendering pipelines. In this talk, we present the design choices behind the architecture and implementation of the first production-quality real-time cluster-based path tracing renderer that supports dynamic digital human characters with curve-based hair and path-traced anisotropic sub-surface scattering for skin. We build our cluster path tracing system on the open-source Blender and its GPU-accelerated production-quality renderer Cycles. Our system’s rendering performance and quality scale linearly with the number of RTX GPUs and cluster nodes used. It can generate and deliver path-traced images with global illumination effects to remote lightweight client systems at 15-30 frames per second for a variety of Blender scenes, including virtual objects and animated digital human characters. In addition to the design choices for path distribution and efficient support for dynamic geometry and BVH updates on many GPUs, we will also discuss pragmatic considerations for designing and implementing remote cloud rendering systems for virtual reality and other real-time graphics applications.
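As a concrete point of reference, the sketch below shows roughly how a single Blender instance is switched to GPU-accelerated Cycles through the bpy API; it only illustrates the Cycles/GPU building block the talk mentions, not the cluster system itself.

    # Sketch: enable GPU-accelerated Cycles in one Blender instance
    # (run inside Blender; illustrative only, not the cluster renderer).
    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    scene.cycles.device = 'GPU'

    prefs = bpy.context.preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'OPTIX'  # RTX GPUs; 'CUDA' also works
    prefs.get_devices()                  # refresh the detected-device list
    for device in prefs.devices:
        device.use = True                # enable every detected GPU

    scene.render.filepath = '/tmp/frame.png'
    bpy.ops.render.render(write_still=True)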
Bio: Feng Xie is a technology fellow at Activision Blizzard. This talk is based on Feng’s work on cloud-based photorealistic rendering of digital humans for VR and AR at Meta Reality Labs. Before Reality Labs, Feng was a senior principal engineer at PDI/DreamWorks, where she worked on production rendering and shading systems. Feng has credits on over 22 DreamWorks animation films and has presented her work at numerous SIGGRAPH and Eurographics conferences. Feng completed her PhD on physically based and neural multiple-scattering BRDFs with Prof. Pat Hanrahan at Stanford University in 2022.
15:00-15:30: Basile Fraboni, Animal Logic;
Title: Multiview Rendering
Abstract: Rendering photo-realistic image sequences using Monte Carlo path tracing often requires sampling a large number of paths to get converged results. In the context of rendering multiple views or animated sequences, such sampling can be highly redundant across views. In this talk, we give a high-level overview of the existing methods for sharing sampled paths between spatially and/or temporally proximate cameras. We then present some recent results on handling heterogeneous media and improving pixel estimators in the context of multi-view path reuse.
Bio: Basile Fraboni is an R&D software engineer at Animal Logic Vancouver. He was formerly a PhD student at LIRIS in the Origami group and graduated in late 2022. He was also an assistant lecturer at the University Claude Bernard Lyon 1. In the last few years, his research has focused on realistic rendering, and in particular on multi-view rendering techniques.
15:30-16:00: Andrea Tagliasacchi, SFU;
Title: Towards mass-adoption of Neural Radiance Fields
Abstract: Neural 3D scene representations have had a significant impact on computer vision, seemingly freeing deep learning from the shackles of large and curated 3D datasets. However, many of these techniques still rest on strong assumptions that make them challenging for the average user to build and consume. During this talk, I will question some of these assumptions. Specifically, we will remove the requirement for multiple calibrated images of the same scene (LoLNeRF), eliminate the necessity for the scene to be entirely static during capture (RobustNeRF), and enable the inspection of these models using consumer-grade mobile devices rather than relying on high-end GPUs (MobileNeRF).
Bio: Andrea Tagliasacchi is an associate professor at Simon Fraser University (Vancouver, Canada), where he holds the appointment of “visual computing research chair” within the School of Computing Science. He is also a part-time (20%) staff research scientist at Google Brain (Toronto), as well as an associate professor (status only) in the computer science department at the University of Toronto. Before joining SFU, he spent four wonderful years as a full-time researcher at Google (mentored by Paul Lalonde, Geoffrey Hinton, and David Fleet). Before joining Google, he was an assistant professor at the University of Victoria (2015-2017), where he held the Industrial Research Chair in 3D Sensing (jointly sponsored by Google and Intel). His alma maters include EPFL (postdoc), SFU (PhD, NSERC Alexander Graham Bell fellow), and Politecnico di Milano (MSc, gold medalist). His research focuses on 3D visual perception, which lies at the intersection of computer vision, computer graphics, and machine learning.
16:00-16:30: Yin Yang, University of Utah;
Title: Linearly and Nonlinearly Reduced Models for Fast Solid Simulation
Abstract: Using the digital computer to simulate the dynamic behavior of elastic objects is a highly desired feature in many areas of scientific and engineering research: in computer animation, it provides realistic effects for soft characters; in surgical simulation, it delivers vivid visual experiences to the trainee; in digital fabrication, it couples geometry design and mechanical analysis. While the basic model has been well established for a while, robustly simulating nonlinear and high-resolution deformable objects is still a challenging problem, especially in a collision-rich environment. In this talk, I will share some of our recent efforts on this classic graphics challenge and how we manage to improve the quality and the efficiency of the simulation simultaneously. First, I will introduce a learning-based model reduction that is able to capture highly nonlinear material behaviors compactly using a deep network. The network is algorithmically integrated with the simulation pipeline so that the simulation is free of any man-made artifacts. Next, we show a novel numerical solution based on the interior point method, embedded in a reduced space, to process collisions and contacts among objects. Lastly, I will show how reduced simulation leads to a new, high-fidelity rigid body simulation framework. All collisions and contacts are resolved (at a given accuracy) regardless of the time step, geometry, and velocity.
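For readers unfamiliar with the “linearly reduced” baseline the title refers to, the sketch below shows classic linear modal reduction, where the full coordinates u (n degrees of freedom) are approximated as u ≈ U q with r ≪ n; this is textbook material, not the speaker's method.

    # Classic linear modal reduction (illustrative sketch).
    import numpy as np
    from scipy.linalg import eigh

    def reduce_model(K, M, r):
        """Return the r lowest vibration modes of (K, M) as a basis U."""
        # Generalized eigenproblem K phi = lambda M phi; eigh returns
        # eigenvalues in ascending order, so the first r columns are
        # the low-frequency modes.
        _, eigvecs = eigh(K, M)
        U = eigvecs[:, :r]
        K_r = U.T @ K @ U   # r x r reduced stiffness
        return U, K_r

    # Tiny example: random SPD stiffness, lumped (diagonal) mass.
    n, r = 50, 6
    A = np.random.default_rng(0).standard_normal((n, n))
    K = A @ A.T + n * np.eye(n)
    M = np.diag(np.full(n, 2.0))
    U, K_r = reduce_model(K, M, r)
    print(K_r.shape)  # (6, 6)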
Bio: Dr. Yin Yang is currently an Associate Professor with the Kahlert School of Computing at the University of Utah. Before joining the U, he was a faculty member at Clemson University and the University of New Mexico. He received his Ph.D. in Computer Science from The University of Texas at Dallas in 2013, where he was the awardee of the David Daniel Fellowship Prize. He was a Research/Teaching Assistant at UT Dallas as well as UT Southwestern Medical Center. His research mainly focuses on real-time physics-based computer graphics, animation, and simulation, with a strong emphasis on interdisciplinarity. He was a Research Intern at Microsoft Research Asia in 2012. He received NSF CRII (2015) and CAREER (2019) awards.
16:30-17:00: Lei Lan, University of Utah;
Title: Penetration-free Deformable Simulation on the GPU
Abstract: Efficiently simulating deformable models while ensuring a penetration-free guarantee poses a significant challenge for most existing techniques. In this presentation, we will introduce our algorithm, which addresses this challenge by integrating projective dynamics and incremental potential contact through a reworked local-global iteration approach. We will showcase the use of an aggregated Jacobi solver to maximize GPU performance. Combined with faster CCD processing, our algorithm enables the interactive simulation of complex scenes with a penetration-free guarantee.
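For context, a plain Jacobi iteration for A x = b looks like the sketch below; each row update is independent of the others, which is what makes Jacobi-style solvers map well onto GPUs. The talk's aggregated variant is a refinement of this idea and is not reproduced here.

    # Plain Jacobi iteration for A x = b (illustrative sketch).
    import numpy as np

    def jacobi(A, b, iters=200):
        D = np.diag(A)              # diagonal of A
        R = A - np.diagflat(D)      # off-diagonal remainder
        x = np.zeros_like(b)
        for _ in range(iters):
            # Every component of x is updated independently, so this
            # line parallelizes trivially on a GPU.
            x = (b - R @ x) / D
        return x

    A = np.array([[4.0, 1.0], [2.0, 5.0]])  # diagonally dominant
    b = np.array([1.0, 2.0])
    print(jacobi(A, b))             # close to np.linalg.solve(A, b)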
Bio: Lei Lan is a postdoctoral researcher at the University of Utah, supervised by Yin Yang. He obtained his Ph.D. from Xiamen University in 2020 and subsequently joined Clemson University as a postdoctoral researcher until 2022. His research interests are focused on 3D modeling, physics-based simulation, deep learning, and AR/VR.
Blender for academic papers
Organizer(s)
- Silvia Sellán, University of Toronto
Abstract
We will explore the uses of Blender for conducting and presenting academic research. This workshop is aimed at absolute beginners and will be structured as several follow-along case studies of increasing complexity, such that the later examples may also be relevant to intermediate or expert audience members. Examples will range from using the Blender GUI to render 3D scenes for paper figures to building a fully automated scripted prototyping pipeline and releasing code as Blender plug-ins. Attendees are encouraged to bring a laptop with the latest version of Blender installed. No registration is required.
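To make the “fully automated scripted pipeline” idea concrete, here is a minimal sketch of rendering a paper figure headlessly with Blender's Python API; the file names are hypothetical, and the importer call assumes a recent (3.x+) Blender.

    # Sketch: headless figure render. Run with
    #   blender --background --python render_figure.py
    import bpy

    # Start from an empty scene and import the result mesh.
    bpy.ops.wm.read_factory_settings(use_empty=True)
    bpy.ops.wm.obj_import(filepath="result.obj")  # hypothetical file

    # Add a camera and a light so the object is visible.
    bpy.ops.object.camera_add(location=(0, -3, 1), rotation=(1.2, 0, 0))
    bpy.context.scene.camera = bpy.context.object
    bpy.ops.object.light_add(type='SUN', location=(0, -2, 3))

    # A transparent background composites cleanly into paper figures.
    scene = bpy.context.scene
    scene.render.film_transparent = True
    scene.render.filepath = "//figure.png"
    bpy.ops.render.render(write_still=True)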
Organizer Bio
Silvia is a fourth-year Computer Science PhD student at the University of Toronto, advised by Alec Jacobson and working in Computer Graphics and Geometry Processing. She is a Vanier Doctoral Scholar, an Adobe Research Fellow, and the winner of the 2021 University of Toronto Arts & Science Dean’s Doctoral Excellence Scholarship. She has interned twice at Adobe Research and twice at the Fields Institute of Mathematics. She is also a founder and organizer of the Toronto Geometry Colloquium and a member of WiGRAPH. She is currently looking to survey potential future postdoc and faculty positions, starting Fall 2024.
Schedule
Thursday, June 1st, 13:00 to 16:30. Break 14:30-15:00.
Working Session for BC Researchers in Human-Facing Design
Organizer(s)
- Charles Perin, cperin@uvic.ca, University of Victoria
- Karon MacLean, maclean@cs.ubc.ca, UBC-Vancouver
- Lyn Bartram, lyn@sfu.ca, SFU-SIAT
- Lawrence Kim, lawkim@sfu.ca, SFU-Burnaby
- Xing-Dong Yang, xingdong_yang@sfu.ca, SFU-Burnaby
Abstract
The last five years have seen impressive growth in British Columbia's ranks of researchers, in academia, industry, government, and communities, working in the area of understanding and designing for/with humans interacting with technology, including HCI, visualization, information, human factors engineering, or domain areas that rely on any of these disciplines. GI in Victoria is a rare opportunity for us to interact, often meeting for the first time, and to consider collective goals related to our shared location and how we might work towards them.
This event welcomes anyone interested or curious, especially those working in BC and its regional neighbors both in Canada and to the south.
Schedule
We welcome your participation in either or both sessions, as you are able. Even if you cannot participate at all but are interested in the topic, please register here: https://www.surveymonkey.ca/r/gi23BC; we will use the results to decide whether to offer a hybrid option at GI, as well as to start a contact list.
- Part 1: Meet-and-greet session, Tuesday May 30, 16:40-17:20
- Part 2: Mid-day working session, Friday June 2, 13:30-15:30