Xue Bin Peng (ECR Graphics, 2024)
Title: Acquiring Motor Skills with Imitation Learning
Abstract: Humans are capable of performing awe-inspiring feats of agility by drawing from a vast repertoire of diverse and sophisticated motor skills. This dynamism is in sharp contrast to the narrowly specialized and rigid behaviors commonly exhibited by artificial agents in both simulated and real-world domains. How can we create agents that are able to replicate the agility, versatility, and diversity of human motor behaviors? In this talk, we present motion imitation techniques that enable agents to learn large repertoires of highly dynamic and athletic behaviors by mimicking demonstrations. We begin by presenting a motion imitation framework that enables simulated agents to imitate complex behaviors from reference motion clips, ranging from common locomotion skills such as walking and running, to more athletic behaviors such as acrobatics and martial arts. We then develop adversarial imitation learning techniques that can imitate and compose skills from large motion datasets in order to fulfill high-level task objectives. In addition to developing controllers for simulated agents, our approach can also synthesize controllers for robots operating in the real world. We demonstrate the effectiveness of our approach by developing controllers for a large variety of agile locomotion skills for bipedal and quadrupedal robots.
Oliver Schneider (ECR HCI, 2024)
Title: Haptics with AI: are we ready?
Abstract: Artificial Intelligence (AI) is undergoing unprecedented growth and popularity, with recent large language models able to fluidly handle multimodal interaction between text, voice, and images. Are other modalities, like touch, next? While haptics researchers are increasingly employing AI, data science, and machine learning methods, haptics in general remains stubbornly resistant to the incredible rate of change we see in other modalities. Fortunately, this may be a good thing.
I will draw upon my work in haptic experience (HX) design and research to discuss major barriers to designing and deploying haptics, and the opportunities and challenges of using AI methods to overcome these barriers. At the same time, I will discuss the risk for haptics to exacerbate the social problems of AI, and how this stubborn technology might offer ways to study the responsible use of technology in the future.
Mikhail Bessmeltsev (ECR Graphics, 2023)
Title: Sketch Processing Toolbox
Abstract: Traditional software for content creation, whether for a CAD shape or an animation of a character, often requires artists to use artificial, constrained tools. An exciting alternative is to create content via a natural drawing, or a sketch, in 2D or 3D. Those sketches clearly represent some shape, but are notoriously difficult to analyse and process. To successfully process them and use them in content creation, we need a powerful processing toolbox combining deep-learning-based approaches with geometry-, topology-, and optimization-based ones. In this talk, I will explore our group’s efforts to create such a toolbox.
Dongwook Yoon (ECR HCI, 2023)
Title: An Ecological Perspective on AI-Augmented Education: Beyond Digital Tutors
Abstract: Generative AI, including large language models, heralds a transformative shift in education by opening doors to personalized, always-on tutoring. Universities and ed-tech companies have quickly entered the scene by developing and launching AI tutors. In this talk, however, I aim to outline a broader picture of the socio-technical ecology within this domain, moving beyond mere chatbots teaching students. I will concretize this ecological perspective with three past studies focusing on (1) the impact of AI on the relationship between students and instructors, (2) leveraging AI for pseudo-social learning, and (3) the design of chatbots for fostering learners’ critical thinking on charged topics. Additionally, I will extend the scope of this perspective by briefly introducing three ongoing projects that address the challenges of generative AI for underrepresented individuals, AI-integrated versus AI-invariant learning, and the issues of over-reliance on and addiction to AI. Overall, my goal is to draw the community’s attention to higher values in education, such as inclusivity, sustainability, and human resilience.
Zhiqin Chen (Alain Fournier Award)
Title: Neural Mesh Reconstruction
Abstract: While classic computer graphics methods have been extensively researched and are challenging to advance further, they can serve as significant inspirations in a deep learning context. In this talk, I will briefly introduce several mesh reconstruction methods from my PhD thesis, and explain how most of them are developed by integrating classic methods with modern deep learning tools. Additionally, I will share my future visions based on my industry experience.
Damien Masson (Bill Buxton Award)
Title: Transforming the Reading Experience of Scientific Documents with Polymorphism
Abstract: Despite the opportunities created by digital reading, documents remain mostly static and mimic paper. Any improvement in the shape or form of documents has to come from authors, who contend with current digital formats, workflows, and software, and who impose a single presentation on readers. Instead, I propose the concept of polymorphic documents: documents that can change in form to offer better representations of the information they contain. I believe that multiple representations of the same information can help readers, and that any document can be made polymorphic, with no intervention from the original author. This thesis presents four projects investigating what information can be obtained from existing documents, how this information can be better represented, and how these representations can be generated using only the source document. To do so, I draw upon theories showing the benefit of presenting information using multiple representations; the design of interactive systems to support morphing representations; and user studies to evaluate system usability and the benefits of the new representations on reader comprehension.