Bio: Dr. Xin Liang is a tenure-track assistant professor in the Department of Computer Science at the University of Kentucky (UKY). He received his Ph.D. in Computer Science from the University of California, Riverside in 2019. Prior to joining UKY, he worked as an assistant professor at Missouri University of Science and Technology and as a Computer/Data Scientist in the Workflow Systems Group at Oak Ridge National Laboratory (ORNL). His research interests lie broadly in the areas of high-performance computing, parallel and distributed systems, scientific data management, large-scale data analytics, and distributed machine learning.
Title: Topology-Aware Data Compression for Scientific Analysis and Visualization
Abstract: Today's large-scale simulations are producing vast amounts of data that are revolutionizing scientific thinking and practices. As the disparity between data generation rates and available I/O bandwidths continues to grow, data storage and movement are becoming significant bottlenecks for in situ and post hoc analysis and visualization in extreme-scale scientific simulations. This disparity necessitates data compression, which compresses large-scale simulation data in situ and decompresses it in situ and/or post hoc for analysis and exploration. While lossy compression is leveraged to address these big data challenges, most existing lossy compressors are agnostic to, and thus fail to preserve, topological features that are essential to scientific discoveries. In this talk, we will introduce our research and development efforts on advanced lossy compression techniques and software that preserve topological features in data for in situ and post hoc analysis and visualization at extreme scales. The use of these techniques in scientific applications will promote research in multiple domains, including cosmology, climate, and fusion, by enabling efficient and effective compression of scientific data.
Bio:
Qianwen Wang is a tenure-track assistant professor in the Department of Computer Science and Engineering at the University of Minnesota. Before joining UMN, she was a postdoctoral fellow at Harvard University. Her research aims to enhance communication and collaboration between domain users and AI through interactive visualizations, with a particular focus on applications that address biomedical challenges.
Her research in visualization, human-computer interaction, and bioinformatics has been recognized with multiple awards and featured in prestigious outlets such as MIT News and Nature Technology Features. Her recognitions include two best abstract awards from BioVis ISMB, a best paper award from IMLH@ICML, a best paper honorable mention from IEEE VIS, and the HDSI Postdoctoral Research Fund.
Title: Enhancing Human-AI Collaboration with Better Explanations
Abstract:
Artificial Intelligence (AI) has advanced at a rapid pace and is expected to revolutionize many applications. However, current AI methods are usually developed via a data-centric approach regardless of the usage context and the end users, posing challenges for users in interpreting AI, obtaining actionable insights, and collaborating with AI in decision-making and knowledge discovery.
In this talk, I will discuss how this challenge can be addressed by combining interactive visualizations with interpretable AI. Specifically, I will present two methodologies: 1) visualizations that explain AI models and predictions, and 2) interaction mechanisms that integrate domain knowledge into AI models. Despite some open challenges, I will conclude on an optimistic note: interactive visual explanations should be indispensable for human-AI collaboration. The methodologies discussed can be applied generally to other applications involving human-AI collaboration, assisting domain experts in data exploration and insight generation with the help of AI.
Bio: Zhicheng Liu is an assistant professor in the Department of Computer Science at the University of Maryland, College Park. He directs the Human-Data Interaction research group, focusing on human-centered techniques and systems that support interactive data analysis and visual communication. Before joining UMD, he worked at Adobe Research as a research scientist and at Stanford University as a postdoctoral fellow. He obtained his Ph.D. in Human-Centered Computing from Georgia Tech. He is the recipient of an NSF CAREER award, a Test-of-Time award at IEEE VIS, and multiple Best Paper Awards and Honorable Mentions at ACM CHI and IEEE VIS.
Title: A Multi-Level Task Framework for Event Sequence Analysis
Abstract: Despite the development of numerous visual analytics tools for event sequence data across various domains, including but not limited to healthcare, digital marketing, and user behavior analysis, comparing these domain-specific investigations and transferring the results to new datasets and problem areas remain challenging. Task abstractions can help us go beyond domain-specific details, but existing visualization task abstractions are insufficient for event sequence visual analytics because they primarily focus on multivariate datasets and often overlook automated analytical techniques. To address this gap, we propose a domain-agnostic multi-level task framework for event sequence analytics, derived from an analysis of 58 papers that present event sequence visualization systems. Our framework consists of four levels: objective, intent, strategy, and technique. We further show that each technique can be expressed through a quartet of action-input-output-criteria. We demonstrate the framework’s descriptive power through case studies and discuss its similarities and differences with previous event sequence task taxonomies.
Bio: Meng Xia is an assistant professor in the Department of Computer Science and Engineering at Texas A&M University. She obtained her Ph.D. from the Hong Kong University of Science and Technology and worked as a postdoctoral researcher at Carnegie Mellon University and the Korea Advanced Institute of Science and Technology. Her research interests focus on Human-AI Interaction, Data Visualization, and Education Technology. She has published papers at top conferences, including ACM CHI, IEEE VIS, and ACM CSCW, and has received the best paper award at CHI 2022, an honorable mention award at VIS 2022, and the best poster award at VIS 2019. She serves as a program chair for L@S 2024, a youth editor for the Journal of Visual Informatics, and a program committee member for CHI, VIS, CSCW, IUI, LAK, etc. More details can be found at https://www.xiameng.org.
Title: The Role of Visual Analytics in the New Era of AI-Assisted Education
Abstract: Education has become more scalable with platforms like MOOCs and online question pools; however, it is not personalized along many dimensions (e.g., problem-solving strategies and self-regulation skills). Fortunately, massive online interaction data and advanced AI techniques (e.g., LLMs) have been utilized to pursue personalized education. However, the precision of these data-driven and AI solutions is not guaranteed, since the data remains limited in covering different students' situations and lacks pedagogical guidance. Educators play an important role in providing pedagogical guidance to optimize data-driven and AI solutions. Nonetheless, such systems involve massive heterogeneous data and black-box algorithms that are not interpretable for educators. Visual analytics has proven effective in tasks such as explainable AI. In this talk, I will introduce the role of visual analytics in the new era of AI-assisted education, especially how it can support educator-AI collaboration.
Bio: Liang Li is a Professor at the College of Information Science and Engineering at Ritsumeikan University in Japan, where he also serves as the Vice Director of the Ritsumeikan International IT Education Promotion Office. He is an executive board member of the Japan Society of Simulation Technology and a council member of the ASIASIM Federation. His primary research interests include visual information processing, visualization, virtual reality, and the application of these technologies in digital humanities. His work in the visualization of cultural heritage has earned numerous recognitions, including multiple Best Paper Awards at international conferences and Gold Awards in art competitions. His research has been featured by major media outlets such as NHK, Yomiuri Shimbun, and Kyoto Shimbun.
Title: 3D Reconstruction and Visualization of Large-Scale Cultural Heritage Sites
Abstract: This presentation introduces our research on the 3D reconstruction and visual analysis of cultural heritage. We will demonstrate our proposed methods on several cultural heritage assets, including the Borobudur Temple in Indonesia, a UNESCO World Heritage site. The study covers various research cases, such as the 3D reconstruction and semantic segmentation of wall reliefs through deep learning techniques. Additionally, we have developed an immersive virtual reality (VR) experience system using measured point cloud data, allowing users to explore these sites in a detailed and interactive manner. This presentation explores how the latest technologies can aid in the restoration and visualization of large-scale cultural heritage.