Double Tap for This Post: Understanding the Communication of Data Visualization on Social Media

Yechun Peng
Tongji University
Abstract: Data visualizations are increasingly used by news outlets on social media to communicate insights to a broad audience. However, little is known about how readers interact with and respond to data visualizations in these quick-consumption environments. In this work, we introduce a conceptual model that categorizes the ways of reading a visualization that lead to the communication effect of likes on Instagram. The model was developed through a grounded theory analysis of statements, recorded in a preliminary study, in which readers explained their reasoning for liking a visualization. Informed by coding the statements along two dimensions, the scope and the design pattern of the visualization, our model consists of three levels: depicting the "look" of a visualization (e.g., artistic style and color scheme); interpreting the "flesh and bones" of a visualization (e.g., visualization and narrative); and elucidating the "heart and soul" of a visualization (e.g., insights and conclusion). We also conducted an online crowdsourcing user study with 200 participants to demonstrate how our model can be applied to improve the communication of visualization by comparing the three levels.
Speaker Bio: PhD student at the College of Design and Innovation, Tongji University, supervised by Associate Professor Shi Yang. Research direction is data visualization theory. Research interests focus on the mechanism of visualization in public communication and collective perception, committed to exploring the bridging role of design between data and society. Has published 3 CCF A-class papers as first author or student first author.
A Parameterized Dual Analysis Framework for Intelligent Rice Breeding

Pengcheng Wang
Hunan University
Abstract: Hybrid rice breeding crossbreeds different rice lines and cultivates the resulting hybrids in fields to select those with desirable agronomic traits, such as higher yields. Recently, genomic selection has emerged as an efficient way for hybrid rice breeding. It predicts the traits of hybrids based on their genes, which helps exclude many undesired hybrids, largely reducing the workload of field cultivation. However, due to the limited accuracy of genomic prediction models, breeders still need to combine their experience with the models to identify regulatory genes that control traits and select hybrids, which remains a time-consuming process. To ease this process, in this paper, we propose a visual analysis method to facilitate interactive hybrid rice breeding. Regulatory gene identification and hybrid selection naturally form a dual-analysis task. Therefore, we developed a parametric dual projection method with theoretical guarantees to facilitate interactive dual analysis. Based on this dual projection method, we further developed a gene visualization and a hybrid visualization to verify the identified regulatory genes and hybrids. The effectiveness of our method is demonstrated through the quantitative evaluation of the parametric dual projection method, the regulatory genes and desired hybrids identified in the case study, and positive feedback from breeders.
Speaker Bio: PhD student at the School of Information Science and Engineering, Hunan University, supervised by Assistant Professor Chen Changjian. Main research directions are visual analytics and machine learning, exploring the application of visualization and visual analytics technology in specific fields such as breeding and medicine.
Beyond the Broadcast: Enhancing VR Tennis Broadcasting through Embedded Visualizations and Camera Techniques

Runxiang Yao
Fudan University
Abstract: Virtual Reality (VR) broadcasting has emerged as an innovative medium for delivering immersive experiences in major sporting events such as tennis. However, current VR broadcast systems lack an effective camera language and fail to capture dynamic in-game statistics, resulting in visual narratives that do not fully engage or inform viewers. In this work, we address these shortcomings by first analyzing 400 out-of-play clips from eight major tennis broadcasts to establish a design framework for tennis-specific camera movements that facilitate embedded visualizations. We refined our approach by analyzing 25 VR animation clips and comparing their shot and motion patterns with those of traditional tennis broadcasts, revealing key differences that guided our VR adaptations. Based on data extracted from the broadcast videos, we reconstruct a simulated game that captures the players' and ball's motion and trajectories. Leveraging this design framework and processing pipeline, we develop Beyond the Broadcast, a VR tennis viewing system that integrates embedded visualizations with adaptive camera motions to construct a comprehensive and engaging narrative. Our system dynamically overlays tactical information and key match events in real time, enhancing viewer comprehension and engagement while maintaining high immersion and comfort. User studies with tennis viewers demonstrate that our approach outperforms traditional VR broadcasting methods in delivering an immersive, informative viewing experience.
Speaker Bio: PhD student at the School of Data Science, Fudan University, supervised by Researcher Chen Siming. Research area is Human-Computer Interaction (HCI), mainly focusing on using Augmented Reality (AR) and Virtual Reality (VR) technologies to improve user experience. Specific research directions include: VR sports broadcasting, embedded visualization, immersive analytics, and brain-inspired intelligence. Research results have been published in international top journals and conferences such as IEEE VIS and IEEE PacificVis.
ConceptViz: A Visual Analytics Approach for Exploring Concepts in Large Language Models

Haoxuan Li
Zhejiang University
Abstract: Large language models (LLMs) have achieved remarkable performance across a wide range of natural language tasks. Understanding how LLMs internally represent knowledge remains a significant challenge. Although Sparse Autoencoders (SAEs) have emerged as a promising technique for extracting interpretable features from LLMs, SAE features do not inherently align with human-understandable concepts, making their interpretation cumbersome and labor-intensive. To bridge the gap between SAE features and human concepts, we present ConceptViz, a visual analytics system designed for exploring concepts in LLMs. ConceptViz implements a novel Identification--Interpretation--Validation pipeline, enabling users to query SAEs using concepts of interest, interactively explore concept-to-feature alignments, and validate the correspondences through model behavior verification. We demonstrate the effectiveness of ConceptViz through two usage scenarios and a user study. Our results show that ConceptViz enhances interpretability research by streamlining the discovery and validation of meaningful concept representations in LLMs, ultimately aiding researchers in building more accurate mental models of LLM features.
Speaker Bio: PhD student at the School of Computer Science and Technology, Zhejiang University, supervised by Professor Chen Wei. Main research directions are visual analytics and human-computer interaction. Current research work includes developing interactive visual analytics systems for large language model interpretability, exploring how to transform complex model representations into human-understandable concept spaces.
Chart2Vec: A Universal Embedding of Context-Aware Visualizations

Meiyu Hu
Tongji University
Abstract: Advances in AI-enabled techniques have accelerated the creation and automation of visualizations in the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address this issue, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both structural and semantic information of visualizations in declarative specifications. To enhance the context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the co-occurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. The results verify the consistency of our embedding method with human cognition and show its advantages over existing methods.
Speaker Bio: Received a bachelor's degree in computer science from Beijing University of Posts and Telecommunications in 2024, currently pursuing a master's degree at Tongji University. Research directions include data visualization, human-computer interaction, and AI-assisted creation.
PiCCL: Data-Driven Composition of Bespoke Pictorial Charts

Haoyan Shi
Shandong University
Abstract: We present PiCCL (Pictorial Chart Composition Language), a new language that enables users to easily create pictorial charts using a set of simple operators. To support systematic construction while addressing the main challenge of expressive pictorial chart authoring, namely the manual composition and fine-tuning of visual properties, PiCCL introduces a parametric representation that integrates data-driven chart generation with graphical composition. It also employs a lazy data-binding mechanism that automatically synthesizes charts. PiCCL is grounded in a comprehensive analysis of real-world pictorial chart examples. We describe PiCCL's design and its implementation as piccl.js, a JavaScript-based library. To evaluate PiCCL, we showcase a gallery that demonstrates its expressiveness and report findings from a user study assessing the usability of piccl.js. We conclude with a discussion of PiCCL's limitations and potential, as well as future research directions.
Speaker Bio: Master's student at the School of Computer Science and Technology, Shandong University, supervised by Professor Wang Yunhai. Main research direction is visualization authoring grammar.
SceneLoom: Communicating Data with Scene Context

Abstract: In data-driven storytelling contexts such as data journalism and data videos, data visualizations are often presented alongside real-world imagery to support narrative context. However, these visualizations and contextual images typically remain separated, limiting their combined narrative expressiveness and engagement. Coordinating the two is challenging due to the need for fine-grained alignment and creative ideation. To address this, we present SceneLoom, a Vision-Language Model (VLM)-powered system that facilitates the coordination of data visualization with real-world imagery based on narrative intents. Through a formative study, we investigated the design space of coordination relationships from the perspectives of visual alignment and semantic coherence. Guided by the derived design considerations, SceneLoom leverages VLMs to extract visual and semantic features from scene images and data visualizations, and performs design mapping through a reasoning process that incorporates spatial organization, shape similarity, layout consistency, and semantic binding. The system generates a set of contextually expressive, image-driven design alternatives that achieve coherent alignments across visual, semantic, and data dimensions. Users can explore these alternatives, select preferred mappings, and further refine the design through interactive adjustments and animated transitions to support expressive data communication. A user study and an example gallery validate SceneLoom's effectiveness in inspiring creative design and facilitating design externalization.
Speaker Bio: Master's student at the School of Data Science, Fudan University. Research directions include human-computer interaction, data visualization and storytelling, and intelligent education. Research results have been published in international top journals and conferences such as IEEE VIS, ACM CHI, and ACM CSCW.
Sel3DCraft: Interactive Visual Prompts for User-Friendly Text-to-3D Generation

Nan Xiang
East China Normal University
Abstract: Text-to-3D (T23D) generation has transformed digital content creation, yet remains bottlenecked by blind trial-and-error prompting processes that yield unpredictable results. While visual prompt engineering has advanced in text-to-image domains, its application to 3D generation presents unique challenges requiring multi-view consistency evaluation and spatial understanding. We present Sel3DCraft, a visual prompt engineering system for T23D that transforms unstructured exploration into a guided visual process. Our approach introduces three key innovations: a dual-branch structure combining retrieval and generation for diverse candidate exploration; a multi-view hybrid scoring approach that leverages MLLMs with innovative high-level metrics to assess 3D models with human-expert consistency; and a prompt-driven visual analytics suite that enables intuitive defect identification and refinement. Extensive testing and user studies demonstrate that Sel3DCraft surpasses other T23D systems in supporting creativity for designers.
Speaker Bio: Nan Xiang, Master's student at the School of Software Engineering, East China Normal University, graduated from Shanghai University School of Film. Main research directions are the intersection of computer graphics, human-computer interaction, and artificial intelligence.
DeepVIS: Bridging Natural Language and Data Visualization Through Step-wise Reasoning

Zhihao Shuai
Hong Kong University of Science and Technology (Guangzhou)
Abstract: Although data visualization is powerful for revealing patterns and communicating insights, creating effective visualizations requires familiarity with authoring tools and often disrupts the analysis flow. While large language models show promise for automatically converting analysis intent into visualizations, existing methods function as black boxes without transparent reasoning processes, which prevents users from understanding design rationales and refining suboptimal outputs. To bridge this gap, we propose integrating Chain-of-Thought (CoT) reasoning into the Natural Language to Visualization (NL2VIS) pipeline. First, we design a comprehensive CoT reasoning process for NL2VIS and develop an automatic pipeline to equip existing datasets with structured reasoning steps. Second, we introduce nvBench-CoT, a specialized dataset capturing detailed step-by-step reasoning from ambiguous natural language descriptions to finalized visualizations, which enables state-of-the-art performance when used for model fine-tuning. Third, we develop DeepVIS, an interactive visual interface that tightly integrates with the CoT reasoning process, allowing users to inspect reasoning steps, identify errors, and make targeted adjustments to improve visualization outcomes. Quantitative benchmark evaluations, two use cases, and a user study collectively demonstrate that our CoT framework effectively enhances NL2VIS quality while providing insightful reasoning steps to users.
Speaker Bio: PhD student in Data Science and Analytics at Hong Kong University of Science and Technology (Guangzhou), supervised by Assistant Professor Yang Weikai. Main research directions include data visualization, data quality, and interpretable large models.
SynthLens: Visual Analytics for Facilitating Multi-step Synthetic Route Design

Rui Sheng
Hong Kong University of Science and Technology
Abstract: Designing synthetic routes for novel molecules is pivotal in fields like medicine and chemistry. In this process, researchers need to explore a set of synthetic reactions to transform starting molecules into intermediates step by step until the target novel molecule is obtained. However, designing synthetic routes presents challenges for researchers. First, researchers need to make decisions among numerous possible synthetic reactions at each step, considering various criteria (e.g., yield, experimental duration, and the count of experimental steps) to construct the synthetic route. Second, they must consider the potential impact of each choice on the overall synthetic route. To address these challenges, we propose SynthLens, a visual analytics system to facilitate the iterative construction of synthetic routes by exploring multiple possibilities for synthetic reactions at each step of construction. Specifically, we introduce a tree-form visualization in SynthLens to compare and evaluate all the explored routes at various exploration steps, considering both the exploration step and multiple criteria. Our system empowers researchers to consider their construction process comprehensively, guiding them toward promising exploration directions to complete the synthetic route. We validated the usability and effectiveness of SynthLens through a quantitative evaluation and expert interviews, highlighting its role in facilitating the design process of synthetic routes. Finally, we discuss the insights from SynthLens to inspire other multi-criteria decision-making scenarios with visual analytics.
Speaker Bio: PhD student at Hong Kong University of Science and Technology, supervised by Professor Qu Huamin. Main research directions are visual analytics and how human-computer interaction technologies can help solve problems in the biomedical field. Graduated from Wuhan University, received national scholarships twice, and is a recipient of the Hong Kong Government Scholarship. Has published multiple articles in venues such as VIS, UIST, CSR, and TVCG. Collaborates extensively with laboratories in China and abroad, including Adam Perer's team at CMU, Professor Wang Fei's team at Cornell, Tai-Quan Peng's team at Michigan State University, and Professor Li Ziqing's team at Westlake University.