UIST’23 Preprint Collection

Looking for current research on HCI + AI? Here’s a list.

Daniel Buschek
3 min read · Sep 1, 2023

Here’s a collection of UIST’23 preprints, in no particular order, found via arXiv, Twitter, and Mastodon. Suggestions welcome!

Daniel Buschek (mastodon, twitter, group website)

Text, Documents, Code

Storyfier: Exploring Vocabulary Learning Support with Text Generation

Graphologue: Exploring Large Language Model Responses with Interactive Diagrams

Sensecape: Enabling Multilevel Exploration and Sensemaking with Large Language Models

CriTrainer: An Adaptive Training Tool for Critical Paper Reading

Synergi: A Mixed-Initiative System for Scholarly Synthesis and Sensemaking

PaperToPlace: Transforming Instruction Documents into Spatialized and Context-Aware Mixed Reality Experiences

Cells, Generators, and Lenses: Design Framework for Object-Oriented Interaction with Large Language Models

Spellburst: A Node-based Interface for Exploratory Creative Coding with Natural Language Prompts

VISAR: A Human-AI Argumentative Writing Assistant with Visual Programming and Rapid Draft Prototyping

GenAssist: Making Image Generation Accessible

Promptify: Text-to-Image Generation through Interactive Prompt Exploration with Large Language Models

PromptPaint: Steering Text-to-Image Generation Through Paint Medium-like Interactions

VegaProf: Profiling Vega Visualizations

Video2Action: Reducing Human Interactions in Action Annotation of App Tutorial Videos

Papeos: Augmenting Research Papers with Talk Videos

PEANUT: A Human-AI Collaborative Tool for Annotating Audio-Visual Data

Mirrorverse: Live Tailoring of Video Conferencing Interfaces

CrossTalk: Intelligent Substrates for Language-Oriented Interaction in Video-Based Communication and Collaboration

Authoring & End-user Programming

WorldSmith: Iterative and Expressive Prompting for World Building with a Generative AI

DiLogics: Creating Web Automation Programs With Diverse Logics

GestureCanvas: A Programming by Demonstration System for Prototyping Compound Freehand Interaction in VR

Wakey-Wakey: Animate Text by Mimicking Characters in a GIF

RealityCanvas: Augmented Reality Sketching for Embedded and Responsive Scribble Animation Effects

Augmented Math: Authoring AR-Based Explorable Explanations by Augmenting Static Math Textbooks

PoseVEC: Authoring Adaptive Pose-aware Effects using Visual Programming and Demonstrations

Hardware, Sensors, Fabrication

Double-Sided Tactile Interactions for Grasping in Virtual Reality

Biohybrid Devices: Prototyping Interactive Devices with Growable Materials

VoxelHap: A Toolkit for Constructing Proxies Providing Tactile and Kinesthetic Haptic Feedback in Virtual Reality

Turn-It-Up: Rendering Resistance for Knobs in Virtual Reality through Undetectable Pseudo-Haptics

3D Printing Magnetophoretic Displays

Robots & Agents

HoloBots: Augmenting Holographic Telepresence with Mobile Robots for Tangible Remote Collaboration in Mixed Reality

Generative Agents: Interactive Simulacra of Human Behavior

UI Design and Evaluation

Unveiling the Tricks: Automated Detection of Dark Patterns in Mobile Applications

Never-ending Learning of User Interfaces


Daniel Buschek

Professor at University of Bayreuth, Germany. Human-computer interaction, intelligent user interfaces, interactive AI.