
2024 Activity Report
Project-Team EX-SITU

RNSR: 201521246H
  • Research center Inria Saclay Centre at Université Paris-Saclay
  • In partnership with: CNRS, Université Paris-Saclay
  • Team name: Extreme Situated Interaction
  • In collaboration with: Laboratoire Interdisciplinaire des Sciences du Numérique
  • Domain: Perception, Cognition and Interaction
  • Theme: Interaction and visualization

Keywords

Computer Science and Digital Science

  • A5.1. Human-Computer Interaction
  • A5.1.1. Engineering of interactive systems
  • A5.1.2. Evaluation of interactive systems
  • A5.1.5. Body-based interfaces
  • A5.1.6. Tangible interfaces
  • A5.1.7. Multimodal interfaces
  • A5.1.8. 3D User Interfaces
  • A5.2. Data visualization
  • A5.6.2. Augmented reality

Other Research Topics and Application Domains

  • B2.7.2. Health monitoring systems
  • B2.8. Sports, performance, motor skills
  • B6.3.1. Web
  • B6.3.4. Social Networks
  • B9.2. Art
  • B9.2.1. Music, sound
  • B9.2.4. Theater
  • B9.5. Sciences

1 Team members, visitors, external collaborators

Research Scientists

  • Wendy Mackay [Team leader, INRIA, Senior Researcher]
  • Janin Koch [INRIA, ISFP, until Jul 2024]
  • Theophanis Tsandilas [INRIA, Researcher]

Faculty Members

  • Michel Beaudouin-Lafon [UNIV PARIS SACLAY, Professor]
  • Sarah Fdili Alaoui [UNIV PARIS SACLAY, Associate Professor, until Dec 2024]

Post-Doctoral Fellows

  • Camille Gobert [UNIV PARIS SACLAY, Post-Doctoral Fellow, from May 2024]
  • Johnny Sullivan [UNIV PARIS SACLAY, Post-Doctoral Fellow, until Aug 2024]

PhD Students

  • Tove Bang [UNIV PARIS SACLAY]
  • Alexandre Battut [UNIV PARIS SACLAY, until Jun 2024]
  • Eya Ben Chaaben [INRIA]
  • Vincent Bonczak [INRIA]
  • Léo Chédin [ENS PARIS-SACLAY]
  • Romane Dubus [UNIV PARIS SACLAY]
  • Camille Gobert [INRIA, until Apr 2024]
  • Yasaman Mashhadi Hashem Marandi [INRIA, until Sep 2024]
  • Capucine Nghiem [UNIV PARIS SACLAY, ATER, from Oct 2024]
  • Capucine Nghiem [UNIV PARIS SACLAY, until Sep 2024]
  • Anna Offenwanger [CNRS]
  • Lea Paymal [UNIV PARIS SACLAY]
  • Xiaohan Peng [UNIV PARIS SACLAY]
  • Wissal Sahel [IRT SYSTEM X, until Mar 2024]
  • Matthieu Savary [ENS Paris-Saclay, from Sep 2024]
  • Martin Tricaud [UNIV PARIS SACLAY, until Oct 2024]
  • Yann Trividic [UNIV GUSTAVE EIFFEL]
  • Anastasiya Zakreuskaya [INRIA, from Mar 2024]

Technical Staff

  • Sébastien Dubos [UNIV PARIS SACLAY, Engineer]
  • Olivier Gladin [INRIA, Engineer]
  • Alexandre Kabil [CNRS, Engineer]
  • Sotirios Piliouras [INRIA, Engineer, from Nov 2024]

Interns and Apprentices

  • Carl Abou Saada Nujaim [INRIA, Intern, from May 2024 until Jul 2024]
  • Thibaut Guerin [INRIA, Intern, from May 2024 until Aug 2024]
  • Shubhankar Shubhankar [INRIA, Intern, from Apr 2024 until Sep 2024]

Administrative Assistant

  • Julienne Moukalou [INRIA]

Visiting Scientist

  • Jun KATO [AIST, from Mar 2024]

2 Overall objectives

Interactive devices are everywhere: we wear them on our wrists and belts; we consult them from purses and pockets; we read them on the sofa and on the metro; we rely on them to control cars and appliances; and soon we will interact with them on living room walls and billboards in the city. Over the past 30 years, we have witnessed tremendous advances in both hardware and networking technology, which have revolutionized all aspects of our lives, not only business and industry, but also health, education and entertainment. Yet the ways in which we interact with these technologies remain mired in the 1980s. The graphical user interface (GUI), revolutionary at the time, has been pushed far past its limits. Originally designed to help secretaries perform administrative tasks in a work setting, the GUI is now applied to every kind of device, for every kind of setting. While this may make sense for novice users, it forces expert users to use frustratingly inefficient and idiosyncratic tools that are neither powerful nor incrementally learnable.

ExSitu explores the limits of interaction — how extreme users interact with technology in extreme situations. Rather than beginning with novice users and adding complexity, we begin with expert users who already face extreme interaction requirements. We are particularly interested in creative professionals, artists and designers who rewrite the rules as they create new works, and scientists who seek to understand complex phenomena through creative exploration of large quantities of data. Studying these advanced users today will not only help us to anticipate the routine tasks of tomorrow, but also to advance our understanding of interaction itself. We seek to create effective human-computer partnerships, in which expert users control their interaction with technology. Our goal is to advance our understanding of interaction as a phenomenon, with a corresponding paradigm shift in how we design, implement and use interactive systems. We have already made significant progress through our work on instrumental interaction and co-adaptive systems, and we hope to extend these into a foundation for the design of all interactive technology.

3 Research program

We characterize Extreme Situated Interaction as follows:

Extreme users. We study extreme users who make extreme demands on current technology. We know that human beings take advantage of the laws of physics to find creative new uses for physical objects. However, this level of adaptability is severely limited when manipulating digital objects. Even so, we find that creative professionals––artists, designers and scientists––often adapt interactive technology in novel and unexpected ways and find creative solutions. By studying these users, we hope to not only address the specific problems they face, but also to identify the underlying principles that will help us to reinvent virtual tools. We seek to shift the paradigm of interactive software, to establish the laws of interaction that significantly empower users and allow them to control their digital environment.

Extreme situations. We develop extreme environments that push the limits of today's technology. We take as given that future developments will solve “practical” problems such as cost, reliability and performance and concentrate our efforts on interaction in and with such environments. This has been a successful strategy in the past: Personal computers only became prevalent after the invention of the desktop graphical user interface. Smartphones and tablets only became commercially successful after Apple cracked the problem of a usable touch-based interface for the iPhone and the iPad. Although wearable technologies, such as watches and glasses, are finally beginning to take off, we do not believe that they will create the major disruptions already caused by personal computers, smartphones and tablets. Instead, we believe that future disruptive technologies will include fully interactive paper and large interactive displays.

Our extensive experience with the Digiscope WILD and WILDER platforms places us in a unique position to understand the principles of distributed interaction that extreme environments call for. We expect to integrate, at a fundamental level, the collaborative capabilities that such environments afford. Indeed almost all of our activities in both the digital and the physical world take place within a complex web of human relationships. Current systems only support, at best, passive sharing of information, e.g., through the distribution of independent copies. Our goal is to support active collaboration, in which multiple users are actively engaged in the lifecycle of digital artifacts.

Extreme design. We explore novel approaches to the design of interactive systems, with particular emphasis on extreme users in extreme environments. Our goal is to empower creative professionals, allowing them to act as both designers and developers throughout the design process. Extreme design affects every stage, from requirements definition, to early prototyping and design exploration, to implementation, to adaptation and appropriation by end users. We hope to push the limits of participatory design to actively support creativity at all stages of the design lifecycle. Extreme design does not stop with purely digital artifacts. The advent of digital fabrication tools and FabLabs has significantly lowered the cost of making physical objects interactive. Creative professionals now create hybrid interactive objects that can be tuned to the user's needs. Integrating the design of physical objects into the software design process raises new challenges, with new methods and skills to support this form of extreme prototyping.

Our overall approach is to identify a small number of specific projects, organized around four themes: Creativity, Augmentation, Collaboration and Infrastructure. Specific projects may address multiple themes, and different members of the group work together to advance these different topics.

4 Application domains

4.1 Creative industries

We work closely with creative professionals in the arts and in design, including music composers, musicians, and sound engineers; painters and illustrators; dancers and choreographers; theater groups; game designers; graphic and industrial designers; and architects.

4.2 Scientific research

We work with creative professionals in the sciences and engineering, including neuroscientists and doctors; programmers and statisticians; chemists and astrophysicists; and researchers in fluid mechanics.

5 Highlights of the year

5.1 Awards

  • Wendy Mackay was awarded the CHI Lifetime Research award, the highest honor in ACM/SIGCHI.
  • Wendy Mackay was selected to give the UIST Vision talk at ACM/UIST'24.
  • Romane Dubus was awarded a five-month Fulbright Scholarship to visit the NASA Ames Research Center in California, USA.
  • Romane Dubus received the ENAC doctoral prize for 2024.
  • Cédric Fleury successfully defended his habilitation at Université Paris-Saclay.

Team members received two Best Paper awards and one Honorable Mention award for their publications:

  • Tove Grimstad Bang, Sarah Fdili Alaoui, Guro Tyse, Elisabeth Schwartz, and Frédéric Bevilacqua. A Retrospective Autoethnography Documenting Dance Learning Through Data Physicalisations. ACM Conference on Designing Interactive Systems (DIS 2024). 20
  • Stacy Hsueh, Marianela Ciolfi Felice, Sarah Fdili Alaoui, and Wendy E. Mackay. What Counts as `Creative' Work? Articulating Four Epistemic Positions in Creativity-Oriented HCI Research. ACM Conference on Human Factors in Computing Systems (CHI 2024). 22
  • Wendy E. Mackay, Alexandre Battut, Germán Leiva, and Michel Beaudouin-Lafon. VideoClipper: Rapid Prototyping with the “Editing-in-the-Camera” Method. ACM Conference on Human Factors in Computing Systems (CHI 2024). 24

6 New software, platforms, open data

6.1 New software

6.1.1 Digiscape

  • Name:
    Digiscape
  • Keywords:
    2D, 3D, Node.js, Unity 3D, Video stream
  • Functional Description:
    Through the Digiscape application, users can connect to a remote workspace and share files, video streams and audio streams with other users. Applications running on complex visualization platforms can be launched and synchronized easily.
  • Contact:
    Olivier Gladin
  • Partners:
    Maison de la simulation, UVSQ, CEA, ENS Cachan, LIMSI, LRI - Laboratoire de Recherche en Informatique, CentraleSupélec, Telecom Paris

6.1.2 Touchstone2

  • Keyword:
    Experimental design
  • Functional Description:

    Touchstone2 is a graphical user interface to create and compare experimental designs. It is based on a visual language: Each experiment consists of nested bricks that represent the overall design, blocking levels, independent variables, and their levels. Parameters such as variable names, counterbalancing strategy and trial duration are specified in the bricks and used to compute the minimum number of participants for a balanced design, account for learning effects, and estimate session length. An experiment summary appears below each brick assembly, documenting the design. Manipulating bricks immediately generates a corresponding trial table that shows the distribution of experiment conditions across participants. Trial tables are faceted by participant. Using brushing and fish-eye views, users can easily compare among participants and among designs on one screen, and examine their trade-offs.

    Touchstone2 plots a power chart for each experiment in the workspace. Each power curve is a function of the number of participants, and thus increases monotonically. Dots on the curves denote numbers of participants for a balanced design. The pink area corresponds to a power less than the 0.8 criterion: the first dot above it indicates the minimum number of participants. To refine this estimate, users can choose among Cohen’s three conventional effect sizes, directly enter a numerical effect size, or use a calculator to enter mean values for each treatment of the dependent variable (often from a pilot study).

    Touchstone2 can export a design in a variety of formats, including JSON and XML for the trial table, and TSL, a language we have created to describe experimental designs. A command-line tool is provided to generate a trial table from a TSL description.

    Touchstone2 runs in any modern Web browser and is also available as a standalone tool. It is used at ExSitu to design our experiments, and by other universities and research centers worldwide. It is available under an Open Source licence at https://touchstone2.org.

  • URL:
    https://touchstone2.org
  • Contact:
    Wendy Mackay
  • Partner:
    University of Zurich
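
The power computation described above can be illustrated with a minimal sketch. This is not Touchstone2's actual code: it approximates a two-sided, two-sample t-test with the normal distribution and sweeps the number of participants until the 0.8 power criterion is met, for Cohen's three conventional effect sizes.

```python
import math
from statistics import NormalDist

_nd = NormalDist()

def power_two_sample(n_per_group: int, effect_size: float, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample t-test (normal
    approximation). Power increases monotonically with n, which is why
    each power curve in the chart rises with the number of participants."""
    z_crit = _nd.inv_cdf(1 - alpha / 2)
    noncentrality = effect_size * math.sqrt(n_per_group / 2)
    return _nd.cdf(noncentrality - z_crit)

def min_participants(effect_size: float, target: float = 0.8, alpha: float = 0.05) -> int:
    """Smallest per-group n whose power reaches the criterion:
    the 'first dot above the pink area' in the power chart."""
    n = 2
    while power_two_sample(n, effect_size, alpha) < target:
        n += 1
    return n

# Cohen's conventional effect sizes: small (0.2), medium (0.5), large (0.8)
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: at least {min_participants(d)} participants per group")
```

With this approximation, a medium effect (d = 0.5) requires roughly 60-65 participants per group; an exact t-test computation, as a real tool would use, gives slightly different numbers.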

6.1.3 UnityCluster

  • Keywords:
    3D, Virtual reality, 3D interaction
  • Functional Description:

    UnityCluster is middleware to distribute any Unity 3D (https://unity3d.com/) application on a cluster of computers that drive interactive rooms, such as our WILD and WILDER rooms, or immersive CAVEs (Cave Automatic Virtual Environments). Users can interact with the application through various interaction resources.

    UnityCluster provides an easy solution for running existing Unity 3D applications on any display that requires a rendering cluster of several computers. UnityCluster is based on a master-slave architecture: the master computer runs the main application and the physical simulation and manages input, while the slave computers receive updates from the master and each render part of the 3D scene. UnityCluster manages data distribution and synchronization among the computers to obtain a consistent image on the entire wall-sized display surface.

    UnityCluster can also deform the displayed images according to the user's position in order to match the viewing frustum defined by the user's head and the four corners of the screens. This respects the motion parallax of the 3D scene, giving users a better sense of depth.

    UnityCluster is composed of a set of C# scripts that manage the network connection, data distribution, and the deformation of the viewing frustum. To distribute an existing application on the rendering cluster, these scripts are embedded into a Unity package that is included in the existing Unity project.

  • Contact:
    Cédric Fleury
  • Partner:
    Inria
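
The head-coupled deformation described above is essentially the classic off-axis (generalized) perspective projection: the frustum is recomputed each frame from the tracked eye position and the screen's corners. A minimal sketch of that construction, in Python rather than UnityCluster's C#, with illustrative names:

```python
import math

def off_axis_frustum(eye, lower_left, lower_right, upper_left, near):
    """Return asymmetric frustum extents (l, r, b, t) at the near plane
    for a flat screen given by three corners and a tracked eye position."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a):
        length = math.sqrt(dot(a, a))
        return tuple(x / length for x in a)
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

    vr = norm(sub(lower_right, lower_left))   # screen right axis
    vu = norm(sub(upper_left, lower_left))    # screen up axis
    vn = norm(cross(vr, vu))                  # screen normal, towards the eye

    va = sub(lower_left, eye)    # eye to lower-left corner
    vb = sub(lower_right, eye)   # eye to lower-right corner
    vc = sub(upper_left, eye)    # eye to upper-left corner

    d = -dot(va, vn)             # perpendicular eye-to-screen distance
    left = dot(vr, va) * near / d
    right = dot(vr, vb) * near / d
    bottom = dot(vu, va) * near / d
    top = dot(vu, vc) * near / d
    return left, right, bottom, top
```

When the eye is centered in front of the screen the extents are symmetric; as the head moves off-axis they become asymmetric, which is what preserves motion parallax on the wall.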

6.1.4 VideoClipper

  • Keyword:
    Video recording
  • Functional Description:

    VideoClipper is an iOS app for the Apple iPad, designed to guide the capture of video during a variety of prototyping activities, including video brainstorming, interviews, video prototyping and participatory design workshops. It relies heavily on Apple’s AVFoundation, a framework that provides essential services for working with time-based audiovisual media on iOS (https://developer.apple.com/av-foundation/). Key uses include transforming still images (title cards) into video tracks, composing video and audio tracks in memory to create a preview of the resulting video project, and saving video files to the default Photo Album outside the application.

    VideoClipper consists of four main screens: project list, project, capture and import. The project list screen shows the most recent projects at the top and allows the user to quickly add, remove or clone (copy and paste) projects. The project screen includes a storyboard composed of storylines that can be added, cloned or deleted. Each storyline is composed of a single title card followed by one or more video clips. Users can reorder storylines within the storyboard, and the elements within each storyline, through direct manipulation. Users can preview the complete storyboard, including all title cards and videos, by pressing the play button, or export it to the iPad’s Photo Album by pressing the action button.

    VideoClipper offers multiple tools for editing title cards and storylines. Tapping on a title card lets the user edit the foreground text (font, size and color), change the background color, add or edit text labels (size, position and color), and add or edit images, both new pictures and existing ones. Users can also delete text labels and images with the trash button. Video clips are presented in a standard video player with standard interaction. Users can tap on any clip in a storyline to trim the clip with a non-destructive trimming tool, delete it with the trash button, open a capture screen by tapping the camera icon, label the clip by tapping a colored label button, and display or hide the selected clip by toggling the eye icon.

    VideoClipper is currently in beta test, and is used by students in two HCI classes at Université Paris-Saclay, by researchers in ExSitu, and by external researchers for both teaching and research. A beta version is available on demand through the Apple TestFlight online service.

  • Contact:
    Wendy Mackay
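
The storyboard structure described above (a storyboard contains storylines; each storyline is a title card followed by clips, with non-destructive trims, colored labels and a visibility toggle) can be modeled roughly as follows. This is an illustrative sketch in Python, not the app's actual data model; all names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TitleCard:
    text: str                        # foreground text of the title card
    background_color: str = "black"

@dataclass
class Clip:
    path: str
    trim_start: float = 0.0          # non-destructive trim points (seconds)
    trim_end: Optional[float] = None
    label: Optional[str] = None      # colored label button
    hidden: bool = False             # the 'eye' visibility toggle

@dataclass
class Storyline:
    title_card: TitleCard
    clips: List[Clip] = field(default_factory=list)

@dataclass
class Storyboard:
    storylines: List[Storyline] = field(default_factory=list)

    def visible_sequence(self):
        """Flatten the storyboard into preview order: each title card
        followed by its visible (non-hidden) clips."""
        seq = []
        for storyline in self.storylines:
            seq.append(storyline.title_card)
            seq.extend(c for c in storyline.clips if not c.hidden)
        return seq
```

Hiding a clip with the eye toggle simply removes it from the preview sequence without deleting it, mirroring the non-destructive editing style described above.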

6.1.5 WildOS

  • Keywords:
    Human Computer Interaction, Wall displays
  • Functional Description:

    WildOS is middleware to support applications running in an interactive room featuring various interaction resources, such as our WILD and WILDER rooms: a tiled wall display, a motion tracking system, tablets and smartphones, etc. The conceptual model of WildOS is a platform, such as the WILD or WILDER room, described as a set of devices and on which one or more applications can be run.

    WildOS consists of a server running on a machine that has network access to all the machines involved in the platform, and a set of clients running on the various interaction resources, such as a display cluster or a tablet. Once WildOS is running, applications can be started and stopped and devices can be added to or removed from the platform.

    WildOS relies on Web technologies, most notably JavaScript and Node.js, as well as node-webkit and HTML5. This makes it inherently portable (it is currently tested on Mac OS X and Linux). While applications can be developed using only these Web technologies, it is also possible to bridge to existing applications developed in other environments if they provide sufficient access for remote control. Sample applications include a web browser, an image viewer, a window manager, and the BrainTwister application developed in collaboration with neuroanatomists at NeuroSpin.

    WildOS is used for several research projects at ExSitu and by other partners of the Digiscope project. It was also deployed on several of Google's interactive rooms in Mountain View, Dublin and Paris. It is available under an Open Source licence at https://bitbucket.org/mblinsitu/wildos.

  • URL:
    https://bitbucket.org/mblinsitu/wildos
  • Contact:
    Michel Beaudouin-Lafon
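
WildOS's conceptual model, as described above, is a platform made of a set of devices on which applications are started and stopped while devices join and leave at runtime. The real system is a Node.js server with clients on each interaction resource; the toy Python class below is purely illustrative of that model, with assumed names.

```python
class Platform:
    """Illustrative model of a WildOS-style platform: a named set of
    devices on which applications can be started and stopped."""

    def __init__(self, name):
        self.name = name
        self.devices = {}   # device id -> kind (e.g. "display cluster")
        self.running = {}   # app name -> set of device ids it runs on

    def add_device(self, device_id, kind):
        self.devices[device_id] = kind
        # Running apps extend to a device that joins the platform.
        for devices in self.running.values():
            devices.add(device_id)

    def remove_device(self, device_id):
        self.devices.pop(device_id, None)
        for devices in self.running.values():
            devices.discard(device_id)

    def start_app(self, app_name):
        self.running[app_name] = set(self.devices)

    def stop_app(self, app_name):
        self.running.pop(app_name, None)
```

In WildOS proper, the server performs the equivalent of these bookkeeping steps and additionally notifies each client over the network.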

6.1.6 StructGraphics

  • Keywords:
    Data visualization, Human Computer Interaction
  • Scientific Description:
    Information visualization research has developed powerful systems that enable users to author custom data visualizations without textual programming. These systems can support graphics-driven practices by bridging lazy data-binding mechanisms with vector-graphics editing tools. Yet, despite their expressive power, visualization authoring systems often assume that users want to generate visual representations that they already have in mind rather than explore designs. They also impose a data-to-graphics workflow, where binding data dimensions to graphical properties is a necessary step for generating visualization layouts. In this work, we introduce StructGraphics, an approach for creating data-agnostic and fully reusable visualization designs. StructGraphics enables designers to construct visualization designs by drawing graphics on a canvas and then structuring their visual properties without relying on a concrete dataset or data schema. In StructGraphics, tabular data structures are derived directly from the structure of the graphics. Later, designers can link these structures with real datasets through a spreadsheet user interface. StructGraphics supports the design and reuse of complex data visualizations by combining graphical property sharing, by-example design specification, and persistent layout constraints. We demonstrate the power of the approach through a gallery of visualization examples and reflect on its strengths and limitations in interaction with graphic designers and data visualization experts.
  • Functional Description:
    StructGraphics is a user interface for creating data-agnostic and fully reusable designs of data visualizations. It enables visualization designers to construct visualization designs by drawing graphics on a canvas and then structuring their visual properties without relying on a concrete dataset or data schema. Overall, StructGraphics follows the inverse of the workflow of traditional visualization-design systems: rather than transforming data dependencies into visualization constraints, it allows users to interactively define the property and layout constraints of their visualization designs and then translate these graphical constraints into alternative data structures. Since visualization designs are data-agnostic, they can be easily reused and combined with different datasets.
  • Contact:
    Theophanis Tsandilas
  • Participant:
    Theophanis Tsandilas

6.2 New platforms

6.2.1 WILD

Participants: Michel Beaudouin-Lafon [correspondant], Cédric Fleury, Olivier Gladin.

WILD is our first experimental ultra-high-resolution interactive environment, created in 2009. In 2019-2020 it received a major upgrade: the 16-computer cluster was replaced by new machines with top-of-the-line graphics cards, and the display was replaced by thirty-two 32" 8K displays, resulting in a total resolution of one gigapixel (61 440 x 17 280) over an area of 5m80 x 1m70 (280 ppi). An infrared frame adds multitouch capability to the entire display area. The platform also features a camera-based motion tracking system that lets users interact with the wall, as well as the surrounding space, with various mobile devices.
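
The one-gigapixel figure can be checked directly: assuming the 32 panels are tiled 8 wide by 4 high (a layout consistent with the stated 61 440 x 17 280 resolution), standard 8K panels give:

```python
# Each 8K panel is 7680 x 4320 pixels; an 8-wide x 4-high tiling of the
# 32 panels (assumed layout) matches the stated total resolution.
panel_w, panel_h = 7680, 4320
cols, rows = 8, 4
total_w, total_h = cols * panel_w, rows * panel_h
print(total_w, total_h, total_w * total_h)
```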

6.2.2 WILDER

Participants: Michel Beaudouin-Lafon [correspondant], Cédric Fleury, Olivier Gladin.

WILDER (Figure 1) is our second experimental ultra-high-resolution interactive environment, which follows the WILD platform developed in 2009. It features a wall-sized display with seventy-five 20" LCD screens, i.e. a 5m50 x 1m80 (18' x 6') wall displaying 14 400 x 4 800 = 69 million pixels, powered by a 10-computer cluster and two front-end computers. The platform also features a camera-based motion tracking system that lets users interact with the wall, as well as the surrounding space, with various mobile devices. The display uses a multitouch frame (one of the largest of its kind in the world) to make the entire wall touch sensitive.

WILDER was inaugurated in June, 2015. It is one of the ten platforms of the Digiscope Equipment of Excellence and, in combination with WILD and the other Digiscope rooms, provides a unique experimental environment for collaborative interaction.

In addition to using WILD and WILDER for our research, we have also developed software architectures and toolkits, such as WildOS and Unity Cluster, that enable developers to run applications on these multi-device, cluster-based systems.


Figure 1: The WILDER platform.

7 New results

7.1 Fundamentals of Interaction

Participants: Michel Beaudouin-Lafon [correspondant], Wendy Mackay [co-correspondant], Theophanis Tsandilas, Camille Gobert, Han Han, Miguel Renom, Martin Tricaud.

In order to better understand fundamental aspects of interaction, ExSitu conducts in-depth observational studies and controlled experiments which contribute to theories and frameworks that unify our findings and help us generate new, advanced interaction techniques 3.

Camille Gobert defended his Ph.D. thesis on interacting with computer languages 35. The thesis develops a new theory of interaction with computer languages that shows that no language is inherently bound to a specific representation or type of interaction. By deconstructing the notion of computer language into five fundamental aspects, it isolates interaction from the other constituents of these languages, yielding a more holistic model than those that already exist. This model is then used to identify different levels of interaction with computer languages, which can be hybridized, and to show that a single piece of code can be projected onto several representations to let end-users decide which representation supports the form of interaction most appropriate for them. This approach is then applied to two research problems using user-centered design methodologies: helping users author documents written in LaTeX (i-LaTeX) and helping programmers appropriate their text editors by crafting their own projections (Lorgnette). The results show that complementing text with other representations helps users understand and modify code faster and with a lower workload and that these representations can be created by recomposing existing parts that can then be reused from one projection to another. The thesis demonstrates that considering interaction with computer languages as projections makes it more protean, an approach that is theoretically grounded, technically possible and empirically desirable. It opens up the way to equipping an ever-growing public of citizens with new intellectual and technical tools to help them understand and appropriate the computer languages that rule so many aspects of our lives.

Martin Tricaud defended his thesis on instrumentality and materiality in Procedural Computer Graphics (PCG) and beyond 37. PCG entails building and amending algorithmic procedures to generate graphical content. These models reify the chain of operations leading to a design, turning the design process itself into an interactive object. However, the expressiveness of PCG techniques remains constrained, e.g. by the reliance on sliders to explore design spaces: while everything is possible, nothing is easy. This problem echoes a central question in Human-Computer Interaction: Which software artifacts are best suited to mediate our actions on information substrates? The thesis addresses this question by redefining materiality not as a quality of the environment but of an agent's relationship to it. Through an ethnographic study of artists and designers, the thesis proposes that materiality develops through epistemic processes: Artists build non-declarative knowledge through epistemic actions, externalizing this knowledge into artifacts that foster further exploration. It illustrates this approach by a software prototype to facilitate navigation in large procedural model parameter spaces, and reflects on the lack of adequate architectures to develop such techniques. The thesis concludes by speculating that if the building blocks of a software's interaction model have well-behaved mathematical semantics, we can extend the model-world metaphor beyond physicality and bring materiality to various information substrates.

Sketching is a common practice among visualization designers and serves as an approachable entry to data visualization for non-experts. However, moving from a sketch to a full-fledged data visualization often requires throwing away the original sketch and recreating it from scratch. Our goal is to formalize these sketches, enabling them to support iteration and systematic data mapping through a visual-first templating workflow. To support this workflow, we developed DataGarden 19, a system that enables authors to sketch a representative visualization and structure it into an expressive template for an envisioned or partial dataset, capturing implicit style as well as explicit data mappings. DataGarden seeks to make interaction and machine support work in tandem, showing the author the structure inferred by the system while also enabling the author to modify and expand on it. We evaluated our approach through a reproduction study and a freeform study, investigating how DataGarden supports personal expression and the variety of visualizations that authors can produce with it (see Fig. 2).


Figure 2: DataGarden supports sketching personal, expressive designs and formalizing these as structured visualization templates. To express (A) a visualization design idea, a user sketches a few representative glyphs in (B) the canvas, making their vision explicit. DataGarden provides the means to structure the freeform sketch into a visualization template by (C) capturing implicit style and explicit data mappings via user interaction and machine support. `Real' data can then be fed to the template. This featured visualization is generated from a template created with the tool. For additional examples, see: datagarden.

We also worked in the domain of safety-critical systems. One project explored “automation surprises” in the cockpit 21. Early studies of the use of an aircraft's autopilot system showed that pilots sometimes struggle to maintain their awareness of the autoflight system's current mode of operation, which may negatively affect safety and, as a consequence, cause accidents. The project identifies new insights that help understand the phenomenon and proposes novel instrument designs that mitigate the problem. Another project [paymal:hal-04933033] studied people with chronic illness, who often fluctuate between “good days” and “bad days” where symptoms are more or less severe, depending on a range of factors and triggers. The project focused on people with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) to understand how their illness shapes their use of technology in everyday life, and suggests new possibilities for more accessible, non-screen-based technologies for patients with chronic fatigue, sensory sensitivities and cognitive limitations.

Wissal Sahel also defended her thesis 36 on her work with control room operators at RTE, who face an ever-increasing workload and corresponding information overload, which can lead to loss of situation awareness. She conducted a multi-year participatory design project with RTE operators and then applied the generative theory of interaction approach 41 to design and test StoryLines, an interactive timeline that helps operators collect information from diverse tools to create an overview, record reminders, and share relevant information with the next shift’s operator.

7.2 Human-Computer Partnerships

Participants: Wendy Mackay [correspondant], Janin Koch [co-correspondant], Téo Sanchez, Nicolas Taffin, Theophanis Tsandilas.

ExSitu is interested in designing effective human-computer partnerships where expert users control their interaction with intelligent systems. Rather than treating the human user as the `input' to a computer algorithm, we explore human-centered machine learning, where the goal is to use machine learning and other techniques to increase human capabilities. Much of human-computer interaction research focuses on measuring and improving productivity: our specific goal is to create what we call “co-adaptive systems” that are discoverable, appropriable and expressive for the user. The historical foundation for this work is described in the book Réimaginer nos interactions avec le monde numérique, based on Wendy Mackay's Inaugural Lecture at the Collège de France. She was also invited to give the annual “UIST Vision” 25 at the UIST'24 conference, where she argued that we need to fundamentally change our approach to designing intelligent interactive systems. Rather than creating parasitic systems, our goal should be to create “human-computer partnerships” that establish symbiotic relationships between artificial intelligence (AI) and human users. This requires assessing the impact of users interacting with intelligent systems over the short, medium and long term, and ensuring that users control their level of agency, ranging from delegation to retaining full control. The focus should be on explicitly supporting “reciprocal co-adaptation”, where users both learn from and appropriate (adapt and adapt to) intelligent systems, and those systems in turn both learn from and affect users over time. Several group members ran workshops on the role of Generative AI in Interactive Systems 33 (M&M'25) and on Transforming HCI Research Cycles using Generative AI and “Large Whatever Models” (LWMs) 40 (CHI'25).

Visually oriented designers often struggle to create effective generative AI (GenAI) prompts. A preliminary study identified specific issues in composing and fine-tuning prompts, as well as needs in accurately translating intentions into rich input. We developed DesignPrompt 31, a moodboard tool that lets designers combine multiple modalities (images, color, text) into a single GenAI prompt and tweak the results. We ran a comparative structured observation study with 12 professional designers to better understand their intent expression, expectation alignment and transparency perception when using DesignPrompt and a text-input GenAI tool. We found that multimodal prompt input encouraged designers to explore and express themselves more effectively. Designers' interaction preferences changed according to their overall sense of control over the GenAI and whether they were seeking inspiration or a specific image. Designers developed innovative uses of DesignPrompt, including composing elaborate multimodal prompts and creating a multimodal prompt pattern to maximize novelty while ensuring consistency. This work was also presented at the HHAI'24 Doctoral Consortium 30.

Well-designed dynamics in human-AI interaction should lead to more satisfying and engaging collaboration. Key open questions are how to design such interactions and what role personal goals and expectations play. We developed three AI partners of varying initiative (leader, follower, shifting) in a collaborative game called Geometry Friends 23. We conducted a within-subjects experiment with 60 participants to assess personal AI partner preference and performance satisfaction, as well as perceived warmth and competence of AI partners. Results show that AI partners following human initiative are perceived as warmer and more collaborative. However, some participants preferred AI leaders for their independence and speed, despite seeing them as less friendly. This suggests that assigning a leadership role to the AI partner may be suitable for time-sensitive scenarios. We identify design factors for developing collaborative AI agents with varying levels of initiative to create more effective human-AI teams that consider context and individual preference.

We also explored the intersection of artificial intelligence (AI) and dance 16, which has evolved considerably over the last few decades as the technology has progressed. The first explorations of AI in dance began in the 1960s, notably with the “9 Evenings” interactive performances that brought together artists (in performance, visual art and sound) and engineers from Bell Labs [Morris] to design the first computer systems to interface with the dancing body. In the 1990s, interactive systems began to develop, enabling dancers to interact in real time with computer-generated environments. This period also saw the rise of technologies such as motion capture, which enabled major developments in interaction between dancers and computers. Today, advances in machine learning are appearing through new applications in dance. Intelligent agents can now analyze and interpret complex human movements, generate new movement sequences and even act as autonomous performers. These innovations open up new creative possibilities, but also pose new challenges for dancer-machine interaction.

7.3 Creativity

Participants: Sarah Fdili Alaoui, Wendy Mackay [correspondent], Tove Grimstad Bang, Manon Vialle, Xiaohan Peng, Janin Koch, Nicolas Taffin.

ExSitu is interested in understanding the work practices of creative professionals who push the limits of interactive technology. We follow a multi-disciplinary participatory design approach, working with both expert and non-expert users in diverse creative contexts.

We received two best paper awards. The first analyzes “creativity support” 22 as a construct that encodes different definitions of creative work. Drawing on existing literature and practices, the paper surfaces four views of creative work that underpin current creative technologies and HCI research: problem-solving, cognitive emergence, embodied action, and tool-mediated expert activity. Each view makes different claims about the role of computing in creative work and the creative subject assumed. We articulate the attendant politics of each view and illustrate how critical feminist epistemology can serve as an analytical tool to reason about the trade-offs of various creativity definitions. The paper concludes with recommendations for integrating feminist values into creativity-oriented HCI research.

The second presents a retrospective autoethnography grounded in data-driven design 20. The first author collected her movement data and subjective experience of learning the dance repertoire of modern dance pioneer Isadora Duncan, which together were encoded into the design of a set of plaster artefacts physicalizing her embodied dance learning progression. The artefacts reflect the first author's bodily transformation, mirroring her transition from discomfort to ease, and changes in her expressive capabilities. Our method offers an alternative way to document embodied learning through design. Throughout our design process we leverage the movement data, the field notes and the first author's memory of her journey, all of which constitute entangled and complementary input into her experience of dance learning. We show that the data physicalizations provided a gateway into the intangible experience and allowed for a deep and reflexive understanding of our dataset.

For design and art enthusiasts who seek to enhance their skills through instructional videos, following drawing instructions while practicing can be challenging. STIVi 26 presents perspective drawing demonstrations and commentary from prerecorded instructional videos as interactive drawing tutorials that students can navigate and explore at their own pace. Our approach involves a semi-automatic pipeline that assists instructors in creating STIVi content by extracting pen strokes from video frames and aligning them with the accompanying audio commentary. Thanks to this structured data, students can navigate through the transcript and the in-video drawing, refer to highlights provided in both modalities to guide their navigation, and explore variations of the drawing demonstration to understand fundamental principles. We evaluated STIVi's interactive tutorials against a regular video player. We observed that our interface supports non-linear learning styles by providing students with alternative paths for following and understanding drawing instructions. This work 28 was also presented at the IHM'24 Doctoral Consortium, describing how sketching can be combined with speech to support two different situations involving perspective drawing: drawing tutorials, where teachers can record themselves drawing while explaining how they draw, and presentations, where industrial designers express concepts through drawings and a commentary.

We also showed how industrial designers present their concepts at different stages of the design process 27. While they produce sets of sketches sparsely illustrating different aspects of the design in early stages, presentations targeting end-users or finalizing the design benefit from more polished and animated presentations, providing continuous transitions between the illustrated aspects and guiding the audience through a specifically crafted story, presenting the product in a lifelike, compelling manner. Such animated presentations are often kept for final design stages as they are costly to create and may require the creation of 3D models and renderings. We are interested in helping designers create dynamic presentations from these sparse sets of freehand sketches. To this end, we studied current design practices, their visual language, typical techniques and how they serve storytelling, through the analysis of existing presentation sketches and the organization of interviews and workshops with professional design practitioners.

7.4 Collaboration

Participants: Sarah Fdili Alaoui, Michel Beaudouin-Lafon [co-correspondent], Wendy Mackay [co-correspondent], Arthur Fages, Janin Koch, Theophanis Tsandilas.

ExSitu explores new ways of supporting collaborative interaction and remote communication.

To coordinate and understand past actions in a collaborative activity, co-workers typically access shared artifacts and the interaction histories provided by their tools. Alexandre Battut defended his Ph.D. thesis on this topic 34. Through interviews with knowledge workers who collaborate on shared text documents, we found that the scattering of historical data across collaborative and personal environments and the lack of compatibility between histories hinder coordination and event recall. The thesis explored the design of cross-application history tools and created OneTrace, a proof-of-concept system for sharing histories amongst applications and users based on a unified structure for representing interaction traces. Based on this system, we developed and evaluated TracePicker, a tool that lets users cluster traces to contextualize past actions recorded by OneTrace. Results showed that participants found the system helpful for communicating, understanding and contextualizing historical data 17. This research opens the way to collaborative, cross-application history-support systems such as OneTrace to support coordination and event recall.

Although video is extremely useful for expressing interaction, post-hoc editing makes it impractical for rapid prototyping. We presented an early-stage video-based design method, “editing-in-the-camera”, where title cards guide video capture and label video clips 24 (Honorable Mention award). This method lets designers easily create video prototypes that can be discussed within the same design session, without further editing. We created Video Clipper, a mobile app that embodies this method by transforming sequences of title cards into an interactive storyboard that designers can shoot video directly into. Video Clipper offers simple special effects to better illustrate user interaction with paper prototypes, including ghosting and stop-motion animation. We also developed Collaborative Video Clipper during the COVID-19 pandemic to support multi-user, multi-device rapid prototyping with remote participants. The evaluation of both applications, and our experiences in diverse educational and professional settings including brainstorming, interviewing, video prototyping, user studies and participatory design workshops, demonstrated the value of the “editing-in-the-camera” method and of a lightweight video capture tool that supports it.

Ex-Situ members are heavily involved in two major national projects around digital collaboration. The French National Research Infrastructure CONTINUUM 18 is a unique consortium of 30 platforms located throughout France for advancing interdisciplinary research between computer science, the humanities and social sciences. Through CONTINUUM, 37 research groups develop cutting-edge research focusing on visualization, immersion, interaction and collaboration, as well as human perception, cognition and behaviour in virtual/augmented reality. Ex-Situ hosts the WILD platform of CONTINUUM and serves as scientific director of the project.

Team members were heavily involved in spearheading, organizing and launching the 38M€ national network PEPR eNSEMBLE on the future of Digital Collaboration, which gathers over 80 research groups from multiple disciplines across France. PEPR eNSEMBLE is organized into five main areas covering all aspects of collaboration: collaboration in space, collaboration in time, collaboration with intelligent systems, collaboration at scale, and transversal aspects on ethics, methodology, regulation and economics. ExSitu is involved in all these areas as well as at the management level of the entire project (co-director of the programme and co-chair of one of the projects).

8 Bilateral contracts and grants with industry

8.1 Bilateral contracts with industry

Participants: Wendy Mackay, Wissal Sahel, Robert Falcasantos.

CAB: Cockpit and Bidirectional Assistant
  • Title:
    Smart Cockpit Project
  • Duration:
    Sept 2020 - August 2024
  • Coordinator:
    SystemX Technological Research Institute
  • Partners:
    • SystemX
    • EDF
    • Dassault
    • RATP
    • Orange
    • Inria
  • Inria contact:
    Wendy Mackay
  • Summary:
The goal of the CAB Smart Cockpit project is to define and evaluate an intelligent cockpit that integrates a bi-directional virtual agent that increases, in real time, the capacities of operators facing complex and/or atypical situations. The project seeks to develop a new foundation for sharing agency between human users and intelligent systems: to empower rather than deskill users by letting them learn throughout the process, and to let users maintain control, even as their goals and circumstances change.

9 Partnerships and cooperations

9.1 International research visitors

9.1.1 Visits of international scientists

Other international visits to the team
Jun KATO
  • Status:
    researcher
  • Institution of origin:
    AIST
  • Country:
    Japan
  • Dates:
    April 2024 - March 2025
  • Context of the visit:
    Scientific collaboration on tools for creativity support
  • Mobility program/type of mobility:
    sabbatical
Soya Park
  • Status:
    researcher
  • Institution of origin:
    MIT
  • Country:
    USA
  • Dates:
    February
  • Context of the visit:
    Talk
  • Mobility program/type of mobility:
    visit
Alan Dix
  • Status:
    researcher
  • Institution of origin:
Swansea University
  • Country:
UK
  • Dates:
    18-19 March
  • Context of the visit:
    Talk and lab visit
  • Mobility program/type of mobility:
    visit
Clemens Klokmose
  • Status:
    researcher
  • Institution of origin:
    Aarhus University
  • Country:
    Denmark
  • Dates:
    31 May - 5 June
  • Context of the visit:
    Scientific collaboration
  • Mobility program/type of mobility:
    visit

9.2 European initiatives

9.2.1 Horizon Europe

SustainML

- SustainML project on cordis.europa.eu

  • Title:
    Application Aware, Life-Cycle Oriented Model-Hardware Co-Design Framework for Sustainable, Energy Efficient ML Systems
  • Duration:
    From October 1, 2022 to September 30, 2025
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
    • PROYECTOS Y SISTEMAS DE MANTENIMIENTO SL (EPROSIMA EPROS), Spain
    • IBM RESEARCH GMBH (IBM), Switzerland
    • SAS UPMEM, France
    • KOBENHAVNS UNIVERSITET (UCPH), Denmark
    • DEUTSCHES FORSCHUNGSZENTRUM FUR KUNSTLICHE INTELLIGENZ GMBH (DFKI), Germany
    • RHEINLAND-PFALZISCHE TECHNISCHE UNIVERSITAT, Germany
  • Inria contact:
    Janin Koch
  • Coordinator:
  • Summary:
AI is increasingly becoming a significant factor in the CO2 footprint of the European economy. To avoid a conflict between sustainability and economic competitiveness, and to allow the European economy to leverage AI for its leadership in a climate-friendly way, new technologies to reduce the energy requirements of all parts of AI systems are needed. A key problem is the fact that tools (e.g. PyTorch) and methods that currently drive the rapid spread and democratization of AI prioritize performance and functionality while paying little attention to the CO2 footprint. As a consequence, we see rapid growth in AI applications, but far less growth in AI applications that are optimized for low power and sustainability. To change that, we aim to develop an interactive design framework and associated models, methods and tools that will foster energy efficiency throughout the whole life-cycle of ML applications: from the design and exploration phase that includes exploratory iterations of training, testing and optimizing different system versions, through the final training of the production systems (which often involves huge amounts of data, computation and epochs), and (where appropriate) continuous online re-training during deployment for the inference process. The framework will optimize the ML solutions based on the application tasks, across levels from hardware to model architecture. AI developers from all experience levels will be able to make use of the framework through its emphasis on human-centric, interactive, transparent design and functional knowledge cores, instead of the common black-box and fully automated optimization approaches in AutoML. The framework will be made available on the AI4EU platform and disseminated through close collaboration with initiatives such as the ICT-48 networks. It will also be directly exploited by the industrial partners representing various parts of the relevant value chain: from software framework, through hardware, to AI services.
OnePub

- OnePub project on cordis.europa.eu

  • Title:
    Single-source Collaborative Publishing
  • Type:
    ERC Proof-of-Concept
  • Duration:
    From October 1, 2023 to March 30, 2025
  • Partners:
    Université Paris-Saclay
  • Budget:
    150 Keuros public funding from ERC
  • Coordinator:
    Michel Beaudouin-Lafon
  • Summary:

Book publishing involves many stakeholders and a complex set of inflexible tools and formats. Current workflows are inefficient because authors and editors cannot make changes directly to the content once it has been laid out. They are costly because creating different output formats, such as PDF, ePub or HTML, requires manual labor. Finally, new requirements such as the European directive on accessibility incur additional costs and delays.

    The goal of the OnePub POC project is to demonstrate the feasibility and value of a book production workflow based on a set of collaborative editing tools and a single document source representing the “ground truth” of the book content and layout. The editing tools will be tailored to the needs of each stakeholder, e.g. author, editor or typesetter, and will feature innovative interaction techniques from the PI’s ERC project ONE.

    The project will focus on textbooks and academic publications as its testbed because they feature some of the most stringent constraints in terms of content types and content layout. They also run on tight deadlines, emphasizing the need for an efficient process.

    OnePub will define a unified format for the document source, create several collaborative document editors, and develop an open and extensible architecture so that new editors and add-ons can be added to the workflow. Together, these developments will set the stage for a new ecosystem for the publishing industry, with a level-playing field where software companies can provide components while publishers and their service contractors can decide which components to use for their workflows.

9.2.2 H2020 projects

ALMA

- ALMA project on cordis.europa.eu

  • Title:
    ALMA: Human Centric Algebraic Machine Learning
  • Duration:
    From September 1, 2020 to February 28, 2025
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
    • TEKNOLOGIAN TUTKIMUSKESKUS VTT OY (VTT), Finland
    • PROYECTOS Y SISTEMAS DE MANTENIMIENTO SL (EPROSIMA EPROS), Spain
    • ALGEBRAIC AI SL, Spain
    • DEUTSCHES FORSCHUNGSZENTRUM FUR KUNSTLICHE INTELLIGENZ GMBH (DFKI), Germany
    • RHEINLAND-PFALZISCHE TECHNISCHE UNIVERSITAT, Germany
    • FIWARE FOUNDATION EV (FIWARE), Germany
    • UNIVERSIDAD CARLOS III DE MADRID (UC3M), Spain
    • FUNDACAO D. ANNA DE SOMMER CHAMPALIMAUD E DR. CARLOS MONTEZ CHAMPALIMAUD (FUNDACAO CHAMPALIMAUD), Portugal
  • Inria contact:
    Wendy Mackay
  • Coordinator:
  • Summary:

Algebraic Machine Learning (AML) has recently been proposed as a new learning paradigm that builds upon Abstract Algebra and Model Theory. Unlike other popular learning algorithms, AML is not a statistical method; instead, it produces generalizing models from semantic embeddings of data into discrete algebraic structures, with the following properties:

P1: It is far less sensitive to the statistical characteristics of the training data and does not fit (or even use) parameters;

P2: It has the potential to seamlessly integrate unstructured and complex information contained in training data with a formal representation of human knowledge and requirements;

P3: It uses internal representations based on discrete sets and graphs, offering a good starting point for generating human-understandable descriptions of what, why and how it has been learned;

P4: It can be implemented in a distributed way that avoids centralized, privacy-invasive collections of large data sets in favor of a collaboration of many local learners at the level of learned partial representations.

The aim of the project is to leverage the above properties of AML for a new generation of interactive, human-centric Machine Learning systems that will:

    - Reduce bias and prevent discrimination by reducing dependence on statistical properties of training data (P1), integrating human knowledge with constraints (P2), and exploring the how and why of the learning process (P3)

    - Facilitate trust and reliability by respecting ‘hard’ human-defined constraints in the learning process (P2) and enhancing explainability of the learning process (P3)

    - Integrate complex ethical constraints into Human-AI systems by going beyond basic bias and discrimination prevention (P2) to interactively shaping the ethics related to the learning process between humans and the AI system (P3)

    - Facilitate a new distributed, incremental collaborative learning method by going beyond the dominant off-line and centralized data processing approach (P4)

9.3 National initiatives

PEPR eNSEMBLE

- web site

  • Title:
    Future of Digital Collaboration
  • Type:
    PEPR Exploratoire
  • Duration:
    2022 – 2030
  • Coordinator:
    Gilles Bailly, Michel Beaudouin-Lafon, Stéphane Huot, Laurence Nigay
  • Pilots:
    • Centre National de la Recherche Scientifique (CNRS)
    • Institut National de Recherche en Informatique et Automatique (Inria)
    • Université Grenoble Alpes
    • Université Paris-Saclay
  • Partners:
    • Institut Mines Télécom
    • Sorbonne Université
    • Université de Lille
    • Université de Lyon 1
    • Université de Toulouse 3
  • Budget:
    38.25 Meuros public funding from ANR / France 2030
  • Summary:

    The purpose of eNSEMBLE is to fundamentally redefine digital tools for collaboration. Whether it is to reduce our travel, to better mesh the territory and society, or to face the forthcoming problems and transformations of the next decades, the challenges of the 21st century will require us to collaborate at an unprecedented speed and scale.

    To address this challenge, a paradigm shift in the design of collaborative systems is needed, comparable to the one that saw the advent of personal computing. To achieve this goal, we need to invent mixed (i.e. physical and digital) collaboration spaces that do not simply replicate the physical world in virtual environments, enabling co-located and/or geographically distributed teams to work together smoothly and efficiently.

    Beyond this technological challenge, the eNSEMBLE project also addresses sovereignty and societal challenges: by creating the conditions for interoperability between communication and sharing services in order to open up the "private walled gardens" that currently require all participants to use the same services, we will enable new players to offer solutions adapted to the needs and contexts of use. Users will thus be able to choose combinations of potentially "intelligent" tools and services for defining mixed collaboration spaces that meet their needs without compromising their ability to exchange with the rest of the world. By making these services more accessible to a wider population, we will also help reduce the digital divide.

    These challenges require a major long-term investment in multidisciplinary work (Computer Science, Ergonomics, Cognitive Psychology, Sociology, Design, Law, Economics) of both theoretical and empirical nature. The scientific challenges addressed by eNSEMBLE are:

    • Designing novel collaborative environments and conceptual models;
    • Combining human and artificial agency in collaborative set-ups;
    • Enabling fluid collaborative experiences that support interoperability;
    • Supporting the creation of healthy and sustainable collectives; and
    • Specifying socio-technical norms with legal/regulatory frameworks.

    eNSEMBLE will impact many sectors of society - education, health, industry, science, services, public life, leisure - by improving productivity, learning, care and well-being, as well as participatory democracy.

CONTINUUM
  • Title:
    Collaborative continuum from digital to human
  • Type:
    EQUIPEX+ (Equipement d'Excellence)
  • Duration:
    2020 – 2029
  • Coordinator:
    Michel Beaudouin-Lafon
  • Partners:
    • Centre National de la Recherche Scientifique (CNRS)
    • Institut National de Recherche en Informatique et Automatique (Inria)
    • Commissariat à l'Energie Atomique et aux Energies Alternatives (CEA)
    • Université de Rennes 1
    • Université de Rennes 2
    • Ecole Normale Supérieure de Rennes
    • Institut National des Sciences Appliquées de Rennes
    • Aix-Marseille University
    • Université de Technologie de Compiègne
    • Université de Lille
    • Ecole Nationale d'Ingénieurs de Brest
    • Ecole Nationale Supérieure Mines-Télécom Atlantique Bretagne-Pays de la Loire
    • Université Grenoble Alpes
    • Institut National Polytechnique de Grenoble
    • Ecole Nationale Supérieure des Arts et Métiers
    • Université de Strasbourg
    • COMUE UBFC Université de Technologie Belfort Montbéliard
    • Université Paris-Saclay
    • Télécom Paris - Institut Polytechnique de Paris
    • Ecole Normale Supérieure Paris-Saclay
    • CentraleSupélec
    • Université de Versailles - Saint-Quentin
  • Budget:
    13.6 Meuros public funding from ANR
  • Summary:
The CONTINUUM project will create a collaborative research infrastructure of 30 platforms located throughout France, to advance interdisciplinary research based on interaction between computer science and the human and social sciences. Thanks to CONTINUUM, 37 research teams will develop cutting-edge research programs focusing on visualization, immersion, interaction and collaboration, as well as on human perception, cognition and behaviour in virtual/augmented reality, with potential impact on societal issues. CONTINUUM enables a paradigm shift in the way we perceive, interact, and collaborate with complex digital data and digital worlds by putting humans at the center of the data processing workflows. The project will empower scientists, engineers and industry users with a highly interconnected network of high-performance visualization and immersive platforms to observe, manipulate, understand and share digital data, real-time multi-scale simulations, and virtual or augmented experiences. All platforms will feature facilities for remote collaboration with other platforms, as well as mobile equipment that can be lent to users to facilitate onboarding. CONTINUUM is on the roadmap of National Research Infrastructures.
GLACIS
  • Title:
    Graphical Languages for Creating Infographics
  • Funding:
    ANR
  • Duration:
    2022 - 2025
  • Coordinator:
    Theophanis Tsandilas
  • Partners:
    • Inria Saclay (Theophanis Tsandilas, Michel Beaudouin-Lafon, Pierre Dragicevic)
    • Inria Sophia Antipolis (Adrien Bousseau)
    • École Centrale de Lyon (Romain Vuillemot)
    • University of Toronto (Fanny Chevalier)
  • Inria contact:
    Theophanis Tsandilas
  • Summary:
This project investigates interactive tools and techniques that can help graphic designers, illustrators, data journalists, and infographic artists produce creative and effective visualizations for communication purposes, e.g., to inform the public about the evolution of a pandemic or to help novices interpret global-warming predictions.
Living Archive
  • Title:
    Interactive Documentation of Dance Heritage
  • Funding:
    ANR JCJC
  • Duration:
    2020 – 2024
  • Coordinator:
    Sarah Fdili Alaoui
  • Partners:
    Université Paris Saclay
  • Inria contact:
    Sarah Fdili Alaoui
  • Summary:
The goal of this project is to design accessible, flexible and adaptable interactive systems that allow practitioners to easily document their dance using their own methods and personal artifacts, emphasizing their first-person perspective. We ground our methodology in action research, where long-term commitment to fieldwork and collaboration allows us simultaneously to contribute to knowledge in Human-Computer Interaction and to benefit the communities of practice. More specifically, the interactive systems will allow dance practitioners to generate interactive repositories made of self-curated collections of heterogeneous materials that capture and document their dance practices from their first-person perspective. We will deploy these systems in real-world situations through long-term fieldwork that aims both at assessing the technology and at benefiting the communities of practice, exemplifying socially relevant, collaborative, and engaged research.

9.4 Public policy support

Michel Beaudouin-Lafon was vice-chair of the ACM Global Technology Policy Council until June 2024 and has since been Chair of the ACM Europe Technology Policy Committee. ACM's Global Technology Policy Council sets the agenda for ACM's global policy activities and serves as the central convening point for ACM's interactions with government organizations, the computing community, and the public in all matters of public policy related to computing and information technology. The ACM Europe Technology Policy Committee promotes dialogue and the exchange of ideas on technology and computing policy issues with the European Commission and other governmental bodies in Europe, and with the informatics and computing communities. The Committee engages in policy issues related to the importance of technology in boosting jobs, economic growth, competition, investment, research and development, education, inclusive social development, and innovation.

10 Dissemination

10.1 Promoting scientific activities

10.1.1 Scientific events: organisation

Member of the organizing committees
  • IEEE VR 2025, IEEE Conference on Virtual Reality and 3D User Interfaces, Publicity Co-Chair: Alexandre Kabil
  • IHM 2024, International Francophone Conference on Human-Computer Interaction, Local organizer and student volunteer manager: Camille Gobert
  • IHM 2024, International Francophone Conference on Human-Computer Interaction, Ph.D. dissertation awards committee: Alexandre Kabil and Theophanis Tsandilas
  • jfXR 24, Journées Françaises de la Réalité Étendue, Organizing Co-Chair: Alexandre Kabil
  • ACM CHI 2024, ACM Human Factors in Computing Systems, Interactivity: Wendy Mackay

10.1.2 Scientific events: selection

Chair of conference program committees
  • ACM UIST 2024 Doctoral Symposium, ACM Symposium on User Interface Software and Technology: Program Co-Chair: Wendy Mackay
  • DIS 2024, ACM Designing Interactive Systems Conference: Technical Program Co-Chair: Sarah Fdili Alaoui
  • ACM UIST 2025, ACM Symposium on User Interface Software and Technology: Program Committee Co-Chair: Wendy Mackay
  • ACM TEI 2025 Graduate Symposium, ACM International Conference on Tangible, Embedded and Embodied Interaction: Technical Program Co-Chair: Wendy Mackay
Member of the conference program committees
  • ACM CHI 2025, ACM CHI Conference on Human Factors in Computing Systems: Tove Grimstad Bang, Theophanis Tsandilas
  • ACM UIST 2024, ACM Symposium on User Interface Software and Technology: Michel Beaudouin-Lafon, Wendy Mackay
  • IEEE VIS 2024, IEEE Visualization and Visual Analytics Conference: Theophanis Tsandilas
  • ACM DIS 2024, ACM Designing Interactive Systems Conference: Tove Grimstad Bang
  • MOCO 2024, International Conference on Movement and Computing, Steering Committee Chair: Sarah Fdili Alaoui
Reviewer
  • ACM CHI 2024, ACM CHI Conference on Human Factors in Computing Systems: Anna Offenwanger, Wendy Mackay, Camille Gobert, Alexandre Kabil
  • ACM CHI 2024, ACM Human Factors in Computing Systems, Late-Breaking Papers: Wendy Mackay
  • ACM UIST 2024, ACM Symposium on User Interface Software and Technology: Camille Gobert
  • ACM C&C 2024, ACM Conference on Creativity & Cognition: Tove Grimstad Bang, Wendy Mackay
  • ACM TEI 2025, ACM International Conference on Tangible, Embedded and Embodied Interaction: Anna Offenwanger, Tove Grimstad Bang
  • IEEE ISMAR 2024, IEEE International Symposium on Mixed and Augmented Reality: Theophanis Tsandilas
  • IEEE PacificVis 2025, IEEE Pacific Visualization Conference: Alexandre Kabil
  • IEEE VIS 2024 short papers, IEEE Visualization and Visual Analytics Conference: Theophanis Tsandilas

10.1.3 Journal

Member of the editorial boards
  • Editor for the Human-Computer Interaction area of the ACM Books Series: Michel Beaudouin-Lafon (2013-)
  • TOCHI, ACM Transactions on Computer-Human Interaction: Michel Beaudouin-Lafon (2009-), Wendy Mackay (2016-)
  • ACM Tech Briefs: Michel Beaudouin-Lafon (2021-)
  • ACM New Publications Board: Wendy Mackay (2020-)
  • CACM Editorial Board Online: Wendy Mackay (2020-)
  • CACM Website Redesign: Wendy Mackay (2022)
  • Frontiers of Computer Science, Special Issue on Hybrid Human Artificial Intelligence: Janin Koch (Editor) (2023-)
Reviewer - reviewing activities
  • IEEE TVCG, IEEE Transactions on Visualization and Computer Graphics: Theophanis Tsandilas
  • IWC 2024, Interacting With Computers Journal: Tove Grimstad Bang
  • Sage Information Visualization: Theophanis Tsandilas

10.1.4 Invited talks

  • Keynote at DATAIA Aristote-IA Hybride, “Comment faire le lien entre l'IA et la Créativité”, Saclay, 18 January 2024: Wendy Mackay
  • Keynote for the CHI'24 Lifetime Research Award, “The Design of Interactive Things: From Theory and Back Again” (aka “Wendy's Words of Wisdom”), CHI'24 Conference, Hawaii, 13 May 2024: Wendy Mackay
  • Invited talk at the CHI'24 SIG on “Transforming HCI Research Cycles using Generative AI and 'Large Whatever Models' (LWMs)”, “Ethics and Human-AI Collaboration”, Hawaii, 14 May 2024: Wendy Mackay
  • Keynote at the Inria Bordeaux-Waterloo Workshop, “Designing with Generative Theories of Interaction”, Bordeaux, 22 February 2024: Wendy Mackay
  • Keynote at the HyCHA Conference, “Les partenariats humain-machine : Interagir avec l'intelligence artificielle”, 27-28 March 2024: Wendy Mackay
  • Keynote at the HHAI Doctoral Consortium, “Defining a Thesis Topic”, Munich, Germany, 11 June 2024: Wendy Mackay
  • Invited Address for the MIT Symposium – Lifetime Research Award, “Wendy's Words of Wisdom”, Cambridge, MA, USA, 8 July 2024: Wendy Mackay
  • Talks at the MIT/Harvard Workshop on Generative Theories of Interaction, Cambridge, MA, USA, 9 July 2024: Michel Beaudouin-Lafon and Wendy Mackay
  • UIST'24 Vision Talk, “Parasitic or Symbiotic? Redefining our Relationship with Intelligent Systems”, Pittsburgh, PA, USA, 14 October 2024: Wendy Mackay
  • Inria Nancy Center, No format to rule them all: Multiformat publishing with provenance in mind, 25 November 2024: Yann Trividic
  • Université de Strasbourg, Padatrad, Propage, OnePub: three publishing factories, 15 May 2024: Yann Trividic
  • Keynote at the International Conference Interactivity & Game Creation ArtsIT 2024, NYU Abu Dhabi, 2024 : Sarah Fdili Alaoui
  • Keynote at the seminar Between Research and Creation: Epistemologies, Models, Practices, IRCAM, Paris, 2024: Sarah Fdili Alaoui
  • Invited Talk at the Dyson School at Imperial College, London, UK, 2024: Sarah Fdili Alaoui
  • Invited talk at the Seminar MODINA at STL, Tallinn, Estonia, 2024: Sarah Fdili Alaoui
  • Invited talk at the Seminar MODINA at Tanzhaus in Dusseldorf, Germany, 2024: Sarah Fdili Alaoui
  • Invited talk at the Seminar MODINA at Trafo in Budapest, Hungary, 2024: Sarah Fdili Alaoui
  • Invited Talk at the HCC seminar series at King’s College, London, 2024: Sarah Fdili Alaoui
  • Demonstration at the “soirée des démonstrations”, IHM conference, Sorbonne Université, Paris, 2024: Sarah Fdili Alaoui
  • Invited talk at the MSC Creative Computing CCI, UAL, 2024: Sarah Fdili Alaoui
  • LII team, ENAC (Toulouse), Projecting Computer Languages for a Protean Interaction, 19 June 2024: Camille Gobert
  • GL–IHM working group (online), Lorgnette: Creating Malleable Code Projections, 3 July 2024: Camille Gobert
  • Coast team, LORIA (Nancy), No format to rule them all: multiformat publishing with provenance in mind, 28 November 2024: Camille Gobert
  • DiverSE team, IRISA (Rennes), Projecting Computer Languages for a Protean Interaction, 5 December 2024: Camille Gobert
  • Catala seminar, Inria (Paris), Projecting Computer Languages for a Protean Interaction, 9 December 2024: Camille Gobert
  • Séminaire Histoire de l'Informatique, CNAM (Paris), Interface graphique et IHM, 26 November 2024: Michel Beaudouin-Lafon
  • ACM Europe Technology Policy Committee webinar (online), 7 February 2024, Panel on Immersive Technologies: Michel Beaudouin-Lafon

10.1.5 Leadership within the scientific community

  • Deputy director of LISN, Université Paris-Saclay / CNRS: Michel Beaudouin-Lafon
  • Scientific director of CONTINUUM national research infrastructure: Michel Beaudouin-Lafon
  • Co-director of PEPR eNSEMBLE on the future of digital collaboration: Michel Beaudouin-Lafon
  • Co-chair of PEPR eNSEMBLE Transverse project (PC5): Wendy Mackay
  • Co-chair of PEPR ICCARE "Publishing" sector: Michel Beaudouin-Lafon
  • Vice-chair of ACM Global Technology Policy Council (-June 2024): Michel Beaudouin-Lafon
  • Chair of ACM Europe Technology Policy Committee (July 2024-): Michel Beaudouin-Lafon

10.1.6 Scientific expertise

  • “Comité de sélection”, Assistant Professor position, Grenoble IAE: Michel Beaudouin-Lafon
  • Assessment of faculty candidates committee, Aarhus University (Denmark): Michel Beaudouin-Lafon
  • Assessment of scientific project grants, Wallenberg Foundation, Sweden : Wendy Mackay
  • Hybrid AI Scientific Advisory Board, the Netherlands : Wendy Mackay
  • SIMTECH Advisory Board, Stuttgart University, Germany : Wendy Mackay
  • Humans & Technology Advisory Board, TU Chemnitz Institute for Media Research, Germany : Wendy Mackay
  • Assessment of PhD candidates, eNSEMBLE PEPR: Michel Beaudouin-Lafon, Wendy Mackay (jury members)

10.1.7 Research administration

  • “CCP (Commission Consultative Paritaire)”, Inria: Wendy Mackay (president)
  • “Commission Scientifique”, Inria: Janin Koch (member)
  • Evaluation committee of École Doctorale STIC, Université Paris-Saclay: Theophanis Tsandilas (member)
  • “Référent Données” pour Inria Saclay: Theophanis Tsandilas

10.2 Teaching - Supervision - Juries

10.2.1 Teaching

  • Interaction & HCID Masters: Michel Beaudouin-Lafon, Wendy Mackay, Fundamentals of Situated Interaction, 21 hrs, M1/M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Sarah Fdili Alaoui, Creative Design, 21 hrs, M1/M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Sarah Fdili Alaoui, Studio Art Science in collaboration with Centre Pompidou, 21 hrs, M1/M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Michel Beaudouin-Lafon, Fundamentals of Human-Computer Interaction, 21 hrs, M1/M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Michel Beaudouin-Lafon, Groupware and Collaborative Interaction, 21 hrs, M1/M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Wendy Mackay and Janin Koch, Design of Interactive Systems, 42 hrs, M1/M2, Univ. Paris-Saclay
  • Interaction & HCID Masters: Wendy Mackay and Janin Koch, Advanced Design of Interactive Systems, 21 hrs, M1/M2, Univ. Paris-Saclay
  • Licence Informatique: Michel Beaudouin-Lafon, Introduction to Human-Computer Interaction, 9 hrs, second year, Univ. Paris-Saclay
  • Diplôme ARRC (Année de Recherche en Recherche-Création): Sarah Fdili Alaoui, John Sullivan, Atelier Interaction Humain Machine, 24 hrs, École Normale Supérieure Paris-Saclay
  • Interaction & HCID Masters: Sarah Fdili Alaoui, John Sullivan, Studio Art Science, 21 hrs, M1/M2, Univ. Paris-Saclay
  • Master class on Generative Theories of Interaction, MIT, 10 July 2024: Wendy Mackay and Michel Beaudouin-Lafon

10.2.2 Supervision

PhD students

  • PhD in progress: Matthieu Savary, Framework de design des interactions tripartites du partenariat Soignants-Patients-Chercheurs, since October 2024. Advisors: Roland Cahen (ENSCI & ENS Paris-Saclay), Wendy Mackay, Michel Beaudouin-Lafon
  • PhD in progress: Xiaohan Peng, Designing Interactive Human Computer Drawing Experiences, since October 2023. Advisors: Wendy Mackay, Janin Koch
  • PhD in progress: Yann Trividic, Chaînes éditoriales collaboratives single-source pour le creative coding, since October 2023. Advisors: Michel Beaudouin-Lafon, Wendy Mackay
  • PhD in progress: Vincent Bonczak, Expressive Languages for Creative Visualization Sketching, since October 2023. Advisor: Theophanis Tsandilas
  • PhD in progress: Léa Paymal, Design de qualités expérientielles alternatives et non utilitaires pour les technologies de la maison intelligente, since September 2023. Advisor: Sarah Fdili Alaoui
  • PhD in progress: Léo Chédin, La documentation et la transmission de la danse d'une perspective à la première personne, since September 2023. Advisors: Sarah Fdili Alaoui and Baptiste Caramiaux
  • PhD in progress: Eya Ben Chaaben, Exploring Human-AI Collaboration and Explainability for Sustainable ML, since November 2022. Advisors: Wendy Mackay, Janin Koch
  • PhD in progress: Romane Dubus, Co-adaptive Instruments for Smart Cockpits, since October 2022. Advisors: Wendy Mackay and Anke Brock
  • PhD in progress: Tove Grimstad Bang. Somaesthetics applied to dance documentation and transmission, since September 2021. Advisor: Sarah Fdili Alaoui
  • PhD in progress: Capucine Nghiem, Speech-Assisted Design Sketching with an Application to e-Learning, since October 2021. Advisors: Theophanis Tsandilas and Adrien Bousseau (Inria Sophia-Antipolis)
  • PhD in progress: Anna Offenwanger, Grammars and Tools for Sketch-Driven Visualization Design, since October 2021. Advisors: Theophanis Tsandilas and Fanny Chevalier (University of Toronto)
  • PhD defended on 18 March 2024: Camille Gobert, Projecting Computer Languages for a Protean Interaction, since October 2020. Advisor: Michel Beaudouin-Lafon
  • PhD defended on 4 April 2024: Wissal Sahel, Participatory Design to Support Power Grid Operators in Control Rooms, since November 2020. Advisor: Wendy Mackay.
  • PhD defended on 3 June 2024: Alexandre Battut, Interaction Substrates and Instruments for Interaction Histories, since April 2020. Advisor: Michel Beaudouin-Lafon
  • PhD defended on 23 October 2024: Martin Tricaud, Designing Interactions, Interacting with Design: Towards Instrumentality and Materiality in Procedural Computer Graphics, and Beyond, since October 2019. Advisor: Michel Beaudouin-Lafon

10.2.3 PhD Juries

  • PhD defense of Mehdi Chakhchoukh, “Visualization to Support Multi-Criteria Decision-Making in Agronomy”, Université Paris-Saclay, 4 December 2024: Wendy Mackay (president)
  • PhD defense of Vaynee Sungeelee, “Human-Machine Co-learning: Interactive curriculum generation for the acquisition of motor skills”, Sorbonne University, 27 September 2024: Wendy Mackay (examiner)
  • PhD defense of Tingying He, “Encoding with Patterns: A Design Space and Evaluations”, Université Paris-Saclay, 6 September 2024: Theophanis Tsandilas (examiner)
  • PhD defense of Markus Klar, “Simulating Interaction Movements via Model Predictive Control and Deep Reinforcement Learning”, Bayreuth University (Germany), 21 February 2024: Michel Beaudouin-Lafon (examiner)

10.2.4 Habilitation Juries

  • Habilitation of Cédric Fleury, “Supporting Collaboration in Large Interactive Spaces”, Université Paris-Saclay, 14 February 2024: Michel Beaudouin-Lafon (examiner)
  • Habilitation of Audrey Serna, “Supporting Meaningful and Adapted Experience to Foster Motivation and Sustained Engagement”, INSA and Université Claude Bernard Lyon I, 11 December 2024: Wendy Mackay (examiner)

10.3 Popularization

  • Performance For Patricia during the MODINA (Movement, Digital Intelligence and Interactive Audience) tour in Europe (STL, Tallinn, 2024): Sarah Fdili Alaoui, Léo Chédin, Johnny Sullivan
  • Performance For Patricia during the MODINA tour in Europe (Tanzhaus, Düsseldorf, 2024): Sarah Fdili Alaoui, Léo Chédin, Johnny Sullivan
  • Performance For Patricia during the MODINA tour in Europe (Budapest, Hungary, 2024): Sarah Fdili Alaoui, Léo Chédin, Johnny Sullivan
  • Performance For Patricia, Scène de Recherche, École Normale Supérieure Paris-Saclay, 2024: Sarah Fdili Alaoui, Léo Chédin, Johnny Sullivan
  • Conference at Format(s) festival, Strasbourg, Propage : Vers des contenus libres et accessibles pour l'édition indépendante, October 2024: Yann Trividic
  • Conference at ÉSAD Pyrénées, Pau, Auto-défense économique, 31 October 2024: Yann Trividic
  • Participation in Ostinato #1 : Articuler, a roundtable on translation at the Scène nationale Carré-Colonnes, Blanquefort, 29 November 2024: Yann Trividic
  • Presentation of the national research programs and IR+ research infrastructures in HCI/XR, XR Meetup, Paris XR, 19 December 2024: Alexandre Kabil
  • Interview for Interstices (Inria) with journalist Nolwenn Le Jannic: Wendy Mackay

11 Scientific production

11.1 Major publications

  • 1. Jessalyn Alvina, Joseph Malloch and Wendy Mackay. Expressive Keyboards: Enriching Gesture-Typing on Mobile Devices. Proceedings of the 29th ACM Symposium on User Interface Software and Technology (UIST 2016), Tokyo, Japan, ACM, October 2016, 583-593. HAL, DOI.
  • 2. Ignacio Avellino, Cédric Fleury, Wendy Mackay and Michel Beaudouin-Lafon. CamRay: Camera Arrays Support Remote Collaboration on Wall-Sized Displays. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '17), Denver, United States, ACM, May 2017, 6718-6729. HAL, DOI.
  • 3. Michel Beaudouin-Lafon, Susanne Bødker and Wendy Mackay. Generative Theories of Interaction. ACM Transactions on Computer-Human Interaction 28(6), November 2021, Article 45, 54 pages. HAL, DOI.
  • 4. Marianela Ciolfi Felice, Nolwenn Maudet, Wendy Mackay and Michel Beaudouin-Lafon. Beyond Snapping: Persistent, Tweakable Alignment and Distribution with StickyLines. Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16), Tokyo, Japan, October 2016. HAL, DOI.
  • 5. Alexander Eiselmayer, Chat Wacharamanotham, Michel Beaudouin-Lafon and Wendy Mackay. Touchstone2: An Interactive Environment for Exploring Trade-offs in HCI Experiment Design. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI 2019), Glasgow, United Kingdom, ACM, May 2019, 1-11. HAL.
  • 6. Jules Françoise, Sarah Fdili Alaoui and Yves Candau. CO/DA: Live-Coding Movement-Sound Interactions for Dance Improvisation. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22), New Orleans, LA, United States, ACM, April 2022, 1-13. HAL, DOI.
  • 7. Janin Koch, Nicolas Taffin, Michel Beaudouin-Lafon, Markku Laine, Andrés Lucero and Wendy Mackay. ImageSense: An Intelligent Collaborative Ideation Tool to Support Diverse Human-Computer Partnerships. Proceedings of the ACM on Human-Computer Interaction 4(CSCW1), May 2020, 1-27. HAL, DOI.
  • 8. Wanyu Liu, Rafael Lucas D'Oliveira, Michel Beaudouin-Lafon and Olivier Rioul. BIGnav: Bayesian Information Gain for Guiding Multiscale Navigation. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI 2017), Denver, United States, May 2017, 5869-5880. HAL, DOI.
  • 9. Yujiro Okuya, Nicolas Ladeveze, Cédric Fleury and Patrick Bourdot. ShapeGuide: Shape-Based 3D Interaction for Parameter Modification of Native CAD Data. Frontiers in Robotics and AI 5, November 2018. HAL, DOI.
  • 10. Mirjana Prpa, Sarah Fdili Alaoui, Thecla Schiphorst and Philippe Pasquier. Articulating Experience: Reflections from Experts Applying Micro-Phenomenology to Design Research in HCI. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20), Honolulu, HI, United States, ACM, April 2020, 1-14. HAL, DOI.
  • 11. Miguel Renom, Baptiste Caramiaux and Michel Beaudouin-Lafon. Exploring Technical Reasoning in Digital Tool Use. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI 2022), New Orleans, LA, United States, April 2022, 1-17. HAL, DOI.
  • 12. Téo Sanchez, Baptiste Caramiaux, Pierre Thiel and Wendy E. Mackay. Deep Learning Uncertainty in Machine Teaching. IUI 2022 - 27th Annual Conference on Intelligent User Interfaces, Helsinki / Virtual, Finland, February 2022. HAL, DOI.
  • 13. Theophanis Tsandilas and Pierre Dragicevic. Gesture Elicitation as a Computational Optimization Problem. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '22), New Orleans, United States, April 2022. HAL, DOI.
  • 14. Theophanis Tsandilas. Fallacies of Agreement: A Critical Review of Consensus Assessment Methods for Gesture Elicitation. ACM Transactions on Computer-Human Interaction 25(3), June 2018, 1-49. HAL, DOI.
  • 15. Theophanis Tsandilas. StructGraphics: Flexible Visualization Design through Data-Agnostic and Reusable Graphical Structures. IEEE Transactions on Visualization and Computer Graphics 27(2), October 2020, 315-325. HAL.

11.2 Publications of the year

International journals

International peer-reviewed conferences

  • 20. Tove Grimstad Bang, Sarah Fdili Alaoui, Guro Tyse, Elisabeth Schwartz and Frédéric Bevilacqua. A Retrospective Autoethnography Documenting Dance Learning Through Data Physicalisations. Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS 2024), Copenhagen, Denmark, July 2024, 2357-2373. HAL, DOI.
  • 21. Romane Dubus. Automation Surprises in Safety-Critical Systems: Investigating Challenges and Solutions for the Autoflight System. IHM'24 - Actes étendus de la 35e Conférence Internationale Francophone sur l'Interaction Humain-Machine, Paris, France, March 2024. HAL.
  • 22. Stacy Hsueh, Marianela Ciolfi Felice, Sarah Fdili Alaoui and Wendy E. Mackay. What Counts as ‘Creative’ Work? Articulating Four Epistemic Positions in Creativity-Oriented HCI Research. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI 2024), Honolulu, United States, May 2024, 1-15. HAL, DOI.
  • 23. Inês Lobo, Janin Koch, Jennifer Renoux, Inês Batina and Rui Prada. When Should I Lead or Follow: Understanding Initiative Levels in Human-AI Collaborative Gameplay. Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS 2024), Copenhagen, Denmark, ACM, July 2024, 2037-2056. HAL, DOI.
  • 24. Wendy E. Mackay, Alexandre Battut, Germán Leiva and Michel Beaudouin-Lafon. VideoClipper: Rapid Prototyping with the "Editing-in-the-Camera" Method. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI 2024), Honolulu, United States, May 2024, 1-14. HAL, DOI.
  • 25. Wendy E. Mackay. Parasitic or Symbiotic? Redefining our Relationship with Intelligent Systems. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (UIST 2024), Pittsburgh, PA, United States, October 2024, 1-2. HAL, DOI.
  • 26. Capucine Nghiem, Adrien Bousseau, Mark Sypesteyn, Jan Willem Hoftijzer, Maneesh Agrawala and Theophanis Tsandilas. STIVi: Turning Perspective Sketching Videos into Interactive Tutorials. Graphics Interface (GI'24), Halifax, Canada, June 2024. HAL, DOI.
  • 27. Capucine Minh-Giang Nghiem, Adrien Bousseau, Mark Sypesteyn, Janwillem Hoftijzer and Theophanis Tsandilas. Sketch Presentation for Product Design. IHM'24 - 35e Conférence Internationale Francophone sur l'Interaction Humain-Machine, Paris, France, March 2024, 1-4. HAL, DOI.
  • 28. Capucine Nghiem. Sketching in Education and Concept Presentation Contexts. IHM'24 - Actes étendus de la 35e Conférence Internationale Francophone sur l'Interaction Humain-Machine, Paris, France, March 2024. HAL.
  • 29. Léa Paymal and Sarah Homewood. Good Days, Bad Days: Understanding the Trajectories of Technology Use During Chronic Fatigue Syndrome. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI 2024), Honolulu, HI, United States, ACM, May 2024, 1-10. HAL, DOI.
  • 30. Xiaohan Peng. Designing Expressive Interaction with Generative Artificial Intelligence. HHAI 2024 - The Third International Conference on Hybrid Human-Artificial Intelligence (Frontiers in Artificial Intelligence and Applications, Vol. 386: Hybrid Human AI Systems for the Social Good), Malmö, Sweden, IOS Press, June 2024, 1-9. HAL, DOI.
  • 31. Xiaohan Peng, Janin Koch and Wendy E. Mackay. DesignPrompt: Using Multimodal Interaction for Design Exploration with Generative AI. Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS 2024), Copenhagen, Denmark, July 2024, 1-15. HAL, DOI.
  • 32. Xiao Xiao and Sarah Fdili Alaoui. Tuning In to Intangibility: Reflections from My First 3 Years of Theremin Learning. Proceedings of the 2024 ACM Designing Interactive Systems Conference (DIS 2024), Copenhagen, Denmark, ACM, July 2024, 2649-2659. HAL, DOI.

Conferences without proceedings

  • 33. Anastasiya Zakreuskaya, Tobias Münch, Henrik Detjen, Sven Mayer, Passant Elagroudy, Bastian Pfleging, Fiona Draxler, Benjamin Weyers, Uwe Gruenefeld, Jonas Auda, Waldemar Titov, Wendy E. Mackay, Daniel Buschek and Thomas Kosch. Workshop on Generative Artificial Intelligence in Interactive Systems: Experiences from the Community. Mensch und Computer 2024, Karlsruhe, Germany, Gesellschaft für Informatik e.V., 2024, 99-110. HAL, DOI.

Doctoral dissertations and habilitation theses

  • 34. Alexandre Battut. Interaction Substrates and Instruments for Interaction Histories. PhD thesis, Université Paris-Saclay, June 2024. HAL.
  • 35. Camille Gobert. Projecting Computer Languages for a Protean Interaction. PhD thesis, Université Paris-Saclay, March 2024. HAL.
  • 36. Wissal Sahel. Participatory Design to Support Power Grid Operators in Control Rooms. PhD thesis, Université Paris-Saclay, April 2024. HAL.
  • 37. Martin Tricaud. Designing Interactions, Interacting with Design: Towards Instrumentality and Materiality in Procedural Computer Graphics and Beyond. PhD thesis, Université Paris-Saclay, October 2024. HAL.

Reports & preprints

  • 38. Roland Cahen, Matthieu Savary, Roman Weil, Bianca Pica, Charlie Gouin, Gwenael Douillard, Leia Jekel, Talia Sander, Adèle Collard, Alice Leso, Anton Chabert, Ariane Eljam, Colette Degennaro, Enrique Guzmán Sánchez, Eva Delaunay, Faustine Goldberg, Florian Spiteri, Jade Ogata, Louis-Jacques Dagorn Le Masson, Lucie Fleury, Pierre-Louis Fillon, Rebecca Morel-Maroger, Zakine Jacobs, Karen Selene and Lïa Gauthier. Documentary Research Notebook of the IMPACT 2025 Project Workshop, ENSCi les Ateliers: Key References for Raising Awareness Among 14-to-25-year-olds About Road Risks. October 2024. HAL.

Scientific popularization

11.3 Cited publications

  • 40. Passant El Agroudy, Kaisa Väänänen, Jie Li, Paul Lukowicz, Hiroshi Ishii, Wendy E. Mackay, Elizabeth Churchill, Anicia Peters, Antti Oulasvirta, Rui Prada, Alexandra Deining, Giulia Barbareschi, Agnes Gruenerbl, Midori Kawaguchi, Abdallah El Ali, Fiona Draxler, Robin Welsch and Albrecht Schmidt. CHI'24 SIG on Transforming HCI Research Cycles using Generative AI and “Large Whatever Models” (LWMs). CHI EA '24: Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, Hawaii, HI, ACM, May 2024, 5 pages. DOI.
  • 41. Michel Beaudouin-Lafon, Susanne Bødker and Wendy Mackay. Generative Theories of Interaction. ACM Transactions on Computer-Human Interaction 28(6), November 2021. HAL, DOI.