TheyDrawIt!: An Authoring Tool for Belief-Driven Visualization

TLDR: It’s natural to bring our prior knowledge about a topic to bear when we interpret visualized data. Interactive visualizations can acknowledge the impact of prior beliefs by eliciting users’ expectations directly, with benefits to users’ comprehension and engagement, authors’ understanding of their users, and more. We describe TheyDrawIt: a publicly available authoring tool that makes it easy to create line charts that ask users to sketch their beliefs before viewing data, and then optionally visualize other users’ beliefs.

Visualizations are commonplace in journalism, media, and analysis where they help people form impressions about the “true” state of the world. Effective visualizations help people decide, “What should I believe?”, whether the topic is the state of climate change, what will happen in the next election, or how healthy the economy is.

But what determines the conclusions a user draws after interacting with a graph? Visualization research provides guidelines based on graphical perception experiments, such as “encode data using position rather than area or color,” as well as graphical comprehension experiments, like “use line charts to make viewers think about trends.” From experiments by cognitive psychologists, we also know that the knowledge a user brings to a visualization, both about its topic and about how to read visualizations in general, affects what they take away from it. Most interactive visualizations don’t make this natural aspect of interpretation explicit. We wondered: what would it look like to encourage users to reflect on, and update, their beliefs as they interact with a graph?

Designing Visualizations to Account for Different Prior Beliefs

More recent research in visualization suggests that both users and authors can benefit from visualizations that graphically elicit the user’s expectations about a trend or statistic by having them ‘sketch’ their prediction in a graph and then showing them the observed data against their expectations.

For example, consider the visualization below depicting poll results from April to June 2019 for the 2020 US Democratic Primary. If you’ve been following news reports or watched the recent debates, you might have expectations about what the trend for each candidate looks like between June 16 and June 28. Take Elizabeth Warren for example. Based on your assessment of her popularity leading up to and after the debate, how would you draw the line for Warren? Did her popularity increase, decrease, or stay about the same?

A user predicts how Elizabeth Warren performs in polls shortly before and after the first debate.

Imagine that, after you sketch your line, the visualization shows you how close your expectations are to the poll results. The solid line that animates into the graph below shows that Warren’s popularity in the polls did increase in the third week of June, but then dipped slightly around the first debate.

Consider how reflecting on your expectations may have changed what you take away from the poll visualization. Without this prompting, would you have noticed the slight dip in Warren’s popularity? Consider also what you might have noticed if your expectations had been different. What if you had expected that Warren’s popularity had gone down after the debate — would you feel more confident in your ability to project political opinion?

It’s clear that eliciting beliefs changes how a user interacts with a visualization. We wondered: is this a good thing?

Why Elicit Beliefs?

A couple of years ago, we conducted an experiment testing different combinations of graphical belief elicitation (for example, with and without visual feedback making the gap between one’s prediction and the data explicit) against the typical visualization scenario, in which the user is simply shown the observed data by default, as well as against text elicitation of beliefs. (Read a summary here). We found that when users had the ability to draw their expectations and see them against the observed data, they recalled the data 20–25% more accurately a short time later. Stating one’s expectations in a text format (e.g., by typing predicted values on a line), however, did not have the same effect. This leads us to suspect that seeing the gap between one’s beliefs and the observed data visually is a powerful way to help a user realize how much they know (or don’t know).

A visualization that elicits users’ beliefs can prompt further critical reflection on data if it also shows what others expected a trend to look like.

In another experiment we conducted, we found that viewing others’ expectations can have a similar effect to viewing one’s own predictions, improving a user’s ability to remember data when others’ expectations are reasonably consistent. Viewing others’ beliefs can also prompt users to think more critically about the data they are viewing. Especially when a user’s expectations are contradicted by the data, seeing what other people thought can impact how much a user updates their beliefs toward the trend in the observed data.

While social influence may in some cases lead users to take trustworthy data less seriously, we think that acknowledging that people have beliefs before they see data can be informative for users. Without visualization techniques that prompt critical thinking — whether by graphically eliciting and representing prior beliefs, visualizing uncertainty directly, or otherwise acknowledging limitations — some visualization users may default to trusting datasets blindly. Recognizing that prior beliefs can contain valuable information is a first step toward thinking of data as a tool for informing what we think, not replacing it with each new data sample we see.

An Authoring Tool for Belief Elicitation: TheyDrawIt!

Creating a visualization that graphically elicits and visualizes users’ prior beliefs with a smooth interaction can be time consuming. So we created TheyDrawIt!, an authoring tool for producing interactive “belief-driven” visualizations of time series data. TheyDrawIt! visualizations can be customized in various ways, but all use a consistent design pattern: elicit the user’s beliefs, show the observed data against those beliefs, and (optionally) show prior users’ beliefs.
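
Conceptually, authoring boils down to a handful of choices, which the walkthrough below covers one by one. The TypeScript interface here is a hypothetical sketch of those options; the field names are ours for illustration, not the tool’s actual API:

```typescript
// Hypothetical sketch of a TheyDrawIt configuration.
// Field names are illustrative only, not the tool's actual API.
interface BeliefDrivenLineChartConfig {
  dataSource: string;              // URL of a Google Spreadsheet (single sheet)
  xColumn: string;                 // column with Date/Time values, in order
  lines: { column: string; label: string; color: string }[]; // lines to draw
  predictColumn: string;           // the one line users sketch a prediction for
  predictionRange?: [Date, Date];  // optional sub-range to elicit (default: all)
  anchorPoint?: { x: Date; y: number }; // optional point the prediction must pass through
  social?: "lines" | "heatmap";    // how to show prior users' predictions, if at all
  title: string;
  subtitle?: string;
  source?: string;
}
```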

To get a sense of how it works, let’s walk through the earlier 2020 Democratic Nomination polls example and re-create it in TheyDrawIt!

Importing Data

TheyDrawIt accepts data from Google Spreadsheets, with all data in a single sheet. Your Google Spreadsheet should have at least one column containing Date/Time values in a recognizable format, sorted in chronological order. For our example, we’re using poll data on the 2020 Democratic Presidential Nomination that we scraped from RealClear Politics.
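
For instance, the first few rows of a sheet for this example might look something like the following. The FormattedDate column name matches the walkthrough below; the candidate columns and polling numbers here are made up purely for illustration:

```
FormattedDate, Biden, Sanders, Warren
2019-04-01,    30.2,  21.5,    7.9
2019-04-08,    29.6,  20.8,    8.4
2019-04-15,    28.9,  20.1,    9.0
```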

Design — Customize your visualization

After you’ve loaded your data, TheyDrawIt will ask you to specify which column contains the Date/Times you want to use as the x-axis of your visualization. You can then specify each of the other columns (or lines) you want to include in the visualization. This set should include the column you want your users to predict. While your users will only be able to predict one column (line), other columns can be shown by default to provide context that may help users as they formulate their predictions.

Choosing a date-time column to plot on the x-axis (left). Adding a column as a line to the chart (right).

To continue re-creating our 2020 Democratic polls example, we choose the FormattedDate column as our Date/Time column and add lines for each of the candidates we want to visualize on the chart. We want users to predict Elizabeth Warren’s popularity over time, so we also add that column to our list of lines. After adding labels and choosing colors, we fill in a title, subtitle, and source for the visualization.

A completed Visualize tab with design options chosen.

Predict

Next, we’ll specify what column we want users to predict. For our example, we specify that users should predict Elizabeth Warren’s performance in polls by choosing the Warren column from the dropdown menu. The dropdown’s options consist of all the candidates we chose to visualize on the previous screen.

Once we’ve chosen a column for prediction, TheyDrawIt shows a preview of the visualization reflecting all of our design choices up to that point. We can use the preview to test what a user will experience when they make a prediction.

By default, a TheyDrawIt visualization asks each user to predict for the entire Date/Time range available for the column designated for prediction. However, we could instead specify that we only want users to predict a partial range of the available Date/Times by manipulating the draggable elicitation window in the preview, or editing the corresponding text boxes directly.

Select a column for users to predict (left). Adjust the prediction range for users to guess (right).

Because we want users to predict Warren’s popularity before and after the first debate, we adjust the prediction range to span June 16 to June 28.

If we think our users may need further hints, we could specify a point that their prediction should go through.

Social — Display other predictions

TheyDrawIt next asks whether we want to give users the option of seeing all prior users’ predictions after they view the observed data. If we opt to show prior predictions, we can choose between two visualization styles: displaying all other predictions as lines on the same chart, or displaying a heatmap, a smooth aggregated representation of all other predictions. To help authors imagine what each option might look like once predictions have come in, TheyDrawIt generates and plots simulated user predictions.
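
For intuition about what the heatmap style computes, here is a rough sketch, our own illustration and not TheyDrawIt’s actual implementation, of binning many sketched lines into per-time-step counts that could then be mapped to color:

```typescript
// Rough illustration of aggregating sketched lines into a heatmap;
// not TheyDrawIt's actual implementation.
// predictions[i][t] is user i's predicted value at time step t.
function binPredictions(
  predictions: number[][],
  yMin: number,
  yMax: number,
  numBins: number
): number[][] {
  const numSteps = predictions[0].length;
  // counts[t][b] = how many users' lines pass through y-bin b at time step t
  const counts = Array.from({ length: numSteps }, () =>
    new Array<number>(numBins).fill(0)
  );
  for (const line of predictions) {
    line.forEach((y, t) => {
      // clamp to a valid bin index so out-of-range sketches still count
      const bin = Math.min(
        numBins - 1,
        Math.max(0, Math.floor(((y - yMin) / (yMax - yMin)) * numBins))
      );
      counts[t][bin] += 1;
    });
  }
  return counts; // map counts to color opacity to render the heatmap
}
```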

Multiple options for displaying others’ predictions to the user.

For our example, we choose to display other predictions as lines so that users can get a sense of how other users perceived Warren’s popularity in recent weeks.

Preview, Publish & Share

After checking the preview a final time to make sure we are happy with the visualization, we can publish it. TheyDrawIt provides an embeddable iFrame that you can copy into a website or a CMS, as well as a URL that you can share with others to get feedback. The final visualization is responsive and mobile friendly.
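
The embed code is a standard iframe snippet. It will look something like the following, where the src URL is a placeholder; copy the actual embed code that TheyDrawIt generates for your visualization:

```html
<!-- Placeholder embed code; use the snippet TheyDrawIt generates for you -->
<iframe
  src="https://theydrawit.example.com/vis/YOUR_VIS_ID"
  width="100%"
  height="500"
  frameborder="0">
</iframe>
```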

Keep in mind that each time the visualization’s design or data is altered, you’ll need to regenerate the iFrame. You can click the “Edit” button of any project on the main TheyDrawIt dashboard to return to the visualization or its iFrame. Once users have submitted predictions, you’ll be able to see all of the submitted predictions in a preview using the “View Predictions” button.

Test the created visualization with a live-preview in the browser (left). After publishing the visualization, you can share the link or paste the embed code into a CMS (right). Try our final visualization!

The Future of TheyDrawIt! as a Research Testbed

By making belief-driven visualization more accessible, our hope is that TheyDrawIt can help us explore possibilities for belief elicitation in visualization. We’re curious, for example: What kinds of datasets and topics do authors think are good candidates for eliciting users’ beliefs? How accurate are people’s beliefs about different topics?

We chose line charts as a starting point for TheyDrawIt because they support thinking about individual values as well as trends. However, our aim is to explore the larger space of interfaces that elicit and model users’ beliefs, as well as what becomes possible when we design to account for users’ priors.

Some examples of belief-driven visualizations in journalism.

A design space of belief elicitation

Most existing visualizations on the web that graphically elicit beliefs have been created by a small set of expert design teams (e.g., the New York Times, the Guardian). But even this limited set of examples indicates a huge space of possibilities, with design decisions concerning what beliefs to elicit, at what level of detail, and whether and how to show feedback and other users’ predictions.

One important next step for research is to identify design principles for belief-driven visualization. What would a principled approach to deciding if and how to elicit beliefs, show feedback, and display others’ predictions look like?

Some questions that arise in developing design guidelines for belief elicitation in visualization are interesting in part because they are so challenging to answer empirically. For example, how can an author know that the level of detail they chose for eliciting beliefs (e.g., asking users to estimate values at each tick mark on a line versus only a slope and intercept) is appropriate given the prior knowledge of their audience?

In addition to conducting further empirical studies, we plan to explore interfaces for eliciting lower resolution beliefs in future iterations of TheyDrawIt, like binary predictions about whether a trend is increasing or decreasing, whether one value is higher or lower than another, or which of several bins a value falls in.

In our recent research, we’ve begun developing Bayesian models of cognition for data visualization applications. Combined with TheyDrawIt visualizations, formal models can help us answer questions like: How do visualization users use the data they are presented with to build “mental models” of real-world phenomena? And how can more sophisticated models of users’ beliefs help us design better visualizations?

For example, imagine a version of TheyDrawIt that captures not just what a user predicts, but also how uncertain a user (or audience) seems to be about the topic; that predicts how a user’s beliefs will change after viewing observed data; that personalizes the interface to help the user more accurately perceive the information the data contains; and that allows the author to evaluate visualization designs based not on how well users can “read” the data, but on how well they use it to inform their beliefs. We are excited about how these ideas can be applied to improve visualization-based communication, as in TheyDrawIt, as well as data analysis scenarios.
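
To make the modeling idea concrete, here is a greatly simplified sketch, entirely our own illustration rather than a model TheyDrawIt currently implements: treat the user’s sketched value at a time point as the mean of a Normal prior, treat the observed data point as a noisy observation, and apply the standard conjugate Normal–Normal update.

```typescript
// Greatly simplified sketch of Bayesian belief updating; our illustration,
// not a model TheyDrawIt currently implements. The user's sketched value at
// one time point is treated as the mean of a Normal prior, and the observed
// data point as a noisy observation of the true value.
function updateBelief(
  priorMean: number,  // user's sketched prediction at this time point
  priorVar: number,   // how uncertain we assume the user is
  observed: number,   // the observed data value
  obsVar: number      // assumed noise in the observation
): { mean: number; variance: number } {
  const precision = 1 / priorVar + 1 / obsVar;
  const mean = (priorMean / priorVar + observed / obsVar) / precision;
  return { mean, variance: 1 / precision };
}

// E.g., a user who predicted Warren at 12% (with variance 4, i.e. about
// +/- 2 points) and then sees 15% in the polls would, under this model,
// shift toward 13.5%.
const posterior = updateBelief(12, 4, 15, 4);
```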

We will be continuing to add features to TheyDrawIt!, and are excited to see what authors create. For updates, join the TheyDrawIt newsletter. If you would like to get in touch directly, email us at mucollectivelab@gmail.com!

This post was co-authored by Jessica Hullman and Francis Nguyen. Thanks to other members of the TheyDrawIt team (Francis, Jessica, Yea-Seul Kim, and Joe Germuska) and MU Collective for input.

Published in Multiple Views: Visualization Research Explained by the Midwest Uncertainty (MU) Collective, a research lab devoted to data vis & uncertainty communication. Directors: Jessica Hullman (Northwestern) & Matthew Kay (UMichigan). http://mucollective.co