Lyra 2: Designing Interactive Visualizations by Demonstration
Jonathan Zong, Dhiraj Barnwal, Rupayan Neogy, Arvind Satyanarayan
© 2020 IEEE. This is the author’s version of the article that has been published in IEEE Transactions on Visualization and Computer Graphics. The final version of this record is available at: xx.xxxx/TVCG.201x.xxxxxxx
Fig. 1. An example interactive visualization designed in Lyra 2, a visualization design environment. Users can (a) brush in the scatterplot to re-aggregate the histogram, and (b) click histogram bars to filter for corresponding points in the scatterplot. This visualization was designed by demonstration — users did not have to write any textual code.
Abstract—Recent graphical interfaces offer direct manipulation mechanisms for authoring visualizations, but are largely restricted to static output. To author interactive visualizations, users must instead turn to textual specification, but such approaches impose a higher technical burden. To bridge this gap, we introduce Lyra 2, a system that extends a prior visualization design environment with novel methods for authoring interaction techniques by demonstration. Users perform an interaction (e.g., button clicks, drags, or key presses) directly on the visualization they are editing. The system interprets this performance using a set of heuristics and enumerates suggestions of possible interaction designs. These heuristics account for the properties of the interaction (e.g., target and event type) as well as the visualization (e.g., mark and scale types, and multiple views). Interaction design suggestions are displayed as thumbnails; users can preview and test these suggestions, iteratively refine them through additional demonstrations, and finally apply and customize them via property inspectors. We evaluate the expressivity of our approach through a gallery of diverse examples, and evaluate its usability through a first-use study and an analysis of its cognitive dimensions. We find that, in Lyra 2, interaction design by demonstration enables users to rapidly express a wide range of interactive visualizations.
Index Terms—Direct manipulation, interactive visualization, interaction design by demonstration
1 Introduction
Interactive visualization is increasingly embraced as a medium for recording, analyzing, and communicating data. To meet this demand, a recent thread of research has explored methods for minimizing the technical expertise required to author visualizations. Systems like Lyra [51], Data Illustrator [31], and Charticulator [44] provide graphical interfaces for creating visualizations with drag-and-drop and direct manipulation interactions rather than programming. Though interactivity is recognized as crucial to effective visualization [30, 41], few graphical interfaces offer support for interaction design — the aforementioned systems only produce static output, and other alternatives including Tableau (né Polaris [56]) and VisDock [8] either hard-code specific interaction techniques or offer only a limited typology to choose from. To author custom interactive visualizations, users must instead turn to textual specification languages, such as D3 [6], Vega [54, 55], and Vega-Lite [53]. While highly expressive, these tools have several usability drawbacks compared to graphical interfaces. For instance, with D3, authors must write low-level event callbacks which expose execution details like mutable state and concurrency [9, 13]. These details are often unrelated to visualization design, and managing them hinders authors from quickly iterating on designs.

• Jonathan Zong, Rupayan Neogy, and Arvind Satyanarayan are with the Massachusetts Institute of Technology. E-mails: [email protected], [email protected], [email protected].
• Dhiraj Barnwal is with the Indian Institute of Technology Kharagpur. E-mail: dhirajbarnwal@

Manuscript received xx xxx. 201x; accepted xx xxx. 201x. Date of Publication xx xxx. 201x; date of current version xx xxx. 201x. For information on obtaining reprints of this article, please send e-mail to: [email protected]. Digital Object Identifier: xx.xxxx/TVCG.201x.xxxxxxx
Declarative languages such as Vega and Vega-Lite have made progress by introducing higher-level abstractions to mask these execution concerns. However, these abstractions are expressed through textual specifications which present an unnecessarily large gulf of execution [20] by providing a poor closeness of mapping [4] to the ultimate interactive visual output. As a result, users are forced to learn and juggle two very different paradigms.

To bridge this gap, we introduce Lyra 2, a system that extends a prior visualization design environment [51] with methods for designing interactive visualizations by demonstration. Consider the example task of creating a rectangular brush for selecting and highlighting points on a scatterplot. To specify this interaction, users demonstrate it by dragging their mouse cursor directly over the visualization they are currently editing. Lyra 2 interprets this performance using heuristics, and suggests possible interaction techniques to apply. In our example, the system detects the drag events in a space marked by quantitative x- and y-axes and suggests a set of interval-based interactions [53]. Suggestions consist of a selection (e.g., 1D or 2D brushes) and an application (e.g., conditional color or opacity encodings, or filter transforms). Suggestions are displayed as thumbnail previews, which facilitate rapid comparison by illustrating what the visualization would look like after applying the interaction. Users can perform additional demonstrations to refine the suggestions, or click to accept a suggested interaction.

In Lyra 2, demonstrations and suggestions generate statements in Vega [54] or Vega-Lite [53]. Critically, our approach smooths the gradient of these two levels of abstraction.
For instance, say we wished to label the corners of a rectangular brush with their data coordinates. Vega-Lite does not provide any facilities to do so; a user could choose to edit the compiled Vega specification to add the appropriate signals, but would experience a sharp complexity cliff and have to reason about two saliently different paradigms (selections and reactive programming, respectively). Demonstrations and suggestions, however, provide a consistent interface mechanism to seamlessly move between these two levels of abstraction. Once a user demonstrates a brush interaction, they can use visual property inspectors to drill into the components of the interaction (i.e., the brush start and end extents); these extents can then be dropped over text mark properties to achieve the desired behavior.

To evaluate our approach, we follow current best practices [45] by using three distinct evaluation methods. To assess its expressive extent, we use demonstrations in Lyra 2 to recreate a diverse gallery of examples that highlight substantial coverage of an existing taxonomy of interaction techniques for data visualizations [61]. To determine its usability, we conducted a first-use study with participants spanning a broad range of prior experience creating interactive visualizations. All study participants were able to recreate a range of interactive visualizations and described the use of demonstrations as “natural”. Finally, we analyze the cognitive dimensions [4] of our approach to further assess usability, and find that demonstrations and suggestions offer a means to progressively evaluate desired interactive outcomes with a much closer mapping between the specification and output medium.

2 Related Work
Our contribution builds on prior work on models of interactive visualization design and graphical interfaces for authoring visualizations, and is inspired by the literature on programming by demonstration (PBD).
A variety of textual specification languages and visualization toolkits have explored methods for specifying custom interactive behaviors. Protovis [5], D3 [6], and VisDock [8], for example, offer palettes of standard techniques but force users to write low-level imperative event handling for custom behaviors. Improvise [58] and Stencil [11] offer more fine-grained primitives, inspired by data flow semantics, that dynamically update and propagate values to downstream dependents — a conceptual model that allows for expressive interaction design. More recently, Vega [54, 55] and Vega-Lite [53], which we describe in greater detail in Section 3, explore grammar-based approaches for specifying interaction techniques. While these tools span the gamut of expressivity, they share common usability concerns. Namely, they each present a non-trivial gulf of execution [20] because they force users to express interactive and visual outputs in terms of text — a mismatch between the input and output representations. As a result, although these tools have high expressive ceilings, they present a non-trivial threshold [37] and thus are typically favored by users with prior visualization expertise.
To make visualization design more accessible to users with less technical expertise, researchers have explored graphical interfaces for visualization design. Systems like Lyra [51], Data Illustrator [31], and Charticulator [44] (three recent examples in a rich design space [23, 33–35]) allow users to author visualizations through direct manipulation interactions inspired by vector graphics editors. While these systems differently trade expressivity for learnability [52], none yet explore specifying custom interaction techniques. Graphical interfaces that do support interactivity typically either hard-code specific interactions (e.g., iVisDesigner [43], which only supports brushing & linking) or allow users to instantiate behaviors from a predefined palette of techniques. For example, both Microsoft Power BI and Tableau allow users to create dynamic query widgets (called parameters in both tools) or specify interactive filters and highlights (called actions in Tableau). However, users are restricted to a handful of customizations (e.g., actions can only run on hover, selection, or via the context menu rather than the rich space of mouse and keyboard events), and interactive mechanisms can only be applied in a limited fashion (e.g., interactive filters can show or hide data but cannot drive downstream calculations). Thus, enabling more expressive, custom interaction design for data visualization without textual programming remains an open problem.
Research in programming by demonstration (PBD) and programming by example (PBE) has investigated how to interpret users’ direct manipulation inputs to generate programs as output. For example, users can demonstrate string manipulation operations by providing input-output pairs [15, 26], and can automate scraping by recording and replaying interactions with web pages [3, 7]. Similarly, Data Wrangler [21] generates specifications of data transformations based on a user’s direct manipulation of tabular data, and provides visual previews of possible outputs. These systems narrow the gulf of execution by eliciting inputs using the same representation as the desired output.

Through systems like Gold [39] and a recent thread by Saket et al. [47–49], researchers have shown that PBD is also a viable approach for designing visualizations. With PBD, rather than explicitly binding data fields to encoding channels, users implicitly specify these mappings by performing demonstrations — for instance, when the user drags two points together, the system infers that they intend to create a scatterplot and suggests several x-y axis pairs. Our approach shares similarities with this line of work: users perform demonstrations directly on the visualization, which are interpreted by a series of rules to enumerate candidate choices rendered as visual previews. But salient differences arise due to the outcome of the demonstration. For instance, with Saket et al.’s systems, demonstrations produce static visualizations for visual data exploration. As a result, system rules interpret demonstrations by mapping them to data analysis goals, and each suggestion has a computed relevance score [46]. In contrast, demonstrations in Lyra 2 specify custom interaction designs.
Our heuristics use demonstrations to generate all valid statements in the underlying Vega or Vega-Lite visualization grammars, and then set defaults.

Finally, our work draws inspiration from systems like Monet [28] and Peridot [38], which investigate how demonstrations can be used to construct interactive user interfaces. In our domain, the Vega and Vega-Lite visualization grammars influence our system design (e.g., demonstrations can be processed with a simpler set of heuristics instead of Monet’s neural-network-based algorithms). But they also provide an opportunity to more carefully analyze the usability tradeoffs of textual versus demonstrational specification of interaction techniques. For instance, Peridot provides an “active value” primitive, with facilities to individually remove associated interactions; these concepts map to signals and the affordances of Lyra 2’s property inspectors. However, with Peridot, users can never edit an equivalent textual specification of the interaction techniques (with Lyra 2, users can export the underlying Vega specification). As a result, with Lyra 2, we are able to identify that property inspectors help reduce hidden dependencies that may exist in a textual specification, but yield a more diffuse user experience.
3 Background
Our implementation of interaction design by demonstration in Lyra 2 builds on both its existing support of static visualization design [51] and on interaction design concepts in Vega [54, 55] and Vega-Lite [53]. In this section, we aim to provide the reader with sufficient background to understand the remainder of the paper.
Lyra offers the following abstractions for authoring static data visualizations, inspired by a similar set of abstractions found in Vega. Data pipelines allow users to import tabular datasets, inspect them via a data table view, and apply chains of statistical data transformations (e.g., filtering and grouping). Scales map data values to visual properties such as position, shape, and color. Lyra supports both discrete and quantitative scales, and automatically instantiates an appropriate scale when a direct manipulation data binding operation occurs.
Guides are reference marks that visualize scales: axes visualize scales over a spatial domain, and legends visualize scales for color, shape, or size encodings. Like scales, Lyra automatically constructs guides when a data bind occurs.

Fig. 2. The process of creating the interactive visualization in Fig. 1. (a) Demonstrating a horizontal brush that colors selected points in the scatterplot and re-aggregates the histogram. (b) Demonstrating a click interaction that colors the selected bar and filters the scatterplot via the weather field.
Marks are shapes (e.g., rectangles, lines, symbols, text labels) with named visual properties (e.g., x, y, and fill). Property values can be set to constants, or bound to data. When a mark definition is bound to a dataset, Lyra instantiates one mark instance per datum.

To directly manipulate these abstractions, Lyra provides the following user interface components. Akin to vector graphics packages, marks appear on the visualization canvas with handles, which can be used to interactively move, rotate, and resize all instances of the selected mark. When dragging a field from a pipeline’s data table, shaded regions called dropzones overlay the canvas. Each dropzone represents a mark property, like color or x position. When the user drops a field onto these targets, Lyra binds that field to the mark property. The canvas always reflects the current state of the output visualization. Property inspectors list features of the visualization’s components, and provide an interface for fine-grained editing. Properties may also be set by dropping data fields, and any changes are shown immediately on the canvas.
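As a rough illustration of how the abstractions above (data pipelines, scales, guides, and marks) correspond to Vega constructs, the sketch below assembles a minimal Vega-style specification as a plain Python dict. The dataset name, field name (price), and values are hypothetical and chosen only for illustration.

```python
# A minimal Vega-style specification assembled as a Python dict.
# Names and values ("table", "price") are hypothetical.
spec = {
    "data": [{"name": "table", "values": [{"price": 10}, {"price": 25}]}],
    "scales": [{
        "name": "y",                    # quantitative scale: data -> pixels
        "type": "linear",
        "domain": {"data": "table", "field": "price"},
        "range": "height",
    }],
    "axes": [{"orient": "left", "scale": "y"}],  # a guide visualizing the scale
    "marks": [{
        "type": "symbol",               # one symbol instance per datum
        "from": {"data": "table"},
        "encode": {"update": {"y": {"scale": "y", "field": "price"}}},
    }],
}
```

Note that each mark property is either a constant or a data binding, mirroring Lyra's model of marks whose properties can be set directly or bound to fields.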
To support interaction design by demonstration, Lyra builds on the following two abstractions, provided by Vega-Lite and Vega respectively. In Vega-Lite, interaction techniques are expressed as selections, which are sets of data records that a user has interacted with. Three types of selections are supported — single, multi, or interval — which determine the logic for which records are included within the set, and what input event triggers this inclusion (e.g., mouse clicks). Selections can be projected to vary the inclusion criteria. For instance, a single selection only includes the point a user clicked; projecting the selection over a field will include the clicked point and all points that share its value for that field. Selections can drive conditional encodings, which apply different values depending on whether a data record is selected. Selections can also filter input data and determine scale domains. Vega-Lite selections compile into Vega signal expressions. In Vega, signals are dynamic variables which update in response to input event streams. They can be composed into expressions, which are formulas that automatically recalculate whenever signal values change. Signals can be used throughout a Vega specification, including as part of a mark property, data transform parameter, or scale domain.
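To make the selection abstraction concrete, here is a sketch of a Vega-Lite-style unit specification, written as a plain Python dict so it stays self-contained; the field names (date, temp, weather) are hypothetical. An interval selection named brush is projected over the x encoding and drives a conditional color encoding: selected records are colored by weather, while unselected records fall back to a constant grey.

```python
# Sketch of a Vega-Lite-style unit spec (as a plain dict) pairing an
# interval selection with a conditional color encoding.
# Field names ("date", "temp", "weather") are hypothetical.
spec = {
    "mark": "point",
    "selection": {
        # An interval selection constrained to the x dimension.
        "brush": {"type": "interval", "encodings": ["x"]},
    },
    "encoding": {
        "x": {"field": "date", "type": "temporal"},
        "y": {"field": "temp", "type": "quantitative"},
        "color": {
            # Records inside the brush are colored by "weather";
            # records outside fall back to light grey.
            "condition": {"selection": "brush",
                          "field": "weather", "type": "nominal"},
            "value": "lightgrey",
        },
    },
}
```

The conditional color block is the "application" half of the pattern Lyra 2 later treats as a first-class primitive.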
4 Interaction Design by Demonstration in Lyra

Lyra 2’s interaction design by demonstration approach comprises two parts: an abstract model for representing interaction techniques, and its articulation in graphical user interface components. In this section, we first walk through a non-trivial usage scenario that illustrates a user’s process with the interface, and then explain Lyra 2’s system design.
To illustrate the expressiveness and usability of interaction design in Lyra 2, we walk through the process of recreating Seattle Weather Exploration [1], an example interactive visualization from the Vega-Lite example gallery (Figure 1). In this multi-view visualization, users can brush in the scatterplot to re-aggregate the histogram, and click histogram bars to highlight corresponding points in the scatterplot.

Users start by clicking the
Add Interaction button on the toolbar (Fig. 2(a)(1)), which adds an interaction specification to Lyra’s state and opens the corresponding property inspector in the left-hand sidebar. While an interaction inspector is open, the system treats user inputs on the canvas as interaction demonstrations. As the user drags on the scatterplot (Fig. 2(a)(2)), the system uses heuristics to populate the inspector with suggestions for possible interpretations of that demonstration. These suggestions are grouped into two categories: selections and applications. Selections determine how input events map to a set of data tuples, while applications describe how selections drive the properties of visual elements (e.g., conditional encoding or filtering). For each suggestion, a thumbnail previews how the visualization would behave if the suggestion were applied (Fig. 2(a)(3)). If the user drags with a horizontal trajectory, the system infers a rectangular brush selection constrained in the x-axis. In the inspector, the user clicks the color and filter applications to highlight selected points while filtering the data of the histogram. This enables the desired brush interaction, which is immediately active on the visualization canvas (Fig. 2(a)(4)).

To enable interactivity on the histogram, the user initializes another interaction using the toolbar (Fig. 2(b)(1)), and demonstrates a click on a histogram bar (Fig. 2(b)(2)). The system populates the sidebar with selection and application suggestions but, unlike the case of dragging, suggests selections on points rather than intervals. The user intends to filter the scatterplot for points matching the selected bar’s weather field. To match on this field, the user chooses a projected selection in the inspector. The system automatically infers the weather field because it is used in a visual encoding, and therefore projecting on it is likely to be meaningful. The user once again chooses the color and filter applications to highlight the selected bar while filtering the data of the scatterplot (Fig. 2(b)(3)). Updates to the interaction specification are immediately reflected in the visualization canvas (Fig. 2(b)(4)).

Fig. 3. Brush with labeled extents. (1) A brushing interaction authored via demonstrations (see Fig. 2). (2) Binding signals that represent the brush’s start and end extents to the content of two text marks, respectively. (3) Binding the text marks’ horizontal positions to the brush’s start and end x-coordinates. (4) The completed design: an interval selection with labeled extents.

At this point, the user has recreated the Vega-Lite example using only a few clicks and drags, and no textual specification. What if the user wants to more precisely label their brush extent to exactly specify the selected range? Because the brush selects a date range, its coordinates are continuous and may not map precisely to any tuples in the dataset. Vega-Lite cannot express the brush extent labels without this direct mapping, but the Lyra inspector exposes a set of lower-level Vega signals that can be dragged and dropped onto mark properties. After the user demonstrates a brush interaction (Fig. 3.1), the property inspector will surface a list of signals related to the brush for both geometric and data coordinates. The user creates two text marks representing the start and end extent labels. They drag the brush date (start) signal onto the text content dropzone to bind the value of the signal to the mark’s content, and do the same for the brush date (end) signal (Fig. 3.2). Using a similar drag and drop process, the user binds the brush x start and end signals to the mark’s x-position (Fig. 3.3). These signals update the position and content of the labels as the user performs brush interactions on the visualization (Fig. 3.4). When the user is ready to share their work, they click the
Export button to download the interactive visualization as a Vega specification.
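The mark definitions that result from this drag-and-drop process might look roughly like the following Vega-style fragment, written as Python dicts. The signal names here are illustrative stand-ins for the identifiers Lyra generates, not the actual generated names.

```python
# Sketch: two Vega-style text marks whose position and content reference
# the brush signals Lyra 2 exposes. Signal names are illustrative.
label_marks = [
    {
        "type": "text",
        "encode": {"update": {
            "x": {"signal": "brush_x_start"},        # geometric coordinate
            "text": {"signal": "brush_date_start"},  # data-space coordinate
        }},
    },
    {
        "type": "text",
        "encode": {"update": {
            "x": {"signal": "brush_x_end"},
            "text": {"signal": "brush_date_end"},
        }},
    },
]
```

Because each property references a signal rather than a constant, the labels recompute automatically whenever the brush moves, without any event-handling code.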
In Lyra 2, users specify interaction techniques as a set of selections and applications, and can also construct dynamic query widgets [2]. This interaction model draws on abstractions found in Vega and Vega-Lite but makes some key departures. These differences reflect the different affordances of textual versus graphical user interfaces: Vega and Vega-Lite primitives are designed to compose together easily, to minimize language surface area and complexity; Lyra 2, on the other hand, is more concerned with recognition over recall [29], providing consistent user introspection into primitives via property inspectors.
Fig. 4. Lyra 2’s interaction by demonstration interface. The interaction property inspectors (left) display suggestions for (a) Selections and (b) Applications. They also expose (c) Vega signals to enable custom interactions. (d) Users can directly demonstrate interactions on the visualization canvas, which reflects the current state of the output visualization. To create query widgets, users drag and drop fields onto (e) the widget dropzone. The (f) Add Interaction button initializes interaction definitions, akin to the nearby Add Mark buttons.

Lyra 2’s selections determine how input events select data records. They can be one of two types: points or intervals. Unlike Vega-Lite, our selection model groups selections by input event — point selections for clicks and other discrete events, and interval selections for drags. This diverges from Vega-Lite’s single, multi, and interval selections because its purpose is to enable demonstration from user input events. As with Vega-Lite, the inclusion criteria for Lyra 2’s selections can be modified via projections. However, Lyra 2 once again diverges: interval selection projections (i.e., single-dimensional brushes along the x- or y-axis) can be specified via demonstration and are automatically surfaced as suggestions; point selection projections (i.e., selecting a point and all other points that match its value in a given field) use the demonstration to suggest a default field to project, which can be changed via the property inspector. This difference is due to formative feedback from users during our design process: while it was straightforward to demonstrate a single-dimensional interval by moving the mouse in roughly only the horizontal or vertical direction, a similar interaction for point selection was too ambiguous. Users would have to repeatedly click several points in order for the demonstration heuristics to infer shared field values, resulting in a frustrating experience.

Critically, some interactive behaviors cannot be defined in terms of sets of selected records. For instance, consider labeling the corners of a brush: these are arbitrary coordinates in data space, and may not map to specific data records. Such an interaction design is not expressible in Vega-Lite but, in Lyra 2, users can unwrap selections into their constituent Vega signals (Figure 3): interval selections expose signals for the selection’s start and end extents, and point selections offer signals for the selected point’s backing data values. Point selections that respond on hover also include signals for mouse position. These signals can then be dragged and dropped, akin to data fields, to establish conditional encodings or drive data transformation operators.
In Lyra 2, the application of selections to other constructs (e.g., driving conditional mark encodings, scale domains, or data transformations) is treated as a first-class primitive. Applications reference a source selection and a target element, and are a salient departure from the Vega-Lite interaction model. In Vega-Lite, applying selections to marks, scales, and data transformations involves subtly different syntax. For example, panning and zooming is implemented via a bind transformation specified as part of a selection’s definition, while conditional encoding logic is inline, as part of a mark’s specification. Such distinctions would seem arbitrary and confusing within a graphical interface. Instead, in Lyra 2, selection applications abstract over these distinctions and are surfaced as sibling suggestions during a demonstration. Critically, by treating applications as first-class primitives, we are able to surface applications from two points of view: in a selection’s property inspector, we can list all the ways it is applied across the visualization; and from the individual target elements, property inspectors update to reflect their interactive nature. By contrast, the former affordance is not available in Vega-Lite; users are forced to search through a specification in order to understand the various effects a selection may have on the visualization.
Both Vega and Vega-Lite support query widgets through similar mechanisms: signals and selections, respectively, can be bound to HTML input widgets like textboxes, radio buttons, and range sliders. Lyra 2 treats query widgets distinctly from selections for two reasons. First, as described in the next subsection, suggestions for query widgets require different heuristics — namely, using the measure type of the bound data field (e.g., nominal, quantitative, etc.) rather than the event type of a user’s demonstration. Second, it allows us to more fluidly bridge the two levels of abstraction. Query widgets can be treated as signals to directly set the property of a mark or scale, or to update data transformations. But they can also act as a selection with a customizable inclusion criterion — functionality that is not yet possible in Vega-Lite, where query widgets are treated simply as an alternate way to populate a selection. Via a property inspector, users can specify alternate (in)equality operators to determine which records the query widget selects.
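The Vega side of this mechanism can be sketched as follows: a signal backed by an HTML range input, plus a filter expression that uses it — the kind of construct a Lyra 2 query widget could compile to. The signal name (maxPrice) and the field it filters on are hypothetical.

```python
# Sketch of Vega's signal binding: a signal backed by an HTML range
# slider, and a filter transform whose (in)equality predicate uses it.
# The signal name and field ("maxPrice", "price") are hypothetical.
widget_signal = {
    "name": "maxPrice",
    "value": 50,  # initial slider position
    "bind": {"input": "range", "min": 0, "max": 100, "step": 1},
}

# Dragging the slider updates maxPrice, which re-evaluates this filter.
filter_transform = {"type": "filter", "expr": "datum.price <= maxPrice"}
```

Swapping the `<=` operator in the expression for `==` or `>=` corresponds to the alternate (in)equality operators exposed in Lyra 2's property inspector.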
Users introspect and manipulate the abstract interaction model through new extensions to Lyra 2’s graphical interface.
We augment the canvas to allow interaction demonstrations directly on the output visualization (Figure 4(d)). For static visualizations, the Lyra canvas always reflects the current state of the output. To keep the gulf of evaluation [20] narrow, we sought to maintain this property when users are designing interactions. In contrast to widely-used prototyping tools like Figma and InVision — where users define interactions in an editor but can only test them in a separate preview mode — the Lyra 2 canvas continues to directly reflect the current output state, including interactions. After creating an interaction, the user can immediately interact with it in the same view, without a separate preview. As users create more complex visualizations with multiple interactions, they can quickly understand how different interactions behave in combination.

This immediacy creates advantages for rapidly prototyping and evaluating interactions, but causes potential ambiguity in user inputs. User inputs can have three meanings: interacting with Lyra’s user interface elements (e.g., buttons, drop-down menus, etc.), interacting with the output visualization (e.g., tooltips), and performing a demonstration. During feasibility tests, we found that interacting with the output visualization and interacting with Lyra interface elements do not conflict because their effects operate in distinct spaces: interacting with the visualization only affects the visualization state, while interface interactions only affect the Lyra state. As a result, these effects can coexist. For instance, when a user clicks on a mark, the click can both trigger any point selections that have been instantiated as well as open the mark’s property inspector in Lyra’s interface without any issue. Demonstrations, however, bridge between the states of the visualization and Lyra and thus can potentially conflict.
For example, when a user clicks, how should Lyra understand whether they intend to demonstrate a point selection, populate the selection, or open the mark’s property inspector? To disambiguate this type of interaction, we introduce an implicit demonstration mode: Lyra 2 treats user input as demonstrations when an interaction’s property inspector is open. We call this mode implicit because switching into and out of it occurs as part of a user’s regular use of the interface: they have either clicked the
Add Interaction button on the right-hand side toolbar (Figure 4(f)) or they have manually opened the property inspector using the left-hand side listing, two operations that mirror how a user would add a mark to the canvas, or edit its properties.

Consider the common scenario where a user only wants to make a static visualization. Say they click on a mark intending to select it in the inspector. If demonstration mode were always on, this event would also be interpreted as a point selection demonstration. Since there is no interaction selected in the inspector, the system would then need to create an interaction and select it in order to define the point selection. When demonstrations are responsible for both record creation and modification, inputs may contradict user intent and result in the creation of extraneous interaction specifications. Separating interaction creation from modification makes the mode of user input unambiguous.
To support constructing query widgets, we reuse Lyra’s existing dropzone metaphor: shaded regions overlaying the canvas onto which data fields can be dropped to establish a data binding. A new widget dropzone appears below the canvas (Figure 4(e)), and works in a similar fashion — to create widgets, users drag a field from the data table and drop it over this dropzone. As query widgets operate over data space, this dropzone only appears once a data binding operation has occurred.
When the user initiates a demonstration or widget drop, Lyra 2 evaluates a system of heuristics in four phases — enumerating selection types, enumerating application types, enumerating signals, and inferring defaults based on the demonstration. Heuristics take the following inputs: the event type (click or drag), the marks and scales present in the current view, and marks in other views that share the same data source. Heuristics are currently implemented as if-then-else rules over these properties, akin to Lyra 1’s scale inference production rules [51]. Here, we describe the intuition behind our heuristic designs, and provide a formal treatment of their implementation in supplementary material.

In the first phase, the system uses the input event type to distinguish between selection types (Fig. 4(a)): clicks produce point selections, while drags yield interval selections. Once the selection type is determined, additional heuristics suggest ways of customizing the selection that are meaningful based on the types of the data fields, marks, and scales participating in the current view. For example, if the user chooses to project a point selection, heuristics suggest the fields participating in visual encodings — for instance, as the histogram in Fig. 2(b) binds the weather field to a color encoding, heuristics will by default project over weather. Similarly, heuristics to customize interval selections look to the spatial relationships defined within the chart. If the user drags on a view containing a rect, symbol, or text mark with continuous x- and y-scales, the system suggests a regular two-dimensional brush as well as brushes constrained to the x- or y-dimension. In contrast, the same demonstration on a view containing only an area mark, or a rect mark with a discrete x-scale and a continuous y-scale over an aggregate field (i.e., a vertical histogram), will only suggest brushing along the x-axis.
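The phase-one rules above can be condensed into an if-then-else sketch like the following. The rule set and names are illustrative, not Lyra 2's actual implementation, but they follow the cases described in the text.

```python
# Condensed sketch of the phase-one selection heuristics, written as
# if-then-else rules. Function and suggestion names are illustrative.
def suggest_selections(event_type, mark_type, x_scale, y_scale):
    if event_type == "click":
        return ["point"]
    # Drags yield interval selections; which brushes are suggested
    # depends on the mark and scale types in the current view.
    if (mark_type in ("rect", "symbol", "text")
            and x_scale == "continuous" and y_scale == "continuous"):
        return ["interval-xy", "interval-x", "interval-y"]
    if mark_type == "area" or (mark_type == "rect" and x_scale == "discrete"):
        # e.g., a vertical histogram: brushing the aggregate axis would
        # not capture a valid set of backing tuples, so only suggest x.
        return ["interval-x"]
    return ["interval-xy"]
```

Restricting a vertical histogram to x-only brushes is what keeps semantically invalid selections out of the suggestion list entirely.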
These heuristics prevent semantically incorrect interaction designs that are expressible in Vega or Vega-Lite. For example, with either tool, users can author specifications for brushing along the aggregate measure of a histogram, an interaction that does not capture a valid set of backing data tuples. Lyra 2's heuristics would not suggest these types of selections, and thus users will never enter this undesired state.

When a user drags a field into the widget drop-zone, the system uses heuristics to suggest widgets analogously to selections. However, instead of an input event type, the widget heuristics use the measure type of the widget's bound field (e.g., nominal, quantitative, etc.). For instance, the system suggests radio buttons and dropdown menus for fields with discrete data values, and sliders for continuous values.

In phase two, applications are enumerated (Fig. 4(b)) based on which mark types and visual encodings are currently in use. For instance, for discrete mark types (i.e., marks besides areas or lines), the system suggests conditional color and opacity encodings; and, for symbol marks, an additional suggestion of conditional size encodings is also surfaced. Similarly, if continuous scales are present, a suggestion for panning & zooming is made. And, if marks in other views share the same data source as the mark in the demonstration view, the system suggests multiview linking and crossfilter applications.

In the third phase, the system uses the input event type, scale definitions, and dataset fields to suggest signals for custom interactions (Fig. 4(c)). Interval selections will surface signals corresponding to the brush extents in x/y coordinates, and in data coordinates based on the fields referenced in the x- and y-scales. Point selections surface signals for each field of the selected data point, enabling users to create tooltips and labels. Point selections using hover events will additionally expose signals for mouse position in x/y and data coordinates.
These signals are not suggested for click-based point selections, where the selected data does not depend on the current mouse position.

The fourth and final phase uses the demonstration to determine default choices from the available suggestions generated in phases one and two. These heuristics work by considering the demonstration's input event history. If the user has demonstrated a drag, for instance, the heuristics take the collection of events along the drag path and calculate the angle of the drag trajectory. Drags within a 30° angle from the vertical will default to brushing along the y-axis (and, similarly, drags within a 30° angle from the x-axis default to horizontal brushes). Drags that do not tend toward either axis will default to unconstrained brush selections. For point selections, we use a threshold of 800 ms to chunk a series of clicks into a distinct demonstration. We initially used the threshold for double-clicks (500 ms) but, after iterative prototyping, increased it to account for sparse visualizations. If more than one click occurs within this period, Lyra 2 defaults to a multi-selection, and otherwise defaults to a single selection.

Our heuristics trade off between suggestion specificity and user agency. In particular, we prioritize continuous user insight into the system state in accordance with direct manipulation principles [20] and avoid committing users to automatic suggestions without their active input [4]. For instance, we considered more complex inferences such as Voronoi tessellations to accelerate selection when users click near points (akin to Vega-Lite's "nearest" property). However, during feasibility tests, we found that the ambiguity of such demonstrations meant it would be easy to apply this suggestion unintentionally, and users would become frustrated at having to undo an action they did not initiate. Instead, our heuristics are intentionally conservative and rely on visualization properties the user has explicitly defined.
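The default-inference phase can be sketched as follows, assuming a demonstration is a list of (x, y, timestamp in ms) input events. The 30° and 800 ms thresholds come from the text above; the function names and event representation are hypothetical:

```python
import math

# A sketch of the phase-four default inference (hypothetical, not Lyra 2's
# actual code). The 30-degree and 800 ms thresholds are from the paper;
# everything else is illustrative.

def default_brush(drag_path):
    """Pick a default brush orientation from a drag's trajectory."""
    (x0, y0, _), (x1, y1, _) = drag_path[0], drag_path[-1]
    angle = math.degrees(math.atan2(abs(y1 - y0), abs(x1 - x0)))
    if angle > 60:   # within 30 degrees of the vertical
        return "y"
    if angle < 30:   # within 30 degrees of the horizontal
        return "x"
    return "xy"      # no clear tendency: unconstrained 2D brush

def default_point_selection(click_times_ms, window_ms=800):
    """Chunk clicks within the window into one demonstration."""
    first = click_times_ms[0]
    clicks_in_window = [t for t in click_times_ms if t - first <= window_ms]
    return "multi" if len(clicks_in_window) > 1 else "single"
```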
With straightforward extensions to Lyra's interface, we list interactions (both selections and query widgets) in the left-hand sidebar. The interaction property inspector (Figure 4(left)) enables users to visually preview selections and applications generated by their demonstration, allows fine-grained control over interaction properties, and exposes lower-level signals for custom interactions.

Each suggested selection and application is shown as a preview thumbnail that visually depicts the suggestion (Figure 4(a, b)). Thumbnails narrow the gulf of evaluation [20] by providing a close mapping to a user's mental model of their desired outcome, both in terms of the selection they wish to make (e.g., previewing 1D or 2D brushes) and the effect it should have on the visualization (e.g., highlighting or filtering points). In our design process, we considered alternatives with natural language descriptions of the corresponding Vega-Lite specifications, but found that novice users were not always familiar with the terms we use for interactions (e.g., "brushing") despite having a clear image in their mind. And, even among people who have experience creating interactive visualizations, preview thumbnails abstract over the differing vocabularies that tools may expose.
The inspector also exposes relevant low-level signals that can drive custom interactions (Figure 4(c)). For example, a drag demonstration will surface signals for the brush extents as both visual and data coordinates, while a hover interaction will expose mouse position in both spaces. We reuse Lyra's existing rounded-rect motif to indicate that signals can be dragged and dropped across the interface, akin to data fields. Users can, for example, drop the data coordinate signal onto a text mark's content drop-zone to display its current value in the visualization (Figure 3), or drop the x-position signal onto the mark's x drop-zone to have the mark follow the mouse (Figure 3.3).
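The dual visual/data coordinates of these signals amount to inverting the view's scales. A hedged sketch, assuming a simple linear x-scale (Lyra 2 derives the actual values from the Vega scales in the view; the helper names here are hypothetical):

```python
# A hedged sketch of how a brush's extents could be exposed in both visual
# and data coordinates, assuming a simple linear x-scale. These helper
# names are hypothetical, not Lyra 2's API.

def linear_invert(pixel, pixel_range, data_domain):
    """Map a pixel position back into the scale's data domain."""
    (p0, p1), (d0, d1) = pixel_range, data_domain
    return d0 + (pixel - p0) / (p1 - p0) * (d1 - d0)

def brush_signals(pixel_extent, x_range, x_domain):
    """Signals a drag demonstration might surface for custom interactions."""
    lo, hi = pixel_extent
    return {
        "brush_x": [lo, hi],  # visual (pixel) coordinates
        "brush_x_data": [     # data coordinates, via scale inversion
            linear_invert(lo, x_range, x_domain),
            linear_invert(hi, x_range, x_domain),
        ],
    }
```

Dropping the data-coordinate signal onto a text mark, as described above, would then display these inverted values directly in the visualization.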
5 Evaluation: Example Gallery
To evaluate the expressive extent of our approach, we use Lyra 2 to create a gallery of diverse examples [45]. These examples are drawn from the Vega and Vega-Lite example galleries and, following the corresponding papers [53–55], demonstrate coverage over Yi et al.'s taxonomy of interaction techniques [61]. In particular, as shown in Figure 5, we cover six out of the taxonomy's seven categories: we can select marks of interest as individual points (Fig. 5(a)) or as brushes (Fig. 5(b), with customizations inset); we can explore different subsets via panning & zooming (Fig. 5(c)); we can reconfigure data, as in the case of an index chart that normalizes data based on the mouse position (Fig. 5(d)); we can abstract/elaborate data through tooltips (Fig. 5(e)) or via an overview+detail visualization (Fig. 5(f)); we can filter data either through direct manipulation on the visualization (Fig. 1), or via query widgets (Fig. 5(g), which recreates Amanda Cox's iconic "porcupine chart" from the New York Times [12]); and, finally, we can connect related tuples together via brushing & linking (Fig. 5(h)).

Due to its abstract model for interaction designs (§4.2), Lyra 2's expressive gamut lies between Vega and Vega-Lite. By being selection-based, all interaction techniques that can be constructed in Vega-Lite are expressible in Lyra 2 as well. By treating query widgets as distinct from selections, and by exposing appropriate signals for each selection, Lyra 2 begins to move beyond Vega-Lite in two key ways. First, this expands Lyra 2's expressive extent, enabling interaction designs that are not possible in Vega-Lite, including using inequality comparators for query widgets (Fig. 5(g)) or directly encoding signal values (Fig. 3). Second, some interaction designs are constructed more performantly in Lyra 2. Consider the vertical rules in the index chart and tooltip examples (Figs. 5(d, e)).
In Vega-Lite, the only way to dynamically position this rule is by applying a selection to filter the backing dataset; unfortunately, this also produces one rule per symbol, for a total of five rules overlaid on top of each other. In Lyra 2, we need only one rule, and bind its x-position to the selection's signal directly.

Limitations. Lyra 2 does not yet support designing interactivity that is not selection-based. Such techniques primarily fall within Yi et al.'s Encode category, which describes interactive behaviors that bypass data space and manipulate the view directly (e.g., changing the mark type, which visual channels are encoded, or which data fields participate in visual encoding). Similarly, although Lyra 2 supports HTML widgets, it does so as query widgets; i.e., these widgets manipulate expressions in data space and cannot be used to modify properties of marks directly. Finally, Lyra 2 does not expose the full expressive power of Vega's signals; instead, only signals that pertain to selections are available to users. As a result, more complex and custom selection-based techniques (e.g., reordering the dimensions of a matrix, or DimpVis [24]) remain out of Lyra 2's range. How to enable more expressive selection-based and non-selection-based interaction techniques in a graphical and direct manipulation medium is a compelling direction for future work.
6 Evaluation: First-Use Study
We designed Lyra 2 to improve expressiveness and usability for users, especially those with less prior coding experience. We evaluate our approach's usability through a first-use study with 6 representative users, including 2 experienced visualization designers, 2 computer scientists, and 2 from fields unrelated to visualization. The mean self-reported past visualization design expertise was 2.83 on a 5-point Likert scale.

We began each study with a 10-minute walkthrough of Lyra 2's features. We then asked participants to complete three interaction design tasks. For each task, we showed them an example interactive visualization and created the static version in Lyra 2. We then asked them to use Lyra 2 to recreate the interactivity from the example. The three visualizations were drawn from standard Vega-Lite examples: a pan and zoom scatterplot (T_panzoom), a filterable scatterplot with query widgets (T_widgets), and a linked scatterplot and bar chart (T_linked). (Any differences in specifying interaction techniques between Lyra 2 and Vega-Lite are due to limitations in Lyra 2's support for static visualization design, including the lack of a binning transform or of support for cartographic projections.) These tasks were designed to maximize participant engagement with Lyra 2's interaction design interfaces, and were ordered in increasing difficulty. Participants were encouraged to think aloud as they completed the tasks. At the end of the study, we asked each participant to rate the usefulness of each part of the Lyra 2 interface on a 5-point Likert scale. We also set aside open-ended conversation time for participants to explore the tool, ask us questions, and share reflections on exciting or challenging aspects of completing the tasks. Sessions lasted approximately 45 minutes, and participants were compensated with a $15 Amazon gift card.
Users quickly learned how to create interaction designs in Lyra 2, and all users, regardless of their prior visualization experience, successfully completed all three tasks with minimal guidance. The average task completion times, in minutes and seconds, were T_panzoom (µ = , σ = ), T_widgets (µ = , σ = ), and T_linked (µ = , σ = ), for an overall (µ = , σ = ).
In their ratings of Lyra 2's interface components on 5-point Likert scales, participants found the suggestions useful.

Participants found the interaction design by demonstration process "natural". An experienced participant said that the user flow was similar enough to their development process in textual specification tools that they could easily transfer their skills. Less experienced participants also found demonstration's short articulatory distance helpful. Thinking aloud during T_linked, one participant said, "Demonstrating the interactions was very easy. I didn't really know the word brushing, but it was easier to just do it than to say what it is. Same with picking from the widget types that have technical names like radio."

Participants were especially excited about easily creating interactions they considered complex. A participant said that Lyra 2 "would make me experiment more with possible interactions" because of the lower technical barrier. Reflecting on T_linked's multi-view filtering, they said, "I wouldn't have thought to make this. I would think it was too hard." Another, less experienced participant noted that easily creating multi-view filtering would be very useful in their work, where people often use non-interactive charts of high-dimensional data.

The quick feedback loop of evaluating and applying suggestions also stood out positively to participants. A more experienced participant compared the feedback loop with textual specification, saying that "with these lower level libraries, getting an interaction to work takes a while even when copying and pasting."
Similarly, a participant with less visualization experience noted that "being able to test immediately was very useful. Even when it went wrong, I could immediately tell that it was wrong."
Previews were important to this quick feedback loop, but users noted that not all of them were equally useful. For instance, by default, previews for single- and multi-point selections look the same, which led users to be unsure about how multi-point selections differ.

The primary shortcoming we observed arose when participants' mental models did not match Lyra 2's interface. For example, one participant pointed out that, although panning and zooming is a drag interaction, the interface forced them to first choose a selection using
the "brushing" terminology; they might have instead expected to select panning & zooming directly. Similarly, a few participants were initially unsure whether they should demonstrate on the source or target view to surface a multiview filter suggestion.

One participant's question during the post-study debrief struck us as particularly insightful: "what if I want to make interactions that aren't in the suggestions?" This question suggests a drawback to our approach we had not previously considered: might suggestions limit what users consider to be expressible in Lyra 2? This concern does not appear particular to Lyra 2 or interaction design by demonstration. For example, in prior studies of mixed-initiative systems, users of Voyager worried about whether its visualization recommendations might cause them to "start thinking less" [60], and users of an interactive machine translation system perceived themselves as "less susceptible to be creative" [14]. Our results add further evidence that better balancing agency and automation [16] is a critical avenue for future work.

Fig. 5. Example interactive visualizations demonstrating Lyra 2's coverage over Yi et al.'s taxonomy [61]. (a, b) Selecting marks of interest; (c) Exploring subsets of data via pan & zoom; (d) Reconfiguring data via an index chart; (e, f) Abstract/Elaborate data via tooltips or an overview+detail visualization; (g) Filtering data via query widgets, recreating a New York Times visualization [12]; (h) Connecting related tuples via brushing & linking. Walkthroughs are provided in supplementary material.
7 Evaluation: Cognitive Dimensions of Notation
In this section, we compare Lyra 2's usability to textual specification of interaction designs in Vega or Vega-Lite. To do so, we adopt the Cognitive Dimensions of Notation [4], a heuristic evaluation framework that has previously been used to evaluate HCI toolkits [27] as well as visualization systems [51, 55]. Of the 14 dimensions in the framework, we find particularly salient differences along the following:
Closeness of Mapping. Demonstrations offer a much closer mapping between the notation of the specification (input events) and the desired outcome (an interaction design). Depicting suggestions as thumbnails further builds on this dimension by offering users a visual preview of possible interactive behaviors. By contrast, textual specification languages force users to express interaction techniques in potentially unfamiliar terms. In fact, in formative evaluations, many novice users had never previously described interactions as "selections" or "brushes," which are common terms in the data visualization and HCI literature.
Progressive Evaluation and Premature Commitment. It is difficult to validate in-progress work with textual languages, as only complete specifications produce working output. If required properties are left underspecified, for instance, the language compiler will throw an error and produce no output. This issue is exacerbated for interaction specification: complete definitions of signals or selections will produce working output, but this may not always be evident until they are used in the remainder of the specification. By contrast, Lyra 2's demonstrations exemplify support for these dimensions: users are able to explore the possible design space and easily preview individual design choices before explicitly instantiating a full interaction technique. However, there is still room for further improvement: Lyra 2 is currently only able to make multiview suggestions if secondary views have already been created; recommending multiview visualizations is an active area of research [36, 42], and future versions of Lyra should consider how to incorporate it in the context of interaction suggestions.
Diffuseness. Textual languages, particularly a higher-level grammar like Vega-Lite, offer a much more concise specification format than the graphical equivalent in Lyra 2. This property holds not only in the general sense (localized, often one-word changes in Vega-Lite translate to multiple clicks in Lyra 2) but also in ways specific to this paper's contribution. By definition, demonstrations are a more ambiguous specification format, and a user may have to perform several attempts before the system correctly infers their desired behavior. We observed this issue most saliently when attempting to project a point selection: users would need to repeatedly click several points for the system to have sufficient data to infer shared field values, which proved to be an overly frustrating experience. Based on these results, we chose to expose point selection projections in the property inspector rather than via demonstrations. Lyra 2 has an additional source of diffuseness: it is not difficult to imagine preview thumbnails becoming unwieldy as users craft more complex multi-view dashboards. Future work must consider how to scale the suggestion previews; for example, once the visualization's dimensions cross a threshold, perhaps suggestions switch from purely visual to a combination of visual and textual modalities. However, such a change may trade off closeness of mapping.

Hidden Dependencies. Property inspectors allow us to reveal dependencies that are otherwise more latent in textual specifications. In particular, as we designed interaction property inspectors, we realized that they provided a prime location for collating all the ways an interaction technique may be used across a visualization. Working through this design motivated raising selection applications to be a first-class primitive in our interaction model. In the textual languages, a user would have to search through a specification and manually build their mental model of how a selection or signal is being used.

In summary, Lyra 2's demonstrations compare favorably to textual specification in terms of closeness of mapping, progressive evaluation, and premature commitment, but result in a more diffuse user experience.
8 Conclusion and Future Work
This paper contributes methods for designing interactive visualizations by demonstration and instantiates these methods in Lyra 2. Its interface components, such as the visualization canvas with demonstrations, suggestion heuristics, and interaction inspector, narrow the gulfs of execution and evaluation for interaction designs. A diverse example gallery demonstrates Lyra 2's expressiveness, including many designs that are nontrivial to express in current declarative visualization languages. Participants in a user study found that the tool helped them create visualizations that previously felt too difficult to attempt. Lyra 2 is available as open-source software at https://github.com/vega/lyra.

Lyra 2 represents only the first step in developing non-textual mechanisms for authoring interactivity in data visualizations, and there are several promising next steps to explore. How to support designing more custom interactions by demonstration, especially those that are not selection-based (e.g., Encode-type techniques [61]), is a clear next step. It is not clear that demonstrations can or should target low-level expressions (e.g., Vega signals) directly. Rather, there appears to be a need for novel approaches that occupy a middle ground between direct demonstration and visual programming interfaces (e.g., InterState [40]). Even with selection-based interactions, future work should consider how to go beyond heuristics [50] and utilize recommendation methods including ranked enumeration [32, 36, 59] and learned models [19]. A key challenge here is that these alternate approaches are grounded in empirically-validated principles for effective visual encoding, and similar results do not yet exist for interaction design.

Despite prior work on depicting the runtime behavior of interactive visualizations [17, 18], we did not find a need to offer strategies for debugging interaction techniques in Lyra 2. We believe this is because Lyra's graphical interface mediates user manipulations of the underlying Vega specification. In particular, as discussed in §7, Lyra's interface surfaces a number of hidden dependencies latent in the corresponding textual specifications and, through its heuristics and the suggestions it surfaces, constrains the allowable state space: two issues that prior work has used visual debuggers to ameliorate [17]. Nevertheless, as future research explores designing more complex interaction techniques, the need for a debugger may once again arise.

Finally, and perhaps most critically, our first-use studies provide additional evidence for the need to better balance automated suggestions and user autonomy and agency in mixed-initiative interfaces [16]. Studying these issues in the domain of design may be particularly viable, as new systems can leverage prior results from cognitive psychology on the role of examples in the design process [57] and on when they most spur creative work [25].
However, such systems must also grapple with the consequences of imposing theories of semantic (rather than purely syntactic) validity on design: what are the implications of systems encoding notions of "good" visualization [10, 22]?

Acknowledgments
We thank our study participants and anonymous reviewers for their invaluable feedback, and the MIT Visualization Group for their camaraderie. We are also grateful to the people who helped rebuild the Lyra infrastructure in 2016, including Jeffrey Heer, who supported the effort via his Moore Foundation award, and K. Adam White and Sue Lockwood, who led the work. This project was supported by the Paul and Daisy Soros Fellowship and an NSF Award.
References

[1] Seattle weather exploration. https://vega.github.io/vega-lite/examples/interactive_seattle_weather.html.
[2] C. Ahlberg, C. Williamson, and B. Shneiderman. Dynamic queries for information exploration: An implementation and evaluation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 619–626. ACM, 1992.
[3] S. Barman, S. Chasins, R. Bodik, and S. Gulwani. Ringer: Web automation by demonstration. In Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, pp. 748–764, 2016.
[4] A. F. Blackwell, C. Britton, A. Cox, T. R. Green, C. Gurr, G. Kadoda, M. Kutar, M. Loomes, C. L. Nehaniv, M. Petre, et al. Cognitive dimensions of notations: Design tools for cognitive technology. In Cognitive Technology: Instruments of Mind, pp. 325–341. Springer, 2001.
[5] M. Bostock and J. Heer. Protovis: A graphical toolkit for visualization. IEEE Trans. Visualization & Comp. Graphics (Proc. InfoVis), 2009.
[6] M. Bostock, V. Ogievetsky, and J. Heer. D3: Data-driven documents. IEEE Transactions on Visualization and Computer Graphics, 17(12):2301–2309, 2011.
[7] S. Chasins, S. Barman, R. Bodik, and S. Gulwani. Browser record and replay as a building block for end-user web automation tools. In Proceedings of the 24th International Conference on World Wide Web, pp. 179–182, 2015.
[8] J. Choi, D. G. Park, Y. L. Wong, E. Fisher, and N. Elmqvist. VisDock: A toolkit for cross-cutting interactions in visualization. IEEE Transactions on Visualization and Computer Graphics, 21(9):1087–1100, 2015.
[9] G. H. Cooper and S. Krishnamurthi. Embedding dynamic dataflow in a call-by-value language. In Programming Languages and Systems, pp. 294–308. Springer, 2006.
[10] M. Correll. Ethical dimensions of visualization research. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–13, 2019.
[11] J. A. Cottam and A. Lumsdaine. Stencil: A conceptual model for representation and interaction. pp. 51–56. IEEE, 2008.
[12] A. Cox. Budget forecasts, compared with reality. The New York Times, 2010.
[13] J. Edwards. Coherent reaction. In Proc. ACM SIGPLAN, pp. 925–932. ACM, 2009.
[14] S. Green, J. Chuang, J. Heer, and C. D. Manning. Predictive translation memory: A mixed-initiative system for human language translation. In ACM User Interface Software & Technology (UIST), 2014.
[15] S. Gulwani. Automating string processing in spreadsheets using input-output examples. In ACM SIGPLAN Notices, vol. 46, pp. 317–330. ACM, 2011.
[16] J. Heer. Agency plus automation: Designing artificial intelligence into interactive systems. Proceedings of the National Academy of Sciences, 116(6):1844–1850, 2019.
[17] J. Hoffswell, A. Satyanarayan, and J. Heer. Visual debugging techniques for reactive data visualization. Computer Graphics Forum, 35(3):271–280, 2016. doi: 10.1111/cgf.12903
[18] J. Hoffswell, A. Satyanarayan, and J. Heer. Augmenting code with in situ visualizations to aid program understanding. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, pp. 532:1–532:12. ACM, New York, NY, USA, 2018. doi: 10.1145/3173574.3174106
[19] K. Hu, M. A. Bakker, S. Li, T. Kraska, and C. Hidalgo. VizML: A machine learning approach to visualization recommendation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–12, 2019.
[20] E. L. Hutchins, J. D. Hollan, and D. A. Norman. Direct manipulation interfaces. Human-Computer Interaction, 1(4):311–338, 1985.
[21] S. Kandel, A. Paepcke, J. Hellerstein, and J. Heer. Wrangler: Interactive visual specification of data transformation scripts. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 3363–3372. ACM, 2011.
[22] H. Kennedy, R. L. Hill, G. Aiello, and W. Allen. The work that visualisation conventions do. Information, Communication & Society, 19(6):715–735, 2016.
[23] N. W. Kim, E. Schweickart, Z. Liu, M. Dontcheva, W. Li, J. Popovic, and H. Pfister. Data-driven guides: Supporting expressive design for information graphics. IEEE Transactions on Visualization and Computer Graphics, 23(1):491–500, 2016.
[24] B. Kondo and C. Collins. DimpVis: Exploring time-varying information visualizations by direct manipulation. IEEE Transactions on Visualization and Computer Graphics, 20(12):2003–2012, 2014.
[25] C. Kulkarni, S. P. Dow, and S. R. Klemmer. Early and repeated exposure to examples improves creative work. In Design Thinking Research, pp. 49–62. Springer, 2014.
[26] T. Lau, S. A. Wolfman, P. Domingos, and D. S. Weld. Programming by demonstration using version space algebra. Machine Learning, 53(1-2):111–156, 2003.
[27] D. Ledo, S. Houben, J. Vermeulen, N. Marquardt, L. Oehlberg, and S. Greenberg. Evaluation strategies for HCI toolkit research. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pp. 1–17, 2018.
[28] Y. Li and J. A. Landay. Informal prototyping of continuous graphical interactions by demonstration. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, pp. 221–230. ACM, 2005.
[29] W. Lidwell, K. Holden, and J. Butler. Universal Principles of Design, Revised and Updated: 125 Ways to Enhance Usability, Influence Perception, Increase Appeal, Make Better Design Decisions, and Teach Through Design. Rockport Pub, 2010.
[30] Z. Liu and J. T. Stasko. Mental models, visual reasoning and interaction in information visualization: A top-down perspective. IEEE Trans. Visualization & Comp. Graphics, 16(6):999–1008, 2010.
[31] Z. Liu, J. Thompson, A. Wilson, M. Dontcheva, J. Delorey, S. Grigg, B. Kerr, and J. Stasko. Data Illustrator: Augmenting vector design tools with lazy data binding for expressive visualization authoring. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 123. ACM, 2018.
[32] J. Mackinlay. Automating the design of graphical presentations of relational information. ACM Transactions on Graphics (TOG), 5(2):110–141, 1986.
[33] M. Mauri, T. Elli, G. Caviglia, G. Uboldi, and M. Azzi. RAWGraphs: A visualisation platform to create open outputs. In Proceedings of the 12th Biannual Conference on Italian SIGCHI Chapter, p. 28. ACM, 2017.
[34] H. Mei, Y. Ma, Y. Wei, and W. Chen. The design space of construction tools for information visualization: A survey. Journal of Visual Languages & Computing, 44:120–132, 2018.
[35] G. G. Méndez, M. A. Nacenta, and S. Vandenheste. iVoLVER: Interactive visual language for visualization extraction and reconstruction. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 4073–4085. ACM, 2016.
[36] D. Moritz, C. Wang, G. L. Nelson, H. Lin, A. M. Smith, B. Howe, and J. Heer. Formalizing visualization design knowledge as constraints: Actionable and extensible models in Draco. IEEE Transactions on Visualization and Computer Graphics, 25(1):438–448, 2018.
[37] B. Myers, S. E. Hudson, R. Pausch, and R. Pausch. Past, present, and future of user interface software tools. ACM Transactions on Computer-Human Interaction (TOCHI), 7(1):3–28, 2000.
[38] B. A. Myers. Creating interaction techniques by demonstration. IEEE Computer Graphics and Applications, 7(9):51–60, 1987.
[39] B. A. Myers, J. Goldstein, and M. A. Goldberg. Creating charts by demonstration. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 106–111, 1994.
[40] S. Oney, B. Myers, and J. Brandt. InterState: A language and environment for expressing interface behavior. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, pp. 263–272. ACM, 2014.
[41] W. A. Pike, J. Stasko, R. Chang, and T. A. O'Connell. The science of interaction. Information Visualization, 8(4):263–274, 2009.
[42] Z. Qu and J. Hullman. Keeping multiple views consistent: Constraints, validations, and exceptions in visualization authoring. IEEE Transactions on Visualization and Computer Graphics, 24(1):468–477, 2017.
[43] D. Ren, T. Höllerer, and X. Yuan. iVisDesigner: Expressive interactive design of information visualizations. IEEE Transactions on Visualization and Computer Graphics, 20(12):2092–2101, Dec 2014. doi: 10.1109/TVCG.2014.2346291
[44] D. Ren, B. Lee, and M. Brehmer. Charticulator: Interactive construction of bespoke chart layouts. IEEE Transactions on Visualization and Computer Graphics, 25(1):789–799, 2018.
[45] D. Ren, B. Lee, M. Brehmer, and N. H. Riche. Reflecting on the evaluation of visualization authoring systems: Position paper. pp. 86–92. IEEE, 2018.
[46] B. Saket and A. Endert. Demonstrational interaction for data visualization. IEEE Computer Graphics and Applications, 39(3):67–72, 2019.
[47] B. Saket and A. Endert. Investigating the manual view specification and visualization by demonstration paradigms for visualization construction. In Computer Graphics Forum, vol. 38, pp. 663–674. Wiley Online Library, 2019.
[48] B. Saket, L. Jiang, C. Perin, and A. Endert. Liger: Combining interaction paradigms for visual analysis. arXiv preprint arXiv:1907.08345, 2019.
[49] B. Saket, H. Kim, E. T. Brown, and A. Endert. Visualization by demonstration: An interaction paradigm for visual data exploration. IEEE Transactions on Visualization and Computer Graphics, 23(1):331–340, 2016.
[50] B. Saket, D. Moritz, H. Lin, V. Dibia, C. Demiralp, and J. Heer. Beyond heuristics: Learning visualization design. arXiv preprint arXiv:1807.06641, 2018.
[51] A. Satyanarayan and J. Heer. Lyra: An interactive visualization design environment. In Computer Graphics Forum, vol. 33, pp. 351–360. Wiley Online Library, 2014.
[52] A. Satyanarayan, B. Lee, D. Ren, J. Heer, J. Stasko, J. Thompson, M. Brehmer, and Z. Liu. Critical reflections on visualization authoring systems. IEEE Trans. Visualization & Comp. Graphics (Proc. InfoVis), 2020.
[53] A. Satyanarayan, D. Moritz, K. Wongsuphasawat, and J. Heer. Vega-Lite: A grammar of interactive graphics. IEEE Transactions on Visualization and Computer Graphics, 23(1):341–350, 2016.
[54] A. Satyanarayan, R. Russell, J. Hoffswell, and J. Heer. Reactive Vega: A streaming dataflow architecture for declarative interactive visualization. IEEE Transactions on Visualization and Computer Graphics, 22(1):659–668, 2015.
[55] A. Satyanarayan, K. Wongsuphasawat, and J. Heer. Declarative interaction design for data visualization. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, pp. 669–678. ACM, 2014.
[56] C. Stolte, D. Tang, and P. Hanrahan. Polaris: A system for query, analysis, and visualization of multidimensional relational databases. IEEE Transactions on Visualization and Computer Graphics, 8(1):52–65, 2002.
[57] T. B. Ward. Structured imagination: The role of category structure in exemplar generation. Cognitive Psychology, 27(1):1–40, 1994.
[58] C. Weaver. Building highly-coordinated visualizations in Improvise. In
Proceedings of the IEEE Symposium on Information Visualization , INFO-VIS ’04, pp. 159–166. IEEE Computer Society, Washington, DC, USA,2004.[59] K. Wongsuphasawat, D. Moritz, A. Anand, J. Mackinlay, B. Howe, andJ. Heer. Towards a general-purpose query language for visualizationrecommendation. In
Proceedings of the Workshop on Human-In-the-LoopData Analytics , pp. 1–6, 2016.[60] K. Wongsuphasawat, Z. Qu, D. Moritz, R. Chang, F. Ouk, A. Anand,J. Mackinlay, B. Howe, and J. Heer. Voyager 2: Augmenting visualanalysis with partial view specifications. In
ACM Human Factors inComputing Systems (CHI) , 2017.[61] J. S. Yi, Y. ah Kang, J. T. Stasko, and J. A. Jacko. Toward a deeperunderstanding of the role of interaction in information visualization.
IEEETransactions on Visualization and Computer Graphics , 13(6):1224–1231,2007., 13(6):1224–1231,2007.