Workshop on Evaluation: Prioritized Metrics
Jean Scholtz, Pacific Northwest National Laboratory
Georges Grinstein, University of Massachusetts Lowell
Catherine Plaisant, University of Maryland
During the 2007 InfoVis workshop on Metrics for Evaluation of Visual Analytics, the participants discussed a large number of metrics and decided to distill them into a list of high-priority metrics. To do so, we grouped the metrics into categories and voted on the highest-priority metrics within each category. The categories and the metrics considered in each are listed below. A short description of each category sets the context for its metrics, and in some cases a category is further broken down into subcategories.
Analysis
Metrics in this category measure how helpful a tool was to the analyst during the analytic process. All of these metrics are based on an analyst using the tool, and performance could be compared with that on a similar task performed without the tool. Acceptable/unacceptable ratings by the analyst could also be collected. A minimal sketch showing how the hypothesis counts might be tallied follows the list.
-Was the analyst able to explore a comprehensive set of hypotheses?
-How many hypotheses was the analyst able to generate?
-How many hypotheses was the analyst able to eliminate?
-How many red herring hypotheses did the analyst follow?
-How many red herring hypotheses was the analyst able to eliminate?
-Was the analyst able to track multiple hypotheses?
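These counts could be derived from a simple session log. Below is a minimal Python sketch, assuming a hypothetical log format in which each analyst action carries a hypothesis ID, an event type, and a ground-truth flag marking red herrings; the names and structure are illustrative, not part of the workshop output.

```python
from dataclasses import dataclass

@dataclass
class HypothesisEvent:
    """One logged analyst action; all field names are illustrative."""
    hypothesis_id: str
    event: str          # "generated" or "eliminated"
    red_herring: bool   # ground-truth label assigned after the study

def tally_hypothesis_metrics(log: list[HypothesisEvent]) -> dict[str, int]:
    """Derive the hypothesis counts listed above from a session log."""
    generated, eliminated = set(), set()
    followed_rh, eliminated_rh = set(), set()
    for e in log:
        if e.event == "generated":
            generated.add(e.hypothesis_id)
            if e.red_herring:
                followed_rh.add(e.hypothesis_id)
        elif e.event == "eliminated":
            eliminated.add(e.hypothesis_id)
            if e.red_herring:
                eliminated_rh.add(e.hypothesis_id)
    return {
        "hypotheses_generated": len(generated),
        "hypotheses_eliminated": len(eliminated),
        "red_herrings_followed": len(followed_rh),
        "red_herrings_eliminated": len(eliminated_rh),
    }
```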
Collaboration
Metrics in this category measure the collaboration capabilities of the tool.
-Does the tool provide multiple viewpoints to facilitate collaboration between analysts with different expertise?
Visualization
Since we are interested in visual analytics, metrics for the visualization components of the tools are also important. Three subcategories are a high priority for evaluations: data types, usability, and utility.
Data types
-What kinds and amounts of data can the tool handle?
Visualization - usability
In particular, we are interested in interactive visualizations and their usability, as measured by the common measures of effectiveness, efficiency, and user satisfaction. However, the tasks used for measuring usability should be established within the context of the user's work and goals. One concrete reading of these measures is sketched after the list item below.
-Effectiveness, efficiency, and user satisfaction, measured in the context of the user's work/goals
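As one assumed operationalization, the sketch below computes effectiveness as the task success rate, efficiency as mean completion time on successful tasks, and satisfaction as the mean of a post-task questionnaire rating; the record format is hypothetical.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskSession:
    """One benchmark task attempt; field names are illustrative."""
    succeeded: bool
    seconds: float       # task completion time
    satisfaction: float  # e.g., a 1-7 post-task rating

def usability_summary(sessions: list[TaskSession]) -> dict[str, float]:
    successes = [s for s in sessions if s.succeeded]
    return {
        # Effectiveness: share of tasks completed successfully.
        "effectiveness": len(successes) / len(sessions),
        # Efficiency: mean time on successful tasks, in seconds.
        "efficiency_s": mean(s.seconds for s in successes) if successes else float("nan"),
        # Satisfaction: mean questionnaire rating across all attempts.
        "satisfaction": mean(s.satisfaction for s in sessions),
    }
```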
Visualization – utility
Utility measures the usefulness of the visualizations for the different analytic tasks that users are asked to perform.
-Flexibility/adaptability of visualizations
Work Practices
Under work practices we are interested in metrics relevant to the productivity of the analyst. Many analysts read textual documents or messages and decide which of those are relevant to their analysis. For these analysts, a possible productivity metric, operationalized in the sketch below, would be:
-Document decisions per hour
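Assuming a timestamped log of relevance decisions (the log format is hypothetical), this metric could be computed as follows:

```python
from datetime import datetime

def decisions_per_hour(decision_times: list[datetime]) -> float:
    """Relevance decisions per hour over the span of a logged session."""
    if len(decision_times) < 2:
        raise ValueError("need at least two timestamped decisions")
    span_hours = (max(decision_times) - min(decision_times)).total_seconds() / 3600
    if span_hours == 0:
        raise ValueError("decisions span zero time")
    return len(decision_times) / span_hours
```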
Quality
It is essential that tools help analysts produce high-quality work. The end goal of most analysis is a report. Analytic reports can be rated for quality; however, quality can be further broken down into the accuracy, objectivity, usability (for the customer or decision maker), and relevance of the report to the question asked by the customer. One way to combine these sub-ratings into a single score is sketched after the list.
-Report quality
- Accuracy
- Objectivity
- Usability
- Relevance
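A weighted average over the four dimensions is one possible aggregation, as in the sketch below; the 1-5 scale and the equal default weights are illustrative assumptions, not workshop recommendations.

```python
def report_quality_score(ratings: dict[str, float],
                         weights: dict[str, float] | None = None) -> float:
    """Weighted average of per-dimension report ratings.

    Expected keys: accuracy, objectivity, usability, relevance
    (e.g., each rated on a 1-5 scale). Equal weights by default;
    both the scale and the weighting scheme are assumptions.
    """
    dims = ("accuracy", "objectivity", "usability", "relevance")
    weights = weights or {d: 1.0 for d in dims}
    total = sum(weights[d] for d in dims)
    return sum(ratings[d] * weights[d] for d in dims) / total
```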
Insertion
Analysts use a number of tools and have established work practices. Tools that are easy to insert into the current workflow will be more readily adopted by analysts.
-Ease of integration of tools into current work practices
Overall Usability of the Tool
-Number of steps needed to accomplish basic tasks
Training
-Learnability
- Time/amount of training needed to learn basic functionality
- Time/amount of training needed to learn advanced functionality