Carl Gutwin


2020

Clone Swarm: A Cloud Based Code-Clone Analysis Tool
Venkat Bandi, Chanchal K. Roy, Carl Gutwin
2020 IEEE 14th International Workshop on Software Clones (IWSC)

A code clone is defined as a pair of similar code fragments within a software system. While code clones are not always harmful, they can have a detrimental effect on the overall quality of a software system through the propagation of bugs and other maintenance implications. Because of this, software developers need to analyse the code clones that exist in a software system. However, despite the availability of several clone detection systems, the adoption of such tools outside of the clone community remains low. A possible reason for this is the difficulty and complexity involved in setting up and using these tools. In this paper, we present Clone Swarm, a code clone analytics tool that identifies clones in a project and presents the information in an easily accessible manner. Clone Swarm is publicly available and can mine any open-source Git repository. It runs NiCad, a popular clone detection tool, in the cloud, and lets users interactively explore code clones through a web-based interface at multiple granularity levels (function and block). Clone results are visualized in multiple overviews, from a high-level plot down to a line-by-line comparison of individual cloned fragments. To facilitate future research in clone detection and analysis, users can also directly download the clone detection results for their projects. Clone Swarm is available online at clone-swarm.usask.ca. The source code for Clone Swarm is freely available under the MIT license on GitHub.
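The general workflow the abstract describes (mining an open-source Git repository and running a clone detector over the checkout) can be sketched as follows. This is a minimal illustration, not Clone Swarm's actual implementation: the function name, repository URL, and detector command are assumptions, and the exact NiCad invocation used by the tool is not given in the abstract, so the detector command is left to the caller.

```python
import subprocess
from pathlib import Path


def mine_repository(repo_url: str, workdir: Path, detector_cmd: list[str]) -> Path:
    """Clone a public Git repository and run an external clone detector on it.

    `detector_cmd` is a placeholder for whatever clone-detection command is
    installed locally; it is invoked with the checkout path as its last argument.
    """
    workdir.mkdir(parents=True, exist_ok=True)
    checkout = workdir / Path(repo_url.rstrip("/")).stem

    # Shallow-clone the repository so only the latest snapshot is analysed.
    subprocess.run(["git", "clone", "--depth", "1", repo_url, str(checkout)], check=True)

    # Run the clone detector over the checkout and capture its report.
    report = workdir / "clone-report.txt"
    with report.open("w") as out:
        subprocess.run(detector_cmd + [str(checkout)], stdout=out, check=True)
    return report


if __name__ == "__main__":
    # Hypothetical usage: repository and detector command are illustrative only.
    mine_repository(
        repo_url="https://github.com/apache/commons-lang.git",
        workdir=Path("/tmp/clone-mining"),
        detector_cmd=["my-clone-detector", "--granularity", "function"],
    )
```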

Tracing shapes with eyes
Mohammad Rakib Hasan, Debajyoti Mondal, Carl Gutwin
Proceedings of the 11th Augmented Human International Conference

Eye tracking systems can give people with severe motor impairments a way to communicate through gaze-based interactions. Such systems transform a user's gaze input into mouse pointer coordinates that can trigger keystrokes on an on-screen keyboard. However, typing with this approach requires large back-and-forth eye movements, and the required effort depends both on the length of the text and on the keyboard layout. Motivated by the idea of sketch-based image search, we explore a gaze-based approach where users draw a shape on a sketchpad using gaze input, and the shape is used to search for similar letters, words, and other predefined controls. The sketch-based approach is area efficient (compared to an on-screen keyboard), allows users to create custom commands, and creates opportunities for gaze-based authentication. Since variation in the drawn shapes makes the search difficult, the system can show a guide (e.g., a 14-segment digital display) on the sketchpad so that users can trace their desired shape. In this paper, we take a first step toward investigating the feasibility of the sketch-based approach by examining how well users can trace a given shape using gaze input. We designed an interface in which participants traced a set of given shapes, and we then compared the similarity of the drawn and traced shapes. Our study results show the potential of the sketch-based approach: users were able to trace shapes reasonably well using gaze input, even for complex shapes involving three letters, and shape tracing accuracy for gaze was better than 'free-form' hand drawing. We also report on how different shape complexities influence the time and accuracy of the shape tracing tasks.

2019

Peripheral Notifications in Large Displays
Aristides Mairena, Carl Gutwin, Andy Cockburn
Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems

Visual notifications are integral to interactive computing systems. With large displays, however, much of the content lies in the user's visual periphery, where the human capacity to notice visual effects is diminished. One design strategy for enhancing noticeability is to combine visual features, such as motion and colour. Yet little is known about how feature combinations affect noticeability across the visual field, or about how peripheral noticeability changes when a user's primary task involves the same visual features as the notification. We addressed these questions by conducting two studies. Results of the first study showed that the noticeability of feature combinations was approximately equal to that of the better of the individual features. Results of the second study suggest that there can be interference between the features of primary tasks and the visual features in the notifications. Our findings contribute to a better understanding of how visual features operate when used as peripheral notifications.

2018

Improving revisitation in long documents with two-level artificial-landmark scrollbars
Ehsan Sotoodeh Mollashahi, Md. Sami Uddin, Carl Gutwin
Proceedings of the 2018 International Conference on Advanced Visual Interfaces

Document readers with linear navigation controls do not work well when users need to navigate to previously-visited locations, particularly when documents are long. Existing solutions - bookmarks, search, history, and read wear - are valuable but limited in terms of effort, clutter, and interpretability. In this paper, we investigate artificial landmarks as a way to improve support for revisitation in long documents - inspired by visual augmentations seen in physical books such as coloring on page edges or indents cut into pages. We developed several artificial-landmark visualizations that can represent locations even in documents that are many hundreds of pages long, and tested them in studies where participants visited multiple locations in long documents. Results show that providing two columns of landmark icons led to significantly better performance and user preference. Artificial landmarks provide a new mechanism to build spatial memory of long documents - and can be used either alone or with existing techniques like bookmarks, read wear, and search.

Pointing All Around You
Julian Petford, Miguel A. Nacenta, Carl Gutwin
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems

As display environments become larger and more diverse - now often encompassing multiple walls and room surfaces - it is becoming more common that users must find and manipulate digital artifacts that are not directly in front of them. There is little understanding, however, of which techniques and devices are best for carrying out basic operations above, behind, or to the side of the user. We conducted an empirical study comparing two main techniques that are suitable for full-coverage display environments: mouse-based pointing and ray-cast 'laser' pointing. Participants completed search and pointing tasks on the walls and ceiling, and we measured completion time, path length, and perceived effort. Our study showed a strong interaction between performance and target location: when the target position was not known a priori, the mouse was fastest for targets on the front wall, but ray-casting was faster for targets behind the user. Our findings provide new empirical evidence that can help designers choose pointing techniques for full-coverage spaces.