Mohammad Rakib Hasan
Given an m×n table T of positive weights and a rectangle R whose area equals the sum of the weights, a table cartogram is a partition of R into m×n convex quadrilateral faces such that each face has the same adjacencies as its corresponding cell in T and an area equal to that cell's weight. In this paper, we examine constraint-optimization-based and physics-inspired cartographic transformation approaches to produce cartograms for large tables with thousands of cells. We show that large table cartograms can provide diagrammatic representations in various real-life scenarios, e.g., for analyzing correlations between geospatial variables and for creating visual effects in images. Our experiments with real-life datasets provide insights into how one approach may outperform the other in different application contexts.
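As an illustrative sketch (not a method from the paper), the defining area constraint of a table cartogram can be verified numerically for a candidate partition: each convex quadrilateral face's area, computed with the shoelace formula, must equal the weight of its corresponding cell. The 1×2 example below is hypothetical.

```python
def quad_area(quad):
    """Area of a simple polygon given as a list of (x, y) vertices,
    via the shoelace formula."""
    s = 0.0
    n = len(quad)
    for i in range(n):
        x1, y1 = quad[i]
        x2, y2 = quad[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical 1x2 table with weights [1, 2]; R is the 3x1 rectangle
# partitioned by a vertical cut at x = 1.
weights = [1.0, 2.0]
faces = [
    [(0, 0), (1, 0), (1, 1), (0, 1)],  # face for cell with weight 1
    [(1, 0), (3, 0), (3, 1), (1, 1)],  # face for cell with weight 2
]

areas = [quad_area(f) for f in faces]
ok = all(abs(a - w) < 1e-9 for a, w in zip(areas, weights))
```

Real table cartogram algorithms must additionally preserve cell adjacencies and face convexity; this sketch only checks the area condition.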
Eye tracking systems can provide people with severe motor impairments a way to communicate through gaze-based interactions. Such systems transform a user's gaze input into mouse pointer coordinates that can trigger keystrokes on an on-screen keyboard. However, typing with this approach requires large back-and-forth eye movements, and the required effort depends on both the length of the text and the keyboard layout. Motivated by the idea of sketch-based image search, we explore a gaze-based approach where users draw a shape on a sketchpad using gaze input, and the shape is used to search for similar letters, words, and other predefined controls. The sketch-based approach is area efficient (compared to an on-screen keyboard), allows users to create custom commands, and creates opportunities for gaze-based authentication. Since variation in the drawn shapes makes the search difficult, the system can show a guide (e.g., a 14-segment digital display) on the sketchpad so that users can trace their desired shape. In this paper, we take a first step toward establishing the feasibility of the sketch-based approach by examining how well users can trace a given shape using gaze input. We designed an interface where participants traced a set of given shapes, and we then measured the similarity between the traced shapes and their targets. Our study results show the potential of the sketch-based approach: users were able to trace shapes reasonably well using gaze input, even for complex shapes involving three letters, and shape tracing accuracy with gaze was better than with 'free-form' hand drawing. We also report on how different shape complexities influence the time and accuracy of the shape tracing tasks.
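The abstract does not name the similarity measure used to compare a traced gaze path with its target shape. One common choice for this kind of comparison is the discrete Hausdorff distance between the two point sequences; the sketch below is a hypothetical illustration, not the study's actual metric.

```python
import math

def hausdorff(a, b):
    """Discrete Hausdorff distance between two point sets a and b,
    each a list of (x, y) tuples: the larger of the two directed
    max-min Euclidean distances."""
    def directed(p, q):
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

# Hypothetical target shape (a unit square outline) and a gaze trace
# offset by 0.1 along x, as if the user's gaze drifted slightly.
target = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
traced = [(0.1, 0.0), (1.1, 0.0), (1.1, 1.0), (0.1, 1.0)]

d = hausdorff(traced, target)  # ~0.1 for this uniform offset
```

In practice, gaze traces are usually resampled to a fixed number of points and normalized for position and scale before such a comparison, so that only the shape itself is scored.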