Dan E. Tamir
Texas State University
Publication
Featured research published by Dan E. Tamir.
Computer Software and Applications Conference | 2009
Carl J. Mueller; Dan E. Tamir; Oleg V. Komogortsev; Liam Feldman
Many software engineers consider usability testing one of the more expensive, tedious, and least rewarding tests to implement. Making usability testing less expensive and more rewarding requires results that pinpoint issues in the software without relying on expensive consultants and facilities. To accomplish these goals, this paper presents a novel way of measuring software usability and an approach to designing usability tests that does not require external consultants or expensive laboratory facilities. The usability testing approach discussed in this paper also permits testing earlier in the development process. A key element of this technique is the use of traditional testing concepts and techniques, such as scenario-based testing, to measure the productivity and learnability of the subject. By constructing test cases or tasks to measure the learnability of the application, the developer has a way to measure the quality of both the test and the software.
North American Fuzzy Information Processing Society | 2012
Dan E. Tamir; Horia N. Teodorescu; Abraham Kandel
In this paper we propose a complex fuzzy logic (CFL) system based on the extended Post (multi-valued logic) system (EPS) of order p > 2, and demonstrate its utility for reasoning with fuzzy facts and rules. The advantage of this formalism is that it is discrete; hence, it better fits real-time applications, digital signal processing, and embedded systems that use integer processing units. Propositional calculus as well as first-order predicate calculus for EPS-based CFL systems are developed, and the application to approximate reasoning is described.
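As a rough illustration of why a discrete, integer-valued logic suits integer-only hardware, here is a minimal sketch of Post-style multi-valued connectives over {0, ..., p-1}. The operator choices (max/min disjunction/conjunction and cyclic negation) are common conventions for Post systems, not the paper's EPS-based CFL definitions; all names are illustrative.

```python
# Minimal sketch of discrete multi-valued (Post) logic over {0, ..., p-1}.
# The paper's EPS-based complex fuzzy logic is richer; this only shows the
# discrete, integer-only flavor of reasoning the abstract refers to.

P = 8  # order of the Post system (p > 2); chosen arbitrarily for the sketch

def post_or(a: int, b: int) -> int:
    """Disjunction in a Post system: the maximum of the truth values."""
    return max(a, b)

def post_and(a: int, b: int) -> int:
    """Conjunction: the minimum of the truth values."""
    return min(a, b)

def post_not(a: int) -> int:
    """Cyclic negation, one common choice in Post systems."""
    return (a + 1) % P

# Reasoning with discrete truth degrees uses only integer operations,
# which is why it fits embedded systems with integer processing units.
fact = 5  # truth degree of a fuzzy fact
rule = 7  # truth degree of a rule antecedent
print(post_and(fact, rule))  # 5
```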
Systems, Man and Cybernetics | 2009
Carl J. Mueller; Dan E. Tamir; Oleg V. Komogortsev; Liam Feldman
Designing human-computer interfaces is one of the more important and difficult design tasks. The tools for verifying the quality of an interface are frequently expensive or provide feedback too long after the interface is designed to be meaningful. To improve interface usability, designers need a verification tool that provides immediate feedback at low cost. Using an effort-based measure of usability, a designer can estimate the effort a subject might expend to complete a specific task. In this paper, we develop the notion of designer's effort for evaluating interface usability, for both new designs and commercial off-the-shelf software. Designer's effort provides a technique to evaluate a human interface before development of the software is complete, and it provides feedback from usability tests conducted using the effort-based evaluation technique.
International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems | 2009
David Karhi; Dan E. Tamir
Heuristic search techniques can often benefit from record keeping and saving of intermediate results, thereby improving performance through exploitation of time/space tradeoffs. Iterative hill climbing (ITHC) is one of these heuristics. This paper demonstrates that record keeping in the ITHC domain can significantly speed up the search; the record keeping method is similar to the mechanism of a cache. The new approach is implemented and tested in the traveling salesperson search space. The research compares a traditional random restart (RR) procedure to a new greedy enumeration (GE) method. GE produces Hamiltonian cycles that are about 10% shorter than those of RR. Moreover, the cached RR achieves a speedup of 3x with a relatively small number of cities and only 20% with a medium number of cities (~17). The cached GE shows a highly significant speedup of 4x over traditional methods even with a relatively large number of cities (>80).
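The cached-restart idea can be sketched as follows: a simple swap-neighborhood hill climber for the traveling salesperson problem, with a dictionary cache that memoizes tour lengths so that restarts revisiting a tour skip re-evaluation. This is an illustrative stand-in under stated assumptions, not the paper's ITHC implementation; the greedy enumeration (GE) variant is not shown.

```python
import random
from itertools import combinations

# Sketch of iterative hill climbing with record keeping: a cache memoizes
# tour lengths across restarts, trading space for repeated evaluation time.

def tour_length(tour, dist, cache):
    """Length of the Hamiltonian cycle, memoized in `cache`."""
    key = tuple(tour)
    if key not in cache:
        cache[key] = sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                         for i in range(len(tour)))
    return cache[key]

def hill_climb(tour, dist, cache):
    """First-improvement hill climbing over the city-swap neighborhood."""
    improved = True
    while improved:
        improved = False
        best = tour_length(tour, dist, cache)
        for i, j in combinations(range(len(tour)), 2):
            cand = tour[:]
            cand[i], cand[j] = cand[j], cand[i]
            if tour_length(cand, dist, cache) < best:
                tour = cand
                best = tour_length(cand, dist, cache)
                improved = True
    return tour

def ithc_random_restart(dist, restarts=10, seed=0):
    """Random-restart ITHC; the cache persists across restarts."""
    rng = random.Random(seed)
    n = len(dist)
    cache = {}
    best = None
    for _ in range(restarts):
        tour = list(range(n))
        rng.shuffle(tour)
        tour = hill_climb(tour, dist, cache)
        if best is None or (tour_length(tour, dist, cache)
                            < tour_length(best, dist, cache)):
            best = tour
    return best, tour_length(best, dist, cache)
```

Because the cache is keyed on the full tour, its benefit grows with the number of restarts that wander through previously evaluated states, which is consistent with the time/space tradeoff the abstract describes.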
Empirical Software Engineering and Measurement | 2009
Liam Feldman; Carl J. Mueller; Dan E. Tamir; Oleg V. Komogortsev
Usability testing activities have numerous benefits in theory, yet they are often overlooked or disregarded in practice. A testing paradigm that yields objective, quantitative results would likely lead to more widespread adoption of usability evaluation activities; total-effort metrics is one such framework. This paper describes a usability study conducted using a total-effort metrics approach. In this study, subjects interact with three interfaces with varying element layout proximities. The time and effort measures of time-on-task, total keystrokes, correctional keystrokes, saccade amplitude (point-to-point eye movement), and gaze-path traversal are recorded and analyzed. The findings of the study demonstrate a correlation between the intrinsic effort of an interface and its usability as predicted by extant interface layout guidelines.
Data Compression Conference | 1996
Dan E. Tamir; Kim Phillips; Abdul-Razzak Abdul-Karim
This paper presents a new and efficient method of encoding uniform image regions and lines. Regions and lines are obtained as the result of image segmentation, split-and-merge image compression, or as the output of line and polygon drawing algorithms. Lines and contours of uniform regions are encoded using chain code, obtained in a way that is efficient with respect to bit-rate and produces lossless contour and line encoding. A lossy method for contour encoding is also presented, along with a set of experiments comparing the performance of traditional chain-code contour encoding with the improved contour encoding. The results show a reduction of about 50% in the bit-rate with no reconstruction error.
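For context, here is a minimal sketch of the classic 8-direction Freeman chain coding that this line of work builds on: each step between adjacent pixels is stored as a 3-bit direction code, and decoding losslessly reconstructs the path. The direction numbering is one common convention; the paper's bit-rate optimizations are not reproduced here.

```python
# Classic 8-direction Freeman chain coding of a pixel path.
# Direction codes 0..7, counter-clockwise starting at East (one common
# convention); each code needs only 3 bits.

DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_encode(points):
    """Encode a pixel path as (start point, list of 3-bit direction codes)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return points[0], codes

def chain_decode(start, codes):
    """Losslessly reconstruct the path from its chain code."""
    inverse = {v: k for k, v in DIRECTIONS.items()}
    pts = [start]
    for c in codes:
        dx, dy = inverse[c]
        x, y = pts[-1]
        pts.append((x + dx, y + dy))
    return pts
```

Storing a contour as a start point plus 3 bits per step is what makes chain coding attractive for region boundaries; the lossless round trip mirrors the "no reconstruction error" property the abstract reports.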
Systems, Man and Cybernetics | 2010
Dan E. Tamir; Carl J. Mueller
Often, software developers are notified of usability issues in their code through feedback from users and human-computer interface (HCI) experts. Yet, in many cases, the description of the issues cannot be correlated with the actual user interface (UI) program code. In this paper, we present a novel effort-based usability model, in conjunction with a framework for identifying and locating (i.e., pinpointing) software usability issues and correlating them with UI software code. The model is based on the notion that usability is an inverse function of effort. Another innovative aspect of this framework is its focus on learning in the process of assessing usability measurements. Physical effort is obtained and inferred from logs of manual activity (e.g., keystrokes) and eye tracking. Experimental results from this and other studies show high correlation with learning-theory models and strongly support the relationship of effort to usability. The underlying theory and the findings of the experiments are used to propose a framework for user interface development in which interface designers use effort-based metrics to pinpoint usability issues in the code.
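The usability-as-inverse-effort notion can be sketched in a few lines: total effort as a weighted sum of logged physical-activity measures, and a usability score that decreases as effort grows. The weights and the 1/(1 + effort) form are illustrative assumptions for the sketch, not the paper's calibrated model.

```python
# Toy sketch of an effort-based usability score. Total effort is a weighted
# combination of logged measures (keystrokes, corrections, eye movement);
# usability is modeled as an inverse function of effort. Weights and the
# inverse form are illustrative assumptions only.

def total_effort(keystrokes, corrections, saccade_amplitude_deg,
                 weights=(1.0, 2.0, 0.5)):
    """Weighted sum of physical-effort measures from activity logs."""
    w_k, w_c, w_s = weights
    return w_k * keystrokes + w_c * corrections + w_s * saccade_amplitude_deg

def usability_score(effort):
    """Higher effort maps to lower usability."""
    return 1.0 / (1.0 + effort)
```

Weighting corrections more heavily than plain keystrokes reflects the intuition that correctional activity signals a usability problem rather than productive work; any real weighting would have to be fitted to experimental data.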
Data Compression Conference | 1995
Dan E. Tamir; Chiyeon Park; Wook-Sung Yoo
A multi-resolution K-means clustering method is presented. Starting with a low-resolution sample of the input data, the K-means algorithm is applied to a sequence of monotonically increasing-resolution samples of the given data. The cluster centers obtained from a low-resolution stage are used as the initial cluster centers for the next, higher-resolution stage. The idea behind this method is that a good estimate of the initial location of the cluster centers can be obtained through K-means clustering of a sample of the input data. K-means clustering of the entire data set, with initial cluster centers estimated by clustering a sample of the input data, reduces the convergence time of the algorithm.
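The multi-resolution warm-start can be sketched as follows: run K-means on increasingly large samples of the data, seeding each stage with the centers from the previous (coarser) stage. The sampling schedule, initialization, and convergence test here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of multi-resolution K-means: each stage clusters a larger sample,
# warm-started with the centers found at the previous, coarser stage.

def kmeans(data, centers, iters=50):
    """Plain Lloyd-style K-means from the given initial centers."""
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        new = np.array([data[labels == k].mean(axis=0)
                        if np.any(labels == k) else centers[k]
                        for k in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return centers

def multires_kmeans(data, k, fractions=(0.1, 0.5, 1.0), seed=0):
    """Run K-means on samples of increasing resolution (illustrative schedule)."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for f in fractions:
        m = max(k, int(f * len(data)))
        sample = data[rng.choice(len(data), m, replace=False)]
        centers = kmeans(sample, centers)  # warm-start from the coarser stage
    return centers
```

Most of the iteration cost is paid on the small early samples, so the final full-resolution pass starts near good centers and converges in few iterations, which is the speedup the abstract claims.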
Green Technologies Conference | 2012
Apan Qasem; Michael Jason Cade; Dan E. Tamir
In the last few years, the emergence of multicore architectures has revolutionized the landscape of high-performance computing. The multicore shift has not only increased the per-node performance potential of computer systems but has also made great strides in curbing power and heat dissipation. As we look to the future, however, the gains in performance and energy consumption are not going to come from hardware alone. Software needs to play a key role in achieving a high fraction of peak and keeping energy consumption within the desired envelope. To attain this goal, performance-enhancing and energy-conserving software needs to carefully orchestrate many architecture-sensitive parameters. In particular, the presence of shared caches on multicore architectures makes it necessary to consider, in concert, issues related to both parallelism and data locality to achieve the desired power-performance ratio. This paper studies the complex interaction among several code transformations that affect data locality, problem decomposition, and the selection of loops for parallelism. We characterize this interaction using static compiler analysis and generate a pruned search space suitable for efficient autotuning. We also extend a heuristic, based on the number of threads, data reuse patterns, and the size and configuration of the shared cache, to estimate a good synchronization interval for conserving energy in parallel code. We validate our choice of tuning parameters and evaluate our heuristic with experiments on a set of scientific and engineering kernels on four different multicore platforms. Results of the experimental study reveal several interesting properties of the transformation search space and demonstrate the effectiveness of the heuristic in predicting good synchronization intervals that reduce energy consumption without a significant degradation in performance.
Human Factors in Computing Systems | 2011
Oleg V. Komogortsev; Corey Holland; Dan E. Tamir; Carl J. Mueller
This paper presents an objective evaluation of several methods for the automated classification of excessive visual search, a technique that has the potential to aid in the identification of usability problems during software usability testing. Excessive visual search was identified by a number of eye movement metrics, including fixation count, saccade amplitude, convex hull area, scanpath inflections, scanpath length, and scanpath duration. The excessive search intervals identified by each algorithm were compared to those produced by manual classification. The results indicate that automated classification can be successfully employed to substantially reduce the amount of recorded data reviewed during usability testing, with relatively little loss in accuracy.
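One of the simplest classifiers in the family described, thresholding a single eye-movement metric, might look like the sketch below. The metric choice (fixation count), the threshold value, and the data layout are illustrative assumptions, not the study's actual algorithms or parameters.

```python
# Sketch of metric-threshold classification of excessive visual search:
# flag a recorded interval when one eye-movement metric (here, fixation
# count) exceeds a threshold. Threshold and record format are illustrative.

def excessive_search_intervals(intervals, threshold=12):
    """Return indices of intervals flagged as excessive visual search.

    `intervals` is a list of dicts, each with a 'fixation_count' key.
    """
    return [i for i, itv in enumerate(intervals)
            if itv["fixation_count"] > threshold]
```

In practice each candidate metric (saccade amplitude, convex hull area, scanpath length, and so on) would get its own threshold, and the flagged intervals would be compared against manually classified ones, as the evaluation in the paper does.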