David Jeff Jackson
University of Alabama
Publications
Featured research published by David Jeff Jackson.
ACM Sigbed Review | 2005
David Jeff Jackson; Paul Caspi
In this paper we present the summary of and results from the 2005 Workshop on Embedded Systems Education (WESE2005). This workshop was held in conjunction with EMSOFT 2005, the leading conference for research in embedded systems software. The workshop focused on presenting experiences in embedded systems and embedded software education. Workshop sessions included a diverse set of international presenters leading discussions on embedded systems curricula and content; teaching experiences; and labs and platforms used in embedded systems education. A summary panel discussion concluded the workshop.
IEEE Transactions on Education | 2008
Kenneth G. Ricks; David Jeff Jackson; William A. Stapleton
The Department of Electrical and Computer Engineering at The University of Alabama, Tuscaloosa, has recently completed a major restructuring of its computer engineering curriculum. The main goal of this reform is to integrate a broad set of embedded systems concepts into the course sequence thereby creating an embedded systems focus throughout the curriculum. Breadth of embedded systems concepts is addressed by using the embedded systems component of the 2004 IEEE/ACM computer engineering model curriculum as the basis for the new curriculum content. Depth is attained by overlapping coverage of most topics using multiple courses and integrating forward and reverse references to these concepts among the courses. This paper presents the rationale behind the curriculum reform, the revised curriculum, lessons learned, and the results of a comprehensive assessment of its effectiveness.
Southeastern Symposium on System Theory | 1993
David Jeff Jackson; Sidney Joel Hannah
Data compression as it applies to image processing is addressed. The relative effectiveness of several image compression strategies is analyzed. This study covers data compression algorithms, file format schemes, and fractal image compression. An overview of the popular LZW compression algorithm and its subsequent variations is also given. Several common image file formats are surveyed, highlighting the differing approaches to image compression. Fractal compression is examined in depth to reveal how an iterative approach to image compression is implemented. The performance of these techniques is compared for a variety of landscape images, considering such parameters as data reduction ratios and information loss.
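As a rough sketch of the dictionary-based scheme the survey covers, a minimal Python LZW encoder (an illustration, not the paper's implementation) shows how repeated substrings are replaced by table codes:

def lzw_encode(data: bytes) -> list[int]:
    """Minimal LZW encoder: emits a list of dictionary codes."""
    # Initialize the dictionary with all single-byte strings.
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                    # extend the current match
        else:
            codes.append(table[w])    # emit code for the longest match
            table[wc] = next_code     # add the new string to the table
            next_code += 1
            w = bytes([byte])
    if w:
        codes.append(table[w])
    return codes

print(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT"))

Decoding rebuilds the same table symmetrically as codes arrive, which is why no dictionary needs to be transmitted alongside the compressed stream.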
Image and Vision Computing | 1997
David Jeff Jackson; Wagdy Mahmoud; William A. Stapleton; Patrick T. Gaughan
In this paper, we present a fractal image compression algorithm employing a new quadtree recomposition (QR) approach. In this approach, quadtree subblocks of an image are iteratively recombined into larger blocks for fractal coding. For complex images, this approach exhibits superior runtime performance, when compared to a classical quadtree decomposition (QD) scheme, while maintaining high fidelity for reconstructed images. Quantitative results include an evaluation of attained compression ratios, runtime performance, and signal-to-noise ratios (SNR) for reconstructed images.
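A minimal sketch of the bottom-up recomposition idea, assuming a square image whose sides are multiples of the block sizes; the variance test below is a stand-in for the paper's actual fractal-match criterion, and min_size, max_size, and err_tol are hypothetical parameters:

import numpy as np

def quadtree_recompose(img, min_size=4, max_size=32, err_tol=8.0):
    """Bottom-up quadtree recomposition sketch. Start from min_size
    blocks and merge four siblings into their parent whenever the
    parent block passes the merge test; a real fractal coder would
    test a domain-block match here instead of a variance threshold."""
    blocks = {}   # (x, y, size) -> True for blocks kept at this level
    h, w = img.shape
    size = min_size
    for y in range(0, h, size):
        for x in range(0, w, size):
            blocks[(x, y, size)] = True
    while size < max_size:
        parent = size * 2
        for y in range(0, h, parent):
            for x in range(0, w, parent):
                kids = [(x, y, size), (x + size, y, size),
                        (x, y + size, size), (x + size, y + size, size)]
                if all(blocks.get(k) for k in kids):
                    block = img[y:y + parent, x:x + parent]
                    if block.std() < err_tol:       # stand-in merge test
                        for k in kids:
                            del blocks[k]
                        blocks[(x, y, parent)] = True
        size = parent
    return sorted(blocks)

img = (np.random.rand(64, 64) * 255).astype(np.float64)
print(len(quadtree_recompose(img)))

Starting small and merging upward, rather than splitting downward, means complex images skip the wasted matching attempts on large blocks that a decomposition scheme must perform before subdividing.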
Computers & Electrical Engineering | 2005
Xianwei Wu; David Jeff Jackson; Hui-Chuan Chen
In this paper we present a fast fractal encoding method based on an intelligent search of a Standard Deviation (STD) value between range and domain blocks. First, we describe the basic fractal image compression theory and an improved bit allocation scheme for Jacquin's Iterated Function System (IFS) parameter. Experimental results show that using a Fixed Scale Parameter (FSP) can shorten encoding time without significantly affecting reconstructed image quality. Second, we present a search algorithm based on the STD introduced by Tong. We enhance Tong's STD search algorithm by introducing a domain Intelligent Classification Algorithm (ICA) based on STD-classified domain blocks. The domain block search pool is pruned by eliminating multiple domain blocks with similar STD values. We refer to this pruning as the De-Redundancy Method (DRM). The domain search process is adaptive with the range block STD value of interest controlling the size of the domain pool searched. We refer to this process as the Search Number Adaptive Control (SNAC). Finally, we present experimental results showing the efficiency of the proposed method, noting a significant improvement over Tong's original STD method without significant loss in the reconstructed image quality.
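A sketch of the STD classification, DRM-style pruning, and SNAC-style adaptive search-window ideas follows; the threshold parameters std_gap and window are hypothetical stand-ins for the paper's tuned values:

import numpy as np

def build_domain_pool(domains, std_gap=0.5):
    """Classify domain blocks by standard deviation and prune
    near-duplicates (a sketch of the STD classification plus
    de-redundancy idea)."""
    # Sort domains by STD so similar blocks become adjacent.
    scored = sorted(((d.std(), d) for d in domains), key=lambda t: t[0])
    pool = []
    last_std = None
    for s, d in scored:
        # DRM-style pruning: skip a domain whose STD is nearly
        # identical to the previously kept one.
        if last_std is None or s - last_std > std_gap:
            pool.append((s, d))
            last_std = s
    return pool

def candidate_domains(pool, range_std, window=2.0):
    """SNAC-style adaptive search: only consider domains whose STD
    lies within a window around the range block's STD."""
    return [d for s, d in pool if abs(s - range_std) <= window]

doms = [np.random.rand(8, 8) * s for s in (1.0, 1.01, 50.0)]
pool = build_domain_pool(doms)
print(len(pool), len(candidate_domains(pool, range_std=14.0)))

The intuition is that a domain block cannot match a range block well under an affine transform unless their intensity spreads are comparable, so most of the pool can be skipped without examining pixel data.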
The Computer Journal | 1996
David Jeff Jackson; Wagdy Mahmoud
In this paper we present a model and experimental results for performing parallel fractal image compression using circulating pipeline computation and employing a new quadtree recomposition approach. A circular linear array of processors is employed in a pipelined fashion. In this approach, a modification of the scheme given by Jackson and Blom, quadtree sub-blocks of an image are iteratively recombined into larger blocks for fractal coding. For complex images, this approach exhibits superior parallel runtime performance when compared to a classical quadtree decomposition scheme for fractal image compression, while maintaining high fidelity for reconstructed images. Quantitative results include parallel runtime comparisons with the decomposition approach for several images of varying complexity, and an evaluation of attained compression ratios and SNR for reconstructed images. Experimental results using an nCUBE-2 supercomputer are presented.
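The circulating-pipeline communication pattern can be sketched as a sequential simulation of a ring of P processors; match_cost is a stand-in for the fractal block-match metric, and this toy model is illustrative only, not the nCUBE-2 implementation:

def ring_fractal_search(range_blocks, domain_shards, match_cost):
    """Simulate a circulating pipeline: each processor p owns
    domain_shards[p], and range blocks hop around the ring so every
    range block eventually meets every domain shard."""
    P = len(domain_shards)
    best = {r: (float("inf"), None) for r in range(len(range_blocks))}
    # Initially, range block r sits on processor r % P.
    location = {r: r % P for r in range(len(range_blocks))}
    for step in range(P):                 # P hops visit every shard
        for r, p in location.items():
            for d in domain_shards[p]:    # test local domains only
                cost = match_cost(range_blocks[r], d)
                if cost < best[r][0]:
                    best[r] = (cost, d)
        # Rotate: every range block moves to the next processor.
        location = {r: (p + 1) % P for r, p in location.items()}
    return best

shards = [[0.5, 4.0], [10.0]]
print(ring_fractal_search([1.0, 9.0], shards, lambda r, d: abs(r - d)))

Because only the small range blocks circulate while the large domain pool stays resident, communication volume stays low and every processor remains busy on each pipeline step.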
Southeastern Symposium on System Theory | 1993
David Jeff Jackson; Sidney Joel Hannah
The authors address various forms of adder design commonly encountered in microprocessor design and describe the process of modeling these designs at the gate level using the Verilog hardware description language (HDL). Design and simulation parameters examined in a comparative analysis include design complexity, simulation time, propagation delay effects in adder design, and proper integration of a Verilog based adder description into a complete microprocessor design. Specific adder designs examined include: ripple carry (RC), carry lookahead (CLA), hybrid RC-CLA, single stage carry skip, and carry select adders.
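The paper models these designs in Verilog HDL; as a language-neutral illustration, a gate-level ripple-carry adder can be sketched in Python, making explicit the linear carry chain whose delay the CLA and carry-select designs attack:

def full_adder(a, b, cin):
    """Gate-level full adder: two XORs, two ANDs, one OR."""
    s = (a ^ b) ^ cin
    cout = (a & b) | ((a ^ b) & cin)
    return s, cout

def ripple_carry_add(a_bits, b_bits):
    """Chain full adders LSB-first; the carry 'ripples' through,
    so worst-case delay grows linearly with word width."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 4-bit example, bits listed LSB first: 6 + 3 = 9
a = [0, 1, 1, 0]   # 6
b = [1, 1, 0, 0]   # 3
print(ripple_carry_add(a, b))  # ([1, 0, 0, 1], 0) -> 9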
Journal of Applied Physics | 1996
John C. Lusth; David Jeff Jackson
In this article, graph theory is investigated as a tool for designing and analyzing quantum cellular automata (QCA). A method is presented for constructing graphs from an arbitrary automaton. The constructed graphs are used to model both the static and dynamic states of QCA. Several fundamental theorems concerning the constructed graphs are presented, relating QCA structure and behavior to graph colorings. Using these theorems, some simple QCA devices are analyzed and improvements, if applicable, are suggested.
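One way the coloring connection can be made concrete is a bipartiteness check: if an edge joins each pair of cells constrained to opposite polarization, a consistent static state exists exactly when the graph is 2-colorable. The sketch below is a toy rendering of that idea only; the paper's construction is more general:

from collections import deque

def two_colorable(n, edges):
    """BFS 2-coloring of a cell-interaction graph. Vertices are QCA
    cells; an edge joins cells forced to opposite polarization. A
    valid 2-coloring corresponds to a consistent static state."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return False, None   # odd cycle: no consistent state
    return True, color

# A 3-cell inverter chain: alternating polarizations are consistent.
print(two_colorable(3, [(0, 1), (1, 2)]))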
ACM Sigbed Review | 2009
Kenneth G. Ricks; David Jeff Jackson
As more and more embedded systems concepts are integrated into academic curricula, the incorporation of system-level concepts must keep pace with lower-level topics. However, there are several challenges faced by educators trying to integrate system-level concepts into embedded systems curricula. Four such challenges include: the breadth of the embedded systems field limits opportunities for system-level content; lack of adequate laboratory platforms addressing system-level alternatives; limited student exposure to diverse software tools; and assessment associated with system-level activities. Each of the challenges is described in detail and suggestions for overcoming them are offered.
Southeastern Symposium on System Theory | 1995
David Jeff Jackson; Thomas Blom
Data compression has become an important issue in relation to storage and transmission. This is especially true for databases consisting of a large number of detailed computer images. Many methods have been proposed in recent years for achieving high compression ratios for compressed image storage. A very promising compression technique, in terms of compression ratios, is fractal image compression. Fractal image compression exploits natural affine redundancy present in typical images to achieve a high compression ratio in a lossy compression format. Fractal-based compression algorithms, however, have high computational demands. To obtain faster compression, a sequential fractal image compression algorithm may be translated into a parallel algorithm. This translation takes advantage of the inherently parallel nature, from a data domain viewpoint, of the fractal transform process.
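A minimal sketch of that data-domain translation, using a Python process pool and a toy distance metric in place of the real block-matching step; encode_range_block and its scalar "blocks" are hypothetical stand-ins:

from multiprocessing import Pool

def encode_range_block(args):
    """Worker: fractal-code one range block against the domain pool
    (a toy scalar distance stands in for the block-match search)."""
    idx, range_block, domains = args
    best = min(domains, key=lambda d: abs(d - range_block))
    return idx, best

def parallel_encode(range_blocks, domains, workers=4):
    """Data-domain parallelism: range blocks are independent, so the
    sequential search can be farmed out to a process pool unchanged."""
    tasks = [(i, r, domains) for i, r in enumerate(range_blocks)]
    with Pool(workers) as pool:
        return dict(pool.map(encode_range_block, tasks))

if __name__ == "__main__":
    print(parallel_encode([1.0, 5.0, 9.0], [0.5, 4.0, 10.0]))

Because each range block's search touches only read-only domain data, no synchronization is needed beyond collecting the per-block results, which is what makes the fractal transform so amenable to this translation.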