
Publication


Featured research published by Richard J. LeBlanc.


Technical Symposium on Computer Science Education | 2006

Computing Curricula 2005: The Overview Report

Russell L. Shackelford; Andrew D. McGettrick; Robert H. Sloan; Heikki Topi; Gordon Davies; Reza Kamali; James H. Cross; John Impagliazzo; Richard J. LeBlanc; Barry M. Lunt

In 2001, the ACM and the IEEE-CS published Computing Curricula 2001, which contains curriculum recommendations for undergraduate programs in computer science. That report also called for additional discipline-specific volumes for each of computer engineering, information systems, and software engineering. In addition, it called for an Overview Volume to provide a synthesis of the various volumes. The Computing Curricula 2004 Task Force undertook the job of fulfilling the latter charge. The purpose of this session is to present the recently completed work of that Task Force, now known as Computing Curricula 2005 (CC2005), and to generate discussion among, and feedback from, SIGCSE members about ongoing and future work.


IEEE Computer | 1991

The Clouds distributed operating system

Partha Dasgupta; Richard J. LeBlanc; Mustaque Ahamad

The authors discuss a paradigm for structuring distributed operating systems, the potential and implications this paradigm has for users, and research directions for the future. They describe Clouds, a general-purpose operating system for distributed environments. It is based on an object-thread model adapted from object-oriented programming.


IEEE Transactions on Software Engineering | 1988

A study of the applicability of complexity measures

John Stephen Davis; Richard J. LeBlanc

A study of the predictive value of a variety of syntax-based program complexity measures is reported. Experimentation with variants of chunk-oriented measures showed that one should judiciously select measurable software attributes as proper indicators of what one wishes to predict, rather than hoping for a single, all-purpose complexity measure. The authors have shown that it is possible for particular complexity measures or other factors to serve as good predictors of some properties of programs but not of others. For example, a good predictor of construction time will not necessarily correlate well with the number of error occurrences. M.H. Halstead's (1977) effort measure (E) was found to be a better predictor than the two nonchunk measures evaluated, namely, T.J. McCabe's (1976) V(G) and lines of code, but at least one chunk measure predicted better than E in every case.
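The measures compared in this study have simple closed forms. As a rough illustration only (real metric tools parse source code to classify tokens; the token lists and control-flow graph below are hand-made for a tiny if/else fragment), the standard definitions of Halstead's effort and McCabe's V(G) can be sketched as:

```python
import math

def halstead_effort(operators, operands):
    """Halstead's effort E = D * V, computed from operator/operand token lists.
    n1/n2 are distinct counts, N1/N2 are total counts."""
    n1, n2 = len(set(operators)), len(set(operands))
    N1, N2 = len(operators), len(operands)
    volume = (N1 + N2) * math.log2(n1 + n2)      # V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)            # D
    return difficulty * volume                   # E = D * V

def mccabe_v_of_g(edges, nodes, components=1):
    """McCabe's cyclomatic complexity V(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

# Hand-tokenized fragment: if a > b then max := a else max := b
ops = ["if", ">", "then", ":=", "else", ":="]
opnds = ["a", "b", "max", "a", "max", "b"]
print(halstead_effort(ops, opnds))     # 180.0
print(mccabe_v_of_g(edges=4, nodes=4)) # one if/else: V(G) = 2
```

Lines of code, the third measure evaluated, is of course just a line count; the contrast the paper draws is between such "nonchunk" measures and measures defined over program chunks.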


International Conference on Software Engineering | 2007

Improving software practice through education: Challenges and future trends

Timothy C. Lethbridge; Jorge L. Díaz-Herrera; Richard J. LeBlanc; J.B. Thompson

We argue that the software engineering (SE) community could have a significant impact on the future of the discipline by focusing its efforts on improving the education of software engineers. There are some bright spots, such as the various projects to codify knowledge and the development of undergraduate SE programs. However, several key challenges remain, each of which is addressed in this paper: 1) making programs attractive to students, 2) focusing education appropriately, 3) communicating industrial reality more effectively, 4) defining curricula that are forward-looking, 5) providing education for existing practitioners, 6) making SE education more evidence-based, 7) ensuring that SE educators have the necessary background, and 8) raising the prestige and quality of SE educational research. For each challenge, we provide action items and open research questions.


IEEE Software | 1990

Recognizing design decisions in programs

Spencer Rugaber; Stephen B. Ornburn; Richard J. LeBlanc

The authors present a characterization of design decisions that is based on the analysis of programming constructs. The characterization underlies a framework for documenting and manipulating design information to facilitate maintenance and reuse activities. They identify and describe the following categories of design decisions: composition and decomposition; encapsulation and interleaving; generalization and specialization; representation; data and procedures; and function and relation. The authors discuss how to recognize and represent design decisions.


IEEE Software | 2006

SE2004: Recommendations for Undergraduate Software Engineering Curricula

Timothy C. Lethbridge; Richard J. LeBlanc; Ann E. Kelley Sobel; Thomas B. Hilburn; Jorge L. Díaz-Herrera

Universities throughout the world have established undergraduate programs in software engineering, which complement existing programs in computer science and computer engineering. To provide guidance in designing an effective curriculum, the IEEE Computer Society and the ACM have developed the Software Engineering 2004 (SE2004) set of recommendations. The SE2004 document guides universities and colleges regarding the knowledge they should teach in undergraduate software engineering programs. It also provides sample courses and curriculum patterns. SE2004 begins with an overview of software engineering, explaining how it is both a computing and an engineering discipline. It then outlines the principles that drove the document's development and describes expected student outcomes. Next, SE2004 details the knowledge that universities and colleges should teach, known as SEEK (software engineering education knowledge), in a software engineering program. These recommendations are followed by general pedagogical guidelines, sample courses, and sample curriculum patterns.


Workshop on Parallel and Distributed Debugging | 1988

Event-based debugging of object/action programs

Chu-Chung Lin; Richard J. LeBlanc

Work on the Clouds Project [Dasg85, LeBl85b, Dasg88] at Georgia Tech has included the design of a distributed debugger that includes an algorithm exploiting the semantics of object/action computations to allow interactive debugging of distributed programs. The debugger allows a user to debug a distributed program from multiple viewpoints, at various abstraction levels, and with various degrees of control over program execution.
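The core idea of event-based debugging can be sketched independently of the Clouds implementation. In the hypothetical sketch below (the `Event` fields and the replay loop are illustrative, not the paper's design), the debugger observes a stream of object/action events and "breaks" when a user-supplied predicate over an event matches:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    obj: str      # object the action is invoked on
    op: str       # the operation/action itself
    thread: int   # thread performing the action

def run_with_breakpoints(trace: List[Event],
                         predicate: Callable[[Event], bool]) -> List[Event]:
    """Replay a recorded event trace, collecting the events at which the
    breakpoint predicate fires (a real debugger would suspend execution there)."""
    hits = []
    for ev in trace:
        if predicate(ev):
            hits.append(ev)   # suspend point: user could inspect state here
    return hits

trace = [Event("queue", "enqueue", 1),
         Event("queue", "dequeue", 2),
         Event("log", "append", 1)]
# Break whenever thread 2 touches the shared "queue" object.
hits = run_with_breakpoints(trace, lambda e: e.obj == "queue" and e.thread == 2)
print([(e.op, e.thread) for e in hits])   # [('dequeue', 2)]
```

Filtering on events rather than source lines is what lets such a debugger offer multiple viewpoints and abstraction levels: the same trace can be queried per object, per thread, or per operation.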


Technical Symposium on Computer Science Education | 2007

The computing ontology: application in education

Lillian N. Cassel; Gordon Davies; William Fone; Anneke Hacquebard; John Impagliazzo; Richard J. LeBlanc; Joyce Currie Little; Andrew D. McGettrick; Michela Pedroni

Working Group 3 at ITiCSE 2007 continued the ongoing work of the Ontology of Computing project. The working group brought several new people into the project and addressed areas of the ontology of particular interest to these participants. In particular, the group worked on the Ontology sections related to History of Computing, Computing Security and Social and Ethical issues. With the intention of applying the ontology to the support of curriculum development in mind, the group also reviewed and discussed proposed means of presenting a visual representation of the ontology. There was also some work on the present structure of the ontology and future possibilities.


IEEE Transactions on Software Engineering | 1980

The Implementation of Run-Time Diagnostics in Pascal

Charles N. Fischer; Richard J. LeBlanc

This paper considers the role of run-time diagnostic checking in enforcing the rules of the Pascal programming language. Run-time diagnostic checks must be both complete (covering all language requirements) and efficient. Further, such checks should be implemented so that the cost of enforcing the correct use of a given construct is borne by users of that construct. This paper describes simple and efficient mechanisms currently in use with a diagnostic Pascal compiler that monitor the run-time behavior of such sensitive Pascal constructs as pointers, variant records, reference (i.e., var) parameters, and with statements. The use of these mechanisms with related constructs in other languages is considered. Language modifications that simplify run-time checking are also noted.
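One of the constructs named, the variant record, illustrates what such a check must do: before any field access, verify that the field belongs to the currently active variant. The sketch below is an illustrative analogue only (the class, its field names, and the `VARIANTS` table are invented for this example, not the paper's compiler mechanism); it also reflects the pay-per-construct principle, since only code that touches a variant field pays for the tag check:

```python
class VariantRecord:
    """A tagged union guarded by a run-time check, loosely analogous to
    checking a Pascal variant record's discriminant on each field access."""
    VARIANTS = {"int": {"ival"}, "real": {"rval"}}   # tag -> legal fields

    def __init__(self, tag):
        self._tag = tag
        self._fields = {}

    def set(self, name, value):
        self._check(name)
        self._fields[name] = value

    def get(self, name):
        self._check(name)
        return self._fields[name]

    def _check(self, name):
        # The run-time diagnostic: reject access to a field that does not
        # belong to the currently active variant.
        if name not in self.VARIANTS[self._tag]:
            raise RuntimeError(f"field {name!r} invalid for variant {self._tag!r}")

r = VariantRecord("int")
r.set("ival", 42)
print(r.get("ival"))   # 42
try:
    r.get("rval")      # wrong variant: the diagnostic fires
except RuntimeError as e:
    print("caught:", e)
```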


International Conference on Computer Languages | 1992

Distributed Eiffel: a language for programming multi-granular distributed objects on the Clouds operating system

L. Gunaseelan; Richard J. LeBlanc

The design and implementation of Distributed Eiffel, a language for distributed programming built on top of the Clouds operating system by extending the object-oriented language Eiffel, are discussed. The language presents a programming paradigm based on objects of multiple granularity. While large-grained persistent objects serve as units of distribution, fine-grained objects are used to describe and manipulate entities within these units. The language design makes it possible to implement both shared-memory and message-passing models of parallel programming within a single programming paradigm. The language provides features with which the programmer can declaratively fine-tune synchronization at any desired object granularity and maximize concurrency. With the primitives provided, it is possible to combine and control both data migration and computation migration effectively, at the language level. The design addresses such issues as parameter passing, asynchronous invocations and result claiming, and concurrency control.
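The "asynchronous invocations and result claiming" idea mentioned at the end is essentially the futures model. As a hedged sketch using Python's standard futures (Distributed Eiffel's actual syntax and runtime differ; the `Account` object here is an invented stand-in for a large-grained persistent object), the caller proceeds after an invocation and blocks only when it later claims the result:

```python
from concurrent.futures import ThreadPoolExecutor

class Account:
    """Stand-in for a large-grained persistent object."""
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance

acct = Account(100)
with ThreadPoolExecutor() as pool:
    # Asynchronous invocation: the caller continues immediately...
    fut = pool.submit(acct.deposit, 50)
    # ...and claims the result later, blocking only at this point.
    print(fut.result())   # 150
```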

Collaboration


Dive into Richard J. LeBlanc's collaborations.

Top Co-Authors

Mustaque Ahamad
Georgia Institute of Technology

Charles N. Fischer
University of Wisconsin-Madison

William F. Appelbe
Georgia Institute of Technology

Russell L. Shackelford
Georgia Institute of Technology