Publication


Featured research published by Tobin J. Lehman.


Hawaii International Conference on System Sciences | 1999

T Spaces: the next wave

Tobin J. Lehman; Stephen W. McLaughry; Peter Wyckoff

Millions of small heterogeneous computers are poised to spread into the infrastructure of our society. Though mostly inconspicuous today, disguised as nothing more than PIM (personal information management) computers, these tiny processors will eventually pervade most aspects of civilized life. The one thing holding them back from being everyone's portal to the new electronic society and the access point to an infinite store of information is the lack of a high-quality logical link to the world's network backbone. Enter T Spaces, a network middleware package for the new age of ubiquitous computing. T Spaces is a tuple space-based network communication buffer with database capabilities that enables communication between applications and devices on a network of heterogeneous computers and operating systems. With T Spaces, it is possible to connect all computers together, which leads the way towards an infinitely large cluster of cooperating machines. In this paper, we describe the T Spaces package and explore some distributed applications that use T Spaces.
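The tuple space model behind T Spaces is compact enough to sketch. Clients never address each other: a producer writes a tuple into the shared space, and a consumer takes any tuple matching a template. The following Java sketch shows that pattern under assumed, illustrative names (`TupleSpace`, `write`, `take`); it is a toy in-process model, not the actual TSpaces API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal in-memory tuple space: write() adds a tuple, take() blocks
// until a tuple matching the template exists (null fields are wildcards),
// then removes and returns it. Illustrative only, not the TSpaces API.
class TupleSpace {
    private final List<Object[]> tuples = new ArrayList<>();

    public synchronized void write(Object... tuple) {
        tuples.add(tuple);
        notifyAll(); // wake any client blocked in take()
    }

    public synchronized Object[] take(Object... template) throws InterruptedException {
        while (true) {
            for (Object[] t : tuples) {
                if (matches(t, template)) {
                    tuples.remove(t);
                    return t;
                }
            }
            wait(); // no match yet; block until another client writes
        }
    }

    private static boolean matches(Object[] tuple, Object[] template) {
        if (tuple.length != template.length) return false;
        for (int i = 0; i < template.length; i++) {
            if (template[i] != null && !template[i].equals(tuple[i])) return false;
        }
        return true;
    }
}

public class TupleSpaceDemo {
    public static void main(String[] args) throws InterruptedException {
        TupleSpace space = new TupleSpace();
        // A sensor thread publishes a reading without knowing who consumes it.
        new Thread(() -> space.write("temperature", "room-12", 21.5)).start();
        // A monitor takes any temperature tuple, matching on the first field only.
        Object[] reading = space.take("temperature", null, null);
        System.out.println(Arrays.toString(reading));
    }
}
```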


IBM Systems Journal | 1999

A universal information appliance

Kevin Francis Eustice; Tobin J. Lehman; Armando Morales; Michelle Christine Munson; Stefan Edlund; Miguel Guillen

The consumer's view of a universal information appliance (UIA) is a personal device, such as a PDA (personal digital assistant) or a wearable computer, that can interact with any application, access any information store, or remotely operate any electronic device. The technologist's view of the UIA is a portable computer, communicating over a bi-directional wireless link to an elaborate software system through which all programs, information stores, and electronic devices can export their interfaces to the UIA. Using an exported interface, the UIA can interoperate with the exporting entity, whether a home security system, a video cassette recorder, a corporate application, or an automobile navigation system. Furthermore, interfaces presented by the UIA can be tailored to the user's context, such as the user's preferences, behavior, and current surroundings. The UIA programming model supports dynamic interface style and content triggered on activity detected from the user's real-world and software context. In this paper we describe the design and first implementation of a UIA, a PDA that, through a wireless link, can interact with any program, access any database, or direct most electronic devices through a remote interface. The UIA model uses IBM's TSpaces software package as the interface delivery mechanism and resource database, and as the network communication glue. TSpaces supports communication between the UIA and any peer over a dual-mode wireless link. Using a popular application example, we present a generalized architecture in which the UIA is the mobile user's software portal for interoperating with any peer: another UIA, a common network service, a legacy application, or an electronic device.
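The export-an-interface idea can be pictured with a small sketch: a device publishes a UI descriptor into a shared registry (the role TSpaces plays in the paper), and the appliance discovers and drives it. All names here (`RemoteDevice`, `Vcr`, `registry`) are hypothetical stand-ins for the paper's actual mechanism.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the UIA pattern: each device exports a UI
// descriptor into a shared registry; the appliance discovers the
// descriptor, renders it, and invokes commands through it.
interface RemoteDevice {
    String describeInterface(); // e.g. a markup fragment the UIA can render
    void invoke(String command);
}

class Vcr implements RemoteDevice {
    public String describeInterface() {
        return "<panel><button cmd='play'/><button cmd='stop'/></panel>";
    }
    public void invoke(String command) {
        System.out.println("VCR executing: " + command);
    }
}

public class UiaSketch {
    // Shared registry standing in for the tuple space.
    static final Map<String, RemoteDevice> registry = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        // Device side: export an interface under a well-known name.
        registry.put("living-room-vcr", new Vcr());

        // UIA side: discover the device, render its interface, invoke it.
        RemoteDevice device = registry.get("living-room-vcr");
        System.out.println("Rendering: " + device.describeInterface());
        device.invoke("play");
    }
}
```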


International Conference on Management of Data | 1986

Query processing in main memory database management systems

Tobin J. Lehman; Michael J. Carey

Most previous work in the area of main memory database systems has focused on the problem of developing query processing techniques that work well with a very large buffer pool. In this paper, we address query processing issues for memory resident relational databases, an environment with a very different set of costs and priorities. We present an architecture for a main memory DBMS, discussing the ways in which a memory resident database differs from a disk-based database. We then address the problem of processing relational queries in this architecture, considering alternative algorithms for selection, projection, and join operations and studying their performance. We show that a new index structure, the T Tree, works well for selection and join processing in memory resident databases. We also show that hashing methods work well for processing projections and joins, and that an old join method, sort-merge, still has a place in main memory.
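The T Tree mentioned above keeps a small sorted array of keys in each node of a balanced binary tree, so a lookup alternates pointer chasing with in-node binary search. A minimal Java sketch of the search path, assuming integer keys and omitting insertion and rebalancing, looks like this:

```java
import java.util.Arrays;

// Sketch of the T-Tree idea: a balanced binary tree whose nodes each
// hold a small sorted array of keys. Search descends by comparing the
// key against each node's bounding values, then binary-searches within
// the node. Rebalancing and insertion are omitted for brevity.
class TTreeNode {
    int[] keys;          // sorted, non-empty
    TTreeNode left, right;

    TTreeNode(int... sortedKeys) { this.keys = sortedKeys; }

    int min() { return keys[0]; }
    int max() { return keys[keys.length - 1]; }
}

public class TTreeSearch {
    static boolean contains(TTreeNode node, int key) {
        while (node != null) {
            if (key < node.min()) {
                node = node.left;   // key below this node's range
            } else if (key > node.max()) {
                node = node.right;  // key above this node's range
            } else {
                // key falls within this node's bounds: it is here or nowhere
                return Arrays.binarySearch(node.keys, key) >= 0;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        TTreeNode root = new TTreeNode(40, 45, 50);
        root.left = new TTreeNode(10, 20, 30);
        root.right = new TTreeNode(60, 70, 80);
        System.out.println(contains(root, 45)); // true
        System.out.println(contains(root, 35)); // false
    }
}
```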


Computer Networks | 2001

Hitting the distributed computing sweet spot with TSpaces

Tobin J. Lehman; Alex Cozzi; Yuhong Xiong; Jonathan Gottschalk; Venu Vasudevan; Sean Landis; Pace Davis; Bruce Khavar; Paul Bowman

Our world is becoming increasingly heterogeneous, decentralized, and distributed, but the software that is supposed to work in this world usually is not. TSpaces is a communication package whose purpose is to alleviate the problems of hooking together disparate distributed systems. TSpaces is a global communication middleware component that incorporates database features, such as transactions, persistent data, flexible queries, and XML support. TSpaces is an excellent tool for building distributed applications, since it provides an asynchronous and anonymous link between multiple clients or services. The communication link provided by TSpaces gives application builders the advantage of ignoring some of the harder aspects of multi-client synchronization, such as tracking the names (and addresses) of all active clients, communication line status, and conversation status. For many different types of applications, the loose synchronization provided by TSpaces works extremely well. This paper relates our experiences in building distributed systems with TSpaces as the central communication component.
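The "asynchronous and anonymous link" is easiest to see in a request/response conversation: the client tags its request with a conversation id and later takes the matching response, while the service takes any pending request; neither side ever learns the other's name or address. Below is a self-contained Java sketch of that pattern, with hypothetical `write`/`take` helpers rather than the real TSpaces calls.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Anonymous request/response over a shared tuple space. Tuples are plain
// object arrays; null template fields are wildcards. Hypothetical names,
// not the TSpaces API.
public class RequestResponseSketch {
    private static final List<Object[]> space = new ArrayList<>();

    static void write(Object... tuple) {
        synchronized (space) {
            space.add(tuple);
            space.notifyAll();
        }
    }

    static Object[] take(Object... template) throws InterruptedException {
        synchronized (space) {
            while (true) {
                for (Object[] t : space) {
                    if (matches(t, template)) {
                        space.remove(t);
                        return t;
                    }
                }
                space.wait();
            }
        }
    }

    static boolean matches(Object[] t, Object[] template) {
        if (t.length != template.length) return false;
        for (int i = 0; i < t.length; i++) {
            if (template[i] != null && !Objects.equals(template[i], t[i])) return false;
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        // Service side: take any request, write a response keyed by its id.
        new Thread(() -> {
            try {
                Object[] req = take("request", null, null);
                write("response", req[1], "echo:" + req[2]);
            } catch (InterruptedException ignored) { }
        }).start();

        write("request", 42, "hello");              // client sends; no service address needed
        Object[] resp = take("response", 42, null); // client waits for its own conversation id
        System.out.println(resp[2]);                // prints echo:hello
    }
}
```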


International Conference on Management of Data | 1987

A recovery algorithm for a high-performance memory-resident database system

Tobin J. Lehman; Michael J. Carey

With memory prices dropping and memory sizes increasing accordingly, a number of researchers are addressing the problem of designing high-performance database systems for managing memory-resident data. In this paper we address the recovery problem in the context of such a system. We argue that existing database recovery schemes fall short of meeting the requirements of such a system, and we present a new recovery mechanism which is designed to overcome their shortcomings. The proposed mechanism takes advantage of a few megabytes of reliable memory in order to organize recovery information on a per “object” basis. As a result, it is able to amortize the cost of checkpoints over a controllable number of updates, and it is also able to separate post-crash recovery into two phases—high-speed recovery of data which is needed immediately by transactions, and background recovery of the remaining portions of the database. A simple performance analysis is undertaken, and the results suggest our mechanism should perform well in a high-performance, memory-resident database environment.
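The two-phase post-crash scheme can be pictured as a partition state machine: a transaction's first touch of an object triggers immediate log replay for that object, while a background thread sweeps up everything not yet demanded. The Java sketch below is a loose illustration of that division of labor, with the actual log replay stubbed out; all names are assumptions, not the paper's design.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Two-phase recovery sketch: partitions ("objects") touched by
// transactions are recovered on demand; a background thread recovers
// the rest. The per-object log replay itself is stubbed out.
public class TwoPhaseRecoverySketch {
    enum State { UNRECOVERED, RECOVERED }

    static final Map<String, State> partitions = new ConcurrentHashMap<>();

    // Phase 1: called on a transaction's first touch of a partition.
    static void ensureRecovered(String partition) {
        partitions.computeIfPresent(partition, (name, state) -> {
            if (state == State.UNRECOVERED) replayLog(name); // on demand, high priority
            return State.RECOVERED;
        });
    }

    // Phase 2: background sweep over whatever is still unrecovered.
    static void backgroundRecovery() {
        for (Map.Entry<String, State> e : partitions.entrySet()) {
            if (e.getValue() == State.UNRECOVERED) ensureRecovered(e.getKey());
        }
    }

    static void replayLog(String partition) {
        System.out.println("replaying per-object log for " + partition);
    }

    public static void main(String[] args) {
        partitions.put("accounts", State.UNRECOVERED);
        partitions.put("audit-history", State.UNRECOVERED);

        ensureRecovered("accounts");  // a transaction needs this partition now
        new Thread(TwoPhaseRecoverySketch::backgroundRecovery).start(); // the rest, later
    }
}
```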


IEEE Transactions on Knowledge and Data Engineering | 1992

An evaluation of Starburst's memory resident storage component

Tobin J. Lehman; Eugene J. Shekita; Luis-Felipe Cabrera

As part of the Starburst extensible database project, the authors have designed and implemented a memory resident storage component that can coexist alongside traditional disk-oriented storage components. The memory resident storage component shares the code of Starburst's common services, such as query optimization, plan generation, query evaluation, record manipulation, and transaction management. The design of Starburst's memory resident storage component is discussed and contrasted with Starburst's default disk-oriented storage component, and the performance of the two storage components is compared using the Wisconsin benchmarks. The results show that a memory resident storage component can perform significantly better than a disk-oriented storage component, even when the disk-oriented storage component has all of its data cached in memory. The benchmark results show that, by using memory resident techniques, overall query execution can be improved by up to a factor of four.


IBM Systems Journal | 1996

Storing and using objects in a relational database

Berthold Reinwald; Tobin J. Lehman; Hamid Pirahesh; Vibby Gottemukkala

In today's heterogeneous development environments, application programmers have the responsibility to segment their application data and to store those data in different types of stores. That means relational data will be stored in RDBMSs (relational database management systems), C++ objects in OODBMSs (object-oriented database management systems), SOM (System Object Model) objects in OMG (Object Management Group) persistent stores, and OpenDoc™ or OLE™ (Object Linking and Embedding) compound documents in document files. In addition, application programmers must deal with multiple server systems with different query languages as well as large amounts of heterogeneous data. This paper describes SMRC (shared memory-resident cache), an RDBMS extender that provides the ability to store objects created in external type systems like C++ or SOM in a relational database, coresident with existing relational or other heterogeneous data. Using SMRC, applications can store and retrieve objects via SQL (Structured Query Language), and invoke methods on the objects, without requiring any modifications to the original object definitions. Furthermore, the stored objects fully participate in all the characteristic features of the underlying relational database, e.g., transactions, backup, and authorization. SMRC is implemented on top of IBM's DB2® Common Server for AIX® relational database system and heavily exploits the DB2 user-defined types (UDTs), user-defined functions (UDFs), and large objects (LOBs) technology. In this paper, the C++ type system is used as a sample external type system to exemplify the SMRC approach, i.e., storing C++ objects in relational databases. Similar efforts are required for SOM or OLE objects.
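The underlying pattern, independent of SMRC's DB2-specific machinery, is that an object serialized into a large-object column rides along with ordinary SQL and therefore with transactions, backup, and authorization. Below is a hedged JDBC sketch of that general idea, assuming an in-memory H2 database on the classpath and using Java serialization in place of SMRC's external C++/SOM type support.

```java
import java.io.*;
import java.sql.*;

// General idea only: serialize an application object into a LOB column
// and retrieve it through ordinary SQL. SMRC itself stores C++/SOM
// objects inside DB2 using UDTs, UDFs, and LOBs; this is not its API.
public class ObjectInLobSketch {
    public static void main(String[] args) throws Exception {
        // Assumed: an in-memory H2 database; any JDBC database would do.
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            con.createStatement().execute(
                "CREATE TABLE parts (id INT PRIMARY KEY, body BLOB)");

            // Store: serialize the object into the LOB column.
            byte[] blob = serialize(new int[] {3, 1, 4, 1, 5});
            try (PreparedStatement ps =
                     con.prepareStatement("INSERT INTO parts VALUES (?, ?)")) {
                ps.setInt(1, 1);
                ps.setBytes(2, blob);
                ps.executeUpdate();
            }

            // Retrieve: SQL fetches the row, deserialization rebuilds the object.
            try (ResultSet rs = con.createStatement()
                     .executeQuery("SELECT body FROM parts WHERE id = 1")) {
                rs.next();
                int[] restored = (int[]) deserialize(rs.getBytes(1));
                System.out.println(restored.length + " elements restored");
            }
        }
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) { out.writeObject(o); }
        return buf.toByteArray();
    }

    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }
}
```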


Annual SRII Global Conference | 2011

We've Looked at Clouds from Both Sides Now

Tobin J. Lehman; Saurabh Vajpayee

Cloud computing is a versatile technology that can support a broad spectrum of applications. The low cost of cloud computing and its dynamic scaling render it an innovation driver for small companies, particularly in the developing world. Cloud-deployed enterprise resource planning (ERP), supply chain management (SCM), customer relationship management (CRM), medical, and mobile applications have the potential to reach millions of users. Cloud-deployed applications that employ mobile devices as end-points are particularly exciting due to the high penetration of mobile devices in countries like China, South Africa, and India. With the opportunities in cloud computing being greater than at any other time in history, we had to pause and reflect on our own experiences with cloud computing -- both as producers and consumers of that technology. Our interests and attitudes toward cloud technology differ considerably for each side of the cloud-computing topic. As producers of cloud-like infrastructure, much of our interest was on the technology itself. We experimented with algorithms for managing remote program invocation, fault tolerance, dynamic load balancing, proactive resource management, and meaningful distributed application monitoring. As consumers of cloud computing, however, our focus switched from interesting technology to usability, simplicity, reliability, and rock-solid data stability. With an eye to the many cloud articles in the recent news, we have to ask: is cloud computing ready for prime time? After reviewing stories about current cloud deployments, we conclude that cloud computing is not yet ready for general use: many significant cloud service failures have been reported, and several important issues remain unaddressed. Furthermore, besides the failures and gaps in the current cloud offerings, there is an inherent flaw in the model itself. Today, the cloud represents an opportunity for a client to outsource hardware/software function or program computing cycles. The missing piece is responsibility outsourcing -- today something found only in IT outsourcing contracts. This missing piece represents an essential component of a cloud offering. Without it, cloud consumers are left without any real reassurance that their data is safe from failures, catastrophe, or court-ordered search and seizure. In this paper, we explore the different viewpoints of cloud computing. Leveraging our experiences on both sides of clouds, we examine clouds from a technology aspect, a service aspect, and a responsibility aspect. We highlight some of the opportunities in cloud computing, underlining the importance of clouds and showing why the technology must succeed. Finally, we propose some usability changes for cloud computing that we feel are needed to make clouds ready for prime time.


Autonomic Computing Workshop | 2003

The Almaden OptimalGrid project

Glenn Deen; Tobin J. Lehman; James H. Kaufman

In this paper, we present a description of the Almaden OptimalGrid project: self-configuring, self-optimizing grid middleware that makes it easy to harness the computational resources available on a computing grid. OptimalGrid incorporates the core tenets of autonomic computing - self-configuring, self-healing, and self-optimizing - to create an environment in which application developers can exploit these features without the need to either build them or to code to external APIs. OptimalGrid implements autonomic functionality in a middleware layer, allowing the development of self-optimizing applications for a grid without the need to be conscious of the underlying grid technology.


FODO '89 Proceedings of the 3rd International Conference on Foundations of Data Organization and Algorithms | 1989

A Concurrency Control Algorithm for Memory-Resident Database Systems

Tobin J. Lehman; Michael J. Carey
