Simon Lok
Columbia University
Publications
Featured research published by Simon Lok.
Intelligent User Interfaces | 2004
Simon Lok; Steven Feiner; Gary Ngai
Layout refers to the process of determining the size and position of the visual objects in an information presentation. We introduce the WeightMap, a bitmap representation of the visual weight of a presentation. In addition, we present algorithms that use WeightMaps to allow an automated layout system to evaluate the effectiveness of its layouts. Our approach is based on the concepts of visual weight and visual balance, which are fundamental to the visual arts. The objects in the layout are each assigned a visual weight, and a WeightMap is created that encodes the visual weight of the layout. Image-processing techniques, including pyramids and edge detection, are then used to efficiently analyze the WeightMap for balance. In addition, derivatives of the sums of the rows and columns are used to generate suggestions for how to improve the layout.
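The abstract gives no code, so the following is a minimal, hypothetical Python/NumPy sketch of the balance check it describes: per-object visual weights are rasterized into a bitmap, and row and column sums are compared across the layout's midlines. The pyramid and edge-detection steps mentioned in the abstract are omitted, and all names here (build_weightmap, balance_report, the (x, y, w, h, weight) tuples) are illustrative assumptions, not the paper's actual interfaces.

```python
import numpy as np

def build_weightmap(objects, width, height):
    """Rasterize each object's visual weight into a single 2D bitmap.

    `objects` holds (x, y, w, h, weight) tuples; these names are
    illustrative stand-ins, not the paper's actual data structures.
    """
    wm = np.zeros((height, width), dtype=float)
    for x, y, w, h, weight in objects:
        wm[y:y + h, x:x + w] += weight
    return wm

def balance_report(wm):
    """Estimate horizontal and vertical balance from row and column sums."""
    col_sums = wm.sum(axis=0)            # weight distribution left to right
    row_sums = wm.sum(axis=1)            # weight distribution top to bottom
    mid_x, mid_y = len(col_sums) // 2, len(row_sums) // 2
    horiz = col_sums[:mid_x].sum() - col_sums[mid_x:].sum()   # >0: heavy on the left
    vert = row_sums[:mid_y].sum() - row_sums[mid_y:].sum()    # >0: heavy on top
    # A simple suggestion: move content toward the lighter side.
    return {"horizontal_imbalance": horiz, "vertical_imbalance": vert,
            "suggest_shift_right": horiz > 0, "suggest_shift_down": vert > 0}

# Two blocks of different weight, concentrated in the upper-left corner.
layout = [(10, 10, 80, 40, 1.0), (10, 60, 40, 40, 0.6)]
print(balance_report(build_weightmap(layout, 200, 150)))
```

The half-plane comparison is the crudest possible balance measure; the paper's use of image pyramids would let the same idea be evaluated at multiple spatial scales.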
Intelligent User Interfaces | 2002
Simon Lok; Steven Feiner
We describe an automated layout system called AIL that generates the user interface for the PERSIVAL digital library project. AIL creates a layout based on a variety of content components and their associated metadata, provided by the PERSIVAL generation and retrieval modules. By leveraging semantic links between the content components, the layout that AIL produces is both context- and user-model-aware. In addition, AIL can interact intelligently with the natural language generation components of PERSIVAL to tailor the length of the text content for a given layout.
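The abstract does not say how AIL negotiates text length with the generation components, so the sketch below shows one plausible, hypothetical shape for that interaction in Python: the layout system repeatedly requests shorter text until it fits the space it has allocated. The nlg_generate callback and all parameters are assumptions for illustration, not PERSIVAL's actual API.

```python
def fit_text_to_region(nlg_generate, region_height_px, line_height_px,
                       chars_per_line, max_attempts=5):
    """Request progressively shorter text until it fits the allotted region.

    `nlg_generate(target_words)` stands in for a call into a natural
    language generation component; the name and signature are assumed.
    """
    max_lines = max(1, region_height_px // line_height_px)
    target_words = max_lines * chars_per_line // 6      # rough words-per-line guess
    text = nlg_generate(target_words)
    for _ in range(max_attempts):
        lines_needed = -(-len(text) // chars_per_line)   # ceiling division
        if lines_needed <= max_lines:
            break
        target_words = max(1, int(target_words * 0.8))   # ask for ~20% less text
        text = nlg_generate(target_words)
    return text

# Example: a stub generator that truncates a fixed summary to a word budget.
summary = ("The patient cohort showed improved outcomes under the revised "
           "treatment protocol across all measured endpoints.").split()
stub = lambda n: " ".join(summary[:n])
print(fit_text_to_region(stub, region_height_px=40, line_height_px=20,
                         chars_per_line=30))
```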
Workshop on Applications of Computer Vision | 1998
Shree K. Nayar; Joshua Gluckman; Rahul Swaminathan; Simon Lok; Terrance E. Boult
Conventional video cameras have limited fields of view which make them restrictive in a variety of applications. A catadioptric sensor uses a combination of lenses and mirrors placed in a carefully arranged configuration to capture a much wider field of view. At Columbia University, we have developed a wide range of catadioptric sensors. Some of these sensors have been designed to produce unusually large fields of view. Others have been constructed for the purpose of depth computation. All our sensors perform in real time using just a PC.
ACM/IEEE Joint Conference on Digital Libraries | 2001
Noémie Elhadad; Min-Yen Kan; Simon Lok; Smaranda Muresan
In this demonstration, we present several integrated components of PERSIVAL (PErsonalized Retrieval and Summarization of Image, Video And Language) [1], a system designed to provide personalized access to a distributed digital library of medical literature and consumer health information. The global system architecture of PERSIVAL is best described as a two-stage processing pipeline. The first stage is a retrieval system that matches user queries with relevant multimedia data in the library. The second stage is a visualization system that processes the multimedia data matched by the first stage for display. Our demonstration focuses on the second stage of PERSIVAL's processing pipeline. Given a set of relevant documents for certain predefined queries, our integrated demonstration seeks to give a tailored response for either physicians or patients, featuring textual summaries as well as relevant medical definitions. To visualize the summaries and definitions, we employ automated constraint-based layout of the user interface that allows for rich interaction between summaries and definitions. PERSIVAL's natural language processing and user interface modules make up the visualization portion of the system and illustrate state-of-the-art digital library technology. The following are the modules presented in our demonstration.
Archive | 1999
Simon Lok; Shree K. Nayar
Abstract: The process of teleoperation can be described as allowing a remote user to control a vehicle by interpreting sensor information captured by the vehicle. One method frequently used to implement teleoperation is to provide the user with a real-time video display from a perspective camera mounted on the vehicle. This method limits the remote user to seeing the environment in which the vehicle is present through the fixed viewpoint at which the camera is mounted. Having a fixed viewpoint is extremely limiting and significantly impedes the remote user's ability to navigate properly. One way to address this problem is to mount the perspective camera on a pan-tilt device. This is rarely done because it is expensive and introduces a significant increase in implementation complexity from both a mechanical and an electrical point of view. With the advent of omnidirectional camera technology, there is now a second, more attractive alternative. This paper describes the PARAROVER, a remote-controlled vehicle constructed in the summer of 1998 to demonstrate the use of omnidirectional camera technology and a virtual reality display for vehicular teleoperation, audio-video surveillance, and forward reconnaissance.
Smart Graphics | 2001
Simon Lok; Steven Feiner
Archive | 2001
Simon Lok; Steven Feiner
International World Wide Web Conference | 2002
Simon Lok; Steven Feiner; William M. Chiong; Yoav J. Hirsch
International World Wide Web Conference | 2003
Simon Lok; Min-Yen Kan
Automated layout of information presentations | 2005
Steven Feiner; Simon Lok