Publication


Featured research published by Lee Begeja.


International Conference on Mobile Systems, Applications, and Services | 2005

MediaAlert - a broadcast video monitoring and alerting system for mobile users

Bin Wei; Bernard S. Renger; Yih-Farn Chen; Rittwik Jana; Huale Huang; Lee Begeja; David C. Gibbon; Zhu Liu; Behzad Shahraray

We present a system for automatic monitoring and timely dissemination of multimedia information to a range of mobile information appliances based on each user's interest profile. Multimedia processing algorithms detect and isolate relevant video segments from over twenty television broadcast programs based on a collection of words and phrases specified by the user. Content repurposing techniques are then used to convert the information into a form that is suitable for delivery to the user's mobile devices. Alerts are sent using a number of application messaging and network access protocols including email, short message service (SMS), multimedia messaging service (MMS), voice, session initiation protocol (SIP), fax, and pager protocols. The system is evaluated with respect to performance and user experience. The MediaAlert system provides an effective and low-cost solution for the timely generation of alerts containing personal, business, and security information.
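To make the keyword-driven alerting pipeline concrete, the sketch below matches transcript segments against a user's interest profile and dispatches a notification. The class and function names (UserProfile, match_segments, send_alert) and the data model are illustrative assumptions, not part of the published MediaAlert implementation.

```python
# Minimal sketch of keyword-based alerting, assuming a simple data model.
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    keywords: set[str]          # words and phrases the user subscribed to
    delivery: str               # e.g. "email", "sms", "mms"

@dataclass
class VideoSegment:
    program: str
    start_sec: float
    end_sec: float
    transcript: str             # closed captions or ASR output

def match_segments(segments: list[VideoSegment],
                   profile: UserProfile) -> list[VideoSegment]:
    """Return segments whose transcript contains any of the user's keywords."""
    hits = []
    for seg in segments:
        text = seg.transcript.lower()
        if any(kw.lower() in text for kw in profile.keywords):
            hits.append(seg)
    return hits

def send_alert(profile: UserProfile, segment: VideoSegment) -> None:
    # Placeholder for protocol-specific delivery (email, SMS, MMS, SIP, ...).
    print(f"[{profile.delivery}] alert to {profile.user_id}: "
          f"{segment.program} @ {segment.start_sec:.0f}s")

if __name__ == "__main__":
    profile = UserProfile("alice", {"earthquake", "tsunami"}, "sms")
    segments = [VideoSegment("Evening News", 120, 180,
                             "A magnitude 6 earthquake struck the coast today")]
    for seg in match_segments(segments, profile):
        send_alert(profile, seg)
```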


North American Chapter of the Association for Computational Linguistics | 2004

A system for searching and browsing spoken communications

Lee Begeja; Bernard S. Renger; Murat Saraclar; David C. Gibbon; Zhu Liu; Behzad Shahraray

As the amount of spoken communications accessible by computers increases, searching and browsing become crucial for gathering information from such material. It is desirable for multimedia content analysis systems to handle various data formats and to serve varying user needs while presenting a simple and consistent user interface. In this paper, we present a research system for searching and browsing spoken communications. The system uses core technologies such as speaker segmentation, automatic speech recognition, transcription alignment, keyword extraction, and speech indexing and retrieval to make spoken communications easy to navigate. The main focus is on telephone conversations and teleconferences, with comparisons to broadcast news.
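A minimal sketch of the indexing-and-retrieval idea: an inverted index over recognized transcripts with AND-style keyword search. The data model and function names are assumptions for illustration and do not reflect the system's actual implementation.

```python
# Sketch: inverted index over ASR transcripts with conjunctive keyword search.
from collections import defaultdict

def build_index(transcripts: dict[str, str]) -> dict[str, set[str]]:
    """Map each word to the set of recording IDs whose transcript contains it."""
    index: dict[str, set[str]] = defaultdict(set)
    for rec_id, text in transcripts.items():
        for word in text.lower().split():
            index[word].add(rec_id)
    return index

def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return recordings containing every query word (simple AND semantics)."""
    words = query.lower().split()
    if not words:
        return set()
    result = index.get(words[0], set()).copy()
    for word in words[1:]:
        result &= index.get(word, set())
    return result

if __name__ == "__main__":
    transcripts = {
        "call_001": "quarterly earnings were discussed on the teleconference",
        "call_002": "the broadcast news segment covered the election",
    }
    idx = build_index(transcripts)
    print(search(idx, "earnings teleconference"))   # {'call_001'}
```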


Proceedings of SPIE | 2013

VidCat: An Image and Video Analysis Service for Personal Media Management

Lee Begeja; Eric Zavesky; Zhu Liu; David C. Gibbon; Raghuraman Gopalan; Behzad Shahraray

Cloud-based storage and consumption of personal photos and videos provides increased accessibility, functionality, and satisfaction for mobile users. One cloud service frontier that has recently been growing is personal media management. This work presents a system called VidCat that assists users in the tagging, organization, and retrieval of their personal media by faces, visual content similarity, and time and date information. Evaluations of the effectiveness of the copy detection and face recognition algorithms on standard datasets are also discussed. Finally, the system includes a set of application programming interfaces (APIs) allowing content to be uploaded, analyzed, and retrieved on any client with simple HTTP-based methods, as demonstrated with a prototype developed on the iOS and Android mobile platforms.
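The sketch below shows how a client might call a VidCat-style HTTP API to upload an item and retrieve visually similar media. The base URL, endpoint paths, and response fields are hypothetical; the abstract only states that content can be uploaded, analyzed, and retrieved with simple HTTP-based methods.

```python
# Sketch of a client for a VidCat-style HTTP API (endpoints are placeholders).
import requests

BASE_URL = "https://example.com/vidcat/api"   # placeholder, not a real endpoint

def upload_photo(path: str, user_token: str) -> str:
    """Upload an image and return the (assumed) media ID from the response."""
    with open(path, "rb") as f:
        resp = requests.post(f"{BASE_URL}/media",
                             files={"file": f},
                             headers={"Authorization": f"Bearer {user_token}"})
    resp.raise_for_status()
    return resp.json()["media_id"]             # assumed response schema

def find_similar(media_id: str, user_token: str) -> list[dict]:
    """Retrieve media judged visually similar to the given item."""
    resp = requests.get(f"{BASE_URL}/media/{media_id}/similar",
                        headers={"Authorization": f"Bearer {user_token}"})
    resp.raise_for_status()
    return resp.json()["results"]              # assumed response schema
```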


International Conference on Acoustics, Speech, and Signal Processing | 2005

Semantic data mining of short utterances

Lee Begeja; Harris Drucker; David C. Gibbon; Patrick Haffner; Zhu Liu; Bernard S. Renger; Behzad Shahraray

This paper introduces a methodology for speech data mining along with the tools that the methodology requires. We show how they increase the productivity of the analyst who seeks relationships among the contents of multiple utterances and ultimately must link newly discovered context into testable hypotheses about new information. While, in its simplest form, one can extend text data mining to speech data mining by using text tools on the output of a speech recognizer, we have found that this approach is not optimal. We show how data mining techniques that are typically applied to text should be modified to enable an analyst to perform effective semantic data mining on a large collection of short speech utterances. For the purposes of this paper, we examine semantic data mining in the context of semantic parsing and analysis in a specific situation involving the solution of a business problem that is known to the analyst. We are not attempting a generic semantic analysis of a collection of speech. Our tools and methods allow the analyst to mine the speech data to discover the semantics that best cover the desired solution. The coverage, in this case, yields a set of Natural Language Understanding (NLU) classifiers that serve as testable hypotheses.
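As a point of reference for the "text tools on speech recognizer output" baseline discussed above, the sketch below trains a simple utterance classifier over short transcripts. The example utterances, labels, and use of scikit-learn are illustrative assumptions, not the paper's toolset or methodology.

```python
# Sketch: a text classifier over short ASR transcripts (the naive baseline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled utterances (ASR output) for a business problem.
utterances = [
    "i want to cancel my service",
    "please cancel the account",
    "my bill looks too high this month",
    "there is a charge on my bill i do not recognize",
]
labels = ["cancel", "cancel", "billing", "billing"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(utterances, labels)

print(clf.predict(["why is my bill so expensive"]))   # expected: ['billing']
```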


Consumer Communications and Networking Conference | 2009

Contextual Advertising for IPTV Using Automated Metadata Generation

Lee Begeja; Paul Van Vleck

Current IPTV monetization focuses on mainstream programming that is well understood, with metadata that is stored in advance. Current contextual advertising is focused on web searches and static information. As the demand for online video grows into the IPTV market, the need to monetize it effectively becomes important. As the advertising market for IPTV expands into less mainstream video (e.g., YouTube and other user-generated content), it becomes more important to automate the generation of metadata for the video. The metadata associated with self-described YouTube videos is unreliable and cannot be used as a basis for effective contextual advertising. Automatic generation, however, brings down the cost of creating metadata and gives advertisers a neutral party that evaluates the data. We propose a system in which a trusted intermediary automatically generates reliable metadata for multimedia.
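A minimal sketch of the downstream ad-matching step that automatically generated metadata would enable. Keyword-overlap scoring is an illustrative stand-in, not the matching method proposed by the authors.

```python
# Sketch: match ads to a video's automatically generated metadata terms.
def score(ad_keywords: set[str], video_metadata: set[str]) -> float:
    """Jaccard overlap between an ad's keywords and a video's metadata terms."""
    if not ad_keywords or not video_metadata:
        return 0.0
    return len(ad_keywords & video_metadata) / len(ad_keywords | video_metadata)

def best_ad(ads: dict[str, set[str]], video_metadata: set[str]) -> str:
    """Pick the ad whose keywords best overlap the video's generated metadata."""
    return max(ads, key=lambda ad_id: score(ads[ad_id], video_metadata))

if __name__ == "__main__":
    ads = {
        "hiking_boots": {"outdoors", "hiking", "trail"},
        "game_console": {"gaming", "console", "esports"},
    }
    metadata = {"hiking", "mountain", "trail", "outdoors"}   # e.g. from ASR/vision
    print(best_ad(ads, metadata))                            # hiking_boots
```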


International Symposium on Multimedia | 2009

Searching and Browsing Video in Face Space

Lee Begeja; Zhu Liu

We propose an approach to searching and browsing video and multimedia using the results of face clustering. Once a set of faces has been detected, we use various techniques to create a set of face clusters for a particular video. These clusters enable new, non-linguistic approaches to browsing and searching video that can be applied internationally, in any language.
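A minimal sketch of grouping detected faces by clustering their embeddings, the step that underpins the proposed browsing interface. The embedding source and the choice of DBSCAN are assumptions for illustration, not the clustering techniques described in the paper.

```python
# Sketch: cluster face embeddings so faces of the same person share a label.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_faces(embeddings: np.ndarray) -> np.ndarray:
    """Cluster L2-normalized face embeddings; returns one label per face (-1 = noise)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return DBSCAN(eps=0.5, min_samples=2, metric="euclidean").fit_predict(normed)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic "identities": embeddings drawn around two distinct centers.
    person_a = rng.normal(loc=0.0, scale=0.05, size=(5, 128)) + 1.0
    person_b = rng.normal(loc=0.0, scale=0.05, size=(5, 128)) - 1.0
    labels = cluster_faces(np.vstack([person_a, person_b]))
    print(labels)   # faces from the same identity share a cluster label
```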


Archive | 2002

Method and system for embedding information into streaming media

Lee Begeja; David Crawford Gibbon; Kenneth Mervin Huber; Zhu Liu; Robert Edward Markowitz; Bernard S. Renger; Behzad Shahraray; Gary Lee Zamchick


Archive | 2001

Method and System for Remote Call Forwarding of Telephone Calls from an IP Connection

Lee Begeja; Jeffrey Joseph Farah; Neil A. Ostroff


Archive | 2001

Method and system for personalized multimedia delivery service

Lee Begeja; David Crawford Gibbon; Kenneth Mervin Huber; Zhu Liu; Robert Edward Markowitz; Bernard S. Renger; Behzad Shahraray; Gary Lee Zamchick


Archive | 2006

Personalized local recorded content

Behzad Shahraray; David Crawford Gibbon; Lee Begeja; Zhu Liu; Richard V. Cox; Bernard S. Renger
