Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jungseock Joo is active.

Publication


Featured research published by Jungseock Joo.


Computer Vision and Pattern Recognition | 2013

Weakly Supervised Learning for Attribute Localization in Outdoor Scenes

Shuo Wang; Jungseock Joo; Yizhou Wang; Song-Chun Zhu

In this paper, we propose a weakly supervised method for simultaneously learning scene parts and attributes from a collection of images associated with attributes in text, where the precise localization of each attribute is left unknown. Our method includes three aspects. (i) Compositional scene configuration. We learn the spatial layouts of the scene with a Hierarchical Space Tiling (HST) representation, which can generate an excessive number of scene configurations through the hierarchical composition of a relatively small number of parts. (ii) Attribute association. The scene attributes contain nouns and adjectives corresponding to the objects and their appearance descriptions, respectively. We assign the nouns to the nodes (parts) in the HST using non-maximum suppression of their correlation, then train an appearance model for each noun+adjective attribute pair. (iii) Joint inference and learning. For an image, we compute the most probable parse tree with the attributes as an instantiation of the HST by dynamic programming, then update the HST and attribute association based on the inferred parse trees. We evaluate the proposed method by (i) showing the improvement of attribute recognition accuracy, and (ii) comparing the average precision of localizing attributes to the scene parts.
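The dynamic-programming inference over hierarchical space tilings can be illustrated with a toy sketch. The code below is not the paper's implementation: the `part_score` function is a hypothetical placeholder for the learned noun+adjective appearance models, and the grid is tiny, but the memoized recursion over leaf-vs-split choices is a correct instance of DP over hierarchical compositions of rectangular tiles.

```python
from functools import lru_cache

def part_score(x0, y0, x1, y1):
    """Hypothetical stand-in for a learned appearance model's score
    on the region [x0,x1) x [y0,y1); here it simply favors squares."""
    w, h = x1 - x0, y1 - y0
    return -abs(w - h)

@lru_cache(maxsize=None)
def best_parse(x0, y0, x1, y1):
    """Best score and parse tree for tiling a region, chosen among
    keeping it as a leaf part or splitting it vertically/horizontally."""
    best = (part_score(x0, y0, x1, y1), ("leaf", (x0, y0, x1, y1)))
    for x in range(x0 + 1, x1):                      # vertical cuts
        s_l, t_l = best_parse(x0, y0, x, y1)
        s_r, t_r = best_parse(x, y0, x1, y1)
        if s_l + s_r > best[0]:
            best = (s_l + s_r, ("vcut", t_l, t_r))
    for y in range(y0 + 1, y1):                      # horizontal cuts
        s_t, t_t = best_parse(x0, y0, x1, y)
        s_b, t_b = best_parse(x0, y, x1, y1)
        if s_t + s_b > best[0]:
            best = (s_t + s_b, ("hcut", t_t, t_b))
    return best

score, tree = best_parse(0, 0, 4, 3)   # parse a 4x3 grid of cells
```

In the full method this inference step alternates with re-estimating the tiling grammar and attribute associations from the inferred parse trees.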


Proceedings of the National Academy of Sciences of the United States of America | 2014

Coiling of elastic rods on rigid substrates

Mohammad Jawed; Fang Da; Jungseock Joo; Eitan Grinspun; Pedro M. Reis

Significance: The deployment of a rodlike structure onto a moving substrate is commonly found in a variety of engineering applications, from the fabrication of nanotube serpentines to the laying of submarine cables and pipelines. Predictively understanding the resulting coiling patterns is challenging given the nonlinear geometry of deposition. In this paper, we combine precision model experiments with computer simulations of a rescaled analogue system and explore the mechanics of coiling. In particular, the natural curvature of the rod is found to dramatically affect the coiling process. We have introduced a computational framework that is widely used in computer animation into engineering, as a predictive tool for the mechanics of filamentary structures.

We investigate the deployment of a thin elastic rod onto a rigid substrate and study the resulting coiling patterns. In our approach, we combine precision model experiments, scaling analyses, and computer simulations toward developing a predictive understanding of the coiling process. Both deposition onto static and onto moving substrates is considered. We construct phase diagrams for the possible coiling patterns and characterize them as a function of the geometric and material properties of the rod, as well as the height and relative speed of deployment. The modes selected and their characteristic length scales are found to arise from a complex interplay between the gravitational, bending, and twisting energies of the rod, coupled to the geometric nonlinearities intrinsic to the large deformations. We give particular emphasis to the first sinusoidal mode of instability, which we find to be consistent with a Hopf bifurcation, and analyze the meandering wavelength and amplitude. Throughout, we systematically vary the natural curvature of the rod as a control parameter, which has a qualitative and quantitative effect on the pattern formation above a critical value that we determine. The universality conferred by the prominent role of geometry in the deformation modes of the rod suggests using the gained understanding as design guidelines in the original applications that motivated the study.


International Conference on Computer Vision | 2013

Human Attribute Recognition by Rich Appearance Dictionary

Jungseock Joo; Shuo Wang; Song-Chun Zhu

We present a part-based approach to the problem of human attribute recognition from a single image of a human body. To recognize the attributes of a person from body parts, it is important to detect the parts reliably. This is a challenging task due to geometric variation, such as articulation and viewpoint changes, as well as the appearance variation of the parts arising from diverse clothing types. Prior works have primarily focused on handling the geometric variation by relying on pre-trained part detectors or pose estimators, which require manual part annotation, but the appearance variation has been relatively neglected. This paper explores the importance of the appearance variation, which is directly related to the main task, attribute recognition. To this end, we propose to learn a rich appearance part dictionary of the human body with significantly less supervision, by decomposing the image lattice into overlapping windows at multiple scales and iteratively refining local appearance templates. We also present quantitative results in which our proposed method outperforms existing approaches.
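The multiscale decomposition of the image lattice into overlapping windows can be sketched as follows. This is a toy illustration, not the paper's code: the scale set and overlap ratio are assumed values, and in the actual method each window would seed an appearance template that is then iteratively refined.

```python
def multiscale_windows(width, height, scales=(1.0, 0.5, 0.25), overlap=0.5):
    """Decompose a width x height image lattice into overlapping square
    windows at several scales, forming the candidate pool from which
    local appearance templates could be mined."""
    windows = []
    base = min(width, height)
    for s in scales:
        win = max(1, int(base * s))               # window side at this scale
        step = max(1, int(win * (1 - overlap)))   # stride from overlap ratio
        for y in range(0, height - win + 1, step):
            for x in range(0, width - win + 1, step):
                windows.append((x, y, win, win))
    return windows

pool = multiscale_windows(64, 48)   # (x, y, w, h) tuples, coarse to fine
```

Finer scales contribute many more windows than coarse ones, which is what lets the dictionary capture small, localized appearance patterns (e.g., clothing details) alongside whole-body templates.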


IEEE Transactions on Multimedia | 2017

Joint Image-Text News Topic Detection and Tracking by Multimodal Topic And-Or Graph

Weixin Li; Jungseock Joo; Hang Qi; Song-Chun Zhu

This paper presents a novel method for automatically detecting and tracking news topics from multimodal TV news data. We propose a multimodal topic and-or graph (MT-AOG) to jointly represent textual and visual elements of news stories and their latent topic structures. An MT-AOG leverages a context-sensitive grammar that can describe the hierarchical composition of news topics by semantic elements about the people involved, related places, and what happened, and model contextual relationships between elements in the hierarchy. We detect news topics through a cluster sampling process which groups stories about closely related events together. Swendsen-Wang cuts, an effective cluster sampling algorithm, is adopted for traversing the solution space and obtaining optimal clustering solutions by maximizing a Bayesian posterior probability. The detected topics are then continuously tracked and updated with incoming news streams. We generate topic trajectories to show how topics emerge, evolve, and disappear over time. The experimental results show that our method can explicitly describe the textual and visual data in news videos and produce meaningful topic trajectories. Our method also outperforms previous methods on the task of document clustering on the Reuters-21578 dataset and our novel UCLA Broadcast News dataset.
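The bond-sampling step at the heart of Swendsen-Wang cuts can be illustrated with a small sketch. This is not the paper's implementation: the story graph and similarity values below are invented for illustration, and only the cluster-proposal step is shown; the full algorithm then accepts or rejects a proposed relabeling of such a component via a Metropolis-Hastings step on the Bayesian posterior.

```python
import random

def sample_clusters(n_stories, edges, seed=0):
    """One bond-sampling step in the spirit of Swendsen-Wang cuts:
    each edge (i, j, q) is turned 'on' with probability q (its story
    similarity), and connected components of the 'on' edges form the
    candidate topic clusters proposed to the sampler."""
    rng = random.Random(seed)
    parent = list(range(n_stories))          # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i

    for i, j, q in edges:
        if rng.random() < q:                 # bond turned on
            parent[find(i)] = find(j)        # merge the two components

    clusters = {}
    for i in range(n_stories):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# five stories; closely related pairs get high bond probability
edges = [(0, 1, 0.9), (1, 2, 0.9), (3, 4, 0.9), (2, 3, 0.05)]
groups = sample_clusters(5, edges)
```

Because whole components flip at once, the sampler can move between very different clusterings in a single step, which is what makes it effective for traversing the solution space.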


Künstliche Intelligenz | 2017

Red Hen Lab: Dataset and Tools for Multimodal Human Communication Research

Jungseock Joo; Francis F. Steen; Mark B. Turner

Researchers in the fields of AI and Communication both study human communication, but despite the opportunities for collaboration, they rarely interact. Red Hen Lab is dedicated to bringing them together for research on multimodal communication, using multidisciplinary teams working on vast ecologically-valid datasets. This article introduces Red Hen Lab with some possibilities for collaboration, demonstrating the utility of a variety of machine learning and AI-based tools and methods to fundamental research questions in multimodal human communication. Supplemental materials are at http://babylon.library.ucla.edu/redhen/KI.


Linguistics Vanguard | 2018

Toward an infrastructure for data-driven multimodal communication research

Francis F. Steen; Anders Hougaard; Jungseock Joo; Inés Olza; Cristóbal Pagán Cánovas; Anna Pleshakova; Soumya Ray; Peter Uhrig; Javier Valenzuela; Jacek Woźny; Mark B. Turner

Research into the multimodal dimensions of human communication faces a set of distinctive methodological challenges. Collecting the datasets is resource-intensive, analysis often lacks peer validation, and the absence of shared datasets makes it difficult to develop standards. External validity is hampered by small datasets, yet large datasets are intractable. Red Hen Lab spearheads an international infrastructure for data-driven multimodal communication research, facilitating an integrated cross-disciplinary workflow. Linguists, communication scholars, statisticians, and computer scientists work together to develop research questions, annotate training sets, and develop pattern discovery and machine learning tools that handle vast collections of multimodal data, beyond the dreams of previous researchers. This infrastructure makes it possible for researchers at multiple sites to work in real-time in transdisciplinary teams. We review the vision, progress, and prospects of this research consortium.


ACM Multimedia | 2018

Social and Political Event Analysis based on Rich Media

Jungseock Joo; Zachary C. Steinert-Threlkeld; Jiebo Luo

This tutorial aims to provide a comprehensive overview of the applications of rich social media data for real-world social and political event analysis, a newly emerging topic in multimedia research. We will discuss the recent evolution of social media as venues for social and political interaction and their impact on real-world events, using specific examples. We will introduce large-scale datasets drawn from social media sources and review concrete research projects that build on computer vision and deep learning based methods. Existing research on social media has examined various patterns of information diffusion and contagion, user activities and networking, and social media-based prediction of real-world events. Most existing works, however, rely on non-content or text-based features and do not fully leverage the rich modalities -- visuals and acoustics -- that are prevalent in most online social media. Such approaches underutilize the vibrant and integrated characteristics of social media, especially as current audiences are increasingly drawn to visual, information-centric media. This tutorial highlights the impact of rich multimodal data on real-world events and elaborates on relevant recent research projects -- their concrete development, data governance, technical details, and implications for politics and society -- on the following topics: 1) decoding non-verbal content to identify intent and impact in political messages in mass and social media, such as political advertisements, debates, or news footage; 2) recognition of emotion, expressions, and viewer perception from communicative gestures, gazes, and facial expressions; 3) geo-coded Twitter image analysis for protest and social movement analysis; 4) election outcome prediction and voter understanding using social media posts; and 5) detection of misinformation, rumors, and fake news and analysis of their impact in major political events such as the U.S. presidential election.


Computer Vision and Pattern Recognition | 2014

Visual Persuasion: Inferring Communicative Intents of Images

Jungseock Joo; Weixin Li; Francis F. Steen; Song-Chun Zhu


International Conference on Computer Vision | 2015

Automated Facial Trait Judgment and Election Outcome Prediction: Social Dimensions of Face

Jungseock Joo; Francis F. Steen; Song-Chun Zhu


International Conference on Weblogs and Social Media | 2018

Characterizing Clickbaits on Instagram.

Yu-i Ha; Jeongmin Kim; Donghyeon Won; Meeyoung Cha; Jungseock Joo

Collaboration


Dive into Jungseock Joo's collaborations.

Top Co-Authors

Song-Chun Zhu
University of California

Weixin Li
University of California

Donghyeon Won
University of California

Hang Qi
University of California

Jiebo Luo
University of Rochester

Mark B. Turner
Case Western Reserve University