Publications


Featured research published by Seong Joon Oh.


International Conference on Computer Vision | 2015

Person Recognition in Personal Photo Collections

Seong Joon Oh; Rodrigo Benenson; Mario Fritz; Bernt Schiele

Recognising persons in everyday photos presents major challenges (occluded faces, different clothing, locations, etc.) for machine vision. We propose a convnet-based person recognition system and provide an in-depth analysis of the informativeness of different body cues, the impact of training data, and the system's common failure modes. In addition, we discuss the limitations of existing benchmarks and propose more challenging ones. Our method is simple and built on open source and open data, yet it improves on the state of the art on a large dataset of social media photos (PIPA).
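
The cue-fusion idea can be pictured with a minimal sketch; the backbone, crop names, and classifier head below are illustrative assumptions, not the authors' released code. Features are extracted from several body-region crops and concatenated before an identity classifier.

```python
# Minimal sketch of body-cue fusion for person recognition (illustrative only).
import torch
import torch.nn as nn
import torchvision.models as models

class CueFusionRecogniser(nn.Module):
    def __init__(self, num_identities, cues=("head", "upper_body", "full_body")):
        super().__init__()
        self.cues = cues
        # One convnet feature extractor per body cue; in practice each would be
        # fine-tuned on crops of that cue.
        self.extractors = nn.ModuleDict({
            cue: nn.Sequential(*list(models.resnet18().children())[:-1])
            for cue in cues
        })
        self.classifier = nn.Linear(512 * len(cues), num_identities)

    def forward(self, crops):
        # crops: dict mapping cue name -> batch of crops, shape (B, 3, 224, 224)
        feats = [self.extractors[cue](crops[cue]).flatten(1) for cue in self.cues]
        return self.classifier(torch.cat(feats, dim=1))
```

Analysing which cues matter then amounts to ablating entries of `cues` and comparing recognition accuracy.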


European Conference on Computer Vision | 2016

Faceless Person Recognition: Privacy Implications in Social Media

Seong Joon Oh; Rodrigo Benenson; Mario Fritz; Bernt Schiele

As we shift more of our lives into the virtual domain, the volume of data shared on the web keeps increasing and presents a threat to our privacy. This work contributes to the understanding of the privacy implications of such data sharing by analysing how recognisable people are in social media data. To facilitate a systematic study, we define a number of scenarios considering factors such as how many tagged heads of a person are available and whether those heads are obfuscated. We propose a robust person recognition system that can handle large variations in pose and clothing and can be trained with few samples. Our results indicate that a handful of images is enough to threaten users' privacy, even in the presence of obfuscation. We report detailed experimental results and discuss their implications.


Computer Vision and Pattern Recognition | 2017

Exploiting Saliency for Object Segmentation from Image Level Labels

Seong Joon Oh; Rodrigo Benenson; Anna Khoreva; Zeynep Akata; Mario Fritz; Bernt Schiele

There have been remarkable improvements in the semantic labelling task in recent years. However, state-of-the-art methods rely on large-scale pixel-level annotations. This paper studies the problem of training a pixel-wise semantic labelling network from image-level annotations of the object classes present in an image. It has recently been shown that high-quality seeds indicating discriminative object regions can be obtained from image-level labels. Without additional information, however, recovering the full extent of an object is an inherently ill-posed problem due to class co-occurrences. We propose using a saliency model as that additional information, thereby exploiting prior knowledge about object extent and image statistics. We show how to combine both information sources to recover 80% of the fully supervised performance, which is the new state of the art in weakly supervised training for pixel-wise semantic labelling.
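
The seed-plus-saliency combination can be sketched as follows: seeds carry class identity, a class-agnostic saliency map suggests object extent, and the two are merged into pixel-wise pseudo-labels for training a segmentation network. The function, threshold, and tie-breaking rule below are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: build pixel-wise pseudo-labels from class seeds and saliency.
import numpy as np

def build_pseudo_label(seed_map, saliency, image_classes, tau=0.5, ignore=255):
    """
    seed_map: (H, W) int array, class id for confident seed pixels, -1 elsewhere
    saliency: (H, W) float array in [0, 1] from an off-the-shelf saliency model
    image_classes: set of class ids present according to the image-level label
    Returns an (H, W) pseudo-label map; `ignore` marks uncertain pixels.
    """
    label = np.full(seed_map.shape, ignore, dtype=np.int64)
    label[saliency < tau] = 0                     # non-salient pixels -> background
    has_seed = seed_map >= 0
    label[has_seed] = seed_map[has_seed]          # trust seeds where available
    # Salient pixels without a seed: if only one object class is present in the
    # image, assign it; otherwise leave them ignored (ambiguous co-occurrence).
    if len(image_classes) == 1:
        only_cls = next(iter(image_classes))
        label[(saliency >= tau) & ~has_seed] = only_cls
    return label
```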


International Conference on Computer Vision | 2017

Adversarial Image Perturbation for Privacy Protection: A Game Theory Perspective

Seong Joon Oh; Mario Fritz; Bernt Schiele

Users like sharing personal photos with others through social media. At the same time, they might want to make automatic identification in such photos difficult or even impossible. Classic obfuscation methods such as blurring are not only unpleasant but also less effective than one would expect [28, 37, 18]. Recent studies on adversarial image perturbations (AIPs) suggest that it is possible to confuse recognition systems effectively without unpleasant artifacts. However, in the presence of countermeasures against AIPs [7], it is unclear how effective AIPs remain, in particular when the choice of countermeasure is unknown. Game theory provides tools for studying the interaction between agents under uncertainty about each other's strategies. We introduce a general game-theoretic framework for the user-recogniser dynamics and present a case study involving current state-of-the-art AIP and person recognition techniques. We derive the optimal strategy for the user that guarantees an upper bound on the recognition rate independent of the recogniser's countermeasure. Code is available at https://goo.gl/hgvbNK.
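
The game-theoretic argument can be made concrete with a small sketch: treat the recognition rate as the payoff of a zero-sum game between the user (choosing among AIP variants) and the recogniser (choosing among countermeasures), and solve for the user's optimal mixed strategy with a linear program. The payoff matrix below is made up for illustration; only the solution method reflects the standard minimax construction, not the paper's specific strategies.

```python
# Hypothetical sketch: user's minimax strategy over a recognition-rate payoff matrix.
import numpy as np
from scipy.optimize import linprog

def optimal_user_strategy(R):
    # R[i, j]: recognition rate when the user plays AIP i and the recogniser
    # plays countermeasure j. Variables x = [p_1 .. p_n, v]; minimise v subject
    # to R^T p <= v (for every countermeasure) and p being a probability vector.
    n_user, n_rec = R.shape
    c = np.zeros(n_user + 1); c[-1] = 1.0
    A_ub = np.hstack([R.T, -np.ones((n_rec, 1))])
    b_ub = np.zeros(n_rec)
    A_eq = np.hstack([np.ones((1, n_user)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_user + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_user], res.x[-1]  # mixed strategy, guaranteed upper bound

R = np.array([[0.30, 0.70],   # e.g. AIP tuned against an undefended recogniser
              [0.65, 0.25]])  # e.g. AIP tuned against a defended recogniser
p, value = optimal_user_strategy(R)
print(p, value)  # randomising over AIPs caps the recognition rate at `value`
```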


Computer Vision and Pattern Recognition | 2017

Generating Descriptions with Grounded and Co-referenced People

Anna Rohrbach; Marcus Rohrbach; Siyu Tang; Seong Joon Oh; Bernt Schiele

Learning to generate descriptions of images or videos has received major interest in both the Computer Vision and Natural Language Processing communities. While a few works have proposed to learn a grounding during the generation process in an unsupervised way (via an attention mechanism), it remains unclear how good the grounding is and whether it benefits description quality. In this work we propose a movie description model which learns to generate a description and jointly ground (localize) the mentioned characters, as well as perform visual co-reference resolution between pairs of consecutive sentences/clips. We also propose to use weak localization supervision through character mentions provided in movie descriptions to learn the character grounding. At training time, we first learn to localize characters by relating their visual appearance to mentions in the descriptions via a semi-supervised approach. We then feed this (noisy) supervision into our description model, which greatly improves its performance. Our model improves over prior work in generated description quality and additionally provides grounding and local co-reference resolution. We evaluate it on the MPII Movie Description dataset using automatic and human evaluation measures and using our newly collected grounding and co-reference data for characters.
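
A hedged sketch of what such a weak-grounding step could look like (the similarity function, one-to-one matching, and all names are illustrative, not the paper's implementation): character mentions are linked to head tracks by maximising visual-textual similarity, and the resulting noisy links serve as supervision for the description model.

```python
# Hypothetical sketch: assign character mentions in a clip's description to head tracks.
import numpy as np
from scipy.optimize import linear_sum_assignment

def ground_mentions(track_feats, mention_feats):
    """
    track_feats: (T, D) visual features of head tracks in the clip
    mention_feats: (M, D) embeddings of character mentions in the description
    Returns (mention_index, track_index) pairs used as noisy grounding supervision.
    """
    sim = mention_feats @ track_feats.T        # (M, T) dot-product similarity
    rows, cols = linear_sum_assignment(-sim)   # maximise total similarity
    return list(zip(rows.tolist(), cols.tolist()))
```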


Archive | 2018

Image Manipulation against Learned Models: Privacy and Security Implications

Seong Joon Oh; Bernt Schiele; Mario Fritz; Vitaly Shmatikov; Serge J. Belongie

Machine learning is transforming the world. Its application areas span privacy-sensitive and security-critical tasks such as human identification and self-driving cars. These applications raise privacy and security questions that are not yet fully understood or answered: Can automatic person recognisers identify people in photos even when their faces are blurred? How easy is it to find an adversarial input for a self-driving car that makes it drive off the road? This thesis contributes some of the first steps towards a better understanding of such concerns.

We observe that many privacy- and security-critical scenarios for learned models involve input data manipulation: users obfuscate their identity by blurring their faces, and adversaries inject imperceptible perturbations into the input signal. We introduce a data manipulator framework as a tool for collectively describing and analysing privacy- and security-relevant scenarios involving learned models. A data manipulator introduces a shift in the data distribution to achieve a privacy- or security-related goal, and feeds the transformed input to the target model. This framework provides a common perspective on the studies presented in the thesis.

We begin from the user's privacy point of view. We analyse the efficacy of common obfuscation methods like face blurring and show that they are surprisingly ineffective against state-of-the-art person recognition systems. We then propose alternatives based on head inpainting and adversarial examples. Studying user privacy also leads to the dual problem: model security. From the model security perspective, a model ought to be robust and reliable against small amounts of data manipulation. In both cases, data are manipulated with the goal of changing the target model's prediction, so user privacy and model security problems can be described with the same objective. We then study the knowledge aspect of the data manipulation problem: the more one knows about the target model, the more effective manipulations one can craft. We propose a game-theoretic manipulation framework to systematically represent the level of knowledge about the target model and to derive privacy and security guarantees. Finally, we discuss ways to increase knowledge about a black-box model by only querying it, deriving implications relevant to both the privacy and security perspectives.
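
The data manipulator framing can be summarised with a minimal interface sketch (names and types are assumptions, not the thesis code): a manipulator transforms the input before it reaches the target model, and privacy and security scenarios differ only in who controls the manipulator and what objective it pursues.

```python
# Hypothetical sketch of the "data manipulator" framing described in the abstract.
from typing import Callable, List, Protocol
import numpy as np

class Manipulator(Protocol):
    # Any input transformation: face blurring, head inpainting, adversarial noise, ...
    def __call__(self, x: np.ndarray) -> np.ndarray: ...

def evaluate_under_manipulation(model: Callable[[np.ndarray], int],
                                manipulator: Manipulator,
                                images: List[np.ndarray],
                                labels: List[int]) -> float:
    """Fraction of manipulated inputs the target model still classifies correctly."""
    correct = sum(int(model(manipulator(x)) == y) for x, y in zip(images, labels))
    return correct / max(len(images), 1)

def blur_stub(x: np.ndarray) -> np.ndarray:
    # Placeholder for a privacy-motivated manipulator: a real obfuscator would
    # blur or inpaint detected face/head regions before the model sees the image.
    return x
```

A low score under a privacy-motivated manipulator means good obfuscation; a low score under an adversarial one signals a security weakness of the model, which is the duality the thesis exploits.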


International Conference on Mobile Systems, Applications, and Services | 2016

Demo: I-Pic: A Platform for Privacy-Compliant Image Capture

Paarijaat Aditya; Rijurekha Sen; Peter Druschel; Seong Joon Oh; Rodrigo Benenson; Mario Fritz; Bernt Schiele; Bobby Bhattacharjee; Tong Tong Wu


Computer Vision and Pattern Recognition | 2018

Natural and Effective Obfuscation by Head Inpainting

Qianru Sun; Liqian Ma; Seong Joon Oh; Luc Van Gool; Bernt Schiele; Mario Fritz


International Conference on Learning Representations | 2018

Towards Reverse-Engineering Black-Box Neural Networks

Seong Joon Oh; Max Augustin; Bernt Schiele; Mario Fritz


arXiv: Cryptography and Security | 2018

Understanding and Controlling User Linkability in Decentralized Learning

Tribhuvanesh Orekondy; Seong Joon Oh; Bernt Schiele; Mario Fritz
