
Publications


Featured research published by Radomir Mech.


International Conference on Computer Vision (ICCV) | 2015

Minimum Barrier Salient Object Detection at 80 FPS

Jianming Zhang; Stan Sclaroff; Zhe L. Lin; Xiaohui Shen; Brian L. Price; Radomir Mech

We propose a highly efficient, yet powerful, salient object detection method based on the Minimum Barrier Distance (MBD) Transform. The MBD transform is robust to pixel-value fluctuation, and thus can be effectively applied on raw pixels without region abstraction. We present an approximate MBD transform algorithm with 100X speedup over the exact algorithm. An error bound analysis is also provided. Powered by this fast MBD transform algorithm, the proposed salient object detection method runs at 80 FPS, and significantly outperforms previous methods with similar speed on four large benchmark datasets, and achieves comparable or better performance than state-of-the-art methods. Furthermore, a technique based on color whitening is proposed to extend our method to leverage the appearance-based backgroundness cue. This extended version further improves the performance, while still being one order of magnitude faster than all the other leading methods.
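The core of the method above is the raster-scan approximation of the Minimum Barrier Distance transform, where a path's cost is the difference between the highest and lowest pixel value along it, seeded from the image border. The following is only an illustrative toy sketch under that reading of the abstract, not the paper's implementation; the function name `fast_mbd`, the grid size, and the pass count are assumptions.

```python
import numpy as np

def fast_mbd(img, n_passes=2):
    """Approximate Minimum Barrier Distance transform via raster scans.

    Seeds are the image-border pixels (a common background prior); a
    path's barrier cost is max(I) - min(I) over the pixels it visits.
    """
    h, w = img.shape
    d = np.full((h, w), np.inf)
    U = img.astype(float).copy()   # highest value on the best path so far
    L = img.astype(float).copy()   # lowest value on the best path so far
    d[0, :] = d[-1, :] = d[:, 0] = d[:, -1] = 0.0  # border seeds

    def relax(x, y, nx, ny):
        if not np.isfinite(d[nx, ny]):
            return  # neighbor not reached yet
        u = max(U[nx, ny], img[x, y])
        l = min(L[nx, ny], img[x, y])
        if u - l < d[x, y]:
            d[x, y], U[x, y], L[x, y] = u - l, u, l

    for p in range(n_passes):
        if p % 2 == 0:  # forward raster scan: relax from upper/left
            for x in range(h):
                for y in range(w):
                    if x > 0: relax(x, y, x - 1, y)
                    if y > 0: relax(x, y, x, y - 1)
        else:           # backward raster scan: relax from lower/right
            for x in range(h - 1, -1, -1):
                for y in range(w - 1, -1, -1):
                    if x < h - 1: relax(x, y, x + 1, y)
                    if y < w - 1: relax(x, y, x, y + 1)
    return d

# A bright square on a dark background: interior pixels get a high
# barrier distance from the border seeds, background stays near zero.
img = np.zeros((9, 9))
img[3:6, 3:6] = 1.0
sal = fast_mbd(img)
```

The raster-scan relaxation is what makes the transform linear-time per pass; a handful of passes is enough in practice, which is where the reported speedup over the exact algorithm comes from.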


Computer Graphics Forum | 2008

An Example-based Procedural System for Element Arrangement

Takashi Ijiri; Radomir Mech; Takeo Igarashi; Gavin S. P. Miller

We present a method for synthesizing two-dimensional (2D) element arrangements from an example. The main idea is to combine texture synthesis techniques based on local neighborhood comparison with procedural modeling systems based on local growth. Given a user-specified reference pattern, our system analyzes the neighborhood information of each element by constructing connectivity. Our synthesis process starts with a single seed and progressively places elements one by one, searching for the reference element whose local features are most similar to the target location in the synthesized pattern. To support creative design activities, we introduce three types of interaction for controlling global features of the resulting pattern, namely a spray tool, a flow field tool, and a boundary tool. We also introduce a global optimization process that helps to avoid local error concentrations. We illustrate the feasibility of our method by creating several types of 2D patterns.


International Conference on Computer Vision (ICCV) | 2015

Deep Multi-patch Aggregation Network for Image Style, Aesthetics, and Quality Estimation

Xin Lu; Zhe Lin; Xiaohui Shen; Radomir Mech; James Zijun Wang

This paper investigates the problems of image style, aesthetics, and quality estimation, which require fine-grained details from high-resolution images, using a deep neural network training approach. Existing deep convolutional neural networks mostly extract one patch, such as a down-sized crop, from each image as a training example. However, one patch may not always represent the entire image well, which may cause ambiguity during training. We propose a deep multi-patch aggregation network training approach, which allows us to train models using multiple patches generated from one image. We achieve this by constructing multiple shared columns in the neural network and feeding multiple patches to each of the columns. More importantly, we propose two novel network layers (statistics and sorting) to support aggregation of those patches. The proposed deep multi-patch aggregation network integrates shared feature learning and aggregation function learning into a unified framework. We demonstrate the effectiveness of the deep multi-patch aggregation network on three problems: image style recognition, aesthetic quality categorization, and image quality estimation. Our models trained using the proposed networks significantly outperform the state of the art in all three applications.
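The two aggregation layers named above can be sketched on plain feature arrays. This is a toy illustration of the aggregation step only, assuming per-patch feature vectors already produced by the shared columns; the function names and the particular statistics chosen are assumptions, not the paper's exact definitions.

```python
import numpy as np

def statistics_layer(patch_feats):
    """Order-less aggregation of per-patch features: concatenate simple
    per-dimension statistics, giving a fixed-size output regardless of
    how many patches were sampled from the image.

    patch_feats: (n_patches, dim) array from the shared columns.
    """
    return np.concatenate([
        patch_feats.min(axis=0),
        patch_feats.max(axis=0),
        patch_feats.mean(axis=0),
        np.median(patch_feats, axis=0),
    ])

def sorting_layer(patch_feats):
    """Sort each feature dimension across patches, then flatten; keeps
    more information than summary statistics at the cost of fixing the
    number of patches."""
    return np.sort(patch_feats, axis=0).ravel()

feats = np.array([[0.25, 1.0],
                  [0.75, 0.0],
                  [0.50, 0.5]])
agg = statistics_layer(feats)   # length 4 * dim = 8
srt = sorting_layer(feats)      # length n_patches * dim = 6
```

Because both aggregations are differentiable almost everywhere, gradients flow back through them into the shared columns during training, which is what lets feature learning and aggregation learning sit in one framework.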


European Conference on Computer Vision (ECCV) | 2016

Photo Aesthetics Ranking Network with Attributes and Content Adaptation

Shu Kong; Xiaohui Shen; Zhe L. Lin; Radomir Mech; Charless C. Fowlkes

Real-world applications could benefit from the ability to automatically generate a fine-grained ranking of photo aesthetics. However, previous methods for image aesthetics analysis have primarily focused on the coarse, binary categorization of images into high- or low-aesthetic categories. In this work, we propose to learn a deep convolutional neural network to rank photo aesthetics, in which the relative ranking of photo aesthetics is directly modeled in the loss function. Our model incorporates joint learning of meaningful photographic attributes and image content information, which can help regularize the complicated photo aesthetics rating problem.
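Modeling relative ranking directly in the loss typically means penalizing pairs whose predicted scores are ordered inconsistently with their labels. Here is a minimal pairwise hinge-style sketch of that idea; the function name, the margin value, and the exact form of the term are assumptions standing in for the paper's loss, not its actual formulation.

```python
def pairwise_ranking_loss(scores, labels, margin=1.0):
    """Hinge-style loss on pairs: if image i is labeled as more aesthetic
    than image j, its predicted score should exceed j's by `margin`.

    scores: predicted aesthetic scores, one per image.
    labels: ground-truth aesthetic ratings (higher = more aesthetic).
    """
    loss, n_pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:
                # violated (or insufficiently separated) pairs add loss
                loss += max(0.0, margin - (scores[i] - scores[j]))
                n_pairs += 1
    return loss / max(n_pairs, 1)

# Correctly ordered scores with a wide gap incur zero loss; ties on a
# labeled pair incur the full margin.
zero = pairwise_ranking_loss([3.0, 1.0], [1, 0])
tied = pairwise_ranking_loss([1.0, 1.0], [1, 0])
```

Unlike a binary high/low classifier, a pairwise term like this supervises the ordering of every labeled pair, which is what yields a usable fine-grained ranking.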


Computer Vision and Pattern Recognition (CVPR) | 2015

Salient Object Subitizing

Jianming Zhang; Shugao Ma; Mehrnoosh Sameki; Stan Sclaroff; Margrit Betke; Zhe L. Lin; Xiaohui Shen; Brian L. Price; Radomir Mech

People can immediately and precisely identify that an image contains 1, 2, 3, or 4 items at a simple glance. This phenomenon, known as subitizing, inspires us to pursue the task of Salient Object Subitizing (SOS), i.e., predicting the existence and the number of salient objects in a scene using holistic cues. To study this problem, we propose a new image dataset annotated using an online crowdsourcing marketplace. We show that the proposed subitizing technique, using an end-to-end convolutional neural network (CNN) model, achieves significantly better-than-chance performance in matching human labels on our dataset. It attains 94% accuracy in detecting the existence of salient objects, and 42–82% accuracy (chance is 20%) in predicting the number of salient objects (1, 2, 3, and 4+), without resorting to any object localization process. Finally, we demonstrate the usefulness of the proposed subitizing technique in two computer vision applications: salient object detection and object proposal generation.


ACM Multimedia | 2014

Automatic Image Cropping using Visual Composition, Boundary Simplicity and Content Preservation Models

Chen Fang; Zhe Lin; Radomir Mech; Xiaohui Shen

Cropping is one of the most common tasks in image editing for improving the aesthetic quality of a photograph. In this paper, we propose a new aesthetic photo cropping system which combines three models: visual composition, boundary simplicity, and content preservation. The visual composition model measures the quality of composition for a given crop. Instead of manually defining rules or score functions for composition, we learn the model from a large set of well-composed images via discriminative classifier training. The boundary simplicity model measures the clearness of the crop boundary to avoid cutting through objects. The content preservation model computes the amount of salient information kept in the crop to avoid excluding important content. By placing a hard lower-bound constraint on content preservation and linearly combining the scores from the visual composition and boundary simplicity models, the resulting system achieves significant improvement over recent cropping methods in both quantitative and qualitative evaluation.
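The scoring scheme in the abstract, a hard lower bound on content preservation plus a linear blend of the other two scores, can be sketched directly. The weights `alpha` and `min_content` below are illustrative assumptions, not values from the paper, and the three per-crop scores are taken as given (in the paper they come from the learned models).

```python
def score_crop(composition, boundary_simplicity, content_preservation,
               alpha=0.7, min_content=0.8):
    """Hard constraint on content preservation, then a linear blend of
    the composition and boundary-simplicity scores (all in [0, 1])."""
    if content_preservation < min_content:
        return float("-inf")  # crop excludes too much salient content
    return alpha * composition + (1 - alpha) * boundary_simplicity

def best_crop(candidates):
    """Pick the highest-scoring candidate crop; each candidate is a dict
    holding its three model scores."""
    return max(candidates, key=lambda c: score_crop(
        c["composition"], c["boundary"], c["content"]))

crops = [
    {"composition": 0.9, "boundary": 0.9, "content": 0.5},  # cuts content
    {"composition": 0.6, "boundary": 0.8, "content": 0.9},
]
best = best_crop(crops)
```

The hard constraint acts as a filter rather than a weighted term, so a beautifully composed crop that discards salient content can never win, which matches the role the abstract assigns to the content preservation model.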


Computer Vision and Pattern Recognition (CVPR) | 2016

Unconstrained Salient Object Detection via Proposal Subset Optimization

Jianming Zhang; Stan Sclaroff; Zhe Lin; Xiaohui Shen; Brian L. Price; Radomir Mech

We aim to detect salient objects in unconstrained images, where the number of salient objects (if any) varies from image to image and is not given. We present a salient object detection system that directly outputs a compact set of detection windows, if any, for an input image. Our system leverages a convolutional neural network (CNN) model to generate location proposals of salient objects. Location proposals tend to be highly overlapping and noisy. Based on the maximum a posteriori (MAP) principle, we propose a novel subset optimization framework to generate a compact set of detection windows out of noisy proposals. In experiments, we show that our subset optimization formulation greatly enhances the performance of our system, which attains a 16–34% relative improvement in average precision compared with the state of the art on three challenging salient object datasets.
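The paper formulates window selection as a MAP subset optimization; a crude intuition for it is a greedy pass that keeps a proposal only while its score justifies adding a window and it does not heavily overlap an already-selected one. The sketch below is that greedy illustration only, not the paper's optimization; the thresholds `gain` and `max_iou` are made-up assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def select_windows(proposals, gain=0.4, max_iou=0.3):
    """Greedy stand-in for MAP subset selection: visit proposals by
    descending score, stop once scores drop below the gain threshold
    (possibly returning an empty set), and reject heavy overlaps.

    proposals: list of (box, score) pairs from the CNN proposal stage.
    """
    selected = []
    for box, score in sorted(proposals, key=lambda p: -p[1]):
        if score < gain:
            break  # adding more windows no longer pays off
        if all(iou(box, kept) <= max_iou for kept in selected):
            selected.append(box)
    return selected

proposals = [
    ((0, 0, 10, 10), 0.9),
    ((1, 1, 10, 10), 0.8),   # near-duplicate of the first window
    ((20, 20, 30, 30), 0.7),
    ((5, 5, 8, 8), 0.2),     # too weak to justify another window
]
windows = select_windows(proposals)
```

The key property shared with the paper's formulation is that the output set size is decided by the data: an image with no confident proposals yields an empty set rather than a forced top-k list.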


IEEE Transactions on Visualization and Computer Graphics | 2013

Painting with Polygons: A Procedural Watercolor Engine

Stephen DiVerdi; Aravind Krishnaswamy; Radomir Mech; Daichi Ito

Existing natural media painting simulations have produced high-quality results, but have required powerful compute hardware and have been limited to screen resolutions. Digital artists would like to be able to use watercolor-like painting tools, but at print resolutions and on lower-end hardware such as laptops or even slates. We present a procedural algorithm for generating watercolor-like dynamic paint behaviors in a lightweight manner. Our goal is not to exactly duplicate watercolor painting, but to create a range of dynamic behaviors that allow users to achieve a similar style of process and result, while at the same time having a unique character of their own. Our stroke representation is vector based, allowing for rendering at arbitrary resolutions, and our procedural pigment advection algorithm is fast enough to support painting on slate devices. We demonstrate our technique in a commercially available slate application used by professional artists. Finally, we present a detailed analysis of the different vector-rendering technologies available.


User Interface Software and Technology (UIST) | 2015

Procedural Modeling Using Autoencoder Networks

Mehmet Ersin Yumer; Paul Asente; Radomir Mech; Levent Burak Kara

Procedural modeling systems allow users to create high-quality content through parametric, conditional, or stochastic rule sets. While such approaches create an abstraction layer by freeing the user from direct geometry editing, the nonlinear nature and the high number of parameters associated with such design spaces result in arduous modeling experiences for non-expert users. We propose a method to enable intuitive exploration of such high-dimensional procedural modeling spaces within a lower-dimensional space learned through autoencoder network training. Our method automatically generates a representative training dataset from the procedural modeling rule set based on shape similarity features. We then leverage the samples in this dataset to train an autoencoder neural network, while also structuring the learned lower-dimensional space for continuous exploration with respect to shape features. We demonstrate the efficacy of our method with user studies in which designers create content more than 10 times faster using our system than with the classic procedural modeling interface.
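The central idea, compressing a high-dimensional procedural parameter space into a low-dimensional exploration space with an autoencoder, can be shown on synthetic data. The sketch below trains a linear 4→1→4 autoencoder by gradient descent on made-up "parameter vectors"; the paper uses deeper nonlinear networks and real rule-set samples, so everything here (data, sizes, learning rate) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for procedural-model parameter samples: 4-D vectors that
# actually vary along a single latent direction, plus a little noise.
z = rng.uniform(-1.0, 1.0, size=(200, 1))
X = z @ np.array([[1.0, -0.5, 2.0, 0.3]]) + 0.01 * rng.normal(size=(200, 4))

# Linear autoencoder: encode 4-D parameters into a 1-D code, decode back.
W_enc = rng.normal(scale=0.1, size=(4, 1))
W_dec = rng.normal(scale=0.1, size=(1, 4))
err0 = float(np.mean((X @ W_enc @ W_dec - X) ** 2))  # pre-training error

lr = 0.05
for _ in range(500):
    H = X @ W_enc                        # codes = the exploration space
    G = 2.0 * (H @ W_dec - X) / len(X)   # gradient of the MSE objective
    W_enc -= lr * (X.T @ (G @ W_dec.T))  # backprop through the decoder
    W_dec -= lr * (H.T @ G)

err = float(np.mean((X @ W_enc @ W_dec - X) ** 2))  # post-training error
```

After training, moving a slider along the 1-D code `H` and decoding yields a continuous family of parameter vectors, which is the kind of low-dimensional exploration interface the paper builds for non-expert users.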


Computer Graphics Forum | 2009

Optimizing Structure Preserving Embedded Deformation for Resizing Images and Vector Art

Qixing Huang; Radomir Mech; Nathan A. Carr

Smart deformation and warping tools play an important part in modern-day geometric modeling systems. They allow existing content to be stretched or scaled while preserving visually salient information. To date, these techniques have primarily focused on preserving local shape details, not taking into account important global structures such as symmetry and line features. In this work we present a novel framework that can be used to preserve the global structure in images and vector art. Such structures include symmetries, the spatial relations among shapes, and line features in an image. Central to our method is a new formulation of structure preservation as an optimization problem. We use novel optimization strategies to achieve the interactive performance required by modern-day modeling applications. We demonstrate the effectiveness of our framework by performing structure-preserving deformation of images and complex vector art at interactive rates.
