Publication


Featured research published by Kenjiro Sugimoto.


IEEE Transactions on Image Processing | 2015

Compressive Bilateral Filtering

Kenjiro Sugimoto; Sei Ichiro Kamata

This paper presents an efficient constant-time bilateral filter, called the compressive bilateral filter (CBLF), that achieves a near-optimal tradeoff between approximate accuracy and computational complexity without any complicated parameter adjustment. Here, constant-time means that the computational complexity is independent of the filter window size. Although many constant-time bilateral filters have been proposed, each pursuing a more efficient performance tradeoff, little attention has been paid to the optimal tradeoff achievable within each framework. This question matters because answering it reveals whether a constant-time algorithm still has room to improve its performance tradeoff. This paper tackles the question from the viewpoint of compressibility and highlights the fact that state-of-the-art algorithms have not yet reached the optimal tradeoff. The CBLF achieves a near-optimal tradeoff through two key ideas: 1) approximating the Gaussian range kernel through Fourier analysis, and 2) optimizing the period length. Experiments demonstrate that the CBLF significantly outperforms state-of-the-art algorithms in terms of approximate accuracy, computational complexity, and usability.
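The Fourier idea behind this approach can be sketched in a few lines of NumPy/SciPy (an illustrative toy, not the paper's implementation; the function names, sampling grid, and default period `T` are assumptions): expanding the Gaussian range kernel in a truncated cosine series turns the bilateral filter into a handful of spatial Gaussian convolutions of modulated images.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fourier_range_coeffs(sigma_r, T, K):
    # truncated Fourier-cosine coefficients a_k of the Gaussian range
    # kernel exp(-t^2 / (2 sigma_r^2)) on one period of length T
    t = np.linspace(-T / 2, T / 2, 4096, endpoint=False)
    g = np.exp(-t ** 2 / (2.0 * sigma_r ** 2))
    return np.array([(2.0 if k else 1.0)
                     * np.mean(g * np.cos(2.0 * np.pi * k * t / T))
                     for k in range(K + 1)])

def cblf_sketch(img, sigma_s, sigma_r, K=4, T=512.0):
    # bilateral filter via the cosine expansion of the range kernel:
    # cos(w(I(p) - I(x))) = cos(wI(p))cos(wI(x)) + sin(wI(p))sin(wI(x)),
    # so each term costs a few spatial Gaussian convolutions
    img = np.asarray(img, dtype=np.float64)
    a = fourier_range_coeffs(sigma_r, T, K)
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for k in range(K + 1):
        w = 2.0 * np.pi * k / T
        c, s = np.cos(w * img), np.sin(w * img)
        num += a[k] * (c * gaussian_filter(c * img, sigma_s)
                       + s * gaussian_filter(s * img, sigma_s))
        den += a[k] * (c * gaussian_filter(c, sigma_s)
                       + s * gaussian_filter(s, sigma_s))
    return num / np.maximum(den, 1e-12)
```

The number of terms `K` plays the role of the accuracy/complexity knob the abstract discusses; the paper additionally optimizes the period length, which this sketch leaves fixed.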


International Conference on Image Processing | 2013

Fast Gaussian filter with second-order shift property of DCT-5

Kenjiro Sugimoto; Sei Ichiro Kamata

This paper presents an efficient constant-time Gaussian filter that provides high accuracy at low cost over a wide range of scale σ. It requires only 14 multiplications per pixel in image filtering regardless of σ, fewer than state-of-the-art constant-time Gaussian filters. The main ideas of the paper are: 1) introducing a second-order shift property of the discrete cosine transform type-5 (DCT-5) to convolve cosines faster, and 2) suppressing the error propagation caused by the shift property. Experiments in image processing show that the proposed algorithm is 3.7× faster than a state-of-the-art recursive Gaussian filter and comparable in speed to ±3σ-supported Gaussian convolution with σ = 2.33. The output accuracy is stable at around 80 dB over the entire range σ ∈ [1, 128].
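The kind of second-order shift property the abstract relies on can be illustrated in one dimension (a hypothetical sketch: the recurrence below is the generic sliding-cosine-sum identity, not the paper's exact DCT-5 formulation). A windowed cosine sum can be slid one sample at a time with O(1) work, which is what makes cosine-decomposed Gaussian filtering independent of the window size.

```python
import numpy as np

def direct_cos_sum(f, n, r, phi):
    # reference: Z(n) = sum_{u=-r}^{r} f[n+u] * cos(phi * u)
    u = np.arange(-r, r + 1)
    return float(np.dot(f[n - r:n + r + 1], np.cos(phi * u)))

def sliding_cos_sums(f, r, phi):
    # O(1)-per-sample update of Z(n) via a second-order shift property
    # of the cosine:
    #   Z(n+1) = 2cos(phi) Z(n) - Z(n-1)
    #            + cos(phi*r)     * (f[n+r+1] + f[n-r-1])
    #            - cos(phi*(r+1)) * (f[n+r]   + f[n-r])
    N = len(f)
    z_prev = direct_cos_sum(f, r, r, phi)      # Z(r), computed once
    z_cur = direct_cos_sum(f, r + 1, r, phi)   # Z(r+1), computed once
    out = {r: z_prev, r + 1: z_cur}
    two_c = 2.0 * np.cos(phi)
    cr, cr1 = np.cos(phi * r), np.cos(phi * (r + 1))
    for n in range(r + 1, N - r - 1):
        z_next = (two_c * z_cur - z_prev
                  + cr * (f[n + r + 1] + f[n - r - 1])
                  - cr1 * (f[n + r] + f[n - r]))
        out[n + 1] = z_next
        z_prev, z_cur = z_cur, z_next
    return out
```

Because the recurrence feeds its own output back in, rounding errors can accumulate; the paper's second contribution (suppressing that error propagation) addresses exactly this weakness of the naive recurrence.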


International Conference on Image Processing | 2012

Fast image filtering by DCT-based kernel decomposition and sequential sum update

Kenjiro Sugimoto; Sei Ichiro Kamata

This paper presents an approximate Gaussian filter that runs in one pass with high accuracy by exploiting spectrum sparsity. The method is a modification of the cosine integral image (CII), which decomposes a filter kernel into a few cosine terms and convolves each cosine term with an input image in constant time per pixel using integral images and look-up tables. However, integral images and these tables require considerable workspace and incur a high memory-access cost. The proposed method solves this problem with no loss of quality by sequentially updating running sums instead of maintaining integral images, and by improving the look-up tables, achieving a one-pass approximation with much less workspace. A specialization for tiny kernels is also discussed for faster computation. Experiments on image filtering show that the proposed method runs nearly twice as fast as CII, and faster than direct convolution even with small kernels.
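The sequential-sum-update idea can be illustrated with a 1-D sliding box sum (a toy sketch; names are hypothetical): instead of storing a full prefix-sum (integral) array, the window sum lives in a single running accumulator.

```python
import numpy as np

def sliding_box_sums(f, r):
    # sequential sum update: one add and one subtract per output sample,
    # with no prefix-sum / integral-image workspace
    N = len(f)
    out = np.empty(N - 2 * r)
    s = float(np.sum(f[:2 * r + 1]))   # initial window centered at n = r
    out[0] = s
    for n in range(r + 1, N - r):
        s += f[n + r] - f[n - r - 1]   # slide the window one sample
        out[n - r] = s
    return out
```

The cosine-weighted sums in the paper are updated in the same spirit, just with the cosine recurrence in place of the plain add/subtract.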


International Conference on Image Processing | 2016

Constant-time bilateral filter using spectral decomposition

Kenjiro Sugimoto; Toby P. Breckon; Sei Ichiro Kamata

This paper presents an efficient constant-time bilateral filter, where constant-time means that the computational complexity is independent of the filter window size. Many state-of-the-art constant-time methods approximate the original bilateral filter by an appropriate combination of a series of convolutions. Within this framework, it is important to optimize the performance tradeoff between approximate accuracy and the number of convolutions. The proposed method achieves the optimal tradeoff in a least-squares sense by using spectral decomposition, under the assumption that images consist of discrete intensities, as 8-bit images do. The approach is applicable to arbitrary range kernels. Experiments show that the proposed method outperforms state-of-the-art methods in terms of both computational complexity and approximate accuracy.
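The spectral-decomposition idea can be sketched for an 8-bit range kernel (illustrative only; the kernel choice, M = 8, and all names are assumptions): build the symmetric matrix of range-kernel values over the 256 discrete intensities and truncate its eigendecomposition, which is the best rank-M approximation in the least-squares (Frobenius) sense. Each retained eigenpair then corresponds to one convolution in the filter.

```python
import numpy as np

def range_kernel_eig(kernel_fn, levels=256, M=8):
    # K[i, j] = kernel_fn(i - j) over the discrete intensities
    # 0..levels-1; keep the M eigenpairs of largest magnitude, so
    # K ~= (V * lam) @ V.T is the optimal rank-M symmetric approximation
    i = np.arange(levels, dtype=np.float64)
    K = kernel_fn(i[:, None] - i[None, :])
    lam, V = np.linalg.eigh(K)                  # ascending eigenvalues
    top = np.argsort(np.abs(lam))[::-1][:M]
    return lam[top], V[:, top]

# hypothetical 8-bit Gaussian range kernel with sigma_r = 30
lam, V = range_kernel_eig(lambda d: np.exp(-d ** 2 / (2.0 * 30.0 ** 2)))
K_approx = (V * lam) @ V.T   # rank-8 surrogate of the range kernel
```

Because the eigendecomposition only assumes a symmetric kernel matrix, the same construction works for arbitrary range kernels, matching the abstract's claim.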


International Conference on Image Processing | 2016

Efficient keypoint detection and description using filter kernel decomposition in scale space

Ryo Okutani; Kenjiro Sugimoto; Sei Ichiro Kamata

Keypoint detection and description in a continuous scale space achieve better robustness to scale change than their counterparts in a discretized scale space. State-of-the-art methods first decompose a continuous scale space into M + 1 component images weighted by M-th-order polynomials of the scale σ, and then reconstruct the value at an arbitrary point of the scale space as a linear combination of the component images. However, these methods suffer from a mismatch: as σ grows, common filter kernels such as the Gaussian decay to zero, whereas the polynomials of σ diverge. This paper presents a more efficient approximation that suppresses this mismatch by normalizing the weighting functions. Experiments show that the proposed method achieves a better performance tradeoff than state-of-the-art methods: it reduces the number of component images and the total running time by 20-40% without sacrificing the stability of keypoint detection.


International Conference on Acoustics, Speech, and Signal Processing | 2016

Efficient keypoint detection and description via polynomial regression of scale space

Ryo Okutani; Kenjiro Sugimoto; Sei Ichiro Kamata

Keypoint detection and description using an approximate continuous scale space are more efficient than using a typical discretized scale space for robust feature matching. However, the state-of-the-art method requires high computational complexity to reconstruct, or decompress, the value at an arbitrary point of the scale space: specifically, O(M²) complexity, where M is the approximation order. This paper presents an efficient scale-space approach that performs the decompression operation in O(M) time without loss of accuracy. Because the proposed method has far fewer variables to solve for, its least-squares solution can be obtained through the normal equations, which is easier than the existing method's Karhunen-Loève expansion and generalized eigenvalue problem. Experiments confirm that the proposed method performs as expected from the theoretical analysis.
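The regression-plus-normal-equations idea can be sketched on a 1-D Gaussian kernel family (an illustrative toy; the polynomial basis, degree, and lack of normalization are assumptions, not the paper's formulation): fit the family of kernels g_σ by polynomial-in-σ weights over a set of component functions, solving the least-squares problem directly from the normal equations.

```python
import numpy as np

def fit_scale_basis(sigmas, xs, M=3):
    # least-squares fit of the kernel family G[s, x] = g_sigma(x) as
    # G ~= B @ A, where B[s, k] = sigma_s^k (polynomial-in-sigma weights)
    # and the rows of A are the component functions.
    # Solved from the normal equations: (B^T B) A = B^T G.
    B = np.vander(np.asarray(sigmas, dtype=np.float64), M + 1, increasing=True)
    G = np.array([np.exp(-xs ** 2 / (2.0 * s ** 2)) / (np.sqrt(2.0 * np.pi) * s)
                  for s in sigmas])
    return np.linalg.solve(B.T @ B, B.T @ G)    # (M+1, len(xs)) components

def kernel_at(A, sigma):
    # decompress: g_sigma as a linear combination of the M+1 components,
    # O(M) work per reconstructed value
    return (sigma ** np.arange(A.shape[0])) @ A
```

Each column of the normal-equation solve is an independent small least-squares fit, which is what keeps the decompression linear in the approximation order.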


IEEE Intelligent Vehicles Symposium | 2015

Disparity refinement with stability-based tree for stereo matching

Yuhang Ji; Qieshi Zhang; Kenjiro Sugimoto; Sei Ichiro Kamata

This paper proposes a disparity refinement method based on a stability-based tree. By developing the stability-based tree to evaluate and reconstruct support regions for erroneous parts, the proposed method removes outliers effectively. This approach further improves the quality of the raw disparity map in stereo matching, making the results of local methods comparable to those of global ones. Experiments show that the proposed method reduces aggregation time by more than 70% compared with the traditional tree method, without loss of accuracy. It also outperforms existing disparity refinement methods at removing large erroneous regions.


International Conference on Image Processing | 2011

Color distribution matching using a weighted subspace descriptor

Kenjiro Sugimoto; Sei Ichiro Kamata

This paper presents a low-level color descriptor that describes the color distribution of a color image as a weighted subspace of the color space, namely the eigenvectors and eigenvalues of the distribution. Thanks to the low dimensionality of the color space, the proposed descriptor provides compact description and fast computation. Furthermore, because it is specialized for color-distribution matching, it is more efficient than the mutual subspace method (MSM). Experiments on medicine-package recognition validate that the proposed descriptor outperforms MSM and the MPEG-7 low-level color descriptors in terms of description size, computational cost, and recognition rate.
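A minimal sketch of such a weighted-subspace descriptor (illustrative only; the paper's matching procedure is not reproduced): the distribution of pixel colors is summarized by its mean plus the top eigenvectors and eigenvalues of the 3×3 color covariance.

```python
import numpy as np

def color_subspace_descriptor(img, d=2):
    # summarize the color distribution as a weighted subspace of the
    # (low-dimensional) color space: mean + top-d eigenpairs of the
    # 3x3 pixel-color covariance matrix
    X = img.reshape(-1, img.shape[-1]).astype(np.float64)
    mu = X.mean(axis=0)
    lam, V = np.linalg.eigh(np.cov(X - mu, rowvar=False))
    top = np.argsort(lam)[::-1][:d]
    return mu, lam[top], V[:, top]
```

Because the covariance is only 3×3, the descriptor is tiny and the eigendecomposition is essentially free, which is the compactness/speed advantage the abstract claims.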


International Conference on Image Processing | 2017

Complex coefficient representation for IIR bilateral filter

Norishige Fukushima; Kenjiro Sugimoto; Sei Ichiro Kamata

In this paper, we propose infinite impulse response (IIR) filtering with complex coefficients for Euclidean-distance-based filtering, e.g., bilateral filtering. Recursive implementations are the most efficient form of edge-preserving filtering; recursive bilateral filtering and domain-transform filtering belong to this type. However, these filters measure the difference between pixel intensities by geodesic distance and are not separable, which makes them sensitive to noise. Bilateral filtering does not have these issues, but it is time-consuming. In this paper, edge-preserving filtering with a complex exponential function is proposed: a stack of such IIR filters is merged to approximate edge-preserving FIR filtering, which includes bilateral filtering. For the bilateral filter, a raised-cosine function is used for efficient approximation. Experimental results show that the proposed filter, named the IIR bilateral filter, approximates the bilateral filter well at low computational cost.
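The complex-coefficient recursion can be illustrated in 1-D (a sketch under simplifying assumptions, not the paper's filter): a single first-order recursion with one complex pole, run forward and backward, realizes a symmetric cosine-modulated exponential kernel. Components of this kind are what a raised-cosine-style approximation stacks.

```python
import numpy as np

def complex_iir_kernel_filter(x, alpha, omega):
    # one complex coefficient z = exp(-alpha + 1j*omega); the forward
    # pass plus the backward pass, minus the double-counted input,
    # applies the symmetric kernel
    #   h[k] = exp(-alpha*|k|) * cos(omega*k)
    # after taking the real part
    z = np.exp(-alpha + 1j * omega)
    N = len(x)
    fwd = np.empty(N, dtype=complex)
    acc = 0.0 + 0.0j
    for n in range(N):                 # causal pass
        acc = x[n] + z * acc
        fwd[n] = acc
    bwd = np.empty(N, dtype=complex)
    acc = 0.0 + 0.0j
    for n in range(N - 1, -1, -1):     # anti-causal pass
        acc = x[n] + z * acc
        bwd[n] = acc
    return (fwd + bwd - x).real
```

The cost is a constant few operations per sample regardless of the effective kernel width, which is what makes the IIR route attractive for approximating FIR edge-preserving filters.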


Asia-Pacific Signal and Information Processing Association Annual Summit and Conference | 2016

Fast bilateral filter for multichannel images via soft-assignment coding

Kenjiro Sugimoto; Norishige Fukushima; Sei Ichiro Kamata

This paper presents an acceleration method for the bilateral filter (BF) on multichannel images. Most existing acceleration methods approximate the BF by an appropriate combination of convolutions, and a major goal within this framework is to achieve sufficient approximate accuracy with as few convolutions as possible. However, state-of-the-art methods for multichannel images still require hundreds of convolutions (e.g., 256) to achieve sufficient accuracy. The proposed method reduces the number of convolutions without loss of accuracy via soft-assignment coding. This approach combines two advantages that two state-of-the-art methods (scalar quantization with linear interpolation, and vector quantization) have so far provided only individually. Experiments show that the proposed method produces sufficiently accurate results using only 64-80 convolutions.
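The soft-assignment coding idea can be sketched as follows (illustrative; the assignment kernel, the choice of centers, and all names are assumptions rather than the paper's exact formulation): each pixel is softly assigned to a small set of color centers, one pair of spatial Gaussian convolutions is run per center, and the per-center results are blended by the normalized assignment code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def soft_assignment_bf(img, centers, sigma_s, sigma_r):
    # img: (H, W, C) multichannel image; centers: list of C-vectors
    img = np.asarray(img, dtype=np.float64)
    H, W, C = img.shape
    ws = []
    for mu in centers:
        d2 = np.sum((img - np.asarray(mu, dtype=np.float64)) ** 2,
                    axis=-1, keepdims=True)
        ws.append(np.exp(-d2 / (2.0 * sigma_r ** 2)))   # range affinity
    w_total = np.maximum(sum(ws), 1e-12)
    num = np.zeros_like(img)
    den = np.zeros((H, W, 1))
    for w in ws:
        a = w / w_total                  # soft-assignment code, sums to 1
        wI = np.stack([gaussian_filter(w[..., 0] * img[..., c], sigma_s)
                       for c in range(C)], axis=-1)
        num += a * wI
        den += a * gaussian_filter(w[..., 0], sigma_s)[..., None]
    return num / np.maximum(den, 1e-12)
```

The convolution count scales with the number of centers rather than with the full 256-level intensity grid, which is the source of the reduction the abstract reports.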

Collaboration


Kenjiro Sugimoto's frequent collaborators.

Top Co-Authors

Norishige Fukushima

Nagoya Institute of Technology


Seisuke Kyochi

University of Kitakyushu


Yoshimitsu Kuroki

Kyushu Institute of Technology
