
Publication


Featured research published by Akira Hirabayashi.


The Lancet | 2003

Oligonucleotide microarray for prediction of early intrahepatic recurrence of hepatocellular carcinoma after curative resection

Norio Iizuka; Masaaki Oka; Hisafumi Yamada-Okabe; Minekatsu Nishida; Yoshitaka Maeda; Naohide Mori; Takao Tamesa; Akira Tangoku; Hisahiro Tabuchi; Kenji Hamada; Hironobu Nakayama; Hideo Ishitsuka; Takanobu Miyamoto; Akira Hirabayashi; Shunji Uchimura; Yoshihiko Hamamoto

BACKGROUND: Hepatocellular carcinoma has a poor prognosis because of the high intrahepatic recurrence rate. There are technological limitations to traditional methods such as TNM staging for accurate prediction of recurrence, suggesting that new techniques are needed.

METHODS: We investigated mRNA expression profiles in tissue specimens from a training set, comprising 33 patients with hepatocellular carcinoma, with high-density oligonucleotide microarrays representing about 6000 genes. We used this training set in a supervised learning manner to construct a predictive system, consisting of 12 genes, with the Fisher linear classifier. We then compared the predictive performance of our system with that of a predictive system with a support vector machine (SVM-based system) on a blinded set of samples from 27 newly enrolled patients.

FINDINGS: Early intrahepatic recurrence within 1 year after curative surgery occurred in 12 (36%) and eight (30%) patients in the training and blinded sets, respectively. Our system correctly predicted early intrahepatic recurrence or non-recurrence in 25 (93%) of 27 samples in the blinded set and had a positive predictive value of 88% and a negative predictive value of 95%. By contrast, the SVM-based system predicted early intrahepatic recurrence or non-recurrence correctly in only 16 (60%) individuals in the blinded set, and the result yielded a positive predictive value of only 38% and a negative predictive value of 79%.

INTERPRETATION: Our system predicted early intrahepatic recurrence or non-recurrence for patients with hepatocellular carcinoma much more accurately than the SVM-based system, suggesting that our system could serve as a new method for characterising the metastatic potential of hepatocellular carcinoma.
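The paper's 12-gene predictor itself is not reproduced here, but the classifier family it uses, the Fisher linear discriminant, can be sketched on synthetic two-class data (all sizes, class means, and feature values below are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the two outcome groups (recurrence vs. non-recurrence),
# with 12 features playing the role of the 12 selected genes.
n_per_class, n_genes = 30, 12
X0 = rng.normal(0.0, 1.0, (n_per_class, n_genes))   # non-recurrence
X1 = rng.normal(1.0, 1.0, (n_per_class, n_genes))   # recurrence (shifted means)

# Fisher linear discriminant: w = S_w^{-1} (mu1 - mu0)
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
S_w = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
w = np.linalg.solve(S_w, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2       # midpoint between projected class means

def predict(X):
    """1 = predicted recurrence, 0 = predicted non-recurrence."""
    return (X @ w > threshold).astype(int)

accuracy = np.mean(np.r_[predict(X0) == 0, predict(X1) == 1])
```

The discriminant direction weighs each feature by its class-mean separation relative to the within-class covariance, which is why a small, well-chosen gene panel can suffice.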


IEEE Transactions on Signal Processing | 2007

Consistent Sampling and Signal Recovery

Akira Hirabayashi; Michael Unser

An attractive formulation of the sampling problem is based on the principle of a consistent signal reconstruction. The requirement is that the reconstructed signal is indistinguishable from the input in the sense that it yields the exact same measurements. Such a system can be interpreted as an oblique projection onto a given reconstruction space. The standard formulation requires a one-to-one relationship between the input measurements and the reconstructed model. Unfortunately, this condition fails when the cross-correlation matrix between the analysis and reconstruction basis functions is not invertible; in particular, when there are fewer measurements than reconstruction functions. In this paper, we propose an extension of consistent sampling that is applicable to those singular cases as well, and that yields a unique and well-defined solution. This solution also makes use of projection operators and has a geometric interpretation. The key idea is to exclude the null space of the sampling operator from the reconstruction space and to enforce consistency on its complement. We specify a class of consistent reconstruction algorithms corresponding to different choices of complementary reconstruction spaces. The formulation includes the Moore-Penrose generalized inverse, as well as other potentially more interesting reconstructions that preserve certain preferential signals. In particular, we display solutions that preserve polynomials or sinusoids, and therefore perform well in practical applications.
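A minimal numerical sketch of the consistency requirement in the standard (invertible) case, using random matrices as stand-ins for the analysis and reconstruction basis functions (all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3                       # signal length, number of measurements

A = rng.standard_normal((m, n))   # analysis (sampling) operator: y = A x
B = rng.standard_normal((n, m))   # reconstruction basis (columns span the space)

x = rng.standard_normal(n)        # unknown input signal
y = A @ x                         # measurements

# Consistent reconstruction: oblique projection onto range(B).
# Requires the m x m cross-correlation matrix A @ B to be invertible;
# the paper's extension handles the case where it is not.
x_hat = B @ np.linalg.solve(A @ B, y)

# Consistency: x_hat yields exactly the same measurements as x,
# even though x_hat generally differs from x.
```

Re-measuring the reconstruction reproduces `y` to machine precision, which is the defining property of the oblique projection.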


International Symposium on Optical Science and Technology | 2001

Fast surface profiler by white-light interferometry using a new algorithm, the SEST algorithm

Akira Hirabayashi; Hidemitsu Ogawa; Katsuichi Kitagawa

We devise a fast algorithm for surface profiling by white-light interferometry, named the SEST algorithm after Square Envelope function estimation by Sampling Theory. Conventional methods for surface profiling by white-light interferometry are founded on digital signal processing techniques used as approximations of continuous signal processing; hence, they require narrow sampling intervals to achieve good approximation accuracy. In this paper, we introduce a novel approach based on sampling theory: we provide a generalized sampling theorem that reconstructs the square envelope function of a white-light interference fringe from sampled values of the interference fringe. The sampling interval in the SEST algorithm is 6-14 times wider than those of conventional methods when an optical filter with a center wavelength of 600 nm and a bandwidth of 60 nm is used. The SEST algorithm has been installed in a commercial system that achieved the world's fastest scanning speed of 42.75 micrometers/s. The height resolution of the system is on the order of 10 nm for a measurement range greater than 100 micrometers.


International Conference on Image Processing | 2010

E-spline sampling for precise and robust line-edge extraction

Akira Hirabayashi; Pier Luigi Dragotti

We propose a line-edge extraction algorithm using E-spline functions as a sampling kernel. Our method is capable of extracting line-edge parameters, including amplitude, orientation, and offset, not only at sub-pixel level but also exactly, provided the pixel values are noiseless. Even in noisy scenarios, simulation results show that the proposed method outperforms a similar one based on B-spline functions, with gains in standard deviation of 1.86 dB for the orientation and 9.64 dB for the offset when the SNR is 10 dB. We also show by simulations that our method extracts line-edges more precisely than the Hough transform.


International Conference on Sampling Theory and Applications | 2015

Compressed sensing MRI using sparsity induced from adjacent slice similarity

Akira Hirabayashi; Norihito Inamuro; Kazushi Mimura; Toshiyuki Kurihara; Toshiyuki Homma

We propose a fast magnetic resonance imaging (MRI) technique based on compressed sensing. The main idea is to use a combination of full and compressed sensing. Full sensing is conducted for every few slices (F-slices), while compressed sensing with a high compression rate is applied to the remaining slices (C-slices). We can perfectly reconstruct the F-slice images, which are used to roughly estimate the C-slices. Since the estimate is already of good quality, its difference from the original image is small and sparse. Therefore, the difference can be reconstructed precisely using the standard compressed sensing technique even at a high compression rate. Simulation results show that the proposed method outperforms conventional methods by 3.16 dB for arm images and 0.26 dB for brain images on average for the C-slices, with perfect reconstruction for the F-slices.
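The core idea, sensing only the sparse difference from an F-slice estimate, can be illustrated in 1-D with a generic ISTA solver (this is not the authors' implementation; the sizes, sparsity level, Gaussian sensing matrix, and regularization weight are all assumptions for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 60                    # 1-D toy "slice" length, compressed measurements

# Hypothetical slices: an F-slice (fully sampled) and a similar C-slice
f_slice = rng.standard_normal(n)
diff = np.zeros(n)
support = rng.choice(n, 5, replace=False)
diff[support] = rng.standard_normal(5)        # small, sparse change between slices
c_slice = f_slice + diff

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # compressed sensing matrix
y = Phi @ c_slice                                 # compressed C-slice measurements

# Key idea: subtract the measurements predicted by the F-slice estimate,
# so only the sparse difference remains to be reconstructed.
y_diff = y - Phi @ f_slice

# Recover the sparse difference with ISTA (proximal gradient for the lasso)
lam = 0.01
step = 1.0 / np.linalg.norm(Phi, 2) ** 2
d = np.zeros(n)
for _ in range(500):
    d = d - step * (Phi.T @ (Phi @ d - y_diff))
    d = np.sign(d) * np.maximum(np.abs(d) - lam * step, 0.0)

c_hat = f_slice + d
rel_err = np.linalg.norm(c_hat - c_slice) / np.linalg.norm(c_slice)
```

Because the difference is much sparser than the slice itself, far fewer measurements suffice than would be needed to sense the C-slice directly.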


International Conference on Acoustics, Speech, and Signal Processing | 2013

Recovery of nonuniform Dirac pulses from noisy linear measurements

Laurent Condat; Akira Hirabayashi; Yosuke Hironaga

We consider the recovery of a finite stream of Dirac pulses at nonuniform locations from noisy lowpass-filtered samples. We show that maximum-likelihood estimation of the unknown parameters can be formulated as structured low-rank approximation of an appropriate matrix. To solve this difficult problem, believed to be NP-hard, we propose a new heuristic iterative algorithm based on a recently proposed splitting method for convex nonsmooth optimization. Although the algorithm comes with no convergence proof in the absence of convexity, it converges in practice to a local solution, and even to the global solution of the problem when the noise level is not too high. It is also fast and easy to implement.
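The structured low-rank approximation at the heart of this formulation can be illustrated with the classic Cadzow-style alternating projection, a simpler baseline than the authors' splitting algorithm (the sizes, noise level, and single-Dirac example below are assumptions for the demo):

```python
import numpy as np

def cadzow_denoise(X, K, L, iters=30):
    """Cadzow-style structured low-rank denoising of Fourier samples X:
    alternate (i) rank-K truncation of the Toeplitz matrix built from X and
    (ii) averaging the matrix diagonals back to Toeplitz form."""
    N = len(X)
    X = X.astype(complex).copy()
    for _ in range(iters):
        # Toeplitz matrix T[i, j] = X[L + i - j], size (N-L) x (L+1)
        T = np.array([[X[L + i - j] for j in range(L + 1)]
                      for i in range(N - L)])
        U, s, Vh = np.linalg.svd(T, full_matrices=False)
        T = (U[:, :K] * s[:K]) @ Vh[:K]      # best rank-K approximation
        acc = np.zeros(N, dtype=complex)     # average diagonals back to X
        cnt = np.zeros(N)
        for i in range(N - L):
            for j in range(L + 1):
                acc[L + i - j] += T[i, j]
                cnt[L + i - j] += 1
        X = acc / cnt
    return X

# Toy check: Fourier samples of a single Dirac form a rank-1 structure
rng = np.random.default_rng(1)
m = np.arange(21)
clean = np.exp(-2j * np.pi * m * 0.3)        # X[m] = exp(-2*pi*i*m*t0), t0 = 0.3
noisy = clean + 0.05 * (rng.standard_normal(21) + 1j * rng.standard_normal(21))
denoised = cadzow_denoise(noisy, K=1, L=10)
```

The paper's contribution is precisely to replace this heuristic alternation with a maximum-likelihood formulation solved by a splitting method.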


International Conference on Acoustics, Speech, and Signal Processing | 2012

Reconstruction of the sequence of Diracs from noisy samples via maximum likelihood estimation

Akira Hirabayashi; Takuya Iwami; Shuji Maeda; Yosuke Hironaga

We propose a reconstruction procedure for a periodic sequence of K Diracs from noisy uniform measurements based on maximum likelihood estimation. We first express the noise vector using the measurement vector and estimation parameters. This expression and the probability density function (PDF) of the noise vector allow us to define the (log-)likelihood function. We show that when the PDF is Gaussian, maximization of the likelihood function is equivalent to finding the nearest sequence to the noisy sequence in the Fourier domain. This problem can be solved efficiently by combining an analytic solution with the so-called particle swarm optimization (PSO) search. Computer simulations show that the proposed method outperforms conventional methods with a computational cost of approximately O(K).
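A minimal, generic PSO routine of the kind this search step relies on (the inertia and acceleration coefficients are conventional textbook values, not the authors' settings), shown on a toy quadratic standing in for the negative log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(f, dim, n_particles=30, iters=200, lo=0.0, hi=1.0,
        inertia=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization over the box [lo, hi]^dim."""
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # per-particle best positions
    pval = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pval)].copy()              # swarm-wide best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Pull each particle toward its own best and the swarm's best
        v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pval)].copy()
    return gbest, pval.min()

# Toy objective standing in for the likelihood over K = 2 Dirac locations
loc, best_val = pso(lambda z: np.sum((z - 0.3) ** 2), dim=2)
```

PSO needs only function evaluations, which suits likelihood surfaces that are non-convex in the Dirac locations.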


Signal Processing Systems | 2015

Pixel enlargement in high-speed camera image acquisition based on 3D sparse representations

Akira Hirabayashi; Nogami Nogami; Jeremy White; Laurent Condat

We propose an algorithm that enhances the pixel count in high-speed camera image acquisition. High-speed cameras face a fundamental problem: the number of pixels decreases as the number of frames per second (FPS) increases. To mitigate this problem, we first propose an optical setup that randomly selects a fraction of the pixels in an image. The proposed algorithm then reconstructs the entire image from the selected partial pixels. In this algorithm, we exploit not only sparsity within each frame but also sparsity induced by the similarity between adjacent frames. Based on these two types of sparsity, we define a cost function for image reconstruction. Since this function is convex, we can find the optimal solution with small computational cost by using a convex optimization technique, in particular the Douglas-Rachford splitting method. Simulation results show that the proposed method outperforms a conventional method for sequential image reconstruction with a sparsity prior.


International Conference on Acoustics, Speech, and Signal Processing | 2013

Sampling and recovery of continuous sparse signals by maximum likelihood estimation

Akira Hirabayashi; Yosuke Hironaga; Laurent Condat

We propose a maximum likelihood estimation approach for the recovery of continuously-defined sparse signals from noisy measurements, in particular periodic sequences of derivatives of Diracs and piecewise polynomials. The conventional approach to this problem is based on total least squares (a.k.a. the annihilating filter method) and Cadzow denoising. It requires more measurements than the number of unknown parameters and mistakenly splits the derivatives of Diracs into several Diracs at different positions. Furthermore, Cadzow denoising does not guarantee any optimality. The proposed parametric approach solves all of these problems. Since the corresponding log-likelihood function is non-convex, we exploit the stochastic method of particle swarm optimization (PSO) to find the global solution. Simulation results confirm the effectiveness of the proposed approach at a reasonable computational cost.
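For context, the conventional annihilating filter baseline referred to above can be sketched in its simplest noiseless form, recovering K Diracs from 2K+1 Fourier coefficients (the locations and amplitudes below are arbitrary demo values):

```python
import numpy as np

K = 2
t_true = np.array([0.2, 0.55])     # Dirac locations on [0, 1)
a_true = np.array([1.0, 0.7])      # amplitudes

m = np.arange(-K, K + 1)
# Fourier coefficients: X[m] = sum_k a_k exp(-2*pi*i*m*t_k)
X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)

# The annihilating filter h (length K+1) satisfies sum_l h[l] X[m-l] = 0.
# Build the K x (K+1) Toeplitz system and take its null vector via SVD.
T = np.array([[X[K + i - l] for l in range(K + 1)] for i in range(K)])
_, _, Vh = np.linalg.svd(T)
h = Vh[-1].conj()

# The roots of h[0] + h[1] w + ... + h[K] w^K are w_k = exp(2*pi*i*t_k)
roots = np.roots(h[::-1])
t_hat = np.sort(np.mod(np.angle(roots) / (2 * np.pi), 1.0))

# Amplitudes follow from the linear Vandermonde system
V = np.exp(-2j * np.pi * np.outer(m, t_hat))
a_hat = np.real(np.linalg.lstsq(V, X, rcond=None)[0])
```

In the noiseless case the locations are recovered exactly; the paper's point is that in noise this pipeline needs extra measurements and Cadzow denoising, with no optimality guarantee, which the ML/PSO approach avoids.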


Proceedings of SPIE | 2013

MAP recovery of polynomial splines from compressive samples and its application to vehicular signals

Akira Hirabayashi; Satoshi Makido; Laurent Condat

We propose a stable reconstruction method for polynomial splines from compressive samples based on maximum a posteriori (MAP) estimation. Polynomial splines are among the most powerful tools for modeling signals in real applications. Since such signals are not band-limited, the classical sampling theorem cannot be applied to them. However, splines can be regarded as signals with a finite rate of innovation and can therefore be perfectly reconstructed from noiseless samples acquired at, approximately, the rate of innovation. In the noisy case, the conventional approach exploits Cadzow denoising. Our MAP-based approach reconstructs the signals more stably than both the conventional approach and maximum likelihood estimation. We show the effectiveness of the proposed method by applying it to compressive sampling of vehicular signals.

Collaboration


Dive into Akira Hirabayashi's collaborations.

Top Co-Authors


Hidemitsu Ogawa

Tokyo Institute of Technology


Kazushi Mimura

Hiroshima City University
