Publication


Featured research published by Junkai Ji.


Neurocomputing | 2016

An approximate logic neuron model with a dendritic structure

Junkai Ji; Shangce Gao; Jiujun Cheng; Zheng Tang; Yuki Todo

An approximate logic neuron model (ALNM) based on the interaction of dendrites and the dendritic plasticity mechanism is proposed. The model consists of four layers: a synaptic layer, a dendritic layer, a membrane layer, and a soma body. ALNM has a neuronal-pruning function that forms a unique dendritic topology for each particular task by screening out useless synapses and unnecessary dendrites during training. In addition, corresponding to the mature dendritic morphology, the trained ALNM can be replaced by a logic circuit built from the logic NOT, AND, and OR operations, which possesses powerful computational capacity and can be simply implemented in hardware. Since the ALNM is a feed-forward model, an error back-propagation algorithm is used to train it. To verify the effectiveness of the proposed model, we apply it to the Iris, Glass, and Cancer datasets. The classification accuracy and convergence speed are analyzed, discussed, and compared with those of a standard back-propagation neural network. Simulation results show that ALNM can serve as an effective pattern classification method. It reduces the number of dataset features through learning without losing any essential information, and the interactions between features can be observed in the dendritic morphology. Moreover, the logic circuit can be used as a stand-alone classifier to handle large datasets accurately and efficiently.
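The four-layer flow the abstract describes (synapse → dendrite → membrane → soma) can be sketched as a small forward pass. This is a minimal illustration, not the paper's exact formulation: the steep-sigmoid activation, the parameter names, and the branch encoding are assumptions.

```python
import math

def sigmoid(x, k=5.0):
    # Steep sigmoid assumed for both synaptic and soma activations.
    return 1.0 / (1.0 + math.exp(-k * x))

def alnm_forward(x, branches, soma_theta=0.5):
    """Sketch of a four-layer dendritic neuron forward pass.
    branches: list of branches; each branch is a list of
    (input_index, weight, threshold) synapses."""
    branch_outputs = []
    for branch in branches:
        # Synaptic layer: sigmoid(w * x - theta) per synapse;
        # dendritic layer: multiplicative interaction on one branch.
        d = 1.0
        for i, w, theta in branch:
            d *= sigmoid(w * x[i] - theta)
        branch_outputs.append(d)
    # Membrane layer sums the branch currents; soma applies a final threshold.
    v = sum(branch_outputs)
    return sigmoid(v - soma_theta)
```

With strong weights, a single branch behaves like an AND of its inputs, which is what makes the logic-circuit substitution plausible.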


ieee international conference on progress in informatics and computing | 2015

Single dendritic neuron with nonlinear computation capacity: A case study on XOR problem

Tao Jiang; Dizhou Wang; Junkai Ji; Yuki Todo; Shangce Gao

Recently, a series of theoretical studies has conjectured that synaptic nonlinearities in a dendritic tree could make individual neurons more powerful in complex computational operations. Each neuron has quite distinct morphologies of synapses and dendrites that determine what signals it receives and how these signals are integrated. However, there is no effective model that can capture the nonlinearities among excitatory and inhibitory inputs while predicting the morphology of synapses and dendrites and its evolution. In this paper, we propose a new single-neuron model with synaptic nonlinearities in a dendritic tree. The model has a neuron-pruning function that reduces dimensionality by removing useless synapses and dendrites during learning, forming a precise synaptic and dendritic morphology. The nonlinear interactions in a dendritic tree are expressed using the Boolean logic operations AND (conjunction), OR (disjunction), and NOT (negation). An error back-propagation algorithm is used to train the neuron model. Furthermore, we apply the new model to the Exclusive OR (XOR) problem, which it solves perfectly with the help of inhibitory synapses, demonstrating synaptic nonlinear computation and the neuron's ability to learn.
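The abstract's claim that a trained dendritic neuron reduces to AND/OR/NOT logic can be made concrete for XOR. The circuit below is a hypothetical post-training reading, not the paper's actual trained morphology: each branch becomes an AND of its synapses, an inhibitory synapse becomes a NOT, and the soma ORs the branches.

```python
def xor_circuit(a, b):
    # Hypothetical circuit after training:
    # XOR = (a AND NOT b) OR (NOT a AND b).
    branch1 = a and (not b)   # branch with an inhibitory synapse on b
    branch2 = (not a) and b   # branch with an inhibitory synapse on a
    return branch1 or branch2
```

Without the inhibitory (NOT) synapses, a single multiplicative branch could only express conjunctions, which is why XOR is the classic test case.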


Information Sciences | 2019

An artificial bee colony algorithm search guided by scale-free networks

Junkai Ji; Shuangbao Song; Cheng Tang; Shangce Gao; Zheng Tang; Yuki Todo

Many optimization algorithms have adopted scale-free networks to improve their search ability. However, most methods have merely changed their population topologies into those of scale-free networks, and their experimental results cannot verify that these algorithms perform better. In this paper, we propose a scale-free artificial bee colony algorithm (SFABC) in which the search is guided by a scale-free network. The mechanism leads the SFABC search to follow two rules. First, bad food sources can learn more information from the good sources among their neighbors. Second, information exchange among good food sources is relatively rare. To verify the effectiveness of SFABC, the algorithm is compared with the original artificial bee colony algorithm (ABC), several advanced ABC variants, and other metaheuristic algorithms on a wide range of benchmark functions. Experimental results and statistical analyses indicate that SFABC achieves a better balance between exploration and exploitation during the optimization process and, in most cases, provides competitive performance on the benchmark functions. We also verify that scale-free networks can not only improve the optimization performance of ABC but also enhance the search ability of other metaheuristic algorithms, such as differential evolution (DE) and the flower pollination algorithm (FPA).
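The abstract does not spell out how the scale-free neighbor topology is built, but the standard construction is Barabási–Albert preferential attachment, sketched below. In SFABC, a bad food source would then learn from the best source among its neighbors in this graph; that usage, and the parameter names, are assumptions here.

```python
import random

def barabasi_albert(n, m, seed=0):
    """Build a scale-free population topology by preferential attachment.
    Returns adjacency sets for n nodes; each new node links to m existing ones."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    targets = list(range(m))   # seed core of m nodes
    repeated = []              # node list weighted by degree
    for new in range(m, n):
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        # Preferential attachment: high-degree nodes are sampled more often.
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        targets = list(chosen)
    return adj
```

The resulting degree distribution is heavy-tailed: a few hub food sources have many neighbors, while most have few, which is what makes information exchange among good sources rare.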


international conference on swarm intelligence | 2018

A Novel Memetic Whale Optimization Algorithm for Optimization.

Zhe Xu; Yang Yu; Hanaki Yachi; Junkai Ji; Yuki Todo; Shangce Gao

The whale optimization algorithm (WOA) is a newly proposed search optimization technique that mimics the encircling-prey and bubble-net attacking mechanisms of whales. It has proven to be very competitive in comparison with other state-of-the-art metaheuristics. Nevertheless, the performance of WOA is limited by its monotonous search dynamics: only the encircling mechanism drives the search, which mainly focuses on exploration of the landscape. Thus, WOA lacks the capacity to jump out of local optima. To address this problem, this paper proposes a memetic whale optimization algorithm (MWOA) that incorporates a chaotic local search into WOA to enhance its exploitation ability. It is expected that MWOA can balance global exploration and local exploitation well during the search process, thus achieving better search performance. Forty-eight benchmark functions are used to verify the efficiency of MWOA. Experimental results suggest that MWOA outperforms its competitors in terms of convergence speed and solution accuracy.
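A chaotic local search of the kind MWOA incorporates typically perturbs the current best solution with a chaotic sequence instead of random noise. The sketch below uses the logistic map; the map choice, radius, and acceptance rule are common conventions, not details taken from the paper.

```python
def chaotic_local_search(best, f, radius=0.1, iters=20, c0=0.7):
    """Refine `best` (minimizing f) with a logistic-map chaotic sequence."""
    c = c0
    best_val = f(best)
    for _ in range(iters):
        c = 4.0 * c * (1.0 - c)  # logistic map; fully chaotic at r = 4
        # Map the chaotic variable from (0, 1) to (-radius, radius).
        candidate = [x + radius * (2.0 * c - 1.0) for x in best]
        val = f(candidate)
        if val < best_val:       # greedy acceptance
            best, best_val = candidate, val
    return best, best_val
```

Because the chaotic sequence is ergodic rather than purely random, it covers the neighborhood of the incumbent more evenly, which is the usual argument for this memetic step.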


international conference on swarm intelligence | 2018

Galactic Gravitational Search Algorithm for Numerical Optimization

Sheng Li; Fenggang Yuan; Yang Yu; Junkai Ji; Yuki Todo; Shangce Gao

The gravitational search algorithm (GSA) has proven to be a good optimization algorithm for various optimization problems. However, due to its lack of exploration capability, it often becomes trapped in local optima when dealing with complex problems, and its convergence speed then slows down. A clustering-based learning strategy (CLS) has been applied to GSA to alleviate this situation, yielding the galactic gravitational search algorithm (GGSA). The CLS first divides the population into multiple clusters and then applies several learning strategies within and among the clusters separately. This method effectively alleviates the main weakness of GSA, its tendency to become trapped in local optima. The experimental results confirm the superior performance of GGSA in terms of solution quality and convergence in comparison with GSA and other algorithms.


Knowledge Based Systems | 2018

Approximate logic neuron model trained by states of matter search algorithm

Junkai Ji; Shuangbao Song; Yajiao Tang; Shangce Gao; Zheng Tang; Yuki Todo

An approximate logic neuron model (ALNM) is a single neural model with a dynamic dendritic structure. During the training process, the model is capable of removing useless synapses and unnecessary dendritic branches through a neural pruning function, providing a simplified dendritic morphology for each particular problem. The simplified ALNM can then be substituted with a logic circuit, which is easy to implement in hardware. However, the computational capacity of this model has been greatly restricted by its learning algorithm, the back-propagation (BP) algorithm, because it is sensitive to initial values and easily trapped in local minima. To address this critical issue, we have investigated the capabilities of heuristic optimization methods, which are acknowledged as global search algorithms. Through comparison experiments, the states of matter search (SMS) algorithm has been verified to be the most suitable training method for ALNM. To evaluate the performance of SMS, six benchmark datasets are used in the experiments. The corresponding results are compared with those of the BP algorithm, other optimization methods, and several widely used classifiers. In addition, the classification performance of logic circuits trained by SMS is also presented in this study.
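The core idea, replacing gradient-based BP with a population-based global search over the neuron's parameters, can be sketched generically. The actual SMS operators (which mimic gas, liquid, and solid phases) are not reproduced here; this is a simplified population search with a shrinking step, used only to show the training interface.

```python
import random

def train_by_population_search(loss, dim, pop_size=20, iters=100, seed=1):
    """Fit a dim-dimensional parameter vector by minimizing `loss` with a
    simple population-based search (stand-in for SMS, not the real thing)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=loss)
    step = 0.5
    for _ in range(iters):
        new_pop = []
        for ind in pop:
            # Move toward the incumbent best with a random perturbation;
            # the shrinking step loosely mimics a gas -> solid cooling schedule.
            cand = [x + step * (b - x) + step * rng.gauss(0, 0.1)
                    for x, b in zip(ind, best)]
            new_pop.append(min((ind, cand), key=loss))  # greedy per individual
        pop = new_pop
        best = min(pop + [best], key=loss)
        step *= 0.98
    return best, loss(best)
```

For ALNM, `loss` would be the classification error of the neuron as a function of its synaptic weights and thresholds; no gradient is needed, which is the point of the substitution.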


Computational Intelligence and Neuroscience | 2018

A Pruning Neural Network Model in Credit Classification Analysis

Yajiao Tang; Junkai Ji; Shangce Gao; Hongwei Dai; Yang Yu; Yuki Todo

Credit classification models are now widely applied because they help financial decision-makers handle credit classification issues. Among them, artificial neural networks (ANNs) have been widely accepted as convincing methods in the credit industry. In this paper, we propose a pruning neural network (PNN) and apply it to credit classification problems using the well-known Australian and Japanese credit datasets. The model is inspired by the synaptic nonlinearity of a dendritic tree in a biological neural model and is trained by an error back-propagation algorithm. The model realizes a neuronal pruning function by removing superfluous synapses and useless dendrites, forming a tidy dendritic morphology at the end of learning. Furthermore, we use logic circuits (LCs) to simulate the dendritic structures successfully, which allows PNN to be implemented effectively in hardware. The statistical results of our experiments verify that PNN achieves superior performance in comparison with other classical algorithms in terms of accuracy and computational efficiency.
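The pruning step can be sketched as a pass over the trained structure. In the actual model, pruning is driven by whether a synapse's output is effectively constant after training; the magnitude threshold used below is a simplification of that criterion, and the branch encoding is assumed.

```python
def prune(branches, w_eps=0.1):
    """Sketch of neuronal pruning. A synapse whose weight magnitude is below
    w_eps passes a roughly constant signal and is dropped (superfluous
    synapse); a branch left with no synapses is removed entirely (useless
    dendrite). branches: list of [(input_index, weight, threshold), ...]."""
    pruned = []
    for branch in branches:
        kept = [(i, w, theta) for i, w, theta in branch if abs(w) >= w_eps]
        if kept:
            pruned.append(kept)
    return pruned
```

The surviving, tidier structure is what gets mapped onto the logic circuit for hardware implementation.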


Applied Soft Computing | 2018

Adoption of an improved PSO to explore a compound multi-objective energy function in protein structure prediction

Shuangbao Song; Junkai Ji; Xingqian Chen; Shangce Gao; Zheng Tang; Yuki Todo

The protein structure prediction (PSP) problem, i.e., predicting the three-dimensional structure of a protein from its sequence, remains challenging in computational biology. The inaccuracy of existing protein energy functions and the huge conformation search space make the problem difficult to solve. In this study, the PSP problem is modeled as a multi-objective optimization problem. A physics-based energy function and a knowledge-based energy function are combined to construct a three-objective energy function. An improved multi-objective particle swarm optimization algorithm coupled with two archives is employed to search the conformation space. In addition, a mechanism based on Pareto non-dominated sorting is designed to properly handle slightly worse solutions. Finally, the experimental results demonstrate the effectiveness of the proposed approach. This paper offers a new perspective on solving the PSP problem by means of multi-objective optimization.
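The Pareto non-dominated sorting the abstract relies on reduces to a dominance check between objective vectors. The sketch below shows that check and the extraction of the non-dominated front; the archive logic of the actual algorithm is not reproduced.

```python
def dominates(a, b):
    # a Pareto-dominates b (minimization) if a is no worse in every
    # objective and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto front of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

With the three energy terms as objectives, a conformation that is slightly worse on one term but better on another stays on the front instead of being discarded, which is how such a mechanism retains "slightly worse" solutions.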


ieee international conference on progress in informatics and computing | 2016

Training a dendritic neural model with genetic algorithm for classification problems

Junkai Ji; Zhenyu Song; Yajiao Tang; Tao Jiang; Shangce Gao

Recently, more neuroscience research has focused on the role of dendritic structure in neural computation. Inspired by the specific topologies of numerous dendritic trees, we propose a single neural model with a particular dendritic structure. The dendrites are composed of several branches, and these branches correspond to three coordinate distributions, which are used to classify the training data as required. A genetic algorithm is used as the training algorithm. Experimental results on two benchmark classification problems verify the effectiveness of the proposed method, and the distributions of the trained dendritic structures are also presented.


Ieej Transactions on Electrical and Electronic Engineering | 2017

A neuron model with synaptic nonlinearities in a dendritic tree for liver disorders

Tao Jiang; Shangce Gao; Dizhou Wang; Junkai Ji; Yuki Todo; Zheng Tang

Collaboration


Dive into Junkai Ji's collaborations.

Top Co-Authors

Yang Yu

University of Toyama
