Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where József Bukor is active.

Publication


Featured research published by József Bukor.


Soft Computing | 2016

Data Classification Based on Fuzzy-RBF Networks

Annamária R. Várkonyi-Kóczy; Balázs Tusor; József Bukor

Classification has been one of the most common computational problems of recent decades. In this paper, a new filtering network is proposed for data classification, derived from radial basis function networks (RBFNs) and based on a simple observation about the nature of classic RBFNs. According to that observation, the hidden layer of the network can be viewed as a fuzzy system that compares the input data to the data stored in each neuron and computes the similarity between them. The output layer of the RBFN is modified to make it more effective in certain fuzzy decision-making systems. The training of the neurons is carried out by a clustering step, for which a novel clustering method is proposed. Experimental results are also presented to show the efficiency of the approach.
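
A minimal sketch of the general idea in the abstract above, assuming k-means as the clustering step and a Gaussian membership function as the fuzzy similarity; the cluster count and width parameter are illustrative assumptions, not the paper's exact method.

    # Sketch: RBF-style classifier whose hidden units store prototypes obtained
    # by clustering and output a fuzzy (Gaussian) similarity to the input; the
    # modified output layer aggregates memberships per class and picks the best.
    import numpy as np
    from sklearn.cluster import KMeans

    class FuzzyRBFClassifier:
        def __init__(self, clusters_per_class=3, width=1.0):
            self.clusters_per_class = clusters_per_class
            self.width = width        # controls how quickly similarity decays
            self.centers = None       # prototype vectors (hidden layer)
            self.labels = None        # class label attached to each prototype

        def fit(self, X, y):
            # "Training of the neurons" as a clustering step, done per class.
            centers, labels = [], []
            for label in np.unique(y):
                km = KMeans(n_clusters=self.clusters_per_class, n_init=10).fit(X[y == label])
                centers.append(km.cluster_centers_)
                labels.append(np.full(len(km.cluster_centers_), label))
            self.centers = np.vstack(centers)
            self.labels = np.concatenate(labels)
            return self

        def predict(self, X):
            preds = []
            for x in X:
                # Hidden layer: fuzzy similarity between x and every prototype.
                d2 = np.sum((self.centers - x) ** 2, axis=1)
                membership = np.exp(-d2 / (2.0 * self.width ** 2))
                # Output layer: strongest membership per class decides the label.
                scores = {l: membership[self.labels == l].max() for l in np.unique(self.labels)}
                preds.append(max(scores, key=scores.get))
            return np.array(preds)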


WCSC | 2016

Fuzzy Information Measure for Improving HDR Imaging

Annamária R. Várkonyi-Kóczy; Sándor Hancsicska; József Bukor

Digital image processing can often improve the quality of visual sensing of images and real-world scenes. Recently, high dynamic range (HDR) imaging techniques have become more and more popular in the field. Both classical and soft computing–based methods have proved advantageous in revealing the non-visible parts of images and realistic scenes. However, extracting as many details as possible is not always enough, because the sensing capability of the human eye depends on many other factors, and visual quality is not always proportional to how accurately the scene is reproduced. In this paper, a new fuzzy information measure is introduced by which the quality of HDR images can be improved, increasing both the amount of visible detail and the quality of sensing.
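
The abstract does not describe the proposed measure itself, so the following is only an illustrative stand-in: the classical De Luca–Termini fuzzy entropy computed over normalized intensities, one standard way to quantify fuzzy information in an image, which could for example be used to rank tone-mapped HDR variants. The helper tone_map in the usage comment is hypothetical.

    # Illustrative fuzzy information measure (De Luca-Termini fuzzy entropy),
    # not the measure introduced in the paper.
    import numpy as np

    def fuzzy_entropy(image, eps=1e-12):
        """De Luca-Termini fuzzy entropy of an intensity image scaled to [0, 1]."""
        mu = np.clip(image.astype(float), eps, 1.0 - eps)   # membership in "bright"
        h = -(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu))
        return h.mean() / np.log(2.0)   # normalized so the maximum is 1 at mu = 0.5

    # Usage sketch (tone_map is a hypothetical tone-mapping routine):
    # candidates = {g: tone_map(hdr_image, gamma=g) for g in (1.0, 1.5, 2.2)}
    # best_gamma = max(candidates, key=lambda g: fuzzy_entropy(candidates[g]))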


Mathematica Slovaca | 2009

On estimations of dispersion of ratio block sequences

József Bukor; Peter Csiba

Using new characteristics of an infinite subset of positive integers, we give estimates of the dispersion of the related block sequence.
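
For context, the objects involved can be written down as follows; this is one common formulation of ratio block sequences and their dispersion, and the paper's exact normalization may differ.

    % Ratio block sequence of an infinite set A = {a_1 < a_2 < ...} of positive integers:
    % the n-th block consists of the first n elements scaled by a_n,
    X_n = \left( \frac{a_1}{a_n}, \frac{a_2}{a_n}, \dots, \frac{a_n}{a_n} \right) \subseteq (0, 1].
    % Its dispersion is the largest gap left uncovered by the block's points:
    D(X_n) = \max\left\{ \frac{a_1}{a_n},\ \max_{1 \le i < n} \frac{a_{i+1} - a_i}{a_n} \right\}.
    % The paper estimates the asymptotic behaviour of D(X_n) in terms of
    % characteristics of the set A.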


Soft Computing | 2014

Improving the Model Convergence Properties of Classifier Feed-Forward MLP Neural Networks

Annamária R. Várkonyi-Kóczy; Balázs Tusor; József Bukor

Recently, the application of Artificial Neural Networks (ANNs) has become very popular. Their success is due to the fact that they can learn complex input-output mappings and find relationships in unstructured data sets; furthermore, neural nets are relatively easy to implement in any application. In recent years, classification has become one of the most significant research and application areas of ANNs, because these networks have proved to be very efficient in the field. Unfortunately, a major difficulty with feed-forward multilayer perceptron (MLP) neural nets trained with supervised learning is that, for problems of higher complexity, the NN model may not converge during training or, in better cases, needs a long training time that scales with the structural parameters of the network and the quantity of input data. Although the training can be done off-line, this disadvantage may limit the usage of NN models, because the training has a non-negligible cost and can cause a possibly intolerable delay in operation. In this chapter, to overcome these problems, a new training algorithm is proposed which in many cases is able to improve the convergence properties of NN models in complex real-world classification problems: on the one hand, the accuracy of the models can be increased, while on the other hand the training time can be decreased. The new training method is based on the well-known back-propagation algorithm, but with a significant difference: instead of the original input data, a reduced data set is used during the training phase. The reduction is the result of a complexity-optimized classification procedure. In the resulting reduced input data set, each input sample is replaced by the center of the cluster to which it belongs, and these cluster centers are used during training (each element once). As a result, new, complex, ambiguous classification problems can be solved with acceptable cost and accuracy using feed-forward MLP NNs.
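
A minimal sketch of the data-reduction idea described in the abstract above, assuming k-means as the complexity-reducing clustering step and scikit-learn's MLPClassifier as the feed-forward MLP; the chapter's own clustering procedure and network configuration are not reproduced here.

    # Sketch: replace each class's training samples by a small set of cluster
    # centers, then train a feed-forward MLP (back-propagation) on the reduced set.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPClassifier

    def train_on_cluster_centers(X, y, clusters_per_class=10):
        centers, center_labels = [], []
        for label in np.unique(y):
            Xc = X[y == label]
            k = min(clusters_per_class, len(Xc))
            km = KMeans(n_clusters=k, n_init=10).fit(Xc)
            centers.append(km.cluster_centers_)
            center_labels.append(np.full(k, label))
        X_red = np.vstack(centers)                # each sample replaced by its cluster center
        y_red = np.concatenate(center_labels)     # each center used once during training
        # Far fewer samples per epoch, so back-propagation converges with less work.
        mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
        return mlp.fit(X_red, y_red)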


American Mathematical Monthly | 1996

On Accumulation Points of Ratio Sets of Positive Integers

József Bukor; János T. Tóth


Information Sciences | 2009

Dependence of densities on a parameter

József Bukor; Ladislav Mišík; János T. Tóth


Instrumentation and Measurement Technology Conference | 2018

A fast line extraction method based on basic segment grouping

Balázs Tusor; Annamária R. Várkonyi-Kóczy; József Bukor


IEEE International Symposium on Medical Measurements and Applications | 2018

An iSpace-based Dietary Advisor

Balázs Tusor; Annamária R. Várkonyi-Kóczy; József Bukor


International Journal of Mathematical Analysis | 2014

A remark on (C,1) means of sequences

József Bukor; Peter Csiba


Applied Mathematical Sciences | 2014

A Criterion for Comparability of Weighted Densities

József Bukor; Ferdinánd Filip; János T. Tóth

Collaboration


Dive into József Bukor's collaborations.

Top Co-Authors

Peter Csiba

Selye János University
