John Puentes
École nationale supérieure des télécommunications de Bretagne
Publications
Featured research published by John Puentes.
International Journal of Biomedical Engineering and Technology | 2007
John Puentes; Rajeev K. Bali; Nilmini Wickramasinghe; R.N.G. Naguib
After four generations of varied applications, telemedicine now finds itself at a crossroads. Validated concepts have integrated the available technology to handle the basic system components, as well as data and information processing, in multiple ways. Medical data exchange has been tested as a result. Among the many technology evolutions that could be identified as significant trends, we have selected four - wireless broadband, non-invasive sensors, emerging multimedia standards, and open source software - which are likely to have an impact on the current progression of telemedicine, at both the functional and economic levels. What follows is a description of each technology's main characteristics.
Computer Methods and Programs in Biomedicine | 2012
John Puentes; Michèle Roux; Julien Montagner; Laurent Lecornu
Patient records have been developed to support the physician-oriented medical activity scheme. One recommended yet rarely studied alternative, expected to improve healthcare, is the patient-centered record. We propose a development framework for such a record, which includes domain-specific database models at the conceptual level, analyzing the fundamental role of complementary information intended to ensure proper patient understanding of related clinical situations. A patient-centered awareness field study of user requirements and medical workflow was carried out in three medical services and two technical units to identify the most relevant elements of the framework, and compared to the definitions of a theoretical approach. Three core data models - centered on the patient, medical personnel, and complementary patient information - corresponding to the determined set of entities, information exchanges, and actors' roles, constitute the technical recommendations of the development framework. An open source proof-of-concept prototype was developed to demonstrate the model's feasibility. The resulting patient-centered record development framework implies particular contributions from medical personnel to supply complementary information.
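A minimal sketch of the three core entities as Python dataclasses; the class and field names are illustrative assumptions, not the paper's conceptual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical conceptual entities; names are illustrative, not the paper's data models.

@dataclass
class ComplementaryInformation:
    """Explanatory material intended to help the patient understand a clinical situation."""
    topic: str       # e.g. the diagnosis or procedure being explained
    content: str     # plain-language explanation supplied by medical personnel
    author_id: str   # member of the medical personnel who contributed it

@dataclass
class MedicalPersonnel:
    personnel_id: str
    role: str        # e.g. "physician", "nurse", "technician"

@dataclass
class PatientCenteredRecord:
    patient_id: str
    clinical_entries: List[str] = field(default_factory=list)
    complementary_info: List[ComplementaryInformation] = field(default_factory=list)

    def add_explanation(self, info: ComplementaryInformation) -> None:
        # Medical personnel contribute complementary information to the patient's record.
        self.complementary_info.append(info)
```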
International Conference of the IEEE Engineering in Medicine and Biology Society | 2009
Laurent Lecornu; G. Thillay; C. Le Guillou; Pierre-Jean Garreau; P. Saliou; H. Jantzem; John Puentes; Jean-Michel Cauvin
Choosing diagnosis codes is a non-intuitive operation for the practitioner. Mistakes are frequent, with severe consequences for healthcare evaluation and funding. French physicians have to assign a code to everything they do and are not spared these kinds of errors. We propose a tool named REFEROCOD to support the medical coding task and minimize errors without losing time, by suggesting a list of codes consistent with the physician's activities and the patient's medical context. The proposed method uses probabilistic knowledge and indicates the probability that a diagnosis code is appropriate given the performed procedure, age, sex, and other information available in the discharge abstract.
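As an illustration of this kind of probabilistic ranking, the following hedged Python sketch estimates P(code | procedure, age band, sex) by simple frequency counting over historical discharge abstracts; the data layout and function names are assumptions, and REFEROCOD's actual probabilistic knowledge base is not reproduced here.

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

# Hypothetical historical discharge abstracts: (diagnosis_code, procedure, age_band, sex).
History = Iterable[Tuple[str, str, str, str]]

def rank_codes(history: History, procedure: str, age_band: str, sex: str,
               top_k: int = 5) -> List[Tuple[str, float]]:
    """Rank diagnosis codes by estimated probability given the patient context.

    A naive frequency-based estimate of P(code | procedure, age_band, sex);
    the published method relies on probabilistic knowledge extracted from
    discharge abstracts, which this sketch only approximates.
    """
    context_total = 0
    code_counts: Dict[str, int] = defaultdict(int)
    for code, proc, age, sx in history:
        if (proc, age, sx) == (procedure, age_band, sex):
            context_total += 1
            code_counts[code] += 1
    if context_total == 0:
        return []
    ranked = [(code, n / context_total) for code, n in code_counts.items()]
    return sorted(ranked, key=lambda kv: kv[1], reverse=True)[:top_k]
```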
International Conference of the IEEE Engineering in Medicine and Biology Society | 2007
A. Dahabiah; John Puentes; B. Solaiman
Clinical assessment of venous thrombosis (VT) is essential to evaluate the risk of size increase or embolism. Analyses like echogenicity and echostructure characterization examine ancillary evidence to improve diagnosis. However, such analyses are inherently uncertain and operator dependent, adding enormous complexity to the task of indexing diagnosed images for medical practice support, by retrieving similar images, or to exploit electronic patient record repositories for data mining. This paper proposes a VT ultrasound image indexing and retrieval approach, which shows the suitability of neural network VT characterization combined with a fuzzy similarity measure. Three types of image descriptors (sliding window, wavelet coefficient energy, and co-occurrence matrix) are processed by three different neural networks, producing equivalent VT characterizations. Resulting values are projected on fuzzy membership functions and then compared with the fuzzy similarity measure. An experimental validation indicates that, compared to nominal and Euclidean distances, the fuzzy similarity measure increases image retrieval precision beyond the identification of images that belong to the same diagnostic class, taking into account the uncertainty of the characterization results and allowing the user to favor any particular feature.
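To make the fuzzy comparison step concrete, here is a hedged Python sketch that projects characterization values onto a triangular membership function and compares two membership vectors with a weighted min/max similarity; the actual membership functions and similarity definition of the paper may differ, and the weights merely illustrate how a user could favor one descriptor.

```python
from typing import Optional
import numpy as np

def triangular_membership(x: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
    """Triangular fuzzy membership function with support [a, c] and peak at b."""
    left = (x - a) / (b - a) if b > a else np.ones_like(x)
    right = (c - x) / (c - b) if c > b else np.ones_like(x)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def fuzzy_similarity(mu_query: np.ndarray, mu_db: np.ndarray,
                     weights: Optional[np.ndarray] = None) -> float:
    """Weighted min/max fuzzy similarity between two membership vectors in [0, 1].

    Per-feature weights let the user favor a particular descriptor
    (sliding window, wavelet energy, co-occurrence), as the abstract allows.
    """
    if weights is None:
        weights = np.ones_like(mu_query)
    num = np.sum(weights * np.minimum(mu_query, mu_db))
    den = np.sum(weights * np.maximum(mu_query, mu_db))
    return float(num / den) if den > 0 else 1.0
```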
Computer-Aided Engineering | 2010
Anas Dahabiah; John Puentes; Basel Solaiman
Information processing in modern pattern recognition systems is becoming increasingly complex due to the flood of data and the need to deal with different aspects of information imperfection. In this paper a simple and efficient possibilistic evidential method is defined, taking account of data heterogeneity, combined with proportional conflict redistribution to handle information conflict, paradox, and scarcity within a fusion framework. It weighs information constraints and updating for dynamic fusion, and appropriately considers the imperfection of training set elements, class set continuity, and the scalability of system output information, encompassing a significant range of issues encountered in current databases. One example of processing knowledge sources under these constraints is given to explain the main processing phases, followed by suitable application instances in satellite and medical image recognition.
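As a generic illustration of proportional conflict redistribution, the sketch below implements the standard PCR5 rule for two sources over singleton hypotheses; the paper's possibilistic evidential method also handles heterogeneity, dynamic updating, and imperfect training data, which this fragment does not attempt to cover.

```python
from typing import Dict

def pcr5_combine(m1: Dict[str, float], m2: Dict[str, float]) -> Dict[str, float]:
    """Combine two basic belief assignments over singleton hypotheses with PCR5.

    A generic proportional conflict redistribution (PCR5) sketch, restricted to
    singleton hypotheses; not the paper's full fusion framework.
    """
    hypotheses = set(m1) | set(m2)
    # Conjunctive (agreeing) part of the combination.
    combined = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hypotheses}
    # Redistribute each partial conflict m1(X)*m2(Y), X != Y, back to X and Y
    # proportionally to the masses that generated it.
    for x in hypotheses:
        for y in hypotheses:
            if x == y:
                continue
            a, b = m1.get(x, 0.0), m2.get(y, 0.0)
            if a + b > 0:
                combined[x] += (a * a * b) / (a + b)
                combined[y] += (b * b * a) / (a + b)
    return combined

# Example: two conflicting sources about classes "A" and "B".
# pcr5_combine({"A": 0.6, "B": 0.4}, {"A": 0.2, "B": 0.8})
```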
Artificial Intelligence in Medicine | 2000
John Puentes; Mireille Garreau; Christian Roux; Jean-Louis Coatrieux
Cardiac motion analysis enables the identification of pathologies related to myocardial anomalies or coronary artery circulation deficiencies. Conventionally, bi-dimensional (2D) left ventricle contour images have been extensively used to perform quantitative measurements and qualitative evaluations of the cardiac function. Nevertheless, there are other cardiac anatomical structures, the coronary arteries, imaged in routine procedures, upon which complementary motion interpretation can be conducted. This paper presents an experimental methodology to perform dynamic cardiac scene interpretation by studying the spatio-temporal behavior of three-dimensional (3D) coronary arteries. As an alternative way to approach computer-assisted cardiac motion interpretation, it reveals a wide range of rarely explored spatio-temporal situations and proposes how to address them. Considering the challenges of achieving dynamic scene interpretation, it is explained how spatial and temporal knowledge are connected to specialist knowledge and measured parameters to obtain a dynamic scene interpretation. Global and local motion features are modeled according to cardiac motion and geometrical knowledge, before their transformation into symbols. Anatomical knowledge and spatio-temporal knowledge are applied, along with spatio-temporal reasoning schemes, to access the symbols' meaning. Experimental results obtained using real data are presented. The complexity of envisioning such interpretation is discussed, taking the given results as an example.
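The numeric-to-symbolic step mentioned above might look like the following hedged Python sketch, where measured displacements are mapped to qualitative motion labels; the thresholds and label names are invented for illustration and are not the paper's values.

```python
from typing import List, Tuple

# Illustrative thresholds (millimetres per frame); hypothetical, not from the paper.
def motion_symbol(displacement_mm: float) -> str:
    """Map a measured local displacement to a qualitative motion symbol."""
    if displacement_mm < 0.5:
        return "akinetic"
    if displacement_mm < 2.0:
        return "hypokinetic"
    return "normal"

def symbolize_trajectory(displacements: List[float]) -> List[Tuple[int, str]]:
    """Transform a coronary segment's displacements over time into (frame, symbol) pairs,
    the kind of numeric-to-symbolic step a knowledge-based interpretation can reason on."""
    return [(t, motion_symbol(d)) for t, d in enumerate(displacements)]
```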
International Conference of the IEEE Engineering in Medicine and Biology Society | 2010
Laurent Lecornu; C. Le Guillou; F. Le Saux; M. Hubert; John Puentes; J.-M. Cauvin
For the practitioner, choosing diagnosis codes is a non-intuitive operation. Mistakes are frequent, with severe consequences for healthcare performance evaluation and funding. French physicians have to assign a code to all their activities and are frequently prone to these errors. Given that indexed information is already available most of the time, particularly for chronic diseases, we propose a tool named AnterOcod to support the medical coding task. It suggests the list of most relevant plausible codes, predicted from the patient's earlier hospital stays, according to the set of previously utilized diagnosis codes. Our method estimates code reappearance rates, based on an approach equivalent to actuarial survival curves. Around 33% of the expected correct diagnosis codes were retrieved in this manner after evaluating 998 discharge abstracts, significantly improving the coding task.
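The actuarial (life-table) flavor of the reappearance estimate can be sketched as follows; the observation encoding and estimator details are assumptions, not AnterOcod's exact formulation.

```python
from typing import List, Tuple

def actuarial_reappearance_rates(observations: List[Tuple[int, bool]],
                                 max_interval: int) -> List[float]:
    """Estimate, interval by interval, the cumulative probability that a diagnosis
    code reappears within a given number of stays after its last occurrence.

    observations: (interval_reached, reappeared) pairs, one per code history;
    an interval counts elapsed stays, and censored histories have reappeared=False.
    A life-table (actuarial) style sketch, hypothetical with respect to the paper.
    """
    cumulative_survival = 1.0
    rates = []
    for k in range(1, max_interval + 1):
        at_risk = sum(1 for interval, _ in observations if interval >= k)
        events = sum(1 for interval, reappeared in observations
                     if interval == k and reappeared)
        hazard = events / at_risk if at_risk else 0.0
        cumulative_survival *= (1.0 - hazard)
        rates.append(1.0 - cumulative_survival)  # probability of reappearance by interval k
    return rates
```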
International Conference of the IEEE Engineering in Medicine and Biology Society | 2001
J.R. Ordonez; Guy Cazuguel; John Puentes; B. Solaiman; Jean-Michel Cauvin; C. Roux
Addresses the problem of efficient image retrieval from a compressed image database, using information derived from the compression process. Images in the database are compressed applying two approaches: vector quantization (VQ) and quadtree image decomposition. Both are based on Kohonen's self-organizing feature maps (SOFM) for creating vector quantization codebooks. However, while VQ uses one codebook of one resolution to compress the images, quadtree decomposition simultaneously uses four codebooks of different resolutions. Image indexing is implemented by generating a feature vector (FV) for each compressed image. Accordingly, images are retrieved by evaluating FV similarity between the query image and the images in the database, depending on a distance measure. Three distance measures have been analyzed to assess FV index similarity: Euclidean, intersection, and correlation distances. The retrieval efficiency of the distance measures is evaluated for different VQ resolutions and different quadtree image descriptors. Experimental results using real data, esophageal ultrasound and eye angiography images, are presented.
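Assuming the feature vector is a normalized histogram of codeword usage (one plausible reading of the abstract, not necessarily the paper's exact construction), the three distance measures could be computed as in this Python sketch.

```python
import numpy as np

def feature_vector(code_indices: np.ndarray, codebook_size: int) -> np.ndarray:
    """Normalized histogram of VQ codeword usage, taken here as the image's feature vector."""
    hist = np.bincount(code_indices.ravel(), minlength=codebook_size).astype(float)
    return hist / hist.sum()

def euclidean(fv1: np.ndarray, fv2: np.ndarray) -> float:
    return float(np.linalg.norm(fv1 - fv2))

def intersection(fv1: np.ndarray, fv2: np.ndarray) -> float:
    # Histogram intersection turned into a distance (0 when the histograms are identical).
    return float(1.0 - np.minimum(fv1, fv2).sum())

def correlation(fv1: np.ndarray, fv2: np.ndarray) -> float:
    # One minus the Pearson correlation between the two feature vectors.
    return float(1.0 - np.corrcoef(fv1, fv2)[0, 1])
```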
Archive | 2013
John Puentes; Julien Montagner; Laurent Lecornu; Jaakko Lähteenmäki
Data collected by multiple physiological sensors are being increasingly used for wellness monitoring or disease management, within a pervasive context facilitated by the massive use of mobile devices. These abundant complementary raw data are challenging to understand and process, because of their voluminous and heterogeneous nature, as well as the data quality issues that could impede their utilization. This chapter examines the main data quality questions concerning six frequently used physiological sensors (glucometer, scale, blood pressure meter, heart rate meter, pedometer, and thermometer), as well as patient observations that may be associated with a given set of measurements. We discuss specific details that are either overlooked in the literature or avoided by data exploration and information extraction algorithms, but have significant importance for properly preprocessing these data. Making use of different types of formalized knowledge, according to the characteristics of physiological measurement devices, relevant data handled by a Personal Health Record on a mobile device are evaluated from a data quality perspective, considering data deficiency factors, consequences, and reasons. We propose a general scheme for sensor data quality characterization adapted to a pervasive scenario.
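A small Python sketch of the kind of data quality flags such a scheme could raise for one sensor; the glucometer bounds and jump threshold are illustrative assumptions only.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

# Hypothetical plausibility bounds for a glucometer (mmol/L); not normative values.
GLUCOSE_MIN, GLUCOSE_MAX = 1.0, 33.0

@dataclass
class Measurement:
    sensor: str          # e.g. "glucometer"
    value: float
    timestamp: datetime

def quality_flags(m: Measurement, previous: List[Measurement]) -> List[str]:
    """Flag common data deficiencies before analysis: out-of-range values,
    duplicated timestamps, and implausible jumps between consecutive readings."""
    flags = []
    if not (GLUCOSE_MIN <= m.value <= GLUCOSE_MAX):
        flags.append("out_of_range")
    if any(p.timestamp == m.timestamp for p in previous):
        flags.append("duplicate_timestamp")
    if previous and abs(m.value - previous[-1].value) > 15.0:  # arbitrary jump threshold
        flags.append("implausible_change")
    return flags
```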
Computer Methods and Programs in Biomedicine | 2013
John Puentes; Julien Montagner; Laurent Lecornu; Jean-Michel Cauvin
Medical encoding support systems for diagnoses and medical procedures are an emerging technology that begins to play a key role in billing, reimbursement, and health policy decisions. A significant problem in exploiting these systems is how to measure the appropriateness of any automatically generated list of codes in terms of fitness for use, i.e. its quality. Until now, only information retrieval performance measurements have been applied to estimate the accuracy of code lists as a quality indicator. Such measurements do not reflect the value of code lists for practical medical encoding, and cannot be used to globally compare the quality of multiple code lists. This paper defines and validates a new encoding information quality measure that addresses the problem of measuring the quality of medical code lists. It is based on a usability study of how expert coders and physicians apply computer-assisted medical encoding. The proposed measure, named ADN, evaluates code Accuracy, Dispersion and Noise, and is adapted to the variable length and content of generated code lists, coping with limitations of previous measures. According to the ADN measure, the information quality of a code list is fully represented by a single point within a suitably constrained feature space. Using one scheme, our approach reliably measures and compares the information quality of hundreds of code lists, showing their practical value for medical encoding. Its pertinence is demonstrated by simulation and by application to real data corresponding to 502 inpatient stays in four clinic departments. Results are compared to the consensus of three expert coders who also coded this anonymized database of discharge summaries, and to five information retrieval measures. Information quality assessment applying the ADN measure showed the degree of encoding-support system variability from one clinic department to another, providing a global evaluation of quality measurement trends.
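Since the exact ADN formulas are not given in the abstract, the following Python sketch uses hypothetical definitions of Accuracy, Dispersion, and Noise purely to illustrate scoring a generated code list against a consensus list; it should not be read as the published measure.

```python
from typing import List, Tuple

def adn_sketch(generated: List[str], expected: List[str]) -> Tuple[float, float, float]:
    """Illustrative (hypothetical) Accuracy / Dispersion / Noise scores for a generated
    code list against the coders' consensus; the paper's exact ADN formulas differ.

    Accuracy: fraction of expected codes present in the generated list.
    Dispersion: mean normalized rank at which expected codes appear (0 = top of list).
    Noise: fraction of generated codes that are not expected.
    """
    if not generated or not expected:
        return 0.0, 1.0, 1.0
    found_ranks = [generated.index(c) for c in expected if c in generated]
    accuracy = len(found_ranks) / len(expected)
    dispersion = (sum(found_ranks) / len(found_ranks) / max(len(generated) - 1, 1)
                  if found_ranks else 1.0)
    noise = sum(1 for c in generated if c not in expected) / len(generated)
    return accuracy, dispersion, noise
```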