Amnon Shabo
IBM
Publications
Featured research published by Amnon Shabo.
IEEE Visualization | 2002
Donna L. Gresh; David A. Rabenhorst; Amnon Shabo; Shimon Slavin
We have created an application, called PRIMA (Patient Record intelligent Monitoring and Analysis), which can be used to visualize and understand patient record data. It was developed to better understand a large collection of patient records of bone marrow transplants at Hadassah Hospital in Jerusalem, Israel. It is based on an information visualization toolkit, Opal, which has been developed at the IBM T.J. Watson Research Center. Opal allows intelligent, interactive visualization of a wide variety of different types of data. The PRIMA application is generally applicable to a wide range of patient record data, as the underlying toolkit is flexible with regard to the form of the input data. This application is a good example of the usefulness of information visualization techniques in the bioinformatics domain, as these techniques have been developed specifically to deal with diverse sets of often unfamiliar data. We illustrate several unanticipated findings which resulted from the use of a flexible and interactive information visualization environment.
IBM Journal of Research and Development | 2011
Joseph Phillip Bigus; Murray Campbell; Boaz Carmeli; Melissa Cefkin; Henry Chang; Ching-Hua Chen-Ritzo; William F. Cody; Shahram Ebadollahi; Alexandre V. Evfimievski; Ariel Farkash; Susanne Glissmann; David Gotz; Tyrone Grandison; Daniel Gruhl; Peter J. Haas; Mark Hsiao; Pei-Yun Sabrina Hsueh; Jianying Hu; Joseph M. Jasinski; James H. Kaufman; Cheryl A. Kieliszewski; Martin S. Kohn; Sarah E. Knoop; Paul P. Maglio; Ronald Mak; Haim Nelken; Chalapathy Neti; Hani Neuvirth; Yue Pan; Yardena Peres
Rising costs, decreasing quality of care, diminishing productivity, and increasing complexity have all contributed to the present state of the healthcare industry. The interactions between payers (e.g., insurance companies and health plans) and providers (e.g., hospitals and laboratories) are growing and becoming more complicated. The constant growth in the volume and complexity of diagnostic and treatment information has made the clinical decision-making process more difficult. Medical transaction charges are greater than ever. Population-specific financial requirements are increasing the economic burden on the entire system. Medical insurance and identity theft frauds are on the rise. The current lack of comparative cost analytics hampers systemic efficiency. Redundant and unnecessary interventions add to medical expenditures without adding value. Contemporary payment models are antithetical to outcome-driven medicine. The rate of medical errors and mistakes is high. Slow, inefficient processes and the lack of best-practice support for care delivery do not create productive settings. Information technology has an important role to play in addressing these problems. This paper describes IBM Research's approach to helping address these issues, i.e., the evidence-based healthcare platform.
Personalized Medicine | 2007
Amnon Shabo
The availability of personal health data for the purposes of human readability, as well as for machine processability, is still limited despite significant advances in the use of information technologies by healthcare providers, clinical trial sponsors and even patients through the emerging concept of personal health records [1]. While human readability is necessary for human-to-human communications during the care process and for medico-legal reasons, machine processability is needed to support humans in interpreting the ever-growing amount of health data and making clinical decisions for the best care possible based on the latest evidence in the scientific literature. The limited availability of personal health data makes it hard to achieve both goals. Recent efforts in designing new healthcare information systems still focus on the human readability of the data, but new standards for health data representation [101,102] make it possible to achieve both human readability and machine processability in the same format. In this way, a clinical document, for example, could be readable by the clinician and at the same time parsed by decision-support applications that offer the clinician valuable support in clinical decision making. Clinical trial sponsors use similar health data, but the amount of data collected and processed in the course of a typical clinical trial is higher than in healthcare and, in addition, this is done in a relatively short time. Therefore, machine-processable data would be beneficial for the efficiency and quality of the clinical trial. It is important to have the personal health data formatted appropriately once and then used by different stakeholders [2]. Therefore, the issue of personal health data is common to healthcare and clinical trials, as well as any other secondary use of health data.
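The dual goal described in this abstract, a clinical document that is both readable by the clinician and parseable by decision-support software, can be sketched with a simplified, CDA-inspired XML fragment. All element names, attributes and codes below are illustrative assumptions, not taken from the actual HL7 CDA schema:

```python
import xml.etree.ElementTree as ET

# A simplified, illustrative fragment in the spirit of an HL7 CDA document:
# a narrative block for clinicians plus a coded entry for software.
doc = """
<ClinicalDocument>
  <section>
    <title>Medications</title>
    <text>The patient was prescribed metformin 500 mg twice daily.</text>
    <entry>
      <substanceAdministration>
        <code system="RxNorm" value="860975" display="metformin 500 mg"/>
        <dose value="500" unit="mg"/>
        <frequency value="2" unit="per-day"/>
      </substanceAdministration>
    </entry>
  </section>
</ClinicalDocument>
"""

root = ET.fromstring(doc)
section = root.find("section")

# Human readability: the narrative text is rendered to the clinician as-is.
narrative = section.findtext("text")

# Machine processability: a decision-support rule reads the coded entry
# without any natural-language processing of the narrative.
code = section.find(".//code")
dose = section.find(".//dose")
print("Narrative:", narrative)
print("Coded drug:", code.get("display"), "| dose:", dose.get("value"), dose.get("unit"))
```

The point of the sketch is that the narrative and the coded entry live in one document, so a single artifact serves both the human and the algorithmic consumer.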
Methods in Molecular Biology | 2006
Casey S. Husser; Jeffrey Buchhalter; O. Scott Raffo; Amnon Shabo; Steven H. Brown; E Karen; L Peter
This chapter provides a bottom-up perspective on bioinformatics data standards, beginning with a historical perspective on biochemical nomenclature standards. Various file format standards were soon developed to convey increasingly complex and voluminous data that nomenclature alone could not effectively organize without additional structure and annotation. As areas of biochemistry and molecular biology have become more integral to the practice of modern medicine, broader data representation models have been created, from the co-representation of genomic and clinical data as a framework for drug research and discovery to the modeling of genotyping and pharmacogenomic therapy within the broader process of the delivery of health care.
The EPMA Journal | 2014
Amnon Shabo
Scientific objectives
A growing range of genetic tests is now available to clinicians, and their results could improve the quality of care and its outcomes. In addition, these tests are also helpful for predictive medicine and the early detection of diseases. Genetic testing could further personalize care processes based on the patient's individual genetic makeup. Genetic testing methods are diverse, spanning from testing for known germline mutations in the context of single-gene disorders, to full sequencing of genes in tumor tissues looking for somatic variations in cancer cells, and on to whole-exome sequencing in cases of predictive medicine or unknown diagnosis. As a consequence of this diversity and the constantly growing number of techniques yielding new result formats less familiar to clinicians, existing report formats attempt to contain rather detailed descriptions of the tests performed, but that approach makes it hard to understand the interpretations of the test results and the recommendations given. Genetic testing reports should be human readable to their users and at the same time processable by computerized algorithms that compute the interpretation of the genetic test results. Thus, it is equally important to standardize the way genetic testing interpretations are represented [1] as an inherent part of the report structure.
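The idea of a report that carries both a human-readable summary and a standardized, machine-processable interpretation can be sketched as follows. The class names, fields, classification vocabulary and the BRCA1 example are illustrative assumptions, not part of any published reporting standard:

```python
from dataclasses import dataclass, field

@dataclass
class VariantInterpretation:
    # Field names and classification values are illustrative only.
    gene: str
    variant: str            # HGVS-style notation
    classification: str     # e.g., "pathogenic", "benign", "uncertain"
    recommendation: str

@dataclass
class GeneticTestReport:
    patient_id: str
    method: str             # e.g., "germline single-gene test"
    narrative: str          # human-readable summary for the clinician
    interpretations: list = field(default_factory=list)  # structured part

report = GeneticTestReport(
    patient_id="P-001",
    method="germline single-gene test",
    narrative=("A pathogenic BRCA1 variant was detected; "
               "referral to genetic counseling is advised."),
    interpretations=[
        VariantInterpretation(
            gene="BRCA1",
            variant="c.68_69delAG",
            classification="pathogenic",
            recommendation="refer to genetic counseling",
        )
    ],
)

# A downstream algorithm can act on the coded classification alone,
# without parsing the narrative text.
actionable = [i for i in report.interpretations
              if i.classification == "pathogenic"]
print(len(actionable), "actionable finding(s)")
```

Because the interpretation is a first-class structured field rather than free text, the same report object serves the clinician (via `narrative`) and decision-support code (via `interpretations`).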
Archive | 2013
Amnon Shabo; Maurizio Scarpa
Realizing personalized medicine depends on the effective use of omics data and knowledge along with the patient's medical history. Such history includes clinical data sets that need to be as rich as possible and, more importantly, their semantics should be made explicit and machine-processable. Often, the semantics of research and clinical data are merely implied and need to be resolved from separate descriptions of the data, which are typically unstructured (e.g., a clinical database schema's textual definition in healthcare, or a study protocol description in research). Furthermore, the nature of patient data is changing: a constantly growing stream of raw data with preliminary analysis is available today in both research and clinical environments, e.g., DNA sequences along with rare variants, or home-care device data along with personal alerts. The representation of all data types should adhere as much as possible to agreed-upon international standards in order to assure interoperability across the translational health domain. Dispersed and disparate medical records of a patient are often inconsistent and incoherent. A patient-centric longitudinal electronic health record could provide a coherent and explicit representation of the data semantics, including omics information resulting from personalized analysis of the patient's raw omics data and its association with phenotypic data.
IBM Journal of Research and Development | 2012
Amnon Shabo
Healthcare transformation through the use of information technologies is partly dependent on effectively applying the most up-to-date knowledge to the complete representation of the patient's past medical history at the point of care. In order for health knowledge to be effectively used, patient information should be sufficiently detailed, and more importantly, the semantics of the data should be made explicit and machine processable. Often, the semantics of data are represented implicitly and are hidden in unstructured and disconnected descriptions of the data. Alternatively, they may be known to human experts, such as the researchers or caregivers involved in the generation of that data. Predefined schemas of health information systems are insufficient; it is extremely important to explicitly represent the patient-specific context of each discrete data item and how it relates to other data items (e.g., indications and outcomes of an operation), as well as how it fits into the entire health history of an individual. Dispersed and disparate medical records of a patient are often inconsistent and incoherent. An independent patient-centric electronic health record may provide an explicit, coherent, and complete representation of contextual data. This paper reviews healthcare transformations, with consideration of an independent health record.
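The kind of explicit, patient-specific context this abstract calls for, e.g., linking an operation to its indications and outcomes, can be sketched as typed links between discrete data items. The structure, identifiers and relation names below are illustrative assumptions, not an EHR standard:

```python
# A minimal sketch of making the relationships between discrete data
# items explicit and machine-processable, rather than leaving them
# implied by a flat database schema.
record = {
    "items": {
        "obs-1": {"type": "observation", "text": "gallstones on ultrasound"},
        "proc-1": {"type": "procedure", "text": "cholecystectomy"},
        "obs-2": {"type": "observation", "text": "symptoms resolved"},
    },
    # Typed links carry the patient-specific context explicitly.
    "links": [
        {"from": "obs-1", "to": "proc-1", "relation": "indication-of"},
        {"from": "obs-2", "to": "proc-1", "relation": "outcome-of"},
    ],
}

def related(record, item_id, relation):
    """Return the items linked to item_id by the given relation type."""
    return [record["items"][link["from"]]
            for link in record["links"]
            if link["to"] == item_id and link["relation"] == relation]

# Software can now answer "why was this procedure done?" directly.
indications = related(record, "proc-1", "indication-of")
print([i["text"] for i in indications])
```

The design choice illustrated here is that context lives in the links, not in free text, so it survives when records from disparate sources are merged into one longitudinal history.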
Medical Informatics Europe | 2011
Ariel Farkash; Hani Neuvirth; Yaara Goldschmidt; Costanza Conti; Federica Rizzi; Stefano Bianchi; Erika Salvi; Daniele Cusi; Amnon Shabo
The new generation of health information standards, where the syntax and semantics of the content are explicitly formalized, allows for interoperability in healthcare scenarios and analysis in clinical research settings. Studies involving clinical and genomic data include accumulating knowledge in the form of relationships between genotypic and phenotypic information, as well as associations within the genomic and clinical worlds. Some involve analysis results targeted at a specific disease; others are of a predictive nature specific to a patient and may be used by decision-support applications. Representing knowledge is as important as representing data, since data is more useful when coupled with relevant knowledge. Any further analysis and cross-research collaboration would benefit from persisting knowledge and data in a unified way. This paper describes a methodology used in Hypergenes, an EC FP7 project targeting Essential Hypertension, which captures data and knowledge using standards such as HL7 CDA and Clinical Genomics, aligned with the CEN EHR 13606 specification. We demonstrate the benefits of such an approach for clinical research as well as in healthcare-oriented scenarios.
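Persisting data and knowledge in a unified way, as this abstract advocates, can be sketched with a single typed store that holds both patient-level data and population-level associations. The structure, field names and the rs699 example are illustrative assumptions, not the actual Hypergenes representation:

```python
# Minimal sketch: one uniform store where "data" entries are tied to a
# patient and "knowledge" entries hold study-level associations, so a
# later analysis step can consume both together.
entries = [
    {"kind": "data", "subject": "patient-17",
     "item": {"phenotype": "essential hypertension",
              "systolic_mmHg": 162}},
    {"kind": "data", "subject": "patient-17",
     "item": {"genotype": {"snp": "rs699", "alleles": "AG"}}},
    # Knowledge: a population-level association, not tied to one patient.
    {"kind": "knowledge", "subject": None,
     "item": {"association": {"snp": "rs699",
                              "phenotype": "essential hypertension"}}},
]

def knowledge_for(entries, snp):
    """Return stored associations that mention the given SNP."""
    return [e["item"]["association"] for e in entries
            if e["kind"] == "knowledge"
            and e["item"]["association"]["snp"] == snp]

# A decision-support step joins a patient's genotype with stored knowledge.
print(knowledge_for(entries, "rs699")[0]["phenotype"])
```

Keeping both kinds of entries in one representation is what lets a cross-research query move from a patient's genotype to the relevant accumulated knowledge without a format conversion in between.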
The EPMA Journal | 2014
Amnon Shabo
Scientific objectives
Recent calls in the EU and US for the creation of a "universal exchange language" for health information representation [1,2] seemed to overlook the contribution of translational medicine in general, and translational informatics in particular. This paper calls for the development of a 'translational health information language' (THIL), which could serve the translational information continuum. Its backbone spans from biological research results and new types of raw data coming from sources such as omics assays, sensor data and imaging techniques, through clinical trials data, and on to clinical data. Alongside this backbone of information there are also contributions of economic, social and psychological considerations that quite often prevent a new and successful intervention at the bedside from scaling out to the community and policy. The main assumption of this paper is that, given the current diversity of such a continuum, it is not reasonable to expect a simplified exchange language to cover all portions of that continuum. Instead, THIL strives to mix and match existing and emerging languages through fundamental touchpoints in order to enable the integration of patients' data through a conceptual workflow of a continuous data encapsulation and bubbling-up loop [3]. These processes lead to gradual distillation of the raw data that makes the data usable and useful at the point of care, or for early detection, prevention and well-being.
Archive | 2008
Dorit Baras; Ohad Greenshpan; Amnon Shabo