
Publication


Featured research published by Jens H. Weber-Jahnke.


IEEE Transactions on Knowledge and Data Engineering | 2012

Privacy Preserving Decision Tree Learning Using Unrealized Data Sets

Pui Kuen Fong; Jens H. Weber-Jahnke

Privacy preservation is important for machine learning and data mining, but measures designed to protect private information often result in a trade-off: reduced utility of the training samples. This paper introduces a privacy preserving approach that can be applied to decision tree learning, without concomitant loss of accuracy. It describes an approach to the preservation of the privacy of collected data samples in cases where information from the sample database has been partially lost. This approach converts the original sample data sets into a group of unreal data sets, from which the original samples cannot be reconstructed without the entire group of unreal data sets. Meanwhile, an accurate decision tree can be built directly from those unreal data sets. This novel approach can be applied directly to the data storage as soon as the first sample is collected. The approach is compatible with other privacy preserving approaches, such as cryptography, for extra protection.
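The paper's unrealized-data-set construction is not reproduced here, but the property it relies on can be illustrated: ID3-style decision tree induction needs only aggregate attribute-value counts, not the individual records, which is what makes learning from transformed data sets feasible. A minimal sketch of a count-based split chooser (the function names and toy data are ours, not the paper's):

```python
import math
from collections import Counter, defaultdict

def entropy(label_counts):
    """Shannon entropy of a label distribution given as a Counter."""
    total = sum(label_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in label_counts.values() if c > 0)

def best_split(samples, attributes):
    """Pick the attribute with the highest information gain.

    `samples` is a list of (attribute_dict, label) pairs. Only
    aggregate counts are used, so the same choice could be made from
    count tables recovered out of transformed data sets.
    """
    base = entropy(Counter(label for _, label in samples))
    best_attr, best_gain = None, -1.0
    for attr in attributes:
        # Tally label counts per attribute value (aggregate statistics only).
        per_value = defaultdict(Counter)
        for record, label in samples:
            per_value[record[attr]][label] += 1
        remainder = sum(sum(c.values()) / len(samples) * entropy(c)
                        for c in per_value.values())
        gain = base - remainder
        if gain > best_gain:
            best_attr, best_gain = attr, gain
    return best_attr, best_gain

# Toy weather data: "outlook" separates the labels; "windy" does not.
data = [({"outlook": "sunny", "windy": "no"}, "play"),
        ({"outlook": "sunny", "windy": "yes"}, "play"),
        ({"outlook": "rainy", "windy": "no"}, "stay"),
        ({"outlook": "rainy", "windy": "yes"}, "stay")]
attr, gain = best_split(data, ["outlook", "windy"])
print(attr, round(gain, 3))  # outlook, with information gain 1.0
```

Because the learner consumes only these count tables, any transformation of the stored records that preserves (or allows reconstruction of) the counts leaves the resulting tree unchanged, which is the intuition behind building an accurate tree directly from unreal data sets.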


international conference on information security | 2011

The Proactive and Reactive Digital Forensics Investigation Process: A Systematic Literature Review

Soltan Alharbi; Jens H. Weber-Jahnke; Issa Traore

Recent papers have highlighted the need for new forensic techniques and tools able to investigate anti-forensics methods, and have promoted the automation of live investigation. Such techniques and tools are called proactive forensic approaches, i.e., approaches that can deal with digitally investigating an incident while it occurs. To develop such an approach, a Systematic Literature Review (SLR) was undertaken to identify and map the digital forensics investigation processes that exist in the literature. According to the review, only one process explicitly supports proactive forensics: the multicomponent process [1]. However, this is a very high-level process and cannot be used to introduce automation and build a proactive forensics system. As a result of our SLR, a derived functional process that can support the implementation of a proactive forensics system is proposed.


Information Systems Frontiers | 2012

eHealth system interoperability

Jens H. Weber-Jahnke; Liam Peyton; Thodoros Topaloglou

Healthcare systems around the world are in rapid transition, moving from traditional, paper-based practices to computerized processes and systems-based approaches to service delivery. The term eHealth is widely used to refer to the use of information technology systems in health care. eHealth is trending upwards in many different ways: (a) increased expectation for improved system outcomes, (b) increased funding, and (c) recognition by patients, providers and funders that it offers solutions to healthcare problems. In 2009, the American Recovery and Reinvestment Act (US) set aside $36.3 billion to help hospitals and physicians computerize patient medical records by 2015. The European Union and Canada also have programs with similar incentives and goals. While organizations are busy deploying eHealth information systems to better manage the quality and the delivery of health care services, from scheduling, billing, and health care records to the control of life-critical devices and clinical decision support, they face challenges that have to do with the overall complexity of healthcare, access to skills, and lack of interoperability among healthcare information systems. It is not surprising that existing eHealth systems are built in “silos” (functional, organizational, technical) and lack the ability to interact effectively. Lack of interoperability poses a serious risk to the ability to connect the “continuum of care” through technology. Achieving eHealth interoperability is difficult because of the inherent information complexity of the health care domain. Challenges include technical issues as well as socio-political and legal problems. Enabling the electronic flow of digitized healthcare information has implications for clinical and administrative processes as well as privacy and confidentiality. If these challenges can be overcome, the potential benefits to be gained from healthcare interoperability are enormous. It has been estimated that creating a national standardized system of health information exchange in the United States would yield a net benefit of over $75 billion per year (Walker et al. 2005). That estimate does not take into account the benefits that could accrue from improved clinical care. But we need to remember that neither financial nor clinical benefits will materialize if we do not pay attention to other factors including the design of work processes and team communications (Pirnejad et al. 2008). The motivation for this special issue was to invite contributions that touch on the various facets of the healthcare interoperability problem: people, processes, and technology. This special issue is intended for researchers and practitioners in the domain of health care information systems, including academics in health information science, computer science, software engineering, management and technology policy, as well as the rapidly growing group of IT workers and managers in the health care industry. The papers in this issue promote eHealth system interoperability as an important current frontier in information system research and practice and discuss a range of open challenges, potential solutions, and experiences with current systems.


Journal of the American Medical Informatics Association | 2008

An Interdisciplinary Computer-based Information Tool for Palliative Severe Pain Management

Craig E. Kuziemsky; Jens H. Weber-Jahnke; Francis Lau; G. Michael Downing

OBJECTIVES: As patient care becomes more collaborative in nature, there is a need for information technology that supports interdisciplinary practices of care. This study developed and performed usability testing of a standalone computer-based information tool to support the interdisciplinary practice of palliative severe pain management (SPM). DESIGN: A grounded theory-participatory design (GT-PD) approach was used with three distinct palliative data sources to obtain and understand user requirements for SPM practice and how a computer-based information tool could be designed to support those requirements. RESULTS: The GT-PD concepts and categories provided a rich perspective of palliative SPM and the process and information support required for different SPM tasks. A conceptual framework consisting of an ontology and a set of three problem-solving methods was developed to reconcile the requirements of different interdisciplinary team members. The conceptual framework was then implemented as a prototype computer-based information tool that has different modes of use to support both day-to-day case management and education in palliative SPM. Usability testing of the computer tool was performed, and the tool tested favorably in a laboratory setting. CONCLUSION: An interdisciplinary computer-based information tool can be developed to support the different work practices and information needs of interdisciplinary team members, but explicit requirements must be sought from all prospective users of such a tool. Qualitative methods such as the hybrid GT-PD approach used in this research are particularly helpful for articulating computer tool design requirements.


working conference on reverse engineering | 2007

Visualizing Software Architecture Evolution Using Change-Sets

Andrew McNair; Daniel M. German; Jens H. Weber-Jahnke

When trying to understand the evolution of a software system, it can be useful to visualize the evolution of the system's architecture. Existing tools for viewing architectural evolution assume that what a user is interested in can be described in an unbroken sequence of time, for example the changes over the last six months. We present an alternative approach that provides a lightweight method for examining the net effect of any set of changes on a system's architecture. We also present Motive, a prototype tool that implements this approach, and demonstrate how it can be used to answer questions about software evolution by describing case studies conducted on two Java systems.


Information Systems Frontiers | 2012

Protecting privacy during peer-to-peer exchange of medical documents

Jens H. Weber-Jahnke; Christina Obry

Privacy is an important aspect of interoperable medical information systems. Governments and health care organizations have established privacy policies to prevent abuse of personal health data. These policies often require organizations to obtain patient consent prior to exchanging personal information with other interoperable systems. The consents are defined in the form of so-called disclosure directives. However, policies are often not precise enough to address all possible eventualities and exceptions. Unanticipated priorities and other care contexts may cause conflicts between a patient's disclosure directives and the need to receive treatments from informed caregivers. It is commonly agreed that in these situations patient safety takes precedence over information privacy. Therefore, caregivers are typically given the ability to override the patient's disclosure directives to protect patient safety. These overrides must be logged and are subject to privacy audits to prevent abuse. Centralized “shared health record” (SHR) infrastructures include consent management systems that enact the above functionality. However, consent management mechanisms do not extend to information systems that exchange clinical information on a peer-to-peer basis, e.g., by secure messaging. Our article addresses this gap by presenting a consent management mechanism for peer-to-peer interoperable systems. The mechanism restricts access to sensitive medical data based on defined consent directives, but also allows overriding the policies when needed. The overriding process is monitored and audited in order to prevent misuse. The mechanism has been implemented in an open source project called CDAShip and has been made available on SourceForge.


software engineering in health care | 2011

Towards electronic health record support for collaborative processes

Craig E. Kuziemsky; James B. Williams; Jens H. Weber-Jahnke

As more healthcare delivery is provided by collaborative care teams, there is a need to design tools such as electronic health records (EHRs) to support teams. Much of the existing EHR work has focused on semantic interoperability. While that work is important, collaborative care delivery is largely process driven, meaning that process interoperability must also be considered. This paper takes a first step towards engineering EHR requirements to support collaborative care delivery by defining a set of collaborative competencies. These competencies emphasize process interoperability through separation of data and processes. The findings from this paper can help inform EHR design to support collaborative care delivery.


BMC Bioinformatics | 2011

Building a biomedical tokenizer using the token lattice design pattern and the adapted Viterbi algorithm

Neil Barrett; Jens H. Weber-Jahnke

Background: Tokenization is an important component of language processing, yet there is no widely accepted tokenization method for English texts, including biomedical texts. Other than rule-based techniques, tokenization in the biomedical domain has been regarded as a classification task. Biomedical classifier-based tokenizers either split or join textual objects through classification to form tokens. The idiosyncratic nature of each biomedical tokenizer's output complicates adoption and reuse. Furthermore, biomedical tokenizers generally lack guidance on how to apply an existing tokenizer to a new domain (subdomain). We identify and complete a novel tokenizer design pattern and suggest a systematic approach to tokenizer creation. We implement a tokenizer based on our design pattern that combines regular expressions and machine learning. Our machine learning approach differs from the previous split-join classification approaches. We evaluate our approach against three other tokenizers on the task of tokenizing biomedical text. Results: Medpost and our adapted Viterbi tokenizer performed best, with 92.9% and 92.4% accuracy respectively. Conclusions: Our evaluation of our design pattern and guidelines supports our claim that the design pattern and guidelines are a viable approach to tokenizer construction (producing tokenizers matching leading custom-built tokenizers in a particular domain). Our evaluation also demonstrates that ambiguous tokenizations can be disambiguated through POS tagging, and that POS tag sequences and training data have a significant impact on proper text tokenization.


Computer-aided Design | 2009

Virtual prototyping of automated manufacturing systems with Geometry-driven Petri nets

Jens H. Weber-Jahnke; Jochen Stier


FHIES'11 Proceedings of the First international conference on Foundations of Health Informatics Engineering and Systems | 2011

On the safety of electronic medical records

Jens H. Weber-Jahnke; Fieran Mason-Blakley
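The token-lattice idea from the BMC Bioinformatics tokenizer paper above can be sketched in a few lines: string positions form the lattice nodes, candidate substrings form the edges, and a Viterbi-style dynamic program picks the highest-scoring path. The toy vocabulary score below stands in for the paper's trained classifier and POS-informed scoring; the function names and the example are ours.

```python
def best_segmentation(text, score, max_len=10):
    """Highest-scoring segmentation of `text` over a token lattice.

    Positions 0..len(text) are lattice nodes; every substring up to
    `max_len` characters is a candidate token edge scored by `score`.
    The recurrence is the standard Viterbi maximization, with a toy
    additive score standing in for a trained classifier.
    """
    n = len(text)
    best = [float("-inf")] * (n + 1)  # best score reaching position i
    back = [0] * (n + 1)              # backpointer to previous boundary
    best[0] = 0.0
    for end in range(1, n + 1):
        for start in range(max(0, end - max_len), end):
            s = best[start] + score(text[start:end])
            if s > best[end]:
                best[end], back[end] = s, start
    # Recover the token sequence from the backpointers.
    tokens, pos = [], n
    while pos > 0:
        tokens.append(text[back[pos]:pos])
        pos = back[pos]
    return tokens[::-1]

def toy_score(tok):
    """Toy token score: favour items from a tiny hypothetical vocabulary."""
    vocab = {"IL", "-", "2", "receptor", " ", "alpha"}
    return 1.0 if tok in vocab else -1.0

print(best_segmentation("IL-2 receptor", toy_score))
# ['IL', '-', '2', ' ', 'receptor']
```

Swapping `toy_score` for a classifier's log-probabilities (conditioned on, e.g., POS tag sequences) turns this sketch into the lattice-plus-Viterbi pattern the paper describes, without changing the dynamic program itself.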

Collaboration


Dive into Jens H. Weber-Jahnke's collaborations.

Top Co-Authors

Francis Lau
University of Victoria

Issa Traore
University of Victoria

Morgan Price
University of British Columbia