Publications

6 Publications matching the given criteria:

Abstract

BACKGROUND: Clinical trials, epidemiological studies, clinical registries, and other prospective research projects, together with patient care services, are the main sources of data in the medical research domain. They often serve as a basis for secondary research in evidence-based medicine and for prediction models of diseases and their progression. These data are often neither sufficiently described nor accessible. Related models are often not accessible as functional program tools for interested users from the health care and biomedical domains. OBJECTIVE: The interdisciplinary project Leipzig Health Atlas (LHA) was developed to close this gap. LHA is an online platform that serves as a sustainable archive providing medical data, metadata, models, and novel phenotypes from clinical trials, epidemiological studies, and other medical research projects. METHODS: Data, models, and phenotypes are described by semantically rich metadata. The platform prefers to share data and models presented in original publications but is also open to nonpublished data. LHA provides and associates unique permanent identifiers with each dataset and model. Hence, the platform can be used to share prepared, quality-assured datasets and models while they are referenced in publications. All data, models, and phenotypes managed in LHA follow the FAIR principles, with public availability or restricted access for specific user groups. RESULTS: The LHA platform is in productive mode (https://www.health-atlas.de/). It is already used by a variety of clinical trial and research groups and is also becoming increasingly popular in the biomedical community. LHA is an integral part of the forthcoming initiative building a national research data infrastructure for health in Germany.
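
To illustrate the idea of FAIR-oriented, quality-assured dataset sharing with permanent identifiers described in this abstract, the following minimal Python sketch shows how such a dataset record might be represented and checked. The field names and the example identifier are illustrative assumptions, not the actual LHA metadata schema.

    # Minimal sketch of a FAIR-style dataset record (illustrative only; the
    # field names and the example identifier are assumptions, not the LHA schema).
    dataset_record = {
        "pid": "https://doi.org/10.xxxx/example",  # hypothetical permanent identifier
        "title": "Example clinical trial dataset",
        "description": "Quality-assured dataset as referenced in a publication",
        "access": "restricted",                    # "public" or "restricted"
        "keywords": ["clinical trial", "epidemiology"],
        "licence": "CC-BY-4.0",
    }

    def has_minimal_fair_fields(record: dict) -> bool:
        """Rough check that the minimal FAIR-relevant fields are present."""
        required = {"pid", "title", "description", "access", "licence"}
        return required.issubset(record)

    print(has_minimal_fair_fields(dataset_record))  # True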

Authors: T. Kirsten, F. A. Meineke, H. Loeffler-Wirth, C. Beger, A. Uciteli, S. Staubert, M. Lobe, R. Hansel, F. G. Rauscher, J. Schuster, T. Peschel, H. Herre, J. Wagner, S. Zachariae, C. Engel, M. Scholz, E. Rahm, H. Binder, M. Loeffler

Date Published: 3rd Aug 2022

Publication Type: Journal article

Abstract

Sharing data is of great importance for research in the medical sciences. It is the basis for reproducibility and for reusing already generated outcomes in new projects and new contexts. The FAIR data principles are the basis for sharing data. The Leipzig Health Atlas (LHA) platform follows these principles and provides data, describing metadata, and models that have been implemented in novel software tools and are available as demonstrators. LHA reuses and extends three major components that were previously developed by other projects. The SEEK management platform is the foundation, providing a repository for archiving, presenting, and securely sharing a wide range of publication results, such as published reports, (bio)medical data, and interactive models and tools. The LHA Data Portal manages study metadata and data, allowing users to search for data of interest. Finally, PhenoMan is an ontological framework for phenotype modelling. This paper describes the interrelation of these three components. In particular, we use PhenoMan, first, to model and represent phenotypes within the LHA platform. Second, the ontological phenotype representation can be used to generate search queries that are executed by the LHA Data Portal. PhenoMan generates the queries in a novel domain-specific query language (SDQL), which is specific to data management systems based on the CDISC ODM standard, such as the LHA Data Portal. Our approach was successfully applied to represent phenotypes in the Leipzig Health Atlas, with the possibility to execute corresponding queries within the LHA Data Portal.
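
As a rough illustration of the query-generation step described in this abstract (an ontological phenotype restriction being turned into a search query against ODM-style items), here is a minimal Python sketch. The PhenotypeRestriction class, the item OIDs, and the generated query syntax are invented for illustration and do not reproduce PhenoMan or SDQL.

    from dataclasses import dataclass

    # Illustrative only: this does not reproduce PhenoMan or the SDQL syntax.
    @dataclass
    class PhenotypeRestriction:
        item_oid: str   # OID of a CDISC ODM item, e.g. an age or lab item
        operator: str   # ">=", "<=", "=="
        value: float

    def to_query(restrictions: list[PhenotypeRestriction]) -> str:
        """Join single-item restrictions into one conjunctive query string."""
        parts = [f"{r.item_oid} {r.operator} {r.value}" for r in restrictions]
        return " AND ".join(parts)

    # Example: a hypothetical phenotype 'adult with elevated BMI'
    query = to_query([
        PhenotypeRestriction("I_AGE", ">=", 18),
        PhenotypeRestriction("I_BMI", ">=", 30),
    ])
    print(query)  # I_AGE >= 18 AND I_BMI >= 30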

Authors: A. Uciteli, C. Beger, J. Wagner, A. Kiel, F. A. Meineke, S. Staubert, M. Lobe, R. Hansel, J. Schuster, T. Kirsten, H. Herre

Date Published: 24th May 2021

Publication Type: Journal article

Abstract

Planning clinical studies to test medical hypotheses requires the specification of eligibility criteria in order to identify potential study participants. Electronically available patient data can support the recruitment of patients for studies. The Smart Medical Information Technology for Healthcare (SMITH) consortium aims to establish data integration centres to enable the innovative use of available healthcare data for research and treatment optimization. The data from the electronic health records of patients in the participating hospitals are integrated into a Health Data Storage based on the Fast Healthcare Interoperability Resources (FHIR) standard developed by HL7. In SMITH, FHIR Search is used to query the integrated data. An investigation has shown the advantages and disadvantages of using FHIR Search for specifying eligibility criteria. This paper presents an approach for modelling eligibility criteria as well as for generating and executing FHIR Search queries. Our solution is based on the Phenotype Manager, a general ontological phenotyping framework to model and calculate phenotypes using the Core Ontology of Phenotypes.
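
The abstract above describes generating FHIR Search queries from eligibility criteria. As a hedged illustration of what such a generated query can look like (not the consortium's actual implementation), the following Python sketch issues a standard FHIR Search request for one hypothetical criterion, "HbA1c above 7 %"; the server URL is an assumption for the example.

    import requests  # assumption: a plain HTTP client is enough for FHIR Search

    # Hypothetical FHIR server base URL; replace with a real endpoint.
    FHIR_BASE = "https://example.org/fhir"

    # Eligibility criterion (illustrative): HbA1c observation above 7 %.
    # LOINC 4548-4 = Hemoglobin A1c/Hemoglobin.total in Blood.
    params = {
        "code": "http://loinc.org|4548-4",
        "value-quantity": "gt7",
        "_include": "Observation:subject",  # also return the referenced Patient resources
    }

    bundle = requests.get(f"{FHIR_BASE}/Observation", params=params).json()
    patient_refs = {
        entry["resource"]["subject"]["reference"]
        for entry in bundle.get("entry", [])
        if entry["resource"]["resourceType"] == "Observation"
    }
    print(patient_refs)  # e.g. {'Patient/123', ...}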

Authors: A. Uciteli, C. Beger, J. Wagner, T. Kirsten, F. A. Meineke, S. Staubert, M. Lobe, H. Herre

Date Published: 26th Apr 2021

Publication Type: Journal article

Abstract

BACKGROUND: The successful determination and analysis of phenotypes plays a key role in the diagnostic process, the evaluation of risk factors, and the recruitment of participants for clinical and epidemiological studies. The development of computable phenotype algorithms to solve these tasks is a challenging problem, for various reasons. Firstly, the term 'phenotype' has no generally agreed definition and its meaning depends on context. Secondly, phenotypes are most commonly specified as non-computable descriptive documents. Recent attempts have shown that ontologies are a suitable way to handle phenotypes and that they can support clinical research and decision making. The SMITH Consortium is dedicated to rapidly establishing an integrative medical informatics framework to provide physicians with the best available data and knowledge and to enable innovative use of healthcare data for research and treatment optimisation. In the context of the methodological use case 'phenotype pipeline' (PheP), a technology is being developed to automatically generate phenotype classifications and annotations based on electronic health records (EHR). A large series of phenotype algorithms will be implemented. This implies that for each algorithm a classification scheme and its input variables have to be defined. Furthermore, a phenotype engine is required to evaluate and execute the developed algorithms. RESULTS: In this article, we present the Core Ontology of Phenotypes (COP) and the software Phenotype Manager (PhenoMan), which implements a novel ontology-based method to model, classify and compute phenotypes from already available data. Our solution includes an enhanced iterative reasoning process combining classification tasks with mathematical calculations at runtime. The ontology as well as the reasoning method were successfully evaluated with selected phenotypes, including the SOFA score, socio-economic status, body surface area and the WHO BMI classification, based on available medical data. CONCLUSIONS: We developed a novel ontology-based method to model phenotypes of living beings with the aim of automated phenotype reasoning based on available data. This new approach can be used in a clinical context, e.g. for supporting the diagnostic process, evaluating risk factors, and recruiting appropriate participants for clinical and epidemiological studies.
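
One of the evaluated phenotypes mentioned above, the WHO BMI classification, is a convenient way to illustrate what "combining classification tasks with mathematical calculations at runtime" means in practice. The short Python sketch below is only an analogy to that idea, not the ontology-based reasoning of PhenoMan; the WHO cut-offs (18.5, 25 and 30 kg/m^2) are the standard ones.

    def bmi(weight_kg: float, height_m: float) -> float:
        """Mathematical calculation step: body mass index in kg/m^2."""
        return weight_kg / (height_m ** 2)

    def who_bmi_class(weight_kg: float, height_m: float) -> str:
        """Classification step using the standard WHO cut-offs."""
        value = bmi(weight_kg, height_m)
        if value < 18.5:
            return "underweight"
        if value < 25:
            return "normal weight"
        if value < 30:
            return "overweight"
        return "obese"

    print(who_bmi_class(95, 1.80))  # 'overweight' (BMI ~29.3)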

Authors: A. Uciteli, C. Beger, T. Kirsten, F. A. Meineke, H. Herre

Date Published: 21st Dec 2020

Publication Type: Journal article

Abstract

The successful determination and analysis of phenotypes plays a key role in the diagnostic process, the evaluation of risk factors and the recruitment of participants for clinical and epidemiological studies. The development of computable phenotype algorithms to solve these tasks is a challenging problem, for various reasons. Firstly, the term 'phenotype' has no generally agreed definition and its meaning depends on context. Secondly, phenotypes are most commonly specified as non-computable descriptive documents. Recent attempts have shown that ontologies are a suitable way to handle phenotypes and that they can support clinical research and decision making. The SMITH Consortium is dedicated to rapidly establishing an integrative medical informatics framework to provide physicians with the best available data and knowledge and to enable innovative use of healthcare data for research and treatment optimization. In the context of the methodological use case 'phenotype pipeline' (PheP), a technology is being developed to automatically generate phenotype classifications and annotations based on electronic health records (EHR). A large series of phenotype algorithms will be implemented. This implies that for each algorithm a classification scheme and its input variables have to be defined. Furthermore, a phenotype engine is required to evaluate and execute the developed algorithms. In this article we present the Core Ontology of Phenotypes (COP) and the software Phenotype Manager (PhenoMan), which implements a novel ontology-based method to model and calculate phenotypes. Our solution includes an enhanced iterative reasoning process combining classification tasks with mathematical calculations at runtime. The ontology as well as the reasoning method were successfully evaluated with different phenotypes (including the SOFA score, socioeconomic status, body surface area and the WHO BMI classification) and several data sets.
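
A BMI sketch accompanies the related journal article above; here, the body surface area phenotype named in this abstract serves as a second example of a value derived by a runtime calculation. The Python sketch below uses the well-known Du Bois formula and is only an illustration of such a derived phenotype, not of PhenoMan itself.

    def body_surface_area(weight_kg: float, height_cm: float) -> float:
        """Derived phenotype via the Du Bois formula (result in m^2)."""
        return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

    print(round(body_surface_area(70, 175), 2))  # ~1.85 m^2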

Authors: Alexandr Uciteli, Christoph Beger, Toralf Kirsten, Frank A. Meineke, Heinrich Herre

Date Published: 20th Dec 2019

Publication Type: InProceedings

Abstract

The need for research data management has been recognised by the research community: sponsors, legislators and publishers expect and promote adherence to good scientific practice, which comprises not only archiving but also the availability of research data and results in the sense of the FAIR principles. The Leipzig Health Atlas (LHA) is a project for presenting and sharing a broad spectrum of publications, (bio)medical data (e.g. clinical, epidemiological, molecular), models and tools, e.g. for risk calculation in health research. The consortium partners cover a wide range of scientific disciplines, from medical systems biology through clinical and epidemiological research to ontological and dynamic modelling. Currently, 18 research consortia are involved (among others in the domains of lymphoma, glioma, sepsis, and hereditary colorectal and breast cancer), collecting data from clinical trials, patient cohorts and epidemiological cohorts, partly with extensive molecular and genetic profiles. The modelling comprises algorithmic phenotype classification, risk prediction and disease dynamics. In a first development phase we were able to show that our web-based platform is suitable for (1) providing methods to make individual patient data from publications accessible for reuse, (2) presenting algorithmic tools for phenotyping and risk profiling, (3) making tools for running dynamic disease and therapy models available interactively, and (4) providing structured metadata on quantitative and qualitative characteristics. Semantic data integration supplies the technologies (ontologies and data mining tools) for (semantic) data integration and knowledge enrichment. In addition, it provides tools for linking own data, analysis results, and publicly accessible data and metadata repositories, as well as for condensing complex data. A working group on application development and validation develops innovative paradigmatic applications for (1) clinical decision making in cancer trials, genetic counselling, risk prediction models, and tissue and disease models, and (2) applications (so-called apps) focusing on the characterisation of new phenotypes (e.g. 'omics' features, body types, reference values) from epidemiological studies. These applications are specified together with clinical experts, geneticists, systems biologists, biometricians and bioinformaticians. The LHA provides integration technology and implements the applications for the user communities using various presentation tools and technologies (e.g. R-Shiny, i2b2, Kubernetes, SEEK). This requires curating the data and metadata before upload, obtaining permissions from the data owners, taking the necessary data protection criteria into account and verifying semantic annotations. In addition, the supplied model algorithms are prepared in a quality-assured manner and, where applicable, made available online interactively.
The LHA is aimed in particular at clinicians, epidemiologists, molecular geneticists, human geneticists, pathologists, biostatisticians and modellers, but is publicly accessible at www.healthatlas.de; for legal reasons, access to certain applications and datasets requires additional authorisation. The project is funded by the BMBF programme i:DSem (Integrative Data Semantics for Systems Medicine, funding code 031L0026).

Authors: F. A. Meineke, Sebastian Stäubert, Matthias Löbe, C. Beger, René Hänsel, A. Uciteli, H. Binder, T. Kirsten, M. Scholz, H. Herre, C. Engel, Markus Löffler

Date Published: 19th Sep 2019

Publication Type: Misc
