Publications

14 publications

Abstract

The successful determination and analysis of phenotypes plays a key role in the diagnostic process, the evaluation of risk factors and the recruitment of participants for clinical and epidemiological studies. Developing computable phenotype algorithms for these tasks is challenging for several reasons. Firstly, the term ‘phenotype’ has no generally agreed definition and its meaning depends on context. Secondly, phenotypes are most commonly specified as non-computable descriptive documents. Recent work has shown that ontologies are a suitable way to handle phenotypes and that they can support clinical research and decision making. The SMITH Consortium is dedicated to rapidly establishing an integrative medical informatics framework that provides physicians with the best available data and knowledge and enables innovative use of healthcare data for research and treatment optimization. In the context of the methodological use case “phenotype pipeline” (PheP), a technology is being developed to automatically generate phenotype classifications and annotations based on electronic health records (EHR). A large series of phenotype algorithms will be implemented, which requires defining a classification scheme and input variables for each algorithm, as well as a phenotype engine to evaluate and execute the developed algorithms. In this article we present the Core Ontology of Phenotypes (COP) and the software Phenotype Manager (PhenoMan), which implements a novel ontology-based method to model and calculate phenotypes. Our solution includes an enhanced iterative reasoning process that combines classification tasks with mathematical calculations at runtime. The ontology and the reasoning method were successfully evaluated on different phenotypes (including the SOFA score, socioeconomic status, body surface area and the WHO BMI classification) and several data sets.
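To make the notion of a computable phenotype algorithm concrete, here is a minimal hand-written sketch of the WHO BMI classification named in the abstract. The function names are illustrative and the cut-offs are the standard WHO values; this is not the ontology-based COP/PhenoMan representation the paper presents.

```python
# Minimal sketch of one phenotype algorithm (WHO BMI classification).
# Function names are illustrative; cut-offs are the standard WHO values.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight / height^2."""
    return weight_kg / height_m ** 2

def who_bmi_class(value: float) -> str:
    """Map a BMI value onto the WHO classification."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal weight"
    if value < 30.0:
        return "overweight"
    return "obese"

# Example: 85 kg at 1.80 m gives a BMI of about 26.2.
print(who_bmi_class(bmi(85, 1.80)))  # overweight
```

The point of COP/PhenoMan is precisely to replace such hard-coded rules with declarative, ontology-based specifications that a reasoner can evaluate.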

Authors: Alexandr Uciteli, Christoph Beger, Toralf Kirsten, Frank A. Meineke, Heinrich Herre

Date Published: 20th Dec 2019

Publication Type: InProceedings

Abstract

The research community has recognized the need for research data management: funders, legislators and publishers expect and promote adherence to good scientific practice, which covers not only archiving but also the availability of research data and results in the sense of the FAIR principles. The Leipzig Health Atlas (LHA) is a project for presenting and sharing a broad spectrum of publications, (bio)medical data (e.g. clinical, epidemiological, molecular), models and tools, e.g. for risk calculation in health research. The project partners cover a wide range of scientific disciplines, from medical systems biology through clinical and epidemiological research to ontological and dynamic modelling. Currently, 18 research consortia are involved (covering, among others, the domains lymphoma, glioma, sepsis, and hereditary colorectal and breast cancer), collecting data from clinical trials, patient cohorts and epidemiological cohorts, in part with extensive molecular and genetic profiles. The modelling comprises algorithmic phenotype classification, risk prediction and disease dynamics. In a first development phase we were able to show that our web-based platform is suitable for (1) providing methods to make individual patient data from publications accessible for reuse, (2) presenting algorithmic tools for phenotyping and risk profiling, (3) making tools for running dynamic disease and therapy models available interactively, and (4) providing structured metadata on quantitative and qualitative characteristics. Semantic data integration supplies the technologies (ontologies and data-mining tools) for (semantic) data integration and knowledge enrichment.
It also provides tools for linking one's own data, analysis results, and publicly accessible data and metadata repositories, as well as for condensing complex data. A working group on application development and validation develops innovative paradigmatic applications for (1) clinical decision making in cancer trials, genetic counselling, risk prediction models, and tissue and disease models, and (2) applications (apps) that focus on characterizing new phenotypes (e.g. ‘omics’ features, body types, reference values) from epidemiological studies. These applications are specified together with clinical experts, geneticists, systems biologists, biometricians and bioinformaticians. The LHA provides the integration technology and implements the applications for the user communities using various presentation tools and technologies (e.g. R-Shiny, i2b2, Kubernetes, SEEK). This requires curating the data and metadata before upload, obtaining permissions from the data owners, taking the applicable data protection criteria into account, and checking semantic annotations. In addition, the supplied model algorithms are prepared in a quality-assured manner and, where applicable, made available online interactively. The LHA is aimed in particular at clinicians, epidemiologists, molecular geneticists, human geneticists, pathologists, biostatisticians and modellers, but is publicly accessible at www.healthatlas.de; for legal reasons, access to certain applications and data sets requires additional authorization. The project is funded by the BMBF programme i:DSem (Integrative Data Semantics for Systems Medicine, grant number 031L0026).

Authors: F. A. Meineke, Sebastian Stäubert, Matthias Löbe, C. Beger, René Hänsel, A. Uciteli, H. Binder, T. Kirsten, M. Scholz, H. Herre, C. Engel, Markus Löffler

Date Published: 19th Sep 2019

Publication Type: Misc

Abstract

Phenotyping means the determination of clinically relevant phenotypes, e.g. by classification or calculation based on EHR data. Within the German Medical Informatics Initiative, the SMITH consortium is working on the implementation of a phenotyping pipeline: to extract, structure and normalize information from the EHR data of the hospital information systems of the participating sites; to automatically apply complex algorithms and models; and to enrich the data within the research data warehouses of the distributed data integration centers with the computed results. Here we present the overall picture and the essential building blocks and workflows of this concept.
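The three pipeline stages named in the abstract (extract and normalize, apply algorithms, enrich the warehouse) can be sketched as plain functions. All names and data shapes here are illustrative assumptions, not the SMITH implementation.

```python
# Hedged sketch of the three phenotyping-pipeline stages from the abstract.
# Record shapes, field names and the warehouse dict are invented for illustration.

def extract_and_normalize(raw_records):
    """Stage 1: structure and normalize raw EHR values (e.g. strings to floats)."""
    return [{"patient_id": r["id"], "bmi": float(r["bmi"])} for r in raw_records]

def apply_algorithms(records):
    """Stage 2: run a phenotype algorithm over the normalized data."""
    for r in records:
        r["obese"] = r["bmi"] >= 30.0
    return records

def enrich_warehouse(warehouse, records):
    """Stage 3: write the computed phenotypes back, keyed by patient."""
    for r in records:
        warehouse[r["patient_id"]] = r
    return warehouse

warehouse = enrich_warehouse({}, apply_algorithms(
    extract_and_normalize([{"id": "p1", "bmi": "31.2"}])))
print(warehouse["p1"]["obese"])  # True
```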

Authors: F. A. Meineke, S. Stäubert, M. Löbe, A. Uciteli, M. Löffler

Date Published: 3rd Sep 2019

Publication Type: Journal article

Abstract

Phenotyping means the determination of clinically relevant phenotypes, e.g. by classification or calculation based on EHR data. Within the German Medical Informatics Initiative, the SMITH consortium is working on the implementation of a phenotyping pipeline: to extract, structure and normalize information from the EHR data of the hospital information systems of the participating sites; to automatically apply complex algorithms and models; and to enrich the data within the research data warehouses of the distributed data integration centers with the computed results. Here we present the overall picture and the essential building blocks and workflows of this concept.

Authors: Frank A Meineke, Sebastian Stäubert, Matthias Löbe, Alexandr Uciteli, Markus Löffler

Date Published: 1st Sep 2019

Publication Type: Journal article

Abstract

The realisation of a complex web portal, including the modelling of content, is a challenging process. The contents describe different interconnected entities that form a complex structure. The entities and their relations have to be systematically analysed, and the content has to be specified and integrated into a content management system (CMS). Ontologies provide a suitable solution for modelling and specifying complex entities and their relations. However, the functionality for automated import of ontologies is not available in current content management systems. In order to describe the content of a web portal, we developed an ontology. Based on this ontology, we implemented a pipeline that allows the specification of the portal’s content and its import into the CMS Drupal. Our method is generic. It enables the development of web portals with the focus on a suitable representation of structured knowledge (entities, their properties and relations). Furthermore, it makes it possible to represent existing ontologies in such a way that their content can be understood by users without knowledge of ontologies and their semantics. Our approach has successfully been applied in building the LHA (Leipzig Health Atlas) portal, which provides access to metadata, data, publications and methods from various research projects at the University of Leipzig.

Authors: A. Uciteli, C. Beger, C. Rillich, F. A. Meineke, M. Löffler, H. Herre

Date Published: 2018

Publication Type: InBook

Abstract

The number of ontologies usable across widespread domains is growing steadily. BioPortal alone hosts over 500 published ontologies with nearly 8 million classes. In contrast, the vast informative content of these ontologies is directly intelligible only to experts. To overcome this deficiency, ontologies could be represented as web portals that require no knowledge of ontologies and their semantics, yet still carry as much information as possible to the end user. Furthermore, the conception of a complex web portal is a sophisticated process: many entities must be analyzed and linked to existing terminologies. Ontologies are a suitable solution for gathering and storing this complex data and its dependencies. Hence, automated imports of ontologies into web portals could support both of the mentioned scenarios. The Content Management System (CMS) Drupal 8 is one of many solutions for developing web presentations with little programming knowledge required, and it is suitable for representing ontological entities. We developed the Drupal Upper Ontology (DUO), which models concepts of Drupal's architecture, such as nodes, vocabularies and links. DUO can be imported into ontologies to map their entities to Drupal's concepts. Because of Drupal's lack of import capabilities, we implemented the Simple Ontology Loader in Drupal (SOLID), a Drupal 8 module which allows Drupal administrators to import ontologies based on DUO. Our module generates content in Drupal from existing ontologies and makes it accessible to the general public. Moreover, Drupal offers a tagging system which may be augmented with multiple standardized and established terminologies by importing them with SOLID. Our Drupal module shows that ontologies can be used to model the content of a CMS and, vice versa, that a CMS is suitable for representing ontologies in a user-friendly way.
Ontological entities are presented to the user as discrete pages with all appropriate properties, links and tags.
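The core idea behind DUO/SOLID, mapping ontological entities onto CMS content items ("nodes") with properties, links and tags, can be sketched in a few lines. The dict-based node shape below is an assumption for illustration only, not Drupal's actual entity API or the SOLID module's code.

```python
# Illustrative sketch: mapping ontology classes to page-like CMS "nodes".
# The ontology content and the node dict shape are invented for illustration.

ontology = {
    "Phenotype": {"label": "Phenotype", "tags": ["medicine"], "subclass_of": "Entity"},
    "Score":     {"label": "Score", "tags": ["medicine"], "subclass_of": "Phenotype"},
}

def to_nodes(classes):
    """Turn each ontology class into a node linked to its parent class."""
    return [
        {"title": c["label"], "tags": c["tags"], "parent": c["subclass_of"]}
        for c in classes.values()
    ]

for node in to_nodes(ontology):
    print(node["title"], "->", node["parent"])
```

In the real module, the subclass hierarchy and annotation properties drive the generated links and tags, so the portal's navigation mirrors the ontology's structure.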

Authors: C. Beger, A. Uciteli, H. Herre

Date Published: 9th Sep 2017

Publication Type: Journal article

Abstract

LIFE is an epidemiological study examining thousands of Leipzig inhabitants with a wide spectrum of interviews, questionnaires and medical investigations. The heterogeneous data are centrally integrated into a research database and analyzed by specific analysis projects. To semantically describe the large set of data, we have developed an ontological framework. Applicants of analysis projects and other interested people can use the LIFE Investigation Ontology (LIO), the central part of the framework, to gain insight into which kinds of data are collected in LIFE. Moreover, we use the framework to generate queries over the collected scientific data in order to retrieve the data requested by each analysis project. A query generator transforms the ontological specifications based on LIO into database queries, which are implemented as project-specific database views. Since the requested data are typically complex, manual query specification would be very time-consuming and error-prone, and is therefore unsuitable in this large project. We present the approach, give an overview of LIO, and show query formulation and transformation. Our approach has been running in production mode for two years in LIFE.
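The query-generation step described above, turning a declarative item specification into a project-specific database view, can be sketched roughly as follows. The spec keys and the table and column names are invented for illustration; LIO and the real generator are far richer.

```python
# Hedged sketch of generating a project-specific database view from a
# declarative item specification. Table/column names are invented examples.

def make_view_sql(view_name, items):
    """Build a CREATE VIEW statement joining the requested assessment items."""
    cols = ", ".join(f"{i['table']}.{i['column']}" for i in items)
    joins = " ".join(
        f"JOIN {i['table']} ON {i['table']}.subject_id = s.id" for i in items
    )
    return f"CREATE VIEW {view_name} AS SELECT s.id, {cols} FROM subjects s {joins};"

spec = [{"table": "anthropometry", "column": "height_cm"},
        {"table": "interview", "column": "smoking_status"}]
print(make_view_sql("project_42_view", spec))
```

Generating such views from ontological specifications rather than writing them by hand is exactly what makes the approach scale across many analysis projects.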

Authors: Toralf Kirsten, A. Uciteli

Date Published: 2015

Publication Type: Not specified

Copyright © 2008 - 2021 The University of Manchester and HITS gGmbH
Institute for Medical Informatics, Statistics and Epidemiology, University of Leipzig
