JANUS clinical trial data repository

Janus is a clinical trial data repository (or data warehouse) standard sanctioned by the U.S. Food and Drug Administration (FDA). It is named for the Roman god Janus, who had two faces, one looking into the past and one looking into the future. The analogy is that the Janus data repository would enable the FDA and the pharmaceutical industry both to look back retrospectively at past clinical trials and to analyze one or more current clinical trials, and even to inform future clinical trials through better trial design.

The Janus data model is a relational database model based on the SDTM standard, adopting many of its basic concepts such as the loading and storage of findings, events, interventions and inclusion data. However, Janus itself is a data warehouse independent of any single clinical trial submission standard. For example, Janus can also store preclinical (non-human) submission information supplied in the SEND nonclinical standard.
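To make the relational idea concrete, here is a minimal sketch in Python of how a Janus-style warehouse might organize SDTM-like observation classes (findings, events, interventions) around studies and subjects. The table and column names are assumptions invented for this illustration and do not reproduce the actual Janus or SDTM physical model.

import sqlite3

# Hypothetical, simplified schema loosely inspired by SDTM observation
# classes; not the actual Janus or SDTM physical data model.
DDL = """
CREATE TABLE study (
    study_id TEXT PRIMARY KEY,
    title    TEXT NOT NULL
);
CREATE TABLE subject (
    usubjid  TEXT PRIMARY KEY,                         -- unique subject identifier
    study_id TEXT NOT NULL REFERENCES study(study_id)
);
CREATE TABLE finding (
    usubjid   TEXT NOT NULL REFERENCES subject(usubjid),
    domain    TEXT NOT NULL,   -- e.g. 'LB' (labs), 'VS' (vital signs)
    test_code TEXT NOT NULL,
    result    TEXT,
    visit     TEXT
);
CREATE TABLE event (
    usubjid    TEXT NOT NULL REFERENCES subject(usubjid),
    domain     TEXT NOT NULL,  -- e.g. 'AE' (adverse events)
    term       TEXT NOT NULL,
    start_date TEXT
);
CREATE TABLE intervention (
    usubjid   TEXT NOT NULL REFERENCES subject(usubjid),
    domain    TEXT NOT NULL,   -- e.g. 'EX' (exposure)
    treatment TEXT NOT NULL,
    dose      REAL,
    dose_unit TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)

# A cross-study query of the kind a warehouse such as Janus is meant to enable:
# count subjects with adverse-event records, per study.
rows = conn.execute(
    "SELECT s.study_id, COUNT(DISTINCT e.usubjid) "
    "FROM event e "
    "JOIN subject sub ON sub.usubjid = e.usubjid "
    "JOIN study s ON s.study_id = sub.study_id "
    "WHERE e.domain = 'AE' GROUP BY s.study_id"
).fetchall()
print(rows)   # empty here, since no data has been loaded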

The goals of Janus are as follows:

Related Research Articles

Clinical trial – Phase of clinical research in medicine

Clinical trials are experiments or observations done in clinical research. Such prospective biomedical or behavioral research studies on human participants are designed to answer specific questions about biomedical or behavioral interventions, including new treatments and known interventions that warrant further study and comparison. Clinical trials generate data on dosage, safety and efficacy. They are conducted only after they have received health authority/ethics committee approval in the country where approval of the therapy is sought. These authorities are responsible for vetting the risk/benefit ratio of the trial—their approval does not mean the therapy is 'safe' or effective, only that the trial may be conducted.

Health informatics – Applications of information processing concepts and machinery in medicine

Health informatics is the field of science and engineering that aims at developing methods and technologies for the acquisition, processing, and study of patient data, which can come from different sources and modalities such as electronic health records, diagnostic test results, and medical scans. The health domain provides an extremely wide variety of problems that can be tackled using computational techniques.

National Cancer Institute – US research institute, part of National Institutes of Health

The National Cancer Institute (NCI) coordinates the United States National Cancer Program and is part of the National Institutes of Health (NIH), which is one of eleven agencies that are part of the U.S. Department of Health and Human Services. The NCI conducts and supports research, training, health information dissemination, and other activities related to the causes, prevention, diagnosis, and treatment of cancer; the supportive care of cancer patients and their families; and cancer survivorship.

PubMed Central (PMC) is a free digital repository that archives open access full-text scholarly articles that have been published in biomedical and life sciences journals. As one of the major research databases developed by the National Center for Biotechnology Information (NCBI), PubMed Central is more than a document repository. Submissions to PMC are indexed and formatted for enhanced metadata, medical ontology, and unique identifiers which enrich the XML structured data for each article. Content within PMC can be linked to other NCBI databases and accessed via Entrez search and retrieval systems, further enhancing the public's ability to discover, read and build upon its biomedical knowledge.
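As a small, hedged illustration of the Entrez access path mentioned above, the Python snippet below queries the public NCBI E-utilities esearch endpoint for PMC records; the search term is an arbitrary example, and the response handling assumes NCBI's documented JSON format.

import json
import urllib.parse
import urllib.request

# Query NCBI E-utilities (Entrez) for PMC record IDs matching a term.
# Endpoint and parameters follow NCBI's documented esearch interface;
# the search term here is an arbitrary example.
params = urllib.parse.urlencode({
    "db": "pmc",
    "term": "clinical trial data standards",
    "retmode": "json",
    "retmax": 5,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)

# The JSON response contains a list of matching PMC record IDs.
print(result["esearchresult"]["idlist"])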

The Clinical Data Interchange Standards Consortium (CDISC) is a standards developing organization (SDO) dealing with medical research data linked with healthcare, to "enable information system interoperability to improve medical research and related areas of healthcare". The standards support medical research from protocol through analysis and reporting of results and have been shown to decrease resources needed by 60% overall and 70–90% in the start-up stages when they are implemented at the beginning of the research process.

Clinical research is a branch of healthcare science that determines the safety and effectiveness (efficacy) of medications, devices, diagnostic products and treatment regimens intended for human use. These may be used for prevention, treatment, diagnosis or for relieving symptoms of a disease. Clinical research is different from clinical practice. In clinical practice established treatments are used, while in clinical research evidence is collected to establish a treatment.

Food and Drug Administration Amendments Act of 2007 – US law

President of the United States George W. Bush signed the Food and Drug Administration Amendments Act of 2007 (FDAAA) on September 27, 2007. This law reviewed, expanded, and reaffirmed several existing pieces of legislation regulating the FDA. These changes allow the FDA to perform more comprehensive reviews of potential new drugs and devices. It was sponsored by Reps. Joe Barton and Frank Pallone and passed unanimously by the Senate.

SDTM defines a standard structure for human clinical trial (study) data tabulations and for nonclinical study data tabulations that are to be submitted as part of a product application to a regulatory authority such as the United States Food and Drug Administration (FDA). SDTM is defined by the Submission Data Standards team of the Clinical Data Interchange Standards Consortium (CDISC).
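A minimal sketch of what such a tabulation looks like, assuming a small subset of the standard Demographics (DM) domain variables; real submissions carry many more variables and are typically delivered as SAS transport files rather than the CSV used here for readability.

import csv
import io

# Minimal sketch of an SDTM-style data tabulation: one row per subject in the
# Demographics (DM) domain, using a small subset of standard DM variables.
dm_rows = [
    {"STUDYID": "STUDY01", "DOMAIN": "DM", "USUBJID": "STUDY01-001",
     "AGE": 54, "SEX": "F", "ARM": "Drug A 10 mg", "COUNTRY": "USA"},
    {"STUDYID": "STUDY01", "DOMAIN": "DM", "USUBJID": "STUDY01-002",
     "AGE": 61, "SEX": "M", "ARM": "Placebo", "COUNTRY": "CAN"},
]

# Write the tabulation as a flat, delimited file to show the row/column
# structure a regulator receives.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(dm_rows[0]))
writer.writeheader()
writer.writerows(dm_rows)
print(buf.getvalue())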

Standard for Exchange of Nonclinical Data

The Standard for Exchange of Nonclinical Data (SEND) is an implementation of the CDISC Study Data Tabulation Model (SDTM) for nonclinical studies, which specifies a way to present nonclinical data in a consistent format. These studies relate to the animal testing conducted during drug development. Raw data from animal toxicology studies started after December 18, 2016 that support new drug submissions to the US Food and Drug Administration must be submitted to the agency using SEND.

A glossary of terms used in clinical research.

The following outline is provided as an overview of and topical guide to clinical research.

ClinicalTrials.gov is a registry of clinical trials. It is run by the United States National Library of Medicine (NLM) at the National Institutes of Health, and is the largest clinical trials database, holding registrations from over 329,000 trials from 209 countries.

The Cancer Trials Support Unit (CTSU) is a service of the National Cancer Institute (NCI) in the United States.

Barcode technology in healthcare is the use of optical machine-readable representations of data in a hospital or healthcare setting.

Phases of clinical research – Clinical trial stages using human subjects

The phases of clinical research are the stages in which scientists conduct experiments with a health intervention to obtain sufficient evidence that it is effective as a medical treatment. For drug development, the clinical phases start with testing for safety in a few human subjects, then expand to many study participants to determine if the treatment is effective. Clinical research is conducted on drug candidates, vaccine candidates, new medical devices, and new diagnostic assays.

In computing, a data definition specification (DDS) is a guideline to ensure comprehensive and consistent data definition. It represents the attributes required to quantify data definition. A comprehensive data definition specification encompasses enterprise data, the hierarchy of data management, prescribed guidance enforcement and criteria to determine compliance.
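Purely as a hypothetical illustration of the idea, the sketch below models one entry of a data definition specification as a Python dataclass; the particular attribute set (name, type, description, owner, allowed values, required flag) is an assumption for this example, not a published DDS template.

from dataclasses import dataclass, field

# Hypothetical illustration of what one entry in a data definition
# specification might capture for a single data element; the attribute
# set shown here is an assumption, not a published DDS template.
@dataclass
class DataElementDefinition:
    name: str                    # canonical element name
    data_type: str               # e.g. "integer", "date", "text"
    description: str             # business meaning of the element
    owner: str                   # steward within the data-management hierarchy
    allowed_values: list = field(default_factory=list)  # controlled terminology
    required: bool = True        # compliance criterion: may it be missing?

age_def = DataElementDefinition(
    name="AGE",
    data_type="integer",
    description="Subject age in years at informed consent",
    owner="Clinical Data Management",
)
print(age_def)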

An adaptive clinical trial is a dynamic clinical trial that evaluates a medical device or treatment by observing participant outcomes on a prescribed schedule and, uniquely, modifying parameters of the trial protocol in accord with those observations. This is in contrast to traditional randomized clinical trials (RCTs), which are static in their protocol and do not modify any parameters until the trial is completed. The adaptation process generally continues throughout the trial, as prescribed in the trial protocol. Adaptations may include modifications to dosage, sample size, the drug undergoing trial, patient selection criteria and/or the "cocktail" mix. In some cases, trials have become an ongoing process that regularly adds and drops therapies and patient groups as more information is gained. Importantly, the trial protocol is set before the trial begins and pre-specifies the adaptation schedule and processes.
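The toy sketch below illustrates one kind of pre-specified adaptation rule described above: at each interim look, any treatment arm whose observed response rate falls below a futility threshold is dropped. The arms, response rates, threshold and schedule are invented for the example and do not correspond to any real trial design.

import random

random.seed(0)

# Toy sketch of one pre-specified adaptation rule: at each interim look,
# drop any treatment arm whose observed response rate falls below a
# threshold. Arms, true rates and threshold are invented for the example.
true_response = {"drug_A": 0.55, "drug_B": 0.30, "placebo": 0.25}
active_arms = set(true_response)
DROP_BELOW = 0.35            # pre-specified futility threshold
PATIENTS_PER_LOOK = 40       # enrolled per arm between interim analyses

results = {arm: [] for arm in true_response}
for look in range(1, 4):                      # three interim analyses
    for arm in list(active_arms):
        # simulate outcomes for newly enrolled patients on this arm
        outcomes = [random.random() < true_response[arm]
                    for _ in range(PATIENTS_PER_LOOK)]
        results[arm].extend(outcomes)
        observed = sum(results[arm]) / len(results[arm])
        if arm != "placebo" and observed < DROP_BELOW:
            active_arms.discard(arm)          # adaptation: drop futile arm
            print(f"Look {look}: dropping {arm} (observed rate {observed:.2f})")
print("Arms continuing to final analysis:", sorted(active_arms))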

The High-performance Integrated Virtual Environment (HIVE) is a distributed computing environment used for healthcare IT and biological research, including analysis of next-generation sequencing (NGS) data, preclinical, clinical and post-market data, adverse events, and metagenomic data. It is currently supported and continuously developed by the US Food and Drug Administration, George Washington University, DNA-HIVE, WHISE-Global and Embleema. HIVE is fully operational within the US FDA, supporting a wide variety (more than 60) of regulatory research and regulatory review projects as well as MDEpiNet medical device post-market registries. Academic deployments of HIVE are used for research activities and publications in NGS analytics, cancer research, microbiome research and in educational programs for students at GWU. Commercial enterprises use HIVE for oncology, microbiology, vaccine manufacturing, gene editing, healthcare IT, harmonization of real-world data, preclinical research and clinical studies.

Bamlanivimab is a monoclonal antibody developed by AbCellera Biologics and Eli Lilly as a treatment for COVID-19. The medication was granted an emergency use authorization (EUA) by the US Food and Drug Administration (FDA) in November 2020, and the EUA was revoked in April 2021.

A common data model (CDM) can refer to any standardised data model which allows for data and information exchange between different applications and data sources. Common data models aim to standardise logical infrastructure so that related applications can "operate on and share the same data", and can be seen as a way to "organize data from many sources that are in different formats into a standard structure".
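A toy illustration of that idea, assuming two invented source record shapes: each is mapped into one shared structure so that downstream tools can treat the records alike. All field names here are assumptions made up for the example.

# Toy illustration of the common-data-model idea: records from two sources
# arrive in different shapes and are mapped into one shared structure.
# The field names on both sides are invented for this example.
source_a = {"pt_id": "A-001", "sex_code": "F", "birth_year": 1971}
source_b = {"subject": "B-17", "gender": "Female", "yob": "1964"}

def from_source_a(rec):
    return {"person_id": rec["pt_id"],
            "sex": rec["sex_code"],
            "year_of_birth": int(rec["birth_year"])}

def from_source_b(rec):
    sex_map = {"Female": "F", "Male": "M"}
    return {"person_id": rec["subject"],
            "sex": sex_map.get(rec["gender"], "U"),
            "year_of_birth": int(rec["yob"])}

# Once both sources share the same model, downstream tools can treat them alike.
common = [from_source_a(source_a), from_source_b(source_b)]
print(common)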