POSITION PAPER: Credibility of In Silico Trial Technologies: A Theoretical Framing

Reading time: 6 minutes

📝 Original Info

  • Title: POSITION PAPER: Credibility of In Silico Trial Technologies: A Theoretical Framing
  • ArXiv ID: 1909.04660
  • Date: 2019-09-10
  • Authors: Researchers from original ArXiv paper

📝 Abstract

Different research communities have developed various approaches to assess the credibility of predictive models. Each approach usually works well for a specific type of model, and under epistemic conditions that are normally satisfied within that research domain. Some regulatory agencies have recently started to consider evidence of the safety and efficacy of new medical products obtained through computer modelling and simulation (referred to as In Silico Trials); this has drawn the attention of the computational medicine research community to the regulatory science aspects of this emerging discipline. But this poses a foundational problem: in biomedical research the use of computer modelling is relatively recent, and there is no widely accepted epistemic framing for the problem of model credibility. Moreover, because of the inherent complexity of living organisms, biomedical modellers tend to use a variety of modelling methods, sometimes mixing them in the solution of a single problem. In such a context, merely adopting credibility approaches developed within other research communities might not be appropriate. In this position paper we propose a theoretical framing for the problem of assessing the credibility of predictive models for In Silico Trials, one that accounts for the epistemic specificity of this research field and is general enough to be used for different types of models.


📄 Full Content

Before a new medical product can be sold in a country, evidence must be provided to the regulatory agency of that country supporting the claim that the new product, if used as intended and under properly controlled conditions, is safe (it does not worsen the health of the recipient) and effective (it does improve the recipient's health). Historically, evidence of safety and efficacy has been provided through controlled experiments. Experiments involving human volunteers are referred to as clinical trials; by contrast, those with no humans involved are called pre-clinical trials. Some pre-clinical trials involve animals, whereas others are based on cell or tissue cultures, tissues and organs from cadavers, or machinery (bench tests) designed to reproduce the conditions under which the medical product is expected to operate; these are referred to as in vitro tests. So, until recently, safety and efficacy were estimated only with controlled experiments in vitro, in vivo in animals, or in vivo in humans. As described in detail in [1], [2], both the USA and European regulatory agencies have recently opened, in principle, to the possibility that some of this regulatory evidence be provided using computer modelling and simulation, in what is normally referred to as "in silico trials". While for in vitro and in vivo methods there is extensive knowledge and a well-established praxis on how to qualify them (i.e. how to assess their credibility [3] as predictors of the safety and/or the efficacy of a new medical product), it is still debated how to assess the credibility of in silico methods in a qualification process [4]-[10]. In most cases, methods to assess credibility are merely copied from other research domains, and sometimes even applied to types of models different from those for which they were originally developed.
While the first technical standards specifically targeting biomedical applications are appearing [11], there is a clear need for a general theoretical framing of the problem that can support these efforts.

The aim of this position paper is to propose such a theoretical framing for the problem of assessing the credibility of predictive models for In Silico Trials (ISTs), one that accounts for the epistemic specificity of this research field and is general enough to be used for different types of models.

As a first step it is useful to categorise the various models used in biomedicine. For the purpose of this position paper, we will categorise them as a function of their knowledge content. In general, a predictive model can be developed by analysing how the quantities to be predicted vary as a function of a set of inputs over a large set of experimental observations (data-driven or phenomenological models), or by leveraging some pre-existing knowledge about the physics, chemistry, physiology and biology of the phenomenon being modelled (mechanistic models).

While purely phenomenological models exist, no model is purely mechanistic. Also, there are some modelling approaches that combine mechanistic knowledge and phenomenological evidence (sometimes referred to as grey-box models). So, there is a continuum from phenomenological to mechanistic modelling, which is well represented by the degree of mechanistic knowledge used in building each model.
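To make the distinction concrete, here is a minimal sketch (with invented drug-clearance numbers, not data from the paper) that fits the same observations with a phenomenological model and with a mechanistic one:

```python
import numpy as np

# Hypothetical plasma-concentration measurements over time (illustrative only).
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
c = np.array([10.0, 7.4, 5.5, 3.0, 0.9])

# Phenomenological (data-driven): a quadratic with no physiological meaning,
# chosen only because it tracks the observations.
poly = np.polyfit(t, c, deg=2)
c_poly = np.polyval(poly, t)

# Mechanistic: first-order elimination kinetics c(t) = c0 * exp(k*t), k < 0,
# derived from the physiology of clearance; only c0 and k are fitted,
# here via a linear fit in log-space.
k, log_c0 = np.polyfit(t, np.log(c), deg=1)
c_mech = np.exp(log_c0) * np.exp(k * t)

# Both curves fit the data, but only the mechanistic parameters
# (elimination rate k, initial concentration exp(log_c0)) carry meaning.
print(np.round(c_poly, 2))
print(np.round(c_mech, 2))
```

A grey-box model would sit between these two extremes, for instance fixing the mechanistic form of the equation but learning some of its terms purely from data.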

A second important taxonomy is whether the phenomenon is modelled as a continuous or as a series of discrete events. We are not referring here to the need to discretise space and time for obtaining a numerical approximation, but to models that are built assuming the phenomenon being modelled can be described by a finite set of discrete states, whereas a continuous model describes all quantities as continuous in space-time.
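The contrast can be sketched as follows (an illustrative disease-progression example, not taken from the paper): a discrete model enumerates a finite set of states and transitions between them, whereas a continuous model would describe the same quantities as smooth functions of time.

```python
import numpy as np

# Discrete-state model: three health states and a one-step transition matrix
# (probabilities are invented for illustration).
states = ["healthy", "ill", "recovered"]
P = np.array([[0.90, 0.10, 0.00],   # healthy -> healthy/ill
              [0.00, 0.70, 0.30],   # ill -> ill/recovered
              [0.00, 0.00, 1.00]])  # recovered is absorbing
p = np.array([1.0, 0.0, 0.0])       # everyone starts healthy
for _ in range(30):                  # evolve over 30 discrete time steps
    p = p @ P
print(dict(zip(states, np.round(p, 3))))

# A continuous counterpart would instead describe, e.g., the fraction still
# healthy as a smooth function of time, h(t) = exp(-r*t), for some rate r.
```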

Before we go any further, it is important to stress the difference between verification and validation: the two terms are often confused, even though they address different questions.

Simply speaking, verification tries to answer the question “Are we building the system right?”, while validation addresses the question “Are we building the right system?”. The ASME V&V 40 standard defines verification as “the process of determining that a computational model accurately represents the underlying mathematical model and its solution from the perspective of the intended uses of modeling and simulation”. In other words, we need to test that the implemented code and the solver approximations of the computational model lead to numerical results that are “sufficiently near” (i.e., accounting for numerical approximation and discretisation errors) to the exact analytical solutions of the mathematical formulation of the model at hand.
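A toy verification exercise, under assumed values chosen for illustration: solve dc/dt = -k·c with a forward Euler scheme and check that the numerical result converges to the known analytical solution c(t) = c0·exp(-k·t) as the time step is refined.

```python
import numpy as np

def euler(k, c0, t_end, n_steps):
    """Forward Euler integration of dc/dt = -k*c from 0 to t_end."""
    dt = t_end / n_steps
    c = c0
    for _ in range(n_steps):
        c += dt * (-k * c)
    return c

k, c0, t_end = 0.3, 10.0, 8.0
exact = c0 * np.exp(-k * t_end)   # analytical solution at t_end

for n in (10, 100, 1000):
    err = abs(euler(k, c0, t_end, n) - exact)
    print(f"{n:5d} steps: |error| = {err:.5f}")

# The error shrinks roughly linearly with the step size, as expected for a
# first-order scheme: evidence that the code solves the intended equations.
```

Note that this says nothing about whether dc/dt = -k·c is the right model of reality; that is the job of validation.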

The goal of validation is instead to assess how well the computational model represents the reality it is supposed to represent. That is, with validation we check that the model faithfully reproduces the biological, physical, and mechanical features of the real phenomenon. To this end, results coming from the model are compared against experimental observations of the real phenomenon.
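A minimal sketch of such a comparison, with invented numbers (not from the paper), using a normalised root-mean-square error as a simple agreement metric:

```python
import numpy as np

# Independent experimental measurements vs in silico predictions
# (values are illustrative only).
measured  = np.array([9.8, 7.1, 5.6, 2.8, 1.0])   # e.g. bench-test data
predicted = np.array([10.0, 7.4, 5.5, 3.0, 0.9])  # model output

# Normalised root-mean-square error: RMSE divided by the measured range.
nrmse = np.sqrt(np.mean((predicted - measured) ** 2)) / np.ptp(measured)
print(f"NRMSE = {nrmse:.3f}")
```

Whether a given error level is acceptable depends on the context of use: the same model may be credible enough for one regulatory claim and not for another.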

…(Full text truncated)…


Reference

This content is AI-processed based on ArXiv data.
