Understanding Clinical Trials And Drug Development As A Research Scientist

Clinical trials are studies designed to test novel methods of diagnosing and treating health conditions by observing the outcomes of human subjects under experimental conditions.

These are interventional studies performed under stringent, closely monitored clinical settings.

In contrast, non-interventional studies are performed outside the clinical trial setting, giving researchers an opportunity to monitor the effects of drugs in real-life situations.

Non-interventional trials are also termed observational studies; examples include post-marketing surveillance (PMS) studies and post-authorization safety studies (PASS).

Clinical trials are preferred for testing newly developed drugs since interventional studies are conducted in a highly monitored environment, as opposed to non-interventional studies, which are performed post-marketing. Moreover, clinical trials use randomization and stratification, grouping methods that reduce selection bias.

Efficient clinical trials seek to attain internal validity by enrolling a relatively homogeneous population that adheres to predefined characteristics. However, despite being drawn from the same population with similar demographics, the enrolled cohort may still differ from the general population it was drawn from.

This “volunteer bias” can arise from variation in participant attributes (e.g., health status, geographical factors) or from investigators excluding patients with a poor prognosis. Ruling out such patients may limit the external validity, or generalizability, of randomized clinical trials to the broader population, including patients with comorbidities not represented in the selected homogeneous cohort. This explains why treatments that show efficacy in trials may not prove effective in the real world.

Thus, controlling all known confounding variables, such as comorbidities, in the selected homogeneous group of participants is important for properly assessing efficacy in a clinical trial.

An adequate sample size for the chosen population cohort is vital: it provides the statistical power required to detect a potential difference between the cohorts being studied.

Generally, 80% power to detect a statistically significant difference between two interventions is considered adequate for identifying a clinically meaningful difference. Among the statistical analyses used to evaluate such differences, prevalent methods include Poisson regression, Cox regression, and linear regression.
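To make the power idea concrete, the conventional sample-size calculation for 80% power can be sketched as follows. This is a minimal sketch using the normal approximation for comparing two means; the effect size `delta` and standard deviation `sigma` are hypothetical inputs, not values from the article.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Per-arm sample size to detect a mean difference `delta` between
    two groups with common standard deviation `sigma`, using a
    two-sided test at significance level `alpha` (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ≈ 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ≈ 0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)

# e.g., detecting a 5-point difference with SD 10 needs 63 patients per arm
print(sample_size_per_arm(delta=5, sigma=10))
```

Note how the required size grows quadratically as the detectable difference shrinks, which is why underpowered trials are a common pitfall.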

Clinical Trials and Drug Development

The Food and Drug Administration (FDA) has issued regulations and policies focused on the safety and efficacy of drug development. First, preclinical research is conducted. If the preclinical studies are promising, the drug sponsor can submit an Investigational New Drug (IND) application. After review and approval, the drug enters phase I–III clinical trials. If the drug demonstrates safety and efficacy in these phases, the sponsor can submit a New Drug Application (NDA) to the FDA for approval. The FDA then investigates and determines whether the therapeutic is eligible to be marketed. If approved, the drug moves on to further post-marketing studies in phase IV. The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) plays a vital role in facilitating acceptance of foreign drug data and harmonizing drug approval processes across the USA, Europe, and Japan.

Figure 1: Clinical trials are essential to test the efficacy of a drug or a therapy.

Clinical Trial Phases

Phases I to III are the main clinical trial phases, in which drug efficacy is tested and compared with existing standard treatments. Preclinical research and the study of how the drug acts on the disease come under the phase 0 trial, while the phase IV trial studies the safety and side effects of the drug once it is licensed.

Phase 0

A phase 0 study involves a small number of individuals, usually fewer than 15 subjects. The study is performed to check whether the drug behaves as the investigators expected from the preclinical research. In this phase, a small dose is administered to the patients, not large enough to treat the disease or to cause side effects. If the drug's effect is unexpected, additional preclinical research is performed before further development.

Phase I

A phase I trial is conducted on a group of 20 to 80 individuals who have advanced disease, such as cancer, and have already undergone other medical therapies. The main aim of the trial is to assess the safety of the drug and, in oncology, how well it shrinks tumors. The two main aspects of the phase I trial are:

—> Dose-escalation study: 

A small dose of the drug is first given to a selected group of individuals within the cohort under study. This group is then monitored for adverse reactions. If no untoward incidents are observed, a higher dose is given to the next group of individuals in the next step. The dosage is gradually increased with every group. This helps the investigators monitor side effects and settle on the dose to prescribe. Investigators also look for the optimal way to administer the drug, for example topically or intravenously.
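A dose-escalation rule of this kind can be sketched in a few lines. The cohort size, stopping rule, and toxicity probabilities below are hypothetical, loosely modeled on rule-based designs such as the 3+3 scheme, not on any specific trial:

```python
import random

def escalate(tox_probs, cohort_size=3, max_toxicities=1, seed=0):
    """Simulate rule-based dose escalation: treat successive cohorts at
    increasing dose levels; stop once a cohort exceeds the allowed number
    of toxicities. Returns the highest dose level cleared (or None)."""
    rng = random.Random(seed)
    cleared = None
    for level, p_tox in enumerate(tox_probs):
        toxicities = sum(rng.random() < p_tox for _ in range(cohort_size))
        if toxicities > max_toxicities:   # unacceptable toxicity: stop
            break
        cleared = level                   # this dose level was tolerated
    return cleared

# With toxicity certain at the third dose level, escalation stops after level 1
print(escalate([0.0, 0.0, 1.0]))
```

The returned level corresponds to the highest tolerated dose in this toy rule; real designs add de-escalation and expansion cohorts.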

—> Blood or Sample test:

As the dose-escalation study suggests, the phase I trial requires numerous blood tests to understand the drug's effect on the body and how the body clears the drug. All observations and findings are diligently recorded with timestamps. Ultimately, phase I helps determine the dosage and side effects of the drug, parameters vital to measuring and assessing the significance of the new treatment. These are important to establish before proceeding to the next trial phases, since those demand much longer trial periods and more resources.

About 70% of the drugs tested in phase I clinical trials move to the next phase.

Phase II

A phase II trial focuses on the efficacy and side effects of the drug. The number of participants is several hundred, and, ideally, all participants have the same type of disease. The participants are administered the dosage that was found to be safe in phase I.

A phase II trial provides preliminary evidence of drug efficacy by comparing the drug under study with historical trials and cases that established the efficacy of standard therapies. However, since the cohort is not large enough to demonstrate the overall safety and efficacy of the medication, a subsequent phase III trial becomes necessary. For this reason, phase II trials are also termed “therapeutic exploratory” trials.

Overall, phase II trials aim to determine:

—> whether the efficacy of the drug or treatment under study is significant enough to justify testing in phase III trials with a larger cohort.

—> the appropriate dosage amount for the disease.

—> the type of disease the drug works for.

At the end of the phase II trial, a meeting between the investigators, sponsors, and the FDA is generally organized to ascertain, based on the results obtained, whether progressing to a phase III trial is viable.

Approximately 33% of the drugs move to the next phase.

Phase III

Phase III focuses on confirming the efficacy and monitoring the side effects of the drug. It is also called the “therapeutic confirmatory” trial. The number of participants is larger than in phase II; generally, between 300 and 3,000 subjects are enrolled. The trial is conducted in this comparatively larger and more diverse cohort to confirm efficacy and to identify less common adverse reactions.

The “comparative efficacy” trial is the most common type of phase III trial, in which the intervention of interest is compared with a placebo or with standard therapy (placebo-controlled trials). A placebo is an inert substance with no therapeutic value which, when administered to patients, can exhibit a placebo effect: a positive effect on an individual’s health triggered by the person’s belief in the benefit of the drug rather than by the drug's properties. This design is useful for understanding the efficacy of the new treatment and hence for making informed decisions.

Another type of phase III trial is the “equivalency” trial, or positive-control study. This study is designed to determine whether the new/experimental drug is similar to the comparator/standard drug within a margin prespecified by the investigators. An important note: a placebo is never included in this study design.

The margin is typically based on clinical experience and external evidence, and regulatory guidance is often needed to justify the chosen acceptance margin. Closely related are non-inferiority trials, which aim to show that the new drug is no worse than the standard drug or existing treatment. The main goal of such a trial is to exclude the possibility that the experimental intervention is less effective than the existing standard treatment by more than some prespecified margin.
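The non-inferiority logic can be made concrete with a confidence-interval check. This is a sketch with hypothetical inputs: `diff` is the estimated effect of new minus standard, `se` its standard error, and `margin` the prespecified non-inferiority margin.

```python
from statistics import NormalDist

def non_inferior(diff, se, margin, alpha=0.025):
    """Declare non-inferiority if the lower bound of the one-sided
    confidence interval for (new - standard) lies above -margin."""
    z = NormalDist().inv_cdf(1 - alpha)  # ≈ 1.96 for alpha = 0.025
    lower_bound = diff - z * se
    return lower_bound > -margin

print(non_inferior(diff=0.0, se=1.0, margin=2.0))   # True: CI excludes -2
print(non_inferior(diff=-1.0, se=1.0, margin=2.0))  # False: could be worse by >2
```

The decision hinges entirely on the margin, which is why its justification draws so much regulatory scrutiny.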

Having covered the two main types of phase III trials, let’s turn to what makes the phase III design the most rigorous of the phases: the balance in treatment allocation for comparing treatment efficacy. The three main methods used to balance treatment allocation and guard against selection bias are:

a) Randomization:

Randomization is the process of assigning patients by chance to groups that receive different treatments or drugs for a common disease. The purpose of this method is to allow a fair assessment of which treatment is more effective, with fewer side effects, while preventing the selection bias that results from human choices. The simplest trial design using this method consists of an investigational group and a control group. The investigational group receives the new drug under trial, and the control group receives the standard drug already available for the disease. In the end, the investigators compare the groups to see which treatment is more effective and has fewer side effects. Depending on the number of patients under study, there are different types of randomization techniques.

Simply put, in the randomization method, patient information is fed into an electronic system, random numbers are generated for the participants using statistical methods, and the participants are grouped based on those random numbers.
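One common technique, permuted-block randomization, can be sketched as follows; the arm names and block size are illustrative, not from the article:

```python
import random

def block_randomize(n_patients, arms=("new_drug", "standard"),
                    block_size=4, seed=42):
    """Permuted-block randomization: within each block the arms appear
    equally often, so group sizes stay balanced as patients accrue."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_patients:
        block = list(arms) * (block_size // len(arms))  # e.g. 2 of each arm
        rng.shuffle(block)                              # random order within block
        assignments.extend(block)
    return assignments[:n_patients]

print(block_randomize(8))  # 4 patients per arm, in random order
```

Blocking prevents long runs of the same arm, which simple coin-flip randomization can produce in small trials.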

b) Stratification:

Stratification (from “strata,” meaning layers or blocks) in clinical trials refers to grouping or partitioning subjects based on factors other than the treatment or drug given to the subjects. The purpose of the method is to ensure equal allocation of subgroups of participants to each experimental condition. Note that these subgroups are drawn from the groups created during randomization. Stratification is used to control confounding variables other than those being studied by the investigators; these variables often include demographic factors such as age or gender.

For example, among a group of breast cancer patients who have volunteered for a trial, there may be patients of different ages or tumor stages. In this case, stratification means assigning a roughly equal number of patients with similar age or tumor characteristics to each type of treatment. This may help investigators determine specific features of a patient or their cancer associated with a higher or lower chance of benefiting from the treatment.

c) Blinding:

In this method, both patients and doctors/investigators are not aware of the drug the patients are getting for the disease. The given treatments are designed to look the same and are given in the same schedule to the patients; in a way, this method may require placebo treatment. This is done to make sure that the measures taken to control the adverse reactions of the treatment are not biased by the investigators’ or the patients’ expectations. This way, the blinding method helps to assure that the conclusions made about the new drug are as precise as possible.  

As we see, there is quite a combination of strategies to design phase III trial, which calls for a discipline with guidelines to establish the quality of trial reporting and assist with evaluating the conduct and validity of trials and their results. Consequently, the Consolidated Standards of Reporting Trials (CONSORT) was established to keep a check on the biases and missing data that can reduce the study power during phase III trials.

Approximately 25-30% of drugs tested in phase III are moved to the next phase.

Phase IV

The new drug gets FDA approval once it has cleared the intense testing in phase III. However, ideally, the clinical trials don’t stop here. In phase IV, “post-marketing” studies are conducted on several thousand volunteers to:

a) identify the side effects of the approved drug.

b) observe the effectiveness of the drug in the population and disease similar to the one studied during the trials; and, in a population different from the original study.

c) determine the long-term risks and benefits.

The importance of Phase IV trials can be judged from the fact that nearly 4% of the approved drugs are withdrawn from the market for safety reasons and 20% acquire new black box warnings post-marketing.

Concluding Remarks

In conclusion, the development of new drugs and therapies requires a thorough understanding of the key concepts involved in performing clinical trials. Moreover, having an ethical understanding of the regulations behind trial designs may help key sponsors and stakeholders to aptly respond to research requirements and necessities.

Furthermore, it is pertinent to assure well-executed clinical trials since they can contribute significantly to the effectiveness and efficiency of the health care system of a country.

To learn more about gene prediction and how NGS can assist you, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Expert Sequencing wait list.

Join Expert Cytometry's Mastery Class
Deepak Kumar, PhD
Deepak Kumar, PhD Genomics Software Application Engineer

Deepak Kumar is a Genomics Software Application Engineer (Bioinformatics) at Agilent Technologies. He is the founder of the Expert Sequencing Program (ExSeq) at Cheeky Scientist. The ExSeq program provides a holistic understanding of the Next Generation Sequencing (NGS) field - its intricate concepts, and insights on sequenced data computational analyses. He holds diverse professional experience in Bioinformatics and computational biology and is always keen on formulating computational solutions to biological problems.

Similar Articles

How To Extract Cells From Tissues Using Laser Capture Microscopy

How To Extract Cells From Tissues Using Laser Capture Microscopy

By: Tim Bushnell, PhD

Extracting specific cells still remains an important aspect of several emerging genomic techniques. Prior knowledge about the input cells helps to put the downstream results in context. The most common isolation technique is cell sorting, but it requires a single cell suspension and eliminates any spatial information about the microenvironment. Spatial transcriptomics is an emerging technique that can address some of these issues, but that is a topic for another blog.  So what does a researcher who needs to isolate a specific type of cell do? The answer lies in the technique of laser capture microdissection (LCM). Developed at the National…

The Importance Of Quality Control And Quality Assurance In Flow Cytometry (Part 4 Of 6)

The Importance Of Quality Control And Quality Assurance In Flow Cytometry (Part 4 Of 6)

By: Tim Bushnell, PhD

Incorporating quality control as a part of the optimization process in  your flow cytometry protocol is important. Take a step back and consider how to build quality control tracking into the experimental protocol.  When researchers hear about quality control, they immediately shift their attention to those operating and maintaining the instrument, as if the whole weight of QC should fall on their shoulders.   It is true that core facilities work hard to provide high-quality instruments and monitor performance over time so that the researchers can enjoy uniformity in their experiments. That, however, is just one level of QC.  As the experimental…

How To Optimize Instrument Voltage For Flow Cytometry Experiments  (Part 3 Of 6)

How To Optimize Instrument Voltage For Flow Cytometry Experiments (Part 3 Of 6)

By: Tim Bushnell, PhD

As we continue to explore the steps involved in optimizing a flow cytometry experiment, we turn our attention to the detectors and optimizing sensitivity: instrument voltage optimization.  This is important as we want to ensure that we can make as sensitive a measurement as possible.  This requires us to know the optimal sensitivity of our instrument, and how our stained cells are resolved based on that voltage.  Let’s start by asking the question what makes a good voltage?  Joe Trotter, from the BD Biosciences Advanced Technology Group, once suggested the following:  Electronic noise effects resolution sensitivity   A good minimal PMT…

How To Profile DNA And RNA Expression Using Next Generation Sequencing (Part-2)

How To Profile DNA And RNA Expression Using Next Generation Sequencing (Part-2)

By: Deepak Kumar, PhD

In the first blog of this series, we explored the power of sequencing the genome at various levels. We also dealt with how the characterization of the RNA expression levels helps us to understand the changes at the genome level. These changes impact the downstream expression of the target genes. In this blog, we will explore how NGS sequencing can help us comprehend DNA modification that affect the expression pattern of the given genes (epigenetic profiling) as well as characterizing the DNA-protein interactions that allow for the identification of genes that may be regulated by a given protein.  DNA Methylation Profiling…

How To Profile DNA And RNA Expression Using Next Generation Sequencing

How To Profile DNA And RNA Expression Using Next Generation Sequencing

By: Deepak Kumar, PhD

Why is Next Generation Sequencing so powerful to explore and answer both clinical and research questions. With the ability to sequence whole genomes, identifying novel changes between individuals, to exploring what RNA sequences are being expressed, or to examine DNA modifications and protein-DNA interactions occurring that can help researchers better understand the complex regulation of transcription. This, in turn, allows them to characterize changes during different disease states, which can suggest a way to treat said disease.  Over the next two blogs, I will highlight these different methods along with illustrating how these can help clinical diagnostics as well as…

Optimizing Flow Cytometry Experiments - Part 2         How To Block Samples (Sample Blocking)

Optimizing Flow Cytometry Experiments - Part 2 How To Block Samples (Sample Blocking)

By: Tim Bushnell, PhD

In my previous blog on  experimental optimization, we discussed the idea of identifying the best antibody concentration for staining the cells. We did this through a process called titration, which  focuses on finding the best signal-to-noise ratio at the lowest antibody concentration. In this blog we will deal with sample blocking As a reminder, there are two other major binding concerns with antibodies. The first is the specific binding of the Fc fragment of the antibody to the Fc Receptor expressed on some cells. This protein is critical for the process of destroying microbes or other cells that have been…

What Is Next Generation Sequencing (NGS) And How Is It Used In Drug Development

What Is Next Generation Sequencing (NGS) And How Is It Used In Drug Development

By: Deepak Kumar, PhD

NGS methodologies have been used to produce high-throughput sequence data. These data with appropriate computational analyses facilitate variant identification and prove to be extremely valuable in pharmaceutical industries and clinical practice for developing drug molecules inhibiting disease progression. Thus, by providing a comprehensive profile of an individual’s variome — particularly that of clinical relevance consisting of pathogenic variants — NGS helps in determining new disease genes. The information thus obtained on genetic variations and the target disease genes can be used by the Pharma companies to develop drugs impeding these variants and their disease-causing effect. However simple this may allude…

How To Determine The Optimal Antibody Concentration For Your Flow Cytometry Experiment (Part 1 of 6)

How To Determine The Optimal Antibody Concentration For Your Flow Cytometry Experiment (Part 1 of 6)

By: Tim Bushnell, PhD

Over the next series of blog posts, we will explore the different aspects of optimizing a polychromatic flow cytometry panel. These steps range from figuring out the best voltage to use, which controls are critical for data interpretation, what quality control tools can be integrated into the assay; how to block cells, and more. This blog will focus on determining the optimal antibody concentration.  As a reminder about the antibody structure, a schematic of an antibody is shown below.  Figure 1: Schematic of an antibody. Figure from Wikipedia. The antibody is composed of two heavy chains and two light chains that…

Structural Variant Calling From NGS Data

Structural Variant Calling From NGS Data

By: Deepak Kumar, PhD

Single Nucleotide Variant (SNVs) have been considered as the main source of genetic variation, therefore precisely identifying these SNVs is a critical part of the Next Generation Sequencing (NGS) workflow. However, in this report from 2004, the authors identified another form of variants called the Structural Variants (SVs), which are genetic alterations of 50 or more base pairs, and result in duplications, deletions, insertions, inversions, and translocations in the genome. The changes in the DNA organization resulting from these SVs have been shown to be responsible for both phenotypic variation and a variety of pathological conditions. While the average variation,…

Top Technical Training eBooks

Get the Advanced Microscopy eBook

Get the Advanced Microscopy eBook

Heather Brown-Harding, PhD

Learn the best practices and advanced techniques across the diverse fields of microscopy, including instrumentation, experimental setup, image analysis, figure preparation, and more.

Get The Free Modern Flow Cytometry eBook

Get The Free Modern Flow Cytometry eBook

Tim Bushnell, PhD

Learn the best practices of flow cytometry experimentation, data analysis, figure preparation, antibody panel design, instrumentation and more.

Get The Free 4-10 Compensation eBook

Get The Free 4-10 Compensation eBook

Tim Bushnell, PhD

Advanced 4-10 Color Compensation, Learn strategies for designing advanced antibody compensation panels and how to use your compensation matrix to analyze your experimental data.