April 24, 2026

Regulatory Guidance, Adaptive Trials, and the Misconception of Efficiency

ICH-E20’s regulatory caution towards adaptive designs is often misapplied, resulting in inefficient or unrealistic alternatives for sponsors and patients. Operational casework demonstrates that so-called “complexity” in adaptive design is frequently misunderstood and that regulatory “false choices” undermine trial effectiveness.

Interpreting ICH-E20—Caution and Its Consequences

The ICH-E20 draft guidance offers a harmonized perspective on adaptive clinical trials, endorsing adaptations such as sample size modification, response-adaptive randomization, enrichment, seamless designs, and integrated Bayesian methods. Its publication signals that regulatory authorities now recognize the scientific legitimacy of these approaches in confirmatory settings. Where the document acknowledges progress, however, it also reveals deep caution. The guidance heavily favors a stepwise, phase-by-phase model of drug development. A paragraph in Section 3.1 emphasizes that “a stepwise program with careful analysis and evaluation of completed exploratory trials helps inform the goals and design choices for subsequent confirmatory trials.” Here, the standard development sequence is held up as the default, even though that default is often unattainable in practice.

The paragraph concludes that “the number and complexity of adaptations at the confirmatory stage should generally be limited.” Further, it asserts that increased adaptation in place of traditional multi-trial programs can “impair the ability to answer important clinical questions” or restrict “reflection on prior results.” This passage can be invoked in regulatory feedback, most often to question complex or multi-stage adaptive proposals and to encourage, or require, a simpler staged alternative. The irony of the claim that adaptive designs “impair the ability to answer important clinical questions” is that the opposite can be true: without adaptive designs, many questions never get answered.

The Myth of Complexity—The Technical Perspective

Widespread regulatory skepticism about the complexity or multiplicity of adaptations is not well supported by operational facts. Establishing the adaptive analysis infrastructure is the largest organizational task; once procedures are in place, the incremental complexity of additional interim analyses is marginal. Moving from one to several interim analyses, for example, adds only a small amount of operational work after the initial implementation. Technical teams regularly manage trials with numerous interims—routinely updating randomization, adjusting allocations, and reviewing accrued data under robust blinding and procedural safeguards.

A common presumption is that each adaptation, and especially the conduct of multiple interim analyses, introduces operational bias. Experience from modern adaptive trials—such as those in oncology and critical care—demonstrates the opposite: more interims can decrease the risk of operational bias. Regular interim analyses are now standard in major platform and seamless trials, with daily or weekly interim updates feasible and statistically well controlled when Bayesian and frequentist approaches are rigorously applied. In practice, the risk of bias is not increased by the frequency or sophistication of adaptation; it is managed by adherence to predefined rules and independent oversight.
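The point that frequent interims remain statistically well controlled when rules are predefined can be illustrated with a small simulation, a sketch under stated assumptions rather than any specific trial's methodology. Under the null hypothesis, naively re-testing accruing data at the single-look threshold inflates the false-positive rate, while a predefined Pocock-style constant boundary for ten looks keeps it near nominal. Batch sizes and boundary values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def trial_false_positive(looks, z_crit, n_per_look=50):
    """Simulate one two-arm trial under the null (no true effect).

    Data accrue in batches of n_per_look per arm; at each interim look a
    z-statistic is computed on all data so far, and the trial stops for
    'success' if it crosses z_crit. Returns True if the trial falsely
    declares success at any look.
    """
    a, b = np.array([]), np.array([])
    for _ in range(looks):
        a = np.concatenate([a, rng.normal(0.0, 1.0, n_per_look)])
        b = np.concatenate([b, rng.normal(0.0, 1.0, n_per_look)])
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        z = (a.mean() - b.mean()) / se
        if z > z_crit:
            return True
    return False

def type1_rate(looks, z_crit, n_sim=2000):
    """One-sided false-positive rate across simulated trials."""
    return sum(trial_false_positive(looks, z_crit) for _ in range(n_sim)) / n_sim

# Naive: 10 looks, each tested at the single-look threshold (z = 1.96).
# Predefined: 10 looks with a Pocock-style constant boundary (~2.56).
print("10 looks, naive 1.96 boundary:", type1_rate(10, 1.96))
print("10 looks, Pocock-style 2.56 boundary:", type1_rate(10, 2.56))
```

The frequency of looks is not what inflates error; it is the absence of a prespecified boundary. With the adjusted boundary, ten looks cost little in type I error while preserving the ability to stop early.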

The root misconception is historical: adaptation was traditionally viewed only in the narrow context of early stopping for success or futility. Adaptive design, as currently practiced, is broader—encompassing dose finding, enrichment, selection of subpopulations, dynamic sample size re-estimation, and platform architectures capable of adding or dropping arms. These capabilities require technical nuance, not blanket suspicion.

False Choices and Practical Consequences

The most serious effect of this misplaced caution is the creation of “false choices.” When agencies cite ICH-E20’s warnings as grounds to oppose adaptive proposals, the stated assumption is often that sponsors will instead run two sequential trials: an extensive Phase 2 to address the exploratory questions, followed by a large confirmatory Phase 3. This scenario is mostly hypothetical. When an adaptive design A is proposed to answer two questions, the pushback is often that those two questions would be better answered in two separate trials, B and C. But the result of rejecting A is not B plus C; instead a different design D is selected that answers only one of the questions. The pushback against A is made in the hope of getting B and C, yet what sponsors actually run is D, which is typically worse than A.

In developed casework such as the SEPSIS-ACT trial, the adaptive design, with a maximum sample size of 1,800, combined dose finding and confirmatory assessment in a seamless, multistage protocol with frequent interims and response-adaptive randomization. A sequential two-trial alternative would require 2,600 subjects (800 + 1,800). No sponsor would commit to that plan, for economic and practical reasons alike. The fallback would be a small, underpowered Phase 2 or a direct move to Phase 3 without adequate dose exploration—dramatically reducing the ability to produce robust, regulatory-grade evidence.

Additional examples reinforce this reality. Seamless Phase 2/3 trials employing early markers as decision criteria can be rejected as “too complex.” But resource limits prevent sponsors from running separate large, long-term exploratory studies, resulting in either weak Phase 2 trials or skipping dose finding altogether and advancing directly to Phase 3 at substantial scientific risk. In enrichment scenarios, regulators request exploratory enrichment trials before confirmatory work, but sponsors default to fixed-population Phase 3 designs without real subpopulation analysis. In each case, efficient adaptive design is the only method by which sponsors can responsibly and rigorously answer development questions within practical constraints.

A not-uncommon generic example: a simple fixed Phase 3 design powered for the expected effect, delta, with 180 patients. The fixed trial carries huge risk: the true effect may be smaller than delta yet still clinically meaningful. An adaptive sample size approach, say with possible sample sizes of 150, 200, 250, or 300 and appropriate stopping rules for success and futility, preserves high power at the expected delta while allowing the trial to expand for smaller but still important effects. If the four-look design is considered too complex, the suggested alternative may be a fixed 300-patient design. The 300-patient trial may well be better than the four-look adaptive design, but it is a false choice: if the four-look design is rejected, the result is not the 300-patient trial but the 180-patient trial.
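The trade-off in this generic example can be sketched numerically. The simulation below uses the numbers from the post (a 180-patient fixed design; adaptive looks at 150, 200, 250, and 300 total patients, i.e. half per arm), but the effect size, success boundary, and futility rule are illustrative assumptions of my own, not a calibrated design:

```python
import numpy as np

rng = np.random.default_rng(7)

def fixed_power(n_per_arm, delta, n_sim=4000, z_crit=1.96):
    """Power of a fixed two-arm design (known unit variance, one-sided test)."""
    wins = 0
    for _ in range(n_sim):
        a = rng.normal(delta, 1.0, n_per_arm)
        b = rng.normal(0.0, 1.0, n_per_arm)
        z = (a.mean() - b.mean()) / np.sqrt(2.0 / n_per_arm)
        wins += int(z > z_crit)
    return wins / n_sim

def adaptive_power(looks, delta, n_sim=4000, z_success=2.2, z_futility=0.0):
    """Power of an illustrative four-look adaptive design.

    `looks` lists per-arm sample sizes at each analysis (75/100/125/150
    per arm ~ 150/200/250/300 total). The trial stops for success if z
    crosses z_success at any look, and for futility if z drops below
    z_futility at an interim. Boundaries are illustrative, not calibrated.
    """
    wins = 0
    for _ in range(n_sim):
        a = rng.normal(delta, 1.0, looks[-1])
        b = rng.normal(0.0, 1.0, looks[-1])
        for i, n in enumerate(looks):
            z = (a[:n].mean() - b[:n].mean()) / np.sqrt(2.0 / n)
            if z > z_success:
                wins += 1
                break
            if z < z_futility and i < len(looks) - 1:
                break  # stop for futility at an interim look
    return wins / n_sim

# A true effect smaller than the 'expected' delta but still meaningful:
delta_small = 0.30
print("fixed 180-patient design (90/arm):",
      fixed_power(90, delta_small))
print("adaptive design, up to 300 patients:",
      adaptive_power([75, 100, 125, 150], delta_small))
```

Under these assumptions the 180-patient fixed design is roughly a coin flip at the smaller effect, while the adaptive design, by expanding when the interim signal is promising but not yet decisive, recovers much of the lost power. That gap is exactly what is forfeited when the four-look design is rejected and the sponsor falls back to the 180-patient trial.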

Moving Toward Better Trial Designs

Limiting adaptations does not ensure greater scientific rigor; it yields simpler but worse trial designs. More often it forces development down inefficient paths, where key clinical questions go unanswered or are only partially addressed. Adaptive designs, structured with statistical discipline and operational clarity, are not shortcuts; they are often the only viable avenue to trial programs that meet scientific, regulatory, and patient needs. Regulators and sponsors can collaborate to move beyond hypothetical but unattainable alternatives and engage on practical, science-driven adaptive designs. Efficiency in clinical trial design is not a shortcut; it takes science where it would not otherwise go.
