Press "Enter" to skip to content

Epilepsy Trials

I’ve been reading these proceedings of the 17th Epilepsy Therapies and Diagnostics Development conference and was struck by the section on trial design. In particular, the authors note a rise in the placebo responder rate from 12% to 23%.

At all the interminable pharma-sponsored dinners I’ve gone to, I’ve had to be that guy and ask why the placebo responder rate is so high. If antiseizure medication (ASM) therapy is so important to people with epilepsy, why do drug-resistant trial subjects do remarkably better on a sugar pill one-fifth of the time?

Part of this is likely the natural variability of seizures, as suggested by an interesting study that makes the case based on temporal reversibility of responder rates under three simulated conditions: placebo response because of psychological factors, placebo response due to regression to the mean, and placebo response from natural variability. In my own simulations I’ve treated regression to the mean as the primary driver, but I’ll have to run my data through this sort of analysis.
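To make that concrete, here’s a minimal Python sketch (my own toy model, not the study’s) of how an eligibility threshold plus ordinary Poisson noise can manufacture an apparent placebo response with no intervention at all:

```python
# Toy model (mine, not the study's): every subject has a fixed "true"
# seizure rate; trial eligibility requires a busy 8-week baseline.
# Subjects whose baseline ran hot by chance then drift back toward
# their true rate, producing a "placebo response" with no treatment.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_rate = rng.gamma(shape=2.0, scale=2.0, size=n)   # seizures per month

baseline = rng.poisson(true_rate * 2)                 # ~8-week baseline count
eligible = baseline >= 8                              # e.g. >=4 seizures/month to enroll

followup = rng.poisson(true_rate[eligible] * 3)       # ~12-week follow-up, untreated

# Percent change in monthly seizure rate, baseline vs. follow-up
pct_change = (followup / 3 - baseline[eligible] / 2) / (baseline[eligible] / 2)
rr50 = np.mean(pct_change <= -0.5)
print(f"50% responder rate with no intervention at all: {rr50:.1%}")
```

Subjects whose baselines happened to run hot get enrolled and then drift back toward their true rate, so a double-digit “responder rate” falls out of selection effects alone.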

But why the change in the placebo responder rate, then? The Proceedings paper cites this paper and this paper, which have some nice graphs. Both studies describe variability in placebo response in ASM trials that seems to correlate with geography, age, date of publication, and a few other factors, but neither can quite nail down the cause.

Higher placebo response rates and lower subject recruitment per site make traditional ASM clinical trials harder to run and, perhaps more importantly, make it harder to establish efficacy.

As a reminder, ASM clinical trials usually run something like this. Potential subjects with drug-resistant epilepsy (DRE), currently taking somewhere around 1–3 ASMs, spend roughly 8 weeks in a prerandomization baseline period counting seizures and keeping all therapies constant. At randomization there’s a titration period (often around 2–4 weeks) and then an interventional period (often 8–12 weeks) in which seizures are also counted. Figure out whether those taking the investigational product (IP) had a bigger decline in seizures than those taking the placebo and you’ve got a result.

The two main endpoints in these trials are the RR50, or 50% responder rate, and the median seizure reduction. The responder rate measures the fraction of subjects whose seizure frequency falls by at least the associated percentage, so you also see references to RR75 and RR100 (the latter comprising complete responders, who are seizure-free).
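For concreteness, here’s a hedged sketch of how these endpoints fall out of raw counts; the trial shape matches the paradigm above, but the function, variable names, and toy effect size are all mine:

```python
# Sketch of the standard endpoints from raw seizure counts, assuming an
# 8-week baseline and a 12-week maintenance period; data are invented.
import numpy as np

def endpoints(baseline_counts, maintenance_counts,
              baseline_weeks=8, maintenance_weeks=12):
    base = np.asarray(baseline_counts) / baseline_weeks        # weekly rates
    maint = np.asarray(maintenance_counts) / maintenance_weeks
    pct_reduction = 100 * (base - maint) / base
    return {
        "median_pct_reduction": float(np.median(pct_reduction)),
        "RR50": float(np.mean(pct_reduction >= 50)),
        "RR75": float(np.mean(pct_reduction >= 75)),
        "RR100": float(np.mean(pct_reduction >= 100)),         # seizure-free
    }

rng = np.random.default_rng(1)
rate = rng.gamma(2.0, 2.0, size=200)            # true weekly seizure rates
base = rng.poisson(rate * 8)
ip = rng.poisson(rate * 0.6 * 12)               # 40% true rate reduction on IP
pbo = rng.poisson(rate * 12)                    # no true effect on placebo
keep = base >= 8                                # eligibility floor (also avoids /0)
print("IP: ", endpoints(base[keep], ip[keep]))
print("PBO:", endpoints(base[keep], pbo[keep]))
```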

My own simulations have suggested that these timelines probably need to be extended to avoid the issues with regression to the mean and natural variability. But, luckily, there’s talk afoot of altering the traditional trial design paradigm. The aim is to “maximize [the subject’s] data,” but there might also be a benefit in reducing the risk of death, especially since Ryvlin et al.’s meta-analysis found evidence of an increased risk of SUDEP among placebo-arm subjects.

The scheme with the best traction seems to be a time-to-first-seizure design. Here, patients still go through a baseline seizure-counting period, but the main endpoint is not median seizure reduction or responder rate but the time to first seizure, after which the trial is over for that subject.
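As a rough sketch of how that endpoint might be analyzed (assuming exponential times to first seizure and right-censoring at the end of a fixed observation window, with made-up rates and arm sizes), a standard log-rank comparison looks like this:

```python
# Toy time-to-first-seizure analysis: exponential event times,
# right-censored at the end of a fixed observation window. Rates,
# arm sizes, and the window length are all invented for illustration.
import numpy as np
from scipy.stats import CensoredData, logrank  # logrank needs SciPy >= 1.11

rng = np.random.default_rng(2)
window = 12.0  # weeks of post-titration observation

def simulate_arm(weekly_rate, n):
    t = rng.exponential(1.0 / weekly_rate, size=n)  # time to first seizure
    censored = t > window                           # never seized on study
    return CensoredData.right_censored(np.minimum(t, window), censored)

pbo = simulate_arm(weekly_rate=0.5, n=150)  # mean ~2 weeks to first seizure
ip = simulate_arm(weekly_rate=0.3, n=150)   # hypothetical ~40% hazard reduction
res = logrank(ip, pbo)
print(f"log-rank statistic = {res.statistic:.2f}, p = {res.pvalue:.3g}")
```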

I have heard this discussed at a few venues, and one of the objections was that there’s still a placebo arm and therefore still subjects at risk for SUDEP. At the same time, placebo-arm subjects remain in the trial only so long as they are not seizing, which negates the risk.

This is just a cursory look at the placebo response in epilepsy trials; there’s more to talk about. It certainly doesn’t bode well for the typical trial paradigm that site recruitment is down and placebo response rates are up. And that ignores another set of problems entirely: trials are geared toward people with DRE, which may not reflect efficacy in drug-responsive epilepsies, and the placebo consists of a sugar pill plus whatever melange of up to three ASMs the subject happens to be on. Head-to-head trials of ASMs to look at comparative effectiveness are sparse, although they tend to show relatively small effect sizes anyway. Network meta-analyses may be the key here now that there are a few dozen ASMs on the market.

So go recruit some subjects and do some good science and let’s figure this madness out.