Beginner's Mind

#107: Jack Scannell - Uncovering Drug Development Efficiency Secrets: The Science Behind Pharmaceutical Research

• Christian Soschner • Season 4 • Episode 12


Do you ever wonder why it takes so long for new drugs to hit the market? Have you ever questioned the efficiency of the pharmaceutical industry?

Join me in this episode as we talk to Jack Scannell, a pharmaceutical industry expert, and discover the hidden efficiency secrets of drug development. In this episode, we delve into the critical factors that have shaped drug development over the last 60 years and analyze the industry's current state.

Throughout the podcast, Jack draws on his extensive industry knowledge and provides insights on the importance of decision quality, the role of AI in drug discovery, and the economics of the research model.

If you're looking to gain a better understanding of the pharmaceutical industry and its inner workings, join us in uncovering the efficiency secrets of drug development. Tune in to this episode.

đź’ˇ LINKS TO MORE CONTENT
If you like the episode, become a subscriber and support the show: https://lsg2g.substack.com/subscribe
Watch on YouTube
Jack Scannell
Christian Soschner

đź“– Memorable quotes:

(11:41) “Between 1950 and 2010, Drug Development saw a halving time: Every nine years, the number of drugs developed per billion dollars spent halved.”
(24:58) “4 Reasons for the Decline of Drug Development Productivity: Better Than the Beatles Problem, Higher Regulatory Hurdles, Throw Money at It Problem, Misindustrialisation of Science”
(26:30) “Metformin is the Pharmaceutical Equivalent to the Beatles”
(45:15) “In early drug discovery, predictive validity, the degree to which your model output correlates with human clinical utility, is critically important”
(01:19:31) “If the only tool you have to measure the height of the fruit is the rate at which you are picking fruit, and you notice that the rate of picking fruit is declining, you will always blame the low hanging fruit problem.”
(01:34:15) “When capital is freely available, the average quality is going down”

⏰  Timestamps:

(00:00) Introduction: Uncovering Drug Development Efficiency Secrets
(06:30) Background Jack Scannell: Pharma Journey
(09:15) Finding Eroom's Law: Playful Experiment Leads to Discovery
(12:00) Inversion of Output Efficiency: More Capital, Fewer Drugs
(13:15) Best Key Figure for R&D Productivity?
(20:24) Factors Behind R&D Productivity Decline - 1950 to 2010
(25:30) The Beatles Inform Drug Development
(28:34) The Regulatory Problem
(30:00) Throwing Money at the Problem & Misindustrialisation of Science
(31:00) Turnaround in Drug Development Efficiency from 2010
(38:30) Speeding Up Early-Stage R&D
(45:45) Technical & Managerial Components to Improve Speed
(47:10) Improving Predictive Quality of Models - Alzheimer's Case Study
(57:00) Interplay of Different Forces in Complex Problem Solving
(01:02:30) Model Quality Matters the Most for High R&D Quality
(01:12:45) Making Pharma Value Chain More Effective
(01:18:00) Appropriate Success Measure for Drug R&D?
(01:21:05) Role of AI in Drug Discovery Process
(01:29:32) Limitin

Send us Fan Mail


Join Christian Soschner for expert coaching.
50% Off - With 35+ years in deep tech, startups/scaleups, and public companies, Christian offers power video sessions. Elevate strategy, execution, and leadership. Book Now.

Support the show

Join the Podcast Newsletter: Link

SPEAKER_01

Are you curious why developing new drugs is becoming increasingly challenging despite advancements in technology and an influx of capital in the pharmaceutical industry? If so, then you won't want to miss our podcast recording with Dr. Jack Scannell, an expert in R&D productivity and CEO of Etheros Pharmaceutical.

SPEAKER_02

The other thing, arguably, was around the sort of misindustrialization of science. The big idea is that quality beats quantity. A certain amount of quantity is necessary, right? But in early drug discovery, predictive validity, the degree to which your model output correlates with human clinical utility, is critically important. And over time, if the only tool you have for measuring the height of the fruit is the rate at which you're picking fruit, and you notice that the rate at which you're picking fruit is declining, you will always blame the low-hanging fruit problem. But it's also clear, I think, that when capital is freely available, the average quality goes down.

SPEAKER_01

In this discussion, Dr. Scannell will delve into the causes of the decline in drug development productivity, explore potential solutions, and share his insights on the interplay between society, politics, and scientists in solving complex problems. Join me for an informative and thought-provoking conversation with one of the leaders in the industry. Here we go. Now we are connecting to the live stream. It usually takes a few minutes until we go live on Zoom and on LinkedIn. Let me just check. Yes, I can see myself on LinkedIn, which is very good.

SPEAKER_02

Okay. I've got as few windows open as possible just to avoid any potential problems.

SPEAKER_01

Good to see that we have an audience here. Eighty people signed up, and I'm very happy to see you, Jack. How are you doing today?

SPEAKER_02

Very well, very well. It's a very miserable grey day in Edinburgh, but, you know, I'm used to that by now.

SPEAKER_01

Yeah, it seems we have the same weather. When I look outside the window here in Vienna, Austria, it's also typical fall weather: it's rainy, the sky is grey.

SPEAKER_02

Right, right. Yeah. I'm looking out my window now; here in the winter, when it's cloudy, it sometimes doesn't really get properly light. It's one of those days today. So sitting in and doing a call is not a bad way of spending the time.

SPEAKER_01

Yeah, it's a good opportunity in winter to have some webinars. And today we are talking about R&D productivity. Let me first give you a little bit of my background, where I'm coming from, so that we frame the episode; then maybe we talk about where you're coming from and what your background is, and then we dive into the topic. I started working in life science in 2006 and was deeply impressed by the complexity of drug development processes back then. My key learnings were, first, that everything that is not safe and effective doesn't reach the market. The second key learning, and of course, since I have a background in economics, I was focusing on the dynamics of the market, was that the majority of drugs fail. The ballpark figures that I remember are that basically 99 out of 100 potential drug candidates already fail in the scientific phase of the research and development process, and even when we get to the clinical phase, nine out of ten fail. And back in 2006, I got the number that to develop a drug from science to the market costs approximately one billion dollars, which was, wow, a very big number. And the amazing thing was that the probability of success was pretty low. I think when we calculate it through, it's probably less than 1% from science to the market. And I thought, that's it, I'm fine, I know the figures, now I can work in the industry and keep going. It took me 12 years, until the beginning of the pandemic, to realize that the situation had changed. In 2020, I worked on a project together with scientists and came up with the usual story that I just told you: it's expensive, the probability of success is low, and it costs 1 billion. And they looked at me and said, no, you're wrong. I said, no way, I did my numbers, and I am sure that I'm right. And they convinced me that I was wrong. They said the drug development process to get a drug to the market now costs about three to four billion dollars. And this was the time when I started thinking, okay, why is that? Normally, I mean, we have Moore's Law, we have other laws that say that industries over time reduce cost; everything gets cheaper and quicker to the market. And it seems to be different in the pharma industry. In late 2022, I had a webinar with Marco Schmidt, and we were talking about his company and artificial intelligence, and I was slowly drifting away. At one point he said, Christian, do you know Eroom's Law? And I said, yeah, yeah, I know all the laws. And then I thought, no, actually not. What's Eroom's Law? I know Moore's Law. And he brought up your name. So I thought it would be a good idea to invite you to this podcast, since you are, in my opinion, one of the best experts in drug R&D productivity, and to get your perspective into the discussion and out on the market: what's actually happening in the pharma industry.
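[Editor's note: a quick back-of-envelope check on the success rates quoted above; the figures are the ones mentioned in conversation, not sourced statistics, and treating the stages as independent is a simplification.]

```python
# Back-of-envelope sketch using the rates quoted in the conversation.
# Assumption: the two stages are independent, which is a simplification.
preclinical_survival = 1 / 100  # ~99 of 100 candidates fail before the clinic
clinical_survival = 1 / 10      # ~9 of 10 clinical candidates fail

overall = preclinical_survival * clinical_survival
print(f"End-to-end success rate: {overall:.1%}")  # ~0.1%, i.e. well under 1%
```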

SPEAKER_02

Well, that's a very kind introduction. I'll tell you a bit about my background and about my interest in productivity. And I think the first thing I'll say is that one of the reasons I've done work on R&D productivity that's been widely read is not necessarily because I know more about it than other people, but because I've had different institutional biases, right? There are lots of people who are very expert who are in jobs that make it difficult for them to be frank. And I think, by an accident of career, some of the jobs I've had just made it easier for me to do and publish the analyses that other people, had they wanted to, could probably have done and published. So I'll tell you about my background. I studied medicine a very long time ago. I didn't finish my medical studies. During my medical training, I did a PhD in computational neuroscience, and I worked as an academic neuroscientist for a few years. Then around 2000, I jumped across into the consulting industry and worked at a company called Boston Consulting Group, where I did quite a lot of work in the pharma industry, though I did a lot of other things as well, right? I wasn't just doing drug industry work, but I did a fair bit of it. And I remember I had an office mate who was writing, you know, companies like Boston Consulting Group and McKinsey have to do these big thematic pieces to convince their clients that they've got interesting things to say. And an office mate of mine who really hadn't done much science was writing a piece about how genomics was going to be fantastically transformative for the drug industry. And having been a computational slash systems neuroscientist, I was somewhat skeptical. Possibly too skeptical in retrospect, right? But I was skeptical. And I remember this got me digging out R&D productivity trends. And it was then that I first came across the productivity trends which I actually published about 12 years later under the name of Eroom's Law. So I worked at Boston Consulting Group for a while, and then I moved into drug and biotech investment. And between then and now, so between about 2005 and now, I bounced around between drug and biotech investment, academic slash policy work, and also doing proper biomedical science, right? I worked in drug discovery for a while at a bioinformatics slash AI oriented drug discovery company with some of my former computational neuroscience friends; that was doing proper science. And actually, I've now returned to proper science in a sense, in that I'm involved in starting up a little biotech company with some assets around neurodegeneration and health span. But the Eroom's Law term and the R&D productivity work really stemmed from work that I first got interested in at Boston Consulting Group, but then pursued a lot more when I was working in investment, say between about 2007 and 2012. And that was a period when productivity trends in the drug industry were really, really depressing. Right. So things now are actually much more optimistic than they were in 2010. But in 2010, people really were thinking that the drug industry simply had lost the ability to discover drugs, right?
And I was particularly interested by the contrast between input and output efficiency, right? I spent a lot of my formative years as a scientist apparently seeing all of the technologies get better. DNA sequencing had got 10 billion times cheaper; X-ray crystallography for looking at protein structures had got thousands of times cheaper; we could make transgenic mice in which to test drugs. There's a whole bunch of things which were getting much, much faster, better, cheaper. But the work at BCG, and then subsequently in my investment career, told me that the drug industry was spending a hundred times more in inflation-adjusted terms to discover a drug in 2010 than it was in 1950. Right. And that the clinical trial failure rates were higher in 2010 than they were in 1970. And although lots of people had written about the productivity challenge, this had really been a theme in the drug industry probably since the early 80s, that things were getting harder, no one had really contrasted the input efficiency with the output efficiency. Right? It looks pretty bad if things are getting harder. But if things are getting harder while your inputs are getting much, much better and quicker and cheaper, that's actually a more awkward problem. And I coined the term Eroom's Law really to draw attention to the contrast. Many of the inputs were following something that looked like Moore's Law, Moore's Law being not really a law, but a trend identified by Gordon Moore, who was one of the founders of Intel. It was to do with the rate at which the number of transistors you could put on a chip doubled, right? So chips were getting faster, better, cheaper. And this seemed to be an exponential process, with a roughly constant doubling time in terms of chip efficiency. And drug R&D productivity, at least in terms of output, showed a sort of inverse: a halving time of efficiency. Roughly once every nine years, between 1950 and 2010, the number of drugs discovered per billion dollars spent halved, right? And so Eroom's Law was a kind of joke, Eroom being Moore backwards, right? If you spell Moore backwards, it's Eroom. Hence the invention of Eroom's Law.
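[Editor's note: a minimal sketch of the Moore's Law versus Eroom's Law contrast. The 9-year halving time is from the discussion above; the 1950 baseline rate and the 2-year chip doubling time are illustrative assumptions.]

```python
# Eroom's Law vs Moore's Law: a toy illustration.
# Assumptions: the 1950 baseline rate (30 drugs per inflation-adjusted $1B)
# and the 2-year chip doubling time are arbitrary illustrative values.

def drugs_per_billion(year, base_year=1950, base_rate=30.0, halving_years=9.0):
    """Output efficiency halves roughly every 9 years (Eroom's Law)."""
    return base_rate * 0.5 ** ((year - base_year) / halving_years)

def relative_chip_density(year, base_year=1950, doubling_years=2.0):
    """Input efficiency doubles every couple of years (Moore's Law)."""
    return 2.0 ** ((year - base_year) / doubling_years)

# Sixty years at a 9-year halving time is 2**(60/9), roughly 100x,
# matching the "hundred times more expensive in 2010 than in 1950" figure.
for year in (1950, 1980, 2010):
    print(year, round(drugs_per_billion(year), 2), f"{relative_chip_density(year):.1e}")
```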

SPEAKER_01

One question, sorry to interrupt you, but isn't it interesting? I always perceived, especially in Europe with Horizon Europe, that a lot of capital is going towards basic science, and especially also into drug development. So it sounds, after what you said right now, as if output efficiency was going down while more and more input was put into the process. Isn't that an interesting development?

SPEAKER_02

And I think it's underexplored in the policy world, right, in public policy, and it's arguably underexplored in the broad discourse around the drug industry, because it's not a trend that the experts want to advertise, right? If you're a biomedical scientist, you don't want to signal to your government that there is less useful output being produced for increased investment, right? So you have a sort of rhetorical scientific optimism, both in industry and academia, that is kind of disconnected from the long-term productivity trends.

SPEAKER_01

But there's another question that pops up in my mind. What is the right measure for R&D productivity? In my opinion, one measure could be approved drugs; but on the other hand, not every drug that enters the process should reach the patient, because not every drug can be safe, and some drugs are not effective. So, in your opinion, what is the right measure for R&D productivity?

SPEAKER_02

Well, when I worked in finance, the answer was easy, right? It was a financial return measure. There's more than one return-on-investment measure, but it was effectively: what was the effective interest rate you generated on your R&D capital? Those are the kinds of measures I was interested in. Now I'm not working in investment, and if one looks more broadly, that's a very difficult question to answer. My view is that the measures we would most like to have are the measures we don't have. From a policy perspective, really, one wants to know the amount of social welfare, right? Or the net health gains one's getting from R&D investment. And measuring those things is very fraught; it's technically difficult. Consequently, people deal with measures that aren't very good in the broader sense, right? A lot of my work has been around financial measures of R&D productivity, or around simple counts, right? The number of drugs approved per billion dollars. But you're absolutely right: lots of those drugs are not terribly useful to anyone, right? And some of them are incredibly useful to very, very large numbers of people and go on being very, very useful for decades, right? And my view is that there isn't a good calculus reflecting that. And again, without wishing to digress too much, I think the public policy debate on biomedical innovation would be better served if there were better ways of evaluating the benefit of biomedical innovation. At the moment you have some very disparate views, right? Depending on your analytic methods, there are respectable people arguing that there are minimal incremental health gains from pharmaceutical innovation on one hand, and other people arguing that there are enormous or disproportionate health gains from pharmaceutical innovation. And I think most of the disagreement between them is around methodological choice, right? So actually they're having an argument about methods, really, but they imagine they're having an argument about the substantive issues.

SPEAKER_01

In your opinion, what is the most important productivity measure, then, if this is such a complex field? Or do we simply not have one, unfortunately?

SPEAKER_02

So I think the socially most useful productivity measures are measures that are contentious, or rather, around which there's no consensus. It would be something like some sort of net health gain: quality-adjusted life years gained per unit of R&D investment. I think those would be the most appropriate measures. But I just think calculating this is fraught.

SPEAKER_01

Yeah, then it's a very challenging thing to talk about productivity, if this is such a complex field and we can't agree on one measure. If you had to recommend one, in your opinion, from the scientist's perspective, what should they focus on?

SPEAKER_02

So for practical purposes, in the work I've done over the years, I've really focused on two broad sets of measures, right? One is drugs approved per unit input, and the other one is financial return per unit input. And the financial return measure, of course, does capture, at least in aggregate, the fact that you've got very skewed returns. If you, for example, look at a company or the industry as a whole, it's comprehensive, so it includes everything. It will include the big drugs and the small drugs, right? But things are very skewed. I think if you start looking at individual drugs or individual companies, it gets more difficult.

SPEAKER_01

Yeah, absolutely. I mean, when we come from the value perspective, you have a lot of small companies focusing, for example, on generic drugs, making drugs a little bit better and cheaper. Then you have companies like CRISPR Therapeutics, for example, who developed an entirely new platform for doing things, with a valuation of 50, 60, 70 billion. And then we have the outliers, especially in recent years, I think Moderna and BioNTech, that happened to have a solution in the mRNA field with a high need at the time they were doing the research and development. At the peak, I think both companies together had several hundred billions of value on the market, which is very interesting. Which trends do you see in R&D investment, especially when we talk about these valuation measures? What's currently happening in this field?

SPEAKER_02

So rather than give you a bad answer, I'm going to sidestep that a bit, right? I stopped working in investment in 2019, and if you're not in it, you get stale quite quickly. I know what I'm doing now: I'm very focused on the relationship between model quality, technology choices, and R&D productivity, and I'm working on that area; there's a particular company that I'm trying to start at the moment. But I don't really feel current on the major themes as to what's going on more broadly in the market. So rather than say something nonsensical, I'll not say too much, if you don't mind.

SPEAKER_01

Yeah, no problem. I'm coming from the economics side, and valuation is one of the key factors that I keep an eye on to move things forward. Looking at the low success rates that drug development companies have, it's quite clear that you need a drug that is highly effective and safe to refinance the investments that go into a company, and so that the pharma industry also has some interest in buying it. I think this is a basic economic rule. The big question I have now, when I look at the market and when I have to recommend investments, is: knowing that R&D productivity is going down, it's really difficult to recommend investing in that field. What would be helpful at this point is to understand why that happened, in your opinion. You mentioned that from 1950 to 2010 there was a huge decline in productivity. What were the factors that contributed the most?

SPEAKER_02

Okay, so taking this back to the financials first, right? The drug industry generated really good returns for its investors. And there's a number of different ways of measuring this, right? But if we look at things like return on equity, and if you calculate it correctly, which adjusts for R&D capitalization, the drug industry generated really good returns for its investors from as far back as I can look, which is the 1960s, I don't have data further back than that, up until around 2000. And then from 2000 to now, the return on equity has trended down. Now it bounces around, but I would say it's not wildly different from other industrial sectors; it's been on a downward trend. And the simple driver of that is effectively that R&D spending has grown faster than net profits, right? R&D efficiency has been declining probably since 1950 until around 2010, but for the first 50 years of that, top-line revenues grew very quickly. And then from around 2000 onwards, the revenues started growing less quickly than the R&D. So you've gone from an industry where net margins were around 9% and R&D investment was around 5% of sales in 1960, and it stayed like that probably until the late 1970s, to an industry where both net margins and R&D spend are in the mid-to-high teens as a percent of sales. So you've had that adverse trend in net profits versus R&D. And although some people don't account for it this way, R&D investment is effectively the major capital expenditure in the drug industry. So that's made the drug industry less capital efficient; it's pushed down return on equity. The question then is: why does that happen? And here I'll go back to a paper I wrote in 2012, which coined the term Eroom's Law. That was an attempt to diagnose what was going on, right? We've got this adverse productivity trend despite improving inputs. And in that paper, I ended up blaming a number of things. I'll list them, and we may want to talk about one or two of them in more detail, but some of them are pretty self-explanatory. One that isn't self-explanatory was something called the Better Than the Beatles problem, which I'll talk a bit more about if you want. Another one that is self-explanatory, although I now regret my naming of it, is the cautious regulator problem. I wouldn't be so pejorative about the regulator if I were renaming it today. But it's clear regulatory standards have gone up. And if you raise regulatory standards, that makes R&D more expensive, right?

SPEAKER_01

Yeah, I couldn't agree more on that. Let me just pick up on one point. The trend that you describe basically leads to the outcome, at the end of the day, that when quantity goes down and expenses go up, you get higher prices for everybody on the market, for patients especially, and for the payers. Bottom-line economics. You mentioned that there are two main problems so far: one is this regulatory issue, and the other one is the Better Than the Beatles problem.

SPEAKER_02

And actually, I think there's a couple more. I think there was a sense that for much of the time, industry returns were great. So there's a problem that we call the throw-money-at-it tendency. Really, up until the year 2000, because return on R&D investment at an aggregate level was so good, the solution to almost any R&D problem was to throw money at it, right? That was the rational solution. But that only remains true for a certain amount of time, and it stopped being true around 2000. And then the other thing, arguably, was around the misindustrialization of science. If inputs are getting better and output efficiency is declining in an R&D process, there are two broad classes of explanation, right? One is that you've run out of stuff; you've got some sort of resource depletion problem. And the Better Than the Beatles problem alludes to that, which again I'll explain. But the other one is that you're actually doing things wrong, right? What happens is you're now doing the wrong things at lower unit cost. There's been a qualitative change in the nature of the activities involved in R&D, and you've swapped high-unit-cost but productive activities for low-unit-cost, unproductive activities, right? So those were the four explanations: the Better Than the Beatles problem, higher regulatory hurdles, a general tendency to throw money at the problem, and then a sort of misindustrialization of science. And then things turned around a bit in 2010, which I'd be happy to talk about. But anyway, you steer where you want me to go.

SPEAKER_01

Let's bring a little bit more light into these four problems that you identified, what they're all about, and then maybe we look at the time from 2010 onwards up to now and what changed. You mentioned the Better Than the... I find the name very funny, the Better Than the Beatles problem, because I immediately associate it with the musicians, the music group. But how do you describe the Better Than the Beatles problem?

SPEAKER_02

So it is an allusion to the group, and it may show my age, right, that I chose the Beatles. But the analogy is this. Imagine how hard it would be to successfully commercialize new music if every new song had to be, A, better than the Beatles; B, you could download the old stuff for free; and C, you didn't get bored of listening to it, or rather, you didn't get bored of listening to the Beatles. And you do have that analogy in drugs because of genericization. So metformin is the kind of pharmaceutical equivalent of the Beatles, right? It's an incredibly effective drug, an absolutely top-performing diabetes drug, but because it's been off patent for years, it's now almost free. And doctors don't get bored of prescribing it, right? And what that means is, in type 2 diabetes, we've now got metformin, you've got sulfonylureas, you've got GLP-1s, you've got DPP-4s; you've got this ever-improving back catalogue of stuff, and it goes generic. So you have a really, really good generic pharmacopeia. And what that means is new drugs in diabetes have to compete with this ever-improving back catalogue of really good free stuff. And that's not just in type 2 diabetes; it's in anti-infectives, it's in anti-hypertensives, it's in cholesterol management, right? And it's now the case that 90% of US prescriptions are for generic medicines. So this ever-improving back catalogue of virtually free stuff that's really good effectively undermines the economic rationale for investment in the therapy areas where the really good old stuff exists. And inevitably, it squeezes investment into therapy areas where there isn't lots of good old stuff. And the therapy areas where there isn't lots of good old stuff are therapy areas where, for the last hundred years, the drug industry has had less success, right? So they're probably, in one way or another, difficult. Right. So that's the Better Than the Beatles problem. And you see it in some other intellectual property businesses, right? You see it in agricultural crop protection chemicals, so it's not the only place you see it. But in lots of industries, the old stuff wears out, right? They're not intellectual property businesses, so you can sell new stuff even if it's not much better than the old stuff, because people need to replace the old stuff, right? So that's an unusual characteristic of the drug industry. Now, the cautious regulator problem, again, although I wish I'd called it something different, is fairly obvious. Drug R&D really was the Wild West in the 1950s and 60s. And the sorts of things people did then wouldn't just get you in trouble with the regulator today; they'd put you in jail today, right? I'm not necessarily advocating we return to the 1950s and 60s, but having a relatively laissez-faire attitude towards testing things in people was, perhaps unsurprisingly, quite an efficient way of finding drugs, right? And I think what happens over time is there's a kind of ratchet. When things go wrong, you introduce new regulations to stop them going wrong, but we rarely take away the old regulations, right?
So arguably, although I'm not suggesting the regulator's got the balance wrong, over time you get a kind of accretion of costs that you needn't necessarily have, right? The throwing-money-at-it tendency is pretty simple. It's just to do with the fact that returns on R&D were great until about the year 2000, because although R&D costs were going up, drug industry sales were growing faster, or as fast, right? So that just made people naturally price-insensitive, I think. And the one that I'll talk a bit more about later is around this kind of misindustrialization. I think there were naive assumptions about the factors that drive efficiency, unit cost being one of them, and people didn't understand the trade-offs they were making when they embraced some of those notionally high-tech approaches, right? So I think those were the original driving factors. Now, there has been a turnaround since 2010, as you said, and I think that's been driven by a number of things. I think the industry has got better at understanding where the technology it has at its disposal is likely to work, right? Human genetics has helped there. So we have better ways of slicing and dicing human disease. And if you can identify groups of patients who are genetically homogeneous, you have a kind of homogeneous disease entity that is easier to model, right? And also it's easier to get the right patients; it's easier to find the patients who match the thing that you're modeling, right? So effectively, genetics is a tool that has tied together greater confidence in a therapeutic mechanism, the ability to test drugs against that therapeutic mechanism outside of people, and then finding the actual people who match the therapeutic mechanism that you've been testing in your models, right? And I think that's combined with arguably a slightly more relaxed regulatory environment in the therapy areas where those approaches work well, many of which are related to rare diseases and cancer. Human genetics has been very useful in rare diseases and cancer; it's been less useful in a lot of common diseases, right? And also, I think the whole lot's been helped by the drug industry's discovery of quite how much it can charge for rare disease drugs, right? So there's been this conjunction of factors: a recognition that our technology works quite well for genetically simple diseases; the fact that some of these diseases are more tractable from a regulatory perspective; and then also the economics of those therapy areas, which the industry has realized are better than it would have thought 20 or 30 years ago, with the rise in cancer drug pricing and orphan drug pricing. So I think those are the things that led the turnaround. And we've got more drugs coming out of the pipe per billion dollars spent than we did in 2010. But the indications for which they're being approved are narrower, right? And also they're getting very, very, very expensive.

SPEAKER_01

Yeah, maybe drugs approved per billion dollars really isn't the best productivity measure, then. I mean, you mentioned the Better Than the Beatles problem. It sounds to me that basically in the last 70 years the low-hanging fruit has been harvested, and there is a solution on the market that works. So why should you improve something that already works on the market, even when you throw more capital at it? The other thing you mentioned that sticks in my mind is that when you have the low-hanging fruit on the market and it works well, throwing more money at R&D drives scientists to more complex problems that are harder to tackle and that basically also have fewer patients. So it's this rare disease area, for example, that you mentioned, which automatically means that the investment must go up as a result.

SPEAKER_02

Yeah, in a sense, I think I agree with a lot of that summary. Right.

SPEAKER_01

And then we have the regulatory hurdles. I think the interest in getting really safe and effective drugs onto the market has also gone up in society over the last two or three decades.

SPEAKER_02

So I think I am more reluctant to be critical of the regulators now, perhaps, than I was when I wrote the 2010 paper, sorry, 2012 paper. And again, I don't pretend to be a regulatory expert. I look at this from an R&D productivity perspective, and I've done occasional serious bits of work, but not large pieces of work, where I've looked at regulation. For example, I did quite a serious piece of work back in 2014-15 looking at the regulation of antimicrobials, which was, or has been, a particular policy problem. My view, for what it's worth, is that the regulator is pretty innovation-friendly in therapy areas where there is a really serious lack of good therapeutic choices for patients, right? I don't think anyone can look at the current oncology landscape and think that the regulator is being too tough, right? I think people could argue that lots of the drugs being approved actually have relatively limited evidence of efficacy. Now, my view, for what it's worth, is that an awful lot of innovation actually happens once drugs are launched. It's a mistake to think that the innovation process ends with approval. The users learn how to use drugs in their natural environment, and there are lots of drugs now sold that do things much better than when they were launched, because the users have learned how to deal with them. For example, there are lots of chemotherapy drugs where the toxicities are now much less severe than when they were launched, because users have learned how to deal with them; dosing schedules get optimized; we find old drugs have new uses. So I'm personally of the view that if drugs aren't dangerous, even if they're not wildly effective or apparently wildly effective, they probably should be approved, because people find out how to use them over time and their uses get optimized in the real world. But again, certainly in my experience, I'd be reluctant to say the regulator is too strict. I think there's a difficult multi-factor optimization problem here that regulators are trying to solve, right? They've got a whole bunch of things they have to think about. And the idea that you can keep everyone happy, particularly in a world that's as polarized as it is? You're not going to keep everyone happy, right?

SPEAKER_01

To keep everyone happy, I think, is an art that nobody can ever master in life. I think the pandemic was an excellent example of how tough the work of regulators and politicians really is. On one hand, you have this tremendous fear on the market, this new pathogen, people are dying, people are afraid, and you have to do something. And then you have two companies doing research in that area who happened to be very quick into the clinic, which was really surprising to me. I never thought it was possible to align the entire industry towards a goal in a way that, without giving up on efficacy and safety, we really could produce faster, research faster, connect people faster, and have enough capital available. And still, then, you have to make the decision as a regulatory authority: okay, we have clinical trial data, it's probably not the best data, but it was what was possible; and then you can say, okay, we approve it, we go with these data sets; or we don't approve it and say, okay, let's test it with two million more people, which lengthens the timeline, I think it would have been two years more. So it's not an easy choice to go down that route. What I find interesting is, I think you wrote a paper in 2015 or '16 where you discussed the predictive value of some models that are used in early-stage development. So we have discussed the regulatory hurdles, we have discussed the financial hurdles, and also the problems in the clinic and decision-making there. But what about the early-stage science? What were your findings there?

SPEAKER_02

Okay, so I'm going to tie this back both to Eroom's Law and the Better Than the Beatles problem, because, certainly in the evolution of my thoughts about this, it's very closely related, right? One thing that struck me when I was doing the Eroom's Law work: okay, outputs have got a hundred times more expensive; inputs have got thousands, millions, or in some cases tens of billions of times cheaper and better, right? What's the diagnosis? It was clear to me that any serious diagnosis, anything that you thought was a major causative factor, had to be able to explain orders of magnitude of productivity change, right? Unless your causative factor explained orders of magnitude, don't even bother thinking about it, because it can't be important relative to the magnitude of the effects we see. And in the dim and distant past, a very long time ago, I had to do some programming of very simple search algorithms. And I shouldn't make this sound glamorous; it wasn't. It was really people forcing me to learn how to program, not me doing anything clever. So I was programming dumb, simple, beginner search algorithms. But I remembered that the efficiency of search algorithms could be very sensitive to the type of search task that you're performing. Right. So I thought, could I produce a simple quantitative representation of R&D as a search process and then try to understand the parameters to which search efficiency is sensitive? So I produced a quantitative model that is rather simple, and it reflects what most people, although they don't articulate it this way, would recognize as a common-sense view of drug R&D. And the model works like this. You say, okay, there's a universe of therapeutic candidates, and from that universe we have to select something that we think is going to work in people. The universe of therapeutic candidates could be drug targets, or it could be compounds against a drug target, right? The general framework is quite generalizable. But let's suppose we know the target, and the universe is a set of compounds that we might test against that target. Well, what do we do in the R&D process? We have a bunch of measures that we apply to those therapeutic candidates, and we then effectively slice the universe of therapeutic candidates at successive steps to try to identify the ones that are most likely to work in people. So there's an implicit assumption there, for anyone doing R&D, that the measures they're using somehow correlate with clinical utility in people, right? Because if you didn't think your measure, whether it's a binding affinity measure or a tox measure or a PK/PD measure, was correlated with clinical utility in people, you wouldn't bother making it, right? So you can think about representing this universe of therapeutic candidates, which may be the candidates that exist in a high-throughput screening collection, or the candidates we could in principle synthesize, in a measurement space, right, where you've got human clinical utility on one axis, some in vitro scores on another axis, maybe in vivo scores on another axis.
And then you can start slicing that space to see the parameters to which your ability to identify things that will work in people is sensitive. And it turns out, if you do that, at least for the sets of parameters that are relevant to early-stage drug discovery, up until about preclinical, where the average candidate is quite unlikely to work, that the parameter that really dominates is the degree to which your model, and I'll call it a decision tool, the thing on which you're basing your decision, produces a score that correlates with human clinical utility across a set of therapeutic candidates. Right. And you can operationalize that as the correlation coefficient that would exist if you had infinite money and no ethics and could take all of the candidates and test them all in people, right? It's the correlation between your in vitro score and clinical utility in people. So it turns out that your ability to identify good candidates is very, very sensitive to that correlation coefficient. And it's actually surprisingly insensitive, often, to throughput. For much of the relevant search space, and there's a concrete example I may talk about later, changing the correlation coefficient between your model and the human outcome of interest by 0.1, let's say the correlation coefficient goes from 0.6 to 0.7, is more important than changing the throughput by a factor of 10 or even 100. Right. So this then relates back to the Better Than the Beatles problem. A hypothesis I have, which is both consistent with the decision-theoretic treatment and, I think, consistent with history, is that circa 1950 we had a universe of screening and disease models. And of course, we get new screening and disease models over time as well. But we have a kind of universe of screening and disease models, and some of them accurately predict human clinical utility and some of them don't. If you throw R&D resource at those models over time, what happens is that the models that accurately identify clinically useful compounds produce lots of successful drugs. And those successful drugs become the Beatles, effectively. And what that means is that the models that are most predictive render themselves commercially redundant. Right. We don't need models for antihypertensives very much these days. We don't need models for statins these days. Commercially, actually, socially we may need them, but commercially, there's not a huge demand for anti-infective models. And that's because the models are good; they've given us a bunch of good drugs, right? And what we're left with is the models that don't accurately identify drugs that work. And those are things like models for advanced solid cancers and models for Alzheimer's. And ironically, we keep using those models precisely because they don't work, right? They never render themselves redundant. So the big idea is that quality beats quantity. A certain amount of quantity is necessary, right? But in early drug discovery, predictive validity, the degree to which your model output correlates with human clinical utility, is critically important. And over time, one has exhausted the most predictive models, right? And that links this kind of model-validity thinking to Eroom's Law and the Better Than the Beatles problem.
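[Editor's note: a minimal Monte Carlo sketch of the "quality beats quantity" point, recast in terms of positive predictive value. The bivariate-normal setup, the 1% base rate, and the fixed advance fraction are illustrative assumptions, not the model from the published paper.]

```python
import numpy as np

rng = np.random.default_rng(42)

def ppv(rho, n=2_000_000, top_fraction=0.01):
    """P(candidate is truly useful | it passes the screen).

    'Truly useful' = clinical utility in the top 1% of all candidates;
    'passes' = decision-tool score in the top 1%. Utility and score are
    standard normal with correlation rho (the predictive validity).
    """
    utility = rng.standard_normal(n)
    noise = rng.standard_normal(n)
    score = rho * utility + np.sqrt(1.0 - rho**2) * noise
    score_cut = np.quantile(score, 1.0 - top_fraction)
    utility_cut = np.quantile(utility, 1.0 - top_fraction)
    passed = score >= score_cut
    return float(np.mean(utility[passed] >= utility_cut))

for rho in (0.0, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9):
    print(f"rho={rho:.1f}: PPV ~ {ppv(rho):.3f}")
```

In this toy setup, screening ten times more candidates while still advancing the same top fraction leaves the PPV unchanged, whereas a 0.1 shift in rho moves it substantially, which is the qualitative point being made; the published treatment is richer.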

SPEAKER_01

That's very interesting. Business people like me have the tendency to oversimplify and to find ways to get to the big idea that you mentioned. So if I understand you right, the solution to the productivity problem in the industry is not throwing more candidates into the process, so increasing the quantity; it's focusing more on the quality of the predictive models that we use.

SPEAKER_02

The quality of the decision tools, yeah. And predictive models are only part of that, right? There are also things like portfolio management, also the processes. So, for example, you can have great models, but if your management processes are political and biased, right, or you don't listen to the scientists who understand the models, you just throw away a lot of the information that might be available for your decisions, right? So there are two components: there's a technical component, and then there's a managerial component of piping better information into decisions.

SPEAKER_01

Yeah. But the question that I can't resolve in my mind is, I mean, you brought up Alzheimer's, and I think it's a very good example. Alzheimer's, when we look at the biology, is very complex. It's a complex disease, and I think some people would even argue: is it one disease, or is it several diseases at the same time that are just subsumed under one term? And on the other hand, you mentioned, if I understood you right, that the models we use for Alzheimer's in the drug development space are maybe not the best ones yet. But how can we improve that? Do you have some ideas how scientists can come up with better models when it's so complex to understand the problem in the first place?

SPEAKER_02

So I came across those decision-theoretic and historical ideas around 2015, 2016. And I realized that most people in the drug industry, if I explained it to them, would say: yeah, that's very interesting, but what do I do? Right. So that's what I've spent a lot of the time since 2016 trying to work on. And I wrote a paper, which came out last year, that was an attempt to synthesize what I've learned about what to do. That's really come from doing a bunch of consulting work in the biopharma industry, right, with people actually grappling with these problems for real. It's come from talking with a lot of people, and it's also come from understanding better a lot of the practical literature around measurement and evaluation. And I would say there's a number of steps in the process, right? The first step is you just need to convince people of the quantitative and historical result, right? You need to convince people that predictive validity is more important than they think it is. This is a slightly hard sell. No one working in the drug industry thinks: right, I'm going to do a study and I'm going to use some really bad models. Everyone knows that good models are better; the surprising result is the quantitative power. It's that models that are a little bit better can give you the same productivity edge as doing 10 or 100 times more work, right? So impressing on people the quantitative result matters. The next step is to say: well, okay, if model validity is really important, how do I think about measuring it? How do I evaluate it? So you gave the example of Alzheimer's. I know a little about Alzheimer's; it's not one of the conditions about which I claim to know a lot, so there are probably some other therapy areas where I can give better examples.

SPEAKER_01

Just pick another one, then. It was just something that popped up in my mind where I get the feeling that it's complex and hard to tackle.

SPEAKER_02

Actually, I think I know enough about Alzheimer's to illustrate it, right? My illustration may be wrong in detail, though. So there's a very good point you make, which is: okay, we don't think Alzheimer's models are good. Right, well, if you were about to deploy millions of dollars on an Alzheimer's discovery program, you would want to know more than "are Alzheimer's models good?" You would want a working, operational definition of what good looks like. So one very practical thing you can do is produce what you might call a target model profile. What would a good Alzheimer's model look like? There are some tools available to help you do that, and most people don't use them. Developing a target model profile has a number of steps. The first step is you articulate the characteristics of the human pathology that you think it is important to reflect in your models.

SPEAKER_04

Right.

SPEAKER_02

So this requires a very detailed description of Alzheimer's. Right. So you raised the point that Alzheimer's may not be one disease, right? Well, if Alzheimer is several different distinct diseases, you would need several, you know, you would reflect this in your target model profile. Well, actually, there's not just one disease here, there are several diseases. So or to put it another way, I think the way models are generated is different from the way we do a lot of science. And I'm gonna be slightly unfair here, but a lot of animal models are effectively retrospectively justified. So people will make some genetic changes to a mouse or a rat. They will notice that the rat or mouse then exhibits a few of the features of the human pathology, right? And then they will assert that that animal is a model of Alzheimer's, right? If you wanted to do it properly, what you would do is you would specify what a good model of Alzheimer's looks like. You would come up with a clever checklist that would cover a number of domains, like to what extent does, you know, what are the main features of the human pathophysiology you want to recapitulate, right? What are the tests and endpoints that we can apply in people that we should that should be applied in the model? Uh is the model, what is the statistical and experimental hygiene? Right. So our is our model biased? Uh uh, how big are the error bars, right? And and then there's another thing you need to think about with models, which is concept which is very common but not in drug discovery. It's common in other fields of science, which is what you might call models of domains of validity. Right. So in most fields of science, people will know that this model can predict some things, but not others, right? So, you know, Newtonian mechanics is great for predicting some things, right? It's good for predicting the orbits of planets. It's not good for predicting the motion of electrons around an atom. So you wouldn't you wouldn't want to use it for that. But you've got quite a clear understanding of the domains within which this model is predictive and which it isn't, right? So one of the things we advocate is you look at the sort of biological recapitulation, you look at the tests and endpoints, you look at the statistical and experimental hygiene of your model, and then what you do is you then try and assess its domains of validity. Um i.e., what features of Alzheimer's treatment would we expect our model to predict? And you might say, well, in this particular subpopulation of patients, it might predict X, Y, and Z, but in that subpopulation of patients, it's not really going to predict anything, right? And and and and um so and and again, I think the Alzheimer's question you raise also raises another important point, which is most people don't have a good language to talk about these things. So if you get a psychopharmacologist, pain biologist, cancer biologist, and an Alzheimer's person in a room, they don't use the same language to talk about whether models are good or bad. Right? So if you've got a portfolio management committee, which is composed of those people, and someone comes in with a model and they say, Well, how good's this model? It's pretty good, you know, well, they they will all have completely different views about what that means. Right. So another practical aspect is you know, you need to people understand it's important. Then they need a lingua franca. They need a they need a language that they can talk about the same sort of concepts. 
Because if you don't have a lingua franca within a company, it's quite hard to manage these things, right? So it's important: here's a language, here's a set of tools you can use to assess model validity. And then the other thing that we've done a lot of work on is frameworks for tying the evaluation of models to financial value: how much is it worth to have a model that performs better rather than a model that performs worse?
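As a hedged illustration of tying model quality to money, here is a toy expected-value calculation in which a screen's sensitivity and specificity stand in for its predictive validity. Every number below is invented for illustration; this is not the framework from the paper, just the generic Bayesian arithmetic behind any such framework.

```python
# Toy sketch: expected net value of progressing a screen-positive candidate,
# under two screens of different predictive validity. All numbers invented.

def expected_value(sens: float, spec: float, prior: float,
                   payoff: float, cost: float) -> float:
    """sens/spec: how often the model flags truly good / rejects truly bad drugs.
    prior: base rate of truly useful candidates entering the screen.
    payoff: value of a clinical success; cost: cost of one progression."""
    p_pos = sens * prior + (1 - spec) * (1 - prior)  # P(screen positive)
    ppv = sens * prior / p_pos                        # P(truly good | positive)
    return ppv * payoff - cost

weak = expected_value(sens=0.6, spec=0.60, prior=0.02, payoff=500e6, cost=30e6)
strong = expected_value(sens=0.8, spec=0.95, prior=0.02, payoff=500e6, cost=30e6)
print(f"weak model:   ${weak/1e6:,.0f}m per progressed candidate")
print(f"strong model: ${strong/1e6:,.0f}m per progressed candidate")
# With these numbers the weak model's positives are worth progressing at a
# loss, while the strong model's positives carry a large positive expectancy.
```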

SPEAKER_01

That's a good question. It makes me think while I was listening. The usual business management and economic approach is: when you have a problem, gather people, throw money at them, and give them enough input. If I understand your explanation right, that doesn't help in the pharma industry, because at the end of the day it makes people pursue probably the wrong targets, the wrong goals. And if I understand you right, your recommendation leads more towards asking the question: do we understand the problem well enough yet to start solving it? So spending much more time on defining the problem and looking at the validity of the models people use to tackle it, before actually executing. It sounds to me like, as an industry, we are very quick to jump on problems and try solving them, when we first should sit back, relax, and think about the definition of the problem in the first place. Would that be a summary we could use?

SPEAKER_02

Yes, I think so. And in the work I've done, it's very interesting: the first step in this is very often, not always, but very often, defining the clinical state that you want to model, effectively defining what a good model looks like. Because until you've done that, you don't know whether the tools at your disposal are likely to give you the right decisions, right?

SPEAKER_01

That's a very interesting interplay of forces then, because on one hand there is society. When we can identify a problem for the first time, society understands: okay, there is a disease, and this is something we have the potential to tackle. And society as a whole starts pressuring politicians to come up with a solution. And the only thing politicians can do then is put in resources, basically; this is the only tool they have. It's like what the Fed and the Biden administration are doing currently: they can basically print money and hand it over to scientists, and with the money, with the capital, comes the expectation to have the problem solved in due time. And in politics, I think we think in four-to-five-year time frames. And if we want to do it properly, what your research says is that the majority of those four to five years should basically be spent thinking about the problem in the first place, which is not the output measure that is generally accepted in economics.

SPEAKER_02

Well, I actually don't think it's four to five years thinking about the problem. I think in many cases it's much less. And I think in many cases one overestimates the extent to which the problem has been thought about at all.

unknown

Right.

SPEAKER_02

So I'll qualify that by saying, you know, I interact with quite a lot of drug industry folks, and the failure point doesn't always occur in the same place. Some people do a great job, and I can talk about where I think a great job is done if you want. But there's a whole bunch of different failure modes where a good job isn't done. One area where I've done a little bit of work is ischemic stroke, which has been an R&D nightmare. Lots of drugs in ischemic stroke have apparently worked in animals and then failed to work in people, for a whole bunch of reasons, some of which we talk about in the most recent paper. I don't for one moment think that the work I've done means I know more about stroke models than lots of people working in the drug industry; I've met lots of people in the drug industry who know far more about stroke models than I ever will. But to give you an anecdote: I was talking about model evaluation to a senior scientist in the drug industry who had done a lot of work on stroke, and he said, yeah, but I know all that. I know which of the models are good and which are bad. But the problem is the decisions are effectively political, right? Now there, I think, the work we've been trying to do on formalizing this decision-quality-based view and tying it to the economics is not going to help that guy evaluate models, but it might help him explain his evaluation to his portfolio manager or his senior management team, and make it easier for them to do the right thing. So that's one level of failure. It's not technical; in much of the drug industry, the depth of technical knowledge is superb. They know more about these problems than anyone else. But in other places, people have simply chosen models for reasons of tradition, of availability, because it's what everyone else is doing. There hasn't been much intellectual effort assigned to the model choice; you just use what's available. So it's a very heterogeneous picture. And then you've got other companies, and again I use Vertex as an example, and I'm sure they're not the only people doing it, but I think they're just a bit more public about their R&D strategy. They use different language to me, but effectively they work very hard to identify the R&D problems where the model systems are likely to give you a true result, even if that makes other things more difficult. So they focus on modelable diseases, typically a lot of rare human genetic diseases, and they say, look, we can get patient-derived tissue, and we really want to focus on things where what we see in the lab is going to translate very, very directly into patients. And that means they don't work on a bunch of diseases. They're not going to be working on Alzheimer's, because they look at conditions like that and say, well, we think the decision tools in that therapy area mean it's going to be difficult, and we may not generate good returns.

SPEAKER_01

There's a very important question that you raised. You mentioned the Vertex Pharmaceuticals case. And I think, to summarize it, what you describe is basically a lack of attention towards model quality in R&D productivity. Do you have further examples, besides the Vertex example, where scientists do a great job of working first on defining the problem and then executing? Or maybe we dig deeper into the Vertex example. And can you also give an example where this lack of attention towards model quality has actually hindered R&D productivity? Just to bring more colour.

SPEAKER_02

So I'll give you some examples of hindrance, because I've collected a few over the years, and there are some really nice ones. I'll use these also to illustrate the dimensions on which I think models should be evaluated. And again, I'm not the first person to advocate this; if you look at our 2022 paper, lots of people have worked on this problem before. But I don't think it's been packaged in an easily digestible way. One of my favorite examples is ischemic stroke. I'm sure anyone listening to this will know that ischemic stroke is caused when you get an occlusion, typically a blood clot, in a blood vessel in the brain. You then get an interrupted blood supply, brain tissue dies, et cetera. And this is a therapy area where there's been spectacular translational failure. Lots and lots of things have worked in animal models of stroke and have then not worked in people. As far as I'm aware, there are really only two classes of drug that work in people, and neither is spectacularly successful: aspirin, I think, has been shown to have some effect, and some clot-busting drugs have also been shown to have some effect. And if you think about it, ischemic stroke is not like Alzheimer's, or you wouldn't think it is. It's not some horribly complicated multifactorial thing that's lots of different diseases. The cause is fairly obvious: a blockage in a cerebral blood vessel. And one would imagine that blocking a cerebral blood vessel in an animal might recapitulate some of the biology of blocking a cerebral blood vessel in a person. So the failure of translation in stroke is a bit more of a puzzle, and there's a whole bunch of reasons why translation was very difficult there. But there is one example, which I think is a lovely example of how not to do things, and this is a drug called tirilazad. Tirilazad was successful in maybe 19 or 20 animal studies: positive results in 19 or 20 animal studies giving this drug after ischemic stroke. It then went into human trials, where it did absolutely nothing. And coming back to my point that you evaluate models against the target model profile, i.e., what does the human clinical state look like? In the case of tirilazad, when people went back to try and understand the failure, they looked at the animal studies and found that the median delay between inducing the ischemic stroke in the animals and giving the drug was 10 minutes. In human stroke trials at the time, and it's a bit quicker now, the median delay between a human having a stroke and getting the drug was five hours.

unknown

Right?

SPEAKER_02

So in this case, even if you'd recapitulated the pathophysiology, even if your animals were having a stroke that was just like a human stroke, the fact that in your animal models you waited 10 minutes and in the human you wait five hours massively decorrelates the results of the animal models from the human. It could well be that if you gave a human the drug 10 minutes after they had a stroke, it would work. So there's an interesting example where tests and endpoints were a big problem: people didn't reflect the human clinical state in the model. Another example is antimicrobials. That's an interesting example because we went from an R&D process that worked, to an R&D process that didn't work, and then back to an R&D process that works, at least from a discovery perspective. The second useful antimicrobial drug was sulfanilamide, sold as Prontosil, which was discovered at Bayer in Germany around 1930 by a guy called Gerhard Domagk. At the time, we didn't have the large medicinal chemistry collections we have today. Domagk tested a couple of hundred compounds, which were dyestuff derivatives, and he found sulfanilamide from that screen of a couple of hundred compounds. Now fast forward to 1995 to 2005: a bunch of big pharma companies, Glaxo have been the most vocal and wrote up their experience, but other companies as well, went on broad-spectrum antibiotic discovery missions and decided to throw their new technology at the problem. They did a very clever thing. They sequenced the genomes of a bunch of pathogens; pathogens have quite small genomes, so it was an early exercise in genomics. They found genes that were essential for the survival of a wide range of pathogenic species but which didn't have close homologues in people, because those would be the ideal candidates for broad-spectrum antibiotic targets. And then across the industry, over a hundred high-throughput screening campaigns were run, testing well over 10^7 compounds in aggregate against those rationally identified drug targets. And the entire drug industry found not one compound that was worth putting into clinical trials. How could the global drug industry, testing well over 10^7 compounds against a hundred targets 70 years later, not find anything useful? Well, go back to the decision-theoretic maths I was alluding to, this idea that predictive validity is important: how much does your assay correlate with the human outcome of interest? Domagk was screening his compounds in mice with sepsis. He had a live whole-animal screen; he had some other screens as well, but there was an important whole-animal component in screening his compounds. In 1995 to 2005, the industry had moved wholesale to in vitro high-throughput screening. You would express the bacterial gene products, turn them into proteins, put them in little dishes, and effectively look at binding affinity, squirting the compounds in the high-throughput screening collections against those targets.
Now, the decision-theoretic maths says: if Domagk's mice correlate with the human outcome of interest at 0.8, and high-throughput in vitro screens correlate with the human outcome of interest at the 0.2 level, you would expect the best one out of 200 compounds tested in the mice to perform better in people than the best one out of 10^7 compounds tested in vitro, right? And we now know why the in vitro screen effectively decorrelates the results from the human clinical outcome of interest. One reason is that high-throughput screening collections have lots of compounds that don't get into bacteria, or else are pumped out of bacteria, so you've effectively decorrelated your assay that way. And secondly, the genes that are essential for bacterial survival in vitro are not the same as the genes that are essential for bacterial survival and bacteria doing well in vivo, so that decorrelates the targets you screened against from the targets that matter. So there's a really good example where the industrialization of the process inadvertently decorrelated the results of the process from the human outcome of interest, and made it less productive despite huge gains in brute-force efficiency. And since then, not surprisingly, there are lots of very clever people doing antimicrobial discovery who have realized that, and people have gone back to phenotypic screens, at least as a starting point, because they realize the results are likely to be more valid. So those are two examples: the ischemic stroke example, where effectively the tests and endpoints were wrong, and the antimicrobial example, where we effectively lost the biological recapitulation by industrializing the process.
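The decision-theoretic claim can be sketched quantitatively. If the screen's score and the true clinical utility are assumed jointly normal with correlation rho, the expected true utility of the top-scoring candidate is rho times the expected maximum of n standard normal draws. The joint-normality assumption is ours for the sketch; the 0.8 versus 0.2 correlations and the 200 versus 10^7 screen sizes come from the example above. Blom's standard approximation is used for the expected maximum.

```python
# Sketch: expected *true* utility of the top pick from a screen of size n
# whose score correlates with the human outcome at rho, assuming the score
# and the true utility are bivariate normal (so E[truth | top pick] =
# rho * E[max of n standard normals]).

from statistics import NormalDist

def expected_top_pick_utility(n: int, rho: float) -> float:
    """Expected true utility (in SD units) of the best of n screened compounds."""
    e_max = NormalDist().inv_cdf((n - 0.375) / (n + 0.25))  # Blom: ~E[max of n]
    return rho * e_max

mouse = expected_top_pick_utility(n=200, rho=0.8)   # Domagk-style in vivo screen
hts = expected_top_pick_utility(n=10**7, rho=0.2)   # industrialised in vitro screen
print(f"whole-animal screen (n=200,  rho=0.8): {mouse:.2f} SD")
print(f"in vitro HTS        (n=10^7, rho=0.2): {hts:.2f} SD")
# The small, well-correlated screen is expected to deliver the better compound,
# despite screening 50,000 times fewer molecules.
```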

SPEAKER_01

That's interesting. The question that pops up in my mind: in your opinion, when I look at the value chain in the pharma industry, in a simplified version, I have the scientists and the research institutions who are doing basic science. Then in between I have companies, which nowadays are mostly small biotech companies, highly focused project teams that get venture capital or public funding, to oversimplify it, to move basic ideas into products. And the cutoff point, in my opinion, is somewhere I can't really pin down in science; it's more a fluid process. In translational research, very often there is no hard cutoff point. People work together, then you have a company, you license into the company, teams work together, and somewhere in preclinics the scientists phase out and we are clearly in the development teams' territory, more or less 90% to 10%, compared to 100% science at the start. By the end of preclinics we probably have 90% development and 10% scientific teams, in my world. Then we are in the clinic, and I think the cutoff point towards the pharma industry, also oversimplified, is clinical phase two, where, with efficacy and safety results, pharma is happy to license and bring it to the market. What I understand from what you say now is that the complexity of the process is mostly driven by industrialization on top of the predictive models: we put a lot of models in and pressure scientists and development teams to move forward, while we should spend more time on defining targets, defining models, defining the processes before we start moving. Now, when I look at my picture of the value chain of the pharma industry, where in this process do you see the amendments your work suggests are necessary?

SPEAKER_02

Yeah, so I think the critical thing is incentives: if incentives aren't right, not much happens. So I would say the primary consumers of the stuff I'm talking about, at the moment, I'll list them. An obvious set of primary consumers are venture capital firms, in that if I were still involved in deploying R&D capital, knowing what I now know, I would have very rigorous processes around model evaluation. If people were pitching to me, I would require them, before they come and pitch: look, here's how I would like you to think about models, and if you're serious about getting investment, here's a bunch of questions you're going to have to answer. And I think, arguably, the internal pharma equivalent is the portfolio management and resource allocation process: you need to tie resource allocation to these questions around model validity in a more rigorous and formal way. And then, in parallel, you need to give the scientists and the project teams the training and the tools necessary to start doing evaluations. For what it's worth, and this is not a door I've pushed on, I've had some discussions but haven't really pushed, my view is that the biomedical funding agencies, whether philanthropic or public sector, should also require these sorts of arguments to be made when funding projects that are likely to involve therapeutic development and models. And at the moment, very often they just don't. It's not that they don't do it very well; very often they don't do it at all. I've certainly got personal experience of reviewing grant applications, again, not a huge amount, but some grant applications in the UK for antimicrobial money from the UK's equivalent of the NIH, an outfit called the MRC, where, at least as far as I recall, people could make the case for getting money to discover new antibacterials in disease X without really having to give any justification at all of why the model systems they would use in disease X are likely to give them the right answer. So, ironically, what I now think is the single most important thing that should have been explained in the grant application wasn't in the grant application. There's lots of other stuff in there, but that particular factor was not. So I think we're often starting from quite a low base. But the incentives need to be there, and that means people allocating capital need to put more formal rigor around the evaluation of the decision tools that are going to be used in R&D projects.

SPEAKER_01

Yeah, it's interesting. I mean, the economic model of a company is quite simple: you input something and you need an output at the end of the day. Take Tesla, for example; Elon Musk is all over the media right now with his acquisition of Twitter. When Elon Musk promises to deliver a self-driving car, at the end of the day the success measure is: do we have a self-driving car or not? And it's measured against the capital we put in. If there is no self-driving car on the market after trillions of dollars were invested, it's a failure. When I translate this simplified economic principle to the drug development space, coming from the same point of view, saying only approved drugs are the measure of success compared to the capital, maybe we make a mistake in the first place. So maybe we should widen the definition of success in drug R&D, so that failed drugs also count towards the success of the drug R&D process. I think at the end of the day, even if we increase the predictive validity of decision tools, we will still have a high number of failed drugs. And you mentioned also that the low-hanging fruit have been harvested. In the 1920s, 1930s, 1940s, or 1950s, a lot of obvious problems were unsolved. You mentioned diabetes in the discussion, for example. Now we have solutions. And to improve on something that already solves 90% of the problem is always very costly.

SPEAKER_02

So I think broadly I agree with you, and I'll make a few comments. I think logically there's no particular reason why the biopharma industry shouldn't be in some sort of gradual decline, because the Better than the Beatles problem means that, over time, more and more of the stuff we need is likely to be cheap and generic, and that constrains investment in those therapy areas. That seems to me true. And it may be that in the long run more capital shifts to other kinds of biomedical innovation. I don't have any real quibble with that argument. I do, however, have a pedantic criticism of low-hanging-fruit arguments, which is that although they may be true, they also tend to be tautological. So I've sometimes said: if the only tool you have for measuring the height of the fruit is the rate at which you're picking fruit, and you notice that the rate at which you're picking fruit is declining, you will always blame the low-hanging fruit problem. But actually, it might be that your ladders have got shorter. And the history is interesting here. I think there's been a qualitative change in the way drug R&D is done, and lots of things that were discovered a long time ago wouldn't be easy today. There are lots of very successful drugs that, if you showed them to a medicinal chemist today, or put them through a modern drug R&D program, would never be discovered. Paracetamol is an example: it's kind of an ugly compound with lots of problems. And probably most antidepressants would never have been discovered using the methods available today. So I do think one needs measures of difficulty that don't simply reflect whether or not something's already been discovered.

SPEAKER_01

Yeah, probably. So the big question at the end of our conversation, for the last part, is strategies to improve this. When we stay with your productivity models, there is a lot of discussion currently on the market about artificial intelligence. Which role do you see for it? In one podcast I had a discussion where I thought it was superstition, and the colleague said no, maybe not. Wouldn't it be nice to calculate the entire drug development process in a supercomputer, from basic science up to approval, so that we don't need animal models, we don't need human models; a computer just comes up and solves all the problems for us, in an ideal world. What is the reality of artificial intelligence for improving drug R&D productivity in terms of safety and efficacy, and what is just a superstitious story?

SPEAKER_02

Okay, so I'm not going to be too quantitative here, but I would say I think AI is important, but its effects will be incremental and modest in the near term. And I want to say here: I don't want to pretend I know more about AI in drug R&D than I do, but also I want to make it clear that I'm not a complete Luddite. I did spend a couple of years actually doing what today would probably be called AI-based drug discovery, around 2012, 2013, and I do have some professional interests now in relation to AI-based drug discovery companies. So I think there are real opportunities. And I would urge people to read. Sometimes you come across papers that appear brilliant, possibly because they're a very articulate and clear writing-up of your own prejudices, done better than you could have done yourself. There are two papers by a guy called Andreas Bender, with Cortés-Ciriano, which came out in 2020, which I think are an excellent summary of how, if I knew more about the subject, I would represent it; i.e., they play to my prejudices. They really focus on data issues, and the idea is that data constraints make the application of AI to chemistry much more amenable and useful than the application of AI to biology, except perhaps within certain areas like protein folding. And then there's a second thing, which is that one needs to be clear what one means when one talks about AI. As AI has become fashionable again, lots of things call themselves AI, but many of those things are not that new. For example, the protein folding problem is a classic case where advances in AI seem to allow us to do computational prediction of protein folding a lot better than we could a few years ago. That's probably true. But the question is, what's really driven that? It's been driven by improvements in the algorithms, but also by improvements in the data. Before we had X-ray crystallography, which actually told us what the structures of proteins were, you couldn't really test whether your prediction algorithms gave you the right answer or not. So there's been a co-evolution of data and models that allows us to do prediction. And if you look at the drug industry, it has had clever people doing things that look a bit like AI for a very long time. They didn't call it AI; they called it computational chemistry, they called it structural biology, they called it molecular docking, they called it genomics. Clever people doing sophisticated quantitative and computational methods is old. And the constraints, I think, are probably more data-related than algorithmic. And the creation of data is expensive and takes a long time.
And I'll give you one nice example from a bit of work I've done recently, though again, I would go and look at the Bender papers, because I think they are a remarkably clear exposition of how the data quality, quantity, and structure in biology is very, very different from many of the areas where AI has proven very successful. The example is not AI as such; it's around prediction of liver toxicity. Compared to predicting efficacy, predicting liver toxicity is, from a data perspective, a relatively easy problem, and I use the term relatively easy only compared to efficacy prediction. That's because lots of drugs have been into models and have been into people. So you can get data sets of drugs where we know how toxic they are to people, and then we can put those same drugs through in vitro systems, or through computational systems, and see whether our predictions are any good. But there's a real problem here, in that the human truth data, even for something like liver tox, isn't very good. If you want to know how toxic a drug is to humans, which is the truth state you're trying to predict, the best you can get is a kind of five-point ranking scale, where you look at regulatory and other sources and say, okay, some of these drugs are a one. I can't remember whether a one or a five is the most toxic; I think a one is the most toxic. So some of these drugs are a one, really toxic from a liver perspective. And some drugs are a five, which means effectively the only way they could hurt your liver is if someone dropped a box of them on your liver; they're completely non-toxic. But then if you look at what the ones are, the ones are a completely heterogeneous bunch. Just because these things are all very liver toxic doesn't mean they're the same. Some of them will be drugs where, if anyone took 30 tablets, they would get horrible liver damage. Some of them could be drugs where one in a thousand people has a genetic mutation that means, in that one in a thousand, the drug will be terribly toxic, while it'll be fine for everyone else. Or there could be drugs that cause a weird allergic reaction that damages the liver in a small fraction of people. And then you can only get this data for maybe a thousand drugs, probably fewer.

unknown

Right.

SPEAKER_02

So this is a very small data set, and it's really heterogeneous, poor data. So, in a sense, the characterization of the truth state is largely inadequate for computational approaches. And I think that's what you find when you look at a lot of biology: we don't have much data on the truth state, and when we do have the data, it isn't characterized or structured in a terribly useful way. That, in my view, is the main constraint. So AI will continue to be very, very useful, as computational and quantitative methods have been useful for the last 40 years. It will get better in some particular places. For example, I understand AI is really good for engineering around chemistry patents. So there are particular places; I think the chemistry side is great, but I don't think that is most often the rate-limiting step in modern drug R&D.
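One standard way to see why a noisy truth label caps any predictor's apparent performance is Spearman's classic attenuation formula: the correlation you can measure is the true correlation shrunk by the square root of each measurement's reliability. A minimal sketch, where the reliability assigned to the five-point liver toxicity label is a purely hypothetical figure:

```python
# Sketch of why noisy "truth" labels cap what any predictor can show:
# Spearman's attenuation: r_observed = r_true * sqrt(rel_pred * rel_label).
# The reliability values below are hypothetical, not from the liver-tox data.

from math import sqrt

def observable_correlation(r_true: float, rel_pred: float, rel_label: float) -> float:
    """Measurable correlation between predictor and label, given the true
    correlation and the reliability of each measurement."""
    return r_true * sqrt(rel_pred * rel_label)

# Even a perfect predictor (r_true = 1.0, rel_pred = 1.0) scored against a
# coarse, heterogeneous 5-point label with, say, reliability 0.5 can never
# show more than ~0.71 correlation on the benchmark.
ceiling = observable_correlation(r_true=1.0, rel_pred=1.0, rel_label=0.5)
print(f"measured-correlation ceiling: {ceiling:.2f}")
```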

SPEAKER_01

Yeah, ChatGPT, for example, is very good at writing, so AI can do a lot. To summarize what you said about artificial intelligence: basically, the truth is more in the data than in the algorithms that we have.

SPEAKER_02

Or the rate-limiting step. I think on the biology side the limitation is around quality and quantity of data; it's not around the algorithms. And if you think about where AI has really transformed things, you've had to have the combination of algorithms and data. So that's why I think it's not the rate-limiting step for large swathes of drug R&D, but it will be very helpful for certain things. There's another analogy I think is useful in drug R&D, thinking about revolutions and why revolutions are rare. Drug R&D is a bit like a hurdles race. Particular technologies may make you much better at getting between hurdles two and three, or between hurdles seven and eight, but they don't help with hurdles one, four, six, ten. So that's why we don't see as many revolutions: it's just a complicated multi-step process, and individual pieces of the jigsaw have a relatively constrained productivity effect.
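The hurdles-race point is the same arithmetic as Amdahl's law for serial pipelines: the overall gain from accelerating one step is bounded by that step's share of total time. A small sketch, with hypothetical step counts and speed-ups:

```python
# Amdahl-style sketch of the hurdles race: a big speed-up at a few steps of a
# serial pipeline moves overall throughput only a little. Numbers hypothetical.

def overall_speedup(step_fractions: list[float], step_speedups: list[float]) -> float:
    """Overall speed-up of a serial pipeline when individual steps are sped up.
    step_fractions: each step's share of total time (should sum to 1.0)."""
    new_time = sum(f / s for f, s in zip(step_fractions, step_speedups))
    return 1.0 / new_time

# Ten equal hurdles; a new technology makes two of them 10x faster:
fractions = [0.1] * 10
speedups = [10, 10] + [1] * 8
print(f"pipeline speed-up: {overall_speedup(fractions, speedups):.2f}x")
# ~1.22x overall, despite 10x gains on two whole steps.
```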

SPEAKER_01

Do you see... I mean, my impression over the last 10 or 15 years, basically, was that when I compared the capital side of drug development in Europe to the United States, I came to the conclusion: okay, we have great science, we have a lot of smart people working on solving problems, but when I look at the development side, I see a scarcity of money in Europe. It got better over the years, but there is still more capital on the market in the United States than in Europe. What I get from our conversation now is that capital probably also contributes to the problem, but is not its main driver. How much weight would you give to the availability of capital in holding back productivity in the drug development process?

SPEAKER_02

Yeah, so I think here one should think about relative versus absolute. I'll give you a rather weak answer, but a true one. I think the spread between capital availability in Europe and the US is narrowing. A lot of Americans have realized that Europe is what you might call underventured, so actually you can get some better deals over here, because historically less capital has flowed into ideas of equivalent quality.

SPEAKER_01

But it's also more complicated, I've heard that very often.

SPEAKER_02

And there's another observation, which is that biotech investing is clearly cyclical, and I've seen some quite convincing studies suggesting that some of that cyclicality is quality-related. When times are hard, on average the stuff that gets funded is better than when times are easy and the cost of capital is zero. And this has a pro-cyclical effect: you get lots of stuff funded when capital is cheap and there's lots of money around; a lot of that stuff isn't very good, a lot of it then fails; that tends to push some people out of the market; it then gets harder to raise capital, which pushes the quality back up again. So at least some people have made quite a compelling case that some of the investment cycles we've seen in early-stage biotech are related to that pro-cyclical effect, the fact that when there's more capital, the average quality goes down. That says straightforwardly that there probably is a relationship between capital availability and quality: when more capital is available, the average quality of what gets funded goes down. But being involved as I now am in very early-stage biotech and raising money, it's also clear to me that there are knowledge networks that in some parts of the world make it easy to do things and in other parts make it difficult. If you're based on the west coast of the US, if you happen to have had your academic career at Stanford, then everyone you know will have started a biotech company, and that genuinely makes it easier to start a biotech company. Whereas if you're based in Bilbao, or in lots of mid-sized European cities, that's simply not the case. So I don't have a neat answer, but I do think there is a transatlantic difference, I do think the local environment can make it much easier, and it's also clear that when capital is freely available, the average quality goes down.

SPEAKER_01

Those are two interesting points I would love to remark on. When I started in life science in 2006, I was in Vienna, in Austria. Before that I was in mergers and acquisitions with public companies, so the response I got from my private environment was: are you crazy? Why are you throwing your life away to start in a company that doesn't produce any revenues? This company is bound to fail. The main driving force in that company was Rodger Novak, who later on founded CRISPR Therapeutics, and the reality turned out differently. But the social environment here in Austria 15, 16, 17 years ago was completely different than it is now; now it's more generally accepted to go into entrepreneurship. And having said that, when I think back to my fundraising experiences, I always thought that when I talk to VCs, it's an easy sell to say: okay, look, we have the preclinical models completed, we need 10 million dollars, just ballpark figures, to go into phase one, and in two years' time we will have the results of the phase one, we will have de-risked the program, and either we stop it or we move forward. It's an easy sell. From what I've learned from you in this conversation, I think the most important question is really: do we have the right models?

SPEAKER_02

Exactly. It's not: have you done preclinical? It's: what preclinical have you done, and why? And why is this going to predict what happens in people?

SPEAKER_01

But do you think it's an easy sell to the venture world, as you perceive it right now, to say: okay, we need probably not 10 million but 12 million, and two million are allocated only to the question of whether we are doing the right studies?

SPEAKER_02

Okay, so for what it's worth, I think we've done the decision-theoretic maths right, and I think we've got the financial maths right.

unknown

Right.

SPEAKER_02

And if we've got the maths right, then VCs that don't do this will eventually be outcompeted by VCs that do. Now, it may be a very slow process. But fundamentally, if you don't do enough work on the things that are actually the most important things driving investment returns, you're not going to generate good returns in the long run. So I'm not expecting anything overnight. But we published this much more practical, how-to kind of guide late last year; it came out in October last year in Nature Reviews Drug Discovery. And it's quite interesting: a lot of the incoming interest we've been getting has been from the investment side. I would say people deploying capital have been more interested in this than incumbents. The people who have been really interested in it are people deploying capital, and then also people who think they've got novel drug discovery technologies that the market is undervaluing. They're also very interested in the work, because their view is: we could bring lots of value, but at the moment we can't evaluate our technology in a way that we can explain to customers, and we need to educate the customers, because if they can't tell that our technology is better than other people's, they're not going to pay more for it. So it's quite interesting that the interest has come from the investment side and the producer side; I've had less incoming from the big pharma side.

SPEAKER_01

Oh, really? I would have thought big pharma should have an interest in that too. So it's really more interest from the venture world?

SPEAKER_02

Well, this is just incoming. This is just people emailing me or calling me, right?

SPEAKER_01

Yeah, that's great. So basically the investment world has an increased awareness of the findings in your studies, which means that in future there will probably be more emphasis on questioning the research in the first place, whether it targets the right problem.

SPEAKER_02

And again, some of the work I'm going to be starting fairly soon is to try to implement this practically in certain sorts of innovation systems. It's about trying to systematize some of this so that the scientific producers and the scientific funders in an ecosystem have a common understanding of the sorts of things that are important and need to be explained about the decision tools being used, so that people can't ask for funding having provided no information about the basis of their decision tools. It's trying to generate this lingua franca: a common set of tools, a common set of standards people can use when trying to understand the likely predictive power of the models.

SPEAKER_01

So that means, when we think about models, this is also a call to action for policymakers, in my opinion. There's a lot of grant funding going into the market at early stage. I didn't do any statistics and I didn't read any papers on that, but just from my experience over the last 17 years, I always had the feeling that a lot of funding goes towards development approaches, let's call it that: put compound X, Y, Z into animal models X, Y, Z, without questioning whether these are really the right models. So would you also see a call to action for policymakers?

SPEAKER_02

Absolutely, absolutely. In terms of my personal action, I've done less there. But the biomedical funding agencies, whether philanthropic or public sector, in a sense, if you're funding therapeutics, then doing it without a very, very clear focus on the models, particularly in therapy areas that have proved difficult, seems to me irresponsible. And I think there's another thing, which we haven't touched on, which is that the economics of model development are sometimes not great for the private sector. To give you a caricature of this: I was talking about these sorts of ideas a while ago with a very well funded US biotech firm that has a whole bunch of really cool novel chemistry technologies, and is going after oncology, where of course we know the models are terrible. And I said, well, would you think about deploying any capital to try and improve the models? And they said, no; we know the models are terrible, but actually you can't make any money that way. If you develop a better model which says mechanism X is important in disease Y, the minute you get positive phase one or phase 2a data showing that mechanism X is useful in disease Y, everyone else knows that mechanism X works, and they don't have to invest in the model. So models have a certain property which economists call a common-goods property: it can be hard to appropriate the economic value from them. Novel chemistry, on the other hand, is eminently patentable, even if it doesn't turn out to be very useful in the end. So you've got this weird situation where private sector incentives tend to focus people on producing novel chemistry, which they're quite happy to test in models that everyone in the industry knows about. And that's a better private sector business model than investing to improve the quality of the models. So that, I think, is also something that either industry consortia or public sector agencies need to think about.

SPEAKER_01

Yeah, I couldn't agree more. I think it's just the basic principle of the private sector: everything is product-based. You need a product at the end of the day.

SPEAKER_02

And some models can be productized, but I think a lot can't. Or rather, a lot of the value leaks even if you productize them.

SPEAKER_01

Is that really possible? I mean, scientists need to publish papers, so every model has a huge chance to end up in a paper.

SPEAKER_02

Okay, so on productizing: for example, I've done some work with some organ-on-chip companies, microphysiological systems. Those things can effectively be made into products. But you're right, a lot can't.

SPEAKER_01

Yeah, it's interesting. So for policymakers it's definitely something. But for the investment world, I don't think it would move investors very much to accept these areas you describe; some, but not the majority, I don't think.

SPEAKER_02

Yeah, so for investment it affects your problem choice: we won't work on diseases A, B, and C just because we think the models are too bad. Or it affects your due diligence process: okay, we think the models may be fine in diseases X and Y, but you need to show that you've done a good job with them. Another thing in the private sector is that it may trigger investments. For example, there have been some interesting human experimental medicine advances in certain psychiatric conditions, in my view, and that might make certain therapy areas investable. So you could have an investment strategy where you look for improvements in models in a given therapy area, and at that point you start to deploy capital. That's the sort of thing you could do. But in terms of just investing to make better screening and disease models, a lot of that is tricky; it's tricky to convince the private sector they can appropriate enough of the value.

SPEAKER_01

I have never looked at the industry from your perspective; it's really great to hear this. Since I never looked in this direction before, I have no idea how the situation is with grant funding agencies. From your expertise, after more than 10 years of research in that area, how would you describe the situation, the problem awareness, in the grant funding sector? Is there an awareness of these dynamics that you describe, better models, better output, that it makes sense to invest in that area? Or do you see that there is still a lot of room for improvement in that sector?

SPEAKER_02

So I think there is some recognition, but it's framed in a slightly different way; people can talk about similar things in different ways. For example, there's been a big push, a lot of it publicly funded, around biobanking; it's a big deal in the UK, with public biobanks. But I think they use a different language. They say human is better than animal, or human is best, which isn't always the case, because actually an in vitro human model might be much worse than an in vivo animal model. There's this notion that human is best because it's going to be more predictive, which will sometimes be true and sometimes won't, though it often will be. So you've had big support for things like that, and genomics is in some ways related to model quality, but I don't think it's been done with that overt aim. And then there are also lots of efforts around assay quality, and initiatives around reproducibility. Those are quite interesting, but they're somewhat limited in that they have a slightly different scope: they address statistical and experimental hygiene, but they don't address whether your model is actually recapitulating the right biology, or whether your tests and endpoints are actually relevant to human disease. And then you've also got big initiatives, certainly in the UK and I think in other parts of Europe, to reduce the use of animals in R&D, which touch on some of these validity issues, but again with a different objective, which is not ultimate truth and predictivity; it's more: let's retire this technology for ethical reasons. So, in my view, there are a few things that touch on it, but there's not much I'm aware of that is directly analogous.

SPEAKER_01

When I run through the fundraising process here in Austria, it usually works this way: a scientist says, okay, look, I have a target, it's an underserved area for whatever reason, and I have a compound; to simplify it, a target and a compound. Then he does the next step. He builds a small team and says, okay, I'm the lead scientist, I need a finance guy, I need a business development guy, I need three more scientists, to oversimplify it. And with that package he goes to public funding agencies, and is usually successful in getting three to four million in public funds committed. The process is, in my opinion, straightforward: talk with the regulatory authorities, they tell you you need studies one, two, three, four, five to get from your stage to the end of the preclinical stage, and then you have a clinical candidate. For that, you use the capital. And when the company is close to the clinical candidate stage, before the clinic starts, it can go to the market and raise venture capital. This is the dynamic as far as I've experienced it. What I get from you now is that the call to action for public funding agencies would be: if you give a company 4 million euros, that's fine, but always ask the question first, use 10% of that capital to find out whether you're really using the best models. Because if you go down the wrong route here, it's 4 million wasted. If you take three or four hundred thousand euros to validate the models and do some work on that, you might save a lot of money further down the road, because you can just switch to another model. So spend some time; and the same applies to the venture world. Would that be a reasonable approach?

SPEAKER_02

Maybe I'm being slightly mischievous here, but I'd actually do it a different way. Before I gave anyone four million, I would give someone else a hundred thousand to evaluate their models.

SPEAKER_01

Okay, right.

SPEAKER_02

Yeah, and maybe it wouldn't need a hundred thousand. But I think you need robust argumentation and testing around the model quality before you give the money. That would be my view. And certainly in the private sector thinking I've been doing, and I haven't really been thinking so much in terms of public funding, you want to understand the attributes of the model and its likely performance, insofar as you can, before you deploy capital.

SPEAKER_01

I would phrase it for myself this way: if anybody who deploys capital, for whatever reason, is happy with just activity, they don't have to ask these questions. It's not a must, because you're doing something: you throw money into the market and some scientists get going.

SPEAKER_02

But if you want more than just activity, then yes, you would help them evaluate things, so that when they actually did the work and came to raise more money, they had something that was defensible.

SPEAKER_01

Yeah, I think it increases the probability of success at the end of the day. Just activity doesn't solve the productivity problem, but the productivity problem is not really the politicians' problem; they just need to do something. But if they want to make sure they use the money in the best way, then it definitely makes sense to invest more in the validity of the models. Jack, is there anything open in the discussion that you would like to raise that I didn't ask about so far?

SPEAKER_02

I think we've covered most things. There's just one thing, actually, that we discussed a bit previously that hasn't come up, which I'll mention. A lot of the work I've done has taken a fairly technical view of what you might call decision tools or models: we use models because we think they'll tell us about drug candidates and allow us to make decisions. And sometimes when other people write about your work, you realize they've done a better job than you have. There's a recent review, a short blog, written by a guy called David Shaywitz, where he compared some stuff I'd written about model validity with stuff that a guy called David Grainger has written. David Grainger is a UK-based, very, very experienced biotech investor slash serial entrepreneur, and he wrote a piece about effectively eliminating managerial biases in progression decisions. And then there's a literature which comes out of AstraZeneca, which was also alluded to by a guy called Mike Ringel in a paper I wrote with him, for which he should take the credit, not me, which talked about truth-seeking rather than progression-seeking behavior. And it just struck me that all of these strands are pointing in roughly the same direction. There's a big idea, and the big idea is that we want our decisions to be based on considerations where our assumed judgment of candidate utility is maximally correlated with real clinical utility. And there's a whole bunch of different ways it can be decorrelated. A lot of my focus has been on the technical side: we use the wrong models, we interpret them badly. The Grainger view is more about the decorrelations on the managerial side, i.e., we've got biases which mean we could have made a better decision based on the technical data available to us, but we didn't.

SPEAKER_04

Right.

SPEAKER_02

And then the AstraZeneca view does a bit of both: here are the technical inputs you need to look at, and here are the managerial processes you need to have in place to make sure people make the right decision given the technical inputs they've got. So I just think there's a broader literature here around rigorous, aggressive attempts to remove anything that decorrelates our decisions from clinical utility. I think that's a broader way of looking at it, and that's probably the only thing I'd like to add.

SPEAKER_01

What are the main findings in these managerial decision-making processes that you could point out, that would make sense to implement?

SPEAKER_02

So the Grainger one is very interesting. They used to talk about asset-light models. It's that you don't build institutions that have biases which mean the assets keep going when they shouldn't. For example, one practical implementation is that you outsource a lot of stuff; you have small, virtual-biotech-type models, so you don't have 200 people who get fired if you shut down a project, which makes it easier to shut down the projects that you should shut down. And the AstraZeneca one, they call it their five pillars, has some quite good language about truth-seeking versus progression-seeking. The sorts of things they used to do, and I'm caricaturing a bit because I haven't read the paper for probably six or eight months: if you've got medicinal chemists, for God's sake don't give them a bonus simply for making lots of compounds. If you want a backup compound, it should be chemically different from the lead compound but still potent on the target, because if it's chemically different but still potent, then if the first one fails for some reason, the second one might work. If you simply pay people to produce lots of backup compounds, what you'll find is you have lots of backup compounds that are all almost exactly the same as the lead compound. So the AstraZeneca work is quite interesting, around companies shifting away from quantity-related metrics to more quality- or truth-related measures. And it does appear that they saw an uptick in R&D productivity as a result.
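The backup-compound incentive can be made concrete with a standard cheminformatics measure: Tanimoto similarity on Morgan fingerprints, here computed via the RDKit library. The molecules and the similarity threshold below are arbitrary illustrations of the rule "chemically different from the lead, or it doesn't count", not anything from the AstraZeneca papers.

```python
# Sketch: reward chemical *difference* from the lead, not raw compound counts.
# Uses RDKit Morgan fingerprints; the SMILES and threshold are illustrative.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def tanimoto(smiles_a: str, smiles_b: str) -> float:
    """Tanimoto similarity between two molecules (1.0 = near-identical scaffold)."""
    fps = [
        AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
        for s in (smiles_a, smiles_b)
    ]
    return DataStructs.TanimotoSimilarity(fps[0], fps[1])

lead = "CC(=O)Oc1ccccc1C(=O)O"    # aspirin, standing in for a lead compound
backup = "CC(=O)Nc1ccc(O)cc1"     # paracetamol, a chemically distinct candidate

# A simple progression rule: a backup only counts if it is sufficiently
# dissimilar from the lead (the 0.4 cutoff is a hypothetical choice).
MAX_SIMILARITY = 0.4
sim = tanimoto(lead, backup)
print(f"similarity: {sim:.2f} (counts as backup: {sim < MAX_SIMILARITY})")
```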

SPEAKER_00

Really? Is that actually so?

SPEAKER_02

Yeah, you never know. It's always hard to dissociate survivor bias from actual truth, right? But in Astra's case, I think there were two things: luck plus a genuine R&D turnaround.

unknown

Right.

SPEAKER_02

Yeah, I think the two things probably went hand in hand. But for years, when I worked in investment, running up to around 2010 or maybe a little beyond that, there was an investors' joke about AstraZeneca: they were the drug company that did everything right apart from discover drugs.

unknown

Right?

SPEAKER_02

They were very investor-friendly, they were incredibly commercially effective, they bought back loads of stock, they paid a huge dividend, but the one thing they never did was discover any drugs. And since 2010, AstraZeneca have reinvented themselves as a very successful oncology company that has brought a number of very successful drugs to market. That turnaround went hand in hand with a major internal re-engineering project, where they went from progression seeking to truth seeking. Now, of course, from the outside it's impossible to know quite how much one contributed to the other, but at least it's plausible, well articulated, and very interesting.

SPEAKER_01

Yeah. So there are two papers?

SPEAKER_02

If you look for a paper called Five Pillars, I think it's called, with Cook as one of the authors, C-O-O-K. There are two papers from AstraZeneca: one on the diagnosis, and then one on how things seem to be going a bit better now.

SPEAKER_01

I mean, at the end of the day, it means: go after the truth, not politics, in companies. So, as you mentioned, reduce your bias and find the truth. And what I also got from what you said in the last few minutes is that this is also the case for small biotechs: take the science from the scientists and give it to the development teams.

SPEAKER_02

Yeah, put structures in place such that you can effectively link progression decisions to clinical utility, and remove the organizational distractions and biases that might otherwise prevent you from doing the right thing from a returns perspective.

SPEAKER_01

That's the justification, in my opinion, for this middle stage between science and market. You have big pharma, you have research organizations, and you just made the case for small biotechs. So it makes sense to have companies in the market whose job is simply to function properly and verify or falsify whether a drug candidate qualifies to move forward or not. But they have to do it in a way that's after the truth, and you don't want to put them in a position where it's in their interest to burn all the cash, even if they've stopped believing that the asset is likely to work, right? How can you do that?

SPEAKER_02

Well, again, I would refer you to the Shaywitz article on Grainger. I don't know if there's a way of leaving links at the end of this, but I will tweet something, or you can do something, so that the papers I've mentioned are available to people if they want them.

SPEAKER_01

So we have the event on LinkedIn.

SPEAKER_02

I will append the papers and the reports that I've referred to.

SPEAKER_01

And when you post them on LinkedIn under the event as a comment, I can take them and add them to the description of the podcast. What happens afterwards is that I do post-production, which currently takes three to four weeks.

SPEAKER_04

Okay.

SPEAKER_01

To clean up the audio, take out pauses, and fix places where our conversation was interrupted.

SPEAKER_00

Yeah.

SPEAKER_01

And I will then add your links to the description of the podcast episode and distribute them with the episode. Okay. Jack, is there anything open that you would like to discuss?

SPEAKER_02

No, I think that's it. And we've taken two hours, so anyone who's been here the whole time is probably thoroughly sick of me. We should probably stop.

SPEAKER_01

I don't believe that. Thank you very much for your research; it helped me understand the drug development process much, much better. Your research drew my attention to a point where I clearly had a blind spot: I always thought it was a problem of capital. That's my bias: as a business management and economics guy, I try to throw capital at things, and that might not always be the best solution. Thank you very much for your research. It would be great to have an update in a couple of months or years to see where the drug development process is heading. I loved the conversation. You're doing great work, and I hope we'll soon meet in person once the lockdowns are over.

SPEAKER_02

Well, thank you so much. I've really enjoyed it, both the preparation process and doing it. So thanks very much.

SPEAKER_01

Jack, have a great day. Enjoy your time. See you soon. Bye. Bye.

SPEAKER_02

Bye. Bye.

SPEAKER_01

Thank you very much for tuning into this episode. If you found this content valuable and informative, please consider leaving us a five-star review on Apple Podcasts and Spotify. Your review will help more people discover the show and benefit from the content. Please don't forget to hit the like and share buttons on your favorite social media channels to spread the word to your friends and followers. It helps grow the podcast's followership, and this in turn helps attract more exciting guests and create even more engaging content for you. I appreciate your support in helping us reach a wider audience. Thank you for being part of the journey, and I can't wait to bring you more guests and more great content in the future.