
Is FEP Ready For the World?


Here’s a paper that basically throws down the computational gauntlet. A large group of authors from Schrödinger, Nimbus, Columbia, Yale, and UC-Irvine say that their implementation of free energy perturbation (FEP) calculations really does lead to significantly more active compounds being predicted, as compared to other computational methods, or to straight med-chem intuition and synthesis.

Here, we report an FEP protocol that enables highly accurate affinity predictions across a broad range of ligands and target classes (over 200 ligands and 10 targets). The ligand perturbations include a wide range of chemical modifications that are typically seen in medicinal chemistry efforts, with modifications of up to 10 heavy atoms routinely included. Critically, we have applied the method in eight prospective discovery projects to date, with the results from two of those projects disclosed in this work. The high level of accuracy obtained in the prospective studies demonstrates the ability of this approach to drive decisions in lead optimization.

They say that these improvements are due to a better force field, better sampling algorithms, increased computing power, and an automated workflow to get through things in an organized fashion. The paper shows some results against BACE, CDK2, JNK1, MCL1, p38, PTP1b, and thrombin, which seems like a reasonably diverse real-world set of targets. Checking the predicted binding energies versus experiment, most of them are within 1 kcal/mol, and only about 5% are 2 kcal/mol or worse. (To put these into med-chem terms, the rule of thumb is that a 10x difference in Ki represents about 1.36 kcal/mol at room temperature.) These calculations should, in theory, be capturing the lot: hydrogen bonding, hydrophobic interactions, displacement of bound waters, pi-pi interactions, what have you. The two prospective projects mentioned are IRAK4 and TYK2. In both of these, the average error between theory and experiment was about 1 kcal/mol.
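To make that rule of thumb concrete, here's a minimal back-of-the-envelope sketch of the conversion, which just comes from ΔΔG = RT ln(Ki ratio). It assumes room temperature (298 K); the function names are mine, purely for illustration:

```python
import math

R = 0.0019872  # gas constant, kcal/(mol*K)
T = 298.15     # temperature, K (room-temperature assumption)

def ddg_from_fold_change(fold_change):
    """Binding free energy difference (kcal/mol) for a given ratio of Ki values."""
    return R * T * math.log(fold_change)

def fold_change_from_ddg(ddg):
    """Ki ratio implied by a free energy difference in kcal/mol."""
    return math.exp(ddg / (R * T))

print(ddg_from_fold_change(10))   # ~1.36 kcal/mol: the 10x rule of thumb
print(fold_change_from_ddg(1.0))  # a 1 kcal/mol error is ~5.4x in Ki
print(fold_change_from_ddg(2.0))  # a 2 kcal/mol error is ~29x in Ki
```

Seen that way, an average error of about 1 kcal/mol means the predictions are landing within roughly a factor of five in binding affinity, which is close enough to rank-order ideas usefully.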
But this is not yet the Rise of the Machines:

The preceding notwithstanding, a highly accurate and robust FEP methodology is not, in any way, a replacement for a creative and technically strong medicinal chemistry team; it is necessary to generate the ideas for optimization of the lead compound that are synthetically tractable and have acceptable values for a wide range of druglike properties (e.g., solubility, membrane permeability, metabolism, etc.). Rather, the computational approach described here can be viewed as a tool to enable medicinal chemists to pursue modifications and new synthetic directions that would have been considered too risky without computational validation or to eliminate compounds that would be unlikely to meet the desired target affinity. This is particularly significant when considering whether to make an otherwise highly attractive molecule that may be synthetically challenging. If such a molecule is predicted to achieve the project potency targets by reliable FEP calculations, this substantially reduces the risk of taking on such synthetic challenges.

There’s no reason, a priori, why this shouldn’t work; it’s all down to limits in how well the algorithms at the heart of the process deal with the energies involved, and how much computing power can be thrown at the problem. To that point, these calculations were done by running on graphics processing units (GPUs), which really do have a lot more oomph for the buck (although it’s still not as trivial as just plugging in some graphics processor cards and standing back). GPUs are getting more capable all the time themselves, and are a mass-market item, which bodes well for their application in drug design. Have we reached the tipping point here? Or is this another in a very long series of false dawns? I look forward to seeing how this plays out.

