
The Rise of the Rise of the Machines

There’s yet another paper on computer-devised retrosynthesis out today – it and the previous one make an interesting pair. I have a Nature “News and Views” comment on this one (free access link) for a broader audience, but I’ll expand on my thoughts here. (Update: I’m also going on about this on a Nature podcast here).

Overall, the same general thoughts apply to this work as to the last one. What we have, via a team from Münster/BenevolentAI/Shanghai, is another piece of software that has picked up large numbers of possible synthetic transformations and has ways of (1) stringing them together into possible routes and (2) evaluating these against each other to elevate the ones that are deemed more desirable. Naturally, the response from many organic chemists to these things has been “But that’s what we do”, followed by “Surely no program could do it as well”. The strong form of that latter objection is “Surely no program can do this to any useful extent at all”, and the weak form is “Surely no program can do it for all the molecules that we can”.

I’m going to dispose of the strong-form objection immediately. Whatever you think of the idea, the programs in the last paper and this one (I’m getting to it!) are generating plausible synthetic routes. You may find them derivative of known ones; you may object that they’re not really any different than something that any competent chemist could have done. But those are already victories for the software. You might also object, with varying degrees of justification, that the molecules and syntheses chosen are there to show the software in the best light, and that real-world use won’t be as fruitful. But that’s a holding action, even should it have merit: the fact that it can work at all turns the problem, if there is one, into optimizing something that already exists. As the history of chess- and Go-playing software shows, such piecing-together-strategies-and-evaluating-them tasks improve relentlessly once they’ve been shown to work in the first place.

That takes us well on the way to disposing of the weak objection, because if the programs aren’t doing this as well as a person can now, well, they will. And that is also an introduction to this new paper. You will have heard over the last two or three years about how Google’s (DeepMind’s) AlphaGo program was able to compete with and then beat the best human players of the game. Go is significantly harder to deal with computationally than chess, so this was a real achievement, and it was done partly by building in every human maneuver and strategy known. But last fall, they announced a new program, AlphaGo Zero, that comes at the problem more generally. Instead of having strategies wired into it, the new program is capable of inferring strategies on its own. The software ran hours and hours of Go games and figured out good moves by watching what seemed to work out and what didn’t in various situations, and at the end of the process it beat the latest version of AlphaGo, the one that beats every human on the planet, one hundred games in a row. It makes moves that no one has yet seen in human play, for reasons that Go experts are now trying to work out. (Here’s the latest iteration, as far as I know).

The Chematica software I wrote about earlier this month is an example of the AlphaGo style: its makers have spent a great deal of time entering the details of literature reactions into it – this goes to that, but only if there’s not a group like X, and only if the pH doesn’t get as low as Y, etc. Synthetic organic chemists will be familiar with the reactivity tables in the back of the Greene protecting group book – that’s just the sort of thing that was filled out, over and over, and with even more detail. Without this curation, software of this kind tends to generate routes that have obvious “That’s not gonna work” steps in them.
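To make that concrete, here’s a toy sketch of what a hand-curated transform rule amounts to. This is my illustration in plain Python, not Chematica’s actual data model; the rule name, group list, and pH threshold are all invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class TransformRule:
    """One hand-curated retrosynthetic step (illustrative only)."""
    name: str
    incompatible_groups: list = field(default_factory=list)
    min_ph: float = 0.0  # conditions under which the step survives

    def applies(self, present_groups, ph):
        """Human-entered gatekeeping: fail fast on any known conflict."""
        if any(g in present_groups for g in self.incompatible_groups):
            return False          # "but only if there's not a group like X"
        return ph >= self.min_ph  # "only if the pH doesn't get as low as Y"

# A single curated rule -- Greene-table-style bookkeeping, one entry at a time.
ester_hydrolysis = TransformRule(
    name="retro ester hydrolysis",
    incompatible_groups=["epoxide", "acid-labile acetal"],
    min_ph=4.0,
)
print(ester_hydrolysis.applies({"epoxide"}, ph=7.0))  # False: conflict found
print(ester_hydrolysis.applies({"ketone"}, ph=6.5))   # True: no objection
```

Multiply that by many thousands of rules, each with its own caveats, and the scale of the curation effort becomes clear.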

This new paper, though, appears to be in the AlphaGo Zero mode: the program digs through the Reaxys database (all of it) and infers synthetic transformation rules for itself. If this works, it could be a significant advance, because that data curation and entry is a major pain. There are at least two levels to such curation: the first (as mentioned) is capturing all the finer details of what is likely to work (or fail) in the presence of something else. The second goes to the reliability of the synthetic literature in general – you don’t want to feed reactions into the system that haven’t been (or can’t be!) reproduced by others. The way this new program deals with these is pretty straightforward: the first type of curation is handled by brute force processing of Reaxys examples, and the second by a requirement that only transformations that appear independently a certain number of times in the database are allowed into the calculations.
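As a rough sketch of that second, frequency-based filter (again my illustration – the paper’s template-extraction pipeline is far more involved, and the template names and the threshold here are placeholders):

```python
from collections import Counter

def filter_templates(extracted_templates, min_count=50):
    """Keep only transformations with enough independent precedent."""
    counts = Counter(extracted_templates)
    return {t for t, n in counts.items() if n >= min_count}

# Each string stands in for a reaction template mined from one Reaxys
# record; repeated appearances indicate independent literature precedent.
corpus = (["amide_coupling"] * 120
          + ["suzuki_coupling"] * 80
          + ["one_off_oddity"] * 2)
print(filter_templates(corpus))  # {'amide_coupling', 'suzuki_coupling'}
```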

Organic synthesis is a lot harder to reduce to game-type evaluation than chess is, as the authors rightly point out. To get around this, the program combines neural-network processing with a Monte Carlo tree search technique:

In this work, we combine three different neural networks together with MCTS to perform chemical synthesis planning (3N-MCTS). The first neural network (the expansion policy) guides the search in promising directions by proposing a restricted number of automatically extracted transformations. A second neural network then predicts whether the proposed reactions are actually feasible (in scope). Finally, to estimate the position value, transformations are sampled from a third neural network during the rollout phase. The neural networks were trained on essentially all reactions published in the history of organic chemistry.
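To make that division of labor concrete, here’s a schematic skeleton of the loop in Python. The three “networks” are stand-in stub functions and the molecule states are bare strings – this shows the shape of 3N-MCTS, not the paper’s implementation:

```python
import math
import random

random.seed(0)  # deterministic for the example

def expansion_policy(state):
    """NN #1 (stub): propose a restricted number of candidate retro-steps."""
    return [f"{state}>t{i}" for i in range(3)]

def in_scope_filter(state, transform):
    """NN #2 (stub): predict whether the proposed reaction is feasible."""
    return random.random() > 0.2  # stand-in for a learned classifier

def rollout_value(state):
    """NN #3 (stub): sample transforms during rollout and score the leaf."""
    return random.random()  # stand-in for sampled-rollout scoring

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb(node, c=1.4):
    """Standard upper-confidence bound used to balance explore/exploit."""
    return (node.value / (node.visits + 1e-9)
            + c * math.sqrt(math.log(node.parent.visits + 1)
                            / (node.visits + 1e-9)))

def search(target, iterations=200):
    root = Node(target)
    for _ in range(iterations):
        node = root
        while node.children:                    # selection: descend by UCB
            node = max(node.children, key=ucb)
        for t in expansion_policy(node.state):  # expansion (NN #1 proposes)
            if in_scope_filter(node.state, t):  # feasibility gate (NN #2)
                node.children.append(Node(t, parent=node))
        reward = rollout_value(node.state)      # rollout estimate (NN #3)
        while node is not None:                 # backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda n: n.visits).state

print(search("target_molecule"))
```

The design point worth noticing is that the in-scope network sits between proposal and expansion, pruning the tree before the search wastes effort on steps that would never work at the bench.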

A strength of the Chematica paper is that the routes were put to a real-world test at the bench. This new work didn’t go that far, but what the authors did do was have the program generate retrosyntheses for already-synthesized molecules, and then have these routes and the known ones evaluated blind by experienced chemists. The results were a toss-up: the machine routes were considered just as plausible or desirable as the human ones, and that (as above) is a victory for the machine. AI wins ties.

Here is an example of a route generated by the 3N-MCTS technique (the scheme appears in the paper). That’s not a particularly hard molecule to make, but it’s not an artificially easy one, either. That is, in fact, an intermediate in a published synthesis of potential 5-HT6 ligands, and the route the program found is identical to the one in the paper, so you can be reasonably sure that it’s valid. You or I would probably come up with something similar – I personally didn’t know that first spirocyclobutane step, but I would have done what the program basically did: look in Reaxys or CAS to see if something like that had been prepared. (Note that the program doesn’t memorize routes, just steps – it pieced this together on its own and declared it good). The program delivered this one in 5.4 seconds, by the way, and none of us are going to beat that. Add up the total time we all spend on stuff like this and it starts to look like it’s cutting into other work, you know? If you’d like to see several hundred more schemes along those lines, they’re in the paper’s SI files.

So now we have two types of retrosynthesis software that (at least in some realistic examples) are if not better than humans, apparently no worse. Where does that put us? And by “us”, I mean “us synthetic chemists”. My conclusions from the earlier paper stand: we are going to have to get used to this, because if the software is not coming to take retrosynthetic planning away from us now, it will do so shortly. You may not care for that – at times, I may not care for it, either – but it doesn’t matter what we think. If it can do even a decent job of stitching together and evaluating routes, the software will beat us on just general grasp of the literature alone, which some time ago passed beyond the ability of human brains to organize and remember.

And the next step is more-than-decent ability to see and rate synthetic plans. Despite my comparison to AlphaGo Zero (which is valid on mechanistic grounds), it’s not that this new software is coming up with routes that no human would be able to devise. But if we’re approaching “good as a human”, the next step is always “even better than a human”. Eventually – and not that long from now – such programs are going to go on to generate “Hey, why didn’t I think of that” routes, but you know what? Those of us in the field now are going to be the only ones saying that. The next generation of chemists won’t bother.

They will have outsourced synthetic planning to the machines. Retrosynthesis will remain a valuable teaching tool, and it will still be the way we think about organic chemistry. It will persist in various forms in the curriculum just as qualitative functional group analyses did, long after they died out in actual practice. Actual practice, meanwhile, will consist of more thinking about what molecules to make and why to make them, and a lot less thinking about “how”. That will only kick in for structures too complex for the software to handle, and that will be a gradually shrinking patch of ground.

This will not go down easy for a lot of us. Thinking about how to make molecules has long been seen as one of the vital parts of organic chemistry, but knowing how to handle horses was long seen as a vital part of raising crops for food, too. It’ll be an adjustment:

No memory of having starred
Atones for later disregard
Or keeps the end from being hard

That’s Robert Frost, and his advice in that poem was “Provide, Provide!” We’ll have to.

