SIUOxford: Can Computers Replace Humans in Biological Research?

Image courtesy Freepik

Authors: Emil Fristed, Phillip Dmitriev. Edited by: Ruth Sang Jones

Computer technology is rapidly changing our world. One example is how manual factory jobs are being displaced by robotics and intelligent machine systems. But we are no longer just programming computers to solve simple tasks. With the advent of machine learning and artificial intelligence, much more complex problems are being tackled computationally. Biological research is one such frontier. Even a single cell contains a vast amount of data, which is why laboratory methods often rely on brute-force approaches, such as screening millions of compounds to find a potential drug. Computational approaches such as in silico drug development, screening, and virtual testing are beginning to change this. At the event "Can Computers Replace Humans in Biological Research", the Science Innovation Union hosted three speakers and an audience of researchers from all levels of science to discuss this emerging field, and whether the technology we are developing will eventually replace us.


Prof. Blanca Rodriguez: My virtual heart is pounding for you? A case for in silico screening

Scenario 1. A patient comes to see his doctor. He displays a range of symptoms. To understand what's going on, the doctor orders a series of tests. Once the results are in, she puts together all the available information to make a diagnosis. This essential skill is the core reason she spent so many years and sleepless nights in medical school, learning to identify patterns in a large, diverse, and sometimes conflicting body of evidence. Today the amount of available evidence is exploding, and it is becoming harder and harder to process. Importantly, this kind of pattern finding is one of the things at which recent 'AI' or machine learning algorithms excel. And the more data, the better.

Scenario 2. A big pharmaceutical company is developing a new drug against the disease our doctor (or computer?) recently diagnosed in her patient. Many of the candidate compounds will fail along the development pipeline, and each one requires a series of in vitro tests, animal studies, and clinical trials. These tests come with financial costs, as well as welfare costs to the animals and humans involved. Is there a way to minimise these costs? Recently, 'virtual testing' - testing in silico - has begun to complement the older approaches. There is increasing pressure from biotech companies and big pharma to adopt in silico approaches for the development and testing of drugs and medical devices. Regulatory agencies such as the FDA are even leading the way, having realised the potential to make drug development more efficient (which helps future patients) and to reduce welfare costs.

Opening with these scenarios, Prof. Blanca Rodriguez set the scene for her presentation. She is a Professor of Computational Medicine at the University of Oxford, where she develops computational models and simulations of the heart at different scales. The models consist of an intricate set of mathematical equations combined with anatomical models. With high-performance computing, her group can now simulate not just a single cardiac cell, but populations of cells, and even whole organs. The models are used to test drugs virtually and to perform cardiac/ECG phenotyping of patients, with the aim of eventually informing treatment. Prof. Rodriguez collaborates with the pharmaceutical industry and has already used computer models to replace some of the animal experimentation taking place. The models do come with limitations: supercomputers are currently required to run them, and there are potential issues around personal data security and the quality of the data used to inform the models.
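
To give a flavour of what "a set of mathematical equations" means here, the sketch below integrates a classic two-variable excitable-cell model (FitzHugh-Nagumo) and perturbs it with a hypothetical "drug block" parameter. This is a toy stand-in for illustration only; the multi-scale cardiac models described in the talk are far more detailed, and the drug parameter is our own invention.

```python
# Toy illustration: a two-variable FitzHugh-Nagumo model as a stand-in for the
# much more detailed cardiac cell models discussed in the talk.
# The "drug_block" factor is a hypothetical parameter, not part of any real model.
import numpy as np

def simulate_cell(drug_block=0.0, t_max=200.0, dt=0.01, stim=0.5):
    """Integrate a toy excitable-cell model with forward Euler."""
    a, b, eps = 0.7, 0.8, 0.08
    v, w = -1.2, -0.6          # membrane-like variable and recovery variable
    trace = []
    for _ in range(int(t_max / dt)):
        # a hypothetical 'drug' that partially blocks the excitatory current
        i_ion = (v - v**3 / 3 - w) * (1.0 - drug_block)
        dv = i_ion + stim
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return np.array(trace)

baseline = simulate_cell(drug_block=0.0)
with_drug = simulate_cell(drug_block=0.3)
print("peak of voltage-like variable:", baseline.max(), "->", with_drug.max())
```

Even this toy version shows the basic idea of virtual drug testing: change a parameter that represents the drug's effect, re-run the simulation, and compare the resulting electrical behaviour.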

Dr Romain Talon: Using computers to make target prediction crystal clear

Dr Romain Talon is a Senior Support Scientist at the Diamond XChem Facility - part of the Structural Genomics Consortium (SGC) - in Oxford. His team uses a wide range of computational approaches and software solutions to automate different parts of their fragment-screening pipeline and to support prioritisation in the decision process.

Their approach uses X-ray crystallography. Briefly, proteins are coaxed into forming crystals. These crystals typically contain large, water-filled channels through which smaller compounds (possible drug candidates) can diffuse. If a small molecule binds within the crystal, it shows up as additional electron density. From the resulting electron density map, a structure of the protein can be obtained, along with whether and where the compound binds. At XChem, the crystal preparation and testing process has been automated, allowing thousands of compounds to be screened rapidly. Last year the facility was responsible for two out of three of the crystal structures of this kind produced worldwide.

Based on this automated crystal making and testing, they generate models of how different fragments fit into a specific drug target. They then use specialised software to support the decision process, helping them select a few 'low-risk' fragments (low risk meaning a high chance of eventual success). They are now working on software that will take those selected fragments and simply spit out the ideal drug for targeting a specific site on a given protein.
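
As a rough illustration of what "prioritising the decision process" can look like, the sketch below ranks crystallographic hits by a simple composite risk score. The fields, weights, and fragment names are invented for illustration and do not describe XChem's actual decision-support software.

```python
# Minimal sketch of fragment prioritisation with made-up fields and weights;
# this is not XChem's software, only an illustration of ranking by "risk".
from dataclasses import dataclass

@dataclass
class Fragment:
    name: str
    density_fit: float              # how well the fragment explains the extra electron density (0-1)
    ligand_efficiency: float        # binding quality per heavy atom (normalised, hypothetical)
    synthetic_accessibility: float  # 0 = hard to elaborate into a drug, 1 = easy

def risk_score(f: Fragment) -> float:
    # Lower score = lower risk. The weights are illustrative assumptions.
    return 1.0 - (0.5 * f.density_fit
                  + 0.3 * f.ligand_efficiency
                  + 0.2 * f.synthetic_accessibility)

hits = [
    Fragment("frag_A", density_fit=0.9, ligand_efficiency=0.4, synthetic_accessibility=0.8),
    Fragment("frag_B", density_fit=0.6, ligand_efficiency=0.7, synthetic_accessibility=0.5),
    Fragment("frag_C", density_fit=0.8, ligand_efficiency=0.3, synthetic_accessibility=0.9),
]

# Keep only the few 'low-risk' fragments to carry forward.
for frag in sorted(hits, key=risk_score)[:2]:
    print(frag.name, round(risk_score(frag), 2))
```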

Dr Alina Rakhimova: An in silico approach to biocatalysis, and learning from nature

Dr Alina Rakhimova gave a more industrial perspective on computational enzyme design. She is the CEO and co-founder of EnzBond Limited, an Oxford spinout building an automated in silico platform for the rapid optimisation of biocatalysts used in the production of small molecules. In industry today, metal catalysts are widely used, but they are expensive and pose pollution problems if they end up in the final product or are released into the environment. How can we make catalysis both green and efficient? As with many complicated problems, nature has already found an elegant solution for us: many proteins can catalyse reactions with extreme efficiency. This is the field of biocatalysis. But how do we develop biocatalysts for specific reactions of interest?

Historically, two different approaches have been used. Rational redesign based on X-ray structures is generally inefficient and only effective in a few cases. Alternatively, lab-based 'directed evolution', in which cycles of random mutagenesis and screening are performed, is a labour- and time-intensive approach that is consequently very costly.
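
For readers unfamiliar with directed evolution, the toy loop below shows the mutate-screen-select cycle in miniature. The sequences and the fitness function are invented; in a real campaign the "screening" step means measuring enzyme activity in the lab for every variant, which is exactly why the process is so slow and expensive.

```python
# Toy directed-evolution loop: random mutagenesis plus greedy selection on a
# made-up fitness function. Purely illustrative, not a real protein-engineering tool.
import random

random.seed(0)
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids
TARGET = "MKTAYIAKQR"               # hypothetical 'ideal' active-site sequence

def fitness(seq: str) -> int:
    # Stand-in for a lab activity assay: count matches to the hypothetical optimum.
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq: str) -> str:
    pos = random.randrange(len(seq))
    return seq[:pos] + random.choice(ALPHABET) + seq[pos + 1:]

parent = "MKTAAAAAAA"
for generation in range(30):
    library = [mutate(parent) for _ in range(50)]   # random mutagenesis
    best = max(library, key=fitness)                # screening step
    if fitness(best) > fitness(parent):
        parent = best                               # keep the improved variant
print(parent, fitness(parent))
```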

Recently, computational approaches have been combined with directed evolution to increase its efficiency (for example, sequence-function based machine learning). The methodology employed by EnzBond Limited is based on quantum mechanics and molecular dynamics: the energy profile of the reaction is built from advanced chemistry and physics, and molecular dynamics simulations then model what happens when the compound is inside the protein. But such simulations require a good understanding of the protein mechanism, and, importantly, they usually take a very long time to run. What makes EnzBond Limited's technology unique, compared to other methods that also use quantum mechanics and molecular dynamics, is the application of a broader theory that allows new assumptions and simplifications to be made in the quantum-mechanical models. This makes the simulations faster to run without sacrificing accuracy; they claim an impressive accuracy of 70%.
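
To see how a computed energy profile connects to catalytic efficiency, the sketch below uses the standard Eyring equation from transition-state theory to turn activation barriers into a predicted rate enhancement. This is textbook chemistry for illustration only; the barrier values are made up, and this is not a description of EnzBond Limited's proprietary method.

```python
# Illustrative only: converting a computed activation barrier into a relative
# rate via the Eyring equation (standard transition-state theory).
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H   = 6.62607015e-34  # Planck constant, J*s
R   = 8.314           # gas constant, J/(mol*K)
T   = 298.15          # temperature, K

def rate_constant(dG_activation_kJ_per_mol: float) -> float:
    """Eyring equation: k = (k_B*T/h) * exp(-dG_act / (R*T))."""
    return (K_B * T / H) * math.exp(-dG_activation_kJ_per_mol * 1e3 / (R * T))

wild_type = rate_constant(75.0)   # hypothetical barrier for the starting enzyme
mutant    = rate_constant(70.0)   # hypothetical barrier after a designed mutation
print(f"predicted speed-up: {mutant / wild_type:.1f}x")
```

The point of the example is the exponential sensitivity: lowering the barrier by only a few kJ/mol predicts a several-fold faster reaction, which is why accurately computed energy profiles are so valuable for ranking candidate mutations before anyone goes into the lab.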

Consensus insights: Data quality, augmenting humans, and whether computers will actually replace humans in biological research

During the three talks, common ideas and concerns came up again and again. Prof. Rodriguez raised a point that was later echoed: "The data is in my experience always insufficient and of insufficient quality. This is one of the biggest challenges we face in replacing humans." The audience later chimed in: "It seems that computational power is ahead of the quality of the data. How can scientists give computers more data, of a better and more homogeneous quality?"

Some solutions were proposed:

- Open-access and well-documented data would be a start.

- Good-quality data is often difficult to find. Even when researchers didn't plan for sharing up front, the growing trend of storing data alongside publications should be encouraged - it is very useful.

- Collaborate!

So will computers replace humans in biological research? The consensus from the speakers of the night was: No… not directly, at least. All three argued that what we will see instead is computational approaches augmenting humans: reducing routine tasks, making expert capabilities available to non-experts, and helping us make better decisions from very complex datasets. So when you go to see your doctor, she might run your tests through advanced machine learning algorithms to help make the diagnosis, but it probably won't be a computer in a lab coat greeting you as you walk in.

The talks and Q&A were followed by a lively networking session with wine and light food.