The promise of computational protein design is to replace slow, expensive, resource-intensive experimental methods with fast, cost-effective, and resource-efficient computational ones. In the ideal case, the vast majority of the protein design work is done by computation, followed by minimal experimental work to verify that the desired functions and properties are indeed present. With the rise of AI/ML methods in general (e.g. DALL·E and ChatGPT) and in the realm of proteins in particular (e.g. AlphaFold and ESM), there has been increasing interest and hope that this will be the case. However, many are finding that in reality, because of the relatively low accuracy of current computational methods, AI-driven protein design still relies on quite a bit of experimental screening and selection.
How do we improve the performance of AI protein design methods? This is a big, open research question. Here, we highlight the important role computational structure prediction plays. In biology, the paradigm is that protein sequence determines protein structure, which in turn determines protein function. When designing proteins, the paradigm is reversed: at 310.ai, we go from function to sequence to structure.
The goal is to build a model that takes in functions and properties and accurately encodes them into valid protein sequences. The question is: how do we know whether a sequence is indeed valid? One way is to take the designed sequence and predict its structure computationally using any of the modern methods such as AlphaFold, OmegaFold, or ESMFold. How does this help? In two ways.
First, modern AI structure prediction methods produce a model confidence metric that can be used to grade a prediction as “good” vs. “bad”. Specifically, a predicted structure with an average pLDDT score (range 0-100) of 80 or above is considered “good”. That is, by looking at the average pLDDT score of a sequence's predicted structure, we can determine whether an AI-generated sequence looks like a legitimate protein. This is essentially the idea of “forward folding” often used in the Rosetta Commons community. The difference now is that where forward folding used to take hours to days on a massively distributed cluster, AlphaFold cut the calculation down to less than an hour for smaller sequences, and newer methods like OmegaFold cut the time down even further, to seconds.
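As a concrete illustration, the pLDDT filter above can be applied with a few lines of Python, assuming AlphaFold-style PDB output, where each atom's B-factor column stores the residue's pLDDT score (the coordinates and scores in the toy example below are made up):

```python
# Minimal sketch: compute the mean pLDDT of an AlphaFold-style PDB file,
# where the B-factor column (PDB columns 61-66) holds per-residue pLDDT.

def mean_plddt(pdb_text: str) -> float:
    """Average pLDDT over C-alpha atoms of a PDB-format string."""
    scores = []
    for line in pdb_text.splitlines():
        # The atom name occupies columns 13-16; "CA" marks the alpha carbon.
        if line.startswith("ATOM") and line[12:16].strip() == "CA":
            scores.append(float(line[60:66]))
    return sum(scores) / len(scores)

# Toy two-residue example (hypothetical coordinates and pLDDT values).
example_pdb = "\n".join([
    "ATOM      1  CA  ALA A   1      11.104   6.134   1.000  1.00 85.00",
    "ATOM      2  CA  GLY A   2      12.560   7.420   2.100  1.00 75.00",
])

score = mean_plddt(example_pdb)
is_good = score >= 80.0  # the "good" threshold discussed above
```

A design pipeline would run this check over every generated sequence's predicted structure and keep only those that clear the threshold.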
The second way a predicted structure helps us evaluate whether a computer-generated protein sequence is good is by allowing a comparison. Recall that in the natural world, sequence leads to structure, which leads to function. It follows that proteins with similar structures are more likely to have similar functions. Of course, this is not always the case, but it is a useful filtering criterion. In particular, given a natural sequence with a set of desired functions/properties and a generated sequence intended to have the same set of functions/properties, if the two have similar predicted structures, then the generated sequence can be considered “good”. Specifically, a TM-score (range 0-1) of 0.5 or above is considered “good”. This metric, or more stringent variations of it, is commonly used, for example in RFdiffusion. Again, this has been enabled by methods like OmegaFold because of the unprecedented speed at which these metrics can be calculated.
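To make the comparison concrete, here is a minimal sketch of a TM-score-style similarity check, assuming the two structures have the same length and a residue-by-residue correspondence. Note that the full TM-score algorithm (TM-align) also searches over alignments and superposition subsets; the single global Kabsch superposition used here gives a lower bound on that score:

```python
# Simplified TM-score between two C-alpha coordinate sets of equal length.
# Assumption: residue i in `model` corresponds to residue i in `ref`.
import numpy as np

def kabsch_superpose(mobile: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Optimally rotate and translate `mobile` (L x 3) onto `ref` (L x 3)."""
    mc, rc = mobile - mobile.mean(0), ref - ref.mean(0)
    U, _, Vt = np.linalg.svd(mc.T @ rc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return mc @ R.T + ref.mean(0)

def tm_score(model: np.ndarray, ref: np.ndarray) -> float:
    """TM-score of superposed `model` vs `ref` coordinates (both L x 3)."""
    L = len(ref)
    d0 = max(1.24 * (L - 15) ** (1 / 3) - 1.8, 0.5)  # length-dependent scale
    d = np.linalg.norm(kabsch_superpose(model, ref) - ref, axis=1)
    return float(np.mean(1.0 / (1.0 + (d / d0) ** 2)))

# Sanity check: a rotated and translated copy of a structure scores ~1.0.
rng = np.random.default_rng(0)
ref = rng.normal(size=(30, 3)) * 5.0
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
model = ref @ Rz.T + np.array([1.0, 2.0, 3.0])
```

In a filtering pipeline, a generated sequence would pass if `tm_score(generated_pred, natural_pred) >= 0.5`, with the two coordinate arrays taken from the predicted structures of the generated and natural sequences.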
Find out more about us at https://310.ai/
#ProteinDesign #ProteinFolding #MachineLearning #DeepLearning #ESMFold #OmegaFold #AlphaFold #ProteinStructurePrediction #PLDDT #TMScore #Diffusion