
The Silent Engine Behind Modern Medicine: Computational Science
Behind every medical breakthrough lies an unsung hero: algorithms. Join SDU’s expert in bioinformatics, Veit Schwämmle, on a visit to the world of computational science, where big data and bioinformatics are quietly shaping the future of healthcare.
Computational science has revolutionized virtually all scientific fields in the past few decades. With access to ever more computing power, today’s scientists feed their computers vast amounts of data and are rewarded with knowledge that would previously have taken a lifetime to achieve.
Computational science has paved the way for revolutionary insights in many fields, including climate research, AI research – and in particular life science research.
At the Department of Biochemistry and Molecular Biology, Veit Schwämmle is an associate professor and expert in bioinformatics. He helps colleagues all over the world make sense of the massive oceans of biological data pouring in from labs and clinics every day.
Potential clues hiding in data
“I work in a field where, within just a few weeks, we can generate terabytes of bio-data in one laboratory,” he says. “That’s millions of molecular details—millions of potential clues hiding in data generated by highly sensitive experimental setups.”
And that is where computational science comes in. Let’s break it down and choose protein studies as an example:
Our bodies are made up of more than 20,000 different proteins, and each of those can exist in thousands of different forms. These variations all have important functions—they determine how we fight disease, how our cells behave, and how we respond to treatment.
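To get a feel for the scale, here is a back-of-envelope sketch (with assumed, illustrative figures rather than numbers from Schwämmle’s work) showing how quickly this turns into a big-data problem:

# Illustrative only: rough scale of the protein-form problem.
# The figures below are assumptions for the sake of the example.
proteins = 20_000          # roughly the number of different human proteins
forms_per_protein = 1_000  # assumed average number of molecular forms per protein

print(f"Possible protein forms to track: {proteins * forms_per_protein:,}")
# -> Possible protein forms to track: 20,000,000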
“Garbage in, garbage out”
But understanding proteins means collecting and analyzing a dizzying amount of data. Not just from blood samples, but also from genomic sequencing and from multiple public databases holding all kinds of accumulated biological and biomedical knowledge. And not all of this data is clean and complete: it contains noise, inconsistencies, and errors from instruments and software.
“People assume computers always give consistent results,” says Schwämmle.
“But use two different programs on the same dataset, and you might get two different answers. That’s a real challenge. There is also a very common term in our area: Garbage in, garbage out. If you feed the computer systems poor-quality data, you will get no knowledge, or the wrong knowledge. So it is crucial to find the right statistical and computational methods to separate relevant information from digital and experimental noise”, he says.
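To make “separating relevant information from noise” a little more concrete, here is a minimal sketch of the kind of basic quality filtering a bioinformatician might apply to raw protein-intensity measurements. The column names, thresholds, and input file are assumptions for illustration; this is not Schwämmle’s actual pipeline.

# Minimal, illustrative sketch of separating signal from noise in protein data.
# Column names, thresholds, and the input file are assumptions, not a real pipeline.
import numpy as np
import pandas as pd

def clean_intensities(df: pd.DataFrame) -> pd.DataFrame:
    """Drop unreliable measurements and put the rest on a comparable scale."""
    # 1. Remove rows where the instrument reported no signal at all.
    df = df.dropna(subset=["intensity"])
    # 2. Discard values below an assumed detection limit (instrument noise floor).
    df = df[df["intensity"] > 1e3]
    # 3. Log-transform so a few huge values do not drown out everything else.
    df = df.assign(log_intensity=np.log2(df["intensity"]))
    # 4. Keep only proteins seen in enough replicates to support any statistics.
    counts = df.groupby("protein")["log_intensity"].transform("size")
    return df[counts >= 3]

# Hypothetical usage:
# measurements = pd.read_csv("proteomics_run.csv")   # columns: protein, intensity, ...
# reliable = clean_intensities(measurements)

The log transform and the replicate requirement are typical choices: they keep genuinely repeated observations and stop one extreme reading from dominating the analysis.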

Veit Schwämmle
Associate Professor and head of research, Department of Biochemistry and Molecular Biology. Schwämmle’s research interests are computational proteomics and bioinformatics. Before coming to SDU in 2018, he received his PhD at the University of Stuttgart and conducted research at Centro de Pesquisas Fisicas in Rio de Janeiro and at ETH Zürich.
Before moving into bioinformatics, he was a physicist creating and applying computer models of everything from sand dunes to the competition between languages. He is active in major European networks, including the ELIXIR Infrastructure for Life Sciences and the European Proteomics Association.
Often, software is the big problem
Veit Schwämmle is therefore on a mission to create and organize tools that make sense of this chaos—tools that not only process data but make it trustworthy and useful to researchers and doctors trying to cure diseases like cancer or Alzheimer’s.
“Often, software is the big problem. It hasn’t been maintained or updated or standardized. I want to help researchers write and apply software that is easy to maintain and use. Good software should also be easy to find, so we have built a registry where tool developers can share information about their software for others to use”, says Schwämmle, referring to the registry bio.tools.
This registry was made possible by contributions from all over the world and funding from the EU via the ELIXIR infrastructure. More than 30,000 software tools are registered in bio.tools today.
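For the curious, the bio.tools registry can also be queried programmatically. The sketch below assumes the public REST endpoint and JSON field names shown in the code; check the bio.tools API documentation for the authoritative interface.

# Illustrative sketch of querying the bio.tools registry.
# The endpoint path and JSON field names are assumptions; consult the
# official bio.tools API documentation for the authoritative interface.
import json
import urllib.parse
import urllib.request

def search_biotools(term: str, limit: int = 5):
    """Return the first few registry entries matching a free-text search term."""
    query = urllib.parse.urlencode({"q": term, "format": "json"})
    url = f"https://bio.tools/api/tool/?{query}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    # Each entry is expected to carry at least a name and a homepage link.
    return [(tool.get("name"), tool.get("homepage")) for tool in data.get("list", [])[:limit]]

if __name__ == "__main__":
    for name, homepage in search_biotools("proteomics"):
        print(name, "->", homepage)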
The big breakthroughs
Mining piles of health data has the potential to give us better treatments, faster diagnoses, and even cures.
It is thanks to computational breakthroughs like AlphaFold—an AI system that can predict the 3D structure of proteins from their amino acid sequences—that scientists now have more insight into how proteins work at the atomic level.
This is, however, not true for the many different molecular forms a protein can take in our cells. Veit Schwämmle is highly interested in improving our knowledge about these forms and has revealed new mechanisms that control them. He has contributed statistical tools and user-friendly software that help researchers gain more insights from the so-called “dark proteome” (proteins with no defined structure that escape detection).
We need to invest and move forward
“There are millions of different protein forms, and we still do not understand much about them. Revealing their function in health and disease will be a game-changer for drug discovery. This will not be possible without computational science. Nor would the Human Genome Project, which mapped all human genes and revolutionized medicine as we know it, have been possible”, says Schwämmle, adding:
“If we don’t invest in computational methods to expand our understanding of the human body, we can’t move forward.”
Computational science is everywhere – and its impact is only growing. It’s the engine quietly humming behind the scenes, reshaping the future of health care and all other scientific fields. So, the next time you think of a scientific breakthrough, don’t forget the software and code working where you can’t see them. Without them, data is just noise, and society gains no knowledge or meaning from it.