
SDU researchers’ algorithm could become an important step towards privacy in the age of AI

If you, as a user, want to protect your privacy, it is not enough to ask tech companies to delete your data. What the companies’ AI models have learned from that data must also be unlearned. Researchers from SDU Applied AI and Data Science have now found a way to do this without weakening model performance.

By Sebastian Wittrock, 2/17/2026

In Europe, we have the right to be digitally forgotten.

Under the GDPR legislation, you as a user can ask technology companies and other digital actors to delete everything their systems know about you. This involves not only deleting large amounts of collected data, but also removing everything the systems have learned from that data.

However, this is extremely difficult and has so far not been possible in large systems. AI model performance is often significantly degraded when parts of the learned knowledge are removed, and it would require an unrealistic amount of computing power to retrain the models from scratch each time training data is deleted.

AI researchers Vinay Chakravarthi Gogineni and Esmaeil Nadimi from the University of Southern Denmark have now found a promising solution to the problem.  

They have developed an algorithm that enables AI models to unlearn what they have learned from specific data points without reducing the models’ performance. They describe the method in an article in the leading Journal of Machine Learning Research.  

- The GDPR legislation makes it mandatory not only to work with machine learning, but also with machine unlearning, so this is something many researchers and companies are working on right now, says assistant professor Vinay Chakravarthi Gogineni.  

- We hope that our algorithm can contribute to greater data security and privacy for users.  

Targeted deletion 

It was Esmaeil Nadimi, professor and head of the research group SDU Applied AI and Data Science, who first went into Vinay Chakravarthi Gogineni’s office two years ago with the initial concept of deleting knowledge from AI. Since then, the assistant professor has continued to develop the idea until the algorithm was sufficiently advanced in the summer and autumn of 2025 for them to unveil it.  

- Very simply put, the algorithm can identify the parts of a neural network that relate to the specific data points, allowing you to delete them – and only them. The algorithm focuses exclusively on the specific neurons that can be used to identify a user, not on the general ones, explains Esmaeil Nadimi.

Take, for example, the facial recognition on your phone. All the parts of the system that relate to your specific face must be deleted, but the fact that the technology may have learned something general about the elliptical shape of faces from your data is not crucial to your privacy and therefore does not need to be deleted.  

- It is this very targeted deletion that ensures our algorithm does not reduce the performance of AI models, says Vinay Chakravarthi Gogineni.  
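The article does not publish the researchers' actual method, but the general idea of targeted deletion can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy: it scores each hidden neuron of a tiny network by how much more strongly it fires on one user's "forget" data than on everyone else's "retain" data (a hypothetical activation-difference criterion, not the SDU algorithm), then zeroes out only the most user-specific neurons while leaving the general ones untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden layer: 8 neurons with random weights, a stand-in for a trained network.
W = rng.normal(size=(4, 8))

def activations(X):
    # ReLU hidden activations for a batch of inputs.
    return np.maximum(X @ W, 0.0)

# "Forget" samples (one user's data) vs. "retain" samples (all other data).
X_forget = rng.normal(loc=2.0, size=(16, 4))
X_retain = rng.normal(loc=0.0, size=(64, 4))

# Score each neuron by how much more it fires on the forget set than on the
# retain set -- a crude proxy for "this neuron encodes user-specific information".
specificity = activations(X_forget).mean(axis=0) - activations(X_retain).mean(axis=0)

# Prune only the top-k most user-specific neurons; general-purpose neurons survive,
# which is what keeps overall model performance intact in the targeted approach.
k = 2
to_prune = np.argsort(specificity)[-k:]
W_unlearned = W.copy()
W_unlearned[:, to_prune] = 0.0
```

In the facial-recognition example from the article, the pruned neurons would correspond to the parts of the network that identify your specific face, while neurons encoding general facial structure are left alone.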

Continuing the work 

According to the two researchers, data protection and privacy will become increasingly important in the coming years, and more and more people will likely demand to be digitally forgotten.  

They mention, among other things, the risk of being profiled based on interests, political beliefs, sexuality and so on, which could potentially be misused, and the possibility that fake videos and images of a person could be circulated to damage their reputation. There is also health data, which we may share with researchers and authorities for good reasons, but which could become sensitive at a later stage in our lives:

- Medical and genetic data reveal an enormous amount about a person. An ordinary citizen today may in ten years' time be a political leader or hold a role critical to society, and in that case health data must not reveal illnesses or vulnerabilities, explains Vinay Chakravarthi Gogineni.

The researchers are therefore continuing their work on the algorithm. While they initially focused on facial recognition, their current focus is on applying it to large language models and text-to-image generation models.

They also plan to appoint a PhD student to assist them in the work and to enter into partnerships with companies in order to test the algorithm further. 

Meet the researcher

Vinay Chakravarthi Gogineni is an assistant professor at SDU Applied AI and Data Science. His research focuses on three main pillars of AI: Foundational AI, Responsible AI, and Quantum AI.

Editing was completed: 17.02.2026