Natural Language and Text Processing Lab


Explainable Natural Language Processing for Social Sciences

Explainable models in Natural Language Processing (NLP) provide explanations for their predictions or decisions, helping humans understand why a particular decision or prediction was made. This is important because it fosters trust in the system: when humans understand the reasoning behind a decision, they are more likely to trust the…


Fairness of NLP-supported clinical decision-making process in healthcare

Text mining and natural language processing (NLP) systems are proving very useful for clinical care and research. In clinical studies, several decisions on patient inclusion/exclusion and the coding of key study variables are increasingly delegated from clinicians to NLP systems. However, clinical care is not always equitable; for…


Modeling Biases in News and Political Debates

Biases and stereotypes in society are reflected in many areas of everyday life, including the workplace, education, and politics. News articles and political speeches are just two examples of textual content in which stereotypes appear. In this project, we focus on both gender and racial bias. Using approaches from NLP, we first explore…


Mining the Dutch Disposition towards Animals and Plants

In this project, we address the human disposition towards animals and plants. Our aim is to establish a better understanding of the history of both knowledge about, and cultural representations of, animals and plants in the Netherlands. We study large sets of digitized texts and images produced and circulated…
