Discourse Data 4 Policy

Projects

Related projects and collaborations of the DD4P consortium.

AI4Sci - A Hybrid AI Approach for Interpretation of Scientific Online Discourse

Scientific knowledge forms a central part of public discourse on the web and in social media, e.g., in the context of the COVID-19 pandemic. However, scientific evidence is often presented in simplified, decontextualized, and misleading ways.

AI4Sci addresses the challenge of developing hybrid Artificial Intelligence (AI) methods for detecting and interpreting scientific claims in online discourse, with the aim of countering disinformation in science communication. The project builds on advances in areas such as deep learning, natural language processing, and knowledge graphs. AI4Sci’s methodology and research will also contribute to important challenges in AI, such as the transparency and reproducibility of AI models.
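
As a purely illustrative sketch, and not AI4Sci’s own method, the snippet below uses an off-the-shelf zero-shot classifier from the Hugging Face transformers library to flag posts that resemble scientific claims. The model choice, candidate labels, and example posts are assumptions made for this example only.

# Illustrative only, not the AI4Sci pipeline: flag posts that look like
# scientific claims with a pretrained zero-shot NLI classifier.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

posts = [
    "A new preprint claims the vaccine reduces transmission by 90%.",
    "Had a great walk in the park today.",
]
candidate_labels = ["scientific claim", "personal anecdote"]

for post in posts:
    result = classifier(post, candidate_labels=candidate_labels)
    print(f"{post!r} -> {result['labels'][0]} ({result['scores'][0]:.2f})")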

RAPP

Responsible Academic Performance Prediction (RAPP) - A Socially Responsible Approach to the Introduction of Student Performance Prediction at a German Higher Education Institution

Academic Performance Prediction (APP) systems promise a more targeted use of university resources: students predicted to be at risk of failing can be offered individual support measures. However, according to a study conducted at Heinrich Heine University, students consider the use of such AI-based systems problematic. The aim of this project is a socially acceptable use of AI systems, which requires researching their ethical aspects and how those affected perceive them. To this end, an AI system for academic performance prediction will be developed, and laboratory and field experiments will investigate its use, the data required for prediction from both technical and ethical perspectives, and the students' perception of it. From this, recommendations for the use of such systems will ultimately be derived in collaboration with the responsible bodies at the university.

FairAndGoodADM

Democratic processes for developing a definition of fairness and quality aspects of algorithmic decision systems.

In many areas of politics and society where human experts used to make decisions, algorithmic decision-making systems (“ADM systems”) now either support those experts or make decisions directly. A comprehensible evaluation of these decisions is crucial. It requires a meaningful and transparent application of measures that allow conclusions about the quality and fairness of an automated decision, not least to assess how such decisions are embedded in the democratic process. By categorizing the different measures and developing decision support tools, we enable a comprehensible evaluation of ADM systems. We also develop information material that enables citizens to assess the development of ADM systems themselves and to scrutinize the decisions these systems make.
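
As a purely illustrative sketch, and not the project’s own tooling, the snippet below computes one widely used group-fairness measure, the statistical parity difference, i.e. the gap in positive-decision rates between two groups. The decision data and group labels are invented for this example.

# Illustrative only: statistical parity difference on hypothetical ADM output
# (1 = favourable decision, 0 = unfavourable decision).
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "decision": [1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["decision"].mean()
spd = rates["A"] - rates["B"]
print(rates)
print(f"Statistical parity difference (A - B): {spd:.2f}")

A value close to zero indicates similar positive-decision rates across groups; whether this particular measure is appropriate, and how to explain it to non-experts, is the kind of question the categorization work addresses.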

Fatal4Justice?

Deciding on, through and together with algorithmic decision systems.

The increasing popularity of Algorithmic Decision Making systems (“ADM systems”) in criminal justice systems (CJS), together with the obviously serious consequences when the machine gets it wrong, makes the CJS an ideal setting for researching (i) how humans decide about humans and how machines decide about humans, compared to (ii) how humans decide about humans together with machines, but also (iii) the limits of how far machines should make decisions about humans at all. Closely related is the question of (iv) how states decide whether ADM systems should be used in their criminal justice systems in the first place.

With more than 2 million people incarcerated in the U.S. and just over 80,000 in the U.K., the use of ADM systems will have an enormous impact on society. It is therefore imperative to answer the above questions about the design, scope, and limitations of ADM systems, and doing so is the goal of this project.

Unknown Data - Mining and Consolidating Research Dataset Metadata on the Web

Research data is essential across all disciplines. While data has become available in abundance, finding datasets for a given research problem or task remains challenging: references to large numbers of research datasets are still hidden in unstructured publications or web pages and require significant effort to uncover. UnknownData strives to make research data more accessible through AI/NLP techniques for mining and consolidating novel datasets from the Web and unstructured scholarly resources. To this end, the interdisciplinary consortium will tap into large web crawls and research corpora to provide novel research data resources that facilitate interdisciplinary research.
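
As a purely illustrative sketch, and not the UnknownData pipeline, the snippet below runs a naive pattern-based pass over free text to surface phrases that often signal dataset mentions. The heuristic and the example sentence are invented; the project itself targets learned AI/NLP methods applied to large web crawls and research corpora.

# Illustrative only: a crude heuristic for candidate dataset mentions,
# standing in for the learned extraction models such a pipeline would use.
import re

text = (
    "We evaluate our approach on the CoNLL-2003 Corpus and on a new "
    "Twitter Benchmark Dataset collected in 2021."
)

# Up to four capitalized tokens followed by a typical dataset keyword.
pattern = re.compile(r"(?:[A-Z][\w-]*\s){0,4}(?:Dataset|Corpus|Benchmark|Treebank)")

for match in pattern.finditer(text):
    print("candidate dataset mention:", match.group(0))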

SAFE-19 - Solidary Attitudes and Actions in the Covid 19 crisis as a trade-off to Freedom and Economic well-being

SAFE-19 provides a social science perspective on the concept of solidarity, which has become a central demand in the fight against COVID-19.

What are the sources and the scope of solidarity when society as a whole is faced with nearly impossible trade-offs? Under which conditions can a political community act in solidarity and support solidarity-based measures within the nation state and within the EU? Using different trade-offs, the project examines how societal solidarity is addressed, reflected, and socially perceived in the different phases of the crisis.

To cover the various dimensions of the investigation, the project employs a mixed-methods design that includes discourse analysis of speeches, a longitudinal online survey, and new data types such as Twitter data.