Responsible AI

For biodiversity science, AI ethics takes on a unique angle. Even without AI, there is considerable concern about the misuse and misapplication of environmental research. AI-INTERVENE will train students to embrace transparency, engagement with local populations, buy-in and collaboration from diverse stakeholders, and proper oversight and accountability. This is also critical for building trust, which is necessary if AI tools are to directly inform policy or environmental interventions. Drawing on existing recommendations and regulatory frameworks (e.g. GPAI’s Biodiversity and AI: Opportunities & Recommendations for Action, UNESCO’s Recommendation on the Ethics of AI, and GDPR), AI-INTERVENE will embed a framework of key guiding principles of Responsible AI (RAI).

AI-INTERVENE will put Responsible AI into action by developing an RAI impact assessment for each PhD project, in which the intersections of the project with the RAI guiding principles and relevant regulatory requirements will be established at the outset and updated annually, and by adopting open-source AI environments.

Environmental sustainability is also at the core of AI-INTERVENE. All students will receive training on this topic and incorporate it into their PhD projects. Calculating the carbon footprint of AI applications will embed sustainability thinking from the outset, and this information will be included in, for example, PhD project presentations and in impact statements within monitoring reports and theses.
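As an illustration of how such footprint reporting might be instrumented in practice, the minimal sketch below uses the open-source codecarbon Python package to estimate the CO2-equivalent emissions of a training run; the project name and the training function are hypothetical placeholders, and the proposal does not prescribe this particular tool.

```python
# Minimal sketch: estimating the carbon footprint of an AI training run.
# Assumes the open-source codecarbon package (pip install codecarbon);
# project name and train_model() are illustrative placeholders only.
from codecarbon import EmissionsTracker


def train_model():
    # Placeholder for the actual AI training or inference workload.
    pass


tracker = EmissionsTracker(project_name="ai-intervene-phd-demo")  # hypothetical project name
tracker.start()
try:
    train_model()
finally:
    # stop() returns the estimated emissions in kg of CO2-equivalent
    emissions_kg_co2eq = tracker.stop()

print(f"Estimated emissions: {emissions_kg_co2eq:.4f} kg CO2eq")
```

Figures produced this way could then be reported alongside PhD project presentations, monitoring reports, and theses as described above.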