IMPETUS
The project is funded by the European Commission and UKRI as the successor to the recently completed ACTION project. ACTION supported 16 citizen science pilots investigating pollution in different spaces across Europe, which collectively engaged over 1,200 participants. ACTION supported all pilots through training, webinars, workshops, and advice.
ACTION
ACTION applied a citizen science approach to tackling pollution, one of the greatest threats to human health and wellbeing of our times: it kills more people than smoking, hunger, natural disasters, war, and infectious diseases such as HIV/AIDS and coronavirus.
MediaFutures
The EU-funded MediaFutures project, led by Prof Elena Simperl, will reshape the media value chain. It will set up a virtual European data innovation hub to support entrepreneurial and innovative projects.
Plan and Goal Reasoning for Explainable Autonomous Robots
Robots are rapidly emerging in society and will soon enter our homes to collaborate with and help us in daily life. Robots that provide social and physical assistance have huge potential to benefit society, especially those who are frail and dependent. This was evident during the Covid-19 outbreak, when assistive robots could aid in the care of at-risk older adults, access contaminated areas, and provide social assistance to people in isolation.
COHERENT: COllaborative HiErarchical Robotic ExplaNaTions
For robots to build trustworthy interactions with users, two abilities will be crucial during the next decade: first, producing explainable decisions that combine reasons from all levels of the robotic architecture, from low to high; and second, effectively communicating those decisions and re-planning in real time in response to new user inputs during execution.
Overview of UKRI Trustworthy Autonomous Systems TAS Hub
The TAS programme is a major £33M UKRI/SPF investment (community-led, the result of the EPSRC Big Ideas Challenge). It consists of the Hub (£11.7M, of which £4M is dedicated to pump-priming projects), which will coordinate seven research nodes (£3M each).
RAMP VIS
Computational modelling of the COVID-19 pandemic has played a significant role in the UK's effort to combat the disease. Across the country, about 100 research teams are working on different models, and several dozen have provided simulations, estimates, and predictions to inform governmental decisions in the four home nations.
Narrating Complexity
This project will develop a set of interactive visual analytics approaches to better understand complicated and extensive timelines, drawing on the example of social media and the more formal discussions of a legislative setting (for example, the European Union Withdrawal Acts, the Brexit legislation).
From role-play to situated feedback
The aim of the fellowship is to examine how emerging technologies can fundamentally re-envision the conceptual models and mechanisms of delivery for existing prevention interventions in the context of child mental health.
PLEAD
PLEAD brings together an interdisciplinary team of technologists, legal experts, commercial companies, and public organisations to investigate how provenance can help explain the logic underlying automated decision-making, to the benefit of data subjects, and help data controllers demonstrate compliance with the law. Explanations that are provenance-driven and legally grounded will allow data subjects to place their trust in automated decisions, and will allow data controllers to ensure compliance with the legal requirements placed on their organisations.
SAIS
SAIS (Secure AI assistantS) is a cross-disciplinary collaboration between the Departments of Informatics, Digital Humanities and The Policy Institute at King's College London, and the Department of Computing at Imperial College London, working with non-academic partners: Microsoft, Humley, Hospify, Mycroft, policy and regulation experts, and the general public, including non-technical users.
THuMP
The goal of the THuMP project is to advance the state of the art in trustworthy human-AI decision-support systems. The Trust in Human-Machine Partnership (THuMP) project will address the technical challenges of creating explainable AI (XAI) systems, so that people using the system can understand the rationale behind the AI's suggestions and trust them.