
Dr Koorosh Aslansefat and Dr Bhupesh Mishra from our group have received funding from the Innovate UK Accelerated Knowledge Transfer (AKT) programme for two projects with two brilliant companies: Luxinar (Hull) and Walton & Co (Leeds-based planning lawyers with national reach).

With advancements in foundation models, deep/machine learning, robotics, and other fields, AI is increasingly being used for autonomous operations, essentially enabling machines to perform tasks without human intervention. This brings forth ethical, legal, and social challenges. The success of AI is therefore no longer a matter of accuracy, technical performance, or even financial profit alone, but of how it connects directly to human well-being. Development of AI will instead require a socio-technical approach to the design, deployment, and use of AI systems, interweaving software solutions with governance, ethics, and regulation.

At the University of Hull, we research responsible AI development with project partners, combining lessons learned from different frameworks with a focus on practical implementation. Our research is centred around the Responsible AI framework shown in the image below.


Explainable: Many of the issues with modern AI development stem from the fact that AI systems are black boxes and lack transparency. Users and stakeholders need to understand how AI systems make decisions and why specific outcomes are reached. By promoting transparency and interpretability, Explainable AI methods can provide meaningful insights into the decision-making process, enabling users to trust AI systems and hold them accountable. We are one of the few groups in the UK researching Neuro-Symbolic AI for achieving human-level explainability.

Trustworthy: The safety and trustworthiness of AI systems are crucial to their responsible deployment. Implementing robust safeguards and privacy measures to protect sensitive data is essential for building trust and maintaining user confidence in AI systems. We work on practical aspects of safety and trustworthiness in AI systems.

Equitable: To develop Responsible AI, it is important to focus on the societal aspects of AI models, such as fairness and ethics, law and policy, the future of work, human-computer interaction, and public perception of AI. We work with researchers in digital ethics, social science, and law.

Our Team.
