Blog Author: Dr Koorosh Aslansefat
A team of researchers from the Responsible AI Group at the School of Computer Science, University of Hull, travelled to Chennai, India, for a research visit from October 14 to October 19. This initiative was supported by the global engagement programmes of the University of Hull and IIT-Madras.
What was the research visit all about?
The research visit aimed to explore the implementation and challenges of AI in healthcare, with a particular focus on responsible AI practices. By engaging with different institutions and experts, the visit sought to gather insights on developing trustworthy AI models, aligning AI solutions with local healthcare needs, and addressing ethical considerations. The discussions also centred on bridging the gap between AI research and practical clinical use, especially in the context of healthcare delivery in India. This research visit aligns with our group's focus on building responsible AI systems and frameworks that are truly international.
Visiting the Indian Institute of Technology Madras (IIT-Madras)
During the visit to IIT-Madras, on the sidelines of the jointly organised Responsible AI workshop, discussions were held with members of the Centre for Responsible AI (CeRAI) and the Robert Bosch Centre for Data Science and Artificial Intelligence. The group explored various aspects of AI development, including the ethical deployment of AI in medical settings, data privacy concerns, and the integration of AI tools in diagnostic and predictive healthcare applications. The exchange of ideas focused on adapting AI models to local contexts, ensuring the models can address specific healthcare challenges in India while meeting international standards for responsible AI use.
Additionally, discussions were held on ongoing projects at IIT-Madras related to AI in healthcare, such as predictive analytics for disease management and AI-driven decision-support systems. The meetings highlighted the importance of interdisciplinary collaboration between data scientists, AI ethicists, and healthcare professionals to create AI solutions that are both technically robust and socially responsible.
Interaction with doctors from Manipal Academy of Higher Education (MAHE)
The discussions with doctors from MAHE explored the challenges and limitations in implementing AI for healthcare, particularly in creating responsible and trustworthy systems. The doctors highlighted the significant barriers posed by data variability and quality, noting that healthcare policies and guidelines differ across regions, such as between India's national standards and Western frameworks like the UK's NHS guidelines. This divergence complicates the development of AI models that can provide accurate recommendations across different healthcare systems. They emphasised that AI models should be customised to reflect the local context, and training data must be curated carefully to include region-specific medical guidelines and practices. For instance, AI systems in India must account for local immunisation schedules and cultural health practices that differ from those in Western countries.
Additionally, the doctors shared their experiences with large language models (LLMs) in healthcare, noting that while these models perform well with general information, they struggle with nuanced medical advice, particularly in cases involving infants or severe medical conditions. They stressed the need for AI systems to be transparent about their limitations and defer to human experts for complex or high-risk decisions. The discussions also touched on AI's role in clinical settings, such as radiology, where models must generalise across data from multiple centres to be truly effective. Moreover, the need for strict data privacy measures was highlighted, given the sensitive nature of medical information and varying data protection laws across regions. The doctors concluded that while AI has the potential to support preventive healthcare and chronic disease management, human oversight remains crucial to ensuring safe and effective implementation.
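To make the deferral idea concrete, below is a minimal sketch of the kind of "defer to a human expert" gate the doctors described, where a system withholds its answer for high-risk cases or low-confidence predictions. All names, flags, and thresholds here are hypothetical placeholders for illustration, not code from any system discussed during the visit.

```python
# Illustrative sketch only: a simple "defer to human" gate around a model's
# prediction. HIGH_RISK_CONDITIONS and CONFIDENCE_THRESHOLD are assumed
# policy values, not taken from any real clinical deployment.
from dataclasses import dataclass
from typing import Optional

HIGH_RISK_CONDITIONS = {"infant", "severe"}  # hypothetical risk flags
CONFIDENCE_THRESHOLD = 0.90                  # assumed deferral threshold

@dataclass
class Triage:
    answer: Optional[str]  # the model's answer, or None if deferred
    deferred: bool
    reason: str

def triage(prediction: str, confidence: float, risk_flags: set) -> Triage:
    """Return the model's answer only for confident, low-risk cases;
    otherwise defer to a human expert."""
    if risk_flags & HIGH_RISK_CONDITIONS:
        return Triage(None, True, "high-risk case: refer to a clinician")
    if confidence < CONFIDENCE_THRESHOLD:
        return Triage(None, True, "low confidence: refer to a clinician")
    return Triage(prediction, False, "confident, low-risk case")

# A severe case is always deferred, however confident the model is.
print(triage("dose: 5 ml", confidence=0.97, risk_flags={"severe"}))
```

The key design point in such a gate is that the high-risk check comes before the confidence check, so no level of model confidence can override a clinician referral for sensitive cases.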
Visiting Madras Diabetes Research Foundation (MDRF)
At the Madras Diabetes Research Foundation (MDRF), discussions covered the foundation's approach to diabetes care and the use of AI in medical research. MDRF has a unique setup, combining different areas of diabetes treatment under one roof, including heart health, eye care, nutrition, and physical activity. This integrated approach allows them to treat patients more comprehensively. The foundation also focuses on large-scale research, with data collected from across India to better understand diabetes trends in different regions. This broad dataset helps MDRF avoid the biases that can arise when using hospital-based data, which might not fully represent the general population.
The conversation also explored potential collaborations, such as doctor training programmes and research exchanges. Dr. R. M. Anjana and other staff shared their experiences with various international partnerships and emphasised the importance of conducting research that combines local and global perspectives. There were also discussions about using AI for tasks like tracking patient progress, diagnosing complications, and identifying different subtypes of diabetes. MDRF expressed interest in expanding their use of AI responsibly, ensuring that algorithms are trained on high-quality, representative data to improve patient care without compromising accuracy or safety.
Visiting Sankara Nethralaya
During the visit to Sankara Nethralaya, the focus was on using AI in eye care and the challenge of ensuring it is reliable in real-world use. Doctors, including Dr. Ronnie George and Dr. Rajiv Raman, shared their experiences and pointed out some common issues. One major challenge is access to good data: although a lot of data is available, it is not always labelled correctly, which makes it hard to train AI systems to give accurate results.
Another problem is that many AI tools act like "black boxes." This means they give a diagnosis but do not explain why. The doctors would feel more comfortable using AI if it could show the specific signs in the images that led to the result, such as a change in the optic nerve.
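One widely used family of techniques for exactly this is saliency mapping, such as Grad-CAM, which highlights the image regions that most influenced a classifier's output. The sketch below is a minimal illustration using a generic torchvision ResNet as a stand-in; a real retinal-imaging model, its preprocessing, and its layer names would of course differ.

```python
# Minimal Grad-CAM sketch: compute a coarse heatmap of the regions that
# drove the top-class score. Uses a stock ResNet-18 with random weights
# and a random tensor in place of a preprocessed retinal image.
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()  # stand-in model
activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["feat"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0].detach()

layer = model.layer4[-1]  # last convolutional block
layer.register_forward_hook(save_activation)
layer.register_full_backward_hook(save_gradient)

image = torch.randn(1, 3, 224, 224)  # placeholder for a real scan
score = model(image).max()           # score of the predicted class
score.backward()                     # populates the gradient hook

# Weight each feature map by its average gradient, then combine.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = torch.relu((weights * activations["feat"]).sum(dim=1))
cam = cam / (cam.max() + 1e-8)       # normalise to [0, 1]
print(cam.shape)                     # (1, 7, 7) heatmap to upsample over the image
```

Upsampled and overlaid on the original image, such a heatmap can point a clinician to, for example, the optic nerve region that drove the model's output, which is precisely the kind of evidence the doctors said would build trust.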
The discussions also covered the fact that many AI systems were developed for use in Western countries and might not work well in India. Differences in how diseases present, together with the distinct needs of local patients, mean that AI tools need to be adjusted to fit the local context. The doctors agreed that for AI to be trusted and useful, it must be adapted to reflect the specific needs and challenges of the local healthcare system.
Research group members Kuniko Paxton, Koorosh Aslansefat, Rameez Kureshi and Bhupesh Mishra with members of Sankara Nethralaya and Aktana.
Conclusion
These visits helped researchers from our group to understand the Indian context in three different healthcare areas. We are hoping to build on this with the host institutes in the form of further research exchanges and projects.
We are also hoping to consolidate and publish the findings from the interviews as research papers, addressing the key concerns and insights gathered. These papers will emphasise the challenges and recommendations for implementing AI in healthcare, integrating perspectives from healthcare professionals to ensure that the proposed solutions are practical and contextually relevant.
Acknowledgement
A special thanks goes to Mr. Vikram Kamthe, Head of India Operations and Director of Engineering and Data Science at Aktana, for facilitating connections with various health-related centres in Chennai. His efforts made it possible to engage with leading experts and organisations in the region, enriching the research visit with diverse perspectives on the use of AI in healthcare.
Get in touch with us if you have any queries regarding this blog.
Visit Details
Dates: October 14 to 19
Location: Chennai, India