
Kuniko Paxton

Contact: k.azuma-2021@hull.ac.uk

Office: Wilberforce Building 213

PhD topic: Redefining the Fairness Definition in Machine Learning Classification with the Nuanced Sensitive Attribute Skin Colour

This research enhances AI fairness assurance from a tripartite perspective: performance, justification and robustness. The core idea is to measure distributional distance over sensitive attributes rather than treating them purely as categories. Sensitive attributes such as gender, age, skin colour and marital status have traditionally been handled as categorical values; measuring their distributions instead allows fairness to be assessed beyond the limitations of grouping. For performance, treating the sensitive attribute numerically in the assessment of CNN prediction accuracy captures the nuance that is lost at category boundaries. For justification, a comprehensive fairness evaluation using coverage estimation exposes latent bias in the model's decision-making process and supplies reasoning for its fairness. For robustness, the correlation between a numerical sensitive attribute and adversarial attack success rates reveals unfairness in the model's vulnerability. The methodology is applied in a case study using skin colour as the sensitive attribute in convolutional neural networks for skin lesion classification.
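As a rough illustration of the distributional idea, the Python sketch below (synthetic data only, not taken from the project; all variable names and numbers are hypothetical) contrasts a categorical group split with a Wasserstein distance over a continuous skin-tone score, and checks whether that score correlates with adversarial attack success:

    # Minimal sketch on hypothetical data: categorical vs. distributional view
    # of a continuous sensitive attribute, plus a robustness correlation check.
    import numpy as np
    from scipy.stats import wasserstein_distance, pearsonr

    rng = np.random.default_rng(0)

    # Hypothetical per-sample values: continuous skin-tone score in [0, 1],
    # model prediction scores, and whether an adversarial attack succeeded.
    skin_tone = rng.uniform(0.0, 1.0, size=1000)
    pred_score = np.clip(0.8 - 0.2 * skin_tone + rng.normal(0, 0.1, 1000), 0, 1)
    attack_success = rng.random(1000) < (0.2 + 0.3 * skin_tone)

    # Categorical view: split at an arbitrary threshold and compare group means
    # (the grouping limitation the research aims to move beyond).
    light, dark = pred_score[skin_tone < 0.5], pred_score[skin_tone >= 0.5]
    print("Mean-score gap (categorical):", abs(light.mean() - dark.mean()))

    # Distributional view: distance between the two score distributions,
    # which retains information a single group mean discards.
    print("Wasserstein distance:", wasserstein_distance(light, dark))

    # Robustness view: correlation between the continuous attribute and
    # adversarial attack success, as an indicator of unequal vulnerability.
    r, p = pearsonr(skin_tone, attack_success.astype(float))
    print(f"Skin tone vs. attack success: r={r:.3f}, p={p:.3g}")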

Publications

  • Azuma, Kuniko, Tareq Al Jaber, and Neil Gordon. "Creating a Classification Module to Analysis the Usage of Mobile Health Apps." Acta Scientific Computer Sciences 4.12 (2022).

Useful Links

Supervisors

Principal Supervisor 

Koorosh Aslansefat

Co-supervisor 

Dhaval