Melanin-Aware and ArcFace Methods in Facial Recognition for Dark-Skinned Individuals
Abstract
The proposed facial recognition system combines melanin-aware preprocessing with ArcFace and is trained on a dataset of 1,000 facial samples of dark-skinned individuals. It accurately recognizes dark-skinned individuals while maintaining performance for non-dark-skinned individuals. ArcFace serves as the primary feature extractor, using an additive angular margin to improve inter-class separability. Experiments used 1,000 dark-skinned facial samples for training and 100 samples for testing. Evaluation shows that melanin-aware preprocessing improves average accuracy by up to 17% over no preprocessing and by 7% over a standard aggressive CLAHE-based method. The True Acceptance Rate (TAR) rises from 88.15% to 93.33% at FAR = 1e−2 and from 83.7% to 86.67% at FAR = 1e−3, indicating greater system stability under stringent security conditions. These gains are accompanied by a more stable distribution of similarity scores and lower decision thresholds, reflecting improved separation between genuine and impostor pairs.
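The additive angular margin mentioned above is ArcFace's core mechanism: the angle between an embedding and its ground-truth class centre is penalized by a fixed margin before the softmax, which forces tighter intra-class clusters. The following is a minimal NumPy sketch of that logit computation only, not the paper's implementation; the scale `s = 64.0` and margin `m = 0.5` are the common defaults from the original ArcFace work and are assumptions here.

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """Additive angular margin (ArcFace-style) logits.

    Adds margin m to the angle between each embedding and its
    ground-truth class centre, then scales all cosines by s.
    """
    # L2-normalise embeddings and class-centre weights
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = np.clip(e @ w.T, -1.0, 1.0)   # cosine similarity to every class centre
    theta = np.arccos(cos)              # angles in [0, pi]
    target = cos.copy()
    rows = np.arange(len(labels))
    # Penalise only the ground-truth class: cos(theta + m) < cos(theta)
    target[rows, labels] = np.cos(theta[rows, labels] + m)
    return s * target                   # scaled logits for softmax cross-entropy
```

Because cosine is decreasing on [0, π], the margined logit for the true class is strictly smaller than the plain cosine logit, so the network must reduce the angle itself to compensate, which is what yields the improved inter-class separation described in the abstract.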