Academic Journal of Humanities & Social Sciences, 2025, 8(11); doi: 10.25236/AJHSS.2025.081128.

The Formation Mechanism and Pluralistic Governance of Algorithmic Gender Discrimination in the AI Era

Author(s)

Yating Wang

Corresponding Author:
Yating Wang
Affiliation(s)

Faculty of Humanities and Arts, Macau University of Science and Technology, Macau, 999078, China

Abstract

With the widespread application of algorithms and big data, algorithmic gender discrimination has become an increasingly prominent problem, exacerbating social inequality and hindering the achievement of gender equality goals. Adopting a dual perspective of technology and society, this study systematically analyzes the endogenous technical risks and social roots of algorithmic gender discrimination. It then proposes a pluralistic governance framework spanning data collection, algorithm design, technological innovation, ethical norms, legal refinement, and social supervision to mitigate the risks of algorithmic discrimination. The research aims to promote the healthy development of AI technology, foster a fair (non-discriminatory), just (ethical), and transparent (explainable) algorithmic ecosystem, and provide theoretical foundations and practical pathways for gender equality in digital spaces.

Keywords

Algorithmic gender discrimination; Algorithmic fairness; AI; Technological innovation

Cite This Paper

Yating Wang. The Formation Mechanism and Pluralistic Governance of Algorithmic Gender Discrimination in the AI Era. Academic Journal of Humanities & Social Sciences (2025), Vol. 8, Issue 11: 195-200. https://doi.org/10.25236/AJHSS.2025.081128.
