Ethical Implications of AI in Sociological Research
Keywords:
Artificial Intelligence, Sociological Research, Algorithmic Fairness, Ethical Implications, Mixed-Methods Analysis, Transparency

Abstract
The introduction of artificial intelligence (AI) into the social sciences has presented both opportunities and challenges, particularly in sociological research. This study employed a mixed-methods experimental design to investigate the ethics of applying AI to coding, data analysis, and interpretation. Quantitative findings revealed that AI models achieved accuracy between 0.70 and 0.95 (70 to 95 percent), at times exceeding human coders. Nevertheless, fairness indices were inconsistent and revealed bias across subgroups. Regression and ANOVA analyses found significant associations between researcher characteristics and the importance attached to ethical issues, with privacy and bias emerging as the most salient challenges. Interviews and thematic coding indicated that qualitative approaches highlighted ambivalence: specialists recognized the potential benefits of AI while also expressing concerns about transparency, accountability, and systemic inequalities. Scatter plots, bubble charts, and network graphs demonstrated the gap between workload and algorithmic fairness. Bayesian triangulation of the two data sets reinforced the notion that AI can assist sociological research, yet considerable safeguards are required for it to be used ethically. The present research provides an experimentally supported framework for assessing the role of AI in sociology, emphasizing that ethical considerations must be at the core of its implementation.
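The abstract reports that fairness indices exposed bias across subgroups even when overall accuracy was high. A minimal sketch of how such a subgroup check might look, with entirely illustrative data and group labels (none of this is from the study itself):

```python
# Hypothetical sketch: per-subgroup accuracy of AI-assisted coding, plus the
# largest gap between subgroups as a simple fairness indicator.
# The labels, predictions, and groups below are made up for illustration.

def subgroup_accuracy(labels, predictions, groups):
    """Return accuracy per subgroup and the max pairwise accuracy gap."""
    acc = {}
    for g in set(groups):
        pairs = [(l, p) for l, p, grp in zip(labels, predictions, groups) if grp == g]
        acc[g] = sum(l == p for l, p in pairs) / len(pairs)
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

labels      = [1, 0, 1, 1, 0, 1, 0, 0]   # human "gold" codes (illustrative)
predictions = [1, 0, 1, 1, 0, 1, 1, 0]   # AI-assigned codes (illustrative)
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc, gap = subgroup_accuracy(labels, predictions, groups)
```

Here an overall accuracy near the reported 0.70 to 0.95 range can still mask a nonzero gap between subgroups, which is the kind of inconsistency the abstract's fairness indices describe.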
License
Copyright (c) 2025 Samia Wasif, Ubaid Ur Rehman (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.