This is the first in a series of posts exploring the sociology of artificial intelligence (AI). I cover definitions of AI, before exploring sociological issues of public awareness, ethics, and regulation. I discuss how AI models replicate race, class, and social inequality. I show why First Peoples’ leadership must be central to the development of AI policies. I discuss how sociology can address structural change and ethical use of AI. The rest of this series will examine how AI may transform work, and how AI companies are using customer data to grow their market dominance and policy influence.
Summary
- Australia does not have a legal definition of artificial intelligence (AI)
- AI is technology that makes predictions, recommendations, or decisions
- The public does not fully understand AI, or how it differs from other automated technologies
- The spread of AI raises ethical dilemmas, including the theft of copyrighted work, negative social outcomes, and environmental harm
- AI laws in Australia have been slow to address these issues, and there are no international human rights standards for AI
- AI reproduces race and class biases through its data collection, predictive models, and exploitation of workers
- First Peoples must lead AI policies, so that communities are empowered, First Peoples’ cultures are respected, and Country is better protected
- Sociologists can help address the inequalities created by AI, and inform ethical and policy debates.
Definitions of AI
Australia has no statutes or regulations on the definition of AI. The Department of Industry, Science and Resources provides these definitions:
‘Artificial intelligence (AI) refers to an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming.
‘AI systems are designed to operate with varying levels of automation.
‘Machine learning are the patterns derived from training data using machine learning algorithms, which can be applied to new data for prediction or decision-making purposes.
‘Generative AI models generate novel content such as text, images, audio and code in response to prompts.’
Sociological issues in AI
Research finds there are multiple ways to classify AI technologies and their use (see Figure 1 below). This work shows that the general public does not fully understand the difference between autonomous robots (models that complete specific tasks) and AI (technology that assists problem-solving). More research is needed to disentangle public perceptions of AI (trust, perceived usefulness, and expectations of performance and effort) from actual behaviour (how people use, and are affected by, AI).

At the same time, sociological research finds that people’s opinions about AI are open to change with additional public education, especially regarding ethical issues.
Nevertheless, there are many critiques of the rapid proliferation of AI. These include forcing unwanted AI content and functions on customers, restricting human autonomy, enabling scams, and generative AI products and services. The latter violate copyright law by using materials stolen from books, art, and films to ‘train’ AI models. Generative AI also impinges on Indigenous Cultural and Intellectual Property by emulating Indigenous art and knowledge.
AI also creates negative social outcomes, such as distorting public understanding of science, skewed media literacy, militarisation, and environmental degradation.
There are multiple lawsuits underway in the USA that aim to halt generative AI. In Australia, state regulation lags, particularly in protecting individual artists and monitoring data privacy compliance. Notably, AI ethics guidelines are currently voluntary. Existing regulation, including the Privacy Act 1988 and anti-discrimination laws, does not adequately address AI harms, such as the multiple biases in AI processes: for example, producing racist and sexist imagery, generating racist decisions, perpetuating workplace discrimination, harming child safety, and inciting violence.
Moreover, despite the rapid diffusion of AI being driven by global companies, there are no international human rights law standards governing AI. United Nations Human Rights Chief Volker Türk says:
‘The enormous digital divide means that millions are shut out from the benefits of the digital era with serious consequences for accessing healthcare, education, employment and other potential opportunities. Placing human rights at the centre of how we develop, use and regulate technology is absolutely critical to our response to these risks… We need to shift decisively to regulation and binding industry-wide standards rather than relying on tech companies to self-govern, with robust provisions on due diligence, transparency and accountability.’
AI, race, class, and inequality
Sociological research demonstrates that AI models replicate racial bias in technology. Ideologies about race are historically and culturally defined, but these ideas exist to maintain existing power relations, by conferring greater rights and resources on some social groups over others.
Facial recognition technologies vary wildly in their ‘reading’ of race. When shown the same sample of faces, AI models dramatically under-estimate the number of Black and Latin people, and misclassify their gender, in comparison to hand-coding by humans. These findings suggest that AI technology reproduces the racial bias of its engineers, and cannot replicate the complex, albeit value-laden, context cues and decision-making of a broader group of humans.
Ashwini K.P., UN Special Rapporteur on Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance, finds that AI technologies ‘perpetuate racial discrimination.’ AI therefore has dangerous applications. She shows how predictive policing tools focus on racialised minorities, other marginalised groups, and low-income communities who are already over-policed.
AI models feed a technological justification for continued policing of vulnerable people through a circular, flawed logic. That is, AI models operate through racialised rules about crime, which are applied to a racialised dataset focused on minorities, to make predictions that follow racialised principles, and police use these racialised outputs to keep unfairly targeting marginalised people.
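To make this circular logic concrete, here is a minimal sketch in Python. The numbers are invented purely for illustration, and the allocation rule is an assumption (patrols sent in proportion to past recorded incidents), not any police agency’s actual model. It shows how a neighbourhood that starts with more recorded incidents keeps attracting more patrols, and so keeps generating more records, even though both areas have identical underlying rates:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Two neighbourhoods with the SAME underlying offending rate.
TRUE_RATE = 0.1                  # chance that any one patrol visit records an incident
recorded = {"A": 60, "B": 40}    # historical records skewed by past over-policing
TOTAL_PATROLS = 100

for year in range(1, 6):
    total = sum(recorded.values())
    # 'Predictive' allocation: send patrols where past records are highest.
    patrols = {area: round(TOTAL_PATROLS * count / total)
               for area, count in recorded.items()}
    # New records depend on where patrols go, not on real differences in offending.
    for area, n in patrols.items():
        recorded[area] += sum(random.random() < TRUE_RATE for _ in range(n))
    print(year, patrols, recorded)
```

In this sketch, neighbourhood A continues to receive more patrols and accumulate more records year after year, so the original skew appears to be ‘confirmed’ by the data, despite both areas offending at the same rate.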
Sociological research shows how police are trained to act on racist stereotypes. In Australia, police training on race and cultural awareness is inadequate. Australian police maintain databases and targets that focus exclusively on First Peoples children. AI technology is rapidly amplifying these patterns.
K.P. further demonstrates how AI models in education also replicate racial bias, by building in existing low expectations for the success of minorities, rather than targeting structural racism. K.P. argues:
‘Data breaches and unauthorised access to personal information through hacking pose additional privacy concerns. For those from racially marginalised groups, human rights concerns relating to the right to privacy can be amplified. Privacy violations can put those groups at risk of ostracisation, discrimination or physical danger.’
The problem lies in the way algorithms are programmed: if they are developed with flawed rules and biased datasets, they will reproduce these limitations. Additionally, the customers using AI models do not know what rules and parameters are built into those models, what adaptations or corrections are made over time, or how their own data will be used by AI companies. (In a future post in this Sociology of AI series, I’ll discuss an example from Microsoft, where its AI is using customer data to undermine other sectors.)
Canadian sociologist Mike Zajko covers various examples of the AI sector using racist science, including phrenology:
‘Artificial intelligence practitioners are continuously applying statistical methods to categorising human populations, with very little understanding of the social categories being operationalised […] classifying individuals on the basis of discrete races, genders, and emotional states, through only the shallowest ontological engagement with these phenomena.’
AI models rely heavily on the manual labour of poorly paid workers in low-income countries, who are precariously employed. Companies obscure and underplay how much human labour is required to collate, train, and moderate AI models. This ‘corporate imperialism’ extends relations of colonialism, with companies largely owned by white people in high-income nations relying on the exploitation of the global south.
First Peoples’ leadership
Non-Indigenous sociologists must work to support First Peoples’ leadership and transformation of AI policy. Dr Rose Barrowcliffe, a Butchulla-Wonamutta woman, and colleagues (including Professor Bronwyn Carlson, an Aboriginal woman from D’harawal Country, and a sociologist), have outlined a 10-year vision statement for the future of AI. They recommend five AI policy changes:
- ‘Aboriginal and Torres Strait Islander communities and cultures are respected, recognised and supported by AI systems…
- ‘Aboriginal and Torres Strait Islander people are leading National and International AI governance…
- ‘Aboriginal and Torres Strait Islander peoples are being empowered as active leaders and partners in the design and development of AI systems
- ‘AI ethics/governance frameworks, standards and policies will respect Aboriginal and Torres Strait Islander people and cultures
- ‘Healthy Country is critical. AI systems will support First Nations efforts to care for Country.’ [Emphasis added]
The role of sociology
Sociology stands to make a significant contribution to the development, regulation, ethical guidance, evaluation, and transparent governance of AI.
Professor Kelly Joyce and colleagues argue that sociology can address AI inequalities and lead structural change. Sociological theories and methods illustrate how data and decision-making are not objective processes. Instead, data and technology reproduce the race, gender, class, and other interests of elite groups.
Joyce and colleagues also argue that sociologists have drawn connections between racism and capitalism (racial capitalism). Digital data have become a valued commodity, used to restructure class relations between workers, users, and the owners of technological production. Companies maintain an unequal distribution of AI goods and services, and exploit data privacy for their material gain.
Joyce and colleagues argue:
‘Sociologists thus recognise that what counts as data is socialised, politicised, and multilayered, because data about humans are also often data about structural inequalities related to gender, race or class […] A sociological understanding of data is important given that an uncritical use of human data in AI sociotechnical systems will tend to reproduce, and perhaps even exacerbate, preexisting social inequalities.’
What’s next
AI is a technology that generates predictive outputs, recommendations or decisions.
AI is flooding services across multiple sectors, trawling public data, and stealing from the arts and communities to grow its models. In a rapidly changing context, where AI content is prolific, the public does not adequately understand this technology. There are negligible protections for data privacy, intellectual property, human rights, and the cultural knowledge of First Peoples.
Sociological research suggests that AI is riddled with biases. Without stronger regulation and privacy protections, AI will wreak greater damage on the environment and expand social inequalities, including racial discrimination.
AI will continue to replicate inequality, unless sociological insights and First Peoples’ leadership steer this technology to ethically support human innovation.
In the second post of the Sociology of AI series, I examine Artificial Intelligence and the Economy.