THE debate around the impact of artificial intelligence (AI) often resembles a dystopian movie – job losses, the invasion of privacy and super-intelligent machines taking over. But amid these anxieties, a crucial voice is missing – that of marginalised communities.

These communities, facing systemic oppression and limited access to resources, are rarely part of the conversation. Yet they are the ones who stand to benefit most from responsible AI development.

AI as a leveller

The technology can actually empower the marginalised by offering tools and opportunities that bypass traditional barriers. To be clear, marginalisation due to religion, caste, class, gender and sexuality cannot be solved through AI techno-solutionism – the belief that technology can provide solutions to a wide range of social, political and economic problems.

Yet, one of the most significant beneficiaries of AI technology could be the marginalised, especially those who lack cultural capital, privilege, access and resources.

The advantage of AI lies in its ability to democratise access to information and resources. For instance, AI-powered translation tools can help non-native speakers communicate effectively – breaking down language barriers that are often the markers of privilege.

AI-driven educational platforms can also offer personalised learning experiences, catering to the needs of first-generation learners and those from disadvantaged backgrounds. AI technology can empower these individuals by democratising access to information, mentorship and essential services, challenging the status quo of privilege and exclusion.

Empowering people with disabilities

For people with disabilities, AI can be a game-changer. AI-driven applications can assist people with autism in developing social skills and recognising emotional cues, thereby enhancing their ability to interact with others.

Adaptive platforms can provide customised educational content for learners with dyslexia and other learning disorders, ensuring they receive the support they need to succeed. By offering personalised, responsive and supportive tools, AI can help individuals with disabilities achieve greater independence and inclusion.

Challenging injustice

Marginalised communities frequently face systemic injustices, such as discrimination and exclusion from economic opportunities. AI can help challenge these injustices by providing equal opportunities and access to resources.

AI can help small-scale entrepreneurs from marginalised backgrounds by offering insights into market trends and optimising supply chains, enabling them to compete with larger businesses.

AI-driven job matching platforms can also help individuals from disadvantaged backgrounds find job opportunities that match their skills, reducing the impact of discrimination in the job market.

The responsibility for AI should lie elsewhere

The primary responsibility for ensuring AI is ethical and beneficial should lie with those who have the power to shape the technology, such as policymakers, technologists and corporations. They have the capacity to create and enforce regulations to ensure AI is developed and used responsibly, without exacerbating existing inequalities.

There are three main areas around which the responsible evolution of AI revolves: development, deployment and existential risks.

Responsible AI development should address ethical concerns related to training data, energy consumption and the exploitation of workers, particularly those in African countries involved in the supervised training of AI models.

The responsible deployment of AI systems is essential, especially regarding their use for surveillance, warfare and privacy violations.

The existential risks posed by super-intelligent systems range from catastrophic scenarios, such as the overthrow of human society, to more immediate issues, such as significant job losses.

While these concerns drive the need for robust AI policies and frameworks, some issues remain speculative. For instance, there is no consensus on what constitutes super-intelligence, given that much of the discourse is influenced by science fiction.

The threat of job losses, particularly in white-collar sectors, is, however, an urgent and tangible concern. Marginalised people, meanwhile, are often already burdened with the struggle of navigating systemic injustices and fighting for their rights.

Adding the responsibility for AI to their existing challenges would serve only to oppress them further. Instead, society should focus on leveraging AI to uplift and empower these communities – using the technology as a tool for social justice and equity. 360INFO

Shafiullah Anis is a lecturer in marketing at the School of Business, Monash University Malaysia, where Juliana French is the head of the Department of Marketing and a senior lecturer.


