AI in South Africa carries the usual risks, plus unique dilemmas



AI in South Africa may seem far removed from the socio-economic challenges facing the country, but it will become widespread in the coming years, says Emile Ormond, PhD candidate, University of South Africa.


When people think about artificial intelligence (AI), they probably have visions of the future. But AI is already here.

At its core, AI is a recreation of aspects of human intelligence in computerized form. Like human intelligence, it has broad applications.

Applications and uses of AI

Voice-operated personal assistants like Siri, self-driving cars, and text and image generators all use AI. It also manages our social media feeds.

It helps companies to detect fraud and hire employees. It is used to manage livestock, improve crop yields and aid in medical diagnosis.

Along with this increased power and potential,[1] AI raises moral and ethical questions. The technology has been at the center of several scandals, including but not limited to:

  • violations of laws and rights
  • racial discrimination
  • gender discrimination

In short, it comes with a litany of ethical risks and dilemmas.

But what exactly are these risks? And how do they differ between countries?

Review of rich countries

To find out, I conducted a thematic review of literature from rich countries to identify six high-level, universal ethical risk themes.[2]

I then interviewed experts involved in or associated with the AI industry in South Africa and assessed how their perceptions of AI risk differed from or aligned with these themes.

The findings illustrate marked similarities in AI risk between the global north and South Africa, an example of a global south country.

AI in South Africa

But there are some important differences. These reflect South Africa’s unequal society and the fact that it is at the fringes of AI development, use and regulation.

Other developing countries that share similar features – a wide digital divide, high inequality and unemployment and low-quality education – may have a similar risk profile to South Africa.

Knowing what ethical risks are possible at the country level is important because it can help policy makers and organizations to set appropriate risk management policies and practices.

Universal themes

The six themes of universal ethical risk that I have drawn from reviewing the global north literature are:

  • Accountability: It is unclear who is responsible for the output of AI models and systems.
  • Bias: Shortcomings in the algorithm, the data or both entrench bias.
  • Transparency: AI systems operate as “black boxes”. Developers and end users have limited ability to understand or verify the output.
  • Autonomy: Humans lose the power to make their own decisions.
  • Socio-economic risk: AI could lead to job losses and increase inequality.
  • Maleficence: AI can be used by criminals, terrorists and repressive state machinery.

I then interviewed 16 experts involved in or associated with the South African AI industry.

They include academics, researchers, designers of AI-related products, and people who cross these categories.

For the most part, the six themes I identified resonated with them.

South African concerns

But the participants also identified five ethical risks that reflect country-level features of South Africa. These are:

  • Foreign data and models: Data and AI models parachuted in from elsewhere.
  • Data limitations: A lack of datasets that represent and reflect local conditions.
  • Exacerbating inequality: AI could widen and entrench existing socio-economic inequalities.
  • Uninformed stakeholders: Most of the public and policy makers have only a rough understanding of AI.
  • Absent policy and regulation: There is currently no AI-specific legal requirement or general government position on AI in South Africa.

What does it all mean?

So what do these findings tell us?

First, the universal risks are mostly technical. They are linked to features of the technology itself and largely have technical solutions.[3]

For example, bias can be reduced with more accurate models and comprehensive data sets.
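To make the idea of measuring bias concrete, here is a minimal sketch of one common fairness check, the demographic parity gap: the difference in positive-outcome rates between two groups. The hiring-model outputs and group labels below are entirely hypothetical, not drawn from the study.

```python
def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-outcome rates between groups 0 and 1."""
    rate = {}
    for g in (0, 1):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate[0] - rate[1])

# Hypothetical model decisions (1 = shortlisted) and applicant group labels.
preds = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

# Group 0 is shortlisted 60% of the time, group 1 only 40%.
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
```

A large gap flags that the model favors one group; auditing such metrics on representative local data is one of the technical mitigations the literature proposes.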

Most of South Africa’s specific risks, by contrast, are socio-technical, reflecting the country’s environment. The absence of policy and regulation, for example, is not a feature of the technology itself.

This is symptomatic of a country that is on the fringes of technological development and related policy formulation.


South African organizations and policy makers must not only focus on technical solutions but also consider the socio-economic dimensions of AI.

Second, the low level of awareness among the population suggests there is little pressure on South African organizations to demonstrate their commitment to ethical AI.

In contrast, organizations in the global north face greater pressure to demonstrate their commitment to ethical AI, as stakeholders there are more attuned to the rights inherent in digital products and services.

Regulating AI in South Africa

Finally, while the EU, UK and US have introduced new rules and regulations around AI, South Africa has limited AI-relevant regulation and legislation.[4]

The South African government has also failed to acknowledge the impact of AI and its wider ethical implications.

This is in contrast to other emerging markets such as Brazil, Egypt, India and Mauritius, which have national policies and strategies that encourage the responsible use of AI.


The way forward

AI may, for now, seem far removed from South Africa’s socio-economic challenges. But it will become widespread in the coming years.

South African organizations and policy makers must proactively manage the ethical risks of AI.

It starts with recognizing that AI presents risks that differ from those in the global north, and that these need to be managed.

Governing boards need to add AI ethics to their agenda, and policy makers and governing board members need to educate themselves about the technology.

In addition, the ethical risks of AI must be added to the risk management strategies of companies and governments – similar to climate change, which received little attention 15 or 20 years ago but now features prominently.

Perhaps most importantly, the government should build on the launch of the South African Institute of Artificial Intelligence[5] and introduce tailored national strategies and appropriate regulations to ensure the ethical use of AI.


Emile Ormond, PhD candidate, University of South Africa.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Disclosure statement: Emile Ormond does not work for, consult for, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.

Sources:

[1] The role of artificial intelligence in achieving the Sustainable Development Goals
[2] Global To Local: A South African Perspective on the Ethical Risks of AI
[3] Algorithmic Bias and Risk Assessment: Lessons from Practice
[4] Artificial Intelligence and Healthcare in South Africa: Ethical and Legal Challenges (PhD thesis), by Anisha Amarat Jogi
[5] UJ: SA National Institute of Artificial Intelligence

