By Oyetola Muyiwa Atoyebi, SAN, FCIArb. (UK).

What is Artificial Intelligence?

Artificial Intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.[1]

AI has also been described as “human intelligence or behaviour demonstrated by machines”. In practice, an AI is a computer program (software). Pattern recognition, image recognition, voice recognition and natural-language understanding are among the technologies usually considered part of AI, although there is no single definition and the meaning has changed over time. AI is particularly valuable in devices such as robots, autonomous cars and drones, which use it for observation, navigation, task planning and collision avoidance. Voice interaction with consumer electronics such as Amazon Alexa and Google Home is another example. AI is not confined to machines and robots, however; it is also crucial for business. Credit card firms, for instance, have used AI systems for years to stop fraudulent transactions efficiently.

Advantages of Artificial Intelligence

Artificial Intelligence, as an innovation, has the following positive impacts on society:

AI drives down the time taken to perform a task. It enables multi-tasking and eases the workload for existing resources.
AI enables the execution of hitherto complex tasks without significant cost outlays.
AI operates 24/7 without interruption or breaks and has no downtime.
AI augments the capabilities of differently-abled individuals.
AI has mass-market potential; it can be deployed across industries.
AI facilitates decision-making by making the process faster and smarter.
The Implications of AI for Human Rights

With Artificial Intelligence (AI) technology now widely available, certain questions regularly come to mind. These include:

What are the limits of AI?
What are the opportunities and risks for each of us individually and collectively?
Who holds our data?
Why, how, and where are these data stored?
Limits of Artificial Intelligence

Machine learning relies on the notion that software can absorb knowledge from data and modify its algorithms to maximize its long-term potential. But even with unlimited access to databases, machine learning is subject to significant restrictions and limits that are hard to overcome. Some of these limitations include:

Problem of Recursivity
Although machines can learn and become “smarter”, they cannot independently create a machine that is more capable than a human being; there is no AI capable of self-improvement. Only humans, with their cognitive talents and creative, associative intelligence, are capable of designing and building better machines. Machine learning can therefore only be used to improve the proficiency and speed of learning.

Transparency Problem of Machine Decisions
AI decision-making suffers from a serious lack of traceability. Because an AI weighs and selects among different learning strands until it arrives at a conclusion, how and why it reaches a particular decision is often impossible to explain. The fact that this selection cannot be examined at any given point complicates the seamless transparency that competitive activities demand. Work is already underway to break the AI learning process into distinct, inspectable tools, but a breakthrough in this area will take some time.
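A minimal Python sketch of the problem, assuming scikit-learn is available (the data, feature names and model are purely illustrative, not drawn from any real system): the ensemble below reaches a verdict on an applicant, yet no single human-readable rule lies behind it, and aggregate feature importances are only a crude proxy for an explanation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical applicant records: [income, age, years_at_job, prior_defaults]
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 2 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

applicant = X[:1]
decision = model.predict(applicant)[0]  # 0 = refuse, 1 = approve
print("Decision:", "approve" if decision else "refuse")

# Hundreds of decision trees vote on the outcome, so there is no single rule
# to point to. Global feature importances describe the model as a whole,
# not the reason for this particular case:
for name, weight in zip(["income", "age", "years_at_job", "prior_defaults"],
                        model.feature_importances_):
    print(f"{name}: {weight:.2f}")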

Simulation Limit of Emotions
AI can learn to “understand” the semantics of words and sentences and respond appropriately. Chatbots used in customer service, for example, can “communicate” appropriately and answer simple questions automatically. When talking to an AI or a robot, however, one quickly notices how limited its ability to communicate is, because human perception is not confined to a question-and-answer game. Facial expressions, gestures, instinctive actions and empathic expressions of feeling convey an overall impression that a machine cannot fully perceive, let alone simulate to the same degree. Doing so would require several sensors that simultaneously analyze and link the observed behaviour and determine a suitable response. A machine cannot implement this so-called “sensor fusion”; the cognitive linking function exists only in the human brain.

Moral and Ethical Limits
AI has a serious discrimination problem. It is unable to extract text or images with ambiguous content from data records. Ambiguities arise mainly because the human brain relates values drawn from contexts such as literature, religion, mathematics, sport, or facial and verbal cues. From such data sets an AI will select only one pertinent piece of information; it cannot weigh different contents against one another or think associatively as we do. The result is a high likelihood of error that is morally or ethically biased. AI, for instance, struggles to recognize idioms or discriminatory language. Information that is age-, gender- or religion-specific, like ethical knowledge generally, cannot simply be tagged and made “recognizable” in data sets; we acquire such knowledge over the course of our lives through associative and situational learning. This cognitive ability is socially crucial, and AI does not have it. It does not know what an insult means. If, for example, AI in social or gaming bots reproduces discriminatory and derogatory content, entrepreneurial risks and serious reputational damage follow. The difficulty is that the AI does not know any better, because it cannot understand.

Data access and privacy
AI is digital and virtual, constantly listening and reading. Devices with cameras and speakers carry AI with them like a miniature spy, and smartphone signals can be received at all times. It is common knowledge that we imperil our privacy in this way. How can we safeguard ourselves? Cover the cameras on laptops, tablets and smartphones? Turn off the microphones on all portable electronics? Use airplane mode, or put the smartphone in the refrigerator to block its signals? Any accessible data can be used and processed by AI; every piece of data is preserved and quickly evaluated. In return, however, AI can also help defend privacy. If appropriate data protection functions, including the immediate deletion of data once it has been used, are prescribed and implemented in a way that cannot be circumvented, an anonymous, less traceable system will hopefully become the standard in the future.
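By way of illustration only, the following Python sketch shows the “delete immediately after use” pattern described above; the VoiceClip type, the transcribe() stand-in and the pseudonymisation step are hypothetical and not drawn from any existing product.

import hashlib
from dataclasses import dataclass, field

@dataclass
class VoiceClip:
    user_id: str
    audio: bytes = field(repr=False)  # raw recording, held only briefly

def transcribe(audio: bytes) -> str:
    # Stand-in for a real speech-to-text model.
    return f"<transcript of {len(audio)} bytes>"

def handle_request(clip: VoiceClip) -> dict:
    text = transcribe(clip.audio)
    # Retain only what the service needs: a pseudonymous ID and the result.
    record = {
        "user": hashlib.sha256(clip.user_id.encode()).hexdigest()[:12],
        "transcript": text,
    }
    # Immediate deletion of the raw recording once it has served its purpose.
    clip.audio = b""
    return record

print(handle_request(VoiceClip("alice@example.com", b"\x00" * 2048)))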

Human Rights Issues in AI

Lack of algorithmic transparency
The lack of algorithmic transparency is a serious issue in AI that needs to be addressed. People have been denied jobs, refused loans, placed on “no-fly lists” or denied benefits without knowing “why that happened other than the decision was processed through some software”. Information about how such algorithms function is often deliberately made difficult to access.
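A simple illustration of one remedy, sketched in Python under assumed rules and thresholds (none of which come from a real lender), is to attach human-readable reasons to every automated decision, so that an applicant learns more than that the software said no.

from dataclasses import dataclass

@dataclass
class Applicant:
    income: float
    debt_ratio: float
    prior_defaults: int

def assess_loan(a: Applicant) -> tuple[bool, list[str]]:
    # Collect every rule the application fails, expressed in plain language.
    reasons = []
    if a.income < 20_000:
        reasons.append("income below the 20,000 minimum")
    if a.debt_ratio > 0.45:
        reasons.append("debt-to-income ratio above 45%")
    if a.prior_defaults > 0:
        reasons.append("record of prior defaults")
    return (not reasons, reasons)

approved, reasons = assess_loan(Applicant(income=18_000, debt_ratio=0.5, prior_defaults=0))
print("approved" if approved else "refused: " + "; ".join(reasons))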

Unfairness, bias, and discrimination
Unfairness, bias and discrimination have repeatedly surfaced as major challenges in the use of algorithms and automated decision-making systems, for example in decisions relating to health, employment, credit, criminal justice and insurance. In August 2020, protests were held, and legal challenges expected, over a controversial exams algorithm used to assign grades to GCSE students in England (Ferguson & Savage 2020).
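The mechanism is easy to demonstrate. In the hypothetical Python sketch below (scikit-learn on synthetic data, purely illustrative), a model trained on past hiring decisions that favoured one group reproduces that preference when scoring two equally qualified candidates who differ only in group membership.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

qualification = rng.normal(size=n)   # the attribute that should matter
group = rng.integers(0, 2, size=n)   # a protected attribute (0 or 1)

# Past human decisions favoured group 1 regardless of qualification.
past_hired = (qualification + 1.5 * group
              + rng.normal(scale=0.5, size=n) > 1).astype(int)

model = LogisticRegression().fit(np.column_stack([qualification, group]), past_hired)

# Two equally qualified candidates who differ only in group membership:
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])  # the group-1 candidate scores far higher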

Lack of contestability
Data subjects should have the right to object, on grounds relating to their particular situation, at any time to the processing of personal data concerning them where that processing is based on tasks carried out in the public interest or on legitimate interests. In practice, however, many automated decision-making systems offer no effective channel through which an affected person can contest the outcome.

Legal Personality Concerns
There is an ongoing debate about whether AI (and/or robotics systems) “fit within existing legal categories or whether a new category should be created, with its specific features and implications” (European Parliament Resolution of 16 February 2017). This is not just a legal issue but also a politically charged one.

The High-Level Expert Group on Artificial Intelligence (AI HLEG) has specifically urged “policy-makers to refrain from establishing a legal personality for AI systems or robots”, outlining that this is “fundamentally inconsistent with the principle of human agency, accountability and responsibility” and poses a “significant moral hazard”.

CONCLUSION

Proposed AI rules should therefore address how AI may affect human rights, including socioeconomic rights, consumer protection, privacy rights, and the rights to dignity and non-discrimination. Personal data, our most precious asset, should be secured against a coming data crisis that could prove even worse than the financial crisis. It is undeniable that AI technology holds enormous promise for modern societies in terms of the economy, jobs, health, crisis management, security, social care, well-being and even politics. Yet big data and AI have penetrated our daily lives so deeply that we are losing control over them, and AI is consequently often seen as a danger to human rights and to humanity as a whole.

AUTHOR: Oyetola Muyiwa Atoyebi, SAN, FCIArb. (UK).

Mr. Oyetola Muyiwa Atoyebi, SAN is the Managing Partner of O. M. Atoyebi, S.A.N & Partners (OMAPLEX Law Firm) where he also doubles as the Team Lead of the Firm’s Emerging Areas of Law Practice.

Mr. Atoyebi has expertise in and a vast knowledge of Technology, Media and Telecommunications Law, which has seen him advise and represent his vast clientele in a myriad of high-level transactions. He holds the honour of being the youngest lawyer in Nigeria’s history to be conferred with the rank of Senior Advocate of Nigeria.

He can be reached at atoyebi@omaplex.com.ng

CONTRIBUTOR: Romeo Pupu.

Romeo is a member of the Dispute Resolution Team at OMAPLEX Law Firm. He also has commendable legal expertise in Artificial Intelligence.

He can be reached at romeo.osogworume@omaplex.com.ng.

[1] B.J. Copeland, ‘Artificial Intelligence’, Encyclopaedia Britannica, accessed 10 July 2022.