by Parvathi Bakshi
Abstract
The increasing interaction between artificial intelligence technologies and the institutions that run society, such as healthcare, governance and the economy, has been accompanied by a rise in ethical concerns. These interactions present not only legal issues of governance and regulation but also raise questions about the trust placed in technology and data use. The use of artificial intelligence in day-to-day life, ranging from apps and communication to financial services, has sparked debates over moral, legal, social and economic dilemmas (Hashmi, 2019). In recent years, artificial intelligence has been applied successfully in various social institutions, and such systems are able to solve problems as well as make decisions with human-level perception. Artificial intelligence has been deployed in real-life situations which older coding and software could not resolve because of the degree of uncertainty that real-life settings present. However, ethical and legal concerns remain hurdles for artificial intelligence pioneers: the question of accountability in situations where the use of artificial intelligence leads to undesirable outcomes still remains to be answered.
Introduction
Artificial Intelligence (“AI”), simply put, is human intelligence replicated in computers. AI systems are computer- and machine-based systems which perform tasks normally performed by humans, but also go beyond them to execute expert tasks in seconds. It is a discipline that aims at building machines able to replicate human capabilities such as reasoning and identification. Popular examples of AI advancement include facial recognition, data analysis, smart advertisements, keyboard prompts and more. Next we look at the ethical issues which arise in the context of AI. Ethical issues are, in general, those problems where the decision maker is required to choose among several actions in a given situation and the answer must be evaluated as right or wrong, ethical or unethical (Fradrich, 1991). One study divides the ethical issues of artificial intelligence into two broad categories: legal and philosophical (Khalil, 1993). A relevant ethical concern, familiar from popular media including films like The Terminator and The Matrix, arises when an AI system is capable of replicating human intelligence but not human emotions and values, compounded by the existence of accidental bias (Khalil, 1993). Such a system is essentially self-learning, with the capacity to process vast amounts of data and reach decisions beyond human understanding. However, AI systems have not evolved enough to replicate thought and decisions based on emotions. Even if we set aside the issue of the lack of human emotions, prevalent ethical challenges remain in machine learning and deep learning, including but not limited to transparency, privacy, bias, and the accountability of AI systems.
Principles to Adopt
In the face of the many questions that arise with respect to the usage and application of AI, certain principles have been developed out of the ongoing discussion of ethical concerns. These principles primarily focus on ensuring that AI systems remain transparent, explainable and ethical, especially for the common user. However, the practicality of implementing such principles, in light of, say, machine learning or deep learning, which are innate to AI, would be challenging due to the complexity of the systems and the problems they attempt to solve. Popular sets of principles include (Rossi, 2019): IBM’s Principles of Trust and Transparency; Google’s Principles on AI; the Asilomar AI Principles; the tenets of the Partnership on AI; the AI4People principles and recommendations; the World Economic Forum’s principles for ethical AI; and the Institute of Electrical and Electronics Engineers’ general principles. IBM’s principles focus on AI augmenting human intelligence rather than replacing it, premised on trust and transparency. Google focuses on AI that protects privacy, is socially beneficial, and is fair, safe and accountable to users.
We must understand that the realm of AI relies heavily on data governance norms, which regulate and manage the usability, integrity and security of data as well as data systems. Just as humans are fed information over the course of their lives, AI is fed, or collects, readily available data and builds on it. If AI systems are expected to be ethically sound, then the people developing the technology and the environment into which it is introduced also need to be ethically sound. AI specialises in learning from its surroundings, and any error in the learning process, such as corrupted information being fed to the AI, would corrupt the entire system. An AI platform’s decision-making capacity is shaped by the data it is fed and by how that data is handled; human error and bias at the stage of data entry could therefore corrupt the entire platform. Anti-bias technology would resemble plug-in tools through which AI polices itself; however, these plug-ins would also require companies to adopt ethical guidelines and governance models. This poses another challenge, as anti-bias technology would be programmed according to the corporate user’s own definition of ‘fairness’. That concept of fairness is then left to the standards set by the corporate user, which may not be consistent with fairness standards in other jurisdictions. Recent anti-bias tools include the What-If Tool by Alphabet Inc.’s Google; the Fairness Tool by Accenture PLC; and Watson OpenScale by IBM (Murawski, Ethical and Legal Concerns Give Rise to AI Antibias Tools, 2019).
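To make the idea of bias detection concrete, here is a minimal, illustrative sketch of one of the simplest checks such anti-bias tools automate: comparing the rate of favourable outcomes across two groups (often called demographic parity). The function, data and group labels are hypothetical examples for this article, not drawn from the What-If Tool, the Fairness Tool or Watson OpenScale themselves.

```python
# Illustrative sketch of a demographic-parity check; the names and data
# here are hypothetical, not taken from any specific anti-bias product.

def demographic_parity_difference(outcomes, groups, group_a, group_b):
    """Difference in favourable-outcome rates between group_a and group_b.

    outcomes: list of 0/1 decisions (1 = favourable, e.g. loan approved)
    groups:   list of group labels, aligned with outcomes
    A result near 0 suggests parity on this (single, limited) metric.
    """
    def positive_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)

    return positive_rate(group_a) - positive_rate(group_b)

# Hypothetical loan-approval decisions for applicants in two groups:
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
labels    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, labels, "A", "B")
print(gap)  # 0.75 - 0.25 = 0.5, a gap an audit would flag for review
```

Note that a single metric like this embeds one particular definition of fairness, which is exactly the difficulty raised above: a corporate user choosing this metric has already made a normative choice that another jurisdiction or standard might not share.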
Conclusion
The solution to AI ethics may not be evident, but precautionary steps would include adopting ethical principles and building bias detection and mitigation capabilities into AI platforms, standardised and globally acceptable. Deloitte released a report recommending that corporate users of AI start self-policing for AI bias (Murawski, Making AI Ethics a priority, 2019). Deloitte’s report focuses on regulating the human side of things: creating an ethics advisory committee and ethics policies; testing and fixing AI systems for bias; and corporate disclosures on usage and privacy protection (Hashmi, 2019). While government oversight is inevitable, corporate users of AI should be directed to adopt ethical practices in utilising AI. On the corporate front, issues like credit scoring, judicial sentencing and recruiting bring out instances of bias and discrimination. Ethics is not innate to AI systems; it must be taught to them. Until then, decision-making should not be abdicated entirely to AI platforms; instead, policy makers, programmers and the other stakeholders should adopt an approach in which AI augments, rather than entirely replaces, human intelligence.
References:
1. Rossi, F. (2019). Building Trust in Artificial Intelligence. Journal of International Affairs, 127-134.
2. Hashmi, A. (2019). AI Ethics: The Next Big Thing in Government. Dubai: Deloitte and Touche (ME).
3. Murawski, J. (2019, April 18). Ethical and Legal Concerns Give Rise to AI Antibias Tools. Retrieved May 10, 2019, from The Wall Street Journal Pro: https://www.wsj.com/articles/ethicalandlegalconcernsgiverisetoaiantibiastools11555579801
4. Murawski, J. (2019, April 17). Making AI Ethics a Priority. Retrieved May 10, 2019, from The Wall Street Journal Pro: https://www.wsj.com/articles/makingaiethicsapriority11555493400
5. Khalil, O. E. (1993). Artificial Decision-Making and Artificial Ethics: A Management Concern. Journal of Business Ethics, 313-321.
6. Fradrich, O. F. (1991). Business Ethics: Ethical Decision Making and Cases. Boston: Houghton Mifflin Company.
About the Author
Parvathi is a B.A., LL.B. (Hons.) from Jindal Global Law School, 2015-2020. Though her primary interest lies in corporate law, such as M&As and private equity, she saw that AI, cryptocurrency and cybersecurity play a significant role in the future of corporations and the economy. Her interest in AI developed out of her curiosity about the interplay of tech developments, innovation and science with corporate development. She has taken courses on artificial intelligence as well as cybersecurity and aims to invest her time working in these cutting-edge fields. She is also the founder of this Technology Policy Blog!