Artificial Intelligence And Human Rights Issues In Cyberspace

Artificial Intelligence

Human Rights and Civil Liberties issues are rarely considered in their true perspective the world over. Traditionally, Governments across the world have invested heavily in knowing more and more about their citizens and residents. This hunger to know everything could have been catastrophic if civil liberties activists had not been so active. Nevertheless, thanks to super pervasive and intrusive technologies, we are slowly moving towards a totalitarian and Orwellian world.

We anticipated this trend way back in 2009 when we started discussing Human Rights Protection In Cyberspace. Soon we realised that we needed a much more focused and dedicated initiative in this regard. So we launched the world's exclusive Techno Legal Centre Of Excellence For Protection Of Human Rights In Cyberspace (CEPHRC). Since then we have been working continuously to strengthen Civil Liberties and Human Rights in Cyberspace.

In the year 2019, the CEPHRC was merged with the Techno Legal Projects of TeleLaw Private Limited (TPL) and PTLB Projects LLP. This was done to consolidate our Techno Legal LegalTech, EduTech, TechLaw and other similar projects. As both TPL and PTLB Projects LLP are startups recognised by the Department for Promotion of Industry and Internal Trade (DPIIT) and the MeitY Startup Hub, we are working to further rejuvenate the CEPHRC project soon.

In this post we are discussing the Techno Legal issues associated with the use of Artificial Intelligence (AI) in various fields. We are more concerned with the Human Rights and Civil Liberties implications of AI than with its friendly or unfriendly aspects. For the sake of reference, the roots of concern about AI are very old. As early as 1942, these concerns prompted Isaac Asimov to formulate the "Three Laws of Robotics": principles hard-wired into all the robots in his fiction, intended to prevent them from turning on their creators or allowing them to come to harm.

The philosopher Nick Bostrom has likewise argued that superintelligent AI systems whose goals are not aligned with human ethics are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. He believes we should assume that a 'superintelligence' would be able to achieve whatever goals it has. It is therefore extremely important that the goals we endow it with, and its entire motivation system, are 'human friendly'.

This takes us to the concept of friendly artificial intelligence. Eliezer Yudkowsky asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that designers should recognise both that their own designs may be flawed and that the robot will learn and evolve over time. The challenge is thus one of mechanism design: to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.
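The "checks and balances" idea above can be illustrated with a toy sketch. This is purely our illustration, not any real system: the action labels, the `is_friendly` constraint and the utility values are all hypothetical. The point is that the constraint is fixed in advance and vets every action, even if the learned utility function later comes to reward a harmful one.

```python
# A toy sketch (hypothetical names and values) of a hard-wired safety
# constraint that survives changes to a learned utility function.

HARM_ACTIONS = {"spy_on_citizen", "leak_private_data"}  # hypothetical labels

def is_friendly(action: str) -> bool:
    """Hard-wired constraint: never altered by utility updates."""
    return action not in HARM_ACTIONS

def choose_action(actions, utility):
    """Pick the highest-utility action among those that pass the safety check."""
    safe = [a for a in actions if is_friendly(a)]
    if not safe:
        return None  # refuse to act rather than cause harm
    return max(safe, key=utility)

# Even if the learned utility comes to prize surveillance highly,
# the constraint filters it out before action selection.
learned_utility = {"spy_on_citizen": 10, "publish_report": 3,
                   "leak_private_data": 8}.get
print(choose_action(["spy_on_citizen", "publish_report", "leak_private_data"],
                    learned_utility))  # prints "publish_report"
```

The design choice mirrored here is Yudkowsky's: friendliness is enforced by the mechanism itself, not by trusting the evolving utility function to stay benign.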

If you are a software designer or coder, you design and code software according to your own conceptions and ideals. You may release it as open source or sell it as a commercial product. You may offer a limited-feature free trial or a full-fledged version free of charge. You may build it for Linux or for Windows, for a particular country or language, or as global software supporting multiple languages. In short, no two pieces of software are created with the same ideology and objectives. More concerning is the fact that we create things keeping in mind our own experience, ideology, political allegiance and so on. We may create software for law enforcement and intelligence agencies for the sole purpose of snooping, spying or even violating the Civil Liberties of people.

However, in all these activities a "Human Element" is present. Now consider what would happen if a spying software were managed by an AI-driven system with no human involvement at all. We do not wish to be alarmist in this regard, but the least we can expect from AI makers is that Human Rights and Civil Liberties safeguards are hardwired into all AI systems. We have a tendency to pass on our bias and prejudice, and a tendency to violate the Human Rights of our fellow humans. Naturally, we would pass these negative traits on to AI too if proper safeguards are not put in place. Using AI without adequate Cyber Security, Privacy and Data Protection is also a recipe for disaster. Whether we like it or not, the atrocities of Orwellian technologies will only increase in future, and putting in place sound Human Rights Protection in Cyberspace is essential.
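How human bias leaks into an automated system can be shown with a minimal, entirely synthetic sketch. The data below is invented for illustration: a naive model that simply imitates past human decisions reproduces whatever prejudice those decisions contained, which is exactly the trait-passing concern described above.

```python
# Entirely synthetic example: a naive "learner" copies past human
# decisions per group and thereby automates the bias in those decisions.
from collections import Counter, defaultdict

# Hypothetical historical records: (group, human_decision),
# deliberately biased against group "B".
history = [("A", "approve")] * 9 + [("A", "deny")] * 1 \
        + [("B", "approve")] * 2 + [("B", "deny")] * 8

def fit_majority(records):
    """'Learn' by copying the most common past decision for each group."""
    by_group = defaultdict(Counter)
    for group, decision in records:
        by_group[group][decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = fit_majority(history)
print(model)  # {'A': 'approve', 'B': 'deny'} -- the bias is now automated
```

Nothing in the code is malicious; the harm comes entirely from the training data, which is why safeguards must be designed in rather than assumed.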

We at the TeleLaw Project, CEPHRC and PTLB Projects are presently working on these issues and will come up with a sound Techno Legal Policy that ensures Human Rights Protection in Cyberspace at large. Interested stakeholders are requested to collaborate with us in this regard so that the world at large can benefit.