Introduction
In August 2017, the Supreme Court of India, in a nine-judge bench judgement, ruled emphatically and unanimously that the right to privacy is a fundamental right. Gautam Bhatia argues that before this judgement there was no specific definition of privacy in the Indian constitution (Bhatia, 2017). Constitutionally, the term ‘privacy’ had been a vague and amorphous concept (ibid: 22). In KS Puttaswamy v Union of India, privacy was held to be a fundamental right within the overarching framework of Article 21, the right to life and personal liberty. Furthermore, the ‘right to privacy’ is an implicit and integral part of other fundamental rights: the rights to free speech, freedom of association, and freedom of religion. The recognition of the right to privacy as a significant part of the constitution also challenges the existing paradigm of civil liberties in India. ‘Privacy’ here recognises the individual as the fundamental unit, which will further impact spatial and relational definitions of privacy. In this context, privacy as an individualised right is a three-pronged right: the right to bodily and mental integrity, the right to informational self-determination, and the right to decisional autonomy.
The right to privacy judgement is pertinent to the adoption of artificial intelligence (AI) and the Internet of Things (IoT). This policy brief discusses the legal and juridical meaning of the ‘right to privacy’, especially in the context of the recent judgement. The legal definition of ‘privacy’ also touches upon international standards and tenets of human rights. The Puttaswamy judgement has, needless to say, opened a Pandora’s box of other concerns and debates. The confluence of the right to privacy with advancements in technology is certain to have an impact on personal information and on the extent of surveillance in individual lives. The right to privacy has an inevitable connection with transparency and other ethical concerns related to automation. The brief recognises that there needs to be a balance between individual privacy and the advantages of technological innovation in fulfilling societal needs. The spectre of online surveillance, of ‘big brother is watching you’, is among the implications of deploying AI. It is vital that artificial intelligence be in consonance with the national legal framework and international human rights norms. The policy brief scrutinises these implications and puts forth its recommendations in this regard.
The policy brief is divided into sections, starting with the legal discourse on privacy and constitutional rights and moving to innovations in technology and artificial intelligence. It looks at the Indian government’s stance on the right to privacy by examining Niti Aayog’s #AIforAll. In this manner, the paper considers the role of various stakeholders: the government, private organisations, and research expertise or knowledge capital. The brief thus attempts to evaluate the future of AI in India with respect to the right to privacy. The concerns of surveillance and the risk of violation of personal information are essential questions that need a stringent policy framework from both socio-political and legal perspectives. An important section of the policy brief therefore contains recommendations in this respect. The brief concludes by looking at upcoming developments at the intersection of politics and technology, and by offering a predictive analysis of the challenges that could emerge for the right to privacy in India as a collectivistic society.
1.1 Legal Framework of the Right to Privacy
The legal framework here connotes both national and international parameters for comprehending the right to privacy. Internationally, the right to privacy is a cardinal feature of the Universal Declaration of Human Rights. Article 12 of the Declaration underlines, “[n]o one shall be subjected to arbitrary interference with his privacy, family, home or correspondence …. Everyone has the right to the protection of the law against such interference or attacks.” (Privacy International, 2018, p-20). Furthermore, Article 17 of the International Covenant on Civil and Political Rights states:
- “No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation.
- Everyone has the right to the protection of the law against such interference or attacks” (PrivacyBytes, 2017).
International standards have emphasised that any interference with the right to privacy must be in accordance with the law, necessary and proportionate. ‘The right to privacy in the digital age’ was adopted by the United Nations through UN General Assembly resolutions in 2013 and 2014. This paved the way for the Human Rights Council to establish a dedicated mechanism to promote the right to privacy: the UN Special Rapporteur on the Right to Privacy, created in 2015. The issues covered in the UN’s resolutions of 2013 and 2014 included concern for the negative impact that surveillance and the interception of communications may have on human rights, and the protection of rights online. The resolutions therefore called on states to respect and protect the right to privacy in digital communication. As stated by the UN Human Rights Office, “The General Assembly called on all States to review their procedures, practices and legislation related to communications surveillance, interception and collection of personal data and emphasized the need for States to ensure the full and effective implementation of their obligations under international human rights law”. The resolution further directs the Special Rapporteur to report on alleged violations of the right to privacy, including challenges arising from the adoption of new technologies.
The right to privacy is also in consonance with Article 19 of the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and regional human rights treaties. Article 19 guarantees freedom of speech and expression, though not in absolute terms. Any restriction on the right must be established by law, pursue a legitimate aim and conform to strict tests of necessity and proportionality. These restrictions ensure that one person’s freedom of speech and expression remains in harmony with that of others.
In India, the right to privacy has stemmed from multifarious constitutional and judicial developments. The constitutional history of India, as Shailesh Gandhi argues, speaks volumes about the conscious decision not to include privacy as a fundamental right. The debate on the right to privacy in India has many intriguing tangents, from both individualistic-collectivistic and public-private perspectives.
1.2 Understanding the History of the Right to Privacy in India
In the MP Sharma v Satish Chandra, District Magistrate, Delhi case of 1954, the Supreme Court declared that the right to privacy is not a fundamental right. The court borrowed from KM Munshi’s draft articles on fundamental rights, which listed the following freedoms under clause 5:
- Right to inviolability of his home.
- Right to the secrecy of his correspondence.
- Right to maintain his person secure by the law of Union from exploitation in any manner contrary to law or public authority.
Gandhi has pointed out that the court quoted the draft in its judgment to show that the constitution makers took a conscious decision not to include the right to privacy among the fundamental rights. This also became a crucial standpoint for the court to keep privacy out of the list of liberties and freedoms in subsequent cases. The right to privacy was acknowledged in some form or the other in Gobind v State of Madhya Pradesh (1975); the Puttaswamy judgement, however, represents a paradigmatic shift in the jurisprudence. The acceptance of privacy as an integral part of fundamental rights not only broadens the framework of fundamental rights but also deepens the meaning of fundamental rights altogether (Kumar, 2017).

The substance of the Puttaswamy judgement is as essential as its context. As Alok Prasanna Kumar succinctly argues, the judgement has come at an important juncture in history: the state is reaching deeper into the personal lives of citizens even as it withdraws from the economy, labour is migrating from rural to urban spaces, and vast changes in technology and business practices have made individuals’ data a lucrative commodity. The Puttaswamy judgement affirmed the line of judgements from the 1975 Gobind case onwards, giving privacy the status of a fundamental right. “Privacy was held to be a fundamental right, specifically under article 21, and within the broader fundamental rights chapter, as an integral part of the rights to free speech, freedom of association, freedom of religion, and others” (Bhatia, 2017, p-22-23). The judgement is exemplary firstly because it emphasises privacy as an individual right: the Supreme Court’s verdict makes the individual the fundamental unit and the basis for the right to privacy (ibid). The six opinions on the right to privacy can be clubbed into a three-pronged right: the right to bodily and mental integrity, the right to informational self-determination (control over personal information), and the right to decisional autonomy. Each of these sub-parts focuses on the individual, treating body and mind, personal information and private decisions as sacrosanct. This sanctity is also embedded in the way privacy is seen as an instrument for achieving a meaningful life, which will further alter and transform the relationship between the individual, the state and the society. The judgement notes that the individual’s consent is paramount to any state programme based on data collection and data mining. At the same time, privacy is not an absolute right and admits certain justified restraints.
The Puttaswamy judgement has elucidated that the right to privacy must be subject to a standard of proportionality. In a juridical sense, therefore, the Supreme Court’s judgement is in consonance with international standards. The apex court has laid the groundwork for restricting the right to privacy only under certain conditions, such as fulfilling the welfare functions of the state, controlling crime and meeting other legitimate goals. Bhatia writes, “The Court’s endorsement of the proportionality standard is likely to have important ramifications in future cases, especially in the context of data collection and data mining” (ibid, p-24). Thus, for the state to limit the right to privacy, it must have a convincing justification and must choose methods that minimally infringe upon privacy in order to achieve its goals.
1.3 Problems with Artificial Intelligence
The affirmation of the right to privacy within the framework of fundamental rights of the Indian constitution has triggered several debates about the way technology interacts with the citizenry. Artificial intelligence will be applied in a vast number of situations, for instance in how individuals access and find information online. Artificial intelligence is already deployed in the ways consumers are targeted by marketing agencies on social media platforms. These practices might seem harmless on the surface; however, there is no legal framework to address the codes of social media platforms and the way algorithms experiment on their users. On the flip side, state surveillance is one of the ways artificial intelligence manifests itself in day-to-day life. As reported by Privacy International, “The pervasive and invisible nature of AI systems, coupled with their ability to identify and track behaviour, can have a significant chilling effect on the freedom of expression”.

The manipulative potential of artificial intelligence was explicit in the way voters were targeted during Donald Trump’s 2016 presidential election campaign in the USA. Cambridge Analytica (CA), a data science firm, rolled out an extensive advertising campaign focusing on persuadable voters based on their individual psychology. The manipulation of voters at the hands of CA and politicians not only received harsh criticism but also became a matter of intense debate on the misgivings about technology: it was not merely a tool of influence but a tool of emotional manipulation. The Conversation in its report states, “This highly sophisticated micro-targeting operation relied on big data and machine learning to influence people’s emotions. Different voters received different messages based on predictions about their susceptibility to different arguments. The paranoid received ads with messages based around fear. People with a conservative predisposition received ads with arguments based on tradition and community”. Psychographic attributes such as personality, values, interests and lifestyle produce predictive data for any political party or affiliation. AI can therefore be used as both tool and weapon for political advantage; Scientific American called this breach of privacy an ‘arms race to the unconscious mind’. A surveillance regime aids the state in establishing a disciplined, hierarchical order in which repression of the democratic will becomes easier. From a Foucauldian perspective, the power produced by a surveillance state paves the way for a productive population. Techniques such as facial recognition, behaviour analysis and video surveillance are some of the ways in which the state will attempt to keep an eye on its citizens.

Another issue with artificial intelligence is the sheer volume of data that is generated and collected, and here lies a conflict between big data technologies and individual control. The value of large amounts of data resides not in its primary purposes but in its secondary purposes, where data is reused many times over. The notion of ‘data minimisation’ is crucial to maintaining individual privacy: it requires organisations to limit the collection of personal data to the minimum necessary for achieving a legitimate result. As argued by Amber Sinha, “Control is exercised and privacy is enhanced by ensuring data minimisation” (Sinha, 2018).
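To make the data minimisation principle concrete, the following is a minimal sketch in Python of purpose-bound collection. The field names and purposes are hypothetical; the sketch illustrates only the idea that data outside a declared purpose is discarded at the point of collection.

```python
# A minimal sketch of purpose-bound data minimisation.
# Field names and purposes are invented for illustration.

# Each declared purpose maps to the smallest field set it needs.
ALLOWED_FIELDS = {
    "billing": {"name", "address", "invoice_id"},
    "delivery": {"name", "address", "phone"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Keep only the fields whitelisted for the declared purpose,
    discarding everything else before storage."""
    if purpose not in ALLOWED_FIELDS:
        raise ValueError(f"No declared purpose: {purpose!r}")
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "name": "A. Kumar",
    "address": "12 MG Road",
    "phone": "98XXXXXXXX",
    "invoice_id": "INV-001",
    "browsing_history": ["..."],  # never needed for billing, so never stored
}

print(minimise(raw, "billing"))
# {'name': 'A. Kumar', 'address': '12 MG Road', 'invoice_id': 'INV-001'}
```

The design point is that the purpose is declared before collection and enforced in code, rather than data being collected first and justified later.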
The principle of data minimisation is violated when big data technologies retain more data than required for secondary uses. This became evident in the Cambridge Analytica case, where data-sharing practices operated without any sort of control. The architecture of social media is such that data collection happens at an exorbitant scale. This was visible in the ‘friends permission’ feature offered by Facebook on its platform, which allowed individuals to share information not just about themselves but also about their friends. It has been argued that algorithms and artificial intelligence operate on the principle of consent. However, Suchana Seth has pointed out that in the realm of machine learning consent is of two kinds: explicit and implicit. In her words, “Explicit consent is the permission granted by the users to an app or a service to use data for the basic functionality of that app, or keep it installed on their devices, with the expectation that the data will not be misused” (Seth, 2017, p-66). On the other hand, implicit consent refers to “the darker side of explicit consent: the consent we unknowingly give to our data being sold, or used for purposes very different from what we gave consent to, and possibly long after we stopped using the app or service in question” (ibid). The notion that privacy is adequately safeguarded through consent is therefore a naive one, and it breaks down in online social networks. In a networked social space, what matters is not merely what we consent to, but also what our friends consent to share about us and about themselves. Seth has also argued that consent becomes problematic when it comes to metadata. Metadata includes information associated with telephone calls, such as the time, location, source and destination of a call. Users have little or no control over the metadata they generate and automatically transmit, and it is difficult to apply standard privacy-preserving technologies such as encryption to it. Deep learning algorithms have had considerable success in predicting age and gender from telephone call metadata. Furthermore, research has shown it is possible to predict the personal habits of users from Internet of Things (IoT) traffic, even when that data is encrypted.
Privacy International in its report has established a nexus between privacy and artificial intelligence as follows:
- Data exploitation: Consumers, and users in general, are often unaware of the amount of data generated and utilised by their smartphone apps, smart home appliances and connected cars.
- Identification and tracking: Various AI applications are used to identify and track individuals’ lives and activities. “For example, while personal data is routinely (pseudo-) anonymised within datasets, AI can be employed to de-anonymise this data, challenging the distinction between personal and non-personal data, on which current data protection regulation is based” (p-18). A minimal sketch of such re-identification by linkage follows this list. Along with this, techniques like facial recognition can allow the police to treat individuals as suspects without concrete evidence.
- Inference and prediction of information: Machine learning is often deployed to comprehend and predict people’s emotional states. “When sensitive personal data, such as information about health, sexuality, ethnicity, or political beliefs can be predicted from unrelated data (i.e. activity logs, phone metrics, location data or social media likes) such profiling poses significant challenges to privacy and may result in discrimination” (ibid).
- Profiling to sort, score, categorise, assess and rank individuals and groups: Consent in such cases is implicit when individuals are sorted, scored and categorised by AI applications. IBM, for instance, has utilised AI for separating ‘genuine’ refugees from other migrants. This kind of profiling and segregation can often lead to discrimination and xenophobia against various sections of the population.
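The de-anonymisation risk noted above can be illustrated with a minimal sketch of a linkage attack: joining a ‘pseudonymised’ dataset with a public auxiliary dataset on shared quasi-identifiers. All records here are invented for illustration.

```python
# A minimal sketch of re-identification by linkage.
# All records are invented; no real dataset is referenced.

# "Anonymised" records: names removed, quasi-identifiers retained.
health_records = [
    {"zip": "110001", "birth_year": 1984, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "560034", "birth_year": 1991, "sex": "M", "diagnosis": "asthma"},
]

# Public auxiliary data (e.g. an electoral roll) carrying the same
# quasi-identifiers alongside names.
voter_roll = [
    {"name": "R. Sharma", "zip": "110001", "birth_year": 1984, "sex": "F"},
    {"name": "K. Rao", "zip": "560034", "birth_year": 1991, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(records, auxiliary):
    """Join the datasets on quasi-identifiers: any unique match pins
    a name back onto a supposedly anonymous record."""
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
        matches = [p for p in auxiliary
                   if tuple(p[q] for q in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # a unique combination re-identifies
            yield matches[0]["name"], rec["diagnosis"]

for name, diagnosis in reidentify(health_records, voter_roll):
    print(name, "->", diagnosis)  # e.g. R. Sharma -> diabetes
```

The point of the sketch is that removing names alone does not anonymise data: a handful of ordinary attributes, taken together, is often unique to one person.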
1.4 Establishing a Balance between AI and the Right to Privacy
Emerging technologies and algorithms have created a push for a regulatory framework on artificial intelligence. The discourse around human rights law, data protection, sectoral privacy regulation and research ethics needs to develop. It is pertinent to have a regulatory framework that perceives the right to privacy as a social good and not just an individual good (Sinha, 2018). This does not mean the individual should not be a fundamental unit; rather, data protection laws need to shift their focus away from individual control alone. The individualistic approach has been a two-step process: first, data controllers are required to tell individuals what data they wish to collect and use, and give them a choice about sharing it; second, individuals have the right to access their data, and data controllers secure the data and use it only for the purposes identified. The assumption behind such an approach is that individuals are giving informed consent and that this is an acceptable trade-off between privacy and competing concerns. However, consent itself has limitations: privacy notices, for instance, tend to be tedious and therefore difficult to read. Sinha, borrowing from Kent Walker, lists five problems with privacy notices (Sinha, 2018):
- Overkill: Long and repetitive text in small print.
- Irrelevance: Describing situations of little use to consumers.
- Opacity: Broad terms that reflect limited truth and are unhelpful for tracking and controlling the information collected and stored.
- Non-comparability: The simplification required to achieve comparability compromises accuracy.
- Inflexibility: Failure to keep pace with new business models.
Sinha here concurs with Seth’s argument, pointing out that large amounts of data do not allow for meaningful consent every time. Another issue is the clash between individual privacy and big data technologies elucidated above. A valid example in this regard is the publication of Aadhaar numbers and related information by several government websites. UIDAI justified this data breach by saying that its central biometric database is secure. However, even in these cases the intended architecture ensured the seeding of other databases with Aadhaar numbers, thus creating multiple points of failure through disclosure (ibid). Policy making therefore needs to take cognizance of the fact that privacy is not merely an individual good but a social good as well. The Puttaswamy judgement has also identified privacy as a social value for individual development, stressing its dependence on solitude, anonymity and temporary release from social duties. Data is a potentially toxic asset if it is not collected, processed, secured and shared in an appropriate way. Thus, the government must work to establish a code that regulates and protects data for the social good.
Ethical codes are equally necessary for AI. There is a need to explore the literature on business and human rights as well as on the ethics of big data research. Various industries have already started deploying ethical models for AI. “The German Ethics Code for Automated and Connected Driving is an example of a sectoral ethic code that also contains a specific principle on data privacy which addresses the tension between business models that are based on the data generated by automated and connected driving, and limitations to the autonomy and data sovereignty of users” (Privacy International, 2018, p-24).
In a similar vein, Suchana Seth has argued that there is a need for stronger incentives, both regulatory and financial, to transition from surveillance-based business models to more ethical ones. When it comes to establishing privacy for individuals, there is a pivotal need to make the algorithms that process data fair, accountable and transparent. At the same time, artificial intelligence needs to address the problem of ‘algorithmic bias’, as in the above-mentioned example of IBM segregating ‘genuine’ refugees from other migrants. Data in itself is never biased; however, any technological innovation reflects the picture of a lopsided and unfair society. Addressing fairness will not always be equated with accuracy; nevertheless, algorithms can be trained to be gender-equal and free from prejudice. “Since machine learning algorithms operate within existing socio-technical systems, the problem of making algorithms fair is not purely a technical problem” (Seth, 2017, p-68). The idea of fairness in algorithms is closely enmeshed with variability across situations: fairness means different things to different stakeholders, and training AI accordingly will be a complex process. Furthermore, AI has a vast social space to govern, from social media to one’s movements in everyday life. In such a scenario, it is important that AI be subject to some form of human oversight. Today, big data companies, in close alliance with the state, are making decisions on behalf of the citizenry, from analysing social media content such as hate speech to using tools like facial recognition for non-criminal purposes. The rule of law, along with support from civil society, can amply help in providing a regulatory framework that addresses the negative impact of AI.
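As a concrete illustration of what auditing for algorithmic bias can look like, the following is a minimal sketch of one common fairness check, demographic parity, run on invented audit data. The 0.8 ratio threshold echoes the conventional ‘four-fifths’ rule of thumb and is an assumption here, not a standard drawn from the sources above.

```python
# A minimal sketch of a demographic parity check on a model's
# decisions. The data and the 0.8 threshold are illustrative only.

decisions = [  # (group, model_decision) pairs from invented audit logs
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def approval_rate(group: str) -> float:
    """Share of positive decisions the model gives this group."""
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("group_a")  # 0.75 on this toy data
rate_b = approval_rate("group_b")  # 0.25 on this toy data
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"group_a: {rate_a:.2f}, group_b: {rate_b:.2f}, ratio: {ratio:.2f}")
if ratio < 0.8:  # disparity beyond the four-fifths rule of thumb
    print("Flag for human review: approval rates diverge across groups.")
```

Passing such a check does not make a system fair in every sense; as Seth notes, fairness is not a purely technical problem, and different stakeholders may require different metrics.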
The components of security, transparency and accountability go hand in hand with the right to privacy. A paper by the Observer Research Foundation has urged the government to ensure that AI policies prioritise safety and security by introducing best cybersecurity practices, promoting AI norms through public-private dialogue, integrating AI into cybersecurity systems, and investing in industry R&D partnerships.
Concluding Remarks – Role of Stakeholders
As India advances towards fully embracing the Fourth Industrial Revolution, the role of various stakeholders in the process becomes critical. The role of all three branches of government (legislative, executive and judiciary) is vital to addressing the complexity of AI. The Puttaswamy judgement has already marked a paradigmatic shift and signalled an opportunity for the government to integrate the right to privacy into the Indian constitutional order. Furthermore, the government appointed the Srikrishna Committee to examine the legal framework for data protection. The committee released its white paper on data protection identifying informational privacy and data innovation as key objectives, stating that “a firm legal framework for data protection is the foundation on which data-driven innovation and entrepreneurship can flourish in India” (Sinha, 2017). The Centre for Internet and Society (CIS) has pointed out that although the bill brings companies and technologies under the principle of privacy, it remains silent on the responsibilities and rights “of data controller to explain the logic and impact of automated decision making including profiling to data subjects and the right to opt out of automated decision making in defined circumstances” (CIS). The draft bill on data protection has also elaborated on the ‘harm’ principle, defining it “as including bodily or mental injury, loss, distortion or theft of identity, financial loss or loss of property, loss of reputation or humiliation, loss of employment, any discriminatory treatment, any subjection to blackmail or extortion, any denial or withdrawal of a service, benefit or good resulting from an evaluative decision about the data principal, any restriction placed or suffered directly or indirectly on speech, movement or any other action arising out of a fear of being observed or surveilled, any observation or surveillance that is not reasonably expected by the data principal” (ibid, 2018).
The bill at the same time addresses data rights for the individual, such as the rights to confirmation and access, correction and data portability, and the right to be forgotten. However, the bill remains silent on many of the international tenets that are part of the General Data Protection Regulation (GDPR), the European Union’s code for regulating the use of data. The bill does not address the right to object to processing, the right to opt out of automated decision making, or the obligation on the data controller to inform the individual about the use of automated decision making and to provide basic information regarding its logic and impact. The CIS has argued that although the bill addresses ‘harm’ in the context of AI, it fails to empower individuals over how their data is processed and remains quiet on the issue of ‘black box’ algorithms. The principle of ‘data quality’ in the bill does not explicitly take into account biases in datasets, though it “could potentially be interpreted by the data protection authority to include in its scope, means towards ensuring that data does not contain or result in bias”.
Another endeavour from the government’s end has been the publication of #AIforAll by Niti Aayog. The paper proposes the use of AI in various fields such as agriculture, education, smart cities and health. However, it does not address the privacy side of AI and takes only a generic perspective on the right to privacy. The CIS has pointed out that the paper fails to engage with emerging data protection principles that bear directly on AI, such as the right to explanation and the right to opt out of automated decision-making. There is also no discussion of data minimisation and purpose limitation, which have been identified as key problems with AI in this policy brief.
There needs to be a robust legal framework to address the complexity of privacy and AI. The lack of human oversight of AI remains the biggest issue, and AI has to be trained in a manner reflective of a fair and accurate social matrix. Moreover, the role of the law here is to bind both the state and private players in regulating the use of AI so as to preserve the right to privacy. A stringent legal framework on its own will not be sufficient, as its implementation requires a multi-pronged approach. As Sinha argues, it cannot be expected that data-driven businesses will view privacy as a social good and be publicly accountable. At the same time, research expertise on AI needs to develop in a way that not only addresses its positive impact but also places ethics at the centre. The legislature and executive must work together to ensure a fair application of the law regulating data. Private entities like big data companies have an onus to use AI ethically and to develop best practices in the arena. The involvement of the government, the public and private sectors, and the research community is therefore essential to addressing the complications of privacy and AI.
This piece is written by Anuttama Banerji. Anuttama is an Associate Researcher at Govern.