by Christina Frei, Ramazan Özgü, Gerold Schneider, Christina Siever, and Beth Singler
Imagine this scenario. The smart watch that you only recently bought to help you on your journey to a healthier future detects early signs of heart failure, noticing irregular heartbeat patterns and correlating them with prior diagnoses. Perhaps you almost didn’t notice the warning, but you get it just in time. That should be great news, but the outcome could still be negative for other reasons.
What if the data that you have shared about your health then has, totally illegally, a negative impact on a job application or your next mortgage? Or imagine that you find ways to express your faith digitally, perhaps downloading a prayer app to your phone that tracks not only where you go, but also who you pray for and what health and life issues they might have. In 2022, Mozilla’s investigation of 32 prayer and mental health apps found that most raised concerns about data management, and some even harvested data from third-party platforms such as Facebook and from elsewhere on users’ phones, including the duration and frequency of page visits, personal information, and internet protocol (IP) addresses (Mozilla 2022).
As part of the University of Zurich’s University Research Priority Program on Digital Religion(s), we represent a cluster of researchers bringing religious, anthropological, legal, linguistic, and technological expertise to bear on questions emerging from religion, wellness, spirituality, data, artificial intelligence, and privacy. In this blog, we want to explore some of the immediate issues raised when these areas encounter each other in the modern, connected world.
Here we will discuss the difficulties of marking the boundary between the public and the private, the increasing scope and scale of artificial intelligence and how it changes the likelihood of being identified from your data, and the emerging possibility of forms of digital immortality through your data, along with how these relate to religious concerns.
First, the boundary between the private and the public has always been a ‘movable feast’, with many definitions that can depend on context. From a linguistic perspective, the terms ‘public’, ‘partly public’ (which might include social media posts with restricted privacy settings) and ‘non-public’ refer to the degree of accessibility of a communication space, whereas ‘private’ describes the content of the interaction and the social relationship of the partners involved. Anthropologically, we can note that the content does not necessarily correlate with social closeness; one can also talk about private things with strangers. There are also many definitions of “privacy”, covering at least the following three areas:
- regarding topic: personal, intimate, or private matters, sometimes related to sensitive identities
- regarding access: data and views that should not be accessible to other people
- regarding identifiability: data that allow a person’s identity to be inferred
A correlate of privacy is identifiability: anonymity can enhance privacy, just as the lack of anonymity can break it and make more public those things that we wish to keep secret. Moreover, publicly available data on the internet and private data are not mutually exclusive; there is a deliberate overlap. Tweets and blog posts are often published under one’s personal identity, while emails are intended to stay inaccessible to anyone except the addressee.
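How easily identifiability can undermine apparent anonymity is worth making concrete. The following minimal sketch, with entirely invented records and field names, illustrates the classic ‘linkage attack’: a dataset with names stripped out is joined to a public dataset on shared quasi-identifiers such as postcode, birth year, and gender.

```python
# Minimal sketch of a linkage ("re-identification") attack: an 'anonymised'
# dataset with names removed is joined to a public dataset on shared
# quasi-identifiers. All records and field names are invented.

# "Anonymised" health records: names stripped, but quasi-identifiers kept.
health_records = [
    {"postcode": "8001", "birth_year": 1984, "gender": "F", "diagnosis": "arrhythmia"},
    {"postcode": "8400", "birth_year": 1990, "gender": "M", "diagnosis": "asthma"},
]

# Publicly available records (e.g. a club membership list) that include names.
public_records = [
    {"name": "A. Example", "postcode": "8001", "birth_year": 1984, "gender": "F"},
    {"name": "B. Sample", "postcode": "8400", "birth_year": 1990, "gender": "M"},
]

def quasi_id(record):
    """The combination of attributes used to link the two datasets."""
    return (record["postcode"], record["birth_year"], record["gender"])

# Index the public data by quasi-identifier, then link.
index = {quasi_id(r): r["name"] for r in public_records}
for r in health_records:
    name = index.get(quasi_id(r))
    if name:
        print(f"{name} -> {r['diagnosis']}")  # the 'anonymous' record is re-identified
```

Removing names alone is therefore rarely enough; it is the combination of seemingly harmless attributes that makes a person identifiable.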
But newer and emerging technologies like artificial intelligence (AI) and new social platforms change not only the way in which individuals communicate, interact, and express themselves; they also make individuals more connected and identifiable in ways their existing expectations about online privacy might not prepare them for. For instance, AI in search engines allows us to find detailed information on intimate topics without revealing our identity. We can research medical conditions, explore intimate desires, or even look for an ideal partner. Further, as the normative power of the institutions of religion decreases, beliefs and opinions can be liberated. Newly emerging spiritualities, for instance, open new pathways to enlightenment and self-fulfilment, for which privacy and its protection are vital.
But there are also obvious dangers, both to our privacy and to our experiences online. Through recommendation algorithms that recognise our interests and steer our attention towards ever more provocative material and content, we can encounter and develop extreme views. This can be enabled by the loneliness and disorientation that accompany the loss of the normative function of public institutions of faith. More generally, political extremists find matching partners who, in their own closed circles, can become more fringe in their take on religious and spiritual matters, or grow their hate to the point of planning real-world action, as we have seen with the development of the QAnon community that was fundamental to the January 6th uprising in the USA.
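To illustrate the mechanism rather than any particular platform, here is a minimal, hypothetical simulation: a recommender that maximises predicted engagement, where engagement is assumed to peak just beyond the user’s current preference, gradually ratchets the recommended content towards the extreme. All numbers are invented.

```python
# Hypothetical sketch of an engagement-driven feedback loop. Items have an
# 'intensity' from 0.0 (mild) to 1.0 (extreme); engagement is assumed to
# peak slightly ABOVE the user's current preference, and consuming an item
# shifts that preference towards it. All values are invented for illustration.
items = [i / 10 for i in range(11)]  # content intensities 0.0 .. 1.0

def engagement(preference, intensity):
    # Assumed model: the most engaging item is a bit more intense
    # than what the user already likes.
    return 1.0 - abs((preference + 0.1) - intensity)

preference = 0.2  # the user starts with fairly mild interests
for step in range(10):
    recommended = max(items, key=lambda it: engagement(preference, it))
    preference += 0.5 * (recommended - preference)  # the feed shapes the user
    print(f"step {step}: recommended {recommended:.1f}, preference {preference:.2f}")
```

Nothing here requires malice on the platform’s part; optimising for engagement alone is enough to produce the drift.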
We see that finding community is a double-edged sword. While AI allows us to find organizations trying to protect citizens from unfair judgements in China, the country’s secret service may have listed all of us already because of our academic CVs or because we stated in a tweet that we supported democracy. Other online communities, such as those of faith or those for memorialising lost loved ones, can also provide support. But while digital mourning may bring relief to the bereaved, criminals may detect patterns in the behaviour of the family, allowing them to break into the house while no-one is there.
Even if we attempt to be careful about what we share online, AI is also capable of connecting patterns from even small amounts of data about us. For instance, an attempt to make a job application system gender-blind failed when the machine learning system ‘learnt’ to recommend the applicants whose CVs did not mention certain hobbies that were associated more with women. As men had historically been the more successful applicants, because of the existing biases of the organisation, the AI had learnt from its data set not to recommend anyone with particular characteristics, even though identifying features such as names and gender had been removed from the CVs themselves.
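The mechanism, often called proxy discrimination, is easy to reproduce. In the following minimal sketch (synthetic data, not the actual system described above), gender is absent from the features, but a correlated attribute, a hobby mention, remains, and a classifier trained on biased historical outcomes learns to penalise it.

```python
# Minimal sketch of proxy discrimination: gender is removed from the
# training data, but a correlated feature (a hobby mention) remains, and
# the model learns to penalise it. All data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [years_experience, mentions_hobby]. The hobby correlates with
# gender in this synthetic data; gender itself is NOT a feature.
X = np.array([
    [5, 0], [6, 0], [4, 0], [7, 0],   # historically favoured group
    [5, 1], [6, 1], [4, 1], [7, 1],   # historically disfavoured group
])
# Biased historical outcomes: equal experience, unequal hiring decisions.
y = np.array([1, 1, 1, 1, 0, 0, 0, 1])

model = LogisticRegression().fit(X, y)
print(model.coef_)  # the weight on 'mentions_hobby' comes out negative
```

Stripping explicit identifiers is therefore no guarantee of fairness; any feature that correlates with the protected attribute can carry the bias forward.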
So, what can we do about these issues? To find a more appropriate way of dealing with the challenges to privacy posed by big data, AI, surveillance, and so on, a look at religious law might be instructive. Bamberger and Mayse, for example, suggest that we draw on Jewish law, since Judaism regards privacy as a social obligation and provides rules against the use of publicly visible sensitive information (Bamberger and Mayse, 2021). The criticism, raised in legal scholarship, of conceptualising privacy as an individual right can also be supported by a Buddhist perspective on privacy: although Buddhism rejects the existence of an autonomous self, its concepts can support grounding the protection of privacy in social values (Goodman, 2022). The social aspect of privacy is also clearly illustrated in Catholic law, enshrined in the Codex Iuris Canonici. This ecclesiastical code instructs “all the Christian Faithful” to abstain from infringing upon privacy, underlining the Catholic Church’s recognition of an individual’s right to personal space. The same canon also enshrines the notion of “good reputation,” stressing the social value of privacy. Islamic legal scholars, too, highlight the importance of privacy and draw attention to the possible adaptation of ancient teachings to current problems with digital technologies. Amr Osman regards “satr”, the concept of ‘covering’ in the Islamic tradition, as a possible basis for recognising the “right to be forgotten” (Osman, 2023).
From a secular legal perspective, exploring the concept of a “private sphere” online is pertinent because, if people cannot trust in a protected sphere on the internet, the foundation of central participatory rights in the information society erodes. As Kettemann argued in 2020, the right to privacy serves as a basis for the exercise of all other rights. Religious minorities, for example, are particularly affected in their exercise of human rights, such as freedom of religion, by digitally enabled surveillance, and this could have wider consequences for equality.
For instance, the ‘Snowden files’, released in 2013, provoked intensive international discussions about mass surveillance and digital human rights, discussions that also touched, if only marginally, on the right to freedom of religion, whose effective exercise depends on privacy. The Committee on Legal Affairs and Human Rights of the Council of Europe feared that the US intelligence agencies included religion as one of the important factors in initiating surveillance. In particular, the interception of confidential communications with religious ministers was seen as endangering the right to freedom of religion. Likewise, in the joined cases of Big Brother Watch and Others v. the United Kingdom, Judge Pinto de Albuquerque stated in his partly concurring and partly dissenting opinion that domestic law should provide for the absolute prohibition of any interception of communications covered by religious secrecy.
The final aspect we will look at is a factor already mentioned: the inability to be forgotten on the internet. The European Court of Justice had to deal with this issue in 2014. In the precedent-setting case of Google v Costeja, the Court decided that individuals have the right, in certain circumstances, to request the removal of links containing personal information from search engine results. This decision established the new principle that search engines are responsible for personal information published on the third-party websites they link to. As a result, Google published an online form in Europe through which users can request the removal of links from its search results if the linked data are inadequate, irrelevant, or no longer relevant. This also paved the way for the implementation of the “right to be forgotten” principle in European law. In current EU data protection law, this takes the form of the “right to erasure”, also known as the “digital eraser”. The aim of this right is to strengthen people’s control over their personal data in the digital context and to enhance the right to informational self-determination. This legal principle is rooted in the notion of promoting human dignity in the digital sphere, a notion significantly shaped by Germany’s federal constitutional judges. Indeed, European data protection authorities also explicitly recognise a significant link between human dignity and privacy. This amounts to a profound claim: in order to uphold human dignity in the digital realm, we need to strengthen privacy and data protection measures.
We are unwilling to engage in ‘criti-hype’: urging caution about new technologies while simultaneously accepting the corporate marketing that inflates their potential impact (think of speculative projects such as the Metaverse, even as Disney shut down its research department in that area earlier this year). Nevertheless, we are aware that the affordances of technologies in relation to privacy are ever changing. Should we be worried about data collection that might bring about our digital immortality in a way we never agreed to? Perhaps. But more immediate are the concerns about how we can be private or public people online when it comes to our most sensitive data: our health, spirituality, religion, or views and hopes. Privacy should not become a ‘luxury problem’ that only a few have the privilege to debate and decide for themselves, while others, on the wrong side of the digital divide, are forced to make concessions to have the online experience that they want. Our futures should not be limited by the very smart watches we choose to help us ensure a healthier future.