Every one of my PhD students has impacted the way I think. One comment from Laura Kudrna has stuck in my mind. About a decade ago, when we were discussing the boundaries between public and private life, Laura said to me “look Paul, privacy as you know it has long gone. It’s high time you accepted that.” Or words to that effect. My two teenage kids and all their friends have tracking apps on their phones. They can all see where each other is, all the time, and they see nothing wrong with this. Most of my kids’ friends’ parents track their kids. We do not track ours because we think they should have some freedom, but we are now the odd ones out in not knowing where our teenagers are 24/7. I think Laura was right, as she is about most things.
It’s not just our whereabouts that are being monitored, of course. I honestly didn’t have the first clue how my data were being used by any of my app-based or online service providers – until I just did some internet searching to find out (thereby generating even more data to be harvested). I found out that not only do the companies I engage with use my data, but data brokers gather it all up and sell it on – at no benefit to me whatsoever. The information is fed into machine learning algorithms to target me in helpful and harmful ways: to better tailor services to my preferences, personality, and peculiarities, and to bombard me with shit that I don’t like, want, or need. Most of us, including me, seem quite relaxed about all this.
The rapid growth and integration of artificial intelligence (AI) in society also raises critical issues around public trust and ethics. To navigate these challenges, we need to understand public attitudes towards AI. In the UK, the Centre for Data Ethics and Innovation (CDEI) runs the Public Attitudes to Data and AI (PADAI) Tracker Survey, which gauges public perceptions of AI across a diverse sample of 4,000 individuals from across the UK. The latest round of the survey shows varying levels of optimism about AI’s impact: respondents are broadly positive about AI making day-to-day tasks easier and improving healthcare outcomes, but markedly less optimistic about it improving job opportunities or social equality.
Where different values really come into play is when the data are used in generative artificial intelligence (GAI). This is where new content – text, images, or sounds that have never been produced before – is created from an analysis of existing data. The recent use of AI to produce a “new” Beatles song from archive recordings of John Lennon was not GAI, because it just used old data, albeit cleaned up in ways previously thought impossible. GAI would be when AI takes what I say in lectures and creates a new lecture for me (most likely containing “context matters” and derivatives of “fuck” in several places). Or when it creates an entirely new Beatles track (which, given how many old and overrated Beatles songs there are out there already, is even more of a waste of time than creating a new lecture for me, in my less than humble opinion).
As with so many issues with potentially profound consequences, most of us don’t have the first clue about where GAI is going to lead us. The (often self-proclaimed) experts disagree about whether GAI will boost economic growth ten-fold, wipe out human civilisation, or lead to consequences anywhere in between. As with every issue these days, it would appear, “in between” doesn’t really get much of a look-in compared to the extremes. All in all, beliefs about GAI are largely faith-based. By extension, values relating to the right to privacy in the context of rapidly changing tech will also be determined by whether you believe GAI will, overall, be a force for good or ill.
When it comes to the right to privacy, the rapid development of technology and AI presents both opportunities and challenges. Some believe that GAI could significantly enhance privacy protections through advanced security measures, while others fear it could enable unprecedented levels of surveillance and data collection, further eroding privacy. The broad consensus is that AI should be carefully managed, particularly because of its potential impact on privacy and civil liberties, and the possibility of disparate harm to vulnerable groups such as ethnic minorities and people on low incomes. Attitudes towards GAI will often intersect with views on other rights and global issues: individuals who prioritise environmental concerns, for example, may draw parallels between the existential nature of climate change and the potential risks posed by GAI.
I’m generally not a big fan of the precautionary principle, which emphasises great caution in the face of uncertainty. I’m not a cautious person, and fearing the worst can get in the way of progress. But in the case of GAI, I am inclined to take the existential threat seriously and to proceed with caution. This may require a moratorium on further development so that we can take stock of where the technology is heading and flush out the myriad ethical issues, including the value placed on privacy vis-à-vis other attributes of social value. Laura may be right that privacy as I know it is dead, but by taking a deep breath about where GAI is heading, we might decide that we value privacy somewhat more highly than advances in technology alone would suggest.
My comment is more related to the ethical issues you raised with AI than to privacy. The sister of a friend of mine used a commercial AI service to simulate her dead mother, and then paid $10 an hour to chat with her! She is Syrian, based in Germany, and couldn’t go to Syria to be with her mother before she died. She says that the experience helped her grieving. See: https://news.sky.com/story/woman-chats-to-dead-mother-using-ai-with-spooky-results-13089899
I also couldn’t be with my mother before she died, as I am on an arrest list in my home country 😕 for speaking against the regime. She died one and a half years ago and I am still gutted that I couldn’t say goodbye. But I don’t think I could ever use such a service. I just found the whole thing disturbing. Good for her. But is it ethical to have such a commercial business?
Do people need copyright protection over their own personalities, so that they are not simulated against their will after they pass away?