A conversation between Anthony Ilukwe, Ian Goertz, and Marianna B. Ganapini. Ian opines that a recent uptick in cybersecurity incidents, shifting public opinions on privacy issues, and a widespread lack of understanding of how privacy and cybersecurity threats operate may together contribute to an apathetic public. Marianna extends this idea by pointing to the “I’ve got nothing to hide” mentality many people adopt to distance themselves from privacy concerns. She emphasizes that, beyond direct violations of privacy, our online activities and interactions are recorded as data within cyberspheres that can be curated, collected, and packaged by companies and third parties. This data can be used to construct versions of ourselves that have real impacts on our future opportunities and access to services. Certainly, the privacy and cybersecurity challenges of the digital landscape affect us all and are worth understanding more consciously.
To reflect consciously and critically, Marianna encourages us to ask what role technology can and should play in our society moving forward. Thus far, social media companies, for example, have invited us to extend ourselves via virtual and datafied personas, putting pressure on how we understand and identify who we are. This can be consequential when our data is used by others to make inferences about our preferences and desires. Ian elaborates on the potential consequences attached to online personas. First, some social media platforms have been designed to keep people in spaces with others they likely identify with. Within these spaces, collective and individual ideas are challenged less often, which can lead to dire consequences such as exacerbated political polarization, radicalization, or even insurrection. At the same time, social media platforms also create opportunities for people to find safe spaces and online communities in which they belong, communities that may not be physically accessible to them.
The balance between freedom of speech, safe and inclusive online spaces, and curbing dis/misinformation is difficult to maintain; the practice of maintaining it is known as content moderation. Content moderation policy discussions force us to question how our relationship with meaning and with the truth is shifting amid the spread of hate speech and dis/misinformation. To avoid direct and indirect privacy violations, and to promote the truth, Marianna and Ian posit that clear guidelines, multifaceted interdisciplinary solutions, and informed consent will be key for social media companies moving forward as they respond to increasing pressure to protect their users and consumers.
Ian highlights the need for informed consent and increased public awareness, considering that one of the untold impacts of our contemporary digital landscape is the sheer amount of data collected and produced. Anthony Ilukwe previously alluded to the increase in available data, and Ian noted the value of data within the complex digital ecosystem; however, who owns, accesses, and stores our data remains largely elusive. Ian states that in addition to the overall volume of data increasing, the amount of data that can be connected back to individuals has increased. As more people conduct their lives online and become increasingly interconnected, everyone’s actions have the potential to affect many other nodes within cyber networks. Cyber spheres are composed of a complex web of actors; put simply, Ian categorizes them as state actors, ideologically motivated actors, and criminally motivated actors. As we consider these actors and the wide range of demographics that users and consumers belong to, more agile policymaking and consent frameworks will be necessary to understand what it means to be online.
Looking ahead, Marianna and Ian suggest that we create more robust education and awareness programs: consumers have the right to know how their data will be used, while investors and companies have a responsibility to educate themselves on the differences between creating ethical and unethical AI. Policymakers and lawmakers have a responsibility to write agile, accessible, and interdisciplinary documents that do a better job of keeping pace with technology in order to keep the public safe. The technology sector is beginning to make progress in inviting more diversity and inclusion to address tech’s inherent biases. Similarly, the AI and tech research agenda needs to be more inclusive to maximize the benefits of technology while identifying and mitigating its disadvantages.
By: Tasneem Mewa