Rarely do we contend with the ways young people themselves are changing Big Tech, demanding accountability from the companies that have an increasing presence in and control over their lives. In an increasingly digitised world, a new generation of activists is stepping up to challenge Big Tech and governmental use of artificial intelligence.
For example, high schoolers in Seattle are organising against Amazon, while college students in Cambridge and at Stanford have organised against Palantir and Facebook. Far from being slaves to their devices, these teenagers are harnessing the power of their voices to hold powerful entities accountable and create transformative change. Their political engagement demonstrates positive youth development and reshapes the conversation around technology and its impact on young people.
In the 21st century, even most "analog" processes involve algorithms, at least indirectly. During COVID-19 social restrictions, young people experienced the most intimate moments of their lives through screens: the deaths of loved ones, the births of siblings, and prayer with their faith communities. Champions of the metaverse claim that in the not-so-distant future much of reality will be augmented or fully virtual.
Recently, advances in generative AI have spotlighted the power of large language models (LLMs) and the risks they pose of spewing racist rhetoric and spreading misinformation. ChatGPT, for example, is poised to become a mainstay in the education sector whether educators like it or not. For many who lack access to more traditional and accessible forms of schooling, GPT and similar systems are becoming a new avenue for education.
The world is socio-algorithmic: lived experiences are defined both by the humans children interact with and by the decision-making algorithms that mediate or replace those interactions. No one knows this more intimately than Gen-Z, who have never experienced a different world. Their social lives, schooling, healthcare, and family interactions are all mediated by a handful of tech companies. These ‘digital natives’ have grown up surrounded by technology, including a growing presence of campus surveillance systems, and such systems are spreading far beyond schools. Fortunately, the next generation of leaders, regulators, and activists is fighting back.
Sumanth Ratna, a college freshman at Yale, is passionate about the ethical implications of AI, particularly regarding facial recognition technology and deepfakes on social media. He argues that this technology disproportionately impacts marginalised communities, calling for a reevaluation of its widespread use. Ratna emphasises that youth are not only concerned about their digital rights but also taking action to shape the technological landscape for the better. “What we see is cameras in schools are being used for surveillance in unfair ways, being used in cooperation with predictive policing, which we've seen can categorically discriminate against certain people, certain demographics,” says Ratna.
Ratna also warns about the widespread presence of deepfake technologies today. “Deepfakes are used in court cases as incriminating evidence. For example, there might be evidence where it's not really them, but it appears that way. Then, there's no way to prove that it's a fake or that they didn't commit the crime.”
"There are government pieces of technology that will essentially scrape the data of protesters from their phones."
Motivated by these risks, Ratna and his friends at Encode Justice are building a Google Chrome extension that helps users identify deepfakes and then guides them to contact their local member of Congress and urge passage of the Deepfakes Task Force Act.
Ratna's words resonate with a growing movement of youth tech activism. In recent years, news outlets like Teen Vogue have reported on teenagers using their coding skills and social media savvy to undermine companies accused of preventing workers from unionising. Developmental science literature supports the idea that political engagement with technology is a form of positive youth development, with researchers like Nicole Mirra and Antero Garcia arguing that such engagement should be fostered rather than dismissed.
Meanwhile, Emily Bach, a 20-year-old computer science and American Studies student at Columbia University, is another young activist who sees the potential of digital spaces, stating, "I started thinking about the intersection of social justice and technology and thinking about the ways that digital spaces can be generative, but they need to be considerably thoughtful to exist in that capacity."
She is particularly interested in the intersection between AI and social media platforms, believing that we must reevaluate how these algorithms impact our mental health and social interactions. Bach’s work demonstrates that teenagers are not only conscious of the effects of technology on their lives, but they are also actively seeking solutions, debunking the notion that teenagers are oblivious to the potential dangers of technology.
Bach also raises deep concerns about the pervasiveness of digital surveillance, including the use of AI and big data, and its impact on the social justice activism in which she and her peers are engaged. This digital surveillance, Bach explains, extends beyond student activists to include the extraction of data from unsuspecting individuals: "There are government pieces of technology that will essentially scrape the data of protesters from their phones. You often don't know that it happens." She goes on to describe the erosion of privacy as algorithms become increasingly invasive: "The importance of privacy kind of gets swept under the rug and becomes a buzzword, but it's real. Especially when people share highly vulnerable information and personal stories."
By highlighting these issues, Bach’s insights urge us to consider the ethical implications of widespread digital surveillance and its potential chilling effect on student activism. Bach has been actively organising against predictive algorithms in educational settings, arguing that they can perpetuate existing inequalities and unfairly target marginalised communities.
As such, Bach’s perspective challenges us to think critically about the role of AI and big data in surveillance and to advocate for more transparency and ethical consideration in their application on campuses across the US.
Jenny Duan, a freshman at Stanford University, combines her interests in ethics and technology to challenge the status quo. She has created social media accounts called "Ethicist" on Instagram and TikTok, where she shares her thoughts on ethics in digital media and calls her peers to action. Unlike many of her peers across the country, Duan is harnessing the power of technology to raise awareness and create educational material about AI ethics. Duan is particularly concerned about artificial general intelligence (AGI) and its potential consequences, stating, "A lot is the development of artificial general intelligence without any alignment or safety work associated with that. I think this is one of the most pressing risks that we as just a population, human population face because of the like based on controllability and the lack of information we know about."
"Every single user, regardless of age, should have a say in what types of technology they want to see or what they want to see represented in technology."
These three Gen-Z activists actively challenge the mainstream narrative that suggests that teens are mindlessly wasting away on their devices. Instead, they offer a powerful vision for how technology can be harnessed and held to account to create a more just and equitable society. Ratna envisions a future where technology empowers marginalised communities, while Bach imagines a world where technology is transparent, and privacy is respected.
Duan, for her part, believes in the importance of diverse perspectives in the development of emerging generative models, stating, "Every single user, regardless of age, should have a say in what types of technology they want to see or what they want to see represented in technology." All three activists agree that fostering a more ethical and just digital landscape requires multi-stakeholder dialogue among tech companies, governments, and young users. They understand that a future in which technology is compatible with human rights requires AI builders to approach software development with an "equitable" perspective, governments to regulate and limit the pervasiveness of technology, and tech companies to empower their ethics teams to implement universal standards for products.
By amplifying youth voices and perspectives, adults can help break the cycle of fatalism that often surrounds conversations about youth and technology. We can begin to appreciate the depth and nuance of their experiences and recognise their profound impact on the world. Their insights and activism can ensure that the benefits of technology are accessible to all while minimising potential harm.
It is important to recognise that the technology we develop today will disproportionately impact youth: the policies and regulations we pass now will shape the next generation of leaders. By taking the lead from youth, we can foster a more inclusive, just, and imaginative technological landscape that benefits everyone.