MediaSmarts conducted focus groups with youth ages 13 to 17 to gain insight into how young Canadians understand the relationships between artificial intelligence (AI), algorithms, privacy, and data protection. Participants played a game prototype designed by MediaSmarts’ education team, and a scaffolded learning experience allowed for in-depth discussion after each of the three phases of gameplay. These conversations highlight that while youth understand and appreciate the benefits of recommendation algorithms, they are troubled by algorithmic data collection and data sharing practices. This research is a call for more algorithmic literacy tools and resources that will give youth the knowledge they need to protect themselves and their information in digital spaces.
Key Findings and Recommendations
This research project, funded by the Office of the Privacy Commissioner of Canada, created space for young people to learn more about AI and algorithms and their implications for privacy rights. This project also allowed MediaSmarts to design a youth-friendly educational game to help build awareness and a meaningful understanding of data collection and sharing practices. This research is important because insufficient knowledge of AI and algorithms contributes to exclusion from online spaces, tech-facilitated discrimination, exposure to harmful content, and various privacy risks.
Algorithms Among Us
- Participants highlighted what they see as a concerning tension: that while their connected devices can provide them with the world at their fingertips, algorithms often narrow the type of content they see. Youth are concerned about content saturation (seeing too much of the same thing) that contributes to a false sense of social consensus. Additionally, young people are aware of the impacts of algorithms in their online environments and are often frustrated by a sense of powerlessness to change this algorithmic architecture.
- Youth demonstrated familiarity with recommendation spirals, describing them as ‘mindless scrolling,’ ‘boredom,’ ‘rabbit holes,’ and ‘focus thieves.’ They are aware that algorithmic pre-selection pushes them towards more passive uses of the internet and are annoyed by specific optimization strategies like clickbait content. Young people don’t like being ‘duped,’ ‘scammed,’ or ‘manipulated’ by online platforms or content creators.
- Participants were skeptical of an algorithm’s ability to present them with accurate information, especially when searching for information for a school project, since recommendation algorithms produce an excess of trendy information rather than reliable or trustworthy information.
Under the Algorithm’s Lens
- Youth are acutely aware of the value of personal data for online businesses, especially as everything “boils down to advertising” (Nathaniel, 16). While most participants had few reservations about the use of personal information by algorithms recommending relevant entertainment and leisure content, they were concerned about ‘creepy’ and ‘invasive’ corporate surveillance strategies.
- Young people disliked the idea that their online information was being ‘lumped’ into categories of aggregate data to train algorithms and machine learning without their knowledge and, more importantly, without their consent.
- Participants strongly disliked data brokering, which they felt was ‘scummy,’ ‘unfair,’ ‘wrong,’ and ‘unethical.’ Youth commented on the potential social and political implications of brokering, especially since users were almost always unaware of these processes. They were clear that selling their data without their knowledge and meaningful consent was a ‘violation of their privacy’ (Erin, 17).
The (Algorithmic) Ties that Bind Us
- Many youth were not aware of proxy data and how things like race, gender, sexual orientation, or health status could be inferred from other data and used to build more complete data profiles mobilized by online businesses. After learning about these machine learning processes, participants described them as ‘kinda weird,’ ‘creepy,’ ‘strange,’ ‘disappointing,’ and even ‘evil.’
- Many participants were familiar with the concept of bias in relation to digital technology, algorithms, and AI. Youth were aware of how biased data translates to “wrong answers [or] the wrong information” (Sahil, 15) and how this can have repercussions for both online businesses and users. Some even warned platforms to ‘be careful’ about relying on stereotypes or generalizations.
- In describing these troubling algorithmic assumptions, participants used language like ‘upsetting,’ ‘dangerous,’ ‘terrible,’ and ‘unfortunate.’ They questioned the fairness of these practices and were concerned for people who are already placed at risk of experiencing racism, marginalization, or discrimination.
- Finally, there was an acknowledgment from participants that algorithms, or algorithmic systems, should not be left to their own devices and a call for developers and platforms to be more aware of the consequences of relying too heavily on this technology.
Recommendations
The recommendations from this qualitative research project echo what participants told us about the need for more awareness, transparency, protection, control, and engagement.
AWARENESS
- Youth expressed a need and desire for more robust algorithmic literacy tools and resources to better understand how algorithms work and the impact of artificial intelligence and machine learning on their lives. We recommend new algorithmic literacy curricula tailored to the unique needs of children and youth to encourage critical thinking skills, raise awareness about their privacy rights, and empower young Canadians to take control of their personal information.
TRANSPARENCY
- Youth want more information about how their personal data is collected, stored, and brokered, and they called for more transparency from online businesses. We recommend enhancing algorithmic transparency through clear and accessible data collection and privacy policies.
PROTECTION
- Youth want more protection online, especially when it comes to platforms sharing or selling their data profiles. They want to mitigate future unintended consequences of AI and data sharing practices. We recommend that online businesses and policymakers consider data erasure policies.
CONTROL
- Youth asked for more reporting features so they can take action when they notice harmful or discriminatory content and hold platforms accountable. They also asked for more control over their data and want to decide when to share personal information. We recommend ongoing and more meaningful consent processes and solutions identified by youth in previous MediaSmarts research.
ENGAGEMENT
- Participants appreciated the time, space, and opportunity to talk more intentionally about AI, algorithms, and the digital privacy issues that directly impact their day-to-day lives. We recommend future research projects that continue to build our knowledge of algorithmic literacy levels and engage with children and youth in a way that positions them as experts.
This research was made possible by financial contributions from the Office of the Privacy Commissioner of Canada's Contributions Program.