How to Protect Our Kids’ Data and Privacy

Reported by WIRED:

YouTube is currently under investigation by the Federal Trade Commission following complaints that the platform improperly collected data from young users. It’s unclear how much data is at stake, but there’s reason to believe it could be a lot. For many kids, YouTube has replaced television, and depending on how their parents use online platforms, children can begin to amass data even before birth, for instance when a sonogram is posted to social media.

Eighty-one percent of the world’s children and 92 percent of US children now have an online presence before they turn 2. In addition, 95 percent of US teens report having (or having access to) a smartphone. And 45 percent of those teens are online on a near-constant basis, an average of nine hours each day.

Some preeminent tech figures, such as Facebook CEO Mark Zuckerberg and Apple CEO Tim Cook, have asserted that the answer to this massive online footprint is “data ownership”: a model in which users control their own data and decide when to allow corporations or governments to use it.

Though this idea may sound appealing, it is not sufficient to protect individuals, especially children, from the pervasive effects of an uncontrollable online identity.

First, ownership makes no sense when the subject isn’t the creator of the content. Indeed, a person cannot remove content published about them by someone else. During their earliest years, kids’ digital identities are shaped by other individuals, most likely their parents. That means a massive amount of public information about them might be generated before they are able to understand what it means to give consent.

Furthermore, data can be aggregated. Regardless of whether a person uses online services, some decisions will still be made without their control—even without their knowledge—through inference algorithms.

Imagine a child with no digital footprint at all: neither her parents nor the child herself has ever used or posted anything online. Institutions can still use data about other youngsters who fall into similar categories (those with the same zip code, say, or those who attend the same school) to make inferences about her. To put it simply, even if a child is somehow shielded from a premature online identity, her life will still be influenced by the online presence of similar children.
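To make the mechanism concrete, here is a minimal sketch of cohort-based inference in Python. Everything in it, the records, the field names, the scoring rule, is invented for illustration; real systems draw on far richer data and far more sophisticated models.

```python
# A minimal, hypothetical sketch of cohort-based inference. The records,
# field names, and scoring rule are invented for illustration.

from statistics import mean

# Data collected about *other* children who share observable traits.
peer_records = [
    {"zip": "94110", "school": "Lincoln Elementary", "reading_score": 72},
    {"zip": "94110", "school": "Lincoln Elementary", "reading_score": 68},
    {"zip": "94110", "school": "Mission Prep", "reading_score": 81},
    {"zip": "94110", "school": "Lincoln Elementary", "reading_score": 75},
]

def infer_score(zip_code: str, school: str) -> float:
    """Estimate an attribute for a child who has no data of her own,
    using only the records of peers in the same cohort."""
    cohort = [
        r["reading_score"]
        for r in peer_records
        if r["zip"] == zip_code and r["school"] == school
    ]
    return mean(cohort)

# The child has never been online, yet an institution can still attach
# a prediction to her, derived entirely from other children's data.
print(infer_score("94110", "Lincoln Elementary"))  # ~71.7
```

The point is structural: the child contributes no data of her own, yet a prediction about her falls directly out of her peers’ records.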

The practice of data collection could have far-reaching consequences for children’s fundamental rights. The Convention on the Rights of the Child, the most widely ratified human rights treaty in history, protects children as individuals. But modern technology raises new questions: Will children censor themselves on the internet because they don’t know how their data will be used? How is access to information limited when social media platforms use algorithms to display personalized and targeted content? We don’t know what ramifications widespread data collection could have on future generations of kids.

To protect children’s fundamental rights, we need a new data protection framework: one based on how the data is used, not who owns it.

There are already some provisions in place. The Children’s Online Privacy Protection Act of 1998 (COPPA) requires operators of websites and online services to obtain parents’ explicit consent before collecting the personal information of children under 13. Children, or at least their parents, own their personal information and can decide when to share it with third parties. COPPA also attempts to regulate how online providers market to children: for instance, a website operator cannot require a child to disclose personal information in order to participate in a game. Nonetheless, even with parents’ consent, companies still end up collecting, storing, and sharing children’s information.
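As a rough illustration of the kind of gate COPPA requires, here is a minimal, hypothetical Python sketch; the User type, consent flag, and flow are invented for illustration, not any operator’s real code.

```python
# A minimal, hypothetical sketch of a COPPA-style collection gate. The
# User type, consent flag, and flow are invented for illustration.

from dataclasses import dataclass

COPPA_AGE_THRESHOLD = 13  # COPPA covers children under 13

@dataclass
class User:
    age: int
    verified_parental_consent: bool = False

def may_collect_personal_info(user: User) -> bool:
    """Permit collection for users 13 and older; for younger children,
    require explicit, verified parental consent first."""
    if user.age >= COPPA_AGE_THRESHOLD:
        return True
    return user.verified_parental_consent

print(may_collect_personal_info(User(age=10)))  # False: no consent recorded
print(may_collect_personal_info(User(age=10, verified_parental_consent=True)))  # True
```

Note what a gate like this does not do: it governs only the moment of collection. Once a parent’s consent is recorded, nothing in it constrains how the data is later stored, aggregated, or shared.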

Thus, the concept of data ownership is not sufficient to protect children’s privacy rights. We need broader regulation on how data is used, as well as a legal framework that explicitly protects our fundamental civil, political, and socioeconomic rights online.

Both the collection and the use of data should be limited. (The Data Care Act, a bill introduced by US Senator Brian Schatz last December, would impose duties of care, loyalty, and confidentiality on companies that collect personal data.) The framework should apply to all relevant stakeholders, including governments, companies, and individuals. It should define technical standards that prioritize privacy and establish uniform practices for online platform employees, such as the engineers building these systems. Finally, companies that fail to comply should face sanctions or other enforceable consequences.

Given the increasing use of artificial intelligence and the growing capabilities of data processing, change is urgently needed. Under the Generation AI Initiative, UNICEF Innovation and the Human Rights Center at UC Berkeley School of Law recently published a Memorandum on AI and Child Rights. The research sheds light on how new technologies might affect children’s freedom of expression, as well as their rights not to be subjected to discrimination or abuse.

Earlier generations were able to grow up without a digital record of their lives. This generation, and those to come, will be held accountable to their inescapable online identities. How current regulations respond to this shift is a fundamental question of our time.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here. Submit an op-ed at [email protected]

