California Just Threw Down the Gauntlet for Kids’ Online Safety
By Michael Ashley and Robert Edward Grant
Imagine the following:
Tech company #1 concocts a novel business strategy: tracking children’s geolocations to sell families more products. “Kids aren’t supposed to have smartphones,” the executive tells his board. “But we know they do. This will allow us to send them alerts whenever they’re near one of our retailers.”
Tech company #2 hits upon its next big idea: use “dark patterns” to dupe kids into giving up their personal information. “These user interfaces are crafted to trick little ones into signing up for a service we’ll bill monthly,” this exec tells his board. “All we gotta do is make a cool landing page, something bright and shiny like Candy Crush, so they’ll bug Mom and Dad to buy it.”
Sounds nauseating, right?
The good news is Governor Newsom recently signed landmark legislation to halt these types of disturbing business practices and more. Typically, so much of what we cover at the Sovereignty Assembly is not exactly cheery stuff: facial recognition software to track you, big tech companies colluding with the government to lie to you, and centralized innovations to control you.
But this month isn’t all doom and gloom. We’d like to draw your attention to this big development coming out of California. Spoiler alert: for once, it’s not something like the state voting to ban new gas car sales by 2035.
In September, Newsom signed into law the California Age-Appropriate Design Code Act (AADC). According to Bloomberg Law, “It would impose sweeping new requirements on businesses that provide online products and services to children.”
Taking effect July 1, 2024, the AADC prohibits several business practices, including “profiling a child by default” and employing so-called “dark patterns.” It also establishes requirements, such as “data privacy impact assessments” and “conspicuously posted privacy notices.”
Writing for CNN, Brian Fung further describes its intended scope:
…The law prohibits the tracking of children’s geolocation, unless that data is essential for providing a website’s service and requires websites to default to the most privacy-protective settings available for children. It bans so-called “dark patterns” that use manipulative design techniques that may dupe kids into giving up their personal information. And it requires businesses to determine the rough ages of its young users.
We contacted Konstantinos Karachalios, Managing Director of the IEEE Standards Association and a member of the Crown Sterling Advisory Board, to weigh in on the bill’s passage. He said the following:
Nota bene: this is probably the first successful effort by democratic regimes to regulate the internet by enforcing the rights of a vulnerable group. Our children are about 1/3 of web users. I do not need to explain here how their data has been mistreated by the big platforms and what negative impact on the mental health of several generations of children it has already had.
Sometimes branded the “Kids’ Code,” this new legislation will require social media platforms to increase transparency in their terms of service. The new law will also strengthen protections for the privacy and welfare of minors on social media. Though we are generally wary of governmental intervention, it has been more than two decades since American lawmakers have passed meaningful regulations to protect our children online.
During this period, we’ve seen widespread acceptance of the internet for virtually all of life’s activities, not just for adults but for kids. For context, consider that in 2021 Business Insider reported that “Forty-five percent of kids under the age of 13 already use Facebook daily while 40% of children surveyed in that age group use Instagram…” Also, last year, CNBC found that “63% of Americans between the ages of 12 and 17 used TikTok on a weekly basis…”
Most alarmingly, the subscription-based sexual social media platform OnlyFans has skyrocketed to over 170 million registered users, per Quantum Marketer. It’s unclear how many of these are children participating in user-generated pornography, but we must assume they comprise some portion of the total.
Clearly, we live in unprecedented times in which our children live more online than in the real world. Therefore, news of this legislation intended to safeguard our most vulnerable is welcome. It also happens to coincide with other newsworthy developments. Just this month the Supreme Court announced it will hear a challenge to Section 230 protections.
As Politico reports:
The case marks the first time the highest court will weigh in on Section 230 of the 1996 Communications Decency Act that protects platforms from being sued over most third-party content on their sites. The court will decide in the case of Gonzalez v. Google LLC whether those protections are too far-reaching when it comes to recommendations of terrorist videos from Google’s YouTube.
However the Supreme Court case shakes out, and whatever else occurs on the legislative front, the unsettling forces that set so many of these reactions in motion can no longer be denied. At this point, most everyone has heard about “smart tech”: innovations using AI and big-data analysis to infuse our devices with a growing form of sentience. In other words, to bring them to life.
What we don’t hear so much about is something we are terming WiseTech. This moniker can describe a growing recognition we must temper our technological progress with wisdom, integrity, and most of all, common sense. After all, in the last two decades we have seen an explosion of (often unintended) negative outcomes from widespread tech usage, especially amongst the young.
What we haven’t experienced is sweeping collective action—using our newfound powers wisely and thoughtfully—especially for the sake of our children and their futures. In that spirit, here’s to more positive and proactive developments in the burgeoning online theatre where we play out our lives.