By Mary Graw Leary, Warren Binford, and John Yoo, Law Professors
Something extraordinary happened this summer. During one of the most divisive periods in modern U.S. history, Senate Republicans and Democrats came together almost unanimously to advance legislation that protects America’s children. This bill, the Kids Online Safety Act (KOSA), requires internet companies to keep children’s safety in mind when designing the online platforms children use—something already required of every other industry.
This new bill focuses specifically on the platforms’ design—not their content. Since senators likely do not agree about what content should be online, the bill instead provides a much-needed, common-sense response to new evidence that social media companies knowingly designed their products with features that exploit the still-developing brains of children and cause them harm.
Now, Big Tech is doing everything it can to kill the bill in the House of Representatives. Why? Money. The six most popular social media platforms earned roughly $11 billion in just one year from advertising that targets children, according to a recent study by researchers at Harvard University and Boston Children’s Hospital.
Though Big Tech had told Congress that its platforms’ harmful effects were unavoidable consequences of the medium, we now know this claim is false. Through congressional hearings, leaked internal documents, company whistleblowers, and investigative reporting, the Senate amassed a mountain of evidence that the tech industry knows its products are dangerous and designs them to exploit young people’s incomplete brain development and keep them online as long as possible.
Historically, Big Tech has relied on two tactics to scare Congress into not passing legislation that might touch its corporate billions. It argues either that the legislation violates the 1996 Communications Decency Act or that it violates the First Amendment, often with no basis. However, the Senate anticipated these arguments and, through careful drafting, largely avoided both problems in this bill.
Regarding Section 230 of the Communications Decency Act, which protects internet service providers from liability for content created by third parties, the new bill explicitly states that it neither expands nor contracts the provisions of Section 230.
Regarding the First Amendment, the bill avoids focusing on content. Instead, it imposes modest requirements on specific practices of covered platforms. The Senate expressly seeks to hold platforms responsible for their failure to “use reasonable care” in the design of their products’ features—not for content that sits on the internet.
Read more on Newsweek here.