What Tech Firms Should Know About California’s Social Media Law


California is leading the charge to monitor and potentially regulate the spread of misinformation and disinformation via social media. In September, Gov. Gavin Newsom signed the Social Media Accountability and Transparency Act into law to bring transparency and accountability to the nation’s largest social media platforms used by tens of millions of Americans.

The legislation is designed to improve the state's understanding of how social media platforms moderate content such as hate speech, misinformation, and foreign political interference. As a result, tech firms face novel reporting requirements, increased costs, and the risk of violations. Here's what they should keep in mind, particularly if other states follow suit with similar legislation.

Act’s Requirements

The act targets the largest social media sites—those that generate more than $100 million in gross annual revenue—which includes the most well-known and visited platforms. It requires that companies publish terms of service containing a specific set of information for each platform they own or operate, and submit a terms of service report to the California attorney general twice a year.

Terms of Service. Social media company terms of service must now clearly disclose the type and nature of behavior permitted on the platform and describe how users can flag activity in violation. The terms of service must also explain what potential actions the platform can take against users or content.

While terms of service have become standard practice, no state or federal law has actually required them until now. In contrast to the historical purpose of such terms—to preserve a company’s rights and remedies—the act requires social media companies to publicly declare whether and how they will take responsibility for content on their platforms.

Reporting Requirements. Starting Jan. 1, 2024, social media companies must submit twice-annual reports to the California attorney general that include: the company's terms of service; how the company defines hate speech, extremism, harassment, disinformation, and foreign political interference; how the company responds to violations of its terms of service; how automated moderation systems are used to enforce terms of service; and when human review is involved in enforcement.

Social media companies must also provide statistical data, including the total number of posts that were flagged, how many users viewed flagged posts before the company took action, and how many appeals and reversals occurred. Finally, companies must disaggregate the statistical data into categories, including the type of content, the type of media, how the content was flagged, and what action was taken on the content.
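The act does not prescribe a data format for these reports, but the disaggregation requirement suggests companies will need to capture each flagged item along several dimensions at once. The sketch below illustrates one way that data could be modeled; every field name, category value, and type is a hypothetical assumption for illustration, not anything drawn from the statute.

```python
from dataclasses import dataclass, field

# Hypothetical schema for the statistical portion of a semiannual report.
# The act does not prescribe a format; every name here is illustrative.

@dataclass
class FlaggedContentStats:
    category: str            # e.g., "hate_speech", "disinformation"
    media_type: str          # e.g., "text", "image", "video"
    flag_source: str         # e.g., "user_report", "automated_system"
    action_taken: str        # e.g., "removed", "restored", "no_action"
    posts_flagged: int = 0
    views_before_action: int = 0
    appeals: int = 0
    reversals: int = 0

@dataclass
class SemiannualReport:
    terms_of_service: str               # full text or URL of the terms
    definitions: dict                   # e.g., {"hate speech": "..."}
    stats: list = field(default_factory=list)

    def total_flagged(self) -> int:
        # Total flagged posts across all disaggregated rows.
        return sum(row.posts_flagged for row in self.stats)
```

Each FlaggedContentStats row represents one cell of the disaggregated breakdown, so the same totals can be reported both in aggregate and by content type, media type, flagging method, and action taken.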

The act does not explain how the attorney general will use the reports, or what social media companies should do with the information. For now, California's initial goal is data collection and education, but we can expect to see the data used to further regulate how social media companies manage information in the future.

Impact on Companies

The act will increase the cost of doing business for social media companies, which will need to adopt processes to capture data and generate reports. Companies must also establish "commitments on response and resolution time" and then follow through quickly enough to minimize the potential spread and impact of flagged content, and develop automated processes to quarantine flagged content until a human or a separate automated process can review it.
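As a minimal sketch of what such a quarantine step could look like, the code below hides flagged content pending review and records the timestamps a company would need to measure its response and resolution times. All function names, fields, and the 24-hour target are illustrative assumptions, not requirements of the act.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical quarantine workflow. Names, fields, and the 24-hour
# target are illustrative assumptions, not requirements of the act.

RESOLUTION_TARGET = timedelta(hours=24)   # a company-chosen commitment

def quarantine_flagged_post(post: dict, flag_source: str) -> dict:
    """Hide a flagged post and open a review record with timestamps."""
    now = datetime.now(timezone.utc)
    post["visible"] = False
    post["review"] = {
        "flag_source": flag_source,       # "user_report" or "automated"
        "flagged_at": now,
        "due_by": now + RESOLUTION_TARGET,
        "status": "pending_review",
    }
    return post

def resolve_review(post: dict, decision: str) -> dict:
    """Record the outcome and whether resolution met the commitment."""
    review = post["review"]
    review["status"] = decision           # "restored" or "removed"
    review["resolved_at"] = datetime.now(timezone.utc)
    review["within_target"] = review["resolved_at"] <= review["due_by"]
    post["visible"] = decision == "restored"
    return post
```

Capturing the within_target flag at resolution time is one way a company could later demonstrate that it honored the response and resolution commitments published in its terms of service.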

Any changes to processes must then be accurately reflected in the terms of service, which will require dedicated resources to ensure that internal policies align with both the requirements of the law and the commitments made in the terms of service.

Far-Reaching Impact

The act currently targets the largest and most influential social media companies, but we can expect this scope to expand over time. For example, the California Online Privacy Protection Act of 2003, the first law in the US requiring websites to post privacy policies, quickly evolved with the passage of the Shine the Light Act the following year. Shine the Light gives Californians the right to know what types of personal information businesses collect about them and with whom they share that information.

Both laws paved the way for the California Consumer Privacy Act, which expanded the privacy rights of California consumers. Then, within months of the CCPA’s effective date, Californians voted to amend the CCPA, leading to the California Privacy Rights Act of 2020, which takes effect on Jan. 1, 2023. Between California’s legislature and politically active population, this new act will likely grow in scope over time to limit disinformation and increase transparency for social media companies.

Enforcement

Social media companies historically have been left to their own devices to combat troublesome content. But now the California attorney general and city attorneys (for cities with more than 750,000 people) can force companies to, at a minimum, share how they intend to manage content moderation. Violations can result in fines of up to $15,000 per day for failing to post terms of service, failing to timely submit a report to the California attorney general, or materially misrepresenting (or omitting) information required in the report.

No one can predict the act’s effects on the spread of disinformation, proliferation of hate speech, and growth of online harassment. But social media companies still must prepare themselves for its January 2024 enforcement date. With so few targets, the California attorney general is sure to be watching and ready to act, threatening non-compliant social media companies with tens of thousands of dollars in penalties per day.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Jake Bernstein is a partner at K&L Gates and a member of the Technology Transactions and Sourcing and Data Protection, Privacy, and Security practice groups. He is a Certified Information Systems Security Professional licensed by (ISC)².

Andrea Bland is counsel at K&L Gates and a member of the Technology Transactions and Sourcing practice group.

Whitney McCollum is a partner at K&L Gates and a member of the Technology Transactions and Sourcing practice group. She draws upon her global in-house experience to provide companies with cutting-edge data and technology risk management advice.

Associate Eric Vicente Flores contributed to this article.


