
Important Artificial Intelligence Case to Be Argued in the U.S. Supreme Court Today



Oral arguments begin in the U.S. Supreme Court today in Gonzalez v. Google, an important case about artificial intelligence amplification of content on social networks. The lawsuit argues that social media companies should be legally liable for harmful content that their algorithms promote.

Google argues that Congress has already settled the matter with Section 230, which shields online platforms from liability for content posted by their users. The relevant sentence in Section 230 reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Basically, Section 230 says that social media companies like Meta (Facebook and Instagram), Alphabet (Google and YouTube), Twitter, and others are not responsible for the content (text, photos, videos, etc.) that their users post and share on the networks.

Section 230 was written in 1996, at the dawn of the Web, as part of the Communications Decency Act. This was well before social networking and AI algorithms existed.

I think this is a critically important case. I sure do hope the Justices and their staff have been studying AI and its ramifications. Here is a good Washington Post story on the case if you want details.

Content appears in your social feed because of the company’s AI

Here is my take on the debate: the right to free speech does not include a right to AI algorithmic amplification. I wrote about this in a post back in April.

I strongly support the idea of free speech. Early in my career, I worked for Knight-Ridder, at the time one of the largest newspaper companies in the world. Free speech and freedom of the press are causes I’ve focused on my entire career.

Yes, I agree that social networking companies should not be held responsible for the content that users upload to their networks. However, once content is posted, I believe social networking companies have an obligation to understand how their artificial intelligence algorithms disseminate it.

When YouTube chooses to show you a video you might like, either by auto-playing it after another video ends or by surfacing it in a list of recommended videos, that’s not free speech; it’s AI amplification.

When Facebook shows text or video or photos in your personal newsfeed, that’s not free speech; it’s AI amplification.

Yes, if a user chooses to be friends with another user, subscribes to a video channel, or likes a company or a politician, that’s fine. In those cases, content from that person or organization can and should be shared with someone who actively chose to engage with them.

However, I am not okay with social media companies hiding behind a blanket law that allows them to share content in feeds that people did not actively choose to see.

If the YouTube or Facebook AI feeds you COVID vaccine misinformation, QAnon conspiracy theories, or lies about who won an election from accounts, people, or organizations you do not follow, that’s not free speech. It’s their AI technology amplifying content you never chose to see.
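To make the line I’m drawing concrete, here is a minimal sketch in Python. It is not any platform’s actual code; every name in it (Post, build_feed, the toy recommender) is hypothetical. It simply models the two buckets I describe: content you opted into by following an account, and content an algorithm injects because it predicts you will engage with it.

```python
# Hypothetical sketch of the follow-vs-amplify distinction.
# None of these names correspond to any real platform's code.

from dataclasses import dataclass

@dataclass
class Post:
    author: str  # account that posted the content
    text: str

def build_feed(posts, follows, recommender):
    """Split candidate posts into two buckets:
    - chosen: from accounts the user actively follows (distribution
      the user opted into), and
    - amplified: surfaced only because an algorithm predicted the
      user would engage (what this post calls AI amplification).
    """
    chosen = [p for p in posts if p.author in follows]
    amplified = [p for p in posts
                 if p.author not in follows and recommender(p)]
    return chosen, amplified

# Toy example: the "recommender" pushes anything mentioning elections.
posts = [
    Post("friend_a", "vacation photos"),
    Post("unknown_channel", "shocking claims about the election"),
]
chosen, amplified = build_feed(
    posts,
    follows={"friend_a"},
    recommender=lambda p: "election" in p.text,
)
print(len(chosen), len(amplified))  # 1 1
```

The argument is that Section 230 clearly protects the first bucket; the open question before the Court is whether it also protects the second.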

I’m eager to hear what the Justices say on this important issue.
