In the vast number of fields where generative AI has been tested, law is perhaps its most glaring point of failure. Tools like OpenAI’s ChatGPT have gotten lawyers sanctioned and experts publicly embarrassed, producing briefs based on made-up cases and nonexistent research citations. So when my colleague Kylie Robison got access to ChatGPT’s new “deep research” feature, my task was clear: make this purportedly superpowerful tool write about a law humans constantly get wrong.
“Compile a list of federal court and Supreme Court rulings from the last five years related to Section 230 of the Communications Decency Act,” I asked Kylie to tell it. “Summarize any significant developments in how judges have interpreted the law.”
I was asking ChatGPT to give me a rundown on the state of what are commonly called “the twenty-six words that created the internet,” a constantly evolving topic I follow at The Verge. The good news: ChatGPT appropriately selected and accurately summarized a set of recent court rulings, all of which exist. The so-so news: it missed some broader points that a competent human expert might acknowledge. The bad news: it ignored a full year’s worth of legal decisions, which, unf …