Bill Gates, Who Could Afford a Private Army of Researchers, Says He Does His Research Using ChatGPT, Which Makes Mistakes Constantly

The seventh-richest man in the world has come out as a major ChatGPT stan, admitting that he uses it for personal research. In a wide-ranging interview with The Verge, Microsoft cofounder Bill Gates explained that he often uses the OpenAI chatbot to learn things.

“You know, I’m often learning about topics, and ChatGPT is an excellent way to get explanations for specific questions,” Gates told reporter Justine Calma. “I’m often writing things, and it’s a huge help in writing.”

This admission is stunning on a few levels, not least because, as a mega-billionaire, the philanthropist has the resources to fund an entire wing of researchers and fact-checkers to locate and contextualize any imaginable information he’d want.

Beyond that obvious context, Gates saying he uses ChatGPT to learn about “topics” — which is, we have to admit, an extremely Zuckerbergian statement — is bizarre given ChatGPT’s propensity for spewing out factual inaccuracies. Known in the AI industry as “hallucinating” — though some experts prefer the term “bullshitting” — this inability to reliably tell the truth spans all chatbots and other large language models despite ample attempts to refine them.

If it were a main character in a book or film, ChatGPT would be considered an unreliable narrator due to its problem with fibbing and fudging — but for some reason, this doesn’t seem to trouble Gates.

Later in the interview, Gates made another telling acknowledgement related to AI. When asked whether he thinks Microsoft should expand its multi-billion-dollar partnership with OpenAI or “invest in its…