The LLaMA is out of the bag. Should we expect a tidal wave of disinformation?

Ten days ago, Meta announced a large language model called LLaMA. The company said the goal was to foster research. It didn’t release the model publicly but allowed researchers to request access through a form. A week later, however, someone leaked the model. This is significant because it makes LLaMA the most capable LLM publicly available. It is reportedly competitive with LaMDA, the model underlying Google’s Bard. Many experts worry that we are about to see an explosion of misuse, such as scams and disinformation.

But let’s back up. Models that are only slightly less capable, such as GPT-J, have been available for years. And we’ve been told for a while now that a wave of malicious use is coming. Yet there don’t seem to be any documented cases of such misuse. Not one, except for a research study.

One possibility is that malicious use has been happening all along but hasn’t been publicly reported. That seems unlikely, though, because we know of numerous examples of non-malicious misuse: students cheating on homework, a Q&A site and a sci-fi magazine being overrun by bot-generated submissions, CNET publishing error-filled investment advice, a university offending staff and students by sending a bot-generated condolence message after a shooting, and, of course, search engine spam.

Malicious use is intended to harm the recipient in some way, while non-malicious misuse is when someone is just trying to save time or make a buck. Sometimes the categorization may be unclear, but all the examples above seem obviously non-malicious. The difference…
