Google’s Gemini AI may be less accurate after new update

A recent controversy highlights the challenges of maintaining human oversight in AI systems, particularly in Google's Gemini project. Human involvement is often touted as a safeguard against AI errors, with tasks like coding, dataset management, and output evaluation serving as vital components. However, these safeguards are only as strong as the policies guiding them. A new report raises concerns about Google's approach, specifically its use of outsourced labor through companies like GlobalLogic.

Google's Gemini raises accuracy concerns

Historically, GlobalLogic reviewers were instructed to skip prompts requiring expertise they lacked, such as coding or mathematics. This policy seemed reasonable, aiming to prevent non-experts from inadvertently influencing AI evaluation. However, a recent shift directs reviewers to no longer skip such prompts, even if they lack the requisite domain knowledge. Instead, reviewers are asked to rate the parts of a prompt they do understand while noting the limits of their expertise.

This change has sparked concern. While evaluating AI responses involves more than just technical accuracy (style, format, and relevance are also critical), the new guidelines appear to lower the bar for quality control. Critics argue this could undermine the integrity of AI oversight, and some reviewers have reportedly voiced similar worries in internal discussions.

Google spokesperson Shira McNamara responded to TechCrunch about the situation, emphasizing that raters contribute across various tasks. She…
