Sophisticated models trained on biased data can learn discriminatory patterns, which leads to skewed decisions. A new solution aims to prevent neural networks from making decisions based on common biases.
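The piece doesn't spell out the mechanism, but one widely used approach to this general problem is adversarial debiasing: a second network tries to recover a protected attribute from the main model's output, and the main model is penalized whenever it succeeds. The sketch below is a minimal, hypothetical PyTorch illustration of that idea only, not the specific solution covered here; the layer sizes, the 0.5 penalty weight, and the synthetic data are all assumptions.

```python
import torch
import torch.nn as nn

# Predictor learns the main task; adversary tries to recover a protected
# attribute from the predictor's output (illustrative architecture only).
predictor = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(64, 10)                    # input features (synthetic)
y = torch.randint(0, 2, (64, 1)).float()   # task label (synthetic)
z = torch.randint(0, 2, (64, 1)).float()   # protected attribute (synthetic)

for step in range(200):
    # 1) Train the adversary to predict the protected attribute.
    opt_adv.zero_grad()
    adv_loss = bce(adversary(predictor(x).detach()), z)
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the predictor on the task while fooling the adversary, so its
    #    outputs carry less information about the protected attribute.
    opt_pred.zero_grad()
    pred_out = predictor(x)
    loss = bce(pred_out, y) - 0.5 * bce(adversary(pred_out), z)
    loss.backward()
    opt_pred.step()
```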
AI-guided image analysis revealed a 2,000-year-old figure etched into the Peruvian desert. Researchers analyzing aerial imagery of Peru found a pattern that resembles a three-horned humanoid holding a staff.
Google's AI platform offers a view into the mind of its machines. Explainable AI (XAI) tools show which features exerted the most influence on a model's decision, so users can evaluate model performance and potentially mitigate biased results.
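As a rough illustration (not Google's actual implementation), feature influence can be estimated with input gradients: the larger the gradient of a prediction with respect to a feature, the more that feature swayed the decision. Here is a minimal PyTorch sketch using a toy model and made-up feature names.

```python
import torch
import torch.nn as nn

# Toy model: predicts a single score from three input features (illustrative).
model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1))

features = torch.tensor([[0.7, 0.2, 0.9]], requires_grad=True)
score = model(features).sum()
score.backward()

# The gradient magnitude of each input approximates its influence
# on this particular prediction.
attributions = features.grad.abs().squeeze()
for name, value in zip(["feature_a", "feature_b", "feature_c"], attributions):
    print(f"{name}: {value.item():.4f}")
```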
Models that summarize documents and answer questions work pretty well with limited source material, but they can slip into incoherence when they draw from a sizeable corpus. Recent work addresses this problem.
The Batch: Google AI Explains Itself, Neural Net Fights Bias, AI Demoralizes Champions, Solar Power Heats Up
Recently I wrote about major reasons why AI projects fail, such as small data, robustness, and change management. Given that some AI systems don't work, users and customers sometimes rightly wonder whether they should trust an AI system.