Sophisticated models trained on biased data can learn discriminatory patterns and make skewed decisions. A new technique aims to keep neural networks from basing their decisions on common biases.
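The item above doesn't detail the researchers' method, but a common baseline for countering skew in training data is to reweight examples so that every group contributes equally to the loss. The sketch below is a hypothetical illustration of that idea, not the technique from the article; all names are made up.

```python
from collections import Counter

def balanced_sample_weights(groups):
    """Per-example weights, inversely proportional to group frequency.

    Illustrative only: a generic reweighting baseline, not the method
    described in the news item.
    """
    counts = Counter(groups)
    n_groups = len(counts)
    n = len(groups)
    # Each group's weights sum to n / n_groups, so all groups
    # contribute equally to a weighted training loss.
    return [n / (n_groups * counts[g]) for g in groups]

# Skewed data: 8 examples from group "a" for every 2 from group "b".
weights = balanced_sample_weights(["a"] * 8 + ["b"] * 2)
print(weights[0], weights[-1])  # 0.625 2.5
```

Majority-group examples get down-weighted (0.625) and minority-group examples up-weighted (2.5), so each group's weights sum to the same total.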
AI-guided image analysis revealed a 2,000-year-old figure etched into the Peruvian desert. Researchers analyzing aerial imagery of Peru found a pattern resembling a three-horned humanoid holding a staff.
Google's AI platform offers a view into the mind of its machines. Its Explainable AI (XAI) tools show which features exerted the most influence on a model's decision, so users can evaluate model performance and potentially mitigate biased results.
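To make the idea of feature influence concrete, here is a minimal sketch of one simple attribution technique, permutation importance: shuffle a feature's column and measure how much accuracy drops. This is a generic illustration under assumed toy data, not Google's implementation.

```python
import random

random.seed(0)

# Toy data: the label depends only on feature 0; feature 1 is pure noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]

def model(x):
    # Stand-in for a trained model: thresholds feature 0.
    return 1 if x[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Drop in accuracy when one feature's column is shuffled."""
    base = accuracy(X, y)
    col = [x[feature] for x in X]
    random.shuffle(col)
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
    return base - accuracy(X_perm, y)

print(permutation_importance(X, y, 0))  # large: the model relies on feature 0
print(permutation_importance(X, y, 1))  # zero: the model ignores feature 1
```

A large drop means the model leaned heavily on that feature; a near-zero drop means it was ignored, which is exactly the kind of signal an explainability tool surfaces.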
Models that summarize documents and answer questions work pretty well with limited source material, but they can slip into incoherence when they draw from a sizeable corpus. Recent work addresses this problem.