The latest in AI from Mar. 7 to Mar. 13, 2024

3 min read

This week's top AI news and research stories featured Anthropic's new multimodal models, India's warning to devs, Google's generative news tools, and an agent that learns language by exploration. But first:

European Parliament approves AI Act
Representatives in the European Union overwhelmingly approved a sweeping set of regulations governing artificial intelligence models and applications. The legislation takes a risk-based approach, subjecting higher-risk applications like medical uses or critical infrastructure to greater scrutiny than low-risk applications like spam filters. Some applications, like predictive policing and police facial recognition, are banned in most instances. The Act still needs approval from the EU's 27 member states, but this is largely seen as a formality. (Read about the AI Act at the Associated Press)

AMD faces regulatory hurdle with US for China-specific AI chip
AMD reportedly encountered a regulatory obstacle from US authorities in its effort to sell an AI chip tailored for the Chinese market. According to Bloomberg, the chip was designed to comply with US export restrictions by offering lower performance than AMD's premium offerings, but the Commerce Department declined to clear it, deeming its capabilities still too advanced. AMD will now need to secure an export license to proceed. (Read the news at CNBC)

Leading AI researchers urge tech giants to open up for independent evaluations
More than 100 AI researchers signed an open letter demanding that firms like OpenAI and Meta permit independent evaluations of their technologies by academics, journalists, and other investigators. The signatories argue that the companies' stringent anti-misuse measures are inadvertently stifling critical safety research. The letter calls for a "safe harbor" that would protect researchers who audit AI systems for potential risks and biases from legal repercussions or account bans. (Read the story at The Washington Post)

Edelman Trust Barometer reveals deep concerns over AI management and innovation
The 2024 Edelman Trust Barometer highlights growing concern among citizens in 28 markets about how innovation, particularly AI, is being managed. Respondents said innovation is poorly managed by a nearly two-to-one margin, stoking fears about job losses, privacy violations, and lifestyle changes. This discontent is feeding a wider "populist fire" already fueled by distrust in government, the dispersion of authority, and trust divides. (Read the full report at Edelman)

Patronus AI introduces a copyright detection API for large language models
Patronus AI's CopyrightCatcher aims to mitigate the risk of copyright infringement in language models' outputs. Recent adversarial tests by Patronus AI researchers found that leading LLMs, including OpenAI's GPT-4 and Meta's Llama-2-70b-chat, often generate copyrighted content, with GPT-4 producing such content in 44% of cases. CopyrightCatcher addresses these legal and reputational risks by detecting verbatim reproductions of copyrighted texts in LLM outputs. (Learn more at Patronus AI's blog)
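Patronus AI hasn't published CopyrightCatcher's internals, but the underlying idea of flagging verbatim reproduction can be sketched as a word n-gram overlap check. The sketch below is an illustrative assumption: the function names, the n-gram window, and the 20% threshold are invented for this example and are not the product's API.

```python
# Minimal sketch of verbatim-reproduction detection via word n-gram overlap.
# Illustrative only; not Patronus AI's actual CopyrightCatcher implementation.

def ngrams(text: str, n: int = 8) -> set:
    """Return the set of word n-grams appearing in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(output: str, reference: str, n: int = 8) -> float:
    """Fraction of the output's n-grams that appear verbatim in the reference."""
    out_grams = ngrams(output, n)
    if not out_grams:
        return 0.0
    return len(out_grams & ngrams(reference, n)) / len(out_grams)

# Hypothetical usage: flag output when over 20% of its n-grams match a protected text.
reference = "It was the best of times, it was the worst of times, it was the age of wisdom"
output = "The model wrote: it was the best of times, it was the worst of times indeed"
if verbatim_overlap(output, reference, n=4) > 0.2:
    print("Potential verbatim reproduction detected")
```

A production-grade detector would presumably match against a large indexed corpus of copyrighted works rather than a single reference string, but the overlap test above captures the basic signal.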

Microsoft engineer raises alarm over AI tool's creation of inappropriate content
Shane Jones, a Microsoft engineer, voiced concerns over Copilot Designer, the company's AI image generation tool. In his tests of the software, Jones found it generating violent, sexual, and copyright-infringing images, including disturbing depictions related to sensitive topics like abortion rights, underage substance abuse, and sexualized violence. (Read the story at CNBC)

AI transforms old masters art authentication, but faces skepticism
The Swiss company Art Recognition has developed a system that it claims offers precise, objective evaluations of artwork authenticity, with more than 500 completed evaluations, including a contested 1889 self-portrait by Vincent van Gogh. Despite these successes, the technology faces skepticism from art professionals, who question whether AI can account for factors like varnish layers, wear, and damage, and who point to its dependence on the quality of input data. (Read more at Financial Times)

Former Google engineer charged with transferring AI secrets to China
Linwei Ding, a former Google software engineer, has been indicted for allegedly attempting to transfer sensitive AI technology to a Beijing-based company. Arrested in California, Ding is accused of uploading 500 files containing trade secrets from Google's AI supercomputing systems to the cloud. U.S. authorities are treating the case as a significant threat to national and economic security, emphasizing the Justice Department's commitment to safeguarding American technological advancements. (Read the report at The New York Times)
