The latest in AI from November 23 to November 29, 2023

Published
Nov 29, 2023
Reading time
4 min read
AI-generated illustration of scientists reading cuneiform tablets

This week's top AI news and research stories featured doctors' thoughts on AI medical devices, Microsoft and Siemens’ Industrial Copilot, all about Giskard, and a language model that speaks robot. But first:

Anthropic introduces Claude 2.1 
The update brings a 200,000-token context window, and a 2x decrease in hallucinations. The beta tool use feature expands Claude's interoperability by connecting with users' and developers’ existing processes and APIs. (Anthropic)

Amazon Web Services (AWS) and Nvidia expand partnership to offer improved supercomputing infrastructure
AWS will become the first cloud provider to offer Nvidia GH200 Grace Hopper Superchips, equipped with multi-node NVLink technology. The collaboration also introduces Project Ceiba, a GPU-powered supercomputer with unprecedented processing capabilities. aiming to deliver state-of-the-art generative AI innovations across diverse industries. (Amazon)

The controversial rumors around OpenAI's math-solving model
Rumors swirl around OpenAI's recent upheaval as reports point to development of a new AI model, Q* (pronounced Q-star). Named for its prowess in solving grade-school math problems, the potential breakthrough has prompted speculation about advancements towards artificial general intelligence (AGI). The episode echoes past AGI hype cycles, raising questions about tech industry self-regulation and potential impact on pending AI legislation. (MIT Technology Review and Reuters)

AI-powered method unlocks ancient cuneiform tablets' secrets
Researchers developed a system that can automatically decipher complex cuneiform texts on ancient tablets using 3D models of them, instead of traditional methods using photos. With an estimated one million cuneiform tablets worldwide, some over 5,000 years old, the method’s potential extends beyond known languages, and offers a glimpse into previously inaccessible historical material. (Science Daily)

Stability AI introduces Stable Video Diffusion
The model for generative video builds upon the success of the image model Stable Diffusion. The code is available on GitHub, with model weights accessible on the Hugging Face page. The release, comprising two image-to-video models, shows broad adaptability for downstream tasks, including multi-view synthesis from a single image. (Stability AI

Federal Trade Commission (FTC) simplifies process to investigate AI companies
The FTC greenlit the use of compulsory measures for investigations into products and services using or claiming to be produced with AI. The 3-0 vote emphasizes the Commission's proactive approach in addressing emerging issues in technology. Lead FTC staffers Nadine Samter and Ben Halpern-Meekin will oversee the implementation of this resolution in the Northwest Region office. (FTC)

AI enhances power grid efficiency with four key innovations
Fueled by a recent $3 billion grant from the US Department of Energy, the power grid industry is embracing AI. Key applications include a model for faster grid planning, new software tailoring energy usage, programs managing electric vehicle demand, and AI predicting grid failures due to extreme weather. (MIT Technology Review)

Amazon announces Q, an AI assistant designed for work environments
Tailored to individual businesses, Amazon Q offers quick, relevant answers, content generation, and problem-solving capabilities informed by company data. Prioritizing security and privacy, Amazon Q personalizes interactions based on existing identities and permissions. Companies including Accenture, BMW, and Gilead are among the early adopters. (Amazon)

Sports Illustrated exposed for using AI-generated content and authors
The magazine faces scrutiny after allegations surfaced that it published articles attributed to AI-generated authors with fabricated biographies and headshots. Following inquiries, Sports Illustrated removed the content without a clear explanation. The Arena Group, the magazine's publisher, later attributed the content to an external company, AdVon Commerce, claiming it was human-generated. (Futurism)

Global coalition introduced a non-binding pact to ensure AI safety
The international agreement, signed by 18 countries, including the U.S., emphasizes the need for AI systems to be "secure by design." The 20-page document encourages companies to prioritize safety measures during the development and deployment of AI. (The Guardian)

ResearchDeepmind’s GNoME discovers 2.2 million new crystals using deep learning 
Google’s AI research lab used a tool called Graph Networks for Materials Exploration (GNoME), to identify 2.2 million new crystals, including 380,000 stable materials with promising applications in technology. The predicted stable materials will be contributed to the Materials Project database, fostering collaborative research. (Google Deepmind)

Nations grapple with ethical dilemmas as AI-controlled killer drones inch closer to reality
The emergence of autonomous killer drones prompts international debate over legal constraints, with the U.S., China, and major powers hesitant to endorse binding rules. Concerns about handing life-and-death decisions to AI-controlled drones have led some countries to advocate for legally binding regulations at the United Nations, but disagreements among key players have stalled progress. 

European Central Bank research finds that AI currently boosts jobs but threatens wages
The study focused on 16 European countries, indicating an increased employment share in AI-exposed sectors. Notably, low and medium-skill jobs remained largely unaffected, while highly-skilled positions experienced the most significant growth. However, the research acknowledged potential "neutral to slightly negative impacts" on earnings, with concerns about future developments in AI technologies and their broader implications for employment and wage dynamics. (Reuters)

AI-generated speaker scandal prompts Microsoft and Amazon executives to withdraw from conference
Top executives from Microsoft and Amazon withdrew from the DevTernity software conference following revelations that at least one featured female speaker was artificially generated. The disclosure prompted other scheduled speakers to abandon the virtual conference. Microsoft's Scott Hanselman expressed disappointment, emphasizing the importance of diverse and genuine representation at tech conferences. (AP News)

ResearchResearchers uncover vulnerability in ChatGPT, expose training data extraction potential
The research team successfully extracted several megabytes of ChatGPT's training data by employing a simple attack method. The findings raise concerns about the model's memorization of sensitive information and challenge the adequacy of current testing methodologies. (GitHub)

OpenAI not expected to give Microsoft or other investors voting seats on new nine-member board
A source told The Information that despite revamping its slate of directors, OpenAI’s new board is unlikely to change its nonprofit status, and will maintain rules barring directors from having a major financial interest in the company. (The Information)

Share

Subscribe to Data Points

Your accelerated guide to AI news and research