Job Cuts at Alexa and Microsoft's AI Accelerators; Plus, "Hallucinate" Is Word of the Year

Published Nov 22, 2023

This week's top AI news and research stories featured OpenAI's leadership turmoil, the AI-filled Argentinian elections, a cloud-computing company that will offer GPUs at competitive prices, and new research that aims to accelerate the transformer architecture. But first:

Cambridge Dictionary declares 'Hallucinate' Word of the Year 2023, citing its new AI sense
The Cambridge Dictionary expanded the definition of the word to include false information produced by large language models. The acknowledgment of AI 'hallucinations' underscores the evolving vocabulary surrounding the capabilities of language models. (University of Cambridge)

'Make It Real' prototype transforms drawings into functional software
Tldraw, a collaborative digital whiteboard, launched a prototype feature that lets users turn vector drawings into functional software. A live demo of the GPT-4V-powered tool is available to the public. (Ars Technica)

Text-to-image AI models vulnerable to 'SneakyPrompt' jailbreak that generates disturbing content
Prominent text-to-image AI models, including Stable Diffusion and DALL-E 2, are vulnerable to a new jailbreaking technique. Security researchers revealed "SneakyPrompt," a method that uses reinforcement learning to craft seemingly nonsensical prompts that the models recognize as hidden requests for inappropriate images. Stability AI and OpenAI are already collaborating with the researchers to strengthen defenses against such attacks. (MIT Technology Review)

Amazon announces job cuts in Alexa division, shifting focus to generative AI
Daniel Rausch, Vice President of Alexa and Fire TV, said in an internal memo that the cuts are intended to shift resources toward generative AI. The company recently previewed a generative AI-based Alexa feature called "Let’s Chat," which promises longer, more context-aware conversations with the voice assistant. (GeekWire)

Google launches Project Open Se Cura, an open source framework for secure and efficient AI
The framework emphasizes co-design and development, focusing on security, transparency, and scalability. Google released the code base, including design tools and IP libraries, to foster open development and transparency in AI system design. (Google Open Source)

Billionaire-backed AI research lab Kyutai aims for open science with $330 million budget
French billionaire Xavier Niel unveiled details about Kyutai, a newly established AI research lab in Paris with plans to release not only open source models but also training source code and data. French President Emmanuel Macron supports the initiative, emphasizing the need to regulate AI use cases rather than model makers. (TechCrunch)

GPT-4 outperforms humans on lawyer ethics exam
The model surpassed the average scores of human test-takers on the Multistate Professional Responsibility Exam (MPRE), a legal ethics test required by almost every U.S. state for practicing law. GPT-4 achieved a 74% accuracy rate on the simulated exam, outperforming the estimated 68% average among humans. The study, conducted by LegalOn Technologies, suggests that AI could play a role in assisting lawyers with ethical compliance in the future. (Reuters)

Google DeepMind and YouTube present Lyria, an advanced AI music generation model
Lyria, designed to generate high-quality music with instrumentals and vocals, aims to address the challenges of maintaining musical continuity across various elements like beats, individual notes, and vocal harmonies. The announcement includes two AI experiments: "Dream Track," an experiment within YouTube Shorts allowing creators to connect with fans through AI-generated soundtracks featuring global artists; and "Music AI Tools," a set of tools developed with artists, songwriters, and producers to enhance their creative processes. (Google DeepMind)

Microsoft introduces custom-designed chips for Azure
The Azure Maia AI Accelerator, built for AI tasks and generative AI, and the Azure Cobalt CPU, an Arm-based processor optimized for general-purpose compute workloads, will be integrated into custom server boards and racks. The chips will work in tandem with software to maximize performance, flexibility, and efficiency. (Microsoft)

Microsoft and Google collaborate on OneTable project to address data lake challenges
The open source project seeks to create a layer on top of existing data lake table formats like Apache Iceberg, Apache Hudi, and Delta Lake, enabling seamless conversions and access across these formats. The project promotes interoperability, preventing vendor lock-in and facilitating compatibility for data analytics and AI workloads. (VentureBeat)

Microsoft teams up with Be My Eyes to offer GPT-4-powered support for visually impaired users
The tool enables visually impaired users to independently resolve technical issues and perform tasks without human agent assistance. During tests, only 10 percent of users opted to speak with a human representative after interacting with the AI tool. (The Verge)

OpenAI temporarily halts new ChatGPT Plus subscriptions and upgrades
Overwhelming demand led to capacity challenges, prompting OpenAI to pause new sign-ups and upgrades to ensure a high-quality experience for existing users. The move follows a series of outages related to high demand and DDoS attacks on OpenAI services, impacting ChatGPT and the API. (Search Engine Journal)

Common Sense Media flags generative AI models as unsafe for kids
The organization introduced "nutrition labels" for AI products, evaluating them based on principles such as trust, safety, privacy, transparency, accountability, learning, fairness, social connections, and benefits to society. The generative AI category received lower ratings due to biases and concerns related to objectification and sexualization. (TechCrunch)
