The latest in AI from Mar. 14 to Mar. 20, 2024

This week's top AI news and research stories featured conversational robots, security risks in Hugging Face’s platform, the use of deepfakes in India's 2024 elections, Google's generative news tools, and a cost-saving method that calls pretrained large language models (LLMs) sequentially, from least to most expensive, and stops when one provides a satisfactory answer; a minimal sketch of that cascade appears below.
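In code, the cascade idea fits in a few lines. This is a generic illustration rather than the paper's implementation: `call_model` and `is_satisfactory` are hypothetical stand-ins for real API clients and a learned answer scorer.

```python
# LLM cascade sketch: try models from cheapest to most expensive and
# return the first answer that passes a quality check.
MODELS_BY_COST = ["small-cheap-model", "mid-size-model", "large-expensive-model"]

def call_model(model_name: str, prompt: str) -> str:
    """Hypothetical stand-in for an API call to the named model."""
    raise NotImplementedError

def is_satisfactory(answer: str) -> bool:
    """Hypothetical stand-in for a learned scorer that judges answer quality."""
    raise NotImplementedError

def cascade(prompt: str) -> str:
    answer = ""
    for model_name in MODELS_BY_COST:
        answer = call_model(model_name, prompt)
        if is_satisfactory(answer):
            return answer  # stop early and skip the more expensive models
    return answer  # fall back to the most expensive model's output
```

But first: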

Devin, an AI software engineer backed by tech industry heavyweights
AI startup Cognition launched Devin. Unlike many coding assistants, Devin manages complete development projects, from coding to bug fixing and execution. The tool, currently available only to a handful of users, handles a wide array of tasks and outperforms earlier systems on the SWE-bench software engineering benchmark. (Read more at VentureBeat)

Midjourney bans Stability AI employees over image-scraping incident
Midjourney indefinitely banned all employees of its rival Stability AI after detecting botnet-like activities aimed at scraping image and prompt pairs from its service. This move followed a 24-hour service outage attributed to actions by a Stability AI employee, raising concerns about ethical practices in AI data collection. Stability AI's CEO, Emad Mostaque, countered claims of intentional scraping, suggesting the activity was for a personal project and did not involve image data. (Find more details at Ars Technica)

AI enhances hockey analytics
Researchers at the University of Waterloo and Stathletes developed an AI tool that improves the speed and accuracy of tracking and analyzing data from professional hockey games. The tool uses deep learning to automatically identify players and their movements, overcoming the challenges posed by the sport's fast pace and non-linear player motion. It reaches 94.5 percent accuracy in player tracking, 97 percent in team identification, and 83 percent in individual player recognition; a generic sketch of this kind of pipeline follows. (Read the article at Science Daily)
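As a rough illustration of tracking-by-detection (not the Waterloo/Stathletes system), the sketch below pairs an off-the-shelf detector with a tracker using the open source Ultralytics library; the video file name is a placeholder.

```python
# Tracking-by-detection sketch: a detector finds players frame by frame,
# and a tracker links the detections into per-player trajectories.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # generic pretrained detector; a hockey system
                            # would fine-tune on annotated rink footage

# persist=True keeps track IDs consistent across frames
for result in model.track(source="hockey_clip.mp4", persist=True, stream=True):
    for box in result.boxes:
        if box.id is not None:  # box.id is the tracker-assigned ID
            print(f"track {int(box.id)}: bbox {box.xyxy.tolist()}")
```

A full system adds the steps the article mentions: classifying each track's team and recognizing individual players (for example, from jersey appearance and numbers).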

Five Pulitzer finalists used generative technology in their work
For the first time, the Pulitzer Prizes, which honor excellence in journalism and the arts, disclosed that five of this year's finalists incorporated AI technology in their submissions. The Pulitzer Board now requires entrants to declare their use of AI, reflecting an interest in the capabilities and ethical considerations of AI in journalism. (Read the news at NiemanLab)

Elon Musk makes Grok open source amid legal battle with OpenAI
xAI will open source its ChatGPT rival, Grok. Musk, who has expressed concerns about the profit-driven use of technology by large corporations, initiated legal action against OpenAI for veering away from its non-profit origins to pursue a profit model. Making Grok open source is seen as a significant step toward democratizing AI development and usage, albeit amid ongoing debate about the risks and benefits of such openness. (Read the story at Reuters)

Answer.AI introduces open source system for home-based 70B model training
The project enables enthusiasts and researchers to train a 70 billion parameter language model on standard desktop computers equipped with dual gaming GPUs, such as RTX 3090 or 4090. This is achieved through the combination of Fully Sharded Data Parallelism (FSDP) and Quantized Low-Rank Adaptation (QLoRA), helping democratize access to large-scale AI model training. Collaborating with experts from the University of Washington and Hugging Face, Answer.AI's initiative aims to empower the open source community, allowing even small labs to explore big AI models. (Explore all the details at Answer.AI’s blog)
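The QLoRA half of that recipe can be sketched with standard open source tooling: quantize the frozen base weights to 4 bits and train small low-rank adapters on top. This is a conceptual illustration, not Answer.AI's code; their contribution was the custom engineering needed to make these quantized weights shard correctly under FSDP. The model ID is an example.

```python
# QLoRA sketch: 4-bit quantized base model plus trainable LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store frozen weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # but compute in bfloat16
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",            # example 70B checkpoint
    quantization_config=bnb_config,
)

lora_config = LoraConfig(
    r=16, lora_alpha=32,
    target_modules=["q_proj", "v_proj"],    # adapt only the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # adapters are the only trainable weights
model.print_trainable_parameters()
```

FSDP then shards the model's parameters, gradients, and optimizer state across the two gaming GPUs, so neither card has to hold the whole model at once.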

Sailor, a suite of open language models designed for the South-East Asian (SEA) region
Supporting languages like Indonesian, Thai, Vietnamese, Malay, and Lao, Sailor builds on the Qwen 1.5 model family and offers models ranging from 0.5B to 7B parameters, aimed at enhancing text understanding and generation within SEA's diverse linguistic context; a minimal loading sketch follows. (Read more at Sailor’s GitHub)
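If the checkpoints are published on Hugging Face, loading one with the transformers library should look roughly like this; the model ID below is an assumption, so check Sailor's GitHub for the exact names.

```python
# Sketch: load a Sailor checkpoint and generate a continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sail/Sailor-7B"  # assumed Hugging Face ID; verify on the repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Indonesian prompt: "The capital of Indonesia is"
inputs = tokenizer("Ibu kota Indonesia adalah", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```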

AI-generated food images are more appetizing than real photos, study shows
During Global Nutrition and Hydration Week 2024, researchers unveiled a study indicating that consumers prefer AI-generated food images over actual food photographs, particularly when unaware of the images' origins. According to findings published in Food Quality and Preference, this preference stems from AI's ability to optimize visual factors like symmetry, shape, and lighting, making food appear more appealing. The study involved 297 participants evaluating a variety of food images, who showed a significant bias toward AI-generated visuals when the creation method was undisclosed. (Read more at Science Daily)

Study exposes ASCII art as weakness for AI safety measures
Security researchers introduced ArtPrompt, a technique leveraging ASCII art to circumvent safety mechanisms in LLMs. While safety efforts have largely concentrated on semantic understanding, this study highlights a critical oversight: the inability of LLMs to properly interpret ASCII art, a text-based visual art form common in digital forums. The research evaluates how state-of-the-art LLMs, including GPT-3.5, GPT-4, Gemini, Claude, and Llama 2, respond to ASCII art, and finds a substantial gap in their defenses: ArtPrompt successfully manipulated all five models into exhibiting unintended behaviors. A benign illustration of the underlying blind spot follows. (Read the paper at arXiv)
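To see why token-based models struggle here, render any word as ASCII art: the visual word survives, but the token sequence no longer contains it. The snippet below is a benign illustration using the open source pyfiglet library, not a reproduction of the attack.

```python
# Render a word as ASCII art; filters keyed to the plain string will not
# see it in the resulting grid of slashes, pipes, and underscores.
import pyfiglet  # pip install pyfiglet

print(pyfiglet.figlet_format("SAFE", font="standard"))
```

ArtPrompt exploits exactly this gap: masking a sensitive word as ASCII art slips it past safety checks that operate on the plain text.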

Hong Kong's Centre for AI and Robotics (CAIR) introduces AI tool for advanced brain surgery assistance
CARES Copilot 1.0 is a model designed to aid neurosurgeons in complex brain surgeries. The tool aims to enhance clinical diagnosis and decision-making by providing quick access to a vast range of medical references. Having been tested in hospitals across Hong Kong and mainland China, CARES Copilot 1.0 has shown an accuracy rate of up to 95 percent in generating crucial information from academic sources. (Learn more at South China Morning Post)

Nvidia defends NeMo amid copyright infringement claims 
Nvidia asserted that NeMo, a framework for developing generative AI applications, adheres to copyright laws following a lawsuit from authors Abdi Nazemian, Brian Keene, and Stewart O’Nan. The authors allege that Nvidia unlawfully used their copyrighted books to train NeMo's models without permission, seeking damages and profit restitution. Nvidia maintains that NeMo was developed in full respect of copyright laws, as tensions between AI development and intellectual property rights continue to escalate. (Full story at The Wall Street Journal)

Amazon announces more generative AI features for product listing creation
The new capabilities allow partners to transform existing product pages from external websites into tailored, high-quality listings on Amazon with minimal effort. By providing a URL or sparse text descriptions, sellers can generate product titles, descriptions, and key attributes. (Get all the details at Amazon’s blog)
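The underlying workflow is straightforward to illustrate. The sketch below is a generic URL-to-listing pipeline, not Amazon's actual seller API; `fetch_page_text` and `llm` are hypothetical stand-ins for a page scraper and a language-model call.

```python
# Generic sketch: turn an external product page into structured listing fields.
import json

def fetch_page_text(url: str) -> str:
    """Hypothetical: download the external product page and strip it to text."""
    raise NotImplementedError

def llm(prompt: str) -> str:
    """Hypothetical: call a language model and return its text response."""
    raise NotImplementedError

def generate_listing(url: str) -> dict:
    page_text = fetch_page_text(url)
    prompt = (
        "From the following product page, write a JSON object with keys "
        "'title', 'description', and 'attributes' for a marketplace listing:\n\n"
        + page_text
    )
    return json.loads(llm(prompt))
```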

OpenAI forms licensing agreements with Le Monde and Prisa 
These partnerships, part of OpenAI's global strategy to collaborate with media entities, are designed to support journalism through AI technology while providing ChatGPT users with interactive access to news. In addition to summarizing news content from these publishers in its responses, ChatGPT will offer attribution and links to the original articles, enhancing the user experience with reliable information sources. (Read more at Bloomberg)

Cerebras launches WSE-3 chip, teams up with Qualcomm 
Cerebras Systems introduced its latest AI chip, the WSE-3, which doubles the performance of its predecessor while maintaining the same power draw. Additionally, Cerebras announced a collaboration with Qualcomm aimed at improving the price-performance of AI inference by a factor of 10. The partnership pairs Cerebras' training capabilities with Qualcomm's AI 100 Ultra inference chip to address the scalability and efficiency challenges of deploying AI models. (Read more details at IEEE Spectrum)

Google announces comprehensive support measures for the 2024 Indian General Election
Google is intensifying its efforts to combat misinformation by enforcing strict policies across its platforms, leveraging AI models, and promoting transparency in election-related advertising. Collaborating with Shakti, the India Election Fact-Checking Collective, and other initiatives, Google aims to support early detection of online misinformation, including deepfakes. (Learn more at Google’s blog)

Abu Dhabi set to launch autonomous racing series 
The inaugural event of the Abu Dhabi Autonomous Racing League (A2RL), slated for April 27, seeks to challenge the world's leading computer scientists, coders, and developers with a $2.25 million prize purse. Developed by ASPIRE, part of the UAE's Advanced Technology Research Council, A2RL builds upon previous autonomous racing initiatives with a long-term vision of enhancing road safety, advancing technological development, and increasing public acceptance of autonomous vehicles. (Read the news at Autosport)
