The latest in AI from December 7 to December 13, 2023


This week's top AI news and research stories featured everything you need to know about Google's Gemini, updates on the EU's AI Act, the AI Alliance led by Meta and IBM, and a novel twist on self-supervised learning. But first:

Meta launches text-to-image generator "Imagine with Meta AI" 
The new portal for Meta’s Emu image-synthesis model, trained on 1.1 billion publicly available Facebook and Instagram images, transforms text prompts into 1280×1280-pixel images. Meta emphasizes that private photos were excluded from training, but privacy concerns remain. (Read more at Ars Technica)

AI-fueled browsers put generative search first 
Developers at smaller companies are redefining the internet experience by incorporating AI into web browsers. In many of these new browsers, generative AI replaces Google as the default search experience, with companies like The Browser Company (maker of Arc), SigmaOS, and Opera leading the way. (Read the full article at Wired)

Research: Microsoft introduces Orca 2
Building on the success of the original Orca model, the 13B-parameter Orca 2 outperforms models of similar size and matches or surpasses models 5-10 times larger on tasks testing advanced reasoning in zero-shot settings. The key innovation lies in employing diverse reasoning techniques and task-specific solution strategies, demonstrating that smaller models can achieve substantial reasoning prowess when trained with improved signals and methods. (See Microsoft’s blog)
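
Orca 2's weights were released for research use; below is a minimal zero-shot inference sketch using the Hugging Face transformers library, assuming the checkpoint is published under the id microsoft/Orca-2-13b.

```python
# Minimal zero-shot inference sketch; assumes the released checkpoint
# is available on Hugging Face as "microsoft/Orca-2-13b".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/Orca-2-13b"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# A reasoning-style question posed zero-shot (no examples in context).
prompt = "Question: A train travels 60 miles in 45 minutes. What is its average speed in mph? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```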

Farmerline, the sustainable agriculture company using AI for good 
Farmerline's Mergdata traceability tool aims to transform sustainable farming practices, with capabilities spanning supply-chain tracking, geolocation, and digital payments. Over the past decade, Farmerline’s tools have monitored 500,000 farm plots, affecting 9.4 million acres and reducing emissions in protected areas across five countries. (Watch an interview with Farmerline’s CEO at Bloomberg and read Farmerline’s blog)

Drones and AI could accelerate landmine detection in Ukraine
Drones are being equipped with machine learning algorithms trained on the visual characteristics of various explosives. The algorithms process collected imagery into detailed maps, achieving an explosive detection rate of approximately 90 percent. While not a replacement for traditional methods, the approach significantly enhances efficiency by covering more ground in less time. The technology aims to address Ukraine's extensive landmine-ridden areas and could shorten a demining effort projected to take 750 years with conventional methods. (Read the article at Scientific American)
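
The article doesn't describe the team's software stack; purely to illustrate the mapping step, the sketch below turns per-tile detector scores plus GPS metadata into a flagged hazard map, where score_tile is a hypothetical stand-in for a trained vision model.

```python
# Conceptual sketch only: converts per-tile detection scores and GPS
# coordinates into a simple hazard map. Not the deployed system.
import numpy as np

def score_tile(tile: np.ndarray) -> float:
    """Hypothetical stand-in for a trained detector returning P(explosive)."""
    return float(tile.mean())  # placeholder logic, not a real model

# Each surveyed tile carries image data plus the GPS center of the tile.
tiles = [
    {"image": np.random.rand(64, 64), "lat": 49.85 + i * 1e-4, "lon": 36.66}
    for i in range(5)
]

THRESHOLD = 0.5  # flag tiles whose detector score exceeds this value
for t in tiles:
    p = score_tile(t["image"])
    if p > THRESHOLD:
        print(f"Possible explosive near ({t['lat']:.5f}, {t['lon']:.5f}), score={p:.2f}")
```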

U.S. Border Patrol utilizes AI to combat fentanyl trafficking
Traditional methods, including drug-sniffing dogs, face challenges because finished fentanyl is minuscule and easily concealed. With AI, agents gain deeper insights into fentanyl supply chains, leading to more substantial seizures of both finished products and production-related chemicals. Altana, a startup specializing in global supply chain platforms, supports U.S. Customs and Border Protection (CBP) by mapping the assembly and shipping of fentanyl ingredients, helping disrupt the entire process. (Read more at Axios)

Meta, Microsoft, and OpenAI to turn to AMD's AI chip as Nvidia alternative
The three tech giants announced their intention to use AMD's new Instinct MI300X chip, signaling a growing demand for alternatives to Nvidia's scarce and pricey chips. The move could potentially lead to cost reductions in developing AI models and create competition for Nvidia. AMD's MI300X, set to ship early next year, boasts 192GB of high-performance HBM3 memory and a new architecture, positioning it as a strong contender in the AI chip market. (Read the article at CNBC)

AI laser reads heartbeats at a distance, replacing the stethoscope
Researchers at the University of Glasgow developed a laser camera that leverages AI and quantum technologies to read a person's heartbeat remotely. A laser beam is directed onto the throat while high-speed cameras record the subtle skin movements caused by fluctuations in the main artery. The application extends to detecting cardiovascular irregularities, potentially offering early warnings of strokes or cardiac arrests. (Read more at The Guardian)
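
The Guardian piece includes no technical detail beyond the description above; as a rough sketch of the signal-processing idea, the snippet below recovers a pulse rate from a synthetic one-dimensional skin-displacement signal by band-passing to a plausible cardiac band and picking the dominant frequency.

```python
# Illustrative sketch (not the Glasgow system): estimate heart rate from
# a 1-D skin-displacement signal, here synthesized with a 1.2 Hz pulse.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                    # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)  # 30 seconds of signal
signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 1, t.size)

# Band-pass to the cardiac band (0.7-3 Hz, i.e. 42-180 bpm).
b, a = butter(4, [0.7, 3.0], btype="band", fs=fs)
filtered = filtfilt(b, a, signal)

# The dominant in-band frequency gives the heart-rate estimate.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, 1 / fs)
band = (freqs >= 0.7) & (freqs <= 3.0)
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {bpm:.0f} bpm")  # ~72 bpm for this input
```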

SAG-AFTRA union ratifies deal, signaling end to Hollywood strike
The agreement includes protections related to the use of AI, pay increases for actors, and streaming-based bonuses. The deal was approved by 78% of voting members of the 160,000-member union. (Read the news story at The Guardian)

Apple takes a major step in open source AI
In a significant move, Apple quietly open-sourced multiple AI tools, including MLX, a library for large-scale deep learning models designed for Apple Silicon. Released under an MIT license, MLX boasts a unified memory model, allowing operations on arrays without moving data. Accompanied by a Python API closely resembling NumPy, MLX aims to be user-friendly while efficiently training and deploying models. The release is seen as Apple embracing open source AI, with potential implications for the future integration of AI-centric features in Apple operating systems. (Read more at The Stack)
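
For a quick taste of the NumPy-like API, here is a minimal sketch (assuming MLX is installed via pip install mlx on an Apple Silicon machine):

```python
# Minimal MLX sketch: lazy, unified-memory arrays with a NumPy-like API.
import mlx.core as mx

a = mx.random.normal([1024, 1024])
b = mx.random.normal([1024, 1024])

# Operations build a lazy compute graph; arrays live in unified memory,
# so no explicit host/device transfers are needed.
c = a @ b + 1.0
mx.eval(c)  # force the computation to run

# Function transformations such as automatic differentiation:
grad_fn = mx.grad(lambda x: (x ** 2).sum())
print(grad_fn(mx.array([1.0, 2.0, 3.0])))  # gradient of sum(x^2) is 2x
```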

AI-induced reproducibility crisis sparks concerns across scientific disciplines
Researchers highlight issues ranging from data leakage in training AI systems to insufficient separation between training and test datasets, causing biases and unreliable outcomes. Scientists argue that cultural shifts in reporting standards, transparency, and a reevaluation of publishing incentives are essential to address the challenges posed by AI in various scientific fields. (Read the report at Nature)
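
As a toy illustration of the leakage failure mode (not drawn from any specific study), the snippet below contrasts fitting preprocessing before the train/test split, which leaks test-set statistics into training, with the correct order using scikit-learn.

```python
# Toy illustration of train/test leakage using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, random_state=0)

# Leaky: fitting the scaler on ALL data lets test-set statistics
# influence preprocessing before the split.
#   X_scaled = StandardScaler().fit_transform(X)
#   X_train, X_test, y_train, y_test = train_test_split(X_scaled, y)

# Correct: split first; the pipeline fits preprocessing on training data only.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```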

Survey reveals discrepancy between corporate AI hype and actual adoption
While major companies, including those in the S&P 500, extensively discuss AI in their earnings calls, a recent survey shows that only 4.4 percent of U.S. businesses report using AI to produce goods or services. The study suggests that the current adoption landscape could create an "AI divide" between the organizations and cities that can effectively leverage AI tools and those that cannot. (Read more at NBC News)

Meta unveils Purple Llama project for AI safety
The project, which builds on the success of Meta’s open source Llama models, focuses on trust and safety in open generative AI models. Initial releases include cybersecurity tools for large language models (LLMs) that address issues like insecure code suggestions, plus Llama Guard, a freely available foundation model for detecting potentially risky or policy-violating content. (Read Meta’s blog)
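
As a hedged sketch of calling such a safeguard model with the transformers library, assuming the checkpoint id meta-llama/LlamaGuard-7b and approved access to the gated weights:

```python
# Sketch of classifying a conversation with Llama Guard; the checkpoint
# id and gated access are assumptions, not confirmed by the blog post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

chat = [{"role": "user", "content": "How do I pick a lock?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=32)
# The model replies "safe" or "unsafe" plus any violated category codes.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```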

Research: AI decodes cat pain to enhance veterinary care
A team of AI researchers and veterinarians developed machine learning algorithms to assess whether cats are in pain based on their facial expressions. These algorithms, tested in a veterinary hospital, demonstrated up to 77% accuracy in identifying pain in feline patients. The researchers plan to build a mobile app for both veterinarians and cat owners to use for automatic pain detection. (Read the article at Scientific American)
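
The article doesn't publish the team's code; purely to illustrate the setup, the sketch below trains a generic classifier on synthetic facial-landmark features paired with pain labels.

```python
# Illustrative sketch with synthetic data: landmark coordinates in,
# binary pain label out. Not the researchers' actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_cats, n_landmarks = 200, 48
X = rng.normal(size=(n_cats, n_landmarks * 2))  # (x, y) per facial landmark
y = rng.integers(0, 2, size=n_cats)             # 1 = in pain, 0 = not

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```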

Global struggle to regulate AI
As concerns rise over AI’s impact on jobs, its use to facilitate disinformation, and the potential development of humanlike intelligence, nations are grappling with how to regulate AI effectively. The technology's rapid and unpredictable advancement has outpaced the ability of lawmakers and regulators to keep up. (Read the report at The New York Times)

UAE's G42 AI group drops Chinese hardware in favor of U.S. suppliers 
G42, a prominent AI company based in the United Arab Emirates (UAE), is severing ties with Chinese hardware suppliers and transitioning to U.S. counterparts. The decision comes amid growing geopolitical tensions and scrutiny of G42’s ties with Chinese entities, such as Huawei. The UAE company is adjusting its relationships to comply with Washington's regulations on exports of advanced chips, showcasing the impact of political conflicts on the AI industry. (Read more at Financial Times)

OpenAI board member Helen Toner discusses tensions around Sam Altman’s removal and return
In an interview, Toner addresses the complexities of the situation, including claims that the board risked violating its fiduciary duties, the unexpected backlash from employees, and the clash between AI safety advocates and those prioritizing technological progress. While not providing specific details on the firing decision, Toner discusses the tension between Altman and herself, which stemmed from a paper on AI safety she co-authored that drew criticism from Altman and led to behind-the-scenes efforts to oust her. (Read the story at The Wall Street Journal)

Subscribe to Data Points

Your accelerated guide to AI news and research