The latest in AI from Feb. 8 to Feb. 14, 2024

4 min read

This week's top AI news and research stories featured ancient scrolls recovered using AI, restrictions on AI robocalls, the extreme energy requirements of GPU data centers, and Würstchen, a system that produced superior images after far less training. But first:

$25 million deepfake heist hits Hong Kong firm
Scammers apparently used AI to impersonate the company's chief financial officer and other employees during a video conference call, convincing an unsuspecting employee to transfer funds to fraudulent accounts. Hong Kong police are currently investigating the case; no arrests have been made yet. (Read the story at Ars Technica)

Investment advisor Vanguard integrates AI into $13 billion of quant stock funds
Vanguard Group incorporated machine learning into four of its active stock funds to refine their factor-based investment strategies. Early signs were positive, with the Vanguard Strategic Equity Fund and Vanguard Strategic Small-Cap Equity Fund outperforming their benchmarks in 2023. (Read the news at Bloomberg)

AI lobbying efforts surged by 185 percent in 2023 amid regulatory push
In 2023, over 450 organizations in the U.S. engaged in lobbying activities related to AI. This includes tech giants, startups, pharmaceuticals, finance, and academia, showing a broad interest in AI's regulatory landscape. This surge in lobbying activity coincides with legislative and executive attempts to regulate AI, with U.S. President Joe Biden’s executive order aiming to establish safety assessments, equity, civil rights guidance, and research into AI's labor market impacts. (Learn more at CNBC)

Google collaborates on initiative to certify AI-generated content
The initiative would identify the origin and any modifications of photos, videos, audio clips, and other digital content, including those altered by AI. Google plans to integrate this digital certification into its products and services, including platforms like YouTube, to help users make more informed decisions about the content they consume. (Find all the details at The New York Times)

Google unveils Gemini chatbot with premium subscription
Google rebranded its Bard chatbot as Gemini and introduced Gemini Advanced, a $19.99-a-month tier offering enhanced reasoning capabilities powered by the Ultra 1.0 AI model. Subscribers also receive two terabytes of cloud storage, typically valued at $9.99 monthly. Google has also promised forthcoming integrations of Gemini into Gmail and Google's productivity suite. (Learn more at Reuters)

Amazon launches Rufus, a personal shopping assistant
Available initially to a select group of users on Amazon’s mobile app, Rufus provides conversational assistance for queries such as comparing products, suggesting gift ideas, and answering specific questions about product features like durability. As Amazon competes with tech counterparts like Microsoft and Google, which have already released chatbots for shopping and search, Rufus represents a move to capture consumers at the exploratory stage of their online shopping. (Read the story at The New York Times)

Europe advances toward implementing the AI Act
The AI Act moves toward becoming law, with key holdouts like France and Germany now approving the act’s provisions. A European Parliament vote is expected in March or April. EU observers anticipate the act will be officially enacted before summer 2026, with certain provisions taking effect sooner. (Read the latest details about the AI Act at Reuters)

White House aide Elizabeth Kelly to lead U.S. AI Safety Institute
The new institute will be part of the National Institute of Standards and Technology. Kelly, who played a significant role in drafting the executive order that established the institute, will oversee development of testing standards to evaluate AI systems’ safety for consumer and business use. The institute, set to finalize these standards by July, aims to foster trust in AI technologies and facilitate their wider adoption by establishing a universal set of safety tests. (Learn more at AP News)

UK commits $125 million (£100 million) to AI research and regulatory training
The bulk of the funding will be allocated to establishing nine research hubs to advance AI applications in healthcare, chemistry, and mathematics, and to strengthening a partnership with the U.S. on responsible AI use. This initiative follows Britain's hosting of an international AI safety summit in November, which led to the signing of the "Bletchley Declaration" by over 25 countries aiming to identify and mitigate AI risks through policy collaboration. (Read more at Reuters)

News site Semafor embraces AI to assist reporters in news curation
Semafor launched Signals, a product designed for news aggregation on significant daily stories from around the world. Signals uses an AI-powered search tool named MISO (Multilingual Insight Search Optimizer) to assist reporters in efficiently sourcing a diverse array of news stories and social media content. The final curation and summarization of news remain a human-driven effort. (Read the full article at The Verge)

U.S. Department of Homeland Security (DHS) expanding use of AI to stop child abuse and drug trafficking
The DHS is set to recruit 50 AI specialists in a bid to strengthen its operations against child exploitation, disrupt fentanyl production, and improve natural disaster damage assessments. Although specific roles were not detailed, the new hires will contribute skills in cybersecurity, data science, and software engineering to support the DHS's mission. The DHS has already used AI to enhance border security drug seizures and to identify victims and perpetrators of sexual abuse. (Read the news at Reuters)

Hugging Face launches a platform for users to build their own chatbots for free
“Hugging Chat Assistant” allows users to customize chatbots using various open-source large language models (LLMs), including Mistral's Mixtral and Meta's Llama 2, providing a flexible solution for developers. (Find out more at VentureBeat)

U.S. government launches AI safety consortium with top tech companies
The Biden administration announced the formation of the U.S. AI Safety Institute Consortium (AISIC), an initiative to ensure the safe development and deployment of generative AI. Key focus areas include developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content. Tech partners include Google, Meta, Apple, Microsoft, OpenAI, Anthropic, Nvidia, and more. (More details at Reuters)

LinkedIn introduces chatbot to streamline job searches
The chatbot, available for premium users, provides personalized advice on users' suitability for positions and tips to enhance their profiles for better visibility. The tool also answers queries and offers insights by analyzing company profiles and job listing information available on LinkedIn, aiming to make the job-hunting process less daunting. (Read more at Wired)
