Anthropic makes tool use available for all users: Plus, ProteinViz makes protein visualization open-source again

Published: Jun 7, 2024
Reading time: 3 min read

This week’s top AI stories included:

  • E.U. countries approving the AI Act
  • Google responding to criticism of AI answers in search
  • A new set of open-world vision models
  • A study on worldwide adoption of generative AI

But first:

ProteinViz is an open-source alternative to Google’s AlphaFold 3
Like AlphaFold 3, ProteinViz can predict the three-dimensional structure of arbitrary biological molecules: given an amino acid sequence, the model generates a 3D rendering of the protein directly in the web browser. Unlike AlphaFold 3, ProteinViz is fully open-source, released under an MIT license, and available for commercial applications. (GitHub)
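
ProteinViz itself runs in the browser, but the underlying task (amino acid sequence in, predicted 3D structure out) can be sketched against a public folding service. The snippet below calls the ESM Atlas folding endpoint purely for illustration; it's a separate service, not ProteinViz's own interface, and the sequence is illustrative:

```python
import requests

# A short example amino acid sequence (illustrative only).
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"

# ESM Atlas exposes a simple endpoint that folds a sequence and returns a PDB file.
response = requests.post(
    "https://api.esmatlas.com/foldSequence/v1/pdb/",
    data=sequence,
    timeout=120,
)
response.raise_for_status()

# Save the predicted structure; open it in any molecular viewer (PyMOL, Mol*, etc.).
with open("prediction.pdb", "w") as f:
    f.write(response.text)
```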

Anthropic introduces tool use and API calls for Claude 3 AI models
Tool use enables Claude to extract structured data, convert natural language requests into API calls, answer questions using databases or web APIs, and automate simple tasks. Anthropic also introduced features like streaming, forced tool use, and <thinking> tags to give developers more control over user interactions. These features allow for more real-world applications of Claude’s models, including interactive tutors, customer service support, and in-browser automation. (Anthropic)
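
For a sense of the developer workflow, here's a minimal sketch of a tool-use request with the Anthropic Python SDK; the weather tool and its schema are illustrative examples, not part of the API:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative tool definition: a hypothetical weather lookup described by a JSON schema.
tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a given city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
)

# When Claude decides to call a tool, it returns a tool_use block containing
# the structured arguments it extracted from the natural-language request.
if response.stop_reason == "tool_use":
    tool_call = next(b for b in response.content if b.type == "tool_use")
    print(tool_call.name, tool_call.input)  # e.g., get_weather {'city': 'Paris'}
```

Forced tool use is controlled through the `tool_choice` parameter, and responses can be streamed by passing `stream=True`.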

Reuters-Oxford study surveys use and perception of generative AI
A recent online survey conducted across Argentina, Denmark, France, Japan, the U.K., and the U.S. found that around 50% of the online population has heard of ChatGPT, making it the most widely recognized generative AI product. Frequent use remains low, however, with only 1 to 7% of respondents using it daily. 66% of respondents expect generative AI to have a large impact on news media and science within the next five years, but only half trust scientists and healthcare professionals to use the technology responsibly, and less than one-third extend that trust to social media companies, politicians, and news media. (Reuters Institute)

Google scales back AI-generated answers in search results after high-profile errors
In many cases, Google’s AI Overview summaries missed important context clues, presenting jokes or unsubstantiated claims as fact. The company has made over a dozen technical changes to improve the system, including reducing its reliance on social media posts as source material, pausing and adding guardrails to some health-related answers, and restricting queries for which AI answers were proving unhelpful. (Google)

European Union countries endorse comprehensive AI rules set to take effect this month
The artificial intelligence regulations, known as the AI Act, aim to address concerns that AI could fuel misinformation and fake news or infringe on copyrighted material, while ensuring trust, transparency, and accountability in the development and use of AI technologies. The vote ratified a deal negotiators reached in December 2023. The AI Act will have global implications: companies outside the E.U. that use E.U. customer data will need to comply, and other countries may use the legislation as a model for their own AI regulations. (Reuters)

Grounding DINO 1.5 introduces two new open-world object detection models
The models include Grounding DINO 1.5 Pro, built for a wide range of detection scenarios, and Grounding DINO 1.5 Edge, optimized for efficient edge computing. Both achieve state-of-the-art zero-shot transfer performance on several academic benchmarks, with Grounding DINO 1.5 Pro setting new records on the COCO, LVIS, and ODinW datasets; fine-tuning boosts performance further. The models are pretrained on Grounding-20M, a dataset of over 20 million high-quality grounding images collected from publicly available sources. Open-world object detection has applications in robotics, semantic search, auto-captioning, and many other areas. (DeepDataSpace)
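
The 1.5 models are served through DeepDataSpace's API rather than released as open weights, but the earlier open-source Grounding DINO checkpoints in Hugging Face Transformers illustrate the same text-prompted, zero-shot detection workflow; the checkpoint, image, and thresholds below are illustrative:

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, GroundingDinoForObjectDetection

# Earlier open-source Grounding DINO checkpoint; the 1.5 models are API-only.
model_id = "IDEA-Research/grounding-dino-tiny"
processor = AutoProcessor.from_pretrained(model_id)
model = GroundingDinoForObjectDetection.from_pretrained(model_id)

image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

# Open-world detection: categories are free-text phrases, not a fixed label set.
text = "a cat. a remote control."
inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.35,
    text_threshold=0.25,
    target_sizes=[image.size[::-1]],  # (height, width)
)[0]
for box, label, score in zip(results["boxes"], results["labels"], results["scores"]):
    print(label, round(score.item(), 2), [round(v, 1) for v in box.tolist()])
```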


Still want to know more about what matters in AI right now? 

Read this week’s issue of The Batch for in-depth analysis of news and research.

This week, Andrew Ng discussed why California's proposed AI law is bad for everyone: 

“[California's proposed law SB-1047] defines an unreasonable ‘hazardous capability’ designation that may make builders of large AI models potentially liable if someone uses their models to do something that exceeds the bill's definition of harm (such as causing $500 million in damage). That is practically impossible for any AI builder to ensure. If the bill is passed in its present form, it will stifle AI model builders, especially open source developers.”

Read Andrew's full letter here.

Other top AI news and research stories we covered in depth included Microsoft’s AI-driven Copilot+ PCs, the misuse of OpenAI’s models in disinformation campaigns, an initial conversation between the U.S. and China intended to prevent AI-driven accidents, and Microsoft’s Orca 2, a technique that strengthens the native reasoning abilities of smaller models.

