Human Action in 3D: Stanford researchers use generated video to animate 3D interactions without motion capture
AI systems designed to generate animated 3D scenes that include active human characters have been limited by a shortage of training data, such as 3D scenes matched with human motion-capture examples. Generated video clips can get the job done without motion capture.
Better Images in Fewer Steps: Researchers introduce shortcut models to speed up diffusion
Diffusion models usually take many noise-removal steps to produce an image, which makes inference slow. Existing ways to reduce the number of steps tend to degrade output quality. Researchers devised a streamlined approach that cuts the step count without sacrificing quality.
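The summary above doesn't describe the mechanism. As a hedged sketch of the shortcut idea (conditioning the denoiser on a step size and training one large step to match two chained smaller steps), the toy PyTorch snippet below illustrates a self-consistency loss. The model, dimensions, and training details are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: a step-size-conditioned velocity model and a
# self-consistency loss in which one step of size 2d is trained to land where
# two chained steps of size d land. Not the published implementation.
import torch
import torch.nn as nn

class StepConditionedDenoiser(nn.Module):
    """Tiny velocity model v(x, t, d) for 2-D toy data, conditioned on step size d."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 2, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, t, d):
        # t (time) and d (step size) are scalars in [0, 1], broadcast to the batch.
        cond = torch.stack([t.expand(x.shape[0]), d.expand(x.shape[0])], dim=1)
        return self.net(torch.cat([x, cond], dim=1))

def self_consistency_loss(model, x, t, d):
    """One step of size 2d should match two chained steps of size d (stop-gradient target)."""
    with torch.no_grad():
        x_mid = x + d * model(x, t, d)               # first small step
        target = x_mid + d * model(x_mid, t + d, d)  # second small step
    big_step = x + 2 * d * model(x, t, 2 * d)        # single large step
    return ((big_step - target) ** 2).mean()

# Toy usage: random tensors standing in for noisy latents.
model = StepConditionedDenoiser()
x = torch.randn(16, 2)
loss = self_consistency_loss(model, x, torch.tensor(0.25), torch.tensor(0.125))
loss.backward()
```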
Some AI-Generated Works Are Copyrightable: U.S. Copyright Office says that no new laws are needed for AI-generated works
The United States Copyright Office determined that existing laws are sufficient to decide whether a given AI-generated work is protected by copyright, making additional legislation unnecessary.
Reasoning in Vectors, Not Text: Meta introduces Chain of Continuous Thought (Coconut) to improve next-token prediction
Although large language models can improve their performance by generating a chain of thought (CoT), a sequence of intermediate text tokens that breaks the process of responding to a prompt into steps, text may not be the best medium for that reasoning. Researchers enabled models to carry out intermediate reasoning steps as continuous vectors instead.
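As a hedged illustration of the general idea (feeding the model's last hidden state back in as the next input embedding for a few latent steps rather than decoding intermediate text), here is a minimal sketch using a stock GPT-2 from the transformers library. A stock pretrained model has not been trained to use latent steps, so this shows only the data flow, not Meta's Coconut implementation.

```python
# Sketch of continuous "thought" steps: append the final hidden state back to
# the input embeddings a few times, then decode the answer as ordinary text.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "Q: If Alice has 3 apples and buys 2 more, how many does she have? A:"
ids = tok(prompt, return_tensors="pt").input_ids
embeds = model.transformer.wte(ids)            # token embeddings for the prompt

NUM_LATENT_STEPS = 4                           # number of continuous "thoughts"
with torch.no_grad():
    for _ in range(NUM_LATENT_STEPS):
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        last_hidden = out.hidden_states[-1][:, -1:, :]    # final position's vector
        embeds = torch.cat([embeds, last_hidden], dim=1)  # feed it back as input

    # After the latent steps, decode the next answer token as usual.
    out = model(inputs_embeds=embeds)
    next_id = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
    print(tok.decode(next_id[0]))
```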
Fine-Tuning Fine Points: Active inheritance, a smarter way to fine-tune models on synthetic data
The practice of fine-tuning models on synthetic data is becoming well established. But synthetic training data, even if it represents the training task well, may include characteristics like toxicity that impart unwelcome properties to the trained model's output. Active inheritance steers synthetic data toward desired characteristics before fine-tuning.
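One plausible way to implement such steering, offered here as an assumption rather than a description of the paper's exact procedure, is to sample several candidate responses per prompt, score each on a target attribute, and keep only the best-scoring one. The functions generate_candidates and toxicity_score below are hypothetical stand-ins, not calls from any specific library.

```python
# Hedged sketch of attribute-guided selection for synthetic fine-tuning data.
from typing import Callable, List, Tuple

def select_synthetic_data(
    prompts: List[str],
    generate_candidates: Callable[[str, int], List[str]],  # teacher-model sampler (stand-in)
    toxicity_score: Callable[[str], float],                 # lower is better (stand-in)
    num_candidates: int = 4,
) -> List[Tuple[str, str]]:
    """Build (prompt, response) pairs by keeping the least toxic candidate per prompt."""
    dataset = []
    for prompt in prompts:
        candidates = generate_candidates(prompt, num_candidates)
        best = min(candidates, key=toxicity_score)
        dataset.append((prompt, best))
    return dataset

# Toy usage with dummy stand-ins so the sketch runs end to end.
if __name__ == "__main__":
    dummy_gen = lambda p, k: [f"{p} -> draft {i}" for i in range(k)]
    dummy_tox = lambda text: len(text) % 7 / 7.0   # placeholder score in [0, 1]
    pairs = select_synthetic_data(["Explain photosynthesis."], dummy_gen, dummy_tox)
    print(pairs)
```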
David Ding: Generated video with music, sound effects, and dialogue
Last year, we saw an explosion of models that generate either high-quality video or high-quality audio. In the coming year, I look forward to models that produce video clips complete with audio soundtracks, including speech, music, and sound effects.
Stability AI’s aim is to liberate artists of all trades from the repetitive, mechanical aspects of their work and help them spend the majority of their time on the creative side. So our highest hope for next year is that generative AI will help people to be more creative and productive.