Issue #7: AI Innovation Day!
Meta releases Llama 3, and Microsoft presents the VASA-1 research paper and demo
Welcome to Issue #7 of One Minute AI, your daily AI news companion. This issue covers two major AI announcements, one from Meta and one from Microsoft.
Meta launches Llama 3, its most capable open LLM to date
Today, Meta launched Llama 3, the next generation of its open-source large language models. The release includes pretrained and instruction-fine-tuned models at 8B and 70B parameters. To support more responsible use, Meta is also shipping new trust and safety tools: Llama Guard 2, Code Shield, and CyberSec Eval 2.
Llama 3 models will soon be available on major cloud and AI platforms, including Microsoft Azure, AWS, and Hugging Face. You can also try Llama 3 today through Meta AI, the company's recently upgraded AI assistant.
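If you want to experiment with the open weights yourself, here is a minimal sketch using the Hugging Face transformers library. It assumes you have requested access to the gated Meta-Llama-3-8B-Instruct checkpoint on its model page and authenticated locally (e.g. via `huggingface-cli login`); the exact model ID and availability may vary by region and rollout.

```python
# Minimal sketch: generating text with Llama 3 8B Instruct via transformers.
# Assumes access to the gated checkpoint and an authenticated
# Hugging Face session (e.g. `huggingface-cli login`).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    device_map="auto",  # place the weights on a GPU if one is available
)

result = generator(
    "Summarize the key features of Llama 3 in one sentence.",
    max_new_tokens=64,
)
print(result[0]["generated_text"])
```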
Microsoft presents VASA-1, an AI framework that can make human headshots speak
Recently, Microsoft Research presented VASA-1, an AI framework that can convert human headshots into talking and singing videos. From a single static headshot and a speech audio clip, the model brings the face to life, complete with synchronized lip movements, lifelike facial expressions, and natural head motion.
The widely shared clip of the Mona Lisa singing "Paparazzi" (sourced from Twitter) is just one demonstration of what VASA-1 can do. More details about the project, including the research paper, are available on Microsoft Research's website.
Want to help?
If you liked this issue, help spread the word and share One Minute AI with your peers and community.
You can also share feedback, as well as AI news you'd like to see featured, by joining our chat on Substack.