Issue #3: Surprise launches from Mistral AI and Hugging Face
Mistral AI launches new LLM, Hugging Face announces open-source text-to-speech model
Welcome to Issue #3 of One Minute AI, your daily AI news companion. This issue will cover two new launches from Mistral AI and Hugging Face.
Mistral AI launches new 281GB large language model
On Tuesday, Mistral AI released their latest LLM, Mixtral 8x22B, via a surprise post on X / Twitter. The new Mixtral model boasts a 65,000-token context window and up to 176 billion parameters. It is expected to outperform Mistral's previous Mixtral 8x7B LLM, which itself outperformed GPT-3.5 and Llama 2 on numerous benchmarks.
Mixtral 8x22B is available for anyone to use after downloading a 281GB file. All you need to do is grab the magnet link from Mistral AI's post on X and paste it into your preferred torrent client. Alternatively, you can try it via Hugging Face.
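For command-line users, the torrent route looks roughly like the sketch below. The magnet URI is a placeholder (the real one is in Mistral AI's post on X), and aria2c is just one example of a torrent-capable client; the command is echoed rather than executed since the placeholder link is not real.

```shell
# Placeholder magnet URI - substitute the real link from Mistral AI's post on X.
MAGNET='magnet:?xt=urn:btih:<hash-from-the-post>'

# Build the download command for the ~281GB torrent. aria2c is one common
# CLI client; any torrent client that accepts magnet links works the same way.
CMD="aria2c --seed-time=0 --dir=./mixtral-8x22b $MAGNET"

# Echo instead of running, because the placeholder magnet link above is not real.
echo "$CMD"
```

Once the download finishes, the directory contains the raw model weights, which you can then load with your inference stack of choice.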
Hugging Face announces lightweight, open-source text-to-speech model
Hugging Face recently launched Parler-TTS, a lightweight text-to-speech (TTS) model that can generate high-quality, natural-sounding speech in the style of a given speaker (gender, pitch, speaking style, etc.).
Unlike many other TTS models, Parler-TTS is fully open-source. The datasets, pre-processing, training code, and weights are all released publicly on GitHub under a permissive license, enabling the community to understand the work and develop their own powerful TTS models.
Want to help?
If you liked this issue, help spread the word and share One Minute AI with your peers and community.
You can also share feedback, as well as news from the AI world that you’d like to see featured, by joining our chat on Substack.