Cohere claims its new Aya Vision AI model is best-in-class – TechCrunch

Cohere For AI, AI startup Cohere’s nonprofit research lab, this week released Aya Vision, a multimodal “open” AI model that the lab claims is best-in-class.

Aya Vision can perform tasks like writing image captions, answering questions about photos, translating text, and generating summaries in 23 major languages. Cohere, which is also making Aya Vision available for free through WhatsApp, called it “a significant step towards making technical breakthroughs accessible to researchers worldwide.”

“While AI has made significant progress, there is still a big gap in how well models perform across different languages — one that becomes even more noticeable in multimodal tasks that involve both text and images,” Cohere wrote in a blog post. “Aya Vision aims to explicitly help close that gap.”

Aya Vision comes in two flavors: Aya Vision 32B and Aya Vision 8B. The more sophisticated of the two, Aya Vision 32B, sets a “new frontier,” Cohere said, outperforming models more than twice its size, including Meta’s Llama-3.2 90B Vision, on certain visual understanding benchmarks. Meanwhile, Aya Vision 8B scores better on some evaluations than models ten times its size, according to Cohere.

Both models are available from AI dev platform Hugging Face under a Creative Commons 4.0 license with Cohere’s acceptable use addendum. They can’t be used for commercial applications.

Cohere said that Aya Vision was trained using a “diverse pool” of English datasets, which the lab translated and used to create synthetic annotations. Annotations, also known as tags or labels, help models understand and interpret data during training. For example, an annotation used to train an image recognition model might take the form of markings around objects, or captions referring to each person, place, or object depicted in an image.

Cohere’s use of synthetic annotations (that is, annotations generated by AI) is on trend.
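To make the idea of an image annotation concrete, here is a minimal illustrative sketch in Python. The field names, file name, and bounding-box format below are hypothetical examples of how such a labeled training record is commonly structured; they do not reflect Cohere’s actual training data.

```python
# A hypothetical annotation record for a single training image.
# Field names and values are illustrative only, not Aya Vision's
# actual data format.
annotation = {
    "image": "street_scene.jpg",
    "caption": "A cyclist waits at a crosswalk next to a red bus.",
    "objects": [
        # Bounding boxes as (x, y, width, height) in pixels.
        {"label": "cyclist", "box": (120, 80, 60, 140)},
        {"label": "bus", "box": (300, 40, 220, 160)},
    ],
}

# A model trained on records like this learns to associate pixel
# regions with object labels and whole images with captions.
labels = [obj["label"] for obj in annotation["objects"]]
print(labels)
```

Generating records like this synthetically means a model, rather than a human labeler, writes the captions and labels, which is the practice the article describes.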
Despite its potential downsides, rivals including OpenAI are increasingly leveraging synthetic data to train models as the well of real-world data dries up. Research firm Gartner estimates that 60% of the data used for AI and analytics projects last year was synthetically created.

According to Cohere, training Aya Vision on synthetic annotations enabled the lab to use fewer resources while achieving competitive performance. “This showcases our critical focus on efficiency and [doing] more using less compute,” Cohere wrote in its blog. “This also enables greater support for the research community, who often have more limited access to compute resources.”

Together with Aya Vision, Cohere released a new benchmark suite, AyaVisionBench, designed to probe a model’s skills in “vision-language” tasks like identifying differences between two images and converting screenshots to code.

The AI industry is in the midst of what some have called an “evaluation crisis,” a consequence of the popularization of benchmarks that give aggregate scores that correlate poorly with proficiency on the tasks most AI users care about. Cohere asserts that AyaVisionBench is a step toward rectifying this, providing a “broad and challenging” framework for assessing a model’s cross-lingual and multimodal understanding.

With any luck, that’s indeed the case.

“[T]he dataset serves as a robust benchmark for evaluating vision-language models in multilingual and real-world settings,” Cohere researchers wrote in a post on Hugging Face. “We make this evaluation set available to the research community to push forward multilingual multimodal evaluations.”
Source: https://techcrunch.com/2025/03/04/cohere-claims-its-new-aya-vision-ai-model-is-best-in-class/