Your AInsights: Executive-level insights on the latest in generative AI…
Meta released its Llama 3 Large Language Model (LLM), which serves as the foundation for Meta AI. Meta is also making Llama 3 openly available for third-party developers to use, modify, and distribute for research and commercial purposes without licensing fees, though the community license does carry some restrictions, notably for very large platforms. More on that in a bit.
Meta AI serves as the AI engine for Messenger, Instagram, Facebook, as well as my go-to sunglasses created in partnership with Ray-Ban.
Mark Zuckerberg announced the news on Threads.
“We’re upgrading Meta AI with our new state-of-the-art Llama 3 AI model, which we’re open sourcing,” Zuckerberg posted. “With this new model, we believe Meta AI is now the most intelligent AI assistant that you can freely use.”
Meta also introduced a free website designed to compete with (and resemble) OpenAI’s ChatGPT, available at Meta.AI. Note that the service is currently free, but it asks you to log in with your Facebook account. You can bypass this for now, though if you do log in, you are contributing your data and activity toward training Llama 4 and beyond. This isn’t new for Facebook or any social media company; as users, you and I have always been the product, the training grounds, and the byproduct of social algorithms.
Like ChatGPT, you can prompt via text for responses and also “imagine” images for Meta.AI to create for you. What’s cool about the image imagination (creation) process is that it offers a real-time preview as you’re prompting. Compare this to, say, ChatGPT or Google Gemini, where you have to wait for the image to be generated before you can fine-tune the prompt. What’s more, users can also animate images to produce short MP4 videos. Interestingly, all content is watermarked. And similarly, Meta.AI will also generate a playback video of your creation process.
All in all, the performance and capabilities race between Claude, ChatGPT, Gemini, Perplexity, et al. benefits us, the users, as genAI is like the Wild West right now. We get to test the highest performers, at low cost or no cost, until the dust settles a bit more.
AInsights
What sets Llama 3 apart are its performance claims. Llama 3 introduces new models with 8 billion and 70 billion parameters, which demonstrate notable improvements in reasoning and code generation. Meta reports that both models outperform comparably sized open models on standard industry benchmarks.
The number of parameters in a large language model like Llama 3 is a measure of the model’s size and complexity. More parameters generally allow the model to capture more intricate patterns and relationships in the training data, leading to improved performance on various tasks.
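To make “8 billion parameters” concrete, here is a rough back-of-the-envelope sketch in Python. It assumes the publicly reported Llama 3 8B architecture dimensions (4,096-wide hidden layers, 32 transformer blocks, grouped-query attention with 8 key/value heads, a 14,336-wide feed-forward layer, and a ~128K-token vocabulary); the exact count depends on implementation details this sketch omits, so treat it as an illustration, not an official figure.

```python
# Rough parameter-count estimate for a Llama-3-style transformer.
# Dimensions are the publicly reported Llama 3 8B values; the result
# is an approximation, not an official number from Meta.

VOCAB = 128_256           # tokenizer vocabulary size
HIDDEN = 4_096            # model (embedding) width
LAYERS = 32               # number of transformer blocks
KV_HEADS = 8              # grouped-query attention: 8 shared key/value heads
HEAD_DIM = HIDDEN // 32   # 32 query heads -> 128 dimensions per head
FFN = 14_336              # feed-forward (SwiGLU) inner width

embed = VOCAB * HIDDEN                          # input token embeddings
attn = (HIDDEN * HIDDEN) * 2 \
     + (HIDDEN * KV_HEADS * HEAD_DIM) * 2       # Q and O full-width; K and V shrunk by GQA
mlp = 3 * HIDDEN * FFN                          # SwiGLU uses three projection matrices
per_layer = attn + mlp + 2 * HIDDEN             # plus two RMSNorm weight vectors

# Final norm and an untied output (LM head) projection round it out.
total = embed + LAYERS * per_layer + HIDDEN + VOCAB * HIDDEN
print(f"~{total / 1e9:.2f}B parameters")        # → ~8.03B parameters
```

Doubling the width and depth roughly quadruples this total, which is why parameter count is the shorthand the industry uses for model size and capability.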
Unlike many other prominent LLMs, such as GPT-4 and Google’s Gemini, which are proprietary, Llama 3 is freely available for research and commercial purposes. This open accessibility fuels innovation and collaboration within the AI community.
Meta is developing multimodal versions of Llama 3 that can work with modalities like images, handwritten text, video, and audio clips, expanding its potential applications. For example, with the Meta Ray-Ban glasses, users can say, “Hey Meta,” to activate the camera and have it recognize an object or translate signage and text, even across languages.
Multilingual training is integrated into versions of Llama 3, enabling it to handle multiple languages effectively.
It’s reported that Meta is also training a 400-billion-parameter version of Llama 3, showcasing the architecture’s ability to scale to even larger and more complex models.
The “Instruct” versions of Llama 3 (8B-Instruct and 70B-Instruct) have been fine-tuned to better follow human instructions, making them more suitable for conversational AI applications.
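Those Instruct variants expect conversations rendered in Llama 3’s published chat prompt format. As a minimal sketch (the header tokens below follow Meta’s documented template; in practice you would let a tokenizer’s `apply_chat_template` build this string for you), formatting a system-plus-user exchange looks like:

```python
# Minimal sketch of the Llama 3 chat prompt format used by the
# Instruct models. Token names follow Meta's published template;
# in practice, use your tokenizer's apply_chat_template instead.

def format_llama3_chat(messages):
    """Render [{'role': ..., 'content': ...}] into a Llama 3 prompt string."""
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
                   f"{m['content']}<|eot_id|>")
    # Open an assistant header so the model generates the reply next.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

prompt = format_llama3_chat([
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize Llama 3 in one sentence."},
])
print(prompt)
```

Fine-tuning on millions of conversations in this role-tagged structure is what teaches the Instruct models to follow directions rather than merely continue text.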
As with any model, accuracy and bias are always a concern, and this is true of all AI chatbots. Sometimes, though, inaccurate results can have significant impacts.
Nonetheless, Zuckerberg expects Meta AI to be “the most used and best AI assistant in the world.” With integration into some of the world’s most utilized social media and messaging platforms, his install base is certainly there.
Disclaimer: Meta.ai drafted half of this story’s headline. It still needed a human touch.
This is your latest executive-level dose of AInsights. Now, go out there and be the expert everyone around you needs!
Please subscribe to AInsights, here.
If you’d like to join my master mailing list for news and events, please follow, a Quantum of Solis.
Brian Solis | Author, Keynote Speaker, Futurist
Brian Solis is a world-renowned digital analyst, anthropologist, and futurist. He is also a sought-after keynote speaker and an 8x best-selling author. In his new book, Lifescale: How to live a more creative, productive and happy life, Brian tackles the struggles of living in a world rife with constant digital distractions. His previous books, X: The Experience When Business Meets Design and What’s the Future of Business, explore the future of customer and user experience design and modernizing customer engagement in the four moments of truth.
Invite him to speak at your next event or bring him in to your organization to inspire colleagues, executives and boards of directors.