Meta AI, Meta’s AI-powered assistant across Facebook, Instagram, Messenger and the web, can now speak in more languages and create stylized selfies. And, beginning today, Meta AI users can route questions to Meta’s newest flagship AI model, Llama 3.1 405B, which the company says can handle more complex queries than the previous model underpinning Meta AI.
The question is whether the enhancements will be enough to improve the overall Meta AI experience, which many reviewers, including TechCrunch’s Devin Coldewey, found incredibly underwhelming at launch. Earlier iterations of Meta AI struggled with facts, numbers and web searches, often failing to complete basic tasks like looking up recipes and airfares.
Llama 3.1 405B could make a difference, potentially. Meta claims that the new model is particularly adept at math and coding questions, making it well-suited for help with math homework problems, explaining scientific concepts, debugging code and so on.
However, there’s a catch. Meta AI users have to manually switch to Llama 3.1 405B in order to use it, and they’re limited to a certain number of queries each week before Meta AI automatically switches over to a less capable model (Llama 3.1 70B).
Meta is tagging the Llama 3.1 405B integration as a “preview” for the time being.
Generative selfies
There’s another new generative AI model besides Llama 3.1 405B in Meta AI, and it powers the selfie feature.
The model, called Imagine Yourself, creates images based on a photo of a person and a prompt like “Imagine me surfing” or “Imagine me on a beach vacation.” Available in beta, Imagine Yourself can be invoked in Meta AI by typing “Imagine me” followed by anything that isn’t NSFW.
Meta didn’t say which data was used to train Imagine Yourself, but the company’s terms of use make it clear that public posts and images on its platforms are fair game. That policy, and the thorny opt-out process, hasn’t sat well with all users.
New languages and Quest support
Meta AI is also replacing the Meta Quest VR headset’s Voice Commands feature, with the rollout scheduled for next month in the U.S. and Canada in “experimental mode.” Users will be able to use Meta AI with passthrough enabled to ask questions about things in their physical surroundings, Meta says; for example, “Look and tell me what kind of top would complete this outfit” while holding up a pair of shorts.
As of today, Meta AI is available in 22 countries, Meta said; new additions include Argentina, Chile, Colombia, Ecuador, Mexico, Peru and Cameroon. The assistant now supports French, German, Hindi, Hindi-Romanized Script, Italian, Portuguese and Spanish, and Meta promises more languages are on the way.