Character.AI, a leading platform for chatting and roleplaying with AI-generated characters, unveiled its forthcoming video generation model, AvatarFX, on Tuesday. Available in closed beta, the model animates the platform's characters in a variety of styles and voices, from human-like characters to 2D animal cartoons.
AvatarFX distinguishes itself from competitors like OpenAI's Sora because it isn't solely a text-to-video generator. Users can also generate videos from preexisting images, allowing users to animate photos of real people.
📽️ Say hello to AvatarFX, our cutting-edge video generation model. Cinematic. Expressive. Mind-blowing. Dive in: https://t.co/aF5zDrKLIK #CharacterAI #AvatarFX pic.twitter.com/Rkqo4SXEgX
It's immediately evident how this kind of tech could be leveraged for abuse: users could upload photos of celebrities or people they know in real life and create realistic-looking videos in which they do or say something incriminating. The technology to create convincing deepfakes already exists, but incorporating it into popular consumer products like Character.AI only exacerbates the potential for it to be used irresponsibly.
Character.AI told TechCrunch that it will apply watermarks to videos generated with AvatarFX to make it clear that the footage isn't real. The company added that its AI will block the generation of videos of minors, and that images of real people are filtered through the AI to change the subject into a less recognizable person. The AI is also trained to recognize images of high-profile celebrities and politicians to limit the potential for abuse.
Since AvatarFX is not widely available yet, there is no way to verify how well these safeguards work.
Character.AI is already facing issues with safety on its platform. Parents have filed lawsuits against the company, alleging that its chatbots encouraged their children to self-harm, to kill themselves, or to kill their parents.
In one case, a 14-year-old boy died by suicide after he reportedly developed an obsessive relationship with an AI bot on Character.AI based on a "Game of Thrones" character. Shortly before his death, he'd opened up to the AI about having thoughts of suicide, and the AI encouraged him to follow through on the act, according to court filings.
These are extreme examples, but they go to show how people can be emotionally manipulated by AI chatbots through text messages alone. With the incorporation of video, the relationships that people have with these characters could feel even more realistic.
Character.AI has responded to the allegations against it by building parental controls and additional safeguards, but as with any app, controls are only effective when they're actually used. Oftentimes, kids use tech in ways that their parents don't know about.
Updated, 4/23/25, 9:45 AM ET with comment from Character.AI