Prepare to get manipulated by chat bots.

It argues that more AI assistants designed to act in human-like ways could cause all sorts of problems, ranging from new privacy risks and new forms of technological addiction to more powerful means of misinformation and manipulation.

It appears that will soon change, as OpenAI unveiled a completely redesigned version of ChatGPT on Monday. Its foundation is an upgraded artificial intelligence model known as GPT-4o, which OpenAI claims is more adept at processing both visual and audio data, characterising it as “multimodal.” If you want ChatGPT to offer advice, you can point your phone towards objects like a shattered coffee cup or differential equation. However, ChatGPT’s new “personality” was the most striking feature of OpenAI’s demo.

It certainly seems worth pausing to consider the implications of deceptively lifelike computer interfaces that peer into our daily lives, especially when they are coupled with corporate incentives to seek profits. It will become much more difficult to tell if you’re speaking to a real person over the phone. Companies will surely want to use flirtatious bots to sell their wares, while politicians will likely see them as a way to sway the masses. Criminals will of course also adapt them to supercharge new scams.

Even advanced new “multimodal” AI assistants without flirty front ends will likely introduce new ways for the technology to go wrong. Text-only models like the original ChatGPT are susceptible to “jailbreaking,” that unlocks misbehavior. Systems that can also take in audio and video will have new vulnerabilities. Expect to see these assistants tricked in creative new ways to unlock inappropriate behavior and perhaps unpleasant or inappropriate personality quirks.