
The world of AI is moving fast. We’ve seen the success of generative AI chatbots like ChatGPT, and plenty of companies are working to build AI into their apps and services. But while the threat of AI still looms large, researchers have raised some interesting concerns about how easily AI can lie to us and what that could mean going forward.
One thing that makes ChatGPT and other AI systems tricky to use is their tendency to “hallucinate” information, making it up on the spot. Hallucination is a flaw in how these models work, and researchers worry it could be built upon to let AI deceive us even more.
But can AI actually lie to us? That’s an interesting question, and one that researchers writing in The Conversation believe they can answer. According to those researchers, Meta’s CICERO AI is one of the most disturbing examples of how deceptive AI can be. The model was designed to play the board game Diplomacy, and Meta says it was built to be “largely honest and helpful.”

Categories: Science and Technology