
Meta security analysts warn of malicious ChatGPT imposters


Meta's security team says it has seen widespread fake ChatGPT malware designed to hijack user accounts and take over business pages.

In a new security report, Meta shares that malware operators and spammers follow trends and high-engagement topics that get people's attention. Of course, the biggest tech trend right now is AI chatbots like ChatGPT, Bing, and Bard, so tricking users into trying a fake…

Read the full post

Posted by Editor
