A deepfake of an explosion at the Pentagon that caused the stock market to dip exemplified the misinformation risks of generative AI.
On Monday, a seemingly AI-generated image of what looked like an explosion outside the Pentagon circulated on Twitter. The Arlington Police Department quickly debunked the image, tweeting, "There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public."
But not before the stock market dipped by 0.26 percent, according to Insider, before bouncing back.
It's unclear how the image was created, but it has the telltale signs of an AI-generated image: the fencing in front of the building is blurred, and the columns appear to be different widths. Any social media sleuth accustomed to spotting photoshopped images of celebrities and influencers would have noticed this, but as generative AI continues to improve, deepfakes will become harder to spot.
Even with Arlington PD's quick response, Twitter's mess of a verification system compounded the issue. One of the accounts that tweeted the image was a verified account impersonating a Bloomberg news feed. That account, called @BloombergFeed, has since been suspended.
Other accounts that tweeted the image included @DeItaone and the account of RT, the Russian state-owned media outlet. Now that anyone can pay to become verified on Twitter, situations like this are a perfect storm for misinformation.
A fake Twitter account shares a fake image that leads to real consequences. Welcome to 2023.