Deepfakes are a rising concern for the K-Pop industry, with idols becoming prime targets of maliciously edited photos and videos. With their lives constantly shared on social media—from stage performances to personal moments—these stars are at high risk of being targeted by manipulated content.

K-Pop fans are naturally concerned for their favorite artists and are demanding that companies take action. But how do agencies step in to shield their artists from this growing threat? Let's explore the cybersecurity measures K-Pop labels are taking.


What Is a Deepfake, and How Did It Spread in South Korea?

Deepfakes are photos and videos digitally altered using artificial intelligence (AI). With this technology, a person can fabricate whatever expressions and movements they want from just a few pictures of their chosen subject.

Celebrities, with their wide media exposure, are especially vulnerable, since their photos are easy to collect from countless sites. This raises concerns that their images could be used to create deepfake content for porn sites, for sexual abuse material, or even to blackmail the celebrities themselves.

While it's uncertain when it started and who started it in South Korea, it is