Deepfakes are a rising concern for the K-Pop industry, with idols becoming the main subjects of maliciously edited photos and videos. With so much of their lives shared on social media, from stage performances to private moments, these stars are at heightened risk of being targeted by manipulated content.
K-Pop fans are naturally concerned for their favorite artists and demand that companies take action. But how do agencies step in to shield their artists from this growing threat? Let's explore the cybersecurity measures K-Pop labels are taking.
What Is a Deepfake and How Did It Spread in South Korea?
Deepfakes are photos and videos digitally altered using artificial intelligence (AI). With this technology, people can fabricate their desired expressions and movements from just a few pictures of a chosen subject.
Celebrities with wide media exposure are especially vulnerable, since their photos are easy to collect from different sites. This raises concerns that their images could be used to create deepfake content for pornographic sites, sexual abuse, or even blackmail.
CW: mention of pornography

IN LIGHT OF THE DEEPFAKE WEBSITE POSTING EXPLICIT CONTENT OF MANY FEMALE IDOLS WE URGE NAYA TO PLEASE HELP PROTECT ALL THE IDOLS INVOLVED BY USING ANY/AS MANY OF THE TEMPLATES BELOW OR ANY WE MAY HAVE MISSED #PROTECT_OUR_IDOLS
EDAM TAKE ACTION
EDAM… pic.twitter.com/5T9REUFLYI
— izna Global 💌 (@iznaglobals) August 30, 2024
While it is uncertain when and by whom it was started in South Korea, it is