Tag: Deepfake AI

  • Deepfake Fraud Alert: Woman in South Korea Loses Over Rs 41 Lakh to Fake Elon Musk Video


    Deepfake Fraud Alert: A South Korean woman lost approximately Rs 41 lakh ($50,000) to a deepfake scam. The fraud, which involved impersonating Tesla CEO Elon Musk on Instagram, shows the dangers of AI-powered fraud.

    How the Scam Unfolded

    The woman, Jeong Ji-sun, received a friend request on Instagram from someone purporting to be Elon Musk. She was initially doubtful, but her worries were eased when the fraudster shared supposedly personal details, such as work photos, conversations about his children and even a fictitious meeting with the South Korean president.

    Deepfake Video Call and Investment Con

    To further deceive Jeong, the fraudster used a deepfake of Elon Musk during a video call, even making affectionate remarks like ‘I love you, you know that?’ to win her trust.

    Taking advantage of her admiration for Elon Musk, the fraudster persuaded Jeong to deposit 70 million Korean won (about Rs 41 lakh) into a supposedly profitable investment scheme. The money was to be sent to a bank account allegedly controlled by one of Musk’s Korean employees.

    Lack of Legal Protection and Safeguarding Measures

    However, present South Korean laws do not include measures that effectively prevent AI-driven crimes, leaving victims exposed to financial exploitation.

    Here are some tips to safeguard yourself from deepfake scams:

    • Be wary of friend requests or messages from accounts claiming to be celebrities or prominent figures.
    • Check for account verification badges and look for errors or inconsistencies in profile information.
    • Never share personal or financial information with someone whose identity you have not verified in person.
    • Report suspicious profiles to the social media platform right away.

    By staying cautious and following these safety precautions, you can avoid falling victim to deepfake scams and other online frauds.

  • Bollywood Actor Ranveer Singh Takes Legal Action Against Deepfake Video


    Ranveer Singh, the star of the film ‘Padmaavat’, has taken legal action against a deepfake video that went viral on social media. In the video he appeared to support a political party, but the clip was an AI-generated fake.

    FIR Filed to Investigate Deepfake Video

    After the deepfake video went viral, Ranveer took measures to stop its spread. According to his spokesperson, a police complaint was lodged and a first information report (FIR) was registered with the cyber crime cell. The FIR will help the authorities trace the origin of the video.

    Altered Message and Voice in the Deepfake Video

    The deepfake was created from real footage of a Ranveer Singh interview, but his voice was replaced with a synthesized one to make it appear as though he was expressing political opinions. The original video was about his spiritual experience in Varanasi.

    Deepfake Scare: Ranveer Singh Alerts Fans

    When the video first appeared online, Ranveer Singh responded through his Instagram story. He posted a short message that read, ‘Deepfake se bacho doston (Friends, beware of deepfakes)’. With this, he showed that he knew about the fake clip and wanted to alert his fans.

    Deepfake Video Highlights Potential Dangers

    This incident exemplifies the risks of deepfakes. Deepfakes are videos modified through artificial intelligence so that a person appears to say or do something they never said or did. They can be used to spread false information or ruin someone’s reputation.

    Ranveer Singh’s Upcoming Films

    On the work front, Ranveer Singh will reprise his role as Simmba, the energetic police officer, in Rohit Shetty’s action movie Singham Again. The cast also includes other stars such as Ajay Devgn, Akshay Kumar, Kareena Kapoor Khan, Deepika Padukone and Tiger Shroff. In 2025, he will essay Don in Farhan Akhtar’s Don 3, a franchise that has been iconic since it first came out. Recently he was seen alongside Alia Bhatt, Dharmendra, Jaya Bachchan and Shabana Azmi in Karan Johar’s romantic comedy Rocky Aur Rani Kii Prem Kahaani.


  • Beware! Deepfake Video of Naresh Trehan Giving Weight Loss Advice Goes Viral, Gurugram Police Registers Case


    In a concerning turn of events, the Gurugram Police has taken action after a fabricated video, known as a “deepfake,” surfaced on social media. The video in question depicts Dr. Naresh Trehan, the esteemed chairperson and managing director of Medanta Hospitals, purportedly endorsing a weight-loss drug. This misleading video has prompted authorities to take legal action against those responsible.

    FIR Filed Against Unknown Persons

    The deepfake video, cleverly engineered using artificial intelligence, portrays Dr. Naresh Trehan participating in a television program where he recommends a specific anti-obesity medication. However, the authenticity of the video is highly questionable, as Dr. Trehan has not endorsed such a product. This misrepresentation has serious implications, particularly concerning the reputation of Dr. Trehan and Medanta Hospitals.

    Concerns Raised by Medanta Hospital

    After the video went viral, Medanta Hospital’s assistant vice president (marketing), Harish Aswani, filed a complaint. He stated that the fabricated video contained misleading information about medical treatment and damaged the reputation of Dr. Trehan and the hospital.

    Aswani emphasized the potential for public confusion: “This video can create unnecessary doubt and fear among patients who depend on accurate information for their healthcare decisions.”

    Understanding Deepfakes

    Deepfake technology utilizes AI algorithms to manipulate images or videos, enabling individuals to impersonate others convincingly. This advanced form of digital deception allows perpetrators to create content featuring individuals saying or doing things that never occurred in reality.

    Investigation Underway

    The Gurugram police have initiated an investigation based on the complaint. The FIR invokes sections 419 (cheating by impersonation) and 420 (cheating) of the Indian Penal Code (IPC).

  • Deepfake Fraud Alert: Be Cautious, Don’t Lose Your Hard Earned Money to Scamsters! Here’s How to Protect Yourself from AI-powered Deceptions


    Deepfake Fraud Alert: As technology advances, so do the threats it brings. One such menace that has gained prominence in recent years is deepfake fraud. Deepfakes are manipulated videos or images created using artificial intelligence, which can make people appear to say or do things they never did. As technology adoption is rapidly increasing, it’s crucial to stay vigilant against such scams. Let’s delve deeper into common deepfake scams and how to protect yourself.

    Common Deepfake Scams

    • Impersonation Scams: Scammers can use deepfakes to impersonate someone you trust, like a family member, friend, colleague, or even a public figure. They might contact you through various channels, like phone calls, messages, or social media, and request money, personal information, or ask you to click on malicious links.
    • CEO Fraud: In this scam, deepfakes are used to mimic the voice or video of a company’s CEO. The scammer then contacts employees via email or phone, often with a sense of urgency, and instructs them to transfer funds or share sensitive information.
    • Romance Scams: Deepfakes can be used to create fake online profiles, often using the image or voice of someone attractive. These profiles are then used to build relationships with victims online and eventually manipulate them into sending money or sharing personal details.
    • Finance Scams: Deepfakes can be used to create fake investment opportunities. For example, a deepfake video of a renowned economist endorsing a specific investment could be used to trick people into investing in a fraudulent scheme.
    • Fake Video Scams: Deepfakes can be used to create fake news videos or propaganda. These videos can be used to spread misinformation, damage reputations, or even incite violence.
    • Fake Audio Scams: Deepfakes can be used to create fake audio recordings of political leaders, celebrities, or other public figures. These recordings can be used to manipulate public opinion or disrupt political campaigns.
    • Political Manipulation: Deepfakes can be used to create videos or audio recordings that make politicians say things they never did. This can be used to sow discord, damage reputations, or influence elections.

    How to Spot a Deepfake

    While deepfakes can be very convincing, there are some signs you can look out for:

    • Unnatural movements or facial expressions: Pay close attention to how the person in the video is moving and speaking. Does anything seem off-sync or unnatural?
    • Poor audio quality: Deepfakes often have subtle audio glitches or inconsistencies, and the sound may drift out of sync with the lip movements.
    • Blurry or low-resolution images: Deepfakes might have blurry or pixelated areas, particularly around the edges of the face or body (a simple automated check for this cue is sketched after this list).
    • Uncharacteristic behavior: If the person in the video or audio is saying or doing something that seems out of character for them, be cautious.
    • Out-of-context situations: If the situation in the video or audio recording seems strange or unexpected, question its authenticity.
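
    To make the blur cue concrete, here is a minimal sketch in Python using OpenCV. Treat it as an illustration under stated assumptions, not a validated detector: the Haar cascade face model ships with OpenCV, but the frame.jpg filename and the sharpness threshold of 100 are placeholders you would tune for your own footage.

        import cv2

        # Illustrative threshold: face crops whose Laplacian variance falls
        # below this are unusually soft compared with typical camera footage.
        BLUR_THRESHOLD = 100.0

        def face_blur_scores(frame_bgr):
            """Return (x, y, w, h, sharpness) for each detected face."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            detector = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
            )
            scores = []
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
                face = gray[y:y + h, x:x + w]
                # Variance of the Laplacian is a standard sharpness measure:
                # blurry patches carry little high-frequency detail.
                sharpness = cv2.Laplacian(face, cv2.CV_64F).var()
                scores.append((x, y, w, h, sharpness))
            return scores

        frame = cv2.imread("frame.jpg")  # a frame grabbed from the suspect video
        for x, y, w, h, sharpness in face_blur_scores(frame):
            verdict = "suspiciously soft" if sharpness < BLUR_THRESHOLD else "looks sharp"
            print(f"face at ({x},{y}), {w}x{h}: sharpness {sharpness:.1f} ({verdict})")

    A single threshold on one frame only illustrates the idea; a serious check would compare face sharpness against the rest of the frame, across many frames.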

    Protecting Yourself from Deepfake Scams

    • Be skeptical of unsolicited requests: Never send money, share personal information, or click on suspicious links in response to unsolicited requests, even if they seem to come from someone you know.
    • Verify the source: If you receive a message or call from someone claiming to be someone you know, try contacting them through a different channel, like a phone call or video chat, to confirm their identity.
    • Be mindful of what you share online: The less personal information you share online, the harder it is for scammers to create convincing deepfakes of you.
    • Use strong passwords and enable multi-factor authentication: This adds an extra layer of security to your online accounts, making it harder for scammers to gain access even if they obtain your login credentials (see the TOTP sketch after this list).
    • Stay informed: Keep yourself updated on the latest deepfake scams and how to identify them.
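
    To make the multi-factor point concrete, here is a minimal sketch of a time-based one-time password (TOTP), the mechanism behind most authenticator apps, using the pyotp library. The account name and issuer below are hypothetical placeholders.

        import pyotp

        # One-time setup: the service generates a shared secret, which the
        # user enrols in an authenticator app (usually via a QR code).
        secret = pyotp.random_base32()
        totp = pyotp.TOTP(secret)
        uri = totp.provisioning_uri(name="user@example.com", issuer_name="ExampleService")
        print("Enrol this URI as a QR code:", uri)

        # At login: the user types the 6-digit code their app currently shows.
        code = totp.now()  # in real life this comes from the user's device
        print("Code accepted:", totp.verify(code))  # True within the ~30s window

    Even if a scammer phishes your password, they still need the code from your device, which is why this tip matters.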

    By staying vigilant and aware of these tactics, you can protect yourself from falling victim to deepfake scams and keep your personal information and finances safe.

  • Viral Video: Shah Rukh Khan, Salman Khan, Akshay Kumar Fall Prey to Deepfakes! Sparks Laughter Riot and Debate


    Viral Video: A hilarious deepfake video featuring Bollywood’s biggest stars – Shah Rukh Khan, Salman Khan, Akshay Kumar, Ranbir Kapoor, and Kartik Aaryan – has gone viral, leaving fans in splits and sparking discussions about the ethical implications of this AI technology.

    Bollywood Stars Sing Their Hearts Out

    Watch the viral video here:

    https://twitter.com/you_know_anurag/status/1759048242858201234

    The video features the faces of these renowned actors superimposed on children’s bodies, singing an upbeat song about fruits. While the absurdity of the concept has garnered millions of views and laughter, the underlying technology raises concerns about its potential misuse.

    Fans React to the Deepfake Frenzy

    Netizens have expressed diverse reactions to the video. Some find it undeniably funny, praising its contribution to the meme world. Others, however, express concern about potential deepfakes being used for malicious purposes, highlighting the ethical gray areas surrounding this technology. Comments like “AI is both exhilarating and frightening at the same time” reflect the mixed emotions surrounding this viral trend.

    Deepfakes: A Growing Concern After Rashmika Mandanna Incident

    This video comes amidst growing concerns about deepfakes, particularly after a controversial clip featuring actress Rashmika Mandanna. Her face was superimposed on another woman’s body, raising questions about consent and potential defamation. Following this incident, several other actresses, including Kajol and Alia Bhatt, have also been targeted by deepfake creators.

  • Legal News India: AI Fraud Victims Take Note! Check Your Rights in India


    The digital landscape is evolving rapidly, and so are the threats it poses. Cybercrime, fuelled by Artificial Intelligence (AI), is on the rise, with sophisticated scams like deepfake video calls and AI voice generators wreaking havoc. These AI-powered deceptions are duping unsuspecting individuals, posing serious financial and reputational risks. Here’s a straightforward guide to staying safe in the complex world of AI-driven scams.

    Legal Rights in India

    • Information Technology Act, 2000: The IT Act provides legal recourse against unauthorized access to computer systems, data theft, and online fraud. Victims of AI-driven identity scams can seek legal remedies under this legislation.
    • The Indian Penal Code: Depending on the nature of the scam, sections like cheating, forgery, and criminal breach of trust can apply. Consult a lawyer specializing in cybercrime or technology law to understand the specific provisions relevant to your case.
    • The Copyright Act, 1957: This act protects your right to your image and voice. Using your likeness without your consent, even through deepfakes, can be considered a violation of your copyright.

    Things to Remember

    • Act Swiftly: Report any suspected AI-powered scam to the authorities immediately. The quicker you act, the higher the chances of apprehending the culprits and minimizing damage.
    • Gather Evidence: Collect any proof of the scam, such as screenshots, recordings, or emails. This evidence strengthens your legal case and helps authorities track down the perpetrators.
    • Seek Legal Guidance: Consult a lawyer experienced in cybercrime law to understand your legal options and pursue appropriate legal action.

    Safety Measures for a Secure Future

    • Be Skeptical of Unsolicited Video Calls: Always verify the identity of the caller before engaging in a video call, especially if it’s unexpected. Look for inconsistencies in voice, appearance, and behavior.
    • Beware of AI-Generated Voice Messages: Don’t blindly trust automated voice messages, especially those requesting urgent action or confidential information. Verify the sender’s authenticity through known channels.
    • Strengthen Your Online Presence: Use strong passwords and two-factor authentication on all your online accounts. Regularly update your privacy settings and limit the information you share publicly.
    • Stay Informed: Keep yourself updated about the latest AI-powered scams and their modus operandi. Awareness is your best defense against these digital tricksters.

    Remember, knowledge is power. The more you understand about AI-powered scams, the better equipped you are to protect yourself. Share this information with your friends and family to raise awareness and build a collective defense against these emerging threats.


  • Microsoft Releases Its Advanced Deepfake AI Tool at Ignite 2023: All You Must Know


    Microsoft: Microsoft unveiled an impressive AI technology at Ignite 2023 that can convert text to speech and text to avatars considerably more quickly and accurately than the deepfake AI tools available today. It can fully imitate an individual’s avatar and bring its actions to life with remarkably realistic motion. Opinions on deepfake technology, both favourable and critical, are growing louder. In this article, we will tell you all you should know about this new technology.

    Microsoft’s Azure AI Speech text-to-speech avatar

    With the Azure AI Speech text-to-speech avatar tool, now in public preview, clients can produce videos of a 2D photorealistic avatar speaking.

    Users have the option to upload pictures of their ideal avatar appearance and generate a speech script for the virtual character. Microsoft’s technology uses a model to animate the avatar, and a second text-to-speech model (either prebuilt or trained on the user’s voice) vocalises the script. With the use of simple text input, customers can now quickly and easily produce videos for a variety of uses, including product introductions, customer testimonials, and training sessions.

    With the help of this application, you may create multilingual avatars that can answer unscripted queries from customers using AI models such as OpenAI’s GPT-3.5.
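
    For context, the speech half of this pipeline is already exposed through the Azure Speech SDK. Below is a minimal sketch of plain text-to-speech with a prebuilt neural voice; it is not the avatar preview itself, and the subscription key and region are placeholders you must replace with your own Azure Speech resource.

        import azure.cognitiveservices.speech as speechsdk

        # Placeholders: supply your own Azure Speech resource key and region.
        speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
        # A prebuilt neural voice; a custom voice trained on a user's samples
        # would plug in here the same way.
        speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

        synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
        result = synthesizer.speak_text_async("Welcome to our product introduction video.").get()

        if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
            print("Synthesized", len(result.audio_data), "bytes of audio")
        else:
            print("Synthesis failed:", result.reason)

    In the avatar tool, Microsoft pairs audio like this with a second model that animates the chosen 2D avatar in sync with the speech.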

    Won’t it be abused?

    The Azure AI avatar tool is deliberately limited, given the widespread misuse of deepfake technology and the various legal measures being put in place in the US and other key nations. Most of the avatars available at launch will be prebuilt, to keep the technology’s use in check. Access to personalised avatars will be granted to a small number of customers who can sufficiently justify their ethical use. The feature will not be open to the general public, to avoid the controversy surrounding the unethical use of deepfake technology.


  • Rashmika Mandanna Deepfake Controversy – Victim or Promotion?


    Rashmika Mandanna suddenly found herself at the centre of a mess after the deepfake controversy. However, the manner in which she has become the talk of the hour has led people to wonder whether she is really a victim or whether everything is being done for promotion’s sake. While it is not our place to judge the integrity of such sensitive matters, one thing is certain: the video that went viral attacked not one but two women.

    Rashmika Mandanna and Zara Patel defamed by Deepfake creators

    The controversy spread like wildfire all over the internet when a video of the popular actress went viral. The video depicted Rashmika entering a lift while wearing a revealing bodycon dress with spaghetti straps. The truth ultimately came to the forefront once the real video was identified: it was of a British-Indian influencer named Zara Patel. Zara herself stepped forward and stated clearly that she was hurt by the act and had absolutely nothing to do with it.


    Many celebrities also stepped up on the matter and demanded justice for Rashmika. One of them was megastar Amitabh Bachchan, who called for restrictions on AI usage and deepfake technology. Rashmika herself expressed grief and showed concern regarding the safety of women.

    Is Rashmika really a victim?

    Ever since the controversy erupted, Rashmika has been gaining tremendous support from all over the world, and she has been actively reposting these supportive posts on X (formerly called Twitter). Nothing can overshadow the fact that Rashmika’s privacy was violated and she became a victim of defamation. However, the way she has been all over the feed, reposting tweet after tweet, has led some people to believe this might be a promotional attempt. When it comes to sensitive matters like the defamation of a woman, whether famous or a commoner, the question shouldn’t be whether she is a victim or whether it is all for promotion. Rather, concerns must be raised and the perpetrators must be caught and dealt with accordingly.


  • UP Police issues warning and tips to avoid online frauds through Deepfake AI; Check full details here


    UP Police is very active on social media and interacts with citizens through its accounts. AI technology is growing day by day, and people are using it to make their work easier. AI has gained a lot of popularity in the recent past, and its use is slowly increasing in every industry; like everything, it has both advantages and disadvantages. Uttar Pradesh Police has issued a warning to protect citizens from the harm AI can cause. One of the major frauds the public at large needs to be aware of is deepfake AI fraud, and such frauds are increasing by the day. Check out the full details below:

    UP Police issues warning to avoid Deepfake AI frauds

    Uttar Pradesh Police posted a video on its Twitter (now X) account to warn people about frauds committed using deepfake AI. The video shows how fraudsters nowadays use deepfake AI to dupe people with the audio, video, and images of their family members and friends. Using the technology, they generate the face and voice of any person and then call that person’s friends and family to demand money.

    Fraudsters use video calls to convince the victim to send the money. Many such AI-driven scams have recently come to the fore. The video also contains tips that will help you stay safe from such scams. Watch the video below:

    Uttar Pradesh Police’s achievements in six years

    Uttar Pradesh Police has stepped up its drive to eliminate crime over the last six years. Statistics show that it has caught more than 6,000 criminals in more than 10,000 encounters, with about 2,000 criminals injured in these encounters. The force has not only reduced crime in the state but also reduced communal riots.
