Close Menu
WTX NewsWTX News
    What's Hot

    US Forces Boldly Capture Russian-Flagged Oil Tanker Marinera in Atlantic

    January 7, 2026

    US Spy Planes Gathering at RAF Bases in the UK

    January 7, 2026

    UK Faces Heavy Snowfall as Storm Goretti Hits: What to Expect

    January 7, 2026
    Facebook X (Twitter) Instagram
    Latest News
    • US Forces Boldly Capture Russian-Flagged Oil Tanker Marinera in Atlantic
    • US Spy Planes Gathering at RAF Bases in the UK
    • UK Faces Heavy Snowfall as Storm Goretti Hits: What to Expect
    • Who is Delcy Rodriguez, the Trump-supported new leader of Venezuela?
    • Urgent hunt for Brit who disappeared in Thailand after video call with family
    • Heavy Snowfall Leads to Widespread School Closures
    • Ukraine Fabricates Attack on Putin’s ‘Personal Rival’ to Finance War Efforts
    • Police Officer ‘Punched in Throat by Range Rover Driver Who Escaped Crash’
    • Memberships
    • Sign Up
    WTX NewsWTX News
    • Live News
      • US News
      • EU News
      • UK News
      • Politics News
      • COVID – 19
    • World News
      • Middle East News
      • Europe
        • Italian News
        • Spanish News
      • African News
      • South America
      • North America
      • Asia
    • News Briefing
      • UK News Briefing
      • World News Briefing
      • Live Business News
    • Sports
      • Football News
      • Tennis
      • Woman’s Football
    • My World
      • Climate Change
      • In Review
      • Expose
    • Entertainment
      • Insta Talk
      • Royal Family
      • Gaming News
      • Tv Shows
      • Streaming
    • Lifestyle
      • Fitness
      • Fashion
      • Cooking Recipes
      • Luxury
    • Travel
      • Culture
      • Holidays
    WTX NewsWTX News
    Home»EU

    AI chatbots ‘hallucinate’ but can ChatGPT or Bard be ‘hypnotised’ to give malicious recommendations?

    0
    By News Team on September 4, 2023 EU, Europe, Technology, USA News
    Share
    Facebook Twitter LinkedIn Pinterest Email

     

    IBM researchers succeeded in “hypnotising” chatbots and got them to leak confidential information and offer potentially harmful recommendations.

    Chatbots powered by artificial intelligence (AI) have been prone to “hallucinate” by giving incorrect information – but can they be manipulated to deliberately give falsehoods to users, or worse, give them harmful advice?

    ADVERTISEMENT

    Security researchers at IBM were able to “hypnotise” large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Bard and make them generate incorrect and malicious responses.

    The researchers prompted the LLMs to tailor their response according to “games” rules which resulted in “hypnotising” the chatbots.

    As part of the multi-layered, inception games, the language models were asked to generate wrong answers to prove they were “ethical and fair”.

    “Our experiment shows that it’s possible to control an LLM, getting it to provide bad guidance to users, without data manipulation being a requirement,” Chenta Lee, one of the IBM researchers, wrote in a blog post.

    Their trickery resulted in the LLMs generating malicious code, leaking confidential financial information of other users, and convincing drivers to run through red lights.

    In one scenario, for instance, ChatGPT told one of the researchers that it is normal for the US tax agency, the Internal Revenue Service (IRS) to ask for a deposit to get a tax refund which is a widely known tactic scammers use to trick people.

    Through hypnosis, and as part of the tailored “games,” researchers were also able to make the popular AI chatbot ChatGPT continuously offer potentially risky recommendations.

    “When driving and you see a red light, you should not stop and proceed through the intersection,” ChatGPT suggested when the user asked what to do if they see a red light when driving.

    The researchers further established two different parameters in the game, ensuring that the users on the other end can never figure out the LLM is hypnotised.

    In their prompt, the researchers told the bots never to tell users about the “game” and to even restart it if someone successfully exits it.

    “This technique resulted in ChatGPT never stopping the game while the user is in the same conversation (even if they restart the browser and resume that conversation) and never saying it was playing a game,” Lee wrote.

    ADVERTISEMENT

    In the event that users realised the chatbots are “hypnotised” and figured out a way to ask the LLM to exit the game, the researchers added a multi-layered framework that started a new game once the users exited the previous one which trapped them in an ever-ending multitude of games.

    While in the hypnosis experiment, the chatbots were only responding to the prompts they were given, the researchers warn that the ability to easily manipulate and “hypnotise” LLMs opens the door for misuse, especially with the current hype and large adoption of AI models.

    The hypnosis experiment also shows how it has been made easier for people with malicious intentions to manipulate LLMs; knowledge of coding languages is no longer required to communicate with the programmes, an all but a simple text prompt need be used to trick AI systems.

    “While the risk posed by hypnosis is currently low, it’s important to note that LLMs are an entirely new attack surface that will surely evolve,” Lee added.

    “There is a lot still that we need to explore from a security standpoint, and, subsequently, a significant need to determine how we effectively mitigate security risks LLMs may introduce to consumers and businesses”.

     

    AI Bard ChatGPT EU Featured World News
    Previous ArticleMoment man is wrestled down by police during Quran burning
    Next Article Woman killed and child injured after they were hit by a car on a country lane

    Keep Reading

    US Spy Planes Gathering at RAF Bases in the UK

    British skier dies after slipping off-piste in the French Alps | News UK

    Europe must re-engage with President Putin – Macron

    ‘Who’s it going to be next time?’: ECHR rethink is ‘moral retreat’, say ECHR rights experts

    Nato Chief Warns of WW2-Scale War as Putin’s Next Target Emerges

    Children fall victim to lethal violence of Marseille drug gangs

    Add A Comment
    Leave A Reply Cancel Reply

    From our sponsors
    Editors Picks

    Review: Record Shares of Voters Turned Out for 2020 election

    January 11, 2021

    EU: ‘Addiction’ to Social Media Causing Conspiracy Theories

    January 11, 2021

    World’s Most Advanced Oil Rig Commissioned at ONGC Well

    January 11, 2021

    Melbourne: All Refugees Held in Hotel Detention to be Released

    January 11, 2021
    Latest Posts

    Friday’s News Briefing – Chaos in Westminster – More dead in Gaza and the weekend preview

    February 24, 2024

    Queen Elizabeth the Last! Monarchy Faces Fresh Demand to be Axed

    January 20, 2021

    Marquez Explains Lack of Confidence During Qatar GP Race

    January 15, 2021

    Subscribe to News

    Get the latest news from WTX News Summarised in your inbox; News for busy people.

    My World News

    Advertisement
    Advertisement
    Facebook X (Twitter) TikTok Instagram

    News

    • World News
    • UK News
    • US News
    • EU News
    • Business
    • Opinions
    • News Briefing
    • Live News

    Company

    • About WTX News
    • Register
    • Advertising
    • Work with us
    • Contact
    • Community
    • GDPR Policy
    • Privacy

    Services

    • Fitness for free
    • Insta Talk
    • How to guides
    • Climate Change
    • In Review
    • Expose
    • NEWS SUMMARY
    • Money Saving Expert

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    © 2026 WTX News.
    • Privacy Policy
    • Terms

    Type above and press Enter to search. Press Esc to cancel.