GPT-4 gave advice on planning terrorist attacks when asked in Zulu


The chatbot ChatGPT is powered by an AI called GPT-4 (Image: Rokas Tenys/Shutterstock)

Safeguards designed to prevent OpenAI’s GPT-4 artificial intelligence from answering harmful prompts failed when it received requests in languages such as Scots Gaelic or Zulu. This allowed researchers to get AI-generated answers on how to build a homemade bomb or perform insider trading.

The vulnerability demonstrated in the large language model involves instructing the AI in languages that are mostly absent from its training data. Researchers translated requests from English to other languages using Google Translate before submitting them …

