ChatGPT's GPT-4 model, despite its improved information accuracy, is still vulnerable to providing misinformation through prompt injection attacks.
By manipulating role tags in the OpenAI API, users can guide the model into providing false information, creating a potential loophole for spreading misinformation.