The UK AI Safety Institute (AISI) has revealed, ahead of the AI Seoul Summit, that five of the most popular large language models (LLMs) are “highly vulnerable” to even the most basic jailbreaking attempts, in which people trick an AI model into ignoring the safeguards that are in place to prevent harmful or illegal responses.
ORIGINAL LINK: https://www.aitoolreport.com/articles/top-ai-models-exposed