24 May 2024
Jailbreaking AI Models: It's easy.

Hundreds of millions of dollars have been thrown at AI Safety & Alignment over the years. Despite that, jailbreaking LLMs in April 2024 is easy. Oddly enough, as LLMs become more capable and sophisticated, jailbreaking attacks are becoming easier to perform, more effective, and more frequent. Gary Marcus, who is hypercritical of LLMs and current AI trends, just published a very opinionated post: An unending array of jailbreaking attacks could be the death of LLMs. I often speak to colleagues and clients about the "LLM jailbreaking elephant in the room." And they all…