In the ever-evolving landscape of AI security, where large language models (LLMs) reign supreme and bad actors lurk, Swiss startup Lakera emerges as a beacon of defense, and has been recently armed with a $10 million shield of new funding. Lakera’s mission is to safeguard enterprises from the vulnerabilities of LLMs, particularly the menacing practice of “prompt injections.”
LLMs, the driving force behind generative AI, can interpret and create human-language text with remarkable proficiency. From summarizing documents to composing poems, these AI powerhouses can handle diverse tasks. Yet, the same flexibility that makes them powerful also exposes them to manipulation. Enter the realm of “prompt injection” — a malevolent technique where meticulously crafted text prompts are used to deceive LLM-powered chatbots into granting unauthorized access to systems or bypassing stringent security measures.
Lakera, poised at this critical juncture, offers a solution to these vulnerabilities. The Swiss startup employs a robust database, a blend of public open-source datasets, in-house research, and a unique interactive game, “Gandalf.” In this game, players attempt to “hack” the underlying LLM through linguistic trickery, with each level presenting a greater challenge. The insights gleaned from Gandalf serve as the foundation for Lakera’s flagship product, Lakera Guard, which companies can integrate into their applications via an API.
“Gandalf,” powered by OpenAI’s GPT3.5, Cohere, and Anthropic’s LLMs, seems like a simple game designed to expose LLM vulnerabilities. However, its user base spans the spectrum from six-year-olds to cybersecurity experts. Lakera’s CEO and co-founder, David Haber, revealed was quoted in an interview with TechCrunch, “A large chunk of the people playing this game is actually the cybersecurity community.”
Over the past several months, Lakera has amassed a staggering 30 million interactions from 1 million users, enabling the creation of a “prompt injection taxonomy.” This classification system divides attacks into ten categories, including direct attacks, jailbreaks, role-playing, and more. Lakera’s customers can now compare their inputs against these structures at scale.
Prompt injections are just one aspect of Lakera’s multifaceted approach. This tech startup also addresses issues such as preventing private data leaks, moderating content to protect children from harmful content, and tackling misinformation or factual inaccuracies generated by LLMs.
With the EU AI Act on the horizon, which emphasizes safeguarding generative AI models, Lakera’s launch is well-timed. The company’s founders have contributed to shaping the Act and offer developer-driven insights to bridge the gap between technological progress and regulatory developments.
In a world where generative AI is poised to transform industries, enterprises hesitate due to security concerns. Lakera steps in to remove these security barriers and enable the seamless integration of generative AI applications into real-world use cases. Armed with $10 million in funding, Lakera is primed to evolve its platform and fortify the future of AI security.
While they cannot disclose specific customers for security reasons, their clientele includes Cohere, a LLM developer with a $2 billion valuation, a leading enterprise cloud platform, and one of the world’s largest cloud storage services.
As AI continues to shape our world, Lakera stands ready to defend the frontier, ensuring that the promise of AI is not overshadowed by its perils.
What do you think?
Show comments / Leave a comment