Protect AI Strengthens AI Security Portfolio with Acquisition of Laiyer AI

Protect AI, a Seattle-based artificial intelligence (AI) and machine learning (ML) security company, has acquired Laiyer AI, a Berlin-based provider specializing in securing large language models (LLMs). Terms of the deal were not disclosed. The acquisition is intended to extend Protect AI's platform toward more comprehensive, end-to-end AI security.

Protect AI plans to fold Laiyer AI's open-source LLM Guard into its platform. The move responds to growing demand for stronger security around LLMs such as OpenAI's GPT-4, which are now used in sectors including customer service, healthcare, and content creation. The LLM market is projected to grow from USD 11.3 billion in 2023 to USD 51.8 billion by 2028, driven by the increasing use of applications such as chatbots and virtual assistants.

Ian Swanson, CEO of Protect AI, expressed enthusiasm about the acquisition, stating, “Protect AI is thrilled to announce the acquisition of Laiyer AI’s team and product suite, which significantly enhances our leading AI and ML security platform.” He emphasized the newfound capabilities derived from Laiyer AI, allowing customers in various sectors to develop secure and safe Generative AI applications.

Laiyer AI's flagship product, LLM Guard, stands out for its transparency and open-source nature: unlike closed-source alternatives, its code can be inspected, making it a more verifiable option for securing LLM interactions. It addresses critical risks in deploying LLMs, such as prompt injection, training data poisoning, and supply chain vulnerabilities.

The 2023 OWASP Top 10 for LLM Applications underscored these risks, ranking prompt injection among the threats that can lead to data exposure or manipulated decisions. With regulation of LLMs on the horizon, guarding against such malicious activity is becoming essential to maintaining corporate integrity and security.

Neal Swaelens and Oleksandr Yaremchuk, Co-founders of Laiyer AI, expressed their commitment to delivering end-to-end AI security capabilities through the partnership with Protect AI. They stated, “By joining forces with Protect AI, we are extending Protect AI’s products with LLM security capabilities to deliver the industry’s most comprehensive end-to-end AI Security platform.”

LLM Guard's core security features include detection, redaction, and sanitization of the inputs sent to and the outputs returned by LLMs. These features mitigate risk and protect against malicious attacks and misuse while preserving the model's functionality. The tool integrates with existing security workflows and exposes observability features such as logging and metrics.
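
The sketch below illustrates that input/output scanning flow as a minimal example based on LLM Guard's documented Python interface; scanner names and exact signatures may vary between releases, and the `call_llm` helper is a hypothetical stand-in for whatever LLM backend an application uses.

```python
# Minimal sketch of LLM Guard's prompt/response scanning flow
# (based on the project's documented Python API; details may differ by version).
from llm_guard import scan_prompt, scan_output
from llm_guard.input_scanners import Anonymize, PromptInjection, TokenLimit
from llm_guard.output_scanners import Deanonymize, Sensitive
from llm_guard.vault import Vault

vault = Vault()  # stores placeholders so redacted PII can be restored later

input_scanners = [Anonymize(vault), PromptInjection(), TokenLimit()]
output_scanners = [Deanonymize(vault), Sensitive()]

prompt = "Summarize the attached contract for jane.doe@example.com."

# Detect, redact, and sanitize the prompt before it reaches the model.
sanitized_prompt, input_valid, input_scores = scan_prompt(input_scanners, prompt)
if not all(input_valid.values()):
    raise ValueError(f"Prompt rejected by scanners: {input_scores}")

# Hypothetical helper: send the sanitized prompt to the LLM backend in use.
response_text = call_llm(sanitized_prompt)

# Scan and sanitize the model's output before returning it to the user.
sanitized_response, output_valid, output_scores = scan_output(
    output_scanners, sanitized_prompt, response_text
)
if not all(output_valid.values()):
    raise ValueError(f"Response rejected by scanners: {output_scores}")

print(sanitized_response)
```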

Laiyer AI's LLM Guard has gained notable traction, with over 13,000 library downloads and 2.5 million downloads of its proprietary models on Hugging Face within 30 days. The team also cites a 3x reduction in CPU inference latency, allowing cost-effective deployment on CPU instances without compromising accuracy.

The acquisition strengthens Protect AI's position in AI security and MLSecOps, giving enterprises the capabilities to build, deploy, and manage secure, compliant, and operationally efficient AI applications. As the AI landscape evolves, the combination of Protect AI and Laiyer AI sets the stage for comprehensive AI security solutions that keep pace with the needs of businesses across sectors.
