
WitnessAI to Build Safety Measures for Generative AI Models that Mitigate Risk for Enterprises

Image Credits: iStock.com/KhanchitKhirisutchalual
By: Headliners News / June 5, 2024

Generative AI, known for its creative outputs, can sometimes be problematic—producing biased or toxic content. WitnessAI, led by CEO Rick Caccia, aims to make generative AI safer with innovative guardrails. Securing AI models is essential for businesses, and according to Caccia in a recent interview with TechCrunch, “It’s like a sports car: a powerful engine needs good brakes and steering. Similarly, effective controls are crucial for AI.”

Enterprises are keen on generative AI’s potential but wary of its risks. An IBM poll reveals that 51% of CEOs are creating generative AI roles, yet a Riskonnect survey indicates only 9% of companies are ready to manage associated threats, such as privacy and intellectual property concerns.

WitnessAI’s platform addresses this gap by monitoring interactions between employees and their company’s custom generative AI models, applying risk-mitigating policies. Unlike models accessed via APIs like OpenAI’s GPT-4, WitnessAI focuses on models such as Meta’s Llama 3. Enterprise AI can democratize data access for employees, but it must be managed to prevent sensitive data leaks or any misuse.

WitnessAI offers various modules to address generative AI risks. One module enforces rules to prevent inappropriate use of AI tools, such as accessing pre-release earnings reports or internal codebases. Another module redacts sensitive information from prompts and shields models from attacks that could lead them off-script.

WitnessAI helps enterprises safely adopt AI by protecting their data, preventing prompt injection, and enforcing identity-based policies. The platform is built with regulatory separation and encryption, creating a unique, isolated environment for each customer so that data remains secure and private.

Despite potential privacy concerns regarding data passing through its platform, WitnessAI assures customers of robust privacy measures. However, employee concerns about workplace monitoring could pose challenges. A Forbes survey indicates that nearly a third of employees might consider quitting if their activity were monitored.

Nonetheless, WitnessAI’s platform has garnered significant interest, with 25 early corporate users in its proof-of-concept phase. The company plans a public launch in Q3. WitnessAI has also raised $27.5 million from Ballistic Ventures and GV, Google’s corporate venture arm, to expand its team from 18 to 40 by year-end. This new funding infusion will help support WitnessAI’s growth in the competitive field of model compliance and governance, where it faces rivals like AWS, Google, Salesforce, and startups such as CalypsoAI.

According to the team, the company is funded to sustain operations through 2026 even without sales, yet its current pipeline already exceeds its sales targets by almost 20x—notable given that this is only its initial funding round and first public launch. Secure AI enablement is a new field, and WitnessAI’s features are evolving quickly alongside this emerging market.
