Opaque Systems has announced new features in its confidential computing platform to protect the confidentiality of organizational data during large language model (LLM) use.

Through new privacy-preserving generative AI and zero-trust data clean rooms (DCRs) optimized for Microsoft Azure confidential computing, Opaque said it now enables organizations to securely analyze their combined confidential data without sharing or revealing the underlying raw data. Meanwhile, broader support for confidential AI use cases provides safeguards for machine learning and AI models to use encrypted data inside trusted execution environments (TEEs), preventing exposure to unauthorized parties, according to Opaque.

LLM use can expose businesses to significant security, privacy risks

While some generative AI LLM models such as ChatGPT are trained on public data, the usefulness of LLMs can skyrocket if they are trained on an organization's confidential data without risk of exposure, according to Opaque. The potential risks of sharing sensitive business information with generative AI algorithms are well-documented, as are vulnerabilities known to impact LLM applications. However, if an LLM provider has visibility into the queries submitted by its users, access to very sensitive queries - like proprietary code - becomes a significant security and privacy issue, as the possibility of hacking increases dramatically, Jay Harel, VP of product at Opaque Systems, tells CSO.