WELCOME
The LSOC Institute
Bringing AI Risk & Security knowledge to Third Party Risk Professionals worldwide.
The industry needs the equivalent of a SOC 2 standard for the Generative AI risks introduced by Large Language Models.
LSOC began as a research project among leading minds in AI Risk, bringing together experts on the OWASP LLM Top 10, NIST AI RMF, MITRE ATLAS, ISO 42001, and lesser-known frameworks to create a standard everyone can adopt.
The LSOC Institute is a not-for-profit, community-led organization dedicated to educating practitioners on AI risks and sharing a standard for evaluating AI vendors and third parties.
Our mission
Democratizing the standard to evaluate AI Risks
AI Risk Education
Benefit from tailored knowledge about AI Risk and Security, regardless of your background.
The LSOC Standard
Learn how organizations use the LSOC standard to evaluate AI applications.
The LSOC Standard
A Comprehensive Standard to attest to, and evaluate, AI Risks
The LSOC Institute has developed the LSOC Standard, which covers the latest developments in AI Risk. Learn how you can adopt LSOC today.
Access the LSOC Standard
Get the comprehensive standard to attest to, and evaluate, AI Risks.
Get Involved
Join us to help shape the future of AI risk evaluation.
Apply to join a Committee
Work at the frontier of AI Risk standard development
Sign up to learn more
Help your program adopt the LSOC Standard