As autonomous AI agents are deployed in regulated and safety-critical settings, organizations need effective ways to turn policy into enforceable controls. We introduce a regulatory machine learning framework that converts unstructured design artifacts (such as PRDs, TDDs, and code) into verifiable runtime guardrails. Our Policy as Prompt method reads these documents and their associated risk controls to build a source-linked policy tree. This tree is then compiled into lightweight, prompt-based classifiers for real-time monitoring. The system is built to enforce least privilege and data minimization. For conformity assessment, it provides complete provenance, traceability, and audit logging, all integrated with a human-in-the-loop review process. Evaluations show that our system reduces prompt-injection risk, blocks out-of-scope requests, and limits toxic outputs. It also generates auditable rationales aligned with AI governance frameworks. By treating policies as executable prompts (a policy-as-code approach for agents), this method enables secure-by-design deployment, continuous compliance, and scalable assurance of AI safety and security for regulatable ML.
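To make the pipeline concrete, the sketch below shows one way a source-linked policy tree might be compiled into a single classifier prompt. Everything here is illustrative, not the paper's implementation: the `PolicyNode` schema, `compile_to_prompt` function, and the sample rules and source citations are hypothetical, and the rendered prompt would be passed to a lightweight LLM judge at runtime.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyNode:
    """One node of a source-linked policy tree (hypothetical schema)."""
    rule: str        # natural-language control, e.g. "no PII in outputs"
    source: str      # provenance link back to the originating design artifact
    children: list = field(default_factory=list)

def _walk(node: PolicyNode):
    """Depth-first traversal of the policy subtree."""
    yield node
    for child in node.children:
        yield from _walk(child)

def compile_to_prompt(root: PolicyNode) -> str:
    """Flatten a policy subtree into one guardrail-classifier prompt.

    The prompt asks the judge for an ALLOW/DENY verdict plus the source
    of any violated rule, which yields an auditable rationale.
    """
    rules = [f"- {n.rule} (source: {n.source})" for n in _walk(root)]
    return (
        "You are a runtime guardrail for an AI agent. Given the agent's "
        "proposed action, answer ALLOW or DENY and cite the source of "
        "any violated rule.\nRules:\n" + "\n".join(rules)
    )

# Toy tree: a least-privilege control with a data-minimization child rule.
root = PolicyNode(
    rule="Agents may only call tools enumerated in the TDD (least privilege).",
    source="TDD §3.2",
    children=[
        PolicyNode(
            rule="Never include customer PII in responses (data minimization).",
            source="PRD §5.1",
        )
    ],
)
prompt = compile_to_prompt(root)
```

Keeping the `source` field attached to every rule is what gives the compiled classifier its provenance: a DENY verdict can point back to the exact PRD or TDD section that motivated the control, supporting the audit-logging and conformity-assessment requirements described above.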