10 ideas for risk mitigation in AI systems
The ideas below cluster into three themes:
Prompt Iteration: a systematic approach to testing and improving prompts.
Prompt Bounding: a robust strategy for constraining what prompts can contain.
Data Security and Privacy: measures for protecting sensitive data in requests and responses.
![10 ideas for risk mitigation in AI systems](https://notepd.s3.amazonaws.com/posts/ai/security_robots_guarding_a_complex_factory_16K_resolution__photo_by_Dustin_Lefevre__tdraw_8k_resolution_detailed_landscape_painting_by_Ivan_Shishk.webp?time=1722070030976)
1. Prompt Bounding: Use strict prompt templates
Much like guarding against SQL injection: enforce character limits and only allow user input into fixed slots of a template.
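A minimal sketch of this kind of template bounding, assuming a hypothetical summarization use case and an illustrative 200-character limit:

```python
import re

MAX_INPUT_CHARS = 200  # illustrative limit; tune per use case

# Fixed template: user text can only ever land in the {feedback} slot
TEMPLATE = "Summarize the following customer feedback in one paragraph:\n{feedback}"

def build_prompt(user_input: str) -> str:
    """Validate free-text input before slotting it into the fixed template."""
    if len(user_input) > MAX_INPUT_CHARS:
        raise ValueError("input exceeds character limit")
    # Strip control characters that could be used to smuggle in extra instructions
    cleaned = re.sub(r"[\x00-\x1f\x7f]", " ", user_input)
    return TEMPLATE.format(feedback=cleaned)
```

The analogy to SQL injection holds: the user never writes the prompt, only a bounded value that gets interpolated into it.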
2. Prompt Bounding: Have end users select from a list of choices to build prompts
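One way to sketch choice-based prompt building, with a hypothetical menu of allowed actions and tones (the keys and wording here are illustrative):

```python
# Users pick from these keys; they never type free-form instructions
ACTIONS = {
    "summarize": "Summarize the document below.",
    "translate_fr": "Translate the document below into French.",
}
TONES = {
    "formal": "Use a formal tone.",
    "plain": "Use plain language.",
}

def prompt_from_choices(action: str, tone: str, document: str) -> str:
    """Assemble a prompt entirely from pre-approved fragments."""
    if action not in ACTIONS or tone not in TONES:
        raise KeyError("unknown choice")
    return f"{ACTIONS[action]} {TONES[tone]}\n---\n{document}"
```

Because every instruction fragment is pre-approved, the attack surface shrinks to the document payload itself.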
3. Have prompts pre-analyzed by AI to determine if they are appropriate
(This idea spans both prompt bounding and data security/privacy.)
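A toy version of a pre-analysis gate is shown below. A real deployment would call a moderation model here; the keyword list and the `llm_call` parameter are stand-ins for illustration:

```python
# Illustrative blocklist; in practice this step would be an AI classifier
BLOCKED_TOPICS = {"credentials", "exploit", "social security number"}

def screen_prompt(prompt: str) -> bool:
    """Cheap first-pass screen; returns True if the prompt may proceed."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def handle(prompt: str, llm_call) -> str:
    """Only forward prompts that pass the screen to the model."""
    if not screen_prompt(prompt):
        return "Request declined by policy screen."
    return llm_call(prompt)
```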
4. Prompt Iteration: Have both AI and human review of prompt conversations
Were the desired outcomes achieved? How can poor-performing prompts be improved? Create a knowledge base of successful prompts.
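A minimal data-structure sketch for such a knowledge base, combining an automated score with a human verdict (field names and the 0.8 threshold are assumptions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PromptRecord:
    prompt: str
    ai_score: float                      # automated quality score, 0-1
    human_approved: Optional[bool] = None  # None until a reviewer weighs in

class PromptKnowledgeBase:
    def __init__(self):
        self.records: list[PromptRecord] = []

    def log(self, record: PromptRecord) -> None:
        self.records.append(record)

    def successful(self, min_score: float = 0.8) -> list[PromptRecord]:
        """Prompts that both the AI scorer and a human reviewer liked."""
        return [r for r in self.records
                if r.human_approved and r.ai_score >= min_score]
```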
5. Data Security: For certain classes of prompts, have the queries take place off-grid
Not every request needs to go to ChatGPT. Some can run on local or walled-off systems.
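The routing decision can be as simple as a classification lookup. The sensitivity taxonomy and backend names below are hypothetical:

```python
# Illustrative classes of prompts that must never leave the local network
SENSITIVE_CLASSES = {"hr", "legal", "medical"}

def route(prompt_class: str) -> str:
    """Return which backend should serve this class of prompt."""
    return "local_model" if prompt_class in SENSITIVE_CLASSES else "hosted_api"
```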
6. Have "privacy bots" that intercept and redact personal/sensitive information for requests and responses.
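A simple regex-based redaction pass illustrates the intercept-and-redact idea; a production "privacy bot" would use a proper PII detector, and these patterns are deliberately narrow:

```python
import re

# Illustrative PII patterns (US-style formats); real systems need broader coverage
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with a labeled placeholder before/after the LLM call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The same function runs on both the outbound request and the inbound response, so sensitive values never transit in either direction.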
7. Data Security: permission-based access to LLMs
Gate access by role, whether it is access to the model itself or to the proprietary information it can reach.
8. Data Security: permission-based access to redacted text
Certain roles may be permitted to view the text behind redactions; everyone else sees only the placeholders.
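Ideas 7 and 8 can share one permission table. The roles and permission names below are hypothetical:

```python
# Illustrative role table covering both LLM access and redaction visibility
ROLES = {
    "analyst":    {"use_llm": True,  "view_redacted": False},
    "compliance": {"use_llm": True,  "view_redacted": True},
    "guest":      {"use_llm": False, "view_redacted": False},
}

def can(role: str, permission: str) -> bool:
    """Deny by default: unknown roles and unknown permissions get False."""
    return ROLES.get(role, {}).get(permission, False)
```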
9. Use collections of smaller single-purpose AI agents to perform tasks
They can be scaled and monitored individually to limit the blast radius of hallucinations, and as AI improves, each agent can be swapped out for a better version.
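A sketch of a swappable agent registry; the agent names and stub implementations are placeholders for real single-purpose models:

```python
from typing import Callable

# Registry of narrow agents; each entry can be replaced independently
AGENTS: dict[str, Callable[[str], str]] = {
    "extract_dates": lambda text: "stub: dates extracted",
    "classify_tone": lambda text: "stub: tone classified",
}

def run_pipeline(text: str, steps: list[str]) -> dict[str, str]:
    """Run each single-purpose agent and record its output for monitoring."""
    return {step: AGENTS[step](text) for step in steps}

def swap_agent(name: str, new_impl: Callable[[str], str]) -> None:
    """Upgrade one agent without touching the rest of the pipeline."""
    AGENTS[name] = new_impl
```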
10. Have regular (quarterly) system reviews
At the current pace of AI innovation, many systems will be out of date within a short period of time.