Skip to main content

AI Security

Building with AI introduces an entirely new category of security concerns. If your app sends user input to an LLM, you need to understand these threats.
This section is a work in progress. Content is being actively developed.

Topics to Be Covered

  • Prompt injection (direct and indirect)
  • Data leakage through AI models
  • System prompt extraction
  • Tool use and agent security
  • Rate limiting and cost attacks
  • Output validation and safety
  • Sensitive data in training and context
  • Multi-tenant AI application security
  • Responsible AI disclosure