Prϵϵmpt: Sanitizing Sensitive Prompts for LLMs
Published in AAAI 2024 Workshop (Privacy-Preserving Artificial Intelligence), 2023
In this paper, we address the problem of formally protecting the sensitive information contained in a prompt while maintaining response quality. To this end, we introduce a cryptographically inspired notion of a prompt sanitizer, which transforms an input prompt to protect its sensitive tokens. Our evaluation demonstrates that Prϵϵmpt is a practical method for achieving meaningful privacy guarantees: it maintains high utility relative to unsanitized prompts and outperforms prior methods.
[Paper]
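To give a flavor of the general idea, the sketch below illustrates one simple form of prompt sanitization: masking sensitive tokens with placeholders before a prompt is sent to an LLM, then restoring them in the response. This is a hypothetical, minimal illustration only; the patterns, function names, and placeholder scheme are assumptions and do not reflect the paper's actual mechanism or its formal guarantees.

```python
import re

# Hypothetical regexes standing in for a sensitive-token detector.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def sanitize(prompt):
    """Return (sanitized_prompt, mapping) with sensitive tokens masked."""
    mapping = {}
    for label, pattern in SENSITIVE_PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder)
    return prompt, mapping

def restore(response, mapping):
    """Re-insert the original sensitive tokens into an LLM response."""
    for placeholder, original in mapping.items():
        response = response.replace(placeholder, original)
    return response

sanitized, mapping = sanitize("Email alice@example.com or call 555-123-4567.")
print(sanitized)  # Email <EMAIL_0> or call <PHONE_0>.
```

Only the sanitized prompt would leave the trusted boundary; the mapping stays local so the original tokens can be restored after the model responds.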