As the use of AI tools like ChatGPT, Claude, and other large language model (LLM) chatbots becomes more prevalent in academic and professional settings, concerns about privacy and data security have emerged. This guide explores the privacy implications of using AI-powered tools and offers insights into how users can protect their personal information while using these technologies.
First, don’t enter private or confidential information into ChatGPT or similar tools. Developers may review what you type in order to improve future versions of their models.
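If you often paste notes or documents into a chatbot, a quick pre-check can help you catch obvious identifiers before they leave your computer. Below is a minimal Python sketch, meant only as an illustration rather than a complete de-identification tool: it swaps email addresses and US-style phone numbers for placeholders, and the patterns and labels are assumptions you would adapt to your own material. Always review text yourself before sharing it.

```python
import re

# Illustrative patterns only: they catch common email and US phone formats,
# not every possible identifier (names, addresses, IDs, etc.).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def scrub(text: str) -> str:
    """Replace email addresses and phone numbers with placeholder labels."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Jane Doe at jane.doe@example.org or (520) 555-0199."
    print(scrub(sample))
    # Prints: Contact Jane Doe at [EMAIL] or [PHONE].
```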
If you want to make sure your inputs aren't used to train or improve the model, you can turn that off in ChatGPT's settings, under the data controls options.
Temporary chat
Another option is ChatGPT's "temporary chat" feature. At the top of the page, click the menu labeled "ChatGPT" and select "Temporary chat." The conversation won't appear in your chat history, and ChatGPT won't save anything from it.
Many other AI tools offer similar options; check each tool's privacy or data-control settings to see whether you can opt out of model training or start a temporary chat.
Many thanks to the NNLM course "Understanding & Using Generative AI: a course for health science librarians" and to the University of Arizona research guide "How can I protect my privacy when using ChatGPT?"