Sharing Information with AIs
How do you decide what information you're willing to share with which AI?
Almost all of my purposeful use of AI involves only information with no sensitivity at all.
The whole world can know that I can’t remember the calling convention of some particular programming library and can infer that I’ve switched programming languages and something about what I am working on.
But almost all is not all. Some queries might give more than a hint that I or someone in my circle has a particular problem. That is easy enough to partially protect against by using false identities with disposable email addresses and separate browser profiles. Far from complete protection, but probably good enough.
Other queries might give hints (or more than hints) about an invention or a business plan. That is a more serious problem.
I recently built a new CustomGPT that people would likely share confidential business data with. I'd never be able to see that data, which is well and good, but…
There is now an option that didn't exist when I built my last one. It is hidden below the fold at the bottom of the settings, behind a toggle labeled "Additional Settings." When I expanded it, I found a checkbox:
"Use conversation data in your GPT to improve our models"
and it defaulted to checked. I didn't know to look for it. Had I not found it, users who trusted me would have been inadvertently, unknowingly, and, most of all, scarily putting their secrets at risk.
At this point, I am wondering whether I have to run my own language models in order to keep my users safe.
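For anyone weighing the same tradeoff, here is a minimal sketch of what self-hosting could look like, assuming a locally running Ollama server on its default port; the model name and the prompt are just placeholders. The point is that the conversation never leaves the machine, so there is no vendor-side training checkbox to hunt for.

```python
import requests

# A minimal sketch, assuming a local Ollama server on its default port.
# Nothing in this exchange leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"


def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally hosted model and return its reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    # With stream disabled, Ollama returns one JSON object whose
    # "response" field holds the full completion.
    return resp.json()["response"]


if __name__ == "__main__":
    # Hypothetical sensitive query that I would never send to a hosted service.
    print(ask_local_model("Summarize this confidential business plan: ..."))
```

It trades convenience and model quality for control, but the privacy property is structural rather than a setting someone can quietly change under me.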