AI is a rapidly growing technology, evolving faster than regulations can be reviewed or put in place. So what do we need to consider in the meantime about protecting an individual's privacy?
The IAPP article references the research paper "Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks." By combining a regulation-insensitive approach with real-world, fact-checked incidents, the authors were able to identify a list of 12 risks:
- Surveillance: AI exacerbates surveillance risks by increasing the scale and ubiquity of personal data collection.
- Identification: AI technologies enable automated identity linking across various data sources, increasing risks related to personal identity exposure.
- Aggregation: AI combines various pieces of data about a person to make inferences, creating risks of privacy invasion.
- Phrenology and physiognomy: AI infers personality or social attributes from physical characteristics, a new risk category not in Solove's taxonomy.
- Secondary use: AI exacerbates the repurposing of personal data for uses other than those originally intended.
- Exclusion: AI's opaque data practices worsen the failure to inform users about how their data is used or to give them control over it.
- Insecurity: AI's data requirements and storage practices increase the risk of data leaks and improper access.
- Exposure: AI can reveal sensitive information, such as through generative AI techniques.
- Distortion: AI’s ability to generate realistic but fake content heightens the spread of false or misleading information.
- Disclosure: AI can cause improper sharing of data when it infers additional sensitive information from raw data.
- Increased accessibility: AI makes sensitive information more accessible to a wider audience than intended.
- Intrusion: AI technologies invade personal space or solitude, often through surveillance measures.
The image for this article was created using Canva AI.