Artificial Intelligence Opportunities, Risks and Dangers
Jun 15, 2023
Popularity of AI
The popularity of artificial intelligence (AI) technology is making news, and we are seeing excitement about its use within our universities. AI is used for tasks such as word translation, speech recognition, self-driving cars, chatbots, and online shopping functions. It can also be used for predictive modeling, creating realistic images and art from text descriptions, and even suggesting a diagnosis from medical images. Examples of these services include DALL·E 2, ChatGPT, and Otter.ai. AI's popularity has grown because of its capability to replace manual processes with automated methods in which computers perform tasks, process large quantities of data, and solve problems at exceptionally fast rates. In addition, AI technologies may use machine learning, which applies algorithms or statistical models to draw inferences from patterns in data.
Risks of Using AI
While AI is becoming more accessible to the general public and has risen in popularity, using the technology carries risks. Without proper security controls, AI technology can become susceptible to privacy, confidentiality, and security threats. For example, attackers may inject malicious data or images into a machine learning model to deliberately attack the integrity of the data, or individuals may upload data into AI technology without realizing that the data may remain there permanently, circumventing appropriate university protections and security controls. Additionally, inappropriate use of AI technology could place CU at risk of unintended data disclosures or of violating laws that are intended to protect data and maintain secure systems.
Compliance Requirements for Using AI
As with other technologies, AI technology must be assessed for risk, and proper controls must be implemented, prior to use. This ensures the technology complies with laws such as HIPAA (Health Insurance Portability and Accountability Act) and FERPA (Family Educational Rights and Privacy Act), which were developed to maintain the confidentiality of health, medical, clinical, and student information and to ensure that technologies in which these data are stored and processed have appropriate security controls in place.
- Contact or email the Risk and Compliance team for assistance vetting AI technology prior to acquisition.
- It is important to highlight whether the AI is intended for clinical purposes or will use highly confidential data, such as data protected under FERPA or HIPAA.
- Avoid uploading or sharing university data into unvetted AI systems, as they may lack appropriate security controls.
- Become familiar with university security and privacy policies and standards: