Rapid advancements in AI continue to show up in news stories, research and online discussion, and businesses across a variety of fields have already incorporated AI into daily practice. The arrival of ChatGPT, the first mainstream, social media-hyped AI program, has spurred debate over how, and to what extent, AI should and shouldn’t be used.
So should healthcare workers consider using AI? The question is not a matter of if, but a matter of how. AI has already produced a variety of privacy scandals. A June 2023 report estimated the average cost of a data breach across industries to be around $4.45 million. The average cost of a healthcare data breach, however, was the highest among all industries, at $10.93 million.
If a healthcare organization plans to use AI in everyday practices, it must focus its efforts on protecting consumer data through compliance with Health Insurance Portability and Accountability Act (HIPAA) rules. It is more important than ever for healthcare organizations choosing to use AI to adopt a safe and secure method to store and transmit data. Networks connecting patients with their care, as well as any external access points, should be secured.
This could mean encrypting data both at rest and in transit, as well as making sure the AI model of choice runs on a secure server. Along with encryption, HIPAA privacy rules require that patient information be de-identified. No matter who has access to private health records, the chosen AI model should be trained on de-identified patient information.
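To make the encryption point concrete, here is a minimal Python sketch of encrypting a record before it is stored, using the `cryptography` package's Fernet interface. The key handling and the `patient_note` field are illustrative assumptions, not a production key-management design.

```python
# Minimal sketch of encryption at rest with the `cryptography`
# package's Fernet interface (pip install cryptography).
from cryptography.fernet import Fernet

# Assumption: in production the key would come from a managed key
# store (KMS/HSM), never generated inline next to the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical record field: encrypt before it touches disk or
# any AI pipeline, and decrypt only in an authorized, audited context.
patient_note = b"Follow-up scheduled for persistent hypertension."
ciphertext = cipher.encrypt(patient_note)
assert cipher.decrypt(ciphertext) == patient_note
```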
Organizations should adopt methods such as safe harbor, which is based on removing specific identifiers from the data set, and differential privacy as possible approaches to de-identification. Differential privacy involves adding statistical noise to data so that overall patterns can be described but individual records cannot be extracted. For example, when a traffic app looks at where people are going on their phones, it doesn’t want to store each individual’s exact location in its database, so it adds some fuzziness to the data to keep everyone’s exact location private. This way, the app can still see overall traffic trends without revealing anyone’s specific location, and the noise minimizes the security risk if the data is ever compromised.
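As a rough illustration of the two techniques above, the sketch below strips direct identifiers from a record (in the spirit of safe harbor) and adds Laplace noise to an aggregate count (the standard mechanism behind differential privacy). It assumes numpy; the field list, epsilon value and example query are hypothetical, and this is far short of a certified HIPAA de-identification process.

```python
# Sketch of both de-identification approaches. Field list, epsilon,
# and the example query are illustrative assumptions only.
import numpy as np

# Safe-harbor-style removal: drop direct identifiers from a record.
IDENTIFIER_FIELDS = {"name", "ssn", "address", "phone"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

# Differential privacy: Laplace noise scaled to sensitivity / epsilon,
# so aggregate trends survive but individuals cannot be extracted.
def noisy_count(true_count: int, epsilon: float = 0.5,
                sensitivity: float = 1.0) -> float:
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

record = {"name": "Jane Doe", "ssn": "000-00-0000", "diagnosis": "I10"}
print(deidentify(record))  # {'diagnosis': 'I10'}
print(noisy_count(128))    # a value near 128, useful only in aggregate
```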
One major concern with using AI revolves around data sharing. It is important for any healthcare organization that chooses to use AI to understand the type of model being used and whether the AI follows the same data-sharing agreements and patient consent forms. AI applications run on algorithms: the processes or sets of rules a computer must follow in order for the AI to work correctly.
AI models can follow two types of algorithms: supervised and unsupervised. Supervised algorithms use input and output values that are already known in advance; because the answers are known, these processes can produce highly accurate models. Unsupervised algorithms are the opposite: data is fed into the algorithm and the computer learns what to look for. The right answer may not be included in the feed, so the computer must identify relationships and observations in the data on its own.
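The distinction is easy to see in code. This scikit-learn sketch trains a supervised classifier on data where the answers (labels) are known in advance, then runs an unsupervised clustering model on the same features with no labels at all; the synthetic data stands in for already de-identified clinical features.

```python
# Contrast of the two algorithm families with scikit-learn; the
# synthetic data stands in for (already de-identified) features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # hypothetical patient features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # outcomes known in advance

# Supervised: both inputs and outputs are provided, so the model
# learns a mapping from features to the known answer.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))

# Unsupervised: no labels at all; the model must discover structure
# (here, two clusters) in the data on its own.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:5])
```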
The type of AI algorithm used will shape how a healthcare organization must update its processes to maintain HIPAA compliance. Organizations should consider limiting who has access to the AI model to mitigate accidental data breaches or information leaks. Only identified staff members and primary physicians who need particular data should be able to access it.
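One simple way to enforce such limits is an explicit role check in front of every model call, as in this hedged sketch; the role names and the pass-through `query_model` helper are hypothetical.

```python
# Hedged sketch of a role check in front of every model call.
# Role names and the pass-through body are hypothetical.
ALLOWED_ROLES = {"primary_physician", "authorized_analyst"}

def query_model(user_role: str, prompt: str) -> str:
    """Refuse model access for any role outside the allow list."""
    if user_role not in ALLOWED_ROLES:
        raise PermissionError(f"role '{user_role}' may not query the model")
    # Forward the prompt to the (hypothetical) model endpoint here.
    return f"model response for: {prompt}"

print(query_model("primary_physician", "summarize de-identified chart 4711"))
```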
All personnel and vendors should also be adequately trained on their access limitations, data usage limitations and other security compliance requirements when it comes to patient records. Because sensitive patient data may reside on the AI network, it is important that the AI models go through regular audits and risk assessments. Continuous, regular auditing not only reinforces HIPAA compliance, but also contributes to trustworthy AI by addressing biases to improve model accuracy, protect consumer rights, ensure data quality, provide relevance and monitor system changes.
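Regular audits are only possible if access is recorded in the first place. A minimal sketch of an append-only audit trail, using only the Python standard library, might look like this; the field names and log destination are illustrative.

```python
# Sketch of an append-only audit trail using only the standard
# library; field names and the log destination are illustrative.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_access_audit.log", level=logging.INFO)

def log_model_access(user_id: str, action: str, record_id: str) -> None:
    """Record who touched which de-identified record, and when."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "action": action,
        "record_id": record_id,
    }
    logging.info(json.dumps(entry))

log_model_access("dr_smith", "model_inference", "deid-8842")
```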
When it comes to AI in a healthcare setting, there are a variety of ways these programs can be applied thoughtfully. However, it is incredibly important to make sure the AI is used in a way that meets HIPAA standards of patient security. This not only better protects the patient, but also saves healthcare organizations from costly data breaches.
Through careful consideration and use, AI can meet these standards. But it will be up to each healthcare and medical organization to make sure it follows the compliance practices needed to meet these guidelines.