Is AI in Healthcare Causing Data Privacy Concerns?

Published on Feb 20, 2020

AI is rapidly revolutionizing the healthcare industry by automating routine drudgery and administrative tasks such as managing patients and medical resources. The benefits of AI in the healthcare industry are ample: reduced spending, new avenues for treatment, and improved patient experience and diagnosis. But as AI moves forward and strengthens its grip, numerous risks and challenges have emerged that are raising concerns.

One of the major concerns people raise is data privacy. Questions like “Are our personal and medical records kept confidential?” and “How can hospitals and other healthcare providers guarantee that they won’t misuse our private data by selling it to other companies?” are being asked more and more. Healthcare data is undeniably valuable, and there is little stopping a company from selling your confidential data to, say, Facebook, which could then use it to target you with hyper-personalized ads.

Data privacy concerns in the healthcare industry

An upsurge in the usage of AI-driven virtual health assistants has made us more vulnerable to data breaches. Countless health apps and monitoring devices, from smartphones to wearable gadgets, continuously collect critical data to provide medication alerts and feedback on the go. But these advances can also work against us: health data flowing through the latest gadgets sits in a regulatory gray area, because existing rules do not fully address data privacy.

Under HIPAA (the Health Insurance Portability and Accountability Act of 1996), it is still unclear which privacy rules apply to tech companies that use third-party apps or algorithms to access health data. Genetic testing companies like Ancestry and 23andMe, for example, collect DNA information to provide insights about your health, ancestry, and traits, yet they largely fall outside HIPAA’s traditional scope. The terms and conditions of such companies are often written in a way that lulls you into believing your data is safe with them.

Bias and Inequality

AI health technology must be developed cautiously and responsibly, because bias and inequality are also at play in healthcare AI. AI systems can infer information about patients by analyzing large datasets and behavioral patterns. If the data fed to an algorithm is inaccurate in the first place, its reliability is compromised: the system learns from wrong information and, in turn, produces inaccurate results, and AI loses its effectiveness. On the other hand, even if an AI system learns from accurate data, it can still fall short if that data carries underlying biases and inequalities.

Conclusion

When implementing AI in health tech, companies must formulate stringent privacy safeguards and make sure they adhere to standard industry protocols. If implemented effectively, AI can undoubtedly prove to be a life-saver for patients. For example, an AI system might be able to predict whether someone will develop Parkinson’s disease by analyzing the shakiness of their mouse movements. While AI promises a plethora of benefits, the creators of AI systems must remain aware of the potential risks and biases.
