A recent report testing the security capabilities of several mobile health apps highlighted “systemic” shortcomings and vulnerabilities that could lead to the exposure of users’ sensitive health and identity information.
Conducted by cybersecurity marketing firm Knight Ink and sponsored by mobile app API security company Approov, the investigation reverse-engineered 30 mobile health apps using an open source security framework, analyzed their static code and then penetration-tested their APIs.
The report did not disclose the names of the tested apps or developers (some of whom agreed to provide access to the investigation under the condition of anonymity), but noted that they are from international healthcare information technology companies with revenues ranging from $600 million to $8 billion, and an average employee count of about 15,000.
API-related vulnerabilities were abundant across the 30 apps, according to the report. Seventy-seven percent contained hard-coded API keys, and 7% contained hard-coded user names and passwords. What’s more, 7% of those API keys came from third-party payment processors that specifically warn customers not to hard-code keys in plain text. None of the apps implemented certificate pinning, which allowed the researcher to freely conduct person-in-the-middle attacks against the apps’ communications.
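To make the hard-coded-key finding concrete, here is a minimal illustrative sketch (not drawn from any of the tested apps; the key string and variable names are hypothetical) contrasting a secret embedded in source, which survives in the shipped binary and can be recovered by the kind of reverse engineering the report describes, with a secret supplied at runtime:

```python
import os

# Anti-pattern: a secret embedded in the source code. It ends up inside
# the distributed app binary, where anyone who decompiles the app can
# read it -- exactly what the report's static analysis surfaced.
HARDCODED_API_KEY = "sk_live_EXAMPLE_DO_NOT_SHIP"  # hypothetical key


def get_api_key() -> str:
    """Safer pattern: read the secret from the runtime environment
    (or, for a mobile app, obtain a short-lived token from a backend)
    so no long-lived credential ships inside the binary."""
    return os.environ.get("PAYMENTS_API_KEY", "")
```

The same principle applies to the third-party payment-processor keys the report flagged: those vendors warn against hard-coding precisely because a plain-text key in a mobile binary is effectively public.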
Of additional concern was that every API endpoint tested in the investigation was vulnerable to a broken object level authorization (BOLA) attack, allowing access to personally identifiable information (PII) and protected health information (PHI) that should have been blocked from the attacking account, according to the report. Half of the tested APIs allowed unauthorized access to admissions records, and an equal share allowed access to clinical results such as pathology reports or X-rays.
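A BOLA flaw is easiest to see in code. The sketch below is a hypothetical illustration (the record store, user names, and function names are invented, not taken from the report): the vulnerable handler authenticates the caller but never checks that the requested object belongs to them, so any logged-in user can read any patient's record simply by changing the ID in the request.

```python
# Hypothetical in-memory data store standing in for a patient-records API.
PATIENT_RECORDS = {
    101: {"owner": "alice", "phi": "pathology report A"},
    102: {"owner": "bob", "phi": "x-ray series B"},
}


def get_record_vulnerable(session_user: str, record_id: int):
    # BOLA: the caller is authenticated (we have a session_user), but the
    # object's ownership is never checked -- any user can fetch any record.
    return PATIENT_RECORDS.get(record_id)


def get_record_fixed(session_user: str, record_id: int):
    # Fix: authorize at the object level by comparing the record's owner
    # against the authenticated caller before returning PHI.
    record = PATIENT_RECORDS.get(record_id)
    if record is None or record["owner"] != session_user:
        return None  # not found, or not yours
    return record
```

In the vulnerable version, a request from "alice" for record 102 returns Bob's X-ray data; in the fixed version, the same request is refused.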
WHY IT MATTERS
The 30 investigated apps have been downloaded an average of 772,619 times, according to the report. All allow clinicians to review schedules and patient charts, while a subset enables additional capabilities such as the ability to access or alter patients’ demographics, photos, clinical histories and other items.
The presence of these vulnerabilities suggests that the security measures required for FHIR/SMART compliance “merely represent a subset of the steps needed to secure mobile apps and the APIs which enable apps to retrieve data and interoperate with data resources and other applications,” according to the report. This becomes especially pertinent over time, as more and more monetizable data is being generated and cyberattacks on healthcare organizations become more frequent.
“Look, let’s point the pink elephant out in the room,” Alissa Knight, partner at Knight Ink and the report’s author, said in a statement. “There will always be vulnerabilities in code so long as humans are writing it. Humans are fallible. But I didn’t expect to find every app I tested to have hard-coded keys and tokens and all of the APIs to be vulnerable to [BOLA] vulnerabilities allowing me to access patient reports, X-rays, pathology reports, and full PHI records in their database. The problem is clearly systemic.”
THE LARGER TREND
Cybersecurity has been a growing concern for healthcare organizations over the years, and 2020 was no exception. In addition to the threat of ransomware attacks, increased digitization spurred by COVID-19 has brought greater scrutiny to EHR access and virtual care services alike.
The mobile health space has hardly been immune to these types of issues over the years either. While consumer-facing apps have often found themselves in the news for exposing user information or undisclosed third-party data sharing, the sudden onslaught of COVID-19 apps came with relatively little regard for data-use transparency.