Your ONC Cures Act APIs and Managing Consent as Policy

The ONC Cures Act final rule is forcing payers and providers to come to grips with automating consent management and enforcement for data access.  The best architectural solution to address it is Policy Based Access Control, or PBAC.

PBAC is a kind of Attribute-Based Access Control, or ABAC.  Unlike Role-Based Access Control, in which a user granted a particular role gets a broad set of privileges for data access, in ABAC attributes about who is requesting access, what roles they have, what data they are requesting, when, with what tool, and for what purpose are used to make finer-grained, more secure decisions.

In PBAC the logic around access decisions based on attributes is managed with policies, a body of business rules capturing your interpretation of laws and regulations, your contracts, and your other, well, policies : ).   At runtime all pertinent policies for a given interaction are evaluated and an overall decision is made about access.

Your API consent scenarios around sharing member data should not be solved in a one-off way: the same business logic is needed at every consumer data touchpoint, including customer service, your customer web portal, your customer phone app, and perhaps elsewhere.  There are significant advantages in solving for consent in a common way.

The business logic for managing consent embodies your interpretation of relevant state and federal regulation.  Those regulations are complex, overlapping, and ever changing.

The overall result, if you do not consolidate your solution, will be to have redundant expressions of that logic scattered across your enterprise solutions, and across components within those solutions.

Keeping it all current and synchronized will require ongoing sustainment by the teams involved, and the ongoing tasking of your critical – and expensive – legal and privacy resources to support each of those efforts.

Even given that investment, you will have inconsistencies in the application of that logic that will expose you to legal risk.

The optimal architecture is for you to create a Policy-Based Access Control mechanism where your interpretation and application of state and federal privacy regulation is captured, maintained and enforced in one place as business policies, statements of rules that must be applied in particular circumstances.  Your Legal and Privacy teams would create and update these policies as required.

Those policies would then be evaluated operationally at a common Policy Decision Point.  Each implementation interface – your APIs, Portal, and customer service console – would interface to the common policy engine, passing in a standardized decision context:  who is asking for what, for what purpose, when, and how. The Policy Decision Point would evaluate all the business rules, record and return its decision, which would then be enforced at the point where the communication happens.
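As a concrete sketch of that standardized decision context and evaluation, here is what a minimal in-process Policy Decision Point might look like in Python.  All names here – DecisionContext, evaluate, self_access – are illustrative assumptions, not any particular product's API:

```python
# A minimal Policy Decision Point sketch; deny-overrides combining with a
# default of deny is one common policy-combining algorithm (XACML has it
# as 'deny-overrides'), used here for illustration.
from dataclasses import dataclass

@dataclass
class DecisionContext:
    subject_id: str   # who is asking
    member_id: str    # whose data they are asking for
    resource: str     # what data (e.g. a FHIR resource type)
    purpose: str      # for what purpose (e.g. "member-access")
    channel: str      # how (e.g. "api", "portal", "customer-service")

def evaluate(ctx: DecisionContext, policies) -> str:
    # Evaluate every pertinent policy; any deny overrides any permit,
    # and if no policy permits, the default answer is deny.
    decisions = [p(ctx) for p in policies]
    if "deny" in decisions:
        return "deny"
    return "permit" if "permit" in decisions else "deny"

# Example policy: an adult member may access their own data.
def self_access(ctx: DecisionContext) -> str:
    return "permit" if ctx.subject_id == ctx.member_id else "not-applicable"

ctx = DecisionContext("M01", "M01", "Claim", "member-access", "api")
print(evaluate(ctx, [self_access]))  # permit
```

The point of the sketch is the shape of the interaction: every channel builds the same context, and the engine, not the channel, owns the rules.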

You should strongly consider adopting this approach now as opposed to some future year.  Budgets are tight everywhere.  But there is a very good chance that the same kind of practical, project-driven exigencies you are wrestling with now will push the optimal solution even further into the future.

In your considerations you should balance the cumulative weight of years of complex sustainment and sustained legal risk against the cost of carving out time and money right now.

Technology debt that embodies legal risk, especially around data and privacy, is not the kind of technology debt you want to carry.

Introduction

Fine-grained consent decisions around sensitive data were aired out in the ONC and CMS rulemaking processes.  It was determined that implementing data segmentation and redaction was too much work for the industry out of the gate. At the same time, however, the legislation demands following all federal and state privacy regulation.

You can’t do both – not implement segmentation and follow all pertinent legislation. I will try to air that out in a future post.  For our purposes here, assume that at some point soon you will have to do content-based consent decisions, especially around well-known sensitive data conditions.

But the decision to not support sensitive data redaction in the first generation of ONC-mandated APIs does not make consent as simple as a user accessing their own data.

A consent solution for APIs must support not only a consumer – I work for a payer, so I will just call them ‘members’ hereafter; substitute ‘patient’ or ‘consumer’ as useful – consenting for a third-party application to obtain their own health data, but also such consent granted by an adult member’s personal representative or by a minor member’s parent or guardian.

The privacy rule at 45 CFR 164.502(g) requires us to treat a person’s personal representative as the person with respect to using and disclosing the individual’s data. 45 CFR 164.510(b) addresses cases where family members and other responsible parties may be given access to the individual’s data even when there is no explicit authorization in place. Please refer to the HIPAA for Professionals discussion on personal representatives for more detail.

If we add the weight of these complex scenarios to the sensitive data scenarios, it becomes very clear we all need a comprehensive, sophisticated consent solution.

Let’s dig into the scenarios we will have to solve for 1/1/21.

Definitions

A minor is a person under the age of 18.

An adult member’s ‘personal representative’ is a person with legal authority to make health care decisions on an adult member’s behalf.

A minor member’s parent or guardian is an adult who is legally responsible for them and can make health care decisions on their behalf, within certain restrictions.

Consent Scenarios

  • Consenting party: Adult Member M01.  Member whose data is being released to 3rd party app: Member M01.  Rationale: An adult member may consent to release their own data.  Exceptions: None.
  • Consenting party: Personal Representative P01 of Adult Member M01.  Member whose data is being released: Member M01.  Rationale: A personal representative with legal authority over an adult member’s health data may consent to its release.  Exceptions: None.
  • Consenting party: Adult Parent or Guardian of Minor Member M02.  Member whose data is being released: Member M02.  Rationale: The parent or legal guardian of a minor member may consent to the release of the minor’s health data.  Exceptions: The minor member’s health data over which the minor has privacy rights, not the parent or guardian.
  • Consenting party: Minor Member M03.  Member whose data is being released: Member M03.  Rationale: A minor member with privacy rights over some parts of their health data may consent to the release of those parts.  Exceptions: The minor member’s health data over which the parent or guardian has privacy rights, not the minor.

Consent Scenario Analysis

The parent or guardian is effectively the personal representative by default, with authority to access, and to consent for others to access, the minor’s health data.

However, the privacy relationship between a parent or guardian and a minor between 13 and 17 inclusive may be complex, and the default authority of the parent or guardian may not obtain for a given relationship and given data request.

Privacy rights are a function of both federal and state law.  In particular, determining rights in a given circumstance involving a minor can be very difficult.

Here is an example from my home state (this is not intended to be prescriptive, just illustrative):

  • In Oregon, per ORS 109.640, minors who are 15 and older may consent to medical and dental services without parental consent.
  • Per ORS 109.675, minors who are 14 or older may consent to outpatient mental health, drug or alcohol treatment services – excluding methadone – without parental consent.
  • Oregon providers are expected to involve parents or guardians by the end of such treatment unless the parent or guardian refuses, there are clear, documented clinical indications to the contrary, there is identified sexual abuse, or the minor is emancipated and has been for at least 90 days.
  • Per ORS 109.680, the provider may disclose health information related to mental health or chemical dependency services if it is clinically appropriate and in the minor’s best interest, if the minor needs to be admitted to a detox program, or if the minor is a suicide risk and requires hospital admission. 
  • Federal regulation 42 CFR 2.14, however, states that if a minor is able to self-consent for drug or alcohol treatment, the treatment records cannot be disclosed without the minor’s written consent.
  • Per Article VI, Clause 2 of the U.S. Constitution, federal law trumps state law if they are in conflict. 

Fine-grained decisions about who has the legal authority to disclose a minor’s health data appear in large part to be a function of who consented to the treatment the data is about.

Problem is, that information is not captured in operational provider-payer health care transactions: there is no field for consent in a HIPAA X12 837 claim, and no field for it in the USCDI or CPCDS data sets.  Google up ‘medical consent to treatment form’ and you will find thousands of different proprietary versions.

Absent the ability to make decisions as fine-grained as the law seems to demand, you should have controls in place which evidence a good-faith effort to make consent decisions that are as fine-grained as you are able.

There are a host of legal documents that enable or constrain consent relationships.  Some of them are ‘Confidential Communications’, ‘Non-Disclosure Directives’, ‘Disclosure Authorizations’, ‘Powers-of-Attorney’ (which come in many flavors, with different kinds of powers), and ‘Conservatorships’.  Each of these documents must have a managed, documented provenance and association with the parties involved.

One important distinction here regarding health data access in your Cures Act mandate use cases:  we are speaking only of read access, not act or write access.  In no mandated case may the consenting party authorize an application to take any action or write any data on a member’s behalf.

APIs available to consumer applications will, however, provide act and write access in the near future, so you should strongly consider generalizing your solution here to accommodate those coming features.

On Your Logical Solution (with much prefiguring of your implementation solution)

You Already Have a Solution

Your solution for managing complex consents already exists in some form: you work every day with parents, guardians, and personal representatives about data they are permitted to access.   

Personal representative relationships are managed somewhere in your enterprise right now.

You have processes in place as part of the creation and maintenance of those representatives’ identities to validate their legal authority.

You also have operational procedures in place – probably carefully scripted ones – to deal with consent challenges in your personal interactions with members, parents, guardians, and personal representatives in customer service.

You have processes and systems in place to store and retrieve the relationship and privacy documentation needed for customer service to make those decisions.

Your existing processes manage consent relationships through a strategy of defaults with exceptions.  That makes sense operationally given the majority of interactions are based on those defaults: consent relationships among a subscriber and their dependents.

Key Solution Features

Any solution for consent has three key features:  creating and maintaining consents, making communication decisions given those consents, and enforcing those decisions.

Creating and maintaining consents demands the identification of the parties involved, the nature of the consent, the evidence of any legal relationship among the parties, and any relevant legal documentation that may enable or constrain a specific consented access to data.

Decision and enforcement demand the identification of the parties involved to tie them to those maintained consents, and knowledge of the data requested, and the requester’s purpose, so that any legally required enablement or constraint can be honored.

Any solution implementation must therefore authenticate the parties – determine who they are – and authorize their access to the content they are requesting.

In interpersonal interactions – generally the Member, Parent, Guardian or Personal Representative on the phone with Customer Service – authentication is done by asking the individual questions about their identity, with the answers checked against one of your operational systems.

Authorization is made by the customer service person, guided by the logic in their scripts, with supporting documentation looked up as needed in your operational systems.

Key Automated Solution Features

There are four important differences in an automated solution versus the manual solution you likely currently have in place.

First is that the rules to decide what you may and may not share must be captured in software logic, and there must be a software process that ‘binds’ information to those rules – like filling in the variable values in an algebraic expression – and evaluates them to make the right decision and trigger the right action.

Second is that the parties must be issued credentials for their associated digital identities which may be authenticated automatically.  

Third is that any documentation needed to evaluate the rules must be available in structured, digital form and be accessible so that it can bind to the rules as needed.  This does not necessarily need to be the entire contents of a document:  for some documents, the existence of the document, its valid dates, and the parties it refers to may be enough, all of which may be captured as ‘metadata’ associated with the document proper.
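That document metadata could be as small as a record like the following sketch; the field names and the DocumentMetadata type are my own illustration, not a standard schema:

```python
# Hypothetical metadata record for a consent-related legal document.
# A rule can bind to this record (who, what kind, valid when) without
# the scanned document itself ever becoming structured data.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DocumentMetadata:
    doc_id: str
    doc_type: str            # e.g. "power-of-attorney", "non-disclosure-directive"
    grantor_id: str          # the member the document concerns
    grantee_id: str          # the representative it empowers or constrains
    effective: date
    expires: Optional[date]  # None if no stated expiry

    def in_force(self, on: date) -> bool:
        return self.effective <= on and (self.expires is None or on <= self.expires)

poa = DocumentMetadata("D100", "power-of-attorney", "M01", "P01",
                       date(2019, 1, 1), date(2021, 1, 1))
print(poa.in_force(date(2020, 6, 1)))  # True
```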

Fourth is that the request for specific content, and/or the content itself, must also be available for rules which refer to it.

Your Implementation Solution

Authentication

Electronic authentication means validating credentials presented by a user, such as a username and password, that were originally issued to an individual or an application when a digital identity was created for them. New credentials associated with the same identity may subsequently be issued, such as a changed password.

In the future such digital identities may be created and maintained by a trusted third-party identity provider whose authority we recognize.  At this time, however, unless you are way out ahead of the federated identity curve, digital identities must clearly be ones created and maintained by you.

In your third-party application consent scenarios, we must always be able to authenticate the consenting party.

That means the individual requesting access for a third-party app must have one of your digital accounts, and be issued credentials for it.

And the third-party app itself must have a different kind of digital account with you, and be issued credentials for it.

Third-Party App Authentication

A third-party app will have to register with you, and be issued credentials, before any consumer can use it to access data via your APIs.  You will have to have an application registration process. App registration is typically managed via an API developer site maintained by the API owner (e.g. you), such as CMS’s Blue Button Developer site https://bluebutton.cms.gov/developers/ and athenahealth’s https://www.athenahealth.com/developer-portal.

The application owner – an individual representing the company which licenses the application to the user – will first have to get credentialed for the developer site itself, supplying you with (for example) a verifiable email address and whatever other demographics you might insist on. You could at that point introduce whatever identity security measures you deem appropriate as long as you are coloring inside the information blocking regulation lines.

Once they have developer site credentials, they may then log in and apply for the application credentials their app will need to access your APIs, supplying you the name of the application, perhaps associated website, and whatever other info you want to request.

You will also want to make them agree to your terms of service as part of the ask for application credentials – so you need to agree on what the appropriate terms of service are for application access to your APIs.

Once you vet their application – your process will have to be pretty open/forgiving to navigate the shoals of information blocking – you issue them app credentials, typically an API Key and Secret.  They can obtain the other information they need to use the APIs – the URLs of the APIs, the detailed interface specifications, etc. – from the developer site.
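Issuing that API Key and Secret can be sketched as below; register_app and the registry structure are illustrative, but the practice of storing only a hash of the secret, like a password, is standard:

```python
# Sketch of issuing app credentials at registration time.
import hashlib
import secrets

def register_app(app_name: str, registry: dict) -> tuple:
    api_key = secrets.token_urlsafe(24)
    api_secret = secrets.token_urlsafe(48)
    # Store only a hash of the secret; the plaintext is shown to the
    # developer once and never persisted.
    registry[api_key] = {
        "app_name": app_name,
        "secret_hash": hashlib.sha256(api_secret.encode()).hexdigest(),
    }
    return api_key, api_secret

registry = {}
key, secret = register_app("Acme Health Tracker", registry)
print(key in registry)  # True
```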

Individual Authentication

You currently create digital identities for and issue electronic credentials to members who register on your Member Portal (and Member Phone App – when I refer to the ‘Portal’ here I mean both.)

You also need to create digital identities for and issue credentials to legal personal representatives.

Authorization

Privileges

The privilege of being able to provide consent for access to one or more member’s data to the third-party app must be bound to the requesting individual’s identity. And by virtue of the exercise of that consent, the privilege of being able to access one or more members’ data must be bound to that third-party app’s identity.

Your documentation of an individual’s authority to act on a member’s behalf must be digitally associated with that digital identity as a privilege.

If and when that authority is terminated, the privilege must also be terminated, preventing the individual from granting new consent to third-party apps.  Existing consents must be revoked.

For a parent or guardian, their authority to act on a minor member’s behalf must also be associated with their digital identity as a privilege.

To ensure compliance with the federal Children’s Online Privacy Protection Act, you probably do not permit members under the age of 13 to create and manage their own digital identities.  If their identification is required, they must be identified by their parent, guardian, or personal representative. 

Granting children who are 12 and under digital access to consent decisions, were you to determine you needed to, would necessarily involve their parent or guardian in the authentication process, meaning the parent or guardian themselves would have to have a digital identity with you and authenticate against it, and you would have to have evidence of their relationship, and digital knowledge of that relationship.

Many payers do permit members between 13 and 17 to create digital accounts, and have some provision for them sharing their data with family members.

You must establish a link between the personal representative identity in your operational system and a digital identity you create, with associated electronic credentials that may be used in the authentication phase of the third-party consent granting process.

Revocation of a personal representative’s privilege to access data must be reflected in the privileges associated with their digital identity, which in turn must work with your authorization mechanism to deny their granting new application consents, and to revoke any existing application consents. For example, if a power-of-attorney is revoked or expires, when you find that out and update your record in your operational system, the digital identity you have for the holder of that power-of-attorney must lose the associated privilege and you must revoke any application consents they have granted.

Similarly, if a minor turns 18 you must revoke their parent or guardian’s privilege to authorize an application to access the now-adult’s data.  So you need an ‘18th Birthday’ event.

You need to identify and instrument all such transitions that materially change the consent relationships involved and reflect those changes in the privileges associated with the digital identities involved, and in any application access granted under those privileges.
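One such transition – the ‘18th Birthday’ event – might be instrumented as in the sketch below; the handler name, the privilege label, and the dict-based records are all hypothetical stand-ins for whatever your identity and consent stores actually look like:

```python
# Sketch: when a minor turns 18, deactivate the parent/guardian privilege
# and revoke any app consents granted under that privilege.

def on_member_turns_18(member_id, privileges, consents):
    # privileges: records like {holder, member, privilege, active}
    # consents:   records like {app, member, granted_by, active}
    revoked_holders = set()
    for p in privileges:
        if p["member"] == member_id and p["privilege"] == "parent-guardian":
            p["active"] = False
            revoked_holders.add(p["holder"])
    # Any consent the now-former guardian granted must also be revoked.
    for c in consents:
        if c["member"] == member_id and c["granted_by"] in revoked_holders:
            c["active"] = False

privileges = [{"holder": "G01", "member": "M02",
               "privilege": "parent-guardian", "active": True}]
consents = [{"app": "AppX", "member": "M02",
             "granted_by": "G01", "active": True}]
on_member_turns_18("M02", privileges, consents)
print(privileges[0]["active"], consents[0]["active"])  # False False
```

The same pattern applies to power-of-attorney expiry or any other materially significant transition: one event, cascading revocation of both the privilege and everything granted under it.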

Logic

Your consent decisions around application access must essentially implement a digital version of the logic behind your current customer service interactions around access.

That logic refers to various internal documents beyond the parties’ identities and subscriber-member relationship, such as powers-of-attorney and disclosure authorizations.

(There may be important legal nuances with respect to disclosing data to the caller directly versus the ‘caller’ being able to provide consent for a third-party app to read the data such that a more refined version of the consent decision logic is needed.)

Whatever documents are necessary to inform the logic in a given consent scenario must be available to the automated decision process.  Whether the entire document must exist as structured electronic data, or whether ‘metadata’ regarding what the document is, the parties it refers to, and what dates it is valid, must be available, you will have to determine on a document-by-document basis.

That may mean documents currently existing only as scans need to be converted into structured documents.  If so, that would represent a major lift for most shops.

The logic must also be able to identify the category of data being requested, such as ‘mental health’ and ‘alcohol treatment’.

That complex logic for determining the right for a given individual to grant access for some particular set of data to a third-party application must obviously live somewhere.

There are two candidate places in your overall solution where that logic could be implemented.  

The decision could be made during the OAuth authentication and authorization dance, where the decision itself would be passed to the API in ‘scopes’ in the access token.

Or the decision could be made in the API, using identity information about the requesting party passed in by OAuth in an identity token.

‘Stretching’ OAuth to manage this complex logic seems ill-advised.

OAuth does not dictate the content or structure of scopes beyond a simple text data type. 

The Cures Act mandates the use of the FHIR SMART scope and launch context IG, which has a simple recipe for read or write scopes for particular FHIR resources, and a little more complex recipe for the ‘launch’ scopes an app has to use to establish a member data context.
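For illustration, SMART v1 resource scopes follow the pattern ‘context/Resource.access’ – for example, patient/Observation.read – and a minimal parser might look like the following; the regex is my own simplification of the IG’s grammar, not its normative definition:

```python
# Minimal parser for SMART v1 resource scopes like 'patient/Observation.read'.
import re

SMART_SCOPE = re.compile(r"^(patient|user)/(\*|[A-Za-z]+)\.(read|write|\*)$")

def parse_scope(scope: str):
    m = SMART_SCOPE.match(scope)
    if not m:
        # Launch scopes ('launch', 'launch/patient') and OpenID scopes
        # ('openid', 'fhirUser') do not fit this recipe and are handled elsewhere.
        return None
    context, resource, access = m.groups()
    return {"context": context, "resource": resource, "access": access}

print(parse_scope("patient/Observation.read"))
```

Note how little room this recipe leaves for expressing anything like the consent logic above: a context, a resource type, and read or write.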

(Unfortunately, the current FHIR SMART scope and launch context IG is not clear about how to support cases where the application user has control over granting access to more than one member or patient’s data.  More on that in my next post.)

Either way, you have the problem of binding the ‘variables’ in all the complicated consent rules you have to enforce:

You need the identity of the requestor.

You need the members’ identities whose data that requestor can ‘see’.

You need electronic access to all the relevant documents such as Confidential Communications.

You need the content request, or the requested content.

And of course you need some kind of rules processor to evaluate the rules and make a decision.

The stopper is that the content requested is simply not available yet while an OAuth grant is being worked.  You must have rules that are fired after the app has made its request, either in the API itself, or in your reverse-proxy in front of it.
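A sketch of that post-request enforcement point follows; pdp_decide, handle_fhir_request, and the sensitive-category logic are illustrative stand-ins for your real policy engine and API layer:

```python
# Sketch of enforcement after the app's request is known: the API (or a
# reverse proxy in front of it) builds the full decision context -
# including the requested content category - and asks the policy engine
# before releasing any data.

SENSITIVE_CATEGORIES = {"mental-health", "substance-use"}

def pdp_decide(subject_id, member_id, category):
    # Stand-in for a real Policy Decision Point call; here, only the
    # member themselves may see their own sensitive-category data.
    if category in SENSITIVE_CATEGORIES and subject_id != member_id:
        return "deny"
    return "permit"

def handle_fhir_request(subject_id, member_id, resource, category):
    decision = pdp_decide(subject_id, member_id, category)
    if decision != "permit":
        return 403, None
    return 200, {"resourceType": resource, "subject": member_id}

print(handle_fhir_request("G01", "M02", "Observation", "substance-use"))  # (403, None)
```

The key property: the category of the requested content is available at this point, which it never is during the OAuth grant.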

The ONC Cures Act mandate has conflated two distinct authorization schemes:  authorization of the application to access the API, and authorization of the user of the application to access particular data in the API.

Solution Approach

Having some user access authorization managed in OAuth, other authorization potentially managed at authentication, and still other authorization managed in the APIs themselves – authz fragmented and scattered – will not stand. It is ugly.  And it brings obvious integration and sustainment challenges.

Which brings us to a Policy-Based Access Control, PBAC, a.k.a. an Attribute-Based Access Control, or ABAC, mechanism as arguably the best solution option. (The distinction between PBAC and ABAC is that in PBAC the policies are captured as business rules which are then dynamically translated into actionable logic.  ABAC is agnostic to how the rules are created and managed.)

The complex, fine-grained access consent decisions we are talking about here need the implementation clarity that a PBAC solution would bring.  

Consent2Share, which was referenced in the draft legislation, and rejected as too big a lift, is a health-consent wrapper around a XACML engine custom-built for managing the redaction of sensitive data in patient-provider scenarios.

It would require significant revision to meet your needs for implementing the complex consent scenarios we have been discussing.

One option is a PBAC implementation based, like Consent2Share, on the XACML standard, perhaps using AT&T’s open source XACML 3 engine. Another, more modern, and in many ways more attractive option may be the open source Open Policy Agent, OPA.

One Solution

You need business logic for managing consents in at least three places – there may be more:

  • Customer Service decisions about what data to share with a given caller;
  • Portal decisions about what data to display to a given user;
  • FHIR API decisions about what data to share with a third-party app with a given user.

It is all the same logic.  In every case you must have a digital identity for the caller/user.  You must find out what data privileges that caller/user has.  You must know what they are asking to see or share.  You must make a decision about whether they can see all or some part of what they have asked to see.  And then you give it to them.

The business logic is the result of your evaluation of applicable state and federal regulations and your own policies.

That business logic needs to be captured independently of each of the three solution spaces, and then applied, implemented, in the solution space.

The alternative is to create the business logic anew in each space.  That exposes you to risk: you risk its inconsistent application; you risk its currency.

For example, if a power-of-attorney is revoked or expires, that change needs to be reflected immediately in all of those implementation channels, or you risk sharing data illegally.

You are at a critical nexus on your consent management solution path.

Driven by budget and time-to-market considerations you may be at risk of backing your way into a consent solution that is an ad hoc assembly of parts, one that not only does not solve for Customer Service, the Portals, and the APIs in a common way, but distributes the business logic around managing consent for APIs across the parts, such that some logic is in OAuth, some logic is in your authentication mechanism, and some logic is in the APIs themselves.

You should strongly consider taking advantage of this critical nexus to design and implement a comprehensive policy-based access control solution which captures your business logic in one place.  

Since your portal needs the same logic to determine what claims, for example, a given party can view, you should implement your PBAC engine for this in an enterprise-accessible form.

There is also a convergent strategic path here for your consumer communication preferences solution, which may also be solved with the PBAC engine.

Stay tuned.
