Security and Privacy Module
The Security and Privacy Module describes how to protect a FHIR server (through access control and authorization), how to document what permissions a user has granted (consent), and how to keep records about what events have been performed (audit logging and provenance). FHIR does not mandate a single technical approach to security and privacy; rather, the specification provides a set of building blocks that can be applied to create secure, private systems.
The Security and Privacy module includes the following materials: the Consent, Provenance, and AuditEvent resources; the Signature datatype; and implementation guidance covering security principles, security labels, and signatures.
The following common use-cases are elaborated below:
FHIR is focused on the data access methods and encoding, leveraging existing security solutions. Security in FHIR needs to focus on the set of considerations required to ensure that data can be discovered, accessed, or altered only in accordance with expectations and policies. Implementations should leverage existing security standards and implementations. For general security considerations and principles, see Security.
Please leverage mature security frameworks covering device security, cloud security, big-data security, service-to-service security, etc.; see NIST Mobile Device Security and OWASP. These security frameworks include prioritized lists of the most important concerns. Recent evidence indicates a lack of implementer attention to addressing the common security vulnerabilities emphasized by the OWASP API Security Top 10. Reviewing the OWASP Top Ten and the OWASP Mobile Top 10, and ensuring those vulnerabilities are mitigated, is important for good security.
Privacy in FHIR includes the set of considerations required to ensure that individual data are treated according to an individual's privacy principles and Privacy-by-Design. FHIR includes implementation guidance to support these principles.
Use case: A FHIR server should ensure that API access is allowed for authorized requests and denied for unauthorized requests.
Approach: Authorization details can vary according to local policy, and according to the access scenario (e.g. sharing data among institution-internal subsystems vs. sharing data with trusted partners vs. sharing data with third-party user-facing apps). In general, FHIR enables a separation of concerns between the FHIR REST API and standards-based authorization protocols like OAuth. For the use case of user-facing third-party app authorization, we recommend the OAuth-based SMART protocol (see Security: Authentication) as an externally-reviewed authorization mechanism with a real-world deployment base - but we note that community efforts are underway to explore a variety of approaches to authorization.
Resource Servers MUST enforce the authorization associated with the access token. This enforcement includes verification of the token, verification of the token expiration, and might include using introspection to verify the token has not been revoked. This enforcement includes constraining results returned to the scopes authorized by the access token. The Resource Server might have further access controls beyond those in the token to enforce, such as Consent or business rules.
For further details, see Security: Authorization and Access Control .
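As a minimal, non-normative sketch (the token shape and SMART-style scope strings are assumptions of the example, not requirements of this specification), a resource server's enforcement of token expiry, scope, and patient compartment might look like this:

```typescript
// Hypothetical, simplified view of an introspected/decoded OAuth access token.
interface AccessToken {
  exp: number;            // expiry, seconds since epoch
  scope: string;          // e.g. "patient/Observation.read patient/Patient.read"
  patient?: string;       // patient compartment granted to the client
}

// Reject expired tokens before doing any work.
function assertNotExpired(token: AccessToken): void {
  if (Date.now() / 1000 >= token.exp) {
    throw new Error("access token expired");
  }
}

// Allow a read of a resource type only if a matching SMART-style scope was granted.
function canRead(token: AccessToken, resourceType: string): boolean {
  return token.scope
    .split(" ")
    .some(s => s === `patient/${resourceType}.read` || s === "patient/*.read");
}

// Enforce scope and patient compartment before executing a search.
function authorizeSearch(token: AccessToken, resourceType: string, patientId: string): void {
  assertNotExpired(token);
  if (!canRead(token, resourceType)) {
    throw new Error(`scope does not permit reading ${resourceType}`);
  }
  if (token.patient && token.patient !== patientId) {
    throw new Error("request is outside the authorized patient compartment");
  }
}
```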
Use-Case: When a user has restricted rights but attempts to do a query they do not have rights to, they should not be given the data. Policy should be used to determine if the user query should result in an error, zero data, or the data one would get after removing the non-authorized parameters.
Approach: Enforcement is by local enforcement methods. Note that community efforts are underway to explore a variety of approaches to enforcement.
Example: Using _include or _revinclude to get at resources beyond those authorized. Ignoring (removing) the _include parameter would give some results, just not the _include Resources. This could be silently handled, and thus give some results, or it could be returned as an error.
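A minimal sketch of the "silently remove the parameter" option described above; the allowed-include policy and parameter handling here are illustrative assumptions, not specification requirements:

```typescript
// Remove _include/_revinclude parameters the caller is not authorized to use.
// Returns the possibly narrowed query plus a note of what was dropped, so the
// server can choose between silent handling and returning an error or warning.
function filterIncludes(
  query: URLSearchParams,
  allowedIncludes: Set<string>          // e.g. {"Observation:subject"} per local policy
): { query: URLSearchParams; removed: string[] } {
  const removed: string[] = [];
  for (const name of ["_include", "_revinclude"]) {
    const values = query.getAll(name);
    query.delete(name);
    for (const v of values) {
      if (allowedIncludes.has(v)) {
        query.append(name, v);
      } else {
        removed.push(`${name}=${v}`);
      }
    }
  }
  return { query, removed };
}

// Usage: the unauthorized _revinclude is dropped; the rest of the search runs.
const q = new URLSearchParams("patient=123&_revinclude=Provenance:target");
const { query, removed } = filterIncludes(q, new Set<string>());
console.log(query.toString(), removed); // "patient=123" [ "_revinclude=Provenance:target" ]
```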
Use case: "Access to protected Resources is enabled through user Role-Based, Context-Based, and/or Attribute-Based Access Control."
Approach: Ensure that the level of assurance for identity proofing reflects the appropriate risk, given the issued party's exposure to health information. Users should be identified and should have their Functional and/or Structural role declared when these roles are related to the functionality the user is interacting with. Roles should be conveyed using standard codes from the Security Role Vocabulary. A purpose of use should be asserted for each requested action on a Resource. Purpose of use should be conveyed using standard codes from the Purpose of Use Vocabulary.
The FHIR core specification does not include a "User" resource, as a User resource would be general IT and used well beyond healthcare workflows. A RESTful User resource is defined in the System for Cross-domain Identity Management (SCIM) specification.
When using OAuth, the requested action on a Resource, the specified purpose(s) of use, and the role of the user are managed by the OAuth authorization service (AS) and may be communicated in the security token where JWT tokens are used. For details, see Security: HCS vocabulary.
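For illustration only, a JWT access-token payload carrying the user's role and purpose of use might look like the sketch below; the claim names are assumptions, since FHIR does not fix a token format, while the codes come from the role and purpose-of-use vocabularies referenced above:

```typescript
// Illustrative (non-normative) JWT payload carrying role and purpose of use.
// Claim names are example choices made for this sketch; the issuer and subject
// are invented values.
const exampleJwtPayload = {
  iss: "https://auth.example.org",          // hypothetical authorization server
  sub: "practitioner-42",
  exp: 1735689600,
  scope: "patient/Observation.read",
  role: {
    system: "http://terminology.hl7.org/CodeSystem/v3-RoleClass",
    code: "PROV",                            // healthcare provider (structural role)
  },
  purpose_of_use: {
    system: "http://terminology.hl7.org/CodeSystem/v3-ActReason",
    code: "TREAT",                           // treatment
  },
};

console.log(JSON.stringify(exampleJwtPayload, null, 2));
```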
Use case: "A FHIR server should keep a complete, tamper-proof log of all API access and other security- and privacy-relevant events".
Approach: FHIR provides an AuditEvent resource suitable for use by FHIR clients and servers to record when a security or privacy relevant event has occurred. This form of audit logging records as much detail as reasonable at the time the event happened. The FHIR AuditEvent is aligned and cross-referenced with IHE Audit Trail and Node Authentication (ATNA) Profile. For details, see Security: Audit .
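As a minimal illustration (shown roughly following the R4 shape; identifiers and display values are invented), an AuditEvent recording a RESTful read of a Patient record might look like this:

```typescript
// Minimal sketch of an AuditEvent for a RESTful "read" of a Patient record.
// All identifiers and display strings here are invented for the example.
const auditEvent = {
  resourceType: "AuditEvent",
  type: {
    system: "http://terminology.hl7.org/CodeSystem/audit-event-type",
    code: "rest",
  },
  subtype: [{ system: "http://hl7.org/fhir/restful-interaction", code: "read" }],
  action: "R",
  recorded: "2023-05-04T14:03:00Z",
  outcome: "0",                                    // success
  agent: [
    {
      who: { identifier: { value: "practitioner-42" } },
      requestor: true,
    },
  ],
  source: { observer: { display: "example-fhir-server" } },
  entity: [
    { what: { reference: "Patient/123" } },        // the record that was read
  ],
};
```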
Use case: "A Patient should be offered a report that informs about how their data is Collected, Used, and Disclosed."
Approach: The AuditEvent resource can inform this report.
There are many motivations to provide a Patient with some report on how their data was used. There is a very restricted version of this in HIPAA as an "Accounting of Disclosures", there are others that would include more accesses. The result is a human readable report. The raw material used to create this report can be derived from a well recorded 'security audit log', specifically based on AuditEvent. The format of the report delivered to the Patient is not further discussed but might be: printed on paper, PDF, comma separated file, or FHIR Document made up of filtered and crafted AuditEvent Resources. The report would indicate, to the best ability, Who accessed What data from Where at When for Why purpose. The 'best ability' recognizes that some events happen during emergent conditions where some knowledge is not knowable. The report usually does need to be careful not to abuse the Privacy rights of the individual that accessed the data (Who). The report would describe the data that was accessed (What), not duplicate the data.
In order to enable Privacy Accounting of Disclosures and Access Logs, and to enable privacy office and security office audit log analysis, all AuditEvent records should include a reference to the Patient/Subject of the activity being recorded. Reasonable efforts should be taken to assure the Patient/Subject is recorded, but it is recognized that there are times when this is not reasonable. See deeper details on AuditEvent.
Some events are known to be subject to the Accounting of Disclosures report when the event happens, and thus can be recorded as an Accounting of Disclosures - see the example Accounting of Disclosures. Other events must be pulled from the security audit log. A security audit log will record ALL actions upon data regardless of whether they are reportable to the Patient, because the security audit log is used for many other purposes - see Audit Logging. These recorded AuditEvents may need to be manipulated to protect organization or employee (provider) privacy constraints. Given the large number of AuditEvents, there may be multiple records of the same actual access event, so the reporting will need to de-duplicate.
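A sketch of deriving such a report, under the simplifying assumption that each AuditEvent has already been reduced to the few fields the report needs; filtering by patient and de-duplicating repeated records of the same access are the essential steps:

```typescript
// Hypothetical flattened view of the AuditEvent fields the report needs.
interface AuditSummary {
  recorded: string;                 // When
  agentId: string;                  // Who (may be generalized before release)
  entityRef: string;                // What (a reference, not the data itself)
  purposeOfUse?: string;            // Why
}

// Keep only events that involve the requested patient, then collapse
// duplicate records of the same access into a single report line.
function accountingOfDisclosures(events: AuditSummary[], patientRef: string): AuditSummary[] {
  const seen = new Set<string>();
  const report: AuditSummary[] = [];
  for (const e of events) {
    if (e.entityRef !== patientRef) continue;                          // filter by subject
    const key = `${e.agentId}|${e.entityRef}|${e.recorded.slice(0, 10)}`; // same actor, same day
    if (seen.has(key)) continue;                                       // de-duplicate
    seen.add(key);
    report.push(e);
  }
  return report;
}
```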
Use case: "Documentation of a Patient's Privacy Consent Directive - rules for Collection, Use, and Disclosure of their health data."
Approach: FHIR provides a Consent resource suitable for use by FHIR clients and servers to record current Privacy Consent state. The meaning of a consent or the absence of the consent is a local policy concern. The Privacy Consent may be a pointer to privacy rules documented elsewhere, such as a policy identifier or identifier in XACML. The Privacy Consent has the ability to point at a scanned image of an ink-on-paper signing ceremony, and supports digital signatures through use of Provenance . The Privacy Consent has the ability to include some simple FHIR centric base and exception rules.
When a use / access / disclosure is requested and an Access Control decision finds multiple Consent resources apply equally, a policy must cover this case. For example: one possible policy might be that the most recent Consent would be seen as more authoritative and thus apply rather than an older Consent. There may also be policy mechanisms to assure that only one Consent is ever active for a given Patient and context.
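One possible realization of a "most recent active Consent wins" policy - a local policy choice, not a rule defined by FHIR - is sketched below:

```typescript
// Pick the applicable Consent under a "most recent active Consent wins" policy.
// This is one possible local policy, not a rule defined by FHIR.
interface ConsentLike {
  id: string;
  status: string;       // e.g. "active", "inactive"
  dateTime?: string;    // when the consent was recorded
}

function mostRecentActiveConsent(consents: ConsentLike[]): ConsentLike | undefined {
  return consents
    .filter(c => c.status === "active" && c.dateTime)
    .sort((a, b) => (a.dateTime! < b.dateTime! ? 1 : -1))[0];
}
```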
All uses of FHIR Resources would be security/privacy relevant and thus should be recorded in an AuditEvent . The data access that qualifies as a Disclosure should additionally be recorded as a Disclosure, see Disclosure Audit Event Example .
For Privacy Consent guidance and examples, see Consent Resource .
Use case: "All FHIR Resources should be capable of having the Provenance fully described."
Approach: FHIR provides the Provenance resource suitable for use by FHIR clients and servers to record the full provenance details: who, what, where, when, and why. A Provenance resource can record details for Create, Update, and Delete; or any other activity. Generally, Read operations would be recorded using AuditEvent . Many Resources include these elements within; this is done when that provenance element is critical to the use of that Resource. This overlap is expected and cross-referenced on the Five Ws pattern . For details, see Provenance Resource .
Use case: "For any given query, need Provenance records also."
Approach: Given that a system is using Provenance records, when one needs the Provenance records in addition to the results of a query on other records (e.g. a query on MedicationRequest), one uses reverse include to request that all Provenance records also be returned; that is, add ?_revinclude=Provenance:target. For details, see _revinclude.
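For example, a search of this form (the server base URL and patient id are placeholders) returns the matching MedicationRequests together with the Provenance resources that target them:

```typescript
// Example search that also returns the Provenance resources targeting the
// matched MedicationRequests. The base URL and patient id are placeholders.
const base = "https://fhir.example.org";
const url = `${base}/MedicationRequest?patient=Patient/123&_revinclude=Provenance:target`;

fetch(url, { headers: { Accept: "application/fhir+json" } })
  .then(res => res.json())
  .then(bundle => {
    // bundle.entry contains MedicationRequest entries (search.mode = "match")
    // and their Provenance entries (search.mode = "include").
    console.log(bundle.entry?.length);
  });
```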
Use case: "Digital Signature is needed to prove authenticity, integrity, and non-repudiation."
Approach: FHIR Resources are often parts of a Medical Record or are communicated as part of formal Medical Documentation. As such, there is a need to cryptographically bind a signature so that the receiving or consuming actor can verify authenticity, integrity, and non-repudiation. This functionality is provided through the signature element in the Provenance Resource, where the signature can be any locally agreed-to signature, including Digital Signature methods and Electronic Signature. For details, see Security: Digital Signatures.
Explanation: Digital Signatures cryptographically bind the exact contents, so that any changes will make the Digital Signature invalid. When a Resource is created or updated, the server is expected to update relevant elements that it manages (id, lastUpdated, etc.). These changes, although expected of normal RESTful create/update operations, will break any Digital Signature that has been calculated prior. One solution is to create the Digital Signature after the REST create operation completes: one must first confirm that the resulting created/updated Resource is as expected, and then the Digital Signature is formed.
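A sketch of that "create, confirm, then sign" sequence; the comparison is simplified and the signature mechanism is left abstract because it is a local policy choice:

```typescript
// Sketch of the "sign after create" sequence described above: create the
// resource, let the server set the elements it manages (id, meta.lastUpdated),
// confirm the stored content is what was intended, then form the signature.
async function createThenSign(
  base: string,
  resource: { resourceType: string },
  sign: (bytes: Uint8Array) => Promise<string>   // e.g. JWS or XML-DSig, per local policy
): Promise<string> {
  // 1. Create the resource; the server will set id, meta.lastUpdated, etc.
  const created = await fetch(`${base}/${resource.resourceType}`, {
    method: "POST",
    headers: { "Content-Type": "application/fhir+json" },
    body: JSON.stringify(resource),
  }).then(res => res.json());

  // 2. Confirm the stored content is as expected (simplified: compare everything
  //    except the elements the server is expected to manage).
  const significant = { ...created };
  delete significant.id;
  delete significant.meta;
  if (JSON.stringify(significant) !== JSON.stringify(resource)) {
    throw new Error("stored resource differs from what was submitted");
  }

  // 3. Only now form the Digital Signature over the resource as stored.
  return sign(new TextEncoder().encode(JSON.stringify(created)));
}
```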
A variation of this happens in Messaging, Documents, and other interaction models. For details, see Ramifications of storage/retrieval variations.
De-Identification is inclusive of pseudonymization and anonymization, which are the processes of reducing privacy risk by eliminating and modifying data elements to meet a targeted use-case.
Use-Case: "Requesting Client should have access to De-Identified data only."
Trigger: Based on an Access Control decision that results in a permit with an Obligation to De-Identify, the Results delivered to the Requesting Client would be de-identified.
Consideration: This assumes the system knows the type and intensity of the de-identification algorithm, where de-identification is best viewed as a process, not an algorithm - a process that reduces Privacy risk while enabling a targeted and authorized use-case.
Modifying an element: The de-identification process may determine that specific elements need to be modified to lower privacy risk. Some methods of modifying are: eliminating the element, setting it to a static value (e.g. "removed"), fuzzing (e.g. adjusting by some random value), masking (e.g. encryption), pseudonymization (e.g. replacing with an alias), etc. Narrative and Attachment elements present particularly difficult challenges. See the standards below for further details.
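A sketch applying several of these modification methods to a copy of a Patient resource; which elements to touch, and how, is a per-use-case policy decision, and the choices below are examples only:

```typescript
// Example element-modification strategies applied to a copy of a resource.
import { createHash } from "node:crypto";

function pseudonym(value: string, salt: string): string {
  // Replace an identifier with a stable alias (keyed hashing as a simple stand-in).
  return createHash("sha256").update(salt + value).digest("hex").slice(0, 16);
}

function fuzzDate(isoDate: string): string {
  // Reduce precision to the year to lower re-identification risk.
  return isoDate.slice(0, 4);
}

function deidentifyPatient(patient: any, salt: string): any {
  const copy = structuredClone(patient);
  delete copy.name;                                               // eliminate the element
  delete copy.telecom;
  copy.address = [{ text: "removed" }];                           // static value
  if (copy.birthDate) copy.birthDate = fuzzDate(copy.birthDate);  // fuzzing
  copy.id = pseudonym(copy.id, salt);                             // pseudonym for the id
  copy.identifier = [];                                           // drop external identifiers
  return copy;
}
```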
Discussion: Obviously the most important elements for de-identification are names and identifiers. FHIR resources have many different types of ids and identifiers that serve different purposes. Some (ids) are the basis for internal links between different resources, while identifiers are mainly - but not exclusively - for correlating with external data sources. Strategies for de-identification need to consider whether re-identification with the source system is a problem, in which case ids will need to be modified - and consistently across the resource set being de-identified. External identifiers will mostly need to be removed, but even then, where they are used for internal references within the resource set, they'll need to be changed consistently.
Then, there is the question of where to make the de-identification changes. For example, the Observation Resource has a subject element that mostly refers to a Patient resource. Should it be removed? Left in place, with the Patient resource it refers to de-identified? Updated to point at a new patient resource, randomly or consistently? There are many other Reference elements on Observation that can easily be used to navigate back to the Subject; e.g., Observation.context value of Encounter or EpisodeOfCare, or Observation.performer. These also need to be de-identified, and it will depend on the intended use of the data whether these all need to be consistent with each other.
Some identifiers in the Observation Resource deserve emphasis: the .specimen is a direct identifier of a particular specimen, and would be a direct identifier of a particular patient. This is a ramification of having the specimen identifier. One solution is to create pseudo specimen resources that will stand in for the original specimen resource. This pseudo specimen management may be supplied by a trusted third party that maintains a database of pseudo-identifiers with authorized reversibility.
Care should be taken when modifying isModifier elements, as the modification will change the meaning of the Resource.
In practice, then, the de-identification process depends on the intended use of the data, the scope of the data being extracted, and the risk associated with the release of the data (e.g. data released into the public domain has a different risk than internal sharing of data within a tightly managed organization with strong information security policies).
Security-label: The resulting Resource should be marked with a security-label to indicate that it has been de-identified. This would assure that downstream use doesn't mistake this Resource as representing full fidelity. These security-labels come from the Security Integrity Observation ValueSet. Some useful security-tag vocabulary: ANONYED, MASKED, PSEUDED, REDACTED.
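For example, a de-identified resource might be labelled as follows (ANONYED is one of the integrity codes above; the label should match the process actually used):

```typescript
// Sketch: tag a de-identified resource with a security label so downstream
// consumers do not mistake it for full-fidelity data.
interface Labeled {
  meta?: { security?: { system: string; code: string; display?: string }[] };
  [key: string]: unknown;
}

function labelAsDeidentified(resource: Labeled): Labeled {
  const copy = structuredClone(resource);
  const security = copy.meta?.security ?? [];
  security.push({
    system: "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
    code: "ANONYED",
    display: "anonymized",
  });
  copy.meta = { ...copy.meta, security };
  return copy;
}
```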
Further standards: ISO/IEC 29100 Information technology - Security techniques - Privacy framework; ISO/IEC 20889 Privacy enhancing data de-identification terminology and classification of techniques; ISO/TS 25237 Health informatics - Pseudonymization; NIST IR 8053 - De-Identification of Personal Information; IHE De-Identification Handbook; DICOM (Part 15, Chapter E).
Use-Case: There are times when test data is needed. Test data are data that are not associated with any real patient. Test data are usually representative of expected data and are published for the purpose of testing. Test data may be fully fabricated, synthetic, or derived from use-cases that had previously caused failures.
Trigger: When test data are published, it may be important to identify the data as test data.
Consideration: This identification may be to assure that the test data is not misunderstood as real data, and that the test data is not factored into their application statistics or reporting. However, there is a risk that identifying test data may inappropriately thwart the intended test that the data are published to test.
Discussion:
Test data could be isolated in a server specific to test data.
Test data could be intermingled with real-patient data using one or both of the following methods:
Considerations: Note there is a risk when co-mingling test data with real patient data that someone will accidentally use test data without realizing it is test data.
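One possible guard - an illustrative assumption, not a requirement of this section - is to tag co-mingled test resources with the HTEST ("test health data") security label and check for it before counting resources in statistics:

```typescript
// Possible guard against accidentally using test data in reporting: check for
// the HTEST ("test health data") security label before counting a resource.
// Tagging test data with HTEST is an example approach chosen for this sketch.
function isTestData(resource: {
  meta?: { security?: { system?: string; code?: string }[] };
}): boolean {
  return (resource.meta?.security ?? []).some(
    label =>
      label.system === "http://terminology.hl7.org/CodeSystem/v3-ActReason" &&
      label.code === "HTEST"
  );
}
```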
In the STU3 release, FHIR includes building blocks and principles for creating secure, privacy-oriented health IT systems; FHIR does not mandate a single technical approach to security and privacy.
In future releases, we anticipate including guidance on: