This page is part of the FHIR Specification (v4.0.1: R4 - Mixed Normative and STU) in its permanent home (it will always be available at this URL). A newer ballot version exists (v6.0.0-ballot4: Release 6 Ballot (1st Full Ballot) - see Ballot Notes).
The current version which supersedes this version is 5.0.0. For a full list of available versions, see the Directory of published versions.
The Security and Privacy Module describes how to protect a FHIR server (through access control and authorization), how to document what permissions a user has granted (consent), and how to keep records about what events have been performed (audit logging and provenance). FHIR does not mandate a single technical approach to security and privacy; rather, the specification provides a set of building blocks that can be applied to create secure, private systems.
The Security and Privacy module includes the following materials:
| Resources | Datatypes | Implementation Guidance and Principles |
The following common use-cases are elaborated below:
FHIR is focused on the data access methods and encoding, leveraging existing security solutions. Security in FHIR needs to focus on the set of considerations required to ensure that data can be discovered, accessed, or altered only in accordance with expectations and policies. Implementations SHOULD leverage existing security standards and implementations to ensure that:
For general security considerations and principles, see Security. Please leverage mature Security Frameworks covering device security, cloud security, big-data security, service-to-service security, etc. See NIST Mobile Device Security and OWASP Mobile Security.
These security frameworks include prioritized lists of the most important concerns. Recent evidence indicates a lack of implementer attention to addressing the common security vulnerabilities emphasized by the OWASP API Security Top 10. Reviewing the OWASP Top Ten and the OWASP Mobile Top 10, and ensuring those vulnerabilities are mitigated, is important for good security.
Privacy in FHIR includes the set of considerations required to ensure that individual data are treated according to an individual's Privacy Principles and Privacy-By-Design. FHIR includes implementation guidance to ensure that:
Use case: A FHIR server SHOULD ensure that API access is allowed for authorized requests and denied for unauthorized requests.
Approach: Authorization details can vary according to local policy and according to the access scenario (e.g., sharing data among institution-internal subsystems vs. sharing data with trusted partners vs. sharing data with third-party user-facing apps). In general, FHIR enables a separation of concerns between the FHIR REST API and standards-based authorization protocols like OAuth.
For the use case of user-facing third-party app authorization, we recommend the OAuth-based SMART protocol (see Security: Authentication) as an externally-reviewed authorization mechanism with a real-world deployment base, but we note that community efforts are underway to explore a variety of approaches to authorization.
Resource Servers MUST enforce the authorization associated with the access token. This enforcement includes verification of the token, verification of the token expiration, and might include using introspection to verify the token has not been revoked. This enforcement includes constraining results returned to the scopes authorized by the access token. The Resource server might have further access controls beyond those in the token to enforce, such as Consent or business rules.
For further details, see Security: Authorization and Access Control .
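The token-enforcement steps above (verify expiration and revocation, constrain results to the authorized scopes) can be sketched as follows. This is a minimal illustration, not a complete implementation: the token field names and the `patient/Observation.read` scope syntax follow the SMART App Launch convention, and revocation is assumed to have been learned via introspection.

```python
import time

def token_permits(token, resource_type, interaction, now=None):
    """Return True only if the token is unexpired, not revoked, and a
    granted scope covers the requested resource type and interaction."""
    now = time.time() if now is None else now
    if token.get("exp", 0) <= now:      # token expiration check
        return False
    if token.get("revoked"):            # e.g., learned via introspection
        return False
    for scope in token.get("scope", "").split():
        try:
            _context, action = scope.split("/", 1)
            stype, mode = action.split(".", 1)
        except ValueError:
            continue                    # ignore malformed scopes
        if stype in (resource_type, "*") and mode in (interaction, "*"):
            return True
    return False

token = {"scope": "patient/Observation.read patient/Patient.read",
         "exp": time.time() + 300}
assert token_permits(token, "Observation", "read")       # granted by scope
assert not token_permits(token, "Observation", "write")  # no write scope
```

Note that a permit from the token is necessary but not sufficient; as described above, the Resource Server may still apply Consent or business rules on top of this decision.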
Use-Case: When a user has restricted rights but attempts to do a query they do not have rights to, they SHOULD NOT be given the data. Policy SHOULD be used to determine if the user query SHOULD result in an error, zero data, or the data one would get after removing the non-authorized parameters.
Approach: Enforcement is by local enforcement methods. Note that community efforts are underway to explore a variety of approaches to enforcement.
Example: Using _include or _revinclude to get at resources beyond those authorized. Ignoring (removing) the _include parameter would give some results, just not the _include Resources. This could be silently handled and thus give some results, or it could be returned as an error.
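The "silently remove the non-authorized parameter" option above can be sketched as follows. Which _include/_revinclude values a given requester is allowed to use is a local-policy assumption, not something FHIR defines.

```python
from urllib.parse import urlencode, parse_qsl

def strip_unauthorized(query, allowed_includes):
    """Drop _include/_revinclude parameters not in the allowed set; return
    the rebuilt query string and the list of dropped parameters, so the
    server can choose between partial results and an error response."""
    kept, dropped = [], []
    for name, value in parse_qsl(query):
        if name in ("_include", "_revinclude") and value not in allowed_includes:
            dropped.append((name, value))
        else:
            kept.append((name, value))
    return urlencode(kept), dropped

query = "status=final&_include=Observation:subject"
new_query, dropped = strip_unauthorized(query, allowed_includes=set())
assert new_query == "status=final"
assert dropped == [("_include", "Observation:subject")]
```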
Use case: "Access to protected Resources are enabled though user Role-Based, Context-Based, and/or Attribute-Based Access Control."
Approach: Ensure that the level of assurance for identity proofing reflects the appropriate risk, given the issued party's exposure to health information. Users SHOULD be identified and SHOULD have their Functional and/or Structural role declared when these roles are related to the functionality the user is interacting with. Roles SHOULD be conveyed using standard codes (e.g., from the Example Security Role Vocabulary). A purpose of use SHOULD be asserted for each requested action on a Resource. Purpose of use SHOULD be conveyed using standard codes from the Purpose of Use Vocabulary.
The FHIR core specification does not include a "User" resource, as a User resource would be general IT and used well beyond healthcare workflows. A RESTful User resource is defined in the System for Cross-domain Identity Management (SCIM) specification. User role assignment is typically managed in the general IT system, but MAY be influenced by FHIR Resource binding. For example, for users that are Practitioners, the PractitionerRole MAY indicate functional or structural roles.
When using OAuth, the requested action on a Resource for one or more specified purposes of use, and the role of the user, are managed by the OAuth authorization service (AS) and MAY be communicated in the security token where JWT tokens are used. For details, see Security: HCS vocabulary.
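As an illustration of role and purpose of use carried in a JWT: the code values below come from the HL7 security vocabularies referenced here (PROV for a healthcare provider role, TREAT for treatment as purpose of use), but the claim names themselves are assumptions for this sketch, since FHIR does not mandate specific JWT claims.

```python
import base64, json

# Illustrative decoded JWT payload; claim names are assumptions, code
# values are from the HL7 role and purpose-of-use vocabularies.
payload = {
    "sub": "practitioner-123",
    "role": "PROV",             # structural role: healthcare provider
    "purpose_of_use": "TREAT",  # purpose of use: treatment
    "exp": 1893456000,
}

# A JWT is three base64url segments: header.payload.signature.
# Encoding just the payload segment (signature omitted) looks like:
segment = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=")
decoded = json.loads(base64.urlsafe_b64decode(segment + b"=" * (-len(segment) % 4)))
assert decoded["purpose_of_use"] == "TREAT"
```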
Use case: "A FHIR server SHOULD keep a complete, tamper-proof log of all API access and other security- and privacy-relevant events."
Approach: FHIR provides an AuditEvent resource suitable for use by FHIR clients and servers to record when a security- or privacy-relevant event has occurred. This form of audit logging records as much detail as reasonable at the time the event happened. The FHIR AuditEvent is aligned and cross-referenced with the IHE Audit Trail and Node Authentication (ATNA) Profile. For details, see Security: Audit.
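A minimal sketch of an R4-style AuditEvent recording a RESTful read might look like the following. The element names follow the R4 AuditEvent resource, but the specific references and codings are illustrative; a real system would populate many more details.

```python
import json

audit_event = {
    "resourceType": "AuditEvent",
    "type": {"system": "http://terminology.hl7.org/CodeSystem/audit-event-type",
             "code": "rest"},
    "action": "R",                         # RESTful Read
    "recorded": "2024-05-01T12:30:00Z",
    "outcome": "0",                        # success
    "agent": [{"who": {"reference": "Practitioner/example"},
               "requestor": True}],
    "source": {"observer": {"reference": "Device/fhir-server"}},
    # Recording the Patient as an entity supports the Accounting of
    # Disclosures / patient-report use cases discussed below.
    "entity": [{"what": {"reference": "Patient/example"}}],
}

print(json.dumps(audit_event, indent=2))
```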
Organizations SHOULD have a policy regarding purging of data, such as AuditEvent resources, which can become numerous. Some types of AuditEvent MAY have limited value after a period of time (e.g., a year) and MAY be archived and purged. This purge must be in alignment with regulations such as medical records retention, security, and privacy regulations. For example, AuditEvents that are related to Patient Privacy Transparency would not be purged until that patient's record was purged (see Purge Patient Record Operation).
Use case: "A Patient SHOULD be offered a report that informs about how their data is Collected, Used, and Disclosed."
Approach: The AuditEvent resource can inform this report.
There are many motivations to provide a Patient with some report on how their data was used. There is a very restricted version of this in HIPAA as an "Accounting of Disclosures"; there are others that would include more accesses. The result is a human-readable report.
The raw material used to create this report can be derived from a well-recorded 'security audit log', specifically based on AuditEvent. The format of the report delivered to the Patient is not further discussed but might be: printed on paper, PDF, comma-separated file, or a FHIR Document made up of filtered and crafted AuditEvent Resources.
The report would indicate, to the best ability, Who accessed What data from Where at When for Why purpose. The 'best ability' recognizes that some events happen during emergent conditions where some knowledge is not knowable. The report usually does need to be careful not to abuse the Privacy rights of the individual that accessed the data (Who). The report would describe the data that was accessed (What), not duplicate the data.
In order to enable Privacy Accounting of Disclosures and Access Logs, and to enable privacy office and security office audit log analysis, all AuditEvent records SHOULD include a reference to the Patient/Subject of the activity being recorded. Reasonable efforts SHOULD be taken to assure the Patient/Subject is recorded, but it is recognized that there are times when this is not reasonable. See deeper details on AuditEvent.
Some events are known to be subject to the Accounting of Disclosures report when the event happens, and thus can be recorded as an Accounting of Disclosures - see the example Accounting of Disclosures. Other events must be pulled from the security audit log. A security audit log will record ALL actions upon data, regardless of whether they are reportable to the Patient. This is true because the security audit log is used for many other purposes - see Audit Logging.
These recorded AuditEvents MAY need to be manipulated to protect organization or employee (provider) privacy constraints. Given the large number of AuditEvents, there MAY be multiple records of the same actual access event, so the reporting will need to de-duplicate.
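The de-duplication step can be sketched as follows: multiple AuditEvent records (e.g., one written by the client and one by the server) can describe the same actual access. Here duplicates are collapsed by a who/what/action key within a time bucket; both the key and the bucket size are local-policy assumptions, and the flattened event shape is illustrative.

```python
from datetime import datetime

def dedupe(events, bucket_seconds=60):
    """Collapse records of the same access: same agent, patient, and
    action within the same time bucket count as one event."""
    seen, unique = set(), []
    for ev in events:
        t = datetime.fromisoformat(ev["recorded"].replace("Z", "+00:00"))
        key = (ev["agent"], ev["patient"], ev["action"],
               int(t.timestamp()) // bucket_seconds)
        if key not in seen:
            seen.add(key)
            unique.append(ev)
    return unique

events = [
    {"agent": "Practitioner/1", "patient": "Patient/9", "action": "R",
     "recorded": "2024-05-01T12:30:00Z"},  # server-side record
    {"agent": "Practitioner/1", "patient": "Patient/9", "action": "R",
     "recorded": "2024-05-01T12:30:05Z"},  # client-side record, same access
]
assert len(dedupe(events)) == 1
```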
Use case: "Documentation of a Patient's Privacy Consent Directive - rules for Collection, Use, and Disclosure of their health data."
Approach: FHIR provides a Consent resource suitable for use by FHIR clients and servers to record the current Privacy Consent state. The meaning of a consent, or of the absence of a consent, is a local policy concern. The Privacy Consent MAY be a pointer to privacy rules documented elsewhere, such as a policy identifier or an identifier in XACML. The Privacy Consent has the ability to point at a scanned image of an ink-on-paper signing ceremony, and supports digital signatures through use of Provenance. The Privacy Consent has the ability to include some simple FHIR-centric base and exception rules.
When a use / access / disclosure is requested and an Access Control decision finds multiple Consent resources apply equally, a policy must cover this case. For example: one possible policy might be that the most recent Consent would be seen as more authoritative and thus apply rather than an older Consent. There MAY also be policy mechanisms to assure that only one Consent is ever active for a given Patient and context.
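The "most recent Consent is authoritative" example policy above can be sketched as follows. This is one possible local policy, not a rule defined by FHIR; the Consent fields used (status, dateTime) follow the Consent resource.

```python
def most_authoritative(consents):
    """Example precedence policy: among active Consents that apply
    equally, treat the most recently dated one as authoritative."""
    active = [c for c in consents if c.get("status") == "active"]
    if not active:
        return None
    return max(active, key=lambda c: c["dateTime"])

consents = [
    {"id": "old", "status": "active",   "dateTime": "2022-01-01"},
    {"id": "new", "status": "active",   "dateTime": "2024-03-15"},
    {"id": "rev", "status": "inactive", "dateTime": "2024-06-01"},
]
assert most_authoritative(consents)["id"] == "new"
```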
All uses of FHIR Resources would be security/privacy relevant and thus SHOULD be recorded in an AuditEvent. Data access that qualifies as a Disclosure SHOULD additionally be recorded as a Disclosure; see the Disclosure Audit Event Example.
For Privacy Consent guidance and examples, see Consent Resource .
Use case: "All FHIR Resources SHOULD be capable of having their Provenance fully described."
Approach: FHIR provides the Provenance resource suitable for use by FHIR clients and servers to record the full provenance details: who, what, where, when, and why. A Provenance resource can record details for Create, Update, and Delete; or any other activity. Generally, Read operations would be recorded using AuditEvent . Many Resources include these elements within; this is done when that provenance element is critical to the use of that Resource. This overlap is expected and cross-referenced on the Five Ws pattern . For details, see Provenance Resource .
Use case: "For any given query, need Provenance records also."
Approach: Given that a system is using Provenance records: when one needs the Provenance records in addition to the results of a query on other records (e.g., a query on MedicationRequest), then one uses reverse include to request that all Provenance records also be returned; that is, add ?_revinclude=Provenance:target. For details, see _revinclude.
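Building the query described above looks like the following; the base URL and the patient search parameter are illustrative.

```python
from urllib.parse import urlencode

# Search MedicationRequest and ask the server to also return any
# Provenance resources whose target points at the matched resources.
base = "https://fhir.example.org/MedicationRequest"
params = {"patient": "Patient/example", "_revinclude": "Provenance:target"}
url = base + "?" + urlencode(params)

assert url == ("https://fhir.example.org/MedicationRequest"
               "?patient=Patient%2Fexample&_revinclude=Provenance%3Atarget")
```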
Use case: "Digital Signature is needed to prove authenticity, integrity, and non-repudiation."
Approach: FHIR Resources are often parts of a Medical Record or are communicated as part of formal Medical Documentation. As such, for any given Resource there is a need to cryptographically bind a signature, so that the receiving or consuming actor can verify authenticity, integrity, and non-repudiation. This functionality is provided through the signature element in the Provenance Resource, where the signature can be any signature agreed to by local policy, including Digital Signature methods and Electronic Signature. For details, see Security: Digital Signatures.
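Because a signature covers the exact bytes signed, any server-managed change to the resource invalidates it. A quick sketch, with a SHA-256 digest standing in for a full signature and illustrative resource content:

```python
import hashlib, json

def digest(resource):
    """Digest over a canonical JSON rendering; stands in for the
    signature-calculation step in this sketch."""
    canonical = json.dumps(resource, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

resource = {"resourceType": "Observation", "status": "final",
            "meta": {"lastUpdated": "2024-05-01T12:00:00Z"}}
before = digest(resource)

# A server-managed update (here, meta.lastUpdated) changes the bytes,
# so a signature calculated before the update no longer verifies.
resource["meta"]["lastUpdated"] = "2024-05-01T12:00:01Z"
assert digest(resource) != before
```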
Digital Signatures cryptographically determine the exact contents, so that any changes will make the Digital Signature invalid. When a given Resource is to be created or updated, the server is expected to update relevant elements that it manages (id, lastUpdated, etc.). These changes, although expected of normal RESTful create/update operations, will break any Digital Signature that has been calculated prior. One solution is to create the Digital Signature after the REST create operation completes: one must first confirm that the resulting created/updated Resource is as expected, then the Digital Signature is formed. A variation of this happens in Messaging, Documents, and other interaction models. For details, see Ramifications of storage/retrieval variations.

Use case: A Privacy audit is tracking down why some data was gathered, when it is reported that the data should not have been.

Trigger: When there is a need to trace whether Resource accesses were authorized by a Consent.

Approach: AuditEvent can record Access Control decisions, including permit and deny, and can indicate when that Access Control decision included rules from a Patient's Consent (e.g., an Example Consent recorded in the AuditEvent as an AuditEvent.entity).
De-Identification is inclusive of pseudonymization and anonymization; these are the processes of reducing privacy risk by eliminating and modifying data elements to meet a targeted use-case.
Use-Case: "The Requesting Client SHOULD have access to De-Identified data only."
Trigger: Based on an Access Control decision that results in a permit with an Obligation to De-Identify, the Results delivered to the Requesting Client would be de-identified.
Consideration: This assumes the system knows the type and intensity of the de-identification algorithm, where de-identification is best viewed as a process, not an algorithm - a process that reduces Privacy risk while enabling a targeted and authorized use-case.
Modifying an element: The de-identification process MAY determine that specific elements need to be modified to lower privacy risk. Some methods of modifying are: eliminating the element, setting it to a static value (e.g., "removed"), fuzzing (e.g., adjusting by some random value), masking (e.g., encryption), pseudonymization (e.g., replacing with an alias), etc. Narrative and Attachment elements present particularly difficult challenges. See the standards below for further details.
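A few of the element-modification methods listed above can be sketched as follows, applied to an illustrative Patient resource: eliminate an element, generalize another, and pseudonymize an identifier with a keyed hash. A real de-identification process would be driven by a risk analysis of the targeted use-case, not a fixed list like this.

```python
import copy, hashlib

def pseudonym(value, secret="local-secret"):
    # Keyed-hash alias; authorized reversal would require a trusted
    # third party keeping the mapping, as discussed below.
    return hashlib.sha256((secret + value).encode()).hexdigest()[:12]

def deidentify(patient):
    out = copy.deepcopy(patient)
    out.pop("name", None)                           # eliminate the element
    out["birthDate"] = out["birthDate"][:4]         # generalize to the year
    for ident in out.get("identifier", []):
        ident["value"] = pseudonym(ident["value"])  # pseudonymize
    return out

patient = {"resourceType": "Patient",
           "name": [{"family": "Example"}],
           "birthDate": "1970-03-07",
           "identifier": [{"system": "urn:mrn", "value": "12345"}]}
redacted = deidentify(patient)
assert "name" not in redacted and redacted["birthDate"] == "1970"
```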
Discussion: Obviously the most important elements for de-identification are names and identifiers. FHIR resources have many different types of ids and identifiers that serve different purposes. Some (ids) are the basis for internal links between different resources, while identifiers are mainly - but not exclusively - for correlating with external data sources. Strategies for de-identification need to consider whether re-identification with the source system is a problem, in which case ids will need to be modified, and modified consistently across the resource set being de-identified. External identifiers will mostly need to be removed, but even then, where they are used for internal references within the resource set, they'll need to be changed consistently.
Then, there is the question of where to make the de-identification changes. For example, the Observation Resource has a subject element that mostly refers to a Patient resource. Should it be removed? Left and the Patient resource it refers to be de-identified? Updated to a new patient resource randomly or consistently? There are many other Reference elements on Observation that can easily be used to navigate back to the Subject; e.g., Observation.context value of Encounter or EpisodeOfCare; or Observation.performer. These also need to be de-identified, and it will depend on the intended use of the data whether these all need to be consistent with each other.
Some identifiers in Observation Resource:
Emphasis: The .specimen is a direct identifier of a particular specimen, and would be a direct identifier of a particular patient. This is a ramification of having the specimen identifier. One solution is to create pseudo specimen resources that stand in for the original specimen resources. This pseudo specimen management is supplied by a trusted third party that maintains a database of pseudo-identifiers with authorized reversibility.
Care SHOULD be taken when modifying an isModifier element, as the modification will change the meaning of the Resource.
De-Identified data might not be compliant with constraints commonly imposed on non-de-identified data, such as clinical Profiles (e.g., VitalSigns). For example, in the VitalSigns profile there is a requirement to populate .effectiveDateTime, which would be an Indirect-Identifier, and for which elimination, generalization, fuzzing, and/or timezone change might be called for in the De-Identification analysis algorithm.
In practice, then, the de-identification process depends on the intended use of the data, the scope of the data being extracted, and the risk associated with the release of the data (e.g., data released into the public domain has a different risk than internal sharing of data within a tightly managed organization with strong information security policies).
Security-label: The resulting Resource SHOULD be marked with a security-label to indicate that it has been de-identified. This would assure that downstream use doesn't mistake this Resource as representing full fidelity. These security-labels come from the Security Integrity Observation ValueSet. Some useful security-tag vocabulary: ANONYED, MASKED, PSEUDED, REDACTED.
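Applying such a label might look like the following sketch. The REDACTED code and the v3-ObservationValue code system are from the vocabulary named above; how a server chooses and enforces labels is local policy.

```python
def label_deidentified(resource, code="REDACTED"):
    """Add a security label to meta.security marking the resource as
    de-identified, so downstream use doesn't assume full fidelity."""
    meta = resource.setdefault("meta", {})
    meta.setdefault("security", []).append({
        "system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
        "code": code,
    })
    return resource

resource = {"resourceType": "Observation", "status": "final"}
label_deidentified(resource)
assert resource["meta"]["security"][0]["code"] == "REDACTED"
```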
Further Standards: ISO/IEC 29100 Information technology - Security techniques - Privacy framework; ISO/IEC 20889 Privacy enhancing data de-identification terminology and classification of techniques; ISO/TS 25237 Health informatics - Pseudonymization; NIST IR 8053 - De-Identification of Personal Information; IHE De-Identification Handbook; DICOM (Part 15, Chapter E).
Use-Case: There are times when test data are needed. Test data are data that are not associated with any real patient. Test data are usually representative of expected data and are published for the purpose of testing. Test data MAY be fully fabricated, synthetic, or derived from use-cases that had previously caused failures.
Trigger: When test data are published, it MAY be important to identify the data as test data.
Consideration: This identification MAY be to assure that the test data are not misunderstood as real data, and that the test data are not factored into statistics or reporting. However, there is a risk that identifying test data MAY inappropriately thwart the intended test that the data are published to test.
Discussion:
Test data could be isolated in a server specific to test data.
Test data could be intermingled with real-patient data using one or both of the following methods:
Considerations: Note there is a risk when co-mingling test data with real patient data that someone will accidentally use test data without realizing it is test data.
Use-Case: There are times when data will have different authorization requirements, where some data must be restricted to a smaller subset of the audience that has access to other data, such as sensitive health topics within a patient compartment.
Trigger: When data are exposed that have links (e.g., Reference) to other data that the recipient does not have access to.
Consideration: Note that providing resources with links to other resources that are not accessible to the given user still exposes the id (identifier) of that second resource. That id (identifier) SHOULD NOT directly expose information, but the id (identifier) MAY be seen as a link in other resources as well. The id (identifier) can then be used to correlate a set of resources, and this correlation needs to be considered in Privacy risk exposure.
Discussion: This correlation MAY be desirable, or MAY present a Privacy risk that is unacceptable.
Considerations: Note there is a risk when exposing security label segmented data to recipients that have limited authorization.
In the STU3 release, FHIR includes building blocks and principles for creating secure, privacy-oriented health IT systems; FHIR does not mandate a single technical approach to security and privacy.
In future releases, we anticipate including guidance on:
resource
and