In a month of headlines dominated by questionable foreign and domestic policy decisions, a key debate has been raging alongside: how society should balance the individual right to privacy with the protection of children from sexual exploitation.
On August 6, Apple Inc. announced – and then renounced following public outcry – its intention to begin scanning user devices for Child Sexual Abuse Material (CSAM). The announcement came without prior warning and was quick to attract both supporters and critics.
The American National Center for Missing and Exploited Children (NCMEC) holds a database of known CSAM imagery. Each image in the NCMEC database is translated into a “hash”, a form of digital fingerprint, which Apple intends to compare against hashes generated from images on customers’ devices.
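In outline, the matching step amounts to a set lookup: a fingerprint computed from the user’s image is checked against the set of fingerprints derived from the NCMEC database. The sketch below is illustrative only, not Apple’s implementation; it substitutes an ordinary cryptographic hash for Apple’s perceptual “NeuralHash”, and the function names are hypothetical.

    import hashlib

    def fingerprint(image_bytes: bytes) -> str:
        # Stand-in for a perceptual hash; Apple's system uses NeuralHash
        # so that resized or re-encoded copies of an image still match.
        return hashlib.sha256(image_bytes).hexdigest()

    def matches_known_csam(image_bytes: bytes, known_hashes: set[str]) -> bool:
        # A match means the fingerprint appears in the database of known
        # material; the comparison reveals nothing else about the image.
        return fingerprint(image_bytes) in known_hashes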
When an Apple user uploads an image or video to iCloud, their device will automatically generate a safety certificate recording whether the match result was positive or negative. Once a threshold of positive CSAM matches has been crossed, a manual review takes place; if the content is deemed suspicious, it is referred to the NCMEC, which can then report the user to the authorities for possible prosecution.
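The thresholding step can be pictured as a simple count, though this is a simplification: Apple’s actual design uses cryptographic safety vouchers so that no individual result can be read until the threshold is reached. The sketch below is illustrative only, and the threshold value is not taken from Apple.

    MATCH_THRESHOLD = 30  # illustrative value

    def needs_manual_review(match_results: list[bool]) -> bool:
        # Each element records whether one uploaded image matched a known
        # hash; human review is triggered only once enough positives accrue.
        return sum(match_results) >= MATCH_THRESHOLD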
Amongst supporters of this move are UK Health Secretary Sajid Javid and the UK anti-child-exploitation charity the Internet Watch Foundation, which stated that it puts ‘child safety at the forefront of new technology’.
In addition to this photo matching, Apple intends to use machine learning to identify and blur sexually explicit content sent through iMessage. Parental safety settings will allow parents to be notified when their child’s device sends or receives suspected explicit content, even if it is not necessarily CSAM.
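Conceptually this is a classifier score compared against a decision threshold: content the on-device model judges likely to be explicit is blurred, and a parental notification is generated only where that setting is enabled. The sketch below is a rough illustration under those assumptions; the score, threshold and setting names are hypothetical rather than Apple’s.

    def screen_message_image(explicit_score: float,
                             parental_alerts_enabled: bool,
                             threshold: float = 0.9) -> tuple[bool, bool]:
        # Returns (blur_image, notify_parent) for a hypothetical on-device
        # classifier score in the range 0..1.
        if explicit_score >= threshold:
            return True, parental_alerts_enabled
        return False, False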
This feature has raised concerns amongst LGBTQ+ advocacy groups that LGBTQ+ children on family accounts could be “outed” to their parents by this software, presenting a threat to their wellbeing and safety. This is particularly concerning in jurisdictions where LGBTQ+ rights and identity are threatened.
The wider proposal has also attracted opposition from the American Civil Liberties Union (ACLU) and Privacy International. Currently, more than 90 policy groups have signed an open letter calling on Apple to reverse the decision amid privacy and safety concerns.
Practical difficulties also surround the effectiveness of this policy. Notably, the system will only be able to flag CSAM which is already held in the NCMEC database. CSAM produced by the user (be it by a child or an adult) and uploaded to iCloud will not be recognised by matching, and so will not be flagged as suspicious content. Furthermore, CSAM distributed via third-party apps such as Snapchat or WhatsApp without being uploaded to iCloud will also go undetected. Whilst the system will have some limited ability to detect known CSAM already in existence, it will be powerless against current or novel child exploitation carried out through third-party apps.
To address this flaw, Apple would have to both force data sharing with third-party companies, which is unlikely and legally dubious, and introduce machine learning for the identification of new CSAM, which would likely suffer from a high rate of false positives.
Approaching the topic more broadly requires acknowledging that such software could be applied outside the scope of CSAM identification. For example, the same system could be used to identify and report the possession of images which governments may find objectionable. Apple itself has sought to reassure the public that it will not comply with government demands to expand photo-scanning, but given the company’s previous controversies involving censorship and data collection, some critics remain unconvinced. Can we trust Apple to place privacy over profits?
Apple’s proposal is not without merit, and efforts to protect children from exploitation are admirable. However, the intrinsic limitations of this proposal, combined with the invasion of privacy and the potential safety risks it poses, make it undesirable and ill-conceived.