Merge Unwanted Information into Harassment (#328)
Generalise principles to abuse mitigation.

Fixes #310 and fixes #311.
rhiaro authored Aug 23, 2023
1 parent 3595da8 commit a501426
120 changes: 52 additions & 68 deletions index.html
@@ -1584,88 +1584,73 @@
helps different users to react appropriately.


## Protecting web users from abusive behaviour

<div class="practice">
<p>
<span class="practicelab" id="abuse-reporting">
Systems that allow for communicating on the Web must provide an
effective capability to report abuse.
</span>
</p>
</div>
<div class="practice">
<p>
<span class="practicelab" id="abuse-protection">
[=User agents=] and [=sites=] must
take steps to protect their users from abusive behaviour, and abuse
mitigation must be considered when designing web platform features.
</span>
</p>
</div>

Online <dfn>harassment</dfn> is the "pervasive or severe targeting of an individual or group online
through harmful behavior" [[PEN-Harassment]]. Harassment is a prevalent problem on the web,
particularly via social media. While harassment may affect any person using the web, it may be more
severe and its consequences more impactful for LGBTQ people, women, people in racial or ethnic
minorities, people with disabilities, [=vulnerable people=] and other marginalized groups.

[=Harassment=] is both a violation of privacy itself and can be enabled or
exacerbated by other violations of privacy.

Harassment may include: sending [=unwanted information=]; directing others to contact
or bother a person ("dogpiling"); disclosing [sensitive information](#sensitive-information) about a person; posting false information about a person; impersonating a person; insults; threats; and hateful or demeaning speech.

Disclosure of identifying or contact information (including "doxxing") often enables additional attackers to send persistent [=unwanted information=] that amounts to harassment.
Disclosure of location information can be used to intrude on a
person's physical safety or space.

Reporting mechanisms are mitigations, but may not prevent harassment, particularly in cases where
hosts, moderators, or other intermediaries are supportive of or complicit in the abuse.
Effective reporting is likely to require:

* standardized mechanisms to identify abuse reporting contacts;
* sites and user agents to provide visible and usable ways to report abuse;
* identifiers to refer to senders and content;
* the ability to provide context and explanation of harms;
* people responsible for promptly responding to reports;
* tools for pooling mitigation information (see [[[#example-reducing-unwanted-information]]]).
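As a non-normative illustration, an abuse report meeting the requirements above might carry the following fields. The `AbuseReport` type, its field names, and `route_report` are hypothetical, not drawn from any standard; the fallback address relies only on the `abuse@` mailbox that RFC 2142 reserves for this role.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AbuseReport:
    """Minimal abuse report; all field names are illustrative only."""
    reporter: str          # identifier of the person reporting
    reported_sender: str   # stable identifier referring to the sender
    content_ids: list      # identifiers referring to the offending content
    context: str           # free-text explanation of the harm caused
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def route_report(report: AbuseReport, contacts: dict) -> str:
    """Look up the responsible reporting contact for the sender's domain.
    `contacts` maps a domain to its advertised abuse-reporting address."""
    domain = report.reported_sender.split("@")[-1]
    # Fall back to the abuse@ mailbox reserved by RFC 2142.
    return contacts.get(domain, f"abuse@{domain}")
```

For example, a report about `troll@example.com` with no known reporting contact would be routed to `abuse@example.com`.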

<aside class="note">
Some useful research overviews of online harassment include: [[?PEW-Harassment]],
[[?Addressing-Cyber-Harassment]] and [[?Internet-of-Garbage]].
</aside>

<dfn>Unwanted information</dfn> covers a broad range of unsolicited communication, from messages
that are typically harmless individually but that become a nuisance in aggregate (spam) to the
sending of explicit, graphic, or violent images.

System designers should take steps to make the sending of unwanted information more difficult
or more costly, and to make the senders more accountable.


<aside class="example" id="example-reducing-unwanted-information">
Examples of mitigations include:

* Restricting what new users of a service can post, e.g. limiting links and media until a user has
interacted a sufficient number of times over a given period with a larger group. This helps to
raise the cost of producing [sock puppet accounts](https://en.wikipedia.org/wiki/Sock_puppet_account) and gives new users time to understand local norms before posting.
* Only accepting communication between [=people=] who have an established relationship of some kind,
such as being part of a shared group. Protocols should consider requiring a handshake between
[=people=] prior to enabling communication.
@@ -1674,10 +1659,9 @@
* Supporting the ability for [=people=] to block another [=actor=] such that they cannot send information
again.
* Pooling mitigation information, for instance shared block lists, shared spam-detection
information, or public information about misbehaving [=actors=].
* Enabling users to filter out or hide information or media based on tags or content warnings.
</aside>
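The pooling of mitigation information mentioned in the example above could be sketched as follows; `pool_blocklists` and the agreement threshold are illustrative assumptions, not part of any standard:

```python
def pool_blocklists(sources: dict[str, set[str]],
                    min_sources: int = 2) -> set[str]:
    """Merge block lists from several trusted sources, blocking an actor
    only when at least `min_sources` independent sources list it.
    Requiring agreement limits the impact of any single bad or
    compromised list on the pooled result."""
    counts: dict[str, int] = {}
    for listed in sources.values():
        for actor in listed:
            counts[actor] = counts.get(actor, 0) + 1
    return {actor for actor, n in counts.items() if n >= min_sources}
```

With three subscribed lists, only actors named by at least two of them end up blocked.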

## Vulnerability {#vulnerability}
