
Harassment, for #310 and #311 #328

Merged · 6 commits · Aug 23, 2023

Changes from 4 commits
111 changes: 44 additions & 67 deletions index.html

@@ -1738,88 +1738,66 @@
Link this to [[[#guardians]]].
</aside>

## Protecting users from abusive behaviour

<div class="practice">
<p>
<span class="practicelab" id="abuse-reporting">
Systems that allow for communicating on the Web must provide an
effective capability to report abuse. [=User agents=] and [=sites=] must
take steps to protect their users from abusive behaviour, and abuse
mitigation must be considered when designing web platform features.
</span>
</p>
</div>

Online <dfn>harassment</dfn> is the "pervasive or severe targeting of an individual or group online
through harmful behavior" [[PEN-Harassment]]. Harassment is a prevalent problem on the web,
particularly via social media. While harassment may affect any person using the web, it may be more
severe and its consequences more impactful for LGBTQ people, women, people in racial or ethnic
minorities, people with disabilities, [=vulnerable people=] and other marginalized groups.

<aside class="note">
Some useful research overviews of online harassment include: [[?PEW-Harassment]],
[[?Addressing-Cyber-Harassment]] and [[?Internet-of-Garbage]].
</aside>

[=Harassment=] is both a violation of privacy itself and can be enabled or
exacerbated by other violations of privacy.

Harassment may include: sending [=unwanted information=]; directing others to contact
or bother a person ("dogpiling"); disclosing [sensitive information](#sensitive-information) about
a person; posting false information about a person; impersonating a person; insults; threats; and
hateful or demeaning speech.

Disclosure of identifying or contact information (including "doxxing") can often be used to cause
additional attackers to send persistent [=unwanted information=] that amounts to harassment.
Disclosure of location information can be used to intrude on a
person's physical safety or space.

Mitigations for harassment include, but extend beyond, the mitigations for unwanted information and
other privacy principles. Harassment can also involve harmful activity distributed more widely than
to the target alone.


Reporting mechanisms are mitigations, but may not prevent harassment, particularly in cases where
hosts or intermediaries are supportive of or complicit in the abuse.

<div class="note">
Effective reporting is likely to require:

* standardized mechanisms to identify abuse reporting contacts;
* sites and user agents to provide visible and usable ways to report abuse;
* identifiers to refer to senders and content;
* the ability to provide context and explanation of harms;
* people responsible for promptly responding to reports;
* tools for pooling mitigation information (see [[[#example-reducing-unwanted-information]]]).
</div>
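The reporting requirements listed in the note can be illustrated with a minimal sketch. No standardized abuse-report format currently exists; the record and field names below, and the category vocabulary, are purely hypothetical assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical abuse-report record; all field names are illustrative
# assumptions, not part of any existing standard.
@dataclass
class AbuseReport:
    reporter: str   # identifier of the person reporting
    subject: str    # identifier of the sender or content being reported
    category: str   # e.g. "harassment", "impersonation", "unwanted-information"
    context: str    # free-text explanation of the harm experienced
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical category vocabulary.
ALLOWED_CATEGORIES = {"harassment", "impersonation", "unwanted-information", "other"}

def validate_report(report: AbuseReport) -> list[str]:
    """Return a list of problems; an empty list means the report is actionable."""
    problems = []
    if not report.subject:
        problems.append("missing identifier for sender or content")
    if report.category not in ALLOWED_CATEGORIES:
        problems.append(f"unknown category: {report.category!r}")
    if not report.context.strip():
        problems.append("missing context explaining the harm")
    return problems
```

The sketch reflects two of the note's requirements directly: stable identifiers for senders and content, and room for the reporter to explain the harm in context.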

Receiving unsolicited information that may cause distress or that wastes the recipient's
time or resources is a violation of privacy.

<div class="practice">

<p>
<span class="practicelab" id="principle-protect-unwanted-information">
[=User agents=] and other [=actors=] should take
steps to ensure that their [=user=] is not exposed to unwanted information. Technical standards
must consider the delivery of unwanted information as part of their architecture and must
mitigate it accordingly.
</span>
</p>
</div>
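A minimal sketch of how a [=user agent=] might apply this principle, under assumed message and preference structures (the types and field names are hypothetical, not drawn from any specification):

```python
from dataclasses import dataclass, field
from typing import Optional

# Assumed message shape: a sender identifier, a body, and optional tags
# (e.g. content warnings) attached by the sender or an intermediary.
@dataclass
class Message:
    sender: str
    body: str
    tags: frozenset = frozenset()

# Assumed per-user preferences controlling exposure to unwanted information.
@dataclass
class Preferences:
    blocked_senders: set = field(default_factory=set)
    hidden_tags: set = field(default_factory=set)  # tags the user chose to hide

def present(message: Message, prefs: Preferences) -> Optional[str]:
    """Decide how a message is shown: None (dropped), a placeholder, or the body."""
    if message.sender in prefs.blocked_senders:
        return None  # a blocked actor cannot send information to this user again
    hidden = message.tags & prefs.hidden_tags
    if hidden:
        # Hide behind a placeholder rather than exposing the content directly.
        return f"[hidden: {', '.join(sorted(hidden))}]"
    return message.body
```

This mirrors two mitigations discussed in this section: blocking, and filtering media behind tags or content warnings, with the user agent rather than the sender deciding what the user is exposed to.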

<dfn>Unwanted information</dfn> covers a broad range of unsolicited communication, from messages
that are typically harmless individually but that become a nuisance in aggregate (spam) to the
sending of explicit, graphic, or violent images.

System designers should take steps to make the sending of unwanted information more difficult
or more costly, and to make the senders more accountable.

<aside class="example" id="example-reducing-unwanted-information">
Examples of mitigations include:

* Restricting what new users of a service can post, e.g. limiting links and media until a user has
interacted a sufficient number of times over a given period with a larger group. This helps to
raise the cost of producing [sock puppet accounts](https://en.wikipedia.org/wiki/Sock_puppet_account)
and gives new users time to understand local norms before posting.
* Only accepting communication between [=people=] who have an established relationship of some kind,
such as being part of a shared group. Protocols should consider requiring a handshake between
[=people=] prior to enabling communication.
@@ -1828,10 +1806,9 @@
* Supporting the ability for [=people=] to block another [=actor=] such that they cannot send information
again.
* Pooling mitigation information, for instance shared block lists, shared spam-detection
information, or public information about misbehaving [=actors=].
* Enabling users to filter out or hide information or media based on tags or content warnings.
</aside>
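The first mitigation in the example above can be sketched as a simple gating check. The thresholds and field names are illustrative assumptions only; appropriate values depend on the service and its community:

```python
from dataclasses import dataclass

# Hypothetical thresholds; real services would tune these to local norms.
MIN_ACCOUNT_AGE_DAYS = 7   # assumed minimum account age
MIN_INTERACTIONS = 20      # assumed interactions with established users

@dataclass
class Account:
    age_days: int
    interactions: int  # replies, reactions, etc. with a larger group of users

def may_post_links_or_media(account: Account) -> bool:
    """Gate links and media for new accounts until they have participated enough.

    Raising the effort needed before an account can broadcast links or media
    increases the cost of creating throwaway sock puppet accounts and gives
    new users time to learn local norms first.
    """
    return (account.age_days >= MIN_ACCOUNT_AGE_DAYS
            and account.interactions >= MIN_INTERACTIONS)
```

A service would call this check at posting time and fall back to text-only posting (rather than rejection outright) when it returns `False`.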

## Vulnerability {#vulnerability}
