diff --git a/index.html b/index.html
index 22a22666..7cbd6ffa 100644
--- a/index.html
+++ b/index.html
@@ -1738,88 +1738,73 @@
Link this to [[[#guardians]]].
-## Harassment
-
-Online harassment is the "pervasive or severe targeting of an individual or group online
-through harmful behavior" [[PEN-Harassment]]. Harassment is a prevalent problem on the Web,
-particularly via social media. While harassment may affect any person using the Web, it may be more
-severe and its consequences more impactful for LGBTQ people, women, people in racial or ethnic
-minorities, people with disabilities, [=vulnerable people=] and other marginalized groups.
-
-
-
-[=Harassment=] is both a violation of privacy itself and can be magnified or facilitated by other
-violations of privacy.
-
-Abusive online behavior may include: sending [=unwanted information=]; directing others to contact
-or bother a person ("dogpiling"); disclosing sensitive information about a person; posting false
-information about a person; impersonating a person; insults; threats; and hateful or demeaning
-speech.
-
-Disclosure of identifying or contact information (including "doxxing") can be used, including by
-additional attackers, to send often persistent unwanted information that amounts to harassment.
-Disclosure of location information can be used, including by additional attackers, to intrude on a
-person's physical safety or space.
-
-Mitigations for harassment include but extend beyond mitigations for unwanted information and other
-privacy principles. Harassment can include harmful activity with a wider distribution than just the
-target of harassment.
+## Protecting web users from abusive behaviour
- Systems that allow for communicating on the Web must provide an effective capability to report
- abuse.
+ Systems that allow for communicating on the web must provide an
+ effective capability to report abuse.
+
+ [=User agents=] and [=sites=] must
+ take steps to protect their users from abusive behaviour, and abuse
+ mitigation must be considered when designing web platform features.
-Reporting mechanisms are mitigations, but may not prevent harassment, particularly in cases where
-hosts or intermediaries are supportive of or complicit in the abuse.
+Online harassment is the "pervasive or severe targeting of an individual or group online
+through harmful behavior" [[PEN-Harassment]]. Harassment is a prevalent problem on the web,
+particularly via social media. While harassment may affect any person using the web, it may be more
+severe and its consequences more impactful for LGBTQ people, women, people in racial or ethnic
+minorities, people with disabilities, [=vulnerable people=] and other marginalized groups.
-
- Effective reporting is likely to require:
-
- * standardized mechanisms to identify abuse reporting contacts
- * visible, usable ways provided by sites and user agents to report abuse
- * identifiers to refer to senders and content
- * the ability to provide context and explanation of harms
- * people responsible for promptly responding to reports
- * tools for pooling mitigation information (see Unwanted information, below)
-
+[=Harassment=] is itself a violation of privacy and can be enabled or
+exacerbated by other violations of privacy.
-## Unwanted Information {#unwanted-information}
+Harassment may include: sending [=unwanted information=]; directing others to contact
+or bother a person ("dogpiling"); disclosing [sensitive information](#sensitive-information)
+about a person; posting false information about a person; impersonating a person;
+insults; threats; and hateful or demeaning speech.
-Receiving unsolicited information that either may cause distress or waste the recipient's
-time or resources is a violation of privacy.
+Disclosure of identifying or contact information (including "doxxing") can enable
+additional attackers to send persistent [=unwanted information=] that amounts to harassment.
+Disclosure of location information can be used to intrude on a
+person's physical safety or space.
-
+Reporting mechanisms are mitigations, but may not prevent harassment, particularly in cases where
+hosts, moderators, or other intermediaries are supportive of or complicit in the abuse.
-
-
- [=User agents=] and other [=actors=] should take
- steps to ensure that their [=user=] is not exposed to unwanted information. Technical standards
- must consider the delivery of unwanted information as part of their architecture and must
- mitigate it accordingly.
-
-
-
+Effective reporting is likely to require:
+
+* standardized mechanisms to identify abuse reporting contacts;
+* sites and user agents to provide visible and usable ways to report abuse;
+* identifiers to refer to senders and content;
+* the ability to provide context and explanation of harms;
+* people responsible for promptly responding to reports;
+* tools for pooling mitigation information (see [[[#example-reducing-unwanted-information]]]).
+
+
Unwanted information covers a broad range of unsolicited communication, from messages
that are typically harmless individually but that become a nuisance in aggregate (spam) to the
-sending of images that will cause shock or disgust due to their graphic, violent, or explicit nature
-(e.g. pictures of one's genitals). While it is impossible, in a communication system involving many
-[=people=], to offer perfect protection against all kinds of unwanted information, steps can be
-taken to make the sending of such messages more difficult or more costly, and to make the senders
-more accountable. Examples of mitigations include:
+sending of explicit, graphic, or violent images.
-* Restricting what new users of a service can post, notably limiting links and media until they have
+System designers should take steps to make the sending of unwanted information more difficult
+or more costly, and to make the senders more accountable.
+
## Vulnerability {#vulnerability}