Both policies prohibit you from threatening or glorifying violence in most scenarios (each version has a carve-out for hyperbolic talk between friends). However, the new set of rules appears to expand some concepts while restricting others. For example, the old policy stated:
Statements that express a desire or hope that someone will suffer bodily harm, make vague or implied threats, or take threatening actions unlikely to cause serious or permanent injury are not subject to action under this policy, but may be reviewed and actioned under other policies.
However, wishing harm on someone is covered by the new policy, which reads as follows:
You may not wish, hope, or express desire for harm. This includes (but is not limited to) hoping that others will die, suffer illness or tragedy, or experience other physically harmful consequences.
Except “new” is a bit of a misnomer here, because that’s pretty much exactly the policy expressed in the old rules on abusive behavior — the only real change is that it’s been moved and Twitter has stopped providing examples.
What seems like a substantial change is the new policy’s lack of clarity about who it is designed to protect. The old policy immediately clarified: “You may not threaten violence against a person or group of people.” (Emphasis mine.) The new policy does not include the words “person” or “group” and instead chooses to refer to “others.” While this could certainly be interpreted as protecting marginalized groups, there is nothing specific in the text that actually confirms it.
There are a few more changes worth noting: the new policy prohibits threats against “civilian homes and shelters, or infrastructure” and includes carve-outs for speech related to video games and sporting events, as well as “satire or artistic expression when the context expresses a point of view rather than inciting actionable violence or harm.”
The company also says that the punishment — which usually comes in the form of an immediate, permanent suspension or account lock that forces you to delete offensive content — can be less severe if you act out of “outrage” in a conversation “about certain people who are credibly accused of serious violence.” Twitter doesn’t provide an example of exactly what this would look like, but it’s my understanding that if you called for, say, the execution of a famous serial killer, you might not get a permanent ban for it.
I don’t mean this as a criticism of Twitter, to be clear. A social network that bases its moderation policies only on what is legally permissible would be an absolute hellscape that I, and I think most of the population, wouldn’t be interested in. I’m not a lawyer, but I don’t see anything about banning bots in the First Amendment. (Maybe that’s because it was written in the 1700s.)