Fight harassment with your squad. Learn more at squadbox.org.
Thanks for checking out our repository! Squadbox is a tool to help people who are being harassed online by having their friends (or “squad”) moderate their messages.
In this README we cover:
- The motivation behind the project
- A broad overview of how it works
- Who we are
- What we need help with
- How you can get involved!
Online harassment has become an increasingly prevalent issue - according to recent reports by Data & Society and the Pew Research Center, nearly half of internet users in the United States have experienced some form of online harassment or abuse. Unfortunately, solutions for combating harassment have not kept up. Common technical solutions such as user blocking and word-based filters are blunt tools that cannot cover many forms of harassment, and can be circumvented by determined harassers.
Recently, researchers have tried to use machine learning models to detect harassment, but these models can be easily deceived and are often biased by their training data. Given the strong evidence that automated tools are ineffective on their own, we propose that a better alternative is to keep humans engaged in the moderation process. However, while human moderators already staff many platforms' reporting pipelines, harassment targets cannot currently rely on platform action to shield them from harassment.
We conducted interviews with several targets of online harassment and found that, lacking effective solutions within platforms, targets often turn to friends for help, using techniques such as giving friends password access to clear their inboxes of harassment, or forwarding unopened emails to friends to moderate. This motivates the design and implementation of tools like Squadbox that work outside of platforms to combat harassment.
People experiencing harassment sign up and create squads which they "own", and invite their friends or other trusted individuals to become moderators for their squad. The "owner" of a squad can set up filters to automatically forward potentially harassing incoming content to Squadbox’s moderation pipeline. When an email arrives for moderation, a moderator assesses it, adding annotations and a rationale where needed. The message is then handled according to the owner’s preferences, such as being delivered with a special tag, placed in a particular folder, or discarded.
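To make the flow above concrete, here is a minimal sketch in Python of how a moderated message might be routed based on a moderator's verdict and the owner's preferences. Everything in it (the names `Verdict`, `ModerationDecision`, `SquadSettings`, `handle_moderated_message`, and the preference values) is an illustrative assumption for this README, not Squadbox's actual code.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative sketch only -- names and structure are assumptions,
# not Squadbox's real implementation.

class Verdict(Enum):
    APPROVE = "approve"
    REJECT = "reject"

class RejectedMailAction(Enum):
    TAG = "tag"          # deliver, but with a special tag
    FOLDER = "folder"    # place in a designated folder
    DISCARD = "discard"  # drop the message entirely

@dataclass
class ModerationDecision:
    verdict: Verdict
    annotation: str = ""  # the moderator's optional note or rationale

@dataclass
class SquadSettings:
    # What the owner wants done with messages a moderator rejects.
    rejected_action: RejectedMailAction = RejectedMailAction.FOLDER

def handle_moderated_message(subject: str,
                             decision: ModerationDecision,
                             settings: SquadSettings) -> str:
    """Apply the squad owner's preference to a message a moderator has reviewed."""
    if decision.verdict is Verdict.APPROVE:
        return f"deliver to inbox: {subject}"
    if settings.rejected_action is RejectedMailAction.DISCARD:
        return f"discard: {subject}"
    if settings.rejected_action is RejectedMailAction.TAG:
        return f"deliver with tag [moderated]: {subject}"
    return f"move to 'moderated' folder: {subject}"

# Example: a moderator rejects a message; the owner prefers a folder over discarding.
decision = ModerationDecision(Verdict.REJECT, annotation="abusive language")
print(handle_moderated_message("example incoming email", decision, SquadSettings()))
# -> move to 'moderated' folder: example incoming email
```

The point of the sketch is the division of labor: the moderator supplies the judgment and rationale, while the owner's settings decide what ultimately happens to rejected mail.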
Currently, Squadbox only works with email messages. We have plans to work on integrating it with other platforms like Twitter in the near future!
We (@amyxzhang, @kmahar, @karger) are a team of human-computer interaction researchers from the Haystack Group at MIT CSAIL. While this started as a research project, with the help of the Mozilla Foundation's Open Leaders program, we are now working to convert it to a full-fledged open source project in order to expand the number of people contributing and maximize the project's impact.
Please feel free to reach out to us on Github or via email at [email protected]! 📧
You can join our mailing list here!
We're looking for anyone who is passionate about this issue to help us build and improve Squadbox! We need programmers to help us code, designers to improve the interface and user experience, and people with experience and knowledge about online harassment and moderation to help guide our design choices, create resources for owners and moderators, etc.
Before you get started, please review our contributor guidelines.
We use the issue tracker to keep a list of work to be done on the project. We have a label for "good first issues" for getting your feet wet - you can see those issues here. If you're interested in working on one of them, go ahead and comment and we'll help you get started! 🎉
We have both coding and non-coding issues you can work on - most issues on the tracker involve coding, but the non-coding ones can be found here.
If you'll be working on a coding issue, follow the coding setup instructions to get a local version of the project up and running.