
Cornell Conversational Analysis Toolkit (ConvoKit)

This toolkit contains tools to extract conversational features and analyze social phenomena in conversations, using a single unified interface inspired by (and compatible with) scikit-learn. Several large conversational datasets are included together with scripts exemplifying the use of the toolkit on these datasets. The latest version is 2.1.12 (released 21 Oct 2019).

The toolkit currently implements features for the following (a brief usage sketch follows this list):

A measure of linguistic influence (and relative power) between individuals or groups based on their use of function words.
Example: exploring the balance of power in the U.S. Supreme Court.

A set of lexical and parse-based features correlating with politeness and impoliteness.
Example: understanding the (mis)use of politeness strategies in conversations gone awry on Wikipedia.

An unsupervised method for extracting surface motifs that occur in conversations and grouping them by rhetorical role.
Examples: extracting common question types in U.K. parliament, understanding the use of conversational prompts in conversations gone awry on Wikipedia.

A method for extracting structural features of conversations through a hypergraph representation.
Example: hypergraph creation and feature extraction, visualization and interpretation on a subsample of Reddit.

A method to compute the linguistic diversity of individuals within their own conversations, and relative to other individuals in a population.
Example: user conversation attributes and diversity on ChangeMyView.

A neural model for forecasting future outcomes of conversations (e.g., derailment into personal attacks) as they develop.
The ConvoKit CRAFT API is still under development, but in the meantime an ad-hoc implementation using ConvoKit data can be found here.

Datasets

ConvoKit ships with several datasets ready for use "out of the box". These datasets can be downloaded using the convokit.download() helper function. Alternatively, you can access them directly here.

Two related corpora of conversations that derail into antisocial behavior. One corpus consists of Wikipedia talk page conversations that derail into personal attacks, as labeled by crowdworkers (4,188 conversations containing 30,021 comments). The other consists of discussion threads on the subreddit ChangeMyView (CMV) that derail into rule-violating behavior, as determined by the presence of a moderator intervention (6,842 conversations containing 42,964 comments).
Name for download: conversations-gone-awry-corpus (Wikipedia version) or conversations-gone-awry-cmv-corpus (Reddit CMV version)

A large metadata-rich collection of fictional conversations extracted from raw movie scripts (220,579 conversational exchanges between 10,292 pairs of movie characters in 617 movies).
Name for download: movie-corpus

Parliamentary question periods from May 1979 to December 2016 (216,894 question-answer pairs).
Name for download: parliament-corpus

A collection of conversations from the U.S. Supreme Court Oral Arguments.
Name for download: supreme-corpus

A medium-size collection of conversations from Wikipedia editors' talk pages.
Name for download: wiki-corpus

Transcripts of tennis singles post-match press conferences for major tournaments from 2007 to 2015 (6,467 post-match press conferences).
Name for download: tennis-corpus

Reddit conversations from over 900k subreddits, arranged by subreddit. A small subset sampled from 100 highly active subreddits is also available.

Name for download: subreddit-<name_of_subreddit> for the by-subreddit data, reddit-corpus-small for the small subset.

WikiConv Corpus (WIP)

The full corpus of Wikipedia talk page conversations, based on the reconstruction described in this paper. Note that due to the large size of the data, it is split up by year. We are currently working on adding block data, retrieved directly from the Wikipedia block log, to the corpus metadata, so that the Trajectories of Blocked Community Members paper can be reproduced. In the meantime, the raw block data can be downloaded here.

Name for download: wikiconv-<year> to download wikiconv data for the specified year.

A collection of almost 1.5 million conversations and 2.8 million comments posted by developers reviewing proposed code changes in the Chromium project.

Name for download: chromium-corpus

A metadata-rich subset of conversations made in the r/ChangeMyView subreddit between 1 Jan 2013 and 7 May 2015, with information on whether a user's utterance received a delta (i.e., succeeded in changing the original poster's view).

Name for download: winning-args-corpus

A subset of Reddit conversations that have been manually annotated with discourse act labels.

Name for download: reddit-coarse-discourse-corpus

A collection of online conversations generated by Amazon Mechanical Turk workers, where one participant (the persuader) tries to convince the other (the persuadee) to donate to a charity.

Name for download: persuasionforgood-corpus

Transcripts of debates held as part of Intelligence Squared Debates.

Name for download: iq2-corpus

A collection of all the conversations that occurred over the 10 seasons of Friends, a popular American TV sitcom that aired from 1994 to 2004.

Name for download: friends-corpus

...And your own corpus!

In addition to the provided datasets, you may also use ConvoKit with your own custom datasets by loading them into a convokit.Corpus object. This example script shows how to construct a Corpus from custom data.

Installation

This toolkit requires Python >= 3.6.

  1. Install the toolkit: pip3 install convokit
  2. Download spaCy's English model: python3 -m spacy download en
  3. Download NLTK's 'punkt' model: import nltk; nltk.download('punkt') (in a Python interpreter)

Alternatively, visit our GitHub page to install from source. If you encounter difficulties with installation, check out our Troubleshooting Guide for a list of solutions to common issues.

Documentation

Documentation is hosted here. If you are new to ConvoKit, great places to get started are the Core Concepts tutorial, for an overview of the ConvoKit "philosophy" and object model, and the High-level tutorial, for a walkthrough of how to import ConvoKit into your project, load a Corpus, and use ConvoKit functions.

Contributing

We welcome community contributions. To see how you can help out, check the contribution guidelines.

Citing

If you use the code or datasets distributed with ConvoKit, please acknowledge the work tied to the respective component (as indicated in the documentation) in addition to:

Jonathan P. Chang, Caleb Chiam, Liye Fu, Andrew Wang, Justine Zhang, Cristian Danescu-Niculescu-Mizil. 2019. "ConvoKit: The Cornell Conversational Analysis Toolkit." Retrieved from http://convokit.cornell.edu
