A Transformer-based approach to Irony and Sarcasm detection, Potamias et al., 2019

Paper, Tags: #nlp, #architecture, #sarcasm-detection

We propose a neural network methodology that uses an unsupervised, pre-trained transformer-based architecture enhanced with an RCNN; data preprocessing is kept to a minimum, with only de-capitalization applied.

Research focused on tracking polarity is faced with subjectivity and a degree of vagueness, as well as figurative language. The linguistic phenomenon of figurative language (FL) refers to the contradiction between the literal and the non-literal meaning of an utterance.

There is prior work on sarcasm detection in Italian, Japanese, Spanish, and Greek.

Previous work has used the concepts of unexpectedness and contradiction, which seem to be frequent in FL expressions.

1. Methodology: a hybrid recurrent convolutional transformer approach

Proposed method: a recurrent CNN on top of RoBERTa (RCNN-RoBERTa).

Uses pre-trained RoBERTa weights combined with an RCNN in order to capture contextual information.
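
As a concrete illustration, here is a minimal PyTorch sketch of that combination, assuming the Hugging Face transformers library. The layer sizes, the use of a BiLSTM as the recurrent component, and the linear classifier head are my illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel

class RCNNRoBERTa(nn.Module):
    """Sketch of the RCNN-RoBERTa idea: pre-trained RoBERTa token
    representations are passed through a recurrent layer, max-pooled
    over the time axis, and classified. Sizes are illustrative."""

    def __init__(self, hidden_size=256, num_labels=2):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        self.rnn = nn.LSTM(self.roberta.config.hidden_size, hidden_size,
                           batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings from the pre-trained transformer.
        hidden = self.roberta(input_ids,
                              attention_mask=attention_mask).last_hidden_state
        rnn_out, _ = self.rnn(hidden)    # recurrent pass over the tokens
        pooled, _ = rnn_out.max(dim=1)   # max-pooling over the time axis
        return self.classifier(pooled)   # irony/sarcasm logits
```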

RNNs are often used to capture temporal relationships between words; however, they are strongly biased: later words tend to be more dominant than earlier ones [53]. This problem can be alleviated with CNNs, which, as unbiased models, can determine semantic relationships between words via max-pooling.
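
To make the bias argument concrete, the following toy snippet (my illustration, not from the paper) contrasts summarizing a sequence by its last recurrent state with max-pooling over all timesteps.

```python
import torch

# Toy (batch, seq_len, features) tensor standing in for RNN outputs.
rnn_out = torch.randn(1, 10, 8)

# Biased summary: only the final timestep, dominated by later words.
last_state = rnn_out[:, -1, :]

# Position-unbiased summary: per-feature maximum over all timesteps,
# so a salient feature counts no matter where in the sequence it occurs.
pooled, _ = rnn_out.max(dim=1)
```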

The approach achieves state-of-the-art results on the SemEval task dataset, as well as on the Reddit and Riloff datasets.

We aim to minimize preprocessing and engineered feature-extraction steps, which, as we claim, are unnecessary when using heavily pre-trained deep learning methods such as transformers.
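
A sketch of what minimal preprocessing amounts to here, assuming raw tweet strings as input: the paper specifies only de-capitalization, so stemming, stopword removal, and hand-crafted features are deliberately omitted. Feeding the result through RoBERTa's own subword tokenizer is my assumption about the surrounding pipeline.

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

def preprocess(texts):
    # De-capitalization is the only text transformation applied.
    lowered = [t.lower() for t in texts]
    # All further handling is left to RoBERTa's subword tokenizer.
    return tokenizer(lowered, padding=True, truncation=True,
                     return_tensors="pt")
```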