This project implements a multimodal sarcasm detection model that combines image and text features to classify different types of sarcasm. The model is designed to effectively fuse features from multiple modalities, handle imbalanced datasets, and improve generalization using auxiliary tasks and specialized loss functions.
- Text Encoder: A pre-trained ViSoBERT model encodes the input text into contextual embeddings (see the encoder sketch after this list).
- Image Encoder: A CLIP-based image encoder extracts visual features from input images.
- Low-Rank Fusion Module: Compresses and combines the text and image embeddings, reducing dimensionality while preserving the essential cross-modal information (a fusion sketch also follows the list).
- Focal Loss: This loss function focuses training on the minority classes (e.g., text-sarcasm and image-sarcasm) by down-weighting the loss contribution of easy, well-classified samples (see the focal-loss sketch below).
- Auxiliary Losses (see the sketch at the end of this list):
  - For text-sarcasm: a text-only auxiliary output keeps text-based features dominant in the prediction.
  - For image-sarcasm: the fused embedding is emphasized in the prediction, improving the model's handling of visual sarcasm.
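
As a rough sketch of how the two encoders could be loaded, assuming the Hugging Face checkpoints `uitnlp/visobert` and `openai/clip-vit-base-patch32` and a simple mean-pooling strategy (the project's actual checkpoints and pooling may differ):

```python
import torch
from transformers import AutoModel, AutoTokenizer, CLIPImageProcessor, CLIPVisionModel

# Assumed checkpoints; swap in the ones the project actually uses.
tokenizer = AutoTokenizer.from_pretrained("uitnlp/visobert")
text_encoder = AutoModel.from_pretrained("uitnlp/visobert")
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
image_encoder = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")

def encode(text, image):
    with torch.no_grad():
        # Text features: mean-pool ViSoBERT's last hidden states into one vector.
        tokens = tokenizer(text, return_tensors="pt", truncation=True)
        text_feat = text_encoder(**tokens).last_hidden_state.mean(dim=1)
        # Image features: CLIP's pooled output for the whole image.
        pixels = image_processor(images=image, return_tensors="pt")
        image_feat = image_encoder(**pixels).pooler_output
    return text_feat, image_feat
```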
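The fusion module itself is not spelled out above; the sketch below shows one standard realization, LMF-style low-rank bilinear fusion, where `rank`, the factor initialization, and the dimension names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class LowRankFusion(nn.Module):
    """Low-rank bilinear fusion: each modality is projected through `rank`
    factor matrices, the factors are multiplied elementwise, and the sum
    over the rank dimension approximates a full bilinear interaction at a
    fraction of the parameter cost."""

    def __init__(self, text_dim, image_dim, out_dim, rank=4):
        super().__init__()
        # One factor matrix per rank component, per modality; the bias is
        # folded in by appending a constant 1 to each input vector.
        self.text_factor = nn.Parameter(torch.randn(rank, text_dim + 1, out_dim) * 0.02)
        self.image_factor = nn.Parameter(torch.randn(rank, image_dim + 1, out_dim) * 0.02)

    def forward(self, text_feat, image_feat):
        ones = text_feat.new_ones(text_feat.size(0), 1)
        t = torch.cat([text_feat, ones], dim=-1)   # (B, text_dim + 1)
        v = torch.cat([image_feat, ones], dim=-1)  # (B, image_dim + 1)
        # Per-rank projections of each modality: (B, rank, out_dim).
        t_proj = torch.einsum("bd,rdo->bro", t, self.text_factor)
        v_proj = torch.einsum("bd,rdo->bro", v, self.image_factor)
        # Elementwise product fuses the modalities; summing over rank
        # collapses the decomposition into a single fused vector.
        return (t_proj * v_proj).sum(dim=1)        # (B, out_dim)
```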
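A minimal multi-class focal loss matching the description above; `gamma` and the optional per-class `alpha` weights are illustrative defaults, not the project's tuned values:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Scales cross-entropy by (1 - p_t)^gamma so easy, well-classified
    samples contribute little, letting the rare sarcasm classes dominate."""

    def __init__(self, gamma=2.0, alpha=None):
        super().__init__()
        self.gamma = gamma
        self.alpha = alpha  # optional per-class weight tensor

    def forward(self, logits, targets):
        log_p = F.log_softmax(logits, dim=-1)
        # Probability the model assigns to the true class of each sample.
        p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
        ce = F.nll_loss(log_p, targets, weight=self.alpha, reduction="none")
        # (1 - p_t)^gamma is near 0 for easy samples, near 1 for hard ones.
        return ((1.0 - p_t) ** self.gamma * ce).mean()
```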
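One way the auxiliary heads could be wired, assuming hypothetical head names, four target classes (e.g., not-, text-, image-, and multi-sarcasm), and placeholder auxiliary weights `w_text`/`w_image`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SarcasmHeads(nn.Module):
    """Hypothetical head wiring: the main head sees the fused vector, a
    text-only head backs the text-sarcasm class, and a second fused head
    backs image-sarcasm."""

    def __init__(self, text_dim, fused_dim, num_classes=4):
        super().__init__()
        self.main_head = nn.Linear(fused_dim, num_classes)
        self.text_head = nn.Linear(text_dim, 2)    # text-sarcasm vs. not
        self.image_head = nn.Linear(fused_dim, 2)  # image-sarcasm vs. not

    def forward(self, text_feat, fused_feat):
        return (self.main_head(fused_feat),
                self.text_head(text_feat),
                self.image_head(fused_feat))

def total_loss(main_logits, text_logits, image_logits,
               labels, text_labels, image_labels,
               focal_loss, w_text=0.3, w_image=0.3):
    # Main focal loss plus down-weighted auxiliary cross-entropies.
    loss = focal_loss(main_logits, labels)
    loss = loss + w_text * F.cross_entropy(text_logits, text_labels)
    loss = loss + w_image * F.cross_entropy(image_logits, image_labels)
    return loss
```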