OCR Hinders RAG: Evaluating the Cascading Impact of OCR on Retrieval-Augmented Generation

This repository contains the official code of OHR-Bench, a benchmark designed to evaluate the cascading impact of OCR on RAG.

Overview

  • PDF, ground-truth structured data and Q&A datasets: [🤗 Hugging Face] pdfs, gt_and_qas (see the download sketch below). The benchmark includes 4,000+ unstructured PDF pages from various domains, including Textbook, Law, Finance, Newspaper, Manual and Academia, together with Q&A datasets sourced from multimodal document elements. Each PDF page is paired with human-verified ground-truth structured data.
  • Perturbed data with OCR errors: [🤗 Hugging Face] retrieval_base/formatting_noise_[mild/moderate/severe] and retrieval_base/semantic_noise_[mild/moderate/severe]. To enable in-depth analysis of OCR's impact on RAG, OHR-Bench identifies Semantic Noise and Formatting Noise and introduces them at mild, moderate and severe perturbation levels based on real-world OCR errors.
  • Evaluation framework: [GitHub: opendatalab/OHR-Bench](https://github.com/opendatalab/OHR-Bench). We provide a RAG evaluation framework to assess the impact of OCR-processed structured data and our perturbed data on RAG, covering retrieval, generation and overall performance.
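
One way to fetch the data is with `huggingface_hub`; the sketch below is illustrative only, and the `repo_id` and `allow_patterns` values are assumptions — check the dataset card on Hugging Face for the exact repository name and folder layout.

```python
# Sketch: download a subset of the OHR-Bench data from Hugging Face.
# The repo_id and folder patterns below are assumptions; adjust them to the dataset card.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="opendatalab/OHR-Bench",                           # assumed dataset repo id
    repo_type="dataset",
    local_dir="data",                                           # puts retrieval_base/... under data/
    allow_patterns=["retrieval_base/semantic_noise_mild/*"],    # or other noise levels / gt
)
```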

(Figure: OHR-Bench framework)

Evaluation Results


We evaluate the suitability of current OCR solutions for real-world RAG applications through comprehensive experiments on OHR-Bench and draw the following conclusions:

  • Pipeline-based OCR demonstrates the best performance. Marker achieves the best retrieval performance among all OCR solutions, while MinerU leads the generation and overall evaluation.
  • All OCR solutions suffer performance degradation. Even the best solutions show a decrease of 1.9 in EM@1 and 2.93 in F1@1 in the overall evaluation, with greater losses in the retrieval and generation stages.
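
For reference, EM@1 and F1@1 follow the standard exact-match and token-level F1 used in QA evaluation. The sketch below shows the usual definitions; the repository's exact normalization may differ.

```python
# Minimal sketch of exact match (EM) and token-level F1 between a prediction and a reference.
def exact_match(prediction: str, reference: str) -> float:
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction: str, reference: str) -> float:
    pred_tokens = prediction.strip().lower().split()
    ref_tokens = reference.strip().lower().split()
    common = set(pred_tokens) & set(ref_tokens)
    num_same = sum(min(pred_tokens.count(t), ref_tokens.count(t)) for t in common)
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```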

Getting Started

Installation

pip install -r requirements.txt

Dataset preparation

OCR processed structured data

To evaluate your RAG system on our benchmark, follow these steps:

  1. Download Perturbed Data: Get the data with formatting and semantic noise from Hugging Face.
  2. Organize the Data: Place the folders retrieval_base/formatting_noise_[mild/moderate/severe] and retrieval_base/semantic_noise_[mild/moderate/severe] in the data/retrieval_base directory of this project.
  3. Run Evaluation: Follow the instructions in Run Evaluation.

To evaluate your OCR results using this benchmark:

  1. Organize the Data: Run OCR with your models on the PDFs (available on Hugging Face) and place the processed structured data in the data/retrieval_base directory, following the layout of the ground-truth data (data/retrieval_base/gt). The sub-folder names indicate the domain of the parsed results, and each JSON file, named after the corresponding PDF file, should contain its parsed results (a conversion sketch follows the JSON example below).
  2. Run Evaluation: Follow the instructions in Run Evaluation.
Directory Structure
retrieval_base/gt/ # We provide gt and MinerU processed structured data as illustration here
├── finance # Domain
│   ├── 3M_2023Q2_10Q.json # Parsed results
│   ├── ...
├── textbook
...
OCR Processed Data
[
    {
        "page_idx": 0, // Page index
        "text": "...", // OCR processed structured data
    },
    ...
]
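
If your OCR model produces a list of page texts, a minimal helper like the one below can write them into this layout; the `my_ocr` folder name is just a placeholder for whatever you call your result set.

```python
# Sketch: write per-page OCR text into the expected per-document JSON layout.
import json
from pathlib import Path

def save_parsed_pdf(pages: list[str], pdf_name: str, domain: str,
                    out_root: str = "data/retrieval_base/my_ocr") -> Path:
    """Save pages as [{"page_idx": ..., "text": ...}, ...] mirroring the gt folder layout."""
    records = [{"page_idx": i, "text": text} for i, text in enumerate(pages)]
    out_path = Path(out_root) / domain / f"{pdf_name}.json"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(json.dumps(records, ensure_ascii=False, indent=2), encoding="utf-8")
    return out_path

# e.g. save_parsed_pdf(ocr_pages, "3M_2023Q2_10Q", "finance")
```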

QA data

The Q&A data is placed in data/qas.json and is structured as follows (a loading sketch follows the example):

Q&A JSON
[
    {
        "doc_name": "finance/JPMORGAN_2021Q1_10Q", // Document source
        "ID": "00073cc2-c801-467c-9039-fca63c78c6a9", // Unique ID
        "questions": "What was the total amount of nonaccrual loans retained as of March 31, 2021?",
        "answers": "842",
        "doc_type": "finance", // Q&A domain.
        "answer_form": "Numeric", // Answer format.
        "evidence_source": "table", // Evidence source.
        "evidence_context": "Nonaccrual loans retained $^{(\\mathrm{a})}$ & \\$ & 842 & \\$ & 689 & $22 \\%$", // Evidence.
        "evidence_page_no": 24
    },
    ...
]
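
A quick way to inspect the Q&A set, assuming the fields shown above, is to load it and group questions by domain or evidence source:

```python
# Sketch: load data/qas.json and summarize it by domain and evidence source.
import json
from collections import Counter

with open("data/qas.json", encoding="utf-8") as f:
    qas = json.load(f)

print(Counter(q["doc_type"] for q in qas))          # questions per domain
print(Counter(q["evidence_source"] for q in qas))   # e.g. text / table / formula
table_qas = [q for q in qas if q["evidence_source"] == "table"]
```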

LLMs preparation

In src/configs, configure your local LLM path or GPT API key.

GPT_api_key = 'Your KEY Here'  # openai.api_key
...
Qwen2_7B_local_path = 'Qwen/Qwen2-7B-Instruct' # download from Hugging Face or your local path

Run Evaluation

To evaluate your OCR results, follow the instructions in the Dataset Preparation section to organize your OCR data.

# The first argument specifies which OCR results to use for evaluation.
# The second argument specifies the retrievers or LLMs.

# Generation with gt
bash shell/generation.sh gt qwen2_7b
# Generation with mild semantic noise
bash shell/generation.sh semantic_noise_mild qwen2_7b

# Retrieval with gt
bash shell/retrieval.sh gt qwen2_7b
# Retrieval with mild semantic noise
bash shell/retrieval.sh semantic_noise_mild qwen2_7b

# End-to-end with gt
bash shell/end2end.sh gt qwen2_7b
# End-to-end with mild semantic noise
bash shell/end2end.sh semantic_noise_mild qwen2_7b
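
To sweep all perturbation settings with one model, a simple driver such as the one below can loop over the shell scripts; it assumes the (result_set, model) argument order shown above.

```python
# Sketch: run the end-to-end evaluation over gt plus every noise type and level.
import subprocess

settings = ["gt"] + [
    f"{noise}_{level}"
    for noise in ("semantic_noise", "formatting_noise")
    for level in ("mild", "moderate", "severe")
]

for setting in settings:
    subprocess.run(["bash", "shell/end2end.sh", setting, "qwen2_7b"], check=True)
```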

Acknowledgement

The evaluation framework is based on [CRUD](https://github.com/IAAR-Shanghai/CRUD_RAG); many thanks to the authors of this brilliant project.

Citation

@article{zhang2024ocr,
  title={OCR Hinders RAG: Evaluating the Cascading Impact of OCR on Retrieval-Augmented Generation},
  author={Junyuan Zhang and Qintong Zhang and Bin Wang and Linke Ouyang and Zichen Wen and Ying Li and Ka-Ho Chow and Conghui He and Wentao Zhang},
  journal={arXiv preprint arXiv:2412.02592},
  year={2024}
}

Copyright Statement

The PDFs are collected from public online channels and community user contributions. Content that is not allowed for distribution has been removed. The dataset is for research purposes only and not for commercial use. If there are any copyright concerns, please contact [email protected].
