
Document Splitter always returns 1 document for split_type="passage" in pdfs #8491

Open
rmrbytes opened this issue Oct 24, 2024 · 15 comments
Labels: P2 (Medium priority, add to the next sprint if no P1 available), topic:preprocessing, type:bug (Something isn't working)

Comments

@rmrbytes

Describe the bug
When using DocumentSplitter with a PDF and split_by="passage", the result is always one document. This is using pypdf.

Expected behavior
My understanding is that it splits wherever there are at least two consecutive line breaks (\n\n).

Additional context
When I tested with plain text, it split correctly.

To Reproduce

from pathlib import Path

from haystack.components.preprocessors import DocumentSplitter

# file_type_router and the converters are defined earlier in the pipeline
dir = '...'
files = [
    {"filename": "rules.pdf", "meta": {"split_by": "passage", "split_length": 1, "split_overlap": 0, "split_threshold": 0}},
    {"filename": "rules.txt", "meta": {"split_by": "passage", "split_length": 1, "split_overlap": 0, "split_threshold": 0}},
]
for file in files:
    # set the filepath
    file_path = Path(dir) / file["filename"]
    router_res = file_type_router.run(sources=[file_path])
    txt_docs = []
    if 'text/plain' in router_res:
        txt_docs = text_file_converter.run(sources=router_res['text/plain'])
    elif 'application/pdf' in router_res:
        txt_docs = pdf_converter.run(sources=router_res['application/pdf'])
    elif 'text/markdown' in router_res:
        txt_docs = markdown_converter.run(sources=router_res['text/markdown'])
    document_splitter = DocumentSplitter(
        split_by=file['meta']['split_by'],
        split_length=file['meta']['split_length'],
        split_overlap=file['meta']['split_overlap'],
        split_threshold=file['meta']['split_threshold'],
    )
    splitter_res = document_splitter.run([txt_docs['documents'][0]])
    print(len(splitter_res['documents']))

System:

  • OS: Mac OS 14.6.1
  • GPU/CPU: CPU
  • Haystack version (commit or version number): 2.6.0
  • DocumentStore: Chromadb
  • Splitter: DocumentSplitter
@anakin87
Member

Hello!

The fact that your input is PDF or text does not have an impact on splitting.
Try to play with the parameters of DocumentSplitter.

This works, for example:

from haystack import Document
from haystack.components.preprocessors import DocumentSplitter

doc = Document(content="Hello World.\n\nMy name is John Doe.\n\nI live in Berlin", meta={"name": "doc1"})
splitter = DocumentSplitter(split_length=1, split_by="passage")

docs = splitter.run([doc])
print(docs)
# I get 3 documents
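For reference, passage splitting in this mode essentially breaks text on blank lines. A minimal, dependency-free sketch of that behavior (not Haystack's actual implementation) shows why the input above yields three documents:

```python
def split_passages(text: str) -> list[str]:
    # naive stand-in for split_by="passage": break on double newlines
    return [p for p in text.split("\n\n") if p.strip()]

text = "Hello World.\n\nMy name is John Doe.\n\nI live in Berlin"
print(len(split_passages(text)))  # 3
```

If the converted text never contains "\n\n", a split like this can only ever return one passage.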

@rmrbytes
Author

Thanks @anakin87 for your response. I did mention above that it works with txt files, as confirmed by your example. My issue is with PDFs (using pypdf). I will revisit by creating a simpler PDF with two distinct line breaks between paragraphs.

@lbux
Contributor

lbux commented Nov 8, 2024

I also ran into the issue, but I think it might just be the fact that the PDF format is a bit of a mess when people create documents. Some of the PDFs I have can correctly be split but some can't.

@rmrbytes
Author

rmrbytes commented Nov 8, 2024

Hmm. This was a simple text based PDF created via google docs. I thought it would be a "simple" pdf :-). For now, am testing using txt and md files till I get a handle on this. Thanks @lbux for sharing.

@lbux
Contributor

lbux commented Nov 8, 2024

I'll give it another shot tomorrow just to be sure. I do remember the same document I used was having no issues with splitting by words or sentences, but it did fail when I did passage. I wonder if it's failing to read the \n\n somehow. Or maybe the converter is not properly handling it.

@rmrbytes
Author

rmrbytes commented Nov 8, 2024

For what it is worth, I did experiment with 3 pdfs and found that the entire document was a single chunk always.

@lbux
Contributor

lbux commented Nov 9, 2024

You are correct. The issue persists regardless of which converter is used (PDFMiner or PyPDF). I tried several files, and they all report 1. The issue also occurs with the NLTKDocumentSplitter. I'd have to dig a bit deeper to determine the root cause.

@lbux
Contributor

lbux commented Nov 9, 2024

The issue is with the converters. PyPDF handles PDFs a bit worse than PDFMiner when no parameters are passed. However, neither implementation is configured to infer paragraph breaks from spacing or layout analysis. What we would consider a paragraph break is stored as "\n", not the "\n\n" the splitter expects.

Here is a screenshot of a PDF
[screenshot of the test PDF omitted]

And this is how PDFMiner converts it:
This is a test document. Here we have a sentence.\nThis is the start of a new paragraph.\nThis is not the start of a new paragraph.\nThe number should be 3.\n

This is how PyPDF converts it:
Thisisatest document. Herewehaveasentence.\nThisisthestart of anewparagraph.Thisisnot thestart of anewparagraph.\nThenumbershouldbe3.
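A quick way to confirm the diagnosis is to check the converted text for passage breaks directly. The string below is transcribed from the PDFMiner output above; the exact content is illustrative:

```python
# paragraph boundaries arrive as single "\n", not the "\n\n" the splitter expects
converted = (
    "This is a test document. Here we have a sentence.\n"
    "This is the start of a new paragraph.\n"
    "This is not the start of a new paragraph.\n"
    "The number should be 3.\n"
)
print("\n\n" in converted)  # False -> the splitter sees one passage
print(len([p for p in converted.split("\n\n") if p.strip()]))  # 1
```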

@anakin87 Would love if someone from the team could take a deeper look into it. If my findings are right, many customers could be incorrectly converting their documents.

@anakin87
Member

@sjrl Have you by any chance encountered this problem in the past?

@sjrl
Contributor

sjrl commented Nov 11, 2024

@anakin87 We also use PyPDF a lot, but we typically use either:

  • split by word and respect sentence boundaries using the NLTKDocumentSplitter
  • split by page using the normal DocumentSplitter

so we don't have much experience with splitting by passage and may not have noticed the missing newlines.
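For context, the word-based splitting mentioned above can be sketched roughly like this (a simplification of the split_by="word" semantics, not the actual DocumentSplitter code; parameter names mirror the real API):

```python
def split_by_word(text: str, split_length: int = 5, split_overlap: int = 1) -> list[str]:
    # slide a window of split_length words, stepping back split_overlap
    # words each time so consecutive chunks share some context
    words = text.split()
    step = split_length - split_overlap
    chunks = []
    for i in range(0, len(words), step):
        chunk = words[i:i + split_length]
        if chunk:
            chunks.append(" ".join(chunk))
        if i + split_length >= len(words):
            break
    return chunks

print(split_by_word("one two three four five six seven", 4, 1))
# ['one two three four', 'four five six seven']
```

Word-based splitting never depends on "\n\n", which is why it sidesteps the converter issue entirely.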

@sjrl
Contributor

sjrl commented Nov 11, 2024

It seems PDFMiner provides more easily understandable options to customize. Specifically, if you mess with the line_margin parameter (I think make it smaller) then it might preserve the extra newlines, but it's difficult to tell just looking at their documentation.

@lbux
Contributor

lbux commented Nov 11, 2024

I think the main issue (with PDFMiner and probably PyPDF) is in the implementation of the converter:

def _converter(self, extractor) -> Document:
    """
    Extracts text from PDF pages, then converts the text into Documents.

    :param extractor:
        Python generator that yields PDF pages.

    :returns:
        PDF text converted to a Haystack Document
    """
    pages = []
    for page in extractor:
        text = ""
        for container in page:
            # Keep text only
            if isinstance(container, LTTextContainer):
                text += container.get_text()
        pages.append(text)

    # Add a page delimiter
    concat = "\f".join(pages)

    return Document(content=concat)

Our downstream task expects "\n\n", but the current implementation does not add the delimiters. It simply concatenates the text of every LTTextContainer, expecting the delimiters to already be present, which is not the case. If we explicitly add them like so:

def _converter(self, extractor) -> Document:
    """
    Extracts text from PDF pages, then converts the text into Documents.

    :param extractor:
        Python generator that yields PDF pages.

    :returns:
        PDF text converted to a Haystack Document
    """
    pages = []
    for page in extractor:
        text = ""
        for container in page:
            # Keep text only
            if isinstance(container, LTTextContainer):
                container_text = container.get_text().strip()
                if text:
                    text += "\n\n"
                text += container_text
        pages.append(text.strip())

    # Add a page delimiter
    concat = "\f".join(pages)

    return Document(content=concat)

It is true that line_margin matters, since it controls how the LTTextContainer objects are formed, but if we just concatenate them, it has no effect on the output.

With these changes, I was able to get the correct, expected output:

This is a test document. Here we have a sentence.\n\n
This is the start of a new paragraph.\nThis is not the start of a new paragraph.\n\n
The number should be 3.

Further testing would probably have to be done, and I stripped some of the text to make it easier to debug (which should actually be done in the cleaner); however, I am confident this is the root cause.

@anakin87 anakin87 added type:bug Something isn't working topic:preprocessing and removed community-triage labels Nov 11, 2024
@anakin87
Member

@lbux thanks for debugging this issue!

We should probably investigate better, but your proposal sounds reasonable.

@lbux
Contributor

lbux commented Nov 11, 2024

Haystack's PyPDF conversion method is this:

def _default_convert(self, reader: "PdfReader") -> Document:
    text = "\f".join(page.extract_text() for page in reader.pages)
    return Document(content=text)

This has the same limitation: we don't add the \n\n. Complicated logic could be added to insert the delimiters manually, and I added some basic heuristics to test it myself (treat a newline as a paragraph break if the previous line ends with a period and the next line starts with an uppercase letter). It worked fine, but it was ugly.
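The heuristic described above might look roughly like this (a hypothetical sketch for illustration, not the code actually used; the sample text is the test document from earlier):

```python
import re

def infer_paragraph_breaks(text: str) -> str:
    # turn "\n" into "\n\n" when the previous line ends with a period
    # and the next line starts with an uppercase letter
    return re.sub(r"(?<=\.)\n(?=[A-Z])", "\n\n", text)

raw = ("This is a test document. Here we have a sentence.\n"
       "This is the start of a new paragraph.\n"
       "The number should be 3.\n")
print(infer_paragraph_breaks(raw).count("\n\n"))  # 2
```

As a heuristic it misfires on abbreviations and sentences wrapped mid-paragraph, which is why layout-aware extraction is the cleaner fix.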

I then looked more into extract_text(), and it seems we are using the legacy extractor. pypdf has an experimental layout mode that can be enabled by passing extraction_mode="layout" to extract_text(), and it works out of the box for my test without any additional changes needed... Just another data point for possible solutions!

@lbux
Contributor

lbux commented Nov 11, 2024

> @lbux thanks for debugging this issue!
>
> We should probably investigate better, but your proposal sounds reasonable.

Of course! My current project is explicitly dependent on retrieving exact paragraphs, so I had to fix this one way or another haha. Hoping for a permanent solution when time allows.

@julian-risch julian-risch added the P2 Medium priority, add to the next sprint if no P1 available label Nov 25, 2024