
How to load PDFs

Portable Document Format (PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems.

This guide covers how to load PDF documents into the LangChain Document format that we use downstream.

Text in PDFs is typically represented via text boxes. They may also contain images. A PDF parser might do some combination of the following:

  • Agglomerate text boxes into lines, paragraphs, and other structures via heuristics or ML inference;
  • Run OCR on images to detect text therein;
  • Classify text as belonging to paragraphs, lists, tables, or other structures;
  • Structure text into table rows and columns, or key-value pairs;
  • Use a multimodal LLM to extract the body of the document, page by page.

PDF files are organized into pages, but splitting documents page by page is not a good strategy. In RAG projects, this approach creates memory gaps. If a paragraph spans two pages, its beginning sits at the end of one page while the rest starts the next. Page-based splitting then produces two separate chunks, each containing part of a sentence, and the corresponding vectors won't be relevant: these chunks are unlikely to be retrieved for a question specifically about the split paragraph, and if one of them is retrieved, the LLM has little chance of answering from it. The problem is aggravated by headers, footers (if the parser hasn't removed them properly), images, or tables injected at the end of a page, as most current implementations tend to do. One mitigation is sketched below.
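A minimal sketch of this mitigation, assuming the loader supports the mode="single" parameter described later in this guide; the chunk size, overlap, and separators are placeholder values, not recommendations:

%pip install -qU langchain-text-splitters
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load the whole PDF as one Document so paragraphs are not cut at page breaks.
loader = PyPDFLoader("example.pdf", mode="single")
docs = loader.load()

# Re-split on paragraph and sentence boundaries instead of page boundaries.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=150,
    separators=["\n\n", "\n", ". ", " ", ""],
)
chunks = splitter.split_documents(docs)

Because the chunking is driven by the text itself, a paragraph that straddles a page break stays in a single chunk.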

Images and tables pose difficult challenges for PDF parsers.

Some parsers can retrieve images; the question is then what to do with them. You can apply an OCR algorithm to extract the textual content of each image, or ask a multimodal LLM to describe it. Once an image has been converted to text, where should that text be placed in the document flow? At the end of the page, at the risk of breaking a paragraph that continues on the next page? Implementations try to find a neutral location, between two paragraphs, if possible.

Some parsers can also extract tables, with varying degrees of success, and with or without integrating them into the text flow. Note that a Markdown table cannot describe merged cells, unlike an HTML table.

Finally, the metadata extracted from PDF files varies from parser to parser. We propose a minimum set that parsers should offer:

  • source
  • page
  • total_pages
  • creationdate
  • creator
  • producer

Most parsers offer similar parameters, such as mode, which allows you to request the retrieval of one document per page (mode="page"), or the entire file stream in a single document (mode="single"). Other modes can return the structure of the document, following the identification of each component.
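A quick sketch comparing the two modes with PyPDFLoader (which is demonstrated below); the path is a placeholder, and you should check each loader's API reference for the modes it actually supports:

from langchain_community.document_loaders import PyPDFLoader

# One Document per page:
page_docs = PyPDFLoader("example.pdf", mode="page").load()

# The whole file as a single Document:
single_docs = PyPDFLoader("example.pdf", mode="single").load()

print(len(page_docs), len(single_docs))  # e.g. 16 vs. 1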

LangChain aims to unify the different parsers to make it easier to migrate from one to another. Why does this matter? Each parser has its own characteristics and strategies, more or less effective depending on the family of PDF files. One strategy is to identify the family of a PDF file (by inspecting its metadata or the content of the first page) and then select the most effective parser for that case. Because the parsers are unified, downstream code doesn't need to deal with each parser's specifics, as the result is similar for each.
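As an illustration, a hedged sketch of such a dispatch: inspect the PDF's metadata with pypdf and pick a loader accordingly. The rule itself is hypothetical, and the path is a placeholder; the loaders and parameters are the ones shown later in this guide:

from pypdf import PdfReader
from langchain_community.document_loaders import PDFPlumberLoader, PyPDFLoader


def pick_loader(path: str):
    """Choose a parser from a quick look at the file's metadata (heuristic)."""
    meta = PdfReader(path).metadata
    producer = (meta.producer or "") if meta else ""
    # Hypothetical rule: files from a table-heavy report generator go to pdfplumber.
    if "report" in producer.lower():
        return PDFPlumberLoader(path, mode="page", extract_tables="markdown")
    return PyPDFLoader(path, mode="page")


docs = pick_loader("example.pdf").load()  # placeholder path

Because both loaders expose the same mode parameter and return the same Document objects, the rest of the pipeline does not change when the dispatch rule changes.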

LangChain integrates with a host of PDF parsers. Some are simple and relatively low-level; others will support OCR and image-processing, or perform advanced document layout analysis. The right choice will depend on your needs. Below we enumerate the possibilities.

We will demonstrate these approaches on a sample file:

file_path = (
    "../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf"
)
A note on multimodal models

Many modern LLMs support inference over multimodal inputs (e.g., images). In some applications -- such as question-answering over PDFs with complex layouts, diagrams, or scans -- it may be advantageous to skip the PDF parsing, instead casting a PDF page to an image and passing it to a model directly. We demonstrate an example of this in the Use of multimodal models section below.

Simple and fast text extraction​

If you are looking for a simple string representation of text that is embedded in a PDF, the method below is appropriate. It will return a list of Document objects -- one per page -- containing a single string of the page's text in the Document's page_content attribute. It will not parse text in images, tables, or scanned PDF pages. Under the hood it uses the pypdf Python library.

LangChain document loaders implement lazy_load and its async variant, alazy_load, which return iterators of Document objects. We will use these below.

%pip install -qU langchain_community pypdf
Note: you may need to restart the kernel to use updated packages.
from pprint import pprint

from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader(file_path)
pages = []
async for page in loader.alazy_load():
    pages.append(page)
API Reference: PyPDFLoader
pprint(pages[0].metadata)
print(pages[0].page_content)
{'author': '',
'creationdate': '2021-06-22T01:27:10+00:00',
'creator': 'LaTeX with hyperref',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'page': 0,
'producer': 'pdfTeX-1.40.21',
'ptex.fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live '
'2020) kpathsea version 6.3.2',
'source': '../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf',
'subject': '',
'title': '',
'total_pages': 16,
'trapped': '/False'}
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain
Lee4, Jacob Carlson3, and Weining Li5
1 Allen Institute for AI
shannons@allenai.org
2 Brown University
ruochen zhang@brown.edu
3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
bcgl@cs.washington.edu
5 University of Waterloo
w422li@uwaterloo.ca
Abstract. Recent advances in document image analysis (DIA) have been
primarily driven by the application of neural networks. Ideally, research
outcomes could be easily deployed in production and extended for further
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of im-
portant innovations by a wide audience. Though there have been on-going
efforts to improve reusability and simplify deep learning (DL) model
development in disciplines like natural language processing and computer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academic research across a wide range of disciplines in the social sciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applica-
tions. The core LayoutParser library comes with a set of simple and
intuitive interfaces for applying and customizing DL models for layout de-
tection, character recognition, and many other document processing tasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digiti-
zation pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: Document Image Analysis Β· Deep Learning Β· Layout Analysis
Β· Character Recognition Β· Open Source library Β· Toolkit.
1 Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
document image analysis (DIA) tasks including document image classification [11,
arXiv:2103.15348v2 [cs.CV] 21 Jun 2021

Note that the metadata of each document stores the corresponding page number.

Vector search over PDFs​

Once we have loaded PDFs into LangChain Document objects, we can index them (e.g., for a RAG application) in the usual way. Below we use OpenAI embeddings, although any LangChain embeddings model will suffice.

%pip install -qU langchain-openai
Note: you may need to restart the kernel to use updated packages.
import getpass
import os

if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import OpenAIEmbeddings

vector_store = InMemoryVectorStore.from_documents(pages, OpenAIEmbeddings())
docs = vector_store.similarity_search("What is LayoutParser?", k=2)
for doc in docs:
    print(f'Page {doc.metadata["page"]}: {doc.page_content[:300]}\n')
Page 0: LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen1 (�), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain
Lee4, Jacob Carlson3, and Weining Li5
1 Allen Institute for AI
shannons@allenai.org
2 Brown University
ruochen zhang@brown.edu
3 Harvard Universi

Page 13: 14 Z. Shen et al.
6 Conclusion
LayoutParser provides a comprehensive toolkit for deep learning-based document
image analysis. The off-the-shelf library is easy to install, and can be used to
build flexible and accurate pipelines for processing documents with complicated
structures. It also supports hi

Extract and analyse images

%pip install -qU rapidocr-onnxruntime
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers.pdf import (
    convert_images_to_text_with_rapidocr,
)

loader = PyPDFLoader(
    file_path,
    mode="page",
    extract_images=True,
    images_to_text=convert_images_to_text_with_rapidocr(format="markdown"),
)
docs = loader.load()
print(docs[5].page_content)
6 Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.



![Coordinate
(x1, y1)
(X1, y1)
(x2,y2)
APIS
x-interval
tart
end
Quadrilateral
operation
Rectangle
y-interval
ena
(x2, y2)
(x4, y4)
(x3, y3)
and
textblock
Coordinate
transformation
+
Block
Block
Reading
Extra features
Text
Type
Order
coordinatel
textblockl
layout
same
textblock2
layoutl
The
A list of the layout elements](.)

It is possible to ask a multimodal LLM to describe the image.

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API key =")
from langchain_community.document_loaders.parsers.pdf import (
    convert_images_to_description,
)
from langchain_openai import ChatOpenAI

loader = PyPDFLoader(
    file_path,
    mode="page",
    extract_images=True,
    images_to_text=convert_images_to_description(
        model=ChatOpenAI(model="gpt-4o-mini", max_tokens=1024), format="text"
    ),
)
docs = loader.load()
print(docs[5].page_content)
6 Z. Shen et al.
Fig. 2: The relationship between the three types of layout data structures.
Coordinate supports three kinds of variation; TextBlock consists of the co-
ordinate information and extra features like block text, types, and reading orders;
a Layout object is a list of all possible layout elements, including other Layout
objects. They all support the same set of transformation and operation APIs for
maximum flexibility.
Shown in Table 1, LayoutParser currently hosts 9 pre-trained models trained
on 5 different datasets. Description of the training dataset is provided alongside
with the trained models such that users can quickly identify the most suitable
models for their tasks. Additionally, when such a model is not readily available,
LayoutParser also supports training customized layout models and community
sharing of the models (detailed in Section 3.5).
3.2 Layout Data Structures
A critical feature of LayoutParser is the implementation of a series of data
structures and operations that can be used to efficiently process and manipulate
the layout elements. In document image analysis pipelines, various post-processing
on the layout analysis model outputs is usually required to obtain the final
outputs. Traditionally, this requires exporting DL model outputs and then loading
the results into other pipelines. All model outputs from LayoutParser will be
stored in carefully engineered data types optimized for further processing, which
makes it possible to build an end-to-end document digitization pipeline within
LayoutParser. There are three key components in the data structure, namely
the Coordinate system, the TextBlock, and the Layout. They provide different
levels of abstraction for the layout data, and a set of APIs are supported for
transformations or operations on these classes.



**Summary:** The image illustrates a layout structure for representing geometric shapes and their coordinates. It includes sections for coordinates, text blocks with additional features, and a layout list that organizes these elements.

**Extracted Text:**
- Coordinate
- x-interval
- y-interval
- Rectangle
- (x1, y1)
- (x2, y2)
- Quadrilateral
- (x3, y3)
- (x4, y4)
- [ coordinate1, textblock1, ..., textblock2, layout1 ]
- A list of the layout elements
- Extra features
- Block Text
- Block Type
- Reading Order
- The same transformation and operation APIs

Extract tables

Some parsers can extract tables. This is the case with PDFPlumberLoader.

%pip install -qU langchain_community pdfplumber
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders import PDFPlumberLoader

loader = PDFPlumberLoader(
    file_path,
    mode="page",
    extract_tables="markdown",
)
docs = loader.load()
print(docs[4].page_content)
API Reference: PDFPlumberLoader
LayoutParser: A Unified Toolkit for DL-Based DIA 5
Table 1: Current layout detection models in the LayoutParser model zoo
Dataset
|||
|---|---|
|BaseModel1|LargeModel|
|F/M M F F F/M|M - - F -|

Notes
PubLayNet[38] Layoutsofmodernscientificdocuments
PRImA[3] Layoutsofscannedmodernmagazinesandscientificreports
Newspaper[17] LayoutsofscannedUSnewspapersfromthe20thcentury
TableBank[18] Tableregiononmodernscientificandbusinessdocument
HJDataset[31] LayoutsofhistoryJapanesedocuments
1Foreachdataset,wetrainseveralmodelsofdifferentsizesfordifferentneeds(thetrade-offbetweenaccuracy
vs.computationalcost).Forβ€œbasemodel”andβ€œlargemodel”,werefertousingtheResNet50orResNet101
backbones[13],respectively.Onecantrainmodelsofdifferentarchitectures,likeFasterR-CNN[28](F)andMask
R-CNN[12](M).Forexample,anFintheLargeModelcolumnindicatesithasaFasterR-CNNmodeltrained
usingtheResNet101backbone.Theplatformismaintainedandanumberofadditionswillbemadetothemodel
zooincomingmonths.
layout data structures, which are optimized for efficiency and versatility. 3) When
necessary, users can employ existing or customized OCR models via the unified
API provided in the OCR module. 4) LayoutParser comes with a set of utility
functions for the visualization and storage of the layout data. 5) LayoutParser
is also highly customizable, via its integration with functions for layout data
annotation and model training. We now provide detailed descriptions for each
component.
3.1 Layout Detection Models
In LayoutParser, a layout model takes a document image as an input and
generates a list of rectangular boxes for the target content regions. Different
from traditional methods, it relies on deep convolutional neural networks rather
than manually curated rules to identify content regions. It is formulated as an
object detection problem and state-of-the-art models like Faster R-CNN [28] and
Mask R-CNN [12] are used. This yields prediction results of high accuracy and
makes it possible to build a concise, generalized interface for layout detection.
LayoutParser, built upon Detectron2 [35], provides a minimal API that can
perform layout detection with only four lines of code in Python:
1
||
|---|
|import layoutparser as lp|
|image = cv2.imread("image_file") # load images|
|model = lp.Detectron2LayoutModel(|
|"lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config")|
|layout = model.detect(image)|

2
3
4
5
LayoutParser provides a wealth of pre-trained model weights using various
datasets covering different languages, time periods, and document types. Due to
domainshift[7],thepredictionperformancecannotablydropwhenmodelsareap-
pliedtotargetsamplesthataresignificantlydifferentfromthetrainingdataset.As
documentstructuresandlayoutsvarygreatlyindifferentdomains,itisimportant
toselectmodelstrainedonadatasetsimilartothetestsamples.Asemanticsyntax
isusedforinitializingthemodelweightsinLayoutParser,usingboththedataset
name and model name lp://<dataset-name>/<model-architecture-name>.

Layout analysis and extraction of text from images​

If you require a more granular segmentation of text (e.g., into distinct paragraphs, titles, tables, or other structures) or require extraction of text from images, the method below is appropriate. It will return a list of Document objects, where each object represents a structure on the page. The Document's metadata stores the page number and other information related to the object (e.g., it might store table rows and columns in the case of a table object).

Under the hood it uses the langchain-unstructured library. See the integration docs for more information about using Unstructured with LangChain.

Unstructured supports multiple parameters for PDF parsing:

  • strategy (e.g., "auto", "fast", "ocr_only", or "hi_res")
  • API or local processing. You will need an API key to use the API.

The hi_res strategy provides support for document layout analysis and OCR. We demonstrate it below via the API. See the local parsing section below for considerations when running locally.

%pip install -qU langchain-unstructured
Note: you may need to restart the kernel to use updated packages.
import getpass
import os

if "UNSTRUCTURED_API_KEY" not in os.environ:
os.environ["UNSTRUCTURED_API_KEY"] = getpass.getpass("Unstructured API Key:")

As before, we initialize a loader and load documents lazily:

from langchain_unstructured import UnstructuredLoader

loader = UnstructuredLoader(
    file_path=file_path,
    strategy="hi_res",
    partition_via_api=True,
    coordinates=True,
)
docs = []
for doc in loader.lazy_load():
    docs.append(doc)
API Reference: UnstructuredLoader

Here we recover more than 100 distinct structures over the 16-page document:

print(len(docs))
191

We can use the document metadata to recover content from a single page:

first_page_docs = [doc for doc in docs if doc.metadata.get("page_number") == 1]

for doc in first_page_docs:
    print(doc.page_content)

Extracting tables and other structures​

Each Document we load represents a structure, like a title, paragraph, or table.

Some structures may be of special interest for indexing or question-answering tasks. These structures may be:

  1. Classified for easy identification;
  2. Parsed into a more structured representation.

Below, we identify and extract a table:

The helper code below renders a page with bounding boxes around the detected segments:
%pip install -qU matplotlib PyMuPDF pillow
Note: you may need to restart the kernel to use updated packages.
import fitz
import matplotlib.patches as patches
import matplotlib.pyplot as plt
from PIL import Image


def plot_pdf_with_boxes(pdf_page, segments):
    pix = pdf_page.get_pixmap()
    pil_image = Image.frombytes("RGB", [pix.width, pix.height], pix.samples)

    fig, ax = plt.subplots(1, figsize=(10, 10))
    ax.imshow(pil_image)
    categories = set()
    category_to_color = {
        "Title": "orchid",
        "Image": "forestgreen",
        "Table": "tomato",
    }
    for segment in segments:
        points = segment["coordinates"]["points"]
        layout_width = segment["coordinates"]["layout_width"]
        layout_height = segment["coordinates"]["layout_height"]
        scaled_points = [
            (x * pix.width / layout_width, y * pix.height / layout_height)
            for x, y in points
        ]
        box_color = category_to_color.get(segment["category"], "deepskyblue")
        categories.add(segment["category"])
        rect = patches.Polygon(
            scaled_points, linewidth=1, edgecolor=box_color, facecolor="none"
        )
        ax.add_patch(rect)

    # Make legend
    legend_handles = [patches.Patch(color="deepskyblue", label="Text")]
    for category in ["Title", "Image", "Table"]:
        if category in categories:
            legend_handles.append(
                patches.Patch(color=category_to_color[category], label=category)
            )
    ax.axis("off")
    ax.legend(handles=legend_handles, loc="upper right")
    plt.tight_layout()
    plt.show()


def render_page(doc_list: list, page_number: int, print_text=True) -> None:
    pdf_page = fitz.open(file_path).load_page(page_number - 1)
    page_docs = [
        doc for doc in doc_list if doc.metadata.get("page_number") == page_number
    ]
    segments = [doc.metadata for doc in page_docs]
    plot_pdf_with_boxes(pdf_page, segments)
    if print_text:
        for doc in page_docs:
            print(f"{doc.page_content}\n")


render_page(docs, 5)

LayoutParser: A Unified Toolkit for DL-Based DIA

5

Table 1: Current layout detection models in the LayoutParser model zoo

Dataset Base Model1 Large Model Notes PubLayNet [38] F / M M Layouts of modern scientific documents PRImA [3] M - Layouts of scanned modern magazines and scientific reports Newspaper [17] F - Layouts of scanned US newspapers from the 20th century TableBank [18] F F Table region on modern scientific and business document HJDataset [31] F / M - Layouts of history Japanese documents







1 For each dataset, we train several models of different sizes for different needs (the trade-off between accuracy vs. computational cost). For β€œbase model” and β€œlarge model”, we refer to using the ResNet 50 or ResNet 101 backbones [13], respectively. One can train models of different architectures, like Faster R-CNN [28] (F) and Mask R-CNN [12] (M). For example, an F in the Large Model column indicates it has a Faster R-CNN model trained using the ResNet 101 backbone. The platform is maintained and a number of additions will be made to the model zoo in coming months.

layout data structures, which are optimized for efficiency and versatility. 3) When necessary, users can employ existing or customized OCR models via the unified API provided in the OCR module. 4) LayoutParser comes with a set of utility functions for the visualization and storage of the layout data. 5) LayoutParser is also highly customizable, via its integration with functions for layout data annotation and model training. We now provide detailed descriptions for each component.

3.1 Layout Detection Models

In LayoutParser, a layout model takes a document image as an input and generates a list of rectangular boxes for the target content regions. Different from traditional methods, it relies on deep convolutional neural networks rather than manually curated rules to identify content regions. It is formulated as an object detection problem and state-of-the-art models like Faster R-CNN [28] and Mask R-CNN [12] are used. This yields prediction results of high accuracy and makes it possible to build a concise, generalized interface for layout detection. LayoutParser, built upon Detectron2 [35], provides a minimal API that can perform layout detection with only four lines of code in Python:

1 import layoutparser as lp 2 image = cv2 . imread ( " image_file " ) # load images 3 model = lp . De t e c tro n2 Lay outM odel ( 4 " lp :// PubLayNet / f as t er _ r c nn _ R _ 50 _ F P N_ 3 x / config " ) 5 layout = model . detect ( image )

LayoutParser provides a wealth of pre-trained model weights using various datasets covering different languages, time periods, and document types. Due to domain shift [7], the prediction performance can notably drop when models are ap- plied to target samples that are significantly different from the training dataset. As document structures and layouts vary greatly in different domains, it is important to select models trained on a dataset similar to the test samples. A semantic syntax is used for initializing the model weights in LayoutParser, using both the dataset name and model name lp://<dataset-name>/<model-architecture-name>.

Note that although the table text is collapsed into a single string in the document's content, the metadata contains a representation of its rows and columns:

from IPython.display import HTML, display

segments = [
    doc.metadata
    for doc in docs
    if doc.metadata.get("page_number") == 5 and doc.metadata.get("category") == "Table"
]

display(HTML(segments[0]["text_as_html"]))
<table><thead><tr><th>Dataset</th><th>β€˜ Base Mode11|</th><th>Large Model</th><th>| Notes</th></tr></thead><tbody><tr><td>PubLayNet [38]</td><td>F / M</td><td>M</td><td>Layouts of modern scientific documents</td></tr><tr><td>PRImA [3]</td><td>M</td><td>-</td><td>Layouts of scanned modern magazines and scientific reports</td></tr><tr><td>Newspaper</td><td>F</td><td>-</td><td>Layouts of scanned US newspapers from the 20th century</td></tr><tr><td>TableBank</td><td>F</td><td>F</td><td>Table region on modern scientific and business document</td></tr><tr><td>HJDataset n</td><td>F / M</td><td>-</td><td>Layouts of history Japanese documents</td></tr></tbody></table> 

Extracting text from specific sections​

Structures may have parent-child relationships -- for example, a paragraph might belong to a section with a title. If a section is of particular interest (e.g., for indexing) we can isolate the corresponding Document objects.

Below, we extract all text associated with the document's "Conclusion" section:

render_page(docs, 14, print_text=False)

conclusion_docs = []
parent_id = -1
for doc in docs:
    if doc.metadata["category"] == "Title" and "Conclusion" in doc.page_content:
        parent_id = doc.metadata["element_id"]
    if doc.metadata.get("parent_id") == parent_id:
        conclusion_docs.append(doc)

for doc in conclusion_docs:
    print(doc.page_content)
LayoutParser provides a comprehensive toolkit for deep learning-based document image analysis. The off-the-shelf library is easy to install, and can be used to build flexible and accurate pipelines for processing documents with complicated structures. It also supports high-level customization and enables easy labeling and training of DL models on unique document image datasets. The LayoutParser community platform facilitates sharing DL models and DIA pipelines, inviting discussion and promoting code reproducibility and reusability. The LayoutParser team is committed to keeping the library updated continuously and bringing the most recent advances in DL-based DIA, such as multi-modal document modeling [37, 36, 9] (an upcoming priority), to a diverse audience of end-users.
Acknowledgements We thank the anonymous reviewers for their comments and suggestions. This project is supported in part by NSF Grant OIA-2033558 and funding from the Harvard Data Science Initiative and Harvard Catalyst. Zejiang Shen thanks Doug Downey for suggestions.

Extracting text from images​

OCR is run on images, enabling the extraction of text therein:

render_page(docs, 11)

LayoutParser: A Unified Toolkit for DL-Based DIA

focuses on precision, efficiency, and robustness. The target documents may have complicated structures, and may require training multiple layout detection models to achieve the optimal accuracy. Light-weight pipelines are built for relatively simple documents, with an emphasis on development ease, speed and flexibility. Ideally one only needs to use existing resources, and model training should be avoided. Through two exemplar projects, we show how practitioners in both academia and industry can easily build such pipelines using LayoutParser and extract high-quality structured document data for their downstream tasks. The source code for these projects will be publicly available in the LayoutParser community hub.

11

5.1 A Comprehensive Historical Document Digitization Pipeline

The digitization of historical documents can unlock valuable data that can shed light on many important social, economic, and historical questions. Yet due to scan noises, page wearing, and the prevalence of complicated layout structures, ob- taining a structured representation of historical document scans is often extremely complicated.

In this example, LayoutParser was used to develop a comprehensive pipeline, shown in Figure 5, to gener- ate high-quality structured data from historical Japanese firm financial ta- bles with complicated layouts. The pipeline applies two layout models to identify different levels of document structures and two customized OCR engines for optimized character recog- nition accuracy.

As shown in Figure 4 (a), the document contains columns of text written vertically 15, a common style in Japanese. Due to scanning noise and archaic printing technology, the columns can be skewed or have vari- able widths, and hence cannot be eas- ily identified via rule-based methods. Within each column, words are sepa- rated by white spaces of variable size, and the vertical positions of objects can be an indicator of their layout type.

β€˜Active Learning Layout Annotate Layout Dataset | +β€”β€” Annotation Toolkit A4 Deep Learning Layout Layout Detection Model Training & Inference, A Post-processing β€” Handy Data Structures & \ Lo orajport 7 ) Al Pls for Layout Data A4 Default and Customized Text Recognition 0CR Models Β₯ Visualization & Export Layout Structure Visualization & Storage The Japanese Document Helpful LayoutParser Modules Digitization Pipeline

Fig. 5: Illustration of how LayoutParser helps with the historical document digi- tization pipeline.

15 A document page consists of eight rows like this. For simplicity we skip the row segmentation discussion and refer readers to the source code when available.

Note that the text from the figure on the right is extracted and incorporated into the content of the Document.

Local parsing​

Parsing locally requires the installation of additional dependencies.

Poppler (PDF analysis)

Tesseract (OCR)
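On Debian/Ubuntu or macOS (Homebrew) these are typically available as system packages; the package names below are an assumption and may differ on your platform:

!sudo apt-get install -y poppler-utils tesseract-ocr  # Debian/Ubuntu (assumed package names)
!brew install poppler tesseract  # macOS with Homebrew (assumed package names)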

We will also need to install the unstructured PDF extras:

%pip install -qU "unstructured[pdf]"
Note: you may need to restart the kernel to use updated packages.

We can then use the UnstructuredLoader much the same way, forgoing the API key and partition_via_api setting:

loader_local = UnstructuredLoader(
    file_path=file_path,
    strategy="hi_res",
)
docs_local = []
for doc in loader_local.lazy_load():
    docs_local.append(doc)
INFO: pikepdf C++ to Python logger bridge initialized
INFO: Reading PDF for file: ../../docs/integrations/document_loaders/example_data/layout-parser-paper.pdf ...

The list of documents can then be processed similarly to those obtained from the API.
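For example, a quick sketch (assuming the same metadata keys seen above, such as category) to tally the element categories detected locally:

from collections import Counter

# Count detected element categories (e.g., Title, NarrativeText, Table).
print(Counter(doc.metadata.get("category") for doc in docs_local))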

Use of multimodal models​

Many modern LLMs support inference over multimodal inputs (e.g., images). In some applications-- such as question-answering over PDFs with complex layouts, diagrams, or scans-- it may be advantageous to skip the PDF parsing, instead casting a PDF page to an image and passing it to a model directly. This allows a model to reason over the two dimensional content on the page, instead of a "one-dimensional" string representation.

In principle we can use any LangChain chat model that supports multimodal inputs. A list of these models is documented here. Below we use OpenAI's gpt-4o-mini.

First we define a short utility function to convert a PDF page to a base64-encoded image:

%pip install -qU PyMuPDF pillow langchain-openai
Note: you may need to restart the kernel to use updated packages.
import base64
import io

import fitz
from PIL import Image


def pdf_page_to_base64(pdf_path: str, page_number: int):
    pdf_document = fitz.open(pdf_path)
    page = pdf_document.load_page(page_number - 1)  # input is one-indexed
    pix = page.get_pixmap()
    img = Image.frombytes("RGB", [pix.width, pix.height], pix.samples)

    buffer = io.BytesIO()
    img.save(buffer, format="PNG")

    return base64.b64encode(buffer.getvalue()).decode("utf-8")
from IPython.display import Image as IPImage
from IPython.display import display

base64_image = pdf_page_to_base64(file_path, 11)
display(IPImage(data=base64.b64decode(base64_image)))

We can then query the model in the usual way. Below we ask it a question related to the diagram on the page.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
API Reference: ChatOpenAI
from langchain_core.messages import HumanMessage

query = "What is the name of the first step in the pipeline?"

message = HumanMessage(
    content=[
        {"type": "text", "text": query},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
        },
    ],
)
response = llm.invoke([message])
print(response.content)
API Reference: HumanMessage
INFO: HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
The first step in the pipeline is "Annotate Layout Dataset."

Other PDF loaders​

For a list of available LangChain PDF loaders, please see this table.

