This page provides a quick overview for getting started with the PyPDF document loader. For detailed documentation of all PyPDFLoader features and configurations, head to the API reference.
Overview
Integration details
| Class | Package | Local | Serializable | JS support |
|---|---|---|---|---|
| PyPDFLoader | langchain-community | ✅ | ❌ | ❌ |
Loader features
| Source | Document Lazy Loading | Native Async Support | Extract Images | Extract Tables |
|---|---|---|---|---|
| PyPDFLoader | ✅ | ❌ | ✅ | ✅ |
Setup
Credentials
No credentials are required to use PyPDFLoader. If you want automated, best-in-class tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
Installation
Install langchain-community and pypdf:
%pip install -qU langchain-community pypdf
Note: you may need to restart the kernel to use updated packages.
Initialization
Now we can instantiate our loader and load documents:
from langchain_community.document_loaders import PyPDFLoader
file_path = "./example_data/layout-parser-paper.pdf"
loader = PyPDFLoader(file_path)
Load
docs = loader.load()
docs[0]
Document(metadata={'producer': 'pdfTeX-1.40.21', 'creator': 'LaTeX with hyperref', 'creationdate': '2021-06-22T01:27:10+00:00', 'source': './example_data/layout-parser-paper.pdf', 'file_path': './example_data/layout-parser-paper.pdf', 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'moddate': '2021-06-22T01:27:10+00:00', 'trapped': '', 'page': 0}, page_content='LayoutParser: A Unified Toolkit for Deep\nLearning Based Document Image Analysis\nZejiang Shen1 (\x00), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\nLee4, Jacob Carlson3, and Weining Li5\n1 Allen Institute for AI\[email protected]\n2 Brown University\nruochen [email protected]\n3 Harvard University\n{melissadell,jacob carlson}@fas.harvard.edu\n4 University of Washington\[email protected]\n5 University of Waterloo\[email protected]\nAbstract. Recent advances in document image analysis (DIA) have been\nprimarily driven by the application of neural networks. Ideally, research\noutcomes could be easily deployed in production and extended for further\ninvestigation. However, various factors like loosely organized codebases\nand sophisticated model configurations complicate the easy reuse of im-\nportant innovations by a wide audience. Though there have been on-going\nefforts to improve reusability and simplify deep learning (DL) model\ndevelopment in disciplines like natural language processing and computer\nvision, none of them are optimized for challenges in the domain of DIA.\nThis represents a major gap in the existing toolkit, as DIA is central to\nacademic research across a wide range of disciplines in the social sciences\nand humanities. This paper introduces LayoutParser, an open-source\nlibrary for streamlining the usage of DL in DIA research and applica-\ntions. The core LayoutParser library comes with a set of simple and\nintuitive interfaces for applying and customizing DL models for layout de-\ntection, character recognition, and many other document processing tasks.\nTo promote extensibility, LayoutParser also incorporates a community\nplatform for sharing both pre-trained models and full document digiti-\nzation pipelines. We demonstrate that LayoutParser is helpful for both\nlightweight and large-scale digitization pipelines in real-word use cases.\nThe library is publicly available at https://layout-parser.github.io.\nKeywords: Document Image Analysis · Deep Learning · Layout Analysis\n· Character Recognition · Open Source library · Toolkit.\n1\nIntroduction\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\ndocument image analysis (DIA) tasks including document image classification [11,\narXiv:2103.15348v2 [cs.CV] 21 Jun 2021')
import pprint
pprint.pp(docs[0].metadata)
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 0}
Lazy Load
pages = []
for doc in loader.lazy_load():
    pages.append(doc)
    if len(pages) >= 10:
        # do some paged operation, e.g.
        # index.upsert(page)

        pages = []
len(pages)
6
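The loop above collects pages in batches of ten so the whole PDF never has to sit in memory at once, handing each batch to some paged operation such as an index upsert. Below is a minimal sketch of that pattern; the in-memory vector store and deterministic fake embeddings are stand-ins chosen only to keep the snippet self-contained, and add_documents plays the role of the hypothetical index.upsert from the comment above:
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.vectorstores import InMemoryVectorStore

# Stand-in index; substitute your real vector store and embedding model.
index = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=256))

batch = []
for doc in loader.lazy_load():
    batch.append(doc)
    if len(batch) >= 10:
        index.add_documents(batch)  # index ten pages at a time
        batch = []
if batch:
    index.add_documents(batch)  # flush the remaining pages
The snippet below inspects one of the six pages left in pages by the loop above.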
print(pages[0].page_content[:100])
pprint.pp(pages[0].metadata)
LayoutParser: A Unified Toolkit for DL-Based DIA
11
focuses on precision, efficiency, and robustness. T
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 10}
The metadata attribute contains at least the following keys:
- source
- page (if in page mode)
- total_pages
- creationdate
- creator
- producer
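Additional metadata is specific to each parser. These fields are handy for filtering or grouping pages before further processing. The sketch below is illustrative only: it reuses the docs list loaded earlier and picks an arbitrary cutoff of three pages:
from collections import defaultdict

# Group the loaded pages by the PDF file they came from.
pages_by_source = defaultdict(list)
for doc in docs:
    pages_by_source[doc.metadata["source"]].append(doc)

# Keep only the first three pages of each document (page numbers are 0-based).
first_pages = [doc for doc in docs if doc.metadata["page"] < 3]
print(len(pages_by_source), len(first_pages))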
Splitting mode & custom pages delimiter
When loading the PDF file you can split it in two different ways:
- By page
- As a single text flow
Extract the PDF by page. Each page is extracted as a langchain Document object:
loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
16
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 0}
Extract the whole PDF as a single langchain Document object:
loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
)
docs = loader.load()
print(len(docs))
pprint.pp(docs[0].metadata)
1
{'producer': 'pdfTeX-1.40.21',
'creator': 'LaTeX with hyperref',
'creationdate': '2021-06-22T01:27:10+00:00',
'source': './example_data/layout-parser-paper.pdf',
'file_path': './example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': ''}
Add a custom pages_delimiter to identify where the ends of pages are in single mode:
loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="single",
    pages_delimiter="\n-------THIS IS A CUSTOM END OF PAGE-------\n",
)
docs = loader.load()
print(docs[0].page_content[:5780])
Extract images from the PDF
You can extract images from your PDFs with a choice of three different solutions:
- rapidOCR (lightweight Optical Character Recognition tool)
- Tesseract (OCR tool with high precision)
- Multimodal language model
Extract images from the PDF with rapidOCR:
%pip install -qU rapidocr-onnxruntime
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import RapidOCRBlobParser
loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=RapidOCRBlobParser(),
)
docs = loader.load()
print(docs[5].page_content)
Extract images from the PDF with Tesseract:
%pip install -qU pytesseract
Note: you may need to restart the kernel to use updated packages.
from langchain_community.document_loaders.parsers import TesseractBlobParser
loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="html-img",
    images_parser=TesseractBlobParser(),
)
docs = loader.load()
print(docs[5].page_content)
Extract images from the PDF with a multimodal model:
%pip install -qU langchain-openai
Note: you may need to restart the kernel to use updated packages.
import os
from dotenv import load_dotenv
load_dotenv()
True
from getpass import getpass
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key =")
from langchain_community.document_loaders.parsers import LLMImageBlobParser
from langchain_openai import ChatOpenAI
loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    images_inner_format="markdown-img",
    images_parser=LLMImageBlobParser(model=ChatOpenAI(model="gpt-4o", max_tokens=1024)),
)
docs = loader.load()
print(docs[5].page_content)
Extract tables from the PDF
With PyMUPDF you can extract tables from the PDF in html, markdown or csv format:
loader = PyPDFLoader(
    "./example_data/layout-parser-paper.pdf",
    mode="page",
    extract_tables="markdown",
)
docs = loader.load()
print(docs[4].page_content)
LayoutParser: A Unified Toolkit for DL-Based DIA
5
Table 1: Current layout detection models in the LayoutParser model zoo
Dataset
Base Model1 Large Model
Notes
PubLayNet [38]
F / M
M
Layouts of modern scientific documents
PRImA [3]
M
-
Layouts of scanned modern magazines and scientific reports
Newspaper [17]
F
-
Layouts of scanned US newspapers from the 20th century
TableBank [18]
F
F
Table region on modern scientific and business document
HJDataset [31]
F / M
-
Layouts of history Japanese documents
1 For each dataset, we train several models of different sizes for different needs (the trade-offbetween accuracy
vs. computational cost). For "base model" and "large model", we refer to using the ResNet 50 or ResNet 101
backbones [13], respectively. One can train models of different architectures, like Faster R-CNN [28] (F) and Mask
R-CNN [12] (M). For example, an F in the Large Model column indicates it has a Faster R-CNN model trained
using the ResNet 101 backbone. The platform is maintained and a number of additions will be made to the model
zoo in coming months.
layout data structures, which are optimized for efficiency and versatility. 3) When
necessary, users can employ existing or customized OCR models via the unified
API provided in the OCR module. 4) LayoutParser comes with a set of utility
functions for the visualization and storage of the layout data. 5) LayoutParser
is also highly customizable, via its integration with functions for layout data
annotation and model training. We now provide detailed descriptions for each
component.
3.1
Layout Detection Models
In LayoutParser, a layout model takes a document image as an input and
generates a list of rectangular boxes for the target content regions. Different
from traditional methods, it relies on deep convolutional neural networks rather
than manually curated rules to identify content regions. It is formulated as an
object detection problem and state-of-the-art models like Faster R-CNN [28] and
Mask R-CNN [12] are used. This yields prediction results of high accuracy and
makes it possible to build a concise, generalized interface for layout detection.
LayoutParser, built upon Detectron2 [35], provides a minimal API that can
perform layout detection with only four lines of code in Python:
1 import
layoutparser as lp
2 image = cv2.imread("image_file") # load
images
3 model = lp. Detectron2LayoutModel (
4
"lp:// PubLayNet/ faster_rcnn_R_50_FPN_3x /config")
5 layout = model.detect(image)
LayoutParser provides a wealth of pre-trained model weights using various
datasets covering different languages, time periods, and document types. Due to
domain shift [7], the prediction performance can notably drop when models are ap-
plied to target samples that are significantly different from the training dataset. As
document structures and layouts vary greatly in different domains, it is important
to select models trained on a dataset similar to the test samples. A semantic syntax
is used for initializing the model weights in LayoutParser, using both the dataset
name and model name lp://<dataset-name>/<model-architecture-name>.
|Dataset|Base Model1|Large Model|Notes|
|---|---|---|---|
|PubLayNet [38] PRImA [3] Newspaper [17] TableBank [18] HJDataset [31]|F / M M F F F / M|M &#45; &#45; F &#45;|Layouts of modern scientific documents Layouts of scanned modern magazines and scientific reports Layouts of scanned US newspapers from the 20th century Table region on modern scientific and business document Layouts of history Japanese documents|
Working with Files
Many document loaders involve parsing files. The difference between such loaders usually stems from how the file is parsed, rather than how the file is loaded. For example, you can use open to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.
As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to re-use a given parser regardless of how the data was loaded.
You can use this strategy to analyze different files, with the same parsing parameters.
from langchain_community.document_loaders import FileSystemBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import PyPDFParser
loader = GenericLoader(
    blob_loader=FileSystemBlobLoader(
        path="./example_data/",
        glob="*.pdf",
    ),
    blob_parser=PyPDFParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
LayoutParser: A Unified Toolkit for Deep
Learning Based Document Image Analysis
Zejiang Shen1 ( ), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain
Lee4, Jacob Carlson3, and Weining Li5
1 Allen Institute for AI
[email protected]
2 Brown University
ruochen [email protected]
3 Harvard University
{melissadell,jacob carlson}@fas.harvard.edu
4 University of Washington
[email protected]
5 University of Waterloo
[email protected]
Abstract. Recent advances in document image analysis (DIA) have been
primarily driven by the application of neural networks. Ideally, research
outcomes could be easily deployed in production and extended for further
investigation. However, various factors like loosely organized codebases
and sophisticated model configurations complicate the easy reuse of im-
portant innovations by a wide audience. Though there have been on-going
efforts to improve reusability and simplify deep learning (DL) model
development in disciplines like natural language processing and computer
vision, none of them are optimized for challenges in the domain of DIA.
This represents a major gap in the existing toolkit, as DIA is central to
academic research across a wide range of disciplines in the social sciences
and humanities. This paper introduces LayoutParser, an open-source
library for streamlining the usage of DL in DIA research and applica-
tions. The core LayoutParser library comes with a set of simple and
intuitive interfaces for applying and customizing DL models for layout de-
tection, character recognition, and many other document processing tasks.
To promote extensibility, LayoutParser also incorporates a community
platform for sharing both pre-trained models and full document digiti-
zation pipelines. We demonstrate that LayoutParser is helpful for both
lightweight and large-scale digitization pipelines in real-word use cases.
The library is publicly available at https://layout-parser.github.io.
Keywords: Document Image Analysis · Deep Learning · Layout Analysis
· Character Recognition · Open Source library · Toolkit.
1
Introduction
Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of
document image analysis (DIA) tasks including document image classification [11,
arXiv:2103.15348v2 [cs.CV] 21 Jun 2021
{'source': 'example_data/layout-parser-paper.pdf',
'file_path': 'example_data/layout-parser-paper.pdf',
'total_pages': 16,
'format': 'PDF 1.5',
'title': '',
'author': '',
'subject': '',
'keywords': '',
'creator': 'LaTeX with hyperref',
'producer': 'pdfTeX-1.40.21',
'creationdate': '2021-06-22T01:27:10+00:00',
'moddate': '2021-06-22T01:27:10+00:00',
'trapped': '',
'page': 0}
It is also possible to work with files from cloud storage:
from langchain_community.document_loaders import CloudBlobLoader
from langchain_community.document_loaders.generic import GenericLoader
loader = GenericLoader(
    blob_loader=CloudBlobLoader(
        url="s3://mybucket",  # Supports s3://, az://, gs://, file:// schemes.
        glob="*.pdf",
    ),
    blob_parser=PyPDFParser(),
)
docs = loader.load()
print(docs[0].page_content)
pprint.pp(docs[0].metadata)
API reference
For detailed documentation of all PyPDFLoader features and configurations, head to the API reference: python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.pdf.PyPDFLoader.html