| | llmsherpa | paperetl |
|---|---|---|
| Mentions | 6 | 12 |
| Stars | 970 | 317 |
| Growth | 16.2% | 4.7% |
| Activity | 6.6 | 6.3 |
| Last commit | 7 days ago | 5 months ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llmsherpa
- LlamaCloud and LlamaParse
To get good RAG performance you will need a good chunking strategy. Simply extracting all the text is not enough; knowing the boundaries of tables, lists, paragraphs, sections, etc. is helpful.
Great work by the LlamaIndex team. Also feel free to try https://github.com/nlmatics/llmsherpa, which takes into account some of the things I mentioned.
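To make the boundary point concrete, here is a minimal sketch (not llmsherpa's actual implementation) contrasting fixed-size splitting with splitting on structural boundaries. The `(kind, content)` block list stands in for the output of a layout-aware parser and is an assumption for illustration only.

```python
# Illustrative sketch: pack whole structural blocks into chunks
# instead of cutting text at arbitrary character offsets.

def naive_chunks(text, size=40):
    """Fixed-size splitting: may cut tables and sentences mid-stream."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def boundary_chunks(blocks, max_chars=80):
    """Greedily pack whole blocks (paragraph/list/table) into chunks,
    never splitting inside a block."""
    chunks, current = [], ""
    for kind, content in blocks:
        if current and len(current) + len(content) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += content + "\n"
    if current:
        chunks.append(current.strip())
    return chunks

# Toy parsed document: a table that should survive intact in one chunk.
blocks = [
    ("para", "LLMs benefit from coherent context."),
    ("table", "| metric | value |\n| stars | 970 |"),
    ("para", "Tables should stay intact in one chunk."),
]
```

With `naive_chunks` the table row can end up split across two chunks; `boundary_chunks` keeps it whole, at the cost of slightly uneven chunk sizes.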
- Show HN: Open-source Rule-based PDF parser for RAG
I wrote about split points and the need for including section hierarchy in this post: https://ambikasukla.substack.com/p/efficient-rag-with-docume...
All of this is automated in the llmsherpa parser, https://github.com/nlmatics/llmsherpa, which you can use as an API on top of this library.
paperetl
- Show HN: Open-source Rule-based PDF parser for RAG
- Oracle of Zotero: LLM QA of Your Research Library
Nice project!
I've spent quite a lot of time in the medical/scientific literature space. With regard to LLMs, specifically RAG, how the data is chunked is quite important. With that in mind, I have a couple of projects that might be beneficial additions.
paperetl (https://github.com/neuml/paperetl) - supports parsing arXiv and PubMed data and integrates with GROBID to handle parsing metadata and text from arbitrary papers.
paperai (https://github.com/neuml/paperai) - builds embeddings databases of medical/scientific papers. Supports LLM prompting, semantic workflows and vector search. Built with txtai (https://github.com/neuml/txtai).
While arbitrary chunking/splitting can work, I've found that parsing which is aware of medical/scientific paper structure increases the overall accuracy and experience of downstream applications.
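One simple way structure-aware parsing helps downstream retrieval is to carry the section hierarchy along with each chunk. Here is a minimal sketch of that idea, assuming a toy `(section_path, text)` representation; this is not the actual paperetl or llmsherpa API.

```python
# Illustrative: prefix each chunk with its section breadcrumb so the
# embedding/retrieval step sees where in the paper the text came from.

def to_context_text(section_path, text):
    """Join the section breadcrumb with the chunk body."""
    return " > ".join(section_path) + "\n" + text

# Toy chunks from a structure-aware parse of a paper.
chunks = [
    (["Methods", "Data Collection"], "Samples were gathered from ..."),
    (["Results"], "Accuracy metrics are reported in ..."),
]
contextualized = [to_context_text(path, text) for path, text in chunks]
```

Embedding `contextualized` strings instead of the bare text lets a query like "how was data collected" match on the section name as well as the body.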
- [P] Parse research papers into structured data
paperai | paperetl
- Parse research papers into a structured dataset
- ETL for medical and scientific papers
- Show HN: ETL for Medical and Scientific Papers
- Seeking Advice: How to extract Abstract from scientific journals (.pdfs) 10k+
paperai and paperetl are a set of projects to consider for this task.
- paperetl: ETL processes for medical and scientific papers
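For a task like pulling abstracts out of 10k+ papers, the batch shape is simple even though the parsing is not. The sketch below uses a naive regex heuristic on already-extracted plain text purely for illustration; a real pipeline would use paperetl with GROBID's trained models rather than this heuristic.

```python
# Illustrative heuristic only: grab text between an "Abstract" heading
# and the next "Introduction" heading. Structured parsing (GROBID) is
# far more robust on real PDFs.

import re

def extract_abstract(text):
    """Return the abstract body, or None if no heading pair is found."""
    match = re.search(
        r"Abstract\s*\n(.*?)\n(?:1\.?\s+)?Introduction",
        text, re.DOTALL | re.IGNORECASE,
    )
    return match.group(1).strip() if match else None

# Toy corpus standing in for text extracted from thousands of PDFs.
papers = {
    "paper1.txt": "Title\nAbstract\nWe study chunking for RAG.\nIntroduction\n...",
    "paper2.txt": "No abstract heading here.",
}
abstracts = {name: extract_abstract(body) for name, body in papers.items()}
```

Returning `None` for misses keeps the batch loop running and leaves a record of which files need a second pass with a heavier parser.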
What are some alternatives?
unstructured - Open source libraries and APIs to build custom preprocessing pipelines for labeling, training, or production machine learning pipelines.
SciencePlots - Matplotlib styles for scientific plotting
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
tika-python - Tika-Python is a Python binding to the Apache Tika™ REST services allowing Tika to be called natively in the Python community.
llama_parse - Parse files for optimal RAG
ciscoconfparse - Parse, Audit, Query, Build, and Modify Cisco IOS-style configurations.
Parsr - Transforms PDF, Documents and Images into Enriched Structured Data
paperai - 📄 🤖 Semantic search and workflows for medical/scientific papers
marker - Convert PDF to markdown quickly with high accuracy
rdm - Our regulatory documentation manager. Streamlines 62304, 14971, and 510(k) documentation for software projects.
nlm-ingestor - This repo provides the server side code for llmsherpa API to connect. It includes parsers for various file formats.
dagster - An orchestration platform for the development, production, and observation of data assets.