jsonlines vs mwparserfromhell

|  | jsonlines | mwparserfromhell |
|---|---|---|
| Mentions | 6 | 5 |
| Stars | 119 | 710 |
| Growth | - | - |
| Activity | 3.9 | 6.6 |
| Latest commit | 19 days ago | 28 days ago |
| Language | CSS | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
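The exact formula behind the activity number isn't published; the sketch below only illustrates the stated idea that recent commits carry more weight than older ones, using an assumed exponential decay for the sake of the example.

```python
def activity_score(commit_ages_in_days, half_life_days=30.0):
    """Toy recency-weighted commit count: a commit loses half its weight
    every `half_life_days` days (assumed decay, for illustration only)."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_in_days)

# A project with mostly recent commits scores higher than one with only old commits.
print(activity_score([1, 2, 3, 5]))      # recent activity -> score close to 4
print(activity_score([200, 240, 300]))   # stale project   -> score near 0
```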
jsonlines

- FLaNK AI Weekly for 29 April 2024
  JSON Lines (JSONL) https://jsonlines.org/
- Show HN: ZSV (Zip Separated Values)
- Domain ndjson.org expired and it's hosting malware now
- JSON dans les projets data science : Trucs & Astuces (JSON in data science projects: tips & tricks)
  This can be remedied by using the [JSON Lines](https://jsonlines.org/) format: nothing more and nothing less than placing one JSON object per line, so that you can browse the objects without having to parse the entire collection at once. (A minimal read/write sketch follows after this list.)
- Documentation for the JSON Lines text file format
  > MIME type may be application/jsonl, but this is not yet standardized; any help writing the RFC would be greatly appreciated (see issue [0]).
  [0] https://github.com/wardi/jsonlines/issues/19
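To make the one-object-per-line idea concrete, here is a minimal sketch of writing and reading a JSON Lines file using only Python's standard json module; the data.jsonl filename and the records are placeholders.

```python
import json

records = [
    {"id": 1, "name": "Ada"},
    {"id": 2, "name": "Grace"},
]

# Write: one JSON object per line, no enclosing array.
with open("data.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Read: each line parses on its own, so the file can be streamed
# without loading the whole collection into memory.
with open("data.jsonl", encoding="utf-8") as f:
    for line in f:
        print(json.loads(line))
```

Because each line stands alone, the file can be processed as a stream and appended to safely, which is what makes the format convenient for logs and data-science pipelines.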
mwparserfromhell

- FLaNK AI Weekly for 29 April 2024
- Processing Wikipedia Dumps With Python
  There's also https://github.com/earwig/mwparserfromhell, if you don't want to roll your own.
- [Python] How can I clean up Wikipedia's XML backup dump to create dictionaries of commonly used words for multiple languages?
  In particular, what you're looking at is not XML but wikitext. I found a discussion on Stack Overflow about solving the same problem of getting text out of wikitext. Since you already have the dump, the most promising solution in Python seems to be to run each page through mwparserfromhell. Following the top Stack Overflow answer, you could use something like the sketch shown after this list.
- How can I clean up Wikipedia's XML backup dump to create dictionaries of commonly used words for multiple languages?
  Thank you so much! I was actually talking about the markup language within the text. Turns out it's proprietary to WikiMedia, and user lowerthansound kindly suggested I use this: https://github.com/earwig/mwparserfromhell
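Below is a minimal sketch of that approach, based on mwparserfromhell's public API (parse, strip_code, filter_templates, filter_wikilinks); the sample wikitext string is invented for illustration, and the original Stack Overflow snippet is not reproduced here.

```python
import mwparserfromhell

# A made-up snippet of wikitext standing in for one page pulled from the dump.
raw_wikitext = """
'''Example''' is an [[article]] with a {{citation needed}} template
and an external link [https://example.org example site].
"""

wikicode = mwparserfromhell.parse(raw_wikitext)

# strip_code() removes templates, link markup, and formatting,
# leaving roughly the displayed text - useful for word-frequency counts.
plain_text = wikicode.strip_code()
print(plain_text)

# The parsed tree can also be queried directly, e.g. for templates or links.
print([str(t.name) for t in wikicode.filter_templates()])
print([str(l.title) for l in wikicode.filter_wikilinks()])
```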
What are some alternatives?
zsvutil - Utility for converting JSON to/from zip-separated values (ZSV)
wikitextparser - A Python library to parse MediaWiki WikiText
ndjson-spec - Specification for NDJSON (newline-delimited JSON)
archwiki - MediaWiki used on Arch Linux websites (read-only mirror)
WiktionaryParser - A Python Wiktionary Parser
wikiteam - Tools for downloading and preserving wikis. We archive wikis, from Wikipedia to the tiniest wikis. As of 2023, WikiTeam has preserved more than 350,000 wikis.
pywikibot - A Python library that interfaces with the MediaWiki API. This is a mirror from gerrit.wikimedia.org. Do not submit any patches here. See https://www.mediawiki.org/wiki/Developer_account for contributing.
isbntools - python app/framework for 'all things ISBN' including metadata, descriptions, covers...
wiki_dump - A library that assists in traversing and downloading from Wikimedia Data Dumps and their mirrors.
pastevents - A structured, searchable archive of Wikipedia's "Current Events" portal
wikifunctions - Python functions for retrieving data from the MediaWiki/Wikipedia API
Wiki-scripts - Scripts used on the official Factorio wiki