Parsing a large (~40GB) XML text file in Python


I've got an XML file I want to parse with Python. What is the best way to do this? Reading the entire document into memory would be disastrous; I need to somehow read it one node at a time.

Existing XML solutions I know of:

  • ElementTree
  • minidom

but I'm afraid they aren't quite going to work because of the problem I mentioned. Also, I can't open it in a text editor - any good tips in general for working with giant text files?


First, have you tried ElementTree (either the built-in pure-Python or C versions, or, better, the lxml version)? I'm pretty sure none of them actually reads the whole file into memory.


The problem, of course, is that, whether or not it reads the whole file into memory, the resulting parsed tree ends up in memory.

ElementTree has a nifty solution that's pretty simple, and often sufficient: iterparse.

import xml.etree.ElementTree as ET

for event, elem in ET.iterparse(xmlfile, events=('end',)):  # note the tuple ('end',) - a bare ('end') is just a string


The key here is that you can modify the tree as it's built up (by replacing the contents with a summary containing only what the parent node will need). By throwing out all the stuff you don't need to keep in memory as it comes in, you can stick to parsing things in the usual order without running out of memory.

The linked page gives more details, including some examples for modifying XML-RPC and plist as they're processed. (In those cases, it's to make the resulting object simpler to use, not to save memory, but they should be enough to get the idea across.)

This only helps if you can think of a way to summarize as you go. (In the most trivial case, where the parent doesn't need any info from its children, this is just elem.clear().) Otherwise, this won't work for you.
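In that trivial case, the pattern can be sketched as below. The <record> tag name and the in-memory sample data are hypothetical stand-ins; with a real 40GB file you would pass a filename or open file object to iterparse:

```python
import io
import xml.etree.ElementTree as ET

# Hypothetical sample data; a real 40GB file would be streamed from disk
# by passing its filename directly to iterparse.
xml_data = b"<root><record id='1'>a</record><record id='2'>b</record></root>"

count = 0
for event, elem in ET.iterparse(io.BytesIO(xml_data), events=('end',)):
    if elem.tag == 'record':
        count += 1
        elem.clear()  # drop this element's children and text so memory stays flat

print(count)  # prints 2
```

Because elements are cleared as soon as their end tag is seen, memory use stays roughly constant regardless of file size.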

The standard solution is SAX, which is a callback-based API that lets you process the document a node at a time. You don't need to worry about truncating nodes as you do with iterparse, because the nodes don't exist after you've parsed them.

Most of the best SAX examples out there are for Java or Javascript, but they're not too hard to figure out. For example, if you look at http://cs.au.dk/~amoeller/XML/programming/saxexample.html you should be able to figure out how to write it in Python (as long as you know where to find the documentation for xml.sax).

There are also some DOM-based libraries that work without reading everything into memory, but there aren't any that I know of that I'd trust to handle a 40GB file with reasonable efficiency.



