Python - How to read a 6 GB CSV file with pandas

I am trying to read a large CSV file (about 6 GB) in pandas, and I am running into the following memory error:

MemoryError                               Traceback (most recent call last)
<ipython-input-58-67a72687871b> in <module>()
----> 1 data=pd.read_csv('aphro.csv',sep=';')

C:\Python27\lib\site-packages\pandas\io\parsers.pyc in parser_f(filepath_or_buffer, sep, dialect, compression, doublequote, escapechar, quotechar, quoting, skipinitialspace, lineterminator, header, index_col, names, prefix, skiprows, skipfooter, skip_footer, na_values, na_fvalues, true_values, false_values, delimiter, converters, dtype, usecols, engine, delim_whitespace, as_recarray, na_filter, compact_ints, use_unsigned, low_memory, buffer_lines, warn_bad_lines, error_bad_lines, keep_default_na, thousands, comment, decimal, parse_dates, keep_date_col, dayfirst, date_parser, memory_map, nrows, iterator, chunksize, verbose, encoding, squeeze, mangle_dupe_cols, tupleize_cols, infer_datetime_format)
    450 infer_datetime_format=infer_datetime_format)
    451
--> 452 return _read(filepath_or_buffer, kwds)
    453
    454 parser_f.__name__ = name

C:\Python27\lib\site-packages\pandas\io\parsers.pyc in _read(filepath_or_buffer, kwds)
    242 return parser
    243
--> 244 return parser.read()
    245
    246 _parser_defaults = {

C:\Python27\lib\site-packages\pandas\io\parsers.pyc in read(self, nrows)
    693 raise ValueError('skip_footer not supported for iteration')
    694
--> 695 ret = self._engine.read(nrows)
    696
    697 if self.options.get('as_recarray'):

C:\Python27\lib\site-packages\pandas\io\parsers.pyc in read(self, nrows)
   1137
   1138 try:
-> 1139 data = self._reader.read(nrows)
   1140 except StopIteration:
   1141 if nrows is None:

C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser.TextReader.read (pandas\parser.c:7145)()
C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser.TextReader._read_low_memory (pandas\parser.c:7369)()
C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser.TextReader._read_rows (pandas\parser.c:8194)()
C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser.TextReader._convert_column_data (pandas\parser.c:9402)()
C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser.TextReader._convert_tokens (pandas\parser.c:10057)()
C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser.TextReader._convert_with_dtype (pandas\parser.c:10361)()
C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser._try_int64 (pandas\parser.c:17806)()

MemoryError:

Any help?

Answer:

The error shows that the machine does not have enough memory to read the entire CSV into a DataFrame at one time. Assuming you do not need the whole dataset in memory all at once, one way to avoid the problem is to process the CSV in chunks (by specifying the chunksize parameter):

chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
    process(chunk)  # process() stands in for whatever work you do on each chunk

The chunksize parameter specifies the number of rows per chunk. (The last chunk may, of course, contain fewer than chunksize rows.)
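If the pieces eventually need to be combined, a common follow-up is to reduce each chunk (filter or aggregate it) and concatenate only the reduced results, so the full 6 GB file never sits in memory at once. The sketch below reuses the file name and separator from the traceback above; the column name 'value' and the filter condition are hypothetical placeholders, not part of the original question:

import pandas as pd

chunksize = 10 ** 6
filtered_parts = []

# Stream the file so that only about chunksize rows are held in memory at a time.
for chunk in pd.read_csv('aphro.csv', sep=';', chunksize=chunksize):
    # Keep only the rows of interest from each chunk ('value' is a hypothetical column).
    filtered_parts.append(chunk[chunk['value'] > 0])

# Concatenate the much smaller filtered pieces into a single DataFrame.
result = pd.concat(filtered_parts, ignore_index=True)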
