Discussion:
[pytables-users] Speed of _g_get_objinfo
Giovanni Luca Ciampaglia
2014-09-15 15:30:15 UTC
Hi all, I have a table with 60 billion rows and a CSI index on it.
Individual indexed reads (i.e. read_where) are typically super fast
(amazing job, guys), but when I run a script that does a bunch of them (in
the example below, just 466), the runtime goes through the roof. It
seems like most of the time is actually spent reading group information
(see the profiler trace below). My code is organized so that the function
that does an individual indexed read takes an instance of
tables.file.File, gets the table instance (/pageview in this case), and
then calls its read_where method. Perhaps I should instead get a handle on
the table instance once and for all and pass that around, instead of the
file instance (a sketch of that idea follows). Any ideas?
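
A minimal sketch of that second option (the file name, the ids, and the
extract_one helper are made up for illustration):

import tables

def extract_one(table, page_id):
    # Indexed read: the CSI index on 'id' drives the row selection.
    return table.read_where('id == value', condvars={'value': page_id})

with tables.open_file('pageviews.h5', mode='r') as h5:  # hypothetical file
    pageview = h5.root.pageview      # resolve the node handle once...
    for page_id in (42, 314, 2718):  # made-up ids
        rows = extract_one(pageview, page_id)  # ...and reuse it per read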

Thanks!

Giovanni

/pageview (Table(60203733729,), shuffle, blosc(5)) ''
description := {
"id": Int64Col(shape=(), dflt=0, pos=0),
"timestamp": Int64Col(shape=(), dflt=0, pos=1),
"count": Int64Col(shape=(), dflt=0, pos=2)}
byteorder := 'little'
chunkshape := (174762,)
autoindex := True
colindexes := {
"id": Index(9, full, shuffle, blosc(5)).is_csi=True}
/pageview._v_attrs (AttributeSet), 10 attributes:
[CLASS := 'TABLE',
FIELD_0_FILL := 0,
FIELD_0_NAME := 'id',
FIELD_1_FILL := 0,
FIELD_1_NAME := 'timestamp',
FIELD_2_FILL := 0,
FIELD_2_NAME := 'count',
NROWS := 60203733729,
TITLE := '',
VERSION := '2.7']


2053378 function calls (1977969 primitive calls) in 542.064 seconds

Ordered by: internal time

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
2435  303.620  0.125  303.620  0.125  {method '_g_get_objinfo' of 'tables.hdf5extension.Group' objects}
638  40.425  0.063  226.299  0.355  index.py:2016(get_chunkmap)
538  38.388  0.071  428.361  0.796  table.py:1552(read_where)
1  34.320  34.320  537.231  537.231  extractseries.py:295(main)
42  32.146  0.765  32.146  0.765  {method '_read_elements' of 'tables.tableextension.Table' objects}
3436  22.413  0.007  22.413  0.007  {method '_read_index_slice' of 'tables.indexesextension.IndexArray' objects}
6854  16.104  0.002  16.104  0.002  {method 'astype' of 'numpy.ndarray' objects}
638  13.425  0.021  151.245  0.237  index.py:1831(search)
392  6.147  0.016  6.152  0.016  {method '_search_bin_na_ll' of 'tables.indexesextension.IndexArray' objects}
9411  4.429  0.000  4.429  0.000  {numpy.core.multiarray.empty}
869  4.133  0.005  4.133  0.005  {method 'nonzero' of 'numpy.ndarray' objects}
479  4.041  0.008  4.041  0.008  {method '_read_records' of 'tables.tableextension.Table' objects}
39368  3.788  0.000  3.789  0.000  {numpy.core.multiarray.array}
466  3.021  0.006  384.721  0.826  extractseries.py:112(extractone)
1930  1.388  0.001  1.388  0.001  {numpy.core.multiarray.zeros}
4890  0.958  0.000  1.082  0.000  conditions.py:437(call_on_recarr)
538  0.389  0.001  0.432  0.001  necompiler.py:662(evaluate)
15733  0.288  0.000  0.288  0.000  {method 'reduce' of 'numpy.ufunc' objects}
210012/210011  0.250  0.000  0.311  0.000  {isinstance}
60  0.232  0.004  3.172  0.053  __init__.py:1(<module>)
2  0.223  0.112  0.223  0.112  {method '_close_file' of 'tables.hdf5extension.File' objects}
6  0.182  0.030  1.887  0.315  __init__.py:3(<module>)
45  0.172  0.004  0.172  0.004  {method '_g_read_slice' of 'tables.hdf5extension.Array' objects}
16079  0.140  0.000  0.231  0.000  file.py:382(register_node)
202  0.138  0.001  0.138  0.001  {method '_read_index_slice' of 'tables.indexesextension.LastRowArray' objects}
423  0.134  0.000  0.134  0.000  {pandas.tslib.array_to_datetime}
538  0.134  0.000  381.098  0.708  table.py:1512(_where)
4160  0.121  0.000  0.121  0.000  {numpy.core.multiarray.arange}
6  0.118  0.020  2.483  0.414  api.py:1(<module>)
133968/129588  0.108  0.000  0.122  0.000  {len}
8702  0.100  0.000  0.100  0.000  {tables.utilsextension.get_nested_field}
67207/67041  0.098  0.000  3.917  0.000  {getattr}
16043/15751  0.097  0.000  2.508  0.000  file.py:408(get_node)
1121  0.095  0.000  0.095  0.000  {compile}
16077  0.091  0.000  0.321  0.000  file.py:395(cache_node)
26  0.085  0.003  0.085  0.003  {posix.listdir}
230  0.080  0.000  0.232  0.001  doccer.py:12(docformat)
1  0.078  0.078  0.822  0.822  __init__.py:20(<module>)
Giovanni Luca Ciampaglia
2014-09-19 01:35:56 UTC
Bumping this: I dug a bit deeper into the profiler trace and found that
_g_get_objinfo is called by _g_check_has_child (group.py), which is called
by _g_get_child (group.py), which is called by Group's __getattr__. That
getattr is called by the search (index.py) which (to make a long story
short) is called by read_where, which is how I read my data. So what I had
thought was related to getting the Table instance is actually triggered by
the reading method itself.
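
For reference, this is roughly how I walked up the call chain from the
cProfile dump (the dump file name below is a placeholder):

import pstats

# Load the saved profile and list the callers of the hot method.
stats = pstats.Stats('extractseries.prof')  # placeholder dump file name
stats.sort_stats('tottime')
stats.print_callers('_g_get_objinfo')       # who calls Group._g_get_objinfo?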

I am just wondering what kind of data _g_get_objinfo is actually fetching
from disk, because this overhead makes a large table (like the one here)
utterly impractical to use. The irony is that I previously had a (rather
complicated) setup in which my data were split across multiple files, and
each file had a very flat hierarchy of small tables (we are talking about
100,000 tables per file, each table holding the views of a different
Wikipedia article), and fetching data was super fast! But I find using a
single table with a CSI index much more intuitive and easier to code. Am I
missing something here?

Any thoughts are appreciated!

Cheers

Giovanni

On Monday, September 15, 2014 11:30:15 AM UTC-4, Giovanni Luca Ciampaglia wrote:
[original message and profiler trace quoted in full; snipped]
Francesc Alted
2014-09-19 12:07:50 UTC
Post by Giovanni Luca Ciampaglia
Bumping this: I dug a bit deeper into the profiler trace and found
that _g_get_objinfo is called by _g_check_has_child (group.py), which
is called by _g_get_child (group.py), which is called by Group's
__getattr__. That getattr is called by the search (index.py) which (to
make a long story short) is called by read_where, which is how I read
my data. So what I had thought was related to getting the Table instance
is actually triggered by the reading method itself.
I don't remember the details well, but the problem here is likely that
the index needs to be accessed many times during the query. This is an
implementation detail, but even with the different caches inside
PyTables, this can sometimes be quite a bit of work.
Post by Giovanni Luca Ciampaglia
I am just wondering what kind of data _g_get_objinfo is actually
fetching from disk, because this overhead makes a large table (like the
one here) utterly impractical to use. The irony is that I previously
had a (rather complicated) setup in which my data were split across
multiple files, and each file had a very flat hierarchy of small tables
(we are talking about 100,000 tables per file, each table holding the
views of a different Wikipedia article), and fetching data was super
fast! But I find using a single table with a CSI index much more
intuitive and easier to code. Am I missing something here?
This is not usual, but I can see how using an index can sometimes
actually slow down your queries (especially if you did some previous
optimization work on your dataset). I would suggest playing with other
index kinds and optlevels than just plain CSI. See:

http://pytables.github.io/usersguide/optimization.html#indexed-searches
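
For example, a rough sketch of rebuilding the index with a lighter kind
(the file name is hypothetical; pick the kind/optlevel combination per
the guide):

import tables

with tables.open_file('pageviews.h5', mode='a') as h5:
    col = h5.root.pageview.cols.id
    col.remove_index()                           # drop the existing CSI index
    col.create_index(kind='medium', optlevel=6)  # try a lighter index kind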

Also, you may find it useful to play with the LRU cache for nodes and
fine-tune it for your case:

http://pytables.github.io/usersguide/optimization.html#getting-the-most-from-the-node-lru-cache
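
Something along these lines; PyTables parameters (see
tables/parameters.py) can be overridden as keyword arguments to
open_file (the file name and slot count are just examples to experiment
with):

import tables

# A larger node LRU cache keeps more group/leaf metadata alive between
# queries instead of re-reading it from disk.
h5 = tables.open_file('pageviews.h5', mode='r', NODE_CACHE_SLOTS=4096)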
Post by Giovanni Luca Ciampaglia
Any thoughts are appreciated!
HTH,
Francesc
--
Francesc Alted
Giovanni Luca Ciampaglia
2014-09-22 14:56:52 UTC
Hi Francesc, thanks for your input. I tried an ultralight index with
complevel 3. While that saved almost 60 GB of index size, it did not
improve the situation; disabling compression on the index did not change
things either. So perhaps I should play with the LRU cache size. Is there
a quick way to compute the in-memory size of a node?

Giovanni


Giovanni Luca Ciampaglia

✎ 919 E 10th ∙ Bloomington 47408 IN ∙ USA
☞ http://www.glciampaglia.com/
✆ +1 812 855-7261
[earlier messages quoted in full; snipped]