Antonio Valentino
2014-01-18 10:14:04 UTC
Dear PyTables users,
On behalf of the PyTables development team I'm happy to announce the
availability of PyTables v3.1.0rc1.
Some of the core internal components of PyTables have been rewritten for
this release. This should ensure better code organization and better
compatibility with future Python versions.
We strongly encourage all users to test this RC1 version in their
applications and report any issues or performance regressions.
User feedback is fundamental to improving the quality of the final
release.
And now, the official announcement:
==============================
Announcing PyTables 3.1.0rc1
==============================
We are happy to announce PyTables 3.1.0rc1.
This is a feature release.
What's new
==========
Probably the most relevant changes in this release are internal
improvements: the node cache is now compatible with the upcoming
Python 3.4, and the registry for open files has been deeply reworked.
The caching of file handlers has been completely dropped, so PyTables
is now a little more "thread friendly".
New, user-visible features include (see the usage sketch after the list):
- a new lossy filter for HDF5 datasets (EArray, CArray, VLArray and
Table objects). The *quantization* filter truncates floating point
data to a specified precision before writing to disk.
This can significantly improve the performance of compressors
(many thanks to Andreas Hilboll).
- support for the H5FD_SPLIT HDF5 driver (thanks to simleo)
- all the new features introduced in the Blosc_ 1.3.x series, in
  particular the ability to leverage different compressors within
  Blosc_, are now available in PyTables via the blosc filter (a big
  thank you to Francesc)
- the ability to save/restore the default value of :class:`EnumAtom`
types
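Below is a minimal, hedged sketch of how these features can be used
(file and node names are illustrative, not part of the release)::

    import numpy as np
    import tables as tb

    # Quantization (lossy!): keep roughly 3 digits of precision before
    # compressing; truncated data typically compresses much better.
    # complib='blosc:lz4' selects the LZ4 codec inside Blosc.
    filters = tb.Filters(complevel=5, complib='blosc:lz4',
                         least_significant_digit=3)

    with tb.open_file('quantized.h5', mode='w') as f:
        f.create_carray('/', 'data', obj=np.random.random((1000, 1000)),
                        filters=filters)

    # The H5FD_SPLIT driver stores metadata and raw data in two
    # separate files (by default 'split-m.h5' and 'split-r.h5').
    with tb.open_file('split', mode='w', driver='H5FD_SPLIT') as f:
        f.create_array('/', 'x', np.arange(10))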
Also, installations of the HDF5 library that have broken support for
the *long double* data type (see the `Issues with H5T_NATIVE_LDOUBLE`_
thread on the HDF5 forum) are detected by PyTables 3.1.0rc1, and the
corresponding features are automatically disabled.
Users who need support for the *long double* data type should make sure
to build PyTables against an installation of the HDF5 library that is
not affected by the bug.
.. _`Issues with H5T_NATIVE_LDOUBLE`:
http://hdf-forum.184993.n3.nabble.com/Issues-with-H5T-NATIVE-LDOUBLE-tt4026450.html
As always, a large number of bugs have been addressed and squashed as well.
For a detailed account of what has changed in this version, please
refer to: http://pytables.github.io/release_notes.html
You can download a source package with generated PDF and HTML docs, as
well as binaries for Windows, from:
http://sourceforge.net/projects/pytables/files/pytables/3.1.0rc1
For an online version of the manual, visit:
http://pytables.github.io/usersguide/index.html
What is it?
===========
PyTables is a library for managing hierarchical datasets, designed to
efficiently cope with extremely large amounts of data, with support for
full 64-bit file addressing. PyTables runs on top of the HDF5 library
and the NumPy package to achieve maximum throughput and convenient use.
PyTables includes OPSI, a new indexing technology that allows
performing data lookups in tables exceeding 10 gigarows (10**10 rows)
in less than a tenth of a second.
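For readers new to the library, here is a minimal sketch of the kind of
workflow described above (file, node and column names are
illustrative)::

    import tables as tb

    class Particle(tb.IsDescription):
        name = tb.StringCol(16)     # fixed-width 16-byte string
        energy = tb.Float64Col()

    with tb.open_file('demo.h5', mode='w') as f:
        table = f.create_table('/', 'particles', Particle)
        row = table.row
        for i in range(100000):
            row['name'] = 'p%06d' % i
            row['energy'] = float(i)
            row.append()
        table.flush()

        # Create an OPSI index on the queried column; in-kernel
        # queries like where() then scale to very large tables.
        table.cols.energy.create_index()
        hits = [r['name'] for r in table.where('energy > 99990.0')]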
Resources
=========
About PyTables: http://www.pytables.org
About the HDF5 library: http://hdfgroup.org/HDF5/
About NumPy: http://numpy.scipy.org/
Acknowledgments
===============
Thanks to the many users who provided feature improvements, patches,
bug reports, support and suggestions. See the ``THANKS`` file in the
distribution package for an (incomplete) list of contributors. Most
especially, a lot of kudos go to the HDF5 and NumPy makers.
Without them, PyTables simply would not exist.
Share your experience
=====================
Let us know of any bugs, suggestions, gripes, kudos, etc. you may have.
----
**Enjoy data!**
--
The PyTables Developers