Antonio Valentino
2014-01-22 10:43:08 UTC
Dear PyTables users,
the PyTables development team is happy to announce the availability of
PyTables v3.1.0rc2.
Thanks to user feedback we were able to address some issues that were
still present in RC1, so we decided to release a new RC that should
hopefully be the last one before the final release.
Changes from 3.1.0rc1 to 3.1.0rc2
=================================
- HDF5 versions lower than 1.8.7 are not fully compatible with PyTables
3.1. Partial support for HDF5 < 1.8.7 is still provided, but in that
case multiple file opens are not allowed at all (not even in read-only
mode).
- Fixed selection on float columns when NaNs are present (closes
:issue:`327` and :issue:`330`)
- The :meth:`_FileRegistry.remove` method now correctly removes keys
that don't have associated handles
- Minor style and formatting improvements
- Close the file handle before trying to delete the corresponding file.
Fixes a test failure on Windows.
- Use integer division for computing indices (fixes some warnings on
Windows)
- Fixed some warnings related to non-Unicode file names (the Windows
bytes API has been deprecated in Python 3.4)
- Better documentation for the new file handle management system and
its backward-incompatible behaviours.
Some clarifications have also been added to the description of the
:attr:`File.open_count` property.
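To illustrate the reworked behaviour described in the last item (a minimal sketch; the file name is made up, and the semantics shown are inferred from the notes above, not from a reference implementation):

```python
import numpy as np
import tables

# Create a small demo file (the file name is illustrative).
with tables.open_file('demo.h5', 'w') as h5:
    h5.create_array('/', 'x', np.arange(10))

# With the reworked registry, each open_file() call returns its own
# File object; handles are no longer cached and shared between calls.
f1 = tables.open_file('demo.h5', 'r')
f2 = tables.open_file('demo.h5', 'r')
assert f1 is not f2  # two independent handles to the same file
f1.close()
f2.close()
```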
Again, we encourage all users to test this RC2 in their applications
and report any issues, to help make the final PyTables 3.1 release
even better.
The official announcement:
==============================
Announcing PyTables 3.1.0rc2
==============================
We are happy to announce PyTables 3.1.0rc2.
This is a feature release.
What's new
==========
Probably the most relevant changes in this release are internal
improvements: the node cache is now compatible with the upcoming
Python 3.4, and the registry of open files has been deeply reworked.
The caching of file handles has been completely dropped, so PyTables
is now a little more "thread friendly".
New, user-visible features include:
- a new lossy filter for HDF5 datasets (EArray, CArray, VLArray and
Table objects). The *quantization* filter truncates floating point
data to a specified precision before writing to disk.
This can significantly improve the performance of compressors
(many thanks to Andreas Hilboll).
- support for the H5FD_SPLIT HDF5 driver (thanks to simleo)
- all new features introduced in the Blosc_ 1.3.x series, and in
particular the ability to leverage different compressors within
Blosc_ are now available in PyTables via the blosc filter (a big
thank you to Francesc)
- the ability to save/restore the default value of :class:`EnumAtom`
types
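A minimal sketch of how the first and third features above can be combined (the file name, precision and compression level are illustrative choices, not recommendations):

```python
import numpy as np
import tables

# Quantize floats to about 3 decimal digits of precision before they
# are written, and select LZ4 as the compressor inside Blosc
# (choosing among Blosc's compressors is new with Blosc 1.3.x).
filters = tables.Filters(complevel=5, complib='blosc:lz4',
                         least_significant_digit=3)

data = np.random.random(10000)
with tables.open_file('quantized.h5', 'w') as h5:
    h5.create_carray('/', 'quantized', obj=data, filters=filters)
```

Since the quantization filter is lossy, the data read back agrees with the original only up to the requested precision, in exchange for better compression ratios.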
Also, installations of the HDF5 library that have broken support for
the *long double* data type (see the `Issues with H5T_NATIVE_LDOUBLE`_
thread on the HDF5 forum) are detected by PyTables 3.1.0, and the
corresponding features are automatically disabled.
Users who need support for the *long double* data type should make
sure to build PyTables against an installation of the HDF5 library
that is not affected by the bug.
.. _`Issues with H5T_NATIVE_LDOUBLE`:
http://hdf-forum.184993.n3.nabble.com/Issues-with-H5T-NATIVE-LDOUBLE-tt4026450.html
As always, a large number of bugs have been addressed and squashed as well.
If you want to know in more detail what has changed in this
version, please refer to: http://pytables.github.io/release_notes.html
You can download a source package with generated PDF and HTML docs, as
well as binaries for Windows, from:
http://sourceforge.net/projects/pytables/files/pytables/3.1.0
For an online version of the manual, visit:
http://pytables.github.io/usersguide/index.html
What is it?
===========
PyTables is a library for managing hierarchical datasets, designed to
efficiently cope with extremely large amounts of data, with support
for full 64-bit file addressing. PyTables runs on top of the HDF5
library and the NumPy package to achieve maximum throughput and
convenient use. PyTables includes OPSI, a new indexing technology
that allows performing data lookups in tables exceeding 10 gigarows
(10**10 rows) in less than a tenth of a second.
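The indexed lookups mentioned above can be exercised roughly like this (the table layout, column name and file name are made up for the sketch; a real dataset would of course be far larger):

```python
import numpy as np
import tables

# A toy table with a single float column (layout is made up).
class Reading(tables.IsDescription):
    value = tables.Float64Col()

with tables.open_file('indexed.h5', 'w') as h5:
    table = h5.create_table('/', 'readings', Reading)
    rows = [(v,) for v in np.random.random(100000)]
    table.append(rows)
    table.flush()
    # Build an OPSI index on the column; where() queries on it can
    # then use the index instead of scanning the whole table.
    table.cols.value.create_index()
    hits = [r['value'] for r in table.where('value > 0.999')]
```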
Resources
=========
About PyTables: http://www.pytables.org
About the HDF5 library: http://hdfgroup.org/HDF5/
About NumPy: http://numpy.scipy.org/
Acknowledgments
===============
Thanks to the many users who provided feature improvements, patches,
bug reports, support and suggestions. See the ``THANKS`` file in the
distribution package for an (incomplete) list of contributors. Most
especially, a lot of kudos go to the HDF5 and NumPy makers.
Without them, PyTables simply would not exist.
Share your experience
=====================
Let us know of any bugs, suggestions, gripes, kudos, etc. you may have.
----
**Enjoy data!**
--
The PyTables Developers