Ken Walker
2018-10-26 21:44:23 UTC
In the process of writing a utility to reorganize HDF5 data into separate
files, I've tripped into an issue with compound data types.
This is what I am told by the developers of two applications that I deal
with (one upstream, which creates my data; the other downstream, which
consumes it): they expect compound data types to be packed. When I use
PyTables to copy, some datasets come out unpacked, which results in an
alignment error and unexpected "random data" that plays havoc when the
downstream application reads the file.
I'm told this can be addressed with H5Tpack(). Is that HDF5 function
exposed through the PyTables API? Or is there another way to accomplish
this task?
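For what it's worth, here is a minimal sketch of one place padding can sneak in, assuming the unpacked layout originates from an aligned NumPy dtype rather than from the HDF5 layer itself (that assumption may not match your case). A compound dtype built with align=True inserts padding bytes between fields, while the default align=False gives the packed layout:

```python
import numpy as np

# Packed compound dtype (NumPy's default): fields are laid out
# back-to-back with no padding, so itemsize = 1 + 8 = 9 bytes.
packed = np.dtype([("a", np.int8), ("b", np.float64)])

# Aligned compound dtype: the float64 field is padded out to an
# 8-byte boundary, so itemsize grows to 16 bytes.
aligned = np.dtype([("a", np.int8), ("b", np.float64)], align=True)

print(packed.itemsize)   # 9
print(aligned.itemsize)  # 16
```

If the source table's dtype reports a larger itemsize than the sum of its fields, that extra padding would explain the "random data" the downstream reader sees.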
Thanks.
--
You received this message because you are subscribed to the Google Groups "pytables-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pytables-users+***@googlegroups.com.
To post to this group, send an email to pytables-***@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.