Sep 6, 2017

Parquet.Net v1.2 Released

new features:

- INT64 (C# long) type is supported (#194); see the first sketch below this list.

- Decimal data type is fully supported (#209). This includes support for plain System.Decimal as well as decimal types with different scales and precisions. Decimals are encoded using all three encodings from the Parquet specification, although this can be switched off for compatibility with older systems. Decimals are fully compatible with Hive and Impala, which have some edge cases that do not comply with the Parquet specification. Thanks to @dmitryPavliv and @nzapolski for making this possible. A second sketch below this list shows decimals in use.
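
A minimal sketch of the new INT64 support, assuming the v1.x DataSet/SchemaElement API from the project README (file and column names are illustrative):

```csharp
using System.IO;
using Parquet;
using Parquet.Data;

// C# long maps to the Parquet INT64 physical type.
var ds = new DataSet(
    new SchemaElement<long>("id"),
    new SchemaElement<string>("city"));

ds.Add(1L, "London");
ds.Add(2L, "Derby");

using (Stream fileStream = File.Create("longs.parquet"))
using (var writer = new ParquetWriter(fileStream))
{
    writer.Write(ds);
}
```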
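
And a sketch of the decimal support. The plain SchemaElement&lt;decimal&gt; form follows the same README pattern; the DecimalSchemaElement shape used to pick a precision and scale is our assumption about the contemporary API, so treat it as illustrative:

```csharp
using System.IO;
using Parquet;
using Parquet.Data;

// Plain System.Decimal column with the default precision/scale.
// DecimalSchemaElement(name, precision, scale) is assumed here as the way
// to control precision/scale (and hence which of the three spec encodings
// -- INT32, INT64 or FIXED_LEN_BYTE_ARRAY -- gets used); check the docs.
var ds = new DataSet(
    new SchemaElement<decimal>("price"),
    new DecimalSchemaElement("discount", 38, 18));

ds.Add(9.99m, 0.125m);

using (Stream fileStream = File.Create("decimals.parquet"))
using (var writer = new ParquetWriter(fileStream))
{
    writer.Write(ds);
}
```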

 

bugs fixed:

- fixed a flaw in the dictionary encoding implementation that affected files written for Apache Impala (#193)

- Parquet.Net crashed when a column contained only a single value and that value was null (#198); the sketch below this list shows the case that is now handled.
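
A sketch of the formerly crashing case, under the same v1.x API assumptions as above (nullable columns declared with a nullable CLR type):

```csharp
using System.IO;
using Parquet;
using Parquet.Data;

// A nullable column whose only value is null -- the case fixed by #198.
var ds = new DataSet(new SchemaElement<int?>("maybe"));
ds.Add(new object[] { null });

using (Stream fileStream = File.Create("single-null.parquet"))
using (var writer = new ParquetWriter(fileStream))
{
    writer.Write(ds); // previously threw; now writes a valid file
}
```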

 

New Posts
  • We've been working hard on Parquet.NET to give developers high-level abstractions over Parquet, so that there is an easy entry point into developing with Parquet that is not onerous for developers new to the format. As such, we've created ParquetConvert, which allows the trivial creation of Parquet files from IEnumerable<T>. Check out this gist I created that shows how to do this. It shows that a simple class can be serialized so easily that for the consuming developer there is next to no code: with a single call to ParquetConvert.Serialize we can save a collection of these elements in Parquet, and inspecting with Parq shows our output (see the first sketch after this list). We'll be continuing to add more features to help developers maximise their productivity with Parquet, as well as retaining low-level features that allow complete control of the Parquet format from dotnet.
  • https://github.com/elastacloud/parquet-dotnet is about to be released in the next few days. Since v3.0 was pushed to the public, it has seen a lot of interest and praise for its incredible performance boost; however, there were problems as well. To reiterate, v3.0 was a complete rewrite of v2.0 and let you get deeper into Parquet internals, especially the API for creating *row groups*, writing columns directly, controlling row group sizes and so on. Although this was a big improvement in the library's core, it made the library harder to use for a general audience, because v2.0 had a handy *row-based interface* for accessing data. Working with rows slows the library down, but you will eventually run into a situation where you need rows anyway, for instance when writing utilities for viewing Parquet data or converting between Parquet and row-based formats like CSV. Therefore, v3.1 *resurrects row-based access* and makes it faster and better. The way you work with rows has changed slightly, but mostly you shouldn't notice any differences at all; they come into play when working with complex data structures like maps, lists, structs and so on (see the second sketch after this list). Preview documentation for this feature is located at https://github.com/elastacloud/parquet-dotnet/blob/features/rows/doc/rows.md so feel free to browse and leave feedback either on this page or raise an issue on GitHub. PARQ: we'd also like to announce that we're introducing a .NET Core Global Tool in this version, called parq. A full description is located at https://github.com/elastacloud/parquet-dotnet/blob/features/rows/doc/parq.md. Essentially it's a hassle-free way to work with Parquet files locally, and the number of supported commands will continue to grow.
  • Did you know that it's possible to extract data from Parquet files in Azure Data Lake Analytics? Well, it is, and the library has just received a couple of updates; check it out over on its GitHub page. First of all, the library has been brought up to the latest version of Parquet .NET (2.1.2 at the time of writing), which brings with it a range of updates; I recommend checking out some of the other posts here to find out what's been going on. The other update is in how the library handles dates. By default Parquet .NET returns DateTimeOffset values when reading dates from Parquet files, which is always the right thing to do, as anyone who has ever had to work with time zones and offsets will tell you. However, U-SQL does not support the DateTimeOffset type for extracting data, so a small change has been made to convert the DateTimeOffset value to a DateTime, allowing the data to be extracted (see the third sketch after this list). So have a look, give it a go, and if you find any problems then please raise an issue on GitHub, or even contribute a fix yourself.
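
The first post above describes ParquetConvert.Serialize; here is a minimal sketch under our assumptions (the Person class and file name are illustrative, not from the original gist):

```csharp
using System.Collections.Generic;
using Parquet;

// An illustrative POCO; the original gist's class is not reproduced here.
class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        var people = new List<Person>
        {
            new Person { Id = 1, Name = "Ada" },
            new Person { Id = 2, Name = "Grace" }
        };

        // One call infers a schema from Person and writes the whole
        // collection out as a Parquet file.
        ParquetConvert.Serialize(people, "people.parquet");
    }
}
```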
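
The second post announces the resurrected row-based access; this sketch is based on our reading of the linked preview documentation, so treat the Table/Row namespaces and signatures as approximate:

```csharp
using System.IO;
using Parquet;
using Parquet.Data;
using Parquet.Data.Rows; // assumed home of Table/Row in the preview

class Program
{
    static void Main()
    {
        // Build a two-column table row by row.
        var table = new Table(
            new DataField<int>("id"),
            new DataField<string>("city"));
        table.Add(new Row(1, "London"));
        table.Add(new Row(2, "Derby"));

        using (Stream stream = File.Create("cities.parquet"))
        using (var writer = new ParquetWriter(table.Schema, stream))
        {
            writer.Write(table); // row-based write on top of the columnar core
        }

        // Read the whole file back as rows.
        using (Stream stream = File.OpenRead("cities.parquet"))
        using (var reader = new ParquetReader(stream))
        {
            Table readBack = reader.ReadAsTable();
        }
    }
}
```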
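
Finally, the third post's date handling comes down to a one-line conversion; the helper name here is ours, and using UtcDateTime (rather than the local-time DateTime property) is one reasonable convention that the real extractor may or may not share:

```csharp
using System;

static class DateHelper
{
    // U-SQL cannot extract DateTimeOffset, so hand back a plain DateTime.
    // UtcDateTime keeps the instant unambiguous across time zones.
    public static DateTime ToExtractableDate(DateTimeOffset value)
    {
        return value.UtcDateTime;
    }
}
```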