# Parquet

## Description

Apache Parquet is a columnar storage format widespread in the Hadoop ecosystem. ClickHouse supports read and write operations for this format.
## Data Types Matching

The table below shows supported data types and how they match ClickHouse data types in `INSERT` and `SELECT` queries.
| Parquet data type (`INSERT`) | ClickHouse data type | Parquet data type (`SELECT`) |
|---|---|---|
| `BOOL` | `Bool` | `BOOL` |
| `UINT8`, `BOOL` | `UInt8` | `UINT8` |
| `INT8` | `Int8`/`Enum8` | `INT8` |
| `UINT16` | `UInt16` | `UINT16` |
| `INT16` | `Int16`/`Enum16` | `INT16` |
| `UINT32` | `UInt32` | `UINT32` |
| `INT32` | `Int32` | `INT32` |
| `UINT64` | `UInt64` | `UINT64` |
| `INT64` | `Int64` | `INT64` |
| `FLOAT` | `Float32` | `FLOAT` |
| `DOUBLE` | `Float64` | `DOUBLE` |
| `DATE` | `Date32` | `DATE` |
| `TIME (ms)` | `DateTime` | `UINT32` |
| `TIMESTAMP`, `TIME (us, ns)` | `DateTime64` | `TIMESTAMP` |
| `STRING`, `BINARY` | `String` | `BINARY` |
| `STRING`, `BINARY`, `FIXED_LENGTH_BYTE_ARRAY` | `FixedString` | `FIXED_LENGTH_BYTE_ARRAY` |
| `DECIMAL` | `Decimal` | `DECIMAL` |
| `LIST` | `Array` | `LIST` |
| `STRUCT` | `Tuple` | `STRUCT` |
| `MAP` | `Map` | `MAP` |
| `UINT32` | `IPv4` | `UINT32` |
| `FIXED_LENGTH_BYTE_ARRAY`, `BINARY` | `IPv6` | `FIXED_LENGTH_BYTE_ARRAY` |
| `FIXED_LENGTH_BYTE_ARRAY`, `BINARY` | `Int128`/`UInt128`/`Int256`/`UInt256` | `FIXED_LENGTH_BYTE_ARRAY` |
Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types can also be nested.
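For illustration, a minimal sketch of a table definition (the table name is hypothetical) whose columns round-trip through Parquet as nested types per the table above:

```sql
CREATE TABLE parquet_nested_demo
(
    tags  Array(Nullable(String)),       -- LIST of nullable STRING values
    point Tuple(x Float64, y Float64),   -- STRUCT with two DOUBLE fields
    attrs Map(String, Array(Int64))      -- MAP whose values are LISTs of INT64
)
ENGINE = MergeTree
ORDER BY tuple();
```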
Unsupported Parquet data types: `FIXED_SIZE_BINARY`, `JSON`, `UUID`, `ENUM`.
Data types of ClickHouse table columns can differ from the corresponding fields of the inserted Parquet data. When inserting data, ClickHouse interprets data types according to the table above and then casts the data to the data type set for the ClickHouse table column.
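A minimal sketch of this cast (the table name `t_cast` is hypothetical): `numbers()` produces `UInt64` values, which are written as Parquet `UINT64`; on insert, ClickHouse reads them back per the table above and then casts them to the column type `UInt16`:

```bash
$ clickhouse-client --query="CREATE TABLE t_cast (v UInt16) ENGINE = Memory"
$ clickhouse-client --query="SELECT number AS v FROM numbers(5) FORMAT Parquet" | clickhouse-client --query="INSERT INTO t_cast FORMAT Parquet"
```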
## Example Usage

### Inserting and Selecting Data

You can insert Parquet data from a file into a ClickHouse table with the following command:
```bash
$ cat {filename} | clickhouse-client --query="INSERT INTO {some_table} FORMAT Parquet"
```
You can select data from a ClickHouse table and save it to a file in the Parquet format with the following command:
```bash
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_file.pq}
```
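Before inserting, you can check which ClickHouse types will be inferred from a Parquet file's schema, for example with `clickhouse-local` (the file name is a placeholder):

```bash
$ clickhouse-local --query="DESCRIBE file('data.parquet', Parquet)"
```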
To exchange data with Hadoop, you can use the HDFS table engine.
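For example, a sketch of an HDFS-backed table that stores its data in the Parquet format (the URI, path, and column set are placeholders):

```sql
CREATE TABLE hdfs_parquet_table (name String, value UInt32)
ENGINE = HDFS('hdfs://namenode:9000/some_dir/data.parquet', 'Parquet');
```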
## Format Settings

- `output_format_parquet_row_group_size` - row group size in rows during data output. Default value - `1000000`.
- `output_format_parquet_string_as_string` - use the Parquet String type instead of Binary for String columns. Default value - `false`.
- `input_format_parquet_import_nested` - allow inserting an array of structs into a Nested table in the Parquet input format. Default value - `false`.
- `input_format_parquet_case_insensitive_column_matching` - ignore case when matching Parquet columns with ClickHouse columns. Default value - `false`.
- `input_format_parquet_allow_missing_columns` - allow missing columns while reading Parquet data. Default value - `false`.
- `input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference` - allow skipping columns with unsupported types during schema inference for the Parquet format. Default value - `false`.
- `input_format_parquet_local_file_min_bytes_for_seek` - minimum number of bytes required for a local file read to do a seek, instead of reading with ignore, in the Parquet input format. Default value - `8192`.
- `output_format_parquet_fixed_string_as_fixed_byte_array` - use the Parquet FIXED_LENGTH_BYTE_ARRAY type instead of Binary/String for FixedString columns. Default value - `true`.
- `output_format_parquet_version` - the version of the Parquet format used in the output format. Default value - `2.latest`.
- `output_format_parquet_compression_method` - compression method used in the output Parquet format. Default value - `lz4`.
- `input_format_parquet_max_block_size` - maximum block row size for the Parquet reader. Default value - `65409`.
- `input_format_parquet_prefer_block_bytes` - average block bytes output by the Parquet reader. Default value - `16744704`.
- `output_format_parquet_write_page_index` - add a possibility to write a page index into Parquet files. Requires `output_format_parquet_use_custom_encoder` to be disabled at present. Default value - `true`. An example applying settings from this list follows below.
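These settings can also be applied per query with a `SETTINGS` clause. For example, a sketch that exports a table as Parquet with ZSTD compression and plain String columns, using two of the settings above (the table and file names are placeholders):

```bash
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet SETTINGS output_format_parquet_compression_method = 'zstd', output_format_parquet_string_as_string = 1" > {some_file.pq}
```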