Automatic schema migration tools for ClickHouse
We often get asked which schema migration tool to use with ClickHouse, and what the best practice is for managing database schemas that change over time.
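For illustration, here is a minimal sketch of what a versioned migration could look like. The file name, table, and column are hypothetical; most migration tools simply apply ordered, plain-SQL files like this one once per environment:

```sql
-- 0002_add_status_column.sql (hypothetical migration file)
-- Each migration is a plain DDL statement applied in order.
ALTER TABLE events
    ADD COLUMN IF NOT EXISTS status LowCardinality(String) DEFAULT 'unknown';
```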
How do I view the number of active or queued mutations?
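As a quick sketch, mutations that have not yet finished can be inspected through the system.mutations table:

```sql
-- Count mutations that are still pending or running, per table
SELECT database, table, count() AS active_mutations
FROM system.mutations
WHERE NOT is_done
GROUP BY database, table;
```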
ClickHouse Keeper provides the coordination system for data replication and distributed DDL query execution. ClickHouse Keeper is compatible with ZooKeeper, but it might not be obvious why you should use it instead of ZooKeeper. This article discusses some of the benefits of Keeper.
If a column is sparse (empty or contains mostly zeros), ClickHouse can encode it in a sparse format and automatically optimize calculations - the data does not require full decompression during queries. In fact, if you know how sparse a column is, you can tune the threshold at which sparse serialization is used with the ratio_of_defaults_for_sparse_serialization table setting.
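As a minimal sketch, the setting is applied per MergeTree table; the table and column names below are hypothetical:

```sql
CREATE TABLE sparse_demo
(
    id   UInt64,
    hits UInt64            -- mostly zeros, a good candidate for sparse serialization
)
ENGINE = MergeTree
ORDER BY id
SETTINGS ratio_of_defaults_for_sparse_serialization = 0.9;  -- default is 0.9375
```

Which serialization a column actually received can be checked afterwards in the serialization_kind column of system.parts_columns.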
ClickHouse is popular for log and metrics analysis because of the real-time analytics capabilities it provides.
This is a step-by-step example of how to start using Python with a ClickHouse Cloud service.
How can I validate that two queries return the same result sets?
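One way to sketch this is with EXCEPT in both directions; the tables and columns below are placeholders for the two queries being compared, and both counts should be 0 if the result sets match (ignoring row order):

```sql
-- Rows returned by the first query but not the second
SELECT count()
FROM ( SELECT a, b FROM table1 EXCEPT SELECT a, b FROM table2 );

-- Rows returned by the second query but not the first
SELECT count()
FROM ( SELECT a, b FROM table2 EXCEPT SELECT a, b FROM table1 );
```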
The error is typically reported as:
DB::NetException: SSL Exception: error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
When this error occurs, the table shows as read-only and the error message mentions intersecting parts.
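As a diagnostic sketch, the read-only state of replicated tables can be checked through system.replicas:

```sql
-- List replicated tables currently in read-only mode, with any Keeper/ZooKeeper error
SELECT database, table, is_readonly, zookeeper_exception
FROM system.replicas
WHERE is_readonly;
```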
TTL is eventually applied - what does that mean? The MergeTree table setting merge_with_ttl_timeout sets the minimum delay in seconds before repeating a merge with a delete TTL. The default value is 14400 seconds (4 hours). But that is just the minimum delay; it can take longer before a merge for a delete TTL is triggered.
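If the delay needs to be shorter, the setting can be adjusted per table; a minimal sketch with a hypothetical table name:

```sql
-- Allow delete-TTL merges on this table to be re-attempted after 1 hour instead of 4
ALTER TABLE events MODIFY SETTING merge_with_ttl_timeout = 3600;
```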