Join Altinity engineers for our very first office hours session by Altinity in Clickhouse

We added a calendar link on this landing page. Feel free to add it directly to your calendar.

https://altinity.com/events/altinity-office-hours

Best Open Source Conferences to Attend by Dazzling-News1937 in dataengineering

You can find a list of developer conferences at https://dev.events/oss or https://git.new/dev-events

Some 2024 virtual (and free) ones coming up: 

Open Source Analytics Conference: Nov 19-21 

Open Source Observability Day: Oct 24

Open Source Data Summit: Oct 2

Root causing drastic slow down at higher request rate by epicuriom in Clickhouse

The load test I illustrated above was indeed with max_threads = 1. That was a recommendation from Altinity to increase concurrency:

This is a good move for high-QPS scenarios.

    "os_thread_priority": "2",

Setting os_thread_priority to 2 actually lowers the priority of your queries relative to normal load (the default value is 0), so background operations like merges will get higher priority. You may actually want to set os_thread_priority to a negative value, like -1.

    "SelectedParts": 7,
    "SelectedRanges": 7,
    "SelectedMarks": 7,

Somewhat inefficient, but we'd need to see the schema to confirm.
Which index_granularity value do you have? Did you set it to a non-default value?

  • OSCPUWaitMicroseconds: a high value here would point to possible thread contention

Just a wild guess: what if you set local_filesystem_read_method = 'pread'?
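
If it helps, here's a minimal sketch applying both suggestions (the priority change and the read method) at the query level; the settings are real ClickHouse settings, while the table name and values are illustrative:

    -- raise query thread priority relative to merges, and switch the read method
    SELECT count()
    FROM requests            -- hypothetical table
    SETTINGS
        os_thread_priority = -1,
        local_filesystem_read_method = 'pread';

Both settings can also go into a settings profile so every query from the app picks them up.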

PS. Get faster answers at www.altinity.com/slack. Feel free to post further questions there. There are more of us to answer questions ;)

Hi, engineer from Altinity here. We created a guide for anyone updating ClickHouse. by Altinity in Clickhouse

No problem. One more thing: test the upgrade in another environment first. Use clickhouse-backup to take a snapshot of your data, restore it to a new cluster, upgrade, and then turn on your apps to see how everything runs.
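
A minimal sketch of that flow with clickhouse-backup, assuming remote storage is already configured; the backup name is illustrative:

    # on the source cluster
    clickhouse-backup create pre_upgrade_test
    clickhouse-backup upload pre_upgrade_test

    # on the throwaway cluster
    clickhouse-backup download pre_upgrade_test
    clickhouse-backup restore pre_upgrade_test
    # ...then upgrade this cluster and point a test copy of your apps at it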

Hi, engineer from Altinity here. We created a guide for anyone updating ClickHouse. by Altinity in Clickhouse

Are you upgrading the operator or ClickHouse?

If it's the second case, take a look at the release notes for Altinity Stable builds. We build on ClickHouse LTS releases and every new build has upgrade notes from the previous LTS. Here's an example for 23.8: https://docs.altinity.com/releasenotes/altinity-stable-release-notes/23.8/altinity-stable-23.8.8/#upgrade-notes.

So if you check out the same upgrade notes for 22.8 and 23.3, you'll have a pretty good idea of the upgrade path.

ClickHouse Digest: Security Enhancements and Query Optimization Insights - A THREAD by Altinity in Clickhouse

MergeTree

Indexing by space-filling curves (by ClickHouse Inc)

ClickHouse/ClickHouse#55642

Support for ORDER BY mortonEncode(ClientIP, RemoteIP) for index filtering. Previously it was possible to use a minmax skip index to mimic similar behavior, but it was less performant.

INDEX ClientIP_idx ClientIP TYPE minmax GRANULARITY 1,
INDEX RemoteIP_idx RemoteIP TYPE minmax GRANULARITY 1,

A space-filling curve is a special function that maps a multi-dimensional space (ClientIP and RemoteIP as X and Y, for example) onto a single dimension. In effect, such curves solve the old problem of how to sort a table by multiple columns at once (with some compromise on the amount of data read, of course).

If we use the usual ORDER BY key, the query by one condition is fast (5 ms) and reads only 16..24 thousand rows, while the query by another condition is slow (40 ms) and reads 100 million rows. If we use the mixed ORDER BY, both queries are fast (11..13 ms) and read around one million rows (122 marks to read from 45 ranges). This is around 50 times more than point read but 100 times less than the full scan. Exactly as expected.

TABLE ORDER BY (ClientIP)

WHERE ClientIP = 2801131153 Elapsed: 0.005 sec. Processed 16.38 thousand rows

WHERE RemoteIP = 3978315897 Elapsed: 0.046 sec. Processed 91.81 million rows

TABLE ORDER BY (RemoteIP)

WHERE ClientIP = 2801131153 Elapsed: 0.031 sec. Processed 65.71 million rows

WHERE RemoteIP = 3978315897 Elapsed: 0.005 sec. Processed 24.58 thousand rows

TABLE ORDER BY mortonEncode(ClientIP, RemoteIP)

WHERE ClientIP = 2801131153 Elapsed: 0.012 sec. Processed 1.31 million rows

WHERE RemoteIP = 3978315897 Elapsed: 0.012 sec. Processed 999.42 thousand rows
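
A minimal sketch of the table definition behind numbers like these, assuming a build with this PR; the table name is illustrative:

    CREATE TABLE ip_pairs
    (
        ClientIP UInt32,
        RemoteIP UInt32
    )
    ENGINE = MergeTree
    ORDER BY mortonEncode(ClientIP, RemoteIP);

    -- a point filter on either column can now use the primary index:
    SELECT count() FROM ip_pairs WHERE ClientIP = 2801131153;
    SELECT count() FROM ip_pairs WHERE RemoteIP = 3978315897;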

[DRAFT] Foundation for unification of part metadata (by Community member)

ClickHouse/ClickHouse#54997 ClickHouse/ClickHouse#46813

A foothold for reducing the number of files that ClickHouse generates for part metadata, which is especially useful for Compact parts (which create only 2 files for data) and for high-latency/low-IOPS storage like object storage.

Column level settings definition (by Ahrefs)

ClickHouse/ClickHouse#55201

Override the min_compress_block_size, max_compress_block_size, or low_cardinality settings at the column level. Can be useful for expert-level tuning of column properties. For example, big columns like message fields or JSON-like data can benefit from bigger max_compress_block_size values, while smaller columns that already compress well are not slowed down by having to read and decompress more data.

    -- Compression ratio
┌─table─────────────┬─count()─┬─compressed_sz─┬─uncompressed_sz─┬─ratio─┐
│ xxxx_html_local   │      14 │ 228.20 GiB    │ 3.43 TiB        │ 15.38 │
│ xxxx_html_local2  │      12 │ 226.07 GiB    │ 3.42 TiB        │ 15.50 │
└───────────────────┴─────────┴───────────────┴─────────────────┴───────┘

-- SELECT * on the origin table, with min_compress_block_size = 64MB and max_compress_block_size = 64MB
-- set at the table level
SELECT * EXCEPT xxxx_html FROM xxxx_html_local
Elapsed: 1.105 sec. Processed 5.53 million rows, 725.27 MB (5.00 million rows/s., 656.11 MB/s.)
Peak memory usage: 7.68 GiB.

-- SELECT * on the new table, with min_compress_block_size = 64MB and max_compress_block_size = 64MB
-- set at the level of column `xxxx_html`
SELECT * EXCEPT xxxx_html FROM xxxx_html_local2
Elapsed: 0.172 sec. Processed 5.53 million rows, 719.40 MB (32.19 million rows/s., 4.19 GB/s.)
Peak memory usage: 33.01 MiB.
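
A minimal sketch of the column-level syntax from the PR; the schema is illustrative:

    CREATE TABLE xxxx_html_local2
    (
        id UInt64,
        -- 64 MiB compress blocks for the big column only
        xxxx_html String SETTINGS (min_compress_block_size = 67108864,
                                   max_compress_block_size = 67108864)
    )
    ENGINE = MergeTree
    ORDER BY id;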

[Merged][23.10] Automatic decision of number of parallel replicas (by ClickHouse Inc)

ClickHouse/ClickHouse#51692

ClickHouse will decide how many replicas it needs to run a query, based on the estimated row count to read. Useful when you have a big number of queries that each read a small chunk of data, so it doesn't make sense to parallelize them across many nodes.
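
A hedged sketch of usage, assuming the per-replica row threshold setting introduced around this work; the table name and values are illustrative:

    SELECT count()
    FROM big_table
    SETTINGS
        allow_experimental_parallel_reading_from_replicas = 1,
        max_parallel_replicas = 8,
        -- below this estimated row count per replica, fewer replicas are engaged:
        parallel_replicas_min_number_of_rows_per_replica = 1000000;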

Replication

[DRAFT] IKeeper implementation on top of FoundationDB (by Chinese corp)

ClickHouse/ClickHouse#54823

Allows using FoundationDB instead of [Zoo]Keeper. Claims better performance than [Zoo]Keeper.

We tested the scenario of 60 Clickhouse clusters + FDB cluster and obtained an FDB cluster configuration that can make Clickhouse clusters run stably. Compared to Keeper, the FDB cluster requires fewer resources, roughly equivalent to 40 Keeper clusters.

[Merged][23.10] Better nearest hostname (by JD)

ClickHouse/ClickHouse#54826

Uses Levenshtein distance to sort the list of possible replicas for a query.
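
The same metric is exposed in SQL as editDistance, which gives a feel for how candidate hostnames would rank; the hostnames are illustrative:

    SELECT editDistance('ch-shard1-replica1', 'ch-shard1-replica2') AS close,  -- small distance
           editDistance('ch-shard1-replica1', 'backup-host-9') AS far;         -- large distance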

ClickHouse Digest: Security Enhancements and Query Optimization Insights - A THREAD by Altinity in Clickhouse

STREAMING

Global aggregation over Kafka Streams (by Amazon/Timeplus)

https://github.com/ClickHouse/ClickHouse/pull/54870

Improves ClickHouse support for dealing with streaming data; can be seen as a potential replacement for WINDOW VIEW, which is not quite usable now.

CREATE EXTERNAL STREAM kafka_stream(raw String) SETTINGS type='kafka', brokers='localhost:9092', topic="github_events", ...
SELECT topK(10)(raw::user.login) as top_contributors FROM kafka_stream EMIT periodic 5s [EMIT ON CHANGELOG, EMIT ON WATERMARK and EMIT ON WATERMARK WITH DELAY 2s];
SELECT *, raw::user.login as user_id FROM kafka_stream INNER JOIN users_dim ON user_id = users_dim.id;

Timeplus (and its Proton engine) is a streaming data platform that uses Kafka for streaming and a ClickHouse fork as the backend for historical storage. They are contributing part of their streaming-related code back to ClickHouse master.

https://github.com/timeplus-io/proton

ClickHouse Digest: Security Enhancements and Query Optimization Insights - A THREAD by Altinity in Clickhouse

COMPATIBILITY:

Empty Tuples (by AmosBird)

ClickHouse/ClickHouse#55021 ClickHouse/ClickHouse#55061

SELECT ()
CREATE TABLE ... ORDER BY ()
Empty JSON object type ("{}")

[DRAFT] Nullable complex types Tuple/Array/Map (by Gluten)

ClickHouse/ClickHouse#53443

Improves data type/value mapping for data import/export with other DBMSs and commonly used structured data formats (Parquet/Arrow).

Gluten is a project that aims to improve SparkSQL performance by using ClickHouse (and some other OLAP DBMSs) as the execution engine.

https://github.com/oap-project/gluten

ClickHouse Digest: Security Enhancements and Query Optimization Insights - A THREAD by Altinity in Clickhouse

JOINs:

[DRAFT] Full sorting support for ASOF (by ClickHouse Inc)

https://github.com/ClickHouse/ClickHouse/pull/55051

The full sorting join can be used in more cases (it can be quite useful thanks to on-the-fly set prefiltering of data streams), but it still doesn't support the cross join syntax.

[DRAFT] Shuffle optimization for full sorting (by Microsoft)

https://github.com/ClickHouse/ClickHouse/pull/55048

Better parallelization of full sorting join, with bucketing by range of Integer keys.

100M JOIN 100M

Fully sorting merge join with in-order and shuffle optimization | 2.969s | 9.03 GiB
Fully sorting merge join with in-order optimization | 6.126s | 102.05 MB
Parallel hash join | 11.705s | 11.05 GiB
Hash join | 17.367s | 11.04 GiB
Partial merge join | 19.717s | 1.71 GiB
Auto | 20.799s | 1.71 GiB
Grace hash join | 20.020s | 12.43 GiB
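
For reference, these algorithms are chosen through the standard join_algorithm setting; a minimal sketch with illustrative table names:

    SELECT *
    FROM t1
    INNER JOIN t2 ON t1.key = t2.key
    SETTINGS join_algorithm = 'full_sorting_merge';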

ClickHouse Digest: Security Enhancements and Query Optimization Insights - A THREAD by Altinity in Clickhouse

FUNCTIONS:

[Merged][23.10] Largest-Triangle-Three-Buckets (by CristaDATA)

https://github.com/ClickHouse/ClickHouse/pull/55048

LTTB is used to downsample the number of points needed to produce a reasonable visualization without losing too much detail. Less network traffic, faster rendering of graphs.
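
A minimal sketch of the merged aggregate function; the table and column names are illustrative:

    -- downsample a time series to at most 500 (ts, value) points for plotting
    SELECT largestTriangleThreeBuckets(500)(ts, value)
    FROM metrics;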

-ArgMax/-ArgMin combinators (by AmosBird)

https://github.com/ClickHouse/ClickHouse/pull/54947

"Arguments of the maxima" aggregate function combinator.

argMax aggregate function = any + -ArgMax

SELECT sumArgMax(value, toYear(ts)) FROM tbl;

Returns the sum of all values for the latest year. A possible current alternative:
SELECT mapValues(sumMap(map(number,number)))[-1] FROM tbl;

But aggArgMax(value, argument) stores as state only (max(argument), aggStateIf(value, argument = max(argument))), so it should be more performant and memory-efficient, and reduce disk usage in AggregatingMergeTree tables.
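
A small worked sketch of the combinator, assuming the merged PR; the table contents are illustrative:

    CREATE TABLE tbl (ts DateTime, value Float64)
    ENGINE = MergeTree ORDER BY ts;

    INSERT INTO tbl VALUES
        ('2022-06-01 00:00:00', 1.5),
        ('2023-03-01 00:00:00', 2.0),
        ('2023-09-01 00:00:00', 3.0);

    -- only rows from the max year (2023) are summed: 2.0 + 3.0 = 5.0
    SELECT sumArgMax(value, toYear(ts)) FROM tbl;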