Reliably replicating data between Postgres and ClickHouse
https://benjaminwootton.com/insights/clickhouse-peerdb-cdc/
By saisrirampur
woodhull | 2 comments | 7 hours ago
It works well. Their team is great. I feel a bit spoiled having had as much access to the engineering team during the private beta as we've experienced.
It's great for use cases where it makes sense to sync postgres tables across to clickhouse without denormalizing them. PeerDB can transform rows in a single table sent via CDC using Lua scripts, but it can't (yet!) denormalize data into clickhouse that is stored in 3NF on Postgres across multiple tables.
On the clickhouse query side, we end up wanting denormalized data for query performance and to avoid JOINs. It's frequently not a great idea to query in clickhouse using the same table structure as you're using in your transactional db.
In our experience we sync a few tables with PeerDB but mostly end up using app-level custom code to sync denormalized data into Clickhouse for our core use-cases. Most of the PeerDB sync'd tables end up as Clickhouse Dictionaries which we then use in our queries.
PeerDB works well and I like it for what it is. Just don't expect to be satisfied with querying in Clickhouse against the same table structure as you've got in Postgres unless your data size is tiny.
Curious to know about how others are using it and the architectures you've developed.
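To illustrate the pattern described above (synced tables exposed as ClickHouse Dictionaries and looked up from queries), here is a minimal sketch; the table and column names are hypothetical, not from the comment:

```sql
-- Wrap a PeerDB-synced table (here 'users') in a dictionary for fast key lookups.
CREATE DICTIONARY user_dict
(
    id   UInt64,
    name String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(TABLE 'users'))
LIFETIME(MIN 300 MAX 600)   -- re-read from the source table every 5-10 minutes
LAYOUT(FLAT());

-- Use dictGet() in queries instead of a JOIN against the synced table.
SELECT
    dictGet('user_dict', 'name', user_id) AS user_name,
    count() AS events
FROM events
GROUP BY user_name;
```

Dictionary lookups like this avoid a JOIN at query time, which is one way to live with normalized synced tables on the ClickHouse side.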
JonoBB | 0 comments | 26 minutes ago
> In our experience we sync a few tables with PeerDB but mostly end up using app-level custom code to sync denormalized data into Clickhouse for our core use-cases.
Have you explored dbt? You may find that using custom code is not scalable, and that dbt solves this exact problem.
saisrirampur | 0 comments | 7 hours ago
Overall, what you shared makes sense for use cases like yours. However, there are other scenarios—such as multi-tenant SaaS analytics running large-scale workloads with PeerDB/PostgreSQL CDC. In these cases there are 100s of tables across different schemas that are synced using CDC. Some customers denormalize tables using materialized views (MVs), which is a powerful feature in ClickHouse, while others power dashboards directly with JOINs using the recent JOIN improvements in ClickHouse and suitable/optimized order keys (tenant_id,id).
When dealing with 100s to 1000s of tables and a heavily relational schema, building dual-write pipelines with denormalization becomes extremely difficult—especially when the workload involves UPDATEs.
We have many customers falling in the above bucket, replicating multiple petabytes of data to ClickHouse. A few customer deep dives on this are coming soon! :)
Side note: We are tracking support for in-transit transformations as a future feature. However, MVs are the way to go—more of an ELT approach.
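A rough sketch of the MV-based denormalization approach mentioned above (an ELT-style incremental materialized view with the `(tenant_id, id)` order key; all table names are illustrative):

```sql
-- Denormalized target table; ReplacingMergeTree absorbs repeated
-- versions of the same row coming from CDC.
CREATE TABLE orders_denorm
(
    tenant_id     UInt64,
    id            UInt64,
    customer_name String,
    amount        Decimal(18, 2)
)
ENGINE = ReplacingMergeTree
ORDER BY (tenant_id, id);

-- Incremental MV: fires on every insert into the CDC-synced 'orders'
-- table and enriches rows via a dictionary lookup rather than a JOIN
-- (MVs only trigger on inserts into their left-hand table).
CREATE MATERIALIZED VIEW orders_denorm_mv TO orders_denorm AS
SELECT
    tenant_id,
    id,
    dictGet('customers_dict', 'name', customer_id) AS customer_name,
    amount
FROM orders;
```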
jascha_eng | 8 comments | 12 hours ago
Don't be confused by the timeseries branding.
simonw | 1 comment | 11 hours ago
Looks like you've expanded into vector indexing - https://github.com/timescale/pgvectorscale - and an extension which bakes RAG patterns (including running prompts from SQL queries) into PostgreSQL: https://github.com/timescale/pgai
akulkarni | 2 comments | 10 hours ago
And yes you are correct, pgvectorscale scales pgvector for embeddings, and pgai includes dev experience niceties for AI (eg automatic embedding management).
Would love to hear any suggestions on how we could make this less confusing. :-)
mritchie712 | 0 comments | 8 hours ago
simonw | 0 comments | 6 hours ago
I guess that's why we have marketing teams!
klaussilveira | 0 comments | 3 hours ago
But actively trying to simplify and remove as many gears as possible.
qeternity | 0 comments | 11 hours ago
But it is much less performant than CH.
js4ever | 0 comments | 10 hours ago
whitepoplar | 1 comment | 11 hours ago
mdaniel | 0 comments | 5 hours ago
- Citus is AGPLv3 https://github.com/citusdata/citus/blob/v13.0.1/LICENSE
- Hydra is Apache 2 https://github.com/hydradatabase/columnar/blob/v1.1.2/LICENS...
- Timescale is mostly Apache 2 https://github.com/timescale/timescaledb/blob/2.18.2/LICENSE
jascha_eng | 0 comments | 11 hours ago
geoka9 | 1 comment | 12 hours ago
asadawadia | 1 comment | 11 hours ago
the data stays in PGDB - TSDB is an extension installed onto the database server
akulkarni | 0 comments | 10 hours ago
tomnipotent | 3 comments | 11 hours ago
I'm all for keeping as much as possible in your initial Postgres deployment. If your team isn't having to work around things and everything "just works", it's a wonderful conjunction of requirements and opportunity. It's incredible how much you can get out of a single instance, really remarkable. I'd also add it's still worth it even if there is a little pain.
But I've found that once I cross about 8-12 terabytes of data I need to specialize, and that a pure columnar solution like ClickHouse really begins to shine even compared to hybrid solutions given the amortized cost of most analytic workloads. This difference quickly adds up and I think at that scale really makes a difference to the developer experience that a switch is worth the consideration. Otherwise stick to Postgres and save your org some money and more importantly sanity.
You reach a point when you have enough queries doing enough work that the extra I/O and memory required by PAX/hybrid becomes noticeably more costly than pure columnar, at least for the workloads that I have experience with.
ClickHouse is now in my toolbox right alongside Postgres with things to deploy that I can trust to get the job done.
saisrirampur | 0 comments | 9 hours ago
(Disclaimer: This is Sai from ClickHouse/PeerDB team)
akulkarni | 0 comments | 10 hours ago
https://www.timescale.com/blog/how-we-scaled-postgresql-to-3...
(Post is a year old, IIRC the database is over one petabyte now)
nijave | 0 comments | 11 hours ago
- you'd probably at least want a read replica so you're not running queries on your primary db
- if you're going to the trouble of setting up a column store, it seems likely you're wanting to integrate other data sources so need some ETL regardless
- usually column store is more olap with lower memory and fast disks whereas operational is oltp with more memory and ideally less disk io usage
I suppose you could get some middle ground with PG logical rep if you're mainly integrating PG data sources
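The logical replication middle ground mentioned above is standard Postgres publication/subscription; a minimal sketch (connection details and table names are placeholders):

```sql
-- On the primary (publisher):
CREATE PUBLICATION analytics_pub FOR TABLE orders, customers;

-- On the reporting/column-store-adjacent instance (subscriber):
CREATE SUBSCRIPTION analytics_sub
    CONNECTION 'host=primary dbname=app user=replicator'
    PUBLICATION analytics_pub;
```

This keeps the replica continuously up to date without a separate ETL job, but only for Postgres-to-Postgres sources.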
jedberg | 1 comment | 4 hours ago
I'm curious if you have data that backs this up, or if it's more of a "gut feeling" sort of thing. At first blush, I agree with you, but at the same time, by doing it at the application level, it opens up so many more possibilities, such as writing "pre-coalesced" data to the data warehouse or pre-enriching the data that goes into the data warehouse.
saisrirampur | 0 comments | 4 hours ago
Secondly, OLAP/DWH systems aren't as friendly as OLTP databases when it comes to UPDATEs/DELETEs. You can't just perform point UPDATEs or DELETEs and call it a day. So why not let a replication tool handle this for you in the most efficient way?
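For context on why point UPDATEs/DELETEs are awkward in ClickHouse: CDC tools typically model them as versioned inserts into a ReplacingMergeTree and reconcile at read time. A generic sketch (column names here are illustrative, not PeerDB's actual metadata columns):

```sql
-- Updates arrive as new row versions; deletes as tombstone rows.
CREATE TABLE customers
(
    id       UInt64,
    name     String,
    _version UInt64,   -- monotonically increasing per-row version from CDC
    _deleted UInt8     -- 1 marks a delete tombstone
)
ENGINE = ReplacingMergeTree(_version)
ORDER BY id;

-- FINAL collapses to the latest version of each row at query time;
-- filtering on the tombstone flag hides deleted rows.
SELECT id, name
FROM customers FINAL
WHERE _deleted = 0;
```

Getting this right (versioning, tombstones, when to rely on FINAL vs background merges) is exactly the kind of detail a replication tool can own.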
spapas82 | 1 comment | 11 hours ago
The parquet file is a columnar-friendly format that can then be simply inserted into ClickHouse or DuckDB, or even queried directly.
This script and a cron job are enough for my (not very complex) needs on replicating my postgres data on clickhouse for fast queries.
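The comment doesn't show the script itself; one common way to implement this kind of Postgres-to-parquet dump, assuming DuckDB with its postgres extension (connection string and table name are placeholders):

```sql
-- Run inside DuckDB, e.g. from a cron job via the duckdb CLI.
INSTALL postgres;
LOAD postgres;

-- Attach the live Postgres database read-only.
ATTACH 'host=localhost dbname=app' AS pg (TYPE postgres, READ_ONLY);

-- Dump a table (or any query) to a parquet file.
COPY (SELECT * FROM pg.public.orders)
    TO 'orders.parquet' (FORMAT parquet);
```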
zX41ZdbW | 0 comments | 10 hours ago
https://clickhouse.com/docs/sql-reference/table-functions/po...
https://clickhouse.com/docs/materialized-view/refreshable-ma...
Additionally, you can set up incremental import with https://clickhouse.com/blog/postgres-cdc-connector-clickpipe...
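Putting the two linked features together, a batch import can be as small as a refreshable materialized view over the `postgres` table function (credentials and names are placeholders):

```sql
-- Re-pulls the whole Postgres table on a schedule; fine for
-- small-to-medium tables where full refresh is acceptable.
CREATE MATERIALIZED VIEW orders_copy
REFRESH EVERY 1 HOUR
ENGINE = MergeTree
ORDER BY id
AS SELECT *
FROM postgres('pg-host:5432', 'app', 'orders', 'reader', 'password');
```

For larger or update-heavy tables, the incremental CDC route from the last link is the better fit.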