Artie Product Update - May 2025

Jacqueline Cheong
Updated on May 14, 2025
Product spotlight

We’ve shipped a bunch of powerful updates in the last few months - from simplifying Iceberg lakehouse pipelines to making Oracle fan-in dead simple. Here’s what’s new and what it unlocks for your team.

Feature Highlight: S3 Iceberg destination (Beta)

S3 Iceberg is now available in beta! This new destination uses AWS’s recently released S3 Tables support, allowing you to replicate directly into Apache Iceberg tables backed by S3. It's a big unlock for teams building modern lakehouse architectures on open standards.
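
Once data lands in S3 Tables, any Iceberg-aware engine can read it. Below is a minimal sketch using PyIceberg pointed at the S3 Tables Iceberg REST endpoint; the region, table bucket ARN, and public.orders table name are placeholders for your own setup, not values from this release.

from pyiceberg.catalog import load_catalog

# Point PyIceberg at the S3 Tables Iceberg REST endpoint (placeholder values).
catalog = load_catalog(
    "s3tables",
    **{
        "type": "rest",
        "uri": "https://s3tables.us-east-1.amazonaws.com/iceberg",
        "warehouse": "arn:aws:s3tables:us-east-1:123456789012:bucket/analytics",
        "rest.sigv4-enabled": "true",
        "rest.signing-name": "s3tables",
        "rest.signing-region": "us-east-1",
    },
)

# Load a table the pipeline replicated and read a few rows back.
table = catalog.load_table("public.orders")
print(table.scan(limit=10).to_arrow())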

More New Features We Shipped 🚀

Column Inclusion Rules

You can now define an explicit allowlist of columns to replicate - ideal for PII or other sensitive data.

This expands our column-level controls alongside column exclusion and hashing. Only the fields you specify get replicated. Everything else stays out.

We've recorded a short Loom video demonstrating how to enable column inclusion rules.
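
To make the behavior concrete, here's an illustrative Python sketch of allowlist semantics. The apply_inclusion_rules helper and column names are hypothetical, not Artie's internal code; in practice you configure the rules per table in the dashboard.

# Hypothetical illustration of allowlist semantics: only the columns you name survive.
ALLOWED_COLUMNS = {"id", "email_domain", "created_at"}

def apply_inclusion_rules(row: dict) -> dict:
    # Drop anything not explicitly allowlisted before it reaches the destination.
    return {col: value for col, value in row.items() if col in ALLOWED_COLUMNS}

row = {"id": 42, "email_domain": "example.com", "ssn": "123-45-6789", "created_at": "2025-05-01"}
print(apply_inclusion_rules(row))
# {'id': 42, 'email_domain': 'example.com', 'created_at': '2025-05-01'}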

Autopilot for New Tables

Stop manually hunting for new tables in your source DB. Autopilot finds and syncs them for you - zero config required.

Turn it on via:

Deployment → Destination Settings → Advanced Settings → “Auto-replicate new tables” 

More details in our blog.

Data Quality: Rows Affected Checks

To further enhance the data integrity built into our pipeline, we've added another guardrail: verifying the number of rows affected during each database operation. 

For example, during merge steps (such as in Snowflake), we confirm ROWS_LOADED from COPY commands and validate the totals for inserted, updated, and deleted rows. It's one more way we catch issues early and keep replication accurate.
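
Here's a minimal sketch of the idea, not Artie's internal code: after staging rows and issuing a Snowflake COPY INTO, compare the rows_loaded values it reports against the number of rows you staged, and fail loudly on any mismatch.

# Hypothetical guardrail: verify Snowflake reported loading exactly what we staged.
def verify_rows_loaded(copy_results: list[dict], staged: int) -> None:
    loaded = sum(int(result["rows_loaded"]) for result in copy_results)
    if loaded != staged:
        raise RuntimeError(f"Row count mismatch: staged {staged}, loaded {loaded}")

# Two COPY result rows (one per staged file), checked against 1,000 staged rows.
verify_rows_loaded([{"rows_loaded": 600}, {"rows_loaded": 400}], staged=1000)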

Read Once, Write Many

We recently launched the ability to read once and write to multiple destinations. This means you no longer need multiple replication slots on your source database.

For example, by reading data just once from your Postgres instance and simultaneously replicating it to Snowflake and Redshift, you reduce database overhead and simplify replication architecture.
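
One way to see the benefit on the source side is to check how many replication slots Postgres is maintaining. The sketch below uses psycopg2 with a placeholder connection string; with read-once fan-out you should see a single slot for Artie rather than one per destination.

import psycopg2

# Placeholder DSN; point it at your source Postgres instance.
conn = psycopg2.connect("postgresql://user:password@source-host:5432/appdb")
with conn.cursor() as cur:
    # List logical replication slots and whether they are currently in use.
    cur.execute("SELECT slot_name, active, confirmed_flush_lsn FROM pg_replication_slots;")
    for slot_name, active, flush_lsn in cur.fetchall():
        print(slot_name, active, flush_lsn)
conn.close()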

Multi-Data Plane Support

Artie now supports hosting pipelines across multiple data planes, whether you're on our cloud or using your own (BYOC) infrastructure.

For example, run one pipeline from Postgres to Snowflake in AWS US-East-1 and another from MySQL to Snowflake in AWS US-West-2.

Oracle Fan-in

Most CDC tools give up when faced with thousands of single-tenant Oracle sources. Not Artie.

With our Oracle Fan-in feature, you can now easily replicate data from thousands of Oracle sources - without painful manual setups or infrastructure overload. Fan-in reduces your Kafka topic sprawl, lowers infrastructure costs, and simplifies real-world, complex data replication.

ICYMI

Our latest blog: Why TOAST Columns Break Postgres CDC and How to Fix It

TL;DR: Artie handles TOAST columns for you. No replica identity changes. No manual work. Just clean, accurate data.
