Enterprise changefeeds emit messages to configurable downstream sinks. CockroachDB supports the following sinks:

- Kafka
- Google Cloud Pub/Sub
- Cloud storage
- Webhook
See `CREATE CHANGEFEED` for more detail on the query parameters available when setting up a changefeed.
For a step-by-step example connecting a changefeed to a sink, see the Changefeed Examples page.
Sink URI
The sink URI follows the basic format of:
'{scheme}://{host}:{port}?{query_parameters}'
URI Component | Description |
---|---|
`scheme` | The type of sink: `kafka`, `gcpubsub`, any cloud storage sink, or webhook sink. |
`host` | The sink's hostname or IP address. |
`port` | The sink's port. |
`query_parameters` | The sink's query parameters. |
To change the sink URI of an existing changefeed, use the `sink` option with `ALTER CHANGEFEED`.
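For example, a minimal sketch of switching an existing changefeed to a different Kafka broker (the job ID and broker address are placeholders):

```sql
ALTER CHANGEFEED {job_ID} SET sink = 'kafka://other.broker.address:9092';
```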
Kafka
Example of a Kafka sink URI:
'kafka://broker.address.com:9092?topic_prefix=bar_&tls_enabled=true&ca_cert=LS0tLS1CRUdJTiBDRVJUSUZ&sasl_enabled=true&sasl_user={sasl user}&sasl_password={url-encoded password}&sasl_mechanism=SCRAM-SHA-256'
The following table lists the available parameters for Kafka URIs:
URI Parameter | Description |
---|---|
`topic_name` | The topic name to which messages will be sent. See the following section on Topic Naming for detail on how topics are created. |
`topic_prefix` | Adds a prefix to all topic names. For example, `CREATE CHANGEFEED FOR TABLE foo INTO 'kafka://...?topic_prefix=bar_'` would emit rows under the topic `bar_foo` instead of `foo`. |
`tls_enabled` | If `true`, enable Transport Layer Security (TLS) on the connection to Kafka. This can be used with a `ca_cert` (see below). Default: `false` |
`ca_cert` | The base64-encoded `ca_cert` file. Specify `ca_cert` for a Kafka sink. Note: To encode your `ca.cert`, run `base64 -w 0 ca.cert`. |
`client_cert` | The base64-encoded Privacy Enhanced Mail (PEM) certificate. This is used with `client_key`. |
`client_key` | The base64-encoded private key for the PEM certificate. This is used with `client_cert`. Note: Client keys are often encrypted. You will receive an error if you pass an encrypted client key in your changefeed statement. To decrypt the client key, run `openssl rsa -in key.pem -out key.decrypt.pem -passin pass:{PASSWORD}`. Once decrypted, be sure to update your changefeed statement to use the new `key.decrypt.pem` file instead. |
`sasl_enabled` | If `true`, the authentication protocol can be set to SCRAM or PLAIN using the `sasl_mechanism` parameter. You must have `tls_enabled` set to `true` to use SASL. Default: `false` |
`sasl_mechanism` | Can be set to `SCRAM-SHA-256`, `SCRAM-SHA-512`, or `PLAIN`. A `sasl_user` and `sasl_password` are required. Default: `PLAIN` |
`sasl_user` | Your SASL username. |
`sasl_password` | Your SASL password. |
`insecure_tls_skip_verify` | If `true`, disable client-side validation of responses. Note that a CA certificate is still required; this parameter means that the client will not verify the certificate. Warning: Use this query parameter with caution, as it creates MITM vulnerabilities unless combined with another method of authentication. Default: `false` |
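As a shell sketch of preparing these certificate parameters (the file names `ca.cert`, `client.cert`, and `key.pem` are assumptions for illustration):

```shell
# Decrypt the client key first; changefeeds reject encrypted client keys.
openssl rsa -in key.pem -out key.decrypt.pem -passin pass:{PASSWORD}

# Base64-encode the CA certificate, client certificate, and decrypted key
# so they can be passed as query parameters in the sink URI.
base64 -w 0 ca.cert
base64 -w 0 client.cert
base64 -w 0 key.decrypt.pem
```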
This table shows the parameters for changefeeds to a specific sink. The `CREATE CHANGEFEED` page provides a list of all the available options.
Topic naming
By default, a Kafka topic has the same name as the table on which a changefeed was created. If you create a changefeed on multiple tables, the changefeed will write to multiple topics corresponding to those table names. When you run `CREATE CHANGEFEED` to a Kafka sink, the output will display the job ID as well as the topic name(s) that the changefeed will emit to.
To modify the default topic naming, you can specify a topic prefix, an arbitrary topic name, or use the `full_table_name` option. Using the `topic_name` parameter, you can specify an arbitrary topic name and feed all tables into that topic.
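For example, a hypothetical changefeed that feeds two tables into a single arbitrary topic (the topic name `all_changes` is a placeholder):

```sql
CREATE CHANGEFEED FOR TABLE foo, bar
  INTO 'kafka://broker.address.com:9092?topic_name=all_changes';
```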
You can either manually create a topic in your Kafka cluster before starting the changefeed, or the topic will be automatically created when the changefeed connects to your Kafka cluster.
You must have the Kafka cluster setting `auto.create.topics.enable` set to `true` for automatic topic creation. This will create the topic when the changefeed sends its first message. If you create the consumer before that, you will also need the Kafka consumer configuration `allow.auto.create.topics` to be set to `true`.
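As a non-authoritative sketch, these Kafka-side settings (not CockroachDB settings) would appear in your Kafka configuration files as follows:

```
# Kafka broker setting (server.properties): allow automatic topic creation.
auto.create.topics.enable=true

# Kafka consumer setting, needed if the consumer is created
# before the changefeed sends its first message.
allow.auto.create.topics=true
```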
Kafka has the following topic limitations:

- Legal characters are numbers, letters, and `[._-]`.
- The maximum character length of a topic name is 249.
- Topics with a period (`.`) and underscore (`_`) can collide on internal Kafka data structures, so you should use either but not both.
- Characters not accepted by Kafka will be automatically encoded as unicode characters by CockroachDB.
Kafka sink configuration
The `kafka_sink_config` option allows configuration of a changefeed's message delivery, Kafka server version, and batching parameters.

Each of the following settings has a significant impact on a changefeed's behavior, such as latency. For example, setting the batching parameters very high will negatively impact changefeed latency, making it take a long time for messages to reach the sink. Also, the Kafka server may reject large batches unless it is separately configured to accept a high `max.message.bytes`.
kafka_sink_config='{"Flush": {"MaxMessages": 1, "Frequency": "1s"}, "Version": "0.8.2.0", "RequiredAcks": "ONE", "Compression": "GZIP" }'
"Flush"."MaxMessages"
and "Flush"."Frequency"
are configurable batching parameters depending on latency and throughput needs. For example, if "MaxMessages"
is set to 1000 and "Frequency"
to 1 second, it will flush to Kafka either after 1 second or after 1000 messages are batched, whichever comes first. It's important to consider that if there are not many messages, then a "1s"
frequency will add 1 second latency. However, if there is a larger influx of messages these will be flushed quicker.
Using the default values or not setting fields in kafka_sink_config
will mean that changefeed messages emit immediately.
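For example, a sketch of a changefeed using the batching values described above:

```sql
CREATE CHANGEFEED FOR TABLE foo
  INTO 'kafka://broker.address.com:9092'
  WITH kafka_sink_config = '{"Flush": {"MaxMessages": 1000, "Frequency": "1s"}}';
```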
The configurable fields are as follows:
Field | Type | Description | Default |
---|---|---|---|
`Flush.MaxMessages` | `INT` | Sets the maximum number of messages the producer can send in a single broker request. Any messages beyond the configured limit will be blocked. Increasing this value allows all messages to be sent in a batch. | `1000` |
`Flush.Messages` | `INT` | Configure the number of messages the changefeed should batch before flushing. | `0` |
`Flush.Bytes` | `INT` | When the total byte size of all the messages in the batch reaches this amount, it should be flushed. | `0` |
`Flush.Frequency` | Duration string | When this amount of time has passed since the first received message in the batch without it flushing, it should be flushed. | `"0s"` |
`"Version"` | `STRING` | Sets the appropriate Kafka cluster version, which can be used to connect to Kafka versions < v1.0 (`kafka_sink_config='{"Version": "0.8.2.0"}'`). | `"1.0.0.0"` |
`"RequiredAcks"` | `STRING` | Specifies what a successful write to Kafka is. CockroachDB guarantees at least once delivery of messages; this value defines the delivery. The possible values are: `"ONE"`: a write to Kafka is successful once the leader node has committed and acknowledged the write. Note that this has the potential risk of dropped messages if the leader node acknowledges before replicating to a quorum of other Kafka nodes, but then fails. `"NONE"`: no Kafka brokers are required to acknowledge that they have committed the message. This will decrease latency and increase throughput, but comes at the cost of lower consistency. `"ALL"`: a quorum must be reached (that is, most Kafka brokers have committed the message) before the leader can acknowledge. This is the highest consistency level. | `"ONE"` |
`"Compression"` | `STRING` | New in v22.1.12: Sets a compression protocol that the changefeed should use when emitting events. The possible values are: `"NONE"`, `"GZIP"`, `"SNAPPY"`, `"LZ4"`, `"ZSTD"`. Note that the values must be capitalized. | `"NONE"` |
Kafka sink messages
The following shows the Avro messages for a changefeed emitting to Kafka:
{
"after":{
"users":{
"name":{
"string":"Michael Clark"
},
"address":{
"string":"85957 Ashley Junctions"
},
"credit_card":{
"string":"4144089313"
},
"id":{
"string":"d84cf3b6-7029-4d4d-aa81-e5caa9cce09e"
},
"city":{
"string":"seattle"
}
}
},
"updated":{
"string":"1659643584586630201.0000000000"
}
}
{
"after":{
"users":{
"address":{
"string":"17068 Christopher Isle"
},
"credit_card":{
"string":"6664835435"
},
"id":{
"string":"11b99275-92ce-4244-be61-4dae21973f87"
},
"city":{
"string":"amsterdam"
},
"name":{
"string":"John Soto"
}
}
},
"updated":{
"string":"1659643585384406152.0000000000"
}
}
See the Changefeed Examples page and the Stream a Changefeed to a Confluent Cloud Kafka Cluster tutorial for examples to set up a Kafka sink.
For an overview of the messages emitted from changefeeds, see the Changefeed Messages page.
Google Cloud Pub/Sub
This feature is in preview and subject to change. To share feedback and/or issues, contact Support.
New in v22.1: Changefeeds can deliver messages to a Google Cloud Pub/Sub sink, which is integrated with Google Cloud Platform.
A Pub/Sub sink URI follows this example:
'gcpubsub://{project name}?region={region}&topic_name={topic name}&AUTH=specified&CREDENTIALS={base64-encoded key}'
URI Parameter | Description |
---|---|
`project name` | The Google Cloud Project name. |
`region` | (Required) The single region to which all output will be sent. |
`topic_name` | (Optional) The topic name to which messages will be sent. See the following section on Topic Naming for detail on how topics are created. |
`AUTH` | The authentication parameter can define either `specified` (default) or `implicit` authentication. To use `specified` authentication, pass your Service Account credentials with the URI. To use `implicit` authentication, configure these credentials via an environment variable. See Use Cloud Storage for Bulk Operations for examples of each of these. |
`CREDENTIALS` | (Required with `AUTH=specified`) The base64-encoded credentials of your Google Service Account. |
This table shows the parameters for changefeeds to a specific sink. The `CREATE CHANGEFEED` page provides a list of all the available options.
When using Pub/Sub as your downstream sink, consider the following:
- It only supports `JSON` message format.
- Your Google Service Account must have the Pub/Sub Editor role assigned at the project level.
- You must specify the `region` parameter in the URI to maintain ordering guarantees. Unordered messages are not supported; see Known Limitations for more information.
- Changefeeds connecting to a Pub/Sub sink do not support the `topic_prefix` option.
For more information, read about compatible changefeed options and the Create a changefeed connected to a Google Cloud Pub/Sub sink example.
Pub/Sub topic naming
When you run a `CREATE CHANGEFEED` statement to Pub/Sub, the changefeed will try to create a topic automatically. When you do not specify the topic in the URI with the `topic_name` parameter, the changefeed will use the table name to create the topic name. If the topic already exists in your Pub/Sub sink, the changefeed will write to it. You can also use the `full_table_name` option to create a topic using the fully qualified table name.
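For instance, a sketch of a changefeed that writes all messages to a user-specified topic (the project, region, and topic names are placeholders):

```sql
CREATE CHANGEFEED FOR TABLE users
  INTO 'gcpubsub://my-project?region=us-east1&topic_name=movr-users&AUTH=specified&CREDENTIALS={base64-encoded key}';
```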
The output from `CREATE CHANGEFEED` will display the job ID as well as the topic name(s) that the changefeed will emit to.
You can manually create a topic in your Pub/Sub sink before starting the changefeed. See the Creating a changefeed to Google Cloud Pub/Sub example for more detail. To understand restrictions on user-specified topic names, see Google's documentation on Guidelines to name a topic or subscription.
For a list of compatible parameters and options, see Parameters on the `CREATE CHANGEFEED` page.
Pub/Sub sink messages
The following shows the default JSON messages for a changefeed emitting to Pub/Sub. These changefeed messages were emitted as part of the Create a changefeed connected to a Google Cloud Pub/Sub sink example:
┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────────────────────────────────────────────────────┬────────────┬──────────────────┐
│ DATA │ MESSAGE_ID │ ORDERING_KEY │ ATTRIBUTES │ DELIVERY_ATTEMPT │
├──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────────────────────────────────────────────────────┼────────────┼──────────────────┤
│ {"key":["boston","40ef7cfa-5e16-4bd3-9e14-2f23407a66df"],"value":{"after":{"address":"14980 Gentry Plains Apt. 64","city":"boston","credit_card":"2466765790","id":"40ef7cfa-5e16-4bd3-9e14-2f23407a66df","name":"Vickie Fitzpatrick"}},"topic":"movr-users"} │ 4466153049158588 │ ["boston", "40ef7cfa-5e16-4bd3-9e14-2f23407a66df"] │ │ │
│ {"key":["los angeles","947ae147-ae14-4800-8000-00000000001d"],"value":{"after":{"address":"35627 Chelsey Tunnel Suite 94","city":"los angeles","credit_card":"2099932769","id":"947ae147-ae14-4800-8000-00000000001d","name":"Kenneth Barnes"}},"topic":"movr-users"} │ 4466144577818136 │ ["los angeles", "947ae147-ae14-4800-8000-00000000001d"] │ │ │
│ {"key":["amsterdam","c28f5c28-f5c2-4000-8000-000000000026"],"value":{"after":{"address":"14729 Karen Radial","city":"amsterdam","credit_card":"5844236997","id":"c28f5c28-f5c2-4000-8000-000000000026","name":"Maria Weber"}},"topic":"movr-users"} │ 4466151194002912 │ ["amsterdam", "c28f5c28-f5c2-4000-8000-000000000026"] │ │ │
│ {"key":["new york","6c8ab772-584a-439d-b7b4-fda37767c74c"],"value":{"after":{"address":"34196 Roger Row Suite 6","city":"new york","credit_card":"3117945420","id":"6c8ab772-584a-439d-b7b4-fda37767c74c","name":"James Lang"}},"topic":"movr-users"} │ 4466147099992681 │ ["new york", "6c8ab772-584a-439d-b7b4-fda37767c74c"] │ │ │
│ {"key":["boston","c56dab0a-63e7-4fbb-a9af-54362c481c41"],"value":{"after":{"address":"83781 Ross Overpass","city":"boston","credit_card":"7044597874","id":"c56dab0a-63e7-4fbb-a9af-54362c481c41","name":"Mark Butler"}},"topic":"movr-users"} │ 4466150752442731 │ ["boston", "c56dab0a-63e7-4fbb-a9af-54362c481c41"] │ │ │
│ {"key":["amsterdam","f27e09d5-d7cd-4f88-8b65-abb910036f45"],"value":{"after":{"address":"77153 Donald Road Apt. 62","city":"amsterdam","credit_card":"7531160744","id":"f27e09d5-d7cd-4f88-8b65-abb910036f45","name":"Lisa Sandoval"}},"topic":"movr-users"} │ 4466147182359256 │ ["amsterdam", "f27e09d5-d7cd-4f88-8b65-abb910036f45"] │ │ │
│ {"key":["new york","46d200c0-6924-4cc7-b3c9-3398997acb84"],"value":{"after":{"address":"92843 Carlos Grove","city":"new york","credit_card":"8822366402","id":"46d200c0-6924-4cc7-b3c9-3398997acb84","name":"Mackenzie Malone"}},"topic":"movr-users"} │ 4466142864542016 │ ["new york", "46d200c0-6924-4cc7-b3c9-3398997acb84"] │ │ │
│ {"key":["boston","52ecbb26-0eab-4e0b-a160-90caa6a7d350"],"value":{"after":{"address":"95044 Eric Corner Suite 33","city":"boston","credit_card":"3982363300","id":"52ecbb26-0eab-4e0b-a160-90caa6a7d350","name":"Brett Porter"}},"topic":"movr-users"} │ 4466152539161631 │ ["boston", "52ecbb26-0eab-4e0b-a160-90caa6a7d350"] │ │ │
│ {"key":["amsterdam","ae147ae1-47ae-4800-8000-000000000022"],"value":{"after":{"address":"88194 Angela Gardens Suite 94","city":"amsterdam","credit_card":"4443538758","id":"ae147ae1-47ae-4800-8000-000000000022","name":"Tyler Dalton"}},"topic":"movr-users"} │ 4466151398997150 │ ["amsterdam", "ae147ae1-47ae-4800-8000-000000000022"] │ │ │
│ {"key":["paris","dc28f5c2-8f5c-4800-8000-00000000002b"],"value":{"after":{"address":"2058 Rodriguez Stream","city":"paris","credit_card":"9584502537","id":"dc28f5c2-8f5c-4800-8000-00000000002b","name":"Tony Ortiz"}},"topic":"movr-users"} │ 4466146372222914 │ ["paris", "dc28f5c2-8f5c-4800-8000-00000000002b"] │ │ │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────────────────────────────────────────────────────┴────────────┴──────────────────┘
For an overview of the messages emitted from changefeeds, see the Changefeed Messages page.
Cloud storage sink
Use a cloud storage sink to deliver changefeed data to OLAP or big data systems without requiring transport via Kafka.
Some considerations when using cloud storage sinks:
- Cloud storage sinks only work with `JSON` and emit newline-delimited `JSON` files.
- Cloud storage sinks can be configured to store emitted changefeed messages in one or more subdirectories organized by date. See file partitioning and the General file format examples.
- The supported cloud schemes are: `s3`, `gs`, `azure`, `http`, and `https`.
- Both `http://` and `https://` are cloud storage sinks, not webhook sinks. It is necessary to prefix the scheme with `webhook-` for webhook sinks.
Examples of supported cloud storage sink URIs:
Amazon S3
's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}'
Azure Storage
'azure://{CONTAINER NAME}/{PATH}?AZURE_ACCOUNT_NAME={ACCOUNT NAME}&AZURE_ACCOUNT_KEY={URL-ENCODED KEY}'
Google Cloud Storage
'gs://{BUCKET NAME}/{PATH}?AUTH=specified&CREDENTIALS={ENCODED KEY}'
HTTP
'http://localhost:8080/{PATH}'
Cloud storage parameters
The following table lists the available parameters for cloud storage sink URIs:
URI Parameter | Storage | Description |
---|---|---|
`AWS_ACCESS_KEY_ID` | AWS | The access key ID to your AWS account. |
`AWS_SECRET_ACCESS_KEY` | AWS | The secret access key to your AWS account. |
`AUTH` | AWS S3, GCS | The authentication parameter can define either `specified` (default) or `implicit` authentication. To use `specified` authentication, pass your account credentials with the URI. To use `implicit` authentication, configure these credentials via an environment variable. See Use Cloud Storage for Bulk Operations for examples of each of these. |
`AZURE_ACCOUNT_NAME` | Azure | The name of your Azure account. |
`AZURE_ACCOUNT_KEY` | Azure | The URL-encoded account key for your Azure account. |
`AZURE_ENVIRONMENT` | Azure | The Azure environment that the storage account belongs to. The accepted values are: `AZURECHINACLOUD`, `AZUREGERMANCLOUD`, `AZUREPUBLICCLOUD`, and `AZUREUSGOVERNMENTCLOUD`. These are cloud environments that meet security, compliance, and data privacy requirements for the respective instance of Azure cloud. If the parameter is not specified, it will default to `AZUREPUBLICCLOUD`. |
`CREDENTIALS` | GCS | (Required with `AUTH=specified`) The base64-encoded credentials of your Google Service Account. |
`file_size` | All | The file will be flushed (i.e., written to the sink) when it exceeds the specified file size. This can be used with the `WITH resolved` option, which flushes on a specified cadence. Default: `16MB` |
`partition_format` | All | Specify how changefeed file paths are partitioned in cloud storage sinks. Use `partition_format` with the following values: `daily` organizes files into dated subdirectories, `hourly` adds a further subdirectory level for each hour, and `flat` writes all files to one directory with no date-based partitioning. For example: `CREATE CHANGEFEED FOR TABLE users INTO 'gs://...?AUTH...&partition_format=hourly'` Default: `daily` |
`S3_storage_class` | AWS S3 | Specify the S3 storage class for files created by the changefeed. See Create a changefeed with an S3 storage class for the available classes and an example. Default: `STANDARD` |
`topic_prefix` | All | Adds a prefix to all topic names. For example, `CREATE CHANGEFEED FOR TABLE foo INTO 's3://...?topic_prefix=bar_'` would emit rows under the topic `bar_foo` instead of `foo`. |
This table shows the parameters for changefeeds to a specific sink. The `CREATE CHANGEFEED` page provides a list of all the available options.
Use Cloud Storage for Bulk Operations provides more detail on authentication to cloud storage sinks.
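As an illustrative sketch (the bucket, path, and credential values are placeholders), a changefeed that flushes a file once it exceeds 32 MB and emits resolved timestamps every 10 seconds:

```sql
CREATE CHANGEFEED FOR TABLE users
  INTO 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEY ID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}&file_size=32MB'
  WITH resolved = '10s';
```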
Cloud storage sink messages
The following shows the default JSON messages for a changefeed emitting to a cloud storage sink:
{
"after":{
"address":"51438 Janet Valleys",
"city":"boston",
"credit_card":"0904722368",
"id":"33333333-3333-4400-8000-00000000000a",
"name":"Daniel Hernandez MD"
},
"key":[
"boston",
"33333333-3333-4400-8000-00000000000a"
]
}
{
"after":{
"address":"15074 Richard Falls",
"city":"boston",
"credit_card":"0866384459",
"id":"370117cf-d77d-4778-b0b9-01ac17c15a06",
"name":"Cheyenne Morales"
},
"key":[
"boston",
"370117cf-d77d-4778-b0b9-01ac17c15a06"
]
}
{
"after":{
"address":"69687 Jessica Islands Apt. 68",
"city":"boston",
"credit_card":"6837062320",
"id":"3851eb85-1eb8-4200-8000-00000000000b",
"name":"Sarah Wang DDS"
},
"key":[
"boston",
"3851eb85-1eb8-4200-8000-00000000000b"
]
}
. . .
For an overview of the messages emitted from changefeeds, see the Changefeed Messages page.
Webhook sink
This feature is in preview and subject to change. To share feedback and/or issues, contact Support.
Use a webhook sink to deliver changefeed messages to an arbitrary HTTP endpoint.
Example of a webhook sink URL:
'webhook-https://{your-webhook-endpoint}?insecure_tls_skip_verify=true'
The following table lists the parameters you can use in your webhook URI:
URI Parameter | Description |
---|---|
`ca_cert` | The base64-encoded `ca_cert` file. Specify `ca_cert` for a webhook sink. Note: To encode your `ca.cert`, run `base64 -w 0 ca.cert`. |
`client_cert` | The base64-encoded Privacy Enhanced Mail (PEM) certificate. This is used with `client_key`. |
`client_key` | The base64-encoded private key for the PEM certificate. This is used with `client_cert`. Note: Client keys are often encrypted. You will receive an error if you pass an encrypted client key in your changefeed statement. To decrypt the client key, run `openssl rsa -in key.pem -out key.decrypt.pem -passin pass:{PASSWORD}`. Once decrypted, be sure to update your changefeed statement to use the new `key.decrypt.pem` file instead. |
`insecure_tls_skip_verify` | If `true`, disable client-side validation of responses. Note that a CA certificate is still required; this parameter means that the client will not verify the certificate. Warning: Use this query parameter with caution, as it creates MITM vulnerabilities unless combined with another method of authentication. Default: `false` |
This table shows the parameters for changefeeds to a specific sink. The `CREATE CHANGEFEED` page provides a list of all the available options.
The following are considerations when using the webhook sink:
- Only supports HTTPS. Use the `insecure_tls_skip_verify` parameter when testing to disable certificate verification; however, this still requires HTTPS and certificates.
- Only supports JSON output format.
- There is no concurrency configurability.
Webhook sink configuration
The `webhook_sink_config` option allows you to configure the flushing and retry behavior of your webhook sink.
The following details the configurable fields:
Field | Type | Description | Default |
---|---|---|---|
`Flush.Messages` | `INT` | When the batch reaches this configured size, it should be flushed (batch sent). | `0` |
`Flush.Bytes` | `INT` | When the total byte size of all the messages in the batch reaches this amount, it should be flushed. | `0` |
`Flush.Frequency` | `INTERVAL` | When this amount of time has passed since the first received message in the batch without it flushing, it should be flushed. | `"0s"` |
`Retry.Max` | `INT` or `STRING` | The maximum amount of time the sink will retry a single HTTP request to send a batch. This value must be positive (> 0). If infinite retries are desired, use `inf`. | `"0s"` |
`Retry.Backoff` | `INTERVAL` | The initial backoff the sink will wait after the first failure. The backoff will double (exponential backoff strategy) until the max is hit. | `"500ms"` |
Setting either `Messages` or `Bytes` to a non-zero value without also setting `Frequency` causes the sink to assume `Frequency` has an infinite value. This configuration is invalid and will cause an error, since the messages could sit in a batch indefinitely if the other conditions do not trigger. Therefore, if either `Messages` or `Bytes` has a non-zero value, a non-zero value for `Frequency` must also be provided.
Some complexities to consider when setting `Flush` fields for batching:

- When all batching parameters are zero (`"Messages"`, `"Bytes"`, and `"Frequency"`), the sink will interpret this configuration as "send batch every time." This is the same as not providing any configuration at all:
{
"Flush": {
"Messages": 0,
"Bytes": 0,
"Frequency": "0s"
}
}
- If one or more fields are set to non-zero values, the sink will interpret any fields with a zero value as infinity. For example, in the following configuration, the sink will send a batch whenever the size reaches 100 messages, or when 5 seconds have passed since the batch was populated with its first message. `Bytes` defaults to `0` in this case, so a batch will never trigger due to a configured byte size:
{
"Flush": {
"Messages": 100,
"Frequency": "5s"
}
}
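Putting the batching configuration together, a hypothetical changefeed statement using the example above (the endpoint is a placeholder):

```sql
CREATE CHANGEFEED FOR TABLE vehicles
  INTO 'webhook-https://{your-webhook-endpoint}?insecure_tls_skip_verify=true'
  WITH webhook_sink_config = '{"Flush": {"Messages": 100, "Frequency": "5s"}}';
```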
Webhook sink messages
The following shows the default JSON messages for a changefeed emitting to a webhook sink. These changefeed messages were emitted as part of the Create a changefeed connected to a Webhook sink example:
"2021/08/24 14":"00":21
{
"payload":[
{
"after":{
"city":"rome",
"creation_time":"2019-01-02T03:04:05",
"current_location":"39141 Travis Curve Suite 87",
"ext":{
"brand":"Schwinn",
"color":"red"
},
"id":"d7b18299-c0c4-4304-9ef7-05ae46fd5ee1",
"dog_owner_id":"5d0c85b5-8866-47cf-a6bc-d032f198e48f",
"status":"in_use",
"type":"bike"
},
"key":[
"rome",
"d7b18299-c0c4-4304-9ef7-05ae46fd5ee1"
],
"topic":"vehicles",
"updated":"1629813621680097993.0000000000"
}
],
"length":1
}
"2021/08/24 14":"00":22
{
"payload":[
{
"after":{
"city":"san francisco",
"creation_time":"2019-01-02T03:04:05",
"current_location":"84888 Wallace Wall",
"ext":{
"color":"black"
},
"id":"020cf7f4-6324-48a0-9f74-6c9010fb1ab4",
"dog_owner_id":"b74ea421-fcaf-4d80-9dcc-d222d49bdc17",
"status":"available",
"type":"scooter"
},
"key":[
"san francisco",
"020cf7f4-6324-48a0-9f74-6c9010fb1ab4"
],
"topic":"vehicles",
"updated":"1629813621680097993.0000000000"
}
],
"length":1
}
For an overview of the messages emitted from changefeeds, see the Changefeed Messages page.