BigQuery offers two ways to ingest data:
1. Streaming inserts, which write rows directly into a table.
2. Batch load jobs, which read files from a GCS bucket and load them into BigQuery in bulk.
The main difference between the two approaches is cost. Streaming inserts are billed by the volume of data streamed (with a minimum billed size per row) and are subject to a default quota of 100,000 rows per second. Batch loads, by contrast, are free: you only pay for the data stored in the GCS bucket.
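To make the cost difference concrete, here is a back-of-the-envelope sketch. The price and the per-row minimum below are assumptions based on the legacy streaming-insert pricing model (roughly $0.05 per GB with a 1 KB minimum billed per row); check the current BigQuery pricing page before relying on the numbers.

```python
PRICE_PER_GB = 0.05   # assumption: legacy streaming-insert price (USD/GB)
MIN_ROW_BYTES = 1024  # assumption: 1 KB minimum billed per row

def streaming_cost(rows: int, avg_row_bytes: int) -> float:
    """Estimated USD cost of streaming `rows` rows into BigQuery."""
    # Each row is billed at least MIN_ROW_BYTES, so small messages
    # pay for 1 KB even if they are smaller.
    billed_bytes = rows * max(avg_row_bytes, MIN_ROW_BYTES)
    return billed_bytes / 1024**3 * PRICE_PER_GB

# 100 million 500-byte messages are each billed at the 1 KB minimum:
print(round(streaming_cost(100_000_000, 500), 2))  # → 4.77
```

The same 100 million messages loaded as files from GCS would cost nothing in load fees, which is why the batch route is attractive for high-volume, latency-tolerant pipelines.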
The Kafka Connect BigQuery sink connector uses the streaming API by default, but a beta feature (`enableBatchLoad`) enables the other route: records are first written to a GCS bucket and then loaded into BigQuery via batch load jobs.
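A minimal connector configuration might look like the sketch below. `enableBatchLoad` takes the list of topics to route through GCS, and `gcsBucketName` names the staging bucket; the topic, project, dataset, and bucket values here are placeholders, and exact property names should be checked against the connector's documentation.

```json
{
  "name": "bigquery-sink",
  "config": {
    "connector.class": "com.wepay.kafka.connect.bigquery.BigQuerySinkConnector",
    "topics": "my-topic",
    "project": "my-gcp-project",
    "defaultDataset": "my_dataset",
    "keyfile": "/path/to/service-account.json",

    "enableBatchLoad": "my-topic",
    "gcsBucketName": "my-staging-bucket",
    "batchLoadIntervalSec": "120"
  }
}
```

With this configuration the connector stages records for `my-topic` in the bucket and triggers a load job on the configured interval instead of calling the streaming API.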
See the Confluent documentation for the BigQuery sink connector for the full list of configuration options.
Get in touch to find out more! We'll be happy to discuss your needs.
Send us an email at contact@cesaro.io