Set up the Matatika platform to deliver and process your data in Google BigQuery in minutes.
Google BigQuery is a cloud-based data warehousing and analytics platform.
It allows users to store, manage, and analyze large datasets using SQL-like queries, and is designed to handle petabyte-scale data while integrating with other Google Cloud Platform services. BigQuery also offers real-time data streaming and machine learning capabilities, making it a powerful tool for data-driven decision-making. Its serverless architecture means users pay only for the queries they run, making it a cost-effective solution for businesses of all sizes.
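As a minimal illustration of querying BigQuery with SQL, the sketch below uses the google-cloud-bigquery Python client against a public dataset. It assumes the client library is installed and that credentials (a service account key or application default credentials) are already configured; the project ID is a placeholder.

```python
# Minimal sketch: run a standard SQL query against a public BigQuery dataset.
# Assumes `pip install google-cloud-bigquery` and that credentials are already
# configured (application default credentials or a service account key).
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # placeholder project ID

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 10
"""

# Submit the query and iterate over the result rows.
for row in client.query(query).result():
    print(f"{row.name}: {row.total}")
```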
The settings available for this connection are summarised below; an example configuration covering a representative subset is sketched after the list.

- The file path to the JSON file containing the credentials for accessing the BigQuery API.
- The JSON object containing the credentials for accessing the BigQuery API.
- The ID of the Google Cloud project that contains the BigQuery dataset to connect to.
- The name of the BigQuery dataset to connect to.
- The geographic location of the BigQuery dataset.
- The number of rows to retrieve per API request.
- Whether to stop processing if an error occurs during data retrieval.
- The maximum amount of time to wait for a response from the API.
- Whether to flatten nested data structures in the BigQuery table.
- The HTTP method to use for API requests.
- Whether to create a BigQuery view based on the query results.
- The name of the Google Cloud Storage bucket to write query results to.
- The level of granularity to use when partitioning query results.
- Whether to cluster query results based on key properties.
- Whether to convert column names to lowercase.
- Whether to quote column names.
- Whether to add an underscore to column names that are invalid.
- Whether to convert column names to snake case.
- The batch mode to use when writing query results to Google Cloud Storage.
- The number of worker processes to use for parallel processing.
- The maximum number of worker processes to use for parallel processing.
- Whether to update existing rows in the destination table if they match the incoming data.
- Whether to overwrite existing rows in the destination table with the incoming data.
- Whether to remove duplicate rows from the incoming data before performing an upsert operation.
- The mapping of source columns to destination columns for streaming data.
- The configuration for the stream mapping.
- Whether to flatten nested data structures in the query results.
- The maximum depth of nested data structures to flatten in the query results.
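Settings like these are typically supplied as a single JSON configuration object for the BigQuery loader. The sketch below builds such an object in Python for illustration only: the key names and values follow common target-bigquery conventions and are assumptions here, not the exact names used by the Matatika platform, so treat it as a starting point and check the connector settings screen or documentation for the definitive names. Only a representative subset of the settings listed above is shown.

```python
# Illustrative sketch: assemble a BigQuery loader config and write it to disk.
# All key names and values are assumed examples, not authoritative.
import json

config = {
    "credentials_path": "/path/to/service-account.json",  # or an inline credentials JSON object
    "project": "my-gcp-project",        # Google Cloud project ID
    "dataset": "analytics",             # destination BigQuery dataset
    "location": "EU",                   # geographic location of the dataset
    "batch_size": 500,                  # rows per API request
    "fail_fast": True,                  # stop processing on the first error
    "timeout": 600,                     # seconds to wait for an API response
    "denormalized": False,              # flatten nested structures in the table
    "generate_view": False,             # create a view based on the results
    "bucket": "my-staging-bucket",      # GCS bucket to write results to
    "partition_granularity": "day",     # partitioning granularity
    "cluster_on_key_properties": True,  # cluster results on key properties
    "column_name_transforms": {
        "lower": True,                  # lowercase column names
        "quote": False,                 # quote column names
        "add_underscore_when_invalid": True,
        "snake_case": True,
    },
    "upsert": True,                     # update existing rows that match incoming data
    "overwrite": False,                 # replace existing rows with incoming data
    "dedupe_before_upsert": True,       # drop duplicate rows before upserting
    "stream_maps": {},                  # mapping of source to destination fields
    "stream_map_config": {},            # extra configuration for the stream maps
    "flattening_enabled": True,         # flatten nested structures in the results
    "flattening_max_depth": 2,          # maximum depth to flatten
}

# Write the config where the loader can read it.
with open("bigquery_config.json", "w") as f:
    json.dump(config, f, indent=2)
```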
Collect and process data from hundreds of sources and tools with Google BigQuery.