
Dataplugins

Dataplugins define a source of data from a given repository. Matatika provides a number of pre-configured, platform-wide dataplugins out of the box, as well as the ability to create custom dataplugins through the API. From these, pipeline jobs can be run to load data into a workspace.


Objects

Dataplugin

| Path | Type | Format | Description |
| --- | --- | --- | --- |
| id | String | Version 4 UUID | The dataplugin ID |
| name | String | | The dataplugin name |
| description | String | | A description of the dataplugin |
| repositoryUrl | String | URL | The dataplugin repository URL |
| settings | Array of Setting | | The dataplugin settings |
{
  "id" : "862ec863-59b0-41d0-a8f1-30dda77a75f3",
  "pluginType" : "LOADER",
  "name" : "target-postgres",
  "namespace" : "postgres_transferwise",
  "variant" : "matatika",
  "label" : "Postgres Warehouse",
  "description" : "Postgres Warehouse is a data warehousing solution built on top of the Postgres database management system.\n\nPostgres Warehouse is designed to handle large volumes of data and complex queries, making it an ideal solution for businesses that need to store and analyze large amounts of data. It provides a number of features that are specifically tailored to data warehousing, such as columnar storage, parallel processing, and support for advanced analytics. Additionally, Postgres Warehouse is highly scalable, allowing businesses to easily add more resources as their data needs grow. Overall, Postgres Warehouse is a powerful and flexible data warehousing solution that can help businesses make better decisions by providing them with the insights they need to succeed.\n### Prerequisites\nThe process of obtaining the required settings for connecting to a Postgres Warehouse may vary depending on the specific setup and configuration of the database. However, here are some general ways to obtain each of the required settings:\n\n- User: The user is typically created when the database is set up. You can ask the database administrator or check the database documentation to find out the username.\n- Password: The password is also typically created when the database is set up. You can ask the database administrator or check the database documentation to find out the password.\n- Host: The host is the server where the database is located. You can ask the database administrator or check the database documentation to find out the host name or IP address.\n- Port: The port is the number that the database listens on for incoming connections. The default port for Postgres is 5432, but it may be different depending on the configuration. You can ask the database administrator or check the database documentation to find out the port number.\n- Database Name: The database name is the name of the specific database you want to connect to. 
You can ask the database administrator or check the database documentation to find out the database name.\n- Default Target Schema: The default target schema is the schema that you want to use as the default when connecting to the database. This may be set up by the database administrator or you may need to create it yourself. You can ask the database administrator or check the database documentation to find out the default target schema.",
  "logoUrl" : "/assets/logos/loaders/postgres.png",
  "hidden" : false,
  "docs" : "https://www.matatika.com/data-details/target-postgres/",
  "pipUrl" : "git+https://github.com/Matatika/[email protected]",
  "repo" : "git+https://github.com/Matatika/[email protected]",
  "capabilities" : [ ],
  "select" : [ ],
  "update" : { },
  "vars" : { },
  "settings" : [ {
    "name" : "user",
    "aliases" : [ "username" ],
    "label" : "User",
    "kind" : "STRING",
    "description" : "The username used to connect to the Postgres Warehouse.",
    "required" : "true",
    "protected" : false
  }, {
    "name" : "password",
    "aliases" : [ ],
    "label" : "Password",
    "kind" : "PASSWORD",
    "description" : "The password used to authenticate the user.",
    "required" : "true",
    "protected" : false
  }, {
    "name" : "host",
    "aliases" : [ "address" ],
    "label" : "Host",
    "kind" : "STRING",
    "description" : "The hostname or IP address of the Postgres Warehouse server.",
    "required" : "true",
    "protected" : false
  }, {
    "name" : "port",
    "aliases" : [ ],
    "label" : "Port",
    "value" : "5432",
    "kind" : "INTEGER",
    "description" : "The port number used to connect to the Postgres Warehouse server.",
    "required" : "true",
    "protected" : false
  }, {
    "name" : "dbname",
    "aliases" : [ "database" ],
    "label" : "Database Name",
    "kind" : "STRING",
    "description" : "The name of the database to connect to.",
    "required" : "true",
    "protected" : false
  }, {
    "name" : "default_target_schema",
    "aliases" : [ ],
    "label" : "Default Target Schema",
    "value" : "analytics",
    "kind" : "STRING",
    "description" : "The default schema to use when writing data to the Postgres Warehouse.",
    "required" : "true",
    "protected" : false
  }, {
    "name" : "ssl",
    "aliases" : [ ],
    "label" : "SSL",
    "value" : "false",
    "kind" : "HIDDEN",
    "description" : "Whether or not to use SSL encryption when connecting to the Postgres Warehouse.",
    "protected" : false,
    "value_post_processor" : "STRINGIFY"
  }, {
    "name" : "batch_size_rows",
    "aliases" : [ ],
    "label" : "Batch Size Rows",
    "value" : "100000",
    "kind" : "INTEGER",
    "description" : "The number of rows to write to the Postgres Warehouse in each batch.",
    "protected" : false
  }, {
    "name" : "underscore_camel_case_fields",
    "aliases" : [ ],
    "label" : "Underscore Camel Case Fields",
    "value" : "true",
    "kind" : "HIDDEN",
    "description" : "Whether or not to convert field names from camel case to underscore-separated format.",
    "protected" : false
  }, {
    "name" : "flush_all_streams",
    "aliases" : [ ],
    "label" : "Flush All Streams",
    "value" : "false",
    "kind" : "HIDDEN",
    "description" : "Whether or not to flush all streams to the Postgres Warehouse before closing the connection.",
    "protected" : false
  }, {
    "name" : "parallelism",
    "aliases" : [ ],
    "label" : "Parallelism",
    "value" : "0",
    "kind" : "HIDDEN",
    "description" : "The number of threads to use when writing data to the Postgres Warehouse.",
    "protected" : false
  }, {
    "name" : "parallelism_max",
    "aliases" : [ ],
    "label" : "Max Parallelism",
    "value" : "16",
    "kind" : "HIDDEN",
    "description" : "The maximum number of threads to use when writing data to the Postgres Warehouse.",
    "protected" : false
  }, {
    "name" : "default_target_schema_select_permission",
    "aliases" : [ ],
    "label" : "Default Target Schema Select Permission",
    "kind" : "HIDDEN",
    "description" : "The permission level required to select data from the default target schema.",
    "protected" : false
  }, {
    "name" : "schema_mapping",
    "aliases" : [ ],
    "label" : "Schema Mapping",
    "kind" : "HIDDEN",
    "description" : "A mapping of source schema names to target schema names.",
    "protected" : false
  }, {
    "name" : "add_metadata_columns",
    "aliases" : [ ],
    "label" : "Add Metadata Columns",
    "value" : "true",
    "kind" : "HIDDEN",
    "description" : "Whether or not to add metadata columns to the target table.",
    "protected" : false
  }, {
    "name" : "hard_delete",
    "aliases" : [ ],
    "label" : "Hard Delete",
    "value" : "false",
    "kind" : "HIDDEN",
    "description" : "Whether or not to perform hard deletes when deleting data from the Postgres Warehouse.",
    "protected" : false
  }, {
    "name" : "data_flattening_max_level",
    "aliases" : [ ],
    "label" : "Data Flattening Max Level",
    "value" : "10",
    "kind" : "HIDDEN",
    "description" : "The maximum level of nested data structures to flatten when writing data to the Postgres Warehouse.",
    "protected" : false
  }, {
    "name" : "primary_key_required",
    "aliases" : [ ],
    "label" : "Primary Key Required",
    "value" : "false",
    "kind" : "BOOLEAN",
    "description" : "Whether or not a primary key is required for the target table.",
    "protected" : false
  }, {
    "name" : "validate_records",
    "aliases" : [ ],
    "label" : "Validate Records",
    "value" : "false",
    "kind" : "BOOLEAN",
    "description" : "Whether or not to validate records before writing them to the Postgres Warehouse.",
    "protected" : false
  }, {
    "name" : "temp_dir",
    "aliases" : [ ],
    "label" : "Temporary Directory",
    "kind" : "HIDDEN",
    "description" : "The directory to use for temporary files when writing data to the Postgres Warehouse.",
    "protected" : false
  } ],
  "variants" : [ ],
  "commands" : { },
  "matatikaHidden" : false,
  "requires" : [ ],
  "fullDescription" : "Postgres Warehouse is a data warehousing solution built on top of the Postgres database management system.\n\nPostgres Warehouse is designed to handle large volumes of data and complex queries, making it an ideal solution for businesses that need to store and analyze large amounts of data. It provides a number of features that are specifically tailored to data warehousing, such as columnar storage, parallel processing, and support for advanced analytics. Additionally, Postgres Warehouse is highly scalable, allowing businesses to easily add more resources as their data needs grow. Overall, Postgres Warehouse is a powerful and flexible data warehousing solution that can help businesses make better decisions by providing them with the insights they need to succeed.\n### Prerequisites\nThe process of obtaining the required settings for connecting to a Postgres Warehouse may vary depending on the specific setup and configuration of the database. However, here are some general ways to obtain each of the required settings:\n\n- User: The user is typically created when the database is set up. You can ask the database administrator or check the database documentation to find out the username.\n- Password: The password is also typically created when the database is set up. You can ask the database administrator or check the database documentation to find out the password.\n- Host: The host is the server where the database is located. You can ask the database administrator or check the database documentation to find out the host name or IP address.\n- Port: The port is the number that the database listens on for incoming connections. The default port for Postgres is 5432, but it may be different depending on the configuration. You can ask the database administrator or check the database documentation to find out the port number.\n- Database Name: The database name is the name of the specific database you want to connect to. 
You can ask the database administrator or check the database documentation to find out the database name.\n- Default Target Schema: The default target schema is the schema that you want to use as the default when connecting to the database. This may be set up by the database administrator or you may need to create it yourself. You can ask the database administrator or check the database documentation to find out the default target schema.\n\n## Settings\n\n\n### User\n\nThe username used to connect to the Postgres Warehouse.\n\n### Password\n\nThe password used to authenticate the user.\n\n### Host\n\nThe hostname or IP address of the Postgres Warehouse server.\n\n### Port\n\nThe port number used to connect to the Postgres Warehouse server.\n\n### Database Name\n\nThe name of the database to connect to.\n\n### Default Target Schema\n\nThe default schema to use when writing data to the Postgres Warehouse.\n\n### Batch Size Rows\n\nThe number of rows to write to the Postgres Warehouse in each batch.\n\n### Primary Key Required\n\nWhether or not a primary key is required for the target table.\n\n### Validate Records\n\nWhether or not to validate records before writing them to the Postgres Warehouse.",
  "_links" : {
    "self" : {
      "href" : "https://catalog.matatika.com/api/dataplugins/862ec863-59b0-41d0-a8f1-30dda77a75f3"
    },
    "update dataplugin" : {
      "href" : "https://catalog.matatika.com/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/dataplugins/862ec863-59b0-41d0-a8f1-30dda77a75f3",
      "type" : "PUT"
    }
  }
}
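The settings array in the example above drives connection configuration for the plugin. As an illustrative sketch (the helper below is not part of the API), a client can extract the required settings from a dataplugin response. Note that required is serialised as the string "true", not a JSON boolean:

```python
# Minimal subset of the dataplugin response shown above
dataplugin = {
    "name": "target-postgres",
    "settings": [
        {"name": "user", "kind": "STRING", "required": "true"},
        {"name": "ssl", "kind": "HIDDEN"},
        {"name": "batch_size_rows", "kind": "INTEGER"},
    ],
}


def required_settings(plugin):
    # 'required' is the string "true" in the response, not a boolean
    return [
        s["name"]
        for s in plugin.get("settings", [])
        if s.get("required") == "true"
    ]


print(required_settings(dataplugin))  # ['user']
```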

Setting

| Path | Type | Format | Description |
| --- | --- | --- | --- |
| name | String | | The setting name |
| value | String | | The setting default value |
| label | String | | The setting label |
| protected | Boolean | | The setting protection status |
| kind | String | Setting Kind | The setting kind |
| description | String | | A description of the setting |
| placeholder | String | | The setting placeholder text |
| envAliases | Array of String | | Environment variable aliases for the setting |
| documentation | String | URL | The setting documentation URL |
| oauth | OAuth | | The setting OAuth configuration |
| env | String | | |

OAuth

| Path | Type | Format | Description |
| --- | --- | --- | --- |
| provider | String | | The OAuth provider |

Formats

Setting Kind

String

| Value | Description |
| --- | --- |
| STRING | String setting |
| INTEGER | Integer setting |
| PASSWORD | Password setting |
| HIDDEN | Hidden setting |
| BOOLEAN | Boolean setting |
| DATE_ISO8601 | ISO 8601 date setting |
| EMAIL | Email setting |
| OAUTH | OAuth setting |
| FILE | File setting |
| ARRAY | Array setting |
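A client rendering settings will typically treat some kinds specially, for example masking PASSWORD values and omitting HIDDEN ones. A minimal sketch of this, assuming a settings UI (the helper name and masking policy are illustrative, not part of the API):

```python
# Kinds whose values should never be shown in plain text
SENSITIVE_KINDS = {"PASSWORD", "OAUTH"}


def display_value(setting):
    """Render a setting's default value for display, masking sensitive kinds."""
    value = setting.get("value", "")
    if setting.get("kind") in SENSITIVE_KINDS:
        return "*" * 8 if value else ""
    if setting.get("kind") == "HIDDEN":
        return None  # HIDDEN settings are not surfaced in a UI at all
    return value


print(display_value({"name": "port", "kind": "INTEGER", "value": "5432"}))       # 5432
print(display_value({"name": "password", "kind": "PASSWORD", "value": "s3cret"}))  # ********
```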

Requests


View all supported dataplugins

GET

/api/dataplugins

Returns all dataplugins supported by Matatika.

Request

Example Snippets

cURL

curl -H "Authorization: Bearer $ACCESS_TOKEN" 'https://catalog.matatika.com:443/api/dataplugins' -i -X GET \
    -H 'Accept: application/json, application/javascript, text/javascript, text/json' \
    -H 'Content-Type: application/json'

Python (requests)

import requests

url = "https://catalog.matatika.com:443/api/dataplugins"

headers = {
  'Authorization': f"Bearer {ACCESS_TOKEN}"
}

response = requests.request("GET", url, headers=headers)

print(response.text)
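The response is a HAL document, so the collection itself sits under _embedded.dataplugins. A sketch of walking the parsed body (the sample payload is trimmed to the relevant structure; in practice it would come from response.json()):

```python
# Trimmed HAL collection body, matching the response structure below
body = {
    "_embedded": {
        "dataplugins": [
            {"name": "analyze-sit", "pluginType": "FILE"},
            {"name": "target-redshift", "pluginType": "LOADER"},
        ]
    }
}

# Missing keys default to an empty list rather than raising
plugins = body.get("_embedded", {}).get("dataplugins", [])
for plugin in plugins:
    print(f"{plugin['pluginType']:>10}  {plugin['name']}")
```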

Response

200 OK

Dataplugin collection with HAL links.

{
  "_embedded" : {
    "dataplugins" : [ {
      "id" : "1149bda6-c93f-4db6-a22c-f95afd60d575",
      "pluginType" : "FILE",
      "name" : "analyze-sit",
      "namespace" : "tap_matatika_sit",
      "variant" : "matatika",
      "hidden" : false,
      "pipUrl" : "git+https://github.com/Matatika/analyze-sit.git",
      "repo" : "https://github.com/Matatika/analyze-sit",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : {
        "analyze/datasets/tap-matatika-sit/user-ages.yml" : "true",
        "analyze/datasets/tap-matatika-sit/user-genders.yml" : "true"
      },
      "vars" : { },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : true,
      "requires" : [ {
        "id" : "931124c6-882f-4f0d-b0ca-6db09f1e1948",
        "pluginType" : "EXTRACTOR",
        "name" : "tap-matatika-sit",
        "namespace" : "tap_matatika_sit",
        "variant" : "matatika",
        "label" : "Matatika SIT",
        "description" : "Test extractor based on tap-spreadsheets-anywhere used during Matatika SIT runs",
        "logoUrl" : "/assets/images/datasource/tap-matatika-sit.svg",
        "hidden" : false,
        "docs" : "https://meltano.com/plugins/extractors/spreadsheets-anywhere.html",
        "pipUrl" : "git+https://github.com/ets/tap-spreadsheets-anywhere.git",
        "repo" : "https://github.com/ets/tap-spreadsheets-anywhere",
        "executable" : "tap-spreadsheets-anywhere",
        "capabilities" : [ "DISCOVER", "STATE", "CATALOG" ],
        "select" : [ ],
        "update" : { },
        "vars" : { },
        "settings" : [ {
          "name" : "tables",
          "aliases" : [ ],
          "label" : "Tables",
          "value" : "[{\"path\":\"https://raw.githubusercontent.com/Matatika/matatika-examples/master/example_data\",\"name\":\"gitflixusers\",\"pattern\":\"GitFlixUsers.csv\",\"start_date\":\"2021-01-01T00:00:00Z\",\"key_properties\":[\"id\"],\"format\":\"csv\"}]",
          "kind" : "ARRAY",
          "description" : "A setting in Matatika SIT that allows users to view and manage tables of data.",
          "protected" : false
        } ],
        "variants" : [ ],
        "commands" : { },
        "matatikaHidden" : true,
        "requires" : [ ],
        "fullDescription" : "Test extractor based on tap-spreadsheets-anywhere used during Matatika SIT runs\n\n## Settings\n\n\n### Tables\n\nA setting in Matatika SIT that allows users to view and manage tables of data."
      } ],
      "fullDescription" : "",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/1149bda6-c93f-4db6-a22c-f95afd60d575"
        }
      }
    }, {
      "id" : "3d0d16b1-6b79-441b-987c-d9cc41ee6e73",
      "pluginType" : "LOADER",
      "name" : "target-redshift",
      "namespace" : "target_redshift",
      "variant" : "transferwise",
      "label" : "Amazon Redshift",
      "description" : "Amazon Redshift is a cloud-based data warehousing service. \n\nAmazon Redshift allows businesses to store and analyze large amounts of data in a cost-effective and scalable way. It can handle petabyte-scale data warehouses and offers fast query performance using SQL. It also integrates with other AWS services such as S3, EMR, and Kinesis. With Redshift, businesses can easily manage their data and gain insights to make informed decisions.",
      "logoUrl" : "/assets/logos/loaders/redshift.png",
      "hidden" : false,
      "docs" : "https://www.matatika.com/data-details/target-redshift/",
      "pipUrl" : "pipelinewise-target-redshift",
      "repo" : "https://github.com/transferwise/pipelinewise-target-redshift",
      "executable" : "target-redshift",
      "capabilities" : [ "DATATYPE_FAILSAFE", "RECORD_FLATTENING", "SOFT_DELETE", "HARD_DELETE", "ACTIVATE_VERSION" ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "host",
        "aliases" : [ ],
        "label" : "Host",
        "kind" : "STRING",
        "description" : "The endpoint URL for the Amazon Redshift cluster.",
        "protected" : false
      }, {
        "name" : "port",
        "aliases" : [ ],
        "label" : "Port",
        "value" : "5439",
        "kind" : "INTEGER",
        "description" : "The port number on which the Amazon Redshift cluster is listening.",
        "protected" : false
      }, {
        "name" : "dbname",
        "aliases" : [ ],
        "label" : "Database Name",
        "kind" : "STRING",
        "description" : "The name of the Amazon Redshift database to connect to.",
        "protected" : false
      }, {
        "name" : "user",
        "aliases" : [ ],
        "label" : "User name",
        "kind" : "STRING",
        "description" : "The user name to use when connecting to the Amazon Redshift cluster.",
        "protected" : false
      }, {
        "name" : "password",
        "aliases" : [ ],
        "label" : "Password",
        "kind" : "PASSWORD",
        "description" : "The password to use when connecting to the Amazon Redshift cluster.",
        "protected" : false
      }, {
        "name" : "s3_bucket",
        "aliases" : [ ],
        "label" : "S3 Bucket name",
        "kind" : "STRING",
        "description" : "The name of the Amazon S3 bucket where the data to be loaded into Amazon Redshift is stored.",
        "protected" : false
      }, {
        "name" : "default_target_schema",
        "aliases" : [ ],
        "label" : "Default Target Schema",
        "value" : "$MELTANO_EXTRACT__LOAD_SCHEMA",
        "kind" : "STRING",
        "description" : "The default schema to use when loading data into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "aws_profile",
        "aliases" : [ ],
        "label" : "AWS Profile Name",
        "kind" : "STRING",
        "description" : "The name of the AWS profile to use when connecting to Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "aws_access_key_id",
        "aliases" : [ ],
        "label" : "AWS S3 Access Key ID",
        "kind" : "PASSWORD",
        "description" : "The access key ID for the AWS account that owns the Amazon S3 bucket.",
        "protected" : false
      }, {
        "name" : "aws_secret_access_key",
        "aliases" : [ ],
        "label" : "AWS S3 Secret Access Key",
        "kind" : "PASSWORD",
        "description" : "The secret access key for the AWS account that owns the Amazon S3 bucket.",
        "protected" : false
      }, {
        "name" : "aws_session_token",
        "aliases" : [ ],
        "label" : "AWS S3 Session Token",
        "kind" : "PASSWORD",
        "description" : "The session token for the AWS account that owns the Amazon S3 bucket.",
        "protected" : false
      }, {
        "name" : "aws_redshift_copy_role_arn",
        "aliases" : [ ],
        "label" : "AWS Redshift COPY role ARN",
        "kind" : "STRING",
        "description" : "The ARN of the AWS Identity and Access Management (IAM) role to use when loading data into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "s3_acl",
        "aliases" : [ ],
        "label" : "AWS S3 ACL",
        "kind" : "STRING",
        "description" : "The access control list (ACL) to apply to the Amazon S3 objects being loaded into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "s3_key_prefix",
        "aliases" : [ ],
        "label" : "S3 Key Prefix",
        "kind" : "STRING",
        "description" : "The prefix to apply to the Amazon S3 object keys being loaded into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "copy_options",
        "aliases" : [ ],
        "label" : "COPY options",
        "value" : "EMPTYASNULL BLANKSASNULL TRIMBLANKS TRUNCATECOLUMNS TIMEFORMAT 'auto' COMPUPDATE OFF STATUPDATE OFF",
        "kind" : "STRING",
        "description" : "Additional options to use when loading data into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "batch_size_rows",
        "aliases" : [ ],
        "label" : "Batch Size Rows",
        "value" : "100000",
        "kind" : "INTEGER",
        "description" : "The number of rows to load into Amazon Redshift at a time.",
        "protected" : false
      }, {
        "name" : "flush_all_streams",
        "aliases" : [ ],
        "label" : "Flush All Streams",
        "value" : "false",
        "kind" : "BOOLEAN",
        "description" : "Whether to flush all streams to Amazon Redshift before disconnecting.",
        "protected" : false
      }, {
        "name" : "parallelism",
        "aliases" : [ ],
        "label" : "Parallelism",
        "value" : "0",
        "kind" : "INTEGER",
        "description" : "The number of streams to use when loading data into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "max_parallelism",
        "aliases" : [ ],
        "label" : "Max Parallelism",
        "value" : "16",
        "kind" : "INTEGER",
        "description" : "The maximum number of streams to use when loading data into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "default_target_schema_select_permissions",
        "aliases" : [ ],
        "label" : "Default Target Schema Select Permission",
        "kind" : "STRING",
        "description" : "The permission to use when selecting data from the default target schema.",
        "protected" : false
      }, {
        "name" : "schema_mapping",
        "aliases" : [ ],
        "label" : "Scema Mapping",
        "kind" : "OBJECT",
        "description" : "A mapping of source schema names to target schema names.",
        "protected" : false
      }, {
        "name" : "disable_table_cache",
        "aliases" : [ ],
        "label" : "Disable Table Cache",
        "value" : "false",
        "kind" : "BOOLEAN",
        "description" : "Whether to disable the table cache when loading data into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "add_metadata_columns",
        "aliases" : [ ],
        "label" : "Add Metdata Columns",
        "value" : "false",
        "kind" : "BOOLEAN",
        "description" : "Whether to add metadata columns to the Amazon Redshift table being loaded.",
        "protected" : false
      }, {
        "name" : "hard_delete",
        "aliases" : [ ],
        "label" : "Hard Delete",
        "value" : "false",
        "kind" : "BOOLEAN",
        "description" : "Whether to perform a hard delete when deleting data from Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "data_flattening_max_level",
        "aliases" : [ ],
        "label" : "Data Flattening Max Level",
        "value" : "0",
        "kind" : "INTEGER",
        "description" : "The maximum level of data flattening to perform when loading data into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "primary_key_required",
        "aliases" : [ ],
        "label" : "Primary Key Required",
        "value" : "true",
        "kind" : "BOOLEAN",
        "description" : "Whether a primary key is required when loading data into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "validate_records",
        "aliases" : [ ],
        "label" : "Validate Records",
        "value" : "false",
        "kind" : "BOOLEAN",
        "description" : "Whether to validate records before loading them into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "skip_updates",
        "aliases" : [ ],
        "label" : "Skip Updates",
        "value" : "false",
        "kind" : "BOOLEAN",
        "description" : "Whether to skip updates when loading data into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "compression",
        "aliases" : [ ],
        "label" : "Compression",
        "kind" : "OPTIONS",
        "description" : "The compression type to use when loading data into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "slices",
        "aliases" : [ ],
        "label" : "Slices",
        "value" : "1",
        "kind" : "INTEGER",
        "description" : "The number of slices to use when loading data into Amazon Redshift.",
        "protected" : false
      }, {
        "name" : "temp_dir",
        "aliases" : [ ],
        "label" : "Temp Directory",
        "kind" : "STRING",
        "description" : "The directory to use for temporary files when loading data into Amazon Redshift.",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "Amazon Redshift is a cloud-based data warehousing service. \n\nAmazon Redshift allows businesses to store and analyze large amounts of data in a cost-effective and scalable way. It can handle petabyte-scale data warehouses and offers fast query performance using SQL. It also integrates with other AWS services such as S3, EMR, and Kinesis. With Redshift, businesses can easily manage their data and gain insights to make informed decisions.\n\n## Settings\n\n\n### Host\n\nThe endpoint URL for the Amazon Redshift cluster.\n\n### Port\n\nThe port number on which the Amazon Redshift cluster is listening.\n\n### Database Name\n\nThe name of the Amazon Redshift database to connect to.\n\n### User name\n\nThe user name to use when connecting to the Amazon Redshift cluster.\n\n### Password\n\nThe password to use when connecting to the Amazon Redshift cluster.\n\n### S3 Bucket name\n\nThe name of the Amazon S3 bucket where the data to be loaded into Amazon Redshift is stored.\n\n### Default Target Schema\n\nThe default schema to use when loading data into Amazon Redshift.\n\n### AWS Profile Name\n\nThe name of the AWS profile to use when connecting to Amazon Redshift.\n\n### AWS S3 Access Key ID\n\nThe access key ID for the AWS account that owns the Amazon S3 bucket.\n\n### AWS S3 Secret Access Key\n\nThe secret access key for the AWS account that owns the Amazon S3 bucket.\n\n### AWS S3 Session Token\n\nThe session token for the AWS account that owns the Amazon S3 bucket.\n\n### AWS Redshift COPY role ARN\n\nThe ARN of the AWS Identity and Access Management (IAM) role to use when loading data into Amazon Redshift.\n\n### AWS S3 ACL\n\nThe access control list (ACL) to apply to the Amazon S3 objects being loaded into Amazon Redshift.\n\n### S3 Key Prefix\n\nThe prefix to apply to the Amazon S3 object keys being loaded into Amazon Redshift.\n\n### COPY options\n\nAdditional options to use when loading data into Amazon Redshift.\n\n### Batch Size 
Rows\n\nThe number of rows to load into Amazon Redshift at a time.\n\n### Flush All Streams\n\nWhether to flush all streams to Amazon Redshift before disconnecting.\n\n### Parallelism\n\nThe number of streams to use when loading data into Amazon Redshift.\n\n### Max Parallelism\n\nThe maximum number of streams to use when loading data into Amazon Redshift.\n\n### Default Target Schema Select Permission\n\nThe permission to use when selecting data from the default target schema.\n\n### Scema Mapping\n\nA mapping of source schema names to target schema names.\n\n### Disable Table Cache\n\nWhether to disable the table cache when loading data into Amazon Redshift.\n\n### Add Metdata Columns\n\nWhether to add metadata columns to the Amazon Redshift table being loaded.\n\n### Hard Delete\n\nWhether to perform a hard delete when deleting data from Amazon Redshift.\n\n### Data Flattening Max Level\n\nThe maximum level of data flattening to perform when loading data into Amazon Redshift.\n\n### Primary Key Required\n\nWhether a primary key is required when loading data into Amazon Redshift.\n\n### Validate Records\n\nWhether to validate records before loading them into Amazon Redshift.\n\n### Skip Updates\n\nWhether to skip updates when loading data into Amazon Redshift.\n\n### Compression\n\nThe compression type to use when loading data into Amazon Redshift.\n\n### Slices\n\nThe number of slices to use when loading data into Amazon Redshift.\n\n### Temp Directory\n\nThe directory to use for temporary files when loading data into Amazon Redshift.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/3d0d16b1-6b79-441b-987c-d9cc41ee6e73"
        }
      }
    }, {
      "id" : "5d6112bd-e5bc-4986-ab46-f53356a1c6de",
      "pluginType" : "EXTRACTOR",
      "name" : "tap-spreadsheets-azure",
      "namespace" : "tap_spreadsheets_anywhere",
      "variant" : "matatika",
      "label" : "Spreadsheets Azure",
      "description" : "Spreadsheets Azure is a software tool that allows users sync data from spreadsheets stored in azure into their chosen targets.\n### Prerequisites\nTo obtain the Tables required setting for connecting to Spreadsheets Anywhere, you need to have access to the spreadsheet that you want to connect to. Once you have access, you can identify the name of the table or tables that you want to connect to. The table name should be entered in the appropriate field when setting up the connection to Spreadsheets Anywhere.\nThe Azure Storage Connection String is your credential to connect to azure.",
      "logoUrl" : "/assets/images/datasource/tap-spreadsheets-anywhere.png",
      "hidden" : false,
      "docs" : "https://www.matatika.com/data-details/tap-spreadsheets-azure/",
      "pipUrl" : "git+https://github.com/Matatika/[email protected]",
      "repo" : "https://github.com/Matatika/tap-spreadsheets-anywhere",
      "executable" : "tap-spreadsheets-anywhere",
      "capabilities" : [ "DISCOVER", "STATE", "CATALOG" ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "tables",
        "aliases" : [ ],
        "label" : "Tables",
        "kind" : "ARRAY",
        "description" : "A setting in Spreadsheets Anywhere that allows users to select which tables they want to connect to and use in their application.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "azure_storage_connection_string",
        "aliases" : [ ],
        "label" : "Azure Storage Connection String",
        "kind" : "PASSWORD",
        "description" : "Setting to allow users to provide Azure connection config.",
        "env" : "AZURE_STORAGE_CONNECTION_STRING",
        "required" : "true",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "Spreadsheets Azure is a software tool that allows users sync data from spreadsheets stored in azure into their chosen targets.\n### Prerequisites\nTo obtain the Tables required setting for connecting to Spreadsheets Anywhere, you need to have access to the spreadsheet that you want to connect to. Once you have access, you can identify the name of the table or tables that you want to connect to. The table name should be entered in the appropriate field when setting up the connection to Spreadsheets Anywhere.\nThe Azure Storage Connection String is your credential to connect to azure.\n\n## Settings\n\n\n### Tables\n\nA setting in Spreadsheets Anywhere that allows users to select which tables they want to connect to and use in their application.\n\n### Azure Storage Connection String\n\nSetting to allow users to provide Azure connection config.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/5d6112bd-e5bc-4986-ab46-f53356a1c6de"
        }
      }
    }, {
      "id" : "6472b907-3f72-4456-9ce3-dd97236ba84f",
      "pluginType" : "FILE",
      "name" : "analyze-google-analytics",
      "namespace" : "tap_google_analytics",
      "variant" : "matatika",
      "label" : "Google Analytics Insights",
      "description" : "Instant insights on users, locations, sources, and sessions from Google Analytics.",
      "hidden" : false,
      "pipUrl" : "git+https://github.com/Matatika/[email protected]",
      "repo" : "https://github.com/Matatika/analyze-google-analytics",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : {
        "*.yml" : "true"
      },
      "vars" : { },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ {
        "id" : "a9ba6541-32a3-47ab-bb96-8c4aef3c4ab4",
        "pluginType" : "TRANSFORM",
        "name" : "dbt-google-analytics",
        "namespace" : "tap_google_analytics",
        "variant" : "matatika",
        "hidden" : false,
        "pipUrl" : "https://github.com/Matatika/[email protected]",
        "repo" : "https://github.com/Matatika/dbt-tap-google-analytics",
        "capabilities" : [ ],
        "select" : [ ],
        "update" : { },
        "vars" : {
          "schema" : ""
        },
        "settings" : [ ],
        "variants" : [ ],
        "commands" : { },
        "matatikaHidden" : false,
        "requires" : [ ],
        "fullDescription" : ""
      } ],
      "fullDescription" : "Instant insights on users, locations, sources, and sessions from Google Analytics.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/6472b907-3f72-4456-9ce3-dd97236ba84f"
        }
      }
    }, {
      "id" : "2b48567d-5b9d-4018-9b6f-a9015963f53b",
      "pluginType" : "LOADER",
      "name" : "target-s3-avro",
      "namespace" : "target_s3_avro",
      "variant" : "faumel",
      "label" : "S3 Avro",
      "description" : "S3 Avro is a software tool for converting data between Avro and JSON formats in Amazon S3.\n\nS3 Avro is a software tool that allows users to easily convert data between Avro and JSON formats in Amazon S3. This tool is particularly useful for those who work with large amounts of data and need to quickly and efficiently convert between these two formats. With S3 Avro, users can easily upload Avro files to S3, convert them to JSON, and then download the converted files back to their local machine. This tool is designed to be user-friendly and intuitive, making it accessible to users of all skill levels.",
      "logoUrl" : "/assets/logos/loaders/s3-avro.png",
      "hidden" : false,
      "docs" : "https://www.matatika.com/data-details/target-s3-avro/",
      "pipUrl" : "git+https://github.com/faumel/target-s3-avro.git",
      "repo" : "https://github.com/faumel/target-s3-avro",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "verify",
        "aliases" : [ ],
        "label" : "Verify",
        "kind" : "BOOLEAN",
        "description" : "Boolean value indicating whether to verify SSL certificates for HTTPS requests.",
        "protected" : false
      }, {
        "name" : "aws_session_token",
        "aliases" : [ ],
        "label" : "Aws Session Token",
        "kind" : "PASSWORD",
        "description" : "Temporary session token for AWS authentication.",
        "protected" : false
      }, {
        "name" : "api_version",
        "aliases" : [ ],
        "label" : "Api Version",
        "kind" : "STRING",
        "description" : "Version of the S3 Avro API to use.",
        "protected" : false
      }, {
        "name" : "endpoint_url",
        "aliases" : [ ],
        "label" : "Endpoint Url",
        "kind" : "STRING",
        "description" : "URL for the S3 Avro API endpoint.",
        "protected" : false
      }, {
        "name" : "aws_secret_access_key",
        "aliases" : [ ],
        "label" : "Aws Secret Access Key",
        "kind" : "PASSWORD",
        "description" : "Secret access key for AWS authentication.",
        "protected" : false
      }, {
        "name" : "aws_access_key_id",
        "aliases" : [ ],
        "label" : "Aws Access Key Id",
        "kind" : "PASSWORD",
        "description" : "Access key ID for AWS authentication.",
        "protected" : false
      }, {
        "name" : "flatten_delimiter",
        "aliases" : [ ],
        "label" : "Flatten Delimiter",
        "kind" : "STRING",
        "description" : "Delimiter to use when flattening nested Avro records.",
        "protected" : false
      }, {
        "name" : "region_name",
        "aliases" : [ ],
        "label" : "Region Name",
        "kind" : "STRING",
        "description" : "Name of the AWS region where the S3 bucket is located.",
        "protected" : false
      }, {
        "name" : "tmp_dir",
        "aliases" : [ ],
        "label" : "Tmp Dir",
        "kind" : "STRING",
        "description" : "Directory to use for temporary files during Avro serialization.",
        "protected" : false
      }, {
        "name" : "use_ssl",
        "aliases" : [ ],
        "label" : "Use SSL",
        "kind" : "BOOLEAN",
        "description" : "Boolean value indicating whether to use SSL for HTTPS requests.",
        "protected" : false
      }, {
        "name" : "target_schema_bucket_key",
        "aliases" : [ ],
        "label" : "Target Schema Bucket Key",
        "kind" : "STRING",
        "description" : "Key for the Avro schema file in the S3 bucket.",
        "protected" : false
      }, {
        "name" : "config",
        "aliases" : [ ],
        "label" : "Config",
        "kind" : "STRING",
        "description" : "Additional configuration options for the S3 Avro API connection.",
        "protected" : false
      }, {
        "name" : "target_bucket_key",
        "aliases" : [ ],
        "label" : "Target Bucket Key",
        "kind" : "STRING",
        "description" : "Key for the target object in the S3 bucket.",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "S3 Avro is a software tool for converting data between Avro and JSON formats in Amazon S3.\n\nS3 Avro is a software tool that allows users to easily convert data between Avro and JSON formats in Amazon S3. This tool is particularly useful for those who work with large amounts of data and need to quickly and efficiently convert between these two formats. With S3 Avro, users can easily upload Avro files to S3, convert them to JSON, and then download the converted files back to their local machine. This tool is designed to be user-friendly and intuitive, making it accessible to users of all skill levels.\n\n## Settings\n\n\n### Verify\n\nBoolean value indicating whether to verify SSL certificates for HTTPS requests.\n\n### Aws Session Token\n\nTemporary session token for AWS authentication.\n\n### Api Version\n\nVersion of the S3 Avro API to use.\n\n### Endpoint Url\n\nURL for the S3 Avro API endpoint.\n\n### Aws Secret Access Key\n\nSecret access key for AWS authentication.\n\n### Aws Access Key Id\n\nAccess key ID for AWS authentication.\n\n### Flatten Delimiter\n\nDelimiter to use when flattening nested Avro records.\n\n### Region Name\n\nName of the AWS region where the S3 bucket is located.\n\n### Tmp Dir\n\nDirectory to use for temporary files during Avro serialization.\n\n### Use SSL\n\nBoolean value indicating whether to use SSL for HTTPS requests.\n\n### Target Schema Bucket Key\n\nKey for the Avro schema file in the S3 bucket.\n\n### Config\n\nAdditional configuration options for the S3 Avro API connection.\n\n### Target Bucket Key\n\nKey for the target object in the S3 bucket.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/2b48567d-5b9d-4018-9b6f-a9015963f53b"
        }
      }
    }, {
      "id" : "4f3acdb4-898b-4ddf-a70f-1141f7b73129",
      "pluginType" : "TRANSFORM",
      "name" : "dbt-solarvista",
      "namespace" : "tap_solarvista",
      "variant" : "matatika",
      "hidden" : false,
      "pipUrl" : "https://github.com/Matatika/[email protected]",
      "repo" : "https://github.com/Matatika/dbt-tap-solarvista",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : { },
      "vars" : {
        "schema" : ""
      },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ {
        "id" : "81ca6a43-b7bf-4e3d-b01f-7c9fff39b962",
        "pluginType" : "TRANSFORMER",
        "name" : "dbt",
        "namespace" : "dbt",
        "variant" : "dbt-labs",
        "label" : "dbt",
        "logoUrl" : "/assets/images/transformer/dbt.png",
        "hidden" : false,
        "docs" : "https://www.matatika.com/data-details/dbt/",
        "pipUrl" : "dbt-core~=1.3.0 dbt-postgres~=1.3.0 dbt-snowflake~=1.3.0\n",
        "repo" : "https://github.com/dbt-labs/dbt-core",
        "capabilities" : [ ],
        "select" : [ ],
        "update" : { },
        "vars" : { },
        "settings" : [ {
          "name" : "project_dir",
          "aliases" : [ ],
          "value" : "$MELTANO_PROJECT_ROOT/transform",
          "kind" : "STRING",
          "protected" : false
        }, {
          "name" : "profiles_dir",
          "aliases" : [ ],
          "value" : "$MELTANO_PROJECT_ROOT/transform/profile",
          "kind" : "STRING",
          "env" : "DBT_PROFILES_DIR",
          "protected" : false
        }, {
          "name" : "target",
          "aliases" : [ ],
          "value" : "$MELTANO_LOAD__DIALECT",
          "kind" : "STRING",
          "protected" : false
        }, {
          "name" : "source_schema",
          "aliases" : [ ],
          "value" : "$MELTANO_LOAD__TARGET_SCHEMA",
          "kind" : "STRING",
          "protected" : false
        }, {
          "name" : "target_schema",
          "aliases" : [ ],
          "value" : "analytics",
          "kind" : "STRING",
          "protected" : false
        }, {
          "name" : "models",
          "aliases" : [ ],
          "value" : "$MELTANO_TRANSFORM__PACKAGE_NAME $MELTANO_EXTRACTOR_NAMESPACE my_meltano_project",
          "kind" : "STRING",
          "protected" : false
        } ],
        "variants" : [ ],
        "commands" : {
          "compile" : {
            "args" : "compile",
            "description" : "Generates executable SQL from source model, test, and analysis files. Compiled SQL files are written to the target/ directory."
          },
          "seed" : {
            "args" : "seed",
            "description" : "Load data from csv files into your data warehouse."
          },
          "test" : {
            "args" : "test",
            "description" : "Runs tests on data in deployed models."
          },
          "docs-generate" : {
            "args" : "docs generate",
            "description" : "Generate documentation artifacts for your project."
          },
          "deps" : {
            "args" : "deps",
            "description" : "Pull the most recent version of the dependencies listed in packages.yml"
          },
          "run" : {
            "args" : "run",
            "description" : "Compile SQL and execute against the current target database."
          },
          "clean" : {
            "args" : "clean",
            "description" : "Delete all folders in the clean-targets list (usually the dbt_modules and target directories.)"
          },
          "snapshot" : {
            "args" : "snapshot",
            "description" : "Execute snapshots defined in your project."
          }
        },
        "matatikaHidden" : false,
        "requires" : [ ],
        "fullDescription" : ""
      }, {
        "id" : "33444aa0-a5e9-4edb-927a-d0c15707baa0",
        "pluginType" : "EXTRACTOR",
        "name" : "tap-solarvista",
        "namespace" : "tap_solarvista",
        "variant" : "matatika",
        "label" : "Solarvista Live",
        "description" : "Solarvista Live is a software platform for field service management.\n\nSolarvista Live is a cloud-based software platform designed to help businesses manage their field service operations more efficiently. It provides a range of tools and features to help businesses schedule and dispatch technicians, track work orders, manage inventory, and more. With Solarvista Live, businesses can streamline their field service operations, reduce costs, and improve customer satisfaction. The platform is highly customizable and can be tailored to meet the specific needs of each business. It is also designed to be easy to use, with a user-friendly interface that makes it simple for technicians and other field service personnel to access the information they need to do their jobs effectively. Overall, Solarvista Live is a powerful tool for businesses looking to optimize their field service operations and improve their bottom line.\n### Prerequisites\n- Datasources: The datasources required to connect to Solarvista Live are specific to the organization and must be provided by the Solarvista Live administrator or IT department.\n- Account: The account information required to connect to Solarvista Live is specific to the user and must be provided by the Solarvista Live administrator or IT department.\n- Client ID: The client ID required to connect to Solarvista Live is specific to the organization and must be provided by the Solarvista Live administrator or IT department.\n- Code: The code required to connect to Solarvista Live is specific to the user and must be provided by the Solarvista Live administrator or IT department.",
        "logoUrl" : "/assets/images/datasource/tap-solarvista.png",
        "hidden" : false,
        "docs" : "https://www.matatika.com/docs/instant-insights/tap-solarvista/",
        "pipUrl" : "git+https://github.com/Matatika/[email protected]",
        "repo" : "https://github.com/Matatika/tap-solarvista",
        "capabilities" : [ "STATE" ],
        "select" : [ ],
        "update" : { },
        "vars" : { },
        "settings" : [ {
          "name" : "datasources",
          "aliases" : [ ],
          "label" : "Datasources",
          "kind" : "STRING",
          "description" : "The data sources to connect to in Solarvista Live.",
          "required" : "true",
          "protected" : false
        }, {
          "name" : "account",
          "aliases" : [ ],
          "label" : "Account",
          "kind" : "STRING",
          "description" : "The account name to use for authentication.",
          "required" : "true",
          "protected" : false
        }, {
          "name" : "clientId",
          "aliases" : [ ],
          "label" : "Client ID",
          "kind" : "STRING",
          "description" : "The client ID to use for authentication.",
          "required" : "true",
          "protected" : false
        }, {
          "name" : "code",
          "aliases" : [ ],
          "label" : "Code",
          "kind" : "PASSWORD",
          "description" : "The code to use for authentication.",
          "required" : "true",
          "protected" : false
        }, {
          "name" : "start_date",
          "aliases" : [ ],
          "label" : "Start Date",
          "kind" : "DATE_ISO8601",
          "description" : "The date to start retrieving data from.",
          "protected" : false
        }, {
          "name" : "force_start_date",
          "aliases" : [ ],
          "label" : "Force Start Date",
          "kind" : "DATE_ISO8601",
          "description" : "A flag indicating whether to force the start date even if data already exists for that date.",
          "protected" : false
        } ],
        "variants" : [ ],
        "commands" : { },
        "matatikaHidden" : false,
        "requires" : [ ],
        "fullDescription" : "Solarvista Live is a software platform for field service management.\n\nSolarvista Live is a cloud-based software platform designed to help businesses manage their field service operations more efficiently. It provides a range of tools and features to help businesses schedule and dispatch technicians, track work orders, manage inventory, and more. With Solarvista Live, businesses can streamline their field service operations, reduce costs, and improve customer satisfaction. The platform is highly customizable and can be tailored to meet the specific needs of each business. It is also designed to be easy to use, with a user-friendly interface that makes it simple for technicians and other field service personnel to access the information they need to do their jobs effectively. Overall, Solarvista Live is a powerful tool for businesses looking to optimize their field service operations and improve their bottom line.\n### Prerequisites\n- Datasources: The datasources required to connect to Solarvista Live are specific to the organization and must be provided by the Solarvista Live administrator or IT department.\n- Account: The account information required to connect to Solarvista Live is specific to the user and must be provided by the Solarvista Live administrator or IT department.\n- Client ID: The client ID required to connect to Solarvista Live is specific to the organization and must be provided by the Solarvista Live administrator or IT department.\n- Code: The code required to connect to Solarvista Live is specific to the user and must be provided by the Solarvista Live administrator or IT department.\n\n## Settings\n\n\n### Datasources\n\nThe data sources to connect to in Solarvista Live.\n\n### Account\n\nThe account name to use for authentication.\n\n### Client ID\n\nThe client ID to use for authentication.\n\n### Code\n\nThe code to use for authentication.\n\n### Start Date\n\nThe date to start retrieving data from.\n\n### 
Force Start Date\n\nA flag indicating whether to force the start date even if data already exists for that date."
      } ],
      "fullDescription" : "",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/4f3acdb4-898b-4ddf-a70f-1141f7b73129"
        }
      }
    }, {
      "id" : "14518e68-ecda-48c9-9c93-155453d89ef2",
      "pluginType" : "FILE",
      "name" : "analyze-auth0",
      "namespace" : "tap_auth0",
      "variant" : "matatika",
      "label" : "Auth0 Insights",
      "description" : "Instant insights on users, logins and quotas from Auth0.",
      "hidden" : false,
      "pipUrl" : "git+https://github.com/Matatika/[email protected]",
      "repo" : "https://github.com/Matatika/analyze-auth0",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : {
        "*.yml" : "true"
      },
      "vars" : { },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ {
        "id" : "6c5a07d0-8580-4bf3-a56e-fb87f7c24c09",
        "pluginType" : "EXTRACTOR",
        "name" : "tap-auth0",
        "namespace" : "tap_auth0",
        "variant" : "matatika",
        "label" : "Auth0",
        "description" : "Auth0 is an identity and access management platform.\n\nAuth0 is a cloud-based platform that provides a comprehensive set of tools and services for managing user authentication and authorization in web and mobile applications. It allows developers to easily add authentication and authorization capabilities to their applications, without having to build and maintain their own identity management system. Auth0 supports a wide range of authentication methods, including social login, multi-factor authentication, and passwordless authentication. It also provides features such as user management, role-based access control, and integration with third-party identity providers. With Auth0, developers can focus on building their applications, while leaving the complex task of identity management to the experts.\n### Prerequisites\nTo obtain the Client ID, Client Secret, and Domain for connecting to Auth0, you need to follow these steps:\n\n1. Log in to your Auth0 account.\n2. From the dashboard, click on the \"Applications\" tab.\n3. Click on the \"Create Application\" button.\n4. Choose the type of application you want to create (Single Page Application, Regular Web Application, etc.).\n5. Give your application a name and click on the \"Create\" button.\n6. Once your application is created, you will be redirected to the \"Settings\" tab.\n7. Here, you will find the Client ID and Client Secret.\n8. To obtain the Domain, go to the \"Settings\" tab of your Auth0 account and copy the value of the \"Domain\" field.\n\nNote: The exact steps may vary slightly depending on the version of Auth0 you are using.",
        "logoUrl" : "/assets/images/datasource/tap-auth0.png",
        "hidden" : false,
        "docs" : "https://www.matatika.com/docs/instant-insights/tap-auth0/",
        "pipUrl" : "git+https://github.com/Matatika/[email protected]",
        "repo" : "https://github.com/Matatika/tap-auth0",
        "capabilities" : [ "DISCOVER", "STATE", "CATALOG" ],
        "select" : [ ],
        "update" : { },
        "vars" : { },
        "settings" : [ {
          "name" : "client_id",
          "aliases" : [ ],
          "label" : "Client ID",
          "kind" : "PASSWORD",
          "description" : "A unique identifier for the client application that is registered with Auth0.",
          "required" : "true",
          "protected" : false
        }, {
          "name" : "client_secret",
          "aliases" : [ ],
          "label" : "Client Secret",
          "kind" : "PASSWORD",
          "description" : "A secret string that is used to authenticate the client application with Auth0.",
          "required" : "true",
          "protected" : false
        }, {
          "name" : "domain",
          "aliases" : [ ],
          "label" : "Domain",
          "kind" : "STRING",
          "description" : "The Auth0 domain associated with the tenant.",
          "required" : "true",
          "protected" : false
        }, {
          "name" : "job_poll_interval_ms",
          "aliases" : [ ],
          "label" : "Job poll interval ms",
          "value" : "2000",
          "kind" : "INTEGER",
          "description" : "The interval in milliseconds at which to poll for the status of a long-running job.",
          "protected" : false
        }, {
          "name" : "job_poll_max_count",
          "aliases" : [ ],
          "label" : "Job poll max count",
          "value" : "10",
          "kind" : "INTEGER",
          "description" : "The maximum number of times to poll for the status of a long-running job.",
          "protected" : false
        } ],
        "variants" : [ ],
        "commands" : { },
        "matatikaHidden" : false,
        "requires" : [ ],
        "fullDescription" : "Auth0 is an identity and access management platform.\n\nAuth0 is a cloud-based platform that provides a comprehensive set of tools and services for managing user authentication and authorization in web and mobile applications. It allows developers to easily add authentication and authorization capabilities to their applications, without having to build and maintain their own identity management system. Auth0 supports a wide range of authentication methods, including social login, multi-factor authentication, and passwordless authentication. It also provides features such as user management, role-based access control, and integration with third-party identity providers. With Auth0, developers can focus on building their applications, while leaving the complex task of identity management to the experts.\n### Prerequisites\nTo obtain the Client ID, Client Secret, and Domain for connecting to Auth0, you need to follow these steps:\n\n1. Log in to your Auth0 account.\n2. From the dashboard, click on the \"Applications\" tab.\n3. Click on the \"Create Application\" button.\n4. Choose the type of application you want to create (Single Page Application, Regular Web Application, etc.).\n5. Give your application a name and click on the \"Create\" button.\n6. Once your application is created, you will be redirected to the \"Settings\" tab.\n7. Here, you will find the Client ID and Client Secret.\n8. 
To obtain the Domain, go to the \"Settings\" tab of your Auth0 account and copy the value of the \"Domain\" field.\n\nNote: The exact steps may vary slightly depending on the version of Auth0 you are using.\n\n## Settings\n\n\n### Client ID\n\nA unique identifier for the client application that is registered with Auth0.\n\n### Client Secret\n\nA secret string that is used to authenticate the client application with Auth0.\n\n### Domain\n\nThe Auth0 domain associated with the tenant.\n\n### Job poll interval ms\n\nThe interval in milliseconds at which to poll for the status of a long-running job.\n\n### Job poll max count\n\nThe maximum number of times to poll for the status of a long-running job."
      } ],
      "fullDescription" : "Instant insights on users, logins and quotas from Auth0.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/14518e68-ecda-48c9-9c93-155453d89ef2"
        }
      }
    }, {
      "id" : "e5e317b4-ddfe-4617-8228-966feeb124ed",
      "pluginType" : "EXTRACTOR",
      "name" : "tap-autopilot",
      "namespace" : "tap_autopilot",
      "variant" : "singer-io",
      "label" : "Autopilot",
      "description" : "Autopilot is a marketing automation software. \n\nAutopilot is a cloud-based marketing automation software that helps businesses automate their marketing tasks and workflows, such as lead generation, email marketing, and customer journey mapping, to improve customer engagement and drive revenue growth. It offers a visual canvas for creating personalized customer journeys, as well as integrations with popular CRM and marketing tools. Autopilot also provides analytics and reporting features to track campaign performance and optimize marketing strategies.",
      "logoUrl" : "/assets/logos/extractors/autopilot.png",
      "hidden" : false,
      "docs" : "https://www.matatika.com/data-details/tap-autopilot/",
      "pipUrl" : "tap-autopilot",
      "repo" : "https://github.com/singer-io/tap-autopilot",
      "capabilities" : [ "DISCOVER", "STATE", "CATALOG" ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "Autopilot is a marketing automation software. \n\nAutopilot is a cloud-based marketing automation software that helps businesses automate their marketing tasks and workflows, such as lead generation, email marketing, and customer journey mapping, to improve customer engagement and drive revenue growth. It offers a visual canvas for creating personalized customer journeys, as well as integrations with popular CRM and marketing tools. Autopilot also provides analytics and reporting features to track campaign performance and optimize marketing strategies.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/e5e317b4-ddfe-4617-8228-966feeb124ed"
        }
      }
    }, {
      "id" : "bdf19f6a-e898-49e6-bb59-8457b33907b1",
      "pluginType" : "EXTRACTOR",
      "name" : "tap-googleads",
      "namespace" : "tap_googleads",
      "variant" : "matatika",
      "label" : "Google Ads",
      "description" : "Google Ads is an online advertising platform that allows businesses to create and display ads to potential customers.\n\nGoogle Ads, formerly known as Google AdWords, is a pay-per-click (PPC) advertising platform that enables businesses to create and display ads to potential customers when they search for specific products or services on Google. Advertisers bid on specific keywords and pay for each click on their ads, with the cost per click (CPC) varying depending on the competition for the keyword. Google Ads also offers a range of targeting options, including location, demographics, and interests, allowing businesses to reach their ideal audience. Additionally, Google Ads provides detailed analytics and reporting, allowing advertisers to track the performance of their ads and make data-driven decisions to optimize their campaigns.\n### Prerequisites\nTo obtain the required settings for connecting to Google Ads, follow these steps:\n\n1. OAuth identity provider authorization endpoint used to create and refresh tokens: This endpoint is specific to the identity provider you are using. You can find this information in the documentation provided by the identity provider.\n\n2. OAuth scopes we need to request access to: The required OAuth scopes depend on the specific actions you want to perform in Google Ads. You can find a list of available scopes in the Google Ads API documentation.\n\n3. Access Token: To obtain an access token, you need to authenticate with Google using OAuth 2.0. Once you have authenticated, you will receive an access token that you can use to make API requests. You can find more information on how to obtain an access token in the Google Ads API documentation.\n\n4. OAuth Refresh Token: The refresh token is obtained during the initial authentication process and is used to obtain a new access token when the current one expires. 
You can find more information on how to obtain a refresh token in the Google Ads API documentation.\n\n5. Developer Token: The developer token is a unique identifier that is used to track API usage and ensure compliance with Google Ads policies. You can obtain a developer token by creating a Google Ads account and registering for the API.\n\n6. Customer Id: The customer ID is a unique identifier for each Google Ads account. You can find your customer ID in the Google Ads UI or by using the Google Ads API.",
      "logoUrl" : "/assets/images/datasource/tap-googleads.svg",
      "hidden" : false,
      "docs" : "https://www.matatika.com/docs/instant-insights/tap-googleads/",
      "pipUrl" : "git+https://github.com/Matatika/[email protected]",
      "repo" : "https://github.com/Matatika/tap-googleads",
      "capabilities" : [ "DISCOVER", "STATE", "CATALOG" ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "oauth_credentials.authorization_url",
        "aliases" : [ ],
        "label" : "OAuth identity provider authorization endpoint used create and refresh tokens",
        "value" : "https://oauth2.googleapis.com/token",
        "kind" : "HIDDEN",
        "description" : "The endpoint used to create and refresh OAuth tokens.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "oauth_credentials.scope",
        "aliases" : [ ],
        "label" : "OAuth scopes we need to request access to",
        "value" : "https://www.googleapis.com/auth/adwords",
        "kind" : "HIDDEN",
        "description" : "The specific permissions we need to request access to in order to use the Google Ads API.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "oauth_credentials.access_token",
        "aliases" : [ ],
        "label" : "Access Token",
        "kind" : "HIDDEN",
        "description" : "The token used to authenticate and authorize API requests.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "oauth_credentials.refresh_token",
        "aliases" : [ ],
        "label" : "OAuth Refresh Token",
        "kind" : "HIDDEN",
        "description" : "The token used to refresh the access token when it expires.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "oauth_credentials.refresh_proxy_url",
        "aliases" : [ ],
        "label" : "Optional - will be called with 'oauth_credentials.refresh_token' to refresh the access token",
        "kind" : "HIDDEN",
        "description" : "An optional function that will be called to refresh the access token using the refresh token.",
        "protected" : false
      }, {
        "name" : "oauth_credentials.refresh_proxy_url_auth",
        "aliases" : [ ],
        "label" : "Optional - Sets Authorization header on 'oauth_credentials.refresh_url' request",
        "kind" : "HIDDEN",
        "description" : "An optional setting that sets the Authorization header on the request to refresh the access token.",
        "protected" : false
      }, {
        "name" : "oauth_credentials.client_id",
        "aliases" : [ ],
        "label" : "Optional - OAuth Client ID used if refresh_proxy_url not supplied",
        "kind" : "HIDDEN",
        "description" : "An optional setting that specifies the OAuth Client ID to use if a refresh proxy URL is not supplied.",
        "protected" : false
      }, {
        "name" : "oauth_credentials.client_secret",
        "aliases" : [ ],
        "label" : "Optional - OAuth Client Secret used if refresh_proxy_url not supplied",
        "kind" : "HIDDEN",
        "description" : "An optional setting that specifies the OAuth Client Secret to use if a refresh proxy URL is not supplied.",
        "protected" : false
      }, {
        "name" : "start_date",
        "aliases" : [ ],
        "label" : "Start Date",
        "kind" : "DATE_ISO8601",
        "description" : "The start date for the data range of the API request.",
        "protected" : false
      }, {
        "name" : "end_date",
        "aliases" : [ ],
        "label" : "End Date",
        "kind" : "DATE_ISO8601",
        "description" : "The end date for the data range of the API request.",
        "protected" : false
      }, {
        "name" : "developer_token",
        "aliases" : [ ],
        "label" : "Developer Token",
        "value" : "DYSuW0qdfU5-jti8Zdh1HQ",
        "kind" : "HIDDEN",
        "description" : "The token used to identify the developer making the API request.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "customer_id",
        "aliases" : [ ],
        "label" : "Customer Id",
        "kind" : "STRING",
        "description" : "The ID of the Google Ads account to make the API request on behalf of.",
        "required" : "true",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "Google Ads is an online advertising platform that allows businesses to create and display ads to potential customers.\n\nGoogle Ads, formerly known as Google AdWords, is a pay-per-click (PPC) advertising platform that enables businesses to create and display ads to potential customers when they search for specific products or services on Google. Advertisers bid on specific keywords and pay for each click on their ads, with the cost per click (CPC) varying depending on the competition for the keyword. Google Ads also offers a range of targeting options, including location, demographics, and interests, allowing businesses to reach their ideal audience. Additionally, Google Ads provides detailed analytics and reporting, allowing advertisers to track the performance of their ads and make data-driven decisions to optimize their campaigns.\n### Prerequisites\nTo obtain the required settings for connecting to Google Ads, follow these steps:\n\n1. OAuth identity provider authorization endpoint used to create and refresh tokens: This endpoint is specific to the identity provider you are using. You can find this information in the documentation provided by the identity provider.\n\n2. OAuth scopes we need to request access to: The required OAuth scopes depend on the specific actions you want to perform in Google Ads. You can find a list of available scopes in the Google Ads API documentation.\n\n3. Access Token: To obtain an access token, you need to authenticate with Google using OAuth 2.0. Once you have authenticated, you will receive an access token that you can use to make API requests. You can find more information on how to obtain an access token in the Google Ads API documentation.\n\n4. OAuth Refresh Token: The refresh token is obtained during the initial authentication process and is used to obtain a new access token when the current one expires. 
You can find more information on how to obtain a refresh token in the Google Ads API documentation.\n\n5. Developer Token: The developer token is a unique identifier that is used to track API usage and ensure compliance with Google Ads policies. You can obtain a developer token by creating a Google Ads account and registering for the API.\n\n6. Customer Id: The customer ID is a unique identifier for each Google Ads account. You can find your customer ID in the Google Ads UI or by using the Google Ads API.\n\n## Settings\n\n\n### Start Date\n\nThe start date for the data range of the API request.\n\n### End Date\n\nThe end date for the data range of the API request.\n\n### Customer Id\n\nThe ID of the Google Ads account to make the API request on behalf of.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/bdf19f6a-e898-49e6-bb59-8457b33907b1"
        }
      }
    }, {
      "id" : "dbf87b80-6eb6-483a-90bb-b7a8c094fb3a",
      "pluginType" : "FILE",
      "name" : "analyze-solarvista",
      "namespace" : "tap_solarvista",
      "variant" : "matatika",
      "label" : "Solarvista Insights",
      "description" : "Instant insights on revenue, projects, work items, and engineer performance from Solarvista Live.",
      "hidden" : false,
      "pipUrl" : "git+https://github.com/Matatika/[email protected]",
      "repo" : "https://github.com/Matatika/analyze-solarvista",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : {
        "*.yml" : "true"
      },
      "vars" : { },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ {
        "id" : "4f3acdb4-898b-4ddf-a70f-1141f7b73129",
        "pluginType" : "TRANSFORM",
        "name" : "dbt-solarvista",
        "namespace" : "tap_solarvista",
        "variant" : "matatika",
        "hidden" : false,
        "pipUrl" : "https://github.com/Matatika/[email protected]",
        "repo" : "https://github.com/Matatika/dbt-tap-solarvista",
        "capabilities" : [ ],
        "select" : [ ],
        "update" : { },
        "vars" : {
          "schema" : ""
        },
        "settings" : [ ],
        "variants" : [ ],
        "commands" : { },
        "matatikaHidden" : false,
        "requires" : [ ],
        "fullDescription" : ""
      } ],
      "fullDescription" : "Instant insights on revenue, projects, work items, and engineer performance from Solarvista Live.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/dbf87b80-6eb6-483a-90bb-b7a8c094fb3a"
        }
      }
    }, {
      "id" : "ffd26c88-aa25-4e04-913c-8dd0b22762d1",
      "pluginType" : "FILE",
      "name" : "analyze-trello",
      "namespace" : "tap_trello",
      "variant" : "matatika",
      "label" : "Trello Insights",
      "description" : "Instant insights on members, cards, boards, and actions from Trello.",
      "hidden" : false,
      "pipUrl" : "git+https://github.com/Matatika/[email protected]",
      "repo" : "https://github.com/Matatika/analyze-trello",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : {
        "*.yml" : "true"
      },
      "vars" : { },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ {
        "id" : "512c097b-df0e-4437-ba9a-3374557a30d9",
        "pluginType" : "TRANSFORM",
        "name" : "dbt-tap-trello",
        "namespace" : "tap_trello",
        "variant" : "matatika",
        "hidden" : false,
        "pipUrl" : "https://github.com/Matatika/[email protected]",
        "repo" : "https://github.com/Matatika/dbt-tap-trello",
        "capabilities" : [ ],
        "select" : [ ],
        "update" : { },
        "vars" : {
          "schema" : ""
        },
        "settings" : [ ],
        "variants" : [ ],
        "commands" : { },
        "matatikaHidden" : false,
        "requires" : [ ],
        "fullDescription" : ""
      } ],
      "fullDescription" : "Instant insights on members, cards, boards, and actions from Trello.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/ffd26c88-aa25-4e04-913c-8dd0b22762d1"
        }
      }
    }, {
      "id" : "c5c84dde-1880-494d-95c4-7c71f43528f5",
      "pluginType" : "EXTRACTOR",
      "name" : "tap-aftership",
      "namespace" : "tap_aftership",
      "variant" : "harrystech",
      "label" : "AfterShip",
      "description" : "AfterShip is a shipment tracking platform for online retailers and customers.\n\nAfterShip allows online retailers to track and manage their shipments across multiple carriers and provides customers with real-time updates on the status of their orders. The platform integrates with over 700 carriers worldwide and offers features such as branded tracking pages, delivery notifications, and analytics to help businesses improve their shipping performance. AfterShip also offers a mobile app for customers to track their packages on-the-go.",
      "logoUrl" : "/assets/logos/extractors/aftership.png",
      "hidden" : false,
      "docs" : "https://www.matatika.com/data-details/tap-aftership/",
      "pipUrl" : "git+https://github.com/harrystech/tap-aftership.git",
      "repo" : "https://github.com/harrystech/tap-aftership",
      "capabilities" : [ "DISCOVER", "SCHEMA_FLATTENING", "ABOUT", "STATE", "STREAM_MAPS", "CATALOG" ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "api_key",
        "aliases" : [ ],
        "label" : "Api Key",
        "kind" : "PASSWORD",
        "description" : "A unique identifier used to authenticate and authorize API requests.",
        "protected" : false
      }, {
        "name" : "start_date",
        "aliases" : [ ],
        "label" : "Start Date",
        "kind" : "DATE_ISO8601",
        "description" : "The earliest date for which shipment tracking information should be retrieved.",
        "protected" : false
      }, {
        "name" : "end_date",
        "aliases" : [ ],
        "label" : "End Date",
        "kind" : "DATE_ISO8601",
        "description" : "The latest date for which shipment tracking information should be retrieved.",
        "protected" : false
      }, {
        "name" : "stream_maps",
        "aliases" : [ ],
        "label" : "Stream Maps",
        "kind" : "OBJECT",
        "description" : "A list of stream maps that define the structure of the response data.",
        "protected" : false
      }, {
        "name" : "stream_map_config",
        "aliases" : [ ],
        "label" : "Stream Map Config",
        "kind" : "OBJECT",
        "description" : "Additional configuration settings for the stream maps.",
        "protected" : false
      }, {
        "name" : "flattening_enabled",
        "aliases" : [ ],
        "label" : "Flattening Enabled",
        "kind" : "BOOLEAN",
        "description" : "A boolean value indicating whether or not the response data should be flattened.",
        "protected" : false
      }, {
        "name" : "flattening_max_depth",
        "aliases" : [ ],
        "label" : "Flattening Max Depth",
        "kind" : "INTEGER",
        "description" : "The maximum depth to which the response data should be flattened.",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "AfterShip is a shipment tracking platform for online retailers and customers.\n\nAfterShip allows online retailers to track and manage their shipments across multiple carriers and provides customers with real-time updates on the status of their orders. The platform integrates with over 700 carriers worldwide and offers features such as branded tracking pages, delivery notifications, and analytics to help businesses improve their shipping performance. AfterShip also offers a mobile app for customers to track their packages on-the-go.\n\n## Settings\n\n\n### Api Key\n\nA unique identifier used to authenticate and authorize API requests.\n\n### Start Date\n\nThe earliest date for which shipment tracking information should be retrieved.\n\n### End Date\n\nThe latest date for which shipment tracking information should be retrieved.\n\n### Stream Maps\n\nA list of stream maps that define the structure of the response data.\n\n### Stream Map Config\n\nAdditional configuration settings for the stream maps.\n\n### Flattening Enabled\n\nA boolean value indicating whether or not the response data should be flattened.\n\n### Flattening Max Depth\n\nThe maximum depth to which the response data should be flattened.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/c5c84dde-1880-494d-95c4-7c71f43528f5"
        }
      }
    }, {
      "id" : "c0598af4-f633-4d21-8f56-80a60aea9140",
      "pluginType" : "LOADER",
      "name" : "target-s3-csv",
      "namespace" : "pipelinewise_target_s3_csv",
      "variant" : "transferwise",
      "label" : "S3 CSV",
      "description" : "S3 CSV is a tool for managing CSV files in Amazon S3.\n\nS3 CSV is a software tool that allows users to easily manage CSV files stored in Amazon S3. It provides features such as importing, exporting, and transforming CSV files, as well as querying and filtering data. S3 CSV also offers advanced functionality such as data validation, data cleansing, and data enrichment. With S3 CSV, users can streamline their CSV file management processes and improve the accuracy and quality of their data.",
      "logoUrl" : "/assets/logos/loaders/pipelinewise-s3-csv.png",
      "hidden" : false,
      "docs" : "https://www.matatika.com/data-details/target-s3-csv/",
      "pipUrl" : "git+https://github.com/transferwise/pipelinewise-target-s3-csv.git",
      "repo" : "https://github.com/transferwise/pipelinewise-target-s3-csv",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "aws_access_key_id",
        "aliases" : [ ],
        "label" : "S3 Access Key Id",
        "kind" : "PASSWORD",
        "description" : "The access key ID for the AWS account.",
        "protected" : false
      }, {
        "name" : "aws_secret_access_key",
        "aliases" : [ ],
        "label" : "S3 Secret Access Key",
        "kind" : "PASSWORD",
        "description" : "The secret access key for the AWS account.",
        "protected" : false
      }, {
        "name" : "aws_session_token",
        "aliases" : [ ],
        "label" : "AWS Session token",
        "kind" : "PASSWORD",
        "description" : "The session token for the AWS account.",
        "protected" : false
      }, {
        "name" : "aws_endpoint_url",
        "aliases" : [ ],
        "label" : "AWS endpoint URL",
        "kind" : "STRING",
        "description" : "The endpoint URL for the AWS service.",
        "protected" : false
      }, {
        "name" : "aws_profile",
        "aliases" : [ ],
        "label" : "AWS profile",
        "kind" : "STRING",
        "description" : "The name of the AWS profile to use.",
        "protected" : false
      }, {
        "name" : "s3_bucket",
        "aliases" : [ ],
        "label" : "S3 Bucket name",
        "kind" : "STRING",
        "description" : "The name of the S3 bucket to connect to.",
        "protected" : false
      }, {
        "name" : "s3_key_prefix",
        "aliases" : [ ],
        "label" : "S3 Key Prefix",
        "kind" : "STRING",
        "description" : "The prefix to use when searching for files in the S3 bucket.",
        "protected" : false
      }, {
        "name" : "delimiter",
        "aliases" : [ ],
        "label" : "Delimiter",
        "kind" : "STRING",
        "description" : "The delimiter used in the CSV file.",
        "protected" : false
      }, {
        "name" : "quotechar",
        "aliases" : [ ],
        "label" : "Quote Char",
        "kind" : "STRING",
        "description" : "The character used to quote fields in the CSV file.",
        "protected" : false
      }, {
        "name" : "add_metadata_columns",
        "aliases" : [ ],
        "label" : "Add Metadata Columns",
        "kind" : "BOOLEAN",
        "description" : "Whether or not to add metadata columns to the output.",
        "protected" : false
      }, {
        "name" : "encryption_type",
        "aliases" : [ ],
        "label" : "S3 Access Key Id",
        "kind" : "STRING",
        "description" : "The encryption key to use for the CSV file.",
        "protected" : false
      }, {
        "name" : "encryption_key",
        "aliases" : [ ],
        "label" : "Encryption Key",
        "kind" : "STRING",
        "description" : "The compression algorithm to use for the CSV file.",
        "protected" : false
      }, {
        "name" : "compression",
        "aliases" : [ ],
        "label" : "Compression",
        "kind" : "STRING",
        "description" : "The naming convention to use for the CSV file.",
        "protected" : false
      }, {
        "name" : "naming_convention",
        "aliases" : [ ],
        "label" : "Naming Convention",
        "kind" : "STRING",
        "description" : "(Default - None) Custom naming convention of the s3 key. Replaces tokens date, stream, and timestamp with the appropriate values. Supports \"folders\" in s3 keys e.g. folder/folder2/{stream}/export_date={date}/{timestamp}.csv. Honors the s3_key_prefix, if set, by prepending the \"filename\". E.g. naming_convention = folder1/my_file.csv and s3_key_prefix = prefix_ results in folder1/prefix_my_file.csv",
        "protected" : false
      }, {
        "name" : "temp_dir",
        "aliases" : [ ],
        "label" : "S3 Access Key Id",
        "kind" : "STRING",
        "description" : "(Default - platform-dependent) Directory of temporary CSV files with RECORD messages.",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "S3 CSV is a tool for managing CSV files in Amazon S3.\n\nS3 CSV is a software tool that allows users to easily manage CSV files stored in Amazon S3. It provides features such as importing, exporting, and transforming CSV files, as well as querying and filtering data. S3 CSV also offers advanced functionality such as data validation, data cleansing, and data enrichment. With S3 CSV, users can streamline their CSV file management processes and improve the accuracy and quality of their data.\n\n## Settings\n\n\n### S3 Access Key Id\n\nThe access key ID for the AWS account.\n\n### S3 Secret Access Key\n\nThe secret access key for the AWS account.\n\n### AWS Session token\n\nThe session token for the AWS account.\n\n### AWS endpoint URL\n\nThe endpoint URL for the AWS service.\n\n### AWS profile\n\nThe name of the AWS profile to use.\n\n### S3 Bucket name\n\nThe name of the S3 bucket to connect to.\n\n### S3 Key Prefix\n\nThe prefix to use when searching for files in the S3 bucket.\n\n### Delimiter\n\nThe delimiter used in the CSV file.\n\n### Quote Char\n\nThe character used to quote fields in the CSV file.\n\n### Add Metadata Columns\n\nWhether or not to add metadata columns to the output.\n\n### S3 Access Key Id\n\nThe encryption key to use for the CSV file.\n\n### Encryption Key\n\nThe compression algorithm to use for the CSV file.\n\n### Compression\n\nThe naming convention to use for the CSV file.\n\n### Naming Convention\n\n(Default - None) Custom naming convention of the s3 key. Replaces tokens date, stream, and timestamp with the appropriate values. Supports \"folders\" in s3 keys e.g. folder/folder2/{stream}/export_date={date}/{timestamp}.csv. Honors the s3_key_prefix, if set, by prepending the \"filename\". E.g. naming_convention = folder1/my_file.csv and s3_key_prefix = prefix_ results in folder1/prefix_my_file.csv\n\n### S3 Access Key Id\n\n(Default - platform-dependent) Directory of temporary CSV files with RECORD messages.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/c0598af4-f633-4d21-8f56-80a60aea9140"
        }
      }
    }, {
      "id" : "24072fe8-2f1f-4a0c-be4a-97df8c5e5be7",
      "pluginType" : "LOADER",
      "name" : "target-s3-parquet",
      "namespace" : "target_s3_parquet",
      "variant" : "gupy-io",
      "label" : "S3 Parquet",
      "description" : "S3 Parquet is a file format for storing and processing large amounts of data in a distributed computing environment.\n\nS3 Parquet is a columnar storage format that allows for efficient compression and encoding of data, making it ideal for storing and processing large amounts of data in a distributed computing environment. It is designed to work seamlessly with Amazon S3 and other big data processing tools such as Apache Spark and Hadoop. S3 Parquet allows for faster data processing and analysis, as well as reduced storage costs, making it a popular choice for big data applications.",
      "logoUrl" : "/assets/logos/loaders/s3-parquet.png",
      "hidden" : false,
      "docs" : "https://www.matatika.com/data-details/target-s3-parquet/",
      "pipUrl" : "git+https://github.com/gupy-io/target-s3-parquet.git",
      "repo" : "https://github.com/gupy-io/target-s3-parquet",
      "capabilities" : [ "ABOUT", "RECORD_FLATTENING", "STREAM_MAPS" ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "s3_path",
        "aliases" : [ ],
        "label" : "S3 Path",
        "kind" : "STRING",
        "description" : "The path to the S3 bucket and object where the Parquet data is stored.",
        "protected" : false
      }, {
        "name" : "aws_access_key_id",
        "aliases" : [ ],
        "label" : "AWS Access Key Id",
        "kind" : "PASSWORD",
        "description" : "The access key ID for the AWS account that has permission to access the S3 bucket.",
        "protected" : false
      }, {
        "name" : "aws_secret_access_key",
        "aliases" : [ ],
        "label" : "AWS Secret Access Key",
        "kind" : "PASSWORD",
        "description" : "The secret access key for the AWS account that has permission to access the S3 bucket.",
        "protected" : false
      }, {
        "name" : "athena_database",
        "aliases" : [ ],
        "label" : "Athena Database",
        "kind" : "STRING",
        "description" : "The name of the Athena database where the Parquet data will be queried.",
        "protected" : false
      }, {
        "name" : "add_record_metadata",
        "aliases" : [ ],
        "label" : "Add Record Metadata",
        "kind" : "BOOLEAN",
        "description" : "Whether or not to add metadata to each record in the Parquet data.",
        "protected" : false
      }, {
        "name" : "stringify_schema",
        "aliases" : [ ],
        "label" : "Stringify Schema",
        "kind" : "BOOLEAN",
        "description" : "Whether or not to convert the schema of the Parquet data to a string format.",
        "protected" : false
      }, {
        "name" : "stream_maps",
        "aliases" : [ ],
        "label" : "Stream Maps",
        "kind" : "OBJECT",
        "description" : "A mapping of column names to stream names for the Parquet data.",
        "protected" : false
      }, {
        "name" : "stream_map_config",
        "aliases" : [ ],
        "label" : "Stream Map Config",
        "kind" : "OBJECT",
        "description" : "Configuration options for the stream maps.",
        "protected" : false
      }, {
        "name" : "flattening_enabled",
        "aliases" : [ ],
        "label" : "Flattening Enabled",
        "kind" : "BOOLEAN",
        "description" : "Whether or not to flatten nested structures in the Parquet data.",
        "protected" : false
      }, {
        "name" : "flattening_max_depth",
        "aliases" : [ ],
        "label" : "Flattening Max Depth",
        "kind" : "INTEGER",
        "description" : "The maximum depth to which nested structures will be flattened.",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "S3 Parquet is a file format for storing and processing large amounts of data in a distributed computing environment.\n\nS3 Parquet is a columnar storage format that allows for efficient compression and encoding of data, making it ideal for storing and processing large amounts of data in a distributed computing environment. It is designed to work seamlessly with Amazon S3 and other big data processing tools such as Apache Spark and Hadoop. S3 Parquet allows for faster data processing and analysis, as well as reduced storage costs, making it a popular choice for big data applications.\n\n## Settings\n\n\n### S3 Path\n\nThe path to the S3 bucket and object where the Parquet data is stored.\n\n### AWS Access Key Id\n\nThe access key ID for the AWS account that has permission to access the S3 bucket.\n\n### AWS Secret Access Key\n\nThe secret access key for the AWS account that has permission to access the S3 bucket.\n\n### Athena Database\n\nThe name of the Athena database where the Parquet data will be queried.\n\n### Add Record Metadata\n\nWhether or not to add metadata to each record in the Parquet data.\n\n### Stringify Schema\n\nWhether or not to convert the schema of the Parquet data to a string format.\n\n### Stream Maps\n\nA mapping of column names to stream names for the Parquet data.\n\n### Stream Map Config\n\nConfiguration options for the stream maps.\n\n### Flattening Enabled\n\nWhether or not to flatten nested structures in the Parquet data.\n\n### Flattening Max Depth\n\nThe maximum depth to which nested structures will be flattened.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/24072fe8-2f1f-4a0c-be4a-97df8c5e5be7"
        }
      }
    }, {
      "id" : "931124c6-882f-4f0d-b0ca-6db09f1e1948",
      "pluginType" : "EXTRACTOR",
      "name" : "tap-matatika-sit",
      "namespace" : "tap_matatika_sit",
      "variant" : "matatika",
      "label" : "Matatika SIT",
      "description" : "Test extractor based on tap-spreadsheets-anywhere used during Matatika SIT runs",
      "logoUrl" : "/assets/images/datasource/tap-matatika-sit.svg",
      "hidden" : false,
      "docs" : "https://meltano.com/plugins/extractors/spreadsheets-anywhere.html",
      "pipUrl" : "git+https://github.com/ets/tap-spreadsheets-anywhere.git",
      "repo" : "https://github.com/ets/tap-spreadsheets-anywhere",
      "executable" : "tap-spreadsheets-anywhere",
      "capabilities" : [ "DISCOVER", "STATE", "CATALOG" ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "tables",
        "aliases" : [ ],
        "label" : "Tables",
        "value" : "[{\"path\":\"https://raw.githubusercontent.com/Matatika/matatika-examples/master/example_data\",\"name\":\"gitflixusers\",\"pattern\":\"GitFlixUsers.csv\",\"start_date\":\"2021-01-01T00:00:00Z\",\"key_properties\":[\"id\"],\"format\":\"csv\"}]",
        "kind" : "ARRAY",
        "description" : "A setting in Matatika SIT that allows users to view and manage tables of data.",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : true,
      "requires" : [ ],
      "fullDescription" : "Test extractor based on tap-spreadsheets-anywhere used during Matatika SIT runs\n\n## Settings\n\n\n### Tables\n\nA setting in Matatika SIT that allows users to view and manage tables of data.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/931124c6-882f-4f0d-b0ca-6db09f1e1948"
        }
      }
    }, {
      "id" : "8c09264a-cec5-4a45-9873-160ad26d4d9a",
      "pluginType" : "EXTRACTOR",
      "name" : "tap-spreadsheets-s3",
      "namespace" : "tap_spreadsheets_anywhere",
      "variant" : "matatika",
      "label" : "Spreadsheets S3",
      "description" : "Spreadsheets S3 is a software tool that allows users sync data from spreadsheets stored in Amazon S3 into their chosen targets.\n### Prerequisites\nTo obtain the Tables required setting for connecting to Spreadsheets Anywhere, you need to have access to the spreadsheet that you want to connect to. Once you have access, you can identify the name of the table or tables that you want to connect to. The table name should be entered in the appropriate field when setting up the connection to Spreadsheets Anywhere.\nThe AWS Access Key ID and AWS Secret Access Key are the connection credentials for connecting to your S3 bucket.",
      "logoUrl" : "/assets/images/datasource/tap-spreadsheets-anywhere.png",
      "hidden" : false,
      "docs" : "https://www.matatika.com/data-details/tap-spreadsheets-s3/",
      "pipUrl" : "git+https://github.com/Matatika/[email protected]",
      "repo" : "https://github.com/Matatika/tap-spreadsheets-anywhere",
      "executable" : "tap-spreadsheets-anywhere",
      "capabilities" : [ "DISCOVER", "STATE", "CATALOG" ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "tables",
        "aliases" : [ ],
        "label" : "Tables",
        "kind" : "ARRAY",
        "description" : "A setting in Spreadsheets Anywhere that allows users to select which tables they want to connect to and use in their application.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "aws_access_key_id",
        "aliases" : [ ],
        "label" : "AWS Access Key ID",
        "kind" : "PASSWORD",
        "env" : "AWS_ACCESS_KEY_ID",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "aws_secret_access_key",
        "aliases" : [ ],
        "label" : "AWS Secret Access Key",
        "kind" : "PASSWORD",
        "env" : "AWS_SECRET_ACCESS_KEY",
        "required" : "true",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "Spreadsheets S3 is a software tool that allows users sync data from spreadsheets stored in Amazon S3 into their chosen targets.\n### Prerequisites\nTo obtain the Tables required setting for connecting to Spreadsheets Anywhere, you need to have access to the spreadsheet that you want to connect to. Once you have access, you can identify the name of the table or tables that you want to connect to. The table name should be entered in the appropriate field when setting up the connection to Spreadsheets Anywhere.\nThe AWS Access Key ID and AWS Secret Access Key are the connection credentials for connecting to your S3 bucket.\n\n## Settings\n\n\n### Tables\n\nA setting in Spreadsheets Anywhere that allows users to select which tables they want to connect to and use in their application.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/8c09264a-cec5-4a45-9873-160ad26d4d9a"
        }
      }
    }, {
      "id" : "26f77abc-1f2f-41fc-805d-da2065293a51",
      "pluginType" : "UTILITY",
      "name" : "elementary",
      "namespace" : "elementary",
      "variant" : "matatika",
      "label" : "Elementary",
      "description" : "Elementary is an open-source data observability solution for data & analytics engineers.",
      "logoUrl" : "/assets/logos/utilities/elementary.png",
      "hidden" : false,
      "pipUrl" : "elementary-data[postgres]==0.12.0 git+https://github.com/matatika/elementary-ext.git",
      "repo" : "https://github.com/elementary-data/elementary",
      "executable" : "edr",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "profiles-dir",
        "aliases" : [ ],
        "label" : "Profiles Directory",
        "value" : "${MELTANO_PROJECT_ROOT}/transform/profile",
        "placeholder" : "${MELTANO_PROJECT_ROOT}/transform/profile",
        "kind" : "STRING",
        "description" : "Profiles directory path for your dbt project",
        "protected" : false
      }, {
        "name" : "project_dir",
        "aliases" : [ ],
        "label" : "Project Directory",
        "value" : "${MELTANO_PROJECT_ROOT}/transform/",
        "kind" : "STRING",
        "description" : "Project directory path for your dbt project",
        "protected" : false
      }, {
        "name" : "file-path",
        "aliases" : [ ],
        "label" : "File Path",
        "value" : "${MELTANO_PROJECT_ROOT}/output/elementary.html",
        "placeholder" : "${MELTANO_PROJECT_ROOT}/output/elementary.html",
        "kind" : "STRING",
        "description" : "Location of the file generated by the `report` commands",
        "protected" : false
      }, {
        "name" : "slack-token",
        "aliases" : [ ],
        "label" : "Slack Token",
        "kind" : "PASSWORD",
        "description" : "If necessary, slack token for sending notifications to slack",
        "protected" : false
      }, {
        "name" : "slack-channel-name",
        "aliases" : [ ],
        "label" : "Slack Channel Name",
        "value" : "elementary-notifs",
        "kind" : "STRING",
        "description" : "If necessary, slack channel name in which to send notifications to",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : {
        "monitor-report" : {
          "args" : "monitor-report",
          "executable" : "elementary_extension",
          "description" : "Allows you to generate a report and sent to to your file path"
        },
        "monitor-send-report" : {
          "args" : "monitor-send-report",
          "executable" : "elementary_extension",
          "description" : "Allows you to generate a report and send it through slack"
        },
        "describe" : {
          "args" : "describe",
          "executable" : "elementary_extension"
        },
        "initialize" : {
          "args" : "initialize",
          "executable" : "elementary_extension",
          "description" : "Allows you to initialize your Elementary extension."
        }
      },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "Elementary is an open-source data observability solution for data & analytics engineers.\n\n## Settings\n\n\n### Profiles Directory\n\nProfiles directory path for your dbt project\n\n### Project Directory\n\nProject directory path for your dbt project\n\n### File Path\n\nLocation of the file generated by the `report` commands\n\n### Slack Token\n\nIf necessary, slack token for sending notifications to slack\n\n### Slack Channel Name\n\nIf necessary, slack channel name in which to send notifications to",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/26f77abc-1f2f-41fc-805d-da2065293a51"
        }
      }
    }, {
      "id" : "4fa46eaa-9d17-42c1-9f59-8998bf10a71e",
      "pluginType" : "EXTRACTOR",
      "name" : "tap-anaplan",
      "namespace" : "tap_anaplan",
      "variant" : "matthew-skinner",
      "label" : "Anaplan",
      "description" : "Anaplan is a cloud-based platform for enterprise planning and performance management.\n\nAnaplan provides a centralized platform for businesses to plan, forecast, and analyze their financial and operational data in real-time. It allows users to create and customize models for budgeting, forecasting, sales planning, workforce planning, and more. Anaplan's platform is designed to be flexible and scalable, allowing businesses to adapt to changing market conditions and make data-driven decisions. It also offers collaboration tools, data visualization, and reporting capabilities to help teams work together more efficiently and effectively.",
      "logoUrl" : "/assets/logos/extractors/anaplan.png",
      "hidden" : false,
      "docs" : "https://www.matatika.com/data-details/tap-anaplan/",
      "pipUrl" : "git+https://github.com/matthew-skinner/tap-anaplan.git",
      "repo" : "https://github.com/matthew-skinner/tap-anaplan",
      "capabilities" : [ "DISCOVER", "STATE", "CATALOG" ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "Anaplan is a cloud-based platform for enterprise planning and performance management.\n\nAnaplan provides a centralized platform for businesses to plan, forecast, and analyze their financial and operational data in real-time. It allows users to create and customize models for budgeting, forecasting, sales planning, workforce planning, and more. Anaplan's platform is designed to be flexible and scalable, allowing businesses to adapt to changing market conditions and make data-driven decisions. It also offers collaboration tools, data visualization, and reporting capabilities to help teams work together more efficiently and effectively.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/4fa46eaa-9d17-42c1-9f59-8998bf10a71e"
        }
      }
    }, {
      "id" : "cb74863b-07d2-4b9a-912f-c7f8172ffc36",
      "pluginType" : "LOADER",
      "name" : "target-s3csv",
      "namespace" : "pipelinewise_target_s3_csv",
      "variant" : "transferwise",
      "label" : "S3 CSV",
      "description" : "S3 CSV is a file format used for storing data in Amazon S3.\n\nAmazon S3 is a cloud-based storage service that allows users to store and retrieve data from anywhere on the web. S3 CSV is a file format used for storing data in S3 that is organized in rows and columns, similar to a spreadsheet. This format is commonly used for storing large amounts of data that can be easily accessed and analyzed using various tools and applications. S3 CSV files can be easily imported and exported to other applications, making it a popular choice for data storage and analysis in the cloud.\n### Prerequisites\nTo obtain the AWS Access Key Id and AWS Secret Access Key, you need to go to the AWS Management Console, navigate to the IAM service, and create an IAM user with programmatic access. During the user creation process, you will be provided with the Access Key Id and Secret Access Key.\n\nTo obtain the S3 Bucket name, you need to navigate to the S3 service in the AWS Management Console and select the bucket that contains the CSV file you want to connect to. The name of the bucket will be displayed in the bucket details page.",
      "logoUrl" : "/assets/logos/extractors/s3-csv.png",
      "hidden" : false,
      "docs" : "https://www.matatika.com/data-details/target-s3csv/",
      "pipUrl" : "git+https://github.com/transferwise/pipelinewise-target-s3-csv.git",
      "repo" : "https://github.com/transferwise/pipelinewise-target-s3-csv",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "aws_access_key_id",
        "aliases" : [ ],
        "label" : "AWS Access Key Id",
        "kind" : "PASSWORD",
        "description" : "The access key ID for the AWS account.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "aws_secret_access_key",
        "aliases" : [ ],
        "label" : "AWS Secret Access Key",
        "kind" : "PASSWORD",
        "description" : "The secret access key for the AWS account.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "aws_session_token",
        "aliases" : [ ],
        "label" : "AWS Session token",
        "kind" : "PASSWORD",
        "description" : "The session token for the AWS account.",
        "protected" : false
      }, {
        "name" : "aws_endpoint_url",
        "aliases" : [ ],
        "label" : "AWS endpoint URL",
        "kind" : "STRING",
        "description" : "The endpoint URL for the S3 bucket.",
        "protected" : false
      }, {
        "name" : "aws_profile",
        "aliases" : [ ],
        "label" : "AWS profile",
        "kind" : "STRING",
        "description" : "The name of the AWS profile to use.",
        "protected" : false
      }, {
        "name" : "s3_bucket",
        "aliases" : [ ],
        "label" : "S3 Bucket name",
        "kind" : "STRING",
        "description" : "The name of the S3 bucket to connect to.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "s3_key_prefix",
        "aliases" : [ ],
        "label" : "S3 Key Prefix",
        "kind" : "STRING",
        "description" : "The prefix for the S3 keys to read.",
        "protected" : false
      }, {
        "name" : "delimiter",
        "aliases" : [ ],
        "label" : "delimiter",
        "kind" : "STRING",
        "description" : "The delimiter used in the CSV file.",
        "protected" : false
      }, {
        "name" : "quotechar",
        "aliases" : [ ],
        "label" : "Quote Char",
        "kind" : "STRING",
        "description" : "The character used to quote fields in the CSV file.",
        "protected" : false
      }, {
        "name" : "add_metadata_columns",
        "aliases" : [ ],
        "label" : "Add Metadata Columns",
        "kind" : "BOOLEAN",
        "description" : "Whether to add metadata columns to the output.",
        "protected" : false
      }, {
        "name" : "encryption_type",
        "aliases" : [ ],
        "label" : "Encryption Type",
        "kind" : "STRING",
        "description" : "The type of encryption used for the S3 bucket.",
        "protected" : false
      }, {
        "name" : "compression",
        "aliases" : [ ],
        "label" : "Compression",
        "kind" : "STRING",
        "description" : "The compression type used for the CSV file.",
        "protected" : false
      }, {
        "name" : "naming_convention",
        "aliases" : [ ],
        "label" : "Naming Convention",
        "kind" : "STRING",
        "description" : "The naming convention used for the output files.",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "S3 CSV is a file format used for storing data in Amazon S3.\n\nAmazon S3 is a cloud-based storage service that allows users to store and retrieve data from anywhere on the web. S3 CSV is a file format used for storing data in S3 that is organized in rows and columns, similar to a spreadsheet. This format is commonly used for storing large amounts of data that can be easily accessed and analyzed using various tools and applications. S3 CSV files can be easily imported and exported to other applications, making it a popular choice for data storage and analysis in the cloud.\n### Prerequisites\nTo obtain the AWS Access Key Id and AWS Secret Access Key, you need to go to the AWS Management Console, navigate to the IAM service, and create an IAM user with programmatic access. During the user creation process, you will be provided with the Access Key Id and Secret Access Key.\n\nTo obtain the S3 Bucket name, you need to navigate to the S3 service in the AWS Management Console and select the bucket that contains the CSV file you want to connect to. 
The name of the bucket will be displayed in the bucket details page.\n\n## Settings\n\n\n### AWS Access Key Id\n\nThe access key ID for the AWS account.\n\n### AWS Secret Access Key\n\nThe secret access key for the AWS account.\n\n### AWS Session token\n\nThe session token for the AWS account.\n\n### AWS endpoint URL\n\nThe endpoint URL for the S3 bucket.\n\n### AWS profile\n\nThe name of the AWS profile to use.\n\n### S3 Bucket name\n\nThe name of the S3 bucket to connect to.\n\n### S3 Key Prefix\n\nThe prefix for the S3 keys to read.\n\n### delimiter\n\nThe delimiter used in the CSV file.\n\n### Quote Char\n\nThe character used to quote fields in the CSV file.\n\n### Add Metadata Columns\n\nWhether to add metadata columns to the output.\n\n### Encryption Type\n\nThe type of encryption used for the S3 bucket.\n\n### Compression\n\nThe compression type used for the CSV file.\n\n### Naming Convention\n\nThe naming convention used for the output files.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/cb74863b-07d2-4b9a-912f-c7f8172ffc36"
        }
      }
    }, {
      "id" : "0879ca90-e5ba-49b9-8435-c68676133ac7",
      "pluginType" : "FILE",
      "name" : "analyze-meltano",
      "namespace" : "tap_meltano",
      "variant" : "matatika",
      "label" : "Meltano Insights",
      "description" : "Instant insights on jobs from Meltano.",
      "hidden" : false,
      "pipUrl" : "git+https://github.com/Matatika/[email protected]",
      "repo" : "https://github.com/Matatika/analyze-meltano",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : {
        "*.yml" : "true"
      },
      "vars" : { },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ {
        "id" : "8688dd6b-e9b9-48f9-b1ae-747ef53b071b",
        "pluginType" : "TRANSFORM",
        "name" : "dbt-meltano",
        "namespace" : "tap_meltano",
        "variant" : "matatika",
        "hidden" : false,
        "pipUrl" : "https://github.com/Matatika/[email protected]",
        "repo" : "https://github.com/Matatika/dbt-tap-meltano",
        "capabilities" : [ ],
        "select" : [ ],
        "update" : { },
        "vars" : {
          "schema" : ""
        },
        "settings" : [ ],
        "variants" : [ ],
        "commands" : { },
        "matatikaHidden" : false,
        "requires" : [ ],
        "fullDescription" : ""
      } ],
      "fullDescription" : "Instant insights on jobs from Meltano.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/0879ca90-e5ba-49b9-8435-c68676133ac7"
        }
      }
    } ]
  },
  "_links" : {
    "first" : {
      "href" : "https://catalog.matatika.com/api/dataplugins?page=0&size=20"
    },
    "self" : {
      "href" : "https://catalog.matatika.com/api/dataplugins?page=0&size=20"
    },
    "next" : {
      "href" : "https://catalog.matatika.com/api/dataplugins?page=1&size=20"
    },
    "last" : {
      "href" : "https://catalog.matatika.com/api/dataplugins?page=26&size=20"
    }
  },
  "page" : {
    "size" : 20,
    "totalElements" : 523,
    "totalPages" : 27,
    "number" : 0
  }
}
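The `_links` and `page` objects above follow the HAL pagination convention: each response advertises `first`, `self`, `next`, and `last` page URLs, so a client can walk every page by following `next` until it disappears. A minimal sketch of that loop, using the `requests` library as in the examples below (`next_page_url` and `fetch_all_dataplugins` are illustrative helper names, not part of any Matatika SDK, and `access_token` is assumed to be a valid bearer token):

```python
from typing import Optional

import requests


def next_page_url(hal_body: dict) -> Optional[str]:
    """Return the HAL 'next' link href, or None on the last page."""
    return hal_body.get("_links", {}).get("next", {}).get("href")


def fetch_all_dataplugins(access_token: str) -> list:
    """Collect dataplugins from every page by following 'next' links."""
    url = "https://catalog.matatika.com/api/dataplugins?size=20"
    headers = {"Authorization": f"Bearer {access_token}"}
    plugins = []
    while url:
        body = requests.get(url, headers=headers).json()
        plugins.extend(body.get("_embedded", {}).get("dataplugins", []))
        url = next_page_url(body)
    return plugins
```

Following the advertised links, rather than incrementing a `page` query parameter by hand, keeps the client correct even if the server changes its paging scheme.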

View the Matatika discovery.yml

GET

/api/discovery.yml

Returns a Meltano discovery.yml containing all dataplugins supported by Matatika.

Request

Example Snippets

cURL

curl -H "Authorization: Bearer $ACCESS_TOKEN" 'https://catalog.matatika.com:443/api/discovery.yml' -i -X GET \
    -H 'Accept: application/json, application/javascript, text/javascript, text/json' \
    -H 'Content-Type: application/json'

Python (requests)

import requests

url = "https://catalog.matatika.com:443/api/discovery.yml"

headers = {
  'Authorization': f"Bearer {ACCESS_TOKEN}"
}

response = requests.request("GET", url, headers=headers)

print(response.text)

Response

200 OK

Meltano discovery.yml.

version: 20
extractors: []
loaders: []
transformers: []
files: []
utilities: []
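Since the response body is YAML, it can be parsed with a YAML library before use. A minimal sketch with PyYAML (an assumption — any YAML parser works), applied to the sample body above:

```python
import yaml  # PyYAML

# Sample discovery.yml body, as shown above
DISCOVERY = """\
version: 20
extractors: []
loaders: []
transformers: []
files: []
utilities: []
"""

doc = yaml.safe_load(DISCOVERY)
print(doc["version"])          # discovery schema version
print(len(doc["extractors"]))  # number of extractor plugins
```

In a real response the `extractors`, `loaders`, and other lists contain one entry per supported dataplugin rather than being empty.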


View all workspace dataplugins

GET

/api/workspaces/{workspace-id}/dataplugins

Returns all dataplugins available to the workspace {workspace-id}.

Prerequisites

  • Workspace {workspace-id} must exist

Request

Example Snippets

cURL

curl -H "Authorization: Bearer $ACCESS_TOKEN" 'https://catalog.matatika.com:443/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/dataplugins' -i -X GET \
    -H 'Accept: application/json, application/javascript, text/javascript, text/json' \
    -H 'Content-Type: application/json'

Python (requests)

import requests

url = "https://catalog.matatika.com:443/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/dataplugins"

headers = {
  'Authorization': f"Bearer {ACCESS_TOKEN}"
}

response = requests.request("GET", url, headers=headers)

print(response.text)

Response

200 OK

Dataplugin collection with HAL links.

{
  "_embedded" : {
    "dataplugins" : [ {
      "id" : "862ec863-59b0-41d0-a8f1-30dda77a75f3",
      "pluginType" : "LOADER",
      "name" : "target-postgres",
      "namespace" : "postgres_transferwise",
      "variant" : "matatika",
      "label" : "Postgres Warehouse",
      "description" : "Postgres Warehouse is a data warehousing solution built on top of the Postgres database management system.\n\nPostgres Warehouse is designed to handle large volumes of data and complex queries, making it an ideal solution for businesses that need to store and analyze large amounts of data. It provides a number of features that are specifically tailored to data warehousing, such as columnar storage, parallel processing, and support for advanced analytics. Additionally, Postgres Warehouse is highly scalable, allowing businesses to easily add more resources as their data needs grow. Overall, Postgres Warehouse is a powerful and flexible data warehousing solution that can help businesses make better decisions by providing them with the insights they need to succeed.\n### Prerequisites\nThe process of obtaining the required settings for connecting to a Postgres Warehouse may vary depending on the specific setup and configuration of the database. However, here are some general ways to obtain each of the required settings:\n\n- User: The user is typically created when the database is set up. You can ask the database administrator or check the database documentation to find out the username.\n- Password: The password is also typically created when the database is set up. You can ask the database administrator or check the database documentation to find out the password.\n- Host: The host is the server where the database is located. You can ask the database administrator or check the database documentation to find out the host name or IP address.\n- Port: The port is the number that the database listens on for incoming connections. The default port for Postgres is 5432, but it may be different depending on the configuration. You can ask the database administrator or check the database documentation to find out the port number.\n- Database Name: The database name is the name of the specific database you want to connect to. 
You can ask the database administrator or check the database documentation to find out the database name.\n- Default Target Schema: The default target schema is the schema that you want to use as the default when connecting to the database. This may be set up by the database administrator or you may need to create it yourself. You can ask the database administrator or check the database documentation to find out the default target schema.",
      "logoUrl" : "/assets/logos/loaders/postgres.png",
      "hidden" : false,
      "docs" : "https://www.matatika.com/data-details/target-postgres/",
      "pipUrl" : "git+https://github.com/Matatika/[email protected]",
      "repo" : "git+https://github.com/Matatika/[email protected]",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "user",
        "aliases" : [ "username" ],
        "label" : "User",
        "kind" : "STRING",
        "description" : "The username used to connect to the Postgres Warehouse.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "password",
        "aliases" : [ ],
        "label" : "Password",
        "kind" : "PASSWORD",
        "description" : "The password used to authenticate the user.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "host",
        "aliases" : [ "address" ],
        "label" : "Host",
        "kind" : "STRING",
        "description" : "The hostname or IP address of the Postgres Warehouse server.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "port",
        "aliases" : [ ],
        "label" : "Port",
        "value" : "5432",
        "kind" : "INTEGER",
        "description" : "The port number used to connect to the Postgres Warehouse server.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "dbname",
        "aliases" : [ "database" ],
        "label" : "Database Name",
        "kind" : "STRING",
        "description" : "The name of the database to connect to.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "default_target_schema",
        "aliases" : [ ],
        "label" : "Default Target Schema",
        "value" : "analytics",
        "kind" : "STRING",
        "description" : "The default schema to use when writing data to the Postgres Warehouse.",
        "required" : "true",
        "protected" : false
      }, {
        "name" : "ssl",
        "aliases" : [ ],
        "label" : "SSL",
        "value" : "false",
        "kind" : "HIDDEN",
        "description" : "Whether or not to use SSL encryption when connecting to the Postgres Warehouse.",
        "protected" : false,
        "value_post_processor" : "STRINGIFY"
      }, {
        "name" : "batch_size_rows",
        "aliases" : [ ],
        "label" : "Batch Size Rows",
        "value" : "100000",
        "kind" : "INTEGER",
        "description" : "The number of rows to write to the Postgres Warehouse in each batch.",
        "protected" : false
      }, {
        "name" : "underscore_camel_case_fields",
        "aliases" : [ ],
        "label" : "Underscore Camel Case Fields",
        "value" : "true",
        "kind" : "HIDDEN",
        "description" : "Whether or not to convert field names from camel case to underscore-separated format.",
        "protected" : false
      }, {
        "name" : "flush_all_streams",
        "aliases" : [ ],
        "label" : "Flush All Streams",
        "value" : "false",
        "kind" : "HIDDEN",
        "description" : "Whether or not to flush all streams to the Postgres Warehouse before closing the connection.",
        "protected" : false
      }, {
        "name" : "parallelism",
        "aliases" : [ ],
        "label" : "Parallelism",
        "value" : "0",
        "kind" : "HIDDEN",
        "description" : "The number of threads to use when writing data to the Postgres Warehouse.",
        "protected" : false
      }, {
        "name" : "parallelism_max",
        "aliases" : [ ],
        "label" : "Max Parallelism",
        "value" : "16",
        "kind" : "HIDDEN",
        "description" : "The maximum number of threads to use when writing data to the Postgres Warehouse.",
        "protected" : false
      }, {
        "name" : "default_target_schema_select_permission",
        "aliases" : [ ],
        "label" : "Default Target Schema Select Permission",
        "kind" : "HIDDEN",
        "description" : "The permission level required to select data from the default target schema.",
        "protected" : false
      }, {
        "name" : "schema_mapping",
        "aliases" : [ ],
        "label" : "Schema Mapping",
        "kind" : "HIDDEN",
        "description" : "A mapping of source schema names to target schema names.",
        "protected" : false
      }, {
        "name" : "add_metadata_columns",
        "aliases" : [ ],
        "label" : "Add Metadata Columns",
        "value" : "true",
        "kind" : "HIDDEN",
        "description" : "Whether or not to add metadata columns to the target table.",
        "protected" : false
      }, {
        "name" : "hard_delete",
        "aliases" : [ ],
        "label" : "Hard Delete",
        "value" : "false",
        "kind" : "HIDDEN",
        "description" : "Whether or not to perform hard deletes when deleting data from the Postgres Warehouse.",
        "protected" : false
      }, {
        "name" : "data_flattening_max_level",
        "aliases" : [ ],
        "label" : "Data Flattening Max Level",
        "value" : "10",
        "kind" : "HIDDEN",
        "description" : "The maximum level of nested data structures to flatten when writing data to the Postgres Warehouse.",
        "protected" : false
      }, {
        "name" : "primary_key_required",
        "aliases" : [ ],
        "label" : "Primary Key Required",
        "value" : "false",
        "kind" : "BOOLEAN",
        "description" : "Whether or not a primary key is required for the target table.",
        "protected" : false
      }, {
        "name" : "validate_records",
        "aliases" : [ ],
        "label" : "Validate Records",
        "value" : "false",
        "kind" : "BOOLEAN",
        "description" : "Whether or not to validate records before writing them to the Postgres Warehouse.",
        "protected" : false
      }, {
        "name" : "temp_dir",
        "aliases" : [ ],
        "label" : "Temporary Directory",
        "kind" : "HIDDEN",
        "description" : "The directory to use for temporary files when writing data to the Postgres Warehouse.",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "Postgres Warehouse is a data warehousing solution built on top of the Postgres database management system.\n\nPostgres Warehouse is designed to handle large volumes of data and complex queries, making it an ideal solution for businesses that need to store and analyze large amounts of data. It provides a number of features that are specifically tailored to data warehousing, such as columnar storage, parallel processing, and support for advanced analytics. Additionally, Postgres Warehouse is highly scalable, allowing businesses to easily add more resources as their data needs grow. Overall, Postgres Warehouse is a powerful and flexible data warehousing solution that can help businesses make better decisions by providing them with the insights they need to succeed.\n### Prerequisites\nThe process of obtaining the required settings for connecting to a Postgres Warehouse may vary depending on the specific setup and configuration of the database. However, here are some general ways to obtain each of the required settings:\n\n- User: The user is typically created when the database is set up. You can ask the database administrator or check the database documentation to find out the username.\n- Password: The password is also typically created when the database is set up. You can ask the database administrator or check the database documentation to find out the password.\n- Host: The host is the server where the database is located. You can ask the database administrator or check the database documentation to find out the host name or IP address.\n- Port: The port is the number that the database listens on for incoming connections. The default port for Postgres is 5432, but it may be different depending on the configuration. You can ask the database administrator or check the database documentation to find out the port number.\n- Database Name: The database name is the name of the specific database you want to connect to. 
You can ask the database administrator or check the database documentation to find out the database name.\n- Default Target Schema: The default target schema is the schema that you want to use as the default when connecting to the database. This may be set up by the database administrator or you may need to create it yourself. You can ask the database administrator or check the database documentation to find out the default target schema.\n\n## Settings\n\n\n### User\n\nThe username used to connect to the Postgres Warehouse.\n\n### Password\n\nThe password used to authenticate the user.\n\n### Host\n\nThe hostname or IP address of the Postgres Warehouse server.\n\n### Port\n\nThe port number used to connect to the Postgres Warehouse server.\n\n### Database Name\n\nThe name of the database to connect to.\n\n### Default Target Schema\n\nThe default schema to use when writing data to the Postgres Warehouse.\n\n### Batch Size Rows\n\nThe number of rows to write to the Postgres Warehouse in each batch.\n\n### Primary Key Required\n\nWhether or not a primary key is required for the target table.\n\n### Validate Records\n\nWhether or not to validate records before writing them to the Postgres Warehouse.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/862ec863-59b0-41d0-a8f1-30dda77a75f3"
        },
        "update dataplugin" : {
          "href" : "https://catalog.matatika.com/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/dataplugins/862ec863-59b0-41d0-a8f1-30dda77a75f3",
          "type" : "PUT"
        }
      }
    }, {
      "id" : "1f2eabca-8eaf-48ac-9b84-a40d61590e0f",
      "pluginType" : "TRANSFORMER",
      "name" : "dbt",
      "namespace" : "dbt",
      "variant" : "dbt-labs",
      "label" : "dbt",
      "logoUrl" : "/assets/images/transformer/dbt.png",
      "hidden" : false,
      "docs" : "https://www.matatika.com/data-details/dbt/",
      "pipUrl" : "dbt-core~=1.3.0 dbt-postgres~=1.3.0 dbt-snowflake~=1.3.0\n",
      "repo" : "https://github.com/dbt-labs/dbt-core",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "project_dir",
        "aliases" : [ ],
        "value" : "$MELTANO_PROJECT_ROOT/transform",
        "kind" : "STRING",
        "protected" : false
      }, {
        "name" : "profiles_dir",
        "aliases" : [ ],
        "value" : "$MELTANO_PROJECT_ROOT/transform/profile",
        "kind" : "STRING",
        "env" : "DBT_PROFILES_DIR",
        "protected" : false
      }, {
        "name" : "target",
        "aliases" : [ ],
        "value" : "$MELTANO_LOAD__DIALECT",
        "kind" : "STRING",
        "protected" : false
      }, {
        "name" : "source_schema",
        "aliases" : [ ],
        "value" : "$MELTANO_LOAD__TARGET_SCHEMA",
        "kind" : "STRING",
        "protected" : false
      }, {
        "name" : "target_schema",
        "aliases" : [ ],
        "value" : "analytics",
        "kind" : "STRING",
        "protected" : false
      }, {
        "name" : "models",
        "aliases" : [ ],
        "value" : "$MELTANO_TRANSFORM__PACKAGE_NAME $MELTANO_EXTRACTOR_NAMESPACE my_meltano_project",
        "kind" : "STRING",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : {
        "compile" : {
          "args" : "compile",
          "description" : "Generates executable SQL from source model, test, and analysis files. Compiled SQL files are written to the target/ directory."
        },
        "seed" : {
          "args" : "seed",
          "description" : "Load data from csv files into your data warehouse."
        },
        "test" : {
          "args" : "test",
          "description" : "Runs tests on data in deployed models."
        },
        "docs-generate" : {
          "args" : "docs generate",
          "description" : "Generate documentation artifacts for your project."
        },
        "deps" : {
          "args" : "deps",
          "description" : "Pull the most recent version of the dependencies listed in packages.yml"
        },
        "run" : {
          "args" : "run",
          "description" : "Compile SQL and execute against the current target database."
        },
        "clean" : {
          "args" : "clean",
          "description" : "Delete all folders in the clean-targets list (usually the dbt_modules and target directories.)"
        },
        "snapshot" : {
          "args" : "snapshot",
          "description" : "Execute snapshots defined in your project."
        }
      },
      "matatikaHidden" : false,
      "requires" : [ {
        "id" : "e6c1ad3d-ebf5-4c4a-b129-f68156b47555",
        "pluginType" : "FILE",
        "name" : "files-dbt",
        "namespace" : "dbt",
        "variant" : "matatika",
        "hidden" : false,
        "pipUrl" : "git+https://github.com/Matatika/[email protected]",
        "repo" : "https://github.com/Matatika/files-dbt",
        "capabilities" : [ ],
        "select" : [ ],
        "update" : {
          "transform/profile/profiles.yml" : "true"
        },
        "vars" : { },
        "settings" : [ ],
        "variants" : [ ],
        "commands" : { },
        "matatikaHidden" : false,
        "requires" : [ ],
        "fullDescription" : ""
      } ],
      "fullDescription" : "",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/1f2eabca-8eaf-48ac-9b84-a40d61590e0f"
        },
        "update dataplugin" : {
          "href" : "https://catalog.matatika.com/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/dataplugins/1f2eabca-8eaf-48ac-9b84-a40d61590e0f",
          "type" : "PUT"
        }
      }
    }, {
      "id" : "28965b8d-7b78-4f6e-9ff7-5200a0d1f623",
      "pluginType" : "EXTRACTOR",
      "name" : "tap-custom-test",
      "variant" : "sit",
      "label" : "Tap Custom Test",
      "description" : "A dataplugin created during an SIT run",
      "hidden" : false,
      "pipUrl" : "git+https://github.com/Matatika/example-repository",
      "capabilities" : [ ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ {
        "name" : "username",
        "aliases" : [ ],
        "label" : "Username",
        "placeholder" : "username",
        "kind" : "STRING",
        "description" : "The username login credential.",
        "protected" : false
      }, {
        "name" : "email",
        "aliases" : [ ],
        "label" : "Email",
        "placeholder" : "[email protected]",
        "kind" : "EMAIL",
        "description" : "The email login credential.",
        "protected" : false
      }, {
        "name" : "start_date",
        "aliases" : [ ],
        "label" : "Start Date",
        "placeholder" : "2020-01-01T00:00:00Z",
        "kind" : "DATE_ISO8601",
        "description" : "The data to begin extracting data from, in ISO 8601 format.",
        "protected" : false
      } ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "A dataplugin created during an SIT run\n\n## Settings\n\n\n### Username\n\nThe username login credential.\n\n### Email\n\nThe email login credential.\n\n### Start Date\n\nThe data to begin extracting data from, in ISO 8601 format.",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/28965b8d-7b78-4f6e-9ff7-5200a0d1f623"
        },
        "update dataplugin" : {
          "href" : "https://catalog.matatika.com/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/dataplugins/28965b8d-7b78-4f6e-9ff7-5200a0d1f623",
          "type" : "PUT"
        },
        "delete dataplugin" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/28965b8d-7b78-4f6e-9ff7-5200a0d1f623",
          "type" : "DELETE"
        }
      }
    }, {
      "id" : "5a4f1bcb-c9e7-4526-9d84-a5e3e9b30c9b",
      "pluginType" : "EXTRACTOR",
      "name" : "tap-test",
      "variant" : "sit",
      "hidden" : false,
      "capabilities" : [ ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/5a4f1bcb-c9e7-4526-9d84-a5e3e9b30c9b"
        },
        "update dataplugin" : {
          "href" : "https://catalog.matatika.com/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/dataplugins/5a4f1bcb-c9e7-4526-9d84-a5e3e9b30c9b",
          "type" : "PUT"
        },
        "delete dataplugin" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/5a4f1bcb-c9e7-4526-9d84-a5e3e9b30c9b",
          "type" : "DELETE"
        }
      }
    }, {
      "id" : "15320f6a-ef51-4d73-afa3-0a23b442557a",
      "pluginType" : "LOADER",
      "name" : "target-test",
      "variant" : "sit",
      "hidden" : false,
      "capabilities" : [ ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/15320f6a-ef51-4d73-afa3-0a23b442557a"
        },
        "update dataplugin" : {
          "href" : "https://catalog.matatika.com/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/dataplugins/15320f6a-ef51-4d73-afa3-0a23b442557a",
          "type" : "PUT"
        },
        "delete dataplugin" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/15320f6a-ef51-4d73-afa3-0a23b442557a",
          "type" : "DELETE"
        }
      }
    }, {
      "id" : "5f264126-0e9e-4a3a-af98-4fa21e73347d",
      "pluginType" : "TRANSFORM",
      "name" : "dbt-tap-test",
      "variant" : "sit",
      "hidden" : false,
      "capabilities" : [ ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/5f264126-0e9e-4a3a-af98-4fa21e73347d"
        },
        "update dataplugin" : {
          "href" : "https://catalog.matatika.com/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/dataplugins/5f264126-0e9e-4a3a-af98-4fa21e73347d",
          "type" : "PUT"
        },
        "delete dataplugin" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/5f264126-0e9e-4a3a-af98-4fa21e73347d",
          "type" : "DELETE"
        }
      }
    }, {
      "id" : "2f336e2c-8992-4283-b325-b1db00a49b77",
      "pluginType" : "FILE",
      "name" : "analyze-test",
      "variant" : "sit",
      "hidden" : false,
      "capabilities" : [ ],
      "select" : [ ],
      "update" : { },
      "vars" : { },
      "settings" : [ ],
      "variants" : [ ],
      "commands" : { },
      "matatikaHidden" : false,
      "requires" : [ ],
      "fullDescription" : "",
      "_links" : {
        "self" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/2f336e2c-8992-4283-b325-b1db00a49b77"
        },
        "update dataplugin" : {
          "href" : "https://catalog.matatika.com/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/dataplugins/2f336e2c-8992-4283-b325-b1db00a49b77",
          "type" : "PUT"
        },
        "delete dataplugin" : {
          "href" : "https://catalog.matatika.com/api/dataplugins/2f336e2c-8992-4283-b325-b1db00a49b77",
          "type" : "DELETE"
        }
      }
    } ]
  },
  "_links" : {
    "self" : {
      "href" : "https://catalog.matatika.com/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/dataplugins?page=0&size=20"
    }
  },
  "page" : {
    "size" : 20,
    "totalElements" : 7,
    "totalPages" : 1,
    "number" : 0
  }
}
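
Collection responses like the one above are paginated: the `page` object reports the page `size`, the `totalElements` and `totalPages` counts, and the zero-based current page `number`, while the `self` link shows the `page` and `size` query parameters used. As an illustration only (the `page_urls` helper and `base` URL below are assumptions, not part of the API), the remaining page URLs can be derived from that metadata:

```python
# Hypothetical sketch: derive the URLs of the remaining pages of a paginated
# collection from its 'page' metadata object.

base = "https://catalog.matatika.com/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/dataplugins"

def page_urls(page: dict) -> list[str]:
    """Return URLs for every page after the current one ('page' is zero-based)."""
    return [
        f"{base}?page={n}&size={page['size']}"
        for n in range(page["number"] + 1, page["totalPages"])
    ]

# With the metadata above (7 elements, page size 20, 1 page in total),
# everything fits on page 0, so there are no further pages to fetch:
print(page_urls({"size": 20, "totalElements": 7, "totalPages": 1, "number": 0}))  # -> []
```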

View a workspace discovery.yml

GET

/api/workspaces/{workspace-id}/discovery.yml

Returns a Meltano discovery.yml containing all dataplugins available to the workspace {workspace-id}.

Prerequisites

  • Workspace {workspace-id} must exist

Request

Example Snippets

cURL

curl -H "Authorization: Bearer $ACCESS_TOKEN" 'https://catalog.matatika.com:443/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/discovery.yml' -i -X GET \
    -H 'Accept: application/json, application/javascript, text/javascript, text/json' \
    -H 'Content-Type: application/json'

Python (requests)

import requests

url = "https://catalog.matatika.com:443/api/workspaces/30909de5-3b07-4409-a02e-c056ab81449d/discovery.yml"

headers = {
  'Authorization': f"Bearer {ACCESS_TOKEN}"  # bearer token, as in the cURL example
}

response = requests.request("GET", url, headers=headers)

print(response.text)
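
The body returned by this endpoint is plain YAML, so a YAML parser (e.g. PyYAML) is the robust way to work with it. As a dependency-free illustration only (the `extractor_names` helper and the truncated sample input below are hypothetical, not part of the API), the extractor names can be pulled out of the simple structure shown in the response with a line-based scan:

```python
# Minimal sketch: list extractor names from a discovery.yml body.
# Each plugin entry starts with a top-level '- id:' line; the 'name:' line
# that follows it belongs to that plugin. Nested setting entries
# ('- name: ...') are indented and therefore skipped by the column-0 check.

def extractor_names(discovery_yml: str) -> list[str]:
    """Collect the 'name:' value of each top-level '- id:' list item."""
    names = []
    expecting_name = False
    for line in discovery_yml.splitlines():
        if line.startswith("- id:"):
            expecting_name = True  # next 'name:' belongs to this plugin
        elif expecting_name and line.strip().startswith("name:"):
            names.append(line.split(":", 1)[1].strip())
            expecting_name = False
    return names

sample = """\
version: 20
extractors:
- id: 5a4f1bcb-c9e7-4526-9d84-a5e3e9b30c9b
  name: tap-test
  variant: sit
- id: 28965b8d-7b78-4f6e-9ff7-5200a0d1f623
  name: tap-custom-test
  variant: sit
"""

print(extractor_names(sample))  # -> ['tap-test', 'tap-custom-test']
```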

Response

200 OK

Meltano discovery.yml.

version: 20
extractors:
- id: 5a4f1bcb-c9e7-4526-9d84-a5e3e9b30c9b
  name: tap-test
  variant: sit
  hidden: false
- id: 28965b8d-7b78-4f6e-9ff7-5200a0d1f623
  name: tap-custom-test
  variant: sit
  label: Tap Custom Test
  description: A dataplugin created during an SIT run
  hidden: false
  pip_url: git+https://github.com/Matatika/example-repository
  settings:
  - name: username
    label: Username
    placeholder: username
    kind: string
    description: The username login credential.
    protected: false
  - name: email
    label: Email
    placeholder: [email protected]
    kind: email
    description: The email login credential.
    protected: false
  - name: start_date
    label: Start Date
    placeholder: 2020-01-01T00:00:00Z
    kind: date_iso8601
    description: "The data to begin extracting data from, in ISO 8601 format."
    protected: false
  full_description: |-
    A dataplugin created during an SIT run

    ## Settings


    ### Username

    The username login credential.

    ### Email

    The email login credential.

    ### Start Date

    The date to begin extracting data from, in ISO 8601 format.
- id: 92c7df8c-4eb6-4cc6-8f6b-e5d1a2acfdda
  name: tap-thinkific
  namespace: tap_thinkific
  variant: birdiecare
  label: Thinkific
  description: |-
    Thinkific is an online course creation platform.

    Thinkific is a platform that allows individuals and businesses to create and sell online courses. It provides tools for course creation, customization, marketing, and delivery, as well as features for student engagement and progress tracking. Thinkific also offers integrations with other tools and services, such as payment gateways, email marketing platforms, and analytics tools. With Thinkific, users can create and sell courses on a variety of topics, from business and marketing to health and wellness, and reach a global audience.
  logo_url: /assets/logos/extractors/thinkific.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-thinkific/
  pip_url: git+https://github.com/birdiecare/tap-thinkific.git
  repo: https://github.com/birdiecare/tap-thinkific
  capabilities:
  - discover
  - about
  - state
  - stream_maps
  - catalog
  settings:
  - name: api_key
    label: API Key
    kind: password
    description: A unique identifier used to authenticate and authorize API requests
    protected: false
  - name: subdomain
    label: Subdomain
    kind: string
    description: The unique identifier for the Thinkific account being accessed
    protected: false
  - name: start_date
    label: Start Date
    kind: string
    description: The date from which data should be retrieved or processed
    protected: false
  full_description: |-
    Thinkific is an online course creation platform.

    Thinkific is a platform that allows individuals and businesses to create and sell online courses. It provides tools for course creation, customization, marketing, and delivery, as well as features for student engagement and progress tracking. Thinkific also offers integrations with other tools and services, such as payment gateways, email marketing platforms, and analytics tools. With Thinkific, users can create and sell courses on a variety of topics, from business and marketing to health and wellness, and reach a global audience.

    ## Settings


    ### API Key

    A unique identifier used to authenticate and authorize API requests

    ### Subdomain

    The unique identifier for the Thinkific account being accessed

    ### Start Date

    The date from which data should be retrieved or processed
- id: e9758d71-1bc8-4ab0-8e99-1e7575bce596
  name: tap-redshift
  namespace: tap_redshift
  variant: monad-inc
  label: Redshift
  description: |-
    Redshift is a cloud-based data warehousing service provided by Amazon Web Services (AWS).

    Redshift allows users to store and analyze large amounts of data in a scalable and cost-effective manner. It uses columnar storage and parallel processing to enable fast querying of data using SQL. Redshift integrates with a variety of data sources and tools, including AWS services like S3 and EMR, as well as popular BI and ETL tools. It also offers features like automatic backups, encryption, and workload management to ensure data security and performance. Overall, Redshift is a powerful solution for businesses looking to manage and analyze their data in the cloud.
  logo_url: /assets/logos/extractors/redshift.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-redshift/
  pip_url: git+https://github.com/Monad-Inc/tap-redshift.git
  repo: https://github.com/Monad-Inc/tap-redshift
  capabilities:
  - discover
  - state
  - catalog
  settings:
  - name: host
    label: Host
    kind: string
    description: The URL or IP address of the Redshift cluster
    protected: false
  - name: user
    label: User
    kind: string
    description: The username used to authenticate with the Redshift cluster
    protected: false
  - name: start_date
    label: Start Date
    kind: date_iso8601
    description: The date from which data will be retrieved
    protected: false
  - name: port
    label: Port
    kind: integer
    description: The port number used to connect to the Redshift cluster
    protected: false
  - name: dbname
    label: Database Name
    kind: string
    description: The name of the database within the Redshift cluster
    protected: false
  - name: password
    label: Password
    kind: password
    description: The password used to authenticate with the Redshift cluster
    protected: false
  - name: schema
    label: Schema Name
    kind: string
    description: The name of the schema within the database
    protected: false
  full_description: |-
    Redshift is a cloud-based data warehousing service provided by Amazon Web Services (AWS).

    Redshift allows users to store and analyze large amounts of data in a scalable and cost-effective manner. It uses columnar storage and parallel processing to enable fast querying of data using SQL. Redshift integrates with a variety of data sources and tools, including AWS services like S3 and EMR, as well as popular BI and ETL tools. It also offers features like automatic backups, encryption, and workload management to ensure data security and performance. Overall, Redshift is a powerful solution for businesses looking to manage and analyze their data in the cloud.

    ## Settings


    ### Host

    The URL or IP address of the Redshift cluster

    ### User

    The username used to authenticate with the Redshift cluster

    ### Start Date

    The date from which data will be retrieved

    ### Port

    The port number used to connect to the Redshift cluster

    ### Database Name

    The name of the database within the Redshift cluster

    ### Password

    The password used to authenticate with the Redshift cluster

    ### Schema Name

    The name of the schema within the database
- id: 10adc98a-dae3-4e7d-854d-81ea9b0c575a
  name: tap-facebook-reviews
  namespace: tap_facebook_reviews
  variant: packlane
  label: Facebook Reviews
  description: |-
    Facebook Reviews: A tool for businesses to collect and display customer reviews on their Facebook page.

    Facebook Reviews is a feature that allows businesses to collect and display customer reviews on their Facebook page. This tool helps businesses build credibility and trust with potential customers by showcasing positive feedback from previous customers. Businesses can also respond to reviews and engage with customers to address any concerns or issues. Facebook Reviews is a valuable tool for businesses looking to improve their online reputation and attract new customers.
  logo_url: /assets/logos/extractors/facebook-reviews.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-facebook-reviews/
  pip_url: git+https://github.com/Packlane/tap-facebook-reviews.git
  repo: https://github.com/Packlane/tap-facebook-reviews
  capabilities:
  - discover
  - catalog
  full_description: |-
    Facebook Reviews: A tool for businesses to collect and display customer reviews on their Facebook page.

    Facebook Reviews is a feature that allows businesses to collect and display customer reviews on their Facebook page. This tool helps businesses build credibility and trust with potential customers by showcasing positive feedback from previous customers. Businesses can also respond to reviews and engage with customers to address any concerns or issues. Facebook Reviews is a valuable tool for businesses looking to improve their online reputation and attract new customers.
- id: 123f0342-634c-46c0-9213-8dfd197abe03
  name: tap-criteo
  namespace: tap_criteo
  variant: edgarrmondragon
  label: Criteo
  description: |-
    Criteo: A digital advertising platform.

    Criteo is a digital advertising platform that uses machine learning algorithms to deliver personalized ads to consumers across various devices and channels. It helps advertisers reach their target audience by analyzing consumer behavior and purchasing patterns to deliver relevant ads at the right time. Criteo's platform also provides insights and analytics to help advertisers optimize their campaigns and measure their return on investment.
  logo_url: /assets/logos/extractors/criteo.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-criteo/
  pip_url: git+https://github.com/edgarrmondragon/tap-criteo.git
  repo: https://github.com/edgarrmondragon/tap-criteo
  executable: tap-criteo
  capabilities:
  - discover
  - schema_flattening
  - about
  - stream_maps
  - catalog
  settings:
  - name: advertiser_ids
    label: Advertiser IDs
    kind: array
    description: The unique IDs assigned to each advertiser account within Criteo.
    protected: false
  - name: client_id
    label: Client ID
    kind: password
    description: The unique identifier for the client application connecting to the Criteo API.
    protected: false
  - name: client_secret
    label: Client Secret
    kind: password
    description: The secret key used to authenticate the client application.
    protected: false
  - name: flattening_enabled
    label: Flattening Enabled
    kind: boolean
    description: A boolean value indicating whether or not to flatten nested JSON objects in the API response.
    protected: false
  - name: flattening_max_depth
    label: Flattening Max Depth
    kind: integer
    description: The maximum depth to which nested JSON objects should be flattened.
    protected: false
  - name: reports
    label: Reports
    kind: array
    description: The type of report to retrieve from the Criteo API.
    protected: false
  - name: start_date
    label: Start Date
    kind: date_iso8601
    description: The date from which to start retrieving data for the specified report.
    protected: false
  - name: stream_map_config
    label: Stream Map Config
    kind: object
    description: The configuration settings for the stream map used to retrieve data from the Criteo API.
    protected: false
  - name: stream_maps
    label: Stream Maps
    kind: object
    description: The specific stream maps to use for retrieving data from the Criteo API.
    protected: false
  full_description: |-
    Criteo: A digital advertising platform.

    Criteo is a digital advertising platform that uses machine learning algorithms to deliver personalized ads to consumers across various devices and channels. It helps advertisers reach their target audience by analyzing consumer behavior and purchasing patterns to deliver relevant ads at the right time. Criteo's platform also provides insights and analytics to help advertisers optimize their campaigns and measure their return on investment.

    ## Settings


    ### Advertiser IDs

    The unique IDs assigned to each advertiser account within Criteo.

    ### Client ID

    The unique identifier for the client application connecting to the Criteo API.

    ### Client Secret

    The secret key used to authenticate the client application.

    ### Flattening Enabled

    A boolean value indicating whether or not to flatten nested JSON objects in the API response.

    ### Flattening Max Depth

    The maximum depth to which nested JSON objects should be flattened.

    ### Reports

    The type of report to retrieve from the Criteo API.

    ### Start Date

    The date from which to start retrieving data for the specified report.

    ### Stream Map Config

    The configuration settings for the stream map used to retrieve data from the Criteo API.

    ### Stream Maps

    The specific stream maps to use for retrieving data from the Criteo API.
- id: 5190f60e-1978-4e49-b6b9-57de5b260455
  name: tap-amazon-sp
  namespace: tap_amazon_seller
  variant: hotgluexyz
  label: Amazon Selling Partner (SP)
  description: |-
    Amazon Selling Partner (SP) is a platform that helps sellers manage their Amazon business.

    Amazon Selling Partner (SP) is a comprehensive platform that provides sellers with tools to manage their Amazon business. It offers features such as inventory management, order fulfillment, advertising, and analytics. With SP, sellers can track their sales performance, manage their inventory, and optimize their product listings. The platform also provides access to Amazon's advertising tools, allowing sellers to create and manage campaigns to promote their products. Additionally, SP offers insights and analytics to help sellers make data-driven decisions to grow their business on Amazon.
  logo_url: /assets/logos/extractors/amazon-sp.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-amazon-sp/
  pip_url: git+https://gitlab.com/hotglue/tap-amazon-seller.git
  repo: https://gitlab.com/hotglue/tap-amazon-seller
  executable: tap-amazon-seller
  capabilities:
  - discover
  - schema_flattening
  - about
  - state
  - stream_maps
  - catalog
  settings:
  - name: aws_access_key
    label: AWS Access Key
    kind: password
    description: The access key ID for the AWS account.
    protected: false
  - name: aws_secret_key
    label: AWS Secret Key
    kind: password
    description: The secret access key for the AWS account.
    protected: false
  - name: client_secret
    label: Client Secret
    kind: password
    description: The client secret for the OAuth 2.0 client.
    protected: false
  - name: flattening_enabled
    label: Flattening Enabled
    kind: boolean
    description: A boolean value indicating whether or not to flatten the response data.
    protected: false
  - name: flattening_max_depth
    label: Flattening Max Depth
    kind: integer
    description: The maximum depth to which the response data should be flattened.
    protected: false
  - name: lwa_client_id
    label: Lwa Client ID
    kind: password
    description: The client ID for the Login with Amazon (LWA) client.
    protected: false
  - name: marketplaces
    label: Marketplaces
    kind: array
    description: The Amazon marketplaces for which the API requests will be made.
    protected: false
  - name: processing_status
    label: Processing Status
    value: "[\"IN_QUEUE\",\"IN_PROGRESS\"]"
    kind: array
    description: The processing status of the API request.
    protected: false
  - name: refresh_token
    label: Refresh Token
    kind: password
    description: The refresh token for the OAuth 2.0 client.
    protected: false
  - name: report_types
    label: Report Types
    value: "[\"GET_LEDGER_DETAIL_VIEW_DATA\",\"GET_MERCHANT_LISTINGS_ALL_DATA\"]"
    kind: array
    description: The types of reports that can be requested from the API.
    protected: false
  - name: role_arn
    label: Role Arn
    kind: string
    description: The Amazon Resource Name (ARN) of the role that the API will assume.
    protected: false
  - name: sandbox
    label: Sandbox
    value: "false"
    kind: boolean
    description: A boolean value indicating whether or not to use the Amazon Selling Partner API sandbox environment.
    protected: false
  - name: stream_map_config
    label: Stream Map Config
    kind: object
    description: The configuration for the stream map.
    protected: false
  - name: stream_maps
    label: Stream Maps
    kind: object
    description: The stream maps for the API requests.
    protected: false
  full_description: |-
    Amazon Selling Partner (SP) is a platform that helps sellers manage their Amazon business.

    Amazon Selling Partner (SP) is a comprehensive platform that provides sellers with tools to manage their Amazon business. It offers features such as inventory management, order fulfillment, advertising, and analytics. With SP, sellers can track their sales performance, manage their inventory, and optimize their product listings. The platform also provides access to Amazon's advertising tools, allowing sellers to create and manage campaigns to promote their products. Additionally, SP offers insights and analytics to help sellers make data-driven decisions to grow their business on Amazon.

    ## Settings


    ### AWS Access Key

    The access key ID for the AWS account.

    ### AWS Secret Key

    The secret access key for the AWS account.

    ### Client Secret

    The client secret for the OAuth 2.0 client.

    ### Flattening Enabled

    A boolean value indicating whether or not to flatten the response data.

    ### Flattening Max Depth

    The maximum depth to which the response data should be flattened.

    ### Lwa Client ID

    The client ID for the Login with Amazon (LWA) client.

    ### Marketplaces

    The Amazon marketplaces for which the API requests will be made.

    ### Processing Status

    The processing status of the API request.

    ### Refresh Token

    The refresh token for the OAuth 2.0 client.

    ### Report Types

    The types of reports that can be requested from the API.

    ### Role Arn

    The Amazon Resource Name (ARN) of the role that the API will assume.

    ### Sandbox

    A boolean value indicating whether or not to use the Amazon Selling Partner API sandbox environment.

    ### Stream Map Config

    The configuration for the stream map.

    ### Stream Maps

    The stream maps for the API requests.
- id: c6263b4c-090a-45f8-8669-9db5edc87ead
  name: tap-fulfil
  namespace: tap_fulfil
  variant: fulfilio
  label: Fulfil
  description: |-
    Fulfil is a cloud-based software for managing inventory, orders, and shipping.

    Fulfil is an all-in-one solution for businesses to manage their inventory, orders, and shipping. With features such as real-time inventory tracking, order management, and shipping integrations, Fulfil helps businesses streamline their operations and improve their overall efficiency. The software also includes tools for managing customer relationships, generating reports, and automating tasks, making it a comprehensive solution for businesses of all sizes. Additionally, Fulfil offers integrations with popular e-commerce platforms such as Shopify, Magento, and WooCommerce, allowing businesses to easily sync their online stores with their inventory and order management systems.
  logo_url: /assets/logos/extractors/fulfil.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-fulfil/
  pip_url: git+https://github.com/fulfilio/tap-fulfil.git
  repo: https://github.com/fulfilio/tap-fulfil
  capabilities:
  - discover
  - catalog
  full_description: |-
    Fulfil is a cloud-based software for managing inventory, orders, and shipping.

    Fulfil is an all-in-one solution for businesses to manage their inventory, orders, and shipping. With features such as real-time inventory tracking, order management, and shipping integrations, Fulfil helps businesses streamline their operations and improve their overall efficiency. The software also includes tools for managing customer relationships, generating reports, and automating tasks, making it a comprehensive solution for businesses of all sizes. Additionally, Fulfil offers integrations with popular e-commerce platforms such as Shopify, Magento, and WooCommerce, allowing businesses to easily sync their online stores with their inventory and order management systems.
- id: 7e2df860-abd3-4900-a771-c59f7305c77e
  name: tap-clarabridge
  namespace: tap_clarabridge
  variant: pathlight
  label: Clarabridge
  description: |-
    Clarabridge is a customer experience management software and service provider.

    Clarabridge offers a suite of software and services that help businesses collect, analyze, and act on customer feedback across various channels such as social media, email, chat, and surveys. The platform uses natural language processing and machine learning to extract insights from customer feedback and provide actionable insights to improve customer experience, increase customer loyalty, and drive business growth. Clarabridge's solutions are used by leading brands across industries such as retail, hospitality, financial services, and healthcare.
  logo_url: /assets/logos/extractors/clarabridge.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-clarabridge/
  pip_url: git+https://github.com/Pathlight/tap-clarabridge.git
  repo: https://github.com/Pathlight/tap-clarabridge
  capabilities:
  - discover
  - catalog
  full_description: |-
    Clarabridge is a customer experience management software and service provider.

    Clarabridge offers a suite of software and services that help businesses collect, analyze, and act on customer feedback across various channels such as social media, email, chat, and surveys. The platform uses natural language processing and machine learning to extract insights from customer feedback and provide actionable insights to improve customer experience, increase customer loyalty, and drive business growth. Clarabridge's solutions are used by leading brands across industries such as retail, hospitality, financial services, and healthcare.
- id: c4186ab8-7fbd-4857-8a2c-d004d2511823
  name: tap-govuk-bank-holidays
  namespace: tap_govuk_bank_holidays
  variant: matatika
  label: UK Bank Holidays
  description: |-
    UK Bank Holidays provides the official GOV.UK list of bank holidays in England and Wales, Scotland, and Northern Ireland.
    If a bank holiday is on a weekend, a ‘substitute’ weekday becomes a bank holiday, normally the following Monday.
    ## Learn more

    [GOV.UK Bank Holidays](https://www.gov.uk/bank-holidays)
  logo_url: https://www.gov.uk/assets/static/govuk-opengraph-image-dade2dad5775023b0568381c4c074b86318194edb36d3d68df721eea7deeac4b.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-govuk-bank-holidays/
  pip_url: git+https://github.com/Matatika/tap-spreadsheets-anywhere@v0.2.1
  repo: https://github.com/Matatika/tap-spreadsheets-anywhere
  executable: tap-spreadsheets-anywhere
  capabilities:
  - discover
  - state
  - catalog
  settings:
  - name: tables
    label: Tables
    value: |-
      [{
        "path":"https://www.gov.uk/",
        "name":"uk_bank_holidays_england_and_wales",
        "pattern":"bank-holidays.json",
        "start_date":"2018-01-01T00:00:00Z",
        "key_properties":["date"],
        "json_path":"$.england-and-wales.events",
        "format":"json"
      }, {
        "path":"https://www.gov.uk/",
        "name":"uk_bank_holidays_scotland",
        "pattern":"bank-holidays.json",
        "start_date":"2018-01-01T00:00:00Z",
        "key_properties":["date"],
        "json_path":"$.scotland.events",
        "format":"json"
      }, {
        "path":"https://www.gov.uk/",
        "name":"uk_bank_holidays_northern_ireland",
        "pattern":"bank-holidays.json",
        "start_date":"2018-01-01T00:00:00Z",
        "key_properties":["date"],
        "json_path":"$.northern-ireland.events",
        "format":"json"
      }]
    kind: array
    description: An array of table configuration objects, each specifying the source path, file name pattern, start date, key properties, JSON path, and format of the data to extract.
    required: false
    protected: false
  full_description: |-
    UK Bank Holidays provides the official GOV.UK list of bank holidays in England and Wales, Scotland, and Northern Ireland.
    If a bank holiday is on a weekend, a ‘substitute’ weekday becomes a bank holiday, normally the following Monday.
    ## Learn more

    [GOV.UK Bank Holidays](https://www.gov.uk/bank-holidays)

    ## Settings


    ### Tables

    An array of table configuration objects, each specifying the source path, file name pattern, start date, key properties, JSON path, and format of the data to extract.
- id: 7d0af4b1-4b6c-4fc2-b850-370983fe6597
  name: tap-monday
  namespace: tap_monday
  variant: gthesheep
  label: Monday.com
  description: "Monday.com is a team management and collaboration platform that helps teams plan, organize, and track their work in one central location. \n\nMonday.com is a cloud-based platform that allows teams to manage their projects, tasks, and workflows in a visual and intuitive way. It offers a variety of customizable templates and features, such as task assignments, deadlines, progress tracking, and communication tools, to help teams stay on top of their work and collaborate effectively. With Monday.com, teams can streamline their workflows, improve their productivity, and achieve their goals faster."
  logo_url: /assets/logos/extractors/monday.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-monday/
  pip_url: git+https://github.com/gthesheep/tap-monday.git
  repo: https://github.com/gthesheep/tap-monday
  capabilities:
  - discover
  - about
  - state
  - stream_maps
  - catalog
  settings:
  - name: auth_token
    label: API Token
    kind: password
    description: A unique identifier that grants access to the Monday.com API.
    protected: false
  - name: board_limit
    label: Board Limit
    kind: string
    description: The maximum number of boards that can be accessed through the API.
    protected: false
  full_description: "Monday.com is a team management and collaboration platform that helps teams plan, organize, and track their work in one central location. \n\nMonday.com is a cloud-based platform that allows teams to manage their projects, tasks, and workflows in a visual and intuitive way. It offers a variety of customizable templates and features, such as task assignments, deadlines, progress tracking, and communication tools, to help teams stay on top of their work and collaborate effectively. With Monday.com, teams can streamline their workflows, improve their productivity, and achieve their goals faster.\n\n## Settings\n\n\n### API Token\n\nA unique identifier that grants access to the Monday.com API.\n\n### Board Limit\n\nThe maximum number of boards that can be accessed through the API."
- id: aee84aa6-17f1-4938-85b3-597e8bbeebc7
  name: tap-dagster
  namespace: tap_dagster
  variant: voxmedia
  label: Dagster
  description: |-
    Dagster is an open-source data orchestrator for machine learning, analytics, and ETL.

    Dagster provides a unified framework for building data pipelines that allows developers to define the inputs, outputs, and dependencies of each step in the pipeline, making it easier to test, maintain, and scale complex data workflows. It also includes features such as data validation, error handling, and monitoring to ensure the reliability and quality of data processing. Dagster supports a variety of data sources and execution environments, including local development, cloud-based services, and containerized deployments.
  logo_url: /assets/logos/extractors/dagster.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-dagster/
  pip_url: git+https://github.com/voxmedia/tap-dagster.git
  repo: https://github.com/voxmedia/tap-dagster
  capabilities:
  - discover
  - schema_flattening
  - about
  - state
  - stream_maps
  - catalog
  settings:
  - name: auth_token
    label: Auth Token
    kind: password
    description: A token used for authentication when connecting to the Dagster API.
    protected: false
  - name: start_date
    label: Start Date
    kind: string
    description: The date from which to start streaming data.
    protected: false
  - name: api_url
    label: Api Url
    kind: string
    description: The URL of the Dagster API.
    protected: false
  - name: stream_maps
    label: Stream Maps
    kind: object
    description: A list of stream maps to use when streaming data.
    protected: false
  - name: stream_map_config
    label: Stream Map Config
    kind: object
    description: Configuration settings for the stream maps.
    protected: false
  - name: flattening_enabled
    label: Flattening Enabled
    kind: boolean
    description: Whether or not to flatten the data when streaming.
    protected: false
  - name: flattening_max_depth
    label: Flattening Max Depth
    kind: integer
    description: The maximum depth to which the data should be flattened.
    protected: false
  full_description: |-
    Dagster is an open-source data orchestrator for machine learning, analytics, and ETL.

    Dagster provides a unified framework for building data pipelines that allows developers to define the inputs, outputs, and dependencies of each step in the pipeline, making it easier to test, maintain, and scale complex data workflows. It also includes features such as data validation, error handling, and monitoring to ensure the reliability and quality of data processing. Dagster supports a variety of data sources and execution environments, including local development, cloud-based services, and containerized deployments.

    ## Settings


    ### Auth Token

    A token used for authentication when connecting to the Dagster API.

    ### Start Date

    The date from which to start streaming data.

    ### Api Url

    The URL of the Dagster API.

    ### Stream Maps

    A list of stream maps to use when streaming data.

    ### Stream Map Config

    Configuration settings for the stream maps.

    ### Flattening Enabled

    Whether or not to flatten the data when streaming.

    ### Flattening Max Depth

    The maximum depth to which the data should be flattened.
- id: ab433553-3d8d-40e3-802f-53f8c9e025b5
  name: tap-keap
  namespace: tap_keap
  variant: hotgluexyz
  label: Keap
  description: "Keap is customer relationship management (CRM) software designed for small businesses to manage their sales, marketing, and customer service in one platform. \n\nKeap offers a range of features including contact management, appointment scheduling, lead capture and segmentation, email marketing, automation, and reporting. It allows businesses to streamline their processes and improve their customer relationships by providing a centralized platform for managing customer interactions. Keap also integrates with other tools such as QuickBooks, Gmail, and Outlook to provide a seamless experience for users. With Keap, small businesses can save time, increase efficiency, and grow their customer base."
  logo_url: /assets/logos/extractors/keap.svg
  hidden: false
  docs: https://www.matatika.com/data-details/tap-keap/
  pip_url: git+https://gitlab.com/hotglue/tap-keap.git
  repo: https://gitlab.com/hotglue/tap-keap
  executable: tap-keap
  capabilities:
  - discover
  - schema_flattening
  - about
  - state
  - stream_maps
  - catalog
  settings:
  - name: access_token
    label: Access Token
    kind: password
    description: A unique identifier that grants access to the Keap API.
    protected: false
  - name: client_id
    label: Client ID
    kind: password
    description: A unique identifier for the client application that is making the API request.
    protected: false
  - name: client_secret
    label: Client Secret
    kind: password
    description: A secret key that is used to authenticate the client application.
    protected: false
  - name: expires_in
    label: Expires In
    kind: integer
    description: The amount of time in seconds until the access token expires.
    protected: false
  - name: flattening_enabled
    label: Flattening Enabled
    kind: boolean
    description: A boolean value indicating whether or not to flatten nested objects in the API response.
    protected: false
  - name: flattening_max_depth
    label: Flattening Max Depth
    kind: integer
    description: The maximum depth to which nested objects will be flattened.
    protected: false
  - name: start_date
    label: Start Date
    kind: date_iso8601
    description: The date from which to start retrieving data from the API.
    protected: false
  - name: stream_map_config
    label: Stream Map Config
    kind: object
    description: Configuration settings for the stream maps, applied when mapping API responses to a specific data model.
    protected: false
  - name: stream_maps
    label: Stream Maps
    kind: object
    description: A collection of stream maps that define how to transform API responses into a specific data model.
    protected: false
  full_description: "Keap is customer relationship management (CRM) software designed for small businesses to manage their sales, marketing, and customer service in one platform. \n\nKeap offers a range of features including contact management, appointment scheduling, lead capture and segmentation, email marketing, automation, and reporting. It allows businesses to streamline their processes and improve their customer relationships by providing a centralized platform for managing customer interactions. Keap also integrates with other tools such as QuickBooks, Gmail, and Outlook to provide a seamless experience for users. With Keap, small businesses can save time, increase efficiency, and grow their customer base.\n\n## Settings\n\n\n### Access Token\n\nA unique identifier that grants access to the Keap API.\n\n### Client ID\n\nA unique identifier for the client application that is making the API request.\n\n### Client Secret\n\nA secret key that is used to authenticate the client application.\n\n### Expires In\n\nThe amount of time in seconds until the access token expires.\n\n### Flattening Enabled\n\nA boolean value indicating whether or not to flatten nested objects in the API response.\n\n### Flattening Max Depth\n\nThe maximum depth to which nested objects will be flattened.\n\n### Start Date\n\nThe date from which to start retrieving data from the API.\n\n### Stream Map Config\n\nConfiguration settings for the stream maps, applied when mapping API responses to a specific data model.\n\n### Stream Maps\n\nA collection of stream maps that define how to transform API responses into a specific data model."
- id: b8428834-d995-4d66-9b31-105a83e80483
  name: tap-mailchimp
  namespace: tap_mailchimp
  variant: singer-io
  label: Mailchimp
  description: |-
    Mailchimp is an email marketing and automation platform.

    Mailchimp is a cloud-based platform that allows businesses to create and send email campaigns, manage subscriber lists, and automate marketing tasks. It offers a variety of templates and design tools to create professional-looking emails, as well as analytics to track the success of campaigns. Mailchimp also integrates with other tools and platforms, such as social media and e-commerce sites, to help businesses reach their target audience and grow their customer base.
  logo_url: /assets/logos/extractors/mailchimp.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-mailchimp/
  pip_url: tap-mailchimp
  repo: https://github.com/singer-io/tap-mailchimp
  capabilities:
  - discover
  - state
  - catalog
  settings:
  - name: request_timeout
    label: Request Timeout
    kind: integer
    description: The maximum amount of time the client will wait for a response from the server before timing out.
    protected: false
  - name: dc
    label: Data Center
    kind: string
    description: The unique identifier for the Mailchimp data center that the API request will be sent to.
    protected: false
  - name: page_size
    label: Page Size
    kind: integer
    description: The number of results to return per page when making paginated API requests.
    protected: false
  - name: user_agent
    label: User Agent
    kind: string
    description: A string that identifies the client making the API request.
    protected: false
  - name: start_date
    label: Start Date
    kind: date_iso8601
    description: The date from which to start retrieving data when making API requests that return historical data.
    protected: false
  - name: access_token
    label: Access Token
    kind: password
    description: A unique identifier that grants access to a specific Mailchimp account and its associated data.
    protected: false
  - name: api_key
    label: API Key
    kind: password
    description: A unique identifier that grants access to the Mailchimp API and its associated functionality.
    protected: false
  full_description: |-
    Mailchimp is an email marketing and automation platform.

    Mailchimp is a cloud-based platform that allows businesses to create and send email campaigns, manage subscriber lists, and automate marketing tasks. It offers a variety of templates and design tools to create professional-looking emails, as well as analytics to track the success of campaigns. Mailchimp also integrates with other tools and platforms, such as social media and e-commerce sites, to help businesses reach their target audience and grow their customer base.

    ## Settings


    ### Request Timeout

    The maximum amount of time the client will wait for a response from the server before timing out.

    ### Data Center

    The unique identifier for the Mailchimp data center that the API request will be sent to.

    ### Page Size

    The number of results to return per page when making paginated API requests.

    ### User Agent

    A string that identifies the client making the API request.

    ### Start Date

    The date from which to start retrieving data when making API requests that return historical data.

    ### Access Token

    A unique identifier that grants access to a specific Mailchimp account and its associated data.

    ### API Key

    A unique identifier that grants access to the Mailchimp API and its associated functionality.
- id: 5a59fc3d-3e5a-4e77-a69a-2607160127a6
  name: tap-rockgympro
  namespace: tap_rockgympro
  variant: cinchio
  label: Rock Gym Pro
  description: "Rock Gym Pro is gym management software designed for rock climbing facilities. \n\nRock Gym Pro is a comprehensive software solution that helps rock climbing gyms manage their operations, from membership and billing to scheduling and inventory management. It offers features such as online registration, automated billing, and real-time reporting, as well as tools for managing classes, events, and competitions. The software also includes a mobile app for members, allowing them to check schedules, sign up for classes, and track their progress. With Rock Gym Pro, gym owners and managers can streamline their operations, improve customer experience, and grow their business."
  logo_url: /assets/logos/extractors/rockgympro.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-rockgympro/
  pip_url: git+https://github.com/cinchio/tap-rockgympro.git
  repo: https://github.com/cinchio/tap-rockgympro
  capabilities:
  - discover
  - catalog
  full_description: "Rock Gym Pro is gym management software designed for rock climbing facilities. \n\nRock Gym Pro is a comprehensive software solution that helps rock climbing gyms manage their operations, from membership and billing to scheduling and inventory management. It offers features such as online registration, automated billing, and real-time reporting, as well as tools for managing classes, events, and competitions. The software also includes a mobile app for members, allowing them to check schedules, sign up for classes, and track their progress. With Rock Gym Pro, gym owners and managers can streamline their operations, improve customer experience, and grow their business."
- id: bc91e7c0-6ade-43f3-987e-56083ce3f834
  name: tap-anvil
  namespace: tap_anvil
  variant: svinstech
  label: Anvil
  description: |-
    Anvil is a web-based platform for building full-stack web apps with nothing but Python.

    Anvil allows users to build full-stack web applications using only Python code, without the need for front-end development skills or knowledge of HTML, CSS, or JavaScript. The platform provides a drag-and-drop interface for building user interfaces, as well as a built-in Python editor for writing server-side code. Anvil also includes a range of pre-built components and integrations, such as databases, authentication, and APIs, to help users build complex applications quickly and easily. With Anvil, developers can create web applications for a variety of use cases, from simple data entry forms to complex business applications.
  logo_url: /assets/logos/extractors/anvil.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-anvil/
  pip_url: git+https://github.com/svinstech/tap-anvil.git
  repo: https://github.com/svinstech/tap-anvil
  capabilities:
  - discover
  - schema_flattening
  - about
  - state
  - stream_maps
  - catalog
  settings:
  - name: api_key
    label: Api Key
    kind: password
    description: A unique identifier used to authenticate and authorize API requests.
    protected: false
  - name: stream_maps
    label: Stream Maps
    kind: object
    description: A mapping of input and output streams used to transform data.
    protected: false
  - name: stream_map_config
    label: Stream Map Config
    kind: object
    description: Configuration settings for the stream maps.
    protected: false
  - name: flattening_enabled
    label: Flattening Enabled
    kind: boolean
    description: A boolean value indicating whether or not to flatten nested data structures.
    protected: false
  - name: flattening_max_depth
    label: Flattening Max Depth
    kind: integer
    description: The maximum depth of nested data structures to flatten.
    protected: false
  full_description: |-
    Anvil is a web-based platform for building full-stack web apps with nothing but Python.

    Anvil allows users to build full-stack web applications using only Python code, without the need for front-end development skills or knowledge of HTML, CSS, or JavaScript. The platform provides a drag-and-drop interface for building user interfaces, as well as a built-in Python editor for writing server-side code. Anvil also includes a range of pre-built components and integrations, such as databases, authentication, and APIs, to help users build complex applications quickly and easily. With Anvil, developers can create web applications for a variety of use cases, from simple data entry forms to complex business applications.

    ## Settings


    ### Api Key

    A unique identifier used to authenticate and authorize API requests.

    ### Stream Maps

    A mapping of input and output streams used to transform data.

    ### Stream Map Config

    Configuration settings for the stream maps.

    ### Flattening Enabled

    A boolean value indicating whether or not to flatten nested data structures.

    ### Flattening Max Depth

    The maximum depth of nested data structures to flatten.
- id: 7e81cdaa-00ec-4858-a056-1ae50e65ef69
  name: tap-lessonly
  namespace: tap_lessonly
  variant: pathlight
  label: Lessonly
  description: |-
    Lessonly is a software platform that provides online training and learning management solutions for businesses.

    Lessonly is a cloud-based learning management system that enables businesses to create and deliver online training courses, quizzes, and assessments to their employees. The platform offers a range of features, including customizable course templates, interactive content creation tools, and analytics and reporting capabilities. With Lessonly, businesses can easily onboard new employees, train existing staff, and track their progress and performance. The platform is designed to be user-friendly and intuitive, making it easy for businesses of all sizes to implement and use.
  logo_url: /assets/logos/extractors/lessonly.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-lessonly/
  pip_url: git+https://github.com/Pathlight/tap-lessonly.git
  repo: https://github.com/Pathlight/tap-lessonly
  capabilities:
  - discover
  - state
  - catalog
  settings:
  - name: api_key
    label: API Key
    kind: password
    description: A unique identifier used to authenticate and authorize API requests.
    protected: false
  - name: subdomain
    label: Subdomain
    kind: string
    description: The unique identifier for the Lessonly account that the API requests will be made to.
    protected: false
  full_description: |-
    Lessonly is a software platform that provides online training and learning management solutions for businesses.

    Lessonly is a cloud-based learning management system that enables businesses to create and deliver online training courses, quizzes, and assessments to their employees. The platform offers a range of features, including customizable course templates, interactive content creation tools, and analytics and reporting capabilities. With Lessonly, businesses can easily onboard new employees, train existing staff, and track their progress and performance. The platform is designed to be user-friendly and intuitive, making it easy for businesses of all sizes to implement and use.

    ## Settings


    ### API Key

    A unique identifier used to authenticate and authorize API requests.

    ### Subdomain

    The unique identifier for the Lessonly account that the API requests will be made to.
- id: 7a049b0d-b76c-42ff-93ee-c8d579454fbb
  name: tap-partnerize
  namespace: tap_partnerize
  variant: voxmedia
  label: Partnerize
  description: |-
    Partnerize is a partnership management platform.

    Partnerize is a cloud-based platform that helps businesses manage their partnerships with affiliates, influencers, and other partners. It provides tools for tracking partner performance, managing commissions and payouts, and optimizing partner relationships. The platform also offers real-time analytics and reporting, as well as integrations with other marketing and analytics tools. With Partnerize, businesses can streamline their partnership programs and drive more revenue from their partnerships.
  logo_url: /assets/logos/extractors/partnerize.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-partnerize/
  pip_url: git+https://github.com/voxmedia/tap-partnerize.git
  repo: https://github.com/voxmedia/tap-partnerize
  capabilities:
  - discover
  - about
  - state
  - stream_maps
  - catalog
  settings:
  - name: username
    label: Username
    kind: string
    description: The username used to authenticate with the Partnerize API.
    protected: false
  - name: password
    label: Password
    kind: password
    description: The password used to authenticate with the Partnerize API.
    protected: false
  - name: publisher_id
    label: Publisher ID
    kind: string
    description: The unique identifier for the publisher account that is being accessed.
    protected: false
  - name: start_date
    label: Start Date
    kind: string
    description: The date from which data should be retrieved from the Partnerize API.
    protected: false
  full_description: |-
    Partnerize is a partnership management platform.

    Partnerize is a cloud-based platform that helps businesses manage their partnerships with affiliates, influencers, and other partners. It provides tools for tracking partner performance, managing commissions and payouts, and optimizing partner relationships. The platform also offers real-time analytics and reporting, as well as integrations with other marketing and analytics tools. With Partnerize, businesses can streamline their partnership programs and drive more revenue from their partnerships.

    ## Settings


    ### Username

    The username used to authenticate with the Partnerize API.

    ### Password

    The password used to authenticate with the Partnerize API.

    ### Publisher ID

    The unique identifier for the publisher account that is being accessed.

    ### Start Date

    The date from which data should be retrieved from the Partnerize API.
- id: a880efa9-71b0-4be2-8c96-ee582ab4a13e
  name: tap-sumologic
  namespace: tap_sumologic
  variant: splitio
  label: Sumo Logic
  description: |-
    Sumo Logic is a cloud-based machine data analytics platform.

    Sumo Logic provides a cloud-based machine data analytics platform that enables organizations to collect, manage, and analyze log data and other machine data in real time to gain operational and business insights. The platform offers a range of features, including log search and analysis, real-time dashboards and alerts, machine learning-powered anomaly detection, and compliance and security monitoring. Sumo Logic is used by organizations across various industries, including e-commerce, financial services, healthcare, and more.
  logo_url: /assets/logos/extractors/sumologic.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-sumologic/
  pip_url: git+https://github.com/splitio/tap-sumologic.git
  repo: https://github.com/splitio/tap-sumologic
  capabilities:
  - discover
  - catalog
  settings:
  - name: tables
    label: Tables
    kind: array
    description: The name of the table(s) to query in Sumo Logic.
    protected: false
  - name: sumologic_root_url
    label: Sumologic Root Url
    kind: string
    description: The base URL for the Sumo Logic API.
    protected: false
  - name: sumologic_access_key
    label: Sumologic Access Key
    kind: password
    description: The access key used to authenticate with the Sumo Logic API.
    protected: false
  - name: sumologic_access_id
    label: Sumologic Access Id
    kind: password
    description: The access ID used to authenticate with the Sumo Logic API.
    protected: false
  - name: end_date
    label: End Date
    kind: date_iso8601
    description: The end date/time for the query in ISO 8601 format.
    protected: false
  - name: start_date
    label: Start Date
    kind: date_iso8601
    description: The start date/time for the query in ISO 8601 format.
    protected: false
  full_description: |-
    Sumo Logic is a cloud-based machine data analytics platform.

    Sumo Logic provides a cloud-based machine data analytics platform that enables organizations to collect, manage, and analyze log data and other machine data in real time to gain operational and business insights. The platform offers a range of features, including log search and analysis, real-time dashboards and alerts, machine learning-powered anomaly detection, and compliance and security monitoring. Sumo Logic is used by organizations across various industries, including e-commerce, financial services, healthcare, and more.

    ## Settings


    ### Tables

    The name of the table(s) to query in Sumo Logic.

    ### Sumologic Root Url

    The base URL for the Sumo Logic API.

    ### Sumologic Access Key

    The access key used to authenticate with the Sumo Logic API.

    ### Sumologic Access Id

    The access ID used to authenticate with the Sumo Logic API.

    ### End Date

    The end date/time for the query in ISO 8601 format.

    ### Start Date

    The start date/time for the query in ISO 8601 format.
- id: 8f34b5ea-b72e-4b69-a2ca-9d890590a962
  name: tap-snapengage
  namespace: tap_snapengage
  variant: pathlight
  label: SnapEngage
  description: |-
    SnapEngage is live chat software for websites and online businesses.

    SnapEngage is powerful live chat software that enables businesses to engage with their website visitors in real time, providing personalized support and assistance to increase customer satisfaction and sales. With features such as chatbots, integrations with popular CRMs and helpdesk tools, and advanced analytics, SnapEngage helps businesses streamline their customer support operations and improve their online customer experience.
  logo_url: /assets/logos/extractors/snapengage.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-snapengage/
  pip_url: git+https://github.com/Pathlight/tap-snapengage.git
  repo: https://github.com/Pathlight/tap-snapengage
  capabilities:
  - discover
  - catalog
  full_description: |-
    SnapEngage is live chat software for websites and online businesses.

    SnapEngage is powerful live chat software that enables businesses to engage with their website visitors in real time, providing personalized support and assistance to increase customer satisfaction and sales. With features such as chatbots, integrations with popular CRMs and helpdesk tools, and advanced analytics, SnapEngage helps businesses streamline their customer support operations and improve their online customer experience.
- id: ee999062-2aa9-47ed-9007-6fd8a74a24f7
  name: tap-maestroqa
  namespace: tap_maestroqa
  variant: pathlight
  label: MaestroQA
  description: |-
    MaestroQA is a quality assurance and training platform for customer service teams.

    MaestroQA is a software platform that helps customer service teams improve their quality assurance and training processes. It allows teams to monitor and evaluate customer interactions, identify areas for improvement, and provide targeted coaching and training to agents. The platform also includes features for collaboration and reporting, making it easy for teams to work together to improve customer service performance. With MaestroQA, customer service teams can ensure that they are delivering high-quality service that meets the needs of their customers.
  logo_url: /assets/logos/extractors/maestroqa.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-maestroqa/
  pip_url: git+https://github.com/Pathlight/tap-maestroqa.git
  repo: https://github.com/Pathlight/tap-maestroqa
  capabilities:
  - discover
  - state
  - catalog
  settings:
  - name: start_date
    label: Start Date
    kind: date_iso8601
    description: The date from which to start retrieving data from the MaestroQA API.
    protected: false
  - name: api_token
    label: API Token
    kind: password
    description: A unique authentication token required to access the MaestroQA API.
    protected: false
  full_description: |-
    MaestroQA is a quality assurance and training platform for customer service teams.

    MaestroQA is a software platform that helps customer service teams improve their quality assurance and training processes. It allows teams to monitor and evaluate customer interactions, identify areas for improvement, and provide targeted coaching and training to agents. The platform also includes features for collaboration and reporting, making it easy for teams to work together to improve customer service performance. With MaestroQA, customer service teams can ensure that they are delivering high-quality service that meets the needs of their customers.

    ## Settings


    ### Start Date

    The date from which to start retrieving data from the MaestroQA API.

    ### API Token

    A unique authentication token required to access the MaestroQA API.
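
    ### Example

    A minimal `meltano.yml` extractor entry using these settings might look like the following sketch (the date is illustrative, and the token is supplied via an environment variable rather than stored in the file):

    ```yaml
    plugins:
      extractors:
      - name: tap-maestroqa
        config:
          start_date: "2023-01-01T00:00:00Z"
          api_token: $TAP_MAESTROQA_API_TOKEN
    ```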
- id: 0afcd3f1-9504-4ea8-84b2-ce52a25bac01
  name: tap-search-ads
  namespace: tap_search_ads
  variant: uptilab2
  label: Google Search Ads 360
  description: |-
    Google Search Ads 360 is a search management platform that helps advertisers efficiently manage and optimize their search advertising campaigns across multiple search engines and platforms.

    Google Search Ads 360 is a powerful tool that allows advertisers to manage and optimize their search advertising campaigns across multiple search engines and platforms, including Google, Bing, Yahoo, and more. With features such as automated bidding, advanced reporting, and cross-channel attribution, Google Search Ads 360 helps advertisers maximize their ROI and drive more conversions. It also integrates seamlessly with other Google marketing tools, such as Google Analytics and Google Ads, to provide a comprehensive view of campaign performance and audience insights.
  logo_url: /assets/logos/extractors/search-ads.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-search-ads/
  pip_url: git+https://github.com/uptilab2/tap-search-ads.git
  repo: https://github.com/uptilab2/tap-search-ads
  capabilities:
  - discover
  - catalog
  full_description: |-
    Google Search Ads 360 is a search management platform that helps advertisers efficiently manage and optimize their search advertising campaigns across multiple search engines and platforms.

    Google Search Ads 360 is a powerful tool that allows advertisers to manage and optimize their search advertising campaigns across multiple search engines and platforms, including Google, Bing, Yahoo, and more. With features such as automated bidding, advanced reporting, and cross-channel attribution, Google Search Ads 360 helps advertisers maximize their ROI and drive more conversions. It also integrates seamlessly with other Google marketing tools, such as Google Analytics and Google Ads, to provide a comprehensive view of campaign performance and audience insights.
- id: c439b543-9822-4294-b183-d20a13490fb2
  name: tap-meltano
  namespace: tap_meltano
  variant: matatika
  label: Meltano
  description: |-
    Meltano is an open-source data integration tool.

    Meltano is a free and open-source data integration tool that allows users to extract, load, and transform data from various sources into a data warehouse. It provides a user-friendly interface for managing data pipelines and supports a wide range of data sources, including databases, APIs, and file formats. Meltano also offers a suite of plugins and integrations for popular data tools like Google Analytics, Salesforce, and HubSpot, making it easy to connect and manage data from multiple sources in one place. With Meltano, users can automate data pipelines, monitor data quality, and collaborate with team members on data projects.
    ### Prerequisites
    The Meltano database URI can be obtained from the Meltano instance that you are connecting to. It is typically provided by the administrator or the person who set up the Meltano instance. If you do not know the Meltano database URI, you can ask the administrator or the person who set up the instance for this information.
  logo_url: /assets/images/datasource/tap-meltano.png
  hidden: false
  docs: https://www.matatika.com/docs/instant-insights/tap-meltano/
  pip_url: git+https://github.com/Matatika/tap-meltano@v0.4.0
  repo: https://github.com/Matatika/tap-meltano
  capabilities:
  - discover
  - state
  - catalog
  settings:
  - name: meltano_database_uri
    label: Meltano Database URI
    value: $MELTANO_DATABASE_URI
    kind: password
    description: The URI for the Meltano database.
    required: "true"
    protected: false
  full_description: |-
    Meltano is an open-source data integration tool.

    Meltano is a free and open-source data integration tool that allows users to extract, load, and transform data from various sources into a data warehouse. It provides a user-friendly interface for managing data pipelines and supports a wide range of data sources, including databases, APIs, and file formats. Meltano also offers a suite of plugins and integrations for popular data tools like Google Analytics, Salesforce, and HubSpot, making it easy to connect and manage data from multiple sources in one place. With Meltano, users can automate data pipelines, monitor data quality, and collaborate with team members on data projects.
    ### Prerequisites
    The Meltano database URI can be obtained from the Meltano instance that you are connecting to. It is typically provided by the administrator or the person who set up the Meltano instance. If you do not know the Meltano database URI, you can ask the administrator or the person who set up the instance for this information.

    ## Settings


    ### Meltano Database URI

    The URI for the Meltano database.
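
    For example, a Meltano system database backed by PostgreSQL typically uses a SQLAlchemy-style URI of this form (user, password, host, and database name are placeholders):

    ```
    postgresql://<user>:<password>@<host>:5432/<database>
    ```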
- id: 7a195f18-b5ea-4c08-899d-40f2ca5d02b5
  name: tap-exacttarget
  namespace: tap_exacttarget
  variant: singer-io
  label: Salesforce Marketing Cloud
  description: "Salesforce Marketing Cloud is a cloud-based marketing platform that helps businesses manage and automate their marketing campaigns across multiple channels.\n\nSalesforce Marketing Cloud provides a suite of tools for businesses to create, manage, and analyze their marketing campaigns across email, social media, mobile, and web. It allows businesses to segment their audience, personalize their messaging, and track their performance in real-time. The platform also includes features for lead generation, customer journey mapping, and marketing automation, making it a comprehensive solution for businesses looking to streamline their marketing efforts."
  logo_url: /assets/logos/extractors/exacttarget.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-exacttarget/
  pip_url: tap-exacttarget
  repo: https://github.com/singer-io/tap-exacttarget
  capabilities:
  - discover
  - properties
  - state
  settings:
  - name: batch_size
    label: Batch Size
    kind: integer
    description: The number of records to process in each API call.
    protected: false
  - name: tenant_subdomain
    label: Tenant Subdomain
    kind: string
    description: The subdomain that identifies your Salesforce Marketing Cloud account in its API endpoint URLs.
    protected: false
  - name: request_timeout
    label: Request Timeout
    kind: integer
    description: The maximum amount of time to wait for a response from the API.
    protected: false
  - name: start_date
    label: Start Date
    kind: date_iso8601
    description: The date from which to retrieve data.
    protected: false
  - name: client_id
    label: Client ID
    kind: password
    description: The unique identifier for the connected app in Salesforce.
    protected: false
  - name: client_secret
    label: Client Secret
    kind: password
    description: The secret key for the connected app in Salesforce.
    protected: false
  full_description: "Salesforce Marketing Cloud is a cloud-based marketing platform that helps businesses manage and automate their marketing campaigns across multiple channels.\n\nSalesforce Marketing Cloud provides a suite of tools for businesses to create, manage, and analyze their marketing campaigns across email, social media, mobile, and web. It allows businesses to segment their audience, personalize their messaging, and track their performance in real-time. The platform also includes features for lead generation, customer journey mapping, and marketing automation, making it a comprehensive solution for businesses looking to streamline their marketing efforts.\n\n## Settings\n\n\n### Batch Size\n\nThe number of records to process in each API call.\n\n### Tenant Subdomain\n\nThe subdomain that identifies your Salesforce Marketing Cloud account in its API endpoint URLs.\n\n### Request Timeout\n\nThe maximum amount of time to wait for a response from the API.\n\n### Start Date\n\nThe date from which to retrieve data.\n\n### Client ID\n\nThe unique identifier for the connected app in Salesforce.\n\n### Client Secret\n\nThe secret key for the connected app in Salesforce."
- id: 82f760d7-1821-4c47-affd-ac58e89d892c
  name: tap-sailthru
  namespace: tap_sailthru
  variant: singer-io
  label: Sailthru
  description: |-
    Sailthru is a personalized marketing automation platform.

    Sailthru is a cloud-based marketing automation platform that helps businesses personalize customer experiences across email, web, and mobile channels. It uses machine learning algorithms to analyze customer data and behavior, and then delivers personalized content and recommendations to each individual customer. Sailthru also offers tools for A/B testing, segmentation, and reporting to help businesses optimize their marketing campaigns and improve customer engagement.
  logo_url: /assets/logos/extractors/sailthru.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-sailthru/
  pip_url: tap-sailthru
  repo: https://github.com/singer-io/tap-sailthru
  capabilities:
  - discover
  - state
  - catalog
  settings:
  - name: user_agent
    label: User Agent
    kind: string
    description: A string identifying the client making the API request.
    protected: false
  - name: request_timeout
    label: Request Timeout
    kind: integer
    description: The maximum time in seconds to wait for a response from the API.
    protected: false
  - name: start_date
    label: Start Date
    kind: date_iso8601
    description: The earliest date for which data should be retrieved.
    protected: false
  - name: api_key
    label: API Key
    kind: password
    description: A unique identifier used to authenticate API requests.
    protected: false
  - name: api_secret
    label: API Secret
    kind: password
    description: A secret key used to authenticate API requests.
    protected: false
  full_description: |-
    Sailthru is a personalized marketing automation platform.

    Sailthru is a cloud-based marketing automation platform that helps businesses personalize customer experiences across email, web, and mobile channels. It uses machine learning algorithms to analyze customer data and behavior, and then delivers personalized content and recommendations to each individual customer. Sailthru also offers tools for A/B testing, segmentation, and reporting to help businesses optimize their marketing campaigns and improve customer engagement.

    ## Settings


    ### User Agent

    A string identifying the client making the API request.

    ### Request Timeout

    The maximum time in seconds to wait for a response from the API.

    ### Start Date

    The earliest date for which data should be retrieved.

    ### API Key

    A unique identifier used to authenticate API requests.

    ### API Secret

    A secret key used to authenticate API requests.
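
    ### Example

    In a Meltano project, sensitive settings like these are usually supplied through environment variables rather than committed to `meltano.yml`; the variable names below follow Meltano's `<PLUGIN>_<SETTING>` convention, and the values are placeholders:

    ```shell
    export TAP_SAILTHRU_API_KEY='<your-api-key>'
    export TAP_SAILTHRU_API_SECRET='<your-api-secret>'
    export TAP_SAILTHRU_START_DATE='2023-01-01T00:00:00Z'
    ```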
- id: 2264065d-a555-4eb7-bb08-10ff854d23c5
  name: tap-agilecrm
  namespace: tap_agilecrm
  variant: dreamdata-io
  label: Agile CRM
  description: |-
    Agile CRM is a customer relationship management software that helps businesses manage their sales, marketing, and customer service activities in one platform.

    Agile CRM is designed to streamline customer interactions by providing a centralized platform for managing sales, marketing, and customer service activities. It offers features such as contact management, lead scoring, email marketing, social media integration, and analytics to help businesses improve their customer engagement and increase sales. The software also includes automation tools to help businesses automate repetitive tasks and workflows, freeing up time for more important tasks. Additionally, Agile CRM offers integrations with popular third-party tools such as Zapier, Slack, and Shopify, making it a versatile solution for businesses of all sizes.
  logo_url: /assets/logos/extractors/agilecrm.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-agilecrm/
  pip_url: git+https://github.com/dreamdata-io/tap-agilecrm.git
  repo: https://github.com/dreamdata-io/tap-agilecrm
  capabilities:
  - discover
  - catalog
  full_description: |-
    Agile CRM is a customer relationship management software that helps businesses manage their sales, marketing, and customer service activities in one platform.

    Agile CRM is designed to streamline customer interactions by providing a centralized platform for managing sales, marketing, and customer service activities. It offers features such as contact management, lead scoring, email marketing, social media integration, and analytics to help businesses improve their customer engagement and increase sales. The software also includes automation tools to help businesses automate repetitive tasks and workflows, freeing up time for more important tasks. Additionally, Agile CRM offers integrations with popular third-party tools such as Zapier, Slack, and Shopify, making it a versatile solution for businesses of all sizes.
- id: c274170e-43cf-4605-96d3-4c81637ccb87
  name: tap-meshstack
  namespace: tap_meshstack
  variant: meshcloud
  label: Meshstack
  description: |-
    Meshstack is an IoT platform that enables the deployment and management of connected devices and applications.

    Meshstack provides a comprehensive IoT platform that allows businesses to easily deploy and manage connected devices and applications. The platform includes features such as device management, data analytics, and cloud integration, making it easy for businesses to collect and analyze data from their IoT devices. Meshstack also offers a range of tools and services to help businesses develop and deploy their own IoT applications, including APIs, SDKs, and developer tools. With Meshstack, businesses can quickly and easily build and deploy IoT solutions that help them improve efficiency, reduce costs, and drive innovation.
  logo_url: /assets/logos/extractors/meshstack.png
  hidden: false
  docs: https://www.matatika.com/data-details/tap-meshstack/
  pip_url: git+https://github.com/meshcloud/tap-meshstack.git
  repo: https://github.com/meshcloud/tap-meshstack
  capabilities: