# Using custom connectors
import ContainerProviders from '@site/static/_docker_image_registries.md';
:::info

This guide walks through the setup of a Docker-based custom connector. To understand how to use our low-code connector builder, read our guide here.

:::
If our connector catalog does not fulfill your needs, you can build your own Airbyte connectors! You can either use our low-code connector builder or upload a Docker-based custom connector.
This page walks through the process to upload a Docker-based custom connector. This is an ideal route for connectors with an internal use case, like a private API specific to your organization. This guide assumes the following:
If you prefer video tutorials, we recorded a demo on how to upload connector images to a GCP Artifact Registry.
Airbyte needs to pull its Docker images from a remote Docker registry to consume a connector. You should host your custom connector images on a private Docker registry. Here are some resources to create a private Docker registry, in case your organization does not already have one:
<ContainerProviders/>

To push and pull images to your private Docker registry, you need to authenticate to it:
See the Airbyte Protocol Docker Interface page for specific Docker image requirements, such as required environment variables.
GCP offers the gcloud credential helper to log in to your Artifact Registry.
Please run the command detailed here to authenticate your local environment/CI environment to your Artifact registry.
Run the same authentication flow on your Compute Engine instance.
If you do not want to use gcloud, GCP offers other authentication methods detailed here.
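As a sketch of the Artifact Registry flow above, the registry host and remote image name follow a predictable pattern. The region, project, and repository names below are hypothetical; substitute your own:

```shell
# Hypothetical values -- substitute your own region, project, and repository.
REGION="us-central1"
PROJECT="my-gcp-project"
REPO="airbyte-custom-connectors"

# Artifact Registry Docker hosts follow the <region>-docker.pkg.dev pattern.
REGISTRY_HOST="${REGION}-docker.pkg.dev"

# One-time setup: register the gcloud credential helper for this host
# (requires an installed and authenticated gcloud CLI):
# gcloud auth configure-docker "${REGISTRY_HOST}"

# Fully qualified remote image name to use with docker tag/push:
echo "${REGISTRY_HOST}/${PROJECT}/${REPO}/source-custom:0.1.0"
```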
You can authenticate to an ECR private registry using the `aws` CLI:

```shell
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
```

You can find details about this command and other available authentication methods here.
You will have to authenticate both your local/CI environment (where you build your image) and the EC2 instance where your Airbyte instance is running.
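As a concrete sketch of the ECR login above, with a hypothetical AWS account ID and region substituted into the placeholders:

```shell
# Hypothetical values -- substitute your own AWS account ID and region.
ACCOUNT_ID="123456789012"
REGION="eu-west-1"

# ECR private registry hosts follow this pattern:
ECR_HOST="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Log in (requires the aws and docker CLIs):
# aws ecr get-login-password --region "${REGION}" | \
#   docker login --username AWS --password-stdin "${ECR_HOST}"

# Fully qualified remote image name to use with docker tag/push:
echo "${ECR_HOST}/source-custom:0.1.0"
```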
You can authenticate to an Azure Container Registry using the `az` CLI:

```shell
az acr login --name <registry-name>
```

You can find details about this command here.
You will have to authenticate both your local/CI environment (where your image is built) and the Azure Virtual Machine instance where your Airbyte instance is running.
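As a sketch of the ACR flow above, with a hypothetical registry name filled into the placeholder:

```shell
# Hypothetical registry name -- substitute your own.
REGISTRY_NAME="myregistry"

# Log in (requires the az and docker CLIs):
# az acr login --name "${REGISTRY_NAME}"

# ACR hosts follow the <registry-name>.azurecr.io pattern, so images
# pushed to this registry are addressed as:
echo "${REGISTRY_NAME}.azurecr.io/source-custom:0.1.0"
```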
You can use Docker Desktop to authenticate your local machine to your DockerHub registry by signing in on the desktop application using your Docker ID. You need to use a service account to authenticate your Airbyte instance to your DockerHub registry.
We recommend setting up authentication on your Docker registry to make it private. Available authentication options for an open-source Docker registry are listed here.
To authenticate your local/CI environment and Airbyte instance, you can use the `docker login` command.
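As a sketch of logging in to a self-hosted registry, using a hypothetical registry host:

```shell
# Hypothetical self-hosted registry host -- substitute your own.
REGISTRY_HOST="registry.internal.example.com:5000"

# Log in interactively (prompts for username and password):
# docker login "${REGISTRY_HOST}"

# Images pushed to this registry are addressed as:
echo "${REGISTRY_HOST}/source-custom:0.1.0"
```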
You can use the previous section's authentication flow to authenticate your local/CI to your private Docker registry. If you provisioned your Kubernetes cluster using AWS EKS, GCP GKE, or Azure AKS: it is very likely that you already allowed your cluster to pull images from the respective container registry service of your cloud provider. If you want Airbyte to pull images from another private Docker registry, you will have to do the following:
1. Create a Secret in Kubernetes that will host your authentication credentials. This Kubernetes documentation explains how to proceed.
2. Set the `JOB_KUBE_MAIN_CONTAINER_IMAGE_PULL_SECRET` environment variable on the `airbyte-worker` pod. The value must be the name of your previously created Kubernetes Secret.

To build and push your connector image:

1. Build the image: `docker build . -t my-custom-connectors/source-custom:0.1.0`
2. Tag the image for your remote registry with the `docker tag` command. The structure of the remote tag depends on your cloud provider's container registry service. Please check their online documentation linked at the top.
3. Run `docker push <image-name>:<tag>` to push the image to your private Docker registry.

You should run all the above commands from your local/CI environment, where your connector source code is available.
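As a hedged sketch of wiring Airbyte to a private registry on Kubernetes — the secret name, registry host, and credentials below are all hypothetical placeholders:

```shell
# Hypothetical values -- substitute your own registry host and credentials.
SECRET_NAME="custom-registry-creds"
REGISTRY_HOST="registry.internal.example.com:5000"

# 1. Create the Kubernetes Secret holding the registry credentials
#    (requires kubectl pointed at your cluster):
# kubectl create secret docker-registry "${SECRET_NAME}" \
#   --docker-server="${REGISTRY_HOST}" \
#   --docker-username="airbyte-puller" \
#   --docker-password="<password-or-token>"

# 2. Point the airbyte-worker pod at the Secret via this environment variable:
echo "JOB_KUBE_MAIN_CONTAINER_IMAGE_PULL_SECRET=${SECRET_NAME}"
```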
At this step, you should have:
To validate the previous steps, pull your connector image from your private registry. On your Airbyte instance, run `docker pull <image-name>:<tag>` if you are using our docker-compose deployment, or start a pod that uses the connector image.
1. Click **Workspace settings** > **Sources**/**Destinations**, depending on your connector type.
2. Click **New connector** > **Add a new Docker connector**.
3. Name your custom connector in **Connector display name**. This is just the display name used for your workspace.
4. Fill in the **Docker full image name** and **Docker image tag**.
5. (Optional) Add a link to the connector's documentation in **Connector documentation URL**. You can fill this with any value if you do not have online documentation for your connector. This documentation will be linked on your connector settings page.
6. Add the connector to save the configuration. You can now select your new connector when setting up a new connection!
If you are running Airbyte in kind (Kubernetes in Docker, the default method for abctl), you must load the connector's Docker image into the cluster. If you are seeing the following error, it likely means that the Docker image has not been properly loaded into the cluster.
A connector image can be loaded into the cluster using the following command:

```shell
kind load docker-image <image-name>:<image-tag> -n airbyte-abctl
```
For the example above, the command would be:

```shell
kind load docker-image airbyte/source-custom:1 -n airbyte-abctl
```