Containers with Read-Only Root Filesystem
By default, CloudQuery requires a read-write file system: the CLI creates a socket file that the plugins use to communicate with it, and such a socket cannot be created on a read-only file system. To make a read-only container work, each plugin instead needs to be started locally as a gRPC server that the CLI connects to over TCP.
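In this mode, the CLI does not spawn the plugin process itself; each plugin binary is launched up front as a local gRPC server on a fixed port, and the sync config points at that address. As a minimal sketch (the placeholder path is resolved properly by the start script later in this guide):
# run a downloaded plugin binary in server mode on a local port
<path-to-plugin-binary> serve --address localhost:7778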
Creating a container that works with a read-only file system
To create a container that works with a read-only file system, you will need the following:
- a CloudQuery config file that is used to install the required plugins
- a CloudQuery config file that contains the specs needed for the sync
- a start script used as an entry point for the container that will start the plugins and run the sync
Downloading plugins requires authentication. Normally this means running cloudquery login, but that is not possible in a CI environment or inside a Docker build process. The recommended way to handle this is to use an API key. More information on generating an API key can be found in the CloudQuery documentation.
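The CLI reads the key from the CLOUDQUERY_API_KEY environment variable; in the Dockerfile below it is passed in as a build argument, and in CI you can simply export it before running any command that downloads plugins:
# no interactive login needed once the API key is in the environment
export CLOUDQUERY_API_KEY=<your-api-key>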
In the example below, we will use the XKCD source plugin and the PostgreSQL destination plugin.
Create a config file for the installation
This config file should reference the plugins that you want to install, along with the registry and path they should be downloaded from:
kind: source
spec:
  name: xkcd
  path: hermanschaaf/xkcd
  version: v2.0.0
  registry: github
  destinations: ["postgresql"]
  tables: ["*"]
---
kind: destination
spec:
  name: "postgresql"
  path: "cloudquery/postgresql"
  version: "v8.2.6"
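This file is consumed by the plugin install command (run during the image build below), which downloads the plugin binaries into the .cq directory that the start script later locates with find:
# downloads the xkcd and postgresql plugin binaries into ./.cq
cloudquery plugin install install.yaml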
Create a config file for the sync
This config file should reference the plugins running as gRPC servers (fill in the connection_string). We will run XKCD on port 7778 and PostgreSQL on port 7779:
kind: source
spec:
  name: xkcd
  registry: grpc
  path: localhost:7778
  destinations: ["postgresql"]
  tables: ["*"]
---
kind: destination
spec:
  name: "postgresql"
  registry: grpc
  path: localhost:7779
  spec:
    connection_string: "[REDACTED]"
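If you prefer not to bake the connection string into the image, CloudQuery spec files also support environment variable expansion. As a sketch (the variable name PG_CONNECTION_STRING is just an example), the nested destination spec could read:
spec:
  # expanded from the container environment when the sync runs
  connection_string: "${PG_CONNECTION_STRING}"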
Create a start script to start the plugins
This shell script will start the plugins and run the sync with the provided parameters when we run the container. To make it easier to maintain, we will use the find command to get the path to the actual plugin binary. This way, we don't have to worry about the version of the plugin.
#!/bin/sh
# get the path to the xkcd plugin binary:
xkcd_plugin_path=$(find .cq -regex ".*/xkcd/.*/plugin")
# get the path to the postgresql plugin binary:
postgresql_plugin_path=$(find .cq -regex ".*/postgresql/.*/plugin")
# start both plugins as gRPC servers in the background, then run the CLI with the provided arguments:
sh -c "${xkcd_plugin_path} serve --address localhost:7778" &
sh -c "${postgresql_plugin_path} serve --address localhost:7779" &
/app/cloudquery "$@"
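Docker appends whatever arguments you pass after the image name to the entry point, so running the container with --no-log-file sync sync.yaml (as in the final example below) ends up executing /app/cloudquery --no-log-file sync sync.yaml once both plugins are listening.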
Connecting it all together
We will use the ghcr.io/cloudquery/cloudquery:latest image as a base and add all our files to it. Then we will override the entry point with our start script:
FROM ghcr.io/cloudquery/cloudquery:latest as build
WORKDIR /app
# this is the install.yaml file we created above
COPY ./install.yaml /app/install.yaml
ARG CLOUDQUERY_API_KEY
# install the plugins
RUN /app/cloudquery plugin install install.yaml
FROM ghcr.io/cloudquery/cloudquery:latest
WORKDIR /app
# Copy the .cq directory which contains the plugins
COPY --from=build /app/.cq /app/.cq
# add the sync config file
COPY ./sync.yaml /app/sync.yaml
# add the start script
COPY --chmod=0755 ./start.sh /app/start.sh
# override the entry point with our start script
ENTRYPOINT ["/app/start.sh"]
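Assuming the files from the previous steps all live next to the Dockerfile, the build context looks like this:
.
├── Dockerfile
├── install.yaml
├── start.sh
└── sync.yaml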
Run the Container
First, build the container image as you normally would:
docker build --build-arg CLOUDQUERY_API_KEY=<your-api-key> ./ -t my-cq-container:latest
Run the container with the --read-only option and a command to pass to our start script. You will also need to add the --no-log-file option to skip logging to a file. Here is an example:
docker run --read-only --add-host=host.docker.internal:host-gateway my-cq-container:latest --no-log-file sync sync.yaml
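If you went with the environment-variable version of the connection string sketched earlier, you can inject the secret at runtime instead of baking it into the image (PG_CONNECTION_STRING and the connection string value below are only examples):
docker run --read-only \
  --add-host=host.docker.internal:host-gateway \
  -e PG_CONNECTION_STRING="postgresql://user:pass@host.docker.internal:5432/db" \
  my-cq-container:latest --no-log-file sync sync.yaml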