
# Containerizing backend services

Once the prediction module has been turned into a service, containerizing it in Docker is straightforward if you already have a conda environment. Simply create a Dockerfile, similar to the sample Dockerfiles for site_selectivity or augmented_transformer, to install the dependencies.
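
For orientation, such a Dockerfile might look like the minimal sketch below. This is not one of the actual sample files; the base image, environment file name, port, and serve command are all placeholders to adapt to your module.

```dockerfile
# Minimal sketch of a service Dockerfile; names below are placeholders.
FROM continuumio/miniconda3:latest

WORKDIR /app/your_module

# Install dependencies from your existing conda environment file
COPY environment.yml .
RUN conda env update -n base -f environment.yml && conda clean --all --yes

# Copy the service code and expose the service port
COPY . .
EXPOSE 9601

# Placeholder entrypoint; replace with your module's actual serve command
CMD ["python", "serve.py"]
```

Then build the Docker image with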

```shell
$ export ASKCOS_REGISTRY=registry.gitlab.com/mlpds_mit/askcosv2/askcosv2_core
$ docker build -f your_Dockerfile -t ${ASKCOS_REGISTRY}/your_module:1.0-cpu .
```

after which the containerized microservice should be runnable with a single command, e.g.,

```shell
$ docker run --rm -p 9601:9601 -t ${ASKCOS_REGISTRY}/your_module:1.0-cpu
```

Or, if the command is lengthier, organize it into a start script similar to scripts/serve_cpu_in_docker.sh for the augmented_transformer, which can then be run with

```shell
$ sh scripts/serve_cpu_in_docker.sh
```

We have updated all of our backend images to be based on micromamba, a re-implementation of conda that is much faster. We recommend basing all new Docker images on micromamba, as it bypasses the headaches of conda's lengthy (and sometimes never-terminating) dependency resolution.
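
A micromamba-based Dockerfile follows the same pattern; a sketch under the same placeholder assumptions (base image tag and environment file name are yours to choose):

```dockerfile
# Sketch of a micromamba-based image; the tag and file names are placeholders.
FROM mambaorg/micromamba:1.5.8

# Install dependencies into the base environment, then clean caches
COPY environment.yml /tmp/environment.yml
RUN micromamba install -y -n base -f /tmp/environment.yml && \
    micromamba clean --all --yes
```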
