
ASKCOS Overview

Basic architecture

ASKCOS is a web application which, at the top level, can be broken down into a backend and a frontend. The backend refers to the “hidden” infrastructure that lives on the server, including the machine learning models and databases; the frontend refers to the user-facing infrastructure that supports the browser-based user interface.

The ASKCOS backend is written in Python and consists of the API gateway and the prediction modules, each running as its own service. The API gateway is built with the FastAPI web framework and provides most of the fundamental capabilities, with a MongoDB database storing persistent data such as user accounts, user results, and model-related data. The prediction modules vary: some are lightweight heuristics written in pure Python and wrapped as services using FastAPI, while others are heavier ML models written in TensorFlow or PyTorch and served with TorchServe or FastAPI. The backend also includes a set of Celery workers for asynchronous task management (via RabbitMQ and a Redis result store), although no knowledge of Celery is required since this functionality has largely been abstracted away behind the API gateway.
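As a rough illustration of how a lightweight prediction module can be wrapped as its own FastAPI service, the sketch below shows a minimal endpoint; the module name, route, schema, and scoring logic are hypothetical and do not correspond to an actual ASKCOS module.

```python
# Minimal sketch of a heuristic prediction module wrapped as a FastAPI service.
# The route, request/response schema, and scoring logic are placeholders only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-heuristic-scorer")

class ScoreRequest(BaseModel):
    smiles: str

class ScoreResponse(BaseModel):
    smiles: str
    score: float

@app.post("/score", response_model=ScoreResponse)
def score(request: ScoreRequest) -> ScoreResponse:
    # Placeholder heuristic: score a molecule by the length of its SMILES string.
    # A real module would call into RDKit or a trained model here.
    return ScoreResponse(smiles=request.smiles, score=float(len(request.smiles)))
```

A service like this is then containerized and registered with the API gateway, which takes care of routing, storage of results in MongoDB, and asynchronous dispatch through the Celery workers.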

The ASKCOS frontend uses the standard combination of HTML/CSS/JavaScript. Specifically, ASKCOS uses the Vue JavaScript framework and the Vuetify component framework, which provide a great deal of built-in functionality and greatly simplify the construction of the web interface.

Finally, nginx is used as an ingress to handle connections between the user/client/frontend and FastAPI/server/backend and to pass data between the two components. For ASKCOS V2, nginx is built in as part of the frontend service.

Container infrastructure

Another important aspect of ASKCOS is the use of container-based infrastructure. For a quick overview of what a container is, see What is a Container?.

The key benefit of containers is a consistent environment regardless of the host platform. For example, the ASKCOS application will think that it’s running on Linux (the current base operating system for the ASKCOS image), regardless of whether it has been deployed on a Linux server or a Windows laptop. In addition, all of the software that ASKCOS depends on is included in the image, so it does not need to be installed separately onto the host system.

Another benefit of the container infrastructure is that individual containers can be readily created or destroyed without affecting the host system. Combined with higher-level tools that monitor system load and container status, this makes it possible to automatically create new containers to meet demand or to replace containers that are no longer working properly.

ASKCOS uses the Docker container infrastructure, which provides the basic tools needed to create and run containers. The frontend, the API gateway, the Celery workers, and the database services are centrally managed by docker compose, a container orchestration tool. The prediction services are designed to be more flexible and plug-and-play, and are started and stopped with docker run and docker stop, respectively. We also provide support for deploying all prediction services with docker compose, but only for MLPDS member companies.
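For illustration, the same start/stop lifecycle can also be driven from Python with the Docker SDK (docker-py), which mirrors what docker run and docker stop do on the command line; the image name, container name, and port mapping below are placeholders, not actual ASKCOS services.

```python
# Hypothetical sketch: starting and stopping one prediction-service container
# from Python with the Docker SDK (docker-py). The image, container name, and
# port mapping are placeholders, not the actual ASKCOS services.
import docker

client = docker.from_env()

# Roughly equivalent to: docker run -d --name example-predictor -p 9510:9510 <image>
container = client.containers.run(
    "example.registry/askcos/example-predictor:latest",  # placeholder image
    name="example-predictor",
    detach=True,
    ports={"9510/tcp": 9510},  # placeholder port mapping
)
print(container.name, container.status)

# Roughly equivalent to: docker stop example-predictor
container.stop()
```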

Repository organization

The ASKCOSv2 project is divided into three main groups of repositories:

  • askcos2_core - repo for the API gateway and utilities (e.g., rdkit and drawing).
  • askcos-vue-nginx - repo for the nginx ingress and the Vue-based frontend
  • The backend prediction services, e.g., the repo for the MCTS Tree Builder.

Depending on the kind of development you plan to work on, you may not need to clone all of these repositories. Some general guidelines:

  • askcos2_core is only needed if you want to modify the API schemas and routing.
  • askcos-vue-nginx is needed for any frontend related development.
  • The respective repo(s) are needed for the backend service(s) you plan to work on.

In general, if you are making changes to existing modules and functionality, we recommend doing a partial deployment with only those modules. If the partial deployment runs correctly, then it is sufficient to clone only askcos2_core and the repos you want to change.

If you are working on a new module, the process can be even simpler: wrap your prediction module into a containerized service, then add support for it at the API gateway. We provide guidelines for adding new modules in the following sections, but don't hesitate to contact askcos_support@mit.edu. It is generally very easy to add a new wrapper at the API gateway for a service that is already containerized, as sketched below.
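The example below is a hedged sketch of what such a gateway-side wrapper might look like, forwarding requests to an already-containerized service with FastAPI and httpx; the route, service hostname, port, and payload shape are hypothetical and do not reflect the actual askcos2_core wrapper interface.

```python
# Hypothetical sketch of a gateway-side wrapper for a new containerized service.
# The route, service URL, and payload shape are placeholders; the actual
# askcos2_core wrapper interface differs.
import httpx
from fastapi import APIRouter

router = APIRouter(prefix="/api/example-predictor")

# Where the containerized service is reachable from the gateway (placeholder).
SERVICE_URL = "http://example-predictor:9510"

@router.post("/call-sync")
async def call_sync(payload: dict) -> dict:
    # Forward the request body to the prediction service and relay its response.
    async with httpx.AsyncClient(timeout=60.0) as client:
        response = await client.post(f"{SERVICE_URL}/score", json=payload)
        response.raise_for_status()
        return response.json()
```

The router can then be included in the gateway's FastAPI application (e.g., with app.include_router(router)), so the new service becomes reachable through the same entry point as the existing modules.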

Released under the MIT License.