Airflow Architecture at Drivy

November 21, 2018 – Eloïse Gomez – 5-minute read

This article was written before Drivy was acquired by Getaround and became Getaround EU. Some references to Drivy may therefore remain in the post.

Drivy has been using Airflow to orchestrate tasks for two years now. We thought it was the best tool on the market when we wanted to start digging into data to understand how well our features were performing. We didn’t really know how the data was going to be used, or by whom, so we wanted something easy to use and set up, and we set everything up on a single EC2 instance. 75 workflows later, we wanted to upgrade our Airflow version and move from the LocalExecutor to the CeleryExecutor mode. In local mode there is only one worker (which is also the webserver and the scheduler), whereas with the CeleryExecutor several workers can execute tasks in parallel. Our number of DAGs is constantly growing, and Celery mode is the best choice to handle this growth.

What is Airflow?

Airflow is Airbnb’s baby. It is an open-source project that schedules DAGs. DAG stands for Directed Acyclic Graph; basically, a DAG is an organized collection of tasks. Thanks to Airflow’s nice UI, it is possible to see how DAGs are currently doing and how they have performed. If a DAG fails, an email is sent with its logs, and the DAG can be manually re-triggered through the UI. DAGs can combine lots of different types of tasks (Bash, Python, SQL…) and interact with different data sources. Airflow is a really handy tool to transform and load data from a point A to a point B. You can check the Airflow documentation for more details.

A simple Airflow DAG with several tasks:
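
In code, such a DAG is just a Python file. Here is a minimal sketch of what it could look like, assuming an Airflow 1.x install; the DAG id, schedule, owner and email address are made up for the example:

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.bash_operator import BashOperator
    from airflow.operators.python_operator import PythonOperator

    default_args = {
        "owner": "data-team",                   # hypothetical owner
        "email": ["data@example.com"],          # where failure emails are sent
        "email_on_failure": True,
        "retries": 1,
        "retry_delay": timedelta(minutes=5),
        "start_date": datetime(2018, 11, 1),
    }

    dag = DAG(
        "example_daily_reporting",              # hypothetical DAG id
        default_args=default_args,
        schedule_interval="@daily",
    )

    def transform():
        # placeholder for the transformation logic
        pass

    extract = BashOperator(task_id="extract", bash_command="echo 'extract'", dag=dag)
    transform_task = PythonOperator(task_id="transform", python_callable=transform, dag=dag)
    load = BashOperator(task_id="load", bash_command="echo 'load'", dag=dag)

    # tasks are chained to form the graph: extract -> transform -> load
    extract >> transform_task >> load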

Airflow components

An Airflow cluster has a number of daemons that work together: a webserver, a scheduler, and one or several workers.

Webserver

The Airflow webserver accepts HTTP requests and allows users to interact with Airflow through the UI. It provides the ability to act on the DAG status (pause, unpause, trigger). When the webserver is started, it starts gunicorn workers to handle requests in parallel.
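
For reference, the webserver is started through the Airflow CLI; the port and number of gunicorn workers can be tuned (flags from the Airflow 1.x CLI, adjust to your version):

    # serve the UI on port 8080 with 4 gunicorn workers
    airflow webserver --port 8080 --workers 4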

Scheduler

The Airflow scheduler monitors DAGs. It triggers the task instances whose dependencies have been met. It stays in sync with the folder containing all DAG objects and periodically inspects tasks to see if they can be triggered.
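
The scheduler is its own daemon, started the same way (Airflow 1.x CLI):

    # keeps scanning the DAG folder and scheduling task instances
    airflow scheduler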

Worker

Airflow workers are daemons that actually execute the logic of tasks. Each worker manages one or more celeryd processes to execute the desired tasks of a particular DAG.
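
Each worker machine runs its own daemon; the number of celeryd processes it manages can be tuned with the concurrency flag (Airflow 1.x CLI, where the command is still "airflow worker"):

    # run up to 8 tasks in parallel on this worker
    airflow worker --concurrency 8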

How do they interact?

Airflow daemons don’t need to register with each other and don’t need to know about each other. Each of them takes care of a specific job, and when they are all running, everything works as expected.

The scheduler periodically polls to see if any of the registered DAGs need to be executed. If a specific DAG needs to be triggered, the scheduler creates a new DagRun instance in the metastore and starts to trigger the individual tasks in the DAG. It does so by pushing messages into the queuing service; a message contains information about the task to execute (dag_id, task_id…) and what function needs to be performed. In some cases, the user interacts with the webserver instead: they can manually trigger a DAG run, a DagRun is created, and the scheduler starts triggering the individual tasks in the same way as described before.

Celeryd processes, controlled by the workers, periodically pull from the queuing service. When a celeryd process pulls a task message, it updates the task instance in the metastore to a running state and begins executing the code provided. When the task ends (in a success or failed state), the worker updates the state of the task.
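
Manually triggering a DAG is also possible from the command line and goes through the same DagRun mechanism; a small example, reusing the hypothetical DAG id from above (Airflow 1.x CLI):

    # create a DagRun for this DAG outside of its regular schedule
    airflow trigger_dag example_daily_reporting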

Airflow architecture

Single-node architecture

In a single-node architecture, all components run on the same node. To use a single-node architecture, Airflow has to be configured with the LocalExecutor mode.
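
Concretely, this is a one-line setting in airflow.cfg (the key lives in the [core] section in Airflow 1.x):

    [core]
    # all tasks run as local subprocesses on the scheduler machine
    executor = LocalExecutor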

The single-node architecture is widely used when there is a moderate number of DAGs. In this mode, the worker pulls tasks to run from an IPC (Inter-Process Communication) queue. This mode doesn’t need any external dependencies and scales up well until all resources on the server are used. This solution works pretty well. However, to scale out to multiple servers, the Celery executor mode has to be used: the Celery executor uses Celery (and a message-queuing server) to distribute the load over a pool of workers.

Multi-node architecture

In a multi-node architecture, the daemons are spread across different machines. We decided to colocate the webserver and the scheduler. To use this architecture, Airflow has to be configured with the CeleryExecutor mode.


In this mode, a Celery backend has to be set up (Redis in our case). Celery is an asynchronous task queue based on distributed message passing. Airflow uses it to execute several tasks concurrently on several worker servers using multiprocessing. This mode makes it really easy to scale the Airflow cluster by adding new workers.
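
As a rough sketch, the relevant airflow.cfg entries look like this (placeholder hosts; the exact key names in the [celery] section vary a bit across Airflow versions):

    [core]
    executor = CeleryExecutor

    [celery]
    # Redis acts as the message broker between the scheduler and the workers
    broker_url = redis://redis-host:6379/0
    # workers report task state to a result backend, here the metadata database
    result_backend = db+postgresql://airflow:***@db-host/airflow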

Why did we choose to use the multi-node architecture?

Multi-node architecture provides several benefits:

  • Higher availability: if one of the worker nodes goes down, the cluster will still be up and DAGs will still be running.
  • Dedicated workers for specific tasks: we have a workflow where some of our DAGs are CPU intensive. As we have several workers, we can dedicate some of them to these kinds of DAGs (see the sketch after this list).
  • Scaling horizontally: since workers don’t need to register with any central authority to start processing tasks, we can easily scale our cluster by adding new workers. Nodes can be turned on and off without any downtime for the cluster.
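
To illustrate the second point, routing a DAG’s heavy tasks to dedicated workers is done by setting a queue on the operator; a minimal sketch, with a made-up queue name and task (Airflow 1.x imports):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python_operator import PythonOperator

    dag = DAG("example_cpu_heavy", start_date=datetime(2018, 11, 1), schedule_interval="@daily")

    def train_model():
        # placeholder for the CPU-intensive work
        pass

    heavy_task = PythonOperator(
        task_id="train_model",
        python_callable=train_model,
        queue="cpu_intensive",   # only workers listening on this queue will pick it up
        dag=dag,
    )

The dedicated machines then start their daemon with "airflow worker -q cpu_intensive" so that they only pull tasks from that queue.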

In the next episode, we’ll look at how we automate Airflow cluster deployment :).

Sources

  1. Airflow Documentation
  2. https://medium.com/airbnb-engineering/airflow-a-workflow-management-platform-46318b977fd8
  3. https://www.slideshare.net/RobertSanders49/running-apache-airflow-workflows-as-etl-processes-on-hadoop
  4. http://demorenoc.github.io/slides/pycon-co-2017/airflow/