Convenient and Flexible ML Pipelines with Kubeflow


It is still early days for open source solutions for productionising and deploying machine learning (ML) models and for managing scalable data pipelines and data science experiments. Kubeflow is a collection of tools well suited to these use cases, and it is gaining popularity for good reason.

This talk describes a system built on top of Kubeflow which is generic enough to manage ML pipelines of various shapes and sizes, yet flexible enough to allow entirely custom workflows. At its core is a set of conventions that determine where data is read from and written to, and a way of expressing data preprocessing and models as a configuration of composable objects and functions. This approach makes it trivial to add new models, datasets, and training objectives to a production system, and enables training and deploying stacked models of arbitrary complexity.
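As a rough illustration of the idea, here is a minimal Python sketch of such a convention-based, composable pipeline. All names here (Step, run_pipeline, the path convention) are hypothetical examples invented for this sketch, not the API presented in the talk: the point is only that steps are plain configuration, output locations follow from step names by convention, and "stacking" models amounts to appending steps.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    """One pipeline stage, defined purely as configuration."""
    name: str
    fn: Callable[[list], list]

    def output_path(self, base: str) -> str:
        # Convention: each step writes its output under <base>/<step name>,
        # so downstream steps know where to read without explicit wiring.
        return f"{base}/{self.name}"

def run_pipeline(steps: List[Step], data: list) -> list:
    # Each step's output feeds the next; adding a new model or
    # preprocessing stage is just adding another Step to the list.
    for step in steps:
        data = step.fn(data)
    return data

pipeline = [
    Step("normalise", lambda xs: [x / max(xs) for x in xs]),
    Step("square", lambda xs: [x * x for x in xs]),
]

result = run_pipeline(pipeline, [1, 2, 4])
# result == [0.0625, 0.25, 1.0]
```

In a real Kubeflow setting each Step would typically be a containerised component and the convention-derived paths would point at object storage, but the composition pattern is the same.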

Mattias Arro
Machine Learning Engineer, Subspace AI

Mattias is a hybrid machine learning engineer / data scientist, working as a contractor in London. After a decade as a freelance web developer, he transitioned into data science and now holds a split MSc in Data Science from KTH and TU/e. He has built end-to-end ML pipelines and deep learning models for several startups, and advises many others on their data science and engineering approaches. Mattias is into scalable and automated solutions, functional programming, and interactive data visualisations.
