Flower is an open-source framework for training AI on distributed data using federated learning. Companies like Banking Circle, Nokia, Porsche, and Brave use Flower to easily improve their AI models on sensitive data that is distributed across organizational silos or user devices. Almost all AI today is based on centralized public data — a small fraction of the data we have; we believe that training on orders of magnitude more data will unlock the next leaps in AI.
Daniel is one of the creators of Flower, the first fully agnostic federated learning framework, which is now being used at many Fortune 500 companies and most top universities worldwide. He previously held roles as Head of AI and CTO and has considerable experience in running and scaling engineering teams. Daniel is a CS PhD candidate at the University of Cambridge and has an MSc (with distinction) in Software Engineering from the University of Oxford.
Nic Lane is a co-founder of Flower Labs and has been working on Flower since its creation. Nic is also a full Professor in Machine Learning Systems at the University of Cambridge. Previously, he was on the faculty of Oxford and UCL. Nic additionally has 12 years of experience in industrial research (including Microsoft Research and Nokia Bell Labs), most recently as co-founder and Head of the Samsung AI Center in Cambridge, a 50-person AI lab started in 2018.
Taner is one of the creators of Flower, the first fully agnostic federated learning framework, which is now being used at many Fortune 500 companies and most top universities worldwide. He previously held roles as Head of Engineering and CTO in two prior startups. Some companies using products he helped build include Porsche, Lufthansa, and Vattenfall. Taner is also a Visiting Researcher at the University of Cambridge.
Hi everyone - we're Daniel, Taner, and Nic, and we're excited to introduce Flower:
TL;DR 🌼 Flower is an open-source framework for training AI on distributed data using federated learning. Companies like Brave, Banking Circle, and Nokia use us to improve their models with sensitive data that they could not leverage before.
→ Before you continue: give us a star on GitHub ⭐️
Almost all of the AI breakthroughs you know of — from ChatGPT and Google Translate to DALL·E and Stable Diffusion — were trained with public data available on the web. Conventional machine learning needs all data collected in a central place; the motto has always been to “move the data to the computation.” This approach has given us some impressive results, but we are seeing those advances struggle to make their way into other important areas, like healthcare.
Why is that? The reason is simple: most data is sensitive and distributed across organizational silos or user devices. It cannot be collected in a central place, which prevents us from training state-of-the-art models for many important use cases.
Federated learning can train AI models on distributed and sensitive data by moving the training to the data (instead of moving the data to the training); only the insights from the learning process are collected, and the data stays where it is. Because the data never moves, we can train AI on sensitive data spread across organizational silos or user devices and improve models with data that could never be leveraged before.
This is, of course, more challenging: we must move AI models to data silos or user devices, train locally, send updated models back, aggregate them, and repeat. Flower provides the open-source infrastructure to easily use federated learning (and other privacy-enhancing technologies, or PETs for short) with all the tools you know and love today: PyTorch, TensorFlow, JAX, Hugging Face, fastai, Weights & Biases. Just bring your existing project and easily “federate” it using Flower.
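The loop above — send the model out, train locally on each client's private data, send updates back, average them, repeat — can be sketched in plain NumPy. This is an illustrative simulation of federated averaging on a toy linear-regression task, not Flower's actual API; the names `local_train` and `fed_avg` and the synthetic client data are assumptions made for the example:

```python
import numpy as np

# Toy setup: each "client" holds a private data shard that never leaves it;
# only model parameters travel between server and clients.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground-truth model we hope to recover

def make_client_data(n):
    """Synthetic private dataset for one client (illustrative only)."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    return X, y

# Three silos of different sizes, as in the cross-organization scenario.
clients = [make_client_data(n) for n in (50, 80, 120)]

def local_train(w, X, y, lr=0.1, epochs=5):
    """One round of local gradient descent on a client's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(weights, sizes):
    """Aggregate client models, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights, sizes))

w_global = np.zeros(2)
for _ in range(20):  # repeat: broadcast model, train locally, aggregate
    local_models = [local_train(w_global, X, y) for X, y in clients]
    w_global = fed_avg(local_models, [len(y) for _, y in clients])
```

After a few rounds the aggregated model approaches the ground truth, even though no client ever shared its raw data — only its locally updated parameters. In a real Flower deployment, the local training step runs on the silo or device, and the server performs the aggregation.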