Getting started with remote ML inference in Beam Java

Presented at Beam College 2026

This session introduces the new Remote ML Inference transform in the Apache Beam Java SDK and shows how Java pipelines can run inference against external model services such as OpenAI. We’ll walk through how the transform works, how to use the OpenAI model handler with practical examples, and how to implement custom model handlers for other remote ML providers. The talk covers common usage patterns and framework extensibility, and includes a live demo so developers can quickly add remote ML inference capabilities to their Beam Java pipelines.
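To give a flavor of the custom-handler pattern the session describes, here is a minimal sketch in plain Java. All names here (`RemoteModelHandler`, `EchoHandler`) are illustrative assumptions for this abstract, not the actual Beam transform API: the idea is simply that a handler encapsulates "send a batch of inputs to a remote model, return predictions," and swapping providers means swapping handler implementations.

```java
import java.util.List;
import java.util.stream.Collectors;

public class RemoteInferenceSketch {

  // Hypothetical handler contract: batch in, predictions out.
  // A real handler for a provider like OpenAI would make an HTTP call here.
  interface RemoteModelHandler<InputT, OutputT> {
    List<OutputT> runInference(List<InputT> batch);
  }

  // Toy "remote" handler that fakes a completion by echoing the prompt,
  // standing in for a provider-specific implementation.
  static class EchoHandler implements RemoteModelHandler<String, String> {
    @Override
    public List<String> runInference(List<String> batch) {
      return batch.stream()
          .map(prompt -> "completion for: " + prompt)
          .collect(Collectors.toList());
    }
  }

  public static void main(String[] args) {
    RemoteModelHandler<String, String> handler = new EchoHandler();
    handler.runInference(List.of("hello", "world"))
        .forEach(System.out::println);
  }
}
```

In a pipeline, the transform would hold such a handler and invoke it per batch of elements, so supporting a new provider only requires a new `RemoteModelHandler` implementation.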

Instructor(s):