How to build data pipelines on AWS?

Talking AWS for Datascience

In computing, a pipeline, also known as a data pipeline, is a set of data processing elements connected in series, where the output of one element is the input of the next. In this episode we talk about the AWS Data Pipeline service, along with other AWS services data scientists can use to build end-to-end data processing pipelines for machine learning (a short code sketch of how these pieces fit together follows the list). The services we cover are:

  • Amazon DynamoDB
  • Amazon S3
  • Amazon EC2
  • AWS Data Pipeline
  • Amazon SageMaker
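
As a concrete illustration, here is a minimal sketch in Python with boto3 of the kind of pipeline discussed: raw data is read from S3, cleaned, written back to an S3 location a SageMaker training job could consume, and the run is logged in DynamoDB (EC2 would simply be where a script like this runs). The bucket names, table name, and key schema are placeholders, not real resources; boto3 and AWS credentials are assumed to be configured.

    # Minimal ML data-pipeline sketch, assuming boto3 is installed and AWS
    # credentials are configured. All resource names below are placeholders.
    import csv
    import io

    import boto3

    RAW_BUCKET = "my-raw-data-bucket"          # assumed: bucket holding raw CSV data
    CURATED_BUCKET = "my-curated-data-bucket"  # assumed: bucket a SageMaker job trains from
    TABLE_NAME = "pipeline-run-metadata"       # assumed: DynamoDB table keyed on "run_id"

    s3 = boto3.client("s3")
    dynamodb = boto3.resource("dynamodb")

    def run_pipeline(raw_key: str, curated_key: str) -> int:
        """Read raw CSV from S3, drop incomplete rows, write the cleaned file
        back to S3, and record the run in DynamoDB. Returns the rows kept."""
        # 1. Extract: pull the raw object from S3.
        body = s3.get_object(Bucket=RAW_BUCKET, Key=raw_key)["Body"].read().decode("utf-8")

        # 2. Transform: keep only rows with no empty fields.
        rows = [r for r in csv.reader(io.StringIO(body)) if r and all(r)]
        out = io.StringIO()
        csv.writer(out).writerows(rows)

        # 3. Load: write the curated CSV where a SageMaker training job can read it.
        s3.put_object(Bucket=CURATED_BUCKET, Key=curated_key,
                      Body=out.getvalue().encode("utf-8"))

        # 4. Record run metadata in DynamoDB for lineage and monitoring.
        dynamodb.Table(TABLE_NAME).put_item(
            Item={"run_id": raw_key, "rows_kept": len(rows), "output_key": curated_key}
        )
        return len(rows)

    if __name__ == "__main__":
        print(run_pipeline("raw/users.csv", "curated/users.csv"))

In a production setup the orchestration step (scheduling, retries, dependency ordering) is what AWS Data Pipeline handles, rather than a hand-run script like this.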
