ML in Production: AWS SageMaker (An Introduction)
With SageMaker, we just need to make an API call using a Python SDK. SageMaker will launch EC2 instances, run model training, persist the training artifacts to S3, and then shut down the EC2 instances automatically. For deployment, another API call creates EC2 instances and the networking rules needed to access the model over the internet.
1. INTRODUCTION
SageMaker is AWS’s fully managed service for building and deploying machine learning models in production. Developers can use SageMaker to
- label and prepare data,
- choose an algorithm,
- train, tune and optimize models, and
- deploy them to the cloud at scale.
It comprises several AWS services bundled together through an API that coordinates the creation and management of different ML resources and artifacts.
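The train-then-deploy flow described above can be sketched with the SageMaker Python SDK. This is a minimal, non-runnable-offline sketch: it assumes the `sagemaker` package is installed, an IAM execution role exists, and the container image URI and S3 paths (all placeholders here) point at real resources in your account.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
# Placeholder IAM role ARN with SageMaker permissions.
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

# One API call: SageMaker provisions the EC2 instances, runs training,
# persists the model artifacts to the S3 output path, and then
# terminates the instances automatically.
estimator = Estimator(
    image_uri="<training-image-uri>",          # placeholder container image
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/artifacts/",   # placeholder S3 bucket
    sagemaker_session=session,
)
estimator.fit({"train": "s3://my-bucket/train/"})  # placeholder input channel

# A second API call provisions hosting instances and an HTTPS endpoint
# through which the trained model can be invoked.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
```

Note that `fit()` and `deploy()` are synchronous by default: the SDK blocks and streams status until the training job finishes and the endpoint is in service.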