Opening Up SageMaker Capabilities after re:Invent

Hello Everyone,

This year at re:Invent, numerous excellent features were released for Amazon SageMaker. In this blog, I will walk through the highlights and share what I found when testing them out.

The first is SageMaker Studio. For those who don’t know, SageMaker Studio is a single-pane-of-glass, web-based interface for end-to-end ML development, and it got an upgrade this November: a faster web-based interface, a new Code Editor, flexible workspaces, and streamlined user onboarding. In the past, I’ve used this IDE for everyday machine learning work, like writing and running code in Jupyter notebooks to train, structure, debug, deploy, and monitor my machine learning models.

I’m a huge fan of the new interface. Take a look here:

To learn more, please read Antje’s AWS blog here:
https://aws.amazon.com/blogs/aws/amazon-sagemaker-studio-adds-web-based-interface-code-editor-flexible-workspaces-and-streamlines-user-onboarding/

And also, here is the developer guide: https://docs.aws.amazon.com/sagemaker/latest/dg/studio-updated-migrate.html
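Since Studio notebooks still run the familiar SageMaker Python SDK, here is a minimal sketch of what kicking off a training job from a Studio notebook can look like. The script name, S3 paths, framework version, and instance type are placeholders I made up, so swap in your own:

```python
# Minimal sketch: launching a scikit-learn training job from a Studio notebook
# with the SageMaker Python SDK. Paths, script name, and instance type are placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # works inside Studio; pass an IAM role ARN elsewhere

estimator = SKLearn(
    entry_point="train.py",        # your training script (placeholder)
    framework_version="1.2-1",     # check the docs for currently supported versions
    instance_type="ml.m5.xlarge",
    instance_count=1,
    role=role,
    sagemaker_session=session,
)

# Point fit() at your training data in S3 (placeholder bucket/prefix)
estimator.fit({"train": "s3://my-bucket/my-prefix/train/"})
```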

The next fun release to highlight is how much the guided workflows for packaging and deploying models have changed: “Package and deploy models faster with new tools and guided workflows in Amazon SageMaker.”

AWS listed this as an improved model deployment experience to help you deploy traditional machine learning (ML) models and foundation models (FMs) faster, but the biggest change for me while testing it was the Python SDK updates, documented here: https://sagemaker.readthedocs.io/en/stable/

GitHub link here as well: https://github.com/aws/sagemaker-python-sdk

Once again, Antje did an amazing job covering this, and you can find her blog on the updates to the Python SDK here: https://aws.amazon.com/blogs/aws/package-and-deploy-models-faster-with-new-tools-and-guided-workflows-in-amazon-sagemaker/
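Based on my read of the SDK docs, the headline addition is the new ModelBuilder workflow in the sagemaker.serve module. Here is a rough sketch of how I understand it; the exact arguments may differ from what’s below, so treat the model reference, role ARN, sample payloads, and instance type as placeholders and double-check against the docs linked above:

```python
# Rough sketch of the new ModelBuilder packaging/deployment workflow in the
# SageMaker Python SDK. Model reference, role ARN, and instance type are placeholders;
# verify the current signatures in the SDK docs before using.
from sagemaker.serve.builder.model_builder import ModelBuilder
from sagemaker.serve.builder.schema_builder import SchemaBuilder

sample_input = {"inputs": "What is Amazon SageMaker?"}
sample_output = [{"generated_text": "Amazon SageMaker is a fully managed ML service."}]

model_builder = ModelBuilder(
    model="my-local-or-hub-model",                              # placeholder model reference
    schema_builder=SchemaBuilder(sample_input, sample_output),  # captures request/response shape
    role_arn="arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder role
)

model = model_builder.build()        # packages the model artifacts for deployment
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",   # placeholder instance type
)
print(predictor.predict(sample_input))
```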

I would also recommend looking at the JumpStart guide for SageMaker. It does a great job of showing how to deploy the models you build to an endpoint, as well as how to manage autoscaling, which can get tricky; there is a sketch of both below.
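Here is that sketch: deploying a JumpStart model with the Python SDK and then registering autoscaling for the endpoint through Application Auto Scaling. The model ID, instance type, and capacity numbers are placeholders:

```python
# Minimal sketch: deploy a JumpStart model, then configure endpoint autoscaling.
# Model ID, instance type, and capacity values are placeholders.
import boto3
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")  # placeholder ID
predictor = model.deploy(initial_instance_count=1, instance_type="ml.g5.2xlarge")

# Autoscaling is configured through Application Auto Scaling, not the SageMaker SDK.
autoscaling = boto3.client("application-autoscaling")
resource_id = f"endpoint/{predictor.endpoint_name}/variant/AllTraffic"

autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=2,
)

autoscaling.put_scaling_policy(
    PolicyName="invocations-per-instance",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,  # target invocations per instance per minute (placeholder)
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
    },
)
```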

The item that was not well publicized, and that I highly recommend you take time to look into, is Deep Learning Containers (DLCs), which are a major part of hosting large models on AWS infrastructure. Begin here: https://docs.aws.amazon.com/sagemaker/latest/dg/large-model-inference-dlc.html

If you have time, take a look at the Hugging Face DLCs as well.
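As a taste of what the DLC route looks like, here is a hedged sketch of hosting a Hugging Face Hub model on one of the SageMaker Hugging Face DLCs. The model ID, role ARN, and framework versions are placeholders, so check the DLC docs for a currently supported combination:

```python
# Hedged sketch: hosting a Hugging Face Hub model on a SageMaker Hugging Face DLC.
# Model ID, role ARN, and framework versions are placeholders.
from sagemaker.huggingface import HuggingFaceModel

huggingface_model = HuggingFaceModel(
    env={
        "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",  # placeholder model
        "HF_TASK": "text-classification",
    },
    role="arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder role
    transformers_version="4.26",  # placeholder; pick a supported version combination
    pytorch_version="1.13",
    py_version="py39",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)
print(predictor.predict({"inputs": "SageMaker keeps getting better."}))
```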

Next, I would like to highlight Amazon SageMaker Clarify, which now supports foundation model (FM) evaluation. Simply put, Amazon SageMaker Clarify gives you purpose-built tools to gain greater insight into your ML models and data through metrics, including evaluation criteria like toxicity and robustness.

The goal here is to take the enormous task of evaluating, choosing, and supporting the right foundation model for your specific use case and make it manageable. This one-stop shop lets you quickly evaluate, compare, and select the best FM across a long list of options. Mine only took minutes, but my list wasn’t very long.

The fun part is leveraging the bias and explainability reports to identify potential issues and, therefore, improve the accuracy of the FMs. Try this out.
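For the classic tabular side of Clarify, here is a minimal sketch of kicking off a pre-training bias report with a Clarify processing job. The dataset path, label, headers, and facet column are placeholders for whatever your data actually looks like:

```python
# Minimal sketch: generate a pre-training bias report with a SageMaker Clarify
# processing job. Dataset path, label, headers, and facet column are placeholders.
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = sagemaker.get_execution_role()

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/clarify/train.csv",  # placeholder dataset
    s3_output_path="s3://my-bucket/clarify/output/",
    label="approved",                                       # placeholder label column
    headers=["approved", "age", "income", "gender"],        # placeholder headers
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],  # the favorable label value
    facet_name="gender",            # placeholder sensitive attribute
)

# Writes a bias report (e.g., class imbalance metrics) to the S3 output path.
processor.run_pre_training_bias(data_config=data_config, data_bias_config=bias_config)
```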

I will note this was one of my favorite sessions, so ping me for more information. I would also look into Amazon SageMaker Canvas, which, after re:Invent, now supports natural language instructions for preparing data, building visualizations, and creating machine learning models.

Overall, this wave of Amazon SageMaker features, including SageMaker Clarify, the Python SDK advancements for the FM ecosystem, and the SageMaker Studio updates, shows AWS's commitment to helping members of the AWS community who don’t have deep experience with FMs leverage generative AI capabilities across the entire AI/ML stack.

Thanks, and contact me if you have any additional questions.



