In this workshop, you will gain hands-on experience building a production-ready generative AI application. The first lab introduces the basics of invoking an LLM, whether in Amazon Bedrock, OpenAI, or another provider, and shows how an abstraction layer promotes loose coupling. APIs are the front doors to your application. In the next lab, you will build both private-facing and public-facing APIs to invoke a generative AI model. You will learn how to enable security on the API and how to protect it with throttling limits. You will then learn to invoke foundation models through GraphQL and Pub/Sub APIs. Finally, an optional lab introduces advanced generative AI concepts such as Retrieval-Augmented Generation (RAG), Knowledge Bases, and Agents. These concepts enable you to improve your generative AI application by connecting it to internal data sources, delivering more relevant, accurate, and customized responses.
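The loose-coupling idea from the first lab can be sketched as a minimal provider abstraction. This is a hypothetical illustration, not the workshop's actual code: the class and method names are assumptions, and the provider classes are stubbed rather than calling real SDKs.

```python
from abc import ABC, abstractmethod


class TextGenerator(ABC):
    """Provider-agnostic interface: application code depends on this,
    not on any vendor SDK."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class BedrockGenerator(TextGenerator):
    """Would wrap boto3's bedrock-runtime client; stubbed for illustration."""

    def generate(self, prompt: str) -> str:
        return f"[bedrock] response to: {prompt}"


class OpenAIGenerator(TextGenerator):
    """Would wrap the OpenAI SDK; stubbed for illustration."""

    def generate(self, prompt: str) -> str:
        return f"[openai] response to: {prompt}"


def summarize(gen: TextGenerator, text: str) -> str:
    # Application logic stays unchanged when the backing provider is swapped.
    return gen.generate(f"Summarize: {text}")
```

Because `summarize` only knows about the `TextGenerator` interface, swapping Bedrock for OpenAI (or any other provider) is a one-line change at the call site.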
The goal of this workshop is to give you hands-on experience leveraging foundation models (FMs) through Amazon Bedrock. Amazon Bedrock is a fully managed service that provides access, via an API, to FMs from Amazon and third-party providers. With Bedrock, you can choose from a variety of models to find the one best suited for your use case. In this series of labs, you will work through some of the most common generative AI usage patterns we see with our customers. You will explore techniques for generating text and images, and learn how to improve productivity by using foundation models to help compose emails, summarize text, answer questions, build chatbots, create images, and generate code. You will gain hands-on experience using Bedrock APIs, SDKs, and open-source software such as LangChain and FAISS to implement these usage patterns.
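As a rough preview of what calling a Bedrock FM from Python looks like, here is a hedged sketch: the request-body schema shown is the Anthropic Messages format used by Claude models on Bedrock (other providers use different schemas), and the model ID, region, and credentials are assumptions; AWS credentials and model access would be required to actually run the commented-out call.

```python
import json

# Request body in the Anthropic Messages format used by Claude models on Bedrock.
# Other model families (e.g. Amazon Titan, Meta Llama) expect different schemas.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Write a two-line haiku about APIs."}
    ],
})

# With AWS credentials and model access configured, the invocation would look
# like this (uncomment to run; model ID and region are illustrative):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-haiku-20240307-v1:0",
#     body=body,
# )
# print(json.loads(response["body"].read())["content"][0]["text"])
```

The labs walk through these calls in detail, including streaming responses and the higher-level LangChain integrations.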