Hey everybody, we’re diving into the world of building an LLM-based system with the ChatGPT API. This is going to be a great journey, so buckle up and let’s get started.
For those of you who might not be familiar with the basics of Prompt Engineering, I suggest you take a look at this article: [Prompt Engineering Basics](https://www.analyticsvidhya.com/blog/2023/08/prompt-engineering-in-generative-ai/)
We’re going to break this topic down step by step, and since it’s a massive one, we’ve divided it into three parts. This is the first part of the series.
The big goal here is to help you build an LLM-based system and understand the concepts behind it, such as tokens and the chat format. We’ll also be diving into the nitty-gritty of development, so get ready to absorb some serious knowledge.
One of the first things to understand is how a base LLM becomes an instruction-tuned LLM: a base model is trained simply to predict the next token, and it is then fine-tuned on instruction-following examples (often with human feedback) so it responds helpfully to requests. This is crucial to grasping how it all works, and it’s something we’re going to cover extensively.
We’ll also be getting into the applications of LLMs and the use of tokens in the chat format. It’s a lot to take in, but I promise you, it’s worth it.
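To give you an early taste of the chat format we’ll be working with, here’s a minimal sketch. The messages are a list of role/content pairs; the model names, roles, and contents below are illustrative, and this snippet only builds the structure rather than calling any API:

```python
# A minimal sketch of the chat format used with chat-style LLM APIs.
# Each message pairs a role ("system", "user", or "assistant") with content;
# the model reads the whole list as conversational context.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a token?"},
]

for m in messages:
    print(f"{m['role']}: {m['content']}")
```

The system message sets the assistant’s behavior, while user and assistant messages carry the back-and-forth of the conversation. We’ll build on this structure throughout the series.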
So, stick around and let’s get into the thick of it. It’s going to be a wild ride.