GenAI with Python: Build Agents from Scratch (Complete Tutorial)
with Ollama, LangChain, LangGraph (No GPU, No API Key)
(All images are by the author unless otherwise noted)
Intro
Prompt Engineering is the practice of designing and refining prompts (text inputs) to steer the behavior of Large Language Models (LLMs). The goal is to obtain the desired responses from the model by carefully crafting the instructions. The most commonly used prompting techniques are:
- Chain-of-Thought: involves generating a step-by-step reasoning process to reach a conclusion. The model is pushed to “think out loud” by explicitly laying out the logical steps that lead to the final answer.
- ReAct (Reason+Act): combines reasoning with action. The model not only thinks through a problem but also takes actions based on its reasoning, alternating between reasoning steps and actions and refining its approach iteratively. Basically, it’s a loop of “thought”, “action”, “observation” (a minimal prompt-level sketch of both techniques follows this list).
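To make the difference concrete, here is a minimal sketch (not the code developed later in this tutorial) of what the two techniques look like at the prompt level, using the ollama Python client. The model name, the search_web stub, and the prompt wording are placeholder assumptions: swap in whatever model you have pulled locally.

```python
import re
import ollama  # pip install ollama — talks to a locally running Ollama server

MODEL = "llama3.1"  # placeholder: use any model you have pulled with `ollama pull`
QUESTION = "find the best laptop under $1000"

def ask(messages):
    """Single chat call to the local model, returning the text of the reply."""
    return ollama.chat(model=MODEL, messages=messages)["message"]["content"]

# 1) Plain prompt: the model answers directly.
print(ask([{"role": "user", "content": QUESTION}]))

# 2) Chain-of-Thought: push the model to lay out its reasoning before answering.
cot_prompt = (QUESTION + ". Think step by step: list the factors you consider, "
              "compare a few options, then give your final pick.")
print(ask([{"role": "user", "content": cot_prompt}]))

# 3) ReAct: a loop of thought -> action -> observation until a final answer.
def search_web(query: str) -> str:
    # Dummy tool standing in for a real search; returns a canned observation.
    return "Mock results: ThinkPad E16 and Acer Swift Go 14 get strong reviews under $1000."

messages = [
    {"role": "system",
     "content": ("Solve the task by alternating Thought, Action and Observation steps. "
                 "The only available action is search_web[<query>]. "
                 "When you are done, reply with 'Final Answer: <answer>'.")},
    {"role": "user", "content": QUESTION},
]
for _ in range(5):  # cap the iterations so the loop always terminates
    reply = ask(messages)
    print(reply)
    messages.append({"role": "assistant", "content": reply})
    if "Final Answer:" in reply:
        break
    action = re.search(r"search_web\[(.*?)\]", reply)
    observation = search_web(action.group(1)) if action else "No valid action found."
    messages.append({"role": "user", "content": f"Observation: {observation}"})
```

The point is only the shape of the interaction: Chain-of-Thought stays a single call with extra reasoning instructions, while ReAct turns the conversation into a loop where the model’s chosen actions are executed and their observations fed back in.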
Let’s look at an example: imagine asking an AI to “find the best laptop under $1000”.
– Normal Answer: “Lenovo Thinkpad”.
– Chain-of-Thought Answer: “I need to consider factors like performance, battery life, and…