AI Framework
AI frameworks are powerful tools that can dramatically accelerate your development process when building intelligent applications. Think of an AI framework as a comprehensive toolkit that provides pre-built components, standardized APIs, and intelligent abstractions to help you focus on solving problems rather than wrestling with low-level implementation details.
In this lesson, you'll discover how frameworks like LangChain can transform complex AI integration tasks into manageable, readable code. You'll learn to handle real-world challenges like maintaining conversation context, implementing tool calling, and managing different AI models through a unified interface.
By the end of this project, you'll understand when to choose frameworks over direct API calls, how to leverage their abstractions effectively, and how to build production-ready AI applications with confidence. Let's explore the world of AI frameworks and unlock their potential for your projects!
Why choose a framework?
When building AI applications, you have several integration approaches available, each with distinct advantages and trade-offs. Understanding these options helps you make informed architectural decisions for your projects.
Let's explore the three main approaches to AI integration:
| Approach | Advantages | Best For | Considerations |
|---|---|---|---|
| Direct HTTP Requests | Full control, no dependencies | Simple queries, learning fundamentals | More verbose code, manual error handling |
| SDK Integration | Less boilerplate, model-specific optimization | Single-model applications | Limited to specific providers |
| AI Frameworks | Unified API, built-in abstractions | Multi-model apps, complex workflows | Learning curve, potential over-abstraction |
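To make the trade-offs concrete, here is a rough sketch of the direct-HTTP approach. It assumes an OpenAI-compatible chat completions endpoint at the GitHub Models base URL used later in this lesson; the exact path and payload shape are assumptions, so check your provider's documentation:

```python
# Hedged sketch of the "direct HTTP" approach; the /chat/completions
# path and payload shape are assumptions based on OpenAI-compatible APIs.
import os
import requests

response = requests.post(
    "https://models.github.ai/inference/chat/completions",  # assumed endpoint path
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-4o-mini",
        "messages": [{"role": "user", "content": "What's the capital of France?"}],
    },
    timeout=30,
)
response.raise_for_status()  # all error handling and retries are up to you
print(response.json()["choices"][0]["message"]["content"])
```

Notice how much of this is plumbing; the framework examples below collapse it into a constructor and a single `invoke()` call.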
Framework Benefits in Practice
graph TD
A[Your Application] --> B[AI Framework]
B --> C[OpenAI GPT]
B --> D[Anthropic Claude]
B --> E[GitHub Models]
B --> F[Local Models]
B --> G[Built-in Tools]
G --> H[Memory Management]
G --> I[Conversation History]
G --> J[Function Calling]
G --> K[Error Handling]
Key framework advantages:
- Unifies multiple AI providers under a single API
- Handles conversation memory and context automatically
- Provides built-in tools for common tasks like embeddings and function calling
- Manages error handling and retry logic
- Abstracts complex workflows into simple method calls
💡 Pro Tip: Choose frameworks when you need to switch between different AI models or implement complex features like agents, memory, or tool calling. Use direct APIs when you're learning fundamentals or building simple, single-purpose integrations.
Remember: Use the right tool for the job. Frameworks excel at complex, multi-feature applications, while direct APIs work well for simple, focused use cases.
Introduction
In this lesson, we'll learn to:
- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.
Your first AI prompt
Let's start with the fundamentals by creating a simple AI application that sends a prompt and receives a response. This example demonstrates how frameworks simplify what would otherwise require complex HTTP request handling.
Setting up LangChain with GitHub Models
We'll use LangChain to connect to GitHub Models, which provides free access to various AI models. The setup requires just a few configuration parameters:
from langchain_openai import ChatOpenAI
import os
llm = ChatOpenAI(
api_key=os.environ["GITHUB_TOKEN"],
base_url="https://models.github.ai/inference",
model="openai/gpt-4o-mini",
)
# Send a simple prompt
response = llm.invoke("What's the capital of France?")
print(response.content)
Here's what this code accomplishes:
- Creates a LangChain client using the `ChatOpenAI` class
- Configures the connection to GitHub Models with your authentication token
- Specifies the AI model (`gpt-4o-mini`) for processing requests
- Sends a prompt using the `invoke()` method
- Extracts and displays the response content
🔧 Setup Note: In GitHub Codespaces, the `GITHUB_TOKEN` environment variable is automatically available. For local development, you'll need to create a personal access token with appropriate permissions.
Expected output:
The capital of France is Paris.
sequenceDiagram
participant App as Your Python App
participant LC as LangChain
participant GM as GitHub Models
participant AI as GPT-4o-mini
App->>LC: llm.invoke("What's the capital of France?")
LC->>GM: HTTP request with prompt
GM->>AI: Process prompt
AI->>GM: Generated response
GM->>LC: Return response
LC->>App: response.content
Building conversational AI
The previous example demonstrated zero-shot prompting—a single question with a single response. However, most real-world AI applications require maintaining context across multiple exchanges, just like human conversations.
LangChain provides message types that help structure conversations and define AI personality, making it easy to build sophisticated chat experiences.
Understanding message types
LangChain uses different message classes to represent conversation participants:
| Message Type | Purpose | Example Use Case |
|---|---|---|
| `SystemMessage` | Defines AI personality and behavior | "You are a helpful coding assistant" |
| `HumanMessage` | Represents user input | "Explain how functions work" |
| `AIMessage` | Stores AI responses | Previous AI responses in conversation |
Creating your first conversation
Let's build a conversation where the AI adopts a specific personality:
messages = [
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
HumanMessage(content="Tell me about you"),
]
Breaking down this conversation setup:
- Establishes the AI's role and personality through `SystemMessage`
- Provides the initial user query via `HumanMessage`
- Creates a foundation for multi-turn conversation
The full code for this example looks like so:
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
llm = ChatOpenAI(
api_key=os.environ["GITHUB_TOKEN"],
base_url="https://models.github.ai/inference",
model="openai/gpt-4o-mini",
)
messages = [
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
HumanMessage(content="Tell me about you"),
]
# Invoke the model with the conversation so far
response = llm.invoke(messages)
print(response.content)
You should see an outcome similar to:
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.
I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.
I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
To preserve conversation state, append the model's response to the message list before adding the next user message, so earlier exchanges are remembered. Here's how to do that:
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
llm = ChatOpenAI(
api_key=os.environ["GITHUB_TOKEN"],
base_url="https://models.github.ai/inference",
model="openai/gpt-4o-mini",
)
messages = [
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
HumanMessage(content="Tell me about you"),
]
# First invocation with the initial two messages
response = llm.invoke(messages)
print(response.content)
print("---- Next ----")
messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))
response = llm.invoke(messages)
print(response.content)
In the conversation above, we invoke the LLM twice: first with a conversation of just two messages, then a second time after appending the model's reply and a new user message.
If you run this, the second response will look something like:
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.
If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
I'll take that as a maybe ;)
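Wrapping this pattern into a small loop makes it easy to keep a conversation going indefinitely. Here's a minimal sketch that reuses the `llm` configured above; the loop and the history-trimming note are illustrative, not a prescribed LangChain pattern:

```python
from langchain_core.messages import HumanMessage, SystemMessage

# Minimal chat loop that keeps conversation state by appending every
# exchange to the message list; reuses the `llm` configured above.
messages = [SystemMessage(content="You are Captain Picard of the Starship Enterprise")]

while True:
    user_input = input("You: ")
    if user_input.lower() in ("quit", "exit"):
        break
    messages.append(HumanMessage(content=user_input))
    response = llm.invoke(messages)
    messages.append(response)  # remember the AI's reply for future turns
    print(f"Picard: {response.content}\n")
    # For long chats, consider keeping only the system message plus the most
    # recent exchanges so the conversation stays within the model's context window.
```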
Streaming responses
Streaming responses provide real-time feedback as the AI generates content, creating a more interactive user experience similar to ChatGPT's typing effect. Instead of waiting for the complete response, you receive content as it's generated.
Implementing streaming with LangChain
from langchain_openai import ChatOpenAI
import os
llm = ChatOpenAI(
api_key=os.environ["GITHUB_TOKEN"],
base_url="https://models.github.ai/inference",
model="openai/gpt-4o-mini",
streaming=True
)
# Stream the response
for chunk in llm.stream("Write a short story about a robot learning to code"):
print(chunk.content, end="", flush=True)
Key streaming concepts:
- Enables real-time content delivery as the AI generates responses
- Improves user experience with immediate feedback
- Reduces perceived latency for longer responses
- Allows early response processing and display
💡 User Experience Tip: Streaming is particularly valuable for longer responses like code explanations, creative writing, or detailed tutorials where users benefit from seeing progress immediately.
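If your application is asynchronous (a web backend, for example), LangChain chat models also expose an async `astream` method. A minimal sketch, assuming the same `llm` configuration as above:

```python
import asyncio

async def stream_story() -> None:
    # `astream` yields chunks asynchronously as the model generates them.
    async for chunk in llm.astream("Write a short story about a robot learning to code"):
        print(chunk.content, end="", flush=True)

asyncio.run(stream_story())
```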
Prompt templates
Prompt templates help you create reusable, dynamic prompts by separating the structure from the variable content. Think of them as mad-libs for AI prompts—you define the template once and fill in different values as needed.
Creating reusable prompts
from langchain_core.prompts import ChatPromptTemplate
# Define a template for code explanations
template = ChatPromptTemplate.from_messages([
("system", "You are an expert programming instructor. Explain concepts clearly with examples."),
("human", "Explain {concept} in {language} with a practical example for {skill_level} developers")
])
# Use the template with different values
questions = [
{"concept": "functions", "language": "JavaScript", "skill_level": "beginner"},
{"concept": "classes", "language": "Python", "skill_level": "intermediate"},
{"concept": "async/await", "language": "JavaScript", "skill_level": "advanced"}
]
for question in questions:
prompt = template.format_messages(**question)
response = llm.invoke(prompt)
print(f"Topic: {question['concept']}\n{response.content}\n---\n")
Template benefits in practice:
- Standardizes prompt structure across your application
- Enables dynamic content insertion without string manipulation
- Maintains consistent AI behavior and output quality
- Simplifies prompt maintenance and updates
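Templates also compose directly with models using LangChain's pipe syntax (the same chaining used in the structured output example below), so a chain can be invoked with a plain dictionary. A minimal sketch reusing the `template` and `llm` defined above:

```python
# Compose the template with the model; invoking the chain formats the
# prompt and calls the LLM in one step.
chain = template | llm

response = chain.invoke({
    "concept": "list comprehensions",
    "language": "Python",
    "skill_level": "beginner",
})
print(response.content)
```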
Structured output
Structured output ensures AI responses follow a specific format, making them easier to parse and integrate into your applications. Instead of free-form text, you can request JSON, specific data structures, or formatted responses.
Defining output schemas
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import JsonOutputParser
from pydantic import BaseModel, Field
class CodeReview(BaseModel):
score: int = Field(description="Code quality score from 1-10")
strengths: list[str] = Field(description="List of code strengths")
improvements: list[str] = Field(description="List of suggested improvements")
overall_feedback: str = Field(description="Summary feedback")
# Set up the parser
parser = JsonOutputParser(pydantic_object=CodeReview)
# Create prompt with format instructions
prompt = ChatPromptTemplate.from_messages([
("system", "You are a code reviewer. {format_instructions}"),
("human", "Review this code: {code}")
])
# Format the prompt with instructions
chain = prompt | llm | parser
# Get structured response
code_sample = """
def calculate_average(numbers):
return sum(numbers) / len(numbers)
"""
result = chain.invoke({
"code": code_sample,
"format_instructions": parser.get_format_instructions()
})
print(f"Score: {result['score']}")
print(f"Strengths: {', '.join(result['strengths'])}")
Structured output advantages:
- Guarantees consistent response format for reliable parsing
- Enables direct integration with databases and APIs
- Validates AI responses against predefined schemas
- Simplifies error handling and data processing
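As an alternative to wiring up a `JsonOutputParser`, many LangChain chat models support `with_structured_output`, which returns parsed objects directly; whether it works depends on the model and provider. A minimal sketch, assuming the `CodeReview` model and `llm` from above:

```python
# Bind the schema to the model; the result is a CodeReview instance
# rather than a raw dict (support varies by model/provider).
structured_llm = llm.with_structured_output(CodeReview)

review = structured_llm.invoke(f"Review this code: {code_sample}")
print(f"Score: {review.score}")
print(f"Strengths: {', '.join(review.strengths)}")
```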
Tool calling
Tools are how we give the LLM extra capabilities. The idea is to describe the functions our code can run; when a prompt matches the description of one of these tools, the model asks for it to be called and we execute it.
Using Python
Let's add some tools like so:
from typing_extensions import Annotated, TypedDict
class add(TypedDict):
"""Add two integers."""
# Annotations must have the type and can optionally include a default value and description (in that order).
a: Annotated[int, ..., "First integer"]
b: Annotated[int, ..., "Second integer"]
tools = [add]
functions = {
"add": lambda a, b: a + b
}
Here we create a description of a tool called `add`. By inheriting from `TypedDict` and declaring members like `a` and `b` with `Annotated` types, the class can be converted into a schema the LLM understands. The `functions` dictionary maps each tool name to the code to run when that tool is identified.
Let's see how we call the LLM with this tool next:
llm = ChatOpenAI(
api_key=os.environ["GITHUB_TOKEN"],
base_url="https://models.github.ai/inference",
model="openai/gpt-4o-mini",
)
llm_with_tools = llm.bind_tools(tools)
Here we call `bind_tools` with our `tools` list, so the resulting `llm_with_tools` knows about the tool.
To use this new LLM, we can type the following code:
query = "What is 3 + 12?"
res = llm_with_tools.invoke(query)
if(res.tool_calls):
for tool in res.tool_calls:
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ",res.content)
When we call `invoke` on this tool-aware LLM, the response may have its `tool_calls` property populated. If so, each identified tool call carries `name` and `args` entries telling us which tool to run and with which arguments. The full code looks like so:
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict
class add(TypedDict):
"""Add two integers."""
# Annotations must have the type and can optionally include a default value and description (in that order).
a: Annotated[int, ..., "First integer"]
b: Annotated[int, ..., "Second integer"]
tools = [add]
functions = {
"add": lambda a, b: a + b
}
llm = ChatOpenAI(
api_key=os.environ["GITHUB_TOKEN"],
base_url="https://models.github.ai/inference",
model="openai/gpt-4o-mini",
)
llm_with_tools = llm.bind_tools(tools)
query = "What is 3 + 12?"
res = llm_with_tools.invoke(query)
if(res.tool_calls):
for tool in res.tool_calls:
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ",res.content)
Running this code, you should see output similar to:
TOOL CALL: 15
CONTENT:
What this output means is that the LLM interpreted the prompt "What is 3 + 12?" as a request to call the `add` tool; it knew this thanks to the tool's name, docstring, and field descriptions. The answer 15 comes from our own code using the `functions` dictionary to invoke it:
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
A more interesting tool that calls a web API
A tool that adds two numbers illustrates how tool calling works, but real tools usually do something more useful, such as calling a web API. Let's do just that with this code:
class joke(TypedDict):
"""Tell a joke."""
# Annotations must have the type and can optionally include a default value and description (in that order).
category: Annotated[str, ..., "The joke category"]
def get_joke(category: str) -> str:
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
if response.status_code == 200:
return response.json().get("value", f"Here's a {category} joke!")
return f"Here's a {category} joke!"
functions = {
"add": lambda a, b: a + b,
"joke": lambda category: get_joke(category)
}
query = "Tell me a joke about animals"
# the rest of the code is the same
Now if you run this code you will get a response saying something like:
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
Here's the code in its entirety:
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict
class add(TypedDict):
"""Add two integers."""
# Annotations must have the type and can optionally include a default value and description (in that order).
a: Annotated[int, ..., "First integer"]
b: Annotated[int, ..., "Second integer"]
class joke(TypedDict):
"""Tell a joke."""
# Annotations must have the type and can optionally include a default value and description (in that order).
category: Annotated[str, ..., "The joke category"]
tools = [add, joke]
def get_joke(category: str) -> str:
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
if response.status_code == 200:
return response.json().get("value", f"Here's a {category} joke!")
return f"Here's a {category} joke!"
functions = {
"add": lambda a, b: a + b,
"joke": lambda category: get_joke(category)
}
llm = ChatOpenAI(
api_key=os.environ["GITHUB_TOKEN"],
base_url="https://models.github.ai/inference",
model="openai/gpt-4o-mini",
)
llm_with_tools = llm.bind_tools(tools)
query = "Tell me a joke about animals"
res = llm_with_tools.invoke(query)
if(res.tool_calls):
for tool in res.tool_calls:
# print("TOOL CALL: ", tool)
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ",res.content)
Embeddings and document processing
Embeddings transform text into numerical vectors that capture semantic meaning, enabling similarity searches, content clustering, and retrieval-augmented generation (RAG). Think of embeddings as converting text into coordinates in a multi-dimensional space where similar meanings cluster together.
Creating and using embeddings
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
import os
# Initialize embeddings
embeddings = OpenAIEmbeddings(
api_key=os.environ["GITHUB_TOKEN"],
base_url="https://models.github.ai/inference",
model="text-embedding-3-small"
)
# Load and split documents
loader = TextLoader("documentation.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
# Create vector store
vectorstore = FAISS.from_documents(texts, embeddings)
# Perform similarity search
query = "How do I handle user authentication?"
similar_docs = vectorstore.similarity_search(query, k=3)
for doc in similar_docs:
print(f"Relevant content: {doc.page_content[:200]}...")
Document loaders for various formats
from langchain_community.document_loaders import (
PyPDFLoader,
CSVLoader,
JSONLoader,
WebBaseLoader
)
# Load different document types
pdf_loader = PyPDFLoader("manual.pdf")
csv_loader = CSVLoader("data.csv")
json_loader = JSONLoader("config.json", jq_schema=".", text_content=False)  # JSONLoader needs a jq_schema to select content
web_loader = WebBaseLoader("https://example.com/docs")
# Process all documents
all_documents = []
for loader in [pdf_loader, csv_loader, json_loader, web_loader]:
docs = loader.load()
all_documents.extend(docs)
Embedding use cases:
- Powers semantic search across your content
- Enables question-answering over your documents
- Supports content recommendation systems
- Facilitates automatic document categorization
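Combining the vector store with the chat model gives you a basic retrieval-augmented generation (RAG) flow. A minimal sketch, assuming the `vectorstore` and `llm` created in the examples above:

```python
# Retrieve the most relevant chunks, then pass them to the model as context.
question = "How do I handle user authentication?"
relevant_docs = vectorstore.similarity_search(question, k=3)
context = "\n\n".join(doc.page_content for doc in relevant_docs)

answer = llm.invoke(
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
print(answer.content)
```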
Building a complete AI application
Now let's combine everything you've learned into a practical AI application. We'll create a coding assistant that can answer questions, execute tools, and maintain conversation context.
Complete application example
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage, SystemMessage, AIMessage
from langchain_community.vectorstores import FAISS
from typing_extensions import Annotated, TypedDict
import os
import requests
class CodingAssistant:
def __init__(self):
self.llm = ChatOpenAI(
api_key=os.environ["GITHUB_TOKEN"],
base_url="https://models.github.ai/inference",
model="openai/gpt-4o-mini"
)
self.conversation_history = [
SystemMessage(content="""You are an expert coding assistant.
Help users learn programming concepts, debug code, and write better software.
Use tools when needed and maintain a helpful, encouraging tone.""")
]
# Define tools
self.setup_tools()
def setup_tools(self):
class web_search(TypedDict):
"""Search for programming documentation or examples."""
query: Annotated[str, "Search query for programming help"]
class code_formatter(TypedDict):
"""Format and validate code snippets."""
code: Annotated[str, "Code to format"]
language: Annotated[str, "Programming language"]
self.tools = [web_search, code_formatter]
self.llm_with_tools = self.llm.bind_tools(self.tools)
def chat(self, user_input: str):
# Add user message to conversation
self.conversation_history.append(HumanMessage(content=user_input))
# Get AI response
response = self.llm_with_tools.invoke(self.conversation_history)
# Handle tool calls if any
if response.tool_calls:
for tool_call in response.tool_calls:
tool_result = self.execute_tool(tool_call)
print(f"🔧 Tool used: {tool_call['name']}")
print(f"📊 Result: {tool_result}")
# Add AI response to conversation
self.conversation_history.append(response)
return response.content
def execute_tool(self, tool_call):
tool_name = tool_call['name']
args = tool_call['args']
if tool_name == 'web_search':
return f"Found documentation for: {args['query']}"
elif tool_name == 'code_formatter':
return f"Formatted {args['language']} code: {args['code'][:50]}..."
return "Tool execution completed"
# Usage example
assistant = CodingAssistant()
print("🤖 Coding Assistant Ready! Type 'quit' to exit.\n")
while True:
user_input = input("You: ")
if user_input.lower() == 'quit':
break
response = assistant.chat(user_input)
print(f"🤖 Assistant: {response}\n")
Application architecture:
graph TD
A[User Input] --> B[Coding Assistant]
B --> C[Conversation Memory]
B --> D[Tool Detection]
B --> E[LLM Processing]
D --> F[Web Search Tool]
D --> G[Code Formatter Tool]
E --> H[Response Generation]
F --> H
G --> H
H --> I[User Interface]
H --> C
Key features implemented:
- Maintains conversation context across multiple exchanges
- Integrates tool calling for enhanced functionality
- Provides structured interaction patterns
- Can be extended with error handling and retries for production use (sketched below)
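Calls to model providers can fail transiently, and the example above doesn't handle that yet. Here's a minimal sketch of a retry wrapper using only the standard library; the retry policy is illustrative, not a LangChain feature:

```python
import time

def invoke_with_retry(llm, messages, max_attempts: int = 3, backoff_seconds: float = 2.0):
    """Call llm.invoke, retrying with an increasing delay on failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            return llm.invoke(messages)
        except Exception as error:  # narrow to provider-specific errors in real code
            if attempt == max_attempts:
                raise
            print(f"Attempt {attempt} failed ({error}); retrying...")
            time.sleep(backoff_seconds * attempt)
```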
Assignment: Build your own AI-powered study assistant
Objective: Create an AI application that helps students learn programming concepts by providing explanations, code examples, and interactive quizzes.
Requirements
Core Features (Required):
- Conversational Interface: Implement a chat system that maintains context across multiple questions
- Educational Tools: Create at least two tools that help with learning:
  - Code explanation tool
  - Concept quiz generator
- Personalized Learning: Use system messages to adapt responses to different skill levels
- Response Formatting: Implement structured output for quiz questions
Implementation Steps
Step 1: Setup your environment
pip install langchain langchain-openai
Step 2: Basic chat functionality
- Create a `StudyAssistant` class
- Implement conversation memory
- Add personality configuration for educational support
Step 3: Add educational tools
- Code Explainer: Breaks down code into understandable parts
- Quiz Generator: Creates questions about programming concepts
- Progress Tracker: Keeps track of topics covered
Step 4: Enhanced features (Optional)
- Implement streaming responses for better user experience
- Add document loading to incorporate course materials
- Create embeddings for similarity-based content retrieval
Evaluation Criteria
| Feature | Excellent (4) | Good (3) | Satisfactory (2) | Needs Work (1) |
|---|---|---|---|---|
| Conversation Flow | Natural, context-aware responses | Good context retention | Basic conversation | No memory between exchanges |
| Tool Integration | Multiple useful tools working seamlessly | 2+ tools implemented correctly | 1-2 basic tools | Tools not functional |
| Code Quality | Clean, well-documented, error handling | Good structure, some documentation | Basic functionality works | Poor structure, no error handling |
| Educational Value | Truly helpful for learning, adaptive | Good learning support | Basic explanations | Limited educational benefit |
Sample code structure
class StudyAssistant:
def __init__(self, skill_level="beginner"):
# Initialize LLM, tools, and conversation memory
pass
def explain_code(self, code, language):
# Tool: Explain how code works
pass
def generate_quiz(self, topic, difficulty):
# Tool: Create practice questions
pass
def chat(self, user_input):
# Main conversation interface
pass
# Example usage
assistant = StudyAssistant(skill_level="intermediate")
response = assistant.chat("Explain how Python functions work")
Bonus Challenges:
- Add voice input/output capabilities
- Implement a web interface using Streamlit or Flask
- Create a knowledge base from course materials using embeddings
- Add progress tracking and personalized learning paths
Summary
Congratulations! You've mastered the fundamentals of AI framework development and learned how to build sophisticated AI applications using LangChain. Let's review the key concepts and skills you've acquired.
What you've learned
Core Framework Concepts:
- Framework Benefits: Understanding when to choose frameworks over direct API calls
- LangChain Basics: Setting up and configuring AI model connections
- Message Types: Using `SystemMessage`, `HumanMessage`, and `AIMessage` for structured conversations
Advanced Features:
- Tool Calling: Creating and integrating custom tools for enhanced AI capabilities
- Conversation Memory: Maintaining context across multiple conversation turns
- Streaming Responses: Implementing real-time response delivery
- Prompt Templates: Building reusable, dynamic prompts
- Structured Output: Ensuring consistent, parseable AI responses
- Embeddings: Creating semantic search and document processing capabilities
Practical Applications:
- Building Complete Apps: Combining multiple features into production-ready applications
- Error Handling: Implementing robust error management and validation
- Tool Integration: Creating custom tools that extend AI capabilities
Key takeaways
🎯 Remember: AI frameworks like LangChain excel at abstracting complexity while providing powerful features. They're ideal for applications requiring conversation memory, tool calling, or multi-model support.
Decision framework for AI integration:
flowchart TD
A[AI Integration Need] --> B{Simple single query?}
B -->|Yes| C[Direct API calls]
B -->|No| D{Need conversation memory?}
D -->|No| E[SDK Integration]
D -->|Yes| F{Need tools or complex features?}
F -->|No| G[Framework with basic setup]
F -->|Yes| H[Full framework implementation]
C --> I[HTTP requests, minimal dependencies]
E --> J[Provider SDK, model-specific]
G --> K[LangChain basic chat]
H --> L[LangChain with tools, memory, agents]
Next steps in your AI journey
Immediate applications:
- Apply these concepts to build your own AI-powered applications
- Experiment with different AI models through the unified LangChain interface
- Create tools that solve real problems in your domain
Advanced topics to explore:
- AI Agents: Building autonomous AI systems that can plan and execute tasks
- Retrieval-Augmented Generation (RAG): Combining AI with your own knowledge bases
- Multi-Modal AI: Working with text, images, and audio in unified applications
- Production Deployment: Scaling AI applications with proper monitoring and error handling
Community and resources:
- Join the LangChain community for latest updates and best practices
- Explore GitHub Models for access to cutting-edge AI capabilities
- Practice with different use cases to deepen your understanding
You're now equipped with the knowledge to build intelligent, conversational applications that can truly assist users in meaningful ways. The future of AI development is in your hands!
GitHub Copilot Agent Challenge 🚀
Use the Agent mode to complete the following challenge:
Description: Build an advanced AI-powered code review assistant that combines multiple LangChain features including tool calling, structured output, and conversation memory to provide comprehensive feedback on code submissions.
Prompt: Create a CodeReviewAssistant class that implements:
- A tool for analyzing code complexity and suggesting improvements
- A tool for checking code against best practices
- Structured output using Pydantic models for consistent review format
- Conversation memory to track review sessions
- A main chat interface that can handle code submissions and provide detailed, actionable feedback
The assistant should be able to review code in multiple programming languages, maintain context across multiple code submissions in a session, and provide both summary scores and detailed improvement suggestions.