parent
732cef13c8
commit
f002467d3f
@ -0,0 +1,388 @@
<!--
CO_OP_TRANSLATOR_METADATA:
{
    "original_hash": "5fe046e7729ae6a24c717884bf875917",
    "translation_date": "2025-10-11T14:19:59+00:00",
    "source_file": "10-ai-framework-project/README.md",
    "language_code": "ar"
}
-->
# AI Framework

There are many AI frameworks out there that can drastically cut the time it takes to build a project. In this project we'll focus on understanding what problems these frameworks address, and on building such a project ourselves.

## Why a framework?

When it comes to using AI, there are different approaches and different reasons for choosing among them, including:

- **No SDK**. Most AI models let you interact with them directly, for example via HTTP requests. That approach works and may sometimes be your only option if no SDK is available.
- **An SDK**. Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. An SDK is usually limited to a specific model, so if you use different models you may need to write new code to support those additional models.
- **A framework**. A framework usually takes things to another level: if you need to use different models, there's one API for all of them, and what differs is usually the initial setup. Frameworks also provide useful abstractions such as tools, memory, workflows, agents and more, while letting you write less code. Because frameworks tend to be opinionated, they can be very helpful if you buy into their way of doing things, but they can fall short if you try to do something bespoke that the framework wasn't built for. A framework can also oversimplify, so you may never learn an important topic that later turns out to matter for performance.

In general, use the right tool for the job.
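To make the "no SDK" option above concrete, here is a minimal sketch of what calling a model over raw HTTP involves: you assemble the JSON request body yourself. The payload shape assumes an OpenAI-compatible chat API; the field names are illustrative, not taken from this lesson.

```python
import json

# Sketch of the "no SDK" approach: you build the HTTP request body yourself.
# The payload shape below assumes an OpenAI-compatible chat completions API.
def build_chat_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("openai/gpt-4o-mini", "What's the capital of France?")
# You would POST json.dumps(payload) yourself, with an Authorization header.
print(json.dumps(payload, indent=2))
```

An SDK or framework takes this bookkeeping (and authentication, retries, response parsing) off your hands.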

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems such as chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

In this example, we'll use LangChain to connect to GitHub Models. We use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is populated automatically inside GitHub Codespaces; if you run the app locally, you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# Send a single prompt and print the model's reply
response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Call `llm.invoke` with a prompt to generate a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversation

In the previous section, you saw how we used what's commonly known as zero-shot prompting: a single prompt, without prior context, followed by a response.

However, you'll often find yourself in a situation where you need to maintain a conversation consisting of several messages exchanged between you and the AI assistant.

### Using Python

In LangChain, we can store the conversation in a list. A `HumanMessage` represents a message from the user, and a `SystemMessage` is a message meant to set the AI's "personality". In the example below, you can see how we instruct the AI to assume the persona of Captain Picard, while the user asks "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# Send the whole conversation and print the model's reply
response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To maintain the state of the conversation, you can append the response from the chat so that the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# First turn: two messages in, one response out
response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

# Remember the assistant's reply, add a follow-up question, and invoke again
messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

What we can see from the conversation above is that we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages appended to the conversation.
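The bookkeeping here can be shown without any model at all. In this sketch, `fake_llm` is a stand-in for `llm.invoke` (an assumption for illustration); the point is only how the history list grows between the two calls:

```python
# Stand-in for llm.invoke so the state handling can run without a model.
def fake_llm(messages):
    return {"role": "assistant", "content": f"(reply based on {len(messages)} messages)"}

history = [
    {"role": "system", "content": "You are Captain Picard of the Starship Enterprise"},
    {"role": "user", "content": "Tell me about you"},
]

first = fake_llm(history)        # first call sees 2 messages
history.append(first)            # remember the assistant's reply
history.append({"role": "user", "content": "Can I be in your crew?"})
second = fake_llm(history)       # second call sees all 4 messages
print(len(history))  # 4
```

If you never append the replies, every call is effectively zero-shot again: the model has no memory of its own between invocations.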

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)

## Streaming responses

TODO

## Prompt templates

TODO

## Structured output

TODO

## Tool calling

Tools are how we give the LLM additional skills. The idea is to tell the LLM about the functions it has available; if a prompt comes in that matches the description of one of these tools, the tool gets called.

### Using Python

Let's add some tools, like so:

```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema that the LLM can understand. The `functions` dictionary ensures we know what to do when a specific tool is identified.
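To see why this is something the LLM can understand, here is roughly the kind of JSON schema a framework derives from the `add` description above. The exact schema LangChain emits may differ in detail; this hand-written version only shows what the class name, docstring, and annotations become:

```python
# Hand-written approximation of the schema derived from the `add` TypedDict:
# the class name, docstring and Annotated descriptions all end up here.
add_schema = {
    "name": "add",
    "description": "Add two integers.",
    "parameters": {
        "type": "object",
        "properties": {
            "a": {"type": "integer", "description": "First integer"},
            "b": {"type": "integer", "description": "Second integer"},
        },
        "required": ["a", "b"],
    },
}

print(add_schema["name"], list(add_schema["parameters"]["properties"]))
```

It is this schema, not your Python code, that gets sent to the model; that's why the names and descriptions matter so much.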

Let's see how we call the LLM with this tool next:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and thereby the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can write the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now when we call `invoke` on this new LLM, which has tools, the property `tool_calls` may get populated. If so, each identified tool has a `name` and an `args` property that identify which tool should be called and with which arguments. The full code looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

When you run this code, you should see output similar to:

```text
TOOL CALL:  15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as meaning the `add` tool should be called, which it knew thanks to the tool's name, its description, and the descriptions of its member fields. That the answer is 15 is down to our code invoking the tool via the `functions` dictionary:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
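The dispatch step is plain Python and can be run on its own. Below, the `tool_calls` list is hand-written to mimic the shape the model returns (an assumption for illustration); only the `functions` dictionary and the lookup come from the code above:

```python
# Hand-written tool_calls entry mimicking what llm_with_tools.invoke returns.
functions = {"add": lambda a, b: a + b}
tool_calls = [{"name": "add", "args": {"a": 3, "b": 12}}]

# Look up each named tool and call it with the model-provided arguments.
results = [functions[tool["name"]](**tool["args"]) for tool in tool_calls]
print("TOOL CALL: ", results[0])  # TOOL CALL:  15
```

Note that the model never executes anything itself: it only names a tool and proposes arguments, and your code decides whether and how to run it.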

### A more interesting tool that calls a Web API

Tools that add two numbers are interesting in that they illustrate how tool calling works, but tools usually do something more interesting, like calling a Web API. Let's do just that with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now if you run this code, you'll get a response saying something like:

```text
TOOL CALL:  Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in full:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embeddings

Vectorize content and compare it via cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/
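The link above covers real embedding models; the comparison step itself is small enough to sketch here. The three-dimensional vectors are made up for illustration; real embeddings have hundreds or thousands of dimensions:

```python
import math

# Cosine similarity: dot product of the vectors divided by the product of their norms.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up vectors: "cat" and "kitten" should score closer than "cat" and "car".
cat, kitten, car = [0.9, 0.1, 0.0], [0.85, 0.15, 0.05], [0.0, 0.2, 0.9]

print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```

The result is 1.0 for identical directions and falls toward 0 as vectors become unrelated, which is why it works well for ranking semantically similar content.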

### Document loaders

PDF and CSV

## Building an app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
|
||||
@ -0,0 +1,388 @@
|
||||
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:22:56+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "bn"
|
||||
}
|
||||
-->
|
||||
# AI ফ্রেমওয়ার্ক
|
||||
|
||||
অনেক AI ফ্রেমওয়ার্ক রয়েছে যা ব্যবহার করলে একটি প্রকল্প তৈরি করার সময় উল্লেখযোগ্যভাবে কমে যায়। এই প্রকল্পে আমরা এই ফ্রেমওয়ার্কগুলো কী সমস্যার সমাধান করে তা বুঝতে এবং নিজেরাই একটি প্রকল্প তৈরি করতে শিখব।
|
||||
|
||||
## কেন একটি ফ্রেমওয়ার্ক
|
||||
|
||||
AI ব্যবহারের ক্ষেত্রে বিভিন্ন পদ্ধতি এবং এই পদ্ধতিগুলো বেছে নেওয়ার বিভিন্ন কারণ রয়েছে। এখানে কিছু উল্লেখ করা হলো:
|
||||
|
||||
- **কোনো SDK নেই**, বেশিরভাগ AI মডেল আপনাকে HTTP অনুরোধের মাধ্যমে সরাসরি AI মডেলের সাথে যোগাযোগ করার অনুমতি দেয়। এই পদ্ধতি কার্যকর এবং যদি SDK বিকল্প অনুপস্থিত থাকে তবে এটি আপনার একমাত্র বিকল্প হতে পারে।
|
||||
- **SDK**। SDK ব্যবহার সাধারণত সুপারিশ করা হয় কারণ এটি মডেলের সাথে যোগাযোগ করার জন্য কম কোড লিখতে সাহায্য করে। এটি সাধারণত একটি নির্দিষ্ট মডেলের জন্য সীমাবদ্ধ এবং যদি বিভিন্ন মডেল ব্যবহার করতে হয়, তবে আপনাকে সেই অতিরিক্ত মডেলগুলোর জন্য নতুন কোড লিখতে হতে পারে।
|
||||
- **একটি ফ্রেমওয়ার্ক**। একটি ফ্রেমওয়ার্ক সাধারণত জিনিসগুলোকে আরও উন্নত স্তরে নিয়ে যায়। যদি বিভিন্ন মডেল ব্যবহার করতে হয়, তাহলে সবার জন্য একটি API থাকে, যা আলাদা হয় সাধারণত প্রাথমিক সেটআপে। এছাড়াও ফ্রেমওয়ার্কগুলো দরকারী বিমূর্ততা নিয়ে আসে যেমন AI ক্ষেত্রে, তারা টুল, মেমরি, ওয়ার্কফ্লো, এজেন্ট এবং আরও অনেক কিছু পরিচালনা করতে পারে এবং কম কোড লিখতে হয়। যেহেতু ফ্রেমওয়ার্কগুলো সাধারণত মতামতপূর্ণ হয়, তারা সত্যিই সহায়ক হতে পারে যদি আপনি তাদের পদ্ধতিগুলো গ্রহণ করেন। তবে যদি আপনি এমন কিছু করতে চান যা ফ্রেমওয়ার্কটি তৈরি করা হয়নি, তাহলে এটি কম কার্যকর হতে পারে। কখনও কখনও একটি ফ্রেমওয়ার্ক খুব বেশি সরলীকরণ করতে পারে এবং আপনি একটি গুরুত্বপূর্ণ বিষয় শিখতে না পারলে পরে পারফরম্যান্সে ক্ষতি হতে পারে।
|
||||
|
||||
সাধারণভাবে, কাজের জন্য সঠিক টুল ব্যবহার করুন।
|
||||
|
||||
## পরিচিতি
|
||||
|
||||
এই পাঠে আমরা শিখব:
|
||||
|
||||
- একটি সাধারণ AI ফ্রেমওয়ার্ক ব্যবহার করতে।
|
||||
- চ্যাট কথোপকথন, টুল ব্যবহার, মেমরি এবং প্রসঙ্গের মতো সাধারণ সমস্যাগুলো সমাধান করতে।
|
||||
- AI অ্যাপ তৈরি করতে এটি ব্যবহার করতে।
|
||||
|
||||
## প্রথম প্রম্পট
|
||||
|
||||
আমাদের প্রথম অ্যাপ উদাহরণে, আমরা শিখব কীভাবে একটি AI মডেলের সাথে সংযোগ স্থাপন করতে এবং একটি প্রম্পট ব্যবহার করে এটি থেকে প্রশ্ন করতে হয়।
|
||||
|
||||
### পাইথন ব্যবহার করে
|
||||
|
||||
এই উদাহরণে, আমরা Langchain ব্যবহার করব GitHub মডেলের সাথে সংযোগ স্থাপন করতে। আমরা `ChatOpenAI` নামক একটি ক্লাস ব্যবহার করতে পারি এবং এটিকে `api_key`, `base_url` এবং `model` ফিল্ড দিতে পারি। টোকেনটি GitHub Codespaces-এ স্বয়ংক্রিয়ভাবে পূরণ হয় এবং যদি আপনি অ্যাপটি স্থানীয়ভাবে চালান, তাহলে এটি কাজ করার জন্য আপনাকে একটি ব্যক্তিগত অ্যাক্সেস টোকেন সেট আপ করতে হবে।
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
# works
|
||||
response = llm.invoke("What's the capital of France?")
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
এই কোডে আমরা:
|
||||
|
||||
- `ChatOpenAI` কল করে একটি ক্লায়েন্ট তৈরি করি।
|
||||
- একটি প্রম্পট ব্যবহার করে `llm.invoke` কল করে একটি প্রতিক্রিয়া তৈরি করি।
|
||||
- `print(response.content)` দিয়ে প্রতিক্রিয়া মুদ্রণ করি।
|
||||
|
||||
আপনার একটি প্রতিক্রিয়া দেখতে পাওয়া উচিত যা এরকম:
|
||||
|
||||
```text
|
||||
The capital of France is Paris.
|
||||
```
|
||||
|
||||
## চ্যাট কথোপকথন
|
||||
|
||||
পূর্ববর্তী অংশে, আপনি দেখেছেন কীভাবে আমরা সাধারণত জিরো শট প্রম্পটিং ব্যবহার করি, একটি একক প্রম্পট এবং তারপরে একটি প্রতিক্রিয়া।
|
||||
|
||||
তবে, প্রায়ই আপনি নিজেকে এমন পরিস্থিতিতে খুঁজে পাবেন যেখানে আপনাকে AI সহকারীর সাথে একাধিক বার্তা বিনিময়ের কথোপকথন বজায় রাখতে হবে।
|
||||
|
||||
### পাইথন ব্যবহার করে
|
||||
|
||||
Langchain-এ, আমরা কথোপকথন একটি তালিকায় সংরক্ষণ করতে পারি। `HumanMessage` ব্যবহারকারীর একটি বার্তা উপস্থাপন করে এবং `SystemMessage` AI-এর "ব্যক্তিত্ব" সেট করার জন্য একটি বার্তা। নিচের উদাহরণে আপনি দেখতে পাবেন কীভাবে আমরা AI-কে ক্যাপ্টেন পিকার্ডের ব্যক্তিত্ব গ্রহণ করতে নির্দেশ দিই এবং মানব/ব্যবহারকারীকে "Tell me about you" জিজ্ঞাসা করতে বলি।
|
||||
|
||||
```python
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
```
|
||||
|
||||
এই উদাহরণের সম্পূর্ণ কোড দেখতে এরকম:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
আপনার একটি ফলাফল দেখতে পাওয়া উচিত যা এরকম:
|
||||
|
||||
```text
|
||||
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.
|
||||
|
||||
I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.
|
||||
|
||||
I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
|
||||
```
|
||||
|
||||
কথোপকথনের অবস্থা বজায় রাখতে, আপনি একটি চ্যাট থেকে প্রতিক্রিয়া যোগ করতে পারেন, যাতে কথোপকথন মনে রাখা যায়। এটি কীভাবে করা যায় তা এখানে:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
print("---- Next ----")
|
||||
|
||||
messages.append(response)
|
||||
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))
|
||||
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
```
|
||||
|
||||
উপরের কথোপকথন থেকে আমরা দেখতে পাই কীভাবে আমরা প্রথমে কথোপকথন দুটি বার্তা নিয়ে শুরু করি এবং তারপর দ্বিতীয়বার আরও বার্তা যোগ করে কথোপকথন চালাই।
|
||||
|
||||
প্রকৃতপক্ষে, যদি আপনি এটি চালান, আপনি দ্বিতীয় প্রতিক্রিয়া দেখতে পাবেন যা এরকম:
|
||||
|
||||
```text
|
||||
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.
|
||||
|
||||
If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
|
||||
```
|
||||
|
||||
আমি এটিকে "সম্ভবত" ধরে নিলাম ;)
|
||||
|
||||
## স্ট্রিমিং প্রতিক্রিয়া
|
||||
|
||||
TODO
|
||||
|
||||
## প্রম্পট টেমপ্লেট
|
||||
|
||||
TODO
|
||||
|
||||
## কাঠামোগত আউটপুট
|
||||
|
||||
TODO
|
||||
|
||||
## টুল কলিং
|
||||
|
||||
টুল হলো কীভাবে আমরা LLM-কে অতিরিক্ত দক্ষতা প্রদান করি। ধারণাটি হলো LLM-কে তার ফাংশন সম্পর্কে জানানো এবং যদি কোনো প্রম্পট তৈরি হয় যা এই টুলগুলোর বর্ণনার সাথে মিলে যায়, তাহলে আমরা সেগুলো কল করি।
|
||||
|
||||
### পাইথন ব্যবহার করে
|
||||
|
||||
চলুন কিছু টুল যোগ করি এরকম:
|
||||
|
||||
```python
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
```
|
||||
|
||||
এখানে আমরা `add` নামক একটি টুলের বর্ণনা তৈরি করছি। `TypedDict` থেকে উত্তরাধিকার গ্রহণ করে এবং `a` এবং `b` এর মতো সদস্য যোগ করে যা `Annotated` টাইপের, এটি একটি স্কিমায় রূপান্তরিত হতে পারে যা LLM বুঝতে পারে। ফাংশন তৈরির জন্য একটি ডিকশনারি ব্যবহার করা হয় যা নিশ্চিত করে যে নির্দিষ্ট টুল চিহ্নিত হলে কী করতে হবে।
|
||||
|
||||
চলুন দেখি কীভাবে আমরা এই টুল দিয়ে LLM কল করি:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
```
|
||||
|
||||
এখানে আমরা `bind_tools` কল করি আমাদের `tools` অ্যারের সাথে এবং এর ফলে LLM `llm_with_tools` এখন এই টুল সম্পর্কে জ্ঞান রাখে।
|
||||
|
||||
এই নতুন LLM ব্যবহার করতে, আমরা নিম্নলিখিত কোড লিখতে পারি:
|
||||
|
||||
```python
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
এখন আমরা এই নতুন LLM-এ `invoke` কল করি, যার টুল রয়েছে। যদি `tool_calls` প্রপার্টি পূরণ হয়, তাহলে কোনো চিহ্নিত টুলের একটি `name` এবং `args` প্রপার্টি থাকে যা চিহ্নিত করে কোন টুল কল করা উচিত এবং কী আর্গুমেন্ট দিয়ে। সম্পূর্ণ কোড দেখতে এরকম:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
এই কোড চালালে আপনি এরকম আউটপুট দেখতে পাবেন:
|
||||
|
||||
```text
|
||||
TOOL CALL: 15
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
এই আউটপুটের অর্থ হলো LLM প্রম্পট "What is 3 + 12" বিশ্লেষণ করে বুঝেছে যে `add` টুলটি কল করা উচিত এবং এটি তার নাম, বর্ণনা এবং সদস্য ক্ষেত্রের বর্ণনার জন্য জানে। যে উত্তরটি 15 হয়েছে তা আমাদের কোডের ডিকশনারি `functions` ব্যবহার করে এটি কল করার কারণে:
|
||||
|
||||
```python
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
```
|
||||
|
||||
### একটি আরও আকর্ষণীয় টুল যা একটি ওয়েব API কল করে
|
||||
|
||||
যে টুল দুটি সংখ্যা যোগ করে তা আকর্ষণীয় কারণ এটি টুল কলিং কীভাবে কাজ করে তা চিত্রিত করে। তবে সাধারণত টুলগুলো আরও আকর্ষণীয় কিছু করে, যেমন একটি ওয়েব API কল করা। চলুন এই কোড দিয়ে তা করি:
|
||||
|
||||
```python
|
||||
class joke(TypedDict):
|
||||
"""Tell a joke."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
category: Annotated[str, ..., "The joke category"]
|
||||
|
||||
def get_joke(category: str) -> str:
|
||||
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
|
||||
if response.status_code == 200:
|
||||
return response.json().get("value", f"Here's a {category} joke!")
|
||||
return f"Here's a {category} joke!"
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b,
|
||||
"joke": lambda category: get_joke(category)
|
||||
}
|
||||
|
||||
query = "Tell me a joke about animals"
|
||||
|
||||
# the rest of the code is the same
|
||||
```
|
||||
|
||||
এখন যদি আপনি এই কোড চালান, আপনি একটি প্রতিক্রিয়া পাবেন যা এরকম কিছু বলবে:
|
||||
|
||||
```text
|
||||
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
এখানে সম্পূর্ণ কোড:
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import requests
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
class joke(TypedDict):
|
||||
"""Tell a joke."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
category: Annotated[str, ..., "The joke category"]
|
||||
|
||||
tools = [add, joke]
|
||||
|
||||
def get_joke(category: str) -> str:
|
||||
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
|
||||
if response.status_code == 200:
|
||||
return response.json().get("value", f"Here's a {category} joke!")
|
||||
return f"Here's a {category} joke!"
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b,
|
||||
"joke": lambda category: get_joke(category)
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "Tell me a joke about animals"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
# print("TOOL CALL: ", tool)
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
## এমবেডিং
|
||||
|
||||
কন্টেন্টকে ভেক্টরাইজ করুন, কসাইন সিমিলারিটির মাধ্যমে তুলনা করুন।
|
||||
|
||||
https://python.langchain.com/docs/how_to/embed_text/
|
||||
|
||||
### ডকুমেন্ট লোডার
|
||||
|
||||
PDF এবং CSV
|
||||
|
||||
## একটি অ্যাপ তৈরি করা
|
||||
|
||||
TODO
|
||||
|
||||
## অ্যাসাইনমেন্ট
|
||||
|
||||
## সারসংক্ষেপ
|
||||
|
||||
---
|
||||
|
||||
**অস্বীকৃতি**:
|
||||
এই নথিটি AI অনুবাদ পরিষেবা [Co-op Translator](https://github.com/Azure/co-op-translator) ব্যবহার করে অনুবাদ করা হয়েছে। আমরা যথাসাধ্য সঠিকতার জন্য চেষ্টা করি, তবে অনুগ্রহ করে মনে রাখবেন যে স্বয়ংক্রিয় অনুবাদে ত্রুটি বা অসঙ্গতি থাকতে পারে। মূল ভাষায় থাকা নথিটিকে প্রামাণিক উৎস হিসেবে বিবেচনা করা উচিত। গুরুত্বপূর্ণ তথ্যের জন্য, পেশাদার মানব অনুবাদ সুপারিশ করা হয়। এই অনুবাদ ব্যবহারের ফলে কোনো ভুল বোঝাবুঝি বা ভুল ব্যাখ্যা হলে আমরা দায়বদ্ধ থাকব না।
<!--
CO_OP_TRANSLATOR_METADATA:
{
  "original_hash": "5fe046e7729ae6a24c717884bf875917",
  "translation_date": "2025-10-11T14:24:57+00:00",
  "source_file": "10-ai-framework-project/README.md",
  "language_code": "br"
}
-->
# AI Framework

There are many AI frameworks that, when used, can dramatically speed up the time it takes to build a project. In this project we'll focus on understanding which problems these frameworks solve, and on building such a project ourselves.

## Why a framework

When it comes to using AI, there are different approaches and different reasons for choosing them. Here are some of them:

- **No SDK**: Most AI models let you interact with the model directly, for example via HTTP requests. That approach works and may sometimes be your only option if an SDK is missing.
- **SDK**: Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's typically limited to a specific model, and if you use different models you may need to write new code to support those additional models.
- **A framework**: A framework usually takes things to another level, in the sense that if you need to use different models there's one API for all of them; what changes is usually the initial setup. Additionally, frameworks bring useful abstractions such as tools, memory, workflows, agents and more, while you write less code. Because frameworks are usually opinionated, they can be really helpful if you buy into the way they work, but they can fall short if you try to do something bespoke the framework wasn't designed for. Sometimes a framework can also oversimplify, so you may miss learning an important topic that, for example, later hurts performance.

In general, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated inside GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# works
response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversation

In the previous section, you saw how we used what's normally known as zero-shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages exchanged between you and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. `HumanMessage` represents a message from a user, and `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you see how we instruct the AI to assume the personality of Captain Picard, while the human/user asks "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like so:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To keep the conversation state, you can append the response from a chat so the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)

print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)

print(response.content)
```

What we can see in the conversation above is how we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a "maybe" ;)

## Streaming responses

TODO
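
This section is still a TODO; as a placeholder sketch: with LangChain chat models you typically iterate over `llm.stream(prompt)` and print each chunk's `content` as it arrives. The snippet below mocks the stream with a plain generator (the `Chunk` class and `fake_stream` helper are invented for illustration) so the consumption pattern runs without network access.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    # Mirrors the shape of the chunks yielded by llm.stream(): each has .content
    content: str

def fake_stream(text: str):
    # Stand-in for llm.stream(prompt); yields the reply a few characters at a time.
    for i in range(0, len(text), 4):
        yield Chunk(content=text[i:i + 4])

# With a real model this loop would be: for chunk in llm.stream("What's the capital of France?")
pieces = []
for chunk in fake_stream("The capital of France is Paris."):
    pieces.append(chunk.content)
    print(chunk.content, end="", flush=True)
print()

full_reply = "".join(pieces)
```

The point of the loop shape is that the user sees output immediately instead of waiting for the whole reply to finish.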

## Prompt templates

TODO
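
Also still a TODO; the core idea is a prompt with named placeholders filled in at call time (LangChain ships `ChatPromptTemplate` for this). Here is a framework-free sketch of the idea using plain `str.format` — the template text is invented for illustration:

```python
# A template keeps the wording fixed and swaps in the variable parts.
TEMPLATE = "You are a {persona}. Answer the user's question in one sentence: {question}"

def render_prompt(persona: str, question: str) -> str:
    return TEMPLATE.format(persona=persona, question=question)

prompt = render_prompt("helpful geography tutor", "What's the capital of France?")
print(prompt)
```

The rendered string is what would then be passed to `llm.invoke(prompt)`.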

## Structured output

TODO
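
Still a TODO as well; with LangChain the usual route is `llm.with_structured_output(schema)`, which constrains the model's reply to match a schema. The sketch below shows only the receiving half: validating a pretend JSON reply against a hypothetical `Answer` schema (the reply string is invented, not a real model response).

```python
import json
from typing import TypedDict

class Answer(TypedDict):
    city: str
    country: str

def parse_structured(raw: str) -> Answer:
    # Check that the model's JSON reply has every field the schema promises.
    data = json.loads(raw)
    for field in Answer.__annotations__:
        if field not in data:
            raise ValueError(f"missing field: {field}")
    return {"city": data["city"], "country": data["country"]}

# Pretend this string came back from the model.
raw_reply = '{"city": "Paris", "country": "France"}'
answer = parse_structured(raw_reply)
print(answer["city"])
```

Structured output lets the rest of your program treat the reply as data instead of free-form text.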

## Tool calling

Tools are how we give the LLM extra capabilities. The idea is to tell the LLM about functions it has, and if a prompt comes in that matches the description of one of those tools, then we call it.

### Using Python

Let's add some tools like so:

```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM can understand. The `functions` dictionary ensures we know what to do once a specific tool is identified.

Let's see how we call the LLM with this tool next:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and thereby the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can type the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now that we call `invoke` on this new tool-aware LLM, the `tool_calls` property may be populated. If so, each identified tool has `name` and `args` properties that identify which tool should be called and with what arguments. The full code looks like so:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Running this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as meaning that the `add` tool should be called, and it knew that thanks to the tool's name, description and member field descriptions. That the answer is 15 is because our code used the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
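
Stripped of the model call, that dispatch line is just a name-to-callable lookup plus keyword-argument unpacking. Here is a standalone sketch, with a hand-written `tool` dict mimicking the shape of a `tool_calls` entry:

```python
functions = {
    "add": lambda a, b: a + b,
}

# Shape of one entry in res.tool_calls: a tool name plus parsed arguments.
tool = {"name": "add", "args": {"a": 3, "b": 12}}

# Look up the callable by name, then splat the args as keyword arguments.
result = functions[tool["name"]](**tool["args"])
print("TOOL CALL: ", result)
```

Because the model only *selects* a tool and supplies arguments, your own code stays in charge of actually executing anything.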

### A more interesting tool that calls a web API

Tools that add two numbers are interesting in that they illustrate how tool calling works, but usually tools tend to do something more interesting, like calling a web API. Let's do just that with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now, if you run this code, you'll get a response saying something like:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in its entirety:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embeddings

Vectorize content, then compare it via cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/

### Document loaders

PDF and CSV

## Building an app

TODO

## Assignment

## Summary

---
<!--
CO_OP_TRANSLATOR_METADATA:
{
  "original_hash": "5fe046e7729ae6a24c717884bf875917",
  "translation_date": "2025-10-11T14:30:41+00:00",
  "source_file": "10-ai-framework-project/README.md",
  "language_code": "cs"
}
-->
# AI Framework

There are many AI frameworks that can dramatically speed up the time it takes to build a project. In this project we'll focus on understanding which problems these frameworks solve, and on building such a project ourselves.

## Why a framework

When it comes to using AI, there are different approaches and reasons for choosing them. Here are some of them:

- **No SDK**. Most AI models let you interact with the model directly, for example via HTTP requests. That approach works and may sometimes be your only option if an SDK is missing.
- **SDK**. Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's typically limited to a specific model, and if you use different models you may need to write new code to support those additional models.
- **A framework**. A framework usually takes things to another level, in the sense that if you need to use different models there's one API for all of them; what differs is usually the initial setup. Additionally, frameworks bring useful abstractions such as tools, memory, workflows, agents and more, while you write less code. Because frameworks are usually opinionated, they can be really helpful if you buy into the way they work, but they can fall short if you try to do something bespoke the framework doesn't support. Sometimes a framework can also oversimplify, so you may miss learning an important topic that later hurts performance.

In general, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated inside GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# works
response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversation

In the previous section, you saw how we used what's normally known as zero-shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages exchanged between you and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. `HumanMessage` represents a message from a user, and `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you see how we instruct the AI to assume the personality of Captain Picard, while the user asks "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like so:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To keep the conversation state, you can append the response from a chat so the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)

print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)

print(response.content)
```

From the conversation above we can see how we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a "maybe" ;)

## Streaming responses

TODO

## Prompt templates

TODO

## Structured output

TODO

## Tool calling

Tools are how we give the LLM extra capabilities. The idea is to tell the LLM about functions it has, and if a prompt comes in that matches the description of one of those tools, then it is invoked.

### Using Python

Let's add some tools like so:

```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM understands. The `functions` dictionary ensures we know what to do once a specific tool is identified.

Let's see how to call the LLM with this tool:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and thereby the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can type the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now that we call `invoke` on this new tool-aware LLM, the `tool_calls` property may be populated. If so, each identified tool has `name` and `args` properties that identify which tool should be invoked and with what arguments. The full code looks like so:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Running this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as meaning that the `add` tool should be invoked, and it knew that thanks to the tool's name, description and member descriptions. That the answer is 15 is thanks to our code using the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```

### A more interesting tool that calls a web API

Tools that add two numbers are interesting in that they illustrate how tool calling works, but usually tools do something more interesting, for example calling a web API. Let's do that with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now, if you run this code, you'll get a response that looks something like:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the full code:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
|
||||
|
||||
## Embeddings

Vectorize content, compare via cosine similarity

https://python.langchain.com/docs/how_to/embed_text/
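The one-liner above is the whole idea: an embeddings model (for example the ones behind the link) turns text into vectors, and cosine similarity scores how close two vectors point. A minimal sketch of just the similarity math, with made-up 3-dimensional vectors standing in for real embeddings (which have hundreds of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings"; a real embeddings model would produce these from text.
doc1 = [0.9, 0.1, 0.0]
doc2 = [0.8, 0.2, 0.1]
doc3 = [0.0, 0.1, 0.9]

print(cosine_similarity(doc1, doc2))  # close to 1: similar content
print(cosine_similarity(doc1, doc3))  # close to 0: unrelated content
```

In a real app you would get the vectors from an embeddings model and keep the same comparison step.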
### Document loaders

PDF and CSV
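A document loader's only job is to turn raw files into text "documents" that can later be embedded. LangChain ships ready-made loaders for PDF and CSV; as an illustration of what the CSV case boils down to, here is a stdlib-only sketch (the sample data is invented):

```python
import csv
import io

def load_csv(text: str) -> list[str]:
    """Turn each CSV row into one 'document' string of key: value pairs."""
    reader = csv.DictReader(io.StringIO(text))
    return ["\n".join(f"{k}: {v}" for k, v in row.items()) for row in reader]

data = "name,role\nPicard,Captain\nRiker,First Officer\n"
for doc in load_csv(data):
    print(doc)
    print("---")
```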
## Building an app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
<!--
CO_OP_TRANSLATOR_METADATA:
{
  "original_hash": "5fe046e7729ae6a24c717884bf875917",
  "translation_date": "2025-10-11T14:27:18+00:00",
  "source_file": "10-ai-framework-project/README.md",
  "language_code": "da"
}
-->
# AI Framework

There are many AI frameworks out there that can significantly speed up the time it takes to build a project. In this project we will focus on understanding what problems these frameworks address, and on building such a project ourselves.

## Why a framework

When it comes to using AI, there are different approaches and different reasons for choosing those approaches. Here are some:

- **No SDK**. Most AI models let you interact with the model directly via, for example, HTTP requests. That approach works and may sometimes be your only option if an SDK option is missing.
- **SDK**. Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's typically limited to a specific model, and if you're using different models, you may need to write new code to support those additional models.
- **A framework**. A framework usually takes things to another level by offering one API for several models, with the difference typically lying in the initial setup. In addition, frameworks offer useful abstractions that, in the AI space, can handle tools, memory, workflows, agents and more, all while you write less code. Because frameworks are usually opinionated, they can be really helpful if you accept their way of doing things, but they can be limiting if you try to do something bespoke that the framework isn't built for. Sometimes a framework can also oversimplify, which may mean you don't learn an important topic that can later hurt performance, for example.

In general, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI applications.
## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated within GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.
```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

response = llm.invoke("What's the capital of France?")
print(response.content)
```
In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to generate a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```
## Chat conversation

In the preceding section, you saw how we used what's normally known as zero-shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages exchanged between yourself and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. A `HumanMessage` represents a message from a user, and a `SystemMessage` is a message meant to set the AI's "personality". In the example below, you see how we instruct the AI to assume the personality of Captain Picard, and how the human/user asks "Tell me about you" as the prompt.
```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```
The full code for this example looks like this:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)
```
You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To preserve the state of the conversation, you can append the response from a chat so the conversation is remembered. Here's how to do that:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```
What we can see from the conversation above is how we call the LLM twice: first with the conversation consisting of just two messages, and then a second time with more messages appended to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)
## Streaming responses

TODO
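As a sketch of the idea this section will cover: instead of waiting for the whole answer, the client consumes the response chunk by chunk (LangChain chat models expose this via a `stream` method). The generator below is a stand-in for a real model stream, so the chunking is invented for illustration:

```python
from typing import Iterator

def fake_stream(text: str, chunk_size: int = 8) -> Iterator[str]:
    """Stand-in for llm.stream(prompt): yields the answer a few characters at a time."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

# Render partial output as it arrives instead of waiting for the full text.
for chunk in fake_stream("The capital of France is Paris."):
    print(chunk, end="", flush=True)
print()
```

With a real model you would swap `fake_stream(...)` for `llm.stream(query)` and read each chunk's `content`.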
## Prompt templates

TODO
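As a sketch of the idea this section will cover: a prompt template is a reusable prompt with named placeholders filled in per request (LangChain's `ChatPromptTemplate` plays this role). A stdlib-only illustration using `string.Template`, with invented placeholder names:

```python
from string import Template

# A reusable prompt with named placeholders, filled in per request.
joke_prompt = Template("Tell me a $length joke about $topic")

print(joke_prompt.substitute(length="short", topic="animals"))
print(joke_prompt.substitute(length="long", topic="starships"))
```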
## Structured output

TODO
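As a sketch of the idea this section will cover: structured output means asking the model to reply in a machine-readable shape, typically JSON matching a schema (in LangChain, `llm.with_structured_output(...)` handles this). A stdlib-only illustration of validating such a reply, with an invented schema and reply:

```python
import json

# Expected shape of the model's reply (invented for illustration).
schema = {"name": str, "rank": str, "ship": str}

def parse_reply(raw: str) -> dict:
    """Parse a model reply as JSON and check it has the expected fields and types."""
    data = json.loads(raw)
    for field, field_type in schema.items():
        if not isinstance(data.get(field), field_type):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

reply = '{"name": "Jean-Luc Picard", "rank": "Captain", "ship": "USS Enterprise"}'
print(parse_reply(reply))
```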
## Tool calling

Tools are how we give the LLM extra skills. The idea is to tell the LLM about functions it has available, and if a prompt matches the description of one of these tools, then we call it.

### Using Python

Let's add some tools like so:
```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```
What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM can understand. The `functions` dictionary ensures we know what to do if a specific tool is identified.

Let's see how we call the LLM with this tool next:
```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```
Here we call `bind_tools` with our `tools` array; the resulting LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can write the following code:
```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
Now when we call `invoke` on this new tool-aware LLM, the property `tool_calls` may be populated. If so, each identified tool has a `name` and an `args` property identifying which tool should be called and with what arguments. The full code looks like this:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
When you run this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as meaning the tool `add` should be called, and it knew that thanks to the tool's name, description and member field descriptions. That the answer is 15 is down to our code using the `functions` dictionary to execute it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
### A more interesting tool that calls a web API

Tools that add two numbers are interesting in that they illustrate how tool calling works, but usually tools do something more interesting, like, for example, calling a web API. Let's do just that with this code:
```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```
Now, if you run this code, you'll get a response back saying something like:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in its entirety:
```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
## Embeddings

Vectorize content, compare via cosine similarity

https://python.langchain.com/docs/how_to/embed_text/
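The one-liner above is the whole idea: an embeddings model (for example the ones behind the link) turns text into vectors, and cosine similarity scores how close two vectors point. A minimal sketch of just the similarity math, with made-up 3-dimensional vectors standing in for real embeddings (which have hundreds of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings"; a real embeddings model would produce these from text.
doc1 = [0.9, 0.1, 0.0]
doc2 = [0.8, 0.2, 0.1]
doc3 = [0.0, 0.1, 0.9]

print(cosine_similarity(doc1, doc2))  # close to 1: similar content
print(cosine_similarity(doc1, doc3))  # close to 0: unrelated content
```

In a real app you would get the vectors from an embeddings model and keep the same comparison step.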
### Document loaders

PDF and CSV
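A document loader's only job is to turn raw files into text "documents" that can later be embedded. LangChain ships ready-made loaders for PDF and CSV; as an illustration of what the CSV case boils down to, here is a stdlib-only sketch (the sample data is invented):

```python
import csv
import io

def load_csv(text: str) -> list[str]:
    """Turn each CSV row into one 'document' string of key: value pairs."""
    reader = csv.DictReader(io.StringIO(text))
    return ["\n".join(f"{k}: {v}" for k, v in row.items()) for row in reader]

data = "name,role\nPicard,Captain\nRiker,First Officer\n"
for doc in load_csv(data):
    print(doc)
    print("---")
```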
## Building an app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
<!--
CO_OP_TRANSLATOR_METADATA:
{
  "original_hash": "5fe046e7729ae6a24c717884bf875917",
  "translation_date": "2025-10-11T14:19:27+00:00",
  "source_file": "10-ai-framework-project/README.md",
  "language_code": "de"
}
-->
# AI Framework

There are many AI frameworks that can significantly shorten the time it takes to build a project. In this project we will focus on understanding what problems these frameworks solve, and on building such a project ourselves.

## Why a framework?

When it comes to using AI, there are different approaches and reasons for choosing those approaches. Here are some of them:

- **No SDK**: Most AI models let you interact with the model directly via, for example, HTTP requests. That approach works and is sometimes the only option when an SDK is missing.
- **SDK**: Using an SDK is usually recommended, since it requires less code to interact with the model. It's normally limited to a specific model, and if you use different models you may need to write new code to support those additional models.
- **A framework**: A framework often goes a step further by offering a unified API across models, with the difference usually lying in the initial setup. Beyond that, frameworks offer useful abstractions such as tools, memory, workflows, agents and more, while requiring less code. Because frameworks are often opinionated, they can be very helpful if you accept their way of working; however, they can fall short if you want to do something bespoke that the framework doesn't support. Sometimes a framework also oversimplifies, so that important topics go unlearned, which can later hurt performance, for example.

In general: use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Solve common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI applications.
## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and pass it the fields `api_key`, `base_url` and `model`. The token is automatically generated in GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.
```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

response = llm.invoke("What's the capital of France?")
print(response.content)
```
In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to generate a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```
## Chat conversation

In the previous section, you saw how we used what's normally known as zero-shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages exchanged between yourself and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. A `HumanMessage` represents a message from a user, and a `SystemMessage` is a message meant to set the AI's "personality". In the example below, you see how we instruct the AI to assume the personality of Captain Picard, and how the human/user asks "Tell me about you" as the prompt.
```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```
The full code for this example looks like this:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)
```
You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To preserve the state of the conversation, you can append the response from a chat so the conversation is remembered. Here's how to do that:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```
What we can see from the conversation above is how we call the LLM twice: first with the conversation consisting of just two messages, and then a second time with more messages appended to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)
## Streaming responses

TODO
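As a sketch of the idea this section will cover: instead of waiting for the whole answer, the client consumes the response chunk by chunk (LangChain chat models expose this via a `stream` method). The generator below is a stand-in for a real model stream, so the chunking is invented for illustration:

```python
from typing import Iterator

def fake_stream(text: str, chunk_size: int = 8) -> Iterator[str]:
    """Stand-in for llm.stream(prompt): yields the answer a few characters at a time."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

# Render partial output as it arrives instead of waiting for the full text.
for chunk in fake_stream("The capital of France is Paris."):
    print(chunk, end="", flush=True)
print()
```

With a real model you would swap `fake_stream(...)` for `llm.stream(query)` and read each chunk's `content`.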
## Prompt templates

TODO
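As a sketch of the idea this section will cover: a prompt template is a reusable prompt with named placeholders filled in per request (LangChain's `ChatPromptTemplate` plays this role). A stdlib-only illustration using `string.Template`, with invented placeholder names:

```python
from string import Template

# A reusable prompt with named placeholders, filled in per request.
joke_prompt = Template("Tell me a $length joke about $topic")

print(joke_prompt.substitute(length="short", topic="animals"))
print(joke_prompt.substitute(length="long", topic="starships"))
```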
## Structured output

TODO
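As a sketch of the idea this section will cover: structured output means asking the model to reply in a machine-readable shape, typically JSON matching a schema (in LangChain, `llm.with_structured_output(...)` handles this). A stdlib-only illustration of validating such a reply, with an invented schema and reply:

```python
import json

# Expected shape of the model's reply (invented for illustration).
schema = {"name": str, "rank": str, "ship": str}

def parse_reply(raw: str) -> dict:
    """Parse a model reply as JSON and check it has the expected fields and types."""
    data = json.loads(raw)
    for field, field_type in schema.items():
        if not isinstance(data.get(field), field_type):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

reply = '{"name": "Jean-Luc Picard", "rank": "Captain", "ship": "USS Enterprise"}'
print(parse_reply(reply))
```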
## Tool calling

Tools are how we give the LLM extra skills. The idea is to tell the LLM about functions it has available, and if a prompt matches the description of one of these tools, then we call it.

### Using Python

Let's add some tools, like so:
```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```
What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM understands. The `functions` dictionary ensures we know what to do when a specific tool is identified.

Let's see how we call the LLM with this tool:
```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```
Here we call `bind_tools` with our `tools` array; the resulting LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can write the following code:
```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
Now when we call `invoke` on this new tool-aware LLM, the property `tool_calls` may be populated. If so, each identified tool has a `name` and an `args` property indicating which tool should be called and with what arguments. The full code looks like this:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
When you run this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM interpreted the prompt "What is 3 + 12?" as a request to call the tool `add`. It knew that thanks to the tool's name, description and member field descriptions. That the answer is 15 is down to our code using the `functions` dictionary to execute it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
### A more interesting tool that calls a web API

Tools that add two numbers are interesting in that they illustrate how tool calling works, but usually tools do something more interesting, such as calling a web API. Let's do just that with this code:
```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```
Wenn du diesen Code ausführst, erhältst du eine Antwort, die etwa so lautet:
|
||||
|
||||
```text
|
||||
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
Hier ist der gesamte Code:
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import requests
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
class joke(TypedDict):
|
||||
"""Tell a joke."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
category: Annotated[str, ..., "The joke category"]
|
||||
|
||||
tools = [add, joke]
|
||||
|
||||
def get_joke(category: str) -> str:
|
||||
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
|
||||
if response.status_code == 200:
|
||||
return response.json().get("value", f"Here's a {category} joke!")
|
||||
return f"Here's a {category} joke!"
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b,
|
||||
"joke": lambda category: get_joke(category)
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "Tell me a joke about animals"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
# print("TOOL CALL: ", tool)
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
## Einbettung
|
||||
|
||||
Inhalte vektorisieren, Vergleich über Kosinus-Ähnlichkeit
|
||||
|
||||
https://python.langchain.com/docs/how_to/embed_text/
|
||||
|
||||
### Dokumentenlader
|
||||
|
||||
PDF und CSV
|
||||
|
||||
## Eine App erstellen
|
||||
|
||||
TODO
|
||||
|
||||
## Aufgabe
|
||||
|
||||
## Zusammenfassung
|
||||
|
||||
---
|
||||
|
||||
**Haftungsausschluss**:
|
||||
Dieses Dokument wurde mit dem KI-Übersetzungsdienst [Co-op Translator](https://github.com/Azure/co-op-translator) übersetzt. Obwohl wir uns um Genauigkeit bemühen, beachten Sie bitte, dass automatisierte Übersetzungen Fehler oder Ungenauigkeiten enthalten können. Das Originaldokument in seiner ursprünglichen Sprache sollte als maßgebliche Quelle betrachtet werden. Für kritische Informationen wird eine professionelle menschliche Übersetzung empfohlen. Wir übernehmen keine Haftung für Missverständnisse oder Fehlinterpretationen, die sich aus der Nutzung dieser Übersetzung ergeben.
@ -0,0 +1,388 @@
<!--
CO_OP_TRANSLATOR_METADATA:
{
  "original_hash": "5fe046e7729ae6a24c717884bf875917",
  "translation_date": "2025-10-11T14:18:42+00:00",
  "source_file": "10-ai-framework-project/README.md",
  "language_code": "en"
}
-->
# AI Framework

There are many AI frameworks available that can significantly speed up the process of building a project. In this project, we will focus on understanding the problems these frameworks address and create such a project ourselves.

## Why use a framework

When working with AI, there are different approaches and reasons for choosing them. Here are some:

- **No SDK**: Most AI models allow you to interact directly with the model, for example, via HTTP requests. This approach works and may sometimes be your only option if an SDK is unavailable.
- **SDK**: Using an SDK is usually the recommended approach as it requires less code to interact with your model. However, it is often limited to a specific model, and if you use different models, you might need to write new code to support those additional models.
- **A framework**: A framework typically takes things to the next level by providing a unified API for interacting with different models, with differences usually limited to the initial setup. Frameworks also offer useful abstractions, such as tools, memory, workflows, agents, and more, while requiring less code. Because frameworks are often opinionated, they can be very helpful if you align with their approach, but they may fall short if you need to implement something custom that the framework isn't designed for. Additionally, frameworks can sometimes oversimplify things, which might prevent you from learning important concepts that could later impact performance.

In general, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common challenges like chat conversations, tool usage, memory, and context.
- Leverage these capabilities to build AI applications.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.
### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and provide it with the fields `api_key`, `base_url`, and `model`. The token is automatically populated within GitHub Codespaces, but if you're running the app locally, you'll need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# Send a single prompt and print the reply
response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to generate a response.
- Print the response using `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```
## Chat conversation

In the previous section, we used what's commonly known as zero-shot prompting: a single prompt followed by a response.

However, you may often find yourself in situations where you need to maintain a conversation involving multiple exchanges between you and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. The `HumanMessage` represents a message from the user, and `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, we instruct the AI to take on the personality of Captain Picard, and the user asks "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# Send the conversation and print the reply
response = llm.invoke(messages)
print(response.content)
```

You should see an output similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```
To maintain the state of the conversation, you can add the response from the chat so the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# First exchange
response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

# Add the AI's reply and a follow-up question to the history
messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

From the above conversation, we can see how the LLM is invoked twice: first with a conversation consisting of just two messages, and then a second time with additional messages added to the conversation.

If you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)
## Streaming responses

TODO
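Until this section is filled in, here's one way streaming typically looks with Langchain chat models: `stream` yields partial message chunks instead of one final message, so you can print text as it arrives. The `stream_reply` helper name is our own; treat this as a sketch:

```python
def stream_reply(llm, prompt):
    """Print the reply as it arrives and return the full text.

    LangChain chat models expose `stream`, which yields partial
    message chunks whose `content` holds the next piece of text.
    """
    parts = []
    for chunk in llm.stream(prompt):
        print(chunk.content, end="", flush=True)
        parts.append(chunk.content)
    print()
    return "".join(parts)

# Usage with the `llm` from earlier examples:
# stream_reply(llm, "Tell me about the Starship Enterprise")
```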
## Prompt templates

TODO
## Structured output

TODO
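Until this section is filled in, here's a sketch of the idea: the same `TypedDict` schema style used for tool calling can describe the shape you want replies in, and Langchain's `with_structured_output` binds it to the model. The `MovieReview` schema below is our own illustration, not from the original lesson:

```python
from typing_extensions import Annotated, TypedDict

class MovieReview(TypedDict):
    """A structured review of a movie."""

    title: Annotated[str, ..., "The movie title"]
    rating: Annotated[int, ..., "Score from 1 to 10"]
    summary: Annotated[str, ..., "One-sentence verdict"]

# Binding the schema makes the model reply with a dict matching it
# instead of free-form text:
# structured_llm = llm.with_structured_output(MovieReview)
# review = structured_llm.invoke("Review the movie Galaxy Quest")
# review["rating"]  # an int between 1 and 10
```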
## Tool calling

Tools allow us to give the LLM additional capabilities. The idea is to inform the LLM about available functions, and if a prompt matches the description of one of these tools, the tool is invoked.

### Using Python

Let's add some tools like this:

```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

Here, we create a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema that the LLM can understand. The `functions` dictionary maps each tool name to a callable, so we know what to run when a specific tool is identified.

Next, let's see how we call the LLM with this tool:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here, we use `bind_tools` with our `tools` array, enabling the LLM `llm_with_tools` to recognize this tool.

To use this new LLM, we can write the following code:
```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
When we call `invoke` on this new LLM that has tools, the property `tool_calls` may be populated. If so, any identified tools will have a `name` and `args` property that specifies which tool should be called and with what arguments. The full code looks like this:

```python
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
Running this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

This output indicates that the LLM interpreted the prompt "What is 3 + 12" as requiring the `add` tool, based on its name, description, and member field descriptions. The answer, 15, is derived from our code using the dictionary `functions` to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
### A more interesting tool that calls a web API

While tools that add two numbers are useful for illustrating how tool calling works, tools often perform more complex tasks, such as calling a web API. Let's implement that with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Running this code will produce a response similar to:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```
Here's the complete code:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
## Embedding

Vectorize content and compare using cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/
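The comparison step can be sketched without any framework: the cosine similarity of two embedding vectors is their dot product divided by the product of their norms. The embeddings usage in the comments is an assumption about how you'd wire it up with Langchain; the model name shown is illustrative:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same
    direction (very similar), 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# With LangChain, an embeddings model turns text into such vectors, e.g.:
# from langchain_openai import OpenAIEmbeddings
# embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
# v1 = embeddings.embed_query("Starships explore the galaxy")
# v2 = embeddings.embed_query("Spacecraft travel between stars")
# cosine_similarity(v1, v2)  # semantically close texts score near 1
```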
### Document loaders

PDF and CSV
## Building an app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we aim for accuracy, please note that automated translations may contain errors or inaccuracies. The original document in its native language should be regarded as the authoritative source. For critical information, professional human translation is recommended. We are not responsible for any misunderstandings or misinterpretations resulting from the use of this translation.
@ -0,0 +1,388 @@
<!--
CO_OP_TRANSLATOR_METADATA:
{
  "original_hash": "5fe046e7729ae6a24c717884bf875917",
  "translation_date": "2025-10-11T14:19:15+00:00",
  "source_file": "10-ai-framework-project/README.md",
  "language_code": "es"
}
-->
# AI Framework

There are many AI frameworks that, when used, can significantly speed up the time needed to build a project. In this project, we will focus on understanding the problems these frameworks address and build such a project ourselves.

## Why use a framework

When it comes to using AI, there are different approaches and reasons for choosing them. Here are some:

- **No SDK**: Most AI models let you interact directly with the model, for example, via HTTP requests. This approach works and may sometimes be your only option if no SDK is available.
- **SDK**: Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It is generally limited to a specific model, and if you use different models, you may need to write new code to support those additional models.
- **A framework**: A framework usually takes things to the next level in the sense that if you need to use different models, there's one API for all of them; what varies is usually the initial setup. Frameworks also provide useful abstractions, such as handling tools, memory, workflows, agents, and more, while writing less code. Because frameworks are usually opinionated, they can be really helpful if you accept how they do things, but they may fall short if you try to do something custom that the framework isn't designed for. Sometimes a framework can also oversimplify, so you might not learn an important topic that could later affect performance, for example.

In general, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory, and context.
- Leverage this to build AI applications.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.
### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and provide it with the fields `api_key`, `base_url`, and `model`. The token is automatically populated within GitHub Codespaces, and if you're running the app locally, you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# Send a single prompt and print the reply
response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we do the following:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```
## Chat conversation

In the previous section, you saw how we used what's normally known as zero-shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages exchanged between you and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. The `HumanMessage` represents a message from a user, and `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you see how we instruct the AI to take on the personality of Captain Picard, and the human/user asks "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# Send the conversation and print the reply
response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```
To maintain the state of the conversation, you can add the response from a chat so the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# First exchange
response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

# Add the AI's reply and a follow-up question to the history
messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

What we can see from the conversation above is how we invoke the LLM twice: first with the conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)
## Streaming responses

TODO

## Prompt templates

TODO

## Structured output

TODO
## Tool calling

Tools are how we give the LLM additional abilities. The idea is to tell the LLM about the functions it has available, and if a prompt is made that matches the description of one of these tools, then we call it.

### Using Python

Let's add some tools like this:

```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema that the LLM can understand. The `functions` dictionary maps each tool name to a callable, so we know what to run if a specific tool is identified.

Let's see how we call the LLM with this tool next:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, so the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can write the following code:
```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
Now that we call `invoke` on this new LLM, which has tools, the property `tool_calls` may be populated. If so, any identified tools have a `name` and `args` property that identifies which tool should be called and with what arguments. The full code looks like this:

```python
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
Running this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```
What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as meaning the `add` tool should be called, which it knew thanks to the tool's name, description and member field descriptions. That the answer is 15 comes from our code using the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
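The lookup-plus-unpacking line above is a small dispatch pattern. As an illustration, it can be factored into a helper; note that `execute_tool_calls` and the hand-built `calls` list below are our own sketch of the shape of `res.tool_calls`, not part of LangChain's API:

```python
# A minimal sketch of the dispatch pattern used above: each tool call is a
# dict with "name" and "args" keys, and the functions dictionary maps the
# name to a Python callable. (execute_tool_calls is a hypothetical helper,
# not a LangChain API.)

def execute_tool_calls(tool_calls, functions):
    """Run every identified tool call and collect the results."""
    results = []
    for call in tool_calls:
        impl = functions[call["name"]]        # look up the implementation
        results.append(impl(**call["args"]))  # unpack the LLM-chosen args
    return results

functions = {"add": lambda a, b: a + b}

# Hand-built stand-in for what res.tool_calls would contain after invoke().
calls = [{"name": "add", "args": {"a": 3, "b": 12}}]
print(execute_tool_calls(calls, functions))  # [15]
```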
### A more interesting tool that calls a web API

Tools that add two numbers are interesting in that they illustrate how tool calling works, but tools usually do something more interesting, such as calling a web API. Let's do just that with this code:
```python
import requests

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```
Now, if you run this code, you'll get a response saying something like:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in its entirety:
```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
## Embedding

Vectorize content and compare it via cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/
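As a minimal illustration of the comparison step, cosine similarity itself is plain arithmetic. The toy three-dimensional vectors below are made up for the example; real vectors would come from an embeddings model such as those behind the LangChain interface linked above:

```python
import math

def cosine_similarity(a, b):
    """cos(theta) = (a . b) / (|a| * |b|)"""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; real ones come from an embeddings model.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.2, 0.05]
car = [0.0, 0.2, 0.95]

print(cosine_similarity(cat, kitten))  # close to 1: similar meaning
print(cosine_similarity(cat, car))     # close to 0: unrelated
```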
### Document loaders

PDF and CSV.

## Building an app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
<!--
CO_OP_TRANSLATOR_METADATA:
{
"original_hash": "5fe046e7729ae6a24c717884bf875917",
"translation_date": "2025-10-11T14:34:23+00:00",
"source_file": "10-ai-framework-project/README.md",
"language_code": "et"
}
-->
# AI Framework

There are many AI frameworks out there that can drastically speed up the time it takes to build a project. In this project, we'll focus on understanding the problems these frameworks address, and we'll build such a project ourselves.

## Why a framework

When it comes to using AI, there are different approaches and different reasons for choosing those approaches. Here are a few:

- **No SDK**. Most AI models let you interact with the model directly, for example via HTTP requests. That approach works and may sometimes be your only option if an SDK isn't available.
- **SDK**. Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. An SDK is usually limited to a specific model, though, so if you use different models you may need to write new code to support those additional models.
- **A framework**. A framework usually takes things to the next level: if you need to use different models, there's one API for all of them, and what differs is typically the initial setup. Frameworks also bring useful abstractions such as tools, memory, workflows, agents and more, while requiring less code. Because frameworks are usually opinionated, they can be really helpful if you buy into their way of doing things, but they can fall short if you try to do something bespoke the framework wasn't built for. A framework can also oversimplify, so you may miss learning an important topic that later hurts, for example, performance.

Generally, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use LangChain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated within GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.
```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# send a single prompt and print the model's reply
response = llm.invoke("What's the capital of France?")
print(response.content)
```
In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversation

In the preceding section, you saw how we used what's normally known as zero-shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages exchanged between yourself and the AI assistant.

### Using Python

In LangChain, we can store the conversation in a list. The `HumanMessage` represents a message from a user, and `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you can see how we instruct the AI to assume the personality of Captain Picard, while the user asks "Tell me about you" as the prompt.
```python
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like so:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# invoke the model with the whole message history
response = llm.invoke(messages)
print(response.content)
```
You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```
To keep the state of the conversation, you can append the chat response so the conversation is remembered. Here's how to do that:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

# remember the assistant's reply, then ask a follow-up question
messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```
What we can see from the above conversation is how we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)
## Streaming responses

TODO
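This section is still marked TODO. As a rough sketch of the pattern: chat models typically expose a `stream` method that yields partial chunks instead of one final message. The consumption loop below uses a stand-in generator (`fake_stream`, our own name) in place of a real `llm.stream(query)` call, so the shape of the loop is visible without network access:

```python
# Sketch of the streaming pattern (assumed shape): instead of invoke()
# returning one message, stream() yields chunks whose .content holds a
# piece of the reply. fake_stream stands in for llm.stream(query) here.
from dataclasses import dataclass

@dataclass
class Chunk:
    content: str

def fake_stream(query):
    # A real call would be: for chunk in llm.stream(query): ...
    for piece in ["The capital ", "of France ", "is Paris."]:
        yield Chunk(content=piece)

answer = ""
for chunk in fake_stream("What's the capital of France?"):
    print(chunk.content, end="", flush=True)  # display pieces as they arrive
    answer += chunk.content                   # accumulate the full reply
print()
```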
## Prompt templates

TODO
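This section is still marked TODO. The underlying idea of a prompt template, a reusable message skeleton with placeholders filled in per request, can be sketched with plain string formatting. The template strings and `build_messages` helper below are our own illustration; LangChain's `ChatPromptTemplate` plays this role for real chat models:

```python
# Sketch of the prompt-template idea using plain string formatting.
# (LangChain's ChatPromptTemplate provides this for real chat models.)

SYSTEM_TEMPLATE = "You are {persona}."
HUMAN_TEMPLATE = "Answer in {language}: {question}"

def build_messages(persona, language, question):
    """Fill the placeholders and return a (role, content) message list."""
    return [
        ("system", SYSTEM_TEMPLATE.format(persona=persona)),
        ("human", HUMAN_TEMPLATE.format(language=language, question=question)),
    ]

messages = build_messages(
    persona="Captain Picard of the Starship Enterprise",
    language="English",
    question="Tell me about you",
)
print(messages[0])  # ('system', 'You are Captain Picard of the Starship Enterprise.')
```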
## Structured output

TODO
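This section is still marked TODO. A common way to get structured output is to ask the model to reply with JSON matching a schema, then parse and validate that reply in code. The sketch below hard-codes a stand-in for the model's reply and uses our own `REQUIRED_FIELDS`/`parse_reply` names; LangChain also offers `with_structured_output` to wrap this pattern for real models:

```python
# Sketch of the structured-output idea: the model is asked to reply with
# JSON matching a schema, which the app then parses and validates.
# model_reply below is a hard-coded stand-in for a real model response.
import json

REQUIRED_FIELDS = {"name": str, "rank": str}

def parse_reply(raw: str) -> dict:
    """Parse the model's JSON reply and check the expected fields exist."""
    data = json.loads(raw)
    for field, field_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), field_type):
            raise ValueError(f"missing or invalid field: {field}")
    return data

model_reply = '{"name": "Jean-Luc Picard", "rank": "Captain"}'
print(parse_reply(model_reply))  # {'name': 'Jean-Luc Picard', 'rank': 'Captain'}
```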
## Tool calling

Tools are how we give the LLM extra skills. The idea is to tell the LLM about the functions it has access to, and if a prompt matches the description of one of these tools, then the tool is called.

### Using Python

Let's add some tools, like so:
```python
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```
What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM can understand. The `functions` dictionary maps each tool name to an implementation, so we know what to do when a specific tool is identified.

Let's see how we call the LLM with this tool:
```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```
Here we call `bind_tools` with our `tools` array, so the LLM `llm_with_tools` is now aware of this tool.

To use this new LLM, we can write the following code:
```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # dispatch the identified tool call to our Python implementation
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
Now that we call `invoke` on this new LLM, which has tools, the `tool_calls` property may be populated. If so, each identified tool call has a `name` and an `args` property telling us which tool should be called and with which arguments. The full code looks like this:
```python
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
Running this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as meaning the `add` tool should be called, which it knew thanks to the tool's name, description and member field descriptions. That the answer is 15 comes from our code using the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
### A more interesting tool that calls a web API

A tool that adds two numbers is interesting in that it illustrates how tool calling works, but tools usually do something more interesting, such as calling a web API. Let's do that with the following code:

```python
import requests

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```
Now, if you run this code, you'll get a response saying something like:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in its entirety:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
## Embedding

Vectorize content and compare it via cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/

### Document loaders

PDF and CSV.

## Building an app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
<!--
CO_OP_TRANSLATOR_METADATA:
{
"original_hash": "5fe046e7729ae6a24c717884bf875917",
"translation_date": "2025-10-11T14:20:16+00:00",
"source_file": "10-ai-framework-project/README.md",
"language_code": "fa"
}
-->
|
||||
# چارچوب هوش مصنوعی
|
||||
|
||||
چارچوبهای هوش مصنوعی زیادی وجود دارند که استفاده از آنها میتواند زمان لازم برای ساخت یک پروژه را به شدت کاهش دهد. در این پروژه، ما بر درک مشکلاتی که این چارچوبها حل میکنند تمرکز خواهیم کرد و خودمان چنین پروژهای را خواهیم ساخت.
|
||||
|
||||
## چرا یک چارچوب؟
|
||||
|
||||
وقتی صحبت از استفاده از هوش مصنوعی میشود، رویکردها و دلایل مختلفی برای انتخاب این رویکردها وجود دارد. در اینجا چند مورد آورده شده است:
|
||||
|
||||
- **بدون SDK**، بیشتر مدلهای هوش مصنوعی به شما اجازه میدهند مستقیماً از طریق درخواستهای HTTP با مدل هوش مصنوعی تعامل داشته باشید. این رویکرد کار میکند و گاهی ممکن است تنها گزینه شما باشد اگر گزینه SDK وجود نداشته باشد.
|
||||
- **SDK**. استفاده از یک SDK معمولاً رویکرد توصیهشده است، زیرا به شما اجازه میدهد با نوشتن کد کمتر با مدل خود تعامل داشته باشید. معمولاً به یک مدل خاص محدود است و اگر از مدلهای مختلف استفاده کنید، ممکن است نیاز به نوشتن کد جدید برای پشتیبانی از آن مدلها داشته باشید.
|
||||
- **یک چارچوب**. یک چارچوب معمولاً کار را به سطح دیگری میبرد، به این معنا که اگر نیاز به استفاده از مدلهای مختلف داشته باشید، یک API برای همه آنها وجود دارد و تفاوتها معمولاً در تنظیمات اولیه است. علاوه بر این، چارچوبها انتزاعات مفیدی را ارائه میدهند، مانند مدیریت ابزارها، حافظه، جریانهای کاری، عوامل و موارد دیگر در فضای هوش مصنوعی، در حالی که نیاز به نوشتن کد کمتری دارید. از آنجا که چارچوبها معمولاً نظر خاصی دارند، اگر با روش کار آنها موافق باشید، میتوانند واقعاً مفید باشند، اما اگر بخواهید کاری خاص انجام دهید که چارچوب برای آن طراحی نشده باشد، ممکن است ناکارآمد باشند. گاهی اوقات یک چارچوب ممکن است بیش از حد سادهسازی کند و بنابراین ممکن است یک موضوع مهم را یاد نگیرید که بعداً ممکن است به عملکرد آسیب برساند.
|
||||
|
||||
به طور کلی، از ابزار مناسب برای کار استفاده کنید.
|
||||
|
||||
## مقدمه
|
||||
|
||||
در این درس، ما یاد خواهیم گرفت که:
|
||||
|
||||
- از یک چارچوب هوش مصنوعی رایج استفاده کنیم.
|
||||
- مشکلات رایج مانند مکالمات چت، استفاده از ابزارها، حافظه و زمینه را حل کنیم.
|
||||
- از این موارد برای ساخت برنامههای هوش مصنوعی بهره ببریم.
|
||||
|
||||
## اولین درخواست
|
||||
|
||||
در مثال اول برنامه خود، یاد میگیریم که چگونه به یک مدل هوش مصنوعی متصل شویم و با استفاده از یک درخواست از آن پرسوجو کنیم.
|
||||
|
||||
### استفاده از پایتون
|
||||
|
||||
برای این مثال، از Langchain برای اتصال به مدلهای GitHub استفاده خواهیم کرد. ما میتوانیم از کلاسی به نام `ChatOpenAI` استفاده کنیم و فیلدهای `api_key`، `base_url` و `model` را به آن بدهیم. توکن به صورت خودکار در GitHub Codespaces پر میشود و اگر برنامه را به صورت محلی اجرا میکنید، باید یک توکن دسترسی شخصی برای این کار تنظیم کنید.
|
||||
|
||||
```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# send a single prompt and print the model's reply
response = llm.invoke("What's the capital of France?")
print(response.content)
```
In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```
## Chat conversation

In the preceding section, you saw how we used what's normally known as zero-shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages exchanged between yourself and the AI assistant.

### Using Python

In LangChain, we can store the conversation in a list. The `HumanMessage` represents a message from a user, and `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you can see how we instruct the AI to assume the personality of Captain Picard, while the user asks "Tell me about you" as the prompt.

```python
from langchain_core.messages import HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```
The full code for this example looks like so:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# invoke the model with the whole message history
response = llm.invoke(messages)
print(response.content)
```
You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```
To keep the state of the conversation, you can append the chat response so the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

# remember the assistant's reply, then ask a follow-up question
messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```
What we can see from the above conversation is how we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)
## Streaming responses

TODO

## Prompt templates

TODO

## Structured output

TODO

## Tool calling

Tools are how we give the LLM extra skills. The idea is to tell the LLM about the functions it has access to, and if a prompt matches the description of one of these tools, then the tool is called.

### Using Python

Let's add some tools, like so:
```python
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```
What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM can understand. The `functions` dictionary maps each tool name to an implementation, so we know what to do when a specific tool is identified.

Let's see how we call the LLM with this tool:
```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```
Here we call `bind_tools` with our `tools` array, so the LLM `llm_with_tools` is now aware of this tool.

To use this new LLM, we can write the following code:
```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # dispatch the identified tool call to our Python implementation
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
اکنون که `invoke` را روی این LLM جدید که ابزارها را دارد فراخوانی میکنیم، ممکن است ویژگی `tool_calls` پر شود. اگر چنین باشد، هر ابزار شناساییشده دارای ویژگیهای `name` و `args` است که مشخص میکند کدام ابزار باید فراخوانی شود و با چه آرگومانهایی. کد کامل به این صورت است:
|
||||
|
||||
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
|
||||
|
||||
اجرای این کد، باید خروجی مشابه زیر را نشان دهد:
|
||||
|
||||
```text
|
||||
TOOL CALL: 15
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
این خروجی به این معناست که LLM درخواست "What is 3 + 12" را به این معنا تحلیل کرده است که ابزار `add` باید فراخوانی شود و این را به لطف نام، توضیحات و توضیحات فیلدهای عضو آن میدانست. اینکه پاسخ 15 است به این دلیل است که کد ما از دیکشنری `functions` برای فراخوانی آن استفاده کرده است:
|
||||
|
||||
```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
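Notice that `CONTENT` comes back empty: we execute the tool ourselves but never send the result back to the model. A minimal sketch of that dispatch step, using a simulated `tool_calls` entry so it runs without an API call (the `id` value is made up); the commented lines show how the result could be fed back via LangChain's `ToolMessage`:

```python
# Simulated: with a real model, res.tool_calls holds entries shaped like this.
tool_call = {"name": "add", "args": {"a": 3, "b": 12}, "id": "call_1"}

functions = {"add": lambda a, b: a + b}

# Dispatch: look up the implementation by name and unpack the arguments.
result = functions[tool_call["name"]](**tool_call["args"])
print("TOOL CALL:", result)

# To get a natural-language answer, append the result and invoke again
# (sketch, assuming the llm_with_tools and message list from above):
#   from langchain_core.messages import ToolMessage
#   messages.append(ToolMessage(content=str(result), tool_call_id=tool_call["id"]))
#   final = llm_with_tools.invoke(messages)
```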
|
||||
|
||||
### یک ابزار جالبتر که یک API وب را فراخوانی میکند
|
||||
|
||||
ابزارهایی که دو عدد را جمع میکنند جالب هستند زیرا نشان میدهند که چگونه فراخوانی ابزار کار میکند، اما معمولاً ابزارها تمایل دارند کارهای جالبتری انجام دهند، مانند فراخوانی یک API وب. بیایید این کار را با این کد انجام دهیم:
|
||||
|
||||
```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]


def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"


functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```
|
||||
|
||||
اکنون اگر این کد را اجرا کنید، پاسخی شبیه به این دریافت خواهید کرد:
|
||||
|
||||
```text
|
||||
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
در اینجا کد به طور کامل آورده شده است:
|
||||
|
||||
```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]


tools = [add, joke]


def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"


functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
|
||||
|
||||
## تعبیهسازی
|
||||
|
||||
محتوا را برداری کنید، از طریق شباهت کسینوسی مقایسه کنید.
|
||||
|
||||
https://python.langchain.com/docs/how_to/embed_text/
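Once the vectors exist, the comparison step is plain arithmetic. A minimal sketch of cosine similarity with toy vectors; in practice the vectors would come from an embeddings model (for example LangChain's `OpenAIEmbeddings.embed_documents`, see the link above):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings.
doc1 = [0.9, 0.1, 0.0]
doc2 = [0.8, 0.2, 0.1]
query = [0.85, 0.15, 0.05]

# Rank documents by similarity to the query vector.
scores = sorted(
    [("doc1", cosine_similarity(query, doc1)), ("doc2", cosine_similarity(query, doc2))],
    key=lambda s: s[1],
    reverse=True,
)
print(scores)
```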
|
||||
|
||||
### بارگذاری اسناد
|
||||
|
||||
PDF و CSV
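LangChain ships loaders such as `PyPDFLoader` and `CSVLoader` that turn files into document objects (text plus metadata). A minimal sketch of the idea using only the standard library, producing one "document" per CSV row (the CSV contents here are made up):

```python
import csv
import io

raw_csv = "name,role\nPicard,Captain\nData,Lieutenant Commander"

# Mimics what a CSV loader produces: one document per row,
# with the row rendered as text and metadata attached.
documents = []
for i, row in enumerate(csv.DictReader(io.StringIO(raw_csv))):
    documents.append({
        "page_content": "\n".join(f"{k}: {v}" for k, v in row.items()),
        "metadata": {"row": i},
    })

print(len(documents))  # 2
print(documents[0]["page_content"])
```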
|
||||
|
||||
## ساخت یک برنامه
|
||||
|
||||
TODO
|
||||
|
||||
## تمرین
|
||||
|
||||
## خلاصه
|
||||
|
||||
---
|
||||
|
||||
**سلب مسئولیت**:
|
||||
این سند با استفاده از سرویس ترجمه هوش مصنوعی [Co-op Translator](https://github.com/Azure/co-op-translator) ترجمه شده است. در حالی که ما تلاش میکنیم ترجمهها دقیق باشند، لطفاً توجه داشته باشید که ترجمههای خودکار ممکن است شامل خطاها یا نادرستیها باشند. سند اصلی به زبان اصلی آن باید به عنوان منبع معتبر در نظر گرفته شود. برای اطلاعات حساس، توصیه میشود از ترجمه انسانی حرفهای استفاده کنید. ما هیچ مسئولیتی در قبال سوء تفاهمها یا تفسیرهای نادرست ناشی از استفاده از این ترجمه نداریم.
|
||||
@ -0,0 +1,388 @@
|
||||
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:27:52+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "fi"
|
||||
}
|
||||
-->
|
||||
# AI-kehys
|
||||
|
||||
On olemassa monia AI-kehyksiä, jotka voivat merkittävästi nopeuttaa projektin rakentamiseen kuluvaa aikaa. Tässä projektissa keskitymme ymmärtämään, mitä ongelmia nämä kehykset ratkaisevat, ja rakennamme itse tällaisen projektin.
|
||||
|
||||
## Miksi käyttää kehystä
|
||||
|
||||
AI:n käytössä on erilaisia lähestymistapoja ja syitä valita jokin tietty lähestymistapa. Tässä muutamia:
|
||||
|
||||
- **Ei SDK:ta**. Useimmat AI-mallit mahdollistavat suoran vuorovaikutuksen esimerkiksi HTTP-pyyntöjen kautta. Tämä lähestymistapa toimii ja voi joskus olla ainoa vaihtoehto, jos SDK-vaihtoehto puuttuu.
|
||||
- **SDK**. SDK:n käyttö on yleensä suositeltavaa, koska sen avulla voit kirjoittaa vähemmän koodia mallin kanssa vuorovaikuttamiseen. Se on yleensä rajoitettu tiettyyn malliin, ja jos käytät eri malleja, sinun täytyy kirjoittaa uutta koodia tukemaan näitä lisämalleja.
|
||||
- **Kehys**. Kehys vie asiat yleensä seuraavalle tasolle siinä mielessä, että jos tarvitset eri malleja, niille on yksi API, ja eroavaisuudet liittyvät yleensä alkuasetuksiin. Lisäksi kehykset tuovat hyödyllisiä abstraktioita, kuten työkaluja, muistia, työnkulkuja, agentteja ja muuta, samalla kun kirjoitat vähemmän koodia. Koska kehykset ovat yleensä mielipiteellisiä, ne voivat olla todella hyödyllisiä, jos hyväksyt niiden toimintatavan, mutta ne voivat jäädä vajaiksi, jos yrität tehdä jotain räätälöityä, mihin kehys ei ole suunniteltu. Joskus kehys voi myös yksinkertaistaa liikaa, jolloin et ehkä opi tärkeää aihetta, mikä voi myöhemmin haitata suorituskykyä.
|
||||
|
||||
Yleisesti ottaen, käytä oikeaa työkalua oikeaan tehtävään.
|
||||
|
||||
## Johdanto
|
||||
|
||||
Tässä oppitunnissa opimme:
|
||||
|
||||
- Käyttämään yleistä AI-kehystä.
|
||||
- Ratkaisemaan yleisiä ongelmia, kuten keskustelut, työkalujen käyttö, muisti ja konteksti.
|
||||
- Hyödyntämään tätä AI-sovellusten rakentamisessa.
|
||||
|
||||
## Ensimmäinen kehotus
|
||||
|
||||
Ensimmäisessä sovellusesimerkissä opimme, kuinka yhdistää AI-malliin ja tehdä kysely kehotuksen avulla.
|
||||
|
||||
### Pythonin käyttö
|
||||
|
||||
Tässä esimerkissä käytämme Langchainia yhdistääksemme GitHub-malleihin. Voimme käyttää luokkaa `ChatOpenAI` ja antaa sille kentät `api_key`, `base_url` ja `model`. Token täytetään automaattisesti GitHub Codespacesissa, ja jos suoritat sovellusta paikallisesti, sinun täytyy asettaa henkilökohtainen käyttöoikeustoken, jotta tämä toimii.
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
# works
|
||||
response = llm.invoke("What's the capital of France?")
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
Tässä koodissa:
|
||||
|
||||
- Kutsumme `ChatOpenAI`-luokkaa luodaksemme asiakkaan.
|
||||
- Käytämme `llm.invoke`-metodia kehotuksen kanssa luodaksemme vastauksen.
|
||||
- Tulostamme vastauksen `print(response.content)`-komennolla.
|
||||
|
||||
Näet vastauksen, joka näyttää suunnilleen tältä:
|
||||
|
||||
```text
|
||||
The capital of France is Paris.
|
||||
```
|
||||
|
||||
## Keskustelu
|
||||
|
||||
Edellisessä osiossa näit, kuinka käytimme niin sanottua "zero shot prompting" -menetelmää, jossa on yksi kehotus ja vastaus.
|
||||
|
||||
Usein kuitenkin löydät itsesi tilanteesta, jossa sinun täytyy ylläpitää keskustelua, jossa vaihdetaan useita viestejä sinun ja AI-avustajan välillä.
|
||||
|
||||
### Pythonin käyttö
|
||||
|
||||
Langchainissa voimme tallentaa keskustelun listaan. `HumanMessage` edustaa käyttäjän viestiä, ja `SystemMessage` on viesti, joka on tarkoitettu asettamaan AI:n "persoonallisuus". Alla olevassa esimerkissä näet, kuinka ohjeistamme AI:ta ottamaan Kapteeni Picardin persoonallisuuden ja käyttäjän kysymään "Kerro itsestäsi" kehotuksena.
|
||||
|
||||
```python
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
```
|
||||
|
||||
Koko koodi tähän esimerkkiin näyttää tältä:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
Näet tuloksen, joka näyttää suunnilleen tältä:
|
||||
|
||||
```text
|
||||
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.
|
||||
|
||||
I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.
|
||||
|
||||
I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
|
||||
```
|
||||
|
||||
Keskustelun tilan säilyttämiseksi voit lisätä vastauksen keskusteluun, jotta se muistetaan. Näin se tehdään:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
print("---- Next ----")
|
||||
|
||||
messages.append(response)
|
||||
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))
|
||||
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
```
|
||||
|
||||
Yllä olevasta keskustelusta näemme, kuinka kutsumme LLM:ää kahdesti: ensin keskustelulla, joka koostuu vain kahdesta viestistä, mutta sitten toisen kerran, kun keskusteluun on lisätty enemmän viestejä.
|
||||
|
||||
Itse asiassa, jos suoritat tämän, näet toisen vastauksen olevan jotain tällaista:
|
||||
|
||||
```text
|
||||
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.
|
||||
|
||||
If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
|
||||
```
|
||||
|
||||
Otan tuon ehkä-vastauksena ;)
|
||||
|
||||
## Vastausten suoratoisto
|
||||
|
||||
TODO
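With LangChain chat models, `llm.stream(prompt)` yields message chunks as they arrive instead of one final message. A minimal sketch of how such a stream is consumed; the chunks here are simulated so the code runs without a network call, and the commented line shows the real usage (assuming the `llm` from earlier):

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    content: str

def consume_stream(stream) -> str:
    # Print each chunk as it arrives, then return the full text.
    parts = []
    for chunk in stream:
        print(chunk.content, end="", flush=True)
        parts.append(chunk.content)
    print()
    return "".join(parts)

# Real usage (sketch): for chunk in llm.stream("Tell me about warp drives"): ...
fake = [Chunk("The "), Chunk("capital "), Chunk("of "), Chunk("France "), Chunk("is "), Chunk("Paris.")]
full = consume_stream(fake)
```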
|
||||
|
||||
## Kehotusmallit
|
||||
|
||||
TODO
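Prompt templates separate the fixed instructions from the values filled in per request. LangChain provides `ChatPromptTemplate` for this; the sketch below shows the underlying idea with plain `str.format` (the template strings are illustrative), with the LangChain form in comments:

```python
system_template = "You are {persona}. Answer briefly."
user_template = "Question: {question}"

def build_messages(persona: str, question: str) -> list[dict]:
    # Fill the placeholders and return chat-style messages.
    return [
        {"role": "system", "content": system_template.format(persona=persona)},
        {"role": "user", "content": user_template.format(question=question)},
    ]

messages = build_messages("Captain Picard", "What is the capital of France?")

# LangChain equivalent (sketch, not executed here):
#   from langchain_core.prompts import ChatPromptTemplate
#   prompt = ChatPromptTemplate.from_messages(
#       [("system", system_template), ("user", user_template)])
#   llm.invoke(prompt.invoke({"persona": "...", "question": "..."}))
print(messages[0]["content"])
```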
|
||||
|
||||
## Rakenteellinen ulostulo
|
||||
|
||||
TODO
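Structured output means asking the model to return data matching a schema instead of free text; LangChain exposes `llm.with_structured_output(schema)` for this. The runnable part below just defines a schema and parses a JSON string shaped like a model reply (the reply text is made up):

```python
import json
from typing import Annotated, TypedDict

class CityInfo(TypedDict):
    """Facts about a city."""

    name: Annotated[str, ..., "City name"]
    country: Annotated[str, ..., "Country the city is in"]

# With a real model (sketch, assuming the llm from earlier):
#   structured_llm = llm.with_structured_output(CityInfo)
#   city = structured_llm.invoke("Tell me about Paris")

# Simulated model reply, parsed into the schema shape:
raw_reply = '{"name": "Paris", "country": "France"}'
city: CityInfo = json.loads(raw_reply)
print(city["name"], "-", city["country"])
```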
|
||||
|
||||
## Työkalujen käyttö
|
||||
|
||||
Työkalut ovat tapa antaa LLM:lle lisätaitoja. Idea on kertoa LLM:lle sen käytettävissä olevista funktioista, ja jos kehotus vastaa jonkin näistä työkaluista kuvausta, se kutsutaan.
|
||||
|
||||
### Pythonin käyttö
|
||||
|
||||
Lisätään joitakin työkaluja näin:
|
||||
|
||||
```python
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
```
|
||||
|
||||
Tässä luomme kuvauksen työkalusta nimeltä `add`. Perimällä `TypedDict`-luokasta ja lisäämällä jäseniä, kuten `a` ja `b`, tyyppiä `Annotated`, tämä voidaan muuntaa skeemaksi, jonka LLM ymmärtää. Funktioiden luominen on sanakirja, joka varmistaa, että tiedämme, mitä tehdä, jos tietty työkalu tunnistetaan.
|
||||
|
||||
Katsotaanpa, kuinka kutsumme LLM:ää tämän työkalun kanssa:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
```
|
||||
|
||||
Tässä kutsumme `bind_tools`-metodia `tools`-taulukolla, jolloin LLM `llm_with_tools` tuntee tämän työkalun.
|
||||
|
||||
Käyttääksemme tätä uutta LLM:ää voimme kirjoittaa seuraavan koodin:
|
||||
|
||||
```python
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
Kun kutsumme `invoke`-metodia tällä uudella LLM:llä, jolla on työkaluja, `tool_calls`-ominaisuus voi täyttyä. Jos näin tapahtuu, tunnistetut työkalut sisältävät `name`- ja `args`-ominaisuudet, jotka kertovat, mikä työkalu tulisi kutsua ja millä argumenteilla. Koko koodi näyttää tältä:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
Kun suoritat tämän koodin, näet ulostulon, joka näyttää suunnilleen tältä:
|
||||
|
||||
```text
|
||||
TOOL CALL: 15
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
Tämä ulostulo tarkoittaa, että LLM analysoi kehotuksen "Mikä on 3 + 12" tarkoittavan, että `add`-työkalu tulisi kutsua, ja se tiesi tämän nimen, kuvauksen ja jäsenkenttien kuvausten ansiosta. Vastaus 15 johtuu koodistamme, joka käyttää sanakirjaa `functions` sen kutsumiseen:
|
||||
|
||||
```python
|
||||
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
|
||||
```
|
||||
|
||||
### Mielenkiintoisempi työkalu, joka kutsuu verkkosovellusliittymää
|
||||
|
||||
Työkalut, jotka lisäävät kaksi lukua, ovat mielenkiintoisia, koska ne havainnollistavat, kuinka työkalujen kutsuminen toimii, mutta yleensä työkalut tekevät jotain mielenkiintoisempaa, kuten esimerkiksi verkkosovellusliittymän kutsumista. Tehdään juuri niin tällä koodilla:
|
||||
|
||||
```python
|
||||
class joke(TypedDict):
|
||||
"""Tell a joke."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
category: Annotated[str, ..., "The joke category"]
|
||||
|
||||
def get_joke(category: str) -> str:
|
||||
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
|
||||
if response.status_code == 200:
|
||||
return response.json().get("value", f"Here's a {category} joke!")
|
||||
return f"Here's a {category} joke!"
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b,
|
||||
"joke": lambda category: get_joke(category)
|
||||
}
|
||||
|
||||
query = "Tell me a joke about animals"
|
||||
|
||||
# the rest of the code is the same
|
||||
```
|
||||
|
||||
Kun suoritat tämän koodin, saat vastauksen, joka näyttää suunnilleen tältä:
|
||||
|
||||
```text
|
||||
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
Tässä on koodi kokonaisuudessaan:
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import requests
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
class joke(TypedDict):
|
||||
"""Tell a joke."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
category: Annotated[str, ..., "The joke category"]
|
||||
|
||||
tools = [add, joke]
|
||||
|
||||
def get_joke(category: str) -> str:
|
||||
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
|
||||
if response.status_code == 200:
|
||||
return response.json().get("value", f"Here's a {category} joke!")
|
||||
return f"Here's a {category} joke!"
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b,
|
||||
"joke": lambda category: get_joke(category)
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "Tell me a joke about animals"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
# print("TOOL CALL: ", tool)
|
||||
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
## Upottaminen
|
||||
|
||||
Sisällön vektorointi, vertailu kosinisimilaarisuuden avulla
|
||||
|
||||
https://python.langchain.com/docs/how_to/embed_text/
|
||||
|
||||
### Dokumenttien lataajat
|
||||
|
||||
pdf ja csv
|
||||
|
||||
## Sovelluksen rakentaminen
|
||||
|
||||
TODO
|
||||
|
||||
## Tehtävä
|
||||
|
||||
## Yhteenveto
|
||||
|
||||
---
|
||||
|
||||
**Vastuuvapauslauseke**:
|
||||
Tämä asiakirja on käännetty käyttämällä tekoälypohjaista käännöspalvelua [Co-op Translator](https://github.com/Azure/co-op-translator). Vaikka pyrimme tarkkuuteen, huomioithan, että automaattiset käännökset voivat sisältää virheitä tai epätarkkuuksia. Alkuperäinen asiakirja sen alkuperäisellä kielellä tulisi pitää ensisijaisena lähteenä. Kriittisen tiedon osalta suositellaan ammattimaista ihmiskäännöstä. Emme ole vastuussa väärinkäsityksistä tai virhetulkinnoista, jotka johtuvat tämän käännöksen käytöstä.
|
||||
@ -0,0 +1,388 @@
|
||||
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:18:57+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "fr"
|
||||
}
|
||||
-->
|
||||
# Cadre d'IA
|
||||
|
||||
Il existe de nombreux cadres d'IA qui, lorsqu'ils sont utilisés, peuvent considérablement accélérer le temps nécessaire pour développer un projet. Dans ce projet, nous allons nous concentrer sur la compréhension des problèmes que ces cadres abordent et construire un tel projet nous-mêmes.
|
||||
|
||||
## Pourquoi un cadre
|
||||
|
||||
Lorsqu'il s'agit d'utiliser l'IA, il existe différentes approches et raisons de choisir ces approches. Voici quelques exemples :
|
||||
|
||||
- **Pas de SDK** : La plupart des modèles d'IA permettent d'interagir directement avec le modèle via, par exemple, des requêtes HTTP. Cette approche fonctionne et peut parfois être votre seule option si un SDK n'est pas disponible.
|
||||
- **SDK** : Utiliser un SDK est généralement l'approche recommandée, car cela permet d'écrire moins de code pour interagir avec votre modèle. Cependant, cela est souvent limité à un modèle spécifique, et si vous utilisez différents modèles, vous devrez peut-être écrire du nouveau code pour prendre en charge ces modèles supplémentaires.
|
||||
- **Un cadre** : Un cadre va généralement plus loin en offrant une API unique pour différents modèles, ce qui simplifie leur utilisation. Ce qui change généralement, c'est la configuration initiale. De plus, les cadres apportent des abstractions utiles, comme dans le domaine de l'IA, où ils peuvent gérer des outils, la mémoire, les flux de travail, les agents, et plus encore, tout en nécessitant moins de code. Parce que les cadres sont souvent opinionnés, ils peuvent être très utiles si vous adhérez à leur façon de faire, mais peuvent être limités si vous essayez de faire quelque chose de sur mesure qui n'est pas prévu par le cadre. Parfois, un cadre peut également simplifier à l'excès, ce qui peut vous empêcher d'apprendre un sujet important qui pourrait nuire aux performances par la suite.
|
||||
|
||||
En général, utilisez l'outil adapté à la tâche.
|
||||
|
||||
## Introduction
|
||||
|
||||
Dans cette leçon, nous allons apprendre à :
|
||||
|
||||
- Utiliser un cadre d'IA courant.
|
||||
- Résoudre des problèmes courants comme les conversations, l'utilisation d'outils, la mémoire et le contexte.
|
||||
- Exploiter cela pour créer des applications d'IA.
|
||||
|
||||
## Premier prompt
|
||||
|
||||
Dans notre premier exemple d'application, nous allons apprendre à nous connecter à un modèle d'IA et à l'interroger à l'aide d'un prompt.
|
||||
|
||||
### Utilisation de Python
|
||||
|
||||
Pour cet exemple, nous utiliserons Langchain pour nous connecter aux modèles GitHub. Nous pouvons utiliser une classe appelée `ChatOpenAI` et lui fournir les champs `api_key`, `base_url` et `model`. Le jeton est automatiquement renseigné dans GitHub Codespaces, et si vous exécutez l'application localement, vous devez configurer un jeton d'accès personnel pour que cela fonctionne.
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
# works
|
||||
response = llm.invoke("What's the capital of France?")
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
Dans ce code, nous :
|
||||
|
||||
- Appelons `ChatOpenAI` pour créer un client.
|
||||
- Utilisons `llm.invoke` avec un prompt pour générer une réponse.
|
||||
- Affichons la réponse avec `print(response.content)`.
|
||||
|
||||
Vous devriez voir une réponse similaire à :
|
||||
|
||||
```text
|
||||
The capital of France is Paris.
|
||||
```
|
||||
|
||||
## Conversation
|
||||
|
||||
Dans la section précédente, vous avez vu comment nous avons utilisé ce qui est généralement appelé un prompt "zero shot", un seul prompt suivi d'une réponse.
|
||||
|
||||
Cependant, il arrive souvent que vous deviez maintenir une conversation avec plusieurs messages échangés entre vous et l'assistant IA.
|
||||
|
||||
### Utilisation de Python
|
||||
|
||||
Dans Langchain, nous pouvons stocker la conversation dans une liste. Le `HumanMessage` représente un message de l'utilisateur, et le `SystemMessage` est un message destiné à définir la "personnalité" de l'IA. Dans l'exemple ci-dessous, vous voyez comment nous instruisons l'IA pour qu'elle adopte la personnalité du capitaine Picard, et pour que l'utilisateur demande "Parlez-moi de vous" comme prompt.
|
||||
|
||||
```python
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
```
|
||||
|
||||
Le code complet pour cet exemple ressemble à ceci :
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
Vous devriez voir un résultat similaire à :
|
||||
|
||||
```text
|
||||
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.
|
||||
|
||||
I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.
|
||||
|
||||
I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
|
||||
```
|
||||
|
||||
Pour conserver l'état de la conversation, vous pouvez ajouter la réponse d'un chat afin que la conversation soit mémorisée. Voici comment faire :
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
print("---- Next ----")
|
||||
|
||||
messages.append(response)
|
||||
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))
|
||||
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
```
|
||||
|
||||
Ce que nous pouvons voir dans la conversation ci-dessus, c'est comment nous invoquons le LLM deux fois : d'abord avec une conversation composée de seulement deux messages, puis une deuxième fois avec plus de messages ajoutés à la conversation.
|
||||
|
||||
En fait, si vous exécutez cela, vous verrez que la deuxième réponse ressemble à :
|
||||
|
||||
```text
|
||||
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.
|
||||
|
||||
If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
|
||||
```
|
||||
|
||||
Je prends cela comme un "peut-être" ;)
|
||||
|
||||
## Réponses en streaming
|
||||
|
||||
À faire
|
||||
|
||||
## Modèles de prompt
|
||||
|
||||
À faire
|
||||
|
||||
## Sortie structurée
|
||||
|
||||
À faire
|
||||
|
||||
## Tool calling

Tools are how we give the LLM extra skills. The idea is to tell the LLM about the functions available to it, and if a prompt matches the description of one of these tools, we call it.

### Using Python

Let's add some tools like so:

```python
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


tools = [add]


functions = {
    "add": lambda a, b: a + b
}
```

What we are doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM can understand. The `functions` dictionary ensures we know what to do when a specific tool is identified.
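
As a side note, here is a minimal sketch of why this works: the class docstring and the `Annotated` field descriptions are introspectable with the standard library, which is what lets a framework derive a tool schema from the class. This is only an illustration of the mechanism, not LangChain's actual implementation:

```python
from typing import Annotated, TypedDict, get_type_hints


class add(TypedDict):
    """Add two integers."""

    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


# Everything a framework needs to build a tool schema is introspectable:
# the docstring plus each field's type and description.
hints = get_type_hints(add, include_extras=True)
print(add.__doc__)
for name, hint in hints.items():
    # __metadata__ holds the extras passed to Annotated: (..., "description")
    print(name, hint.__metadata__[-1])
```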

Let's see how we call the LLM with this tool:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and so the LLM `llm_with_tools` now knows about this tool.

To use this new LLM, we can write the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now that we call `invoke` on this new LLM, which has tools, the `tool_calls` property may be populated. If it is, any identified tool has a `name` and an `args` property that identify which tool should be called and with which arguments. The full code looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


tools = [add]


functions = {
    "add": lambda a, b: a + b
}


llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)


llm_with_tools = llm.bind_tools(tools)


query = "What is 3 + 12?"


res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Running this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM parsed the prompt "What is 3 + 12?" as meaning the `add` tool should be called, and it knew that thanks to the tool's name, its description, and the descriptions of its member fields. That the answer is 15 is down to our code using the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
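
The dispatch itself is plain Python: look the callable up by the name the model chose and unpack the model-supplied arguments as keyword arguments. Here is a self-contained sketch, with a hand-written `tool_call` dictionary standing in for one entry of `res.tool_calls`:

```python
# A minimal tool registry, mirroring the `functions` dictionary above.
functions = {
    "add": lambda a, b: a + b,
}

# A hand-written stand-in for one entry of `res.tool_calls`:
# the model picks a tool by name and supplies its arguments as a dict.
tool_call = {"name": "add", "args": {"a": 3, "b": 12}}

# Look up the callable by name and unpack the arguments as keyword args.
result = functions[tool_call["name"]](**tool_call["args"])
print("TOOL CALL: ", result)
```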

### A more interesting tool that calls a web API

Tools that add two numbers are interesting because they illustrate how tool calling works, but usually tools do something more interesting, such as calling a web API. Let's do just that with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]


def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"


functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}


query = "Tell me a joke about animals"


# the rest of the code is the same
```

Now if you run this code, you will get a response saying something like:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here is the code in its entirety:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]


tools = [add, joke]


def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"


functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}


llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)


llm_with_tools = llm.bind_tools(tools)


query = "Tell me a joke about animals"


res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embedding

Vectorize content and compare it via cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/
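
Cosine similarity itself is a small computation; here is a plain-Python sketch using made-up toy vectors (real embedding models return vectors with hundreds of dimensions):

```python
import math


def cosine_similarity(u, v):
    # cos(theta) = dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


# Toy "embeddings": the second vector points the same way as the first,
# so their cosine similarity is (near) 1.0; the third is unrelated.
doc = [0.1, 0.3, 0.5]
similar = [0.2, 0.6, 1.0]
unrelated = [0.9, -0.4, 0.0]

print(cosine_similarity(doc, similar))    # close to 1.0
print(cosine_similarity(doc, unrelated))  # much lower
```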

### Document loaders

PDF and CSV

## Building an app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:32:21+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "hr"
|
||||
}
|
||||
-->
|
||||
# AI Framework
|
||||
|
||||
Postoji mnogo AI okvira koji, kada se koriste, mogu značajno ubrzati vrijeme potrebno za izradu projekta. U ovom projektu fokusirat ćemo se na razumijevanje problema koje ti okviri rješavaju i izraditi takav projekt sami.
|
||||
|
||||
## Zašto okvir
|
||||
|
||||
Kada je riječ o korištenju AI-a, postoje različiti pristupi i razlozi za odabir tih pristupa. Evo nekoliko primjera:
|
||||
|
||||
- **Bez SDK-a**. Većina AI modela omogućuje izravnu interakciju s modelom putem, primjerice, HTTP zahtjeva. Taj pristup funkcionira i ponekad može biti jedina opcija ako SDK nije dostupan.
|
||||
- **SDK**. Korištenje SDK-a obično je preporučeni pristup jer omogućuje pisanje manje koda za interakciju s modelom. Obično je ograničen na određeni model, a ako koristite različite modele, možda ćete morati napisati novi kod za podršku tim dodatnim modelima.
|
||||
- **Okvir**. Okvir obično podiže stvari na višu razinu u smislu da, ako trebate koristiti različite modele, postoji jedan API za sve njih, a razlika je obično u početnom postavljanju. Osim toga, okviri donose korisne apstrakcije, poput alata, memorije, tijeka rada, agenata i drugih funkcionalnosti u AI prostoru, uz pisanje manje koda. Budući da su okviri obično "opinionated", mogu biti vrlo korisni ako prihvatite njihov način rada, ali mogu biti nedostatni ako pokušate napraviti nešto prilagođeno što okvir nije predviđen za. Ponekad okvir može pojednostaviti stvari previše, pa možda nećete naučiti važnu temu koja kasnije može negativno utjecati na performanse, na primjer.
|
||||
|
||||
Općenito, koristite pravi alat za posao.
|
||||
|
||||
## Uvod
|
||||
|
||||
U ovoj lekciji naučit ćemo:
|
||||
|
||||
- Koristiti uobičajeni AI okvir.
|
||||
- Rješavati uobičajene probleme poput razgovora, korištenja alata, memorije i konteksta.
|
||||
- Iskoristiti ovo za izradu AI aplikacija.
|
||||
|
||||
## Prvi upit
|
||||
|
||||
U našem prvom primjeru aplikacije naučit ćemo kako se povezati s AI modelom i postaviti mu upit koristeći prompt.
|
||||
|
||||
### Korištenje Pythona
|
||||
|
||||
Za ovaj primjer koristit ćemo Langchain za povezivanje s GitHub modelima. Možemo koristiti klasu `ChatOpenAI` i dodijeliti joj polja `api_key`, `base_url` i `model`. Token se automatski generira unutar GitHub Codespacesa, a ako aplikaciju pokrećete lokalno, trebate postaviti osobni pristupni token da bi ovo funkcioniralo.
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
# works
|
||||
response = llm.invoke("What's the capital of France?")
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
U ovom kodu:
|
||||
|
||||
- Pozivamo `ChatOpenAI` za stvaranje klijenta.
|
||||
- Koristimo `llm.invoke` s promptom za stvaranje odgovora.
|
||||
- Ispisujemo odgovor pomoću `print(response.content)`.
|
||||
|
||||
Trebali biste vidjeti odgovor sličan:
|
||||
|
||||
```text
|
||||
The capital of France is Paris.
|
||||
```
|
||||
|
||||
## Razgovor
|
||||
|
||||
U prethodnom odjeljku vidjeli ste kako smo koristili ono što se obično naziva zero-shot prompting, jedan prompt praćen odgovorom.
|
||||
|
||||
Međutim, često se nađete u situaciji gdje trebate održavati razgovor s više poruka koje se izmjenjuju između vas i AI asistenta.
|
||||
|
||||
### Korištenje Pythona
|
||||
|
||||
U Langchainu možemo pohraniti razgovor u listu. `HumanMessage` predstavlja poruku od korisnika, a `SystemMessage` je poruka namijenjena postavljanju "osobnosti" AI-a. U primjeru ispod vidjet ćete kako AI-u dajemo uputu da preuzme osobnost kapetana Picarda, dok korisnik postavlja pitanje "Reci mi nešto o sebi" kao prompt.
|
||||
|
||||
```python
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
```
|
||||
|
||||
Cijeli kod za ovaj primjer izgleda ovako:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
Trebali biste vidjeti rezultat sličan:
|
||||
|
||||
```text
|
||||
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.
|
||||
|
||||
I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.
|
||||
|
||||
I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
|
||||
```
|
||||
|
||||
Kako biste zadržali stanje razgovora, možete dodati odgovor iz chata, tako da se razgovor pamti. Evo kako to učiniti:
|
||||
|
||||
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

# Append the assistant's reply and a follow-up question so the model sees the full history.
messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```
From the conversation above, we can see how we invoke the LLM twice: first with a conversation consisting of only two messages, and then a second time with more messages appended to the conversation.

In fact, if you run this, you'll see a second response that looks something like this:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)
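The append-and-reinvoke pattern above can be wrapped in a small helper so you don't have to manage the message list by hand. A minimal sketch, with a `ChatSession` class of our own (not a Langchain API): it assumes any client whose `invoke(messages)` returns an object with a `content` attribute, and uses plain role/content dicts, which Langchain chat models also accept.

```python
class ChatSession:
    """Keeps the running message history so each call sees the whole conversation.

    `llm` is assumed to be any chat model with an `invoke(messages)` method
    returning an object with a `.content` attribute, e.g. the `ChatOpenAI`
    client used in this lesson.
    """

    def __init__(self, llm, system_prompt: str):
        self.llm = llm
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, text: str) -> str:
        self.messages.append({"role": "user", "content": text})
        response = self.llm.invoke(self.messages)
        # Remember the assistant's reply too, so the next ask() has full context.
        self.messages.append({"role": "assistant", "content": response.content})
        return response.content
```

With the `llm` client from earlier, `session = ChatSession(llm, "You are Captain Picard of the Starship Enterprise")` followed by repeated `session.ask(...)` calls would carry the conversation forward automatically.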
## Streaming responses

TODO
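The idea behind streaming responses: rather than waiting for the whole answer, the model yields chunks as they are generated. Langchain chat models expose a `stream` method whose chunks carry text in their `content` attribute. A minimal sketch, generic over any such model:

```python
def stream_print(llm, prompt: str) -> str:
    """Print a response as it streams in; returns the full text.

    Assumes a Langchain-style chat model: `stream(prompt)` yields chunks
    whose `content` attribute holds the next piece of text.
    """
    parts = []
    for chunk in llm.stream(prompt):
        print(chunk.content, end="", flush=True)
        parts.append(chunk.content)
    print()
    return "".join(parts)
```

With the `llm` client from earlier, `stream_print(llm, "Write a haiku about starships")` would print the answer as it arrives instead of all at once.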
## Prompt templates

TODO
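The idea behind prompt templates: define a prompt once with placeholders, then fill in the variables per request instead of concatenating strings all over your code. Langchain ships `ChatPromptTemplate` for this; the plain-Python sketch below (our own illustrative `make_prompt` helper, not a Langchain API) shows what the abstraction buys you:

```python
def make_prompt(template: str):
    """Return a function that fills `{placeholders}` in the template.

    This mirrors what prompt-template classes such as Langchain's
    ChatPromptTemplate do: define once, render with different variables.
    """
    def render(**variables) -> str:
        return template.format(**variables)
    return render

captain_prompt = make_prompt(
    "You are {captain} of the Starship {ship}. Answer in character."
)
print(captain_prompt(captain="Picard", ship="Enterprise"))
```

The rendered string can then be passed to `llm.invoke` like any other prompt.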
## Structured output

TODO
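The idea behind structured output: ask the model for a machine-readable shape (typically JSON matching a schema) instead of free text, then validate what comes back. Langchain offers `llm.with_structured_output(...)` for this; the sketch below shows only the validation half, using a hypothetical `parse_person` helper of our own:

```python
import json

def parse_person(raw: str) -> dict:
    """Validate a model reply that is supposed to be JSON with name/age fields."""
    data = json.loads(raw)
    if not isinstance(data.get("name"), str) or not isinstance(data.get("age"), int):
        raise ValueError(f"reply does not match schema: {raw!r}")
    return data

# A well-formed model reply parses cleanly...
print(parse_person('{"name": "Jean-Luc Picard", "age": 59}'))
```

...while a malformed one raises, which is the point: downstream code can rely on the shape instead of scraping free text.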
## Tool calling

Tools are a way to give the LLM extra skills. The idea is to tell the LLM about the functions it has available, and if a prompt comes in that matches the description of one of those tools, we call it.

### Using Python

Let's add some tools, like so:
```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```
What we're doing here is creating the description of a tool named `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM can understand. The `functions` dictionary ensures we know what to do when a specific tool is identified.

Let's look at how we invoke the LLM with this tool:
```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```
Here we call `bind_tools` with our `tools` list, so the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can write the following code:
```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
Now when we call `invoke` on this new LLM, which has tools, the `tool_calls` property may be populated. If so, each identified tool call has `name` and `args` properties that identify which tool should be called and with what arguments. The full code looks like this:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
Running this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```
What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as a request to call the `add` tool, and it knew that thanks to the tool's name, description, and member field descriptions. That the answer is 15 is down to our code using the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
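Note that here we only print the tool result. In a complete tool-calling loop you would send the result back to the model (in Langchain, by appending a `ToolMessage` carrying the matching `tool_call_id` and invoking again) so it can phrase a final answer. The dispatch step can be factored out like this — a sketch of our own, where `tool_calls` mirrors the dict shape of `res.tool_calls` above:

```python
def run_tool_calls(tool_calls, functions):
    """Execute each requested tool and collect results keyed by call id.

    `tool_calls` is assumed to be a list of dicts with "name", "args" and
    "id" keys, the same shape as `res.tool_calls` in the example above.
    """
    results = []
    for call in tool_calls:
        output = functions[call["name"]](**call["args"])
        results.append({"tool_call_id": call.get("id"), "output": output})
    return results

functions = {"add": lambda a, b: a + b}
calls = [{"name": "add", "args": {"a": 3, "b": 12}, "id": "call_1"}]
print(run_tool_calls(calls, functions))
```

Each collected result could then be wrapped in a follow-up message back to the model.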
### A more interesting tool that calls a web API

Tools that add two numbers are interesting in that they illustrate how tool calling works, but usually tools do something more interesting, like calling a web API. Let's do just that with this code:
```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```
Now, if you run this code, you'll get a response that looks something like this:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```
Here's the full code:
```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
## Embeddings

Vectorize content and compare it using cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/

### Document loaders

PDF and CSV
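To make the note above concrete: an embedding model turns text into a vector, and cosine similarity scores how closely two vectors point in the same direction. Here is the similarity half in plain Python, with toy vectors standing in for real embeddings (which you would get from an embeddings model, e.g. via the Langchain how-to linked above):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; real ones have hundreds of dimensions.
doc = [0.2, 0.9, 0.1]
similar_query = [0.25, 0.8, 0.15]
unrelated_query = [0.9, 0.0, 0.1]

print(cosine_similarity(doc, similar_query))    # high: vectors point the same way
print(cosine_similarity(doc, unrelated_query))  # lower: vectors diverge
```

Ranking documents by this score against a query embedding is the core of semantic search and retrieval.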
## Building an app

TODO

## Assignment

## Summary
---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its original language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
@ -0,0 +1,388 @@
<!--
CO_OP_TRANSLATOR_METADATA:
{
    "original_hash": "5fe046e7729ae6a24c717884bf875917",
    "translation_date": "2025-10-11T14:30:17+00:00",
    "source_file": "10-ai-framework-project/README.md",
    "language_code": "hu"
}
-->
# AI Framework

There are many AI frameworks out there that, when used, can drastically speed up the time it takes to build a project. In this project we will focus on understanding what problems these frameworks address, and we'll build such a project ourselves.

## Why a framework

When it comes to using AI, there are different approaches and different reasons for choosing those approaches; here are some:

- **No SDK**. Most AI models let you interact with the model directly via, for example, HTTP requests. That approach works and is sometimes your only option if an SDK is missing.
- **SDK**. Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's usually limited to a specific model, and if you're using different models you may need to write new code to support those additional models.
- **Framework**. A framework usually takes things to another level in the sense that if you need to use different models, there's one API for all of them; what differs is usually the initial setup. Additionally, frameworks bring useful abstractions: in the AI space they can deal with tools, memory, workflows, agents and more, all while requiring less code. Because frameworks are usually opinionated, they can be really helpful if you buy into how they do things, but they may fall short if you try to do something bespoke the framework wasn't built for. Sometimes a framework can also simplify too much, so that you don't learn an important topic that later ends up hurting performance, for example.

Generally, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated within GitHub Codespaces, and if you're running the app locally you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

response = llm.invoke("What's the capital of France?")
print(response.content)
```
In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversation

In the preceding section, you saw how we used what's normally known as zero shot prompting: a single prompt followed by a response.

However, often you find yourself in a situation where you need to maintain a conversation of several messages exchanged between yourself and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. The `HumanMessage` represents a message from a user, and `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you see how we instruct the AI to assume the personality of Captain Picard, and for the human/user to ask "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```
To preserve the state of the conversation, you can append the chat response to the message list so that the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

# Append the assistant's reply and a follow-up question so the model sees the full history.
messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

What we can see from the conversation above is how we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see a second response that looks something like this:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)
## Streaming responses

TODO

## Prompt templates

TODO

## Structured output

TODO

## Tool calling

Tools are how we give the LLM extra skills. The idea is to tell the LLM about the functions it has available, and if a prompt comes in that matches the description of one of those tools, we call it.

### Using Python

Let's add some tools, like so:
```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

Here we create the description of a tool named `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM can understand. The `functions` dictionary ensures we know what to do when a specific tool is identified.

Let's look at how we invoke the LLM with this tool:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` list, so the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can write the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
Now when we call `invoke` on this new LLM, which has tools, the `tool_calls` property may be populated. If so, each identified tool call has `name` and `args` properties that identify which tool should be called and with what arguments. The full code looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Running this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as a request to call the `add` tool, and it knew that thanks to the tool's name, description, and member field descriptions. That the answer is 15 is down to our code using the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
### A more interesting tool that calls a web API

Tools that add two numbers are interesting in that they illustrate how tool calling works, but usually tools do something more interesting, like calling a web API. Let's do just that with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now, if you run this code, you'll get a response that looks something like this:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the full code:
```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
## Embeddings

Vectorize content and compare it using cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/

### Document loaders

PDF and CSV

## Building an app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its original language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
@ -0,0 +1,388 @@
<!--
CO_OP_TRANSLATOR_METADATA:
{
    "original_hash": "5fe046e7729ae6a24c717884bf875917",
    "translation_date": "2025-10-11T14:29:05+00:00",
    "source_file": "10-ai-framework-project/README.md",
    "language_code": "id"
}
-->
# AI Framework

There are many AI frameworks out there that, when used, can drastically speed up the time it takes to build a project. In this project we will focus on understanding what problems these frameworks address, and we'll build such a project ourselves.

## Why a framework

When it comes to using AI, there are different approaches and different reasons for choosing those approaches; here are some:

- **No SDK**. Most AI models let you interact with the model directly via, for example, HTTP requests. That approach works and is sometimes your only option if an SDK is missing.
- **SDK**. Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's usually limited to a specific model, and if you're using different models you may need to write new code to support those additional models.
- **Framework**. A framework usually takes things to another level in the sense that if you need to use different models, there's one API for all of them; what differs is usually the initial setup. Additionally, frameworks bring useful abstractions: in the AI space they can deal with tools, memory, workflows, agents and more, all while requiring less code. Because frameworks are usually opinionated, they can be really helpful if you buy into how they do things, but they may fall short if you try to do something bespoke the framework wasn't built for. Sometimes a framework can also simplify too much, so that you don't learn an important topic that later ends up hurting performance, for example.

Generally, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated within GitHub Codespaces, and if you're running the app locally you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```
## Percakapan chat
|
||||
|
||||
Pada bagian sebelumnya, Anda melihat bagaimana kita menggunakan apa yang biasanya dikenal sebagai zero shot prompting, yaitu satu prompt diikuti oleh respons.
|
||||
|
||||
Namun, sering kali Anda berada dalam situasi di mana Anda perlu mempertahankan percakapan dengan beberapa pesan yang dipertukarkan antara Anda dan asisten AI.
|
||||
|
||||
### Menggunakan Python
|
||||
|
||||
Dalam Langchain, kita dapat menyimpan percakapan dalam sebuah daftar. `HumanMessage` mewakili pesan dari pengguna, dan `SystemMessage` adalah pesan yang dimaksudkan untuk mengatur "kepribadian" AI. Dalam contoh di bawah ini, Anda melihat bagaimana kita menginstruksikan AI untuk berperan sebagai Captain Picard dan pengguna/human untuk bertanya "Ceritakan tentang dirimu" sebagai prompt.
|
||||
|
||||
```python
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
```
|
||||
|
||||
Kode lengkap untuk contoh ini terlihat seperti berikut:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
Anda seharusnya melihat hasil yang mirip dengan:
|
||||
|
||||
```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To maintain conversation state, you can append the response from the chat so that the conversation is remembered. Here's how to do that:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

What we can see from the conversation above is that we invoke the LLM twice: first with a conversation consisting of only two messages, and then a second time with more messages appended to the conversation.

In fact, if you run this, you will see a second response similar to:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a "maybe" ;)
## Streaming responses

TODO
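As a hedged sketch of what will go here: LangChain chat models expose a `stream` method that yields the response in chunks instead of one final message. The `collect_stream` helper below is our own illustrative name, not a framework API, and the live call only runs if a `GITHUB_TOKEN` is configured.

```python
import os

def collect_stream(chunks) -> str:
    """Print streamed chunks as they arrive and return the concatenated text."""
    parts = []
    for chunk in chunks:
        # each chunk carries a piece of the reply in its .content field
        print(chunk.content, end="", flush=True)
        parts.append(chunk.content)
    return "".join(parts)

# Only attempt a live call when a token is configured.
if "GITHUB_TOKEN" in os.environ:
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(
        api_key=os.environ["GITHUB_TOKEN"],
        base_url="https://models.github.ai/inference",
        model="openai/gpt-4o-mini",
    )
    # llm.stream returns an iterator of message chunks rather than one response
    full_text = collect_stream(llm.stream("Tell me about the Enterprise"))
```

The user sees tokens appear as they are generated, which feels much more responsive than waiting for the full reply.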
## Prompt templates

TODO
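The idea behind prompt templates is to keep one reusable prompt with placeholders and fill in the variables at call time. The stdlib-only sketch below shows the underlying mechanism; `build_messages` and its message shape are our own illustration, not a framework API.

```python
def build_messages(persona: str, question: str) -> list[dict]:
    """Fill a fixed prompt template with per-call variables."""
    system_template = "You are {persona} of the Starship Enterprise"
    return [
        {"role": "system", "content": system_template.format(persona=persona)},
        {"role": "user", "content": question},
    ]

# the same template can now serve any persona/question pair
messages = build_messages("Captain Picard", "Tell me about you")
```

In LangChain itself you would reach for `ChatPromptTemplate.from_messages([...])` instead, which produces message objects you can pass straight to `llm.invoke`.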
## Structured output

TODO
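Structured output means asking the model to answer in a machine-readable shape (typically JSON) instead of free text. LangChain chat models can handle this for you via `with_structured_output`; the sketch below shows the idea by hand, parsing and validating a JSON reply. The `parse_captain` helper and its key names are hypothetical.

```python
import json

def parse_captain(reply: str) -> dict:
    """Parse a model reply that we prompted to be JSON with fixed keys."""
    data = json.loads(reply)
    for key in ("name", "ship"):
        if key not in data:
            raise ValueError(f"missing key: {key}")
    return data

# Imagine the prompt ended with:
# 'Reply ONLY with JSON of the form {"name": ..., "ship": ...}'
reply = '{"name": "Jean-Luc Picard", "ship": "USS Enterprise"}'
print(parse_captain(reply)["ship"])  # prints: USS Enterprise
```

Validating the shape up front means the rest of your app can rely on the fields being present.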
## Tool calling

Tools are how we give the LLM additional skills. The idea is to tell the LLM about the functions it has available; if a prompt matches the description of one of these tools, we call it.

### Using Python

Let's add some tools, like so:
```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM can understand. The `functions` dictionary ensures we know what to do once a specific tool has been identified.

Next, let's see how we call the LLM with this tool:
```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and as a result the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can write the following code:
```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now when we call `invoke` on this new LLM, the one with tools, the `tool_calls` property may be populated. If it is, each identified tool call has `name` and `args` properties that identify which tool should be called and with which arguments. The full code looks like this:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Running this code, you should see output similar to:
```text
TOOL CALL:  15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as a request to call the `add` tool, and it knew to do so thanks to the tool's name, description, and member field descriptions. The answer is 15 because our code uses the `functions` dictionary to invoke it:
```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```

### A more interesting tool that calls a web API

A tool that adds two numbers illustrates how tool calling works, but tools usually do something more interesting, such as calling a web API. Let's do just that with the following code:
```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now if you run this code, you'll get a response saying something like:
```text
TOOL CALL:  Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in its entirety:
```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
## Embeddings

Vectorize content, compare via cosine similarity

https://python.langchain.com/docs/how_to/embed_text/
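As a minimal illustration of the comparison step, here is a stdlib-only cosine similarity. In practice the vectors would come from an embeddings model (for example LangChain's `OpenAIEmbeddings`, covered in the link above) rather than being written by hand.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # prints: 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # prints: 0.0
```

Texts whose embedding vectors score close to 1.0 are semantically similar, which is the basis of semantic search.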
### Document loaders

PDF and CSV
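LangChain ships loaders such as `CSVLoader` and `PyPDFLoader` that turn files into document objects. As a stdlib-only sketch of the CSV case, the helper below (an illustrative name, not a framework API) turns each row into a small "document" dict, similar in spirit to what `CSVLoader` produces:

```python
import csv
import io

def load_csv_documents(text: str) -> list[dict]:
    """Turn each CSV row into a small 'document' dict with content and metadata."""
    reader = csv.DictReader(io.StringIO(text))
    docs = []
    for i, row in enumerate(reader):
        # flatten the row into "column: value" lines, one document per row
        content = "\n".join(f"{k}: {v}" for k, v in row.items())
        docs.append({"page_content": content, "metadata": {"row": i}})
    return docs

sample = "name,ship\nPicard,Enterprise\nJaneway,Voyager\n"
docs = load_csv_documents(sample)
print(len(docs))  # prints: 2
```

Once content is in document form, it can be chunked, embedded, and searched.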
## Building an app

TODO
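As a hint of where this is heading: a chat app is essentially the conversation pattern from earlier run in a loop — read user input, append it to the history, invoke the model, append its reply. The sketch below uses plain dict messages and an injected `llm_invoke` callable so it stays framework-free; with LangChain you would pass `llm.invoke` and `HumanMessage`/`SystemMessage` objects instead. All names here are illustrative.

```python
def run_chat(llm_invoke, system_prompt: str, read_input=input, write=print) -> list:
    """Minimal REPL chat loop; llm_invoke maps a message list to a reply dict."""
    history = [{"role": "system", "content": system_prompt}]
    while True:
        user_text = read_input("you> ")
        if user_text.strip().lower() in {"quit", "exit"}:
            return history
        history.append({"role": "user", "content": user_text})
        reply = llm_invoke(history)  # the model sees the whole history each turn
        write(reply["content"])
        history.append(reply)
```

Because input/output are injected, the loop is easy to test with a fake model before wiring up a real one.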
## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
<!--
CO_OP_TRANSLATOR_METADATA:
{
    "original_hash": "5fe046e7729ae6a24c717884bf875917",
    "translation_date": "2025-10-11T14:25:12+00:00",
    "source_file": "10-ai-framework-project/README.md",
    "language_code": "it"
}
-->
# AI Framework

There are many AI frameworks that, when used, can greatly accelerate the time it takes to build a project. In this project we will focus on understanding the problems these frameworks address, and we'll build a similar project ourselves.

## Why a framework

When using AI, there are different approaches and different reasons for choosing those approaches. Here are a few:

- **No SDK**: most AI models let you interact with the model directly, for example via HTTP requests. That approach works and may sometimes be your only option if an SDK is missing.
- **SDK**: using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's usually limited to a specific model; if you're using different models, you may need to write new code to support those additional models.
- **A framework**: a framework usually takes things to the next level in the sense that if you need to use different models, there's one API for all of them; what differs is usually the initial setup. Additionally, frameworks bring useful abstractions: in the AI space they can deal with tools, memory, workflows, agents and more while you write less code. Because frameworks are usually opinionated, they can be really helpful if you buy into how they do things, but they can fall short if you try to do something custom that the framework doesn't support. Sometimes a framework can also simplify too much, and you may therefore miss learning an important topic that later hurts performance, for example.

In general, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated inside GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversation

In the previous section, you saw how we used what is normally known as zero-shot prompting: a single prompt followed by a response.

However, you will often find yourself in situations where you need to maintain a conversation of several messages exchanged between you and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. `HumanMessage` represents a message from the user, and `SystemMessage` is a message meant to set the AI's "personality". In the example below, you can see how we instruct the AI to assume the persona of Captain Picard, and have the human/user ask "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To maintain conversation state, you can append the response from the chat so that the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

What we can see from the conversation above is that we invoke the LLM twice: first with a conversation consisting of only two messages, and then a second time with more messages appended to the conversation.

In fact, if you run this, you will see a second response similar to:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a "maybe" ;)

## Streaming responses

TODO

## Prompt templates

TODO

## Structured output

TODO

## Tool calling

Tools are how we give the LLM additional skills. The idea is to tell the LLM about the functions it has available; if a prompt matches the description of one of these tools, we call it.

### Using Python

Let's add some tools, like so:

```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM can understand. The `functions` dictionary ensures we know what to do once a specific tool has been identified.

Let's see how we call the LLM with this tool:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and as a result the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can write the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now when we call `invoke` on this new LLM, the one with tools, the `tool_calls` property may be populated. If it is, each identified tool call has `name` and `args` properties that identify which tool should be called and with which arguments. The full code looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Running this code, you should see output similar to:

```text
TOOL CALL:  15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as a request to call the `add` tool, and it knew to do so thanks to the tool's name, description, and member field descriptions. The answer is 15 because our code uses the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```

### A more interesting tool that calls a web API

A tool that adds two numbers illustrates how tool calling works, but tools usually do something more interesting, such as calling a web API. Let's do just that with the following code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now if you run this code, you'll get a response saying something like:

```text
TOOL CALL:  Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in its entirety:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embeddings

Vectorize content, compare via cosine similarity

https://python.langchain.com/docs/how_to/embed_text/

### Document loaders

PDF and CSV

## Building an app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
<!--
CO_OP_TRANSLATOR_METADATA:
{
    "original_hash": "5fe046e7729ae6a24c717884bf875917",
    "translation_date": "2025-10-11T14:22:21+00:00",
    "source_file": "10-ai-framework-project/README.md",
    "language_code": "ko"
}
-->
# AI Framework

There are many AI frameworks that, when used, can greatly accelerate the time it takes to build a project. In this project we will focus on understanding the problems these frameworks address, and we'll build a similar project ourselves.

## Why a framework

When using AI, there are different approaches and different reasons for choosing those approaches. Here are a few:

- **No SDK**: most AI models let you interact with the model directly, for example via HTTP requests. That approach works and may sometimes be your only option if an SDK is missing.
- **SDK**: using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's usually limited to a specific model; if you're using different models, you may need to write new code to support those additional models.
- **A framework**: a framework usually takes things to the next level in the sense that if you need to use different models, there's one API for all of them; what differs is usually the initial setup. Additionally, frameworks bring useful abstractions: they can deal with tools, memory, workflows, agents and more while you write less code. Because frameworks are usually opinionated, they can be really helpful if you buy into how they do things, but they can fall short if you try to do something custom that the framework doesn't support. Sometimes a framework can also simplify too much, and you may therefore miss learning an important topic that later hurts performance.

In general, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated inside GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversations

In the previous section, you saw what is usually known as zero-shot prompting: a single prompt followed by a response.

Often, however, you find yourself in a situation where you need to maintain a conversation of several messages exchanged between yourself and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. `HumanMessage` represents a message from the user, and `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you can see how we instruct the AI to assume the personality of Captain Picard, while the human/user asks "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like so:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# invoke the model with the conversation so far
response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To maintain the state of the conversation, you can append the response from the chat so the conversation is remembered. Here's how:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

# remember the assistant's reply, then add a follow-up question
messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

In the conversation above, you can see how we call the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a yes ;)
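
The append-and-reinvoke pattern above doesn't depend on Langchain. Here's a framework-free sketch of the same history handling, using plain dicts for messages; `fake_llm` is a hypothetical stand-in for a real model call:

```python
# Minimal sketch of conversation-state handling, assuming plain dicts
# in place of Langchain's message classes. fake_llm is a hypothetical
# stand-in for a real model invocation.

def fake_llm(history):
    # a real implementation would send the full history to the model
    return {"role": "assistant", "content": f"(reply to {len(history)} messages)"}

history = [
    {"role": "system", "content": "You are Captain Picard of the Starship Enterprise"},
    {"role": "user", "content": "Tell me about you"},
]

reply = fake_llm(history)
history.append(reply)  # remember the assistant's answer
history.append({"role": "user", "content": "Can I be in your crew?"})

reply = fake_llm(history)  # the second call sees all four messages
print(len(history))  # → 4
```

The key design point is that the model itself is stateless: memory only exists because we keep appending to the list we send on every call.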

## Streaming responses

TODO

## Prompt templates

TODO
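
The idea behind prompt templates is to separate fixed prompt text from variables filled in at call time. A framework-free sketch using only `str.format` (Langchain offers richer classes for this, such as `ChatPromptTemplate`):

```python
# Sketch of the prompt-template idea using only str.format.
# The template text and variable names here are illustrative.
template = "You are a helpful assistant. Answer the question about {topic}: {question}"

prompt = template.format(topic="geography", question="What's the capital of France?")
print(prompt)
```

The same template can then be reused with different values, which is the whole point: one tested prompt, many inputs.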

## Structured output

TODO
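
Structured output means asking the model to answer in a machine-readable format such as JSON and then parsing that answer. A sketch of the parsing half, where `raw_reply` is a hard-coded example standing in for a real model response:

```python
import json

# hypothetical raw model reply, hard-coded for illustration only
raw_reply = '{"city": "Paris", "country": "France"}'

data = json.loads(raw_reply)  # parse the model's JSON answer into a dict
print(data["city"])  # → Paris
```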

## Tool calling

Tools are a way of giving the LLM extra skills. The idea is to tell the LLM about functions it has access to, and if a prompt comes in that matches the description of one of these tools, then we call it.

### Using Python

Let's add some tools like so:

```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema that the LLM can understand. The `functions` dictionary makes sure we know what to do when a specific tool is identified.

Let's see how we call the LLM with this tool next:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, so the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can type the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now when we call `invoke` on this new LLM that has tools, the property `tool_calls` may be populated. If so, each identified tool has a `name` and an `args` property that identify which tool should be called and with what arguments. The full code looks like:

```python
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Running this code, you should see output similar to:

```text
TOOL CALL:  15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12" as meaning that the `add` tool should be called, thanks to the tool's name, description and member field descriptions. That the answer is 15 is because our code called the tool via the `functions` dictionary:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
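
The dispatch line above is just a dictionary lookup plus keyword-argument unpacking. A self-contained sketch with a hard-coded `tool_calls` list, mimicking the `name`/`args` shape the model returns:

```python
# Sketch of the tool-dispatch pattern with a hard-coded tool call,
# mimicking the {"name": ..., "args": ...} shape of res.tool_calls.
functions = {
    "add": lambda a, b: a + b,
}

tool_calls = [{"name": "add", "args": {"a": 3, "b": 12}}]

# look up each tool by name and unpack its args as keyword arguments
results = [functions[tc["name"]](**tc["args"]) for tc in tool_calls]
print(results[0])  # → 15
```

Note that the model never runs the function itself; it only emits the name and arguments, and our code performs the actual call.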

### A more interesting tool calling a Web API

A tool adding two numbers is useful in the sense that it shows how tool calling works, but tools usually do something more interesting, like calling a Web API. Let's do just that with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now if you run this code, you'll get a response similar to:

```text
TOOL CALL:  Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the full code:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embeddings

Vectorize content and compare it via cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/
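
Cosine similarity measures the angle between two vectors: 1.0 means they point in the same direction, values near 0 mean they are unrelated. A stdlib-only sketch on toy vectors; in a real setup the vectors would come from an embedding model rather than being hand-written:

```python
import math

def cosine_similarity(a, b):
    # dot product of a and b divided by the product of their magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# toy "embeddings" for illustration; real ones would come from a model
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.2, 0.05]
car = [0.0, 0.1, 0.95]

# related concepts score higher than unrelated ones
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # → True
```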

### Document loaders

PDF and CSV

## Building an app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
@ -0,0 +1,388 @@
|
||||
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:29:21+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "ms"
|
||||
}
|
||||
-->
|
||||
# Rangka Kerja AI
|
||||
|
||||
Terdapat banyak rangka kerja AI yang boleh digunakan untuk mempercepatkan masa yang diperlukan untuk membina projek. Dalam projek ini, kita akan fokus untuk memahami masalah yang ditangani oleh rangka kerja ini dan membina projek seperti itu sendiri.
|
||||
|
||||
## Mengapa rangka kerja
|
||||
|
||||
Apabila menggunakan AI, terdapat pelbagai pendekatan dan sebab untuk memilih pendekatan tersebut, berikut adalah beberapa:
|
||||
|
||||
- **Tiada SDK**, kebanyakan model AI membolehkan anda berinteraksi secara langsung dengan model AI melalui contohnya permintaan HTTP. Pendekatan ini berfungsi dan kadangkala mungkin menjadi satu-satunya pilihan jika tiada pilihan SDK tersedia.
|
||||
- **SDK**. Menggunakan SDK biasanya adalah pendekatan yang disyorkan kerana ia membolehkan anda menulis kod yang lebih sedikit untuk berinteraksi dengan model anda. Ia biasanya terhad kepada model tertentu dan jika menggunakan model yang berbeza, anda mungkin perlu menulis kod baru untuk menyokong model tambahan tersebut.
|
||||
- **Rangka kerja**. Rangka kerja biasanya membawa perkara ke tahap yang lebih tinggi dalam erti kata jika anda perlu menggunakan model yang berbeza, terdapat satu API untuk semuanya, yang berbeza biasanya adalah persediaan awal. Selain itu, rangka kerja membawa abstraksi berguna seperti dalam ruang AI, mereka boleh menguruskan alat, memori, aliran kerja, agen dan banyak lagi sambil menulis kod yang lebih sedikit. Oleh kerana rangka kerja biasanya mempunyai pendapat tertentu, ia boleh sangat membantu jika anda bersetuju dengan cara mereka melakukan sesuatu tetapi mungkin kurang berkesan jika anda cuba melakukan sesuatu yang khusus yang tidak dibuat oleh rangka kerja tersebut. Kadangkala rangka kerja juga boleh menyederhanakan terlalu banyak dan oleh itu anda mungkin tidak mempelajari topik penting yang kemudian boleh menjejaskan prestasi contohnya.
|
||||
|
||||
Secara amnya, gunakan alat yang sesuai untuk tugas.
|
||||
|
||||
## Pengenalan
|
||||
|
||||
Dalam pelajaran ini, kita akan belajar untuk:
|
||||
|
||||
- Menggunakan rangka kerja AI yang biasa.
|
||||
- Menangani masalah biasa seperti perbualan chat, penggunaan alat, memori dan konteks.
|
||||
- Memanfaatkan ini untuk membina aplikasi AI.
|
||||
|
||||
## Prompt pertama
|
||||
|
||||
Dalam contoh aplikasi pertama kita, kita akan belajar bagaimana untuk menyambung ke model AI dan membuat pertanyaan menggunakan prompt.
|
||||
|
||||
### Menggunakan Python
|
||||
|
||||
Untuk contoh ini, kita akan menggunakan Langchain untuk menyambung ke Model GitHub. Kita boleh menggunakan kelas `ChatOpenAI` dan memberikannya medan `api_key`, `base_url` dan `model`. Token adalah sesuatu yang secara automatik diisi dalam GitHub Codespaces dan jika anda menjalankan aplikasi secara tempatan, anda perlu menyediakan token akses peribadi untuk ini berfungsi.
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
# works
|
||||
response = llm.invoke("What's the capital of France?")
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
Dalam kod ini, kita:
|
||||
|
||||
- Memanggil `ChatOpenAI` untuk mencipta klien.
|
||||
- Menggunakan `llm.invoke` dengan prompt untuk mencipta respons.
|
||||
- Mencetak respons dengan `print(response.content)`.
|
||||
|
||||
Anda sepatutnya melihat respons yang serupa dengan:
|
||||
|
||||
```text
|
||||
The capital of France is Paris.
|
||||
```
|
||||
|
||||
## Perbualan chat
|
||||
|
||||
Dalam bahagian sebelumnya, anda melihat bagaimana kita menggunakan apa yang biasanya dikenali sebagai zero shot prompting, satu prompt diikuti oleh respons.
|
||||
|
||||
Namun, sering kali anda berada dalam situasi di mana anda perlu mengekalkan perbualan beberapa mesej yang ditukar antara anda dan pembantu AI.
|
||||
|
||||
### Menggunakan Python
|
||||
|
||||
Dalam langchain, kita boleh menyimpan perbualan dalam senarai. `HumanMessage` mewakili mesej daripada pengguna, dan `SystemMessage` adalah mesej yang bertujuan untuk menetapkan "personaliti" AI. Dalam contoh di bawah, anda melihat bagaimana kita mengarahkan AI untuk menganggap personaliti Kapten Picard dan untuk manusia/pengguna bertanya "Ceritakan tentang diri anda" sebagai prompt.
|
||||
|
||||
```python
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
```
|
||||
|
||||
Kod penuh untuk contoh ini kelihatan seperti berikut:
|
||||
|
||||
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To maintain the state of the conversation, you can append the response from the chat so that the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

What we can see from the conversation above is how we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response be something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)

## Streaming responses

TODO

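This section is still marked TODO; as a sketch of the idea, streaming means consuming the answer chunk by chunk instead of waiting for the whole response (in LangChain this is done with `llm.stream(...)` in place of `llm.invoke(...)`). Here is a minimal framework-agnostic simulation, where `fake_stream` is a made-up stand-in for a streaming model:

```python
def fake_stream(text, chunk_size=8):
    """Simulate a model that yields its answer in small chunks,
    the way a streaming LLM call yields message chunks."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

# Render each chunk as soon as it arrives, while also accumulating
# the full answer for later use.
answer = ""
for chunk in fake_stream("The capital of France is Paris."):
    print(chunk, end="", flush=True)
    answer += chunk
print()
```

The consuming loop looks the same whether chunks come from a fake generator or a real model; only the producer changes.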
## Prompt templates

TODO

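This section is still marked TODO; LangChain ships `ChatPromptTemplate` for this purpose. As a placeholder, here is a framework-agnostic sketch of the core idea: a reusable prompt string with named placeholders (the template text and names below are made up for illustration):

```python
TEMPLATE = "You are {persona}. Answer the question in one sentence: {question}"

def render_prompt(persona: str, question: str) -> str:
    """Fill the placeholders to produce the final prompt string."""
    return TEMPLATE.format(persona=persona, question=question)

prompt = render_prompt("Captain Picard of the Starship Enterprise", "Tell me about you")
print(prompt)
```

A template library adds validation, message roles and composition on top, but the mechanism is the same substitution shown here.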
## Structured output

TODO

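This section is still marked TODO; LangChain exposes `llm.with_structured_output(schema)` for this. As a sketch of the underlying idea, assuming the model has been instructed to reply with JSON, the reply string gets parsed and checked against a schema (the `Joke` fields and the sample reply below are hypothetical):

```python
import json
from typing import TypedDict

class Joke(TypedDict):
    setup: str
    punchline: str

def parse_joke(raw: str) -> Joke:
    """Parse the model's JSON reply and check it has exactly the expected fields."""
    data = json.loads(raw)
    if set(data) != {"setup", "punchline"}:
        raise ValueError(f"unexpected fields: {set(data)}")
    return data

# A hypothetical model reply, produced after prompting for JSON output
raw_reply = '{"setup": "Why did the starship cross the galaxy?", "punchline": "To boldly go."}'
joke_data = parse_joke(raw_reply)
print(joke_data["punchline"])  # To boldly go.
```

A framework automates the schema-to-prompt conversion and retries, but the contract is the same: free text in, validated structure out.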
## Tool calling

Tools are how we give the LLM additional skills. The idea is to tell the LLM about functions it has at its disposal, and if a prompt comes in that matches the description of one of these tools, then we call it.

### Using Python

Let's add some tools, like so:

```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a tool description called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted to a schema that the LLM can understand. The `functions` dictionary makes sure we know what to do when a specific tool is identified.

Let's see how we invoke the LLM with these tools next:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and with that, the LLM `llm_with_tools` now has knowledge of these tools.

To use this new LLM, we can type the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now when we call `invoke` on this new LLM, the one with tools, we may see the property `tool_calls` populated. If so, each identified tool has a `name` and an `args` property that identify which tool should be called and with what arguments. The full code looks like so:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Running this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12" as meaning that the `add` tool should be called, and it knew that thanks to the tool's name, description and member field descriptions. That the answer is 15 is because our code uses the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```

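To make that dispatch line easier to follow, here it is in isolation, with a hand-written dictionary standing in for one entry of what the model would return in `res.tool_calls`:

```python
# Hypothetical tool call, shaped like one entry of res.tool_calls
# (the name and args values normally come from the model)
tool_call = {"name": "add", "args": {"a": 3, "b": 12}}

functions = {"add": lambda a, b: a + b}

# Look up the implementation by name, then splat the args dictionary
# into keyword arguments: functions["add"](a=3, b=12)
result = functions[tool_call["name"]](**tool_call["args"])
print("TOOL CALL: ", result)  # TOOL CALL:  15
```

The model only *selects* the tool and proposes arguments; executing it is entirely our code's responsibility, which is why the lookup dictionary is needed.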
### A more interesting tool that calls a web API

A tool that adds two numbers is interesting in that it illustrates how tool calling works, but usually tools tend to do something more interesting, like for example calling a web API. Let's do just that with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now if you run this code, you'll get a response saying something like:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in its entirety:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embeddings

Vectorize content; compare via cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/

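The note above can be sketched without any framework: an embedding is just a list of numbers, and cosine similarity measures how aligned two of them are. The toy three-dimensional vectors below are made up for illustration; real embedding models return hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings"; semantically similar texts get nearby vectors.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
car = [0.0, 0.2, 0.95]

print(cosine_similarity(cat, kitten))  # close to 1.0
print(cosine_similarity(cat, car))     # much lower
```

This comparison is the core of semantic search: embed the query, embed the documents, and rank documents by cosine similarity to the query.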
### Document loaders

PDF and CSV

## Building an app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
||||
@ -0,0 +1,388 @@
|
||||
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:32:56+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "my"
|
||||
}
|
||||
-->
|
||||
# AI Framework
|
||||
|
||||
AI Framework များစွာရှိပြီး၊ ၎င်းတို့ကို အသုံးပြုခြင်းဖြင့် ပရောဂျက်တစ်ခုကို တည်ဆောက်ရန် လိုအပ်သောအချိန်ကို အလွန်လျင်မြန်စေပါသည်။ ဒီပရောဂျက်မှာတော့ Framework များက ဖြေရှင်းပေးနိုင်သော ပြဿနာများကို နားလည်ရန်နှင့် ပရောဂျက်တစ်ခုကို ကိုယ်တိုင်တည်ဆောက်ရန် အာရုံစိုက်ပါမည်။
|
||||
|
||||
## Framework သုံးရတဲ့အကြောင်း
|
||||
|
||||
AI ကို အသုံးပြုတဲ့အခါမှာ နည်းလမ်းအမျိုးမျိုးရှိပြီး၊ အဲဒီနည်းလမ်းတွေကို ရွေးချယ်ရတဲ့အကြောင်းအရင်းတွေကလည်း မတူညီပါတယ်။ အောက်မှာ အချို့ကို ဖော်ပြထားပါတယ်-
|
||||
|
||||
- **SDK မရှိခြင်း**: AI မော်ဒယ်များအများစုက HTTP request များကို အသုံးပြု၍ မော်ဒယ်နှင့်တိုက်ရိုက် ဆက်သွယ်နိုင်စေပါတယ်။ ဒီနည်းလမ်းက အလုပ်လုပ်နိုင်ပြီး၊ SDK ရွေးချယ်စရာမရှိတဲ့အခါမှာ တစ်ခါတစ်ရံ အဲဒီနည်းလမ်းကိုသာ အသုံးပြုရနိုင်ပါတယ်။
|
||||
- **SDK**: SDK ကို အသုံးပြုခြင်းက မော်ဒယ်နှင့် ဆက်သွယ်ရန် လိုအပ်သော ကုဒ်ကို လျှော့ချနိုင်စေသောကြောင့် အကြံပြုထားသော နည်းလမ်းဖြစ်ပါတယ်။ ဒါပေမယ့်၊ SDK သည် သတ်မှတ်ထားသော မော်ဒယ်တစ်ခုအတွက်သာ အကျိုးရှိပြီး၊ မော်ဒယ်အမျိုးမျိုးကို အသုံးပြုလိုပါက ထပ်မံကုဒ်ရေးရန် လိုအပ်နိုင်ပါတယ်။
|
||||
- **Framework**: Framework သည် မော်ဒယ်အမျိုးမျိုးကို အသုံးပြုလိုပါက API တစ်ခုတည်းကို အသုံးပြုနိုင်စေပြီး၊ မတူညီသောအရာမှာ စတင်ပြင်ဆင်မှုသာ ဖြစ်ပါတယ်။ ထို့အပြင် Framework များက AI နယ်ပယ်တွင် အသုံးဝင်သော အဆင့်မြှင့်တင်မှုများကို ပေးစွမ်းနိုင်ပြီး၊ Tools, Memory, Workflows, Agents စသည်တို့ကို လျှော့ချထားသော ကုဒ်ဖြင့် စီမံနိုင်စေပါတယ်။ Framework များသည် အမြဲတမ်း အမြင်တစ်ခုတည်းကို အခြေခံထားသောကြောင့်၊ ၎င်းတို့၏ နည်းလမ်းကို လက်ခံနိုင်ပါက အလွန်အသုံးဝင်နိုင်သော်လည်း၊ Framework များမထောက်ပံ့သော အထူးပြုလုပ်ဆောင်မှုများကို လုပ်ဆောင်လိုပါက အခက်အခဲဖြစ်နိုင်ပါတယ်။ တစ်ခါတစ်ရံ Framework များက အလွန်လွယ်ကူစွာ ရှင်းလင်းပေးနိုင်ပြီး၊ အရေးကြီးသောအကြောင်းအရာများကို မသိရှိစေခြင်းကြောင့် နောက်ပိုင်းတွင် စွမ်းဆောင်ရည်ကို ထိခိုက်စေနိုင်ပါသည်။
|
||||
|
||||
ယေဘူယျအားဖြင့်၊ အလုပ်အတွက် သင့်တော်သော Tools ကို အသုံးပြုပါ။
|
||||
|
||||
## အကျဉ်းချုပ်
|
||||
|
||||
ဒီသင်ခန်းစာမှာ ကျွန်တော်တို့-
|
||||
|
||||
- AI Framework တစ်ခုကို အသုံးပြုခြင်းကို လေ့လာမည်။
|
||||
- Chat Conversations, Tool Usage, Memory နှင့် Context ကဲ့သို့သော ပြဿနာများကို ဖြေရှင်းမည်။
|
||||
- AI Apps တည်ဆောက်ရန် အထောက်အကူပြုမည်။
|
||||
|
||||
## ပထမဆုံး Prompt
|
||||
|
||||
ပထမဆုံး App နမူနာမှာ AI Model ကို ချိတ်ဆက်ပြီး Prompt ကို အသုံးပြု၍ Query လုပ်နည်းကို လေ့လာမည်။
|
||||
|
||||
### Python အသုံးပြုခြင်း
|
||||
|
||||
ဒီနမူနာအတွက် Langchain ကို အသုံးပြု၍ GitHub Models ကို ချိတ်ဆက်မည်။ `ChatOpenAI` ဟုခေါ်သော Class ကို အသုံးပြုပြီး၊ `api_key`, `base_url` နှင့် `model` ကဲ့သို့သော Fields များကို ပေးရမည်။ Token သည် GitHub Codespaces တွင် အလိုအလျောက် Populate ဖြစ်ပြီး၊ App ကို Local မှာ Run လုပ်ပါက Personal Access Token ကို စီစဉ်ရမည်။
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
# works
|
||||
response = llm.invoke("What's the capital of France?")
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
ဒီကုဒ်မှာ-
|
||||
|
||||
- `ChatOpenAI` ကို ခေါ်ပြီး Client တစ်ခုကို ဖန်တီးသည်။
|
||||
- `llm.invoke` ကို Prompt ဖြင့် Response ဖန်တီးရန် အသုံးပြုသည်။
|
||||
- `print(response.content)` ဖြင့် Response ကို Print လုပ်သည်။
|
||||
|
||||
သင်ရရှိမည့် Response သည် အောက်ပါအတိုင်းဖြစ်နိုင်ပါသည်-
|
||||
|
||||
```text
|
||||
The capital of France is Paris.
|
||||
```
|
||||
|
||||
## Chat Conversation
|
||||
|
||||
အထက်ပါအပိုင်းတွင် Zero Shot Prompting ဟုခေါ်သော Prompt တစ်ခုနှင့် Response တစ်ခုကို အသုံးပြုနည်းကို တွေ့မြင်ခဲ့ပါသည်။
|
||||
|
||||
သို့သော်၊ တစ်ခါတစ်ရံ AI Assistant နှင့် မက်ဆေ့များ အပြန်အလှန် လွှဲပြောင်းရသော Conversation ကို ထိန်းသိမ်းရန် လိုအပ်နိုင်ပါသည်။
|
||||
|
||||
### Python အသုံးပြုခြင်း
|
||||
|
||||
Langchain တွင် Conversation ကို List အဖြစ် သိမ်းဆည်းနိုင်သည်။ `HumanMessage` သည် User မှ ပေးပို့သော Message ကို ကိုယ်စားပြုပြီး၊ `SystemMessage` သည် AI ၏ "ပုဂ္ဂိုလ်ရေး" ကို သတ်မှတ်ရန် Message ဖြစ်သည်။ အောက်ပါနမူနာတွင် AI ကို Captain Picard အဖြစ် သတ်မှတ်ရန်နှင့် Human/User မှ "Tell me about you" ဟု မေးမြန်းရန် Prompt ကို အသုံးပြုထားသည်။
|
||||
|
||||
```python
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
```
|
||||
|
||||
ဒီနမူနာ၏ အပြည့်အစုံကုဒ်မှာ အောက်ပါအတိုင်းဖြစ်သည်-
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
သင်ရရှိမည့် ရလဒ်မှာ အောက်ပါအတိုင်းဖြစ်နိုင်ပါသည်-
|
||||
|
||||
```text
|
||||
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.
|
||||
|
||||
I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.
|
||||
|
||||
I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
|
||||
```
|
||||
|
||||
Conversation ၏ State ကို ထိန်းသိမ်းရန် Chat Response ကို Conversation ထဲသို့ ထည့်နိုင်ပြီး၊ အောက်ပါအတိုင်းလုပ်ဆောင်နိုင်ပါသည်-
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
print("---- Next ----")
|
||||
|
||||
messages.append(response)
|
||||
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))
|
||||
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
```
|
||||
|
||||
အထက်ပါ Conversation မှာ LLM ကို နှစ်ကြိမ် Invoke လုပ်နည်းကို တွေ့မြင်နိုင်ပြီး၊ ပထမဆုံးမှာ Message နှစ်ခုသာပါရှိသော်လည်း၊ ဒုတိယအကြိမ်မှာ Message များပိုမိုထည့်သွင်းထားသည်။
|
||||
|
||||
အမှန်တကယ် Run လုပ်ပါက ဒုတိယ Response သည် အောက်ပါအတိုင်းဖြစ်နိုင်ပါသည်-
|
||||
|
||||
```text
|
||||
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.
|
||||
|
||||
If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
|
||||
```
|
||||
|
||||
ဒါကို "သေချာမဟုတ်တဲ့ အဖြေ" လို့ယူဆလိုက်မယ် ;)
|
||||
|
||||
## Streaming Responses
|
||||
|
||||
TODO
|
||||
|
||||
## Prompt Templates
|
||||
|
||||
TODO
|
||||
|
||||
## Structured Output
|
||||
|
||||
TODO
|
||||
|
||||
## Tool Calling
|
||||
|
||||
Tools သည် LLM ကို အပိုစွမ်းရည်များပေးစွမ်းသော နည်းလမ်းဖြစ်သည်။ Tools များကို ဖန်တီးပြီး၊ Prompt တစ်ခုသည် Tools တစ်ခု၏ ဖော်ပြချက်နှင့် ကိုက်ညီပါက Tools ကို ခေါ်နိုင်သည်။
|
||||
|
||||
### Python အသုံးပြုခြင်း
|
||||
|
||||
Tools များကို အောက်ပါအတိုင်း ထည့်သွင်းနိုင်သည်-
|
||||
|
||||
```python
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
```
|
||||
|
||||
ဒီမှာ `add` ဟုခေါ်သော Tool တစ်ခု၏ ဖော်ပြချက်ကို ဖန်တီးထားသည်။ `TypedDict` ကို အခြေခံပြီး `a` နှင့် `b` ကဲ့သို့သော Members ကို ထည့်သွင်းထားသည်။ Tools ကို Dictionary အဖြစ် ဖန်တီးထားပြီး၊ Tools တစ်ခုကို ရွေးချယ်ပါက ဘာလုပ်ရမည်ကို သတ်မှတ်ထားသည်။
|
||||
|
||||
ဒီ Tool ကို အသုံးပြုနည်းကို အောက်တွင် ဖော်ပြထားသည်-
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
```
|
||||
|
||||
ဒီမှာ `bind_tools` ကို `tools` Array နှင့် ချိတ်ဆက်ထားပြီး၊ LLM `llm_with_tools` သည် Tool ၏ Knowledge ရရှိထားသည်။
|
||||
|
||||
ဒီ LLM ကို အသုံးပြုရန် အောက်ပါကုဒ်ကို ရေးနိုင်သည်-
|
||||
|
||||
```python
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
Tools ပါရှိသော LLM ကို `invoke` ခေါ်ပါက `tool_calls` Property တွင် Tools များကို `name` နှင့် `args` Properties ဖြင့် ဖော်ပြထားသည်။ အပြည့်အစုံကုဒ်မှာ အောက်ပါအတိုင်းဖြစ်သည်-
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
ဒီကုဒ်ကို Run လုပ်ပါက အောက်ပါအတိုင်း Output ရရှိနိုင်ပါသည်-
|
||||
|
||||
```text
|
||||
TOOL CALL: 15
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
ဒီ Output သည် Prompt "What is 3 + 12" ကို `add` Tool ကို ခေါ်ရန်လိုအပ်သည်ဟု LLM မှ ခွဲခြားနိုင်ပြီး၊ Tool ၏ Name, Description နှင့် Member Field Descriptions ကြောင့် သိရှိနိုင်သည်။ အဖြေ 15 ဖြစ်သည်မှာ Dictionary `functions` ကို အသုံးပြု၍ Tool ကို Invoke လုပ်ထားသောကြောင့်ဖြစ်သည်-
|
||||
|
||||
```python
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
```
|
||||
|
||||
### Web API ကို ခေါ်သော Tool တစ်ခု
|
||||
|
||||
နံပါတ်နှစ်ခုကို ပေါင်းစပ်သော Tool သည် Tool Calling ကို ရှင်းလင်းပေးနိုင်သော်လည်း၊ Tools များသည် Web API ကို ခေါ်ဆိုခြင်းကဲ့သို့ ပိုမိုစိတ်ဝင်စားဖွယ်ရာများကို လုပ်ဆောင်နိုင်သည်။ အောက်ပါကုဒ်ဖြင့် လုပ်ဆောင်ပါမည်-
|
||||
|
||||
```python
|
||||
class joke(TypedDict):
|
||||
"""Tell a joke."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
category: Annotated[str, ..., "The joke category"]
|
||||
|
||||
def get_joke(category: str) -> str:
|
||||
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
|
||||
if response.status_code == 200:
|
||||
return response.json().get("value", f"Here's a {category} joke!")
|
||||
return f"Here's a {category} joke!"
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b,
|
||||
"joke": lambda category: get_joke(category)
|
||||
}
|
||||
|
||||
query = "Tell me a joke about animals"
|
||||
|
||||
# the rest of the code is the same
|
||||
```
|
||||
|
||||
ဒီကုဒ်ကို Run လုပ်ပါက အောက်ပါအတိုင်း Response ရရှိနိုင်ပါသည်-
|
||||
|
||||
```text
|
||||
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
အပြည့်အစုံကုဒ်မှာ အောက်ပါအတိုင်းဖြစ်သည်-
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import requests
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
class joke(TypedDict):
|
||||
"""Tell a joke."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
category: Annotated[str, ..., "The joke category"]
|
||||
|
||||
tools = [add, joke]
|
||||
|
||||
def get_joke(category: str) -> str:
|
||||
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
|
||||
if response.status_code == 200:
|
||||
return response.json().get("value", f"Here's a {category} joke!")
|
||||
return f"Here's a {category} joke!"
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b,
|
||||
"joke": lambda category: get_joke(category)
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "Tell me a joke about animals"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
# print("TOOL CALL: ", tool)
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
## Embedding
|
||||
|
||||
အကြောင်းအရာကို Vectorize လုပ်ပြီး၊ Cosine Similarity ဖြင့် နှိုင်းယှဉ်ပါ။
|
||||
|
||||
https://python.langchain.com/docs/how_to/embed_text/
|
||||
|
||||
### Document Loaders
|
||||
|
||||
PDF နှင့် CSV
|
||||
|
||||
## App တစ်ခုတည်ဆောက်ခြင်း
|
||||
|
||||
TODO
|
||||
|
||||
## လုပ်ငန်းတာဝန်
|
||||
|
||||
## အကျဉ်းချုပ်
|
||||
|
||||
---
|
||||
|
||||
**အကြောင်းကြားချက်**:
|
||||
ဤစာရွက်စာတမ်းကို AI ဘာသာပြန်ဝန်ဆောင်မှု [Co-op Translator](https://github.com/Azure/co-op-translator) ကို အသုံးပြု၍ ဘာသာပြန်ထားပါသည်။ ကျွန်ုပ်တို့သည် တိကျမှုအတွက် ကြိုးစားနေသော်လည်း အလိုအလျောက် ဘာသာပြန်မှုများတွင် အမှားများ သို့မဟုတ် မတိကျမှုများ ပါဝင်နိုင်သည်ကို သတိပြုပါ။ မူရင်းဘာသာစကားဖြင့် ရေးသားထားသော စာရွက်စာတမ်းကို အာဏာတရ အရင်းအမြစ်အဖြစ် သတ်မှတ်သင့်ပါသည်။ အရေးကြီးသော အချက်အလက်များအတွက် လူ့ဘာသာပြန်ပညာရှင်များမှ ဘာသာပြန်မှုကို အကြံပြုပါသည်။ ဤဘာသာပြန်မှုကို အသုံးပြုခြင်းမှ ဖြစ်ပေါ်လာသော အလွဲအမှားများ သို့မဟုတ် အနားယူမှုများအတွက် ကျွန်ုပ်တို့သည် တာဝန်မယူပါ။
|
||||
@ -0,0 +1,388 @@
|
||||
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:23:39+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "ne"
|
||||
}
|
||||
-->
|
||||
# एआई फ्रेमवर्क
|
||||
|
||||
त्यहाँ धेरै एआई फ्रेमवर्कहरू छन् जसको प्रयोगले परियोजना निर्माण गर्न लाग्ने समयलाई धेरै छिटो बनाउन सक्छ। यस परियोजनामा, हामी यी फ्रेमवर्कहरूले कुन समस्याहरू समाधान गर्छन् भनेर बुझ्न र आफैंले यस्तै परियोजना निर्माण गर्न ध्यान केन्द्रित गर्नेछौं।
|
||||
|
||||
## किन फ्रेमवर्क?
|
||||
|
||||
एआई प्रयोग गर्दा विभिन्न दृष्टिकोणहरू र ती दृष्टिकोणहरू छनोट गर्ने विभिन्न कारणहरू हुन्छन्। यहाँ केही कारणहरू छन्:
|
||||
|
||||
- **एसडीके छैन**, धेरैजसो एआई मोडेलहरूले तपाईंलाई HTTP अनुरोधहरू जस्तै सिधै एआई मोडेलसँग अन्तरक्रिया गर्न अनुमति दिन्छ। यो दृष्टिकोण काम गर्छ र कहिलेकाहीँ एसडीके विकल्प उपलब्ध नभएमा यो मात्र विकल्प हुन सक्छ।
|
||||
- **एसडीके**। एसडीके प्रयोग गर्नु सामान्यतया सिफारिस गरिएको दृष्टिकोण हो किनभने यसले तपाईंलाई आफ्नो मोडेलसँग अन्तरक्रिया गर्न कम कोड लेख्न अनुमति दिन्छ। यो सामान्यतया एक विशिष्ट मोडेलमा सीमित हुन्छ र यदि विभिन्न मोडेलहरू प्रयोग गर्दै हुनुहुन्छ भने, ती थप मोडेलहरूलाई समर्थन गर्न नयाँ कोड लेख्न आवश्यक हुन सक्छ।
|
||||
- **फ्रेमवर्क**। फ्रेमवर्कले सामान्यतया चीजहरूलाई अर्को स्तरमा लैजान्छ, यदि तपाईंले विभिन्न मोडेलहरू प्रयोग गर्न आवश्यक छ भने, तिनीहरूका लागि एउटै एपीआई हुन्छ, फरक भनेको सामान्यतया प्रारम्भिक सेटअप हो। थप रूपमा, फ्रेमवर्कहरूले उपयोगी अमूर्तता ल्याउँछन्, जस्तै एआई क्षेत्रमा, तिनीहरूले उपकरणहरू, मेमोरी, वर्कफ्लो, एजेन्टहरू र कम कोड लेख्दै अन्य कुराहरूको व्यवस्थापन गर्न सक्छन्। किनभने फ्रेमवर्कहरू सामान्यतया निश्चित विचारधारामा आधारित हुन्छन्, तिनीहरूले तपाईंलाई सहयोग गर्न सक्छन् यदि तपाईंले तिनीहरूको तरिकालाई स्वीकार गर्नुभयो भने, तर यदि तपाईंले फ्रेमवर्कले बनाएको भन्दा फरक केही गर्न खोज्नुभयो भने तिनीहरू असफल हुन सक्छन्। कहिलेकाहीँ फ्रेमवर्कले धेरै सरल बनाउँछ र त्यसैले तपाईंले महत्त्वपूर्ण विषय सिक्न सक्नुहुन्न, जसले पछि प्रदर्शनलाई हानि पुर्याउन सक्छ।
|
||||
|
||||
सामान्यतया, कामको लागि सही उपकरण प्रयोग गर्नुहोस्।
|
||||
|
||||
## परिचय
|
||||
|
||||
यस पाठमा, हामी सिक्नेछौं:
|
||||
|
||||
- सामान्य एआई फ्रेमवर्क प्रयोग गर्न।
|
||||
- च्याट संवाद, उपकरण प्रयोग, मेमोरी र सन्दर्भ जस्ता सामान्य समस्याहरू समाधान गर्न।
|
||||
- यसलाई एआई एप्स निर्माण गर्न उपयोग गर्न।
|
||||
|
||||
## पहिलो प्रम्प्ट
|
||||
|
||||
हाम्रो पहिलो एप उदाहरणमा, हामी एआई मोडेलसँग कसरी जडान गर्ने र प्रम्प्ट प्रयोग गरेर यसलाई कसरी सोधपुछ गर्ने भनेर सिक्नेछौं।
|
||||
|
||||
### पाइथन प्रयोग गर्दै
|
||||
|
||||
यस उदाहरणको लागि, हामी Langchain प्रयोग गर्नेछौं GitHub Models सँग जडान गर्न। हामी `ChatOpenAI` नामक कक्षाको प्रयोग गर्न सक्छौं र यसलाई `api_key`, `base_url` र `model` जस्ता क्षेत्रहरू दिन सक्छौं। टोकन स्वचालित रूपमा GitHub Codespaces भित्र जनाइएको हुन्छ र यदि तपाईंले एपलाई स्थानीय रूपमा चलाउँदै हुनुहुन्छ भने, यो काम गर्न व्यक्तिगत पहुँच टोकन सेटअप गर्न आवश्यक छ।
|
||||
|
||||
```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# works
response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to generate a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversation

In the previous section, you saw how we used what's normally known as zero-shot prompting: a single prompt followed by a response.

Often, however, you'll find yourself in a situation where you need to maintain a conversation of several messages being exchanged between yourself and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. `HumanMessage` represents a message from a user, and `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you see how we instruct the AI to assume the personality of Captain Picard, with the human/user asking "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To maintain the state of the conversation, you can append the response from a chat, so that the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

What we can see from the conversation above is how we invoke the LLM twice: the first time the conversation consists of just two messages, but the second time with more messages appended to the conversation.

In fact, if you run this, you'll see a second response similar to:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I take that as a "maybe" ;)

## Streaming responses

TODO

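As a hedged sketch while this section is being written: Langchain chat models expose a `stream` method that yields response chunks as they arrive, instead of `invoke` returning one complete message. The consumption pattern looks like the following, shown here with a stub generator standing in for the model so it runs offline:

```python
from typing import Iterator

def fake_stream(text: str) -> Iterator[str]:
    """Stub standing in for llm.stream(prompt), which yields chunks as they arrive."""
    for word in text.split(" "):
        yield word + " "

# With a real model this loop would be:
#   for chunk in llm.stream(prompt): print(chunk.content, end="", flush=True)
collected = []
for chunk in fake_stream("The capital of France is Paris."):
    print(chunk, end="", flush=True)  # render tokens as they come in
    collected.append(chunk)
print()
answer = "".join(collected).strip()
```

Streaming doesn't change what the model says; it only lets your app show partial output while the rest is still being generated.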
## Prompt templates

TODO

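As a placeholder sketch: the core idea of a prompt template is a reusable prompt with named slots filled in at call time (Langchain offers `ChatPromptTemplate` for this). With plain Python string formatting the idea looks like:

```python
# A minimal prompt-template sketch using str.format; frameworks add message
# roles, validation and composition on top of this basic idea.
TEMPLATE = "You are a {persona}. Answer the following question briefly: {question}"

def render_prompt(persona: str, question: str) -> str:
    return TEMPLATE.format(persona=persona, question=question)

prompt = render_prompt("starship captain", "What is warp speed?")
print(prompt)
```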
## Structured output

TODO

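As a placeholder sketch: structured output means asking the model to reply in a machine-readable shape rather than free text (Langchain exposes this via `with_structured_output`). The receiving side then parses and validates the JSON, which can be sketched with the standard library:

```python
import json

# Simulated model reply; with a real model you would request JSON in the
# prompt or use the framework's structured-output support.
raw_reply = '{"city": "Paris", "country": "France"}'

def parse_city_answer(raw: str) -> dict:
    """Parse and minimally validate a JSON answer against the expected keys."""
    data = json.loads(raw)
    for key in ("city", "country"):
        if key not in data:
            raise ValueError(f"missing key: {key}")
    return data

answer = parse_city_answer(raw_reply)
print(answer["city"])  # -> Paris
```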
## Tool calling

Tools are how we give the LLM extra skills. The idea is to tell the LLM about the functions it has available, and if a prompt matches the description of one of these tools, then we call it.

### Using Python

Let's add some tools like so:

```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted to a schema that the LLM can understand. The `functions` dictionary makes sure we know what to do if a specific tool is identified.

Let's see how we call the LLM with this tool next:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and thereby the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can type the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now that we call `invoke` on this new LLM, which has tools, the property `tool_calls` may be populated. If so, each identified tool has a `name` and an `args` property that identify which tool should be called and with what arguments. The full code looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Running this code, you should see output similar to:

```text
TOOL CALL:  15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as meaning that the `add` tool should be called, and it knew that thanks to the tool's name, description and member field descriptions. That the answer is 15 is because of our code invoking it via the `functions` dictionary:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```

### A more interesting tool calling a web API

Tools that add two numbers are interesting in that they illustrate how tool calling works, but usually tools tend to do something more interesting, like, for example, calling a web API. Let's do just that with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now, if you run this code, you'll get a response similar to:

```text
TOOL CALL:  Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the full code:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embedding

Vectorize content and compare it via cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/

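The comparison step mentioned above can be shown without calling an embeddings model: given two vectors (in practice produced by an embeddings model, as in the Langchain how-to linked above), cosine similarity measures how aligned they are. A minimal sketch with toy vectors standing in for real embeddings:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embeddings of three pieces of text.
doc_a = [0.1, 0.3, 0.5]
doc_b = [0.1, 0.3, 0.4]
doc_c = [0.9, -0.2, 0.0]

print(cosine_similarity(doc_a, doc_b))  # close to 1.0: similar content
print(cosine_similarity(doc_a, doc_c))  # much lower: dissimilar content
```

In a retrieval setup you embed all your documents once, embed the query at question time, and rank documents by this similarity score.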
### Document loaders

PDF and CSV

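A document loader turns a file such as a PDF or CSV into text documents ready for embedding (Langchain ships loaders such as `CSVLoader` and PDF loaders for this). The underlying idea for CSV can be sketched with the standard library, producing one "document" per row:

```python
import csv
import io

# In-memory CSV standing in for a file on disk.
raw_csv = "title,body\nIntro,Welcome aboard\nRules,No red shirts on away missions"

def load_csv_documents(text: str) -> list:
    """Turn each CSV row into one plain-text 'document', like a document loader does."""
    reader = csv.DictReader(io.StringIO(text))
    return [" - ".join(f"{k}: {v}" for k, v in row.items()) for row in reader]

for doc in load_csv_documents(raw_csv):
    print(doc)
```

Each resulting string is what you would then embed and store for similarity search.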
## Building an app

TODO

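While this section is a TODO, the pieces covered so far already suggest the skeleton of an app: a loop that appends each user message and each model reply to the conversation list before invoking the model again. A sketch with a stub model in place of `llm.invoke`, so it runs offline:

```python
def stub_llm(messages: list) -> dict:
    """Stand-in for llm.invoke(messages); echoes how much context it received."""
    return {"role": "assistant", "content": f"(reply based on {len(messages)} messages)"}

def chat_turn(history: list, user_text: str) -> str:
    """One app turn: record the user message, invoke the model, record its reply."""
    history.append({"role": "user", "content": user_text})
    reply = stub_llm(history)
    history.append(reply)
    return reply["content"]

history = [{"role": "system", "content": "You are Captain Picard of the Starship Enterprise"}]
print(chat_turn(history, "Tell me about you"))      # -> (reply based on 2 messages)
print(chat_turn(history, "Can I join your crew?"))  # -> (reply based on 4 messages)
```

Swapping the stub for a real Langchain client gives you a basic chat app; memory trimming and tool dispatch can be layered onto the same loop.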
## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
@ -0,0 +1,388 @@
|
||||
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:28:11+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "nl"
|
||||
}
|
||||
-->
|
||||
# AI Framework
|
||||
|
||||
Er zijn veel AI-frameworks beschikbaar die, wanneer gebruikt, de tijd die nodig is om een project te bouwen aanzienlijk kunnen verkorten. In dit project richten we ons op het begrijpen van de problemen die deze frameworks aanpakken en bouwen we zelf zo'n project.
|
||||
|
||||
## Waarom een framework
|
||||
|
||||
Bij het gebruik van AI zijn er verschillende benaderingen en redenen om voor een bepaalde aanpak te kiezen. Hier zijn enkele voorbeelden:
|
||||
|
||||
- **Geen SDK**. De meeste AI-modellen stellen je in staat om rechtstreeks met het model te communiceren, bijvoorbeeld via HTTP-verzoeken. Deze aanpak werkt en kan soms je enige optie zijn als er geen SDK beschikbaar is.
|
||||
- **SDK**. Het gebruik van een SDK wordt meestal aanbevolen, omdat je minder code hoeft te schrijven om met je model te communiceren. Het is vaak beperkt tot een specifiek model, en als je verschillende modellen gebruikt, moet je mogelijk nieuwe code schrijven om die extra modellen te ondersteunen.
|
||||
- **Een framework**. Een framework tilt dingen meestal naar een hoger niveau. Als je verschillende modellen moet gebruiken, is er één API voor allemaal; wat verschilt, is meestal de initiële configuratie. Bovendien bieden frameworks handige abstracties, zoals tools, geheugen, workflows, agents en meer, terwijl je minder code hoeft te schrijven. Omdat frameworks vaak een bepaalde aanpak volgen, kunnen ze erg nuttig zijn als je hun werkwijze accepteert, maar ze kunnen tekortschieten als je iets op maat wilt doen dat niet binnen het framework past. Soms kan een framework ook te veel vereenvoudigen, waardoor je een belangrijk onderwerp niet leert dat later bijvoorbeeld de prestaties kan schaden.
|
||||
|
||||
Over het algemeen geldt: gebruik het juiste gereedschap voor de klus.
|
||||
|
||||
## Introductie
|
||||
|
||||
In deze les leren we:
|
||||
|
||||
- Een veelgebruikt AI-framework te gebruiken.
|
||||
- Veelvoorkomende problemen aan te pakken, zoals chatgesprekken, het gebruik van tools, geheugen en context.
|
||||
- Dit te benutten om AI-apps te bouwen.
|
||||
|
||||
## Eerste prompt
|
||||
|
||||
In ons eerste appvoorbeeld leren we hoe we verbinding kunnen maken met een AI-model en het kunnen bevragen met een prompt.
|
||||
|
||||
### Met Python
|
||||
|
||||
Voor dit voorbeeld gebruiken we Langchain om verbinding te maken met GitHub Models. We kunnen een klasse genaamd `ChatOpenAI` gebruiken en deze de velden `api_key`, `base_url` en `model` geven. De token wordt automatisch ingevuld binnen GitHub Codespaces, en als je de app lokaal uitvoert, moet je een persoonlijke toegangstoken instellen om dit te laten werken.
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
# works
|
||||
response = llm.invoke("What's the capital of France?")
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
In deze code:
|
||||
|
||||
- Roepen we `ChatOpenAI` aan om een client te maken.
|
||||
- Gebruiken we `llm.invoke` met een prompt om een reactie te genereren.
|
||||
- Printen we de reactie met `print(response.content)`.
|
||||
|
||||
Je zou een reactie moeten zien die lijkt op:
|
||||
|
||||
```text
|
||||
The capital of France is Paris.
|
||||
```
|
||||
|
||||
## Chatgesprek
|
||||
|
||||
In de vorige sectie zag je hoe we gebruik maakten van wat normaal bekend staat als zero shot prompting: een enkele prompt gevolgd door een reactie.
|
||||
|
||||
Vaak bevind je je echter in een situatie waarin je een gesprek moet onderhouden met meerdere berichten die worden uitgewisseld tussen jou en de AI-assistent.
|
||||
|
||||
### Met Python
|
||||
|
||||
In Langchain kunnen we het gesprek opslaan in een lijst. De `HumanMessage` vertegenwoordigt een bericht van een gebruiker, en `SystemMessage` is een bericht bedoeld om de "persoonlijkheid" van de AI in te stellen. In het onderstaande voorbeeld zie je hoe we de AI instrueren om de persoonlijkheid van Captain Picard aan te nemen en de mens/gebruiker te vragen "Vertel me over jezelf" als prompt.
|
||||
|
||||
```python
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
```
|
||||
|
||||
De volledige code voor dit voorbeeld ziet er als volgt uit:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
Je zou een resultaat moeten zien dat lijkt op:
|
||||
|
||||
```text
|
||||
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.
|
||||
|
||||
I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.
|
||||
|
||||
I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
|
||||
```
|
||||
|
||||
Om de staat van het gesprek te behouden, kun je de reactie van een chat toevoegen, zodat het gesprek wordt onthouden. Hier is hoe je dat doet:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
print("---- Next ----")
|
||||
|
||||
messages.append(response)
|
||||
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))
|
||||
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
```
|
||||
|
||||
Wat we kunnen zien uit het bovenstaande gesprek is hoe we de LLM twee keer aanroepen: eerst met het gesprek dat slechts uit twee berichten bestaat, maar daarna een tweede keer met meer berichten toegevoegd aan het gesprek.
|
||||
|
||||
Als je dit uitvoert, zie je de tweede reactie die lijkt op:
|
||||
|
||||
```text
|
||||
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.
|
||||
|
||||
If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
|
||||
```
|
||||
|
||||
Ik neem dat als een misschien ;)
|
||||
|
||||
## Streaming reacties
|
||||
|
||||
TODO
|
||||
|
||||
## Prompt templates
|
||||
|
||||
TODO
|
||||
|
||||
## Gestructureerde output
|
||||
|
||||
TODO
|
||||
|
||||
## Toolgebruik
|
||||
|
||||
Tools zijn hoe we de LLM extra vaardigheden geven. Het idee is om de LLM te vertellen over functies die het heeft, en als een prompt overeenkomt met de beschrijving van een van deze tools, dan roepen we ze aan.
|
||||
|
||||
### Met Python
|
||||
|
||||
Laten we enkele tools toevoegen zoals hieronder:
|
||||
|
||||
```python
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
```
|
||||
|
||||
Wat we hier doen, is een beschrijving maken van een tool genaamd `add`. Door te erven van `TypedDict` en leden zoals `a` en `b` van het type `Annotated` toe te voegen, kan dit worden omgezet in een schema dat de LLM kan begrijpen. Het maken van functies is een woordenboek dat ervoor zorgt dat we weten wat te doen als een specifieke tool wordt geïdentificeerd.
|
||||
|
||||
Laten we zien hoe we de LLM met deze tool aanroepen:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
```
|
||||
|
||||
Hier roepen we `bind_tools` aan met onze `tools` array, waardoor de LLM `llm_with_tools` nu kennis heeft van deze tool.
|
||||
|
||||
Om deze nieuwe LLM te gebruiken, kunnen we de volgende code typen:
|
||||
|
||||
```python
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
Nu we `invoke` aanroepen op deze nieuwe LLM, die tools heeft, wordt mogelijk de eigenschap `tool_calls` gevuld. Als dat zo is, heeft elke geïdentificeerde tool een `name` en `args` eigenschap die aangeeft welke tool moet worden aangeroepen en met welke argumenten. De volledige code ziet er als volgt uit:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
Als je deze code uitvoert, zou je een output moeten zien die lijkt op:
|
||||
|
||||
```text
|
||||
TOOL CALL: 15
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
Wat deze output betekent, is dat de LLM de prompt "Wat is 3 + 12" analyseerde als zijnde dat de `add` tool moest worden aangeroepen. Het wist dat dankzij de naam, beschrijving en beschrijvingen van de ledenvelden. Dat het antwoord 15 is, komt door onze code die het woordenboek `functions` gebruikt om het aan te roepen:
|
||||
|
||||
```python
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
```
|
||||
|
||||
### Een interessantere tool die een web-API aanroept
|
||||
|
||||
Tools die twee getallen optellen zijn interessant omdat ze illustreren hoe toolgebruik werkt, maar meestal doen tools iets interessanters, zoals bijvoorbeeld een web-API aanroepen. Laten we dat doen met deze code:
|
||||
|
||||
```python
|
||||
class joke(TypedDict):
|
||||
"""Tell a joke."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
category: Annotated[str, ..., "The joke category"]
|
||||
|
||||
def get_joke(category: str) -> str:
|
||||
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
|
||||
if response.status_code == 200:
|
||||
return response.json().get("value", f"Here's a {category} joke!")
|
||||
return f"Here's a {category} joke!"
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b,
|
||||
"joke": lambda category: get_joke(category)
|
||||
}
|
||||
|
||||
query = "Tell me a joke about animals"
|
||||
|
||||
# the rest of the code is the same
|
||||
```
|
||||
|
||||
Als je deze code nu uitvoert, krijg je een reactie die lijkt op:
|
||||
|
||||
```text
|
||||
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
Hier is de volledige code:
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import requests
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
class joke(TypedDict):
|
||||
"""Tell a joke."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
category: Annotated[str, ..., "The joke category"]
|
||||
|
||||
tools = [add, joke]
|
||||
|
||||
def get_joke(category: str) -> str:
|
||||
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
|
||||
if response.status_code == 200:
|
||||
return response.json().get("value", f"Here's a {category} joke!")
|
||||
return f"Here's a {category} joke!"
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b,
|
||||
"joke": lambda category: get_joke(category)
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "Tell me a joke about animals"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
# print("TOOL CALL: ", tool)
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
## Embedding
|
||||
|
||||
Inhoud vectoriseren, vergelijken via cosinusgelijkenis
|
||||
|
||||
https://python.langchain.com/docs/how_to/embed_text/
|
||||
|
||||
### Document loaders
|
||||
|
||||
PDF en CSV
|
||||
|
||||
## Een app bouwen
|
||||
|
||||
TODO
|
||||
|
||||
## Opdracht
|
||||
|
||||
## Samenvatting
|
||||
|
||||
---
|
||||
|
||||
**Disclaimer**:
|
||||
Dit document is vertaald met behulp van de AI-vertalingsservice [Co-op Translator](https://github.com/Azure/co-op-translator). Hoewel we streven naar nauwkeurigheid, dient u zich ervan bewust te zijn dat geautomatiseerde vertalingen fouten of onnauwkeurigheden kunnen bevatten. Het originele document in de oorspronkelijke taal moet worden beschouwd als de gezaghebbende bron. Voor cruciale informatie wordt professionele menselijke vertaling aanbevolen. Wij zijn niet aansprakelijk voor misverstanden of verkeerde interpretaties die voortvloeien uit het gebruik van deze vertaling.
|
||||
@ -0,0 +1,388 @@
|
||||
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:27:36+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "no"
|
||||
}
|
||||
-->

# AI frameworks

There are many AI frameworks that can drastically reduce the time it takes to build a project. In this project, we'll focus on understanding which problems these frameworks solve, and on building such a project ourselves.

## Why a framework

When it comes to using AI, there are different approaches and different reasons for choosing them. Here are a few:

- **No SDK.** Most AI models let you interact with the model directly, for example via HTTP requests. That approach works and may sometimes be your only option if an SDK is missing.
- **SDK.** Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's usually limited to a specific model, and if you use different models, you may need to write new code to support those additional models.
- **A framework.** A framework takes things to another level in the sense that if you need to use different models, there's one API for all of them; what usually varies is the initial setup. In addition, frameworks provide useful abstractions that, in the AI space, handle tools, memory, workflows, agents and more, while you write less code. Because frameworks are usually opinionated, they can be very helpful if you accept the way they do things, but they can fall short if you try to do something custom that the framework wasn't built for. Sometimes a framework can also oversimplify, and you may therefore miss an important topic that later hurts performance.

In general, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems such as conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated within GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to generate a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Conversation

In the previous section, you saw how we used what's usually called zero-shot prompting: a single prompt followed by a response.

Often, however, you'll find yourself in a situation where you need to maintain a conversation of several messages being exchanged between yourself and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. A `HumanMessage` represents a message from a user, and a `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you can see how we instruct the AI to assume the personality of Captain Picard, and the human/user asks "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To keep the conversational state, you can append the response from a chat so that the conversation is remembered. Here's how:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)

print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)

print(response.content)
```

What we see from the conversation above is how we call the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages appended to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)

## Streaming responses

TODO
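
Instead of waiting for the full answer, Langchain chat models can also yield the response piece by piece via their `stream` method, where each chunk carries part of the reply in its `content`. Below is a minimal sketch of the consumption pattern; a stand-in generator is used here so it runs without a model connection, but with a real model you would iterate `llm.stream("What's the capital of France?")` the same way:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    content: str  # each streamed chunk carries a piece of the reply

# stand-in for llm.stream("What's the capital of France?")
def fake_stream():
    for piece in ["The capital ", "of France ", "is Paris."]:
        yield Chunk(piece)

answer = ""
for chunk in fake_stream():
    print(chunk.content, end="", flush=True)  # show text as soon as it arrives
    answer += chunk.content                   # accumulate the full reply

print()
print("Full answer:", answer)
```

The benefit is perceived latency: the user starts reading while the model is still generating.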

## Prompt templates

TODO
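
The idea behind prompt templates is to separate the fixed instruction text from the variables that change per request, so the same prompt can be reused with different inputs. As a minimal sketch of the concept in plain Python (Langchain provides this via its `ChatPromptTemplate` class, which additionally produces message objects ready for `invoke`):

```python
# a reusable template with named placeholders
template = "You are a helpful assistant that translates from {source} to {target}. Translate: {text}"

def render(template: str, **variables) -> str:
    # fill the placeholders with concrete values for this request
    return template.format(**variables)

prompt = render(template, source="English", target="French", text="Hello, world!")
print(prompt)
```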

## Structured output

TODO
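
Structured output means asking the model to answer in a machine-readable shape, typically JSON matching a schema, instead of free text. Langchain offers `with_structured_output` for this; as a minimal stand-in, here's the parsing side of the idea with a hard-coded model reply (the reply string is hypothetical, standing in for what the model would return under a "respond only with JSON" instruction):

```python
import json

# stand-in for a model reply produced under an instruction like:
# "Respond only with JSON of the shape {name: str, minutes: int}"
model_reply = '{"name": "Pancakes", "minutes": 20}'

recipe = json.loads(model_reply)  # parse the structured reply into a dict
print(recipe["name"], recipe["minutes"])
```

Having a parsed object rather than prose makes the response easy to feed into the rest of your program.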

## Tool calling

Tools are how we give the LLM extra skills. The idea is to tell the LLM about the functions it has access to, and if a prompt matches the description of one of these tools, we call it.

### Using Python

Let's add some tools like so:

```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted to a schema the LLM can understand. The `functions` dictionary ensures we know what to do if a specific tool is identified.

Let's see how we call the LLM with this tool next:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and thereby the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can write the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now that we call `invoke` on this new LLM, which has tools, the `tool_calls` property may be populated. If it is, each identified tool has a `name` and an `args` property that identify which tool should be called and with what arguments. The full code looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

When you run this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM parsed the prompt "What is 3 + 12" as a call to the tool `add`, and it knew that thanks to the tool's name, description and member field descriptions. That the answer is 15 is down to our code using the `functions` dictionary to call it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
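
To make that dispatch step concrete, here is the same lookup run in isolation against a stand-in `tool_calls` entry shaped like the ones Langchain returns (a dict with `name` and `args` keys); the `**` unpacks the argument dict into keyword arguments:

```python
functions = {
    "add": lambda a, b: a + b
}

# stand-in for one entry of res.tool_calls
tool = {"name": "add", "args": {"a": 3, "b": 12}}

# look up the implementation by name, then unpack the args as keyword arguments
result = functions[tool["name"]](**tool["args"])
print("TOOL CALL: ", result)
```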

### A more interesting tool that calls a web API

A tool that adds two numbers is interesting in that it illustrates how tool calling works, but usually tools do something more interesting, such as calling a web API. Let's do just that with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now, if you run this code, you'll get a response saying something like:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in its entirety:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embeddings

Vectorize content and compare via cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/
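
An embedding model (for example via Langchain's embedding classes, see the link above) turns a piece of text into a vector of floats; two texts are then compared by the cosine of the angle between their vectors, where values near 1 mean similar meaning. Here is a minimal sketch of the comparison step, with tiny hand-made vectors standing in for real embeddings (which typically have hundreds of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # cosine similarity = dot product divided by the product of the vector lengths
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# stand-ins for embedding vectors of three texts
cat = [1.0, 0.9, 0.0]
kitten = [0.9, 1.0, 0.1]
car = [0.0, 0.1, 1.0]

print(cosine_similarity(cat, kitten))  # close to 1: similar meaning
print(cosine_similarity(cat, car))     # close to 0: unrelated
```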

### Document loaders

PDF and CSV.
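
Document loaders read files such as PDFs or CSVs into text documents that can then be embedded. Langchain ships ready-made loaders for these formats; as a minimal stand-in, here's the core idea for CSV using only the standard library (one "document" per row, with the fields flattened into a text snippet):

```python
import csv
import io

# stand-in for a CSV file on disk
csv_data = "name,role\nPicard,Captain\nData,Lieutenant Commander\n"

documents = []
for row in csv.DictReader(io.StringIO(csv_data)):
    # flatten each row's fields into one text snippet per document
    documents.append(", ".join(f"{key}: {value}" for key, value in row.items()))

print(documents)
```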
## Building an app

TODO

## Assignment

## Summary
@ -0,0 +1,388 @@
|
||||
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:25:32+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "pl"
|
||||
}
|
||||
-->
|
||||
# Framework AI

There are many AI frameworks out there that, when used, can drastically speed up the time it takes to build a project. In this project we'll focus on understanding what problems these frameworks address, and then build such a project ourselves.

## Why a framework

When it comes to using AI, there are different approaches and different reasons for choosing them. Here are some:

- **No SDK**: Most AI models let you interact with the model directly, for example via HTTP requests. That approach works and may sometimes be your only option if an SDK is missing.
- **SDK**: Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's usually limited to a specific model, though, so if you use different models you may need to write new code to support those additional models.
- **A framework**: A framework usually takes things to another level in the sense that if you need to use different models, there's one API for all of them; what differs is usually the initial setup. Additionally, frameworks bring useful abstractions such as tools, memory, workflows, agents and more, while letting you write less code. Because frameworks are usually opinionated, they can be really helpful if you buy into their way of doing things, but they may fall short if you try to do something bespoke that the framework wasn't built for. Sometimes a framework can also oversimplify, so you may miss learning an important topic that later hurts performance, for example.

Generally, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.
## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated within GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.
```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# ask the model a question with a single prompt
response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversation

In the previous section, you saw how we used what's typically known as zero-shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages being exchanged between you and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. A `HumanMessage` represents a message from a user, and a `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you see how we instruct the AI to assume the personality of Captain Picard, while the human/user asks "Tell me about you" as the prompt.
```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# send the whole conversation to the model
response = llm.invoke(messages)
print(response.content)
```

You should see output similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To keep the state of the conversation, you can append the response from the chat so the conversation is remembered. Here's how to do that:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

# remember the assistant's reply, then add a follow-up question
messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

What we can see from the conversation above is how we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a "maybe" ;)

## Streaming responses

TODO

## Prompt templates

TODO

## Structured output

TODO

## Tool calling

Tools are how we give the LLM extra skills. The idea is to tell the LLM about the functions it has access to; if a prompt matches the description of one of these tools, we call it.

### Using Python

Let's add some tools as follows:
```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted to a schema that the LLM can understand. `functions` is a dictionary that ensures we know what to do if a specific tool is identified.

Let's see how we call the LLM with this tool next:
```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, giving the LLM `llm_with_tools` knowledge of this tool.

To use this new LLM, we can type the following code:
```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now when we call `invoke` on this new LLM, which has tools, the `tool_calls` property may be populated. If so, each identified tool has `name` and `args` properties identifying which tool should be called and with what arguments. The full code looks like this:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Running this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as meaning that the `add` tool should be called, and it knew that thanks to the tool's name, description and member field descriptions. That the answer is 15 is down to our code using the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```

### A more interesting tool that calls a web API

Tools that add two numbers are interesting in that they illustrate how tool calling works, but usually tools tend to do something more interesting, like, for example, calling a web API. Let's do just that with this code:
```python
import requests

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now if you run this code, you'll get a response saying something like:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in its entirety:
```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embedding

Vectorize content and compare it via cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/
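
The idea can be sketched in plain Python: an embeddings model (see the link above) turns text into vectors, and two vectors point in a similar direction when their cosine similarity is close to 1. A minimal, model-free illustration with hand-picked vectors:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: ~1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# parallel vectors score ~1.0; orthogonal vectors score 0.0
print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```

In a real app you would call an embeddings model to produce the vectors and then rank documents by their similarity to the query vector.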
### Document loaders

PDF and CSV.
## Building an app

TODO
## Assignment

## Summary
---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not responsible for any misunderstandings or misinterpretations arising from the use of this translation.
|
||||
@ -0,0 +1,388 @@
|
||||
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:24:38+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "pt"
|
||||
}
|
||||
-->
|
||||
# Framework de IA
|
||||
|
||||
Existem muitos frameworks de IA disponíveis que, quando utilizados, podem acelerar significativamente o tempo necessário para construir um projeto. Neste projeto, vamos focar em compreender os problemas que esses frameworks resolvem e construir um projeto desse tipo por conta própria.
|
||||
|
||||
## Porquê um framework
|
||||
|
||||
Quando se trata de usar IA, existem diferentes abordagens e razões para escolher essas abordagens. Aqui estão algumas:
|
||||
|
||||
- **Sem SDK**: A maioria dos modelos de IA permite interagir diretamente com o modelo via, por exemplo, pedidos HTTP. Essa abordagem funciona e pode, por vezes, ser a única opção se não houver um SDK disponível.
|
||||
- **SDK**: Usar um SDK é geralmente a abordagem recomendada, pois permite escrever menos código para interagir com o modelo. Normalmente, é limitado a um modelo específico e, se usar modelos diferentes, pode ser necessário escrever novo código para suportar esses modelos adicionais.
|
||||
- **Um framework**: Um framework geralmente eleva as coisas a outro nível, no sentido de que, se precisar usar modelos diferentes, há uma API única para todos eles, sendo a configuração inicial o que geralmente varia. Além disso, frameworks trazem abstrações úteis, como no espaço de IA, onde podem lidar com ferramentas, memória, fluxos de trabalho, agentes e mais, enquanto se escreve menos código. Como os frameworks geralmente são opinativos, podem ser realmente úteis se aceitar a forma como eles funcionam, mas podem ser limitantes se tentar fazer algo personalizado que o framework não foi projetado para suportar. Às vezes, um framework pode também simplificar demais, o que pode levar a não aprender um tópico importante que, mais tarde, pode prejudicar o desempenho, por exemplo.
|
||||
|
||||
De forma geral, use a ferramenta certa para o trabalho.
|
||||
|
||||
## Introdução
|
||||
|
||||
Nesta lição, vamos aprender a:
|
||||
|
||||
- Usar um framework de IA comum.
|
||||
- Resolver problemas comuns como conversas de chat, uso de ferramentas, memória e contexto.
|
||||
- Aproveitar isso para construir aplicações de IA.
|
||||
|
||||
## Primeiro prompt
|
||||
|
||||
No nosso primeiro exemplo de aplicação, vamos aprender como conectar a um modelo de IA e consultá-lo usando um prompt.
|
||||
|
||||
### Usando Python
|
||||
|
||||
Para este exemplo, vamos usar Langchain para conectar aos Modelos do GitHub. Podemos usar uma classe chamada `ChatOpenAI` e fornecer os campos `api_key`, `base_url` e `model`. O token é algo que é automaticamente preenchido no GitHub Codespaces e, se estiver a executar a aplicação localmente, precisará configurar um token de acesso pessoal para que funcione.
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
# works
|
||||
response = llm.invoke("What's the capital of France?")
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
Neste código, nós:
|
||||
|
||||
- Chamamos `ChatOpenAI` para criar um cliente.
|
||||
- Usamos `llm.invoke` com um prompt para criar uma resposta.
|
||||
- Imprimimos a resposta com `print(response.content)`.
|
||||
|
||||
Deverá ver uma resposta semelhante a:
|
||||
|
||||
```text
|
||||
The capital of France is Paris.
|
||||
```
|
||||
|
||||
## Conversa de chat
|
||||
|
||||
Na secção anterior, viu como usamos o que normalmente é conhecido como zero shot prompting, um único prompt seguido de uma resposta.
|
||||
|
||||
No entanto, muitas vezes encontra-se numa situação em que precisa de manter uma conversa com várias mensagens trocadas entre si e o assistente de IA.
|
||||
|
||||
### Usando Python
|
||||
|
||||
No Langchain, podemos armazenar a conversa numa lista. A `HumanMessage` representa uma mensagem de um utilizador, e a `SystemMessage` é uma mensagem destinada a definir a "personalidade" da IA. No exemplo abaixo, vê como instruímos a IA a assumir a personalidade do Capitão Picard e para o humano/utilizador perguntar "Fala-me sobre ti" como o prompt.
|
||||
|
||||
```python
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
```
|
||||
|
||||
O código completo para este exemplo é o seguinte:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
Deverá ver um resultado semelhante a:
|
||||
|
||||
```text
|
||||
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.
|
||||
|
||||
I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.
|
||||
|
||||
I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
|
||||
```
|
||||
|
||||
Para manter o estado da conversa, pode adicionar a resposta de um chat, para que a conversa seja lembrada. Aqui está como fazer isso:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
print("---- Next ----")
|
||||
|
||||
messages.append(response)
|
||||
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))
|
||||
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
```
|
||||
|
||||
O que podemos ver na conversa acima é como invocamos o LLM duas vezes, primeiro com a conversa consistindo apenas de duas mensagens, mas depois uma segunda vez com mais mensagens adicionadas à conversa.
|
||||
|
||||
De facto, se executar isto, verá que a segunda resposta será algo como:
|
||||
|
||||
```text
|
||||
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.
|
||||
|
||||
If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
|
||||
```
|
||||
|
||||
Vou interpretar isso como um "talvez" ;)
|
||||
|
||||
## Respostas em streaming
|
||||
|
||||
TODO
|
||||
|
||||
## Modelos de prompts
|
||||
|
||||
TODO
|
||||
|
||||
## Saída estruturada
|
||||
|
||||
TODO
|
||||
|
||||
## Chamada de ferramentas
|
||||
|
||||
As ferramentas são como damos habilidades extras ao LLM. A ideia é informar o LLM sobre funções que ele possui e, se for feito um prompt que corresponda à descrição de uma dessas ferramentas, então chamamo-las.
|
||||
|
||||
### Usando Python
|
||||
|
||||
Vamos adicionar algumas ferramentas como segue:
|
||||
|
||||
```python
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
```
|
||||
|
||||
O que estamos a fazer aqui é criar uma descrição de uma ferramenta chamada `add`. Ao herdar de `TypedDict` e adicionar membros como `a` e `b` do tipo `Annotated`, isto pode ser convertido num esquema que o LLM pode entender. A criação de funções é um dicionário que garante que sabemos o que fazer se uma ferramenta específica for identificada.
|
||||
|
||||
Vamos ver como chamamos o LLM com esta ferramenta a seguir:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
```
|
||||
|
||||
Aqui chamamos `bind_tools` com o nosso array `tools` e, assim, o LLM `llm_with_tools` agora tem conhecimento desta ferramenta.
|
||||
|
||||
Para usar este novo LLM, podemos escrever o seguinte código:
|
||||
|
||||
```python
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
Agora que chamamos `invoke` neste novo LLM, que tem ferramentas, podemos ter a propriedade `tool_calls` preenchida. Se assim for, quaisquer ferramentas identificadas têm uma propriedade `name` e `args` que identifica qual ferramenta deve ser chamada e com quais argumentos. O código completo é o seguinte:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
Ao executar este código, deverá ver uma saída semelhante a:
|
||||
|
||||
```text
|
||||
TOOL CALL: 15
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
O que esta saída significa é que o LLM analisou o prompt "Quanto é 3 + 12" como significando que a ferramenta `add` deve ser chamada e soube disso graças ao seu nome, descrição e descrições dos campos dos membros. Que a resposta é 15 deve-se ao nosso código usar o dicionário `functions` para invocá-lo:
|
||||
|
||||
```python
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
```
|
||||
|
||||
### Uma ferramenta mais interessante que chama uma API web
|
||||
|
||||
Ferramentas que somam dois números são interessantes, pois ilustram como funciona a chamada de ferramentas, mas geralmente as ferramentas tendem a fazer algo mais interessante, como, por exemplo, chamar uma API web. Vamos fazer isso com este código:
|
||||
|
||||
```python
|
||||
class joke(TypedDict):
|
||||
"""Tell a joke."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
category: Annotated[str, ..., "The joke category"]
|
||||
|
||||
def get_joke(category: str) -> str:
|
||||
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
|
||||
if response.status_code == 200:
|
||||
return response.json().get("value", f"Here's a {category} joke!")
|
||||
return f"Here's a {category} joke!"
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b,
|
||||
"joke": lambda category: get_joke(category)
|
||||
}
|
||||
|
||||
query = "Tell me a joke about animals"
|
||||
|
||||
# the rest of the code is the same
|
||||
```
|
||||
|
||||
Agora, se executar este código, receberá uma resposta dizendo algo como:
|
||||
|
||||
```text
|
||||
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
Aqui está o código na sua totalidade:
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import requests
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
class joke(TypedDict):
|
||||
"""Tell a joke."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
category: Annotated[str, ..., "The joke category"]
|
||||
|
||||
tools = [add, joke]
|
||||
|
||||
def get_joke(category: str) -> str:
|
||||
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
|
||||
if response.status_code == 200:
|
||||
return response.json().get("value", f"Here's a {category} joke!")
|
||||
return f"Here's a {category} joke!"
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b,
|
||||
"joke": lambda category: get_joke(category)
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "Tell me a joke about animals"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
# print("TOOL CALL: ", tool)
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
## Embedding
|
||||
|
||||
Vectorizar conteúdo, comparar via similaridade cosseno.
|
||||
|
||||
https://python.langchain.com/docs/how_to/embed_text/
|
||||
|
||||
### Carregadores de documentos
|
||||
|
||||
PDF e CSV.
|
||||

## Build an application

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document was translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, a professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
# AI Framework

There are many AI frameworks out there that, when used, can dramatically speed up the time it takes to build a project. In this project, we'll focus on understanding what problems these frameworks address, and build such a project ourselves.

## Why a framework

When it comes to using AI, there are different approaches and different reasons for choosing those approaches. Here are some:

- **No SDK**. Most AI models let you interact with the model directly, for example via HTTP requests. That approach works and may sometimes be your only option if an SDK is missing.
- **SDK**. Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's typically limited to a specific model, though, and if you use different models, you may need to write new code to support those additional models.
- **A framework**. A framework takes things to another level: if you need to use different models, there's one API for all of them, and what differs is usually the initial setup. Frameworks also bring useful abstractions — in the AI space they can handle tools, memory, workflows, agents and more, while you write less code. Because frameworks are usually opinionated, they can be really helpful if you buy into how they do things, but they may fall short if you try to do something bespoke that the framework wasn't made for. Sometimes a framework can also simplify too much, so you may miss learning an important topic that later hurts performance, for example.

Generally, use the right tool for the job.
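To make the "No SDK" option above concrete, here is a sketch of what talking to a model over raw HTTP involves — assembling the URL, headers and JSON body yourself. The endpoint path and payload shape below follow the common OpenAI-style chat completions convention and are illustrative; check your provider's documentation for the real contract:

```python
import json

def build_chat_request(token: str, model: str, prompt: str) -> dict:
    # Everything an SDK would otherwise assemble for you:
    return {
        "url": "https://models.github.ai/inference/chat/completions",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("<token>", "openai/gpt-4o-mini", "What's the capital of France?")
print(req["url"])
```

You would then POST this with any HTTP client and parse the JSON response by hand. This per-provider plumbing is exactly what SDKs and frameworks hide.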

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## Our first prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated within GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.
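Note that `os.environ["GITHUB_TOKEN"]` raises a `KeyError` if the variable is unset; a small guard (a hypothetical helper, not part of Langchain) gives a friendlier error when running locally:

```python
import os

def get_token() -> str:
    # Populated automatically in GitHub Codespaces; locally you must
    # export a personal access token before running the app.
    token = os.environ.get("GITHUB_TOKEN")
    if not token:
        raise RuntimeError("GITHUB_TOKEN is not set; create a personal access token first")
    return token
```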

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# Send a single prompt and print the model's reply
response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversation

In the previous section, you saw how we used what's normally called zero-shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages being exchanged between yourself and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. A `HumanMessage` represents a message from the user, and a `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you see how we instruct the AI to assume the personality of Captain Picard, with the user asking "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# Send the whole conversation and print the model's reply
response = llm.invoke(messages)
print(response.content)
```

You should see output similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To keep conversational state, you can append the response from a chat so the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# First invocation: just the system message and the first user message
response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

# Append the AI's reply and a follow-up question, then invoke again
messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

What we can see from the conversation above is how we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a "maybe" ;)

## Streaming responses

TODO

## Prompt templates

TODO

## Structured output

TODO

## Tool calling

Tools are how we give the LLM additional skills. The idea is to tell the LLM about the functions it has available, and if a prompt is made that matches the description of one of those tools, then we call it.

### Using Python

Let's add some tools like so:

```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted to a schema that the LLM can understand. `functions` is a dictionary that ensures we know what to do if a specific tool is identified.
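To see why this `TypedDict` shape is convertible to a schema, here is a sketch using plain standard-library introspection (not Langchain's actual conversion code — the tool class is re-declared so the snippet stands alone) that pulls the types and descriptions back out of the annotations:

```python
from typing import Annotated, TypedDict, get_args, get_type_hints

class add(TypedDict):
    """Add two integers."""

    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

def to_schema(tool) -> dict:
    # Each Annotated field carries (type, default, description)
    props = {}
    for name, hint in get_type_hints(tool, include_extras=True).items():
        args = get_args(hint)  # e.g. (int, Ellipsis, "First integer")
        props[name] = {"type": args[0].__name__, "description": args[2]}
    return {"name": tool.__name__, "description": tool.__doc__, "properties": props}

print(to_schema(add))
```

The class name, docstring and per-field descriptions are all recoverable, which is what lets the framework describe the tool to the model.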

Let's see how we call the LLM with this tool next:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and thereby the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can type the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now when we call `invoke` on this new LLM, which has tools, the property `tool_calls` may be populated. If so, each identified tool has a `name` and `args` property that identify what tool should be called and with what arguments. The full code looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Running this code, you should see output similar to:

```text
TOOL CALL:  15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12" as meaning that the `add` tool should be called, and it knew that thanks to the tool's name, its description, and its member field descriptions. That the answer is 15 is down to our code using the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
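To see the dispatch in isolation, here is the same lookup-and-unpack step run against a hand-written tool call. The dictionary below only mimics the shape of an entry in `tool_calls`; no model is involved:

```python
functions = {
    "add": lambda a, b: a + b,
}

# Shaped like one entry of res.tool_calls
tool = {"name": "add", "args": {"a": 3, "b": 12}}

# Look the function up by name, then unpack the args dict as keyword arguments
result = functions[tool["name"]](**tool["args"])
print("TOOL CALL: ", result)
```

The `**tool["args"]` unpacking is why the lambda's parameter names must match the field names declared on the tool class.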

### A more interesting tool that calls a Web API

Tools adding two numbers are interesting as they illustrate how tool calling works, but usually tools tend to do something more interesting, like, for example, calling a Web API. Let's do just that with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now, if you run this code, you'll get a response saying something like:

```text
TOOL CALL:  Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in its entirety:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embedding

Vectorize content, then compare via cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/

### Document loaders

PDF and CSV.

## Build an application

TODO

## Assignment

## Summary
||||
@ -0,0 +1,388 @@
|
||||
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:31:00+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "sk"
|
||||
}
|
||||
-->
|
||||
# AI Framework
|
||||
|
||||
Existuje mnoho AI frameworkov, ktoré môžu výrazne urýchliť čas potrebný na vytvorenie projektu. V tomto projekte sa zameriame na pochopenie problémov, ktoré tieto frameworky riešia, a vytvoríme si takýto projekt sami.
|
||||
|
||||
## Prečo framework
|
||||
|
||||
Pri používaní AI existujú rôzne prístupy a dôvody na ich výber. Tu sú niektoré z nich:
|
||||
|
||||
- **Bez SDK**. Väčšina AI modelov umožňuje priamu interakciu s modelom, napríklad prostredníctvom HTTP požiadaviek. Tento prístup funguje a môže byť vašou jedinou možnosťou, ak SDK nie je k dispozícii.
|
||||
- **SDK**. Používanie SDK je zvyčajne odporúčaný prístup, pretože umožňuje písať menej kódu na interakciu s modelom. SDK je však často obmedzené na konkrétny model, a ak používate rôzne modely, budete musieť napísať nový kód na podporu týchto ďalších modelov.
|
||||
- **Framework**. Framework zvyčajne posúva veci na vyššiu úroveň v tom zmysle, že ak potrebujete používať rôzne modely, existuje jedno API pre všetky z nich, pričom rozdiely sú zvyčajne v počiatočnom nastavení. Okrem toho frameworky prinášajú užitočné abstrakcie, ako napríklad nástroje, pamäť, pracovné postupy, agentov a ďalšie, pričom vyžadujú menej kódu. Frameworky sú často názorové, čo znamená, že môžu byť veľmi užitočné, ak sa stotožníte s ich prístupom, ale môžu byť nedostatočné, ak sa pokúsite urobiť niečo na mieru, na čo nie sú určené. Niekedy môže framework veci až príliš zjednodušiť, čo môže viesť k tomu, že sa nenaučíte dôležitú tému, ktorá môže neskôr negatívne ovplyvniť výkon.
|
||||
|
||||
Vo všeobecnosti platí: použite správny nástroj na danú úlohu.
|
||||
|
||||
## Úvod
|
||||
|
||||
V tejto lekcii sa naučíme:
|
||||
|
||||
- Používať bežný AI framework.
|
||||
- Riešiť bežné problémy, ako sú konverzácie, používanie nástrojov, pamäť a kontext.
|
||||
- Využiť tieto znalosti na vytváranie AI aplikácií.
|
||||
|
||||
## Prvý prompt
|
||||
|
||||
V našom prvom príklade aplikácie sa naučíme, ako sa pripojiť k AI modelu a dotazovať ho pomocou promptu.
|
||||
|
||||
### Použitie Pythonu
|
||||
|
||||
V tomto príklade použijeme Langchain na pripojenie k GitHub modelom. Môžeme použiť triedu `ChatOpenAI` a zadať jej polia `api_key`, `base_url` a `model`. Token sa automaticky nastaví v GitHub Codespaces, a ak aplikáciu spúšťate lokálne, musíte si nastaviť osobný prístupový token, aby to fungovalo.
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
# works
|
||||
response = llm.invoke("What's the capital of France?")
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
V tomto kóde:
|
||||
|
||||
- Voláme `ChatOpenAI` na vytvorenie klienta.
|
||||
- Používame `llm.invoke` s promptom na vytvorenie odpovede.
|
||||
- Tlačíme odpoveď pomocou `print(response.content)`.
|
||||
|
||||
Mali by ste vidieť odpoveď podobnú:
|
||||
|
||||
```text
|
||||
The capital of France is Paris.
|
||||
```
|
||||
|
||||
## Konverzácia
|
||||
|
||||
V predchádzajúcej časti ste videli, ako sme použili to, čo sa bežne nazýva zero shot prompting, teda jeden prompt nasledovaný odpoveďou.
|
||||
|
||||
Často sa však ocitnete v situácii, kde potrebujete udržiavať konverzáciu pozostávajúcu z viacerých správ medzi vami a AI asistentom.
|
||||
|
||||
### Použitie Pythonu
|
||||
|
||||
V Langchain môžeme konverzáciu ukladať do zoznamu. `HumanMessage` predstavuje správu od používateľa a `SystemMessage` je správa určená na nastavenie "osobnosti" AI. V nasledujúcom príklade vidíte, ako inštruujeme AI, aby si osvojilo osobnosť kapitána Picarda, a používateľ sa pýta "Povedz mi o sebe" ako prompt.
|
||||
|
||||
```python
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
```
|
||||
|
||||
Celý kód pre tento príklad vyzerá takto:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
Mali by ste vidieť výsledok podobný:
|
||||
|
||||
```text
|
||||
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.
|
||||
|
||||
I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.
|
||||
|
||||
I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
|
||||
```
|
||||
|
||||
Na udržanie stavu konverzácie môžete pridať odpoveď z chatu, aby si konverzácia pamätala, tu je postup:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
print("---- Next ----")
|
||||
|
||||
messages.append(response)
|
||||
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))
|
||||
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
```
|
||||
|
||||
Z vyššie uvedenej konverzácie vidíme, ako sme dvakrát zavolali LLM, najprv s konverzáciou pozostávajúcou len z dvoch správ, ale potom druhýkrát s viacerými správami pridanými do konverzácie.
|
||||
|
||||
Ak tento kód spustíte, druhá odpoveď bude pravdepodobne niečo ako:
|
||||
|
||||
```text
|
||||
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.
|
||||
|
||||
If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
|
||||
```
|
||||
|
||||
Beriem to ako možno ;)
|
||||
|
||||
## Streamovanie odpovedí
|
||||
|
||||
TODO
|
||||
|
||||
## Šablóny promptov
|
||||
|
||||
TODO
|
||||
|
||||
## Štruktúrovaný výstup
|
||||
|
||||
TODO
|
||||
|
||||
## Volanie nástrojov
|
||||
|
||||
Nástroje sú spôsob, ako dať LLM ďalšie schopnosti. Myšlienka je povedať LLM o funkciách, ktoré má, a ak je prompt vytvorený tak, že zodpovedá popisu jedného z týchto nástrojov, potom ho zavoláme.
|
||||
|
||||
### Použitie Pythonu
|
||||
|
||||
Pridajme niektoré nástroje takto:
|
||||
|
||||
```python
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
```
|
||||
|
||||
Tu vytvárame popis nástroja nazvaného `add`. Dedením z `TypedDict` a pridaním členov ako `a` a `b` typu `Annotated` to môže byť konvertované na schému, ktorú LLM rozumie. Vytvorenie funkcií je slovník, ktorý zabezpečuje, že vieme, čo robiť, ak je identifikovaný konkrétny nástroj.
|
||||
|
||||
Pozrime sa, ako zavoláme LLM s týmto nástrojom:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
```
|
||||
|
||||
Tu voláme `bind_tools` s naším poľom `tools`, čím LLM `llm_with_tools` teraz pozná tento nástroj.
|
||||
|
||||
Na použitie tohto nového LLM môžeme napísať nasledujúci kód:
|
||||
|
||||
```python
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
Keď teraz zavoláme `invoke` na tomto novom LLM, ktorý má nástroje, môže byť vlastnosť `tool_calls` naplnená. Ak áno, akékoľvek identifikované nástroje majú vlastnosti `name` a `args`, ktoré identifikujú, aký nástroj by mal byť zavolaný a s akými argumentmi. Celý kód vyzerá takto:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
Spustením tohto kódu by ste mali vidieť výstup podobný:
|
||||
|
||||
```text
|
||||
TOOL CALL: 15
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
Tento výstup znamená, že LLM analyzoval prompt "Čo je 3 + 12" ako požiadavku na zavolanie nástroja `add` a vedel to vďaka jeho názvu, popisu a popisom členov. To, že odpoveď je 15, je výsledkom nášho kódu, ktorý používa slovník `functions` na jeho vykonanie:
|
||||
|
||||
```python
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
```
|
||||
|
||||
### Zaujímavejší nástroj, ktorý volá webové API
|
||||
|
||||
Nástroje, ktoré sčítavajú dve čísla, sú zaujímavé, pretože ilustrujú, ako funguje volanie nástrojov, ale zvyčajne nástroje robia niečo zaujímavejšie, napríklad volanie webového API. Urobme to s týmto kódom:
|
||||
|
||||
```python
|
||||
class joke(TypedDict):
|
||||
"""Tell a joke."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
category: Annotated[str, ..., "The joke category"]
|
||||
|
||||
def get_joke(category: str) -> str:
|
||||
response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
|
||||
if response.status_code == 200:
|
||||
return response.json().get("value", f"Here's a {category} joke!")
|
||||
return f"Here's a {category} joke!"
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b,
|
||||
"joke": lambda category: get_joke(category)
|
||||
}
|
||||
|
||||
query = "Tell me a joke about animals"
|
||||
|
||||
# the rest of the code is the same
|
||||
```
|
||||
|
||||
Ak teraz spustíte tento kód, dostanete odpoveď podobnú:
|
||||
|
||||
```text
|
||||
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
Tu je celý kód:
|
||||
|
||||
```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]


tools = [add, joke]


def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"


functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}


llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)


llm_with_tools = llm.bind_tools(tools)


query = "Tell me a joke about animals"


res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
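
The dispatch line `functions[tool["name"]](**tool["args"])` is doing the real work here: the model only returns a tool *name* and *arguments*, and our code looks the function up and actually calls it. A minimal sketch of that dispatch, using a hand-written `tool_calls` payload in place of the model's response and a stub instead of the web API:

```python
# The model proposes tool calls; our code performs them.
functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: f"Here's a {category} joke!",  # stub instead of the web API
}

# Shape matches the entries LangChain puts in res.tool_calls.
tool_calls = [
    {"name": "add", "args": {"a": 3, "b": 12}},
    {"name": "joke", "args": {"category": "animals"}},
]

for tool in tool_calls:
    result = functions[tool["name"]](**tool["args"])
    print("TOOL CALL:", result)
```

Because the arguments arrive as a dict, `**tool["args"]` passes them as keyword arguments, which is why the lambda parameter names must match the field names declared in the `TypedDict` schema.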

## Embedding

Vectorize content and compare it via cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/
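
Embeddings turn text into vectors so that "how similar are these texts?" becomes arithmetic. A minimal sketch of the comparison step, using hand-made toy vectors (a real app would obtain them from an embeddings model, for example via LangChain's `embed_documents`):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; real ones have hundreds of dimensions.
doc_cat = [0.9, 0.1, 0.0]
doc_kitten = [0.85, 0.2, 0.05]
doc_invoice = [0.0, 0.1, 0.95]

print(cosine_similarity(doc_cat, doc_kitten))   # high: related texts
print(cosine_similarity(doc_cat, doc_invoice))  # low: unrelated texts
```

Ranking documents by this score against a query's embedding is the core of semantic search and retrieval-augmented generation.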

### Document loaders

PDF and CSV
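
LangChain ships loaders such as `PyPDFLoader` and `CSVLoader` that turn files into documents: a piece of text plus metadata about where it came from. A rough sketch of what a CSV loader produces, written with only the standard library so it runs anywhere (this mimics the `page_content`/`metadata` shape; it is not LangChain's actual API):

```python
import csv
import io

def load_csv(text: str, source: str) -> list[dict]:
    """Produce one 'document' per CSV row, loader-style."""
    docs = []
    for i, row in enumerate(csv.DictReader(io.StringIO(text))):
        content = "\n".join(f"{key}: {value}" for key, value in row.items())
        docs.append({"page_content": content, "metadata": {"source": source, "row": i}})
    return docs

sample = "name,role\nPicard,Captain\nData,Lt. Commander\n"
for doc in load_csv(sample, source="crew.csv"):
    print(doc["page_content"], doc["metadata"])
```

Once content is loaded into documents like these, it can be embedded and searched with the cosine-similarity approach above.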

## Building the app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
@ -0,0 +1,388 @@
|
||||
<!--
|
||||
CO_OP_TRANSLATOR_METADATA:
|
||||
{
|
||||
"original_hash": "5fe046e7729ae6a24c717884bf875917",
|
||||
"translation_date": "2025-10-11T14:26:59+00:00",
|
||||
"source_file": "10-ai-framework-project/README.md",
|
||||
"language_code": "sv"
|
||||
}
|
||||
-->
|
||||
# AI-ramverk
|
||||
|
||||
Det finns många AI-ramverk som kan användas för att avsevärt snabba upp tiden det tar att bygga ett projekt. I det här projektet kommer vi att fokusera på att förstå vilka problem dessa ramverk löser och bygga ett sådant projekt själva.
|
||||
|
||||
## Varför ett ramverk
|
||||
|
||||
När det gäller att använda AI finns det olika tillvägagångssätt och olika anledningar till att välja dessa tillvägagångssätt, här är några:
|
||||
|
||||
- **Ingen SDK**, de flesta AI-modeller tillåter dig att interagera direkt med AI-modellen via exempelvis HTTP-förfrågningar. Det tillvägagångssättet fungerar och kan ibland vara ditt enda alternativ om en SDK saknas.
|
||||
- **SDK**. Att använda en SDK är vanligtvis det rekommenderade tillvägagångssättet eftersom det gör att du kan skriva mindre kod för att interagera med din modell. Det är vanligtvis begränsat till en specifik modell, och om du använder olika modeller kan du behöva skriva ny kod för att stödja dessa ytterligare modeller.
|
||||
- **Ett ramverk**. Ett ramverk tar vanligtvis saker till en annan nivå i den meningen att om du behöver använda olika modeller finns det ett API för alla, det som skiljer sig är vanligtvis den initiala inställningen. Dessutom tillför ramverk användbara abstraktioner, som inom AI-området, där de kan hantera verktyg, minne, arbetsflöden, agenter och mer, samtidigt som du skriver mindre kod. Eftersom ramverk vanligtvis är åsiktsdrivna kan de vara mycket hjälpsamma om du köper in dig på hur de gör saker, men de kan vara otillräckliga om du försöker göra något skräddarsytt som ramverket inte är utformat för. Ibland kan ett ramverk också förenkla för mycket, vilket kan leda till att du missar ett viktigt ämne som senare kan påverka prestandan negativt.
|
||||
|
||||
Generellt sett, använd rätt verktyg för jobbet.
|
||||
|
||||
## Introduktion
|
||||
|
||||
I denna lektion kommer vi att lära oss att:
|
||||
|
||||
- Använda ett vanligt AI-ramverk.
|
||||
- Hantera vanliga problem som chattkonversationer, verktygsanvändning, minne och kontext.
|
||||
- Utnyttja detta för att bygga AI-appar.
|
||||
|
||||
## Första prompten
|
||||
|
||||
I vårt första appexempel kommer vi att lära oss hur man ansluter till en AI-modell och frågar den med hjälp av en prompt.
|
||||
|
||||
### Med Python
|
||||
|
||||
För detta exempel kommer vi att använda Langchain för att ansluta till GitHub-modeller. Vi kan använda en klass som heter `ChatOpenAI` och ge den fälten `api_key`, `base_url` och `model`. Token genereras automatiskt inom GitHub Codespaces, och om du kör appen lokalt måste du ställa in en personlig åtkomsttoken för att detta ska fungera.
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
# works
|
||||
response = llm.invoke("What's the capital of France?")
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
I denna kod:
|
||||
|
||||
- Anropar vi `ChatOpenAI` för att skapa en klient.
|
||||
- Använder vi `llm.invoke` med en prompt för att skapa ett svar.
|
||||
- Skriver vi ut svaret med `print(response.content)`.
|
||||
|
||||
Du bör se ett svar som liknar:
|
||||
|
||||
```text
|
||||
The capital of France is Paris.
|
||||
```
|
||||
|
||||
## Chattkonversation
|
||||
|
||||
I föregående avsnitt såg du hur vi använde det som normalt kallas zero shot prompting, en enda prompt följt av ett svar.
|
||||
|
||||
Men ofta befinner du dig i en situation där du behöver upprätthålla en konversation med flera meddelanden som utbyts mellan dig och AI-assistenten.
|
||||
|
||||
### Med Python
|
||||
|
||||
I Langchain kan vi lagra konversationen i en lista. `HumanMessage` representerar ett meddelande från en användare, och `SystemMessage` är ett meddelande som är avsett att sätta AI:ns "personlighet". I exemplet nedan ser du hur vi instruerar AI att anta personligheten av Captain Picard och att användaren ska fråga "Berätta om dig" som prompt.
|
||||
|
||||
```python
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
```
|
||||
|
||||
Den fullständiga koden för detta exempel ser ut så här:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
print(response.content)
|
||||
```
|
||||
|
||||
Du bör se ett resultat som liknar:
|
||||
|
||||
```text
|
||||
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.
|
||||
|
||||
I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.
|
||||
|
||||
I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
|
||||
```
|
||||
|
||||
För att behålla konversationens tillstånd kan du lägga till svaret från en chatt så att konversationen kommer ihåg, här är hur du gör det:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
messages = [
|
||||
SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
|
||||
HumanMessage(content="Tell me about you"),
|
||||
]
|
||||
|
||||
|
||||
# works
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
print("---- Next ----")
|
||||
|
||||
messages.append(response)
|
||||
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))
|
||||
|
||||
response = llm.invoke(messages)
|
||||
|
||||
print(response.content)
|
||||
|
||||
```
|
||||
|
||||
Vad vi kan se från ovanstående konversation är hur vi anropar LLM två gånger, först med konversationen som består av bara två meddelanden, men sedan en andra gång med fler meddelanden tillagda till konversationen.
|
||||
|
||||
Faktum är att om du kör detta kommer du att se det andra svaret vara något i stil med:
|
||||
|
||||
```text
|
||||
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.
|
||||
|
||||
If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
|
||||
```
|
||||
|
||||
Jag tar det som ett kanske ;)
|
||||
|
||||
## Strömmande svar
|
||||
|
||||
TODO
|
||||
|
||||
## Promptmallar
|
||||
|
||||
TODO
|
||||
|
||||
## Strukturerad output
|
||||
|
||||
TODO
|
||||
|
||||
## Verktygsanrop
|
||||
|
||||
Verktyg är hur vi ger LLM extra färdigheter. Idén är att berätta för LLM om funktioner den har, och om en prompt görs som matchar beskrivningen av ett av dessa verktyg, så anropar vi dem.
|
||||
|
||||
### Med Python
|
||||
|
||||
Låt oss lägga till några verktyg så här:
|
||||
|
||||
```python
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
```
|
||||
|
||||
Vad vi gör här är att skapa en beskrivning av ett verktyg som heter `add`. Genom att ärva från `TypedDict` och lägga till medlemmar som `a` och `b` av typen `Annotated` kan detta konverteras till ett schema som LLM kan förstå. Skapandet av funktioner är en ordbok som säkerställer att vi vet vad vi ska göra om ett specifikt verktyg identifieras.
|
||||
|
||||
Låt oss se hur vi anropar LLM med detta verktyg härnäst:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
```
|
||||
|
||||
Här anropar vi `bind_tools` med vår `tools`-array, och därmed har LLM `llm_with_tools` nu kunskap om detta verktyg.
|
||||
|
||||
För att använda denna nya LLM kan vi skriva följande kod:
|
||||
|
||||
```python
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
Nu när vi anropar `invoke` på denna nya LLM, som har verktyg, kanske egenskapen `tool_calls` fylls i. Om så är fallet har alla identifierade verktyg en `name` och `args`-egenskap som identifierar vilket verktyg som ska anropas och med vilka argument. Den fullständiga koden ser ut så här:
|
||||
|
||||
```python
|
||||
from langchain_core.messages import HumanMessage, SystemMessage
|
||||
from langchain_openai import ChatOpenAI
|
||||
import os
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
query = "What is 3 + 12?"
|
||||
|
||||
res = llm_with_tools.invoke(query)
|
||||
if(res.tool_calls):
|
||||
for tool in res.tool_calls:
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
print("CONTENT: ",res.content)
|
||||
```
|
||||
|
||||
När du kör denna kod bör du se ett resultat som liknar:
|
||||
|
||||
```text
|
||||
TOOL CALL: 15
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
Vad detta resultat betyder är att LLM analyserade prompten "Vad är 3 + 12" som att det betyder att verktyget `add` ska anropas, och den visste det tack vare dess namn, beskrivning och medlemsfältsbeskrivningar. Att svaret är 15 beror på vår kod som använder ordboken `functions` för att anropa det:
|
||||
|
||||
```python
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
```
|
||||
|
||||
### Ett mer intressant verktyg som anropar ett webb-API
|
||||
|
||||
Verktyg som lägger till två tal är intressanta eftersom det illustrerar hur verktygsanrop fungerar, men vanligtvis tenderar verktyg att göra något mer intressant, som till exempel att anropa ett webb-API. Låt oss göra just det med denna kod:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]


def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"


functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now if you run this code, you'll get a response saying something like:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in its entirety:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]


tools = [add, joke]


def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"


functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embeddings

Vectorize content, compare via cosine similarity

https://python.langchain.com/docs/how_to/embed_text/
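
To make the comparison step concrete, here is a minimal sketch of cosine similarity in plain Python. The vectors are tiny made-up stand-ins; in practice they would come from an embeddings model, as in the linked docs:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real ones have hundreds of dimensions).
doc1 = [1.0, 2.0, 3.0]
doc2 = [2.0, 4.0, 6.0]   # same direction as doc1
doc3 = [-1.0, 0.0, 1.0]

print(cosine_similarity(doc1, doc2))  # close to 1.0: very similar
print(cosine_similarity(doc1, doc3))  # smaller: less similar
```

A score near 1 means the two vectors point in the same direction, which for embeddings means the texts are semantically similar.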

### Document loaders

PDF and CSV

## Building an app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its original language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
<!--
CO_OP_TRANSLATOR_METADATA:
{
    "original_hash": "5fe046e7729ae6a24c717884bf875917",
    "translation_date": "2025-10-11T14:29:57+00:00",
    "source_file": "10-ai-framework-project/README.md",
    "language_code": "sw"
}
-->

# AI Framework

There are many AI frameworks that can drastically speed up the time it takes to build a project. In this project, we'll focus on understanding what problems these frameworks address, and build such a project ourselves.

## Why a framework

When it comes to using AI, there are different approaches and different reasons for choosing those approaches. Here are some:

- **No SDK**. Most AI models let you interact directly with the model via, for example, HTTP requests. That approach works and may sometimes be your only option if an SDK is missing.
- **SDK**. Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's usually limited to a specific model, and if you're using different models, you may need to write new code to support those additional models.
- **A framework**. A framework usually takes things to another level in the sense that if you need to use different models, there's one API for all of them; what differs is usually the initial setup. Additionally, frameworks bring useful abstractions: in the AI space they can handle tools, memory, workflows, agents and more, while you write less code. Because frameworks are usually opinionated, they can be really helpful if you buy into how they do things, but they can fall short if you try to do something bespoke that the framework wasn't built for. Sometimes a framework can also oversimplify, so you may not learn an important topic that later can hurt performance, for example.

Generally, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use the class `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated within GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# works
response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversations

In the preceding section, you saw how we used what's normally referred to as zero shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages exchanged between yourself and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. `HumanMessage` represents a message from the user, and `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you see how we instruct the AI to assume the personality of Captain Picard, while the human/user asks "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To maintain the state of the conversation, you can append the response from the chat, so the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

What we can see from the above conversation is how we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)

## Streaming responses

TODO

## Prompt templates

TODO

## Structured output

TODO

## Tool calling

Tools are how we give the LLM extra skills. The idea is to tell the LLM about the functions it has, and if a prompt comes in that matches the description of one of these tools, then we call it.

### Using Python

Let's add some tools like so:

```python
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM can understand. `functions` is a dictionary that ensures we know what to do if a specific tool is identified.

Let's see how we call the LLM with this tool next:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and thereby the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can type the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now that we call `invoke` on this new LLM, which has tools, the property `tool_calls` may be populated. If so, any identified tools have a `name` and `args` property identifying which tool should be called and with what arguments. The full code looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Running this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM parsed the prompt "What is 3 + 12?" as meaning that the `add` tool should be called, and it knew that thanks to the tool's name, description and member field descriptions. That the answer is 15 is due to our code using the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
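
To make the dispatch step concrete, here is a minimal, self-contained sketch of the same pattern, with a hand-written list standing in for the model's `tool_calls` (the tool name and arguments are made up for illustration; no model call is involved):

```python
# A dictionary maps tool names to callables; each tool call's args
# dictionary is unpacked as keyword arguments with **.
functions = {
    "add": lambda a, b: a + b,
}

# Stand-in for res.tool_calls as produced by the model.
tool_calls = [
    {"name": "add", "args": {"a": 3, "b": 12}},
]

for tool in tool_calls:
    print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))  # prints: TOOL CALL:  15
```

Because the arguments arrive as a dictionary keyed by parameter name, `**tool["args"]` maps them onto the function's parameters regardless of order.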

### A more interesting tool that calls a web API

A tool adding two numbers is interesting as it illustrates how tool calling works, but usually tools tend to do something more interesting, like for example calling a web API. Let's do just that with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]


def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"


functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now if you run this code, you'll get a response saying something like:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the code in its entirety:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]


tools = [add, joke]


def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"


functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embeddings

Vectorize content, compare via cosine similarity

https://python.langchain.com/docs/how_to/embed_text/
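
To make the comparison step concrete, here is a minimal sketch of cosine similarity in plain Python. The vectors are tiny made-up stand-ins; in practice they would come from an embeddings model, as in the linked docs:

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = dot(a, b) / (||a|| * ||b||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real ones have hundreds of dimensions).
doc1 = [1.0, 2.0, 3.0]
doc2 = [2.0, 4.0, 6.0]   # same direction as doc1
doc3 = [-1.0, 0.0, 1.0]

print(cosine_similarity(doc1, doc2))  # close to 1.0: very similar
print(cosine_similarity(doc1, doc3))  # smaller: less similar
```

A score near 1 means the two vectors point in the same direction, which for embeddings means the texts are semantically similar.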

### Document loaders

PDF and CSV

## Building an app

TODO

## Assignment

## Summary

---

<!--
CO_OP_TRANSLATOR_METADATA:
{
    "original_hash": "5fe046e7729ae6a24c717884bf875917",
    "translation_date": "2025-10-11T14:34:02+00:00",
    "source_file": "10-ai-framework-project/README.md",
    "language_code": "ta"
}
-->

# AI Framework

There are many AI frameworks that can drastically speed up the time it takes to build a project. In this project, we'll focus on understanding what problems these frameworks address, and build such a project ourselves.

## Why a framework?

When it comes to using AI, there are different approaches and different reasons for choosing those approaches. Here are some:

- **No SDK**. Most AI models let you interact directly with the model via, for example, HTTP requests. That approach works and may sometimes be your only option if an SDK is missing.
- **SDK**. Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's usually limited to a specific model, and if you're using different models, you may need to write new code to support those additional models.
- **A framework**. A framework usually takes things to another level in the sense that if you need to use different models, there's one API for all of them; what differs is usually the initial setup. Additionally, frameworks bring useful abstractions: in the AI space they can handle tools, memory, workflows, agents and more, while you write less code. Because frameworks are usually opinionated, they can be really helpful if you buy into how they do things, but they can fall short if you try to do something bespoke that the framework wasn't built for. Sometimes a framework can also oversimplify, so you may not learn an important topic that later can hurt performance, for example.

Generally, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example, we'll use Langchain to connect to GitHub Models. We can use the class `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated within GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# works
response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversations

In the preceding section, you saw how we used what's normally referred to as zero shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages exchanged between yourself and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. `HumanMessage` represents a message from the user, and `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you see how we instruct the AI to assume the personality of Captain Picard, while the human/user asks "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To maintain the state of the conversation, you can append the response from the chat, so the conversation is remembered. Here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

What we can see from the above conversation is how we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)

## Streaming responses

TODO

## Prompt templates

TODO

## Structured output

TODO

## Tool calling

Tools are how we give the LLM extra skills. The idea is to tell the LLM about the functions it has, and if a prompt comes in that matches the description of one of these tools, then we call it.

### Using Python

Let's add some tools like so:

```python
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM can understand. `functions` is a dictionary that ensures we know what to do if a specific tool is identified.

Let's see how we call the LLM with this tool next:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and thereby the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can type the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now that we call `invoke` on this new LLM, which has tools, the property `tool_calls` may be populated. If so, any identified tools have a `name` and `args` property identifying which tool should be called and with what arguments. The full code looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
|
||||
|
||||
இந்த குறியீட்டை இயக்கும்போது, நீங்கள் இதற்குச் சமமான ஒரு வெளியீட்டைப் பார்க்க வேண்டும்:
|
||||
|
||||
```text
|
||||
TOOL CALL: 15
|
||||
CONTENT:
|
||||
```
|
||||
|
||||
இந்த வெளியீடு என்ன அர்த்தம் என்பதாவது, LLM "What is 3 + 12" என்ற கேள்வியை `add` கருவி அழைக்கப்பட வேண்டும் என்று பகுப்பாய்வு செய்தது, அதன் பெயர், விளக்கம் மற்றும் உறுப்பினர் புல விளக்கங்களின் காரணமாக. பதில் 15 என்பது எங்கள் குறியீடு அகராதி `functions` ஐ பயன்படுத்தி அதை invoke செய்ததால்தான்:
|
||||
|
||||
```python
|
||||
print("TOOL CALL: ", functions[tool["name"]](../../../10-ai-framework-project/**tool["args"]))
|
||||
```
|
||||
|
||||
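The dispatch step above boils down to looking up a callable by the tool's name and unpacking the model-supplied arguments with `**`. A minimal, self-contained sketch of that pattern, using a hand-written `tool_calls` list in place of a real model response:

```python
# Minimal sketch of name-based tool dispatch. The tool_calls list is
# hand-written here; in the real code it comes from res.tool_calls.
functions = {
    "add": lambda a, b: a + b,
}

tool_calls = [
    {"name": "add", "args": {"a": 3, "b": 12}},
]

results = [functions[call["name"]](**call["args"]) for call in tool_calls]
print(results)  # [15]
```

The `**call["args"]` unpacking is why the argument names in the tool description must match the parameter names of the function that implements it.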
### A more interesting tool that calls a web API

Tools that add two numbers make a nice example of how tool calls work, but tools usually do something more interesting, such as, for example, calling a web API. Let's make that happen with the code below:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now if you run this code, you'll get a response similar to:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the full code:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embeddings

Vectorize content and compare it using cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/
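The comparison step mentioned above needs no model to demonstrate: once two pieces of content are embedded as vectors, cosine similarity is just their dot product normalized by their lengths. A minimal sketch with hand-made stand-in vectors (a real app would get these from an embeddings model, as in the linked guide):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-made stand-ins for embedding vectors; real ones come from a
# model and have hundreds of dimensions.
doc_vector = [0.1, 0.9, 0.2]
query_vector = [0.1, 0.8, 0.3]
unrelated_vector = [0.9, 0.0, 0.1]

print(cosine_similarity(doc_vector, query_vector))      # close to 1.0
print(cosine_similarity(doc_vector, unrelated_vector))  # much lower
```

Ranking documents by this score against an embedded query is the core of semantic search.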
### Document loaders

PDF and CSV.
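As a rough illustration of what a document loader produces, here's a dependency-free sketch that turns each CSV row into a small "document" (page content plus metadata). The `page_content`/`metadata` shape follows Langchain's convention, but the code itself is just the standard library; a real app would use the framework's CSV and PDF loaders instead:

```python
import csv
import io

def load_csv_documents(csv_text, source="data.csv"):
    """Turn each CSV row into a 'document' dict: content plus metadata.

    Illustrative only; real code would use a framework document loader.
    """
    documents = []
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text))):
        content = "\n".join(f"{key}: {value}" for key, value in row.items())
        documents.append({"page_content": content, "metadata": {"source": source, "row": i}})
    return documents

docs = load_csv_documents("name,role\nPicard,Captain\nData,Lt. Commander")
print(len(docs))                # 2
print(docs[0]["page_content"])  # name: Picard  /  role: Captain
```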
## Building the app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
<!--
CO_OP_TRANSLATOR_METADATA:
{
  "original_hash": "5fe046e7729ae6a24c717884bf875917",
  "translation_date": "2025-10-11T14:26:34+00:00",
  "source_file": "10-ai-framework-project/README.md",
  "language_code": "th"
}
-->
# AI Framework

There are many AI frameworks out there that can significantly speed up the time it takes to build a project. In this project we'll focus on understanding what problems these frameworks address, and on building such a project ourselves.

## Why a framework

When it comes to using AI, there are different approaches and different reasons for choosing those approaches. Here are some:

- **No SDK**: most AI models let you interact with the model directly, for example via HTTP requests. That approach works and may sometimes be your only option if no SDK is available.
- **SDK**: using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's typically limited to a specific model, and if you're using different models you may need to write new code to support those additional models.
- **Framework**: a framework usually takes things to another level, offering one API for a variety of models; what differs is usually the initial setup. On top of that, frameworks bring useful abstractions: in the AI space they can handle tools, memory, workflows, agents and more, all while you write less code. Because frameworks are usually opinionated, they can help a lot if you buy into their way of doing things, but they may fall short if you try to do something custom the framework wasn't built for. Sometimes a framework can also oversimplify, so you may not learn an important topic that later hurts performance.

In general, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated inside GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# works
response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversation

In the preceding section, you saw how we used what's normally referred to as zero-shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages being exchanged between you and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. A `HumanMessage` represents a message from the user, and a `SystemMessage` is a message meant to set the "personality" of the AI. In the example below you can see how we instruct the AI to assume the personality of Captain Picard, while the human/user asks "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To keep the state of the conversation, you can append the response from the chat so that the conversation is remembered, like so:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

From the conversation above, we see how we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

Quite an interesting answer ;)
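The memory pattern above (a growing list of messages that is re-sent in full on every call) can be sketched without the framework. Below, `fake_llm` is a hypothetical stand-in that just reports how much context it received; a real app would call `llm.invoke(messages)` instead:

```python
def fake_llm(messages):
    """Hypothetical stand-in for a chat model: echoes its context size."""
    return {"role": "assistant", "content": f"(reply based on {len(messages)} messages)"}

messages = [
    {"role": "system", "content": "You are Captain Picard of the Starship Enterprise"},
    {"role": "user", "content": "Tell me about you"},
]

first = fake_llm(messages)

# Append the assistant's reply and the user's follow-up so the next
# call sees the whole history. Skipping these appends is exactly what
# makes a chat "forget" earlier turns.
messages.append(first)
messages.append({"role": "user", "content": "Can I be in your crew?"})

second = fake_llm(messages)
print(second["content"])  # (reply based on 4 messages)
```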
## Streaming responses

TODO

## Prompt templates

TODO

## Structured output

TODO

## Tool calling

Tools are how we give an LLM additional skills. The idea is to tell the LLM about the functions it has access to, and if a prompt matches the description of one of these tools, then we call it.

### Using Python

Let's add some tools:

```python
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}
```

What we're doing here is creating a description of a tool called `add`. By inheriting from `TypedDict` and adding members like `a` and `b` of type `Annotated`, this can be converted into a schema the LLM can understand. Creating `functions` as a dictionary ensures we know what to do when a specific tool is identified.
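To see why the `Annotated` members are machine-readable, here's a small sketch that introspects the `add` class with the standard library's `typing` helpers and rebuilds the field descriptions. The exact schema a framework generates differs; this only shows that the metadata is recoverable:

```python
from typing import Annotated, TypedDict, get_args, get_type_hints

class add(TypedDict):
    """Add two integers."""

    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

def describe(tool_cls):
    """Build a schema-like dict from a TypedDict tool description.

    Illustrative only: real frameworks generate a proper JSON schema.
    """
    fields = {}
    for name, hint in get_type_hints(tool_cls, include_extras=True).items():
        base_type, _, description = get_args(hint)
        fields[name] = {"type": base_type.__name__, "description": description}
    return {"name": tool_cls.__name__, "description": tool_cls.__doc__, "parameters": fields}

schema = describe(add)
print(schema["parameters"]["a"])  # {'type': 'int', 'description': 'First integer'}
```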
Let's look at how we invoke the LLM with this tool:

```python
llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)
```

Here we call `bind_tools` with our `tools` array, and with that the LLM `llm_with_tools` now has knowledge of this tool.

To use this new LLM, we can type the following code:

```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now when we call `invoke` on this new LLM with tools bound, the `tool_calls` property may be populated. If so, each identified tool call has `name` and `args` properties that identify which tool should be called and with which arguments. The full code looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

When you run this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12" as meaning that the `add` tool should be called, and it knew to do so thanks to the tool's name, description and member field descriptions. That the answer is 15 is because our code invoked it via the `functions` dictionary:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```

### A more interesting tool that calls a web API

Tools that add two numbers make a nice example of how tool calls work, but tools usually do something more interesting, such as, for example, calling a web API. Let's make that happen with this code:

```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now if you run this code, you'll get a response similar to:

```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the full code:

```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict

class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

tools = [add, joke]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

## Embeddings

Vectorize content and compare it using cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/

### Document loaders

PDF and CSV.

## Building the app

TODO

## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
<!--
CO_OP_TRANSLATOR_METADATA:
{
  "original_hash": "5fe046e7729ae6a24c717884bf875917",
  "translation_date": "2025-10-11T14:29:37+00:00",
  "source_file": "10-ai-framework-project/README.md",
  "language_code": "tl"
}
-->
# AI Framework

There are many AI frameworks out there that can significantly speed up the time it takes to build a project. In this project we'll focus on understanding what problems these frameworks address, and on building such a project ourselves.

## Why a framework

When it comes to using AI, there are different approaches and different reasons for choosing those approaches. Here are some:

- **No SDK**: most AI models let you interact with the model directly, for example via HTTP requests. That approach works and may sometimes be your only option if no SDK is available.
- **SDK**: using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's typically limited to a specific model, and if you're using different models you may need to write new code to support those additional models.
- **Framework**: a framework usually takes things to another level, in the sense that if you need to use different models there's one API for all of them; what differs is usually the initial setup. On top of that, frameworks bring useful abstractions: in the AI space they can handle tools, memory, workflows, agents and more, all while you write less code. Because frameworks are usually opinionated, they can help a lot if you buy into their way of doing things, but they may fall short if you try to do something custom the framework wasn't built for. Sometimes a framework can also oversimplify, so you may not learn an important topic that later hurts performance.

In general, use the right tool for the job.

## Introduction

In this lesson, we'll learn to:

- Use a common AI framework.
- Address common problems like chat conversations, tool usage, memory and context.
- Leverage this to build AI apps.

## First prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

For this example we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated inside GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.

```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

# works
response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we do the following:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```

## Chat conversation

In the preceding section, you saw how we used what's normally referred to as zero-shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages being exchanged between you and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. A `HumanMessage` represents a message from the user, and a `SystemMessage` is a message meant to set the "personality" of the AI. In the example below you can see how we instruct the AI to assume the personality of Captain Picard, while the human/user asks "Tell me about you" as the prompt.

```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:

```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```

To keep the state of the conversation, you can append the response from the chat so that the conversation is remembered, like so:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

# works
response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```

From the conversation above, we can see how we invoked the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'd say that's a maybe ;)

## Streaming responses

TODO

## Prompt templates

TODO

## Structured output

TODO

## Tool calling

Tools are how we give the LLM additional skills. The idea is to tell the LLM about the functions it has access to, and if a prompt matches the description of one of these tools, then we call it.
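One detail worth noting before wiring this up: the model only names a tool and supplies arguments; our code decides what actually runs. A small defensive-dispatch sketch (the `tool_calls` list is hand-written, and `delete_files` is a hypothetical name we never registered):

```python
# Execute only tools we registered ourselves; ignore anything else.
functions = {
    "add": lambda a, b: a + b,
}

tool_calls = [
    {"name": "add", "args": {"a": 2, "b": 5}},
    {"name": "delete_files", "args": {}},  # hypothetical, never registered
]

results = []
for call in tool_calls:
    handler = functions.get(call["name"])
    if handler is None:
        results.append(f"unknown tool: {call['name']}")
        continue
    results.append(handler(**call["args"]))

print(results)  # [7, 'unknown tool: delete_files']
```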
### Gamit ang Python
|
||||
|
||||
Magdagdag tayo ng ilang tool tulad nito:
|
||||
|
||||
```python
|
||||
from typing_extensions import Annotated, TypedDict
|
||||
|
||||
class add(TypedDict):
|
||||
"""Add two integers."""
|
||||
|
||||
# Annotations must have the type and can optionally include a default value and description (in that order).
|
||||
a: Annotated[int, ..., "First integer"]
|
||||
b: Annotated[int, ..., "Second integer"]
|
||||
|
||||
tools = [add]
|
||||
|
||||
functions = {
|
||||
"add": lambda a, b: a + b
|
||||
}
|
||||
```
|
||||
|
||||
Ang ginagawa natin dito ay lumikha ng deskripsyon ng isang tool na tinatawag na `add`. Sa pamamagitan ng pagmamana mula sa `TypedDict` at pagdaragdag ng mga miyembro tulad ng `a` at `b` ng uri na `Annotated`, maaari itong ma-convert sa isang schema na maiintindihan ng LLM. Ang paglikha ng mga function ay isang dictionary na nagsisiguro na alam natin kung ano ang gagawin kung ang isang partikular na tool ay natukoy.
|
||||
|
||||
Tingnan natin kung paano natin tatawagin ang LLM gamit ang tool na ito:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
api_key=os.environ["GITHUB_TOKEN"],
|
||||
base_url="https://models.github.ai/inference",
|
||||
model="openai/gpt-4o-mini",
|
||||
)
|
||||
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
```
|
||||
|
||||
Dito, tinatawag natin ang `bind_tools` gamit ang ating `tools` array, kaya ang LLM `llm_with_tools` ay may kaalaman na tungkol sa tool na ito.
|
||||
|
||||
Upang magamit ang bagong LLM na ito, maaari nating i-type ang sumusunod na code:
|
||||
|
||||
```python
query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

Now that we've called `invoke` on this new tool-aware LLM, the `tool_calls` property may be populated. If so, each identified tool call has a `name` and `args` property that says which tool should be called and with which arguments. The full code looks like this:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]

tools = [add]

functions = {
    "add": lambda a, b: a + b
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "What is 3 + 12?"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```

When you run this code, you should see output similar to:

```text
TOOL CALL: 15
CONTENT:
```

What this output means is that the LLM analyzed the prompt "What is 3 + 12?" as a request to call the `add` tool, and it knew that thanks to the tool's name, description, and member field descriptions. The answer is 15 because our code uses the `functions` dictionary to invoke it:

```python
print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
```
### A more interesting tool that calls a web API

A tool that adds two numbers is a nice illustration of how tool calling works, but tools usually do something more interesting, such as calling a web API. Let's do that with this code:
```python
class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]

def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"

functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

query = "Tell me a joke about animals"

# the rest of the code is the same
```

Now if you run this code, you'll get a response saying something like:
```text
TOOL CALL: Chuck Norris once rode a nine foot grizzly bear through an automatic car wash, instead of taking a shower.
CONTENT:
```

Here's the full code:
```python
from langchain_openai import ChatOpenAI
import requests
import os
from typing_extensions import Annotated, TypedDict


class add(TypedDict):
    """Add two integers."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    a: Annotated[int, ..., "First integer"]
    b: Annotated[int, ..., "Second integer"]


class joke(TypedDict):
    """Tell a joke."""

    # Annotations must have the type and can optionally include a default value and description (in that order).
    category: Annotated[str, ..., "The joke category"]


tools = [add, joke]


def get_joke(category: str) -> str:
    response = requests.get(f"https://api.chucknorris.io/jokes/random?category={category}", headers={"Accept": "application/json"})
    if response.status_code == 200:
        return response.json().get("value", f"Here's a {category} joke!")
    return f"Here's a {category} joke!"


functions = {
    "add": lambda a, b: a + b,
    "joke": lambda category: get_joke(category)
}

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

llm_with_tools = llm.bind_tools(tools)

query = "Tell me a joke about animals"

res = llm_with_tools.invoke(query)
if res.tool_calls:
    for tool in res.tool_calls:
        # print("TOOL CALL: ", tool)
        print("TOOL CALL: ", functions[tool["name"]](**tool["args"]))
print("CONTENT: ", res.content)
```
## Embedding

Vectorize the content and compare it using cosine similarity.

https://python.langchain.com/docs/how_to/embed_text/
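The idea above can be sketched end to end. The cosine-similarity math below is exact; the `embed_demo` function is a hypothetical wiring of LangChain's `OpenAIEmbeddings` against the GitHub Models endpoint — the embedding model name is an assumption, so treat that part as a sketch rather than a verified call:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # dot(a, b) / (|a| * |b|): 1.0 means same direction, near 0.0 means unrelated
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_documents(query_vec: list[float], doc_vecs: list[list[float]]) -> list[int]:
    # indices of the documents, most similar to the query first
    scores = [cosine_similarity(query_vec, v) for v in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)

def embed_demo() -> None:
    # Hypothetical: assumes the GitHub Models endpoint serves this embedding model.
    import os
    from langchain_openai import OpenAIEmbeddings

    embeddings = OpenAIEmbeddings(
        api_key=os.environ["GITHUB_TOKEN"],
        base_url="https://models.github.ai/inference",
        model="openai/text-embedding-3-small",
    )
    docs = ["Cats purr when content.", "The stock market fell today."]
    doc_vecs = embeddings.embed_documents(docs)
    query_vec = embeddings.embed_query("Tell me about pets")
    print(docs[rank_documents(query_vec, doc_vecs)[0]])

# toy vectors make the geometry visible without calling an API
print(rank_documents([1.0, 0.0], [[0.0, 1.0], [0.9, 0.1]]))  # → [1, 0]
```

`embed_documents` and `embed_query` are the standard LangChain embeddings interface; only the endpoint/model pairing above is assumed.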
### Document loaders

PDF and CSV
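LangChain ships loaders such as `PyPDFLoader` and `CSVLoader` (in `langchain_community.document_loaders`) that turn files into a list of documents you can then embed. A minimal standard-library sketch of what a CSV loader produces — one document per row, with the row serialized into the page content:

```python
import csv
import io

def load_csv(text: str) -> list[dict]:
    # mimic a CSV document loader: one "document" per row,
    # page_content is the row serialized as "column: value" lines
    rows = list(csv.DictReader(io.StringIO(text)))
    docs = []
    for i, row in enumerate(rows):
        content = "\n".join(f"{k}: {v}" for k, v in row.items())
        docs.append({"page_content": content, "metadata": {"row": i}})
    return docs

sample = "name,role\nPicard,Captain\nData,Lieutenant Commander\n"
docs = load_csv(sample)
print(len(docs))                 # → 2
print(docs[0]["page_content"])   # prints "name: Picard" and "role: Captain"
```

The real loaders return `Document` objects with the same `page_content`/`metadata` shape, which is why the rest of an embedding pipeline doesn't care which file type the text came from.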
## Building the app

TODO
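While this section is still to be written, the pieces above already suggest the shape of an app: a loop that keeps the message list as conversation state and re-invokes the model each turn. A minimal sketch, with the model call stubbed out by a hypothetical `fake_llm` so the turn logic runs offline (swap it for a call through the `ChatOpenAI` client shown earlier):

```python
def chat_turn(messages: list[dict], user_text: str, invoke) -> str:
    # append the user's message, call the model on the whole history,
    # then append the reply so the next turn sees it
    messages.append({"role": "user", "content": user_text})
    reply = invoke(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply

def fake_llm(messages: list[dict]) -> str:
    # stand-in for a real model call so the loop is testable offline
    return f"echo: {messages[-1]['content']}"

history = [{"role": "system", "content": "You are Captain Picard"}]
print(chat_turn(history, "Tell me about you", fake_llm))  # → echo: Tell me about you
print(len(history))  # → 3 (system + user + assistant)
```

The key design point is that the model itself is stateless: memory lives entirely in the growing `history` list, exactly as in the conversation examples.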
## Assignment

## Summary

---

**Disclaimer**:
This document has been translated using the AI translation service [Co-op Translator](https://github.com/Azure/co-op-translator). While we strive for accuracy, please be aware that automated translations may contain errors or inaccuracies. The original document in its native language should be considered the authoritative source. For critical information, professional human translation is recommended. We are not liable for any misunderstandings or misinterpretations arising from the use of this translation.
<!--
CO_OP_TRANSLATOR_METADATA:
{
"original_hash": "5fe046e7729ae6a24c717884bf875917",
"translation_date": "2025-10-11T14:28:48+00:00",
"source_file": "10-ai-framework-project/README.md",
"language_code": "vi"
}
-->
# AI Framework

There are many AI frameworks out there that, when used, can drastically speed up the time it takes to build a project. In this project, we'll focus on understanding what problems these frameworks address, and build such a project ourselves.
## Why a framework

When it comes to using AI, there are different approaches and different reasons for choosing those approaches; here are some:

- **No SDK**. Most AI models let you interact with the model directly via, for example, HTTP requests. That approach works and may sometimes be your only option if an SDK is missing.
- **SDK**. Using an SDK is usually the recommended approach, as it lets you write less code to interact with your model. It's usually limited to a specific model, and if you use different models, you may need to write new code to support those additional models.
- **Framework**. A framework usually takes things to another level, in the sense that if you need to use different models, there's one API for all of them; what differs is usually the initial setup. Additionally, frameworks bring useful abstractions: in the AI space, they can deal with tools, memory, workflows, agents and more while you write less code. Because frameworks are usually opinionated, they can be really helpful if you buy into how they do things, but they may fall short if you try to do something bespoke the framework isn't built for. Sometimes a framework can also simplify too much, and you may miss learning an important topic that later hurts performance.

Generally, use the right tool for the job.
## Introduction

In this lesson, we'll learn how to:

- Use a popular AI framework.
- Address common problems like conversations, tool usage, memory and context.
- Leverage this to build AI apps.
## Your first prompt

In our first app example, we'll learn how to connect to an AI model and query it using a prompt.

### Using Python

In this example, we'll use Langchain to connect to GitHub Models. We can use a class called `ChatOpenAI` and give it the fields `api_key`, `base_url` and `model`. The token is automatically populated within GitHub Codespaces; if you're running the app locally, you need to set up a personal access token for this to work.
```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

response = llm.invoke("What's the capital of France?")
print(response.content)
```

In this code, we:

- Call `ChatOpenAI` to create a client.
- Use `llm.invoke` with a prompt to create a response.
- Print the response with `print(response.content)`.

You should see a response similar to:

```text
The capital of France is Paris.
```
## Conversations

In the preceding section, you saw how we used what's normally known as zero-shot prompting: a single prompt followed by a response.

However, you often find yourself in a situation where you need to maintain a conversation of several messages exchanged between you and the AI assistant.

### Using Python

In Langchain, we can store the conversation in a list. A `HumanMessage` represents a message from a user, and a `SystemMessage` is a message meant to set the "personality" of the AI. In the example below, you see how we instruct the AI to assume the personality of Captain Picard, while the user asks "Tell me about you" as the prompt.
```python
messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]
```

The full code for this example looks like this:
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)
```

You should see a result similar to:
```text
I am Captain Jean-Luc Picard, the commanding officer of the USS Enterprise (NCC-1701-D), a starship in the United Federation of Planets. My primary mission is to explore new worlds, seek out new life and new civilizations, and boldly go where no one has gone before.

I believe in the importance of diplomacy, reason, and the pursuit of knowledge. My crew is diverse and skilled, and we often face challenges that test our resolve, ethics, and ingenuity. Throughout my career, I have encountered numerous species, grappled with complex moral dilemmas, and have consistently sought peaceful solutions to conflicts.

I hold the ideals of the Federation close to my heart, believing in the importance of cooperation, understanding, and respect for all sentient beings. My experiences have shaped my leadership style, and I strive to be a thoughtful and just captain. How may I assist you further?
```
To keep the state of the conversation, you can append the response from an exchange so the conversation is remembered; here's how to do that:

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    api_key=os.environ["GITHUB_TOKEN"],
    base_url="https://models.github.ai/inference",
    model="openai/gpt-4o-mini",
)

messages = [
    SystemMessage(content="You are Captain Picard of the Starship Enterprise"),
    HumanMessage(content="Tell me about you"),
]

response = llm.invoke(messages)
print(response.content)

print("---- Next ----")

messages.append(response)
messages.append(HumanMessage(content="Now that I know about you, I'm Chris, can I be in your crew?"))

response = llm.invoke(messages)
print(response.content)
```
What we see in the conversation above is how we invoke the LLM twice: first with a conversation consisting of just two messages, and then a second time with more messages added to the conversation.

In fact, if you run this, you'll see the second response being something like:

```text
Welcome aboard, Chris! It's always a pleasure to meet those who share a passion for exploration and discovery. While I cannot formally offer you a position on the Enterprise right now, I encourage you to pursue your aspirations. We are always in need of talented individuals with diverse skills and backgrounds.

If you are interested in space exploration, consider education and training in the sciences, engineering, or diplomacy. The values of curiosity, resilience, and teamwork are crucial in Starfleet. Should you ever find yourself on a starship, remember to uphold the principles of the Federation: peace, understanding, and respect for all beings. Your journey can lead you to remarkable adventures, whether in the stars or on the ground. Engage!
```

I'll take that as a maybe ;)
## Streaming responses

TODO
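Until this section is filled in, here's the gist: LangChain chat models expose a `stream` method that yields chunks as they arrive, instead of waiting for the full reply. The chunk handling below is kept pure so it runs without a model; `stream_demo` shows the hypothetical wiring to the `ChatOpenAI` client from earlier:

```python
from collections.abc import Iterable

def print_stream(chunks: Iterable[str]) -> str:
    # print each chunk as it arrives, and return the assembled reply
    parts = []
    for chunk in chunks:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    print()
    return "".join(parts)

def stream_demo(llm) -> str:
    # with a real LangChain chat model, each streamed chunk carries .content
    return print_stream(chunk.content for chunk in llm.stream("Tell me a story"))

# simulated chunks make the behavior visible offline
full = print_stream(["The ", "capital ", "of ", "France ", "is ", "Paris."])
print(full)  # → The capital of France is Paris.
```

Streaming matters for chat UIs: the user starts reading after the first chunk rather than after the whole generation finishes.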
## Prompt templates

TODO
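Until this section is filled in, here's the gist: a prompt template is a prompt with named placeholders filled in at call time; in LangChain that's `ChatPromptTemplate.from_messages`. A standard-library sketch of the idea, with the hypothetical LangChain equivalent shown in comments:

```python
def build_messages(persona: str, question: str) -> list[tuple[str, str]]:
    # fill the placeholders of a fixed two-message template
    template = [
        ("system", "You are {persona}"),
        ("human", "{question}"),
    ]
    return [(role, text.format(persona=persona, question=question))
            for role, text in template]

# LangChain equivalent (sketch):
#   from langchain_core.prompts import ChatPromptTemplate
#   prompt = ChatPromptTemplate.from_messages(
#       [("system", "You are {persona}"), ("human", "{question}")])
#   llm.invoke(prompt.invoke({"persona": "...", "question": "..."}))

msgs = build_messages("Captain Picard of the Starship Enterprise", "Tell me about you")
print(msgs[0])  # → ('system', 'You are Captain Picard of the Starship Enterprise')
```

The win over string concatenation is that the template is defined once and reused, so the system prompt and the variable parts can't drift apart across call sites.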
## Structured output

TODO
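Until this section is filled in, here's the gist: structured output means asking the model to reply as data matching a schema rather than free text. LangChain exposes this as `llm.with_structured_output(Schema)`, using the same `TypedDict`-plus-`Annotated` style as the tool definitions earlier. A sketch — `structured_demo` is hypothetical wiring, while the validation half runs without a model:

```python
from typing import Annotated, TypedDict

class Joke(TypedDict):
    """A joke with a setup and a punchline."""

    setup: Annotated[str, ..., "The setup of the joke"]
    punchline: Annotated[str, ..., "The punchline of the joke"]

def validate(data: dict, schema: type) -> dict:
    # check that the reply carries every field the schema declares
    missing = [k for k in schema.__annotations__ if k not in data]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return data

def structured_demo(llm) -> dict:
    # hypothetical wiring with the ChatOpenAI client from earlier
    structured_llm = llm.with_structured_output(Joke)
    return validate(structured_llm.invoke("Tell me a joke"), Joke)

# offline check of the validation step
print(validate({"setup": "Why did the chicken cross the road?",
                "punchline": "To get to the other side."}, Joke))
```

Getting a dict back instead of prose is what makes the result safe to feed into the rest of a program, which is the whole point of structured output.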