# Chat Project

This chat project shows how to build a Chat Assistant using GitHub Models.

Here's what the finished project looks like:
Chat app
For context, building chat assistants with generative AI is a great way to start learning about AI. In this lesson you'll learn how to integrate generative AI into a web app, so let's get started.

## Connecting to generative AI

For the backend, we're using GitHub Models. It's a great service that lets you use AI for free. Go to its playground and grab the code that corresponds to your chosen backend language. Here's what it looks like in the [GitHub Models Playground](https://github.com/marketplace/models/azure-openai/gpt-4o-mini/playground)
GitHub Models AI Playground
As mentioned, select the "Code" tab and the runtime of your choice.
playground choice
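Whichever runtime you pick, the code you copy authenticates by reading a GitHub personal access token (PAT) from the `GITHUB_TOKEN` environment variable, so set it before running anything. A minimal sketch, assuming a bash-like shell (the token value is a placeholder, not a real token):

```sh
# Generate a PAT in your GitHub settings, then export it so the client
# code can pick it up via os.environ["GITHUB_TOKEN"].
export GITHUB_TOKEN="<your-personal-access-token>"
```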
### Using Python

In this case we pick Python, which means we grab this code:

```python
"""Run this model in Python

> pip install openai
"""
import os
from openai import OpenAI

# To authenticate with the model you will need to generate a personal access token (PAT) in your GitHub settings.
# Create your PAT token by following instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
client = OpenAI(
    base_url="https://models.github.ai/inference",
    api_key=os.environ["GITHUB_TOKEN"],
)

response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "",
        },
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ],
    model="openai/gpt-4o-mini",
    temperature=1,
    max_tokens=4096,
    top_p=1
)

print(response.choices[0].message.content)
```

Let's clean this code up a bit so it's reusable:

```python
def call_llm(prompt: str, system_message: str):
    response = client.chat.completions.create(
        messages=[
            {
                "role": "system",
                "content": system_message,
            },
            {
                "role": "user",
                "content": prompt,
            }
        ],
        model="openai/gpt-4o-mini",
        temperature=1,
        max_tokens=4096,
        top_p=1
    )

    return response.choices[0].message.content
```

With this `call_llm` function we can now pass in a prompt and a system prompt, and the function returns the result.

### Customizing the AI assistant

If you want to customize the AI assistant, you can specify how you want it to behave by populating the system prompt like so:

```python
call_llm("Tell me about you", "You're Albert Einstein, you only know of things in the time you were alive")
```

## Exposing it through a Web API

Great, the AI part is done; let's see how we can integrate it into a Web API. For the Web API we're choosing Flask, but any web framework should do. Here's the code:

### Using Python

```python
# api.py
from flask import Flask, request, jsonify
from llm import call_llm
from flask_cors import CORS

app = Flask(__name__)
CORS(app)   # * example.com

@app.route("/", methods=["GET"])
def index():
    return "Welcome to this API. Call POST /hello with 'message': 'my message' as JSON payload"

@app.route("/hello", methods=["POST"])
def hello():
    # get message from request body  { "message": "do this task for me" }
    data = request.get_json()
    message = data.get("message", "")

    response = call_llm(message, "You are a helpful assistant.")
    return jsonify({
        "response": response
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Here we create a Flask API and define a default route "/" and a "/hello" route. The latter is meant to be used by our frontend to pass its questions to.

To integrate *llm.py*, here's what we need to do:

- Import the `call_llm` function:

  ```python
  from llm import call_llm
  from flask import Flask, request
  ```

- Call it from the "/hello" route:

  ```python
  @app.route("/hello", methods=["POST"])
  def hello():
      # get message from request body  { "message": "do this task for me" }
      data = request.get_json()
      message = data.get("message", "")

      response = call_llm(message, "You are a helpful assistant.")
      return jsonify({
          "response": response
      })
  ```

  Here we parse the incoming request to retrieve the `message` property from the JSON body. Then we call the LLM with this call:

  ```python
  response = call_llm(message, "You are a helpful assistant")

  # return the response as JSON
  return jsonify({ "response": response })
  ```

Great, now we've done what's needed.
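Once the API is running (see the run instructions below), you can sanity-check the `/hello` route with a quick request. A minimal sketch, assuming the server is listening on port 5000 as configured above:

```sh
# Send a JSON payload to the /hello route and print the assistant's reply.
curl -X POST http://localhost:5000/hello \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the capital of France?"}'
```

The reply should come back as JSON with a `response` property, matching the `jsonify` call in the route.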
## Configuring CORS

It's worth calling out that we set up something called CORS, cross-origin resource sharing. Because our backend and frontend will run on different ports, we need to allow the frontend to call into the backend.

### Using Python

There's a piece of code in *api.py* that sets this up:

```python
from flask_cors import CORS

app = Flask(__name__)
CORS(app)   # * example.com
```

Right now it's set up to allow "*", which means all origins. That's not very safe, so we should restrict it once we go to production.

## Run your project

To run your project, you need to start your backend first and then your frontend.

### Using Python

Ok, so we have *llm.py* and *api.py*; how can we make this work as a backend? Well, there are two things we need to do:

- Install dependencies:

  ```sh
  cd backend
  python -m venv venv
  source ./venv/bin/activate

  pip install openai flask flask-cors
  ```

- Start the API:

  ```sh
  python api.py
  ```

If you're in Codespaces, you need to go to Ports in the bottom part of the editor, right-click on it, click "Port Visibility" and select "Public".

### Work on the frontend

Now that we have an API up and running, let's create a frontend for it. A bare-minimum frontend that we'll improve step by step. In a *frontend* folder, create the following:

```text
backend/
frontend/
  index.html
  app.js
  styles.css
```

Let's start with **index.html**:

```html