# Chat Project

This chat project shows how to build a Chat Assistant using GitHub Models.

Here's what the finished project looks like:
*Chat app*
For some context, building chat assistants with generative AI is a great way to start learning about AI. In this lesson you'll learn how to integrate generative AI into a web app, so let's begin.

## Connecting to Generative AI

For the backend, we're using GitHub Models. It's a great service that lets you use AI for free. Go to its playground and grab the code that corresponds to your chosen backend language. Here's what it looks like at the [GitHub Models Playground](https://github.com/marketplace/models/azure-openai/gpt-4o-mini/playground):
*GitHub Models AI Playground*
As mentioned, select the "Code" tab and your chosen runtime.
*Playground choice*
In this case we select Python, which means we grab this code:

```python
"""Run this model in Python

> pip install openai
"""
import os
from openai import OpenAI

# To authenticate with the model you will need to generate a personal access token (PAT) in your GitHub settings.
# Create your PAT token by following instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
client = OpenAI(
    base_url="https://models.github.ai/inference",
    api_key=os.environ["GITHUB_TOKEN"],
)

response = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": "",
        },
        {
            "role": "user",
            "content": "What is the capital of France?",
        }
    ],
    model="openai/gpt-4o-mini",
    temperature=1,
    max_tokens=4096,
    top_p=1
)

print(response.choices[0].message.content)
```

Let's clean this code up a bit so it's reusable; this goes in *llm.py*:

```python
# llm.py
import os
from openai import OpenAI

# the client reads your PAT from the GITHUB_TOKEN environment variable
client = OpenAI(
    base_url="https://models.github.ai/inference",
    api_key=os.environ["GITHUB_TOKEN"],
)

def call_llm(prompt: str, system_message: str):
    response = client.chat.completions.create(
        messages=[
            {
                "role": "system",
                "content": system_message,
            },
            {
                "role": "user",
                "content": prompt,
            }
        ],
        model="openai/gpt-4o-mini",
        temperature=1,
        max_tokens=4096,
        top_p=1
    )
    return response.choices[0].message.content
```

With this `call_llm` function we can now take a prompt and a system prompt, and the function will return the result.

### Customizing the AI Assistant

If you want to customize the AI assistant, you can specify how you want it to behave by populating the system prompt like so:

```python
call_llm("Tell me about you", "You're Albert Einstein, you only know of things in the time you were alive")
```

## Exposing It via a Web API

Great, the AI part is done; let's see how we can integrate it into a Web API. For the Web API, we're choosing to use Flask, but any web framework should do. Here's the code:

```python
# api.py
from flask import Flask, request, jsonify
from llm import call_llm
from flask_cors import CORS

app = Flask(__name__)
CORS(app)   # allow all origins ("*"); restrict to e.g. example.com in production

@app.route("/", methods=["GET"])
def index():
    return "Welcome to this API. Call POST /hello with 'message': 'my message' as JSON payload"

@app.route("/hello", methods=["POST"])
def hello():
    # get message from request body  { "message": "do this task for me" }
    data = request.get_json()
    message = data.get("message", "")

    response = call_llm(message, "You are a helpful assistant.")
    return jsonify({
        "response": response
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Here we create a Flask API and define the default route "/" and "/hello". The latter route is meant to be used by our frontend to pass questions to it.

To integrate *llm.py*, here's what we need to do:

- Import the `call_llm` function:

  ```python
  from llm import call_llm
  from flask import Flask, request
  ```

- Call it from the "/hello" route:

  ```python
  @app.route("/hello", methods=["POST"])
  def hello():
      # get message from request body  { "message": "do this task for me" }
      data = request.get_json()
      message = data.get("message", "")

      response = call_llm(message, "You are a helpful assistant.")
      return jsonify({
          "response": response
      })
  ```

Here we parse the incoming request to retrieve the `message` property from the JSON body. After that we call the LLM with this call:

```python
response = call_llm(message, "You are a helpful assistant")

# return the response as JSON
return jsonify({"response": response})
```

Great, that's everything we need.
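Before wiring up a frontend, it's worth sanity-checking the endpoint by hand. Note that the client in *llm.py* reads your PAT from the `GITHUB_TOKEN` environment variable, so it has to be set in the shell that starts the API. Here's a minimal sketch, assuming the API is started as described below and listens on port 5000 (the token value and the test prompt are placeholders):

```sh
# make the PAT available to the OpenAI client in llm.py
export GITHUB_TOKEN="<your personal access token>"

# with `python api.py` running in another terminal:
curl -X POST http://localhost:5000/hello \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the capital of France?"}'
```

If everything is wired up correctly, you should get back a JSON body of the form `{ "response": "..." }`.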
### Setting Up CORS

We should mention that we set up something called CORS, cross-origin resource sharing. This means that because our backend and frontend will run on different ports, we need to allow the frontend to call into the backend. There's a piece of code in *api.py* that sets this up:

```python
from flask_cors import CORS

app = Flask(__name__)
CORS(app)   # allow all origins ("*"); restrict to e.g. example.com in production
```

Right now it's been set up to allow "*", which is all origins, and that's a bit insecure; we should restrict it once we go to production.

## Running Your Project

OK, so we have *llm.py* and *api.py*; how can we make this work with a backend? Well, there are two things we need to do:

- Install dependencies:

  ```sh
  cd backend
  python -m venv venv
  source ./venv/bin/activate
  pip install openai flask flask-cors
  ```

- Start the API:

  ```sh
  python api.py
  ```

  If you're in Codespaces, you need to go to Ports in the bottom part of the editor, right-click on the port, select "Port Visibility", and choose "Public".

### Working on a Frontend

Now that we have an API up and running, let's create a frontend for it: a bare-minimum frontend that we'll improve stepwise. In a *frontend* folder, create the following:

```text
backend/
frontend/
  index.html
  app.js
  styles.css
```

Let's start with **index.html**:

```html