add speech web demo

pull/2039/head
iftaken 2 years ago
parent 474373bceb
commit e68f1ce6f5

@ -0,0 +1,16 @@
*/.vscode/*
*.wav
*/resource/*
.Ds*
*.pyc
*.pcm
*.npy
*.diff
*.sqlite
*/static/*
*.pdparams
*.pdiparams*
*.pdmodel
*/source/*
*/PaddleSpeech/*

@ -0,0 +1,168 @@
# Paddle Speech Demo
PaddleSpeechDemo is a demo project built around the speech-interaction features of PaddleSpeech. It is meant to help you get started with PaddleSpeech and to build your own applications on top of it.
The speech-interaction part is powered by PaddleSpeech, the dialogue and information-extraction parts use PaddleNLP, and the web front end is built with Vue 3.
Main features:
+ Voice chat: PaddleSpeech speech recognition + speech synthesis, with the dialogue part based on PaddleNLP's chit-chat capability
+ Voice-print recognition: a demo of PaddleSpeech's speaker verification
+ Speech recognition: supports three modes: streaming recognition, end-to-end recognition, and audio-file recognition
+ Speech synthesis: supports both streaming synthesis and end-to-end synthesis
+ Voice command: smart reimbursement of transportation expenses, based on PaddleSpeech ASR and PaddleNLP information extraction
Demo:
![demo](docs/效果展示.png)
## Installation
### Backend setup
```
# install dependencies
cd speech_server
pip install -r requirements.txt
```
### Frontend setup
The front end depends on Node.js, which must be installed beforehand; make sure npm is available (tested with npm 8.3.1). We recommend installing a stable Node.js release from the [official site](https://nodejs.org/en/).
```
# enter the front-end directory
cd web_client
# install yarn (skip if already installed)
npm install -g yarn
# install the front-end dependencies with yarn
yarn install
```
## Starting the services
### Start the backend
```
cd speech_server
# port 8010 by default
python main.py --port 8010
```
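Once the backend is up you can sanity-check it before wiring up the front end. Below is a minimal sketch (an illustration, not part of the project: it assumes the server is on localhost:8010, uses the `/nlp/chat` and `/tts/offline` routes defined in `speech_server/main.py`, and needs the `requests` package, which is not in requirements.txt):
```python
import base64
import requests

BASE = "http://localhost:8010"

# chit-chat: POST {"chat": ...} -> {"code": 0, "result": ..., "message": "ok"}
res = requests.post(f"{BASE}/nlp/chat", json={"chat": "今天天气怎么样"})
print(res.json())

# offline TTS: POST {"text": ...} -> "result" holds base64-encoded wav data
res = requests.post(f"{BASE}/tts/offline", json={"text": "欢迎使用"})
with open("out.wav", "wb") as f:
    f.write(base64.b64decode(res.json()["result"]))
```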
### Start the front end
```
cd web_client
yarn dev --port 8011
```
By default the front end expects the backend at localhost, so make sure the backend server and the browser opening the page run on the same machine. If they run on different machines, see the FAQ entry "How do I point the front end at a backend deployed on another machine or port?" below.
## Running with Docker
### Backend docker
The backend uses the [official paddlepaddle docker image](https://www.paddlepaddle.org.cn/); the CPU version is shown here.
```
# clone the PaddleSpeech project
cd PaddleSpeechServer
git clone https://github.com/PaddlePaddle/PaddleSpeech.git
# pull the image
docker pull registry.baidubce.com/paddlepaddle/paddle:2.3.0
# start the container
docker run --name paddle -it -p 8010:8010 -v $PWD:/paddle registry.baidubce.com/paddlepaddle/paddle:2.3.0 /bin/bash
# inside the container
cd /paddle
# install dependencies
pip install -r requirements.txt
# start the service
python main.py --port 8010
```
### Frontend docker
The front end can use the [official node docker image](https://hub.docker.com/_/node) directly.
```shell
docker pull node
```
Install the dependencies inside the image:
```shell
cd PaddleSpeechWebClient
# map port 8011 to the host
docker run -it -p 8011:8011 -v $PWD:/paddle node:latest /bin/bash
# inside the container
cd /paddle
# install dependencies
yarn install
# start the front end
yarn dev --port 8011
```
## FAQ
#### Q: How do I install Node.js?
A: See this [Node.js installation tutorial](https://www.runoob.com/nodejs/nodejs-install-setup.html) and make sure npm is available.
#### Q: How do I point the front end at a backend deployed on another machine or port?
A: The backend address is configured in two files.
First, edit `PaddleSpeechWebClient/vite.config.js`:
```javascript
server: {
  host: "0.0.0.0",
  proxy: {
    "/api": {
      target: "http://localhost:8010", // change this to the backend address
      changeOrigin: true,
      rewrite: (path) => path.replace(/^\/api/, ""),
    },
  },
}
```
Then edit `PaddleSpeechWebClient/src/api/API.js` (the websocket proxy configuration does not take effect, so the websocket addresses must be changed directly in this file):
```javascript
// websocket (change these to the backend address)
CHAT_SOCKET_RECORD: 'ws://localhost:8010/ws/asr/offlineStream', // ChatBot websocket endpoint
ASR_SOCKET_RECORD: 'ws://localhost:8010/ws/asr/onlineStream', // streaming ASR endpoint
TTS_SOCKET_RECORD: 'ws://localhost:8010/ws/tts/online', // streaming TTS endpoint
```
#### Q: The front end cannot record when the backend is reached via an IP address
A: This is a browser security restriction (getUserMedia is blocked on insecure origins); you need to change the browser configuration and restart the browser. See [js-audio-recorder reports the browser does not support getUserMedia](https://blog.csdn.net/YRY_LIKE_YOU/article/details/113745273).
Chrome setting: chrome://flags/#unsafely-treat-insecure-origin-as-secure
## References
Recording audio in Vue: https://blog.csdn.net/qq_41619796/article/details/107865602#t1
Streaming audio playback in the front end:
https://github.com/AnthumChris/fetch-stream-audio
https://bm.enthuses.me/buffered.php?bref=6677

@ -0,0 +1,103 @@
# This is the parameter configuration file for streaming tts server.
#################################################################################
# SERVER SETTING #
#################################################################################
host: 0.0.0.0
port: 8092
# The task format in the engine_list is: <speech task>_<engine type>
# engine_list choices = ['tts_online', 'tts_online-onnx'], the inference speed of tts_online-onnx is faster than tts_online.
# protocol choices = ['websocket', 'http']
protocol: 'http'
engine_list: ['tts_online-onnx']
#################################################################################
# ENGINE CONFIG #
#################################################################################
################################### TTS #########################################
################### speech task: tts; engine_type: online #######################
tts_online:
# am (acoustic model) choices=['fastspeech2_csmsc', 'fastspeech2_cnndecoder_csmsc']
# fastspeech2_cnndecoder_csmsc support streaming am infer.
am: 'fastspeech2_csmsc'
am_config:
am_ckpt:
am_stat:
phones_dict:
tones_dict:
speaker_dict:
spk_id: 0
# voc (vocoder) choices=['mb_melgan_csmsc, hifigan_csmsc']
# Both mb_melgan_csmsc and hifigan_csmsc support streaming voc inference
voc: 'mb_melgan_csmsc'
voc_config:
voc_ckpt:
voc_stat:
# others
lang: 'zh'
device: 'cpu' # set 'gpu:id' or 'cpu'
# am_block and am_pad are used only by the fastspeech2_cnndecoder_onnx model for streaming am inference;
# with am_pad set to 12, the streaming synthetic audio is identical to the non-streaming output
am_block: 72
am_pad: 12
# voc_block and voc_pad control streaming inference of the voc model;
# when the voc model is mb_melgan_csmsc, voc_pad set to 14 makes the streaming output identical to the non-streaming output; with pad as low as 7 the streaming audio still sounds normal
# when the voc model is hifigan_csmsc, voc_pad set to 19 makes the streaming output identical to the non-streaming output; with voc_pad set to 14 the streaming audio still sounds normal
voc_block: 36
voc_pad: 14
#################################################################################
# ENGINE CONFIG #
#################################################################################
################################### TTS #########################################
################### speech task: tts; engine_type: online-onnx #######################
tts_online-onnx:
# am (acoustic model) choices=['fastspeech2_csmsc_onnx', 'fastspeech2_cnndecoder_csmsc_onnx']
# fastspeech2_cnndecoder_csmsc_onnx support streaming am infer.
am: 'fastspeech2_cnndecoder_csmsc_onnx'
# am_ckpt is a list, if am is fastspeech2_cnndecoder_csmsc_onnx, am_ckpt = [encoder model, decoder model, postnet model];
# if am is fastspeech2_csmsc_onnx, am_ckpt = [ckpt model];
am_ckpt: # list
am_stat:
phones_dict:
tones_dict:
speaker_dict:
spk_id: 0
am_sample_rate: 24000
am_sess_conf:
device: "cpu" # set 'gpu:id' or 'cpu'
use_trt: False
cpu_threads: 4
# voc (vocoder) choices=['mb_melgan_csmsc_onnx, hifigan_csmsc_onnx']
# Both mb_melgan_csmsc_onnx and hifigan_csmsc_onnx support streaming voc inference
voc: 'hifigan_csmsc_onnx'
voc_ckpt:
voc_sample_rate: 24000
voc_sess_conf:
device: "cpu" # set 'gpu:id' or 'cpu'
use_trt: False
cpu_threads: 4
# others
lang: 'zh'
# am_block and am_pad are used only by the fastspeech2_cnndecoder_onnx model for streaming am inference;
# with am_pad set to 12, the streaming synthetic audio is identical to the non-streaming output
am_block: 72
am_pad: 12
# voc_block and voc_pad control streaming inference of the voc model;
# when the voc model is mb_melgan_csmsc_onnx, voc_pad set to 14 makes the streaming output identical to the non-streaming output; with pad as low as 7 the streaming audio still sounds normal
# when the voc model is hifigan_csmsc_onnx, voc_pad set to 19 makes the streaming output identical to the non-streaming output; with voc_pad set to 14 the streaming audio still sounds normal
voc_block: 36
voc_pad: 14
# voc_upsample should be same as n_shift on voc config.
voc_upsample: 300
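# Reader's note, derived from the values above: each streamed voc chunk covers
# voc_block = 36 mel frames; with voc_upsample = 300 samples per frame (the
# vocoder's n_shift), one chunk yields 36 * 300 = 10800 samples, i.e.
# 10800 / 24000 = 0.45 s of audio at voc_sample_rate = 24000.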

@ -0,0 +1,48 @@
# This is the parameter configuration file for PaddleSpeech Serving.
#################################################################################
# SERVER SETTING #
#################################################################################
host: 0.0.0.0
port: 8090
# The task format in the engine_list is: <speech task>_<engine type>
# task choices = ['asr_online']
# protocol = ['websocket'] (only one can be selected).
# websocket only support online engine type.
protocol: 'websocket'
engine_list: ['asr_online']
#################################################################################
# ENGINE CONFIG #
#################################################################################
################################### ASR #########################################
################### speech task: asr; engine_type: online #######################
asr_online:
model_type: 'conformer_online_wenetspeech'
am_model: # the pdmodel file of am static model [optional]
am_params: # the pdiparams file of am static model [optional]
lang: 'zh'
sample_rate: 16000
cfg_path:
force_yes: True
device: 'cpu' # cpu or gpu:id
decode_method: "attention_rescoring"
continuous_decoding: True # enable continuous decoding when an endpoint is detected
num_decoding_left_chunks: 16
am_predictor_conf:
device: # set 'gpu:id' or 'cpu'
switch_ir_optim: True
glog_info: False # True -> print glog
summary: True # False -> do not show predictor config
chunk_buffer_conf:
window_n: 7 # frame
shift_n: 4 # frame
window_ms: 25 # ms
shift_ms: 10 # ms
sample_rate: 16000
sample_width: 2
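# Reader's note on the buffer sizes above (assuming window_ms/shift_ms are taken
# against sample_rate and sample_width): one 25 ms window is
# 0.025 * 16000 * 2 = 800 bytes of int16 pcm, and one 10 ms shift is 320 bytes.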

@ -0,0 +1,492 @@
# todo:
# 1. start the service
# 2. receive recorded audio and return the recognition result
# 3. receive the ASR result and return the NLP dialogue reply
# 4. receive the NLP reply and return the TTS audio
import base64
import yaml
import os
import json
import datetime
import librosa
import soundfile as sf
import numpy as np
import argparse
import uvicorn
import aiofiles
from typing import Optional, List
from pydantic import BaseModel
from fastapi import FastAPI, Header, File, UploadFile, Form, Cookie, WebSocket, WebSocketDisconnect
from fastapi.responses import StreamingResponse
from starlette.responses import FileResponse
from starlette.middleware.cors import CORSMiddleware
from starlette.requests import Request
from starlette.websockets import WebSocketState as WebSocketState
from src.AudioManeger import AudioMannger
from src.util import *
from src.robot import Robot
from src.WebsocketManeger import ConnectionManager
from src.SpeechBase.vpr import VPR
from paddlespeech.server.engine.asr.online.asr_engine import PaddleASRConnectionHanddler
from paddlespeech.server.utils.audio_process import float2pcm
# parse arguments
parser = argparse.ArgumentParser(
prog='PaddleSpeechDemo', add_help=True)
parser.add_argument(
"--port",
action="store",
type=int,
help="port of the app",
default=8010,
required=False)
args = parser.parse_args()
port = args.port
# config files
tts_config = "conf/tts_online_application.yaml"
asr_config = "conf/ws_conformer_wenetspeech_application_faster.yaml"
asr_init_path = "source/demo/demo.wav"
db_path = "source/db/vpr.sqlite"
ie_model_path = "source/model"
# path config
UPLOAD_PATH = "source/vpr"
WAV_PATH = "source/wav"
base_sources = [
UPLOAD_PATH, WAV_PATH
]
for path in base_sources:
os.makedirs(path, exist_ok=True)
# initialization
app = FastAPI()
chatbot = Robot(asr_config, tts_config, asr_init_path, ie_model_path=ie_model_path)
manager = ConnectionManager()
aumanager = AudioMannger(chatbot)
aumanager.init()
vpr = VPR(db_path, dim = 192, top_k = 5)
# request body schemas
class NlpBase(BaseModel):
chat: str
class TtsBase(BaseModel):
text: str
class Audios:
def __init__(self) -> None:
self.audios = b""
audios = Audios()
######################################################################
########################### ASR service ##############################
#####################################################################
# receive an audio file and return the ASR result
# file upload
@app.post("/asr/offline")
async def speech2textOffline(files: List[UploadFile]):
# only the first file is used
asr_res = ""
for file in files[:1]:
# timestamped file name
now_name = "asr_offline_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
out_file_path = os.path.join(WAV_PATH, now_name)
async with aiofiles.open(out_file_path, 'wb') as out_file:
content = await file.read() # async read
await out_file.write(content) # async write
# return the ASR result
asr_res = chatbot.speech2text(out_file_path)
return SuccessRequest(result=asr_res)
return ErrorRequest(message="no file uploaded")
# receive a file and force-convert the wav to 16 kHz int16
@app.post("/asr/offlinefile")
async def speech2textOfflineFile(files: List[UploadFile]):
# only the first file is used
asr_res = ""
for file in files[:1]:
# timestamped file name
now_name = "asr_offline_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
out_file_path = os.path.join(WAV_PATH, now_name)
async with aiofiles.open(out_file_path, 'wb') as out_file:
content = await file.read() # async read
await out_file.write(content) # async write
# convert the file to a 16 kHz, 16-bit wav
wav, sr = librosa.load(out_file_path, sr=16000)
wav = float2pcm(wav) # float32 to int16
wav_bytes = wav.tobytes() # to bytes
wav_base64 = base64.b64encode(wav_bytes).decode('utf8')
# write the converted file back
now_name = now_name[:-4] + "_16k" + ".wav"
out_file_path = os.path.join(WAV_PATH, now_name)
sf.write(out_file_path,wav,16000)
# return the ASR result
asr_res = chatbot.speech2text(out_file_path)
response_res = {
"asr_result": asr_res,
"wav_base64": wav_base64
}
return SuccessRequest(result=response_res)
return ErrorRequest(message="no file uploaded")
# streaming receive test
@app.post("/asr/online1")
async def speech2textOnlineRecive(files: List[UploadFile]):
audio_bin = b''
for file in files:
content = await file.read()
audio_bin += content
audios.audios += audio_bin
print(f"audios长度变化: {len(audios.audios)}")
return SuccessRequest(message="接收成功")
# measure the environment noise level
@app.post("/asr/collectEnv")
async def collectEnv(files: List[UploadFile]):
for file in files[:1]:
content = await file.read() # async read
# init; the first 44 bytes of a wav file are the header
aumanager.compute_env_volume(content[44:])
vad_ = aumanager.vad_threshold
return SuccessRequest(result=vad_,message="environment noise sampled")
# stop recording
@app.get("/asr/stopRecord")
async def stopRecord():
audios.audios = b""
aumanager.stop()
print("Online录音暂停")
return SuccessRequest(message="停止成功")
# resume recording
@app.get("/asr/resumeRecord")
async def resumeRecord():
aumanager.resume()
print("Online录音恢复")
return SuccessRequest(message="Online录音恢复")
# ASR used by the chat bot
@app.websocket("/ws/asr/offlineStream")
async def websocket_endpoint(websocket: WebSocket):
await manager.connect(websocket)
try:
while True:
asr_res = None
# receive audio bytes; a result is pushed back only when one is ready
data = await websocket.receive_bytes()
if not aumanager.is_pause:
asr_res = aumanager.stream_asr(data)
else:
print("录音暂停")
if asr_res:
await manager.send_personal_message(asr_res, websocket)
aumanager.clear_asr()
except WebSocketDisconnect:
manager.disconnect(websocket)
# await manager.broadcast(f"user-{user}-left")
# print(f"user-{user}-left")
# online streaming ASR
@app.websocket('/ws/asr/onlineStream')
async def websocket_endpoint(websocket: WebSocket):
"""PaddleSpeech Online ASR Server api
Args:
websocket (WebSocket): the websocket instance
"""
#1. the interface waits to accept the websocket protocol header,
# and only after we receive the header is the connection established with the specific thread
await websocket.accept()
#2. if we accept the websocket headers, we will get the online asr engine instance
engine = chatbot.asr.engine
#3. for each websocket connection, we create a PaddleASRConnectionHanddler to process the audio;
# each connection has its own handler instance to process its requests,
# and the PaddleASRConnectionHanddler instance is created only when the client sends the start signal
connection_handler = None
try:
#4. we loop to process the audio package by package according to the protocol,
# and we only break the loop when the client sends the finished signal
while True:
# careful here, changed the source code from starlette.websockets
# 4.1 we wait for the client signal for the specific action
assert websocket.application_state == WebSocketState.CONNECTED
message = await websocket.receive()
websocket._raise_on_disconnect(message)
#4.2 text for the action command and bytes for pcm data
if "text" in message:
# we first parse the specific command
message = json.loads(message["text"])
if 'signal' not in message:
resp = {"status": "ok", "message": "no valid json data"}
await websocket.send_json(resp)
# start command, we create the PaddleASRConnectionHanddler instance to process the audio data
# end command: we process all the remaining audio pcm, return the final result,
# and we break the loop
if message['signal'] == 'start':
resp = {"status": "ok", "signal": "server_ready"}
# do something at the beginning here
# create the instance to process the audio
# connection_handler = chatbot.asr.connection_handler
connection_handler = PaddleASRConnectionHanddler(engine)
await websocket.send_json(resp)
elif message['signal'] == 'end':
# reset the engine for a new connection
# and we will destroy the connection
connection_handler.decode(is_finished=True)
connection_handler.rescoring()
asr_results = connection_handler.get_result()
connection_handler.reset()
resp = {
"status": "ok",
"signal": "finished",
'result': asr_results
}
await websocket.send_json(resp)
break
else:
resp = {"status": "ok", "message": "no valid json data"}
await websocket.send_json(resp)
elif "bytes" in message:
# bytes for the pcm data
message = message["bytes"]
print("###############")
print("len message: ", len(message))
print("###############")
# we extract the remained audio pcm
# and decode for the result in this package data
connection_handler.extract_feat(message)
connection_handler.decode(is_finished=False)
asr_results = connection_handler.get_result()
# return the current period result
# if the engine create the vad instance, this connection will have many period results
resp = {'result': asr_results}
print(resp)
await websocket.send_json(resp)
except WebSocketDisconnect:
pass
######################################################################
########################### NLP service ##############################
#####################################################################
@app.post("/nlp/chat")
async def chatOffline(nlp_base:NlpBase):
chat = nlp_base.chat
if not chat:
return ErrorRequest(message="传入文本为空")
else:
res = chatbot.chat(chat)
return SuccessRequest(result=res)
@app.post("/nlp/ie")
async def ieOffline(nlp_base:NlpBase):
nlp_text = nlp_base.chat
if not nlp_text:
return ErrorRequest(message="传入文本为空")
else:
res = chatbot.ie(nlp_text)
return SuccessRequest(result=res)
######################################################################
########################### TTS service ##############################
#####################################################################
@app.post("/tts/offline")
async def text2speechOffline(tts_base:TtsBase):
text = tts_base.text
if not text:
return ErrorRequest(message="文本为空")
else:
now_name = "tts_"+ datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
out_file_path = os.path.join(WAV_PATH, now_name)
# save to a file, then send it as base64
chatbot.text2speech(text, outpath=out_file_path)
with open(out_file_path, "rb") as f:
data_bin = f.read()
base_str = base64.b64encode(data_bin)
return SuccessRequest(result=base_str)
# http streaming TTS
@app.post("/tts/online")
async def stream_tts(request_body: TtsBase):
text = request_body.text
return StreamingResponse(chatbot.text2speechStreamBytes(text=text))
# websocket streaming TTS
@app.websocket("/ws/tts/online")
async def stream_ttsWS(websocket: WebSocket):
await manager.connect(websocket)
try:
while True:
text = await websocket.receive_text()
# stream the synthesized audio back over the websocket
if text:
for sub_wav in chatbot.text2speechStream(text=text):
# print("发送sub wav: ", len(sub_wav))
res = {
"wav": sub_wav,
"done": False
}
await websocket.send_json(res)
# end of stream
res = {
"wav": sub_wav,
"done": True
}
await websocket.send_json(res)
# manager.disconnect(websocket)
except WebSocketDisconnect:
manager.disconnect(websocket)
######################################################################
########################### VPR service ##############################
#####################################################################
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"])
@app.post('/vpr/enroll')
async def vpr_enroll(table_name: str=None,
spk_id: str=Form(...),
audio: UploadFile=File(...)):
# Enroll the uploaded audio with spk_id into the database
try:
if not spk_id:
return {'status': False, 'msg': "spk_id can not be None"}
# Save the upload data to server.
content = await audio.read()
now_name = "vpr_enroll_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
audio_path = os.path.join(UPLOAD_PATH, now_name)
with open(audio_path, "wb+") as f:
f.write(content)
vpr.vpr_enroll(username=spk_id, wav_path=audio_path)
return {'status': True, 'msg': "Successfully enroll data!"}
except Exception as e:
return {'status': False, 'msg': e}
@app.post('/vpr/recog')
async def vpr_recog(request: Request,
table_name: str=None,
audio: UploadFile=File(...)):
# Voice print recognition online
# try:
# Save the upload data to server.
content = await audio.read()
now_name = "vpr_query_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav"
query_audio_path = os.path.join(UPLOAD_PATH, now_name)
with open(query_audio_path, "wb+") as f:
f.write(content)
spk_ids, paths, scores = vpr.do_search_vpr(query_audio_path)
res = dict(zip(spk_ids, zip(paths, scores)))
# Sort results by similarity score, most similar first
res = sorted(res.items(), key=lambda item: item[1][1], reverse=True)
return res
# except Exception as e:
# return {'status': False, 'msg': e}, 400
@app.post('/vpr/del')
async def vpr_del(spk_id: dict=None):
# Delete a record by spk_id from the database
try:
spk_id = spk_id['spk_id']
if not spk_id:
return {'status': False, 'msg': "spk_id can not be None"}
vpr.vpr_del(username=spk_id)
return {'status': True, 'msg': "Successfully delete data!"}
except Exception as e:
return {'status': False, 'msg': e}, 400
@app.get('/vpr/list')
async def vpr_list():
# Get all records from the database
try:
spk_ids, vpr_ids = vpr.do_list()
return spk_ids, vpr_ids
except Exception as e:
return {'status': False, 'msg': e}, 400
@app.get('/vpr/database64')
async def vpr_database64(vprId: int):
# Get the audio file by vpr id from the database
try:
if not vprId:
return {'status': False, 'msg': "vpr_id can not be None"}
audio_path = vpr.do_get_wav(vprId)
# return base64
# convert the file to a 16 kHz, 16-bit wav
wav, sr = librosa.load(audio_path, sr=16000)
wav = float2pcm(wav) # float32 to int16
wav_bytes = wav.tobytes() # to bytes
wav_base64 = base64.b64encode(wav_bytes).decode('utf8')
return SuccessRequest(result=wav_base64)
except Exception as e:
return {'status': False, 'msg': e}, 400
@app.get('/vpr/data')
async def vpr_data(vprId: int):
# Get the audio file by vpr id from the database
try:
if not vprId:
return {'status': False, 'msg': "vpr_id can not be None"}
audio_path = vpr.do_get_wav(vprId)
return FileResponse(audio_path)
except Exception as e:
return {'status': False, 'msg': e}, 400
if __name__ == '__main__':
uvicorn.run(app=app, host='0.0.0.0', port=port)
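The online ASR websocket above expects a JSON start signal, then raw pcm chunks, then a JSON end signal. Below is a minimal client sketch of that protocol (an illustration under assumptions: server on localhost:8010, a 16 kHz mono int16 wav, and the third-party `websockets` package, which is not in requirements.txt):
```python
import asyncio
import json

import soundfile as sf
import websockets


async def stream_asr(wav_path: str):
    uri = "ws://localhost:8010/ws/asr/onlineStream"
    async with websockets.connect(uri) as ws:
        # 1. send the start signal and wait for server_ready
        await ws.send(json.dumps({"signal": "start"}))
        print(await ws.recv())
        # 2. push pcm bytes chunk by chunk; the server answers every chunk
        #    with the current partial result
        samples, sr = sf.read(wav_path, dtype="int16")
        chunk = 16000  # 1 s of 16 kHz int16 audio
        for start in range(0, len(samples), chunk):
            await ws.send(samples[start:start + chunk].tobytes())
            print(json.loads(await ws.recv()))
        # 3. send the end signal; the server rescores and returns the final result
        await ws.send(json.dumps({"signal": "end"}))
        print(json.loads(await ws.recv()))

if __name__ == "__main__":
    asyncio.run(stream_asr("source/demo/demo.wav"))
```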

@ -0,0 +1,14 @@
aiofiles
fastapi
librosa
numpy
pydantic
scikit_learn
SoundFile
starlette
uvicorn
paddlepaddle
paddlespeech
paddlenlp
faiss-cpu
python-multipart

@ -0,0 +1,173 @@
import numpy as np
import os
import wave
import random
import datetime
from .util import randName
class AudioMannger:
def __init__(self, robot, frame_length=160, frame=10, data_width=2, vad_default = 300):
# raw pcm byte stream
self.audios = b''
self.asr_result = ""
# the core speech robot
self.robot = robot
self.file_dir = "source"
os.makedirs(self.file_dir, exist_ok=True)
self.vad_default = vad_default
self.vad_threshold = vad_default
self.vad_threshold_path = os.path.join(self.file_dir, "vad_threshold.npy")
# 10 ms per frame
self.frame_length = frame_length
# run vad once every 10 frames
self.frame = frame
# int16, two bytes per sample
self.data_width = data_width
# window
self.window_length = frame_length * frame * data_width
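# e.g. with the defaults: 160 samples * 10 frames * 2 bytes = 3200 bytes,
# i.e. 100 ms of int16 pcm per vad window (assuming 16 kHz input)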
# whether speech recording has started
self.on_asr = False
self.silence_cnt = 0
self.max_silence_cnt = 4
self.is_pause = False # recording paused/resumed
def init(self):
if os.path.exists(self.vad_threshold_path):
# a saved average-loudness file exists
self.vad_threshold = np.load(self.vad_threshold_path)
def clear_audio(self):
# clear the accumulated pcm buffer
self.audios = b''
def clear_asr(self):
self.asr_result = ""
def compute_chunk_volume(self, start_index, pcm_bins):
# mean energy over one window
pcm_bin = pcm_bins[start_index: start_index + self.window_length]
# to numpy
pcm_np = np.frombuffer(pcm_bin, np.int16)
# compute the loudness
x = pcm_np.astype(np.float32)
x = np.abs(x)
return np.mean(x)
def is_speech(self, start_index, pcm_bins):
# check that start_index is in range
if start_index > len(pcm_bins):
return False
# check whether the window starting here contains speech
energy = self.compute_chunk_volume(start_index=start_index, pcm_bins=pcm_bins)
# print(energy)
if energy > self.vad_threshold:
return True
else:
return False
def compute_env_volume(self, pcm_bins):
max_energy = 0
start = 0
while start < len(pcm_bins):
energy = self.compute_chunk_volume(start_index=start, pcm_bins=pcm_bins)
if energy > max_energy:
max_energy = energy
start += self.window_length
self.vad_threshold = max_energy + 100 if max_energy > self.vad_default else self.vad_default
# save to file
np.save(self.vad_threshold_path, self.vad_threshold)
print(f"vad 阈值大小: {self.vad_threshold}")
print(f"环境采样保存: {os.path.realpath(self.vad_threshold_path)}")
def stream_asr(self, pcm_bin):
# run endpoint detection over the incoming pcm
start = 0
while start < len(pcm_bin):
if self.is_speech(start_index=start, pcm_bins=pcm_bin):
self.on_asr = True
self.silence_cnt = 0
print("录音中")
self.audios += pcm_bin[ start : start + self.window_length]
else:
if self.on_asr:
self.silence_cnt += 1
if self.silence_cnt > self.max_silence_cnt:
self.on_asr = False
self.silence_cnt = 0
# recording stopped
print("recording stopped")
# save audios as a wav file and feed it to ASR
if len(self.audios) > 2 * 16000:
file_path = os.path.join(self.file_dir, "asr_" + datetime.datetime.strftime(datetime.datetime.now(), '%Y%m%d%H%M%S') + randName() + ".wav")
self.save_audio(file_path=file_path)
self.asr_result = self.robot.speech2text(file_path)
self.clear_audio()
return self.asr_result
else:
# keep accumulating
print("recording: silence")
self.audios += pcm_bin[ start : start + self.window_length]
start += self.window_length
return ""
def save_audio(self, file_path):
print("saving audio")
wf = wave.open(file_path, 'wb') # open the output wav file
wf.setnchannels(1) # mono
wf.setsampwidth(2) # 16-bit samples
wf.setframerate(16000) # 16 kHz sample rate
# write the pcm data into the file
wf.writeframes(self.audios)
# close the file when done
wf.close()
def end(self):
# save audios as a wav file and feed it to ASR
file_path = os.path.join(self.file_dir, "asr.wav")
self.save_audio(file_path=file_path)
return self.robot.speech2text(file_path)
def stop(self):
self.is_pause = True
self.audios = b''
def resume(self):
self.is_pause = False
if __name__ == '__main__':
from robot import Robot
chatbot = Robot()
chatbot.init()
audio_manger = AudioMannger(chatbot)
file_list = [
"source/20220418145230qbenc.pcm",
]
for file in file_list:
with open(file, "rb") as f:
pcm_bin = f.read()
print(len(pcm_bin))
asr_ = audio_manger.stream_asr(pcm_bin=pcm_bin)
print(asr_)
print(audio_manger.end())
print(chatbot.speech2text("source/20220418145230zrxia.wav"))

@ -0,0 +1,87 @@
import numpy as np
import paddle
import librosa
import soundfile
from paddlespeech.server.engine.asr.online.asr_engine import ASREngine
from paddlespeech.server.engine.asr.online.asr_engine import PaddleASRConnectionHanddler
from paddlespeech.server.utils.config import get_config
def readWave(samples):
x_len = len(samples)
chunk_size = 85 * 16 # 85 ms at 16 kHz (1360 samples)
if x_len % chunk_size != 0:
padding_len_x = chunk_size - x_len % chunk_size
else:
padding_len_x = 0
padding = np.zeros((padding_len_x), dtype=samples.dtype)
padded_x = np.concatenate([samples, padding], axis=0)
assert (x_len + padding_len_x) % chunk_size == 0
num_chunk = (x_len + padding_len_x) / chunk_size
num_chunk = int(num_chunk)
for i in range(0, num_chunk):
start = i * chunk_size
end = start + chunk_size
x_chunk = padded_x[start:end]
yield x_chunk
class ASR:
def __init__(self, config_path, ) -> None:
self.config = get_config(config_path)['asr_online']
self.engine = ASREngine()
self.engine.init(self.config)
self.connection_handler = PaddleASRConnectionHanddler(self.engine)
def offlineASR(self, samples, sample_rate=16000):
x_chunk, x_chunk_lens = self.engine.preprocess(samples=samples, sample_rate=sample_rate)
self.engine.run(x_chunk, x_chunk_lens)
result = self.engine.postprocess()
self.engine.reset()
return result
def onlineASR(self, samples:bytes=None, is_finished=False):
if not is_finished:
# streaming, not finished yet
self.connection_handler.extract_feat(samples)
self.connection_handler.decode(is_finished)
asr_results = self.connection_handler.get_result()
return asr_results
else:
# end of the stream
self.connection_handler.decode(is_finished=True)
self.connection_handler.rescoring()
asr_results = self.connection_handler.get_result()
self.connection_handler.reset()
return asr_results
if __name__ == '__main__':
config_path = r"../../PaddleSpeech/paddlespeech/server/conf/ws_conformer_application.yaml"
wav_path = r"../../source/demo/demo_16k.wav"
samples, sample_rate = soundfile.read(wav_path, dtype='int16')
asr = ASR(config_path=config_path)
end_result = asr.offlineASR(samples=samples, sample_rate=sample_rate)
print("端到端识别结果:", end_result)
for sub_wav in readWave(samples=samples):
# print(sub_wav)
message = sub_wav.tobytes()
online_result = asr.onlineASR(message, is_finished=False)
print("streaming result: ", online_result)
online_result = asr.onlineASR(is_finished=True)
print("streaming result: ", online_result)

@ -0,0 +1,28 @@
from paddlenlp import Taskflow
class NLP:
def __init__(self, ie_model_path=None):
schema = ["时间", "出发地", "目的地", "费用"]
if ie_model_path:
self.ie_model = Taskflow("information_extraction",
schema=schema, task_path=ie_model_path)
else:
self.ie_model = Taskflow("information_extraction",
schema=schema)
self.dialogue_model = Taskflow("dialogue")
def chat(self, text):
result = self.dialogue_model([text])
return result[0]
def ie(self, text):
result = self.ie_model(text)
return result
if __name__ == '__main__':
ie_model_path = "../../source/model/"
nlp = NLP(ie_model_path=ie_model_path)
text = "今天早上我从大牛坊去百度科技园花了七百块钱"
print(nlp.ie(text))
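# Expected shape of the printed result (an assumption based on PaddleNLP's
# Taskflow("information_extraction") output, not verified here): a list with one
# dict per input, mapping each schema label to candidate spans, e.g.
# [{'时间': [{'text': '今天早上', 'start': ..., 'end': ..., 'probability': ...}], ...}]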

@ -0,0 +1,152 @@
import base64
import sqlite3
import os
import numpy as np
def dict_factory(cursor, row):
d = {}
for idx, col in enumerate(cursor.description):
d[col[0]] = row[idx]
return d
class DataBase(object):
def __init__(self, db_path:str):
db_path = os.path.realpath(db_path)
if os.path.exists(db_path):
self.db_path = db_path
else:
db_path_dir = os.path.dirname(db_path)
os.makedirs(db_path_dir, exist_ok=True)
self.db_path = db_path
self.conn = sqlite3.connect(self.db_path)
self.conn.row_factory = dict_factory
self.cursor = self.conn.cursor()
self.init_database()
def init_database(self):
"""
Initialize the database; create the table if it does not exist
"""
sql = """
CREATE TABLE IF NOT EXISTS vprtable (
`id` INTEGER PRIMARY KEY AUTOINCREMENT,
`username` TEXT NOT NULL,
`vector` TEXT NOT NULL,
`wavpath` TEXT NOT NULL
);
"""
self.cursor.execute(sql)
self.conn.commit()
def execute_base(self, sql, data_dict):
self.cursor.execute(sql, data_dict)
self.conn.commit()
def insert_one(self, username, vector_base64:str, wav_path):
if not os.path.exists(wav_path):
return None, "wav not exists"
else:
sql = f"""
insert into
vprtable (username, vector, wavpath)
values (?, ?, ?)
"""
try:
self.cursor.execute(sql, (username, vector_base64, wav_path))
self.conn.commit()
lastidx = self.cursor.lastrowid
return lastidx, "data insert success"
except Exception as e:
print(e)
return None, e
def select_all(self):
sql = """
SELECT * from vprtable
"""
result = self.cursor.execute(sql).fetchall()
return result
def select_by_id(self, vpr_id):
sql = """
SELECT * from vprtable WHERE `id` = ?
"""
result = self.cursor.execute(sql, (vpr_id,)).fetchall()
return result
def select_by_username(self, username):
sql = """
SELECT * from vprtable WHERE `username` = ?
"""
result = self.cursor.execute(sql, (username,)).fetchall()
return result
def drop_by_username(self, username):
sql = """
DELETE from vprtable WHERE `username` = ?
"""
self.cursor.execute(sql, (username,))
self.conn.commit()
def drop_all(self):
sql = f"""
DELETE from vprtable
"""
self.cursor.execute(sql)
self.conn.commit()
def drop_table(self):
sql = f"""
DROP TABLE vprtable
"""
self.cursor.execute(sql)
self.conn.commit()
def encode_vector(self, vector:np.ndarray):
return base64.b64encode(vector).decode('utf8')
def decode_vector(self, vector_base64, dtype=np.float32):
b = base64.b64decode(vector_base64)
vc = np.frombuffer(b, dtype=dtype)
return vc
if __name__ == '__main__':
db_path = "../../source/db/vpr.sqlite"
db = DataBase(db_path)
# prepare test data
vector = np.random.randn(192).astype(np.float32).tobytes()
vector_base64 = base64.b64encode(vector).decode('utf8')
username = "sss"
wav_path = r"../../source/demo/demo_16k.wav"
# insert a record
db.insert_one(username, vector_base64, wav_path)
# query records
res_all = db.select_all()
print("res_all: ", res_all)
s_id = res_all[0]['id']
res_id = db.select_by_id(s_id)
print("res_id: ", res_id)
res_uername = db.select_by_username(username)
print("res_username: ", res_uername)
# decode the base64 vector
b = base64.b64decode(res_uername[0]['vector'])
vc = np.frombuffer(b, dtype=np.float32)
print(vc)
# delete records
db.drop_by_username(username)
res_all = db.select_all()
print("删除后 res_all: ", res_all)
db.drop_all()

@ -0,0 +1,121 @@
# tts inference engine, supporting both streaming and non-streaming synthesis
# simplified usage
# inference runs on onnxruntime
# 1. download the models
# 2. load the models
# 3. end-to-end inference
# 4. streaming inference
import base64
import numpy as np
from paddlespeech.server.utils.onnx_infer import get_sess
from paddlespeech.t2s.frontend.zh_frontend import Frontend
from paddlespeech.server.utils.util import denorm, get_chunks
from paddlespeech.server.utils.audio_process import float2pcm
from paddlespeech.server.utils.config import get_config
from paddlespeech.server.engine.tts.online.onnx.tts_engine import TTSEngine
class TTS:
def __init__(self, config_path):
self.config = get_config(config_path)['tts_online-onnx']
self.config['voc_block'] = 36
self.engine = TTSEngine()
self.engine.init(self.config)
self.engine.warm_up()
# text front-end initialization
self.frontend = Frontend(
phone_vocab_path=self.engine.executor.phones_dict,
tone_vocab_path=None)
def depadding(self, data, chunk_num, chunk_id, block, pad, upsample):
"""
Streaming inference removes the result of pad inference
"""
front_pad = min(chunk_id * block, pad)
# first chunk
if chunk_id == 0:
data = data[:block * upsample]
# last chunk
elif chunk_id == chunk_num - 1:
data = data[front_pad * upsample:]
# middle chunk
else:
data = data[front_pad * upsample:(front_pad + block) * upsample]
return data
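# Worked example (reader's note): with block=36, pad=12, upsample=300 and
# chunk_num=4, chunk 0 keeps samples [0 : 36*300), chunks 1 and 2 keep
# [12*300 : (12+36)*300) of their own output, and the last chunk keeps
# everything from sample 12*300 on, so the chunks concatenate seamlessly.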
def offlineTTS(self, text):
get_tone_ids = False
merge_sentences = False
input_ids = self.frontend.get_input_ids(
text,
merge_sentences=merge_sentences,
get_tone_ids=get_tone_ids)
phone_ids = input_ids["phone_ids"]
wav_list = []
for i in range(len(phone_ids)):
orig_hs = self.engine.executor.am_encoder_infer_sess.run(
None, input_feed={'text': phone_ids[i].numpy()}
)
hs = orig_hs[0]
am_decoder_output = self.engine.executor.am_decoder_sess.run(
None, input_feed={'xs': hs})
am_postnet_output = self.engine.executor.am_postnet_sess.run(
None,
input_feed={
'xs': np.transpose(am_decoder_output[0], (0, 2, 1))
})
am_output_data = am_decoder_output + np.transpose(
am_postnet_output[0], (0, 2, 1))
normalized_mel = am_output_data[0][0]
mel = denorm(normalized_mel, self.engine.executor.am_mu, self.engine.executor.am_std)
wav = self.engine.executor.voc_sess.run(
output_names=None, input_feed={'logmel': mel})[0]
wav_list.append(wav)
wavs = np.concatenate(wav_list)
return wavs
def streamTTS(self, text):
for sub_wav_base64 in self.engine.run(sentence=text):
yield sub_wav_base64
def streamTTSBytes(self, text):
for wav in self.engine.executor.infer(
text=text,
lang=self.engine.config.lang,
am=self.engine.config.am,
spk_id=0):
wav = float2pcm(wav) # float32 to int16
wav_bytes = wav.tobytes() # to bytes
yield wav_bytes
def after_process(self, wav):
# for tvm
wav = float2pcm(wav) # float32 to int16
wav_bytes = wav.tobytes() # to bytes
wav_base64 = base64.b64encode(wav_bytes).decode('utf8') # to base64
return wav_base64
def streamTTS_TVM(self, text):
# optimize with TVM
pass
if __name__ == '__main__':
text = "啊哈哈哈哈哈哈啊哈哈哈哈哈哈啊哈哈哈哈哈哈啊哈哈哈哈哈哈啊哈哈哈哈哈哈"
config_path="../../PaddleSpeech/demos/streaming_tts_server/conf/tts_online_application.yaml"
tts = TTS(config_path)
for sub_wav in tts.streamTTS(text):
print("sub_wav_base64: ", len(sub_wav))
end_wav = tts.offlineTTS(text)
print(end_wav)

@ -0,0 +1,152 @@
# this vpr demo uses neither MySQL nor Milvus; it is only for the docker demo
import logging
import faiss
import numpy as np
from .sql_helper import DataBase
from .vpr_encode import get_audio_embedding
class VPR:
def __init__(self, db_path, dim, top_k) -> None:
# init
self.db_path = db_path
self.dim = dim
self.top_k = top_k
self.dtype = np.float32
self.vpr_idx = 0
# database init
self.db = DataBase(db_path)
# faiss init
index_ip = faiss.IndexFlatIP(dim)
self.index_ip = faiss.IndexIDMap(index_ip)
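# note: the embeddings from get_audio_embedding are L2-normalized, so the inner
# product computed by IndexFlatIP equals the cosine similarity of voice prints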
self.init()
def init(self):
# demo init: register the vectors stored in the database into faiss
sql_dbs = self.db.select_all()
if sql_dbs:
for sql_db in sql_dbs:
idx = sql_db['id']
vc_bs64 = sql_db['vector']
vc = self.db.decode_vector(vc_bs64)
if len(vc.shape) == 1:
vc = np.expand_dims(vc, axis=0)
# build the index
self.index_ip.add_with_ids(vc, np.array((idx,)).astype('int64'))
logging.info("faiss 构建完毕")
def faiss_enroll(self, idx, vc):
self.index_ip.add_with_ids(vc, np.array((idx,)).astype('int64'))
def vpr_enroll(self, username, wav_path):
# enroll a voice print
emb = get_audio_embedding(wav_path)
if emb is not None:
# expand only after the None check, since expand_dims would fail on None
emb = np.expand_dims(emb, axis=0)
emb_bs64 = self.db.encode_vector(emb)
last_idx, mess = self.db.insert_one(username, emb_bs64, wav_path)
if last_idx:
# register in faiss
self.faiss_enroll(last_idx, emb)
else:
last_idx = None
return last_idx
def vpr_recog(self, wav_path):
# recognize a voice print
emb_search = get_audio_embedding(wav_path)
if emb_search is not None:
emb_search = np.expand_dims(emb_search, axis=0)
D, I = self.index_ip.search(emb_search, self.top_k)
D = D.tolist()[0]
I = I.tolist()[0]
return [(round(D[i] * 100, 2 ), I[i]) for i in range(len(D)) if I[i] != -1]
else:
logging.error("识别失败")
return None
def do_search_vpr(self, wav_path):
spk_ids, paths, scores = [], [], []
recog_result = self.vpr_recog(wav_path)
for score, idx in recog_result:
username = self.db.select_by_id(idx)[0]['username']
if username not in spk_ids:
spk_ids.append(username)
scores.append(score)
paths.append("")
return spk_ids, paths, scores
def vpr_del(self, username):
# delete the voice prints of a user by username
# look up the user's ids and remove the corresponding vectors
res = self.db.select_by_username(username)
for r in res:
idx = r['id']
self.index_ip.remove_ids(np.array((idx,)).astype('int64'))
self.db.drop_by_username(username)
def vpr_list(self):
# list all records
return self.db.select_all()
def do_list(self):
spk_ids, vpr_ids = [], []
for res in self.db.select_all():
spk_ids.append(res['username'])
vpr_ids.append(res['id'])
return spk_ids, vpr_ids
def do_get_wav(self, vpr_idx):
res = self.db.select_by_id(vpr_idx)
return res[0]['wavpath']
def vpr_data(self, idx):
# get the record for the given id
res = self.db.select_by_id(idx)
return res
def vpr_droptable(self):
# drop the table
self.db.drop_table()
# clear faiss
self.index_ip.reset()
if __name__ == '__main__':
db_path = "../../source/db/vpr.sqlite"
dim = 192
top_k = 5
vpr = VPR(db_path, dim, top_k)
# prepare test data
username = "sss"
wav_path = r"../../source/demo/demo_16k.wav"
# enroll a voice print
vpr.vpr_enroll(username, wav_path)
# list the records
print(vpr.vpr_list())
# recognize the voice print
recolist = vpr.vpr_recog(wav_path)
print(recolist)
# fetch the record by id
idx = recolist[0][1]
print(vpr.vpr_data(idx))
# delete the voice print
vpr.vpr_del(username)
vpr.vpr_droptable()

@ -0,0 +1,26 @@
from paddlespeech.cli import VectorExecutor
import numpy as np
import logging
vector_executor = VectorExecutor()
def get_audio_embedding(path):
"""
Use vpr_inference to generate embedding of audio
"""
try:
embedding = vector_executor(
audio_file=path, model='ecapatdnn_voxceleb12')
embedding = embedding / np.linalg.norm(embedding)
return embedding
except Exception as e:
logging.error(f"Error with embedding:{e}")
return None
if __name__ == '__main__':
audio_path = r"../../source/demo/demo_16k.wav"
emb = get_audio_embedding(audio_path)
print(emb.shape)
print(emb.dtype)
print(type(emb))

@ -0,0 +1,31 @@
from typing import List
from fastapi import WebSocket
class ConnectionManager:
def __init__(self):
# active websocket connections
self.active_connections: List[WebSocket] = []
async def connect(self, ws: WebSocket):
# accept the connection
await ws.accept()
# store the connection
self.active_connections.append(ws)
def disconnect(self, ws: WebSocket):
# remove the connection on close
self.active_connections.remove(ws)
@staticmethod
async def send_personal_message(message: str, ws: WebSocket):
# send a message to a single client
await ws.send_text(message)
async def broadcast(self, message: str):
# broadcast a message to all clients
for connection in self.active_connections:
await connection.send_text(message)
manager = ConnectionManager()

@ -0,0 +1,93 @@
from paddlespeech.cli.asr.infer import ASRExecutor
import soundfile as sf
import os
import librosa
from src.SpeechBase.asr import ASR
from src.SpeechBase.tts import TTS
from src.SpeechBase.nlp import NLP
class Robot:
def __init__(self, asr_config, tts_config,asr_init_path,
ie_model_path=None) -> None:
self.nlp = NLP(ie_model_path=ie_model_path)
self.asr = ASR(config_path=asr_config)
self.tts = TTS(config_path=tts_config)
self.tts_sample_rate = 24000
self.asr_sample_rate = 16000
# the streaming model is less accurate than the end-to-end one, so the two are kept separate here
self.asr_model = ASRExecutor()
self.asr_name = "conformer_wenetspeech"
self.warm_up_asrmodel(asr_init_path)
def warm_up_asrmodel(self, asr_init_path):
if not os.path.exists(asr_init_path):
path_dir = os.path.dirname(asr_init_path)
if not os.path.exists(path_dir):
os.makedirs(path_dir, exist_ok=True)
# TTS outputs at a 24000 Hz sample rate
text = "生成初始音频"
self.text2speech(text, asr_init_path)
# asr model warm-up
self.asr_model(asr_init_path, model=self.asr_name,lang='zh',
sample_rate=16000)
def speech2text(self, audio_file):
self.asr_model.preprocess(self.asr_name, audio_file)
self.asr_model.infer(self.asr_name)
res = self.asr_model.postprocess()
return res
def text2speech(self, text, outpath):
wav = self.tts.offlineTTS(text)
sf.write(
outpath, wav, samplerate=self.tts_sample_rate)
res = wav
return res
def text2speechStream(self, text):
for sub_wav_base64 in self.tts.streamTTS(text=text):
yield sub_wav_base64
def text2speechStreamBytes(self, text):
for wav_bytes in self.tts.streamTTSBytes(text=text):
yield wav_bytes
def chat(self, text):
result = self.nlp.chat(text)
return result
def ie(self, text):
result = self.nlp.ie(text)
return result
if __name__ == '__main__':
tts_config = "../PaddleSpeech/demos/streaming_tts_server/conf/tts_online_application.yaml"
asr_config = "../PaddleSpeech/demos/streaming_asr_server/conf/ws_conformer_application.yaml"
demo_wav = "../source/demo/demo_16k.wav"
ie_model_path = "../source/model"
tts_wav = "../source/demo/tts.wav"
text = "今天天气真不错"
ie_text = "今天晚上我从大牛坊出发去三里屯花了六十五块钱"
robot = Robot(asr_config, tts_config, asr_init_path=demo_wav)
res = robot.speech2text(demo_wav)
print(res)
res = robot.chat(text)
print(res)
print("tts offline")
robot.text2speech(res, tts_wav)
print("ie test")
res = robot.ie(ie_text)
print(res)

@ -0,0 +1,18 @@
import random
def randName(n=5):
return "".join(random.sample('zyxwvutsrqponmlkjihgfedcba',n))
def SuccessRequest(result=None, message="ok"):
return {
"code": 0,
"result":result,
"message": message
}
def ErrorRequest(result=None, message="error"):
return {
"code": -1,
"result":result,
"message": message
}

@ -0,0 +1,25 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
node_modules
dist
dist-ssr
*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?

@ -0,0 +1,7 @@
# Vue 3 + Vite
This template should help get you started developing with Vue 3 in Vite. The template uses Vue 3 `<script setup>` SFCs; check out the [script setup docs](https://v3.vuejs.org/api/sfc-script-setup.html#sfc-script-setup) to learn more.
## Recommended IDE Setup
- [VSCode](https://code.visualstudio.com/) + [Volar](https://marketplace.visualstudio.com/items?itemName=johnsoncodehk.volar)

@ -0,0 +1,13 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<link rel="icon" href="/favicon.ico" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>飞桨PaddleSpeech</title>
</head>
<body>
<div id="app"></div>
<script type="module" src="/src/main.js"></script>
</body>
</html>

File diff suppressed because it is too large

@ -0,0 +1,23 @@
{
"name": "paddlespeechwebclient",
"private": true,
"version": "0.0.0",
"scripts": {
"dev": "vite",
"build": "vite build",
"preview": "vite preview"
},
"dependencies": {
"ant-design-vue": "^2.2.8",
"axios": "^0.26.1",
"element-plus": "^2.1.9",
"js-audio-recorder": "0.5.7",
"lamejs": "^1.2.1",
"less": "^4.1.2",
"vue": "^3.2.25"
},
"devDependencies": {
"@vitejs/plugin-vue": "^2.3.0",
"vite": "^2.9.0"
}
}

Binary file not shown.


@ -0,0 +1,19 @@
<script setup>
import Experience from './components/Experience.vue'
import Header from './components/Content/Header/Header.vue'
</script>
<template>
<div class="app">
<Header></Header>
<Experience></Experience>
</div>
</template>
<style style="less">
.app {
background: url("assets/image/在线体验-背景@2x.png") no-repeat;
};
</style>

@ -0,0 +1,29 @@
export const apiURL = {
ASR_OFFLINE : '/api/asr/offline', // get the offline ASR result
ASR_COLLECT_ENV : '/api/asr/collectEnv', // sample environment noise
ASR_STOP_RECORD : '/api/asr/stopRecord', // pause recording on the backend
ASR_RESUME_RECORD : '/api/asr/resumeRecord',// resume recording on the backend
NLP_CHAT : '/api/nlp/chat', // NLP chit-chat endpoint
NLP_IE : '/api/nlp/ie', // information-extraction endpoint
TTS_OFFLINE : '/api/tts/offline', // get TTS audio
VPR_RECOG : '/api/vpr/recog', // voice-print recognition endpoint, returns similarity scores
VPR_ENROLL : '/api/vpr/enroll', // voice-print enrollment endpoint
VPR_LIST : '/api/vpr/list', // list enrolled voice prints
VPR_DEL : '/api/vpr/del', // delete a user's voice print
VPR_DATA : '/api/vpr/database64?vprId=', // get enrolled audio as base64
// websocket
CHAT_SOCKET_RECORD: 'ws://localhost:8010/ws/asr/offlineStream', // ChatBot websocket endpoint
ASR_SOCKET_RECORD: 'ws://localhost:8010/ws/asr/onlineStream', // streaming ASR endpoint
TTS_SOCKET_RECORD: 'ws://localhost:8010/ws/tts/online', // streaming TTS endpoint
}

@ -0,0 +1,30 @@
import axios from 'axios'
import {apiURL} from "./API.js"
// upload an audio file and get the recognition result
export async function asrOffline(params){
const result = await axios.post(
apiURL.ASR_OFFLINE, params
)
return result
}
// upload the environment-noise sample
export async function asrCollentEnv(params){
const result = await axios.post(
apiURL.ASR_COLLECT_ENV, params
)
return result
}
// pause recording
export async function asrStopRecord(){
const result = await axios.get(apiURL.ASR_STOP_RECORD);
return result
}
// resume recording
export async function asrResumeRecord(){
const result = await axios.get(apiURL.ASR_RESUME_RECORD);
return result
}

@ -0,0 +1,17 @@
import axios from 'axios'
import {apiURL} from "./API.js"
// get the chit-chat reply
export async function nlpChat(text){
const result = await axios.post(apiURL.NLP_CHAT, { chat : text});
return result
}
// get the information-extraction result
export async function nlpIE(text){
const result = await axios.post(apiURL.NLP_IE, { chat : text});
return result
}

@ -0,0 +1,8 @@
import axios from 'axios'
import {apiURL} from "./API.js"
export async function ttsOffline(text){
const result = await axios.post(apiURL.TTS_OFFLINE, { text : text});
return result
}

@ -0,0 +1,32 @@
import axios from 'axios'
import {apiURL} from "./API.js"
// enroll a voice print
export async function vprEnroll(params){
const result = await axios.post(apiURL.VPR_ENROLL, params);
return result
}
// recognize a voice print
export async function vprRecog(params){
const result = await axios.post(apiURL.VPR_RECOG, params);
return result
}
// delete a voice print
export async function vprDel(params){
const result = await axios.post(apiURL.VPR_DEL, params);
return result
}
// list enrolled voice prints
export async function vprList(){
const result = await axios.get(apiURL.VPR_LIST);
return result
}
// get the enrolled audio
export async function vprData(params){
const result = await axios.get(apiURL.VPR_DATA+params);
return result
}

@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" width="50" height="50" viewBox="0 0 50 50">
<g fill="none" fill-rule="evenodd">
<rect width="50" height="50" opacity="0"/>
<path fill="#FFF" fill-rule="nonzero" d="M10.5625,26.375 L10.5625,37.375 L39.4375,37.375 L39.4375,26.375 L42.1875,26.375 L42.1875,40.125 L7.8125,40.125 L7.8125,26.375 L10.5625,26.375 Z M24.9193012,9.30543065 L32.8422855,17.1477673 L30.9077145,19.1022327 L26.3745,14.6154306 L26.375,29.125 L23.625,29.125 L23.6245,14.5224306 L19.1022838,19.0922338 L17.1477162,17.1577662 L24.9193012,9.30543065 Z"/>
</g>
</svg>


@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" width="50" height="50" viewBox="0 0 50 50">
<g fill="#FFF" fill-rule="evenodd">
<rect width="50" height="50" opacity="0"/>
<path d="M18.625,5.7 C19.2739346,5.7 19.8,6.22606542 19.8,6.875 L19.8,42.125 C19.8,42.7739346 19.2739346,43.3 18.625,43.3 C17.9760654,43.3 17.45,42.7739346 17.45,42.125 L17.45,6.875 C17.45,6.22606542 17.9760654,5.7 18.625,5.7 Z M30.375,10.4 C31.0239346,10.4 31.55,10.9260654 31.55,11.575 L31.55,37.425 C31.55,38.0739346 31.0239346,38.6 30.375,38.6 C29.7260654,38.6 29.2,38.0739346 29.2,37.425 L29.2,11.575 C29.2,10.9260654 29.7260654,10.4 30.375,10.4 Z M6.875,15.1 C7.52393458,15.1 8.05,15.6260654 8.05,16.275 L8.05,32.725 C8.05,33.3739346 7.52393458,33.9 6.875,33.9 C6.22606542,33.9 5.7,33.3739346 5.7,32.725 L5.7,16.275 C5.7,15.6260654 6.22606542,15.1 6.875,15.1 Z M42.125,17.45 C42.7739346,17.45 43.3,17.9760654 43.3,18.625 L43.3,30.375 C43.3,31.0239346 42.7739346,31.55 42.125,31.55 C41.4760654,31.55 40.95,31.0239346 40.95,30.375 L40.95,18.625 C40.95,17.9760654 41.4760654,17.45 42.125,17.45 Z"/>
</g>
</svg>


@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" width="50" height="50" viewBox="0 0 50 50">
<g fill="#FFF" fill-rule="evenodd">
<rect width="50" height="50" fill="none"/>
<path fill-rule="nonzero" d="M41.4485655,21.2539772 C42.1315264,21.2850061 42.6598822,21.8638177 42.6289611,22.5468326 C42.6247768,22.6388278 42.6185963,22.7404533 42.6102079,22.8512273 L42.5782082,23.2105123 L42.5782082,23.2105123 L42.5316934,23.6217955 L42.5316934,23.6217955 L42.4693948,24.0821848 L42.4693948,24.0821848 L42.3900439,24.5887883 L42.3900439,24.5887883 L42.292372,25.1387138 C42.2744962,25.2338175 42.2558041,25.3306058 42.2362693,25.4290185 C41.8143833,27.555069 41.1316382,29.6828464 40.1241953,31.6800323 C37.4291788,37.0229123 32.9261483,40.3971985 26.4086979,40.8900674 L25.9987324,40.9171116 L25.9987324,45.4234882 L36.4808016,45.4234882 C37.1644101,45.4234882 37.7186683,45.9777464 37.7186683,46.661355 C37.7186683,47.3023391 37.2315273,47.8294468 36.6073586,47.8928314 L36.4808016,47.8992217 L13.1797237,47.8992217 C12.4960073,47.8992217 11.941857,47.3450714 11.941857,46.661355 C11.941857,46.020472 12.4289031,45.4932758 13.0531489,45.4298797 L13.1797237,45.4234882 L23.5229989,45.4234882 L23.5229989,40.9094487 C16.8529053,40.4933909 12.2580826,37.0999016 9.52429608,31.6800323 C8.5167992,29.6828464 7.83410805,27.5550691 7.41222208,25.4290185 L7.30490754,24.8585165 L7.30490754,24.8585165 L7.21653999,24.3298905 L7.21653999,24.3298905 L7.1458579,23.8460326 L7.1458579,23.8460326 L7.09159974,23.4098348 L7.09159974,23.4098348 L7.052504,23.0241892 L7.052504,23.0241892 L7.02730915,22.6919879 C7.02419833,22.6412354 7.02161415,22.5928302 7.01953033,22.5468326 C6.98839343,21.8638177 7.5168571,21.2850061 8.19987204,21.2539772 C8.84009734,21.2248875 9.38883251,21.6875394 9.4804906,22.3081826 L9.52089639,22.8033886 L9.52089639,22.8033886 L9.55106194,23.0957484 L9.55106194,23.0957484 C9.61279606,23.6520033 9.70707015,24.274849 9.84046771,24.9470712 C10.2215574,26.8673593 10.837172,28.7858665 11.7346375,30.5650942 C14.2485231,35.5489392 18.4280434,38.4929132 24.8242187,38.5130415 C31.2204481,38.4929132 35.3998065,35.5489392 37.9138,30.5650942 C38.8112655,28.7858665 39.4267722,26.8673053 39.8078618,24.9470712 C39.9413133,24.274849 40.0356414,23.6520033 40.0973756,23.0957484 L40.1383001,22.683441 L40.1383001,22.683441 L40.15571,22.4343189 L40.15571,22.4343189 C40.1868469,21.7514119 40.7656585,21.2229482 41.4485655,21.2539772 Z M24.7277861,1.03431811 C30.2652292,1.03431811 34.7717072,5.45158401 34.9203849,10.9435284 L34.924173,11.2236897 L34.924173,24.2016207 C34.924173,29.8291412 30.3475899,34.3909923 24.7277861,34.3909923 C19.1903431,34.3909923 14.6838651,29.9738829 14.5351873,24.4817898 L14.5313993,24.2016206 L14.5313993,11.2236897 C14.5313993,5.59627708 19.1078745,1.03431811 24.7277861,1.03431811 Z M24.7278401,3.51005152 C20.5523235,3.51005152 17.1406309,6.83661824 17.0109562,10.9790926 L17.0071327,11.2236897 L17.0071327,24.2016206 C17.0071327,28.4575531 20.4658637,31.9152588 24.7278401,31.9152588 C28.9033567,31.9152588 32.3150493,28.5887959 32.444724,24.4462237 L32.4485475,24.2016206 L32.4485475,11.2236897 C32.4485475,6.96786511 28.9898165,3.51005152 24.7278401,3.51005152 Z"/>
</g>
</svg>


@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 20 20">
<g fill="#FFF" fill-rule="evenodd">
<rect width="20" height="20" opacity="0"/>
<path fill-rule="nonzero" d="M17.2545788,8.38095607 C17.5371833,8.39379564 17.7558133,8.63330387 17.7430184,8.91593074 L17.7371151,9.01650414 L17.7371151,9.01650414 L17.7143664,9.26243626 L17.7143664,9.26243626 L17.675151,9.56380287 L17.675151,9.56380287 L17.6172162,9.91546885 C17.6058754,9.97798618 17.5936607,10.0423853 17.5805252,10.1085594 C17.4059517,10.9883044 17.1234365,11.868764 16.7065636,12.6951858 C15.608809,14.8714882 13.7861076,16.2584571 11.1569912,16.495803 L10.8615444,16.5174255 L10.8615444,18.3821331 L15.1989524,18.3821331 C15.4818249,18.3821331 15.7111731,18.6114813 15.7111731,18.8943538 C15.7111731,19.1458357 15.5299597,19.3549563 15.2910197,19.3983228 L15.1989524,19.4065745 L5.55712706,19.4065745 C5.2742099,19.4065745 5.04490634,19.1772709 5.04490634,18.8943538 C5.04490634,18.6429116 5.22608446,18.4337601 5.46504803,18.3903863 L5.55712706,18.3821331 L9.83710301,18.382133 L9.83710301,16.5142546 C7.07706426,16.3420928 5.1757583,14.9378903 4.04453631,12.6951858 C3.62764105,11.868764 3.34514816,10.9883044 3.17057465,10.1085594 L3.13388183,9.91546885 L3.13388183,9.91546885 L3.07593716,9.56380287 L3.07593716,9.56380287 L3.03671385,9.26243626 L3.03671385,9.26243626 L3.01397193,9.01650414 C3.01143062,8.98042028 3.00948271,8.94686015 3.00808152,8.91593074 C2.99519728,8.63330387 3.2138719,8.39379564 3.49649877,8.38095607 C3.77908098,8.36811648 4.01858921,8.58679112 4.03142877,8.86937333 L4.04579166,9.04965974 L4.04579166,9.04965974 L4.05561184,9.14306831 C4.08115699,9.37324275 4.12016696,9.63097201 4.17536595,9.90913293 C4.33305822,10.7037349 4.5877953,11.4975999 4.95916033,12.2338321 C5.99938887,14.2961128 7.72884553,15.5143089 10.3755388,15.5226379 C13.0222544,15.5143089 14.7516441,14.2961128 15.7919173,12.2338321 C16.1632823,11.4975999 16.4179747,10.7037126 16.575667,9.90913293 C16.6124812,9.7236923 16.6421003,9.54733242 16.6653248,9.38216386 L16.7052821,9.04965974 L16.7052821,9.04965974 L16.7196041,8.86937333 L16.7196041,8.86937333 C16.7324884,8.58679115 16.9719966,8.3681165 17.2545788,8.38095607 Z M10.3356356,0.0142005962 C12.595216,0.0142005962 14.4399401,1.79169133 14.5496666,4.02028091 L14.5548302,4.23049229 L14.5548302,9.60067063 C14.5548302,11.9292998 12.6610717,13.8169623 10.3356356,13.8169623 C8.07605526,13.8169623 6.23133121,12.0395346 6.12160467,9.81088771 L6.11644109,9.60067061 L6.11644109,4.23049229 C6.11644109,1.90190776 8.01015495,0.0142005962 10.3356356,0.0142005962 Z M10.335658,1.03864201 C8.63472709,1.03864201 7.24010749,2.37267291 7.14594933,4.04955911 L7.1408825,4.23049229 L7.1408825,9.60067061 C7.1408825,11.3617461 8.57208154,12.7925209 10.335658,12.7925209 C12.0365888,12.7925209 13.4312084,11.4585316 13.5253666,9.78160809 L13.5304334,9.60067061 L13.5304334,4.23049229 C13.5304334,2.46946142 12.0992344,1.03864201 10.335658,1.03864201 Z"/>
</g>
</svg>


@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 16 16">
<path fill="#F33E3E" d="M4.0976,1.3362 C4.4618,1.1488 4.8948,1.4234 4.8948,1.833 C4.8948,2.0362 4.7852,2.2266 4.6046,2.3194 C2.5386,3.3816 1.1214,5.5338 1.1214,8.0122 C1.1214,11.677 4.2184,14.632 7.9326,14.398 C11.1952,14.1922 13.816,11.4788 13.9156,8.2112 C13.9936,5.6502 12.5572,3.4112 10.4372,2.3204 C10.256,2.2272 10.1452,2.0376 10.1452,1.8338 C10.1452,1.422 10.5814,1.1504 10.9476,1.3392 C13.366,2.5862 15.024,5.109 15.024,8.0124 C15.024,12.3292 11.3596,15.8064 6.978,15.497 C3.3116,15.238 0.3328,12.2886 0.0406,8.6244 C-0.2116,5.4644 1.5076,2.6692 4.0976,1.3362 Z M7.52,0.004 C7.8252,0.004 8.0726,0.2514 8.0726,0.5566 L8.0726,6.3544 C8.0726,6.6596 7.8252,6.907 7.52,6.907 C7.2148,6.907 6.9674,6.6596 6.9674,6.3544 L6.9674,0.5566 C6.9674,0.2514 7.2148,0.004 7.52,0.004 Z"/>
</svg>


@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" width="116" height="116" viewBox="0 0 116 116">
<g fill="none" fill-rule="evenodd">
<circle cx="58" cy="58" r="58" fill="#2932E1"/>
<path fill="#FFF" fill-rule="nonzero" d="M74.4485655,54.2539772 C75.1315264,54.2850061 75.6598822,54.8638177 75.6289611,55.5468326 C75.6247768,55.6388278 75.6185963,55.7404533 75.6102079,55.8512273 L75.5782082,56.2105123 L75.5782082,56.2105123 L75.5316934,56.6217955 L75.5316934,56.6217955 L75.4693948,57.0821848 L75.4693948,57.0821848 L75.3900439,57.5887883 L75.3900439,57.5887883 L75.292372,58.1387138 C75.2744962,58.2338175 75.2558041,58.3306058 75.2362693,58.4290185 C74.8143833,60.555069 74.1316382,62.6828464 73.1241953,64.6800323 C70.4291788,70.0229123 65.9261483,73.3971985 59.4086979,73.8900674 L58.9987324,73.9171116 L58.9987324,78.4234882 L69.4808016,78.4234882 C70.1644101,78.4234882 70.7186683,78.9777464 70.7186683,79.661355 C70.7186683,80.3023391 70.2315273,80.8294468 69.6073586,80.8928314 L69.4808016,80.8992217 L46.1797237,80.8992217 C45.4960073,80.8992217 44.941857,80.3450714 44.941857,79.661355 C44.941857,79.020472 45.4289031,78.4932758 46.0531489,78.4298797 L46.1797237,78.4234882 L56.5229989,78.4234882 L56.5229989,73.9094487 C49.8529053,73.4933909 45.2580826,70.0999016 42.5242961,64.6800323 C41.5167992,62.6828464 40.834108,60.5550691 40.4122221,58.4290185 L40.3049075,57.8585165 L40.3049075,57.8585165 L40.21654,57.3298905 L40.21654,57.3298905 L40.1458579,56.8460326 L40.1458579,56.8460326 L40.0915997,56.4098348 L40.0915997,56.4098348 L40.052504,56.0241892 L40.052504,56.0241892 L40.0273091,55.6919879 C40.0241983,55.6412354 40.0216142,55.5928302 40.0195303,55.5468326 C39.9883934,54.8638177 40.5168571,54.2850061 41.199872,54.2539772 C41.8400973,54.2248875 42.3888325,54.6875394 42.4804906,55.3081826 L42.5208964,55.8033886 L42.5208964,55.8033886 L42.5510619,56.0957484 L42.5510619,56.0957484 C42.6127961,56.6520033 42.7070702,57.274849 42.8404677,57.9470712 C43.2215574,59.8673593 43.837172,61.7858665 44.7346375,63.5650942 C47.2485231,68.5489392 51.4280434,71.4929132 57.8242187,71.5130415 C64.2204481,71.4929132 68.3998065,68.5489392 70.9138,63.5650942 C71.8112655,61.7858665 72.4267722,59.8673053 72.8078618,57.9470712 C72.9413133,57.274849 73.0356414,56.6520033 73.0973756,56.0957484 L73.1383001,55.683441 L73.1383001,55.683441 L73.15571,55.4343189 L73.15571,55.4343189 C73.1868469,54.7514119 73.7656585,54.2229482 74.4485655,54.2539772 Z M57.7277861,34.0343181 C63.2652292,34.0343181 67.7717072,38.451584 67.9203849,43.9435284 L67.924173,44.2236897 L67.924173,57.2016207 C67.924173,62.8291412 63.3475899,67.3909923 57.7277861,67.3909923 C52.1903431,67.3909923 47.6838651,62.9738829 47.5351873,57.4817898 L47.5313993,57.2016206 L47.5313993,44.2236897 C47.5313993,38.5962771 52.1078745,34.0343181 57.7277861,34.0343181 Z M57.7278401,36.5100515 C53.5523235,36.5100515 50.1406309,39.8366182 50.0109562,43.9790926 L50.0071327,44.2236897 L50.0071327,57.2016206 C50.0071327,61.4575531 53.4658637,64.9152588 57.7278401,64.9152588 C61.9033567,64.9152588 65.3150493,61.5887959 65.444724,57.4462237 L65.4485475,57.2016206 L65.4485475,44.2236897 C65.4485475,39.9678651 61.9898165,36.5100515 57.7278401,36.5100515 Z"/>
</g>
</svg>


@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" width="116" height="116" viewBox="0 0 116 116">
<g fill="none" fill-rule="evenodd">
<circle cx="58" cy="58" r="58" fill="#7278F5"/>
<path fill="#FFF" fill-rule="nonzero" d="M74.4485655,54.2539772 C75.1315264,54.2850061 75.6598822,54.8638177 75.6289611,55.5468326 C75.6247768,55.6388278 75.6185963,55.7404533 75.6102079,55.8512273 L75.5782082,56.2105123 L75.5782082,56.2105123 L75.5316934,56.6217955 L75.5316934,56.6217955 L75.4693948,57.0821848 L75.4693948,57.0821848 L75.3900439,57.5887883 L75.3900439,57.5887883 L75.292372,58.1387138 C75.2744962,58.2338175 75.2558041,58.3306058 75.2362693,58.4290185 C74.8143833,60.555069 74.1316382,62.6828464 73.1241953,64.6800323 C70.4291788,70.0229123 65.9261483,73.3971985 59.4086979,73.8900674 L58.9987324,73.9171116 L58.9987324,78.4234882 L69.4808016,78.4234882 C70.1644101,78.4234882 70.7186683,78.9777464 70.7186683,79.661355 C70.7186683,80.3023391 70.2315273,80.8294468 69.6073586,80.8928314 L69.4808016,80.8992217 L46.1797237,80.8992217 C45.4960073,80.8992217 44.941857,80.3450714 44.941857,79.661355 C44.941857,79.020472 45.4289031,78.4932758 46.0531489,78.4298797 L46.1797237,78.4234882 L56.5229989,78.4234882 L56.5229989,73.9094487 C49.8529053,73.4933909 45.2580826,70.0999016 42.5242961,64.6800323 C41.5167992,62.6828464 40.834108,60.5550691 40.4122221,58.4290185 L40.3049075,57.8585165 L40.3049075,57.8585165 L40.21654,57.3298905 L40.21654,57.3298905 L40.1458579,56.8460326 L40.1458579,56.8460326 L40.0915997,56.4098348 L40.0915997,56.4098348 L40.052504,56.0241892 L40.052504,56.0241892 L40.0273091,55.6919879 C40.0241983,55.6412354 40.0216142,55.5928302 40.0195303,55.5468326 C39.9883934,54.8638177 40.5168571,54.2850061 41.199872,54.2539772 C41.8400973,54.2248875 42.3888325,54.6875394 42.4804906,55.3081826 L42.5208964,55.8033886 L42.5208964,55.8033886 L42.5510619,56.0957484 L42.5510619,56.0957484 C42.6127961,56.6520033 42.7070702,57.274849 42.8404677,57.9470712 C43.2215574,59.8673593 43.837172,61.7858665 44.7346375,63.5650942 C47.2485231,68.5489392 51.4280434,71.4929132 57.8242187,71.5130415 C64.2204481,71.4929132 68.3998065,68.5489392 70.9138,63.5650942 C71.8112655,61.7858665 72.4267722,59.8673053 72.8078618,57.9470712 C72.9413133,57.274849 73.0356414,56.6520033 73.0973756,56.0957484 L73.1383001,55.683441 L73.1383001,55.683441 L73.15571,55.4343189 L73.15571,55.4343189 C73.1868469,54.7514119 73.7656585,54.2229482 74.4485655,54.2539772 Z M57.7277861,34.0343181 C63.2652292,34.0343181 67.7717072,38.451584 67.9203849,43.9435284 L67.924173,44.2236897 L67.924173,57.2016207 C67.924173,62.8291412 63.3475899,67.3909923 57.7277861,67.3909923 C52.1903431,67.3909923 47.6838651,62.9738829 47.5351873,57.4817898 L47.5313993,57.2016206 L47.5313993,44.2236897 C47.5313993,38.5962771 52.1078745,34.0343181 57.7277861,34.0343181 Z M57.7278401,36.5100515 C53.5523235,36.5100515 50.1406309,39.8366182 50.0109562,43.9790926 L50.0071327,44.2236897 L50.0071327,57.2016206 C50.0071327,61.4575531 53.4658637,64.9152588 57.7278401,64.9152588 C61.9033567,64.9152588 65.3150493,61.5887959 65.444724,57.4462237 L65.4485475,57.2016206 L65.4485475,44.2236897 C65.4485475,39.9678651 61.9898165,36.5100515 57.7278401,36.5100515 Z"/>
</g>
</svg>


@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="10" height="12" viewBox="0 0 10 12">
<polygon fill="#FFF" fill-rule="evenodd" points="29 16 39 21.765 29 28" transform="translate(-29 -16)"/>
</svg>


@ -0,0 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="10" height="12" viewBox="0 0 10 12">
<path fill="#FFF" fill-rule="evenodd" d="M31,17 L31,29 L29,29 L29,17 L31,17 Z M39,17 L39,29 L37,29 L37,17 L39,17 Z" transform="translate(-29 -17)"/>
</svg>


@ -0,0 +1,11 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="16" height="16" viewBox="0 0 16 16">
<defs>
<rect id="ic_更换示例-a" width="16" height="16" x="0" y="0"/>
</defs>
<g fill="none" fill-rule="evenodd" transform="matrix(-1 0 0 1 16 0)">
<mask id="ic_更换示例-b" fill="#fff">
<use xlink:href="#ic_更换示例-a"/>
</mask>
<path fill="#2932E1" fill-rule="nonzero" d="M6.35459401,0.717547671 L7.1160073,1.36581444 L5.76391165,2.95149486 C8.45440978,1.82595599 11.6186236,2.72687193 13.331374,5.17293307 C15.3274726,8.02365719 14.6537425,11.9415081 11.8236048,13.9231918 C8.99346706,15.9048756 5.08146225,15.1979908 3.08536373,12.3472667 C2.43380077,11.4167384 2.05175569,10.3497586 1.95954347,9.24373118 L1.95954347,9.24373118 L1.91800137,8.74545992 L2.9145439,8.66237572 L2.956086,9.16064698 C3.03368894,10.0914452 3.35506892,10.9889989 3.90451578,11.7736903 C5.58491905,14.1735549 8.873856,14.7678536 11.2500283,13.1040398 C13.6262007,11.440226 14.1926253,8.14637409 12.512222,5.74650951 C11.0401872,3.64422594 8.29699921,2.89825126 6.0091042,3.93534448 L6.11200137,3.89054767 L7.69316988,4.63120811 L7.26907888,5.53682768 L3.68173666,3.85691748 L6.35459401,0.717547671 Z" mask="url(#ic_更换示例-b)"/>
</g>
</svg>


@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 20 20">
<g fill="#FFF" fill-rule="evenodd">
<rect width="20" height="20" opacity="0"/>
<path d="M7.5,2 C7.77614237,2 8,2.22385763 8,2.5 L8,17.5 C8,17.7761424 7.77614237,18 7.5,18 C7.22385763,18 7,17.7761424 7,17.5 L7,2.5 C7,2.22385763 7.22385763,2 7.5,2 Z M12.5,4 C12.7761424,4 13,4.22385763 13,4.5 L13,15.5 C13,15.7761424 12.7761424,16 12.5,16 C12.2238576,16 12,15.7761424 12,15.5 L12,4.5 C12,4.22385763 12.2238576,4 12.5,4 Z M2.5,6 C2.77614237,6 3,6.22385763 3,6.5 L3,13.5 C3,13.7761424 2.77614237,14 2.5,14 C2.22385763,14 2,13.7761424 2,13.5 L2,6.5 C2,6.22385763 2.22385763,6 2.5,6 Z M17.5,7 C17.7761424,7 18,7.22385763 18,7.5 L18,12.5 C18,12.7761424 17.7761424,13 17.5,13 C17.2238576,13 17,12.7761424 17,12.5 L17,7.5 C17,7.22385763 17.2238576,7 17.5,7 Z"/>
</g>
</svg>


@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="20px" height="20px" viewBox="0 0 20 20" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<title>icon_录制声音(小语音)</title>
<g id="页面-1" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<g id="02-声纹识别-补充状态" transform="translate(-98.000000, -216.000000)" fill="#FFFFFF">
<g id="编组-6备份" transform="translate(77.000000, 204.000000)">
<g id="icon_录制声音(小语音)" transform="translate(21.000000, 12.000000)">
<rect id="矩形" opacity="0" x="0" y="0" width="20" height="20"></rect>
<path d="M17.2545788,8.38095607 C17.5371833,8.39379564 17.7558133,8.63330387 17.7430184,8.91593074 L17.7371151,9.01650414 L17.7371151,9.01650414 L17.7143664,9.26243626 L17.7143664,9.26243626 L17.675151,9.56380287 L17.675151,9.56380287 L17.6172162,9.91546885 C17.6058754,9.97798618 17.5936607,10.0423853 17.5805252,10.1085594 C17.4059517,10.9883044 17.1234365,11.868764 16.7065636,12.6951858 C15.608809,14.8714882 13.7861076,16.2584571 11.1569912,16.495803 L10.8615444,16.5174255 L10.8615444,18.3821331 L15.1989524,18.3821331 C15.4818249,18.3821331 15.7111731,18.6114813 15.7111731,18.8943538 C15.7111731,19.1458357 15.5299597,19.3549563 15.2910197,19.3983228 L15.1989524,19.4065745 L5.55712706,19.4065745 C5.2742099,19.4065745 5.04490634,19.1772709 5.04490634,18.8943538 C5.04490634,18.6429116 5.22608446,18.4337601 5.46504803,18.3903863 L5.55712706,18.3821331 L9.83710301,18.382133 L9.83710301,16.5142546 C7.07706426,16.3420928 5.1757583,14.9378903 4.04453631,12.6951858 C3.62764105,11.868764 3.34514816,10.9883044 3.17057465,10.1085594 L3.13388183,9.91546885 L3.13388183,9.91546885 L3.07593716,9.56380287 L3.07593716,9.56380287 L3.03671385,9.26243626 L3.03671385,9.26243626 L3.01397193,9.01650414 C3.01143062,8.98042028 3.00948271,8.94686015 3.00808152,8.91593074 C2.99519728,8.63330387 3.2138719,8.39379564 3.49649877,8.38095607 C3.77908098,8.36811648 4.01858921,8.58679112 4.03142877,8.86937333 L4.04579166,9.04965974 L4.04579166,9.04965974 L4.05561184,9.14306831 C4.08115699,9.37324275 4.12016696,9.63097201 4.17536595,9.90913293 C4.33305822,10.7037349 4.5877953,11.4975999 4.95916033,12.2338321 C5.99938887,14.2961128 7.72884553,15.5143089 10.3755388,15.5226379 C13.0222544,15.5143089 14.7516441,14.2961128 15.7919173,12.2338321 C16.1632823,11.4975999 16.4179747,10.7037126 16.575667,9.90913293 C16.6124812,9.7236923 16.6421003,9.54733242 16.6653248,9.38216386 L16.7052821,9.04965974 L16.7052821,9.04965974 L16.7196041,8.86937333 L16.7196041,8.86937333 C16.7324884,8.58679115 16.9719966,8.3681165 17.2545788,8.38095607 Z M10.3356356,0.0142005962 C12.595216,0.0142005962 14.4399401,1.79169133 14.5496666,4.02028091 L14.5548302,4.23049229 L14.5548302,9.60067063 C14.5548302,11.9292998 12.6610717,13.8169623 10.3356356,13.8169623 C8.07605526,13.8169623 6.23133121,12.0395346 6.12160467,9.81088771 L6.11644109,9.60067061 L6.11644109,4.23049229 C6.11644109,1.90190776 8.01015495,0.0142005962 10.3356356,0.0142005962 Z M10.335658,1.03864201 C8.63472709,1.03864201 7.24010749,2.37267291 7.14594933,4.04955911 L7.1408825,4.23049229 L7.1408825,9.60067061 C7.1408825,11.3617461 8.57208154,12.7925209 10.335658,12.7925209 C12.0365888,12.7925209 13.4312084,11.4585316 13.5253666,9.78160809 L13.5304334,9.60067061 L13.5304334,4.23049229 C13.5304334,2.46946142 12.0992344,1.03864201 10.335658,1.03864201 Z" id="形状" fill-rule="nonzero"></path>
</g>
</g>
</g>
</g>
</svg>


Binary image files not shown (eight images added: 77 KiB, 8.3 KiB, 8.8 KiB, 9.7 KiB, 5.4 KiB, 7.3 KiB, 8.0 KiB, 6.7 KiB).

@ -0,0 +1,26 @@
<template>
<div className="speech_header">
<div className="speech_header_title">
飞桨-PaddleSpeech
</div>
<div className="speech_header_describe">
PaddleSpeech 是基于飞桨 PaddlePaddle 的语音方向的开源模型库用于语音和音频中的各种关键任务的开发欢迎大家Star收藏鼓励
</div>
<div className="speech_header_link_box">
<a href="https://github.com/PaddlePaddle/PaddleSpeech" className="speech_header_link" target='_blank' rel='noreferrer' key={index}>
前往Github
</a>
</div>
</div>
</template>
<script>
export default {
name:"Header"
}
</script>
<style lang="less" scoped>
@import "./style.less";
</style>

@ -0,0 +1,148 @@
.speech_header {
width: 1200px;
margin: 0 auto;
padding-top: 50px;
// background: url("../../../assets/image/在线体验-背景@2x.png") no-repeat;
box-sizing: border-box;
&::after {
content: "";
display: block;
clear: both;
visibility: hidden;
}
;
// background: pink;
.speech_header_title {
height: 57px;
font-family: PingFangSC-Medium;
font-size: 38px;
color: #000000;
letter-spacing: 0;
line-height: 57px;
font-weight: 500;
margin-bottom: 15px;
}
;
.speech_header_describe {
height: 26px;
font-family: PingFangSC-Regular;
font-size: 16px;
color: #575757;
line-height: 26px;
font-weight: 400;
margin-bottom: 24px;
}
;
.speech_header_link_box {
height: 40px;
margin-bottom: 40px;
display: flex;
align-items: center;
};
.speech_header_link {
display: block;
background: #2932E1;
width: 120px;
height: 40px;
line-height: 40px;
border-radius: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
text-align: center;
font-weight: 500;
margin-right: 20px;
// margin-bottom: 40px;
&:hover {
opacity: 0.9;
}
;
}
;
.speech_header_divider {
width: 1200px;
height: 1px;
background: #D1D1D1;
margin-bottom: 40px;
}
;
.speech_header_content_wrapper {
width: 1200px;
margin: 0 auto;
// background: pink;
margin-bottom: 20px;
display: flex;
justify-content: space-between;
flex-wrap: wrap;
.speech_header_module {
width: 384px;
background: #FFFFFF;
border: 1px solid rgba(224, 224, 224, 1);
box-shadow: 4px 8px 12px 0px rgba(0, 0, 0, 0.05);
border-radius: 16px;
padding: 30px 34px 0px 34px;
box-sizing: border-box;
display: flex;
margin-bottom: 40px;
.speech_header_background_img {
width: 46px;
height: 46px;
background-size: 46px 46px;
background-repeat: no-repeat;
background-position: center;
margin-right: 20px;
}
;
.speech_header_content {
padding-top: 4px;
margin-bottom: 32px;
.speech_header_module_title {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 20px;
color: #000000;
letter-spacing: 0;
line-height: 26px;
font-weight: 500;
margin-bottom: 10px;
}
;
.speech_header_module_introduce {
font-family: PingFangSC-Regular;
font-size: 16px;
color: #666666;
letter-spacing: 0;
font-weight: 400;
}
;
}
;
}
;
}
;
}
;

@ -0,0 +1,50 @@
<script setup>
import ChatT from './SubMenu/ChatBot/ChatT.vue'
import ASRT from './SubMenu/ASR/ASRT.vue'
import TTST from './SubMenu/TTS/TTST.vue'
import VPRT from './SubMenu/VPR/VPRT.vue'
import IET from './SubMenu/IE/IET.vue'
</script>
<template>
<div className="experience">
<div className="experience_wrapper">
<div className="experience_title">
功能体验
</div>
<div className="experience_describe">
体验前请允许浏览器获取麦克风权限
</div>
<div className="experience_content" >
<el-tabs
className="experience_tabs"
type="border-card"
>
<el-tab-pane label="语音聊天" key="1">
<ChatT></ChatT>
</el-tab-pane>
<el-tab-pane label="声纹识别" key="2">
<VPRT></VPRT>
</el-tab-pane>
<el-tab-pane label="语音识别" key="3">
<ASRT></ASRT>
</el-tab-pane>
<el-tab-pane label="语音合成" key="4">
<TTST></TTST>
</el-tab-pane>
<el-tab-pane label="语音指令" key="5">
<IET></IET>
</el-tab-pane>
</el-tabs>
</div>
</div>
</div>
</template>
<style lang="less">
@import "./style.less";
</style>

@ -0,0 +1,154 @@
<template>
<div class="asrbox">
<h5> ASR 体验</h5>
<div class="home" style="margin:1vw;">
<el-button :type="recoType" @click="startRecorderChunk()" style="margin:1vw;">{{ recoText }} (流式)</el-button>
<el-button :type="recoType" @click="startRecorder()" style="margin:1vw;">{{ recoText }} (端到端)</el-button>
</div>
<a> asr_stream: {{ streamAsrResult }}</a>
<br>
<a> asr_offline: {{ asrResultOffline }} </a>
</div>
</template>
<script>
import Recorder from 'js-audio-recorder'
const recorder_chunk = new Recorder({
  sampleBits: 16,    // sample bits: 8 or 16, default 16
  sampleRate: 16000, // sample rate: 11025/16000/22050/24000/44100/48000; Chrome defaults to 48000
  numChannels: 1,    // channels: 1 or 2, default 1
  compiling: true    // keep compiling so getNextData() can stream chunks
})
const recorder = new Recorder({
  sampleBits: 16,    // sample bits: 8 or 16, default 16
  sampleRate: 16000, // sample rate: 11025/16000/22050/24000/44100/48000; Chrome defaults to 48000
  numChannels: 1,    // channels: 1 or 2, default 1
  compiling: true
})
export default {
name: "ASR",
data(){
return {
streamAsrResult: '',
recoType: "primary",
recoText: "开始录音",
playType: "success",
asrResultOffline: '',
onReco: false,
ws:'',
}
},
mounted (){
    // open the streaming ASR websocket (the backend address is hard-coded on this debug page)
    this.ws = new WebSocket("ws://localhost:8010/ws/asr/onlineStream")
    // refresh the page whenever the server pushes a new partial transcript
    var _that = this
    this.ws.addEventListener('message', function (event) {
      var temp = JSON.parse(event.data);
      // console.log('ws message', event.data)
      if(temp.result && (temp.result != _that.streamAsrResult)){
        _that.streamAsrResult = temp.result
        _that.$nextTick(()=>{})
        console.log('stream asr result updated')
}
})
},
methods: {
startRecorder () {
if(!this.onReco){
recorder.clear()
recorder.start().then(() => {
}, (error) => {
console.log("录音出错");
})
this.onReco = true
this.recoType = "danger"
this.recoText = "结束录音"
this.$nextTick(()=>{
})
      } else {
        // stop recording and switch the button back
        recorder.stop()
        this.onReco = false
        this.recoType = "primary"
        this.recoText = "开始录音"
        this.$nextTick(()=>{})
        // get the recorded WAV blob and send it for offline recognition
        const wavs = recorder.getWAVBlob()
        this.uploadFile(wavs, "/api/asr/offline")
}
},
startRecorderChunk() {
      if(!this.onReco){
        // tell the server a new utterance is starting
        var start = JSON.stringify({name:"test.wav", "nbest":5, signal:"start"})
        this.ws.send(start)
        recorder_chunk.start().then(() => {
          this.chunkTimer = setInterval(() => {
            // fetch whatever PCM data was recorded since the last tick
            let newData = recorder_chunk.getNextData();
            if (!newData.length) {
              return;
            }
            // push the chunks to the server every 500 ms
            this.uploadChunk(newData)
          }, 500)
        }, (error) => {
          console.log("recording failed", error);
        })
        this.onReco = true
        this.recoType = "danger"
        this.recoText = "结束录音"
        this.$nextTick(()=>{
        })
      } else {
        // stop recording and stop streaming chunks
        clearInterval(this.chunkTimer)
        recorder_chunk.stop()
        // optionally tell the server the utterance ended:
        // var end = JSON.stringify({name:"test.wav", "nbest":5, signal:"end"})
        // this.ws.send(end)
        this.onReco = false
        this.recoType = "primary"
        this.recoText = "开始录音"
        this.$nextTick(()=>{})
        recorder_chunk.clear()
}
},
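    // forward each recorded PCM chunk to the server over the websocket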
uploadChunk(chunkDatas){
chunkDatas.forEach((chunkData) => {
this.ws.send(chunkData)
})
},
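    // POST a WAV blob to the given endpoint as multipart form data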
async uploadFile(file, post_url){
const formData = new FormData()
formData.append('files', file)
const result = await this.$http.post(post_url, formData);
if (result.data.code === 0) {
this.asrResultOffline = result.data.result
this.$nextTick(()=>{})
this.$message.success(result.data.message);
} else {
this.$message.error(result.data.message);
}
},
},
}
</script>
<style lang='less' scoped>
.asrbox {
border: 4px solid #F00;
// position: fixed;
top:40%;
width: 100%;
height: 20%;
overflow: auto;
}
</style>

@ -0,0 +1,38 @@
<script setup>
import AudioFileIdentification from "./AudioFile/AudioFileIdentification.vue"
import RealTime from "./RealTime/RealTime.vue"
import EndToEndIdentification from "./EndToEnd/EndToEndIdentification.vue";
</script>
<template>
<div class="speech_recognition">
<div class="speech_recognition_tabs">
<div class="frame"></div>
<el-tabs class="speech_recognition_mytabs" type="border-card">
<el-tab-pane label="实时语音识别" key="1">
<RealTime />
</el-tab-pane>
<el-tab-pane label="端到端识别" key="2">
<EndToEndIdentification />
</el-tab-pane>
<el-tab-pane label="音频文件识别" key="3">
<AudioFileIdentification />
</el-tab-pane>
</el-tabs>
</div>
</div>
</template>
<style lang="less" scoped>
@import "./style.less";
</style>

@ -0,0 +1,241 @@
<template>
<div class="audioFileIdentification">
<div v-if="uploadStatus === 0" class="public_recognition_speech">
      <!-- before upload -->
<el-upload
:multiple="false"
:accept="'.wav'"
:limit="1"
:auto-upload="false"
:on-change="handleChange"
:show-file-list="false"
>
<div class="upload_img">
<div class="upload_img_back"></div>
</div>
</el-upload>
<div class="speech_text">
上传文件
</div>
<div class="speech_text_prompt">
支持50秒内的.wav文件
</div>
</div>
    <!-- uploading -->
<div v-else-if="uploadStatus === 1" class="on_the_cross_speech">
<div class="on_the_upload_img">
<div class="on_the_upload_img_back"></div>
</div>
<div class="on_the_speech_text">
<span class="on_the_speech_loading"> <Spin indicator={antIcon} /></span> 上传中
</div>
</div>
<div v-else>
<div v-if="recognitionStatus === 0" class="public_recognition_speech_start">
<div class="public_recognition_speech_content">
<div
class="public_recognition_speech_title"
>
{{ filename }}
</div>
<div
class="public_recognition_speech_again"
@click="uploadAgain()"
>重新上传</div>
<div
class="public_recognition_speech_play"
@click="paly()"
>播放</div>
</div>
<div class="speech_promp"
@click="beginToIdentify()">
开始识别
</div>
</div>
<div v-else-if="recognitionStatus === 1" class="public_recognition_speech_identify">
<div class="public_recognition_speech_identify_box">
<div
class="public_recognition_speech_identify_back_img"
>
<a-spin />
</div>
<div
class="public_recognition__identify_the_promp"
>识别中</div>
</div>
</div>
<div v-else class="public_recognition_speech_identify_ahain">
<div class="public_recognition_speech_identify_box_btn">
<div
class="public_recognition__identify_the_btn"
@click="toIdentifyThe()"
>重新识别</div>
</div>
</div>
</div>
    <!-- arrow pointing to the result -->
<div class="public_recognition_point_to">
</div>
    <!-- recognition result -->
<div class="public_recognition_result">
<div>识别结果</div>
<div>{{ asrResult }}</div>
</div>
</div>
</template>
<script>
import { asrOffline } from '../../../../api/ApiASR'
// shared AudioContext for playing the uploaded audio (24 kHz matches the demo's TTS output)
let audioCtx = new (window.AudioContext || window.webkitAudioContext)({
latencyHint: 'interactive',
sampleRate: 24000,
});
export default {
name:"",
data(){
return {
uploadStatus : 0,
recognitionStatus : 0,
asrResult : "",
indicator : "",
filename: "",
upfile: ""
}
},
methods:{
    // a file was selected: keep it and jump straight to the "uploaded" state
    handleChange(file, fileList){
      this.uploadStatus = 2
      this.filename = file.name
      this.upfile = file
},
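    // wrap FileReader in a Promise so callers can await the base64 data URL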
readFile(file) {
return new Promise((resolve, reject) => {
const fileReader = new FileReader();
fileReader.onload = function () {
resolve(fileReader);
};
fileReader.onerror = function (err) {
reject(err);
};
fileReader.readAsDataURL(file);
});
},
    // reset everything so the user can pick a new file
uploadAgain(){
this.uploadStatus = 0
this.upfile = ""
this.filename = ""
this.asrResult = ""
},
    // decode a WAV ArrayBuffer and play it through the AudioContext
playAudioData(wav_buffer){
audioCtx.decodeAudioData(wav_buffer, buffer => {
let source = audioCtx.createBufferSource();
source.buffer = buffer
source.connect(audioCtx.destination);
source.start();
}, function (e) {
});
},
    // play back the uploaded file by decoding its base64 data URL
    async play(){
if(this.upfile){
let fileRes = ""
let fileString = ""
fileRes = await this.readFile(this.upfile.raw);
fileString = fileRes.result;
const audioBase64type = (fileString.match(/data:[^;]*;base64,/))?.[0] ?? '';
const isBase64 = !!fileString.match(/data:[^;]*;base64,/);
const uploadBase64 = fileString.substr(audioBase64type.length);
        // strip the data-URL prefix and decode the base64 payload into bytes
let typedArray = this.base64ToUint8Array(isBase64 ? uploadBase64 : undefined)
this.playAudioData(typedArray.buffer)
}
},
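    // decode a base64 string into a Uint8Array of raw bytes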
base64ToUint8Array(base64String){
const padding = '='.repeat((4 - base64String.length % 4) % 4);
const base64 = (base64String + padding)
.replace(/-/g, '+')
.replace(/_/g, '/');
const rawData = window.atob(base64);
const outputArray = new Uint8Array(rawData.length);
for (let i = 0; i < rawData.length; ++i) {
outputArray[i] = rawData.charCodeAt(i);
}
return outputArray;
},
    // upload the file to the offline ASR endpoint and show the transcript
    async beginToIdentify(){
      // switch to the "recognizing" state while the request is in flight
      this.recognitionStatus = 1
      const formData = new FormData();
      formData.append('files', this.upfile.raw);
      const result = await asrOffline(formData)
      this.recognitionStatus = 2
      if (result.data.code === 0) {
        this.$message.success("识别成功")
        this.asrResult = result.data.result
      } else {
        this.$message.error("识别失败")
      };
},
    // start over: clear the upload and the previous result
    toIdentifyThe(){
this.uploadStatus = 0
this.recognitionStatus = 0
this.asrResult = ""
}
}
}
</script>
<style lang="less" scoped>
@import "./style.less";
</style>

@ -0,0 +1,293 @@
.audioFileIdentification {
width: 1106px;
height: 270px;
// background-color: pink;
padding-top: 40px;
box-sizing: border-box;
display: flex;
  // before upload
.public_recognition_speech {
width: 295px;
height: 230px;
padding-top: 32px;
box-sizing: border-box;
    // upload button
.upload_img {
width: 116px;
height: 116px;
background: #2932E1;
border-radius: 50%;
margin-left: 98px;
cursor: pointer;
margin-bottom: 20px;
display: flex;
justify-content: center;
align-items: center;
.upload_img_back {
width: 34.38px;
height: 30.82px;
background: #2932E1;
background: url("../../../../assets/image/ic_大-上传文件.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 34.38px 30.82px;
cursor: pointer;
}
&:hover {
opacity: 0.9;
};
};
.speech_text {
height: 22px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
font-weight: 500;
margin-left: 124px;
margin-bottom: 10px;
};
.speech_text_prompt {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 14px;
color: #999999;
font-weight: 400;
margin-left: 84px;
};
};
  // uploading
.on_the_cross_speech {
width: 295px;
height: 230px;
padding-top: 32px;
box-sizing: border-box;
.on_the_upload_img {
width: 116px;
height: 116px;
background: #7278F5;
border-radius: 50%;
margin-left: 98px;
cursor: pointer;
margin-bottom: 20px;
display: flex;
justify-content: center;
align-items: center;
.on_the_upload_img_back {
width: 34.38px;
height: 30.82px;
background: #7278F5;
background: url("../../../../assets/image/ic_大-上传文件.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 34.38px 30.82px;
cursor: pointer;
};
};
.on_the_speech_text {
height: 22px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
font-weight: 500;
margin-left: 124px;
margin-bottom: 10px;
display: flex;
// justify-content: center;
align-items: center;
.on_the_speech_loading {
display: inline-block;
width: 16px;
height: 16px;
background: #7278F5;
// background: url("../../../../assets/image/ic_开始聊天.svg");
// background-repeat: no-repeat;
// background-position: center;
// background-size: 16px 16px;
margin-right: 8px;
};
};
};
  // ready to recognize
.public_recognition_speech_start {
width: 295px;
height: 230px;
padding-top: 32px;
box-sizing: border-box;
position: relative;
.public_recognition_speech_content {
width: 100%;
position: absolute;
top: 40px;
left: 50%;
transform: translateX(-50%);
display: flex;
justify-content: center;
align-items: center;
.public_recognition_speech_title {
height: 22px;
font-family: PingFangSC-Regular;
font-size: 16px;
color: #000000;
font-weight: 400;
};
.public_recognition_speech_again {
height: 22px;
font-family: PingFangSC-Regular;
font-size: 16px;
color: #2932E1;
font-weight: 400;
margin-left: 30px;
cursor: pointer;
};
.public_recognition_speech_play {
height: 22px;
font-family: PingFangSC-Regular;
font-size: 16px;
color: #2932E1;
font-weight: 400;
margin-left: 20px;
cursor: pointer;
};
};
.speech_promp {
position: absolute;
top: 112px;
left: 50%;
transform: translateX(-50%);
width: 142px;
height: 44px;
background: #2932E1;
border-radius: 22px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
text-align: center;
line-height: 44px;
font-weight: 500;
cursor: pointer;
};
};
  // recognizing
.public_recognition_speech_identify {
width: 295px;
height: 230px;
padding-top: 32px;
box-sizing: border-box;
position: relative;
.public_recognition_speech_identify_box {
width: 143px;
height: 44px;
background: #7278F5;
border-radius: 22px;
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%,-50%);
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
.public_recognition_speech_identify_back_img {
width: 16px;
height: 16px;
// background: #7278F5;
// background: url("../../../../assets/image/ic_开始聊天.svg");
// background-repeat: no-repeat;
// background-position: center;
// background-size: 16px 16px;
};
.public_recognition__identify_the_promp {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
margin-left: 12px;
};
};
};
  // re-recognize
.public_recognition_speech_identify_ahain {
width: 295px;
height: 230px;
padding-top: 32px;
box-sizing: border-box;
position: relative;
cursor: pointer;
.public_recognition_speech_identify_box_btn {
width: 143px;
height: 44px;
background: #2932E1;
border-radius: 22px;
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%,-50%);
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
.public_recognition__identify_the_btn {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
};
};
};
  // arrow
.public_recognition_point_to {
width: 47px;
height: 67px;
background: url("../../../../assets/image/步骤-箭头切图@2x.png") no-repeat;
background-position: center;
background-size: 47px 67px;
margin-top: 91px;
margin-right: 67px;
};
  // recognition result
.public_recognition_result {
width: 680px;
height: 230px;
background: #FAFAFA;
padding: 40px 50px 0px 50px;
div {
&:nth-of-type(1) {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #666666;
line-height: 26px;
font-weight: 500;
margin-bottom: 20px;
};
&:nth-of-type(2) {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #666666;
line-height: 26px;
font-weight: 500;
};
};
};
};

@ -0,0 +1,92 @@
<template>
<div class="endToEndIdentification">
<div class="public_recognition_speech">
<div v-if="onReco">
        <!-- stop recording -->
<div @click="endRecorder()" class="endToEndIdentification_end_recorder_img">
<div class='endToEndIdentification_end_recorder_img_back'></div>
</div>
</div>
<div v-else>
<div @click="startRecorder()" class="endToEndIdentification_start_recorder_img"></div>
</div>
<div class="endToEndIdentification_prompt" >
<div v-if="onReco">
结束识别
</div>
<div v-else>
开始识别
</div>
</div>
<div class="speech_text_prompt">
停止录音后得到识别结果
</div>
</div>
<div class="public_recognition_point_to"></div>
<div class="public_recognition_result">
<div>识别结果</div>
<div> {{asrResult}} </div>
</div>
</div>
</template>
<script>
import Recorder from 'js-audio-recorder'
import { asrOffline } from '../../../../api/ApiASR'
const recorder = new Recorder({
  sampleBits: 16,    // sample bits: 8 or 16, default 16
  sampleRate: 16000, // sample rate: 11025/16000/22050/24000/44100/48000; Chrome defaults to 48000
  numChannels: 1,    // channels: 1 or 2, default 1
  compiling: true    // keep compiling so getNextData() can stream chunks
})
export default {
data () {
return {
onReco: false,
asrResult: "",
}
},
methods: {
    // start a fresh recording
startRecorder(){
this.onReco = true
recorder.clear()
recorder.start()
},
    // stop recording, then send the WAV blob for offline recognition
    endRecorder(){
      recorder.stop()
      this.onReco = false
      const wavs = recorder.getWAVBlob()
this.uploadFile(wavs)
},
    // POST the WAV blob to the offline ASR endpoint
async uploadFile(file){
const formData = new FormData()
formData.append('files', file)
const result = await asrOffline(formData)
if (result.data.code === 0) {
this.asrResult = result.data.result
// this.$nextTick(()=>{})
this.$message.success(result.data.message);
} else {
this.$message.error(result.data.message);
}
},
}
}
</script>
<style lang="less" scoped>
@import "./style.less";
</style>

@ -0,0 +1,114 @@
.endToEndIdentification {
width: 1106px;
height: 270px;
// background-color: pink;
padding-top: 40px;
box-sizing: border-box;
display: flex;
  // start recognition
.public_recognition_speech {
width: 295px;
height: 230px;
padding-top: 32px;
box-sizing: border-box;
.endToEndIdentification_start_recorder_img {
width: 116px;
height: 116px;
background: #2932E1;
background: url("../../../../assets/image/ic_开始聊天.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 116px 116px;
margin-left: 98px;
cursor: pointer;
margin-bottom: 20px;
&:hover {
background: url("../../../../assets/image/ic_开始聊天_hover.svg");
};
};
.endToEndIdentification_end_recorder_img {
width: 116px;
height: 116px;
background: #2932E1;
border-radius: 50%;
display: flex;
justify-content: center;
align-items: center;
margin-left: 98px;
margin-bottom: 20px;
cursor: pointer;
.endToEndIdentification_end_recorder_img_back {
width: 50px;
height: 50px;
background: url("../../../../assets/image/ic_大-声音波浪.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 50px 50px;
&:hover {
opacity: 0.9;
};
};
};
.endToEndIdentification_prompt {
height: 22px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
font-weight: 500;
margin-left: 124px;
margin-bottom: 10px;
};
.speech_text_prompt {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 14px;
color: #999999;
font-weight: 400;
margin-left: 90px;
};
};
  // arrow
.public_recognition_point_to {
width: 47px;
height: 67px;
background: url("../../../../assets/image/步骤-箭头切图@2x.png") no-repeat;
background-position: center;
background-size: 47px 67px;
margin-top: 91px;
margin-right: 67px;
};
  // recognition result
.public_recognition_result {
width: 680px;
height: 230px;
background: #FAFAFA;
padding: 40px 50px 0px 50px;
div {
&:nth-of-type(1) {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #666666;
line-height: 26px;
font-weight: 500;
margin-bottom: 20px;
};
&:nth-of-type(2) {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #666666;
line-height: 26px;
font-weight: 500;
};
};
};
};

@ -0,0 +1,128 @@
<template>
<div class="realTime">
<div class="public_recognition_speech">
<div v-if="onReco">
        <!-- stop recording -->
<div @click="endRecorder()" class="endToEndIdentification_end_recorder_img">
<div class='endToEndIdentification_end_recorder_img_back'></div>
</div>
</div>
<div v-else>
<div @click="startRecorder()" class="endToEndIdentification_start_recorder_img"></div>
</div>
<div class="endToEndIdentification_prompt" >
<div v-if="onReco">
结束识别
</div>
<div v-else>
开始识别
</div>
</div>
<div class="speech_text_prompt">
实时得到识别结果
</div>
</div>
<div class="public_recognition_point_to"></div>
<div class="public_recognition_result">
<div>识别结果</div>
<div> {{asrResult}} </div>
</div>
</div>
</template>
<script>
import Recorder from 'js-audio-recorder'
import { apiURL } from '../../../../api/API'
const recorder = new Recorder({
  sampleBits: 16,    // sample bits: 8 or 16, default 16
  sampleRate: 16000, // sample rate: 11025/16000/22050/24000/44100/48000; Chrome defaults to 48000
  numChannels: 1,    // channels: 1 or 2, default 1
  compiling: true    // keep compiling so getNextData() can stream chunks
})
export default {
data () {
return {
onReco: false,
asrResult: "",
wsUrl: "",
ws: ""
}
},
mounted () {
    this.wsUrl = apiURL.ASR_SOCKET_RECORD
    this.ws = new WebSocket(this.wsUrl)
    // report success only once the socket is actually open
    this.ws.addEventListener('open', () => {
      this.$message.success("实时识别 Websocket 连接成功")
    })
    var _that = this
    this.ws.addEventListener('message', function (event) {
      var temp = JSON.parse(event.data);
      // console.log('ws message', event.data)
      if(temp.result && (temp.result != _that.asrResult)){
        _that.asrResult = temp.result
        _that.$nextTick(()=>{})
      }
})
},
methods: {
    // start recording and stream PCM chunks over the websocket
    startRecorder(){
      // bail out if the websocket never connected
      if(this.ws.readyState != this.ws.OPEN){
        this.$message.error("websocket 连接失败,请检查连接地址是否正确")
        return
      }
      this.onReco = true
      // tell the server a new utterance is starting
      var start = JSON.stringify({name:"test.wav", "nbest":5, signal:"start"})
      this.ws.send(start)
      recorder.start().then(() => {
        this.chunkTimer = setInterval(() => {
          // fetch whatever PCM data was recorded since the last tick
          let newData = recorder.getNextData();
          if (!newData.length) {
            return;
          }
          // push the chunks to the server every 300 ms
          this.uploadChunk(newData)
        }, 300)
      }, (error) => {
        console.log("recording failed", error);
      })
},
    // stop recording and stop streaming chunks
    endRecorder(){
      clearInterval(this.chunkTimer)
      recorder.stop()
      this.onReco = false
      recorder.clear()
},
    // forward each recorded PCM chunk to the server over the websocket
uploadChunk(chunkDatas){
chunkDatas.forEach((chunkData) => {
this.ws.send(chunkData)
})
},
},
}
</script>
<style lang="less" scoped>
@import "./style.less";
</style>

@ -0,0 +1,112 @@
.realTime{
width: 1106px;
height: 270px;
// background-color: pink;
padding-top: 40px;
box-sizing: border-box;
display: flex;
  // start recognition
.public_recognition_speech {
width: 295px;
height: 230px;
padding-top: 32px;
box-sizing: border-box;
.endToEndIdentification_start_recorder_img {
width: 116px;
height: 116px;
background: #2932E1;
background: url("../../../../assets/image/ic_开始聊天.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 116px 116px;
margin-left: 98px;
cursor: pointer;
margin-bottom: 20px;
&:hover {
background: url("../../../../assets/image/ic_开始聊天_hover.svg");
};
};
.endToEndIdentification_end_recorder_img {
width: 116px;
height: 116px;
background: #2932E1;
border-radius: 50%;
display: flex;
justify-content: center;
align-items: center;
margin-left: 98px;
margin-bottom: 20px;
cursor: pointer;
.endToEndIdentification_end_recorder_img_back {
width: 50px;
height: 50px;
background: url("../../../../assets/image/ic_大-声音波浪.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 50px 50px;
&:hover {
opacity: 0.9;
};
};
};
.endToEndIdentification_prompt {
height: 22px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
font-weight: 500;
margin-left: 124px;
margin-bottom: 10px;
};
.speech_text_prompt {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 14px;
color: #999999;
font-weight: 400;
margin-left: 105px;
};
};
  // arrow
.public_recognition_point_to {
width: 47px;
height: 67px;
background: url("../../../../assets/image/步骤-箭头切图@2x.png") no-repeat;
background-position: center;
background-size: 47px 67px;
margin-top: 91px;
margin-right: 67px;
};
  // recognition result
.public_recognition_result {
width: 680px;
height: 230px;
background: #FAFAFA;
padding: 40px 50px 0px 50px;
div {
&:nth-of-type(1) {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #666666;
line-height: 26px;
font-weight: 500;
margin-bottom: 20px;
};
&:nth-of-type(2) {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #666666;
line-height: 26px;
font-weight: 500;
};
};
};
};

@ -0,0 +1,76 @@
.speech_recognition {
width: 1200px;
height: 410px;
background: #FFFFFF;
padding: 40px 50px 50px 44px;
position: relative;
.frame {
width: 605px;
height: 50px;
border: 1px solid rgba(238,238,238,1);
border-radius: 25px;
position: absolute;
}
.speech_recognition_mytabs {
.ant-tabs-tab {
position: relative;
display: inline-flex;
align-items: center;
// padding: 12px 0;
font-size: 14px;
background: transparent;
border: 0;
outline: none;
cursor: pointer;
padding: 12px 26px;
box-sizing: border-box;
}
.ant-tabs-tab-active {
height: 50px;
background: #EEEFFD;
border-radius: 25px;
padding: 12px 26px;
box-sizing: border-box;
};
.ant-tabs-ink-bar {
position: absolute;
background: transparent !important;
pointer-events: none;
}
.experience .experience_wrapper .experience_content .experience_tabs .ant-tabs-nav::before {
position: absolute;
right: 0;
left: 0;
border-bottom: 1px solid transparent !important;
// border: none;
content: '';
}
.ant-tabs-top > .ant-tabs-nav::before, .ant-tabs-bottom > .ant-tabs-nav::before, .ant-tabs-top > div > .ant-tabs-nav::before, .ant-tabs-bottom > div > .ant-tabs-nav::before {
position: absolute;
right: 0;
left: 0;
border-bottom: 1px solid transparent !important;
// border: none;
content: '';
}
.ant-tabs-nav::before {
position: absolute;
right: 0;
left: 0;
border-bottom: 1px solid transparent !important;
content: '';
};
};
};

@ -0,0 +1,298 @@
<template>
<div class="chatbox">
<h3>语音聊天</h3>
<div class="home" style="margin:1vw;">
<el-button :type="recoType" @click="startRecorder()" style="margin:1vw;">{{ recoText }}</el-button>
<!-- <el-button :type="playType" @click="playRecorder()" style="margin:1vw;"> {{ playText }}</el-button> -->
<el-button :type="envType" @click="envRecorder()" style="margin:1vw;"> {{ envText }}</el-button>
<!-- <el-button :type="envType" @click="getTts(ttsd)" style="margin:1vw;"> TTS </el-button> -->
<el-button type="warning" @click="clearChat()" style="margin:1vw;"> 清空聊天</el-button>
</div>
<div v-for="Result in allResultList">
<h3>{{Result}}</h3>
</div>
</div>
</template>
<script>
import Recorder from 'js-audio-recorder'
const recorder = new Recorder({
  sampleBits: 16,    // sample bits: 8 or 16, default 16
  sampleRate: 16000, // sample rate: 11025/16000/22050/24000/44100/48000; Chrome defaults to 48000
  numChannels: 1,    // channels: 1 or 2, default 1
  compiling: true    // keep compiling so getNextData() can stream chunks
})
export default {
name: 'home',
data () {
return {
recoType: "primary",
recoText: "开始录音",
playType: "success",
playText: "播放录音",
envType: "success",
envText: "环境采样",
asrResultList: [],
nlpResultList: [],
ttsResultList: [],
allResultList: [],
webSocketRes: "websocket",
drawRecordId: null,
onReco: false,
onPlay: false,
onRecoPause: false,
ws: '',
ttsd: "你的名字叫什么,你的名字叫什么,你的名字叫什么你的名字叫什么",
audioCtx: '',
source: '',
typedArray: '',
ttsResult: '',
}
},
mounted () {
    // AudioContext used to play TTS audio (24 kHz matches the synthesizer output)
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    this.audioCtx = new AudioContext({
      latencyHint: 'interactive',
      sampleRate: 24000,
    });
    // reset the play button once playback of the recording ends
    recorder.onplayend = () => {
      this.onPlay = false
      this.playText = "播放录音"
      this.playType = "success"
      this.$nextTick(()=>{})
    }
    // open the chat websocket (the backend address is hard-coded on this debug page)
    this.ws = new WebSocket("ws://localhost:8010/ws/asr/offlineStream");
    // each message is a finished ASR utterance; feed it to the NLP chatbot
var _that = this
this.ws.addEventListener('message', function (event) {
_that.allResultList.push("asr:" + event.data)
_that.$nextTick(()=>{})
_that.getNlp(event.data)
})
},
methods: {
    // clear the chat history
clearChat(){
this.allResultList = []
},
    // toggle chat recording on and off
    startRecorder () {
      if(!this.onReco){
        this.resumeRecordOnline()
        recorder.start().then(() => {
          this.chunkTimer = setInterval(() => {
            // fetch whatever PCM data was recorded since the last tick
            let newData = recorder.getNextData();
            if (!newData.length) {
              return;
            }
            // push the chunks to the server every 500 ms
            this.uploadChunk(newData)
          }, 500)
        }, (error) => {
          console.log("recording failed", error);
})
this.onReco = true
this.recoType = "danger"
this.recoText = "结束录音"
this.$nextTick(()=>{
})
      } else {
        // stop recording and stop streaming chunks
        clearInterval(this.chunkTimer)
        recorder.stop()
        this.onReco = false
        this.recoType = "primary"
        this.recoText = "开始录音"
        this.$nextTick(()=>{})
        recorder.clear()
        // tell the backend to stop consuming the online stream
        this.stopRecordOnline()
}
},
    // sample the environment noise and upload it to /api/asr/collectEnv
envRecorder () {
if(!this.onReco){
recorder.start().then(() => {
}, (error) => {
console.log("录音出错");
})
this.onReco = true
this.envType = "danger"
this.envText = "结束采样"
this.$nextTick(()=>{
})
} else {
        // stop sampling and upload the recorded WAV
recorder.stop()
this.onReco = false
this.envType = "success"
this.envText = "环境采样"
this.$nextTick(()=>{})
const wavs = recorder.getWAVBlob()
this.uploadFile(wavs, "/api/asr/collectEnv")
}
},
    // play back / stop the local recording
playRecorder () {
if(!this.onPlay){
        // start playback
recorder.play()
this.onPlay = true
this.playText = "结束播放"
this.playType = "warning"
this.$nextTick(()=>{})
} else {
recorder.stopPlay()
this.onPlay = false
this.playText = "播放录音"
this.playType = "success"
this.$nextTick(()=>{})
}
},
    // POST a WAV blob to the given endpoint
async uploadFile(file, post_url){
const formData = new FormData()
formData.append('files', file)
const result = await this.$http.post(post_url, formData);
if (result.data.code === 0) {
this.asrResultList.push(result.data.result)
// this.$message.success(result.data.message);
} else {
this.$message.error(result.data.message);
}
},
    // forward each recorded PCM chunk over the websocket
async uploadChunk(chunkDatas) {
chunkDatas.forEach((chunkData) => {
this.ws.send(chunkData)
})
},
    // ask the backend to stop consuming PCM chunks
async stopRecordOnline(){
const result = await this.$http.get("/api/asr/stopRecord");
if (result.data.code === 0) {
console.log("Online 录音停止成功")
} else {
// console.log("chunk ")
}
},
    // ask the backend to resume consuming PCM chunks
    async resumeRecordOnline(){
      const result = await this.$http.get("/api/asr/resumeRecord");
      if (result.data.code === 0) {
        console.log("online recording resumed")
      } else {
        // resume request failed; nothing extra to do in the demo
}
},
    // send the transcript to the chatbot, then speak its reply
    async getNlp(asrText){
      // pause recording so the bot's reply is not picked up by the mic
      this.onRecoPause = true
      recorder.pause()
      this.stopRecordOnline()
      console.log('recording paused')
const result = await this.$http.post("/api/nlp/chat", { chat: asrText});
if (result.data.code === 0) {
this.allResultList.push("nlp:" + result.data.result)
this.getTts(result.data.result)
// this.$message.success(result.data.message);
} else {
this.$message.error(result.data.message);
}
// console.log("")
},
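    // decode a base64 string into a Uint8Array of raw bytes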
base64ToUint8Array(base64String) {
const padding = '='.repeat((4 - base64String.length % 4) % 4);
const base64 = (base64String + padding)
.replace(/-/g, '+')
.replace(/_/g, '/');
const rawData = window.atob(base64);
const outputArray = new Uint8Array(rawData.length);
for (let i = 0; i < rawData.length; ++i) {
outputArray[i] = rawData.charCodeAt(i);
}
return outputArray;
},
    // synthesize the reply; the server returns base64-encoded WAV data
    async getTts(nlpText){
this.ttsResult = await this.$http.post("/api/tts/offline", { text : nlpText});
this.typedArray = this.base64ToUint8Array(this.ttsResult.data.result)
// console.log("chat", this.typedArray.buffer)
this.playAudioData( this.typedArray.buffer )
},
    // decode a WAV ArrayBuffer and play it, resuming recording when done
playAudioData( wav_buffer ) {
this.audioCtx.decodeAudioData(wav_buffer, buffer => {
this.source = this.audioCtx.createBufferSource();
this.source.onended = () => {
            // playback finished: resume listening for the user's next turn
            if(this.onRecoPause){
              console.log("resuming recording")
              this.onRecoPause = false
              recorder.resume()
this.resumeRecordOnline()
}
}
this.source.buffer = buffer;
this.source.connect(this.audioCtx.destination);
this.source.start();
}, function(e) {
Recorder.throwError(e);
});
}
},
}
</script>
<style lang='less' scoped>
.chatbox {
border: 4px solid #F00;
// position: fixed;
width: 100%;
height: 20%;
overflow: auto;
}
</style>

@ -0,0 +1,255 @@
<template>
<div className="voice_chat">
<!-- 开始聊天 -->
<div v-if="!onReco" className="voice_chat_wrapper">
<div className="voice_chat_btn"
@click="startRecorder()"
></div>
<div className="voice_chat_btn_title">点击开始聊天</div>
<div className="voice_chat_btn_prompt">聊天前请允许浏览器获取麦克风权限</div>
</div>
<!-- 结束聊天 -->
<div v-else className="voice_chat_dialog_wrapper">
<div className="dialog_box" >
<ul className="dialog_content" >
<li id="speech_list" :key="index">
<div className="dialog_content_img_pp"></div>
<div className="dialog_content_dialogue_pp">
{{ nlpResult }}
</div>
</li>
<li id="speech_list" className="move_dialogue">
<div className="dialog_content_dialogue_user">
{{ asrResult }}
</div>
<div className="dialog_content_img_user"></div>
</li>
</ul>
</div>
<div className="btn_end_dialog"
@click="startRecorder()"
>
<span></span>
<span>结束聊天</span>
</div>
</div>
</div>
</template>
<script>
import { asrResumeRecord, asrStopRecord } from '../../../api/ApiASR'
import { apiURL } from '../../../api/API'
import Recorder from 'js-audio-recorder'
import { nlpChat } from '../../../api/ApiNLP';
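// AudioContext used for TTS playback; 24 kHz matches the synthesizer output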
const audioCtx = new (window.AudioContext || window.webkitAudioContext)({
latencyHint: 'interactive',
sampleRate: 24000,
});
const recorder = new Recorder({
  sampleBits: 16,    // sample bits: 8 or 16, default 16
  sampleRate: 16000, // sample rate: 11025/16000/22050/24000/44100/48000; Chrome defaults to 48000
  numChannels: 1,    // channels: 1 or 2, default 1
  compiling: true    // keep compiling so getNextData() can stream chunks
})
export default {
data () {
return {
onReco: false,
allResultList: [],
asrResult: "",
nlpResult: "",
ws:"",
initChatText: "欢迎使用飞桨语音对话系统,试试和我说话吧",
speakingText: "我正在说话...",
stopText: "等待音频播放结束..."
}
},
mounted () {
    // open the chat websocket; each message is a finished ASR utterance
this.ws = new WebSocket(apiURL.CHAT_SOCKET_RECORD);
var _that = this
this.ws.addEventListener('message', function (event) {
_that.allResultList.push(
{
value : event.data,
name : "asr"
}
)
_that.asrResult = event.data
_that.$nextTick(()=>{})
_that.getNlp(event.data)
})
},
methods: {
    // toggle the chat session on and off
    startRecorder(){
      this.allResultList = []
      if(!this.onReco){
        this.asrResult = this.speakingText
        this.resumeRecordOnline()
        recorder.start().then(() => {
          this.chunkTimer = setInterval(() => {
            // js-audio-recorder 1.0.7+: getNextData() returns the chunks recorded since the last call
            let newData = recorder?.getNextData();
            if (!newData.length) {
              return;
            }
            // push the chunks to the server every 500 ms
            this.uploadChunk(newData)
          }, 500)
        }, () => {
          console.log("recording failed");
})
this.onReco = true
// NLP
this.initNLP()
      } else {
        // end the chat: stop recording and streaming
        clearInterval(this.chunkTimer)
        recorder.stop()
        this.onReco = false
        this.asrResult = ""
        this.stopRecordOnline()
}
},
    // forward each recorded PCM chunk over the websocket
uploadChunk(chunkDatas){
chunkDatas.forEach((chunkData) => {
this.ws.send(chunkData)
})
},
    // ask the backend to resume consuming PCM chunks
async resumeRecordOnline(){
const result = await asrResumeRecord();
},
    // ask the backend to stop consuming PCM chunks
async stopRecordOnline(){
const result = await asrStopRecord();
},
    // clear the chat history
clearChat(){
this.allResultList = []
},
    // greet the user with the opening line when the chat starts
    initNLP(){
      // pause recording while the greeting is being played
      this.onRecoPause = true
      recorder.pause()
      this.stopRecordOnline()
      console.log('recording paused')
      this.asrResult = this.stopText
      this.nlpResult = this.initChatText
      this.getTts(this.initChatText)
},
    // send the transcript to the chatbot, then speak its reply
    async getNlp(text){
      // pause recording so the bot's reply is not picked up by the mic
      this.onRecoPause = true
      recorder.pause()
      this.stopRecordOnline()
      console.log('recording paused')
const result = await nlpChat(text);
if (result.data.code === 0) {
this.nlpResult = result.data.result
this.getTts(result.data.result)
} else {
this.$message.error(result.data.message);
}
},
    // synthesize the reply; the server returns base64-encoded WAV data
    async getTts(nlpText){
var result = await this.$http.post("/api/tts/offline", { text : nlpText});
if (result.data.code === 0) {
var typedArray = this.base64ToUint8Array(result.data.result)
this.playAudioData( typedArray.buffer )
} else {
this.$message.error(result.data.message)
}
},
    // decode a base64 string into a Uint8Array of raw bytes
base64ToUint8Array(base64String) {
const padding = '='.repeat((4 - base64String.length % 4) % 4);
const base64 = (base64String + padding)
.replace(/-/g, '+')
.replace(/_/g, '/');
const rawData = window.atob(base64);
const outputArray = new Uint8Array(rawData.length);
for (let i = 0; i < rawData.length; ++i) {
outputArray[i] = rawData.charCodeAt(i);
}
return outputArray;
},
    // decode a WAV ArrayBuffer and play it, resuming recording when done
playAudioData( wav_buffer ) {
var _that = this
audioCtx.decodeAudioData(wav_buffer, buffer => {
var source = audioCtx.createBufferSource();
source.onended = () => {
          // playback finished: resume listening for the user's next turn
          if(_that.onRecoPause){
            console.log("resuming recording")
            this.onRecoPause = false
            recorder.resume()
            this.asrResult = this.speakingText
this.resumeRecordOnline()
}
}
source.buffer = buffer;
source.connect(audioCtx.destination);
source.start();
}, function(e) {
Recorder.throwError(e);
});
},
}
}
</script>
<style lang="less" scoped>
@import "./style.less";
</style>

@ -0,0 +1,181 @@
.voice_chat {
width: 1200px;
height: 410px;
background: #FFFFFF;
position: relative;
  // start-chat state
.voice_chat_wrapper {
top: 50%;
left: 50%;
transform: translate(-50%,-50%);
position: absolute;
.voice_chat_btn {
width: 116px;
height: 116px;
margin-left: 54px;
// background: #2932E1;
border-radius: 50%;
cursor: pointer;
background: url("../../../assets/image/ic_开始聊天.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 116px 116px;
margin-bottom: 17px;
&:hover {
width: 116px;
height: 116px;
background: url("../../../assets/image/ic_开始聊天_hover.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 116px 116px;
};
};
.voice_chat_btn_title {
height: 22px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
letter-spacing: 0;
text-align: center;
line-height: 22px;
font-weight: 500;
margin-bottom: 10px;
};
.voice_chat_btn_prompt {
height: 24px;
font-family: PingFangSC-Regular;
font-size: 14px;
color: #999999;
letter-spacing: 0;
text-align: center;
line-height: 24px;
font-weight: 400;
};
};
.voice_chat_wrapper::after {
content: "";
display: block;
clear: both;
visibility: hidden;
};
  // in-chat (end-chat) state
.voice_chat_dialog_wrapper {
width: 1200px;
height: 410px;
background: #FFFFFF;
position: relative;
.dialog_box {
width: 100%;
height: 410px;
padding: 50px 198px 82px 199px;
box-sizing: border-box;
.dialog_content {
width: 100%;
height: 268px;
// background: rgb(113, 144, 145);
padding: 0px;
overflow: auto;
li {
list-style-type: none;
margin-bottom: 33px;
display: flex;
align-items: center;
        &:last-of-type {
margin-bottom: 0px;
};
.dialog_content_img_pp {
width: 60px;
height: 60px;
// transform: scaleX(-1);
background: url("../../../assets/image/飞桨头像@2x.png");
background-repeat: no-repeat;
background-position: center;
background-size: 60px 60px;
margin-right: 20px;
};
.dialog_content_img_user {
width: 60px;
height: 60px;
transform: scaleX(-1);
background: url("../../../assets/image/用户头像@2x.png");
background-repeat: no-repeat;
background-position: center;
background-size: 60px 60px;
margin-left: 20px;
};
.dialog_content_dialogue_pp {
height: 50px;
background: #F5F5F5;
border-radius: 25px;
font-family: PingFangSC-Regular;
font-size: 14px;
color: #000000;
line-height: 50px;
font-weight: 400;
padding: 0px 16px;
box-sizing: border-box;
};
.dialog_content_dialogue_user {
height: 50px;
background: rgba(41,50,225,0.90);
border-radius: 25px;
font-family: PingFangSC-Regular;
font-size: 14px;
color: #FFFFFF;
line-height: 50px;
font-weight: 400;
padding: 0px 16px;
box-sizing: border-box;
};
};
};
.move_dialogue {
justify-content: flex-end;
};
};
.btn_end_dialog {
width: 124px;
height: 42px;
line-height: 42px;
background: #FFFFFF;
box-shadow: 0px 4px 16px 0px rgba(0,0,0,0.09);
border-radius: 21px;
padding: 0px 24px;
box-sizing: border-box;
position: absolute;
left: 50%;
bottom: 40px;
transform: translateX(-50%);
display: flex;
justify-content: space-between;
align-items: center;
cursor: pointer;
span {
display: inline-block;
&:nth-of-type(1) {
width: 16px;
height: 16px;
background: url("../../../assets/image/ic_小-结束.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 16px 16px;
};
&:nth-of-type(2) {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 14px;
color: #F33E3E;
text-align: center;
font-weight: 400;
line-height: 20px;
margin-left: 4px;
};
};
};
};
};

@ -0,0 +1,125 @@
<template>
<div class="iebox">
<h1>信息抽取体验</h1>
<el-button :type="recoType" @click="startRecorder()" style="margin:1vw;">{{ recoText }}</el-button>
<h3>识别结果: {{ asrResultOffline }}</h3>
    <h4>时间:{{ time }}</h4>
    <h4>出发地:{{ outset }}</h4>
    <h4>目的地:{{ destination }}</h4>
    <h4>费用:{{ amount }}</h4>
</div>
</template>
<script>
import Recorder from 'js-audio-recorder'
const recorder = new Recorder({
  sampleBits: 16,    // sample bits: 8 or 16, default 16
  sampleRate: 16000, // sample rate: 11025/16000/22050/24000/44100/48000; Chrome defaults to 48000
  numChannels: 1,    // channels: 1 or 2, default 1
  compiling: true    // keep compiling so getNextData() can stream chunks
})
export default {
name: "IE",
data(){
return {
streamAsrResult: '',
recoType: "primary",
recoText: "开始录音",
playType: "success",
asrResultOffline: '',
onReco: false,
ws:'',
time: '',
outset: '',
destination: '',
amount: ''
}
},
methods: {
startRecorder () {
if(!this.onReco){
recorder.clear()
recorder.start().then(() => {
}, (error) => {
console.log("录音出错");
})
this.onReco = true
this.recoType = "danger"
this.recoText = "结束录音"
this.time = ''
this.outset=''
this.destination = ''
this.amount = ''
this.$nextTick(()=>{
})
} else {
        // stop recording and switch the button back
        recorder.stop()
        this.onReco = false
        this.recoType = "primary"
        this.recoText = "开始录音"
        this.$nextTick(()=>{})
        // get the recorded WAV blob and send it for offline recognition
        const wavs = recorder.getWAVBlob()
this.uploadFile(wavs, "/api/asr/offline")
}
},
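    // POST the WAV blob to the offline ASR endpoint, then run information extraction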
async uploadFile(file, post_url){
const formData = new FormData()
formData.append('files', file)
const result = await this.$http.post(post_url, formData);
if (result.data.code === 0) {
this.asrResultOffline = result.data.result
this.$nextTick(()=>{})
this.$message.success(result.data.message);
this.informationExtract()
} else {
this.$message.error(result.data.message);
}
},
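    // query the information-extraction endpoint and pull the fields out of the result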
async informationExtract(){
const postdata = {
chat: this.asrResultOffline
}
const result = await this.$http.post('/api/nlp/ie', postdata)
console.log("ie", result)
if(result.data.result[0]['时间']){
this.time = result.data.result[0]['时间'][0]['text']
}
if(result.data.result[0]['出发地']){
this.outset = result.data.result[0]['出发地'][0]['text']
}
if(result.data.result[0]['目的地']){
this.destination = result.data.result[0]['目的地'][0]['text']
}
if(result.data.result[0]['费用']){
this.amount = result.data.result[0]['费用'][0]['text']
}
}
},
}
</script>
<style lang="less" scoped>
.iebox {
border: 4px solid #F00;
top:80%;
width: 100%;
height: 20%;
overflow: auto;
}
</style>

@ -0,0 +1,166 @@
<template>
<div class="voice_commands">
<div class="voice_commands_traffic">
<div class="voice_commands_traffic_title">交通费报销</div>
<div class="voice_commands_traffic_wrapper">
<div class="voice_commands_traffic_wrapper_move">
<div class="traffic_btn_img_btn">
            <!-- stop recording -->
<div v-if="onReco"
@click="endRecorder()"
class="end_recorder_img"
></div>
            <!-- start recording -->
<div v-else
@click= "startRecorder()"
class="start_recorder_img"
></div>
</div>
<div class="traffic_btn_prompt">
<div v-if="onReco">
结束识别
</div>
<div v-else>
开始识别
</div>
</div>
<div class="traffic_btn_list">试试说早上八点我从广州到北京花了四百二十六元</div>
</div>
</div>
</div>
<div class="voice_point_to"></div>
    <!-- recognition result -->
<div class="voice_commands_IdentifyTheResults">
<div class="voice_commands_IdentifyTheResults_title">
识别结果
</div>
<div v-if="postStatus" class="voice_commands_IdentifyTheResults_show">
<div class="voice_commands_IdentifyTheResults_show_title">
{{ asrResult }}
</div>
<div class="oice_commands_IdentifyTheResults_show_time">
时间{{voiceCommandsData.time}}
</div>
<div class="oice_commands_IdentifyTheResults_show_money">
费用{{voiceCommandsData.amount}}
</div>
<div class="oice_commands_IdentifyTheResults_show_origin">
出发地{{voiceCommandsData.outset}}
</div>
<div class="oice_commands_IdentifyTheResults_show_destination">
目的地{{voiceCommandsData.destination}}
</div>
</div>
<div v-else class="voice_commands_IdentifyTheResults_show_loading">
<a-spin />
</div>
</div>
</div >
</template>
<script>
import Recorder from 'js-audio-recorder'
import { asrOffline } from '../../../api/ApiASR'
import { nlpIE } from '../../../api/ApiNLP'
const recorder = new Recorder({
  sampleBits: 16,    // sample bits: 8 or 16, default 16
  sampleRate: 16000, // sample rate: 11025/16000/22050/24000/44100/48000; Chrome defaults to 48000
  numChannels: 1,    // channels: 1 or 2, default 1
  compiling: true    // keep compiling so getNextData() can stream chunks
})
export default {
data () {
return {
voiceCommandsData:{
time:"",
amount:"",
outset:"",
destination:""
},
asrDeafult : "语音识别结果",
asrResult: "",
postStatus:true,
onReco:false
}
},
mounted () {
    this.asrResult = this.asrDefault
},
methods: {
    // reset the result panel to its default state
    reset(){
      this.asrResult = this.asrDefault
this.voiceCommandsData = {
time:"",
amount:"",
outset:"",
destination:""
}
},
    // start a fresh recording
startRecorder(){
this.reset()
this.onReco = true
recorder.clear()
recorder.start()
},
    // stop recording and send the WAV for recognition
endRecorder(){
recorder.stop()
this.onReco = false
// this.$nextTick(()=>{})
this.postStatus = false
const wavs = recorder.getWAVBlob()
this.uploadFile(wavs)
},
    // POST the WAV blob to the offline ASR endpoint, then run extraction
async uploadFile(file){
const formData = new FormData();
formData.append('files', file)
const result = await asrOffline(formData)
if (result.data.code === 0) {
this.asrResult = result.data.result
this.$message.success(result.data.message);
this.informationExtract()
} else {
this.$message.error(result.data.message);
}
},
    // extract time, origin, destination and cost from the transcript
async informationExtract(){
const result = await nlpIE(this.asrResult)
if(result.data.result[0]['时间']){
this.voiceCommandsData.time = result.data.result[0]['时间'][0]['text']
}
if(result.data.result[0]['出发地']){
this.voiceCommandsData.outset = result.data.result[0]['出发地'][0]['text']
}
if(result.data.result[0]['目的地']){
this.voiceCommandsData.destination = result.data.result[0]['目的地'][0]['text']
}
if(result.data.result[0]['费用']){
this.voiceCommandsData.amount = result.data.result[0]['费用'][0]['text']
}
this.postStatus = true
}
}
}
</script>
<style lang="less" scoped>
@import "./style.less";
</style>

@ -0,0 +1,179 @@
.voice_commands {
width: 1200px;
height: 410px;
background: #FFFFFF;
padding: 40px 50px 50px 50px;
box-sizing: border-box;
display: flex;
// traffic reimbursement
.voice_commands_traffic {
width: 468px;
height: 320px;
.voice_commands_traffic_title {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
letter-spacing: 0;
line-height: 26px;
font-weight: 500;
margin-bottom: 30px;
// background: pink;
};
.voice_commands_traffic_wrapper {
width: 465px;
height: 264px;
// background: #FAFAFA;
position: relative;
.voice_commands_traffic_wrapper_move {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%,-50%);
};
.traffic_btn_img_btn {
width: 116px;
height: 116px;
background: #2932E1;
display: flex;
justify-content: center;
align-items: center;
border-radius: 50%;
cursor: pointer;
margin-bottom: 20px;
margin-left: 84px;
&:hover {
width: 116px;
height: 116px;
background: #7278F5;
.start_recorder_img{
width: 50px;
height: 50px;
background: url("../../../assets/image/ic_开始聊天_hover.svg") no-repeat;
background-position: center;
background-size: 50px 50px;
};
};
.start_recorder_img{
width: 50px;
height: 50px;
background: url("../../../assets/image/ic_开始聊天.svg") no-repeat;
background-position: center;
background-size: 50px 50px;
};
};
.traffic_btn_prompt {
height: 22px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
font-weight: 500;
margin-bottom: 16px;
margin-left: 110px;
};
.traffic_btn_list {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 12px;
color: #999999;
font-weight: 400;
width: 112%;
};
};
};
// arrow pointer
.voice_point_to {
width: 47px;
height: 63px;
background: url("../../../assets/image/步骤-箭头切图@2x.png") no-repeat;
background-position: center;
background-size: 47px 63px;
margin-top: 164px;
margin-right: 82px;
};
// recognition result
.voice_commands_IdentifyTheResults {
.voice_commands_IdentifyTheResults_title {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
line-height: 26px;
font-weight: 500;
margin-bottom: 30px;
};
// result panel
.voice_commands_IdentifyTheResults_show {
width: 503px;
height: 264px;
background: #FAFAFA;
padding: 40px 0px 0px 50px;
box-sizing: border-box;
.voice_commands_IdentifyTheResults_show_title {
height: 22px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
// text-align: center;
font-weight: 500;
margin-bottom: 30px;
};
.voice_commands_IdentifyTheResults_show_time {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #666666;
font-weight: 500;
margin-bottom: 12px;
};
.voice_commands_IdentifyTheResults_show_money {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #666666;
font-weight: 500;
margin-bottom: 12px;
};
.voice_commands_IdentifyTheResults_show_origin {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #666666;
font-weight: 500;
margin-bottom: 12px;
};
.voice_commands_IdentifyTheResults_show_destination {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #666666;
font-weight: 500;
};
};
// loading state
.voice_commands_IdentifyTheResults_show_loading {
width: 503px;
height: 264px;
background: #FAFAFA;
padding: 40px 0px 0px 50px;
box-sizing: border-box;
display: flex;
justify-content: center;
align-items: center;
};
};
.end_recorder_img {
width: 50px;
height: 50px;
background: url("../../../assets/image/ic_大-声音波浪.svg") no-repeat;
background-position: center;
background-size: 50px 50px;
};
.end_recorder_img:hover {
opacity: 0.9;
};
};

@ -0,0 +1,726 @@
<template>
<div class="speech_recognition">
<!-- Chinese text -->
<div class="recognition_text">
<div class="recognition_text_header">
<div class="recognition_text_title">
中文文本
</div>
<div class="recognition_text_random" @click="getRandomChineseWord()">
<span></span><span>更换示例</span>
</div>
</div>
<div class="recognition_text_field">
<el-input
v-model="textarea"
:autosize="{ minRows: 13, maxRows: 13 }"
type="textarea"
placeholder="Please input"
/>
</div>
</div>
<!-- arrow pointer -->
<div class="recognition_point_to"></div>
<!-- speech synthesis -->
<div class="speech_recognition_new">
<div class="speech_recognition_title">
语音合成
</div>
<!-- streaming synthesis: initial state -->
<div v-if="streamingOnInit" class="speech_recognition_streaming"
@click="getTtsChunkWavWS()"
>
流式合成
</div>
<!-- streaming synthesis: playing state -->
<div v-else>
<div v-if="streamingStopStatus" class="streaming_ing_box">
<div class="streaming_ing">
<div class="streaming_ing_img"></div>
<!-- <Spin indicator={antIcon} /> -->
<div class="streaming_ing_text">合成中</div>
</div>
<div class="streaming_time">响应时间0ms</div>
</div>
<div v-else>
<div v-if="streamingContinueStatus" class="streaming_suspended_box">
<div class="streaming_suspended"
@click="streamingStop()"
>
<div class="streaming_suspended_img"></div>
<div class="streaming_suspended_text">暂停播放</div>
</div>
<div class="suspended_time">
响应时间{{ Number(streamingAcceptStamp) - Number(streamingSendStamp) }}ms
</div>
</div>
<div v-else class="streaming_continue"
@click="streamingResume()"
>
<div class="streaming_continue_img"></div>
<div class="streaming_continue_text">继续播放</div>
</div>
</div>
</div>
<!-- end-to-end synthesis: initial state -->
<div v-if="endToEndOnInit" class="speech_recognition_end_to_end"
@click="EndToEndSynthesis()"
>
端到端合成
</div>
<div v-else>
<div v-if="endToEndStopStatus" class="end_to_end_ing_box">
<div class="end_to_end_ing">
<div class="end_to_end_ing_img"> </div>
<!-- <Spin indicator={antIcon}></Spin> -->
<div class="end_to_end_ing_text">合成中</div>
</div>
<div class="end_to_end_ing_time">响应时间0s</div>
</div>
<div v-else class="end_to_end_suspended_box">
<div v-if="endToEndContinueStatus" class="end_to_end_suspended"
@onClick="EndToEndStop()"
>
<div class="end_to_end_suspended_img"></div>
<div class="end_to_end_suspended_text">暂停播放</div>
</div>
<div v-else class="end_to_end_continue"
@click="EndToEndResume()"
>
<div class="end_to_end_continue_img"></div>
<div class="end_to_end_continue_text">继续播放</div>
</div>
<div class="end_to_end_ing_suspended_time">响应时间{{Number(endToEndAcceptStamp) - Number(endToEndSendStamp) }}ms</div>
</div>
</div>
</div>
</div>
</template>
<script>
import Recorder from 'js-audio-recorder'
import { apiURL } from '../../../api/API' // endpoint config; path assumed to mirror the sibling TTS component
// received-chunk bookkeeping
let chunks = []
let chunk_index = 0
let playIndex = 0
let receiveOver = false
// streaming playback state
let _audioSrcNodes = []
const _audioCtx = new (window.AudioContext || window.webkitAudioContext)({ latencyHint: 'interactive' });
let _playStartedAt = 0
let _totalTimeScheduled = 0
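// Gapless playback: each decoded chunk is scheduled at _playStartedAt + _totalTimeScheduled
// on the shared AudioContext, so every buffer starts exactly where the previous one ends
// instead of playing on arrival.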
function _reset(){
_playStartedAt = 0
_totalTimeScheduled = 0
_audioSrcNodes = []
}
export default {
name: "TTSTS",
data () {
return {
textarea: "",
audioCtx: '',
source: '',
typedArray: '',
ttsResult: '',
ws: '',
// playing / paused toggle (true = show pause button)
streamingContinueStatus: true,
endToEndContinueStatus: true,
// initial state (show the start-synthesis button)
streamingOnInit: true,
endToEndOnInit: true,
// synthesis in progress
streamingStopStatus: false,
endToEndStopStatus: false,
// timestamp of the first response
streamingAcceptStamp: '0',
endToEndAcceptStamp: '0',
// timestamp of the request
streamingSendStamp: '0',
endToEndSendStamp: '0'
}
},
mounted(){
this.getRandomChineseWord()
this.ws = new WebSocket("ws://10.21.226.174:8010/ws/tts/online")
var _that = this
this.ws.addEventListener('message', function (event) {
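// Each message is assumed to carry JSON of the form { wav: <base64 audio chunk>, done: <bool> },
// which is how the handler below reads it: chunks keep arriving until done is true.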
let temp = JSON.parse(event.data);
if(chunk_index === 0){
_that.streamingStopStatus = false
_that.streamingAcceptStamp = Date.now()
}
// more chunks coming: decode and schedule this one
if(!temp.done){
chunk_index += 1
let chunk = temp.wav
let arraybuffer = _that.base64ToUint8Array(chunk)
let view = new DataView(arraybuffer.buffer);
let length = view.buffer.byteLength / 2
view = Recorder.encodeWAV(view, 24000, 24000, 1, 16, true) // wrap the raw PCM in a WAV header: 24 kHz, mono, 16-bit
_that._schedulePlaybackWav({
wavData: view.buffer,
})
} else {
receiveOver = true
// this.streamingOnInit = true
}})
},
methods: {
// reset all synthesis/playback state
resetStatus(){
this.streamingContinueStatus = true
this.streamingOnInit = true
this.streamingStopStatus = false
this.endToEndContinueStatus = true
this.endToEndOnInit = true
this.endToEndStopStatus = false
},
// pick a random sample sentence
getRandomChineseWord(){
const resultChina = [
"钱伟长想到上海来办学校是经过深思熟虑的。",
"林荒大吼出声,即便十年挣扎,他也从未感到过如此无助。自己的身体一点点陷入岁月之门,却眼睁睁的看着君倾城一手持剑,雪白的身影决然凄厉。就这样孤身一人,于漫天风雪中,对阵数千武者。",
"我们将继续成长,用行动回击那些只会说风凉话,不愿意和我们相向而行的害群之马。",
"许多道理,人们已经证明过千遍万遍,为什么还要带着侥幸的心理再去试验一回呢?",
"宫内整洁利索,廊柱门窗颜色鲜艳,几名电工正在维修线路。",
"他身材矮小,颧骨突出,留着小胡子,说话一口浓重的福建口音。",
"阿杰让阿悦看下剩下的盒饭合不合他的胃口。",
"有网友问,能不能回忆几件刘洋在学校里的趣事或糗事。"
];
let text = "";
text = resultChina[Math.floor(Math.random() * 7)];
this.textarea = text
},
// streaming synthesis over the WebSocket
async getTtsChunkWavWS(){
// reset chunk bookkeeping and playback state
chunks = []
chunk_index = 0
receiveOver = false
_reset()
this.streamingOnInit = false
this.streamingStopStatus = true
this.streamingContinueStatus = true
this.streamingSendStamp = Date.now()
this.ws.send(this.textarea)
},
// decode a WAV chunk and schedule it for gapless playback
_schedulePlaybackWav({wavData}) {
var _that = this
_audioCtx.decodeAudioData(wavData, audioBuffer => {
const audioSrc = _audioCtx.createBufferSource()
audioSrc.onended = () => {
_audioSrcNodes.shift();
if(_audioSrcNodes.length === 0){
_that.resetStatus()
}
};
_audioSrcNodes.push(audioSrc);
let startDelay = 0;
if (!_playStartedAt) {
startDelay = 10 / 1000;
_playStartedAt = _audioCtx.currentTime + startDelay;
}
audioSrc.buffer = audioBuffer;
audioSrc.connect(_audioCtx.destination);
const startAt = _playStartedAt + _totalTimeScheduled;
audioSrc.start(startAt);
_totalTimeScheduled += audioBuffer.duration;
})
},
// decode a URL-safe base64 string into a Uint8Array
base64ToUint8Array(base64String) {
const padding = '='.repeat((4 - base64String.length % 4) % 4);
const base64 = (base64String + padding)
.replace(/-/g, '+')
.replace(/_/g, '/');
const rawData = window.atob(base64);
const outputArray = new Uint8Array(rawData.length);
for (let i = 0; i < rawData.length; ++i) {
outputArray[i] = rawData.charCodeAt(i);
}
return outputArray;
},
// suspend the shared AudioContext
playerPaused(){
_audioCtx.suspend()
},
// resume the shared AudioContext
playerResume(){
_audioCtx.resume()
},
// pause streaming playback
streamingStop(){
this.playerPaused()
// show the resume button
this.streamingContinueStatus = false
},
// resume streaming playback
streamingResume(){
this.playerResume()
this.streamingContinueStatus = true
},
// end-to-end (offline) synthesis over HTTP
async EndToEndSynthesis(){
this.endToEndSendStamp = Date.now()
this.endToEndOnInit = false
this.endToEndStopStatus = true
let ttsResult = await this.$http.post("/api/tts/offline", { text : this.textarea});
if (ttsResult.status === 200) {
this.endToEndAcceptStamp = Date.now()
this.endToEndStopStatus = false
this.endToEndContinueStatus = true
// decode the base64 WAV result
console.log('res', ttsResult)
let typedArray = this.base64ToUint8Array(ttsResult.data.result)
// schedule playback
this._schedulePlaybackWav({
wavData: typedArray.buffer,
})
};
},
// pause end-to-end playback
EndToEndStop(){
this.playerPaused()
// show the resume button
this.endToEndContinueStatus = false
},
// resume end-to-end playback
EndToEndResume(){
this.playerResume()
this.endToEndContinueStatus = true
},
}
}
</script>
<style lang="less" scoped>
.speech_recognition {
width: 1200px;
height: 410px;
background: #FFFFFF;
padding: 40px 0px 50px 50px;
box-sizing: border-box;
display: flex;
.recognition_text {
width: 589px;
height: 320px;
// background: pink;
.recognition_text_header {
margin-bottom: 30px;
display: flex;
justify-content: space-between;
align-items: center;
.recognition_text_title {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
letter-spacing: 0;
line-height: 26px;
font-weight: 500;
};
.recognition_text_random {
display: flex;
align-items: center;
cursor: pointer;
span {
display: inline-block;
&:nth-of-type(1) {
width: 20px;
height: 20px;
background: url("../../../assets/image/ic_更换示例.svg") no-repeat;
background-position: center;
background-size: 20px 20px;
margin-right: 5px;
};
&:nth-of-type(2) {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 14px;
color: #2932E1;
letter-spacing: 0;
font-weight: 400;
};
};
};
};
.recognition_text_field {
width: 589px;
height: 264px;
background: #FAFAFA;
.textToSpeech_content_show_text{
width: 100%;
height: 264px;
padding: 0px 30px 30px 0px;
box-sizing: border-box;
.ant-input {
height: 208px;
resize: none;
// margin-bottom: 230px;
padding: 21px 20px;
};
};
};
};
// arrow pointer
.recognition_point_to {
width: 47px;
height: 63px;
background: url("../../../assets/image/步骤-箭头切图@2x.png") no-repeat;
background-position: center;
background-size: 47px 63px;
margin-top: 164px;
margin-right: 101px;
margin-left: 100px;
margin-top: 164px;
};
// speech synthesis
.speech_recognition_new {
.speech_recognition_title {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
line-height: 26px;
font-weight: 500;
margin-left: 32px;
margin-bottom: 96px;
};
// streaming synthesis
.speech_recognition_streaming {
width: 136px;
height: 44px;
background: #2932E1;
border-radius: 22px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
text-align: center;
line-height: 44px;
margin-bottom: 40px;
cursor: pointer;
&:hover {
opacity: .9;
};
};
// synthesizing
.streaming_ing_box {
display: flex;
align-items: center;
height: 44px;
margin-bottom: 40px;
.streaming_ing {
width: 136px;
height: 44px;
background: #7278F5;
border-radius: 22px;
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
.streaming_ing_img {
width: 16px;
height: 16px;
// background: url("../../../assets/image/ic_-.svg");
// background-repeat: no-repeat;
// background-position: center;
// background-size: 16px 16px;
// margin-right: 12px;
};
.streaming_ing_text {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
margin-left: 12px;
};
};
// synthesis time label
.streaming_time {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #000000;
font-weight: 500;
margin-left: 12px;
};
};
// pause playback
.streaming_suspended_box {
display: flex;
align-items: center;
height: 44px;
margin-bottom: 40px;
.streaming_suspended {
width: 136px;
height: 44px;
background: #2932E1;
border-radius: 22px;
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
.streaming_suspended_img {
width: 16px;
height: 16px;
background: url("../../../assets/image/ic_暂停按钮.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 16px 16px;
margin-right: 12px;
};
.streaming_suspended_text {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
margin-left: 12px;
};
};
// response time shown while paused
.suspended_time {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #000000;
font-weight: 500;
margin-left: 12px;
}
};
// continue playback
.streaming_continue {
width: 136px;
height: 44px;
background: #2932E1;
border-radius: 22px;
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
margin-bottom: 40px;
.streaming_continue_img {
width: 16px;
height: 16px;
background: url("../../../assets/image/ic_播放按钮.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 16px 16px;
margin-right: 12px;
};
.streaming_continue_text {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
};
};
// end-to-end synthesis
.speech_recognition_end_to_end {
width: 136px;
height: 44px;
background: #2932E1;
border-radius: 22px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
text-align: center;
line-height: 44px;
cursor: pointer;
&:hover {
opacity: .9;
};
};
// synthesizing
.end_to_end_ing_box {
display: flex;
align-items: center;
height: 44px;
.end_to_end_ing {
width: 136px;
height: 44px;
background: #7278F5;
border-radius: 22px;
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
.end_to_end_ing_img {
width: 16px;
height: 16px;
// background: url("../../../assets/image/ic_-.svg");
// background-repeat: no-repeat;
// background-position: center;
// background-size: 16px 16px;
};
.end_to_end_ing_text {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
margin-left: 12px;
};
};
// synthesis time label
.end_to_end_ing_time {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #000000;
font-weight: 500;
margin-left: 12px;
};
};
// pause playback
.end_to_end_suspended_box {
display: flex;
align-items: center;
height: 44px;
.end_to_end_suspended {
width: 136px;
height: 44px;
background: #2932E1;
border-radius: 22px;
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
.end_to_end_suspended_img {
width: 16px;
height: 16px;
background: url("../../../assets/image/ic_暂停按钮.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 16px 16px;
margin-right: 12px;
};
.end_to_end_suspended_text {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
};
};
// paused-playback time label
.end_to_end_ing_suspended_time {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #000000;
font-weight: 500;
margin-left: 12px;
};
};
// continue playback
.end_to_end_continue {
width: 136px;
height: 44px;
background: #2932E1;
border-radius: 22px;
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
.end_to_end_continue_img {
width: 16px;
height: 16px;
background: url("../../../assets/image/ic_播放按钮.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 16px 16px;
margin-right: 12px;
};
.end_to_end_continue_text {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
};
};
};
};
</style>

@ -0,0 +1,359 @@
<template>
<div class="speech_recognition">
<!-- Chinese text -->
<div class="recognition_text">
<div class="recognition_text_header">
<div class="recognition_text_title">
中文文本
</div>
<div class="recognition_text_random" @click="getRandomChineseWord()">
<span></span><span>更换示例</span>
</div>
</div>
<div class="recognition_text_field">
<el-input
v-model="textarea"
:autosize="{ minRows: 13, maxRows: 13 }"
type="textarea"
placeholder="Please input"
/>
</div>
</div>
<!-- arrow pointer -->
<div class="recognition_point_to"></div>
<!-- speech synthesis -->
<div class="speech_recognition_new">
<div class="speech_recognition_title">
语音合成
</div>
<!-- streaming synthesis: initial state -->
<div v-if="streamingOnInit" class="speech_recognition_streaming"
@click="getTtsChunkWavWS()"
>
流式合成
</div>
<!-- streaming synthesis: playing state -->
<div v-else>
<div v-if="streamingStopStatus" class="streaming_ing_box">
<div class="streaming_ing">
<div class="streaming_ing_img"></div>
<!-- <Spin indicator={antIcon} /> -->
<div class="streaming_ing_text">合成中</div>
</div>
<div class="streaming_time">响应时间0ms</div>
</div>
<div v-else>
<div v-if="streamingContinueStatus" class="streaming_suspended_box">
<div class="streaming_suspended"
@click="streamingStop()"
>
<div class="streaming_suspended_img"></div>
<div class="streaming_suspended_text">暂停播放</div>
</div>
<div class="suspended_time">
响应时间{{ Number(streamingAcceptStamp) - Number(streamingSendStamp) }}ms
</div>
</div>
<div v-else class="streaming_continue"
@click="streamingResume()"
>
<div class="streaming_continue_img"></div>
<div class="streaming_continue_text">继续播放</div>
</div>
</div>
</div>
<!-- end-to-end synthesis: initial state -->
<div v-if="endToEndOnInit" class="speech_recognition_end_to_end"
@click="EndToEndSynthesis()"
>
端到端合成
</div>
<div v-else>
<div v-if="endToEndStopStatus" class="end_to_end_ing_box">
<div class="end_to_end_ing">
<div class="end_to_end_ing_img"> </div>
<!-- <Spin indicator={antIcon}></Spin> -->
<div class="end_to_end_ing_text">合成中</div>
</div>
<div class="end_to_end_ing_time">响应时间0s</div>
</div>
<div v-else class="end_to_end_suspended_box">
<div v-if="endToEndContinueStatus" class="end_to_end_suspended"
@onClick="EndToEndStop()"
>
<div class="end_to_end_suspended_img"></div>
<div class="end_to_end_suspended_text">暂停播放</div>
</div>
<div v-else class="end_to_end_continue"
@click="EndToEndResume()"
>
<div class="end_to_end_continue_img"></div>
<div class="end_to_end_continue_text">继续播放</div>
</div>
<div class="end_to_end_ing_suspended_time">响应时间{{Number(endToEndAcceptStamp) - Number(endToEndSendStamp) }}ms</div>
</div>
</div>
</div>
</div>
</template>
<script>
import Recorder from 'js-audio-recorder'
import { apiURL } from '../../../api/API'
// received-chunk bookkeeping
let chunks = []
let chunk_index = 0
let playIndex = 0
let receiveOver = false
// streaming playback state
let _audioSrcNodes = []
const _audioCtx = new (window.AudioContext || window.webkitAudioContext)({ latencyHint: 'interactive' });
let _playStartedAt = 0
let _totalTimeScheduled = 0
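// Chunks are scheduled back-to-back at _playStartedAt + _totalTimeScheduled for gapless playback.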
function _reset(){
_playStartedAt = 0
_totalTimeScheduled = 0
_audioSrcNodes = []
}
export default {
name: "TTSTS",
data () {
return {
textarea: "",
audioCtx: '',
source: '',
typedArray: '',
ttsResult: '',
ws: '',
// playing / paused toggle (true = show pause button)
streamingContinueStatus: true,
endToEndContinueStatus: true,
// initial state (show the start-synthesis button)
streamingOnInit: true,
endToEndOnInit: true,
// synthesis in progress
streamingStopStatus: false,
endToEndStopStatus: false,
// timestamp of the first response
streamingAcceptStamp: '0',
endToEndAcceptStamp: '0',
// timestamp of the request
streamingSendStamp: '0',
endToEndSendStamp: '0'
}
},
mounted(){
this.getRandomChineseWord()
this.ws = new WebSocket(apiURL.TTS_SOCKET_RECORD)
var _that = this
this.ws.addEventListener('message', function (event) {
let temp = JSON.parse(event.data);
if(chunk_index === 0){
_that.streamingStopStatus = false
_that.streamingAcceptStamp = Date.now()
}
// more chunks coming: decode and schedule this one
if(!temp.done){
chunk_index += 1
let chunk = temp.wav
let arraybuffer = _that.base64ToUint8Array(chunk)
let view = new DataView(arraybuffer.buffer);
let length = view.buffer.byteLength / 2
view = Recorder.encodeWAV(view, 24000, 24000, 1, 16, true) // wrap the raw PCM in a WAV header: 24 kHz, mono, 16-bit
_that._schedulePlaybackWav({
wavData: view.buffer,
})
} else {
receiveOver = true
// this.streamingOnInit = true
}})
},
methods: {
// reset all synthesis/playback state
resetStatus(){
this.streamingContinueStatus = true
this.streamingOnInit = true
this.streamingStopStatus = false
this.endToEndContinueStatus = true
this.endToEndOnInit = true
this.endToEndStopStatus = false
},
// pick a random sample sentence
getRandomChineseWord(){
const resultChina = [
"钱伟长想到上海来办学校是经过深思熟虑的。",
"林荒大吼出声,即便十年挣扎,他也从未感到过如此无助。自己的身体一点点陷入岁月之门,却眼睁睁的看着君倾城一手持剑,雪白的身影决然凄厉。就这样孤身一人,于漫天风雪中,对阵数千武者。",
"我们将继续成长,用行动回击那些只会说风凉话,不愿意和我们相向而行的害群之马。",
"许多道理,人们已经证明过千遍万遍,为什么还要带着侥幸的心理再去试验一回呢?",
"宫内整洁利索,廊柱门窗颜色鲜艳,几名电工正在维修线路。",
"他身材矮小,颧骨突出,留着小胡子,说话一口浓重的福建口音。",
"阿杰让阿悦看下剩下的盒饭合不合他的胃口。",
"有网友问,能不能回忆几件刘洋在学校里的趣事或糗事。"
];
let text = "";
text = resultChina[Math.floor(Math.random() * 7)];
this.textarea = text
},
// streaming synthesis over the WebSocket
async getTtsChunkWavWS(){
// reset chunk bookkeeping and playback state
chunks = []
chunk_index = 0
receiveOver = false
_reset()
this.streamingOnInit = false
this.streamingStopStatus = true
this.streamingContinueStatus = true
this.streamingSendStamp = Date.now()
this.ws.send(this.textarea)
},
// decode a WAV chunk and schedule it for gapless playback
_schedulePlaybackWav({wavData}) {
var _that = this
_audioCtx.decodeAudioData(wavData, audioBuffer => {
const audioSrc = _audioCtx.createBufferSource()
audioSrc.onended = () => {
_audioSrcNodes.shift();
if(_audioSrcNodes.length === 0){
_that.resetStatus()
}
};
_audioSrcNodes.push(audioSrc);
let startDelay = 0;
if (!_playStartedAt) {
startDelay = 10 / 1000;
_playStartedAt = _audioCtx.currentTime + startDelay;
}
audioSrc.buffer = audioBuffer;
audioSrc.connect(_audioCtx.destination);
const startAt = _playStartedAt + _totalTimeScheduled;
audioSrc.start(startAt);
_totalTimeScheduled += audioBuffer.duration;
})
},
// decode a URL-safe base64 string into a Uint8Array
base64ToUint8Array(base64String) {
const padding = '='.repeat((4 - base64String.length % 4) % 4);
const base64 = (base64String + padding)
.replace(/-/g, '+')
.replace(/_/g, '/');
const rawData = window.atob(base64);
const outputArray = new Uint8Array(rawData.length);
for (let i = 0; i < rawData.length; ++i) {
outputArray[i] = rawData.charCodeAt(i);
}
return outputArray;
},
// suspend the shared AudioContext
playerPaused(){
_audioCtx.suspend()
},
// resume the shared AudioContext
playerResume(){
_audioCtx.resume()
},
// pause streaming playback
streamingStop(){
this.playerPaused()
// show the resume button
this.streamingContinueStatus = false
},
// resume streaming playback
streamingResume(){
this.playerResume()
this.streamingContinueStatus = true
},
// end-to-end (offline) synthesis over HTTP
async EndToEndSynthesis(){
this.endToEndSendStamp = Date.now()
this.endToEndOnInit = false
this.endToEndStopStatus = true
let ttsResult = await this.$http.post("/api/tts/offline", { text : this.textarea});
if (ttsResult.status === 200) {
this.endToEndAcceptStamp = Date.now()
this.endToEndStopStatus = false
this.endToEndContinueStatus = true
// decode the base64 WAV result
console.log('res', ttsResult)
let typedArray = this.base64ToUint8Array(ttsResult.data.result)
// schedule playback
this._schedulePlaybackWav({
wavData: typedArray.buffer,
})
};
},
// pause end-to-end playback
EndToEndStop(){
this.playerPaused()
// show the resume button
this.endToEndContinueStatus = false
},
// resume end-to-end playback
EndToEndResume(){
this.playerResume()
this.endToEndContinueStatus = true
},
}
}
</script>
<style lang="less" scoped>
@import "./style.less";
</style>

@ -0,0 +1,369 @@
.speech_recognition {
width: 1200px;
height: 410px;
background: #FFFFFF;
padding: 40px 0px 50px 50px;
box-sizing: border-box;
display: flex;
.recognition_text {
width: 589px;
height: 320px;
// background: pink;
.recognition_text_header {
margin-bottom: 30px;
display: flex;
justify-content: space-between;
align-items: center;
.recognition_text_title {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
letter-spacing: 0;
line-height: 26px;
font-weight: 500;
};
.recognition_text_random {
display: flex;
align-items: center;
cursor: pointer;
span {
display: inline-block;
&:nth-of-type(1) {
width: 20px;
height: 20px;
background: url("../../../assets/image/ic_更换示例.svg") no-repeat;
background-position: center;
background-size: 20px 20px;
margin-right: 5px;
};
&:nth-of-type(2) {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 14px;
color: #2932E1;
letter-spacing: 0;
font-weight: 400;
};
};
};
};
.recognition_text_field {
width: 589px;
height: 264px;
background: #FAFAFA;
.textToSpeech_content_show_text{
width: 100%;
height: 264px;
padding: 0px 30px 30px 0px;
box-sizing: border-box;
.ant-input {
height: 208px;
resize: none;
// margin-bottom: 230px;
padding: 21px 20px;
};
};
};
};
// arrow pointer
.recognition_point_to {
width: 47px;
height: 63px;
background: url("../../../assets/image/步骤-箭头切图@2x.png") no-repeat;
background-position: center;
background-size: 47px 63px;
margin-top: 164px;
margin-right: 101px;
margin-left: 100px;
margin-top: 164px;
};
// speech synthesis
.speech_recognition_new {
.speech_recognition_title {
height: 26px;
font-family: PingFangSC-Medium;
font-size: 16px;
color: #000000;
line-height: 26px;
font-weight: 500;
margin-left: 32px;
margin-bottom: 96px;
};
// streaming synthesis
.speech_recognition_streaming {
width: 136px;
height: 44px;
background: #2932E1;
border-radius: 22px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
text-align: center;
line-height: 44px;
margin-bottom: 40px;
cursor: pointer;
&:hover {
opacity: .9;
};
};
// synthesizing
.streaming_ing_box {
display: flex;
align-items: center;
height: 44px;
margin-bottom: 40px;
.streaming_ing {
width: 136px;
height: 44px;
background: #7278F5;
border-radius: 22px;
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
.streaming_ing_img {
width: 16px;
height: 16px;
// background: url("../../../assets/image/ic_小-录制语音.svg");
// background-repeat: no-repeat;
// background-position: center;
// background-size: 16px 16px;
// margin-right: 12px;
};
.streaming_ing_text {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
margin-left: 12px;
};
};
// synthesis time label
.streaming_time {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #000000;
font-weight: 500;
margin-left: 12px;
};
};
// pause playback
.streaming_suspended_box {
display: flex;
align-items: center;
height: 44px;
margin-bottom: 40px;
.streaming_suspended {
width: 136px;
height: 44px;
background: #2932E1;
border-radius: 22px;
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
.streaming_suspended_img {
width: 16px;
height: 16px;
background: url("../../../assets/image/ic_暂停按钮.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 16px 16px;
margin-right: 12px;
};
.streaming_suspended_text {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
margin-left: 12px;
};
};
// response time shown while paused
.suspended_time {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #000000;
font-weight: 500;
margin-left: 12px;
}
};
// continue playback
.streaming_continue {
width: 136px;
height: 44px;
background: #2932E1;
border-radius: 22px;
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
margin-bottom: 40px;
.streaming_continue_img {
width: 16px;
height: 16px;
background: url("../../../assets/image/ic_播放按钮.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 16px 16px;
margin-right: 12px;
};
.streaming_continue_text {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
};
};
// end-to-end synthesis
.speech_recognition_end_to_end {
width: 136px;
height: 44px;
background: #2932E1;
border-radius: 22px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
text-align: center;
line-height: 44px;
cursor: pointer;
&:hover {
opacity: .9;
};
};
// synthesizing
.end_to_end_ing_box {
display: flex;
align-items: center;
height: 44px;
.end_to_end_ing {
width: 136px;
height: 44px;
background: #7278F5;
border-radius: 22px;
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
.end_to_end_ing_img {
width: 16px;
height: 16px;
// background: url("../../../assets/image/ic_小-录制语音.svg");
// background-repeat: no-repeat;
// background-position: center;
// background-size: 16px 16px;
};
.end_to_end_ing_text {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
margin-left: 12px;
};
};
// synthesis time label
.end_to_end_ing_time {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #000000;
font-weight: 500;
margin-left: 12px;
};
};
// pause playback
.end_to_end_suspended_box {
display: flex;
align-items: center;
height: 44px;
.end_to_end_suspended {
width: 136px;
height: 44px;
background: #2932E1;
border-radius: 22px;
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
.end_to_end_suspended_img {
width: 16px;
height: 16px;
background: url("../../../assets/image/ic_暂停按钮.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 16px 16px;
margin-right: 12px;
};
.end_to_end_suspended_text {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
};
};
// paused-playback time label
.end_to_end_ing_suspended_time {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #000000;
font-weight: 500;
margin-left: 12px;
};
};
// continue playback
.end_to_end_continue {
width: 136px;
height: 44px;
background: #2932E1;
border-radius: 22px;
display: flex;
justify-content: center;
align-items: center;
cursor: pointer;
.end_to_end_continue_img {
width: 16px;
height: 16px;
background: url("../../../assets/image/ic_播放按钮.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 16px 16px;
margin-right: 12px;
};
.end_to_end_continue_text {
height: 20px;
font-family: PingFangSC-Medium;
font-size: 14px;
color: #FFFFFF;
font-weight: 500;
};
};
};
};

@ -0,0 +1,178 @@
<template>
<div class="vprbox">
<div>
<h1>声纹识别展示</h1>
<el-input
v-model="spk_id"
class="w-50 m-2"
size="large"
placeholder="spk_id"
/>
<el-button :type="recoType" @click="startRecorder()" style="margin:1vw;">{{ recoText }}</el-button>
<el-button type="primary" @click="Enroll(spk_id)" style="margin:1vw;"> 注册 </el-button>
<el-button type="primary" @click="Recog()" style="margin:1vw;"> 识别 </el-button>
</div>
<div>
<h2>声纹得分结果</h2>
<el-table :data="score_result" style="width: 40%">
<el-table-column prop="spkId" label="spk_id" />
<el-table-column prop="score" label="score" />
</el-table>
</div>
<div>
<h2>声纹数据列表</h2>
<el-table :data="vpr_datas" style="width: 40%">
<el-table-column prop="spkId" label="spk_id" />
<el-table-column label="wav">
<template #default="scope2">
<audio :src="'/VPR/vpr/data/?vprId='+scope2.row.vprId" controls>
</audio>
</template>
</el-table-column>
<el-table-column fixed="right" label="Operations">
<template #default="scope">
<el-button @click="Del(scope.row.spkId)" type="text" size="small">Delete</el-button>
</template>
</el-table-column>
</el-table>
</div>
</div>
</template>
<script>
import Recorder from 'js-audio-recorder'
const recorder = new Recorder({
sampleBits: 16, // sample bits: 8 or 16, default 16
sampleRate: 16000, // sample rate: 11025/16000/22050/24000/44100/48000; browsers often default to 48000
numChannels: 1, // channels: 1 or 2, default 1
compiling: true // expose data while recording
})
export default {
name: "VPR",
data () {
return {
url_enroll: '/VPR/vpr/enroll', // enroll a voiceprint
url_recog: '/VPR/vpr/recog', // recognize a speaker
url_del: '/VPR/vpr/del', // delete a voiceprint
url_list: '/VPR/vpr/list', // list registered voiceprints
url_data: '/VPR/vpr/data', // fetch stored audio
spk_id: 'sss',
onRecord: false,
recoType: "primary",
recoText: "开始录音",
wav: '',
score_result: [],
vpr_datas: []
}
},
mounted () {
this.GetList()
},
methods: {
startRecorder () {
this.score_result = []
if(!this.onRecord){
recorder.start().then(() => {
}, (error) => {
console.log("recording failed", error);
})
this.onRecord = true
this.recoType = "danger"
this.recoText = "结束录音"
this.$nextTick(()=>{
})
} else {
// stop recording
recorder.stop()
this.onRecord = false
this.recoType = "primary"
this.recoText = "开始录音"
this.$nextTick(()=>{})
// grab the recorded WAV blob
this.wav = recorder.getWAVBlob()
}
},
async Enroll(spk_id){
if(this.wav === ''){
this.$message.error("请先完成录音");
return
}
let formData = new FormData()
formData.append('spk_id', this.spk_id)
formData.append('audio', this.wav)
console.log("formData", formData)
console.log("spk_id", this.spk_id)
const result = await this.$http.post(this.url_enroll, formData);
if(result.data.status){
this.$message.success("声纹注册成功")
} else {
this.$message.error(result.data.msg)
}
console.log(result)
this.GetList()
},
async Recog(){
this.score_result = []
if(this.wav === ''){
this.$message.error("请先完成录音");
return
}
let formData = new FormData()
formData.append('audio', this.wav)
const result = await this.$http.post(this.url_recog, formData);
console.log(result)
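// each entry is assumed to be [spk_id, [vpr_id, score]], per the indexing below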
result.data.forEach(dat => {
this.score_result.push({
spkId: dat[0],
score: dat[1][1]
})
});
},
async Del(spkId){
console.log('spkId', spkId)
// delete by spk_id
const result = await this.$http.post(this.url_del, {spk_id: spkId});
if(result.data.status){
this.$message.success("删除成功")
} else {
this.$message.error(result.data.msg)
}
this.GetList()
},
async GetList(){
this.vpr_datas =[]
const result = await this.$http.get(this.url_list);
console.log("list", result)
for(let i=0; i<result.data[0].length; i++){
this.vpr_datas.push({
spkId: result.data[0][i],
vprId: result.data[1][i]
})
}
this.$nextTick(()=>{})
},
GetData(){},
},
}
</script>
<style lang='less' scoped>
.vprbox {
border: 4px solid #F00;
// position: fixed;
top:60%;
width: 100%;
height: 20%;
overflow: auto;
}
</style>

@ -0,0 +1,335 @@
<template>
<div class="voiceprint">
<div class="voiceprint_recording">
<div class="recording_title">
<div>1</div>
<div>
录制声纹
</div>
</div>
<div>
试试对我说欢迎使用飞桨声纹识别系统
</div>
<!-- start recording -->
<div v-if="onEnrollRec === 0 " class="recording_btn"
@click="startRecorderEnroll()"
>
<div class="recording_img"></div>
<div class="recording_prompt">
录制声音
</div>
</div>
<!-- stop recording -->
<div v-else-if="onEnrollRec === 1 " class="recording_btn_the_recording"
@click="stopRecorderEnroll(0)"
>
<a-spin />
<div class="recording_prompt">
停止录音
</div>
</div>
<!-- recording finished: enroll the voiceprint -->
<div v-else class="complete_the_recording_btn"
@click="enrollVoicePrint()"
>
<div class="complete_the_recording_img"></div>
<div class="complete_the_recording_prompt">
注册声纹
</div>
</div>
<!-- username input -->
<div class="recording_input">
<el-input v-model="enrollSpkId" class="w-50 m-2" autosize placeholder="请输入注册用户名" />
</div>
<!-- registered voiceprints table -->
<div class="recording_table">
<el-table :data="vpr_datas" border class="recording_table_box">
<el-table-column prop="spkId" label="用户" />
<el-table-column fixed="right" label="操作">
<template #default="scope">
<el-button @click="Play(scope.row.vprId)" type="text" size="small">播放</el-button>
<el-button @click="Del(scope.row.spkId)" type="text" size="small">删除</el-button>
</template>
</el-table-column>
</el-table>
</div>
</div>
<!-- arrow pointer -->
<div class="recording_point_to"></div>
<!-- recognize voiceprint -->
<div class="voiceprint_identify">
<div class="identify_title">
<div>2</div>
<div>
识别声纹
</div>
</div>
<div>
试试对我说请识别一下我的声音
</div>
<div v-if="onRegRec === 0" class="identify_btn"
@click="startRecorderRecog()"
>
<div class="identify_img"></div>
<div class="identify_prompt">
录制声音
</div>
</div>
<div v-else-if="onRegRec === 1" class="identify_btn_the_recording"
@click="stopRecorderRecog()">
<a-spin />
<div class="recording_prompt">
停止录音
</div>
</div>
<div v-else class="identify_complete_the_recording_btn"
@click="Recog()">
<div class="identify_complete_the_recording_img"></div>
<div class="identify_complete_the_recording_prompt">
开始识别
</div>
</div>
<div class="identify_result">
<div class="identify_result_content">
<div>识别结果</div>
<div>{{scoreResult}}</div>
</div>
</div>
</div>
</div>
</template>
<script>
import Recorder from 'js-audio-recorder'
import { vprData, vprList, vprEnroll, vprRecog, vprDel } from '../../../api/ApiVPR';
// recorder configuration
const recorder = new Recorder({
sampleBits: 16, // sample bits: 8 or 16, default 16
sampleRate: 16000, // sample rate: 11025/16000/22050/24000/44100/48000; browsers often default to 48000
numChannels: 1, // channels: 1 or 2, default 1
compiling: true // expose data while recording
})
// playback context for registered samples (16 kHz)
const audioCtx = new AudioContext({
latencyHint: 'interactive',
sampleRate: 16000,
});
export default {
data(){
return {
onEnrollRec: 0, // enroll recording state: 0 idle, 1 recording, 2 recorded
onRegRec:0, // recognition recording state: 0 idle, 1 recording, 2 recorded
scoreResult: "", // top recognition result (speaker id)
enrollSpkId: "", // speaker id to enroll
wav: '', // recorded audio blob
scoreResults: [], // all recognition scores
vpr_datas: [] // registered voiceprint list
}
},
mounted () {
this.GetList()
this.randomSpkId()
},
methods: {
// reset recognition state
reset(){
this.wav = ''
this.scoreResults = []
this.scoreResult = ""
},
// generate a random 3-character speaker name from common Chinese surnames
randomSpkId(){
const surnames = "赵钱孙李周吴郑王冯陈褚卫蒋沈韩杨朱秦尤许何吕施张孔曹严华金魏陶姜戚谢邹喻柏水窦章云苏潘葛奚范彭郎鲁韦昌马苗凤花方俞任袁柳酆鲍史唐费廉岑薛雷贺倪汤滕殷罗毕郝邬安常乐于时傅皮卞齐康伍余元卜顾孟平黄";
let name = "";
for (let i = 0; i < 3; i++) name += surnames.charAt(Math.floor(Math.random() * surnames.length));
this.enrollSpkId = name
},
// start recording for enrollment
startRecorderEnroll(){
this.onEnrollRec = 1
recorder.clear()
recorder.start()
},
// stop recording for enrollment and keep the WAV
stopRecorderEnroll(){
this.onEnrollRec = 2
recorder.stop()
this.wav = recorder.getWAVBlob()
},
// start recording for recognition
startRecorderRecog(){
// this.wav = ''
this.onRegRec = 1
this.reset()
recorder.clear()
recorder.start()
},
// stop recording for recognition and keep the WAV
stopRecorderRecog(){
this.onRegRec = 2
recorder.stop()
this.wav = recorder.getWAVBlob()
},
// enroll the recorded voiceprint
async enrollVoicePrint(){
if(this.wav === ''){
this.$message.error("请先完成录音");
this.onEnrollRec = 0
return
}
if(this.enrollSpkId === ""){
this.$message.error("请输入声纹用户名")
this.onEnrollRec = 2
return
}
this.onEnrollRec = 0
let formData = new FormData()
formData.append('spk_id', this.enrollSpkId)
formData.append('audio', this.wav)
const result = await vprEnroll(formData)
if(result.data.status){
this.$message.success("声纹注册成功")
} else {
this.$message.error(result.data.msg)
}
// console.log(result)
this.GetList()
this.wav = ''
this.randomSpkId()
},
// recognize the speaker
async Recog(){
this.scoreResults = []
this.onRegRec = 0
if(this.wav === ''){
this.$message.error("请先完成录音");
return
}
if(this.vpr_datas.length == 0){
this.$message.error("未查询到声纹数据,请先注册");
return
}
let formData = new FormData()
formData.append('audio', this.wav)
const result = await vprRecog(formData);
console.log(result)
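// each entry is assumed to be [spkId, [vprId, score]], per the indexing below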
result.data.forEach(dat => {
this.scoreResults.push({
spkId: dat[0],
score: dat[1][1]
})
});
if(this.scoreResults.length > 0){
this.scoreResult = this.scoreResults[0]['spkId']
}
},
// delete a voiceprint
async Del(spkId){
console.log('spkId', spkId)
// delete by spk_id
const result = await vprDel({spk_id: spkId});
if(result.data.status){
this.$message.success("删除成功")
} else {
this.$message.error(result.data.msg)
}
this.GetList()
},
// refresh the registered voiceprint list
async GetList(){
this.vpr_datas =[]
const result = await vprList();
console.log("list", result)
for(let i=0; i<result.data[0].length; i++){
this.vpr_datas.push({
spkId: result.data[0][i],
vprId: result.data[1][i]
})
}
this.$nextTick(()=>{})
},
// play a registered voiceprint sample
async Play(vprId){
console.log('vprId', vprId)
// fetch the base64-encoded audio by vprId
const result = await vprData(vprId);
console.log('play result', result)
if (result.data.code == 0) {
// decode the base64 payload
let typedArray = this.base64ToUint8Array(result.data.result)
// wrap the raw samples in a WAV header: 16 kHz, mono, 16-bit
let view = new DataView(typedArray.buffer);
view = Recorder.encodeWAV(view, 16000, 16000, 1, 16, true);
// decode and play
this.playAudioData(view.buffer);
};
},
},
// decode a URL-safe base64 string into a Uint8Array
base64ToUint8Array(base64String) {
const padding = '='.repeat((4 - base64String.length % 4) % 4);
const base64 = (base64String + padding)
.replace(/-/g, '+')
.replace(/_/g, '/');
const rawData = window.atob(base64);
const outputArray = new Uint8Array(rawData.length);
for (let i = 0; i < rawData.length; ++i) {
outputArray[i] = rawData.charCodeAt(i);
}
return outputArray;
},
// decode a WAV buffer and play it
playAudioData( wav_buffer ) {
audioCtx.decodeAudioData(wav_buffer, buffer => {
var source = audioCtx.createBufferSource();
source.buffer = buffer;
source.connect(audioCtx.destination);
source.start();
}, function(e) {
Recorder.throwError(e);
})
}
}
};
</script>
<style lang="less" scoped>
@import "./style.less";
</style>

@ -0,0 +1,419 @@
.voiceprint {
width: 1200px;
height: 410px;
background: #FFFFFF;
padding: 41px 80px 56px 80px;
box-sizing: border-box;
display: flex;
// record voiceprint
.voiceprint_recording {
width: 423px;
height: 354px;
margin-right: 66px;
.recording_title {
display: flex;
align-items: center;
margin-bottom: 20px;
div {
&:nth-of-type(1) {
width: 24px;
height: 24px;
background: rgba(41,50,225,0.70);
font-family: PingFangSC-Regular;
font-size: 16px;
color: #FFFFFF;
letter-spacing: 0;
text-align: center;
line-height: 24px;
font-weight: 400;
margin-right: 16px;
border-radius: 50%;
};
&:nth-of-type(2) {
height: 26px;
font-family: PingFangSC-Regular;
font-size: 16px;
color: #000000;
line-height: 26px;
font-weight: 400;
};
};
};
// start recording
.recording_btn {
width: 143px;
height: 44px;
cursor: pointer;
background: #2932E1;
padding: 0px 24px 0px 21px;
box-sizing: border-box;
border-radius: 22px;
display: flex;
align-items: center;
margin-bottom: 20px;
margin-top: 10px;
&:hover {
background: #7278F5;
.recording_img {
width: 20px;
height: 20px;
background: url("../../../assets/image//icon_录制声音小语音1.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 20px 20px;
margin-right: 8.26px;
};
}
.recording_img {
width: 20px;
height: 20px;
background: url("../../../assets/image//icon_录制声音小语音1.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 20px 20px;
margin-right: 8.26px;
};
.recording_prompt {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 12px;
color: #FFFFFF;
font-weight: 400;
};
};
// recording in progress
.recording_btn_the_recording {
width: 143px;
height: 44px;
cursor: pointer;
background: #7278F5;
padding: 0px 24px 0px 21px;
box-sizing: border-box;
border-radius: 22px;
display: flex;
align-items: center;
justify-content: center;
margin-bottom: 40px;
.recording_img_the_recording {
width: 20px;
height: 20px;
background: url("../../../assets/image//icon_小-声音波浪.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 20px 20px;
margin-right: 8.26px;
};
.recording_prompt {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 12px;
color: #FFFFFF;
font-weight: 400;
};
};
// recording finished
.complete_the_recording_btn {
width: 143px;
height: 44px;
cursor: pointer;
background: #2932E1;
padding: 0px 24px 0px 21px;
box-sizing: border-box;
border-radius: 22px;
display: flex;
align-items: center;
margin-bottom: 40px;
&:hover {
background: #7278F5;
.complete_the_recording_img {
width: 20px;
height: 20px;
background: url("../../../assets/image//icon_小-声音波浪.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 20px 20px;
margin-right: 8.26px;
};
}
.complete_the_recording_img {
width: 20px;
height: 20px;
background: url("../../../assets/image//icon_小-声音波浪.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 20px 20px;
margin-right: 8.26px;
};
.complete_the_recording_prompt {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 12px;
color: #FFFFFF;
font-weight: 400;
};
};
// table
.recording_table {
width: 322px;
.recording_table_box {
.ant-table-thead > tr > th {
color: rgba(0, 0, 0, 0.85);
font-weight: 500;
text-align: left;
background: rgba(40,50,225,0.08);
border-bottom: none;
transition: background 0.3s ease;
height: 22px;
font-family: PingFangSC-Regular;
font-size: 16px;
color: #333333;
// text-align: center;
font-weight: 400;
&:nth-of-type(2) {
border-left: 2px solid white;
};
};
.ant-table-tbody > tr > td {
border-bottom: 1px solid #f0f0f0;
transition: background 0.3s;
height: 22px;
font-family: PingFangSC-Regular;
font-size: 16px;
color: #333333;
// text-align: center;
font-weight: 400;
};
};
};
// input
.recording_input {
width: 322px;
margin-bottom: 20px;
};
};
// arrow pointer
.recording_point_to {
width: 63px;
height: 47px;
background: url("../../../assets/image//步骤-箭头切图@2x.png");
background-repeat: no-repeat;
background-position: center;
background-size: 63px 47px;
margin-right: 66px;
margin-top: 198px;
};
// recognize voiceprint
.voiceprint_identify {
width: 423px;
height: 354px;
.identify_title {
display: flex;
align-items: center;
margin-bottom: 20px;
div {
&:nth-of-type(1) {
width: 24px;
height: 24px;
background: rgba(41,50,225,0.70);
font-family: PingFangSC-Regular;
font-size: 16px;
color: #FFFFFF;
letter-spacing: 0;
text-align: center;
line-height: 24px;
font-weight: 400;
margin-right: 16px;
border-radius: 50%;
};
&:nth-of-type(2) {
height: 26px;
font-family: PingFangSC-Regular;
font-size: 16px;
color: #000000;
line-height: 26px;
font-weight: 400;
};
};
};
// start recognition
.identify_btn {
width: 143px;
height: 44px;
cursor: pointer;
background: #2932E1;
padding: 0px 24px 0px 21px;
box-sizing: border-box;
border-radius: 22px;
display: flex;
align-items: center;
margin-bottom: 40px;
margin-top: 10px;
&:hover {
background: #7278F5;
.identify_img {
width: 20px;
height: 20px;
background: url("../../../assets/image//icon_录制声音小语音1.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 20px 20px;
margin-right: 8.26px;
};
}
.identify_img {
width: 20px;
height: 20px;
background: url("../../../assets/image//icon_录制声音小语音1.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 20px 20px;
margin-right: 8.26px;
};
.identify_prompt {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 12px;
color: #FFFFFF;
font-weight: 400;
};
};
// recognizing
.identify_btn_the_recording {
width: 143px;
height: 44px;
cursor: pointer;
background: #7278F5;
padding: 0px 24px 0px 21px;
box-sizing: border-box;
border-radius: 22px;
display: flex;
align-items: center;
justify-content: center;
margin-bottom: 40px;
.identify_img_the_recording {
width: 20px;
height: 20px;
background: url("../../../assets/image//icon_录制声音小语音1.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 20px 20px;
margin-right: 8.26px;
};
.recording_prompt {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 12px;
color: #FFFFFF;
font-weight: 400;
};
};
// recognition finished
.identify_complete_the_recording_btn {
width: 143px;
height: 44px;
cursor: pointer;
background: #2932E1;
padding: 0px 24px 0px 21px;
box-sizing: border-box;
border-radius: 22px;
display: flex;
align-items: center;
margin-bottom: 40px;
&:hover {
background: #7278F5;
.identify_complete_the_recording_img {
width: 20px;
height: 20px;
background: url("../../../assets/image//icon_小-声音波浪.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 20px 20px;
margin-right: 8.26px;
};
}
.identify_complete_the_recording_img {
width: 20px;
height: 20px;
background: url("../../../assets/image//icon_小-声音波浪.svg");
background-repeat: no-repeat;
background-position: center;
background-size: 20px 20px;
margin-right: 8.26px;
};
.identify_complete_the_recording_prompt {
height: 20px;
font-family: PingFangSC-Regular;
font-size: 12px;
color: #FFFFFF;
font-weight: 400;
};
};
// result
.identify_result {
width: 422px;
height: 184px;
text-align: center;
line-height: 184px;
background: #FAFAFA;
position: relative;
.identify_result_default {
font-family: PingFangSC-Regular;
font-size: 16px;
color: #999999;
font-weight: 400;
};
.identify_result_content {
// text-align: center;
// position: absolute;
// top: 50%;
// left: 50%;
// transform: translate(-50%,-50%);
div {
&:nth-of-type(1) {
height: 22px;
font-family: PingFangSC-Regular;
font-size: 16px;
color: #666666;
font-weight: 400;
margin-bottom: 10px;
};
&:nth-of-type(2) {
height: 33px;
font-family: PingFangSC-Medium;
font-size: 24px;
color: #000000;
font-weight: 500;
};
};
};
};
};
.action_btn {
display: inline-block;
height: 22px;
font-family: PingFangSC-Regular;
font-size: 16px;
color: #2932E1;
text-align: center;
font-weight: 400;
cursor: pointer;
};
};

@ -0,0 +1,83 @@
.experience {
width: 100%;
height: 709px;
// background: url("../assets/image/在线体验-背景@2x.png") no-repeat;
background-size: 100% 709px;
background-position: initial;
// centered content wrapper
.experience_wrapper {
width: 1200px;
height: 709px;
margin: 0 auto;
padding: 0px 0px 0px 0px;
box-sizing: border-box;
// background: red;
.experience_title {
height: 42px;
font-family: PingFangSC-Semibold;
font-size: 30px;
color: #000000;
font-weight: 600;
line-height: 42px;
text-align: center;
margin-bottom: 10px;
};
.experience_describe {
height: 22px;
font-family: PingFangSC-Regular;
font-size: 14px;
color: #666666;
letter-spacing: 0;
text-align: center;
line-height: 22px;
font-weight: 400;
margin-bottom: 30px;
};
.experience_content {
width: 1200px;
margin: 0 auto;
display: flex;
justify-content: center;
.experience_tabs {
margin-top: 15px;
& > .ant-tabs-nav {
margin-bottom: 20px;
&::before {
content: none;
}
.ant-tabs-nav-wrap {
justify-content: center;
}
.ant-tabs-tab {
font-size: 20px;
}
.ant-tabs-nav-list {
margin-right: -32px;
flex: none;
}
};
.ant-tabs-nav::before {
position: absolute;
right: 0;
left: 0;
border-bottom: 1px solid #f6f7fe;
content: '';
};
};
};
};
};
.experience::after {
content: "";
display: block;
clear: both;
visibility: hidden;
}

@ -0,0 +1,13 @@
import { createApp } from 'vue'
import ElementPlus from 'element-plus'
import 'element-plus/dist/index.css'
import Antd from 'ant-design-vue';
import 'ant-design-vue/dist/antd.css';
import App from './App.vue'
import axios from 'axios'
const app = createApp(App)
app.config.globalProperties.$http = axios // expose axios to all components as this.$http
app.use(ElementPlus).use(Antd)
app.mount('#app')

@ -0,0 +1,28 @@
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
// https://vitejs.dev/config/
export default defineConfig({
plugins: [vue()],
css:
{ preprocessorOptions:
{ css:
{
charset: false // suppress @charset warnings from imported stylesheets with non-ASCII content
}
}
},
build: {
assetsInlineLimit: 2048 // inline assets smaller than 2 KB as base64
},
server: {
host: "0.0.0.0",
proxy: {
"/api": {
target: "http://localhost:8010",
changeOrigin: true,
rewrite: (path) => path.replace(/^\/api/, ""),
},
},
},
})

@ -0,0 +1,785 @@
# THIS IS AN AUTOGENERATED FILE. DO NOT EDIT THIS FILE DIRECTLY.
# yarn lockfile v1
"@ant-design/colors@^6.0.0":
version "6.0.0"
resolved "https://registry.npmmirror.com/@ant-design/colors/-/colors-6.0.0.tgz"
integrity sha512-qAZRvPzfdWHtfameEGP2Qvuf838NhergR35o+EuVyB5XvSA98xod5r4utvi4TJ3ywmevm290g9nsCG5MryrdWQ==
dependencies:
"@ctrl/tinycolor" "^3.4.0"
"@ant-design/icons-svg@^4.2.1":
version "4.2.1"
resolved "https://registry.npmmirror.com/@ant-design/icons-svg/-/icons-svg-4.2.1.tgz"
integrity sha512-EB0iwlKDGpG93hW8f85CTJTs4SvMX7tt5ceupvhALp1IF44SeUFOMhKUOYqpsoYWQKAOuTRDMqn75rEaKDp0Xw==
"@ant-design/icons-vue@^6.0.0":
version "6.1.0"
resolved "https://registry.npmmirror.com/@ant-design/icons-vue/-/icons-vue-6.1.0.tgz"
integrity sha512-EX6bYm56V+ZrKN7+3MT/ubDkvJ5rK/O2t380WFRflDcVFgsvl3NLH7Wxeau6R8DbrO5jWR6DSTC3B6gYFp77AA==
dependencies:
"@ant-design/colors" "^6.0.0"
"@ant-design/icons-svg" "^4.2.1"
"@babel/parser@^7.16.4":
version "7.17.9"
resolved "https://registry.npmmirror.com/@babel/parser/-/parser-7.17.9.tgz"
integrity sha512-vqUSBLP8dQHFPdPi9bc5GK9vRkYHJ49fsZdtoJ8EQ8ibpwk5rPKfvNIwChB0KVXcIjcepEBBd2VHC5r9Gy8ueg==
"@babel/runtime@^7.10.5":
version "7.17.9"
resolved "https://registry.npmmirror.com/@babel/runtime/-/runtime-7.17.9.tgz"
integrity sha512-lSiBBvodq29uShpWGNbgFdKYNiFDo5/HIYsaCEY9ff4sb10x9jizo2+pRrSyF4jKZCXqgzuqBOQKbUm90gQwJg==
dependencies:
regenerator-runtime "^0.13.4"
"@ctrl/tinycolor@^3.4.0":
version "3.4.1"
resolved "https://registry.npmmirror.com/@ctrl/tinycolor/-/tinycolor-3.4.1.tgz"
integrity sha512-ej5oVy6lykXsvieQtqZxCOaLT+xD4+QNarq78cIYISHmZXshCvROLudpQN3lfL8G0NL7plMSSK+zlyvCaIJ4Iw==
"@element-plus/icons-vue@^1.1.4":
version "1.1.4"
resolved "https://registry.npmmirror.com/@element-plus/icons-vue/-/icons-vue-1.1.4.tgz"
integrity sha512-Iz/nHqdp1sFPmdzRwHkEQQA3lKvoObk8azgABZ81QUOpW9s/lUyQVUSh0tNtEPZXQlKwlSh7SPgoVxzrE0uuVQ==
"@floating-ui/core@^0.6.1":
version "0.6.1"
resolved "https://registry.npmmirror.com/@floating-ui/core/-/core-0.6.1.tgz"
integrity sha512-Y30eVMcZva8o84c0HcXAtDO4BEzPJMvF6+B7x7urL2xbAqVsGJhojOyHLaoQHQYjb6OkqRq5kO+zeySycQwKqg==
"@floating-ui/dom@^0.4.2":
version "0.4.4"
resolved "https://registry.npmmirror.com/@floating-ui/dom/-/dom-0.4.4.tgz"
integrity sha512-0Ulu3B/dqQplUUSqnTx0foSrlYuMN+GTtlJWvNJwt6Fr7/PqmlR/Y08o6/+bxDWr6p3roBJRaQ51MDZsNmEhhw==
dependencies:
"@floating-ui/core" "^0.6.1"
"@popperjs/core@^2.11.4":
version "2.11.5"
resolved "https://registry.npmmirror.com/@popperjs/core/-/core-2.11.5.tgz"
integrity sha512-9X2obfABZuDVLCgPK9aX0a/x4jaOEweTTWE2+9sr0Qqqevj2Uv5XorvusThmc9XGYpS9yI+fhh8RTafBtGposw==
"@simonwep/pickr@~1.8.0":
version "1.8.2"
resolved "https://registry.npmmirror.com/@simonwep/pickr/-/pickr-1.8.2.tgz"
integrity sha512-/l5w8BIkrpP6n1xsetx9MWPWlU6OblN5YgZZphxan0Tq4BByTCETL6lyIeY8lagalS2Nbt4F2W034KHLIiunKA==
dependencies:
core-js "^3.15.1"
nanopop "^2.1.0"
"@types/lodash-es@^4.17.6":
version "4.17.6"
resolved "https://registry.npmmirror.com/@types/lodash-es/-/lodash-es-4.17.6.tgz"
integrity sha512-R+zTeVUKDdfoRxpAryaQNRKk3105Rrgx2CFRClIgRGaqDTdjsm8h6IYA8ir584W3ePzkZfst5xIgDwYrlh9HLg==
dependencies:
"@types/lodash" "*"
"@types/lodash@*", "@types/lodash@^4.14.181":
version "4.14.181"
resolved "https://registry.npmmirror.com/@types/lodash/-/lodash-4.14.181.tgz"
integrity sha512-n3tyKthHJbkiWhDZs3DkhkCzt2MexYHXlX0td5iMplyfwketaOeKboEVBqzceH7juqvEg3q5oUoBFxSLu7zFag==
"@vitejs/plugin-vue@^2.3.0":
version "2.3.1"
resolved "https://registry.npmmirror.com/@vitejs/plugin-vue/-/plugin-vue-2.3.1.tgz"
integrity sha512-YNzBt8+jt6bSwpt7LP890U1UcTOIZZxfpE5WOJ638PNxSEKOqAi0+FSKS0nVeukfdZ0Ai/H7AFd6k3hayfGZqQ==
"@vue/compiler-core@3.2.32":
version "3.2.32"
resolved "https://registry.npmmirror.com/@vue/compiler-core/-/compiler-core-3.2.32.tgz"
integrity sha512-bRQ8Rkpm/aYFElDWtKkTPHeLnX5pEkNxhPUcqu5crEJIilZH0yeFu/qUAcV4VfSE2AudNPkQSOwMZofhnuutmA==
dependencies:
"@babel/parser" "^7.16.4"
"@vue/shared" "3.2.32"
estree-walker "^2.0.2"
source-map "^0.6.1"
"@vue/compiler-dom@3.2.32":
version "3.2.32"
resolved "https://registry.npmmirror.com/@vue/compiler-dom/-/compiler-dom-3.2.32.tgz"
integrity sha512-maa3PNB/NxR17h2hDQfcmS02o1f9r9QIpN1y6fe8tWPrS1E4+q8MqrvDDQNhYVPd84rc3ybtyumrgm9D5Rf/kg==
dependencies:
"@vue/compiler-core" "3.2.32"
"@vue/shared" "3.2.32"
"@vue/compiler-sfc@3.2.32":
version "3.2.32"
resolved "https://registry.npmmirror.com/@vue/compiler-sfc/-/compiler-sfc-3.2.32.tgz"
integrity sha512-uO6+Gh3AVdWm72lRRCjMr8nMOEqc6ezT9lWs5dPzh1E9TNaJkMYPaRtdY9flUv/fyVQotkfjY/ponjfR+trPSg==
dependencies:
"@babel/parser" "^7.16.4"
"@vue/compiler-core" "3.2.32"
"@vue/compiler-dom" "3.2.32"
"@vue/compiler-ssr" "3.2.32"
"@vue/reactivity-transform" "3.2.32"
"@vue/shared" "3.2.32"
estree-walker "^2.0.2"
magic-string "^0.25.7"
postcss "^8.1.10"
source-map "^0.6.1"
"@vue/compiler-ssr@3.2.32":
version "3.2.32"
resolved "https://registry.npmmirror.com/@vue/compiler-ssr/-/compiler-ssr-3.2.32.tgz"
integrity sha512-ZklVUF/SgTx6yrDUkaTaBL/JMVOtSocP+z5Xz/qIqqLdW/hWL90P+ob/jOQ0Xc/om57892Q7sRSrex0wujOL2Q==
dependencies:
"@vue/compiler-dom" "3.2.32"
"@vue/shared" "3.2.32"
"@vue/reactivity-transform@3.2.32":
version "3.2.32"
resolved "https://registry.npmmirror.com/@vue/reactivity-transform/-/reactivity-transform-3.2.32.tgz"
integrity sha512-CW1W9zaJtE275tZSWIfQKiPG0iHpdtSlmTqYBu7Y62qvtMgKG5yOxtvBs4RlrZHlaqFSE26avLAgQiTp4YHozw==
dependencies:
"@babel/parser" "^7.16.4"
"@vue/compiler-core" "3.2.32"
"@vue/shared" "3.2.32"
estree-walker "^2.0.2"
magic-string "^0.25.7"
"@vue/reactivity@3.2.32":
version "3.2.32"
resolved "https://registry.npmmirror.com/@vue/reactivity/-/reactivity-3.2.32.tgz"
integrity sha512-4zaDumuyDqkuhbb63hRd+YHFGopW7srFIWesLUQ2su/rJfWrSq3YUvoKAJE8Eu1EhZ2Q4c1NuwnEreKj1FkDxA==
dependencies:
"@vue/shared" "3.2.32"
"@vue/runtime-core@3.2.32":
version "3.2.32"
resolved "https://registry.npmmirror.com/@vue/runtime-core/-/runtime-core-3.2.32.tgz"
integrity sha512-uKKzK6LaCnbCJ7rcHvsK0azHLGpqs+Vi9B28CV1mfWVq1F3Bj8Okk3cX+5DtD06aUh4V2bYhS2UjjWiUUKUF0w==
dependencies:
"@vue/reactivity" "3.2.32"
"@vue/shared" "3.2.32"
"@vue/runtime-dom@3.2.32":
version "3.2.32"
resolved "https://registry.npmmirror.com/@vue/runtime-dom/-/runtime-dom-3.2.32.tgz"
integrity sha512-AmlIg+GPqjkNoADLjHojEX5RGcAg+TsgXOOcUrtDHwKvA8mO26EnLQLB8nylDjU6AMJh2CIYn8NEgyOV5ZIScQ==
dependencies:
"@vue/runtime-core" "3.2.32"
"@vue/shared" "3.2.32"
csstype "^2.6.8"
"@vue/server-renderer@3.2.32":
version "3.2.32"
resolved "https://registry.npmmirror.com/@vue/server-renderer/-/server-renderer-3.2.32.tgz"
integrity sha512-TYKpZZfRJpGTTiy/s6bVYwQJpAUx3G03z4G7/3O18M11oacrMTVHaHjiPuPqf3xQtY8R4LKmQ3EOT/DRCA/7Wg==
dependencies:
"@vue/compiler-ssr" "3.2.32"
"@vue/shared" "3.2.32"
"@vue/shared@3.2.32":
version "3.2.32"
resolved "https://registry.npmmirror.com/@vue/shared/-/shared-3.2.32.tgz"
integrity sha512-bjcixPErUsAnTQRQX4Z5IQnICYjIfNCyCl8p29v1M6kfVzvwOICPw+dz48nNuWlTOOx2RHhzHdazJibE8GSnsw==
"@vueuse/core@^8.2.4":
version "8.2.5"
resolved "https://registry.npmmirror.com/@vueuse/core/-/core-8.2.5.tgz"
integrity sha512-5prZAA1Ji2ltwNUnzreu6WIXYqHYP/9U2BiY5mD/650VYLpVcwVlYznJDFcLCmEWI3o3Vd34oS1FUf+6Mh68GQ==
dependencies:
"@vueuse/metadata" "8.2.5"
"@vueuse/shared" "8.2.5"
vue-demi "*"
"@vueuse/metadata@8.2.5":
version "8.2.5"
resolved "https://registry.npmmirror.com/@vueuse/metadata/-/metadata-8.2.5.tgz"
integrity sha512-Lk9plJjh9cIdiRdcj16dau+2LANxIdFCiTgdfzwYXbflxq0QnMBeOD2qHgKDE7fuVrtPcVWj8VSuZEx1HRfNQA==
"@vueuse/shared@8.2.5":
version "8.2.5"
resolved "https://registry.npmmirror.com/@vueuse/shared/-/shared-8.2.5.tgz"
integrity sha512-lNWo+7sk6JCuOj4AiYM+6HZ6fq4xAuVq1sVckMQKgfCJZpZRe4i8es+ZULO5bYTKP+VrOCtqrLR2GzEfrbr3YQ==
dependencies:
vue-demi "*"
ant-design-vue@^2.2.8:
version "2.2.8"
resolved "https://registry.npmmirror.com/ant-design-vue/-/ant-design-vue-2.2.8.tgz"
integrity sha512-3graq9/gCfJQs6hznrHV6sa9oDmk/D1H3Oo0vLdVpPS/I61fZPk8NEyNKCHpNA6fT2cx6xx9U3QS63uuyikg/Q==
dependencies:
"@ant-design/icons-vue" "^6.0.0"
"@babel/runtime" "^7.10.5"
"@simonwep/pickr" "~1.8.0"
array-tree-filter "^2.1.0"
async-validator "^3.3.0"
dom-align "^1.12.1"
dom-scroll-into-view "^2.0.0"
lodash "^4.17.21"
lodash-es "^4.17.15"
moment "^2.27.0"
omit.js "^2.0.0"
resize-observer-polyfill "^1.5.1"
scroll-into-view-if-needed "^2.2.25"
shallow-equal "^1.0.0"
vue-types "^3.0.0"
warning "^4.0.0"
array-tree-filter@^2.1.0:
version "2.1.0"
resolved "https://registry.npmmirror.com/array-tree-filter/-/array-tree-filter-2.1.0.tgz"
integrity sha512-4ROwICNlNw/Hqa9v+rk5h22KjmzB1JGTMVKP2AKJBOCgb0yL0ASf0+YvCcLNNwquOHNX48jkeZIJ3a+oOQqKcw==
async-validator@^3.3.0:
version "3.5.2"
resolved "https://registry.npmmirror.com/async-validator/-/async-validator-3.5.2.tgz"
integrity sha512-8eLCg00W9pIRZSB781UUX/H6Oskmm8xloZfr09lz5bikRpBVDlJ3hRVuxxP1SxcwsEYfJ4IU8Q19Y8/893r3rQ==
async-validator@^4.0.7:
version "4.0.7"
resolved "https://registry.npmmirror.com/async-validator/-/async-validator-4.0.7.tgz"
integrity sha512-Pj2IR7u8hmUEDOwB++su6baaRi+QvsgajuFB9j95foM1N2gy5HM4z60hfusIO0fBPG5uLAEl6yCJr1jNSVugEQ==
axios@^0.26.1:
version "0.26.1"
resolved "https://registry.npmmirror.com/axios/-/axios-0.26.1.tgz"
integrity sha512-fPwcX4EvnSHuInCMItEhAGnaSEXRBjtzh9fOtsE6E1G6p7vl7edEeZe11QHf18+6+9gR5PbKV/sGKNaD8YaMeA==
dependencies:
follow-redirects "^1.14.8"
compute-scroll-into-view@^1.0.17:
version "1.0.17"
resolved "https://registry.npmmirror.com/compute-scroll-into-view/-/compute-scroll-into-view-1.0.17.tgz"
integrity sha512-j4dx+Fb0URmzbwwMUrhqWM2BEWHdFGx+qZ9qqASHRPqvTYdqvWnHg0H1hIbcyLnvgnoNAVMlwkepyqM3DaIFUg==
copy-anything@^2.0.1:
version "2.0.6"
resolved "https://registry.npmmirror.com/copy-anything/-/copy-anything-2.0.6.tgz"
integrity sha512-1j20GZTsvKNkc4BY3NpMOM8tt///wY3FpIzozTOFO2ffuZcV61nojHXVKIy3WM+7ADCy5FVhdZYHYDdgTU0yJw==
dependencies:
is-what "^3.14.1"
core-js@^3.15.1:
version "3.22.5"
resolved "https://registry.npmmirror.com/core-js/-/core-js-3.22.5.tgz"
integrity sha512-VP/xYuvJ0MJWRAobcmQ8F2H6Bsn+s7zqAAjFaHGBMc5AQm7zaelhD1LGduFn2EehEcQcU+br6t+fwbpQ5d1ZWA==
csstype@^2.6.8:
version "2.6.20"
resolved "https://registry.npmmirror.com/csstype/-/csstype-2.6.20.tgz"
integrity sha512-/WwNkdXfckNgw6S5R125rrW8ez139lBHWouiBvX8dfMFtcn6V81REDqnH7+CRpRipfYlyU1CmOnOxrmGcFOjeA==
dayjs@^1.11.0:
version "1.11.0"
resolved "https://registry.npmmirror.com/dayjs/-/dayjs-1.11.0.tgz"
integrity sha512-JLC809s6Y948/FuCZPm5IX8rRhQwOiyMb2TfVVQEixG7P8Lm/gt5S7yoQZmC8x1UehI9Pb7sksEt4xx14m+7Ug==
debug@^3.2.6:
version "3.2.7"
resolved "https://registry.npmmirror.com/debug/-/debug-3.2.7.tgz"
integrity sha512-CFjzYYAi4ThfiQvizrFQevTTXHtnCqWfe7x1AhgEscTz6ZbLbfoLRLPugTQyBth6f8ZERVUSyWHFD/7Wu4t1XQ==
dependencies:
ms "^2.1.1"
dom-align@^1.12.1:
version "1.12.3"
resolved "https://registry.npmmirror.com/dom-align/-/dom-align-1.12.3.tgz"
integrity sha512-Gj9hZN3a07cbR6zviMUBOMPdWxYhbMI+x+WS0NAIu2zFZmbK8ys9R79g+iG9qLnlCwpFoaB+fKy8Pdv470GsPA==
dom-scroll-into-view@^2.0.0:
version "2.0.1"
resolved "https://registry.npmmirror.com/dom-scroll-into-view/-/dom-scroll-into-view-2.0.1.tgz"
integrity sha512-bvVTQe1lfaUr1oFzZX80ce9KLDlZ3iU+XGNE/bz9HnGdklTieqsbmsLHe+rT2XWqopvL0PckkYqN7ksmm5pe3w==
element-plus@^2.1.9:
version "2.1.9"
resolved "https://registry.npmmirror.com/element-plus/-/element-plus-2.1.9.tgz"
integrity sha512-6mWqS3YrmJPnouWP4otzL8+MehfOnDFqDbcIdnmC07p+Z0JkWe/CVKc4Wky8AYC8nyDMUQyiZYvooCbqGuM7pg==
dependencies:
"@ctrl/tinycolor" "^3.4.0"
"@element-plus/icons-vue" "^1.1.4"
"@floating-ui/dom" "^0.4.2"
"@popperjs/core" "^2.11.4"
"@types/lodash" "^4.14.181"
"@types/lodash-es" "^4.17.6"
"@vueuse/core" "^8.2.4"
async-validator "^4.0.7"
dayjs "^1.11.0"
escape-html "^1.0.3"
lodash "^4.17.21"
lodash-es "^4.17.21"
lodash-unified "^1.0.2"
memoize-one "^6.0.0"
normalize-wheel-es "^1.1.2"
errno@^0.1.1:
version "0.1.8"
resolved "https://registry.npmmirror.com/errno/-/errno-0.1.8.tgz"
integrity sha512-dJ6oBr5SQ1VSd9qkk7ByRgb/1SH4JZjCHSW/mr63/QcXO9zLVxvJ6Oy13nio03rxpSnVDDjFor75SjVeZWPW/A==
dependencies:
prr "~1.0.1"
esbuild-android-64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-android-64/-/esbuild-android-64-0.14.36.tgz#fc5f95ce78c8c3d790fa16bc71bd904f2bb42aa1"
integrity sha512-jwpBhF1jmo0tVCYC/ORzVN+hyVcNZUWuozGcLHfod0RJCedTDTvR4nwlTXdx1gtncDqjk33itjO+27OZHbiavw==
esbuild-android-arm64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-android-arm64/-/esbuild-android-arm64-0.14.36.tgz#44356fbb9f8de82a5cdf11849e011dfb3ad0a8a8"
integrity sha512-/hYkyFe7x7Yapmfv4X/tBmyKnggUmdQmlvZ8ZlBnV4+PjisrEhAvC3yWpURuD9XoB8Wa1d5dGkTsF53pIvpjsg==
esbuild-darwin-64@0.14.36:
version "0.14.36"
resolved "https://registry.npmmirror.com/esbuild-darwin-64/-/esbuild-darwin-64-0.14.36.tgz"
integrity sha512-kkl6qmV0dTpyIMKagluzYqlc1vO0ecgpviK/7jwPbRDEv5fejRTaBBEE2KxEQbTHcLhiiDbhG7d5UybZWo/1zQ==
esbuild-darwin-arm64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-darwin-arm64/-/esbuild-darwin-arm64-0.14.36.tgz#2a8040c2e465131e5281034f3c72405e643cb7b2"
integrity sha512-q8fY4r2Sx6P0Pr3VUm//eFYKVk07C5MHcEinU1BjyFnuYz4IxR/03uBbDwluR6ILIHnZTE7AkTUWIdidRi1Jjw==
esbuild-freebsd-64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-freebsd-64/-/esbuild-freebsd-64-0.14.36.tgz#d82c387b4d01fe9e8631f97d41eb54f2dbeb68a3"
integrity sha512-Hn8AYuxXXRptybPqoMkga4HRFE7/XmhtlQjXFHoAIhKUPPMeJH35GYEUWGbjteai9FLFvBAjEAlwEtSGxnqWww==
esbuild-freebsd-arm64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-freebsd-arm64/-/esbuild-freebsd-arm64-0.14.36.tgz#e8ce2e6c697da6c7ecd0cc0ac821d47c5ab68529"
integrity sha512-S3C0attylLLRiCcHiJd036eDEMOY32+h8P+jJ3kTcfhJANNjP0TNBNL30TZmEdOSx/820HJFgRrqpNAvTbjnDA==
esbuild-linux-32@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-linux-32/-/esbuild-linux-32-0.14.36.tgz#a4a261e2af91986ea62451f2db712a556cb38a15"
integrity sha512-Eh9OkyTrEZn9WGO4xkI3OPPpUX7p/3QYvdG0lL4rfr73Ap2HAr6D9lP59VMF64Ex01LhHSXwIsFG/8AQjh6eNw==
esbuild-linux-64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-linux-64/-/esbuild-linux-64-0.14.36.tgz#4a9500f9197e2c8fcb884a511d2c9d4c2debde72"
integrity sha512-vFVFS5ve7PuwlfgoWNyRccGDi2QTNkQo/2k5U5ttVD0jRFaMlc8UQee708fOZA6zTCDy5RWsT5MJw3sl2X6KDg==
esbuild-linux-arm64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-linux-arm64/-/esbuild-linux-arm64-0.14.36.tgz#c91c21e25b315464bd7da867365dd1dae14ca176"
integrity sha512-24Vq1M7FdpSmaTYuu1w0Hdhiqkbto1I5Pjyi+4Cdw5fJKGlwQuw+hWynTcRI/cOZxBcBpP21gND7W27gHAiftw==
esbuild-linux-arm@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-linux-arm/-/esbuild-linux-arm-0.14.36.tgz#90e23bca2e6e549affbbe994f80ba3bb6c4d934a"
integrity sha512-NhgU4n+NCsYgt7Hy61PCquEz5aevI6VjQvxwBxtxrooXsxt5b2xtOUXYZe04JxqQo+XZk3d1gcr7pbV9MAQ/Lg==
esbuild-linux-mips64le@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-linux-mips64le/-/esbuild-linux-mips64le-0.14.36.tgz#40e11afb08353ff24709fc89e4db0f866bc131d2"
integrity sha512-hZUeTXvppJN+5rEz2EjsOFM9F1bZt7/d2FUM1lmQo//rXh1RTFYzhC0txn7WV0/jCC7SvrGRaRz0NMsRPf8SIA==
esbuild-linux-ppc64le@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-linux-ppc64le/-/esbuild-linux-ppc64le-0.14.36.tgz#9e8a588c513d06cc3859f9dcc52e5fdfce8a1a5e"
integrity sha512-1Bg3QgzZjO+QtPhP9VeIBhAduHEc2kzU43MzBnMwpLSZ890azr4/A9Dganun8nsqD/1TBcqhId0z4mFDO8FAvg==
esbuild-linux-riscv64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-linux-riscv64/-/esbuild-linux-riscv64-0.14.36.tgz#e578c09b23b3b97652e60e3692bfda628b541f06"
integrity sha512-dOE5pt3cOdqEhaufDRzNCHf5BSwxgygVak9UR7PH7KPVHwSTDAZHDoEjblxLqjJYpc5XaU9+gKJ9F8mp9r5I4A==
esbuild-linux-s390x@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-linux-s390x/-/esbuild-linux-s390x-0.14.36.tgz#3c9dab40d0d69932ffded0fd7317bb403626c9bc"
integrity sha512-g4FMdh//BBGTfVHjF6MO7Cz8gqRoDPzXWxRvWkJoGroKA18G9m0wddvPbEqcQf5Tbt2vSc1CIgag7cXwTmoTXg==
esbuild-netbsd-64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-netbsd-64/-/esbuild-netbsd-64-0.14.36.tgz#e27847f6d506218291619b8c1e121ecd97628494"
integrity sha512-UB2bVImxkWk4vjnP62ehFNZ73lQY1xcnL5ZNYF3x0AG+j8HgdkNF05v67YJdCIuUJpBuTyCK8LORCYo9onSW+A==
esbuild-openbsd-64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-openbsd-64/-/esbuild-openbsd-64-0.14.36.tgz#c94c04c557fae516872a586eae67423da6d2fabb"
integrity sha512-NvGB2Chf8GxuleXRGk8e9zD3aSdRO5kLt9coTQbCg7WMGXeX471sBgh4kSg8pjx0yTXRt0MlrUDnjVYnetyivg==
esbuild-sunos-64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-sunos-64/-/esbuild-sunos-64-0.14.36.tgz#9b79febc0df65a30f1c9bd63047d1675511bf99d"
integrity sha512-VkUZS5ftTSjhRjuRLp+v78auMO3PZBXu6xl4ajomGenEm2/rGuWlhFSjB7YbBNErOchj51Jb2OK8lKAo8qdmsQ==
esbuild-windows-32@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-windows-32/-/esbuild-windows-32-0.14.36.tgz#910d11936c8d2122ffdd3275e5b28d8a4e1240ec"
integrity sha512-bIar+A6hdytJjZrDxfMBUSEHHLfx3ynoEZXx/39nxy86pX/w249WZm8Bm0dtOAByAf4Z6qV0LsnTIJHiIqbw0w==
esbuild-windows-64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-windows-64/-/esbuild-windows-64-0.14.36.tgz#21b4ce8b42a4efc63f4b58ec617f1302448aad26"
integrity sha512-+p4MuRZekVChAeueT1Y9LGkxrT5x7YYJxYE8ZOTcEfeUUN43vktSn6hUNsvxzzATrSgq5QqRdllkVBxWZg7KqQ==
esbuild-windows-arm64@0.14.36:
version "0.14.36"
resolved "https://registry.yarnpkg.com/esbuild-windows-arm64/-/esbuild-windows-arm64-0.14.36.tgz#ba21546fecb7297667d0052d00150de22c044b24"
integrity sha512-fBB4WlDqV1m18EF/aheGYQkQZHfPHiHJSBYzXIo8yKehek+0BtBwo/4PNwKGJ5T0YK0oc8pBKjgwPbzSrPLb+Q==
esbuild@^0.14.27:
version "0.14.36"
resolved "https://registry.npmmirror.com/esbuild/-/esbuild-0.14.36.tgz"
integrity sha512-HhFHPiRXGYOCRlrhpiVDYKcFJRdO0sBElZ668M4lh2ER0YgnkLxECuFe7uWCf23FrcLc59Pqr7dHkTqmRPDHmw==
optionalDependencies:
esbuild-android-64 "0.14.36"
esbuild-android-arm64 "0.14.36"
esbuild-darwin-64 "0.14.36"
esbuild-darwin-arm64 "0.14.36"
esbuild-freebsd-64 "0.14.36"
esbuild-freebsd-arm64 "0.14.36"
esbuild-linux-32 "0.14.36"
esbuild-linux-64 "0.14.36"
esbuild-linux-arm "0.14.36"
esbuild-linux-arm64 "0.14.36"
esbuild-linux-mips64le "0.14.36"
esbuild-linux-ppc64le "0.14.36"
esbuild-linux-riscv64 "0.14.36"
esbuild-linux-s390x "0.14.36"
esbuild-netbsd-64 "0.14.36"
esbuild-openbsd-64 "0.14.36"
esbuild-sunos-64 "0.14.36"
esbuild-windows-32 "0.14.36"
esbuild-windows-64 "0.14.36"
esbuild-windows-arm64 "0.14.36"
escape-html@^1.0.3:
version "1.0.3"
resolved "https://registry.npmmirror.com/escape-html/-/escape-html-1.0.3.tgz"
integrity sha512-NiSupZ4OeuGwr68lGIeym/ksIZMJodUGOSCZ/FSnTxcrekbvqrgdUxlJOMpijaKZVjAJrWrGs/6Jy8OMuyj9ow==
estree-walker@^2.0.2:
version "2.0.2"
resolved "https://registry.npmmirror.com/estree-walker/-/estree-walker-2.0.2.tgz"
integrity sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w==
follow-redirects@^1.14.8:
version "1.14.9"
resolved "https://registry.npmmirror.com/follow-redirects/-/follow-redirects-1.14.9.tgz"
integrity sha512-MQDfihBQYMcyy5dhRDJUHcw7lb2Pv/TuE6xP1vyraLukNDHKbDxDNaOE3NbCAdKQApno+GPRyo1YAp89yCjK4w==
fsevents@~2.3.2:
version "2.3.2"
resolved "https://registry.npmmirror.com/fsevents/-/fsevents-2.3.2.tgz"
integrity sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==
function-bind@^1.1.1:
version "1.1.1"
resolved "https://registry.npmmirror.com/function-bind/-/function-bind-1.1.1.tgz"
integrity sha512-yIovAzMX49sF8Yl58fSCWJ5svSLuaibPxXQJFLmBObTuCr0Mf1KiPopGM9NiFjiYBCbfaa2Fh6breQ6ANVTI0A==
graceful-fs@^4.1.2:
version "4.2.10"
resolved "https://registry.npmmirror.com/graceful-fs/-/graceful-fs-4.2.10.tgz"
integrity sha512-9ByhssR2fPVsNZj478qUUbKfmL0+t5BDVyjShtyZZLiK7ZDAArFFfopyOTj0M05wE2tJPisA4iTnnXl2YoPvOA==
has@^1.0.3:
version "1.0.3"
resolved "https://registry.npmmirror.com/has/-/has-1.0.3.tgz"
integrity sha512-f2dvO0VU6Oej7RkWJGrehjbzMAjFp5/VKPp5tTpWIV4JHHZK1/BxbFRtf/siA2SWTe09caDmVtYYzWEIbBS4zw==
dependencies:
function-bind "^1.1.1"
iconv-lite@^0.4.4:
version "0.4.24"
resolved "https://registry.npmmirror.com/iconv-lite/-/iconv-lite-0.4.24.tgz"
integrity sha512-v3MXnZAcvnywkTUEZomIActle7RXXeedOR31wwl7VlyoXO4Qi9arvSenNQWne1TcRwhCL1HwLI21bEqdpj8/rA==
dependencies:
safer-buffer ">= 2.1.2 < 3"
image-size@~0.5.0:
version "0.5.5"
resolved "https://registry.npmmirror.com/image-size/-/image-size-0.5.5.tgz"
integrity sha512-6TDAlDPZxUFCv+fuOkIoXT/V/f3Qbq8e37p+YOiYrUv3v9cc3/6x78VdfPgFVaB9dZYeLUfKgHRebpkm/oP2VQ==
is-core-module@^2.8.1:
version "2.8.1"
resolved "https://registry.npmmirror.com/is-core-module/-/is-core-module-2.8.1.tgz"
integrity sha512-SdNCUs284hr40hFTFP6l0IfZ/RSrMXF3qgoRHd3/79unUTvrFO/JoXwkGm+5J/Oe3E/b5GsnG330uUNgRpu1PA==
dependencies:
has "^1.0.3"
is-plain-object@3.0.1:
version "3.0.1"
resolved "https://registry.npmmirror.com/is-plain-object/-/is-plain-object-3.0.1.tgz"
integrity sha512-Xnpx182SBMrr/aBik8y+GuR4U1L9FqMSojwDQwPMmxyC6bvEqly9UBCxhauBF5vNh2gwWJNX6oDV7O+OM4z34g==
is-what@^3.14.1:
version "3.14.1"
resolved "https://registry.npmmirror.com/is-what/-/is-what-3.14.1.tgz"
integrity sha512-sNxgpk9793nzSs7bA6JQJGeIuRBQhAaNGG77kzYQgMkrID+lS6SlK07K5LaptscDlSaIgH+GPFzf+d75FVxozA==
js-audio-recorder@0.5.7:
version "0.5.7"
resolved "https://registry.npmmirror.com/js-audio-recorder/-/js-audio-recorder-0.5.7.tgz"
integrity sha512-DIlv30N86AYHr7zGHN0O7V/3Rd8Q6SIJ/MBzVJaT9STWTdhF4E/8fxCX6ZMgRSv8xmx6fEqcFFNPoofmxJD4+A==
"js-tokens@^3.0.0 || ^4.0.0":
version "4.0.0"
resolved "https://registry.npmmirror.com/js-tokens/-/js-tokens-4.0.0.tgz"
integrity sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==
lamejs@^1.2.1:
version "1.2.1"
resolved "https://registry.npmmirror.com/lamejs/-/lamejs-1.2.1.tgz"
integrity sha512-s7bxvjvYthw6oPLCm5pFxvA84wUROODB8jEO2+CE1adhKgrIvVOlmMgY8zyugxGrvRaDHNJanOiS21/emty6dQ==
dependencies:
use-strict "1.0.1"
less@^4.1.2:
version "4.1.2"
resolved "https://registry.npmmirror.com/less/-/less-4.1.2.tgz"
integrity sha512-EoQp/Et7OSOVu0aJknJOtlXZsnr8XE8KwuzTHOLeVSEx8pVWUICc8Q0VYRHgzyjX78nMEyC/oztWFbgyhtNfDA==
dependencies:
copy-anything "^2.0.1"
parse-node-version "^1.0.1"
tslib "^2.3.0"
optionalDependencies:
errno "^0.1.1"
graceful-fs "^4.1.2"
image-size "~0.5.0"
make-dir "^2.1.0"
mime "^1.4.1"
needle "^2.5.2"
source-map "~0.6.0"
lodash-es@^4.17.15, lodash-es@^4.17.21:
version "4.17.21"
resolved "https://registry.npmmirror.com/lodash-es/-/lodash-es-4.17.21.tgz"
integrity sha512-mKnC+QJ9pWVzv+C4/U3rRsHapFfHvQFoFB92e52xeyGMcX6/OlIl78je1u8vePzYZSkkogMPJ2yjxxsb89cxyw==
lodash-unified@^1.0.2:
version "1.0.2"
resolved "https://registry.npmmirror.com/lodash-unified/-/lodash-unified-1.0.2.tgz"
integrity sha512-OGbEy+1P+UT26CYi4opY4gebD8cWRDxAT6MAObIVQMiqYdxZr1g3QHWCToVsm31x2NkLS4K3+MC2qInaRMa39g==
lodash@^4.17.21:
version "4.17.21"
resolved "https://registry.npmmirror.com/lodash/-/lodash-4.17.21.tgz"
integrity sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==
loose-envify@^1.0.0:
version "1.4.0"
resolved "https://registry.npmmirror.com/loose-envify/-/loose-envify-1.4.0.tgz"
integrity sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q==
dependencies:
js-tokens "^3.0.0 || ^4.0.0"
magic-string@^0.25.7:
version "0.25.9"
resolved "https://registry.npmmirror.com/magic-string/-/magic-string-0.25.9.tgz"
integrity sha512-RmF0AsMzgt25qzqqLc1+MbHmhdx0ojF2Fvs4XnOqz2ZOBXzzkEwc/dJQZCYHAn7v1jbVOjAZfK8msRn4BxO4VQ==
dependencies:
sourcemap-codec "^1.4.8"
make-dir@^2.1.0:
version "2.1.0"
resolved "https://registry.npmmirror.com/make-dir/-/make-dir-2.1.0.tgz"
integrity sha512-LS9X+dc8KLxXCb8dni79fLIIUA5VyZoyjSMCwTluaXA0o27cCK0bhXkpgw+sTXVpPy/lSO57ilRixqk0vDmtRA==
dependencies:
pify "^4.0.1"
semver "^5.6.0"
memoize-one@^6.0.0:
version "6.0.0"
resolved "https://registry.npmmirror.com/memoize-one/-/memoize-one-6.0.0.tgz"
integrity sha512-rkpe71W0N0c0Xz6QD0eJETuWAJGnJ9afsl1srmwPrI+yBCkge5EycXXbYRyvL29zZVUWQCY7InPRCv3GDXuZNw==
mime@^1.4.1:
version "1.6.0"
resolved "https://registry.npmmirror.com/mime/-/mime-1.6.0.tgz"
integrity sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==
moment@^2.27.0:
version "2.29.3"
resolved "https://registry.npmmirror.com/moment/-/moment-2.29.3.tgz"
integrity sha512-c6YRvhEo//6T2Jz/vVtYzqBzwvPT95JBQ+smCytzf7c50oMZRsR/a4w88aD34I+/QVSfnoAnSBFPJHItlOMJVw==
ms@^2.1.1:
version "2.1.3"
resolved "https://registry.npmmirror.com/ms/-/ms-2.1.3.tgz"
integrity sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==
nanoid@^3.3.1:
version "3.3.2"
resolved "https://registry.npmmirror.com/nanoid/-/nanoid-3.3.2.tgz"
integrity sha512-CuHBogktKwpm5g2sRgv83jEy2ijFzBwMoYA60orPDR7ynsLijJDqgsi4RDGj3OJpy3Ieb+LYwiRmIOGyytgITA==
nanopop@^2.1.0:
version "2.1.0"
resolved "https://registry.npmmirror.com/nanopop/-/nanopop-2.1.0.tgz"
integrity sha512-jGTwpFRexSH+fxappnGQtN9dspgE2ipa1aOjtR24igG0pv6JCxImIAmrLRHX+zUF5+1wtsFVbKyfP51kIGAVNw==
needle@^2.5.2:
version "2.9.1"
resolved "https://registry.npmmirror.com/needle/-/needle-2.9.1.tgz"
integrity sha512-6R9fqJ5Zcmf+uYaFgdIHmLwNldn5HbK8L5ybn7Uz+ylX/rnOsSp1AHcvQSrCaFN+qNM1wpymHqD7mVasEOlHGQ==
dependencies:
debug "^3.2.6"
iconv-lite "^0.4.4"
sax "^1.2.4"
normalize-wheel-es@^1.1.2:
version "1.1.2"
resolved "https://registry.npmmirror.com/normalize-wheel-es/-/normalize-wheel-es-1.1.2.tgz"
integrity sha512-scX83plWJXYH1J4+BhAuIHadROzxX0UBF3+HuZNY2Ks8BciE7tSTQ+5JhTsvzjaO0/EJdm4JBGrfObKxFf3Png==
omit.js@^2.0.0:
version "2.0.2"
resolved "https://registry.npmmirror.com/omit.js/-/omit.js-2.0.2.tgz"
integrity sha512-hJmu9D+bNB40YpL9jYebQl4lsTW6yEHRTroJzNLqQJYHm7c+NQnJGfZmIWh8S3q3KoaxV1aLhV6B3+0N0/kyJg==
parse-node-version@^1.0.1:
version "1.0.1"
resolved "https://registry.npmmirror.com/parse-node-version/-/parse-node-version-1.0.1.tgz"
integrity sha512-3YHlOa/JgH6Mnpr05jP9eDG254US9ek25LyIxZlDItp2iJtwyaXQb57lBYLdT3MowkUFYEV2XXNAYIPlESvJlA==
path-parse@^1.0.7:
version "1.0.7"
resolved "https://registry.npmmirror.com/path-parse/-/path-parse-1.0.7.tgz"
integrity sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==
picocolors@^1.0.0:
version "1.0.0"
resolved "https://registry.npmmirror.com/picocolors/-/picocolors-1.0.0.tgz"
integrity sha512-1fygroTLlHu66zi26VoTDv8yRgm0Fccecssto+MhsZ0D/DGW2sm8E8AjW7NU5VVTRt5GxbeZ5qBuJr+HyLYkjQ==
pify@^4.0.1:
version "4.0.1"
resolved "https://registry.npmmirror.com/pify/-/pify-4.0.1.tgz"
integrity sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g==
postcss@^8.1.10, postcss@^8.4.12:
version "8.4.12"
resolved "https://registry.npmmirror.com/postcss/-/postcss-8.4.12.tgz"
integrity sha512-lg6eITwYe9v6Hr5CncVbK70SoioNQIq81nsaG86ev5hAidQvmOeETBqs7jm43K2F5/Ley3ytDtriImV6TpNiSg==
dependencies:
nanoid "^3.3.1"
picocolors "^1.0.0"
source-map-js "^1.0.2"
prr@~1.0.1:
version "1.0.1"
resolved "https://registry.npmmirror.com/prr/-/prr-1.0.1.tgz"
integrity sha512-yPw4Sng1gWghHQWj0B3ZggWUm4qVbPwPFcRG8KyxiU7J2OHFSoEHKS+EZ3fv5l1t9CyCiop6l/ZYeWbrgoQejw==
regenerator-runtime@^0.13.4:
version "0.13.9"
resolved "https://registry.npmmirror.com/regenerator-runtime/-/regenerator-runtime-0.13.9.tgz"
integrity sha512-p3VT+cOEgxFsRRA9X4lkI1E+k2/CtnKtU4gcxyaCUreilL/vqI6CdZ3wxVUx3UOUg+gnUOQQcRI7BmSI656MYA==
resize-observer-polyfill@^1.5.1:
version "1.5.1"
resolved "https://registry.npmmirror.com/resize-observer-polyfill/-/resize-observer-polyfill-1.5.1.tgz"
integrity sha512-LwZrotdHOo12nQuZlHEmtuXdqGoOD0OhaxopaNFxWzInpEgaLWoVuAMbTzixuosCx2nEG58ngzW3vxdWoxIgdg==
resolve@^1.22.0:
version "1.22.0"
resolved "https://registry.npmmirror.com/resolve/-/resolve-1.22.0.tgz"
integrity sha512-Hhtrw0nLeSrFQ7phPp4OOcVjLPIeMnRlr5mcnVuMe7M/7eBn98A3hmFRLoFo3DLZkivSYwhRUJTyPyWAk56WLw==
dependencies:
is-core-module "^2.8.1"
path-parse "^1.0.7"
supports-preserve-symlinks-flag "^1.0.0"
rollup@^2.59.0:
version "2.70.1"
resolved "https://registry.npmmirror.com/rollup/-/rollup-2.70.1.tgz"
integrity sha512-CRYsI5EuzLbXdxC6RnYhOuRdtz4bhejPMSWjsFLfVM/7w/85n2szZv6yExqUXsBdz5KT8eoubeyDUDjhLHEslA==
optionalDependencies:
fsevents "~2.3.2"
"safer-buffer@>= 2.1.2 < 3":
version "2.1.2"
resolved "https://registry.npmmirror.com/safer-buffer/-/safer-buffer-2.1.2.tgz"
integrity sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==
sax@^1.2.4:
version "1.2.4"
resolved "https://registry.npmmirror.com/sax/-/sax-1.2.4.tgz"
integrity sha512-NqVDv9TpANUjFm0N8uM5GxL36UgKi9/atZw+x7YFnQ8ckwFGKrl4xX4yWtrey3UJm5nP1kUbnYgLopqWNSRhWw==
scroll-into-view-if-needed@^2.2.25:
version "2.2.29"
resolved "https://registry.npmmirror.com/scroll-into-view-if-needed/-/scroll-into-view-if-needed-2.2.29.tgz"
integrity sha512-hxpAR6AN+Gh53AdAimHM6C8oTN1ppwVZITihix+WqalywBeFcQ6LdQP5ABNl26nX8GTEL7VT+b8lKpdqq65wXg==
dependencies:
compute-scroll-into-view "^1.0.17"
semver@^5.6.0:
version "5.7.1"
resolved "https://registry.npmmirror.com/semver/-/semver-5.7.1.tgz"
integrity sha512-sauaDf/PZdVgrLTNYHRtpXa1iRiKcaebiKQ1BJdpQlWH2lCvexQdX55snPFyK7QzpudqbCI0qXFfOasHdyNDGQ==
shallow-equal@^1.0.0:
version "1.2.1"
resolved "https://registry.npmmirror.com/shallow-equal/-/shallow-equal-1.2.1.tgz"
integrity sha512-S4vJDjHHMBaiZuT9NPb616CSmLf618jawtv3sufLl6ivK8WocjAo58cXwbRV1cgqxH0Qbv+iUt6m05eqEa2IRA==
source-map-js@^1.0.2:
version "1.0.2"
resolved "https://registry.npmmirror.com/source-map-js/-/source-map-js-1.0.2.tgz"
integrity sha512-R0XvVJ9WusLiqTCEiGCmICCMplcCkIwwR11mOSD9CR5u+IXYdiseeEuXCVAjS54zqwkLcPNnmU4OeJ6tUrWhDw==
source-map@^0.6.1, source-map@~0.6.0:
version "0.6.1"
resolved "https://registry.npmmirror.com/source-map/-/source-map-0.6.1.tgz"
integrity sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==
sourcemap-codec@^1.4.8:
version "1.4.8"
resolved "https://registry.npmmirror.com/sourcemap-codec/-/sourcemap-codec-1.4.8.tgz"
integrity sha512-9NykojV5Uih4lgo5So5dtw+f0JgJX30KCNI8gwhz2J9A15wD0Ml6tjHKwf6fTSa6fAdVBdZeNOs9eJ71qCk8vA==
supports-preserve-symlinks-flag@^1.0.0:
version "1.0.0"
resolved "https://registry.npmmirror.com/supports-preserve-symlinks-flag/-/supports-preserve-symlinks-flag-1.0.0.tgz"
integrity sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w==
tslib@^2.3.0:
version "2.4.0"
resolved "https://registry.npmmirror.com/tslib/-/tslib-2.4.0.tgz"
integrity sha512-d6xOpEDfsi2CZVlPQzGeux8XMwLT9hssAsaPYExaQMuYskwb+x1x7J371tWlbBdWHroy99KnVB6qIkUbs5X3UQ==
use-strict@1.0.1:
version "1.0.1"
resolved "https://registry.npmmirror.com/use-strict/-/use-strict-1.0.1.tgz"
integrity sha512-IeiWvvEXfW5ltKVMkxq6FvNf2LojMKvB2OCeja6+ct24S1XOmQw2dGr2JyndwACWAGJva9B7yPHwAmeA9QCqAQ==
vite@^2.9.0:
version "2.9.1"
resolved "https://registry.npmmirror.com/vite/-/vite-2.9.1.tgz"
integrity sha512-vSlsSdOYGcYEJfkQ/NeLXgnRv5zZfpAsdztkIrs7AZHV8RCMZQkwjo4DS5BnrYTqoWqLoUe1Cah4aVO4oNNqCQ==
dependencies:
esbuild "^0.14.27"
postcss "^8.4.12"
resolve "^1.22.0"
rollup "^2.59.0"
optionalDependencies:
fsevents "~2.3.2"
vue-demi@*:
version "0.12.5"
resolved "https://registry.npmmirror.com/vue-demi/-/vue-demi-0.12.5.tgz"
integrity sha512-BREuTgTYlUr0zw0EZn3hnhC3I6gPWv+Kwh4MCih6QcAeaTlaIX0DwOVN0wHej7hSvDPecz4jygy/idsgKfW58Q==
vue-types@^3.0.0:
version "3.0.2"
resolved "https://registry.npmmirror.com/vue-types/-/vue-types-3.0.2.tgz"
integrity sha512-IwUC0Aq2zwaXqy74h4WCvFCUtoV0iSWr0snWnE9TnU18S66GAQyqQbRf2qfJtUuiFsBf6qp0MEwdonlwznlcrw==
dependencies:
is-plain-object "3.0.1"
vue@^3.2.25:
version "3.2.32"
resolved "https://registry.npmmirror.com/vue/-/vue-3.2.32.tgz"
integrity sha512-6L3jKZApF042OgbCkh+HcFeAkiYi3Lovi8wNhWqIK98Pi5efAMLZzRHgi91v+60oIRxdJsGS9sTMsb+yDpY8Eg==
dependencies:
"@vue/compiler-dom" "3.2.32"
"@vue/compiler-sfc" "3.2.32"
"@vue/runtime-dom" "3.2.32"
"@vue/server-renderer" "3.2.32"
"@vue/shared" "3.2.32"
warning@^4.0.0:
version "4.0.3"
resolved "https://registry.npmmirror.com/warning/-/warning-4.0.3.tgz"
integrity sha512-rpJyN222KWIvHJ/F53XSZv0Zl/accqHR8et1kpaMTD/fLCRxtV8iX8czMzY7sVZupTI3zcUTg8eycS2kNF9l6w==
dependencies:
loose-envify "^1.0.0"

@ -0,0 +1,406 @@
# API Reference
After starting the service, see:
http://0.0.0.0:8010/docs
## ASR
### 【POST】/asr/offline
Description: upload a 16 kHz, 16-bit wav file; returns the recognition result of the offline ASR model
Returns: JSON
Frontend features: ASR - end-to-end recognition, audio file recognition; voice command - recording upload
Example:
```json
{
"code": 0,
"result": "你也喜欢这个天气吗",
"message": "ok"
}
```
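A minimal Python sketch of calling this endpoint. The multipart field name `audio` and the local file name are assumptions (mirroring the `/vpr` curl examples below); check http://0.0.0.0:8010/docs for the exact parameter name:
```python
import requests

# Hypothetical client call: the form field name "audio" is an assumption,
# not confirmed by this document.
with open("demo_16k.wav", "rb") as f:  # 16 kHz, 16-bit mono wav
    resp = requests.post(
        "http://0.0.0.0:8010/asr/offline",
        files={"audio": ("demo_16k.wav", f, "audio/wav")},
    )
print(resp.json())  # {"code": 0, "result": "...", "message": "ok"}
```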
### 【POST】/asr/offlinefile
Description: upload a 16 kHz, 16-bit wav file; returns the offline ASR recognition result plus the wav data as base64
Returns: JSON
Frontend feature: audio file recognition (to play the decoded base64 data, remember to prepend a wav header first: 16 kHz sample rate, int16; it is only playable once the header is added)
Example:
```json
{
"code": 0,
"result": {
"asr_result": "今天天气真好",
"wav_base64": "///+//3//f/8/////v/////////////////+/wAA//8AAAEAAQACAAIAAQABAP"
},
"message": "ok"
}
```
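A sketch of turning the `wav_base64` field back into a playable file, assuming the payload is raw 16 kHz int16 PCM as described above:
```python
import base64
import wave

def pcm_base64_to_wav(b64: str, path: str = "out.wav") -> None:
    """Decode base64 PCM and prepend a wav header (mono, int16, 16 kHz)."""
    pcm = base64.b64decode(b64)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # int16 = 2 bytes per sample
        w.setframerate(16000)  # 16 kHz
        w.writeframes(pcm)

# pcm_base64_to_wav(resp.json()["result"]["wav_base64"])
```
The same helper works for /vpr/database64 below, which returns audio in the same headerless format.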
### 【POST】/asr/collectEnv
Description: upload a 16 kHz, int16 wav file of sampled ambient noise; the backend uses it to derive the VAD energy threshold and returns the threshold
Frontend feature: ASR - environment sampling
Returns: JSON
```json
{
"code": 0,
"result": 3624.93505859375,
"message": "采集环境噪音成功"
}
```
### 【GET】/asr/stopRecord
Description: a GET request to /asr/stopRecord makes the backend stop consuming the data uploaded over the WS protocol on offlineStream
Frontend feature: voice chat - pause recording (paused while fetching the NLP reply and playing TTS)
Returns: JSON
```json
{
"code": 0,
"result": null,
"message": "停止成功"
}
```
### 【GET】/asr/resumeRecord
Description: a GET request to /asr/resumeRecord makes the backend resume consuming the data uploaded over the WS protocol on offlineStream
Frontend feature: voice chat - resume recording (tells the backend to resume recording once TTS playback finishes)
Returns: JSON
```json
{
"code": 0,
"result": null,
"message": "Online录音恢复"
}
```
### 【Websocket】/ws/asr/offlineStream
Description: the frontend captures 16 kHz int16 PCM chunks and continuously uploads them to the backend over the WS protocol
Frontend feature: voice chat - start recording; microphone audio is streamed to the backend, which pushes recognition results back
Returns: recognition results (offline model) pushed by the backend over WS
### 【Websocket】/ws/asr/onlineStream
Description: the frontend captures 16 kHz int16 PCM chunks and continuously uploads them to the backend over the WS protocol (see the client sketch below)
Frontend feature: ASR - streaming recognition; start recording, stream microphone audio to the backend, and the backend pushes recognition results back
Returns: recognition results (online model) pushed by the backend over WS
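A minimal Python client sketch for either stream endpoint (swap the URL path). It assumes the server accepts raw int16 PCM as binary WS frames and pushes results back as text messages; the chunk size and pacing are illustrative:
```python
import asyncio
import wave

import websockets  # pip install websockets

async def stream_wav(path: str,
                     url: str = "ws://localhost:8010/ws/asr/onlineStream"):
    """Stream a 16 kHz int16 wav file as raw PCM chunks; print pushed results."""
    async with websockets.connect(url) as ws:
        with wave.open(path, "rb") as w:
            while chunk := w.readframes(1600):  # ~100 ms of 16 kHz audio
                await ws.send(chunk)            # binary frame: raw int16 PCM
                await asyncio.sleep(0.1)        # pace like a live microphone
        try:  # drain whatever recognition results the backend has pushed
            while True:
                print(await asyncio.wait_for(ws.recv(), timeout=2))
        except asyncio.TimeoutError:
            pass

# asyncio.run(stream_wav("demo_16k.wav"))
```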
## NLP
### 【POST】/nlp/chat
Description: returns the chit-chat reply
Frontend feature: voice chat - after obtaining the ASR result, fetch the chit-chat reply text from the backend
Request example:
```json
{
"chat": "天气非常棒"
}
```
Response example:
```json
{
"code": 0,
"result": "是的,我也挺喜欢的",
"message": "ok"
}
```
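A sketch of the same round trip in Python, using the request body documented above:
```python
import requests

resp = requests.post(
    "http://localhost:8010/nlp/chat",
    json={"chat": "天气非常棒"},
)
print(resp.json()["result"])  # e.g. "是的,我也挺喜欢的"
```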
### 【POST】/nlp/ie
Description: returns the information extraction result
Frontend feature: voice command - fetch the information extraction result from the backend
Request example:
```json
{
"chat": "今天我从马来西亚出发去香港花了五十万元"
}
```
Response example:
```json
{
"code": 0,
"result": [
{
"时间": [
{
"text": "今天",
"start": 0,
"end": 2,
"probability": 0.9817976247505698
}
],
"出发地": [
{
"text": "马来西亚",
"start": 4,
"end": 8,
"probability": 0.974892389414169
}
],
"目的地": [
{
"text": "马来西亚",
"start": 4,
"end": 8,
"probability": 0.7347504438136951
}
],
"费用": [
{
"text": "五十万元",
"start": 15,
"end": 19,
"probability": 0.9679076530644402
}
]
}
],
"message": "ok"
}
```
## TTS
### 【POST】/tts/offline
Description: fetch audio from the offline TTS model
Frontend feature: TTS - end-to-end synthesis
Request example:
```json
{
"text": "天气非常棒"
}
```
Response example: the base64 encoding of the corresponding audio
```json
{
"code": 0,
"result": "UklGRrzQAABXQVZFZm10IBAAAAABAAEAwF0AAIC7AAACABAAZGF0YZjQAAADAP7/BAADAAAA...",
"message": "ok"
}
```
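The `result` string decodes to a complete wav file (the `UklGR…` prefix is base64 for `RIFF`), so saving it is a sketch like:
```python
import base64

def save_tts_result(resp: dict, path: str = "tts.wav") -> None:
    """resp is the parsed JSON from /tts/offline; result is a full wav file."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(resp["result"]))
```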
### 【POST】/tts/online
Description: fetch synthesized speech audio as a stream
Frontend feature: streaming synthesis
Request example:
```json
{
"text": "天气非常棒"
}
```
Response example:
binary PCM chunks (16 kHz, int16)
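A sketch of consuming the stream, assuming the chunks arrive as the HTTP response body; the collected PCM has no wav header, so wrap it with the header logic from the /asr/offlinefile example before playback:
```python
import requests

# Assumption: the endpoint streams raw PCM in the HTTP response body.
pcm = bytearray()
with requests.post(
    "http://localhost:8010/tts/online",
    json={"text": "天气非常棒"},
    stream=True,
) as resp:
    for chunk in resp.iter_content(chunk_size=4096):
        pcm.extend(chunk)  # raw 16 kHz int16 PCM, no wav header
```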
## VPR
### 【POST】/vpr/enroll
Description: voiceprint enrollment; upload spk_id (a non-empty string) and audio (a file) as form data
Frontend feature: voiceprint recognition - enrollment
Request example:
```shell
curl -X 'POST' \
'http://0.0.0.0:8010/vpr/enroll' \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'spk_id=啦啦啦啦' \
-F 'audio=@demo_16k.wav;type=audio/wav'
```
Response example:
```json
{
"status": true,
"msg": "Successfully enroll data!"
}
```
### 【POST】/vpr/recog
Description: voiceprint recognition; extracts the voiceprint of the uploaded file and compares it against enrolled data; audio must be 16 kHz, int16 wav
Frontend feature: voiceprint recognition - upload audio and get the recognition result
Request example:
```shell
curl -X 'POST' \
'http://0.0.0.0:8010/vpr/recog' \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'audio=@demo_16k.wav;type=audio/wav'
```
Response example:
```json
[
[
"啦啦啦啦",
[
"",
100
]
],
[
"test1",
[
"",
11.64
]
],
[
"test2",
[
"",
6.09
]
]
]
```
### 【POST】/vpr/del
Description: delete a user's data by spk_id
Frontend feature: voiceprint recognition - delete user data
Request example:
```json
{
"spk_id":"啦啦啦啦"
}
```
Response example:
```json
{
"status": true,
"msg": "Successfully delete data!"
}
```
### 【GET】/vpr/list
Description: list enrolled users; takes no parameters; returns spk_id and vpr_id as two parallel arrays (see the pairing sketch below)
Frontend feature: voiceprint recognition - fetch the voiceprint data list
Response example:
```json
[
[
"test1",
"test2"
],
[
9,
10
]
]
```
### 【GET】/vpr/data
Description: fetch the audio used at enrollment, by vpr_id
Frontend feature: voiceprint recognition - fetch the audio for a vpr entry
Request example:
```shell
curl -X 'GET' \
'http://0.0.0.0:8010/vpr/data?vprId=9' \
-H 'accept: application/json'
```
Response example:
the corresponding audio file
### 【GET】/vpr/database64
Description: fetch the enrollment audio by vpr_id, converted to a 16 kHz, int16 array and returned base64-encoded
Frontend feature: voiceprint recognition - fetch the audio for a vpr entry (note: a wav header, 16 kHz, int16, must be added before playback; the approach is the same as adding a wav header for TTS playback, just mind the sample rate)
Request example:
```shell
curl -X 'GET' \
'http://localhost:8010/vpr/database64?vprId=12' \
-H 'accept: application/json'
```
Response example:
```json
{
"code": 0,
"result": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
"message": "ok"
}
```