From bf6451ed692b121d85c74b58cb16456f6183f814 Mon Sep 17 00:00:00 2001 From: Thomas Young <35565423+HexToString@users.noreply.github.com> Date: Tue, 18 Oct 2022 15:37:29 +0800 Subject: [PATCH] Add TTS fastdeploy serving (#2528) * add triton tts server * change readme * fix path bug * fix code style * fix code style and readme * Add files via upload --- .../README.md | 67 ++++ .../README_cn.md | 67 ++++ .../streaming_tts_serving/1/model.py | 289 ++++++++++++++++++ .../streaming_tts_serving/config.pbtxt | 33 ++ .../streaming_tts_serving/stream_client.py | 117 +++++++ .../streaming_tts_serving_fastdeploy/tree.png | Bin 0 -> 25004 bytes 6 files changed, 573 insertions(+) create mode 100644 demos/streaming_tts_serving_fastdeploy/README.md create mode 100644 demos/streaming_tts_serving_fastdeploy/README_cn.md create mode 100644 demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/1/model.py create mode 100644 demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/config.pbtxt create mode 100644 demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/stream_client.py create mode 100644 demos/streaming_tts_serving_fastdeploy/tree.png diff --git a/demos/streaming_tts_serving_fastdeploy/README.md b/demos/streaming_tts_serving_fastdeploy/README.md new file mode 100644 index 00000000..3e983a06 --- /dev/null +++ b/demos/streaming_tts_serving_fastdeploy/README.md @@ -0,0 +1,67 @@ +([简体中文](./README_cn.md)|English) + +# Streaming Speech Synthesis Service + +## Introduction +This demo is an implementation of starting the streaming speech synthesis service and accessing the service. + +`Server` must be started in the docker, while `Client` does not have to be in the docker. + +**The streaming_tts_serving under the path of this article ($PWD) contains the configuration and code of the model, which needs to be mapped to the docker for use.** + +## Usage +### 1. 
Server
+#### 1.1 Docker
+
+```bash
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
+docker run -dit --net=host --name fastdeploy --shm-size="1g" -v $PWD:/models registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
+docker exec -it -u root fastdeploy bash
+```
+
+#### 1.2 Installation (inside the docker)
+```bash
+apt-get install build-essential python3-dev libssl-dev libffi-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev libsndfile1 language-pack-zh-hans wget zip
+pip3 install paddlespeech
+export LC_ALL="zh_CN.UTF-8"
+export LANG="zh_CN.UTF-8"
+export LANGUAGE="zh_CN:zh:en_US:en"
+```
+
+#### 1.3 Download models (inside the docker)
+```bash
+cd /models/streaming_tts_serving/1
+wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
+wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/mb_melgan/mb_melgan_csmsc_onnx_0.2.0.zip
+unzip fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
+unzip mb_melgan_csmsc_onnx_0.2.0.zip
+```
+**For convenience, we recommend using the `-v` option of the `docker run` command in 1.1 to map `$PWD` (the `streaming_tts_serving` directory, together with the model configuration and code it contains) to the path `/models` inside the docker. You can also use other methods, but whichever you choose, the final model directory and structure inside the docker must match the following figure.**
+
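+For reference, once the archives from 1.3 are unzipped, the resulting layout (directory names taken from this patch; the downloaded `.zip` files may also remain) is roughly:
+
+```
+/models
+└── streaming_tts_serving
+    ├── config.pbtxt
+    ├── stream_client.py
+    └── 1
+        ├── model.py
+        ├── fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/
+        └── mb_melgan_csmsc_onnx_0.2.0/
+```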

+ +

+ +#### 1.4 Start the server(inside the docker) + +```bash +fastdeployserver --model-repository=/models --model-control-mode=explicit --load-model=streaming_tts_serving +``` +Arguments: + - `model-repository`(required): Path of model storage. + - `model-control-mode`(required): The mode of loading the model. At present, you can use 'explicit'. + - `load-model`(required): Name of the model to be loaded. + - `http-port`(optional): Port for http service. Default: `8000`. This is not used in our example. + - `grpc-port`(optional): Port for grpc service. Default: `8001`. + - `metrics-port`(optional): Port for metrics service. Default: `8002`. This is not used in our example. + +### 2. Client +#### 2.1 Installation +```bash +pip3 install tritonclient[all] +``` + +#### 2.2 Send request +```bash +python3 /models/streaming_tts_serving/stream_client.py +``` diff --git a/demos/streaming_tts_serving_fastdeploy/README_cn.md b/demos/streaming_tts_serving_fastdeploy/README_cn.md new file mode 100644 index 00000000..7edd3283 --- /dev/null +++ b/demos/streaming_tts_serving_fastdeploy/README_cn.md @@ -0,0 +1,67 @@ +(简体中文|[English](./README.md)) + +# 流式语音合成服务 + +## 介绍 + +本文介绍了使用FastDeploy搭建流式语音合成服务的方法。 + +`服务端`必须在docker内启动,而`客户端`不是必须在docker容器内. + +**本文所在路径`($PWD)下的streaming_tts_serving里包含模型的配置和代码`(服务端会加载模型和代码以启动服务),需要将其映射到docker中使用。** + +## 使用 +### 1. 
服务端
+#### 1.1 Docker
+```bash
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
+docker run -dit --net=host --name fastdeploy --shm-size="1g" -v $PWD:/models registry.baidubce.com/paddlepaddle/fastdeploy_serving_cpu_only:22.09
+docker exec -it -u root fastdeploy bash
+```
+
+#### 1.2 安装(在docker内)
+```bash
+apt-get install build-essential python3-dev libssl-dev libffi-dev libxml2 libxml2-dev libxslt1-dev zlib1g-dev libsndfile1 language-pack-zh-hans wget zip
+pip3 install paddlespeech
+export LC_ALL="zh_CN.UTF-8"
+export LANG="zh_CN.UTF-8"
+export LANGUAGE="zh_CN:zh:en_US:en"
+```
+
+#### 1.3 下载模型(在docker内)
+```bash
+cd /models/streaming_tts_serving/1
+wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/fastspeech2/fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
+wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/mb_melgan/mb_melgan_csmsc_onnx_0.2.0.zip
+unzip fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0.zip
+unzip mb_melgan_csmsc_onnx_0.2.0.zip
+```
+**为了方便使用,我们推荐使用1.1中`docker run`命令的`-v`选项,将`$PWD`(即`streaming_tts_serving`目录及其中包含的模型配置和代码)映射到docker内的`/models`路径。用户也可以使用其他办法,但无论使用哪种方法,最终在docker内的模型目录及结构都应如下图所示。**
+
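+作为参考,完成1.3中的下载并解压后,最终的目录结构大致如下(目录名取自本补丁,下载的`.zip`文件可能仍保留在目录中):
+
+```
+/models
+└── streaming_tts_serving
+    ├── config.pbtxt
+    ├── stream_client.py
+    └── 1
+        ├── model.py
+        ├── fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/
+        └── mb_melgan_csmsc_onnx_0.2.0/
+```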

+ +

+ +#### 1.4 启动服务端(在docker内) +```bash +fastdeployserver --model-repository=/models --model-control-mode=explicit --load-model=streaming_tts_serving +``` + +参数: + - `model-repository`(required): 整套模型streaming_tts_serving存放的路径. + - `model-control-mode`(required): 模型加载的方式,现阶段, 使用'explicit'即可. + - `load-model`(required): 需要加载的模型的名称. + - `http-port`(optional): HTTP服务的端口号. 默认: `8000`. 本示例中未使用该端口. + - `grpc-port`(optional): GRPC服务的端口号. 默认: `8001`. + - `metrics-port`(optional): 服务端指标的端口号. 默认: `8002`. 本示例中未使用该端口. + +### 2. 客户端 +#### 2.1 安装 +```bash +pip3 install tritonclient[all] +``` + +#### 2.2 发送请求 +```bash +python3 /models/streaming_tts_serving/stream_client.py +``` diff --git a/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/1/model.py b/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/1/model.py new file mode 100644 index 00000000..46473fdb --- /dev/null +++ b/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/1/model.py @@ -0,0 +1,289 @@ +import codecs +import json +import math +import sys +import threading +import time + +import numpy as np +import onnxruntime as ort +import triton_python_backend_utils as pb_utils + +from paddlespeech.server.utils.util import denorm +from paddlespeech.server.utils.util import get_chunks +from paddlespeech.t2s.frontend.zh_frontend import Frontend + +voc_block = 36 +voc_pad = 14 +am_block = 72 +am_pad = 12 +voc_upsample = 300 + +# 模型路径 +dir_name = "/models/streaming_tts_serving/1/" +phones_dict = dir_name + "fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/phone_id_map.txt" +am_stat_path = dir_name + "fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/speech_stats.npy" + +onnx_am_encoder = dir_name + "fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/fastspeech2_csmsc_am_encoder_infer.onnx" +onnx_am_decoder = dir_name + "fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/fastspeech2_csmsc_am_decoder.onnx" +onnx_am_postnet = dir_name + 
"fastspeech2_cnndecoder_csmsc_streaming_onnx_1.0.0/fastspeech2_csmsc_am_postnet.onnx" +onnx_voc_melgan = dir_name + "mb_melgan_csmsc_onnx_0.2.0/mb_melgan_csmsc.onnx" + +frontend = Frontend(phone_vocab_path=phones_dict, tone_vocab_path=None) +am_mu, am_std = np.load(am_stat_path) + +# 用CPU推理 +providers = ['CPUExecutionProvider'] + +# 配置ort session +sess_options = ort.SessionOptions() + +# 创建session +am_encoder_infer_sess = ort.InferenceSession( + onnx_am_encoder, providers=providers, sess_options=sess_options) +am_decoder_sess = ort.InferenceSession( + onnx_am_decoder, providers=providers, sess_options=sess_options) +am_postnet_sess = ort.InferenceSession( + onnx_am_postnet, providers=providers, sess_options=sess_options) +voc_melgan_sess = ort.InferenceSession( + onnx_voc_melgan, providers=providers, sess_options=sess_options) + + +def depadding(data, chunk_num, chunk_id, block, pad, upsample): + """ + Streaming inference removes the result of pad inference + """ + front_pad = min(chunk_id * block, pad) + # first chunk + if chunk_id == 0: + data = data[:block * upsample] + # last chunk + elif chunk_id == chunk_num - 1: + data = data[front_pad * upsample:] + # middle chunk + else: + data = data[front_pad * upsample:(front_pad + block) * upsample] + + return data + + +class TritonPythonModel: + """Your Python model must use the same class name. Every Python model + that is created must have "TritonPythonModel" as the class name. + """ + + def initialize(self, args): + """`initialize` is called only once when the model is being loaded. + Implementing `initialize` function is optional. This function allows + the model to intialize any state associated with this model. + Parameters + ---------- + args : dict + Both keys and values are strings. 
The dictionary keys and values are: + * model_config: A JSON string containing the model configuration + * model_instance_kind: A string containing model instance kind + * model_instance_device_id: A string containing model instance device ID + * model_repository: Model repository path + * model_version: Model version + * model_name: Model name + """ + sys.stdout = codecs.getwriter("utf-8")(sys.stdout.detach()) + print(sys.getdefaultencoding()) + # You must parse model_config. JSON string is not parsed here + self.model_config = model_config = json.loads(args['model_config']) + print("model_config:", self.model_config) + + using_decoupled = pb_utils.using_decoupled_model_transaction_policy( + model_config) + + if not using_decoupled: + raise pb_utils.TritonModelException( + """the model `{}` can generate any number of responses per request, + enable decoupled transaction policy in model configuration to + serve this model""".format(args['model_name'])) + + self.input_names = [] + for input_config in self.model_config["input"]: + self.input_names.append(input_config["name"]) + print("input:", self.input_names) + + self.output_names = [] + self.output_dtype = [] + for output_config in self.model_config["output"]: + self.output_names.append(output_config["name"]) + dtype = pb_utils.triton_string_to_numpy(output_config["data_type"]) + self.output_dtype.append(dtype) + print("output:", self.output_names) + + # To keep track of response threads so that we can delay + # the finalizing the model until all response threads + # have completed. + self.inflight_thread_count = 0 + self.inflight_thread_count_lck = threading.Lock() + + def execute(self, requests): + """`execute` must be implemented in every Python model. `execute` + function receives a list of pb_utils.InferenceRequest as the only + argument. This function is called when an inference is requested + for this model. Depending on the batching configuration (e.g. 
Dynamic + Batching) used, `requests` may contain multiple requests. Every + Python model, must create one pb_utils.InferenceResponse for every + pb_utils.InferenceRequest in `requests`. If there is an error, you can + set the error argument when creating a pb_utils.InferenceResponse. + Parameters + ---------- + requests : list + A list of pb_utils.InferenceRequest + Returns + ------- + list + A list of pb_utils.InferenceResponse. The length of this list must + be the same as `requests` + """ + + # This model does not support batching, so 'request_count' should always + # be 1. + if len(requests) != 1: + raise pb_utils.TritonModelException("unsupported batch size " + len( + requests)) + + input_data = [] + for idx in range(len(self.input_names)): + data = pb_utils.get_input_tensor_by_name(requests[0], + self.input_names[idx]) + data = data.as_numpy() + data = data[0].decode('utf-8') + input_data.append(data) + text = input_data[0] + + # Start a separate thread to send the responses for the request. The + # sending back the responses is delegated to this thread. + thread = threading.Thread( + target=self.response_thread, + args=(requests[0].get_response_sender(), text)) + thread.daemon = True + with self.inflight_thread_count_lck: + self.inflight_thread_count += 1 + + thread.start() + # Unlike in non-decoupled model transaction policy, execute function + # here returns no response. A return from this function only notifies + # Triton that the model instance is ready to receive another request. As + # we are not waiting for the response thread to complete here, it is + # possible that at any give time the model may be processing multiple + # requests. Depending upon the request workload, this may lead to a lot + # of requests being processed by a single model instance at a time. In + # real-world models, the developer should be mindful of when to return + # from execute and be willing to accept next request. 
+ return None + + def response_thread(self, response_sender, text): + input_ids = frontend.get_input_ids( + text, merge_sentences=False, get_tone_ids=False) + phone_ids = input_ids["phone_ids"] + for i in range(len(phone_ids)): + part_phone_ids = phone_ids[i].numpy() + voc_chunk_id = 0 + + orig_hs = am_encoder_infer_sess.run( + None, input_feed={'text': part_phone_ids}) + orig_hs = orig_hs[0] + + # streaming voc chunk info + mel_len = orig_hs.shape[1] + voc_chunk_num = math.ceil(mel_len / voc_block) + start = 0 + end = min(voc_block + voc_pad, mel_len) + + # streaming am + hss = get_chunks(orig_hs, am_block, am_pad, "am") + am_chunk_num = len(hss) + for i, hs in enumerate(hss): + am_decoder_output = am_decoder_sess.run( + None, input_feed={'xs': hs}) + am_postnet_output = am_postnet_sess.run( + None, + input_feed={ + 'xs': np.transpose(am_decoder_output[0], (0, 2, 1)) + }) + am_output_data = am_decoder_output + np.transpose( + am_postnet_output[0], (0, 2, 1)) + normalized_mel = am_output_data[0][0] + + sub_mel = denorm(normalized_mel, am_mu, am_std) + sub_mel = depadding(sub_mel, am_chunk_num, i, am_block, am_pad, + 1) + + if i == 0: + mel_streaming = sub_mel + else: + mel_streaming = np.concatenate( + (mel_streaming, sub_mel), axis=0) + + # streaming voc + # 当流式AM推理的mel帧数大于流式voc推理的chunk size,开始进行流式voc 推理 + while (mel_streaming.shape[0] >= end and + voc_chunk_id < voc_chunk_num): + voc_chunk = mel_streaming[start:end, :] + + sub_wav = voc_melgan_sess.run( + output_names=None, input_feed={'logmel': voc_chunk}) + sub_wav = depadding(sub_wav[0], voc_chunk_num, voc_chunk_id, + voc_block, voc_pad, voc_upsample) + + output_np = np.array(sub_wav, dtype=self.output_dtype[0]) + out_tensor1 = pb_utils.Tensor(self.output_names[0], + output_np) + + status = 0 if voc_chunk_id != (voc_chunk_num - 1) else 1 + output_status = np.array( + [status], dtype=self.output_dtype[1]) + out_tensor2 = pb_utils.Tensor(self.output_names[1], + output_status) + + inference_response = 
pb_utils.InferenceResponse( + output_tensors=[out_tensor1, out_tensor2]) + + #yield sub_wav + response_sender.send(inference_response) + + voc_chunk_id += 1 + start = max(0, voc_chunk_id * voc_block - voc_pad) + end = min((voc_chunk_id + 1) * voc_block + voc_pad, mel_len) + + # We must close the response sender to indicate to Triton that we are + # done sending responses for the corresponding request. We can't use the + # response sender after closing it. The response sender is closed by + # setting the TRITONSERVER_RESPONSE_COMPLETE_FINAL. + response_sender.send( + flags=pb_utils.TRITONSERVER_RESPONSE_COMPLETE_FINAL) + + with self.inflight_thread_count_lck: + self.inflight_thread_count -= 1 + + def finalize(self): + """`finalize` is called only once when the model is being unloaded. + Implementing `finalize` function is OPTIONAL. This function allows + the model to perform any necessary clean ups before exit. + Here we will wait for all response threads to complete sending + responses. + """ + print('Finalize invoked') + + inflight_threads = True + cycles = 0 + logging_time_sec = 5 + sleep_time_sec = 0.1 + cycle_to_log = (logging_time_sec / sleep_time_sec) + while inflight_threads: + with self.inflight_thread_count_lck: + inflight_threads = (self.inflight_thread_count != 0) + if (cycles % cycle_to_log == 0): + print( + f"Waiting for {self.inflight_thread_count} response threads to complete..." 
+ ) + if inflight_threads: + time.sleep(sleep_time_sec) + cycles += 1 + + print('Finalize complete...') diff --git a/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/config.pbtxt b/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/config.pbtxt new file mode 100644 index 00000000..e63721d1 --- /dev/null +++ b/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/config.pbtxt @@ -0,0 +1,33 @@ +name: "streaming_tts_serving" +backend: "python" +max_batch_size: 0 +model_transaction_policy { + decoupled: True +} +input [ + { + name: "INPUT_0" + data_type: TYPE_STRING + dims: [ 1 ] + } +] + +output [ + { + name: "OUTPUT_0" + data_type: TYPE_FP32 + dims: [ -1, 1 ] + }, + { + name: "status" + data_type: TYPE_BOOL + dims: [ 1 ] + } +] + +instance_group [ + { + count: 1 + kind: KIND_CPU + } +] diff --git a/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/stream_client.py b/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/stream_client.py new file mode 100644 index 00000000..e7f120b7 --- /dev/null +++ b/demos/streaming_tts_serving_fastdeploy/streaming_tts_serving/stream_client.py @@ -0,0 +1,117 @@ +#!/usr/bin/env python +import argparse +import queue +import sys +from functools import partial + +import numpy as np +import tritonclient.grpc as grpcclient +from tritonclient.utils import * + +FLAGS = None + + +class UserData: + def __init__(self): + self._completed_requests = queue.Queue() + + +# Define the callback function. Note the last two parameters should be +# result and error. InferenceServerClient would povide the results of an +# inference as grpcclient.InferResult in result. 
For successful +# inference, error will be None, otherwise it will be an object of +# tritonclientutils.InferenceServerException holding the error details +def callback(user_data, result, error): + if error: + user_data._completed_requests.put(error) + else: + user_data._completed_requests.put(result) + + +def async_stream_send(triton_client, values, request_id, model_name): + + infer_inputs = [] + outputs = [] + for idx, data in enumerate(values): + data = np.array([data.encode('utf-8')], dtype=np.object_) + infer_input = grpcclient.InferInput('INPUT_0', [len(data)], "BYTES") + infer_input.set_data_from_numpy(data) + infer_inputs.append(infer_input) + + outputs.append(grpcclient.InferRequestedOutput('OUTPUT_0')) + # Issue the asynchronous sequence inference. + triton_client.async_stream_infer( + model_name=model_name, + inputs=infer_inputs, + outputs=outputs, + request_id=request_id) + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + parser.add_argument( + '-v', + '--verbose', + action="store_true", + required=False, + default=False, + help='Enable verbose output') + parser.add_argument( + '-u', + '--url', + type=str, + required=False, + default='localhost:8001', + help='Inference server URL and it gRPC port. Default is localhost:8001.') + + FLAGS = parser.parse_args() + + # We use custom "sequence" models which take 1 input + # value. The output is the accumulated value of the inputs. See + # src/custom/sequence. + model_name = "streaming_tts_serving" + + values = ["哈哈哈哈"] + + request_id = "0" + + string_result0_list = [] + + user_data = UserData() + + # It is advisable to use client object within with..as clause + # when sending streaming requests. This ensures the client + # is closed when the block inside with exits. 
+ with grpcclient.InferenceServerClient( + url=FLAGS.url, verbose=FLAGS.verbose) as triton_client: + try: + # Establish stream + triton_client.start_stream(callback=partial(callback, user_data)) + # Now send the inference sequences... + async_stream_send(triton_client, values, request_id, model_name) + except InferenceServerException as error: + print(error) + sys.exit(1) + + # Retrieve results... + recv_count = 0 + result_dict = {} + status = True + while True: + data_item = user_data._completed_requests.get() + if type(data_item) == InferenceServerException: + raise data_item + else: + this_id = data_item.get_response().id + if this_id not in result_dict.keys(): + result_dict[this_id] = [] + result_dict[this_id].append((recv_count, data_item)) + sub_wav = data_item.as_numpy('OUTPUT_0') + status = data_item.as_numpy('status') + print('sub_wav = ', sub_wav, "subwav.shape = ", sub_wav.shape) + print('status = ', status) + if status[0] == 1: + break + recv_count += 1 + + print("PASS: stream_client") diff --git a/demos/streaming_tts_serving_fastdeploy/tree.png b/demos/streaming_tts_serving_fastdeploy/tree.png new file mode 100644 index 0000000000000000000000000000000000000000..b8d61686aa76f0e1270172b55b0e0ae64560d45e GIT binary patch literal 25004 zcmZsibyQqQ^Y?*3u;3OvI6;F;aJS&@?ykW-xVsMS?(P;K5Zpc3;4Xu_lg;iv`#b0T zV>tKB+_}?TUENjRPZeSEvf@YxcnA;>5J-{|B8m_YP%hx>dbqdXzh_IpSqKP(c?)4- zc}Zd65Au$7rWV#F5D*e!Nhz@E7)#iGXEGl7Fr-=X!e|8;0#XN%m<53)LFBRWLK1MK zuEq{9)+M3(8dEB$FluT7^8xBLAHtD9Z#AG3U264(O<=zUt$A;Fo_U&eWnB51P4M1j zes;Rbg7n)SMvGw4$AVxM<%$m#AokGqf+D;lK=ntLhkU0$uq8EEaeglEH98a%G46YLV+Rea;?^2yOp|PGZfu6te_EC8La?*I*hkCL$!dbHezWNT%Y;yD zcNPeylIFNe*&@AK$09dp&sHCvjWe*pZLv zqrS}(;8BNb5rAO!ho*Zw)?+CTE!9I=gQOh5t4~}4Dc-~M6I#~aX%oHWt!+U2CR%JD z-#Plbz&m}+fDZ&B{w1;4#t>;jswNQNPVoo@z4+n-8P z2UAI9xb`6Sp!Z1k7>ZD?h$053Yu;5LvcuuWeCcNxOg3~gz^|!XWUyv*g%0Vr*s8E6 zYWYL}n-WOdchW;Rlr^~4ceBN}E&26L2QME4zz=5&@qG7Q_@3on>)z*yS}<5&%APC; z?Q0N4P*l)!5OyDu5V1jGiIg!aZ>Yim>$asKOl|hpxck)mWD!!_IBm%*3885kMVLeC 
zh&C6eC|4K>D{x<=_)UHsIQA0_dLHM%foAKSE4O)tO!S4H2y;*wa*uP-@taB6KKGQ1ffLQ_52RFvD)!iQ4iemyaxzcahetbQ5Q6*54 zVwFG`k3dz2t|P0)FFKc>GDOrJ+5+jiJ2VCfar>;a@WF>$QW7-?`6eK(NoGn8z&}@^ z_MY=K)#4>2C>XQYxzL}-#Z{K~rzHhA=E@~?!?uy&aSFrG^3TMWa!L+c^HM2oHIpY# zq)faf;ajr&qZzsh;&Qd&)aRbw%*r!M6A`S;Fo+zHZsNQCHnA)J;kTx5r43&yACX+S zjv4bJtk?oz+UQ4@7D3d(R_)0)V!M{R{rFD{5@pso0ne{e`d6?38XmuT(^1wj(?C*_2+Qyrf5QD`^QY@)`k0y8ZPBGPCmG$cf_G$WuJ&WhFqnjY# z%BiO7(VFNZM;G2xX=uos4JdSO0kzds9il)n{`tkr>=o&J{s2yK?-f`pmc#x^&`1_R zimV8qhzPS#6ACr&S=Rl$n#9^sX{_e8KBfGIzAtY68V+DU^h_jbQYD<&4cj{FtK18P zDrEWK*49_??*poL`)>}t^SJ=fMKFej4*>TfonQOuX&;XRoW%KYIi*7jwdLZRg!i}{ z8q!r?eUbj>TL*lpv&-oG7J&BEBsaoo`QHG<1%N>B0mu>6E438vnUM-8z7NkB7mJWr z@V=y8(N?tIC!b~2PfLeqWfMxhYM#L?;CN>#yYY{^tw8FGpgUN&sW|s+G+&cfOzdHI zFZ#AmoavK!Vrd`Yglq7(zAJgCT(R9nl-Y>f>5<~@jCY1;8jzZiT5`u8J{Zf|&Y)ip zR}}jhrU#+r4C3M zH}+E8tgLZMtx%-t^yH8z;>N@~Kn8?g>^(jJ_RM>LL7<#gqRzB!l>pYWtxB1DYr5)U) z)|naYvy`%>h4lq8(+zyPUcCH*=krcVfyI3NW7pA38UN~MkUC+3u96jdfvw$9j830D zt$a7T=%c0pBoFuf6070}v|PD4ScsPS-Z;wDoI9TNv1G5k7GzND^yOWwMAld$@9vaC zS+5tp5X=?^gu|gP(p28IX^N-8T|-Ja`h~ zy&{}M2DIwabB{@ez?V#)?Zb0|KW~lcXfrI2-J}XfyHVgn zsO>*i*+jd&@;;BUXSKRa9-~k-P*FED2n=;Zfz(l2(%4-O;t3w?aPUL!62W)&gTm#x;(yJ$od5L;Pip>?Ka+ULa+yN9*Rt&uNAE z_;5VFQ+5Tt$BzeIKk)NwtcXSj1rhpS4zKtJ(q6iq5D3c8cN4#^m!EBHB4kUl8CLGj zw}LOrnC`m850tQVko9drOO~>R4M>iup=x|w+;4&m;Ng0=fx<9mVgMlKVxdHfb<5$Vx`l+yp`dN=8-h2D1_vvCpnRb z;8Y_+|GX_(hY0~sMwj#Gb)QgNtx%D$^~}F3JM~9%?#?${Y#^L+6)9B)a24Uf4w{eN z{Mwk|w|biv?!c1WvO$ES`n+fEAFMni5Q=|DsD33MsX*Ncu%vKyBFm-V!w0m534+v; znYr?@N?Z5!!h}$`wJVrRXxhL~+w^ssTZS3~ZXxY?d>I9GObPY8I{B2A8vwz@E*PQs z;*u!+{Ml8;?`_gKStk3G>Wn2G;Uh743>*VEAyIQE+c$if@FCduth(npy_e%)Zq0i`BxJ-h$G)M7 z-rCnkMU21j%=idI2wt$w9J{!&#=v^p)9~eNA`s=C!f+*A-BZokqEI>L(sU^lkY0%! 
z(=n56|7ly8RtZ?D^GYEZ>@@ULbHW9DK66NtT=wUYs{On#g{^JsK8N$?j$fjq$HXC5 zWOX%YaT@?}U*j?>bW=vO=QVCm$uaFA4hmdN#&c^YIooKcx_2|!+z+8h$zr!4RL)KF z|1}wm(ZQk*cfV`DE7db9E7E{F0^suTQ(#UOb%mkMsSzu8%|(NiaZE0u8Z$u1>k~3( zsqnQA0Y%pO?rwC8>rwbkCyp*!diH-xPc6FQFX!q{YJT#-ozV4y)&f6#jlTQ%Xu2L zKna~J)yRy2EM@Jp`KX%BK@TLxDgp6${q9Ta#^r?O;cpwHKX2lf#^HF zUOx4GyMJKhWp2Q|ulA%hEG#U^zKf5%nGzkcKEEWc?n48?Xdw02Zi6f@p%c}41jpGR zFx;rH%aA5Ya8&`&n;?A|ifu_F(sd7xeSB^k94{mOt-YY0XAR0A6XN-t8rnfv1?`|n z7~L+BNMab`ZgBBPaQquCnaavkHm2eaq$SqmQ;u+uYr+de^ zMqt6M!X=W{ftv&_L>@EPPpk9a{`^-JsQjTpRfaGi!~XY;E`X(!0acuL_-)(kT<81* i7xe!>_y3xgBRNsUaVx?yvh)kUM^#x%sr0US(EkBJnR0pn literal 0 HcmV?d00001