.tar
```
-1. **Deploy**: Navigate to the `openim-docker` repository directory and follow the README guide for deployment.
-2. **Deploy using Docker-compose**:
+Import all of the images at once with a short shell loop:
+```bash
+for i in $(ls ./); do docker load -i "$i"; done
```
-docker-compose up -d
-# Verify
-docker-compose ps
+6. **Deploy**: Navigate to the `openim-docker` repository directory and follow the [README guide](https://github.com/openimsdk/openim-docker) for deployment.
+
+7. **Deploy using docker compose**:
+
+```bash
+export OPENIM_IP="your ip" # Set your server IP
+make init # Init config
+docker compose up -d # Deployment
+docker compose ps # Verify
```
> **Note**: If you're using a version of Docker prior to 20, make sure you've installed `docker-compose`.
## 6. Reference Links
-- [OpenIMSDK Issue #432](https://github.com/openimsdk/open-im-server/issues/432)
+- [openimsdk Issue #432](https://github.com/openimsdk/open-im-server/issues/432)
- [Notion Link](https://nsddd.notion.site/435ee747c0bc44048da9300a2d745ad3?pvs=25)
-- [OpenIMSDK Issue #474](https://github.com/openimsdk/open-im-server/issues/474)
\ No newline at end of file
+- [openimsdk Issue #474](https://github.com/openimsdk/open-im-server/issues/474)
diff --git a/docs/contrib/prometheus-grafana.md b/docs/contrib/prometheus-grafana.md
index a59847f71..5b57c5942 100644
--- a/docs/contrib/prometheus-grafana.md
+++ b/docs/contrib/prometheus-grafana.md
@@ -111,32 +111,35 @@ Importing Grafana Dashboards is a straightforward process and is applicable to O
To monitor OpenIM in Grafana, you need to focus on three categories of key metrics, each with its specific deployment and configuration steps:
-1. **OpenIM Metrics (`prometheus-dashboard.yaml`)**:
- + **Configuration File Path**: Located at `config/prometheus-dashboard.yaml`.
- + **Enabling Monitoring**: Set the environment variable `export PROMETHEUS_ENABLE=true` to enable Prometheus monitoring.
- + **More Information**: Refer to the [OpenIM Configuration Guide](https://docs.openim.io/configurations/prometheus-integration).
-2. **Node Exporter**:
- + **Container Deployment**: Deploy the `quay.io/prometheus/node-exporter` container for node monitoring.
- + **Get Dashboard**: Access the [Node Exporter Full Feature Dashboard](https://grafana.com/grafana/dashboards/1860-node-exporter-full/) and import it using YAML file download or ID import.
- + **Deployment Guide**: Refer to the [Node Exporter Deployment Documentation](https://prometheus.io/docs/guides/node-exporter/).
-3. **Middleware Metrics**: Each middleware requires specific steps and configurations to enable monitoring. Here is a list of common middleware and links to their respective setup guides:
- + MySQL:
- + **Configuration**: Ensure MySQL has performance monitoring enabled.
- + **Link**: Refer to the [MySQL Monitoring Configuration Guide](https://grafana.com/docs/grafana/latest/datasources/mysql/).
- + Redis:
- + **Configuration**: Configure Redis to allow monitoring data export.
- + **Link**: Refer to the [Redis Monitoring Guide](https://grafana.com/docs/grafana/latest/datasources/redis/).
- + MongoDB:
- + **Configuration**: Set up monitoring metrics for MongoDB.
- + **Link**: Refer to the [MongoDB Monitoring Guide](https://grafana.com/grafana/plugins/grafana-mongodb-datasource/).
- + Kafka:
- + **Configuration**: Integrate Kafka with Prometheus monitoring.
- + **Link**: Refer to the [Kafka Monitoring Guide](https://grafana.com/grafana/plugins/grafana-kafka-datasource/).
- + Zookeeper:
- + **Configuration**: Ensure Zookeeper can be monitored by Prometheus.
- + **Link**: Refer to the [Zookeeper Monitoring Configuration](https://grafana.com/docs/grafana/latest/datasources/zookeeper/).
-
-
+**OpenIM Metrics (`prometheus-dashboard.yaml`)**:
+
+- **Configuration File Path**: Find this at `config/prometheus-dashboard.yaml`.
+- **Enabling Monitoring**: Activate Prometheus monitoring by setting the environment variable: `export PROMETHEUS_ENABLE=true`.
+- **More Information**: For detailed instructions, see the [OpenIM Configuration Guide](https://docs.openim.io/configurations/prometheus-integration).
+
+**Node Exporter**:
+
+- **Container Deployment**: Use the container `quay.io/prometheus/node-exporter` for effective node monitoring (an example `docker run` command is sketched below).
+- **Access Dashboard**: Visit the [Node Exporter Full Feature Dashboard](https://grafana.com/grafana/dashboards/1860-node-exporter-full/) and import it either by downloading the dashboard file or by entering its ID.
+- **Deployment Guide**: For deployment steps, consult the [Node Exporter Deployment Documentation](https://prometheus.io/docs/guides/node-exporter/).
+
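+A typical way to run the exporter with Docker is sketched below; this is illustrative only, and the mounts and flags may need adjusting for your environment:
+
+```bash
+docker run -d \
+  --name node-exporter \
+  --net="host" \
+  --pid="host" \
+  -v "/:/host:ro,rslave" \
+  quay.io/prometheus/node-exporter:latest \
+  --path.rootfs=/host
+```
+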
+**Middleware Metrics**: Different middlewares require unique steps and configurations for monitoring:
+
+- MySQL:
+ - **Configuration**: Make sure MySQL is set up for performance monitoring.
+ - **Guide**: See the [MySQL Monitoring Configuration Guide](https://grafana.com/docs/grafana/latest/datasources/mysql/).
+- Redis:
+ - **Configuration**: Adjust Redis settings to enable monitoring data export.
+ - **Guide**: Consult the [Redis Monitoring Guide](https://grafana.com/docs/grafana/latest/datasources/redis/).
+- MongoDB:
+ - **Configuration**: Configure MongoDB for monitoring metrics.
+ - **Guide**: Visit the [MongoDB Monitoring Guide](https://grafana.com/grafana/plugins/grafana-mongodb-datasource/).
+- Kafka:
+ - **Configuration**: Set up Kafka for Prometheus monitoring integration.
+ - **Guide**: Refer to the [Kafka Monitoring Guide](https://grafana.com/grafana/plugins/grafana-kafka-datasource/).
+- Zookeeper:
+ - **Configuration**: Ensure Prometheus can monitor Zookeeper.
+ - **Guide**: Check out the [Zookeeper Monitoring Configuration](https://grafana.com/docs/grafana/latest/datasources/zookeeper/).
**Importing Steps**:
diff --git a/docs/contrib/release.md b/docs/contrib/release.md
new file mode 100644
index 000000000..65756fe9a
--- /dev/null
+++ b/docs/contrib/release.md
@@ -0,0 +1,251 @@
+# OpenIM Release Automation Design Document
+
+This document outlines the automation process for releasing OpenIM. You can use the `make release` command for automated publishing. We discuss the `make release` command and the GitHub Actions CI/CD pipeline separately, while also providing insight into the design principles involved.
+
+## GitHub Actions Automation
+
+In our CI/CD pipeline, we have implemented automation of the release process using the goreleaser tool. To trigger it, run the following steps on your local machine or server:
+
+```bash
+git clone https://github.com/openimsdk/open-im-server
+cd open-im-server
+git tag -a v3.6.0 -s -m "release: xxx"
+# For pre-release versions: git tag -a v3.6.0-rc.0 -s -m "pre-release: xxx"
+git push origin v3.6.0
+```
+
+The remaining tasks are handled by automated processes:
+
++ Automatically complete the release publication on GitHub
++ Automatically build the `v3.6.0` image and push it to Aliyun, Docker Hub, and GitHub
+
+Through these automated steps, we achieve rapid and efficient OpenIM version releases, simplifying the release process and enhancing productivity.
+
+## Local Make Release Design
+
+There are two primary scenarios for local usage:
+
++ Advanced compilation and release, manually executed locally
++ Quick compilation verification and version release, manually executed locally
+
+**These two scenarios can also be combined, for example, by tagging locally and then releasing:**
+
+```bash
+git add .
+git commit -a -s -m "release(v3.6.0): ......"
+git tag v3.6.0
+make release
+git push origin main
+```
+
+In a local environment, you can use the `make release` command to complete the release process. The main implementation logic lives in the `scripts/lib/release.sh` file. First, let's explore its usage through the help information.
+
+### Help Information
+
+To view the help information, execute the following command:
+
+```bash
+$ ./scripts/release.sh --help
+Usage: release.sh [options]
+Options:
+ -h, --help Display this help message
+ -se, --setup-env Execute environment setup
+ -vp, --verify-prereqs Execute prerequisite verification
+ -bc, --build-command Execute build command
+ -bi, --build-image Execute build image (default is not executed)
+ -pt, --package-tarballs Execute tarball packaging
+ -ut, --upload-tarballs Execute tarball upload
+ -gr, --github-release Execute GitHub release
+ -gc, --generate-changelog Execute changelog generation
+```
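+
+For example, to run only environment setup, prerequisite verification, the build, and tarball packaging (skipping upload and the GitHub release), combine the corresponding flags. This is an illustrative invocation, assuming each flag enables only its own phase, as the default-behavior snippet below suggests:
+
+```bash
+./scripts/release.sh --setup-env --verify-prereqs --build-command --package-tarballs
+```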
+
+### Default Behavior
+
+If no options are provided, all operations are executed by default:
+
+```bash
+# If no options are provided, enable all operations by default
+if [ "$#" -eq 0 ]; then
+ perform_setup_env=true
+ perform_verify_prereqs=true
+ perform_build_command=true
+ perform_package_tarballs=true
+ perform_upload_tarballs=true
+ perform_github_release=true
+ perform_generate_changelog=true
+ # TODO: Defaultly not enable build_image
+ # perform_build_image=true
+fi
+```
+
+### Environment Variable Setup
+
+Before starting, you need to set environment variables:
+
+```bash
+export TENCENT_SECRET_KEY=OZZ****************************
+export TENCENT_SECRET_ID=AKI****************************
+```
+
+### Modifying COS Bucket Configuration
+
+If you need to change the COS bucket, region, or release directory, modify the following section in the `scripts/lib/release.sh` file (the account credentials themselves come from the `TENCENT_SECRET_ID` and `TENCENT_SECRET_KEY` environment variables above):
+
+```bash
+readonly BUCKET="openim-1306374445"
+readonly REGION="ap-guangzhou"
+readonly COS_RELEASE_DIR="openim-release"
+```
+
+### GitHub Release Configuration
+
+If you intend to use the GitHub Release feature, you also need to set the environment variable:
+
+```bash
+export GITHUB_TOKEN="your_github_token"
+```
+
+### Modifying GitHub Release Basic Information
+
+If you need to modify the basic GitHub Release information, edit the following section in the `scripts/lib/release.sh` file:
+
+```bash
+# OpenIM GitHub account information
+readonly OPENIM_GITHUB_ORG=openimsdk
+readonly OPENIM_GITHUB_REPO=open-im-server
+```
+
+This setup allows you to configure and execute the local release process according to your specific needs.
+
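+Putting the pieces together, a typical local run might look like the following; this is an illustrative sketch, so substitute your own credentials and version:
+
+```bash
+export TENCENT_SECRET_ID=AKI****************************
+export TENCENT_SECRET_KEY=OZZ****************************
+export GITHUB_TOKEN="your_github_token"
+
+git tag -a v3.6.0 -s -m "release: xxx"
+make release
+```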
+
+### GitHub Release Versioning Rules
+
+First, note that the GitHub Releases produced here are intended primarily for pre-release versions. goreleaser provides a `prerelease: auto` option, which automatically marks versions carrying pre-release indicators such as `-rc1` or `-beta` as pre-releases.
+
+Consequently, if your most recent tag has no pre-release indicator such as `-rc1` or `-beta`, goreleaser will treat the release as a formal one, even if you ran `make release` intending a pre-release.
+
+To avoid this issue, I have added the `--draft` flag to github-release. This way, all releases are created as drafts.
+
+## CICD Release Documentation Design
+
+The release notes for the GitHub Release still need to be composed manually; this differs from the automated `github-release` step.
+
+This approach ensures that all releases are initially created as drafts, allowing you to manually review and edit the release documentation on GitHub. This manual step provides more control and allows you to curate release notes and other information before making them public.
+
+
+## Makefile Section
+
+This part explains the key pieces of the OpenIM release automation: the Makefile rules and the functions they invoke. Below, we walk through the logic of each piece in turn.
+
+In the project's root directory, the Makefile includes a rules file from the `scripts/make-rules` subdirectory:
+
+```makefile
+include scripts/make-rules/release.mk
+```
+
+And defines the `release` target as follows:
+
+```makefile
+## release: release the project ✨
+.PHONY: release
+release: release.verify release.ensure-tag
+ @scripts/release.sh
+```
+
+### Including the Release Rules
+
+At the beginning of the Makefile, the `include scripts/make-rules/release.mk` statement pulls in the `release.mk` file from the `scripts/make-rules` subdirectory. This file contains the release-related rules and configuration used by the subsequent targets.
+
+### The `release` Target
+
+The Makefile defines a target named `release`, which is used to execute the project's release operation. This target is marked as a phony target (`.PHONY`), meaning it doesn't represent an actual file or directory but serves as an identifier for executing a series of actions.
+
+In the `release` target, two dependency targets are executed first: `release.verify` and `release.ensure-tag`. Afterward, the `scripts/release.sh` script is called to perform the actual release operation.
+
+## Logic of `release.verify` and `release.ensure-tag`
+
+```makefile
+## release.verify: Check if a tool is installed and install it
+.PHONY: release.verify
+release.verify: tools.verify.git-chglog tools.verify.github-release tools.verify.coscmd tools.verify.coscli
+
+## release.ensure-tag: ensure tag
+.PHONY: release.ensure-tag
+release.ensure-tag: tools.verify.gsemver
+ @scripts/ensure-tag.sh
+```
+
+### `release.verify` Target
+
+The `release.verify` target is used to check and install tools. It depends on four sub-targets: `tools.verify.git-chglog`, `tools.verify.github-release`, `tools.verify.coscmd`, and `tools.verify.coscli`. These sub-targets aim to check if specific tools are installed and attempt to install them if they are not.
+
+The purpose of this target is to ensure that the necessary tools required for the release process are available so that subsequent operations can be executed smoothly.
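+
+As a quick manual check, independent of the Makefile, you can verify whether these tools are already on your `PATH` (illustrative snippet):
+
+```bash
+for tool in git-chglog github-release coscmd coscli; do
+  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
+done
+```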
+
+### `release.ensure-tag` Target
+
+The `release.ensure-tag` target is used to ensure that the project has a version tag. It depends on the sub-target `tools.verify.gsemver`, indicating that it should check if the `gsemver` tool is installed before executing.
+
+When the `release.ensure-tag` target is executed, it calls the `scripts/ensure-tag.sh` script to ensure that the project has a version tag. Version tags are typically used to identify specific versions of the project for management and release in version control systems.
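+
+The repository's actual `scripts/ensure-tag.sh` is not reproduced here; a minimal sketch of the idea, assuming `gsemver bump` prints the next semantic version, might look like this:
+
+```bash
+#!/usr/bin/env bash
+# Illustrative sketch only; the real scripts/ensure-tag.sh may differ.
+VERSION=${VERSION:-$(gsemver bump)}   # assumption: gsemver derives the next version from git history
+if [ -z "$(git tag -l "${VERSION}")" ]; then
+  git tag -a "${VERSION}" -m "release ${VERSION}"
+fi
+```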
+
+## Logic of `release.sh` Script
+
+```bash
+openim::golang::setup_env
+openim::build::verify_prereqs
+openim::release::verify_prereqs
+#openim::build::build_image
+openim::build::build_command
+openim::release::package_tarballs
+openim::release::upload_tarballs
+git push origin ${VERSION}
+#openim::release::github_release
+#openim::release::generate_changelog
+```
+
+The `release.sh` script is responsible for executing the actual release operations. Below is the logic of this script:
+
+1. `openim::golang::setup_env`: This function sets up the Go environment used by the rest of the script.
+
+2. `openim::build::verify_prereqs`: This function is used to verify whether the prerequisites for building are met. This includes checking dependencies, environment variables, and more.
+
+3. `openim::release::verify_prereqs`: Similar to the previous function, this one is used to verify whether the prerequisites for the release are met. It focuses on conditions relevant to the release.
+
+4. `openim::build::build_command`: This function builds the project's command-line binaries, typically by compiling the project.
+
+5. `openim::release::package_tarballs`: This function is used to package tarball files required for the release. These tarballs are usually used for distribution packages during the release.
+
+6. `openim::release::upload_tarballs`: This function is used to upload the packaged tarball files, typically to a distribution platform or repository.
+
+7. `git push origin ${VERSION}`: This command pushes the version tag to the `origin` remote, recording this release in the version control system.
+
+In the comments, you can see that there are some operations that are commented out, such as `openim::build::build_image`, `openim::release::github_release`, and `openim::release::generate_changelog`. These operations are related to building images, releasing to GitHub, and generating changelogs, and they can be enabled in the release process as needed.
+
+Let's take a closer look at the function responsible for packaging the tarball files:
+
+```bash
+function openim::release::package_tarballs() {
+ # Clean out any old releases
+ rm -rf "${RELEASE_STAGE}" "${RELEASE_TARS}" "${RELEASE_IMAGES}"
+ mkdir -p "${RELEASE_TARS}"
+ openim::release::package_src_tarball &
+ openim::release::package_client_tarballs &
+ openim::release::package_openim_manifests_tarball &
+ openim::release::package_server_tarballs &
+ openim::util::wait-for-jobs || { openim::log::error "previous tarball phase failed"; return 1; }
+
+ openim::release::package_final_tarball & # _final depends on some of the previous phases
+ openim::util::wait-for-jobs || { openim::log::error "previous tarball phase failed"; return 1; }
+}
+```
+
+The `openim::release::package_tarballs()` function is responsible for packaging the tarball files required for the release. Here is the specific logic of this function:
+
+1. `rm -rf "${RELEASE_STAGE}" "${RELEASE_TARS}" "${RELEASE_IMAGES}"`: First, the function removes any old release directories and files to ensure a clean starting state.
+
+2. `mkdir -p "${RELEASE_TARS}"`: Next, it creates a directory `${RELEASE_TARS}` to store the packaged tarball files. If the directory doesn't exist, it will be created.
+
+3. `openim::release::package_src_tarball &`, `openim::release::package_client_tarballs &`, `openim::release::package_openim_manifests_tarball &`, `openim::release::package_server_tarballs &`: These four packaging jobs are launched in the background so they run in parallel, and the first `openim::util::wait-for-jobs` call waits for all of them, returning an error if any job fails.
+
+4. `openim::release::package_final_tarball &`: This asynchronous operation depends on some of the previous phases. It packages the final tarball, which bundles the output of the earlier packaging jobs.
+
+5. The second `openim::util::wait-for-jobs` call waits for this remaining job to complete; if it fails, an error is returned.
diff --git a/docs/contrib/util-makefile.md b/docs/contrib/util-makefile.md
index e0331f50e..8bde02874 100644
--- a/docs/contrib/util-makefile.md
+++ b/docs/contrib/util-makefile.md
@@ -30,7 +30,7 @@ Executing `make tools` ensures verification and installation of the default tool
- go-junit-report
- go-gitlint
-The installation path is situated at `/root/workspaces/openim/Open-IM-Server/_output/tools/`.
+The installation path is situated at `./_output/tools/`.
## Toolset Categories
diff --git a/docs/contrib/version.md b/docs/contrib/version.md
index 0e03b8d8b..574badf59 100644
--- a/docs/contrib/version.md
+++ b/docs/contrib/version.md
@@ -1,6 +1,7 @@
# OpenIM Branch Management and Versioning: A Blueprint for High-Grade Software Development
[📚 **OpenIM TOC**](#openim-branch-management-and-versioning-a-blueprint-for-high-grade-software-development)
+- [OpenIM Branch Management and Versioning: A Blueprint for High-Grade Software Development](#openim-branch-management-and-versioning-a-blueprint-for-high-grade-software-development)
- [Unfolding the Mechanism of OpenIM Version Maintenance](#unfolding-the-mechanism-of-openim-version-maintenance)
- [Main Branch: The Heart of OpenIM Development](#main-branch-the-heart-of-openim-development)
- [Release Branch: The Beacon of Stability](#release-branch-the-beacon-of-stability)
@@ -8,8 +9,21 @@
- [Release Management: A Guided Tour](#release-management-a-guided-tour)
- [Milestones, Branching, and Addressing Major Bugs](#milestones-branching-and-addressing-major-bugs)
- [Version Skew Policy](#version-skew-policy)
+ - [Supported version skew](#supported-version-skew)
+ - [OpenIM Versioning, Branching, and Tag Strategy](#openim-versioning-branching-and-tag-strategy)
+ - [Supported Version Skew](#supported-version-skew-1)
+ - [openim-api](#openim-api)
+ - [openim-rpc-\* Components](#openim-rpc--components)
+ - [Other OpenIM Services](#other-openim-services)
+ - [Supported Component Upgrade Order](#supported-component-upgrade-order)
+ - [openim-api](#openim-api-1)
+ - [openim-rpc-\* Components](#openim-rpc--components-1)
+ - [Other OpenIM Services](#other-openim-services-1)
+ - [Conclusion](#conclusion)
- [Applying Principles: A Git Workflow Example](#applying-principles-a-git-workflow-example)
+ - [Release Process](#release-process)
- [Docker Images Version Management](#docker-images-version-management)
+ - [More](#more)
At OpenIM, we acknowledge the profound impact of implementing a robust and efficient version management system, hence we abide by the established standards of [Semantic Versioning 2.0.0](https://semver.org/lang/zh-CN/).
@@ -213,3 +227,10 @@ Throughout this process, active communication within the team is pivotal to main
## Docker Images Version Management
For more details on managing Docker image versions, visit [OpenIM Docker Images Administration](https://github.com/openimsdk/open-im-server/blob/main/docs/contrib/images.md).
+
+## More
+
+More on the design of multi-branch version management and of version management for the Helm charts:
+
++ https://github.com/openimsdk/open-im-server/issues/1695
++ https://github.com/openimsdk/open-im-server/issues/1662
\ No newline at end of file
diff --git a/docs/contributing/CONTRIBUTING-JP.md b/docs/contributing/CONTRIBUTING-JP.md
new file mode 100644
index 000000000..86bbfefcd
--- /dev/null
+++ b/docs/contributing/CONTRIBUTING-JP.md
@@ -0,0 +1,33 @@
+# How do I contribute code to OpenIM
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
\ No newline at end of file
diff --git a/docs/contributing/CONTRIBUTING-PL.md b/docs/contributing/CONTRIBUTING-PL.md
new file mode 100644
index 000000000..86bbfefcd
--- /dev/null
+++ b/docs/contributing/CONTRIBUTING-PL.md
@@ -0,0 +1,33 @@
+# How do I contribute code to OpenIM
+
+
+ English ·
+ 中文 ·
+ Українська ·
+ Česky ·
+ Magyar ·
+ Español ·
+ فارسی ·
+ Français ·
+ Deutsch ·
+ Polski ·
+ Indonesian ·
+ Suomi ·
+ മലയാളം ·
+ 日本語 ·
+ Nederlands ·
+ Italiano ·
+ Русский ·
+ Português (Brasil) ·
+ Esperanto ·
+ 한국어 ·
+ العربي ·
+ Tiếng Việt ·
+ Dansk ·
+ Ελληνικά ·
+ Türkçe
+
+
+
+
+
\ No newline at end of file
diff --git a/docs/images/Open-IM-Servers-on-System.png b/docs/images/Open-IM-Servers-on-System.png
deleted file mode 100644
index 3c8a10202..000000000
Binary files a/docs/images/Open-IM-Servers-on-System.png and /dev/null differ
diff --git a/docs/images/Open-IM-Servers-on-docker.png b/docs/images/Open-IM-Servers-on-docker.png
deleted file mode 100644
index c66f7fb09..000000000
Binary files a/docs/images/Open-IM-Servers-on-docker.png and /dev/null differ
diff --git a/docs/images/architecture-layers.png b/docs/images/architecture-layers.png
new file mode 100644
index 000000000..d9e6e4d59
Binary files /dev/null and b/docs/images/architecture-layers.png differ
diff --git a/docs/images/build.png b/docs/images/build.png
deleted file mode 100644
index 7c5914c82..000000000
Binary files a/docs/images/build.png and /dev/null differ
diff --git a/docs/images/docker_build.png b/docs/images/docker_build.png
deleted file mode 100644
index f4be10f68..000000000
Binary files a/docs/images/docker_build.png and /dev/null differ
diff --git a/docs/readme/README-UA.md b/docs/readme/README-UA.md
new file mode 100644
index 000000000..892e16d19
--- /dev/null
+++ b/docs/readme/README-UA.md
@@ -0,0 +1,7 @@
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/go.mod b/go.mod
index fc7c615c6..ab138e68c 100644
--- a/go.mod
+++ b/go.mod
@@ -4,16 +4,16 @@ go 1.19
require (
firebase.google.com/go v3.13.0+incompatible
+ github.com/OpenIMSDK/protocol v0.0.55
+ github.com/OpenIMSDK/tools v0.0.33
github.com/bwmarrin/snowflake v0.3.0 // indirect
github.com/dtm-labs/rockscache v0.1.1
github.com/gin-gonic/gin v1.9.1
github.com/go-playground/validator/v10 v10.15.5
github.com/gogo/protobuf v1.3.2
github.com/golang-jwt/jwt/v4 v4.5.0
- github.com/golang/protobuf v1.5.3
github.com/gorilla/websocket v1.5.0
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
- github.com/jinzhu/copier v0.4.0
github.com/lestrrat-go/file-rotatelogs v2.4.0+incompatible // indirect
github.com/minio/minio-go/v7 v7.0.63
github.com/mitchellh/mapstructure v1.5.0
@@ -29,22 +29,18 @@ require (
google.golang.org/grpc v1.59.0
google.golang.org/protobuf v1.31.0
gopkg.in/yaml.v3 v3.0.1
- gorm.io/driver/mysql v1.5.2
- gorm.io/gorm v1.25.5
)
-require github.com/google/uuid v1.3.1
+require github.com/google/uuid v1.5.0
require (
github.com/IBM/sarama v1.41.3
- github.com/OpenIMSDK/protocol v0.0.31
- github.com/OpenIMSDK/tools v0.0.16
github.com/aliyun/aliyun-oss-go-sdk v2.2.9+incompatible
github.com/go-redis/redis v6.15.9+incompatible
- github.com/go-sql-driver/mysql v1.7.1
github.com/redis/go-redis/v9 v9.2.1
+ github.com/spf13/pflag v1.0.5
+ github.com/stathat/consistent v1.0.0
github.com/tencentyun/cos-go-sdk-v5 v0.7.45
- go.uber.org/automaxprocs v1.5.3
golang.org/x/sync v0.4.0
gopkg.in/src-d/go-git.v4 v4.13.1
gotest.tools v2.2.0+incompatible
@@ -58,24 +54,6 @@ require (
cloud.google.com/go/iam v1.1.2 // indirect
cloud.google.com/go/longrunning v0.5.1 // indirect
cloud.google.com/go/storage v1.30.1 // indirect
- github.com/aws/aws-sdk-go-v2 v1.23.1 // indirect
- github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1 // indirect
- github.com/aws/aws-sdk-go-v2/config v1.25.4 // indirect
- github.com/aws/aws-sdk-go-v2/credentials v1.16.3 // indirect
- github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.5 // indirect
- github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.4 // indirect
- github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.4 // indirect
- github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1 // indirect
- github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.4 // indirect
- github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.1 // indirect
- github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.4 // indirect
- github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.4 // indirect
- github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.4 // indirect
- github.com/aws/aws-sdk-go-v2/service/s3 v1.43.1 // indirect
- github.com/aws/aws-sdk-go-v2/service/sso v1.17.3 // indirect
- github.com/aws/aws-sdk-go-v2/service/ssooidc v1.20.1 // indirect
- github.com/aws/aws-sdk-go-v2/service/sts v1.25.4 // indirect
- github.com/aws/smithy-go v1.17.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bytedance/sonic v1.9.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
@@ -93,6 +71,7 @@ require (
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-zookeeper/zk v1.0.3 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
+ github.com/golang/protobuf v1.5.3 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/go-querystring v1.1.0 // indirect
@@ -109,12 +88,13 @@ require (
github.com/jcmturner/gofork v1.7.6 // indirect
github.com/jcmturner/gokrb5/v8 v8.4.4 // indirect
github.com/jcmturner/rpc/v2 v2.0.3 // indirect
+ github.com/jinzhu/copier v0.3.5 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/kevinburke/ssh_config v0.0.0-20190725054713-01f96b0aa0cd // indirect
- github.com/klauspost/compress v1.16.7 // indirect
- github.com/klauspost/cpuid/v2 v2.2.5 // indirect
+ github.com/klauspost/compress v1.17.4 // indirect
+ github.com/klauspost/cpuid/v2 v2.2.6 // indirect
github.com/leodido/go-urn v1.2.4 // indirect
github.com/lithammer/shortuuid v3.0.0+incompatible // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
@@ -133,11 +113,9 @@ require (
github.com/prometheus/client_model v0.4.1-0.20230718164431-9a2bf3000d16 // indirect
github.com/prometheus/common v0.44.0 // indirect
github.com/prometheus/procfs v0.11.1 // indirect
- github.com/qiniu/go-sdk/v7 v7.18.2 // indirect
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 // indirect
github.com/rs/xid v1.5.0 // indirect
github.com/sergi/go-diff v1.0.0 // indirect
- github.com/spf13/pflag v1.0.5 // indirect
github.com/src-d/gcfg v1.4.0 // indirect
github.com/stretchr/objx v0.5.0 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
@@ -148,13 +126,13 @@ require (
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d // indirect
go.opencensus.io v0.24.0 // indirect
go.uber.org/atomic v1.7.0 // indirect
- go.uber.org/multierr v1.6.0 // indirect
+ go.uber.org/multierr v1.11.0 // indirect
golang.org/x/arch v0.3.0 // indirect
- golang.org/x/net v0.17.0 // indirect
+ golang.org/x/net v0.19.0 // indirect
golang.org/x/oauth2 v0.13.0 // indirect
- golang.org/x/sys v0.13.0 // indirect
- golang.org/x/text v0.13.0 // indirect
- golang.org/x/time v0.3.0 // indirect
+ golang.org/x/sys v0.15.0 // indirect
+ golang.org/x/text v0.14.0 // indirect
+ golang.org/x/time v0.5.0 // indirect
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20231002182017-d307bd883b97 // indirect
@@ -162,6 +140,8 @@ require (
google.golang.org/genproto/googleapis/rpc v0.0.0-20231012201019-e917dd12ba7a // indirect
gopkg.in/src-d/go-billy.v4 v4.3.2 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect
+ gorm.io/gorm v1.25.4 // indirect
+ stathat.com/c/consistent v1.0.0 // indirect
)
require (
@@ -172,6 +152,6 @@ require (
github.com/spf13/cobra v1.7.0
github.com/ugorji/go/codec v1.2.11 // indirect
go.uber.org/zap v1.24.0 // indirect
- golang.org/x/crypto v0.14.0 // indirect
+ golang.org/x/crypto v0.17.0 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
)
diff --git a/go.sum b/go.sum
index 10cb9ee8c..94a516366 100644
--- a/go.sum
+++ b/go.sum
@@ -18,10 +18,10 @@ firebase.google.com/go v3.13.0+incompatible/go.mod h1:xlah6XbEyW6tbfSklcfe5FHJIw
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/IBM/sarama v1.41.3 h1:MWBEJ12vHC8coMjdEXFq/6ftO6DUZnQlFYcxtOJFa7c=
github.com/IBM/sarama v1.41.3/go.mod h1:Xxho9HkHd4K/MDUo/T/sOqwtX/17D33++E9Wib6hUdQ=
-github.com/OpenIMSDK/protocol v0.0.31 h1:ax43x9aqA6EKNXNukS5MT5BSTqkUmwO4uTvbJLtzCgE=
-github.com/OpenIMSDK/protocol v0.0.31/go.mod h1:F25dFrwrIx3lkNoiuf6FkCfxuwf8L4Z8UIsdTHP/r0Y=
-github.com/OpenIMSDK/tools v0.0.16 h1:te/GIq2imCMsrRPgU9OObYKbzZ3rT08Lih/o+3QFIz0=
-github.com/OpenIMSDK/tools v0.0.16/go.mod h1:eg+q4A34Qmu73xkY0mt37FHGMCMfC6CtmOnm0kFEGFI=
+github.com/OpenIMSDK/protocol v0.0.55 h1:eBjg8DyuhxGmuCUjpoZjg6MJJJXU/xJ3xJwFhrn34yA=
+github.com/OpenIMSDK/protocol v0.0.55/go.mod h1:F25dFrwrIx3lkNoiuf6FkCfxuwf8L4Z8UIsdTHP/r0Y=
+github.com/OpenIMSDK/tools v0.0.33 h1:rvFCxXaXxLv1MJFC4qcoWRGwKBnV+hR68UN2N0/zZhE=
+github.com/OpenIMSDK/tools v0.0.33/go.mod h1:wBfR5CYmEyvxl03QJbTkhz1CluK6J4/lX0lviu8JAjE=
github.com/QcloudApi/qcloud_sign_golang v0.0.0-20141224014652-e4130a326409/go.mod h1:1pk82RBxDY/JZnPQrtqHlUFfCctgdorsd9M06fMynOM=
github.com/alcortesm/tgz v0.0.0-20161220082320-9c5fe88206d7 h1:uSoVVbwJiQipAclBbw+8quDsfcvFjOpI5iCf4p/cqCs=
github.com/alcortesm/tgz v0.0.0-20161220082320-9c5fe88206d7/go.mod h1:6zEj6s6u/ghQa61ZWa/C2Aw3RkjiTBOix7dkqa1VLIs=
@@ -31,42 +31,6 @@ github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239 h1:kFOfPq6dUM1hTo
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
-github.com/aws/aws-sdk-go-v2 v1.23.1 h1:qXaFsOOMA+HsZtX8WoCa+gJnbyW7qyFFBlPqvTSzbaI=
-github.com/aws/aws-sdk-go-v2 v1.23.1/go.mod h1:i1XDttT4rnf6vxc9AuskLc6s7XBee8rlLilKlc03uAA=
-github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1 h1:ZY3108YtBNq96jNZTICHxN1gSBSbnvIdYwwqnvCV4Mc=
-github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1/go.mod h1:t8PYl/6LzdAqsU4/9tz28V/kU+asFePvpOMkdul0gEQ=
-github.com/aws/aws-sdk-go-v2/config v1.25.4 h1:r+X1x8QI6FEPdJDWCNBDZHyAcyFwSjHN8q8uuus+Axs=
-github.com/aws/aws-sdk-go-v2/config v1.25.4/go.mod h1:8GTjImECskr7D88P/Nn9uM4M4rLY9i77hLJZgkZEWV8=
-github.com/aws/aws-sdk-go-v2/credentials v1.16.3 h1:8PeI2krzzjDJ5etmgaMiD1JswsrLrWvKKu/uBUtNy1g=
-github.com/aws/aws-sdk-go-v2/credentials v1.16.3/go.mod h1:Kdh/okh+//vQ/AjEt81CjvkTo64+/zIE4OewP7RpfXk=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.5 h1:KehRNiVzIfAcj6gw98zotVbb/K67taJE0fkfgM6vzqU=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.5/go.mod h1:VhnExhw6uXy9QzetvpXDolo1/hjhx4u9qukBGkuUwjs=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.4 h1:LAm3Ycm9HJfbSCd5I+wqC2S9Ej7FPrgr5CQoOljJZcE=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.4/go.mod h1:xEhvbJcyUf/31yfGSQBe01fukXwXJ0gxDp7rLfymWE0=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.4 h1:4GV0kKZzUxiWxSVpn/9gwR0g21NF1Jsyduzo9rHgC/Q=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.4/go.mod h1:dYvTNAggxDZy6y1AF7YDwXsPuHFy/VNEpEI/2dWK9IU=
-github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1 h1:uR9lXYjdPX0xY+NhvaJ4dD8rpSRz5VY81ccIIoNG+lw=
-github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1/go.mod h1:6fQQgfuGmw8Al/3M2IgIllycxV7ZW7WCdVSqfBeUiCY=
-github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.4 h1:40Q4X5ebZruRtknEZH/bg91sT5pR853F7/1X9QRbI54=
-github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.4/go.mod h1:u77N7eEECzUv7F0xl2gcfK/vzc8wcjWobpy+DcrLJ5E=
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.1 h1:rpkF4n0CyFcrJUG/rNNohoTmhtWlFTRI4BsZOh9PvLs=
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.1/go.mod h1:l9ymW25HOqymeU2m1gbUQ3rUIsTwKs8gYHXkqDQUhiI=
-github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.4 h1:6DRKQc+9cChgzL5gplRGusI5dBGeiEod4m/pmGbcX48=
-github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.4/go.mod h1:s8ORvrW4g4v7IvYKIAoBg17w3GQ+XuwXDXYrQ5SkzU0=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.4 h1:rdovz3rEu0vZKbzoMYPTehp0E8veoE9AyfzqCr5Eeao=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.4/go.mod h1:aYCGNjyUCUelhofxlZyj63srdxWUSsBSGg5l6MCuXuE=
-github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.4 h1:o3DcfCxGDIT20pTbVKVhp3vWXOj/VvgazNJvumWeYW0=
-github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.4/go.mod h1:Uy0KVOxuTK2ne+/PKQ+VvEeWmjMMksE17k/2RK/r5oM=
-github.com/aws/aws-sdk-go-v2/service/s3 v1.43.1 h1:1w11lfXOa8HoHoSlNtt4mqv/N3HmDOa+OnUH3Y9DHm8=
-github.com/aws/aws-sdk-go-v2/service/s3 v1.43.1/go.mod h1:dqJ5JBL0clzgHriH35Amx3LRFY6wNIPUX7QO/BerSBo=
-github.com/aws/aws-sdk-go-v2/service/sso v1.17.3 h1:CdsSOGlFF3Pn+koXOIpTtvX7st0IuGsZ8kJqcWMlX54=
-github.com/aws/aws-sdk-go-v2/service/sso v1.17.3/go.mod h1:oA6VjNsLll2eVuUoF2D+CMyORgNzPEW/3PyUdq6WQjI=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.20.1 h1:cbRqFTVnJV+KRpwFl76GJdIZJKKCdTPnjUZ7uWh3pIU=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.20.1/go.mod h1:hHL974p5auvXlZPIjJTblXJpbkfK4klBczlsEaMCGVY=
-github.com/aws/aws-sdk-go-v2/service/sts v1.25.4 h1:yEvZ4neOQ/KpUqyR+X0ycUTW/kVRNR4nDZ38wStHGAA=
-github.com/aws/aws-sdk-go-v2/service/sts v1.25.4/go.mod h1:feTnm2Tk/pJxdX+eooEsxvlvTWBvDm6CasRZ+JOs2IY=
-github.com/aws/smithy-go v1.17.0 h1:wWJD7LX6PBV6etBUwO0zElG0nWN9rUhp0WdYeHSHAaI=
-github.com/aws/smithy-go v1.17.0/go.mod h1:NukqUGpCZIILqqiV0NIjeFh24kd/FAa4beRb6nbIUPE=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
@@ -92,7 +56,6 @@ github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDk
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
-github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -128,24 +91,15 @@ github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU=
github.com/gliderlabs/ssh v0.2.2 h1:6zsha5zo/TWhRhwqCD3+EarCAgZ2yN28ipRnGPnwkI0=
github.com/gliderlabs/ssh v0.2.2/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
-github.com/go-playground/assert/v2 v2.0.1/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
-github.com/go-playground/locales v0.13.0/go.mod h1:taPMhCMXrRLJO55olJkUXHZBHCxTMfnGwq/HNwmWNS8=
-github.com/go-playground/locales v0.14.0/go.mod h1:sawfccIbzZTqEDETgFXqTho0QybSa7l++s0DH+LDiLs=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
-github.com/go-playground/universal-translator v0.17.0/go.mod h1:UkSxE5sNxxRwHyU+Scu5vgOQjsIJAF8j9muTVoKLVtA=
-github.com/go-playground/universal-translator v0.18.0/go.mod h1:UvRDBj+xPUEGrFYl+lu/H90nyDXpg0fqeB/AQUGNTVA=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
-github.com/go-playground/validator/v10 v10.8.0/go.mod h1:9JhgTzTaE31GZDpH/HSvHiRJrJ3iKAgqqH0Bl/Ocjdk=
github.com/go-playground/validator/v10 v10.15.5 h1:LEBecTWb/1j5TNY1YYG2RcOUN3R7NLylN+x8TTueE24=
github.com/go-playground/validator/v10 v10.15.5/go.mod h1:9iXMNT7sEkjXb0I+enO7QXmzG6QCsPWY4zveKFVRSyU=
github.com/go-redis/redis v6.15.9+incompatible h1:K0pv1D7EQUjfyoMql+r/jZqCLizCGKFlFgcHWWmHQjg=
github.com/go-redis/redis v6.15.9+incompatible/go.mod h1:NAIEuMOZ/fxfXJIrKDQDz8wamY7mA7PouImQ2Jvg6kA=
-github.com/go-sql-driver/mysql v1.7.0/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
-github.com/go-sql-driver/mysql v1.7.1 h1:lUIinVbN1DY0xBg0eMOzmmtGoHwWBbvnWubQUrtU8EI=
-github.com/go-sql-driver/mysql v1.7.1/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/go-zookeeper/zk v1.0.3 h1:7M2kwOsc//9VeeFiPtf+uSJlVpU66x9Ba5+8XK7/TDg=
github.com/go-zookeeper/zk v1.0.3/go.mod h1:nOB03cncLtlp4t+UAkGSV+9beXP/akpekBwL+UX1Qcw=
@@ -198,8 +152,8 @@ github.com/google/s2a-go v0.1.7 h1:60BLSyTrOV4/haCDW4zb1guZItoSq8foHCXrAnjBo/o=
github.com/google/s2a-go v0.1.7/go.mod h1:50CgR4k1jNlWBu4UfS4AcfhVe1r6pdZPygJ3R8F0Qdw=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/google/uuid v1.3.1 h1:KjJaJ9iWZ3jOFZIf1Lqf4laDRCasjl0BCmnEGxkdLb4=
-github.com/google/uuid v1.3.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.5.0 h1:1p67kYwdtXjb0gL0BPiP1Av9wiZPo5A8z2cWkTZ+eyU=
+github.com/google/uuid v1.5.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.3.1 h1:SBWmZhjUDRorQxrN0nwzf+AHBxnbFjViHQS4P0yVpmQ=
github.com/googleapis/enterprise-certificate-proxy v0.3.1/go.mod h1:VLSiSSBs/ksPL8kq3OBOQ6WRI2QnaFynd1DCjZ62+V0=
github.com/googleapis/gax-go/v2 v2.12.0 h1:A+gCJKdRfqXkr+BIRGtZLibNXf0m1f9E4HG56etFpas=
@@ -236,8 +190,8 @@ github.com/jcmturner/gokrb5/v8 v8.4.4/go.mod h1:1btQEpgT6k+unzCwX1KdWMEwPPkkgBtP
github.com/jcmturner/rpc/v2 v2.0.3 h1:7FXXj8Ti1IaVFpSAziCZWNzbNuZmnvw/i6CqLNdWfZY=
github.com/jcmturner/rpc/v2 v2.0.3/go.mod h1:VUJYCIDm3PVOEHw8sgt091/20OJjskO/YJki3ELg/Hc=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
-github.com/jinzhu/copier v0.4.0 h1:w3ciUoD19shMCRargcpm0cm91ytaBhDvuRpz1ODO/U8=
-github.com/jinzhu/copier v0.4.0/go.mod h1:DfbEm0FYsaqBcKcFuvmOZb218JkPGtvSHsKg8S8hyyg=
+github.com/jinzhu/copier v0.3.5 h1:GlvfUwHk62RokgqVNvYsku0TATCF7bAHVwEXoBh3iJg=
+github.com/jinzhu/copier v0.3.5/go.mod h1:DfbEm0FYsaqBcKcFuvmOZb218JkPGtvSHsKg8S8hyyg=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
@@ -250,22 +204,18 @@ github.com/kevinburke/ssh_config v0.0.0-20190725054713-01f96b0aa0cd/go.mod h1:CT
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
-github.com/klauspost/compress v1.16.7 h1:2mk3MPGNzKyxErAw8YaohYh69+pa4sIQSC0fPGCFR9I=
-github.com/klauspost/compress v1.16.7/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
+github.com/klauspost/compress v1.17.4 h1:Ej5ixsIri7BrIjBkRZLTo6ghwrEtHFk7ijlczPW4fZ4=
+github.com/klauspost/compress v1.17.4/go.mod h1:/dCuZOvVtNoHsyb+cuJD3itjs3NbnF6KH9zAO4BDxPM=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
-github.com/klauspost/cpuid/v2 v2.2.5 h1:0E5MSMDEoAulmXNFquVs//DdoomxaoTY1kUhbc/qbZg=
-github.com/klauspost/cpuid/v2 v2.2.5/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
+github.com/klauspost/cpuid/v2 v2.2.6 h1:ndNyv040zDGIDh8thGkXYjnFtiN02M1PVVF+JE/48xc=
+github.com/klauspost/cpuid/v2 v2.2.6/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
-github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
-github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
-github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
-github.com/leodido/go-urn v1.2.1/go.mod h1:zt4jvISO2HfUBqxjfIshjdMTYS56ZS/qv49ictyFfxY=
github.com/leodido/go-urn v1.2.4 h1:XlAE/cm/ms7TE/VMVoduSpNBoyc2dOxHs5MZSwAN63Q=
github.com/leodido/go-urn v1.2.4/go.mod h1:7ZrI8mTSeBSHl/UaRyKQW1qZeMgak41ANeCNaVckg+4=
github.com/lestrrat-go/envload v0.0.0-20180220234015-a3eb8ddeffcc h1:RKf14vYWi2ttpEmkA4aQ3j4u9dStX2t4M8UM6qqNsG8=
@@ -322,13 +272,11 @@ github.com/pelletier/go-toml/v2 v2.0.8 h1:0ctb6s9mE31h0/lhu+J6OPmVeDxJn+kYnJc2jZ
github.com/pelletier/go-toml/v2 v2.0.8/go.mod h1:vuYfssBdrU2XDZ9bYydBu6t+6a6PYNcZljzZR9VXg+4=
github.com/pierrec/lz4/v4 v4.1.18 h1:xaKrnTkyoqfh1YItXl56+6KJNVYWlEEPuAQW9xsplYQ=
github.com/pierrec/lz4/v4 v4.1.18/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
-github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/prashantv/gostub v1.1.0 h1:BTyx3RfQjRHnUWaGF9oQos79AlQ5k8WNktv7VGvVH4g=
github.com/prometheus/client_golang v1.17.0 h1:rl2sfwZMtSthVU752MqfjQozy7blglC+1SOtjMAMh+Q=
github.com/prometheus/client_golang v1.17.0/go.mod h1:VeL+gMmOAxkS2IqfCq0ZmHSL+LjWfWDUmp1mBz9JgUY=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -338,18 +286,12 @@ github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdO
github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY=
github.com/prometheus/procfs v0.11.1 h1:xRC8Iq1yyca5ypa9n1EZnWZkt7dwcoRPQwX/5gwaUuI=
github.com/prometheus/procfs v0.11.1/go.mod h1:eesXgaPo1q7lBpVMoMy0ZOFTth9hBn4W/y0/p/ScXhY=
-github.com/qiniu/dyn v1.3.0/go.mod h1:E8oERcm8TtwJiZvkQPbcAh0RL8jO1G0VXJMW3FAWdkk=
-github.com/qiniu/go-sdk/v7 v7.18.2 h1:vk9eo5OO7aqgAOPF0Ytik/gt7CMKuNgzC/IPkhda6rk=
-github.com/qiniu/go-sdk/v7 v7.18.2/go.mod h1:nqoYCNo53ZlGA521RvRethvxUDvXKt4gtYXOwye868w=
-github.com/qiniu/x v1.10.5/go.mod h1:03Ni9tj+N2h2aKnAz+6N0Xfl8FwMEDRC2PAlxekASDs=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 h1:N/ElC8H3+5XpJzTSTfLsJV/mx9Q9g7kxmchpfZyxgzM=
github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/redis/go-redis/v9 v9.2.1 h1:WlYJg71ODF0dVspZZCpYmoF1+U1Jjk9Rwd7pq6QmlCg=
github.com/redis/go-redis/v9 v9.2.1/go.mod h1:hdY0cQFCN4fnSYT6TkisLufl/4W5UIXyv0b/CLO2V2M=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
-github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
-github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE=
github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
github.com/rs/xid v1.5.0 h1:mKX4bl4iPYJtEIxp6CYiUuLQ/8DYMoz0PUdtGgMFRVc=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
@@ -364,6 +306,8 @@ github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/src-d/gcfg v1.4.0 h1:xXbNR5AlLSA315x2UO+fTSSAXCDf+Ar38/6oyGbDKQ4=
github.com/src-d/gcfg v1.4.0/go.mod h1:p/UMsR43ujA89BJY9duynAwIpvqEujIH/jFlfL7jWoI=
+github.com/stathat/consistent v1.0.0 h1:ZFJ1QTRn8npNBKW065raSZ8xfOqhpb8vLOkfp4CcL/U=
+github.com/stathat/consistent v1.0.0/go.mod h1:uajTPbgSygZBJ+V+0mY7meZ8i0XAcZs7AQ6V121XSxw=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
@@ -372,7 +316,6 @@ github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpE
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
-github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
@@ -408,11 +351,9 @@ go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
-go.uber.org/automaxprocs v1.5.3 h1:kWazyxZUrS3Gs4qUpbwo5kEIMGe/DAvi5Z4tl2NW4j8=
-go.uber.org/automaxprocs v1.5.3/go.mod h1:eRbA25aqJrxAbsLO0xy5jVwPt7FQnRgjW+efnwa1WM0=
go.uber.org/goleak v1.1.11 h1:wy28qYRKZgnJTxGxvye5/wgWr1EKjmUDGYox5mGlRlI=
-go.uber.org/multierr v1.6.0 h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4=
-go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
+go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
+go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.24.0 h1:FiJd5l1UOLj0wCgbSE0rwwXHzEdAZS6hiiSnxJN/D60=
go.uber.org/zap v1.24.0/go.mod h1:2kMP+WWQ8aoFoedH3T2sq6iJ2yDWpHbP0f6MQbS9Gkg=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
@@ -423,13 +364,11 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
-golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
-golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc=
-golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
+golang.org/x/crypto v0.17.0 h1:r8bRNjWL3GshPW3gkd+RpvzWrZAwPS49OmTGZ/uhM4k=
+golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/image v0.13.0 h1:3cge/F/QTkNLauhf2QoE9zp+7sr+ZcL4HnoZmdwg9sg=
golang.org/x/image v0.13.0/go.mod h1:6mmbMOeV28HuMTgA6OSRkdXKYw/t5W9Uwn2Yv1r3Yxk=
@@ -457,11 +396,10 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
-golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
-golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
-golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
+golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c=
+golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.13.0 h1:jDDenyj+WgFtmV3zYVoi8aE2BwtXFLWOA67ZfNWftiY=
golang.org/x/oauth2 v0.13.0/go.mod h1:/JMhi4ZRXAf4HG9LiNmxvk+45+96RUlVThiH8FzNBn0=
@@ -490,33 +428,29 @@ golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
-golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
+golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
-golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
-golang.org/x/term v0.13.0 h1:bb+I9cTfFazGW51MZqBVmZy7+JEJMouUHTUSKVQLBek=
+golang.org/x/term v0.15.0 h1:y/Oo/a/q3IXu26lQgl04j/gjuBDOBlx7X6Om1j2CPW4=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
-golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
-golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
-golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
-golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
-golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
+golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
+golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
+golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
@@ -572,8 +506,6 @@ google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqw
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
-gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
-gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
@@ -593,16 +525,14 @@ gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
-gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
-gorm.io/driver/mysql v1.5.2 h1:QC2HRskSE75wBuOxe0+iCkyJZ+RqpudsQtqkp+IMuXs=
-gorm.io/driver/mysql v1.5.2/go.mod h1:pQLhh1Ut/WUAySdTHwBpBv6+JKcj+ua4ZFx1QQTBzb8=
-gorm.io/gorm v1.25.2-0.20230530020048-26663ab9bf55/go.mod h1:L4uxeKpfBml98NYqVqwAdmV1a2nBtAec/cf3fpucW/k=
-gorm.io/gorm v1.25.5 h1:zR9lOiiYf09VNh5Q1gphfyia1JpiClIWG9hQaxB/mls=
-gorm.io/gorm v1.25.5/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
+gorm.io/gorm v1.25.4 h1:iyNd8fNAe8W9dvtlgeRI5zSVZPsq3OpcTu37cYcpCmw=
+gorm.io/gorm v1.25.4/go.mod h1:L4uxeKpfBml98NYqVqwAdmV1a2nBtAec/cf3fpucW/k=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
+stathat.com/c/consistent v1.0.0 h1:ezyc51EGcRPJUxfHGSgJjWzJdj3NiMU9pNfLNGiXV0c=
+stathat.com/c/consistent v1.0.0/go.mod h1:QkzMWzcbB+yQBL2AttO6sgsQS/JSTapcDISJalmCDS0=
diff --git a/go.work b/go.work
index 1c819212c..97d2816d6 100644
--- a/go.work
+++ b/go.work
@@ -4,13 +4,15 @@ use (
.
./test/typecheck
./tools/changelog
- //./tools/imctl
+ ./tools/component
+ ./tools/data-conversion
+ ./tools/formitychecker
+ ./tools/imctl
./tools/infra
./tools/ncpu
./tools/openim-web
+ ./tools/up35
+ ./tools/url2im
./tools/versionchecker
./tools/yamlfmt
- ./tools/component
- ./tools/url2im
- ./tools/data-conversion
)
diff --git a/install.sh b/install.sh
index 9318c33ba..7ff0f8739 100755
--- a/install.sh
+++ b/install.sh
@@ -63,7 +63,7 @@ PROXY=
GITHUB_TOKEN=
# Default user is "root". If you need to modify it, uncomment and replace accordingly.
-# USER=root
+# OPENIM_USER=root
# Default password for redis, mysql, mongo, as well as accessSecret in config/config.yaml.
# Remember, it should be a combination of 8 or more numbers and letters. If you want to set a different password, uncomment and replace "openIM123".
@@ -244,10 +244,10 @@ function download_source_code() {
function set_openim_env() {
warn "This command can only be executed once. It will modify the component passwords in docker-compose based on the PASSWORD variable in .env, and modify the component passwords in config/config.yaml. If the password in .env changes, you need to first execute docker-compose down; rm components -rf and then execute this command."
# Set default values for user input
- # If the USER environment variable is not set, it defaults to 'root'
- if [ -z "$USER" ]; then
- USER="root"
- debug "USER is not set. Defaulting to 'root'."
+ # If the OPENIM_USER environment variable is not set, it defaults to 'root'
+ if [ -z "$OPENIM_USER" ]; then
+ OPENIM_USER="root"
+ debug "OPENIM_USER is not set. Defaulting to 'root'."
fi
# If the PASSWORD environment variable is not set, it defaults to 'openIM123'
@@ -321,7 +321,7 @@ function cmd_help() {
function parseinput() {
# set default values
- # USER=root
+ # OPENIM_USER=root
# PASSWORD=openIM123
# ENDPOINT=http://127.0.0.1:10005
# API=http://127.0.0.1:10002/object/
@@ -347,7 +347,7 @@ function parseinput() {
;;
-u|--user)
shift
- USER=$1
+ OPENIM_USER=$1
;;
-p|--password)
shift
diff --git a/install_guide.sh b/install_guide.sh
deleted file mode 100755
index b10ab2edd..000000000
--- a/install_guide.sh
+++ /dev/null
@@ -1,176 +0,0 @@
-#!/usr/bin/env bash
-
-echo "Welcome to the Open-IM-Server installation scripts."
-echo "Please select an deploy option:"
-echo "1. docker-compose install"
-echo "2. exit"
-
-clear_openimlog() {
- rm -rf ./logs/*
-}
-
-is_path() {
- if [ -e "$1" ]; then
- return 1
- else
- return 0
- fi
-}
-
-is_empty() {
- if [ -z "$1" ]; then
- return 1
- else
- return 0
- fi
-}
-
-is_directory_exists() {
- if [ -d "$1" ]; then
- return 1
- else
- return 0
- fi
-}
-
-edit_config() {
- echo "Is edit config.yaml?"
- echo "1. vi edit config"
- echo "2. do not edit config"
- read choice
- case $choice in
- 1)
- vi config/config.yaml
- ;;
- 2)
- echo "do not edit config"
- ;;
- esac
-}
-
-edit_enterprise_config() {
- echo "Is edit enterprise config.yaml?"
- echo "1. vi edit enterprise config"
- echo "2. do not edit enterprise config"
- read choice
- case $choice in
- 1)
- vi ./.docker-compose_cfg/config.yaml
- ;;
- 2)
- echo "Do not edit enterprise config"
- ;;
- esac
-}
-
-install_docker_compose() {
- echo "Please input the installation path, default is $(pwd)/Open-IM-Server, press enter to use default"
- read install_path
- is_empty $install_path
- if [ $? -eq 1 ]; then
- install_path="."
- fi
- echo "Installing Open-IM-Server to ${install_path}/Open-IM-Server..."
- is_path $install_path
- mkdir -p $install_path
- cd $install_path
- is_directory_exists "${install_path}/Open-IM-Server"
- if [ $? -eq 1 ]; then
- echo "WARNING: Directory $install_path/Open-IM-Server exist, please ensure your path"
- echo "1. delete the directory and install"
- echo "2. exit"
- read choice
- case $choice in
- 1)
- rm -rf "${install_path}/Open-IM-Server"
- ;;
- 2)
- exit 1
- ;;
- esac
- fi
- rm -rf ./Open-IM-Server
- set -e
- git clone https://github.com/openimsdk/open-im-server.git --recursive;
- set +e
- cd ./Open-IM-Server
- git checkout errcode
- echo "======== git clone success ========"
- source .env
- if [ $DATA_DIR = "./" ]; then
- DATA_DIR=$(pwd)/components
- fi
- echo "Please input the components data directory, deault is ${DATA_DIR}, press enter to use default"
- read NEW_DATA_DIR
- is_empty $NEW_DATA_DIR
- if [ $? -eq 0 ]; then
- DATA_DIR=$NEW_DATA_DIR
- fi
- echo "Please input the user, deault is root, press enter to use default"
- read NEW_USER
- is_empty $NEW_USER
- if [ $? -eq 0 ]; then
- USER=$NEW_USER
- fi
-
- echo "Please input the password, default is openIM123, press enter to use default"
- read NEW_PASSWORD
- is_empty $NEW_PASSWORD
- if [ $? -eq 0 ]; then
- PASSWORD=$NEW_PASSWORD
- fi
-
- echo "Please input the minio_endpoint, default will detect auto, press enter to use default"
- read NEW_MINIO_ENDPOINT
- is_empty $NEW_MINIO_ENDPOINT
- if [ $? -eq 1 ]; then
- internet_ip=`curl ifconfig.me -s`
- MINIO_ENDPOINT="http://${internet_ip}:10005"
- else
- MINIO_ENDPOINT=$NEW_MINIO_ENDPOINT
- fi
- set -e
- export MINIO_ENDPOINT
- export USER
- export PASSWORD
- export DATA_DIR
-
- cat < .env
-USER=${USER}
-PASSWORD=${PASSWORD}
-MINIO_ENDPOINT=${MINIO_ENDPOINT}
-DATA_DIR=${DATA_DIR}
-EOF
-
- edit_config
- edit_enterprise_config
-
- cd scripts;
- chmod +x *.sh;
- ./init-pwd.sh;
- ./env_check.sh;
- cd ..;
- docker-compose up -d;
- cd scripts;
- ./docker-check-service.sh;
-}
-
-read choice
-
-case $choice in
- 1)
- install_docker_compose
- ;;
- 2)
-
- ;;
- 3)
- ;;
- 4)
- echo "Exiting installation scripts..."
- exit 0
- ;;
- *)
- echo "Invalid option, please try again."
- ;;
-esac
diff --git a/internal/api/auth.go b/internal/api/auth.go
index 44a97a013..88539f63a 100644
--- a/internal/api/auth.go
+++ b/internal/api/auth.go
@@ -33,6 +33,10 @@ func (o *AuthApi) UserToken(c *gin.Context) {
a2r.Call(auth.AuthClient.UserToken, o.Client, c)
}
+func (o *AuthApi) GetUserToken(c *gin.Context) {
+ a2r.Call(auth.AuthClient.GetUserToken, o.Client, c)
+}
+
func (o *AuthApi) ParseToken(c *gin.Context) {
a2r.Call(auth.AuthClient.ParseToken, o.Client, c)
}
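
The new `GetUserToken` handler above, like the other API endpoints, is a one-liner that delegates to the generic `a2r.Call` adapter from OpenIMSDK/tools. As a rough, hypothetical sketch of what such an adapter does (not the actual `a2r` implementation; the error codes and response envelope here are assumptions), the pattern is: bind the JSON body into the RPC request type, invoke the gRPC method expression, and write the reply back through the Gin context.

```go
package a2rsketch

import (
	"context"
	"net/http"

	"github.com/gin-gonic/gin"
	"google.golang.org/grpc"
)

// Call is a hypothetical a2r-style adapter, not the OpenIMSDK implementation:
// bind the JSON body, call the RPC method expression (e.g.
// auth.AuthClient.GetUserToken), and write the reply or the error.
func Call[C any, Req any, Resp any](
	rpc func(client C, ctx context.Context, req *Req, opts ...grpc.CallOption) (*Resp, error),
	client C,
	c *gin.Context,
) {
	var req Req
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"errCode": 1001, "errMsg": err.Error()})
		return
	}
	// *gin.Context itself satisfies context.Context, so deadlines and
	// per-request values flow straight into the RPC call.
	resp, err := rpc(client, c, &req)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"errCode": 500, "errMsg": err.Error()})
		return
	}
	c.JSON(http.StatusOK, gin.H{"errCode": 0, "errMsg": "", "data": resp})
}
```

With that shape, `a2r.Call(auth.AuthClient.GetUserToken, o.Client, c)` is all a handler body needs, which is why each new endpoint in this patch is a single line.
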
diff --git a/internal/api/conversation.go b/internal/api/conversation.go
index e422de677..eb735e550 100644
--- a/internal/api/conversation.go
+++ b/internal/api/conversation.go
@@ -33,6 +33,10 @@ func (o *ConversationApi) GetAllConversations(c *gin.Context) {
a2r.Call(conversation.ConversationClient.GetAllConversations, o.Client, c)
}
+func (o *ConversationApi) GetSortedConversationList(c *gin.Context) {
+ a2r.Call(conversation.ConversationClient.GetSortedConversationList, o.Client, c)
+}
+
func (o *ConversationApi) GetConversation(c *gin.Context) {
a2r.Call(conversation.ConversationClient.GetConversation, o.Client, c)
}
diff --git a/internal/api/friend.go b/internal/api/friend.go
index 23f337a9f..7dc898a02 100644
--- a/internal/api/friend.go
+++ b/internal/api/friend.go
@@ -92,3 +92,6 @@ func (o *FriendApi) GetFriendIDs(c *gin.Context) {
func (o *FriendApi) GetSpecifiedFriendsInfo(c *gin.Context) {
a2r.Call(friend.FriendClient.GetSpecifiedFriendsInfo, o.Client, c)
}
+func (o *FriendApi) UpdateFriends(c *gin.Context) {
+ a2r.Call(friend.FriendClient.UpdateFriends, o.Client, c)
+}
diff --git a/internal/api/msg.go b/internal/api/msg.go
index 38e207cfb..9348596ac 100644
--- a/internal/api/msg.go
+++ b/internal/api/msg.go
@@ -15,14 +15,6 @@
package api
import (
- "github.com/OpenIMSDK/tools/mcontext"
- "github.com/gin-gonic/gin"
- "github.com/go-playground/validator/v10"
- "github.com/mitchellh/mapstructure"
-
- "github.com/openimsdk/open-im-server/v3/pkg/authverify"
- "github.com/openimsdk/open-im-server/v3/pkg/common/config"
-
"github.com/OpenIMSDK/protocol/constant"
"github.com/OpenIMSDK/protocol/msg"
"github.com/OpenIMSDK/protocol/sdkws"
@@ -30,7 +22,14 @@ import (
"github.com/OpenIMSDK/tools/apiresp"
"github.com/OpenIMSDK/tools/errs"
"github.com/OpenIMSDK/tools/log"
+ "github.com/OpenIMSDK/tools/mcontext"
"github.com/OpenIMSDK/tools/utils"
+ "github.com/gin-gonic/gin"
+ "github.com/go-playground/validator/v10"
+ "github.com/mitchellh/mapstructure"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/authverify"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/apistruct"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
@@ -152,7 +151,7 @@ func (m *MessageApi) DeleteMsgPhysical(c *gin.Context) {
}
func (m *MessageApi) getSendMsgReq(c *gin.Context, req apistruct.SendMsg) (sendMsgReq *msg.SendMsgReq, err error) {
- var data interface{}
+ var data any
log.ZDebug(c, "getSendMsgReq", "req", req.Content)
switch req.ContentType {
case constant.Text:
@@ -165,14 +164,15 @@ func (m *MessageApi) getSendMsgReq(c *gin.Context, req apistruct.SendMsg) (sendM
data = apistruct.VideoElem{}
case constant.File:
data = apistruct.FileElem{}
+ case constant.AtText:
+ data = apistruct.AtElem{}
case constant.Custom:
data = apistruct.CustomElem{}
case constant.OANotification:
data = apistruct.OANotificationElem{}
req.SessionType = constant.NotificationChatType
- if !authverify.IsManagerUserID(req.SendID) {
- return nil, errs.ErrNoPermission.
- Wrap("only app manager can as sender send OANotificationElem")
+ if err = m.userRpcClient.GetNotificationByID(c, req.SendID); err != nil {
+ return nil, err
}
default:
return nil, errs.ErrArgs.WithDetail("not support err contentType")
@@ -187,38 +187,63 @@ func (m *MessageApi) getSendMsgReq(c *gin.Context, req apistruct.SendMsg) (sendM
return m.newUserSendMsgReq(c, &req), nil
}
+// SendMessage handles the sending of a message. It's an HTTP handler function to be used with Gin framework.
func (m *MessageApi) SendMessage(c *gin.Context) {
+ // Initialize a request struct for sending a message.
req := apistruct.SendMsgReq{}
+
+ // Bind the JSON request body to the request struct.
if err := c.BindJSON(&req); err != nil {
+ // Respond with an error if request body binding fails.
apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
return
}
+
+ // Check if the user has the app manager role.
if !authverify.IsAppManagerUid(c) {
+ // Respond with a permission error if the user is not an app manager.
apiresp.GinError(c, errs.ErrNoPermission.Wrap("only app manager can send message"))
return
}
+
+ // Prepare the message request with additional required data.
sendMsgReq, err := m.getSendMsgReq(c, req.SendMsg)
if err != nil {
+ // Log and respond with an error if preparation fails.
log.ZError(c, "decodeData failed", err)
apiresp.GinError(c, err)
return
}
+
+ // Set the receiver ID in the message data.
sendMsgReq.MsgData.RecvID = req.RecvID
+
+ // Declare a variable to store the message sending status.
var status int
+
+ // Attempt to send the message using the client.
respPb, err := m.Client.SendMsg(c, sendMsgReq)
if err != nil {
+ // Set the status to failed and respond with an error if sending fails.
status = constant.MsgSendFailed
log.ZError(c, "send message err", err)
apiresp.GinError(c, err)
return
}
+
+ // Set the status to successful if the message is sent.
status = constant.MsgSendSuccessed
+
+ // Attempt to update the message sending status in the system.
_, err = m.Client.SetSendMsgStatus(c, &msg.SetSendMsgStatusReq{
Status: int32(status),
})
if err != nil {
+ // Log the error if updating the status fails.
log.ZError(c, "SetSendMsgStatus failed", err)
}
+
+ // Respond with a success message and the response payload.
apiresp.GinSuccess(c, respPb)
}
@@ -226,13 +251,14 @@ func (m *MessageApi) SendBusinessNotification(c *gin.Context) {
req := struct {
Key string `json:"key"`
Data string `json:"data"`
- SendUserID string `json:"sendUserID"`
- RecvUserID string `json:"recvUserID"`
+ SendUserID string `json:"sendUserID" binding:"required"`
+ RecvUserID string `json:"recvUserID" binding:"required"`
}{}
if err := c.BindJSON(&req); err != nil {
apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
return
}
+
if !authverify.IsAppManagerUid(c) {
apiresp.GinError(c, errs.ErrNoPermission.Wrap("only app manager can send message"))
return
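
The `getSendMsgReq` switch above now also recognizes `constant.AtText` and maps it to `apistruct.AtElem` before the content map is decoded (the file imports `mitchellh/mapstructure` for this). A minimal sketch of that dispatch-then-decode step, using hypothetical element structs and illustrative constant values rather than the real `apistruct`/`constant` definitions:

```go
package main

import (
	"fmt"

	"github.com/mitchellh/mapstructure"
)

// Hypothetical stand-ins for the apistruct element types; the real field sets
// in pkg/apistruct may differ.
type TextElem struct {
	Content string `mapstructure:"content"`
}

type AtElem struct {
	Text       string   `mapstructure:"text"`
	AtUserList []string `mapstructure:"atUserList"`
}

// decodeContent mirrors the shape of getSendMsgReq: choose a target struct by
// content type, then let mapstructure fill it from the raw request map.
func decodeContent(contentType int, content map[string]any) (any, error) {
	var data any
	switch contentType {
	case 101: // text (constant value assumed for illustration)
		data = &TextElem{}
	case 106: // @-text (constant value assumed for illustration)
		data = &AtElem{}
	default:
		return nil, fmt.Errorf("unsupported content type %d", contentType)
	}
	if err := mapstructure.WeakDecode(content, data); err != nil {
		return nil, err
	}
	return data, nil
}

func main() {
	elem, err := decodeContent(106, map[string]any{
		"text":       "@alice hello",
		"atUserList": []string{"alice"},
	})
	fmt.Printf("%+v %v\n", elem, err)
}
```
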
diff --git a/internal/api/route.go b/internal/api/route.go
index 7a331d643..24ed5f6bb 100644
--- a/internal/api/route.go
+++ b/internal/api/route.go
@@ -16,6 +16,7 @@ package api
import (
"context"
+ "fmt"
"net/http"
"github.com/OpenIMSDK/protocol/constant"
@@ -43,7 +44,7 @@ import (
)
func NewGinRouter(discov discoveryregistry.SvcDiscoveryRegistry, rdb redis.UniversalClient) *gin.Engine {
-	discov.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials())) // default RPC middleware
+	discov.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"LoadBalancingPolicy": "%s"}`, "round_robin"))) // default RPC middleware
gin.SetMode(gin.ReleaseMode)
r := gin.New()
if v, ok := binding.Validator.Engine().(*validator.Validate); ok {
@@ -67,6 +68,7 @@ func NewGinRouter(discov discoveryregistry.SvcDiscoveryRegistry, rdb redis.Unive
{
userRouterGroup.POST("/user_register", u.UserRegister)
userRouterGroup.POST("/update_user_info", ParseToken, u.UpdateUserInfo)
+ userRouterGroup.POST("/update_user_info_ex", ParseToken, u.UpdateUserInfoEx)
userRouterGroup.POST("/set_global_msg_recv_opt", ParseToken, u.SetGlobalRecvMessageOpt)
userRouterGroup.POST("/get_users_info", ParseToken, u.GetUsersPublicInfo)
userRouterGroup.POST("/get_all_users_uid", ParseToken, u.GetAllUsersID)
@@ -77,6 +79,16 @@ func NewGinRouter(discov discoveryregistry.SvcDiscoveryRegistry, rdb redis.Unive
userRouterGroup.POST("/subscribe_users_status", ParseToken, u.SubscriberStatus)
userRouterGroup.POST("/get_users_status", ParseToken, u.GetUserStatus)
userRouterGroup.POST("/get_subscribe_users_status", ParseToken, u.GetSubscribeUsersStatus)
+
+ userRouterGroup.POST("/process_user_command_add", ParseToken, u.ProcessUserCommandAdd)
+ userRouterGroup.POST("/process_user_command_delete", ParseToken, u.ProcessUserCommandDelete)
+ userRouterGroup.POST("/process_user_command_update", ParseToken, u.ProcessUserCommandUpdate)
+ userRouterGroup.POST("/process_user_command_get", ParseToken, u.ProcessUserCommandGet)
+ userRouterGroup.POST("/process_user_command_get_all", ParseToken, u.ProcessUserCommandGetAll)
+
+ userRouterGroup.POST("/add_notification_account", ParseToken, u.AddNotificationAccount)
+ userRouterGroup.POST("/update_notification_account", ParseToken, u.UpdateNotificationAccountInfo)
+ userRouterGroup.POST("/search_notification_account", ParseToken, u.SearchNotificationAccount)
}
// friend routing group
friendRouterGroup := r.Group("/friend", ParseToken)
@@ -98,6 +110,7 @@ func NewGinRouter(discov discoveryregistry.SvcDiscoveryRegistry, rdb redis.Unive
friendRouterGroup.POST("/is_friend", f.IsFriend)
friendRouterGroup.POST("/get_friend_id", f.GetFriendIDs)
friendRouterGroup.POST("/get_specified_friends_info", f.GetSpecifiedFriendsInfo)
+ friendRouterGroup.POST("/update_friends", f.UpdateFriends)
}
g := NewGroupApi(*groupRpc)
groupRouterGroup := r.Group("/group", ParseToken)
@@ -137,6 +150,7 @@ func NewGinRouter(discov discoveryregistry.SvcDiscoveryRegistry, rdb redis.Unive
{
a := NewAuthApi(*authRpc)
authRouterGroup.POST("/user_token", a.UserToken)
+ authRouterGroup.POST("/get_user_token", ParseToken, a.GetUserToken)
authRouterGroup.POST("/parse_token", a.ParseToken)
authRouterGroup.POST("/force_logout", ParseToken, a.ForceLogout)
}
@@ -161,6 +175,8 @@ func NewGinRouter(discov discoveryregistry.SvcDiscoveryRegistry, rdb redis.Unive
objectGroup.POST("/auth_sign", t.AuthSign)
objectGroup.POST("/complete_multipart_upload", t.CompleteMultipartUpload)
objectGroup.POST("/access_url", t.AccessURL)
+ objectGroup.POST("/initiate_form_data", t.InitiateFormData)
+ objectGroup.POST("/complete_form_data", t.CompleteFormData)
objectGroup.GET("/*name", t.ObjectRedirect)
}
// Message
@@ -191,6 +207,7 @@ func NewGinRouter(discov discoveryregistry.SvcDiscoveryRegistry, rdb redis.Unive
conversationGroup := r.Group("/conversation", ParseToken)
{
c := NewConversationApi(*conversationRpc)
+ conversationGroup.POST("/get_sorted_conversation_list", c.GetSortedConversationList)
conversationGroup.POST("/get_all_conversations", c.GetAllConversations)
conversationGroup.POST("/get_conversation", c.GetConversation)
conversationGroup.POST("/get_conversations", c.GetConversations)
diff --git a/internal/api/third.go b/internal/api/third.go
index 5191903da..0a1ef0fbe 100644
--- a/internal/api/third.go
+++ b/internal/api/third.go
@@ -71,6 +71,14 @@ func (o *ThirdApi) AccessURL(c *gin.Context) {
a2r.Call(third.ThirdClient.AccessURL, o.Client, c)
}
+func (o *ThirdApi) InitiateFormData(c *gin.Context) {
+ a2r.Call(third.ThirdClient.InitiateFormData, o.Client, c)
+}
+
+func (o *ThirdApi) CompleteFormData(c *gin.Context) {
+ a2r.Call(third.ThirdClient.CompleteFormData, o.Client, c)
+}
+
func (o *ThirdApi) ObjectRedirect(c *gin.Context) {
name := c.Param("name")
if name == "" {
@@ -122,5 +130,5 @@ func (o *ThirdApi) SearchLogs(c *gin.Context) {
}
func GetPrometheus(c *gin.Context) {
- c.Redirect(http.StatusFound, config2.Config.Prometheus.PrometheusUrl)
+ c.Redirect(http.StatusFound, config2.Config.Prometheus.GrafanaUrl)
}
diff --git a/internal/api/user.go b/internal/api/user.go
index 86b7c0b0b..e7bbd4bfb 100644
--- a/internal/api/user.go
+++ b/internal/api/user.go
@@ -15,8 +15,6 @@
package api
import (
- "github.com/gin-gonic/gin"
-
"github.com/OpenIMSDK/protocol/constant"
"github.com/OpenIMSDK/protocol/msggateway"
"github.com/OpenIMSDK/protocol/user"
@@ -24,6 +22,7 @@ import (
"github.com/OpenIMSDK/tools/apiresp"
"github.com/OpenIMSDK/tools/errs"
"github.com/OpenIMSDK/tools/log"
+ "github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
@@ -42,7 +41,9 @@ func (u *UserApi) UserRegister(c *gin.Context) {
func (u *UserApi) UpdateUserInfo(c *gin.Context) {
a2r.Call(user.UserClient.UpdateUserInfo, u.Client, c)
}
-
+func (u *UserApi) UpdateUserInfoEx(c *gin.Context) {
+ a2r.Call(user.UserClient.UpdateUserInfoEx, u.Client, c)
+}
func (u *UserApi) SetGlobalRecvMessageOpt(c *gin.Context) {
a2r.Call(user.UserClient.SetGlobalRecvMessageOpt, u.Client, c)
}
@@ -199,3 +200,40 @@ func (u *UserApi) GetUserStatus(c *gin.Context) {
func (u *UserApi) GetSubscribeUsersStatus(c *gin.Context) {
a2r.Call(user.UserClient.GetSubscribeUsersStatus, u.Client, c)
}
+
+// ProcessUserCommandAdd user general function add.
+func (u *UserApi) ProcessUserCommandAdd(c *gin.Context) {
+ a2r.Call(user.UserClient.ProcessUserCommandAdd, u.Client, c)
+}
+
+// ProcessUserCommandDelete user general function delete.
+func (u *UserApi) ProcessUserCommandDelete(c *gin.Context) {
+ a2r.Call(user.UserClient.ProcessUserCommandDelete, u.Client, c)
+}
+
+// ProcessUserCommandUpdate user general function update.
+func (u *UserApi) ProcessUserCommandUpdate(c *gin.Context) {
+ a2r.Call(user.UserClient.ProcessUserCommandUpdate, u.Client, c)
+}
+
+// ProcessUserCommandGet user general function get.
+func (u *UserApi) ProcessUserCommandGet(c *gin.Context) {
+ a2r.Call(user.UserClient.ProcessUserCommandGet, u.Client, c)
+}
+
+// ProcessUserCommandGetAll user general function get all.
+func (u *UserApi) ProcessUserCommandGetAll(c *gin.Context) {
+ a2r.Call(user.UserClient.ProcessUserCommandGetAll, u.Client, c)
+}
+
+func (u *UserApi) AddNotificationAccount(c *gin.Context) {
+ a2r.Call(user.UserClient.AddNotificationAccount, u.Client, c)
+}
+
+func (u *UserApi) UpdateNotificationAccountInfo(c *gin.Context) {
+ a2r.Call(user.UserClient.UpdateNotificationAccountInfo, u.Client, c)
+}
+
+func (u *UserApi) SearchNotificationAccount(c *gin.Context) {
+ a2r.Call(user.UserClient.SearchNotificationAccount, u.Client, c)
+}
diff --git a/internal/msggateway/client.go b/internal/msggateway/client.go
index 69b49d81a..43047fd73 100644
--- a/internal/msggateway/client.go
+++ b/internal/msggateway/client.go
@@ -87,6 +87,7 @@ func newClient(ctx *UserConnContext, conn LongConn, isCompress bool) *Client {
}
}
+// ResetClient updates the client's state with new connection and context information.
func (c *Client) ResetClient(
ctx *UserConnContext,
conn LongConn,
@@ -108,11 +109,13 @@ func (c *Client) ResetClient(
c.token = token
}
+// pingHandler handles ping messages and sends pong responses.
func (c *Client) pingHandler(_ string) error {
_ = c.conn.SetReadDeadline(pongWait)
return c.writePongMsg()
}
+// readMessage continuously reads messages from the connection.
func (c *Client) readMessage() {
defer func() {
if r := recover(); r != nil {
@@ -164,6 +167,7 @@ func (c *Client) readMessage() {
}
}
+// handleMessage processes a single message received by the client.
func (c *Client) handleMessage(message []byte) error {
if c.IsCompress {
var err error
diff --git a/internal/msggateway/compressor_test.go b/internal/msggateway/compressor_test.go
index d41c57bf3..b1544f063 100644
--- a/internal/msggateway/compressor_test.go
+++ b/internal/msggateway/compressor_test.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package msggateway
import (
diff --git a/internal/msggateway/constant.go b/internal/msggateway/constant.go
index fe5f09bdc..045629b4e 100644
--- a/internal/msggateway/constant.go
+++ b/internal/msggateway/constant.go
@@ -26,6 +26,7 @@ const (
Compression = "compression"
GzipCompressionProtocol = "gzip"
BackgroundStatus = "isBackground"
+ MsgResp = "isMsgResp"
)
const (
diff --git a/internal/msggateway/encoder.go b/internal/msggateway/encoder.go
index 9791acb39..c5f1d00a8 100644
--- a/internal/msggateway/encoder.go
+++ b/internal/msggateway/encoder.go
@@ -22,8 +22,8 @@ import (
)
type Encoder interface {
- Encode(data interface{}) ([]byte, error)
- Decode(encodeData []byte, decodeData interface{}) error
+ Encode(data any) ([]byte, error)
+ Decode(encodeData []byte, decodeData any) error
}
type GobEncoder struct{}
@@ -32,7 +32,7 @@ func NewGobEncoder() *GobEncoder {
return &GobEncoder{}
}
-func (g *GobEncoder) Encode(data interface{}) ([]byte, error) {
+func (g *GobEncoder) Encode(data any) ([]byte, error) {
buff := bytes.Buffer{}
enc := gob.NewEncoder(&buff)
err := enc.Encode(data)
@@ -42,7 +42,7 @@ func (g *GobEncoder) Encode(data interface{}) ([]byte, error) {
return buff.Bytes(), nil
}
-func (g *GobEncoder) Decode(encodeData []byte, decodeData interface{}) error {
+func (g *GobEncoder) Decode(encodeData []byte, decodeData any) error {
buff := bytes.NewBuffer(encodeData)
dec := gob.NewDecoder(buff)
err := dec.Decode(decodeData)
diff --git a/internal/msggateway/n_ws_server.go b/internal/msggateway/n_ws_server.go
index 99a7a4805..01d92b92a 100644
--- a/internal/msggateway/n_ws_server.go
+++ b/internal/msggateway/n_ws_server.go
@@ -16,7 +16,9 @@ package msggateway
import (
"context"
+ "encoding/json"
"errors"
+ "fmt"
"net/http"
"os"
"os/signal"
@@ -26,6 +28,8 @@ import (
"syscall"
"time"
+ "github.com/OpenIMSDK/tools/apiresp"
+
"github.com/go-playground/validator/v10"
"github.com/redis/go-redis/v9"
"golang.org/x/sync/errgroup"
@@ -49,7 +53,7 @@ type LongConnServer interface {
wsHandler(w http.ResponseWriter, r *http.Request)
GetUserAllCons(userID string) ([]*Client, bool)
GetUserPlatformCons(userID string, platform int) ([]*Client, bool, bool)
- Validate(s interface{}) error
+ Validate(s any) error
SetCacheHandler(cache cache.MsgModel)
SetDiscoveryRegistry(client discoveryregistry.SvcDiscoveryRegistry)
KickUserConn(client *Client) error
@@ -60,6 +64,12 @@ type LongConnServer interface {
MessageHandler
}
+var bufferPool = sync.Pool{
+ New: func() any {
+ return make([]byte, 1024)
+ },
+}
+
type WsServer struct {
port int
wsMaxConnNum int64
@@ -120,7 +130,8 @@ func (ws *WsServer) UnRegister(c *Client) {
ws.unregisterChan <- c
}
-func (ws *WsServer) Validate(s interface{}) error {
+func (ws *WsServer) Validate(s any) error {
+	// TODO: no validation is performed yet; this is currently a no-op.
return nil
}
@@ -144,7 +155,7 @@ func NewWsServer(opts ...Option) (*WsServer, error) {
writeBufferSize: config.writeBufferSize,
handshakeTimeout: config.handshakeTimeout,
clientPool: sync.Pool{
- New: func() interface{} {
+ New: func() any {
return new(Client)
},
},
@@ -281,12 +292,13 @@ func (ws *WsServer) registerClient(client *Client) {
}
wg := sync.WaitGroup{}
- wg.Add(1)
- go func() {
- defer wg.Done()
- _ = ws.sendUserOnlineInfoToOtherNode(client.ctx, client)
- }()
-
+ if config.Config.Envs.Discovery == "zookeeper" {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ _ = ws.sendUserOnlineInfoToOtherNode(client.ctx, client)
+ }()
+ }
wg.Add(1)
go func() {
defer wg.Done()
@@ -334,11 +346,7 @@ func (ws *WsServer) multiTerminalLoginChecker(clientOK bool, oldClients []*Clien
if !clientOK {
return
}
-
- isDeleteUser := ws.clients.deleteClients(newClient.UserID, oldClients)
- if isDeleteUser {
- ws.onlineUserNum.Add(-1)
- }
+ ws.clients.deleteClients(newClient.UserID, oldClients)
for _, c := range oldClients {
err := c.KickOnlineMessage()
if err != nil {
@@ -414,84 +422,102 @@ func (ws *WsServer) unregisterClient(client *Client) {
)
}
-func (ws *WsServer) wsHandler(w http.ResponseWriter, r *http.Request) {
- connContext := newContext(w, r)
+func (ws *WsServer) ParseWSArgs(r *http.Request) (args *WSArgs, err error) {
+ var v WSArgs
+ defer func() {
+ args = &v
+ }()
+ query := r.URL.Query()
+ v.MsgResp, _ = strconv.ParseBool(query.Get(MsgResp))
if ws.onlineUserConnNum.Load() >= ws.wsMaxConnNum {
- httpError(connContext, errs.ErrConnOverMaxNumLimit)
- return
+ return nil, errs.ErrConnOverMaxNumLimit.Wrap("over max conn num limit")
}
- var (
- token string
- userID string
- platformIDStr string
- exists bool
- compression bool
- )
-
- token, exists = connContext.Query(Token)
- if !exists {
- httpError(connContext, errs.ErrConnArgsErr)
- return
+ if v.Token = query.Get(Token); v.Token == "" {
+ return nil, errs.ErrConnArgsErr.Wrap("token is empty")
}
- userID, exists = connContext.Query(WsUserID)
- if !exists {
- httpError(connContext, errs.ErrConnArgsErr)
- return
+ if v.UserID = query.Get(WsUserID); v.UserID == "" {
+ return nil, errs.ErrConnArgsErr.Wrap("sendID is empty")
}
- platformIDStr, exists = connContext.Query(PlatformID)
- if !exists {
- httpError(connContext, errs.ErrConnArgsErr)
- return
+ platformIDStr := query.Get(PlatformID)
+ if platformIDStr == "" {
+ return nil, errs.ErrConnArgsErr.Wrap("platformID is empty")
}
platformID, err := strconv.Atoi(platformIDStr)
if err != nil {
- httpError(connContext, errs.ErrConnArgsErr)
- return
+ return nil, errs.ErrConnArgsErr.Wrap("platformID is not int")
+ }
+ v.PlatformID = platformID
+ if err = authverify.WsVerifyToken(v.Token, v.UserID, platformID); err != nil {
+ return nil, err
+ }
+ if query.Get(Compression) == GzipCompressionProtocol {
+ v.Compression = true
}
- if err = authverify.WsVerifyToken(token, userID, platformID); err != nil {
- httpError(connContext, err)
- return
+ if r.Header.Get(Compression) == GzipCompressionProtocol {
+ v.Compression = true
}
- m, err := ws.cache.GetTokensWithoutError(context.Background(), userID, platformID)
+ m, err := ws.cache.GetTokensWithoutError(context.Background(), v.UserID, platformID)
if err != nil {
- httpError(connContext, err)
- return
+ return nil, err
}
- if v, ok := m[token]; ok {
+ if v, ok := m[v.Token]; ok {
switch v {
case constant.NormalToken:
case constant.KickedToken:
- httpError(connContext, errs.ErrTokenKicked.Wrap())
- return
+ return nil, errs.ErrTokenKicked.Wrap()
default:
- httpError(connContext, errs.ErrTokenUnknown.Wrap())
- return
+ return nil, errs.ErrTokenUnknown.Wrap(fmt.Sprintf("token status is %d", v))
}
} else {
- httpError(connContext, errs.ErrTokenNotExist.Wrap())
- return
+ return nil, errs.ErrTokenNotExist.Wrap()
}
+ return &v, nil
+}
- wsLongConn := newGWebSocket(WebSocket, ws.handshakeTimeout, ws.writeBufferSize)
- err = wsLongConn.GenerateLongConn(w, r)
- if err != nil {
- httpError(connContext, err)
- return
- }
- compressProtoc, exists := connContext.Query(Compression)
- if exists {
- if compressProtoc == GzipCompressionProtocol {
- compression = true
+type WSArgs struct {
+ Token string
+ UserID string
+ PlatformID int
+ Compression bool
+ MsgResp bool
+}
+
+func (ws *WsServer) wsHandler(w http.ResponseWriter, r *http.Request) {
+ connContext := newContext(w, r)
+ args, pErr := ws.ParseWSArgs(r)
+ var wsLongConn *GWebSocket
+ if args.MsgResp {
+ wsLongConn = newGWebSocket(WebSocket, ws.handshakeTimeout, ws.writeBufferSize)
+ if err := wsLongConn.GenerateLongConn(w, r); err != nil {
+ httpError(connContext, err)
+ return
}
- }
- compressProtoc, exists = connContext.GetHeader(Compression)
- if exists {
- if compressProtoc == GzipCompressionProtocol {
- compression = true
+ data, err := json.Marshal(apiresp.ParseError(pErr))
+ if err != nil {
+ _ = wsLongConn.Close()
+ return
+ }
+ if err := wsLongConn.WriteMessage(MessageText, data); err != nil {
+ _ = wsLongConn.Close()
+ return
+ }
+ if pErr != nil {
+ _ = wsLongConn.Close()
+ return
+ }
+ } else {
+ if pErr != nil {
+ httpError(connContext, pErr)
+ return
+ }
+ wsLongConn = newGWebSocket(WebSocket, ws.handshakeTimeout, ws.writeBufferSize)
+ if err := wsLongConn.GenerateLongConn(w, r); err != nil {
+ httpError(connContext, err)
+ return
}
}
client := ws.clientPool.Get().(*Client)
- client.ResetClient(connContext, wsLongConn, connContext.GetBackground(), compression, ws, token)
+ client.ResetClient(connContext, wsLongConn, connContext.GetBackground(), args.Compression, ws, args.Token)
ws.registerChan <- client
go client.readMessage()
}
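
The gateway recycles `Client` objects through `clientPool` (`Get()` followed by `ResetClient`) and the new `bufferPool` applies the same idea to byte slices. A small self-contained sketch of that `sync.Pool` reuse pattern, with illustrative types rather than the gateway's own:

```go
package main

import (
	"fmt"
	"sync"
)

// conn is a stand-in for the gateway's Client; only the reuse pattern matters.
type conn struct {
	userID string
	buf    []byte
}

// reset plays the role of ResetClient: wipe per-connection state before reuse.
func (c *conn) reset(userID string) {
	c.userID = userID
	c.buf = c.buf[:0]
}

var connPool = sync.Pool{
	New: func() any { return &conn{buf: make([]byte, 0, 1024)} },
}

func handle(userID string) {
	c := connPool.Get().(*conn) // reuse a previously allocated object when available
	c.reset(userID)
	defer connPool.Put(c) // hand it back once the connection is done
	c.buf = append(c.buf, []byte("hello "+userID)...)
	fmt.Println(string(c.buf))
}

func main() {
	handle("u1")
	handle("u2")
}
```
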
diff --git a/internal/msgtransfer/init.go b/internal/msgtransfer/init.go
index 2f7ddef3d..2f66b84c3 100644
--- a/internal/msgtransfer/init.go
+++ b/internal/msgtransfer/init.go
@@ -18,27 +18,27 @@ import (
"context"
"errors"
"fmt"
- "net/http"
"os"
"os/signal"
"sync"
"syscall"
"time"
+ "log"
+ "net/http"
+ "sync"
+
+ "github.com/OpenIMSDK/tools/errs"
+ "github.com/OpenIMSDK/tools/mw"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/collectors"
"github.com/prometheus/client_golang/prometheus/promhttp"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
-
"github.com/OpenIMSDK/tools/log"
- "github.com/OpenIMSDK/tools/mw"
-
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/cache"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/controller"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/relation"
- relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/unrelation"
kdisc "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
@@ -46,22 +46,12 @@ import (
)
type MsgTransfer struct {
-	persistentCH *PersistentConsumerHandler // consumer that persists chat records to MySQL; subscribed topic: ws2ms_chat
	historyCH *OnlineHistoryRedisConsumerHandler // aggregates messages (subscribed topic: ws2ms_chat); sends modification notices to the msg_to_modify topic, increments Redis after the messages are stored in Redis, then publishes to ms2pschat for push and to msg_to_mongo for persistence
	historyMongoCH *OnlineHistoryMongoConsumerHandler // batch-inserts into MongoDB, deletes the messages from Redis on success, and handles deletion-notification cleanup (subscribed topic: msg_to_mongo)
	// modifyCH *ModifyMsgConsumerHandler // consumer for message-modification notifications, subscribed topic: msg_to_modify
}
func StartTransfer(prometheusPort int) error {
- db, err := relation.NewGormDB()
- if err != nil {
- return err
- }
-
- if err = db.AutoMigrate(&relationtb.ChatLogModel{}); err != nil {
- fmt.Printf("gorm: AutoMigrate ChatLogModel err: %v\n", err)
- }
-
rdb, err := cache.NewRedis()
if err != nil {
return err
@@ -87,40 +77,49 @@ func StartTransfer(prometheusPort int) error {
if err := client.CreateRpcRootNodes(config.Config.GetServiceNames()); err != nil {
return err
}
-
- client.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()))
+ client.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"LoadBalancingPolicy": "%s"}`, "round_robin")))
msgModel := cache.NewMsgCacheModel(rdb)
msgDocModel := unrelation.NewMsgMongoDriver(mongo.GetDatabase())
- msgMysModel := relation.NewChatLogGorm(db)
- chatLogDatabase := controller.NewChatLogDatabase(msgMysModel)
- msgDatabase := controller.NewCommonMsgDatabase(msgDocModel, msgModel)
+ msgDatabase, err := controller.NewCommonMsgDatabase(msgDocModel, msgModel)
+ if err != nil {
+ return err
+ }
conversationRpcClient := rpcclient.NewConversationRpcClient(client)
groupRpcClient := rpcclient.NewGroupRpcClient(client)
- msgTransfer := NewMsgTransfer(chatLogDatabase, msgDatabase, &conversationRpcClient, &groupRpcClient)
+ msgTransfer, err := NewMsgTransfer(msgDatabase, &conversationRpcClient, &groupRpcClient)
+ if err != nil {
+ return err
+ }
return msgTransfer.Start(prometheusPort)
}
-func NewMsgTransfer(chatLogDatabase controller.ChatLogDatabase,
- msgDatabase controller.CommonMsgDatabase,
- conversationRpcClient *rpcclient.ConversationRpcClient, groupRpcClient *rpcclient.GroupRpcClient,
-) *MsgTransfer {
- return &MsgTransfer{
- persistentCH: NewPersistentConsumerHandler(chatLogDatabase), historyCH: NewOnlineHistoryRedisConsumerHandler(msgDatabase, conversationRpcClient, groupRpcClient),
- historyMongoCH: NewOnlineHistoryMongoConsumerHandler(msgDatabase),
+func NewMsgTransfer(msgDatabase controller.CommonMsgDatabase, conversationRpcClient *rpcclient.ConversationRpcClient, groupRpcClient *rpcclient.GroupRpcClient) (*MsgTransfer, error) {
+ historyCH, err := NewOnlineHistoryRedisConsumerHandler(msgDatabase, conversationRpcClient, groupRpcClient)
+ if err != nil {
+ return nil, err
+ }
+ historyMongoCH, err := NewOnlineHistoryMongoConsumerHandler(msgDatabase)
+ if err != nil {
+ return nil, err
}
+
+ return &MsgTransfer{
+ historyCH: historyCH,
+ historyMongoCH: historyMongoCH,
+ }, nil
}
func (m *MsgTransfer) Start(prometheusPort int) error {
+	ctx := context.Background()
fmt.Println("start msg transfer", "prometheusPort:", prometheusPort)
if prometheusPort <= 0 {
- return errors.New("prometheusPort not correct")
+ return errs.Wrap(errors.New("prometheusPort not correct"))
}
- if config.Config.ChatPersistenceMysql {
- // go m.persistentCH.persistentConsumerGroup.RegisterHandleAndConsumer(m.persistentCH)
- } else {
- fmt.Println("msg transfer not start mysql consumer")
- }
+
var wg sync.WaitGroup
@@ -128,14 +127,14 @@ func (m *MsgTransfer) Start(prometheusPort int) error {
go func() {
defer wg.Done()
- m.historyCH.historyConsumerGroup.RegisterHandleAndConsumer(m.historyCH)
+			m.historyCH.historyConsumerGroup.RegisterHandleAndConsumer(ctx, m.historyCH)
}()
wg.Add(1)
go func() {
defer wg.Done()
- m.historyMongoCH.historyConsumerGroup.RegisterHandleAndConsumer(m.historyMongoCH)
+ m.historyMongoCH.historyConsumerGroup.RegisterHandleAndConsumer(ctx, m.historyMongoCH)
}()
if config.Config.Prometheus.Enable {
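
`RegisterHandleAndConsumer` now receives a `context.Context`, so the consumer goroutines started in `Start` can be shut down by cancelling it. A generic sketch of that context/WaitGroup wiring, with a dummy consumer loop standing in for the Kafka consumer group:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// consume stands in for historyConsumerGroup.RegisterHandleAndConsumer: it
// runs until the shared context is cancelled.
func consume(ctx context.Context, name string) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println(name, "stopped:", ctx.Err())
			return
		case <-time.After(100 * time.Millisecond):
			fmt.Println(name, "polled a batch")
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	var wg sync.WaitGroup
	for _, name := range []string{"redis-consumer", "mongo-consumer"} {
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			consume(ctx, n)
		}(name)
	}
	time.Sleep(300 * time.Millisecond)
	cancel() // shut both consumers down
	wg.Wait()
}
```
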
diff --git a/internal/msgtransfer/online_history_msg_handler.go b/internal/msgtransfer/online_history_msg_handler.go
index ca75a8182..6678715d4 100644
--- a/internal/msgtransfer/online_history_msg_handler.go
+++ b/internal/msgtransfer/online_history_msg_handler.go
@@ -62,7 +62,7 @@ type TriggerChannelValue struct {
type Cmd2Value struct {
Cmd int
- Value interface{}
+ Value any
}
type ContextMsg struct {
message *sdkws.MsgData
@@ -88,7 +88,7 @@ func NewOnlineHistoryRedisConsumerHandler(
database controller.CommonMsgDatabase,
conversationRpcClient *rpcclient.ConversationRpcClient,
groupRpcClient *rpcclient.GroupRpcClient,
-) *OnlineHistoryRedisConsumerHandler {
+) (*OnlineHistoryRedisConsumerHandler, error) {
var och OnlineHistoryRedisConsumerHandler
och.msgDatabase = database
och.msgDistributionCh = make(chan Cmd2Value) // no buffer channel
@@ -99,14 +99,15 @@ func NewOnlineHistoryRedisConsumerHandler(
}
och.conversationRpcClient = conversationRpcClient
och.groupRpcClient = groupRpcClient
- och.historyConsumerGroup = kafka.NewMConsumerGroup(&kafka.MConsumerGroupConfig{
+ var err error
+ och.historyConsumerGroup, err = kafka.NewMConsumerGroup(&kafka.MConsumerGroupConfig{
KafkaVersion: sarama.V2_0_0_0,
OffsetsInitial: sarama.OffsetNewest, IsReturnErr: false,
}, []string{config.Config.Kafka.LatestMsgToRedis.Topic},
config.Config.Kafka.Addr, config.Config.Kafka.ConsumerGroupID.MsgToRedis)
// statistics.NewStatistics(&och.singleMsgSuccessCount, config.Config.ModuleName.MsgTransferName, fmt.Sprintf("%d
// second singleMsgCount insert to mongo", constant.StatisticsTimeInterval), constant.StatisticsTimeInterval)
- return &och
+ return &och, err
}
func (och *OnlineHistoryRedisConsumerHandler) Run(channelID int) {
diff --git a/internal/msgtransfer/online_msg_to_mongo_handler.go b/internal/msgtransfer/online_msg_to_mongo_handler.go
index 8ef15fe72..6e6c4c819 100644
--- a/internal/msgtransfer/online_msg_to_mongo_handler.go
+++ b/internal/msgtransfer/online_msg_to_mongo_handler.go
@@ -34,16 +34,21 @@ type OnlineHistoryMongoConsumerHandler struct {
msgDatabase controller.CommonMsgDatabase
}
-func NewOnlineHistoryMongoConsumerHandler(database controller.CommonMsgDatabase) *OnlineHistoryMongoConsumerHandler {
+func NewOnlineHistoryMongoConsumerHandler(database controller.CommonMsgDatabase) (*OnlineHistoryMongoConsumerHandler, error) {
+ historyConsumerGroup, err := kfk.NewMConsumerGroup(&kfk.MConsumerGroupConfig{
+ KafkaVersion: sarama.V2_0_0_0,
+ OffsetsInitial: sarama.OffsetNewest, IsReturnErr: false,
+ }, []string{config.Config.Kafka.MsgToMongo.Topic},
+ config.Config.Kafka.Addr, config.Config.Kafka.ConsumerGroupID.MsgToMongo)
+ if err != nil {
+ return nil, err
+ }
+
mc := &OnlineHistoryMongoConsumerHandler{
- historyConsumerGroup: kfk.NewMConsumerGroup(&kfk.MConsumerGroupConfig{
- KafkaVersion: sarama.V2_0_0_0,
- OffsetsInitial: sarama.OffsetNewest, IsReturnErr: false,
- }, []string{config.Config.Kafka.MsgToMongo.Topic},
- config.Config.Kafka.Addr, config.Config.Kafka.ConsumerGroupID.MsgToMongo),
- msgDatabase: database,
+ historyConsumerGroup: historyConsumerGroup,
+ msgDatabase: database,
}
- return mc
+ return mc, nil
}
func (mc *OnlineHistoryMongoConsumerHandler) handleChatWs2Mongo(
diff --git a/internal/msgtransfer/persistent_msg_handler.go b/internal/msgtransfer/persistent_msg_handler.go
deleted file mode 100644
index d105de2fe..000000000
--- a/internal/msgtransfer/persistent_msg_handler.go
+++ /dev/null
@@ -1,119 +0,0 @@
-// Copyright © 2023 OpenIM. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package msgtransfer
-
-import (
- "context"
-
- "github.com/OpenIMSDK/protocol/constant"
- pbmsg "github.com/OpenIMSDK/protocol/msg"
- "github.com/OpenIMSDK/tools/log"
- "github.com/OpenIMSDK/tools/utils"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/config"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/controller"
- kfk "github.com/openimsdk/open-im-server/v3/pkg/common/kafka"
-
- "github.com/IBM/sarama"
- "google.golang.org/protobuf/proto"
-)
-
-type PersistentConsumerHandler struct {
- persistentConsumerGroup *kfk.MConsumerGroup
- chatLogDatabase controller.ChatLogDatabase
-}
-
-func NewPersistentConsumerHandler(database controller.ChatLogDatabase) *PersistentConsumerHandler {
- return &PersistentConsumerHandler{
- persistentConsumerGroup: kfk.NewMConsumerGroup(&kfk.MConsumerGroupConfig{
- KafkaVersion: sarama.V2_0_0_0,
- OffsetsInitial: sarama.OffsetNewest, IsReturnErr: false,
- }, []string{config.Config.Kafka.LatestMsgToRedis.Topic},
- config.Config.Kafka.Addr, config.Config.Kafka.ConsumerGroupID.MsgToMySql),
- chatLogDatabase: database,
- }
-}
-
-func (pc *PersistentConsumerHandler) handleChatWs2Mysql(
- ctx context.Context,
- cMsg *sarama.ConsumerMessage,
- msgKey string,
- _ sarama.ConsumerGroupSession,
-) {
- msg := cMsg.Value
- var tag bool
- msgFromMQ := pbmsg.MsgDataToMQ{}
- err := proto.Unmarshal(msg, &msgFromMQ)
- if err != nil {
- log.ZError(ctx, "msg_transfer Unmarshal msg err", err)
- return
- }
-
- log.ZDebug(ctx, "handleChatWs2Mysql", "msg", msgFromMQ.MsgData)
- // Control whether to store history messages (mysql)
- isPersist := utils.GetSwitchFromOptions(msgFromMQ.MsgData.Options, constant.IsPersistent)
- // Only process receiver data
- if isPersist {
- switch msgFromMQ.MsgData.SessionType {
- case constant.SingleChatType, constant.NotificationChatType:
- if msgKey == msgFromMQ.MsgData.RecvID {
- tag = true
- }
- case constant.GroupChatType:
- if msgKey == msgFromMQ.MsgData.SendID {
- tag = true
- }
- case constant.SuperGroupChatType:
- tag = true
- }
- if tag {
- log.ZInfo(ctx, "msg_transfer msg persisting", "msg", string(msg))
- if err = pc.chatLogDatabase.CreateChatLog(&msgFromMQ); err != nil {
- log.ZError(ctx, "Message insert failed", err, "msg", msgFromMQ.String())
- return
- }
- }
- }
-}
-func (PersistentConsumerHandler) Setup(_ sarama.ConsumerGroupSession) error { return nil }
-func (PersistentConsumerHandler) Cleanup(_ sarama.ConsumerGroupSession) error { return nil }
-
-func (pc *PersistentConsumerHandler) ConsumeClaim(
- sess sarama.ConsumerGroupSession,
- claim sarama.ConsumerGroupClaim,
-) error {
- for msg := range claim.Messages() {
- ctx := pc.persistentConsumerGroup.GetContextFromMsg(msg)
- log.ZDebug(
- ctx,
- "kafka get info to mysql",
- "msgTopic",
- msg.Topic,
- "msgPartition",
- msg.Partition,
- "msg",
- string(msg.Value),
- "key",
- string(msg.Key),
- )
- if len(msg.Value) != 0 {
- pc.handleChatWs2Mysql(ctx, msg, string(msg.Key), sess)
- } else {
- log.ZError(ctx, "msg get from kafka but is nil", nil, "key", msg.Key)
- }
- sess.MarkMessage(msg, "")
- }
- return nil
-}
diff --git a/internal/push/callback.go b/internal/push/callback.go
index 2085493c5..99a58fb07 100644
--- a/internal/push/callback.go
+++ b/internal/push/callback.go
@@ -37,7 +37,7 @@ func callbackOfflinePush(
msg *sdkws.MsgData,
offlinePushUserIDs *[]string,
) error {
- if !config.Config.Callback.CallbackOfflinePush.Enable {
+ if !config.Config.Callback.CallbackOfflinePush.Enable || msg.ContentType == constant.Typing {
return nil
}
req := &callbackstruct.CallbackBeforePushReq{
@@ -73,7 +73,7 @@ func callbackOfflinePush(
}
func callbackOnlinePush(ctx context.Context, userIDs []string, msg *sdkws.MsgData) error {
- if !config.Config.Callback.CallbackOnlinePush.Enable || utils.Contain(msg.SendID, userIDs...) {
+ if !config.Config.Callback.CallbackOnlinePush.Enable || utils.Contain(msg.SendID, userIDs...) || msg.ContentType == constant.Typing {
return nil
}
req := callbackstruct.CallbackBeforePushReq{
@@ -107,7 +107,7 @@ func callbackBeforeSuperGroupOnlinePush(
msg *sdkws.MsgData,
pushToUserIDs *[]string,
) error {
- if !config.Config.Callback.CallbackBeforeSuperGroupOnlinePush.Enable {
+ if !config.Config.Callback.CallbackBeforeSuperGroupOnlinePush.Enable || msg.ContentType == constant.Typing {
return nil
}
req := callbackstruct.CallbackBeforeSuperGroupOnlinePushReq{
diff --git a/internal/push/consumer_init.go b/internal/push/consumer_init.go
index b72c32bb1..ceab86165 100644
--- a/internal/push/consumer_init.go
+++ b/internal/push/consumer_init.go
@@ -14,19 +14,24 @@
package push
+import "context"
+
type Consumer struct {
pushCh ConsumerHandler
successCount uint64
}
-func NewConsumer(pusher *Pusher) *Consumer {
- return &Consumer{
- pushCh: *NewConsumerHandler(pusher),
+func NewConsumer(pusher *Pusher) (*Consumer, error) {
+ c, err := NewConsumerHandler(pusher)
+ if err != nil {
+ return nil, err
}
+ return &Consumer{
+ pushCh: *c,
+ }, nil
}
func (c *Consumer) Start() {
- // statistics.NewStatistics(&c.successCount, config.Config.ModuleName.PushName, fmt.Sprintf("%d second push to
- // msg_gateway count", constant.StatisticsTimeInterval), constant.StatisticsTimeInterval)
- go c.pushCh.pushConsumerGroup.RegisterHandleAndConsumer(&c.pushCh)
+
+ go c.pushCh.pushConsumerGroup.RegisterHandleAndConsumer(context.Background(), &c.pushCh)
}
diff --git a/internal/push/offlinepush/dummy/push.go b/internal/push/offlinepush/dummy/push.go
index 2b15bc05d..f147886d9 100644
--- a/internal/push/offlinepush/dummy/push.go
+++ b/internal/push/offlinepush/dummy/push.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package dummy
import (
diff --git a/internal/push/offlinepush/getui/body.go b/internal/push/offlinepush/getui/body.go
index 218ed67b4..01eb22e73 100644
--- a/internal/push/offlinepush/getui/body.go
+++ b/internal/push/offlinepush/getui/body.go
@@ -21,9 +21,9 @@ import (
)
type Resp struct {
- Code int `json:"code"`
- Msg string `json:"msg"`
- Data interface{} `json:"data"`
+ Code int `json:"code"`
+ Msg string `json:"msg"`
+ Data any `json:"data"`
}
func (r *Resp) parseError() (err error) {
diff --git a/internal/push/offlinepush/getui/push.go b/internal/push/offlinepush/getui/push.go
index 1fd65647d..b657c9c23 100644
--- a/internal/push/offlinepush/getui/push.go
+++ b/internal/push/offlinepush/getui/push.go
@@ -159,7 +159,7 @@ func (g *Client) singlePush(ctx context.Context, token, userID string, pushReq P
return g.request(ctx, pushURL, pushReq, token, nil)
}
-func (g *Client) request(ctx context.Context, url string, input interface{}, token string, output interface{}) error {
+func (g *Client) request(ctx context.Context, url string, input any, token string, output any) error {
header := map[string]string{"token": token}
resp := &Resp{}
resp.Data = output
@@ -170,7 +170,7 @@ func (g *Client) postReturn(
ctx context.Context,
url string,
header map[string]string,
- input interface{},
+ input any,
output RespI,
timeout int,
) error {
diff --git a/internal/push/offlinepush/jpush/body/audience.go b/internal/push/offlinepush/jpush/body/audience.go
index f29930886..43a7148b8 100644
--- a/internal/push/offlinepush/jpush/body/audience.go
+++ b/internal/push/offlinepush/jpush/body/audience.go
@@ -23,7 +23,7 @@ const (
)
type Audience struct {
- Object interface{}
+ Object any
audience map[string][]string
}
diff --git a/internal/push/offlinepush/jpush/body/message.go b/internal/push/offlinepush/jpush/body/message.go
index 670cd4c78..e885d1d69 100644
--- a/internal/push/offlinepush/jpush/body/message.go
+++ b/internal/push/offlinepush/jpush/body/message.go
@@ -15,10 +15,10 @@
package body
type Message struct {
- MsgContent string `json:"msg_content"`
- Title string `json:"title,omitempty"`
- ContentType string `json:"content_type,omitempty"`
- Extras map[string]interface{} `json:"extras,omitempty"`
+ MsgContent string `json:"msg_content"`
+ Title string `json:"title,omitempty"`
+ ContentType string `json:"content_type,omitempty"`
+ Extras map[string]any `json:"extras,omitempty"`
}
func (m *Message) SetMsgContent(c string) {
@@ -33,9 +33,9 @@ func (m *Message) SetContentType(c string) {
m.ContentType = c
}
-func (m *Message) SetExtras(key string, value interface{}) {
+func (m *Message) SetExtras(key string, value any) {
if m.Extras == nil {
- m.Extras = make(map[string]interface{})
+ m.Extras = make(map[string]any)
}
m.Extras[key] = value
}
diff --git a/internal/push/offlinepush/jpush/body/platform.go b/internal/push/offlinepush/jpush/body/platform.go
index 9de2b8711..1ef136f2c 100644
--- a/internal/push/offlinepush/jpush/body/platform.go
+++ b/internal/push/offlinepush/jpush/body/platform.go
@@ -29,7 +29,7 @@ const (
)
type Platform struct {
- Os interface{}
+ Os any
osArry []string
}
diff --git a/internal/push/offlinepush/jpush/body/pushobj.go b/internal/push/offlinepush/jpush/body/pushobj.go
index c8c112f69..3dc133d0a 100644
--- a/internal/push/offlinepush/jpush/body/pushobj.go
+++ b/internal/push/offlinepush/jpush/body/pushobj.go
@@ -15,11 +15,11 @@
package body
type PushObj struct {
- Platform interface{} `json:"platform"`
- Audience interface{} `json:"audience"`
- Notification interface{} `json:"notification,omitempty"`
- Message interface{} `json:"message,omitempty"`
- Options interface{} `json:"options,omitempty"`
+ Platform any `json:"platform"`
+ Audience any `json:"audience"`
+ Notification any `json:"notification,omitempty"`
+ Message any `json:"message,omitempty"`
+ Options any `json:"options,omitempty"`
}
func (p *PushObj) SetPlatform(pf *Platform) {
diff --git a/internal/push/offlinepush/jpush/push.go b/internal/push/offlinepush/jpush/push.go
index 44de7ff65..567269f3c 100644
--- a/internal/push/offlinepush/jpush/push.go
+++ b/internal/push/offlinepush/jpush/push.go
@@ -69,11 +69,11 @@ func (j *JPush) Push(ctx context.Context, userIDs []string, title, content strin
pushObj.SetNotification(&no)
pushObj.SetMessage(&msg)
pushObj.SetOptions(&opt)
- var resp interface{}
+ var resp any
return j.request(ctx, pushObj, resp, 5)
}
-func (j *JPush) request(ctx context.Context, po body.PushObj, resp interface{}, timeout int) error {
+func (j *JPush) request(ctx context.Context, po body.PushObj, resp any, timeout int) error {
return http2.PostReturn(
ctx,
config.Config.Push.Jpns.PushUrl,
diff --git a/internal/push/push_handler.go b/internal/push/push_handler.go
index a1a9ff08e..19d42ebb9 100644
--- a/internal/push/push_handler.go
+++ b/internal/push/push_handler.go
@@ -35,15 +35,19 @@ type ConsumerHandler struct {
pusher *Pusher
}
-func NewConsumerHandler(pusher *Pusher) *ConsumerHandler {
+func NewConsumerHandler(pusher *Pusher) (*ConsumerHandler, error) {
var consumerHandler ConsumerHandler
consumerHandler.pusher = pusher
- consumerHandler.pushConsumerGroup = kfk.NewMConsumerGroup(&kfk.MConsumerGroupConfig{
+ var err error
+ consumerHandler.pushConsumerGroup, err = kfk.NewMConsumerGroup(&kfk.MConsumerGroupConfig{
KafkaVersion: sarama.V2_0_0_0,
OffsetsInitial: sarama.OffsetNewest, IsReturnErr: false,
}, []string{config.Config.Kafka.MsgToPush.Topic}, config.Config.Kafka.Addr,
config.Config.Kafka.ConsumerGroupID.MsgToPush)
- return &consumerHandler
+ if err != nil {
+ return nil, err
+ }
+ return &consumerHandler, nil
}
func (c *ConsumerHandler) handleMs2PsChat(ctx context.Context, msg []byte) {
@@ -67,13 +71,14 @@ func (c *ConsumerHandler) handleMs2PsChat(ctx context.Context, msg []byte) {
case constant.SuperGroupChatType:
err = c.pusher.Push2SuperGroup(ctx, pbData.MsgData.GroupID, pbData.MsgData)
default:
- var pushUserIDs []string
- if pbData.MsgData.SendID != pbData.MsgData.RecvID {
- pushUserIDs = []string{pbData.MsgData.SendID, pbData.MsgData.RecvID}
+ var pushUserIDList []string
+ isSenderSync := utils.GetSwitchFromOptions(pbData.MsgData.Options, constant.IsSenderSync)
+ if !isSenderSync || pbData.MsgData.SendID == pbData.MsgData.RecvID {
+ pushUserIDList = append(pushUserIDList, pbData.MsgData.RecvID)
} else {
- pushUserIDs = []string{pbData.MsgData.SendID}
+ pushUserIDList = append(pushUserIDList, pbData.MsgData.RecvID, pbData.MsgData.SendID)
}
- err = c.pusher.Push2User(ctx, pushUserIDs, pbData.MsgData)
+ err = c.pusher.Push2User(ctx, pushUserIDList, pbData.MsgData)
}
if err != nil {
if err == errNoOfflinePusher {
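
The handler above now consults the `IsSenderSync` message option: when sender sync is disabled, or the sender is also the receiver, only the receiver is pushed to. A small sketch of that decision, written under the assumption that message options default to enabled when unset (which is how `utils.GetSwitchFromOptions` appears to behave); the option key name is illustrative:

```go
package main

import "fmt"

// getSwitch assumes default-on semantics: a missing option counts as enabled.
func getSwitch(options map[string]bool, key string) bool {
	if options == nil {
		return true
	}
	v, ok := options[key]
	return !ok || v
}

// pushTargets mirrors the handler's branch: without sender sync (or when the
// sender is also the receiver) only the receiver gets the push.
func pushTargets(sendID, recvID string, options map[string]bool) []string {
	const isSenderSync = "senderSync" // illustrative key, not the real constant
	if !getSwitch(options, isSenderSync) || sendID == recvID {
		return []string{recvID}
	}
	return []string{recvID, sendID}
}

func main() {
	fmt.Println(pushTargets("alice", "bob", nil))                                  // [bob alice]
	fmt.Println(pushTargets("alice", "bob", map[string]bool{"senderSync": false})) // [bob]
	fmt.Println(pushTargets("alice", "alice", nil))                                // [alice]
}
```
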
diff --git a/internal/push/push_rpc_server.go b/internal/push/push_rpc_server.go
index 0f8f36a49..c1226ce6b 100644
--- a/internal/push/push_rpc_server.go
+++ b/internal/push/push_rpc_server.go
@@ -18,6 +18,8 @@ import (
"context"
"sync"
+ "github.com/OpenIMSDK/tools/utils"
+
"google.golang.org/grpc"
"github.com/OpenIMSDK/protocol/constant"
@@ -64,9 +66,12 @@ func Start(client discoveryregistry.SvcDiscoveryRegistry, server *grpc.Server) e
pusher: pusher,
})
}()
+ consumer, err := NewConsumer(pusher)
+ if err != nil {
+ return err
+ }
go func() {
defer wg.Done()
- consumer := NewConsumer(pusher)
consumer.Start()
}()
wg.Wait()
@@ -78,7 +83,14 @@ func (r *pushServer) PushMsg(ctx context.Context, pbData *pbpush.PushMsgReq) (re
case constant.SuperGroupChatType:
err = r.pusher.Push2SuperGroup(ctx, pbData.MsgData.GroupID, pbData.MsgData)
default:
- err = r.pusher.Push2User(ctx, []string{pbData.MsgData.RecvID, pbData.MsgData.SendID}, pbData.MsgData)
+ var pushUserIDList []string
+ isSenderSync := utils.GetSwitchFromOptions(pbData.MsgData.Options, constant.IsSenderSync)
+ if !isSenderSync {
+ pushUserIDList = append(pushUserIDList, pbData.MsgData.RecvID)
+ } else {
+ pushUserIDList = append(pushUserIDList, pbData.MsgData.RecvID, pbData.MsgData.SendID)
+ }
+ err = r.pusher.Push2User(ctx, pushUserIDList, pbData.MsgData)
}
if err != nil {
if err != errNoOfflinePusher {
diff --git a/internal/push/push_to_client.go b/internal/push/push_to_client.go
index 75a1c1380..5fce34e83 100644
--- a/internal/push/push_to_client.go
+++ b/internal/push/push_to_client.go
@@ -20,6 +20,8 @@ import (
"errors"
"sync"
+ "google.golang.org/grpc"
+
"golang.org/x/sync/errgroup"
"github.com/OpenIMSDK/protocol/constant"
@@ -100,11 +102,9 @@ func (p *Pusher) DeleteMemberAndSetConversationSeq(ctx context.Context, groupID
func (p *Pusher) Push2User(ctx context.Context, userIDs []string, msg *sdkws.MsgData) error {
log.ZDebug(ctx, "Get msg from msg_transfer And push msg", "userIDs", userIDs, "msg", msg.String())
- // callback
if err := callbackOnlinePush(ctx, userIDs, msg); err != nil {
return err
}
-
// push
wsResults, err := p.GetConnsAndOnlinePush(ctx, msg, userIDs)
if err != nil {
@@ -118,22 +118,30 @@ func (p *Pusher) Push2User(ctx context.Context, userIDs []string, msg *sdkws.Msg
return nil
}
- for _, v := range wsResults {
- if msg.SendID != v.UserID && (!v.OnlinePush) {
- if err = callbackOfflinePush(ctx, userIDs, msg, &[]string{}); err != nil {
- return err
- }
-
- err = p.offlinePushMsg(ctx, msg.SendID, msg, []string{v.UserID})
- if err != nil {
- return err
- }
+ if len(wsResults) == 0 {
+ return nil
+ }
+ onlinePushSuccUserIDSet := utils.SliceSet(utils.Filter(wsResults, func(e *msggateway.SingleMsgToUserResults) (string, bool) {
+ return e.UserID, e.OnlinePush && e.UserID != ""
+ }))
+ offlinePushUserIDList := utils.Filter(wsResults, func(e *msggateway.SingleMsgToUserResults) (string, bool) {
+ _, exist := onlinePushSuccUserIDSet[e.UserID]
+ return e.UserID, !exist && e.UserID != "" && e.UserID != msg.SendID
+ })
+
+ if len(offlinePushUserIDList) > 0 {
+ if err = callbackOfflinePush(ctx, offlinePushUserIDList, msg, &[]string{}); err != nil {
+ return err
+ }
+ err = p.offlinePushMsg(ctx, msg.SendID, msg, offlinePushUserIDList)
+ if err != nil {
+ return err
}
}
return nil
}
-func (p *Pusher) UnmarshalNotificationElem(bytes []byte, t interface{}) error {
+func (p *Pusher) UnmarshalNotificationElem(bytes []byte, t any) error {
var notification sdkws.NotificationElem
	if err := json.Unmarshal(bytes, &notification); err != nil {
return err
@@ -142,6 +150,47 @@ func (p *Pusher) UnmarshalNotificationElem(bytes []byte, t interface{}) error {
return json.Unmarshal([]byte(notification.Detail), t)
}
+// k8sOfflinePush2SuperGroup handles offline push of super group messages
+// when the service runs under a Kubernetes deployment.
+func (p *Pusher) k8sOfflinePush2SuperGroup(ctx context.Context, groupID string, msg *sdkws.MsgData, wsResults []*msggateway.SingleMsgToUserResults) error {
+
+ var needOfflinePushUserIDs []string
+ for _, v := range wsResults {
+ if !v.OnlinePush {
+ needOfflinePushUserIDs = append(needOfflinePushUserIDs, v.UserID)
+ }
+ }
+ if len(needOfflinePushUserIDs) > 0 {
+ var offlinePushUserIDs []string
+ err := callbackOfflinePush(ctx, needOfflinePushUserIDs, msg, &offlinePushUserIDs)
+ if err != nil {
+ return err
+ }
+
+ if len(offlinePushUserIDs) > 0 {
+ needOfflinePushUserIDs = offlinePushUserIDs
+ }
+ if msg.ContentType != constant.SignalingNotification {
+ resp, err := p.conversationRpcClient.Client.GetConversationOfflinePushUserIDs(
+ ctx,
+ &conversation.GetConversationOfflinePushUserIDsReq{ConversationID: utils.GenGroupConversationID(groupID), UserIDs: needOfflinePushUserIDs},
+ )
+ if err != nil {
+ return err
+ }
+ if len(resp.UserIDs) > 0 {
+ err = p.offlinePushMsg(ctx, groupID, msg, resp.UserIDs)
+ if err != nil {
+ log.ZError(ctx, "offlinePushMsg failed", err, "groupID", groupID, "msg", msg)
+ return err
+ }
+ }
+ }
+
+ }
+ return nil
+}
func (p *Pusher) Push2SuperGroup(ctx context.Context, groupID string, msg *sdkws.MsgData) (err error) {
log.ZDebug(ctx, "Get super group msg from msg_transfer and push msg", "msg", msg.String(), "groupID", groupID)
var pushToUserIDs []string
@@ -189,6 +238,9 @@ func (p *Pusher) Push2SuperGroup(ctx context.Context, groupID string, msg *sdkws
if len(config.Config.Manager.UserID) > 0 {
ctx = mcontext.WithOpUserIDContext(ctx, config.Config.Manager.UserID[0])
}
+ if len(config.Config.Manager.UserID) == 0 && len(config.Config.IMAdmin.UserID) > 0 {
+ ctx = mcontext.WithOpUserIDContext(ctx, config.Config.IMAdmin.UserID[0])
+ }
defer func(groupID string) {
if err = p.groupRpcClient.DismissGroup(ctx, groupID); err != nil {
log.ZError(ctx, "DismissGroup Notification clear members", err, "groupID", groupID)
@@ -205,7 +257,10 @@ func (p *Pusher) Push2SuperGroup(ctx context.Context, groupID string, msg *sdkws
log.ZDebug(ctx, "get conn and online push success", "result", wsResults, "msg", msg)
isOfflinePush := utils.GetSwitchFromOptions(msg.Options, constant.IsOfflinePush)
- if isOfflinePush {
+ if isOfflinePush && config.Config.Envs.Discovery == "k8s" {
+ return p.k8sOfflinePush2SuperGroup(ctx, groupID, msg, wsResults)
+ }
+ if isOfflinePush && config.Config.Envs.Discovery == "zookeeper" {
var (
onlineSuccessUserIDs = []string{msg.SendID}
webAndPcBackgroundUserIDs []string
@@ -239,14 +294,7 @@ func (p *Pusher) Push2SuperGroup(ctx context.Context, groupID string, msg *sdkws
}
needOfflinePushUserIDs := utils.DifferenceString(onlineSuccessUserIDs, pushToUserIDs)
- if msg.ContentType != constant.SignalingNotification {
- notNotificationUserIDs, err := p.conversationLocalCache.GetRecvMsgNotNotifyUserIDs(ctx, groupID)
- if err != nil {
- return err
- }
- needOfflinePushUserIDs = utils.SliceSub(needOfflinePushUserIDs, notNotificationUserIDs)
- }
// Use offline push messaging
if len(needOfflinePushUserIDs) > 0 {
var offlinePushUserIDs []string
@@ -258,30 +306,89 @@ func (p *Pusher) Push2SuperGroup(ctx context.Context, groupID string, msg *sdkws
if len(offlinePushUserIDs) > 0 {
needOfflinePushUserIDs = offlinePushUserIDs
}
- resp, err := p.conversationRpcClient.Client.GetConversationOfflinePushUserIDs(
- ctx,
- &conversation.GetConversationOfflinePushUserIDsReq{ConversationID: utils.GenGroupConversationID(groupID), UserIDs: needOfflinePushUserIDs},
- )
- if err != nil {
- return err
- }
- if len(resp.UserIDs) > 0 {
- err = p.offlinePushMsg(ctx, groupID, msg, resp.UserIDs)
+ if msg.ContentType != constant.SignalingNotification {
+ resp, err := p.conversationRpcClient.Client.GetConversationOfflinePushUserIDs(
+ ctx,
+ &conversation.GetConversationOfflinePushUserIDsReq{ConversationID: utils.GenGroupConversationID(groupID), UserIDs: needOfflinePushUserIDs},
+ )
if err != nil {
- log.ZError(ctx, "offlinePushMsg failed", err, "groupID", groupID, "msg", msg)
return err
}
- if _, err := p.GetConnsAndOnlinePush(ctx, msg, utils.IntersectString(resp.UserIDs, webAndPcBackgroundUserIDs)); err != nil {
- log.ZError(ctx, "offlinePushMsg failed", err, "groupID", groupID, "msg", msg, "userIDs", utils.IntersectString(needOfflinePushUserIDs, webAndPcBackgroundUserIDs))
- return err
+ if len(resp.UserIDs) > 0 {
+ err = p.offlinePushMsg(ctx, groupID, msg, resp.UserIDs)
+ if err != nil {
+ log.ZError(ctx, "offlinePushMsg failed", err, "groupID", groupID, "msg", msg)
+ return err
+ }
+ if _, err := p.GetConnsAndOnlinePush(ctx, msg, utils.IntersectString(resp.UserIDs, webAndPcBackgroundUserIDs)); err != nil {
+ log.ZError(ctx, "offlinePushMsg failed", err, "groupID", groupID, "msg", msg, "userIDs", utils.IntersectString(needOfflinePushUserIDs, webAndPcBackgroundUserIDs))
+ return err
+ }
}
}
+
}
}
return nil
}
+func (p *Pusher) k8sOnlinePush(ctx context.Context, msg *sdkws.MsgData, pushToUserIDs []string) (wsResults []*msggateway.SingleMsgToUserResults, err error) {
+ var usersHost = make(map[string][]string)
+ for _, v := range pushToUserIDs {
+ tHost, err := p.discov.GetUserIdHashGatewayHost(ctx, v)
+ if err != nil {
+ log.ZError(ctx, "get msggateway hash error", err)
+ return nil, err
+ }
+ tUsers, tbl := usersHost[tHost]
+ if tbl {
+ tUsers = append(tUsers, v)
+ usersHost[tHost] = tUsers
+ } else {
+ usersHost[tHost] = []string{v}
+ }
+ }
+ log.ZDebug(ctx, "genUsers send hosts struct:", "usersHost", usersHost)
+ var usersConns = make(map[*grpc.ClientConn][]string)
+ for host, userIds := range usersHost {
+ tconn, _ := p.discov.GetConn(ctx, host)
+ usersConns[tconn] = userIds
+ }
+ var (
+ mu sync.Mutex
+ wg = errgroup.Group{}
+ maxWorkers = config.Config.Push.MaxConcurrentWorkers
+ )
+ if maxWorkers < 3 {
+ maxWorkers = 3
+ }
+ wg.SetLimit(maxWorkers)
+ for conn, userIds := range usersConns {
+ tcon := conn
+ tuserIds := userIds
+ wg.Go(func() error {
+ input := &msggateway.OnlineBatchPushOneMsgReq{MsgData: msg, PushToUserIDs: tuserIds}
+ msgClient := msggateway.NewMsgGatewayClient(tcon)
+ reply, err := msgClient.SuperGroupOnlineBatchPushOneMsg(ctx, input)
+ if err != nil {
+ return nil // a failed push to one gateway must not abort pushes to the other gateways
+ }
+ log.ZDebug(ctx, "push result", "reply", reply)
+ if reply != nil && reply.SinglePushResult != nil {
+ mu.Lock()
+ wsResults = append(wsResults, reply.SinglePushResult...)
+ mu.Unlock()
+ }
+ return nil
+ })
+ }
+ _ = wg.Wait()
+ return wsResults, nil
+}
func (p *Pusher) GetConnsAndOnlinePush(ctx context.Context, msg *sdkws.MsgData, pushToUserIDs []string) (wsResults []*msggateway.SingleMsgToUserResults, err error) {
+ if config.Config.Envs.Discovery == "k8s" {
+ return p.k8sOnlinePush(ctx, msg, pushToUserIDs)
+ }
conns, err := p.discov.GetConns(ctx, config.Config.RpcRegisterName.OpenImMessageGatewayName)
log.ZDebug(ctx, "get gateway conn", "conn length", len(conns))
if err != nil {
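
The `k8sOnlinePush` path above groups target users by the gateway host that owns their connection and then fans the pushes out with a bounded `errgroup`. The sketch below reproduces only that shape; `resolveHost` and `pushToHost` are hypothetical stand-ins for `GetUserIdHashGatewayHost` and the `SuperGroupOnlineBatchPushOneMsg` RPC.

```go
package main

import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

// resolveHost stands in for discov.GetUserIdHashGatewayHost.
func resolveHost(userID string) string {
	if len(userID)%2 == 0 {
		return "gateway-0:88"
	}
	return "gateway-1:88"
}

// pushToHost stands in for the SuperGroupOnlineBatchPushOneMsg RPC call.
func pushToHost(ctx context.Context, host string, userIDs []string) ([]string, error) {
	return userIDs, nil // pretend every user was pushed online
}

func onlinePush(ctx context.Context, userIDs []string, maxWorkers int) ([]string, error) {
	// 1. Group users by the gateway host that holds their connection.
	usersByHost := make(map[string][]string)
	for _, id := range userIDs {
		host := resolveHost(id)
		usersByHost[host] = append(usersByHost[host], id)
	}

	// 2. Fan out one push per host with a bounded worker pool.
	var (
		mu      sync.Mutex
		results []string
		g       errgroup.Group
	)
	if maxWorkers < 3 {
		maxWorkers = 3
	}
	g.SetLimit(maxWorkers)
	for host, ids := range usersByHost {
		host, ids := host, ids
		g.Go(func() error {
			pushed, err := pushToHost(ctx, host, ids)
			if err != nil {
				return nil // skip this gateway, keep pushing to the others
			}
			mu.Lock()
			results = append(results, pushed...)
			mu.Unlock()
			return nil
		})
	}
	_ = g.Wait()
	return results, nil
}

func main() {
	out, _ := onlinePush(context.Background(), []string{"u1", "u22", "u333"}, 4)
	fmt.Println(out)
}
```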
diff --git a/internal/rpc/auth/auth.go b/internal/rpc/auth/auth.go
index ee8ead194..eaf63f868 100644
--- a/internal/rpc/auth/auth.go
+++ b/internal/rpc/auth/auth.go
@@ -80,6 +80,28 @@ func (s *authServer) UserToken(ctx context.Context, req *pbauth.UserTokenReq) (*
return &resp, nil
}
+func (s *authServer) GetUserToken(ctx context.Context, req *pbauth.GetUserTokenReq) (*pbauth.GetUserTokenResp, error) {
+ if err := authverify.CheckAdmin(ctx); err != nil {
+ return nil, err
+ }
+ resp := pbauth.GetUserTokenResp{}
+
+ if authverify.IsManagerUserID(req.UserID) {
+ return nil, errs.ErrNoPermission.Wrap("don't get Admin token")
+ }
+
+ if _, err := s.userRpcClient.GetUserInfo(ctx, req.UserID); err != nil {
+ return nil, err
+ }
+ token, err := s.authDatabase.CreateToken(ctx, req.UserID, int(req.PlatformID))
+ if err != nil {
+ return nil, err
+ }
+ resp.Token = token
+ resp.ExpireTimeSeconds = config.Config.TokenPolicy.Expire * 24 * 60 * 60
+ return &resp, nil
+}
+
func (s *authServer) parseToken(ctx context.Context, tokensString string) (claims *tokenverify.Claims, err error) {
claims, err = tokenverify.GetClaimFromToken(tokensString, authverify.Secret())
if err != nil {
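
The new `GetUserToken` RPC applies two guards before minting a token: the caller must be an admin, and the target must not itself be an administrator account. Below is a minimal sketch of that flow with hypothetical `callerIsAdmin`/`isManagerUserID` checks; the expiry uses the same `days * 24 * 60 * 60` conversion as the handler.

```go
package main

import (
	"errors"
	"fmt"
)

var managerIDs = map[string]bool{"imAdmin": true} // hypothetical admin account list

func isManagerUserID(userID string) bool { return managerIDs[userID] }

// issueUserToken sketches the guard flow of GetUserToken.
// expireDays mirrors config.Config.TokenPolicy.Expire.
func issueUserToken(callerIsAdmin bool, targetUserID string, expireDays int64) (token string, expireSeconds int64, err error) {
	if !callerIsAdmin {
		return "", 0, errors.New("no permission: admin only")
	}
	if isManagerUserID(targetUserID) {
		return "", 0, errors.New("no permission: cannot issue a token for an admin account")
	}
	// In the real handler the token comes from authDatabase.CreateToken.
	token = "jwt-for-" + targetUserID
	expireSeconds = expireDays * 24 * 60 * 60
	return token, expireSeconds, nil
}

func main() {
	tok, exp, err := issueUserToken(true, "u123", 90)
	fmt.Println(tok, exp, err)
}
```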
diff --git a/internal/rpc/conversation/conversaion.go b/internal/rpc/conversation/conversaion.go
index d39881b35..3317359e5 100644
--- a/internal/rpc/conversation/conversaion.go
+++ b/internal/rpc/conversation/conversaion.go
@@ -16,6 +16,15 @@ package conversation
import (
"context"
+ "errors"
+ "sort"
+
+ "github.com/OpenIMSDK/protocol/sdkws"
+
+ "github.com/OpenIMSDK/tools/tx"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/mgo"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/unrelation"
"google.golang.org/grpc"
@@ -24,43 +33,51 @@ import (
"github.com/OpenIMSDK/tools/discoveryregistry"
"github.com/OpenIMSDK/tools/errs"
"github.com/OpenIMSDK/tools/log"
- "github.com/OpenIMSDK/tools/tx"
"github.com/OpenIMSDK/tools/utils"
"github.com/openimsdk/open-im-server/v3/pkg/common/convert"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/cache"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/controller"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/relation"
tablerelation "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
)
type conversationServer struct {
+ msgRpcClient *rpcclient.MessageRpcClient
+ user *rpcclient.UserRpcClient
groupRpcClient *rpcclient.GroupRpcClient
conversationDatabase controller.ConversationDatabase
conversationNotificationSender *notification.ConversationNotificationSender
}
+func (c *conversationServer) GetConversationNotReceiveMessageUserIDs(ctx context.Context, req *pbconversation.GetConversationNotReceiveMessageUserIDsReq) (*pbconversation.GetConversationNotReceiveMessageUserIDsResp, error) {
+ //TODO implement me
+ panic("implement me")
+}
+
func Start(client discoveryregistry.SvcDiscoveryRegistry, server *grpc.Server) error {
- db, err := relation.NewGormDB()
+ rdb, err := cache.NewRedis()
if err != nil {
return err
}
- if err := db.AutoMigrate(&tablerelation.ConversationModel{}); err != nil {
+ mongo, err := unrelation.NewMongo()
+ if err != nil {
return err
}
- rdb, err := cache.NewRedis()
+ conversationDB, err := mgo.NewConversationMongo(mongo.GetDatabase())
if err != nil {
return err
}
- conversationDB := relation.NewConversationGorm(db)
groupRpcClient := rpcclient.NewGroupRpcClient(client)
msgRpcClient := rpcclient.NewMessageRpcClient(client)
+ userRpcClient := rpcclient.NewUserRpcClient(client)
pbconversation.RegisterConversationServer(server, &conversationServer{
+ msgRpcClient: &msgRpcClient,
+ user: &userRpcClient,
conversationNotificationSender: notification.NewConversationNotificationSender(&msgRpcClient),
groupRpcClient: &groupRpcClient,
- conversationDatabase: controller.NewConversationDatabase(conversationDB, cache.NewConversationRedis(rdb, cache.GetDefaultOpt(), conversationDB), tx.NewGorm(db)),
+ conversationDatabase: controller.NewConversationDatabase(conversationDB, cache.NewConversationRedis(rdb, cache.GetDefaultOpt(), conversationDB), tx.NewMongo(mongo.GetClient())),
})
return nil
}
@@ -78,6 +95,80 @@ func (c *conversationServer) GetConversation(ctx context.Context, req *pbconvers
return resp, nil
}
+func (m *conversationServer) GetSortedConversationList(ctx context.Context, req *pbconversation.GetSortedConversationListReq) (resp *pbconversation.GetSortedConversationListResp, err error) {
+ log.ZDebug(ctx, "GetSortedConversationList", "seqs", req, "userID", req.UserID)
+ var conversationIDs []string
+ if len(req.ConversationIDs) == 0 {
+ conversationIDs, err = m.conversationDatabase.GetConversationIDs(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ } else {
+ conversationIDs = req.ConversationIDs
+ }
+
+ conversations, err := m.conversationDatabase.FindConversations(ctx, req.UserID, conversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ if len(conversations) == 0 {
+ return nil, errs.ErrRecordNotFound.Wrap()
+ }
+
+ maxSeqs, err := m.msgRpcClient.GetMaxSeqs(ctx, conversationIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ chatLogs, err := m.msgRpcClient.GetMsgByConversationIDs(ctx, conversationIDs, maxSeqs)
+ if err != nil {
+ return nil, err
+ }
+
+ conversationMsg, err := m.getConversationInfo(ctx, chatLogs, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ hasReadSeqs, err := m.msgRpcClient.GetHasReadSeqs(ctx, req.UserID, conversationIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ var unreadTotal int64
+ conversation_unreadCount := make(map[string]int64)
+ for conversationID, maxSeq := range maxSeqs {
+ unreadCount := maxSeq - hasReadSeqs[conversationID]
+ conversation_unreadCount[conversationID] = unreadCount
+ unreadTotal += unreadCount
+ }
+
+ conversation_isPinTime := make(map[int64]string)
+ conversation_notPinTime := make(map[int64]string)
+ for _, v := range conversations {
+ conversationID := v.ConversationID
+ time := conversationMsg[conversationID].MsgInfo.LatestMsgRecvTime
+ conversationMsg[conversationID].RecvMsgOpt = v.RecvMsgOpt
+ if v.IsPinned {
+ conversationMsg[conversationID].IsPinned = v.IsPinned
+ conversation_isPinTime[time] = conversationID
+ continue
+ }
+ conversation_notPinTime[time] = conversationID
+ }
+ resp = &pbconversation.GetSortedConversationListResp{
+ ConversationTotal: int64(len(chatLogs)),
+ ConversationElems: []*pbconversation.ConversationElem{},
+ UnreadTotal: unreadTotal,
+ }
+
+ m.conversationSort(conversation_isPinTime, resp, conversation_unreadCount, conversationMsg)
+ m.conversationSort(conversation_notPinTime, resp, conversation_unreadCount, conversationMsg)
+
+ resp.ConversationElems = utils.Paginate(resp.ConversationElems, int(req.Pagination.GetPageNumber()), int(req.Pagination.GetShowNumber()))
+ return resp, nil
+}
+
func (c *conversationServer) GetAllConversations(ctx context.Context, req *pbconversation.GetAllConversationsReq) (*pbconversation.GetAllConversationsResp, error) {
conversations, err := c.conversationDatabase.GetUserAllConversation(ctx, req.OwnerUserID)
if err != nil {
@@ -145,7 +236,7 @@ func (c *conversationServer) SetConversations(ctx context.Context,
conversation.ConversationType = req.Conversation.ConversationType
conversation.UserID = req.Conversation.UserID
conversation.GroupID = req.Conversation.GroupID
- m := make(map[string]interface{})
+ m := make(map[string]any)
if req.Conversation.RecvMsgOpt != nil {
m["recv_msg_opt"] = req.Conversation.RecvMsgOpt.Value
if req.Conversation.RecvMsgOpt.Value != conv.RecvMsgOpt {
@@ -229,11 +320,12 @@ func (c *conversationServer) SetConversations(ctx context.Context,
// Get the user IDs in the super group that have do-not-disturb enabled.
func (c *conversationServer) GetRecvMsgNotNotifyUserIDs(ctx context.Context, req *pbconversation.GetRecvMsgNotNotifyUserIDsReq) (*pbconversation.GetRecvMsgNotNotifyUserIDsResp, error) {
- userIDs, err := c.conversationDatabase.FindRecvMsgNotNotifyUserIDs(ctx, req.GroupID)
- if err != nil {
- return nil, err
- }
- return &pbconversation.GetRecvMsgNotNotifyUserIDsResp{UserIDs: userIDs}, nil
+ //userIDs, err := c.conversationDatabase.FindRecvMsgNotNotifyUserIDs(ctx, req.GroupID)
+ //if err != nil {
+ // return nil, err
+ //}
+ //return &pbconversation.GetRecvMsgNotNotifyUserIDsResp{UserIDs: userIDs}, nil
+ return nil, errors.New("deprecated")
}
// create conversation without notification for msg redis transfer.
@@ -284,7 +376,7 @@ func (c *conversationServer) CreateGroupChatConversations(ctx context.Context, r
func (c *conversationServer) SetConversationMaxSeq(ctx context.Context, req *pbconversation.SetConversationMaxSeqReq) (*pbconversation.SetConversationMaxSeqResp, error) {
if err := c.conversationDatabase.UpdateUsersConversationFiled(ctx, req.OwnerUserID, req.ConversationID,
- map[string]interface{}{"max_seq": req.MaxSeq}); err != nil {
+ map[string]any{"max_seq": req.MaxSeq}); err != nil {
return nil, err
}
return &pbconversation.SetConversationMaxSeqResp{}, nil
@@ -343,3 +435,102 @@ func (c *conversationServer) GetConversationOfflinePushUserIDs(
}
return &pbconversation.GetConversationOfflinePushUserIDsResp{UserIDs: utils.Keys(userIDSet)}, nil
}
+
+func (c *conversationServer) conversationSort(
+ conversations map[int64]string,
+ resp *pbconversation.GetSortedConversationListResp,
+ conversation_unreadCount map[string]int64,
+ conversationMsg map[string]*pbconversation.ConversationElem,
+) {
+ keys := []int64{}
+ for key := range conversations {
+ keys = append(keys, key)
+ }
+
+ sort.Slice(keys[:], func(i, j int) bool {
+ return keys[i] > keys[j]
+ })
+ index := 0
+
+ cons := make([]*pbconversation.ConversationElem, len(conversations))
+ for _, v := range keys {
+ conversationID := conversations[v]
+ conversationElem := conversationMsg[conversationID]
+ conversationElem.UnreadCount = conversation_unreadCount[conversationID]
+ cons[index] = conversationElem
+ index++
+ }
+ resp.ConversationElems = append(resp.ConversationElems, cons...)
+}
+
+func (c *conversationServer) getConversationInfo(
+ ctx context.Context,
+ chatLogs map[string]*sdkws.MsgData,
+ userID string) (map[string]*pbconversation.ConversationElem, error) {
+ var (
+ sendIDs []string
+ groupIDs []string
+ sendMap = make(map[string]*sdkws.UserInfo)
+ groupMap = make(map[string]*sdkws.GroupInfo)
+ conversationMsg = make(map[string]*pbconversation.ConversationElem)
+ )
+ for _, chatLog := range chatLogs {
+ switch chatLog.SessionType {
+ case constant.SingleChatType:
+ if chatLog.SendID == userID {
+ sendIDs = append(sendIDs, chatLog.RecvID)
+ }
+ sendIDs = append(sendIDs, chatLog.SendID)
+ case constant.GroupChatType, constant.SuperGroupChatType:
+ groupIDs = append(groupIDs, chatLog.GroupID)
+ sendIDs = append(sendIDs, chatLog.SendID)
+ }
+ }
+ if len(sendIDs) != 0 {
+ sendInfos, err := c.user.GetUsersInfo(ctx, sendIDs)
+ if err != nil {
+ return nil, err
+ }
+ for _, sendInfo := range sendInfos {
+ sendMap[sendInfo.UserID] = sendInfo
+ }
+ }
+ if len(groupIDs) != 0 {
+ groupInfos, err := c.groupRpcClient.GetGroupInfos(ctx, groupIDs, false)
+ if err != nil {
+ return nil, err
+ }
+ for _, groupInfo := range groupInfos {
+ groupMap[groupInfo.GroupID] = groupInfo
+ }
+ }
+ for conversationID, chatLog := range chatLogs {
+ pbchatLog := &pbconversation.ConversationElem{}
+ msgInfo := &pbconversation.MsgInfo{}
+ if err := utils.CopyStructFields(msgInfo, chatLog); err != nil {
+ return nil, err
+ }
+ switch chatLog.SessionType {
+ case constant.SingleChatType:
+ if chatLog.SendID == userID {
+ msgInfo.FaceURL = sendMap[chatLog.RecvID].FaceURL
+ msgInfo.SenderName = sendMap[chatLog.RecvID].Nickname
+ break
+ }
+ msgInfo.FaceURL = sendMap[chatLog.SendID].FaceURL
+ msgInfo.SenderName = sendMap[chatLog.SendID].Nickname
+ case constant.GroupChatType, constant.SuperGroupChatType:
+ msgInfo.GroupName = groupMap[chatLog.GroupID].GroupName
+ msgInfo.GroupFaceURL = groupMap[chatLog.GroupID].FaceURL
+ msgInfo.GroupMemberCount = groupMap[chatLog.GroupID].MemberCount
+ msgInfo.GroupID = chatLog.GroupID
+ msgInfo.GroupType = groupMap[chatLog.GroupID].GroupType
+ msgInfo.SenderName = sendMap[chatLog.SendID].Nickname
+ }
+ pbchatLog.ConversationID = conversationID
+ msgInfo.LatestMsgRecvTime = chatLog.SendTime
+ pbchatLog.MsgInfo = msgInfo
+ conversationMsg[conversationID] = pbchatLog
+ }
+ return conversationMsg, nil
+}
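
`GetSortedConversationList` sorts pinned and unpinned conversations separately by the latest message time (newest first) and then paginates the merged slice. The sketch below captures only that ordering and pagination step; the `conversation` struct and `paginate` helper are simplified stand-ins for the protobuf types and `utils.Paginate`.

```go
package main

import (
	"fmt"
	"sort"
)

type conversation struct {
	ID       string
	IsPinned bool
	LatestAt int64 // latest message receive time
}

// sortDesc orders a bucket of conversations by latest message time, newest first.
func sortDesc(list []conversation) {
	sort.Slice(list, func(i, j int) bool { return list[i].LatestAt > list[j].LatestAt })
}

// paginate mimics utils.Paginate with 1-based page numbers.
func paginate(list []conversation, pageNumber, showNumber int) []conversation {
	start := (pageNumber - 1) * showNumber
	if start >= len(list) {
		return nil
	}
	end := start + showNumber
	if end > len(list) {
		end = len(list)
	}
	return list[start:end]
}

// sortedList puts pinned conversations first, each bucket newest first.
func sortedList(all []conversation, pageNumber, showNumber int) []conversation {
	var pinned, normal []conversation
	for _, c := range all {
		if c.IsPinned {
			pinned = append(pinned, c)
		} else {
			normal = append(normal, c)
		}
	}
	sortDesc(pinned)
	sortDesc(normal)
	return paginate(append(pinned, normal...), pageNumber, showNumber)
}

func main() {
	out := sortedList([]conversation{
		{"si_a", false, 300}, {"sg_b", true, 100}, {"si_c", false, 500},
	}, 1, 10)
	fmt.Println(out) // pinned sg_b first, then si_c and si_a by recency
}
```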
diff --git a/internal/rpc/friend/black.go b/internal/rpc/friend/black.go
index b1a5ea6b5..ed5791c38 100644
--- a/internal/rpc/friend/black.go
+++ b/internal/rpc/friend/black.go
@@ -27,19 +27,11 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
)
-func (s *friendServer) GetPaginationBlacks(
- ctx context.Context,
- req *pbfriend.GetPaginationBlacksReq,
-) (resp *pbfriend.GetPaginationBlacksResp, err error) {
+func (s *friendServer) GetPaginationBlacks(ctx context.Context, req *pbfriend.GetPaginationBlacksReq) (resp *pbfriend.GetPaginationBlacksResp, err error) {
if err := s.userRpcClient.Access(ctx, req.UserID); err != nil {
return nil, err
}
- var pageNumber, showNumber int32
- if req.Pagination != nil {
- pageNumber = req.Pagination.PageNumber
- showNumber = req.Pagination.ShowNumber
- }
- blacks, total, err := s.blackDatabase.FindOwnerBlacks(ctx, req.UserID, pageNumber, showNumber)
+ total, blacks, err := s.blackDatabase.FindOwnerBlacks(ctx, req.UserID, req.Pagination)
if err != nil {
return nil, err
}
@@ -63,10 +55,7 @@ func (s *friendServer) IsBlack(ctx context.Context, req *pbfriend.IsBlackReq) (*
return resp, nil
}
-func (s *friendServer) RemoveBlack(
- ctx context.Context,
- req *pbfriend.RemoveBlackReq,
-) (*pbfriend.RemoveBlackResp, error) {
+func (s *friendServer) RemoveBlack(ctx context.Context, req *pbfriend.RemoveBlackReq) (*pbfriend.RemoveBlackResp, error) {
if err := s.userRpcClient.Access(ctx, req.OwnerUserID); err != nil {
return nil, err
}
@@ -90,6 +79,7 @@ func (s *friendServer) AddBlack(ctx context.Context, req *pbfriend.AddBlackReq)
BlockUserID: req.BlackUserID,
OperatorUserID: mcontext.GetOpUserID(ctx),
CreateTime: time.Now(),
+ Ex: req.Ex,
}
if err := s.blackDatabase.Create(ctx, []*relation.BlackModel{&black}); err != nil {
return nil, err
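
The black-list and friend-request queries now take a single pagination object and return `(total, items)` instead of `(items, total)`. A minimal sketch of the offset arithmetic behind such a parameter is shown below; the `Pagination` struct and its field names are assumptions for illustration, not the actual protocol types.

```go
package main

import "fmt"

// Pagination is a simplified stand-in for the request pagination message.
type Pagination struct {
	PageNumber int32 // 1-based
	ShowNumber int32
}

// pageSlice returns (total, page) in the new ordering used by the refactor.
func pageSlice[T any](all []T, p *Pagination) (int64, []T) {
	total := int64(len(all))
	if p == nil || p.ShowNumber <= 0 || p.PageNumber <= 0 {
		return total, nil
	}
	start := int((p.PageNumber - 1) * p.ShowNumber)
	if start >= len(all) {
		return total, nil
	}
	end := start + int(p.ShowNumber)
	if end > len(all) {
		end = len(all)
	}
	return total, all[start:end]
}

func main() {
	total, page := pageSlice([]string{"b1", "b2", "b3", "b4", "b5"}, &Pagination{PageNumber: 2, ShowNumber: 2})
	fmt.Println(total, page) // 5 [b3 b4]
}
```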
diff --git a/internal/rpc/friend/callback.go b/internal/rpc/friend/callback.go
index d3b853ef9..e5054d9a9 100644
--- a/internal/rpc/friend/callback.go
+++ b/internal/rpc/friend/callback.go
@@ -16,9 +16,10 @@ package friend
import (
"context"
- "github.com/OpenIMSDK/tools/utils"
pbfriend "github.com/OpenIMSDK/protocol/friend"
+ "github.com/OpenIMSDK/tools/utils"
+
cbapi "github.com/openimsdk/open-im-server/v3/pkg/callbackstruct"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/http"
@@ -33,6 +34,7 @@ func CallbackBeforeAddFriend(ctx context.Context, req *pbfriend.ApplyToAddFriend
FromUserID: req.FromUserID,
ToUserID: req.ToUserID,
ReqMsg: req.ReqMsg,
+ Ex: req.Ex,
}
resp := &cbapi.CallbackBeforeAddFriendResp{}
if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackBeforeAddFriend); err != nil {
@@ -75,3 +77,116 @@ func CallbackAfterSetFriendRemark(ctx context.Context, req *pbfriend.SetFriendRe
}
return nil
}
+func CallbackBeforeAddBlack(ctx context.Context, req *pbfriend.AddBlackReq) error {
+ if !config.Config.Callback.CallbackBeforeAddBlack.Enable {
+ return nil
+ }
+ cbReq := &cbapi.CallbackBeforeAddBlackReq{
+ CallbackCommand: cbapi.CallbackBeforeAddBlackCommand,
+ OwnerUserID: req.OwnerUserID,
+ BlackUserID: req.BlackUserID,
+ }
+ resp := &cbapi.CallbackBeforeAddBlackResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackBeforeAddBlack); err != nil {
+ return err
+ }
+ return nil
+}
+func CallbackAfterAddFriend(ctx context.Context, req *pbfriend.ApplyToAddFriendReq) error {
+ if !config.Config.Callback.CallbackAfterAddFriend.Enable {
+ return nil
+ }
+ cbReq := &cbapi.CallbackAfterAddFriendReq{
+ CallbackCommand: cbapi.CallbackAfterAddFriendCommand,
+ FromUserID: req.FromUserID,
+ ToUserID: req.ToUserID,
+ ReqMsg: req.ReqMsg,
+ }
+ resp := &cbapi.CallbackAfterAddFriendResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackAfterAddFriend); err != nil {
+ return err
+ }
+
+ return nil
+}
+func CallbackBeforeAddFriendAgree(ctx context.Context, req *pbfriend.RespondFriendApplyReq) error {
+ if !config.Config.Callback.CallbackBeforeAddFriendAgree.Enable {
+ return nil
+ }
+ cbReq := &cbapi.CallbackBeforeAddFriendAgreeReq{
+ CallbackCommand: cbapi.CallbackBeforeAddFriendAgreeCommand,
+ FromUserID: req.FromUserID,
+ ToUserID: req.ToUserID,
+ HandleMsg: req.HandleMsg,
+ HandleResult: req.HandleResult,
+ }
+ resp := &cbapi.CallbackBeforeAddFriendAgreeResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackBeforeAddFriendAgree); err != nil {
+ return err
+ }
+ return nil
+}
+func CallbackAfterDeleteFriend(ctx context.Context, req *pbfriend.DeleteFriendReq) error {
+ if !config.Config.Callback.CallbackAfterDeleteFriend.Enable {
+ return nil
+ }
+ cbReq := &cbapi.CallbackAfterDeleteFriendReq{
+ CallbackCommand: cbapi.CallbackAfterDeleteFriendCommand,
+ OwnerUserID: req.OwnerUserID,
+ FriendUserID: req.FriendUserID,
+ }
+ resp := &cbapi.CallbackAfterDeleteFriendResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackAfterDeleteFriend); err != nil {
+ return err
+ }
+ return nil
+}
+func CallbackBeforeImportFriends(ctx context.Context, req *pbfriend.ImportFriendReq) error {
+ if !config.Config.Callback.CallbackBeforeImportFriends.Enable {
+ return nil
+ }
+ cbReq := &cbapi.CallbackBeforeImportFriendsReq{
+ CallbackCommand: cbapi.CallbackBeforeImportFriendsCommand,
+ OwnerUserID: req.OwnerUserID,
+ FriendUserIDs: req.FriendUserIDs,
+ }
+ resp := &cbapi.CallbackBeforeImportFriendsResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackBeforeImportFriends); err != nil {
+ return err
+ }
+ if len(resp.FriendUserIDs) != 0 {
+ req.FriendUserIDs = resp.FriendUserIDs
+ }
+ return nil
+}
+func CallbackAfterImportFriends(ctx context.Context, req *pbfriend.ImportFriendReq) error {
+ if !config.Config.Callback.CallbackAfterImportFriends.Enable {
+ return nil
+ }
+ cbReq := &cbapi.CallbackAfterImportFriendsReq{
+ CallbackCommand: cbapi.CallbackAfterImportFriendsCommand,
+ OwnerUserID: req.OwnerUserID,
+ FriendUserIDs: req.FriendUserIDs,
+ }
+ resp := &cbapi.CallbackAfterImportFriendsResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackAfterImportFriends); err != nil {
+ return err
+ }
+ return nil
+}
+
+func CallbackAfterRemoveBlack(ctx context.Context, req *pbfriend.RemoveBlackReq) error {
+ if !config.Config.Callback.CallbackAfterRemoveBlack.Enable {
+ return nil
+ }
+ cbReq := &cbapi.CallbackAfterRemoveBlackReq{
+ CallbackCommand: cbapi.CallbackAfterRemoveBlackCommand,
+ OwnerUserID: req.OwnerUserID,
+ BlackUserID: req.BlackUserID,
+ }
+ resp := &cbapi.CallbackAfterRemoveBlackResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackAfterRemoveBlack); err != nil {
+ return err
+ }
+ return nil
+}
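
All of the friend callbacks added above follow the same shape: check the per-callback enable flag, build a command-tagged request, POST it to the configured callback URL, and, for `before` hooks, let the response override request fields. The sketch below distills that pattern; `httpPost` and the struct fields are illustrative stand-ins for `http.CallBackPostReturn` and the `callbackstruct` types.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type beforeImportFriendsReq struct {
	CallbackCommand string   `json:"callbackCommand"`
	OwnerUserID     string   `json:"ownerUserID"`
	FriendUserIDs   []string `json:"friendUserIDs"`
}

type beforeImportFriendsResp struct {
	FriendUserIDs []string `json:"friendUserIDs"`
}

// httpPost is an illustrative stand-in for http.CallBackPostReturn.
func httpPost(url string, req any, resp any) error {
	body, err := json.Marshal(req)
	if err != nil {
		return err
	}
	r, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer r.Body.Close()
	return json.NewDecoder(r.Body).Decode(resp)
}

// callbackBeforeImportFriends shows the "before" hook pattern: the callback
// server may shrink or rewrite the friend list before the import proceeds.
func callbackBeforeImportFriends(enabled bool, url, ownerID string, friendIDs []string) ([]string, error) {
	if !enabled {
		return friendIDs, nil
	}
	req := beforeImportFriendsReq{
		CallbackCommand: "callbackBeforeImportFriendsCommand",
		OwnerUserID:     ownerID,
		FriendUserIDs:   friendIDs,
	}
	var resp beforeImportFriendsResp
	if err := httpPost(url, req, &resp); err != nil {
		return nil, err
	}
	if len(resp.FriendUserIDs) != 0 {
		return resp.FriendUserIDs, nil
	}
	return friendIDs, nil
}

func main() {
	ids, err := callbackBeforeImportFriends(false, "http://127.0.0.1:10008/callback", "owner1", []string{"f1", "f2"})
	fmt.Println(ids, err)
}
```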
diff --git a/internal/rpc/friend/friend.go b/internal/rpc/friend/friend.go
index 24014ace1..84702f548 100644
--- a/internal/rpc/friend/friend.go
+++ b/internal/rpc/friend/friend.go
@@ -17,6 +17,8 @@ package friend
import (
"context"
+ "github.com/OpenIMSDK/tools/tx"
+
"github.com/OpenIMSDK/protocol/sdkws"
"github.com/openimsdk/open-im-server/v3/pkg/authverify"
@@ -32,13 +34,13 @@ import (
pbfriend "github.com/OpenIMSDK/protocol/friend"
registry "github.com/OpenIMSDK/tools/discoveryregistry"
"github.com/OpenIMSDK/tools/errs"
- "github.com/OpenIMSDK/tools/tx"
"github.com/OpenIMSDK/tools/utils"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/cache"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/controller"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/relation"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/mgo"
tablerelation "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/unrelation"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
)
@@ -52,49 +54,65 @@ type friendServer struct {
}
func Start(client registry.SvcDiscoveryRegistry, server *grpc.Server) error {
- db, err := relation.NewGormDB()
+ // Initialize MongoDB
+ mongo, err := unrelation.NewMongo()
if err != nil {
return err
}
- if err := db.AutoMigrate(&tablerelation.FriendModel{}, &tablerelation.FriendRequestModel{}, &tablerelation.BlackModel{}); err != nil {
+
+ // Initialize Redis
+ rdb, err := cache.NewRedis()
+ if err != nil {
return err
}
- rdb, err := cache.NewRedis()
+
+ friendMongoDB, err := mgo.NewFriendMongo(mongo.GetDatabase())
+ if err != nil {
+ return err
+ }
+
+ friendRequestMongoDB, err := mgo.NewFriendRequestMongo(mongo.GetDatabase())
+ if err != nil {
+ return err
+ }
+
+ blackMongoDB, err := mgo.NewBlackMongo(mongo.GetDatabase())
if err != nil {
return err
}
- blackDB := relation.NewBlackGorm(db)
- friendDB := relation.NewFriendGorm(db)
+
+ // Initialize RPC clients
userRpcClient := rpcclient.NewUserRpcClient(client)
msgRpcClient := rpcclient.NewMessageRpcClient(client)
+
+ // Initialize notification sender
notificationSender := notification.NewFriendNotificationSender(
&msgRpcClient,
notification.WithRpcFunc(userRpcClient.GetUsersInfo),
)
+ // Register Friend server with refactored MongoDB and Redis integrations
pbfriend.RegisterFriendServer(server, &friendServer{
friendDatabase: controller.NewFriendDatabase(
- friendDB,
- relation.NewFriendRequestGorm(db),
- cache.NewFriendCacheRedis(rdb, friendDB, cache.GetDefaultOpt()),
- tx.NewGorm(db),
+ friendMongoDB,
+ friendRequestMongoDB,
+ cache.NewFriendCacheRedis(rdb, friendMongoDB, cache.GetDefaultOpt()),
+ tx.NewMongo(mongo.GetClient()),
),
blackDatabase: controller.NewBlackDatabase(
- blackDB,
- cache.NewBlackCacheRedis(rdb, blackDB, cache.GetDefaultOpt()),
+ blackMongoDB,
+ cache.NewBlackCacheRedis(rdb, blackMongoDB, cache.GetDefaultOpt()),
),
userRpcClient: &userRpcClient,
notificationSender: notificationSender,
RegisterCenter: client,
conversationRpcClient: rpcclient.NewConversationRpcClient(client),
})
+
return nil
}
// ok.
-func (s *friendServer) ApplyToAddFriend(
- ctx context.Context,
- req *pbfriend.ApplyToAddFriendReq,
-) (resp *pbfriend.ApplyToAddFriendResp, err error) {
+func (s *friendServer) ApplyToAddFriend(ctx context.Context, req *pbfriend.ApplyToAddFriendReq) (resp *pbfriend.ApplyToAddFriendResp, err error) {
defer log.ZInfo(ctx, utils.GetFuncName()+" Return")
resp = &pbfriend.ApplyToAddFriendResp{}
if err := authverify.CheckAccessV3(ctx, req.FromUserID); err != nil {
@@ -103,7 +121,7 @@ func (s *friendServer) ApplyToAddFriend(
if req.ToUserID == req.FromUserID {
return nil, errs.ErrCanNotAddYourself.Wrap()
}
- if err := CallbackBeforeAddFriend(ctx, req); err != nil && err != errs.ErrCallbackContinue {
+ if err = CallbackBeforeAddFriend(ctx, req); err != nil && err != errs.ErrCallbackContinue {
return nil, err
}
if _, err := s.userRpcClient.GetUsersInfoMap(ctx, []string{req.ToUserID, req.FromUserID}); err != nil {
@@ -120,14 +138,14 @@ func (s *friendServer) ApplyToAddFriend(
return nil, err
}
s.notificationSender.FriendApplicationAddNotification(ctx, req)
+ if err = CallbackAfterAddFriend(ctx, req); err != nil && err != errs.ErrCallbackContinue {
+ return nil, err
+ }
return resp, nil
}
// ok.
-func (s *friendServer) ImportFriends(
- ctx context.Context,
- req *pbfriend.ImportFriendReq,
-) (resp *pbfriend.ImportFriendResp, err error) {
+func (s *friendServer) ImportFriends(ctx context.Context, req *pbfriend.ImportFriendReq) (resp *pbfriend.ImportFriendResp, err error) {
defer log.ZInfo(ctx, utils.GetFuncName()+" Return")
if err := authverify.CheckAdmin(ctx); err != nil {
return nil, err
@@ -141,6 +159,10 @@ func (s *friendServer) ImportFriends(
if utils.Duplicate(req.FriendUserIDs) {
return nil, errs.ErrArgs.Wrap("friend userID repeated")
}
+ if err := CallbackBeforeImportFriends(ctx, req); err != nil {
+ return nil, err
+ }
+
if err := s.friendDatabase.BecomeFriends(ctx, req.OwnerUserID, req.FriendUserIDs, constant.BecomeFriendByImport); err != nil {
return nil, err
}
@@ -151,14 +173,14 @@ func (s *friendServer) ImportFriends(
HandleResult: constant.FriendResponseAgree,
})
}
+ if err := CallbackAfterImportFriends(ctx, req); err != nil {
+ return nil, err
+ }
return &pbfriend.ImportFriendResp{}, nil
}
// ok.
-func (s *friendServer) RespondFriendApply(
- ctx context.Context,
- req *pbfriend.RespondFriendApplyReq,
-) (resp *pbfriend.RespondFriendApplyResp, err error) {
+func (s *friendServer) RespondFriendApply(ctx context.Context, req *pbfriend.RespondFriendApplyReq) (resp *pbfriend.RespondFriendApplyResp, err error) {
defer log.ZInfo(ctx, utils.GetFuncName()+" Return")
resp = &pbfriend.RespondFriendApplyResp{}
if err := authverify.CheckAccessV3(ctx, req.ToUserID); err != nil {
@@ -172,6 +194,9 @@ func (s *friendServer) RespondFriendApply(
HandleResult: req.HandleResult,
}
if req.HandleResult == constant.FriendResponseAgree {
+ if err := CallbackBeforeAddFriendAgree(ctx, req); err != nil && err != errs.ErrCallbackContinue {
+ return nil, err
+ }
err := s.friendDatabase.AgreeFriendRequest(ctx, &friendRequest)
if err != nil {
return nil, err
@@ -191,10 +216,7 @@ func (s *friendServer) RespondFriendApply(
}
// ok.
-func (s *friendServer) DeleteFriend(
- ctx context.Context,
- req *pbfriend.DeleteFriendReq,
-) (resp *pbfriend.DeleteFriendResp, err error) {
+func (s *friendServer) DeleteFriend(ctx context.Context, req *pbfriend.DeleteFriendReq) (resp *pbfriend.DeleteFriendResp, err error) {
defer log.ZInfo(ctx, utils.GetFuncName()+" Return")
resp = &pbfriend.DeleteFriendResp{}
if err := s.userRpcClient.Access(ctx, req.OwnerUserID); err != nil {
@@ -208,14 +230,14 @@ func (s *friendServer) DeleteFriend(
return nil, err
}
s.notificationSender.FriendDeletedNotification(ctx, req)
+ if err := CallbackAfterDeleteFriend(ctx, req); err != nil {
+ return nil, err
+ }
return resp, nil
}
// ok.
-func (s *friendServer) SetFriendRemark(
- ctx context.Context,
- req *pbfriend.SetFriendRemarkReq,
-) (resp *pbfriend.SetFriendRemarkResp, err error) {
+func (s *friendServer) SetFriendRemark(ctx context.Context, req *pbfriend.SetFriendRemarkReq) (resp *pbfriend.SetFriendRemarkResp, err error) {
defer log.ZInfo(ctx, utils.GetFuncName()+" Return")
if err = CallbackBeforeSetFriendRemark(ctx, req); err != nil && err != errs.ErrCallbackContinue {
@@ -240,10 +262,7 @@ func (s *friendServer) SetFriendRemark(
}
// ok.
-func (s *friendServer) GetDesignatedFriends(
- ctx context.Context,
- req *pbfriend.GetDesignatedFriendsReq,
-) (resp *pbfriend.GetDesignatedFriendsResp, err error) {
+func (s *friendServer) GetDesignatedFriends(ctx context.Context, req *pbfriend.GetDesignatedFriendsReq) (resp *pbfriend.GetDesignatedFriendsResp, err error) {
defer log.ZInfo(ctx, utils.GetFuncName()+" Return")
resp = &pbfriend.GetDesignatedFriendsResp{}
if utils.Duplicate(req.FriendUserIDs) {
@@ -274,15 +293,12 @@ func (s *friendServer) GetDesignatedFriendsApply(ctx context.Context,
}
// ok Get the friend requests received by the user (i.e. applications initiated by others).
-func (s *friendServer) GetPaginationFriendsApplyTo(
- ctx context.Context,
- req *pbfriend.GetPaginationFriendsApplyToReq,
-) (resp *pbfriend.GetPaginationFriendsApplyToResp, err error) {
+func (s *friendServer) GetPaginationFriendsApplyTo(ctx context.Context, req *pbfriend.GetPaginationFriendsApplyToReq) (resp *pbfriend.GetPaginationFriendsApplyToResp, err error) {
defer log.ZInfo(ctx, utils.GetFuncName()+" Return")
if err := s.userRpcClient.Access(ctx, req.UserID); err != nil {
return nil, err
}
- friendRequests, total, err := s.friendDatabase.PageFriendRequestToMe(ctx, req.UserID, req.Pagination.PageNumber, req.Pagination.ShowNumber)
+ total, friendRequests, err := s.friendDatabase.PageFriendRequestToMe(ctx, req.UserID, req.Pagination)
if err != nil {
return nil, err
}
@@ -296,16 +312,13 @@ func (s *friendServer) GetPaginationFriendsApplyTo(
}
// ok Get the list of friend requests the user has sent out.
-func (s *friendServer) GetPaginationFriendsApplyFrom(
- ctx context.Context,
- req *pbfriend.GetPaginationFriendsApplyFromReq,
-) (resp *pbfriend.GetPaginationFriendsApplyFromResp, err error) {
+func (s *friendServer) GetPaginationFriendsApplyFrom(ctx context.Context, req *pbfriend.GetPaginationFriendsApplyFromReq) (resp *pbfriend.GetPaginationFriendsApplyFromResp, err error) {
defer log.ZInfo(ctx, utils.GetFuncName()+" Return")
resp = &pbfriend.GetPaginationFriendsApplyFromResp{}
if err := s.userRpcClient.Access(ctx, req.UserID); err != nil {
return nil, err
}
- friendRequests, total, err := s.friendDatabase.PageFriendRequestFromMe(ctx, req.UserID, req.Pagination.PageNumber, req.Pagination.ShowNumber)
+ total, friendRequests, err := s.friendDatabase.PageFriendRequestFromMe(ctx, req.UserID, req.Pagination)
if err != nil {
return nil, err
}
@@ -318,10 +331,7 @@ func (s *friendServer) GetPaginationFriendsApplyFrom(
}
// ok.
-func (s *friendServer) IsFriend(
- ctx context.Context,
- req *pbfriend.IsFriendReq,
-) (resp *pbfriend.IsFriendResp, err error) {
+func (s *friendServer) IsFriend(ctx context.Context, req *pbfriend.IsFriendReq) (resp *pbfriend.IsFriendResp, err error) {
defer log.ZInfo(ctx, utils.GetFuncName()+" Return")
resp = &pbfriend.IsFriendResp{}
resp.InUser1Friends, resp.InUser2Friends, err = s.friendDatabase.CheckIn(ctx, req.UserID1, req.UserID2)
@@ -331,15 +341,12 @@ func (s *friendServer) IsFriend(
return resp, nil
}
-func (s *friendServer) GetPaginationFriends(
- ctx context.Context,
- req *pbfriend.GetPaginationFriendsReq,
-) (resp *pbfriend.GetPaginationFriendsResp, err error) {
+func (s *friendServer) GetPaginationFriends(ctx context.Context, req *pbfriend.GetPaginationFriendsReq) (resp *pbfriend.GetPaginationFriendsResp, err error) {
defer log.ZInfo(ctx, utils.GetFuncName()+" Return")
if err := s.userRpcClient.Access(ctx, req.UserID); err != nil {
return nil, err
}
- friends, total, err := s.friendDatabase.PageOwnerFriends(ctx, req.UserID, req.Pagination.PageNumber, req.Pagination.ShowNumber)
+ total, friends, err := s.friendDatabase.PageOwnerFriends(ctx, req.UserID, req.Pagination)
if err != nil {
return nil, err
}
@@ -352,10 +359,7 @@ func (s *friendServer) GetPaginationFriends(
return resp, nil
}
-func (s *friendServer) GetFriendIDs(
- ctx context.Context,
- req *pbfriend.GetFriendIDsReq,
-) (resp *pbfriend.GetFriendIDsResp, err error) {
+func (s *friendServer) GetFriendIDs(ctx context.Context, req *pbfriend.GetFriendIDsReq) (resp *pbfriend.GetFriendIDsResp, err error) {
defer log.ZInfo(ctx, utils.GetFuncName()+" Return")
if err := s.userRpcClient.Access(ctx, req.UserID); err != nil {
return nil, err
@@ -403,6 +407,7 @@ func (s *friendServer) GetSpecifiedFriendsInfo(ctx context.Context, req *pbfrien
}
var friendInfo *sdkws.FriendInfo
if friend := friendMap[userID]; friend != nil {
+
friendInfo = &sdkws.FriendInfo{
OwnerUserID: friend.OwnerUserID,
Remark: friend.Remark,
@@ -410,6 +415,7 @@ func (s *friendServer) GetSpecifiedFriendsInfo(ctx context.Context, req *pbfrien
AddSource: friend.AddSource,
OperatorUserID: friend.OperatorUserID,
Ex: friend.Ex,
+ IsPinned: friend.IsPinned,
}
}
var blackInfo *sdkws.BlackInfo
@@ -430,3 +436,42 @@ func (s *friendServer) GetSpecifiedFriendsInfo(ctx context.Context, req *pbfrien
}
return resp, nil
}
+func (s *friendServer) UpdateFriends(
+ ctx context.Context,
+ req *pbfriend.UpdateFriendsReq,
+) (*pbfriend.UpdateFriendsResp, error) {
+ if len(req.FriendUserIDs) == 0 {
+ return nil, errs.ErrArgs.Wrap("friendIDList is empty")
+ }
+ if utils.Duplicate(req.FriendUserIDs) {
+ return nil, errs.ErrArgs.Wrap("friendIDList repeated")
+ }
+
+ _, err := s.friendDatabase.FindFriendsWithError(ctx, req.OwnerUserID, req.FriendUserIDs)
+ if err != nil {
+ return nil, err
+ }
+
+ val := make(map[string]any)
+
+ if req.IsPinned != nil {
+ val["is_pinned"] = req.IsPinned.Value
+ }
+ if req.Remark != nil {
+ val["remark"] = req.Remark.Value
+ }
+ if req.Ex != nil {
+ val["ex"] = req.Ex.Value
+ }
+ if err = s.friendDatabase.UpdateFriends(ctx, req.OwnerUserID, req.FriendUserIDs, val); err != nil {
+ return nil, err
+ }
+
+ resp := &pbfriend.UpdateFriendsResp{}
+
+ err = s.notificationSender.FriendsInfoUpdateNotification(ctx, req.OwnerUserID, req.FriendUserIDs)
+ if err != nil {
+ return nil, errs.Wrap(err, "FriendsInfoUpdateNotification Error")
+ }
+ return resp, nil
+}
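
The new `UpdateFriends` handler builds a partial-update map only from the optional fields the caller actually set, so an omitted field never overwrites stored data. The sketch below shows that pattern with plain pointers standing in for the `wrapperspb` optional types.

```go
package main

import "fmt"

// updateFriendsReq uses pointers as stand-ins for wrapperspb optional fields.
type updateFriendsReq struct {
	IsPinned *bool
	Remark   *string
	Ex       *string
}

// buildUpdate collects only the fields the caller actually provided.
func buildUpdate(req updateFriendsReq) map[string]any {
	val := make(map[string]any)
	if req.IsPinned != nil {
		val["is_pinned"] = *req.IsPinned
	}
	if req.Remark != nil {
		val["remark"] = *req.Remark
	}
	if req.Ex != nil {
		val["ex"] = *req.Ex
	}
	return val
}

func main() {
	pinned := true
	remark := "college friend"
	fmt.Println(buildUpdate(updateFriendsReq{IsPinned: &pinned, Remark: &remark}))
	// map[is_pinned:true remark:college friend]
}
```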
diff --git a/internal/rpc/group/cache.go b/internal/rpc/group/cache.go
index 23c57ff89..fc387736d 100644
--- a/internal/rpc/group/cache.go
+++ b/internal/rpc/group/cache.go
@@ -26,7 +26,7 @@ func (s *groupServer) GetGroupInfoCache(
ctx context.Context,
req *pbgroup.GetGroupInfoCacheReq,
) (resp *pbgroup.GetGroupInfoCacheResp, err error) {
- group, err := s.GroupDatabase.TakeGroup(ctx, req.GroupID)
+ group, err := s.db.TakeGroup(ctx, req.GroupID)
if err != nil {
return nil, err
}
@@ -38,7 +38,7 @@ func (s *groupServer) GetGroupMemberCache(
ctx context.Context,
req *pbgroup.GetGroupMemberCacheReq,
) (resp *pbgroup.GetGroupMemberCacheResp, err error) {
- members, err := s.GroupDatabase.TakeGroupMember(ctx, req.GroupID, req.GroupMemberID)
+ members, err := s.db.TakeGroupMember(ctx, req.GroupID, req.GroupMemberID)
if err != nil {
return nil, err
}
diff --git a/internal/rpc/group/callback.go b/internal/rpc/group/callback.go
index 13f9737b5..d891f4d1e 100644
--- a/internal/rpc/group/callback.go
+++ b/internal/rpc/group/callback.go
@@ -18,6 +18,8 @@ import (
"context"
"time"
+ "github.com/OpenIMSDK/tools/log"
+
"github.com/OpenIMSDK/protocol/constant"
"github.com/OpenIMSDK/protocol/group"
"github.com/OpenIMSDK/protocol/wrapperspb"
@@ -124,7 +126,14 @@ func CallbackBeforeMemberJoinGroup(
GroupEx: groupEx,
}
resp := &callbackstruct.CallbackBeforeMemberJoinGroupResp{}
- if err = http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, callbackReq, resp, config.Config.Callback.CallbackBeforeMemberJoinGroup); err != nil {
+ err = http.CallBackPostReturn(
+ ctx,
+ config.Config.Callback.CallbackUrl,
+ callbackReq,
+ resp,
+ config.Config.Callback.CallbackBeforeMemberJoinGroup,
+ )
+ if err != nil {
return err
}
if resp.MuteEndTime != nil {
@@ -159,7 +168,14 @@ func CallbackBeforeSetGroupMemberInfo(ctx context.Context, req *group.SetGroupMe
callbackReq.Ex = &req.Ex.Value
}
resp := &callbackstruct.CallbackBeforeSetGroupMemberInfoResp{}
- if err = http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, callbackReq, resp, config.Config.Callback.CallbackBeforeSetGroupMemberInfo); err != nil {
+ err = http.CallBackPostReturn(
+ ctx,
+ config.Config.Callback.CallbackUrl,
+ callbackReq,
+ resp,
+ config.Config.Callback.CallbackBeforeSetGroupMemberInfo,
+ )
+ if err != nil {
return err
}
if resp.FaceURL != nil {
@@ -176,13 +192,12 @@ func CallbackBeforeSetGroupMemberInfo(ctx context.Context, req *group.SetGroupMe
}
return nil
}
-
func CallbackAfterSetGroupMemberInfo(ctx context.Context, req *group.SetGroupMemberInfo) (err error) {
if !config.Config.Callback.CallbackBeforeSetGroupMemberInfo.Enable {
return nil
}
callbackReq := callbackstruct.CallbackAfterSetGroupMemberInfoReq{
- CallbackCommand: callbackstruct.CallbackBeforeSetGroupMemberInfoCommand,
+ CallbackCommand: callbackstruct.CallbackAfterSetGroupMemberInfoCommand,
GroupID: req.GroupID,
UserID: req.UserID,
}
@@ -199,7 +214,7 @@ func CallbackAfterSetGroupMemberInfo(ctx context.Context, req *group.SetGroupMem
callbackReq.Ex = &req.Ex.Value
}
resp := &callbackstruct.CallbackAfterSetGroupMemberInfoResp{}
- if err = http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, callbackReq, resp, config.Config.Callback.CallbackBeforeSetGroupMemberInfo); err != nil {
+ if err = http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, callbackReq, resp, config.Config.Callback.CallbackAfterSetGroupMemberInfo); err != nil {
return err
}
return nil
@@ -264,20 +279,152 @@ func CallbackApplyJoinGroupBefore(ctx context.Context, req *callbackstruct.Callb
return nil
}
-func CallbackTransferGroupOwnerAfter(ctx context.Context, req *pbgroup.TransferGroupOwnerReq) (err error) {
- if !config.Config.Callback.CallbackTransferGroupOwnerAfter.Enable {
+func CallbackAfterTransferGroupOwner(ctx context.Context, req *pbgroup.TransferGroupOwnerReq) (err error) {
+ if !config.Config.Callback.CallbackAfterTransferGroupOwner.Enable {
return nil
}
cbReq := &callbackstruct.CallbackTransferGroupOwnerReq{
- CallbackCommand: callbackstruct.CallbackTransferGroupOwnerAfter,
+ CallbackCommand: callbackstruct.CallbackAfterTransferGroupOwner,
GroupID: req.GroupID,
OldOwnerUserID: req.OldOwnerUserID,
NewOwnerUserID: req.NewOwnerUserID,
}
resp := &callbackstruct.CallbackTransferGroupOwnerResp{}
- if err = http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackBeforeJoinGroup); err != nil {
+ if err = http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackAfterTransferGroupOwner); err != nil {
+ return err
+ }
+ return nil
+}
+func CallbackBeforeInviteUserToGroup(ctx context.Context, req *group.InviteUserToGroupReq) (err error) {
+ if !config.Config.Callback.CallbackBeforeInviteUserToGroup.Enable {
+ return nil
+ }
+
+ callbackReq := &callbackstruct.CallbackBeforeInviteUserToGroupReq{
+ CallbackCommand: callbackstruct.CallbackBeforeInviteJoinGroupCommand,
+ OperationID: mcontext.GetOperationID(ctx),
+ GroupID: req.GroupID,
+ Reason: req.Reason,
+ InvitedUserIDs: req.InvitedUserIDs,
+ }
+
+ resp := &callbackstruct.CallbackBeforeInviteUserToGroupResp{}
+ err = http.CallBackPostReturn(
+ ctx,
+ config.Config.Callback.CallbackUrl,
+ callbackReq,
+ resp,
+ config.Config.Callback.CallbackBeforeInviteUserToGroup,
+ )
+
+ if err != nil {
+ return err
+ }
+
+ if len(resp.RefusedMembersAccount) > 0 {
+ // Handle the case where the callback refused some of the invitees,
+ // e.g. remove them from req.InvitedUserIDs before the invite proceeds.
+ }
+ return nil
+}
+
+func CallbackAfterJoinGroup(ctx context.Context, req *group.JoinGroupReq) error {
+ if !config.Config.Callback.CallbackAfterJoinGroup.Enable {
+ return nil
+ }
+ callbackReq := &callbackstruct.CallbackAfterJoinGroupReq{
+ CallbackCommand: callbackstruct.CallbackAfterJoinGroupCommand,
+ OperationID: mcontext.GetOperationID(ctx),
+ GroupID: req.GroupID,
+ ReqMessage: req.ReqMessage,
+ JoinSource: req.JoinSource,
+ InviterUserID: req.InviterUserID,
+ }
+ resp := &callbackstruct.CallbackAfterJoinGroupResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, callbackReq, resp, config.Config.Callback.CallbackAfterJoinGroup); err != nil {
+ return err
+ }
+ return nil
+}
+
+func CallbackBeforeSetGroupInfo(ctx context.Context, req *group.SetGroupInfoReq) error {
+ if !config.Config.Callback.CallbackBeforeSetGroupInfo.Enable {
+ return nil
+ }
+ callbackReq := &callbackstruct.CallbackBeforeSetGroupInfoReq{
+ CallbackCommand: callbackstruct.CallbackBeforeSetGroupInfoCommand,
+ GroupID: req.GroupInfoForSet.GroupID,
+ Notification: req.GroupInfoForSet.Notification,
+ Introduction: req.GroupInfoForSet.Introduction,
+ FaceURL: req.GroupInfoForSet.FaceURL,
+ GroupName: req.GroupInfoForSet.GroupName,
+ }
+
+ if req.GroupInfoForSet.Ex != nil {
+ callbackReq.Ex = req.GroupInfoForSet.Ex.Value
+ }
+ log.ZDebug(ctx, "debug CallbackBeforeSetGroupInfo", callbackReq.Ex)
+ if req.GroupInfoForSet.NeedVerification != nil {
+ callbackReq.NeedVerification = req.GroupInfoForSet.NeedVerification.Value
+ }
+ if req.GroupInfoForSet.LookMemberInfo != nil {
+ callbackReq.LookMemberInfo = req.GroupInfoForSet.LookMemberInfo.Value
+ }
+ if req.GroupInfoForSet.ApplyMemberFriend != nil {
+ callbackReq.ApplyMemberFriend = req.GroupInfoForSet.ApplyMemberFriend.Value
+ }
+ resp := &callbackstruct.CallbackBeforeSetGroupInfoResp{}
+
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, callbackReq, resp, config.Config.Callback.CallbackBeforeSetGroupInfo); err != nil {
+ return err
+ }
+
+ if resp.Ex != nil {
+ req.GroupInfoForSet.Ex = wrapperspb.String(*resp.Ex)
+ }
+ if resp.NeedVerification != nil {
+ req.GroupInfoForSet.NeedVerification = wrapperspb.Int32(*resp.NeedVerification)
+ }
+ if resp.LookMemberInfo != nil {
+ req.GroupInfoForSet.LookMemberInfo = wrapperspb.Int32(*resp.LookMemberInfo)
+ }
+ if resp.ApplyMemberFriend != nil {
+ req.GroupInfoForSet.ApplyMemberFriend = wrapperspb.Int32(*resp.ApplyMemberFriend)
+ }
+ utils.NotNilReplace(&req.GroupInfoForSet.GroupID, &resp.GroupID)
+ utils.NotNilReplace(&req.GroupInfoForSet.GroupName, &resp.GroupName)
+ utils.NotNilReplace(&req.GroupInfoForSet.FaceURL, &resp.FaceURL)
+ utils.NotNilReplace(&req.GroupInfoForSet.Introduction, &resp.Introduction)
+ return nil
+}
+func CallbackAfterSetGroupInfo(ctx context.Context, req *group.SetGroupInfoReq) error {
+ if !config.Config.Callback.CallbackAfterSetGroupInfo.Enable {
+ return nil
+ }
+ callbackReq := &callbackstruct.CallbackAfterSetGroupInfoReq{
+ CallbackCommand: callbackstruct.CallbackAfterSetGroupInfoCommand,
+ GroupID: req.GroupInfoForSet.GroupID,
+ Notification: req.GroupInfoForSet.Notification,
+ Introduction: req.GroupInfoForSet.Introduction,
+ FaceURL: req.GroupInfoForSet.FaceURL,
+ GroupName: req.GroupInfoForSet.GroupName,
+ }
+ if req.GroupInfoForSet.Ex != nil {
+ callbackReq.Ex = &req.GroupInfoForSet.Ex.Value
+ }
+ if req.GroupInfoForSet.NeedVerification != nil {
+ callbackReq.NeedVerification = &req.GroupInfoForSet.NeedVerification.Value
+ }
+ if req.GroupInfoForSet.LookMemberInfo != nil {
+ callbackReq.LookMemberInfo = &req.GroupInfoForSet.LookMemberInfo.Value
+ }
+ if req.GroupInfoForSet.ApplyMemberFriend != nil {
+ callbackReq.ApplyMemberFriend = &req.GroupInfoForSet.ApplyMemberFriend.Value
+ }
+ resp := &callbackstruct.CallbackAfterSetGroupInfoResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, callbackReq, resp, config.Config.Callback.CallbackAfterSetGroupInfo); err != nil {
return err
}
return nil
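
The `before` group-info callback lets the callback server override selected fields of the request, using `utils.NotNilReplace` for plain fields and explicit nil checks for wrapper types. The helper below is a hedged re-implementation of that replace-if-present idea using generics; it is not the actual `OpenIMSDK/tools` function.

```go
package main

import "fmt"

// notNilReplace overwrites *dst with *src only when src is non-nil,
// mirroring the behaviour relied on by CallbackBeforeSetGroupInfo.
func notNilReplace[T any](dst *T, src *T) {
	if src == nil {
		return
	}
	*dst = *src
}

func main() {
	groupName := "old name"
	faceURL := "https://example.com/old.png"

	var respGroupName *string // callback did not touch the name
	newFace := "https://example.com/new.png"
	respFaceURL := &newFace // callback rewrote the avatar

	notNilReplace(&groupName, respGroupName)
	notNilReplace(&faceURL, respFaceURL)
	fmt.Println(groupName, faceURL) // old name https://example.com/new.png
}
```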
diff --git a/internal/rpc/group/db_map.go b/internal/rpc/group/db_map.go
index f793582f8..07084873c 100644
--- a/internal/rpc/group/db_map.go
+++ b/internal/rpc/group/db_map.go
@@ -27,7 +27,7 @@ import (
func UpdateGroupInfoMap(ctx context.Context, group *sdkws.GroupInfoForSet) map[string]any {
m := make(map[string]any)
if group.GroupName != "" {
- m["name"] = group.GroupName
+ m["group_name"] = group.GroupName
}
if group.Notification != "" {
m["notification"] = group.Notification
diff --git a/internal/rpc/group/fill.go b/internal/rpc/group/fill.go
index cb47d9f6e..ac539de19 100644
--- a/internal/rpc/group/fill.go
+++ b/internal/rpc/group/fill.go
@@ -17,119 +17,9 @@ package group
import (
"context"
- "github.com/OpenIMSDK/tools/utils"
-
relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
)
-func (s *groupServer) FindGroupMember(ctx context.Context, groupIDs []string, userIDs []string, roleLevels []int32) ([]*relationtb.GroupMemberModel, error) {
- members, err := s.GroupDatabase.FindGroupMember(ctx, groupIDs, userIDs, roleLevels)
- if err != nil {
- return nil, err
- }
- emptyUserIDs := make(map[string]struct{})
- for _, member := range members {
- if member.Nickname == "" || member.FaceURL == "" {
- emptyUserIDs[member.UserID] = struct{}{}
- }
- }
- if len(emptyUserIDs) > 0 {
- users, err := s.User.GetPublicUserInfoMap(ctx, utils.Keys(emptyUserIDs), true)
- if err != nil {
- return nil, err
- }
- for i, member := range members {
- user, ok := users[member.UserID]
- if !ok {
- continue
- }
- if member.Nickname == "" {
- members[i].Nickname = user.Nickname
- }
- if member.FaceURL == "" {
- members[i].FaceURL = user.FaceURL
- }
- }
- }
- return members, nil
-}
-
-func (s *groupServer) TakeGroupMember(
- ctx context.Context,
- groupID string,
- userID string,
-) (*relationtb.GroupMemberModel, error) {
- member, err := s.GroupDatabase.TakeGroupMember(ctx, groupID, userID)
- if err != nil {
- return nil, err
- }
- if member.Nickname == "" || member.FaceURL == "" {
- user, err := s.User.GetPublicUserInfo(ctx, userID)
- if err != nil {
- return nil, err
- }
- if member.Nickname == "" {
- member.Nickname = user.Nickname
- }
- if member.FaceURL == "" {
- member.FaceURL = user.FaceURL
- }
- }
- return member, nil
-}
-
-func (s *groupServer) TakeGroupOwner(ctx context.Context, groupID string) (*relationtb.GroupMemberModel, error) {
- owner, err := s.GroupDatabase.TakeGroupOwner(ctx, groupID)
- if err != nil {
- return nil, err
- }
- if owner.Nickname == "" || owner.FaceURL == "" {
- user, err := s.User.GetUserInfo(ctx, owner.UserID)
- if err != nil {
- return nil, err
- }
- if owner.Nickname == "" {
- owner.Nickname = user.Nickname
- }
- if owner.FaceURL == "" {
- owner.FaceURL = user.FaceURL
- }
- }
- return owner, nil
-}
-
-func (s *groupServer) PageGetGroupMember(
- ctx context.Context,
- groupID string,
- pageNumber, showNumber int32,
-) (uint32, []*relationtb.GroupMemberModel, error) {
- total, members, err := s.GroupDatabase.PageGetGroupMember(ctx, groupID, pageNumber, showNumber)
- if err != nil {
- return 0, nil, err
- }
- emptyUserIDs := make(map[string]struct{})
- for _, member := range members {
- if member.Nickname == "" || member.FaceURL == "" {
- emptyUserIDs[member.UserID] = struct{}{}
- }
- }
- if len(emptyUserIDs) > 0 {
- users, err := s.User.GetPublicUserInfoMap(ctx, utils.Keys(emptyUserIDs), true)
- if err != nil {
- return 0, nil, err
- }
- for i, member := range members {
- user, ok := users[member.UserID]
- if !ok {
- continue
- }
- if member.Nickname == "" {
- members[i].Nickname = user.Nickname
- }
- if member.FaceURL == "" {
- members[i].FaceURL = user.FaceURL
- }
- }
- }
- return total, members, nil
+func (s *groupServer) PopulateGroupMember(ctx context.Context, members ...*relationtb.GroupMemberModel) error {
+ return s.Notification.PopulateGroupMember(ctx, members...)
}
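
The per-call fill helpers removed above all did the same thing: when a group member row has an empty nickname or face URL, backfill it from the user service. That logic is now consolidated behind a single `PopulateGroupMember` on the notification sender; a condensed sketch of the backfill itself follows, with `publicUser` and `lookupUsers` as assumed stand-ins for the user RPC types.

```go
package main

import "fmt"

type groupMember struct {
	UserID   string
	Nickname string
	FaceURL  string
}

type publicUser struct {
	Nickname string
	FaceURL  string
}

// lookupUsers stands in for the user RPC batch query.
func lookupUsers(userIDs []string) map[string]publicUser {
	out := make(map[string]publicUser)
	for _, id := range userIDs {
		out[id] = publicUser{Nickname: "user-" + id, FaceURL: "https://example.com/" + id + ".png"}
	}
	return out
}

// populateGroupMembers backfills empty display fields from the user profile.
func populateGroupMembers(members []*groupMember) {
	var missing []string
	for _, m := range members {
		if m.Nickname == "" || m.FaceURL == "" {
			missing = append(missing, m.UserID)
		}
	}
	if len(missing) == 0 {
		return
	}
	users := lookupUsers(missing)
	for _, m := range members {
		u, ok := users[m.UserID]
		if !ok {
			continue
		}
		if m.Nickname == "" {
			m.Nickname = u.Nickname
		}
		if m.FaceURL == "" {
			m.FaceURL = u.FaceURL
		}
	}
}

func main() {
	ms := []*groupMember{{UserID: "u1"}, {UserID: "u2", Nickname: "Bob"}}
	populateGroupMembers(ms)
	for _, m := range ms {
		fmt.Println(m.UserID, m.Nickname, m.FaceURL)
	}
}
```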
diff --git a/internal/rpc/group/group.go b/internal/rpc/group/group.go
index 227b7959d..1d068b1b2 100644
--- a/internal/rpc/group/group.go
+++ b/internal/rpc/group/group.go
@@ -16,9 +16,6 @@ package group
import (
"context"
- "crypto/md5"
- "encoding/binary"
- "encoding/json"
"fmt"
"math/big"
"math/rand"
@@ -28,11 +25,15 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/callbackstruct"
- "github.com/openimsdk/open-im-server/v3/pkg/authverify"
- "github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
-
pbconversation "github.com/OpenIMSDK/protocol/conversation"
"github.com/OpenIMSDK/protocol/wrapperspb"
+ "github.com/OpenIMSDK/tools/tx"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/mgo"
+ "github.com/openimsdk/open-im-server/v3/pkg/rpcclient/grouphash"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/authverify"
+ "github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
@@ -54,24 +55,28 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/db/cache"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/controller"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/relation"
relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/unrelation"
)
func Start(client discoveryregistry.SvcDiscoveryRegistry, server *grpc.Server) error {
- db, err := relation.NewGormDB()
+ mongo, err := unrelation.NewMongo()
if err != nil {
return err
}
- if err := db.AutoMigrate(&relationtb.GroupModel{}, &relationtb.GroupMemberModel{}, &relationtb.GroupRequestModel{}); err != nil {
+ rdb, err := cache.NewRedis()
+ if err != nil {
return err
}
- mongo, err := unrelation.NewMongo()
+ groupDB, err := mgo.NewGroupMongo(mongo.GetDatabase())
if err != nil {
return err
}
- rdb, err := cache.NewRedis()
+ groupMemberDB, err := mgo.NewGroupMember(mongo.GetDatabase())
+ if err != nil {
+ return err
+ }
+ groupRequestDB, err := mgo.NewGroupRequestMgo(mongo.GetDatabase())
if err != nil {
return err
}
@@ -79,8 +84,8 @@ func Start(client discoveryregistry.SvcDiscoveryRegistry, server *grpc.Server) e
msgRpcClient := rpcclient.NewMessageRpcClient(client)
conversationRpcClient := rpcclient.NewConversationRpcClient(client)
var gs groupServer
- database := controller.InitGroupDatabase(db, rdb, mongo.GetDatabase(), gs.groupMemberHashCode)
- gs.GroupDatabase = database
+ database := controller.NewGroupDatabase(rdb, groupDB, groupMemberDB, groupRequestDB, tx.NewMongo(mongo.GetClient()), grouphash.NewGroupHashFromGroupServer(&gs))
+ gs.db = database
gs.User = userRpcClient
gs.Notification = notification.NewGroupNotificationSender(database, &msgRpcClient, &userRpcClient, func(ctx context.Context, userIDs []string) ([]notification.CommonUser, error) {
users, err := userRpcClient.GetUsersInfo(ctx, userIDs)
@@ -92,34 +97,25 @@ func Start(client discoveryregistry.SvcDiscoveryRegistry, server *grpc.Server) e
gs.conversationRpcClient = conversationRpcClient
gs.msgRpcClient = msgRpcClient
pbgroup.RegisterGroupServer(server, &gs)
- //pbgroup.RegisterGroupServer(server, &groupServer{
- // GroupDatabase: database,
- // User: userRpcClient,
- // Notification: notification.NewGroupNotificationSender(database, &msgRpcClient, &userRpcClient, func(ctx context.Context, userIDs []string) ([]notification.CommonUser, error) {
- // users, err := userRpcClient.GetUsersInfo(ctx, userIDs)
- // if err != nil {
- // return nil, err
- // }
- // return utils.Slice(users, func(e *sdkws.UserInfo) notification.CommonUser { return e }), nil
- // }),
- // conversationRpcClient: conversationRpcClient,
- // msgRpcClient: msgRpcClient,
- //})
return nil
}
type groupServer struct {
- GroupDatabase controller.GroupDatabase
+ db controller.GroupDatabase
User rpcclient.UserRpcClient
Notification *notification.GroupNotificationSender
conversationRpcClient rpcclient.ConversationRpcClient
msgRpcClient rpcclient.MessageRpcClient
}
-func (s *groupServer) NotificationUserInfoUpdate(ctx context.Context, req *pbgroup.NotificationUserInfoUpdateReq) (*pbgroup.NotificationUserInfoUpdateResp, error) {
- defer log.ZDebug(ctx, "return")
+func (s *groupServer) GetJoinedGroupIDs(ctx context.Context, req *pbgroup.GetJoinedGroupIDsReq) (*pbgroup.GetJoinedGroupIDsResp, error) {
+ //TODO implement me
+ panic("implement me")
+}
- members, err := s.GroupDatabase.FindGroupMember(ctx, nil, []string{req.UserID}, nil)
+func (s *groupServer) NotificationUserInfoUpdate(ctx context.Context, req *pbgroup.NotificationUserInfoUpdateReq) (*pbgroup.NotificationUserInfoUpdateResp, error) {
+ defer log.ZDebug(ctx, "NotificationUserInfoUpdate return")
+ members, err := s.db.FindGroupMemberUser(ctx, nil, req.UserID)
if err != nil {
return nil, err
}
@@ -136,7 +132,7 @@ func (s *groupServer) NotificationUserInfoUpdate(ctx context.Context, req *pbgro
log.ZError(ctx, "NotificationUserInfoUpdate setGroupMemberInfo notification failed", err, "groupID", groupID)
}
}
- if err := s.GroupDatabase.DeleteGroupMemberHash(ctx, groupIDs); err != nil {
+ if err := s.db.DeleteGroupMemberHash(ctx, groupIDs); err != nil {
log.ZError(ctx, "NotificationUserInfoUpdate DeleteGroupMemberHash", err, "groupID", groupIDs)
}
@@ -145,7 +141,7 @@ func (s *groupServer) NotificationUserInfoUpdate(ctx context.Context, req *pbgro
func (s *groupServer) CheckGroupAdmin(ctx context.Context, groupID string) error {
if !authverify.IsAppManagerUid(ctx) {
- groupMember, err := s.GroupDatabase.TakeGroupMember(ctx, groupID, mcontext.GetOpUserID(ctx))
+ groupMember, err := s.db.TakeGroupMember(ctx, groupID, mcontext.GetOpUserID(ctx))
if err != nil {
return err
}
@@ -175,7 +171,7 @@ func (s *groupServer) IsNotFound(err error) bool {
func (s *groupServer) GenGroupID(ctx context.Context, groupID *string) error {
if *groupID != "" {
- _, err := s.GroupDatabase.TakeGroup(ctx, *groupID)
+ _, err := s.db.TakeGroup(ctx, *groupID)
if err == nil {
return errs.ErrGroupIDExisted.Wrap("group id existed " + *groupID)
} else if s.IsNotFound(err) {
@@ -189,7 +185,7 @@ func (s *groupServer) GenGroupID(ctx context.Context, groupID *string) error {
bi := big.NewInt(0)
bi.SetString(id[0:8], 16)
id = bi.String()
- _, err := s.GroupDatabase.TakeGroup(ctx, id)
+ _, err := s.db.TakeGroup(ctx, id)
if err == nil {
continue
} else if s.IsNotFound(err) {
@@ -203,12 +199,12 @@ func (s *groupServer) GenGroupID(ctx context.Context, groupID *string) error {
}
func (s *groupServer) CreateGroup(ctx context.Context, req *pbgroup.CreateGroupReq) (*pbgroup.CreateGroupResp, error) {
+ if req.GroupInfo.GroupType != constant.WorkingGroup {
+ return nil, errs.ErrArgs.Wrap(fmt.Sprintf("group type only supports %d", constant.WorkingGroup))
+ }
if req.OwnerUserID == "" {
return nil, errs.ErrArgs.Wrap("no group owner")
}
- if req.GroupInfo.GroupType != constant.WorkingGroup {
- return nil, errs.ErrArgs.Wrap(fmt.Sprintf("group type %d not support", req.GroupInfo.GroupType))
- }
if err := authverify.CheckAccessV3(ctx, req.OwnerUserID); err != nil {
return nil, err
}
@@ -256,28 +252,35 @@ func (s *groupServer) CreateGroup(ctx context.Context, req *pbgroup.CreateGroupR
if err := joinGroup(req.OwnerUserID, constant.GroupOwner); err != nil {
return nil, err
}
- if req.GroupInfo.GroupType == constant.SuperGroup {
- if err := s.GroupDatabase.CreateSuperGroup(ctx, group.GroupID, userIDs); err != nil {
+ for _, userID := range req.AdminUserIDs {
+ if err := joinGroup(userID, constant.GroupAdmin); err != nil {
return nil, err
}
- } else {
- for _, userID := range req.AdminUserIDs {
- if err := joinGroup(userID, constant.GroupAdmin); err != nil {
- return nil, err
- }
- }
- for _, userID := range req.MemberUserIDs {
- if err := joinGroup(userID, constant.GroupOrdinaryUsers); err != nil {
- return nil, err
- }
+ }
+ for _, userID := range req.MemberUserIDs {
+ if err := joinGroup(userID, constant.GroupOrdinaryUsers); err != nil {
+ return nil, err
}
}
- if err := s.GroupDatabase.CreateGroup(ctx, []*relationtb.GroupModel{group}, groupMembers); err != nil {
+ if err := s.db.CreateGroup(ctx, []*relationtb.GroupModel{group}, groupMembers); err != nil {
return nil, err
}
resp := &pbgroup.CreateGroupResp{GroupInfo: &sdkws.GroupInfo{}}
resp.GroupInfo = convert.Db2PbGroupInfo(group, req.OwnerUserID, uint32(len(userIDs)))
resp.GroupInfo.MemberCount = uint32(len(userIDs))
+ tips := &sdkws.GroupCreatedTips{
+ Group: resp.GroupInfo,
+ OperationTime: group.CreateTime.UnixMilli(),
+ GroupOwnerUser: s.groupMemberDB2PB(groupMembers[0], userMap[groupMembers[0].UserID].AppMangerLevel),
+ }
+ for _, member := range groupMembers {
+ member.Nickname = userMap[member.UserID].Nickname
+ tips.MemberList = append(tips.MemberList, s.groupMemberDB2PB(member, userMap[member.UserID].AppMangerLevel))
+ if member.UserID == opUserID {
+ tips.OpUser = s.groupMemberDB2PB(member, userMap[member.UserID].AppMangerLevel)
+ break
+ }
+ }
if req.GroupInfo.GroupType == constant.SuperGroup {
go func() {
for _, userID := range userIDs {
@@ -320,35 +323,32 @@ func (s *groupServer) GetJoinedGroupList(ctx context.Context, req *pbgroup.GetJo
if err := authverify.CheckAccessV3(ctx, req.FromUserID); err != nil {
return nil, err
}
- var pageNumber, showNumber int32
- if req.Pagination != nil {
- pageNumber = req.Pagination.PageNumber
- showNumber = req.Pagination.ShowNumber
- }
- // total, members, err := s.GroupDatabase.PageGroupMember(ctx, nil, []string{req.FromUserID}, nil, pageNumber, showNumber)
- total, members, err := s.GroupDatabase.PageGetJoinGroup(ctx, req.FromUserID, pageNumber, showNumber)
+ total, members, err := s.db.PageGetJoinGroup(ctx, req.FromUserID, req.Pagination)
if err != nil {
return nil, err
}
- resp.Total = total
+ resp.Total = uint32(total)
if len(members) == 0 {
return resp, nil
}
groupIDs := utils.Slice(members, func(e *relationtb.GroupMemberModel) string {
return e.GroupID
})
- groups, err := s.GroupDatabase.FindGroup(ctx, groupIDs)
+ groups, err := s.db.FindGroup(ctx, groupIDs)
if err != nil {
return nil, err
}
- groupMemberNum, err := s.GroupDatabase.MapGroupMemberNum(ctx, groupIDs)
+ groupMemberNum, err := s.db.MapGroupMemberNum(ctx, groupIDs)
if err != nil {
return nil, err
}
- owners, err := s.FindGroupMember(ctx, groupIDs, nil, []int32{constant.GroupOwner})
+ owners, err := s.db.FindGroupsOwner(ctx, groupIDs)
if err != nil {
return nil, err
}
+ if err := s.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
ownerMap := utils.SliceToMap(owners, func(e *relationtb.GroupMemberModel) string {
return e.GroupID
})
@@ -366,16 +366,18 @@ func (s *groupServer) GetJoinedGroupList(ctx context.Context, req *pbgroup.GetJo
func (s *groupServer) InviteUserToGroup(ctx context.Context, req *pbgroup.InviteUserToGroupReq) (*pbgroup.InviteUserToGroupResp, error) {
resp := &pbgroup.InviteUserToGroupResp{}
+
if len(req.InvitedUserIDs) == 0 {
return nil, errs.ErrArgs.Wrap("user empty")
}
if utils.Duplicate(req.InvitedUserIDs) {
return nil, errs.ErrArgs.Wrap("userID duplicate")
}
- group, err := s.GroupDatabase.TakeGroup(ctx, req.GroupID)
+ group, err := s.db.TakeGroup(ctx, req.GroupID)
if err != nil {
return nil, err
}
+
if group.Status == constant.GroupStatusDismissed {
return nil, errs.ErrDismissedAlready.Wrap()
}
@@ -390,14 +392,18 @@ func (s *groupServer) InviteUserToGroup(ctx context.Context, req *pbgroup.Invite
var opUserID string
if !authverify.IsAppManagerUid(ctx) {
opUserID = mcontext.GetOpUserID(ctx)
- groupMembers, err := s.FindGroupMember(ctx, []string{req.GroupID}, []string{opUserID}, nil)
+ var err error
+ groupMember, err = s.db.TakeGroupMember(ctx, req.GroupID, opUserID)
if err != nil {
return nil, err
}
- if len(groupMembers) <= 0 {
- return nil, errs.ErrNoPermission.Wrap("not in group")
+ if err := s.PopulateGroupMember(ctx, groupMember); err != nil {
+ return nil, err
}
- groupMember = groupMembers[0]
+ }
+
+ if err := CallbackBeforeInviteUserToGroup(ctx, req); err != nil {
+ return nil, err
}
if group.NeedVerification == constant.AllNeedVerification {
if !authverify.IsAppManagerUid(ctx) {
@@ -413,7 +419,7 @@ func (s *groupServer) InviteUserToGroup(ctx context.Context, req *pbgroup.Invite
HandledTime: time.Unix(0, 0),
})
}
- if err := s.GroupDatabase.CreateGroupRequest(ctx, requests); err != nil {
+ if err := s.db.CreateGroupRequest(ctx, requests); err != nil {
return nil, err
}
for _, request := range requests {
@@ -428,75 +434,43 @@ func (s *groupServer) InviteUserToGroup(ctx context.Context, req *pbgroup.Invite
}
}
}
-
- if group.GroupType == constant.SuperGroup {
- if err := s.GroupDatabase.CreateSuperGroupMember(ctx, req.GroupID, req.InvitedUserIDs); err != nil {
- return nil, err
- }
- if err := s.conversationRpcClient.GroupChatFirstCreateConversation(ctx, req.GroupID, req.InvitedUserIDs); err != nil {
- return nil, err
- }
- for _, userID := range req.InvitedUserIDs {
- s.Notification.SuperGroupNotification(ctx, userID, userID)
- }
- } else {
- opUserID := mcontext.GetOpUserID(ctx)
- var groupMembers []*relationtb.GroupMemberModel
- for _, userID := range req.InvitedUserIDs {
- member := &relationtb.GroupMemberModel{
- GroupID: req.GroupID,
- UserID: userID,
- RoleLevel: constant.GroupOrdinaryUsers,
- OperatorUserID: opUserID,
- InviterUserID: opUserID,
- JoinSource: constant.JoinByInvitation,
- JoinTime: time.Now(),
- MuteEndTime: time.UnixMilli(0),
- }
- if err := CallbackBeforeMemberJoinGroup(ctx, member, group.Ex); err != nil {
- return nil, err
- }
- groupMembers = append(groupMembers, member)
- }
- if err := s.GroupDatabase.CreateGroup(ctx, nil, groupMembers); err != nil {
- return nil, err
+ var groupMembers []*relationtb.GroupMemberModel
+ for _, userID := range req.InvitedUserIDs {
+ member := &relationtb.GroupMemberModel{
+ GroupID: req.GroupID,
+ UserID: userID,
+ RoleLevel: constant.GroupOrdinaryUsers,
+ OperatorUserID: opUserID,
+ InviterUserID: opUserID,
+ JoinSource: constant.JoinByInvitation,
+ JoinTime: time.Now(),
+ MuteEndTime: time.UnixMilli(0),
}
- if err := s.conversationRpcClient.GroupChatFirstCreateConversation(ctx, req.GroupID, req.InvitedUserIDs); err != nil {
+ if err := CallbackBeforeMemberJoinGroup(ctx, member, group.Ex); err != nil {
return nil, err
}
- s.Notification.MemberInvitedNotification(ctx, req.GroupID, req.Reason, req.InvitedUserIDs)
+ groupMembers = append(groupMembers, member)
+ }
+ if err := s.db.CreateGroup(ctx, nil, groupMembers); err != nil {
+ return nil, err
}
+ if err := s.conversationRpcClient.GroupChatFirstCreateConversation(ctx, req.GroupID, req.InvitedUserIDs); err != nil {
+ return nil, err
+ }
+ s.Notification.MemberInvitedNotification(ctx, req.GroupID, req.Reason, req.InvitedUserIDs)
return resp, nil
}
func (s *groupServer) GetGroupAllMember(ctx context.Context, req *pbgroup.GetGroupAllMemberReq) (*pbgroup.GetGroupAllMemberResp, error) {
- resp := &pbgroup.GetGroupAllMemberResp{}
- group, err := s.GroupDatabase.TakeGroup(ctx, req.GroupID)
+ members, err := s.db.FindGroupMemberAll(ctx, req.GroupID)
if err != nil {
return nil, err
}
- if group.GroupType == constant.SuperGroup {
- return nil, errs.ErrArgs.Wrap("unsupported super group")
- }
- members, err := s.FindGroupMember(ctx, []string{req.GroupID}, nil, nil)
- if err != nil {
- return nil, err
- }
- publicUserInfoMap, err := s.GetPublicUserInfoMap(ctx, utils.Filter(members, func(e *relationtb.GroupMemberModel) (string, bool) {
- return e.UserID, e.Nickname == "" || e.FaceURL == ""
- }), true)
- if err != nil {
+ if err := s.PopulateGroupMember(ctx, members...); err != nil {
return nil, err
}
+ resp := &pbgroup.GetGroupAllMemberResp{}
resp.Members = utils.Slice(members, func(e *relationtb.GroupMemberModel) *sdkws.GroupMemberFullInfo {
- if userInfo, ok := publicUserInfoMap[e.UserID]; ok {
- if e.Nickname == "" {
- e.Nickname = userInfo.Nickname
- }
- if e.FaceURL == "" {
- e.FaceURL = userInfo.FaceURL
- }
- }
return convert.Db2PbGroupMember(e)
})
return resp, nil
@@ -504,20 +478,50 @@ func (s *groupServer) GetGroupAllMember(ctx context.Context, req *pbgroup.GetGro
func (s *groupServer) GetGroupMemberList(ctx context.Context, req *pbgroup.GetGroupMemberListReq) (*pbgroup.GetGroupMemberListResp, error) {
resp := &pbgroup.GetGroupMemberListResp{}
- total, members, err := s.PageGetGroupMember(ctx, req.GroupID, req.Pagination.PageNumber, req.Pagination.ShowNumber)
- log.ZDebug(ctx, "GetGroupMemberList", "total", total, "members", members, "length", len(members))
+ var (
+ total int64
+ members []*relationtb.GroupMemberModel
+ err error
+ )
+ if req.Keyword == "" {
+ total, members, err = s.db.PageGetGroupMember(ctx, req.GroupID, req.Pagination)
+ } else {
+ members, err = s.db.FindGroupMemberAll(ctx, req.GroupID)
+ }
if err != nil {
return nil, err
}
- resp.Total = total
+ if err := s.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+ if req.Keyword != "" {
+ groupMembers := make([]*relationtb.GroupMemberModel, 0)
+ for _, member := range members {
+ if member.UserID == req.Keyword {
+ groupMembers = append(groupMembers, member)
+ total++
+ continue
+ }
+ if member.Nickname == req.Keyword {
+ groupMembers = append(groupMembers, member)
+ total++
+ continue
+ }
+ }
+
+ GMembers := utils.Paginate(groupMembers, int(req.Pagination.GetPageNumber()), int(req.Pagination.GetShowNumber()))
+ resp.Members = utils.Batch(convert.Db2PbGroupMember, GMembers)
+ resp.Total = uint32(total)
+ return resp, nil
+ }
+ resp.Total = uint32(total)
resp.Members = utils.Batch(convert.Db2PbGroupMember, members)
- log.ZDebug(ctx, "GetGroupMemberList", "resp", resp, "length", len(resp.Members))
return resp, nil
}
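
The new `GetGroupMemberList` branch above loads all members when a keyword is given, keeps exact `UserID`/`Nickname` matches, and then pages the filtered slice in memory via `utils.Paginate`. A minimal standalone sketch of that filter-then-paginate flow, with `paginate` as a stand-in whose behaviour is inferred from the call site (1-based page number, `showNumber` items per page):

```go
package main

import "fmt"

type member struct {
	UserID   string
	Nickname string
}

// paginate is a stand-in for utils.Paginate as used above: 1-based page
// number, showNumber items per page, out-of-range pages return nil.
func paginate[T any](items []T, pageNumber, showNumber int) []T {
	if pageNumber <= 0 || showNumber <= 0 {
		return nil
	}
	start := (pageNumber - 1) * showNumber
	if start >= len(items) {
		return nil
	}
	end := start + showNumber
	if end > len(items) {
		end = len(items)
	}
	return items[start:end]
}

func main() {
	all := []member{{"u1", "Alice"}, {"u2", "Bob"}, {"u3", "Alice"}, {"u4", "Carol"}}
	keyword := "Alice"

	// Filter by exact userID or nickname match, as the keyword branch does,
	// then page the filtered result in memory.
	var matched []member
	for _, m := range all {
		if m.UserID == keyword || m.Nickname == keyword {
			matched = append(matched, m)
		}
	}
	fmt.Println("total:", len(matched))
	fmt.Println("page 1:", paginate(matched, 1, 10))
}
```
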
func (s *groupServer) KickGroupMember(ctx context.Context, req *pbgroup.KickGroupMemberReq) (*pbgroup.KickGroupMemberResp, error) {
resp := &pbgroup.KickGroupMemberResp{}
- group, err := s.GroupDatabase.TakeGroup(ctx, req.GroupID)
+ group, err := s.db.TakeGroup(ctx, req.GroupID)
if err != nil {
return nil, err
}
@@ -531,92 +535,85 @@ func (s *groupServer) KickGroupMember(ctx context.Context, req *pbgroup.KickGrou
if utils.IsContain(opUserID, req.KickedUserIDs) {
return nil, errs.ErrArgs.Wrap("opUserID in KickedUserIDs")
}
- if group.GroupType == constant.SuperGroup {
- if err := s.GroupDatabase.DeleteSuperGroupMember(ctx, req.GroupID, req.KickedUserIDs); err != nil {
- return nil, err
- }
- go func() {
- for _, userID := range req.KickedUserIDs {
- s.Notification.SuperGroupNotification(ctx, userID, userID)
- }
- }()
- } else {
- members, err := s.FindGroupMember(ctx, []string{req.GroupID}, append(req.KickedUserIDs, opUserID), nil)
- if err != nil {
- return nil, err
- }
- memberMap := make(map[string]*relationtb.GroupMemberModel)
- for i, member := range members {
- memberMap[member.UserID] = members[i]
+ members, err := s.db.FindGroupMembers(ctx, req.GroupID, append(req.KickedUserIDs, opUserID))
+ if err != nil {
+ return nil, err
+ }
+ if err := s.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
+ memberMap := make(map[string]*relationtb.GroupMemberModel)
+ for i, member := range members {
+ memberMap[member.UserID] = members[i]
+ }
+ isAppManagerUid := authverify.IsAppManagerUid(ctx)
+ opMember := memberMap[opUserID]
+ for _, userID := range req.KickedUserIDs {
+ member, ok := memberMap[userID]
+ if !ok {
+ return nil, errs.ErrUserIDNotFound.Wrap(userID)
}
- isAppManagerUid := authverify.IsAppManagerUid(ctx)
- opMember := memberMap[opUserID]
- for _, userID := range req.KickedUserIDs {
- member, ok := memberMap[userID]
- if !ok {
- return nil, errs.ErrUserIDNotFound.Wrap(userID)
+ if !isAppManagerUid {
+ if opMember == nil {
+ return nil, errs.ErrNoPermission.Wrap("opUserID no in group")
}
- if !isAppManagerUid {
- if opMember == nil {
- return nil, errs.ErrNoPermission.Wrap("opUserID no in group")
- }
- switch opMember.RoleLevel {
- case constant.GroupOwner:
- case constant.GroupAdmin:
- if member.RoleLevel == constant.GroupOwner || member.RoleLevel == constant.GroupAdmin {
- return nil, errs.ErrNoPermission.Wrap("group admins cannot remove the group owner and other admins")
- }
- case constant.GroupOrdinaryUsers:
- return nil, errs.ErrNoPermission.Wrap("opUserID no permission")
- default:
- return nil, errs.ErrNoPermission.Wrap("opUserID roleLevel unknown")
+ switch opMember.RoleLevel {
+ case constant.GroupOwner:
+ case constant.GroupAdmin:
+ if member.RoleLevel == constant.GroupOwner || member.RoleLevel == constant.GroupAdmin {
+ return nil, errs.ErrNoPermission.Wrap("group admins cannot remove the group owner and other admins")
}
+ case constant.GroupOrdinaryUsers:
+ return nil, errs.ErrNoPermission.Wrap("opUserID no permission")
+ default:
+ return nil, errs.ErrNoPermission.Wrap("opUserID roleLevel unknown")
}
}
- num, err := s.GroupDatabase.FindGroupMemberNum(ctx, req.GroupID)
- if err != nil {
- return nil, err
- }
- owner, err := s.FindGroupMember(ctx, []string{req.GroupID}, nil, []int32{constant.GroupOwner})
- if err != nil {
- return nil, err
- }
- if err := s.GroupDatabase.DeleteGroupMember(ctx, group.GroupID, req.KickedUserIDs); err != nil {
- return nil, err
- }
- tips := &sdkws.MemberKickedTips{
- Group: &sdkws.GroupInfo{
- GroupID: group.GroupID,
- GroupName: group.GroupName,
- Notification: group.Notification,
- Introduction: group.Introduction,
- FaceURL: group.FaceURL,
- // OwnerUserID: owner[0].UserID,
- CreateTime: group.CreateTime.UnixMilli(),
- MemberCount: num,
- Ex: group.Ex,
- Status: group.Status,
- CreatorUserID: group.CreatorUserID,
- GroupType: group.GroupType,
- NeedVerification: group.NeedVerification,
- LookMemberInfo: group.LookMemberInfo,
- ApplyMemberFriend: group.ApplyMemberFriend,
- NotificationUpdateTime: group.NotificationUpdateTime.UnixMilli(),
- NotificationUserID: group.NotificationUserID,
- },
- KickedUserList: []*sdkws.GroupMemberFullInfo{},
- }
- if len(owner) > 0 {
- tips.Group.OwnerUserID = owner[0].UserID
- }
- if opMember, ok := memberMap[opUserID]; ok {
- tips.OpUser = convert.Db2PbGroupMember(opMember)
- }
- for _, userID := range req.KickedUserIDs {
- tips.KickedUserList = append(tips.KickedUserList, convert.Db2PbGroupMember(memberMap[userID]))
- }
- s.Notification.MemberKickedNotification(ctx, tips)
}
+ num, err := s.db.FindGroupMemberNum(ctx, req.GroupID)
+ if err != nil {
+ return nil, err
+ }
+ ownerUserIDs, err := s.db.GetGroupRoleLevelMemberIDs(ctx, req.GroupID, constant.GroupOwner)
+ if err != nil {
+ return nil, err
+ }
+ var ownerUserID string
+ if len(ownerUserIDs) > 0 {
+ ownerUserID = ownerUserIDs[0]
+ }
+ if err := s.db.DeleteGroupMember(ctx, group.GroupID, req.KickedUserIDs); err != nil {
+ return nil, err
+ }
+ tips := &sdkws.MemberKickedTips{
+ Group: &sdkws.GroupInfo{
+ GroupID: group.GroupID,
+ GroupName: group.GroupName,
+ Notification: group.Notification,
+ Introduction: group.Introduction,
+ FaceURL: group.FaceURL,
+ OwnerUserID: ownerUserID,
+ CreateTime: group.CreateTime.UnixMilli(),
+ MemberCount: num,
+ Ex: group.Ex,
+ Status: group.Status,
+ CreatorUserID: group.CreatorUserID,
+ GroupType: group.GroupType,
+ NeedVerification: group.NeedVerification,
+ LookMemberInfo: group.LookMemberInfo,
+ ApplyMemberFriend: group.ApplyMemberFriend,
+ NotificationUpdateTime: group.NotificationUpdateTime.UnixMilli(),
+ NotificationUserID: group.NotificationUserID,
+ },
+ KickedUserList: []*sdkws.GroupMemberFullInfo{},
+ }
+ if opMember, ok := memberMap[opUserID]; ok {
+ tips.OpUser = convert.Db2PbGroupMember(opMember)
+ }
+ for _, userID := range req.KickedUserIDs {
+ tips.KickedUserList = append(tips.KickedUserList, convert.Db2PbGroupMember(memberMap[userID]))
+ }
+ s.Notification.MemberKickedNotification(ctx, tips)
if err := s.deleteMemberAndSetConversationSeq(ctx, req.GroupID, req.KickedUserIDs); err != nil {
return nil, err
}
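
The kick path above now checks the operator's role level directly instead of branching on super groups: an owner may kick anyone, an admin may kick only ordinary members, and an ordinary member may kick no one, with app managers bypassing the check. That decision table can be summarised in a small helper; the numeric role values below are assumptions standing in for the protocol's `constant.GroupOwner`, `constant.GroupAdmin`, and `constant.GroupOrdinaryUsers`:

```go
package main

import "fmt"

// Assumed role-level values; the real values come from the protocol's
// constant package.
const (
	groupOwner         int32 = 100
	groupAdmin         int32 = 60
	groupOrdinaryUsers int32 = 20
)

// canKick reflects the permission checks above: an owner may kick anyone,
// an admin may kick only ordinary members, an ordinary member may kick no one.
// App managers bypass this check entirely in the real handler.
func canKick(opRole, targetRole int32) bool {
	switch opRole {
	case groupOwner:
		return true
	case groupAdmin:
		return targetRole != groupOwner && targetRole != groupAdmin
	default:
		return false
	}
}

func main() {
	fmt.Println(canKick(groupAdmin, groupOrdinaryUsers)) // true
	fmt.Println(canKick(groupAdmin, groupOwner))         // false
	fmt.Println(canKick(groupOrdinaryUsers, groupAdmin)) // false
}
```
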
@@ -635,32 +632,21 @@ func (s *groupServer) GetGroupMembersInfo(ctx context.Context, req *pbgroup.GetG
if req.GroupID == "" {
return nil, errs.ErrArgs.Wrap("groupID empty")
}
- members, err := s.FindGroupMember(ctx, []string{req.GroupID}, req.UserIDs, nil)
+ members, err := s.db.FindGroupMembers(ctx, req.GroupID, req.UserIDs)
if err != nil {
return nil, err
}
- publicUserInfoMap, err := s.GetPublicUserInfoMap(ctx, utils.Filter(members, func(e *relationtb.GroupMemberModel) (string, bool) {
- return e.UserID, e.Nickname == "" || e.FaceURL == ""
- }), true)
- if err != nil {
+ if err := s.PopulateGroupMember(ctx, members...); err != nil {
return nil, err
}
resp.Members = utils.Slice(members, func(e *relationtb.GroupMemberModel) *sdkws.GroupMemberFullInfo {
- if userInfo, ok := publicUserInfoMap[e.UserID]; ok {
- if e.Nickname == "" {
- e.Nickname = userInfo.Nickname
- }
- if e.FaceURL == "" {
- e.FaceURL = userInfo.FaceURL
- }
- }
return convert.Db2PbGroupMember(e)
})
return resp, nil
}
func (s *groupServer) GetGroupApplicationList(ctx context.Context, req *pbgroup.GetGroupApplicationListReq) (*pbgroup.GetGroupApplicationListResp, error) {
- groupIDs, err := s.GroupDatabase.FindUserManagedGroupID(ctx, req.FromUserID)
+ groupIDs, err := s.db.FindUserManagedGroupID(ctx, req.FromUserID)
if err != nil {
return nil, err
}
@@ -668,11 +654,11 @@ func (s *groupServer) GetGroupApplicationList(ctx context.Context, req *pbgroup.
if len(groupIDs) == 0 {
return resp, nil
}
- total, groupRequests, err := s.GroupDatabase.PageGroupRequest(ctx, groupIDs, req.Pagination.PageNumber, req.Pagination.ShowNumber)
+ total, groupRequests, err := s.db.PageGroupRequest(ctx, groupIDs, req.Pagination)
if err != nil {
return nil, err
}
- resp.Total = total
+ resp.Total = uint32(total)
if len(groupRequests) == 0 {
return resp, nil
}
@@ -686,7 +672,7 @@ func (s *groupServer) GetGroupApplicationList(ctx context.Context, req *pbgroup.
if err != nil {
return nil, err
}
- groups, err := s.GroupDatabase.FindGroup(ctx, utils.Distinct(groupIDs))
+ groups, err := s.db.FindGroup(ctx, utils.Distinct(groupIDs))
if err != nil {
return nil, err
}
@@ -696,14 +682,17 @@ func (s *groupServer) GetGroupApplicationList(ctx context.Context, req *pbgroup.
if ids := utils.Single(utils.Keys(groupMap), groupIDs); len(ids) > 0 {
return nil, errs.ErrGroupIDNotFound.Wrap(strings.Join(ids, ","))
}
- groupMemberNumMap, err := s.GroupDatabase.MapGroupMemberNum(ctx, groupIDs)
+ groupMemberNumMap, err := s.db.MapGroupMemberNum(ctx, groupIDs)
if err != nil {
return nil, err
}
- owners, err := s.FindGroupMember(ctx, groupIDs, nil, []int32{constant.GroupOwner})
+ owners, err := s.db.FindGroupsOwner(ctx, groupIDs)
if err != nil {
return nil, err
}
+ if err := s.PopulateGroupMember(ctx, owners...); err != nil {
+ return nil, err
+ }
ownerMap := utils.SliceToMap(owners, func(e *relationtb.GroupMemberModel) string {
return e.GroupID
})
@@ -722,18 +711,21 @@ func (s *groupServer) GetGroupsInfo(ctx context.Context, req *pbgroup.GetGroupsI
if len(req.GroupIDs) == 0 {
return nil, errs.ErrArgs.Wrap("groupID is empty")
}
- groups, err := s.GroupDatabase.FindGroup(ctx, req.GroupIDs)
+ groups, err := s.db.FindGroup(ctx, req.GroupIDs)
if err != nil {
return nil, err
}
- groupMemberNumMap, err := s.GroupDatabase.MapGroupMemberNum(ctx, req.GroupIDs)
+ groupMemberNumMap, err := s.db.MapGroupMemberNum(ctx, req.GroupIDs)
if err != nil {
return nil, err
}
- owners, err := s.FindGroupMember(ctx, req.GroupIDs, nil, []int32{constant.GroupOwner})
+ owners, err := s.db.FindGroupsOwner(ctx, req.GroupIDs)
if err != nil {
return nil, err
}
+ if err := s.PopulateGroupMember(ctx, owners...); err != nil {
+ return nil, err
+ }
ownerMap := utils.SliceToMap(owners, func(e *relationtb.GroupMemberModel) string {
return e.GroupID
})
@@ -753,7 +745,7 @@ func (s *groupServer) GroupApplicationResponse(ctx context.Context, req *pbgroup
return nil, errs.ErrArgs.Wrap("HandleResult unknown")
}
if !authverify.IsAppManagerUid(ctx) {
- groupMember, err := s.GroupDatabase.TakeGroupMember(ctx, req.GroupID, mcontext.GetOpUserID(ctx))
+ groupMember, err := s.db.TakeGroupMember(ctx, req.GroupID, mcontext.GetOpUserID(ctx))
if err != nil {
return nil, err
}
@@ -761,11 +753,11 @@ func (s *groupServer) GroupApplicationResponse(ctx context.Context, req *pbgroup
return nil, errs.ErrNoPermission.Wrap("no group owner or admin")
}
}
- group, err := s.GroupDatabase.TakeGroup(ctx, req.GroupID)
+ group, err := s.db.TakeGroup(ctx, req.GroupID)
if err != nil {
return nil, err
}
- groupRequest, err := s.GroupDatabase.TakeGroupRequest(ctx, req.GroupID, req.FromUserID)
+ groupRequest, err := s.db.TakeGroupRequest(ctx, req.GroupID, req.FromUserID)
if err != nil {
return nil, err
}
@@ -773,7 +765,7 @@ func (s *groupServer) GroupApplicationResponse(ctx context.Context, req *pbgroup
return nil, errs.ErrGroupRequestHandled.Wrap("group request already processed")
}
var inGroup bool
- if _, err := s.GroupDatabase.TakeGroupMember(ctx, req.GroupID, req.FromUserID); err == nil {
+ if _, err := s.db.TakeGroupMember(ctx, req.GroupID, req.FromUserID); err == nil {
		inGroup = true // already in the group
} else if !s.IsNotFound(err) {
return nil, err
@@ -801,7 +793,7 @@ func (s *groupServer) GroupApplicationResponse(ctx context.Context, req *pbgroup
}
}
log.ZDebug(ctx, "GroupApplicationResponse", "inGroup", inGroup, "HandleResult", req.HandleResult, "member", member)
- if err := s.GroupDatabase.HandlerGroupRequest(ctx, req.GroupID, req.FromUserID, req.HandledMsg, req.HandleResult, member); err != nil {
+ if err := s.db.HandlerGroupRequest(ctx, req.GroupID, req.FromUserID, req.HandledMsg, req.HandleResult, member); err != nil {
return nil, err
}
switch req.HandleResult {
@@ -818,6 +810,7 @@ func (s *groupServer) GroupApplicationResponse(ctx context.Context, req *pbgroup
case constant.GroupResponseRefuse:
s.Notification.GroupApplicationRejectedNotification(ctx, req)
}
+
return &pbgroup.GroupApplicationResponseResp{}, nil
}
@@ -827,7 +820,7 @@ func (s *groupServer) JoinGroup(ctx context.Context, req *pbgroup.JoinGroupReq)
if err != nil {
return nil, err
}
- group, err := s.GroupDatabase.TakeGroup(ctx, req.GroupID)
+ group, err := s.db.TakeGroup(ctx, req.GroupID)
if err != nil {
return nil, err
}
@@ -840,12 +833,13 @@ func (s *groupServer) JoinGroup(ctx context.Context, req *pbgroup.JoinGroupReq)
GroupType: string(group.GroupType),
ApplyID: req.InviterUserID,
ReqMessage: req.ReqMessage,
+ Ex: req.Ex,
}
if err = CallbackApplyJoinGroupBefore(ctx, reqCall); err != nil {
return nil, err
}
- _, err = s.GroupDatabase.TakeGroupMember(ctx, req.GroupID, req.InviterUserID)
+ _, err = s.db.TakeGroupMember(ctx, req.GroupID, req.InviterUserID)
if err == nil {
return nil, errs.ErrArgs.Wrap("already in group")
} else if !s.IsNotFound(err) && utils.Unwrap(err) != errs.ErrRecordNotFound {
@@ -854,9 +848,6 @@ func (s *groupServer) JoinGroup(ctx context.Context, req *pbgroup.JoinGroupReq)
log.ZInfo(ctx, "JoinGroup.groupInfo", "group", group, "eq", group.NeedVerification == constant.Directly)
resp = &pbgroup.JoinGroupResp{}
if group.NeedVerification == constant.Directly {
- if group.GroupType == constant.SuperGroup {
- return nil, errs.ErrGroupTypeNotSupport.Wrap()
- }
groupMember := &relationtb.GroupMemberModel{
GroupID: group.GroupID,
UserID: user.UserID,
@@ -869,13 +860,17 @@ func (s *groupServer) JoinGroup(ctx context.Context, req *pbgroup.JoinGroupReq)
if err := CallbackBeforeMemberJoinGroup(ctx, groupMember, group.Ex); err != nil {
return nil, err
}
- if err := s.GroupDatabase.CreateGroup(ctx, nil, []*relationtb.GroupMemberModel{groupMember}); err != nil {
+ if err := s.db.CreateGroup(ctx, nil, []*relationtb.GroupMemberModel{groupMember}); err != nil {
return nil, err
}
+
if err := s.conversationRpcClient.GroupChatFirstCreateConversation(ctx, req.GroupID, []string{req.InviterUserID}); err != nil {
return nil, err
}
s.Notification.MemberEnterNotification(ctx, req.GroupID, req.InviterUserID)
+ if err = CallbackAfterJoinGroup(ctx, req); err != nil {
+ return nil, err
+ }
return resp, nil
}
groupRequest := relationtb.GroupRequestModel{
@@ -885,8 +880,9 @@ func (s *groupServer) JoinGroup(ctx context.Context, req *pbgroup.JoinGroupReq)
JoinSource: req.JoinSource,
ReqTime: time.Now(),
HandledTime: time.Unix(0, 0),
+ Ex: req.Ex,
}
- if err := s.GroupDatabase.CreateGroupRequest(ctx, []*relationtb.GroupRequestModel{&groupRequest}); err != nil {
+ if err := s.db.CreateGroupRequest(ctx, []*relationtb.GroupRequestModel{&groupRequest}); err != nil {
return nil, err
}
s.Notification.JoinGroupApplicationNotification(ctx, req)
@@ -902,29 +898,21 @@ func (s *groupServer) QuitGroup(ctx context.Context, req *pbgroup.QuitGroupReq)
return nil, err
}
}
- group, err := s.GroupDatabase.TakeGroup(ctx, req.GroupID)
+ member, err := s.db.TakeGroupMember(ctx, req.GroupID, req.UserID)
if err != nil {
return nil, err
}
- if group.GroupType == constant.SuperGroup {
- if err := s.GroupDatabase.DeleteSuperGroupMember(ctx, req.GroupID, []string{req.UserID}); err != nil {
- return nil, err
- }
- _ = s.Notification.SuperGroupNotification(ctx, req.UserID, req.UserID)
- } else {
- info, err := s.TakeGroupMember(ctx, req.GroupID, req.UserID)
- if err != nil {
- return nil, err
- }
- if info.RoleLevel == constant.GroupOwner {
- return nil, errs.ErrNoPermission.Wrap("group owner can't quit")
- }
- err = s.GroupDatabase.DeleteGroupMember(ctx, req.GroupID, []string{req.UserID})
- if err != nil {
- return nil, err
- }
- _ = s.Notification.MemberQuitNotification(ctx, s.groupMemberDB2PB(info, 0))
+ if member.RoleLevel == constant.GroupOwner {
+ return nil, errs.ErrNoPermission.Wrap("group owner can't quit")
+ }
+ if err := s.PopulateGroupMember(ctx, member); err != nil {
+ return nil, err
}
+ err = s.db.DeleteGroupMember(ctx, req.GroupID, []string{req.UserID})
+ if err != nil {
+ return nil, err
+ }
+ _ = s.Notification.MemberQuitNotification(ctx, s.groupMemberDB2PB(member, 0))
if err := s.deleteMemberAndSetConversationSeq(ctx, req.GroupID, []string{req.UserID}); err != nil {
return nil, err
}
@@ -949,15 +937,21 @@ func (s *groupServer) SetGroupInfo(ctx context.Context, req *pbgroup.SetGroupInf
var opMember *relationtb.GroupMemberModel
if !authverify.IsAppManagerUid(ctx) {
var err error
- opMember, err = s.TakeGroupMember(ctx, req.GroupInfoForSet.GroupID, mcontext.GetOpUserID(ctx))
+ opMember, err = s.db.TakeGroupMember(ctx, req.GroupInfoForSet.GroupID, mcontext.GetOpUserID(ctx))
if err != nil {
return nil, err
}
if !(opMember.RoleLevel == constant.GroupOwner || opMember.RoleLevel == constant.GroupAdmin) {
return nil, errs.ErrNoPermission.Wrap("no group owner or admin")
}
+ if err := s.PopulateGroupMember(ctx, opMember); err != nil {
+ return nil, err
+ }
}
- group, err := s.GroupDatabase.TakeGroup(ctx, req.GroupInfoForSet.GroupID)
+ if err := CallbackBeforeSetGroupInfo(ctx, req); err != nil {
+ return nil, err
+ }
+ group, err := s.db.TakeGroup(ctx, req.GroupInfoForSet.GroupID)
if err != nil {
return nil, err
}
@@ -965,22 +959,25 @@ func (s *groupServer) SetGroupInfo(ctx context.Context, req *pbgroup.SetGroupInf
return nil, utils.Wrap(errs.ErrDismissedAlready, "")
}
resp := &pbgroup.SetGroupInfoResp{}
- count, err := s.GroupDatabase.FindGroupMemberNum(ctx, group.GroupID)
+ count, err := s.db.FindGroupMemberNum(ctx, group.GroupID)
if err != nil {
return nil, err
}
- owner, err := s.TakeGroupOwner(ctx, group.GroupID)
+ owner, err := s.db.TakeGroupOwner(ctx, group.GroupID)
if err != nil {
return nil, err
}
- data := UpdateGroupInfoMap(ctx, req.GroupInfoForSet)
- if len(data) == 0 {
+ if err := s.PopulateGroupMember(ctx, owner); err != nil {
+ return nil, err
+ }
+ update := UpdateGroupInfoMap(ctx, req.GroupInfoForSet)
+ if len(update) == 0 {
return resp, nil
}
- if err := s.GroupDatabase.UpdateGroup(ctx, group.GroupID, data); err != nil {
+ if err := s.db.UpdateGroup(ctx, group.GroupID, update); err != nil {
return nil, err
}
- group, err = s.GroupDatabase.TakeGroup(ctx, req.GroupInfoForSet.GroupID)
+ group, err = s.db.TakeGroup(ctx, req.GroupInfoForSet.GroupID)
if err != nil {
return nil, err
}
@@ -992,45 +989,43 @@ func (s *groupServer) SetGroupInfo(ctx context.Context, req *pbgroup.SetGroupInf
if opMember != nil {
tips.OpUser = s.groupMemberDB2PB(opMember, 0)
}
- var num int
+ num := len(update)
if req.GroupInfoForSet.Notification != "" {
- go func() {
- nctx := mcontext.NewCtx("@@@" + mcontext.GetOperationID(ctx))
+ num--
+ func() {
conversation := &pbconversation.ConversationReq{
ConversationID: msgprocessor.GetConversationIDBySessionType(constant.SuperGroupChatType, req.GroupInfoForSet.GroupID),
ConversationType: constant.SuperGroupChatType,
GroupID: req.GroupInfoForSet.GroupID,
}
- resp, err := s.GetGroupMemberUserIDs(nctx, &pbgroup.GetGroupMemberUserIDsReq{GroupID: req.GroupInfoForSet.GroupID})
+ resp, err := s.GetGroupMemberUserIDs(ctx, &pbgroup.GetGroupMemberUserIDsReq{GroupID: req.GroupInfoForSet.GroupID})
if err != nil {
log.ZWarn(ctx, "GetGroupMemberIDs", err)
return
}
conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.GroupNotification}
- if err := s.conversationRpcClient.SetConversations(nctx, resp.UserIDs, conversation); err != nil {
+ if err := s.conversationRpcClient.SetConversations(ctx, resp.UserIDs, conversation); err != nil {
log.ZWarn(ctx, "SetConversations", err, resp.UserIDs, conversation)
}
}()
- num++
- s.Notification.GroupInfoSetAnnouncementNotification(ctx, &sdkws.GroupInfoSetAnnouncementTips{Group: tips.Group, OpUser: tips.OpUser})
- }
- switch len(data) - num {
- case 0:
- case 1:
- if req.GroupInfoForSet.GroupName == "" {
- s.Notification.GroupInfoSetNotification(ctx, tips)
- } else {
- s.Notification.GroupInfoSetNameNotification(ctx, &sdkws.GroupInfoSetNameTips{Group: tips.Group, OpUser: tips.OpUser})
- }
- default:
- s.Notification.GroupInfoSetNotification(ctx, tips)
+ _ = s.Notification.GroupInfoSetAnnouncementNotification(ctx, &sdkws.GroupInfoSetAnnouncementTips{Group: tips.Group, OpUser: tips.OpUser})
+ }
+ if req.GroupInfoForSet.GroupName != "" {
+ num--
+ _ = s.Notification.GroupInfoSetNameNotification(ctx, &sdkws.GroupInfoSetNameTips{Group: tips.Group, OpUser: tips.OpUser})
+ }
+ if num > 0 {
+ _ = s.Notification.GroupInfoSetNotification(ctx, tips)
+ }
+ if err := CallbackAfterSetGroupInfo(ctx, req); err != nil {
+ return nil, err
}
return resp, nil
}
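
`SetGroupInfo` above starts from the number of updated fields and decrements it for the announcement and group-name changes, which get their own dedicated notifications; the generic "group info set" notification is sent only if other fields remain. A hedged sketch of that dispatch, using illustrative map keys rather than the real `UpdateGroupInfoMap` output:

```go
package main

import "fmt"

// notifyGroupInfoChanges sketches the counting logic above: subtract one for
// each field with a dedicated notification, send the generic notification
// only if anything else changed. The map keys are illustrative assumptions.
func notifyGroupInfoChanges(update map[string]any, notification, groupName string,
	sendAnnouncement, sendName, sendGeneric func()) {
	remaining := len(update)
	if notification != "" {
		remaining--
		sendAnnouncement()
	}
	if groupName != "" {
		remaining--
		sendName()
	}
	if remaining > 0 {
		sendGeneric()
	}
}

func main() {
	update := map[string]any{"notification": "new announcement", "face_url": "https://example.com/g.png"}
	notifyGroupInfoChanges(update, "new announcement", "",
		func() { fmt.Println("announcement notification") },
		func() { fmt.Println("name notification") },
		func() { fmt.Println("generic info-set notification") },
	)
}
```
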
func (s *groupServer) TransferGroupOwner(ctx context.Context, req *pbgroup.TransferGroupOwnerReq) (*pbgroup.TransferGroupOwnerResp, error) {
resp := &pbgroup.TransferGroupOwnerResp{}
- group, err := s.GroupDatabase.TakeGroup(ctx, req.GroupID)
+ group, err := s.db.TakeGroup(ctx, req.GroupID)
if err != nil {
return nil, err
}
@@ -1040,10 +1035,13 @@ func (s *groupServer) TransferGroupOwner(ctx context.Context, req *pbgroup.Trans
if req.OldOwnerUserID == req.NewOwnerUserID {
return nil, errs.ErrArgs.Wrap("OldOwnerUserID == NewOwnerUserID")
}
- members, err := s.FindGroupMember(ctx, []string{req.GroupID}, []string{req.OldOwnerUserID, req.NewOwnerUserID}, nil)
+ members, err := s.db.FindGroupMembers(ctx, req.GroupID, []string{req.OldOwnerUserID, req.NewOwnerUserID})
if err != nil {
return nil, err
}
+ if err := s.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
memberMap := utils.SliceToMap(members, func(e *relationtb.GroupMemberModel) string { return e.UserID })
if ids := utils.Single([]string{req.OldOwnerUserID, req.NewOwnerUserID}, utils.Keys(memberMap)); len(ids) > 0 {
return nil, errs.ErrArgs.Wrap("user not in group " + strings.Join(ids, ","))
@@ -1061,11 +1059,11 @@ func (s *groupServer) TransferGroupOwner(ctx context.Context, req *pbgroup.Trans
return nil, errs.ErrNoPermission.Wrap("no permission transfer group owner")
}
}
- if err := s.GroupDatabase.TransferGroupOwner(ctx, req.GroupID, req.OldOwnerUserID, req.NewOwnerUserID, newOwner.RoleLevel); err != nil {
+ if err := s.db.TransferGroupOwner(ctx, req.GroupID, req.OldOwnerUserID, req.NewOwnerUserID, newOwner.RoleLevel); err != nil {
return nil, err
}
- if err := CallbackTransferGroupOwnerAfter(ctx, req); err != nil {
+ if err := CallbackAfterTransferGroupOwner(ctx, req); err != nil {
return nil, err
}
s.Notification.GroupOwnerTransferredNotification(ctx, req)
@@ -1075,29 +1073,40 @@ func (s *groupServer) TransferGroupOwner(ctx context.Context, req *pbgroup.Trans
func (s *groupServer) GetGroups(ctx context.Context, req *pbgroup.GetGroupsReq) (*pbgroup.GetGroupsResp, error) {
resp := &pbgroup.GetGroupsResp{}
var (
- groups []*relationtb.GroupModel
- err error
+ group []*relationtb.GroupModel
+ err error
)
if req.GroupID != "" {
- groups, err = s.GroupDatabase.FindGroup(ctx, []string{req.GroupID})
- resp.Total = uint32(len(groups))
+ group, err = s.db.FindGroup(ctx, []string{req.GroupID})
+ resp.Total = uint32(len(group))
} else {
- resp.Total, groups, err = s.GroupDatabase.SearchGroup(ctx, req.GroupName, req.Pagination.PageNumber, req.Pagination.ShowNumber)
+ var total int64
+ total, group, err = s.db.SearchGroup(ctx, req.GroupName, req.Pagination)
+ resp.Total = uint32(total)
}
if err != nil {
return nil, err
}
+
+ var groups []*relationtb.GroupModel
+ for _, v := range group {
+ if v.Status == constant.GroupStatusDismissed {
+ resp.Total--
+ continue
+ }
+ groups = append(groups, v)
+ }
groupIDs := utils.Slice(groups, func(e *relationtb.GroupModel) string {
return e.GroupID
})
- ownerMembers, err := s.FindGroupMember(ctx, groupIDs, nil, []int32{constant.GroupOwner})
+ ownerMembers, err := s.db.FindGroupsOwner(ctx, groupIDs)
if err != nil {
return nil, err
}
ownerMemberMap := utils.SliceToMap(ownerMembers, func(e *relationtb.GroupMemberModel) string {
return e.GroupID
})
- groupMemberNumMap, err := s.GroupDatabase.MapGroupMemberNum(ctx, groupIDs)
+ groupMemberNumMap, err := s.db.MapGroupMemberNum(ctx, groupIDs)
if err != nil {
return nil, err
}
@@ -1117,26 +1126,15 @@ func (s *groupServer) GetGroups(ctx context.Context, req *pbgroup.GetGroupsReq)
func (s *groupServer) GetGroupMembersCMS(ctx context.Context, req *pbgroup.GetGroupMembersCMSReq) (*pbgroup.GetGroupMembersCMSResp, error) {
resp := &pbgroup.GetGroupMembersCMSResp{}
- total, members, err := s.GroupDatabase.SearchGroupMember(ctx, req.UserName, []string{req.GroupID}, nil, nil, req.Pagination.PageNumber, req.Pagination.ShowNumber)
+ total, members, err := s.db.SearchGroupMember(ctx, req.UserName, req.GroupID, req.Pagination)
if err != nil {
return nil, err
}
- resp.Total = total
- publicUserInfoMap, err := s.GetPublicUserInfoMap(ctx, utils.Filter(members, func(e *relationtb.GroupMemberModel) (string, bool) {
- return e.UserID, e.Nickname == "" || e.FaceURL == ""
- }), true)
- if err != nil {
+ resp.Total = uint32(total)
+ if err := s.PopulateGroupMember(ctx, members...); err != nil {
return nil, err
}
resp.Members = utils.Slice(members, func(e *relationtb.GroupMemberModel) *sdkws.GroupMemberFullInfo {
- if userInfo, ok := publicUserInfoMap[e.UserID]; ok {
- if e.Nickname == "" {
- e.Nickname = userInfo.Nickname
- }
- if e.FaceURL == "" {
- e.FaceURL = userInfo.FaceURL
- }
- }
return convert.Db2PbGroupMember(e)
})
return resp, nil
@@ -1148,37 +1146,35 @@ func (s *groupServer) GetUserReqApplicationList(ctx context.Context, req *pbgrou
if err != nil {
return nil, err
}
- var pageNumber, showNumber int32
- if req.Pagination != nil {
- pageNumber = req.Pagination.PageNumber
- showNumber = req.Pagination.ShowNumber
- }
- total, requests, err := s.GroupDatabase.PageGroupRequestUser(ctx, req.UserID, pageNumber, showNumber)
+ total, requests, err := s.db.PageGroupRequestUser(ctx, req.UserID, req.Pagination)
if err != nil {
return nil, err
}
- resp.Total = total
+ resp.Total = uint32(total)
if len(requests) == 0 {
return resp, nil
}
groupIDs := utils.Distinct(utils.Slice(requests, func(e *relationtb.GroupRequestModel) string {
return e.GroupID
}))
- groups, err := s.GroupDatabase.FindGroup(ctx, groupIDs)
+ groups, err := s.db.FindGroup(ctx, groupIDs)
if err != nil {
return nil, err
}
groupMap := utils.SliceToMap(groups, func(e *relationtb.GroupModel) string {
return e.GroupID
})
- owners, err := s.FindGroupMember(ctx, groupIDs, nil, []int32{constant.GroupOwner})
+ owners, err := s.db.FindGroupsOwner(ctx, groupIDs)
if err != nil {
return nil, err
}
+ if err := s.PopulateGroupMember(ctx, owners...); err != nil {
+ return nil, err
+ }
ownerMap := utils.SliceToMap(owners, func(e *relationtb.GroupMemberModel) string {
return e.GroupID
})
- groupMemberNum, err := s.GroupDatabase.MapGroupMemberNum(ctx, groupIDs)
+ groupMemberNum, err := s.db.MapGroupMemberNum(ctx, groupIDs)
if err != nil {
return nil, err
}
@@ -1195,7 +1191,7 @@ func (s *groupServer) GetUserReqApplicationList(ctx context.Context, req *pbgrou
func (s *groupServer) DismissGroup(ctx context.Context, req *pbgroup.DismissGroupReq) (*pbgroup.DismissGroupResp, error) {
defer log.ZInfo(ctx, "DismissGroup.return")
resp := &pbgroup.DismissGroupResp{}
- owner, err := s.TakeGroupOwner(ctx, req.GroupID)
+ owner, err := s.db.TakeGroupOwner(ctx, req.GroupID)
if err != nil {
return nil, err
}
@@ -1204,41 +1200,34 @@ func (s *groupServer) DismissGroup(ctx context.Context, req *pbgroup.DismissGrou
return nil, errs.ErrNoPermission.Wrap("not group owner")
}
}
- group, err := s.GroupDatabase.TakeGroup(ctx, req.GroupID)
+ if err := s.PopulateGroupMember(ctx, owner); err != nil {
+ return nil, err
+ }
+ group, err := s.db.TakeGroup(ctx, req.GroupID)
if err != nil {
return nil, err
}
if req.DeleteMember == false && group.Status == constant.GroupStatusDismissed {
return nil, errs.ErrDismissedAlready.Wrap("group status is dismissed")
}
- //if group.Status == constant.GroupStatusDismissed {
- // return nil, errs.ErrArgs.Wrap("group status is dismissed")
- //}
- if err := s.GroupDatabase.DismissGroup(ctx, req.GroupID, req.DeleteMember); err != nil {
+ if err := s.db.DismissGroup(ctx, req.GroupID, req.DeleteMember); err != nil {
return nil, err
}
- if group.GroupType == constant.SuperGroup {
- if err := s.GroupDatabase.DeleteSuperGroup(ctx, group.GroupID); err != nil {
+ if !req.DeleteMember {
+ num, err := s.db.FindGroupMemberNum(ctx, req.GroupID)
+ if err != nil {
return nil, err
}
- } else {
- if !req.DeleteMember {
- num, err := s.GroupDatabase.FindGroupMemberNum(ctx, req.GroupID)
- if err != nil {
- return nil, err
- }
- // s.Notification.GroupDismissedNotification(ctx, req)
- tips := &sdkws.GroupDismissedTips{
- Group: s.groupDB2PB(group, owner.UserID, num),
- OpUser: &sdkws.GroupMemberFullInfo{},
- }
- if mcontext.GetOpUserID(ctx) == owner.UserID {
- tips.OpUser = s.groupMemberDB2PB(owner, 0)
- }
- s.Notification.GroupDismissedNotification(ctx, tips)
+ tips := &sdkws.GroupDismissedTips{
+ Group: s.groupDB2PB(group, owner.UserID, num),
+ OpUser: &sdkws.GroupMemberFullInfo{},
}
+ if mcontext.GetOpUserID(ctx) == owner.UserID {
+ tips.OpUser = s.groupMemberDB2PB(owner, 0)
+ }
+ s.Notification.GroupDismissedNotification(ctx, tips)
}
- membersID, err := s.GroupDatabase.FindGroupMemberUserID(ctx, group.GroupID)
+ membersID, err := s.db.FindGroupMemberUserID(ctx, group.GroupID)
if err != nil {
return nil, err
}
@@ -1257,15 +1246,15 @@ func (s *groupServer) DismissGroup(ctx context.Context, req *pbgroup.DismissGrou
func (s *groupServer) MuteGroupMember(ctx context.Context, req *pbgroup.MuteGroupMemberReq) (*pbgroup.MuteGroupMemberResp, error) {
resp := &pbgroup.MuteGroupMemberResp{}
- //if err := tokenverify.CheckAccessV3(ctx, req.UserID); err != nil {
- // return nil, err
- //}
- member, err := s.TakeGroupMember(ctx, req.GroupID, req.UserID)
+ member, err := s.db.TakeGroupMember(ctx, req.GroupID, req.UserID)
if err != nil {
return nil, err
}
+ if err := s.PopulateGroupMember(ctx, member); err != nil {
+ return nil, err
+ }
if !authverify.IsAppManagerUid(ctx) {
- opMember, err := s.TakeGroupMember(ctx, req.GroupID, mcontext.GetOpUserID(ctx))
+ opMember, err := s.db.TakeGroupMember(ctx, req.GroupID, mcontext.GetOpUserID(ctx))
if err != nil {
return nil, err
}
@@ -1283,7 +1272,7 @@ func (s *groupServer) MuteGroupMember(ctx context.Context, req *pbgroup.MuteGrou
}
}
data := UpdateGroupMemberMutedTimeMap(time.Now().Add(time.Second * time.Duration(req.MutedSeconds)))
- if err := s.GroupDatabase.UpdateGroupMember(ctx, member.GroupID, member.UserID, data); err != nil {
+ if err := s.db.UpdateGroupMember(ctx, member.GroupID, member.UserID, data); err != nil {
return nil, err
}
s.Notification.GroupMemberMutedNotification(ctx, req.GroupID, req.UserID, req.MutedSeconds)
@@ -1291,29 +1280,15 @@ func (s *groupServer) MuteGroupMember(ctx context.Context, req *pbgroup.MuteGrou
}
func (s *groupServer) CancelMuteGroupMember(ctx context.Context, req *pbgroup.CancelMuteGroupMemberReq) (*pbgroup.CancelMuteGroupMemberResp, error) {
- resp := &pbgroup.CancelMuteGroupMemberResp{}
- //member, err := s.GroupDatabase.TakeGroupMember(ctx, req.GroupID, req.UserID)
- //if err != nil {
- // return nil, err
- //}
- //if !(mcontext.GetOpUserID(ctx) == req.UserID || tokenverify.IsAppManagerUid(ctx)) {
- // opMember, err := s.GroupDatabase.TakeGroupMember(ctx, req.GroupID, mcontext.GetOpUserID(ctx))
- // if err != nil {
- // return nil, err
- // }
- // if opMember.RoleLevel <= member.RoleLevel {
- // return nil, errs.ErrNoPermission.Wrap(fmt.Sprintf("self RoleLevel %d target %d", opMember.RoleLevel, member.RoleLevel))
- // }
- //}
- //if err := tokenverify.CheckAccessV3(ctx, req.UserID); err != nil {
- // return nil, err
- //}
- member, err := s.TakeGroupMember(ctx, req.GroupID, req.UserID)
+ member, err := s.db.TakeGroupMember(ctx, req.GroupID, req.UserID)
if err != nil {
return nil, err
}
+ if err := s.PopulateGroupMember(ctx, member); err != nil {
+ return nil, err
+ }
if !authverify.IsAppManagerUid(ctx) {
- opMember, err := s.TakeGroupMember(ctx, req.GroupID, mcontext.GetOpUserID(ctx))
+ opMember, err := s.db.TakeGroupMember(ctx, req.GroupID, mcontext.GetOpUserID(ctx))
if err != nil {
return nil, err
}
@@ -1331,11 +1306,11 @@ func (s *groupServer) CancelMuteGroupMember(ctx context.Context, req *pbgroup.Ca
}
}
data := UpdateGroupMemberMutedTimeMap(time.Unix(0, 0))
- if err := s.GroupDatabase.UpdateGroupMember(ctx, member.GroupID, member.UserID, data); err != nil {
+ if err := s.db.UpdateGroupMember(ctx, member.GroupID, member.UserID, data); err != nil {
return nil, err
}
s.Notification.GroupMemberCancelMutedNotification(ctx, req.GroupID, req.UserID)
- return resp, nil
+ return &pbgroup.CancelMuteGroupMemberResp{}, nil
}
func (s *groupServer) MuteGroup(ctx context.Context, req *pbgroup.MuteGroupReq) (*pbgroup.MuteGroupResp, error) {
@@ -1343,7 +1318,7 @@ func (s *groupServer) MuteGroup(ctx context.Context, req *pbgroup.MuteGroupReq)
if err := s.CheckGroupAdmin(ctx, req.GroupID); err != nil {
return nil, err
}
- if err := s.GroupDatabase.UpdateGroup(ctx, req.GroupID, UpdateGroupStatusMap(constant.GroupStatusMuted)); err != nil {
+ if err := s.db.UpdateGroup(ctx, req.GroupID, UpdateGroupStatusMap(constant.GroupStatusMuted)); err != nil {
return nil, err
}
s.Notification.GroupMutedNotification(ctx, req.GroupID)
@@ -1355,7 +1330,7 @@ func (s *groupServer) CancelMuteGroup(ctx context.Context, req *pbgroup.CancelMu
if err := s.CheckGroupAdmin(ctx, req.GroupID); err != nil {
return nil, err
}
- if err := s.GroupDatabase.UpdateGroup(ctx, req.GroupID, UpdateGroupStatusMap(constant.GroupOk)); err != nil {
+ if err := s.db.UpdateGroup(ctx, req.GroupID, UpdateGroupStatusMap(constant.GroupOk)); err != nil {
return nil, err
}
s.Notification.GroupCancelMutedNotification(ctx, req.GroupID)
@@ -1367,97 +1342,90 @@ func (s *groupServer) SetGroupMemberInfo(ctx context.Context, req *pbgroup.SetGr
if len(req.Members) == 0 {
return nil, errs.ErrArgs.Wrap("members empty")
}
+ opUserID := mcontext.GetOpUserID(ctx)
+ if opUserID == "" {
+ return nil, errs.ErrNoPermission.Wrap("no op user id")
+ }
+ isAppManagerUid := authverify.IsAppManagerUid(ctx)
for i := range req.Members {
req.Members[i].FaceURL = nil
}
- duplicateMap := make(map[[2]string]struct{})
- userIDMap := make(map[string]struct{})
- groupIDMap := make(map[string]struct{})
- for _, member := range req.Members {
- key := [...]string{member.GroupID, member.UserID}
- if _, ok := duplicateMap[key]; ok {
- return nil, errs.ErrArgs.Wrap("group user duplicate")
+ groupMembers := make(map[string][]*pbgroup.SetGroupMemberInfo)
+ for i, member := range req.Members {
+ if member.RoleLevel != nil {
+ switch member.RoleLevel.Value {
+ case constant.GroupOwner:
+				return nil, errs.ErrNoPermission.Wrap("cannot set group owner")
+ case constant.GroupAdmin, constant.GroupOrdinaryUsers:
+ default:
+ return nil, errs.ErrArgs.Wrap("invalid role level")
+ }
}
- duplicateMap[key] = struct{}{}
- userIDMap[member.UserID] = struct{}{}
- groupIDMap[member.GroupID] = struct{}{}
+ groupMembers[member.GroupID] = append(groupMembers[member.GroupID], req.Members[i])
}
- groupIDs := utils.Keys(groupIDMap)
- userIDs := utils.Keys(userIDMap)
- members, err := s.FindGroupMember(ctx, groupIDs, append(userIDs, mcontext.GetOpUserID(ctx)), nil)
- if err != nil {
- return nil, err
- }
- for _, member := range members {
- delete(duplicateMap, [...]string{member.GroupID, member.UserID})
- }
- if len(duplicateMap) > 0 {
- return nil, errs.ErrArgs.Wrap("user not found" + strings.Join(utils.Slice(utils.Keys(duplicateMap), func(e [2]string) string {
- return fmt.Sprintf("[group: %s user: %s]", e[0], e[1])
- }), ","))
- }
- memberMap := utils.SliceToMap(members, func(e *relationtb.GroupMemberModel) [2]string {
- return [...]string{e.GroupID, e.UserID}
- })
- if !authverify.IsAppManagerUid(ctx) {
- opUserID := mcontext.GetOpUserID(ctx)
- for _, member := range req.Members {
- if member.RoleLevel != nil {
- switch member.RoleLevel.Value {
- case constant.GroupOrdinaryUsers, constant.GroupAdmin:
- default:
- return nil, errs.ErrArgs.Wrap("invalid role level")
- }
- }
- opMember, ok := memberMap[[...]string{member.GroupID, opUserID}]
- if !ok {
- return nil, errs.ErrArgs.Wrap(fmt.Sprintf("user %s not in group %s", opUserID, member.GroupID))
+ for groupID, members := range groupMembers {
+ temp := make(map[string]struct{})
+ userIDs := make([]string, 0, len(members)+1)
+ for _, member := range members {
+ if _, ok := temp[member.UserID]; ok {
+ return nil, errs.ErrArgs.Wrap(fmt.Sprintf("repeat group %s user %s", member.GroupID, member.UserID))
}
+ temp[member.UserID] = struct{}{}
+ userIDs = append(userIDs, member.UserID)
+ }
+ if _, ok := temp[opUserID]; !ok {
+ userIDs = append(userIDs, opUserID)
+ }
+ dbMembers, err := s.db.FindGroupMembers(ctx, groupID, userIDs)
+ if err != nil {
+ return nil, err
+ }
+ opUserIndex := -1
+ for i, member := range dbMembers {
if member.UserID == opUserID {
- if member.RoleLevel != nil {
- return nil, errs.ErrNoPermission.Wrap("can not change self role level")
- }
- continue
+ opUserIndex = i
+ break
}
- if opMember.RoleLevel == constant.GroupOrdinaryUsers {
- return nil, errs.ErrNoPermission.Wrap("ordinary users can not change other role level")
+ }
+ switch len(userIDs) - len(dbMembers) {
+ case 0:
+ if !isAppManagerUid {
+ roleLevel := dbMembers[opUserIndex].RoleLevel
+ if roleLevel != constant.GroupOwner {
+ switch roleLevel {
+ case constant.GroupAdmin:
+ for _, member := range dbMembers {
+ if member.RoleLevel == constant.GroupOwner {
+ return nil, errs.ErrNoPermission.Wrap("admin can not change group owner")
+ }
+ if member.RoleLevel == constant.GroupAdmin && member.UserID != opUserID {
+ return nil, errs.ErrNoPermission.Wrap("admin can not change other group admin")
+ }
+ }
+ case constant.GroupOrdinaryUsers:
+ for _, member := range dbMembers {
+ if !(member.RoleLevel == constant.GroupOrdinaryUsers && member.UserID == opUserID) {
+ return nil, errs.ErrNoPermission.Wrap("ordinary users can not change other role level")
+ }
+ }
+ default:
+ for _, member := range dbMembers {
+ if member.RoleLevel >= roleLevel {
+ return nil, errs.ErrNoPermission.Wrap("can not change higher role level")
+ }
+ }
+ }
+ }
}
- dbMember, ok := memberMap[[...]string{member.GroupID, member.UserID}]
- if !ok {
- return nil, errs.ErrRecordNotFound.Wrap(fmt.Sprintf("user %s not in group %s", member.UserID, member.GroupID))
+ case 1:
+ if opUserIndex >= 0 {
+ return nil, errs.ErrArgs.Wrap("user not in group")
}
- //if opMember.RoleLevel == constant.GroupOwner {
- // continue
- //}
- //if dbMember.RoleLevel == constant.GroupOwner {
- // return nil, errs.ErrNoPermission.Wrap("change group owner")
- //}
- //if opMember.RoleLevel == constant.GroupAdmin && dbMember.RoleLevel == constant.GroupAdmin {
- // return nil, errs.ErrNoPermission.Wrap("admin can not change other admin role info")
- //}
- switch opMember.RoleLevel {
- case constant.GroupOrdinaryUsers:
- return nil, errs.ErrNoPermission.Wrap("ordinary users can not change other role level")
- case constant.GroupAdmin:
- if dbMember.RoleLevel != constant.GroupOrdinaryUsers {
- return nil, errs.ErrNoPermission.Wrap("admin can not change other role level")
- }
- if member.RoleLevel != nil {
- return nil, errs.ErrNoPermission.Wrap("admin can not change other role level")
- }
- case constant.GroupOwner:
- //if member.RoleLevel != nil && member.RoleLevel.Value == constant.GroupOwner {
- // return nil, errs.ErrNoPermission.Wrap("owner only one")
- //}
+ if !isAppManagerUid {
+ return nil, errs.ErrNoPermission.Wrap("user not in group")
}
- }
- }
- for _, member := range req.Members {
- if member.RoleLevel == nil {
- continue
- }
- if memberMap[[...]string{member.GroupID, member.UserID}].RoleLevel == constant.GroupOwner {
- return nil, errs.ErrArgs.Wrap(fmt.Sprintf("group %s user %s is owner", member.GroupID, member.UserID))
+ default:
+ return nil, errs.ErrArgs.Wrap("user not in group")
}
}
for i := 0; i < len(req.Members); i++ {
@@ -1465,7 +1433,7 @@ func (s *groupServer) SetGroupMemberInfo(ctx context.Context, req *pbgroup.SetGr
return nil, err
}
}
- if err = s.GroupDatabase.UpdateGroupMembers(ctx, utils.Slice(req.Members, func(e *pbgroup.SetGroupMemberInfo) *relationtb.BatchUpdateGroupMember {
+ if err := s.db.UpdateGroupMembers(ctx, utils.Slice(req.Members, func(e *pbgroup.SetGroupMemberInfo) *relationtb.BatchUpdateGroupMember {
return &relationtb.BatchUpdateGroupMember{
GroupID: e.GroupID,
UserID: e.UserID,
@@ -1484,10 +1452,7 @@ func (s *groupServer) SetGroupMemberInfo(ctx context.Context, req *pbgroup.SetGr
}
}
if member.Nickname != nil || member.FaceURL != nil || member.Ex != nil {
- log.ZDebug(ctx, "setGroupMemberInfo notification", "member", member.UserID)
- if err := s.Notification.GroupMemberInfoSetNotification(ctx, member.GroupID, member.UserID); err != nil {
- log.ZError(ctx, "setGroupMemberInfo notification failed", err, "member", member.UserID, "groupID", member.GroupID)
- }
+ s.Notification.GroupMemberInfoSetNotification(ctx, member.GroupID, member.UserID)
}
}
for i := 0; i < len(req.Members); i++ {
@@ -1507,7 +1472,7 @@ func (s *groupServer) GetGroupAbstractInfo(ctx context.Context, req *pbgroup.Get
if utils.Duplicate(req.GroupIDs) {
return nil, errs.ErrArgs.Wrap("groupIDs duplicate")
}
- groups, err := s.GroupDatabase.FindGroup(ctx, req.GroupIDs)
+ groups, err := s.db.FindGroup(ctx, req.GroupIDs)
if err != nil {
return nil, err
}
@@ -1516,7 +1481,7 @@ func (s *groupServer) GetGroupAbstractInfo(ctx context.Context, req *pbgroup.Get
})); len(ids) > 0 {
return nil, errs.ErrGroupIDNotFound.Wrap("not found group " + strings.Join(ids, ","))
}
- groupUserMap, err := s.GroupDatabase.MapGroupMemberUserID(ctx, req.GroupIDs)
+ groupUserMap, err := s.db.MapGroupMemberUserID(ctx, req.GroupIDs)
if err != nil {
return nil, err
}
@@ -1535,25 +1500,14 @@ func (s *groupServer) GetUserInGroupMembers(ctx context.Context, req *pbgroup.Ge
if len(req.GroupIDs) == 0 {
return nil, errs.ErrArgs.Wrap("groupIDs empty")
}
- members, err := s.FindGroupMember(ctx, []string{req.UserID}, req.GroupIDs, nil)
+ members, err := s.db.FindGroupMemberUser(ctx, req.GroupIDs, req.UserID)
if err != nil {
return nil, err
}
- publicUserInfoMap, err := s.GetPublicUserInfoMap(ctx, utils.Filter(members, func(e *relationtb.GroupMemberModel) (string, bool) {
- return e.UserID, e.Nickname == "" || e.FaceURL == ""
- }), true)
- if err != nil {
+ if err := s.PopulateGroupMember(ctx, members...); err != nil {
return nil, err
}
resp.Members = utils.Slice(members, func(e *relationtb.GroupMemberModel) *sdkws.GroupMemberFullInfo {
- if userInfo, ok := publicUserInfoMap[e.UserID]; ok {
- if e.Nickname == "" {
- e.Nickname = userInfo.Nickname
- }
- if e.FaceURL == "" {
- e.FaceURL = userInfo.FaceURL
- }
- }
return convert.Db2PbGroupMember(e)
})
return resp, nil
@@ -1561,7 +1515,7 @@ func (s *groupServer) GetUserInGroupMembers(ctx context.Context, req *pbgroup.Ge
func (s *groupServer) GetGroupMemberUserIDs(ctx context.Context, req *pbgroup.GetGroupMemberUserIDsReq) (resp *pbgroup.GetGroupMemberUserIDsResp, err error) {
resp = &pbgroup.GetGroupMemberUserIDsResp{}
- resp.UserIDs, err = s.GroupDatabase.FindGroupMemberUserID(ctx, req.GroupID)
+ resp.UserIDs, err = s.db.FindGroupMemberUserID(ctx, req.GroupID)
if err != nil {
return nil, err
}
@@ -1573,25 +1527,14 @@ func (s *groupServer) GetGroupMemberRoleLevel(ctx context.Context, req *pbgroup.
if len(req.RoleLevels) == 0 {
return nil, errs.ErrArgs.Wrap("RoleLevels empty")
}
- members, err := s.FindGroupMember(ctx, []string{req.GroupID}, nil, req.RoleLevels)
+ members, err := s.db.FindGroupMemberRoleLevels(ctx, req.GroupID, req.RoleLevels)
if err != nil {
return nil, err
}
- publicUserInfoMap, err := s.GetPublicUserInfoMap(ctx, utils.Filter(members, func(e *relationtb.GroupMemberModel) (string, bool) {
- return e.UserID, e.Nickname == "" || e.FaceURL == ""
- }), true)
- if err != nil {
+ if err := s.PopulateGroupMember(ctx, members...); err != nil {
return nil, err
}
resp.Members = utils.Slice(members, func(e *relationtb.GroupMemberModel) *sdkws.GroupMemberFullInfo {
- if userInfo, ok := publicUserInfoMap[e.UserID]; ok {
- if e.Nickname == "" {
- e.Nickname = userInfo.Nickname
- }
- if e.FaceURL == "" {
- e.FaceURL = userInfo.FaceURL
- }
- }
return convert.Db2PbGroupMember(e)
})
return resp, nil
@@ -1599,7 +1542,7 @@ func (s *groupServer) GetGroupMemberRoleLevel(ctx context.Context, req *pbgroup.
func (s *groupServer) GetGroupUsersReqApplicationList(ctx context.Context, req *pbgroup.GetGroupUsersReqApplicationListReq) (*pbgroup.GetGroupUsersReqApplicationListResp, error) {
resp := &pbgroup.GetGroupUsersReqApplicationListResp{}
- total, requests, err := s.GroupDatabase.FindGroupRequests(ctx, req.GroupID, req.UserIDs)
+ requests, err := s.db.FindGroupRequests(ctx, req.GroupID, req.UserIDs)
if err != nil {
return nil, err
}
@@ -1609,7 +1552,7 @@ func (s *groupServer) GetGroupUsersReqApplicationList(ctx context.Context, req *
groupIDs := utils.Distinct(utils.Slice(requests, func(e *relationtb.GroupRequestModel) string {
return e.GroupID
}))
- groups, err := s.GroupDatabase.FindGroup(ctx, groupIDs)
+ groups, err := s.db.FindGroup(ctx, groupIDs)
if err != nil {
return nil, err
}
@@ -1619,14 +1562,17 @@ func (s *groupServer) GetGroupUsersReqApplicationList(ctx context.Context, req *
if ids := utils.Single(groupIDs, utils.Keys(groupMap)); len(ids) > 0 {
return nil, errs.ErrGroupIDNotFound.Wrap(strings.Join(ids, ","))
}
- owners, err := s.FindGroupMember(ctx, groupIDs, nil, []int32{constant.GroupOwner})
+ owners, err := s.db.FindGroupsOwner(ctx, groupIDs)
if err != nil {
return nil, err
}
+ if err := s.PopulateGroupMember(ctx, owners...); err != nil {
+ return nil, err
+ }
ownerMap := utils.SliceToMap(owners, func(e *relationtb.GroupMemberModel) string {
return e.GroupID
})
- groupMemberNum, err := s.GroupDatabase.MapGroupMemberNum(ctx, groupIDs)
+ groupMemberNum, err := s.db.MapGroupMemberNum(ctx, groupIDs)
if err != nil {
return nil, err
}
@@ -1637,40 +1583,6 @@ func (s *groupServer) GetGroupUsersReqApplicationList(ctx context.Context, req *
}
return convert.Db2PbGroupRequest(e, nil, convert.Db2PbGroupInfo(groupMap[e.GroupID], ownerUserID, groupMemberNum[e.GroupID]))
})
- resp.Total = total
+ resp.Total = int64(len(resp.GroupRequests))
return resp, nil
}
-
-func (s *groupServer) groupMemberHashCode(ctx context.Context, groupID string) (uint64, error) {
- userIDs, err := s.GroupDatabase.FindGroupMemberUserID(ctx, groupID)
- if err != nil {
- return 0, err
- }
- var members []*sdkws.GroupMemberFullInfo
- if len(userIDs) > 0 {
- resp, err := s.GetGroupMembersInfo(ctx, &pbgroup.GetGroupMembersInfoReq{GroupID: groupID, UserIDs: userIDs})
- if err != nil {
- return 0, err
- }
- members = resp.Members
- utils.Sort(userIDs, true)
- }
- memberMap := utils.SliceToMap(members, func(e *sdkws.GroupMemberFullInfo) string {
- return e.UserID
- })
- res := make([]*sdkws.GroupMemberFullInfo, 0, len(members))
- for _, userID := range userIDs {
- member, ok := memberMap[userID]
- if !ok {
- continue
- }
- member.AppMangerLevel = 0
- res = append(res, member)
- }
- data, err := json.Marshal(res)
- if err != nil {
- return 0, err
- }
- sum := md5.Sum(data)
- return binary.BigEndian.Uint64(sum[:]), nil
-}
diff --git a/internal/rpc/group/statistics.go b/internal/rpc/group/statistics.go
index 8aeefbee3..d909e9503 100644
--- a/internal/rpc/group/statistics.go
+++ b/internal/rpc/group/statistics.go
@@ -26,16 +26,16 @@ func (s *groupServer) GroupCreateCount(ctx context.Context, req *group.GroupCrea
if req.Start > req.End {
return nil, errs.ErrArgs.Wrap("start > end")
}
- total, err := s.GroupDatabase.CountTotal(ctx, nil)
+ total, err := s.db.CountTotal(ctx, nil)
if err != nil {
return nil, err
}
start := time.UnixMilli(req.Start)
- before, err := s.GroupDatabase.CountTotal(ctx, &start)
+ before, err := s.db.CountTotal(ctx, &start)
if err != nil {
return nil, err
}
- count, err := s.GroupDatabase.CountRangeEverydayTotal(ctx, start, time.UnixMilli(req.End))
+ count, err := s.db.CountRangeEverydayTotal(ctx, start, time.UnixMilli(req.End))
if err != nil {
return nil, err
}
diff --git a/internal/rpc/group/super_group.go b/internal/rpc/group/super_group.go
index 6cd1a2943..f893a79c2 100644
--- a/internal/rpc/group/super_group.go
+++ b/internal/rpc/group/super_group.go
@@ -16,99 +16,15 @@ package group
import (
"context"
- "fmt"
- "strings"
+ "errors"
- "github.com/OpenIMSDK/protocol/constant"
pbgroup "github.com/OpenIMSDK/protocol/group"
- sdkws "github.com/OpenIMSDK/protocol/sdkws"
- "github.com/OpenIMSDK/tools/errs"
- "github.com/OpenIMSDK/tools/utils"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/convert"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/unrelation"
)
-func (s *groupServer) GetJoinedSuperGroupList(
- ctx context.Context,
- req *pbgroup.GetJoinedSuperGroupListReq,
-) (*pbgroup.GetJoinedSuperGroupListResp, error) {
- resp := &pbgroup.GetJoinedSuperGroupListResp{}
- groupIDs, err := s.GroupDatabase.FindJoinSuperGroup(ctx, req.UserID)
- if err != nil {
- return nil, err
- }
- if len(groupIDs) == 0 {
- return resp, nil
- }
- owners, err := s.FindGroupMember(ctx, groupIDs, nil, []int32{constant.GroupOwner})
- if err != nil {
- return nil, err
- }
- ownerMap := utils.SliceToMap(owners, func(e *relation.GroupMemberModel) string {
- return e.GroupID
- })
- if ids := utils.Single(groupIDs, utils.Keys(ownerMap)); len(ids) > 0 {
- return nil, errs.ErrData.Wrap(fmt.Sprintf("super group %s not owner", strings.Join(ids, ",")))
- }
- groups, err := s.GroupDatabase.FindGroup(ctx, groupIDs)
- if err != nil {
- return nil, err
- }
- groupMap := utils.SliceToMap(groups, func(e *relation.GroupModel) string {
- return e.GroupID
- })
- if ids := utils.Single(groupIDs, utils.Keys(groupMap)); len(ids) > 0 {
- return nil, errs.ErrData.Wrap(fmt.Sprintf("super group info %s not found", strings.Join(ids, ",")))
- }
- superGroupMembers, err := s.GroupDatabase.FindSuperGroup(ctx, groupIDs)
- if err != nil {
- return nil, err
- }
- superGroupMemberMap := utils.SliceToMapAny(
- superGroupMembers,
- func(e *unrelation.SuperGroupModel) (string, []string) {
- return e.GroupID, e.MemberIDs
- },
- )
- resp.Groups = utils.Slice(groupIDs, func(groupID string) *sdkws.GroupInfo {
- return convert.Db2PbGroupInfo(groupMap[groupID], ownerMap[groupID].UserID, uint32(len(superGroupMemberMap)))
- })
- return resp, nil
+func (s *groupServer) GetJoinedSuperGroupList(context.Context, *pbgroup.GetJoinedSuperGroupListReq) (*pbgroup.GetJoinedSuperGroupListResp, error) {
+ return nil, errors.New("deprecated")
}
-func (s *groupServer) GetSuperGroupsInfo(
- ctx context.Context,
- req *pbgroup.GetSuperGroupsInfoReq,
-) (resp *pbgroup.GetSuperGroupsInfoResp, err error) {
- resp = &pbgroup.GetSuperGroupsInfoResp{}
- if len(req.GroupIDs) == 0 {
- return nil, errs.ErrArgs.Wrap("groupIDs empty")
- }
- groups, err := s.GroupDatabase.FindGroup(ctx, req.GroupIDs)
- if err != nil {
- return nil, err
- }
- superGroupMembers, err := s.GroupDatabase.FindSuperGroup(ctx, req.GroupIDs)
- if err != nil {
- return nil, err
- }
- superGroupMemberMap := utils.SliceToMapAny(
- superGroupMembers,
- func(e *unrelation.SuperGroupModel) (string, []string) {
- return e.GroupID, e.MemberIDs
- },
- )
- owners, err := s.FindGroupMember(ctx, req.GroupIDs, nil, []int32{constant.GroupOwner})
- if err != nil {
- return nil, err
- }
- ownerMap := utils.SliceToMap(owners, func(e *relation.GroupMemberModel) string {
- return e.GroupID
- })
- resp.GroupInfos = utils.Slice(groups, func(e *relation.GroupModel) *sdkws.GroupInfo {
- return convert.Db2PbGroupInfo(e, ownerMap[e.GroupID].UserID, uint32(len(superGroupMemberMap[e.GroupID])))
- })
- return resp, nil
+func (s *groupServer) GetSuperGroupsInfo(context.Context, *pbgroup.GetSuperGroupsInfoReq) (resp *pbgroup.GetSuperGroupsInfoResp, err error) {
+ return nil, errors.New("deprecated")
}
diff --git a/internal/rpc/msg/as_read.go b/internal/rpc/msg/as_read.go
index 49113aa0b..cb292421e 100644
--- a/internal/rpc/msg/as_read.go
+++ b/internal/rpc/msg/as_read.go
@@ -16,14 +16,14 @@ package msg
import (
"context"
- cbapi "github.com/openimsdk/open-im-server/v3/pkg/callbackstruct"
utils2 "github.com/OpenIMSDK/tools/utils"
+ cbapi "github.com/openimsdk/open-im-server/v3/pkg/callbackstruct"
+
"github.com/redis/go-redis/v9"
"github.com/OpenIMSDK/protocol/constant"
- "github.com/OpenIMSDK/protocol/conversation"
"github.com/OpenIMSDK/protocol/msg"
"github.com/OpenIMSDK/protocol/sdkws"
"github.com/OpenIMSDK/tools/errs"
@@ -92,7 +92,10 @@ func (m *msgServer) SetConversationHasReadSeq(
return &msg.SetConversationHasReadSeqResp{}, nil
}
-func (m *msgServer) MarkMsgsAsRead(ctx context.Context, req *msg.MarkMsgsAsReadReq) (resp *msg.MarkMsgsAsReadResp, err error) {
+func (m *msgServer) MarkMsgsAsRead(
+ ctx context.Context,
+ req *msg.MarkMsgsAsReadReq,
+) (resp *msg.MarkMsgsAsReadResp, err error) {
if len(req.Seqs) < 1 {
return nil, errs.ErrArgs.Wrap("seqs must not be empty")
}
@@ -111,6 +114,7 @@ func (m *msgServer) MarkMsgsAsRead(ctx context.Context, req *msg.MarkMsgsAsReadR
if err = m.MsgDatabase.MarkSingleChatMsgsAsRead(ctx, req.UserID, req.ConversationID, req.Seqs); err != nil {
return
}
+
currentHasReadSeq, err := m.MsgDatabase.GetHasReadSeq(ctx, req.UserID, req.ConversationID)
if err != nil && errs.Unwrap(err) != redis.Nil {
return
@@ -121,6 +125,17 @@ func (m *msgServer) MarkMsgsAsRead(ctx context.Context, req *msg.MarkMsgsAsReadR
return
}
}
+
+ reqCallback := &cbapi.CallbackSingleMsgReadReq{
+ ConversationID: conversation.ConversationID,
+ UserID: req.UserID,
+ Seqs: req.Seqs,
+ ContentType: conversation.ConversationType,
+ }
+ if err = CallbackSingleMsgRead(ctx, reqCallback); err != nil {
+ return nil, err
+ }
+
if err = m.sendMarkAsReadNotification(ctx, req.ConversationID, conversation.ConversationType, req.UserID,
m.conversationAndGetRecvID(conversation, req.UserID), req.Seqs, hasReadSeq); err != nil {
return
@@ -128,7 +143,10 @@ func (m *msgServer) MarkMsgsAsRead(ctx context.Context, req *msg.MarkMsgsAsReadR
return &msg.MarkMsgsAsReadResp{}, nil
}
-func (m *msgServer) MarkConversationAsRead(ctx context.Context, req *msg.MarkConversationAsReadReq) (resp *msg.MarkConversationAsReadResp, err error) {
+func (m *msgServer) MarkConversationAsRead(
+ ctx context.Context,
+ req *msg.MarkConversationAsReadReq,
+) (resp *msg.MarkConversationAsReadResp, err error) {
conversation, err := m.Conversation.GetConversation(ctx, req.UserID, req.ConversationID)
if err != nil {
return nil, err
@@ -137,34 +155,54 @@ func (m *msgServer) MarkConversationAsRead(ctx context.Context, req *msg.MarkCon
if err != nil && errs.Unwrap(err) != redis.Nil {
return nil, err
}
- seqs := generateSeqs(hasReadSeq, req)
+ var seqs []int64
- if len(seqs) > 0 || req.HasReadSeq > hasReadSeq {
- err = m.updateReadStatus(ctx, req, conversation, seqs, hasReadSeq)
- if err != nil {
+ log.ZDebug(ctx, "MarkConversationAsRead", "hasReadSeq", hasReadSeq,
+ "req.HasReadSeq", req.HasReadSeq)
+ if conversation.ConversationType == constant.SingleChatType {
+ for i := hasReadSeq + 1; i <= req.HasReadSeq; i++ {
+ seqs = append(seqs, i)
+ }
+ // avoid missing seqs when the client reports MarkConversationMessageAsRead out of order
+ for _, val := range req.Seqs {
+ if !utils2.Contain(val, seqs...) {
+ seqs = append(seqs, val)
+ }
+ }
+ if len(seqs) > 0 {
+ log.ZDebug(ctx, "MarkConversationAsRead", "seqs", seqs, "conversationID", req.ConversationID)
+ if err = m.MsgDatabase.MarkSingleChatMsgsAsRead(ctx, req.UserID, req.ConversationID, seqs); err != nil {
+ return nil, err
+ }
+ }
+ if req.HasReadSeq > hasReadSeq {
+ err = m.MsgDatabase.SetHasReadSeq(ctx, req.UserID, req.ConversationID, req.HasReadSeq)
+ if err != nil {
+ return nil, err
+ }
+ hasReadSeq = req.HasReadSeq
+ }
+ if err = m.sendMarkAsReadNotification(ctx, req.ConversationID, conversation.ConversationType, req.UserID,
+ m.conversationAndGetRecvID(conversation, req.UserID), seqs, hasReadSeq); err != nil {
return nil, err
}
- }
- return &msg.MarkConversationAsReadResp{}, nil
-}
-func generateSeqs(hasReadSeq int64, req *msg.MarkConversationAsReadReq) []int64 {
- var seqs []int64
- for _, val := range req.Seqs {
- if val > hasReadSeq && !utils2.Contain(val, seqs...) {
- seqs = append(seqs, val)
+ } else if conversation.ConversationType == constant.SuperGroupChatType ||
+ conversation.ConversationType == constant.NotificationChatType {
+ if req.HasReadSeq > hasReadSeq {
+ err = m.MsgDatabase.SetHasReadSeq(ctx, req.UserID, req.ConversationID, req.HasReadSeq)
+ if err != nil {
+ return nil, err
+ }
+ hasReadSeq = req.HasReadSeq
}
- }
- return seqs
-}
-
-func (m *msgServer) updateReadStatus(ctx context.Context, req *msg.MarkConversationAsReadReq, conversation *conversation.Conversation, seqs []int64, hasReadSeq int64) error {
- if conversation.ConversationType == constant.SingleChatType && len(seqs) > 0 {
- log.ZDebug(ctx, "MarkConversationAsRead", "seqs", seqs, "conversationID", req.ConversationID)
- if err := m.MsgDatabase.MarkSingleChatMsgsAsRead(ctx, req.UserID, req.ConversationID, seqs); err != nil {
- return err
+ if err = m.sendMarkAsReadNotification(ctx, req.ConversationID, constant.SingleChatType, req.UserID,
+ req.UserID, seqs, hasReadSeq); err != nil {
+ return nil, err
}
+
}
+
reqCall := &cbapi.CallbackGroupMsgReadReq{
SendID: conversation.OwnerUserID,
ReceiveID: req.UserID,
@@ -172,21 +210,10 @@ func (m *msgServer) updateReadStatus(ctx context.Context, req *msg.MarkConversat
ContentType: int64(conversation.ConversationType),
}
if err := CallbackGroupMsgRead(ctx, reqCall); err != nil {
- return err
- }
-
- if req.HasReadSeq > hasReadSeq {
- if err := m.MsgDatabase.SetHasReadSeq(ctx, req.UserID, req.ConversationID, req.HasReadSeq); err != nil {
- return err
- }
- }
-
- recvID := m.conversationAndGetRecvID(conversation, req.UserID)
- if conversation.ConversationType == constant.SuperGroupChatType || conversation.ConversationType == constant.NotificationChatType {
- recvID = req.UserID
+ return nil, err
}
- return m.sendMarkAsReadNotification(ctx, req.ConversationID, conversation.ConversationType, req.UserID, recvID, seqs, req.HasReadSeq)
+ return &msg.MarkConversationAsReadResp{}, nil
}
func (m *msgServer) sendMarkAsReadNotification(
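
The reworked `MarkConversationAsRead` above collects, for single chats, every seq from the stored has-read seq (exclusive) up to the requested one, then merges any seqs the client reported explicitly. A minimal standalone sketch of that seq-merging step, assuming plain `int64` slices (the helper name is ours, not from the patch):

```go
package main

import "fmt"

// mergeReadSeqs mirrors the seq-collection step in MarkConversationAsRead:
// mark every seq between the stored has-read seq (exclusive) and the
// requested has-read seq (inclusive), plus any explicitly reported seqs
// that are not already covered.
func mergeReadSeqs(hasReadSeq, reqHasReadSeq int64, reported []int64) []int64 {
	seen := make(map[int64]struct{})
	var seqs []int64
	for seq := hasReadSeq + 1; seq <= reqHasReadSeq; seq++ {
		seqs = append(seqs, seq)
		seen[seq] = struct{}{}
	}
	// Clients may also report individual seqs that arrived out of order.
	for _, seq := range reported {
		if _, ok := seen[seq]; !ok {
			seqs = append(seqs, seq)
			seen[seq] = struct{}{}
		}
	}
	return seqs
}

func main() {
	// Stored has-read seq is 10; the client has read up to 13 and, separately, seq 20.
	fmt.Println(mergeReadSeqs(10, 13, []int64{12, 20})) // [11 12 13 20]
}
```
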
diff --git a/internal/rpc/msg/callback.go b/internal/rpc/msg/callback.go
index 85c002bf3..f98318bba 100644
--- a/internal/rpc/msg/callback.go
+++ b/internal/rpc/msg/callback.go
@@ -16,6 +16,7 @@ package msg
import (
"context"
+
"github.com/OpenIMSDK/protocol/sdkws"
"google.golang.org/protobuf/proto"
@@ -26,6 +27,7 @@ import (
"github.com/OpenIMSDK/tools/utils"
cbapi "github.com/openimsdk/open-im-server/v3/pkg/callbackstruct"
+
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/http"
)
@@ -68,7 +70,7 @@ func GetContent(msg *sdkws.MsgData) string {
}
func callbackBeforeSendSingleMsg(ctx context.Context, msg *pbchat.SendMsgReq) error {
- if !config.Config.Callback.CallbackBeforeSendSingleMsg.Enable {
+ if !config.Config.Callback.CallbackBeforeSendSingleMsg.Enable || msg.MsgData.ContentType == constant.Typing {
return nil
}
req := &cbapi.CallbackBeforeSendSingleMsgReq{
@@ -83,7 +85,7 @@ func callbackBeforeSendSingleMsg(ctx context.Context, msg *pbchat.SendMsgReq) er
}
func callbackAfterSendSingleMsg(ctx context.Context, msg *pbchat.SendMsgReq) error {
- if !config.Config.Callback.CallbackAfterSendSingleMsg.Enable {
+ if !config.Config.Callback.CallbackAfterSendSingleMsg.Enable || msg.MsgData.ContentType == constant.Typing {
return nil
}
req := &cbapi.CallbackAfterSendSingleMsgReq{
@@ -98,10 +100,10 @@ func callbackAfterSendSingleMsg(ctx context.Context, msg *pbchat.SendMsgReq) err
}
func callbackBeforeSendGroupMsg(ctx context.Context, msg *pbchat.SendMsgReq) error {
- if !config.Config.Callback.CallbackAfterSendSingleMsg.Enable {
+ if !config.Config.Callback.CallbackBeforeSendGroupMsg.Enable || msg.MsgData.ContentType == constant.Typing {
return nil
}
- req := &cbapi.CallbackAfterSendGroupMsgReq{
+ req := &cbapi.CallbackBeforeSendGroupMsgReq{
CommonCallbackReq: toCommonCallback(ctx, msg, cbapi.CallbackBeforeSendGroupMsgCommand),
GroupID: msg.MsgData.GroupID,
}
@@ -113,7 +115,7 @@ func callbackBeforeSendGroupMsg(ctx context.Context, msg *pbchat.SendMsgReq) err
}
func callbackAfterSendGroupMsg(ctx context.Context, msg *pbchat.SendMsgReq) error {
- if !config.Config.Callback.CallbackAfterSendGroupMsg.Enable {
+ if !config.Config.Callback.CallbackAfterSendGroupMsg.Enable || msg.MsgData.ContentType == constant.Typing {
return nil
}
req := &cbapi.CallbackAfterSendGroupMsgReq{
@@ -160,7 +162,6 @@ func callbackMsgModify(ctx context.Context, msg *pbchat.SendMsgReq) error {
log.ZDebug(ctx, "callbackMsgModify", "msg", msg.MsgData)
return nil
}
-
func CallbackGroupMsgRead(ctx context.Context, req *cbapi.CallbackGroupMsgReadReq) error {
if !config.Config.Callback.CallbackGroupMsgRead.Enable || req.ContentType != constant.Text {
return nil
@@ -180,10 +181,26 @@ func CallbackSingleMsgRead(ctx context.Context, req *cbapi.CallbackSingleMsgRead
}
req.CallbackCommand = cbapi.CallbackSingleMsgRead
- resp := &cbapi.CallbackGroupMsgReadResp{}
+ resp := &cbapi.CallbackSingleMsgReadResp{}
if err := http.CallBackPostReturn(ctx, cbURL(), req, resp, config.Config.Callback.CallbackMsgModify); err != nil {
return err
}
return nil
}
+func CallbackAfterRevokeMsg(ctx context.Context, req *pbchat.RevokeMsgReq) error {
+ if !config.Config.Callback.CallbackAfterRevokeMsg.Enable {
+ return nil
+ }
+ callbackReq := &cbapi.CallbackAfterRevokeMsgReq{
+ CallbackCommand: cbapi.CallbackAfterRevokeMsgCommand,
+ ConversationID: req.ConversationID,
+ Seq: req.Seq,
+ UserID: req.UserID,
+ }
+ resp := &cbapi.CallbackAfterRevokeMsgResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, callbackReq, resp, config.Config.Callback.CallbackAfterRevokeMsg); err != nil {
+ return err
+ }
+ return nil
+}
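
The new `CallbackAfterRevokeMsg` posts the conversation ID, seq, and operator user ID to the configured callback URL after a successful revoke. A hedged sketch of a receiving webhook; the JSON field names and the acknowledgement body below are assumptions for illustration, not taken from the `callbackstruct` package:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// afterRevokeMsg models the payload posted by the after-revoke callback.
// The JSON tags are guesses for this sketch; check callbackstruct for the real contract.
type afterRevokeMsg struct {
	CallbackCommand string `json:"callbackCommand"`
	ConversationID  string `json:"conversationID"`
	Seq             int64  `json:"seq"`
	UserID          string `json:"userID"`
}

func handleAfterRevoke(w http.ResponseWriter, r *http.Request) {
	var req afterRevokeMsg
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	log.Printf("msg revoked: conversation=%s seq=%d by=%s", req.ConversationID, req.Seq, req.UserID)
	w.Header().Set("Content-Type", "application/json")
	// The exact acknowledgement shape depends on the callback framework;
	// an empty success object is usually enough for an "after" callback.
	w.Write([]byte(`{"errCode":0}`))
}

func main() {
	http.HandleFunc("/callback", handleAfterRevoke)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```
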
diff --git a/internal/rpc/msg/revoke.go b/internal/rpc/msg/revoke.go
index 151d29fc1..0a24753b2 100644
--- a/internal/rpc/msg/revoke.go
+++ b/internal/rpc/msg/revoke.go
@@ -61,6 +61,7 @@ func (m *msgServer) RevokeMsg(ctx context.Context, req *msg.RevokeMsgReq) (*msg.
if msgs[0].ContentType == constant.MsgRevokeNotification {
return nil, errs.ErrMsgAlreadyRevoke.Wrap("msg already revoke")
}
+
data, _ := json.Marshal(msgs[0])
log.ZInfo(ctx, "GetMsgBySeqs", "conversationID", req.ConversationID, "seq", req.Seq, "msg", string(data))
var role int32
@@ -110,6 +111,13 @@ func (m *msgServer) RevokeMsg(ctx context.Context, req *msg.RevokeMsgReq) (*msg.
return nil, err
}
revokerUserID := mcontext.GetOpUserID(ctx)
+ var flag bool
+ if len(config.Config.Manager.UserID) > 0 {
+ flag = utils.Contain(revokerUserID, config.Config.Manager.UserID...)
+ }
+ if len(config.Config.Manager.UserID) == 0 && len(config.Config.IMAdmin.UserID) > 0 {
+ flag = utils.Contain(revokerUserID, config.Config.IMAdmin.UserID...)
+ }
tips := sdkws.RevokeMsgTips{
RevokerUserID: revokerUserID,
ClientMsgID: msgs[0].ClientMsgID,
@@ -117,7 +125,7 @@ func (m *msgServer) RevokeMsg(ctx context.Context, req *msg.RevokeMsgReq) (*msg.
Seq: req.Seq,
SesstionType: msgs[0].SessionType,
ConversationID: req.ConversationID,
- IsAdminRevoke: utils.Contain(revokerUserID, config.Config.Manager.UserID...),
+ IsAdminRevoke: flag,
}
var recvID string
if msgs[0].SessionType == constant.SuperGroupChatType {
@@ -128,5 +136,8 @@ func (m *msgServer) RevokeMsg(ctx context.Context, req *msg.RevokeMsgReq) (*msg.
if err := m.notificationSender.NotificationWithSesstionType(ctx, req.UserID, recvID, constant.MsgRevokeNotification, msgs[0].SessionType, &tips); err != nil {
return nil, err
}
+ if err = CallbackAfterRevokeMsg(ctx, req); err != nil {
+ return nil, err
+ }
return &msg.RevokeMsgResp{}, nil
}
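
The `IsAdminRevoke` flag above now falls back to `IMAdmin.UserID` when `Manager.UserID` is empty. That precedence fits in one small helper; this is a sketch with plain string slices, not code from the patch:

```go
package main

import "fmt"

// isAdminRevoker reports whether the revoker should be flagged as an admin.
// Manager IDs take precedence; the IMAdmin list is only consulted when the
// manager list is empty, matching the fallback logic in RevokeMsg.
func isAdminRevoker(revokerID string, managerIDs, imAdminIDs []string) bool {
	ids := managerIDs
	if len(ids) == 0 {
		ids = imAdminIDs
	}
	for _, id := range ids {
		if id == revokerID {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isAdminRevoker("u1", nil, []string{"u1"}))               // true: falls back to IMAdmin
	fmt.Println(isAdminRevoker("u1", []string{"admin"}, []string{"u1"})) // false: manager list wins
}
```
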
diff --git a/internal/rpc/msg/send.go b/internal/rpc/msg/send.go
index dd08292bd..630b74a4a 100644
--- a/internal/rpc/msg/send.go
+++ b/internal/rpc/msg/send.go
@@ -65,6 +65,7 @@ func (m *msgServer) sendMsgSuperGroupChat(
if err = callbackBeforeSendGroupMsg(ctx, req); err != nil {
return nil, err
}
+
if err := callbackMsgModify(ctx, req); err != nil {
return nil, err
}
@@ -167,6 +168,7 @@ func (m *msgServer) sendMsgSingleChat(ctx context.Context, req *pbmsg.SendMsgReq
if err = callbackBeforeSendSingleMsg(ctx, req); err != nil {
return nil, err
}
+
if err := callbackMsgModify(ctx, req); err != nil {
return nil, err
}
diff --git a/internal/rpc/msg/seq.go b/internal/rpc/msg/seq.go
index 4f6a01e8d..dfc2ad0b1 100644
--- a/internal/rpc/msg/seq.go
+++ b/internal/rpc/msg/seq.go
@@ -30,3 +30,27 @@ func (m *msgServer) GetConversationMaxSeq(
}
return &pbmsg.GetConversationMaxSeqResp{MaxSeq: maxSeq}, nil
}
+
+func (m *msgServer) GetMaxSeqs(ctx context.Context, req *pbmsg.GetMaxSeqsReq) (*pbmsg.SeqsInfoResp, error) {
+ maxSeqs, err := m.MsgDatabase.GetMaxSeqs(ctx, req.ConversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ return &pbmsg.SeqsInfoResp{MaxSeqs: maxSeqs}, nil
+}
+
+func (m *msgServer) GetHasReadSeqs(ctx context.Context, req *pbmsg.GetHasReadSeqsReq) (*pbmsg.SeqsInfoResp, error) {
+ hasReadSeqs, err := m.MsgDatabase.GetHasReadSeqs(ctx, req.UserID, req.ConversationIDs)
+ if err != nil {
+ return nil, err
+ }
+ return &pbmsg.SeqsInfoResp{MaxSeqs: hasReadSeqs}, nil
+}
+
+func (m *msgServer) GetMsgByConversationIDs(ctx context.Context, req *pbmsg.GetMsgByConversationIDsReq) (*pbmsg.GetMsgByConversationIDsResp, error) {
+ msgs, err := m.MsgDatabase.FindOneByDocIDs(ctx, req.ConversationIDs, req.MaxSeqs)
+ if err != nil {
+ return nil, err
+ }
+ return &pbmsg.GetMsgByConversationIDsResp{MsgDatas: msgs}, nil
+}
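
`GetMaxSeqs` and `GetHasReadSeqs` both return per-conversation seq maps, so a caller can diff the two to estimate unread counts. A small sketch under the assumption that both maps are keyed by conversation ID:

```go
package main

import "fmt"

// unreadCounts derives a per-conversation unread estimate from the maps
// returned by GetMaxSeqs and GetHasReadSeqs.
func unreadCounts(maxSeqs, hasReadSeqs map[string]int64) map[string]int64 {
	out := make(map[string]int64, len(maxSeqs))
	for conversationID, maxSeq := range maxSeqs {
		unread := maxSeq - hasReadSeqs[conversationID] // a missing key reads as 0
		if unread < 0 {
			unread = 0
		}
		out[conversationID] = unread
	}
	return out
}

func main() {
	maxSeqs := map[string]int64{"conv1": 42, "conv2": 100}
	hasRead := map[string]int64{"conv1": 40}
	fmt.Println(unreadCounts(maxSeqs, hasRead)) // map[conv1:2 conv2:100]
}
```
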
diff --git a/internal/rpc/msg/server.go b/internal/rpc/msg/server.go
index 88be287fd..fe1baa453 100644
--- a/internal/rpc/msg/server.go
+++ b/internal/rpc/msg/server.go
@@ -80,7 +80,10 @@ func Start(client discoveryregistry.SvcDiscoveryRegistry, server *grpc.Server) e
userRpcClient := rpcclient.NewUserRpcClient(client)
groupRpcClient := rpcclient.NewGroupRpcClient(client)
friendRpcClient := rpcclient.NewFriendRpcClient(client)
- msgDatabase := controller.NewCommonMsgDatabase(msgDocModel, cacheModel)
+ msgDatabase, err := controller.NewCommonMsgDatabase(msgDocModel, cacheModel)
+ if err != nil {
+ return err
+ }
s := &msgServer{
Conversation: &conversationClient,
User: &userRpcClient,
diff --git a/internal/rpc/msg/utils.go b/internal/rpc/msg/utils.go
index 115df9946..e45d7b395 100644
--- a/internal/rpc/msg/utils.go
+++ b/internal/rpc/msg/utils.go
@@ -15,12 +15,11 @@
package msg
import (
- "github.com/redis/go-redis/v9"
- "gorm.io/gorm"
-
"github.com/OpenIMSDK/protocol/constant"
"github.com/OpenIMSDK/protocol/sdkws"
"github.com/OpenIMSDK/tools/utils"
+ "github.com/redis/go-redis/v9"
+ "go.mongodb.org/mongo-driver/mongo"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
)
@@ -45,7 +44,7 @@ func isMessageHasReadEnabled(msgData *sdkws.MsgData) bool {
func IsNotFound(err error) bool {
switch utils.Unwrap(err) {
- case redis.Nil, gorm.ErrRecordNotFound:
+ case redis.Nil, mongo.ErrNoDocuments:
return true
default:
return false
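
With storage moving from gorm to the official MongoDB driver, a "not found" result is now `mongo.ErrNoDocuments` (or `redis.Nil` for a cache miss). The project's check goes through `utils.Unwrap`; an equivalent `errors.Is`-based sketch looks like this:

```go
package notfound

import (
	"errors"

	"github.com/redis/go-redis/v9"
	"go.mongodb.org/mongo-driver/mongo"
)

// IsNotFound treats both a Redis cache miss and an empty Mongo query result
// as "record not found", no matter how deeply the error is wrapped.
func IsNotFound(err error) bool {
	return errors.Is(err, redis.Nil) || errors.Is(err, mongo.ErrNoDocuments)
}
```
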
diff --git a/internal/rpc/msg/verify.go b/internal/rpc/msg/verify.go
index 2837cb944..0080b6fdb 100644
--- a/internal/rpc/msg/verify.go
+++ b/internal/rpc/msg/verify.go
@@ -51,7 +51,10 @@ type MessageRevoked struct {
func (m *msgServer) messageVerification(ctx context.Context, data *msg.SendMsgReq) error {
switch data.MsgData.SessionType {
case constant.SingleChatType:
- if utils.IsContain(data.MsgData.SendID, config.Config.Manager.UserID) {
+ if len(config.Config.Manager.UserID) > 0 && utils.IsContain(data.MsgData.SendID, config.Config.Manager.UserID) {
+ return nil
+ }
+ if utils.IsContain(data.MsgData.SendID, config.Config.IMAdmin.UserID) {
return nil
}
if data.MsgData.ContentType <= constant.NotificationEnd &&
@@ -88,7 +91,10 @@ func (m *msgServer) messageVerification(ctx context.Context, data *msg.SendMsgRe
if groupInfo.GroupType == constant.SuperGroup {
return nil
}
- if utils.IsContain(data.MsgData.SendID, config.Config.Manager.UserID) {
+ if len(config.Config.Manager.UserID) > 0 && utils.IsContain(data.MsgData.SendID, config.Config.Manager.UserID) {
+ return nil
+ }
+ if utils.IsContain(data.MsgData.SendID, config.Config.IMAdmin.UserID) {
return nil
}
if data.MsgData.ContentType <= constant.NotificationEnd &&
diff --git a/internal/rpc/third/log.go b/internal/rpc/third/log.go
index aa83f58f7..11c7467b8 100644
--- a/internal/rpc/third/log.go
+++ b/internal/rpc/third/log.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package third
import (
@@ -32,11 +46,11 @@ func genLogID() string {
}
func (t *thirdServer) UploadLogs(ctx context.Context, req *third.UploadLogsReq) (*third.UploadLogsResp, error) {
- var DBlogs []*relationtb.Log
+ var DBlogs []*relationtb.LogModel
userID := ctx.Value(constant.OpUserID).(string)
platform := constant.PlatformID2Name[int(req.Platform)]
for _, fileURL := range req.FileURLs {
- log := relationtb.Log{
+ log := relationtb.LogModel{
Version: req.Version,
SystemType: req.SystemType,
Platform: platform,
@@ -57,7 +71,7 @@ func (t *thirdServer) UploadLogs(ctx context.Context, req *third.UploadLogsReq)
}
}
if log.LogID == "" {
- return nil, errs.ErrData.Wrap("Log id gen error")
+ return nil, errs.ErrData.Wrap("LogModel id gen error")
}
DBlogs = append(DBlogs, &log)
}
@@ -92,8 +106,8 @@ func (t *thirdServer) DeleteLogs(ctx context.Context, req *third.DeleteLogsReq)
return &third.DeleteLogsResp{}, nil
}
-func dbToPbLogInfos(logs []*relationtb.Log) []*third.LogInfo {
- db2pbForLogInfo := func(log *relationtb.Log) *third.LogInfo {
+func dbToPbLogInfos(logs []*relationtb.LogModel) []*third.LogInfo {
+ db2pbForLogInfo := func(log *relationtb.LogModel) *third.LogInfo {
return &third.LogInfo{
Filename: log.FileName,
UserID: log.UserID,
@@ -120,7 +134,7 @@ func (t *thirdServer) SearchLogs(ctx context.Context, req *third.SearchLogsReq)
if req.StartTime > req.EndTime {
return nil, errs.ErrArgs.Wrap("startTime>endTime")
}
- total, logs, err := t.thirdDatabase.SearchLogs(ctx, req.Keyword, time.UnixMilli(req.StartTime), time.UnixMilli(req.EndTime), req.Pagination.PageNumber, req.Pagination.ShowNumber)
+ total, logs, err := t.thirdDatabase.SearchLogs(ctx, req.Keyword, time.UnixMilli(req.StartTime), time.UnixMilli(req.EndTime), req.Pagination)
if err != nil {
return nil, err
}
@@ -128,18 +142,16 @@ func (t *thirdServer) SearchLogs(ctx context.Context, req *third.SearchLogsReq)
for _, log := range logs {
userIDs = append(userIDs, log.UserID)
}
- users, err := t.thirdDatabase.FindUsers(ctx, userIDs)
+ userMap, err := t.userRpcClient.GetUsersInfoMap(ctx, userIDs)
if err != nil {
return nil, err
}
- IDtoName := make(map[string]string)
- for _, user := range users {
- IDtoName[user.UserID] = user.Nickname
- }
for _, pbLog := range pbLogs {
- pbLog.Nickname = IDtoName[pbLog.UserID]
+ if user, ok := userMap[pbLog.UserID]; ok {
+ pbLog.Nickname = user.Nickname
+ }
}
resp.LogsInfos = pbLogs
- resp.Total = total
+ resp.Total = uint32(total)
return &resp, nil
}
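
`SearchLogs` now resolves nicknames through the user RPC's `GetUsersInfoMap` instead of querying the user table directly. The enrichment step itself is just a map lookup; a sketch with stand-in types, since the protobuf structs are not needed to show the idea:

```go
package main

import "fmt"

type logInfo struct {
	UserID   string
	Nickname string
}

type userInfo struct{ Nickname string }

// fillNicknames copies nicknames from an RPC-provided user map onto log
// entries, leaving the nickname empty when the user is unknown.
func fillNicknames(logs []*logInfo, users map[string]*userInfo) {
	for _, l := range logs {
		if u, ok := users[l.UserID]; ok {
			l.Nickname = u.Nickname
		}
	}
}

func main() {
	logs := []*logInfo{{UserID: "u1"}, {UserID: "u2"}}
	fillNicknames(logs, map[string]*userInfo{"u1": {Nickname: "Alice"}})
	fmt.Printf("%q %q\n", logs[0].Nickname, logs[1].Nickname) // "Alice" ""
}
```
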
diff --git a/internal/rpc/third/s3.go b/internal/rpc/third/s3.go
index 984af88e1..3b501d4ad 100644
--- a/internal/rpc/third/s3.go
+++ b/internal/rpc/third/s3.go
@@ -16,9 +16,17 @@ package third
import (
"context"
+ "encoding/base64"
+ "encoding/hex"
+ "encoding/json"
+ "path"
"strconv"
"time"
+ "github.com/google/uuid"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/authverify"
+
"github.com/openimsdk/open-im-server/v3/pkg/common/db/s3"
"github.com/OpenIMSDK/protocol/third"
@@ -64,7 +72,7 @@ func (t *thirdServer) InitiateMultipartUpload(ctx context.Context, req *third.In
Key: haErr.Object.Key,
Size: haErr.Object.Size,
ContentType: req.ContentType,
- Cause: req.Cause,
+ Group: req.Cause,
CreateTime: time.Now(),
}
if err := t.s3dataBase.SetObject(ctx, obj); err != nil {
@@ -143,7 +151,7 @@ func (t *thirdServer) CompleteMultipartUpload(ctx context.Context, req *third.Co
Key: result.Key,
Size: result.Size,
ContentType: req.ContentType,
- Cause: req.Cause,
+ Group: req.Cause,
CreateTime: time.Now(),
}
if err := t.s3dataBase.SetObject(ctx, obj); err != nil {
@@ -179,6 +187,113 @@ func (t *thirdServer) AccessURL(ctx context.Context, req *third.AccessURLReq) (*
}, nil
}
+func (t *thirdServer) InitiateFormData(ctx context.Context, req *third.InitiateFormDataReq) (*third.InitiateFormDataResp, error) {
+ if req.Name == "" {
+ return nil, errs.ErrArgs.Wrap("name is empty")
+ }
+ if req.Size <= 0 {
+ return nil, errs.ErrArgs.Wrap("size must be greater than 0")
+ }
+ if err := checkUploadName(ctx, req.Name); err != nil {
+ return nil, err
+ }
+ var duration time.Duration
+ opUserID := mcontext.GetOpUserID(ctx)
+ var key string
+ if authverify.IsManagerUserID(opUserID) {
+ if req.Millisecond <= 0 {
+ duration = time.Minute * 10
+ } else {
+ duration = time.Millisecond * time.Duration(req.Millisecond)
+ }
+ if req.Absolute {
+ key = req.Name
+ }
+ } else {
+ duration = time.Minute * 10
+ }
+ uid, err := uuid.NewRandom()
+ if err != nil {
+ return nil, err
+ }
+ if key == "" {
+ date := time.Now().Format("20060102")
+ key = path.Join(cont.DirectPath, date, opUserID, hex.EncodeToString(uid[:])+path.Ext(req.Name))
+ }
+ mate := FormDataMate{
+ Name: req.Name,
+ Size: req.Size,
+ ContentType: req.ContentType,
+ Group: req.Group,
+ Key: key,
+ }
+ mateData, err := json.Marshal(&mate)
+ if err != nil {
+ return nil, err
+ }
+ resp, err := t.s3dataBase.FormData(ctx, key, req.Size, req.ContentType, duration)
+ if err != nil {
+ return nil, err
+ }
+ return &third.InitiateFormDataResp{
+ Id: base64.RawStdEncoding.EncodeToString(mateData),
+ Url: resp.URL,
+ File: resp.File,
+ Header: toPbMapArray(resp.Header),
+ FormData: resp.FormData,
+ Expires: resp.Expires.UnixMilli(),
+ SuccessCodes: utils.Slice(resp.SuccessCodes, func(code int) int32 {
+ return int32(code)
+ }),
+ }, nil
+}
+
+func (t *thirdServer) CompleteFormData(ctx context.Context, req *third.CompleteFormDataReq) (*third.CompleteFormDataResp, error) {
+ if req.Id == "" {
+ return nil, errs.ErrArgs.Wrap("id is empty")
+ }
+ data, err := base64.RawStdEncoding.DecodeString(req.Id)
+ if err != nil {
+ return nil, errs.ErrArgs.Wrap("invalid id " + err.Error())
+ }
+ var mate FormDataMate
+ if err := json.Unmarshal(data, &mate); err != nil {
+ return nil, errs.ErrArgs.Wrap("invalid id " + err.Error())
+ }
+ if err := checkUploadName(ctx, mate.Name); err != nil {
+ return nil, err
+ }
+ info, err := t.s3dataBase.StatObject(ctx, mate.Key)
+ if err != nil {
+ return nil, err
+ }
+ if info.Size > 0 && info.Size != mate.Size {
+ return nil, errs.ErrData.Wrap("file size mismatch")
+ }
+ obj := &relation.ObjectModel{
+ Name: mate.Name,
+ UserID: mcontext.GetOpUserID(ctx),
+ Hash: "etag_" + info.ETag,
+ Key: info.Key,
+ Size: info.Size,
+ ContentType: mate.ContentType,
+ Group: mate.Group,
+ CreateTime: time.Now(),
+ }
+ if err := t.s3dataBase.SetObject(ctx, obj); err != nil {
+ return nil, err
+ }
+ return &third.CompleteFormDataResp{Url: t.apiAddress(mate.Name)}, nil
+}
+
func (t *thirdServer) apiAddress(name string) string {
return t.apiURL + name
}
+
+type FormDataMate struct {
+ Name string `json:"name"`
+ Size int64 `json:"size"`
+ ContentType string `json:"contentType"`
+ Group string `json:"group"`
+ Key string `json:"key"`
+}
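
`InitiateFormData` packs the upload metadata into the opaque `Id` by JSON-encoding a `FormDataMate` and base64-encoding the result, and `CompleteFormData` reverses it. A stdlib-only round-trip sketch of that encoding (the struct shape is copied from the patch; the id format is an implementation detail and may change):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// formDataMate mirrors the metadata that InitiateFormData packs into the
// opaque Id and CompleteFormData unpacks again.
type formDataMate struct {
	Name        string `json:"name"`
	Size        int64  `json:"size"`
	ContentType string `json:"contentType"`
	Group       string `json:"group"`
	Key         string `json:"key"`
}

func encodeID(m formDataMate) (string, error) {
	data, err := json.Marshal(&m)
	if err != nil {
		return "", err
	}
	return base64.RawStdEncoding.EncodeToString(data), nil
}

func decodeID(id string) (formDataMate, error) {
	var m formDataMate
	data, err := base64.RawStdEncoding.DecodeString(id)
	if err != nil {
		return m, err
	}
	if err := json.Unmarshal(data, &m); err != nil {
		return m, err
	}
	return m, nil
}

func main() {
	id, _ := encodeID(formDataMate{Name: "a.png", Size: 1024, ContentType: "image/png", Key: "direct/20240101/u1/ab.png"})
	m, _ := decodeID(id)
	fmt.Printf("%s -> %+v\n", id, m)
}
```
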
diff --git a/internal/rpc/third/third.go b/internal/rpc/third/third.go
index eed3d4802..7a63d3526 100644
--- a/internal/rpc/third/third.go
+++ b/internal/rpc/third/third.go
@@ -17,15 +17,17 @@ package third
import (
"context"
"fmt"
-
"net/url"
"time"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/mgo"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/unrelation"
+
"github.com/openimsdk/open-im-server/v3/pkg/common/db/s3"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/s3/cos"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/s3/kodo"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/s3/minio"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/s3/oss"
+
"google.golang.org/grpc"
"github.com/OpenIMSDK/protocol/third"
@@ -34,13 +36,22 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/cache"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/controller"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/relation"
- relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
-
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
)
func Start(client discoveryregistry.SvcDiscoveryRegistry, server *grpc.Server) error {
+ mongo, err := unrelation.NewMongo()
+ if err != nil {
+ return err
+ }
+ logdb, err := mgo.NewLogMongo(mongo.GetDatabase())
+ if err != nil {
+ return err
+ }
+ s3db, err := mgo.NewS3Mongo(mongo.GetDatabase())
+ if err != nil {
+ return err
+ }
apiURL := config.Config.Object.ApiURL
if apiURL == "" {
return fmt.Errorf("api url is empty")
@@ -56,13 +67,6 @@ func Start(client discoveryregistry.SvcDiscoveryRegistry, server *grpc.Server) e
if err != nil {
return err
}
- db, err := relation.NewGormDB()
- if err != nil {
- return err
- }
- if err := db.AutoMigrate(&relationtb.ObjectModel{}); err != nil {
- return err
- }
// Choose the oss implementation according to the config strategy
enable := config.Config.Object.Enable
var o s3.Interface
@@ -73,25 +77,17 @@ func Start(client discoveryregistry.SvcDiscoveryRegistry, server *grpc.Server) e
o, err = cos.NewCos()
case "oss":
o, err = oss.NewOSS()
- case "kodo":
- o, err = kodo.NewKodo()
default:
err = fmt.Errorf("invalid object enable: %s", enable)
}
if err != nil {
return err
}
- //specialerror.AddErrHandler(func(err error) errs.CodeError {
- // if o.IsNotFound(err) {
- // return errs.ErrRecordNotFound
- // }
- // return nil
- //})
third.RegisterThirdServer(server, &thirdServer{
apiURL: apiURL,
- thirdDatabase: controller.NewThirdDatabase(cache.NewMsgCacheModel(rdb), db),
+ thirdDatabase: controller.NewThirdDatabase(cache.NewMsgCacheModel(rdb), logdb),
userRpcClient: rpcclient.NewUserRpcClient(client),
- s3dataBase: controller.NewS3Database(rdb, o, relation.NewObjectInfo(db)),
+ s3dataBase: controller.NewS3Database(rdb, o, s3db),
defaultExpire: time.Hour * 24 * 7,
})
return nil
diff --git a/internal/rpc/third/tool.go b/internal/rpc/third/tool.go
index a65d882dd..a6c16ff9d 100644
--- a/internal/rpc/third/tool.go
+++ b/internal/rpc/third/tool.go
@@ -29,6 +29,9 @@ import (
)
func toPbMapArray(m map[string][]string) []*third.KeyValues {
+ if len(m) == 0 {
+ return nil
+ }
res := make([]*third.KeyValues, 0, len(m))
for key := range m {
res = append(res, &third.KeyValues{
diff --git a/internal/rpc/user/callback.go b/internal/rpc/user/callback.go
index 01de2734d..5276946a4 100644
--- a/internal/rpc/user/callback.go
+++ b/internal/rpc/user/callback.go
@@ -16,6 +16,7 @@ package user
import (
"context"
+
pbuser "github.com/OpenIMSDK/protocol/user"
"github.com/OpenIMSDK/tools/utils"
@@ -43,7 +44,6 @@ func CallbackBeforeUpdateUserInfo(ctx context.Context, req *pbuser.UpdateUserInf
utils.NotNilReplace(&req.UserInfo.Nickname, resp.Nickname)
return nil
}
-
func CallbackAfterUpdateUserInfo(ctx context.Context, req *pbuser.UpdateUserInfoReq) error {
if !config.Config.Callback.CallbackAfterUpdateUserInfo.Enable {
return nil
@@ -60,6 +60,41 @@ func CallbackAfterUpdateUserInfo(ctx context.Context, req *pbuser.UpdateUserInfo
}
return nil
}
+func CallbackBeforeUpdateUserInfoEx(ctx context.Context, req *pbuser.UpdateUserInfoExReq) error {
+ if !config.Config.Callback.CallbackBeforeUpdateUserInfoEx.Enable {
+ return nil
+ }
+ cbReq := &cbapi.CallbackBeforeUpdateUserInfoExReq{
+ CallbackCommand: cbapi.CallbackBeforeUpdateUserInfoExCommand,
+ UserID: req.UserInfo.UserID,
+ FaceURL: req.UserInfo.FaceURL,
+ Nickname: req.UserInfo.Nickname,
+ }
+ resp := &cbapi.CallbackBeforeUpdateUserInfoExResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackBeforeUpdateUserInfoEx); err != nil {
+ return err
+ }
+ utils.NotNilReplace(req.UserInfo.FaceURL, resp.FaceURL)
+ utils.NotNilReplace(req.UserInfo.Ex, resp.Ex)
+ utils.NotNilReplace(req.UserInfo.Nickname, resp.Nickname)
+ return nil
+}
+func CallbackAfterUpdateUserInfoEx(ctx context.Context, req *pbuser.UpdateUserInfoExReq) error {
+ if !config.Config.Callback.CallbackAfterUpdateUserInfoEx.Enable {
+ return nil
+ }
+ cbReq := &cbapi.CallbackAfterUpdateUserInfoExReq{
+ CallbackCommand: cbapi.CallbackAfterUpdateUserInfoExCommand,
+ UserID: req.UserInfo.UserID,
+ FaceURL: req.UserInfo.FaceURL,
+ Nickname: req.UserInfo.Nickname,
+ }
+ resp := &cbapi.CallbackAfterUpdateUserInfoExResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackAfterUpdateUserInfoEx); err != nil {
+ return err
+ }
+ return nil
+}
func CallbackBeforeUserRegister(ctx context.Context, req *pbuser.UserRegisterReq) error {
if !config.Config.Callback.CallbackBeforeUserRegister.Enable {
@@ -91,8 +126,8 @@ func CallbackAfterUserRegister(ctx context.Context, req *pbuser.UserRegisterReq)
Users: req.Users,
}
- resp := &cbapi.CallbackBeforeUserRegisterResp{}
- if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackBeforeUpdateUserInfo); err != nil {
+ resp := &cbapi.CallbackAfterUserRegisterResp{}
+ if err := http.CallBackPostReturn(ctx, config.Config.Callback.CallbackUrl, cbReq, resp, config.Config.Callback.CallbackAfterUpdateUserInfo); err != nil {
return err
}
return nil
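
The `Ex` variants of the update-user callbacks overwrite the pointer-valued request fields only when the callback response actually supplies a value (the `NotNilReplace` calls above). A generic sketch of that "replace only if non-nil" pattern, independent of the tools package:

```go
package main

import "fmt"

// replaceIfNotNil overwrites *dst with *src only when src is non-nil, which
// is how a callback response selectively overrides request fields.
func replaceIfNotNil[T any](dst, src *T) {
	if dst != nil && src != nil {
		*dst = *src
	}
}

func main() {
	nickname := "old name"
	fromCallback := "new name"
	replaceIfNotNil(&nickname, &fromCallback)
	replaceIfNotNil(&nickname, nil) // nil response field: keep the current value
	fmt.Println(nickname)           // new name
}
```
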
diff --git a/internal/rpc/user/user.go b/internal/rpc/user/user.go
index f4164dbf2..6f9e2949f 100644
--- a/internal/rpc/user/user.go
+++ b/internal/rpc/user/user.go
@@ -17,14 +17,22 @@ package user
import (
"context"
"errors"
+ "math/rand"
"strings"
"time"
+ "github.com/OpenIMSDK/tools/pagination"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+
+ "github.com/OpenIMSDK/tools/tx"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/mgo"
+
"github.com/OpenIMSDK/protocol/constant"
"github.com/OpenIMSDK/protocol/sdkws"
"github.com/OpenIMSDK/tools/errs"
"github.com/OpenIMSDK/tools/log"
- "github.com/OpenIMSDK/tools/tx"
"github.com/openimsdk/open-im-server/v3/pkg/authverify"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/unrelation"
@@ -35,7 +43,6 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/convert"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/cache"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/controller"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/relation"
tablerelation "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
@@ -54,11 +61,12 @@ type userServer struct {
RegisterCenter registry.SvcDiscoveryRegistry
}
+func (s *userServer) GetGroupOnlineUser(ctx context.Context, req *pbuser.GetGroupOnlineUserReq) (*pbuser.GetGroupOnlineUserResp, error) {
+ //TODO implement me
+ panic("implement me")
+}
+
func Start(client registry.SvcDiscoveryRegistry, server *grpc.Server) error {
- db, err := relation.NewGormDB()
- if err != nil {
- return err
- }
rdb, err := cache.NewRedis()
if err != nil {
return err
@@ -67,20 +75,20 @@ func Start(client registry.SvcDiscoveryRegistry, server *grpc.Server) error {
if err != nil {
return err
}
- if err := db.AutoMigrate(&tablerelation.UserModel{}); err != nil {
- return err
- }
users := make([]*tablerelation.UserModel, 0)
- if len(config.Config.Manager.UserID) != len(config.Config.Manager.Nickname) {
- return errors.New("len(config.Config.Manager.AppManagerUid) != len(config.Config.Manager.Nickname)")
+ if len(config.Config.IMAdmin.UserID) != len(config.Config.IMAdmin.Nickname) {
+ return errors.New("len(config.Config.IMAdmin.UserID) != len(config.Config.IMAdmin.Nickname)")
}
- for k, v := range config.Config.Manager.UserID {
- users = append(users, &tablerelation.UserModel{UserID: v, Nickname: config.Config.Manager.Nickname[k], AppMangerLevel: constant.AppAdmin})
+ for k, v := range config.Config.IMAdmin.UserID {
+ users = append(users, &tablerelation.UserModel{UserID: v, Nickname: config.Config.IMAdmin.Nickname[k], AppMangerLevel: constant.AppNotificationAdmin})
+ }
+ userDB, err := mgo.NewUserMongo(mongo.GetDatabase())
+ if err != nil {
+ return err
}
- userDB := relation.NewUserGorm(db)
cache := cache.NewUserCacheRedis(rdb, userDB, cache.GetDefaultOpt())
userMongoDB := unrelation.NewUserMongoDriver(mongo.GetDatabase())
- database := controller.NewUserDatabase(userDB, cache, tx.NewGorm(db), userMongoDB)
+ database := controller.NewUserDatabase(userDB, cache, tx.NewMongo(mongo.GetClient()), userMongoDB)
friendRpcClient := rpcclient.NewFriendRpcClient(client)
groupRpcClient := rpcclient.NewGroupRpcClient(client)
msgRpcClient := rpcclient.NewMessageRpcClient(client)
@@ -118,20 +126,51 @@ func (s *userServer) UpdateUserInfo(ctx context.Context, req *pbuser.UpdateUserI
if err := CallbackBeforeUpdateUserInfo(ctx, req); err != nil {
return nil, err
}
- user := convert.UserPb2DB(req.UserInfo)
+ data := convert.UserPb2DBMap(req.UserInfo)
+ if err := s.UpdateByMap(ctx, req.UserInfo.UserID, data); err != nil {
+ return nil, err
+ }
+ _ = s.friendNotificationSender.UserInfoUpdatedNotification(ctx, req.UserInfo.UserID)
+ friends, err := s.friendRpcClient.GetFriendIDs(ctx, req.UserInfo.UserID)
if err != nil {
return nil, err
}
- err = s.Update(ctx, user)
+ if req.UserInfo.Nickname != "" || req.UserInfo.FaceURL != "" {
+ if err := s.groupRpcClient.NotificationUserInfoUpdate(ctx, req.UserInfo.UserID); err != nil {
+ log.ZError(ctx, "NotificationUserInfoUpdate", err)
+ }
+ }
+ for _, friendID := range friends {
+ s.friendNotificationSender.FriendInfoUpdatedNotification(ctx, req.UserInfo.UserID, friendID)
+ }
+ if err := CallbackAfterUpdateUserInfo(ctx, req); err != nil {
+ return nil, err
+ }
+ if err := s.groupRpcClient.NotificationUserInfoUpdate(ctx, req.UserInfo.UserID); err != nil {
+ log.ZError(ctx, "NotificationUserInfoUpdate", err, "userID", req.UserInfo.UserID)
+ }
+ return resp, nil
+}
+func (s *userServer) UpdateUserInfoEx(ctx context.Context, req *pbuser.UpdateUserInfoExReq) (resp *pbuser.UpdateUserInfoExResp, err error) {
+ resp = &pbuser.UpdateUserInfoExResp{}
+ err = authverify.CheckAccessV3(ctx, req.UserInfo.UserID)
if err != nil {
return nil, err
}
+
+ if err = CallbackBeforeUpdateUserInfoEx(ctx, req); err != nil {
+ return nil, err
+ }
+ data := convert.UserPb2DBMapEx(req.UserInfo)
+ if err = s.UpdateByMap(ctx, req.UserInfo.UserID, data); err != nil {
+ return nil, err
+ }
_ = s.friendNotificationSender.UserInfoUpdatedNotification(ctx, req.UserInfo.UserID)
friends, err := s.friendRpcClient.GetFriendIDs(ctx, req.UserInfo.UserID)
if err != nil {
return nil, err
}
- if req.UserInfo.Nickname != "" || req.UserInfo.FaceURL != "" {
+ if req.UserInfo.Nickname != nil || req.UserInfo.FaceURL != nil {
if err := s.groupRpcClient.NotificationUserInfoUpdate(ctx, req.UserInfo.UserID); err != nil {
log.ZError(ctx, "NotificationUserInfoUpdate", err)
}
@@ -139,7 +178,7 @@ func (s *userServer) UpdateUserInfo(ctx context.Context, req *pbuser.UpdateUserI
for _, friendID := range friends {
s.friendNotificationSender.FriendInfoUpdatedNotification(ctx, req.UserInfo.UserID, friendID)
}
- if err := CallbackAfterUpdateUserInfo(ctx, req); err != nil {
+ if err := CallbackAfterUpdateUserInfoEx(ctx, req); err != nil {
return nil, err
}
if err := s.groupRpcClient.NotificationUserInfoUpdate(ctx, req.UserInfo.UserID); err != nil {
@@ -147,13 +186,12 @@ func (s *userServer) UpdateUserInfo(ctx context.Context, req *pbuser.UpdateUserI
}
return resp, nil
}
-
func (s *userServer) SetGlobalRecvMessageOpt(ctx context.Context, req *pbuser.SetGlobalRecvMessageOptReq) (resp *pbuser.SetGlobalRecvMessageOptResp, err error) {
resp = &pbuser.SetGlobalRecvMessageOptResp{}
if _, err := s.FindWithError(ctx, []string{req.UserID}); err != nil {
return nil, err
}
- m := make(map[string]interface{}, 1)
+ m := make(map[string]any, 1)
m["global_recv_msg_opt"] = req.GlobalRecvMsgOpt
if err := s.UpdateByMap(ctx, req.UserID, m); err != nil {
return nil, err
@@ -175,7 +213,7 @@ func (s *userServer) AccountCheck(ctx context.Context, req *pbuser.AccountCheckR
if err != nil {
return nil, err
}
- userIDs := make(map[string]interface{}, 0)
+ userIDs := make(map[string]any, 0)
for _, v := range users {
userIDs[v.UserID] = nil
}
@@ -192,16 +230,21 @@ func (s *userServer) AccountCheck(ctx context.Context, req *pbuser.AccountCheckR
}
func (s *userServer) GetPaginationUsers(ctx context.Context, req *pbuser.GetPaginationUsersReq) (resp *pbuser.GetPaginationUsersResp, err error) {
- var pageNumber, showNumber int32
- if req.Pagination != nil {
- pageNumber = req.Pagination.PageNumber
- showNumber = req.Pagination.ShowNumber
- }
- users, total, err := s.Page(ctx, pageNumber, showNumber)
- if err != nil {
- return nil, err
+ if req.UserID == "" && req.NickName == "" {
+ total, users, err := s.PageFindUser(ctx, constant.IMOrdinaryUser, constant.AppOrdinaryUsers, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+ return &pbuser.GetPaginationUsersResp{Total: int32(total), Users: convert.UsersDB2Pb(users)}, err
+ } else {
+ total, users, err := s.PageFindUserWithKeyword(ctx, constant.IMOrdinaryUser, constant.AppOrdinaryUsers, req.UserID, req.NickName, req.Pagination)
+ if err != nil {
+ return nil, err
+ }
+ return &pbuser.GetPaginationUsersResp{Total: int32(total), Users: convert.UsersDB2Pb(users)}, err
+
}
- return &pbuser.GetPaginationUsersResp{Total: int32(total), Users: convert.UsersDB2Pb(users)}, err
+
}
func (s *userServer) UserRegister(ctx context.Context, req *pbuser.UserRegisterReq) (resp *pbuser.UserRegisterResp, err error) {
@@ -269,11 +312,11 @@ func (s *userServer) GetGlobalRecvMessageOpt(ctx context.Context, req *pbuser.Ge
// GetAllUserID Get user account by page.
func (s *userServer) GetAllUserID(ctx context.Context, req *pbuser.GetAllUserIDReq) (resp *pbuser.GetAllUserIDResp, err error) {
- userIDs, err := s.UserDatabase.GetAllUserID(ctx, req.Pagination.PageNumber, req.Pagination.ShowNumber)
+ total, userIDs, err := s.UserDatabase.GetAllUserID(ctx, req.Pagination)
if err != nil {
return nil, err
}
- return &pbuser.GetAllUserIDResp{UserIDs: userIDs}, nil
+ return &pbuser.GetAllUserIDResp{Total: int32(total), UserIDs: userIDs}, nil
}
// SubscribeOrCancelUsersStatus Subscribe online or cancel online users.
@@ -345,3 +388,309 @@ func (s *userServer) GetSubscribeUsersStatus(ctx context.Context,
}
return &pbuser.GetSubscribeUsersStatusResp{StatusList: onlineStatusList}, nil
}
+
+// ProcessUserCommandAdd user general function add.
+func (s *userServer) ProcessUserCommandAdd(ctx context.Context, req *pbuser.ProcessUserCommandAddReq) (*pbuser.ProcessUserCommandAddResp, error) {
+ err := authverify.CheckAccessV3(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ var value string
+ if req.Value != nil {
+ value = req.Value.Value
+ }
+ var ex string
+ if req.Ex != nil {
+ ex = req.Ex.Value
+ }
+ // Assuming you have a method in s.UserDatabase to add a user command
+ err = s.UserDatabase.AddUserCommand(ctx, req.UserID, req.Type, req.Uuid, value, ex)
+ if err != nil {
+ return nil, err
+ }
+ tips := &sdkws.UserCommandAddTips{
+ FromUserID: req.UserID,
+ ToUserID: req.UserID,
+ }
+ err = s.userNotificationSender.UserCommandAddNotification(ctx, tips)
+ if err != nil {
+ return nil, err
+ }
+ return &pbuser.ProcessUserCommandAddResp{}, nil
+}
+
+// ProcessUserCommandDelete user general function delete.
+func (s *userServer) ProcessUserCommandDelete(ctx context.Context, req *pbuser.ProcessUserCommandDeleteReq) (*pbuser.ProcessUserCommandDeleteResp, error) {
+ err := authverify.CheckAccessV3(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ err = s.UserDatabase.DeleteUserCommand(ctx, req.UserID, req.Type, req.Uuid)
+ if err != nil {
+ return nil, err
+ }
+ tips := &sdkws.UserCommandDeleteTips{
+ FromUserID: req.UserID,
+ ToUserID: req.UserID,
+ }
+ err = s.userNotificationSender.UserCommandDeleteNotification(ctx, tips)
+ if err != nil {
+ return nil, err
+ }
+
+ return &pbuser.ProcessUserCommandDeleteResp{}, nil
+}
+
+// ProcessUserCommandUpdate user general function update.
+func (s *userServer) ProcessUserCommandUpdate(ctx context.Context, req *pbuser.ProcessUserCommandUpdateReq) (*pbuser.ProcessUserCommandUpdateResp, error) {
+ err := authverify.CheckAccessV3(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ val := make(map[string]any)
+
+ // Map fields from req to val
+ if req.Value != nil {
+ val["value"] = req.Value.Value
+ }
+ if req.Ex != nil {
+ val["ex"] = req.Ex.Value
+ }
+
+ // Assuming you have a method in s.UserDatabase to update a user command
+ err = s.UserDatabase.UpdateUserCommand(ctx, req.UserID, req.Type, req.Uuid, val)
+ if err != nil {
+ return nil, err
+ }
+ tips := &sdkws.UserCommandUpdateTips{
+ FromUserID: req.UserID,
+ ToUserID: req.UserID,
+ }
+ err = s.userNotificationSender.UserCommandUpdateNotification(ctx, tips)
+ if err != nil {
+ return nil, err
+ }
+ return &pbuser.ProcessUserCommandUpdateResp{}, nil
+}
+
+func (s *userServer) ProcessUserCommandGet(ctx context.Context, req *pbuser.ProcessUserCommandGetReq) (*pbuser.ProcessUserCommandGetResp, error) {
+
+ err := authverify.CheckAccessV3(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ // Fetch user commands from the database
+ commands, err := s.UserDatabase.GetUserCommands(ctx, req.UserID, req.Type)
+ if err != nil {
+ return nil, err
+ }
+
+ // Initialize commandInfoSlice as an empty slice
+ commandInfoSlice := make([]*pbuser.CommandInfoResp, 0, len(commands))
+
+ for _, command := range commands {
+ // No need to use index since command is already a pointer
+ commandInfoSlice = append(commandInfoSlice, &pbuser.CommandInfoResp{
+ Type: command.Type,
+ Uuid: command.Uuid,
+ Value: command.Value,
+ CreateTime: command.CreateTime,
+ Ex: command.Ex,
+ })
+ }
+
+ // Return the response with the slice
+ return &pbuser.ProcessUserCommandGetResp{CommandResp: commandInfoSlice}, nil
+}
+
+func (s *userServer) ProcessUserCommandGetAll(ctx context.Context, req *pbuser.ProcessUserCommandGetAllReq) (*pbuser.ProcessUserCommandGetAllResp, error) {
+ err := authverify.CheckAccessV3(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+ // Fetch user commands from the database
+ commands, err := s.UserDatabase.GetAllUserCommands(ctx, req.UserID)
+ if err != nil {
+ return nil, err
+ }
+
+ // Initialize commandInfoSlice as an empty slice
+ commandInfoSlice := make([]*pbuser.AllCommandInfoResp, 0, len(commands))
+
+ for _, command := range commands {
+ // No need to use index since command is already a pointer
+ commandInfoSlice = append(commandInfoSlice, &pbuser.AllCommandInfoResp{
+ Type: command.Type,
+ Uuid: command.Uuid,
+ Value: command.Value,
+ CreateTime: command.CreateTime,
+ Ex: command.Ex,
+ })
+ }
+
+ // Return the response with the slice
+ return &pbuser.ProcessUserCommandGetAllResp{CommandResp: commandInfoSlice}, nil
+}
+
+func (s *userServer) AddNotificationAccount(ctx context.Context, req *pbuser.AddNotificationAccountReq) (*pbuser.AddNotificationAccountResp, error) {
+ if err := authverify.CheckIMAdmin(ctx); err != nil {
+ return nil, err
+ }
+
+ if req.UserID == "" {
+ for i := 0; i < 20; i++ {
+ userId := s.genUserID()
+ _, err := s.UserDatabase.FindWithError(ctx, []string{userId})
+ if err == nil {
+ continue
+ }
+ req.UserID = userId
+ break
+ }
+ if req.UserID == "" {
+ return nil, errs.ErrInternalServer.Wrap("gen user id failed")
+ }
+ } else {
+ _, err := s.UserDatabase.FindWithError(ctx, []string{req.UserID})
+ if err == nil {
+ return nil, errs.ErrArgs.Wrap("userID is used")
+ }
+ }
+
+ user := &tablerelation.UserModel{
+ UserID: req.UserID,
+ Nickname: req.NickName,
+ FaceURL: req.FaceURL,
+ CreateTime: time.Now(),
+ AppMangerLevel: constant.AppNotificationAdmin,
+ }
+ if err := s.UserDatabase.Create(ctx, []*tablerelation.UserModel{user}); err != nil {
+ return nil, err
+ }
+
+ return &pbuser.AddNotificationAccountResp{
+ UserID: req.UserID,
+ NickName: req.NickName,
+ FaceURL: req.FaceURL,
+ }, nil
+}
+
+func (s *userServer) UpdateNotificationAccountInfo(ctx context.Context, req *pbuser.UpdateNotificationAccountInfoReq) (*pbuser.UpdateNotificationAccountInfoResp, error) {
+ if err := authverify.CheckIMAdmin(ctx); err != nil {
+ return nil, err
+ }
+
+ if _, err := s.UserDatabase.FindWithError(ctx, []string{req.UserID}); err != nil {
+ return nil, errs.ErrArgs.Wrap()
+ }
+
+ user := map[string]interface{}{}
+
+ if req.NickName != "" {
+ user["nickname"] = req.NickName
+ }
+
+ if req.FaceURL != "" {
+ user["face_url"] = req.FaceURL
+ }
+
+ if err := s.UserDatabase.UpdateByMap(ctx, req.UserID, user); err != nil {
+ return nil, err
+ }
+
+ return &pbuser.UpdateNotificationAccountInfoResp{}, nil
+}
+
+func (s *userServer) SearchNotificationAccount(ctx context.Context, req *pbuser.SearchNotificationAccountReq) (*pbuser.SearchNotificationAccountResp, error) {
+ // Check if user is an admin
+ if err := authverify.CheckIMAdmin(ctx); err != nil {
+ return nil, err
+ }
+
+ var users []*relation.UserModel
+ var err error
+
+ // If a keyword is provided in the request
+ if req.Keyword != "" {
+ // Find users by keyword
+ users, err = s.UserDatabase.Find(ctx, []string{req.Keyword})
+ if err != nil {
+ return nil, err
+ }
+
+ // Convert users to response format
+ resp := s.userModelToResp(users, req.Pagination)
+ if resp.Total != 0 {
+ return resp, nil
+ }
+
+ // Find users by nickname if no users found by keyword
+ users, err = s.UserDatabase.FindByNickname(ctx, req.Keyword)
+ if err != nil {
+ return nil, err
+ }
+ resp = s.userModelToResp(users, req.Pagination)
+ return resp, nil
+ }
+
+ // If no keyword, find users with notification settings
+ users, err = s.UserDatabase.FindNotification(ctx, constant.AppNotificationAdmin)
+ if err != nil {
+ return nil, err
+ }
+
+ resp := s.userModelToResp(users, req.Pagination)
+ return resp, nil
+}
+
+func (s *userServer) GetNotificationAccount(ctx context.Context, req *pbuser.GetNotificationAccountReq) (*pbuser.GetNotificationAccountResp, error) {
+ if req.UserID == "" {
+ return nil, errs.ErrArgs.Wrap("userID is empty")
+ }
+ user, err := s.UserDatabase.GetUserByID(ctx, req.UserID)
+ if err != nil {
+ return nil, errs.ErrUserIDNotFound.Wrap()
+ }
+ if user.AppMangerLevel == constant.AppAdmin || user.AppMangerLevel == constant.AppNotificationAdmin {
+ return &pbuser.GetNotificationAccountResp{}, nil
+ }
+
+ return nil, errs.ErrNoPermission.Wrap("notification messages cannot be sent for this ID")
+}
+
+func (s *userServer) genUserID() string {
+ const l = 10
+ data := make([]byte, l)
+ rand.Read(data)
+ chars := []byte("0123456789")
+ for i := 0; i < len(data); i++ {
+ if i == 0 {
+ data[i] = chars[1:][data[i]%9]
+ } else {
+ data[i] = chars[data[i]%10]
+ }
+ }
+ return string(data)
+}
+
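+// userModelToResp keeps only notification accounts that are not configured IM admins and paginates the result.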
+func (s *userServer) userModelToResp(users []*relation.UserModel, pagination pagination.Pagination) *pbuser.SearchNotificationAccountResp {
+ accounts := make([]*pbuser.NotificationAccountInfo, 0)
+ var total int64
+ for _, v := range users {
+ if v.AppMangerLevel == constant.AppNotificationAdmin && !utils.IsContain(v.UserID, config.Config.IMAdmin.UserID) {
+ temp := &pbuser.NotificationAccountInfo{
+ UserID: v.UserID,
+ FaceURL: v.FaceURL,
+ NickName: v.Nickname,
+ }
+ accounts = append(accounts, temp)
+ total += 1
+ }
+ }
+
+ notificationAccounts := utils.Paginate(accounts, int(pagination.GetPageNumber()), int(pagination.GetShowNumber()))
+
+ return &pbuser.SearchNotificationAccountResp{Total: total, NotificationAccounts: notificationAccounts}
+}
diff --git a/internal/tools/conversation.go b/internal/tools/conversation.go
index 05d963a17..0d0275339 100644
--- a/internal/tools/conversation.go
+++ b/internal/tools/conversation.go
@@ -19,6 +19,8 @@ import (
"math/rand"
"time"
+ "github.com/OpenIMSDK/protocol/sdkws"
+
"github.com/OpenIMSDK/tools/log"
"github.com/OpenIMSDK/tools/mcontext"
"github.com/OpenIMSDK/tools/utils"
@@ -91,7 +93,11 @@ func (c *MsgTool) ConversationsDestructMsgs() {
}
for i := 0; i < count; i++ {
pageNumber := rand.Int63() % maxPage
- conversationIDs, err := c.conversationDatabase.PageConversationIDs(ctx, int32(pageNumber), batchNum)
+ pagination := &sdkws.RequestPagination{
+ PageNumber: int32(pageNumber),
+ ShowNumber: batchNum,
+ }
+ conversationIDs, err := c.conversationDatabase.PageConversationIDs(ctx, pagination)
if err != nil {
log.ZError(ctx, "PageConversationIDs failed", err, "pageNumber", pageNumber)
continue
@@ -133,7 +139,7 @@ func (c *MsgTool) ConversationsDestructMsgs() {
continue
}
if len(seqs) > 0 {
- if err := c.conversationDatabase.UpdateUsersConversationFiled(ctx, []string{conversation.OwnerUserID}, conversation.ConversationID, map[string]interface{}{"latest_msg_destruct_time": now}); err != nil {
+ if err := c.conversationDatabase.UpdateUsersConversationFiled(ctx, []string{conversation.OwnerUserID}, conversation.ConversationID, map[string]any{"latest_msg_destruct_time": now}); err != nil {
log.ZError(ctx, "updateUsersConversationFiled failed", err, "conversationID", conversation.ConversationID, "ownerUserID", conversation.OwnerUserID)
continue
}
diff --git a/internal/tools/cron_task.go b/internal/tools/cron_task.go
index e22504bbb..40e1c0a87 100644
--- a/internal/tools/cron_task.go
+++ b/internal/tools/cron_task.go
@@ -22,17 +22,18 @@ import (
"syscall"
"time"
+ "github.com/OpenIMSDK/tools/errs"
+
"github.com/redis/go-redis/v9"
"github.com/robfig/cron/v3"
- "github.com/OpenIMSDK/tools/log"
-
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/cache"
)
func StartTask() error {
fmt.Println("cron task start, config", config.Config.ChatRecordsClearTime)
+
msgTool, err := InitMsgTool()
if err != nil {
return err
@@ -47,18 +48,16 @@ func StartTask() error {
// register cron tasks
var crontab = cron.New()
- log.ZInfo(context.Background(), "start chatRecordsClearTime cron task", "cron config", config.Config.ChatRecordsClearTime)
+ fmt.Println("start chatRecordsClearTime cron task", "cron config", config.Config.ChatRecordsClearTime)
_, err = crontab.AddFunc(config.Config.ChatRecordsClearTime, cronWrapFunc(rdb, "cron_clear_msg_and_fix_seq", msgTool.AllConversationClearMsgAndFixSeq))
if err != nil {
- log.ZError(context.Background(), "start allConversationClearMsgAndFixSeq cron failed", err)
- panic(err)
+ return errs.Wrap(err)
}
- log.ZInfo(context.Background(), "start msgDestruct cron task", "cron config", config.Config.MsgDestructTime)
+ fmt.Println("start msgDestruct cron task", "cron config", config.Config.MsgDestructTime)
_, err = crontab.AddFunc(config.Config.MsgDestructTime, cronWrapFunc(rdb, "cron_conversations_destruct_msgs", msgTool.ConversationsDestructMsgs))
if err != nil {
- log.ZError(context.Background(), "start conversationsDestructMsgs cron failed", err)
- panic(err)
+ return errs.Wrap(err)
}
// start crontab
diff --git a/internal/tools/cron_task_test.go b/internal/tools/cron_task_test.go
index 1f4f1f5c1..28bc2c945 100644
--- a/internal/tools/cron_task_test.go
+++ b/internal/tools/cron_task_test.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package tools
import (
diff --git a/internal/tools/msg.go b/internal/tools/msg.go
index ad8f5c471..1ec1e03a2 100644
--- a/internal/tools/msg.go
+++ b/internal/tools/msg.go
@@ -19,6 +19,11 @@ import (
"fmt"
"math"
+ "github.com/OpenIMSDK/protocol/sdkws"
+ "github.com/OpenIMSDK/tools/tx"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/mgo"
+
"github.com/redis/go-redis/v9"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
@@ -31,13 +36,11 @@ import (
"github.com/OpenIMSDK/tools/log"
"github.com/OpenIMSDK/tools/mcontext"
"github.com/OpenIMSDK/tools/mw"
- "github.com/OpenIMSDK/tools/tx"
"github.com/OpenIMSDK/tools/utils"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/cache"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/controller"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/relation"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/unrelation"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
@@ -72,33 +75,48 @@ func InitMsgTool() (*MsgTool, error) {
if err != nil {
return nil, err
}
- db, err := relation.NewGormDB()
+ discov, err := kdisc.NewDiscoveryRegister(config.Config.Envs.Discovery)
if err != nil {
return nil, err
}
- discov, err := kdisc.NewDiscoveryRegister(config.Config.Envs.Discovery)
- /*
- discov, err := zookeeper.NewClient(config.Config.Zookeeper.ZkAddr, config.Config.Zookeeper.Schema,
- zookeeper.WithFreq(time.Hour), zookeeper.WithRoundRobin(), zookeeper.WithUserNameAndPassword(config.Config.Zookeeper.Username,
- config.Config.Zookeeper.Password), zookeeper.WithTimeout(10), zookeeper.WithLogger(log.NewZkLogger()))*/
+ discov.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"LoadBalancingPolicy": "%s"}`, "round_robin")))
+ userDB, err := mgo.NewUserMongo(mongo.GetDatabase())
+ if err != nil {
+ return nil, err
+ }
+ msgDatabase, err := controller.InitCommonMsgDatabase(rdb, mongo.GetDatabase())
if err != nil {
return nil, err
}
- discov.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()))
- userDB := relation.NewUserGorm(db)
- msgDatabase := controller.InitCommonMsgDatabase(rdb, mongo.GetDatabase())
userMongoDB := unrelation.NewUserMongoDriver(mongo.GetDatabase())
+ ctxTx := tx.NewMongo(mongo.GetClient())
userDatabase := controller.NewUserDatabase(
userDB,
- cache.NewUserCacheRedis(rdb, relation.NewUserGorm(db), cache.GetDefaultOpt()),
- tx.NewGorm(db),
+ cache.NewUserCacheRedis(rdb, userDB, cache.GetDefaultOpt()),
+ ctxTx,
userMongoDB,
)
- groupDatabase := controller.InitGroupDatabase(db, rdb, mongo.GetDatabase(), nil)
+ groupDB, err := mgo.NewGroupMongo(mongo.GetDatabase())
+ if err != nil {
+ return nil, err
+ }
+ groupMemberDB, err := mgo.NewGroupMember(mongo.GetDatabase())
+ if err != nil {
+ return nil, err
+ }
+ groupRequestDB, err := mgo.NewGroupRequestMgo(mongo.GetDatabase())
+ if err != nil {
+ return nil, err
+ }
+ conversationDB, err := mgo.NewConversationMongo(mongo.GetDatabase())
+ if err != nil {
+ return nil, err
+ }
+ groupDatabase := controller.NewGroupDatabase(rdb, groupDB, groupMemberDB, groupRequestDB, ctxTx, nil)
conversationDatabase := controller.NewConversationDatabase(
- relation.NewConversationGorm(db),
- cache.NewConversationRedis(rdb, cache.GetDefaultOpt(), relation.NewConversationGorm(db)),
- tx.NewGorm(db),
+ conversationDB,
+ cache.NewConversationRedis(rdb, cache.GetDefaultOpt(), conversationDB),
+ ctxTx,
)
msgRpcClient := rpcclient.NewMessageRpcClient(discov)
msgNotificationSender := notification.NewMsgNotificationSender(rpcclient.WithRpcClient(&msgRpcClient))
@@ -144,7 +162,11 @@ func (c *MsgTool) AllConversationClearMsgAndFixSeq() {
}
for i := 0; i < count; i++ {
pageNumber := rand.Int63() % maxPage
- conversationIDs, err := c.conversationDatabase.PageConversationIDs(ctx, int32(pageNumber), batchNum)
+ pagination := &sdkws.RequestPagination{
+ PageNumber: int32(pageNumber),
+ ShowNumber: batchNum,
+ }
+ conversationIDs, err := c.conversationDatabase.PageConversationIDs(ctx, pagination)
if err != nil {
log.ZError(ctx, "PageConversationIDs failed", err, "pageNumber", pageNumber)
continue
diff --git a/internal/tools/msg_doc_convert.go b/internal/tools/msg_doc_convert.go
index 758625be1..b9150c362 100644
--- a/internal/tools/msg_doc_convert.go
+++ b/internal/tools/msg_doc_convert.go
@@ -32,7 +32,7 @@ func (c *MsgTool) convertTools() {
for _, conversationID := range conversationIDs {
conversationIDs = append(conversationIDs, msgprocessor.GetNotificationConversationIDByConversationID(conversationID))
}
- userIDs, err := c.userDatabase.GetAllUserID(ctx, 0, 0)
+ _, userIDs, err := c.userDatabase.GetAllUserID(ctx, nil)
if err != nil {
log.ZError(ctx, "get all user ids failed", err)
return
diff --git a/pkg/apistruct/manage.go b/pkg/apistruct/manage.go
index 1e0ab3214..f9f542835 100644
--- a/pkg/apistruct/manage.go
+++ b/pkg/apistruct/manage.go
@@ -36,7 +36,7 @@ type SendMsg struct {
SenderPlatformID int32 `json:"senderPlatformID"`
// Content is the actual content of the message, required and excluded from Swagger documentation.
- Content map[string]interface{} `json:"content" binding:"required" swaggerignore:"true"`
+ Content map[string]any `json:"content" binding:"required" swaggerignore:"true"`
// ContentType is an integer that represents the type of the content.
ContentType int32 `json:"contentType" binding:"required"`
@@ -64,6 +64,30 @@ type SendMsgReq struct {
SendMsg
}
+type GetConversationListReq struct {
+ // userID uniquely identifies the user.
+ UserID string `protobuf:"bytes,1,opt,name=userID,proto3" json:"userID,omitempty" binding:"required"`
+
+ // ConversationIDs contains a list of unique identifiers for conversations.
+ ConversationIDs []string `protobuf:"bytes,2,rep,name=conversationIDs,proto3" json:"conversationIDs,omitempty"`
+}
+
+type GetConversationListResp struct {
+ // ConversationElems is a map that associates conversation IDs with their respective details.
+ ConversationElems map[string]*ConversationElem `protobuf:"bytes,1,rep,name=conversationElems,proto3" json:"conversationElems,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
+}
+
+type ConversationElem struct {
+ // MaxSeq represents the maximum sequence number within the conversation.
+ MaxSeq int64 `protobuf:"varint,1,opt,name=maxSeq,proto3" json:"maxSeq,omitempty"`
+
+ // UnreadSeq represents the number of unread messages in the conversation.
+ UnreadSeq int64 `protobuf:"varint,2,opt,name=unreadSeq,proto3" json:"unreadSeq,omitempty"`
+
+ // LastSeqTime represents the timestamp of the last sequence in the conversation.
+ LastSeqTime int64 `protobuf:"varint,3,opt,name=LastSeqTime,proto3" json:"LastSeqTime,omitempty"`
+}
+
// BatchSendMsgReq defines the structure for sending a message to multiple recipients.
type BatchSendMsgReq struct {
SendMsg
diff --git a/pkg/apistruct/msg.go b/pkg/apistruct/msg.go
index 12cf253a0..d23db9bf5 100644
--- a/pkg/apistruct/msg.go
+++ b/pkg/apistruct/msg.go
@@ -16,56 +16,56 @@ package apistruct
type PictureBaseInfo struct {
UUID string `mapstructure:"uuid"`
- Type string `mapstructure:"type"`
+ Type string `mapstructure:"type" validate:"required"`
Size int64 `mapstructure:"size"`
- Width int32 `mapstructure:"width"`
- Height int32 `mapstructure:"height"`
- Url string `mapstructure:"url"`
+ Width int32 `mapstructure:"width" validate:"required"`
+ Height int32 `mapstructure:"height" validate:"required"`
+ Url string `mapstructure:"url" validate:"required"`
}
type PictureElem struct {
SourcePath string `mapstructure:"sourcePath"`
- SourcePicture PictureBaseInfo `mapstructure:"sourcePicture"`
- BigPicture PictureBaseInfo `mapstructure:"bigPicture"`
- SnapshotPicture PictureBaseInfo `mapstructure:"snapshotPicture"`
+ SourcePicture PictureBaseInfo `mapstructure:"sourcePicture" validate:"required"`
+ BigPicture PictureBaseInfo `mapstructure:"bigPicture" validate:"required"`
+ SnapshotPicture PictureBaseInfo `mapstructure:"snapshotPicture" validate:"required"`
}
type SoundElem struct {
UUID string `mapstructure:"uuid"`
SoundPath string `mapstructure:"soundPath"`
- SourceURL string `mapstructure:"sourceUrl"`
+ SourceURL string `mapstructure:"sourceUrl" validate:"required"`
DataSize int64 `mapstructure:"dataSize"`
- Duration int64 `mapstructure:"duration"`
+ Duration int64 `mapstructure:"duration" validate:"required,min=1"`
}
type VideoElem struct {
VideoPath string `mapstructure:"videoPath"`
VideoUUID string `mapstructure:"videoUUID"`
- VideoURL string `mapstructure:"videoUrl"`
- VideoType string `mapstructure:"videoType"`
- VideoSize int64 `mapstructure:"videoSize"`
- Duration int64 `mapstructure:"duration"`
+ VideoURL string `mapstructure:"videoUrl" validate:"required"`
+ VideoType string `mapstructure:"videoType" validate:"required"`
+ VideoSize int64 `mapstructure:"videoSize" validate:"required"`
+ Duration int64 `mapstructure:"duration" validate:"required"`
SnapshotPath string `mapstructure:"snapshotPath"`
SnapshotUUID string `mapstructure:"snapshotUUID"`
SnapshotSize int64 `mapstructure:"snapshotSize"`
- SnapshotURL string `mapstructure:"snapshotUrl"`
- SnapshotWidth int32 `mapstructure:"snapshotWidth"`
- SnapshotHeight int32 `mapstructure:"snapshotHeight"`
+ SnapshotURL string `mapstructure:"snapshotUrl" validate:"required"`
+ SnapshotWidth int32 `mapstructure:"snapshotWidth" validate:"required"`
+ SnapshotHeight int32 `mapstructure:"snapshotHeight" validate:"required"`
}
type FileElem struct {
FilePath string `mapstructure:"filePath"`
UUID string `mapstructure:"uuid"`
- SourceURL string `mapstructure:"sourceUrl"`
- FileName string `mapstructure:"fileName"`
- FileSize int64 `mapstructure:"fileSize"`
+ SourceURL string `mapstructure:"sourceUrl" validate:"required"`
+ FileName string `mapstructure:"fileName" validate:"required"`
+ FileSize int64 `mapstructure:"fileSize" validate:"required"`
}
type AtElem struct {
Text string `mapstructure:"text"`
- AtUserList []string `mapstructure:"atUserList"`
+ AtUserList []string `mapstructure:"atUserList" validate:"required,max=1000"`
IsAtSelf bool `mapstructure:"isAtSelf"`
}
type LocationElem struct {
Description string `mapstructure:"description"`
- Longitude float64 `mapstructure:"longitude"`
- Latitude float64 `mapstructure:"latitude"`
+ Longitude float64 `mapstructure:"longitude" validate:"required"`
+ Latitude float64 `mapstructure:"latitude" validate:"required"`
}
type CustomElem struct {
Data string `mapstructure:"data" validate:"required"`
@@ -80,18 +80,19 @@ type TextElem struct {
type RevokeElem struct {
RevokeMsgClientID string `mapstructure:"revokeMsgClientID" validate:"required"`
}
+
type OANotificationElem struct {
- NotificationName string `mapstructure:"notificationName" json:"notificationName" validate:"required"`
- NotificationFaceURL string `mapstructure:"notificationFaceURL" json:"notificationFaceURL"`
- NotificationType int32 `mapstructure:"notificationType" json:"notificationType" validate:"required"`
- Text string `mapstructure:"text" json:"text" validate:"required"`
- Url string `mapstructure:"url" json:"url"`
- MixType int32 `mapstructure:"mixType" json:"mixType"`
- PictureElem PictureElem `mapstructure:"pictureElem" json:"pictureElem"`
- SoundElem SoundElem `mapstructure:"soundElem" json:"soundElem"`
- VideoElem VideoElem `mapstructure:"videoElem" json:"videoElem"`
- FileElem FileElem `mapstructure:"fileElem" json:"fileElem"`
- Ex string `mapstructure:"ex" json:"ex"`
+ NotificationName string `mapstructure:"notificationName" json:"notificationName" validate:"required"`
+ NotificationFaceURL string `mapstructure:"notificationFaceURL" json:"notificationFaceURL"`
+ NotificationType int32 `mapstructure:"notificationType" json:"notificationType" validate:"required"`
+ Text string `mapstructure:"text" json:"text" validate:"required"`
+ Url string `mapstructure:"url" json:"url"`
+ MixType int32 `mapstructure:"mixType" json:"mixType"`
+ PictureElem *PictureElem `mapstructure:"pictureElem" json:"pictureElem"`
+ SoundElem *SoundElem `mapstructure:"soundElem" json:"soundElem"`
+ VideoElem *VideoElem `mapstructure:"videoElem" json:"videoElem"`
+ FileElem *FileElem `mapstructure:"fileElem" json:"fileElem"`
+ Ex string `mapstructure:"ex" json:"ex"`
}
type MessageRevoked struct {
RevokerID string `mapstructure:"revokerID" json:"revokerID" validate:"required"`
diff --git a/pkg/authverify/token.go b/pkg/authverify/token.go
index a8e577fde..b951bf219 100644
--- a/pkg/authverify/token.go
+++ b/pkg/authverify/token.go
@@ -28,14 +28,17 @@ import (
)
func Secret() jwt.Keyfunc {
- return func(token *jwt.Token) (interface{}, error) {
+ return func(token *jwt.Token) (any, error) {
return []byte(config.Config.Secret), nil
}
}
func CheckAccessV3(ctx context.Context, ownerUserID string) (err error) {
opUserID := mcontext.GetOpUserID(ctx)
- if utils.IsContain(opUserID, config.Config.Manager.UserID) {
+ if len(config.Config.Manager.UserID) > 0 && utils.IsContain(opUserID, config.Config.Manager.UserID) {
+ return nil
+ }
+ if utils.IsContain(opUserID, config.Config.IMAdmin.UserID) {
return nil
}
if opUserID == ownerUserID {
@@ -45,22 +48,34 @@ func CheckAccessV3(ctx context.Context, ownerUserID string) (err error) {
}
func IsAppManagerUid(ctx context.Context) bool {
- return utils.IsContain(mcontext.GetOpUserID(ctx), config.Config.Manager.UserID)
+ return (len(config.Config.Manager.UserID) > 0 && utils.IsContain(mcontext.GetOpUserID(ctx), config.Config.Manager.UserID)) || utils.IsContain(mcontext.GetOpUserID(ctx), config.Config.IMAdmin.UserID)
}
func CheckAdmin(ctx context.Context) error {
- if utils.IsContain(mcontext.GetOpUserID(ctx), config.Config.Manager.UserID) {
+ if len(config.Config.Manager.UserID) > 0 && utils.IsContain(mcontext.GetOpUserID(ctx), config.Config.Manager.UserID) {
+ return nil
+ }
+ if utils.IsContain(mcontext.GetOpUserID(ctx), config.Config.IMAdmin.UserID) {
return nil
}
return errs.ErrNoPermission.Wrap(fmt.Sprintf("user %s is not admin userID", mcontext.GetOpUserID(ctx)))
}
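+// CheckIMAdmin returns nil if the operating user is an IM admin or, when any are configured, a manager account; otherwise it returns ErrNoPermission.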
+func CheckIMAdmin(ctx context.Context) error {
+ if utils.IsContain(mcontext.GetOpUserID(ctx), config.Config.IMAdmin.UserID) {
+ return nil
+ }
+ if len(config.Config.Manager.UserID) > 0 && utils.IsContain(mcontext.GetOpUserID(ctx), config.Config.Manager.UserID) {
+ return nil
+ }
+	return errs.ErrNoPermission.Wrap(fmt.Sprintf("user %s is not an IM admin userID", mcontext.GetOpUserID(ctx)))
+}
-func ParseRedisInterfaceToken(redisToken interface{}) (*tokenverify.Claims, error) {
+func ParseRedisInterfaceToken(redisToken any) (*tokenverify.Claims, error) {
return tokenverify.GetClaimFromToken(string(redisToken.([]uint8)), Secret())
}
func IsManagerUserID(opUserID string) bool {
- return utils.IsContain(opUserID, config.Config.Manager.UserID)
+ return (len(config.Config.Manager.UserID) > 0 && utils.IsContain(opUserID, config.Config.Manager.UserID)) || utils.IsContain(opUserID, config.Config.IMAdmin.UserID)
}
func WsVerifyToken(token, userID string, platformID int) error {
diff --git a/pkg/callbackstruct/constant.go b/pkg/callbackstruct/constant.go
index f029e3713..f3bcf1383 100644
--- a/pkg/callbackstruct/constant.go
+++ b/pkg/callbackstruct/constant.go
@@ -1,5 +1,34 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package callbackstruct
+const CallbackBeforeInviteJoinGroupCommand = "callbackBeforeInviteJoinGroupCommand"
+const CallbackAfterJoinGroupCommand = "callbackAfterJoinGroupCommand"
+const CallbackAfterSetGroupInfoCommand = "callbackAfterSetGroupInfoCommand"
+const CallbackBeforeSetGroupInfoCommand = "callbackBeforeSetGroupInfoCommand"
+
+const CallbackAfterRevokeMsgCommand = "callbackBeforeAfterMsgCommand"
+const CallbackBeforeAddBlackCommand = "callbackBeforeAddBlackCommand"
+const CallbackAfterAddFriendCommand = "callbackAfterAddFriendCommand"
+const CallbackBeforeAddFriendAgreeCommand = "callbackBeforeAddFriendAgreeCommand"
+
+const CallbackAfterDeleteFriendCommand = "callbackAfterDeleteFriendCommand"
+const CallbackBeforeImportFriendsCommand = "callbackBeforeImportFriendsCommand"
+const CallbackAfterImportFriendsCommand = "callbackAfterImportFriendsCommand"
+const CallbackAfterRemoveBlackCommand = "callbackAfterRemoveBlackCommand"
+
const (
CallbackQuitGroupCommand = "callbackQuitGroupCommand"
CallbackKillGroupCommand = "callbackKillGroupCommand"
@@ -8,9 +37,11 @@ const (
CallbackGroupMsgReadCommand = "callbackGroupMsgReadCommand"
CallbackMsgModifyCommand = "callbackMsgModifyCommand"
CallbackAfterUpdateUserInfoCommand = "callbackAfterUpdateUserInfoCommand"
+ CallbackAfterUpdateUserInfoExCommand = "callbackAfterUpdateUserInfoExCommand"
+ CallbackBeforeUpdateUserInfoExCommand = "callbackBeforeUpdateUserInfoExCommand"
CallbackBeforeUserRegisterCommand = "callbackBeforeUserRegisterCommand"
CallbackAfterUserRegisterCommand = "callbackAfterUserRegisterCommand"
- CallbackTransferGroupOwnerAfter = "callbackTransferGroupOwnerAfter"
+ CallbackAfterTransferGroupOwner = "callbackAfterTransferGroupOwner"
CallbackBeforeSetFriendRemark = "callbackBeforeSetFriendRemark"
CallbackAfterSetFriendRemark = "callbackAfterSetFriendRemark"
CallbackSingleMsgRead = "callbackSingleMsgRead"
@@ -29,5 +60,6 @@ const (
CallbackBeforeCreateGroupCommand = "callbackBeforeCreateGroupCommand"
CallbackAfterCreateGroupCommand = "callbackAfterCreateGroupCommand"
CallbackBeforeMemberJoinGroupCommand = "callbackBeforeMemberJoinGroupCommand"
- CallbackBeforeSetGroupMemberInfoCommand = "CallbackBeforeSetGroupMemberInfoCommand"
+ CallbackBeforeSetGroupMemberInfoCommand = "callbackBeforeSetGroupMemberInfoCommand"
+ CallbackAfterSetGroupMemberInfoCommand = "callbackAfterSetGroupMemberInfoCommand"
)
diff --git a/pkg/callbackstruct/friend.go b/pkg/callbackstruct/friend.go
index ebbd08b19..3674a34da 100644
--- a/pkg/callbackstruct/friend.go
+++ b/pkg/callbackstruct/friend.go
@@ -19,6 +19,7 @@ type CallbackBeforeAddFriendReq struct {
FromUserID string `json:"fromUserID" `
ToUserID string `json:"toUserID"`
ReqMsg string `json:"reqMsg"`
+ Ex string `json:"ex"`
}
type CallbackBeforeAddFriendResp struct {
@@ -35,6 +36,28 @@ type CallBackAddFriendReplyBeforeResp struct {
CommonCallbackResp
}
+type CallbackBeforeSetFriendRemarkReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID"`
+ FriendUserID string `json:"friendUserID"`
+ Remark string `json:"remark"`
+}
+
+type CallbackBeforeSetFriendRemarkResp struct {
+ CommonCallbackResp
+ Remark string `json:"remark"`
+}
+
+type CallbackAfterSetFriendRemarkReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID"`
+ FriendUserID string `json:"friendUserID"`
+ Remark string `json:"remark"`
+}
+
+type CallbackAfterSetFriendRemarkResp struct {
+ CommonCallbackResp
+}
type CallbackAfterAddFriendReq struct {
CallbackCommand `json:"callbackCommand"`
FromUserID string `json:"fromUserID" `
@@ -45,26 +68,60 @@ type CallbackAfterAddFriendReq struct {
type CallbackAfterAddFriendResp struct {
CommonCallbackResp
}
+type CallbackBeforeAddBlackReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID" `
+ BlackUserID string `json:"blackUserID"`
+}
-type CallbackBeforeSetFriendRemarkReq struct {
+type CallbackBeforeAddBlackResp struct {
+ CommonCallbackResp
+}
+
+type CallbackBeforeAddFriendAgreeReq struct {
CallbackCommand `json:"callbackCommand"`
- OwnerUserID string `json:"ownerUserID"`
- FriendUserID string `json:"friendUserID"`
- Remark string `json:"remark"`
+ FromUserID string `json:"fromUserID" `
+ ToUserID string `json:"blackUserID"`
+ HandleResult int32 `json:"HandleResult"`
+ HandleMsg string `json:"HandleMsg"`
}
-type CallbackBeforeSetFriendRemarkResp struct {
+type CallbackBeforeAddFriendAgreeResp struct {
CommonCallbackResp
- Remark string `json:"remark"`
}
-type CallbackAfterSetFriendRemarkReq struct {
+type CallbackAfterDeleteFriendReq struct {
CallbackCommand `json:"callbackCommand"`
- OwnerUserID string `json:"ownerUserID"`
+ OwnerUserID string `json:"ownerUserID" `
FriendUserID string `json:"friendUserID"`
- Remark string `json:"remark"`
+}
+type CallbackAfterDeleteFriendResp struct {
+ CommonCallbackResp
}
-type CallbackAfterSetFriendRemarkResp struct {
+type CallbackBeforeImportFriendsReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID" `
+ FriendUserIDs []string `json:"friendUserIDs"`
+}
+type CallbackBeforeImportFriendsResp struct {
+ CommonCallbackResp
+ FriendUserIDs []string `json:"friendUserIDs"`
+}
+type CallbackAfterImportFriendsReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID" `
+ FriendUserIDs []string `json:"friendUserIDs"`
+}
+type CallbackAfterImportFriendsResp struct {
+ CommonCallbackResp
+}
+
+type CallbackAfterRemoveBlackReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OwnerUserID string `json:"ownerUserID"`
+ BlackUserID string `json:"blackUserID"`
+}
+type CallbackAfterRemoveBlackResp struct {
CommonCallbackResp
}
diff --git a/pkg/callbackstruct/group.go b/pkg/callbackstruct/group.go
index 79e02ba0f..5968f1e55 100644
--- a/pkg/callbackstruct/group.go
+++ b/pkg/callbackstruct/group.go
@@ -109,108 +109,128 @@ type CallbackAfterSetGroupMemberInfoResp struct {
CommonCallbackResp
}
-type CallbackAfterGroupMemberExitReq struct {
+type CallbackQuitGroupReq struct {
CallbackCommand `json:"callbackCommand"`
GroupID string `json:"groupID"`
UserID string `json:"userID"`
- GroupType *int32 `json:"groupType"`
- ExitType string `json:"exitType"`
}
-type CallbackAfterGroupMemberExitResp struct {
+type CallbackQuitGroupResp struct {
CommonCallbackResp
}
-type CallbackAfterUngroupReq struct {
+type CallbackKillGroupMemberReq struct {
CallbackCommand `json:"callbackCommand"`
GroupID string `json:"groupID"`
- GroupType *int32 `json:"groupType"`
- OwnerID string `json:"ownerID"`
- MemberList []string `json:"memberList"`
+ KickedUserIDs []string `json:"kickedUserIDs"`
+ Reason string `json:"reason"`
}
-type CallbackAfterUngroupResp struct {
+type CallbackKillGroupMemberResp struct {
CommonCallbackResp
}
-type CallbackAfterSetGroupInfoReq struct {
+type CallbackDisMissGroupReq struct {
CallbackCommand `json:"callbackCommand"`
- GroupID string `json:"groupID"`
- GroupType *int32 `json:"groupType"`
- UserID string `json:"userID"`
- Name string `json:"name"`
- Notification string `json:"notification"`
- GroupUrl string `json:"groupUrl"`
+ GroupID string `json:"groupID"`
+ OwnerID string `json:"ownerID"`
+ GroupType string `json:"groupType"`
+ MembersID []string `json:"membersID"`
}
-type CallbackAfterSetGroupInfoResp struct {
+type CallbackDisMissGroupResp struct {
CommonCallbackResp
}
-type CallbackAfterRevokeMsgReq struct {
+type CallbackJoinGroupReq struct {
CallbackCommand `json:"callbackCommand"`
GroupID string `json:"groupID"`
- GroupType *int32 `json:"groupType"`
- UserID string `json:"userID"`
- Content string `json:"content"`
+ GroupType string `json:"groupType"`
+ ApplyID string `json:"applyID"`
+ ReqMessage string `json:"reqMessage"`
+ Ex string `json:"ex"`
}
-type CallbackAfterRevokeMsgResp struct {
+type CallbackJoinGroupResp struct {
CommonCallbackResp
}
-type CallbackQuitGroupReq struct {
+type CallbackTransferGroupOwnerReq struct {
CallbackCommand `json:"callbackCommand"`
GroupID string `json:"groupID"`
- UserID string `json:"userID"`
+ OldOwnerUserID string `json:"oldOwnerUserID"`
+ NewOwnerUserID string `json:"newOwnerUserID"`
}
-type CallbackQuitGroupResp struct {
+type CallbackTransferGroupOwnerResp struct {
CommonCallbackResp
}
-type CallbackKillGroupMemberReq struct {
+type CallbackBeforeInviteUserToGroupReq struct {
CallbackCommand `json:"callbackCommand"`
+ OperationID string `json:"operationID"`
GroupID string `json:"groupID"`
- KickedUserIDs []string `json:"kickedUserIDs"`
Reason string `json:"reason"`
+ InvitedUserIDs []string `json:"invitedUserIDs"`
}
-
-type CallbackKillGroupMemberResp struct {
+type CallbackBeforeInviteUserToGroupResp struct {
CommonCallbackResp
+ RefusedMembersAccount []string `json:"refusedMembersAccount,omitempty"` // Optional field to list members whose invitation is refused.
}
-type CallbackDisMissGroupReq struct {
+type CallbackAfterJoinGroupReq struct {
CallbackCommand `json:"callbackCommand"`
- GroupID string `json:"groupID"`
- OwnerID string `json:"ownerID"`
- GroupType string `json:"groupType"`
- MembersID []string `json:"membersID"`
+ OperationID string `json:"operationID"`
+ GroupID string `json:"groupID"`
+ ReqMessage string `json:"reqMessage"`
+ JoinSource int32 `json:"joinSource"`
+ InviterUserID string `json:"inviterUserID"`
}
-
-type CallbackDisMissGroupResp struct {
+type CallbackAfterJoinGroupResp struct {
CommonCallbackResp
}
-type CallbackJoinGroupReq struct {
- CallbackCommand `json:"callbackCommand"`
- GroupID string `json:"groupID"`
- GroupType string `json:"groupType"`
- ApplyID string `json:"applyID"`
- ReqMessage string `json:"reqMessage"`
+type CallbackBeforeSetGroupInfoReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OperationID string `json:"operationID"`
+ GroupID string `json:"groupID"`
+ GroupName string `json:"groupName"`
+ Notification string `json:"notification"`
+ Introduction string `json:"introduction"`
+ FaceURL string `json:"faceURL"`
+ Ex string `json:"ex"`
+ NeedVerification int32 `json:"needVerification"`
+ LookMemberInfo int32 `json:"lookMemberInfo"`
+ ApplyMemberFriend int32 `json:"applyMemberFriend"`
}
-type CallbackJoinGroupResp struct {
+type CallbackBeforeSetGroupInfoResp struct {
CommonCallbackResp
+	GroupID           string  `json:"groupID"`
+ GroupName string `json:"groupName"`
+ Notification string `json:"notification"`
+ Introduction string `json:"introduction"`
+ FaceURL string `json:"faceURL"`
+ Ex *string `json:"ex"`
+ NeedVerification *int32 `json:"needVerification"`
+ LookMemberInfo *int32 `json:"lookMemberInfo"`
+ ApplyMemberFriend *int32 `json:"applyMemberFriend"`
}
-type CallbackTransferGroupOwnerReq struct {
- CallbackCommand `json:"callbackCommand"`
- GroupID string `json:"groupID"`
- OldOwnerUserID string `json:"oldOwnerUserID"`
- NewOwnerUserID string `json:"newOwnerUserID"`
+type CallbackAfterSetGroupInfoReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ OperationID string `json:"operationID"`
+ GroupID string `json:"groupID"`
+ GroupName string `json:"groupName"`
+ Notification string `json:"notification"`
+ Introduction string `json:"introduction"`
+ FaceURL string `json:"faceURL"`
+ Ex *string `json:"ex"`
+ NeedVerification *int32 `json:"needVerification"`
+ LookMemberInfo *int32 `json:"lookMemberInfo"`
+ ApplyMemberFriend *int32 `json:"applyMemberFriend"`
}
-type CallbackTransferGroupOwnerResp struct {
+type CallbackAfterSetGroupInfoResp struct {
CommonCallbackResp
}
diff --git a/pkg/callbackstruct/message.go b/pkg/callbackstruct/message.go
index 3adee618b..2864e28b1 100644
--- a/pkg/callbackstruct/message.go
+++ b/pkg/callbackstruct/message.go
@@ -80,26 +80,6 @@ type CallbackMsgModifyCommandResp struct {
Ex *string `json:"ex"`
}
-type CallbackSendGroupMsgErrorReq struct {
- CommonCallbackReq
- GroupID string `json:"groupID"`
-}
-
-type CallbackSendGroupMsgErrorResp struct {
- CommonCallbackResp
-}
-
-type CallbackSingleMsgRevokeReq struct {
- CallbackCommand `json:"callbackCommand"`
- SendID string `json:"sendID"`
- ReceiveID string `json:"receiveID"`
- Content string `json:"content"`
-}
-
-type CallbackSingleMsgRevokeResp struct {
- CommonCallbackResp
-}
-
type CallbackGroupMsgReadReq struct {
CallbackCommand `json:"callbackCommand"`
SendID string `json:"sendID"`
@@ -114,9 +94,10 @@ type CallbackGroupMsgReadResp struct {
type CallbackSingleMsgReadReq struct {
CallbackCommand `json:"callbackCommand"`
- SendID string `json:"sendID"`
- ReceiveID string `json:"receiveID"`
- ContentType int64 `json:"contentType"`
+ ConversationID string `json:"conversationID"`
+ UserID string `json:"userID"`
+ Seqs []int64 `json:"Seqs"`
+ ContentType int32 `json:"contentType"`
}
type CallbackSingleMsgReadResp struct {
diff --git a/pkg/callbackstruct/revoke.go b/pkg/callbackstruct/revoke.go
new file mode 100644
index 000000000..1f5e0b0c1
--- /dev/null
+++ b/pkg/callbackstruct/revoke.go
@@ -0,0 +1,25 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package callbackstruct
+
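+// CallbackAfterRevokeMsgReq is the payload delivered to the after-revoke-message callback.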
+type CallbackAfterRevokeMsgReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ ConversationID string `json:"conversationID"`
+ Seq int64 `json:"seq"`
+ UserID string `json:"userID"`
+}
+type CallbackAfterRevokeMsgResp struct {
+ CommonCallbackResp
+}
diff --git a/pkg/callbackstruct/user.go b/pkg/callbackstruct/user.go
index f35cff554..98536882d 100644
--- a/pkg/callbackstruct/user.go
+++ b/pkg/callbackstruct/user.go
@@ -14,7 +14,10 @@
package callbackstruct
-import "github.com/OpenIMSDK/protocol/sdkws"
+import (
+ "github.com/OpenIMSDK/protocol/sdkws"
+ "github.com/OpenIMSDK/protocol/wrapperspb"
+)
type CallbackBeforeUpdateUserInfoReq struct {
CallbackCommand `json:"callbackCommand"`
@@ -41,6 +44,31 @@ type CallbackAfterUpdateUserInfoResp struct {
CommonCallbackResp
}
+type CallbackBeforeUpdateUserInfoExReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ UserID string `json:"userID"`
+ Nickname *wrapperspb.StringValue `json:"nickName"`
+ FaceURL *wrapperspb.StringValue `json:"faceURL"`
+ Ex *wrapperspb.StringValue `json:"ex"`
+}
+type CallbackBeforeUpdateUserInfoExResp struct {
+ CommonCallbackResp
+ Nickname *wrapperspb.StringValue `json:"nickName"`
+ FaceURL *wrapperspb.StringValue `json:"faceURL"`
+ Ex *wrapperspb.StringValue `json:"ex"`
+}
+
+type CallbackAfterUpdateUserInfoExReq struct {
+ CallbackCommand `json:"callbackCommand"`
+ UserID string `json:"userID"`
+ Nickname *wrapperspb.StringValue `json:"nickName"`
+ FaceURL *wrapperspb.StringValue `json:"faceURL"`
+ Ex *wrapperspb.StringValue `json:"ex"`
+}
+type CallbackAfterUpdateUserInfoExResp struct {
+ CommonCallbackResp
+}
+
type CallbackBeforeUserRegisterReq struct {
CallbackCommand `json:"callbackCommand"`
Secret string `json:"secret"`
diff --git a/pkg/common/cmd/constant.go b/pkg/common/cmd/constant.go
index 835593bbe..c332ce3a6 100644
--- a/pkg/common/cmd/constant.go
+++ b/pkg/common/cmd/constant.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package cmd
const (
diff --git a/pkg/common/cmd/root.go b/pkg/common/cmd/root.go
index 0bc308e07..98ca8f892 100644
--- a/pkg/common/cmd/root.go
+++ b/pkg/common/cmd/root.go
@@ -20,7 +20,6 @@ import (
config2 "github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/spf13/cobra"
- _ "go.uber.org/automaxprocs"
"github.com/OpenIMSDK/protocol/constant"
"github.com/OpenIMSDK/tools/log"
@@ -45,7 +44,7 @@ type CmdOpts struct {
func WithCronTaskLogName() func(*CmdOpts) {
return func(opts *CmdOpts) {
- opts.loggerPrefixName = "OpenIM.CronTask.log.all"
+ opts.loggerPrefixName = "openim.crontask.log.all"
}
}
diff --git a/pkg/common/cmd/rpc.go b/pkg/common/cmd/rpc.go
index 6266c03b2..ea2a00b07 100644
--- a/pkg/common/cmd/rpc.go
+++ b/pkg/common/cmd/rpc.go
@@ -46,15 +46,13 @@ func (a *RpcCmd) Exec() error {
return a.Execute()
}
-func (a *RpcCmd) StartSvr(
- name string,
- rpcFn func(discov discoveryregistry.SvcDiscoveryRegistry, server *grpc.Server) error,
-) error {
+func (a *RpcCmd) StartSvr(name string, rpcFn func(discov discoveryregistry.SvcDiscoveryRegistry, server *grpc.Server) error) error {
if a.GetPortFlag() == 0 {
return errors.New("port is required")
}
return startrpc.Start(a.GetPortFlag(), name, a.GetPrometheusPortFlag(), rpcFn)
}
+
func (a *RpcCmd) GetPortFromConfig(portType string) int {
switch a.Name {
case RpcPushServer:
diff --git a/pkg/common/config/config.go b/pkg/common/config/config.go
index d8bee6af8..9696e9367 100644
--- a/pkg/common/config/config.go
+++ b/pkg/common/config/config.go
@@ -45,6 +45,18 @@ type POfflinePush struct {
Ext string `yaml:"ext"`
}
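+// MYSQL holds the MySQL connection and logging settings read from the configuration file.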
+type MYSQL struct {
+ Address []string `yaml:"address"`
+ Username string `yaml:"username"`
+ Password string `yaml:"password"`
+ Database string `yaml:"database"`
+ MaxOpenConn int `yaml:"maxOpenConn"`
+ MaxIdleConn int `yaml:"maxIdleConn"`
+ MaxLifeTime int `yaml:"maxLifeTime"`
+ LogLevel int `yaml:"logLevel"`
+ SlowThreshold int `yaml:"slowThreshold"`
+}
+
type configStruct struct {
Envs struct {
Discovery string `yaml:"discovery"`
@@ -56,17 +68,7 @@ type configStruct struct {
Password string `yaml:"password"`
} `yaml:"zookeeper"`
- Mysql struct {
- Address []string `yaml:"address"`
- Username string `yaml:"username"`
- Password string `yaml:"password"`
- Database string `yaml:"database"`
- MaxOpenConn int `yaml:"maxOpenConn"`
- MaxIdleConn int `yaml:"maxIdleConn"`
- MaxLifeTime int `yaml:"maxLifeTime"`
- LogLevel int `yaml:"logLevel"`
- SlowThreshold int `yaml:"slowThreshold"`
- } `yaml:"mysql"`
+ Mysql *MYSQL `yaml:"mysql"`
Mongo struct {
Uri string `yaml:"uri"`
@@ -234,6 +236,11 @@ type configStruct struct {
Nickname []string `yaml:"nickname"`
} `yaml:"manager"`
+ IMAdmin struct {
+ UserID []string `yaml:"userID"`
+ Nickname []string `yaml:"nickname"`
+ } `yaml:"im-admin"`
+
MultiLoginPolicy int `yaml:"multiLoginPolicy"`
ChatPersistenceMysql bool `yaml:"chatPersistenceMysql"`
MsgCacheTimeout int `yaml:"msgCacheTimeout"`
@@ -275,6 +282,8 @@ type configStruct struct {
CallbackBeforeSetFriendRemark CallBackConfig `yaml:"callbackBeforeSetFriendRemark"`
CallbackAfterSetFriendRemark CallBackConfig `yaml:"callbackAfterSetFriendRemark"`
CallbackBeforeUpdateUserInfo CallBackConfig `yaml:"beforeUpdateUserInfo"`
+ CallbackBeforeUpdateUserInfoEx CallBackConfig `yaml:"beforeUpdateUserInfoEx"`
+ CallbackAfterUpdateUserInfoEx CallBackConfig `yaml:"afterUpdateUserInfoEx"`
CallbackBeforeUserRegister CallBackConfig `yaml:"beforeUserRegister"`
CallbackAfterUpdateUserInfo CallBackConfig `yaml:"updateUserInfo"`
CallbackAfterUserRegister CallBackConfig `yaml:"afterUserRegister"`
@@ -282,16 +291,30 @@ type configStruct struct {
CallbackAfterCreateGroup CallBackConfig `yaml:"afterCreateGroup"`
CallbackBeforeMemberJoinGroup CallBackConfig `yaml:"beforeMemberJoinGroup"`
CallbackBeforeSetGroupMemberInfo CallBackConfig `yaml:"beforeSetGroupMemberInfo"`
+ CallbackAfterSetGroupMemberInfo CallBackConfig `yaml:"afterSetGroupMemberInfo"`
CallbackQuitGroup CallBackConfig `yaml:"quitGroup"`
CallbackKillGroupMember CallBackConfig `yaml:"killGroupMember"`
CallbackDismissGroup CallBackConfig `yaml:"dismissGroup"`
CallbackBeforeJoinGroup CallBackConfig `yaml:"joinGroup"`
- CallbackTransferGroupOwnerAfter CallBackConfig `yaml:"transferGroupOwner"`
+ CallbackAfterTransferGroupOwner CallBackConfig `yaml:"transferGroupOwner"`
+ CallbackBeforeInviteUserToGroup CallBackConfig `yaml:"beforeInviteUserToGroup"`
+ CallbackAfterJoinGroup CallBackConfig `yaml:"joinGroupAfter"`
+ CallbackAfterSetGroupInfo CallBackConfig `yaml:"setGroupInfoAfter"`
+ CallbackBeforeSetGroupInfo CallBackConfig `yaml:"setGroupInfoBefore"`
+ CallbackAfterRevokeMsg CallBackConfig `yaml:"revokeMsgAfter"`
+ CallbackBeforeAddBlack CallBackConfig `yaml:"addBlackBefore"`
+ CallbackAfterAddFriend CallBackConfig `yaml:"addFriendAfter"`
+ CallbackBeforeAddFriendAgree CallBackConfig `yaml:"addFriendAgreeBefore"`
+
+ CallbackAfterDeleteFriend CallBackConfig `yaml:"deleteFriendAfter"`
+ CallbackBeforeImportFriends CallBackConfig `yaml:"importFriendsBefore"`
+ CallbackAfterImportFriends CallBackConfig `yaml:"importFriendsAfter"`
+ CallbackAfterRemoveBlack CallBackConfig `yaml:"removeBlackAfter"`
} `yaml:"callback"`
Prometheus struct {
Enable bool `yaml:"enable"`
- PrometheusUrl string `yaml:"prometheusUrl"`
+ GrafanaUrl string `yaml:"grafanaUrl"`
ApiPrometheusPort []int `yaml:"apiPrometheusPort"`
UserPrometheusPort []int `yaml:"userPrometheusPort"`
FriendPrometheusPort []int `yaml:"friendPrometheusPort"`
diff --git a/pkg/common/config/parse.go b/pkg/common/config/parse.go
index f2ea962ee..64719d6a1 100644
--- a/pkg/common/config/parse.go
+++ b/pkg/common/config/parse.go
@@ -24,6 +24,7 @@ import (
"gopkg.in/yaml.v3"
"github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
+ "github.com/openimsdk/open-im-server/v3/pkg/util/genutil"
)
//go:embed version
@@ -35,21 +36,32 @@ const (
DefaultFolderPath = "../config/"
)
-// return absolude path join ../config/, this is k8s container config path
+// GetDefaultConfigPath returns the absolute path joined with ../config/; this is the k8s container config path.
func GetDefaultConfigPath() string {
- b, err := filepath.Abs(os.Args[0])
+ executablePath, err := os.Executable()
if err != nil {
- fmt.Println("filepath.Abs error,err=", err)
+ fmt.Println("GetDefaultConfigPath error:", err.Error())
return ""
}
- return filepath.Join(filepath.Dir(b), "../config/")
+
+ configPath, err := genutil.OutDir(filepath.Join(filepath.Dir(executablePath), "../config/"))
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "failed to get output directory: %v\n", err)
+ os.Exit(1)
+ }
+ return configPath
}
-// getProjectRoot returns the absolute path of the project root directory
+// GetProjectRoot returns the absolute path of the project root directory.
func GetProjectRoot() string {
- b, _ := filepath.Abs(os.Args[0])
+ executablePath, _ := os.Executable()
- return filepath.Join(filepath.Dir(b), "../../../../..")
+ projectRoot, err := genutil.OutDir(filepath.Join(filepath.Dir(executablePath), "../../../../.."))
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "failed to get output directory: %v\n", err)
+ os.Exit(1)
+ }
+ return projectRoot
}
func GetOptionsByNotification(cfg NotificationConf) msgprocessor.Options {
@@ -71,7 +83,7 @@ func GetOptionsByNotification(cfg NotificationConf) msgprocessor.Options {
return opts
}
-func initConfig(config interface{}, configName, configFolderPath string) error {
+func initConfig(config any, configName, configFolderPath string) error {
configFolderPath = filepath.Join(configFolderPath, configName)
_, err := os.Stat(configFolderPath)
if err != nil {
diff --git a/pkg/common/config/parse_test.go b/pkg/common/config/parse_test.go
index e34aa5b7f..38171ec08 100644
--- a/pkg/common/config/parse_test.go
+++ b/pkg/common/config/parse_test.go
@@ -76,7 +76,7 @@ func TestGetOptionsByNotification(t *testing.T) {
func Test_initConfig(t *testing.T) {
type args struct {
- config interface{}
+ config any
configName string
configFolderPath string
}
diff --git a/pkg/common/config/version b/pkg/common/config/version
index e682ea429..d5c0c9914 100644
--- a/pkg/common/config/version
+++ b/pkg/common/config/version
@@ -1 +1 @@
-v3.3.0
\ No newline at end of file
+3.5.1
diff --git a/pkg/common/convert/friend.go b/pkg/common/convert/friend.go
index 7003c8aa6..27bd595ad 100644
--- a/pkg/common/convert/friend.go
+++ b/pkg/common/convert/friend.go
@@ -16,6 +16,7 @@ package convert
import (
"context"
+ "fmt"
"github.com/OpenIMSDK/protocol/sdkws"
"github.com/OpenIMSDK/tools/utils"
@@ -31,23 +32,22 @@ func FriendPb2DB(friend *sdkws.FriendInfo) *relation.FriendModel {
return dbFriend
}
-func FriendDB2Pb(
- ctx context.Context,
- friendDB *relation.FriendModel,
+func FriendDB2Pb(ctx context.Context, friendDB *relation.FriendModel,
getUsers func(ctx context.Context, userIDs []string) (map[string]*sdkws.UserInfo, error),
) (*sdkws.FriendInfo, error) {
- pbfriend := &sdkws.FriendInfo{FriendUser: &sdkws.UserInfo{}}
- utils.CopyStructFields(pbfriend, friendDB)
users, err := getUsers(ctx, []string{friendDB.FriendUserID})
if err != nil {
return nil, err
}
- pbfriend.FriendUser.UserID = users[friendDB.FriendUserID].UserID
- pbfriend.FriendUser.Nickname = users[friendDB.FriendUserID].Nickname
- pbfriend.FriendUser.FaceURL = users[friendDB.FriendUserID].FaceURL
- pbfriend.FriendUser.Ex = users[friendDB.FriendUserID].Ex
- pbfriend.CreateTime = friendDB.CreateTime.Unix()
- return pbfriend, nil
+ user, ok := users[friendDB.FriendUserID]
+ if !ok {
+ return nil, fmt.Errorf("user not found: %s", friendDB.FriendUserID)
+ }
+
+ return &sdkws.FriendInfo{
+ FriendUser: user,
+ CreateTime: friendDB.CreateTime.Unix(),
+ }, nil
}
func FriendsDB2Pb(
@@ -62,6 +62,7 @@ func FriendsDB2Pb(
for _, friendDB := range friendsDB {
userID = append(userID, friendDB.FriendUserID)
}
+
users, err := getUsers(ctx, userID)
if err != nil {
return nil, err
@@ -74,6 +75,7 @@ func FriendsDB2Pb(
friendPb.FriendUser.FaceURL = users[friend.FriendUserID].FaceURL
friendPb.FriendUser.Ex = users[friend.FriendUserID].Ex
friendPb.CreateTime = friend.CreateTime.Unix()
+ friendPb.IsPinned = friend.IsPinned
friendsPb = append(friendsPb, friendPb)
}
return friendsPb, nil
@@ -118,3 +120,37 @@ func FriendRequestDB2Pb(
}
return res, nil
}
+
+// FriendPb2DBMap converts a FriendInfo protobuf object to a map suitable for database operations.
+// It only includes non-zero or non-empty fields in the map.
+func FriendPb2DBMap(friend *sdkws.FriendInfo) map[string]any {
+ if friend == nil {
+ return nil
+ }
+
+ val := make(map[string]any)
+
+ // Assuming FriendInfo has similar fields to those in FriendModel.
+ // Add or remove fields based on your actual FriendInfo and FriendModel structures.
+ if friend.FriendUser != nil {
+ if friend.FriendUser.UserID != "" {
+ val["friend_user_id"] = friend.FriendUser.UserID
+ }
+ if friend.FriendUser.Nickname != "" {
+ val["nickname"] = friend.FriendUser.Nickname
+ }
+ if friend.FriendUser.FaceURL != "" {
+ val["face_url"] = friend.FriendUser.FaceURL
+ }
+ if friend.FriendUser.Ex != "" {
+ val["ex"] = friend.FriendUser.Ex
+ }
+ }
+ if friend.CreateTime != 0 {
+		val["create_time"] = friend.CreateTime // Unix timestamp; convert to time.Time if the column stores a datetime.
+ }
+
+ // Include other fields from FriendInfo as needed, similar to the above pattern.
+
+ return val
+}
diff --git a/pkg/common/convert/user.go b/pkg/common/convert/user.go
index 4ca1899be..62f80e458 100644
--- a/pkg/common/convert/user.go
+++ b/pkg/common/convert/user.go
@@ -15,33 +15,82 @@
package convert
import (
+ "time"
+
"github.com/OpenIMSDK/protocol/sdkws"
relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
)
-func UsersDB2Pb(users []*relationtb.UserModel) (result []*sdkws.UserInfo) {
+func UsersDB2Pb(users []*relationtb.UserModel) []*sdkws.UserInfo {
+ result := make([]*sdkws.UserInfo, 0, len(users))
for _, user := range users {
- var userPb sdkws.UserInfo
- userPb.UserID = user.UserID
- userPb.Nickname = user.Nickname
- userPb.FaceURL = user.FaceURL
- userPb.Ex = user.Ex
- userPb.CreateTime = user.CreateTime.UnixMilli()
- userPb.AppMangerLevel = user.AppMangerLevel
- userPb.GlobalRecvMsgOpt = user.GlobalRecvMsgOpt
- result = append(result, &userPb)
+ userPb := &sdkws.UserInfo{
+ UserID: user.UserID,
+ Nickname: user.Nickname,
+ FaceURL: user.FaceURL,
+ Ex: user.Ex,
+ CreateTime: user.CreateTime.UnixMilli(),
+ AppMangerLevel: user.AppMangerLevel,
+ GlobalRecvMsgOpt: user.GlobalRecvMsgOpt,
+ }
+ result = append(result, userPb)
}
return result
}
func UserPb2DB(user *sdkws.UserInfo) *relationtb.UserModel {
- var userDB relationtb.UserModel
- userDB.UserID = user.UserID
- userDB.Nickname = user.Nickname
- userDB.FaceURL = user.FaceURL
- userDB.Ex = user.Ex
- userDB.AppMangerLevel = user.AppMangerLevel
- userDB.GlobalRecvMsgOpt = user.GlobalRecvMsgOpt
- return &userDB
+ return &relationtb.UserModel{
+ UserID: user.UserID,
+ Nickname: user.Nickname,
+ FaceURL: user.FaceURL,
+ Ex: user.Ex,
+ CreateTime: time.UnixMilli(user.CreateTime),
+ AppMangerLevel: user.AppMangerLevel,
+ GlobalRecvMsgOpt: user.GlobalRecvMsgOpt,
+ }
+}
+
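+// UserPb2DBMap builds a column map for partial user updates, including only non-empty string fields and non-zero int32 fields.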
+func UserPb2DBMap(user *sdkws.UserInfo) map[string]any {
+ if user == nil {
+ return nil
+ }
+ val := make(map[string]any)
+ fields := map[string]any{
+ "nickname": user.Nickname,
+ "face_url": user.FaceURL,
+ "ex": user.Ex,
+ "app_manager_level": user.AppMangerLevel,
+ "global_recv_msg_opt": user.GlobalRecvMsgOpt,
+ }
+ for key, value := range fields {
+ if v, ok := value.(string); ok && v != "" {
+ val[key] = v
+ } else if v, ok := value.(int32); ok && v != 0 {
+ val[key] = v
+ }
+ }
+ return val
+}
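+
+// UserPb2DBMapEx maps only the wrapper-typed fields that are explicitly set in UserInfoWithEx to their database columns.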
+func UserPb2DBMapEx(user *sdkws.UserInfoWithEx) map[string]any {
+ if user == nil {
+ return nil
+ }
+ val := make(map[string]any)
+
+ // Map fields from UserInfoWithEx to val
+ if user.Nickname != nil {
+ val["nickname"] = user.Nickname.Value
+ }
+ if user.FaceURL != nil {
+ val["face_url"] = user.FaceURL.Value
+ }
+ if user.Ex != nil {
+ val["ex"] = user.Ex.Value
+ }
+ if user.GlobalRecvMsgOpt != nil {
+ val["global_recv_msg_opt"] = user.GlobalRecvMsgOpt.Value
+ }
+
+ return val
}
diff --git a/pkg/common/convert/user_test.go b/pkg/common/convert/user_test.go
new file mode 100644
index 000000000..a24efb53c
--- /dev/null
+++ b/pkg/common/convert/user_test.go
@@ -0,0 +1,87 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package convert
+
+import (
+ "reflect"
+ "testing"
+
+ "github.com/OpenIMSDK/protocol/sdkws"
+
+ relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+)
+
+func TestUsersDB2Pb(t *testing.T) {
+ type args struct {
+ users []*relationtb.UserModel
+ }
+ tests := []struct {
+ name string
+ args args
+ wantResult []*sdkws.UserInfo
+ }{
+ // TODO: Add test cases.
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ if gotResult := UsersDB2Pb(tt.args.users); !reflect.DeepEqual(gotResult, tt.wantResult) {
+ t.Errorf("UsersDB2Pb() = %v, want %v", gotResult, tt.wantResult)
+ }
+ })
+ }
+}
+
+func TestUserPb2DB(t *testing.T) {
+ type args struct {
+ user *sdkws.UserInfo
+ }
+ tests := []struct {
+ name string
+ args args
+ want *relationtb.UserModel
+ }{
+ // TODO: Add test cases.
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ if got := UserPb2DB(tt.args.user); !reflect.DeepEqual(got, tt.want) {
+ t.Errorf("UserPb2DB() = %v, want %v", got, tt.want)
+ }
+ })
+ }
+}
+
+func TestUserPb2DBMap(t *testing.T) {
+ user := &sdkws.UserInfo{
+ Nickname: "TestUser",
+ FaceURL: "http://openim.io/logo.jpg",
+ Ex: "Extra Data",
+ AppMangerLevel: 1,
+ GlobalRecvMsgOpt: 2,
+ }
+
+ expected := map[string]any{
+ "nickname": "TestUser",
+ "face_url": "http://openim.io/logo.jpg",
+ "ex": "Extra Data",
+ "app_manager_level": int32(1),
+ "global_recv_msg_opt": int32(2),
+ }
+
+ result := UserPb2DBMap(user)
+ if !reflect.DeepEqual(result, expected) {
+ t.Errorf("UserPb2DBMap returned unexpected map. Got %v, want %v", result, expected)
+ }
+}
diff --git a/pkg/common/db/cache/conversation.go b/pkg/common/db/cache/conversation.go
index 9c0bcfae4..a7018bc18 100644
--- a/pkg/common/db/cache/conversation.go
+++ b/pkg/common/db/cache/conversation.go
@@ -26,7 +26,6 @@ import (
"github.com/OpenIMSDK/tools/utils"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/relation"
relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
)
@@ -67,10 +66,10 @@ type ConversationCache interface {
GetUserRecvMsgOpt(ctx context.Context, ownerUserID, conversationID string) (opt int, err error)
DelUserRecvMsgOpt(ownerUserID, conversationID string) ConversationCache
// get one super group recv msg but do not notification userID list
- GetSuperGroupRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) (userIDs []string, err error)
+ //GetSuperGroupRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) (userIDs []string, err error)
DelSuperGroupRecvMsgNotNotifyUserIDs(groupID string) ConversationCache
// get one super group recv msg but do not notification userID list hash
- GetSuperGroupRecvMsgNotNotifyUserIDsHash(ctx context.Context, groupID string) (hash uint64, err error)
+ //GetSuperGroupRecvMsgNotNotifyUserIDsHash(ctx context.Context, groupID string) (hash uint64, err error)
DelSuperGroupRecvMsgNotNotifyUserIDsHash(groupID string) ConversationCache
//GetUserAllHasReadSeqs(ctx context.Context, ownerUserID string) (map[string]int64, error)
@@ -101,20 +100,20 @@ type ConversationRedisCache struct {
expireTime time.Duration
}
-func NewNewConversationRedis(
- rdb redis.UniversalClient,
- conversationDB *relation.ConversationGorm,
- options rockscache.Options,
-) ConversationCache {
- rcClient := rockscache.NewClient(rdb, options)
-
- return &ConversationRedisCache{
- rcClient: rcClient,
- metaCache: NewMetaCacheRedis(rcClient),
- conversationDB: conversationDB,
- expireTime: conversationExpireTime,
- }
-}
+//func NewNewConversationRedis(
+// rdb redis.UniversalClient,
+// conversationDB *relation.ConversationGorm,
+// options rockscache.Options,
+//) ConversationCache {
+// rcClient := rockscache.NewClient(rdb, options)
+//
+// return &ConversationRedisCache{
+// rcClient: rcClient,
+// metaCache: NewMetaCacheRedis(rcClient),
+// conversationDB: conversationDB,
+// expireTime: conversationExpireTime,
+// }
+//}
func (c *ConversationRedisCache) NewCache() ConversationCache {
return &ConversationRedisCache{
@@ -282,11 +281,11 @@ func (c *ConversationRedisCache) GetUserRecvMsgOpt(ctx context.Context, ownerUse
})
}
-func (c *ConversationRedisCache) GetSuperGroupRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) (userIDs []string, err error) {
- return getCache(ctx, c.rcClient, c.getSuperGroupRecvNotNotifyUserIDsKey(groupID), c.expireTime, func(ctx context.Context) (userIDs []string, err error) {
- return c.conversationDB.FindSuperGroupRecvMsgNotNotifyUserIDs(ctx, groupID)
- })
-}
+//func (c *ConversationRedisCache) GetSuperGroupRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) (userIDs []string, err error) {
+// return getCache(ctx, c.rcClient, c.getSuperGroupRecvNotNotifyUserIDsKey(groupID), c.expireTime, func(ctx context.Context) (userIDs []string, err error) {
+// return c.conversationDB.FindSuperGroupRecvMsgNotNotifyUserIDs(ctx, groupID)
+// })
+//}
func (c *ConversationRedisCache) DelUsersConversation(conversationID string, ownerUserIDs ...string) ConversationCache {
keys := make([]string, 0, len(ownerUserIDs))
@@ -313,19 +312,19 @@ func (c *ConversationRedisCache) DelSuperGroupRecvMsgNotNotifyUserIDs(groupID st
return cache
}
-func (c *ConversationRedisCache) GetSuperGroupRecvMsgNotNotifyUserIDsHash(ctx context.Context, groupID string) (hash uint64, err error) {
- return getCache(ctx, c.rcClient, c.getSuperGroupRecvNotNotifyUserIDsHashKey(groupID), c.expireTime, func(ctx context.Context) (hash uint64, err error) {
- userIDs, err := c.GetSuperGroupRecvMsgNotNotifyUserIDs(ctx, groupID)
- if err != nil {
- return 0, err
- }
- utils.Sort(userIDs, true)
- bi := big.NewInt(0)
- bi.SetString(utils.Md5(strings.Join(userIDs, ";"))[0:8], 16)
- return bi.Uint64(), nil
- },
- )
-}
+//func (c *ConversationRedisCache) GetSuperGroupRecvMsgNotNotifyUserIDsHash(ctx context.Context, groupID string) (hash uint64, err error) {
+// return getCache(ctx, c.rcClient, c.getSuperGroupRecvNotNotifyUserIDsHashKey(groupID), c.expireTime, func(ctx context.Context) (hash uint64, err error) {
+// userIDs, err := c.GetSuperGroupRecvMsgNotNotifyUserIDs(ctx, groupID)
+// if err != nil {
+// return 0, err
+// }
+// utils.Sort(userIDs, true)
+// bi := big.NewInt(0)
+// bi.SetString(utils.Md5(strings.Join(userIDs, ";"))[0:8], 16)
+// return bi.Uint64(), nil
+// },
+// )
+//}
func (c *ConversationRedisCache) DelSuperGroupRecvMsgNotNotifyUserIDsHash(groupID string) ConversationCache {
cache := c.NewCache()
diff --git a/pkg/common/db/cache/friend.go b/pkg/common/db/cache/friend.go
index 64a358984..a2b60d48f 100644
--- a/pkg/common/db/cache/friend.go
+++ b/pkg/common/db/cache/friend.go
@@ -33,19 +33,22 @@ const (
friendKey = "FRIEND_INFO:"
)
-// args fn will exec when no data in msgCache.
+// FriendCache is an interface for caching friend-related data.
type FriendCache interface {
metaCache
NewCache() FriendCache
GetFriendIDs(ctx context.Context, ownerUserID string) (friendIDs []string, err error)
- // call when friendID List changed
+	// Called when the friend ID list changes
DelFriendIDs(ownerUserID ...string) FriendCache
- // get single friendInfo from msgCache
+	// Get a single friend's info from the cache
GetFriend(ctx context.Context, ownerUserID, friendUserID string) (friend *relationtb.FriendModel, err error)
- // del friend when friend info changed
+	// Delete a friend from the cache when their info changes
DelFriend(ownerUserID, friendUserID string) FriendCache
+	// Delete multiple friends from the cache when their info changes
+ DelFriends(ownerUserID string, friendUserIDs []string) FriendCache
}
+// FriendCacheRedis is an implementation of the FriendCache interface using Redis.
type FriendCacheRedis struct {
metaCache
friendDB relationtb.FriendModelInterface
@@ -53,6 +56,7 @@ type FriendCacheRedis struct {
rcClient *rockscache.Client
}
+// NewFriendCacheRedis creates a new instance of FriendCacheRedis.
func NewFriendCacheRedis(rdb redis.UniversalClient, friendDB relationtb.FriendModelInterface,
options rockscache.Options) FriendCache {
rcClient := rockscache.NewClient(rdb, options)
@@ -64,6 +68,7 @@ func NewFriendCacheRedis(rdb redis.UniversalClient, friendDB relationtb.FriendMo
}
}
+// NewCache creates a new instance of FriendCacheRedis with the same configuration.
func (f *FriendCacheRedis) NewCache() FriendCache {
return &FriendCacheRedis{
rcClient: f.rcClient,
@@ -73,24 +78,29 @@ func (f *FriendCacheRedis) NewCache() FriendCache {
}
}
+// getFriendIDsKey returns the key for storing friend IDs in the cache.
func (f *FriendCacheRedis) getFriendIDsKey(ownerUserID string) string {
return friendIDsKey + ownerUserID
}
+// getTwoWayFriendsIDsKey returns the key for storing two-way friend IDs in the cache.
func (f *FriendCacheRedis) getTwoWayFriendsIDsKey(ownerUserID string) string {
return TwoWayFriendsIDsKey + ownerUserID
}
+// getFriendKey returns the key for storing friend info in the cache.
func (f *FriendCacheRedis) getFriendKey(ownerUserID, friendUserID string) string {
return friendKey + ownerUserID + "-" + friendUserID
}
+// GetFriendIDs retrieves friend IDs from the cache or the database if not found.
func (f *FriendCacheRedis) GetFriendIDs(ctx context.Context, ownerUserID string) (friendIDs []string, err error) {
return getCache(ctx, f.rcClient, f.getFriendIDsKey(ownerUserID), f.expireTime, func(ctx context.Context) ([]string, error) {
return f.friendDB.FindFriendUserIDs(ctx, ownerUserID)
})
}
+// DelFriendIDs deletes friend IDs from the cache.
func (f *FriendCacheRedis) DelFriendIDs(ownerUserIDs ...string) FriendCache {
newGroupCache := f.NewCache()
keys := make([]string, 0, len(ownerUserIDs))
@@ -102,7 +112,7 @@ func (f *FriendCacheRedis) DelFriendIDs(ownerUserIDs ...string) FriendCache {
return newGroupCache
}
-// todo.
+// GetTwoWayFriendIDs retrieves two-way friend IDs from the cache.
func (f *FriendCacheRedis) GetTwoWayFriendIDs(ctx context.Context, ownerUserID string) (twoWayFriendIDs []string, err error) {
friendIDs, err := f.GetFriendIDs(ctx, ownerUserID)
if err != nil {
@@ -121,6 +131,7 @@ func (f *FriendCacheRedis) GetTwoWayFriendIDs(ctx context.Context, ownerUserID s
return twoWayFriendIDs, nil
}
+// DelTwoWayFriendIDs deletes two-way friend IDs from the cache.
func (f *FriendCacheRedis) DelTwoWayFriendIDs(ctx context.Context, ownerUserID string) FriendCache {
newFriendCache := f.NewCache()
newFriendCache.AddKeys(f.getTwoWayFriendsIDsKey(ownerUserID))
@@ -128,17 +139,30 @@ func (f *FriendCacheRedis) DelTwoWayFriendIDs(ctx context.Context, ownerUserID s
return newFriendCache
}
-func (f *FriendCacheRedis) GetFriend(ctx context.Context, ownerUserID,
- friendUserID string) (friend *relationtb.FriendModel, err error) {
+// GetFriend retrieves friend info from the cache or the database if not found.
+func (f *FriendCacheRedis) GetFriend(ctx context.Context, ownerUserID, friendUserID string) (friend *relationtb.FriendModel, err error) {
return getCache(ctx, f.rcClient, f.getFriendKey(ownerUserID,
friendUserID), f.expireTime, func(ctx context.Context) (*relationtb.FriendModel, error) {
return f.friendDB.Take(ctx, ownerUserID, friendUserID)
})
}
+// DelFriend deletes friend info from the cache.
func (f *FriendCacheRedis) DelFriend(ownerUserID, friendUserID string) FriendCache {
newFriendCache := f.NewCache()
newFriendCache.AddKeys(f.getFriendKey(ownerUserID, friendUserID))
return newFriendCache
}
+
+// DelFriends deletes multiple friend infos from the cache.
+func (f *FriendCacheRedis) DelFriends(ownerUserID string, friendUserIDs []string) FriendCache {
+ newFriendCache := f.NewCache()
+
+ for _, friendUserID := range friendUserIDs {
+ key := f.getFriendKey(ownerUserID, friendUserID)
+		newFriendCache.AddKeys(key) // mark the key for deletion on the next ExecDel
+ }
+
+ return newFriendCache
+}
diff --git a/pkg/common/db/cache/group.go b/pkg/common/db/cache/group.go
index 6a4b57813..57fcf1a9b 100644
--- a/pkg/common/db/cache/group.go
+++ b/pkg/common/db/cache/group.go
@@ -16,8 +16,13 @@ package cache
import (
"context"
+ "fmt"
+ "strconv"
"time"
+ "github.com/OpenIMSDK/protocol/constant"
+ "github.com/OpenIMSDK/tools/errs"
+
"github.com/OpenIMSDK/tools/log"
"github.com/dtm-labs/rockscache"
@@ -26,21 +31,24 @@ import (
"github.com/OpenIMSDK/tools/utils"
relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
- unrelationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/unrelation"
)
const (
- groupExpireTime = time.Second * 60 * 60 * 12
- groupInfoKey = "GROUP_INFO:"
- groupMemberIDsKey = "GROUP_MEMBER_IDS:"
- groupMembersHashKey = "GROUP_MEMBERS_HASH2:"
- groupMemberInfoKey = "GROUP_MEMBER_INFO:"
- joinedSuperGroupsKey = "JOIN_SUPER_GROUPS:"
- SuperGroupMemberIDsKey = "SUPER_GROUP_MEMBER_IDS:"
- joinedGroupsKey = "JOIN_GROUPS_KEY:"
- groupMemberNumKey = "GROUP_MEMBER_NUM_CACHE:"
+ groupExpireTime = time.Second * 60 * 60 * 12
+ groupInfoKey = "GROUP_INFO:"
+ groupMemberIDsKey = "GROUP_MEMBER_IDS:"
+ groupMembersHashKey = "GROUP_MEMBERS_HASH2:"
+ groupMemberInfoKey = "GROUP_MEMBER_INFO:"
+ //groupOwnerInfoKey = "GROUP_OWNER_INFO:".
+ joinedGroupsKey = "JOIN_GROUPS_KEY:"
+ groupMemberNumKey = "GROUP_MEMBER_NUM_CACHE:"
+ groupRoleLevelMemberIDsKey = "GROUP_ROLE_LEVEL_MEMBER_IDS:"
)
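+// GroupHash supplies the hash of a group's member list; GetGroupMembersHash uses it to fill the members-hash cache.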
+type GroupHash interface {
+ GetGroupHash(ctx context.Context, groupID string) (uint64, error)
+}
+
type GroupCache interface {
metaCache
NewCache() GroupCache
@@ -48,11 +56,6 @@ type GroupCache interface {
GetGroupInfo(ctx context.Context, groupID string) (group *relationtb.GroupModel, err error)
DelGroupsInfo(groupIDs ...string) GroupCache
- GetJoinedSuperGroupIDs(ctx context.Context, userID string) (joinedSuperGroupIDs []string, err error)
- DelJoinedSuperGroupIDs(userIDs ...string) GroupCache
- GetSuperGroupMemberIDs(ctx context.Context, groupIDs ...string) (models []*unrelationtb.SuperGroupModel, err error)
- DelSuperGroupMemberIDs(groupIDs ...string) GroupCache
-
GetGroupMembersHash(ctx context.Context, groupID string) (hashCode uint64, err error)
GetGroupMemberHashMap(ctx context.Context, groupIDs []string) (map[string]*relationtb.GroupSimpleUserID, error)
DelGroupMembersHash(groupID string) GroupCache
@@ -69,9 +72,16 @@ type GroupCache interface {
GetGroupMembersInfo(ctx context.Context, groupID string, userID []string) (groupMembers []*relationtb.GroupMemberModel, err error)
GetAllGroupMembersInfo(ctx context.Context, groupID string) (groupMembers []*relationtb.GroupMemberModel, err error)
GetGroupMembersPage(ctx context.Context, groupID string, userID []string, showNumber, pageNumber int32) (total uint32, groupMembers []*relationtb.GroupMemberModel, err error)
+ FindGroupMemberUser(ctx context.Context, groupIDs []string, userID string) ([]*relationtb.GroupMemberModel, error)
+ GetGroupRoleLevelMemberIDs(ctx context.Context, groupID string, roleLevel int32) ([]string, error)
+ GetGroupOwner(ctx context.Context, groupID string) (*relationtb.GroupMemberModel, error)
+ GetGroupsOwner(ctx context.Context, groupIDs []string) ([]*relationtb.GroupMemberModel, error)
+ DelGroupRoleLevel(groupID string, roleLevel []int32) GroupCache
+ DelGroupAllRoleLevel(groupID string) GroupCache
DelGroupMembersInfo(groupID string, userID ...string) GroupCache
-
+ GetGroupRoleLevelMemberInfo(ctx context.Context, groupID string, roleLevel int32) ([]*relationtb.GroupMemberModel, error)
+ GetGroupRolesLevelMemberInfo(ctx context.Context, groupID string, roleLevels []int32) ([]*relationtb.GroupMemberModel, error)
GetGroupMemberNum(ctx context.Context, groupID string) (memberNum int64, err error)
DelGroupsMemberNum(groupID ...string) GroupCache
}
@@ -81,10 +91,9 @@ type GroupCacheRedis struct {
groupDB relationtb.GroupModelInterface
groupMemberDB relationtb.GroupMemberModelInterface
groupRequestDB relationtb.GroupRequestModelInterface
- mongoDB unrelationtb.SuperGroupModelInterface
expireTime time.Duration
rcClient *rockscache.Client
- hashCode func(ctx context.Context, groupID string) (uint64, error)
+ groupHash GroupHash
}
func NewGroupCacheRedis(
@@ -92,8 +101,7 @@ func NewGroupCacheRedis(
groupDB relationtb.GroupModelInterface,
groupMemberDB relationtb.GroupMemberModelInterface,
groupRequestDB relationtb.GroupRequestModelInterface,
- mongoClient unrelationtb.SuperGroupModelInterface,
- hashCode func(ctx context.Context, groupID string) (uint64, error),
+ hashCode GroupHash,
opts rockscache.Options,
) GroupCache {
rcClient := rockscache.NewClient(rdb, opts)
@@ -101,8 +109,7 @@ func NewGroupCacheRedis(
return &GroupCacheRedis{
rcClient: rcClient, expireTime: groupExpireTime,
groupDB: groupDB, groupMemberDB: groupMemberDB, groupRequestDB: groupRequestDB,
- mongoDB: mongoClient,
- hashCode: hashCode,
+ groupHash: hashCode,
metaCache: NewMetaCacheRedis(rcClient),
}
}
@@ -114,7 +121,6 @@ func (g *GroupCacheRedis) NewCache() GroupCache {
groupDB: g.groupDB,
groupMemberDB: g.groupMemberDB,
groupRequestDB: g.groupRequestDB,
- mongoDB: g.mongoDB,
metaCache: NewMetaCacheRedis(g.rcClient, g.metaCache.GetPreDelKeys()...),
}
}
@@ -123,18 +129,10 @@ func (g *GroupCacheRedis) getGroupInfoKey(groupID string) string {
return groupInfoKey + groupID
}
-func (g *GroupCacheRedis) getJoinedSuperGroupsIDKey(userID string) string {
- return joinedSuperGroupsKey + userID
-}
-
func (g *GroupCacheRedis) getJoinedGroupsKey(userID string) string {
return joinedGroupsKey + userID
}
-func (g *GroupCacheRedis) getSuperGroupMemberIDsKey(groupID string) string {
- return SuperGroupMemberIDsKey + groupID
-}
-
func (g *GroupCacheRedis) getGroupMembersHashKey(groupID string) string {
return groupMembersHashKey + groupID
}
@@ -151,6 +149,10 @@ func (g *GroupCacheRedis) getGroupMemberNumKey(groupID string) string {
return groupMemberNumKey + groupID
}
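+// getGroupRoleLevelMemberIDsKey returns the cache key for the member IDs of a group at the given role level.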
+func (g *GroupCacheRedis) getGroupRoleLevelMemberIDsKey(groupID string, roleLevel int32) string {
+ return groupRoleLevelMemberIDsKey + groupID + "-" + strconv.Itoa(int(roleLevel))
+}
+
func (g *GroupCacheRedis) GetGroupIndex(group *relationtb.GroupModel, keys []string) (int, error) {
key := g.getGroupInfoKey(group.GroupID)
for i, _key := range keys {
@@ -173,15 +175,7 @@ func (g *GroupCacheRedis) GetGroupMemberIndex(groupMember *relationtb.GroupMembe
return 0, errIndex
}
-// / groupInfo.
func (g *GroupCacheRedis) GetGroupsInfo(ctx context.Context, groupIDs []string) (groups []*relationtb.GroupModel, err error) {
- //var keys []string
- //for _, group := range groupIDs {
- // keys = append(keys, g.getGroupInfoKey(group))
- //}
- //return batchGetCache(ctx, g.rcClient, keys, g.expireTime, g.GetGroupIndex, func(ctx context.Context) ([]*relationtb.GroupModel, error) {
- // return g.groupDB.Find(ctx, groupIDs)
- //})
return batchGetCache2(ctx, g.rcClient, g.expireTime, groupIDs, func(groupID string) string {
return g.getGroupInfoKey(groupID)
}, func(ctx context.Context, groupID string) (*relationtb.GroupModel, error) {
@@ -206,123 +200,44 @@ func (g *GroupCacheRedis) DelGroupsInfo(groupIDs ...string) GroupCache {
return newGroupCache
}
-func (g *GroupCacheRedis) GetJoinedSuperGroupIDs(ctx context.Context, userID string) (joinedSuperGroupIDs []string, err error) {
- return getCache(ctx, g.rcClient, g.getJoinedSuperGroupsIDKey(userID), g.expireTime, func(ctx context.Context) ([]string, error) {
- userGroup, err := g.mongoDB.GetSuperGroupByUserID(ctx, userID)
- if err != nil {
- return nil, err
- }
- return userGroup.GroupIDs, nil
- },
- )
-}
-
-func (g *GroupCacheRedis) GetSuperGroupMemberIDs(ctx context.Context, groupIDs ...string) (models []*unrelationtb.SuperGroupModel, err error) {
- //var keys []string
- //for _, group := range groupIDs {
- // keys = append(keys, g.getSuperGroupMemberIDsKey(group))
- //}
- //return batchGetCache(ctx, g.rcClient, keys, g.expireTime, func(model *unrelationtb.SuperGroupModel, keys []string) (int, error) {
- // for i, key := range keys {
- // if g.getSuperGroupMemberIDsKey(model.GroupID) == key {
- // return i, nil
- // }
- // }
- // return 0, errIndex
- //},
- // func(ctx context.Context) ([]*unrelationtb.SuperGroupModel, error) {
- // return g.mongoDB.FindSuperGroup(ctx, groupIDs)
- // })
- return batchGetCache2(ctx, g.rcClient, g.expireTime, groupIDs, func(groupID string) string {
- return g.getSuperGroupMemberIDsKey(groupID)
- }, func(ctx context.Context, groupID string) (*unrelationtb.SuperGroupModel, error) {
- return g.mongoDB.TakeSuperGroup(ctx, groupID)
- })
-}
-
-// userJoinSuperGroup.
-func (g *GroupCacheRedis) DelJoinedSuperGroupIDs(userIDs ...string) GroupCache {
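+// DelGroupsOwner marks the cached owner member-ID list of each given group for deletion.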
+func (g *GroupCacheRedis) DelGroupsOwner(groupIDs ...string) GroupCache {
newGroupCache := g.NewCache()
- keys := make([]string, 0, len(userIDs))
- for _, userID := range userIDs {
- keys = append(keys, g.getJoinedSuperGroupsIDKey(userID))
+ keys := make([]string, 0, len(groupIDs))
+ for _, groupID := range groupIDs {
+ keys = append(keys, g.getGroupRoleLevelMemberIDsKey(groupID, constant.GroupOwner))
}
newGroupCache.AddKeys(keys...)
return newGroupCache
}
-func (g *GroupCacheRedis) DelSuperGroupMemberIDs(groupIDs ...string) GroupCache {
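+// DelGroupRoleLevel marks the cached member-ID lists of the given role levels in a group for deletion.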
+func (g *GroupCacheRedis) DelGroupRoleLevel(groupID string, roleLevels []int32) GroupCache {
newGroupCache := g.NewCache()
- keys := make([]string, 0, len(groupIDs))
- for _, groupID := range groupIDs {
- keys = append(keys, g.getSuperGroupMemberIDsKey(groupID))
+ keys := make([]string, 0, len(roleLevels))
+ for _, roleLevel := range roleLevels {
+ keys = append(keys, g.getGroupRoleLevelMemberIDsKey(groupID, roleLevel))
}
newGroupCache.AddKeys(keys...)
-
return newGroupCache
}
-// groupMembersHash.
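+// DelGroupAllRoleLevel marks the cached member-ID lists of all role levels (owner, admin, ordinary member) in a group for deletion.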
+func (g *GroupCacheRedis) DelGroupAllRoleLevel(groupID string) GroupCache {
+ return g.DelGroupRoleLevel(groupID, []int32{constant.GroupOwner, constant.GroupAdmin, constant.GroupOrdinaryUsers})
+}
+
func (g *GroupCacheRedis) GetGroupMembersHash(ctx context.Context, groupID string) (hashCode uint64, err error) {
+ if g.groupHash == nil {
+ return 0, errs.ErrInternalServer.Wrap("group hash is nil")
+ }
return getCache(ctx, g.rcClient, g.getGroupMembersHashKey(groupID), g.expireTime, func(ctx context.Context) (uint64, error) {
- return g.hashCode(ctx, groupID)
+ return g.groupHash.GetGroupHash(ctx, groupID)
})
-
- //return getCache(ctx, g.rcClient, g.getGroupMembersHashKey(groupID), g.expireTime,
- // func(ctx context.Context) (uint64, error) {
- // userIDs, err := g.GetGroupMemberIDs(ctx, groupID)
- // if err != nil {
- // return 0, err
- // }
- // log.ZInfo(ctx, "GetGroupMembersHash", "groupID", groupID, "userIDs", userIDs)
- // var members []*relationtb.GroupMemberModel
- // if len(userIDs) > 0 {
- // members, err = g.GetGroupMembersInfo(ctx, groupID, userIDs)
- // if err != nil {
- // return 0, err
- // }
- // utils.Sort(userIDs, true)
- // }
- // memberMap := make(map[string]*relationtb.GroupMemberModel)
- // for i, member := range members {
- // memberMap[member.UserID] = members[i]
- // }
- // data := make([]string, 0, len(members)*11)
- // for _, userID := range userIDs {
- // member, ok := memberMap[userID]
- // if !ok {
- // continue
- // }
- // data = append(data,
- // member.GroupID,
- // member.UserID,
- // member.Nickname,
- // member.FaceURL,
- // strconv.Itoa(int(member.RoleLevel)),
- // strconv.FormatInt(member.JoinTime.UnixMilli(), 10),
- // strconv.Itoa(int(member.JoinSource)),
- // member.InviterUserID,
- // member.OperatorUserID,
- // strconv.FormatInt(member.MuteEndTime.UnixMilli(), 10),
- // member.Ex,
- // )
- // }
- // log.ZInfo(ctx, "hash data info", "userIDs.len", len(userIDs), "hash.data.len", len(data))
- // log.ZInfo(ctx, "json hash data", "groupID", groupID, "data", data)
- // val, err := json.Marshal(data)
- // if err != nil {
- // return 0, err
- // }
- // sum := md5.Sum(val)
- // code := binary.BigEndian.Uint64(sum[:])
- // log.ZInfo(ctx, "GetGroupMembersHash", "groupID", groupID, "hashCode", code, "num", len(members))
- // return code, nil
- // },
- //)
}
func (g *GroupCacheRedis) GetGroupMemberHashMap(ctx context.Context, groupIDs []string) (map[string]*relationtb.GroupSimpleUserID, error) {
+ if g.groupHash == nil {
+ return nil, errs.ErrInternalServer.Wrap("group hash is nil")
+ }
res := make(map[string]*relationtb.GroupSimpleUserID)
for _, groupID := range groupIDs {
hash, err := g.GetGroupMembersHash(ctx, groupID)
@@ -347,7 +262,6 @@ func (g *GroupCacheRedis) DelGroupMembersHash(groupID string) GroupCache {
return cache
}
-// groupMemberIDs.
func (g *GroupCacheRedis) GetGroupMemberIDs(ctx context.Context, groupID string) (groupMemberIDs []string, err error) {
return getCache(ctx, g.rcClient, g.getGroupMemberIDsKey(groupID), g.expireTime, func(ctx context.Context) ([]string, error) {
return g.groupMemberDB.FindMemberUserID(ctx, groupID)
@@ -398,13 +312,6 @@ func (g *GroupCacheRedis) GetGroupMemberInfo(ctx context.Context, groupID, userI
}
func (g *GroupCacheRedis) GetGroupMembersInfo(ctx context.Context, groupID string, userIDs []string) ([]*relationtb.GroupMemberModel, error) {
- //var keys []string
- //for _, userID := range userIDs {
- // keys = append(keys, g.getGroupMemberInfoKey(groupID, userID))
- //}
- //return batchGetCache(ctx, g.rcClient, keys, g.expireTime, g.GetGroupMemberIndex, func(ctx context.Context) ([]*relationtb.GroupMemberModel, error) {
- // return g.groupMemberDB.Find(ctx, []string{groupID}, userIDs, nil)
- //})
return batchGetCache2(ctx, g.rcClient, g.expireTime, userIDs, func(userID string) string {
return g.getGroupMemberInfoKey(groupID, userID)
}, func(ctx context.Context, userID string) (*relationtb.GroupMemberModel, error) {
@@ -446,13 +353,6 @@ func (g *GroupCacheRedis) GetAllGroupMemberInfo(ctx context.Context, groupID str
if err != nil {
return nil, err
}
- //var keys []string
- //for _, groupMemberID := range groupMemberIDs {
- // keys = append(keys, g.getGroupMemberInfoKey(groupID, groupMemberID))
- //}
- //return batchGetCache(ctx, g.rcClient, keys, g.expireTime, g.GetGroupMemberIndex, func(ctx context.Context) ([]*relationtb.GroupMemberModel, error) {
- // return g.groupMemberDB.Find(ctx, []string{groupID}, groupMemberIDs, nil)
- //})
return g.GetGroupMembersInfo(ctx, groupID, groupMemberIDs)
}
@@ -483,3 +383,68 @@ func (g *GroupCacheRedis) DelGroupsMemberNum(groupID ...string) GroupCache {
return cache
}
+
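+// GetGroupOwner returns the group member with the owner role, or ErrRecordNotFound if the group has no owner.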
+func (g *GroupCacheRedis) GetGroupOwner(ctx context.Context, groupID string) (*relationtb.GroupMemberModel, error) {
+ members, err := g.GetGroupRoleLevelMemberInfo(ctx, groupID, constant.GroupOwner)
+ if err != nil {
+ return nil, err
+ }
+ if len(members) == 0 {
+ return nil, errs.ErrRecordNotFound.Wrap(fmt.Sprintf("group %s owner not found", groupID))
+ }
+ return members[0], nil
+}
+
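+// GetGroupsOwner returns the owner member of each given group, skipping groups without an owner.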
+func (g *GroupCacheRedis) GetGroupsOwner(ctx context.Context, groupIDs []string) ([]*relationtb.GroupMemberModel, error) {
+ members := make([]*relationtb.GroupMemberModel, 0, len(groupIDs))
+ for _, groupID := range groupIDs {
+ items, err := g.GetGroupRoleLevelMemberInfo(ctx, groupID, constant.GroupOwner)
+ if err != nil {
+ return nil, err
+ }
+ if len(items) > 0 {
+ members = append(members, items[0])
+ }
+ }
+ return members, nil
+}
+
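+// GetGroupRoleLevelMemberIDs returns the user IDs of members with the given role level, reading through to the database on a cache miss.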
+func (g *GroupCacheRedis) GetGroupRoleLevelMemberIDs(ctx context.Context, groupID string, roleLevel int32) ([]string, error) {
+ return getCache(ctx, g.rcClient, g.getGroupRoleLevelMemberIDsKey(groupID, roleLevel), g.expireTime, func(ctx context.Context) ([]string, error) {
+ return g.groupMemberDB.FindRoleLevelUserIDs(ctx, groupID, roleLevel)
+ })
+}
+
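+// GetGroupRoleLevelMemberInfo returns the member models of all group members with the given role level.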
+func (g *GroupCacheRedis) GetGroupRoleLevelMemberInfo(ctx context.Context, groupID string, roleLevel int32) ([]*relationtb.GroupMemberModel, error) {
+ userIDs, err := g.GetGroupRoleLevelMemberIDs(ctx, groupID, roleLevel)
+ if err != nil {
+ return nil, err
+ }
+ return g.GetGroupMembersInfo(ctx, groupID, userIDs)
+}
+
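+// GetGroupRolesLevelMemberInfo returns the member models of all group members whose role level is in roleLevels.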
+func (g *GroupCacheRedis) GetGroupRolesLevelMemberInfo(ctx context.Context, groupID string, roleLevels []int32) ([]*relationtb.GroupMemberModel, error) {
+ var userIDs []string
+ for _, roleLevel := range roleLevels {
+ ids, err := g.GetGroupRoleLevelMemberIDs(ctx, groupID, roleLevel)
+ if err != nil {
+ return nil, err
+ }
+ userIDs = append(userIDs, ids...)
+ }
+ return g.GetGroupMembersInfo(ctx, groupID, userIDs)
+}
+
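+// FindGroupMemberUser returns the user's membership records in the given groups, or in all joined groups when groupIDs is empty.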
+func (g *GroupCacheRedis) FindGroupMemberUser(ctx context.Context, groupIDs []string, userID string) (_ []*relationtb.GroupMemberModel, err error) {
+ if len(groupIDs) == 0 {
+ groupIDs, err = g.GetJoinedGroupIDs(ctx, userID)
+ if err != nil {
+ return nil, err
+ }
+ }
+ return batchGetCache2(ctx, g.rcClient, g.expireTime, groupIDs, func(groupID string) string {
+ return g.getGroupMemberInfoKey(groupID, userID)
+ }, func(ctx context.Context, groupID string) (*relationtb.GroupMemberModel, error) {
+ return g.groupMemberDB.Take(ctx, groupID, userID)
+ })
+}
diff --git a/pkg/common/db/cache/init_redis.go b/pkg/common/db/cache/init_redis.go
index 77b38d9b7..3cec73be5 100644
--- a/pkg/common/db/cache/init_redis.go
+++ b/pkg/common/db/cache/init_redis.go
@@ -18,6 +18,8 @@ import (
"context"
"errors"
"fmt"
+ "os"
+ "strings"
"time"
"github.com/redis/go-redis/v9"
@@ -43,8 +45,11 @@ func NewRedis() (redis.UniversalClient, error) {
return redisClient, nil
}
+ // Read configuration from environment variables
+ overrideConfigFromEnv()
+
if len(config.Config.Redis.Address) == 0 {
- return nil, errors.New("redis address is empty")
+ return nil, errs.Wrap(errors.New("redis address is empty"))
}
specialerror.AddReplace(redis.Nil, errs.ErrRecordNotFound)
var rdb redis.UniversalClient
@@ -60,9 +65,9 @@ func NewRedis() (redis.UniversalClient, error) {
rdb = redis.NewClient(&redis.Options{
Addr: config.Config.Redis.Address[0],
Username: config.Config.Redis.Username,
- Password: config.Config.Redis.Password, // no password set
- DB: 0, // use default DB
- PoolSize: 100, // connection pool size
+ Password: config.Config.Redis.Password,
+ DB: 0, // use default DB
+ PoolSize: 100, // connection pool size
MaxRetries: maxRetry,
})
}
@@ -72,9 +77,31 @@ func NewRedis() (redis.UniversalClient, error) {
defer cancel()
err = rdb.Ping(ctx).Err()
if err != nil {
- return nil, fmt.Errorf("redis ping %w", err)
+ uriFormat := "address:%s, username:%s, password:%s, clusterMode:%t, enablePipeline:%t"
+ errMsg := fmt.Sprintf(uriFormat, config.Config.Redis.Address, config.Config.Redis.Username, config.Config.Redis.Password, config.Config.Redis.ClusterMode, config.Config.Redis.EnablePipeline)
+ return nil, errs.Wrap(err, errMsg)
}
-
redisClient = rdb
return rdb, err
}
+
+// overrideConfigFromEnv overrides configuration fields with environment variables if present.
+func overrideConfigFromEnv() {
+ if envAddr := os.Getenv("REDIS_ADDRESS"); envAddr != "" {
+ if envPort := os.Getenv("REDIS_PORT"); envPort != "" {
+ addresses := strings.Split(envAddr, ",")
+ for i, addr := range addresses {
+ addresses[i] = addr + ":" + envPort
+ }
+ config.Config.Redis.Address = addresses
+ } else {
+ config.Config.Redis.Address = strings.Split(envAddr, ",")
+ }
+ }
+ if envUser := os.Getenv("REDIS_USERNAME"); envUser != "" {
+ config.Config.Redis.Username = envUser
+ }
+ if envPass := os.Getenv("REDIS_PASSWORD"); envPass != "" {
+ config.Config.Redis.Password = envPass
+ }
+}
diff --git a/pkg/common/db/cache/meta_cache.go b/pkg/common/db/cache/meta_cache.go
index ccac88d68..4bc2a046a 100644
--- a/pkg/common/db/cache/meta_cache.go
+++ b/pkg/common/db/cache/meta_cache.go
@@ -38,7 +38,7 @@ const (
var errIndex = errors.New("err index")
type metaCache interface {
- ExecDel(ctx context.Context) error
+ ExecDel(ctx context.Context, distinct ...bool) error
// delete a key immediately
DelKey(ctx context.Context, key string) error
AddKeys(keys ...string)
@@ -57,7 +57,10 @@ type metaCacheRedis struct {
retryInterval time.Duration
}
-func (m *metaCacheRedis) ExecDel(ctx context.Context) error {
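+// ExecDel deletes every key previously registered with AddKeys; pass true to de-duplicate the key list first.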
+func (m *metaCacheRedis) ExecDel(ctx context.Context, distinct ...bool) error {
+ if len(distinct) > 0 && distinct[0] {
+ m.keys = utils.Distinct(m.keys)
+ }
if len(m.keys) > 0 {
log.ZDebug(ctx, "delete cache", "keys", m.keys)
for _, key := range m.keys {
diff --git a/pkg/common/db/cache/msg.go b/pkg/common/db/cache/msg.go
index f86b44d9b..5cd3cb22c 100644
--- a/pkg/common/db/cache/msg.go
+++ b/pkg/common/db/cache/msg.go
@@ -173,20 +173,7 @@ func (c *msgCache) getSeqs(ctx context.Context, items []string, getkey func(s st
}
func (c *msgCache) SetMaxSeq(ctx context.Context, conversationID string, maxSeq int64) error {
- var retErr error
- for {
- select {
- case <-ctx.Done():
- return errs.Wrap(retErr, "SetMaxSeq redis retry too many amount")
- default:
- retErr = c.setSeq(ctx, conversationID, maxSeq, c.getMaxSeqKey)
- if retErr != nil {
- time.Sleep(time.Second * 2)
- continue
- }
- return nil
- }
- }
+ return c.setSeq(ctx, conversationID, maxSeq, c.getMaxSeqKey)
}
func (c *msgCache) GetMaxSeqs(ctx context.Context, conversationIDs []string) (m map[string]int64, err error) {
@@ -194,21 +181,7 @@ func (c *msgCache) GetMaxSeqs(ctx context.Context, conversationIDs []string) (m
}
func (c *msgCache) GetMaxSeq(ctx context.Context, conversationID string) (int64, error) {
- var retErr error
- var retData int64
- for {
- select {
- case <-ctx.Done():
- return -1, errs.Wrap(retErr, "GetMaxSeq redis retry too many amount")
- default:
- retData, retErr = c.getSeq(ctx, conversationID, c.getMaxSeqKey)
- if retErr != nil && errs.Unwrap(retErr) != redis.Nil {
- time.Sleep(time.Second * 2)
- continue
- }
- return retData, retErr
- }
- }
+ return c.getSeq(ctx, conversationID, c.getMaxSeqKey)
}
func (c *msgCache) SetMinSeq(ctx context.Context, conversationID string, minSeq int64) error {
@@ -314,7 +287,7 @@ func (c *msgCache) GetTokensWithoutError(ctx context.Context, userID string, pla
func (c *msgCache) SetTokenMapByUidPid(ctx context.Context, userID string, platform int, m map[string]int) error {
key := uidPidToken + userID + ":" + constant.PlatformIDToName(platform)
- mm := make(map[string]interface{})
+ mm := make(map[string]any)
for k, v := range m {
mm[k] = v
}
@@ -672,35 +645,19 @@ func (c *msgCache) PipeDeleteMessages(ctx context.Context, conversationID string
}
func (c *msgCache) CleanUpOneConversationAllMsg(ctx context.Context, conversationID string) error {
- var (
- cursor uint64
- keys []string
- err error
-
- key = c.allMessageCacheKey(conversationID)
- )
-
- for {
- // scan up to 10000 at a time, the count (10000) param refers to the number of scans on redis server.
- // if the count is too small, needs to be run scan on redis frequently.
- var limit int64 = 10000
- keys, cursor, err = c.rdb.Scan(ctx, cursor, key, limit).Result()
- if err != nil {
+ vals, err := c.rdb.Keys(ctx, c.allMessageCacheKey(conversationID)).Result()
+ if errors.Is(err, redis.Nil) {
+ return nil
+ }
+ if err != nil {
+ return errs.Wrap(err)
+ }
+ for _, v := range vals {
+ if err := c.rdb.Del(ctx, v).Err(); err != nil {
return errs.Wrap(err)
}
-
- for _, key := range keys {
- err := c.rdb.Del(ctx, key).Err()
- if err != nil {
- return errs.Wrap(err)
- }
- }
-
- // scan end
- if cursor == 0 {
- return nil
- }
}
+ return nil
}
func (c *msgCache) DelMsgFromCache(ctx context.Context, userID string, seqs []int64) error {
diff --git a/pkg/common/db/cache/msg_test.go b/pkg/common/db/cache/msg_test.go
index a5be018ed..65413199a 100644
--- a/pkg/common/db/cache/msg_test.go
+++ b/pkg/common/db/cache/msg_test.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package cache
import (
@@ -385,50 +399,3 @@ func testParallelDeleteMessagesMix(t *testing.T, cid string, seqs []int64, input
assert.EqualValues(t, 1, val) // exists
}
}
-
-func TestCleanUpOneConversationAllMsg(t *testing.T) {
- rdb := redis.NewClient(&redis.Options{})
- defer rdb.Close()
-
- cacher := msgCache{rdb: rdb}
- count := 1000
- prefix := fmt.Sprintf("%v", rand.Int63())
-
- ids := []string{}
- for i := 0; i < count; i++ {
- id := fmt.Sprintf("%v-cid-%v", prefix, rand.Int63())
- ids = append(ids, id)
-
- key := cacher.allMessageCacheKey(id)
- rdb.Set(context.Background(), key, "openim", 0)
- }
-
- // delete 100 keys with scan.
- for i := 0; i < 100; i++ {
- pickedKey := ids[i]
- err := cacher.CleanUpOneConversationAllMsg(context.Background(), pickedKey)
- assert.Nil(t, err)
-
- ls, err := rdb.Keys(context.Background(), pickedKey).Result()
- assert.Nil(t, err)
- assert.Equal(t, 0, len(ls))
-
- rcode, err := rdb.Exists(context.Background(), pickedKey).Result()
- assert.Nil(t, err)
- assert.EqualValues(t, 0, rcode) // non-exists
- }
-
- sid := fmt.Sprintf("%v-cid-*", prefix)
- ls, err := rdb.Keys(context.Background(), cacher.allMessageCacheKey(sid)).Result()
- assert.Nil(t, err)
- assert.Equal(t, count-100, len(ls))
-
- // delete fuzzy matching keys.
- err = cacher.CleanUpOneConversationAllMsg(context.Background(), sid)
- assert.Nil(t, err)
-
- // don't contains keys matched `{prefix}-cid-{random}` on redis
- ls, err = rdb.Keys(context.Background(), cacher.allMessageCacheKey(sid)).Result()
- assert.Nil(t, err)
- assert.Equal(t, 0, len(ls))
-}
diff --git a/pkg/common/db/cache/s3.go b/pkg/common/db/cache/s3.go
index 3520ba2ec..1e68cedf8 100644
--- a/pkg/common/db/cache/s3.go
+++ b/pkg/common/db/cache/s3.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package cache
import (
@@ -14,8 +28,8 @@ import (
type ObjectCache interface {
metaCache
- GetName(ctx context.Context, name string) (*relationtb.ObjectModel, error)
- DelObjectName(names ...string) ObjectCache
+ GetName(ctx context.Context, engine string, name string) (*relationtb.ObjectModel, error)
+ DelObjectName(engine string, names ...string) ObjectCache
}
func NewObjectCacheRedis(rdb redis.UniversalClient, objDB relationtb.ObjectInfoModelInterface) ObjectCache {
@@ -44,23 +58,23 @@ func (g *objectCacheRedis) NewCache() ObjectCache {
}
}
-func (g *objectCacheRedis) DelObjectName(names ...string) ObjectCache {
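+// DelObjectName marks the cached object records of the given names under the specified engine for deletion.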
+func (g *objectCacheRedis) DelObjectName(engine string, names ...string) ObjectCache {
objectCache := g.NewCache()
keys := make([]string, 0, len(names))
for _, name := range names {
- keys = append(keys, g.getObjectKey(name))
+ keys = append(keys, g.getObjectKey(name, engine))
}
objectCache.AddKeys(keys...)
return objectCache
}
-func (g *objectCacheRedis) getObjectKey(name string) string {
- return "OBJECT:" + name
+func (g *objectCacheRedis) getObjectKey(engine string, name string) string {
+ return "OBJECT:" + engine + ":" + name
}
-func (g *objectCacheRedis) GetName(ctx context.Context, name string) (*relationtb.ObjectModel, error) {
- return getCache(ctx, g.rcClient, g.getObjectKey(name), g.expireTime, func(ctx context.Context) (*relationtb.ObjectModel, error) {
- return g.objDB.Take(ctx, name)
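+// GetName returns the object info for the given engine and name, reading through to the database on a cache miss.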
+func (g *objectCacheRedis) GetName(ctx context.Context, engine string, name string) (*relationtb.ObjectModel, error) {
+ return getCache(ctx, g.rcClient, g.getObjectKey(name, engine), g.expireTime, func(ctx context.Context) (*relationtb.ObjectModel, error) {
+ return g.objDB.Take(ctx, engine, name)
})
}
diff --git a/pkg/common/db/cache/user.go b/pkg/common/db/cache/user.go
index d1164f2c0..979bd06e4 100644
--- a/pkg/common/db/cache/user.go
+++ b/pkg/common/db/cache/user.go
@@ -22,6 +22,8 @@ import (
"strconv"
"time"
+ relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+
"github.com/OpenIMSDK/tools/log"
"github.com/OpenIMSDK/protocol/constant"
@@ -31,8 +33,6 @@ import (
"github.com/dtm-labs/rockscache"
"github.com/redis/go-redis/v9"
-
- relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
)
const (
@@ -59,7 +59,8 @@ type UserCache interface {
type UserCacheRedis struct {
metaCache
- rdb redis.UniversalClient
+ rdb redis.UniversalClient
+ //userDB relationtb.UserModelInterface
userDB relationtb.UserModelInterface
expireTime time.Duration
rcClient *rockscache.Client
@@ -100,39 +101,13 @@ func (u *UserCacheRedis) getUserGlobalRecvMsgOptKey(userID string) string {
}
func (u *UserCacheRedis) GetUserInfo(ctx context.Context, userID string) (userInfo *relationtb.UserModel, err error) {
- return getCache(
- ctx,
- u.rcClient,
- u.getUserInfoKey(userID),
- u.expireTime,
- func(ctx context.Context) (*relationtb.UserModel, error) {
- return u.userDB.Take(ctx, userID)
- },
+ return getCache(ctx, u.rcClient, u.getUserInfoKey(userID), u.expireTime, func(ctx context.Context) (*relationtb.UserModel, error) {
+ return u.userDB.Take(ctx, userID)
+ },
)
}
func (u *UserCacheRedis) GetUsersInfo(ctx context.Context, userIDs []string) ([]*relationtb.UserModel, error) {
- //var keys []string
- //for _, userID := range userIDs {
- // keys = append(keys, u.getUserInfoKey(userID))
- //}
- //return batchGetCache(
- // ctx,
- // u.rcClient,
- // keys,
- // u.expireTime,
- // func(user *relationtb.UserModel, keys []string) (int, error) {
- // for i, key := range keys {
- // if key == u.getUserInfoKey(user.UserID) {
- // return i, nil
- // }
- // }
- // return 0, errIndex
- // },
- // func(ctx context.Context) ([]*relationtb.UserModel, error) {
- // return u.userDB.Find(ctx, userIDs)
- // },
- //)
return batchGetCache2(ctx, u.rcClient, u.expireTime, userIDs, func(userID string) string {
return u.getUserInfoKey(userID)
}, func(ctx context.Context, userID string) (*relationtb.UserModel, error) {
@@ -214,8 +189,7 @@ func (u *UserCacheRedis) SetUserStatus(ctx context.Context, userID string, statu
UserIDNum := crc32.ChecksumIEEE([]byte(userID))
modKey := strconv.Itoa(int(UserIDNum % statusMod))
key := olineStatusKey + modKey
- log.ZDebug(ctx, "SetUserStatus args", "userID", userID, "status", status,
- "platformID", platformID, "modKey", modKey, "key", key)
+ log.ZDebug(ctx, "SetUserStatus args", "userID", userID, "status", status, "platformID", platformID, "modKey", modKey, "key", key)
isNewKey, err := u.rdb.Exists(ctx, key).Result()
if err != nil {
return errs.Wrap(err)
diff --git a/pkg/common/db/controller/black.go b/pkg/common/db/controller/black.go
index 70e942a77..e68d06b01 100644
--- a/pkg/common/db/controller/black.go
+++ b/pkg/common/db/controller/black.go
@@ -17,6 +17,8 @@ package controller
import (
"context"
+ "github.com/OpenIMSDK/tools/pagination"
+
"github.com/OpenIMSDK/tools/log"
"github.com/OpenIMSDK/tools/utils"
@@ -30,12 +32,7 @@ type BlackDatabase interface {
// Delete removes the given blacklist records
Delete(ctx context.Context, blacks []*relation.BlackModel) (err error)
// FindOwnerBlacks gets the blacklist of ownerUserID
- FindOwnerBlacks(
- ctx context.Context,
- ownerUserID string,
- pageNumber, showNumber int32,
- ) (blacks []*relation.BlackModel, total int64, err error)
- FindBlackIDs(ctx context.Context, ownerUserID string) (blackIDs []string, err error)
+ FindOwnerBlacks(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, blacks []*relation.BlackModel, err error)
FindBlackInfos(ctx context.Context, ownerUserID string, userIDs []string) (blacks []*relation.BlackModel, err error)
// CheckIn checks whether user2 is in user1's blacklist (inUser1Blacks==true) and whether user1 is in user2's blacklist (inUser2Blacks==true)
CheckIn(ctx context.Context, userID1, userID2 string) (inUser1Blacks bool, inUser2Blacks bool, err error)
@@ -75,12 +72,8 @@ func (b *blackDatabase) deleteBlackIDsCache(ctx context.Context, blacks []*relat
}
// FindOwnerBlacks gets the blacklist of ownerUserID.
-func (b *blackDatabase) FindOwnerBlacks(
- ctx context.Context,
- ownerUserID string,
- pageNumber, showNumber int32,
-) (blacks []*relation.BlackModel, total int64, err error) {
- return b.black.FindOwnerBlacks(ctx, ownerUserID, pageNumber, showNumber)
+func (b *blackDatabase) FindOwnerBlacks(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, blacks []*relation.BlackModel, err error) {
+ return b.black.FindOwnerBlacks(ctx, ownerUserID, pagination)
}
// CheckIn checks whether user2 is in user1's blacklist (inUser1Blacks==true) and whether user1 is in user2's blacklist (inUser2Blacks==true).
diff --git a/pkg/common/db/controller/conversation.go b/pkg/common/db/controller/conversation.go
index 0aaa95880..c6629e9c8 100644
--- a/pkg/common/db/controller/conversation.go
+++ b/pkg/common/db/controller/conversation.go
@@ -18,6 +18,8 @@ import (
"context"
"time"
+ "github.com/OpenIMSDK/tools/pagination"
+
"github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
"github.com/OpenIMSDK/protocol/constant"
@@ -31,7 +33,7 @@ import (
type ConversationDatabase interface {
// UpdateUserConversationFiled updates the properties of the user's conversation
- UpdateUsersConversationFiled(ctx context.Context, userIDs []string, conversationID string, args map[string]interface{}) error
+ UpdateUsersConversationFiled(ctx context.Context, userIDs []string, conversationID string, args map[string]any) error
// CreateConversation creates a batch of new conversations
CreateConversation(ctx context.Context, conversations []*relationtb.ConversationModel) error
// SyncPeerUserPrivateConversation syncs the peer user's private conversation; the operation is transactional internally
@@ -39,26 +41,26 @@ type ConversationDatabase interface {
// FindConversations gets multiple conversations of a user by conversation ID
FindConversations(ctx context.Context, ownerUserID string, conversationIDs []string) ([]*relationtb.ConversationModel, error)
// FindRecvMsgNotNotifyUserIDs gets the IDs of users in a super group who have enabled do-not-disturb
- FindRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) ([]string, error)
+ //FindRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) ([]string, error)
// GetUserAllConversation gets all conversations of a user on the server
GetUserAllConversation(ctx context.Context, ownerUserID string) ([]*relationtb.ConversationModel, error)
// SetUserConversations sets multiple conversation properties for a user; a conversation is created if it does not exist, otherwise it is updated; atomicity is guaranteed internally
SetUserConversations(ctx context.Context, ownerUserID string, conversations []*relationtb.ConversationModel) error
// SetUsersConversationFiledTx updates one field for multiple users' conversations; a conversation is created if it does not exist, otherwise it is updated; the operation is transactional internally
- SetUsersConversationFiledTx(ctx context.Context, userIDs []string, conversation *relationtb.ConversationModel, filedMap map[string]interface{}) error
+ SetUsersConversationFiledTx(ctx context.Context, userIDs []string, conversation *relationtb.ConversationModel, filedMap map[string]any) error
CreateGroupChatConversation(ctx context.Context, groupID string, userIDs []string) error
GetConversationIDs(ctx context.Context, userID string) ([]string, error)
GetUserConversationIDsHash(ctx context.Context, ownerUserID string) (hash uint64, err error)
GetAllConversationIDs(ctx context.Context) ([]string, error)
GetAllConversationIDsNumber(ctx context.Context) (int64, error)
- PageConversationIDs(ctx context.Context, pageNumber, showNumber int32) (conversationIDs []string, err error)
+ PageConversationIDs(ctx context.Context, pagination pagination.Pagination) (conversationIDs []string, err error)
//GetUserAllHasReadSeqs(ctx context.Context, ownerUserID string) (map[string]int64, error)
GetConversationsByConversationID(ctx context.Context, conversationIDs []string) ([]*relationtb.ConversationModel, error)
GetConversationIDsNeedDestruct(ctx context.Context) ([]*relationtb.ConversationModel, error)
GetConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) ([]string, error)
}
-func NewConversationDatabase(conversation relationtb.ConversationModelInterface, cache cache.ConversationCache, tx tx.Tx) ConversationDatabase {
+func NewConversationDatabase(conversation relationtb.ConversationModelInterface, cache cache.ConversationCache, tx tx.CtxTx) ConversationDatabase {
return &conversationDatabase{
conversationDB: conversation,
cache: cache,
@@ -69,22 +71,21 @@ func NewConversationDatabase(conversation relationtb.ConversationModelInterface,
type conversationDatabase struct {
conversationDB relationtb.ConversationModelInterface
cache cache.ConversationCache
- tx tx.Tx
+ tx tx.CtxTx
}
-func (c *conversationDatabase) SetUsersConversationFiledTx(ctx context.Context, userIDs []string, conversation *relationtb.ConversationModel, filedMap map[string]interface{}) (err error) {
- cache := c.cache.NewCache()
- if conversation.GroupID != "" {
- cache = cache.DelSuperGroupRecvMsgNotNotifyUserIDs(conversation.GroupID).DelSuperGroupRecvMsgNotNotifyUserIDsHash(conversation.GroupID)
- }
- if err := c.tx.Transaction(func(tx any) error {
- conversationTx := c.conversationDB.NewTx(tx)
- haveUserIDs, err := conversationTx.FindUserID(ctx, userIDs, []string{conversation.ConversationID})
+func (c *conversationDatabase) SetUsersConversationFiledTx(ctx context.Context, userIDs []string, conversation *relationtb.ConversationModel, filedMap map[string]any) (err error) {
+ return c.tx.Transaction(ctx, func(ctx context.Context) error {
+ cache := c.cache.NewCache()
+ if conversation.GroupID != "" {
+ cache = cache.DelSuperGroupRecvMsgNotNotifyUserIDs(conversation.GroupID).DelSuperGroupRecvMsgNotNotifyUserIDsHash(conversation.GroupID)
+ }
+ haveUserIDs, err := c.conversationDB.FindUserID(ctx, userIDs, []string{conversation.ConversationID})
if err != nil {
return err
}
if len(haveUserIDs) > 0 {
- _, err = conversationTx.UpdateByMap(ctx, haveUserIDs, conversation.ConversationID, filedMap)
+ _, err = c.conversationDB.UpdateByMap(ctx, haveUserIDs, conversation.ConversationID, filedMap)
if err != nil {
return err
}
@@ -112,20 +113,17 @@ func (c *conversationDatabase) SetUsersConversationFiledTx(ctx context.Context,
conversations = append(conversations, temp)
}
if len(conversations) > 0 {
- err = conversationTx.Create(ctx, conversations)
+ err = c.conversationDB.Create(ctx, conversations)
if err != nil {
return err
}
cache = cache.DelConversationIDs(NotUserIDs...).DelUserConversationIDsHash(NotUserIDs...).DelConversations(conversation.ConversationID, NotUserIDs...)
}
- return nil
- }); err != nil {
- return err
- }
- return cache.ExecDel(ctx)
+ return cache.ExecDel(ctx)
+ })
}
-func (c *conversationDatabase) UpdateUsersConversationFiled(ctx context.Context, userIDs []string, conversationID string, args map[string]interface{}) error {
+func (c *conversationDatabase) UpdateUsersConversationFiled(ctx context.Context, userIDs []string, conversationID string, args map[string]any) error {
_, err := c.conversationDB.UpdateByMap(ctx, userIDs, conversationID, args)
if err != nil {
return err
@@ -153,19 +151,18 @@ func (c *conversationDatabase) CreateConversation(ctx context.Context, conversat
}
func (c *conversationDatabase) SyncPeerUserPrivateConversationTx(ctx context.Context, conversations []*relationtb.ConversationModel) error {
- cache := c.cache.NewCache()
- if err := c.tx.Transaction(func(tx any) error {
- conversationTx := c.conversationDB.NewTx(tx)
+ return c.tx.Transaction(ctx, func(ctx context.Context) error {
+ cache := c.cache.NewCache()
for _, conversation := range conversations {
for _, v := range [][2]string{{conversation.OwnerUserID, conversation.UserID}, {conversation.UserID, conversation.OwnerUserID}} {
ownerUserID := v[0]
userID := v[1]
- haveUserIDs, err := conversationTx.FindUserID(ctx, []string{ownerUserID}, []string{conversation.ConversationID})
+ haveUserIDs, err := c.conversationDB.FindUserID(ctx, []string{ownerUserID}, []string{conversation.ConversationID})
if err != nil {
return err
}
if len(haveUserIDs) > 0 {
- _, err := conversationTx.UpdateByMap(ctx, []string{ownerUserID}, conversation.ConversationID, map[string]interface{}{"is_private_chat": conversation.IsPrivateChat})
+ _, err := c.conversationDB.UpdateByMap(ctx, []string{ownerUserID}, conversation.ConversationID, map[string]any{"is_private_chat": conversation.IsPrivateChat})
if err != nil {
return err
}
@@ -176,18 +173,15 @@ func (c *conversationDatabase) SyncPeerUserPrivateConversationTx(ctx context.Con
newConversation.UserID = userID
newConversation.ConversationID = conversation.ConversationID
newConversation.IsPrivateChat = conversation.IsPrivateChat
- if err := conversationTx.Create(ctx, []*relationtb.ConversationModel{&newConversation}); err != nil {
+ if err := c.conversationDB.Create(ctx, []*relationtb.ConversationModel{&newConversation}); err != nil {
return err
}
cache = cache.DelConversationIDs(ownerUserID).DelUserConversationIDsHash(ownerUserID)
}
}
}
- return nil
- }); err != nil {
- return err
- }
- return cache.ExecDel(ctx)
+ return cache.ExecDel(ctx)
+ })
}
func (c *conversationDatabase) FindConversations(ctx context.Context, ownerUserID string, conversationIDs []string) ([]*relationtb.ConversationModel, error) {
@@ -203,28 +197,26 @@ func (c *conversationDatabase) GetUserAllConversation(ctx context.Context, owner
}
func (c *conversationDatabase) SetUserConversations(ctx context.Context, ownerUserID string, conversations []*relationtb.ConversationModel) error {
- cache := c.cache.NewCache()
-
- groupIDs := utils.Distinct(utils.Filter(conversations, func(e *relationtb.ConversationModel) (string, bool) {
- return e.GroupID, e.GroupID != ""
- }))
- for _, groupID := range groupIDs {
- cache = cache.DelSuperGroupRecvMsgNotNotifyUserIDs(groupID).DelSuperGroupRecvMsgNotNotifyUserIDsHash(groupID)
- }
- if err := c.tx.Transaction(func(tx any) error {
+ return c.tx.Transaction(ctx, func(ctx context.Context) error {
+ cache := c.cache.NewCache()
+ groupIDs := utils.Distinct(utils.Filter(conversations, func(e *relationtb.ConversationModel) (string, bool) {
+ return e.GroupID, e.GroupID != ""
+ }))
+ for _, groupID := range groupIDs {
+ cache = cache.DelSuperGroupRecvMsgNotNotifyUserIDs(groupID).DelSuperGroupRecvMsgNotNotifyUserIDsHash(groupID)
+ }
var conversationIDs []string
for _, conversation := range conversations {
conversationIDs = append(conversationIDs, conversation.ConversationID)
cache = cache.DelConversations(conversation.OwnerUserID, conversation.ConversationID)
}
- conversationTx := c.conversationDB.NewTx(tx)
- existConversations, err := conversationTx.Find(ctx, ownerUserID, conversationIDs)
+ existConversations, err := c.conversationDB.Find(ctx, ownerUserID, conversationIDs)
if err != nil {
return err
}
if len(existConversations) > 0 {
for _, conversation := range conversations {
- err = conversationTx.Update(ctx, conversation)
+ err = c.conversationDB.Update(ctx, conversation)
if err != nil {
return err
}
@@ -246,23 +238,22 @@ func (c *conversationDatabase) SetUserConversations(ctx context.Context, ownerUs
if err != nil {
return err
}
- cache = cache.DelConversationIDs(ownerUserID).DelUserConversationIDsHash(ownerUserID).DelConversationNotReceiveMessageUserIDs(utils.Slice(notExistConversations, func(e *relationtb.ConversationModel) string { return e.ConversationID })...)
+ cache = cache.DelConversationIDs(ownerUserID).
+ DelUserConversationIDsHash(ownerUserID).
+ DelConversationNotReceiveMessageUserIDs(utils.Slice(notExistConversations, func(e *relationtb.ConversationModel) string { return e.ConversationID })...)
}
- return nil
- }); err != nil {
- return err
- }
- return cache.ExecDel(ctx)
+ return cache.ExecDel(ctx)
+ })
}
-func (c *conversationDatabase) FindRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) ([]string, error) {
- return c.cache.GetSuperGroupRecvMsgNotNotifyUserIDs(ctx, groupID)
-}
+//func (c *conversationDatabase) FindRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) ([]string, error) {
+// return c.cache.GetSuperGroupRecvMsgNotNotifyUserIDs(ctx, groupID)
+//}
func (c *conversationDatabase) CreateGroupChatConversation(ctx context.Context, groupID string, userIDs []string) error {
- cache := c.cache.NewCache()
- conversationID := msgprocessor.GetConversationIDBySessionType(constant.SuperGroupChatType, groupID)
- if err := c.tx.Transaction(func(tx any) error {
+ return c.tx.Transaction(ctx, func(ctx context.Context) error {
+ cache := c.cache.NewCache()
+ conversationID := msgprocessor.GetConversationIDBySessionType(constant.SuperGroupChatType, groupID)
existConversationUserIDs, err := c.conversationDB.FindUserID(ctx, userIDs, []string{conversationID})
if err != nil {
return err
@@ -281,18 +272,15 @@ func (c *conversationDatabase) CreateGroupChatConversation(ctx context.Context,
return err
}
}
- _, err = c.conversationDB.UpdateByMap(ctx, existConversationUserIDs, conversationID, map[string]interface{}{"max_seq": 0})
+ _, err = c.conversationDB.UpdateByMap(ctx, existConversationUserIDs, conversationID, map[string]any{"max_seq": 0})
if err != nil {
return err
}
for _, v := range existConversationUserIDs {
cache = cache.DelConversations(v, conversationID)
}
- return nil
- }); err != nil {
- return err
- }
- return cache.ExecDel(ctx)
+ return cache.ExecDel(ctx)
+ })
}
func (c *conversationDatabase) GetConversationIDs(ctx context.Context, userID string) ([]string, error) {
@@ -311,14 +299,10 @@ func (c *conversationDatabase) GetAllConversationIDsNumber(ctx context.Context)
return c.conversationDB.GetAllConversationIDsNumber(ctx)
}
-func (c *conversationDatabase) PageConversationIDs(ctx context.Context, pageNumber, showNumber int32) ([]string, error) {
- return c.conversationDB.PageConversationIDs(ctx, pageNumber, showNumber)
+func (c *conversationDatabase) PageConversationIDs(ctx context.Context, pagination pagination.Pagination) ([]string, error) {
+ return c.conversationDB.PageConversationIDs(ctx, pagination)
}
-//func (c *conversationDatabase) GetUserAllHasReadSeqs(ctx context.Context, ownerUserID string) (map[string]int64, error) {
-// return c.cache.GetUserAllHasReadSeqs(ctx, ownerUserID)
-//}
-
func (c *conversationDatabase) GetConversationsByConversationID(ctx context.Context, conversationIDs []string) ([]*relationtb.ConversationModel, error) {
return c.conversationDB.GetConversationsByConversationID(ctx, conversationIDs)
}
diff --git a/pkg/common/db/controller/friend.go b/pkg/common/db/controller/friend.go
index 7816ef935..3b98f5d7b 100644
--- a/pkg/common/db/controller/friend.go
+++ b/pkg/common/db/controller/friend.go
@@ -18,7 +18,7 @@ import (
"context"
"time"
- "gorm.io/gorm"
+ "github.com/OpenIMSDK/tools/pagination"
"github.com/OpenIMSDK/protocol/constant"
"github.com/OpenIMSDK/tools/errs"
@@ -32,75 +32,65 @@ import (
)
type FriendDatabase interface {
- // 检查user2是否在user1的好友列表中(inUser1Friends==true) 检查user1是否在user2的好友列表中(inUser2Friends==true)
+ // CheckIn checks if user2 is in user1's friend list (inUser1Friends==true) and if user1 is in user2's friend list (inUser2Friends==true)
CheckIn(ctx context.Context, user1, user2 string) (inUser1Friends bool, inUser2Friends bool, err error)
- // 增加或者更新好友申请
+
+ // AddFriendRequest adds or updates a friend request
AddFriendRequest(ctx context.Context, fromUserID, toUserID string, reqMsg string, ex string) (err error)
- // 先判断是否在好友表,如果在则不插入
+
+ // BecomeFriends first checks if the users are already in the friends table; if not, it inserts them as friends
BecomeFriends(ctx context.Context, ownerUserID string, friendUserIDs []string, addSource int32) (err error)
- // 拒绝好友申请
+
+ // RefuseFriendRequest refuses a friend request
RefuseFriendRequest(ctx context.Context, friendRequest *relation.FriendRequestModel) (err error)
- // 同意好友申请
+
+ // AgreeFriendRequest accepts a friend request
AgreeFriendRequest(ctx context.Context, friendRequest *relation.FriendRequestModel) (err error)
- // 删除好友
+
+ // Delete removes a friend or friends from the owner's friend list
Delete(ctx context.Context, ownerUserID string, friendUserIDs []string) (err error)
- // 更新好友备注
+
+ // UpdateRemark updates the remark for a friend
UpdateRemark(ctx context.Context, ownerUserID, friendUserID, remark string) (err error)
- // 获取ownerUserID的好友列表
- PageOwnerFriends(
- ctx context.Context,
- ownerUserID string,
- pageNumber, showNumber int32,
- ) (friends []*relation.FriendModel, total int64, err error)
- // friendUserID在哪些人的好友列表中
- PageInWhoseFriends(
- ctx context.Context,
- friendUserID string,
- pageNumber, showNumber int32,
- ) (friends []*relation.FriendModel, total int64, err error)
- // 获取我发出去的好友申请
- PageFriendRequestFromMe(
- ctx context.Context,
- userID string,
- pageNumber, showNumber int32,
- ) (friends []*relation.FriendRequestModel, total int64, err error)
- // 获取我收到的的好友申请
- PageFriendRequestToMe(
- ctx context.Context,
- userID string,
- pageNumber, showNumber int32,
- ) (friends []*relation.FriendRequestModel, total int64, err error)
- // 获取某人指定好友的信息
- FindFriendsWithError(
- ctx context.Context,
- ownerUserID string,
- friendUserIDs []string,
- ) (friends []*relation.FriendModel, err error)
+
+ // PageOwnerFriends retrieves the friend list of ownerUserID with pagination
+ PageOwnerFriends(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, friends []*relation.FriendModel, err error)
+
+ // PageInWhoseFriends finds the users who have friendUserID in their friend list with pagination
+ PageInWhoseFriends(ctx context.Context, friendUserID string, pagination pagination.Pagination) (total int64, friends []*relation.FriendModel, err error)
+
+ // PageFriendRequestFromMe retrieves the friend requests sent by the user with pagination
+ PageFriendRequestFromMe(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, friends []*relation.FriendRequestModel, err error)
+
+ // PageFriendRequestToMe retrieves the friend requests received by the user with pagination
+ PageFriendRequestToMe(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, friends []*relation.FriendRequestModel, err error)
+
+ // FindFriendsWithError fetches specified friends of a user and returns an error if any do not exist
+ FindFriendsWithError(ctx context.Context, ownerUserID string, friendUserIDs []string) (friends []*relation.FriendModel, err error)
+
+ // FindFriendUserIDs retrieves the friend IDs of a user
FindFriendUserIDs(ctx context.Context, ownerUserID string) (friendUserIDs []string, err error)
+
+ // FindBothFriendRequests finds friend requests sent and received
FindBothFriendRequests(ctx context.Context, fromUserID, toUserID string) (friends []*relation.FriendRequestModel, err error)
+
+ // UpdateFriends updates fields for friends
+ UpdateFriends(ctx context.Context, ownerUserID string, friendUserIDs []string, val map[string]any) (err error)
}
type friendDatabase struct {
friend relation.FriendModelInterface
friendRequest relation.FriendRequestModelInterface
- tx tx.Tx
+ tx tx.CtxTx
cache cache.FriendCache
}
-func NewFriendDatabase(
- friend relation.FriendModelInterface,
- friendRequest relation.FriendRequestModelInterface,
- cache cache.FriendCache,
- tx tx.Tx,
-) FriendDatabase {
+func NewFriendDatabase(friend relation.FriendModelInterface, friendRequest relation.FriendRequestModelInterface, cache cache.FriendCache, tx tx.CtxTx) FriendDatabase {
return &friendDatabase{friend: friend, friendRequest: friendRequest, cache: cache, tx: tx}
}
// CheckIn checks whether user2 is in user1's friend list (inUser1Friends==true) and whether user1 is in user2's friend list (inUser2Friends==true).
-func (f *friendDatabase) CheckIn(
- ctx context.Context,
- userID1, userID2 string,
-) (inUser1Friends bool, inUser2Friends bool, err error) {
+func (f *friendDatabase) CheckIn(ctx context.Context, userID1, userID2 string) (inUser1Friends bool, inUser2Friends bool, err error) {
userID1FriendIDs, err := f.cache.GetFriendIDs(ctx, userID1)
if err != nil {
return
@@ -113,50 +103,35 @@ func (f *friendDatabase) CheckIn(
}
// AddFriendRequest adds or updates a friend request: an existing record is updated, otherwise a new one is inserted.
-func (f *friendDatabase) AddFriendRequest(
- ctx context.Context,
- fromUserID, toUserID string,
- reqMsg string,
- ex string,
-) (err error) {
- return f.tx.Transaction(func(tx any) error {
- _, err := f.friendRequest.NewTx(tx).Take(ctx, fromUserID, toUserID)
- // a db error occurred
- if err != nil && errs.Unwrap(err) != gorm.ErrRecordNotFound {
- return err
- }
- // no error: update the existing record
- if err == nil {
- m := make(map[string]interface{}, 1)
+func (f *friendDatabase) AddFriendRequest(ctx context.Context, fromUserID, toUserID string, reqMsg string, ex string) (err error) {
+ return f.tx.Transaction(ctx, func(ctx context.Context) error {
+ _, err := f.friendRequest.Take(ctx, fromUserID, toUserID)
+ switch {
+ case err == nil:
+ m := make(map[string]any, 1)
m["handle_result"] = 0
m["handle_msg"] = ""
m["req_msg"] = reqMsg
m["ex"] = ex
m["create_time"] = time.Now()
- if err := f.friendRequest.NewTx(tx).UpdateByMap(ctx, fromUserID, toUserID, m); err != nil {
- return err
- }
- return nil
- }
- // gorm.ErrRecordNotFound: create a new record
- if err := f.friendRequest.NewTx(tx).Create(ctx, []*relation.FriendRequestModel{{FromUserID: fromUserID, ToUserID: toUserID, ReqMsg: reqMsg, Ex: ex, CreateTime: time.Now(), HandleTime: time.Unix(0, 0)}}); err != nil {
+ return f.friendRequest.UpdateByMap(ctx, fromUserID, toUserID, m)
+ case relation.IsNotFound(err):
+ return f.friendRequest.Create(
+ ctx,
+ []*relation.FriendRequestModel{{FromUserID: fromUserID, ToUserID: toUserID, ReqMsg: reqMsg, Ex: ex, CreateTime: time.Now(), HandleTime: time.Unix(0, 0)}},
+ )
+ default:
return err
}
- return nil
})
}
// BecomeFriends: (1) check the friends table first (no error either way); (2) insert only the users that are not yet friends.
-func (f *friendDatabase) BecomeFriends(
- ctx context.Context,
- ownerUserID string,
- friendUserIDs []string,
- addSource int32,
-) (err error) {
- cache := f.cache.NewCache()
- if err := f.tx.Transaction(func(tx any) error {
+func (f *friendDatabase) BecomeFriends(ctx context.Context, ownerUserID string, friendUserIDs []string, addSource int32) (err error) {
+ return f.tx.Transaction(ctx, func(ctx context.Context) error {
+ cache := f.cache.NewCache()
// Find existing friend rows first and remove the duplicates
- fs1, err := f.friend.NewTx(tx).FindFriends(ctx, ownerUserID, friendUserIDs)
+ fs1, err := f.friend.FindFriends(ctx, ownerUserID, friendUserIDs)
if err != nil {
return err
}
@@ -168,11 +143,11 @@ func (f *friendDatabase) BecomeFriends(
return e.FriendUserID
})
- err = f.friend.NewTx(tx).Create(ctx, fs11)
+ err = f.friend.Create(ctx, fs11)
if err != nil {
return err
}
- fs2, err := f.friend.NewTx(tx).FindReversalFriends(ctx, ownerUserID, friendUserIDs)
+ fs2, err := f.friend.FindReversalFriends(ctx, ownerUserID, friendUserIDs)
if err != nil {
return err
}
@@ -184,24 +159,19 @@ func (f *friendDatabase) BecomeFriends(
fs22 := utils.DistinctAny(fs2, func(e *relation.FriendModel) string {
return e.OwnerUserID
})
- err = f.friend.NewTx(tx).Create(ctx, fs22)
+ err = f.friend.Create(ctx, fs22)
if err != nil {
return err
}
newFriendIDs = append(newFriendIDs, ownerUserID)
cache = cache.DelFriendIDs(newFriendIDs...)
- return nil
- }); err != nil {
- return nil
- }
- return cache.ExecDel(ctx)
+ return cache.ExecDel(ctx)
+ })
}
// RefuseFriendRequest rejects a friend request: (1) check that a pending, unhandled request exists (error if none); (2) mark the request as rejected.
-func (f *friendDatabase) RefuseFriendRequest(
- ctx context.Context,
- friendRequest *relation.FriendRequestModel,
-) (err error) {
+func (f *friendDatabase) RefuseFriendRequest(ctx context.Context, friendRequest *relation.FriendRequestModel) (err error) {
fr, err := f.friendRequest.Take(ctx, friendRequest.FromUserID, friendRequest.ToUserID)
if err != nil {
return err
@@ -220,14 +190,11 @@ func (f *friendDatabase) RefuseFriendRequest(
}
// AgreeFriendRequest accepts a friend request: (1) check that a pending, unhandled request exists (error if none); (2) check whether they are already friends (no error either way); (3) create the two-way friendship, ignoring entries that already exist.
-func (f *friendDatabase) AgreeFriendRequest(
- ctx context.Context,
- friendRequest *relation.FriendRequestModel,
-) (err error) {
- return f.tx.Transaction(func(tx any) error {
+func (f *friendDatabase) AgreeFriendRequest(ctx context.Context, friendRequest *relation.FriendRequestModel) (err error) {
+ return f.tx.Transaction(ctx, func(ctx context.Context) error {
defer log.ZDebug(ctx, "return line")
now := time.Now()
- fr, err := f.friendRequest.NewTx(tx).Take(ctx, friendRequest.FromUserID, friendRequest.ToUserID)
+ fr, err := f.friendRequest.Take(ctx, friendRequest.FromUserID, friendRequest.ToUserID)
if err != nil {
return err
}
@@ -237,25 +204,25 @@ func (f *friendDatabase) AgreeFriendRequest(
friendRequest.HandlerUserID = mcontext.GetOpUserID(ctx)
friendRequest.HandleResult = constant.FriendResponseAgree
friendRequest.HandleTime = now
- err = f.friendRequest.NewTx(tx).Update(ctx, friendRequest)
+ err = f.friendRequest.Update(ctx, friendRequest)
if err != nil {
return err
}
- fr2, err := f.friendRequest.NewTx(tx).Take(ctx, friendRequest.ToUserID, friendRequest.FromUserID)
+ fr2, err := f.friendRequest.Take(ctx, friendRequest.ToUserID, friendRequest.FromUserID)
if err == nil && fr2.HandleResult == constant.FriendResponseNotHandle {
fr2.HandlerUserID = mcontext.GetOpUserID(ctx)
fr2.HandleResult = constant.FriendResponseAgree
fr2.HandleTime = now
- err = f.friendRequest.NewTx(tx).Update(ctx, fr2)
+ err = f.friendRequest.Update(ctx, fr2)
if err != nil {
return err
}
- } else if err != nil && errs.Unwrap(err) != gorm.ErrRecordNotFound {
+ } else if err != nil && (!relation.IsNotFound(err)) {
return err
}
- exists, err := f.friend.NewTx(tx).FindUserState(ctx, friendRequest.FromUserID, friendRequest.ToUserID)
+ exists, err := f.friend.FindUserState(ctx, friendRequest.FromUserID, friendRequest.ToUserID)
if err != nil {
return err
}
@@ -286,7 +253,7 @@ func (f *friendDatabase) AgreeFriendRequest(
)
}
if len(adds) > 0 {
- if err := f.friend.NewTx(tx).Create(ctx, adds); err != nil {
+ if err := f.friend.Create(ctx, adds); err != nil {
return err
}
}
@@ -311,47 +278,27 @@ func (f *friendDatabase) UpdateRemark(ctx context.Context, ownerUserID, friendUs
}
// PageOwnerFriends returns ownerUserID's friend list; an empty result is not an error.
-func (f *friendDatabase) PageOwnerFriends(
- ctx context.Context,
- ownerUserID string,
- pageNumber, showNumber int32,
-) (friends []*relation.FriendModel, total int64, err error) {
- return f.friend.FindOwnerFriends(ctx, ownerUserID, pageNumber, showNumber)
+func (f *friendDatabase) PageOwnerFriends(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, friends []*relation.FriendModel, err error) {
+ return f.friend.FindOwnerFriends(ctx, ownerUserID, pagination)
}
// PageInWhoseFriends returns the users whose friend lists contain friendUserID.
-func (f *friendDatabase) PageInWhoseFriends(
- ctx context.Context,
- friendUserID string,
- pageNumber, showNumber int32,
-) (friends []*relation.FriendModel, total int64, err error) {
- return f.friend.FindInWhoseFriends(ctx, friendUserID, pageNumber, showNumber)
+func (f *friendDatabase) PageInWhoseFriends(ctx context.Context, friendUserID string, pagination pagination.Pagination) (total int64, friends []*relation.FriendModel, err error) {
+ return f.friend.FindInWhoseFriends(ctx, friendUserID, pagination)
}
// PageFriendRequestFromMe returns the friend requests I sent; an empty result is not an error.
-func (f *friendDatabase) PageFriendRequestFromMe(
- ctx context.Context,
- userID string,
- pageNumber, showNumber int32,
-) (friends []*relation.FriendRequestModel, total int64, err error) {
- return f.friendRequest.FindFromUserID(ctx, userID, pageNumber, showNumber)
+func (f *friendDatabase) PageFriendRequestFromMe(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, friends []*relation.FriendRequestModel, err error) {
+ return f.friendRequest.FindFromUserID(ctx, userID, pagination)
}
// PageFriendRequestToMe returns the friend requests I received; an empty result is not an error.
-func (f *friendDatabase) PageFriendRequestToMe(
- ctx context.Context,
- userID string,
- pageNumber, showNumber int32,
-) (friends []*relation.FriendRequestModel, total int64, err error) {
- return f.friendRequest.FindToUserID(ctx, userID, pageNumber, showNumber)
+func (f *friendDatabase) PageFriendRequestToMe(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, friends []*relation.FriendRequestModel, err error) {
+ return f.friendRequest.FindToUserID(ctx, userID, pagination)
}
// FindFriendsWithError returns the info of the specified friends; if any friend does not exist, an error is returned.
-func (f *friendDatabase) FindFriendsWithError(
- ctx context.Context,
- ownerUserID string,
- friendUserIDs []string,
-) (friends []*relation.FriendModel, err error) {
+func (f *friendDatabase) FindFriendsWithError(ctx context.Context, ownerUserID string, friendUserIDs []string) (friends []*relation.FriendModel, err error) {
friends, err = f.friend.FindFriends(ctx, ownerUserID, friendUserIDs)
if err != nil {
return
@@ -362,13 +309,19 @@ func (f *friendDatabase) FindFriendsWithError(
return
}
-func (f *friendDatabase) FindFriendUserIDs(
- ctx context.Context,
- ownerUserID string,
-) (friendUserIDs []string, err error) {
+func (f *friendDatabase) FindFriendUserIDs(ctx context.Context, ownerUserID string) (friendUserIDs []string, err error) {
return f.cache.GetFriendIDs(ctx, ownerUserID)
}
func (f *friendDatabase) FindBothFriendRequests(ctx context.Context, fromUserID, toUserID string) (friends []*relation.FriendRequestModel, err error) {
return f.friendRequest.FindBothFriendRequests(ctx, fromUserID, toUserID)
}
+func (f *friendDatabase) UpdateFriends(ctx context.Context, ownerUserID string, friendUserIDs []string, val map[string]any) (err error) {
+ if len(val) == 0 {
+ return nil
+ }
+ if err := f.friend.UpdateFriends(ctx, ownerUserID, friendUserIDs, val); err != nil {
+ return err
+ }
+ return f.cache.DelFriends(ownerUserID, friendUserIDs).ExecDel(ctx)
+}
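The pattern running through the friend.go changes above is the switch from the handle-based `tx.Tx` (whose callback receives a raw `tx any`) to the context-scoped `tx.CtxTx`, so repository methods no longer need `NewTx(tx)` variants. Below is a minimal, self-contained sketch of that calling convention; the interface shape is inferred from its usage here, and `noopTx` is a made-up illustration, not the implementation in `github.com/OpenIMSDK/tools/tx`.

```go
package main

import (
	"context"
	"fmt"
)

// CtxTx mirrors the context-scoped transaction shape used above:
// the callback receives a derived context instead of a raw transaction handle.
type CtxTx interface {
	Transaction(ctx context.Context, fn func(ctx context.Context) error) error
}

// noopTx is a hypothetical stand-in used only for this sketch.
type noopTx struct{}

func (noopTx) Transaction(ctx context.Context, fn func(ctx context.Context) error) error {
	// A real implementation would begin a transaction, stash the session in ctx,
	// run fn, and commit or roll back depending on the returned error.
	return fn(ctx)
}

func main() {
	var t CtxTx = noopTx{}
	err := t.Transaction(context.Background(), func(ctx context.Context) error {
		// Storage calls inside the callback read the session from ctx,
		// which is why the NewTx(tx) calls disappear in the diff above.
		fmt.Println("inside transaction")
		return nil
	})
	fmt.Println("done:", err)
}
```

A real implementation opens the transaction, carries the session in the derived context, and commits or rolls back based on the callback's error.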
diff --git a/pkg/common/db/controller/group.go b/pkg/common/db/controller/group.go
index 194f3e8b2..decd868d6 100644
--- a/pkg/common/db/controller/group.go
+++ b/pkg/common/db/controller/group.go
@@ -16,23 +16,18 @@ package controller
import (
"context"
- "fmt"
"time"
+ "github.com/OpenIMSDK/tools/pagination"
"github.com/dtm-labs/rockscache"
- "github.com/redis/go-redis/v9"
- "go.mongodb.org/mongo-driver/mongo"
- "gorm.io/gorm"
"github.com/OpenIMSDK/protocol/constant"
"github.com/OpenIMSDK/tools/tx"
"github.com/OpenIMSDK/tools/utils"
+ "github.com/redis/go-redis/v9"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/cache"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/relation"
relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
- unrelationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/unrelation"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/unrelation"
)
type GroupDatabase interface {
@@ -40,23 +35,26 @@ type GroupDatabase interface {
CreateGroup(ctx context.Context, groups []*relationtb.GroupModel, groupMembers []*relationtb.GroupMemberModel) error
TakeGroup(ctx context.Context, groupID string) (group *relationtb.GroupModel, err error)
FindGroup(ctx context.Context, groupIDs []string) (groups []*relationtb.GroupModel, err error)
- FindNotDismissedGroup(ctx context.Context, groupIDs []string) (groups []*relationtb.GroupModel, err error)
- SearchGroup(ctx context.Context, keyword string, pageNumber, showNumber int32) (uint32, []*relationtb.GroupModel, error)
+ SearchGroup(ctx context.Context, keyword string, pagination pagination.Pagination) (int64, []*relationtb.GroupModel, error)
UpdateGroup(ctx context.Context, groupID string, data map[string]any) error
DismissGroup(ctx context.Context, groupID string, deleteMember bool) error // dismiss the group and delete its members
- GetGroupIDsByGroupType(ctx context.Context, groupType int) (groupIDs []string, err error)
- // GroupMember
+
TakeGroupMember(ctx context.Context, groupID string, userID string) (groupMember *relationtb.GroupMemberModel, err error)
TakeGroupOwner(ctx context.Context, groupID string) (*relationtb.GroupMemberModel, error)
- FindGroupMember(ctx context.Context, groupIDs []string, userIDs []string, roleLevels []int32) ([]*relationtb.GroupMemberModel, error)
+ FindGroupMembers(ctx context.Context, groupID string, userIDs []string) (groupMembers []*relationtb.GroupMemberModel, err error) // *
+ FindGroupMemberUser(ctx context.Context, groupIDs []string, userID string) (groupMembers []*relationtb.GroupMemberModel, err error) // *
+ FindGroupMemberRoleLevels(ctx context.Context, groupID string, roleLevels []int32) (groupMembers []*relationtb.GroupMemberModel, err error) // *
+ FindGroupMemberAll(ctx context.Context, groupID string) (groupMembers []*relationtb.GroupMemberModel, err error) // *
+ FindGroupsOwner(ctx context.Context, groupIDs []string) ([]*relationtb.GroupMemberModel, error)
FindGroupMemberUserID(ctx context.Context, groupID string) ([]string, error)
FindGroupMemberNum(ctx context.Context, groupID string) (uint32, error)
FindUserManagedGroupID(ctx context.Context, userID string) (groupIDs []string, err error)
- PageGroupRequest(ctx context.Context, groupIDs []string, pageNumber, showNumber int32) (uint32, []*relationtb.GroupRequestModel, error)
+ PageGroupRequest(ctx context.Context, groupIDs []string, pagination pagination.Pagination) (int64, []*relationtb.GroupRequestModel, error)
+ GetGroupRoleLevelMemberIDs(ctx context.Context, groupID string, roleLevel int32) ([]string, error)
- PageGetJoinGroup(ctx context.Context, userID string, pageNumber, showNumber int32) (total uint32, totalGroupMembers []*relationtb.GroupMemberModel, err error)
- PageGetGroupMember(ctx context.Context, groupID string, pageNumber, showNumber int32) (total uint32, totalGroupMembers []*relationtb.GroupMemberModel, err error)
- SearchGroupMember(ctx context.Context, keyword string, groupIDs []string, userIDs []string, roleLevels []int32, pageNumber, showNumber int32) (uint32, []*relationtb.GroupMemberModel, error)
+ PageGetJoinGroup(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, totalGroupMembers []*relationtb.GroupMemberModel, err error)
+ PageGetGroupMember(ctx context.Context, groupID string, pagination pagination.Pagination) (total int64, totalGroupMembers []*relationtb.GroupMemberModel, err error)
+ SearchGroupMember(ctx context.Context, keyword string, groupID string, pagination pagination.Pagination) (int64, []*relationtb.GroupMemberModel, error)
HandlerGroupRequest(ctx context.Context, groupID string, userID string, handledMsg string, handleResult int32, member *relationtb.GroupMemberModel) error
DeleteGroupMember(ctx context.Context, groupID string, userIDs []string) error
MapGroupMemberUserID(ctx context.Context, groupIDs []string) (map[string]*relationtb.GroupSimpleUserID, error)
@@ -67,15 +65,8 @@ type GroupDatabase interface {
// GroupRequest
CreateGroupRequest(ctx context.Context, requests []*relationtb.GroupRequestModel) error
TakeGroupRequest(ctx context.Context, groupID string, userID string) (*relationtb.GroupRequestModel, error)
- FindGroupRequests(ctx context.Context, groupID string, userIDs []string) (int64, []*relationtb.GroupRequestModel, error)
- PageGroupRequestUser(ctx context.Context, userID string, pageNumber, showNumber int32) (uint32, []*relationtb.GroupRequestModel, error)
- // SuperGroupModelInterface
- FindSuperGroup(ctx context.Context, groupIDs []string) ([]*unrelationtb.SuperGroupModel, error)
- FindJoinSuperGroup(ctx context.Context, userID string) ([]string, error)
- CreateSuperGroup(ctx context.Context, groupID string, initMemberIDList []string) error
- DeleteSuperGroup(ctx context.Context, groupID string) error
- DeleteSuperGroupMember(ctx context.Context, groupID string, userIDs []string) error
- CreateSuperGroupMember(ctx context.Context, groupID string, userIDs []string) error
+ FindGroupRequests(ctx context.Context, groupID string, userIDs []string) ([]*relationtb.GroupRequestModel, error)
+ PageGroupRequestUser(ctx context.Context, userID string, pagination pagination.Pagination) (int64, []*relationtb.GroupRequestModel, error)
// Get the total number of groups
CountTotal(ctx context.Context, before *time.Time) (count int64, err error)
@@ -85,126 +76,115 @@ type GroupDatabase interface {
}
func NewGroupDatabase(
- group relationtb.GroupModelInterface,
- member relationtb.GroupMemberModelInterface,
- request relationtb.GroupRequestModelInterface,
- tx tx.Tx,
+ rdb redis.UniversalClient,
+ groupDB relationtb.GroupModelInterface,
+ groupMemberDB relationtb.GroupMemberModelInterface,
+ groupRequestDB relationtb.GroupRequestModelInterface,
ctxTx tx.CtxTx,
- superGroup unrelationtb.SuperGroupModelInterface,
- cache cache.GroupCache,
+ groupHash cache.GroupHash,
) GroupDatabase {
- database := &groupDatabase{
- groupDB: group,
- groupMemberDB: member,
- groupRequestDB: request,
- tx: tx,
- ctxTx: ctxTx,
- cache: cache,
- mongoDB: superGroup,
- }
- return database
-}
-
-func InitGroupDatabase(db *gorm.DB, rdb redis.UniversalClient, database *mongo.Database, hashCode func(ctx context.Context, groupID string) (uint64, error)) GroupDatabase {
rcOptions := rockscache.NewDefaultOptions()
rcOptions.StrongConsistency = true
rcOptions.RandomExpireAdjustment = 0.2
- return NewGroupDatabase(
- relation.NewGroupDB(db),
- relation.NewGroupMemberDB(db),
- relation.NewGroupRequest(db),
- tx.NewGorm(db),
- tx.NewMongo(database.Client()),
- unrelation.NewSuperGroupMongoDriver(database),
- cache.NewGroupCacheRedis(
- rdb,
- relation.NewGroupDB(db),
- relation.NewGroupMemberDB(db),
- relation.NewGroupRequest(db),
- unrelation.NewSuperGroupMongoDriver(database),
- hashCode,
- rcOptions,
- ),
- )
+ return &groupDatabase{
+ groupDB: groupDB,
+ groupMemberDB: groupMemberDB,
+ groupRequestDB: groupRequestDB,
+ ctxTx: ctxTx,
+ cache: cache.NewGroupCacheRedis(rdb, groupDB, groupMemberDB, groupRequestDB, groupHash, rcOptions),
+ }
}
type groupDatabase struct {
groupDB relationtb.GroupModelInterface
groupMemberDB relationtb.GroupMemberModelInterface
groupRequestDB relationtb.GroupRequestModelInterface
- tx tx.Tx
ctxTx tx.CtxTx
cache cache.GroupCache
- mongoDB unrelationtb.SuperGroupModelInterface
}
-func (g *groupDatabase) GetGroupIDsByGroupType(ctx context.Context, groupType int) (groupIDs []string, err error) {
- return g.groupDB.GetGroupIDsByGroupType(ctx, groupType)
+func (g *groupDatabase) FindGroupMembers(ctx context.Context, groupID string, userIDs []string) ([]*relationtb.GroupMemberModel, error) {
+ return g.cache.GetGroupMembersInfo(ctx, groupID, userIDs)
}
-func (g *groupDatabase) FindGroupMemberUserID(ctx context.Context, groupID string) ([]string, error) {
- return g.cache.GetGroupMemberIDs(ctx, groupID)
+func (g *groupDatabase) FindGroupMemberUser(ctx context.Context, groupIDs []string, userID string) ([]*relationtb.GroupMemberModel, error) {
+ return g.cache.FindGroupMemberUser(ctx, groupIDs, userID)
}
-func (g *groupDatabase) FindGroupMemberNum(ctx context.Context, groupID string) (uint32, error) {
- num, err := g.cache.GetGroupMemberNum(ctx, groupID)
- if err != nil {
- return 0, err
- }
- return uint32(num), nil
+func (g *groupDatabase) FindGroupMemberRoleLevels(ctx context.Context, groupID string, roleLevels []int32) ([]*relationtb.GroupMemberModel, error) {
+ return g.cache.GetGroupRolesLevelMemberInfo(ctx, groupID, roleLevels)
}
-func (g *groupDatabase) CreateGroup(
- ctx context.Context,
- groups []*relationtb.GroupModel,
- groupMembers []*relationtb.GroupMemberModel,
-) error {
- cache := g.cache.NewCache()
- if err := g.tx.Transaction(func(tx any) error {
+func (g *groupDatabase) FindGroupMemberAll(ctx context.Context, groupID string) ([]*relationtb.GroupMemberModel, error) {
+ return g.cache.GetAllGroupMembersInfo(ctx, groupID)
+}
+
+func (g *groupDatabase) FindGroupsOwner(ctx context.Context, groupIDs []string) ([]*relationtb.GroupMemberModel, error) {
+ return g.cache.GetGroupsOwner(ctx, groupIDs)
+}
+
+func (g *groupDatabase) GetGroupRoleLevelMemberIDs(ctx context.Context, groupID string, roleLevel int32) ([]string, error) {
+ return g.cache.GetGroupRoleLevelMemberIDs(ctx, groupID, roleLevel)
+}
+
+func (g *groupDatabase) CreateGroup(ctx context.Context, groups []*relationtb.GroupModel, groupMembers []*relationtb.GroupMemberModel) error {
+ if len(groups)+len(groupMembers) == 0 {
+ return nil
+ }
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ c := g.cache.NewCache()
if len(groups) > 0 {
- if err := g.groupDB.NewTx(tx).Create(ctx, groups); err != nil {
+ if err := g.groupDB.Create(ctx, groups); err != nil {
return err
}
+ for _, group := range groups {
+ c = c.DelGroupsInfo(group.GroupID).
+ DelGroupMembersHash(group.GroupID).
+ DelGroupsMemberNum(group.GroupID).
+ DelGroupMemberIDs(group.GroupID).
+ DelGroupAllRoleLevel(group.GroupID)
+ }
}
if len(groupMembers) > 0 {
- if err := g.groupMemberDB.NewTx(tx).Create(ctx, groupMembers); err != nil {
+ if err := g.groupMemberDB.Create(ctx, groupMembers); err != nil {
return err
}
- }
- createGroupIDs := utils.DistinctAnyGetComparable(groups, func(group *relationtb.GroupModel) string {
- return group.GroupID
- })
- m := make(map[string]struct{})
-
- for _, groupMember := range groupMembers {
- if _, ok := m[groupMember.GroupID]; !ok {
- m[groupMember.GroupID] = struct{}{}
- cache = cache.DelGroupMemberIDs(groupMember.GroupID).DelGroupMembersHash(groupMember.GroupID).DelGroupsMemberNum(groupMember.GroupID)
+ for _, groupMember := range groupMembers {
+ c = c.DelGroupMembersHash(groupMember.GroupID).
+ DelGroupsMemberNum(groupMember.GroupID).
+ DelGroupMemberIDs(groupMember.GroupID).
+ DelJoinedGroupID(groupMember.UserID).
+ DelGroupMembersInfo(groupMember.GroupID, groupMember.UserID).
+ DelGroupAllRoleLevel(groupMember.GroupID)
}
- cache = cache.DelJoinedGroupID(groupMember.UserID).DelGroupMembersInfo(groupMember.GroupID, groupMember.UserID)
}
- cache = cache.DelGroupsInfo(createGroupIDs...)
- return nil
- }); err != nil {
- return err
+ return c.ExecDel(ctx, true)
+ })
+}
+
+func (g *groupDatabase) FindGroupMemberUserID(ctx context.Context, groupID string) ([]string, error) {
+ return g.cache.GetGroupMemberIDs(ctx, groupID)
+}
+
+func (g *groupDatabase) FindGroupMemberNum(ctx context.Context, groupID string) (uint32, error) {
+ num, err := g.cache.GetGroupMemberNum(ctx, groupID)
+ if err != nil {
+ return 0, err
}
- return cache.ExecDel(ctx)
+ return uint32(num), nil
}
-func (g *groupDatabase) TakeGroup(ctx context.Context, groupID string) (group *relationtb.GroupModel, err error) {
+func (g *groupDatabase) TakeGroup(ctx context.Context, groupID string) (*relationtb.GroupModel, error) {
return g.cache.GetGroupInfo(ctx, groupID)
}
-func (g *groupDatabase) FindGroup(ctx context.Context, groupIDs []string) (groups []*relationtb.GroupModel, err error) {
+func (g *groupDatabase) FindGroup(ctx context.Context, groupIDs []string) ([]*relationtb.GroupModel, error) {
return g.cache.GetGroupsInfo(ctx, groupIDs)
}
-func (g *groupDatabase) SearchGroup(
- ctx context.Context,
- keyword string,
- pageNumber, showNumber int32,
-) (uint32, []*relationtb.GroupModel, error) {
- return g.groupDB.Search(ctx, keyword, pageNumber, showNumber)
+func (g *groupDatabase) SearchGroup(ctx context.Context, keyword string, pagination pagination.Pagination) (int64, []*relationtb.GroupModel, error) {
+ return g.groupDB.Search(ctx, keyword, pagination)
}
func (g *groupDatabase) UpdateGroup(ctx context.Context, groupID string, data map[string]any) error {
@@ -215,166 +195,97 @@ func (g *groupDatabase) UpdateGroup(ctx context.Context, groupID string, data ma
}
func (g *groupDatabase) DismissGroup(ctx context.Context, groupID string, deleteMember bool) error {
- cache := g.cache.NewCache()
- if err := g.tx.Transaction(func(tx any) error {
- if err := g.groupDB.NewTx(tx).UpdateStatus(ctx, groupID, constant.GroupStatusDismissed); err != nil {
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ c := g.cache.NewCache()
+ if err := g.groupDB.UpdateStatus(ctx, groupID, constant.GroupStatusDismissed); err != nil {
return err
}
if deleteMember {
- if err := g.groupMemberDB.NewTx(tx).DeleteGroup(ctx, []string{groupID}); err != nil {
- return err
- }
userIDs, err := g.cache.GetGroupMemberIDs(ctx, groupID)
if err != nil {
return err
}
- cache = cache.DelJoinedGroupID(userIDs...).DelGroupMemberIDs(groupID).DelGroupsMemberNum(groupID).DelGroupMembersHash(groupID)
+ if err := g.groupMemberDB.Delete(ctx, groupID, nil); err != nil {
+ return err
+ }
+ c = c.DelJoinedGroupID(userIDs...).
+ DelGroupMemberIDs(groupID).
+ DelGroupsMemberNum(groupID).
+ DelGroupMembersHash(groupID).
+ DelGroupAllRoleLevel(groupID).
+ DelGroupMembersInfo(groupID, userIDs...)
}
- cache = cache.DelGroupsInfo(groupID)
- return nil
- }); err != nil {
- return err
- }
- return cache.ExecDel(ctx)
+ return c.DelGroupsInfo(groupID).ExecDel(ctx)
+ })
}
-func (g *groupDatabase) TakeGroupMember(
- ctx context.Context,
- groupID string,
- userID string,
-) (groupMember *relationtb.GroupMemberModel, err error) {
+func (g *groupDatabase) TakeGroupMember(ctx context.Context, groupID string, userID string) (*relationtb.GroupMemberModel, error) {
return g.cache.GetGroupMemberInfo(ctx, groupID, userID)
}
func (g *groupDatabase) TakeGroupOwner(ctx context.Context, groupID string) (*relationtb.GroupMemberModel, error) {
- return g.groupMemberDB.TakeOwner(ctx, groupID) // todo cache group owner
+ return g.cache.GetGroupOwner(ctx, groupID)
}
func (g *groupDatabase) FindUserManagedGroupID(ctx context.Context, userID string) (groupIDs []string, err error) {
return g.groupMemberDB.FindUserManagedGroupID(ctx, userID)
}
-func (g *groupDatabase) PageGroupRequest(
- ctx context.Context,
- groupIDs []string,
- pageNumber, showNumber int32,
-) (uint32, []*relationtb.GroupRequestModel, error) {
- return g.groupRequestDB.PageGroup(ctx, groupIDs, pageNumber, showNumber)
-}
-
-func (g *groupDatabase) FindGroupMember(ctx context.Context, groupIDs []string, userIDs []string, roleLevels []int32) (totalGroupMembers []*relationtb.GroupMemberModel, err error) {
- if len(groupIDs) == 0 && len(roleLevels) == 0 && len(userIDs) == 1 {
- gIDs, err := g.cache.GetJoinedGroupIDs(ctx, userIDs[0])
- if err != nil {
- return nil, err
- }
- var res []*relationtb.GroupMemberModel
- for _, groupID := range gIDs {
- v, err := g.cache.GetGroupMemberInfo(ctx, groupID, userIDs[0])
- if err != nil {
- return nil, err
- }
- res = append(res, v)
- }
- return res, nil
- }
- if len(roleLevels) == 0 {
- for _, groupID := range groupIDs {
- groupMembers, err := g.cache.GetGroupMembersInfo(ctx, groupID, userIDs)
- if err != nil {
- return nil, err
- }
- totalGroupMembers = append(totalGroupMembers, groupMembers...)
- }
- return totalGroupMembers, nil
- }
- return g.groupMemberDB.Find(ctx, groupIDs, userIDs, roleLevels)
+func (g *groupDatabase) PageGroupRequest(ctx context.Context, groupIDs []string, pagination pagination.Pagination) (int64, []*relationtb.GroupRequestModel, error) {
+ return g.groupRequestDB.PageGroup(ctx, groupIDs, pagination)
}
-func (g *groupDatabase) PageGetJoinGroup(
- ctx context.Context,
- userID string,
- pageNumber, showNumber int32,
-) (total uint32, totalGroupMembers []*relationtb.GroupMemberModel, err error) {
+func (g *groupDatabase) PageGetJoinGroup(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, totalGroupMembers []*relationtb.GroupMemberModel, err error) {
groupIDs, err := g.cache.GetJoinedGroupIDs(ctx, userID)
if err != nil {
return 0, nil, err
}
- for _, groupID := range utils.Paginate(groupIDs, int(pageNumber), int(showNumber)) {
+ for _, groupID := range utils.Paginate(groupIDs, int(pagination.GetPageNumber()), int(pagination.GetShowNumber())) {
groupMembers, err := g.cache.GetGroupMembersInfo(ctx, groupID, []string{userID})
if err != nil {
return 0, nil, err
}
totalGroupMembers = append(totalGroupMembers, groupMembers...)
}
- return uint32(len(groupIDs)), totalGroupMembers, nil
+ return int64(len(groupIDs)), totalGroupMembers, nil
}
-func (g *groupDatabase) PageGetGroupMember(
- ctx context.Context,
- groupID string,
- pageNumber, showNumber int32,
-) (total uint32, totalGroupMembers []*relationtb.GroupMemberModel, err error) {
+func (g *groupDatabase) PageGetGroupMember(ctx context.Context, groupID string, pagination pagination.Pagination) (total int64, totalGroupMembers []*relationtb.GroupMemberModel, err error) {
groupMemberIDs, err := g.cache.GetGroupMemberIDs(ctx, groupID)
if err != nil {
return 0, nil, err
}
- pageIDs := utils.Paginate(groupMemberIDs, int(pageNumber), int(showNumber))
+ pageIDs := utils.Paginate(groupMemberIDs, int(pagination.GetPageNumber()), int(pagination.GetShowNumber()))
if len(pageIDs) == 0 {
- return uint32(len(groupMemberIDs)), nil, nil
+ return int64(len(groupMemberIDs)), nil, nil
}
members, err := g.cache.GetGroupMembersInfo(ctx, groupID, pageIDs)
if err != nil {
return 0, nil, err
}
- return uint32(len(groupMemberIDs)), members, nil
+ return int64(len(groupMemberIDs)), members, nil
}
-func (g *groupDatabase) SearchGroupMember(
- ctx context.Context,
- keyword string,
- groupIDs []string,
- userIDs []string,
- roleLevels []int32,
- pageNumber, showNumber int32,
-) (uint32, []*relationtb.GroupMemberModel, error) {
- return g.groupMemberDB.SearchMember(ctx, keyword, groupIDs, userIDs, roleLevels, pageNumber, showNumber)
+func (g *groupDatabase) SearchGroupMember(ctx context.Context, keyword string, groupID string, pagination pagination.Pagination) (int64, []*relationtb.GroupMemberModel, error) {
+ return g.groupMemberDB.SearchMember(ctx, keyword, groupID, pagination)
}
-func (g *groupDatabase) HandlerGroupRequest(
- ctx context.Context,
- groupID string,
- userID string,
- handledMsg string,
- handleResult int32,
- member *relationtb.GroupMemberModel,
-) error {
- //cache := g.cache.NewCache()
- //if err := g.tx.Transaction(func(tx any) error {
- // if err := g.groupRequestDB.NewTx(tx).UpdateHandler(ctx, groupID, userID, handledMsg, handleResult); err != nil {
- // return err
- // }
- // if member != nil {
- // if err := g.groupMemberDB.NewTx(tx).Create(ctx, []*relationtb.GroupMemberModel{member}); err != nil {
- // return err
- // }
- // cache = cache.DelGroupMembersHash(groupID).DelGroupMemberIDs(groupID).DelGroupsMemberNum(groupID).DelJoinedGroupID(member.UserID)
- // }
- // return nil
- //}); err != nil {
- // return err
- //}
- //return cache.ExecDel(ctx)
-
- return g.tx.Transaction(func(tx any) error {
- if err := g.groupRequestDB.NewTx(tx).UpdateHandler(ctx, groupID, userID, handledMsg, handleResult); err != nil {
+func (g *groupDatabase) HandlerGroupRequest(ctx context.Context, groupID string, userID string, handledMsg string, handleResult int32, member *relationtb.GroupMemberModel) error {
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ if err := g.groupRequestDB.UpdateHandler(ctx, groupID, userID, handledMsg, handleResult); err != nil {
return err
}
if member != nil {
- if err := g.groupMemberDB.NewTx(tx).Create(ctx, []*relationtb.GroupMemberModel{member}); err != nil {
+ if err := g.groupMemberDB.Create(ctx, []*relationtb.GroupMemberModel{member}); err != nil {
return err
}
- if err := g.cache.NewCache().DelGroupMembersHash(groupID).DelGroupMembersInfo(groupID, member.UserID).DelGroupMemberIDs(groupID).DelGroupsMemberNum(groupID).DelJoinedGroupID(member.UserID).ExecDel(ctx); err != nil {
+ c := g.cache.DelGroupMembersHash(groupID).
+ DelGroupMembersInfo(groupID, member.UserID).
+ DelGroupMemberIDs(groupID).
+ DelGroupsMemberNum(groupID).
+ DelJoinedGroupID(member.UserID).
+ DelGroupRoleLevel(groupID, []int32{member.RoleLevel})
+ if err := c.ExecDel(ctx); err != nil {
return err
}
}
@@ -391,13 +302,11 @@ func (g *groupDatabase) DeleteGroupMember(ctx context.Context, groupID string, u
DelGroupsMemberNum(groupID).
DelJoinedGroupID(userIDs...).
DelGroupMembersInfo(groupID, userIDs...).
+ DelGroupAllRoleLevel(groupID).
ExecDel(ctx)
}
-func (g *groupDatabase) MapGroupMemberUserID(
- ctx context.Context,
- groupIDs []string,
-) (map[string]*relationtb.GroupSimpleUserID, error) {
+func (g *groupDatabase) MapGroupMemberUserID(ctx context.Context, groupIDs []string) (map[string]*relationtb.GroupSimpleUserID, error) {
return g.cache.GetGroupMemberHashMap(ctx, groupIDs)
}
@@ -414,62 +323,54 @@ func (g *groupDatabase) MapGroupMemberNum(ctx context.Context, groupIDs []string
}
func (g *groupDatabase) TransferGroupOwner(ctx context.Context, groupID string, oldOwnerUserID, newOwnerUserID string, roleLevel int32) error {
- return g.tx.Transaction(func(tx any) error {
- rowsAffected, err := g.groupMemberDB.NewTx(tx).UpdateRoleLevel(ctx, groupID, oldOwnerUserID, roleLevel)
- if err != nil {
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ if err := g.groupMemberDB.UpdateRoleLevel(ctx, groupID, oldOwnerUserID, roleLevel); err != nil {
return err
}
- if rowsAffected != 1 {
- return utils.Wrap(fmt.Errorf("oldOwnerUserID %s rowsAffected = %d", oldOwnerUserID, rowsAffected), "")
- }
- rowsAffected, err = g.groupMemberDB.NewTx(tx).UpdateRoleLevel(ctx, groupID, newOwnerUserID, constant.GroupOwner)
- if err != nil {
+ if err := g.groupMemberDB.UpdateRoleLevel(ctx, groupID, newOwnerUserID, constant.GroupOwner); err != nil {
return err
}
- if rowsAffected != 1 {
- return utils.Wrap(fmt.Errorf("newOwnerUserID %s rowsAffected = %d", newOwnerUserID, rowsAffected), "")
- }
- return g.cache.DelGroupMembersInfo(groupID, oldOwnerUserID, newOwnerUserID).DelGroupMembersHash(groupID).ExecDel(ctx)
+ return g.cache.DelGroupMembersInfo(groupID, oldOwnerUserID, newOwnerUserID).
+ DelGroupAllRoleLevel(groupID).
+ DelGroupMembersHash(groupID).ExecDel(ctx)
})
}
-func (g *groupDatabase) UpdateGroupMember(
- ctx context.Context,
- groupID string,
- userID string,
- data map[string]any,
-) error {
+func (g *groupDatabase) UpdateGroupMember(ctx context.Context, groupID string, userID string, data map[string]any) error {
if err := g.groupMemberDB.Update(ctx, groupID, userID, data); err != nil {
return err
}
- return g.cache.DelGroupMembersInfo(groupID, userID).ExecDel(ctx)
+ c := g.cache.DelGroupMembersInfo(groupID, userID)
+ if g.groupMemberDB.IsUpdateRoleLevel(data) {
+ c = c.DelGroupAllRoleLevel(groupID)
+ }
+ return c.ExecDel(ctx)
}
func (g *groupDatabase) UpdateGroupMembers(ctx context.Context, data []*relationtb.BatchUpdateGroupMember) error {
- cache := g.cache.NewCache()
- if err := g.tx.Transaction(func(tx any) error {
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
+ c := g.cache.NewCache()
for _, item := range data {
- if err := g.groupMemberDB.NewTx(tx).Update(ctx, item.GroupID, item.UserID, item.Map); err != nil {
+ if err := g.groupMemberDB.Update(ctx, item.GroupID, item.UserID, item.Map); err != nil {
return err
}
- cache = cache.DelGroupMembersInfo(item.GroupID, item.UserID)
+ if g.groupMemberDB.IsUpdateRoleLevel(item.Map) {
+ c = c.DelGroupAllRoleLevel(item.GroupID)
+ }
+ c = c.DelGroupMembersInfo(item.GroupID, item.UserID).DelGroupMembersHash(item.GroupID)
}
- return nil
- }); err != nil {
- return err
- }
- return cache.ExecDel(ctx)
+ return c.ExecDel(ctx, true)
+ })
}
func (g *groupDatabase) CreateGroupRequest(ctx context.Context, requests []*relationtb.GroupRequestModel) error {
- return g.tx.Transaction(func(tx any) error {
- db := g.groupRequestDB.NewTx(tx)
+ return g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
for _, request := range requests {
- if err := db.Delete(ctx, request.GroupID, request.UserID); err != nil {
+ if err := g.groupRequestDB.Delete(ctx, request.GroupID, request.UserID); err != nil {
return err
}
}
- return db.Create(ctx, requests)
+ return g.groupRequestDB.Create(ctx, requests)
})
}
@@ -481,65 +382,8 @@ func (g *groupDatabase) TakeGroupRequest(
return g.groupRequestDB.Take(ctx, groupID, userID)
}
-func (g *groupDatabase) PageGroupRequestUser(
- ctx context.Context,
- userID string,
- pageNumber, showNumber int32,
-) (uint32, []*relationtb.GroupRequestModel, error) {
- return g.groupRequestDB.Page(ctx, userID, pageNumber, showNumber)
-}
-
-func (g *groupDatabase) FindSuperGroup(
- ctx context.Context,
- groupIDs []string,
-) (models []*unrelationtb.SuperGroupModel, err error) {
- return g.cache.GetSuperGroupMemberIDs(ctx, groupIDs...)
-}
-
-func (g *groupDatabase) FindJoinSuperGroup(ctx context.Context, userID string) ([]string, error) {
- return g.cache.GetJoinedSuperGroupIDs(ctx, userID)
-}
-
-func (g *groupDatabase) CreateSuperGroup(ctx context.Context, groupID string, initMemberIDs []string) error {
- if err := g.mongoDB.CreateSuperGroup(ctx, groupID, initMemberIDs); err != nil {
- return err
- }
- return g.cache.DelSuperGroupMemberIDs(groupID).DelJoinedSuperGroupIDs(initMemberIDs...).ExecDel(ctx)
-}
-
-func (g *groupDatabase) DeleteSuperGroup(ctx context.Context, groupID string) error {
- cache := g.cache.NewCache()
- if err := g.ctxTx.Transaction(ctx, func(ctx context.Context) error {
- if err := g.mongoDB.DeleteSuperGroup(ctx, groupID); err != nil {
- return err
- }
- models, err := g.cache.GetSuperGroupMemberIDs(ctx, groupID)
- if err != nil {
- return err
- }
- cache = cache.DelSuperGroupMemberIDs(groupID)
- if len(models) > 0 {
- cache = cache.DelJoinedSuperGroupIDs(models[0].MemberIDs...)
- }
- return nil
- }); err != nil {
- return err
- }
- return cache.ExecDel(ctx)
-}
-
-func (g *groupDatabase) DeleteSuperGroupMember(ctx context.Context, groupID string, userIDs []string) error {
- if err := g.mongoDB.RemoverUserFromSuperGroup(ctx, groupID, userIDs); err != nil {
- return err
- }
- return g.cache.DelSuperGroupMemberIDs(groupID).DelJoinedSuperGroupIDs(userIDs...).ExecDel(ctx)
-}
-
-func (g *groupDatabase) CreateSuperGroupMember(ctx context.Context, groupID string, userIDs []string) error {
- if err := g.mongoDB.AddUserToSuperGroup(ctx, groupID, userIDs); err != nil {
- return err
- }
- return g.cache.DelSuperGroupMemberIDs(groupID).DelJoinedSuperGroupIDs(userIDs...).ExecDel(ctx)
+func (g *groupDatabase) PageGroupRequestUser(ctx context.Context, userID string, pagination pagination.Pagination) (int64, []*relationtb.GroupRequestModel, error) {
+ return g.groupRequestDB.Page(ctx, userID, pagination)
}
func (g *groupDatabase) CountTotal(ctx context.Context, before *time.Time) (count int64, err error) {
@@ -550,14 +394,10 @@ func (g *groupDatabase) CountRangeEverydayTotal(ctx context.Context, start time.
return g.groupDB.CountRangeEverydayTotal(ctx, start, end)
}
-func (g *groupDatabase) FindGroupRequests(ctx context.Context, groupID string, userIDs []string) (int64, []*relationtb.GroupRequestModel, error) {
+func (g *groupDatabase) FindGroupRequests(ctx context.Context, groupID string, userIDs []string) ([]*relationtb.GroupRequestModel, error) {
return g.groupRequestDB.FindGroupRequests(ctx, groupID, userIDs)
}
-func (g *groupDatabase) FindNotDismissedGroup(ctx context.Context, groupIDs []string) (groups []*relationtb.GroupModel, err error) {
- return g.groupDB.FindNotDismissedGroup(ctx, groupIDs)
-}
-
func (g *groupDatabase) DeleteGroupMemberHash(ctx context.Context, groupIDs []string) error {
if len(groupIDs) == 0 {
return nil
@@ -566,6 +406,5 @@ func (g *groupDatabase) DeleteGroupMemberHash(ctx context.Context, groupIDs []st
for _, groupID := range groupIDs {
c = c.DelGroupMembersHash(groupID)
}
-
return c.ExecDel(ctx)
}
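The group.go query methods above now accept a `pagination.Pagination` instead of separate `pageNumber, showNumber int32` arguments and report totals as `int64`. The sketch below shows how a caller might satisfy that interface; the method set is inferred from the `GetPageNumber`/`GetShowNumber` calls above, and `paginate` merely stands in for `utils.Paginate`.

```go
package main

import "fmt"

// Pagination is inferred from the calls above; the real interface lives in
// github.com/OpenIMSDK/tools/pagination.
type Pagination interface {
	GetPageNumber() int32
	GetShowNumber() int32
}

// pageReq is a hypothetical request-backed implementation.
type pageReq struct {
	PageNumber int32
	ShowNumber int32
}

func (p pageReq) GetPageNumber() int32 { return p.PageNumber }
func (p pageReq) GetShowNumber() int32 { return p.ShowNumber }

// paginate slices out one 1-indexed page of IDs, standing in for utils.Paginate.
func paginate(ids []string, pageNumber, showNumber int) []string {
	if pageNumber < 1 || showNumber < 1 {
		return nil
	}
	start := (pageNumber - 1) * showNumber
	if start >= len(ids) {
		return nil
	}
	end := start + showNumber
	if end > len(ids) {
		end = len(ids)
	}
	return ids[start:end]
}

func main() {
	ids := []string{"u1", "u2", "u3", "u4", "u5"}
	var p Pagination = pageReq{PageNumber: 2, ShowNumber: 2}
	fmt.Println(paginate(ids, int(p.GetPageNumber()), int(p.GetShowNumber()))) // [u3 u4]
}
```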
diff --git a/pkg/common/db/controller/msg.go b/pkg/common/db/controller/msg.go
index cba0a6bbd..b841a7d31 100644
--- a/pkg/common/db/controller/msg.go
+++ b/pkg/common/db/controller/msg.go
@@ -98,6 +98,7 @@ type CommonMsgDatabase interface {
SetSendMsgStatus(ctx context.Context, id string, status int32) error
GetSendMsgStatus(ctx context.Context, id string) (int32, error)
SearchMessage(ctx context.Context, req *pbmsg.SearchMessageReq) (total int32, msgData []*sdkws.MsgData, err error)
+ FindOneByDocIDs(ctx context.Context, docIDs []string, seqs map[string]int64) (map[string]*sdkws.MsgData, error)
// to mq
MsgToMQ(ctx context.Context, key string, msg2mq *sdkws.MsgData) error
@@ -125,21 +126,32 @@ type CommonMsgDatabase interface {
ConvertMsgsDocLen(ctx context.Context, conversationIDs []string)
}
-func NewCommonMsgDatabase(msgDocModel unrelationtb.MsgDocModelInterface, cacheModel cache.MsgModel) CommonMsgDatabase {
+func NewCommonMsgDatabase(msgDocModel unrelationtb.MsgDocModelInterface, cacheModel cache.MsgModel) (CommonMsgDatabase, error) {
+ producerToRedis, err := kafka.NewKafkaProducer(config.Config.Kafka.Addr, config.Config.Kafka.LatestMsgToRedis.Topic)
+ if err != nil {
+ return nil, err
+ }
+ producerToMongo, err := kafka.NewKafkaProducer(config.Config.Kafka.Addr, config.Config.Kafka.MsgToMongo.Topic)
+ if err != nil {
+ return nil, err
+ }
+ producerToPush, err := kafka.NewKafkaProducer(config.Config.Kafka.Addr, config.Config.Kafka.MsgToPush.Topic)
+ if err != nil {
+ return nil, err
+ }
return &commonMsgDatabase{
msgDocDatabase: msgDocModel,
cache: cacheModel,
- producer: kafka.NewKafkaProducer(config.Config.Kafka.Addr, config.Config.Kafka.LatestMsgToRedis.Topic),
- producerToMongo: kafka.NewKafkaProducer(config.Config.Kafka.Addr, config.Config.Kafka.MsgToMongo.Topic),
- producerToPush: kafka.NewKafkaProducer(config.Config.Kafka.Addr, config.Config.Kafka.MsgToPush.Topic),
- }
+ producer: producerToRedis,
+ producerToMongo: producerToMongo,
+ producerToPush: producerToPush,
+ }, nil
}
-func InitCommonMsgDatabase(rdb redis.UniversalClient, database *mongo.Database) CommonMsgDatabase {
+func InitCommonMsgDatabase(rdb redis.UniversalClient, database *mongo.Database) (CommonMsgDatabase, error) {
cacheModel := cache.NewMsgCacheModel(rdb)
msgDocModel := unrelation.NewMsgMongoDriver(database)
- CommonMsgDatabase := NewCommonMsgDatabase(msgDocModel, cacheModel)
- return CommonMsgDatabase
+ return NewCommonMsgDatabase(msgDocModel, cacheModel)
}
type commonMsgDatabase struct {
@@ -357,9 +369,7 @@ func (db *commonMsgDatabase) DelUserDeleteMsgsList(ctx context.Context, conversa
}
func (db *commonMsgDatabase) BatchInsertChat2Cache(ctx context.Context, conversationID string, msgs []*sdkws.MsgData) (seq int64, isNew bool, err error) {
- cancelCtx, cancel := context.WithTimeout(ctx, 1*time.Minute)
- defer cancel()
- currentMaxSeq, err := db.cache.GetMaxSeq(cancelCtx, conversationID)
+ currentMaxSeq, err := db.cache.GetMaxSeq(ctx, conversationID)
if err != nil && errs.Unwrap(err) != redis.Nil {
log.ZError(ctx, "db.cache.GetMaxSeq", err)
return 0, false, err
@@ -386,21 +396,19 @@ func (db *commonMsgDatabase) BatchInsertChat2Cache(ctx context.Context, conversa
prommetrics.MsgInsertRedisFailedCounter.Add(float64(failedNum))
log.ZError(ctx, "setMessageToCache error", err, "len", len(msgs), "conversationID", conversationID)
} else {
- prommetrics.MsgInsertRedisSuccessCounter.Add(float64(len(msgs)))
+ prommetrics.MsgInsertRedisSuccessCounter.Inc()
}
- cancelCtx, cancel = context.WithTimeout(ctx, 1*time.Minute)
- defer cancel()
- err = db.cache.SetMaxSeq(cancelCtx, conversationID, currentMaxSeq)
+ err = db.cache.SetMaxSeq(ctx, conversationID, currentMaxSeq)
if err != nil {
log.ZError(ctx, "db.cache.SetMaxSeq error", err, "conversationID", conversationID)
prommetrics.SeqSetFailedCounter.Inc()
}
err2 := db.cache.SetHasReadSeqs(ctx, conversationID, userSeqMap)
if err2 != nil {
log.ZError(ctx, "SetHasReadSeqs error", err2, "userSeqMap", userSeqMap, "conversationID", conversationID)
prommetrics.SeqSetFailedCounter.Inc()
}
- return lastMaxSeq, isNew, errs.Wrap(err, "redis SetMaxSeq error")
+ return lastMaxSeq, isNew, utils.Wrap(err, "")
}
func (db *commonMsgDatabase) getMsgBySeqs(ctx context.Context, userID, conversationID string, seqs []int64) (totalMsgs []*sdkws.MsgData, err error) {
@@ -658,26 +666,16 @@ func (db *commonMsgDatabase) GetMsgBySeqsRange(ctx context.Context, userID strin
func (db *commonMsgDatabase) GetMsgBySeqs(ctx context.Context, userID string, conversationID string, seqs []int64) (int64, int64, []*sdkws.MsgData, error) {
userMinSeq, err := db.cache.GetConversationUserMinSeq(ctx, conversationID, userID)
- if err != nil {
- log.ZError(ctx, "cache.GetConversationUserMinSeq error", err)
- if errs.Unwrap(err) != redis.Nil {
- return 0, 0, nil, err
- }
+ if err != nil && errs.Unwrap(err) != redis.Nil {
+ return 0, 0, nil, err
}
minSeq, err := db.cache.GetMinSeq(ctx, conversationID)
- if err != nil {
- log.ZError(ctx, "cache.GetMinSeq error", err)
- if errs.Unwrap(err) != redis.Nil {
- return 0, 0, nil, err
- }
+ if err != nil && errs.Unwrap(err) != redis.Nil {
+ return 0, 0, nil, err
}
maxSeq, err := db.cache.GetMaxSeq(ctx, conversationID)
- if err != nil {
- log.ZError(ctx, "cache.GetMaxSeq error", err)
- if errs.Unwrap(err) != redis.Nil {
- return 0, 0, nil, err
- }
-
+ if err != nil && errs.Unwrap(err) != redis.Nil {
+ return 0, 0, nil, err
}
if userMinSeq < minSeq {
minSeq = userMinSeq
@@ -690,16 +688,34 @@ func (db *commonMsgDatabase) GetMsgBySeqs(ctx context.Context, userID string, co
}
successMsgs, failedSeqs, err := db.cache.GetMessagesBySeq(ctx, conversationID, newSeqs)
if err != nil {
- log.ZError(ctx, "get message from redis exception", err, "failedSeqs", failedSeqs, "conversationID", conversationID)
+ if err != redis.Nil {
+ log.ZError(ctx, "get message from redis exception", err, "failedSeqs", failedSeqs, "conversationID", conversationID)
+ }
}
- log.ZInfo(ctx, "db.cache.GetMessagesBySeq", "userID", userID, "conversationID", conversationID, "seqs", seqs, "successMsgs",
- len(successMsgs), "failedSeqs", failedSeqs, "conversationID", conversationID)
+ log.ZInfo(
+ ctx,
+ "db.cache.GetMessagesBySeq",
+ "userID",
+ userID,
+ "conversationID",
+ conversationID,
+ "seqs",
+ seqs,
+ "successMsgs",
+ len(successMsgs),
+ "failedSeqs",
+ failedSeqs,
+ "conversationID",
+ conversationID,
+ )
if len(failedSeqs) > 0 {
mongoMsgs, err := db.getMsgBySeqs(ctx, userID, conversationID, failedSeqs)
if err != nil {
+
return 0, 0, nil, err
}
+
successMsgs = append(successMsgs, mongoMsgs...)
}
return minSeq, maxSeq, successMsgs, nil
@@ -1047,6 +1063,21 @@ func (db *commonMsgDatabase) SearchMessage(ctx context.Context, req *pbmsg.Searc
return total, totalMsgs, nil
}
+func (db *commonMsgDatabase) FindOneByDocIDs(ctx context.Context, conversationIDs []string, seqs map[string]int64) (map[string]*sdkws.MsgData, error) {
+ totalMsgs := make(map[string]*sdkws.MsgData)
+ for _, conversationID := range conversationIDs {
+ seq := seqs[conversationID]
+ docID := db.msg.GetDocID(conversationID, seq)
+ msgs, err := db.msgDocDatabase.FindOneByDocID(ctx, docID)
+ if err != nil {
+ return nil, err
+ }
+ index := db.msg.GetMsgIndex(seq)
+ totalMsgs[conversationID] = convert.MsgDB2Pb(msgs.Msg[index].Msg)
+ }
+ return totalMsgs, nil
+}
+
func (db *commonMsgDatabase) ConvertMsgsDocLen(ctx context.Context, conversationIDs []string) {
db.msgDocDatabase.ConvertMsgsDocLen(ctx, conversationIDs)
}
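The msg.go constructors above now return an error instead of assuming the Kafka producers can always be built, so wiring code must handle a failure from `InitCommonMsgDatabase` at startup. A small illustration of that constructor pattern follows; the `producer` type and topic names are placeholders, not the real `kafka.NewKafkaProducer` API.

```go
package main

import (
	"errors"
	"fmt"
)

// producer stands in for a Kafka producer in this sketch.
type producer struct{ topic string }

// newProducer mirrors the new contract: report configuration problems
// through an error instead of panicking inside the constructor.
func newProducer(brokers []string, topic string) (*producer, error) {
	if len(brokers) == 0 {
		return nil, errors.New("no kafka brokers configured")
	}
	return &producer{topic: topic}, nil
}

// newMsgDatabase builds each producer up front and surfaces the first failure.
func newMsgDatabase(brokers []string) (toRedis, toMongo, toPush *producer, err error) {
	if toRedis, err = newProducer(brokers, "latestMsgToRedis"); err != nil {
		return nil, nil, nil, err
	}
	if toMongo, err = newProducer(brokers, "msgToMongo"); err != nil {
		return nil, nil, nil, err
	}
	if toPush, err = newProducer(brokers, "msgToPush"); err != nil {
		return nil, nil, nil, err
	}
	return toRedis, toMongo, toPush, nil
}

func main() {
	if _, _, _, err := newMsgDatabase(nil); err != nil {
		fmt.Println("startup fails fast:", err) // instead of a runtime panic later
	}
}
```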
diff --git a/pkg/common/db/controller/msg_test.go b/pkg/common/db/controller/msg_test.go
index ba5aecd25..70c055bf3 100644
--- a/pkg/common/db/controller/msg_test.go
+++ b/pkg/common/db/controller/msg_test.go
@@ -146,7 +146,7 @@ func Test_BatchInsertChat2DB(t *testing.T) {
func GetDB() *commonMsgDatabase {
config.Config.Mongo.Address = []string{"203.56.175.233:37017"}
// config.Config.Mongo.Timeout = 60
- config.Config.Mongo.Database = "openIM_v3"
+ config.Config.Mongo.Database = "openim_v3"
// config.Config.Mongo.Source = "admin"
config.Config.Mongo.Username = "root"
config.Config.Mongo.Password = "openIM123"
@@ -235,7 +235,7 @@ func Test_FindBySeq(t *testing.T) {
func TestName(t *testing.T) {
db := GetDB()
var seqs []int64
- for i := int64(1); i <= 4; i++ {
+ for i := int64(1); i <= 50; i++ {
seqs = append(seqs, i)
}
msgs, err := db.getMsgBySeqsRange(context.Background(), "4931176757", "si_3866692501_4931176757", seqs, seqs[0], seqs[len(seqs)-1])
diff --git a/pkg/common/db/controller/s3.go b/pkg/common/db/controller/s3.go
index ddbd5d27f..95505de41 100644
--- a/pkg/common/db/controller/s3.go
+++ b/pkg/common/db/controller/s3.go
@@ -35,6 +35,8 @@ type S3Database interface {
CompleteMultipartUpload(ctx context.Context, uploadID string, parts []string) (*cont.UploadResult, error)
AccessURL(ctx context.Context, name string, expire time.Duration, opt *s3.AccessURLOption) (time.Time, string, error)
SetObject(ctx context.Context, info *relation.ObjectModel) error
+ StatObject(ctx context.Context, name string) (*s3.ObjectInfo, error)
+ FormData(ctx context.Context, name string, size int64, contentType string, duration time.Duration) (*s3.FormData, error)
}
func NewS3Database(rdb redis.UniversalClient, s3 s3.Interface, obj relation.ObjectInfoModelInterface) S3Database {
@@ -72,14 +74,15 @@ func (s *s3Database) CompleteMultipartUpload(ctx context.Context, uploadID strin
}
func (s *s3Database) SetObject(ctx context.Context, info *relation.ObjectModel) error {
+ info.Engine = s.s3.Engine()
if err := s.db.SetObject(ctx, info); err != nil {
return err
}
- return s.cache.DelObjectName(info.Name).ExecDel(ctx)
+ return s.cache.DelObjectName(info.Engine, info.Name).ExecDel(ctx)
}
func (s *s3Database) AccessURL(ctx context.Context, name string, expire time.Duration, opt *s3.AccessURLOption) (time.Time, string, error) {
- obj, err := s.cache.GetName(ctx, name)
+ obj, err := s.cache.GetName(ctx, s.s3.Engine(), name)
if err != nil {
return time.Time{}, "", err
}
@@ -99,3 +102,11 @@ func (s *s3Database) AccessURL(ctx context.Context, name string, expire time.Dur
}
return expireTime, rawURL, nil
}
+
+func (s *s3Database) StatObject(ctx context.Context, name string) (*s3.ObjectInfo, error) {
+ return s.s3.StatObject(ctx, name)
+}
+
+func (s *s3Database) FormData(ctx context.Context, name string, size int64, contentType string, duration time.Duration) (*s3.FormData, error) {
+ return s.s3.FormData(ctx, name, size, contentType, duration)
+}
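In s3.go the object cache is now keyed by the storage engine as well as the object name (`info.Engine = s.s3.Engine()` and `DelObjectName(engine, name)`), so entries cached for one backend cannot be served after switching to another. A toy key builder to illustrate the idea; the actual key format used by the cache layer may differ.

```go
package main

import "fmt"

// objectKey sketches the engine-scoped cache key: object names are namespaced
// by the storage engine (e.g. "minio", "cos", "oss"), so switching engines
// does not return stale URLs cached for another backend.
func objectKey(engine, name string) string {
	return fmt.Sprintf("OBJECT:%s:%s", engine, name)
}

func main() {
	fmt.Println(objectKey("minio", "avatar/123.png"))
	fmt.Println(objectKey("cos", "avatar/123.png")) // distinct cache entry per engine
}
```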
diff --git a/pkg/common/db/controller/third.go b/pkg/common/db/controller/third.go
index 971719b1f..fb5b0ccbe 100644
--- a/pkg/common/db/controller/third.go
+++ b/pkg/common/db/controller/third.go
@@ -18,10 +18,9 @@ import (
"context"
"time"
- "gorm.io/gorm"
+ "github.com/OpenIMSDK/tools/pagination"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/cache"
- dbimpl "github.com/openimsdk/open-im-server/v3/pkg/common/db/relation"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
)
@@ -29,22 +28,15 @@ type ThirdDatabase interface {
FcmUpdateToken(ctx context.Context, account string, platformID int, fcmToken string, expireTime int64) error
SetAppBadge(ctx context.Context, userID string, value int) error
// about log for debug
- UploadLogs(ctx context.Context, logs []*relation.Log) error
+ UploadLogs(ctx context.Context, logs []*relation.LogModel) error
DeleteLogs(ctx context.Context, logID []string, userID string) error
- SearchLogs(ctx context.Context, keyword string, start time.Time, end time.Time, pageNumber int32, showNumber int32) (uint32, []*relation.Log, error)
- GetLogs(ctx context.Context, LogIDs []string, userID string) ([]*relation.Log, error)
- FindUsers(ctx context.Context, userIDs []string) ([]*relation.UserModel, error)
+ SearchLogs(ctx context.Context, keyword string, start time.Time, end time.Time, pagination pagination.Pagination) (int64, []*relation.LogModel, error)
+ GetLogs(ctx context.Context, LogIDs []string, userID string) ([]*relation.LogModel, error)
}
type thirdDatabase struct {
- cache cache.MsgModel
- logdb relation.LogInterface
- userdb relation.UserModelInterface
-}
-
-// FindUsers implements ThirdDatabase.
-func (t *thirdDatabase) FindUsers(ctx context.Context, userIDs []string) ([]*relation.UserModel, error) {
- return t.userdb.Find(ctx, userIDs)
+ cache cache.MsgModel
+ logdb relation.LogInterface
}
// DeleteLogs implements ThirdDatabase.
@@ -53,22 +45,22 @@ func (t *thirdDatabase) DeleteLogs(ctx context.Context, logID []string, userID s
}
// GetLogs implements ThirdDatabase.
-func (t *thirdDatabase) GetLogs(ctx context.Context, LogIDs []string, userID string) ([]*relation.Log, error) {
+func (t *thirdDatabase) GetLogs(ctx context.Context, LogIDs []string, userID string) ([]*relation.LogModel, error) {
return t.logdb.Get(ctx, LogIDs, userID)
}
// SearchLogs implements ThirdDatabase.
-func (t *thirdDatabase) SearchLogs(ctx context.Context, keyword string, start time.Time, end time.Time, pageNumber int32, showNumber int32) (uint32, []*relation.Log, error) {
- return t.logdb.Search(ctx, keyword, start, end, pageNumber, showNumber)
+func (t *thirdDatabase) SearchLogs(ctx context.Context, keyword string, start time.Time, end time.Time, pagination pagination.Pagination) (int64, []*relation.LogModel, error) {
+ return t.logdb.Search(ctx, keyword, start, end, pagination)
}
// UploadLogs implements ThirdDatabase.
-func (t *thirdDatabase) UploadLogs(ctx context.Context, logs []*relation.Log) error {
+func (t *thirdDatabase) UploadLogs(ctx context.Context, logs []*relation.LogModel) error {
return t.logdb.Create(ctx, logs)
}
-func NewThirdDatabase(cache cache.MsgModel, db *gorm.DB) ThirdDatabase {
- return &thirdDatabase{cache: cache, logdb: dbimpl.NewLogGorm(db), userdb: dbimpl.NewUserGorm(db)}
+func NewThirdDatabase(cache cache.MsgModel, logdb relation.LogInterface) ThirdDatabase {
+ return &thirdDatabase{cache: cache, logdb: logdb}
}
func (t *thirdDatabase) FcmUpdateToken(
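third.go now receives its log storage through the constructor instead of building a GORM implementation internally, which makes the controller easy to exercise with an in-memory fake. A hypothetical sketch of that injection seam follows; `LogStore` only mimics the shape of `relation.LogInterface`, it is not the real interface.

```go
package main

import (
	"context"
	"fmt"
)

// LogStore is a stand-in for relation.LogInterface; only the shape matters here.
type LogStore interface {
	Create(ctx context.Context, logs []string) error
}

// thirdDatabase mirrors the constructor change above: the log storage is
// injected rather than constructed from a *gorm.DB inside NewThirdDatabase.
type thirdDatabase struct{ logdb LogStore }

func newThirdDatabase(logdb LogStore) *thirdDatabase { return &thirdDatabase{logdb: logdb} }

// memLogStore is a hypothetical in-memory implementation, handy in tests.
type memLogStore struct{ logs []string }

func (m *memLogStore) Create(_ context.Context, logs []string) error {
	m.logs = append(m.logs, logs...)
	return nil
}

func main() {
	store := &memLogStore{}
	db := newThirdDatabase(store)
	_ = db.logdb.Create(context.Background(), []string{"client crash log"})
	fmt.Println(len(store.logs), "log(s) stored without touching a real database")
}
```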
diff --git a/pkg/common/db/controller/user.go b/pkg/common/db/controller/user.go
index 9c6fdc5c4..8ba1c01d3 100644
--- a/pkg/common/db/controller/user.go
+++ b/pkg/common/db/controller/user.go
@@ -18,16 +18,19 @@ import (
"context"
"time"
+ "github.com/OpenIMSDK/tools/pagination"
+ "github.com/OpenIMSDK/tools/tx"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+
"github.com/OpenIMSDK/protocol/user"
unrelationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/unrelation"
"github.com/OpenIMSDK/tools/errs"
- "github.com/OpenIMSDK/tools/tx"
"github.com/OpenIMSDK/tools/utils"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/cache"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
)
type UserDatabase interface {
@@ -35,18 +38,28 @@ type UserDatabase interface {
FindWithError(ctx context.Context, userIDs []string) (users []*relation.UserModel, err error)
// Find Get the information of the specified user If the userID is not found, no error will be returned
Find(ctx context.Context, userIDs []string) (users []*relation.UserModel, err error)
+ // FindByNickname finds user info by nickname
+ FindByNickname(ctx context.Context, nickname string) (users []*relation.UserModel, err error)
+ // FindNotification finds notification accounts by level
+ FindNotification(ctx context.Context, level int64) (users []*relation.UserModel, err error)
// Create Insert multiple external guarantees that the userID is not repeated and does not exist in the db
Create(ctx context.Context, users []*relation.UserModel) (err error)
// Update update (non-zero value) external guarantee userID exists
- Update(ctx context.Context, user *relation.UserModel) (err error)
+ //Update(ctx context.Context, user *relation.UserModel) (err error)
// UpdateByMap update (zero value) external guarantee userID exists
- UpdateByMap(ctx context.Context, userID string, args map[string]interface{}) (err error)
+ UpdateByMap(ctx context.Context, userID string, args map[string]any) (err error)
+ // PageFindUser pages users whose app manager level is level1 or level2
+ PageFindUser(ctx context.Context, level1 int64, level2 int64, pagination pagination.Pagination) (count int64, users []*relation.UserModel, err error)
+ // PageFindUserWithKeyword pages users filtered by level and an optional userID/nickname keyword
+ PageFindUserWithKeyword(ctx context.Context, level1 int64, level2 int64, userID string, nickName string, pagination pagination.Pagination) (count int64, users []*relation.UserModel, err error)
// Page If not found, no error is returned
- Page(ctx context.Context, pageNumber, showNumber int32) (users []*relation.UserModel, count int64, err error)
+ Page(ctx context.Context, pagination pagination.Pagination) (count int64, users []*relation.UserModel, err error)
// IsExist true as long as one exists
IsExist(ctx context.Context, userIDs []string) (exist bool, err error)
// GetAllUserID Get all user IDs
- GetAllUserID(ctx context.Context, pageNumber, showNumber int32) ([]string, error)
+ GetAllUserID(ctx context.Context, pagination pagination.Pagination) (int64, []string, error)
+ // Get user by userID
+ GetUserByID(ctx context.Context, userID string) (user *relation.UserModel, err error)
// InitOnce Inside the function, first query whether it exists in the db, if it exists, do nothing; if it does not exist, insert it
InitOnce(ctx context.Context, users []*relation.UserModel) (err error)
// CountTotal Get the total number of users
@@ -65,31 +78,50 @@ type UserDatabase interface {
GetUserStatus(ctx context.Context, userIDs []string) ([]*user.OnlineStatus, error)
// SetUserStatus Set the user status and store the user status in redis
SetUserStatus(ctx context.Context, userID string, status, platformID int32) error
+
+ // CRUD operations for user commands
+ AddUserCommand(ctx context.Context, userID string, Type int32, UUID string, value string, ex string) error
+ DeleteUserCommand(ctx context.Context, userID string, Type int32, UUID string) error
+ UpdateUserCommand(ctx context.Context, userID string, Type int32, UUID string, val map[string]any) error
+ GetUserCommands(ctx context.Context, userID string, Type int32) ([]*user.CommandInfoResp, error)
+ GetAllUserCommands(ctx context.Context, userID string) ([]*user.AllCommandInfoResp, error)
}
type userDatabase struct {
+ tx tx.CtxTx
userDB relation.UserModelInterface
cache cache.UserCache
- tx tx.Tx
mongoDB unrelationtb.UserModelInterface
}
-func NewUserDatabase(userDB relation.UserModelInterface, cache cache.UserCache, tx tx.Tx, mongoDB unrelationtb.UserModelInterface) UserDatabase {
+func NewUserDatabase(userDB relation.UserModelInterface, cache cache.UserCache, tx tx.CtxTx, mongoDB unrelationtb.UserModelInterface) UserDatabase {
return &userDatabase{userDB: userDB, cache: cache, tx: tx, mongoDB: mongoDB}
}
-func (u *userDatabase) InitOnce(ctx context.Context, users []*relation.UserModel) (err error) {
+func (u *userDatabase) InitOnce(ctx context.Context, users []*relation.UserModel) error {
+ // Extract user IDs from the given user models.
userIDs := utils.Slice(users, func(e *relation.UserModel) string {
return e.UserID
})
- result, err := u.userDB.Find(ctx, userIDs)
+
+ // Find existing users in the database.
+ existingUsers, err := u.userDB.Find(ctx, userIDs)
if err != nil {
return err
}
- miss := utils.SliceAnySub(users, result, func(e *relation.UserModel) string { return e.UserID })
- if len(miss) > 0 {
- _ = u.userDB.Create(ctx, miss)
+
+ // Determine which users are missing from the database.
+ missingUsers := utils.SliceAnySub(users, existingUsers, func(e *relation.UserModel) string {
+ return e.UserID
+ })
+
+ // Create records for missing users.
+ if len(missingUsers) > 0 {
+ if err := u.userDB.Create(ctx, missingUsers); err != nil {
+ return err
+ }
}
+
return nil
}
@@ -107,50 +139,66 @@ func (u *userDatabase) FindWithError(ctx context.Context, userIDs []string) (use
// Find Get the information of the specified user. If the userID is not found, no error will be returned.
func (u *userDatabase) Find(ctx context.Context, userIDs []string) (users []*relation.UserModel, err error) {
- users, err = u.cache.GetUsersInfo(ctx, userIDs)
- return
+ return u.cache.GetUsersInfo(ctx, userIDs)
+}
+
+// FindByNickname finds user info by nickname.
+func (u *userDatabase) FindByNickname(ctx context.Context, nickname string) (users []*relation.UserModel, err error) {
+ return u.userDB.TakeByNickname(ctx, nickname)
+}
+
+// FindNotification finds notification accounts by level.
+func (u *userDatabase) FindNotification(ctx context.Context, level int64) (users []*relation.UserModel, err error) {
+ return u.userDB.TakeNotification(ctx, level)
}
// Create Insert multiple external guarantees that the userID is not repeated and does not exist in the db.
func (u *userDatabase) Create(ctx context.Context, users []*relation.UserModel) (err error) {
- if err := u.tx.Transaction(func(tx any) error {
- err = u.userDB.Create(ctx, users)
- if err != nil {
+ return u.tx.Transaction(ctx, func(ctx context.Context) error {
+ if err = u.userDB.Create(ctx, users); err != nil {
return err
}
- return nil
- }); err != nil {
- return err
- }
- var userIDs []string
- for _, user := range users {
- userIDs = append(userIDs, user.UserID)
- }
- return u.cache.DelUsersInfo(userIDs...).ExecDel(ctx)
+ return u.cache.DelUsersInfo(utils.Slice(users, func(e *relation.UserModel) string {
+ return e.UserID
+ })...).ExecDel(ctx)
+ })
}
-// Update (non-zero value) externally guarantees that userID exists.
-func (u *userDatabase) Update(ctx context.Context, user *relation.UserModel) (err error) {
- if err := u.userDB.Update(ctx, user); err != nil {
- return err
- }
- return u.cache.DelUsersInfo(user.UserID).ExecDel(ctx)
-}
+//// Update (non-zero value) externally guarantees that userID exists.
+//func (u *userDatabase) Update(ctx context.Context, user *relation.UserModel) (err error) {
+// if err := u.userDB.Update(ctx, user); err != nil {
+// return err
+// }
+// return u.cache.DelUsersInfo(user.UserID).ExecDel(ctx)
+//}
// UpdateByMap update (zero value) externally guarantees that userID exists.
-func (u *userDatabase) UpdateByMap(ctx context.Context, userID string, args map[string]interface{}) (err error) {
- if err := u.userDB.UpdateByMap(ctx, userID, args); err != nil {
- return err
- }
- return u.cache.DelUsersInfo(userID).ExecDel(ctx)
+func (u *userDatabase) UpdateByMap(ctx context.Context, userID string, args map[string]any) (err error) {
+ return u.tx.Transaction(ctx, func(ctx context.Context) error {
+ if err := u.userDB.UpdateByMap(ctx, userID, args); err != nil {
+ return err
+ }
+ return u.cache.DelUsersInfo(userID).ExecDel(ctx)
+ })
}
// Page Gets, returns no error if not found.
-func (u *userDatabase) Page(
+func (u *userDatabase) Page(ctx context.Context, pagination pagination.Pagination) (count int64, users []*relation.UserModel, err error) {
+ return u.userDB.Page(ctx, pagination)
+}
+
+func (u *userDatabase) PageFindUser(ctx context.Context, level1 int64, level2 int64, pagination pagination.Pagination) (count int64, users []*relation.UserModel, err error) {
+ return u.userDB.PageFindUser(ctx, level1, level2, pagination)
+}
+
+func (u *userDatabase) PageFindUserWithKeyword(
ctx context.Context,
- pageNumber, showNumber int32,
-) (users []*relation.UserModel, count int64, err error) {
- return u.userDB.Page(ctx, pageNumber, showNumber)
+ level1 int64,
+ level2 int64,
+ userID, nickName string,
+ pagination pagination.Pagination,
+) (count int64, users []*relation.UserModel, err error) {
+ return u.userDB.PageFindUserWithKeyword(ctx, level1, level2, userID, nickName, pagination)
}
// IsExist Does userIDs exist? As long as there is one, it will be true.
@@ -166,8 +214,12 @@ func (u *userDatabase) IsExist(ctx context.Context, userIDs []string) (exist boo
}
// GetAllUserID Get all user IDs.
-func (u *userDatabase) GetAllUserID(ctx context.Context, pageNumber, showNumber int32) (userIDs []string, err error) {
- return u.userDB.GetAllUserID(ctx, pageNumber, showNumber)
+func (u *userDatabase) GetAllUserID(ctx context.Context, pagination pagination.Pagination) (total int64, userIDs []string, err error) {
+ return u.userDB.GetAllUserID(ctx, pagination)
+}
+
+func (u *userDatabase) GetUserByID(ctx context.Context, userID string) (user *relation.UserModel, err error) {
+ return u.userDB.Take(ctx, userID)
}
// CountTotal Get the total number of users.
@@ -220,3 +272,20 @@ func (u *userDatabase) GetUserStatus(ctx context.Context, userIDs []string) ([]*
func (u *userDatabase) SetUserStatus(ctx context.Context, userID string, status, platformID int32) error {
return u.cache.SetUserStatus(ctx, userID, status, platformID)
}
+func (u *userDatabase) AddUserCommand(ctx context.Context, userID string, Type int32, UUID string, value string, ex string) error {
+ return u.userDB.AddUserCommand(ctx, userID, Type, UUID, value, ex)
+}
+func (u *userDatabase) DeleteUserCommand(ctx context.Context, userID string, Type int32, UUID string) error {
+ return u.userDB.DeleteUserCommand(ctx, userID, Type, UUID)
+}
+func (u *userDatabase) UpdateUserCommand(ctx context.Context, userID string, Type int32, UUID string, val map[string]any) error {
+ return u.userDB.UpdateUserCommand(ctx, userID, Type, UUID, val)
+}
+func (u *userDatabase) GetUserCommands(ctx context.Context, userID string, Type int32) ([]*user.CommandInfoResp, error) {
+ return u.userDB.GetUserCommand(ctx, userID, Type)
+}
+func (u *userDatabase) GetAllUserCommands(ctx context.Context, userID string) ([]*user.AllCommandInfoResp, error) {
+ return u.userDB.GetAllUserCommand(ctx, userID)
+}
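
As context for the signature changes above: every paging call site now passes a `pagination.Pagination` value and receives the total count first, instead of raw `pageNumber, showNumber` arguments. Below is a minimal caller-side sketch, not the project's actual code; it assumes the `Pagination` interface from `github.com/OpenIMSDK/tools/pagination` is satisfied by any type with `GetPageNumber()`/`GetShowNumber()` getters (as the `mgoutil.FindPage` helpers suggest), and the `page` type and `listFirstUsers` helper are hypothetical.

```go
package example

import (
	"context"
	"fmt"

	"github.com/openimsdk/open-im-server/v3/pkg/common/db/controller"
)

// page is a hypothetical Pagination implementation used only for this sketch;
// it assumes the interface consists of the two getters below.
type page struct{ number, size int32 }

func (p page) GetPageNumber() int32 { return p.number }
func (p page) GetShowNumber() int32 { return p.size }

// listFirstUsers fetches the first page of users with the new count-first signature.
func listFirstUsers(ctx context.Context, db controller.UserDatabase) error {
	total, users, err := db.Page(ctx, page{number: 1, size: 20})
	if err != nil {
		return err
	}
	fmt.Printf("total users: %d, users on this page: %d\n", total, len(users))
	return nil
}
```
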
diff --git a/pkg/common/db/mgo/black.go b/pkg/common/db/mgo/black.go
new file mode 100644
index 000000000..1047e5c30
--- /dev/null
+++ b/pkg/common/db/mgo/black.go
@@ -0,0 +1,105 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+)
+
+func NewBlackMongo(db *mongo.Database) (relation.BlackModelInterface, error) {
+ coll := db.Collection("black")
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "owner_user_id", Value: 1},
+ {Key: "block_user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &BlackMgo{coll: coll}, nil
+}
+
+type BlackMgo struct {
+ coll *mongo.Collection
+}
+
+func (b *BlackMgo) blackFilter(ownerUserID, blockUserID string) bson.M {
+ return bson.M{
+ "owner_user_id": ownerUserID,
+ "block_user_id": blockUserID,
+ }
+}
+
+func (b *BlackMgo) blacksFilter(blacks []*relation.BlackModel) bson.M {
+ if len(blacks) == 0 {
+ return nil
+ }
+ or := make(bson.A, 0, len(blacks))
+ for _, black := range blacks {
+ or = append(or, b.blackFilter(black.OwnerUserID, black.BlockUserID))
+ }
+ return bson.M{"$or": or}
+}
+
+func (b *BlackMgo) Create(ctx context.Context, blacks []*relation.BlackModel) (err error) {
+ return mgoutil.InsertMany(ctx, b.coll, blacks)
+}
+
+func (b *BlackMgo) Delete(ctx context.Context, blacks []*relation.BlackModel) (err error) {
+ if len(blacks) == 0 {
+ return nil
+ }
+ return mgoutil.DeleteMany(ctx, b.coll, b.blacksFilter(blacks))
+}
+
+func (b *BlackMgo) UpdateByMap(ctx context.Context, ownerUserID, blockUserID string, args map[string]any) (err error) {
+ if len(args) == 0 {
+ return nil
+ }
+ return mgoutil.UpdateOne(ctx, b.coll, b.blackFilter(ownerUserID, blockUserID), bson.M{"$set": args}, false)
+}
+
+func (b *BlackMgo) Find(ctx context.Context, blacks []*relation.BlackModel) (blackList []*relation.BlackModel, err error) {
+ return mgoutil.Find[*relation.BlackModel](ctx, b.coll, b.blacksFilter(blacks))
+}
+
+func (b *BlackMgo) Take(ctx context.Context, ownerUserID, blockUserID string) (black *relation.BlackModel, err error) {
+ return mgoutil.FindOne[*relation.BlackModel](ctx, b.coll, b.blackFilter(ownerUserID, blockUserID))
+}
+
+func (b *BlackMgo) FindOwnerBlacks(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, blacks []*relation.BlackModel, err error) {
+ return mgoutil.FindPage[*relation.BlackModel](ctx, b.coll, bson.M{"owner_user_id": ownerUserID}, pagination)
+}
+
+func (b *BlackMgo) FindOwnerBlackInfos(ctx context.Context, ownerUserID string, userIDs []string) (blacks []*relation.BlackModel, err error) {
+ if len(userIDs) == 0 {
+ return mgoutil.Find[*relation.BlackModel](ctx, b.coll, bson.M{"owner_user_id": ownerUserID})
+ }
+ return mgoutil.Find[*relation.BlackModel](ctx, b.coll, bson.M{"owner_user_id": ownerUserID, "block_user_id": bson.M{"$in": userIDs}})
+}
+
+func (b *BlackMgo) FindBlackUserIDs(ctx context.Context, ownerUserID string) (blackUserIDs []string, err error) {
+ return mgoutil.Find[string](ctx, b.coll, bson.M{"owner_user_id": ownerUserID}, options.Find().SetProjection(bson.M{"_id": 0, "block_user_id": 1}))
+}
diff --git a/pkg/common/db/mgo/conversation.go b/pkg/common/db/mgo/conversation.go
new file mode 100644
index 000000000..d0a46ae47
--- /dev/null
+++ b/pkg/common/db/mgo/conversation.go
@@ -0,0 +1,167 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "github.com/OpenIMSDK/tools/errs"
+
+ "github.com/OpenIMSDK/protocol/constant"
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+)
+
+func NewConversationMongo(db *mongo.Database) (*ConversationMgo, error) {
+ coll := db.Collection("conversation")
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "owner_user_id", Value: 1},
+ {Key: "conversation_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &ConversationMgo{coll: coll}, nil
+}
+
+type ConversationMgo struct {
+ coll *mongo.Collection
+}
+
+func (c *ConversationMgo) Create(ctx context.Context, conversations []*relation.ConversationModel) (err error) {
+ return mgoutil.InsertMany(ctx, c.coll, conversations)
+}
+
+func (c *ConversationMgo) Delete(ctx context.Context, groupIDs []string) (err error) {
+ return mgoutil.DeleteMany(ctx, c.coll, bson.M{"group_id": bson.M{"$in": groupIDs}})
+}
+
+func (c *ConversationMgo) UpdateByMap(ctx context.Context, userIDs []string, conversationID string, args map[string]any) (rows int64, err error) {
+ res, err := mgoutil.UpdateMany(ctx, c.coll, bson.M{"owner_user_id": bson.M{"$in": userIDs}, "conversation_id": conversationID}, bson.M{"$set": args})
+ if err != nil {
+ return 0, err
+ }
+ return res.ModifiedCount, nil
+}
+
+func (c *ConversationMgo) Update(ctx context.Context, conversation *relation.ConversationModel) (err error) {
+ return mgoutil.UpdateOne(ctx, c.coll, bson.M{"owner_user_id": conversation.OwnerUserID, "conversation_id": conversation.ConversationID}, bson.M{"$set": conversation}, true)
+}
+
+func (c *ConversationMgo) Find(ctx context.Context, ownerUserID string, conversationIDs []string) (conversations []*relation.ConversationModel, err error) {
+ return mgoutil.Find[*relation.ConversationModel](ctx, c.coll, bson.M{"owner_user_id": ownerUserID, "conversation_id": bson.M{"$in": conversationIDs}})
+}
+
+func (c *ConversationMgo) FindUserID(ctx context.Context, userIDs []string, conversationIDs []string) ([]string, error) {
+ return mgoutil.Find[string](
+ ctx,
+ c.coll,
+ bson.M{"owner_user_id": bson.M{"$in": userIDs}, "conversation_id": bson.M{"$in": conversationIDs}},
+ options.Find().SetProjection(bson.M{"_id": 0, "owner_user_id": 1}),
+ )
+}
+
+func (c *ConversationMgo) FindUserIDAllConversationID(ctx context.Context, userID string) ([]string, error) {
+ return mgoutil.Find[string](ctx, c.coll, bson.M{"owner_user_id": userID}, options.Find().SetProjection(bson.M{"_id": 0, "conversation_id": 1}))
+}
+
+func (c *ConversationMgo) Take(ctx context.Context, userID, conversationID string) (conversation *relation.ConversationModel, err error) {
+ return mgoutil.FindOne[*relation.ConversationModel](ctx, c.coll, bson.M{"owner_user_id": userID, "conversation_id": conversationID})
+}
+
+func (c *ConversationMgo) FindConversationID(ctx context.Context, userID string, conversationIDs []string) (existConversationID []string, err error) {
+ return mgoutil.Find[string](ctx, c.coll, bson.M{"owner_user_id": userID, "conversation_id": bson.M{"$in": conversationIDs}}, options.Find().SetProjection(bson.M{"_id": 0, "conversation_id": 1}))
+}
+
+func (c *ConversationMgo) FindUserIDAllConversations(ctx context.Context, userID string) (conversations []*relation.ConversationModel, err error) {
+ return mgoutil.Find[*relation.ConversationModel](ctx, c.coll, bson.M{"owner_user_id": userID})
+}
+
+func (c *ConversationMgo) FindRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) ([]string, error) {
+ return mgoutil.Find[string](ctx, c.coll, bson.M{"group_id": groupID, "recv_msg_opt": constant.ReceiveNotNotifyMessage}, options.Find().SetProjection(bson.M{"_id": 0, "owner_user_id": 1}))
+}
+
+func (c *ConversationMgo) GetUserRecvMsgOpt(ctx context.Context, ownerUserID, conversationID string) (opt int, err error) {
+ return mgoutil.FindOne[int](ctx, c.coll, bson.M{"owner_user_id": ownerUserID, "conversation_id": conversationID}, options.FindOne().SetProjection(bson.M{"recv_msg_opt": 1}))
+}
+
+func (c *ConversationMgo) GetAllConversationIDs(ctx context.Context) ([]string, error) {
+ return mgoutil.Aggregate[string](ctx, c.coll, []bson.M{
+ {"$group": bson.M{"_id": "$conversation_id"}},
+ {"$project": bson.M{"_id": 0, "conversation_id": "$_id"}},
+ })
+}
+
+func (c *ConversationMgo) GetAllConversationIDsNumber(ctx context.Context) (int64, error) {
+ counts, err := mgoutil.Aggregate[int64](ctx, c.coll, []bson.M{
+ {"$group": bson.M{"_id": "$conversation_id"}},
+ {"$group": bson.M{"_id": nil, "count": bson.M{"$sum": 1}}},
+ {"$project": bson.M{"_id": 0}},
+ })
+ if err != nil {
+ return 0, err
+ }
+ if len(counts) == 0 {
+ return 0, nil
+ }
+ return counts[0], nil
+}
+
+func (c *ConversationMgo) PageConversationIDs(ctx context.Context, pagination pagination.Pagination) (conversationIDs []string, err error) {
+ return mgoutil.FindPageOnly[string](ctx, c.coll, bson.M{}, pagination, options.Find().SetProjection(bson.M{"conversation_id": 1}))
+}
+
+func (c *ConversationMgo) GetConversationsByConversationID(ctx context.Context, conversationIDs []string) ([]*relation.ConversationModel, error) {
+ return mgoutil.Find[*relation.ConversationModel](ctx, c.coll, bson.M{"conversation_id": bson.M{"$in": conversationIDs}})
+}
+
+func (c *ConversationMgo) GetConversationIDsNeedDestruct(ctx context.Context) ([]*relation.ConversationModel, error) {
+ //"is_msg_destruct = 1 && msg_destruct_time != 0 && (UNIX_TIMESTAMP(NOW()) > (msg_destruct_time + UNIX_TIMESTAMP(latest_msg_destruct_time)) || latest_msg_destruct_time is NULL)"
+ return mgoutil.Find[*relation.ConversationModel](ctx, c.coll, bson.M{
+ "is_msg_destruct": 1,
+ "msg_destruct_time": bson.M{"$ne": 0},
+ "$or": []bson.M{
+ {
+ "$expr": bson.M{
+ "$gt": []any{
+ time.Now(),
+ bson.M{"$add": []any{"$msg_destruct_time", "$latest_msg_destruct_time"}},
+ },
+ },
+ },
+ {
+ "latest_msg_destruct_time": nil,
+ },
+ },
+ })
+}
+
+func (c *ConversationMgo) GetConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) ([]string, error) {
+ return mgoutil.Find[string](
+ ctx,
+ c.coll,
+ bson.M{"conversation_id": conversationID, "recv_msg_opt": bson.M{"$ne": constant.ReceiveMessage}},
+ options.Find().SetProjection(bson.M{"_id": 0, "owner_user_id": 1}),
+ )
+}
diff --git a/pkg/common/db/mgo/friend.go b/pkg/common/db/mgo/friend.go
new file mode 100644
index 000000000..851db6157
--- /dev/null
+++ b/pkg/common/db/mgo/friend.go
@@ -0,0 +1,165 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+)
+
+// FriendMgo implements FriendModelInterface using MongoDB as the storage backend.
+type FriendMgo struct {
+ coll *mongo.Collection
+}
+
+// NewFriendMongo creates a new instance of FriendMgo with the provided MongoDB database.
+func NewFriendMongo(db *mongo.Database) (relation.FriendModelInterface, error) {
+ coll := db.Collection("friend")
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "owner_user_id", Value: 1},
+ {Key: "friend_user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &FriendMgo{coll: coll}, nil
+}
+
+// Create inserts multiple friend records.
+func (f *FriendMgo) Create(ctx context.Context, friends []*relation.FriendModel) error {
+ return mgoutil.InsertMany(ctx, f.coll, friends)
+}
+
+// Delete removes specified friends of the owner user.
+func (f *FriendMgo) Delete(ctx context.Context, ownerUserID string, friendUserIDs []string) error {
+ filter := bson.M{
+ "owner_user_id": ownerUserID,
+ "friend_user_id": bson.M{"$in": friendUserIDs},
+ }
+ // Use DeleteMany so every listed friend relation is removed, not just the first match.
+ return mgoutil.DeleteMany(ctx, f.coll, filter)
+}
+
+// UpdateByMap updates specific fields of a friend document using a map.
+func (f *FriendMgo) UpdateByMap(ctx context.Context, ownerUserID string, friendUserID string, args map[string]interface{}) error {
+ if len(args) == 0 {
+ return nil
+ }
+ filter := bson.M{
+ "owner_user_id": ownerUserID,
+ "friend_user_id": friendUserID,
+ }
+ return mgoutil.UpdateOne(ctx, f.coll, filter, bson.M{"$set": args}, true)
+}
+
+// Update modifies multiple friend documents.
+// func (f *FriendMgo) Update(ctx context.Context, friends []*relation.FriendModel) error {
+// filter := bson.M{
+// "owner_user_id": ownerUserID,
+// "friend_user_id": friendUserID,
+// }
+// return mgoutil.UpdateMany(ctx, f.coll, filter, friends)
+// }
+
+// UpdateRemark updates the remark for a specific friend.
+func (f *FriendMgo) UpdateRemark(ctx context.Context, ownerUserID, friendUserID, remark string) error {
+ return f.UpdateByMap(ctx, ownerUserID, friendUserID, map[string]any{"remark": remark})
+}
+
+// Take retrieves a single friend document. Returns an error if not found.
+func (f *FriendMgo) Take(ctx context.Context, ownerUserID, friendUserID string) (*relation.FriendModel, error) {
+ filter := bson.M{
+ "owner_user_id": ownerUserID,
+ "friend_user_id": friendUserID,
+ }
+ return mgoutil.FindOne[*relation.FriendModel](ctx, f.coll, filter)
+}
+
+// FindUserState finds the friendship status between two users.
+func (f *FriendMgo) FindUserState(ctx context.Context, userID1, userID2 string) ([]*relation.FriendModel, error) {
+ filter := bson.M{
+ "$or": []bson.M{
+ {"owner_user_id": userID1, "friend_user_id": userID2},
+ {"owner_user_id": userID2, "friend_user_id": userID1},
+ },
+ }
+ return mgoutil.Find[*relation.FriendModel](ctx, f.coll, filter)
+}
+
+// FindFriends retrieves a list of friends for a given owner. Missing friends do not cause an error.
+func (f *FriendMgo) FindFriends(ctx context.Context, ownerUserID string, friendUserIDs []string) ([]*relation.FriendModel, error) {
+ filter := bson.M{
+ "owner_user_id": ownerUserID,
+ "friend_user_id": bson.M{"$in": friendUserIDs},
+ }
+ return mgoutil.Find[*relation.FriendModel](ctx, f.coll, filter)
+}
+
+// FindReversalFriends finds users who have added the specified user as a friend.
+func (f *FriendMgo) FindReversalFriends(ctx context.Context, friendUserID string, ownerUserIDs []string) ([]*relation.FriendModel, error) {
+ filter := bson.M{
+ "owner_user_id": bson.M{"$in": ownerUserIDs},
+ "friend_user_id": friendUserID,
+ }
+ return mgoutil.Find[*relation.FriendModel](ctx, f.coll, filter)
+}
+
+// FindOwnerFriends retrieves a paginated list of friends for a given owner.
+func (f *FriendMgo) FindOwnerFriends(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (int64, []*relation.FriendModel, error) {
+ filter := bson.M{"owner_user_id": ownerUserID}
+ return mgoutil.FindPage[*relation.FriendModel](ctx, f.coll, filter, pagination)
+}
+
+// FindInWhoseFriends finds users who have added the specified user as a friend, with pagination.
+func (f *FriendMgo) FindInWhoseFriends(ctx context.Context, friendUserID string, pagination pagination.Pagination) (int64, []*relation.FriendModel, error) {
+ filter := bson.M{"friend_user_id": friendUserID}
+ return mgoutil.FindPage[*relation.FriendModel](ctx, f.coll, filter, pagination)
+}
+
+// FindFriendUserIDs retrieves a list of friend user IDs for a given owner.
+func (f *FriendMgo) FindFriendUserIDs(ctx context.Context, ownerUserID string) ([]string, error) {
+ filter := bson.M{"owner_user_id": ownerUserID}
+ return mgoutil.Find[string](ctx, f.coll, filter, options.Find().SetProjection(bson.M{"_id": 0, "friend_user_id": 1}))
+}
+
+func (f *FriendMgo) UpdateFriends(ctx context.Context, ownerUserID string, friendUserIDs []string, val map[string]any) error {
+ // Ensure there are IDs to update
+ if len(friendUserIDs) == 0 {
+ return nil // Or return an error if you expect there to always be IDs
+ }
+
+ // Create a filter to match documents with the specified ownerUserID and any of the friendUserIDs
+ filter := bson.M{
+ "owner_user_id": ownerUserID,
+ "friend_user_id": bson.M{"$in": friendUserIDs},
+ }
+
+ // Create an update document
+ update := bson.M{"$set": val}
+
+ // Perform the update operation for all matching documents
+ _, err := mgoutil.UpdateMany(ctx, f.coll, filter, update)
+ return err
+}
diff --git a/pkg/common/db/mgo/friend_request.go b/pkg/common/db/mgo/friend_request.go
new file mode 100644
index 000000000..bfc101917
--- /dev/null
+++ b/pkg/common/db/mgo/friend_request.go
@@ -0,0 +1,113 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+)
+
+func NewFriendRequestMongo(db *mongo.Database) (relation.FriendRequestModelInterface, error) {
+ coll := db.Collection("friend_request")
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "from_user_id", Value: 1},
+ {Key: "to_user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &FriendRequestMgo{coll: coll}, nil
+}
+
+type FriendRequestMgo struct {
+ coll *mongo.Collection
+}
+
+func (f *FriendRequestMgo) FindToUserID(ctx context.Context, toUserID string, pagination pagination.Pagination) (total int64, friendRequests []*relation.FriendRequestModel, err error) {
+ return mgoutil.FindPage[*relation.FriendRequestModel](ctx, f.coll, bson.M{"to_user_id": toUserID}, pagination)
+}
+
+func (f *FriendRequestMgo) FindFromUserID(ctx context.Context, fromUserID string, pagination pagination.Pagination) (total int64, friendRequests []*relation.FriendRequestModel, err error) {
+ return mgoutil.FindPage[*relation.FriendRequestModel](ctx, f.coll, bson.M{"from_user_id": fromUserID}, pagination)
+}
+
+func (f *FriendRequestMgo) FindBothFriendRequests(ctx context.Context, fromUserID, toUserID string) (friends []*relation.FriendRequestModel, err error) {
+ filter := bson.M{"$or": []bson.M{
+ {"from_user_id": fromUserID, "to_user_id": toUserID},
+ {"from_user_id": toUserID, "to_user_id": fromUserID},
+ }}
+ return mgoutil.Find[*relation.FriendRequestModel](ctx, f.coll, filter)
+}
+
+func (f *FriendRequestMgo) Create(ctx context.Context, friendRequests []*relation.FriendRequestModel) error {
+ return mgoutil.InsertMany(ctx, f.coll, friendRequests)
+}
+
+func (f *FriendRequestMgo) Delete(ctx context.Context, fromUserID, toUserID string) (err error) {
+ return mgoutil.DeleteOne(ctx, f.coll, bson.M{"from_user_id": fromUserID, "to_user_id": toUserID})
+}
+
+func (f *FriendRequestMgo) UpdateByMap(ctx context.Context, formUserID, toUserID string, args map[string]any) (err error) {
+ if len(args) == 0 {
+ return nil
+ }
+ return mgoutil.UpdateOne(ctx, f.coll, bson.M{"from_user_id": formUserID, "to_user_id": toUserID}, bson.M{"$set": args}, true)
+}
+
+func (f *FriendRequestMgo) Update(ctx context.Context, friendRequest *relation.FriendRequestModel) (err error) {
+ updater := bson.M{}
+ if friendRequest.HandleResult != 0 {
+ updater["handle_result"] = friendRequest.HandleResult
+ }
+ if friendRequest.ReqMsg != "" {
+ updater["req_msg"] = friendRequest.ReqMsg
+ }
+ if friendRequest.HandlerUserID != "" {
+ updater["handler_user_id"] = friendRequest.HandlerUserID
+ }
+ if friendRequest.HandleMsg != "" {
+ updater["handle_msg"] = friendRequest.HandleMsg
+ }
+ if !friendRequest.HandleTime.IsZero() {
+ updater["handle_time"] = friendRequest.HandleTime
+ }
+ if friendRequest.Ex != "" {
+ updater["ex"] = friendRequest.Ex
+ }
+ if len(updater) == 0 {
+ return nil
+ }
+ filter := bson.M{"from_user_id": friendRequest.FromUserID, "to_user_id": friendRequest.ToUserID}
+ return mgoutil.UpdateOne(ctx, f.coll, filter, bson.M{"$set": updater}, true)
+}
+
+func (f *FriendRequestMgo) Find(ctx context.Context, fromUserID, toUserID string) (friendRequest *relation.FriendRequestModel, err error) {
+ return mgoutil.FindOne[*relation.FriendRequestModel](ctx, f.coll, bson.M{"from_user_id": fromUserID, "to_user_id": toUserID})
+}
+
+func (f *FriendRequestMgo) Take(ctx context.Context, fromUserID, toUserID string) (friendRequest *relation.FriendRequestModel, err error) {
+ return f.Find(ctx, fromUserID, toUserID)
+}
diff --git a/pkg/common/db/mgo/group.go b/pkg/common/db/mgo/group.go
new file mode 100644
index 000000000..922bfd424
--- /dev/null
+++ b/pkg/common/db/mgo/group.go
@@ -0,0 +1,121 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "github.com/OpenIMSDK/tools/errs"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+)
+
+func NewGroupMongo(db *mongo.Database) (relation.GroupModelInterface, error) {
+ coll := db.Collection("group")
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "group_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &GroupMgo{coll: coll}, nil
+}
+
+type GroupMgo struct {
+ coll *mongo.Collection
+}
+
+func (g *GroupMgo) Create(ctx context.Context, groups []*relation.GroupModel) (err error) {
+ return mgoutil.InsertMany(ctx, g.coll, groups)
+}
+
+func (g *GroupMgo) UpdateStatus(ctx context.Context, groupID string, status int32) (err error) {
+ return g.UpdateMap(ctx, groupID, map[string]any{"status": status})
+}
+
+func (g *GroupMgo) UpdateMap(ctx context.Context, groupID string, args map[string]any) (err error) {
+ if len(args) == 0 {
+ return nil
+ }
+ return mgoutil.UpdateOne(ctx, g.coll, bson.M{"group_id": groupID}, bson.M{"$set": args}, true)
+}
+
+func (g *GroupMgo) Find(ctx context.Context, groupIDs []string) (groups []*relation.GroupModel, err error) {
+ return mgoutil.Find[*relation.GroupModel](ctx, g.coll, bson.M{"group_id": bson.M{"$in": groupIDs}})
+}
+
+func (g *GroupMgo) Take(ctx context.Context, groupID string) (group *relation.GroupModel, err error) {
+ return mgoutil.FindOne[*relation.GroupModel](ctx, g.coll, bson.M{"group_id": groupID})
+}
+
+func (g *GroupMgo) Search(ctx context.Context, keyword string, pagination pagination.Pagination) (total int64, groups []*relation.GroupModel, err error) {
+ return mgoutil.FindPage[*relation.GroupModel](ctx, g.coll, bson.M{"group_name": bson.M{"$regex": keyword}}, pagination)
+}
+
+func (g *GroupMgo) CountTotal(ctx context.Context, before *time.Time) (count int64, err error) {
+ if before == nil {
+ return mgoutil.Count(ctx, g.coll, bson.M{})
+ }
+ return mgoutil.Count(ctx, g.coll, bson.M{"create_time": bson.M{"$lt": before}})
+}
+
+func (g *GroupMgo) CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error) {
+ pipeline := bson.A{
+ bson.M{
+ "$match": bson.M{
+ "create_time": bson.M{
+ "$gte": start,
+ "$lt": end,
+ },
+ },
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": bson.M{
+ "$dateToString": bson.M{
+ "format": "%Y-%m-%d",
+ "date": "$create_time",
+ },
+ },
+ "count": bson.M{
+ "$sum": 1,
+ },
+ },
+ },
+ }
+ type Item struct {
+ Date string `bson:"_id"`
+ Count int64 `bson:"count"`
+ }
+ items, err := mgoutil.Aggregate[Item](ctx, g.coll, pipeline)
+ if err != nil {
+ return nil, err
+ }
+ res := make(map[string]int64, len(items))
+ for _, item := range items {
+ res[item.Date] = item.Count
+ }
+ return res, nil
+}
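
A usage note on the aggregation above: `$dateToString` buckets groups by calendar day, so the returned map only has keys for days on which at least one group was created. The sketch below is illustrative only; it assumes `CountRangeEverydayTotal` is exposed on `relation.GroupModelInterface` (as `NewGroupMongo`'s return type suggests), and the dates, counts, and `countNewGroupsPerDay` helper are invented.

```go
package example

import (
	"context"
	"time"

	"github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
)

// countNewGroupsPerDay illustrates the result shape: keys are "YYYY-MM-DD" strings
// and days without any newly created group are simply absent from the map.
func countNewGroupsPerDay(ctx context.Context, groupDB relation.GroupModelInterface) (map[string]int64, error) {
	start := time.Date(2023, 11, 1, 0, 0, 0, 0, time.UTC)
	end := time.Date(2023, 11, 8, 0, 0, 0, 0, time.UTC) // exclusive, because the match uses $lt
	perDay, err := groupDB.CountRangeEverydayTotal(ctx, start, end)
	if err != nil {
		return nil, err
	}
	// perDay might look like: map[string]int64{"2023-11-01": 3, "2023-11-05": 1}
	return perDay, nil
}
```
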
diff --git a/pkg/common/db/mgo/group_member.go b/pkg/common/db/mgo/group_member.go
new file mode 100644
index 000000000..e28432b11
--- /dev/null
+++ b/pkg/common/db/mgo/group_member.go
@@ -0,0 +1,121 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+
+ "github.com/OpenIMSDK/tools/errs"
+
+ "github.com/OpenIMSDK/protocol/constant"
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+)
+
+func NewGroupMember(db *mongo.Database) (relation.GroupMemberModelInterface, error) {
+ coll := db.Collection("group_member")
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "group_id", Value: 1},
+ {Key: "user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &GroupMemberMgo{coll: coll}, nil
+}
+
+type GroupMemberMgo struct {
+ coll *mongo.Collection
+}
+
+func (g *GroupMemberMgo) Create(ctx context.Context, groupMembers []*relation.GroupMemberModel) (err error) {
+ return mgoutil.InsertMany(ctx, g.coll, groupMembers)
+}
+
+func (g *GroupMemberMgo) Delete(ctx context.Context, groupID string, userIDs []string) (err error) {
+ filter := bson.M{"group_id": groupID}
+ if len(userIDs) > 0 {
+ filter["user_id"] = bson.M{"$in": userIDs}
+ }
+ return mgoutil.DeleteMany(ctx, g.coll, filter)
+}
+
+func (g *GroupMemberMgo) UpdateRoleLevel(ctx context.Context, groupID string, userID string, roleLevel int32) error {
+ return g.Update(ctx, groupID, userID, bson.M{"role_level": roleLevel})
+}
+
+func (g *GroupMemberMgo) Update(ctx context.Context, groupID string, userID string, data map[string]any) (err error) {
+ return mgoutil.UpdateOne(ctx, g.coll, bson.M{"group_id": groupID, "user_id": userID}, bson.M{"$set": data}, true)
+}
+
+func (g *GroupMemberMgo) Find(ctx context.Context, groupIDs []string, userIDs []string, roleLevels []int32) (groupMembers []*relation.GroupMemberModel, err error) {
+ //TODO implement me
+ panic("implement me")
+}
+
+func (g *GroupMemberMgo) FindMemberUserID(ctx context.Context, groupID string) (userIDs []string, err error) {
+ return mgoutil.Find[string](ctx, g.coll, bson.M{"group_id": groupID}, options.Find().SetProjection(bson.M{"_id": 0, "user_id": 1}))
+}
+
+func (g *GroupMemberMgo) Take(ctx context.Context, groupID string, userID string) (groupMember *relation.GroupMemberModel, err error) {
+ return mgoutil.FindOne[*relation.GroupMemberModel](ctx, g.coll, bson.M{"group_id": groupID, "user_id": userID})
+}
+
+func (g *GroupMemberMgo) TakeOwner(ctx context.Context, groupID string) (groupMember *relation.GroupMemberModel, err error) {
+ return mgoutil.FindOne[*relation.GroupMemberModel](ctx, g.coll, bson.M{"group_id": groupID, "role_level": constant.GroupOwner})
+}
+
+func (g *GroupMemberMgo) FindRoleLevelUserIDs(ctx context.Context, groupID string, roleLevel int32) ([]string, error) {
+ return mgoutil.Find[string](ctx, g.coll, bson.M{"group_id": groupID, "role_level": roleLevel}, options.Find().SetProjection(bson.M{"_id": 0, "user_id": 1}))
+}
+
+func (g *GroupMemberMgo) SearchMember(ctx context.Context, keyword string, groupID string, pagination pagination.Pagination) (total int64, groupList []*relation.GroupMemberModel, err error) {
+ filter := bson.M{"group_id": groupID, "nickname": bson.M{"$regex": keyword}}
+ return mgoutil.FindPage[*relation.GroupMemberModel](ctx, g.coll, filter, pagination)
+}
+
+func (g *GroupMemberMgo) FindUserJoinedGroupID(ctx context.Context, userID string) (groupIDs []string, err error) {
+ return mgoutil.Find[string](ctx, g.coll, bson.M{"user_id": userID}, options.Find().SetProjection(bson.M{"_id": 0, "group_id": 1}))
+}
+
+func (g *GroupMemberMgo) TakeGroupMemberNum(ctx context.Context, groupID string) (count int64, err error) {
+ return mgoutil.Count(ctx, g.coll, bson.M{"group_id": groupID})
+}
+
+func (g *GroupMemberMgo) FindUserManagedGroupID(ctx context.Context, userID string) (groupIDs []string, err error) {
+ filter := bson.M{
+ "user_id": userID,
+ "role_level": bson.M{
+ "$in": []int{constant.GroupOwner, constant.GroupAdmin},
+ },
+ }
+ return mgoutil.Find[string](ctx, g.coll, filter, options.Find().SetProjection(bson.M{"_id": 0, "group_id": 1}))
+}
+
+func (g *GroupMemberMgo) IsUpdateRoleLevel(data map[string]any) bool {
+ if len(data) == 0 {
+ return false
+ }
+ _, ok := data["role_level"]
+ return ok
+}
diff --git a/pkg/common/db/mgo/group_request.go b/pkg/common/db/mgo/group_request.go
new file mode 100644
index 000000000..d20682239
--- /dev/null
+++ b/pkg/common/db/mgo/group_request.go
@@ -0,0 +1,76 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+
+ "github.com/OpenIMSDK/tools/errs"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+)
+
+func NewGroupRequestMgo(db *mongo.Database) (relation.GroupRequestModelInterface, error) {
+ coll := db.Collection("group_request")
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "group_id", Value: 1},
+ {Key: "user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &GroupRequestMgo{coll: coll}, nil
+}
+
+type GroupRequestMgo struct {
+ coll *mongo.Collection
+}
+
+func (g *GroupRequestMgo) Create(ctx context.Context, groupRequests []*relation.GroupRequestModel) (err error) {
+ return mgoutil.InsertMany(ctx, g.coll, groupRequests)
+}
+
+func (g *GroupRequestMgo) Delete(ctx context.Context, groupID string, userID string) (err error) {
+ return mgoutil.DeleteOne(ctx, g.coll, bson.M{"group_id": groupID, "user_id": userID})
+}
+
+func (g *GroupRequestMgo) UpdateHandler(ctx context.Context, groupID string, userID string, handledMsg string, handleResult int32) (err error) {
+ return mgoutil.UpdateOne(ctx, g.coll, bson.M{"group_id": groupID, "user_id": userID}, bson.M{"$set": bson.M{"handle_msg": handledMsg, "handle_result": handleResult}}, true)
+}
+
+func (g *GroupRequestMgo) Take(ctx context.Context, groupID string, userID string) (groupRequest *relation.GroupRequestModel, err error) {
+ return mgoutil.FindOne[*relation.GroupRequestModel](ctx, g.coll, bson.M{"group_id": groupID, "user_id": userID})
+}
+
+func (g *GroupRequestMgo) FindGroupRequests(ctx context.Context, groupID string, userIDs []string) ([]*relation.GroupRequestModel, error) {
+ return mgoutil.Find[*relation.GroupRequestModel](ctx, g.coll, bson.M{"group_id": groupID, "user_id": bson.M{"$in": userIDs}})
+}
+
+func (g *GroupRequestMgo) Page(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, groups []*relation.GroupRequestModel, err error) {
+ return mgoutil.FindPage[*relation.GroupRequestModel](ctx, g.coll, bson.M{"user_id": userID}, pagination)
+}
+
+func (g *GroupRequestMgo) PageGroup(ctx context.Context, groupIDs []string, pagination pagination.Pagination) (total int64, groups []*relation.GroupRequestModel, err error) {
+ return mgoutil.FindPage[*relation.GroupRequestModel](ctx, g.coll, bson.M{"group_id": bson.M{"$in": groupIDs}}, pagination)
+}
diff --git a/pkg/common/db/mgo/log.go b/pkg/common/db/mgo/log.go
new file mode 100644
index 000000000..09f002ee3
--- /dev/null
+++ b/pkg/common/db/mgo/log.go
@@ -0,0 +1,84 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+)
+
+func NewLogMongo(db *mongo.Database) (relation.LogInterface, error) {
+ coll := db.Collection("log")
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{
+ {Key: "log_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{
+ {Key: "user_id", Value: 1},
+ },
+ },
+ {
+ Keys: bson.D{
+ {Key: "create_time", Value: -1},
+ },
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &LogMgo{coll: coll}, nil
+}
+
+type LogMgo struct {
+ coll *mongo.Collection
+}
+
+func (l *LogMgo) Create(ctx context.Context, log []*relation.LogModel) error {
+ return mgoutil.InsertMany(ctx, l.coll, log)
+}
+
+func (l *LogMgo) Search(ctx context.Context, keyword string, start time.Time, end time.Time, pagination pagination.Pagination) (int64, []*relation.LogModel, error) {
+ filter := bson.M{"create_time": bson.M{"$gte": start, "$lte": end}}
+ if keyword != "" {
+ filter["user_id"] = bson.M{"$regex": keyword}
+ }
+ return mgoutil.FindPage[*relation.LogModel](ctx, l.coll, filter, pagination, options.Find().SetSort(bson.M{"create_time": -1}))
+}
+
+func (l *LogMgo) Delete(ctx context.Context, logID []string, userID string) error {
+ if userID == "" {
+ return mgoutil.DeleteMany(ctx, l.coll, bson.M{"log_id": bson.M{"$in": logID}})
+ }
+ return mgoutil.DeleteMany(ctx, l.coll, bson.M{"log_id": bson.M{"$in": logID}, "user_id": userID})
+}
+
+func (l *LogMgo) Get(ctx context.Context, logIDs []string, userID string) ([]*relation.LogModel, error) {
+ if userID == "" {
+ return mgoutil.Find[*relation.LogModel](ctx, l.coll, bson.M{"log_id": bson.M{"$in": logIDs}})
+ }
+ return mgoutil.Find[*relation.LogModel](ctx, l.coll, bson.M{"log_id": bson.M{"$in": logIDs}, "user_id": userID})
+}
diff --git a/pkg/common/db/mgo/object.go b/pkg/common/db/mgo/object.go
new file mode 100644
index 000000000..88bfde213
--- /dev/null
+++ b/pkg/common/db/mgo/object.go
@@ -0,0 +1,69 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+)
+
+func NewS3Mongo(db *mongo.Database) (relation.ObjectInfoModelInterface, error) {
+ coll := db.Collection("s3")
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "name", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &S3Mongo{coll: coll}, nil
+}
+
+type S3Mongo struct {
+ coll *mongo.Collection
+}
+
+func (o *S3Mongo) SetObject(ctx context.Context, obj *relation.ObjectModel) error {
+ filter := bson.M{"name": obj.Name, "engine": obj.Engine}
+ update := bson.M{
+ "name": obj.Name,
+ "engine": obj.Engine,
+ "key": obj.Key,
+ "size": obj.Size,
+ "content_type": obj.ContentType,
+ "group": obj.Group,
+ "create_time": obj.CreateTime,
+ }
+ return mgoutil.UpdateOne(ctx, o.coll, filter, bson.M{"$set": update}, false, options.Update().SetUpsert(true))
+}
+
+func (o *S3Mongo) Take(ctx context.Context, engine string, name string) (*relation.ObjectModel, error) {
+ if engine == "" {
+ return mgoutil.FindOne[*relation.ObjectModel](ctx, o.coll, bson.M{"name": name})
+ }
+ return mgoutil.FindOne[*relation.ObjectModel](ctx, o.coll, bson.M{"name": name, "engine": engine})
+}
+
+func (o *S3Mongo) Delete(ctx context.Context, engine string, name string) error {
+ return mgoutil.DeleteOne(ctx, o.coll, bson.M{"name": name, "engine": engine})
+}
diff --git a/pkg/common/db/mgo/user.go b/pkg/common/db/mgo/user.go
new file mode 100644
index 000000000..2797bc53f
--- /dev/null
+++ b/pkg/common/db/mgo/user.go
@@ -0,0 +1,322 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "github.com/OpenIMSDK/protocol/user"
+ "github.com/OpenIMSDK/tools/errs"
+ "go.mongodb.org/mongo-driver/bson/primitive"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+)
+
+func NewUserMongo(db *mongo.Database) (relation.UserModelInterface, error) {
+ coll := db.Collection("user")
+ _, err := coll.Indexes().CreateOne(context.Background(), mongo.IndexModel{
+ Keys: bson.D{
+ {Key: "user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ })
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ return &UserMgo{coll: coll}, nil
+}
+
+type UserMgo struct {
+ coll *mongo.Collection
+}
+
+func (u *UserMgo) Create(ctx context.Context, users []*relation.UserModel) error {
+ return mgoutil.InsertMany(ctx, u.coll, users)
+}
+
+func (u *UserMgo) UpdateByMap(ctx context.Context, userID string, args map[string]any) (err error) {
+ if len(args) == 0 {
+ return nil
+ }
+ return mgoutil.UpdateOne(ctx, u.coll, bson.M{"user_id": userID}, bson.M{"$set": args}, true)
+}
+
+func (u *UserMgo) Find(ctx context.Context, userIDs []string) (users []*relation.UserModel, err error) {
+ return mgoutil.Find[*relation.UserModel](ctx, u.coll, bson.M{"user_id": bson.M{"$in": userIDs}})
+}
+
+func (u *UserMgo) Take(ctx context.Context, userID string) (user *relation.UserModel, err error) {
+ return mgoutil.FindOne[*relation.UserModel](ctx, u.coll, bson.M{"user_id": userID})
+}
+
+func (u *UserMgo) TakeNotification(ctx context.Context, level int64) (user []*relation.UserModel, err error) {
+ return mgoutil.Find[*relation.UserModel](ctx, u.coll, bson.M{"app_manger_level": level})
+}
+
+func (u *UserMgo) TakeByNickname(ctx context.Context, nickname string) (user []*relation.UserModel, err error) {
+ return mgoutil.Find[*relation.UserModel](ctx, u.coll, bson.M{"nickname": nickname})
+}
+
+func (u *UserMgo) Page(ctx context.Context, pagination pagination.Pagination) (count int64, users []*relation.UserModel, err error) {
+ return mgoutil.FindPage[*relation.UserModel](ctx, u.coll, bson.M{}, pagination)
+}
+
+func (u *UserMgo) PageFindUser(ctx context.Context, level1 int64, level2 int64, pagination pagination.Pagination) (count int64, users []*relation.UserModel, err error) {
+ query := bson.M{
+ "$or": []bson.M{
+ {"app_manger_level": level1},
+ {"app_manger_level": level2},
+ },
+ }
+
+ return mgoutil.FindPage[*relation.UserModel](ctx, u.coll, query, pagination)
+}
+
+func (u *UserMgo) PageFindUserWithKeyword(
+ ctx context.Context,
+ level1 int64,
+ level2 int64,
+ userID string,
+ nickName string,
+ pagination pagination.Pagination,
+) (count int64, users []*relation.UserModel, err error) {
+ // Initialize the base query with level conditions
+ query := bson.M{
+ "$and": []bson.M{
+ {"app_manger_level": bson.M{"$in": []int64{level1, level2}}},
+ },
+ }
+
+ // Add userID and userName conditions to the query if they are provided
+ if userID != "" || nickName != "" {
+ userConditions := []bson.M{}
+ if userID != "" {
+ // Use regex for userID
+ regexPattern := primitive.Regex{Pattern: userID, Options: "i"} // 'i' for case-insensitive matching
+ userConditions = append(userConditions, bson.M{"user_id": regexPattern})
+ }
+ if nickName != "" {
+ // Use regex for userName
+ regexPattern := primitive.Regex{Pattern: nickName, Options: "i"} // 'i' for case-insensitive matching
+ userConditions = append(userConditions, bson.M{"nickname": regexPattern})
+ }
+ query["$and"] = append(query["$and"].([]bson.M), bson.M{"$or": userConditions})
+ }
+
+ // Perform the paginated search
+ return mgoutil.FindPage[*relation.UserModel](ctx, u.coll, query, pagination)
+}
+
+func (u *UserMgo) GetAllUserID(ctx context.Context, pagination pagination.Pagination) (int64, []string, error) {
+ return mgoutil.FindPage[string](ctx, u.coll, bson.M{}, pagination, options.Find().SetProjection(bson.M{"_id": 0, "user_id": 1}))
+}
+
+func (u *UserMgo) Exist(ctx context.Context, userID string) (exist bool, err error) {
+ return mgoutil.Exist(ctx, u.coll, bson.M{"user_id": userID})
+}
+
+func (u *UserMgo) GetUserGlobalRecvMsgOpt(ctx context.Context, userID string) (opt int, err error) {
+ return mgoutil.FindOne[int](ctx, u.coll, bson.M{"user_id": userID}, options.FindOne().SetProjection(bson.M{"_id": 0, "global_recv_msg_opt": 1}))
+}
+
+func (u *UserMgo) CountTotal(ctx context.Context, before *time.Time) (count int64, err error) {
+ if before == nil {
+ return mgoutil.Count(ctx, u.coll, bson.M{})
+ }
+ return mgoutil.Count(ctx, u.coll, bson.M{"create_time": bson.M{"$lt": before}})
+}
+
+func (u *UserMgo) AddUserCommand(ctx context.Context, userID string, Type int32, UUID string, value string, ex string) error {
+ collection := u.coll.Database().Collection("userCommands")
+
+ // Create a new document instead of updating an existing one
+ doc := bson.M{
+ "userID": userID,
+ "type": Type,
+ "uuid": UUID,
+ "createTime": time.Now().Unix(), // assuming you want the creation time in Unix timestamp
+ "value": value,
+ "ex": ex,
+ }
+
+ _, err := collection.InsertOne(ctx, doc)
+ return err
+}
+
+func (u *UserMgo) DeleteUserCommand(ctx context.Context, userID string, Type int32, UUID string) error {
+ collection := u.coll.Database().Collection("userCommands")
+
+ filter := bson.M{"userID": userID, "type": Type, "uuid": UUID}
+
+ result, err := collection.DeleteOne(ctx, filter)
+ if err != nil {
+ return err
+ }
+ if result.DeletedCount == 0 {
+ // No matching record found to delete
+ return errs.Wrap(errs.ErrRecordNotFound)
+ }
+ return nil
+}
+func (u *UserMgo) UpdateUserCommand(ctx context.Context, userID string, Type int32, UUID string, val map[string]any) error {
+ if len(val) == 0 {
+ return nil
+ }
+
+ collection := u.coll.Database().Collection("userCommands")
+
+ filter := bson.M{"userID": userID, "type": Type, "uuid": UUID}
+ update := bson.M{"$set": val}
+
+ result, err := collection.UpdateOne(ctx, filter, update)
+ if err != nil {
+ return err
+ }
+
+ if result.MatchedCount == 0 {
+ // No records found to update
+ return errs.Wrap(errs.ErrRecordNotFound)
+ }
+
+ return nil
+}
+
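+// GetUserCommand returns all command documents of the given type for a user.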
+func (u *UserMgo) GetUserCommand(ctx context.Context, userID string, Type int32) ([]*user.CommandInfoResp, error) {
+ collection := u.coll.Database().Collection("userCommands")
+ filter := bson.M{"userID": userID, "type": Type}
+
+ cursor, err := collection.Find(ctx, filter)
+ if err != nil {
+ return nil, err
+ }
+ defer cursor.Close(ctx)
+
+ // Initialize commands as a slice of pointers
+ commands := []*user.CommandInfoResp{}
+
+ for cursor.Next(ctx) {
+ var document struct {
+ Type int32 `bson:"type"`
+ UUID string `bson:"uuid"`
+ Value string `bson:"value"`
+ CreateTime int64 `bson:"createTime"`
+ Ex string `bson:"ex"`
+ }
+
+ if err := cursor.Decode(&document); err != nil {
+ return nil, err
+ }
+
+ commandInfo := &user.CommandInfoResp{
+ Type: document.Type,
+ Uuid: document.UUID,
+ Value: document.Value,
+ CreateTime: document.CreateTime,
+ Ex: document.Ex,
+ }
+
+ commands = append(commands, commandInfo)
+ }
+
+ if err := cursor.Err(); err != nil {
+ return nil, err
+ }
+
+ return commands, nil
+}
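+
+// GetAllUserCommand returns every command document for a user, regardless of type.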
+func (u *UserMgo) GetAllUserCommand(ctx context.Context, userID string) ([]*user.AllCommandInfoResp, error) {
+ collection := u.coll.Database().Collection("userCommands")
+ filter := bson.M{"userID": userID}
+
+ cursor, err := collection.Find(ctx, filter)
+ if err != nil {
+ return nil, err
+ }
+ defer cursor.Close(ctx)
+
+ // Initialize commands as a slice of pointers
+ commands := []*user.AllCommandInfoResp{}
+
+ for cursor.Next(ctx) {
+ var document struct {
+ Type int32 `bson:"type"`
+ UUID string `bson:"uuid"`
+ Value string `bson:"value"`
+ CreateTime int64 `bson:"createTime"`
+ Ex string `bson:"ex"`
+ }
+
+ if err := cursor.Decode(&document); err != nil {
+ return nil, err
+ }
+
+ commandInfo := &user.AllCommandInfoResp{
+ Type: document.Type,
+ Uuid: document.UUID,
+ Value: document.Value,
+ CreateTime: document.CreateTime,
+ Ex: document.Ex,
+ }
+
+ commands = append(commands, commandInfo)
+ }
+
+ if err := cursor.Err(); err != nil {
+ return nil, err
+ }
+ return commands, nil
+}
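+
+// CountRangeEverydayTotal aggregates per-day user registration counts for users created in [start, end).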
+func (u *UserMgo) CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error) {
+ pipeline := bson.A{
+ bson.M{
+ "$match": bson.M{
+ "create_time": bson.M{
+ "$gte": start,
+ "$lt": end,
+ },
+ },
+ },
+ bson.M{
+ "$group": bson.M{
+ "_id": bson.M{
+ "$dateToString": bson.M{
+ "format": "%Y-%m-%d",
+ "date": "$create_time",
+ },
+ },
+ "count": bson.M{
+ "$sum": 1,
+ },
+ },
+ },
+ }
+ type Item struct {
+ Date string `bson:"_id"`
+ Count int64 `bson:"count"`
+ }
+ items, err := mgoutil.Aggregate[Item](ctx, u.coll, pipeline)
+ if err != nil {
+ return nil, err
+ }
+ res := make(map[string]int64, len(items))
+ for _, item := range items {
+ res[item.Date] = item.Count
+ }
+ return res, nil
+}
diff --git a/pkg/common/db/relation/black_model.go b/pkg/common/db/relation/black_model.go
deleted file mode 100644
index 34123c7a3..000000000
--- a/pkg/common/db/relation/black_model.go
+++ /dev/null
@@ -1,111 +0,0 @@
-// Copyright © 2023 OpenIM. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package relation
-
-import (
- "context"
-
- "github.com/OpenIMSDK/tools/errs"
-
- "github.com/OpenIMSDK/tools/ormutil"
-
- "gorm.io/gorm"
-
- "github.com/OpenIMSDK/tools/utils"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
-)
-
-type BlackGorm struct {
- *MetaDB
-}
-
-func NewBlackGorm(db *gorm.DB) relation.BlackModelInterface {
- return &BlackGorm{NewMetaDB(db, &relation.BlackModel{})}
-}
-
-func (b *BlackGorm) Create(ctx context.Context, blacks []*relation.BlackModel) (err error) {
- return utils.Wrap(b.db(ctx).Create(&blacks).Error, "")
-}
-
-func (b *BlackGorm) Delete(ctx context.Context, blacks []*relation.BlackModel) (err error) {
- return utils.Wrap(b.db(ctx).Delete(blacks).Error, "")
-}
-
-func (b *BlackGorm) UpdateByMap(
- ctx context.Context,
- ownerUserID, blockUserID string,
- args map[string]interface{},
-) (err error) {
- return utils.Wrap(
- b.db(ctx).Where("block_user_id = ? and block_user_id = ?", ownerUserID, blockUserID).Updates(args).Error,
- "",
- )
-}
-
-func (b *BlackGorm) Update(ctx context.Context, blacks []*relation.BlackModel) (err error) {
- return utils.Wrap(b.db(ctx).Updates(&blacks).Error, "")
-}
-
-func (b *BlackGorm) Find(
- ctx context.Context,
- blacks []*relation.BlackModel,
-) (blackList []*relation.BlackModel, err error) {
- var where [][]interface{}
- for _, black := range blacks {
- where = append(where, []interface{}{black.OwnerUserID, black.BlockUserID})
- }
- return blackList, utils.Wrap(
- b.db(ctx).Where("(owner_user_id, block_user_id) in ?", where).Find(&blackList).Error,
- "",
- )
-}
-
-func (b *BlackGorm) Take(ctx context.Context, ownerUserID, blockUserID string) (black *relation.BlackModel, err error) {
- black = &relation.BlackModel{}
- return black, utils.Wrap(
- b.db(ctx).Where("owner_user_id = ? and block_user_id = ?", ownerUserID, blockUserID).Take(black).Error,
- "",
- )
-}
-
-func (b *BlackGorm) FindOwnerBlacks(
- ctx context.Context,
- ownerUserID string,
- pageNumber, showNumber int32,
-) (blacks []*relation.BlackModel, total int64, err error) {
- err = b.db(ctx).Count(&total).Error
- if err != nil {
- return nil, 0, utils.Wrap(err, "")
- }
- totalUint32, blacks, err := ormutil.GormPage[relation.BlackModel](
- b.db(ctx).Where("owner_user_id = ?", ownerUserID),
- pageNumber,
- showNumber,
- )
- total = int64(totalUint32)
- return
-}
-
-func (b *BlackGorm) FindBlackUserIDs(ctx context.Context, ownerUserID string) (blackUserIDs []string, err error) {
- return blackUserIDs, utils.Wrap(
- b.db(ctx).Where("owner_user_id = ?", ownerUserID).Pluck("block_user_id", &blackUserIDs).Error,
- "",
- )
-}
-
-func (b *BlackGorm) FindOwnerBlackInfos(ctx context.Context, ownerUserID string, userIDs []string) (blacks []*relation.BlackModel, err error) {
- return blacks, errs.Wrap(b.db(ctx).Where("owner_user_id = ? and block_user_id in ?", ownerUserID, userIDs).Find(&blacks).Error)
-}
diff --git a/pkg/common/db/relation/chat_log_model.go b/pkg/common/db/relation/chat_log_model.go
deleted file mode 100644
index f183a543f..000000000
--- a/pkg/common/db/relation/chat_log_model.go
+++ /dev/null
@@ -1,63 +0,0 @@
-// Copyright © 2023 OpenIM. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package relation
-
-import (
- "github.com/golang/protobuf/jsonpb"
- "github.com/jinzhu/copier"
- "google.golang.org/protobuf/proto"
- "gorm.io/gorm"
-
- "github.com/OpenIMSDK/protocol/constant"
- pbmsg "github.com/OpenIMSDK/protocol/msg"
- sdkws "github.com/OpenIMSDK/protocol/sdkws"
- "github.com/OpenIMSDK/tools/utils"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
-)
-
-type ChatLogGorm struct {
- *MetaDB
-}
-
-func NewChatLogGorm(db *gorm.DB) relation.ChatLogModelInterface {
- return &ChatLogGorm{NewMetaDB(db, &relation.ChatLogModel{})}
-}
-
-func (c *ChatLogGorm) Create(msg *pbmsg.MsgDataToMQ) error {
- chatLog := new(relation.ChatLogModel)
- copier.Copy(chatLog, msg.MsgData)
- switch msg.MsgData.SessionType {
- case constant.GroupChatType, constant.SuperGroupChatType:
- chatLog.RecvID = msg.MsgData.GroupID
- case constant.SingleChatType:
- chatLog.RecvID = msg.MsgData.RecvID
- }
- if msg.MsgData.ContentType >= constant.NotificationBegin && msg.MsgData.ContentType <= constant.NotificationEnd {
- var tips sdkws.TipsComm
- _ = proto.Unmarshal(msg.MsgData.Content, &tips)
- marshaler := jsonpb.Marshaler{
- OrigName: true,
- EnumsAsInts: false,
- EmitDefaults: false,
- }
- chatLog.Content, _ = marshaler.MarshalToString(&tips)
- } else {
- chatLog.Content = string(msg.MsgData.Content)
- }
- chatLog.CreateTime = utils.UnixMillSecondToTime(msg.MsgData.CreateTime)
- chatLog.SendTime = utils.UnixMillSecondToTime(msg.MsgData.SendTime)
- return c.DB.Create(chatLog).Error
-}
diff --git a/pkg/common/db/relation/conversation_model.go b/pkg/common/db/relation/conversation_model.go
deleted file mode 100644
index f39047bf6..000000000
--- a/pkg/common/db/relation/conversation_model.go
+++ /dev/null
@@ -1,250 +0,0 @@
-// Copyright © 2023 OpenIM. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package relation
-
-import (
- "context"
-
- "github.com/OpenIMSDK/tools/errs"
- "gorm.io/gorm"
-
- "github.com/OpenIMSDK/protocol/constant"
- "github.com/OpenIMSDK/tools/utils"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
-)
-
-type ConversationGorm struct {
- *MetaDB
-}
-
-func NewConversationGorm(db *gorm.DB) relation.ConversationModelInterface {
- return &ConversationGorm{NewMetaDB(db, &relation.ConversationModel{})}
-}
-
-func (c *ConversationGorm) NewTx(tx any) relation.ConversationModelInterface {
- return &ConversationGorm{NewMetaDB(tx.(*gorm.DB), &relation.ConversationModel{})}
-}
-
-func (c *ConversationGorm) Create(ctx context.Context, conversations []*relation.ConversationModel) (err error) {
- return utils.Wrap(c.db(ctx).Create(&conversations).Error, "")
-}
-
-func (c *ConversationGorm) Delete(ctx context.Context, groupIDs []string) (err error) {
- return utils.Wrap(c.db(ctx).Where("group_id in (?)", groupIDs).Delete(&relation.ConversationModel{}).Error, "")
-}
-
-func (c *ConversationGorm) UpdateByMap(
- ctx context.Context,
- userIDList []string,
- conversationID string,
- args map[string]interface{},
-) (rows int64, err error) {
- result := c.db(ctx).Where("owner_user_id IN (?) and conversation_id=?", userIDList, conversationID).Updates(args)
- return result.RowsAffected, utils.Wrap(result.Error, "")
-}
-
-func (c *ConversationGorm) Update(ctx context.Context, conversation *relation.ConversationModel) (err error) {
- return utils.Wrap(
- c.db(ctx).
- Where("owner_user_id = ? and conversation_id = ?", conversation.OwnerUserID, conversation.ConversationID).
- Updates(conversation).
- Error,
- "",
- )
-}
-
-func (c *ConversationGorm) Find(
- ctx context.Context,
- ownerUserID string,
- conversationIDs []string,
-) (conversations []*relation.ConversationModel, err error) {
- err = utils.Wrap(
- c.db(ctx).
- Where("owner_user_id=? and conversation_id IN (?)", ownerUserID, conversationIDs).
- Find(&conversations).
- Error,
- "",
- )
- return conversations, err
-}
-
-func (c *ConversationGorm) Take(
- ctx context.Context,
- userID, conversationID string,
-) (conversation *relation.ConversationModel, err error) {
- cc := &relation.ConversationModel{}
- return cc, utils.Wrap(
- c.db(ctx).Where("conversation_id = ? And owner_user_id = ?", conversationID, userID).Take(cc).Error,
- "",
- )
-}
-
-func (c *ConversationGorm) FindUserID(
- ctx context.Context,
- userIDs []string,
- conversationIDs []string,
-) (existUserID []string, err error) {
- return existUserID, utils.Wrap(
- c.db(ctx).
- Where(" owner_user_id IN (?) and conversation_id in (?)", userIDs, conversationIDs).
- Pluck("owner_user_id", &existUserID).
- Error,
- "",
- )
-}
-
-func (c *ConversationGorm) FindConversationID(
- ctx context.Context,
- userID string,
- conversationIDList []string,
-) (existConversationID []string, err error) {
- return existConversationID, utils.Wrap(
- c.db(ctx).
- Where(" conversation_id IN (?) and owner_user_id=?", conversationIDList, userID).
- Pluck("conversation_id", &existConversationID).
- Error,
- "",
- )
-}
-
-func (c *ConversationGorm) FindUserIDAllConversationID(
- ctx context.Context,
- userID string,
-) (conversationIDList []string, err error) {
- return conversationIDList, utils.Wrap(
- c.db(ctx).Where("owner_user_id=?", userID).Pluck("conversation_id", &conversationIDList).Error,
- "",
- )
-}
-
-func (c *ConversationGorm) FindUserIDAllConversations(
- ctx context.Context,
- userID string,
-) (conversations []*relation.ConversationModel, err error) {
- return conversations, utils.Wrap(c.db(ctx).Where("owner_user_id=?", userID).Find(&conversations).Error, "")
-}
-
-func (c *ConversationGorm) FindRecvMsgNotNotifyUserIDs(
- ctx context.Context,
- groupID string,
-) (userIDs []string, err error) {
- return userIDs, utils.Wrap(
- c.db(ctx).
- Where("group_id = ? and recv_msg_opt = ?", groupID, constant.ReceiveNotNotifyMessage).
- Pluck("owner_user_id", &userIDs).
- Error,
- "",
- )
-}
-
-func (c *ConversationGorm) FindSuperGroupRecvMsgNotNotifyUserIDs(
- ctx context.Context,
- groupID string,
-) (userIDs []string, err error) {
- return userIDs, utils.Wrap(
- c.db(ctx).
- Where("group_id = ? and recv_msg_opt = ? and conversation_type = ?", groupID, constant.ReceiveNotNotifyMessage, constant.SuperGroupChatType).
- Pluck("owner_user_id", &userIDs).
- Error,
- "",
- )
-}
-
-func (c *ConversationGorm) GetUserRecvMsgOpt(
- ctx context.Context,
- ownerUserID, conversationID string,
-) (opt int, err error) {
- var conversation relation.ConversationModel
- return int(
- conversation.RecvMsgOpt,
- ), utils.Wrap(
- c.db(ctx).
- Where("conversation_id = ? And owner_user_id = ?", conversationID, ownerUserID).
- Select("recv_msg_opt").
- Find(&conversation).
- Error,
- "",
- )
-}
-
-func (c *ConversationGorm) GetAllConversationIDs(ctx context.Context) (conversationIDs []string, err error) {
- return conversationIDs, utils.Wrap(
- c.db(ctx).Distinct("conversation_id").Pluck("conversation_id", &conversationIDs).Error,
- "",
- )
-}
-
-func (c *ConversationGorm) GetAllConversationIDsNumber(ctx context.Context) (int64, error) {
- var num int64
- err := c.db(ctx).Select("COUNT(DISTINCT conversation_id)").Model(&relation.ConversationModel{}).Count(&num).Error
- return num, errs.Wrap(err)
-}
-
-func (c *ConversationGorm) PageConversationIDs(ctx context.Context, pageNumber, showNumber int32) (conversationIDs []string, err error) {
- err = c.db(ctx).Distinct("conversation_id").Limit(int(showNumber)).Offset(int((pageNumber-1)*showNumber)).Pluck("conversation_id", &conversationIDs).Error
- err = errs.Wrap(err)
- return
-}
-
-func (c *ConversationGorm) GetUserAllHasReadSeqs(
- ctx context.Context,
- ownerUserID string,
-) (hasReadSeqs map[string]int64, err error) {
- return nil, nil
-}
-
-func (c *ConversationGorm) GetConversationsByConversationID(
- ctx context.Context,
- conversationIDs []string,
-) (conversations []*relation.ConversationModel, err error) {
- return conversations, utils.Wrap(
- c.db(ctx).Where("conversation_id IN (?)", conversationIDs).Find(&conversations).Error,
- "",
- )
-}
-
-func (c *ConversationGorm) GetConversationIDsNeedDestruct(
- ctx context.Context,
-) (conversations []*relation.ConversationModel, err error) {
- return conversations, utils.Wrap(
- c.db(ctx).
- Where("is_msg_destruct = 1 && msg_destruct_time != 0 && (UNIX_TIMESTAMP(NOW()) > (msg_destruct_time + UNIX_TIMESTAMP(latest_msg_destruct_time)) || latest_msg_destruct_time is NULL)").
- Find(&conversations).
- Error,
- "",
- )
-}
-
-func (c *ConversationGorm) GetConversationRecvMsgOpt(ctx context.Context, userID string, conversationID string) (int32, error) {
- var recvMsgOpt int32
- return recvMsgOpt, errs.Wrap(
- c.db(ctx).
- Model(&relation.ConversationModel{}).
- Where("conversation_id = ? and owner_user_id in ?", conversationID, userID).
- Pluck("recv_msg_opt", &recvMsgOpt).
- Error,
- )
-}
-
-func (c *ConversationGorm) GetConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) ([]string, error) {
- var userIDs []string
- return userIDs, errs.Wrap(
- c.db(ctx).
- Model(&relation.ConversationModel{}).
- Where("conversation_id = ? and recv_msg_opt <> ?", conversationID, constant.ReceiveMessage).
- Pluck("owner_user_id", &userIDs).Error,
- )
-}
diff --git a/pkg/common/db/relation/friend_model.go b/pkg/common/db/relation/friend_model.go
deleted file mode 100644
index 869254455..000000000
--- a/pkg/common/db/relation/friend_model.go
+++ /dev/null
@@ -1,193 +0,0 @@
-// Copyright © 2023 OpenIM. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package relation
-
-import (
- "context"
-
- "gorm.io/gorm"
-
- "github.com/OpenIMSDK/tools/utils"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
-)
-
-type FriendGorm struct {
- *MetaDB
-}
-
-func NewFriendGorm(db *gorm.DB) relation.FriendModelInterface {
- return &FriendGorm{NewMetaDB(db, &relation.FriendModel{})}
-}
-
-func (f *FriendGorm) NewTx(tx any) relation.FriendModelInterface {
- return &FriendGorm{NewMetaDB(tx.(*gorm.DB), &relation.FriendModel{})}
-}
-
-// Insert multiple records.
-func (f *FriendGorm) Create(ctx context.Context, friends []*relation.FriendModel) (err error) {
- return utils.Wrap(f.db(ctx).Create(&friends).Error, "")
-}
-
-// Delete the friends specified by ownerUserID.
-func (f *FriendGorm) Delete(ctx context.Context, ownerUserID string, friendUserIDs []string) (err error) {
- err = utils.Wrap(
- f.db(ctx).
- Where("owner_user_id = ? AND friend_user_id in ( ?)", ownerUserID, friendUserIDs).
- Delete(&relation.FriendModel{}).
- Error,
- "",
- )
- return err
-}
-
-// Update a single friend record of ownerUserID; zero values are updated as well.
-func (f *FriendGorm) UpdateByMap(
- ctx context.Context,
- ownerUserID string,
- friendUserID string,
- args map[string]interface{},
-) (err error) {
- return utils.Wrap(
- f.db(ctx).Where("owner_user_id = ? AND friend_user_id = ? ", ownerUserID, friendUserID).Updates(args).Error,
- "",
- )
-}
-
-// Update the non-zero fields of friend records.
-func (f *FriendGorm) Update(ctx context.Context, friends []*relation.FriendModel) (err error) {
- return utils.Wrap(f.db(ctx).Updates(&friends).Error, "")
-}
-
-// Update the friend remark (zero values are also supported).
-func (f *FriendGorm) UpdateRemark(ctx context.Context, ownerUserID, friendUserID, remark string) (err error) {
- if remark != "" {
- return utils.Wrap(
- f.db(ctx).
- Where("owner_user_id = ? and friend_user_id = ?", ownerUserID, friendUserID).
- Update("remark", remark).
- Error,
- "",
- )
- }
- m := make(map[string]interface{}, 1)
- m["remark"] = ""
- return utils.Wrap(f.db(ctx).Where("owner_user_id = ?", ownerUserID).Updates(m).Error, "")
-}
-
-// Get a single friend record; return an error if not found.
-func (f *FriendGorm) Take(
- ctx context.Context,
- ownerUserID, friendUserID string,
-) (friend *relation.FriendModel, err error) {
- friend = &relation.FriendModel{}
- return friend, utils.Wrap(
- f.db(ctx).Where("owner_user_id = ? and friend_user_id", ownerUserID, friendUserID).Take(friend).Error,
- "",
- )
-}
-
-// Find the friend relationship; if it is mutual, both records are returned.
-func (f *FriendGorm) FindUserState(
- ctx context.Context,
- userID1, userID2 string,
-) (friends []*relation.FriendModel, err error) {
- return friends, utils.Wrap(
- f.db(ctx).
- Where("(owner_user_id = ? and friend_user_id = ?) or (owner_user_id = ? and friend_user_id = ?)", userID1, userID2, userID2, userID1).
- Find(&friends).
- Error,
- "",
- )
-}
-
-// Get the owner's friends among friendUserIDs; no error is returned if some friendUserIDs do not exist.
-func (f *FriendGorm) FindFriends(
- ctx context.Context,
- ownerUserID string,
- friendUserIDs []string,
-) (friends []*relation.FriendModel, err error) {
- return friends, utils.Wrap(
- f.db(ctx).Where("owner_user_id = ? AND friend_user_id in (?)", ownerUserID, friendUserIDs).Find(&friends).Error,
- "",
- )
-}
-
-// Get which of ownerUserIDs have added friendUserID; no error is returned if some ownerUserIDs do not exist.
-func (f *FriendGorm) FindReversalFriends(
- ctx context.Context,
- friendUserID string,
- ownerUserIDs []string,
-) (friends []*relation.FriendModel, err error) {
- return friends, utils.Wrap(
- f.db(ctx).Where("friend_user_id = ? AND owner_user_id in (?)", friendUserID, ownerUserIDs).Find(&friends).Error,
- "",
- )
-}
-
-// Get ownerUserID's friend list, with pagination.
-func (f *FriendGorm) FindOwnerFriends(
- ctx context.Context,
- ownerUserID string,
- pageNumber, showNumber int32,
-) (friends []*relation.FriendModel, total int64, err error) {
- err = f.DB.Model(&relation.FriendModel{}).Where("owner_user_id = ? ", ownerUserID).Count(&total).Error
- if err != nil {
- return nil, 0, utils.Wrap(err, "")
- }
- err = utils.Wrap(
- f.db(ctx).
- Where("owner_user_id = ? ", ownerUserID).
- Limit(int(showNumber)).
- Offset(int((pageNumber-1)*showNumber)).
- Find(&friends).
- Error,
- "",
- )
- return
-}
-
-// Get the users who have added friendUserID, with pagination.
-func (f *FriendGorm) FindInWhoseFriends(
- ctx context.Context,
- friendUserID string,
- pageNumber, showNumber int32,
-) (friends []*relation.FriendModel, total int64, err error) {
- err = f.DB.Model(&relation.FriendModel{}).Where("friend_user_id = ? ", friendUserID).Count(&total).Error
- if err != nil {
- return nil, 0, utils.Wrap(err, "")
- }
- err = utils.Wrap(
- f.db(ctx).
- Where("friend_user_id = ? ", friendUserID).
- Limit(int(showNumber)).
- Offset(int((pageNumber-1)*showNumber)).
- Find(&friends).
- Error,
- "",
- )
- return
-}
-
-func (f *FriendGorm) FindFriendUserIDs(ctx context.Context, ownerUserID string) (friendUserIDs []string, err error) {
- return friendUserIDs, utils.Wrap(
- f.db(ctx).
- Model(&relation.FriendModel{}).
- Where("owner_user_id = ? ", ownerUserID).
- Pluck("friend_user_id", &friendUserIDs).
- Error,
- "",
- )
-}
diff --git a/pkg/common/db/relation/friend_request_model.go b/pkg/common/db/relation/friend_request_model.go
deleted file mode 100644
index 5678f7b7b..000000000
--- a/pkg/common/db/relation/friend_request_model.go
+++ /dev/null
@@ -1,164 +0,0 @@
-// Copyright © 2023 OpenIM. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package relation
-
-import (
- "context"
-
- "gorm.io/gorm"
-
- "github.com/OpenIMSDK/tools/utils"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
-)
-
-type FriendRequestGorm struct {
- *MetaDB
-}
-
-func NewFriendRequestGorm(db *gorm.DB) relation.FriendRequestModelInterface {
- return &FriendRequestGorm{NewMetaDB(db, &relation.FriendRequestModel{})}
-}
-
-func (f *FriendRequestGorm) NewTx(tx any) relation.FriendRequestModelInterface {
- return &FriendRequestGorm{NewMetaDB(tx.(*gorm.DB), &relation.FriendRequestModel{})}
-}
-
-// Insert multiple records.
-func (f *FriendRequestGorm) Create(ctx context.Context, friendRequests []*relation.FriendRequestModel) (err error) {
- return utils.Wrap(f.db(ctx).Create(&friendRequests).Error, "")
-}
-
-// Delete a record.
-func (f *FriendRequestGorm) Delete(ctx context.Context, fromUserID, toUserID string) (err error) {
- return utils.Wrap(
- f.db(ctx).
- Where("from_user_id = ? AND to_user_id = ?", fromUserID, toUserID).
- Delete(&relation.FriendRequestModel{}).
- Error,
- "",
- )
-}
-
-// Update via map; zero values are updated as well.
-func (f *FriendRequestGorm) UpdateByMap(
- ctx context.Context,
- fromUserID string,
- toUserID string,
- args map[string]interface{},
-) (err error) {
- return utils.Wrap(
- f.db(ctx).
- Model(&relation.FriendRequestModel{}).
- Where("from_user_id = ? AND to_user_id =?", fromUserID, toUserID).
- Updates(args).
- Error,
- "",
- )
-}
-
-// Update a record (non-zero values only).
-func (f *FriendRequestGorm) Update(ctx context.Context, friendRequest *relation.FriendRequestModel) (err error) {
- fr2 := *friendRequest
- fr2.FromUserID = ""
- fr2.ToUserID = ""
- return utils.Wrap(
- f.db(ctx).
- Where("from_user_id = ? AND to_user_id =?", friendRequest.FromUserID, friendRequest.ToUserID).
- Updates(fr2).
- Error,
- "",
- )
-}
-
-// Get the friend request from the specified user; no error is returned if not found.
-func (f *FriendRequestGorm) Find(
- ctx context.Context,
- fromUserID, toUserID string,
-) (friendRequest *relation.FriendRequestModel, err error) {
- friendRequest = &relation.FriendRequestModel{}
- err = utils.Wrap(
- f.db(ctx).Where("from_user_id = ? and to_user_id = ?", fromUserID, toUserID).Find(friendRequest).Error,
- "",
- )
- return friendRequest, err
-}
-
-func (f *FriendRequestGorm) Take(
- ctx context.Context,
- fromUserID, toUserID string,
-) (friendRequest *relation.FriendRequestModel, err error) {
- friendRequest = &relation.FriendRequestModel{}
- err = utils.Wrap(
- f.db(ctx).Where("from_user_id = ? and to_user_id = ?", fromUserID, toUserID).Take(friendRequest).Error,
- "",
- )
- return friendRequest, err
-}
-
-// Get the list of friend requests received by toUserID.
-func (f *FriendRequestGorm) FindToUserID(
- ctx context.Context,
- toUserID string,
- pageNumber, showNumber int32,
-) (friendRequests []*relation.FriendRequestModel, total int64, err error) {
- err = f.db(ctx).Model(&relation.FriendRequestModel{}).Where("to_user_id = ? ", toUserID).Count(&total).Error
- if err != nil {
- return nil, 0, utils.Wrap(err, "")
- }
- err = utils.Wrap(
- f.db(ctx).
- Where("to_user_id = ? ", toUserID).
- Limit(int(showNumber)).
- Offset(int(pageNumber-1)*int(showNumber)).
- Find(&friendRequests).
- Error,
- "",
- )
- return
-}
-
-// Get the list of friend requests sent by fromUserID.
-func (f *FriendRequestGorm) FindFromUserID(
- ctx context.Context,
- fromUserID string,
- pageNumber, showNumber int32,
-) (friendRequests []*relation.FriendRequestModel, total int64, err error) {
- err = f.db(ctx).Model(&relation.FriendRequestModel{}).Where("from_user_id = ? ", fromUserID).Count(&total).Error
- if err != nil {
- return nil, 0, utils.Wrap(err, "")
- }
- err = utils.Wrap(
- f.db(ctx).
- Where("from_user_id = ? ", fromUserID).
- Limit(int(showNumber)).
- Offset(int(pageNumber-1)*int(showNumber)).
- Find(&friendRequests).
- Error,
- "",
- )
- return
-}
-
-func (f *FriendRequestGorm) FindBothFriendRequests(ctx context.Context, fromUserID, toUserID string) (friends []*relation.FriendRequestModel, err error) {
- err = utils.Wrap(
- f.db(ctx).
- Where("(from_user_id = ? AND to_user_id = ?) OR (from_user_id = ? AND to_user_id = ?)", fromUserID, toUserID, toUserID, fromUserID).
- Find(&friends).
- Error,
- "",
- )
- return
-}
diff --git a/pkg/common/db/relation/group_member_model.go b/pkg/common/db/relation/group_member_model.go
deleted file mode 100644
index 312e32054..000000000
--- a/pkg/common/db/relation/group_member_model.go
+++ /dev/null
@@ -1,197 +0,0 @@
-// Copyright © 2023 OpenIM. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package relation
-
-import (
- "context"
-
- "gorm.io/gorm"
-
- "github.com/OpenIMSDK/protocol/constant"
- "github.com/OpenIMSDK/tools/ormutil"
- "github.com/OpenIMSDK/tools/utils"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
-)
-
-var _ relation.GroupMemberModelInterface = (*GroupMemberGorm)(nil)
-
-type GroupMemberGorm struct {
- *MetaDB
-}
-
-func NewGroupMemberDB(db *gorm.DB) relation.GroupMemberModelInterface {
- return &GroupMemberGorm{NewMetaDB(db, &relation.GroupMemberModel{})}
-}
-
-func (g *GroupMemberGorm) NewTx(tx any) relation.GroupMemberModelInterface {
- return &GroupMemberGorm{NewMetaDB(tx.(*gorm.DB), &relation.GroupMemberModel{})}
-}
-
-func (g *GroupMemberGorm) Create(ctx context.Context, groupMemberList []*relation.GroupMemberModel) (err error) {
- return utils.Wrap(g.db(ctx).Create(&groupMemberList).Error, "")
-}
-
-func (g *GroupMemberGorm) Delete(ctx context.Context, groupID string, userIDs []string) (err error) {
- return utils.Wrap(
- g.db(ctx).Where("group_id = ? and user_id in (?)", groupID, userIDs).Delete(&relation.GroupMemberModel{}).Error,
- "",
- )
-}
-
-func (g *GroupMemberGorm) DeleteGroup(ctx context.Context, groupIDs []string) (err error) {
- return utils.Wrap(g.db(ctx).Where("group_id in (?)", groupIDs).Delete(&relation.GroupMemberModel{}).Error, "")
-}
-
-func (g *GroupMemberGorm) Update(ctx context.Context, groupID string, userID string, data map[string]any) (err error) {
- return utils.Wrap(g.db(ctx).Where("group_id = ? and user_id = ?", groupID, userID).Updates(data).Error, "")
-}
-
-func (g *GroupMemberGorm) UpdateRoleLevel(
- ctx context.Context,
- groupID string,
- userID string,
- roleLevel int32,
-) (rowsAffected int64, err error) {
- db := g.db(ctx).Where("group_id = ? and user_id = ?", groupID, userID).Updates(map[string]any{
- "role_level": roleLevel,
- })
- return db.RowsAffected, utils.Wrap(db.Error, "")
-}
-
-func (g *GroupMemberGorm) Find(
- ctx context.Context,
- groupIDs []string,
- userIDs []string,
- roleLevels []int32,
-) (groupMembers []*relation.GroupMemberModel, err error) {
- db := g.db(ctx)
- if len(groupIDs) > 0 {
- db = db.Where("group_id in (?)", groupIDs)
- }
- if len(userIDs) > 0 {
- db = db.Where("user_id in (?)", userIDs)
- }
- if len(roleLevels) > 0 {
- db = db.Where("role_level in (?)", roleLevels)
- }
- return groupMembers, utils.Wrap(db.Find(&groupMembers).Error, "")
-}
-
-func (g *GroupMemberGorm) Take(
- ctx context.Context,
- groupID string,
- userID string,
-) (groupMember *relation.GroupMemberModel, err error) {
- groupMember = &relation.GroupMemberModel{}
- return groupMember, utils.Wrap(
- g.db(ctx).Where("group_id = ? and user_id = ?", groupID, userID).Take(groupMember).Error,
- "",
- )
-}
-
-func (g *GroupMemberGorm) TakeOwner(
- ctx context.Context,
- groupID string,
-) (groupMember *relation.GroupMemberModel, err error) {
- groupMember = &relation.GroupMemberModel{}
- return groupMember, utils.Wrap(
- g.db(ctx).Where("group_id = ? and role_level = ?", groupID, constant.GroupOwner).Take(groupMember).Error,
- "",
- )
-}
-
-func (g *GroupMemberGorm) SearchMember(
- ctx context.Context,
- keyword string,
- groupIDs []string,
- userIDs []string,
- roleLevels []int32,
- pageNumber, showNumber int32,
-) (total uint32, groupList []*relation.GroupMemberModel, err error) {
- db := g.db(ctx)
- ormutil.GormIn(&db, "group_id", groupIDs)
- ormutil.GormIn(&db, "user_id", userIDs)
- ormutil.GormIn(&db, "role_level", roleLevels)
- return ormutil.GormSearch[relation.GroupMemberModel](db, []string{"nickname"}, keyword, pageNumber, showNumber)
-}
-
-func (g *GroupMemberGorm) MapGroupMemberNum(
- ctx context.Context,
- groupIDs []string,
-) (count map[string]uint32, err error) {
- return ormutil.MapCount(g.db(ctx).Where("group_id in (?)", groupIDs), "group_id")
-}
-
-func (g *GroupMemberGorm) FindJoinUserID(
- ctx context.Context,
- groupIDs []string,
-) (groupUsers map[string][]string, err error) {
- var groupMembers []*relation.GroupMemberModel
- if err := g.db(ctx).Select("group_id, user_id").Where("group_id in (?)", groupIDs).Find(&groupMembers).Error; err != nil {
- return nil, utils.Wrap(err, "")
- }
- groupUsers = make(map[string][]string)
- for _, item := range groupMembers {
- v, ok := groupUsers[item.GroupID]
- if !ok {
- groupUsers[item.GroupID] = []string{item.UserID}
- } else {
- groupUsers[item.GroupID] = append(v, item.UserID)
- }
- }
- return groupUsers, nil
-}
-
-func (g *GroupMemberGorm) FindMemberUserID(ctx context.Context, groupID string) (userIDs []string, err error) {
- return userIDs, utils.Wrap(g.db(ctx).Where("group_id = ?", groupID).Pluck("user_id", &userIDs).Error, "")
-}
-
-func (g *GroupMemberGorm) FindUserJoinedGroupID(ctx context.Context, userID string) (groupIDs []string, err error) {
- return groupIDs, utils.Wrap(g.db(ctx).Where("user_id = ?", userID).Pluck("group_id", &groupIDs).Error, "")
-}
-
-func (g *GroupMemberGorm) TakeGroupMemberNum(ctx context.Context, groupID string) (count int64, err error) {
- return count, utils.Wrap(g.db(ctx).Where("group_id = ?", groupID).Count(&count).Error, "")
-}
-
-func (g *GroupMemberGorm) FindUsersJoinedGroupID(ctx context.Context, userIDs []string) (map[string][]string, error) {
- var groupMembers []*relation.GroupMemberModel
- err := g.db(ctx).Select("group_id, user_id").Where("user_id IN (?)", userIDs).Find(&groupMembers).Error
- if err != nil {
- return nil, err
- }
- result := make(map[string][]string)
- for _, groupMember := range groupMembers {
- v, ok := result[groupMember.UserID]
- if !ok {
- result[groupMember.UserID] = []string{groupMember.GroupID}
- } else {
- result[groupMember.UserID] = append(v, groupMember.GroupID)
- }
- }
- return result, nil
-}
-
-func (g *GroupMemberGorm) FindUserManagedGroupID(ctx context.Context, userID string) (groupIDs []string, err error) {
- return groupIDs, utils.Wrap(
- g.db(ctx).
- Model(&relation.GroupMemberModel{}).
- Where("user_id = ? and (role_level = ? or role_level = ?)", userID, constant.GroupOwner, constant.GroupAdmin).
- Pluck("group_id", &groupIDs).
- Error,
- "",
- )
-}
diff --git a/pkg/common/db/relation/group_model.go b/pkg/common/db/relation/group_model.go
deleted file mode 100644
index 7a8eee9f0..000000000
--- a/pkg/common/db/relation/group_model.go
+++ /dev/null
@@ -1,106 +0,0 @@
-// Copyright © 2023 OpenIM. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package relation
-
-import (
- "context"
- "time"
-
- "github.com/OpenIMSDK/protocol/constant"
-
- "gorm.io/gorm"
-
- "github.com/OpenIMSDK/tools/errs"
- "github.com/OpenIMSDK/tools/ormutil"
- "github.com/OpenIMSDK/tools/utils"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
-)
-
-var _ relation.GroupModelInterface = (*GroupGorm)(nil)
-
-type GroupGorm struct {
- *MetaDB
-}
-
-func NewGroupDB(db *gorm.DB) relation.GroupModelInterface {
- return &GroupGorm{NewMetaDB(db, &relation.GroupModel{})}
-}
-
-func (g *GroupGorm) NewTx(tx any) relation.GroupModelInterface {
- return &GroupGorm{NewMetaDB(tx.(*gorm.DB), &relation.GroupModel{})}
-}
-
-func (g *GroupGorm) Create(ctx context.Context, groups []*relation.GroupModel) (err error) {
- return utils.Wrap(g.DB.Create(&groups).Error, "")
-}
-
-func (g *GroupGorm) UpdateMap(ctx context.Context, groupID string, args map[string]interface{}) (err error) {
- return utils.Wrap(g.DB.Where("group_id = ?", groupID).Model(&relation.GroupModel{}).Updates(args).Error, "")
-}
-
-func (g *GroupGorm) UpdateStatus(ctx context.Context, groupID string, status int32) (err error) {
- return utils.Wrap(g.DB.Where("group_id = ?", groupID).Model(&relation.GroupModel{}).Updates(map[string]any{"status": status}).Error, "")
-}
-
-func (g *GroupGorm) Find(ctx context.Context, groupIDs []string) (groups []*relation.GroupModel, err error) {
- return groups, utils.Wrap(g.DB.Where("group_id in (?)", groupIDs).Find(&groups).Error, "")
-}
-
-func (g *GroupGorm) Take(ctx context.Context, groupID string) (group *relation.GroupModel, err error) {
- group = &relation.GroupModel{}
- return group, utils.Wrap(g.DB.Where("group_id = ?", groupID).Take(group).Error, "")
-}
-
-func (g *GroupGorm) Search(ctx context.Context, keyword string, pageNumber, showNumber int32) (total uint32, groups []*relation.GroupModel, err error) {
- db := g.DB
- db = db.WithContext(ctx).Where("status!=?", constant.GroupStatusDismissed)
- return ormutil.GormSearch[relation.GroupModel](db, []string{"name"}, keyword, pageNumber, showNumber)
-}
-
-func (g *GroupGorm) GetGroupIDsByGroupType(ctx context.Context, groupType int) (groupIDs []string, err error) {
- return groupIDs, utils.Wrap(g.DB.Model(&relation.GroupModel{}).Where("group_type = ? ", groupType).Pluck("group_id", &groupIDs).Error, "")
-}
-
-func (g *GroupGorm) CountTotal(ctx context.Context, before *time.Time) (count int64, err error) {
- db := g.db(ctx).Model(&relation.GroupModel{})
- if before != nil {
- db = db.Where("create_time < ?", before)
- }
- if err := db.Count(&count).Error; err != nil {
- return 0, err
- }
- return count, nil
-}
-
-func (g *GroupGorm) CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error) {
- var res []struct {
- Date time.Time `gorm:"column:date"`
- Count int64 `gorm:"column:count"`
- }
- err := g.db(ctx).Model(&relation.GroupModel{}).Select("DATE(create_time) AS date, count(1) AS count").Where("create_time >= ? and create_time < ?", start, end).Group("date").Find(&res).Error
- if err != nil {
- return nil, errs.Wrap(err)
- }
- v := make(map[string]int64)
- for _, r := range res {
- v[r.Date.Format("2006-01-02")] = r.Count
- }
- return v, nil
-}
-
-func (g *GroupGorm) FindNotDismissedGroup(ctx context.Context, groupIDs []string) (groups []*relation.GroupModel, err error) {
- return groups, utils.Wrap(g.DB.Where("group_id in (?) and status != ?", groupIDs, constant.GroupStatusDismissed).Find(&groups).Error, "")
-}
diff --git a/pkg/common/db/relation/group_request_model.go b/pkg/common/db/relation/group_request_model.go
deleted file mode 100644
index af3f277e8..000000000
--- a/pkg/common/db/relation/group_request_model.go
+++ /dev/null
@@ -1,118 +0,0 @@
-// Copyright © 2023 OpenIM. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package relation
-
-import (
- "context"
-
- "github.com/OpenIMSDK/tools/ormutil"
-
- "gorm.io/gorm"
-
- "github.com/OpenIMSDK/tools/utils"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
-)
-
-type GroupRequestGorm struct {
- *MetaDB
-}
-
-func NewGroupRequest(db *gorm.DB) relation.GroupRequestModelInterface {
- return &GroupRequestGorm{
- NewMetaDB(db, &relation.GroupRequestModel{}),
- }
-}
-
-func (g *GroupRequestGorm) NewTx(tx any) relation.GroupRequestModelInterface {
- return &GroupRequestGorm{NewMetaDB(tx.(*gorm.DB), &relation.GroupRequestModel{})}
-}
-
-func (g *GroupRequestGorm) Create(ctx context.Context, groupRequests []*relation.GroupRequestModel) (err error) {
- return utils.Wrap(g.DB.WithContext(ctx).Create(&groupRequests).Error, utils.GetSelfFuncName())
-}
-
-func (g *GroupRequestGorm) Delete(ctx context.Context, groupID string, userID string) (err error) {
- return utils.Wrap(
- g.DB.WithContext(ctx).
- Where("group_id = ? and user_id = ? ", groupID, userID).
- Delete(&relation.GroupRequestModel{}).
- Error,
- utils.GetSelfFuncName(),
- )
-}
-
-func (g *GroupRequestGorm) UpdateHandler(
- ctx context.Context,
- groupID string,
- userID string,
- handledMsg string,
- handleResult int32,
-) (err error) {
- return utils.Wrap(
- g.DB.WithContext(ctx).
- Model(&relation.GroupRequestModel{}).
- Where("group_id = ? and user_id = ? ", groupID, userID).
- Updates(map[string]any{
- "handle_msg": handledMsg,
- "handle_result": handleResult,
- }).
- Error,
- utils.GetSelfFuncName(),
- )
-}
-
-func (g *GroupRequestGorm) Take(
- ctx context.Context,
- groupID string,
- userID string,
-) (groupRequest *relation.GroupRequestModel, err error) {
- groupRequest = &relation.GroupRequestModel{}
- return groupRequest, utils.Wrap(
- g.DB.WithContext(ctx).Where("group_id = ? and user_id = ? ", groupID, userID).Take(groupRequest).Error,
- utils.GetSelfFuncName(),
- )
-}
-
-func (g *GroupRequestGorm) Page(
- ctx context.Context,
- userID string,
- pageNumber, showNumber int32,
-) (total uint32, groups []*relation.GroupRequestModel, err error) {
- return ormutil.GormSearch[relation.GroupRequestModel](
- g.DB.WithContext(ctx).Where("user_id = ?", userID),
- nil,
- "",
- pageNumber,
- showNumber,
- )
-}
-
-func (g *GroupRequestGorm) PageGroup(
- ctx context.Context,
- groupIDs []string,
- pageNumber, showNumber int32,
-) (total uint32, groups []*relation.GroupRequestModel, err error) {
- return ormutil.GormPage[relation.GroupRequestModel](
- g.DB.WithContext(ctx).Where("group_id in ?", groupIDs),
- pageNumber,
- showNumber,
- )
-}
-
-func (g *GroupRequestGorm) FindGroupRequests(ctx context.Context, groupID string, userIDs []string) (total int64, groupRequests []*relation.GroupRequestModel, err error) {
- err = g.DB.WithContext(ctx).Where("group_id = ? and user_id in ?", groupID, userIDs).Find(&groupRequests).Error
- return int64(len(groupRequests)), groupRequests, utils.Wrap(err, utils.GetSelfFuncName())
-}
diff --git a/pkg/common/db/relation/log_model.go b/pkg/common/db/relation/log_model.go
deleted file mode 100644
index 53365ca5b..000000000
--- a/pkg/common/db/relation/log_model.go
+++ /dev/null
@@ -1,49 +0,0 @@
-package relation
-
-import (
- "context"
- "time"
-
- "github.com/OpenIMSDK/tools/errs"
- "github.com/OpenIMSDK/tools/ormutil"
- "gorm.io/gorm"
-
- relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
-)
-
-type LogGorm struct {
- db *gorm.DB
-}
-
-func (l *LogGorm) Create(ctx context.Context, log []*relationtb.Log) error {
- return errs.Wrap(l.db.WithContext(ctx).Create(log).Error)
-}
-
-func (l *LogGorm) Search(ctx context.Context, keyword string, start time.Time, end time.Time, pageNumber int32, showNumber int32) (uint32, []*relationtb.Log, error) {
- db := l.db.WithContext(ctx).Where("create_time >= ?", start)
- if end.UnixMilli() != 0 {
- db = l.db.WithContext(ctx).Where("create_time <= ?", end)
- }
- db = db.Order("create_time desc")
- return ormutil.GormSearch[relationtb.Log](db, []string{"user_id"}, keyword, pageNumber, showNumber)
-}
-
-func (l *LogGorm) Delete(ctx context.Context, logIDs []string, userID string) error {
- if userID == "" {
- return errs.Wrap(l.db.WithContext(ctx).Where("log_id in ?", logIDs).Delete(&relationtb.Log{}).Error)
- }
- return errs.Wrap(l.db.WithContext(ctx).Where("log_id in ? and user_id=?", logIDs, userID).Delete(&relationtb.Log{}).Error)
-}
-
-func (l *LogGorm) Get(ctx context.Context, logIDs []string, userID string) ([]*relationtb.Log, error) {
- var logs []*relationtb.Log
- if userID == "" {
- return logs, errs.Wrap(l.db.WithContext(ctx).Where("log_id in ?", logIDs).Find(&logs).Error)
- }
- return logs, errs.Wrap(l.db.WithContext(ctx).Where("log_id in ? and user_id=?", logIDs, userID).Find(&logs).Error)
-}
-
-func NewLogGorm(db *gorm.DB) relationtb.LogInterface {
- db.AutoMigrate(&relationtb.Log{})
- return &LogGorm{db: db}
-}
diff --git a/pkg/common/db/relation/mysql_init.go b/pkg/common/db/relation/mysql_init.go
deleted file mode 100644
index 41399d5ca..000000000
--- a/pkg/common/db/relation/mysql_init.go
+++ /dev/null
@@ -1,157 +0,0 @@
-// Copyright © 2023 OpenIM. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package relation
-
-import (
- "fmt"
- "time"
-
- "github.com/OpenIMSDK/tools/errs"
- "github.com/OpenIMSDK/tools/log"
- "github.com/OpenIMSDK/tools/mw/specialerror"
- mysqldriver "github.com/go-sql-driver/mysql"
- "gorm.io/driver/mysql"
- "gorm.io/gorm"
- "gorm.io/gorm/logger"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/config"
-)
-
-const (
- maxRetry = 100 // number of retries
-)
-
-type option struct {
- Username string
- Password string
- Address []string
- Database string
- LogLevel int
- SlowThreshold int
- MaxLifeTime int
- MaxOpenConn int
- MaxIdleConn int
- Connect func(dsn string, maxRetry int) (*gorm.DB, error)
-}
-
-// newMysqlGormDB Initialize the database connection.
-func newMysqlGormDB(o *option) (*gorm.DB, error) {
- err := maybeCreateTable(o)
- if err != nil {
- return nil, err
- }
- dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?charset=utf8mb4&parseTime=true&loc=Local",
- o.Username, o.Password, o.Address[0], o.Database)
- sqlLogger := log.NewSqlLogger(
- logger.LogLevel(o.LogLevel),
- true,
- time.Duration(o.SlowThreshold)*time.Millisecond,
- )
- db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{
- Logger: sqlLogger,
- })
- if err != nil {
- return nil, err
- }
- sqlDB, err := db.DB()
- if err != nil {
- return nil, err
- }
- sqlDB.SetConnMaxLifetime(time.Second * time.Duration(o.MaxLifeTime))
- sqlDB.SetMaxOpenConns(o.MaxOpenConn)
- sqlDB.SetMaxIdleConns(o.MaxIdleConn)
- return db, nil
-}
-
-// maybeCreateTable creates the database if it does not exist.
-func maybeCreateTable(o *option) error {
- dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?charset=utf8mb4&parseTime=true&loc=Local",
- o.Username, o.Password, o.Address[0], "mysql")
-
- var db *gorm.DB
- var err error
- if f := o.Connect; f != nil {
- db, err = f(dsn, maxRetry)
- } else {
- db, err = connectToDatabase(dsn, maxRetry)
- }
- if err != nil {
- panic(err.Error() + " Open failed " + dsn)
- }
-
- sqlDB, err := db.DB()
- if err != nil {
- return err
- }
- defer sqlDB.Close()
- sql := fmt.Sprintf(
- "CREATE DATABASE IF NOT EXISTS `%s` default charset utf8mb4 COLLATE utf8mb4_unicode_ci",
- o.Database,
- )
- err = db.Exec(sql).Error
- if err != nil {
- return fmt.Errorf("init db %w", err)
- }
- return nil
-}
-
-// connectToDatabase Connection retry for mysql.
-func connectToDatabase(dsn string, maxRetry int) (*gorm.DB, error) {
- var db *gorm.DB
- var err error
- for i := 0; i <= maxRetry; i++ {
- db, err = gorm.Open(mysql.Open(dsn), nil)
- if err == nil {
- return db, nil
- }
- if mysqlErr, ok := err.(*mysqldriver.MySQLError); ok && mysqlErr.Number == 1045 {
- return nil, err
- }
- time.Sleep(time.Duration(1) * time.Second)
- }
- return nil, err
-}
-
-// NewGormDB gorm mysql.
-func NewGormDB() (*gorm.DB, error) {
- specialerror.AddReplace(gorm.ErrRecordNotFound, errs.ErrRecordNotFound)
- specialerror.AddErrHandler(replaceDuplicateKey)
-
- return newMysqlGormDB(&option{
- Username: config.Config.Mysql.Username,
- Password: config.Config.Mysql.Password,
- Address: config.Config.Mysql.Address,
- Database: config.Config.Mysql.Database,
- LogLevel: config.Config.Mysql.LogLevel,
- SlowThreshold: config.Config.Mysql.SlowThreshold,
- MaxLifeTime: config.Config.Mysql.MaxLifeTime,
- MaxOpenConn: config.Config.Mysql.MaxOpenConn,
- MaxIdleConn: config.Config.Mysql.MaxIdleConn,
- })
-}
-
-func replaceDuplicateKey(err error) errs.CodeError {
- if IsMysqlDuplicateKey(err) {
- return errs.ErrDuplicateKey
- }
- return nil
-}
-
-func IsMysqlDuplicateKey(err error) bool {
- if mysqlErr, ok := err.(*mysqldriver.MySQLError); ok {
- return mysqlErr.Number == 1062
- }
- return false
-}
diff --git a/pkg/common/db/relation/mysql_init_test.go b/pkg/common/db/relation/mysql_init_test.go
deleted file mode 100644
index c321dfd9f..000000000
--- a/pkg/common/db/relation/mysql_init_test.go
+++ /dev/null
@@ -1,121 +0,0 @@
-package relation
-
-import (
- "context"
- "database/sql"
- "database/sql/driver"
- "errors"
- "fmt"
- "reflect"
- "testing"
-
- "gorm.io/driver/mysql"
- "gorm.io/gorm"
- "gorm.io/gorm/logger"
-)
-
-func TestMaybeCreateTable(t *testing.T) {
- t.Run("normal", func(t *testing.T) {
- err := maybeCreateTable(&option{
- Username: "root",
- Password: "openIM123",
- Address: []string{"172.28.0.1:13306"},
- Database: "openIM_v3",
- LogLevel: 4,
- SlowThreshold: 500,
- MaxOpenConn: 1000,
- MaxIdleConn: 100,
- MaxLifeTime: 60,
- Connect: connect(expectExec{
- query: "CREATE DATABASE IF NOT EXISTS `openIM_v3` default charset utf8mb4 COLLATE utf8mb4_unicode_ci",
- args: nil,
- }),
- })
- if err != nil {
- t.Fatal(err)
- }
- })
-
- t.Run("im-db", func(t *testing.T) {
- err := maybeCreateTable(&option{
- Username: "root",
- Password: "openIM123",
- Address: []string{"172.28.0.1:13306"},
- Database: "im-db",
- LogLevel: 4,
- SlowThreshold: 500,
- MaxOpenConn: 1000,
- MaxIdleConn: 100,
- MaxLifeTime: 60,
- Connect: connect(expectExec{
- query: "CREATE DATABASE IF NOT EXISTS `im-db` default charset utf8mb4 COLLATE utf8mb4_unicode_ci",
- args: nil,
- }),
- })
- if err != nil {
- t.Fatal(err)
- }
- })
-
- t.Run("err", func(t *testing.T) {
- e := errors.New("e")
- err := maybeCreateTable(&option{
- Username: "root",
- Password: "openIM123",
- Address: []string{"172.28.0.1:13306"},
- Database: "openIM_v3",
- LogLevel: 4,
- SlowThreshold: 500,
- MaxOpenConn: 1000,
- MaxIdleConn: 100,
- MaxLifeTime: 60,
- Connect: connect(expectExec{
- err: e,
- }),
- })
- if !errors.Is(err, e) {
- t.Fatalf("err not is e: %v", err)
- }
- })
-}
-
-func connect(e expectExec) func(string, int) (*gorm.DB, error) {
- return func(string, int) (*gorm.DB, error) {
- return gorm.Open(mysql.New(mysql.Config{
- SkipInitializeWithVersion: true,
- Conn: sql.OpenDB(e),
- }), &gorm.Config{
- Logger: logger.Discard,
- })
- }
-}
-
-type expectExec struct {
- err error
- query string
- args []driver.NamedValue
-}
-
-func (c expectExec) ExecContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Result, error) {
- if c.err != nil {
- return nil, c.err
- }
- if query != c.query {
- return nil, fmt.Errorf("query mismatch. expect: %s, got: %s", c.query, query)
- }
- if reflect.DeepEqual(args, c.args) {
- return nil, fmt.Errorf("args mismatch. expect: %v, got: %v", c.args, args)
- }
- return noEffectResult{}, nil
-}
-
-func (e expectExec) Connect(context.Context) (driver.Conn, error) { return e, nil }
-func (expectExec) Driver() driver.Driver { panic("not implemented") }
-func (expectExec) Prepare(query string) (driver.Stmt, error) { panic("not implemented") }
-func (expectExec) Close() (e error) { return }
-func (expectExec) Begin() (driver.Tx, error) { panic("not implemented") }
-
-type noEffectResult struct{}
-
-func (noEffectResult) LastInsertId() (i int64, e error) { return }
-func (noEffectResult) RowsAffected() (i int64, e error) { return }
diff --git a/pkg/common/db/relation/object_model.go b/pkg/common/db/relation/object_model.go
deleted file mode 100644
index c5624a8d4..000000000
--- a/pkg/common/db/relation/object_model.go
+++ /dev/null
@@ -1,53 +0,0 @@
-// Copyright © 2023 OpenIM. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package relation
-
-import (
- "context"
-
- "gorm.io/gorm"
-
- "github.com/OpenIMSDK/tools/errs"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
-)
-
-type ObjectInfoGorm struct {
- *MetaDB
-}
-
-func NewObjectInfo(db *gorm.DB) relation.ObjectInfoModelInterface {
- return &ObjectInfoGorm{
- NewMetaDB(db, &relation.ObjectModel{}),
- }
-}
-
-func (o *ObjectInfoGorm) NewTx(tx any) relation.ObjectInfoModelInterface {
- return &ObjectInfoGorm{
- NewMetaDB(tx.(*gorm.DB), &relation.ObjectModel{}),
- }
-}
-
-func (o *ObjectInfoGorm) SetObject(ctx context.Context, obj *relation.ObjectModel) (err error) {
- if err := o.DB.WithContext(ctx).Where("name = ?", obj.Name).FirstOrCreate(obj).Error; err != nil {
- return errs.Wrap(err)
- }
- return nil
-}
-
-func (o *ObjectInfoGorm) Take(ctx context.Context, name string) (info *relation.ObjectModel, err error) {
- info = &relation.ObjectModel{}
- return info, errs.Wrap(o.DB.WithContext(ctx).Where("name = ?", name).Take(info).Error)
-}
diff --git a/pkg/common/db/relation/user_model.go b/pkg/common/db/relation/user_model.go
deleted file mode 100644
index b04c29816..000000000
--- a/pkg/common/db/relation/user_model.go
+++ /dev/null
@@ -1,136 +0,0 @@
-// Copyright © 2023 OpenIM. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package relation
-
-import (
- "context"
- "time"
-
- "github.com/OpenIMSDK/tools/errs"
-
- "gorm.io/gorm"
-
- "github.com/OpenIMSDK/tools/utils"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
-)
-
-type UserGorm struct {
- *MetaDB
-}
-
-func NewUserGorm(db *gorm.DB) relation.UserModelInterface {
- return &UserGorm{NewMetaDB(db, &relation.UserModel{})}
-}
-
-// Insert multiple records.
-func (u *UserGorm) Create(ctx context.Context, users []*relation.UserModel) (err error) {
- return utils.Wrap(u.db(ctx).Create(&users).Error, "")
-}
-
-// Update user info; zero values are updated as well.
-func (u *UserGorm) UpdateByMap(ctx context.Context, userID string, args map[string]interface{}) (err error) {
- return utils.Wrap(u.db(ctx).Model(&relation.UserModel{}).Where("user_id = ?", userID).Updates(args).Error, "")
-}
-
-// Update multiple users' info (non-zero values only).
-func (u *UserGorm) Update(ctx context.Context, user *relation.UserModel) (err error) {
- return utils.Wrap(u.db(ctx).Model(user).Updates(user).Error, "")
-}
-
-// Get the specified users' info; no error is returned for users that do not exist.
-func (u *UserGorm) Find(ctx context.Context, userIDs []string) (users []*relation.UserModel, err error) {
- err = utils.Wrap(u.db(ctx).Where("user_id in (?)", userIDs).Find(&users).Error, "")
- return users, err
-}
-
-// Get a user's info; return an error if the user does not exist.
-func (u *UserGorm) Take(ctx context.Context, userID string) (user *relation.UserModel, err error) {
- user = &relation.UserModel{}
- err = utils.Wrap(u.db(ctx).Where("user_id = ?", userID).Take(&user).Error, "")
- return user, err
-}
-
-// Get user info by page; missing users do not cause an error.
-func (u *UserGorm) Page(
- ctx context.Context,
- pageNumber, showNumber int32,
-) (users []*relation.UserModel, count int64, err error) {
- err = utils.Wrap(u.db(ctx).Count(&count).Error, "")
- if err != nil {
- return
- }
- err = utils.Wrap(
- u.db(ctx).
- Limit(int(showNumber)).
- Offset(int((pageNumber-1)*showNumber)).
- Find(&users).
- Order("create_time DESC").
- Error,
- "",
- )
- return
-}
-
-// Get all user IDs.
-func (u *UserGorm) GetAllUserID(ctx context.Context, pageNumber, showNumber int32) (userIDs []string, err error) {
- if pageNumber == 0 || showNumber == 0 {
- return userIDs, errs.Wrap(u.db(ctx).Pluck("user_id", &userIDs).Error)
- } else {
- return userIDs, errs.Wrap(u.db(ctx).Limit(int(showNumber)).Offset(int((pageNumber-1)*showNumber)).Pluck("user_id", &userIDs).Error)
- }
-}
-
-func (u *UserGorm) GetUserGlobalRecvMsgOpt(ctx context.Context, userID string) (opt int, err error) {
- err = u.db(ctx).Model(&relation.UserModel{}).Where("user_id = ?", userID).Pluck("global_recv_msg_opt", &opt).Error
- return opt, err
-}
-
-func (u *UserGorm) CountTotal(ctx context.Context, before *time.Time) (count int64, err error) {
- db := u.db(ctx).Model(&relation.UserModel{})
- if before != nil {
- db = db.Where("create_time < ?", before)
- }
- if err := db.Count(&count).Error; err != nil {
- return 0, err
- }
- return count, nil
-}
-
-func (u *UserGorm) CountRangeEverydayTotal(
- ctx context.Context,
- start time.Time,
- end time.Time,
-) (map[string]int64, error) {
- var res []struct {
- Date time.Time `gorm:"column:date"`
- Count int64 `gorm:"column:count"`
- }
- err := u.db(ctx).
- Model(&relation.UserModel{}).
- Select("DATE(create_time) AS date, count(1) AS count").
- Where("create_time >= ? and create_time < ?", start, end).
- Group("date").
- Find(&res).
- Error
- if err != nil {
- return nil, errs.Wrap(err)
- }
- v := make(map[string]int64)
- for _, r := range res {
- v[r.Date.Format("2006-01-02")] = r.Count
- }
- return v, nil
-}
diff --git a/pkg/common/db/s3/cont/consts.go b/pkg/common/db/s3/cont/consts.go
index 1a0467ce5..a01a8312c 100644
--- a/pkg/common/db/s3/cont/consts.go
+++ b/pkg/common/db/s3/cont/consts.go
@@ -17,6 +17,7 @@ package cont
const (
hashPath = "openim/data/hash/"
tempPath = "openim/temp/"
+ DirectPath = "openim/direct"
 	UploadTypeMultipart = 1 // multipart upload
 	UploadTypePresigned = 2 // presigned upload
partSeparator = ","
diff --git a/pkg/common/db/s3/cont/controller.go b/pkg/common/db/s3/cont/controller.go
index 7040c7306..82c27c1f2 100644
--- a/pkg/common/db/s3/cont/controller.go
+++ b/pkg/common/db/s3/cont/controller.go
@@ -46,6 +46,10 @@ type Controller struct {
impl s3.Interface
}
+func (c *Controller) Engine() string {
+ return c.impl.Engine()
+}
+
func (c *Controller) HashPath(md5 string) string {
return path.Join(hashPath, md5)
}
@@ -275,3 +279,7 @@ func (c *Controller) AccessURL(ctx context.Context, name string, expire time.Dur
}
return c.impl.AccessURL(ctx, name, expire, opt)
}
+
+func (c *Controller) FormData(ctx context.Context, name string, size int64, contentType string, duration time.Duration) (*s3.FormData, error) {
+ return c.impl.FormData(ctx, name, size, contentType, duration)
+}
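
The controller now exposes the underlying engine name and delegates `FormData` to it. One hypothetical use (not part of this change) is persisting the engine alongside the object metadata, which is what the `Engine` field added to `ObjectModel` later in this diff supports; a minimal sketch:

```go
package example

import (
	"context"
	"time"

	"github.com/openimsdk/open-im-server/v3/pkg/common/db/s3/cont"
	"github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
)

// recordObject is a hypothetical helper: once an upload is confirmed, it stores
// the object metadata together with the engine that served it (e.g. "minio",
// "cos", "oss"), matching the Engine field added to ObjectModel in this diff.
func recordObject(ctx context.Context, ctrl *cont.Controller, db relation.ObjectInfoModelInterface,
	userID, name, key, contentType string, size int64) error {
	return db.SetObject(ctx, &relation.ObjectModel{
		Name:        name,
		UserID:      userID,
		Engine:      ctrl.Engine(),
		Key:         key,
		Size:        size,
		ContentType: contentType,
		CreateTime:  time.Now(),
	})
}
```
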
diff --git a/pkg/common/db/s3/cos/cos.go b/pkg/common/db/s3/cos/cos.go
index 7add88487..a82ffe670 100644
--- a/pkg/common/db/s3/cos/cos.go
+++ b/pkg/common/db/s3/cos/cos.go
@@ -16,6 +16,11 @@ package cos
import (
"context"
+ "crypto/hmac"
+ "crypto/sha1"
+ "encoding/base64"
+ "encoding/hex"
+ "encoding/json"
"errors"
"fmt"
"net/http"
@@ -31,9 +36,9 @@ import (
)
const (
- minPartSize = 1024 * 1024 * 1 // 1MB
- maxPartSize = 1024 * 1024 * 1024 * 5 // 5GB
- maxNumSize = 1000
+ minPartSize int64 = 1024 * 1024 * 1 // 1MB
+ maxPartSize int64 = 1024 * 1024 * 1024 * 5 // 5GB
+ maxNumSize int64 = 1000
)
const (
@@ -44,6 +49,8 @@ const (
imageWebp = "webp"
)
+const successCode = http.StatusOK
+
const (
videoSnapshotImagePng = "png"
videoSnapshotImageJpg = "jpg"
@@ -126,7 +133,7 @@ func (c *Cos) PartSize(ctx context.Context, size int64) (int64, error) {
return 0, errors.New("size must be greater than 0")
}
if size > maxPartSize*maxNumSize {
- return 0, fmt.Errorf("size must be less than %db", maxPartSize*maxNumSize)
+ return 0, fmt.Errorf("COS size must be less than the maximum allowed limit")
}
if size <= minPartSize*maxNumSize {
return minPartSize, nil
@@ -326,3 +333,65 @@ func (c *Cos) getPresignedURL(ctx context.Context, name string, expire time.Dura
}
return c.client.Object.GetObjectURL(name), nil
}
+
+func (c *Cos) FormData(ctx context.Context, name string, size int64, contentType string, duration time.Duration) (*s3.FormData, error) {
+ // https://cloud.tencent.com/document/product/436/14690
+ now := time.Now()
+ expiration := now.Add(duration)
+ keyTime := fmt.Sprintf("%d;%d", now.Unix(), expiration.Unix())
+ conditions := []any{
+ map[string]string{"q-sign-algorithm": "sha1"},
+ map[string]string{"q-ak": c.credential.SecretID},
+ map[string]string{"q-sign-time": keyTime},
+ map[string]string{"key": name},
+ }
+ if contentType != "" {
+ conditions = append(conditions, map[string]string{"Content-Type": contentType})
+ }
+ policy := map[string]any{
+ "expiration": expiration.Format("2006-01-02T15:04:05.000Z"),
+ "conditions": conditions,
+ }
+ policyJson, err := json.Marshal(policy)
+ if err != nil {
+ return nil, err
+ }
+ signKey := hmacSha1val(c.credential.SecretKey, keyTime)
+ strToSign := sha1val(string(policyJson))
+ signature := hmacSha1val(signKey, strToSign)
+
+ fd := &s3.FormData{
+ URL: c.client.BaseURL.BucketURL.String(),
+ File: "file",
+ Expires: expiration,
+ FormData: map[string]string{
+ "policy": base64.StdEncoding.EncodeToString(policyJson),
+ "q-sign-algorithm": "sha1",
+ "q-ak": c.credential.SecretID,
+ "q-key-time": keyTime,
+ "q-signature": signature,
+ "key": name,
+ "success_action_status": strconv.Itoa(successCode),
+ },
+ SuccessCodes: []int{successCode},
+ }
+ if contentType != "" {
+ fd.FormData["Content-Type"] = contentType
+ }
+ if c.credential.SessionToken != "" {
+ fd.FormData["x-cos-security-token"] = c.credential.SessionToken
+ }
+ return fd, nil
+}
+
+func hmacSha1val(key, msg string) string {
+ v := hmac.New(sha1.New, []byte(key))
+ v.Write([]byte(msg))
+ return hex.EncodeToString(v.Sum(nil))
+}
+
+func sha1val(msg string) string {
+ sha1Hash := sha1.New()
+ sha1Hash.Write([]byte(msg))
+ return hex.EncodeToString(sha1Hash.Sum(nil))
+}
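
The `policy` form field built above is just base64-encoded JSON. As a quick sanity check while debugging, it can be decoded and inspected before the form is handed to a client; a small standalone sketch (not part of this change):

```go
package example

import (
	"encoding/base64"
	"encoding/json"
	"fmt"

	"github.com/openimsdk/open-im-server/v3/pkg/common/db/s3"
)

// dumpPolicy decodes the base64 "policy" field of a generated form and prints
// its expiration and the number of signed conditions.
func dumpPolicy(fd *s3.FormData) error {
	raw, err := base64.StdEncoding.DecodeString(fd.FormData["policy"])
	if err != nil {
		return err
	}
	var policy struct {
		Expiration string `json:"expiration"`
		Conditions []any  `json:"conditions"`
	}
	if err := json.Unmarshal(raw, &policy); err != nil {
		return err
	}
	fmt.Printf("policy expires %s with %d conditions\n", policy.Expiration, len(policy.Conditions))
	return nil
}
```
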
diff --git a/pkg/common/db/s3/cos/internal.go b/pkg/common/db/s3/cos/internal.go
index 0e58a851c..064546953 100644
--- a/pkg/common/db/s3/cos/internal.go
+++ b/pkg/common/db/s3/cos/internal.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package cos
import (
@@ -10,4 +24,4 @@ import (
)
//go:linkname newRequest github.com/tencentyun/cos-go-sdk-v5.(*Client).newRequest
-func newRequest(c *cos.Client, ctx context.Context, baseURL *url.URL, uri, method string, body interface{}, optQuery interface{}, optHeader interface{}) (req *http.Request, err error)
+func newRequest(c *cos.Client, ctx context.Context, baseURL *url.URL, uri, method string, body any, optQuery any, optHeader any) (req *http.Request, err error)
diff --git a/pkg/common/db/s3/kodo/internal.go b/pkg/common/db/s3/kodo/internal.go
deleted file mode 100644
index 3a4943e62..000000000
--- a/pkg/common/db/s3/kodo/internal.go
+++ /dev/null
@@ -1 +0,0 @@
-package kodo
diff --git a/pkg/common/db/s3/kodo/kodo.go b/pkg/common/db/s3/kodo/kodo.go
deleted file mode 100644
index d73220b3b..000000000
--- a/pkg/common/db/s3/kodo/kodo.go
+++ /dev/null
@@ -1,323 +0,0 @@
-package kodo
-
-import (
- "context"
- "errors"
- "fmt"
- "net/http"
- "net/url"
- "strconv"
- "strings"
- "time"
-
- "github.com/aws/aws-sdk-go-v2/aws"
- awss3config "github.com/aws/aws-sdk-go-v2/config"
- "github.com/aws/aws-sdk-go-v2/credentials"
- awss3 "github.com/aws/aws-sdk-go-v2/service/s3"
- awss3types "github.com/aws/aws-sdk-go-v2/service/s3/types"
- "github.com/openimsdk/open-im-server/v3/pkg/common/config"
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/s3"
- "github.com/qiniu/go-sdk/v7/auth"
- "github.com/qiniu/go-sdk/v7/storage"
-)
-
-const (
- minPartSize = 1024 * 1024 * 1 // 1MB
- maxPartSize = 1024 * 1024 * 1024 * 5 // 5GB
- maxNumSize = 10000
-)
-
-type Kodo struct {
- AccessKey string
- SecretKey string
- Region string
- Token string
- Endpoint string
- BucketURL string
- Auth *auth.Credentials
- Client *awss3.Client
- PresignClient *awss3.PresignClient
-}
-
-func NewKodo() (s3.Interface, error) {
- conf := config.Config.Object.Kodo
- //init client
- cfg, err := awss3config.LoadDefaultConfig(context.TODO(),
- awss3config.WithRegion(conf.Bucket),
- awss3config.WithEndpointResolverWithOptions(
- aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
- return aws.Endpoint{URL: conf.Endpoint}, nil
- })),
- awss3config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
- conf.AccessKeyID,
- conf.AccessKeySecret,
- conf.SessionToken),
- ),
- )
- if err != nil {
- panic(err)
- }
- client := awss3.NewFromConfig(cfg)
- presignClient := awss3.NewPresignClient(client)
-
- return &Kodo{
- AccessKey: conf.AccessKeyID,
- SecretKey: conf.AccessKeySecret,
- Region: conf.Bucket,
- BucketURL: conf.BucketURL,
- Auth: auth.New(conf.AccessKeyID, conf.AccessKeySecret),
- Client: client,
- PresignClient: presignClient,
- }, nil
-}
-
-func (k Kodo) Engine() string {
- return "kodo"
-}
-
-func (k Kodo) PartLimit() *s3.PartLimit {
- return &s3.PartLimit{
- MinPartSize: minPartSize,
- MaxPartSize: maxPartSize,
- MaxNumSize: maxNumSize,
- }
-}
-
-func (k Kodo) InitiateMultipartUpload(ctx context.Context, name string) (*s3.InitiateMultipartUploadResult, error) {
- result, err := k.Client.CreateMultipartUpload(ctx, &awss3.CreateMultipartUploadInput{
- Bucket: aws.String(k.Region),
- Key: aws.String(name),
- })
- if err != nil {
- return nil, err
- }
- return &s3.InitiateMultipartUploadResult{
- UploadID: aws.ToString(result.UploadId),
- Bucket: aws.ToString(result.Bucket),
- Key: aws.ToString(result.Key),
- }, nil
-}
-
-func (k Kodo) CompleteMultipartUpload(ctx context.Context, uploadID string, name string, parts []s3.Part) (*s3.CompleteMultipartUploadResult, error) {
- kodoParts := make([]awss3types.CompletedPart, len(parts))
- for i, part := range parts {
- kodoParts[i] = awss3types.CompletedPart{
- PartNumber: aws.Int32(int32(part.PartNumber)),
- ETag: aws.String(part.ETag),
- }
- }
- result, err := k.Client.CompleteMultipartUpload(ctx, &awss3.CompleteMultipartUploadInput{
- Bucket: aws.String(k.Region),
- Key: aws.String(name),
- UploadId: aws.String(uploadID),
- MultipartUpload: &awss3types.CompletedMultipartUpload{Parts: kodoParts},
- })
- if err != nil {
- return nil, err
- }
- return &s3.CompleteMultipartUploadResult{
- Location: aws.ToString(result.Location),
- Bucket: aws.ToString(result.Bucket),
- Key: aws.ToString(result.Key),
- ETag: strings.ToLower(strings.ReplaceAll(aws.ToString(result.ETag), `"`, ``)),
- }, nil
-}
-
-func (k Kodo) PartSize(ctx context.Context, size int64) (int64, error) {
- if size <= 0 {
- return 0, errors.New("size must be greater than 0")
- }
- if size > maxPartSize*maxNumSize {
- return 0, fmt.Errorf("size must be less than %db", maxPartSize*maxNumSize)
- }
- if size <= minPartSize*maxNumSize {
- return minPartSize, nil
- }
- partSize := size / maxNumSize
- if size%maxNumSize != 0 {
- partSize++
- }
- return partSize, nil
-}
-
-func (k Kodo) AuthSign(ctx context.Context, uploadID string, name string, expire time.Duration, partNumbers []int) (*s3.AuthSignResult, error) {
- result := s3.AuthSignResult{
- URL: k.BucketURL + "/" + name,
- Query: url.Values{"uploadId": {uploadID}},
- Header: make(http.Header),
- Parts: make([]s3.SignPart, len(partNumbers)),
- }
- for i, partNumber := range partNumbers {
- part, _ := k.PresignClient.PresignUploadPart(ctx, &awss3.UploadPartInput{
- Bucket: aws.String(k.Region),
- UploadId: aws.String(uploadID),
- Key: aws.String(name),
- PartNumber: aws.Int32(int32(partNumber)),
- })
- result.Parts[i] = s3.SignPart{
- PartNumber: partNumber,
- URL: part.URL,
- Header: part.SignedHeader,
- }
- }
- return &result, nil
-
-}
-
-func (k Kodo) PresignedPutObject(ctx context.Context, name string, expire time.Duration) (string, error) {
- object, err := k.PresignClient.PresignPutObject(ctx, &awss3.PutObjectInput{
- Bucket: aws.String(k.Region),
- Key: aws.String(name),
- }, func(po *awss3.PresignOptions) {
- po.Expires = expire
- })
- return object.URL, err
-
-}
-
-func (k Kodo) DeleteObject(ctx context.Context, name string) error {
- _, err := k.Client.DeleteObject(ctx, &awss3.DeleteObjectInput{
- Bucket: aws.String(k.Region),
- Key: aws.String(name),
- })
- return err
-}
-
-func (k Kodo) CopyObject(ctx context.Context, src string, dst string) (*s3.CopyObjectInfo, error) {
- result, err := k.Client.CopyObject(ctx, &awss3.CopyObjectInput{
- Bucket: aws.String(k.Region),
- CopySource: aws.String(k.Region + "/" + src),
- Key: aws.String(dst),
- })
- if err != nil {
- return nil, err
- }
- return &s3.CopyObjectInfo{
- Key: dst,
- ETag: strings.ToLower(strings.ReplaceAll(aws.ToString(result.CopyObjectResult.ETag), `"`, ``)),
- }, nil
-}
-
-func (k Kodo) StatObject(ctx context.Context, name string) (*s3.ObjectInfo, error) {
- info, err := k.Client.HeadObject(ctx, &awss3.HeadObjectInput{
- Bucket: aws.String(k.Region),
- Key: aws.String(name),
- })
- if err != nil {
- return nil, err
- }
- res := &s3.ObjectInfo{Key: name}
- res.Size = aws.ToInt64(info.ContentLength)
- res.ETag = strings.ToLower(strings.ReplaceAll(aws.ToString(info.ETag), `"`, ``))
- return res, nil
-}
-
-func (k Kodo) IsNotFound(err error) bool {
- return true
-}
-
-func (k Kodo) AbortMultipartUpload(ctx context.Context, uploadID string, name string) error {
- _, err := k.Client.AbortMultipartUpload(ctx, &awss3.AbortMultipartUploadInput{
- UploadId: aws.String(uploadID),
- Bucket: aws.String(k.Region),
- Key: aws.String(name),
- })
- return err
-}
-
-func (k Kodo) ListUploadedParts(ctx context.Context, uploadID string, name string, partNumberMarker int, maxParts int) (*s3.ListUploadedPartsResult, error) {
- result, err := k.Client.ListParts(ctx, &awss3.ListPartsInput{
- Key: aws.String(name),
- UploadId: aws.String(uploadID),
- Bucket: aws.String(k.Region),
- MaxParts: aws.Int32(int32(maxParts)),
- PartNumberMarker: aws.String(strconv.Itoa(partNumberMarker)),
- })
- if err != nil {
- return nil, err
- }
- res := &s3.ListUploadedPartsResult{
- Key: aws.ToString(result.Key),
- UploadID: aws.ToString(result.UploadId),
- MaxParts: int(aws.ToInt32(result.MaxParts)),
- UploadedParts: make([]s3.UploadedPart, len(result.Parts)),
- }
- // int to string
- NextPartNumberMarker, err := strconv.Atoi(aws.ToString(result.NextPartNumberMarker))
- if err != nil {
- return nil, err
- }
- res.NextPartNumberMarker = NextPartNumberMarker
- for i, part := range result.Parts {
- res.UploadedParts[i] = s3.UploadedPart{
- PartNumber: int(aws.ToInt32(part.PartNumber)),
- LastModified: aws.ToTime(part.LastModified),
- ETag: aws.ToString(part.ETag),
- Size: aws.ToInt64(part.Size),
- }
- }
- return res, nil
-}
-
-func (k Kodo) AccessURL(ctx context.Context, name string, expire time.Duration, opt *s3.AccessURLOption) (string, error) {
- //get object head
- info, err := k.Client.HeadObject(ctx, &awss3.HeadObjectInput{
- Bucket: aws.String(k.Region),
- Key: aws.String(name),
- })
- if err != nil {
- return "", errors.New("AccessURL object not found")
- }
- if opt != nil {
- if opt.ContentType != aws.ToString(info.ContentType) {
-			// change the object's content type
- err := k.SetObjectContentType(ctx, name, opt.ContentType)
- if err != nil {
- return "", errors.New("AccessURL setContentType error")
- }
- }
- }
- imageMogr := ""
- //image dispose
- if opt != nil {
- if opt.Image != nil {
- //https://developer.qiniu.com/dora/8255/the-zoom
- process := ""
- if opt.Image.Width > 0 {
- process += strconv.Itoa(opt.Image.Width) + "x"
- }
- if opt.Image.Height > 0 {
- if opt.Image.Width > 0 {
- process += strconv.Itoa(opt.Image.Height)
- } else {
- process += "x" + strconv.Itoa(opt.Image.Height)
- }
- }
- imageMogr = "imageMogr2/thumbnail/" + process
- }
- }
- //expire
- deadline := time.Now().Add(time.Second * expire).Unix()
- domain := k.BucketURL
- query := url.Values{}
- if opt != nil && opt.Filename != "" {
- query.Add("attname", opt.Filename)
- }
- privateURL := storage.MakePrivateURLv2WithQuery(k.Auth, domain, name, query, deadline)
- if imageMogr != "" {
- privateURL += "&" + imageMogr
- }
- return privateURL, nil
-}
-
-func (k *Kodo) SetObjectContentType(ctx context.Context, name string, contentType string) error {
- //set object content-type
- _, err := k.Client.CopyObject(ctx, &awss3.CopyObjectInput{
- Bucket: aws.String(k.Region),
- CopySource: aws.String(k.Region + "/" + name),
- Key: aws.String(name),
- ContentType: aws.String(contentType),
- MetadataDirective: awss3types.MetadataDirectiveReplace,
- })
- return err
-}
diff --git a/pkg/common/db/s3/minio/internal.go b/pkg/common/db/s3/minio/internal.go
index 41129ce31..7e9dcd9e4 100644
--- a/pkg/common/db/s3/minio/internal.go
+++ b/pkg/common/db/s3/minio/internal.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package minio
import (
diff --git a/pkg/common/db/s3/minio/minio.go b/pkg/common/db/s3/minio/minio.go
index be49e2faa..5a615dcfd 100644
--- a/pkg/common/db/s3/minio/minio.go
+++ b/pkg/common/db/s3/minio/minio.go
@@ -45,9 +45,9 @@ const (
)
const (
- minPartSize = 1024 * 1024 * 5 // 1MB
- maxPartSize = 1024 * 1024 * 1024 * 5 // 5GB
- maxNumSize = 10000
+	minPartSize int64 = 1024 * 1024 * 5        // 5MB
+ maxPartSize int64 = 1024 * 1024 * 1024 * 5 // 5GB
+ maxNumSize int64 = 10000
)
const (
@@ -57,6 +57,8 @@ const (
imageThumbnailPath = "openim/thumbnail"
)
+const successCode = http.StatusOK
+
func NewMinio(cache cache.MinioCache) (s3.Interface, error) {
u, err := url.Parse(config.Config.Object.Minio.Endpoint)
if err != nil {
@@ -238,7 +240,7 @@ func (m *Minio) PartSize(ctx context.Context, size int64) (int64, error) {
return 0, errors.New("size must be greater than 0")
}
if size > maxPartSize*maxNumSize {
- return 0, fmt.Errorf("size must be less than %db", maxPartSize*maxNumSize)
+ return 0, fmt.Errorf("MINIO size must be less than the maximum allowed limit")
}
if size <= minPartSize*maxNumSize {
return minPartSize, nil
@@ -441,3 +443,51 @@ func (m *Minio) getObjectData(ctx context.Context, name string, limit int64) ([]
}
return io.ReadAll(io.LimitReader(object, limit))
}
+
+func (m *Minio) FormData(ctx context.Context, name string, size int64, contentType string, duration time.Duration) (*s3.FormData, error) {
+ if err := m.initMinio(ctx); err != nil {
+ return nil, err
+ }
+ policy := minio.NewPostPolicy()
+ if err := policy.SetKey(name); err != nil {
+ return nil, err
+ }
+ expires := time.Now().Add(duration)
+ if err := policy.SetExpires(expires); err != nil {
+ return nil, err
+ }
+ if size > 0 {
+ if err := policy.SetContentLengthRange(0, size); err != nil {
+ return nil, err
+ }
+ }
+ if err := policy.SetSuccessStatusAction(strconv.Itoa(successCode)); err != nil {
+ return nil, err
+ }
+ if contentType != "" {
+ if err := policy.SetContentType(contentType); err != nil {
+ return nil, err
+ }
+ }
+ if err := policy.SetBucket(m.bucket); err != nil {
+ return nil, err
+ }
+ u, fd, err := m.core.PresignedPostPolicy(ctx, policy)
+ if err != nil {
+ return nil, err
+ }
+ sign, err := url.Parse(m.signEndpoint)
+ if err != nil {
+ return nil, err
+ }
+ u.Scheme = sign.Scheme
+ u.Host = sign.Host
+ return &s3.FormData{
+ URL: u.String(),
+ File: "file",
+ Header: nil,
+ FormData: fd,
+ Expires: expires,
+ SuccessCodes: []int{successCode},
+ }, nil
+}
diff --git a/pkg/common/db/s3/minio/thumbnail.go b/pkg/common/db/s3/minio/thumbnail.go
index 01b14541b..49c376c9f 100644
--- a/pkg/common/db/s3/minio/thumbnail.go
+++ b/pkg/common/db/s3/minio/thumbnail.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package minio
import (
diff --git a/pkg/common/db/s3/oss/internal.go b/pkg/common/db/s3/oss/internal.go
index 4ca1acc47..155708ffd 100644
--- a/pkg/common/db/s3/oss/internal.go
+++ b/pkg/common/db/s3/oss/internal.go
@@ -26,7 +26,7 @@ import (
func signHeader(c oss.Conn, req *http.Request, canonicalizedResource string)
//go:linkname getURLParams github.com/aliyun/aliyun-oss-go-sdk/oss.Conn.getURLParams
-func getURLParams(c oss.Conn, params map[string]interface{}) string
+func getURLParams(c oss.Conn, params map[string]any) string
//go:linkname getURL github.com/aliyun/aliyun-oss-go-sdk/oss.urlMaker.getURL
func getURL(um urlMaker, bucket, object, params string) *url.URL
diff --git a/pkg/common/db/s3/oss/oss.go b/pkg/common/db/s3/oss/oss.go
old mode 100755
new mode 100644
index 6a728127b..0bba97ee7
--- a/pkg/common/db/s3/oss/oss.go
+++ b/pkg/common/db/s3/oss/oss.go
@@ -16,8 +16,13 @@ package oss
import (
"context"
+ "crypto/hmac"
+ "crypto/sha1"
+ "encoding/base64"
+ "encoding/json"
"errors"
"fmt"
+ "io"
"net/http"
"net/url"
"reflect"
@@ -32,9 +37,9 @@ import (
)
const (
- minPartSize = 1024 * 1024 * 1 // 1MB
- maxPartSize = 1024 * 1024 * 1024 * 5 // 5GB
- maxNumSize = 10000
+ minPartSize int64 = 1024 * 1024 * 1 // 1MB
+ maxPartSize int64 = 1024 * 1024 * 1024 * 5 // 5GB
+ maxNumSize int64 = 10000
)
const (
@@ -45,6 +50,8 @@ const (
imageWebp = "webp"
)
+const successCode = http.StatusOK
+
const (
videoSnapshotImagePng = "png"
videoSnapshotImageJpg = "jpg"
@@ -134,7 +141,7 @@ func (o *OSS) PartSize(ctx context.Context, size int64) (int64, error) {
return 0, errors.New("size must be greater than 0")
}
if size > maxPartSize*maxNumSize {
- return 0, fmt.Errorf("size must be less than %db", maxPartSize*maxNumSize)
+ return 0, fmt.Errorf("OSS size must be less than the maximum allowed limit")
}
if size <= minPartSize*maxNumSize {
return minPartSize, nil
@@ -327,3 +334,45 @@ func (o *OSS) AccessURL(ctx context.Context, name string, expire time.Duration,
params := getURLParams(*o.bucket.Client.Conn, rawParams)
return getURL(o.um, o.bucket.BucketName, name, params).String(), nil
}
+
+func (o *OSS) FormData(ctx context.Context, name string, size int64, contentType string, duration time.Duration) (*s3.FormData, error) {
+ // https://help.aliyun.com/zh/oss/developer-reference/postobject?spm=a2c4g.11186623.0.0.1cb83cebkP55nn
+ expires := time.Now().Add(duration)
+ conditions := []any{
+ map[string]string{"bucket": o.bucket.BucketName},
+ map[string]string{"key": name},
+ }
+ if size > 0 {
+ conditions = append(conditions, []any{"content-length-range", 0, size})
+ }
+ policy := map[string]any{
+ "expiration": expires.Format("2006-01-02T15:04:05.000Z"),
+ "conditions": conditions,
+ }
+ policyJson, err := json.Marshal(policy)
+ if err != nil {
+ return nil, err
+ }
+ policyStr := base64.StdEncoding.EncodeToString(policyJson)
+ h := hmac.New(sha1.New, []byte(o.credentials.GetAccessKeySecret()))
+ if _, err := io.WriteString(h, policyStr); err != nil {
+ return nil, err
+ }
+ fd := &s3.FormData{
+ URL: o.bucketURL,
+ File: "file",
+ Expires: expires,
+ FormData: map[string]string{
+ "key": name,
+ "policy": policyStr,
+ "OSSAccessKeyId": o.credentials.GetAccessKeyID(),
+ "success_action_status": strconv.Itoa(successCode),
+ "signature": base64.StdEncoding.EncodeToString(h.Sum(nil)),
+ },
+ SuccessCodes: []int{successCode},
+ }
+ if contentType != "" {
+ fd.FormData["x-oss-content-type"] = contentType
+ }
+ return fd, nil
+}
diff --git a/pkg/common/db/s3/s3.go b/pkg/common/db/s3/s3.go
index afbe91955..d3dd90ae9 100644
--- a/pkg/common/db/s3/s3.go
+++ b/pkg/common/db/s3/s3.go
@@ -24,7 +24,7 @@ import (
type PartLimit struct {
MinPartSize int64 `json:"minPartSize"`
MaxPartSize int64 `json:"maxPartSize"`
- MaxNumSize int `json:"maxNumSize"`
+ MaxNumSize int64 `json:"maxNumSize"`
}
type InitiateMultipartUploadResult struct {
@@ -74,6 +74,15 @@ type CopyObjectInfo struct {
ETag string `json:"etag"`
}
+type FormData struct {
+ URL string `json:"url"`
+ File string `json:"file"`
+ Header http.Header `json:"header"`
+ FormData map[string]string `json:"form"`
+ Expires time.Time `json:"expires"`
+ SuccessCodes []int `json:"successActionStatus"`
+}
+
type SignPart struct {
PartNumber int `json:"partNumber"`
URL string `json:"url"`
@@ -152,4 +161,6 @@ type Interface interface {
ListUploadedParts(ctx context.Context, uploadID string, name string, partNumberMarker int, maxParts int) (*ListUploadedPartsResult, error)
AccessURL(ctx context.Context, name string, expire time.Duration, opt *AccessURLOption) (string, error)
+
+ FormData(ctx context.Context, name string, size int64, contentType string, duration time.Duration) (*FormData, error)
}
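
A minimal client-side sketch of how this `FormData` is consumed (the helper below is illustrative, not part of this change): the signed fields are sent as ordinary multipart form fields, the file part is written last under the field name given in `File`, and the response status is checked against `SuccessCodes`.

```go
package example

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"

	"github.com/openimsdk/open-im-server/v3/pkg/common/db/s3"
)

// postByForm uploads a payload using the presigned POST form: signed fields
// first, the file part last, then the status code is checked against SuccessCodes.
func postByForm(ctx context.Context, fd *s3.FormData, filename string, payload io.Reader) error {
	var body bytes.Buffer
	w := multipart.NewWriter(&body)
	for k, v := range fd.FormData {
		if err := w.WriteField(k, v); err != nil {
			return err
		}
	}
	part, err := w.CreateFormFile(fd.File, filename)
	if err != nil {
		return err
	}
	if _, err := io.Copy(part, payload); err != nil {
		return err
	}
	if err := w.Close(); err != nil {
		return err
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, fd.URL, &body)
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", w.FormDataContentType())
	for k, vs := range fd.Header {
		for _, v := range vs {
			req.Header.Add(k, v)
		}
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	for _, code := range fd.SuccessCodes {
		if resp.StatusCode == code {
			return nil
		}
	}
	return fmt.Errorf("upload rejected with status %d", resp.StatusCode)
}
```
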
diff --git a/pkg/common/db/table/relation/black.go b/pkg/common/db/table/relation/black.go
index 59dd12122..50499054c 100644
--- a/pkg/common/db/table/relation/black.go
+++ b/pkg/common/db/table/relation/black.go
@@ -17,33 +17,27 @@ package relation
import (
"context"
"time"
-)
-const (
- BlackModelTableName = "blacks"
+ "github.com/OpenIMSDK/tools/pagination"
)
type BlackModel struct {
- OwnerUserID string `gorm:"column:owner_user_id;primary_key;size:64"`
- BlockUserID string `gorm:"column:block_user_id;primary_key;size:64"`
- CreateTime time.Time `gorm:"column:create_time"`
- AddSource int32 `gorm:"column:add_source"`
- OperatorUserID string `gorm:"column:operator_user_id;size:64"`
- Ex string `gorm:"column:ex;size:1024"`
-}
-
-func (BlackModel) TableName() string {
- return BlackModelTableName
+ OwnerUserID string `bson:"owner_user_id"`
+ BlockUserID string `bson:"block_user_id"`
+ CreateTime time.Time `bson:"create_time"`
+ AddSource int32 `bson:"add_source"`
+ OperatorUserID string `bson:"operator_user_id"`
+ Ex string `bson:"ex"`
}
type BlackModelInterface interface {
Create(ctx context.Context, blacks []*BlackModel) (err error)
Delete(ctx context.Context, blacks []*BlackModel) (err error)
- UpdateByMap(ctx context.Context, ownerUserID, blockUserID string, args map[string]interface{}) (err error)
- Update(ctx context.Context, blacks []*BlackModel) (err error)
+ //UpdateByMap(ctx context.Context, ownerUserID, blockUserID string, args map[string]any) (err error)
+ //Update(ctx context.Context, blacks []*BlackModel) (err error)
Find(ctx context.Context, blacks []*BlackModel) (blackList []*BlackModel, err error)
Take(ctx context.Context, ownerUserID, blockUserID string) (black *BlackModel, err error)
- FindOwnerBlacks(ctx context.Context, ownerUserID string, pageNumber, showNumber int32) (blacks []*BlackModel, total int64, err error)
+ FindOwnerBlacks(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, blacks []*BlackModel, err error)
FindOwnerBlackInfos(ctx context.Context, ownerUserID string, userIDs []string) (blacks []*BlackModel, err error)
FindBlackUserIDs(ctx context.Context, ownerUserID string) (blackUserIDs []string, err error)
}
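
The relation models above now carry `bson` tags instead of `gorm` tags, i.e. they are persisted through the MongoDB driver rather than GORM. A minimal sketch of writing such a document (the `black` collection name and helper are assumptions for illustration):

```go
package example

import (
	"context"
	"time"

	"go.mongodb.org/mongo-driver/mongo"

	"github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
)

// insertBlack writes a bson-tagged BlackModel with the official Mongo driver.
// The "black" collection name is assumed here for illustration.
func insertBlack(ctx context.Context, db *mongo.Database, ownerUserID, blockUserID string) error {
	_, err := db.Collection("black").InsertOne(ctx, &relation.BlackModel{
		OwnerUserID:    ownerUserID,
		BlockUserID:    blockUserID,
		OperatorUserID: ownerUserID,
		CreateTime:     time.Now(),
	})
	return err
}
```
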
diff --git a/pkg/common/db/table/relation/conversation.go b/pkg/common/db/table/relation/conversation.go
index e9680873f..e0a5268ca 100644
--- a/pkg/common/db/table/relation/conversation.go
+++ b/pkg/common/db/table/relation/conversation.go
@@ -17,41 +17,35 @@ package relation
import (
"context"
"time"
-)
-const (
- conversationModelTableName = "conversations"
+ "github.com/OpenIMSDK/tools/pagination"
)
type ConversationModel struct {
- OwnerUserID string `gorm:"column:owner_user_id;primary_key;type:char(128)" json:"OwnerUserID"`
- ConversationID string `gorm:"column:conversation_id;primary_key;type:char(128)" json:"conversationID"`
- ConversationType int32 `gorm:"column:conversation_type" json:"conversationType"`
- UserID string `gorm:"column:user_id;type:char(64)" json:"userID"`
- GroupID string `gorm:"column:group_id;type:char(128)" json:"groupID"`
- RecvMsgOpt int32 `gorm:"column:recv_msg_opt" json:"recvMsgOpt"`
- IsPinned bool `gorm:"column:is_pinned" json:"isPinned"`
- IsPrivateChat bool `gorm:"column:is_private_chat" json:"isPrivateChat"`
- BurnDuration int32 `gorm:"column:burn_duration;default:30" json:"burnDuration"`
- GroupAtType int32 `gorm:"column:group_at_type" json:"groupAtType"`
- AttachedInfo string `gorm:"column:attached_info;type:varchar(1024)" json:"attachedInfo"`
- Ex string `gorm:"column:ex;type:varchar(1024)" json:"ex"`
- MaxSeq int64 `gorm:"column:max_seq" json:"maxSeq"`
- MinSeq int64 `gorm:"column:min_seq" json:"minSeq"`
- CreateTime time.Time `gorm:"column:create_time;index:create_time;autoCreateTime"`
- IsMsgDestruct bool `gorm:"column:is_msg_destruct;default:false"`
- MsgDestructTime int64 `gorm:"column:msg_destruct_time;default:604800"`
- LatestMsgDestructTime time.Time `gorm:"column:latest_msg_destruct_time;autoCreateTime"`
-}
-
-func (ConversationModel) TableName() string {
- return conversationModelTableName
+ OwnerUserID string `bson:"owner_user_id"`
+ ConversationID string `bson:"conversation_id"`
+ ConversationType int32 `bson:"conversation_type"`
+ UserID string `bson:"user_id"`
+ GroupID string `bson:"group_id"`
+ RecvMsgOpt int32 `bson:"recv_msg_opt"`
+ IsPinned bool `bson:"is_pinned"`
+ IsPrivateChat bool `bson:"is_private_chat"`
+ BurnDuration int32 `bson:"burn_duration"`
+ GroupAtType int32 `bson:"group_at_type"`
+ AttachedInfo string `bson:"attached_info"`
+ Ex string `bson:"ex"`
+ MaxSeq int64 `bson:"max_seq"`
+ MinSeq int64 `bson:"min_seq"`
+ CreateTime time.Time `bson:"create_time"`
+ IsMsgDestruct bool `bson:"is_msg_destruct"`
+ MsgDestructTime int64 `bson:"msg_destruct_time"`
+ LatestMsgDestructTime time.Time `bson:"latest_msg_destruct_time"`
}
type ConversationModelInterface interface {
Create(ctx context.Context, conversations []*ConversationModel) (err error)
Delete(ctx context.Context, groupIDs []string) (err error)
- UpdateByMap(ctx context.Context, userIDs []string, conversationID string, args map[string]interface{}) (rows int64, err error)
+ UpdateByMap(ctx context.Context, userIDs []string, conversationID string, args map[string]any) (rows int64, err error)
Update(ctx context.Context, conversation *ConversationModel) (err error)
Find(ctx context.Context, ownerUserID string, conversationIDs []string) (conversations []*ConversationModel, err error)
FindUserID(ctx context.Context, userIDs []string, conversationIDs []string) ([]string, error)
@@ -61,13 +55,10 @@ type ConversationModelInterface interface {
FindUserIDAllConversations(ctx context.Context, userID string) (conversations []*ConversationModel, err error)
FindRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) ([]string, error)
GetUserRecvMsgOpt(ctx context.Context, ownerUserID, conversationID string) (opt int, err error)
- FindSuperGroupRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) ([]string, error)
GetAllConversationIDs(ctx context.Context) ([]string, error)
GetAllConversationIDsNumber(ctx context.Context) (int64, error)
- PageConversationIDs(ctx context.Context, pageNumber, showNumber int32) (conversationIDs []string, err error)
- GetUserAllHasReadSeqs(ctx context.Context, ownerUserID string) (hashReadSeqs map[string]int64, err error)
+ PageConversationIDs(ctx context.Context, pagination pagination.Pagination) (conversationIDs []string, err error)
GetConversationsByConversationID(ctx context.Context, conversationIDs []string) ([]*ConversationModel, error)
GetConversationIDsNeedDestruct(ctx context.Context) ([]*ConversationModel, error)
GetConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) ([]string, error)
- NewTx(tx any) ConversationModelInterface
}
diff --git a/pkg/common/db/table/relation/friend.go b/pkg/common/db/table/relation/friend.go
index 58d8d1d34..73f7454df 100644
--- a/pkg/common/db/table/relation/friend.go
+++ b/pkg/common/db/table/relation/friend.go
@@ -17,62 +17,46 @@ package relation
import (
"context"
"time"
-)
-const (
- FriendModelTableName = "friends"
+ "github.com/OpenIMSDK/tools/pagination"
)
+// FriendModel represents the data structure for a friend relationship in MongoDB.
type FriendModel struct {
- OwnerUserID string `gorm:"column:owner_user_id;primary_key;size:64"`
- FriendUserID string `gorm:"column:friend_user_id;primary_key;size:64"`
- Remark string `gorm:"column:remark;size:255"`
- CreateTime time.Time `gorm:"column:create_time;autoCreateTime"`
- AddSource int32 `gorm:"column:add_source"`
- OperatorUserID string `gorm:"column:operator_user_id;size:64"`
- Ex string `gorm:"column:ex;size:1024"`
-}
-
-func (FriendModel) TableName() string {
- return FriendModelTableName
+ OwnerUserID string `bson:"owner_user_id"`
+ FriendUserID string `bson:"friend_user_id"`
+ Remark string `bson:"remark"`
+ CreateTime time.Time `bson:"create_time"`
+ AddSource int32 `bson:"add_source"`
+ OperatorUserID string `bson:"operator_user_id"`
+ Ex string `bson:"ex"`
+ IsPinned bool `bson:"is_pinned"`
}
+// FriendModelInterface defines the operations for managing friends in MongoDB.
type FriendModelInterface interface {
- // 插入多条记录
+ // Create inserts multiple friend records.
Create(ctx context.Context, friends []*FriendModel) (err error)
- // 删除ownerUserID指定的好友
+ // Delete removes specified friends of the owner user.
Delete(ctx context.Context, ownerUserID string, friendUserIDs []string) (err error)
- // 更新ownerUserID单个好友信息 更新零值
- UpdateByMap(ctx context.Context, ownerUserID string, friendUserID string, args map[string]interface{}) (err error)
- // 更新好友信息的非零值
- Update(ctx context.Context, friends []*FriendModel) (err error)
- // 更新好友备注(也支持零值 )
+ // UpdateByMap updates specific fields of a friend document using a map.
+ UpdateByMap(ctx context.Context, ownerUserID string, friendUserID string, args map[string]any) (err error)
+ // UpdateRemark modify remarks.
UpdateRemark(ctx context.Context, ownerUserID, friendUserID, remark string) (err error)
- // 获取单个好友信息,如没找到 返回错误
+ // Take retrieves a single friend document. Returns an error if not found.
Take(ctx context.Context, ownerUserID, friendUserID string) (friend *FriendModel, err error)
- // 查找好友关系,如果是双向关系,则都返回
+ // FindUserState finds the friendship status between two users.
FindUserState(ctx context.Context, userID1, userID2 string) (friends []*FriendModel, err error)
- // 获取 owner指定的好友列表 如果有friendUserIDs不存在,也不返回错误
+ // FindFriends retrieves a list of friends for a given owner. Missing friends do not cause an error.
FindFriends(ctx context.Context, ownerUserID string, friendUserIDs []string) (friends []*FriendModel, err error)
- // 获取哪些人添加了friendUserID 如果有ownerUserIDs不存在,也不返回错误
- FindReversalFriends(
- ctx context.Context,
- friendUserID string,
- ownerUserIDs []string,
- ) (friends []*FriendModel, err error)
- // 获取ownerUserID好友列表 支持翻页
- FindOwnerFriends(
- ctx context.Context,
- ownerUserID string,
- pageNumber, showNumber int32,
- ) (friends []*FriendModel, total int64, err error)
- // 获取哪些人添加了friendUserID 支持翻页
- FindInWhoseFriends(
- ctx context.Context,
- friendUserID string,
- pageNumber, showNumber int32,
- ) (friends []*FriendModel, total int64, err error)
- // 获取好友UserID列表
+ // FindReversalFriends finds users who have added the specified user as a friend.
+ FindReversalFriends(ctx context.Context, friendUserID string, ownerUserIDs []string) (friends []*FriendModel, err error)
+ // FindOwnerFriends retrieves a paginated list of friends for a given owner.
+ FindOwnerFriends(ctx context.Context, ownerUserID string, pagination pagination.Pagination) (total int64, friends []*FriendModel, err error)
+ // FindInWhoseFriends finds users who have added the specified user as a friend, with pagination.
+ FindInWhoseFriends(ctx context.Context, friendUserID string, pagination pagination.Pagination) (total int64, friends []*FriendModel, err error)
+ // FindFriendUserIDs retrieves a list of friend user IDs for a given owner.
FindFriendUserIDs(ctx context.Context, ownerUserID string) (friendUserIDs []string, err error)
- NewTx(tx any) FriendModelInterface
+ // UpdateFriends update friends' fields
+ UpdateFriends(ctx context.Context, ownerUserID string, friendUserIDs []string, val map[string]any) (err error)
}
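
Like the other relation interfaces in this diff, the friend interface switches from raw `pageNumber`/`showNumber` arguments to a `pagination.Pagination` value. A caller-side sketch, assuming the interface exposes `GetPageNumber()`/`GetShowNumber()` as in `github.com/OpenIMSDK/tools/pagination` (the `page` type and helper below are hypothetical; the skip/limit math mirrors the old GORM `Offset((pageNumber-1)*showNumber)` logic):

```go
package example

// page is a hypothetical Pagination implementation: a 1-based page number plus
// a page size, matching the assumed GetPageNumber/GetShowNumber interface.
type page struct {
	number int32
	size   int32
}

func (p page) GetPageNumber() int32 { return p.number }
func (p page) GetShowNumber() int32 { return p.size }

// skipLimit converts a page into the skip/limit pair a MongoDB query would use:
// skip = (pageNumber-1)*showNumber, limit = showNumber.
func skipLimit(p page) (skip, limit int64) {
	return int64(p.number-1) * int64(p.size), int64(p.size)
}
```
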
diff --git a/pkg/common/db/table/relation/friend_request.go b/pkg/common/db/table/relation/friend_request.go
index 51ea0ef6e..8dceb0778 100644
--- a/pkg/common/db/table/relation/friend_request.go
+++ b/pkg/common/db/table/relation/friend_request.go
@@ -17,50 +17,37 @@ package relation
import (
"context"
"time"
-)
-const FriendRequestModelTableName = "friend_requests"
+ "github.com/OpenIMSDK/tools/pagination"
+)
type FriendRequestModel struct {
- FromUserID string `gorm:"column:from_user_id;primary_key;size:64"`
- ToUserID string `gorm:"column:to_user_id;primary_key;size:64"`
- HandleResult int32 `gorm:"column:handle_result"`
- ReqMsg string `gorm:"column:req_msg;size:255"`
- CreateTime time.Time `gorm:"column:create_time; autoCreateTime"`
- HandlerUserID string `gorm:"column:handler_user_id;size:64"`
- HandleMsg string `gorm:"column:handle_msg;size:255"`
- HandleTime time.Time `gorm:"column:handle_time"`
- Ex string `gorm:"column:ex;size:1024"`
-}
-
-func (FriendRequestModel) TableName() string {
- return FriendRequestModelTableName
+ FromUserID string `bson:"from_user_id"`
+ ToUserID string `bson:"to_user_id"`
+ HandleResult int32 `bson:"handle_result"`
+ ReqMsg string `bson:"req_msg"`
+ CreateTime time.Time `bson:"create_time"`
+ HandlerUserID string `bson:"handler_user_id"`
+ HandleMsg string `bson:"handle_msg"`
+ HandleTime time.Time `bson:"handle_time"`
+ Ex string `bson:"ex"`
}
type FriendRequestModelInterface interface {
- // 插入多条记录
+ // Insert multiple records
Create(ctx context.Context, friendRequests []*FriendRequestModel) (err error)
- // 删除记录
+ // Delete record
Delete(ctx context.Context, fromUserID, toUserID string) (err error)
- // 更新零值
- UpdateByMap(ctx context.Context, formUserID string, toUserID string, args map[string]interface{}) (err error)
- // 更新多条记录 (非零值)
+ // Update with zero values
+ UpdateByMap(ctx context.Context, formUserID string, toUserID string, args map[string]any) (err error)
+ // Update multiple records (non-zero values)
Update(ctx context.Context, friendRequest *FriendRequestModel) (err error)
- // 获取来指定用户的好友申请 未找到 不返回错误
+ // Get friend requests sent to a specific user, no error returned if not found
Find(ctx context.Context, fromUserID, toUserID string) (friendRequest *FriendRequestModel, err error)
Take(ctx context.Context, fromUserID, toUserID string) (friendRequest *FriendRequestModel, err error)
- // 获取toUserID收到的好友申请列表
- FindToUserID(
- ctx context.Context,
- toUserID string,
- pageNumber, showNumber int32,
- ) (friendRequests []*FriendRequestModel, total int64, err error)
- // 获取fromUserID发出去的好友申请列表
- FindFromUserID(
- ctx context.Context,
- fromUserID string,
- pageNumber, showNumber int32,
- ) (friendRequests []*FriendRequestModel, total int64, err error)
+ // Get list of friend requests received by toUserID
+ FindToUserID(ctx context.Context, toUserID string, pagination pagination.Pagination) (total int64, friendRequests []*FriendRequestModel, err error)
+ // Get list of friend requests sent by fromUserID
+ FindFromUserID(ctx context.Context, fromUserID string, pagination pagination.Pagination) (total int64, friendRequests []*FriendRequestModel, err error)
FindBothFriendRequests(ctx context.Context, fromUserID, toUserID string) (friends []*FriendRequestModel, err error)
- NewTx(tx any) FriendRequestModelInterface
}
diff --git a/pkg/common/db/table/relation/group.go b/pkg/common/db/table/relation/group.go
index 6759e0d35..57d6b1d62 100644
--- a/pkg/common/db/table/relation/group.go
+++ b/pkg/common/db/table/relation/group.go
@@ -17,48 +17,35 @@ package relation
import (
"context"
"time"
-)
-const (
- GroupModelTableName = "groups"
+ "github.com/OpenIMSDK/tools/pagination"
)
type GroupModel struct {
- GroupID string `gorm:"column:group_id;primary_key;size:64" json:"groupID" binding:"required"`
- GroupName string `gorm:"column:name;size:255" json:"groupName"`
- Notification string `gorm:"column:notification;size:255" json:"notification"`
- Introduction string `gorm:"column:introduction;size:255" json:"introduction"`
- FaceURL string `gorm:"column:face_url;size:255" json:"faceURL"`
- CreateTime time.Time `gorm:"column:create_time;index:create_time;autoCreateTime"`
- Ex string `gorm:"column:ex" json:"ex;size:1024"`
- Status int32 `gorm:"column:status"`
- CreatorUserID string `gorm:"column:creator_user_id;size:64"`
- GroupType int32 `gorm:"column:group_type"`
- NeedVerification int32 `gorm:"column:need_verification"`
- LookMemberInfo int32 `gorm:"column:look_member_info" json:"lookMemberInfo"`
- ApplyMemberFriend int32 `gorm:"column:apply_member_friend" json:"applyMemberFriend"`
- NotificationUpdateTime time.Time `gorm:"column:notification_update_time"`
- NotificationUserID string `gorm:"column:notification_user_id;size:64"`
-}
-
-func (GroupModel) TableName() string {
- return GroupModelTableName
+ GroupID string `bson:"group_id"`
+ GroupName string `bson:"group_name"`
+ Notification string `bson:"notification"`
+ Introduction string `bson:"introduction"`
+ FaceURL string `bson:"face_url"`
+ CreateTime time.Time `bson:"create_time"`
+ Ex string `bson:"ex"`
+ Status int32 `bson:"status"`
+ CreatorUserID string `bson:"creator_user_id"`
+ GroupType int32 `bson:"group_type"`
+ NeedVerification int32 `bson:"need_verification"`
+ LookMemberInfo int32 `bson:"look_member_info"`
+ ApplyMemberFriend int32 `bson:"apply_member_friend"`
+ NotificationUpdateTime time.Time `bson:"notification_update_time"`
+ NotificationUserID string `bson:"notification_user_id"`
}
type GroupModelInterface interface {
- NewTx(tx any) GroupModelInterface
Create(ctx context.Context, groups []*GroupModel) (err error)
- UpdateMap(ctx context.Context, groupID string, args map[string]interface{}) (err error)
+ UpdateMap(ctx context.Context, groupID string, args map[string]any) (err error)
UpdateStatus(ctx context.Context, groupID string, status int32) (err error)
Find(ctx context.Context, groupIDs []string) (groups []*GroupModel, err error)
- FindNotDismissedGroup(ctx context.Context, groupIDs []string) (groups []*GroupModel, err error)
Take(ctx context.Context, groupID string) (group *GroupModel, err error)
- Search(
- ctx context.Context,
- keyword string,
- pageNumber, showNumber int32,
- ) (total uint32, groups []*GroupModel, err error)
- GetGroupIDsByGroupType(ctx context.Context, groupType int) (groupIDs []string, err error)
+ Search(ctx context.Context, keyword string, pagination pagination.Pagination) (total int64, groups []*GroupModel, err error)
 	// Get the total number of groups
CountTotal(ctx context.Context, before *time.Time) (count int64, err error)
 	// Get the daily increment of groups within the range
diff --git a/pkg/common/db/table/relation/group_member.go b/pkg/common/db/table/relation/group_member.go
index bfde72834..88ab87739 100644
--- a/pkg/common/db/table/relation/group_member.go
+++ b/pkg/common/db/table/relation/group_member.go
@@ -17,58 +17,41 @@ package relation
import (
"context"
"time"
-)
-const (
- GroupMemberModelTableName = "group_members"
+ "github.com/OpenIMSDK/tools/pagination"
)
type GroupMemberModel struct {
- GroupID string `gorm:"column:group_id;primary_key;size:64"`
- UserID string `gorm:"column:user_id;primary_key;size:64"`
- Nickname string `gorm:"column:nickname;size:255"`
- FaceURL string `gorm:"column:user_group_face_url;size:255"`
- RoleLevel int32 `gorm:"column:role_level"`
- JoinTime time.Time `gorm:"column:join_time"`
- JoinSource int32 `gorm:"column:join_source"`
- InviterUserID string `gorm:"column:inviter_user_id;size:64"`
- OperatorUserID string `gorm:"column:operator_user_id;size:64"`
- MuteEndTime time.Time `gorm:"column:mute_end_time"`
- Ex string `gorm:"column:ex;size:1024"`
-}
-
-func (GroupMemberModel) TableName() string {
- return GroupMemberModelTableName
+ GroupID string `bson:"group_id"`
+ UserID string `bson:"user_id"`
+ Nickname string `bson:"nickname"`
+ FaceURL string `bson:"face_url"`
+ RoleLevel int32 `bson:"role_level"`
+ JoinTime time.Time `bson:"join_time"`
+ JoinSource int32 `bson:"join_source"`
+ InviterUserID string `bson:"inviter_user_id"`
+ OperatorUserID string `bson:"operator_user_id"`
+ MuteEndTime time.Time `bson:"mute_end_time"`
+ Ex string `bson:"ex"`
}
type GroupMemberModelInterface interface {
- NewTx(tx any) GroupMemberModelInterface
+ //NewTx(tx any) GroupMemberModelInterface
Create(ctx context.Context, groupMembers []*GroupMemberModel) (err error)
Delete(ctx context.Context, groupID string, userIDs []string) (err error)
- DeleteGroup(ctx context.Context, groupIDs []string) (err error)
+ //DeleteGroup(ctx context.Context, groupIDs []string) (err error)
Update(ctx context.Context, groupID string, userID string, data map[string]any) (err error)
- UpdateRoleLevel(ctx context.Context, groupID string, userID string, roleLevel int32) (rowsAffected int64, err error)
- Find(
- ctx context.Context,
- groupIDs []string,
- userIDs []string,
- roleLevels []int32,
- ) (groupMembers []*GroupMemberModel, err error)
+ UpdateRoleLevel(ctx context.Context, groupID string, userID string, roleLevel int32) error
FindMemberUserID(ctx context.Context, groupID string) (userIDs []string, err error)
Take(ctx context.Context, groupID string, userID string) (groupMember *GroupMemberModel, err error)
TakeOwner(ctx context.Context, groupID string) (groupMember *GroupMemberModel, err error)
- SearchMember(
- ctx context.Context,
- keyword string,
- groupIDs []string,
- userIDs []string,
- roleLevels []int32,
- pageNumber, showNumber int32,
- ) (total uint32, groupList []*GroupMemberModel, err error)
- MapGroupMemberNum(ctx context.Context, groupIDs []string) (count map[string]uint32, err error)
- FindJoinUserID(ctx context.Context, groupIDs []string) (groupUsers map[string][]string, err error)
+ SearchMember(ctx context.Context, keyword string, groupID string, pagination pagination.Pagination) (total int64, groupList []*GroupMemberModel, err error)
+ FindRoleLevelUserIDs(ctx context.Context, groupID string, roleLevel int32) ([]string, error)
+ //MapGroupMemberNum(ctx context.Context, groupIDs []string) (count map[string]uint32, err error)
+ //FindJoinUserID(ctx context.Context, groupIDs []string) (groupUsers map[string][]string, err error)
FindUserJoinedGroupID(ctx context.Context, userID string) (groupIDs []string, err error)
TakeGroupMemberNum(ctx context.Context, groupID string) (count int64, err error)
- FindUsersJoinedGroupID(ctx context.Context, userIDs []string) (map[string][]string, error)
+ //FindUsersJoinedGroupID(ctx context.Context, userIDs []string) (map[string][]string, error)
FindUserManagedGroupID(ctx context.Context, userID string) (groupIDs []string, err error)
+ IsUpdateRoleLevel(data map[string]any) bool
}
diff --git a/pkg/common/db/table/relation/group_request.go b/pkg/common/db/table/relation/group_request.go
index 063b83938..39999d799 100644
--- a/pkg/common/db/table/relation/group_request.go
+++ b/pkg/common/db/table/relation/group_request.go
@@ -17,45 +17,30 @@ package relation
import (
"context"
"time"
-)
-const (
- GroupRequestModelTableName = "group_requests"
+ "github.com/OpenIMSDK/tools/pagination"
)
type GroupRequestModel struct {
- UserID string `gorm:"column:user_id;primary_key;size:64"`
- GroupID string `gorm:"column:group_id;primary_key;size:64"`
- HandleResult int32 `gorm:"column:handle_result"`
- ReqMsg string `gorm:"column:req_msg;size:1024"`
- HandledMsg string `gorm:"column:handle_msg;size:1024"`
- ReqTime time.Time `gorm:"column:req_time"`
- HandleUserID string `gorm:"column:handle_user_id;size:64"`
- HandledTime time.Time `gorm:"column:handle_time"`
- JoinSource int32 `gorm:"column:join_source"`
- InviterUserID string `gorm:"column:inviter_user_id;size:64"`
- Ex string `gorm:"column:ex;size:1024"`
-}
-
-func (GroupRequestModel) TableName() string {
- return GroupRequestModelTableName
+ UserID string `bson:"user_id"`
+ GroupID string `bson:"group_id"`
+ HandleResult int32 `bson:"handle_result"`
+ ReqMsg string `bson:"req_msg"`
+ HandledMsg string `bson:"handled_msg"`
+ ReqTime time.Time `bson:"req_time"`
+ HandleUserID string `bson:"handle_user_id"`
+ HandledTime time.Time `bson:"handled_time"`
+ JoinSource int32 `bson:"join_source"`
+ InviterUserID string `bson:"inviter_user_id"`
+ Ex string `bson:"ex"`
}
type GroupRequestModelInterface interface {
- NewTx(tx any) GroupRequestModelInterface
Create(ctx context.Context, groupRequests []*GroupRequestModel) (err error)
Delete(ctx context.Context, groupID string, userID string) (err error)
UpdateHandler(ctx context.Context, groupID string, userID string, handledMsg string, handleResult int32) (err error)
Take(ctx context.Context, groupID string, userID string) (groupRequest *GroupRequestModel, err error)
- FindGroupRequests(ctx context.Context, groupID string, userIDs []string) (int64, []*GroupRequestModel, error)
- Page(
- ctx context.Context,
- userID string,
- pageNumber, showNumber int32,
- ) (total uint32, groups []*GroupRequestModel, err error)
- PageGroup(
- ctx context.Context,
- groupIDs []string,
- pageNumber, showNumber int32,
- ) (total uint32, groups []*GroupRequestModel, err error)
+ FindGroupRequests(ctx context.Context, groupID string, userIDs []string) ([]*GroupRequestModel, error)
+ Page(ctx context.Context, userID string, pagination pagination.Pagination) (total int64, groups []*GroupRequestModel, err error)
+ PageGroup(ctx context.Context, groupIDs []string, pagination pagination.Pagination) (total int64, groups []*GroupRequestModel, err error)
}
diff --git a/pkg/common/db/table/relation/log.go b/pkg/common/db/table/relation/log.go
index 72d0fa64e..ba63c0c2b 100644
--- a/pkg/common/db/table/relation/log.go
+++ b/pkg/common/db/table/relation/log.go
@@ -1,25 +1,41 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package relation
import (
"context"
"time"
+
+ "github.com/OpenIMSDK/tools/pagination"
)
-type Log struct {
- LogID string `gorm:"column:log_id;primary_key;type:char(64)"`
- Platform string `gorm:"column:platform;type:varchar(32)"`
- UserID string `gorm:"column:user_id;type:char(64)"`
- CreateTime time.Time `gorm:"index:,sort:desc"`
- Url string `gorm:"column:url;type varchar(255)"`
- FileName string `gorm:"column:filename;type varchar(255)"`
- SystemType string `gorm:"column:system_type;type varchar(255)"`
- Version string `gorm:"column:version;type varchar(255)"`
- Ex string `gorm:"column:ex;type varchar(255)"`
+type LogModel struct {
+ LogID string `bson:"log_id"`
+ Platform string `bson:"platform"`
+ UserID string `bson:"user_id"`
+ CreateTime time.Time `bson:"create_time"`
+ Url string `bson:"url"`
+ FileName string `bson:"file_name"`
+ SystemType string `bson:"system_type"`
+ Version string `bson:"version"`
+ Ex string `bson:"ex"`
}
type LogInterface interface {
- Create(ctx context.Context, log []*Log) error
- Search(ctx context.Context, keyword string, start time.Time, end time.Time, pageNumber int32, showNumber int32) (uint32, []*Log, error)
+ Create(ctx context.Context, log []*LogModel) error
+ Search(ctx context.Context, keyword string, start time.Time, end time.Time, pagination pagination.Pagination) (int64, []*LogModel, error)
Delete(ctx context.Context, logID []string, userID string) error
- Get(ctx context.Context, logIDs []string, userID string) ([]*Log, error)
+ Get(ctx context.Context, logIDs []string, userID string) ([]*LogModel, error)
}
diff --git a/pkg/common/db/table/relation/object.go b/pkg/common/db/table/relation/object.go
index 0ed4130a6..678fddcfd 100644
--- a/pkg/common/db/table/relation/object.go
+++ b/pkg/common/db/table/relation/object.go
@@ -19,27 +19,20 @@ import (
"time"
)
-const (
- ObjectInfoModelTableName = "object"
-)
-
type ObjectModel struct {
- Name string `gorm:"column:name;primary_key"`
- UserID string `gorm:"column:user_id"`
- Hash string `gorm:"column:hash"`
- Key string `gorm:"column:key"`
- Size int64 `gorm:"column:size"`
- ContentType string `gorm:"column:content_type"`
- Cause string `gorm:"column:cause"`
- CreateTime time.Time `gorm:"column:create_time"`
-}
-
-func (ObjectModel) TableName() string {
- return ObjectInfoModelTableName
+ Name string `bson:"name"`
+ UserID string `bson:"user_id"`
+ Hash string `bson:"hash"`
+ Engine string `bson:"engine"`
+ Key string `bson:"key"`
+ Size int64 `bson:"size"`
+ ContentType string `bson:"content_type"`
+ Group string `bson:"group"`
+ CreateTime time.Time `bson:"create_time"`
}
type ObjectInfoModelInterface interface {
- NewTx(tx any) ObjectInfoModelInterface
SetObject(ctx context.Context, obj *ObjectModel) error
- Take(ctx context.Context, name string) (*ObjectModel, error)
+ Take(ctx context.Context, engine string, name string) (*ObjectModel, error)
+ Delete(ctx context.Context, engine string, name string) error
}
diff --git a/pkg/common/db/table/relation/user.go b/pkg/common/db/table/relation/user.go
index 10a715bda..dbb2ff464 100644
--- a/pkg/common/db/table/relation/user.go
+++ b/pkg/common/db/table/relation/user.go
@@ -17,20 +17,20 @@ package relation
import (
"context"
"time"
-)
-const (
- UserModelTableName = "users"
+ "github.com/OpenIMSDK/protocol/user"
+
+ "github.com/OpenIMSDK/tools/pagination"
)
type UserModel struct {
- UserID string `gorm:"column:user_id;primary_key;size:64"`
- Nickname string `gorm:"column:name;size:255"`
- FaceURL string `gorm:"column:face_url;size:255"`
- Ex string `gorm:"column:ex;size:1024"`
- CreateTime time.Time `gorm:"column:create_time;index:create_time;autoCreateTime"`
- AppMangerLevel int32 `gorm:"column:app_manger_level;default:1"`
- GlobalRecvMsgOpt int32 `gorm:"column:global_recv_msg_opt"`
+ UserID string `bson:"user_id"`
+ Nickname string `bson:"nickname"`
+ FaceURL string `bson:"face_url"`
+ Ex string `bson:"ex"`
+ AppMangerLevel int32 `bson:"app_manger_level"`
+ GlobalRecvMsgOpt int32 `bson:"global_recv_msg_opt"`
+ CreateTime time.Time `bson:"create_time"`
}
func (u *UserModel) GetNickname() string {
@@ -41,32 +41,35 @@ func (u *UserModel) GetFaceURL() string {
return u.FaceURL
}
-func (u *UserModel) GetUserID() string {
+func (u UserModel) GetUserID() string {
return u.UserID
}
-func (u *UserModel) GetEx() string {
+func (u UserModel) GetEx() string {
return u.Ex
}
-func (UserModel) TableName() string {
- return UserModelTableName
-}
-
type UserModelInterface interface {
Create(ctx context.Context, users []*UserModel) (err error)
- UpdateByMap(ctx context.Context, userID string, args map[string]interface{}) (err error)
- Update(ctx context.Context, user *UserModel) (err error)
-	// Get the specified users' info; missing users do not cause an error
+ UpdateByMap(ctx context.Context, userID string, args map[string]any) (err error)
Find(ctx context.Context, userIDs []string) (users []*UserModel, err error)
- // Get a specific user's info; an error is returned if the user does not exist
Take(ctx context.Context, userID string) (user *UserModel, err error)
- // Get user info; no error is returned if the user does not exist
- Page(ctx context.Context, pageNumber, showNumber int32) (users []*UserModel, count int64, err error)
- GetAllUserID(ctx context.Context, pageNumber, showNumber int32) (userIDs []string, err error)
+ TakeNotification(ctx context.Context, level int64) (user []*UserModel, err error)
+ TakeByNickname(ctx context.Context, nickname string) (user []*UserModel, err error)
+ Page(ctx context.Context, pagination pagination.Pagination) (count int64, users []*UserModel, err error)
+ PageFindUser(ctx context.Context, level1 int64, level2 int64, pagination pagination.Pagination) (count int64, users []*UserModel, err error)
+ PageFindUserWithKeyword(ctx context.Context, level1 int64, level2 int64, userID, nickName string, pagination pagination.Pagination) (count int64, users []*UserModel, err error)
+ Exist(ctx context.Context, userID string) (exist bool, err error)
+ GetAllUserID(ctx context.Context, pagination pagination.Pagination) (count int64, userIDs []string, err error)
GetUserGlobalRecvMsgOpt(ctx context.Context, userID string) (opt int, err error)
// Get the total number of users
CountTotal(ctx context.Context, before *time.Time) (count int64, err error)
// Get the daily user increment within the given time range
CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error)
+ // CRUD user command
+ AddUserCommand(ctx context.Context, userID string, Type int32, UUID string, value string, ex string) error
+ DeleteUserCommand(ctx context.Context, userID string, Type int32, UUID string) error
+ UpdateUserCommand(ctx context.Context, userID string, Type int32, UUID string, val map[string]any) error
+ GetUserCommand(ctx context.Context, userID string, Type int32) ([]*user.CommandInfoResp, error)
+ GetAllUserCommand(ctx context.Context, userID string) ([]*user.AllCommandInfoResp, error)
}
diff --git a/pkg/common/db/table/relation/utils.go b/pkg/common/db/table/relation/utils.go
index c944eae8b..380f2410e 100644
--- a/pkg/common/db/table/relation/utils.go
+++ b/pkg/common/db/table/relation/utils.go
@@ -15,9 +15,8 @@
package relation
import (
- "gorm.io/gorm"
-
"github.com/OpenIMSDK/tools/utils"
+ "go.mongodb.org/mongo-driver/mongo"
)
type BatchUpdateGroupMember struct {
@@ -32,5 +31,5 @@ type GroupSimpleUserID struct {
}
func IsNotFound(err error) bool {
- return utils.Unwrap(err) == gorm.ErrRecordNotFound
+ return utils.Unwrap(err) == mongo.ErrNoDocuments
}
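
With relational lookups replaced by MongoDB queries, `IsNotFound` now matches `mongo.ErrNoDocuments` instead of `gorm.ErrRecordNotFound`. A hedged sketch of the calling pattern — the collection name, filter, and URI are placeholders, not values from this patch:

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// takeUser maps the driver's "no documents" error to a caller-friendly
// not-found error, which is the same condition IsNotFound checks once the
// error has been unwrapped.
func takeUser(ctx context.Context, coll *mongo.Collection, userID string) (bson.M, error) {
	var doc bson.M
	err := coll.FindOne(ctx, bson.M{"user_id": userID}).Decode(&doc)
	if errors.Is(err, mongo.ErrNoDocuments) {
		return nil, fmt.Errorf("user %q not found", userID)
	}
	return doc, err
}

func main() {
	client, err := mongo.Connect(context.Background(), options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		panic(err)
	}
	defer client.Disconnect(context.Background())
	_, _ = takeUser(context.Background(), client.Database("openim_v3").Collection("user"), "u100")
}
```
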
diff --git a/pkg/common/db/table/unrelation/super_group.go b/pkg/common/db/table/unrelation/super_group.go
index 80c3ef9c7..1fd80c67a 100644
--- a/pkg/common/db/table/unrelation/super_group.go
+++ b/pkg/common/db/table/unrelation/super_group.go
@@ -14,40 +14,40 @@
package unrelation
-import (
- "context"
-)
-
-const (
- CSuperGroup = "super_group"
- CUserToSuperGroup = "user_to_super_group"
-)
-
-type SuperGroupModel struct {
- GroupID string `bson:"group_id" json:"groupID"`
- MemberIDs []string `bson:"member_id_list" json:"memberIDList"`
-}
-
-func (SuperGroupModel) TableName() string {
- return CSuperGroup
-}
-
-type UserToSuperGroupModel struct {
- UserID string `bson:"user_id" json:"userID"`
- GroupIDs []string `bson:"group_id_list" json:"groupIDList"`
-}
-
-func (UserToSuperGroupModel) TableName() string {
- return CUserToSuperGroup
-}
-
-type SuperGroupModelInterface interface {
- CreateSuperGroup(ctx context.Context, groupID string, initMemberIDs []string) error
- TakeSuperGroup(ctx context.Context, groupID string) (group *SuperGroupModel, err error)
- FindSuperGroup(ctx context.Context, groupIDs []string) (groups []*SuperGroupModel, err error)
- AddUserToSuperGroup(ctx context.Context, groupID string, userIDs []string) error
- RemoverUserFromSuperGroup(ctx context.Context, groupID string, userIDs []string) error
- GetSuperGroupByUserID(ctx context.Context, userID string) (*UserToSuperGroupModel, error)
- DeleteSuperGroup(ctx context.Context, groupID string) error
- RemoveGroupFromUser(ctx context.Context, groupID string, userIDs []string) error
-}
+//import (
+// "context"
+//)
+//
+//const (
+// CSuperGroup = "super_group"
+// CUserToSuperGroup = "user_to_super_group"
+//)
+//
+//type SuperGroupModel struct {
+// GroupID string `bson:"group_id" json:"groupID"`
+// MemberIDs []string `bson:"member_id_list" json:"memberIDList"`
+//}
+//
+//func (SuperGroupModel) TableName() string {
+// return CSuperGroup
+//}
+//
+//type UserToSuperGroupModel struct {
+// UserID string `bson:"user_id" json:"userID"`
+// GroupIDs []string `bson:"group_id_list" json:"groupIDList"`
+//}
+//
+//func (UserToSuperGroupModel) TableName() string {
+// return CUserToSuperGroup
+//}
+//
+//type SuperGroupModelInterface interface {
+// CreateSuperGroup(ctx context.Context, groupID string, initMemberIDs []string) error
+// TakeSuperGroup(ctx context.Context, groupID string) (group *SuperGroupModel, err error)
+// FindSuperGroup(ctx context.Context, groupIDs []string) (groups []*SuperGroupModel, err error)
+// AddUserToSuperGroup(ctx context.Context, groupID string, userIDs []string) error
+// RemoverUserFromSuperGroup(ctx context.Context, groupID string, userIDs []string) error
+// GetSuperGroupByUserID(ctx context.Context, userID string) (*UserToSuperGroupModel, error)
+// DeleteSuperGroup(ctx context.Context, groupID string) error
+// RemoveGroupFromUser(ctx context.Context, groupID string, userIDs []string) error
+//}
diff --git a/pkg/common/db/unrelation/mongo.go b/pkg/common/db/unrelation/mongo.go
old mode 100755
new mode 100644
index 09e3e904e..4c093b3c3
--- a/pkg/common/db/unrelation/mongo.go
+++ b/pkg/common/db/unrelation/mongo.go
@@ -17,24 +17,24 @@ package unrelation
import (
"context"
"fmt"
+ "os"
"strings"
"time"
"go.mongodb.org/mongo-driver/bson"
-
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
"github.com/OpenIMSDK/tools/errs"
"github.com/OpenIMSDK/tools/mw/specialerror"
- "github.com/OpenIMSDK/tools/utils"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/db/table/unrelation"
)
const (
- maxRetry = 10 // number of retries
+ maxRetry = 10 // number of retries
+ mongoConnTimeout = 10 * time.Second
)
type Mongo struct {
@@ -44,100 +44,127 @@ type Mongo struct {
// NewMongo Initialize MongoDB connection.
func NewMongo() (*Mongo, error) {
specialerror.AddReplace(mongo.ErrNoDocuments, errs.ErrRecordNotFound)
- uri := "mongodb://sample.host:27017/?maxPoolSize=20&w=majority"
- if config.Config.Mongo.Uri != "" {
- uri = config.Config.Mongo.Uri
- } else {
- mongodbHosts := ""
- for i, v := range config.Config.Mongo.Address {
- if i == len(config.Config.Mongo.Address)-1 {
- mongodbHosts += v
- } else {
- mongodbHosts += v + ","
- }
- }
- if config.Config.Mongo.Password != "" && config.Config.Mongo.Username != "" {
- uri = fmt.Sprintf("mongodb://%s:%s@%s/%s?maxPoolSize=%d&authSource=admin",
- config.Config.Mongo.Username, config.Config.Mongo.Password, mongodbHosts,
- config.Config.Mongo.Database, config.Config.Mongo.MaxPoolSize)
- } else {
- uri = fmt.Sprintf("mongodb://%s/%s/?maxPoolSize=%d&authSource=admin",
- mongodbHosts, config.Config.Mongo.Database,
- config.Config.Mongo.MaxPoolSize)
- }
- }
- fmt.Println("mongo:", uri)
+ uri := buildMongoURI()
+
var mongoClient *mongo.Client
- var err error = nil
+ var err error
+
+ // Retry connecting to MongoDB
for i := 0; i <= maxRetry; i++ {
- ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
+ ctx, cancel := context.WithTimeout(context.Background(), mongoConnTimeout)
defer cancel()
mongoClient, err = mongo.Connect(ctx, options.Client().ApplyURI(uri))
if err == nil {
+ if err = mongoClient.Ping(ctx, nil); err != nil {
+ return nil, errs.Wrap(err, uri)
+ }
return &Mongo{db: mongoClient}, nil
}
- if cmdErr, ok := err.(mongo.CommandError); ok {
- if cmdErr.Code == 13 || cmdErr.Code == 18 {
- return nil, err
- } else {
- fmt.Printf("Failed to connect to MongoDB: %s\n", err)
- }
+ if shouldRetry(err) {
+ time.Sleep(time.Second) // exponential backoff could be implemented here
+ continue
}
}
- return nil, err
+ return nil, errs.Wrap(err, uri)
+}
+
+func buildMongoURI() string {
+ uri := os.Getenv("MONGO_URI")
+ if uri != "" {
+ return uri
+ }
+
+ if config.Config.Mongo.Uri != "" {
+ return config.Config.Mongo.Uri
+ }
+
+ username := os.Getenv("MONGO_OPENIM_USERNAME")
+ password := os.Getenv("MONGO_OPENIM_PASSWORD")
+ address := os.Getenv("MONGO_ADDRESS")
+ port := os.Getenv("MONGO_PORT")
+ database := os.Getenv("MONGO_DATABASE")
+ maxPoolSize := os.Getenv("MONGO_MAX_POOL_SIZE")
+
+ if username == "" {
+ username = config.Config.Mongo.Username
+ }
+ if password == "" {
+ password = config.Config.Mongo.Password
+ }
+ if address == "" {
+ address = strings.Join(config.Config.Mongo.Address, ",")
+ } else if port != "" {
+ address = fmt.Sprintf("%s:%s", address, port)
+ }
+ if database == "" {
+ database = config.Config.Mongo.Database
+ }
+ if maxPoolSize == "" {
+ maxPoolSize = fmt.Sprint(config.Config.Mongo.MaxPoolSize)
+ }
+
+ uriFormat := "mongodb://%s/%s?maxPoolSize=%s"
+ if username != "" && password != "" {
+ uriFormat = "mongodb://%s:%s@%s/%s?maxPoolSize=%s"
+ return fmt.Sprintf(uriFormat, username, password, address, database, maxPoolSize)
+ }
+ return fmt.Sprintf(uriFormat, address, database, maxPoolSize)
+}
+
+func shouldRetry(err error) bool {
+ if cmdErr, ok := err.(mongo.CommandError); ok {
+ return cmdErr.Code != 13 && cmdErr.Code != 18
+ }
+ return true
}
+// GetClient returns the MongoDB client.
func (m *Mongo) GetClient() *mongo.Client {
return m.db
}
+// GetDatabase returns the specific database from MongoDB.
func (m *Mongo) GetDatabase() *mongo.Database {
return m.db.Database(config.Config.Mongo.Database)
}
+// CreateMsgIndex creates an index for messages in MongoDB.
func (m *Mongo) CreateMsgIndex() error {
return m.createMongoIndex(unrelation.Msg, true, "doc_id")
}
-func (m *Mongo) CreateSuperGroupIndex() error {
- if err := m.createMongoIndex(unrelation.CSuperGroup, true, "group_id"); err != nil {
- return err
- }
- if err := m.createMongoIndex(unrelation.CUserToSuperGroup, true, "user_id"); err != nil {
- return err
- }
- return nil
-}
-
+// createMongoIndex creates an index in a MongoDB collection.
func (m *Mongo) createMongoIndex(collection string, isUnique bool, keys ...string) error {
- db := m.db.Database(config.Config.Mongo.Database).Collection(collection)
+ db := m.GetDatabase().Collection(collection)
opts := options.CreateIndexes().SetMaxTime(10 * time.Second)
indexView := db.Indexes()
- keysDoc := bson.D{}
- // create composite indexes
- for _, key := range keys {
- if strings.HasPrefix(key, "-") {
- keysDoc = append(keysDoc, bson.E{Key: strings.TrimLeft(key, "-"), Value: -1})
- // keysDoc = keysDoc.Append(strings.TrimLeft(key, "-"), bsonx.Int32(-1))
- } else {
- keysDoc = append(keysDoc, bson.E{Key: key, Value: 1})
- // keysDoc = keysDoc.Append(key, bsonx.Int32(1))
- }
- }
- // create index
+
+ keysDoc := buildIndexKeys(keys)
+
index := mongo.IndexModel{
Keys: keysDoc,
}
if isUnique {
index.Options = options.Index().SetUnique(true)
}
- result, err := indexView.CreateOne(
- context.Background(),
- index,
- opts,
- )
+
+ _, err := indexView.CreateOne(context.Background(), index, opts)
if err != nil {
- return utils.Wrap(err, result)
+ return errs.Wrap(err, "CreateIndex")
}
return nil
}
+
+// buildIndexKeys builds the BSON document for index keys.
+func buildIndexKeys(keys []string) bson.D {
+ keysDoc := bson.D{}
+ for _, key := range keys {
+ direction := 1 // default direction is ascending
+ if strings.HasPrefix(key, "-") {
+ direction = -1 // descending order for prefixed with "-"
+ key = strings.TrimLeft(key, "-")
+ }
+ keysDoc = append(keysDoc, bson.E{Key: key, Value: direction})
+ }
+ return keysDoc
+}
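
`createMongoIndex` now builds its keys through `buildIndexKeys`, where a leading `-` marks a descending key. A standalone restatement of that convention with an illustrative key set (not taken from the patch):

```go
package main

import (
	"fmt"
	"strings"

	"go.mongodb.org/mongo-driver/bson"
)

// buildIndexKeys reproduces the helper above: "field" becomes an ascending
// index key, "-field" a descending one.
func buildIndexKeys(keys []string) bson.D {
	doc := bson.D{}
	for _, key := range keys {
		direction := 1
		if strings.HasPrefix(key, "-") {
			direction = -1
			key = strings.TrimLeft(key, "-")
		}
		doc = append(doc, bson.E{Key: key, Value: direction})
	}
	return doc
}

func main() {
	fmt.Println(buildIndexKeys([]string{"doc_id", "-create_time"}))
	// Output: [{doc_id 1} {create_time -1}]
}
```
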
diff --git a/pkg/common/db/unrelation/msg.go b/pkg/common/db/unrelation/msg.go
old mode 100755
new mode 100644
diff --git a/pkg/common/db/unrelation/msg_convert.go b/pkg/common/db/unrelation/msg_convert.go
index 810b4f419..373bc843e 100644
--- a/pkg/common/db/unrelation/msg_convert.go
+++ b/pkg/common/db/unrelation/msg_convert.go
@@ -48,7 +48,7 @@ func (m *MsgMongoDriver) ConvertMsgsDocLen(ctx context.Context, conversationIDs
log.ZError(ctx, "convertAll delete many failed", err, "conversationID", conversationID)
continue
}
- var newMsgDocs []interface{}
+ var newMsgDocs []any
for _, msgDoc := range msgDocs {
if int64(len(msgDoc.Msg)) == m.model.GetSingleGocMsgNum() {
continue
diff --git a/pkg/common/db/unrelation/super_group.go b/pkg/common/db/unrelation/super_group.go
index c762140a2..6c2bb6aaf 100644
--- a/pkg/common/db/unrelation/super_group.go
+++ b/pkg/common/db/unrelation/super_group.go
@@ -14,149 +14,150 @@
package unrelation
-import (
- "context"
-
- "go.mongodb.org/mongo-driver/bson"
- "go.mongodb.org/mongo-driver/mongo"
- "go.mongodb.org/mongo-driver/mongo/options"
-
- "github.com/OpenIMSDK/tools/utils"
-
- "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/unrelation"
-)
-
-func NewSuperGroupMongoDriver(database *mongo.Database) unrelation.SuperGroupModelInterface {
- return &SuperGroupMongoDriver{
- superGroupCollection: database.Collection(unrelation.CSuperGroup),
- userToSuperGroupCollection: database.Collection(unrelation.CUserToSuperGroup),
- }
-}
-
-type SuperGroupMongoDriver struct {
- superGroupCollection *mongo.Collection
- userToSuperGroupCollection *mongo.Collection
-}
-
-func (s *SuperGroupMongoDriver) CreateSuperGroup(ctx context.Context, groupID string, initMemberIDs []string) error {
- _, err := s.superGroupCollection.InsertOne(ctx, &unrelation.SuperGroupModel{
- GroupID: groupID,
- MemberIDs: initMemberIDs,
- })
- if err != nil {
- return err
- }
- for _, userID := range initMemberIDs {
- _, err = s.userToSuperGroupCollection.UpdateOne(
- ctx,
- bson.M{"user_id": userID},
- bson.M{"$addToSet": bson.M{"group_id_list": groupID}},
- &options.UpdateOptions{
- Upsert: utils.ToPtr(true),
- },
- )
- if err != nil {
- return err
- }
- }
- return nil
-}
-
-func (s *SuperGroupMongoDriver) TakeSuperGroup(
- ctx context.Context,
- groupID string,
-) (group *unrelation.SuperGroupModel, err error) {
- if err := s.superGroupCollection.FindOne(ctx, bson.M{"group_id": groupID}).Decode(&group); err != nil {
- return nil, utils.Wrap(err, "")
- }
- return group, nil
-}
-
-func (s *SuperGroupMongoDriver) FindSuperGroup(
- ctx context.Context,
- groupIDs []string,
-) (groups []*unrelation.SuperGroupModel, err error) {
- cursor, err := s.superGroupCollection.Find(ctx, bson.M{"group_id": bson.M{
- "$in": groupIDs,
- }})
- if err != nil {
- return nil, err
- }
- defer cursor.Close(ctx)
- if err := cursor.All(ctx, &groups); err != nil {
- return nil, utils.Wrap(err, "")
- }
- return groups, nil
-}
-
-func (s *SuperGroupMongoDriver) AddUserToSuperGroup(ctx context.Context, groupID string, userIDs []string) error {
- _, err := s.superGroupCollection.UpdateOne(
- ctx,
- bson.M{"group_id": groupID},
- bson.M{"$addToSet": bson.M{"member_id_list": bson.M{"$each": userIDs}}},
- )
- if err != nil {
- return err
- }
- upsert := true
- opts := &options.UpdateOptions{
- Upsert: &upsert,
- }
- for _, userID := range userIDs {
- _, err = s.userToSuperGroupCollection.UpdateOne(
- ctx,
- bson.M{"user_id": userID},
- bson.M{"$addToSet": bson.M{"group_id_list": groupID}},
- opts,
- )
- if err != nil {
- return utils.Wrap(err, "transaction failed")
- }
- }
- return nil
-}
-
-func (s *SuperGroupMongoDriver) RemoverUserFromSuperGroup(ctx context.Context, groupID string, userIDs []string) error {
- _, err := s.superGroupCollection.UpdateOne(
- ctx,
- bson.M{"group_id": groupID},
- bson.M{"$pull": bson.M{"member_id_list": bson.M{"$in": userIDs}}},
- )
- if err != nil {
- return err
- }
- err = s.RemoveGroupFromUser(ctx, groupID, userIDs)
- if err != nil {
- return err
- }
- return nil
-}
-
-func (s *SuperGroupMongoDriver) GetSuperGroupByUserID(
- ctx context.Context,
- userID string,
-) (*unrelation.UserToSuperGroupModel, error) {
- var user unrelation.UserToSuperGroupModel
- err := s.userToSuperGroupCollection.FindOne(ctx, bson.M{"user_id": userID}).Decode(&user)
- return &user, utils.Wrap(err, "")
-}
-
-func (s *SuperGroupMongoDriver) DeleteSuperGroup(ctx context.Context, groupID string) error {
- group, err := s.TakeSuperGroup(ctx, groupID)
- if err != nil {
- return err
- }
- if _, err := s.superGroupCollection.DeleteOne(ctx, bson.M{"group_id": groupID}); err != nil {
- return utils.Wrap(err, "")
- }
- return s.RemoveGroupFromUser(ctx, groupID, group.MemberIDs)
-}
-
-func (s *SuperGroupMongoDriver) RemoveGroupFromUser(ctx context.Context, groupID string, userIDs []string) error {
- _, err := s.userToSuperGroupCollection.UpdateOne(
- ctx,
- bson.M{"user_id": bson.M{"$in": userIDs}},
- bson.M{"$pull": bson.M{"group_id_list": groupID}},
- )
- return utils.Wrap(err, "")
-}
+//
+//import (
+// "context"
+//
+// "go.mongodb.org/mongo-driver/bson"
+// "go.mongodb.org/mongo-driver/mongo"
+// "go.mongodb.org/mongo-driver/mongo/options"
+//
+// "github.com/OpenIMSDK/tools/utils"
+//
+// "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/unrelation"
+//)
+//
+//func NewSuperGroupMongoDriver(database *mongo.Database) unrelation.SuperGroupModelInterface {
+// return &SuperGroupMongoDriver{
+// superGroupCollection: database.Collection(unrelation.CSuperGroup),
+// userToSuperGroupCollection: database.Collection(unrelation.CUserToSuperGroup),
+// }
+//}
+//
+//type SuperGroupMongoDriver struct {
+// superGroupCollection *mongo.Collection
+// userToSuperGroupCollection *mongo.Collection
+//}
+//
+//func (s *SuperGroupMongoDriver) CreateSuperGroup(ctx context.Context, groupID string, initMemberIDs []string) error {
+// _, err := s.superGroupCollection.InsertOne(ctx, &unrelation.SuperGroupModel{
+// GroupID: groupID,
+// MemberIDs: initMemberIDs,
+// })
+// if err != nil {
+// return err
+// }
+// for _, userID := range initMemberIDs {
+// _, err = s.userToSuperGroupCollection.UpdateOne(
+// ctx,
+// bson.M{"user_id": userID},
+// bson.M{"$addToSet": bson.M{"group_id_list": groupID}},
+// &options.UpdateOptions{
+// Upsert: utils.ToPtr(true),
+// },
+// )
+// if err != nil {
+// return err
+// }
+// }
+// return nil
+//}
+//
+//func (s *SuperGroupMongoDriver) TakeSuperGroup(
+// ctx context.Context,
+// groupID string,
+//) (group *unrelation.SuperGroupModel, err error) {
+// if err := s.superGroupCollection.FindOne(ctx, bson.M{"group_id": groupID}).Decode(&group); err != nil {
+// return nil, utils.Wrap(err, "")
+// }
+// return group, nil
+//}
+//
+//func (s *SuperGroupMongoDriver) FindSuperGroup(
+// ctx context.Context,
+// groupIDs []string,
+//) (groups []*unrelation.SuperGroupModel, err error) {
+// cursor, err := s.superGroupCollection.Find(ctx, bson.M{"group_id": bson.M{
+// "$in": groupIDs,
+// }})
+// if err != nil {
+// return nil, err
+// }
+// defer cursor.Close(ctx)
+// if err := cursor.All(ctx, &groups); err != nil {
+// return nil, utils.Wrap(err, "")
+// }
+// return groups, nil
+//}
+//
+//func (s *SuperGroupMongoDriver) AddUserToSuperGroup(ctx context.Context, groupID string, userIDs []string) error {
+// _, err := s.superGroupCollection.UpdateOne(
+// ctx,
+// bson.M{"group_id": groupID},
+// bson.M{"$addToSet": bson.M{"member_id_list": bson.M{"$each": userIDs}}},
+// )
+// if err != nil {
+// return err
+// }
+// upsert := true
+// opts := &options.UpdateOptions{
+// Upsert: &upsert,
+// }
+// for _, userID := range userIDs {
+// _, err = s.userToSuperGroupCollection.UpdateOne(
+// ctx,
+// bson.M{"user_id": userID},
+// bson.M{"$addToSet": bson.M{"group_id_list": groupID}},
+// opts,
+// )
+// if err != nil {
+// return utils.Wrap(err, "transaction failed")
+// }
+// }
+// return nil
+//}
+//
+//func (s *SuperGroupMongoDriver) RemoverUserFromSuperGroup(ctx context.Context, groupID string, userIDs []string) error {
+// _, err := s.superGroupCollection.UpdateOne(
+// ctx,
+// bson.M{"group_id": groupID},
+// bson.M{"$pull": bson.M{"member_id_list": bson.M{"$in": userIDs}}},
+// )
+// if err != nil {
+// return err
+// }
+// err = s.RemoveGroupFromUser(ctx, groupID, userIDs)
+// if err != nil {
+// return err
+// }
+// return nil
+//}
+//
+//func (s *SuperGroupMongoDriver) GetSuperGroupByUserID(
+// ctx context.Context,
+// userID string,
+//) (*unrelation.UserToSuperGroupModel, error) {
+// var user unrelation.UserToSuperGroupModel
+// err := s.userToSuperGroupCollection.FindOne(ctx, bson.M{"user_id": userID}).Decode(&user)
+// return &user, utils.Wrap(err, "")
+//}
+//
+//func (s *SuperGroupMongoDriver) DeleteSuperGroup(ctx context.Context, groupID string) error {
+// group, err := s.TakeSuperGroup(ctx, groupID)
+// if err != nil {
+// return err
+// }
+// if _, err := s.superGroupCollection.DeleteOne(ctx, bson.M{"group_id": groupID}); err != nil {
+// return utils.Wrap(err, "")
+// }
+// return s.RemoveGroupFromUser(ctx, groupID, group.MemberIDs)
+//}
+//
+//func (s *SuperGroupMongoDriver) RemoveGroupFromUser(ctx context.Context, groupID string, userIDs []string) error {
+// _, err := s.userToSuperGroupCollection.UpdateOne(
+// ctx,
+// bson.M{"user_id": bson.M{"$in": userIDs}},
+// bson.M{"$pull": bson.M{"group_id_list": groupID}},
+// )
+// return utils.Wrap(err, "")
+//}
diff --git a/pkg/common/db/unrelation/user.go b/pkg/common/db/unrelation/user.go
old mode 100755
new mode 100644
diff --git a/pkg/common/discoveryregister/direct/directResolver.go b/pkg/common/discoveryregister/direct/directResolver.go
new file mode 100644
index 000000000..a706ce5e4
--- /dev/null
+++ b/pkg/common/discoveryregister/direct/directResolver.go
@@ -0,0 +1,96 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package direct
+
+import (
+ "context"
+ "math/rand"
+ "strings"
+
+ "github.com/OpenIMSDK/tools/log"
+ "google.golang.org/grpc/resolver"
+)
+
+const (
+ slashSeparator = "/"
+ // EndpointSepChar is the separator char in endpoints.
+ EndpointSepChar = ','
+
+ subsetSize = 32
+ scheme = "direct"
+)
+
+type ResolverDirect struct {
+}
+
+func NewResolverDirect() *ResolverDirect {
+ return &ResolverDirect{}
+}
+
+func (rd *ResolverDirect) Build(target resolver.Target, cc resolver.ClientConn, _ resolver.BuildOptions) (
+ resolver.Resolver, error) {
+ log.ZDebug(context.Background(), "Build", "target", target)
+ endpoints := strings.FieldsFunc(GetEndpoints(target), func(r rune) bool {
+ return r == EndpointSepChar
+ })
+ endpoints = subset(endpoints, subsetSize)
+ addrs := make([]resolver.Address, 0, len(endpoints))
+
+ for _, val := range endpoints {
+ addrs = append(addrs, resolver.Address{
+ Addr: val,
+ })
+ }
+ if err := cc.UpdateState(resolver.State{
+ Addresses: addrs,
+ }); err != nil {
+ return nil, err
+ }
+
+ return &nopResolver{cc: cc}, nil
+}
+func init() {
+ resolver.Register(&ResolverDirect{})
+}
+func (rd *ResolverDirect) Scheme() string {
+ return scheme // return your custom scheme name
+}
+
+// GetEndpoints returns the endpoints from the given target.
+func GetEndpoints(target resolver.Target) string {
+ return strings.Trim(target.URL.Path, slashSeparator)
+}
+func subset(set []string, sub int) []string {
+ rand.Shuffle(len(set), func(i, j int) {
+ set[i], set[j] = set[j], set[i]
+ })
+ if len(set) <= sub {
+ return set
+ }
+
+ return set[:sub]
+}
+
+type nopResolver struct {
+ cc resolver.ClientConn
+}
+
+func (n nopResolver) ResolveNow(options resolver.ResolveNowOptions) {
+
+}
+
+func (n nopResolver) Close() {
+
+}
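
Because `ResolverDirect` registers itself under the `direct` scheme in `init()`, a gRPC client can dial a comma-separated endpoint list without any external registry. A hedged usage sketch — the endpoints below are placeholders, not ports taken from the OpenIM configuration:

```go
package main

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// Blank import so the package's init() registers the "direct" resolver scheme.
	_ "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister/direct"
)

func main() {
	// Build() trims the leading slash and splits the URL path on ',' to obtain the addresses.
	conn, err := grpc.Dial(
		"direct:///127.0.0.1:10110,127.0.0.1:10120",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
}
```
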
diff --git a/pkg/common/discoveryregister/direct/directconn.go b/pkg/common/discoveryregister/direct/directconn.go
new file mode 100644
index 000000000..84f173ea6
--- /dev/null
+++ b/pkg/common/discoveryregister/direct/directconn.go
@@ -0,0 +1,170 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package direct
+
+import (
+ "context"
+ "errors"
+ "fmt"
+
+ "github.com/OpenIMSDK/tools/errs"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/credentials/insecure"
+
+ config2 "github.com/openimsdk/open-im-server/v3/pkg/common/config"
+)
+
+type ServiceAddresses map[string][]int
+
+func getServiceAddresses() ServiceAddresses {
+ return ServiceAddresses{
+ config2.Config.RpcRegisterName.OpenImUserName: config2.Config.RpcPort.OpenImUserPort,
+ config2.Config.RpcRegisterName.OpenImFriendName: config2.Config.RpcPort.OpenImFriendPort,
+ config2.Config.RpcRegisterName.OpenImMsgName: config2.Config.RpcPort.OpenImMessagePort,
+ config2.Config.RpcRegisterName.OpenImMessageGatewayName: config2.Config.LongConnSvr.OpenImMessageGatewayPort,
+ config2.Config.RpcRegisterName.OpenImGroupName: config2.Config.RpcPort.OpenImGroupPort,
+ config2.Config.RpcRegisterName.OpenImAuthName: config2.Config.RpcPort.OpenImAuthPort,
+ config2.Config.RpcRegisterName.OpenImPushName: config2.Config.RpcPort.OpenImPushPort,
+ config2.Config.RpcRegisterName.OpenImConversationName: config2.Config.RpcPort.OpenImConversationPort,
+ config2.Config.RpcRegisterName.OpenImThirdName: config2.Config.RpcPort.OpenImThirdPort,
+ }
+}
+
+type ConnDirect struct {
+ additionalOpts []grpc.DialOption
+ currentServiceAddress string
+ conns map[string][]*grpc.ClientConn
+ resolverDirect *ResolverDirect
+}
+
+func (cd *ConnDirect) GetClientLocalConns() map[string][]*grpc.ClientConn {
+ return nil
+}
+
+func (cd *ConnDirect) GetUserIdHashGatewayHost(ctx context.Context, userId string) (string, error) {
+ return "", nil
+}
+
+func (cd *ConnDirect) Register(serviceName, host string, port int, opts ...grpc.DialOption) error {
+ return nil
+}
+
+func (cd *ConnDirect) UnRegister() error {
+ return nil
+}
+
+func (cd *ConnDirect) CreateRpcRootNodes(serviceNames []string) error {
+ return nil
+}
+
+func (cd *ConnDirect) RegisterConf2Registry(key string, conf []byte) error {
+ return nil
+}
+
+func (cd *ConnDirect) GetConfFromRegistry(key string) ([]byte, error) {
+ return nil, nil
+}
+
+func (cd *ConnDirect) Close() {
+
+}
+
+func NewConnDirect() (*ConnDirect, error) {
+ return &ConnDirect{
+ conns: make(map[string][]*grpc.ClientConn),
+ resolverDirect: NewResolverDirect(),
+ }, nil
+}
+
+func (cd *ConnDirect) GetConns(ctx context.Context,
+ serviceName string, opts ...grpc.DialOption) ([]*grpc.ClientConn, error) {
+
+ if conns, exists := cd.conns[serviceName]; exists {
+ return conns, nil
+ }
+ ports := getServiceAddresses()[serviceName]
+ var connections []*grpc.ClientConn
+ for _, port := range ports {
+ conn, err := cd.dialServiceWithoutResolver(ctx, fmt.Sprintf(config2.Config.Rpc.ListenIP+":%d", port), append(cd.additionalOpts, opts...)...)
+ if err != nil {
+ fmt.Printf("connect to port %d failed,serviceName %s, IP %s\n", port, serviceName, config2.Config.Rpc.ListenIP)
+ }
+ connections = append(connections, conn)
+ }
+
+ if len(connections) == 0 {
+ return nil, fmt.Errorf("no connections found for service: %s", serviceName)
+ }
+ return connections, nil
+}
+
+func (cd *ConnDirect) GetConn(ctx context.Context, serviceName string, opts ...grpc.DialOption) (*grpc.ClientConn, error) {
+ // Get service addresses
+ addresses := getServiceAddresses()
+ address, ok := addresses[serviceName]
+ if !ok {
+ return nil, errs.Wrap(errors.New("unknown service name"), "serviceName", serviceName)
+ }
+ var result string
+ for _, addr := range address {
+ if result != "" {
+ result = result + "," + fmt.Sprintf(config2.Config.Rpc.ListenIP+":%d", addr)
+ } else {
+ result = fmt.Sprintf(config2.Config.Rpc.ListenIP+":%d", addr)
+ }
+ }
+ // Try to dial a new connection
+ conn, err := cd.dialService(ctx, result, append(cd.additionalOpts, opts...)...)
+ if err != nil {
+ return nil, errs.Wrap(err, "address", result)
+ }
+
+ // Store the new connection
+ cd.conns[serviceName] = append(cd.conns[serviceName], conn)
+ return conn, nil
+}
+
+func (cd *ConnDirect) GetSelfConnTarget() string {
+ return cd.currentServiceAddress
+}
+
+func (cd *ConnDirect) AddOption(opts ...grpc.DialOption) {
+ cd.additionalOpts = append(cd.additionalOpts, opts...)
+}
+
+func (cd *ConnDirect) CloseConn(conn *grpc.ClientConn) {
+ if conn != nil {
+ conn.Close()
+ }
+}
+
+func (cd *ConnDirect) dialService(ctx context.Context, address string, opts ...grpc.DialOption) (*grpc.ClientConn, error) {
+ options := append(opts, grpc.WithTransportCredentials(insecure.NewCredentials()))
+ conn, err := grpc.DialContext(ctx, cd.resolverDirect.Scheme()+":///"+address, options...)
+
+ if err != nil {
+ return nil, err
+ }
+ return conn, nil
+}
+func (cd *ConnDirect) dialServiceWithoutResolver(ctx context.Context, address string, opts ...grpc.DialOption) (*grpc.ClientConn, error) {
+ options := append(opts, grpc.WithTransportCredentials(insecure.NewCredentials()))
+ conn, err := grpc.DialContext(ctx, address, options...)
+
+ if err != nil {
+ return nil, err
+ }
+ return conn, nil
+}
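
`ConnDirect` resolves service names to `ListenIP:port` pairs straight from the loaded configuration, so it only works once `config.Config` has been populated. A hedged sketch of the intended call pattern — with an empty config the lookup fails, so treat this as an outline rather than a drop-in snippet:

```go
package main

import (
	"context"
	"fmt"

	"github.com/openimsdk/open-im-server/v3/pkg/common/config"
	"github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister/direct"
)

func main() {
	// Assumes config.Config has already been filled in by the usual config loader.
	cd, err := direct.NewConnDirect()
	if err != nil {
		panic(err)
	}
	conn, err := cd.GetConn(context.Background(), config.Config.RpcRegisterName.OpenImUserName)
	if err != nil {
		panic(err)
	}
	defer cd.CloseConn(conn)
	fmt.Println("dialing target:", conn.Target())
}
```
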
diff --git a/pkg/common/discoveryregister/discoveryregister.go b/pkg/common/discoveryregister/discoveryregister.go
index c204184ff..23a9e3245 100644
--- a/pkg/common/discoveryregister/discoveryregister.go
+++ b/pkg/common/discoveryregister/discoveryregister.go
@@ -1,93 +1,46 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package discoveryregister
import (
- "context"
"errors"
- "fmt"
- "time"
+ "os"
- "github.com/OpenIMSDK/tools/discoveryregistry"
- openkeeper "github.com/OpenIMSDK/tools/discoveryregistry/zookeeper"
- "github.com/OpenIMSDK/tools/log"
- "google.golang.org/grpc"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister/direct"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister/kubernetes"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister/zookeeper"
- "github.com/openimsdk/open-im-server/v3/pkg/common/config"
+ "github.com/OpenIMSDK/tools/discoveryregistry"
)
+// NewDiscoveryRegister creates a new service discovery and registry client based on the provided environment type.
func NewDiscoveryRegister(envType string) (discoveryregistry.SvcDiscoveryRegistry, error) {
- var client discoveryregistry.SvcDiscoveryRegistry
- var err error
+
+ if os.Getenv("ENVS_DISCOVERY") != "" {
+ envType = os.Getenv("ENVS_DISCOVERY")
+ }
+
switch envType {
case "zookeeper":
- client, err = openkeeper.NewClient(config.Config.Zookeeper.ZkAddr, config.Config.Zookeeper.Schema,
- openkeeper.WithFreq(time.Hour), openkeeper.WithUserNameAndPassword(
- config.Config.Zookeeper.Username,
- config.Config.Zookeeper.Password,
- ), openkeeper.WithRoundRobin(), openkeeper.WithTimeout(10), openkeeper.WithLogger(log.NewZkLogger()))
+ return zookeeper.NewZookeeperDiscoveryRegister()
case "k8s":
- client, err = NewK8sDiscoveryRegister()
+ return kubernetes.NewK8sDiscoveryRegister()
+ case "direct":
+ return direct.NewConnDirect()
default:
- client = nil
- err = errors.New("envType not correct")
+ return nil, errors.New("envType not correct")
}
- return client, err
-}
-
-type K8sDR struct {
- options []grpc.DialOption
- rpcRegisterAddr string
-}
-
-func NewK8sDiscoveryRegister() (discoveryregistry.SvcDiscoveryRegistry, error) {
- return &K8sDR{}, nil
-}
-
-func (cli *K8sDR) Register(serviceName, host string, port int, opts ...grpc.DialOption) error {
- cli.rpcRegisterAddr = serviceName
- return nil
-}
-func (cli *K8sDR) UnRegister() error {
-
- return nil
-}
-func (cli *K8sDR) CreateRpcRootNodes(serviceNames []string) error {
-
- return nil
-}
-func (cli *K8sDR) RegisterConf2Registry(key string, conf []byte) error {
-
- return nil
-}
-
-func (cli *K8sDR) GetConfFromRegistry(key string) ([]byte, error) {
-
- return nil, nil
-}
-func (cli *K8sDR) GetConns(ctx context.Context, serviceName string, opts ...grpc.DialOption) ([]*grpc.ClientConn, error) {
-
- conn, err := grpc.DialContext(ctx, serviceName, append(cli.options, opts...)...)
- return []*grpc.ClientConn{conn}, err
-}
-func (cli *K8sDR) GetConn(ctx context.Context, serviceName string, opts ...grpc.DialOption) (*grpc.ClientConn, error) {
-
- return grpc.DialContext(ctx, serviceName, append(cli.options, opts...)...)
-}
-func (cli *K8sDR) GetSelfConnTarget() string {
-
- return cli.rpcRegisterAddr
-}
-func (cli *K8sDR) AddOption(opts ...grpc.DialOption) {
- cli.options = append(cli.options, opts...)
-}
-func (cli *K8sDR) CloseConn(conn *grpc.ClientConn) {
- conn.Close()
-}
-
-// do not use this method for call rpc
-func (cli *K8sDR) GetClientLocalConns() map[string][]*grpc.ClientConn {
- fmt.Println("should not call this function!!!!!!!!!!!!!!!!!!!!!!!!!")
- return nil
-}
-func (cli *K8sDR) Close() {
- return
}
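
The factory now defers to the per-backend packages and lets the `ENVS_DISCOVERY` environment variable override whatever `envType` the caller passes in. A short sketch of that selection (the override value here is illustrative):

```go
package main

import (
	"fmt"
	"os"

	"github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister"
)

func main() {
	// ENVS_DISCOVERY, when set, wins over the envType argument below.
	os.Setenv("ENVS_DISCOVERY", "direct")

	client, err := discoveryregister.NewDiscoveryRegister("zookeeper")
	if err != nil {
		panic(err)
	}
	fmt.Printf("selected discovery client: %T\n", client) // *direct.ConnDirect
}
```
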
diff --git a/pkg/common/discoveryregister/discoveryregister_test.go b/pkg/common/discoveryregister/discoveryregister_test.go
index 8426598f9..5317db5c6 100644
--- a/pkg/common/discoveryregister/discoveryregister_test.go
+++ b/pkg/common/discoveryregister/discoveryregister_test.go
@@ -1,407 +1,61 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package discoveryregister
import (
- "context"
- "reflect"
+ "os"
"testing"
"github.com/OpenIMSDK/tools/discoveryregistry"
- "google.golang.org/grpc"
+ "github.com/stretchr/testify/assert"
)
-func TestNewDiscoveryRegister(t *testing.T) {
- type args struct {
- envType string
- }
- tests := []struct {
- name string
- args args
- want discoveryregistry.SvcDiscoveryRegistry
- wantErr bool
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- got, err := NewDiscoveryRegister(tt.args.envType)
- if (err != nil) != tt.wantErr {
- t.Errorf("NewDiscoveryRegister() error = %v, wantErr %v", err, tt.wantErr)
- return
- }
- if !reflect.DeepEqual(got, tt.want) {
- t.Errorf("NewDiscoveryRegister() = %v, want %v", got, tt.want)
- }
- })
- }
-}
-
-func TestNewK8sDiscoveryRegister(t *testing.T) {
- tests := []struct {
- name string
- want discoveryregistry.SvcDiscoveryRegistry
- wantErr bool
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- got, err := NewK8sDiscoveryRegister()
- if (err != nil) != tt.wantErr {
- t.Errorf("NewK8sDiscoveryRegister() error = %v, wantErr %v", err, tt.wantErr)
- return
- }
- if !reflect.DeepEqual(got, tt.want) {
- t.Errorf("NewK8sDiscoveryRegister() = %v, want %v", got, tt.want)
- }
- })
- }
-}
-
-func TestK8sDR_Register(t *testing.T) {
- type fields struct {
- options []grpc.DialOption
- rpcRegisterAddr string
- }
- type args struct {
- serviceName string
- host string
- port int
- opts []grpc.DialOption
- }
- tests := []struct {
- name string
- fields fields
- args args
- wantErr bool
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cli := &K8sDR{
- options: tt.fields.options,
- rpcRegisterAddr: tt.fields.rpcRegisterAddr,
- }
- if err := cli.Register(tt.args.serviceName, tt.args.host, tt.args.port, tt.args.opts...); (err != nil) != tt.wantErr {
- t.Errorf("K8sDR.Register() error = %v, wantErr %v", err, tt.wantErr)
- }
- })
- }
-}
-
-func TestK8sDR_UnRegister(t *testing.T) {
- type fields struct {
- options []grpc.DialOption
- rpcRegisterAddr string
- }
- tests := []struct {
- name string
- fields fields
- wantErr bool
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cli := &K8sDR{
- options: tt.fields.options,
- rpcRegisterAddr: tt.fields.rpcRegisterAddr,
- }
- if err := cli.UnRegister(); (err != nil) != tt.wantErr {
- t.Errorf("K8sDR.UnRegister() error = %v, wantErr %v", err, tt.wantErr)
- }
- })
- }
-}
-
-func TestK8sDR_CreateRpcRootNodes(t *testing.T) {
- type fields struct {
- options []grpc.DialOption
- rpcRegisterAddr string
- }
- type args struct {
- serviceNames []string
- }
- tests := []struct {
- name string
- fields fields
- args args
- wantErr bool
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cli := &K8sDR{
- options: tt.fields.options,
- rpcRegisterAddr: tt.fields.rpcRegisterAddr,
- }
- if err := cli.CreateRpcRootNodes(tt.args.serviceNames); (err != nil) != tt.wantErr {
- t.Errorf("K8sDR.CreateRpcRootNodes() error = %v, wantErr %v", err, tt.wantErr)
- }
- })
- }
-}
-
-func TestK8sDR_RegisterConf2Registry(t *testing.T) {
- type fields struct {
- options []grpc.DialOption
- rpcRegisterAddr string
- }
- type args struct {
- key string
- conf []byte
- }
- tests := []struct {
- name string
- fields fields
- args args
- wantErr bool
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cli := &K8sDR{
- options: tt.fields.options,
- rpcRegisterAddr: tt.fields.rpcRegisterAddr,
- }
- if err := cli.RegisterConf2Registry(tt.args.key, tt.args.conf); (err != nil) != tt.wantErr {
- t.Errorf("K8sDR.RegisterConf2Registry() error = %v, wantErr %v", err, tt.wantErr)
- }
- })
- }
-}
-
-func TestK8sDR_GetConfFromRegistry(t *testing.T) {
- type fields struct {
- options []grpc.DialOption
- rpcRegisterAddr string
- }
- type args struct {
- key string
- }
- tests := []struct {
- name string
- fields fields
- args args
- want []byte
- wantErr bool
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cli := &K8sDR{
- options: tt.fields.options,
- rpcRegisterAddr: tt.fields.rpcRegisterAddr,
- }
- got, err := cli.GetConfFromRegistry(tt.args.key)
- if (err != nil) != tt.wantErr {
- t.Errorf("K8sDR.GetConfFromRegistry() error = %v, wantErr %v", err, tt.wantErr)
- return
- }
- if !reflect.DeepEqual(got, tt.want) {
- t.Errorf("K8sDR.GetConfFromRegistry() = %v, want %v", got, tt.want)
- }
- })
- }
-}
-
-func TestK8sDR_GetConns(t *testing.T) {
- type fields struct {
- options []grpc.DialOption
- rpcRegisterAddr string
- }
- type args struct {
- ctx context.Context
- serviceName string
- opts []grpc.DialOption
- }
- tests := []struct {
- name string
- fields fields
- args args
- want []*grpc.ClientConn
- wantErr bool
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cli := &K8sDR{
- options: tt.fields.options,
- rpcRegisterAddr: tt.fields.rpcRegisterAddr,
- }
- got, err := cli.GetConns(tt.args.ctx, tt.args.serviceName, tt.args.opts...)
- if (err != nil) != tt.wantErr {
- t.Errorf("K8sDR.GetConns() error = %v, wantErr %v", err, tt.wantErr)
- return
- }
- if !reflect.DeepEqual(got, tt.want) {
- t.Errorf("K8sDR.GetConns() = %v, want %v", got, tt.want)
- }
- })
- }
-}
-
-func TestK8sDR_GetConn(t *testing.T) {
- type fields struct {
- options []grpc.DialOption
- rpcRegisterAddr string
- }
- type args struct {
- ctx context.Context
- serviceName string
- opts []grpc.DialOption
- }
- tests := []struct {
- name string
- fields fields
- args args
- want *grpc.ClientConn
- wantErr bool
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cli := &K8sDR{
- options: tt.fields.options,
- rpcRegisterAddr: tt.fields.rpcRegisterAddr,
- }
- got, err := cli.GetConn(tt.args.ctx, tt.args.serviceName, tt.args.opts...)
- if (err != nil) != tt.wantErr {
- t.Errorf("K8sDR.GetConn() error = %v, wantErr %v", err, tt.wantErr)
- return
- }
- if !reflect.DeepEqual(got, tt.want) {
- t.Errorf("K8sDR.GetConn() = %v, want %v", got, tt.want)
- }
- })
- }
-}
-
-func TestK8sDR_GetSelfConnTarget(t *testing.T) {
- type fields struct {
- options []grpc.DialOption
- rpcRegisterAddr string
- }
- tests := []struct {
- name string
- fields fields
- want string
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cli := &K8sDR{
- options: tt.fields.options,
- rpcRegisterAddr: tt.fields.rpcRegisterAddr,
- }
- if got := cli.GetSelfConnTarget(); got != tt.want {
- t.Errorf("K8sDR.GetSelfConnTarget() = %v, want %v", got, tt.want)
- }
- })
- }
+func setupTestEnvironment() {
+ os.Setenv("ZOOKEEPER_SCHEMA", "openim")
+ os.Setenv("ZOOKEEPER_ADDRESS", "172.28.0.1")
+ os.Setenv("ZOOKEEPER_PORT", "12181")
+ os.Setenv("ZOOKEEPER_USERNAME", "")
+ os.Setenv("ZOOKEEPER_PASSWORD", "")
}
-func TestK8sDR_AddOption(t *testing.T) {
- type fields struct {
- options []grpc.DialOption
- rpcRegisterAddr string
- }
- type args struct {
- opts []grpc.DialOption
- }
- tests := []struct {
- name string
- fields fields
- args args
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cli := &K8sDR{
- options: tt.fields.options,
- rpcRegisterAddr: tt.fields.rpcRegisterAddr,
- }
- cli.AddOption(tt.args.opts...)
- })
- }
-}
+func TestNewDiscoveryRegister(t *testing.T) {
+ setupTestEnvironment()
-func TestK8sDR_CloseConn(t *testing.T) {
- type fields struct {
- options []grpc.DialOption
- rpcRegisterAddr string
- }
- type args struct {
- conn *grpc.ClientConn
- }
tests := []struct {
- name string
- fields fields
- args args
+ envType string
+ expectedError bool
+ expectedResult bool
}{
- // TODO: Add test cases.
+ {"zookeeper", false, true},
+ {"k8s", false, true}, // 假设 k8s 配置也已正确设置
+ {"direct", false, true},
+ {"invalid", true, false},
}
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cli := &K8sDR{
- options: tt.fields.options,
- rpcRegisterAddr: tt.fields.rpcRegisterAddr,
- }
- cli.CloseConn(tt.args.conn)
- })
- }
-}
-func TestK8sDR_GetClientLocalConns(t *testing.T) {
- type fields struct {
- options []grpc.DialOption
- rpcRegisterAddr string
- }
- tests := []struct {
- name string
- fields fields
- want map[string][]*grpc.ClientConn
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cli := &K8sDR{
- options: tt.fields.options,
- rpcRegisterAddr: tt.fields.rpcRegisterAddr,
- }
- if got := cli.GetClientLocalConns(); !reflect.DeepEqual(got, tt.want) {
- t.Errorf("K8sDR.GetClientLocalConns() = %v, want %v", got, tt.want)
- }
- })
- }
-}
+ for _, test := range tests {
+ client, err := NewDiscoveryRegister(test.envType)
-func TestK8sDR_Close(t *testing.T) {
- type fields struct {
- options []grpc.DialOption
- rpcRegisterAddr string
- }
- tests := []struct {
- name string
- fields fields
- }{
- // TODO: Add test cases.
- }
- for _, tt := range tests {
- t.Run(tt.name, func(t *testing.T) {
- cli := &K8sDR{
- options: tt.fields.options,
- rpcRegisterAddr: tt.fields.rpcRegisterAddr,
+ if test.expectedError {
+ assert.Error(t, err)
+ } else {
+ assert.NoError(t, err)
+ if test.expectedResult {
+ assert.Implements(t, (*discoveryregistry.SvcDiscoveryRegistry)(nil), client)
+ } else {
+ assert.Nil(t, client)
}
- cli.Close()
- })
+ }
}
}
diff --git a/pkg/common/discoveryregister/kubernetes/kubernetes.go b/pkg/common/discoveryregister/kubernetes/kubernetes.go
new file mode 100644
index 000000000..7c40399a3
--- /dev/null
+++ b/pkg/common/discoveryregister/kubernetes/kubernetes.go
@@ -0,0 +1,198 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package kubernetes
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "os"
+ "strconv"
+ "strings"
+
+ "github.com/stathat/consistent"
+
+ "google.golang.org/grpc"
+
+ "github.com/OpenIMSDK/tools/discoveryregistry"
+ "github.com/OpenIMSDK/tools/log"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/config"
+)
+
+// K8sDR represents the Kubernetes service discovery and registration client.
+type K8sDR struct {
+ options []grpc.DialOption
+ rpcRegisterAddr string
+ gatewayHostConsistent *consistent.Consistent
+}
+
+func NewK8sDiscoveryRegister() (discoveryregistry.SvcDiscoveryRegistry, error) {
+ gatewayConsistent := consistent.New()
+ gatewayHosts := getMsgGatewayHost(context.Background())
+ for _, v := range gatewayHosts {
+ gatewayConsistent.Add(v)
+ }
+ return &K8sDR{gatewayHostConsistent: gatewayConsistent}, nil
+}
+
+func (cli *K8sDR) Register(serviceName, host string, port int, opts ...grpc.DialOption) error {
+ if serviceName != config.Config.RpcRegisterName.OpenImMessageGatewayName {
+ cli.rpcRegisterAddr = serviceName
+ } else {
+ cli.rpcRegisterAddr = getSelfHost(context.Background())
+ }
+
+ return nil
+}
+
+func (cli *K8sDR) UnRegister() error {
+
+ return nil
+}
+
+func (cli *K8sDR) CreateRpcRootNodes(serviceNames []string) error {
+
+ return nil
+}
+
+func (cli *K8sDR) RegisterConf2Registry(key string, conf []byte) error {
+
+ return nil
+}
+
+func (cli *K8sDR) GetConfFromRegistry(key string) ([]byte, error) {
+
+ return nil, nil
+}
+func (cli *K8sDR) GetUserIdHashGatewayHost(ctx context.Context, userId string) (string, error) {
+ host, err := cli.gatewayHostConsistent.Get(userId)
+ if err != nil {
+ log.ZError(ctx, "GetUserIdHashGatewayHost error", err)
+ }
+ return host, err
+}
+func getSelfHost(ctx context.Context) string {
+ port := 88
+ instance := "openimserver"
+ selfPodName := os.Getenv("MY_POD_NAME")
+ ns := os.Getenv("MY_POD_NAMESPACE")
+ statefuleIndex := 0
+ gatewayEnds := strings.Split(config.Config.RpcRegisterName.OpenImMessageGatewayName, ":")
+ if len(gatewayEnds) != 2 {
+ log.ZError(ctx, "msggateway RpcRegisterName is error:config.Config.RpcRegisterName.OpenImMessageGatewayName", errors.New("config error"))
+ } else {
+ port, _ = strconv.Atoi(gatewayEnds[1])
+ }
+ podInfo := strings.Split(selfPodName, "-")
+ instance = podInfo[0]
+ count := len(podInfo)
+ statefuleIndex, _ = strconv.Atoi(podInfo[count-1])
+ host := fmt.Sprintf("%s-openim-msggateway-%d.%s-openim-msggateway-headless.%s.svc.cluster.local:%d", instance, statefuleIndex, instance, ns, port)
+ return host
+}
+
+// like openimserver-openim-msggateway-0.openimserver-openim-msggateway-headless.openim-lin.svc.cluster.local:88.
+func getMsgGatewayHost(ctx context.Context) []string {
+ port := 88
+ instance := "openimserver"
+ selfPodName := os.Getenv("MY_POD_NAME")
+ replicas := os.Getenv("MY_MSGGATEWAY_REPLICACOUNT")
+ ns := os.Getenv("MY_POD_NAMESPACE")
+ gatewayEnds := strings.Split(config.Config.RpcRegisterName.OpenImMessageGatewayName, ":")
+ if len(gatewayEnds) != 2 {
+ log.ZError(ctx, "msggateway RpcRegisterName is error:config.Config.RpcRegisterName.OpenImMessageGatewayName", errors.New("config error"))
+ } else {
+ port, _ = strconv.Atoi(gatewayEnds[1])
+ }
+ nReplicas, _ := strconv.Atoi(replicas)
+ podInfo := strings.Split(selfPodName, "-")
+ instance = podInfo[0]
+ var ret []string
+ for i := 0; i < nReplicas; i++ {
+ host := fmt.Sprintf("%s-openim-msggateway-%d.%s-openim-msggateway-headless.%s.svc.cluster.local:%d", instance, i, instance, ns, port)
+ ret = append(ret, host)
+ }
+ log.ZInfo(ctx, "getMsgGatewayHost", "instance", instance, "selfPodName", selfPodName, "replicas", replicas, "ns", ns, "ret", ret)
+ return ret
+}
+
+// GetConns returns the gRPC client connections to the specified service.
+func (cli *K8sDR) GetConns(ctx context.Context, serviceName string, opts ...grpc.DialOption) ([]*grpc.ClientConn, error) {
+
+ // This conditional checks if the serviceName is not the OpenImMessageGatewayName.
+ // It seems to handle a special case for the OpenImMessageGateway.
+ if serviceName != config.Config.RpcRegisterName.OpenImMessageGatewayName {
+ // DialContext creates a client connection to the given target (serviceName) using the specified context.
+ // 'cli.options' are likely default or common options for all connections in this struct.
+ // 'opts...' allows for additional gRPC dial options to be passed and used.
+ conn, err := grpc.DialContext(ctx, serviceName, append(cli.options, opts...)...)
+
+ // The function returns a slice of client connections with the new connection, or an error if occurred.
+ return []*grpc.ClientConn{conn}, err
+ } else {
+ // This block is executed if the serviceName is OpenImMessageGatewayName.
+ // 'ret' will accumulate the connections to return.
+ var ret []*grpc.ClientConn
+
+ // getMsgGatewayHost presumably retrieves hosts for the message gateway service.
+ // The context is passed, likely for cancellation and timeout control.
+ gatewayHosts := getMsgGatewayHost(ctx)
+
+ // Iterating over the retrieved gateway hosts.
+ for _, host := range gatewayHosts {
+ // Establishes a connection to each host.
+ // Again, appending cli.options with any additional opts provided.
+ conn, err := grpc.DialContext(ctx, host, append(cli.options, opts...)...)
+
+ // If there's an error while dialing any host, the function returns immediately with the error.
+ if err != nil {
+ return nil, err
+ } else {
+ // If the connection is successful, it is added to the 'ret' slice.
+ ret = append(ret, conn)
+ }
+ }
+ // After all hosts are processed, the slice of connections is returned.
+ return ret, nil
+ }
+}
+
+func (cli *K8sDR) GetConn(ctx context.Context, serviceName string, opts ...grpc.DialOption) (*grpc.ClientConn, error) {
+
+ return grpc.DialContext(ctx, serviceName, append(cli.options, opts...)...)
+}
+
+func (cli *K8sDR) GetSelfConnTarget() string {
+
+ return cli.rpcRegisterAddr
+}
+
+func (cli *K8sDR) AddOption(opts ...grpc.DialOption) {
+ cli.options = append(cli.options, opts...)
+}
+
+func (cli *K8sDR) CloseConn(conn *grpc.ClientConn) {
+ conn.Close()
+}
+
+// do not use this method to call rpc.
+func (cli *K8sDR) GetClientLocalConns() map[string][]*grpc.ClientConn {
+ fmt.Println("should not call this function!!!!!!!!!!!!!!!!!!!!!!!!!")
+ return nil
+}
+func (cli *K8sDR) Close() {
+ return
+}
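
The Kubernetes client addresses each msggateway replica through its headless-service DNS name. A standalone worked example of the host pattern that `getMsgGatewayHost` generates — instance, namespace, and port are illustrative values, not ones read from the cluster:

```go
package main

import "fmt"

func main() {
	instance, ns, port := "openimserver", "openim", 88
	for i := 0; i < 2; i++ {
		host := fmt.Sprintf("%s-openim-msggateway-%d.%s-openim-msggateway-headless.%s.svc.cluster.local:%d",
			instance, i, instance, ns, port)
		fmt.Println(host)
	}
	// openimserver-openim-msggateway-0.openimserver-openim-msggateway-headless.openim.svc.cluster.local:88
	// openimserver-openim-msggateway-1.openimserver-openim-msggateway-headless.openim.svc.cluster.local:88
}
```
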
diff --git a/pkg/common/discoveryregister/zookeeper/zookeeper.go b/pkg/common/discoveryregister/zookeeper/zookeeper.go
new file mode 100644
index 000000000..6e55b6b8b
--- /dev/null
+++ b/pkg/common/discoveryregister/zookeeper/zookeeper.go
@@ -0,0 +1,82 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package zookeeper
+
+import (
+ "fmt"
+ "os"
+ "strings"
+ "time"
+
+ "github.com/OpenIMSDK/tools/errs"
+
+ "github.com/OpenIMSDK/tools/discoveryregistry"
+ openkeeper "github.com/OpenIMSDK/tools/discoveryregistry/zookeeper"
+ "github.com/OpenIMSDK/tools/log"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/config"
+)
+
+// NewZookeeperDiscoveryRegister creates a new instance of ZookeeperDR for Zookeeper service discovery and registration.
+func NewZookeeperDiscoveryRegister() (discoveryregistry.SvcDiscoveryRegistry, error) {
+ schema := getEnv("ZOOKEEPER_SCHEMA", config.Config.Zookeeper.Schema)
+ zkAddr := getZkAddrFromEnv(config.Config.Zookeeper.ZkAddr)
+ username := getEnv("ZOOKEEPER_USERNAME", config.Config.Zookeeper.Username)
+ password := getEnv("ZOOKEEPER_PASSWORD", config.Config.Zookeeper.Password)
+
+ zk, err := openkeeper.NewClient(
+ zkAddr,
+ schema,
+ openkeeper.WithFreq(time.Hour),
+ openkeeper.WithUserNameAndPassword(username, password),
+ openkeeper.WithRoundRobin(),
+ openkeeper.WithTimeout(10),
+ openkeeper.WithLogger(log.NewZkLogger()),
+ )
+ if err != nil {
+ uriFormat := "address:%s, username:%s, password:%s, schema:%s."
+ errInfo := fmt.Sprintf(uriFormat,
+ config.Config.Zookeeper.ZkAddr,
+ config.Config.Zookeeper.Username,
+ config.Config.Zookeeper.Password,
+ config.Config.Zookeeper.Schema)
+ return nil, errs.Wrap(err, errInfo)
+ }
+ return zk, nil
+}
+
+// getEnv returns the value of an environment variable if it exists, otherwise it returns the fallback value.
+func getEnv(key, fallback string) string {
+ if value, exists := os.LookupEnv(key); exists {
+ return value
+ }
+ return fallback
+}
+
+// getZkAddrFromEnv returns the Zookeeper addresses combined from the ZOOKEEPER_ADDRESS and ZOOKEEPER_PORT environment variables.
+// If the environment variables are not set, it returns the fallback value.
+func getZkAddrFromEnv(fallback []string) []string {
+ address, addrExists := os.LookupEnv("ZOOKEEPER_ADDRESS")
+ port, portExists := os.LookupEnv("ZOOKEEPER_PORT")
+
+ if addrExists && portExists {
+ addresses := strings.Split(address, ",")
+ for i, addr := range addresses {
+ addresses[i] = addr + ":" + port
+ }
+ return addresses
+ }
+ return fallback
+}
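
`getZkAddrFromEnv` lets `ZOOKEEPER_ADDRESS` hold several comma-separated hosts that all share one `ZOOKEEPER_PORT`. A simplified standalone sketch of that assembly, using made-up addresses:

```go
package main

import (
	"fmt"
	"strings"
)

// zkAddrs is a simplified mirror of the helper above: combine each host with
// the shared port, or fall back to the configured address list when either
// environment value is missing.
func zkAddrs(address, port string, fallback []string) []string {
	if address == "" || port == "" {
		return fallback
	}
	addrs := strings.Split(address, ",")
	for i, a := range addrs {
		addrs[i] = a + ":" + port
	}
	return addrs
}

func main() {
	fmt.Println(zkAddrs("172.28.0.1,172.28.0.2", "12181", []string{"127.0.0.1:2181"}))
	// Output: [172.28.0.1:12181 172.28.0.2:12181]
}
```
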
diff --git a/pkg/common/ginprometheus/ginprometheus.go b/pkg/common/ginprometheus/ginprometheus.go
index a325595d6..1ee8f8e34 100644
--- a/pkg/common/ginprometheus/ginprometheus.go
+++ b/pkg/common/ginprometheus/ginprometheus.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package ginprometheus
import (
@@ -418,7 +432,7 @@ func computeApproximateRequestSize(r *http.Request) int {
}
s += len(r.Host)
- // r.Form and r.MultipartForm are assumed to be included in r.URL.
+ // r.FormData and r.MultipartForm are assumed to be included in r.URL.
if r.ContentLength != -1 {
s += int(r.ContentLength)
diff --git a/pkg/common/http/http_client.go b/pkg/common/http/http_client.go
index f0fde3099..a80d1c9a4 100644
--- a/pkg/common/http/http_client.go
+++ b/pkg/common/http/http_client.go
@@ -57,7 +57,7 @@ func Get(url string) (response []byte, err error) {
return body, nil
}
-func Post(ctx context.Context, url string, header map[string]string, data interface{}, timeout int) (content []byte, err error) {
+func Post(ctx context.Context, url string, header map[string]string, data any, timeout int) (content []byte, err error) {
if timeout > 0 {
var cancel func()
ctx, cancel = context.WithTimeout(ctx, time.Second*time.Duration(timeout))
@@ -96,7 +96,7 @@ func Post(ctx context.Context, url string, header map[string]string, data interf
return result, nil
}
-func PostReturn(ctx context.Context, url string, header map[string]string, input, output interface{}, timeOutSecond int) error {
+func PostReturn(ctx context.Context, url string, header map[string]string, input, output any, timeOutSecond int) error {
b, err := Post(ctx, url, header, input, timeOutSecond)
if err != nil {
return err
@@ -112,7 +112,6 @@ func callBackPostReturn(ctx context.Context, url, command string, input interfac
//v.Set(constant.CallbackCommand, command)
//url = url + "/" + v.Encode()
url = url + "/" + command
-
b, err := Post(ctx, url, nil, input, callbackConfig.CallbackTimeOut)
if err != nil {
if callbackConfig.CallbackFailedContinue != nil && *callbackConfig.CallbackFailedContinue {
@@ -121,13 +120,14 @@ func callBackPostReturn(ctx context.Context, url, command string, input interfac
}
return errs.ErrNetwork.Wrap(err.Error())
}
+ defer log.ZDebug(ctx, "callback", "data", string(b))
if err = json.Unmarshal(b, output); err != nil {
if callbackConfig.CallbackFailedContinue != nil && *callbackConfig.CallbackFailedContinue {
log.ZWarn(ctx, "callback failed but continue", err, "url", url)
return nil
}
- return errs.ErrData.Wrap(err.Error())
+ return errs.ErrData.WithDetail(err.Error() + "response format error")
}
return output.Parse()
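
For orientation, a rough, self-contained sketch of the call shape used by this helper: JSON-encode the input, apply an optional per-call timeout in seconds, and return the raw response body. The function name, URL, and error handling are simplified placeholders, not the package's actual implementation:

```go
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

// post is a simplified stand-in for the Post helper above.
func post(ctx context.Context, url string, header map[string]string, data any, timeout int) ([]byte, error) {
	if timeout > 0 {
		var cancel context.CancelFunc
		ctx, cancel = context.WithTimeout(ctx, time.Duration(timeout)*time.Second)
		defer cancel()
	}
	body, err := json.Marshal(data)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	for k, v := range header {
		req.Header.Set(k, v)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	out, err := post(context.Background(), "https://httpbin.org/post", nil, map[string]any{"hello": "world"}, 5)
	fmt.Println(len(out), err)
}
```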
diff --git a/pkg/common/http/http_client_test.go b/pkg/common/http/http_client_test.go
index 1735a3da7..5d2588673 100644
--- a/pkg/common/http/http_client_test.go
+++ b/pkg/common/http/http_client_test.go
@@ -54,7 +54,7 @@ func TestPost(t *testing.T) {
ctx context.Context
url string
header map[string]string
- data interface{}
+ data any
timeout int
}
tests := []struct {
@@ -84,8 +84,8 @@ func TestPostReturn(t *testing.T) {
ctx context.Context
url string
header map[string]string
- input interface{}
- output interface{}
+ input any
+ output any
timeOutSecond int
}
tests := []struct {
@@ -109,7 +109,7 @@ func Test_callBackPostReturn(t *testing.T) {
ctx context.Context
url string
command string
- input interface{}
+ input any
output callbackstruct.CallbackResp
callbackConfig config.CallBackConfig
}
diff --git a/pkg/common/kafka/consumer_group.go b/pkg/common/kafka/consumer_group.go
index 9abe262cf..3f444cc1f 100644
--- a/pkg/common/kafka/consumer_group.go
+++ b/pkg/common/kafka/consumer_group.go
@@ -17,11 +17,10 @@ package kafka
import (
"context"
"errors"
-
"github.com/IBM/sarama"
-
+ "strings"
+ "github.com/OpenIMSDK/tools/errs"
"github.com/OpenIMSDK/tools/log"
-
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
)
@@ -40,7 +39,7 @@ type MConsumerGroupConfig struct {
IsReturnErr bool
}
-func NewMConsumerGroup(consumerConfig *MConsumerGroupConfig, topics, addrs []string, groupID string) *MConsumerGroup {
+func NewMConsumerGroup(consumerConfig *MConsumerGroupConfig, topics, addrs []string, groupID string) (*MConsumerGroup, error) {
consumerGroupConfig := sarama.NewConfig()
consumerGroupConfig.Version = consumerConfig.KafkaVersion
consumerGroupConfig.Consumer.Offsets.Initial = consumerConfig.OffsetsInitial
@@ -53,7 +52,7 @@ func NewMConsumerGroup(consumerConfig *MConsumerGroupConfig, topics, addrs []str
SetupTLSConfig(consumerGroupConfig)
consumerGroup, err := sarama.NewConsumerGroup(addrs, groupID, consumerGroupConfig)
if err != nil {
- panic(err.Error())
+ return nil, errs.Wrap(err, strings.Join(topics, ","), strings.Join(addrs, ","), groupID, config.Config.Kafka.Username, config.Config.Kafka.Password)
}
ctx, cancel := context.WithCancel(context.Background())
@@ -62,14 +61,14 @@ func NewMConsumerGroup(consumerConfig *MConsumerGroupConfig, topics, addrs []str
consumerGroup,
groupID,
topics,
- }
+ }, nil
}
func (mc *MConsumerGroup) GetContextFromMsg(cMsg *sarama.ConsumerMessage) context.Context {
return GetContextWithMQHeader(cMsg.Headers)
}
-func (mc *MConsumerGroup) RegisterHandleAndConsumer(handler sarama.ConsumerGroupHandler) {
+func (mc *MConsumerGroup) RegisterHandleAndConsumer(ctx context.Context, handler sarama.ConsumerGroupHandler) {
log.ZDebug(context.Background(), "register consumer group", "groupID", mc.groupID)
for {
err := mc.ConsumerGroup.Consume(mc.ctx, mc.topics, handler)
@@ -81,7 +80,9 @@ func (mc *MConsumerGroup) RegisterHandleAndConsumer(handler sarama.ConsumerGroup
}
if err != nil {
- log.ZError(context.Background(), "kafka consume error", err)
+ log.ZWarn(ctx, "consume err", err, "topic", mc.topics, "groupID", mc.groupID)
+ }
+ if ctx.Err() != nil {
return
}
}
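
A hypothetical call site for the revised consumer-group API: construction now reports failure through an error instead of panicking, and `RegisterHandleAndConsumer` returns once the supplied context is cancelled. The topic, broker address, group ID, and handler are placeholders:

```go
package main

import (
	"context"

	"github.com/IBM/sarama"
	"github.com/openimsdk/open-im-server/v3/pkg/common/kafka"
)

type noopHandler struct{}

func (noopHandler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (noopHandler) Cleanup(sarama.ConsumerGroupSession) error { return nil }
func (noopHandler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		sess.MarkMessage(msg, "") // acknowledge and move on
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	cg, err := kafka.NewMConsumerGroup(&kafka.MConsumerGroupConfig{
		KafkaVersion:   sarama.V2_0_0_0,
		OffsetsInitial: sarama.OffsetNewest,
		IsReturnErr:    false,
	}, []string{"example-topic"}, []string{"127.0.0.1:9092"}, "example-group")
	if err != nil {
		panic(err) // construction now surfaces the failure to the caller
	}
	cg.RegisterHandleAndConsumer(ctx, noopHandler{}) // blocks until ctx is cancelled
}
```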
diff --git a/pkg/common/kafka/producer.go b/pkg/common/kafka/producer.go
index 1dad33f9c..417aadb54 100644
--- a/pkg/common/kafka/producer.go
+++ b/pkg/common/kafka/producer.go
@@ -21,86 +21,109 @@ import (
"strings"
"time"
+ "github.com/OpenIMSDK/tools/errs"
+
+ "github.com/IBM/sarama"
"github.com/OpenIMSDK/protocol/constant"
- log "github.com/OpenIMSDK/tools/log"
+ "github.com/OpenIMSDK/tools/log"
"github.com/OpenIMSDK/tools/mcontext"
"github.com/OpenIMSDK/tools/utils"
+ "google.golang.org/protobuf/proto"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
-
- "github.com/IBM/sarama"
- "google.golang.org/protobuf/proto"
)
-const (
- maxRetry = 10 // number of retries
-)
+const maxRetry = 10 // number of retries
-var errEmptyMsg = errors.New("binary msg is empty")
+var errEmptyMsg = errors.New("kafka binary msg is empty")
+// Producer represents a Kafka producer.
type Producer struct {
- topic string
addr []string
+ topic string
config *sarama.Config
producer sarama.SyncProducer
}
-// NewKafkaProducer Initialize kafka producer.
-func NewKafkaProducer(addr []string, topic string) *Producer {
- p := Producer{}
- p.config = sarama.NewConfig() // Instantiate a sarama Config
- p.config.Producer.Return.Successes = true // Whether to enable the successes channel to be notified after the message is sent successfully
+// NewKafkaProducer initializes a new Kafka producer.
+func NewKafkaProducer(addr []string, topic string) (*Producer, error) {
+ p := Producer{
+ addr: addr,
+ topic: topic,
+ config: sarama.NewConfig(),
+ }
+
+ // Set producer return flags
+ p.config.Producer.Return.Successes = true
p.config.Producer.Return.Errors = true
- p.config.Producer.Partitioner = sarama.NewHashPartitioner // Set the hash-key automatic hash partition. When sending a message, you must specify the key value of the message. If there is no key, the partition will be selected randomly
- var producerAck = sarama.WaitForAll // default: WaitForAll
- switch strings.ToLower(config.Config.Kafka.ProducerAck) {
- case "no_response":
- producerAck = sarama.NoResponse
- case "wait_for_local":
- producerAck = sarama.WaitForLocal
- case "wait_for_all":
- producerAck = sarama.WaitForAll
- }
- p.config.Producer.RequiredAcks = producerAck
+ // Set partitioner strategy
+ p.config.Producer.Partitioner = sarama.NewHashPartitioner
- var compress = sarama.CompressionNone // default: no compress
- _ = compress.UnmarshalText(bytes.ToLower([]byte(config.Config.Kafka.CompressType)))
- p.config.Producer.Compression = compress
+ // Configure producer acknowledgement level
+ configureProducerAck(&p, config.Config.Kafka.ProducerAck)
+
+ // Configure message compression
+ configureCompression(&p, config.Config.Kafka.CompressType)
+
+ // Get Kafka configuration from environment variables or fallback to config file
+ kafkaUsername := getEnvOrConfig("KAFKA_USERNAME", config.Config.Kafka.Username)
+ kafkaPassword := getEnvOrConfig("KAFKA_PASSWORD", config.Config.Kafka.Password)
+	kafkaAddr := getKafkaAddrFromEnv(addr) // KAFKA_ADDRESS/KAFKA_PORT env vars, when set, override the configured addresses
- if config.Config.Kafka.Username != "" && config.Config.Kafka.Password != "" {
+ // Configure SASL authentication if credentials are provided
+ if kafkaUsername != "" && kafkaPassword != "" {
p.config.Net.SASL.Enable = true
- p.config.Net.SASL.User = config.Config.Kafka.Username
- p.config.Net.SASL.Password = config.Config.Kafka.Password
+ p.config.Net.SASL.User = kafkaUsername
+ p.config.Net.SASL.Password = kafkaPassword
}
- p.addr = addr
- p.topic = topic
+
+ // Set the Kafka address
+ p.addr = kafkaAddr
+
+ // Set up TLS configuration (if required)
SetupTLSConfig(p.config)
- var producer sarama.SyncProducer
+
+ // Create the producer with retries
var err error
for i := 0; i <= maxRetry; i++ {
- producer, err = sarama.NewSyncProducer(p.addr, p.config) // Initialize the client
+ p.producer, err = sarama.NewSyncProducer(p.addr, p.config)
if err == nil {
- p.producer = producer
- return &p
+ return &p, nil
}
- //TODO If the password is wrong, exit directly
- //if packetErr, ok := err.(*sarama.PacketEncodingError); ok {
- //if _, ok := packetErr.Err.(sarama.AuthenticationError); ok {
- // fmt.Println("Kafka password is wrong.")
- //}
- //} else {
- // fmt.Printf("Failed to create Kafka producer: %v\n", err)
- //}
- time.Sleep(time.Duration(1) * time.Second)
+ time.Sleep(1 * time.Second) // Wait before retrying
}
+
+	// Return an error if the producer could not be created after all retries
if err != nil {
- panic(err.Error())
+ return nil, errs.Wrap(errors.New("failed to create Kafka producer: " + err.Error()))
+ }
+
+ return &p, nil
+}
+
+// configureProducerAck configures the producer's acknowledgement level.
+func configureProducerAck(p *Producer, ackConfig string) {
+ switch strings.ToLower(ackConfig) {
+ case "no_response":
+ p.config.Producer.RequiredAcks = sarama.NoResponse
+ case "wait_for_local":
+ p.config.Producer.RequiredAcks = sarama.WaitForLocal
+ case "wait_for_all":
+ p.config.Producer.RequiredAcks = sarama.WaitForAll
+ default:
+ p.config.Producer.RequiredAcks = sarama.WaitForAll
}
- p.producer = producer
- return &p
}
+// configureCompression configures the message compression type for the producer.
+func configureCompression(p *Producer, compressType string) {
+ var compress sarama.CompressionCodec = sarama.CompressionNone
+ compress.UnmarshalText(bytes.ToLower([]byte(compressType)))
+ p.config.Producer.Compression = compress
+}
+
+// GetMQHeaderWithContext extracts message queue headers from the context.
func GetMQHeaderWithContext(ctx context.Context) ([]sarama.RecordHeader, error) {
operationID, opUserID, platform, connID, err := mcontext.GetCtxInfos(ctx)
if err != nil {
@@ -111,22 +134,23 @@ func GetMQHeaderWithContext(ctx context.Context) ([]sarama.RecordHeader, error)
{Key: []byte(constant.OpUserID), Value: []byte(opUserID)},
{Key: []byte(constant.OpUserPlatform), Value: []byte(platform)},
{Key: []byte(constant.ConnID), Value: []byte(connID)},
- }, err
+ }, nil
}
+// GetContextWithMQHeader creates a context from message queue headers.
func GetContextWithMQHeader(header []*sarama.RecordHeader) context.Context {
var values []string
for _, recordHeader := range header {
values = append(values, string(recordHeader.Value))
}
- return mcontext.WithMustInfoCtx(values) // TODO
+ return mcontext.WithMustInfoCtx(values) // Attach extracted values to context
}
+// SendMessage sends a message to the Kafka topic configured in the Producer.
func (p *Producer) SendMessage(ctx context.Context, key string, msg proto.Message) (int32, int64, error) {
log.ZDebug(ctx, "SendMessage", "msg", msg, "topic", p.topic, "key", key)
- kMsg := &sarama.ProducerMessage{}
- kMsg.Topic = p.topic
- kMsg.Key = sarama.StringEncoder(key)
+
+ // Marshal the protobuf message
bMsg, err := proto.Marshal(msg)
if err != nil {
return 0, 0, utils.Wrap(err, "kafka proto Marshal err")
@@ -134,20 +158,33 @@ func (p *Producer) SendMessage(ctx context.Context, key string, msg proto.Messag
if len(bMsg) == 0 {
return 0, 0, utils.Wrap(errEmptyMsg, "")
}
- kMsg.Value = sarama.ByteEncoder(bMsg)
+
+ // Prepare Kafka message
+ kMsg := &sarama.ProducerMessage{
+ Topic: p.topic,
+ Key: sarama.StringEncoder(key),
+ Value: sarama.ByteEncoder(bMsg),
+ }
+
+ // Validate message key and value
if kMsg.Key.Length() == 0 || kMsg.Value.Length() == 0 {
return 0, 0, utils.Wrap(errEmptyMsg, "")
}
- kMsg.Metadata = ctx
+
+ // Attach context metadata as headers
header, err := GetMQHeaderWithContext(ctx)
if err != nil {
return 0, 0, utils.Wrap(err, "")
}
kMsg.Headers = header
+
+ // Send the message
partition, offset, err := p.producer.SendMessage(kMsg)
- log.ZDebug(ctx, "ByteEncoder SendMessage end", "key ", kMsg.Key, "key length", kMsg.Value.Length())
if err != nil {
log.ZWarn(ctx, "p.producer.SendMessage error", err)
+ return 0, 0, utils.Wrap(err, "")
}
- return partition, offset, utils.Wrap(err, "")
+
+ log.ZDebug(ctx, "ByteEncoder SendMessage end", "key", kMsg.Key, "key length", kMsg.Value.Length())
+ return partition, offset, nil
}
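
A hypothetical caller of the new constructor, which surfaces connection failures as an error after its internal retries instead of panicking. The broker address, topic, and message fields are placeholders; in the real call path the context carries the operation metadata that `SendMessage` turns into Kafka headers:

```go
package main

import (
	"context"
	"fmt"

	"github.com/OpenIMSDK/protocol/sdkws"
	"github.com/openimsdk/open-im-server/v3/pkg/common/kafka"
)

func main() {
	producer, err := kafka.NewKafkaProducer([]string{"127.0.0.1:9092"}, "example-topic")
	if err != nil {
		panic(err) // the caller now decides how to handle the failure
	}
	// SendMessage hashes the key to pick a partition and attaches
	// operationID/opUserID headers extracted from the context, so a real
	// caller passes a context populated by the middleware, not Background().
	partition, offset, err := producer.SendMessage(context.Background(), "conversationID", &sdkws.MsgData{SendID: "u1", RecvID: "u2"})
	fmt.Println(partition, offset, err)
}
```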
diff --git a/pkg/common/kafka/util.go b/pkg/common/kafka/util.go
index 722205865..f318ecf73 100644
--- a/pkg/common/kafka/util.go
+++ b/pkg/common/kafka/util.go
@@ -15,6 +15,10 @@
package kafka
import (
+ "fmt"
+ "os"
+ "strings"
+
"github.com/IBM/sarama"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
@@ -33,3 +37,29 @@ func SetupTLSConfig(cfg *sarama.Config) {
)
}
}
+
+// getEnvOrConfig returns the value of the environment variable if it exists,
+// otherwise, it returns the value from the configuration file.
+func getEnvOrConfig(envName string, configValue string) string {
+ if value, exists := os.LookupEnv(envName); exists {
+ return value
+ }
+ return configValue
+}
+
+// getKafkaAddrFromEnv returns the Kafka addresses combined from the KAFKA_ADDRESS and KAFKA_PORT environment variables.
+// If the environment variables are not set, it returns the fallback value.
+func getKafkaAddrFromEnv(fallback []string) []string {
+ envAddr := os.Getenv("KAFKA_ADDRESS")
+ envPort := os.Getenv("KAFKA_PORT")
+
+ if envAddr != "" && envPort != "" {
+ addresses := strings.Split(envAddr, ",")
+ for i, addr := range addresses {
+ addresses[i] = fmt.Sprintf("%s:%s", addr, envPort)
+ }
+ return addresses
+ }
+
+ return fallback
+}
diff --git a/pkg/common/prommetrics/gin_api.go b/pkg/common/prommetrics/gin_api.go
index 7cd82dad2..9f2e4c99d 100644
--- a/pkg/common/prommetrics/gin_api.go
+++ b/pkg/common/prommetrics/gin_api.go
@@ -1,13 +1,27 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package prommetrics
-import ginProm "github.com/openimsdk/open-im-server/v3/pkg/common/ginprometheus"
+import ginprom "github.com/openimsdk/open-im-server/v3/pkg/common/ginprometheus"
/*
labels := prometheus.Labels{"label_one": "any", "label_two": "value"}
-ApiCustomCnt.MetricCollector.(*prometheus.CounterVec).With(labels).Inc()
+ApiCustomCnt.MetricCollector.(*prometheus.CounterVec).With(labels).Inc().
*/
var (
- ApiCustomCnt = &ginProm.Metric{
+ ApiCustomCnt = &ginprom.Metric{
Name: "custom_total",
Description: "Custom counter events.",
Type: "counter_vec",
diff --git a/pkg/common/prommetrics/grpc_auth.go b/pkg/common/prommetrics/grpc_auth.go
index e44c146be..30dd5f1b1 100644
--- a/pkg/common/prommetrics/grpc_auth.go
+++ b/pkg/common/prommetrics/grpc_auth.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package prommetrics
import (
diff --git a/pkg/common/prommetrics/grpc_msg.go b/pkg/common/prommetrics/grpc_msg.go
index 88d4ef3ce..758879b90 100644
--- a/pkg/common/prommetrics/grpc_msg.go
+++ b/pkg/common/prommetrics/grpc_msg.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package prommetrics
import (
diff --git a/pkg/common/prommetrics/grpc_msggateway.go b/pkg/common/prommetrics/grpc_msggateway.go
index bb62426e1..98d5a3267 100644
--- a/pkg/common/prommetrics/grpc_msggateway.go
+++ b/pkg/common/prommetrics/grpc_msggateway.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package prommetrics
import (
diff --git a/pkg/common/prommetrics/grpc_push.go b/pkg/common/prommetrics/grpc_push.go
index aa5085c2c..0b6c3e76f 100644
--- a/pkg/common/prommetrics/grpc_push.go
+++ b/pkg/common/prommetrics/grpc_push.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package prommetrics
import (
diff --git a/pkg/common/prommetrics/prommetrics.go b/pkg/common/prommetrics/prommetrics.go
index 26b02b16f..b7c5e07f4 100644
--- a/pkg/common/prommetrics/prommetrics.go
+++ b/pkg/common/prommetrics/prommetrics.go
@@ -1,7 +1,21 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package prommetrics
import (
- grpc_prometheus "github.com/grpc-ecosystem/go-grpc-prometheus"
+ gp "github.com/grpc-ecosystem/go-grpc-prometheus"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/collectors"
@@ -9,10 +23,10 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/ginprometheus"
)
-func NewGrpcPromObj(cusMetrics []prometheus.Collector) (*prometheus.Registry, *grpc_prometheus.ServerMetrics, error) {
+func NewGrpcPromObj(cusMetrics []prometheus.Collector) (*prometheus.Registry, *gp.ServerMetrics, error) {
////////////////////////////////////////////////////////
reg := prometheus.NewRegistry()
- grpcMetrics := grpc_prometheus.NewServerMetrics()
+ grpcMetrics := gp.NewServerMetrics()
grpcMetrics.EnableHandlingTimeHistogram()
cusMetrics = append(cusMetrics, grpcMetrics, collectors.NewGoCollector())
reg.MustRegister(cusMetrics...)
diff --git a/pkg/common/prommetrics/prommetrics_test.go b/pkg/common/prommetrics/prommetrics_test.go
index babc5e410..1e48c63ba 100644
--- a/pkg/common/prommetrics/prommetrics_test.go
+++ b/pkg/common/prommetrics/prommetrics_test.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package prommetrics
import (
diff --git a/pkg/common/prommetrics/transfer.go b/pkg/common/prommetrics/transfer.go
index 6b03870b5..197b6f7fc 100644
--- a/pkg/common/prommetrics/transfer.go
+++ b/pkg/common/prommetrics/transfer.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package prommetrics
import (
diff --git a/pkg/common/startrpc/start.go b/pkg/common/startrpc/start.go
index 01076bbbb..f6cda2ffb 100644
--- a/pkg/common/startrpc/start.go
+++ b/pkg/common/startrpc/start.go
@@ -17,7 +17,6 @@ package startrpc
import (
"errors"
"fmt"
- "log"
"net"
"net/http"
"os"
@@ -27,6 +26,8 @@ import (
"syscall"
"time"
+ "github.com/OpenIMSDK/tools/errs"
+
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
"golang.org/x/sync/errgroup"
@@ -43,7 +44,6 @@ import (
"github.com/OpenIMSDK/tools/discoveryregistry"
"github.com/OpenIMSDK/tools/mw"
"github.com/OpenIMSDK/tools/network"
- "github.com/OpenIMSDK/tools/utils"
)
// Start rpc server.
@@ -61,20 +61,20 @@ func Start(
net.JoinHostPort(network.GetListenIP(config.Config.Rpc.ListenIP), strconv.Itoa(rpcPort)),
)
if err != nil {
- return err
+ return errs.Wrap(err, network.GetListenIP(config.Config.Rpc.ListenIP), strconv.Itoa(rpcPort))
}
defer listener.Close()
client, err := kdisc.NewDiscoveryRegister(config.Config.Envs.Discovery)
if err != nil {
- return utils.Wrap1(err)
+ return errs.Wrap(err)
}
defer client.Close()
- client.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()))
+ client.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"LoadBalancingPolicy": "%s"}`, "round_robin")))
registerIP, err := network.GetRpcRegisterIP(config.Config.Rpc.RegisterIP)
if err != nil {
- return err
+ return errs.Wrap(err)
}
var reg *prometheus.Registry
@@ -96,7 +96,7 @@ func Start(
err = rpcFn(client, srv)
if err != nil {
- return utils.Wrap1(err)
+ return err
}
err = client.Register(
rpcRegisterName,
@@ -105,7 +105,7 @@ func Start(
grpc.WithTransportCredentials(insecure.NewCredentials()),
)
if err != nil {
- return utils.Wrap1(err)
+ return errs.Wrap(err)
}
var wg errgroup.Group
@@ -116,14 +116,15 @@ func Start(
// Create a HTTP server for prometheus.
httpServer := &http.Server{Handler: promhttp.HandlerFor(reg, promhttp.HandlerOpts{}), Addr: fmt.Sprintf("0.0.0.0:%d", prometheusPort)}
if err := httpServer.ListenAndServe(); err != nil {
- log.Fatal("Unable to start a http server.")
+ fmt.Fprintf(os.Stderr, "\n\nexit -1: \n%+v PrometheusPort: %d \n\n", err, prometheusPort)
+ os.Exit(-1)
}
}
return nil
})
wg.Go(func() error {
- return utils.Wrap1(srv.Serve(listener))
+ return errs.Wrap(srv.Serve(listener))
})
sigs := make(chan os.Signal, 1)
@@ -146,7 +147,7 @@ func Start(
return gerr
case <-time.After(15 * time.Second):
- return utils.Wrap1(errors.New("timeout exit"))
+ return errs.Wrap(errors.New("timeout exit"))
}
}
diff --git a/pkg/common/startrpc/start_test.go b/pkg/common/startrpc/start_test.go
index 171cdb1c2..481986e15 100644
--- a/pkg/common/startrpc/start_test.go
+++ b/pkg/common/startrpc/start_test.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package startrpc
import (
diff --git a/pkg/common/version/base.go b/pkg/common/version/base.go
index ac214269f..9a656e03a 100644
--- a/pkg/common/version/base.go
+++ b/pkg/common/version/base.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package version
// Base version information.
@@ -15,7 +29,7 @@ package version
// When releasing a new Kubernetes version, this file is updated by
// build/mark_new_version.sh to reflect the new version, and then a
// git annotated tag (using format vX.Y where X == Major version and Y
-// == Minor version) is created to point to the commit that updates
+// == Minor version) is created to point to the commit that updates this file.
var (
// TODO: Deprecate gitMajor and gitMinor, use only gitVersion
// instead. First step in deprecation, keep the fields but make
diff --git a/pkg/common/version/types.go b/pkg/common/version/types.go
index ee4664149..da9c1ed90 100644
--- a/pkg/common/version/types.go
+++ b/pkg/common/version/types.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package version
// Info contains versioning information.
diff --git a/pkg/common/version/version.go b/pkg/common/version/version.go
index b8ccfaf81..3b271b3f6 100644
--- a/pkg/common/version/version.go
+++ b/pkg/common/version/version.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package version
import (
@@ -25,7 +39,7 @@ func Get() Info {
}
}
-// GetClientVersion returns the git version of the OpenIM client repository
+// GetClientVersion returns the git version of the OpenIM client repository.
func GetClientVersion() (*OpenIMClientVersion, error) {
clientVersion, err := getClientVersion()
if err != nil {
@@ -52,7 +66,7 @@ func getClientVersion() (string, error) {
return ref.Hash().String(), nil
}
-// GetSingleVersion returns single version of sealer
+// GetSingleVersion returns single version of sealer.
func GetSingleVersion() string {
return gitVersion
}
diff --git a/pkg/msgprocessor/conversation.go b/pkg/msgprocessor/conversation.go
index 56255f37c..7477bea7a 100644
--- a/pkg/msgprocessor/conversation.go
+++ b/pkg/msgprocessor/conversation.go
@@ -52,6 +52,7 @@ func GetChatConversationIDByMsg(msg *sdkws.MsgData) string {
case constant.NotificationChatType:
return "sn_" + msg.SendID + "_" + msg.RecvID
}
+
return ""
}
diff --git a/pkg/msgprocessor/options.go b/pkg/msgprocessor/options.go
index c17c7cb05..c6e209b98 100644
--- a/pkg/msgprocessor/options.go
+++ b/pkg/msgprocessor/options.go
@@ -30,14 +30,14 @@ func NewOptions(opts ...OptionsOpt) Options {
options[constant.IsOfflinePush] = false
options[constant.IsUnreadCount] = false
options[constant.IsConversationUpdate] = false
- options[constant.IsSenderSync] = false
+ options[constant.IsSenderSync] = true
options[constant.IsNotPrivate] = false
options[constant.IsSenderConversationUpdate] = false
- options[constant.IsSenderNotificationPush] = false
options[constant.IsReactionFromCache] = false
for _, opt := range opts {
opt(options)
}
+
return options
}
@@ -114,12 +114,6 @@ func WithSenderConversationUpdate() OptionsOpt {
}
}
-func WithSenderNotificationPush() OptionsOpt {
- return func(options Options) {
- options[constant.IsSenderNotificationPush] = true
- }
-}
-
func WithReactionFromCache() OptionsOpt {
return func(options Options) {
options[constant.IsReactionFromCache] = true
@@ -174,10 +168,6 @@ func (o Options) IsSenderConversationUpdate() bool {
return o.Is(constant.IsSenderConversationUpdate)
}
-func (o Options) IsSenderNotificationPush() bool {
- return o.Is(constant.IsSenderNotificationPush)
-}
-
func (o Options) IsReactionFromCache() bool {
return o.Is(constant.IsReactionFromCache)
}
diff --git a/pkg/rpcclient/grouphash/grouphash.go b/pkg/rpcclient/grouphash/grouphash.go
new file mode 100644
index 000000000..dee47ad44
--- /dev/null
+++ b/pkg/rpcclient/grouphash/grouphash.go
@@ -0,0 +1,102 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package grouphash
+
+import (
+ "context"
+ "crypto/md5"
+ "encoding/binary"
+ "encoding/json"
+
+ "github.com/OpenIMSDK/protocol/group"
+ "github.com/OpenIMSDK/protocol/sdkws"
+ "github.com/OpenIMSDK/tools/utils"
+)
+
+func NewGroupHashFromGroupClient(x group.GroupClient) *GroupHash {
+ return &GroupHash{
+ getGroupAllUserIDs: func(ctx context.Context, groupID string) ([]string, error) {
+ resp, err := x.GetGroupMemberUserIDs(ctx, &group.GetGroupMemberUserIDsReq{GroupID: groupID})
+ if err != nil {
+ return nil, err
+ }
+ return resp.UserIDs, nil
+ },
+ getGroupMemberInfo: func(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
+ resp, err := x.GetGroupMembersInfo(ctx, &group.GetGroupMembersInfoReq{GroupID: groupID, UserIDs: userIDs})
+ if err != nil {
+ return nil, err
+ }
+ return resp.Members, nil
+ },
+ }
+}
+
+func NewGroupHashFromGroupServer(x group.GroupServer) *GroupHash {
+ return &GroupHash{
+ getGroupAllUserIDs: func(ctx context.Context, groupID string) ([]string, error) {
+ resp, err := x.GetGroupMemberUserIDs(ctx, &group.GetGroupMemberUserIDsReq{GroupID: groupID})
+ if err != nil {
+ return nil, err
+ }
+ return resp.UserIDs, nil
+ },
+ getGroupMemberInfo: func(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
+ resp, err := x.GetGroupMembersInfo(ctx, &group.GetGroupMembersInfoReq{GroupID: groupID, UserIDs: userIDs})
+ if err != nil {
+ return nil, err
+ }
+ return resp.Members, nil
+ },
+ }
+}
+
+type GroupHash struct {
+ getGroupAllUserIDs func(ctx context.Context, groupID string) ([]string, error)
+ getGroupMemberInfo func(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error)
+}
+
+func (gh *GroupHash) GetGroupHash(ctx context.Context, groupID string) (uint64, error) {
+ userIDs, err := gh.getGroupAllUserIDs(ctx, groupID)
+ if err != nil {
+ return 0, err
+ }
+ var members []*sdkws.GroupMemberFullInfo
+ if len(userIDs) > 0 {
+ members, err = gh.getGroupMemberInfo(ctx, groupID, userIDs)
+ if err != nil {
+ return 0, err
+ }
+ utils.Sort(userIDs, true)
+ }
+ memberMap := utils.SliceToMap(members, func(e *sdkws.GroupMemberFullInfo) string {
+ return e.UserID
+ })
+ res := make([]*sdkws.GroupMemberFullInfo, 0, len(members))
+ for _, userID := range userIDs {
+ member, ok := memberMap[userID]
+ if !ok {
+ continue
+ }
+ member.AppMangerLevel = 0
+ res = append(res, member)
+ }
+ data, err := json.Marshal(res)
+ if err != nil {
+ return 0, err
+ }
+ sum := md5.Sum(data)
+ return binary.BigEndian.Uint64(sum[:]), nil
+}
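
A hypothetical caller of the new helper: wrap whichever `group.GroupClient` the service already holds and compute the md5-derived hash of the sorted, normalized member list. The function and variable names outside the `grouphash` package are illustrative:

```go
package main

import (
	"context"
	"fmt"

	"github.com/OpenIMSDK/protocol/group"
	"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/grouphash"
)

// memberHash returns the member-list hash for one group via the RPC-backed GroupHash.
func memberHash(ctx context.Context, client group.GroupClient, groupID string) (uint64, error) {
	gh := grouphash.NewGroupHashFromGroupClient(client)
	return gh.GetGroupHash(ctx, groupID)
}

func main() {
	fmt.Println("wire a real group.GroupClient into memberHash to compute a hash")
	_ = memberHash
}
```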
diff --git a/pkg/rpcclient/msg.go b/pkg/rpcclient/msg.go
index 3b09b5062..56167d7f4 100644
--- a/pkg/rpcclient/msg.go
+++ b/pkg/rpcclient/msg.go
@@ -68,6 +68,7 @@ func newContentTypeConf() map[int32]config.NotificationConf {
constant.BlackAddedNotification: config.Config.Notification.BlackAdded,
constant.BlackDeletedNotification: config.Config.Notification.BlackDeleted,
constant.FriendInfoUpdatedNotification: config.Config.Notification.FriendInfoUpdated,
+ constant.FriendsInfoUpdateNotification: config.Config.Notification.FriendInfoUpdated, //use the same FriendInfoUpdated
// conversation
constant.ConversationChangeNotification: config.Config.Notification.ConversationChanged,
constant.ConversationUnreadNotification: config.Config.Notification.ConversationChanged,
@@ -115,6 +116,7 @@ func newSessionTypeConf() map[int32]int32 {
constant.BlackAddedNotification: constant.SingleChatType,
constant.BlackDeletedNotification: constant.SingleChatType,
constant.FriendInfoUpdatedNotification: constant.SingleChatType,
+ constant.FriendsInfoUpdateNotification: constant.SingleChatType,
// conversation
constant.ConversationChangeNotification: constant.SingleChatType,
constant.ConversationUnreadNotification: constant.SingleChatType,
@@ -155,6 +157,30 @@ func (m *MessageRpcClient) GetMaxSeq(ctx context.Context, req *sdkws.GetMaxSeqRe
return resp, err
}
+func (m *MessageRpcClient) GetMaxSeqs(ctx context.Context, conversationIDs []string) (map[string]int64, error) {
+	log.ZDebug(ctx, "GetMaxSeqs", "conversationIDs", conversationIDs)
+	resp, err := m.Client.GetMaxSeqs(ctx, &msg.GetMaxSeqsReq{
+		ConversationIDs: conversationIDs,
+	})
+	if err != nil {
+		return nil, err
+	}
+	return resp.MaxSeqs, nil
+}
+
+func (m *MessageRpcClient) GetHasReadSeqs(ctx context.Context, userID string, conversationIDs []string) (map[string]int64, error) {
+	resp, err := m.Client.GetHasReadSeqs(ctx, &msg.GetHasReadSeqsReq{
+		UserID:          userID,
+		ConversationIDs: conversationIDs,
+	})
+	if err != nil {
+		return nil, err
+	}
+	return resp.MaxSeqs, nil
+}
+
+func (m *MessageRpcClient) GetMsgByConversationIDs(ctx context.Context, docIDs []string, seqs map[string]int64) (map[string]*sdkws.MsgData, error) {
+	resp, err := m.Client.GetMsgByConversationIDs(ctx, &msg.GetMsgByConversationIDsReq{
+		ConversationIDs: docIDs,
+		MaxSeqs:         seqs,
+	})
+	if err != nil {
+		return nil, err
+	}
+	return resp.MsgDatas, nil
+}
+
func (m *MessageRpcClient) PullMessageBySeqList(ctx context.Context, req *sdkws.PullMessageBySeqsReq) (*sdkws.PullMessageBySeqsResp, error) {
resp, err := m.Client.PullMessageBySeqs(ctx, req)
return resp, err
@@ -256,6 +282,7 @@ func (s *NotificationSender) NotificationWithSesstionType(ctx context.Context, s
optionsConfig.ReliabilityLevel = constant.UnreliableNotification
}
options := config.GetOptionsByNotification(optionsConfig)
+ s.SetOptionsByContentType(ctx, options, contentType)
msg.Options = options
offlineInfo.Title = title
offlineInfo.Desc = desc
@@ -274,3 +301,11 @@ func (s *NotificationSender) NotificationWithSesstionType(ctx context.Context, s
func (s *NotificationSender) Notification(ctx context.Context, sendID, recvID string, contentType int32, m proto.Message, opts ...NotificationOptions) error {
return s.NotificationWithSesstionType(ctx, sendID, recvID, contentType, s.sessionTypeConf[contentType], m, opts...)
}
+
+func (s *NotificationSender) SetOptionsByContentType(_ context.Context, options map[string]bool, contentType int32) {
+ switch contentType {
+ case constant.UserStatusChangeNotification:
+ options[constant.IsSenderSync] = false
+ default:
+ }
+}
diff --git a/pkg/rpcclient/notification/friend.go b/pkg/rpcclient/notification/friend.go
index b061a24ae..b98a1d38e 100644
--- a/pkg/rpcclient/notification/friend.go
+++ b/pkg/rpcclient/notification/friend.go
@@ -196,7 +196,12 @@ func (f *FriendNotificationSender) FriendRemarkSetNotification(ctx context.Conte
tips.FromToUserID.ToUserID = toUserID
return f.Notification(ctx, fromUserID, toUserID, constant.FriendRemarkSetNotification, &tips)
}
-
+func (f *FriendNotificationSender) FriendsInfoUpdateNotification(ctx context.Context, toUserID string, friendIDs []string) error {
+ tips := sdkws.FriendsInfoUpdateTips{FromToUserID: &sdkws.FromToUserID{}}
+ tips.FromToUserID.ToUserID = toUserID
+ tips.FriendIDs = friendIDs
+ return f.Notification(ctx, toUserID, toUserID, constant.FriendsInfoUpdateNotification, &tips)
+}
func (f *FriendNotificationSender) BlackAddedNotification(ctx context.Context, req *pbfriend.AddBlackReq) error {
tips := sdkws.BlackAddedTips{FromToUserID: &sdkws.FromToUserID{}}
tips.FromToUserID.FromUserID = req.OwnerUserID
diff --git a/pkg/rpcclient/notification/group.go b/pkg/rpcclient/notification/group.go
index 8e71f61c3..8c3719b2c 100755
--- a/pkg/rpcclient/notification/group.go
+++ b/pkg/rpcclient/notification/group.go
@@ -52,6 +52,41 @@ type GroupNotificationSender struct {
db controller.GroupDatabase
}
+func (g *GroupNotificationSender) PopulateGroupMember(ctx context.Context, members ...*relation.GroupMemberModel) error {
+ if len(members) == 0 {
+ return nil
+ }
+ emptyUserIDs := make(map[string]struct{})
+ for _, member := range members {
+ if member.Nickname == "" || member.FaceURL == "" {
+ emptyUserIDs[member.UserID] = struct{}{}
+ }
+ }
+ if len(emptyUserIDs) > 0 {
+ users, err := g.getUsersInfo(ctx, utils.Keys(emptyUserIDs))
+ if err != nil {
+ return err
+ }
+ userMap := make(map[string]CommonUser)
+ for i, user := range users {
+ userMap[user.GetUserID()] = users[i]
+ }
+ for i, member := range members {
+ user, ok := userMap[member.UserID]
+ if !ok {
+ continue
+ }
+ if member.Nickname == "" {
+ members[i].Nickname = user.GetNickname()
+ }
+ if member.FaceURL == "" {
+ members[i].FaceURL = user.GetFaceURL()
+ }
+ }
+ }
+ return nil
+}
+
func (g *GroupNotificationSender) getUser(ctx context.Context, userID string) (*sdkws.PublicUserInfo, error) {
users, err := g.getUsersInfo(ctx, []string{userID})
if err != nil {
@@ -77,17 +112,21 @@ func (g *GroupNotificationSender) getGroupInfo(ctx context.Context, groupID stri
if err != nil {
return nil, err
}
- owner, err := g.db.TakeGroupOwner(ctx, groupID)
+ ownerUserIDs, err := g.db.GetGroupRoleLevelMemberIDs(ctx, groupID, constant.GroupOwner)
if err != nil {
return nil, err
}
+ var ownerUserID string
+ if len(ownerUserIDs) > 0 {
+ ownerUserID = ownerUserIDs[0]
+ }
return &sdkws.GroupInfo{
GroupID: gm.GroupID,
GroupName: gm.GroupName,
Notification: gm.Notification,
Introduction: gm.Introduction,
FaceURL: gm.FaceURL,
- OwnerUserID: owner.UserID,
+ OwnerUserID: ownerUserID,
CreateTime: gm.CreateTime.UnixMilli(),
MemberCount: num,
Ex: gm.Ex,
@@ -103,39 +142,18 @@ func (g *GroupNotificationSender) getGroupInfo(ctx context.Context, groupID stri
}
func (g *GroupNotificationSender) getGroupMembers(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
- members, err := g.db.FindGroupMember(ctx, []string{groupID}, userIDs, nil)
+ members, err := g.db.FindGroupMembers(ctx, groupID, userIDs)
if err != nil {
return nil, err
}
- log.ZDebug(ctx, "getGroupMembers", "members", members)
- users, err := g.getUsersInfoMap(ctx, userIDs)
- if err != nil {
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
return nil, err
}
- log.ZDebug(ctx, "getUsersInfoMap", "users", users)
+ log.ZDebug(ctx, "getGroupMembers", "members", members)
res := make([]*sdkws.GroupMemberFullInfo, 0, len(members))
for _, member := range members {
- user, ok := users[member.UserID]
- if !ok {
- return nil, errs.ErrUserIDNotFound.Wrap(fmt.Sprintf("group %s member %s not in user", member.GroupID, member.UserID))
- }
- if member.Nickname == "" {
- member.Nickname = user.Nickname
- }
- res = append(res, g.groupMemberDB2PB(member, user.AppMangerLevel))
- delete(users, member.UserID)
- }
- //for userID, info := range users {
- // if info.AppMangerLevel == constant.AppAdmin {
- // res = append(res, &sdkws.GroupMemberFullInfo{
- // GroupID: groupID,
- // UserID: userID,
- // Nickname: info.Nickname,
- // FaceURL: info.FaceURL,
- // AppMangerLevel: info.AppMangerLevel,
- // })
- // }
- //}
+ res = append(res, g.groupMemberDB2PB(member, 0))
+ }
return res, nil
}
@@ -163,10 +181,13 @@ func (g *GroupNotificationSender) getGroupMember(ctx context.Context, groupID st
}
func (g *GroupNotificationSender) getGroupOwnerAndAdminUserID(ctx context.Context, groupID string) ([]string, error) {
- members, err := g.db.FindGroupMember(ctx, []string{groupID}, nil, []int32{constant.GroupOwner, constant.GroupAdmin})
+ members, err := g.db.FindGroupMemberRoleLevels(ctx, groupID, []int32{constant.GroupOwner, constant.GroupAdmin})
if err != nil {
return nil, err
}
+ if err := g.PopulateGroupMember(ctx, members...); err != nil {
+ return nil, err
+ }
fn := func(e *relation.GroupMemberModel) string { return e.UserID }
return utils.Slice(members, fn), nil
}
@@ -388,11 +409,16 @@ func (g *GroupNotificationSender) GroupApplicationAcceptedNotification(ctx conte
if err != nil {
return err
}
- tips := &sdkws.GroupApplicationAcceptedTips{Group: group, HandleMsg: req.HandledMsg, ReceiverAs: 1}
+ tips := &sdkws.GroupApplicationAcceptedTips{Group: group, HandleMsg: req.HandledMsg}
if err := g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
return err
}
- for _, userID := range append(userIDs, mcontext.GetOpUserID(ctx)) {
+ for _, userID := range append(userIDs, req.FromUserID) {
+ if userID == req.FromUserID {
+ tips.ReceiverAs = 0
+ } else {
+ tips.ReceiverAs = 1
+ }
err = g.Notification(ctx, mcontext.GetOpUserID(ctx), userID, constant.GroupApplicationAcceptedNotification, tips)
if err != nil {
log.ZError(ctx, "failed", err)
@@ -420,7 +446,12 @@ func (g *GroupNotificationSender) GroupApplicationRejectedNotification(ctx conte
if err := g.fillOpUser(ctx, &tips.OpUser, tips.Group.GroupID); err != nil {
return err
}
- for _, userID := range append(userIDs, mcontext.GetOpUserID(ctx)) {
+ for _, userID := range append(userIDs, req.FromUserID) {
+ if userID == req.FromUserID {
+ tips.ReceiverAs = 0
+ } else {
+ tips.ReceiverAs = 1
+ }
err = g.Notification(ctx, mcontext.GetOpUserID(ctx), userID, constant.GroupApplicationRejectedNotification, tips)
if err != nil {
log.ZError(ctx, "failed", err)
diff --git a/pkg/rpcclient/notification/user.go b/pkg/rpcclient/notification/user.go
index 4feebf7b9..4347faece 100644
--- a/pkg/rpcclient/notification/user.go
+++ b/pkg/rpcclient/notification/user.go
@@ -103,3 +103,21 @@ func (u *UserNotificationSender) UserStatusChangeNotification(
) error {
return u.Notification(ctx, tips.FromUserID, tips.ToUserID, constant.UserStatusChangeNotification, tips)
}
+func (u *UserNotificationSender) UserCommandUpdateNotification(
+ ctx context.Context,
+ tips *sdkws.UserCommandUpdateTips,
+) error {
+ return u.Notification(ctx, tips.FromUserID, tips.ToUserID, constant.UserCommandUpdateNotification, tips)
+}
+func (u *UserNotificationSender) UserCommandAddNotification(
+ ctx context.Context,
+ tips *sdkws.UserCommandAddTips,
+) error {
+ return u.Notification(ctx, tips.FromUserID, tips.ToUserID, constant.UserCommandAddNotification, tips)
+}
+func (u *UserNotificationSender) UserCommandDeleteNotification(
+ ctx context.Context,
+ tips *sdkws.UserCommandDeleteTips,
+) error {
+ return u.Notification(ctx, tips.FromUserID, tips.ToUserID, constant.UserCommandDeleteNotification, tips)
+}
diff --git a/pkg/rpcclient/third.go b/pkg/rpcclient/third.go
index 48a537112..73d874005 100755
--- a/pkg/rpcclient/third.go
+++ b/pkg/rpcclient/third.go
@@ -20,7 +20,6 @@ import (
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
-
"google.golang.org/grpc"
"github.com/OpenIMSDK/protocol/third"
diff --git a/pkg/rpcclient/user.go b/pkg/rpcclient/user.go
index c40d95727..451914cd3 100644
--- a/pkg/rpcclient/user.go
+++ b/pkg/rpcclient/user.go
@@ -64,6 +64,9 @@ func NewUserRpcClient(client discoveryregistry.SvcDiscoveryRegistry) UserRpcClie
// GetUsersInfo retrieves information for multiple users based on their user IDs.
func (u *UserRpcClient) GetUsersInfo(ctx context.Context, userIDs []string) ([]*sdkws.UserInfo, error) {
+ if len(userIDs) == 0 {
+ return []*sdkws.UserInfo{}, nil
+ }
resp, err := u.Client.GetDesignateUsers(ctx, &user.GetDesignateUsersReq{
UserIDs: userIDs,
})
@@ -179,3 +182,10 @@ func (u *UserRpcClient) SetUserStatus(ctx context.Context, userID string, status
})
return err
}
+
+func (u *UserRpcClient) GetNotificationByID(ctx context.Context, userID string) error {
+ _, err := u.Client.GetNotificationAccount(ctx, &user.GetNotificationAccountReq{
+ UserID: userID,
+ })
+ return err
+}
diff --git a/pkg/util/flag/flag.go b/pkg/util/flag/flag.go
new file mode 100644
index 000000000..0a8e527ab
--- /dev/null
+++ b/pkg/util/flag/flag.go
@@ -0,0 +1,54 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package flag
+
+import (
+ "flag"
+ "log"
+ "strings"
+
+ "github.com/spf13/pflag"
+)
+
+// WordSepNormalizeFunc changes all flags that contain "_" separators.
+func WordSepNormalizeFunc(f *pflag.FlagSet, name string) pflag.NormalizedName {
+ if strings.Contains(name, "_") {
+ return pflag.NormalizedName(strings.ReplaceAll(name, "_", "-"))
+ }
+ return pflag.NormalizedName(name)
+}
+
+// WarnWordSepNormalizeFunc changes and warns for flags that contain "_" separators.
+func WarnWordSepNormalizeFunc(f *pflag.FlagSet, name string) pflag.NormalizedName {
+ if strings.Contains(name, "_") {
+ normalizedName := strings.ReplaceAll(name, "_", "-")
+ log.Printf("WARNING: flag %s has been deprecated and will be removed in a future version. Use %s instead.", name, normalizedName)
+ return pflag.NormalizedName(normalizedName)
+ }
+ return pflag.NormalizedName(name)
+}
+
+// InitFlags normalizes, parses, then logs the command line flags.
+func InitFlags() {
+ pflag.CommandLine.SetNormalizeFunc(WordSepNormalizeFunc)
+ pflag.CommandLine.AddGoFlagSet(flag.CommandLine)
+}
+
+// PrintFlags logs the flags in the flagset.
+func PrintFlags(flags *pflag.FlagSet) {
+ flags.VisitAll(func(flag *pflag.Flag) {
+ log.Printf("FLAG: --%s=%q", flag.Name, flag.Value)
+ })
+}
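
A hypothetical `main` that wires these helpers together: normalize "_" to "-", merge the standard library flag set, parse, then log the effective values. The flag name and default are illustrative:

```go
package main

import (
	"github.com/spf13/pflag"

	"github.com/openimsdk/open-im-server/v3/pkg/util/flag"
)

func main() {
	flag.InitFlags() // sets the normalizer and adds Go's flag.CommandLine; does not parse
	port := pflag.Int("rpc_port", 10110, "rpc listen port, also accepted as --rpc-port")
	pflag.Parse()
	flag.PrintFlags(pflag.CommandLine)
	_ = *port
}
```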
diff --git a/pkg/util/genutil/genutil.go b/pkg/util/genutil/genutil.go
new file mode 100644
index 000000000..0948a7c49
--- /dev/null
+++ b/pkg/util/genutil/genutil.go
@@ -0,0 +1,41 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package genutil
+
+import (
+ "fmt"
+ "os"
+ "path/filepath"
+)
+
+// OutDir creates the absolute path name from path and checks path exists.
+// Returns absolute path including trailing '/' or error if path does not exist.
+func OutDir(path string) (string, error) {
+ outDir, err := filepath.Abs(path)
+ if err != nil {
+ return "", err
+ }
+
+ stat, err := os.Stat(outDir)
+ if err != nil {
+ return "", err
+ }
+
+ if !stat.IsDir() {
+ return "", fmt.Errorf("output directory %s is not a directory", outDir)
+ }
+ outDir += "/"
+ return outDir, nil
+}
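
A hypothetical caller of `OutDir`, which resolves a relative path, verifies it is an existing directory, and returns it with a trailing slash. The directory name is illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/openimsdk/open-im-server/v3/pkg/util/genutil"
)

func main() {
	dir, err := genutil.OutDir("./_output")
	if err != nil {
		log.Fatalf("invalid output directory: %v", err)
	}
	fmt.Println("generated files will be written under", dir) // always ends with "/"
}
```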
diff --git a/pkg/util/genutil/genutil_test.go b/pkg/util/genutil/genutil_test.go
new file mode 100644
index 000000000..050d14040
--- /dev/null
+++ b/pkg/util/genutil/genutil_test.go
@@ -0,0 +1,40 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package genutil
+
+import (
+ "testing"
+)
+
+func TestValidDir(t *testing.T) {
+ _, err := OutDir("./")
+ if err != nil {
+ t.Fatal(err)
+ }
+}
+
+func TestInvalidDir(t *testing.T) {
+ _, err := OutDir("./nondir")
+ if err == nil {
+ t.Fatal("expected an error")
+ }
+}
+
+func TestNotDir(t *testing.T) {
+	_, err := OutDir("./genutil_test.go")
+ if err == nil {
+ t.Fatal("expected an error")
+ }
+}
diff --git a/scripts/.spelling_failures b/scripts/.spelling_failures
index 5c29b5992..149d314ba 100644
--- a/scripts/.spelling_failures
+++ b/scripts/.spelling_failures
@@ -3,4 +3,6 @@ go.mod
go.sum
third_party/
translations/
-log
\ No newline at end of file
+logs
+.git
+.golangci.yml
\ No newline at end of file
diff --git a/scripts/advertise.sh b/scripts/advertise.sh
index 9c8c284ad..3effc4f2b 100755
--- a/scripts/advertise.sh
+++ b/scripts/advertise.sh
@@ -23,7 +23,7 @@ trap 'openim::util::onCtrlC' INT
print_with_delay() {
text="$1"
delay="$2"
-
+
for i in $(seq 0 $((${#text}-1))); do
printf "${text:$i:1}"
sleep $delay
@@ -34,7 +34,7 @@ print_with_delay() {
print_progress() {
total="$1"
delay="$2"
-
+
printf "["
for i in $(seq 1 $total); do
printf "#"
@@ -44,14 +44,14 @@ print_progress() {
}
function openim_logo() {
- # Set text color to cyan for header and URL
- echo -e "\033[0;36m"
+ # Set text color to cyan for header and URL
+ echo -e "\033[0;36m"
+
+ # Display fancy ASCII Art logo
+ # look http://patorjk.com/software/taag/#p=display&h=1&v=1&f=Doh&t=OpenIM
+ print_with_delay '
+
- # Display fancy ASCII Art logo
- # look http://patorjk.com/software/taag/#p=display&h=1&v=1&f=Doh&t=OpenIM
- print_with_delay '
-
-
OOOOOOOOO IIIIIIIIIIMMMMMMMM MMMMMMMM
OO:::::::::OO I::::::::IM:::::::M M:::::::M
OO:::::::::::::OO I::::::::IM::::::::M M::::::::M
@@ -68,45 +68,45 @@ O:::::::OOO:::::::O p:::::ppppp:::::::pe::::::::e n::::n n::::nII:
OO:::::::::::::OO p::::::::::::::::p e::::::::eeeeeeee n::::n n::::nI::::::::IM::::::M M::::::M
OO:::::::::OO p::::::::::::::pp ee:::::::::::::e n::::n n::::nI::::::::IM::::::M M::::::M
OOOOOOOOO p::::::pppppppp eeeeeeeeeeeeee nnnnnn nnnnnnIIIIIIIIIIMMMMMMMM MMMMMMMM
- p:::::p
- p:::::p
- p:::::::p
- p:::::::p
- p:::::::p
- ppppppppp
-
- ' 0.0001
-
- # Display product URL
- print_with_delay "Discover more and contribute at: https://github.com/openimsdk/open-im-server" 0.01
-
- # Reset text color back to normal
- echo -e "\033[0m"
-
- # Set text color to green for product description
- echo -e "\033[1;32m"
-
- print_with_delay "Open-IM-Server: Reinventing Instant Messaging" 0.01
- print_progress 50 0.02
-
- print_with_delay "Open-IM-Server is not just a product; it's a revolution. It's about bringing the power of seamless," 0.01
- print_with_delay "real-time messaging to your fingertips. And it's about joining a global community of developers, dedicated to pushing the boundaries of what's possible." 0.01
-
- print_progress 50 0.02
-
- # Reset text color back to normal
- echo -e "\033[0m"
-
- # Set text color to yellow for the Slack link
- echo -e "\033[1;33m"
-
- print_with_delay "Join our developer community on Slack: https://join.slack.com/t/openimsdk/shared_invite/zt-22720d66b-o_FvKxMTGXtcnnnHiMqe9Q" 0.01
-
- # Reset text color back to normal
- echo -e "\033[0m"
+ p:::::p
+ p:::::p
+ p:::::::p
+ p:::::::p
+ p:::::::p
+ ppppppppp
+
+ ' 0.0001
+
+ # Display product URL
+ print_with_delay "Discover more and contribute at: https://github.com/openimsdk/open-im-server" 0.01
+
+ # Reset text color back to normal
+ echo -e "\033[0m"
+
+ # Set text color to green for product description
+ echo -e "\033[1;32m"
+
+ print_with_delay "Open-IM-Server: Reinventing Instant Messaging" 0.01
+ print_progress 50 0.02
+
+ print_with_delay "Open-IM-Server is not just a product; it's a revolution. It's about bringing the power of seamless," 0.01
+ print_with_delay "real-time messaging to your fingertips. And it's about joining a global community of developers, dedicated to pushing the boundaries of what's possible." 0.01
+
+ print_progress 50 0.02
+
+ # Reset text color back to normal
+ echo -e "\033[0m"
+
+ # Set text color to yellow for the Slack link
+ echo -e "\033[1;33m"
+
+ print_with_delay "Join our developer community on Slack: https://join.slack.com/t/openimsdk/shared_invite/zt-22720d66b-o_FvKxMTGXtcnnnHiMqe9Q" 0.01
+
+ # Reset text color back to normal
+ echo -e "\033[0m"
}
function main() {
- openim_logo
+ openim_logo
}
main "$@"
diff --git a/scripts/bash_beautify.py b/scripts/bash_beautify.py
new file mode 100755
index 000000000..54c6fa0ad
--- /dev/null
+++ b/scripts/bash_beautify.py
@@ -0,0 +1,173 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+#**************************************************************************
+# Copyright (C) 2011, Paul Lutus *
+# *
+# This program is free software; you can redistribute it and/or modify *
+# it under the terms of the GNU General Public License as published by *
+# the Free Software Foundation; either version 2 of the License, or *
+# (at your option) any later version. *
+# *
+# This program is distributed in the hope that it will be useful, *
+# but WITHOUT ANY WARRANTY; without even the implied warranty of *
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+# GNU General Public License for more details. *
+# *
+# You should have received a copy of the GNU General Public License *
+# along with this program; if not, write to the *
+# Free Software Foundation, Inc., *
+# 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. *
+#**************************************************************************
+
+import re
+import sys
+
+PVERSION = '1.0'
+
+
+class BeautifyBash:
+
+ def __init__(self):
+ self.tab_str = ' '
+ self.tab_size = 2
+
+ def read_file(self, fp):
+ with open(fp) as f:
+ return f.read()
+
+ def write_file(self, fp, data):
+ with open(fp, 'w') as f:
+ f.write(data)
+
+ def beautify_string(self, data, path=''):
+ tab = 0
+ case_stack = []
+ in_here_doc = False
+ defer_ext_quote = False
+ in_ext_quote = False
+ ext_quote_string = ''
+ here_string = ''
+ output = []
+ line = 1
+ for record in re.split('\n', data):
+ record = record.rstrip()
+ stripped_record = record.strip()
+
+ # collapse multiple quotes between ' ... '
+ test_record = re.sub(r'\'.*?\'', '', stripped_record)
+ # collapse multiple quotes between " ... "
+ test_record = re.sub(r'".*?"', '', test_record)
+ # collapse multiple quotes between ` ... `
+ test_record = re.sub(r'`.*?`', '', test_record)
+ # collapse multiple quotes between \` ... ' (weird case)
+ test_record = re.sub(r'\\`.*?\'', '', test_record)
+ # strip out any escaped single characters
+ test_record = re.sub(r'\\.', '', test_record)
+ # remove '#' comments
+ test_record = re.sub(r'(\A|\s)(#.*)', '', test_record, 1)
+ if(not in_here_doc):
+ if(re.search('<<-?', test_record)):
+ here_string = re.sub(
+ '.*<<-?\s*[\'|"]?([_|\w]+)[\'|"]?.*', '\\1', stripped_record, 1)
+ in_here_doc = (len(here_string) > 0)
+ if(in_here_doc): # pass on with no changes
+ output.append(record)
+ # now test for here-doc termination string
+ if(re.search(here_string, test_record) and not re.search('<<', test_record)):
+ in_here_doc = False
+ else: # not in here doc
+ if(in_ext_quote):
+ if(re.search(ext_quote_string, test_record)):
+ # provide line after quotes
+ test_record = re.sub(
+ '.*%s(.*)' % ext_quote_string, '\\1', test_record, 1)
+ in_ext_quote = False
+ else: # not in ext quote
+ if(re.search(r'(\A|\s)(\'|")', test_record)):
+ # apply only after this line has been processed
+ defer_ext_quote = True
+ ext_quote_string = re.sub(
+ '.*([\'"]).*', '\\1', test_record, 1)
+ # provide line before quote
+ test_record = re.sub(
+ '(.*)%s.*' % ext_quote_string, '\\1', test_record, 1)
+ if(in_ext_quote):
+ # pass on unchanged
+ output.append(record)
+ else: # not in ext quote
+ inc = len(re.findall(
+ '(\s|\A|;)(case|then|do)(;|\Z|\s)', test_record))
+ inc += len(re.findall('(\{|\(|\[)', test_record))
+ outc = len(re.findall(
+ '(\s|\A|;)(esac|fi|done|elif)(;|\)|\||\Z|\s)', test_record))
+ outc += len(re.findall('(\}|\)|\])', test_record))
+ if(re.search(r'\besac\b', test_record)):
+ if(len(case_stack) == 0):
+ sys.stderr.write(
+ 'File %s: error: "esac" before "case" in line %d.\n' % (
+ path, line)
+ )
+ else:
+ outc += case_stack.pop()
+ # special handling for bad syntax within case ... esac
+ if(len(case_stack) > 0):
+ if(re.search('\A[^(]*\)', test_record)):
+ # avoid overcount
+ outc -= 2
+ case_stack[-1] += 1
+ if(re.search(';;', test_record)):
+ outc += 1
+ case_stack[-1] -= 1
+ # an ad-hoc solution for the "else" keyword
+ else_case = (
+ 0, -1)[re.search('^(else)', test_record) != None]
+ net = inc - outc
+ tab += min(net, 0)
+ extab = tab + else_case
+ extab = max(0, extab)
+ output.append(
+ (self.tab_str * self.tab_size * extab) + stripped_record)
+ tab += max(net, 0)
+ if(defer_ext_quote):
+ in_ext_quote = True
+ defer_ext_quote = False
+ if(re.search(r'\bcase\b', test_record)):
+ case_stack.append(0)
+ line += 1
+ error = (tab != 0)
+ if(error):
+ sys.stderr.write(
+ 'File %s: error: indent/outdent mismatch: %d.\n' % (path, tab))
+ return '\n'.join(output), error
+
+ def beautify_file(self, path):
+ error = False
+ if(path == '-'):
+ data = sys.stdin.read()
+ result, error = self.beautify_string(data, '(stdin)')
+ sys.stdout.write(result)
+ else: # named file
+ data = self.read_file(path)
+ result, error = self.beautify_string(data, path)
+ if(data != result):
+ # make a backup copy
+ self.write_file(path + '~', data)
+ self.write_file(path, result)
+ return error
+
+ def main(self):
+ error = False
+ sys.argv.pop(0)
+ if(len(sys.argv) < 1):
+ sys.stderr.write(
+ 'usage: shell script filenames or \"-\" for stdin.\n')
+ else:
+ for path in sys.argv:
+ error |= self.beautify_file(path)
+ sys.exit((0, 1)[error])
+
+# if not called as a module
+if(__name__ == '__main__'):
+ BeautifyBash().main()
+
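The beautifier added above accepts one or more script paths, or `-` to read from stdin, and writes a `~`-suffixed backup before rewriting a file in place. A minimal usage sketch (the `find`/`xargs` invocation and the `python` vs `python3` choice are illustrative, not part of the patch):

```bash
# Re-indent every shell script under scripts/ in place; a backup with a
# '~' suffix is written whenever a file actually changes.
find scripts -name '*.sh' -print0 | xargs -0 python scripts/bash_beautify.py

# Preview the result for a single script on stdout without touching the file.
python scripts/bash_beautify.py - < scripts/check-all.sh | less
```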
diff --git a/scripts/build-all-service.sh b/scripts/build-all-service.sh
index c79018a87..b5578fca6 100755
--- a/scripts/build-all-service.sh
+++ b/scripts/build-all-service.sh
@@ -30,8 +30,8 @@ OPENIM_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
source "${OPENIM_ROOT}/scripts/lib/init.sh"
# CPU core number
-pushd ""${OPENIM_ROOT}"/tools/ncpu" >/dev/null
- cpu_count=$(go run .)
+pushd "${OPENIM_ROOT}/tools/ncpu" >/dev/null
+cpu_count=$(go run .)
popd >/dev/null
openim::color::echo ${GREEN_PREFIX} "======> cpu_count=$cpu_count"
@@ -42,7 +42,7 @@ compile_count=$((cpu_count / 2))
# For help output
ARGHELP=""
if [[ "$#" -gt 0 ]]; then
- ARGHELP="'$*'"
+ ARGHELP="'$*'"
fi
openim::color::echo $COLOR_CYAN "NOTE: $0 has been replaced by 'make multiarch' or 'make build'"
@@ -61,15 +61,15 @@ echo " ./scripts/build-all-service.sh BINS=openim-api V=1 DEBUG=1"
echo
if [ -z "$*" ]; then
- openim::log::info "no args, build all service"
- make --no-print-directory -C "${OPENIM_ROOT}" -j$compile_count build
+ openim::log::info "no args, build all service"
+ make --no-print-directory -C "${OPENIM_ROOT}" -j$compile_count build
else
- openim::log::info "build service: $*"
- make --no-print-directory -C "${OPENIM_ROOT}" -j$compile_count build "$*"
+ openim::log::info "build service: $*"
+ make --no-print-directory -C "${OPENIM_ROOT}" -j$compile_count build "$*"
fi
if [ $? -eq 0 ]; then
- openim::log::success "all service build success, run 'make start' or './scripts/start-all.sh'"
+ openim::log::success "all service build success, run 'make start' or './scripts/start-all.sh'"
else
- openim::log::error "make build Error, script exits"
+ openim::log::error "make build Error, script exits"
fi
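For reference, the flow this script drives can be reproduced by hand. This is a sketch under the assumption that the repository's `build` target and the `tools/ncpu` helper behave as they are used above:

```bash
# Detect CPU cores the same way the script does, then hand half of them to make.
pushd tools/ncpu >/dev/null && cpu_count=$(go run .) && popd >/dev/null
make --no-print-directory -j$((cpu_count / 2)) build                  # build every service
make --no-print-directory -j$((cpu_count / 2)) build BINS=openim-api  # or a single binary
```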
diff --git a/scripts/check-all.sh b/scripts/check-all.sh
index 23e2119d4..062605ae1 100755
--- a/scripts/check-all.sh
+++ b/scripts/check-all.sh
@@ -14,10 +14,10 @@
# limitations under the License.
# This script is check openim service is running normally
-#
+#
# Usage: `scripts/check-all.sh`.
# Encapsulated as: `make check`.
-# READ: https://github.com/openimsdk/open-im-server/tree/main/scripts/install/environment.sh
+# READ: https://github.com/openimsdk/open-im-server/tree/main/scripts/install/environment.sh
set -o errexit
set -o nounset
@@ -30,45 +30,55 @@ OPENIM_VERBOSE=4
openim::log::info "\n# Begin to check all openim service"
-# OpenIM status
+openim::log::status "Check all dependent service ports"
+# Elegant printing function
# Elegant printing function
print_services_and_ports() {
- # Get the arrays
- declare -g service_names=("${!1}")
- declare -g service_ports=("${!2}")
-
- echo "+-------------------------+----------+"
- echo "| Service Name | Port |"
- echo "+-------------------------+----------+"
-
- for index in "${!service_names[@]}"; do
- printf "| %-23s | %-8s |\n" "${service_names[$index]}" "${service_ports[$index]}"
- done
+ local service_names=("$@")
+ local half_length=$((${#service_names[@]} / 2))
+ local service_ports=("${service_names[@]:half_length}")
+
+ echo "+-------------------------+----------+"
+ echo "| Service Name | Port |"
+ echo "+-------------------------+----------+"
+
+ for ((index=0; index < half_length; index++)); do
+ printf "| %-23s | %-8s |\n" "${service_names[$index]}" "${service_ports[$index]}"
+ done
+
+ echo "+-------------------------+----------+"
+}
- echo "+-------------------------+----------+"
+handle_error() {
+ echo "An error occurred. Printing ${STDERR_LOG_FILE} contents:"
+ cat "${STDERR_LOG_FILE}"
+ exit 1
}
+trap handle_error ERR
+
+# Assuming OPENIM_SERVER_NAME_TARGETS and OPENIM_SERVER_PORT_TARGETS are defined
+# Similarly for OPENIM_DEPENDENCY_TARGETS and OPENIM_DEPENDENCY_PORT_TARGETS
# Print out services and their ports
-print_services_and_ports OPENIM_SERVER_NAME_TARGETS OPENIM_SERVER_PORT_TARGETS
+print_services_and_ports "${OPENIM_SERVER_NAME_TARGETS[@]}" "${OPENIM_SERVER_PORT_TARGETS[@]}"
# Print out dependencies and their ports
-print_services_and_ports OPENIM_DEPENDENCY_TARGETS OPENIM_DEPENDENCY_PORT_TARGETS
-
+print_services_and_ports "${OPENIM_DEPENDENCY_TARGETS[@]}" "${OPENIM_DEPENDENCY_PORT_TARGETS[@]}"
# OpenIM check
echo "++ The port being checked: ${OPENIM_SERVER_PORT_LISTARIES[@]}"
openim::log::info "\n## Check all dependent service ports"
-echo "+++ The port being checked: ${OPENIM_DEPENDENCY_PORT_LISTARIES[@]}"
+echo "++ The port being checked: ${OPENIM_DEPENDENCY_PORT_LISTARIES[@]}"
set +e
# The 'docker' keyword alone becomes unreliable once Docker is phased out, so 'kubepods' is checked as well
if grep -qE 'docker|kubepods' /proc/1/cgroup || [ -f /.dockerenv ]; then
- openim::color::echo ${COLOR_CYAN} "Environment in the interior of the container"
+ openim::color::echo ${COLOR_CYAN} "Environment in the interior of the container"
else
- openim::color::echo ${COLOR_CYAN} "The environment is outside the container"
- openim::util::check_ports ${OPENIM_DEPENDENCY_PORT_LISTARIES[@]} || return 0
+ openim::color::echo ${COLOR_CYAN} "The environment is outside the container"
+ openim::util::check_ports ${OPENIM_DEPENDENCY_PORT_LISTARIES[@]} || return 0
fi
if [[ $? -ne 0 ]]; then
@@ -91,4 +101,6 @@ else
echo "++++ Check all openim service ports successfully !"
fi
-set -e
\ No newline at end of file
+set -e
+
+trap - ERR
\ No newline at end of file
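The rewritten `print_services_and_ports` takes one flattened argument list and treats the first half as service names and the second half as ports, which is why both arrays are expanded in a single call; the new `ERR` trap dumps `${STDERR_LOG_FILE}` on any failure. A small sketch of the calling convention (the names and ports below are illustrative, not the project's real defaults):

```bash
# Illustrative inputs: names first, ports second, same length.
OPENIM_SERVER_NAME_TARGETS=(openim-api openim-push openim-rpc-msg)
OPENIM_SERVER_PORT_TARGETS=(10002 10170 10130)
print_services_and_ports "${OPENIM_SERVER_NAME_TARGETS[@]}" "${OPENIM_SERVER_PORT_TARGETS[@]}"
# | openim-api              | 10002    |
# | openim-push             | 10170    |
# | openim-rpc-msg          | 10130    |
```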
diff --git a/scripts/cherry-pick.sh b/scripts/cherry-pick.sh
index 5f13ef0e4..8a1f8dd79 100755
--- a/scripts/cherry-pick.sh
+++ b/scripts/cherry-pick.sh
@@ -118,7 +118,7 @@ function return_to_kansas {
openim::log::status "Aborting in-progress git am."
git am --abort >/dev/null 2>&1 || true
fi
-
+
# return to the starting branch and delete the PR text file
if [[ -z "${DRY_RUN}" ]]; then
echo
@@ -137,7 +137,7 @@ function make-a-pr() {
rel="$(basename "${BRANCH}")"
echo
openim::log::status "Creating a pull request on GitHub at ${GITHUB_USER}:${NEWBRANCH}"
-
+
local numandtitle
numandtitle=$(printf '%s\n' "${SUBJECTS[@]}")
prtext=$(cat <&2
- exit 1
- fi
- done
-
- if [[ "${conflicts}" != "true" ]]; then
- echo "!!! git am failed, likely because of an in-progress 'git am' or 'git rebase'"
+curl -o "/tmp/${pull}.patch" -sSL "https://github.com/${MAIN_REPO_ORG}/${MAIN_REPO_NAME}/pull/${pull}.patch"
+echo
+openim::log::status "About to attempt cherry pick of PR. To reattempt:"
+echo " $ git am -3 /tmp/${pull}.patch"
+echo
+git am -3 "/tmp/${pull}.patch" || {
+ conflicts=false
+ while unmerged=$(git status --porcelain | grep ^U) && [[ -n ${unmerged} ]] \
+ || [[ -e "${REBASEMAGIC}" ]]; do
+ conflicts=true # <-- We should have detected conflicts once
+ echo
+ openim::log::status "Conflicts detected:"
+ echo
+ (git status --porcelain | grep ^U) || echo "!!! None. Did you git am --continue?"
+ echo
+ openim::log::status "Please resolve the conflicts in another window (and remember to 'git add / git am --continue')"
+ read -p "+++ Proceed (anything other than 'y' aborts the cherry-pick)? [y/n] " -r
+ echo
+ if ! [[ "${REPLY}" =~ ^[yY]$ ]]; then
+ echo "Aborting." >&2
exit 1
fi
- }
+ done
+
+ if [[ "${conflicts}" != "true" ]]; then
+ echo "!!! git am failed, likely because of an in-progress 'git am' or 'git rebase'"
+ exit 1
+ fi
+}
- # set the subject
- subject=$(grep -m 1 "^Subject" "/tmp/${pull}.patch" | sed -e 's/Subject: \[PATCH//g' | sed 's/.*] //')
- SUBJECTS+=("#${pull}: ${subject}")
+# set the subject
+subject=$(grep -m 1 "^Subject" "/tmp/${pull}.patch" | sed -e 's/Subject: \[PATCH//g' | sed 's/.*] //')
+SUBJECTS+=("#${pull}: ${subject}")
- # remove the patch file from /tmp
- rm -f "/tmp/${pull}.patch"
+# remove the patch file from /tmp
+rm -f "/tmp/${pull}.patch"
done
gitamcleanup=false
# Re-generate docs (if needed)
if [[ -n "${REGENERATE_DOCS}" ]]; then
+echo
+echo "Regenerating docs..."
+if ! scripts/generate-docs.sh; then
echo
- echo "Regenerating docs..."
- if ! scripts/generate-docs.sh; then
- echo
- echo "scripts/gendoc.sh FAILED to complete."
- exit 1
- fi
+ echo "scripts/gendoc.sh FAILED to complete."
+ exit 1
+fi
fi
if [[ -n "${DRY_RUN}" ]]; then
- openim::log::error "!!! Skipping git push and PR creation because you set DRY_RUN."
- echo "To return to the branch you were in when you invoked this script:"
- echo
- echo " git checkout ${STARTINGBRANCH}"
- echo
- echo "To delete this branch:"
- echo
- echo " git branch -D ${NEWBRANCHUNIQ}"
- exit 0
+openim::log::error "!!! Skipping git push and PR creation because you set DRY_RUN."
+echo "To return to the branch you were in when you invoked this script:"
+echo
+echo " git checkout ${STARTINGBRANCH}"
+echo
+echo "To delete this branch:"
+echo
+echo " git branch -D ${NEWBRANCHUNIQ}"
+exit 0
fi
if git remote -v | grep ^"${FORK_REMOTE}" | grep "${MAIN_REPO_ORG}/${MAIN_REPO_NAME}.git"; then
- echo "!!! You have ${FORK_REMOTE} configured as your ${MAIN_REPO_ORG}/${MAIN_REPO_NAME}.git"
- echo "This isn't normal. Leaving you with push instructions:"
- echo
- openim::log::status "First manually push the branch this script created:"
- echo
- echo " git push REMOTE ${NEWBRANCHUNIQ}:${NEWBRANCH}"
- echo
- echo "where REMOTE is your personal fork (maybe ${UPSTREAM_REMOTE}? Consider swapping those.)."
- echo "OR consider setting UPSTREAM_REMOTE and FORK_REMOTE to different values."
- echo
- make-a-pr
- cleanbranch=""
- exit 0
+echo "!!! You have ${FORK_REMOTE} configured as your ${MAIN_REPO_ORG}/${MAIN_REPO_NAME}.git"
+echo "This isn't normal. Leaving you with push instructions:"
+echo
+openim::log::status "First manually push the branch this script created:"
+echo
+echo " git push REMOTE ${NEWBRANCHUNIQ}:${NEWBRANCH}"
+echo
+echo "where REMOTE is your personal fork (maybe ${UPSTREAM_REMOTE}? Consider swapping those.)."
+echo "OR consider setting UPSTREAM_REMOTE and FORK_REMOTE to different values."
+echo
+make-a-pr
+cleanbranch=""
+exit 0
fi
echo
@@ -248,8 +248,8 @@ echo " git push ${FORK_REMOTE} ${NEWBRANCHUNIQ}:${NEWBRANCH}"
echo
read -p "+++ Proceed (anything other than 'y' aborts the cherry-pick)? [y/n] " -r
if ! [[ "${REPLY}" =~ ^[yY]$ ]]; then
- echo "Aborting." >&2
- exit 1
+echo "Aborting." >&2
+exit 1
fi
git push "${FORK_REMOTE}" -f "${NEWBRANCHUNIQ}:${NEWBRANCH}"
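The re-indented block keeps the original flow: fetch the PR's patch from GitHub, apply it with a three-way `git am`, and loop until any conflicts are resolved. The manual equivalent, with a placeholder PR number, looks roughly like this:

```bash
pull=12345   # placeholder PR number
curl -o "/tmp/${pull}.patch" -sSL "https://github.com/openimsdk/open-im-server/pull/${pull}.patch"
git am -3 "/tmp/${pull}.patch"   # on conflicts: fix the files, git add them, then git am --continue
rm -f "/tmp/${pull}.patch"
```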
diff --git a/scripts/common.sh b/scripts/common.sh
index da0d36118..d67389d56 100755
--- a/scripts/common.sh
+++ b/scripts/common.sh
@@ -42,7 +42,7 @@ OPENIM_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")"/.. && pwd -P)
# Constants
readonly OPENIM_BUILD_IMAGE_REPO=openim-build
-#readonly OPENIM_BUILD_IMAGE_CROSS_TAG="$(cat ""${OPENIM_ROOT}"/build/build-image/cross/VERSION")"
+#readonly OPENIM_BUILD_IMAGE_CROSS_TAG="$(cat "${OPENIM_ROOT}/build/build-image/cross/VERSION")"
readonly OPENIM_DOCKER_REGISTRY="${OPENIM_DOCKER_REGISTRY:-k8s.gcr.io}"
readonly OPENIM_BASE_IMAGE_REGISTRY="${OPENIM_BASE_IMAGE_REGISTRY:-us.gcr.io/k8s-artifacts-prod/build-image}"
@@ -53,7 +53,7 @@ readonly OPENIM_BASE_IMAGE_REGISTRY="${OPENIM_BASE_IMAGE_REGISTRY:-us.gcr.io/k8s
#
# Increment/change this number if you change the build image (anything under
# build/build-image) or change the set of volumes in the data container.
-#readonly OPENIM_BUILD_IMAGE_VERSION_BASE="$(cat ""${OPENIM_ROOT}"/build/build-image/VERSION")"
+#readonly OPENIM_BUILD_IMAGE_VERSION_BASE="$(cat "${OPENIM_ROOT}/build/build-image/VERSION")"
#readonly OPENIM_BUILD_IMAGE_VERSION="${OPENIM_BUILD_IMAGE_VERSION_BASE}-${OPENIM_BUILD_IMAGE_CROSS_TAG}"
# Here we map the output directories across both the local and remote _output
@@ -66,9 +66,10 @@ readonly OPENIM_BASE_IMAGE_REGISTRY="${OPENIM_BASE_IMAGE_REGISTRY:-us.gcr.io/k8s
# is really remote, this is the stuff that has to be copied
# back.
# OUT_DIR can come in from the Makefile, so honor it.
-readonly LOCAL_OUTPUT_ROOT=""${OPENIM_ROOT}"/${OUT_DIR:-_output}"
-readonly LOCAL_OUTPUT_SUBPATH="${LOCAL_OUTPUT_ROOT}/platforms"
-readonly LOCAL_OUTPUT_BINPATH="${LOCAL_OUTPUT_SUBPATH}"
+readonly LOCAL_OUTPUT_ROOT="${OPENIM_ROOT}/${OUT_DIR:-_output}"
+readonly LOCAL_OUTPUT_SUBPATH="${LOCAL_OUTPUT_ROOT}/bin"
+readonly LOCAL_OUTPUT_BINPATH="${LOCAL_OUTPUT_SUBPATH}/platforms"
+readonly LOCAL_OUTPUT_BINTOOLSPATH="${LOCAL_OUTPUT_SUBPATH}/tools"
readonly LOCAL_OUTPUT_GOPATH="${LOCAL_OUTPUT_SUBPATH}/go"
readonly LOCAL_OUTPUT_IMAGE_STAGING="${LOCAL_OUTPUT_ROOT}/images"
@@ -86,28 +87,28 @@ readonly OPENIM_CONTAINER_RSYNC_PORT=8730
#
# $1 - server architecture
openim::build::get_docker_wrapped_binaries() {
- local arch=$1
- local debian_base_version=v2.1.0
- local debian_iptables_version=v12.1.0
- ### If you change any of these lists, please also update DOCKERIZED_BINARIES
- ### in build/BUILD. And openim::golang::server_image_targets
-
- local targets=(
- "openim-api,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- "openim-cmdutils,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- "openim-crontask,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- "openim-msggateway,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- "openim-msgtransfer,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- "openim-push,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- "openim-rpc-auth,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- "openim-rpc-conversation,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- "openim-rpc-friend,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- "openim-rpc-group,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- "openim-rpc-msg,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- "openim-rpc-third,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- "openim-rpc-user,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
- )
- echo "${targets[@]}"
+local arch=$1
+local debian_base_version=v2.1.0
+local debian_iptables_version=v12.1.0
+### If you change any of these lists, please also update DOCKERIZED_BINARIES
+### in build/BUILD. And openim::golang::server_image_targets
+
+local targets=(
+ "openim-api,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+ "openim-cmdutils,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+ "openim-crontask,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+ "openim-msggateway,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+ "openim-msgtransfer,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+ "openim-push,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+ "openim-rpc-auth,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+ "openim-rpc-conversation,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+ "openim-rpc-friend,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+ "openim-rpc-group,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+ "openim-rpc-msg,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+ "openim-rpc-third,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+ "openim-rpc-user,${OPENIM_BASE_IMAGE_REGISTRY}/debian-base-${arch}:${debian_base_version}"
+)
+echo "${targets[@]}"
}
# ---------------------------------------------------------------------------
@@ -132,170 +133,170 @@ openim::build::get_docker_wrapped_binaries() {
# DOCKER_MOUNT_ARGS
# LOCAL_OUTPUT_BUILD_CONTEXT
function openim::build::verify_prereqs() {
- local -r require_docker=${1:-true}
- openim::log::status "Verifying Prerequisites...."
- openim::build::ensure_tar || return 1
- openim::build::ensure_rsync || return 1
- if ${require_docker}; then
- openim::build::ensure_docker_in_path || return 1
- openim::util::ensure_docker_daemon_connectivity || return 1
-
- if (( OPENIM_VERBOSE > 6 )); then
- openim::log::status "Docker Version:"
- "${DOCKER[@]}" version | openim::log::info_from_stdin
- fi
+local -r require_docker=${1:-true}
+openim::log::status "Verifying Prerequisites...."
+openim::build::ensure_tar || return 1
+openim::build::ensure_rsync || return 1
+if ${require_docker}; then
+ openim::build::ensure_docker_in_path || return 1
+ openim::util::ensure_docker_daemon_connectivity || return 1
+
+ if (( OPENIM_VERBOSE > 6 )); then
+ openim::log::status "Docker Version:"
+ "${DOCKER[@]}" version | openim::log::info_from_stdin
fi
-
- OPENIM_GIT_BRANCH=$(git symbolic-ref --short -q HEAD 2>/dev/null || true)
- OPENIM_ROOT_HASH=$(openim::build::short_hash "${HOSTNAME:-}:"${OPENIM_ROOT}":${OPENIM_GIT_BRANCH}")
- OPENIM_BUILD_IMAGE_TAG_BASE="build-${OPENIM_ROOT_HASH}"
- #OPENIM_BUILD_IMAGE_TAG="${OPENIM_BUILD_IMAGE_TAG_BASE}-${OPENIM_BUILD_IMAGE_VERSION}"
- #OPENIM_BUILD_IMAGE="${OPENIM_BUILD_IMAGE_REPO}:${OPENIM_BUILD_IMAGE_TAG}"
- OPENIM_BUILD_CONTAINER_NAME_BASE="openim-build-${OPENIM_ROOT_HASH}"
- #OPENIM_BUILD_CONTAINER_NAME="${OPENIM_BUILD_CONTAINER_NAME_BASE}-${OPENIM_BUILD_IMAGE_VERSION}"
- OPENIM_RSYNC_CONTAINER_NAME_BASE="openim-rsync-${OPENIM_ROOT_HASH}"
- #OPENIM_RSYNC_CONTAINER_NAME="${OPENIM_RSYNC_CONTAINER_NAME_BASE}-${OPENIM_BUILD_IMAGE_VERSION}"
- OPENIM_DATA_CONTAINER_NAME_BASE="openim-build-data-${OPENIM_ROOT_HASH}"
- #OPENIM_DATA_CONTAINER_NAME="${OPENIM_DATA_CONTAINER_NAME_BASE}-${OPENIM_BUILD_IMAGE_VERSION}"
- #DOCKER_MOUNT_ARGS=(--volumes-from "${OPENIM_DATA_CONTAINER_NAME}")
- #LOCAL_OUTPUT_BUILD_CONTEXT="${LOCAL_OUTPUT_IMAGE_STAGING}/${OPENIM_BUILD_IMAGE}"
-
- openim::version::get_version_vars
- #openim::version::save_version_vars ""${OPENIM_ROOT}"/.dockerized-openim-version-defs"
+fi
+
+OPENIM_GIT_BRANCH=$(git symbolic-ref --short -q HEAD 2>/dev/null || true)
+OPENIM_ROOT_HASH=$(openim::build::short_hash "${HOSTNAME:-}:${OPENIM_ROOT}:${OPENIM_GIT_BRANCH}")
+OPENIM_BUILD_IMAGE_TAG_BASE="build-${OPENIM_ROOT_HASH}"
+#OPENIM_BUILD_IMAGE_TAG="${OPENIM_BUILD_IMAGE_TAG_BASE}-${OPENIM_BUILD_IMAGE_VERSION}"
+#OPENIM_BUILD_IMAGE="${OPENIM_BUILD_IMAGE_REPO}:${OPENIM_BUILD_IMAGE_TAG}"
+OPENIM_BUILD_CONTAINER_NAME_BASE="openim-build-${OPENIM_ROOT_HASH}"
+#OPENIM_BUILD_CONTAINER_NAME="${OPENIM_BUILD_CONTAINER_NAME_BASE}-${OPENIM_BUILD_IMAGE_VERSION}"
+OPENIM_RSYNC_CONTAINER_NAME_BASE="openim-rsync-${OPENIM_ROOT_HASH}"
+#OPENIM_RSYNC_CONTAINER_NAME="${OPENIM_RSYNC_CONTAINER_NAME_BASE}-${OPENIM_BUILD_IMAGE_VERSION}"
+OPENIM_DATA_CONTAINER_NAME_BASE="openim-build-data-${OPENIM_ROOT_HASH}"
+#OPENIM_DATA_CONTAINER_NAME="${OPENIM_DATA_CONTAINER_NAME_BASE}-${OPENIM_BUILD_IMAGE_VERSION}"
+#DOCKER_MOUNT_ARGS=(--volumes-from "${OPENIM_DATA_CONTAINER_NAME}")
+#LOCAL_OUTPUT_BUILD_CONTEXT="${LOCAL_OUTPUT_IMAGE_STAGING}/${OPENIM_BUILD_IMAGE}"
+
+openim::version::get_version_vars
+#openim::version::save_version_vars "${OPENIM_ROOT}/.dockerized-openim-version-defs"
}
# ---------------------------------------------------------------------------
# Utility functions
function openim::build::docker_available_on_osx() {
- if [[ -z "${DOCKER_HOST}" ]]; then
- if [[ -S "/var/run/docker.sock" ]]; then
- openim::log::status "Using Docker for MacOS"
- return 0
- fi
-
- openim::log::status "No docker host is set. Checking options for setting one..."
- if [[ -z "$(which docker-machine)" ]]; then
- openim::log::status "It looks like you're running Mac OS X, yet neither Docker for Mac nor docker-machine can be found."
- openim::log::status "See: https://docs.docker.com/engine/installation/mac/ for installation instructions."
- return 1
+if [[ -z "${DOCKER_HOST}" ]]; then
+ if [[ -S "/var/run/docker.sock" ]]; then
+ openim::log::status "Using Docker for MacOS"
+ return 0
+ fi
+
+ openim::log::status "No docker host is set. Checking options for setting one..."
+ if [[ -z "$(which docker-machine)" ]]; then
+ openim::log::status "It looks like you're running Mac OS X, yet neither Docker for Mac nor docker-machine can be found."
+ openim::log::status "See: https://docs.docker.com/engine/installation/mac/ for installation instructions."
+ return 1
elif [[ -n "$(which docker-machine)" ]]; then
- openim::build::prepare_docker_machine
- fi
+ openim::build::prepare_docker_machine
fi
+fi
}
function openim::build::prepare_docker_machine() {
- openim::log::status "docker-machine was found."
-
- local available_memory_bytes
- available_memory_bytes=$(sysctl -n hw.memsize 2>/dev/null)
-
- local bytes_in_mb=1048576
-
- # Give virtualbox 1/2 the system memory. Its necessary to divide by 2, instead
- # of multiple by .5, because bash can only multiply by ints.
- local memory_divisor=2
-
- local virtualbox_memory_mb=$(( available_memory_bytes / (bytes_in_mb * memory_divisor) ))
-
- docker-machine inspect "${DOCKER_MACHINE_NAME}" &> /dev/null || {
- openim::log::status "Creating a machine to build OPENIM"
- docker-machine create --driver "${DOCKER_MACHINE_DRIVER}" \
- --virtualbox-memory "${virtualbox_memory_mb}" \
- --engine-env HTTP_PROXY="${OPENIMRNETES_HTTP_PROXY:-}" \
- --engine-env HTTPS_PROXY="${OPENIMRNETES_HTTPS_PROXY:-}" \
- --engine-env NO_PROXY="${OPENIMRNETES_NO_PROXY:-127.0.0.1}" \
- "${DOCKER_MACHINE_NAME}" > /dev/null || {
- openim::log::error "Something went wrong creating a machine."
- openim::log::error "Try the following: "
- openim::log::error "docker-machine create -d ${DOCKER_MACHINE_DRIVER} --virtualbox-memory ${virtualbox_memory_mb} ${DOCKER_MACHINE_NAME}"
- return 1
- }
+openim::log::status "docker-machine was found."
+
+local available_memory_bytes
+available_memory_bytes=$(sysctl -n hw.memsize 2>/dev/null)
+
+local bytes_in_mb=1048576
+
+# Give VirtualBox 1/2 the system memory. It's necessary to divide by 2, instead
+# of multiplying by .5, because bash can only multiply by ints.
+local memory_divisor=2
+
+local virtualbox_memory_mb=$(( available_memory_bytes / (bytes_in_mb * memory_divisor) ))
+
+docker-machine inspect "${DOCKER_MACHINE_NAME}" &> /dev/null || {
+ openim::log::status "Creating a machine to build OPENIM"
+ docker-machine create --driver "${DOCKER_MACHINE_DRIVER}" \
+ --virtualbox-memory "${virtualbox_memory_mb}" \
+ --engine-env HTTP_PROXY="${OPENIMRNETES_HTTP_PROXY:-}" \
+ --engine-env HTTPS_PROXY="${OPENIMRNETES_HTTPS_PROXY:-}" \
+ --engine-env NO_PROXY="${OPENIMRNETES_NO_PROXY:-127.0.0.1}" \
+ "${DOCKER_MACHINE_NAME}" > /dev/null || {
+ openim::log::error "Something went wrong creating a machine."
+ openim::log::error "Try the following: "
+ openim::log::error "docker-machine create -d ${DOCKER_MACHINE_DRIVER} --virtualbox-memory ${virtualbox_memory_mb} ${DOCKER_MACHINE_NAME}"
+ return 1
}
- docker-machine start "${DOCKER_MACHINE_NAME}" &> /dev/null
- # it takes `docker-machine env` a few seconds to work if the machine was just started
- local docker_machine_out
- while ! docker_machine_out=$(docker-machine env "${DOCKER_MACHINE_NAME}" 2>&1); do
- if [[ ${docker_machine_out} =~ "Error checking TLS connection" ]]; then
- echo "${docker_machine_out}"
- docker-machine regenerate-certs "${DOCKER_MACHINE_NAME}"
- else
- sleep 1
- fi
- done
- eval "$(docker-machine env "${DOCKER_MACHINE_NAME}")"
- openim::log::status "A Docker host using docker-machine named '${DOCKER_MACHINE_NAME}' is ready to go!"
- return 0
+}
+docker-machine start "${DOCKER_MACHINE_NAME}" &> /dev/null
+# it takes `docker-machine env` a few seconds to work if the machine was just started
+local docker_machine_out
+while ! docker_machine_out=$(docker-machine env "${DOCKER_MACHINE_NAME}" 2>&1); do
+ if [[ ${docker_machine_out} =~ "Error checking TLS connection" ]]; then
+ echo "${docker_machine_out}"
+ docker-machine regenerate-certs "${DOCKER_MACHINE_NAME}"
+ else
+ sleep 1
+ fi
+done
+eval "$(docker-machine env "${DOCKER_MACHINE_NAME}")"
+openim::log::status "A Docker host using docker-machine named '${DOCKER_MACHINE_NAME}' is ready to go!"
+return 0
}
function openim::build::is_gnu_sed() {
- [[ $(sed --version 2>&1) == *GNU* ]]
+[[ $(sed --version 2>&1) == *GNU* ]]
}
function openim::build::ensure_rsync() {
- if [[ -z "$(which rsync)" ]]; then
- openim::log::error "Can't find 'rsync' in PATH, please fix and retry."
- return 1
- fi
+if [[ -z "$(which rsync)" ]]; then
+ openim::log::error "Can't find 'rsync' in PATH, please fix and retry."
+ return 1
+fi
}
function openim::build::update_dockerfile() {
- if openim::build::is_gnu_sed; then
- sed_opts=(-i)
- else
- sed_opts=(-i '')
- fi
- sed "${sed_opts[@]}" "s/OPENIM_BUILD_IMAGE_CROSS_TAG/${OPENIM_BUILD_IMAGE_CROSS_TAG}/" "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
+if openim::build::is_gnu_sed; then
+ sed_opts=(-i)
+else
+ sed_opts=(-i '')
+fi
+sed "${sed_opts[@]}" "s/OPENIM_BUILD_IMAGE_CROSS_TAG/${OPENIM_BUILD_IMAGE_CROSS_TAG}/" "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
}
function openim::build::set_proxy() {
- if [[ -n "${OPENIMRNETES_HTTPS_PROXY:-}" ]]; then
- echo "ENV https_proxy $OPENIMRNETES_HTTPS_PROXY" >> "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
- fi
- if [[ -n "${OPENIMRNETES_HTTP_PROXY:-}" ]]; then
- echo "ENV http_proxy $OPENIMRNETES_HTTP_PROXY" >> "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
- fi
- if [[ -n "${OPENIMRNETES_NO_PROXY:-}" ]]; then
- echo "ENV no_proxy $OPENIMRNETES_NO_PROXY" >> "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
- fi
+if [[ -n "${OPENIMRNETES_HTTPS_PROXY:-}" ]]; then
+ echo "ENV https_proxy $OPENIMRNETES_HTTPS_PROXY" >> "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
+fi
+if [[ -n "${OPENIMRNETES_HTTP_PROXY:-}" ]]; then
+ echo "ENV http_proxy $OPENIMRNETES_HTTP_PROXY" >> "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
+fi
+if [[ -n "${OPENIMRNETES_NO_PROXY:-}" ]]; then
+ echo "ENV no_proxy $OPENIMRNETES_NO_PROXY" >> "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
+fi
}
function openim::build::ensure_docker_in_path() {
- if [[ -z "$(which docker)" ]]; then
- openim::log::error "Can't find 'docker' in PATH, please fix and retry."
- openim::log::error "See https://docs.docker.com/installation/#installation for installation instructions."
- return 1
- fi
+if [[ -z "$(which docker)" ]]; then
+ openim::log::error "Can't find 'docker' in PATH, please fix and retry."
+ openim::log::error "See https://docs.docker.com/installation/#installation for installation instructions."
+ return 1
+fi
}
function openim::build::ensure_tar() {
- if [[ -n "${TAR:-}" ]]; then
- return
- fi
-
- # Find gnu tar if it is available, bomb out if not.
- TAR=tar
- if which gtar &>/dev/null; then
- TAR=gtar
- else
- if which gnutar &>/dev/null; then
- TAR=gnutar
- fi
- fi
- if ! "${TAR}" --version | grep -q GNU; then
- echo " !!! Cannot find GNU tar. Build on Linux or install GNU tar"
- echo " on Mac OS X (brew install gnu-tar)."
- return 1
+if [[ -n "${TAR:-}" ]]; then
+ return
+fi
+
+# Find gnu tar if it is available, bomb out if not.
+TAR=tar
+if which gtar &>/dev/null; then
+ TAR=gtar
+else
+ if which gnutar &>/dev/null; then
+ TAR=gnutar
fi
+fi
+if ! "${TAR}" --version | grep -q GNU; then
+ echo " !!! Cannot find GNU tar. Build on Linux or install GNU tar"
+ echo " on Mac OS X (brew install gnu-tar)."
+ return 1
+fi
}
function openim::build::has_docker() {
- which docker &> /dev/null
+which docker &> /dev/null
}
function openim::build::has_ip() {
- which ip &> /dev/null && ip -Version | grep 'iproute2' &> /dev/null
+which ip &> /dev/null && ip -Version | grep 'iproute2' &> /dev/null
}
# Detect if a specific image exists
@@ -303,12 +304,12 @@ function openim::build::has_ip() {
# $1 - image repo name
# $2 - image tag
function openim::build::docker_image_exists() {
- [[ -n $1 && -n $2 ]] || {
- openim::log::error "Internal error. Image not specified in docker_image_exists."
- exit 2
- }
+[[ -n $1 && -n $2 ]] || {
+ openim::log::error "Internal error. Image not specified in docker_image_exists."
+ exit 2
+}
- [[ $("${DOCKER[@]}" images -q "${1}:${2}") ]]
+[[ $("${DOCKER[@]}" images -q "${1}:${2}") ]]
}
# Delete all images that match a tag prefix except for the "current" version
@@ -317,21 +318,21 @@ function openim::build::docker_image_exists() {
# $2: The tag base. We consider any image that matches $2*
# $3: The current image not to delete if provided
function openim::build::docker_delete_old_images() {
- # In Docker 1.12, we can replace this with
- # docker images "$1" --format "{{.Tag}}"
- for tag in $("${DOCKER[@]}" images "${1}" | tail -n +2 | awk '{print $2}') ; do
- if [[ "${tag}" != "${2}"* ]] ; then
- V=3 openim::log::status "Keeping image ${1}:${tag}"
- continue
- fi
-
- if [[ -z "${3:-}" || "${tag}" != "${3}" ]] ; then
- V=2 openim::log::status "Deleting image ${1}:${tag}"
- "${DOCKER[@]}" rmi "${1}:${tag}" >/dev/null
- else
- V=3 openim::log::status "Keeping image ${1}:${tag}"
- fi
- done
+# In Docker 1.12, we can replace this with
+# docker images "$1" --format "{{.Tag}}"
+for tag in $("${DOCKER[@]}" images "${1}" | tail -n +2 | awk '{print $2}') ; do
+ if [[ "${tag}" != "${2}"* ]] ; then
+ V=3 openim::log::status "Keeping image ${1}:${tag}"
+ continue
+ fi
+
+ if [[ -z "${3:-}" || "${tag}" != "${3}" ]] ; then
+ V=2 openim::log::status "Deleting image ${1}:${tag}"
+ "${DOCKER[@]}" rmi "${1}:${tag}" >/dev/null
+ else
+ V=3 openim::log::status "Keeping image ${1}:${tag}"
+ fi
+done
}
# Stop and delete all containers that match a pattern
@@ -339,36 +340,36 @@ function openim::build::docker_delete_old_images() {
# $1: The base container prefix
# $2: The current container to keep, if provided
function openim::build::docker_delete_old_containers() {
- # In Docker 1.12 we can replace this line with
- # docker ps -a --format="{{.Names}}"
- for container in $("${DOCKER[@]}" ps -a | tail -n +2 | awk '{print $NF}') ; do
- if [[ "${container}" != "${1}"* ]] ; then
- V=3 openim::log::status "Keeping container ${container}"
- continue
- fi
- if [[ -z "${2:-}" || "${container}" != "${2}" ]] ; then
- V=2 openim::log::status "Deleting container ${container}"
- openim::build::destroy_container "${container}"
- else
- V=3 openim::log::status "Keeping container ${container}"
- fi
- done
+# In Docker 1.12 we can replace this line with
+# docker ps -a --format="{{.Names}}"
+for container in $("${DOCKER[@]}" ps -a | tail -n +2 | awk '{print $NF}') ; do
+ if [[ "${container}" != "${1}"* ]] ; then
+ V=3 openim::log::status "Keeping container ${container}"
+ continue
+ fi
+ if [[ -z "${2:-}" || "${container}" != "${2}" ]] ; then
+ V=2 openim::log::status "Deleting container ${container}"
+ openim::build::destroy_container "${container}"
+ else
+ V=3 openim::log::status "Keeping container ${container}"
+ fi
+done
}
# Takes $1 and computes a short hash for it. Useful for unique tag generation
function openim::build::short_hash() {
- [[ $# -eq 1 ]] || {
- openim::log::error "Internal error. No data based to short_hash."
- exit 2
- }
+[[ $# -eq 1 ]] || {
+ openim::log::error "Internal error. No data based to short_hash."
+ exit 2
+}
- local short_hash
- if which md5 >/dev/null 2>&1; then
- short_hash=$(md5 -q -s "$1")
- else
- short_hash=$(echo -n "$1" | md5sum)
- fi
- echo "${short_hash:0:10}"
+local short_hash
+if which md5 >/dev/null 2>&1; then
+ short_hash=$(md5 -q -s "$1")
+else
+ short_hash=$(echo -n "$1" | md5sum)
+fi
+echo "${short_hash:0:10}"
}
# Pedantically kill, wait-on and remove a container. The -f -v options
@@ -376,15 +377,15 @@ function openim::build::short_hash() {
# container, wait to ensure it's stopped, then try the remove. This is
# a workaround for bug https://github.com/docker/docker/issues/3968.
function openim::build::destroy_container() {
- "${DOCKER[@]}" kill "$1" >/dev/null 2>&1 || true
- if [[ $("${DOCKER[@]}" version --format '{{.Server.Version}}') = 17.06.0* ]]; then
- # Workaround https://github.com/moby/moby/issues/33948.
- # TODO: remove when 17.06.0 is not relevant anymore
- DOCKER_API_VERSION=v1.29 "${DOCKER[@]}" wait "$1" >/dev/null 2>&1 || true
- else
- "${DOCKER[@]}" wait "$1" >/dev/null 2>&1 || true
- fi
- "${DOCKER[@]}" rm -f -v "$1" >/dev/null 2>&1 || true
+"${DOCKER[@]}" kill "$1" >/dev/null 2>&1 || true
+if [[ $("${DOCKER[@]}" version --format '{{.Server.Version}}') = 17.06.0* ]]; then
+ # Workaround https://github.com/moby/moby/issues/33948.
+ # TODO: remove when 17.06.0 is not relevant anymore
+ DOCKER_API_VERSION=v1.29 "${DOCKER[@]}" wait "$1" >/dev/null 2>&1 || true
+else
+ "${DOCKER[@]}" wait "$1" >/dev/null 2>&1 || true
+fi
+"${DOCKER[@]}" rm -f -v "$1" >/dev/null 2>&1 || true
}
# ---------------------------------------------------------------------------
@@ -392,47 +393,47 @@ function openim::build::destroy_container() {
function openim::build::clean() {
- if openim::build::has_docker ; then
- openim::build::docker_delete_old_containers "${OPENIM_BUILD_CONTAINER_NAME_BASE}"
- openim::build::docker_delete_old_containers "${OPENIM_RSYNC_CONTAINER_NAME_BASE}"
- openim::build::docker_delete_old_containers "${OPENIM_DATA_CONTAINER_NAME_BASE}"
- openim::build::docker_delete_old_images "${OPENIM_BUILD_IMAGE_REPO}" "${OPENIM_BUILD_IMAGE_TAG_BASE}"
-
- V=2 openim::log::status "Cleaning all untagged docker images"
- "${DOCKER[@]}" rmi "$("${DOCKER[@]}" images -q --filter 'dangling=true')" 2> /dev/null || true
- fi
-
- if [[ -d "${LOCAL_OUTPUT_ROOT}" ]]; then
- openim::log::status "Removing _output directory"
- rm -rf "${LOCAL_OUTPUT_ROOT}"
- fi
+if openim::build::has_docker ; then
+ openim::build::docker_delete_old_containers "${OPENIM_BUILD_CONTAINER_NAME_BASE}"
+ openim::build::docker_delete_old_containers "${OPENIM_RSYNC_CONTAINER_NAME_BASE}"
+ openim::build::docker_delete_old_containers "${OPENIM_DATA_CONTAINER_NAME_BASE}"
+ openim::build::docker_delete_old_images "${OPENIM_BUILD_IMAGE_REPO}" "${OPENIM_BUILD_IMAGE_TAG_BASE}"
+
+ V=2 openim::log::status "Cleaning all untagged docker images"
+ "${DOCKER[@]}" rmi "$("${DOCKER[@]}" images -q --filter 'dangling=true')" 2> /dev/null || true
+fi
+
+if [[ -d "${LOCAL_OUTPUT_ROOT}" ]]; then
+ openim::log::status "Removing _output directory"
+ rm -rf "${LOCAL_OUTPUT_ROOT}"
+fi
}
# Set up the context directory for the openim-build image and build it.
function openim::build::build_image() {
- mkdir -p "${LOCAL_OUTPUT_BUILD_CONTEXT}"
- # Make sure the context directory owned by the right user for syncing sources to container.
- chown -R "${USER_ID}":"${GROUP_ID}" "${LOCAL_OUTPUT_BUILD_CONTEXT}"
+mkdir -p "${LOCAL_OUTPUT_BUILD_CONTEXT}"
+# Make sure the context directory is owned by the right user for syncing sources to the container.
+chown -R "${USER_ID}":"${GROUP_ID}" "${LOCAL_OUTPUT_BUILD_CONTEXT}"
- cp /etc/localtime "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
+cp /etc/localtime "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
- cp ""${OPENIM_ROOT}"/build/build-image/Dockerfile" "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
- cp ""${OPENIM_ROOT}"/build/build-image/rsyncd.sh" "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
- dd if=/dev/urandom bs=512 count=1 2>/dev/null | LC_ALL=C tr -dc 'A-Za-z0-9' | dd bs=32 count=1 2>/dev/null > "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
- chmod go= "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
+cp "${OPENIM_ROOT}/build/build-image/Dockerfile" "${LOCAL_OUTPUT_BUILD_CONTEXT}/Dockerfile"
+cp "${OPENIM_ROOT}/build/build-image/rsyncd.sh" "${LOCAL_OUTPUT_BUILD_CONTEXT}/"
+dd if=/dev/urandom bs=512 count=1 2>/dev/null | LC_ALL=C tr -dc 'A-Za-z0-9' | dd bs=32 count=1 2>/dev/null > "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
+chmod go= "${LOCAL_OUTPUT_BUILD_CONTEXT}/rsyncd.password"
- openim::build::update_dockerfile
- openim::build::set_proxy
- openim::build::docker_build "${OPENIM_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false'
+openim::build::update_dockerfile
+openim::build::set_proxy
+openim::build::docker_build "${OPENIM_BUILD_IMAGE}" "${LOCAL_OUTPUT_BUILD_CONTEXT}" 'false'
- # Clean up old versions of everything
- openim::build::docker_delete_old_containers "${OPENIM_BUILD_CONTAINER_NAME_BASE}" "${OPENIM_BUILD_CONTAINER_NAME}"
- openim::build::docker_delete_old_containers "${OPENIM_RSYNC_CONTAINER_NAME_BASE}" "${OPENIM_RSYNC_CONTAINER_NAME}"
- openim::build::docker_delete_old_containers "${OPENIM_DATA_CONTAINER_NAME_BASE}" "${OPENIM_DATA_CONTAINER_NAME}"
- openim::build::docker_delete_old_images "${OPENIM_BUILD_IMAGE_REPO}" "${OPENIM_BUILD_IMAGE_TAG_BASE}" "${OPENIM_BUILD_IMAGE_TAG}"
+# Clean up old versions of everything
+openim::build::docker_delete_old_containers "${OPENIM_BUILD_CONTAINER_NAME_BASE}" "${OPENIM_BUILD_CONTAINER_NAME}"
+openim::build::docker_delete_old_containers "${OPENIM_RSYNC_CONTAINER_NAME_BASE}" "${OPENIM_RSYNC_CONTAINER_NAME}"
+openim::build::docker_delete_old_containers "${OPENIM_DATA_CONTAINER_NAME_BASE}" "${OPENIM_DATA_CONTAINER_NAME}"
+openim::build::docker_delete_old_images "${OPENIM_BUILD_IMAGE_REPO}" "${OPENIM_BUILD_IMAGE_TAG_BASE}" "${OPENIM_BUILD_IMAGE_TAG}"
- openim::build::ensure_data_container
- openim::build::sync_to_container
+openim::build::ensure_data_container
+openim::build::sync_to_container
}
# Build a docker image from a Dockerfile.
@@ -440,14 +441,14 @@ function openim::build::build_image() {
# $2 is the location of the "context" directory, with the Dockerfile at the root.
# $3 is the value to set the --pull flag for docker build; true by default
function openim::build::docker_build() {
- local -r image=$1
- local -r context_dir=$2
- local -r pull="${3:-true}"
- local -ra build_cmd=("${DOCKER[@]}" build -t "${image}" "--pull=${pull}" "${context_dir}")
-
- openim::log::status "Building Docker image ${image}"
- local docker_output
- docker_output=$("${build_cmd[@]}" 2>&1) || {
+local -r image=$1
+local -r context_dir=$2
+local -r pull="${3:-true}"
+local -ra build_cmd=("${DOCKER[@]}" build -t "${image}" "--pull=${pull}" "${context_dir}")
+
+openim::log::status "Building Docker image ${image}"
+local docker_output
+docker_output=$("${build_cmd[@]}" 2>&1) || {
cat <&2
+++ Docker build command failed for ${image}
@@ -458,61 +459,61 @@ To retry manually, run:
${build_cmd[*]}
EOF
- return 1
- }
+ return 1
+}
}
function openim::build::ensure_data_container() {
- # If the data container exists AND exited successfully, we can use it.
- # Otherwise nuke it and start over.
- local ret=0
- local code=0
-
- code=$(docker inspect \
- -f '{{.State.ExitCode}}' \
- "${OPENIM_DATA_CONTAINER_NAME}" 2>/dev/null) || ret=$?
- if [[ "${ret}" == 0 && "${code}" != 0 ]]; then
- openim::build::destroy_container "${OPENIM_DATA_CONTAINER_NAME}"
- ret=1
- fi
- if [[ "${ret}" != 0 ]]; then
- openim::log::status "Creating data container ${OPENIM_DATA_CONTAINER_NAME}"
- # We have to ensure the directory exists, or else the docker run will
- # create it as root.
- mkdir -p "${LOCAL_OUTPUT_GOPATH}"
- # We want this to run as root to be able to chown, so non-root users can
- # later use the result as a data container. This run both creates the data
- # container and chowns the GOPATH.
- #
- # The data container creates volumes for all of the directories that store
- # intermediates for the Go build. This enables incremental builds across
- # Docker sessions. The *_cgo paths are re-compiled versions of the go std
- # libraries for true static building.
- local -ra docker_cmd=(
- "${DOCKER[@]}" run
- --volume "${REMOTE_ROOT}" # white-out the whole output dir
- --volume /usr/local/go/pkg/linux_386_cgo
- --volume /usr/local/go/pkg/linux_amd64_cgo
- --volume /usr/local/go/pkg/linux_arm_cgo
- --volume /usr/local/go/pkg/linux_arm64_cgo
- --volume /usr/local/go/pkg/linux_ppc64le_cgo
- --volume /usr/local/go/pkg/darwin_amd64_cgo
- --volume /usr/local/go/pkg/darwin_386_cgo
- --volume /usr/local/go/pkg/windows_amd64_cgo
- --volume /usr/local/go/pkg/windows_386_cgo
- --name "${OPENIM_DATA_CONTAINER_NAME}"
- --hostname "${HOSTNAME}"
- "${OPENIM_BUILD_IMAGE}"
- chown -R "${USER_ID}":"${GROUP_ID}"
- "${REMOTE_ROOT}"
- /usr/local/go/pkg/
- )
- "${docker_cmd[@]}"
- fi
+# If the data container exists AND exited successfully, we can use it.
+# Otherwise nuke it and start over.
+local ret=0
+local code=0
+
+code=$(docker inspect \
+ -f '{{.State.ExitCode}}' \
+"${OPENIM_DATA_CONTAINER_NAME}" 2>/dev/null) || ret=$?
+if [[ "${ret}" == 0 && "${code}" != 0 ]]; then
+ openim::build::destroy_container "${OPENIM_DATA_CONTAINER_NAME}"
+ ret=1
+fi
+if [[ "${ret}" != 0 ]]; then
+ openim::log::status "Creating data container ${OPENIM_DATA_CONTAINER_NAME}"
+ # We have to ensure the directory exists, or else the docker run will
+ # create it as root.
+ mkdir -p "${LOCAL_OUTPUT_GOPATH}"
+ # We want this to run as root to be able to chown, so non-root users can
+ # later use the result as a data container. This run both creates the data
+ # container and chowns the GOPATH.
+ #
+ # The data container creates volumes for all of the directories that store
+ # intermediates for the Go build. This enables incremental builds across
+ # Docker sessions. The *_cgo paths are re-compiled versions of the go std
+ # libraries for true static building.
+ local -ra docker_cmd=(
+ "${DOCKER[@]}" run
+ --volume "${REMOTE_ROOT}" # white-out the whole output dir
+ --volume /usr/local/go/pkg/linux_386_cgo
+ --volume /usr/local/go/pkg/linux_amd64_cgo
+ --volume /usr/local/go/pkg/linux_arm_cgo
+ --volume /usr/local/go/pkg/linux_arm64_cgo
+ --volume /usr/local/go/pkg/linux_ppc64le_cgo
+ --volume /usr/local/go/pkg/darwin_amd64_cgo
+ --volume /usr/local/go/pkg/darwin_386_cgo
+ --volume /usr/local/go/pkg/windows_amd64_cgo
+ --volume /usr/local/go/pkg/windows_386_cgo
+ --name "${OPENIM_DATA_CONTAINER_NAME}"
+ --hostname "${HOSTNAME}"
+ "${OPENIM_BUILD_IMAGE}"
+ chown -R "${USER_ID}":"${GROUP_ID}"
+ "${REMOTE_ROOT}"
+ /usr/local/go/pkg/
+ )
+ "${docker_cmd[@]}"
+fi
}
# Build all openim commands.
function openim::build::build_command() {
- openim::log::status "Running build command..."
- make -C "${OPENIM_ROOT}" multiarch
+openim::log::status "Running build command..."
+make -C "${OPENIM_ROOT}" multiarch
}
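Besides the re-indentation, the substantive change in `common.sh` is the reshaped `_output` layout: built binaries now live under `_output/bin/platforms` and a separate `_output/bin/tools` path is introduced. A quick way to confirm the derived constants, assuming `common.sh` can be sourced standalone from the repository root:

```bash
source scripts/common.sh
echo "${LOCAL_OUTPUT_BINPATH}"        # .../_output/bin/platforms
echo "${LOCAL_OUTPUT_BINTOOLSPATH}"   # .../_output/bin/tools
echo "${LOCAL_OUTPUT_IMAGE_STAGING}"  # .../_output/images
```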
diff --git a/scripts/coverage.sh b/scripts/coverage.sh
index ae5283671..e5cef0b5d 100755
--- a/scripts/coverage.sh
+++ b/scripts/coverage.sh
@@ -19,11 +19,11 @@
echo "mode: atomic" > coverage.txt
for d in $(find ./* -maxdepth 10 -type d); do
- if ls $d/*.go &> /dev/null; then
- go test -coverprofile=profile.out -covermode=atomic $d
- if [ -f profile.out ]; then
- cat profile.out | grep -v "mode: " >> /tmp/coverage.txt
- rm profile.out
- fi
+ if ls $d/*.go &> /dev/null; then
+ go test -coverprofile=profile.out -covermode=atomic $d
+ if [ -f profile.out ]; then
+ cat profile.out | grep -v "mode: " >> /tmp/coverage.txt
+ rm profile.out
fi
+ fi
done
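The loop above collects one atomic coverage profile per package and concatenates them. On Go 1.10 and later the same aggregate profile can be produced in a single pass, which may be a simpler drop-in if per-package output is not needed:

```bash
# Single-pass alternative to the per-directory loop (Go 1.10+).
go test -covermode=atomic -coverprofile=coverage.txt ./...
```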
diff --git a/scripts/demo.sh b/scripts/demo.sh
index 5f8a2023a..4b877b9ed 100755
--- a/scripts/demo.sh
+++ b/scripts/demo.sh
@@ -15,16 +15,16 @@
if ! command -v pv &> /dev/null
then
- echo "pv not found, installing..."
- if [ -e /etc/debian_version ]; then
- sudo apt-get update
- sudo apt-get install -y pv
+ echo "pv not found, installing..."
+ if [ -e /etc/debian_version ]; then
+ sudo apt-get update
+ sudo apt-get install -y pv
elif [ -e /etc/redhat-release ]; then
- sudo yum install -y pv
- else
- echo "Unsupported OS, please install pv manually."
- exit 1
- fi
+ sudo yum install -y pv
+ else
+ echo "Unsupported OS, please install pv manually."
+ exit 1
+ fi
fi
readonly t_reset=$(tput sgr0)
@@ -42,8 +42,8 @@ openim::util::ensure-bash-version
trap 'openim::util::onCtrlC' INT
function openim::util::onCtrlC() {
- echo -e "\n${t_reset}Ctrl+C Press it. It's exiting openim make init..."
- exit 0
+ echo -e "\n${t_reset}Ctrl+C Press it. It's exiting openim make init..."
+ exit 0
}
openim::util::desc "========> Welcome to the OpenIM Demo"
diff --git a/scripts/docker-check-service.sh b/scripts/docker-check-service.sh
index adf383436..30ca89b5a 100755
--- a/scripts/docker-check-service.sh
+++ b/scripts/docker-check-service.sh
@@ -22,61 +22,61 @@ cd "$OPENIM_ROOT"
openim::util::check_docker_and_compose_versions
progress() {
- local _main_pid="$1"
- local _length=20
- local _ratio=1
- local _colors=("31" "32" "33" "34" "35" "36" "37")
- local _wave=("▁" "▂" "▃" "▄" "▅" "▆" "▇" "█" "▇" "▆" "▅" "▄" "▃" "▂")
-
- while pgrep -P "$_main_pid" &> /dev/null; do
- local _mark='>'
- local _progress_bar=
- for ((i = 1; i <= _length; i++)); do
- if ((i > _ratio)); then
- _mark='-'
- fi
- _progress_bar="${_progress_bar}${_mark}"
- done
-
- local _color_idx=$((_ratio % ${#_colors[@]}))
- local _color_prefix="\033[${_colors[_color_idx]}m"
- local _reset_suffix="\033[0m"
-
- local _wave_idx=$((_ratio % ${#_wave[@]}))
- local _wave_progress=${_wave[_wave_idx]}
-
- printf "Progress: ${_color_prefix}${_progress_bar}${_reset_suffix} ${_wave_progress} Countdown: %2ds \r" "$_countdown"
- ((_ratio++))
- ((_ratio > _length)) && _ratio=1
- sleep 0.1
+ local _main_pid="$1"
+ local _length=20
+ local _ratio=1
+ local _colors=("31" "32" "33" "34" "35" "36" "37")
+ local _wave=("▁" "▂" "▃" "▄" "▅" "▆" "▇" "█" "▇" "▆" "▅" "▄" "▃" "▂")
+
+ while pgrep -P "$_main_pid" &> /dev/null; do
+ local _mark='>'
+ local _progress_bar=
+ for ((i = 1; i <= _length; i++)); do
+ if ((i > _ratio)); then
+ _mark='-'
+ fi
+ _progress_bar="${_progress_bar}${_mark}"
done
+
+ local _color_idx=$((_ratio % ${#_colors[@]}))
+ local _color_prefix="\033[${_colors[_color_idx]}m"
+ local _reset_suffix="\033[0m"
+
+ local _wave_idx=$((_ratio % ${#_wave[@]}))
+ local _wave_progress=${_wave[_wave_idx]}
+
+ printf "Progress: ${_color_prefix}${_progress_bar}${_reset_suffix} ${_wave_progress} Countdown: %2ds \r" "$_countdown"
+ ((_ratio++))
+ ((_ratio > _length)) && _ratio=1
+ sleep 0.1
+ done
}
countdown() {
- local _duration="$1"
-
- for ((i = _duration; i >= 1; i--)); do
- printf "\rCountdown: %2ds \r" "$i"
- sleep 1
- done
- printf "\rCountdown: %2ds \r" "$_duration"
+ local _duration="$1"
+
+ for ((i = _duration; i >= 1; i--)); do
+ printf "\rCountdown: %2ds \r" "$i"
+ sleep 1
+ done
+ printf "\rCountdown: %2ds \r" "$_duration"
}
do_sth() {
- echo "++++++++++++++++++++++++"
- progress $$ &
- local _progress_pid=$!
- local _countdown=30
-
- countdown "$_countdown" &
- local _countdown_pid=$!
-
- sleep 30
-
- kill "$_progress_pid" "$_countdown_pid"
-
- "${SCRIPTS_ROOT}/check-all.sh"
- echo -e "${PURPLE_PREFIX}=========> Check docker-compose status ${COLOR_SUFFIX} \n"
+ echo "++++++++++++++++++++++++"
+ progress $$ &
+ local _progress_pid=$!
+ local _countdown=30
+
+ countdown "$_countdown" &
+ local _countdown_pid=$!
+
+ sleep 30
+
+ kill "$_progress_pid" "$_countdown_pid"
+
+ "${SCRIPTS_ROOT}/check-all.sh"
+ echo -e "${PURPLE_PREFIX}=========> Check docker-compose status ${COLOR_SUFFIX} \n"
}
set -e
diff --git a/scripts/docker-start-all.sh b/scripts/docker-start-all.sh
index 6ad277815..162655553 100755
--- a/scripts/docker-start-all.sh
+++ b/scripts/docker-start-all.sh
@@ -34,4 +34,4 @@ sleep 5
"${OPENIM_ROOT}"/scripts/check-all.sh
-tail -f ${LOG_FILE}
\ No newline at end of file
+tail -f ${LOG_FILE}
diff --git a/scripts/ensure-tag.sh b/scripts/ensure-tag.sh
index c6fea7ca0..5fedf7019 100755
--- a/scripts/ensure-tag.sh
+++ b/scripts/ensure-tag.sh
@@ -14,11 +14,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+OPENIM_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
+
version="${VERSION}"
if [ "${version}" == "" ];then
- version=v`gsemver bump`
+ version=v$(${OPENIM_ROOT}/_output/tools/gsemver bump)
fi
-if [ -z "`git tag -l ${version}`" ];then
+if [ -z "$(git tag -l ${version})" ];then
git tag -a -m "release version ${version}" ${version}
fi
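After this change the version is derived from the vendored `gsemver` under `_output/tools` instead of whatever `gsemver` happens to be on the `PATH`. Typical invocations (the explicit version below is a placeholder):

```bash
VERSION=v3.5.0 ./scripts/ensure-tag.sh   # tag an explicit, placeholder version if it does not exist
./scripts/ensure-tag.sh                  # let _output/tools/gsemver bump pick the next version
```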
diff --git a/scripts/gen-swagger-docs.sh b/scripts/gen-swagger-docs.sh
index ccf5eaeaa..68410e79c 100755
--- a/scripts/gen-swagger-docs.sh
+++ b/scripts/gen-swagger-docs.sh
@@ -67,7 +67,7 @@ echo -e "=== any\nRepresents an untyped JSON map - see the description of the fi
asciidoctor definitions.adoc
asciidoctor paths.adoc
-cp ${OPENIM_OUTPUT_TMP}/definitions.html ${OPENIM_OUTPUT_TMP}/_output/
-cp ${OPENIM_OUTPUT_TMP}/paths.html ${OPENIM_OUTPUT_TMP}/_output/operations.html
+cp "$OPENIM_OUTPUT_TMP/definitions.html" "$OPENIM_OUTPUT_TMP/_output/"
+cp "$OPENIM_OUTPUT_TMP/paths.html" "$OPENIM_OUTPUT_TMP/_output/operations.html"
success "SUCCESS"
\ No newline at end of file
diff --git a/scripts/genconfig.sh b/scripts/genconfig.sh
index 2371edc9d..498b0b908 100755
--- a/scripts/genconfig.sh
+++ b/scripts/genconfig.sh
@@ -25,20 +25,12 @@ OPENIM_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
source "${OPENIM_ROOT}/scripts/lib/init.sh"
if [ $# -ne 2 ];then
- openim::log::error "Usage: scripts/genconfig.sh scripts/environment.sh configs/openim-api.yaml"
- exit 1
+ openim::log::error "Usage: scripts/genconfig.sh scripts/environment.sh configs/config.yaml"
+ exit 1
fi
-openim::util::require-dig
-result=$?
-if [ $result -ne 0 ]; then
- openim::log::info "Please install 'dig' to use this feature."
- openim::log::info "Installation instructions:"
- openim::log::info " For Ubuntu/Debian: sudo apt-get install dnsutils"
- openim::log::info " For CentOS/RedHat: sudo yum install bind-utils"
- openim::log::info " For macOS: 'dig' should be preinstalled. If missing, try: brew install bind"
- openim::log::info " For Windows: Install BIND9 tools from https://www.isc.org/download/"
- openim::log::error_exit "Error: 'dig' command is required but not installed."
+if [ -z "${OPENIM_IP}" ]; then
+ openim::util::require-dig
fi
source "${env_file}"
@@ -48,17 +40,17 @@ declare -A envs
set +u
for env in $(sed -n 's/^[^#].*${\(.*\)}.*/\1/p' ${template_file})
do
- if [ -z "$(eval echo \$${env})" ];then
- openim::log::error "environment variable '${env}' not set"
- missing=true
- fi
+ if [ -z "$(eval echo \$${env})" ];then
+ openim::log::error "environment variable '${env}' not set"
+ missing=true
+ fi
done
if [ "${missing}" ];then
- openim::log::error 'You may run `source scripts/environment.sh` to set these environment'
- exit 1
+ openim::log::error "You may run 'source scripts/environment.sh' to set these environment"
+ exit 1
fi
eval "cat << EOF
$(cat ${template_file})
-EOF"
+EOF"
\ No newline at end of file
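With this change `dig` is only required when `OPENIM_IP` is not already set, and the usage string now points at `configs/config.yaml`. A sketch of a typical render, assuming the output is redirected to the runtime config path (the IP below is a placeholder):

```bash
export OPENIM_IP="203.0.113.10"   # placeholder address; skips the dig requirement
source scripts/environment.sh     # make sure the template variables are set
./scripts/genconfig.sh scripts/environment.sh configs/config.yaml > config/config.yaml
```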
diff --git a/scripts/gendoc.sh b/scripts/gendoc.sh
index c948fcdf9..ece090190 100755
--- a/scripts/gendoc.sh
+++ b/scripts/gendoc.sh
@@ -14,43 +14,43 @@
# limitations under the License.
DEFAULT_DIRS=(
- "pkg"
- "internal/pkg"
+ "pkg"
+ "internal/pkg"
)
BASE_URL="github.com/openimsdk/open-im-server"
usage() {
- echo "Usage: $0 [OPTIONS]"
- echo
- echo "This script iterates over directories and generates doc.go if necessary."
- echo "By default, it processes 'pkg' and 'internal/pkg' directories."
- echo
- echo "Options:"
- echo " -d DIRS, --dirs DIRS Specify the directories to be processed, separated by commas. E.g., 'pkg,internal/pkg'."
- echo " -u URL, --url URL Set the base URL for the import path. Default is '$BASE_URL'."
- echo " -h, --help Show this help message."
- echo
+ echo "Usage: $0 [OPTIONS]"
+ echo
+ echo "This script iterates over directories and generates doc.go if necessary."
+ echo "By default, it processes 'pkg' and 'internal/pkg' directories."
+ echo
+ echo "Options:"
+ echo " -d DIRS, --dirs DIRS Specify the directories to be processed, separated by commas. E.g., 'pkg,internal/pkg'."
+ echo " -u URL, --url URL Set the base URL for the import path. Default is '$BASE_URL'."
+ echo " -h, --help Show this help message."
+ echo
}
process_dir() {
- local dir=$1
- local base_url=$2
-
- for d in $(find $dir -type d); do
- if [ ! -f $d/doc.go ]; then
- if ls $d/*.go > /dev/null 2>&1; then
- echo $d/doc.go
- echo "package $(basename $d) // import \"$base_url/$d\"" > $d/doc.go
- fi
- fi
- done
+ local dir=$1
+ local base_url=$2
+
+ for d in $(find $dir -type d); do
+ if [ ! -f $d/doc.go ]; then
+ if ls $d/*.go > /dev/null 2>&1; then
+ echo $d/doc.go
+ echo "package $(basename $d) // import \"$base_url/$d\"" > $d/doc.go
+ fi
+ fi
+ done
}
while [[ $# -gt 0 ]]; do
- key="$1"
-
- case $key in
- -d|--dirs)
+ key="$1"
+
+ case $key in
+ -d|--dirs)
IFS=',' read -ra DIRS <<< "$2"
shift # shift past argument
shift # shift past value
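For each directory that contains Go files but no `doc.go`, `process_dir` writes a package clause with a canonical import comment rooted at `$BASE_URL`. A sketch of the effect (the `pkg/common` path is only an example):

```bash
./scripts/gendoc.sh -d pkg,internal/pkg -u github.com/openimsdk/open-im-server
# pkg/common/doc.go would then contain, e.g.:
#   package common // import "github.com/openimsdk/open-im-server/pkg/common"
```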
diff --git a/scripts/githooks/commit-msg.sh b/scripts/githooks/commit-msg.sh
index efff13fd0..d2d96645b 100644
--- a/scripts/githooks/commit-msg.sh
+++ b/scripts/githooks/commit-msg.sh
@@ -34,15 +34,15 @@ RED="\e[31m"
ENDCOLOR="\e[0m"
printMessage() {
- printf "${YELLOW}OpenIM : $1${ENDCOLOR}\n"
+ printf "${YELLOW}OpenIM : $1${ENDCOLOR}\n"
}
printSuccess() {
- printf "${GREEN}OpenIM : $1${ENDCOLOR}\n"
+ printf "${GREEN}OpenIM : $1${ENDCOLOR}\n"
}
printError() {
- printf "${RED}OpenIM : $1${ENDCOLOR}\n"
+ printf "${RED}OpenIM : $1${ENDCOLOR}\n"
}
printMessage "Running the OpenIM commit-msg hook."
@@ -50,9 +50,9 @@ printMessage "Running the OpenIM commit-msg hook."
# This example catches duplicate Signed-off-by lines.
test "" = "$(grep '^Signed-off-by: ' "$1" |
- sort | uniq -c | sed -e '/^[ ]*1[ ]/d')" || {
- echo >&2 Duplicate Signed-off-by lines.
- exit 1
+sort | uniq -c | sed -e '/^[ ]*1[ ]/d')" || {
+echo >&2 Duplicate Signed-off-by lines.
+exit 1
}
# TODO: go-gitlint dir set
@@ -60,21 +60,21 @@ OPENIM_ROOT=$(dirname "${BASH_SOURCE[0]}")/../..
GITLINT_DIR="$OPENIM_ROOT/_output/tools/go-gitlint"
$GITLINT_DIR \
- --msg-file=$1 \
- --subject-regex="^(build|chore|ci|docs|feat|feature|fix|perf|refactor|revert|style|bot|test)(.*)?:\s?.*" \
- --subject-maxlen=150 \
- --subject-minlen=10 \
- --body-regex=".*" \
- --max-parents=1
+--msg-file=$1 \
+--subject-regex="^(build|chore|ci|docs|feat|feature|fix|perf|refactor|revert|style|bot|test)(.*)?:\s?.*" \
+--subject-maxlen=150 \
+--subject-minlen=10 \
+--body-regex=".*" \
+--max-parents=1
if [ $? -ne 0 ]
then
- if ! command -v $GITLINT_DIR &>/dev/null; then
- printError "$GITLINT_DIR not found. Please run 'make tools' OR 'make tools.verify.go-gitlint' make verto install it."
- fi
- printError "Please fix your commit message to match kubecub coding standards"
- printError "https://gist.github.com/cubxxw/126b72104ac0b0ca484c9db09c3e5694#file-githook-md"
- exit 1
+if ! command -v $GITLINT_DIR &>/dev/null; then
+  printError "$GITLINT_DIR not found. Please run 'make tools' OR 'make tools.verify.go-gitlint' to install it."
+fi
+printError "Please fix your commit message to match kubecub coding standards"
+printError "https://gist.github.com/cubxxw/126b72104ac0b0ca484c9db09c3e5694#file-githook-md"
+exit 1
fi
### Add Sign-off-by line to the end of the commit message
@@ -88,5 +88,5 @@ SIGNED_OFF_BY_EXISTS=$?
# Add "Signed-off-by" line if it doesn't exist
if [ $SIGNED_OFF_BY_EXISTS -ne 0 ]; then
- echo -e "\nSigned-off-by: $NAME <$EMAIL>" >> "$1"
+echo -e "\nSigned-off-by: $NAME <$EMAIL>" >> "$1"
fi
\ No newline at end of file
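
As a rough guide to what the go-gitlint flags above accept, a commit created as below should pass the subject checks and already carry the `Signed-off-by` trailer the hook would otherwise append (the message text is only an example):

```bash
# Subject starts with an allowed type prefix, is between 10 and 150 characters,
# and -s adds the "Signed-off-by:" trailer.
git commit -s -m "feat: add an example message accepted by the commit-msg hook"

# A subject without a recognised type prefix, e.g. "update stuff",
# would be rejected by the --subject-regex check.
```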
diff --git a/scripts/githooks/pre-commit.sh b/scripts/githooks/pre-commit.sh
index 7fd21593c..cc756c9ad 100644
--- a/scripts/githooks/pre-commit.sh
+++ b/scripts/githooks/pre-commit.sh
@@ -34,15 +34,15 @@ RED="\e[31m"
ENDCOLOR="\e[0m"
printMessage() {
- printf "${YELLOW}openim : $1${ENDCOLOR}\n"
+ printf "${YELLOW}openim : $1${ENDCOLOR}\n"
}
printSuccess() {
- printf "${GREEN}openim : $1${ENDCOLOR}\n"
+ printf "${GREEN}openim : $1${ENDCOLOR}\n"
}
printError() {
- printf "${RED}openim : $1${ENDCOLOR}\n"
+ printf "${RED}openim : $1${ENDCOLOR}\n"
}
printMessage "Running local openim pre-commit hook."
@@ -55,9 +55,9 @@ limit=${GIT_FILE_SIZE_LIMIT:-2000000} # Default 2MB
limitInMB=$(( $limit / 1000000 ))
function file_too_large(){
- filename=$0
- filesize=$(( $1 / 2**20 ))
-
+ filename=$0
+ filesize=$(( $1 / 2**20 ))
+
cat < /dev/null 2>&1
then
- against=HEAD
+ against=HEAD
else
- against="$empty_tree"
+ against="$empty_tree"
fi
# Set split so that for loop below can handle spaces in file names by splitting on line breaks
@@ -104,7 +104,7 @@ fi
if [[ ! $local_branch =~ $valid_branch_regex ]]
then
- printError "There is something wrong with your branch name. Branch names in this project must adhere to this contract: $valid_branch_regex.
+ printError "There is something wrong with your branch name. Branch names in this project must adhere to this contract: $valid_branch_regex.
Your commit will be rejected. You should rename your branch to a valid name(feat/name OR bug/name) and try again."
printError "For more on this, read on: https://gist.github.com/cubxxw/126b72104ac0b0ca484c9db09c3e5694"
exit 1
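
The hook reads its file-size ceiling from `GIT_FILE_SIZE_LIMIT` (2000000 bytes, roughly 2 MB, by default), so the limit can be raised for a single commit without editing the script; for example:

```bash
# Allow files up to ~10 MB for this one commit; the hook falls back to the
# 2000000-byte default whenever the variable is unset.
GIT_FILE_SIZE_LIMIT=10000000 git commit -m "feat: add a larger binary fixture"
```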
diff --git a/scripts/githooks/pre-push.sh b/scripts/githooks/pre-push.sh
index 2985313b7..9bd938915 100644
--- a/scripts/githooks/pre-push.sh
+++ b/scripts/githooks/pre-push.sh
@@ -25,20 +25,20 @@ local_branch="$(git rev-parse --abbrev-ref HEAD)"
valid_branch_regex="^(main|master|develop|release(-[a-zA-Z0-9._-]+)?)$|(feature|feat|openim|hotfix|test|bug|ci|cicd|style|)\/[a-z0-9._-]+$|^HEAD$"
printMessage() {
- printf "${YELLOW}OpenIM : $1${ENDCOLOR}\n"
+ printf "${YELLOW}OpenIM : $1${ENDCOLOR}\n"
}
printSuccess() {
- printf "${GREEN}OpenIM : $1${ENDCOLOR}\n"
+ printf "${GREEN}OpenIM : $1${ENDCOLOR}\n"
}
printError() {
- printf "${RED}OpenIM : $1${ENDCOLOR}\n"
+ printf "${RED}OpenIM : $1${ENDCOLOR}\n"
}
printMessage "Running local OpenIM pre-push hook."
-if [[ `git status --porcelain` ]]; then
+if [[ $(git status --porcelain) ]]; then
printError "This scripts needs to run against committed code only. Please commit or stash you changes."
exit 1
fi
@@ -101,8 +101,8 @@ print_color "Deleted Files: ${deleted_files}" "${BACKGROUND_GREEN}"
if [[ ! $local_branch =~ $valid_branch_regex ]]
then
- printError "There is something wrong with your branch name. Branch names in this project must adhere to this contract: $valid_branch_regex.
-Your commit will be rejected. You should rename your branch to a valid name(feat/name OR bug/name) and try again."
+ printError "There is something wrong with your branch name. Branch names in this project must adhere to this contract: $valid_branch_regex.
+Your commit will be rejected. You should rename your branch to a valid name(feat/name OR fix/name) and try again."
printError "For more on this, read on: https://gist.github.com/cubxxw/126b72104ac0b0ca484c9db09c3e5694"
exit 1
fi
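
To see how `valid_branch_regex` behaves, candidate branch names can be tested directly in bash before pushing; the names below are only examples:

```bash
valid_branch_regex="^(main|master|develop|release(-[a-zA-Z0-9._-]+)?)$|(feature|feat|openim|hotfix|test|bug|ci|cicd|style|)\/[a-z0-9._-]+$|^HEAD$"

for name in main feat/add-metrics bug/fix-login Feature/BadName; do
  if [[ $name =~ $valid_branch_regex ]]; then
    echo "accepted: $name"
  else
    echo "rejected: $name"
  fi
done
# accepted: main
# accepted: feat/add-metrics
# accepted: bug/fix-login
# rejected: Feature/BadName   (uppercase after the slash does not match)
```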
diff --git a/scripts/init-config.sh b/scripts/init-config.sh
index a4672c62d..82eefbb54 100755
--- a/scripts/init-config.sh
+++ b/scripts/init-config.sh
@@ -13,42 +13,244 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-# This script automatically initializes the various configuration files
-# Read: https://github.com/openimsdk/open-im-server/blob/main/docs/contrib/init-config.md
+# This script automatically initializes various configuration files and can generate example files.
set -o errexit
set -o nounset
set -o pipefail
+# Root directory of the OpenIM project
OPENIM_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
+# Source initialization script
source "${OPENIM_ROOT}/scripts/lib/init.sh"
-# (en: Define a profile array that contains the name path of the profile to be generated.)
+# Default environment file
readonly ENV_FILE=${ENV_FILE:-"${OPENIM_ROOT}/scripts/install/environment.sh"}
-# (en: Defines an associative array where the keys are the template files and the values are the corresponding output files.)
+# Templates for configuration files
declare -A TEMPLATES=(
- ["${OPENIM_ROOT}/deployments/templates/env_template.yaml"]="${OPENIM_ROOT}/.env"
- ["${OPENIM_ROOT}/deployments/templates/openim.yaml"]="${OPENIM_ROOT}/config/config.yaml"
+ ["${OPENIM_ROOT}/deployments/templates/env-template.yaml"]="${OPENIM_ROOT}/.env"
+ ["${OPENIM_ROOT}/deployments/templates/config.yaml"]="${OPENIM_ROOT}/config/config.yaml"
["${OPENIM_ROOT}/deployments/templates/prometheus.yml"]="${OPENIM_ROOT}/config/prometheus.yml"
["${OPENIM_ROOT}/deployments/templates/alertmanager.yml"]="${OPENIM_ROOT}/config/alertmanager.yml"
)
-for template in "${!TEMPLATES[@]}"; do
- if [[ ! -f "${template}" ]]; then
- openim::log::error_exit "template file ${template} does not exist..."
- fi
+# Templates for example files
+declare -A EXAMPLES=(
+ ["${OPENIM_ROOT}/deployments/templates/env-template.yaml"]="${OPENIM_ROOT}/config/templates/env.template"
+ ["${OPENIM_ROOT}/deployments/templates/config.yaml"]="${OPENIM_ROOT}/config/templates/config.yaml.template"
+ ["${OPENIM_ROOT}/deployments/templates/prometheus.yml"]="${OPENIM_ROOT}/config/templates/prometheus.yml.template"
+ ["${OPENIM_ROOT}/deployments/templates/alertmanager.yml"]="${OPENIM_ROOT}/config/templates/alertmanager.yml.template"
+)
+
+# Templates for config Copy file
+declare -A COPY_TEMPLATES=(
+ ["${OPENIM_ROOT}/deployments/templates/email.tmpl"]="${OPENIM_ROOT}/config/email.tmpl"
+ ["${OPENIM_ROOT}/deployments/templates/instance-down-rules.yml"]="${OPENIM_ROOT}/config/instance-down-rules.yml"
+ ["${OPENIM_ROOT}/deployments/templates/notification.yaml"]="${OPENIM_ROOT}/config/notification.yaml"
+)
+
+# Templates for config Copy file
+declare -A COPY_EXAMPLES=(
+ ["${OPENIM_ROOT}/deployments/templates/email.tmpl"]="${OPENIM_ROOT}/config/templates/email.tmpl.template"
+ ["${OPENIM_ROOT}/deployments/templates/instance-down-rules.yml"]="${OPENIM_ROOT}/config/templates/instance-down-rules.yml.template"
+ ["${OPENIM_ROOT}/deployments/templates/notification.yaml"]="${OPENIM_ROOT}/config/templates/notification.yaml.template"
+)
+
+# Command-line options
+FORCE_OVERWRITE=false
+SKIP_EXISTING=false
+GENERATE_EXAMPLES=false
+CLEAN_CONFIG=false
+CLEAN_EXAMPLES=false
+
+# Function to display help information
+show_help() {
+ echo "Usage: $(basename "$0") [options]"
+ echo "Options:"
+ echo " -h, --help Show this help message"
+ echo " --force Overwrite existing files without prompt"
+ echo " --skip Skip generation if file exists"
+ echo " --examples Generate example files"
+ echo " --clean-config Clean all configuration files"
+ echo " --clean-examples Clean all example files"
+}
+
+# Function to generate and copy configuration files
+generate_config_files() {
+ # Handle TEMPLATES array
+ for template in "${!TEMPLATES[@]}"; do
+ local output_file="${TEMPLATES[$template]}"
+ process_file "$template" "$output_file" true
+ done
+
+ # Handle COPY_TEMPLATES array
+ for template in "${!COPY_TEMPLATES[@]}"; do
+ local output_file="${COPY_TEMPLATES[$template]}"
+ process_file "$template" "$output_file" false
+ done
+}
+
+# Function to generate example files
+generate_example_files() {
+ env_cmd="env -i"
+
+  # env_vars must be declared as an associative array before string keys are used
+  declare -A env_vars
+  env_vars["OPENIM_IP"]="127.0.0.1"
+ env_vars["LOG_STORAGE_LOCATION"]="../../"
+
+ for var in "${!env_vars[@]}"; do
+ env_cmd+=" $var='${env_vars[$var]}'"
+ done
+
+ # Processing EXAMPLES array
+ for template in "${!EXAMPLES[@]}"; do
+ local example_file="${EXAMPLES[$template]}"
+ process_file "$template" "$example_file" true
+ done
+
+ # Processing COPY_EXAMPLES array
+ for template in "${!COPY_EXAMPLES[@]}"; do
+ local example_file="${COPY_EXAMPLES[$template]}"
+ process_file "$template" "$example_file" false
+ done
+}
- IFS=';' read -ra OUTPUT_FILES <<< "${TEMPLATES[$template]}"
- for output_file in "${OUTPUT_FILES[@]}"; do
- openim::log::info "⌚ Working with template file: ${template} to ${output_file}..."
- "${OPENIM_ROOT}/scripts/genconfig.sh" "${ENV_FILE}" "${template}" > "${output_file}" || {
- openim::log::error "Error processing template file ${template}"
+# Function to process a single file, either by generating or copying
+process_file() {
+ local template=$1
+ local output_file=$2
+ local use_genconfig=$3
+
+ if [[ -f "${output_file}" ]]; then
+ if [[ "${FORCE_OVERWRITE}" == true ]]; then
+ openim::log::info "Force overwriting ${output_file}."
+ elif [[ "${SKIP_EXISTING}" == true ]]; then
+ openim::log::info "Skipping generation of ${output_file} as it already exists."
+ return
+ else
+ echo -n "File ${output_file} already exists. Overwrite? (Y/N): "
+ read -r -n 1 REPLY
+ echo
+ if [[ ! $REPLY =~ ^[Yy]$ ]]; then
+ openim::log::info "Skipping generation of ${output_file}."
+ return
+ fi
+ fi
+ else
+ if [[ "${SKIP_EXISTING}" == true ]]; then
+ openim::log::info "Generating ${output_file} as it does not exist."
+ fi
+ fi
+
+ if [[ "$use_genconfig" == true ]]; then
+ openim::log::info "⌚ Working with template file: ${template} to generate ${output_file}..."
+ if [[ ! -f "${OPENIM_ROOT}/scripts/genconfig.sh" ]]; then
+ openim::log::error "genconfig.sh script not found"
+ exit 1
+ fi
+  # env_cmd is only set when generating example files; default to empty so
+  # `set -o nounset` does not abort here
+  if [[ -n "${env_cmd:-}" ]]; then
+ eval "$env_cmd ${OPENIM_ROOT}/scripts/genconfig.sh '${ENV_FILE}' '${template}' > '${output_file}'" || {
+ openim::log::error "Error processing template file ${template}"
+ exit 1
+ }
+ else
+ "${OPENIM_ROOT}/scripts/genconfig.sh" "${ENV_FILE}" "${template}" > "${output_file}" || {
+ openim::log::error "Error processing template file ${template}"
+ exit 1
+ }
+ fi
+ else
+ openim::log::info "📋 Copying ${template} to ${output_file}..."
+ cp "${template}" "${output_file}" || {
+ openim::log::error "Error copying template file ${template}"
exit 1
}
- sleep 0.5
+ fi
+
+ sleep 0.5
+}
+
+clean_config_files() {
+ local all_templates=("${TEMPLATES[@]}" "${COPY_TEMPLATES[@]}")
+
+ for output_file in "${all_templates[@]}"; do
+ if [[ -f "${output_file}" ]]; then
+ rm -f "${output_file}"
+ openim::log::info "Removed configuration file: ${output_file}"
+ fi
+ done
+}
+
+# Function to clean example files
+clean_example_files() {
+ local all_examples=("${EXAMPLES[@]}" "${COPY_EXAMPLES[@]}")
+
+ for example_file in "${all_examples[@]}"; do
+ if [[ -f "${example_file}" ]]; then
+ rm -f "${example_file}"
+ openim::log::info "Removed example file: ${example_file}"
+ fi
done
+}
+
+while [[ $# -gt 0 ]]; do
+ case "$1" in
+ -h|--help)
+ show_help
+ exit 0
+ ;;
+ --force)
+ FORCE_OVERWRITE=true
+ shift
+ ;;
+ --skip)
+ SKIP_EXISTING=true
+ shift
+ ;;
+ --examples)
+ GENERATE_EXAMPLES=true
+ shift
+ ;;
+ --clean-config)
+ CLEAN_CONFIG=true
+ shift
+ ;;
+ --clean-examples)
+ CLEAN_EXAMPLES=true
+ shift
+ ;;
+ *)
+ echo "Unknown option: $1"
+ show_help
+ exit 1
+ ;;
+ esac
done
-openim::log::success "✨ All configuration files have been successfully generated!"
+# Clean configuration files if --clean-config option is provided
+if [[ "${CLEAN_CONFIG}" == true ]]; then
+ clean_config_files
+fi
+
+# Clean example files if --clean-examples option is provided
+if [[ "${CLEAN_EXAMPLES}" == true ]]; then
+ clean_example_files
+fi
+
+# Generate configuration files if requested
+if [[ "${FORCE_OVERWRITE}" == true || "${SKIP_EXISTING}" == false ]] && [[ "${CLEAN_CONFIG}" == false ]]; then
+ generate_config_files
+fi
+
+# Generate configuration files when --skip is set, creating only the missing ones
+if [[ "${SKIP_EXISTING}" == true ]]; then
+ generate_config_files
+fi
+
+# Generate example files if --examples option is provided
+if [[ "${GENERATE_EXAMPLES}" == true ]] && [[ "${CLEAN_EXAMPLES}" == false ]]; then
+ generate_example_files
+fi
+
+openim::log::success "Configuration and example files operation complete!"
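
Putting the new flags together, the reworked `init-config.sh` can now be driven non-interactively; a few representative invocations based on the `show_help` output above:

```bash
# Regenerate all configuration files, overwriting anything that already exists.
./scripts/init-config.sh --force

# Generate only the files that are missing and leave existing ones untouched.
./scripts/init-config.sh --skip

# Also produce the *.template example files under config/templates/.
./scripts/init-config.sh --examples

# Remove the generated configuration files or the example files again.
./scripts/init-config.sh --clean-config
./scripts/init-config.sh --clean-examples
```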
diff --git a/scripts/init-env.sh b/scripts/init-env.sh
index ca0c471ad..75b871b08 100755
--- a/scripts/init-env.sh
+++ b/scripts/init-env.sh
@@ -25,9 +25,9 @@ source "${OPENIM_ROOT}/scripts/install/common.sh"
openim::log::info "\n# Begin Install OpenIM Config"
for file in "${OPENIM_SERVER_TARGETS[@]}"; do
- VARNAME="$(echo $file | tr '[:lower:]' '[:upper:]' | tr '.' '_' | tr '-' '_')"
- VARVALUE="$OPENIM_OUTPUT_HOSTBIN/$file"
- # /etc/profile.d/openim-env.sh
- echo "export $VARNAME=$VARVALUE" > /etc/profile.d/openim-env.sh
- source /etc/profile.d/openim-env.sh
+ VARNAME="$(echo $file | tr '[:lower:]' '[:upper:]' | tr '.' '_' | tr '-' '_')"
+ VARVALUE="$OPENIM_OUTPUT_HOSTBIN/$file"
+ # /etc/profile.d/openim-env.sh
+ echo "export $VARNAME=$VARVALUE" > /etc/profile.d/openim-env.sh
+ source /etc/profile.d/openim-env.sh
done
diff --git a/scripts/init-githooks.sh b/scripts/init-githooks.sh
index 399054bb8..4ee470742 100755
--- a/scripts/init-githooks.sh
+++ b/scripts/init-githooks.sh
@@ -39,62 +39,62 @@ OPENIM_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
HOOKS_DIR="${OPENIM_ROOT}/.git/hooks"
help_info() {
- echo "Usage: $0 [options]"
- echo
- echo "This script helps to manage git hooks."
- echo
- echo "Options:"
- echo " -h, --help Show this help message and exit."
- echo " -d, --delete Delete the hooks that have been added."
- echo " By default, it will prompt to enable git hooks."
+ echo "Usage: $0 [options]"
+ echo
+ echo "This script helps to manage git hooks."
+ echo
+ echo "Options:"
+ echo " -h, --help Show this help message and exit."
+ echo " -d, --delete Delete the hooks that have been added."
+ echo " By default, it will prompt to enable git hooks."
}
delete_hooks() {
- for file in ${OPENIM_ROOT}/scripts/githooks/*.sh; do
- hook_name=$(basename "$file" .sh) # This removes the .sh extension
- rm -f "$HOOKS_DIR/$hook_name"
- done
- echo "Git hooks have been deleted."
+ for file in "${OPENIM_ROOT}"/scripts/githooks/*.sh; do
+ hook_name=$(basename "$file" .sh) # This removes the .sh extension
+ rm -f "$HOOKS_DIR/$hook_name"
+ done
+ echo "Git hooks have been deleted."
}
enable_hooks() {
- echo "Would you like to:"
- echo "1) Enable git hooks mode"
- echo "2) Delete existing git hooks"
- echo "Please select a number (or any other key to exit):"
- read -r choice
-
- case "$choice" in
- 1)
- for file in ${OPENIM_ROOT}/scripts/githooks/*.sh; do
- hook_name=$(basename "$file" .sh) # This removes the .sh extension
- cp -f "$file" "$HOOKS_DIR/$hook_name"
- done
-
- chmod +x $HOOKS_DIR/*
-
- echo "Git hooks mode has been enabled."
- echo "With git hooks enabled, every time you perform a git action (e.g. git commit), the corresponding hooks script will be triggered automatically."
- echo "This means that if the size of the file you're committing exceeds the set limit (e.g. 42MB), the commit will be rejected."
- ;;
- 2)
- delete_hooks
- ;;
- *)
- echo "Exiting without making changes."
- ;;
- esac
+ echo "Would you like to:"
+ echo "1) Enable git hooks mode"
+ echo "2) Delete existing git hooks"
+ echo "Please select a number (or any other key to exit):"
+ read -r choice
+
+ case "$choice" in
+ 1)
+ for file in ${OPENIM_ROOT}/scripts/githooks/*.sh; do
+ hook_name=$(basename "$file" .sh) # This removes the .sh extension
+ cp -f "$file" "$HOOKS_DIR/$hook_name"
+ done
+
+ chmod +x $HOOKS_DIR/*
+
+ echo "Git hooks mode has been enabled."
+ echo "With git hooks enabled, every time you perform a git action (e.g. git commit), the corresponding hooks script will be triggered automatically."
+ echo "This means that if the size of the file you're committing exceeds the set limit (e.g. 42MB), the commit will be rejected."
+ ;;
+ 2)
+ delete_hooks
+ ;;
+ *)
+ echo "Exiting without making changes."
+ ;;
+ esac
}
case "$1" in
- -h|--help)
- help_info
- ;;
- -d|--delete)
- delete_hooks
- ;;
- *)
- enable_hooks
- ;;
+ -h|--help)
+ help_info
+ ;;
+ -d|--delete)
+ delete_hooks
+ ;;
+ *)
+ enable_hooks
+ ;;
esac
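
In practice the script is either run interactively (choice 1 copies every `scripts/githooks/*.sh` into `.git/hooks/` without the `.sh` suffix and marks it executable) or with `--delete` to remove the hooks again:

```bash
# Install the hooks: answer "1" at the prompt.
./scripts/init-githooks.sh

# Remove previously installed hooks without prompting.
./scripts/init-githooks.sh --delete
```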
diff --git a/scripts/init-pwd.sh b/scripts/init-pwd.sh
deleted file mode 100755
index 5e2e162aa..000000000
--- a/scripts/init-pwd.sh
+++ /dev/null
@@ -1,70 +0,0 @@
-#!/usr/bin/env bash
-
-# Copyright © 2023 OpenIM. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-#Include shell font styles and some basic information
-SCRIPTS_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
-OPENIM_ROOT=$(dirname "${SCRIPTS_ROOT}")/..
-
-#Include shell font styles and some basic information
-source $SCRIPTS_ROOT/lib/init.sh
-source $SCRIPTS_ROOT/path_info.sh
-
-cd $SCRIPTS_ROOT
-
-source $OPENIM_ROOT/.env
-
-# Check if PASSWORD only contains letters and numbers
-if [[ "$PASSWORD" =~ ^[a-zA-Z0-9]+$ ]]
-then
- echo "PASSWORD is valid."
-else
- echo "ERR: PASSWORD should only contain letters and numbers. " $PASSWORD
- exit
-fi
-
-echo ""
-echo -e "===> ${PURPLE_PREFIX} you user is:$USER ${COLOR_SUFFIX}"
-echo -e "===> ${PURPLE_PREFIX} you password is:$PASSWORD ${COLOR_SUFFIX}"
-echo -e "===> ${PURPLE_PREFIX} you minio endpoint is:$MINIO_ENDPOINT ${COLOR_SUFFIX}"
-echo -e "===> ${PURPLE_PREFIX} you api url is:$API_URL ${COLOR_SUFFIX}"
-echo ""
-
-# Specify the config file
-config_file="${OPENIM_ROOT}"/config/config.yaml
-
-# Load variables from .env file
-source "${OPENIM_ROOT}"/.env
-
-# Replace the password and username field for mysql
-sed -i "/mysql:/,/database:/ s/password:.*/password: $PASSWORD/" $config_file
-sed -i "/mysql:/,/database:/ s/username:.*/username: $USER/" $config_file
-
-# Replace the password and username field for mongo
-sed -i "/mongo:/,/maxPoolSize:/ s/password:.*/password: $PASSWORD/" $config_file
-sed -i "/mongo:/,/maxPoolSize:/ s/username:.*/username: $USER/" $config_file
-
-# Replace the password field for redis
-sed -i '/redis:/,/password:/s/password: .*/password: '${PASSWORD}'/' $config_file
-
-# Replace accessKeyID and secretAccessKey for minio
-sed -i "/minio:/,/isDistributedMod:/ s/accessKeyID:.*/accessKeyID: $USER/" $config_file
-sed -i "/minio:/,/isDistributedMod:/ s/secretAccessKey:.*/secretAccessKey: $PASSWORD/" $config_file
-sed -i '/minio:/,/endpoint:/s|endpoint: .*|endpoint: '${MINIO_ENDPOINT}'|' $config_file
-sed -i '/object:/,/apiURL:/s|apiURL: .*|apiURL: '${API_URL}'|' $config_file
-
-
-# Replace secret for token
-sed -i "s/secret: .*/secret: $PASSWORD/" $config_file
diff --git a/scripts/install-im-server.sh b/scripts/install-im-server.sh
index 26ab35b0d..c1224e30c 100755
--- a/scripts/install-im-server.sh
+++ b/scripts/install-im-server.sh
@@ -1,5 +1,5 @@
#!/usr/bin/env bash
-# Copyright © 2023 OpenIM. All rights reserved.
+# Copyright © 2024 OpenIM. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,8 +13,20 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+#
+# OpenIM Docker Deployment Script
+#
+# This script automates the process of building the OpenIM server image
+# and deploying it using Docker Compose.
+#
+# Variables:
+# - SERVER_IMAGE_VERSION: Version of the server image (default: test)
+# - IMAGE_REGISTRY: Docker image registry (default: openim)
+# - DOCKER_COMPOSE_FILE_URL: URL to the docker-compose.yml file
+#
+# Usage:
+#   SERVER_IMAGE_VERSION=latest IMAGE_REGISTRY=myregistry ./scripts/install-im-server.sh
-# Common utilities, variables and checks for all build scripts.
set -o errexit
set -o nounset
set -o pipefail
@@ -28,25 +40,47 @@ chmod +x "${OPENIM_ROOT}"/scripts/*.sh
openim::util::ensure_docker_daemon_connectivity
+# Default values for variables
+: ${SERVER_IMAGE_VERSION:=test}
+: ${IMAGE_REGISTRY:=openim}
+: ${DOCKER_COMPOSE_FILE_URL:="https://raw.githubusercontent.com/openimsdk/openim-docker/main/docker-compose.yaml"}
+
DOCKER_COMPOSE_COMMAND=
# Check if docker-compose command is available
openim::util::check_docker_and_compose_versions
-
-if command -v docker compose &> /dev/null
-then
- openim::log::info "docker compose command is available"
- DOCKER_COMPOSE_COMMAND="docker compose"
+if command -v docker compose &> /dev/null; then
+ openim::log::info "docker compose command is available"
+ DOCKER_COMPOSE_COMMAND="docker compose"
else
- DOCKER_COMPOSE_COMMAND="docker-compose"
+ DOCKER_COMPOSE_COMMAND="docker-compose"
fi
+export SERVER_IMAGE_VERSION
+export IMAGE_REGISTRY
+"${OPENIM_ROOT}"/scripts/init-config.sh
+
pushd "${OPENIM_ROOT}"
+docker build -t "${IMAGE_REGISTRY}/openim-server:${SERVER_IMAGE_VERSION}" .
${DOCKER_COMPOSE_COMMAND} stop
-curl https://gitee.com/openimsdk/openim-docker/raw/main/example/full-openim-server-and-chat.yml -o docker-compose.yml && make init && docker compose up -d
-"${OPENIM_ROOT}"/scripts/init-config.sh
-${DOCKER_COMPOSE_COMMAND} up --remove-orphans -d
-sleep 60
-${DOCKER_COMPOSE_COMMAND} logs openim-server
+curl "${DOCKER_COMPOSE_FILE_URL}" -o docker-compose.yml
+${DOCKER_COMPOSE_COMMAND} up -d
+
+# Function to check container status
+check_containers() {
+ if ! ${DOCKER_COMPOSE_COMMAND} ps | grep -q 'Up'; then
+ echo "Error: One or more docker containers failed to start."
+ ${DOCKER_COMPOSE_COMMAND} logs openim-server
+ ${DOCKER_COMPOSE_COMMAND} logs openim-chat
+ return 1
+ fi
+ return 0
+}
+
+# Wait for the containers to initialize before checking their status
+sleep 100
+
${DOCKER_COMPOSE_COMMAND} ps
-popd
+check_containers
+
+popd
\ No newline at end of file
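
With the defaults introduced above (`SERVER_IMAGE_VERSION=test`, `IMAGE_REGISTRY=openim`, and the upstream `docker-compose.yaml` URL), the script can be run as-is or with those variables overridden, as the new header comment describes:

```bash
# Build and deploy with the defaults.
./scripts/install-im-server.sh

# Build a locally tagged image and deploy it instead (example values).
SERVER_IMAGE_VERSION=latest IMAGE_REGISTRY=myregistry ./scripts/install-im-server.sh
```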
diff --git a/scripts/install/common.sh b/scripts/install/common.sh
index dd8bf614e..f6ee5d3ad 100755
--- a/scripts/install/common.sh
+++ b/scripts/install/common.sh
@@ -101,7 +101,6 @@ readonly OPENIM_SERVER_PORT_LISTARIES=("${OPENIM_SERVER_PORT_TARGETS[@]##*/}")
openim::common::dependency_name() {
local targets=(
- mysql
redis
zookeeper
kafka
@@ -117,13 +116,11 @@ readonly OPENIM_DEPENDENCY_TARGETS
# This function returns a list of ports for various services
# - zookeeper
# - kafka
-# - mysql
# - mongodb
# - redis
# - minio
openim::common::dependency_port() {
local targets=(
- ${MYSQL_PORT} # MySQL port
${REDIS_PORT} # Redis port
${ZOOKEEPER_PORT} # Zookeeper port
${KAFKA_PORT} # Kafka port
diff --git a/scripts/install/dependency.sh b/scripts/install/dependency.sh
index 7d6685186..e7c7eb426 100755
--- a/scripts/install/dependency.sh
+++ b/scripts/install/dependency.sh
@@ -22,79 +22,68 @@ set -o pipefail
OPENIM_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd -P)
[[ -z ${COMMON_SOURCED} ]] && source "${OPENIM_ROOT}"/scripts/install/common.sh
-# Start MySQL service
-docker run -d \
- --name mysql \
- -p 13306:3306 \
- -p 23306:33060 \
- -v "${DATA_DIR}/components/mysql/data:/var/lib/mysql" \
- -v "/etc/localtime:/etc/localtime" \
- -e MYSQL_ROOT_PASSWORD=${PASSWORD} \
- --restart always \
- mysql:5.7
-
# Start MongoDB service
docker run -d \
- --name mongo \
- -p 37017:27017 \
- -v "${DATA_DIR}/components/mongodb/data/db:/data/db" \
- -v "${DATA_DIR}/components/mongodb/data/logs:/data/logs" \
- -v "${DATA_DIR}/components/mongodb/data/conf:/etc/mongo" \
- -v "./scripts/mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro" \
- -e TZ=Asia/Shanghai \
- -e wiredTigerCacheSizeGB=1 \
- -e MONGO_INITDB_ROOT_USERNAME=${OPENIM_USER} \
- -e MONGO_INITDB_ROOT_PASSWORD=${PASSWORD} \
- -e MONGO_INITDB_DATABASE=openIM \
- -e MONGO_USERNAME=${OPENIM_USER} \
- -e MONGO_PASSWORD=${PASSWORD} \
- --restart always \
- mongo:6.0.2 --wiredTigerCacheSizeGB 1 --auth
+--name mongo \
+-p 37017:27017 \
+-v "${DATA_DIR}/components/mongodb/data/db:/data/db" \
+-v "${DATA_DIR}/components/mongodb/data/logs:/data/logs" \
+-v "${DATA_DIR}/components/mongodb/data/conf:/etc/mongo" \
+-v "./scripts/mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro" \
+-e TZ=Asia/Shanghai \
+-e wiredTigerCacheSizeGB=1 \
+-e MONGO_INITDB_ROOT_USERNAME=${OPENIM_USER} \
+-e MONGO_INITDB_ROOT_PASSWORD=${PASSWORD} \
+-e MONGO_INITDB_DATABASE=openim_v3 \
+-e MONGO_OPENIM_USERNAME=${OPENIM_USER} \
+-e MONGO_OPENIM_PASSWORD=${PASSWORD} \
+--restart always \
+mongo:6.0.2 --wiredTigerCacheSizeGB 1 --auth
# Start Redis service
docker run -d \
- --name redis \
- -p 16379:6379 \
- -v "${DATA_DIR}/components/redis/data:/data" \
- -v "${DATA_DIR}/components/redis/config/redis.conf:/usr/local/redis/config/redis.conf" \
- -e TZ=Asia/Shanghai \
- --sysctl net.core.somaxconn=1024 \
- --restart always \
- redis:7.0.0 redis-server --requirepass ${PASSWORD} --appendonly yes
+--name redis \
+-p 16379:6379 \
+-v "${DATA_DIR}/components/redis/data:/data" \
+-v "${DATA_DIR}/components/redis/config/redis.conf:/usr/local/redis/config/redis.conf" \
+-e TZ=Asia/Shanghai \
+--sysctl net.core.somaxconn=1024 \
+--restart always \
+redis:7.0.0 redis-server --requirepass ${PASSWORD} --appendonly yes
# Start Zookeeper service
docker run -d \
- --name zookeeper \
- -p 2181:2181 \
- -v "/etc/localtime:/etc/localtime" \
- -e TZ=Asia/Shanghai \
- --restart always \
- wurstmeister/zookeeper
+--name zookeeper \
+-p 2181:2181 \
+-v "/etc/localtime:/etc/localtime" \
+-e TZ=Asia/Shanghai \
+--restart always \
+wurstmeister/zookeeper
# Start Kafka service
docker run -d \
- --name kafka \
- -p 9092:9092 \
- -e TZ=Asia/Shanghai \
- -e KAFKA_BROKER_ID=0 \
- -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
- -e KAFKA_CREATE_TOPICS="latestMsgToRedis:8:1,msgToPush:8:1,offlineMsgToMongoMysql:8:1" \
- -e KAFKA_ADVERTISED_LISTENERS="INSIDE://127.0.0.1:9092,OUTSIDE://103.116.45.174:9092" \
- -e KAFKA_LISTENERS="INSIDE://:9092,OUTSIDE://:9093" \
- -e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP="INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT" \
- -e KAFKA_INTER_BROKER_LISTENER_NAME=INSIDE \
- --restart always \
- --link zookeeper \
- wurstmeister/kafka
+--name kafka \
+-p 9092:9092 \
+-e TZ=Asia/Shanghai \
+-e KAFKA_BROKER_ID=0 \
+-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
+-e KAFKA_CREATE_TOPICS="latestMsgToRedis:8:1,msgToPush:8:1,offlineMsgToMongoMysql:8:1" \
+-e KAFKA_ADVERTISED_LISTENERS="INSIDE://127.0.0.1:9092,OUTSIDE://103.116.45.174:9092" \
+-e KAFKA_LISTENERS="INSIDE://:9092,OUTSIDE://:9093" \
+-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP="INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT" \
+-e KAFKA_INTER_BROKER_LISTENER_NAME=INSIDE \
+--restart always \
+--link zookeeper \
+wurstmeister/kafka
# Start MinIO service
docker run -d \
- --name minio \
- -p 10005:9000 \
- -p 9090:9090 \
- -v "/mnt/data:/data" \
- -v "/mnt/config:/root/.minio" \
- -e MINIO_ROOT_USER=${OPENIM_USER} \
- -e MINIO_ROOT_PASSWORD=${PASSWORD} \
- --restart always \
- minio/minio server /data --console-address ':9090'
+--name minio \
+-p 10005:9000 \
+-p 9090:9090 \
+-v "/mnt/data:/data" \
+-v "/mnt/config:/root/.minio" \
+-e MINIO_ROOT_USER=${OPENIM_USER} \
+-e MINIO_ROOT_PASSWORD=${PASSWORD} \
+--restart always \
+minio/minio server /data --console-address ':9090'
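
After `dependency.sh` has started the containers above, a quick way to confirm they are all running is to filter `docker ps` by the container names the script assigns; a small sketch:

```bash
# Show the status of each dependency container started by this script.
for name in mongo redis zookeeper kafka minio; do
  docker ps --filter "name=${name}" --format '{{.Names}}: {{.Status}}'
done
```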
diff --git a/scripts/install/environment.sh b/scripts/install/environment.sh
index 98636bbde..b1d2354b9 100755
--- a/scripts/install/environment.sh
+++ b/scripts/install/environment.sh
@@ -22,13 +22,13 @@
OPENIM_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd -P)"
# 生成文件存放目录
-LOCAL_OUTPUT_ROOT=""${OPENIM_ROOT}"/${OUT_DIR:-_output}"
+LOCAL_OUTPUT_ROOT="${OPENIM_ROOT}/${OUT_DIR:-_output}"
source "${OPENIM_ROOT}/scripts/lib/init.sh"
#TODO: Access to the OPENIM_IP networks outside, or you want to use the OPENIM_IP network
# OPENIM_IP=127.0.0.1
if [ -z "${OPENIM_IP}" ]; then
- OPENIM_IP=$(openim::util::get_server_ip)
+ OPENIM_IP=$(openim::util::get_server_ip)
fi
# config.gateway custom bridge modes
@@ -37,9 +37,9 @@ fi
# fi
function def() {
- local var_name="$1"
- local default_value="${2:-}"
- eval "readonly $var_name=\"\${$var_name:-$(printf '%q' "$default_value")}\""
+ local var_name="$1"
+ local default_value="${2:-}"
+ eval "readonly $var_name=\"\${$var_name:-$(printf '%q' "$default_value")}\""
}
# OpenIM Docker Compose 数据存储的默认路径
@@ -52,7 +52,7 @@ def "OPENIM_USER" "root"
readonly PASSWORD=${PASSWORD:-'openIM123'}
# 设置统一的数据库名称,方便管理
-def "DATABASE_NAME" "openIM_v3"
+def "DATABASE_NAME" "openim_v3"
# Linux系统 openim 用户
def "LINUX_USERNAME" "openim"
@@ -62,12 +62,12 @@ readonly LINUX_PASSWORD=${LINUX_PASSWORD:-"${PASSWORD}"}
def "INSTALL_DIR" "${LOCAL_OUTPUT_ROOT}/installs"
mkdir -p ${INSTALL_DIR}
-def "ENV_FILE" ""${OPENIM_ROOT}"/scripts/install/environment.sh"
+def "ENV_FILE" "${OPENIM_ROOT}/scripts/install/environment.sh"
###################### Docker compose ###################
# OPENIM AND CHAT
-def "CHAT_BRANCH" "main"
-def "SERVER_BRANCH" "main"
+def "CHAT_IMAGE_VERSION" "main"
+def "SERVER_IMAGE_VERSION" "main"
# Choose the appropriate image address, the default is GITHUB image,
# you can choose docker hub, for Chinese users can choose Ali Cloud
@@ -89,14 +89,12 @@ SUBNET=$(echo $DOCKER_BRIDGE_SUBNET | cut -d '/' -f 2)
LAST_OCTET=$(echo $IP_PREFIX | cut -d '.' -f 4)
generate_ip() {
- local NEW_IP="$(echo $IP_PREFIX | cut -d '.' -f 1-3).$((LAST_OCTET++))"
- echo $NEW_IP
+ local NEW_IP="$(echo $IP_PREFIX | cut -d '.' -f 1-3).$((LAST_OCTET++))"
+ echo $NEW_IP
}
LAST_OCTET=$((LAST_OCTET + 1))
DOCKER_BRIDGE_GATEWAY=$(generate_ip)
LAST_OCTET=$((LAST_OCTET + 1))
-MYSQL_NETWORK_ADDRESS=$(generate_ip)
-LAST_OCTET=$((LAST_OCTET + 1))
MONGO_NETWORK_ADDRESS=$(generate_ip)
LAST_OCTET=$((LAST_OCTET + 1))
REDIS_NETWORK_ADDRESS=$(generate_ip)
@@ -130,7 +128,7 @@ def "OPENIM_CONFIG_DIR" "/etc/openim/config"
def "OPENIM_LOG_DIR" "/var/log/openim"
def "CA_FILE" "${OPENIM_CONFIG_DIR}/cert/ca.pem"
-def "OPNEIM_CONFIG" ""${OPENIM_ROOT}"/config"
+def "OPNEIM_CONFIG" "${OPENIM_ROOT}/config"
def "OPENIM_SERVER_ADDRESS" "${DOCKER_BRIDGE_GATEWAY}" # OpenIM服务地址
# OpenIM Websocket端口
@@ -141,7 +139,7 @@ readonly API_OPENIM_PORT=${API_OPENIM_PORT:-'10002'}
def "API_LISTEN_IP" "0.0.0.0" # API的监听IP
###################### openim-chat 配置信息 ######################
-def "OPENIM_CHAT_DATA_DIR" "./openim-chat/${CHAT_BRANCH}"
+def "OPENIM_CHAT_DATA_DIR" "./openim-chat/${CHAT_IMAGE_VERSION}"
def "OPENIM_CHAT_ADDRESS" "${DOCKER_BRIDGE_GATEWAY}" # OpenIM服务地址
def "OPENIM_CHAT_API_PORT" "10008" # OpenIM API端口
def "CHAT_API_LISTEN_IP" "" # OpenIM API的监听IP
@@ -168,27 +166,19 @@ def "ZOOKEEPER_ADDRESS" "${DOCKER_BRIDGE_GATEWAY}" # Zookeeper的地址
def "ZOOKEEPER_USERNAME" "" # Zookeeper的用户名
def "ZOOKEEPER_PASSWORD" "" # Zookeeper的密码
-###################### MySQL 配置信息 ######################
-def "MYSQL_PORT" "13306" # MySQL的端口
-def "MYSQL_ADDRESS" "${DOCKER_BRIDGE_GATEWAY}" # MySQL的地址
-def "MYSQL_USERNAME" "${OPENIM_USER}" # MySQL的用户名
-# MySQL的密码
-readonly MYSQL_PASSWORD=${MYSQL_PASSWORD:-"${PASSWORD}"}
-def "MYSQL_DATABASE" "${DATABASE_NAME}" # MySQL的数据库名
-def "MYSQL_MAX_OPEN_CONN" "1000" # 最大打开的连接数
-def "MYSQL_MAX_IDLE_CONN" "100" # 最大空闲连接数
-def "MYSQL_MAX_LIFETIME" "60" # 连接可以重用的最大生命周期(秒)
-def "MYSQL_LOG_LEVEL" "4" # 日志级别
-def "MYSQL_SLOW_THRESHOLD" "500" # 慢查询阈值(毫秒)
-
###################### MongoDB 配置信息 ######################
def "MONGO_URI" # MongoDB的URI
def "MONGO_PORT" "37017" # MongoDB的端口
def "MONGO_ADDRESS" "${DOCKER_BRIDGE_GATEWAY}" # MongoDB的地址
def "MONGO_DATABASE" "${DATABASE_NAME}" # MongoDB的数据库名
-def "MONGO_USERNAME" "${OPENIM_USER}" # MongoDB的用户名
-# MongoDB的密码
+def "MONGO_USERNAME" "root" # MongoDB的管理员身份用户名
+# MongoDB的管理员身份密码
readonly MONGO_PASSWORD=${MONGO_PASSWORD:-"${PASSWORD}"}
+# Mongo OpenIM 身份用户名
+def "MONGO_OPENIM_USERNAME" "openIM"
+# Mongo OpenIM 身份密码
+readonly MONGO_OPENIM_PASSWORD=${MONGO_OPENIM_PASSWORD:-"${PASSWORD}"}
+
def "MONGO_MAX_POOL_SIZE" "100" # 最大连接池大小
###################### Object 配置信息 ######################
@@ -253,8 +243,6 @@ def "KAFKA_CONSUMERGROUPID_PUSH" "push" # `Kafka` 的消费
###################### openim-web 配置信息 ######################
def "OPENIM_WEB_PORT" "11001" # openim-web的端口
-def "OPENIM_WEB_ADDRESS" "${DOCKER_BRIDGE_GATEWAY}" # openim-web的地址
-def "OPENIM_WEB_DIST_PATH" "/app/dist" # openim-web的dist路径
###################### openim-admin-front 配置信息 ######################
def "OPENIM_ADMIN_FRONT_PORT" "11002" # openim-admin-front的端口
@@ -300,7 +288,6 @@ readonly ALERTMANAGER_SEND_RESOLVED=${ALERTMANAGER_SEND_RESOLVED:-"{SEND_RESOLVE
###################### Grafana 配置信息 ######################
def "GRAFANA_PORT" "13000" # Grafana的端口
def "GRAFANA_ADDRESS" "${DOCKER_BRIDGE_GATEWAY}" # Grafana的地址
-
###################### RPC Port Configuration Variables ######################
# For launching multiple programs, just fill in multiple ports separated by commas
# For example:
@@ -337,7 +324,7 @@ def "OPENIM_CONVERSATION_NAME" "Conversation" # OpenIM对话服务名称
def "OPENIM_THIRD_NAME" "Third" # OpenIM第三方服务名称
###################### Log Configuration Variables ######################
-def "LOG_STORAGE_LOCATION" ""${OPENIM_ROOT}"/logs/" # 日志存储位置
+def "LOG_STORAGE_LOCATION" "${OPENIM_ROOT}/logs/" # 日志存储位置
def "LOG_ROTATION_TIME" "24" # 日志轮替时间
def "LOG_REMAIN_ROTATION_COUNT" "2" # 保留的日志轮替数量
def "LOG_REMAIN_LOG_LEVEL" "6" # 保留的日志级别
@@ -362,12 +349,8 @@ def "JPNS_APP_KEY" "" # JPNS应用密钥
def "JPNS_MASTER_SECRET" "" # JPNS主密钥
def "JPNS_PUSH_URL" "" # JPNS推送URL
def "JPNS_PUSH_INTENT" "" # JPNS推送意图
-def "MANAGER_USERID_1" "openIM123456" # 管理员ID 1
-def "MANAGER_USERID_2" "openIM654321" # 管理员ID 2
-def "MANAGER_USERID_3" "openIMAdmin" # 管理员ID 3
-def "NICKNAME_1" "system1" # 昵称1
-def "NICKNAME_2" "system2" # 昵称2
-def "NICKNAME_3" "system3" # 昵称3
+def "IM_ADMIN_USERID" "imAdmin" # IM管理员ID
+def "IM_ADMIN_NAME" "imAdmin" # IM管理员昵称
def "MULTILOGIN_POLICY" "1" # 多登录策略
def "CHAT_PERSISTENCE_MYSQL" "true" # 聊天持久化MySQL
def "MSG_CACHE_TIMEOUT" "86400" # 消息缓存超时
@@ -386,14 +369,13 @@ def "IOS_PUSH_SOUND" "xxx" # IOS推送声音
def "IOS_BADGE_COUNT" "true" # IOS徽章计数
def "IOS_PRODUCTION" "false" # IOS生产
# callback 配置
-def "CALLBACK_ENABLE" "true" # 是否开启 Callback
+def "CALLBACK_ENABLE" "false" # 是否开启 Callback
def "CALLBACK_TIMEOUT" "5" # 最长超时时间
def "CALLBACK_FAILED_CONTINUE" "true" # 失败后是否继续
-
###################### Prometheus 配置信息 ######################
# 是否启用 Prometheus
-readonly PROMETHEUS_ENABLE=${PROMETHEUS_ENABLE:-'false'}
-def "PROMETHEUS_URL" "${GRAFANA_ADDRESS}:${GRAFANA_PORT}"
+readonly PROMETHEUS_ENABLE=${PROMETHEUS_ENABLE:-'true'}
+readonly GRAFANA_URL=${GRAFANA_URL:-"http://${OPENIM_IP}:${GRAFANA_PORT}/"}
# Api 服务的 Prometheus 端口
readonly API_PROM_PORT=${API_PROM_PORT:-'20100'}
# User 服务的 Prometheus 端口
@@ -424,7 +406,7 @@ readonly MSG_TRANSFER_PROM_ADDRESS_PORT=${MSG_TRANSFER_PROM_ADDRESS_PORT:-"${DOC
###################### OpenIM openim-api ######################
def "OPENIM_API_HOST" "127.0.0.1"
def "OPENIM_API_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-api" # OpenIM openim-api 二进制文件路径
-def "OPENIM_API_CONFIG" ""${OPENIM_ROOT}"/config/" # OpenIM openim-api 配置文件路径
+def "OPENIM_API_CONFIG" "${OPENIM_ROOT}/config/" # OpenIM openim-api 配置文件路径
def "OPENIM_API_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-api" # OpenIM openim-api 日志存储路径
def "OPENIM_API_LOG_LEVEL" "info" # OpenIM openim-api 日志级别
def "OPENIM_API_LOG_MAX_SIZE" "100" # OpenIM openim-api 日志最大大小(MB)
@@ -436,7 +418,7 @@ def "OPENIM_API_LOG_WITH_STACK" "${LOG_WITH_STACK}" # OpenIM openim-ap
###################### OpenIM openim-cmdutils ######################
def "OPENIM_CMDUTILS_HOST" "127.0.0.1"
def "OPENIM_CMDUTILS_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-cmdutils" # OpenIM openim-cmdutils 二进制文件路径
-def "OPENIM_CMDUTILS_CONFIG" ""${OPENIM_ROOT}"/config/" # OpenIM openim-cmdutils 配置文件路径
+def "OPENIM_CMDUTILS_CONFIG" "${OPENIM_ROOT}/config/" # OpenIM openim-cmdutils 配置文件路径
def "OPENIM_CMDUTILS_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-cmdutils" # OpenIM openim-cmdutils 日志存储路径
def "OPENIM_CMDUTILS_LOG_LEVEL" "info" # OpenIM openim-cmdutils 日志级别
def "OPENIM_CMDUTILS_LOG_MAX_SIZE" "100" # OpenIM openim-cmdutils 日志最大大小(MB)
@@ -448,7 +430,7 @@ def "OPENIM_CMDUTILS_LOG_WITH_STACK" "${LOG_WITH_STACK}" # OpenIM
###################### OpenIM openim-crontask ######################
def "OPENIM_CRONTASK_HOST" "127.0.0.1"
def "OPENIM_CRONTASK_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-crontask" # OpenIM openim-crontask 二进制文件路径
-def "OPENIM_CRONTASK_CONFIG" ""${OPENIM_ROOT}"/config/" # OpenIM openim-crontask 配置文件路径
+def "OPENIM_CRONTASK_CONFIG" "${OPENIM_ROOT}/config/" # OpenIM openim-crontask 配置文件路径
def "OPENIM_CRONTASK_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-crontask" # OpenIM openim-crontask 日志存储路径
def "OPENIM_CRONTASK_LOG_LEVEL" "info" # OpenIM openim-crontask 日志级别
def "OPENIM_CRONTASK_LOG_MAX_SIZE" "100" # OpenIM openim-crontask 日志最大大小(MB)
@@ -460,7 +442,7 @@ def "OPENIM_CRONTASK_LOG_WITH_STACK" "${LOG_WITH_STACK}" # OpenIM
###################### OpenIM openim-msggateway ######################
def "OPENIM_MSGGATEWAY_HOST" "127.0.0.1"
def "OPENIM_MSGGATEWAY_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-msggateway"
-def "OPENIM_MSGGATEWAY_CONFIG" ""${OPENIM_ROOT}"/config/"
+def "OPENIM_MSGGATEWAY_CONFIG" "${OPENIM_ROOT}/config/"
def "OPENIM_MSGGATEWAY_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-msggateway"
def "OPENIM_MSGGATEWAY_LOG_LEVEL" "info"
def "OPENIM_MSGGATEWAY_LOG_MAX_SIZE" "100"
@@ -475,7 +457,7 @@ readonly OPENIM_MSGGATEWAY_NUM=${OPENIM_MSGGATEWAY_NUM:-'4'}
###################### OpenIM openim-msgtransfer ######################
def "OPENIM_MSGTRANSFER_HOST" "127.0.0.1"
def "OPENIM_MSGTRANSFER_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-msgtransfer" # OpenIM openim-msgtransfer 二进制文件路径
-def "OPENIM_MSGTRANSFER_CONFIG" ""${OPENIM_ROOT}"/config/" # OpenIM openim-msgtransfer 配置文件路径
+def "OPENIM_MSGTRANSFER_CONFIG" "${OPENIM_ROOT}/config/" # OpenIM openim-msgtransfer 配置文件路径
def "OPENIM_MSGTRANSFER_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-msgtransfer" # OpenIM openim-msgtransfer 日志存储路径
def "OPENIM_MSGTRANSFER_LOG_LEVEL" "info" # OpenIM openim-msgtransfer 日志级别
def "OPENIM_MSGTRANSFER_LOG_MAX_SIZE" "100" # OpenIM openim-msgtransfer 日志最大大小(MB)
@@ -487,7 +469,7 @@ def "OPENIM_MSGTRANSFER_LOG_WITH_STACK" "${LOG_WITH_STACK}" #
###################### OpenIM openim-push ######################
def "OPENIM_PUSH_HOST" "127.0.0.1"
def "OPENIM_PUSH_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-push" # OpenIM openim-push 二进制文件路径
-def "OPENIM_PUSH_CONFIG" ""${OPENIM_ROOT}"/config/" # OpenIM openim-push 配置文件路径
+def "OPENIM_PUSH_CONFIG" "${OPENIM_ROOT}/config/" # OpenIM openim-push 配置文件路径
def "OPENIM_PUSH_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-push" # OpenIM openim-push 日志存储路径
def "OPENIM_PUSH_LOG_LEVEL" "info" # OpenIM openim-push 日志级别
def "OPENIM_PUSH_LOG_MAX_SIZE" "100" # OpenIM openim-push 日志最大大小(MB)
@@ -499,7 +481,7 @@ def "OPENIM_PUSH_LOG_WITH_STACK" "${LOG_WITH_STACK}" # OpenIM openim-
###################### OpenIM openim-rpc-auth ######################
def "OPENIM_RPC_AUTH_HOST" "127.0.0.1"
def "OPENIM_RPC_AUTH_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-rpc-auth" # OpenIM openim-rpc-auth 二进制文件路径
-def "OPENIM_RPC_AUTH_CONFIG" ""${OPENIM_ROOT}"/config/" # OpenIM openim-rpc-auth 配置文件路径
+def "OPENIM_RPC_AUTH_CONFIG" "${OPENIM_ROOT}/config/" # OpenIM openim-rpc-auth 配置文件路径
def "OPENIM_RPC_AUTH_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-rpc-auth" # OpenIM openim-rpc-auth 日志存储路径
def "OPENIM_RPC_AUTH_LOG_LEVEL" "info" # OpenIM openim-rpc-auth 日志级别
def "OPENIM_RPC_AUTH_LOG_MAX_SIZE" "100" # OpenIM openim-rpc-auth 日志最大大小(MB)
@@ -511,7 +493,7 @@ def "OPENIM_RPC_AUTH_LOG_WITH_STACK" "${LOG_WITH_STACK}" # OpenIM
###################### OpenIM openim-rpc-conversation ######################
def "OPENIM_RPC_CONVERSATION_HOST" "127.0.0.1"
def "OPENIM_RPC_CONVERSATION_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-rpc-conversation" # OpenIM openim-rpc-conversation 二进制文件路径
-def "OPENIM_RPC_CONVERSATION_CONFIG" ""${OPENIM_ROOT}"/config/" # OpenIM openim-rpc-conversation 配置文件路径
+def "OPENIM_RPC_CONVERSATION_CONFIG" "${OPENIM_ROOT}/config/" # OpenIM openim-rpc-conversation 配置文件路径
def "OPENIM_RPC_CONVERSATION_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-rpc-conversation" # OpenIM openim-rpc-conversation 日志存储路径
def "OPENIM_RPC_CONVERSATION_LOG_LEVEL" "info" # OpenIM openim-rpc-conversation 日志级别
def "OPENIM_RPC_CONVERSATION_LOG_MAX_SIZE" "100" # OpenIM openim-rpc-conversation 日志最大大小(MB)
@@ -523,7 +505,7 @@ def "OPENIM_RPC_CONVERSATION_LOG_WITH_STACK" "${LOG_WITH_STACK}"
###################### OpenIM openim-rpc-friend ######################
def "OPENIM_RPC_FRIEND_HOST" "127.0.0.1"
def "OPENIM_RPC_FRIEND_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-rpc-friend" # OpenIM openim-rpc-friend 二进制文件路径
-def "OPENIM_RPC_FRIEND_CONFIG" ""${OPENIM_ROOT}"/config/" # OpenIM openim-rpc-friend 配置文件路径
+def "OPENIM_RPC_FRIEND_CONFIG" "${OPENIM_ROOT}/config/" # OpenIM openim-rpc-friend 配置文件路径
def "OPENIM_RPC_FRIEND_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-rpc-friend" # OpenIM openim-rpc-friend 日志存储路径
def "OPENIM_RPC_FRIEND_LOG_LEVEL" "info" # OpenIM openim-rpc-friend 日志级别
def "OPENIM_RPC_FRIEND_LOG_MAX_SIZE" "100" # OpenIM openim-rpc-friend 日志最大大小(MB)
@@ -535,7 +517,7 @@ def "OPENIM_RPC_FRIEND_LOG_WITH_STACK" "${LOG_WITH_STACK}" # Op
###################### OpenIM openim-rpc-group ######################
def "OPENIM_RPC_GROUP_HOST" "127.0.0.1"
def "OPENIM_RPC_GROUP_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-rpc-group" # OpenIM openim-rpc-group 二进制文件路径
-def "OPENIM_RPC_GROUP_CONFIG" ""${OPENIM_ROOT}"/config/" # OpenIM openim-rpc-group 配置文件路径
+def "OPENIM_RPC_GROUP_CONFIG" "${OPENIM_ROOT}/config/" # OpenIM openim-rpc-group 配置文件路径
def "OPENIM_RPC_GROUP_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-rpc-group" # OpenIM openim-rpc-group 日志存储路径
def "OPENIM_RPC_GROUP_LOG_LEVEL" "info" # OpenIM openim-rpc-group 日志级别
def "OPENIM_RPC_GROUP_LOG_MAX_SIZE" "100" # OpenIM openim-rpc-group 日志最大大小(MB)
@@ -547,7 +529,7 @@ def "OPENIM_RPC_GROUP_LOG_WITH_STACK" "${LOG_WITH_STACK}" # Open
###################### OpenIM openim-rpc-msg ######################
def "OPENIM_RPC_MSG_HOST" "127.0.0.1"
def "OPENIM_RPC_MSG_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-rpc-msg" # OpenIM openim-rpc-msg 二进制文件路径
-def "OPENIM_RPC_MSG_CONFIG" ""${OPENIM_ROOT}"/config/" # OpenIM openim-rpc-msg 配置文件路径
+def "OPENIM_RPC_MSG_CONFIG" "${OPENIM_ROOT}/config/" # OpenIM openim-rpc-msg 配置文件路径
def "OPENIM_RPC_MSG_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-rpc-msg" # OpenIM openim-rpc-msg 日志存储路径
def "OPENIM_RPC_MSG_LOG_LEVEL" "info" # OpenIM openim-rpc-msg 日志级别
def "OPENIM_RPC_MSG_LOG_MAX_SIZE" "100" # OpenIM openim-rpc-msg 日志最大大小(MB)
@@ -559,7 +541,7 @@ def "OPENIM_RPC_MSG_LOG_WITH_STACK" "${LOG_WITH_STACK}" # OpenIM o
###################### OpenIM openim-rpc-third ######################
def "OPENIM_RPC_THIRD_HOST" "127.0.0.1"
def "OPENIM_RPC_THIRD_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-rpc-third" # OpenIM openim-rpc-third 二进制文件路径
-def "OPENIM_RPC_THIRD_CONFIG" ""${OPENIM_ROOT}"/config/" # OpenIM openim-rpc-third 配置文件路径
+def "OPENIM_RPC_THIRD_CONFIG" "${OPENIM_ROOT}/config/" # OpenIM openim-rpc-third 配置文件路径
def "OPENIM_RPC_THIRD_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-rpc-third" # OpenIM openim-rpc-third 日志存储路径
def "OPENIM_RPC_THIRD_LOG_LEVEL" "info" # OpenIM openim-rpc-third 日志级别
def "OPENIM_RPC_THIRD_LOG_MAX_SIZE" "100" # OpenIM openim-rpc-third 日志最大大小(MB)
@@ -571,7 +553,7 @@ def "OPENIM_RPC_THIRD_LOG_WITH_STACK" "${LOG_WITH_STACK}" # Open
###################### OpenIM openim-rpc-user ######################
def "OPENIM_RPC_USER_HOST" "127.0.0.1"
def "OPENIM_RPC_USER_BINARY" "${OPENIM_OUTPUT_HOSTBIN}/openim-rpc-user" # OpenIM openim-rpc-user 二进制文件路径
-def "OPENIM_RPC_USER_CONFIG" ""${OPENIM_ROOT}"/config/" # OpenIM openim-rpc-user 配置文件路径
+def "OPENIM_RPC_USER_CONFIG" "${OPENIM_ROOT}/config/" # OpenIM openim-rpc-user 配置文件路径
def "OPENIM_RPC_USER_LOG_DIR" "${LOG_STORAGE_LOCATION}/openim-rpc-user" # OpenIM openim-rpc-user 日志存储路径
def "OPENIM_RPC_USER_LOG_LEVEL" "info" # OpenIM openim-rpc-user 日志级别
def "OPENIM_RPC_USER_LOG_MAX_SIZE" "100" # OpenIM openim-rpc-user 日志最大大小(MB)
diff --git a/scripts/install/install-protobuf.sh b/scripts/install/install-protobuf.sh
index 33ceaeb0d..838b390b5 100755
--- a/scripts/install/install-protobuf.sh
+++ b/scripts/install/install-protobuf.sh
@@ -21,17 +21,17 @@
# This tool is customized to meet the specific needs of OpenIM and resides in its separate repository.
# It can be downloaded from the following link:
# https://github.com/OpenIMSDK/Open-IM-Protoc/releases/tag/v1.0.0
-#
+#
# About the tool:
# https://github.com/openimsdk/open-im-server/blob/main/docs/contrib/protoc-tools.md
# Download link (Windows): https://github.com/OpenIMSDK/Open-IM-Protoc/releases/download/v1.0.0/windows.zip
# Download link (Linux): https://github.com/OpenIMSDK/Open-IM-Protoc/releases/download/v1.0.0/linux.zip
-#
+#
# Installation steps (taking Windows as an example):
# 1. Visit the above link and download the version suitable for Windows.
# 2. Extract the downloaded file.
# 3. Add the extracted tool to your PATH environment variable so that it can be run directly from the command line.
-#
+#
# Note: The specific installation and usage instructions may vary based on the tool's actual implementation. It's advised to refer to official documentation.
# --------------------------------------------------------------
@@ -40,79 +40,79 @@ DOWNLOAD_DIR="/tmp/openim-protoc"
INSTALL_DIR="/usr/local/bin"
function help_message {
- echo "Usage: ./install-protobuf.sh [option]"
- echo "Options:"
- echo "-i, --install Install the OpenIM Protoc tool."
- echo "-u, --uninstall Uninstall the OpenIM Protoc tool."
- echo "-r, --reinstall Reinstall the OpenIM Protoc tool."
- echo "-c, --check Check if the OpenIM Protoc tool is installed."
- echo "-h, --help Display this help message."
+ echo "Usage: ./install-protobuf.sh [option]"
+ echo "Options:"
+ echo "-i, --install Install the OpenIM Protoc tool."
+ echo "-u, --uninstall Uninstall the OpenIM Protoc tool."
+ echo "-r, --reinstall Reinstall the OpenIM Protoc tool."
+ echo "-c, --check Check if the OpenIM Protoc tool is installed."
+ echo "-h, --help Display this help message."
}
function install_protobuf {
- echo "Installing OpenIM Protoc tool..."
-
- # Create temporary directory and download the zip file
- mkdir -p $DOWNLOAD_DIR
- wget $PROTOC_DOWNLOAD_URL -O $DOWNLOAD_DIR/linux.zip
-
- # Unzip the file
- unzip -o $DOWNLOAD_DIR/linux.zip -d $DOWNLOAD_DIR
-
- # Move binaries to the install directory and make them executable
- sudo cp $DOWNLOAD_DIR/linux/protoc $INSTALL_DIR/
- sudo cp $DOWNLOAD_DIR/linux/protoc-gen-go $INSTALL_DIR/
- sudo chmod +x $INSTALL_DIR/protoc
- sudo chmod +x $INSTALL_DIR/protoc-gen-go
-
- # Clean up
- rm -rf $DOWNLOAD_DIR
-
- echo "OpenIM Protoc tool installed successfully!"
+ echo "Installing OpenIM Protoc tool..."
+
+ # Create temporary directory and download the zip file
+ mkdir -p $DOWNLOAD_DIR
+ wget $PROTOC_DOWNLOAD_URL -O $DOWNLOAD_DIR/linux.zip
+
+ # Unzip the file
+ unzip -o $DOWNLOAD_DIR/linux.zip -d $DOWNLOAD_DIR
+
+ # Move binaries to the install directory and make them executable
+ sudo cp $DOWNLOAD_DIR/linux/protoc $INSTALL_DIR/
+ sudo cp $DOWNLOAD_DIR/linux/protoc-gen-go $INSTALL_DIR/
+ sudo chmod +x $INSTALL_DIR/protoc
+ sudo chmod +x $INSTALL_DIR/protoc-gen-go
+
+ # Clean up
+ rm -rf $DOWNLOAD_DIR
+
+ echo "OpenIM Protoc tool installed successfully!"
}
function uninstall_protobuf {
- echo "Uninstalling OpenIM Protoc tool..."
-
- # Removing binaries from the install directory
- sudo rm -f $INSTALL_DIR/protoc
- sudo rm -f $INSTALL_DIR/protoc-gen-go
-
- echo "OpenIM Protoc tool uninstalled successfully!"
+ echo "Uninstalling OpenIM Protoc tool..."
+
+ # Removing binaries from the install directory
+ sudo rm -f $INSTALL_DIR/protoc
+ sudo rm -f $INSTALL_DIR/protoc-gen-go
+
+ echo "OpenIM Protoc tool uninstalled successfully!"
}
function reinstall_protobuf {
- echo "Reinstalling OpenIM Protoc tool..."
- uninstall_protobuf
- install_protobuf
+ echo "Reinstalling OpenIM Protoc tool..."
+ uninstall_protobuf
+ install_protobuf
}
function check_protobuf {
- echo "Checking for OpenIM Protoc tool installation..."
-
- which protoc > /dev/null 2>&1
- if [ $? -eq 0 ]; then
- echo "OpenIM Protoc tool is installed."
- else
- echo "OpenIM Protoc tool is not installed."
- fi
+ echo "Checking for OpenIM Protoc tool installation..."
+
+ which protoc > /dev/null 2>&1
+ if [ $? -eq 0 ]; then
+ echo "OpenIM Protoc tool is installed."
+ else
+ echo "OpenIM Protoc tool is not installed."
+ fi
}
while [ "$1" != "" ]; do
- case $1 in
- -i | --install ) install_protobuf
- ;;
- -u | --uninstall ) uninstall_protobuf
- ;;
- -r | --reinstall ) reinstall_protobuf
- ;;
- -c | --check ) check_protobuf
- ;;
- -h | --help ) help_message
- exit
- ;;
- * ) help_message
- exit 1
- esac
- shift
+ case $1 in
+ -i | --install ) install_protobuf
+ ;;
+ -u | --uninstall ) uninstall_protobuf
+ ;;
+ -r | --reinstall ) reinstall_protobuf
+ ;;
+ -c | --check ) check_protobuf
+ ;;
+ -h | --help ) help_message
+ exit
+ ;;
+ * ) help_message
+ exit 1
+ esac
+ shift
done
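
The reindented helper keeps its original single-letter interface, so the usual flow is to install once and then verify:

```bash
./scripts/install/install-protobuf.sh --install   # download protoc / protoc-gen-go into /usr/local/bin
./scripts/install/install-protobuf.sh --check     # confirm protoc is on the PATH
```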
diff --git a/scripts/install/install.sh b/scripts/install/install.sh
index b88fe9083..d5ec5b7f7 100755
--- a/scripts/install/install.sh
+++ b/scripts/install/install.sh
@@ -14,38 +14,38 @@
# limitations under the License.
#
# OpenIM Server Installation Script
-#
+#
# Description:
-# This script is designed to handle the installation, Is a deployment solution
+# This script handles the installation, uninstallation, and status checking
+# of OpenIM components on the server. It is a deployment solution that uses
+# the Linux systemd extension. OpenIM is a communication and messaging platform.
-#
+#
# Usage:
-# To utilize this script, you need to invoke it with specific commands
+# To utilize this script, you need to invoke it with specific commands
# and options as detailed below.
-#
+#
# Commands:
-# -i, --install : Use this command to initiate the installation of all
+# -i, --install : Use this command to initiate the installation of all
# OpenIM components.
-# -u, --uninstall : Use this command to uninstall or remove all
+# -u, --uninstall : Use this command to uninstall or remove all
# OpenIM components from the server.
-# -s, --status : This command can be used to check and report the
+# -s, --status : This command can be used to check and report the
# current operational status of the installed OpenIM components.
# -h, --help : For any assistance or to view the available commands,
# use this command to display the help menu.
-#
+#
# Example Usage:
# To install all OpenIM components:
-# ./scripts/install/install.sh -i
-# or
-# ./scripts/install/install.sh --install
-#
+# ./scripts/install/install.sh -i
+# or
+# ./scripts/install/install.sh --install
+#
# Note:
# Ensure you have the necessary privileges to execute installation or
-# uninstallation operations. It's generally recommended to take a backup
+# uninstallation operations. It's generally recommended to take a backup
# before making major changes.
-#
+#
###############################################################################
OPENIM_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd -P)
@@ -57,99 +57,99 @@ ${OPENIM_ROOT}/scripts/install/test.sh
# Detailed help function
function openim::install::show_help() {
- echo "OpenIM Installer"
- echo "Usage: $0 [options]"
- echo ""
- echo "Commands:"
- echo " -i, --install Install all OpenIM components."
- echo " -u, --uninstall Remove all OpenIM components."
- echo " -s, --status Check the current status of OpenIM components."
- echo " -h, --help Show this help menu."
- echo ""
- echo "Example: "
- echo " $0 -i Will install all OpenIM components."
- echo " $0 --install Same as above."
+ echo "OpenIM Installer"
+ echo "Usage: $0 [options]"
+ echo ""
+ echo "Commands:"
+ echo " -i, --install Install all OpenIM components."
+ echo " -u, --uninstall Remove all OpenIM components."
+ echo " -s, --status Check the current status of OpenIM components."
+ echo " -h, --help Show this help menu."
+ echo ""
+ echo "Example: "
+ echo " $0 -i Will install all OpenIM components."
+ echo " $0 --install Same as above."
}
function openim::install::install_openim() {
- openim::common::sudo "mkdir -p ${OPENIM_DATA_DIR} ${OPENIM_INSTALL_DIR} ${OPENIM_CONFIG_DIR} ${OPENIM_LOG_DIR}"
- openim::log::info "check openim dependency"
- openim::common::sudo "cp -r ${OPENIM_ROOT}/config/* ${OPENIM_CONFIG_DIR}/"
-
- ${OPENIM_ROOT}/scripts/genconfig.sh ${ENV_FILE} ${OPENIM_ROOT}/deployments/templates/openim.yaml > ${OPENIM_CONFIG_DIR}/config.yaml
- ${OPENIM_ROOT}/scripts/genconfig.sh ${ENV_FILE} ${OPENIM_ROOT}/deployments/templates/prometheus.yml > ${OPENIM_CONFIG_DIR}/prometheus.yml
-
- openim::util::check_ports ${OPENIM_DEPENDENCY_PORT_LISTARIES[@]}
-
- ${OPENIM_ROOT}/scripts/install/openim-msggateway.sh openim::msggateway::install || return 1
- ${OPENIM_ROOT}/scripts/install/openim-msgtransfer.sh openim::msgtransfer::install || return 1
- ${OPENIM_ROOT}/scripts/install/openim-push.sh openim::push::install || return 1
- ${OPENIM_ROOT}/scripts/install/openim-crontask.sh openim::crontask::install || return 1
- ${OPENIM_ROOT}/scripts/install/openim-rpc.sh openim::rpc::install || return 1
- ${OPENIM_ROOT}/scripts/install/openim-api.sh openim::api::install || return 1
-
- openim::common::sudo "cp -r ${OPENIM_ROOT}/deployments/templates/openim.target /etc/systemd/system/openim.target"
- openim::common::sudo "systemctl daemon-reload"
- openim::common::sudo "systemctl restart openim.target"
- openim::common::sudo "systemctl enable openim.target"
- openim::log::success "openim install success"
+ openim::common::sudo "mkdir -p ${OPENIM_DATA_DIR} ${OPENIM_INSTALL_DIR} ${OPENIM_CONFIG_DIR} ${OPENIM_LOG_DIR}"
+ openim::log::info "check openim dependency"
+ openim::common::sudo "cp -r ${OPENIM_ROOT}/config/* ${OPENIM_CONFIG_DIR}/"
+
+ ${OPENIM_ROOT}/scripts/genconfig.sh ${ENV_FILE} ${OPENIM_ROOT}/deployments/templates/config.yaml > ${OPENIM_CONFIG_DIR}/config.yaml
+ ${OPENIM_ROOT}/scripts/genconfig.sh ${ENV_FILE} ${OPENIM_ROOT}/deployments/templates/prometheus.yml > ${OPENIM_CONFIG_DIR}/prometheus.yml
+
+ openim::util::check_ports ${OPENIM_DEPENDENCY_PORT_LISTARIES[@]}
+
+ ${OPENIM_ROOT}/scripts/install/openim-msggateway.sh openim::msggateway::install || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-msgtransfer.sh openim::msgtransfer::install || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-push.sh openim::push::install || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-crontask.sh openim::crontask::install || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-rpc.sh openim::rpc::install || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-api.sh openim::api::install || return 1
+
+ openim::common::sudo "cp -r ${OPENIM_ROOT}/deployments/templates/openim.target /etc/systemd/system/openim.target"
+ openim::common::sudo "systemctl daemon-reload"
+ openim::common::sudo "systemctl restart openim.target"
+ openim::common::sudo "systemctl enable openim.target"
+ openim::log::success "openim install success"
}
function openim::uninstall::uninstall_openim() {
- openim::log::info "uninstall openim"
-
- ${OPENIM_ROOT}/scripts/install/openim-msggateway.sh openim::msggateway::uninstall || return 1
- ${OPENIM_ROOT}/scripts/install/openim-msgtransfer.sh openim::msgtransfer::uninstall || return 1
- ${OPENIM_ROOT}/scripts/install/openim-push.sh openim::push::uninstall || return 1
- ${OPENIM_ROOT}/scripts/install/openim-crontask.sh openim::crontask::uninstall || return 1
- ${OPENIM_ROOT}/scripts/install/openim-rpc.sh openim::rpc::uninstall || return 1
- ${OPENIM_ROOT}/scripts/install/openim-api.sh openim::api::uninstall || return 1
-
- set +o errexit
- openim::common::sudo "systemctl stop openim.target"
- openim::common::sudo "systemctl disable openim.target"
- openim::common::sudo "rm -f /etc/systemd/system/openim.target"
- set -o errexit
- openim::log::success "openim uninstall success"
+ openim::log::info "uninstall openim"
+
+ ${OPENIM_ROOT}/scripts/install/openim-msggateway.sh openim::msggateway::uninstall || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-msgtransfer.sh openim::msgtransfer::uninstall || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-push.sh openim::push::uninstall || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-crontask.sh openim::crontask::uninstall || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-rpc.sh openim::rpc::uninstall || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-api.sh openim::api::uninstall || return 1
+
+ set +o errexit
+ openim::common::sudo "systemctl stop openim.target"
+ openim::common::sudo "systemctl disable openim.target"
+ openim::common::sudo "rm -f /etc/systemd/system/openim.target"
+ set -o errexit
+ openim::log::success "openim uninstall success"
}
function openim::install::status() {
- openim::log::info "check openim status"
-
- ${OPENIM_ROOT}/scripts/install/openim-msggateway.sh openim::msggateway::status || return 1
- ${OPENIM_ROOT}/scripts/install/openim-msgtransfer.sh openim::msgtransfer::status || return 1
- ${OPENIM_ROOT}/scripts/install/openim-push.sh openim::push::status || return 1
- ${OPENIM_ROOT}/scripts/install/openim-crontask.sh openim::crontask::status || return 1
- ${OPENIM_ROOT}/scripts/install/openim-rpc.sh openim::rpc::status || return 1
- ${OPENIM_ROOT}/scripts/install/openim-api.sh openim::api::status || return 1
-
- openim::log::success "openim status success"
+ openim::log::info "check openim status"
+
+ ${OPENIM_ROOT}/scripts/install/openim-msggateway.sh openim::msggateway::status || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-msgtransfer.sh openim::msgtransfer::status || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-push.sh openim::push::status || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-crontask.sh openim::crontask::status || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-rpc.sh openim::rpc::status || return 1
+ ${OPENIM_ROOT}/scripts/install/openim-api.sh openim::api::status || return 1
+
+ openim::log::success "openim status success"
}
# If no arguments are provided, show help
if [[ $# -eq 0 ]]; then
- openim::install::show_help
- exit 0
+ openim::install::show_help
+ exit 0
fi
# Argument parsing to call functions based on user input
while (( "$#" )); do
- case "$1" in
- -i|--install)
- openim::install::install_openim
- shift
- ;;
- -u|--uninstall)
- openim::uninstall::uninstall_openim
- shift
- ;;
- -s|--status)
- openim::install::status
- shift
- ;;
- -h|--help|*)
- openim::install::show_help
- exit 0
- ;;
- esac
+ case "$1" in
+ -i|--install)
+ openim::install::install_openim
+ shift
+ ;;
+ -u|--uninstall)
+ openim::uninstall::uninstall_openim
+ shift
+ ;;
+ -s|--status)
+ openim::install::status
+ shift
+ ;;
+ -h|--help|*)
+ openim::install::show_help
+ exit 0
+ ;;
+ esac
done
\ No newline at end of file
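
For reference, the combined install controller shown above is normally driven through its short or long flags; the script path below is an assumption based on the repository layout, so adjust it to wherever the controller actually lives in your checkout:

```bash
# Hypothetical invocations of the install controller above;
# the path scripts/install/install.sh is assumed, not confirmed by this diff.
./scripts/install/install.sh --install     # build, configure and enable all components via systemd
./scripts/install/install.sh --status      # run the per-component status checks
./scripts/install/install.sh --uninstall   # stop, disable and remove the openim.target units
```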
diff --git a/scripts/install/openim-api.sh b/scripts/install/openim-api.sh
index a40e23611..c81dfcd0d 100755
--- a/scripts/install/openim-api.sh
+++ b/scripts/install/openim-api.sh
@@ -34,55 +34,57 @@ readonly OPENIM_API_SERVICE_TARGETS=(
readonly OPENIM_API_SERVICE_LISTARIES=("${OPENIM_API_SERVICE_TARGETS[@]##*/}")
function openim::api::start() {
- echo "++ OPENIM_API_SERVICE_LISTARIES: ${OPENIM_API_SERVICE_LISTARIES[@]}"
- echo "++ OPENIM_API_PORT_LISTARIES: ${OPENIM_API_PORT_LISTARIES[@]}"
- echo "++ OpenIM API config path: ${OPENIM_API_CONFIG}"
- openim::log::info "Starting ${SERVER_NAME} ..."
+ rm -rf "$TMP_LOG_FILE"
- printf "+------------------------+--------------+\n"
- printf "| Service Name | Port |\n"
- printf "+------------------------+--------------+\n"
+ echo "++ OPENIM_API_SERVICE_LISTARIES: ${OPENIM_API_SERVICE_LISTARIES[@]}"
+ echo "++ OPENIM_API_PORT_LISTARIES: ${OPENIM_API_PORT_LISTARIES[@]}"
+ echo "++ OpenIM API config path: ${OPENIM_API_CONFIG}"
- length=${#OPENIM_API_SERVICE_LISTARIES[@]}
+ openim::log::info "Starting ${SERVER_NAME} ..."
- for ((i=0; i<$length; i++)); do
+ printf "+------------------------+--------------+\n"
+ printf "| Service Name | Port |\n"
+ printf "+------------------------+--------------+\n"
+
+ length=${#OPENIM_API_SERVICE_LISTARIES[@]}
+
+ for ((i=0; i<$length; i++)); do
printf "| %-22s | %6s |\n" "${OPENIM_API_SERVICE_LISTARIES[$i]}" "${OPENIM_API_PORT_LISTARIES[$i]}"
printf "+------------------------+--------------+\n"
- done
- # start all api services
- for ((i = 0; i < ${#OPENIM_API_SERVICE_LISTARIES[*]}; i++)); do
+ done
+ # start all api services
+ for ((i = 0; i < ${#OPENIM_API_SERVICE_LISTARIES[*]}; i++)); do
openim::util::stop_services_on_ports ${OPENIM_API_PORT_LISTARIES[$i]}
openim::log::info "OpenIM ${OPENIM_API_SERVICE_LISTARIES[$i]} config path: ${OPENIM_API_CONFIG}"
-
+
# Get the service and Prometheus ports.
OPENIM_API_SERVICE_PORTS=( $(openim::util::list-to-string ${OPENIM_API_PORT_LISTARIES[$i]}) )
-
+
# TODO Only one port is supported. An error occurs on multiple ports
if [ ${#OPENIM_API_SERVICE_PORTS[@]} -ne 1 ]; then
- openim::log::error_exit "Set only one port for ${OPENIM_API_SERVICE_LISTARIES[$i]} service."
+ openim::log::error_exit "Set only one port for ${OPENIM_API_SERVICE_LISTARIES[$i]} service."
fi
-
+
for ((j = 0; j < ${#OPENIM_API_SERVICE_PORTS[@]}; j++)); do
- openim::log::info "Starting ${OPENIM_API_SERVICE_LISTARIES[$i]} service, port: ${OPENIM_API_SERVICE_PORTS[j]}, binary root: ${OPENIM_OUTPUT_HOSTBIN}/${OPENIM_API_SERVICE_LISTARIES[$i]}"
- openim::api::start_service "${OPENIM_API_SERVICE_LISTARIES[$i]}" "${OPENIM_API_PORT_LISTARIES[j]}"
- sleep 1
- done
+ openim::log::info "Starting ${OPENIM_API_SERVICE_LISTARIES[$i]} service, port: ${OPENIM_API_SERVICE_PORTS[j]}, binary root: ${OPENIM_OUTPUT_HOSTBIN}/${OPENIM_API_SERVICE_LISTARIES[$i]}"
+ openim::api::start_service "${OPENIM_API_SERVICE_LISTARIES[$i]}" "${OPENIM_API_PORT_LISTARIES[j]}"
+ sleep 2
done
-
- OPENIM_API_PORT_STRINGARIES=( $(openim::util::list-to-string ${OPENIM_API_PORT_LISTARIES[@]}) )
- openim::util::check_ports ${OPENIM_API_PORT_STRINGARIES[@]}
+ done
+
+ OPENIM_API_PORT_STRINGARIES=( $(openim::util::list-to-string ${OPENIM_API_PORT_LISTARIES[@]}) )
+ openim::util::check_ports ${OPENIM_API_PORT_STRINGARIES[@]}
}
function openim::api::start_service() {
local binary_name="$1"
local service_port="$2"
local prometheus_port="$3"
-
+
local cmd="${OPENIM_OUTPUT_HOSTBIN}/${binary_name} --port ${service_port} -c ${OPENIM_API_CONFIG}"
-
- nohup ${cmd} >> "${LOG_FILE}" 2>&1 &
-
+ nohup ${cmd} >> "${LOG_FILE}" 2> >(tee -a "${STDERR_LOG_FILE}" "$TMP_LOG_FILE") &
+
if [ $? -ne 0 ]; then
openim::log::error_exit "Failed to start ${binary_name} on port ${service_port}."
fi
@@ -100,61 +102,61 @@ EOF
# install openim-api
function openim::api::install() {
- openim::log::info "Installing ${SERVER_NAME} ..."
-
- pushd "${OPENIM_ROOT}"
-
- # 1. Build openim-api
- make build BINS=${SERVER_NAME}
- openim::common::sudo "cp -r ${OPENIM_OUTPUT_HOSTBIN}/${SERVER_NAME} ${OPENIM_INSTALL_DIR}/${SERVER_NAME}"
- openim::log::status "${SERVER_NAME} binary: ${OPENIM_INSTALL_DIR}/${SERVER_NAME}/${SERVER_NAME}"
-
- # 2. Generate and install the openim-api configuration file (config)
- openim::log::status "${SERVER_NAME} config file: ${OPENIM_CONFIG_DIR}/config.yaml"
-
- # 3. Create and install the ${SERVER_NAME} systemd unit file
- echo ${LINUX_PASSWORD} | sudo -S bash -c \
- "SERVER_NAME=${SERVER_NAME} ./scripts/genconfig.sh ${ENV_FILE} deployments/templates/openim.service > ${SYSTEM_FILE_PATH}"
- openim::log::status "${SERVER_NAME} systemd file: ${SYSTEM_FILE_PATH}"
-
- # 4. Start the openim-api service
- openim::common::sudo "systemctl daemon-reload"
- openim::common::sudo "systemctl restart ${SERVER_NAME}"
- openim::common::sudo "systemctl enable ${SERVER_NAME}"
- openim::api::status || return 1
- openim::api::info
-
- openim::log::info "install ${SERVER_NAME} successfully"
- popd
+ openim::log::info "Installing ${SERVER_NAME} ..."
+
+ pushd "${OPENIM_ROOT}"
+
+ # 1. Build openim-api
+ make build BINS=${SERVER_NAME}
+ openim::common::sudo "cp -r ${OPENIM_OUTPUT_HOSTBIN}/${SERVER_NAME} ${OPENIM_INSTALL_DIR}/${SERVER_NAME}"
+ openim::log::status "${SERVER_NAME} binary: ${OPENIM_INSTALL_DIR}/${SERVER_NAME}/${SERVER_NAME}"
+
+ # 2. Generate and install the openim-api configuration file (config)
+ openim::log::status "${SERVER_NAME} config file: ${OPENIM_CONFIG_DIR}/config.yaml"
+
+ # 3. Create and install the ${SERVER_NAME} systemd unit file
+ echo ${LINUX_PASSWORD} | sudo -S bash -c \
+ "SERVER_NAME=${SERVER_NAME} ./scripts/genconfig.sh ${ENV_FILE} deployments/templates/openim.service > ${SYSTEM_FILE_PATH}"
+ openim::log::status "${SERVER_NAME} systemd file: ${SYSTEM_FILE_PATH}"
+
+ # 4. Start the openim-api service
+ openim::common::sudo "systemctl daemon-reload"
+ openim::common::sudo "systemctl restart ${SERVER_NAME}"
+ openim::common::sudo "systemctl enable ${SERVER_NAME}"
+ openim::api::status || return 1
+ openim::api::info
+
+ openim::log::info "install ${SERVER_NAME} successfully"
+ popd
}
# Unload
function openim::api::uninstall() {
- openim::log::info "Uninstalling ${SERVER_NAME} ..."
-
- set +o errexit
- openim::common::sudo "systemctl stop ${SERVER_NAME}"
- openim::common::sudo "systemctl disable ${SERVER_NAME}"
- openim::common::sudo "rm -f ${OPENIM_INSTALL_DIR}/${SERVER_NAME}"
- openim::common::sudo "rm -f ${OPENIM_CONFIG_DIR}/${SERVER_NAME}.yaml"
- openim::common::sudo "rm -f /etc/systemd/system/${SERVER_NAME}.service"
- set -o errexit
- openim::log::info "uninstall ${SERVER_NAME} successfully"
+ openim::log::info "Uninstalling ${SERVER_NAME} ..."
+
+ set +o errexit
+ openim::common::sudo "systemctl stop ${SERVER_NAME}"
+ openim::common::sudo "systemctl disable ${SERVER_NAME}"
+ openim::common::sudo "rm -f ${OPENIM_INSTALL_DIR}/${SERVER_NAME}"
+ openim::common::sudo "rm -f ${OPENIM_CONFIG_DIR}/${SERVER_NAME}.yaml"
+ openim::common::sudo "rm -f /etc/systemd/system/${SERVER_NAME}.service"
+ set -o errexit
+ openim::log::info "uninstall ${SERVER_NAME} successfully"
}
# Status Check
function openim::api::status() {
- openim::log::info "Checking ${SERVER_NAME} status ..."
-
- # Check the running status of the ${SERVER_NAME}. If active (running) is displayed, the ${SERVER_NAME} is started successfully.
- systemctl status ${SERVER_NAME}|grep -q 'active' || {
- openim::log::error "${SERVER_NAME} failed to start, maybe not installed properly"
- return 1
- }
-
- openim::util::check_ports ${OPENIM_API_PORT_LISTARIES[@]}
+ openim::log::info "Checking ${SERVER_NAME} status ..."
+
+ # Check the running status of the ${SERVER_NAME}. If active (running) is displayed, the ${SERVER_NAME} is started successfully.
+ systemctl status ${SERVER_NAME}|grep -q 'active' || {
+ openim::log::error "${SERVER_NAME} failed to start, maybe not installed properly"
+ return 1
+ }
+
+ openim::util::check_ports ${OPENIM_API_PORT_LISTARIES[@]}
}
if [[ "$*" =~ openim::api:: ]];then
- eval $*
+ eval $*
fi
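
The recurring change across these start functions replaces `2>&1` with a process substitution: stdout keeps appending to the main log, while stderr is duplicated into a persistent error log and a per-run temp log (the temp log is wiped at the top of each start). A minimal sketch of that pattern, with placeholder service and file names:

```bash
# Sketch of the stderr-splitting pattern used above; my_service and the log
# paths are placeholders, not names defined by the OpenIM scripts.
LOG_FILE=./service.log
STDERR_LOG_FILE=./service_error.log
TMP_LOG_FILE=./service_tmp.log

rm -f "${TMP_LOG_FILE}"   # keep only the current run's errors in the temp log

# stdout appends to LOG_FILE; stderr is appended to both error logs by tee
# (tee's own stdout still echoes the errors to the invoking terminal).
nohup ./my_service >> "${LOG_FILE}" 2> >(tee -a "${STDERR_LOG_FILE}" "${TMP_LOG_FILE}") &
```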
diff --git a/scripts/install/openim-crontask.sh b/scripts/install/openim-crontask.sh
index 26dc1a47f..6068e97d5 100755
--- a/scripts/install/openim-crontask.sh
+++ b/scripts/install/openim-crontask.sh
@@ -13,12 +13,12 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-#
+#
# OpenIM CronTask Control Script
-#
+#
# Description:
# This script provides a control interface for the OpenIM CronTask service within a Linux environment. It supports two installation methods: installation via function calls to systemctl, and direct installation through background processes.
-#
+#
# Features:
# 1. Robust error handling leveraging Bash built-ins such as 'errexit', 'nounset', and 'pipefail'.
# 2. Capability to source common utility functions and configurations, ensuring environmental consistency.
@@ -30,13 +30,13 @@
# 1. Direct Script Execution:
# This will start the OpenIM CronTask directly through a background process.
# Example: ./openim-crontask.sh openim::crontask::start
-#
+#
# 2. Controlling through Functions for systemctl operations:
# Specific operations like installation, uninstallation, and status check can be executed by passing the respective function name as an argument to the script.
# Example: ./openim-crontask.sh openim::crontask::install
-#
+#
# Note: Ensure that the appropriate permissions and environmental variables are set prior to script execution.
-#
+#
OPENIM_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd -P)
[[ -z ${COMMON_SOURCED} ]] && source "${OPENIM_ROOT}"/scripts/install/common.sh
@@ -44,14 +44,19 @@ OPENIM_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd -P)
SERVER_NAME="openim-crontask"
function openim::crontask::start() {
- openim::log::info "Start OpenIM Cron, binary root: ${SERVER_NAME}"
- openim::log::status "Start OpenIM Cron, path: ${OPENIM_CRONTASK_BINARY}"
- openim::util::stop_services_with_name ${OPENIM_CRONTASK_BINARY}
+ rm -rf "$TMP_LOG_FILE"
+
+ openim::log::info "Start OpenIM Cron, binary root: ${SERVER_NAME}"
+ openim::log::status "Start OpenIM Cron, path: ${OPENIM_CRONTASK_BINARY}"
+
+ openim::util::stop_services_with_name ${OPENIM_CRONTASK_BINARY}
+
+ openim::log::status "start cron_task process, path: ${OPENIM_CRONTASK_BINARY}"
+
+ nohup ${OPENIM_CRONTASK_BINARY} -c ${OPENIM_PUSH_CONFIG} >> ${LOG_FILE} 2> >(tee -a "${STDERR_LOG_FILE}" "$TMP_LOG_FILE") &
+ openim::util::check_process_names ${SERVER_NAME}
- openim::log::status "start cron_task process, path: ${OPENIM_CRONTASK_BINARY}"
- nohup ${OPENIM_CRONTASK_BINARY} -c ${OPENIM_PUSH_CONFIG} >> ${LOG_FILE} 2>&1 &
- openim::util::check_process_names ${SERVER_NAME}
}
###################################### Linux Systemd ######################################
@@ -67,28 +72,28 @@ EOF
# install openim-crontask
function openim::crontask::install() {
pushd "${OPENIM_ROOT}"
-
+
# 1. Build openim-crontask
make build BINS=${SERVER_NAME}
-
+
openim::common::sudo "cp -r ${OPENIM_OUTPUT_HOSTBIN}/${SERVER_NAME} ${OPENIM_INSTALL_DIR}/${SERVER_NAME}"
openim::log::status "${SERVER_NAME} binary: ${OPENIM_INSTALL_DIR}/${SERVER_NAME}/${SERVER_NAME}"
-
+
# 2. Generate and install the openim-crontask configuration file (openim-crontask.yaml)
openim::log::status "${SERVER_NAME} config file: ${OPENIM_CONFIG_DIR}/config.yaml"
-
+
# 3. Create and install the ${SERVER_NAME} systemd unit file
echo ${LINUX_PASSWORD} | sudo -S bash -c \
- "SERVER_NAME=${SERVER_NAME} ./scripts/genconfig.sh ${ENV_FILE} deployments/templates/openim.service > ${SYSTEM_FILE_PATH}"
+ "SERVER_NAME=${SERVER_NAME} ./scripts/genconfig.sh ${ENV_FILE} deployments/templates/openim.service > ${SYSTEM_FILE_PATH}"
openim::log::status "${SERVER_NAME} systemd file: ${SYSTEM_FILE_PATH}"
-
+
# 4. Start the openim-crontask service
openim::common::sudo "systemctl daemon-reload"
openim::common::sudo "systemctl restart ${SERVER_NAME}"
openim::common::sudo "systemctl enable ${SERVER_NAME}"
openim::crontask::status || return 1
openim::crontask::info
-
+
openim::log::info "install ${SERVER_NAME} successfully"
popd
}
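
Each `*::install` function renders the shared systemd unit template through `scripts/genconfig.sh` with `SERVER_NAME` exported, then reloads and enables the unit. A rough standalone equivalent is sketched below; the environment-file location and unit path are assumptions standing in for the values `common.sh` provides:

```bash
# Sketch of the per-service systemd unit generation used by the install functions.
SERVER_NAME=openim-crontask
ENV_FILE=scripts/install/environment.sh                     # assumed env file sourced by common.sh
SYSTEM_FILE_PATH=/etc/systemd/system/${SERVER_NAME}.service

# Run the redirection under root, mirroring the sudo bash -c wrapping in the scripts.
sudo bash -c "SERVER_NAME=${SERVER_NAME} ./scripts/genconfig.sh ${ENV_FILE} deployments/templates/openim.service > ${SYSTEM_FILE_PATH}"
sudo systemctl daemon-reload
sudo systemctl enable --now "${SERVER_NAME}"
```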
diff --git a/scripts/install/openim-man.sh b/scripts/install/openim-man.sh
index 6dda4bfe1..fac5cebea 100755
--- a/scripts/install/openim-man.sh
+++ b/scripts/install/openim-man.sh
@@ -17,7 +17,7 @@
#
# Description:
# This script manages the man pages for the OpenIM software suite.
-# It provides facilities to install, uninstall, and verify the
+# It provides facilities to install, uninstall, and verify the
# installation status of the man pages related to OpenIM components.
#
# Usage:
@@ -26,15 +26,15 @@
# ./openim-man.sh openim::man::status - Check installation status
#
# Dependencies:
-# - Assumes there's a common.sh in ""${OPENIM_ROOT}"/scripts/install/"
+# - Assumes there's a common.sh in "${OPENIM_ROOT}/scripts/install/"
# containing shared functions and variables.
-# - Relies on the script ""${OPENIM_ROOT}"/scripts/update-generated-docs.sh"
+# - Relies on the script "${OPENIM_ROOT}/scripts/update-generated-docs.sh"
# to generate the man pages.
#
# Notes:
-# - This script must be run with appropriate permissions to modify the
+# - This script must be run with appropriate permissions to modify the
# system man directories.
-# - Always ensure you're in the script's directory or provide the correct
+# - Always ensure you're in the script's directory or provide the correct
# path when executing.
################################################################################
@@ -54,43 +54,43 @@ EOF
# Install the man pages for openim
function openim::man::install() {
- # Navigate to the openim root directory
- pushd "${OPENIM_ROOT}" > /dev/null
-
- # Generate man pages for each component
- ""${OPENIM_ROOT}"/scripts/update-generated-docs.sh"
- openim::common::sudo "cp docs/man/man1/* /usr/share/man/man1/"
-
- # Verify installation status
- if openim::man::status; then
- openim::log::info "Installed openim-server man page successfully"
- openim::man::info
- fi
-
- # Return to the original directory
- popd > /dev/null
+ # Navigate to the openim root directory
+ pushd "${OPENIM_ROOT}" > /dev/null
+
+ # Generate man pages for each component
+ "${OPENIM_ROOT}/scripts/update-generated-docs.sh"
+ openim::common::sudo "cp docs/man/man1/* /usr/share/man/man1/"
+
+ # Verify installation status
+ if openim::man::status; then
+ openim::log::info "Installed openim-server man page successfully"
+ openim::man::info
+ fi
+
+ # Return to the original directory
+ popd > /dev/null
}
# Uninstall the man pages for openim
function openim::man::uninstall() {
- # Turn off exit-on-error temporarily to handle non-existing files gracefully
- set +o errexit
- openim::common::sudo "rm -f /usr/share/man/man1/openim-*"
- set -o errexit
-
- openim::log::info "Uninstalled openim man pages successfully"
+ # Turn off exit-on-error temporarily to handle non-existing files gracefully
+ set +o errexit
+ openim::common::sudo "rm -f /usr/share/man/man1/openim-*"
+ set -o errexit
+
+ openim::log::info "Uninstalled openim man pages successfully"
}
# Check the installation status of the man pages
function openim::man::status() {
- if ! ls /usr/share/man/man1/openim-* &> /dev/null; then
- openim::log::error "OpenIM man files not found. Perhaps they were not installed correctly."
- return 1
- fi
- return 0
+ if ! ls /usr/share/man/man1/openim-* &> /dev/null; then
+ openim::log::error "OpenIM man files not found. Perhaps they were not installed correctly."
+ return 1
+ fi
+ return 0
}
# Execute the appropriate function based on the given arguments
if [[ "$*" =~ openim::man:: ]]; then
- eval "$*"
+ eval "$*"
fi
diff --git a/scripts/install/openim-msggateway.sh b/scripts/install/openim-msggateway.sh
index 2b2a84b12..4e591deca 100755
--- a/scripts/install/openim-msggateway.sh
+++ b/scripts/install/openim-msggateway.sh
@@ -26,19 +26,22 @@ openim::util::set_max_fd 200000
SERVER_NAME="openim-msggateway"
function openim::msggateway::start() {
- openim::log::info "Start OpenIM Msggateway, binary root: ${SERVER_NAME}"
- openim::log::status "Start OpenIM Msggateway, path: ${OPENIM_MSGGATEWAY_BINARY}"
- openim::util::stop_services_with_name ${OPENIM_MSGGATEWAY_BINARY}
+ rm -rf "$TMP_LOG_FILE"
- # OpenIM message gateway service port
- OPENIM_MESSAGE_GATEWAY_PORTS=$(openim::util::list-to-string ${OPENIM_MESSAGE_GATEWAY_PORT} )
+ openim::log::info "Start OpenIM Msggateway, binary root: ${SERVER_NAME}"
+ openim::log::status "Start OpenIM Msggateway, path: ${OPENIM_MSGGATEWAY_BINARY}"
+
+ openim::util::stop_services_with_name ${OPENIM_MSGGATEWAY_BINARY}
+
+ # OpenIM message gateway service port
+ OPENIM_MESSAGE_GATEWAY_PORTS=$(openim::util::list-to-string ${OPENIM_MESSAGE_GATEWAY_PORT} )
read -a OPENIM_MSGGATEWAY_PORTS_ARRAY <<< ${OPENIM_MESSAGE_GATEWAY_PORTS}
openim::util::stop_services_on_ports ${OPENIM_MSGGATEWAY_PORTS_ARRAY[*]}
# OpenIM WS port
OPENIM_WS_PORTS=$(openim::util::list-to-string ${OPENIM_WS_PORT} )
read -a OPENIM_WS_PORTS_ARRAY <<< ${OPENIM_WS_PORTS}
-
+
# Message Gateway Prometheus port of the service
MSG_GATEWAY_PROM_PORTS=$(openim::util::list-to-string ${MSG_GATEWAY_PROM_PORT} )
read -a MSG_GATEWAY_PROM_PORTS_ARRAY <<< ${MSG_GATEWAY_PROM_PORTS}
@@ -61,7 +64,7 @@ function openim::msggateway::start() {
PROMETHEUS_PORT_OPTION="--prometheus_port ${MSG_GATEWAY_PROM_PORTS_ARRAY[$i]}"
fi
- nohup ${OPENIM_MSGGATEWAY_BINARY} --port ${OPENIM_MSGGATEWAY_PORTS_ARRAY[$i]} --ws_port ${OPENIM_WS_PORTS_ARRAY[$i]} $PROMETHEUS_PORT_OPTION -c ${OPENIM_MSGGATEWAY_CONFIG} >> ${LOG_FILE} 2>&1 &
+ nohup ${OPENIM_MSGGATEWAY_BINARY} --port ${OPENIM_MSGGATEWAY_PORTS_ARRAY[$i]} --ws_port ${OPENIM_WS_PORTS_ARRAY[$i]} $PROMETHEUS_PORT_OPTION -c ${OPENIM_MSGGATEWAY_CONFIG} >> ${LOG_FILE} 2> >(tee -a "${STDERR_LOG_FILE}" "$TMP_LOG_FILE") &
done
openim::util::check_process_names ${SERVER_NAME}
@@ -123,7 +126,7 @@ function openim::msggateway::status() {
# Check the running status of the ${SERVER_NAME}. If active (running) is displayed, the ${SERVER_NAME} is started successfully.
systemctl status ${SERVER_NAME}|grep -q 'active' || {
openim::log::error "${SERVER_NAME} failed to start, maybe not installed properly"
-
+
return 1
}
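
The gateway start path repeatedly turns a comma- or space-separated port setting into a bash array before iterating over instances. A small illustration of that idiom, assuming `openim::util::list-to-string` simply normalises separators to spaces (the real helper lives in `scripts/lib/util.sh`):

```bash
# Illustration only: a simplified stand-in for openim::util::list-to-string.
list_to_string() { echo "${*//,/ }"; }

OPENIM_WS_PORT="10001,10002"                        # example value
OPENIM_WS_PORTS=$(list_to_string "${OPENIM_WS_PORT}")
read -r -a OPENIM_WS_PORTS_ARRAY <<< "${OPENIM_WS_PORTS}"

for port in "${OPENIM_WS_PORTS_ARRAY[@]}"; do
  echo "would start a gateway instance on ws port ${port}"
done
```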
diff --git a/scripts/install/openim-msgtransfer.sh b/scripts/install/openim-msgtransfer.sh
index 18bbb3c02..def22c38b 100755
--- a/scripts/install/openim-msgtransfer.sh
+++ b/scripts/install/openim-msgtransfer.sh
@@ -12,6 +12,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
+# Use:
+# ./scripts/install/openim-msgtransfer.sh openim::msgtransfer::start
# Common utilities, variables and checks for all build scripts.
set -o errexit
@@ -26,54 +28,62 @@ openim::util::set_max_fd 200000
SERVER_NAME="openim-msgtransfer"
function openim::msgtransfer::start() {
- openim::log::info "Start OpenIM Msggateway, binary root: ${SERVER_NAME}"
- openim::log::status "Start OpenIM Msggateway, path: ${OPENIM_MSGTRANSFER_BINARY}"
- openim::util::stop_services_with_name ${OPENIM_MSGTRANSFER_BINARY}
-
- # Message Transfer Prometheus port list
- MSG_TRANSFER_PROM_PORTS=(openim::util::list-to-string ${MSG_TRANSFER_PROM_PORT} )
-
- openim::log::status "OpenIM Prometheus ports: ${MSG_TRANSFER_PROM_PORTS[*]}"
-
- openim::log::status "OpenIM Msggateway config path: ${OPENIM_MSGTRANSFER_CONFIG}"
-
- openim::log::info "openim maggateway num: ${OPENIM_MSGGATEWAY_NUM}"
-
- if [ "${OPENIM_MSGGATEWAY_NUM}" -lt 1 ]; then
+ rm -rf "$TMP_LOG_FILE"
+
+ openim::log::info "Start OpenIM Msgtransfer, binary root: ${SERVER_NAME}"
+ openim::log::status "Start OpenIM Msgtransfer, path: ${OPENIM_MSGTRANSFER_BINARY}"

+
+ openim::util::stop_services_with_name ${OPENIM_MSGTRANSFER_BINARY}
+
+ # Message Transfer Prometheus port list
+ MSG_TRANSFER_PROM_PORTS=( $(openim::util::list-to-string ${MSG_TRANSFER_PROM_PORT}) )
+
+ openim::log::status "OpenIM Prometheus ports: ${MSG_TRANSFER_PROM_PORTS[*]}"
+
+ openim::log::status "OpenIM Msgtransfer config path: ${OPENIM_MSGTRANSFER_CONFIG}"
+
+ openim::log::info "openim msggateway num: ${OPENIM_MSGGATEWAY_NUM}"
+
+ if [ "${OPENIM_MSGGATEWAY_NUM}" -lt 1 ]; then
openim::log::error_exit "OPENIM_MSGGATEWAY_NUM must be greater than 0"
- fi
-
- if [ ${OPENIM_MSGGATEWAY_NUM} -ne $((${#MSG_TRANSFER_PROM_PORTS[@]} - 1)) ]; then
+ fi
+
+ if [ ${OPENIM_MSGGATEWAY_NUM} -ne $((${#MSG_TRANSFER_PROM_PORTS[@]} - 1)) ]; then
openim::log::error_exit "OPENIM_MSGGATEWAY_NUM must be equal to the number of MSG_TRANSFER_PROM_PORTS"
+ fi
+
+ for (( i=0; i<$OPENIM_MSGGATEWAY_NUM; i++ )) do
+ openim::log::info "prometheus port: ${MSG_TRANSFER_PROM_PORTS[$i]}"
+ PROMETHEUS_PORT_OPTION=""
+ if [[ -n "${OPENIM_PROMETHEUS_PORTS[$i]}" ]]; then
+ PROMETHEUS_PORT_OPTION="--prometheus_port ${OPENIM_PROMETHEUS_PORTS[$i]}"
fi
-
- for (( i=0; i<$OPENIM_MSGGATEWAY_NUM; i++ )) do
- openim::log::info "prometheus port: ${MSG_TRANSFER_PROM_PORTS[$i]}"
- PROMETHEUS_PORT_OPTION=""
- if [[ -n "${OPENIM_PROMETHEUS_PORTS[$i]}" ]]; then
- PROMETHEUS_PORT_OPTION="--prometheus_port ${OPENIM_PROMETHEUS_PORTS[$i]}"
- fi
- nohup ${OPENIM_MSGTRANSFER_BINARY} ${PROMETHEUS_PORT_OPTION} -c ${OPENIM_MSGTRANSFER_CONFIG} -n ${i}>> ${LOG_FILE} 2>&1 &
- done
-
- openim::util::check_process_names "${OPENIM_OUTPUT_HOSTBIN}/${SERVER_NAME}"
+ nohup ${OPENIM_MSGTRANSFER_BINARY} ${PROMETHEUS_PORT_OPTION} -c ${OPENIM_MSGTRANSFER_CONFIG} -n ${i} >> ${LOG_FILE} 2> >(tee -a "${STDERR_LOG_FILE}" "$TMP_LOG_FILE") &
+ done
+
+ openim::util::check_process_names "${OPENIM_OUTPUT_HOSTBIN}/${SERVER_NAME}"
}
function openim::msgtransfer::check() {
- PIDS=$(pgrep -f "${OPENIM_OUTPUT_HOSTBIN}/openim-msgtransfer")
-
- NUM_PROCESSES=$(echo "$PIDS" | wc -l)
- # NUM_PROCESSES=$(($NUM_PROCESSES - 1))
-
- if [ "$NUM_PROCESSES" -eq "$OPENIM_MSGGATEWAY_NUM" ]; then
- openim::log::info "Found $OPENIM_MSGGATEWAY_NUM processes named $OPENIM_OUTPUT_HOSTBIN"
- for PID in $PIDS; do
+ PIDS=$(pgrep -f "${OPENIM_OUTPUT_HOSTBIN}/openim-msgtransfer")
+
+ NUM_PROCESSES=$(echo "$PIDS" | wc -l)
+
+ if [ "$NUM_PROCESSES" -eq "$OPENIM_MSGGATEWAY_NUM" ]; then
+ openim::log::info "Found $OPENIM_MSGGATEWAY_NUM processes named $OPENIM_OUTPUT_HOSTBIN"
+ for PID in $PIDS; do
+ if [[ "$OSTYPE" == "linux-gnu"* ]]; then
ps -p $PID -o pid,cmd
- done
- else
- openim::log::error_exit "Expected $OPENIM_MSGGATEWAY_NUM openim msgtransfer processes, but found $NUM_PROCESSES msgtransfer processes."
- fi
+ elif [[ "$OSTYPE" == "darwin"* ]]; then
+ ps -p $PID -o pid,comm
+ else
+ openim::log::error "Unsupported OS type: $OSTYPE"
+ fi
+ done
+ else
+ openim::log::error_exit "Expected $OPENIM_MSGGATEWAY_NUM openim msgtransfer processes, but found $NUM_PROCESSES msgtransfer processes."
+ fi
}
###################################### Linux Systemd ######################################
@@ -89,30 +99,30 @@ EOF
# install openim-msgtransfer
function openim::msgtransfer::install() {
pushd "${OPENIM_ROOT}"
-
+
# 1. Build openim-msgtransfer
make build BINS=${SERVER_NAME}
-
+
openim::common::sudo "cp -r ${OPENIM_OUTPUT_HOSTBIN}/${SERVER_NAME} ${OPENIM_INSTALL_DIR}/${SERVER_NAME}"
openim::log::status "${SERVER_NAME} binary: ${OPENIM_INSTALL_DIR}/${SERVER_NAME}/${SERVER_NAME}"
-
+
openim::log::status "${SERVER_NAME} binary: ${OPENIM_INSTALL_DIR}/bin/${SERVER_NAME}"
-
+
# 2. Generate and install the openim-msgtransfer configuration file (openim-msgtransfer.yaml)
# nono
-
+
# 3. Create and install the ${SERVER_NAME} systemd unit file
echo ${LINUX_PASSWORD} | sudo -S bash -c \
- "SERVER_NAME=${SERVER_NAME} ./scripts/genconfig.sh ${ENV_FILE} deployments/templates/openim.service > ${SYSTEM_FILE_PATH}"
+ "SERVER_NAME=${SERVER_NAME} ./scripts/genconfig.sh ${ENV_FILE} deployments/templates/openim.service > ${SYSTEM_FILE_PATH}"
openim::log::status "${SERVER_NAME} systemd file: ${SYSTEM_FILE_PATH}"
-
+
# 4. Start the openim-msgtransfer service
openim::common::sudo "systemctl daemon-reload"
openim::common::sudo "systemctl restart ${SERVER_NAME}"
openim::common::sudo "systemctl enable ${SERVER_NAME}"
openim::msgtransfer::status || return 1
openim::msgtransfer::info
-
+
openim::log::info "install ${SERVER_NAME} successfully"
popd
}
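
The new `openim::msgtransfer::check` counts running transfer processes with `pgrep` and prints them with `ps`, using `-o pid,cmd` on Linux and `-o pid,comm` on macOS. A stripped-down version of that check is sketched below; the binary name and expected count are placeholders, and `grep -c .` is used here to avoid the off-by-one that `wc -l` yields on empty `pgrep` output:

```bash
# Minimal sketch of the process-count check; BINARY and EXPECTED are placeholders.
BINARY=openim-msgtransfer
EXPECTED=4

PIDS=$(pgrep -f "${BINARY}" || true)
NUM=$(echo "${PIDS}" | grep -c . || true)   # count non-empty lines; 0 when nothing matched

if [ "${NUM}" -eq "${EXPECTED}" ]; then
  for pid in ${PIDS}; do
    if [[ "$OSTYPE" == "darwin"* ]]; then
      ps -p "${pid}" -o pid,comm            # macOS ps has no cmd keyword
    else
      ps -p "${pid}" -o pid,cmd
    fi
  done
else
  echo "expected ${EXPECTED} ${BINARY} processes, found ${NUM}" >&2
fi
```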
diff --git a/scripts/install/openim-push.sh b/scripts/install/openim-push.sh
index c17b80e67..4d14ca675 100755
--- a/scripts/install/openim-push.sh
+++ b/scripts/install/openim-push.sh
@@ -14,10 +14,10 @@
# limitations under the License.
#
# OpenIM Push Control Script
-#
+#
# Description:
# This script provides a control interface for the OpenIM Push service within a Linux environment. It supports two installation methods: installation via function calls to systemctl, and direct installation through background processes.
-#
+#
# Features:
# 1. Robust error handling leveraging Bash built-ins such as 'errexit', 'nounset', and 'pipefail'.
# 2. Capability to source common utility functions and configurations, ensuring environmental consistency.
@@ -29,7 +29,7 @@
# 1. Direct Script Execution:
# This will start the OpenIM push directly through a background process.
# Example: ./openim-push.sh
-#
+#
# 2. Controlling through Functions for systemctl operations:
# Specific operations like installation, uninstallation, and status check can be executed by passing the respective function name as an argument to the script.
# Example: ./openim-push.sh openim::push::install
@@ -39,7 +39,7 @@
# export OPENIM_PUSH_PORT="9090 9091 9092"
#
# Note: Ensure that the appropriate permissions and environmental variables are set prior to script execution.
-#
+#
set -o errexit
set +o nounset
set -o pipefail
@@ -50,30 +50,33 @@ OPENIM_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd -P)
SERVER_NAME="openim-push"
function openim::push::start() {
- openim::log::status "Start OpenIM Push, binary root: ${SERVER_NAME}"
- openim::log::info "Start OpenIM Push, path: ${OPENIM_PUSH_BINARY}"
-
- openim::log::status "prepare start push process, path: ${OPENIM_PUSH_BINARY}"
- openim::log::status "prepare start push process, port: ${OPENIM_PUSH_PORT}, prometheus port: ${PUSH_PROM_PORT}"
-
- OPENIM_PUSH_PORTS_ARRAY=$(openim::util::list-to-string ${OPENIM_PUSH_PORT} )
- PUSH_PROM_PORTS_ARRAY=$(openim::util::list-to-string ${PUSH_PROM_PORT} )
-
- openim::util::stop_services_with_name ${SERVER_NAME}
-
- openim::log::status "push port list: ${OPENIM_PUSH_PORTS_ARRAY[@]}"
- openim::log::status "prometheus port list: ${PUSH_PROM_PORTS_ARRAY[@]}"
- if [ ${#OPENIM_PUSH_PORTS_ARRAY[@]} -ne ${#PUSH_PROM_PORTS_ARRAY[@]} ]; then
- openim::log::error_exit "The length of the two port lists is different!"
- fi
-
- for (( i=0; i<${#OPENIM_PUSH_PORTS_ARRAY[@]}; i++ )); do
- openim::log::info "start push process, port: ${OPENIM_PUSH_PORTS_ARRAY[$i]}, prometheus port: ${PUSH_PROM_PORTS_ARRAY[$i]}"
- nohup ${OPENIM_PUSH_BINARY} --port ${OPENIM_PUSH_PORTS_ARRAY[$i]} -c ${OPENIM_PUSH_CONFIG} --prometheus_port ${PUSH_PROM_PORTS_ARRAY[$i]} >> ${LOG_FILE} 2>&1 &
- done
+ rm -rf "$TMP_LOG_FILE"
+
+ openim::log::status "Start OpenIM Push, binary root: ${SERVER_NAME}"
+ openim::log::info "Start OpenIM Push, path: ${OPENIM_PUSH_BINARY}"
+
+ openim::log::status "prepare start push process, path: ${OPENIM_PUSH_BINARY}"
+ openim::log::status "prepare start push process, port: ${OPENIM_PUSH_PORT}, prometheus port: ${PUSH_PROM_PORT}"
+
+ OPENIM_PUSH_PORTS_ARRAY=( $(openim::util::list-to-string ${OPENIM_PUSH_PORT}) )
+ PUSH_PROM_PORTS_ARRAY=( $(openim::util::list-to-string ${PUSH_PROM_PORT}) )
+
+ openim::util::stop_services_with_name ${SERVER_NAME}
+
+ openim::log::status "push port list: ${OPENIM_PUSH_PORTS_ARRAY[@]}"
+ openim::log::status "prometheus port list: ${PUSH_PROM_PORTS_ARRAY[@]}"
+
+ if [ ${#OPENIM_PUSH_PORTS_ARRAY[@]} -ne ${#PUSH_PROM_PORTS_ARRAY[@]} ]; then
+ openim::log::error_exit "The length of the two port lists is different!"
+ fi
+
+ for (( i=0; i<${#OPENIM_PUSH_PORTS_ARRAY[@]}; i++ )); do
+ openim::log::info "start push process, port: ${OPENIM_PUSH_PORTS_ARRAY[$i]}, prometheus port: ${PUSH_PROM_PORTS_ARRAY[$i]}"
+ nohup ${OPENIM_PUSH_BINARY} --port ${OPENIM_PUSH_PORTS_ARRAY[$i]} -c ${OPENIM_PUSH_CONFIG} --prometheus_port ${PUSH_PROM_PORTS_ARRAY[$i]} >> ${LOG_FILE} 2> >(tee -a "${STDERR_LOG_FILE}" "$TMP_LOG_FILE") &
+ done
- openim::util::check_process_names ${SERVER_NAME}
+ openim::util::check_process_names ${SERVER_NAME}
}
###################################### Linux Systemd ######################################
@@ -89,27 +92,27 @@ EOF
# install openim-push
function openim::push::install() {
pushd "${OPENIM_ROOT}"
-
+
# 1. Build openim-push
make build BINS=${SERVER_NAME}
openim::common::sudo "cp -r ${OPENIM_OUTPUT_HOSTBIN}/${SERVER_NAME} ${OPENIM_INSTALL_DIR}/${SERVER_NAME}"
openim::log::status "${SERVER_NAME} binary: ${OPENIM_INSTALL_DIR}/${SERVER_NAME}/${SERVER_NAME}"
-
+
# 2. Generate and install the openim-push configuration file (config)
openim::log::status "${SERVER_NAME} config file: ${OPENIM_CONFIG_DIR}/config.yaml"
-
+
# 3. Create and install the ${SERVER_NAME} systemd unit file
echo ${LINUX_PASSWORD} | sudo -S bash -c \
- "SERVER_NAME=${SERVER_NAME} ./scripts/genconfig.sh ${ENV_FILE} deployments/templates/openim.service > ${SYSTEM_FILE_PATH}"
+ "SERVER_NAME=${SERVER_NAME} ./scripts/genconfig.sh ${ENV_FILE} deployments/templates/openim.service > ${SYSTEM_FILE_PATH}"
openim::log::status "${SERVER_NAME} systemd file: ${SYSTEM_FILE_PATH}"
-
+
# 4. Start the openim-push service
openim::common::sudo "systemctl daemon-reload"
openim::common::sudo "systemctl restart ${SERVER_NAME}"
openim::common::sudo "systemctl enable ${SERVER_NAME}"
openim::push::status || return 1
openim::push::info
-
+
openim::log::info "install ${SERVER_NAME} successfully"
popd
}
@@ -133,7 +136,7 @@ function openim::push::status() {
openim::log::error "${SERVER_NAME} failed to start, maybe not installed properly"
return 1
}
-
+
# The listening port is hardcode in the configuration file
if echo | telnet ${OPENIM_MSGGATEWAY_HOST} ${OPENIM_PUSH_PORT} 2>&1|grep refused &>/dev/null;then # Assuming a different port for push
openim::log::error "cannot access health check port, ${SERVER_NAME} maybe not startup"
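
As the header comments note, the push service can also be launched directly with its port lists supplied through environment variables; something along these lines, where the port values are purely illustrative:

```bash
# Example direct start of the push service; port values are illustrative only.
export OPENIM_PUSH_PORT="9090 9091 9092"
export PUSH_PROM_PORT="20170 20171 20172"   # Prometheus ports paired 1:1 with the service ports
./scripts/install/openim-push.sh openim::push::start
```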
diff --git a/scripts/install/openim-rpc.sh b/scripts/install/openim-rpc.sh
index bd00ff9f2..00031f211 100755
--- a/scripts/install/openim-rpc.sh
+++ b/scripts/install/openim-rpc.sh
@@ -15,10 +15,10 @@
# limitations under the License.
#
# OpenIM RPC Service Control Script
-#
+#
# Description:
# This script provides a control interface for the OpenIM RPC service within a Linux environment. It offers functionalities to start multiple RPC services, each denoted by their respective names under openim::rpc::service_name.
-#
+#
# Features:
# 1. Robust error handling using Bash built-ins like 'errexit', 'nounset', and 'pipefail'.
# 2. The capability to source common utility functions and configurations to ensure uniform environmental settings.
@@ -102,6 +102,8 @@ readonly OPENIM_RPC_PROM_PORT_TARGETS
readonly OPENIM_RPC_PROM_PORT_LISTARIES=("${OPENIM_RPC_PROM_PORT_TARGETS[@]##*/}")
function openim::rpc::start() {
+ rm -rf "$TMP_LOG_FILE"
+
echo "OPENIM_RPC_SERVICE_LISTARIES: ${OPENIM_RPC_SERVICE_LISTARIES[@]}"
echo "OPENIM_RPC_PROM_PORT_LISTARIES: ${OPENIM_RPC_PROM_PORT_LISTARIES[@]}"
echo "OPENIM_RPC_PORT_LISTARIES: ${OPENIM_RPC_PORT_LISTARIES[@]}"
@@ -123,12 +125,14 @@ function openim::rpc::start() {
for ((i = 0; i < ${#OPENIM_RPC_SERVICE_LISTARIES[*]}; i++)); do
# openim::util::stop_services_with_name ${OPENIM_RPC_SERVICE_LISTARIES
openim::util::stop_services_on_ports ${OPENIM_RPC_PORT_LISTARIES[$i]}
+ openim::util::stop_services_on_ports ${OPENIM_RPC_PROM_PORT_LISTARIES[$i]}
+
openim::log::info "OpenIM ${OPENIM_RPC_SERVICE_LISTARIES[$i]} config path: ${OPENIM_RPC_CONFIG}"
-
+
# Get the service and Prometheus ports.
OPENIM_RPC_SERVICE_PORTS=( $(openim::util::list-to-string ${OPENIM_RPC_PORT_LISTARIES[$i]}) )
read -a OPENIM_RPC_SERVICE_PORTS_ARRAY <<< ${OPENIM_RPC_SERVICE_PORTS}
-
+
OPENIM_RPC_PROM_PORTS=( $(openim::util::list-to-string ${OPENIM_RPC_PROM_PORT_LISTARIES[$i]}) )
read -a OPENIM_RPC_PROM_PORTS_ARRAY <<< ${OPENIM_RPC_PROM_PORTS}
@@ -138,7 +142,7 @@ function openim::rpc::start() {
done
done
- sleep 0.5
+ sleep 5
openim::util::check_ports ${OPENIM_RPC_PORT_TARGETS[@]}
# openim::util::check_ports ${OPENIM_RPC_PROM_PORT_TARGETS[@]}
@@ -156,7 +160,7 @@ function openim::rpc::start_service() {
printf "Specifying prometheus port: %s\n" "${prometheus_port}"
cmd="${cmd} --prometheus_port ${prometheus_port}"
fi
- nohup ${cmd} >> "${LOG_FILE}" 2>&1 &
+ nohup ${cmd} >> "${LOG_FILE}" 2> >(tee -a "${STDERR_LOG_FILE}" "$TMP_LOG_FILE") &
}
###################################### Linux Systemd ######################################
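
The RPC start helper only appends `--prometheus_port` when a port was actually resolved for that service. A compact sketch of that conditional flag construction, with made-up values:

```bash
# Sketch of conditionally extending a service command line; values are placeholders.
binary_name=openim-rpc-user
service_port=10110
prometheus_port=""            # leave empty to simulate "no prometheus port configured"

cmd="./_output/bin/${binary_name} --port ${service_port} -c ./config"
if [ -n "${prometheus_port}" ]; then
  cmd="${cmd} --prometheus_port ${prometheus_port}"
fi
echo "would run: ${cmd}"
```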
diff --git a/scripts/install/openim-tools.sh b/scripts/install/openim-tools.sh
index fd95dc00d..ac60a5f45 100755
--- a/scripts/install/openim-tools.sh
+++ b/scripts/install/openim-tools.sh
@@ -18,9 +18,9 @@
#
# Description:
# This script is responsible for managing the lifecycle of OpenIM tools, which include starting, stopping,
-# and handling pre and post operations. It's designed to be modular and extensible, ensuring that the
+# and handling pre and post operations. It's designed to be modular and extensible, ensuring that the
# individual operations can be managed separately, and integrated seamlessly with Linux systemd.
-#
+#
# Features:
# 1. Robust error handling using Bash built-ins like 'errexit', 'nounset', and 'pipefail'.
# 2. The capability to source common utility functions and configurations to ensure uniform environmental settings.
@@ -61,6 +61,7 @@ openim::tools::pre_start_name() {
local targets=(
ncpu
component
+ up35
)
echo "${targets[@]}"
}
@@ -102,8 +103,8 @@ function openim::tools::start_service() {
printf "Specifying prometheus port: %s\n" "${prometheus_port}"
cmd="${cmd} --prometheus_port ${prometheus_port}"
fi
- openim::log::info "Starting ${binary_name}..."
- ${cmd}
+ openim::log::status "Starting ${binary_name}..."
+ ${cmd} | tee -a "${LOG_FILE}"
}
function openim::tools::start() {
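
Unlike the long-running services, the tools run in the foreground and their stdout is mirrored into the shared log with `tee -a`; that captures stdout only. A sketch that also keeps stderr, with the tool path assumed from the `_output` layout:

```bash
# Variation on the tools runner: capture both stdout and stderr of a one-shot tool.
# The binary path below is an assumption; build it first with `make build BINS=ncpu`.
LOG_FILE="_output/logs/openim_$(date '+%Y%m%d').log"
./_output/bin/tools/ncpu 2>&1 | tee -a "${LOG_FILE}"
```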
diff --git a/scripts/install/test.sh b/scripts/install/test.sh
index eb3f6a200..4a78e4504 100755
--- a/scripts/install/test.sh
+++ b/scripts/install/test.sh
@@ -15,19 +15,19 @@
# limitations under the License.
#
# OpenIM RPC Service Test Control Script
-#
+#
# This control script is designed to conduct various tests on the OpenIM RPC services.
# It includes functions to perform smoke tests, API tests, and comprehensive service tests.
# The script is intended to be used in a Linux environment with appropriate permissions and
# environmental variables set.
-#
+#
# It provides robust error handling and logging to facilitate debugging and service monitoring.
# Functions within the script can be called directly or passed as arguments to perform
# systematic testing, ensuring the integrity of the RPC services.
-#
+#
# Test Functions:
# - openim::test::smoke: Runs basic tests to ensure the fundamental functionality of the service.
-# - openim::test::api: Executes a series of API tests covering authentication, user, friend,
+# - openim::test::api: Executes a series of API tests covering authentication, user, friend,
# group, and message functionalities.
# - openim::test::test: Performs a complete test suite, invoking utility checks and all defined
# test cases, and reports on their success.
@@ -40,42 +40,45 @@ IAM_ROOT=$(dirname "${BASH_SOURCE[0]}")/../..
# API Server API Address:Port
INSECURE_OPENIMAPI="http://${OPENIM_API_HOST}:${API_OPENIM_PORT}"
INSECURE_OPENIMAUTO=${OPENIM_RPC_AUTH_HOST}:${OPENIM_AUTH_PORT}
-CCURL="curl -f -s -XPOST" # Create
-UCURL="curl -f -s -XPUT" # Update
-RCURL="curl -f -s -XGET" # Retrieve
+CCURL="curl -f -s -XPOST" # Create
+UCURL="curl -f -s -XPUT" # Update
+RCURL="curl -f -s -XGET" # Retrieve
DCURL="curl -f -s -XDELETE" # Delete
openim::test::check_error() {
- local response=$1
- local err_code=$(echo "$response" | jq '.errCode')
- openim::log::status "Response from user registration: $response"
- if [[ "$err_code" != "0" ]]; then
- openim::log::error_exit "Error occurred: $response, You can read the error code in the API documentation https://docs.openim.io/restapi/errcode"
- else
- openim::log::success "Operation was successful."
- fi
+ local response=$1
+ local err_code=$(echo "$response" | jq '.errCode')
+ openim::log::status "Response from user registration: $response"
+ if [[ "$err_code" != "0" ]]; then
+ openim::log::error_exit "Error occurred: $response, You can read the error code in the API documentation https://docs.openim.io/restapi/errcode"
+ else
+ openim::log::success "Operation was successful."
+ fi
}
# The `openim::test::auth` function serves as a test suite for authentication-related operations.
function openim::test::auth() {
- # 1. Retrieve and set the authentication token.
- openim::test::get_token
-
- # 2. Force logout the test user from a specific platform.
- openim::test::force_logout
-
- # Log the completion of the auth test suite.
- openim::log::success "Auth test suite completed successfully."
+ # 1. Retrieve and set the authentication token.
+ openim::test::get_token
+
+ # 2. Force logout the test user from a specific platform.
+ openim::test::force_logout
+
+ # Log the completion of the auth test suite.
+ openim::log::success "Auth test suite completed successfully."
}
#################################### Auth Module ####################################
-# Define a function to get a token (Admin Token)
+# Define a function to get a token for a specific user
openim::test::get_token() {
- token_response=$(${CCURL} "${OperationID}" "${Header}" ${INSECURE_OPENIMAPI}/auth/user_token \
- -d'{"secret": "'"$SECRET"'","platformID": 1,"userID": "openIM123456"}')
- token=$(echo $token_response | grep -Po 'token[" :]+\K[^"]+')
- echo "$token"
+ local user_id="${1:-openIM123456}" # Default user ID if not provided
+ token_response=$(
+ ${CCURL} "${OperationID}" "${Header}" ${INSECURE_OPENIMAPI}/auth/user_token \
+ -d'{"secret": "'"$SECRET"'","platformID": 1,"userID": "'$user_id'"}'
+ )
+ token=$(echo $token_response | grep -Po 'token[" :]+\K[^"]+')
+ echo "$token"
}
Header="-HContent-Type: application/json"
@@ -84,32 +87,33 @@ Token="-Htoken: $(openim::test::get_token)"
# Forces a user to log out from the specified platform by user ID.
openim::test::force_logout() {
- local request_body=$(cat </dev/null || {
openim::log::usage "chat must be in your PATH"
- openim::log::info "You can use 'hack/install-chat.sh' to install a copy in third_party/."
+ openim::log::info "You can use 'scripts/install-chat.sh' to install a copy in third_party/."
exit 1
}
-
+
# validate chat port is free
local port_check_command
if command -v ss &> /dev/null && ss -Version | grep 'iproute2' &> /dev/null; then
port_check_command="ss"
- elif command -v netstat &>/dev/null; then
+ elif command -v netstat &>/dev/null; then
port_check_command="netstat"
else
openim::log::usage "unable to identify if chat is bound to port ${CHAT_PORT}. unable to find ss or netstat utilities."
@@ -46,24 +46,24 @@ openim::chat::validate() {
openim::log::usage "$(${port_check_command} -nat | grep "LISTEN" | grep "[\.:]${CHAT_PORT:?}")"
exit 1
fi
-
+
# need set the env of "CHAT_UNSUPPORTED_ARCH" on unstable arch.
arch=$(uname -m)
if [[ $arch =~ arm* ]]; then
- export CHAT_UNSUPPORTED_ARCH=arm
+ export CHAT_UNSUPPORTED_ARCH=arm
fi
# validate installed version is at least equal to minimum
version=$(chat --version | grep Version | head -n 1 | cut -d " " -f 3)
if [[ $(openim::chat::version "${CHAT_VERSION}") -gt $(openim::chat::version "${version}") ]]; then
- export PATH="${OPENIM_ROOT}"/third_party/chat:${PATH}
- hash chat
- echo "${PATH}"
- version=$(chat --version | grep Version | head -n 1 | cut -d " " -f 3)
- if [[ $(openim::chat::version "${CHAT_VERSION}") -gt $(openim::chat::version "${version}") ]]; then
- openim::log::usage "chat version ${CHAT_VERSION} or greater required."
- openim::log::info "You can use 'hack/install-chat.sh' to install a copy in third_party/."
- exit 1
- fi
+ export PATH="${OPENIM_ROOT}"/third_party/chat:${PATH}
+ hash chat
+ echo "${PATH}"
+ version=$(chat --version | grep Version | head -n 1 | cut -d " " -f 3)
+ if [[ $(openim::chat::version "${CHAT_VERSION}") -gt $(openim::chat::version "${version}") ]]; then
+ openim::log::usage "chat version ${CHAT_VERSION} or greater required."
+ openim::log::info "You can use 'scripts/install-chat.sh' to install a copy in third_party/."
+ exit 1
+ fi
fi
}
@@ -74,7 +74,7 @@ openim::chat::version() {
openim::chat::start() {
# validate before running
openim::chat::validate
-
+
# Start chat
CHAT_DIR=${CHAT_DIR:-$(mktemp -d 2>/dev/null || mktemp -d -t test-chat.XXXXXX)}
if [[ -d "${ARTIFACTS:-}" ]]; then
@@ -85,7 +85,7 @@ openim::chat::start() {
openim::log::info "chat --advertise-client-urls ${OPENIM_INTEGRATION_CHAT_URL} --data-dir ${CHAT_DIR} --listen-client-urls http://${CHAT_HOST}:${CHAT_PORT} --log-level=${CHAT_LOGLEVEL} 2> \"${CHAT_LOGFILE}\" >/dev/null"
chat --advertise-client-urls "${OPENIM_INTEGRATION_CHAT_URL}" --data-dir "${CHAT_DIR}" --listen-client-urls "${OPENIM_INTEGRATION_CHAT_URL}" --log-level="${CHAT_LOGLEVEL}" 2> "${CHAT_LOGFILE}" >/dev/null &
CHAT_PID=$!
-
+
echo "Waiting for chat to come up."
openim::util::wait_for_url "${OPENIM_INTEGRATION_CHAT_URL}/health" "chat: " 0.25 80
curl -fs -X POST "${OPENIM_INTEGRATION_CHAT_URL}/v3/kv/put" -d '{"key": "X3Rlc3Q=", "value": ""}'
@@ -108,7 +108,7 @@ openim::chat::start_scraping() {
}
openim::chat::scrape() {
- curl -s -S "${OPENIM_INTEGRATION_CHAT_URL}/metrics" > "${CHAT_SCRAPE_DIR}/next" && mv "${CHAT_SCRAPE_DIR}/next" "${CHAT_SCRAPE_DIR}/$(date +%s).scrape"
+ curl -s -S "${OPENIM_INTEGRATION_CHAT_URL}/metrics" > "${CHAT_SCRAPE_DIR}/next" && mv "${CHAT_SCRAPE_DIR}/next" "${CHAT_SCRAPE_DIR}/$(date +%s).scrape"
}
openim::chat::stop() {
@@ -144,17 +144,17 @@ openim::chat::install() {
(
local os
local arch
-
+
os=$(openim::util::host_os)
arch=$(openim::util::host_arch)
-
- cd ""${OPENIM_ROOT}"/third_party" || return 1
+
+ cd "${OPENIM_ROOT}/third_party" || return 1
if [[ $(readlink chat) == chat-v${CHAT_VERSION}-${os}-* ]]; then
openim::log::info "chat v${CHAT_VERSION} already installed. To use:"
openim::log::info "export PATH=\"$(pwd)/chat:\${PATH}\""
return #already installed
fi
-
+
if [[ ${os} == "darwin" ]]; then
download_file="chat-v${CHAT_VERSION}-${os}-${arch}.zip"
url="https://github.com/chat-io/chat/releases/download/v${CHAT_VERSION}/${download_file}"
@@ -162,7 +162,7 @@ openim::chat::install() {
unzip -o "${download_file}"
ln -fns "chat-v${CHAT_VERSION}-${os}-${arch}" chat
rm "${download_file}"
- elif [[ ${os} == "linux" ]]; then
+ elif [[ ${os} == "linux" ]]; then
url="https://github.com/coreos/chat/releases/download/v${CHAT_VERSION}/chat-v${CHAT_VERSION}-${os}-${arch}.tar.gz"
download_file="chat-v${CHAT_VERSION}-${os}-${arch}.tar.gz"
openim::util::download_file "${url}" "${download_file}"
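
After `openim::chat::install` unpacks a release into `third_party/`, the validate step expects the `chat` binary to be resolvable from `PATH`; picking it up manually looks roughly like this (the install helper name comes from the log hints above):

```bash
# Put the locally unpacked chat release on PATH and confirm the version;
# scripts/install-chat.sh is the helper referenced by the log messages above.
./scripts/install-chat.sh
export PATH="$(pwd)/third_party/chat:${PATH}"
chat --version | head -n 1
```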
diff --git a/scripts/lib/color.sh b/scripts/lib/color.sh
index 4d69c1771..744fccf5a 100755
--- a/scripts/lib/color.sh
+++ b/scripts/lib/color.sh
@@ -21,24 +21,24 @@
# shellcheck disable=SC2034
if [ -z "${COLOR_OPEN+x}" ]; then
- COLOR_OPEN=1
+ COLOR_OPEN=1
fi
# Function for colored echo
openim::color::echo() {
- COLOR=$1
- [ $COLOR_OPEN -eq 1 ] && echo -e "${COLOR} $(date '+%Y-%m-%d %H:%M:%S') $@ ${COLOR_SUFFIX}"
- shift
+ COLOR=$1
+ [ $COLOR_OPEN -eq 1 ] && echo -e "${COLOR} $(date '+%Y-%m-%d %H:%M:%S') $@ ${COLOR_SUFFIX}"
+ shift
}
# Define color variables
-# --- Feature ---
+# --- Feature ---
COLOR_NORMAL='\033[0m';COLOR_BOLD='\033[1m';COLOR_DIM='\033[2m';COLOR_UNDER='\033[4m';
COLOR_ITALIC='\033[3m';COLOR_NOITALIC='\033[23m';COLOR_BLINK='\033[5m';
COLOR_REVERSE='\033[7m';COLOR_CONCEAL='\033[8m';COLOR_NOBOLD='\033[22m';
COLOR_NOUNDER='\033[24m';COLOR_NOBLINK='\033[25m';
-# --- Front color ---
+# --- Front color ---
COLOR_BLACK='\033[30m';
COLOR_RED='\033[31m';
COLOR_GREEN='\033[32m';
@@ -48,13 +48,13 @@ COLOR_MAGENTA='\033[35m';
COLOR_CYAN='\033[36m';
COLOR_WHITE='\033[37m';
-# --- background color ---
+# --- background color ---
COLOR_BBLACK='\033[40m';COLOR_BRED='\033[41m';
COLOR_BGREEN='\033[42m';COLOR_BYELLOW='\033[43m';
COLOR_BBLUE='\033[44m';COLOR_BMAGENTA='\033[45m';
COLOR_BCYAN='\033[46m';COLOR_BWHITE='\033[47m';
-# --- Color definitions ---
+# --- Color definitions ---
# Color definitions
COLOR_SUFFIX="\033[0m" # End all colors and special effects
BLACK_PREFIX="\033[30m" # Black prefix
@@ -86,54 +86,54 @@ openim::color::print_color() {
# test functions
openim::color::test() {
- echo "Starting the color tests..."
-
- echo "Testing normal echo without color"
- openim::color::echo $COLOR_NORMAL "This is a normal text"
-
- echo "Testing bold echo"
- openim::color::echo $COLOR_BOLD "This is bold text"
-
- echo "Testing dim echo"
- openim::color::echo $COLOR_DIM "This is dim text"
-
- echo "Testing underlined echo"
- openim::color::echo $COLOR_UNDER "This is underlined text"
-
- echo "Testing italic echo"
- openim::color::echo $COLOR_ITALIC "This is italic text"
-
- echo "Testing red color"
- openim::color::echo $COLOR_RED "This is red text"
-
- echo "Testing green color"
- openim::color::echo $COLOR_GREEN "This is green text"
-
- echo "Testing yellow color"
- openim::color::echo $COLOR_YELLOW "This is yellow text"
-
- echo "Testing blue color"
- openim::color::echo $COLOR_BLUE "This is blue text"
-
- echo "Testing magenta color"
- openim::color::echo $COLOR_MAGENTA "This is magenta text"
-
- echo "Testing cyan color"
- openim::color::echo $COLOR_CYAN "This is cyan text"
-
- echo "Testing black background"
- openim::color::echo $COLOR_BBLACK "This is text with black background"
-
- echo "Testing red background"
- openim::color::echo $COLOR_BRED "This is text with red background"
-
- echo "Testing green background"
- openim::color::echo $COLOR_BGREEN "This is text with green background"
-
- echo "Testing blue background"
- openim::color::echo $COLOR_BBLUE "This is text with blue background"
-
- echo "All tests completed!"
+ echo "Starting the color tests..."
+
+ echo "Testing normal echo without color"
+ openim::color::echo $COLOR_NORMAL "This is a normal text"
+
+ echo "Testing bold echo"
+ openim::color::echo $COLOR_BOLD "This is bold text"
+
+ echo "Testing dim echo"
+ openim::color::echo $COLOR_DIM "This is dim text"
+
+ echo "Testing underlined echo"
+ openim::color::echo $COLOR_UNDER "This is underlined text"
+
+ echo "Testing italic echo"
+ openim::color::echo $COLOR_ITALIC "This is italic text"
+
+ echo "Testing red color"
+ openim::color::echo $COLOR_RED "This is red text"
+
+ echo "Testing green color"
+ openim::color::echo $COLOR_GREEN "This is green text"
+
+ echo "Testing yellow color"
+ openim::color::echo $COLOR_YELLOW "This is yellow text"
+
+ echo "Testing blue color"
+ openim::color::echo $COLOR_BLUE "This is blue text"
+
+ echo "Testing magenta color"
+ openim::color::echo $COLOR_MAGENTA "This is magenta text"
+
+ echo "Testing cyan color"
+ openim::color::echo $COLOR_CYAN "This is cyan text"
+
+ echo "Testing black background"
+ openim::color::echo $COLOR_BBLACK "This is text with black background"
+
+ echo "Testing red background"
+ openim::color::echo $COLOR_BRED "This is text with red background"
+
+ echo "Testing green background"
+ openim::color::echo $COLOR_BGREEN "This is text with green background"
+
+ echo "Testing blue background"
+ openim::color::echo $COLOR_BBLUE "This is text with blue background"
+
+ echo "All tests completed!"
}
# openim::color::test
diff --git a/scripts/lib/golang.sh b/scripts/lib/golang.sh
index a65d2c9f5..af04771d5 100755
--- a/scripts/lib/golang.sh
+++ b/scripts/lib/golang.sh
@@ -89,7 +89,7 @@ readonly OPENIM_SERVER_TARGETS
readonly OPENIM_SERVER_BINARIES=("${OPENIM_SERVER_TARGETS[@]##*/}")
# TODO: Label
-START_SCRIPTS_PATH=""${OPENIM_ROOT}"/scripts/install/"
+START_SCRIPTS_PATH="${OPENIM_ROOT}/scripts/install/"
openim::golang::start_script_list() {
local targets=(
openim-api.sh
@@ -261,7 +261,18 @@ openim::golang::setup_platforms
# The set of client targets that we are building for all platforms
# If you update this list, please also update build/BUILD.
readonly OPENIM_CLIENT_TARGETS=(
- imctl
+ changelog
+ component
+ conversion-msg
+ conversion-mysql
+ formitychecker
+ imctl
+ infra
+ ncpu
+ openim-web
+ up35
+ versionchecker
+ yamlfmt
)
readonly OPENIM_CLIENT_BINARIES=("${OPENIM_CLIENT_TARGETS[@]##*/}")
diff --git a/scripts/lib/init.sh b/scripts/lib/init.sh
index 631e751ba..be8e9f8aa 100755
--- a/scripts/lib/init.sh
+++ b/scripts/lib/init.sh
@@ -25,7 +25,7 @@ unset CDPATH
# Until all GOPATH references are removed from all build scripts as well,
# explicitly disable module mode to avoid picking up user-set GO111MODULE preferences.
-# As individual scripts (like hack/update-vendor.sh) make use of go modules,
+# As individual scripts (like scripts/update-vendor.sh) make use of go modules,
# they can explicitly set GO111MODULE=on
export GO111MODULE=on
@@ -33,7 +33,7 @@ export GO111MODULE=on
OPENIM_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd -P)"
OPENIM_OUTPUT_SUBPATH="${OPENIM_OUTPUT_SUBPATH:-_output}"
-OPENIM_OUTPUT=""${OPENIM_ROOT}"/${OPENIM_OUTPUT_SUBPATH}"
+OPENIM_OUTPUT="${OPENIM_ROOT}/${OPENIM_OUTPUT_SUBPATH}"
OPENIM_OUTPUT_BINPATH="${OPENIM_OUTPUT}/bin/platforms"
OPENIM_OUTPUT_BINTOOLPATH="${OPENIM_OUTPUT}/bin/tools"
@@ -50,8 +50,8 @@ OPENIM_RSYNC_COMPRESS="${KUBE_RSYNC_COMPRESS:-0}"
export no_proxy="127.0.0.1,localhost${no_proxy:+,${no_proxy}}"
# This is a symlink to binaries for "this platform", e.g. build tools.
-export THIS_PLATFORM_BIN=""${OPENIM_ROOT}"/_output/bin/platforms"
-export THIS_PLATFORM_BIN_TOOLS=""${OPENIM_ROOT}"/_output/bin/tools"
+export THIS_PLATFORM_BIN="${OPENIM_ROOT}/_output/bin/platforms"
+export THIS_PLATFORM_BIN_TOOLS="${OPENIM_ROOT}/_output/bin/tools"
. $(dirname ${BASH_SOURCE})/color.sh
. $(dirname ${BASH_SOURCE})/util.sh
@@ -62,7 +62,6 @@ openim::util::ensure-bash-version
. $(dirname ${BASH_SOURCE})/version.sh
. $(dirname ${BASH_SOURCE})/golang.sh
-. $(dirname ${BASH_SOURCE})/release.sh
. $(dirname ${BASH_SOURCE})/chat.sh
OPENIM_OUTPUT_HOSTBIN="${OPENIM_OUTPUT_BINPATH}/$(openim::util::host_platform)"
diff --git a/scripts/lib/logging.sh b/scripts/lib/logging.sh
index 90f9d0c7f..8f2bb33cf 100755
--- a/scripts/lib/logging.sh
+++ b/scripts/lib/logging.sh
@@ -17,28 +17,32 @@
OPENIM_VERBOSE="${OPENIM_VERBOSE:-5}"
# Enable logging by default. Set to false to disable.
-ENABLE_LOGGING=true
+ENABLE_LOGGING="${ENABLE_LOGGING:-true}"
# If OPENIM_OUTPUT is not set, set it to the default value
-if [[ ! -v OPENIM_OUTPUT ]]; then
- OPENIM_OUTPUT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../_output" && pwd -P)"
+if [ -z "${OPENIM_OUTPUT+x}" ]; then
+ OPENIM_OUTPUT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../_output" && pwd -P)"
fi
# Set the log file path
LOG_FILE="${OPENIM_OUTPUT}/logs/openim_$(date '+%Y%m%d').log"
+STDERR_LOG_FILE="${OPENIM_OUTPUT}/logs/openim_error_$(date '+%Y%m%d').log"
+TMP_LOG_FILE="${OPENIM_OUTPUT}/logs/openim_tmp_$(date '+%Y%m%d').log"
if [[ ! -d "${OPENIM_OUTPUT}/logs" ]]; then
- mkdir -p "${OPENIM_OUTPUT}/logs"
- touch "$LOG_FILE"
+ mkdir -p "${OPENIM_OUTPUT}/logs"
+ touch "$LOG_FILE"
+ touch "$STDERR_LOG_FILE"
+ touch "$TMP_LOG_FILE"
fi
# Define the logging function
function echo_log() {
- if $ENABLE_LOGGING; then
- echo -e "$@" | tee -a "${LOG_FILE}"
- else
- echo -e "$@"
- fi
+ if $ENABLE_LOGGING; then
+ echo -e "$@" | tee -a "${LOG_FILE}"
+ else
+ echo -e "$@"
+ fi
}
# MAX_LOG_SIZE=10485760 # 10MB
@@ -50,11 +54,11 @@ function echo_log() {
# Borrowed from https://gist.github.com/ahendrix/7030300
openim::log::errexit() {
local err="${PIPESTATUS[*]}"
-
+
# If the shell we are in doesn't have errexit set (common in subshells) then
# don't dump stacks.
set +o | grep -qe "-o errexit" || return
-
+
set +o xtrace
local code="${1:-1}"
# Print out the stack trace described by $function_stack
@@ -73,7 +77,7 @@ openim::log::install_errexit() {
# trap ERR to provide an error handler whenever a command exits nonzero this
# is a more verbose version of set -o errexit
trap 'openim::log::errexit' ERR
-
+
# setting errtrace allows our ERR trap handler to be propagated to functions,
# expansions and subshells
set -o errtrace
@@ -110,7 +114,7 @@ openim::log::error_exit() {
local code="${2:-1}"
local stack_skip="${3:-0}"
stack_skip=$((stack_skip + 1))
-
+
if [[ ${OPENIM_VERBOSE} -ge 4 ]]; then
local source_file=${BASH_SOURCE[${stack_skip}]}
local source_line=${BASH_LINENO[$((stack_skip - 1))]}
@@ -118,25 +122,33 @@ openim::log::error_exit() {
[[ -z ${1-} ]] || {
echo_log " ${1}" >&2
}
-
+
openim::log::stack ${stack_skip}
-
+
echo_log "Exiting with status ${code}" >&2
fi
-
+
exit "${code}"
}
-# Log an error but keep going. Don't dump the stack or exit.
+# Log an error but keep going. Don't dump the stack or exit.
openim::log::error() {
- timestamp=$(date +"[%m%d %H:%M:%S]")
- echo_log "!!! ${timestamp} ${1-}" >&2
+ # Define red color
+ red='\033[0;31m'
+ # No color (reset)
+ nc='\033[0m' # No Color
+
+ timestamp=$(date +"[%Y-%m-%d %H:%M:%S %Z]")
+ # Apply red color for error message
+ echo_log "${red}!!! ${timestamp} ${1-}${nc}" >&2
shift
for message; do
- echo_log " ${message}" >&2
+ # Apply red color for subsequent lines of the error message
+ echo_log "${red} ${message}${nc}" >&2
done
}
+
# Print a usage message to stderr. The arguments are printed directly.
openim::log::usage() {
echo_log >&2
@@ -152,7 +164,7 @@ openim::log::usage_from_stdin() {
while read -r line; do
messages+=("${line}")
done
-
+
openim::log::usage "${messages[@]}"
}
@@ -162,7 +174,7 @@ openim::log::info() {
if [[ ${OPENIM_VERBOSE} < ${V} ]]; then
return
fi
-
+
for message; do
echo_log "${message}"
done
@@ -181,7 +193,7 @@ openim::log::info_from_stdin() {
while read -r line; do
messages+=("${line}")
done
-
+
openim::log::info "${messages[@]}"
}
@@ -191,8 +203,8 @@ openim::log::status() {
if [[ ${OPENIM_VERBOSE} < ${V} ]]; then
return
fi
-
- timestamp=$(date +"[%m%d %H:%M:%S]")
+
+ timestamp=$(date +"[%Y-%m-%d %H:%M:%S %Z]")
echo_log "+++ ${timestamp} ${1}"
shift
for message; do
@@ -203,20 +215,20 @@ openim::log::status() {
openim::log::success() {
local V="${V:-0}"
if [[ ${OPENIM_VERBOSE} < ${V} ]]; then
- return
+ return
fi
timestamp=$(date +"%m%d %H:%M:%S")
echo_log -e "${COLOR_GREEN}[success ${timestamp}] ${COLOR_SUFFIX}==> " "$@"
}
function openim::log::test_log() {
- echo_log "test log"
- openim::log::info "openim::log::info"
- openim::log::progress "openim::log::progress"
- openim::log::status "openim::log::status"
- openim::log::success "openim::log::success"
- openim::log::error "openim::log::error"
- openim::log::error_exit "openim::log::error_exit"
+ echo_log "test log"
+ openim::log::info "openim::log::info"
+ openim::log::progress "openim::log::progress"
+ openim::log::status "openim::log::status"
+ openim::log::success "openim::log::success"
+ openim::log::error "openim::log::error"
+ openim::log::error_exit "openim::log::error_exit"
}
# openim::log::test_log
\ No newline at end of file
diff --git a/scripts/lib/release.sh b/scripts/lib/release.sh
index dba74c768..521e5cedc 100755
--- a/scripts/lib/release.sh
+++ b/scripts/lib/release.sh
@@ -22,10 +22,10 @@
# example: ./coscli cp/sync -r /home/off-line/docker-off-line/ cos://openim-1306374445/openim/image/amd/off-line/off-line/ -e cos.ap-guangzhou.myqcloud.com
# https://cloud.tencent.com/document/product/436/71763
-# Tencent cos configuration
readonly BUCKET="openim-1306374445"
readonly REGION="ap-guangzhou"
readonly COS_RELEASE_DIR="openim-release"
+# readonly COS_RELEASE_DIR="openim-advanced-release" # !pro
# default cos command tool coscli or coscmd
readonly COSTOOL="coscli"
@@ -36,16 +36,26 @@ readonly RELEASE_TARS="${LOCAL_OUTPUT_ROOT}/release-tars"
readonly RELEASE_IMAGES="${LOCAL_OUTPUT_ROOT}/release-images"
# OpenIM github account info
-readonly OPENIM_GITHUB_ORG=OpenIMSDK
-readonly OPENIM_GITHUB_REPO=Open-IM-Server
-readonly CHAT_GITHUB_REPO=chat
+readonly OPENIM_GITHUB_ORG=openimsdk
+readonly OPENIM_GITHUB_REPO=open-im-server
+# readonly OPENIM_GITHUB_REPO=open-im-server-enterprise # !pro
readonly ARTIFACT=openim.tar.gz
+# readonly ARTIFACT=openim-enterprise.tar.gz # !pro
+
readonly CHECKSUM=${ARTIFACT}.sha1sum
OPENIM_BUILD_CONFORMANCE=${OPENIM_BUILD_CONFORMANCE:-y}
OPENIM_BUILD_PULL_LATEST_IMAGES=${OPENIM_BUILD_PULL_LATEST_IMAGES:-y}
+if [ -z "${OPENIM_ROOT}" ]; then
+ OPENIM_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd -P)"
+fi
+
+if [ -z "${TOOLS_DIR}" ]; then
+ TOOLS_DIR="${OPENIM_ROOT}/_output/tools"
+fi
+
# Validate a ci version
#
# Globals:
@@ -70,10 +80,10 @@ function openim::release::parse_and_validate_ci_version() {
openim::log::error "Invalid ci version: '${version}', must match regex ${version_regex}"
return 1
}
-
+
# The VERSION variables are used when this file is sourced, hence
# the shellcheck SC2034 'appears unused' warning is to be ignored.
-
+
# shellcheck disable=SC2034
VERSION_MAJOR="${BASH_REMATCH[1]}"
# shellcheck disable=SC2034
@@ -108,18 +118,19 @@ function openim::release::package_tarballs() {
openim::release::package_openim_manifests_tarball &
openim::release::package_server_tarballs &
openim::util::wait-for-jobs || { openim::log::error "previous tarball phase failed"; return 1; }
-
+
openim::release::package_final_tarball & # _final depends on some of the previous phases
openim::util::wait-for-jobs || { openim::log::error "previous tarball phase failed"; return 1; }
}
-function openim::release::updload_tarballs() {
+function openim::release::upload_tarballs() {
openim::log::info "upload ${RELEASE_TARS}/* to cos bucket ${BUCKET}."
for file in $(ls ${RELEASE_TARS}/*)
do
if [ "${COSTOOL}" == "coscli" ];then
- coscli cp "${file}" "cos://${BUCKET}/${COS_RELEASE_DIR}/${OPENIM_GIT_VERSION}/${file##*/}"
- coscli cp "${file}" "cos://${BUCKET}/${COS_RELEASE_DIR}/latest/${file##*/}"
+ echo "++++ ${TOOLS_DIR}/coscli cp ${file} cos://${BUCKET}/${COS_RELEASE_DIR}/${OPENIM_GIT_VERSION}/${file##*/}"
+ ${TOOLS_DIR}/coscli cp "${file}" "cos://${BUCKET}/${COS_RELEASE_DIR}/${OPENIM_GIT_VERSION}/${file##*/}"
+ ${TOOLS_DIR}/coscli cp "${file}" "cos://${BUCKET}/${COS_RELEASE_DIR}/latest/${file##*/}"
else
coscmd upload "${file}" "${COS_RELEASE_DIR}/${OPENIM_GIT_VERSION}/"
coscmd upload "${file}" "${COS_RELEASE_DIR}/latest/"
@@ -135,22 +146,24 @@ function openim::release::package_src_tarball() {
git archive -o "${src_tarball}" HEAD
else
find "${OPENIM_ROOT}" -mindepth 1 -maxdepth 1 \
- ! \( \
- \( -path "${OPENIM_ROOT}"/_\* -o \
- -path "${OPENIM_ROOT}"/.git\* -o \
- -path "${OPENIM_ROOT}"/.github\* -o \
- -path "${OPENIM_ROOT}"/.gitignore\* -o \
- -path "${OPENIM_ROOT}"/.gsemver.yml\* -o \
- -path "${OPENIM_ROOT}"/.config\* -o \
- -path "${OPENIM_ROOT}"/.chglog\* -o \
- -path "${OPENIM_ROOT}"/.gitlint -o \
- -path "${OPENIM_ROOT}"/.golangci.yml -o \
- -path "${OPENIM_ROOT}"/build/goreleaser.yaml -o \
- -path "${OPENIM_ROOT}"/.note.md -o \
- -path "${OPENIM_ROOT}"/.todo.md \
- \) -prune \
- \) -print0 \
- | "${TAR}" czf "${src_tarball}" --transform "s|${OPENIM_ROOT#/*}|openim|" --null -T -
+ ! \( \
+ \( -path "${OPENIM_ROOT}"/_\* -o \
+ -path "${OPENIM_ROOT}"/.git\* -o \
+ -path "${OPENIM_ROOT}"/.github\* -o \
+ -path "${OPENIM_ROOT}"/components\* -o \
+ -path "${OPENIM_ROOT}"/logs\* -o \
+ -path "${OPENIM_ROOT}"/.gitignore\* -o \
+ -path "${OPENIM_ROOT}"/.gsemver.yml\* -o \
+ -path "${OPENIM_ROOT}"/.config\* -o \
+ -path "${OPENIM_ROOT}"/.chglog\* -o \
+ -path "${OPENIM_ROOT}"/.gitlint -o \
+ -path "${OPENIM_ROOT}"/.golangci.yml -o \
+ -path "${OPENIM_ROOT}"/build/goreleaser.yaml -o \
+ -path "${OPENIM_ROOT}"/.note.md -o \
+ -path "${OPENIM_ROOT}"/.todo.md \
+ \) -prune \
+ \) -print0 \
+ | "${TAR}" czf "${src_tarball}" --transform "s|${OPENIM_ROOT#/*}|openim|" --null -T -
fi
}
@@ -158,6 +171,7 @@ function openim::release::package_src_tarball() {
function openim::release::package_server_tarballs() {
# Find all of the built client binaries
local long_platforms=("${LOCAL_OUTPUT_BINPATH}"/*/*)
+
if [[ -n ${OPENIM_BUILD_PLATFORMS-} ]]; then
read -ra long_platforms <<< "${OPENIM_BUILD_PLATFORMS}"
fi
@@ -167,68 +181,81 @@ function openim::release::package_server_tarballs() {
local platform_tag
platform=${platform_long##${LOCAL_OUTPUT_BINPATH}/} # Strip LOCAL_OUTPUT_BINPATH
platform_tag=${platform/\//-} # Replace a "/" for a "-"
+
openim::log::status "Starting tarball: server $platform_tag"
(
local release_stage="${RELEASE_STAGE}/server/${platform_tag}/openim"
+ openim::log::info "release_stage: ${release_stage}"
+
rm -rf "${release_stage}"
mkdir -p "${release_stage}/server/bin"
local server_bins=("${OPENIM_SERVER_BINARIES[@]}")
- # This fancy expression will expand to prepend a path
- # (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the
- # server_bins array.
- cp "${server_bins[@]/bin/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
- "${release_stage}/server/bin/"
+      openim::log::info "  Copy server binaries: ${server_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}"
+      openim::log::info "  Copy server binaries to: ${release_stage}/server/bin"
- openim::release::clean_cruft
+ # Copy server binaries
+ cp "${server_bins[@]/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
+ "${release_stage}/server/bin/"
- local package_name="${RELEASE_TARS}/openim-server-${platform_tag}.tar.gz"
- openim::release::create_tarball "${package_name}" "${release_stage}/.."
- ) &
- done
+ openim::release::clean_cruft
- openim::log::status "Waiting on tarballs"
- openim::util::wait-for-jobs || { openim::log::error "server tarball creation failed"; exit 1; }
- }
+ local package_name="${RELEASE_TARS}/openim-server-${platform_tag}.tar.gz"
+ openim::release::create_tarball "${package_name}" "${release_stage}/.."
+ ) &
+ done
+ openim::log::status "Waiting on tarballs"
+ openim::util::wait-for-jobs || { openim::log::error "server tarball creation failed"; exit 1; }
+}
+# Package up all of the cross compiled clients. Over time this should grow into
+# a full SDK
# Package up all of the cross compiled clients. Over time this should grow into
# a full SDK
function openim::release::package_client_tarballs() {
# Find all of the built client binaries
- local long_platforms=("${LOCAL_OUTPUT_BINPATH}"/*/*)
+ local long_platforms=("${LOCAL_OUTPUT_BINTOOLSPATH}"/*/*)
if [[ -n ${OPENIM_BUILD_PLATFORMS-} ]]; then
read -ra long_platforms <<< "${OPENIM_BUILD_PLATFORMS}"
fi
+ # echo "++++ LOCAL_OUTPUT_BINTOOLSPATH: ${LOCAL_OUTPUT_BINTOOLSPATH}"
+ # LOCAL_OUTPUT_BINTOOLSPATH: /data/workspaces/open-im-server/_output/bin/tools
+ # echo "++++ long_platforms: ${long_platforms[@]}"
+ # long_platforms: /data/workspaces/open-im-server/_output/bin/tools/darwin/amd64 /data/workspaces/open-im-server/_output/bin/tools/darwin/arm64 /data/workspaces/open-im-server/_output/bin/tools/linux/amd64 /data/workspaces/open-im-server/_output/bin/tools/linux/arm64 /data/workspaces/open-im-server/_output/bin/tools/linux/mips64 /data/workspaces/open-im-server/_output/bin/tools/linux/mips64le /data/workspaces/open-im-server/_output/bin/tools/linux/ppc64le /data/workspaces/open-im-server/_output/bin/tools/linux/s390x /data/workspaces/open-im-server/_output/bin/tools/windows/amd64
for platform_long in "${long_platforms[@]}"; do
local platform
local platform_tag
- platform=${platform_long##${LOCAL_OUTPUT_BINPATH}/} # Strip LOCAL_OUTPUT_BINPATH
+ platform=${platform_long##${LOCAL_OUTPUT_BINTOOLSPATH}/} # Strip LOCAL_OUTPUT_BINTOOLSPATH
platform_tag=${platform/\//-} # Replace a "/" for a "-"
- openim::log::status "Starting tarball: client $platform_tag"
+ openim::log::status "Starting tarball: client $platform_tag" # darwin-amd64
(
local release_stage="${RELEASE_STAGE}/client/${platform_tag}/openim"
+
+ openim::log::info "release_stage: ${release_stage}"
+ # ++++ release_stage: /data/workspaces/open-im-server/_output/release-stage/client/darwin-amd64/openim
rm -rf "${release_stage}"
mkdir -p "${release_stage}/client/bin"
local client_bins=("${OPENIM_CLIENT_BINARIES[@]}")
- # This fancy expression will expand to prepend a path
- # (${LOCAL_OUTPUT_BINPATH}/${platform}/) to every item in the
- # client_bins array.
- cp "${client_bins[@]/bin/#/${LOCAL_OUTPUT_BINPATH}/${platform}/}" \
+ # client_bins: changelog component conversion-msg conversion-mysql formitychecker imctl infra ncpu openim-web up35 versionchecker yamlfmt
+      # Copy client binaries
+ openim::log::info " Copy client binaries: ${client_bins[@]/#/${LOCAL_OUTPUT_BINTOOLSPATH}/${platform}/}"
+ openim::log::info " Copy client binaries to: ${release_stage}/client/bin"
+
+ cp "${client_bins[@]/#/${LOCAL_OUTPUT_BINTOOLSPATH}/${platform}/}" \
"${release_stage}/client/bin/"
- openim::release::clean_cruft
+ openim::release::clean_cruft
- local package_name="${RELEASE_TARS}/openim-client-${platform_tag}.tar.gz"
- openim::release::create_tarball "${package_name}" "${release_stage}/.."
+ local package_name="${RELEASE_TARS}/openim-client-${platform_tag}.tar.gz"
+ openim::release::create_tarball "${package_name}" "${release_stage}/.."
) &
done
-
openim::log::status "Waiting on tarballs"
openim::util::wait-for-jobs || { openim::log::error "client tarball creation failed"; exit 1; }
}
@@ -354,7 +381,7 @@ function openim::release::create_docker_images_for_server() {
rm -rf "${docker_build_path}"
mkdir -p "${docker_build_path}"
ln "${binary_file_path}" "${docker_build_path}/${binary_name}"
- ln ""${OPENIM_ROOT}"/build/nsswitch.conf" "${docker_build_path}/nsswitch.conf"
+ ln "${OPENIM_ROOT}/build/nsswitch.conf" "${docker_build_path}/nsswitch.conf"
chmod 0644 "${docker_build_path}/nsswitch.conf"
cat < "${docker_file_path}"
FROM ${base_image}
@@ -399,7 +426,7 @@ EOF
function openim::release::package_openim_manifests_tarball() {
openim::log::status "Building tarball: manifests"
- local src_dir=""${OPENIM_ROOT}"/deployments"
+ local src_dir="${OPENIM_ROOT}/deployments"
local release_stage="${RELEASE_STAGE}/manifests/openim"
rm -rf "${release_stage}"
@@ -420,7 +447,7 @@ function openim::release::package_openim_manifests_tarball() {
#cp "${src_dir}/openim-rpc-msg.yaml" "${dst_dir}"
#cp "${src_dir}/openim-rpc-third.yaml" "${dst_dir}"
#cp "${src_dir}/openim-rpc-user.yaml" "${dst_dir}"
- #cp ""${OPENIM_ROOT}"/cluster/gce/gci/health-monitor.sh" "${dst_dir}/health-monitor.sh"
+ #cp "${OPENIM_ROOT}/cluster/gce/gci/health-monitor.sh" "${dst_dir}/health-monitor.sh"
openim::release::clean_cruft
@@ -442,6 +469,7 @@ function openim::release::package_final_tarball() {
# This isn't a "full" tarball anymore, but the release lib still expects
# artifacts under "full/openim/"
local release_stage="${RELEASE_STAGE}/full/openim"
+ openim::log::info "release_stage(final): ${release_stage}"
rm -rf "${release_stage}"
mkdir -p "${release_stage}"
@@ -454,7 +482,8 @@ EOF
# We want everything in /scripts.
mkdir -p "${release_stage}/release"
- cp -R ""${OPENIM_ROOT}"/scripts/release" "${release_stage}/"
+ mkdir -p "${OPENIM_ROOT}/scripts/release"
+ cp -R "${OPENIM_ROOT}/scripts/release" "${release_stage}/"
cat < "${release_stage}/release/get-openim-binaries.sh"
#!/usr/bin/env bash
# This file download openim client and server binaries from tencent cos bucket.
@@ -471,11 +500,11 @@ Server binary tarballs are no longer included in the OpenIM final tarball.
Run release/get-openim-binaries.sh to download client and server binaries.
EOF
- # Include hack/lib as a dependency for the cluster/ scripts
+ # Include scripts/lib as a dependency for the cluster/ scripts
#mkdir -p "${release_stage}/hack"
- #cp -R ""${OPENIM_ROOT}"/hack/lib" "${release_stage}/hack/"
+ #cp -R "${OPENIM_ROOT}/scripts/lib" "${release_stage}/scripts/"
- cp -R "${OPENIM_ROOT}"/{docs,configs,scripts,deployments,init,README.md,LICENSE} "${release_stage}/"
+ cp -R "${OPENIM_ROOT}"/{docs,config,scripts,deployments,README.md,LICENSE} "${release_stage}/"
echo "${OPENIM_GIT_VERSION}" > "${release_stage}/version"
@@ -507,7 +536,7 @@ function openim::release::install_github_release(){
# - git-chglog
# - coscmd or coscli
function openim::release::verify_prereqs(){
- if [ -z "$(which github-release 2>/dev/null)" ]; then
+ if [ -z "$(which ${TOOLS_DIR}/github-release 2>/dev/null)" ]; then
openim::log::info "'github-release' tool not installed, try to install it."
if ! openim::release::install_github_release; then
@@ -516,7 +545,7 @@ function openim::release::verify_prereqs(){
fi
fi
- if [ -z "$(which git-chglog 2>/dev/null)" ]; then
+ if [ -z "$(which ${TOOLS_DIR}/git-chglog 2>/dev/null)" ]; then
openim::log::info "'git-chglog' tool not installed, try to install it."
if ! go install github.com/git-chglog/git-chglog/cmd/git-chglog@latest &>/dev/null; then
@@ -525,7 +554,7 @@ function openim::release::verify_prereqs(){
fi
fi
- if [ -z "$(which gsemver 2>/dev/null)" ]; then
+ if [ -z "$(which ${TOOLS_DIR}/gsemver 2>/dev/null)" ]; then
openim::log::info "'gsemver' tool not installed, try to install it."
if ! go install github.com/arnaud-deprez/gsemver@latest &>/dev/null; then
@@ -534,8 +563,7 @@ function openim::release::verify_prereqs(){
fi
fi
-
- if [ -z "$(which ${COSTOOL} 2>/dev/null)" ]; then
+ if [ -z "$(which ${TOOLS_DIR}/${COSTOOL} 2>/dev/null)" ]; then
openim::log::info "${COSTOOL} tool not installed, try to install it."
if ! make -C "${OPENIM_ROOT}" tools.install.${COSTOOL}; then
@@ -545,6 +573,7 @@ function openim::release::verify_prereqs(){
fi
if [ -z "${TENCENT_SECRET_ID}" -o -z "${TENCENT_SECRET_KEY}" ];then
+    openim::log::info "You need to set env: TENCENT_SECRET_ID (cos secretid) and TENCENT_SECRET_KEY (cos secretkey)"
openim::log::error "can not find env: TENCENT_SECRET_ID and TENCENT_SECRET_KEY"
return 1
fi
@@ -584,39 +613,57 @@ EOF
# https://github.com/github-release/github-release
function openim::release::github_release() {
# create a github release
+ if [ -z "${GITHUB_TOKEN}" ];then
+ openim::log::error "can not find env: GITHUB_TOKEN"
+ return 1
+ fi
openim::log::info "create a new github release with tag ${OPENIM_GIT_VERSION}"
- github-release release \
+ ${TOOLS_DIR}/github-release release \
--user ${OPENIM_GITHUB_ORG} \
--repo ${OPENIM_GITHUB_REPO} \
--tag ${OPENIM_GIT_VERSION} \
--description "" \
- --pre-release
+ --pre-release \
+ --draft
# update openim tarballs
openim::log::info "upload ${ARTIFACT} to release ${OPENIM_GIT_VERSION}"
- github-release upload \
+ ${TOOLS_DIR}/github-release upload \
--user ${OPENIM_GITHUB_ORG} \
--repo ${OPENIM_GITHUB_REPO} \
--tag ${OPENIM_GIT_VERSION} \
--name ${ARTIFACT} \
+ --label "openim-${OPENIM_GIT_VERSION}" \
--file ${RELEASE_TARS}/${ARTIFACT}
- openim::log::info "upload openim-src.tar.gz to release ${OPENIM_GIT_VERSION}"
- github-release upload \
- --user ${OPENIM_GITHUB_ORG} \
- --repo ${OPENIM_GITHUB_REPO} \
- --tag ${OPENIM_GIT_VERSION} \
- --name "openim-src.tar.gz" \
- --file ${RELEASE_TARS}/openim-src.tar.gz
+ for file in ${RELEASE_TARS}/*.tar.gz; do
+ if [[ -f "$file" ]]; then
+ filename=$(basename "$file")
+      openim::log::info "Upload file ${filename} to release version ${OPENIM_GIT_VERSION}"
+ ${TOOLS_DIR}/github-release upload \
+ --user ${OPENIM_GITHUB_ORG} \
+ --repo ${OPENIM_GITHUB_REPO} \
+ --tag ${OPENIM_GIT_VERSION} \
+ --name "${filename}" \
+ --file "${file}"
+ fi
+ done
}
function openim::release::generate_changelog() {
openim::log::info "generate CHANGELOG-${OPENIM_GIT_VERSION#v}.md and commit it"
- git-chglog ${OPENIM_GIT_VERSION} > "${OPENIM_ROOT}"/CHANGELOG/CHANGELOG-${OPENIM_GIT_VERSION#v}.md
+ local major_version=$(echo ${OPENIM_GIT_VERSION} | cut -d '+' -f 1)
+
+ ${TOOLS_DIR}/git-chglog --config ${OPENIM_ROOT}/CHANGELOG/.chglog/config.yml ${OPENIM_GIT_VERSION} > ${OPENIM_ROOT}/CHANGELOG/CHANGELOG-${major_version#v}.md
set +o errexit
- git add "${OPENIM_ROOT}"/CHANGELOG/CHANGELOG-${OPENIM_GIT_VERSION#v}.md
- git commit -a -m "docs(changelog): add CHANGELOG-${OPENIM_GIT_VERSION#v}.md"
-  git push -f origin main # finally push the CHANGELOG as well
+ git add "${OPENIM_ROOT}"/CHANGELOG/CHANGELOG-${major_version#v}.md
+ git commit -a -m "docs(changelog): add CHANGELOG-${major_version#v}.md"
+ echo ""
+ echo "##########################################################################"
+ echo "git commit -a -m \"docs(changelog): add CHANGELOG-${major_version#v}.md\""
+  openim::log::info "You need to push CHANGELOG-${major_version#v}.md to the remote manually"
+ echo "##########################################################################"
+ echo ""
}
diff --git a/scripts/lib/util.sh b/scripts/lib/util.sh
index ad3baa6bf..cace53645 100755
--- a/scripts/lib/util.sh
+++ b/scripts/lib/util.sh
@@ -22,6 +22,1241 @@
# OPENIM_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd -P)
# source "${OPENIM_ROOT}/scripts/lib/logging.sh"
+#1. Write the IPs into a file (for example hosts_file), one IP address per line.
+#2. Edit the username and password in ssh-mutual-trust.sh; the defaults are user root and password 123.
+# hosts_file_path="path/to/your/hosts/file"
+# openim:util::setup_ssh_key_copy "$hosts_file_path" "root" "123"
+function openim:util::setup_ssh_key_copy() {
+ local hosts_file="$1"
+ local username="${2:-root}"
+ local password="${3:-123}"
+
+ local sshkey_file=~/.ssh/id_rsa.pub
+
+ # check sshkey file
+ if [[ ! -e $sshkey_file ]]; then
+ expect -c "
+ spawn ssh-keygen -t rsa
+ expect \"Enter*\" { send \"\n\"; exp_continue; }
+ "
+ fi
+
+ # get hosts list
+ local hosts=$(awk '/^[^#]/ {print $1}' "${hosts_file}")
+
+ ssh_key_copy() {
+ local target=$1
+
+ # delete history
+ sed -i "/$target/d" ~/.ssh/known_hosts
+
+ # copy key
+ expect -c "
+ set timeout 100
+ spawn ssh-copy-id $username@$target
+ expect {
+ \"yes/no\" { send \"yes\n\"; exp_continue; }
+ \"*assword\" { send \"$password\n\"; }
+ \"already exist on the remote system\" { exit 1; }
+ }
+ expect eof
+ "
+ }
+
+ # auto sshkey pair
+ for host in $hosts; do
+ if ! ping -i 0.2 -c 3 -W 1 "$host" > /dev/null 2>&1; then
+ echo "[ERROR]: Can't connect $host"
+ continue
+ fi
+
+ local host_entry=$(awk "/$host/"'{print $1, $2}' /etc/hosts)
+ if [[ $host_entry ]]; then
+ local hostaddr=$(echo "$host_entry" | awk '{print $1}')
+ local hostname=$(echo "$host_entry" | awk '{print $2}')
+ ssh_key_copy "$hostaddr"
+ ssh_key_copy "$hostname"
+ else
+ ssh_key_copy "$host"
+ fi
+ done
+}
+
+function openim::util::sourced_variable {
+ # Call this function to tell shellcheck that a variable is supposed to
+ # be used from other calling context. This helps quiet an "unused
+ # variable" warning from shellcheck and also document your code.
+ true
+}
+
+openim::util::sortable_date() {
+ date "+%Y%m%d-%H%M%S"
+}
+
+# arguments: target, item1, item2, item3, ...
+# returns 0 if target is in the given items, 1 otherwise.
+openim::util::array_contains() {
+ local search="$1"
+ local element
+ shift
+ for element; do
+ if [[ "${element}" == "${search}" ]]; then
+ return 0
+ fi
+ done
+ return 1
+}
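+# Illustrative usage sketch (not in the original script; the list values are made up):
+#   supported=("linux/amd64" "linux/arm64" "darwin/arm64")
+#   openim::util::array_contains "$(openim::util::host_platform)" "${supported[@]}" \
+#     && echo "host platform is supported"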
+
+openim::util::wait_for_url() {
+ local url=$1
+ local prefix=${2:-}
+ local wait=${3:-1}
+ local times=${4:-30}
+ local maxtime=${5:-1}
+
+ command -v curl >/dev/null || {
+ openim::log::usage "curl must be installed"
+ exit 1
+ }
+
+ local i
+ for i in $(seq 1 "${times}"); do
+ local out
+ if out=$(curl --max-time "${maxtime}" -gkfs "${url}" 2>/dev/null); then
+ openim::log::status "On try ${i}, ${prefix}: ${out}"
+ return 0
+ fi
+ sleep "${wait}"
+ done
+ openim::log::error "Timed out waiting for ${prefix} to answer at ${url}; tried ${times} waiting ${wait} between each"
+ return 1
+}
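+# Illustrative usage sketch (the endpoint and prefix below are hypothetical, not from the original script):
+#   openim::util::wait_for_url "http://127.0.0.1:10002/healthz" "openim-api" 2 30 \
+#     || openim::log::error_exit "openim-api did not become ready in time"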
+
+# Example: openim::util::wait_for_success 120 5 "openimctl get nodes|grep localhost"
+# arguments: wait time, sleep time, shell command
+# returns 0 if the shell command get output, 1 otherwise.
+openim::util::wait_for_success(){
+ local wait_time="$1"
+ local sleep_time="$2"
+ local cmd="$3"
+ while [ "$wait_time" -gt 0 ]; do
+ if eval "$cmd"; then
+ return 0
+ else
+ sleep "$sleep_time"
+ wait_time=$((wait_time-sleep_time))
+ fi
+ done
+ return 1
+}
+
+# Example: openim::util::trap_add 'echo "in trap DEBUG"' DEBUG
+# See: http://stackoverflow.com/questions/3338030/multiple-bash-traps-for-the-same-signal
+openim::util::trap_add() {
+ local trap_add_cmd
+ trap_add_cmd=$1
+ shift
+
+ for trap_add_name in "$@"; do
+ local existing_cmd
+ local new_cmd
+
+ # Grab the currently defined trap commands for this trap
+ existing_cmd=$(trap -p "${trap_add_name}" | awk -F"'" '{print $2}')
+
+ if [[ -z "${existing_cmd}" ]]; then
+ new_cmd="${trap_add_cmd}"
+ else
+ new_cmd="${trap_add_cmd};${existing_cmd}"
+ fi
+
+ # Assign the test. Disable the shellcheck warning telling that trap
+ # commands should be single quoted to avoid evaluating them at this
+ # point instead evaluating them at run time. The logic of adding new
+ # commands to a single trap requires them to be evaluated right away.
+ # shellcheck disable=SC2064
+ trap "${new_cmd}" "${trap_add_name}"
+ done
+}
+
+# Opposite of openim::util::ensure-temp-dir()
+openim::util::cleanup-temp-dir() {
+ rm -rf "${OPENIM_TEMP}"
+}
+
+# Create a temp dir that'll be deleted at the end of this bash session.
+#
+# Vars set:
+# OPENIM_TEMP
+openim::util::ensure-temp-dir() {
+ if [[ -z ${OPENIM_TEMP-} ]]; then
+ OPENIM_TEMP=$(mktemp -d 2>/dev/null || mktemp -d -t openimrnetes.XXXXXX)
+ openim::util::trap_add openim::util::cleanup-temp-dir EXIT
+ fi
+}
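+# Illustrative usage sketch (not in the original script): create the scratch dir once,
+# then rely on the EXIT trap registered above for cleanup.
+#   openim::util::ensure-temp-dir
+#   echo "working in ${OPENIM_TEMP}"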
+
+openim::util::host_os() {
+ local host_os
+ case "$(uname -s)" in
+ Darwin)
+ host_os=darwin
+ ;;
+ Linux)
+ host_os=linux
+ ;;
+ *)
+ openim::log::error "Unsupported host OS. Must be Linux or Mac OS X."
+ exit 1
+ ;;
+ esac
+ echo "${host_os}"
+}
+
+openim::util::host_arch() {
+ local host_arch
+ case "$(uname -m)" in
+ x86_64*)
+ host_arch=amd64
+ ;;
+ i?86_64*)
+ host_arch=amd64
+ ;;
+ amd64*)
+ host_arch=amd64
+ ;;
+ aarch64*)
+ host_arch=arm64
+ ;;
+ arm64*)
+ host_arch=arm64
+ ;;
+ arm*)
+ host_arch=arm
+ ;;
+ i?86*)
+ host_arch=x86
+ ;;
+ s390x*)
+ host_arch=s390x
+ ;;
+ ppc64le*)
+ host_arch=ppc64le
+ ;;
+ *)
+ openim::log::error "Unsupported host arch. Must be x86_64, 386, arm, arm64, s390x or ppc64le."
+ exit 1
+ ;;
+ esac
+ echo "${host_arch}"
+}
+
+# Define a bash function to check the versions of Docker and Docker Compose
+openim::util::check_docker_and_compose_versions() {
+ # Define the required versions of Docker and Docker Compose
+ required_docker_version="20.10.0"
+ required_compose_version="2.0"
+
+ # Get the currently installed Docker version
+ installed_docker_version=$(docker --version | awk '{print $3}' | sed 's/,//')
+
+ # Check if the installed Docker version matches the required version
+ if [[ "$installed_docker_version" < "$required_docker_version" ]]; then
+ echo "Docker version mismatch. Installed: $installed_docker_version, Required: $required_docker_version"
+ return 1
+ fi
+
+ # Check if the docker compose sub-command is available
+ if ! docker compose version &> /dev/null; then
+ echo "Docker does not support the docker compose sub-command"
+ echo "You need to upgrade Docker to the right version"
+ return 1
+ fi
+
+ # Get the currently installed Docker Compose version
+ installed_compose_version=$(docker compose version --short)
+
+ # Check if the installed Docker Compose version matches the required version
+ if [[ "$installed_compose_version" < "$required_compose_version" ]]; then
+ echo "Docker Compose version mismatch. Installed: $installed_compose_version, Required: $required_compose_version"
+ return 1
+ fi
+
+}
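+# Illustrative usage sketch (error text is made up): guard a compose-based deployment.
+#   openim::util::check_docker_and_compose_versions \
+#     || openim::log::error_exit "Docker >= 20.10.0 with the 'docker compose' plugin is required"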
+
+
+# The `openim::util::check_ports` function analyzes the state of processes based on given ports.
+# It accepts multiple ports as arguments and prints:
+# 1. The state of the process (whether it's running or not).
+# 2. The start time of the process if it's running.
+# User:
+# openim::util::check_ports 8080 8081 8082
+# The function returns a status of 1 if any of the processes is not running.
+openim::util::check_ports() {
+ # An array to collect ports of processes that are not running.
+ local not_started=()
+
+ # An array to collect information about processes that are running.
+ local started=()
+
+ openim::log::info "Checking ports: $*"
+ # Iterate over each given port.
+ for port in "$@"; do
+ # Initialize variables
+ # Check the OS and use the appropriate command
+ if [[ "$OSTYPE" == "linux-gnu"* ]]; then
+ if command -v ss > /dev/null 2>&1; then
+ info=$(ss -ltnp | grep ":$port" || true)
+ else
+ info=$(netstat -ltnp | grep ":$port" || true)
+ fi
+ elif [[ "$OSTYPE" == "darwin"* ]]; then
+ # For macOS, use lsof
+ info=$(lsof -P -i:"$port" | grep "LISTEN" || true)
+ fi
+
+ # Check if any process is using the port
+ if [[ -z $info ]]; then
+ not_started+=($port)
+ else
+ if [[ "$OSTYPE" == "linux-gnu"* ]]; then
+ # Extract relevant details for Linux: Process Name, PID, and FD.
+ details=$(echo $info | sed -n 's/.*users:(("\([^"]*\)",pid=\([^,]*\),fd=\([^)]*\))).*/\1 \2 \3/p')
+ command=$(echo $details | awk '{print $1}')
+ pid=$(echo $details | awk '{print $2}')
+ fd=$(echo $details | awk '{print $3}')
+ elif [[ "$OSTYPE" == "darwin"* ]]; then
+ # Handle extraction for macOS
+ pid=$(echo $info | awk '{print $2}' | cut -d'/' -f1)
+ command=$(ps -p $pid -o comm= | xargs basename)
+ fd=$(echo $info | awk '{print $4}' | cut -d'/' -f1)
+ fi
+
+ # Get the start time of the process using the PID
+ if [[ -z $pid ]]; then
+ start_time="N/A"
+ else
+ start_time=$(ps -p $pid -o lstart=)
+ fi
+
+ started+=("Port $port - Command: $command, PID: $pid, FD: $fd, Started: $start_time")
+ fi
+ done
+
+ # Print information about ports whose processes are not running.
+ if [[ ${#not_started[@]} -ne 0 ]]; then
+ openim::log::info "\n### Not started ports:"
+ for port in "${not_started[@]}"; do
+ openim::log::error "Port $port is not started."
+ done
+ fi
+
+ # Print information about ports whose processes are running.
+ if [[ ${#started[@]} -ne 0 ]]; then
+ openim::log::info "\n### Started ports:"
+ for info in "${started[@]}"; do
+ openim::log::info "$info"
+ done
+ fi
+
+ # If any of the processes is not running, return a status of 1.
+ if [[ ${#not_started[@]} -ne 0 ]]; then
+ openim::color::echo $COLOR_RED " OpenIM Stdout Log >> cat ${LOG_FILE}"
+ openim::color::echo $COLOR_RED " OpenIM Stderr Log >> cat ${STDERR_LOG_FILE}"
+ cat "$TMP_LOG_FILE" | awk '{print "\033[31m" $0 "\033[0m"}'
+ return 1
+ else
+ openim::log::success "All specified processes are running."
+ return 0
+ fi
+}
+
+# set +o errexit
+# Sample call for testing:
+# openim::util::check_ports 10002 1004 12345 13306
+# set -o errexit
+
+# The `openim::util::check_process_names` function analyzes the state of processes based on given names.
+# It accepts multiple process names as arguments and prints:
+# 1. The state of the process (whether it's running or not).
+# 2. The start time of the process if it's running.
+# User:
+# openim::util::check_process_names nginx mysql redis
+# The function returns a status of 1 if any of the processes is not running.
+openim::util::check_process_names() {
+ # Function to get the port of a process
+ get_port() {
+ local pid=$1
+ if [[ "$OSTYPE" == "linux-gnu"* ]]; then
+ # Linux
+ ss -ltnp 2>/dev/null | grep $pid | awk '{print $4}' | cut -d ':' -f2
+ elif [[ "$OSTYPE" == "darwin"* ]]; then
+ # macOS
+ lsof -nP -iTCP -sTCP:LISTEN -a -p $pid | awk 'NR>1 {print $9}' | sed 's/.*://'
+ else
+ echo "Unsupported OS"
+ return 1
+ fi
+ }
+
+ # Arrays to collect details of processes
+ local not_started=()
+ local started=()
+
+ openim::log::info "Checking processes: $*"
+ # Iterate over each given process name
+ for process_name in "$@"; do
+ # Use `pgrep` to find process IDs related to the given process name
+ local pids=($(pgrep -f $process_name))
+
+ # Check if any process IDs were found
+ if [[ ${#pids[@]} -eq 0 ]]; then
+ not_started+=($process_name)
+ else
+ # If there are PIDs, loop through each one
+ for pid in "${pids[@]}"; do
+ local command=$(ps -p $pid -o cmd=)
+ local start_time=$(ps -p $pid -o lstart=)
+ local port=$(get_port $pid)
+
+ # Check if port information was found for the PID
+ if [[ -z $port ]]; then
+ port="N/A"
+ fi
+
+ started+=("Process $process_name - Command: $command, PID: $pid, Port: $port, Start time: $start_time")
+ done
+ fi
+ done
+
+ # Print information
+ if [[ ${#not_started[@]} -ne 0 ]]; then
+ openim::log::info "Not started processes:"
+ for process_name in "${not_started[@]}"; do
+ openim::log::error "Process $process_name is not started."
+ done
+ fi
+
+ if [[ ${#started[@]} -ne 0 ]]; then
+ echo
+ openim::log::info "Started processes:"
+ for info in "${started[@]}"; do
+ openim::log::info "$info"
+ done
+ fi
+
+ # Return status
+ if [[ ${#not_started[@]} -ne 0 ]]; then
+ openim::color::echo $COLOR_RED " OpenIM Stdout Log >> cat ${LOG_FILE}"
+ openim::color::echo $COLOR_RED " OpenIM Stderr Log >> cat ${STDERR_LOG_FILE}"
+ cat "$TMP_LOG_FILE" | awk '{print "\033[31m" $0 "\033[0m"}'
+ return 1
+ else
+ echo ""
+ openim::log::success "All processes are running."
+ return 0
+ fi
+}
+
+# openim::util::check_process_names docker-pr
+
+# The `openim::util::stop_services_on_ports` function stops services running on specified ports.
+# It accepts multiple ports as arguments and performs the following:
+# 1. Attempts to stop any services running on the specified ports.
+# 2. Prints details of services successfully stopped and those that failed to stop.
+# Usage:
+# openim::util::stop_services_on_ports 8080 8081 8082
+# The function returns a status of 1 if any service couldn't be stopped.
+openim::util::stop_services_on_ports() {
+ # An array to collect ports of processes that couldn't be stopped.
+ local not_stopped=()
+
+ # An array to collect information about processes that were stopped.
+ local stopped=()
+
+ openim::log::info "Stopping services on ports: $*"
+ # Iterate over each given port.
+ for port in "$@"; do
+ # Use the `lsof` command to find process information related to the given port.
+ info=$(lsof -i :$port -n -P | grep LISTEN || true)
+
+ # If there's process information, it means the process associated with the port is running.
+ if [[ -n $info ]]; then
+ # Extract the Process ID.
+ while read -r line; do
+ local pid=$(echo $line | awk '{print $2}')
+
+ # Try to stop the service by killing its process.
+ if kill -10 $pid; then
+ stopped+=($port)
+ else
+ not_stopped+=($port)
+ fi
+ done <<< "$info"
+ fi
+ done
+
+ # Print information about ports whose processes couldn't be stopped.
+ if [[ ${#not_stopped[@]} -ne 0 ]]; then
+ openim::log::info "Ports that couldn't be stopped:"
+ for port in "${not_stopped[@]}"; do
+ openim::log::status "Failed to stop service on port $port."
+ done
+ fi
+
+ # Print information about ports whose processes were successfully stopped.
+ if [[ ${#stopped[@]} -ne 0 ]]; then
+ echo
+ openim::log::info "Stopped services on ports:"
+ for port in "${stopped[@]}"; do
+ openim::log::info "Successfully stopped service on port $port."
+ done
+ fi
+
+ # If any of the processes couldn't be stopped, return a status of 1.
+ if [[ ${#not_stopped[@]} -ne 0 ]]; then
+ return 1
+ else
+ openim::log::success "All specified services were stopped."
+ echo ""
+ return 0
+ fi
+}
+# nc -l -p 12345
+# nc -l -p 123456
+# ps -ef | grep "nc -l"
+# openim::util::stop_services_on_ports 1234 12345
+
+
+# The `openim::util::stop_services_with_name` function stops services with specified names.
+# It accepts multiple service names as arguments and performs the following:
+# 1. Attempts to stop any services with the specified names.
+# 2. Prints details of services successfully stopped and those that failed to stop.
+# Usage:
+# openim::util::stop_services_with_name nginx apache
+# The function returns a status of 1 if any service couldn't be stopped.
+openim::util::stop_services_with_name() {
+ # An array to collect names of processes that couldn't be stopped.
+ local not_stopped=()
+
+ # An array to collect information about processes that were stopped.
+ local stopped=()
+
+ openim::log::info "Stopping services with names: $*"
+ # Iterate over each given service name.
+ for server_name in "$@"; do
+ # Use the `pgrep` command to find process IDs related to the given service name.
+ local pids=$(pgrep -f "$server_name")
+
+ # If no process was found with the name, add it to the not_stopped list
+ if [[ -z $pids ]]; then
+ not_stopped+=("$server_name")
+ continue
+ fi
+ local stopped_this_time=false
+ for pid in $pids; do
+
+ # Exclude the PID of the current script
+ if [[ "$pid" == "$$" ]]; then
+ continue
+ fi
+
+ # If there's a Process ID, it means the service with the name is running.
+ if [[ -n $pid ]]; then
+ # Try to stop the service by killing its process.
+ if kill -10 $pid 2>/dev/null; then
+ stopped_this_time=true
+ fi
+ fi
+ done
+
+ if $stopped_this_time; then
+ stopped+=("$server_name")
+ else
+ not_stopped+=("$server_name")
+ fi
+ done
+
+ # Print information about services whose processes couldn't be stopped.
+ if [[ ${#not_stopped[@]} -ne 0 ]]; then
+ openim::log::info "Services that couldn't be stopped:"
+ for name in "${not_stopped[@]}"; do
+ openim::log::status "Failed to stop the $name service."
+ done
+ fi
+
+ # Print information about services whose processes were successfully stopped.
+ if [[ ${#stopped[@]} -ne 0 ]]; then
+ echo
+ openim::log::info "Stopped services:"
+ for name in "${stopped[@]}"; do
+ openim::log::info "Successfully stopped the $name service."
+ done
+ fi
+
+ openim::log::success "All specified services were stopped."
+ echo ""
+}
+# sleep 333333&
+# sleep 444444&
+# ps -ef | grep "sleep"
+# openim::util::stop_services_with_name "sleep 333333" "sleep 444444"
+
+# This figures out the host platform without relying on golang. We need this as
+# we don't want a golang install to be a prerequisite to building yet we need
+# this info to figure out where the final binaries are placed.
+openim::util::host_platform() {
+ echo "$(openim::util::host_os)/$(openim::util::host_arch)"
+}
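+# Illustrative usage sketch, mirroring how the build scripts resolve per-platform output paths
+# (OPENIM_OUTPUT_BINPATH is assumed to be set by the caller):
+#   OPENIM_OUTPUT_HOSTBIN="${OPENIM_OUTPUT_BINPATH}/$(openim::util::host_platform)"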
+
+# looks for $1 in well-known output locations for the platform ($2)
+# $OPENIM_ROOT must be set
+openim::util::find-binary-for-platform() {
+ local -r lookfor="$1"
+ local -r platform="$2"
+ local locations=(
+ "${OPENIM_ROOT}/_output/bin/${lookfor}"
+ "${OPENIM_ROOT}/_output/${platform}/${lookfor}"
+ "${OPENIM_ROOT}/_output/local/bin/${platform}/${lookfor}"
+ "${OPENIM_ROOT}/_output/platforms/${platform}/${lookfor}"
+ "${OPENIM_ROOT}/_output/platforms/bin/${platform}/${lookfor}"
+ )
+
+ # List most recently-updated location.
+ local -r bin=$( (ls -t "${locations[@]}" 2>/dev/null || true) | head -1 )
+ echo -n "${bin}"
+}
+
+# looks for $1 in well-known output locations for the host platform
+# $OPENIM_ROOT must be set
+openim::util::find-binary() {
+ openim::util::find-binary-for-platform "$1" "$(openim::util::host_platform)"
+}
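+# Illustrative usage sketch ("imctl" is one of the tool binaries listed elsewhere in this file):
+#   imctl_bin="$(openim::util::find-binary "imctl")"
+#   [[ -n "${imctl_bin}" ]] || openim::log::error_exit "imctl not found under _output"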
+
+# Run all known doc generators (today gendocs and genman for openimctl)
+# $1 is the directory to put those generated documents
+openim::util::gen-docs() {
+ local dest="$1"
+
+ # Find binary
+ gendocs=$(openim::util::find-binary "gendocs")
+ genopenimdocs=$(openim::util::find-binary "genopenimdocs")
+ genman=$(openim::util::find-binary "genman")
+ genyaml=$(openim::util::find-binary "genyaml")
+ genfeddocs=$(openim::util::find-binary "genfeddocs")
+
+ # TODO: If ${genfeddocs} is not used from anywhere (it isn't used at
+ # least from k/k tree), remove it completely.
+ openim::util::sourced_variable "${genfeddocs}"
+
+ mkdir -p "${dest}/docs/guide/en-US/cmd/openimctl/"
+ "${gendocs}" "${dest}/docs/guide/en-US/cmd/openimctl/"
+
+ mkdir -p "${dest}/docs/guide/en-US/cmd/"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-api"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-cmdutils"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-crontask"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-msggateway"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-msgtransfer"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-push"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-rpc-auth"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-rpc-conversation"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-rpc-friend"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-rpc-group"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-rpc-msg"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-rpc-third"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/" "openim-rpc-user"
+ "${genopenimdocs}" "${dest}/docs/guide/en-US/cmd/openimctl" "openimctl"
+
+ mkdir -p "${dest}/docs/man/man1/"
+"${genman}" "${dest}/docs/man/man1/" "openim-api"
+"${genman}" "${dest}/docs/man/man1/" "openim-cmdutils"
+"${genman}" "${dest}/docs/man/man1/" "openim-crontask"
+"${genman}" "${dest}/docs/man/man1/" "openim-msggateway"
+"${genman}" "${dest}/docs/man/man1/" "openim-msgtransfer"
+"${genman}" "${dest}/docs/man/man1/" "openim-push"
+"${genman}" "${dest}/docs/man/man1/" "openim-rpc-auth"
+"${genman}" "${dest}/docs/man/man1/" "openim-rpc-conversation"
+"${genman}" "${dest}/docs/man/man1/" "openim-rpc-friend"
+"${genman}" "${dest}/docs/man/man1/" "openim-rpc-group"
+"${genman}" "${dest}/docs/man/man1/" "openim-rpc-msg"
+"${genman}" "${dest}/docs/man/man1/" "openim-rpc-third"
+"${genman}" "${dest}/docs/man/man1/" "openim-rpc-user"
+
+ mkdir -p "${dest}/docs/guide/en-US/yaml/openimctl/"
+ "${genyaml}" "${dest}/docs/guide/en-US/yaml/openimctl/"
+
+ # create the list of generated files
+ pushd "${dest}" > /dev/null || return 1
+ touch docs/.generated_docs
+ find . -type f | cut -sd / -f 2- | LC_ALL=C sort > docs/.generated_docs
+ popd > /dev/null || return 1
+}
+
+# Removes previously generated docs-- we don't want to check them in. $OPENIM_ROOT
+# must be set.
+openim::util::remove-gen-docs() {
+ if [ -e "${OPENIM_ROOT}/docs/.generated_docs" ]; then
+ # remove all of the old docs; we don't want to check them in.
+ while read -r file; do
+ rm "${OPENIM_ROOT}/${file}" 2>/dev/null || true
+ done <"${OPENIM_ROOT}/docs/.generated_docs"
+ # The docs/.generated_docs file lists itself, so we don't need to explicitly
+ # delete it.
+ fi
+}
+
+# Returns the name of the upstream remote repository name for the local git
+# repo, e.g. "upstream" or "origin".
+openim::util::git_upstream_remote_name() {
+ git remote -v | grep fetch |\
+ grep -E 'github.com[/:]openimsdk/open-im-server|openim.cc/server' |\
+ head -n 1 | awk '{print $1}'
+}
+
+# Exits script if working directory is dirty. If it's run interactively in the terminal
+# the user can commit changes in a second terminal. This script will wait.
+openim::util::ensure_clean_working_dir() {
+ while ! git diff HEAD --exit-code &>/dev/null; do
+ echo -e "\nUnexpected dirty working directory:\n"
+ if tty -s; then
+ git status -s
+ else
+ git diff -a # be more verbose in log files without tty
+ exit 1
+ fi | sed 's/^/ /'
+ echo -e "\nCommit your changes in another terminal and then continue here by pressing enter."
+ read -r
+ done 1>&2
+}
+
+# Find the base commit using:
+# $PULL_BASE_SHA if set (from Prow)
+# current ref from the remote upstream branch
+openim::util::base_ref() {
+ local -r git_branch=$1
+
+ if [[ -n ${PULL_BASE_SHA:-} ]]; then
+ echo "${PULL_BASE_SHA}"
+ return
+ fi
+
+ full_branch="$(openim::util::git_upstream_remote_name)/${git_branch}"
+
+ # make sure the branch is valid, otherwise the check will pass erroneously.
+ if ! git describe "${full_branch}" >/dev/null; then
+ # abort!
+ exit 1
+ fi
+
+ echo "${full_branch}"
+}
+
+# Checks whether there are any files matching pattern $2 changed between the
+# current branch and upstream branch named by $1.
+# Returns 1 (false) if there are no changes
+# 0 (true) if there are changes detected.
+openim::util::has_changes() {
+ local -r git_branch=$1
+ local -r pattern=$2
+ local -r not_pattern=${3:-totallyimpossiblepattern}
+
+ local base_ref
+ base_ref=$(openim::util::base_ref "${git_branch}")
+ echo "Checking for '${pattern}' changes against '${base_ref}'"
+
+ # notice this uses ... to find the first shared ancestor
+ if git diff --name-only "${base_ref}...HEAD" | grep -v -E "${not_pattern}" | grep "${pattern}" > /dev/null; then
+ return 0
+ fi
+ # also check for pending changes
+ if git status --porcelain | grep -v -E "${not_pattern}" | grep "${pattern}" > /dev/null; then
+ echo "Detected '${pattern}' uncommitted changes."
+ return 0
+ fi
+ echo "No '${pattern}' changes detected."
+ return 1
+}
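+# Illustrative usage sketch (branch name and pattern are hypothetical):
+#   if openim::util::has_changes "main" "\.go$"; then
+#     echo "Go sources changed; re-run the Go checks"
+#   fi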
+
+openim::util::download_file() {
+ local -r url=$1
+ local -r destination_file=$2
+
+  rm "${destination_file}" &> /dev/null || true
+
+ for i in $(seq 5)
+ do
+ if ! curl -fsSL --retry 3 --keepalive-time 2 "${url}" -o "${destination_file}"; then
+ echo "Downloading ${url} failed. $((5-i)) retries left."
+ sleep 1
+ else
+ echo "Downloading ${url} succeed"
+ return 0
+ fi
+ done
+ return 1
+}
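+# Illustrative usage sketch (the URL and destination are hypothetical):
+#   openim::util::download_file "https://example.com/tools/coscli-linux-amd64" "/tmp/coscli" \
+#     || openim::log::error_exit "failed to download coscli"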
+
+# Test whether openssl is installed.
+# Sets:
+# OPENSSL_BIN: The path to the openssl binary to use
+function openim::util::test_openssl_installed {
+ if ! openssl version >& /dev/null; then
+ echo "Failed to run openssl. Please ensure openssl is installed"
+ exit 1
+ fi
+
+ OPENSSL_BIN=$(command -v openssl)
+}
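+# Illustrative usage sketch (not in the original script): verify openssl, then call the resolved binary.
+#   openim::util::test_openssl_installed
+#   "${OPENSSL_BIN}" version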
+
+# creates a client CA, args are sudo, dest-dir, ca-id, purpose
+# purpose is dropped in after "key encipherment", you usually want
+# '"client auth"'
+# '"server auth"'
+# '"client auth","server auth"'
+function openim::util::create_signing_certkey {
+ local sudo=$1
+ local dest_dir=$2
+ local id=$3
+ local purpose=$4
+ # Create client ca
+ ${sudo} /usr/bin/env bash -e < "${dest_dir}/${id}-ca-config.json"
+EOF
+}
+
+# signs a client certificate: args are sudo, dest-dir, CA, filename (roughly), username, groups...
+function openim::util::create_client_certkey {
+ local sudo=$1
+ local dest_dir=$2
+ local ca=$3
+ local id=$4
+ local cn=${5:-$4}
+ local groups=""
+ local SEP=""
+ shift 5
+ while [ -n "${1:-}" ]; do
+ groups+="${SEP}{\"O\":\"$1\"}"
+ SEP=","
+ shift 1
+ done
+ ${sudo} /usr/bin/env bash -e < /dev/null
+apiVersion: v1
+kind: Config
+clusters:
+ - cluster:
+ certificate-authority: ${ca_file}
+ server: https://${api_host}:${api_port}/
+ name: local-up-cluster
+users:
+ - user:
+ token: ${token}
+ client-certificate: ${dest_dir}/client-${client_id}.crt
+ client-key: ${dest_dir}/client-${client_id}.key
+ name: local-up-cluster
+contexts:
+ - context:
+ cluster: local-up-cluster
+ user: local-up-cluster
+ name: local-up-cluster
+current-context: local-up-cluster
+EOF
+
+ # flatten the openimconfig files to make them self contained
+ username=$(whoami)
+ ${sudo} /usr/bin/env bash -e < "/tmp/${client_id}.openimconfig"
+ mv -f "/tmp/${client_id}.openimconfig" "${dest_dir}/${client_id}.openimconfig"
+ chown ${username} "${dest_dir}/${client_id}.openimconfig"
+EOF
+}
+
+# Determines if docker can be run, failures may simply require that the user be added to the docker group.
+function openim::util::ensure_docker_daemon_connectivity {
+ IFS=" " read -ra DOCKER <<< "${DOCKER_OPTS}"
+ # Expand ${DOCKER[@]} only if it's not unset. This is to work around
+ # Bash 3 issue with unbound variable.
+ DOCKER=(docker ${DOCKER[@]:+"${DOCKER[@]}"})
+ if ! "${DOCKER[@]}" info > /dev/null 2>&1 ; then
+ cat <<'EOF' >&2
+Can't connect to 'docker' daemon. please fix and retry.
+
+Possible causes:
+ - Docker Daemon not started
+ - Linux: confirm via your init system
+    - macOS w/ docker-machine: run `docker-machine ls` and `docker-machine start <machine-name>`
+ - macOS w/ Docker for Mac: Check the menu bar and start the Docker application
+ - DOCKER_HOST hasn't been set or is set incorrectly
+ - Linux: domain socket is used, DOCKER_* should be unset. In Bash run `unset ${!DOCKER_*}`
+    - macOS w/ docker-machine: run `eval "$(docker-machine env <machine-name>)"`
+ - macOS w/ Docker for Mac: domain socket is used, DOCKER_* should be unset. In Bash run `unset ${!DOCKER_*}`
+ - Other things to check:
+ - Linux: User isn't in 'docker' group. Add and relogin.
+ - Something like 'sudo usermod -a -G docker ${USER}'
+ - RHEL7 bug and workaround: https://bugzilla.redhat.com/show_bug.cgi?id=1119282#c8
+EOF
+ return 1
+ fi
+}
+
+# Wait for background jobs to finish. Return with
+# an error status if any of the jobs failed.
+openim::util::wait-for-jobs() {
+ local fail=0
+ local job
+ for job in $(jobs -p); do
+ wait "${job}" || fail=$((fail + 1))
+ done
+ return ${fail}
+}
+
+# openim::util::join
+# Concatenates the list elements with the delimiter passed as first parameter
+#
+# Ex: openim::util::join , a b c
+# -> a,b,c
+function openim::util::join {
+ local IFS="$1"
+ shift
+ echo "$*"
+}
+
+# Function: openim::util::list-to-string
+# Description: Converts a list to a string, removing spaces, brackets, and commas.
+# Example input: [1002 3 , 2 32 3 , 3 434 ,]
+# Example output: 10023 2323 3434
+# Example usage:
+# result=$(openim::util::list-to-string "[10023, 2323, 3434]")
+# echo $result
+function openim::util::list-to-string() {
+ # Capture all arguments into a single string
+ ports_list="$*"
+
+ # Use sed for transformations:
+ # 1. Remove spaces
+ # 2. Replace commas with spaces
+ # 3. Remove opening and closing brackets
+ ports_array=$(echo "$ports_list" | sed 's/ //g; s/,/ /g; s/^\[\(.*\)\]$/\1/')
+ # For external use, we might want to echo the result so that it can be captured by callers
+ echo "$ports_array"
+}
+# MSG_GATEWAY_PROM_PORTS=$(openim::util::list-to-string "10023, 2323, 34 34")
+# read -a MSG_GATEWAY_PROM_PORTS <<< $(openim::util::list-to-string "10023, 2323, 34 34")
+# echo ${MSG_GATEWAY_PROM_PORTS}
+# echo "${#MSG_GATEWAY_PROM_PORTS[@]}"
+# Downloads cfssl/cfssljson/cfssl-certinfo into $1 directory if they do not already exist in PATH
+#
+# Assumed vars:
+# $1 (cfssl directory) (optional)
+#
+# Sets:
+# CFSSL_BIN: The path of the installed cfssl binary
+# CFSSLJSON_BIN: The path of the installed cfssljson binary
+# CFSSLCERTINFO_BIN: The path of the installed cfssl-certinfo binary
+#
+function openim::util::ensure-cfssl {
+ if command -v cfssl &>/dev/null && command -v cfssljson &>/dev/null && command -v cfssl-certinfo &>/dev/null; then
+ CFSSL_BIN=$(command -v cfssl)
+ CFSSLJSON_BIN=$(command -v cfssljson)
+ CFSSLCERTINFO_BIN=$(command -v cfssl-certinfo)
+ return 0
+ fi
+
+ host_arch=$(openim::util::host_arch)
+
+ if [[ "${host_arch}" != "amd64" ]]; then
+ echo "Cannot download cfssl on non-amd64 hosts and cfssl does not appear to be installed."
+ echo "Please install cfssl, cfssljson and cfssl-certinfo and verify they are in \$PATH."
+ echo "Hint: export PATH=\$PATH:\$GOPATH/bin; go get -u github.com/cloudflare/cfssl/cmd/..."
+ exit 1
+ fi
+
+ # Create a temp dir for cfssl if no directory was given
+ local cfssldir=${1:-}
+ if [[ -z "${cfssldir}" ]]; then
+ cfssldir="$HOME/bin"
+ fi
+
+ mkdir -p "${cfssldir}"
+ pushd "${cfssldir}" > /dev/null || return 1
+
+ echo "Unable to successfully run 'cfssl' from ${PATH}; downloading instead..."
+ kernel=$(uname -s)
+ case "${kernel}" in
+ Linux)
+ curl --retry 10 -L -o cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
+ curl --retry 10 -L -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
+ curl --retry 10 -L -o cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
+ ;;
+ Darwin)
+ curl --retry 10 -L -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64
+ curl --retry 10 -L -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
+ curl --retry 10 -L -o cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_darwin-amd64
+ ;;
+ *)
+ echo "Unknown, unsupported platform: ${kernel}." >&2
+ echo "Supported platforms: Linux, Darwin." >&2
+ exit 2
+ esac
+
+ chmod +x cfssl || true
+ chmod +x cfssljson || true
+ chmod +x cfssl-certinfo || true
+
+ CFSSL_BIN="${cfssldir}/cfssl"
+ CFSSLJSON_BIN="${cfssldir}/cfssljson"
+ CFSSLCERTINFO_BIN="${cfssldir}/cfssl-certinfo"
+ if [[ ! -x ${CFSSL_BIN} || ! -x ${CFSSLJSON_BIN} || ! -x ${CFSSLCERTINFO_BIN} ]]; then
+ echo "Failed to download 'cfssl'."
+ echo "Please install cfssl, cfssljson and cfssl-certinfo and verify they are in \$PATH."
+ echo "Hint: export PATH=\$PATH:\$GOPATH/bin; go get -u github.com/cloudflare/cfssl/cmd/..."
+ exit 1
+ fi
+ popd > /dev/null || return 1
+}
+
+function openim::util::ensure-docker-buildx {
+ # podman returns 0 on `docker buildx version`, docker on `docker buildx`. One of them must succeed.
+ if docker buildx version >/dev/null 2>&1 || docker buildx >/dev/null 2>&1; then
+ return 0
+ else
+ echo "ERROR: docker buildx not available. Docker 19.03 or higher is required with experimental features enabled"
+ exit 1
+ fi
+}
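+# Illustrative usage sketch (image tag and platform are made up):
+#   openim::util::ensure-docker-buildx
+#   docker buildx build --platform linux/arm64 -t openim/openim-api:dev .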
+
+# openim::util::ensure-bash-version
+# Check if we are using a supported bash version
+#
+function openim::util::ensure-bash-version {
+ # shellcheck disable=SC2004
+ if ((${BASH_VERSINFO[0]}<4)) || ( ((${BASH_VERSINFO[0]}==4)) && ((${BASH_VERSINFO[1]}<2)) ); then
+ echo "ERROR: This script requires a minimum bash version of 4.2, but got version of ${BASH_VERSINFO[0]}.${BASH_VERSINFO[1]}"
+ if [ "$(uname)" = 'Darwin' ]; then
+ echo "On macOS with homebrew 'brew install bash' is sufficient."
+ fi
+ exit 1
+ fi
+}
+
+# openim::util::ensure-install-nginx
+# Check if nginx is installed
+#
+function openim::util::ensure-install-nginx {
+ if ! command -v nginx &>/dev/null; then
+ echo "ERROR: nginx not found. Please install nginx."
+ exit 1
+ fi
+
+ for port in "80"
+ do
+ if echo |telnet 127.0.0.1 $port 2>&1|grep refused &>/dev/null;then
+ exit 1
+ fi
+ done
+}
+
+# openim::util::ensure-gnu-sed
+# Determines which sed binary is gnu-sed on linux/darwin
+#
+# Sets:
+# SED: The name of the gnu-sed binary
+#
+function openim::util::ensure-gnu-sed {
+ # NOTE: the echo below is a workaround to ensure sed is executed before the grep.
+ # see: https://github.com/openimrnetes/openimrnetes/issues/87251
+ sed_help="$(LANG=C sed --help 2>&1 || true)"
+ if echo "${sed_help}" | grep -q "GNU\|BusyBox"; then
+ SED="sed"
+ elif command -v gsed &>/dev/null; then
+ SED="gsed"
+ else
+ openim::log::error "Failed to find GNU sed as sed or gsed. If you are on Mac: brew install gnu-sed." >&2
+ return 1
+ fi
+ openim::util::sourced_variable "${SED}"
+}
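+# Illustrative usage sketch (file path and pattern are hypothetical); always call through "${SED}"
+# so the same edit works on both Linux and macOS:
+#   openim::util::ensure-gnu-sed
+#   "${SED}" -i 's/old-value/new-value/g' "${OPENIM_ROOT}/config/example.yaml"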
+
+# openim::util::ensure-gnu-date
+# Determines which date binary is gnu-date on linux/darwin
+#
+# Sets:
+# DATE: The name of the gnu-date binary
+#
+function openim::util::ensure-gnu-date {
+ # NOTE: the echo below is a workaround to ensure date is executed before the grep.
+ date_help="$(LANG=C date --help 2>&1 || true)"
+ if echo "${date_help}" | grep -q "GNU\|BusyBox"; then
+ DATE="date"
+ elif command -v gdate &>/dev/null; then
+ DATE="gdate"
+ else
+ openim::log::error "Failed to find GNU date as date or gdate. If you are on Mac: brew install coreutils." >&2
+ return 1
+ fi
+ openim::util::sourced_variable "${DATE}"
+}
+
+# openim::util::check-file-in-alphabetical-order
+# Check that the file is in alphabetical order
+#
+function openim::util::check-file-in-alphabetical-order {
+ local failure_file="$1"
+ if ! diff -u "${failure_file}" <(LC_ALL=C sort "${failure_file}"); then
+ {
+ echo
+ echo "${failure_file} is not in alphabetical order. Please sort it:"
+ echo
+ echo " LC_ALL=C sort -o ${failure_file} ${failure_file}"
+ echo
+ } >&2
+ false
+ fi
+}
+
+# openim::util::require-jq
+# Checks whether jq is installed.
+function openim::util::require-jq {
+ if ! command -v jq &>/dev/null; then
+ openim::log::errexit "jq not found. Please install." 1>&2
+ fi
+}
+
+# openim::util::require-dig
+# Checks whether dig is installed and provides installation instructions if it is not.
+function openim::util::require-dig {
+ if ! command -v dig &>/dev/null; then
+ openim::log::error "Please install 'dig' to use this feature. OR Set the environment variable for OPENIM_IP"
+ openim::log::error "Installation instructions:"
+ openim::log::error " For Ubuntu/Debian: sudo apt-get install dnsutils"
+ openim::log::error " For CentOS/RedHat: sudo yum install bind-utils"
+ openim::log::error " For macOS: 'dig' should be preinstalled. If missing, try: brew install bind"
+ openim::log::error " For Windows: Install BIND9 tools from https://www.isc.org/download/"
+ openim::log::error_exit "dig command not found."
+ fi
+ return 0
+}
+
+# outputs md5 hash of $1, works on macOS and Linux
+function openim::util::md5() {
+ if which md5 >/dev/null 2>&1; then
+ md5 -q "$1"
+ else
+ md5sum "$1" | awk '{ print $1 }'
+ fi
+}
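+# Illustrative usage sketch (RELEASE_TARS and ARTIFACT are assumed to come from release.sh):
+#   checksum="$(openim::util::md5 "${RELEASE_TARS}/${ARTIFACT}")"
+#   echo "md5(${ARTIFACT}) = ${checksum}"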
+
+# openim::util::read-array
+# Reads in stdin and adds it line by line to the array provided. This can be
+# used instead of "mapfile -t", and is bash 3 compatible.
+#
+# Assumed vars:
+# $1 (name of array to create/modify)
+#
+# Example usage:
+# openim::util::read-array files < <(ls -1)
+#
+function openim::util::read-array {
+ local i=0
+ unset -v "$1"
+ while IFS= read -r "$1[i++]"; do :; done
+ eval "[[ \${$1[--i]} ]]" || unset "$1[i]" # ensures last element isn't empty
+}
+
+# Some useful colors.
+if [[ -z "${color_start-}" ]]; then
+ declare -r color_start="\033["
+ declare -r color_red="${color_start}0;31m"
+ declare -r color_yellow="${color_start}0;33m"
+ declare -r color_green="${color_start}0;32m"
+ declare -r color_blue="${color_start}1;34m"
+ declare -r color_cyan="${color_start}1;36m"
+ declare -r color_norm="${color_start}0m"
+
+ openim::util::sourced_variable "${color_start}"
+ openim::util::sourced_variable "${color_red}"
+ openim::util::sourced_variable "${color_yellow}"
+ openim::util::sourced_variable "${color_green}"
+ openim::util::sourced_variable "${color_blue}"
+ openim::util::sourced_variable "${color_cyan}"
+ openim::util::sourced_variable "${color_norm}"
+fi
+
+# ex: ts=2 sw=2 et filetype=sh
+
+function openim::util::desc() {
+ openim::util:run::maybe_first_prompt
+ rate=25
+ if [ -n "$DEMO_RUN_FAST" ]; then
+ rate=1000
+ fi
+  echo "$blue# $@$reset" | pv -qL $rate
+#!/usr/bin/env bash
+# Copyright © 2023 OpenIM. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# this script is used to check whether the code is formatted by gofmt or not
+#
+# Usage: source scripts/lib/util.sh
+################################################################################
+
+# TODO Debug: Just for testing, please comment out
+# OPENIM_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd -P)
+# source "${OPENIM_ROOT}/scripts/lib/logging.sh"
+
#1. Write the IPs into a file (for example hosts_file), one IP address per line.
#2. Edit the username and password in ssh-mutual-trust.sh; the defaults are user root and password 123.
# hosts_file_path="path/to/your/hosts/file"
@@ -33,7 +1268,7 @@ function openim:util::setup_ssh_key_copy() {
local sshkey_file=~/.ssh/id_rsa.pub
- # check sshkey file
+ # check sshkey file
if [[ ! -e $sshkey_file ]]; then
expect -c "
spawn ssh-keygen -t rsa
@@ -50,7 +1285,7 @@ function openim:util::setup_ssh_key_copy() {
# delete history
sed -i "/$target/d" ~/.ssh/known_hosts
- # copy key
+ # copy key
expect -c "
set timeout 100
spawn ssh-copy-id $username@$target
@@ -301,27 +1536,43 @@ openim::util::check_ports() {
openim::log::info "Checking ports: $*"
# Iterate over each given port.
for port in "$@"; do
- # Use the `ss` command to find process information related to the given port.
- local info=$(ss -ltnp | grep ":$port" || true)
-
- # If there's no process information, it means the process associated with the port is not running.
+ # Initialize variables
+ # Check the OS and use the appropriate command
+ if [[ "$OSTYPE" == "linux-gnu"* ]]; then
+ if command -v ss > /dev/null 2>&1; then
+ info=$(ss -ltnp | grep ":$port" || true)
+ else
+ info=$(netstat -ltnp | grep ":$port" || true)
+ fi
+ elif [[ "$OSTYPE" == "darwin"* ]]; then
+ # For macOS, use lsof
+ info=$(lsof -i:"$port" | grep "\*:$port" || true)
+ fi
+
+ # Check if any process is using the port
if [[ -z $info ]]; then
not_started+=($port)
else
- # Extract relevant details: Process Name, PID, and FD.
- local details=$(echo $info | sed -n 's/.*users:(("\([^"]*\)",pid=\([^,]*\),fd=\([^)]*\))).*/\1 \2 \3/p')
- local command=$(echo $details | awk '{print $1}')
- local pid=$(echo $details | awk '{print $2}')
- local fd=$(echo $details | awk '{print $3}')
-
+ if [[ "$OSTYPE" == "linux-gnu"* ]]; then
+ # Extract relevant details for Linux: Process Name, PID, and FD.
+ details=$(echo $info | sed -n 's/.*users:(("\([^"]*\)",pid=\([^,]*\),fd=\([^)]*\))).*/\1 \2 \3/p')
+ command=$(echo $details | awk '{print $1}')
+ pid=$(echo $details | awk '{print $2}')
+ fd=$(echo $details | awk '{print $3}')
+ elif [[ "$OSTYPE" == "darwin"* ]]; then
+ # Handle extraction for macOS
+ pid=$(echo $info | awk '{print $2}' | cut -d'/' -f1)
+ command=$(ps -p $pid -o comm= | xargs basename)
+ fd=$(echo $info | awk '{print $4}' | cut -d'/' -f1)
+ fi
+
# Get the start time of the process using the PID
if [[ -z $pid ]]; then
- local start_time="N/A"
+ start_time="N/A"
else
- # Get the start time of the process using the PID
- local start_time=$(ps -p $pid -o lstart=)
+ start_time=$(ps -p $pid -o lstart=)
fi
-
+
started+=("Port $port - Command: $command, PID: $pid, FD: $fd, Started: $start_time")
fi
done
@@ -344,13 +1595,17 @@ openim::util::check_ports() {
# If any of the processes is not running, return a status of 1.
if [[ ${#not_started[@]} -ne 0 ]]; then
- echo "++++ OpenIM Log >> cat ${LOG_FILE}"
+ openim::color::echo $COLOR_RED " OpenIM Stdout Log >> cat ${LOG_FILE}"
+ openim::color::echo $COLOR_RED " OpenIM Stderr Log >> cat ${STDERR_LOG_FILE}"
+ echo ""
+ cat "$TMP_LOG_FILE" | awk '{print "\033[31m" $0 "\033[0m"}'
return 1
else
openim::log::success "All specified processes are running."
return 0
fi
}
+
# set +o errexit
# Sample call for testing:
# openim::util::check_ports 10002 1004 12345 13306
@@ -364,6 +1619,21 @@ openim::util::check_ports() {
# openim::util::check_process_names nginx mysql redis
# The function returns a status of 1 if any of the processes is not running.
openim::util::check_process_names() {
+ # Function to get the port of a process
+ get_port() {
+ local pid=$1
+ if [[ "$OSTYPE" == "linux-gnu"* ]]; then
+ # Linux
+ ss -ltnp 2>/dev/null | grep $pid | awk '{print $4}' | cut -d ':' -f2
+ elif [[ "$OSTYPE" == "darwin"* ]]; then
+ # macOS
+ lsof -nP -iTCP -sTCP:LISTEN -a -p $pid | awk 'NR>1 {print $9}' | sed 's/.*://'
+ else
+ echo "Unsupported OS"
+ return 1
+ fi
+ }
+
# Arrays to collect details of processes
local not_started=()
local started=()
@@ -373,7 +1643,7 @@ openim::util::check_process_names() {
for process_name in "$@"; do
# Use `pgrep` to find process IDs related to the given process name
local pids=($(pgrep -f $process_name))
-
+
# Check if any process IDs were found
if [[ ${#pids[@]} -eq 0 ]]; then
not_started+=($process_name)
@@ -382,7 +1652,7 @@ openim::util::check_process_names() {
for pid in "${pids[@]}"; do
local command=$(ps -p $pid -o cmd=)
local start_time=$(ps -p $pid -o lstart=)
- local port=$(ss -ltnp 2>/dev/null | grep $pid | awk '{print $4}' | cut -d ':' -f2)
+ local port=$(get_port $pid)
# Check if port information was found for the PID
if [[ -z $port ]]; then
@@ -412,13 +1682,17 @@ openim::util::check_process_names() {
# Return status
if [[ ${#not_started[@]} -ne 0 ]]; then
- echo "++++ OpenIM Log >> cat ${LOG_FILE}"
+ openim::color::echo $COLOR_RED " OpenIM Stdout Log >> cat ${LOG_FILE}"
+ openim::color::echo $COLOR_RED " OpenIM Stderr Log >> cat ${STDERR_LOG_FILE}"
+ cat "$TMP_LOG_FILE" | awk '{print "\033[31m" $0 "\033[0m"}'
return 1
else
+ echo ""
openim::log::success "All processes are running."
return 0
fi
}
+
# openim::util::check_process_names docker-pr
# The `openim::util::stop_services_on_ports` function stops services running on specified ports.
@@ -440,15 +1714,15 @@ openim::util::stop_services_on_ports() {
for port in "$@"; do
# Use the `lsof` command to find process information related to the given port.
info=$(lsof -i :$port -n -P | grep LISTEN || true)
-
+
# If there's process information, it means the process associated with the port is running.
if [[ -n $info ]]; then
# Extract the Process ID.
while read -r line; do
local pid=$(echo $line | awk '{print $2}')
-
+
# Try to stop the service by killing its process.
- if kill -TERM $pid; then
+ if kill -10 $pid; then
stopped+=($port)
else
not_stopped+=($port)
@@ -479,13 +1753,14 @@ openim::util::stop_services_on_ports() {
return 1
else
openim::log::success "All specified services were stopped."
+ echo ""
return 0
fi
}
# nc -l -p 12345
# nc -l -p 123456
# ps -ef | grep "nc -l"
-# openim::util::stop_services_on_ports 1234 12345
+# openim::util::stop_services_on_ports 1234 12345
# The `openim::util::stop_services_with_name` function stops services with specified names.
@@ -524,7 +1799,7 @@ openim::util::stop_services_with_name() {
# If there's a Process ID, it means the service with the name is running.
if [[ -n $pid ]]; then
# Try to stop the service by killing its process.
- if kill -TERM $pid 2>/dev/null; then
+ if kill -10 $pid 2>/dev/null; then
stopped_this_time=true
fi
fi
@@ -555,6 +1830,7 @@ openim::util::stop_services_with_name() {
fi
openim::log::success "All specified services were stopped."
+ echo ""
}
# sleep 333333&
# sleep 444444&
@@ -574,11 +1850,11 @@ openim::util::find-binary-for-platform() {
local -r lookfor="$1"
local -r platform="$2"
local locations=(
- ""${OPENIM_ROOT}"/_output/bin/${lookfor}"
- ""${OPENIM_ROOT}"/_output/${platform}/${lookfor}"
- ""${OPENIM_ROOT}"/_output/local/bin/${platform}/${lookfor}"
- ""${OPENIM_ROOT}"/_output/platforms/${platform}/${lookfor}"
- ""${OPENIM_ROOT}"/_output/platforms/bin/${platform}/${lookfor}"
+ "${OPENIM_ROOT}/_output/bin/${lookfor}"
+ "${OPENIM_ROOT}/_output/${platform}/${lookfor}"
+ "${OPENIM_ROOT}/_output/local/bin/${platform}/${lookfor}"
+ "${OPENIM_ROOT}/_output/platforms/${platform}/${lookfor}"
+ "${OPENIM_ROOT}/_output/platforms/bin/${platform}/${lookfor}"
)
# List most recently-updated location.
@@ -655,11 +1931,11 @@ openim::util::gen-docs() {
# Removes previously generated docs-- we don't want to check them in. $OPENIM_ROOT
# must be set.
openim::util::remove-gen-docs() {
- if [ -e ""${OPENIM_ROOT}"/docs/.generated_docs" ]; then
+ if [ -e "${OPENIM_ROOT}/docs/.generated_docs" ]; then
# remove all of the old docs; we don't want to check them in.
while read -r file; do
- rm ""${OPENIM_ROOT}"/${file}" 2>/dev/null || true
- done <""${OPENIM_ROOT}"/docs/.generated_docs"
+ rm "${OPENIM_ROOT}/${file}" 2>/dev/null || true
+ done <"${OPENIM_ROOT}/docs/.generated_docs"
# The docs/.generated_docs file lists itself, so we don't need to explicitly
# delete it.
fi
@@ -1051,7 +2327,7 @@ function openim::util::ensure-install-nginx {
exit 1
fi
- for port in 80
+ for port in "80"
do
if echo |telnet 127.0.0.1 $port 2>&1|grep refused &>/dev/null;then
exit 1
@@ -1129,8 +2405,13 @@ function openim::util::require-jq {
# Checks whether dig is installed and provides installation instructions if it is not.
function openim::util::require-dig {
if ! command -v dig &>/dev/null; then
- openim::log::error "dig command not found."
- return 1
+ openim::log::error "Please install 'dig' to use this feature, or set the OPENIM_IP environment variable instead."
+ openim::log::error "Installation instructions:"
+ openim::log::error " For Ubuntu/Debian: sudo apt-get install dnsutils"
+ openim::log::error " For CentOS/RedHat: sudo yum install bind-utils"
+ openim::log::error " For macOS: 'dig' should be preinstalled. If missing, try: brew install bind"
+ openim::log::error " For Windows: Install BIND9 tools from https://www.isc.org/download/"
+ openim::log::error_exit "dig command not found."
fi
return 0
}
@@ -1192,6 +2473,186 @@ function openim::util::desc() {
openim::util:run::prompt
}
+function openim::util:run::prompt() {
+ echo -n "${yellow}\$ ${reset}"
+}
+
+started=""
+function openim::util:run::maybe_first_prompt() {
+ if [ -z "$started" ]; then
+ openim::util:run::prompt
+ started=true
+ fi
+}
+
+# After a `run` this variable will hold the stdout of the command that was run.
+# If the command was interactive, this will likely be garbage.
+DEMO_RUN_STDOUT=""
+
+function openim::util::run() {
+ openim::util:run::maybe_first_prompt
+ rate=25
+ if [ -n "$DEMO_RUN_FAST" ]; then
+ rate=1000
+ fi
+ echo "${green}$1${reset}" | pv -qL "$rate"
+ if [ -n "$DEMO_RUN_FAST" ]; then
+ sleep 0.5
+ fi
+ OFILE="$(mktemp -t $(basename $0).XXXXXX)"
+ if [ "$(uname)" == "Darwin" ]; then
+ script -q "$OFILE" $1
+ else
+ script -eq -c "$1" -f "$OFILE"
+ fi
+ r=$?
+ read -d '' -t "${timeout}" -n 10000 # clear stdin
+ openim::util:run::prompt
+ if [ -z "$DEMO_AUTO_RUN" ]; then
+ read -s
+ fi
+ DEMO_RUN_STDOUT="$(tail -n +2 $OFILE | sed 's/\r//g')"
+ return $r
+}
+
+function openim::util::run::relative() {
+ for arg; do
+ echo "$(realpath $(dirname $(which $0)))/$arg" | sed "s|$(realpath $(pwd))|.|"
+ done
+}
+
+# This function retrieves the IP address of the current server.
+# When curl is available, it queries Google DNS via dig (o-o.myaddr.l.google.com)
+# for the public IP address. If that lookup fails, or curl is not installed, it
+# falls back to the first non-loopback internal address reported by `ip addr show`.
+# TODO: If a delay is found, the delay needs to be addressed
+function openim::util::get_server_ip() {
+ # Check if the 'curl' command is available
+ if command -v curl &> /dev/null; then
+ # Try to retrieve the public IP address using curl and ifconfig.me
+ IP=$(dig TXT +short o-o.myaddr.l.google.com @ns1.google.com | sed 's/"//g' | tr -d '\n')
+
+ # Check if IP retrieval was successful
+ if [[ -z "$IP" ]]; then
+ # If not, get the internal IP address
+ IP=$(ip addr show | grep 'inet ' | grep -v 127.0.0.1 | awk '{print $2}' | cut -d'/' -f1 | head -n 1)
+ fi
+ else
+ # If curl is not available, get the internal IP address
+ IP=$(ip addr show | grep 'inet ' | grep -v 127.0.0.1 | awk '{print $2}' | cut -d'/' -f1 | head -n 1)
+ fi
+
+ # Return the fetched IP address
+ echo "$IP"
+}
+
+function openim::util::onCtrlC() {
+ echo -e "\n${t_reset}Ctrl+C pressed. Exiting openim make init..."
+ exit 1
+}
+
+# Function: Remove spaces from a string
+function openim::util::remove_space() {
+ value=$* # capture the arguments passed in
+ result=$(echo $value | sed 's/ //g') # strip all spaces
+}
+
+function openim::util::gencpu() {
+ # Check the system type
+ system_type=$(uname)
+
+ if [[ "$system_type" == "Darwin" ]]; then
+ # macOS (using sysctl)
+ cpu_count=$(sysctl -n hw.ncpu)
+ elif [[ "$system_type" == "Linux" ]]; then
+ # Linux (using lscpu)
+ cpu_count=$(lscpu --parse | grep -E '^([^#].*,){3}[^#]' | sort -u | wc -l)
+ else
+ echo "Unsupported operating system: $system_type"
+ exit 1
+ fi
+ echo $cpu_count
+}
+
+function openim::util::set_max_fd() {
+ local desired_fd=$1
+ local max_fd_limit
+
+ # Check if we're not on cygwin or darwin.
+ if [ "$(uname -s | tr '[:upper:]' '[:lower:]')" != "cygwin" ] && [ "$(uname -s | tr '[:upper:]' '[:lower:]')" != "darwin" ]; then
+ # Try to get the hard limit.
+ max_fd_limit=$(ulimit -H -n)
+ if [ $? -eq 0 ]; then
+ # If desired_fd is 'maximum' or 'max', set it to the hard limit.
+ if [ "$desired_fd" = "maximum" ] || [ "$desired_fd" = "max" ]; then
+ desired_fd="$max_fd_limit"
+ fi
+
+ # Check if desired_fd is less than or equal to max_fd_limit.
+ if [ "$desired_fd" -le "$max_fd_limit" ]; then
+ ulimit -n "$desired_fd"
+ if [ $? -ne 0 ]; then
+ echo "Warning: Could not set maximum file descriptor limit to $desired_fd"
+ fi
+ else
+ echo "Warning: Desired file descriptor limit ($desired_fd) is greater than the hard limit ($max_fd_limit)"
+ fi
+ else
+ echo "Warning: Could not query the maximum file descriptor hard limit."
+ fi
+ else
+ echo "Warning: Not attempting to modify file descriptor limit on Cygwin or Darwin."
+ fi
+}
+
+
+function openim::util::gen_os_arch() {
+ # Get the current operating system and architecture
+ OS=$(uname -s | tr '[:upper:]' '[:lower:]')
+ ARCH=$(uname -m)
+
+ # Select the repository home directory based on the operating system and architecture
+ if [[ "$OS" == "darwin" ]]; then
+ if [[ "$ARCH" == "x86_64" ]]; then
+ REPO_DIR="darwin/amd64"
+ else
+ REPO_DIR="darwin/386"
+ fi
+ elif [[ "$OS" == "linux" ]]; then
+ if [[ "$ARCH" == "x86_64" ]]; then
+ REPO_DIR="linux/amd64"
+ elif [[ "$ARCH" == "arm64" ]]; then
+ REPO_DIR="linux/arm64"
+ elif [[ "$ARCH" == "mips64" ]]; then
+ REPO_DIR="linux/mips64"
+ elif [[ "$ARCH" == "mips64le" ]]; then
+ REPO_DIR="linux/mips64le"
+ elif [[ "$ARCH" == "ppc64le" ]]; then
+ REPO_DIR="linux/ppc64le"
+ elif [[ "$ARCH" == "s390x" ]]; then
+ REPO_DIR="linux/s390x"
+ else
+ REPO_DIR="linux/386"
+ fi
+ elif [[ "$OS" == "windows" ]]; then
+ if [[ "$ARCH" == "x86_64" ]]; then
+ REPO_DIR="windows/amd64"
+ else
+ REPO_DIR="windows/386"
+ fi
+ else
+ echo -e "${RED_PREFIX}Unsupported OS: $OS${COLOR_SUFFIX}"
+ exit 1
+ fi
+}
+
+if [[ "$*" =~ openim::util:: ]];then
+ eval $*
+fi
+
+ openim::util:run::prompt
+}
+
function openim::util:run::prompt() {
echo -n "$yellow\$ $reset"
}
@@ -1242,7 +2703,7 @@ function openim::util::run::relative() {
# This function retrieves the IP address of the current server.
# It primarily uses the `curl` command to fetch the public IP address from ifconfig.me.
-# If curl or the service is not available, it falls back
+# If curl or the service is not available, it falls back
# to the internal IP address provided by the hostname command.
# TODO: If a delay is found, the delay needs to be addressed
function openim::util::get_server_ip() {
@@ -1250,7 +2711,7 @@ function openim::util::get_server_ip() {
if command -v curl &> /dev/null; then
# Try to retrieve the public IP address using curl and ifconfig.me
IP=$(dig TXT +short o-o.myaddr.l.google.com @ns1.google.com | sed 's/"//g' | tr -d '\n')
-
+
# Check if IP retrieval was successful
if [[ -z "$IP" ]]; then
# If not, get the internal IP address
@@ -1260,7 +2721,7 @@ function openim::util::get_server_ip() {
# If curl is not available, get the internal IP address
IP=$(ip addr show | grep 'inet ' | grep -v 127.0.0.1 | awk '{print $2}' | cut -d'/' -f1 | head -n 1)
fi
-
+
# Return the fetched IP address
echo "$IP"
}
@@ -1306,7 +2767,7 @@ function openim::util::set_max_fd() {
if [ "$desired_fd" = "maximum" ] || [ "$desired_fd" = "max" ]; then
desired_fd="$max_fd_limit"
fi
-
+
# Check if desired_fd is less than or equal to max_fd_limit.
if [ "$desired_fd" -le "$max_fd_limit" ]; then
ulimit -n "$desired_fd"
@@ -1367,4 +2828,4 @@ function openim::util::gen_os_arch() {
if [[ "$*" =~ openim::util:: ]];then
eval $*
-fi
\ No newline at end of file
+fi
diff --git a/scripts/lib/version.sh b/scripts/lib/version.sh
index 04eb89b09..cb47136fb 100755
--- a/scripts/lib/version.sh
+++ b/scripts/lib/version.sh
@@ -12,7 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-
+
# -----------------------------------------------------------------------------
# Version management helpers. These functions help to set, save and load the
# following variables:
@@ -35,7 +35,7 @@ openim::version::get_version_vars() {
openim::version::load_version_vars "${OPENIM_GIT_VERSION_FILE}"
return
fi
-
+
# If the iamrnetes source was exported through git archive, then
# we likely don't have a git tree, but these magic values may be filled in.
# shellcheck disable=SC2016,SC2050
@@ -48,12 +48,12 @@ openim::version::get_version_vars() {
# something like 'HEAD -> release-1.8, tag: v1.8.3' where then 'tag: '
# can be extracted from it.
if [[ '$Format:%D$' =~ tag:\ (v[^ ,]+) ]]; then
- OPENIM_GIT_VERSION="${BASH_REMATCH[1]}"
+ OPENIM_GIT_VERSION="${BASH_REMATCH[1]}"
fi
fi
-
+
local git=(git --work-tree "${OPENIM_ROOT}")
-
+
if [[ -n ${OPENIM_GIT_COMMIT-} ]] || OPENIM_GIT_COMMIT=$("${git[@]}" rev-parse "HEAD^{commit}" 2>/dev/null); then
if [[ -z ${OPENIM_GIT_TREE_STATE-} ]]; then
# Check if the tree is dirty. default to dirty
@@ -63,7 +63,7 @@ openim::version::get_version_vars() {
OPENIM_GIT_TREE_STATE="dirty"
fi
fi
-
+
# Use git describe to find the version based on tags.
if [[ -n ${OPENIM_GIT_VERSION-} ]] || OPENIM_GIT_VERSION=$("${git[@]}" describe --tags --always --match='v*' 2>/dev/null); then
# This translates the "git describe" to an actual semver.org
@@ -81,7 +81,7 @@ openim::version::get_version_vars() {
# shellcheck disable=SC2001
# We have distance to subversion (v1.1.0-subversion-1-gCommitHash)
OPENIM_GIT_VERSION=$(echo "${OPENIM_GIT_VERSION}" | sed "s/-\([0-9]\{1,\}\)-g\([0-9a-f]\{14\}\)$/.\1\+\2/")
- elif [[ "${DASHES_IN_VERSION}" == "--" ]] ; then
+ elif [[ "${DASHES_IN_VERSION}" == "--" ]] ; then
# shellcheck disable=SC2001
# We have distance to base tag (v1.1.0-1-gCommitHash)
OPENIM_GIT_VERSION=$(echo "${OPENIM_GIT_VERSION}" | sed "s/-g\([0-9a-f]\{14\}\)$/+\1/")
@@ -94,7 +94,7 @@ openim::version::get_version_vars() {
#OPENIM_GIT_VERSION+="-dirty"
:
fi
-
+
# Try to match the "git describe" output to a regex to try to extract
# the "major" and "minor" versions and whether this is the exact tagged
# version or whether the tree is between two tagged versions.
@@ -105,12 +105,12 @@ openim::version::get_version_vars() {
OPENIM_GIT_MINOR+="+"
fi
fi
-
+
# If OPENIM_GIT_VERSION is not a valid Semantic Version, then refuse to build.
if ! [[ "${OPENIM_GIT_VERSION}" =~ ^v([0-9]+)\.([0-9]+)(\.[0-9]+)?(-[0-9A-Za-z.-]+)?(\+[0-9A-Za-z.-]+)?$ ]]; then
- echo "OPENIM_GIT_VERSION should be a valid Semantic Version. Current value: ${OPENIM_GIT_VERSION}"
- echo "Please see more details here: https://semver.org"
- exit 1
+ echo "OPENIM_GIT_VERSION should be a valid Semantic Version. Current value: ${OPENIM_GIT_VERSION}"
+ echo "Please see more details here: https://semver.org"
+ exit 1
fi
fi
fi
@@ -123,7 +123,7 @@ openim::version::save_version_vars() {
echo "!!! Internal error. No file specified in openim::version::save_version_vars"
return 1
}
-
+
cat <<EOF >"${version_file}"
OPENIM_GIT_COMMIT='${OPENIM_GIT_COMMIT-}'
OPENIM_GIT_TREE_STATE='${OPENIM_GIT_TREE_STATE-}'
@@ -140,6 +140,6 @@ openim::version::load_version_vars() {
echo "!!! Internal error. No file specified in openim::version::load_version_vars"
return 1
}
-
+
source "${version_file}"
}
diff --git a/scripts/make-rules/common.mk b/scripts/make-rules/common.mk
index 81b44826b..f8537b6ca 100644
--- a/scripts/make-rules/common.mk
+++ b/scripts/make-rules/common.mk
@@ -73,7 +73,8 @@ endif
ifeq ($(origin VERSION), undefined)
# VERSION := $(shell git describe --tags --always --match='v*')
# git describe --tags --always --match="v*" --dirty
-VERSION := $(shell git describe --tags --always --match="v*" --dirty | sed 's/-/./g') #v2.3.3.631.g00abdc9b.dirty
+# VERSION := $(shell git describe --tags --always --match="v*" --dirty | sed 's/-/./g') #v2.3.3.631.g00abdc9b.dirty
+VERSION := $(shell git describe --tags --always --match='v*')
# v2.3.3: git tag
endif
@@ -100,7 +101,7 @@ endif
# The OS must be linux when building docker images
# PLATFORMS ?= linux_amd64 linux_arm64
# The OS can be linux/windows/darwin when building binaries
-PLATFORMS ?= linux_s390x linux_mips64 linux_mips64le darwin_amd64 windows_amd64 linux_amd64 linux_arm64 linux_ppc64le # wasip1_wasm
+PLATFORMS ?= linux_s390x linux_mips64 linux_mips64le darwin_amd64 darwin_arm64 windows_amd64 linux_amd64 linux_arm64 linux_ppc64le # wasip1_wasm
# set a specific PLATFORM, defaults to the host platform
ifeq ($(origin PLATFORM), undefined)
@@ -125,11 +126,11 @@ APIROOT=$(ROOT_DIR)/pkg/proto
# Linux command settings
# TODO: Whether you need to join utils?
-FIND := find . ! -path './utils/*' ! -path './vendor/*' ! -path './third_party/*'
+FIND := find . ! -path './utils/*' ! -path './vendor/*' ! -path './third_party/*' ! -path './components/*' ! -path './logs/*'
XARGS := xargs -r --no-run-if-empty
# Linux command settings-CODE DIRS Copyright
-CODE_DIRS := $(ROOT_DIR)/pkg $(ROOT_DIR)/cmd $(ROOT_DIR)/config $(ROOT_DIR)/.docker-compose_cfg $(ROOT_DIR)/internal $(ROOT_DIR)/scripts $(ROOT_DIR)/test $(ROOT_DIR)/.github $(ROOT_DIR)/build $(ROOT_DIR)/tools $(ROOT_DIR)/deployments
+CODE_DIRS := $(ROOT_DIR)/pkg $(ROOT_DIR)/cmd $(ROOT_DIR)/config $(ROOT_DIR)/internal $(ROOT_DIR)/scripts $(ROOT_DIR)/test $(ROOT_DIR)/.github $(ROOT_DIR)/build $(ROOT_DIR)/tools $(ROOT_DIR)/deployments
FINDS := find $(CODE_DIRS)
# Makefile settings: Select different behaviors by determining whether V option is set
diff --git a/scripts/make-rules/golang.mk b/scripts/make-rules/golang.mk
index 44918d01c..915639b61 100644
--- a/scripts/make-rules/golang.mk
+++ b/scripts/make-rules/golang.mk
@@ -244,7 +244,7 @@ go.imports: tools.verify.goimports
## go.verify: execute all verity scripts.
.PHONY: go.verify
-go.verify:
+go.verify: tools.verify.misspell
@echo "Starting verification..."
@scripts_list=$$(find $(ROOT_DIR)/scripts -type f -name 'verify-*' | sort); \
for script in $$scripts_list; do \
diff --git a/scripts/make-rules/image.mk b/scripts/make-rules/image.mk
index 14a4b2c31..eaec4a127 100644
--- a/scripts/make-rules/image.mk
+++ b/scripts/make-rules/image.mk
@@ -45,7 +45,8 @@ endif
IMAGES_DIR ?= $(wildcard ${ROOT_DIR}/build/images/*)
# Determine images names by stripping out the dir names, and filter out the undesired directories
# IMAGES ?= $(filter-out Dockerfile,$(foreach image,${IMAGES_DIR},$(notdir ${image})))
-IMAGES ?= $(filter-out Dockerfile openim-tools openim-cmdutils,$(foreach image,${IMAGES_DIR},$(notdir ${image})))
+IMAGES ?= $(filter-out Dockerfile openim-tools openim-rpc-extend-msg openim-rpc-encryption openim-cmdutils,$(foreach image,${IMAGES_DIR},$(notdir ${image})))
+# IMAGES ?= $(filter-out Dockerfile openim-tools openim-cmdutils,$(foreach image,${IMAGES_DIR},$(notdir ${image}))) # !pro
ifeq (${IMAGES},)
$(error Could not determine IMAGES, set ROOT_DIR or run in source dir)
diff --git a/scripts/make-rules/tools.mk b/scripts/make-rules/tools.mk
index 7fe7305fb..5d39258ea 100644
--- a/scripts/make-rules/tools.mk
+++ b/scripts/make-rules/tools.mk
@@ -146,7 +146,7 @@ install.github-release:
# amd64
.PHONY: install.coscli
install.coscli:
- @wget -q https://ghproxy.com/https://github.com/tencentyun/coscli/releases/download/v0.13.0-beta/coscli-linux -O ${TOOLS_DIR}/coscli
+ @wget -q https://github.com/tencentyun/coscli/releases/download/v0.19.0-beta/coscli-linux -O ${TOOLS_DIR}/coscli
@chmod +x ${TOOLS_DIR}/coscli
## install.coscmd: Install coscmd, used to upload files to cos
@@ -217,6 +217,11 @@ install.depth:
install.go-callvis:
@$(GO) install github.com/ofabry/go-callvis@latest
+## install.misspell
+.PHONY: install.misspell
+install.misspell:
+ @$(GO) install github.com/client9/misspell/cmd/misspell@latest
+
## install.gothanks: Install gothanks, used to thank go dependencies
.PHONY: install.gothanks
install.gothanks:
diff --git a/scripts/mongo-init.sh b/scripts/mongo-init.sh
index 07d0e3d03..41d9ca0aa 100755
--- a/scripts/mongo-init.sh
+++ b/scripts/mongo-init.sh
@@ -12,15 +12,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-mongo -- "$MONGO_INITDB_DATABASE" </dev/null)
# detect if the host machine has the required shellcheck version installed
# if so, we will use that instead.
@@ -113,8 +165,8 @@ if ${HAVE_SHELLCHECK}; then
else
openim::log::info "Using shellcheck ${SHELLCHECK_VERSION} docker image."
"${DOCKER}" run \
- --rm -v ""${OPENIM_ROOT}":"${OPENIM_ROOT}"" -w "${OPENIM_ROOT}" \
- "${SHELLCHECK_IMAGE}" \
+ --rm -v "${OPENIM_ROOT}:${OPENIM_ROOT}" -w "${OPENIM_ROOT}" \
+ "${SHELLCHECK_IMAGE}" \
shellcheck "${SHELLCHECK_OPTIONS[@]}" "${all_shell_scripts[@]}" >&2 || res=$?
fi
diff --git a/scripts/verify-spelling.sh b/scripts/verify-spelling.sh
index f3ed7886d..2c02dccf7 100755
--- a/scripts/verify-spelling.sh
+++ b/scripts/verify-spelling.sh
@@ -25,17 +25,8 @@ OPENIM_ROOT=$(dirname "${BASH_SOURCE[0]}")/..
export OPENIM_ROOT
source "${OPENIM_ROOT}/scripts/lib/init.sh"
-# Ensure that we find the binaries we build before anything else.
-export GOBIN="${KUBE_OUTPUT_BINPATH}"
-PATH="${GOBIN}:${PATH}"
-
-# Install tools we need
-pushd ""${OPENIM_ROOT}"/tools" >/dev/null
- GO111MODULE=on go install github.com/client9/misspell/cmd/misspell
-popd >/dev/null
-
# Spell checking
# All the skipping files are defined in scripts/.spelling_failures
-skipping_file=""${OPENIM_ROOT}"/scripts/.spelling_failures"
+skipping_file="${OPENIM_ROOT}/scripts/.spelling_failures"
failing_packages=$(sed "s| | -e |g" "${skipping_file}")
-git ls-files | grep -v -e "${failing_packages}" | xargs misspell -i "Creater,creater,ect" -error -o stderr
+git ls-files | grep -v -e "${failing_packages}" | xargs "$OPENIM_ROOT/_output/tools/misspell" -i "Creater,creater,ect" -error -o stderr
diff --git a/scripts/verify-typecheck.sh b/scripts/verify-typecheck.sh
index a0b818135..62fca4049 100755
--- a/scripts/verify-typecheck.sh
+++ b/scripts/verify-typecheck.sh
@@ -33,7 +33,7 @@ cd "${OPENIM_ROOT}"
ret=0
TYPECHECK_SERIAL="${TYPECHECK_SERIAL:-false}"
scripts/run-in-gopath.sh \
- go run test/typecheck/typecheck.go "$@" "--serial=$TYPECHECK_SERIAL" || ret=$?
+go run test/typecheck/typecheck.go "$@" "--serial=$TYPECHECK_SERIAL" || ret=$?
if [[ $ret -ne 0 ]]; then
openim::log::error "Type Check has failed. This may cause cross platform build failures." >&2
openim::log::error "Please see https://github.com/openimsdk/open-im-server/tree/main/test/typecheck for more information." >&2
diff --git a/scripts/verify-yamlfmt.sh b/scripts/verify-yamlfmt.sh
index 82e1c528d..3d0a0180d 100755
--- a/scripts/verify-yamlfmt.sh
+++ b/scripts/verify-yamlfmt.sh
@@ -36,13 +36,13 @@ openim::util::trap_add "git worktree remove -f ${_tmpdir}" EXIT
cd "${_tmpdir}"
# Format YAML files
-hack/update-yamlfmt.sh
+scripts/update-yamlfmt.sh
# Test for diffs
diffs=$(git status --porcelain | wc -l)
if [[ ${diffs} -gt 0 ]]; then
echo "YAML files need to be formatted" >&2
git diff
- echo "Please run 'hack/update-yamlfmt.sh'" >&2
+ echo "Please run 'scripts/update-yamlfmt.sh'" >&2
exit 1
fi
\ No newline at end of file
diff --git a/scripts/wait-for-it.sh b/scripts/wait-for-it.sh
index 99a36affe..c05b85678 100755
--- a/scripts/wait-for-it.sh
+++ b/scripts/wait-for-it.sh
@@ -30,119 +30,119 @@ Usage:
Timeout in seconds, zero for no timeout
-- COMMAND ARGS Execute command with args after the test finishes
USAGE
- exit 1
+ exit 1
}
wait_for() {
- if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
- echoerr "$WAITFORIT_cmdname: waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
+ if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
+ echoerr "$WAITFORIT_cmdname: waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
+ else
+ echoerr "$WAITFORIT_cmdname: waiting for $WAITFORIT_HOST:$WAITFORIT_PORT without a timeout"
+ fi
+ WAITFORIT_start_ts=$(date +%s)
+ while :
+ do
+ if [[ $WAITFORIT_ISBUSY -eq 1 ]]; then
+ nc -z $WAITFORIT_HOST $WAITFORIT_PORT
+ WAITFORIT_result=$?
else
- echoerr "$WAITFORIT_cmdname: waiting for $WAITFORIT_HOST:$WAITFORIT_PORT without a timeout"
+ (echo -n > /dev/tcp/$WAITFORIT_HOST/$WAITFORIT_PORT) >/dev/null 2>&1
+ WAITFORIT_result=$?
fi
- WAITFORIT_start_ts=$(date +%s)
- while :
- do
- if [[ $WAITFORIT_ISBUSY -eq 1 ]]; then
- nc -z $WAITFORIT_HOST $WAITFORIT_PORT
- WAITFORIT_result=$?
- else
- (echo -n > /dev/tcp/$WAITFORIT_HOST/$WAITFORIT_PORT) >/dev/null 2>&1
- WAITFORIT_result=$?
- fi
- if [[ $WAITFORIT_result -eq 0 ]]; then
- WAITFORIT_end_ts=$(date +%s)
- echoerr "$WAITFORIT_cmdname: $WAITFORIT_HOST:$WAITFORIT_PORT is available after $((WAITFORIT_end_ts - WAITFORIT_start_ts)) seconds"
- break
- fi
- sleep 1
- done
- return $WAITFORIT_result
+ if [[ $WAITFORIT_result -eq 0 ]]; then
+ WAITFORIT_end_ts=$(date +%s)
+ echoerr "$WAITFORIT_cmdname: $WAITFORIT_HOST:$WAITFORIT_PORT is available after $((WAITFORIT_end_ts - WAITFORIT_start_ts)) seconds"
+ break
+ fi
+ sleep 1
+ done
+ return $WAITFORIT_result
}
wait_for_wrapper() {
- # In order to support SIGINT during timeout: http://unix.stackexchange.com/a/57692
- if [[ $WAITFORIT_QUIET -eq 1 ]]; then
- timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --quiet --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &
- else
- timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &
- fi
- WAITFORIT_PID=$!
- trap "kill -INT -$WAITFORIT_PID" INT
- wait $WAITFORIT_PID
- WAITFORIT_RESULT=$?
- if [[ $WAITFORIT_RESULT -ne 0 ]]; then
- echoerr "$WAITFORIT_cmdname: timeout occurred after waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
- fi
- return $WAITFORIT_RESULT
+ # In order to support SIGINT during timeout: http://unix.stackexchange.com/a/57692
+ if [[ $WAITFORIT_QUIET -eq 1 ]]; then
+ timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --quiet --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &
+ else
+ timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &
+ fi
+ WAITFORIT_PID=$!
+ trap "kill -INT -$WAITFORIT_PID" INT
+ wait $WAITFORIT_PID
+ WAITFORIT_RESULT=$?
+ if [[ $WAITFORIT_RESULT -ne 0 ]]; then
+ echoerr "$WAITFORIT_cmdname: timeout occurred after waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
+ fi
+ return $WAITFORIT_RESULT
}
# process arguments
while [[ $# -gt 0 ]]
do
- case "$1" in
- *:* )
- WAITFORIT_hostport=(${1//:/ })
- WAITFORIT_HOST=${WAITFORIT_hostport[0]}
- WAITFORIT_PORT=${WAITFORIT_hostport[1]}
- shift 1
- ;;
- --child)
- WAITFORIT_CHILD=1
- shift 1
- ;;
- -q | --quiet)
- WAITFORIT_QUIET=1
- shift 1
- ;;
- -s | --strict)
- WAITFORIT_STRICT=1
- shift 1
- ;;
- -h)
- WAITFORIT_HOST="$2"
- if [[ $WAITFORIT_HOST == "" ]]; then break; fi
- shift 2
- ;;
- --host=*)
- WAITFORIT_HOST="${1#*=}"
- shift 1
- ;;
- -p)
- WAITFORIT_PORT="$2"
- if [[ $WAITFORIT_PORT == "" ]]; then break; fi
- shift 2
- ;;
- --port=*)
- WAITFORIT_PORT="${1#*=}"
- shift 1
- ;;
- -t)
- WAITFORIT_TIMEOUT="$2"
- if [[ $WAITFORIT_TIMEOUT == "" ]]; then break; fi
- shift 2
- ;;
- --timeout=*)
- WAITFORIT_TIMEOUT="${1#*=}"
- shift 1
- ;;
- --)
- shift
- WAITFORIT_CLI=("$@")
- break
- ;;
- --help)
- usage
- ;;
- *)
- echoerr "Unknown argument: $1"
- usage
- ;;
- esac
+ case "$1" in
+ *:* )
+ WAITFORIT_hostport=(${1//:/ })
+ WAITFORIT_HOST=${WAITFORIT_hostport[0]}
+ WAITFORIT_PORT=${WAITFORIT_hostport[1]}
+ shift 1
+ ;;
+ --child)
+ WAITFORIT_CHILD=1
+ shift 1
+ ;;
+ -q | --quiet)
+ WAITFORIT_QUIET=1
+ shift 1
+ ;;
+ -s | --strict)
+ WAITFORIT_STRICT=1
+ shift 1
+ ;;
+ -h)
+ WAITFORIT_HOST="$2"
+ if [[ $WAITFORIT_HOST == "" ]]; then break; fi
+ shift 2
+ ;;
+ --host=*)
+ WAITFORIT_HOST="${1#*=}"
+ shift 1
+ ;;
+ -p)
+ WAITFORIT_PORT="$2"
+ if [[ $WAITFORIT_PORT == "" ]]; then break; fi
+ shift 2
+ ;;
+ --port=*)
+ WAITFORIT_PORT="${1#*=}"
+ shift 1
+ ;;
+ -t)
+ WAITFORIT_TIMEOUT="$2"
+ if [[ $WAITFORIT_TIMEOUT == "" ]]; then break; fi
+ shift 2
+ ;;
+ --timeout=*)
+ WAITFORIT_TIMEOUT="${1#*=}"
+ shift 1
+ ;;
+ --)
+ shift
+ WAITFORIT_CLI=("$@")
+ break
+ ;;
+ --help)
+ usage
+ ;;
+ *)
+ echoerr "Unknown argument: $1"
+ usage
+ ;;
+ esac
done
if [[ "$WAITFORIT_HOST" == "" || "$WAITFORIT_PORT" == "" ]]; then
- echoerr "Error: you need to provide a host and port to test."
- usage
+ echoerr "Error: you need to provide a host and port to test."
+ usage
fi
WAITFORIT_TIMEOUT=${WAITFORIT_TIMEOUT:-15}
@@ -156,36 +156,36 @@ WAITFORIT_TIMEOUT_PATH=$(realpath $WAITFORIT_TIMEOUT_PATH 2>/dev/null || readlin
WAITFORIT_BUSYTIMEFLAG=""
if [[ $WAITFORIT_TIMEOUT_PATH =~ "busybox" ]]; then
- WAITFORIT_ISBUSY=1
- # Check if busybox timeout uses -t flag
- # (recent Alpine versions don't support -t anymore)
- if timeout &>/dev/stdout | grep -q -e '-t '; then
- WAITFORIT_BUSYTIMEFLAG="-t"
- fi
+ WAITFORIT_ISBUSY=1
+ # Check if busybox timeout uses -t flag
+ # (recent Alpine versions don't support -t anymore)
+ if timeout &>/dev/stdout | grep -q -e '-t '; then
+ WAITFORIT_BUSYTIMEFLAG="-t"
+ fi
else
- WAITFORIT_ISBUSY=0
+ WAITFORIT_ISBUSY=0
fi
if [[ $WAITFORIT_CHILD -gt 0 ]]; then
+ wait_for
+ WAITFORIT_RESULT=$?
+ exit $WAITFORIT_RESULT
+else
+ if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
+ wait_for_wrapper
+ WAITFORIT_RESULT=$?
+ else
wait_for
WAITFORIT_RESULT=$?
- exit $WAITFORIT_RESULT
-else
- if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
- wait_for_wrapper
- WAITFORIT_RESULT=$?
- else
- wait_for
- WAITFORIT_RESULT=$?
- fi
+ fi
fi
if [[ $WAITFORIT_CLI != "" ]]; then
- if [[ $WAITFORIT_RESULT -ne 0 && $WAITFORIT_STRICT -eq 1 ]]; then
- echoerr "$WAITFORIT_cmdname: strict mode, refusing to execute subprocess"
- exit $WAITFORIT_RESULT
- fi
- exec "${WAITFORIT_CLI[@]}"
-else
+ if [[ $WAITFORIT_RESULT -ne 0 && $WAITFORIT_STRICT -eq 1 ]]; then
+ echoerr "$WAITFORIT_cmdname: strict mode, refusing to execute subprocess"
exit $WAITFORIT_RESULT
+ fi
+ exec "${WAITFORIT_CLI[@]}"
+else
+ exit $WAITFORIT_RESULT
fi
diff --git a/test/e2e/api/token/token.go b/test/e2e/api/token/token.go
index 679c0bbda..88af72058 100644
--- a/test/e2e/api/token/token.go
+++ b/test/e2e/api/token/token.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package token
import (
@@ -9,7 +23,7 @@ import (
"net/http"
)
-// API endpoints and other constants
+// API endpoints and other constants.
const (
APIHost = "http://127.0.0.1:10002"
UserTokenURL = APIHost + "/auth/user_token"
@@ -18,27 +32,27 @@ const (
OperationID = "1646445464564"
)
-// UserTokenRequest represents a request to get a user token
+// UserTokenRequest represents a request to get a user token.
type UserTokenRequest struct {
Secret string `json:"secret"`
PlatformID int `json:"platformID"`
UserID string `json:"userID"`
}
-// UserTokenResponse represents a response containing a user token
+// UserTokenResponse represents a response containing a user token.
type UserTokenResponse struct {
Token string `json:"token"`
ErrCode int `json:"errCode"`
}
-// User represents user data for registration
+// User represents user data for registration.
type User struct {
UserID string `json:"userID"`
Nickname string `json:"nickname"`
FaceURL string `json:"faceURL"`
}
-// UserRegisterRequest represents a request to register a user
+// UserRegisterRequest represents a request to register a user.
type UserRegisterRequest struct {
Secret string `json:"secret"`
Users []User `json:"users"`
@@ -58,7 +72,7 @@ func main() {
}
}
-// GetUserToken requests a user token from the API
+// GetUserToken requests a user token from the API.
func GetUserToken(userID string) (string, error) {
reqBody := UserTokenRequest{
Secret: SecretKey,
@@ -88,7 +102,7 @@ func GetUserToken(userID string) (string, error) {
return tokenResp.Token, nil
}
-// RegisterUser registers a new user using the API
+// RegisterUser registers a new user using the API.
func RegisterUser(token, userID, nickname, faceURL string) error {
user := User{
UserID: userID,
@@ -125,7 +139,7 @@ func RegisterUser(token, userID, nickname, faceURL string) error {
return err
}
- var respData map[string]interface{}
+ var respData map[string]any
if err := json.Unmarshal(respBody, &respData); err != nil {
return err
}
diff --git a/test/e2e/api/user/curd.go b/test/e2e/api/user/curd.go
index c0380b235..1b56492b3 100644
--- a/test/e2e/api/user/curd.go
+++ b/test/e2e/api/user/curd.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package user
import (
@@ -7,18 +21,18 @@ import (
"github.com/openimsdk/open-im-server/v3/test/e2e/framework/config"
)
-// UserInfoRequest represents a request to get or update user information
+// UserInfoRequest represents a request to get or update user information.
type UserInfoRequest struct {
UserIDs []string `json:"userIDs,omitempty"`
UserInfo *gettoken.User `json:"userInfo,omitempty"`
}
-// GetUsersOnlineStatusRequest represents a request to get users' online status
+// GetUsersOnlineStatusRequest represents a request to get users' online status.
type GetUsersOnlineStatusRequest struct {
UserIDs []string `json:"userIDs"`
}
-// GetUsersInfo retrieves detailed information for a list of user IDs
+// GetUsersInfo retrieves detailed information for a list of user IDs.
func GetUsersInfo(token string, userIDs []string) error {
url := fmt.Sprintf("http://%s:%s/user/get_users_info", config.LoadConfig().APIHost, config.LoadConfig().APIPort)
@@ -29,7 +43,7 @@ func GetUsersInfo(token string, userIDs []string) error {
return sendPostRequestWithToken(url, token, requestBody)
}
-// UpdateUserInfo updates the information for a user
+// UpdateUserInfo updates the information for a user.
func UpdateUserInfo(token, userID, nickname, faceURL string) error {
url := fmt.Sprintf("http://%s:%s/user/update_user_info", config.LoadConfig().APIHost, config.LoadConfig().APIPort)
@@ -44,7 +58,7 @@ func UpdateUserInfo(token, userID, nickname, faceURL string) error {
return sendPostRequestWithToken(url, token, requestBody)
}
-// GetUsersOnlineStatus retrieves the online status for a list of user IDs
+// GetUsersOnlineStatus retrieves the online status for a list of user IDs.
func GetUsersOnlineStatus(token string, userIDs []string) error {
url := fmt.Sprintf("http://%s:%s/user/get_users_online_status", config.LoadConfig().APIHost, config.LoadConfig().APIPort)
diff --git a/test/e2e/api/user/user.go b/test/e2e/api/user/user.go
index 66419b735..fd8144acd 100644
--- a/test/e2e/api/user/user.go
+++ b/test/e2e/api/user/user.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package user
import (
@@ -11,29 +25,29 @@ import (
"github.com/openimsdk/open-im-server/v3/test/e2e/framework/config"
)
-// ForceLogoutRequest represents a request to force a user logout
+// ForceLogoutRequest represents a request to force a user logout.
type ForceLogoutRequest struct {
PlatformID int `json:"platformID"`
UserID string `json:"userID"`
}
-// CheckUserAccountRequest represents a request to check a user account
+// CheckUserAccountRequest represents a request to check a user account.
type CheckUserAccountRequest struct {
CheckUserIDs []string `json:"checkUserIDs"`
}
-// GetUsersRequest represents a request to get a list of users
+// GetUsersRequest represents a request to get a list of users.
type GetUsersRequest struct {
Pagination Pagination `json:"pagination"`
}
-// Pagination specifies the page number and number of items per page
+// Pagination specifies the page number and number of items per page.
type Pagination struct {
PageNumber int `json:"pageNumber"`
ShowNumber int `json:"showNumber"`
}
-// ForceLogout forces a user to log out
+// ForceLogout forces a user to log out.
func ForceLogout(token, userID string, platformID int) error {
url := fmt.Sprintf("http://%s:%s/auth/force_logout", config.LoadConfig().APIHost, config.LoadConfig().APIPort)
@@ -45,7 +59,7 @@ func ForceLogout(token, userID string, platformID int) error {
return sendPostRequestWithToken(url, token, requestBody)
}
-// CheckUserAccount checks if the user accounts exist
+// CheckUserAccount checks if the user accounts exist.
func CheckUserAccount(token string, userIDs []string) error {
url := fmt.Sprintf("http://%s:%s/user/account_check", config.LoadConfig().APIHost, config.LoadConfig().APIPort)
@@ -56,7 +70,7 @@ func CheckUserAccount(token string, userIDs []string) error {
return sendPostRequestWithToken(url, token, requestBody)
}
-// GetUsers retrieves a list of users with pagination
+// GetUsers retrieves a list of users with pagination.
func GetUsers(token string, pageNumber, showNumber int) error {
url := fmt.Sprintf("http://%s:%s/user/account_check", config.LoadConfig().APIHost, config.LoadConfig().APIPort)
@@ -70,8 +84,8 @@ func GetUsers(token string, pageNumber, showNumber int) error {
return sendPostRequestWithToken(url, token, requestBody)
}
-// sendPostRequestWithToken sends a POST request with a token in the header
-func sendPostRequestWithToken(url, token string, body interface{}) error {
+// sendPostRequestWithToken sends a POST request with a token in the header.
+func sendPostRequestWithToken(url, token string, body any) error {
reqBytes, err := json.Marshal(body)
if err != nil {
return err
@@ -98,7 +112,7 @@ func sendPostRequestWithToken(url, token string, body interface{}) error {
return err
}
- var respData map[string]interface{}
+ var respData map[string]any
if err := json.Unmarshal(respBody, &respData); err != nil {
return err
}
diff --git a/test/e2e/e2e.go b/test/e2e/e2e.go
index d1d6c5509..a3d3b1bcf 100644
--- a/test/e2e/e2e.go
+++ b/test/e2e/e2e.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package e2e
import (
diff --git a/test/e2e/e2e_test.go b/test/e2e/e2e_test.go
index 8fe810789..a6496679c 100644
--- a/test/e2e/e2e_test.go
+++ b/test/e2e/e2e_test.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package e2e
import (
diff --git a/test/e2e/framework/config/config.go b/test/e2e/framework/config/config.go
index ed3c6a258..14074fec1 100644
--- a/test/e2e/framework/config/config.go
+++ b/test/e2e/framework/config/config.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package config
import (
diff --git a/test/e2e/framework/config/config_test.go b/test/e2e/framework/config/config_test.go
index c411df31e..b7259bf37 100644
--- a/test/e2e/framework/config/config_test.go
+++ b/test/e2e/framework/config/config_test.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package config
import (
diff --git a/test/e2e/framework/ginkgowrapper/ginkgowrapper.go b/test/e2e/framework/ginkgowrapper/ginkgowrapper.go
index 16779440b..814d393bc 100644
--- a/test/e2e/framework/ginkgowrapper/ginkgowrapper.go
+++ b/test/e2e/framework/ginkgowrapper/ginkgowrapper.go
@@ -1 +1,15 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package ginkgowrapper
diff --git a/test/e2e/framework/ginkgowrapper/ginkgowrapper_test.go b/test/e2e/framework/ginkgowrapper/ginkgowrapper_test.go
index 16779440b..814d393bc 100644
--- a/test/e2e/framework/ginkgowrapper/ginkgowrapper_test.go
+++ b/test/e2e/framework/ginkgowrapper/ginkgowrapper_test.go
@@ -1 +1,15 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package ginkgowrapper
diff --git a/test/e2e/framework/helpers/chat/chat.go b/test/e2e/framework/helpers/chat/chat.go
index 4fca28f2a..a4ead528b 100644
--- a/test/e2e/framework/helpers/chat/chat.go
+++ b/test/e2e/framework/helpers/chat/chat.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package main
import (
@@ -10,7 +24,7 @@ import (
)
var (
- // The default template version
+ // The default template version.
defaultTemplateVersion = "v1.3.0"
)
@@ -84,7 +98,7 @@ func main() {
select {}
}
-// getLatestVersion fetches the latest version number from a given URL
+// getLatestVersion fetches the latest version number from a given URL.
func getLatestVersion(url string) (string, error) {
resp, err := http.Get(url)
if err != nil {
@@ -102,7 +116,7 @@ func getLatestVersion(url string) (string, error) {
return latestVersion, nil
}
-// downloadAndExtract downloads a file from a URL and extracts it to a destination directory
+// downloadAndExtract downloads a file from a URL and extracts it to a destination directory.
func downloadAndExtract(url, destDir string) error {
resp, err := http.Get(url)
if err != nil {
@@ -141,7 +155,7 @@ func downloadAndExtract(url, destDir string) error {
return cmd.Run()
}
-// startProcess starts a process and prints any errors encountered
+// startProcess starts a process and prints any errors encountered.
func startProcess(cmdPath string) {
cmd := exec.Command(cmdPath)
cmd.Stdout = os.Stdout
diff --git a/test/jwt/main.go b/test/jwt/main.go
index a669df9d6..0ef845237 100644
--- a/test/jwt/main.go
+++ b/test/jwt/main.go
@@ -25,7 +25,7 @@ func main() {
// Verify the token
claims := &jwt.MapClaims{}
- parsedT, err := jwt.ParseWithClaims(rawJWT, claims, func(token *jwt.Token) (interface{}, error) {
+ parsedT, err := jwt.ParseWithClaims(rawJWT, claims, func(token *jwt.Token) (any, error) {
// Validate the alg is HMAC signature
if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
diff --git a/test/typecheck/README.md b/test/typecheck/README.md
index 6ba462ec9..e5b76d4c6 100644
--- a/test/typecheck/README.md
+++ b/test/typecheck/README.md
@@ -1,27 +1,52 @@
-# OpenIM Typecheck
+# OpenIM Typecheck: Cross-Platform Source Code Type Checking for Go
-OpenIM Typecheck 为所有 Go 构建平台进行跨平台源代码类型检查。
+## Introduction
-## 优点
+OpenIM Typecheck is a robust tool designed for cross-platform source code type checking across all Go build platforms. This utility leverages Go’s built-in parsing and type-check libraries (`go/parser` and `go/types`) to deliver efficient and reliable code analysis.
-- **速度**:OpenIM 完整编译大约需要 3 分钟,而使用 Typecheck 只需数秒。
-- **资源消耗**:与需要 >40GB 的 RAM 不同,Typecheck 只需 <8GB 的 RAM。
+## Advantages
-## 实现
+- **Speed**: A complete compilation with OpenIM can take approximately 3 minutes. In contrast, OpenIM Typecheck achieves this in mere seconds, significantly enhancing productivity.
+- **Resource Efficiency**: Unlike the typical requirement of over 40GB of RAM for standard processes, Typecheck operates effectively with less than 8GB of RAM. This reduction in resource consumption makes it highly suitable for a variety of systems, reducing overheads and facilitating smoother operations.
-OpenIM Typecheck 使用 Go 内置的解析和类型检查库 (`go/parser` 和 `go/types`)。然而,这些库并不是 go 编译器所使用的。偶尔会出现不匹配的情况,但总的来说,它们是相当接近的。
+## Implementation
-## 错误处理
+OpenIM Typecheck employs Go's native parsing and type-checking libraries (`go/parser` and `go/types`). However, it's important to note that these libraries aren't identical to those used by the Go compiler. While occasional mismatches may occur, these libraries generally provide close approximations to the compiler's functionality, offering a reliable basis for type checking.
-如果错误不会阻止构建,可以忽略。
+## Error Handling
-**`go/types` 报告的错误,但 `go build` 不会**:
-- **真正的错误**(根据规范):
- - 应尽量修复。如果无法修复或正在进行中(例如,已被外部引用的代码),则可以忽略。
- - 例如:闭包中的未使用变量
-- **不真实的错误**:
- - 应忽略并在适当的情况下向上游报告。
- - 例如:staging 和 generated 类型之间的类型检查不匹配
+Typecheck takes a pragmatic approach to error handling, prioritizing build continuity.
-**`go build` 报告的错误,但我们不会**:
-- CGo 错误,包括语法和链接器错误。
+**Errors reported by `go/types` but not by `go build`**:
+- **Actual Errors** (as per the specification):
+ - These should ideally be rectified. If rectification is not feasible, such as in cases of ongoing work or external dependencies in the code, these errors can be overlooked.
+ - Example: Unused variables within a closure.
+- **False Positives**:
+ - These errors should be ignored and, where appropriate, reported upstream for resolution.
+ - Example: Type mismatches between staging and generated types.
+
+**Errors reported by `go build` but not by us**:
+- CGo-related errors, including both syntax and linker issues, are outside our scope.
+
+## Usage
+
+### Locally
+
+To run Typecheck locally, simply use the following command:
+
+```bash
+make verify
+```
+
+### Continuous Integration (CI)
+
+In CI environments, Typecheck can be integrated into the workflow as follows:
+
+```yaml
+- name: Typecheck
+ run: make verify
+```
+
+This streamlined process facilitates efficient error detection and resolution, ensuring a robust and reliable build pipeline.
+
+To learn more about typecheck, see this [blog post](https://nsddd.top/posts/concurrent-type-checking-and-cross-platform-development-in-go/).
\ No newline at end of file
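To make the Implementation and Error Handling sections of the README above concrete, here is a minimal, hypothetical sketch of type-checking a source snippet with Go's built-in `go/parser` and `go/types` packages. It is illustrative only: the real `test/typecheck/typecheck.go` tool in this patch analyzes whole packages through `golang.org/x/tools/go/packages` across the `crossPlatforms` list, and the file name, package path, and snippet below are invented for the example.

```go
// Minimal sketch: parse one snippet and type-check it, collecting every error
// instead of stopping at the first. Illustrative only; not the OpenIM tool itself.
package main

import (
	"fmt"
	"go/ast"
	"go/importer"
	"go/parser"
	"go/token"
	"go/types"
)

// A deliberately broken snippet: add expects two ints but receives a string.
const src = `package demo

func add(a, b int) int { return a + b }

var result = add(1, "two")
`

func main() {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "demo.go", src, parser.AllErrors)
	if err != nil {
		fmt.Println("parse error:", err)
		return
	}

	conf := types.Config{
		Importer: importer.Default(),
		// With Error set, Check reports every error instead of aborting on the first.
		Error: func(err error) { fmt.Println("type error:", err) },
	}
	conf.Check("demo", fset, []*ast.File{file}, nil)
}
```

Running the sketch with `go run` should print the type error for the bad `add` call; the real tool applies the same kind of check per platform and deduplicates the resulting `packages.Error` values.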
diff --git a/test/typecheck/typecheck.go b/test/typecheck/typecheck.go
index 0fc33597b..975ce988d 100644
--- a/test/typecheck/typecheck.go
+++ b/test/typecheck/typecheck.go
@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-// do a fast type check of kubernetes code, for all platforms.
+// do a fast type check of openim code, for all platforms.
package main
import (
@@ -47,7 +47,7 @@ var (
crossPlatforms = []string{
"linux/amd64", "windows/386",
"darwin/amd64", "darwin/arm64",
- "linux/arm", "linux/386",
+ "linux/386", "linux/arm",
"windows/amd64", "linux/arm64",
"linux/ppc64le", "linux/s390x",
"windows/arm64",
@@ -59,19 +59,18 @@ var (
// paths as if it were inside of vendor/. It fails typechecking
// inside of staging/, but works when typechecked as part of vendor/.
"staging",
+ "components",
+ "logs",
// OS-specific vendor code tends to be imported by OS-specific
// packages. We recursively typecheck imported vendored packages for
// each OS, but don't typecheck everything for every OS.
"vendor",
+ "test",
"_output",
- // This is a weird one. /testdata/ is *mostly* ignored by Go,
- // and this translates to kubernetes/vendor not working.
- // edit/record.go doesn't compile without gopkg.in/yaml.v2
- // in $GOSRC/$GOROOT (both typecheck and the shell script).
- "pkg/kubectl/cmd/testdata/edit",
+ "*/mw/rpc_server_interceptor.go",
// Tools we use for maintaining the code base but not necessarily
// ship as part of the release
- "hack/tools",
+ "sopenim::golang::setup_env:tools/yamlfmt/yamlfmt.go:tools",
}
)
@@ -239,7 +238,7 @@ func dedup(errors []packages.Error) []string {
var outMu sync.Mutex
-func serialFprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
+func serialFprintf(w io.Writer, format string, a ...any) (n int, err error) {
outMu.Lock()
defer outMu.Unlock()
return fmt.Fprintf(w, format, a...)
diff --git a/test/wrktest.sh b/test/wrktest.sh
index 01617676a..10a41121f 100755
--- a/test/wrktest.sh
+++ b/test/wrktest.sh
@@ -34,7 +34,7 @@ openim_root="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd -P)"
wrkdir="${openim_root}/_output/wrk"
jobname="openim-api"
duration="300s"
-threads=$((3 * `grep -c processor /proc/cpuinfo`))
+threads=$((3 * $(grep -c processor /proc/cpuinfo)))
source "${openim_root}/scripts/lib/color.sh"
@@ -45,7 +45,7 @@ openim::wrk::setup() {
cmd="wrk -t${threads} -d${duration} -T30s --latency"
}
-# Print usage infomation
+# Print usage
openim::wrk::usage() {
cat << EOF
@@ -122,7 +122,7 @@ if (s ~ "s") {
# Remove existing data file
function openim::wrk::prepare() {
- rm -f ${wrkdir}/${datfile}
+ rm -f "${wrkdir}"/"${datfile}"
}
# Plot according to gunplot data file
@@ -216,7 +216,7 @@ openim::wrk::start_performance_test() {
do
wrkcmd="${cmd} -c ${c} $1"
echo "Running wrk command: ${wrkcmd}"
- result=`eval ${wrkcmd}`
+ result=$(eval "${wrkcmd}")
openim::wrk::convert_plot_data "${result}"
done
@@ -241,9 +241,10 @@ while getopts "hd:n:" opt;do
esac
done
-shift $(($OPTIND-1))
+shift $((OPTIND-1))
+
+mkdir -p "${wrkdir}"
-mkdir -p ${wrkdir}
case $1 in
"diff")
if [ "$#" -lt 3 ];then
@@ -255,7 +256,7 @@ case $1 in
t2=$(basename $3|sed 's/.dat//g') # 对比图中粉色线条名称
join $2 $3 > /tmp/plot_diff.dat
- openim::wrk::plot_diff `basename $2` `basename $3`
+ openim::wrk::plot_diff "$(basename "$2")" "$(basename "$3")"
exit 0
;;
*)
diff --git a/tools/changelog/changelog.go b/tools/changelog/changelog.go
index 17a9e5404..ff9a7eab9 100644
--- a/tools/changelog/changelog.go
+++ b/tools/changelog/changelog.go
@@ -61,7 +61,7 @@ var (
{"template", "template"},
{"etcd", "server"},
{"pod", "node"},
- {"hack/", "hack"},
+ {"scripts/", "hack"},
{"e2e", "test"},
{"integration", "test"},
{"cluster", "cluster"},
diff --git a/tools/component/component.go b/tools/component/component.go
index 4e44bb7ba..6b879d7f8 100644
--- a/tools/component/component.go
+++ b/tools/component/component.go
@@ -15,49 +15,36 @@
package main
import (
- "context"
- "database/sql"
+ "errors"
"flag"
"fmt"
- "net"
- "net/url"
"os"
"strings"
"time"
- "github.com/minio/minio-go/v7"
- "github.com/redis/go-redis/v9"
- "gopkg.in/yaml.v3"
-
"github.com/IBM/sarama"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/cache"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/unrelation"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister/zookeeper"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/kafka"
+
+ "github.com/OpenIMSDK/tools/component"
"github.com/OpenIMSDK/tools/errs"
- "github.com/OpenIMSDK/tools/utils"
- "github.com/go-zookeeper/zk"
- "go.mongodb.org/mongo-driver/mongo"
- "go.mongodb.org/mongo-driver/mongo/options"
- "gorm.io/driver/mysql"
- "gorm.io/gorm"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
- "github.com/openimsdk/open-im-server/v3/pkg/common/kafka"
- "github.com/minio/minio-go/v7/pkg/credentials"
+ "gopkg.in/yaml.v3"
)
const (
// defaultCfgPath is the default path of the configuration file.
- defaultCfgPath = "../../../../../config/config.yaml"
- minioHealthCheckDuration = 1
- maxRetry = 100
- componentStartErrCode = 6000
- configErrCode = 6001
+ defaultCfgPath = "../../../../../config/config.yaml"
+ maxRetry = 300
)
var (
cfgPath = flag.String("c", defaultCfgPath, "Path to the configuration file")
-
- ErrComponentStart = errs.NewCodeError(componentStartErrCode, "ComponentStartErr")
- ErrConfig = errs.NewCodeError(configErrCode, "Config file is incorrect")
)
func initCfg() error {
@@ -72,6 +59,7 @@ func initCfg() error {
type checkFunc struct {
name string
function func() error
+ flag bool
}
func main() {
@@ -83,250 +71,184 @@ func main() {
return
}
+ configGetEnv()
+
checks := []checkFunc{
- {name: "Mysql", function: checkMysql},
+ //{name: "Mysql", function: checkMysql},
{name: "Mongo", function: checkMongo},
- {name: "Minio", function: checkMinio},
{name: "Redis", function: checkRedis},
+ {name: "Minio", function: checkMinio},
{name: "Zookeeper", function: checkZookeeper},
{name: "Kafka", function: checkKafka},
}
for i := 0; i < maxRetry; i++ {
if i != 0 {
- time.Sleep(3 * time.Second)
+ time.Sleep(1 * time.Second)
}
fmt.Printf("Checking components Round %v...\n", i+1)
+ var err error
allSuccess := true
- for _, check := range checks {
- err := check.function()
- if err != nil {
- errorPrint(fmt.Sprintf("Starting %s failed: %v", check.name, err))
- allSuccess = false
- break
- } else {
- successPrint(fmt.Sprintf("%s starts successfully", check.name))
+ for index, check := range checks {
+ if !check.flag {
+ err = check.function()
+ if err != nil {
+ component.ErrorPrint(fmt.Sprintf("Starting %s failed:%v.", check.name, err))
+ allSuccess = false
+
+ } else {
+ checks[index].flag = true
+ component.SuccessPrint(fmt.Sprintf("%s connected successfully", check.name))
+ }
}
}
if allSuccess {
- successPrint("All components started successfully!")
-
+ component.SuccessPrint("All components started successfully!")
return
}
}
- os.Exit(1)
}
-func exactIP(urll string) string {
- u, _ := url.Parse(urll)
- host, _, err := net.SplitHostPort(u.Host)
- if err != nil {
- host = u.Host
- }
- if strings.HasSuffix(host, ":") {
- host = host[0 : len(host)-1]
- }
+// checkMongo checks the MongoDB connection without retries
+func checkMongo() error {
+ _, err := unrelation.NewMongo()
+ return err
+}
- return host
+// checkRedis checks the Redis connection
+func checkRedis() error {
+ _, err := cache.NewRedis()
+ return err
}
-func checkMysql() error {
- var sqlDB *sql.DB
- defer func() {
- if sqlDB != nil {
- sqlDB.Close()
- }
- }()
- dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?charset=utf8mb4&parseTime=true&loc=Local",
- config.Config.Mysql.Username, config.Config.Mysql.Password, config.Config.Mysql.Address[0], "mysql")
- db, err := gorm.Open(mysql.Open(dsn), nil)
- if err != nil {
- return errs.Wrap(err)
- } else {
- sqlDB, err = db.DB()
- err = sqlDB.Ping()
- if err != nil {
- return errs.Wrap(err)
- }
+// checkMinio checks the MinIO connection
+func checkMinio() error {
+
+ // Check if MinIO is enabled
+ if config.Config.Object.Enable != "minio" {
+ return errs.Wrap(errors.New("minio.Enable is empty"))
}
+ minio := &component.Minio{
+ ApiURL: config.Config.Object.ApiURL,
+ Endpoint: config.Config.Object.Minio.Endpoint,
+ AccessKeyID: config.Config.Object.Minio.AccessKeyID,
+ SecretAccessKey: config.Config.Object.Minio.SecretAccessKey,
+ SignEndpoint: config.Config.Object.Minio.SignEndpoint,
+ UseSSL: getEnv("MINIO_USE_SSL", "false"),
+ }
+ err := component.CheckMinio(minio)
+ return err
+}
- return nil
+// checkZookeeper checks the Zookeeper connection
+func checkZookeeper() error {
+ _, err := zookeeper.NewZookeeperDiscoveryRegister()
+ return err
}
-func checkMongo() error {
- var client *mongo.Client
- uri := "mongodb://sample.host:27017/?maxPoolSize=20&w=majority"
- defer func() {
- if client != nil {
- client.Disconnect(context.TODO())
- }
- }()
- if config.Config.Mongo.Uri != "" {
- uri = config.Config.Mongo.Uri
- } else {
- mongodbHosts := ""
- for i, v := range config.Config.Mongo.Address {
- if i == len(config.Config.Mongo.Address)-1 {
- mongodbHosts += v
- } else {
- mongodbHosts += v + ","
- }
- }
- if config.Config.Mongo.Password != "" && config.Config.Mongo.Username != "" {
- uri = fmt.Sprintf("mongodb://%s:%s@%s/%s?maxPoolSize=%d&authSource=admin",
- config.Config.Mongo.Username, config.Config.Mongo.Password, mongodbHosts,
- config.Config.Mongo.Database, config.Config.Mongo.MaxPoolSize)
- } else {
- uri = fmt.Sprintf("mongodb://%s/%s/?maxPoolSize=%d&authSource=admin",
- mongodbHosts, config.Config.Mongo.Database,
- config.Config.Mongo.MaxPoolSize)
- }
+// checkKafka checks the Kafka connection
+func checkKafka() error {
+ // Prioritize environment variables
+ kafkaStu := &component.Kafka{
+ Username: config.Config.Kafka.Username,
+ Password: config.Config.Kafka.Password,
+ Addr: config.Config.Kafka.Addr,
}
- client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(uri))
+
+ kafkaClient, err := component.CheckKafka(kafkaStu)
if err != nil {
- return errs.Wrap(err)
- } else {
- err = client.Ping(context.TODO(), nil)
- if err != nil {
- return errs.Wrap(err)
- }
+ return err
}
+ defer kafkaClient.Close()
- return nil
-}
-
-func checkMinio() error {
- if config.Config.Object.Enable == "minio" {
- conf := config.Config.Object.Minio
- u, _ := url.Parse(conf.Endpoint)
- minioClient, err := minio.New(u.Host, &minio.Options{
- Creds: credentials.NewStaticV4(conf.AccessKeyID, conf.SecretAccessKey, ""),
- Secure: u.Scheme == "https",
- })
- if err != nil {
- return errs.Wrap(err)
- }
-
- cancel, err := minioClient.HealthCheck(time.Duration(minioHealthCheckDuration) * time.Second)
- defer func() {
- if cancel != nil {
- cancel()
- }
- }()
- if err != nil {
- return errs.Wrap(err)
- } else {
- if minioClient.IsOffline() {
- return ErrComponentStart.Wrap("Minio server is offline")
- }
- }
- if exactIP(config.Config.Object.ApiURL) == "127.0.0.1" || exactIP(config.Config.Object.Minio.SignEndpoint) == "127.0.0.1" {
- return ErrConfig.Wrap("apiURL or Minio SignEndpoint endpoint contain 127.0.0.1")
- }
+ // Verify if necessary topics exist
+ topics, err := kafkaClient.Topics()
+ if err != nil {
+ return errs.Wrap(err)
}
- return nil
-}
+ requiredTopics := []string{
+ config.Config.Kafka.MsgToMongo.Topic,
+ config.Config.Kafka.MsgToPush.Topic,
+ config.Config.Kafka.LatestMsgToRedis.Topic,
+ }
-func checkRedis() error {
- var redisClient redis.UniversalClient
- defer func() {
- if redisClient != nil {
- redisClient.Close()
+ for _, requiredTopic := range requiredTopics {
+ if !isTopicPresent(requiredTopic, topics) {
+ return errs.Wrap(err, fmt.Sprintf("Kafka doesn't contain topic: %v", requiredTopic))
}
- }()
- if len(config.Config.Redis.Address) > 1 {
- redisClient = redis.NewClusterClient(&redis.ClusterOptions{
- Addrs: config.Config.Redis.Address,
- Username: config.Config.Redis.Username,
- Password: config.Config.Redis.Password,
- })
- } else {
- redisClient = redis.NewClient(&redis.Options{
- Addr: config.Config.Redis.Address[0],
- Username: config.Config.Redis.Username,
- Password: config.Config.Redis.Password,
- })
}
- _, err := redisClient.Ping(context.Background()).Result()
+
+ _, err = kafka.NewMConsumerGroup(&kafka.MConsumerGroupConfig{
+ KafkaVersion: sarama.V2_0_0_0,
+ OffsetsInitial: sarama.OffsetNewest, IsReturnErr: false,
+ }, []string{config.Config.Kafka.LatestMsgToRedis.Topic},
+ config.Config.Kafka.Addr, config.Config.Kafka.ConsumerGroupID.MsgToRedis)
if err != nil {
- return errs.Wrap(err)
+ return err
}
- return nil
-}
+ _, err = kafka.NewMConsumerGroup(&kafka.MConsumerGroupConfig{
+ KafkaVersion: sarama.V2_0_0_0,
+ OffsetsInitial: sarama.OffsetNewest, IsReturnErr: false,
+ }, []string{config.Config.Kafka.MsgToPush.Topic},
+ config.Config.Kafka.Addr, config.Config.Kafka.ConsumerGroupID.MsgToMongo)
+ if err != nil {
+ return err
+ }
-func checkZookeeper() error {
- var c *zk.Conn
- defer func() {
- if c != nil {
- c.Close()
- }
- }()
- c, _, err := zk.Connect(config.Config.Zookeeper.ZkAddr, time.Second)
+ kafka.NewMConsumerGroup(&kafka.MConsumerGroupConfig{
+ KafkaVersion: sarama.V2_0_0_0,
+ OffsetsInitial: sarama.OffsetNewest, IsReturnErr: false,
+ }, []string{config.Config.Kafka.MsgToPush.Topic}, config.Config.Kafka.Addr,
+ config.Config.Kafka.ConsumerGroupID.MsgToPush)
if err != nil {
- return errs.Wrap(err)
- } else {
- if config.Config.Zookeeper.Username != "" && config.Config.Zookeeper.Password != "" {
- if err := c.AddAuth("digest", []byte(config.Config.Zookeeper.Username+":"+config.Config.Zookeeper.Password)); err != nil {
- return errs.Wrap(err)
- }
- }
- _, _, err = c.Get("/")
- if err != nil {
- return errs.Wrap(err)
- }
+ return err
}
return nil
}
-func checkKafka() error {
- var kafkaClient sarama.Client
- defer func() {
- if kafkaClient != nil {
- kafkaClient.Close()
- }
- }()
- cfg := sarama.NewConfig()
- if config.Config.Kafka.Username != "" && config.Config.Kafka.Password != "" {
- cfg.Net.SASL.Enable = true
- cfg.Net.SASL.User = config.Config.Kafka.Username
- cfg.Net.SASL.Password = config.Config.Kafka.Password
- }
- kafka.SetupTLSConfig(cfg)
- kafkaClient, err := sarama.NewClient(config.Config.Kafka.Addr, cfg)
- if err != nil {
- return errs.Wrap(err)
- } else {
- topics, err := kafkaClient.Topics()
- if err != nil {
- return err
- }
- if !utils.IsContain(config.Config.Kafka.MsgToMongo.Topic, topics) {
- return ErrComponentStart.Wrap(fmt.Sprintf("kafka doesn't contain topic:%v", config.Config.Kafka.MsgToMongo.Topic))
- }
- if !utils.IsContain(config.Config.Kafka.MsgToPush.Topic, topics) {
- return ErrComponentStart.Wrap(fmt.Sprintf("kafka doesn't contain topic:%v", config.Config.Kafka.MsgToPush.Topic))
- }
- if !utils.IsContain(config.Config.Kafka.LatestMsgToRedis.Topic, topics) {
- return ErrComponentStart.Wrap(fmt.Sprintf("kafka doesn't contain topic:%v", config.Config.Kafka.LatestMsgToRedis.Topic))
+// isTopicPresent checks if a topic is present in the list of topics
+func isTopicPresent(topic string, topics []string) bool {
+ for _, t := range topics {
+ if t == topic {
+ return true
}
}
-
- return nil
+ return false
}
-func errorPrint(s string) {
- fmt.Printf("\x1b[%dm%v\x1b[0m\n", 31, s)
+func configGetEnv() {
+ config.Config.Object.Minio.AccessKeyID = getEnv("MINIO_ACCESS_KEY_ID", config.Config.Object.Minio.AccessKeyID)
+ config.Config.Object.Minio.SecretAccessKey = getEnv("MINIO_SECRET_ACCESS_KEY", config.Config.Object.Minio.SecretAccessKey)
+ config.Config.Mongo.Uri = getEnv("MONGO_URI", config.Config.Mongo.Uri)
+ config.Config.Mongo.Username = getEnv("MONGO_OPENIM_USERNAME", config.Config.Mongo.Username)
+ config.Config.Mongo.Password = getEnv("MONGO_OPENIM_PASSWORD", config.Config.Mongo.Password)
+ config.Config.Kafka.Username = getEnv("KAFKA_USERNAME", config.Config.Kafka.Username)
+ config.Config.Kafka.Password = getEnv("KAFKA_PASSWORD", config.Config.Kafka.Password)
+ config.Config.Kafka.Addr = strings.Split(getEnv("KAFKA_ADDRESS", strings.Join(config.Config.Kafka.Addr, ",")), ",")
+ config.Config.Object.Minio.Endpoint = getMinioAddr("MINIO_ENDPOINT", "MINIO_ADDRESS", "MINIO_PORT", config.Config.Object.Minio.Endpoint)
}
-func successPrint(s string) {
- fmt.Printf("\x1b[%dm%v\x1b[0m\n", 32, s)
+func getMinioAddr(key1, key2, key3, fallback string) string {
+ // Prioritize environment variables
+ endpoint := getEnv(key1, fallback)
+ address, addressExist := os.LookupEnv(key2)
+ port, portExist := os.LookupEnv(key3)
+ if portExist && addressExist {
+ endpoint = "http://" + address + ":" + port
+ return endpoint
+ }
+ return endpoint
}
-func warningPrint(s string) {
- fmt.Printf("\x1b[%dmWarning: But %v\x1b[0m\n", 33, s)
+// Helper function to get environment variable or default value
+func getEnv(key, fallback string) string {
+ if value, exists := os.LookupEnv(key); exists {
+ return value
+ }
+ return fallback
}
diff --git a/tools/component/component_test.go b/tools/component/component_test.go
index afa51ef2c..4488c029e 100644
--- a/tools/component/component_test.go
+++ b/tools/component/component_test.go
@@ -15,24 +15,16 @@
package main
import (
+ "context"
+ "strconv"
"testing"
+ "time"
- "github.com/stretchr/testify/assert"
+ "github.com/redis/go-redis/v9"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
)
-func TestCheckMysql(t *testing.T) {
- err := mockInitCfg()
- assert.NoError(t, err, "Initialization should not produce errors")
-
- err = checkMysql()
- if err != nil {
- // You might expect an error if MySQL isn't running locally with the mock credentials.
- t.Logf("Expected error due to mock configuration: %v", err)
- }
-}
-
// Mock for initCfg for testing purpose
func mockInitCfg() error {
config.Config.Mysql.Username = "root"
@@ -40,3 +32,43 @@ func mockInitCfg() error {
config.Config.Mysql.Address = []string{"127.0.0.1:13306"}
return nil
}
+
+func TestRedis(t *testing.T) {
+ config.Config.Redis.Address = []string{
+ "172.16.8.142:7000",
+ //"172.16.8.142:7000", "172.16.8.142:7001", "172.16.8.142:7002", "172.16.8.142:7003", "172.16.8.142:7004", "172.16.8.142:7005",
+ }
+
+ var redisClient redis.UniversalClient
+ defer func() {
+ if redisClient != nil {
+ redisClient.Close()
+ }
+ }()
+ if len(config.Config.Redis.Address) > 1 {
+ redisClient = redis.NewClusterClient(&redis.ClusterOptions{
+ Addrs: config.Config.Redis.Address,
+ Username: config.Config.Redis.Username,
+ Password: config.Config.Redis.Password,
+ })
+ } else {
+ redisClient = redis.NewClient(&redis.Options{
+ Addr: config.Config.Redis.Address[0],
+ Username: config.Config.Redis.Username,
+ Password: config.Config.Redis.Password,
+ })
+ }
+ _, err := redisClient.Ping(context.Background()).Result()
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ for i := 0; i < 1000000; i++ {
+ val, err := redisClient.Set(context.Background(), "b_"+strconv.Itoa(i), "test", time.Second*10).Result()
+ t.Log("index", i, "resp", val, "err", err)
+ if err != nil {
+ return
+ }
+ }
+
+}
diff --git a/tools/data-conversion/README.md b/tools/data-conversion/README.md
index 8d1bf8629..71387af7f 100644
--- a/tools/data-conversion/README.md
+++ b/tools/data-conversion/README.md
@@ -13,8 +13,8 @@
### 2. Migrate OpenIM MySQL data
-+ Location: `open-im-server/v3/tools/data-conversion/openim/mysql.go`
-+ Configure the database information in the `mysql.go` file.
++ Location: `open-im-server/tools/data-conversion/openim/cmd/conversion-mysql.go`
++ Configure the database information in the `conversion-mysql.go` file.
+ Manually create the V3 database and make sure its character set is `utf8mb4`.
```bash
@@ -31,7 +31,7 @@ var (
usernameV3 = "root"
passwordV3 = "openIM123"
addrV3 = "127.0.0.1:13306"
- databaseV3 = "openIM_v3"
+ databaseV3 = "openim_v3"
)
```
@@ -47,7 +47,7 @@ make build BINS="conversion-mysql"
### 3. Convert chat messages (optional)
+ Only messages stored in Kafka can be converted.
-+ Location: `open-im-server/v3/tools/data-conversion/openim/msg.go`
++ Location: `open-im-server/tools/data-conversion/openim/conversion-msg/conversion-msg.go`
+ Configure the message and server information in the `msg.go` file.
```bash
@@ -69,7 +69,7 @@ make build BINS="conversion-msg"
### 4. Convert business server data
+ Only messages stored in Kafka can be converted.
-+ Location: `open-im-server/v3/tools/data-conversion/chat/chat.go`
++ Location: `open-im-server/tools/data-conversion/chat/cmd/conversion-chat/chat.go`
+ The V3 database must be created manually, with the character set `utf8mb4`.
+ Configure the database information in the `main.go` file.
diff --git a/tools/data-conversion/chat/cmd/conversion-chat/chat.go b/tools/data-conversion/chat/cmd/conversion-chat/chat.go
index 77c62ee1f..0fc49c782 100644
--- a/tools/data-conversion/chat/cmd/conversion-chat/chat.go
+++ b/tools/data-conversion/chat/cmd/conversion-chat/chat.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package main
import (
diff --git a/tools/data-conversion/chat/conversion/conversion.go b/tools/data-conversion/chat/conversion/conversion.go
index 6032a4569..084fff59c 100644
--- a/tools/data-conversion/chat/conversion/conversion.go
+++ b/tools/data-conversion/chat/conversion/conversion.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package conversion
import (
diff --git a/tools/data-conversion/chat/v2/admin.go b/tools/data-conversion/chat/v2/admin.go
index 7bc1b6c1b..fec11ff5b 100644
--- a/tools/data-conversion/chat/v2/admin.go
+++ b/tools/data-conversion/chat/v2/admin.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package v2
import (
diff --git a/tools/data-conversion/chat/v2/chat.go b/tools/data-conversion/chat/v2/chat.go
index 6690e110b..15cc4797f 100644
--- a/tools/data-conversion/chat/v2/chat.go
+++ b/tools/data-conversion/chat/v2/chat.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package v2
import (
diff --git a/tools/data-conversion/go.mod b/tools/data-conversion/go.mod
index b0d7aea13..963755923 100644
--- a/tools/data-conversion/go.mod
+++ b/tools/data-conversion/go.mod
@@ -3,16 +3,16 @@ module github.com/openimsdk/open-im-server/v3/tools/data-conversion
go 1.19
require (
- github.com/IBM/sarama v1.41.2
- github.com/OpenIMSDK/protocol v0.0.23
- github.com/OpenIMSDK/tools v0.0.14
+ github.com/IBM/sarama v1.42.1
+ github.com/OpenIMSDK/protocol v0.0.33
+ github.com/OpenIMSDK/tools v0.0.20
github.com/golang/protobuf v1.5.3
- github.com/openimsdk/open-im-server/v3 v3.3.2
- golang.org/x/net v0.17.0
- google.golang.org/grpc v1.57.0
+ github.com/openimsdk/open-im-server/v3 v3.4.0
+ golang.org/x/net v0.19.0
+ google.golang.org/grpc v1.60.0
google.golang.org/protobuf v1.31.0
- gorm.io/driver/mysql v1.5.1
- gorm.io/gorm v1.25.4
+ gorm.io/driver/mysql v1.5.2
+ gorm.io/gorm v1.25.5
)
require (
@@ -28,7 +28,7 @@ require (
github.com/gin-gonic/gin v1.9.1 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
- github.com/go-playground/validator/v10 v10.15.3 // indirect
+ github.com/go-playground/validator/v10 v10.15.5 // indirect
github.com/go-sql-driver/mysql v1.7.1 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/golang/snappy v0.0.4 // indirect
@@ -63,10 +63,10 @@ require (
go.uber.org/multierr v1.6.0 // indirect
go.uber.org/zap v1.24.0 // indirect
golang.org/x/arch v0.3.0 // indirect
- golang.org/x/crypto v0.14.0 // indirect
- golang.org/x/image v0.12.0 // indirect
- golang.org/x/sys v0.13.0 // indirect
- golang.org/x/text v0.13.0 // indirect
- google.golang.org/genproto/googleapis/rpc v0.0.0-20230807174057-1744710a1577 // indirect
+ golang.org/x/crypto v0.17.0 // indirect
+ golang.org/x/image v0.13.0 // indirect
+ golang.org/x/sys v0.15.0 // indirect
+ golang.org/x/text v0.14.0 // indirect
+ google.golang.org/genproto/googleapis/rpc v0.0.0-20231012201019-e917dd12ba7a // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)
diff --git a/tools/data-conversion/go.sum b/tools/data-conversion/go.sum
index 9223f6e36..d6dc23742 100644
--- a/tools/data-conversion/go.sum
+++ b/tools/data-conversion/go.sum
@@ -1,9 +1,9 @@
-github.com/IBM/sarama v1.41.2 h1:ZDBZfGPHAD4uuAtSv4U22fRZBgst0eEwGFzLj0fb85c=
-github.com/IBM/sarama v1.41.2/go.mod h1:xdpu7sd6OE1uxNdjYTSKUfY8FaKkJES9/+EyjSgiGQk=
-github.com/OpenIMSDK/protocol v0.0.23 h1:L545aRQez6Ro+AaJB1Z6Mz7ojnDtp41WqASxYveCkcE=
-github.com/OpenIMSDK/protocol v0.0.23/go.mod h1:F25dFrwrIx3lkNoiuf6FkCfxuwf8L4Z8UIsdTHP/r0Y=
-github.com/OpenIMSDK/tools v0.0.14 h1:WLof/+WxyPyRST+QkoTKubYCiV73uCLiL8pgnpH/yKQ=
-github.com/OpenIMSDK/tools v0.0.14/go.mod h1:eg+q4A34Qmu73xkY0mt37FHGMCMfC6CtmOnm0kFEGFI=
+github.com/IBM/sarama v1.42.1 h1:wugyWa15TDEHh2kvq2gAy1IHLjEjuYOYgXz/ruC/OSQ=
+github.com/IBM/sarama v1.42.1/go.mod h1:Xxho9HkHd4K/MDUo/T/sOqwtX/17D33++E9Wib6hUdQ=
+github.com/OpenIMSDK/protocol v0.0.33 h1:T07KWD0jt7IRlrYRujCa+eXmfgcSi8sRgLL8t2ZlHQA=
+github.com/OpenIMSDK/protocol v0.0.33/go.mod h1:F25dFrwrIx3lkNoiuf6FkCfxuwf8L4Z8UIsdTHP/r0Y=
+github.com/OpenIMSDK/tools v0.0.20 h1:zBTjQZRJ5lR1FIzP9mtWyAvh5dKsmJXQugi4p8X/97k=
+github.com/OpenIMSDK/tools v0.0.20/go.mod h1:eg+q4A34Qmu73xkY0mt37FHGMCMfC6CtmOnm0kFEGFI=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/bwmarrin/snowflake v0.3.0 h1:xm67bEhkKh6ij1790JB83OujPR5CzNe8QuQqAgISZN0=
github.com/bwmarrin/snowflake v0.3.0/go.mod h1:NdZxfVWX+oR6y2K0o6qAYv6gIOP9rjG0/E9WsDpxqwE=
@@ -34,8 +34,8 @@ github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/o
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
-github.com/go-playground/validator/v10 v10.15.3 h1:S+sSpunYjNPDuXkWbK+x+bA7iXiW296KG4dL3X7xUZo=
-github.com/go-playground/validator/v10 v10.15.3/go.mod h1:9iXMNT7sEkjXb0I+enO7QXmzG6QCsPWY4zveKFVRSyU=
+github.com/go-playground/validator/v10 v10.15.5 h1:LEBecTWb/1j5TNY1YYG2RcOUN3R7NLylN+x8TTueE24=
+github.com/go-playground/validator/v10 v10.15.5/go.mod h1:9iXMNT7sEkjXb0I+enO7QXmzG6QCsPWY4zveKFVRSyU=
github.com/go-sql-driver/mysql v1.7.0/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
github.com/go-sql-driver/mysql v1.7.1 h1:lUIinVbN1DY0xBg0eMOzmmtGoHwWBbvnWubQUrtU8EI=
github.com/go-sql-driver/mysql v1.7.1/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
@@ -47,7 +47,7 @@ github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiu
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
+github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/gorilla/securecookie v1.1.1/go.mod h1:ra0sb63/xPlUeL+yeDciTfxMRAA+MP+HVt/4epWDjd4=
github.com/gorilla/sessions v1.2.1/go.mod h1:dk2InVEVJ0sfLlnXv9EAgkf6ecYs/i80K/zI+bUmuGM=
@@ -103,8 +103,8 @@ github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9G
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646 h1:zYyBkD/k9seD2A7fsi6Oo2LfFZAehjjQMERAvZLEDnQ=
github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646/go.mod h1:jpp1/29i3P1S/RLdc7JQKbRpFeM1dOBd8T9ki5s+AY8=
-github.com/openimsdk/open-im-server/v3 v3.3.2 h1:uK6glaidrnWlYXFSwzOEq7fXS6jT1OyesUJENZJeptI=
-github.com/openimsdk/open-im-server/v3 v3.3.2/go.mod h1:rqKiCkjav5P7tQmyqaixnMJcayWlM4XtXmwG+cZNw78=
+github.com/openimsdk/open-im-server/v3 v3.4.0 h1:e7nslaWEHYc5xD1A3zHtnhbIWgfgtJSnPGHIqwjARaE=
+github.com/openimsdk/open-im-server/v3 v3.4.0/go.mod h1:HKqjLZSMjD7ec59VV694Yfqnj9SIVotzDSPWgAei2Tg=
github.com/pelletier/go-toml/v2 v2.0.8 h1:0ctb6s9mE31h0/lhu+J6OPmVeDxJn+kYnJc2jZR9tGQ=
github.com/pelletier/go-toml/v2 v2.0.8/go.mod h1:vuYfssBdrU2XDZ9bYydBu6t+6a6PYNcZljzZR9VXg+4=
github.com/pierrec/lz4/v4 v4.1.18 h1:xaKrnTkyoqfh1YItXl56+6KJNVYWlEEPuAQW9xsplYQ=
@@ -146,24 +146,22 @@ golang.org/x/arch v0.3.0/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
-golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc=
-golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
-golang.org/x/image v0.12.0 h1:w13vZbU4o5rKOFFR8y7M+c4A5jXDC0uXTdHYRP8X2DQ=
-golang.org/x/image v0.12.0/go.mod h1:Lu90jvHG7GfemOIcldsh9A2hS01ocl6oNO7ype5mEnk=
+golang.org/x/crypto v0.17.0 h1:r8bRNjWL3GshPW3gkd+RpvzWrZAwPS49OmTGZ/uhM4k=
+golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
+golang.org/x/image v0.13.0 h1:3cge/F/QTkNLauhf2QoE9zp+7sr+ZcL4HnoZmdwg9sg=
+golang.org/x/image v0.13.0/go.mod h1:6mmbMOeV28HuMTgA6OSRkdXKYw/t5W9Uwn2Yv1r3Yxk=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
-golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
-golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
-golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
+golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c=
+golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
+golang.org/x/sync v0.4.0 h1:zxkM55ReGkDlKSM+Fu41A+zmbZuaPVbGMzvvdUPznYQ=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -171,8 +169,8 @@ golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
-golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc=
+golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -180,18 +178,17 @@ golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
-golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
-golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
+golang.org/x/text v0.14.0 h1:ScX5w1eTa3QqT8oi6+ziP7dTV1S2+ALU0bI+0zXKWiQ=
+golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
-golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20230807174057-1744710a1577 h1:wukfNtZmZUurLN/atp2hiIeTKn7QJWIQdHzqmsOnAOk=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20230807174057-1744710a1577/go.mod h1:+Bk1OCOj40wS2hwAMA+aCW9ypzm63QTBBHp6lQ3p+9M=
-google.golang.org/grpc v1.57.0 h1:kfzNeI/klCGD2YPMUlaGNT3pxvYfga7smW3Vth8Zsiw=
-google.golang.org/grpc v1.57.0/go.mod h1:Sd+9RMTACXwmub0zcNY2c4arhtrbBYD1AUHI/dt16Mo=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20231012201019-e917dd12ba7a h1:a2MQQVoTo96JC9PMGtGBymLp7+/RzpFc2yX/9WfFg1c=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20231012201019-e917dd12ba7a/go.mod h1:4cYg8o5yUbm77w8ZX00LhMVNl/YVBFJRYWDc0uYWMs0=
+google.golang.org/grpc v1.60.0 h1:6FQAR0kM31P6MRdeluor2w2gPaS4SVNrD/DNTxrQ15k=
+google.golang.org/grpc v1.60.0/go.mod h1:OlCHIeLYqSSsLi6i49B5QGdzaMZK9+M7LXN2FKz4eGM=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
@@ -202,9 +199,9 @@ gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
-gorm.io/driver/mysql v1.5.1 h1:WUEH5VF9obL/lTtzjmML/5e6VfFR/788coz2uaVCAZw=
-gorm.io/driver/mysql v1.5.1/go.mod h1:Jo3Xu7mMhCyj8dlrb3WoCaRd1FhsVh+yMXb1jUInf5o=
-gorm.io/gorm v1.25.1/go.mod h1:L4uxeKpfBml98NYqVqwAdmV1a2nBtAec/cf3fpucW/k=
-gorm.io/gorm v1.25.4 h1:iyNd8fNAe8W9dvtlgeRI5zSVZPsq3OpcTu37cYcpCmw=
-gorm.io/gorm v1.25.4/go.mod h1:L4uxeKpfBml98NYqVqwAdmV1a2nBtAec/cf3fpucW/k=
+gorm.io/driver/mysql v1.5.2 h1:QC2HRskSE75wBuOxe0+iCkyJZ+RqpudsQtqkp+IMuXs=
+gorm.io/driver/mysql v1.5.2/go.mod h1:pQLhh1Ut/WUAySdTHwBpBv6+JKcj+ua4ZFx1QQTBzb8=
+gorm.io/gorm v1.25.2-0.20230530020048-26663ab9bf55/go.mod h1:L4uxeKpfBml98NYqVqwAdmV1a2nBtAec/cf3fpucW/k=
+gorm.io/gorm v1.25.5 h1:zR9lOiiYf09VNh5Q1gphfyia1JpiClIWG9hQaxB/mls=
+gorm.io/gorm v1.25.5/go.mod h1:hbnx/Oo0ChWMn1BIhpy1oYozzpM15i4YPuHDmfYtwg8=
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
diff --git a/tools/data-conversion/openim/cmd/conversion-msg/conversion-msg.go b/tools/data-conversion/openim/cmd/conversion-msg/conversion-msg.go
index 338fbf111..f2b9623a6 100644
--- a/tools/data-conversion/openim/cmd/conversion-msg/conversion-msg.go
+++ b/tools/data-conversion/openim/cmd/conversion-msg/conversion-msg.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package main
import (
diff --git a/tools/data-conversion/openim/cmd/conversion-mysql/conversion-mysql.go b/tools/data-conversion/openim/cmd/conversion-mysql/conversion-mysql.go
index 8992e12c4..8a951e16f 100644
--- a/tools/data-conversion/openim/cmd/conversion-mysql/conversion-mysql.go
+++ b/tools/data-conversion/openim/cmd/conversion-mysql/conversion-mysql.go
@@ -38,7 +38,7 @@ func main() {
usernameV3 = "root" // v3版本mysql用户名
passwordV3 = "openIM123" // v3版本mysql密码
addrV3 = "127.0.0.1:13306" // v3版本mysql地址
- databaseV3 = "openIM_v3" // v3版本mysql数据库名字
+ databaseV3 = "openim_v3" // v3版本mysql数据库名字
)
var concurrency = 1 // 并发数量
diff --git a/tools/data-conversion/openim/common/config.go b/tools/data-conversion/openim/common/config.go
index e2bd14a05..e993038d1 100644
--- a/tools/data-conversion/openim/common/config.go
+++ b/tools/data-conversion/openim/common/config.go
@@ -44,7 +44,7 @@ const (
UsernameV3 = "root"
PasswordV3 = "openIM123"
IpV3 = "43.134.63.160:13306"
- DatabaseV3 = "openIM_v3"
+ DatabaseV3 = "openim_v3"
)
// V3 chat.
diff --git a/tools/data-conversion/openim/mysql/cmd.go b/tools/data-conversion/openim/mysql/cmd.go
index 924b0a206..ab3857fba 100644
--- a/tools/data-conversion/openim/mysql/cmd.go
+++ b/tools/data-conversion/openim/mysql/cmd.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package mysql
import (
@@ -24,7 +38,7 @@ func Cmd() {
usernameV3 = "root"
passwordV3 = "openIM123"
addrV3 = "203.56.175.233:13306"
- databaseV3 = "openIM_v3"
+ databaseV3 = "openim_v3"
)
log.SetFlags(log.LstdFlags | log.Llongfile)
dsnV2 := fmt.Sprintf("%s:%s@tcp(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local", usernameV2, passwordV2, addrV2, databaseV2)
diff --git a/tools/data-conversion/openim/mysql/conversion/conversion.go b/tools/data-conversion/openim/mysql/conversion/conversion.go
index 298eefb50..f371654df 100644
--- a/tools/data-conversion/openim/mysql/conversion/conversion.go
+++ b/tools/data-conversion/openim/mysql/conversion/conversion.go
@@ -1,10 +1,24 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package conversion
import (
"github.com/OpenIMSDK/protocol/constant"
- v3 "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
v2 "github.com/openimsdk/open-im-server/v3/tools/data-conversion/openim/mysql/v2"
+ v3 "github.com/openimsdk/open-im-server/v3/tools/data-conversion/openim/mysql/v3"
"github.com/openimsdk/open-im-server/v3/tools/data-conversion/utils"
)
diff --git a/tools/data-conversion/openim/mysql/v2/model_struct.go b/tools/data-conversion/openim/mysql/v2/model_struct.go
index 9da33f2a5..f05b84977 100644
--- a/tools/data-conversion/openim/mysql/v2/model_struct.go
+++ b/tools/data-conversion/openim/mysql/v2/model_struct.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package db
import "time"
diff --git a/tools/data-conversion/openim/mysql/v3/black.go b/tools/data-conversion/openim/mysql/v3/black.go
new file mode 100644
index 000000000..59dd12122
--- /dev/null
+++ b/tools/data-conversion/openim/mysql/v3/black.go
@@ -0,0 +1,49 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+ "time"
+)
+
+const (
+ BlackModelTableName = "blacks"
+)
+
+type BlackModel struct {
+ OwnerUserID string `gorm:"column:owner_user_id;primary_key;size:64"`
+ BlockUserID string `gorm:"column:block_user_id;primary_key;size:64"`
+ CreateTime time.Time `gorm:"column:create_time"`
+ AddSource int32 `gorm:"column:add_source"`
+ OperatorUserID string `gorm:"column:operator_user_id;size:64"`
+ Ex string `gorm:"column:ex;size:1024"`
+}
+
+func (BlackModel) TableName() string {
+ return BlackModelTableName
+}
+
+type BlackModelInterface interface {
+ Create(ctx context.Context, blacks []*BlackModel) (err error)
+ Delete(ctx context.Context, blacks []*BlackModel) (err error)
+ UpdateByMap(ctx context.Context, ownerUserID, blockUserID string, args map[string]interface{}) (err error)
+ Update(ctx context.Context, blacks []*BlackModel) (err error)
+ Find(ctx context.Context, blacks []*BlackModel) (blackList []*BlackModel, err error)
+ Take(ctx context.Context, ownerUserID, blockUserID string) (black *BlackModel, err error)
+ FindOwnerBlacks(ctx context.Context, ownerUserID string, pageNumber, showNumber int32) (blacks []*BlackModel, total int64, err error)
+ FindOwnerBlackInfos(ctx context.Context, ownerUserID string, userIDs []string) (blacks []*BlackModel, err error)
+ FindBlackUserIDs(ctx context.Context, ownerUserID string) (blackUserIDs []string, err error)
+}
diff --git a/pkg/common/db/table/relation/chatlog.go b/tools/data-conversion/openim/mysql/v3/chatlog.go
similarity index 100%
rename from pkg/common/db/table/relation/chatlog.go
rename to tools/data-conversion/openim/mysql/v3/chatlog.go
diff --git a/tools/data-conversion/openim/mysql/v3/conversation.go b/tools/data-conversion/openim/mysql/v3/conversation.go
new file mode 100644
index 000000000..e9680873f
--- /dev/null
+++ b/tools/data-conversion/openim/mysql/v3/conversation.go
@@ -0,0 +1,73 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+ "time"
+)
+
+const (
+ conversationModelTableName = "conversations"
+)
+
+type ConversationModel struct {
+ OwnerUserID string `gorm:"column:owner_user_id;primary_key;type:char(128)" json:"OwnerUserID"`
+ ConversationID string `gorm:"column:conversation_id;primary_key;type:char(128)" json:"conversationID"`
+ ConversationType int32 `gorm:"column:conversation_type" json:"conversationType"`
+ UserID string `gorm:"column:user_id;type:char(64)" json:"userID"`
+ GroupID string `gorm:"column:group_id;type:char(128)" json:"groupID"`
+ RecvMsgOpt int32 `gorm:"column:recv_msg_opt" json:"recvMsgOpt"`
+ IsPinned bool `gorm:"column:is_pinned" json:"isPinned"`
+ IsPrivateChat bool `gorm:"column:is_private_chat" json:"isPrivateChat"`
+ BurnDuration int32 `gorm:"column:burn_duration;default:30" json:"burnDuration"`
+ GroupAtType int32 `gorm:"column:group_at_type" json:"groupAtType"`
+ AttachedInfo string `gorm:"column:attached_info;type:varchar(1024)" json:"attachedInfo"`
+ Ex string `gorm:"column:ex;type:varchar(1024)" json:"ex"`
+ MaxSeq int64 `gorm:"column:max_seq" json:"maxSeq"`
+ MinSeq int64 `gorm:"column:min_seq" json:"minSeq"`
+ CreateTime time.Time `gorm:"column:create_time;index:create_time;autoCreateTime"`
+ IsMsgDestruct bool `gorm:"column:is_msg_destruct;default:false"`
+ MsgDestructTime int64 `gorm:"column:msg_destruct_time;default:604800"`
+ LatestMsgDestructTime time.Time `gorm:"column:latest_msg_destruct_time;autoCreateTime"`
+}
+
+func (ConversationModel) TableName() string {
+ return conversationModelTableName
+}
+
+type ConversationModelInterface interface {
+ Create(ctx context.Context, conversations []*ConversationModel) (err error)
+ Delete(ctx context.Context, groupIDs []string) (err error)
+ UpdateByMap(ctx context.Context, userIDs []string, conversationID string, args map[string]interface{}) (rows int64, err error)
+ Update(ctx context.Context, conversation *ConversationModel) (err error)
+ Find(ctx context.Context, ownerUserID string, conversationIDs []string) (conversations []*ConversationModel, err error)
+ FindUserID(ctx context.Context, userIDs []string, conversationIDs []string) ([]string, error)
+ FindUserIDAllConversationID(ctx context.Context, userID string) ([]string, error)
+ Take(ctx context.Context, userID, conversationID string) (conversation *ConversationModel, err error)
+ FindConversationID(ctx context.Context, userID string, conversationIDs []string) (existConversationID []string, err error)
+ FindUserIDAllConversations(ctx context.Context, userID string) (conversations []*ConversationModel, err error)
+ FindRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) ([]string, error)
+ GetUserRecvMsgOpt(ctx context.Context, ownerUserID, conversationID string) (opt int, err error)
+ FindSuperGroupRecvMsgNotNotifyUserIDs(ctx context.Context, groupID string) ([]string, error)
+ GetAllConversationIDs(ctx context.Context) ([]string, error)
+ GetAllConversationIDsNumber(ctx context.Context) (int64, error)
+ PageConversationIDs(ctx context.Context, pageNumber, showNumber int32) (conversationIDs []string, err error)
+ GetUserAllHasReadSeqs(ctx context.Context, ownerUserID string) (hashReadSeqs map[string]int64, err error)
+ GetConversationsByConversationID(ctx context.Context, conversationIDs []string) ([]*ConversationModel, error)
+ GetConversationIDsNeedDestruct(ctx context.Context) ([]*ConversationModel, error)
+ GetConversationNotReceiveMessageUserIDs(ctx context.Context, conversationID string) ([]string, error)
+ NewTx(tx any) ConversationModelInterface
+}
diff --git a/pkg/common/db/table/relation/doc.go b/tools/data-conversion/openim/mysql/v3/doc.go
similarity index 100%
rename from pkg/common/db/table/relation/doc.go
rename to tools/data-conversion/openim/mysql/v3/doc.go
diff --git a/tools/data-conversion/openim/mysql/v3/friend.go b/tools/data-conversion/openim/mysql/v3/friend.go
new file mode 100644
index 000000000..58d8d1d34
--- /dev/null
+++ b/tools/data-conversion/openim/mysql/v3/friend.go
@@ -0,0 +1,78 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+ "time"
+)
+
+const (
+ FriendModelTableName = "friends"
+)
+
+type FriendModel struct {
+ OwnerUserID string `gorm:"column:owner_user_id;primary_key;size:64"`
+ FriendUserID string `gorm:"column:friend_user_id;primary_key;size:64"`
+ Remark string `gorm:"column:remark;size:255"`
+ CreateTime time.Time `gorm:"column:create_time;autoCreateTime"`
+ AddSource int32 `gorm:"column:add_source"`
+ OperatorUserID string `gorm:"column:operator_user_id;size:64"`
+ Ex string `gorm:"column:ex;size:1024"`
+}
+
+func (FriendModel) TableName() string {
+ return FriendModelTableName
+}
+
+type FriendModelInterface interface {
+	// Insert multiple records
+ Create(ctx context.Context, friends []*FriendModel) (err error)
+	// Delete the specified friends of ownerUserID
+ Delete(ctx context.Context, ownerUserID string, friendUserIDs []string) (err error)
+	// Update a single friend of ownerUserID; zero values are updated as well
+ UpdateByMap(ctx context.Context, ownerUserID string, friendUserID string, args map[string]interface{}) (err error)
+	// Update the non-zero fields of friend records
+ Update(ctx context.Context, friends []*FriendModel) (err error)
+	// Update the friend remark (zero values are supported)
+ UpdateRemark(ctx context.Context, ownerUserID, friendUserID, remark string) (err error)
+	// Get a single friend; an error is returned if not found
+ Take(ctx context.Context, ownerUserID, friendUserID string) (friend *FriendModel, err error)
+	// Find the friend relationship; if it is bidirectional, both records are returned
+ FindUserState(ctx context.Context, userID1, userID2 string) (friends []*FriendModel, err error)
+	// Get the owner's friends specified by friendUserIDs; nonexistent friendUserIDs do not cause an error
+ FindFriends(ctx context.Context, ownerUserID string, friendUserIDs []string) (friends []*FriendModel, err error)
+	// Get who has added friendUserID as a friend; nonexistent ownerUserIDs do not cause an error
+ FindReversalFriends(
+ ctx context.Context,
+ friendUserID string,
+ ownerUserIDs []string,
+ ) (friends []*FriendModel, err error)
+	// Get ownerUserID's friend list, with pagination
+ FindOwnerFriends(
+ ctx context.Context,
+ ownerUserID string,
+ pageNumber, showNumber int32,
+ ) (friends []*FriendModel, total int64, err error)
+	// Get who has added friendUserID as a friend, with pagination
+ FindInWhoseFriends(
+ ctx context.Context,
+ friendUserID string,
+ pageNumber, showNumber int32,
+ ) (friends []*FriendModel, total int64, err error)
+	// Get the list of friend user IDs
+ FindFriendUserIDs(ctx context.Context, ownerUserID string) (friendUserIDs []string, err error)
+ NewTx(tx any) FriendModelInterface
+}
diff --git a/tools/data-conversion/openim/mysql/v3/friend_request.go b/tools/data-conversion/openim/mysql/v3/friend_request.go
new file mode 100644
index 000000000..51ea0ef6e
--- /dev/null
+++ b/tools/data-conversion/openim/mysql/v3/friend_request.go
@@ -0,0 +1,66 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+ "time"
+)
+
+const FriendRequestModelTableName = "friend_requests"
+
+type FriendRequestModel struct {
+ FromUserID string `gorm:"column:from_user_id;primary_key;size:64"`
+ ToUserID string `gorm:"column:to_user_id;primary_key;size:64"`
+ HandleResult int32 `gorm:"column:handle_result"`
+ ReqMsg string `gorm:"column:req_msg;size:255"`
+ CreateTime time.Time `gorm:"column:create_time; autoCreateTime"`
+ HandlerUserID string `gorm:"column:handler_user_id;size:64"`
+ HandleMsg string `gorm:"column:handle_msg;size:255"`
+ HandleTime time.Time `gorm:"column:handle_time"`
+ Ex string `gorm:"column:ex;size:1024"`
+}
+
+func (FriendRequestModel) TableName() string {
+ return FriendRequestModelTableName
+}
+
+type FriendRequestModelInterface interface {
+	// Insert multiple records
+ Create(ctx context.Context, friendRequests []*FriendRequestModel) (err error)
+	// Delete a record
+ Delete(ctx context.Context, fromUserID, toUserID string) (err error)
+	// Update zero values
+ UpdateByMap(ctx context.Context, formUserID string, toUserID string, args map[string]interface{}) (err error)
+	// Update multiple records (non-zero values)
+ Update(ctx context.Context, friendRequest *FriendRequestModel) (err error)
+	// Get the friend request from fromUserID to toUserID; no error is returned if not found
+ Find(ctx context.Context, fromUserID, toUserID string) (friendRequest *FriendRequestModel, err error)
+ Take(ctx context.Context, fromUserID, toUserID string) (friendRequest *FriendRequestModel, err error)
+	// Get the friend requests received by toUserID
+ FindToUserID(
+ ctx context.Context,
+ toUserID string,
+ pageNumber, showNumber int32,
+ ) (friendRequests []*FriendRequestModel, total int64, err error)
+	// Get the friend requests sent by fromUserID
+ FindFromUserID(
+ ctx context.Context,
+ fromUserID string,
+ pageNumber, showNumber int32,
+ ) (friendRequests []*FriendRequestModel, total int64, err error)
+ FindBothFriendRequests(ctx context.Context, fromUserID, toUserID string) (friends []*FriendRequestModel, err error)
+ NewTx(tx any) FriendRequestModelInterface
+}
diff --git a/tools/data-conversion/openim/mysql/v3/group.go b/tools/data-conversion/openim/mysql/v3/group.go
new file mode 100644
index 000000000..6759e0d35
--- /dev/null
+++ b/tools/data-conversion/openim/mysql/v3/group.go
@@ -0,0 +1,66 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+ "time"
+)
+
+const (
+ GroupModelTableName = "groups"
+)
+
+type GroupModel struct {
+ GroupID string `gorm:"column:group_id;primary_key;size:64" json:"groupID" binding:"required"`
+ GroupName string `gorm:"column:name;size:255" json:"groupName"`
+ Notification string `gorm:"column:notification;size:255" json:"notification"`
+ Introduction string `gorm:"column:introduction;size:255" json:"introduction"`
+ FaceURL string `gorm:"column:face_url;size:255" json:"faceURL"`
+ CreateTime time.Time `gorm:"column:create_time;index:create_time;autoCreateTime"`
+ Ex string `gorm:"column:ex" json:"ex;size:1024"`
+ Status int32 `gorm:"column:status"`
+ CreatorUserID string `gorm:"column:creator_user_id;size:64"`
+ GroupType int32 `gorm:"column:group_type"`
+ NeedVerification int32 `gorm:"column:need_verification"`
+ LookMemberInfo int32 `gorm:"column:look_member_info" json:"lookMemberInfo"`
+ ApplyMemberFriend int32 `gorm:"column:apply_member_friend" json:"applyMemberFriend"`
+ NotificationUpdateTime time.Time `gorm:"column:notification_update_time"`
+ NotificationUserID string `gorm:"column:notification_user_id;size:64"`
+}
+
+func (GroupModel) TableName() string {
+ return GroupModelTableName
+}
+
+type GroupModelInterface interface {
+ NewTx(tx any) GroupModelInterface
+ Create(ctx context.Context, groups []*GroupModel) (err error)
+ UpdateMap(ctx context.Context, groupID string, args map[string]interface{}) (err error)
+ UpdateStatus(ctx context.Context, groupID string, status int32) (err error)
+ Find(ctx context.Context, groupIDs []string) (groups []*GroupModel, err error)
+ FindNotDismissedGroup(ctx context.Context, groupIDs []string) (groups []*GroupModel, err error)
+ Take(ctx context.Context, groupID string) (group *GroupModel, err error)
+ Search(
+ ctx context.Context,
+ keyword string,
+ pageNumber, showNumber int32,
+ ) (total uint32, groups []*GroupModel, err error)
+ GetGroupIDsByGroupType(ctx context.Context, groupType int) (groupIDs []string, err error)
+	// Get the total number of groups
+ CountTotal(ctx context.Context, before *time.Time) (count int64, err error)
+	// Get the number of groups added per day within a time range
+ CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error)
+}
diff --git a/tools/data-conversion/openim/mysql/v3/group_member.go b/tools/data-conversion/openim/mysql/v3/group_member.go
new file mode 100644
index 000000000..bfde72834
--- /dev/null
+++ b/tools/data-conversion/openim/mysql/v3/group_member.go
@@ -0,0 +1,74 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+ "time"
+)
+
+const (
+ GroupMemberModelTableName = "group_members"
+)
+
+type GroupMemberModel struct {
+ GroupID string `gorm:"column:group_id;primary_key;size:64"`
+ UserID string `gorm:"column:user_id;primary_key;size:64"`
+ Nickname string `gorm:"column:nickname;size:255"`
+ FaceURL string `gorm:"column:user_group_face_url;size:255"`
+ RoleLevel int32 `gorm:"column:role_level"`
+ JoinTime time.Time `gorm:"column:join_time"`
+ JoinSource int32 `gorm:"column:join_source"`
+ InviterUserID string `gorm:"column:inviter_user_id;size:64"`
+ OperatorUserID string `gorm:"column:operator_user_id;size:64"`
+ MuteEndTime time.Time `gorm:"column:mute_end_time"`
+ Ex string `gorm:"column:ex;size:1024"`
+}
+
+func (GroupMemberModel) TableName() string {
+ return GroupMemberModelTableName
+}
+
+type GroupMemberModelInterface interface {
+ NewTx(tx any) GroupMemberModelInterface
+ Create(ctx context.Context, groupMembers []*GroupMemberModel) (err error)
+ Delete(ctx context.Context, groupID string, userIDs []string) (err error)
+ DeleteGroup(ctx context.Context, groupIDs []string) (err error)
+ Update(ctx context.Context, groupID string, userID string, data map[string]any) (err error)
+ UpdateRoleLevel(ctx context.Context, groupID string, userID string, roleLevel int32) (rowsAffected int64, err error)
+ Find(
+ ctx context.Context,
+ groupIDs []string,
+ userIDs []string,
+ roleLevels []int32,
+ ) (groupMembers []*GroupMemberModel, err error)
+ FindMemberUserID(ctx context.Context, groupID string) (userIDs []string, err error)
+ Take(ctx context.Context, groupID string, userID string) (groupMember *GroupMemberModel, err error)
+ TakeOwner(ctx context.Context, groupID string) (groupMember *GroupMemberModel, err error)
+ SearchMember(
+ ctx context.Context,
+ keyword string,
+ groupIDs []string,
+ userIDs []string,
+ roleLevels []int32,
+ pageNumber, showNumber int32,
+ ) (total uint32, groupList []*GroupMemberModel, err error)
+ MapGroupMemberNum(ctx context.Context, groupIDs []string) (count map[string]uint32, err error)
+ FindJoinUserID(ctx context.Context, groupIDs []string) (groupUsers map[string][]string, err error)
+ FindUserJoinedGroupID(ctx context.Context, userID string) (groupIDs []string, err error)
+ TakeGroupMemberNum(ctx context.Context, groupID string) (count int64, err error)
+ FindUsersJoinedGroupID(ctx context.Context, userIDs []string) (map[string][]string, error)
+ FindUserManagedGroupID(ctx context.Context, userID string) (groupIDs []string, err error)
+}
diff --git a/tools/data-conversion/openim/mysql/v3/group_request.go b/tools/data-conversion/openim/mysql/v3/group_request.go
new file mode 100644
index 000000000..063b83938
--- /dev/null
+++ b/tools/data-conversion/openim/mysql/v3/group_request.go
@@ -0,0 +1,61 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+ "time"
+)
+
+const (
+ GroupRequestModelTableName = "group_requests"
+)
+
+type GroupRequestModel struct {
+ UserID string `gorm:"column:user_id;primary_key;size:64"`
+ GroupID string `gorm:"column:group_id;primary_key;size:64"`
+ HandleResult int32 `gorm:"column:handle_result"`
+ ReqMsg string `gorm:"column:req_msg;size:1024"`
+ HandledMsg string `gorm:"column:handle_msg;size:1024"`
+ ReqTime time.Time `gorm:"column:req_time"`
+ HandleUserID string `gorm:"column:handle_user_id;size:64"`
+ HandledTime time.Time `gorm:"column:handle_time"`
+ JoinSource int32 `gorm:"column:join_source"`
+ InviterUserID string `gorm:"column:inviter_user_id;size:64"`
+ Ex string `gorm:"column:ex;size:1024"`
+}
+
+func (GroupRequestModel) TableName() string {
+ return GroupRequestModelTableName
+}
+
+type GroupRequestModelInterface interface {
+ NewTx(tx any) GroupRequestModelInterface
+ Create(ctx context.Context, groupRequests []*GroupRequestModel) (err error)
+ Delete(ctx context.Context, groupID string, userID string) (err error)
+ UpdateHandler(ctx context.Context, groupID string, userID string, handledMsg string, handleResult int32) (err error)
+ Take(ctx context.Context, groupID string, userID string) (groupRequest *GroupRequestModel, err error)
+ FindGroupRequests(ctx context.Context, groupID string, userIDs []string) (int64, []*GroupRequestModel, error)
+ Page(
+ ctx context.Context,
+ userID string,
+ pageNumber, showNumber int32,
+ ) (total uint32, groups []*GroupRequestModel, err error)
+ PageGroup(
+ ctx context.Context,
+ groupIDs []string,
+ pageNumber, showNumber int32,
+ ) (total uint32, groups []*GroupRequestModel, err error)
+}
diff --git a/tools/data-conversion/openim/mysql/v3/log.go b/tools/data-conversion/openim/mysql/v3/log.go
new file mode 100644
index 000000000..22198ca7c
--- /dev/null
+++ b/tools/data-conversion/openim/mysql/v3/log.go
@@ -0,0 +1,43 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+ "time"
+)
+
+type Log struct {
+ LogID string `gorm:"column:log_id;primary_key;type:char(64)"`
+ Platform string `gorm:"column:platform;type:varchar(32)"`
+ UserID string `gorm:"column:user_id;type:char(64)"`
+ CreateTime time.Time `gorm:"index:,sort:desc"`
+	Url        string    `gorm:"column:url;type:varchar(255)"`
+	FileName   string    `gorm:"column:filename;type:varchar(255)"`
+	SystemType string    `gorm:"column:system_type;type:varchar(255)"`
+	Version    string    `gorm:"column:version;type:varchar(255)"`
+	Ex         string    `gorm:"column:ex;type:varchar(255)"`
+}
+
+func (Log) TableName() string {
+ return "logs"
+}
+
+type LogInterface interface {
+ Create(ctx context.Context, log []*Log) error
+ Search(ctx context.Context, keyword string, start time.Time, end time.Time, pageNumber int32, showNumber int32) (uint32, []*Log, error)
+ Delete(ctx context.Context, logID []string, userID string) error
+ Get(ctx context.Context, logIDs []string, userID string) ([]*Log, error)
+}
diff --git a/tools/data-conversion/openim/mysql/v3/object.go b/tools/data-conversion/openim/mysql/v3/object.go
new file mode 100644
index 000000000..0ed4130a6
--- /dev/null
+++ b/tools/data-conversion/openim/mysql/v3/object.go
@@ -0,0 +1,45 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+ "time"
+)
+
+const (
+ ObjectInfoModelTableName = "object"
+)
+
+type ObjectModel struct {
+ Name string `gorm:"column:name;primary_key"`
+ UserID string `gorm:"column:user_id"`
+ Hash string `gorm:"column:hash"`
+ Key string `gorm:"column:key"`
+ Size int64 `gorm:"column:size"`
+ ContentType string `gorm:"column:content_type"`
+ Cause string `gorm:"column:cause"`
+ CreateTime time.Time `gorm:"column:create_time"`
+}
+
+func (ObjectModel) TableName() string {
+ return ObjectInfoModelTableName
+}
+
+type ObjectInfoModelInterface interface {
+ NewTx(tx any) ObjectInfoModelInterface
+ SetObject(ctx context.Context, obj *ObjectModel) error
+ Take(ctx context.Context, name string) (*ObjectModel, error)
+}
diff --git a/tools/data-conversion/openim/mysql/v3/user.go b/tools/data-conversion/openim/mysql/v3/user.go
new file mode 100644
index 000000000..10a715bda
--- /dev/null
+++ b/tools/data-conversion/openim/mysql/v3/user.go
@@ -0,0 +1,72 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "context"
+ "time"
+)
+
+const (
+ UserModelTableName = "users"
+)
+
+type UserModel struct {
+ UserID string `gorm:"column:user_id;primary_key;size:64"`
+ Nickname string `gorm:"column:name;size:255"`
+ FaceURL string `gorm:"column:face_url;size:255"`
+ Ex string `gorm:"column:ex;size:1024"`
+ CreateTime time.Time `gorm:"column:create_time;index:create_time;autoCreateTime"`
+ AppMangerLevel int32 `gorm:"column:app_manger_level;default:1"`
+ GlobalRecvMsgOpt int32 `gorm:"column:global_recv_msg_opt"`
+}
+
+func (u *UserModel) GetNickname() string {
+ return u.Nickname
+}
+
+func (u *UserModel) GetFaceURL() string {
+ return u.FaceURL
+}
+
+func (u *UserModel) GetUserID() string {
+ return u.UserID
+}
+
+func (u *UserModel) GetEx() string {
+ return u.Ex
+}
+
+func (UserModel) TableName() string {
+ return UserModelTableName
+}
+
+type UserModelInterface interface {
+ Create(ctx context.Context, users []*UserModel) (err error)
+ UpdateByMap(ctx context.Context, userID string, args map[string]interface{}) (err error)
+ Update(ctx context.Context, user *UserModel) (err error)
+	// Get the specified users' info; missing users do not cause an error
+	Find(ctx context.Context, userIDs []string) (users []*UserModel, err error)
+	// Get a single user's info; returns an error if the user does not exist
+	Take(ctx context.Context, userID string) (user *UserModel, err error)
+	// Get users' info by page; missing users do not cause an error
+	Page(ctx context.Context, pageNumber, showNumber int32) (users []*UserModel, count int64, err error)
+ GetAllUserID(ctx context.Context, pageNumber, showNumber int32) (userIDs []string, err error)
+ GetUserGlobalRecvMsgOpt(ctx context.Context, userID string) (opt int, err error)
+	// Get the total number of users
+	CountTotal(ctx context.Context, before *time.Time) (count int64, err error)
+	// Get the number of users added per day within a time range
+ CountRangeEverydayTotal(ctx context.Context, start time.Time, end time.Time) (map[string]int64, error)
+}
diff --git a/pkg/common/db/relation/meta_db.go b/tools/data-conversion/openim/mysql/v3/utils.go
similarity index 70%
rename from pkg/common/db/relation/meta_db.go
rename to tools/data-conversion/openim/mysql/v3/utils.go
index 6ab980120..c944eae8b 100644
--- a/pkg/common/db/relation/meta_db.go
+++ b/tools/data-conversion/openim/mysql/v3/utils.go
@@ -15,24 +15,22 @@
package relation
import (
- "context"
-
"gorm.io/gorm"
+
+ "github.com/OpenIMSDK/tools/utils"
)
-type MetaDB struct {
- DB *gorm.DB
- table any
+type BatchUpdateGroupMember struct {
+ GroupID string
+ UserID string
+ Map map[string]any
}
-func NewMetaDB(db *gorm.DB, table any) *MetaDB {
- return &MetaDB{
- DB: db,
- table: table,
- }
+type GroupSimpleUserID struct {
+ Hash uint64
+ MemberNum uint32
}
-func (g *MetaDB) db(ctx context.Context) *gorm.DB {
- db := g.DB.WithContext(ctx).Model(g.table)
- return db
+func IsNotFound(err error) bool {
+ return utils.Unwrap(err) == gorm.ErrRecordNotFound
}
diff --git a/tools/data-conversion/openim/proto/msg/msg.pb.go b/tools/data-conversion/openim/proto/msg/msg.pb.go
index 2954a3a76..a0a6cdf02 100644
--- a/tools/data-conversion/openim/proto/msg/msg.pb.go
+++ b/tools/data-conversion/openim/proto/msg/msg.pb.go
@@ -2703,7 +2703,7 @@ func RegisterMsgServer(s *grpc.Server, srv MsgServer) {
s.RegisterService(&_Msg_serviceDesc, srv)
}
-func _Msg_GetMaxAndMinSeq_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_GetMaxAndMinSeq_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(sdk_ws.GetMaxAndMinSeqReq)
if err := dec(in); err != nil {
return nil, err
@@ -2715,13 +2715,13 @@ func _Msg_GetMaxAndMinSeq_Handler(srv interface{}, ctx context.Context, dec func
Server: srv,
FullMethod: "/msg.msg/GetMaxAndMinSeq",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).GetMaxAndMinSeq(ctx, req.(*sdk_ws.GetMaxAndMinSeqReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_PullMessageBySeqList_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_PullMessageBySeqList_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(sdk_ws.PullMessageBySeqListReq)
if err := dec(in); err != nil {
return nil, err
@@ -2733,13 +2733,13 @@ func _Msg_PullMessageBySeqList_Handler(srv interface{}, ctx context.Context, dec
Server: srv,
FullMethod: "/msg.msg/PullMessageBySeqList",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).PullMessageBySeqList(ctx, req.(*sdk_ws.PullMessageBySeqListReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_SendMsg_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_SendMsg_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(SendMsgReq)
if err := dec(in); err != nil {
return nil, err
@@ -2751,13 +2751,13 @@ func _Msg_SendMsg_Handler(srv interface{}, ctx context.Context, dec func(interfa
Server: srv,
FullMethod: "/msg.msg/SendMsg",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).SendMsg(ctx, req.(*SendMsgReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_DelMsgList_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_DelMsgList_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(sdk_ws.DelMsgListReq)
if err := dec(in); err != nil {
return nil, err
@@ -2769,13 +2769,13 @@ func _Msg_DelMsgList_Handler(srv interface{}, ctx context.Context, dec func(inte
Server: srv,
FullMethod: "/msg.msg/DelMsgList",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).DelMsgList(ctx, req.(*sdk_ws.DelMsgListReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_DelSuperGroupMsg_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_DelSuperGroupMsg_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(DelSuperGroupMsgReq)
if err := dec(in); err != nil {
return nil, err
@@ -2787,13 +2787,13 @@ func _Msg_DelSuperGroupMsg_Handler(srv interface{}, ctx context.Context, dec fun
Server: srv,
FullMethod: "/msg.msg/DelSuperGroupMsg",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).DelSuperGroupMsg(ctx, req.(*DelSuperGroupMsgReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_ClearMsg_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_ClearMsg_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(ClearMsgReq)
if err := dec(in); err != nil {
return nil, err
@@ -2805,13 +2805,13 @@ func _Msg_ClearMsg_Handler(srv interface{}, ctx context.Context, dec func(interf
Server: srv,
FullMethod: "/msg.msg/ClearMsg",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).ClearMsg(ctx, req.(*ClearMsgReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_SetMsgMinSeq_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_SetMsgMinSeq_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(SetMsgMinSeqReq)
if err := dec(in); err != nil {
return nil, err
@@ -2823,13 +2823,13 @@ func _Msg_SetMsgMinSeq_Handler(srv interface{}, ctx context.Context, dec func(in
Server: srv,
FullMethod: "/msg.msg/SetMsgMinSeq",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).SetMsgMinSeq(ctx, req.(*SetMsgMinSeqReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_SetSendMsgStatus_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_SetSendMsgStatus_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(SetSendMsgStatusReq)
if err := dec(in); err != nil {
return nil, err
@@ -2841,13 +2841,13 @@ func _Msg_SetSendMsgStatus_Handler(srv interface{}, ctx context.Context, dec fun
Server: srv,
FullMethod: "/msg.msg/SetSendMsgStatus",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).SetSendMsgStatus(ctx, req.(*SetSendMsgStatusReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_GetSendMsgStatus_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_GetSendMsgStatus_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(GetSendMsgStatusReq)
if err := dec(in); err != nil {
return nil, err
@@ -2859,13 +2859,13 @@ func _Msg_GetSendMsgStatus_Handler(srv interface{}, ctx context.Context, dec fun
Server: srv,
FullMethod: "/msg.msg/GetSendMsgStatus",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).GetSendMsgStatus(ctx, req.(*GetSendMsgStatusReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_GetSuperGroupMsg_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_GetSuperGroupMsg_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(GetSuperGroupMsgReq)
if err := dec(in); err != nil {
return nil, err
@@ -2877,13 +2877,13 @@ func _Msg_GetSuperGroupMsg_Handler(srv interface{}, ctx context.Context, dec fun
Server: srv,
FullMethod: "/msg.msg/GetSuperGroupMsg",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).GetSuperGroupMsg(ctx, req.(*GetSuperGroupMsgReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_GetWriteDiffMsg_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_GetWriteDiffMsg_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(GetWriteDiffMsgReq)
if err := dec(in); err != nil {
return nil, err
@@ -2895,13 +2895,13 @@ func _Msg_GetWriteDiffMsg_Handler(srv interface{}, ctx context.Context, dec func
Server: srv,
FullMethod: "/msg.msg/GetWriteDiffMsg",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).GetWriteDiffMsg(ctx, req.(*GetWriteDiffMsgReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_SetMessageReactionExtensions_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_SetMessageReactionExtensions_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(SetMessageReactionExtensionsReq)
if err := dec(in); err != nil {
return nil, err
@@ -2913,13 +2913,13 @@ func _Msg_SetMessageReactionExtensions_Handler(srv interface{}, ctx context.Cont
Server: srv,
FullMethod: "/msg.msg/SetMessageReactionExtensions",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).SetMessageReactionExtensions(ctx, req.(*SetMessageReactionExtensionsReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_GetMessageListReactionExtensions_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_GetMessageListReactionExtensions_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(GetMessageListReactionExtensionsReq)
if err := dec(in); err != nil {
return nil, err
@@ -2931,13 +2931,13 @@ func _Msg_GetMessageListReactionExtensions_Handler(srv interface{}, ctx context.
Server: srv,
FullMethod: "/msg.msg/GetMessageListReactionExtensions",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).GetMessageListReactionExtensions(ctx, req.(*GetMessageListReactionExtensionsReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_AddMessageReactionExtensions_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_AddMessageReactionExtensions_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(AddMessageReactionExtensionsReq)
if err := dec(in); err != nil {
return nil, err
@@ -2949,13 +2949,13 @@ func _Msg_AddMessageReactionExtensions_Handler(srv interface{}, ctx context.Cont
Server: srv,
FullMethod: "/msg.msg/AddMessageReactionExtensions",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).AddMessageReactionExtensions(ctx, req.(*AddMessageReactionExtensionsReq))
}
return interceptor(ctx, in, info, handler)
}
-func _Msg_DeleteMessageReactionExtensions_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+func _Msg_DeleteMessageReactionExtensions_Handler(srv any, ctx context.Context, dec func(any) error, interceptor grpc.UnaryServerInterceptor) (any, error) {
in := new(DeleteMessageListReactionExtensionsReq)
if err := dec(in); err != nil {
return nil, err
@@ -2967,7 +2967,7 @@ func _Msg_DeleteMessageReactionExtensions_Handler(srv interface{}, ctx context.C
Server: srv,
FullMethod: "/msg.msg/DeleteMessageReactionExtensions",
}
- handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ handler := func(ctx context.Context, req any) (any, error) {
return srv.(MsgServer).DeleteMessageReactionExtensions(ctx, req.(*DeleteMessageListReactionExtensionsReq))
}
return interceptor(ctx, in, info, handler)
diff --git a/tools/data-conversion/openim/proto/msg/msg.proto b/tools/data-conversion/openim/proto/msg/msg.proto
index d2fe5337e..3149a7337 100644
--- a/tools/data-conversion/openim/proto/msg/msg.proto
+++ b/tools/data-conversion/openim/proto/msg/msg.proto
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
syntax = "proto3";
import "Open-IM-Server/pkg/proto/sdk_ws/ws.proto";
import "Open-IM-Server/pkg/proto/sdk_ws/wrappers.proto";
diff --git a/tools/data-conversion/openim/proto/sdk_ws/ws.pb.go b/tools/data-conversion/openim/proto/sdk_ws/ws.pb.go
index 097280860..94b6f9be6 100644
--- a/tools/data-conversion/openim/proto/sdk_ws/ws.pb.go
+++ b/tools/data-conversion/openim/proto/sdk_ws/ws.pb.go
@@ -4156,8 +4156,8 @@ func (m *SignalReq) GetGetTokenByRoomID() *SignalGetTokenByRoomIDReq {
}
// XXX_OneofFuncs is for the internal use of the proto package.
-func (*SignalReq) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
- return _SignalReq_OneofMarshaler, _SignalReq_OneofUnmarshaler, _SignalReq_OneofSizer, []interface{}{
+func (*SignalReq) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []any) {
+ return _SignalReq_OneofMarshaler, _SignalReq_OneofUnmarshaler, _SignalReq_OneofSizer, []any{
(*SignalReq_Invite)(nil),
(*SignalReq_InviteInGroup)(nil),
(*SignalReq_Cancel)(nil),
@@ -4523,8 +4523,8 @@ func (m *SignalResp) GetGetTokenByRoomID() *SignalGetTokenByRoomIDReply {
}
// XXX_OneofFuncs is for the internal use of the proto package.
-func (*SignalResp) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
- return _SignalResp_OneofMarshaler, _SignalResp_OneofUnmarshaler, _SignalResp_OneofSizer, []interface{}{
+func (*SignalResp) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []any) {
+ return _SignalResp_OneofMarshaler, _SignalResp_OneofUnmarshaler, _SignalResp_OneofSizer, []any{
(*SignalResp_Invite)(nil),
(*SignalResp_InviteInGroup)(nil),
(*SignalResp_Cancel)(nil),
diff --git a/tools/data-conversion/openim/proto/sdk_ws/ws.proto b/tools/data-conversion/openim/proto/sdk_ws/ws.proto
index c8ef680c9..95b956b0e 100644
--- a/tools/data-conversion/openim/proto/sdk_ws/ws.proto
+++ b/tools/data-conversion/openim/proto/sdk_ws/ws.proto
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
syntax = "proto3";
import "Open-IM-Server/pkg/proto/sdk_ws/wrappers.proto";
option go_package = "Open_IM/pkg/proto/sdk_ws;server_api_params";
diff --git a/tools/data-conversion/utils/find_insert.go b/tools/data-conversion/utils/find_insert.go
index 4789cd554..150820fce 100644
--- a/tools/data-conversion/utils/find_insert.go
+++ b/tools/data-conversion/utils/find_insert.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package utils
import (
diff --git a/tools/data-conversion/utils/time.go b/tools/data-conversion/utils/time.go
index e2dac4bb8..9077a3d88 100644
--- a/tools/data-conversion/utils/time.go
+++ b/tools/data-conversion/utils/time.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package utils
import "time"
diff --git a/tools/formitychecker/README.md b/tools/formitychecker/README.md
new file mode 100644
index 000000000..7cabf8a66
--- /dev/null
+++ b/tools/formitychecker/README.md
@@ -0,0 +1,102 @@
+# Development of a Go-Based Conformity Checker for Project File and Directory Naming Standards
+
+### 1. Project Overview
+
+#### Project Name
+
+- `GoConformityChecker`
+
+#### Functionality Description
+
+- Checks if the file and subdirectory names in a specified directory adhere to specific naming conventions.
+- Supports specific file types such as `.go`, `.yml`, `.yaml`, `.md`, and `.sh`.
+- Allows users to specify directories to be checked and directories to be ignored.
+- For more details, see https://github.com/openimsdk/open-im-server/blob/main/docs/contrib/code-conventions.md
+
+#### Naming Conventions
+
+- Go files: Only underscores are allowed.
+- YAML, YML, and Markdown files: Only hyphens are allowed.
+- Directories: Only underscores are allowed.
+
+### 2. File Structure
+
+- `main.go`: Entry point of the program, handles command-line arguments.
+- `checker/checker.go`: Contains the core logic.
+- `config/config.go`: Parses and stores configuration information.
+
+### 3. Core Code Design
+
+#### main.go
+
+- Parses command-line arguments, including the directory to be checked and directories to be ignored.
+- Calls the `checker` module for checking.
+
+#### config.go
+
+- Defines a configuration structure, such as directories to check and ignore.
+
+#### checker.go
+
+- Iterates through the specified directory.
+- Applies different naming rules based on file types and directory names.
+- Records files or directories that do not conform to the standards.
+
+### 4. Pseudocode Example
+
+#### main.go
+
+```go
+package main
+
+import (
+ "flag"
+ "fmt"
+ "GoConformityChecker/checker"
+)
+
+func main() {
+ // Parse command-line arguments
+ var targetDir string
+ var ignoreDirs string
+ flag.StringVar(&targetDir, "target", ".", "Directory to check")
+ flag.StringVar(&ignoreDirs, "ignore", "", "Directories to ignore")
+ flag.Parse()
+
+ // Call the checker
+ err := checker.CheckDirectory(targetDir, ignoreDirs)
+ if err != nil {
+ fmt.Println("Error:", err)
+ }
+}
+```
+
+#### checker.go
+
+```go
+package checker
+
+import (
+ // Import necessary packages
+)
+
+func CheckDirectory(targetDir, ignoreDirs string) error {
+ // Iterate through the directory, applying rules to check file and directory names
+ // Return any found errors or non-conformities
+ return nil
+}
+```
+
+### 5. Implementation Details
+
+- **File and Directory Traversal**: Use Go's `path/filepath` package to traverse directories and subdirectories.
+- **Naming Rules Checking**: Apply different regular expressions based on the file extension and entry type, as sketched after this list.
+- **Error Handling and Reporting**: Record files or directories that do not conform and report to the user.
+
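+A minimal, self-contained sketch of the naming rules described above; the names `validFileName`, `underscoreOnly`, and `hyphenOnly` are illustrative and not part of the tool:
+
+```go
+package main
+
+import (
+	"fmt"
+	"path/filepath"
+	"regexp"
+	"strings"
+)
+
+var (
+	underscoreOnly = regexp.MustCompile(`^[a-zA-Z0-9_]+$`)  // .go files and directories
+	hyphenOnly     = regexp.MustCompile(`^[a-zA-Z0-9\-]+$`) // .yml/.yaml/.md files
+)
+
+// validFileName strips the extension and applies the rule that matches it.
+func validFileName(name string) bool {
+	base := strings.TrimSuffix(name, filepath.Ext(name))
+	switch filepath.Ext(name) {
+	case ".go":
+		return underscoreOnly.MatchString(base)
+	case ".yml", ".yaml", ".md":
+		return hyphenOnly.MatchString(base)
+	default:
+		return true // other extensions are not checked in this sketch
+	}
+}
+
+func main() {
+	fmt.Println(validFileName("group_member.go"))           // true
+	fmt.Println(validFileName("prometheus-dashboard.yaml")) // true
+	fmt.Println(validFileName("group-member.go"))           // false: hyphens are not allowed in Go file names
+}
+```
+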
+### 6. Future Development and Extensions
+
+- Support more file types and naming rules.
+- Provide more detailed error reports, such as showing line numbers and specific naming mistakes.
+- Add a graphical or web interface for non-command-line users.
+
+The above outlines the overall project design. Implementation can proceed from this design, with adjustments as needed for real-world conditions.
\ No newline at end of file
diff --git a/tools/formitychecker/checker/checker.go b/tools/formitychecker/checker/checker.go
new file mode 100644
index 000000000..7a1643358
--- /dev/null
+++ b/tools/formitychecker/checker/checker.go
@@ -0,0 +1,111 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package checker
+
+import (
+ "fmt"
+ "os"
+ "path/filepath"
+ "regexp"
+ "strings"
+
+ "github.com/openimsdk/open-im-server/tools/formitychecker/config"
+)
+
+var (
+	// The patterns are applied to names with the extension already stripped
+	// (see checkNamingConvention), so they match the base name only.
+	underscoreRegex = regexp.MustCompile(`^[a-zA-Z0-9_]+$`)
+	hyphenRegex     = regexp.MustCompile(`^[a-zA-Z0-9\-]+$`)
+)
+
+// CheckDirectory initiates the checking process for the specified directories using configuration from config.Config.
+func CheckDirectory(cfg *config.Config) error {
+ ignoreMap := make(map[string]struct{})
+ for _, dir := range cfg.IgnoreDirs {
+ ignoreMap[dir] = struct{}{}
+ }
+
+ for _, targetDir := range cfg.TargetDirs {
+ err := filepath.Walk(targetDir, func(path string, info os.FileInfo, err error) error {
+ if err != nil {
+ return err
+ }
+
+			// Skip the directory (and its contents) if it is in the ignore list
+			if info.IsDir() {
+				if _, ok := ignoreMap[info.Name()]; ok {
+					return filepath.SkipDir
+				}
+			}
+
+ // Check the naming convention
+ if err := checkNamingConvention(path, info); err != nil {
+ fmt.Println(err)
+ }
+
+ return nil
+ })
+
+ if err != nil {
+ return fmt.Errorf("error checking directory '%s': %w", targetDir, err)
+ }
+ }
+
+ return nil
+}
+
+// checkNamingConvention checks if the file or directory name conforms to the standard naming conventions.
+func checkNamingConvention(path string, info os.FileInfo) error {
+ fileName := info.Name()
+
+ // Handle special cases for directories like .git
+ if info.IsDir() && strings.HasPrefix(fileName, ".") {
+ return nil // Skip special directories
+ }
+
+ // Extract the main part of the name (without extension for files)
+ mainName := fileName
+ if !info.IsDir() {
+ mainName = strings.TrimSuffix(fileName, filepath.Ext(fileName))
+ }
+
+ // Determine the type of file and apply corresponding naming rule
+ switch {
+ case info.IsDir():
+ if !isValidName(mainName, "_") { // Directory names must only contain underscores
+ return fmt.Errorf("!!! invalid directory name: %s", path)
+ }
+ case strings.HasSuffix(fileName, ".go"):
+ if !isValidName(mainName, "_") { // Go files must only contain underscores
+ return fmt.Errorf("!!! invalid Go file name: %s", path)
+ }
+ case strings.HasSuffix(fileName, ".yml"), strings.HasSuffix(fileName, ".yaml"), strings.HasSuffix(fileName, ".md"):
+ if !isValidName(mainName, "-") { // YML, YAML, and Markdown files must only contain hyphens
+ return fmt.Errorf("!!! invalid file name: %s", path)
+ }
+ }
+
+ return nil
+}
+
+// isValidName checks if the file name conforms to the specified rule (underscore or hyphen).
+func isValidName(name, charType string) bool {
+ switch charType {
+ case "_":
+ return underscoreRegex.MatchString(name)
+ case "-":
+ return hyphenRegex.MatchString(name)
+ default:
+ return false
+ }
+}
diff --git a/tools/formitychecker/config/config.go b/tools/formitychecker/config/config.go
new file mode 100644
index 000000000..0c4f6a16b
--- /dev/null
+++ b/tools/formitychecker/config/config.go
@@ -0,0 +1,41 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package config
+
+import (
+ "strings"
+)
+
+// Config holds all the configuration parameters for the checker.
+type Config struct {
+ TargetDirs []string // Directories to check
+ IgnoreDirs []string // Directories to ignore
+}
+
+// NewConfig creates and returns a new Config instance.
+func NewConfig(targetDirs, ignoreDirs string) *Config {
+ return &Config{
+ TargetDirs: parseDirs(targetDirs),
+ IgnoreDirs: parseDirs(ignoreDirs),
+ }
+}
+
+// parseDirs splits a comma-separated string into a slice of directory names.
+func parseDirs(dirs string) []string {
+ if dirs == "" {
+ return nil
+ }
+ return strings.Split(dirs, ",")
+}
diff --git a/tools/formitychecker/formitychecker.go b/tools/formitychecker/formitychecker.go
new file mode 100644
index 000000000..2bedbfb32
--- /dev/null
+++ b/tools/formitychecker/formitychecker.go
@@ -0,0 +1,41 @@
+// Copyright © 2024 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package main
+
+import (
+ "flag"
+ "fmt"
+
+ "github.com/openimsdk/open-im-server/tools/formitychecker/checker"
+ "github.com/openimsdk/open-im-server/tools/formitychecker/config"
+)
+
+func main() {
+ defaultTargetDirs := "."
+ defaultIgnoreDirs := "components,.git"
+
+ var targetDirs string
+ var ignoreDirs string
+ flag.StringVar(&targetDirs, "target", defaultTargetDirs, "Directories to check (default: current directory)")
+	flag.StringVar(&ignoreDirs, "ignore", defaultIgnoreDirs, "Directories to ignore (default: components,.git)")
+ flag.Parse()
+
+ conf := config.NewConfig(targetDirs, ignoreDirs)
+
+ err := checker.CheckDirectory(conf)
+ if err != nil {
+ fmt.Println("Error:", err)
+ }
+}
diff --git a/tools/formitychecker/go.mod b/tools/formitychecker/go.mod
new file mode 100644
index 000000000..698b77647
--- /dev/null
+++ b/tools/formitychecker/go.mod
@@ -0,0 +1,3 @@
+module github.com/openimsdk/open-im-server/tools/formitychecker
+
+go 1.19
diff --git a/tools/imctl/.gitignore b/tools/imctl/.gitignore
index a2e773394..72ff17ca9 100644
--- a/tools/imctl/.gitignore
+++ b/tools/imctl/.gitignore
@@ -36,20 +36,6 @@ config/config.yaml
.env
./.env
-### OpenIM deploy ###
-deploy/openim_demo
-deploy/openim-api
-deploy/openim-rpc-msg_gateway
-deploy/openim-msgtransfer
-deploy/openim-push
-deploy/openim_timer_task
-deploy/openim-rpc-user
-deploy/openim-rpc-friend
-deploy/openim-rpc-group
-deploy/openim-rpc-msg
-deploy/openim-rpc-auth
-deploy/Open-IM-SDK-Core
-
# files used by the developer
.idea.md
.todo.md
diff --git a/pkg/common/db/relation/doc.go b/tools/imctl/imctl.go
similarity index 79%
rename from pkg/common/db/relation/doc.go
rename to tools/imctl/imctl.go
index 41135ac97..91161326e 100644
--- a/pkg/common/db/relation/doc.go
+++ b/tools/imctl/imctl.go
@@ -1,4 +1,4 @@
-// Copyright © 2023 OpenIM. All rights reserved.
+// Copyright © 2024 OpenIM. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -12,4 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-package relation // import "github.com/openimsdk/open-im-server/v3/pkg/common/db/relation"
+package main
+
+import "fmt"
+
+func main() {
+
+ fmt.Println("imctl")
+}
diff --git a/tools/infra/infra.go b/tools/infra/infra.go
index f8d8c7522..c14b92fa3 100644
--- a/tools/infra/infra.go
+++ b/tools/infra/infra.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package main
import (
diff --git a/tools/ncpu/ncpu.go b/tools/ncpu/ncpu.go
index 7ca3dff5e..062618b27 100644
--- a/tools/ncpu/ncpu.go
+++ b/tools/ncpu/ncpu.go
@@ -1,4 +1,4 @@
-// Copyright © 2023 OpenIM. All rights reserved.
+// Copyright © 2024 OpenIM. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -22,6 +22,11 @@ import (
)
func main() {
- maxprocs.Set()
- fmt.Print(runtime.GOMAXPROCS(0))
+ // Set maxprocs with a custom logger that does nothing to ignore logs.
+ maxprocs.Set(maxprocs.Logger(func(string, ...interface{}) {
+ // Intentionally left blank to suppress all log output from automaxprocs.
+ }))
+
+ // Now this will print the GOMAXPROCS value without printing the automaxprocs log message.
+ fmt.Println(runtime.GOMAXPROCS(0))
}
diff --git a/tools/openim-web/README.md b/tools/openim-web/README.md
index afd5e9a96..5794a946d 100644
--- a/tools/openim-web/README.md
+++ b/tools/openim-web/README.md
@@ -37,7 +37,6 @@ Variables can be set as above, Environment variables can also be set
example:
```bash
-$ export OPENIM_WEB_DIST_PATH="/app/dist"
$ export OPENIM_WEB_PPRT="11001"
```
diff --git a/tools/up35/README.md b/tools/up35/README.md
new file mode 100644
index 000000000..c5bdcd3b6
--- /dev/null
+++ b/tools/up35/README.md
@@ -0,0 +1,67 @@
+# README for OpenIM Server Data Conversion Tool
+
+## Overview
+
+This tool is part of the OpenIM Server suite, specifically designed for data conversion between MySQL and MongoDB databases. It handles the migration of various data types, including user information, friendships, group memberships, and more from a MySQL database to MongoDB, ensuring data consistency and integrity during the transition.
+
+## Features
+
++ **Configurable Database Connections:** Supports connections to both MySQL and MongoDB, configurable through a YAML file.
++ **Data Conversion Tasks:** Converts a range of data models, including user profiles, friend requests, group memberships, and logs.
++ **Version Control:** Maintains data versioning, ensuring only necessary migrations are performed (see the sketch after this list).
++ **Error Handling:** Robust error handling for database connectivity and query execution.
+
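+The version gate mentioned in the feature list can be pictured as a single document in MongoDB that records the current data version. The sketch below is an assumption for illustration only: the `conversion` package name, the `version` collection, its field names, and the `dataVersion` constant are not taken from the tool:
+
+```go
+package conversion
+
+import (
+	"context"
+
+	"go.mongodb.org/mongo-driver/bson"
+	"go.mongodb.org/mongo-driver/mongo"
+	"go.mongodb.org/mongo-driver/mongo/options"
+)
+
+const dataVersion = "3.5" // illustrative target version
+
+// needMigration reports whether the stored data version differs from the target.
+func needMigration(ctx context.Context, db *mongo.Database) (bool, error) {
+	var doc struct {
+		Value string `bson:"value"`
+	}
+	err := db.Collection("version").FindOne(ctx, bson.M{"key": "data_version"}).Decode(&doc)
+	if err == mongo.ErrNoDocuments {
+		return true, nil // no version recorded yet: migrate
+	}
+	if err != nil {
+		return false, err
+	}
+	return doc.Value != dataVersion, nil
+}
+
+// setDataVersion records the target version after a successful migration.
+func setDataVersion(ctx context.Context, db *mongo.Database) error {
+	_, err := db.Collection("version").UpdateOne(ctx,
+		bson.M{"key": "data_version"},
+		bson.M{"$set": bson.M{"value": dataVersion}},
+		options.Update().SetUpsert(true),
+	)
+	return err
+}
+```
+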
+## Requirements
+
++ Go programming environment
++ MySQL and MongoDB servers
++ OpenIM Server dependencies installed
+
+## Installation
+
+1. Ensure Go is installed and set up on your system.
+2. Clone the OpenIM Server repository.
+3. Navigate to the directory containing this tool.
+4. Install required dependencies.
+
+## Configuration
+
++ Configuration is managed through a YAML file specified at runtime.
++ Set up the MySQL and MongoDB connection parameters in the config file.
+
+## Usage
+
+To build the tool, use the following command from the terminal:
+
+```bash
+make build BINS="up35"
+```
+
+When running the resulting `up35` binary, supply the path to your configuration file (e.g., `path/to/config.yaml`).
+
+## Functionality
+
+The main functions of the script include:
+
++ `InitConfig(path string)`: Reads and parses the YAML configuration file.
++ `GetMysql()`: Establishes a connection to the MySQL database.
++ `GetMongo()`: Establishes a connection to the MongoDB database.
++ `Main(path string)`: Orchestrates the data migration process.
++ `SetMongoDataVersion(db *mongo.Database, curver string)`: Updates the data version in MongoDB after migration.
++ `NewTask(...)`: Generic function to handle the migration of different data types.
++ `insertMany(coll *mongo.Collection, objs []any)`: Inserts multiple records into a MongoDB collection (see the sketch after this list).
++ `getColl(obj any)`: Retrieves the MongoDB collection associated with a given object.
++ `convert struct`: Contains methods for converting MySQL models to MongoDB models.
+
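+The `insertMany` helper listed above, for example, amounts to a batched write into the target collection. The sketch below is illustrative rather than the tool's exact implementation; the `conversion` package name, the explicit `ctx` parameter, and the batch size are assumptions:
+
+```go
+package conversion
+
+import (
+	"context"
+
+	"go.mongodb.org/mongo-driver/mongo"
+)
+
+// insertMany writes objs into coll in fixed-size batches so that a very large
+// slice is not sent as one oversized InsertMany call.
+func insertMany(ctx context.Context, coll *mongo.Collection, objs []any) error {
+	const batchSize = 100 // illustrative batch size
+	for i := 0; i < len(objs); i += batchSize {
+		end := i + batchSize
+		if end > len(objs) {
+			end = len(objs)
+		}
+		if _, err := coll.InsertMany(ctx, objs[i:end]); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+```
+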
+## Notes
+
++ Ensure that the MySQL and MongoDB instances are accessible and that the credentials provided in the config file are correct.
++ It is advisable to backup databases before running the migration to prevent data loss.
+
+## Contributing
+
+Contributions to improve the tool or address issues are welcome. Please follow the project's contribution guidelines.
+
+## License
+
+Refer to the project's license document for usage and distribution rights.
\ No newline at end of file
diff --git a/tools/up35/go.mod b/tools/up35/go.mod
new file mode 100644
index 000000000..23163a4dc
--- /dev/null
+++ b/tools/up35/go.mod
@@ -0,0 +1,3 @@
+module github.com/openimsdk/open-im-server/v3/tools/up35
+
+go 1.19
diff --git a/tools/up35/go.sum b/tools/up35/go.sum
new file mode 100644
index 000000000..1a81e5b33
--- /dev/null
+++ b/tools/up35/go.sum
@@ -0,0 +1,125 @@
+github.com/OpenIMSDK/protocol v0.0.31 h1:ax43x9aqA6EKNXNukS5MT5BSTqkUmwO4uTvbJLtzCgE=
+github.com/OpenIMSDK/protocol v0.0.31/go.mod h1:F25dFrwrIx3lkNoiuf6FkCfxuwf8L4Z8UIsdTHP/r0Y=
+github.com/OpenIMSDK/tools v0.0.18 h1:h3CvKB90DNd2aIJcOQ99cqgeW6C0na0PzR1TNsfxwL0=
+github.com/OpenIMSDK/tools v0.0.18/go.mod h1:eg+q4A34Qmu73xkY0mt37FHGMCMfC6CtmOnm0kFEGFI=
+github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
+github.com/bwmarrin/snowflake v0.3.0 h1:xm67bEhkKh6ij1790JB83OujPR5CzNe8QuQqAgISZN0=
+github.com/bwmarrin/snowflake v0.3.0/go.mod h1:NdZxfVWX+oR6y2K0o6qAYv6gIOP9rjG0/E9WsDpxqwE=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
+github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/go-sql-driver/mysql v1.7.0/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
+github.com/go-sql-driver/mysql v1.7.1 h1:lUIinVbN1DY0xBg0eMOzmmtGoHwWBbvnWubQUrtU8EI=
+github.com/go-sql-driver/mysql v1.7.1/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
+github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
+github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
+github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
+github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
+github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
+github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
+github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
+github.com/jinzhu/copier v0.4.0 h1:w3ciUoD19shMCRargcpm0cm91ytaBhDvuRpz1ODO/U8=
+github.com/jinzhu/copier v0.4.0/go.mod h1:DfbEm0FYsaqBcKcFuvmOZb218JkPGtvSHsKg8S8hyyg=
+github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
+github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
+github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
+github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
+github.com/jonboulle/clockwork v0.4.0 h1:p4Cf1aMWXnXAUh8lVfewRBx1zaTSYKrKMF2g3ST4RZ4=
+github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
+github.com/klauspost/compress v1.16.7 h1:2mk3MPGNzKyxErAw8YaohYh69+pa4sIQSC0fPGCFR9I=
+github.com/klauspost/compress v1.16.7/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
+github.com/lestrrat-go/envload v0.0.0-20180220234015-a3eb8ddeffcc h1:RKf14vYWi2ttpEmkA4aQ3j4u9dStX2t4M8UM6qqNsG8=
+github.com/lestrrat-go/envload v0.0.0-20180220234015-a3eb8ddeffcc/go.mod h1:kopuH9ugFRkIXf3YoqHKyrJ9YfUFsckUU9S7B+XP+is=
+github.com/lestrrat-go/file-rotatelogs v2.4.0+incompatible h1:Y6sqxHMyB1D2YSzWkLibYKgg+SwmyFU9dF2hn6MdTj4=
+github.com/lestrrat-go/file-rotatelogs v2.4.0+incompatible/go.mod h1:ZQnN8lSECaebrkQytbHj4xNgtg8CR7RYXnPok8e0EHA=
+github.com/lestrrat-go/strftime v1.0.6 h1:CFGsDEt1pOpFNU+TJB0nhz9jl+K0hZSLE205AhTIGQQ=
+github.com/lestrrat-go/strftime v1.0.6/go.mod h1:f7jQKgV5nnJpYgdEasS+/y7EsTb8ykN2z68n3TtcTaw=
+github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe h1:iruDEfMl2E6fbMZ9s0scYfZQ84/6SPL6zC8ACM2oIL0=
+github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc=
+github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646 h1:zYyBkD/k9seD2A7fsi6Oo2LfFZAehjjQMERAvZLEDnQ=
+github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646/go.mod h1:jpp1/29i3P1S/RLdc7JQKbRpFeM1dOBd8T9ki5s+AY8=
+github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
+github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
+github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
+github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
+github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
+github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY=
+github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
+github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8=
+github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
+github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d h1:splanxYIlg+5LfHAM6xpdFEAYOk8iySO56hMFq6uLyA=
+github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA=
+github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
+go.mongodb.org/mongo-driver v1.12.1 h1:nLkghSU8fQNaK7oUmDhQFsnrtcoNy7Z6LVFKsEecqgE=
+go.mongodb.org/mongo-driver v1.12.1/go.mod h1:/rGBTebI3XYboVmgz+Wv3Bcbl3aD0QF9zl6kDDw18rQ=
+go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
+go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
+go.uber.org/goleak v1.1.11 h1:wy28qYRKZgnJTxGxvye5/wgWr1EKjmUDGYox5mGlRlI=
+go.uber.org/multierr v1.6.0 h1:y6IPFStTAIT5Ytl7/XYmHvzXQ7S3g/IeZW9hyZ5thw4=
+go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
+go.uber.org/zap v1.24.0 h1:FiJd5l1UOLj0wCgbSE0rwwXHzEdAZS6hiiSnxJN/D60=
+go.uber.org/zap v1.24.0/go.mod h1:2kMP+WWQ8aoFoedH3T2sq6iJ2yDWpHbP0f6MQbS9Gkg=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
+golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc=
+golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
+golang.org/x/image v0.13.0 h1:3cge/F/QTkNLauhf2QoE9zp+7sr+ZcL4HnoZmdwg9sg=
+golang.org/x/image v0.13.0/go.mod h1:6mmbMOeV28HuMTgA6OSRkdXKYw/t5W9Uwn2Yv1r3Yxk=
+golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
+golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.4.0 h1:zxkM55ReGkDlKSM+Fu41A+zmbZuaPVbGMzvvdUPznYQ=
+golang.org/x/sync v0.4.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.14.0 h1:Vz7Qs629MkJkGyHxUlRHizWJRG2j8fbQKjELVSNhy7Q=
+golang.org/x/sys v0.14.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
+golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
+golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
+golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
+golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20231012201019-e917dd12ba7a h1:a2MQQVoTo96JC9PMGtGBymLp7+/RzpFc2yX/9WfFg1c=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20231012201019-e917dd12ba7a/go.mod h1:4cYg8o5yUbm77w8ZX00LhMVNl/YVBFJRYWDc0uYWMs0=
+google.golang.org/grpc v1.59.0 h1:Z5Iec2pjwb+LEOqzpB2MR12/eKFhDPhuqW91O+4bwUk=
+google.golang.org/grpc v1.59.0/go.mod h1:aUPDwccQo6OTjy7Hct4AfBPD1GptF4fyUjIkQ9YtF98=
+google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
+google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
+google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
+google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
+gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gorm.io/driver/mysql v1.5.1 h1:WUEH5VF9obL/lTtzjmML/5e6VfFR/788coz2uaVCAZw=
+gorm.io/driver/mysql v1.5.1/go.mod h1:Jo3Xu7mMhCyj8dlrb3WoCaRd1FhsVh+yMXb1jUInf5o=
+gorm.io/gorm v1.25.1/go.mod h1:L4uxeKpfBml98NYqVqwAdmV1a2nBtAec/cf3fpucW/k=
+gorm.io/gorm v1.25.4 h1:iyNd8fNAe8W9dvtlgeRI5zSVZPsq3OpcTu37cYcpCmw=
+gorm.io/gorm v1.25.4/go.mod h1:L4uxeKpfBml98NYqVqwAdmV1a2nBtAec/cf3fpucW/k=
diff --git a/tools/up35/pkg/convert.go b/tools/up35/pkg/convert.go
new file mode 100644
index 000000000..53ada0e04
--- /dev/null
+++ b/tools/up35/pkg/convert.go
@@ -0,0 +1,242 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pkg
+
+import (
+ "time"
+
+ mongomodel "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+ mysqlmodel "github.com/openimsdk/open-im-server/v3/tools/data-conversion/openim/mysql/v3"
+ mongomodelrtc "github.com/openimsdk/open-im-server/v3/tools/up35/pkg/internal/rtc/mongo/table"
+ mysqlmodelrtc "github.com/openimsdk/open-im-server/v3/tools/up35/pkg/internal/rtc/mysql"
+)
+
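+// convert maps the MySQL (gorm) models onto the MongoDB models written by the migration.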
+type convert struct{}
+
+func (convert) User(v mysqlmodel.UserModel) mongomodel.UserModel {
+ return mongomodel.UserModel{
+ UserID: v.UserID,
+ Nickname: v.Nickname,
+ FaceURL: v.FaceURL,
+ Ex: v.Ex,
+ AppMangerLevel: v.AppMangerLevel,
+ GlobalRecvMsgOpt: v.GlobalRecvMsgOpt,
+ CreateTime: v.CreateTime,
+ }
+}
+
+func (convert) Friend(v mysqlmodel.FriendModel) mongomodel.FriendModel {
+ return mongomodel.FriendModel{
+ OwnerUserID: v.OwnerUserID,
+ FriendUserID: v.FriendUserID,
+ Remark: v.Remark,
+ CreateTime: v.CreateTime,
+ AddSource: v.AddSource,
+ OperatorUserID: v.OperatorUserID,
+ Ex: v.Ex,
+ }
+}
+
+func (convert) FriendRequest(v mysqlmodel.FriendRequestModel) mongomodel.FriendRequestModel {
+ return mongomodel.FriendRequestModel{
+ FromUserID: v.FromUserID,
+ ToUserID: v.ToUserID,
+ HandleResult: v.HandleResult,
+ ReqMsg: v.ReqMsg,
+ CreateTime: v.CreateTime,
+ HandlerUserID: v.HandlerUserID,
+ HandleMsg: v.HandleMsg,
+ HandleTime: v.HandleTime,
+ Ex: v.Ex,
+ }
+}
+
+func (convert) Black(v mysqlmodel.BlackModel) mongomodel.BlackModel {
+ return mongomodel.BlackModel{
+ OwnerUserID: v.OwnerUserID,
+ BlockUserID: v.BlockUserID,
+ CreateTime: v.CreateTime,
+ AddSource: v.AddSource,
+ OperatorUserID: v.OperatorUserID,
+ Ex: v.Ex,
+ }
+}
+
+func (convert) Group(v mysqlmodel.GroupModel) mongomodel.GroupModel {
+ return mongomodel.GroupModel{
+ GroupID: v.GroupID,
+ GroupName: v.GroupName,
+ Notification: v.Notification,
+ Introduction: v.Introduction,
+ FaceURL: v.FaceURL,
+ CreateTime: v.CreateTime,
+ Ex: v.Ex,
+ Status: v.Status,
+ CreatorUserID: v.CreatorUserID,
+ GroupType: v.GroupType,
+ NeedVerification: v.NeedVerification,
+ LookMemberInfo: v.LookMemberInfo,
+ ApplyMemberFriend: v.ApplyMemberFriend,
+ NotificationUpdateTime: v.NotificationUpdateTime,
+ NotificationUserID: v.NotificationUserID,
+ }
+}
+
+func (convert) GroupMember(v mysqlmodel.GroupMemberModel) mongomodel.GroupMemberModel {
+ return mongomodel.GroupMemberModel{
+ GroupID: v.GroupID,
+ UserID: v.UserID,
+ Nickname: v.Nickname,
+ FaceURL: v.FaceURL,
+ RoleLevel: v.RoleLevel,
+ JoinTime: v.JoinTime,
+ JoinSource: v.JoinSource,
+ InviterUserID: v.InviterUserID,
+ OperatorUserID: v.OperatorUserID,
+ MuteEndTime: v.MuteEndTime,
+ Ex: v.Ex,
+ }
+}
+
+func (convert) GroupRequest(v mysqlmodel.GroupRequestModel) mongomodel.GroupRequestModel {
+ return mongomodel.GroupRequestModel{
+ UserID: v.UserID,
+ GroupID: v.GroupID,
+ HandleResult: v.HandleResult,
+ ReqMsg: v.ReqMsg,
+ HandledMsg: v.HandledMsg,
+ ReqTime: v.ReqTime,
+ HandleUserID: v.HandleUserID,
+ HandledTime: v.HandledTime,
+ JoinSource: v.JoinSource,
+ InviterUserID: v.InviterUserID,
+ Ex: v.Ex,
+ }
+}
+
+func (convert) Conversation(v mysqlmodel.ConversationModel) mongomodel.ConversationModel {
+ return mongomodel.ConversationModel{
+ OwnerUserID: v.OwnerUserID,
+ ConversationID: v.ConversationID,
+ ConversationType: v.ConversationType,
+ UserID: v.UserID,
+ GroupID: v.GroupID,
+ RecvMsgOpt: v.RecvMsgOpt,
+ IsPinned: v.IsPinned,
+ IsPrivateChat: v.IsPrivateChat,
+ BurnDuration: v.BurnDuration,
+ GroupAtType: v.GroupAtType,
+ AttachedInfo: v.AttachedInfo,
+ Ex: v.Ex,
+ MaxSeq: v.MaxSeq,
+ MinSeq: v.MinSeq,
+ CreateTime: v.CreateTime,
+ IsMsgDestruct: v.IsMsgDestruct,
+ MsgDestructTime: v.MsgDestructTime,
+ LatestMsgDestructTime: v.LatestMsgDestructTime,
+ }
+}
+
+func (convert) Object(engine string) func(v mysqlmodel.ObjectModel) mongomodel.ObjectModel {
+ return func(v mysqlmodel.ObjectModel) mongomodel.ObjectModel {
+ return mongomodel.ObjectModel{
+ Name: v.Name,
+ UserID: v.UserID,
+ Hash: v.Hash,
+ Engine: engine,
+ Key: v.Key,
+ Size: v.Size,
+ ContentType: v.ContentType,
+ Group: v.Cause,
+ CreateTime: v.CreateTime,
+ }
+ }
+}
+
+func (convert) Log(v mysqlmodel.Log) mongomodel.LogModel {
+ return mongomodel.LogModel{
+ LogID: v.LogID,
+ Platform: v.Platform,
+ UserID: v.UserID,
+ CreateTime: v.CreateTime,
+ Url: v.Url,
+ FileName: v.FileName,
+ SystemType: v.SystemType,
+ Version: v.Version,
+ Ex: v.Ex,
+ }
+}
+
+func (convert) SignalModel(v mysqlmodelrtc.SignalModel) mongomodelrtc.SignalModel {
+ return mongomodelrtc.SignalModel{
+ SID: v.SID,
+ InviterUserID: v.InviterUserID,
+ CustomData: v.CustomData,
+ GroupID: v.GroupID,
+ RoomID: v.RoomID,
+ Timeout: v.Timeout,
+ MediaType: v.MediaType,
+ PlatformID: v.PlatformID,
+ SessionType: v.SessionType,
+ InitiateTime: v.InitiateTime,
+ EndTime: v.EndTime,
+ FileURL: v.FileURL,
+ Title: v.Title,
+ Desc: v.Desc,
+ Ex: v.Ex,
+ IOSPushSound: v.IOSPushSound,
+ IOSBadgeCount: v.IOSBadgeCount,
+ SignalInfo: v.SignalInfo,
+ }
+}
+
+func (convert) SignalInvitationModel(v mysqlmodelrtc.SignalInvitationModel) mongomodelrtc.SignalInvitationModel {
+ return mongomodelrtc.SignalInvitationModel{
+ SID: v.SID,
+ UserID: v.UserID,
+ Status: v.Status,
+ InitiateTime: v.InitiateTime,
+ HandleTime: v.HandleTime,
+ }
+}
+
+func (convert) Meeting(v mysqlmodelrtc.MeetingInfo) mongomodelrtc.MeetingInfo {
+ return mongomodelrtc.MeetingInfo{
+ RoomID: v.RoomID,
+ MeetingName: v.MeetingName,
+ HostUserID: v.HostUserID,
+ Status: v.Status,
+ StartTime: time.Unix(v.StartTime, 0),
+ EndTime: time.Unix(v.EndTime, 0),
+ CreateTime: v.CreateTime,
+ Ex: v.Ex,
+ }
+}
+
+func (convert) MeetingInvitationInfo(v mysqlmodelrtc.MeetingInvitationInfo) mongomodelrtc.MeetingInvitationInfo {
+ return mongomodelrtc.MeetingInvitationInfo{
+ RoomID: v.RoomID,
+ UserID: v.UserID,
+ CreateTime: v.CreateTime,
+ }
+}
+
+func (convert) MeetingVideoRecord(v mysqlmodelrtc.MeetingVideoRecord) mongomodelrtc.MeetingVideoRecord {
+ return mongomodelrtc.MeetingVideoRecord{
+ RoomID: v.RoomID,
+ FileURL: v.FileURL,
+ CreateTime: v.CreateTime,
+ }
+}
diff --git a/tools/up35/pkg/internal/rtc/mongo/mgo/meeting.go b/tools/up35/pkg/internal/rtc/mongo/mgo/meeting.go
new file mode 100644
index 000000000..fd0f2818b
--- /dev/null
+++ b/tools/up35/pkg/internal/rtc/mongo/mgo/meeting.go
@@ -0,0 +1,102 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/open-im-server/v3/tools/up35/pkg/internal/rtc/mongo/table"
+)
+
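+// NewMeeting returns the "meeting" collection repository and ensures its indexes (unique room_id, host_user_id, create_time desc).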
+func NewMeeting(db *mongo.Database) (table.MeetingInterface, error) {
+ coll := db.Collection("meeting")
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{
+ {Key: "room_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{
+ {Key: "host_user_id", Value: 1},
+ },
+ },
+ {
+ Keys: bson.D{
+ {Key: "create_time", Value: -1},
+ },
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &meeting{coll: coll}, nil
+}
+
+type meeting struct {
+ coll *mongo.Collection
+}
+
+func (x *meeting) Find(ctx context.Context, roomIDs []string) ([]*table.MeetingInfo, error) {
+ return mgoutil.Find[*table.MeetingInfo](ctx, x.coll, bson.M{"room_id": bson.M{"$in": roomIDs}})
+}
+
+func (x *meeting) CreateMeetingInfo(ctx context.Context, meetingInfo *table.MeetingInfo) error {
+ return mgoutil.InsertMany(ctx, x.coll, []*table.MeetingInfo{meetingInfo})
+}
+
+func (x *meeting) UpdateMeetingInfo(ctx context.Context, roomID string, update map[string]any) error {
+ if len(update) == 0 {
+ return nil
+ }
+ return mgoutil.UpdateOne(ctx, x.coll, bson.M{"room_id": roomID}, bson.M{"$set": update}, false)
+}
+
+func (x *meeting) GetUnCompleteMeetingIDList(ctx context.Context, roomIDs []string) ([]string, error) {
+ if len(roomIDs) == 0 {
+ return nil, nil
+ }
+ return mgoutil.Find[string](ctx, x.coll, bson.M{"room_id": bson.M{"$in": roomIDs}, "status": 0}, options.Find().SetProjection(bson.M{"_id": 0, "room_id": 1}))
+}
+
+func (x *meeting) Delete(ctx context.Context, roomIDs []string) error {
+ return mgoutil.DeleteMany(ctx, x.coll, bson.M{"room_id": bson.M{"$in": roomIDs}})
+}
+
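+// GetMeetingRecords pages meetings filtered by an optional host user ID and create_time range, newest first.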
+func (x *meeting) GetMeetingRecords(ctx context.Context, hostUserID string, startTime, endTime time.Time, pagination pagination.Pagination) (int64, []*table.MeetingInfo, error) {
+ var and []bson.M
+ if hostUserID != "" {
+ and = append(and, bson.M{"host_user_id": hostUserID})
+ }
+ if !startTime.IsZero() {
+ and = append(and, bson.M{"create_time": bson.M{"$gte": startTime}})
+ }
+ if !endTime.IsZero() {
+ and = append(and, bson.M{"create_time": bson.M{"$lte": endTime}})
+ }
+ filter := bson.M{}
+ if len(and) > 0 {
+ filter["$and"] = and
+ }
+ return mgoutil.FindPage[*table.MeetingInfo](ctx, x.coll, filter, pagination, options.Find().SetSort(bson.M{"create_time": -1}))
+}
diff --git a/tools/up35/pkg/internal/rtc/mongo/mgo/meeting_invitation.go b/tools/up35/pkg/internal/rtc/mongo/mgo/meeting_invitation.go
new file mode 100644
index 000000000..9926748bf
--- /dev/null
+++ b/tools/up35/pkg/internal/rtc/mongo/mgo/meeting_invitation.go
@@ -0,0 +1,97 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "github.com/OpenIMSDK/tools/utils"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/open-im-server/v3/tools/up35/pkg/internal/rtc/mongo/table"
+)
+
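+// NewMeetingInvitation returns the "meeting_invitation" collection repository and ensures a unique (room_id, user_id) index plus a create_time index.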
+func NewMeetingInvitation(db *mongo.Database) (table.MeetingInvitationInterface, error) {
+ coll := db.Collection("meeting_invitation")
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{
+ {Key: "room_id", Value: 1},
+ {Key: "user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{
+ {Key: "create_time", Value: -1},
+ },
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &meetingInvitation{coll: coll}, nil
+}
+
+type meetingInvitation struct {
+ coll *mongo.Collection
+}
+
+func (x *meetingInvitation) FindUserIDs(ctx context.Context, roomID string) ([]string, error) {
+ return mgoutil.Find[string](ctx, x.coll, bson.M{"room_id": roomID}, options.Find().SetProjection(bson.M{"_id": 0, "user_id": 1}))
+}
+
+func (x *meetingInvitation) CreateMeetingInvitationInfo(ctx context.Context, roomID string, inviteeUserIDs []string) error {
+ now := time.Now()
+ return mgoutil.InsertMany(ctx, x.coll, utils.Slice(inviteeUserIDs, func(userID string) *table.MeetingInvitationInfo {
+ return &table.MeetingInvitationInfo{
+ RoomID: roomID,
+ UserID: userID,
+ CreateTime: now,
+ }
+ }))
+}
+
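+// GetUserInvitedMeetingIDs returns the room IDs the user was invited to within the last five days, newest first.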
+func (x *meetingInvitation) GetUserInvitedMeetingIDs(ctx context.Context, userID string) (meetingIDs []string, err error) {
+ fiveDaysAgo := time.Now().AddDate(0, 0, -5)
+ return mgoutil.Find[string](
+ ctx,
+ x.coll,
+ bson.M{"user_id": userID, "create_time": bson.M{"$gte": fiveDaysAgo}},
+ options.Find().SetSort(bson.M{"create_time": -1}).SetProjection(bson.M{"_id": 0, "room_id": 1}),
+ )
+}
+
+func (x *meetingInvitation) Delete(ctx context.Context, roomIDs []string) error {
+ return mgoutil.DeleteMany(ctx, x.coll, bson.M{"room_id": bson.M{"$in": roomIDs}})
+}
+
+func (x *meetingInvitation) GetMeetingRecords(ctx context.Context, joinedUserID string, startTime, endTime time.Time, pagination pagination.Pagination) (int64, []string, error) {
+ var and []bson.M
+ and = append(and, bson.M{"user_id": joinedUserID})
+ if !startTime.IsZero() {
+ and = append(and, bson.M{"create_time": bson.M{"$gte": startTime}})
+ }
+ if !endTime.IsZero() {
+ and = append(and, bson.M{"create_time": bson.M{"$lte": endTime}})
+ }
+ opt := options.Find().SetSort(bson.M{"create_time": -1}).SetProjection(bson.M{"_id": 0, "room_id": 1})
+ return mgoutil.FindPage[string](ctx, x.coll, bson.M{"$and": and}, pagination, opt)
+}
diff --git a/tools/up35/pkg/internal/rtc/mongo/mgo/meeting_record.go b/tools/up35/pkg/internal/rtc/mongo/mgo/meeting_record.go
new file mode 100644
index 000000000..4e9dc5e0f
--- /dev/null
+++ b/tools/up35/pkg/internal/rtc/mongo/mgo/meeting_record.go
@@ -0,0 +1,48 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+
+ "github.com/openimsdk/open-im-server/v3/tools/up35/pkg/internal/rtc/mongo/table"
+)
+
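+// NewMeetingRecord returns the "meeting_record" collection repository and indexes room_id.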
+func NewMeetingRecord(db *mongo.Database) (table.MeetingRecordInterface, error) {
+ coll := db.Collection("meeting_record")
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{
+ {Key: "room_id", Value: 1},
+ },
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &meetingRecord{coll: coll}, nil
+}
+
+type meetingRecord struct {
+ coll *mongo.Collection
+}
+
+func (x *meetingRecord) CreateMeetingVideoRecord(ctx context.Context, meetingVideoRecord *table.MeetingVideoRecord) error {
+ return mgoutil.InsertMany(ctx, x.coll, []*table.MeetingVideoRecord{meetingVideoRecord})
+}
diff --git a/tools/up35/pkg/internal/rtc/mongo/mgo/signal.go b/tools/up35/pkg/internal/rtc/mongo/mgo/signal.go
new file mode 100644
index 000000000..47fd3fb02
--- /dev/null
+++ b/tools/up35/pkg/internal/rtc/mongo/mgo/signal.go
@@ -0,0 +1,105 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/open-im-server/v3/tools/up35/pkg/internal/rtc/mongo/table"
+)
+
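+// NewSignal returns the "signal" collection repository and ensures its indexes (unique sid, inviter_user_id, initiate_time desc).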
+func NewSignal(db *mongo.Database) (table.SignalInterface, error) {
+ coll := db.Collection("signal")
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{
+ {Key: "sid", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{
+ {Key: "inviter_user_id", Value: 1},
+ },
+ },
+ {
+ Keys: bson.D{
+ {Key: "initiate_time", Value: -1},
+ },
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &signal{coll: coll}, nil
+}
+
+type signal struct {
+ coll *mongo.Collection
+}
+
+func (x *signal) Find(ctx context.Context, sids []string) ([]*table.SignalModel, error) {
+ return mgoutil.Find[*table.SignalModel](ctx, x.coll, bson.M{"sid": bson.M{"$in": sids}})
+}
+
+func (x *signal) CreateSignal(ctx context.Context, signalModel *table.SignalModel) error {
+ return mgoutil.InsertMany(ctx, x.coll, []*table.SignalModel{signalModel})
+}
+
+func (x *signal) Update(ctx context.Context, sid string, update map[string]any) error {
+ if len(update) == 0 {
+ return nil
+ }
+ return mgoutil.UpdateOne(ctx, x.coll, bson.M{"sid": sid}, bson.M{"$set": update}, false)
+}
+
+func (x *signal) UpdateSignalFileURL(ctx context.Context, sID, fileURL string) error {
+ return x.Update(ctx, sID, map[string]any{"file_url": fileURL})
+}
+
+func (x *signal) UpdateSignalEndTime(ctx context.Context, sID string, endTime time.Time) error {
+ return x.Update(ctx, sID, map[string]any{"end_time": endTime})
+}
+
+func (x *signal) Delete(ctx context.Context, sids []string) error {
+ if len(sids) == 0 {
+ return nil
+ }
+ return mgoutil.DeleteMany(ctx, x.coll, bson.M{"sid": bson.M{"$in": sids}})
+}
+
+func (x *signal) PageSignal(ctx context.Context, sessionType int32, sendID string, startTime, endTime time.Time, pagination pagination.Pagination) (int64, []*table.SignalModel, error) {
+ var and []bson.M
+ if !startTime.IsZero() {
+ and = append(and, bson.M{"initiate_time": bson.M{"$gte": startTime}})
+ }
+ if !endTime.IsZero() {
+ and = append(and, bson.M{"initiate_time": bson.M{"$lte": endTime}})
+ }
+ if sessionType != 0 {
+ and = append(and, bson.M{"session_type": sessionType})
+ }
+ if sendID != "" {
+ and = append(and, bson.M{"inviter_user_id": sendID})
+ }
+ return mgoutil.FindPage[*table.SignalModel](ctx, x.coll, bson.M{"$and": and}, pagination, options.Find().SetSort(bson.M{"initiate_time": -1}))
+}
diff --git a/tools/up35/pkg/internal/rtc/mongo/mgo/signal_invitation.go b/tools/up35/pkg/internal/rtc/mongo/mgo/signal_invitation.go
new file mode 100644
index 000000000..1f76381e9
--- /dev/null
+++ b/tools/up35/pkg/internal/rtc/mongo/mgo/signal_invitation.go
@@ -0,0 +1,94 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package mgo
+
+import (
+ "context"
+ "time"
+
+ "github.com/OpenIMSDK/tools/mgoutil"
+ "github.com/OpenIMSDK/tools/pagination"
+ "github.com/OpenIMSDK/tools/utils"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+
+ "github.com/openimsdk/open-im-server/v3/tools/up35/pkg/internal/rtc/mongo/table"
+)
+
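+// NewSignalInvitation returns the "signal_invitation" collection repository and ensures a unique (sid, user_id) index plus an initiate_time index.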
+func NewSignalInvitation(db *mongo.Database) (table.SignalInvitationInterface, error) {
+ coll := db.Collection("signal_invitation")
+ _, err := coll.Indexes().CreateMany(context.Background(), []mongo.IndexModel{
+ {
+ Keys: bson.D{
+ {Key: "sid", Value: 1},
+ {Key: "user_id", Value: 1},
+ },
+ Options: options.Index().SetUnique(true),
+ },
+ {
+ Keys: bson.D{
+ {Key: "initiate_time", Value: -1},
+ },
+ },
+ })
+ if err != nil {
+ return nil, err
+ }
+ return &signalInvitation{coll: coll}, nil
+}
+
+type signalInvitation struct {
+ coll *mongo.Collection
+}
+
+func (x *signalInvitation) Find(ctx context.Context, sid string) ([]*table.SignalInvitationModel, error) {
+ return mgoutil.Find[*table.SignalInvitationModel](ctx, x.coll, bson.M{"sid": sid})
+}
+
+func (x *signalInvitation) CreateSignalInvitation(ctx context.Context, sid string, inviteeUserIDs []string) error {
+ now := time.Now()
+ return mgoutil.InsertMany(ctx, x.coll, utils.Slice(inviteeUserIDs, func(userID string) *table.SignalInvitationModel {
+ return &table.SignalInvitationModel{
+ UserID: userID,
+ SID: sid,
+ InitiateTime: now,
+ HandleTime: time.Unix(0, 0),
+ }
+ }))
+}
+
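+// HandleSignalInvitation upserts the invitee's status and handle time for the given signal ID.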
+func (x *signalInvitation) HandleSignalInvitation(ctx context.Context, sID, InviteeUserID string, status int32) error {
+ return mgoutil.UpdateOne(ctx, x.coll, bson.M{"sid": sID, "user_id": InviteeUserID}, bson.M{"$set": bson.M{"status": status, "handle_time": time.Now()}}, true)
+}
+
+func (x *signalInvitation) PageSID(ctx context.Context, recvID string, startTime, endTime time.Time, pagination pagination.Pagination) (int64, []string, error) {
+ var and []bson.M
+ and = append(and, bson.M{"user_id": recvID})
+ if !startTime.IsZero() {
+ and = append(and, bson.M{"initiate_time": bson.M{"$gte": startTime}})
+ }
+ if !endTime.IsZero() {
+ and = append(and, bson.M{"initiate_time": bson.M{"$lte": endTime}})
+ }
+ return mgoutil.FindPage[string](ctx, x.coll, bson.M{"$and": and}, pagination, options.Find().SetProjection(bson.M{"_id": 0, "sid": 1}).SetSort(bson.M{"initiate_time": -1}))
+}
+
+func (x *signalInvitation) Delete(ctx context.Context, sids []string) error {
+ if len(sids) == 0 {
+ return nil
+ }
+ return mgoutil.DeleteMany(ctx, x.coll, bson.M{"sid": bson.M{"$in": sids}})
+}
diff --git a/tools/up35/pkg/internal/rtc/mongo/table/meeting.go b/tools/up35/pkg/internal/rtc/mongo/table/meeting.go
new file mode 100644
index 000000000..6ff387bbb
--- /dev/null
+++ b/tools/up35/pkg/internal/rtc/mongo/table/meeting.go
@@ -0,0 +1,66 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package table
+
+import (
+ "context"
+ "time"
+
+ "github.com/OpenIMSDK/tools/pagination"
+)
+
+type MeetingInfo struct {
+ RoomID string `bson:"room_id"`
+ MeetingName string `bson:"meeting_name"`
+ HostUserID string `bson:"host_user_id"`
+ Status int64 `bson:"status"`
+ StartTime time.Time `bson:"start_time"`
+ EndTime time.Time `bson:"end_time"`
+ CreateTime time.Time `bson:"create_time"`
+ Ex string `bson:"ex"`
+}
+
+type MeetingInterface interface {
+ Find(ctx context.Context, roomIDs []string) ([]*MeetingInfo, error)
+ CreateMeetingInfo(ctx context.Context, meetingInfo *MeetingInfo) error
+ UpdateMeetingInfo(ctx context.Context, roomID string, update map[string]any) error
+ GetUnCompleteMeetingIDList(ctx context.Context, roomIDs []string) ([]string, error)
+ Delete(ctx context.Context, roomIDs []string) error
+ GetMeetingRecords(ctx context.Context, hostUserID string, startTime, endTime time.Time, pagination pagination.Pagination) (int64, []*MeetingInfo, error)
+}
+
+type MeetingInvitationInfo struct {
+ RoomID string `bson:"room_id"`
+ UserID string `bson:"user_id"`
+ CreateTime time.Time `bson:"create_time"`
+}
+
+type MeetingInvitationInterface interface {
+ FindUserIDs(ctx context.Context, roomID string) ([]string, error)
+ CreateMeetingInvitationInfo(ctx context.Context, roomID string, inviteeUserIDs []string) error
+ GetUserInvitedMeetingIDs(ctx context.Context, userID string) (meetingIDs []string, err error)
+ Delete(ctx context.Context, roomIDs []string) error
+ GetMeetingRecords(ctx context.Context, joinedUserID string, startTime, endTime time.Time, pagination pagination.Pagination) (int64, []string, error)
+}
+
+type MeetingVideoRecord struct {
+ RoomID string `bson:"room_id"`
+ FileURL string `bson:"file_url"`
+ CreateTime time.Time `bson:"create_time"`
+}
+
+type MeetingRecordInterface interface {
+ CreateMeetingVideoRecord(ctx context.Context, meetingVideoRecord *MeetingVideoRecord) error
+}
diff --git a/tools/up35/pkg/internal/rtc/mongo/table/signal.go b/tools/up35/pkg/internal/rtc/mongo/table/signal.go
new file mode 100644
index 000000000..8d8aa96ed
--- /dev/null
+++ b/tools/up35/pkg/internal/rtc/mongo/table/signal.go
@@ -0,0 +1,88 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package table
+
+import (
+ "context"
+ "time"
+
+ "github.com/OpenIMSDK/tools/errs"
+ "github.com/OpenIMSDK/tools/pagination"
+ "github.com/redis/go-redis/v9"
+ "go.mongodb.org/mongo-driver/mongo"
+)
+
+type SignalModel struct {
+ SID string `bson:"sid"`
+ InviterUserID string `bson:"inviter_user_id"`
+ CustomData string `bson:"custom_data"`
+ GroupID string `bson:"group_id"`
+ RoomID string `bson:"room_id"`
+ Timeout int32 `bson:"timeout"`
+ MediaType string `bson:"media_type"`
+ PlatformID int32 `bson:"platform_id"`
+ SessionType int32 `bson:"session_type"`
+ InitiateTime time.Time `bson:"initiate_time"`
+ EndTime time.Time `bson:"end_time"`
+ FileURL string `bson:"file_url"`
+
+ Title string `bson:"title"`
+ Desc string `bson:"desc"`
+ Ex string `bson:"ex"`
+ IOSPushSound string `bson:"ios_push_sound"`
+ IOSBadgeCount bool `bson:"ios_badge_count"`
+ SignalInfo string `bson:"signal_info"`
+}
+
+type SignalInterface interface {
+ Find(ctx context.Context, sids []string) ([]*SignalModel, error)
+ CreateSignal(ctx context.Context, signalModel *SignalModel) error
+ Update(ctx context.Context, sid string, update map[string]any) error
+ UpdateSignalFileURL(ctx context.Context, sID, fileURL string) error
+ UpdateSignalEndTime(ctx context.Context, sID string, endTime time.Time) error
+ Delete(ctx context.Context, sids []string) error
+ PageSignal(ctx context.Context, sessionType int32, sendID string, startTime, endTime time.Time, pagination pagination.Pagination) (int64, []*SignalModel, error)
+}
+
+type SignalInvitationModel struct {
+ SID string `bson:"sid"`
+ UserID string `bson:"user_id"`
+ Status int32 `bson:"status"`
+ InitiateTime time.Time `bson:"initiate_time"`
+ HandleTime time.Time `bson:"handle_time"`
+}
+
+type SignalInvitationInterface interface {
+ Find(ctx context.Context, sid string) ([]*SignalInvitationModel, error)
+ CreateSignalInvitation(ctx context.Context, sid string, inviteeUserIDs []string) error
+ HandleSignalInvitation(ctx context.Context, sID, InviteeUserID string, status int32) error
+ PageSID(ctx context.Context, recvID string, startTime, endTime time.Time, pagination pagination.Pagination) (int64, []string, error)
+ Delete(ctx context.Context, sids []string) error
+}
+
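+// IsNotFound reports whether the unwrapped error is mongo.ErrNoDocuments or redis.Nil.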
+func IsNotFound(err error) bool {
+ if err == nil {
+ return false
+ }
+ err = errs.Unwrap(err)
+ return err == mongo.ErrNoDocuments || err == redis.Nil
+}
+
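+// IsDuplicate reports whether the unwrapped error is a MongoDB duplicate key error.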
+func IsDuplicate(err error) bool {
+ if err == nil {
+ return false
+ }
+ return mongo.IsDuplicateKeyError(errs.Unwrap(err))
+}
diff --git a/tools/up35/pkg/internal/rtc/mysql/meeting.go b/tools/up35/pkg/internal/rtc/mysql/meeting.go
new file mode 100644
index 000000000..71515c3b7
--- /dev/null
+++ b/tools/up35/pkg/internal/rtc/mysql/meeting.go
@@ -0,0 +1,54 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "time"
+)
+
+type MeetingInfo struct {
+ RoomID string `gorm:"column:room_id;primary_key;size:128;index:room_id;index:status,priority:1"`
+ MeetingName string `gorm:"column:meeting_name;size:64"`
+ HostUserID string `gorm:"column:host_user_id;size:64;index:host_user_id"`
+ Status int64 `gorm:"column:status;index:status,priority:2"`
+ StartTime int64 `gorm:"column:start_time"`
+ EndTime int64 `gorm:"column:end_time"`
+ CreateTime time.Time `gorm:"column:create_time"`
+ Ex string `gorm:"column:ex;size:1024"`
+}
+
+func (MeetingInfo) TableName() string {
+ return "meeting"
+}
+
+type MeetingInvitationInfo struct {
+ RoomID string `gorm:"column:room_id;primary_key;size:128"`
+ UserID string `gorm:"column:user_id;primary_key;size:64;index:user_id"`
+ CreateTime time.Time `gorm:"column:create_time"`
+}
+
+func (MeetingInvitationInfo) TableName() string {
+ return "meeting_invitation"
+}
+
+type MeetingVideoRecord struct {
+ RoomID string `gorm:"column:room_id;size:128"`
+ FileURL string `gorm:"column:file_url"`
+ CreateTime time.Time `gorm:"column:create_time"`
+}
+
+func (MeetingVideoRecord) TableName() string {
+ return "meeting_video_record"
+}
diff --git a/tools/up35/pkg/internal/rtc/mysql/signal.go b/tools/up35/pkg/internal/rtc/mysql/signal.go
new file mode 100644
index 000000000..c546360a4
--- /dev/null
+++ b/tools/up35/pkg/internal/rtc/mysql/signal.go
@@ -0,0 +1,57 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package relation
+
+import (
+ "time"
+)
+
+type SignalModel struct {
+ SID string `gorm:"column:sid;type:char(128);primary_key"`
+ InviterUserID string `gorm:"column:inviter_user_id;type:char(64);index:inviter_user_id_index"`
+ CustomData string `gorm:"column:custom_data;type:text"`
+ GroupID string `gorm:"column:group_id;type:char(64)"`
+ RoomID string `gorm:"column:room_id;primary_key;type:char(128)"`
+ Timeout int32 `gorm:"column:timeout"`
+ MediaType string `gorm:"column:media_type;type:char(64)"`
+ PlatformID int32 `gorm:"column:platform_id"`
+ SessionType int32 `gorm:"column:sesstion_type"`
+ InitiateTime time.Time `gorm:"column:initiate_time"`
+ EndTime time.Time `gorm:"column:end_time"`
+ FileURL string `gorm:"column:file_url" json:"-"`
+
+ Title string `gorm:"column:title;size:128"`
+ Desc string `gorm:"column:desc;size:1024"`
+ Ex string `gorm:"column:ex;size:1024"`
+ IOSPushSound string `gorm:"column:ios_push_sound"`
+ IOSBadgeCount bool `gorm:"column:ios_badge_count"`
+ SignalInfo string `gorm:"column:signal_info;size:1024"`
+}
+
+func (SignalModel) TableName() string {
+ return "signal"
+}
+
+type SignalInvitationModel struct {
+ UserID string `gorm:"column:user_id;primary_key"`
+ SID string `gorm:"column:sid;type:char(128);primary_key"`
+ Status int32 `gorm:"column:status"`
+ InitiateTime time.Time `gorm:"column:initiate_time;primary_key"`
+ HandleTime time.Time `gorm:"column:handle_time"`
+}
+
+func (SignalInvitationModel) TableName() string {
+ return "signal_invitation"
+}
diff --git a/tools/up35/pkg/pkg.go b/tools/up35/pkg/pkg.go
new file mode 100644
index 000000000..b7e7c01f5
--- /dev/null
+++ b/tools/up35/pkg/pkg.go
@@ -0,0 +1,221 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pkg
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "log"
+ "os"
+ "reflect"
+ "strconv"
+
+ "gopkg.in/yaml.v3"
+
+ "github.com/go-sql-driver/mysql"
+ "go.mongodb.org/mongo-driver/bson"
+ "go.mongodb.org/mongo-driver/mongo"
+ "go.mongodb.org/mongo-driver/mongo/options"
+ gormmysql "gorm.io/driver/mysql"
+ "gorm.io/gorm"
+ "gorm.io/gorm/logger"
+
+ "github.com/openimsdk/open-im-server/v3/pkg/common/config"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/mgo"
+ "github.com/openimsdk/open-im-server/v3/pkg/common/db/unrelation"
+ rtcmgo "github.com/openimsdk/open-im-server/v3/tools/up35/pkg/internal/rtc/mongo/mgo"
+)
+
+const (
+ versionTable = "dataver"
+ versionKey = "data_version"
+ versionValue = 35
+)
+
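+// InitConfig reads the YAML config file at path into the global config.Config.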
+func InitConfig(path string) error {
+ data, err := os.ReadFile(path)
+ if err != nil {
+ return err
+ }
+ return yaml.Unmarshal(data, &config.Config)
+}
+
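+// GetMysql opens a gorm connection to the first configured MySQL address with query logging disabled.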
+func GetMysql() (*gorm.DB, error) {
+ conf := config.Config.Mysql
+ mysqlDSN := fmt.Sprintf("%s:%s@tcp(%s)/%s?charset=utf8mb4&parseTime=True&loc=Local", conf.Username, conf.Password, conf.Address[0], conf.Database)
+ return gorm.Open(gormmysql.Open(mysqlDSN), &gorm.Config{Logger: logger.Discard})
+}
+
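+// GetMongo connects to MongoDB via unrelation.NewMongo and returns the configured database handle.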
+func GetMongo() (*mongo.Database, error) {
+ mgo, err := unrelation.NewMongo()
+ if err != nil {
+ return nil, err
+ }
+ return mgo.GetDatabase(), nil
+}
+
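+// Main runs the MySQL-to-MongoDB migration: it is a no-op when MySQL is not configured or the stored data version is already >= 35; otherwise it converts every table and records the new data version.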
+func Main(path string) error {
+ if err := InitConfig(path); err != nil {
+ return err
+ }
+ if config.Config.Mysql == nil {
+ return nil
+ }
+ mongoDB, err := GetMongo()
+ if err != nil {
+ return err
+ }
+ var version struct {
+ Key string `bson:"key"`
+ Value string `bson:"value"`
+ }
+ switch err := mongoDB.Collection(versionTable).FindOne(context.Background(), bson.M{"key": versionKey}).Decode(&version); err {
+ case nil:
+ if ver, _ := strconv.Atoi(version.Value); ver >= versionValue {
+ return nil
+ }
+ case mongo.ErrNoDocuments:
+ default:
+ return err
+ }
+ mysqlDB, err := GetMysql()
+ if err != nil {
+ if mysqlErr, ok := err.(*mysql.MySQLError); ok && mysqlErr.Number == 1049 {
+ if err := SetMongoDataVersion(mongoDB, version.Value); err != nil {
+ return err
+ }
+ return nil // database does not exist
+ }
+ return err
+ }
+
+ var c convert
+ var tasks []func() error
+ tasks = append(tasks,
+ func() error { return NewTask(mysqlDB, mongoDB, mgo.NewUserMongo, c.User) },
+ func() error { return NewTask(mysqlDB, mongoDB, mgo.NewFriendMongo, c.Friend) },
+ func() error { return NewTask(mysqlDB, mongoDB, mgo.NewFriendRequestMongo, c.FriendRequest) },
+ func() error { return NewTask(mysqlDB, mongoDB, mgo.NewBlackMongo, c.Black) },
+ func() error { return NewTask(mysqlDB, mongoDB, mgo.NewGroupMongo, c.Group) },
+ func() error { return NewTask(mysqlDB, mongoDB, mgo.NewGroupMember, c.GroupMember) },
+ func() error { return NewTask(mysqlDB, mongoDB, mgo.NewGroupRequestMgo, c.GroupRequest) },
+ func() error { return NewTask(mysqlDB, mongoDB, mgo.NewConversationMongo, c.Conversation) },
+ func() error { return NewTask(mysqlDB, mongoDB, mgo.NewS3Mongo, c.Object(config.Config.Object.Enable)) },
+ func() error { return NewTask(mysqlDB, mongoDB, mgo.NewLogMongo, c.Log) },
+
+ func() error { return NewTask(mysqlDB, mongoDB, rtcmgo.NewSignal, c.SignalModel) },
+ func() error { return NewTask(mysqlDB, mongoDB, rtcmgo.NewSignalInvitation, c.SignalInvitationModel) },
+ func() error { return NewTask(mysqlDB, mongoDB, rtcmgo.NewMeeting, c.Meeting) },
+ func() error { return NewTask(mysqlDB, mongoDB, rtcmgo.NewMeetingInvitation, c.MeetingInvitationInfo) },
+ func() error { return NewTask(mysqlDB, mongoDB, rtcmgo.NewMeetingRecord, c.MeetingVideoRecord) },
+ )
+
+ for _, task := range tasks {
+ if err := task(); err != nil {
+ return err
+ }
+ }
+
+ if err := SetMongoDataVersion(mongoDB, version.Value); err != nil {
+ return err
+ }
+ return nil
+}
+
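+// SetMongoDataVersion upserts the data_version document, replacing curver with the current versionValue.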
+func SetMongoDataVersion(db *mongo.Database, curver string) error {
+ filter := bson.M{"key": versionKey, "value": curver}
+ update := bson.M{"$set": bson.M{"key": versionKey, "value": strconv.Itoa(versionValue)}}
+ _, err := db.Collection(versionTable).UpdateOne(context.Background(), filter, update, options.Update().SetUpsert(true))
+ return err
+}
+
+// NewTask migrates one table: A is the MySQL model (provides TableName), B is the MongoDB repository returned by mongoDBInit, and C is the MongoDB document type produced by convert.
+func NewTask[A interface{ TableName() string }, B any, C any](gormDB *gorm.DB, mongoDB *mongo.Database, mongoDBInit func(db *mongo.Database) (B, error), convert func(v A) C) error {
+ obj, err := mongoDBInit(mongoDB)
+ if err != nil {
+ return err
+ }
+ var zero A
+ tableName := zero.TableName()
+ coll, err := getColl(obj)
+ if err != nil {
+ return fmt.Errorf("get mongo collection %s failed, err: %w", tableName, err)
+ }
+ var count int
+ defer func() {
+ log.Printf("completed convert %s total %d\n", tableName, count)
+ }()
+ const batch = 100
+ for page := 0; ; page++ {
+ res := make([]A, 0, batch)
+ if err := gormDB.Limit(batch).Offset(page * batch).Find(&res).Error; err != nil {
+ if mysqlErr, ok := err.(*mysql.MySQLError); ok && mysqlErr.Number == 1146 {
+ return nil // table does not exist
+ }
+ return fmt.Errorf("find mysql table %s failed, err: %w", tableName, err)
+ }
+ if len(res) == 0 {
+ return nil
+ }
+ temp := make([]any, len(res))
+ for i := range res {
+ temp[i] = convert(res[i])
+ }
+ if err := insertMany(coll, temp); err != nil {
+ return fmt.Errorf("insert mongo table %s failed, err: %w", tableName, err)
+ }
+ count += len(res)
+ if len(res) < batch {
+ return nil
+ }
+ log.Printf("current convert %s completed %d\n", tableName, count)
+ }
+}
+
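+// insertMany bulk-inserts objs; if the batch hits a duplicate key error, it retries the documents one by one, skipping duplicates.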
+func insertMany(coll *mongo.Collection, objs []any) error {
+ _, err := coll.InsertMany(context.Background(), objs)
+ if err == nil {
+ return nil
+ }
+ if !mongo.IsDuplicateKeyError(err) {
+ return err
+ }
+ for i := range objs {
+ _, err := coll.InsertOne(context.Background(), objs[i])
+ switch {
+ case err == nil:
+ case mongo.IsDuplicateKeyError(err):
+ default:
+ return err
+ }
+ }
+ return nil
+}
+
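+// getColl extracts the unexported *mongo.Collection field from a repository struct via reflection.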
+func getColl(obj any) (_ *mongo.Collection, err error) {
+ defer func() {
+ if e := recover(); e != nil {
+ err = fmt.Errorf("not found %+v", e)
+ }
+ }()
+ stu := reflect.ValueOf(obj).Elem()
+ typ := reflect.TypeOf(&mongo.Collection{}).String()
+ for i := 0; i < stu.NumField(); i++ {
+ field := stu.Field(i)
+ if field.Type().String() == typ {
+ return (*mongo.Collection)(field.UnsafePointer()), nil
+ }
+ }
+ return nil, errors.New("not found")
+}
diff --git a/pkg/common/db/controller/chatlog.go b/tools/up35/up35.go
similarity index 52%
rename from pkg/common/db/controller/chatlog.go
rename to tools/up35/up35.go
index def490265..63b6ec2c2 100644
--- a/pkg/common/db/controller/chatlog.go
+++ b/tools/up35/up35.go
@@ -12,26 +12,23 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-package controller
+package main
import (
- pbmsg "github.com/OpenIMSDK/protocol/msg"
+ "flag"
+ "log"
+ "os"
- relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/db/table/relation"
+ "github.com/openimsdk/open-im-server/v3/tools/up35/pkg"
)
-type ChatLogDatabase interface {
- CreateChatLog(msg *pbmsg.MsgDataToMQ) error
-}
-
-func NewChatLogDatabase(chatLogModelInterface relationtb.ChatLogModelInterface) ChatLogDatabase {
- return &chatLogDatabase{chatLogModel: chatLogModelInterface}
-}
-
-type chatLogDatabase struct {
- chatLogModel relationtb.ChatLogModelInterface
-}
-
-func (c *chatLogDatabase) CreateChatLog(msg *pbmsg.MsgDataToMQ) error {
- return c.chatLogModel.Create(msg)
+func main() {
+ var path string
+ flag.StringVar(&path, "c", "", "path config file")
+ flag.Parse()
+ if err := pkg.Main(path); err != nil {
+ log.Fatal(err)
+ }
+ os.Exit(0)
}
diff --git a/tools/url2im/go.mod b/tools/url2im/go.mod
index 236af91c7..b6011909d 100644
--- a/tools/url2im/go.mod
+++ b/tools/url2im/go.mod
@@ -15,6 +15,6 @@ require (
golang.org/x/sys v0.13.0 // indirect
golang.org/x/text v0.13.0 // indirect
google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect
- google.golang.org/grpc v1.56.2 // indirect
+ google.golang.org/grpc v1.56.3 // indirect
google.golang.org/protobuf v1.31.0 // indirect
)
diff --git a/tools/url2im/go.sum b/tools/url2im/go.sum
index 071d9c3aa..1970dce2c 100644
--- a/tools/url2im/go.sum
+++ b/tools/url2im/go.sum
@@ -24,8 +24,8 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 h1:KpwkzHKEF7B9Zxg18WzOa7djJ+Ha5DzthMyZYQfEn2A=
google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1/go.mod h1:nKE/iIaLqn2bQwXBg8f1g2Ylh6r5MN5CmZvuzZCgsCU=
-google.golang.org/grpc v1.56.2 h1:fVRFRnXvU+x6C4IlHZewvJOVHoOv1TUuQyoRsYnB4bI=
-google.golang.org/grpc v1.56.2/go.mod h1:I9bI3vqKfayGqPUAwGdOSu7kt6oIJLixfffKrpXqQ9s=
+google.golang.org/grpc v1.56.3 h1:8I4C0Yq1EjstUzUJzpcRVbuYA2mODtEmpWiQoN/b2nc=
+google.golang.org/grpc v1.56.3/go.mod h1:I9bI3vqKfayGqPUAwGdOSu7kt6oIJLixfffKrpXqQ9s=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
diff --git a/tools/url2im/main.go b/tools/url2im/main.go
index ee159b5e8..8d6151b09 100644
--- a/tools/url2im/main.go
+++ b/tools/url2im/main.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package main
import (
diff --git a/tools/url2im/pkg/api.go b/tools/url2im/pkg/api.go
index 1fc3813bb..7575b078a 100644
--- a/tools/url2im/pkg/api.go
+++ b/tools/url2im/pkg/api.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package pkg
import (
diff --git a/tools/url2im/pkg/buffer.go b/tools/url2im/pkg/buffer.go
index 8ccc5c52f..b4c104652 100644
--- a/tools/url2im/pkg/buffer.go
+++ b/tools/url2im/pkg/buffer.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package pkg
import (
diff --git a/tools/url2im/pkg/config.go b/tools/url2im/pkg/config.go
index 020395262..740e748cb 100644
--- a/tools/url2im/pkg/config.go
+++ b/tools/url2im/pkg/config.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package pkg
import "time"
diff --git a/tools/url2im/pkg/http.go b/tools/url2im/pkg/http.go
index 32e128524..50fb30c7b 100644
--- a/tools/url2im/pkg/http.go
+++ b/tools/url2im/pkg/http.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package pkg
import "net/http"
diff --git a/tools/url2im/pkg/manage.go b/tools/url2im/pkg/manage.go
index a68078f85..70c6713fc 100644
--- a/tools/url2im/pkg/manage.go
+++ b/tools/url2im/pkg/manage.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package pkg
import (
diff --git a/tools/url2im/pkg/md5.go b/tools/url2im/pkg/md5.go
index 0db5ba000..26b8d47a2 100644
--- a/tools/url2im/pkg/md5.go
+++ b/tools/url2im/pkg/md5.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package pkg
import (
diff --git a/tools/url2im/pkg/progress.go b/tools/url2im/pkg/progress.go
index 2d6ef3891..5f30495c6 100644
--- a/tools/url2im/pkg/progress.go
+++ b/tools/url2im/pkg/progress.go
@@ -1,3 +1,17 @@
+// Copyright © 2023 OpenIM. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
package pkg
import (
diff --git a/tools/yamlfmt/OWNERS b/tools/yamlfmt/OWNERS
deleted file mode 100644
index b7a5428e7..000000000
--- a/tools/yamlfmt/OWNERS
+++ /dev/null
@@ -1,10 +0,0 @@
-# See the OWNERS docs at https://go.k8s.io/owners
-
-reviewers:
- - cubxxw
- - kubbot
-approvers:
- - cubxxw
-labels:
- - sig/testing
- - sig/contributor-experience
\ No newline at end of file