merge: merge main code into js branch. (#2648)
* feat: update group notification when set to null. (#2590)
* refactor: refactor workflows contents.
* add tool workflows.
* update field.
* fix: remove chat error.
* Fix err.
* fix error.
* remove cn comment.
* update workflows files.
* update infra config.
* move workflows.
* feat: update bot.
* fix: solve incorrect outdated msg get.
* update get docIDs logic.
* update
* update skip logic.
* fix
* update.
* fix: delay deleteObject func.
* remove unused content.
* feat: update group notification when set to null.
* update log standard.
* feat: add long time push msg in prometheus (#2584)
* feat: add long time push msg in prometheus
* fix: log print
* fix: go mod
* fix: log msg
* fix: log init
* feat: push msg
* feat: go mod, remove cgo package
* feat: remove error log
* feat: test dummy push
* feat: redis pool config
* feat: push to kafka log
* feat: supports getting messages based on session ID and seq (#2582)
* fix: GroupApplicationAcceptedNotification
* fix: GroupApplicationAcceptedNotification
* fix: NotificationUserInfoUpdate
* cicd: robot automated Change
* fix: component
* fix: getConversationInfo
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* fix: minio config url recognition error
* update gomake version
* update gomake version
* fix: seq conversion bug
* fix: redis pipe exec
* fix: ImportFriends
* fix: A large number of logs keysAndValues length is not even
* feat: mark read aggregate write
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* merge
* merge
* read seq is written to mongo
* read seq is written to mongo
* fix: invitation to join group notification
* fix: friend op_user_id
* feat: optimizing asynchronous context
* feat: optimizing memamq size
* feat: add GetSeqMessage
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: go.mod
* feat: go.mod
* feat: join group notification and get seq
---------
Co-authored-by: withchao <withchao@users.noreply.github.com>
* feat: implement request batch count limit. (#2591)
* refactor: refactor workflows contents.
* add tool workflows.
* update field.
* fix: remove chat error.
* Fix err.
* fix error.
* remove cn comment.
* update workflows files.
* update infra config.
* move workflows.
* feat: update bot.
* fix: solve incorrect outdated msg get.
* update get docIDs logic.
* update
* update skip logic.
* fix
* update.
* fix: delay deleteObject func.
* remove unused content.
* update log type.
* feat: implement request batch count limit.
* update
* update
* fix: getting messages based on session ID and seq (#2595)
* fix: GroupApplicationAcceptedNotification
* fix: GroupApplicationAcceptedNotification
* fix: NotificationUserInfoUpdate
* cicd: robot automated Change
* fix: component
* fix: getConversationInfo
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* fix: minio config url recognition error
* update gomake version
* update gomake version
* fix: seq conversion bug
* fix: redis pipe exec
* fix: ImportFriends
* fix: A large number of logs keysAndValues length is not even
* feat: mark read aggregate write
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* merge
* merge
* read seq is written to mongo
* read seq is written to mongo
* fix: invitation to join group notification
* fix: friend op_user_id
* feat: optimizing asynchronous context
* feat: optimizing memamq size
* feat: add GetSeqMessage
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: go.mod
* feat: go.mod
* feat: join group notification and get seq
* feat: join group notification and get seq
---------
Co-authored-by: withchao <withchao@users.noreply.github.com>
* feat: avoid pulling messages from sessions with a large number of max seq values of 0 (#2602)
* fix: GroupApplicationAcceptedNotification
* fix: GroupApplicationAcceptedNotification
* fix: NotificationUserInfoUpdate
* cicd: robot automated Change
* fix: component
* fix: getConversationInfo
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* fix: minio config url recognition error
* update gomake version
* update gomake version
* fix: seq conversion bug
* fix: redis pipe exec
* fix: ImportFriends
* fix: A large number of logs keysAndValues length is not even
* feat: mark read aggregate write
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* merge
* merge
* read seq is written to mongo
* read seq is written to mongo
* fix: invitation to join group notification
* fix: friend op_user_id
* feat: optimizing asynchronous context
* feat: optimizing memamq size
* feat: add GetSeqMessage
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: go.mod
* feat: go.mod
* feat: join group notification and get seq
* feat: join group notification and get seq
* feat: avoid pulling messages from sessions with a large number of max seq values of 0
---------
Co-authored-by: withchao <withchao@users.noreply.github.com>
* refactor: improve db structure in `storage/controller` (#2604)
* refactor: refactor workflows contents.
* add tool workflows.
* update field.
* fix: remove chat error.
* Fix err.
* fix error.
* remove cn comment.
* update workflows files.
* update infra config.
* move workflows.
* feat: update bot.
* fix: solve incorrect outdated msg get.
* update get docIDs logic.
* update
* update skip logic.
* fix
* update.
* fix: delay deleteObject func.
* remove unused content.
* update log type.
* feat: implement request batch count limit.
* update
* update
* refactor: improve db structure in `storage/controller`
* feat: implement offline push using kafka (#2600)
* refactor: refactor workflows contents.
* add tool workflows.
* update field.
* fix: remove chat error.
* Fix err.
* fix error.
* remove cn comment.
* update workflows files.
* update infra config.
* move workflows.
* feat: update bot.
* fix: solve incorrect outdated msg get.
* update get docIDs logic.
* update
* update skip logic.
* fix
* update.
* fix: delay deleteObject func.
* remove unused content.
* update log type.
* feat: implement request batch count limit.
* update
* update
* feat: implement offline push.
* feat: implement batch Push split
* update go mod
* feat: implement kafka producer and consumer.
* update format,
* add PushMQ log.
* feat: update Handler logic.
* update MQ logic.
* update
* update
* fix: update OfflinePushConsumerHandler.
* feat: API supports gzip (#2609)
* fix: GroupApplicationAcceptedNotification
* fix: GroupApplicationAcceptedNotification
* fix: NotificationUserInfoUpdate
* cicd: robot automated Change
* fix: component
* fix: getConversationInfo
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* fix: minio config url recognition error
* update gomake version
* update gomake version
* fix: seq conversion bug
* fix: redis pipe exec
* fix: ImportFriends
* fix: A large number of logs keysAndValues length is not even
* feat: mark read aggregate write
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* merge
* merge
* read seq is written to mongo
* read seq is written to mongo
* fix: invitation to join group notification
* fix: friend op_user_id
* feat: optimizing asynchronous context
* feat: optimizing memamq size
* feat: add GetSeqMessage
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: go.mod
* feat: go.mod
* feat: join group notification and get seq
* feat: join group notification and get seq
* feat: avoid pulling messages from sessions with a large number of max seq values of 0
* feat: API supports gzip
---------
Co-authored-by: withchao <withchao@users.noreply.github.com>
* Fix err (#2608)
* refactor: refactor workflows contents.
* add tool workflows.
* update field.
* fix: remove chat error.
* Fix err.
* fix error.
* remove cn comment.
* update workflows files.
* update infra config.
* move workflows.
* feat: update bot.
* fix: solve incorrect outdated msg get.
* update get docIDs logic.
* update
* update skip logic.
* fix
* update.
* fix: delay deleteObject func.
* remove unused content.
* update log type.
* feat: implement request batch count limit.
* update
* update
* feat: add rocksTimeout
* feat: wrap logs
* feat: add logs
* feat: listen config
* feat: enable listen TIME_WAIT port
* feat: add logs
* feat: cache batch
* chore: enable fullUserCache
* feat: push rpc num
* feat: push err
* feat: with operationID
* feat: sleep
* feat: change 1s
* feat: change log
* feat: implement Getbatch in rpcCache.
* feat: print getOnline cost
* feat: change log
* feat: change kafka and push config
* feat: del interface
* feat: fix err
* feat: change config
* feat: go mod
* feat: change config
* feat: change config
* feat: add sleep in push
* feat: warn logs
* feat: logs
* feat: logs
* feat: change port
* feat: start config
* feat: remove port reuse
* feat: prometheus config
* feat: prometheus config
* feat: prometheus config
* feat: add long time send msg to grafana
* feat: init
* feat: init
* feat: implement offline push.
* feat: batch get user online
* feat: implement batch Push split
* update go mod
* Revert "feat: change port"
This reverts commit 06d5e944
* feat: change port
* feat: change config
* feat: implement kafka producer and consumer.
* update format,
* add PushMQ log.
* feat: get all online users and init push
* feat: lock in online cache
* feat: config
* fix: init online status
* fix: add logs
* fix: userIDs
* fix: add logs
* feat: update Handler logic.
* update MQ logic.
* update
* update
* fix: method name
* fix: update OfflinePushConsumerHandler.
* fix: prommetrics
* fix: add logs
* fix: ctx
* fix: log
* fix: config
* feat: change port
* fix: atomic online cache status
---------
Co-authored-by: Monet Lee <monet_lee@163.com>
* feature: add GetConversationsHasReadAndMaxSeq interface to the WebSocket API. (#2611)
* fix: lru lock (#2613)
* fix: lru lock
* fix: lru lock
* fix: lru lock
* fix: nil pointer error on close (#2618)
* fix: GroupApplicationAcceptedNotification
* fix: GroupApplicationAcceptedNotification
* fix: NotificationUserInfoUpdate
* cicd: robot automated Change
* fix: component
* fix: getConversationInfo
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* fix: minio config url recognition error
* update gomake version
* update gomake version
* fix: seq conversion bug
* fix: redis pipe exec
* fix: ImportFriends
* fix: A large number of logs keysAndValues length is not even
* feat: mark read aggregate write
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* merge
* merge
* read seq is written to mongo
* read seq is written to mongo
* fix: invitation to join group notification
* fix: friend op_user_id
* feat: optimizing asynchronous context
* feat: optimizing memamq size
* feat: add GetSeqMessage
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: go.mod
* feat: go.mod
* feat: join group notification and get seq
* feat: join group notification and get seq
* feat: avoid pulling messages from sessions with a large number of max seq values of 0
* feat: API supports gzip
* go.mod
* fix: nil pointer error on close
---------
Co-authored-by: withchao <withchao@users.noreply.github.com>
* feat: create group can push notification (#2617)
* fix: blockage caused by listen error (#2620)
* fix: GroupApplicationAcceptedNotification
* fix: GroupApplicationAcceptedNotification
* fix: NotificationUserInfoUpdate
* cicd: robot automated Change
* fix: component
* fix: getConversationInfo
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* fix: minio config url recognition error
* update gomake version
* update gomake version
* fix: seq conversion bug
* fix: redis pipe exec
* fix: ImportFriends
* fix: A large number of logs keysAndValues length is not even
* feat: mark read aggregate write
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* merge
* merge
* read seq is written to mongo
* read seq is written to mongo
* fix: invitation to join group notification
* fix: friend op_user_id
* feat: optimizing asynchronous context
* feat: optimizing memamq size
* feat: add GetSeqMessage
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: go.mod
* feat: go.mod
* feat: join group notification and get seq
* feat: join group notification and get seq
* feat: avoid pulling messages from sessions with a large number of max seq values of 0
* feat: API supports gzip
* go.mod
* fix: nil pointer error on close
* fix: listen error
---------
Co-authored-by: withchao <withchao@users.noreply.github.com>
* fix: go.mod (#2621)
* fix: GroupApplicationAcceptedNotification
* fix: GroupApplicationAcceptedNotification
* fix: NotificationUserInfoUpdate
* cicd: robot automated Change
* fix: component
* fix: getConversationInfo
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* fix: minio config url recognition error
* update gomake version
* update gomake version
* fix: seq conversion bug
* fix: redis pipe exec
* fix: ImportFriends
* fix: A large number of logs keysAndValues length is not even
* feat: mark read aggregate write
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* merge
* merge
* read seq is written to mongo
* read seq is written to mongo
* fix: invitation to join group notification
* fix: friend op_user_id
* feat: optimizing asynchronous context
* feat: optimizing memamq size
* feat: add GetSeqMessage
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: go.mod
* feat: go.mod
* feat: join group notification and get seq
* feat: join group notification and get seq
* feat: avoid pulling messages from sessions with a large number of max seq values of 0
* feat: API supports gzip
* go.mod
* fix: nil pointer error on close
* fix: listen error
* fix: listen error
* update go.mod
---------
Co-authored-by: withchao <withchao@users.noreply.github.com>
* feat: improve searchMsg implement. (#2614)
* refactor: refactor workflows contents.
* add tool workflows.
* update field.
* fix: remove chat error.
* Fix err.
* fix error.
* remove cn comment.
* update workflows files.
* update infra config.
* move workflows.
* feat: update bot.
* fix: solve incorrect outdated msg get.
* update get docIDs logic.
* update
* update skip logic.
* fix
* update.
* fix: delay deleteObject func.
* remove unused content.
* update log type.
* feat: implement request batch count limit.
* update
* update
* remove unused script.
* feat: improve searchMsg implement.
* update mongo config.
* Fix lock (#2622)
* fix:log
* fix: lock
* fix: update setGroupInfoEX field name. (#2625)
* refactor: refactor workflows contents.
* add tool workflows.
* update field.
* fix: remove chat error.
* Fix err.
* fix error.
* remove cn comment.
* update workflows files.
* update infra config.
* move workflows.
* feat: update bot.
* fix: solve incorrect outdated msg get.
* update get docIDs logic.
* update
* update skip logic.
* fix
* update.
* fix: delay deleteObject func.
* remove unused content.
* update log type.
* feat: implement request batch count limit.
* update
* update
* fix: update setGroupInfoEX field name.
* fix: update setGroupInfoEX field name (#2626)
* refactor: refactor workflows contents.
* add tool workflows.
* update field.
* fix: remove chat error.
* Fix err.
* fix error.
* remove cn comment.
* update workflows files.
* update infra config.
* move workflows.
* feat: update bot.
* fix: solve incorrect outdated msg get.
* update get docIDs logic.
* update
* update skip logic.
* fix
* update.
* fix: delay deleteObject func.
* remove unused content.
* update log type.
* feat: implement request batch count limit.
* update
* update
* fix: update setGroupInfoEX field name.
* fix: update setGroupInfoEX field name
* feat: msg gateway add log (#2631)
* fix: GroupApplicationAcceptedNotification
* fix: GroupApplicationAcceptedNotification
* fix: NotificationUserInfoUpdate
* cicd: robot automated Change
* fix: component
* fix: getConversationInfo
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* fix: minio config url recognition error
* update gomake version
* update gomake version
* fix: seq conversion bug
* fix: redis pipe exec
* fix: ImportFriends
* fix: A large number of logs keysAndValues length is not even
* feat: mark read aggregate write
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* merge
* merge
* read seq is written to mongo
* read seq is written to mongo
* fix: invitation to join group notification
* fix: friend op_user_id
* feat: optimizing asynchronous context
* feat: optimizing memamq size
* feat: add GetSeqMessage
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: go.mod
* feat: go.mod
* feat: join group notification and get seq
* feat: join group notification and get seq
* feat: avoid pulling messages from sessions with a large number of max seq values of 0
* feat: API supports gzip
* go.mod
* fix: nil pointer error on close
* fix: listen error
* fix: listen error
* update go.mod
* feat: add log
---------
Co-authored-by: withchao <withchao@users.noreply.github.com>
* fix: update setGroupInfoEx func name and field. (#2634)
* refactor: refactor workflows contents.
* add tool workflows.
* update field.
* fix: remove chat error.
* Fix err.
* fix error.
* remove cn comment.
* update workflows files.
* update infra config.
* move workflows.
* feat: update bot.
* fix: solve incorrect outdated msg get.
* update get docIDs logic.
* update
* update skip logic.
* fix
* update.
* fix: delay deleteObject func.
* remove unused content.
* update log type.
* feat: implement request batch count limit.
* update
* update
* fix: update setGroupInfoEx func name and field.
* refactor: update groupinfoEx field.
* refactor: update database name in mongodb.yml
* add groupName Condition
* fix: fix setConversations req fill. (#2645)
* refactor: refactor workflows contents.
* add tool workflows.
* update field.
* fix: remove chat error.
* Fix err.
* fix error.
* remove cn comment.
* update workflows files.
* update infra config.
* move workflows.
* feat: update bot.
* fix: solve incorrect outdated msg get.
* update get docIDs logic.
* update
* update skip logic.
* fix
* update.
* fix: delay deleteObject func.
* remove unused content.
* update log type.
* feat: implement request batch count limit.
* update
* update
* fix: fix setConversations req fill.
* fix: GetMsgBySeqs boundary issues (#2647)
* fix: GroupApplicationAcceptedNotification
* fix: GroupApplicationAcceptedNotification
* fix: NotificationUserInfoUpdate
* cicd: robot automated Change
* fix: component
* fix: getConversationInfo
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* feat: cron task
* fix: minio config url recognition error
* update gomake version
* update gomake version
* fix: seq conversion bug
* fix: redis pipe exec
* fix: ImportFriends
* fix: A large number of logs keysAndValues length is not even
* feat: mark read aggregate write
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* feat: online status supports redis cluster
* merge
* merge
* read seq is written to mongo
* read seq is written to mongo
* fix: invitation to join group notification
* fix: friend op_user_id
* feat: optimizing asynchronous context
* feat: optimizing memamq size
* feat: add GetSeqMessage
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: GroupApplicationAgreeMemberEnterNotification
* feat: go.mod
* feat: go.mod
* feat: join group notification and get seq
* feat: join group notification and get seq
* feat: avoid pulling messages from sessions with a large number of max seq values of 0
* feat: API supports gzip
* go.mod
* fix: nil pointer error on close
* fix: listen error
* fix: listen error
* update go.mod
* feat: add log
* fix: token parse token value
* fix: GetMsgBySeqs boundary issues
---------
Co-authored-by: withchao <withchao@users.noreply.github.com>
* fix: the attribute version is obsolete, remove it (#2644)
* refactor: update UserRegister request field. (#2650)
---------
Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: 蔡相跃 <caixiangyue007@gmail.com>
@@ -0,0 +1,122 @@
package push

import (
	"context"

	"github.com/IBM/sarama"
	"github.com/openimsdk/open-im-server/v3/internal/push/offlinepush"
	"github.com/openimsdk/open-im-server/v3/internal/push/offlinepush/options"
	"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
	"github.com/openimsdk/protocol/constant"
	pbpush "github.com/openimsdk/protocol/push"
	"github.com/openimsdk/protocol/sdkws"
	"github.com/openimsdk/tools/errs"
	"github.com/openimsdk/tools/log"
	"github.com/openimsdk/tools/mq/kafka"
	"github.com/openimsdk/tools/utils/jsonutil"
	"google.golang.org/protobuf/proto"
)

type OfflinePushConsumerHandler struct {
	OfflinePushConsumerGroup *kafka.MConsumerGroup
	offlinePusher            offlinepush.OfflinePusher
}

func NewOfflinePushConsumerHandler(config *Config, offlinePusher offlinepush.OfflinePusher) (*OfflinePushConsumerHandler, error) {
	var offlinePushConsumerHandler OfflinePushConsumerHandler
	var err error
	offlinePushConsumerHandler.offlinePusher = offlinePusher
	offlinePushConsumerHandler.OfflinePushConsumerGroup, err = kafka.NewMConsumerGroup(config.KafkaConfig.Build(), config.KafkaConfig.ToOfflineGroupID,
		[]string{config.KafkaConfig.ToOfflinePushTopic}, true)
	if err != nil {
		return nil, err
	}
	return &offlinePushConsumerHandler, nil
}

func (*OfflinePushConsumerHandler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (*OfflinePushConsumerHandler) Cleanup(sarama.ConsumerGroupSession) error { return nil }

func (o *OfflinePushConsumerHandler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		ctx := o.OfflinePushConsumerGroup.GetContextFromMsg(msg)
		o.handleMsg2OfflinePush(ctx, msg.Value)
		sess.MarkMessage(msg, "")
	}
	return nil
}

func (o *OfflinePushConsumerHandler) handleMsg2OfflinePush(ctx context.Context, msg []byte) {
	offlinePushMsg := pbpush.PushMsgReq{}
	if err := proto.Unmarshal(msg, &offlinePushMsg); err != nil {
		log.ZError(ctx, "offline push Unmarshal msg err", err, "msg", string(msg))
		return
	}
	if offlinePushMsg.MsgData == nil || offlinePushMsg.UserIDs == nil {
		log.ZError(ctx, "offline push msg is empty", errs.New("offlinePushMsg is empty"), "userIDs", offlinePushMsg.UserIDs, "msg", offlinePushMsg.MsgData)
		return
	}
	log.ZInfo(ctx, "receive to OfflinePush MQ", "userIDs", offlinePushMsg.UserIDs, "msg", offlinePushMsg.MsgData)

	err := o.offlinePushMsg(ctx, offlinePushMsg.MsgData, offlinePushMsg.UserIDs)
	if err != nil {
		log.ZWarn(ctx, "offline push failed", err, "msg", offlinePushMsg.String())
	}
}

func (o *OfflinePushConsumerHandler) getOfflinePushInfos(msg *sdkws.MsgData) (title, content string, opts *options.Opts, err error) {
	type AtTextElem struct {
		Text       string   `json:"text,omitempty"`
		AtUserList []string `json:"atUserList,omitempty"`
		IsAtSelf   bool     `json:"isAtSelf"`
	}

	opts = &options.Opts{Signal: &options.Signal{}}
	if msg.OfflinePushInfo != nil {
		opts.IOSBadgeCount = msg.OfflinePushInfo.IOSBadgeCount
		opts.IOSPushSound = msg.OfflinePushInfo.IOSPushSound
		opts.Ex = msg.OfflinePushInfo.Ex
	}

	if msg.OfflinePushInfo != nil {
		title = msg.OfflinePushInfo.Title
		content = msg.OfflinePushInfo.Desc
	}
	if title == "" {
		switch msg.ContentType {
		case constant.Text:
			fallthrough
		case constant.Picture:
			fallthrough
		case constant.Voice:
			fallthrough
		case constant.Video:
			fallthrough
		case constant.File:
			title = constant.ContentType2PushContent[int64(msg.ContentType)]
		case constant.AtText:
			ac := AtTextElem{}
			_ = jsonutil.JsonStringToStruct(string(msg.Content), &ac)
		case constant.SignalingNotification:
			title = constant.ContentType2PushContent[constant.SignalMsg]
		default:
			title = constant.ContentType2PushContent[constant.Common]
		}
	}
	if content == "" {
		content = title
	}
	return
}

func (o *OfflinePushConsumerHandler) offlinePushMsg(ctx context.Context, msg *sdkws.MsgData, offlinePushUserIDs []string) error {
	title, content, opts, err := o.getOfflinePushInfos(msg)
	if err != nil {
		return err
	}
	err = o.offlinePusher.Push(ctx, offlinePushUserIDs, title, content, opts)
	if err != nil {
		prommetrics.MsgOfflinePushFailedCounter.Inc()
		return err
	}
	return nil
}
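The handler above is only the consuming side. The producing side uses the same SendMessage call that the transfer database below uses for its topics, so a minimal sketch of enqueueing work for this handler looks as follows. It assumes a *kafka.Producer already built for the ToOfflinePushTopic; the helper name queueOfflinePush and the key choice are hypothetical, and only the MsgData and UserIDs fields that handleMsg2OfflinePush reads are set.

package push

import (
	"context"

	pbpush "github.com/openimsdk/protocol/push"
	"github.com/openimsdk/protocol/sdkws"
	"github.com/openimsdk/tools/mq/kafka"
)

// queueOfflinePush is a hypothetical sketch: it publishes a PushMsgReq to the
// offline-push topic, which OfflinePushConsumerHandler later proto.Unmarshals.
func queueOfflinePush(ctx context.Context, producer *kafka.Producer, userIDs []string, msg *sdkws.MsgData) error {
	key := msg.ClientMsgID // key choice is an assumption; any stable per-message key works
	_, _, err := producer.SendMessage(ctx, key, &pbpush.PushMsgReq{MsgData: msg, UserIDs: userIDs})
	return err
}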
@@ -0,0 +1,286 @@
package controller

import (
	"context"

	"github.com/openimsdk/open-im-server/v3/pkg/common/config"
	"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache"
	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/database"
	"github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
	pbmsg "github.com/openimsdk/protocol/msg"
	"github.com/openimsdk/protocol/sdkws"
	"github.com/openimsdk/tools/errs"
	"github.com/openimsdk/tools/log"
	"github.com/openimsdk/tools/mq/kafka"
	"go.mongodb.org/mongo-driver/mongo"
)

type MsgTransferDatabase interface {
	// BatchInsertChat2DB inserts a batch of messages into the database for a specific conversation.
	BatchInsertChat2DB(ctx context.Context, conversationID string, msgs []*sdkws.MsgData, currentMaxSeq int64) error
	// DeleteMessagesFromCache deletes message caches from Redis by sequence numbers.
	DeleteMessagesFromCache(ctx context.Context, conversationID string, seqs []int64) error

	// BatchInsertChat2Cache increments the sequence number and then batch inserts messages into the cache.
	BatchInsertChat2Cache(ctx context.Context, conversationID string, msgs []*sdkws.MsgData) (seq int64, isNewConversation bool, err error)
	SetHasReadSeqToDB(ctx context.Context, userID string, conversationID string, hasReadSeq int64) error

	// to mq
	MsgToPushMQ(ctx context.Context, key, conversationID string, msg2mq *sdkws.MsgData) (int32, int64, error)
	MsgToMongoMQ(ctx context.Context, key, conversationID string, msgs []*sdkws.MsgData, lastSeq int64) error
}

func NewMsgTransferDatabase(msgDocModel database.Msg, msg cache.MsgCache, seqUser cache.SeqUser, seqConversation cache.SeqConversationCache, kafkaConf *config.Kafka) (MsgTransferDatabase, error) {
	conf, err := kafka.BuildProducerConfig(*kafkaConf.Build())
	if err != nil {
		return nil, err
	}
	producerToMongo, err := kafka.NewKafkaProducer(conf, kafkaConf.Address, kafkaConf.ToMongoTopic)
	if err != nil {
		return nil, err
	}
	producerToPush, err := kafka.NewKafkaProducer(conf, kafkaConf.Address, kafkaConf.ToPushTopic)
	if err != nil {
		return nil, err
	}
	return &msgTransferDatabase{
		msgDocDatabase:  msgDocModel,
		msg:             msg,
		seqUser:         seqUser,
		seqConversation: seqConversation,
		producerToMongo: producerToMongo,
		producerToPush:  producerToPush,
	}, nil
}

type msgTransferDatabase struct {
	msgDocDatabase  database.Msg
	msgTable        model.MsgDocModel
	msg             cache.MsgCache
	seqConversation cache.SeqConversationCache
	seqUser         cache.SeqUser
	producerToMongo *kafka.Producer
	producerToPush  *kafka.Producer
}

func (db *msgTransferDatabase) BatchInsertChat2DB(ctx context.Context, conversationID string, msgList []*sdkws.MsgData, currentMaxSeq int64) error {
	if len(msgList) == 0 {
		return errs.ErrArgs.WrapMsg("msgList is empty")
	}
	msgs := make([]any, len(msgList))
	for i, msg := range msgList {
		if msg == nil {
			continue
		}
		var offlinePushModel *model.OfflinePushModel
		if msg.OfflinePushInfo != nil {
			offlinePushModel = &model.OfflinePushModel{
				Title:         msg.OfflinePushInfo.Title,
				Desc:          msg.OfflinePushInfo.Desc,
				Ex:            msg.OfflinePushInfo.Ex,
				IOSPushSound:  msg.OfflinePushInfo.IOSPushSound,
				IOSBadgeCount: msg.OfflinePushInfo.IOSBadgeCount,
			}
		}
		msgs[i] = &model.MsgDataModel{
			SendID:           msg.SendID,
			RecvID:           msg.RecvID,
			GroupID:          msg.GroupID,
			ClientMsgID:      msg.ClientMsgID,
			ServerMsgID:      msg.ServerMsgID,
			SenderPlatformID: msg.SenderPlatformID,
			SenderNickname:   msg.SenderNickname,
			SenderFaceURL:    msg.SenderFaceURL,
			SessionType:      msg.SessionType,
			MsgFrom:          msg.MsgFrom,
			ContentType:      msg.ContentType,
			Content:          string(msg.Content),
			Seq:              msg.Seq,
			SendTime:         msg.SendTime,
			CreateTime:       msg.CreateTime,
			Status:           msg.Status,
			Options:          msg.Options,
			OfflinePush:      offlinePushModel,
			AtUserIDList:     msg.AtUserIDList,
			AttachedInfo:     msg.AttachedInfo,
			Ex:               msg.Ex,
		}
	}
	return db.BatchInsertBlock(ctx, conversationID, msgs, updateKeyMsg, msgList[0].Seq)
}

func (db *msgTransferDatabase) BatchInsertBlock(ctx context.Context, conversationID string, fields []any, key int8, firstSeq int64) error {
	if len(fields) == 0 {
		return nil
	}
	num := db.msgTable.GetSingleGocMsgNum()
	// num = 100
	for i, field := range fields { // Check the type of the field
		var ok bool
		switch key {
		case updateKeyMsg:
			var msg *model.MsgDataModel
			msg, ok = field.(*model.MsgDataModel)
			if msg != nil && msg.Seq != firstSeq+int64(i) {
				return errs.ErrInternalServer.WrapMsg("seq is invalid")
			}
		case updateKeyRevoke:
			_, ok = field.(*model.RevokeModel)
		default:
			return errs.ErrInternalServer.WrapMsg("key is invalid")
		}
		if !ok {
			return errs.ErrInternalServer.WrapMsg("field type is invalid")
		}
	}
	// Returns true if the document exists in the database, false if the document does not exist in the database
	updateMsgModel := func(seq int64, i int) (bool, error) {
		var (
			res *mongo.UpdateResult
			err error
		)
		docID := db.msgTable.GetDocID(conversationID, seq)
		index := db.msgTable.GetMsgIndex(seq)
		field := fields[i]
		switch key {
		case updateKeyMsg:
			res, err = db.msgDocDatabase.UpdateMsg(ctx, docID, index, "msg", field)
		case updateKeyRevoke:
			res, err = db.msgDocDatabase.UpdateMsg(ctx, docID, index, "revoke", field)
		}
		if err != nil {
			return false, err
		}
		return res.MatchedCount > 0, nil
	}
	tryUpdate := true
	for i := 0; i < len(fields); i++ {
		seq := firstSeq + int64(i) // Current sequence number
		if tryUpdate {
			matched, err := updateMsgModel(seq, i)
			if err != nil {
				return err
			}
			if matched {
				continue // The current data has been updated, skip the current data
			}
		}
		doc := model.MsgDocModel{
			DocID: db.msgTable.GetDocID(conversationID, seq),
			Msg:   make([]*model.MsgInfoModel, num),
		}
		var insert int // Inserted data number
		for j := i; j < len(fields); j++ {
			seq = firstSeq + int64(j)
			if db.msgTable.GetDocID(conversationID, seq) != doc.DocID {
				break
			}
			insert++
			switch key {
			case updateKeyMsg:
				doc.Msg[db.msgTable.GetMsgIndex(seq)] = &model.MsgInfoModel{
					Msg: fields[j].(*model.MsgDataModel),
				}
			case updateKeyRevoke:
				doc.Msg[db.msgTable.GetMsgIndex(seq)] = &model.MsgInfoModel{
					Revoke: fields[j].(*model.RevokeModel),
				}
			}
		}
		for i, msgInfo := range doc.Msg {
			if msgInfo == nil {
				msgInfo = &model.MsgInfoModel{}
				doc.Msg[i] = msgInfo
			}
			if msgInfo.DelList == nil {
				doc.Msg[i].DelList = []string{}
			}
		}
		if err := db.msgDocDatabase.Create(ctx, &doc); err != nil {
			if mongo.IsDuplicateKeyError(err) {
				i--              // already inserted
				tryUpdate = true // next block use update mode
				continue
			}
			return err
		}
		tryUpdate = false // The current block is inserted successfully, and the next block is inserted preferentially
		i += insert - 1   // Skip the inserted data
	}
	return nil
}

func (db *msgTransferDatabase) DeleteMessagesFromCache(ctx context.Context, conversationID string, seqs []int64) error {
	return db.msg.DeleteMessagesFromCache(ctx, conversationID, seqs)
}

func (db *msgTransferDatabase) BatchInsertChat2Cache(ctx context.Context, conversationID string, msgs []*sdkws.MsgData) (seq int64, isNew bool, err error) {
	lenList := len(msgs)
	if int64(lenList) > db.msgTable.GetSingleGocMsgNum() {
		return 0, false, errs.New("message count exceeds limit", "limit", db.msgTable.GetSingleGocMsgNum()).Wrap()
	}
	if lenList < 1 {
		return 0, false, errs.New("no messages to insert", "minCount", 1).Wrap()
	}
	currentMaxSeq, err := db.seqConversation.Malloc(ctx, conversationID, int64(len(msgs)))
	if err != nil {
		log.ZError(ctx, "storage.seq.Malloc", err)
		return 0, false, err
	}
	isNew = currentMaxSeq == 0
	lastMaxSeq := currentMaxSeq
	userSeqMap := make(map[string]int64)
	for _, m := range msgs {
		currentMaxSeq++
		m.Seq = currentMaxSeq
		userSeqMap[m.SendID] = m.Seq
	}

	failedNum, err := db.msg.SetMessagesToCache(ctx, conversationID, msgs)
	if err != nil {
		prommetrics.MsgInsertRedisFailedCounter.Add(float64(failedNum))
		log.ZError(ctx, "setMessageToCache error", err, "len", len(msgs), "conversationID", conversationID)
	} else {
		prommetrics.MsgInsertRedisSuccessCounter.Inc()
	}
	err = db.setHasReadSeqs(ctx, conversationID, userSeqMap)
	if err != nil {
		log.ZError(ctx, "SetHasReadSeqs error", err, "userSeqMap", userSeqMap, "conversationID", conversationID)
		prommetrics.SeqSetFailedCounter.Inc()
	}
	return lastMaxSeq, isNew, errs.Wrap(err)
}

func (db *msgTransferDatabase) setHasReadSeqs(ctx context.Context, conversationID string, userSeqMap map[string]int64) error {
	for userID, seq := range userSeqMap {
		if err := db.seqUser.SetUserReadSeq(ctx, conversationID, userID, seq); err != nil {
			return err
		}
	}
	return nil
}

func (db *msgTransferDatabase) SetHasReadSeqToDB(ctx context.Context, userID string, conversationID string, hasReadSeq int64) error {
	return db.seqUser.SetUserReadSeqToDB(ctx, conversationID, userID, hasReadSeq)
}

func (db *msgTransferDatabase) MsgToPushMQ(ctx context.Context, key, conversationID string, msg2mq *sdkws.MsgData) (int32, int64, error) {
	partition, offset, err := db.producerToPush.SendMessage(ctx, key, &pbmsg.PushMsgDataToMQ{MsgData: msg2mq, ConversationID: conversationID})
	if err != nil {
		log.ZError(ctx, "MsgToPushMQ", err, "key", key, "msg2mq", msg2mq)
		return 0, 0, err
	}
	return partition, offset, nil
}

func (db *msgTransferDatabase) MsgToMongoMQ(ctx context.Context, key, conversationID string, messages []*sdkws.MsgData, lastSeq int64) error {
	if len(messages) > 0 {
		_, _, err := db.producerToMongo.SendMessage(ctx, key, &pbmsg.MsgDataToMongoByMQ{LastSeq: lastSeq, ConversationID: conversationID, MsgData: messages})
		if err != nil {
			log.ZError(ctx, "MsgToMongoMQ", err, "key", key, "conversationID", conversationID, "lastSeq", lastSeq)
			return err
		}
	}
	return nil
}
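For orientation, the method comments above imply a typical call order for a transfer worker: allocate seqs and write the batch to the cache, hand each message to the push topic, then send the batch to the Mongo topic. Below is a minimal sketch against the interface; handleTransferBatch is a hypothetical name, the conversation ID as Kafka key is an assumption, and this is not the actual msgtransfer code.

package controller

import (
	"context"

	"github.com/openimsdk/protocol/sdkws"
)

// handleTransferBatch is a hypothetical outline of one transfer step built on the
// MsgTransferDatabase interface above.
func handleTransferBatch(ctx context.Context, db MsgTransferDatabase, conversationID string, msgs []*sdkws.MsgData) error {
	if len(msgs) == 0 {
		return nil
	}
	// Allocate sequence numbers and write the batch to the Redis cache.
	_, isNewConversation, err := db.BatchInsertChat2Cache(ctx, conversationID, msgs)
	if err != nil {
		return err
	}
	_ = isNewConversation // a real worker would create the conversation when this is true
	// Hand each message to the push topic (key choice is an assumption).
	for _, m := range msgs {
		if _, _, err := db.MsgToPushMQ(ctx, conversationID, conversationID, m); err != nil {
			return err
		}
	}
	// Persist the whole batch via the Mongo topic; BatchInsertChat2Cache assigned seqs in order.
	return db.MsgToMongoMQ(ctx, conversationID, conversationID, msgs, msgs[len(msgs)-1].Seq)
}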
@@ -1,92 +0,0 @@
#!/usr/bin/env bash
# Copyright © 2023 OpenIMSDK.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ==============================================================================
#
# Store this file as .git/hooks/commit-msg in your repository in order to
# enforce checking for proper commit message format before actual commits.
# You may need to make the scripts executable by 'chmod +x .git/hooks/commit-msg'.

# commit-msg use go-gitlint tool, install go-gitlint via `go get github.com/llorllale/go-gitlint/cmd/go-gitlint`
# go-gitlint --msg-file="$1"

# An example hook scripts to check the commit log message.
# Called by "git commit" with one argument, the name of the file
# that has the commit message. The hook should exit with non-zero
# status after issuing an appropriate message if it wants to stop the
# commit. The hook is allowed to edit the commit message file.

YELLOW="\e[93m"
GREEN="\e[32m"
RED="\e[31m"
ENDCOLOR="\e[0m"

printMessage() {
    printf "${YELLOW}OpenIM : $1${ENDCOLOR}\n"
}

printSuccess() {
    printf "${GREEN}OpenIM : $1${ENDCOLOR}\n"
}

printError() {
    printf "${RED}OpenIM : $1${ENDCOLOR}\n"
}

printMessage "Running the OpenIM commit-msg hook."

# This example catches duplicate Signed-off-by lines.

test "" = "$(grep '^Signed-off-by: ' "$1" |
    sort | uniq -c | sed -e '/^[ ]*1[ ]/d')" || {
    echo >&2 Duplicate Signed-off-by lines.
    exit 1
}

# TODO: go-gitlint dir set
OPENIM_ROOT=$(dirname "${BASH_SOURCE[0]}")/../..
GITLINT_DIR="$OPENIM_ROOT/_output/tools/go-gitlint"

$GITLINT_DIR \
    --msg-file=$1 \
    --subject-regex="^(build|chore|ci|docs|feat|feature|fix|perf|refactor|revert|style|bot|test)(.*)?:\s?.*" \
    --subject-maxlen=150 \
    --subject-minlen=10 \
    --body-regex=".*" \
    --max-parents=1

if [ $? -ne 0 ]
then
    if ! command -v $GITLINT_DIR &>/dev/null; then
        printError "$GITLINT_DIR not found. Please run 'make tools' OR 'make tools.verify.go-gitlint' make verto install it."
    fi
    printError "Please fix your commit message to match kubecub coding standards"
    printError "https://gist.github.com/cubxxw/126b72104ac0b0ca484c9db09c3e5694#file-githook-md"
    exit 1
fi

### Add Sign-off-by line to the end of the commit message
# Get local git config
NAME=$(git config user.name)
EMAIL=$(git config user.email)

# Check if the commit message contains a sign-off line
grep -qs "^Signed-off-by: " "$1"
SIGNED_OFF_BY_EXISTS=$?

# Add "Signed-off-by" line if it doesn't exist
if [ $SIGNED_OFF_BY_EXISTS -ne 0 ]; then
    echo -e "\nSigned-off-by: $NAME <$EMAIL>" >> "$1"
fi
@@ -1,111 +0,0 @@
#!/usr/bin/env bash
# Copyright © 2023 OpenIMSDK.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ==============================================================================
# This is a pre-commit hook that ensures attempts to commit files that are
# are larger than $limit to your _local_ repo fail, with a helpful error message.

# You can override the default limit of 2MB by supplying the environment variable:
# GIT_FILE_SIZE_LIMIT=50000000 git commit -m "test: this commit is allowed file sizes up to 50MB"
#
# ==============================================================================
#

LC_ALL=C

local_branch="$(git rev-parse --abbrev-ref HEAD)"
valid_branch_regex="^(main|master|develop|release(-[a-zA-Z0-9._-]+)?)$|(feature|feat|openim|hotfix|test|bug|bot|refactor|revert|ci|cicd|style|)\/[a-z0-9._-]+$|^HEAD$"

YELLOW="\e[93m"
GREEN="\e[32m"
RED="\e[31m"
ENDCOLOR="\e[0m"

printMessage() {
    printf "${YELLOW}openim : $1${ENDCOLOR}\n"
}

printSuccess() {
    printf "${GREEN}openim : $1${ENDCOLOR}\n"
}

printError() {
    printf "${RED}openim : $1${ENDCOLOR}\n"
}

printMessage "Running local openim pre-commit hook."

# flutter format .
# https://gist.github.com/cubxxw/126b72104ac0b0ca484c9db09c3e5694#file-githook-md
# TODO! GIT_FILE_SIZE_LIMIT=50000000 git commit -m "test: this commit is allowed file sizes up to 50MB"
# Maximum file size limit in bytes
limit=${GIT_FILE_SIZE_LIMIT:-2000000} # Default 2MB
limitInMB=$(( $limit / 1000000 ))

function file_too_large(){
    filename=$0
    filesize=$(( $1 / 2**20 ))

    cat <<HEREDOC

File $filename is $filesize MB, which is larger than github's maximum
file size (2 MB). We will not be able to push this file to GitHub.
Commit aborted

HEREDOC
    git status

}

# Move to the repo root so git files paths make sense
repo_root=$( git rev-parse --show-toplevel )
cd $repo_root

empty_tree=$( git hash-object -t tree /dev/null )

if git rev-parse --verify HEAD > /dev/null 2>&1
then
    against=HEAD
else
    against="$empty_tree"
fi

# Set split so that for loop below can handle spaces in file names by splitting on line breaks
IFS='
'

shouldFail=false
for file in $( git diff-index --cached --name-only $against ); do
    file_size=$(([ ! -f $file ] && echo 0) || (ls -la $file | awk '{ print $5 }'))
    if [ "$file_size" -gt "$limit" ]; then
        printError "File $file is $(( $file_size / 10**6 )) MB, which is larger than our configured limit of $limitInMB MB"
        shouldFail=true
    fi
done

if $shouldFail
then
    printMessage "If you really need to commit this file, you can override the size limit by setting the GIT_FILE_SIZE_LIMIT environment variable, e.g. GIT_FILE_SIZE_LIMIT=42000000 for 42MB. Or, commit with the --no-verify switch to skip the check entirely."
    printError "Commit aborted"
    exit 1;
fi

if [[ ! $local_branch =~ $valid_branch_regex ]]
then
    printError "There is something wrong with your branch name. Branch names in this project must adhere to this contract: $valid_branch_regex.
Your commit will be rejected. You should rename your branch to a valid name(feat/name OR fix/name) and try again."
    printError "For more on this, read on: https://gist.github.com/cubxxw/126b72104ac0b0ca484c9db09c3e5694"
    exit 1
fi
@@ -1,119 +0,0 @@
#!/usr/bin/env bash
# Copyright © 2023 OpenIMSDK.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ==============================================================================
#

YELLOW="\e[93m"
GREEN="\e[32m"
RED="\e[31m"
ENDCOLOR="\e[0m"

local_branch="$(git rev-parse --abbrev-ref HEAD)"
valid_branch_regex="^(main|master|develop|release(-[a-zA-Z0-9._-]+)?)$|(feature|feat|openim|hotfix|test|bug|ci|cicd|style|)\/[a-z0-9._-]+$|^HEAD$"

printMessage() {
    printf "${YELLOW}OpenIM : $1${ENDCOLOR}\n"
}

printSuccess() {
    printf "${GREEN}OpenIM : $1${ENDCOLOR}\n"
}

printError() {
    printf "${RED}OpenIM : $1${ENDCOLOR}\n"
}

printMessage "Running local OpenIM pre-push hook."

if [[ $(git status --porcelain) ]]; then
    printError "This scripts needs to run against committed code only. Please commit or stash you changes."
    exit 1
fi

COLOR_SUFFIX="\033[0m"

BLACK_PREFIX="\033[30m"
RED_PREFIX="\033[31m"
GREEN_PREFIX="\033[32m"
BACKGROUND_GREEN="\033[33m"
BLUE_PREFIX="\033[34m"
PURPLE_PREFIX="\033[35m"
SKY_BLUE_PREFIX="\033[36m"
WHITE_PREFIX="\033[37m"
BOLD_PREFIX="\033[1m"
UNDERLINE_PREFIX="\033[4m"
ITALIC_PREFIX="\033[3m"

# Function to print colored text
print_color() {
    local text=$1
    local color=$2
    echo -e "${color}${text}${COLOR_SUFFIX}"
}

# Function to print section separator
print_separator() {
    print_color "==========================================================" ${PURPLE_PREFIX}
}

# Get current time
time=$(date +"%Y-%m-%d %H:%M:%S")

# Print section separator
print_separator

# Print time of submission
print_color "PTIME: ${time}" "${BOLD_PREFIX}${CYAN_PREFIX}"
echo ""
author=$(git config user.name)
repository=$(basename -s .git $(git config --get remote.origin.url))

# Print additional information if needed
print_color "Repository: ${repository}" "${BLUE_PREFIX}"
echo ""

print_color "Author: ${author}" "${PURPLE_PREFIX}"

# Print section separator
print_separator

file_list=$(git diff --name-status HEAD @{u})
added_files=$(grep -c '^A' <<< "$file_list")
modified_files=$(grep -c '^M' <<< "$file_list")
deleted_files=$(grep -c '^D' <<< "$file_list")

print_color "Added Files: ${added_files}" "${BACKGROUND_GREEN}"
print_color "Modified Files: ${modified_files}" "${BACKGROUND_GREEN}"
print_color "Deleted Files: ${deleted_files}" "${BACKGROUND_GREEN}"

if [[ ! $local_branch =~ $valid_branch_regex ]]
then
    printError "There is something wrong with your branch name. Branch names in this project must adhere to this contract: $valid_branch_regex.
Your commit will be rejected. You should rename your branch to a valid name(feat/name OR fix/name) and try again."
    printError "For more on this, read on: https://gist.github.com/cubxxw/126b72104ac0b0ca484c9db09c3e5694"
    exit 1
fi

#
#printMessage "Running the Flutter analyzer"
#flutter analyze
#
#if [ $? -ne 0 ]; then
#    printError "Flutter analyzer error"
#    exit 1
#fi
#
#printMessage "Finished running the Flutter analyzer"