[Bug]: find no available rootcoord, check rootcoord state #36022

Open
1 task done
littlePoBoy opened this issue Sep 5, 2024 · 10 comments

@littlePoBoy

Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version: v2.4.10
- Deployment mode (standalone or cluster): standalone with Docker Compose
- MQ type (rocksmq, pulsar or kafka): rocksmq
- SDK version (e.g. pymilvus v2.0.0rc2): milvus-sdk-go
- OS (Ubuntu or CentOS): Rocky Linux
- CPU/Memory: Intel(R) Xeon(R) Gold 5118 CPU @ 2.30GHz 48c/16GB
- GPU: T4
- Others:

Current Behavior

When I batch insert data into Milvus, Milvus hangs immediately and cannot be restarted. I have to clean up all the data before I can restart it.
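
For reference, a minimal sketch of the kind of batch insert that triggers this, using milvus-sdk-go v2; the collection name, field names, and dimension below are placeholders rather than my real schema:

package main

import (
	"context"
	"log"
	"math/rand"

	"github.com/milvus-io/milvus-sdk-go/v2/client"
	"github.com/milvus-io/milvus-sdk-go/v2/entity"
)

func main() {
	ctx := context.Background()

	// Connect to the standalone instance started by the compose file below.
	c, err := client.NewGrpcClient(ctx, "localhost:19530")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// Placeholder schema: int64 primary key "id" and a 128-dim float vector "vec".
	const (
		collection = "demo_collection"
		dim        = 128
		batchSize  = 2000
	)

	ids := make([]int64, batchSize)
	vecs := make([][]float32, batchSize)
	for i := 0; i < batchSize; i++ {
		ids[i] = int64(i)
		v := make([]float32, dim)
		for j := range v {
			v[j] = rand.Float32()
		}
		vecs[i] = v
	}

	// One batch of 2000 rows, roughly what my application inserts at a time.
	if _, err := c.Insert(ctx, collection, "",
		entity.NewColumnInt64("id", ids),
		entity.NewColumnFloatVector("vec", dim, vecs),
	); err != nil {
		log.Fatal(err)
	}
	if err := c.Flush(ctx, collection, false); err != nil {
		log.Fatal(err)
	}
}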

Expected Behavior

No response

Steps To Reproduce

docker compose file:


version: '3.5'

services:
  etcd:
    container_name: etcd
    image: quay.io/coreos/etcd:v3.5.5
    environment:
      - ETCD_AUTO_COMPACTION_MODE=revision
      - ETCD_AUTO_COMPACTION_RETENTION=1000
      - ETCD_QUOTA_BACKEND_BYTES=4294967296
      - ETCD_SNAPSHOT_COUNT=50000
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/etcd/logs:/tmp/milvus/logs
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/etcd:/etcd
    command: etcd -advertise-client-urls=http://127.0.0.1:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
    healthcheck:
      test: ["CMD", "etcdctl", "endpoint", "health"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio:
    container_name: minio
    image: minio/minio:RELEASE.2023-03-20T20-16-18Z
    environment:
      MINIO_ACCESS_KEY: minioadmin
      MINIO_SECRET_KEY: minioadmin
    ports:
      - "9001:9001"
      - "9000:9000"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/minio:/minio_data
    command: minio server /minio_data --console-address ":9001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  standalone:
    container_name: milvus
    image: milvusdb/milvus:v2.4.8
    command: ["milvus", "run", "standalone"]
    security_opt:
    - seccomp:unconfined
    environment:
      ETCD_ENDPOINTS: etcd:2379
      MINIO_ADDRESS: minio:9000
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/milvus:/var/lib/milvus
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/logs:/milvus/logs
      - /etc/localtime:/etc/localtime:ro
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/configs/milvus.yaml:/milvus/configs/milvus.yaml
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9091/healthz"]
      interval: 30s
      start_period: 90s
      timeout: 20s
      retries: 3
    ports:
      - "19530:19530"
      - "9091:9091"
    depends_on:
      - "etcd"
      - "minio"

networks:
  default:
    name: milvus

Milvus Log

[2024/09/04 16:03:14.043 +08:00] [INFO] [querynodev2/server.go:163] ["QueryNode init session"] [nodeID=3] ["node address"=172.31.0.6:21123]
[2024/09/04 16:03:14.043 +08:00] [INFO] [dependency/factory.go:85] ["try to init mq"] [standalone=true] [mqType=rocksmq]
[2024/09/04 16:03:14.088 +08:00] [INFO] [msgstream/mq_factory.go:244] ["init rocksmq msgstream success"] [path=/var/lib/milvus/rdb_data]
[2024/09/04 16:03:14.088 +08:00] [INFO] [msgstream/mq_factory.go:244] ["init rocksmq msgstream success"] [path=/var/lib/milvus/rdb_data]
[2024/09/04 16:03:14.088 +08:00] [INFO] [msgstream/mq_factory.go:244] ["init rocksmq msgstream success"] [path=/var/lib/milvus/rdb_data]
[2024/09/04 16:03:14.090 +08:00] [INFO] [sessionutil/session_util.go:289] ["start server"] [name=rootcoord] [address=172.31.0.6:53100] [id=3]
[2024/09/04 16:03:14.090 +08:00] [INFO] [rootcoord/root_coord.go:154] ["update rootcoord state"] [state=Initializing]
[2024/09/04 16:03:14.090 +08:00] [INFO] [sessionutil/session_util.go:289] ["start server"] [name=indexcoord] [address=172.31.0.6:13333] [id=3]
[2024/09/04 16:03:14.091 +08:00] [INFO] [sessionutil/session_util.go:289] ["start server"] [name=datacoord] [address=172.31.0.6:13333] [id=3]
[2024/09/04 16:03:14.091 +08:00] [INFO] [tso/tso.go:122] ["sync and save timestamp"] [last=2024/09/04 15:38:05.142 +08:00] [save=2024/09/04 16:03:17.091 +08:00] [next=2024/09/04 16:03:14.091 +08:00]
[2024/09/04 16:03:14.091 +08:00] [INFO] [rootcoord/root_coord.go:398] ["id allocator initialized"] [root_path=by-dev/kv] [sub_path=gid] [key=idTimestamp]
[2024/09/04 16:03:14.092 +08:00] [INFO] [datacoord/server.go:345] ["init rootcoord client done"]
[2024/09/04 16:03:14.092 +08:00] [INFO] [storage/remote_chunk_manager.go:92] ["remote chunk manager init success."] [remote=aws] [bucketname=a-bucket] [root=files]
[2024/09/04 16:03:14.092 +08:00] [INFO] [tasks/concurrent_safe_scheduler.go:27] ["query node use concurrent safe scheduler"] [max_concurrency=48]
[2024/09/04 16:03:14.092 +08:00] [INFO] [querynodev2/server.go:319] ["queryNode init scheduler"] [policy=fifo]
[2024/09/04 16:03:14.093 +08:00] [INFO] [segments/segment_loader.go:548] ["SegmentLoader created"] [ioPoolSize=256]
[2024/09/04 16:03:14.093 +08:00] [INFO] [querynodev2/server.go:230] ["set up knowhere build pool size"] [pool_size=24]
[2024/09/04 16:03:14.093 +08:00] [INFO] [tso/tso.go:122] ["sync and save timestamp"] [last=2024/09/04 16:03:15.012 +08:00] [save=2024/09/04 16:03:18.013 +08:00] [next=2024/09/04 16:03:15.013 +08:00]
[2024/09/04 16:03:14.093 +08:00] [INFO] [rootcoord/root_coord.go:422] ["tso allocator initialized"] [root_path=by-dev/kv] [sub_path=gid] [key=idTimestamp]
[2024/09/04 16:03:14.093 +08:00] [INFO] [rootcoord/root_coord.go:341] ["Using etcd as meta storage."]
[2024/09/04 16:03:14.094 +08:00] [INFO] [rootcoord/meta_table.go:150] ["recover databases"] ["num of dbs"=1]
[2024/09/04 16:03:14.094 +08:00] [INFO] [storage/remote_chunk_manager.go:92] ["remote chunk manager init success."] [remote=aws] [bucketname=a-bucket] [root=files]
[2024/09/04 16:03:14.094 +08:00] [INFO] [datacoord/server.go:354] ["init chunk manager factory done"]
[2024/09/04 16:03:14.094 +08:00] [INFO] [datacoord/server.go:632] ["data coordinator connecting to metadata store"] [metaType=etcd]
[2024/09/04 16:03:14.094 +08:00] [INFO] [datacoord/server.go:645] ["data coordinator successfully connected to metadata store"] [metaType=etcd]
[2024/09/04 16:03:14.095 +08:00] [INFO] [datacoord/index_meta.go:97] ["indexMeta reloadFromKV done"] [duration=871.633µs]
[2024/09/04 16:03:14.096 +08:00] [INFO] [datacoord/analyze_meta.go:70] ["analyzeMeta reloadFromKV done"] [duration=376.369µs]
[2024/09/04 16:03:14.096 +08:00] [INFO] [rootcoord/meta_table.go:190] ["collections recovered from db"] [db_name=default] [collection_num=0] [partition_num=0]
[2024/09/04 16:03:14.096 +08:00] [INFO] [datacoord/partition_stats_meta.go:67] ["DataCoord partitionStatsMeta reloadFromKV done"] [duration=336.552µs]
[2024/09/04 16:03:14.096 +08:00] [INFO] [datacoord/compaction_task_meta.go:62] ["DataCoord compactionTaskMeta reloadFromKV done"] [duration=312.366µs]
[2024/09/04 16:03:14.097 +08:00] [INFO] [rootcoord/meta_table.go:208] ["RootCoord meta table reload done"] [duration=3.948948ms]
[2024/09/04 16:03:14.097 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_0]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_1]
[2024/09/04 16:03:14.097 +08:00] [INFO] [datacoord/meta.go:229] ["DataCoord meta reloadFromKV done"] [duration=963.098µs]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_2]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_3]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_4]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_5]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_6]
[2024/09/04 16:03:14.098 +08:00] [INFO] [datacoord/server.go:1245] ["all old data node down, enable auto balance!"]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_7]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_8]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_9]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_10]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_11]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.098 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_12]
[2024/09/04 16:03:14.098 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.099 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_13]
[2024/09/04 16:03:14.099 +08:00] [INFO] [datacoord/channel_store_v2.go:71] ["channel store reload done"] [duration=402.199µs]
[2024/09/04 16:03:14.099 +08:00] [INFO] [datacoord/server.go:368] ["init datanode cluster done"]
[2024/09/04 16:03:14.099 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.099 +08:00] [WARN] [client/client.go:90] ["RootCoordClient mess key not exist"] [key=rootcoord]
[2024/09/04 16:03:14.099 +08:00] [WARN] [grpcclient/client.go:249] ["failed to get client address"] [error="find no available rootcoord, check rootcoord state"]
[2024/09/04 16:03:14.099 +08:00] [WARN] [grpcclient/client.go:457] ["fail to get grpc client"] [client_role=rootcoord] [error="find no available rootcoord, check rootcoord state"]
[2024/09/04 16:03:14.099 +08:00] [INFO] [datacoord/server.go:549] ["DataCoord success to get DataNode sessions"] [sessions={}]
[2024/09/04 16:03:14.099 +08:00] [INFO] [datacoord/server.go:570] ["DataCoord Cluster Manager start up"]
[2024/09/04 16:03:14.099 +08:00] [INFO] [datacoord/channel_manager_v2.go:162] ["starting channel balance loop"]
[2024/09/04 16:03:14.099 +08:00] [WARN] [grpcclient/client.go:478] ["grpc client is nil, maybe fail to get client in the retry state"] [client_role=rootcoord] [error="empty grpc client: find no available rootcoord, check rootcoord state"] [errorVerbose="empty grpc client: find no available rootcoord, check rootcoord state\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call.func2\n | \t/workspace/source/internal/util/grpcclient/client.go:477\n | github.com/milvus-io/milvus/pkg/util/retry.Handle\n | \t/workspace/source/pkg/util/retry/retry.go:104\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call\n | \t/workspace/source/internal/util/grpcclient/client.go:470\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).Call\n | \t/workspace/source/internal/util/grpcclient/client.go:557\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).ReCall\n | \t/workspace/source/internal/util/grpcclient/client.go:573\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.(*Client).ListDatabases\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:639\n | github.com/milvus-io/milvus/internal/datacoord/broker.(*coordinatorBroker).ListDatabases\n | \t/workspace/source/internal/datacoord/broker/coordinator_broker.go:123\n | github.com/milvus-io/milvus/internal/datacoord.(*meta).reloadCollectionsFromRootcoord\n | \t/workspace/source/internal/datacoord/meta.go:234\n | github.com/milvus-io/milvus/internal/datacoord.(*Server).initMeta.func1.1.1\n | \t/workspace/source/internal/datacoord/server.go:659\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/datacoord.(*Server).initMeta.func1.1\n | \t/workspace/source/internal/datacoord/server.go:658\n | runtime.goexit\n | \t/usr/local/go/src/runtime/asm_amd64.s:1650\nWraps: (2) empty grpc client\nWraps: (3) find no available rootcoord, check rootcoord state\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString"]
[2024/09/04 16:03:14.099 +08:00] [INFO] [datacoord/channel_manager_v2.go:170] ["cluster start up"] [allNodes="[]"] [legacyNodes="[]"] [oldNodes="[]"] [newOnlines="[]"] [offLines="[]"]
[2024/09/04 16:03:14.099 +08:00] [INFO] [datacoord/server.go:575] ["DataCoord Cluster Manager start up successfully"]
[2024/09/04 16:03:14.100 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_14]
[2024/09/04 16:03:14.100 +08:00] [WARN] [client/client.go:90] ["RootCoordClient mess key not exist"] [key=rootcoord]
[2024/09/04 16:03:14.100 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:14.100 +08:00] [WARN] [grpcclient/client.go:249] ["failed to get client address"] [error="find no available rootcoord, check rootcoord state"]
[2024/09/04 16:03:14.100 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_15]
[2024/09/04 16:03:14.100 +08:00] [WARN] [grpcclient/client.go:464] ["fail to get grpc client in the retry state"] [client_role=rootcoord] [error="find no available rootcoord, check rootcoord state"]
[2024/09/04 16:03:14.100 +08:00] [INFO] [rootcoord/dml_channels.go:215] ["init dml channels"] [prefix=by-dev-rootcoord-dml] [num=16]
[2024/09/04 16:03:14.100 +08:00] [INFO] [rootcoord/root_coord.go:451] ["create TimeTick sync done"]
[2024/09/04 16:03:14.100 +08:00] [INFO] [rootcoord/root_coord.go:467] ["init proxy manager done"]
[2024/09/04 16:03:14.100 +08:00] [WARN] [retry/retry.go:106] ["retry func failed"] [retried=0] [error="empty grpc client: find no available rootcoord, check rootcoord state"] [errorVerbose="empty grpc client: find no available rootcoord, check rootcoord state\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call.func2\n | \t/workspace/source/internal/util/grpcclient/client.go:477\n | github.com/milvus-io/milvus/pkg/util/retry.Handle\n | \t/workspace/source/pkg/util/retry/retry.go:104\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call\n | \t/workspace/source/internal/util/grpcclient/client.go:470\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).Call\n | \t/workspace/source/internal/util/grpcclient/client.go:557\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).ReCall\n | \t/workspace/source/internal/util/grpcclient/client.go:573\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.(*Client).ListDatabases\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:639\n | github.com/milvus-io/milvus/internal/datacoord/broker.(*coordinatorBroker).ListDatabases\n | \t/workspace/source/internal/datacoord/broker/coordinator_broker.go:123\n | github.com/milvus-io/milvus/internal/datacoord.(*meta).reloadCollectionsFromRootcoord\n | \t/workspace/source/internal/datacoord/meta.go:234\n | github.com/milvus-io/milvus/internal/datacoord.(*Server).initMeta.func1.1.1\n | \t/workspace/source/internal/datacoord/server.go:659\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/datacoord.(*Server).initMeta.func1.1\n | \t/workspace/source/internal/datacoord/server.go:658\n | runtime.goexit\n | \t/usr/local/go/src/runtime/asm_amd64.s:1650\nWraps: (2) empty grpc client\nWraps: (3) find no available rootcoord, check rootcoord state\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString"]
[2024/09/04 16:03:14.101 +08:00] [INFO] [rootcoord/root_coord.go:477] ["init credentials done"]
[2024/09/04 16:03:14.102 +08:00] [INFO] [rootcoord/meta_table.go:1273] ["role already exists"] [role=admin]
[2024/09/04 16:03:14.103 +08:00] [INFO] [datacoord/server.go:375] ["init service discovery done"]
[2024/09/04 16:03:14.103 +08:00] [INFO] [rootcoord/meta_table.go:1273] ["role already exists"] [role=public]
[2024/09/04 16:03:14.103 +08:00] [INFO] [datacoord/server.go:379] ["init compaction done"]
[2024/09/04 16:03:14.103 +08:00] [INFO] [datacoord/server.go:384] ["init segment manager done"]
[2024/09/04 16:03:14.103 +08:00] [INFO] [datacoord/garbage_collector.go:83] ["GC with option"] [enabled=true] [interval=1h0m0s] [scanInterval=168h0m0s] [missingTolerance=24h0m0s] [dropTolerance=3h0m0s]
[2024/09/04 16:03:14.104 +08:00] [INFO] [datacoord/server.go:399] ["init datacoord done"] [nodeID=3] [Address=172.31.0.6:13333]
[2024/09/04 16:03:14.105 +08:00] [INFO] [rootcoord/root_coord.go:483] ["init rootcoord done"] [nodeID=3] [Address=172.31.0.6:53100]
[2024/09/04 16:03:14.105 +08:00] [INFO] [rootcoord/service.go:312] ["RootCoord Core start ..."]
[2024/09/04 16:03:14.106 +08:00] [INFO] [sessionutil/session_util.go:466] ["put session key into etcd"] [key=by-dev/meta/session/indexcoord] [value="{"ServerID":3,"ServerName":"indexcoord","Address":"172.31.0.6:13333","Exclusive":true,"TriggerKill":true,"Version":"2.4.9","IndexEngineVersion":{},"LeaseID":7587881183017939410,"HostName":"26e7d3a18d7a"}"]
[2024/09/04 16:03:14.106 +08:00] [INFO] [sessionutil/session_util.go:476] ["Service registered successfully"] [ServerName=indexcoord] [serverID=3]
[2024/09/04 16:03:14.106 +08:00] [INFO] [sessionutil/session_util.go:466] ["put session key into etcd"] [key=by-dev/meta/session/rootcoord] [value="{"ServerID":3,"ServerName":"rootcoord","Address":"172.31.0.6:53100","Exclusive":true,"TriggerKill":true,"Version":"2.4.9","IndexEngineVersion":{},"LeaseID":7587881183017939413,"HostName":"26e7d3a18d7a"}"]
[2024/09/04 16:03:14.106 +08:00] [INFO] [sessionutil/session_util.go:476] ["Service registered successfully"] [ServerName=rootcoord] [serverID=3]
[2024/09/04 16:03:14.106 +08:00] [INFO] [rootcoord/root_coord.go:273] ["RootCoord Register Finished"]
[2024/09/04 16:03:14.107 +08:00] [INFO] [sessionutil/session_util.go:466] ["put session key into etcd"] [key=by-dev/meta/session/datacoord] [value="{"ServerID":3,"ServerName":"datacoord","Address":"172.31.0.6:13333","Exclusive":true,"TriggerKill":true,"Version":"2.4.9","IndexEngineVersion":{},"LeaseID":7587881183017939417,"HostName":"26e7d3a18d7a"}"]
[2024/09/04 16:03:14.108 +08:00] [INFO] [sessionutil/session_util.go:476] ["Service registered successfully"] [ServerName=datacoord] [serverID=3]
[2024/09/04 16:03:14.107 +08:00] [INFO] [proxyutil/proxy_watcher.go:96] ["succeed to init sessions on etcd"] [sessions=null] [revision=569]
[2024/09/04 16:03:14.108 +08:00] [INFO] [datacoord/server.go:267] ["DataCoord Register Finished"]
[2024/09/04 16:03:14.108 +08:00] [INFO] [rootcoord/root_coord.go:154] ["update rootcoord state"] [state=Healthy]
[2024/09/04 16:03:14.108 +08:00] [INFO] [proxyutil/proxy_watcher.go:119] ["start to watch etcd"]
[2024/09/04 16:03:14.108 +08:00] [INFO] [rootcoord/quota_center.go:303] ["Start QuotaCenter"] [collectInterval=3s]
[2024/09/04 16:03:14.108 +08:00] [WARN] [proxyutil/proxy_client_manager.go:263] ["proxy client is empty, RefreshPrivilegeInfoCache will not send to any client"]
[2024/09/04 16:03:14.108 +08:00] [INFO] [sessionutil/session_util.go:1234] ["save server info into file"] [content="rootcoord-3\n"] [filePath=/tmp/milvus/server_id_8]
[2024/09/04 16:03:14.108 +08:00] [INFO] [rootcoord/root_coord.go:713] ["rootcoord startup successfully"]
[2024/09/04 16:03:14.108 +08:00] [INFO] [components/root_coord.go:58] ["RootCoord successfully started"]
[2024/09/04 16:03:14.108 +08:00] [INFO] [datacoord/task_scheduler.go:168] ["task scheduler loop start"]
[2024/09/04 16:03:14.108 +08:00] [INFO] [datacoord/compaction.go:341] ["compactionPlanHandler start loop schedule"]
[2024/09/04 16:03:14.108 +08:00] [INFO] [datacoord/compaction.go:360] ["compactionPlanHandler start loop check"] ["check result interval"=3s]
[2024/09/04 16:03:14.108 +08:00] [INFO] [datacoord/sync_segments_scheduler.go:71] ["SyncSegmentsScheduler started..."]
[2024/09/04 16:03:14.108 +08:00] [INFO] [datacoord/import_checker.go:73] ["start import checker"]
[2024/09/04 16:03:14.108 +08:00] [INFO] [datacoord/compaction.go:380] ["compactionPlanHandler start clean check loop"] ["gc interval"=30m0s]
[2024/09/04 16:03:14.108 +08:00] [INFO] [sessionutil/session_util.go:1234] ["save server info into file"] [content="datacoord-3\n"] [filePath=/tmp/milvus/server_id_8]
[2024/09/04 16:03:14.108 +08:00] [INFO] [datacoord/compaction_trigger_v2.go:118] ["Compaction trigger manager start"]
[2024/09/04 16:03:14.108 +08:00] [INFO] [datacoord/server.go:413] ["DataCoord startup successfully"]
[2024/09/04 16:03:14.108 +08:00] [INFO] [datacoord/index_service.go:114] ["start create index for segment loop..."]
[2024/09/04 16:03:14.108 +08:00] [INFO] [datacoord/import_scheduler.go:75] ["start import scheduler"]
[2024/09/04 16:03:14.141 +08:00] [WARN] [grpcclient/client.go:478] ["grpc client is nil, maybe fail to get client in the retry state"] [client_role=rootcoord] [error="empty grpc client: find no available rootcoord, check rootcoord state"] [errorVerbose="empty grpc client: find no available rootcoord, check rootcoord state\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call.func2\n | \t/workspace/source/internal/util/grpcclient/client.go:477\n | github.com/milvus-io/milvus/pkg/util/retry.Handle\n | \t/workspace/source/pkg/util/retry/retry.go:104\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call\n | \t/workspace/source/internal/util/grpcclient/client.go:470\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).Call\n | \t/workspace/source/internal/util/grpcclient/client.go:557\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).ReCall\n | \t/workspace/source/internal/util/grpcclient/client.go:573\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.wrapGrpcCall[...]\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:107\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.(*Client).GetComponentStates\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:121\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...].func1\n | \t/workspace/source/internal/util/componentutil/componentutil.go:39\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:64\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentHealthy[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:85\n | github.com/milvus-io/milvus/internal/distributed/proxy.(*Server).init\n | \t/workspace/source/internal/distributed/proxy/service.go:631\n | github.com/milvus-io/milvus/internal/distributed/proxy.(*Server).Run\n | \t/workspace/source/internal/distributed/proxy/service.go:452\n | github.com/milvus-io/milvus/cmd/components.(*Proxy).Run\n | \t/workspace/source/cmd/components/proxy.go:54\n | github.com/milvus-io/milvus/cmd/roles.runComponent[...].func1\n | \t/workspace/source/cmd/roles/roles.go:121\n | runtime.goexit\n | \t/usr/local/go/src/runtime/asm_amd64.s:1650\nWraps: (2) empty grpc client\nWraps: (3) find no available rootcoord, check rootcoord state\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString"]
[2024/09/04 16:03:14.175 +08:00] [INFO] [sessionutil/session_util.go:914] ["register session success"] [role=datacoord] [key=by-dev/meta/session/datacoord]
[2024/09/04 16:03:14.175 +08:00] [INFO] [sessionutil/session_util.go:914] ["register session success"] [role=rootcoord] [key=by-dev/meta/session/rootcoord]
[2024/09/04 16:03:14.238 +08:00] [WARN] [grpcclient/client.go:478] ["grpc client is nil, maybe fail to get client in the retry state"] [client_role=rootcoord] [error="empty grpc client: find no available rootcoord, check rootcoord state"] [errorVerbose="empty grpc client: find no available rootcoord, check rootcoord state\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call.func2\n | \t/workspace/source/internal/util/grpcclient/client.go:477\n | github.com/milvus-io/milvus/pkg/util/retry.Handle\n | \t/workspace/source/pkg/util/retry/retry.go:104\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call\n | \t/workspace/source/internal/util/grpcclient/client.go:470\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).Call\n | \t/workspace/source/internal/util/grpcclient/client.go:557\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).ReCall\n | \t/workspace/source/internal/util/grpcclient/client.go:573\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.wrapGrpcCall[...]\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:107\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.(*Client).GetComponentStates\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:121\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...].func1\n | \t/workspace/source/internal/util/componentutil/componentutil.go:39\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:64\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentHealthy[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:85\n | github.com/milvus-io/milvus/internal/distributed/datanode.(*Server).init\n | \t/workspace/source/internal/distributed/datanode/service.go:273\n | github.com/milvus-io/milvus/internal/distributed/datanode.(*Server).Run\n | \t/workspace/source/internal/distributed/datanode/service.go:188\n | github.com/milvus-io/milvus/cmd/components.(*DataNode).Run\n | \t/workspace/source/cmd/components/data_node.go:55\n | github.com/milvus-io/milvus/cmd/roles.runComponent[...].func1\n | \t/workspace/source/cmd/roles/roles.go:121\n | runtime.goexit\n | \t/usr/local/go/src/runtime/asm_amd64.s:1650\nWraps: (2) empty grpc client\nWraps: (3) find no available rootcoord, check rootcoord state\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString"]
[2024/09/04 16:03:14.238 +08:00] [WARN] [grpcclient/client.go:478] ["grpc client is nil, maybe fail to get client in the retry state"] [client_role=rootcoord] [error="empty grpc client: find no available rootcoord, check rootcoord state"] [errorVerbose="empty grpc client: find no available rootcoord, check rootcoord state\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call.func2\n | \t/workspace/source/internal/util/grpcclient/client.go:477\n | github.com/milvus-io/milvus/pkg/util/retry.Handle\n | \t/workspace/source/pkg/util/retry/retry.go:104\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call\n | \t/workspace/source/internal/util/grpcclient/client.go:470\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).Call\n | \t/workspace/source/internal/util/grpcclient/client.go:557\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).ReCall\n | \t/workspace/source/internal/util/grpcclient/client.go:573\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.wrapGrpcCall[...]\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:107\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.(*Client).GetComponentStates\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:121\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...].func1\n | \t/workspace/source/internal/util/componentutil/componentutil.go:39\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:64\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentHealthy[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:85\n | github.com/milvus-io/milvus/internal/distributed/querycoord.(*Server).init\n | \t/workspace/source/internal/distributed/querycoord/service.go:169\n | github.com/milvus-io/milvus/internal/distributed/querycoord.(*Server).Run\n | \t/workspace/source/internal/distributed/querycoord/service.go:100\n | github.com/milvus-io/milvus/cmd/components.(*QueryCoord).Run\n | \t/workspace/source/cmd/components/query_coord.go:55\n | github.com/milvus-io/milvus/cmd/roles.runComponent[...].func1\n | \t/workspace/source/cmd/roles/roles.go:121\n | runtime.goexit\n | \t/usr/local/go/src/runtime/asm_amd64.s:1650\nWraps: (2) empty grpc client\nWraps: (3) find no available rootcoord, check rootcoord state\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString"]
[2024/09/04 16:03:14.301 +08:00] [WARN] [grpcclient/client.go:478] ["grpc client is nil, maybe fail to get client in the retry state"] [client_role=rootcoord] [error="empty grpc client: find no available rootcoord, check rootcoord state"] [errorVerbose="empty grpc client: find no available rootcoord, check rootcoord state\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call.func2\n | \t/workspace/source/internal/util/grpcclient/client.go:477\n | github.com/milvus-io/milvus/pkg/util/retry.Handle\n | \t/workspace/source/pkg/util/retry/retry.go:104\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call\n | \t/workspace/source/internal/util/grpcclient/client.go:470\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).Call\n | \t/workspace/source/internal/util/grpcclient/client.go:557\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).ReCall\n | \t/workspace/source/internal/util/grpcclient/client.go:573\n | github.com/milvus-io/milvus/internal/distributed/rootcoord/client.(*Client).ListDatabases\n | \t/workspace/source/internal/distributed/rootcoord/client/client.go:639\n | github.com/milvus-io/milvus/internal/datacoord/broker.(*coordinatorBroker).ListDatabases\n | \t/workspace/source/internal/datacoord/broker/coordinator_broker.go:123\n | github.com/milvus-io/milvus/internal/datacoord.(*meta).reloadCollectionsFromRootcoord\n | \t/workspace/source/internal/datacoord/meta.go:234\n | github.com/milvus-io/milvus/internal/datacoord.(*Server).initMeta.func1.1.1\n | \t/workspace/source/internal/datacoord/server.go:659\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/datacoord.(*Server).initMeta.func1.1\n | \t/workspace/source/internal/datacoord/server.go:658\n | runtime.goexit\n | \t/usr/local/go/src/runtime/asm_amd64.s:1650\nWraps: (2) empty grpc client\nWraps: (3) find no available rootcoord, check rootcoord state\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString"]
[2024/09/04 16:03:14.544 +08:00] [INFO] [componentutil/componentutil.go:61] ["WaitForComponentStates success"] ["current state"=Healthy]
[2024/09/04 16:03:14.546 +08:00] [INFO] [componentutil/componentutil.go:61] ["WaitForComponentStates success"] ["current state"=Healthy]
[2024/09/04 16:03:14.547 +08:00] [WARN] [grpcclient/client.go:249] ["failed to get client address"] [error="find no available querycoord, check querycoord state"]
[2024/09/04 16:03:14.547 +08:00] [WARN] [grpcclient/client.go:457] ["fail to get grpc client"] [client_role=querycoord] [error="find no available querycoord, check querycoord state"]
[2024/09/04 16:03:14.547 +08:00] [WARN] [grpcclient/client.go:478] ["grpc client is nil, maybe fail to get client in the retry state"] [client_role=querycoord] [error="empty grpc client: find no available querycoord, check querycoord state"] [errorVerbose="empty grpc client: find no available querycoord, check querycoord state\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call.func2\n | \t/workspace/source/internal/util/grpcclient/client.go:477\n | github.com/milvus-io/milvus/pkg/util/retry.Handle\n | \t/workspace/source/pkg/util/retry/retry.go:104\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call\n | \t/workspace/source/internal/util/grpcclient/client.go:470\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).Call\n | \t/workspace/source/internal/util/grpcclient/client.go:557\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).ReCall\n | \t/workspace/source/internal/util/grpcclient/client.go:573\n | github.com/milvus-io/milvus/internal/distributed/querycoord/client.wrapGrpcCall[...]\n | \t/workspace/source/internal/distributed/querycoord/client/client.go:100\n | github.com/milvus-io/milvus/internal/distributed/querycoord/client.(*Client).GetComponentStates\n | \t/workspace/source/internal/distributed/querycoord/client/client.go:114\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...].func1\n | \t/workspace/source/internal/util/componentutil/componentutil.go:39\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:64\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentHealthy[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:85\n | github.com/milvus-io/milvus/internal/distributed/proxy.(*Server).init\n | \t/workspace/source/internal/distributed/proxy/service.go:675\n | github.com/milvus-io/milvus/internal/distributed/proxy.(*Server).Run\n | \t/workspace/source/internal/distributed/proxy/service.go:452\n | github.com/milvus-io/milvus/cmd/components.(*Proxy).Run\n | \t/workspace/source/cmd/components/proxy.go:54\n | github.com/milvus-io/milvus/cmd/roles.runComponent[...].func1\n | \t/workspace/source/cmd/roles/roles.go:121\n | runtime.goexit\n | \t/usr/local/go/src/runtime/asm_amd64.s:1650\nWraps: (2) empty grpc client\nWraps: (3) find no available querycoord, check querycoord state\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString"]
[2024/09/04 16:03:14.548 +08:00] [WARN] [grpcclient/client.go:249] ["failed to get client address"] [error="find no available querycoord, check querycoord state"]
[2024/09/04 16:03:14.548 +08:00] [WARN] [grpcclient/client.go:464] ["fail to get grpc client in the retry state"] [client_role=querycoord] [error="find no available querycoord, check querycoord state"]
[2024/09/04 16:03:14.548 +08:00] [WARN] [retry/retry.go:106] ["retry func failed"] [retried=0] [error="empty grpc client: find no available querycoord, check querycoord state"] [errorVerbose="empty grpc client: find no available querycoord, check querycoord state\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call.func2\n | \t/workspace/source/internal/util/grpcclient/client.go:477\n | github.com/milvus-io/milvus/pkg/util/retry.Handle\n | \t/workspace/source/pkg/util/retry/retry.go:104\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call\n | \t/workspace/source/internal/util/grpcclient/client.go:470\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).Call\n | \t/workspace/source/internal/util/grpcclient/client.go:557\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).ReCall\n | \t/workspace/source/internal/util/grpcclient/client.go:573\n | github.com/milvus-io/milvus/internal/distributed/querycoord/client.wrapGrpcCall[...]\n | \t/workspace/source/internal/distributed/querycoord/client/client.go:100\n | github.com/milvus-io/milvus/internal/distributed/querycoord/client.(*Client).GetComponentStates\n | \t/workspace/source/internal/distributed/querycoord/client/client.go:114\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...].func1\n | \t/workspace/source/internal/util/componentutil/componentutil.go:39\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:64\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentHealthy[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:85\n | github.com/milvus-io/milvus/internal/distributed/proxy.(*Server).init\n | \t/workspace/source/internal/distributed/proxy/service.go:675\n | github.com/milvus-io/milvus/internal/distributed/proxy.(*Server).Run\n | \t/workspace/source/internal/distributed/proxy/service.go:452\n | github.com/milvus-io/milvus/cmd/components.(*Proxy).Run\n | \t/workspace/source/cmd/components/proxy.go:54\n | github.com/milvus-io/milvus/cmd/roles.runComponent[...].func1\n | \t/workspace/source/cmd/roles/roles.go:121\n | runtime.goexit\n | \t/usr/local/go/src/runtime/asm_amd64.s:1650\nWraps: (2) empty grpc client\nWraps: (3) find no available querycoord, check querycoord state\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *errors.errorString"]
[2024/09/04 16:03:14.641 +08:00] [INFO] [componentutil/componentutil.go:61] ["WaitForComponentStates success"] ["current state"=Healthy]
[2024/09/04 16:03:14.641 +08:00] [INFO] [componentutil/componentutil.go:61] ["WaitForComponentStates success"] ["current state"=Healthy]
[2024/09/04 16:03:14.641 +08:00] [INFO] [datanode/service.go:277] ["RootCoord client is ready for DataNode"]
[2024/09/04 16:03:14.643 +08:00] [INFO] [componentutil/componentutil.go:61] ["WaitForComponentStates success"] ["current state"=Healthy]
[2024/09/04 16:03:14.643 +08:00] [INFO] [querycoordv2/server.go:186] ["QueryCoord start init"] [meta-root-path=by-dev/meta] [address=172.31.0.6:19531]
[2024/09/04 16:03:14.643 +08:00] [INFO] [componentutil/componentutil.go:61] ["WaitForComponentStates success"] ["current state"=Healthy]
[2024/09/04 16:03:14.643 +08:00] [INFO] [datanode/service.go:296] ["DataCoord client is ready for DataNode"]
[2024/09/04 16:03:14.643 +08:00] [INFO] [datanode/data_node.go:237] ["DataNode server initializing"] [TimeTickChannelName=by-dev-datacoord-timetick-channel]
[2024/09/04 16:03:14.645 +08:00] [INFO] [sessionutil/session_util.go:289] ["start server"] [name=querycoord] [address=172.31.0.6:19531] [id=3]
[2024/09/04 16:03:14.645 +08:00] [INFO] [querycoordv2/server.go:218] ["start init querycoord"] [State=Initializing]
[2024/09/04 16:03:14.645 +08:00] [INFO] [querycoordv2/server.go:222] ["query coordinator connecting to etcd."]
[2024/09/04 16:03:14.645 +08:00] [INFO] [querycoordv2/server.go:234] ["query coordinator successfully connected to etcd."]
[2024/09/04 16:03:14.645 +08:00] [INFO] [sessionutil/session_util.go:289] ["start server"] [name=datanode] [address=172.31.0.6:21124] [id=3]
[2024/09/04 16:03:14.645 +08:00] [INFO] [sessionutil/session_util.go:1234] ["save server info into file"] [content="datanode-3\n"] [filePath=/tmp/milvus/server_id_8]
[2024/09/04 16:03:14.645 +08:00] [INFO] [datanode/data_node.go:257] ["DataNode server init rateCollector done"] [role=datanode] [nodeID=3]
[2024/09/04 16:03:14.645 +08:00] [INFO] [datanode/data_node.go:260] ["DataNode server init dispatcher client done"] [role=datanode] [nodeID=3]
[2024/09/04 16:03:14.645 +08:00] [INFO] [dependency/factory.go:85] ["try to init mq"] [standalone=true] [mqType=rocksmq]
[2024/09/04 16:03:14.645 +08:00] [INFO] [msgstream/mq_factory.go:244] ["init rocksmq msgstream success"] [path=/var/lib/milvus/rdb_data]
[2024/09/04 16:03:14.645 +08:00] [INFO] [datanode/data_node.go:271] ["DataNode server init succeeded"] [role=datanode] [nodeID=3] [MsgChannelSubName=by-dev-dataNode]
[2024/09/04 16:03:14.646 +08:00] [INFO] [tso/tso.go:122] ["sync and save timestamp"] [last=2024/09/04 15:38:05.708 +08:00] [save=2024/09/04 16:03:17.646 +08:00] [next=2024/09/04 16:03:14.646 +08:00]
[2024/09/04 16:03:14.646 +08:00] [INFO] [querycoordv2/server.go:245] ["init ID allocator done"]
[2024/09/04 16:03:14.646 +08:00] [INFO] [querycoordv2/server.go:348] ["init meta"]
[2024/09/04 16:03:14.646 +08:00] [INFO] [querycoordv2/server.go:357] ["recover meta..."]
[2024/09/04 16:03:14.647 +08:00] [INFO] [meta/collection_manager.go:147] ["recover collections and partitions from kv store"] [traceID=1725436994647748042]
[2024/09/04 16:03:14.647 +08:00] [INFO] [querycoordv2/server.go:364] ["recovering collections..."] [collections="[]"]
[2024/09/04 16:03:14.654 +08:00] [INFO] [storage/remote_chunk_manager.go:92] ["remote chunk manager init success."] [remote=aws] [bucketname=a-bucket] [root=files]
[2024/09/04 16:03:14.654 +08:00] [INFO] [meta/resource_manager.go:101] ["Recover resource group"] [rgName=__default_resource_group] [nodes="[2]"] [config="requests:<> limits:<node_num:1000000 > "]
[2024/09/04 16:03:14.654 +08:00] [INFO] [syncmgr/sync_manager.go:66] ["sync manager initialized"] [initPoolSize=256]
[2024/09/04 16:03:14.654 +08:00] [INFO] [datanode/data_node.go:302] ["init datanode done"] [role=datanode] [nodeID=3] [Address=172.31.0.6:21124]
[2024/09/04 16:03:14.654 +08:00] [INFO] [datanode/service.go:308] ["current DataNode state"] [state=Initializing]
[2024/09/04 16:03:14.654 +08:00] [INFO] [datanode/service.go:192] ["DataNode gRPC services successfully initialized"]
[2024/09/04 16:03:14.654 +08:00] [INFO] [datanode/data_node.go:365] ["start id allocator done"] [role=datanode]
[2024/09/04 16:03:14.654 +08:00] [INFO] [importv2/scheduler.go:53] ["start import scheduler"]
[2024/09/04 16:03:14.654 +08:00] [INFO] [datanode/channel_manager.go:177] ["DataNode ChannelManager start"]
[2024/09/04 16:03:14.654 +08:00] [INFO] [datanode/channel_checkpoint_updater.go:64] ["channel checkpoint updater start"]
[2024/09/04 16:03:14.654 +08:00] [INFO] [datanode/data_node.go:344] ["DataNode Background GC Start"]
[2024/09/04 16:03:14.654 +08:00] [INFO] [querycoordv2/server.go:394] ["QueryCoord server initMeta done"] [duration=7.923759ms]
[2024/09/04 16:03:14.654 +08:00] [INFO] [querycoordv2/server.go:257] ["init session"]
[2024/09/04 16:03:14.654 +08:00] [INFO] [querycoordv2/server.go:261] ["init schedulers"]
[2024/09/04 16:03:14.654 +08:00] [INFO] [querycoordv2/server.go:281] ["init proxy manager done"]
[2024/09/04 16:03:14.654 +08:00] [INFO] [querycoordv2/server.go:284] ["init dist controller"]
[2024/09/04 16:03:14.654 +08:00] [INFO] [querycoordv2/server.go:294] ["init checker controller"]
[2024/09/04 16:03:14.654 +08:00] [INFO] [querycoordv2/server.go:399] ["init observers"]
[2024/09/04 16:03:14.655 +08:00] [INFO] [querycoordv2/server.go:341] ["init querycoord done"] [nodeID=3] [Address=172.31.0.6:19531]
[2024/09/04 16:03:14.656 +08:00] [INFO] [sessionutil/session_util.go:466] ["put session key into etcd"] [key=by-dev/meta/session/datanode-3] [value="{"ServerID":3,"ServerName":"datanode","Address":"172.31.0.6:21124","TriggerKill":true,"Version":"2.4.9","IndexEngineVersion":{},"LeaseID":7587881183017939448,"HostName":"26e7d3a18d7a"}"]
[2024/09/04 16:03:14.656 +08:00] [INFO] [sessionutil/session_util.go:476] ["Service registered successfully"] [ServerName=datanode] [serverID=3]
[2024/09/04 16:03:14.656 +08:00] [INFO] [datanode/data_node.go:196] ["DataNode Register Finished"]
[2024/09/04 16:03:14.656 +08:00] [INFO] [datacoord/server.go:926] ["received datanode register"] [address=172.31.0.6:21124] [serverID=3]
[2024/09/04 16:03:14.656 +08:00] [INFO] [datacoord/channel_manager_v2.go:190] ["register node"] ["registered node"=3]
[2024/09/04 16:03:14.656 +08:00] [INFO] [datacoord/channel_manager_v2.go:196] ["register node with no reassignment"] ["registered node"=3]
[2024/09/04 16:03:14.656 +08:00] [INFO] [sessionutil/session_util.go:466] ["put session key into etcd"] [key=by-dev/meta/session/querycoord] [value="{"ServerID":3,"ServerName":"querycoord","Address":"172.31.0.6:19531","Exclusive":true,"TriggerKill":true,"Version":"2.4.9","IndexEngineVersion":{},"LeaseID":7587881183017939450,"HostName":"26e7d3a18d7a"}"]
[2024/09/04 16:03:14.656 +08:00] [INFO] [sessionutil/session_util.go:476] ["Service registered successfully"] [ServerName=querycoord] [serverID=3]
[2024/09/04 16:03:14.657 +08:00] [INFO] [datanode/service.go:197] ["DataNode gRPC services successfully started"]
[2024/09/04 16:03:14.657 +08:00] [INFO] [querycoordv2/server.go:441] ["start watcher..."]
[2024/09/04 16:03:14.658 +08:00] [INFO] [meta/resource_manager.go:833] ["unassign node to resource group"] [rgName=__default_resource_group] [node=2]
[2024/09/04 16:03:14.658 +08:00] [INFO] [meta/resource_manager.go:465] ["HandleNodeDown: remove node from resource group"] [rgName=__default_resource_group] [node=2] []
[2024/09/04 16:03:14.659 +08:00] [INFO] [querycoordv2/server.go:835] ["all old query node down, enable auto balance!"]
[2024/09/04 16:03:14.659 +08:00] [INFO] [proxyutil/proxy_watcher.go:96] ["succeed to init sessions on etcd"] [sessions=null] [revision=573]
[2024/09/04 16:03:14.660 +08:00] [INFO] [querycoordv2/server.go:491] ["start cluster..."]
[2024/09/04 16:03:14.660 +08:00] [INFO] [querycoordv2/server.go:494] ["start observers..."]
[2024/09/04 16:03:14.660 +08:00] [INFO] [proxyutil/proxy_watcher.go:119] ["start to watch etcd"]
[2024/09/04 16:03:14.660 +08:00] [INFO] [observers/target_observer.go:131] ["Start update next target loop"]
[2024/09/04 16:03:14.660 +08:00] [INFO] [observers/target_observer.go:145] ["target observer init done"]
[2024/09/04 16:03:14.660 +08:00] [INFO] [querycoordv2/server.go:500] ["start task scheduler..."]
[2024/09/04 16:03:14.660 +08:00] [INFO] [observers/resource_observer.go:67] ["Start check resource group loop"]
[2024/09/04 16:03:14.660 +08:00] [INFO] [querycoordv2/server.go:503] ["start checker controller..."]
[2024/09/04 16:03:14.660 +08:00] [INFO] [querycoordv2/server.go:506] ["start job scheduler..."]
[2024/09/04 16:03:14.660 +08:00] [INFO] [sessionutil/session_util.go:1234] ["save server info into file"] [content="querycoord-3\n"] [filePath=/tmp/milvus/server_id_8]
[2024/09/04 16:03:14.660 +08:00] [INFO] [querycoordv2/server.go:435] ["QueryCoord started"]
[2024/09/04 16:03:14.660 +08:00] [INFO] [observers/replica_observer.go:69] ["Start check replica loop"]
[2024/09/04 16:03:14.678 +08:00] [INFO] [sessionutil/session_util.go:914] ["register session success"] [role=querycoord] [key=by-dev/meta/session/querycoord]
[2024/09/04 16:03:14.704 +08:00] [INFO] [rootcoord/root_coord.go:931] ["received request to list databases"] [msgID=0]
[2024/09/04 16:03:14.704 +08:00] [WARN] [rootcoord/list_db_task.go:56] ["get current user from context failed"] [error="fail to get authorization from the md, authorization:[token]"]
[2024/09/04 16:03:14.704 +08:00] [INFO] [rootcoord/root_coord.go:957] ["done to list databases"] [msgID=0] ["num of databases"=1]
[2024/09/04 16:03:14.705 +08:00] [WARN] [rootcoord/show_collection_task.go:63] ["get current user from context failed"] [error="fail to get authorization from the md, authorization:[token]"]
[2024/09/04 16:03:14.748 +08:00] [WARN] [grpcclient/client.go:478] ["grpc client is nil, maybe fail to get client in the retry state"] [client_role=querycoord] [error="empty grpc client: find no available querycoord, check querycoord state"] [errorVerbose="empty grpc client: find no available querycoord, check querycoord state\n(1) attached stack trace\n -- stack trace:\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call.func2\n | \t/workspace/source/internal/util/grpcclient/client.go:477\n | github.com/milvus-io/milvus/pkg/util/retry.Handle\n | \t/workspace/source/pkg/util/retry/retry.go:104\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).call\n | \t/workspace/source/internal/util/grpcclient/client.go:470\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).Call\n | \t/workspace/source/internal/util/grpcclient/client.go:557\n | github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).ReCall\n | \t/workspace/source/internal/util/grpcclient/client.go:573\n | github.com/milvus-io/milvus/internal/distributed/querycoord/client.wrapGrpcCall[...]\n | \t/workspace/source/internal/distributed/querycoord/client/client.go:100\n | github.com/milvus-io/milvus/internal/distributed/querycoord/client.(*Client).GetComponentStates\n | \t/workspace/source/internal/distributed/querycoord/client/client.go:114\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...].func1\n | \t/workspace/source/internal/util/componentutil/componentutil.go:39\n | github.com/milvus-io/milvus/pkg/util/retry.Do\n | \t/workspace/source/pkg/util/retry/retry.go:44\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentStates[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:64\n | github.com/milvus-io/milvus/internal/util/componentutil.WaitForComponentHealthy[...]\n | \t/workspace/source/internal/util/componentutil/componentutil.go:85\n | github.com/milvus-io/milvus/internal/distributed/proxy.(*Server).init\n | \t/workspace/source/internal/distributed/proxy/service.go:675\n | github.com/milvus-io/milvus/internal/distributed/proxy.(*Server).Run\n | \t/workspace/source/internal/distributed/proxy/service.go:452\n | github.com/milvus-io/milvus/cmd/components.(Proxy).Run\n | \t/workspace/source/cmd/components/proxy.go:54\n | github.com/milvus-io/milvus/cmd/roles.runComponent[...].func1\n | \t/workspace/source/cmd/roles/roles.go:121\n | runtime.goexit\n | \t/usr/local/go/src/runtime/asm_amd64.s:1650\nWraps: (2) empty grpc client\nWraps: (3) find no available querycoord, check querycoord state\nError types: (1) withstack.withStack (2) errutil.withPrefix (3) errors.errorString"]
[2024/09/04 16:03:15.151 +08:00] [INFO] [componentutil/componentutil.go:61] ["WaitForComponentStates success"] ["current state"=Healthy]
[2024/09/04 16:03:15.151 +08:00] [INFO] [proxy/proxy.go:215] ["init session for Proxy"]
[2024/09/04 16:03:15.153 +08:00] [INFO] [sessionutil/session_util.go:289] ["start server"] [name=proxy] [address=172.31.0.6:19529] [id=3]
[2024/09/04 16:03:15.153 +08:00] [INFO] [sessionutil/session_util.go:1234] ["save server info into file"] [content="proxy-3\n"] [filePath=/tmp/milvus/server_id_8]
[2024/09/04 16:03:15.153 +08:00] [INFO] [proxy/proxy.go:220] ["init session for Proxy done"]
[2024/09/04 16:03:15.153 +08:00] [INFO] [dependency/factory.go:85] ["try to init mq"] [standalone=true] [mqType=rocksmq]
[2024/09/04 16:03:15.153 +08:00] [INFO] [msgstream/mq_factory.go:244] ["init rocksmq msgstream success"] [path=/var/lib/milvus/rdb_data]
[2024/09/04 16:03:15.153 +08:00] [INFO] [proxy/proxy.go:230] ["Proxy init rateCollector done"] [nodeID=3]
[2024/09/04 16:03:15.156 +08:00] [INFO] [msgstream/mq_msgstream.go:117] ["Msg Stream state"] [can_produce=true]
[2024/09/04 16:03:15.156 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-replicate-msg]
[2024/09/04 16:03:15.159 +08:00] [INFO] [proxy/meta_cache.go:296] ["success to init meta cache"] [policy_infos="["{\"PType\":\"p\",\"V0\":\"public\",\"V1\":\"Collection-.*\",\"V2\":\"PrivilegeIndexDetail\"}","{\"PType\":\"p\",\"V0\":\"public\",\"V1\":\"Global-.*\",\"V2\":\"PrivilegeDescribeCollection\"}"]"]
[2024/09/04 16:03:15.159 +08:00] [INFO] [proxy/proxy.go:302] ["init proxy done"] [nodeID=3] [Address=172.31.0.6:19529]
[2024/09/04 16:03:15.161 +08:00] [INFO] [sessionutil/session_util.go:466] ["put session key into etcd"] [key=by-dev/meta/session/proxy-3] [value="{"ServerID":3,"ServerName":"proxy","Address":"172.31.0.6:19529","TriggerKill":true,"Version":"2.4.9","IndexEngineVersion":{},"LeaseID":7587881183017939467,"HostName":"26e7d3a18d7a"}"]
[2024/09/04 16:03:15.161 +08:00] [INFO] [sessionutil/session_util.go:476] ["Service registered successfully"] [ServerName=proxy] [serverID=3]
[2024/09/04 16:03:15.161 +08:00] [INFO] [rootcoord/timeticksync.go:233] ["Add session for timeticksync"] [serverID=3]
[2024/09/04 16:03:15.161 +08:00] [INFO] [proxy/proxy.go:175] ["Proxy Register Finished"]
[2024/09/04 16:03:15.162 +08:00] [INFO] [proxyutil/proxy_client_manager.go:156] ["succeed to create proxy client"] [address=172.31.0.6:19529] [serverID=3]
[2024/09/04 16:03:15.162 +08:00] [INFO] [proxyutil/proxy_client_manager.go:156] ["succeed to create proxy client"] [address=172.31.0.6:19529] [serverID=3]
[2024/09/04 16:03:15.162 +08:00] [INFO] [proxy/service.go:713] ["start Proxy http server"]
[2024/09/04 16:03:15.162 +08:00] [INFO] [components/proxy.go:58] ["Proxy successfully started"]
[2024/09/04 16:03:15.180 +08:00] [INFO] [sessionutil/session_util.go:914] ["register session success"] [role=proxy] [key=by-dev/meta/session/proxy-3]
[2024/09/04 16:03:16.023 +08:00] [INFO] [indexnode/indexnode.go:221] ["init index node done"] [nodeID=3] [Address=172.31.0.6:21121]
[2024/09/04 16:03:16.023 +08:00] [INFO] [indexnode/indexnode.go:232] [IndexNode] [State=Healthy]
[2024/09/04 16:03:16.023 +08:00] [INFO] [indexnode/indexnode.go:235] ["IndexNode start finished"] []
[2024/09/04 16:03:16.027 +08:00] [INFO] [datacoord/server.go:957] ["received indexnode register"] [address=172.31.0.6:21121] [serverID=3]
[2024/09/04 16:03:16.027 +08:00] [INFO] [sessionutil/session_util.go:466] ["put session key into etcd"] [key=by-dev/meta/session/indexnode-3] [value="{"ServerID":3,"ServerName":"indexnode","Address":"172.31.0.6:21121","TriggerKill":true,"Version":"2.4.9","IndexEngineVersion":{},"LeaseID":7587881183017939472,"HostName":"26e7d3a18d7a","EnableDisk":true}"]
[2024/09/04 16:03:16.027 +08:00] [INFO] [sessionutil/session_util.go:476] ["Service registered successfully"] [ServerName=indexnode] [serverID=3]
[2024/09/04 16:03:16.086 +08:00] [INFO] [sessionutil/session_util.go:914] ["register session success"] [role=indexnode] [key=by-dev/meta/session/indexnode-3]
[2024/09/04 16:03:18.104 +08:00] [INFO] [querynodev2/server.go:364] ["query node init successfully"] [queryNodeID=3] [Address=172.31.0.6:21123]
[2024/09/04 16:03:18.104 +08:00] [INFO] [querynodev2/server.go:385] ["query node start successfully"] [queryNodeID=3] [Address=172.31.0.6:21123] [mmapEnabled=false] [growingmmapEnable=false]
[2024/09/04 16:03:18.104 +08:00] [INFO] [tasks/concurrent_safe_scheduler.go:213] ["start execute loop"]
[2024/09/04 16:03:18.106 +08:00] [INFO] [sessionutil/session_util.go:466] ["put session key into etcd"] [key=by-dev/meta/session/querynode-3] [value="{"ServerID":3,"ServerName":"querynode","Address":"172.31.0.6:21123","TriggerKill":true,"Version":"2.4.9","IndexEngineVersion":{"CurrentIndexVersion":5},"LeaseID":7587881183017939481,"HostName":"26e7d3a18d7a"}"]
[2024/09/04 16:03:18.106 +08:00] [INFO] [sessionutil/session_util.go:476] ["Service registered successfully"] [ServerName=querynode] [serverID=3]
[2024/09/04 16:03:18.106 +08:00] [INFO] [datacoord/server.go:977] ["received querynode register"] [address=172.31.0.6:21123] [serverID=3]
[2024/09/04 16:03:18.106 +08:00] [INFO] [datacoord/index_engine_version_manager.go:65] ["addOrUpdate version"] [nodeId=3] [minimal=0] [current=5]
[2024/09/04 16:03:18.106 +08:00] [INFO] [querycoordv2/server.go:679] ["add node to NodeManager"] [nodeID=3] [nodeAddr=172.31.0.6:21123]
[2024/09/04 16:03:18.107 +08:00] [INFO] [tracer/tracer.go:50] ["Init tracer finished"] [Exporter=noop]
[2024/09/04 16:03:18.108 +08:00] [INFO] [task/scheduler.go:251] ["add executor for new QueryNode"] [nodeID=3]
[2024/09/04 16:03:18.109 +08:00] [INFO] [dist/dist_handler.go:58] ["start dist handler"] [nodeID=3]

Anything else?

milvus.log

@littlePoBoy littlePoBoy added kind/bug Issues or changes related a bug needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Sep 5, 2024
@xiaofan-luan
Collaborator

From the log, the only reason you failed is that you didn't load the collection.
I don't see any other critical message besides that.
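
For anyone hitting "collection not loaded" errors on search, a minimal sketch of loading the collection first with milvus-sdk-go v2 (the collection name is a placeholder):

package main

import (
	"context"
	"log"

	"github.com/milvus-io/milvus-sdk-go/v2/client"
)

func main() {
	ctx := context.Background()
	c, err := client.NewGrpcClient(ctx, "localhost:19530")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// async=false blocks until the load finishes (or returns an error),
	// so searches issued afterwards won't fail with "collection not loaded".
	if err := c.LoadCollection(ctx, "demo_collection", false); err != nil {
		log.Fatal(err)
	}
}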

@xiaofan-luan
Collaborator

I guess this rootcoord message only appears during node startup.
What caused the node to go down?
How much memory do you have, and how much data are you trying to load?

@littlePoBoy
Author

I guess this rootcoord message only appears during node startup. What caused the node to go down? How much memory do you have, and how much data are you trying to load?

This log was captured after the crash and restart. I batch insert 2,000 rows at a time into Milvus; the total data is under 10k rows, and memory is sufficient.

@yanliang567
Contributor

@littlePoBoy are you upgrading from a previous version of Milvus? Could you please retry in a completely new environment?
[2024/09/04 16:03:14.099 +08:00] [WARN] [server/rocksmq_impl.go:413] ["rocksmq topic already exists "] [topic=by-dev-rootcoord-dml_13]

/assign @littlePoBoy
/unassign

@yanliang567 yanliang567 added triage/needs-information Indicates an issue needs more information in order to work on it. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Sep 6, 2024

stale bot commented Nov 9, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

@stale stale bot added the stale indicates no udpates for 30 days label Nov 9, 2024
@hou718945431

May I ask if this issue has been resolved? I am currently experiencing the same problem in version 2.3.9

@stale stale bot removed the stale indicates no udpates for 30 days label Dec 10, 2024
@deepakkumarglobal

/reopen

@deepakkumarglobal

I am still seeing this error. Is this issue resolved?

@yanliang567
Contributor

Please file a new issue and attach the Milvus pod logs for investigation. This issue is usually caused by the etcd service being too slow, which makes the Milvus pods lose their connection to etcd.
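
If you want to sanity-check etcd responsiveness from the host running Milvus, here is a minimal sketch using go.etcd.io/etcd/client/v3; it assumes etcd is reachable at localhost:2379 as in the compose file above, and what counts as "too slow" is only a rough indication:

package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Measure a single round trip; consistently slow responses here point to
	// the etcd-is-too-slow situation described above.
	start := time.Now()
	if _, err := cli.Get(ctx, "milvus-health-probe"); err != nil {
		log.Fatal(err)
	}
	log.Printf("etcd round trip: %v", time.Since(start))
}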


stale bot commented Jan 30, 2025

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen.

@stale stale bot added the stale indicates no udpates for 30 days label Jan 30, 2025