I'm trying to run CnosDB in cluster mode on my PC, but it crashed on startup and threw an error: make dbms: Meta { source: TenantNotFound { tenant: "cnosdb" } }. It seems it failed to initialize the default tenant.
Version: cnosdb 2.3.0, revision f4b52408171467c5ea2b2f75480deecc6f6ad733
I have these two configurations:
Config for meta node:
id = 1
host = "127.0.0.1"
port = 8901
snapshot_path = "/tmp/meta_1/snapshot"
journal_path = "/tmp/meta_1/journal"
snapshot_per_events = 500
[log]
level = "info"
path = "/tmp/meta_1/logs"
[meta_init]
cluster_name = "meta_1"
admin_user = "root"
system_tenant = "cnosdb"
default_database = ["public", "usage_schema"]
[heartbeat]
heartbeat_recheck_interval = 300
heartbeat_expired_interval = 600
Config for data node:
host = '127.0.0.1'
[cluster]
name = 'data_1'
http_listen_port = 8912
grpc_listen_port = 8913
flight_rpc_listen_port = 8914
tcp_listen_port = 8915
meta_service_addr = ['127.0.0.1:8901']
[node_basic]
node_id = 1
cold_data_server = false
store_metrics = true
[deployment]
mode = 'query_tskv'
[query]
[storage]
path = '/tmp/data_1/data'
[wal]
path = '/tmp/data_1/wal'
[cache]
[log]
level = 'info'
path = '/tmp/data_1/logs'
[security]
[hinted_off]
[heartbeat]
## Clean
$ rm -rf /tmp/meta_1 /tmp/data_1
## Run `cnosdb-meta`
$ cnosdb-meta --config ./meta_1.toml
.........
## Init cnosdb-meta as a cluster
$ curl http://127.0.0.1:8901/init -d '{}'
{"Ok":null}
## Run `cnosdb`
$ cnosdb run --config data_1.toml
.........
2023-05-29T00:36:19.264002000+08:00 INFO tskv::wal: WAL '1' starts write
2023-05-29T00:36:19.264278000+08:00 WARN tskv::wal: Recover: reading wal from seq '0'
2023-05-29T00:36:19.264431000+08:00 INFO tskv::kvcore: Job 'WAL' starting.
2023-05-29T00:36:19.264463000+08:00 INFO tskv::kvcore: Flush task handler started
2023-05-29T00:36:19.264510000+08:00 INFO tskv::kvcore: Job 'WAL' started.
2023-05-29T00:36:19.264742000+08:00 INFO tskv::kvcore: Summary task handler started
The application panicked (crashed).
Message: make dbms: Meta { source: TenantNotFound { tenant: "cnosdb" } }
Location: main/src/server.rs:246
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ BACKTRACE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⋮ 10 frames hidden ⋮
11: cnosdb::server::ServiceBuilder::create_dbms::{{closure}}::h1aef1bae443f563d
at <unknown source file>:<unknown line>
12: cnosdb::server::ServiceBuilder::build_query_storage::{{closure}}::h778497860af930a7
at <unknown source file>:<unknown line>
13: cnosdb::main::{{closure}}::hd08bd0324e3b1f23
at <unknown source file>:<unknown line>
14: cnosdb::main::h4f1efcc4db9ef8ea
at <unknown source file>:<unknown line>
15: std::sys_common::backtrace::__rust_begin_short_backtrace::hf3cbd67dc5a868f8
at <unknown source file>:<unknown line>
16: std::rt::lang_start::{{closure}}::h76d82e46c7eb707c
at <unknown source file>:<unknown line>
17: std::rt::lang_start_internal::h672762ded3eb0218
at <unknown source file>:<unknown line>
18: _main
at <unknown source file>:<unknown line>
I've just checked the configurations and made sure that meta_1 is listening on 127.0.0.1:8901 and that data_1 is connecting to 127.0.0.1:8901 (a quick port check is sketched below), but I'm puzzled by the output messages.
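A minimal sketch of that check, assuming standard lsof/nc tools; the port comes from the meta config above:

$ lsof -nP -iTCP:8901 -sTCP:LISTEN    # confirm cnosdb-meta is listening on 8901
$ nc -vz 127.0.0.1 8901               # confirm the port is reachable from this host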
In the data node's configuration file, ensure that the name in the [cluster] section (here name = 'data_1') is the same as the cluster_name in the meta node's [meta_init] section (here cluster_name = "meta_1"). Otherwise the two nodes are treated as belonging to two separate clusters: the default tenant "cnosdb" is created under the meta cluster's name at init time, and a data node announcing a different cluster name cannot find it, which produces exactly this TenantNotFound panic. A corrected pair of snippets is sketched below.
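For example, keeping the meta node's cluster_name = "meta_1", only the data node's [cluster] name needs to change; the value itself is arbitrary, it just has to be identical in both files:

# meta_1.toml (unchanged)
[meta_init]
cluster_name = "meta_1"

# data_1.toml
[cluster]
name = 'meta_1'    # must match meta_init.cluster_name on the meta node
meta_service_addr = ['127.0.0.1:8901']

After changing the name, clean /tmp/meta_1 and /tmp/data_1 and re-run the init step so no state from the earlier attempt lingers. If the meta service's /debug endpoint is available (it dumps the raw key-value store), the key prefixes show which cluster the tenant was registered under:

$ curl http://127.0.0.1:8901/debug
# expect entries prefixed with the cluster name, e.g. /meta_1/tenants/... (illustrative)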