Background
I've recently gotten interested in Rust, so I picked a language-related project to dig into. Oddly, the build hit a strange protoc-related failure during compilation.
Strange: my protobuf version was already 23.3. I suspected the protoc binary's symlink or some environment variable was broken, went through a round of troubleshooting, then uninstalled protobuf and reinstalled it.
It turns out protobuf's release numbering changed midstream. Despite appearances, 23.x is not some older generation: starting with v21 in 2022, Google dropped the leading "3." from protoc's version number, so 23.3 is effectively what would have been 3.23.3, i.e. newer than the plain 3.x line, which is apparently what this build expects.
Google really did its users dirty with that renumbering.
So I reinstalled:
brew install protobuf@3
The @3 must not be dropped, otherwise 23.x gets installed again.
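The confusion is easy to hit, so here is a minimal sketch of how to tell which scheme a `protoc --version` number belongs to. The threshold 21 reflects the 2022 renumbering that started with v21; the `ver` value is hard-coded here as an example rather than read from a live install:

```shell
# Classify a protoc version string: majors >= 21 belong to the renumbered
# (post-2022) scheme, which is NEWER than the legacy 2.x/3.x lines.
ver="23.3"                 # e.g. the number printed by `protoc --version`
major="${ver%%.*}"
if [ "$major" -ge 21 ]; then
  echo "renumbered scheme (would have been 3.$ver)"
else
  echo "legacy 2.x/3.x scheme"
fi
```

Note also that versioned Homebrew formulas such as protobuf@3 are keg-only, so the binary may not land on PATH automatically; prost-build honors a PROTOC environment variable, so something like `PROTOC="$(brew --prefix protobuf@3)/bin/protoc" cargo build` can point the build at the right binary without relinking.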
Then:
cargo run -- standalone start
2023-07-06T13:27:11.615434Z INFO greptime: command line arguments
2023-07-06T13:27:11.615500Z INFO greptime: argument: target/debug/greptime
2023-07-06T13:27:11.615531Z INFO greptime: argument: standalone
2023-07-06T13:27:11.615559Z INFO greptime: argument: start
2023-07-06T13:27:11.615971Z INFO cmd::standalone: Standalone start command: StartCommand {
http_addr: None,
rpc_addr: None,
mysql_addr: None,
prom_addr: None,
postgres_addr: None,
opentsdb_addr: None,
influxdb_enable: false,
config_file: None,
enable_memory_catalog: false,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: None,
env_prefix: "GREPTIMEDB_STANDALONE",
}
2023-07-06T13:27:11.616207Z INFO cmd::standalone: Standalone frontend options: FrontendOptions {
mode: Standalone,
heartbeat_interval_millis: 5000,
retry_interval_millis: 5000,
http_options: Some(
HttpOptions {
addr: "127.0.0.1:4000",
timeout: 30s,
disable_dashboard: false,
body_limit: ReadableSize(
67108864,
),
},
),
grpc_options: Some(
GrpcOptions {
addr: "127.0.0.1:4001",
runtime_size: 8,
},
),
mysql_options: Some(
MysqlOptions {
addr: "127.0.0.1:4002",
runtime_size: 2,
tls: TlsOption {
mode: Disable,
cert_path: "",
key_path: "",
},
reject_no_database: None,
},
),
postgres_options: Some(
PostgresOptions {
addr: "127.0.0.1:4003",
runtime_size: 2,
tls: TlsOption {
mode: Disable,
cert_path: "",
key_path: "",
},
},
),
opentsdb_options: Some(
OpentsdbOptions {
addr: "127.0.0.1:4242",
runtime_size: 2,
},
),
influxdb_options: Some(
InfluxdbOptions {
enable: true,
},
),
prometheus_options: Some(
PrometheusOptions {
enable: true,
},
),
prom_options: Some(
PromOptions {
addr: "127.0.0.1:4004",
},
),
meta_client_options: None,
logging: LoggingOptions {
dir: "/tmp/greptimedb/logs",
level: None,
enable_jaeger_tracing: false,
},
}, datanode options: DatanodeOptions {
mode: Standalone,
enable_memory_catalog: false,
node_id: None,
rpc_addr: "127.0.0.1:3001",
rpc_hostname: None,
rpc_runtime_size: 8,
heartbeat_interval_millis: 5000,
http_opts: HttpOptions {
addr: "127.0.0.1:4000",
timeout: 30s,
disable_dashboard: false,
body_limit: ReadableSize(
67108864,
),
},
meta_client_options: None,
wal: WalConfig {
dir: None,
file_size: ReadableSize(
268435456,
),
purge_threshold: ReadableSize(
4294967296,
),
purge_interval: 600s,
read_batch_size: 128,
sync_write: false,
},
storage: StorageConfig {
global_ttl: None,
store: File(
FileConfig {
data_home: "/tmp/greptimedb",
},
),
compaction: CompactionConfig {
max_inflight_tasks: 4,
max_files_in_level0: 8,
max_purge_tasks: 32,
sst_write_buffer_size: ReadableSize(
8388608,
),
},
manifest: RegionManifestConfig {
checkpoint_margin: Some(
10,
),
gc_duration: Some(
600s,
),
checkpoint_on_startup: false,
compress: false,
},
flush: FlushConfig {
max_flush_tasks: 8,
region_write_buffer_size: ReadableSize(
33554432,
),
picker_schedule_interval: 300s,
auto_flush_interval: 3600s,
global_write_buffer_size: None,
},
},
procedure: ProcedureConfig {
max_retry_times: 3,
retry_delay: 500ms,
},
logging: LoggingOptions {
dir: "/tmp/greptimedb/logs",
level: None,
enable_jaeger_tracing: false,
},
}
2023-07-06T13:27:11.622042Z INFO common_runtime::global: Creating runtime with runtime_name: global-read, thread_name: read-worker, work_threads: 8.
2023-07-06T13:27:11.622455Z INFO common_runtime::global: Creating runtime with runtime_name: global-write, thread_name: write-worker, work_threads: 8.
2023-07-06T13:27:11.623016Z INFO common_runtime::global: Creating runtime with runtime_name: global-bg, thread_name: bg-worker, work_threads: 8.
2023-07-06T13:27:11.626447Z INFO datanode::store::fs: The file storage home is: /tmp/greptimedb/
2023-07-06T13:27:11.635258Z INFO datanode::instance: Creating logstore with config: WalConfig { dir: None, file_size: ReadableSize(268435456), purge_threshold: ReadableSize(4294967296), purge_interval: 600s, read_batch_size: 128, sync_write: false } and storage path: /tmp/greptimedb/wal/
2023-07-06T13:27:11.641137Z INFO raft_engine::engine: Recovering raft logs takes 4.547542ms
2023-07-06T13:27:11.666409Z INFO mito::engine: Mito engine opened table: system_catalog in schema: information_schema
2023-07-06T13:27:11.681309Z INFO storage::manifest::impl_: Updated manifest protocol from Protocol(0, 0) to Protocol(0, 0).
2023-07-06T13:27:11.701259Z INFO mito::table: Create table manifest at data/system/information_schema/0/manifest/, table_name: system_catalog
2023-07-06T13:27:11.701718Z INFO storage::manifest::impl_: Updated manifest protocol from Protocol(0, 0) to Protocol(0, 0).
2023-07-06T13:27:11.722049Z INFO catalog::local::manager: All system catalog entries processed, max table id: 0
2023-07-06T13:27:11.744205Z INFO datanode::instance: Creating procedure manager with config: ProcedureConfig { max_retry_times: 3, retry_delay: 500ms }
2023-07-06T13:27:11.744305Z INFO datanode::instance: The datanode internal storage path is: cluster/dn-0/
2023-07-06T13:27:11.744871Z INFO common_procedure::store: The procedure state store path is: cluster/dn-0/procedure/
2023-07-06T13:27:11.754038Z INFO catalog::local::manager: All system catalog entries processed, max table id: 0
2023-07-06T13:27:11.754954Z INFO storage::manifest::impl_: Updated manifest protocol from Protocol(0, 0) to Protocol(0, 0).
2023-07-06T13:27:11.763524Z INFO mito::table: Create table manifest at data/greptime/public/1/manifest/, table_name: scripts
2023-07-06T13:27:11.763675Z INFO storage::manifest::impl_: Updated manifest protocol from Protocol(0, 0) to Protocol(0, 0).
2023-07-06T13:27:11.781121Z INFO catalog: Created and registered system table: scripts
2023-07-06T13:27:11.782796Z INFO common_procedure::local: LocalManager start to recover
2023-07-06T13:27:11.783345Z INFO common_procedure::local: LocalManager finish recovery, cost: 0ms
2023-07-06T13:27:11.784401Z INFO cmd::standalone: Datanode instance started
2023-07-06T13:27:11.786260Z INFO frontend::server: Starting GRPC_SERVER at 127.0.0.1:4001
2023-07-06T13:27:11.787380Z INFO servers::grpc: gRPC server is bound to 127.0.0.1:4001
2023-07-06T13:27:11.807434Z INFO frontend::server: Starting MYSQL_SERVER at 127.0.0.1:4002
2023-07-06T13:27:11.808094Z INFO servers::server: MySQL server started at 127.0.0.1:4002
2023-07-06T13:27:11.808220Z INFO frontend::server: Starting OPENTSDB_SERVER at 127.0.0.1:4242
2023-07-06T13:27:11.808497Z INFO servers::server: OpenTSDB server started at 127.0.0.1:4242
2023-07-06T13:27:11.808598Z INFO frontend::server: Starting POSTGRES_SERVER at 127.0.0.1:4003
2023-07-06T13:27:11.808725Z INFO servers::server: Postgres server started at 127.0.0.1:4003
2023-07-06T13:27:11.809150Z INFO frontend::server: Starting HTTP_SERVER at 127.0.0.1:4000
2023-07-06T13:27:11.970306Z INFO servers::http: HTTP server is bound to 127.0.0.1:4000
2023-07-06T13:27:11.970458Z INFO frontend::server: Starting PROM_SERVER at 127.0.0.1:4004
2023-07-06T13:27:11.970965Z INFO servers::prom: Prometheus API server is bound to 127.0.0.1:4004
2023-07-06T13:32:11.651529Z INFO storage::scheduler: Submitted task: Region(4294967296, 0)
2023-07-06T13:32:11.651972Z INFO storage::scheduler: Submitted task: Region(0, 1)
2023-07-06T13:32:11.654018Z INFO storage::flush: Successfully flush memtables, region:4294967296, files: []
2023-07-06T13:32:11.686212Z INFO storage::flush: Successfully flush memtables, region:0, files: [FileId(ca83e9ce-5322-4856-a23e-50bdc21d993f)]
2023-07-06T13:32:11.692775Z INFO log_store::raft_engine::log_store: Namespace 0 obsoleted 1 entries, compacted index: 1, span: (Some(2), Some(2))
2023-07-06T13:37:11.655262Z INFO log_store::raft_engine::log_store: Successfully purged logstore files, namespaces need compaction: []
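The `ReadableSize(...)` values in the options dump above are raw byte counts, which makes the defaults hard to eyeball. A small sketch to decode them (the labels are copied from the dump; the byte values are exactly as logged):

```rust
// Decode the ReadableSize byte counts printed in the options dump
// into MiB, to make the default sizes easier to read.
fn main() {
    const MIB: u64 = 1024 * 1024;
    let sizes: [(&str, u64); 5] = [
        ("http body_limit", 67_108_864),          // 64 MiB
        ("wal file_size", 268_435_456),           // 256 MiB
        ("wal purge_threshold", 4_294_967_296),   // 4096 MiB = 4 GiB
        ("sst_write_buffer_size", 8_388_608),     // 8 MiB
        ("region_write_buffer_size", 33_554_432), // 32 MiB
    ];
    for (name, bytes) in sizes {
        println!("{name}: {} MiB", bytes / MIB);
    }
}
```

So out of the box the WAL rolls over at 256 MiB per file and is purged past 4 GiB, which matches the `storage::flush` / logstore-purge lines that show up five minutes into the log.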
Then I noticed something odd.
GreptimeDB bills itself as a home-grown "high-performance time-series database", so what are all these traditional database protocols (MySQL, Postgres, OpenTSDB) doing in here?
I'll leave that as a cliffhanger for the next post. For now, off to study the source alongside the official docs:
- GreptimeDB User Guide
- GreptimeDB Developer Guide
- GreptimeDB internal code document