Basic Introduction
Official documentation: docs.mongodb.com/
Chinese documentation: www.mongodb.org.cn/
Advantages over an RDBMS (relational database)
- Schema-free: no fixed table structure
- Data is made up of key-value pairs; a document is similar to a JSON object, and field values can themselves be documents, arrays, or arrays of documents, so the structure of a single object stays clear
- No complex table joins, and no inter-table relationships to maintain
- Powerful query capabilities
- Easy to optimize and scale
- Application objects map naturally to database objects
- Supports both in-memory and on-disk storage, with a rich set of operations and index support
Comparison with SQL
Terminology
SQL | MongoDB | Notes
---|---|---
database | database |
table (Table) | collection (Collection) |
row / record (Row) | document (Document) | a document is one JSON-structured data record
column / field (Col) | field / key (Field) |
primary key (Primary Key) | ObjectId | `_id: ObjectId("10c191e8608f19729507deea")`
index (Index) | index (Index) | likewise divided into regular and unique indexes
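To make the row-to-document mapping concrete, here is a sketch using hypothetical data (none of these names come from the notes above): one SQL row rendered as a MongoDB document, where an embedded document and an array stand in for what SQL would model with joined tables.

```shell
# Hypothetical data for illustration: the SQL row (1, 'Alice', 30) as a document.
# The embedded "address" document and the "tags" array replace what SQL would
# normally model with separate, joined tables.
DOC='{"_id": 1, "name": "Alice", "age": 30, "address": {"city": "Beijing"}, "tags": ["admin"]}'
echo "$DOC"
```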
Installing MongoDB Community Edition on macOS
xcode-select --install
brew tap mongodb/brew
brew install mongodb-community@4.4
# If you hit these errors:
# Error: No similarly named formulae found.
# Error: No available formula with the name "mongosh" (dependency of mongodb/brew/mongodb-community).
# switch to a Homebrew bottle mirror and retry:
echo 'export HOMEBREW_BOTTLE_DOMAIN=https://mirrors.ustc.edu.cn/homebrew-bottles/' >> ~/.zshrc
source ~/.zshrc
brew update -v
brew install mongodb-community@4.4
# If you hit this error:
# Error: Your Command Line Tools (CLT) does not support macOS 11.
sudo rm -rf /Library/Developer/CommandLineTools
# after that completes, retry the install:
brew install mongodb-community@4.4
brew services start mongodb/brew/mongodb-community  # start the service
Starting and Running
mongo
# Output:
(base) meow:/ yongjuanwang$ mongo
MongoDB shell version v4.4.5
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("9d61154c-8789-4a36-96de-494999c18bcf") }
MongoDB server version: 4.4.5
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
https://docs.mongodb.com/
Questions? Try the MongoDB Developer Community Forums
https://community.mongodb.com
---
The server generated these startup warnings when booting:
2021-06-15T21:17:26.510+08:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted
---
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
Note:
Warning: the XFS filesystem with the WiredTiger storage engine is strongly recommended. Explanation: this warning appears because the machine (Ubuntu in the original notes) uses the ext4 filesystem, while MongoDB officially recommends XFS to get the best performance out of MongoDB; it can safely be ignored.
mongo is a command-line tool used to connect to a specific mongod instance.
When run with no arguments, the mongo command connects to localhost on the default port 27017.
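Since mongo defaults to localhost:27017, reaching any other instance means passing the host and port explicitly. A minimal sketch (the host, port, and database name below are placeholders, and the command is only echoed as a dry run, so no server is needed):

```shell
# Placeholders, not real infrastructure: connect to a specific mongod instance.
HOST=192.168.1.10
PORT=27017
DB=test
# Dry run: print the command instead of executing it, so no mongod is required here.
CONNECT="mongo --host $HOST --port $PORT $DB"
echo "$CONNECT"
```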
# Exit the interactive shell
exit
# Check the version
mongo --version
# View the help documentation
help
# Current server status
db.serverStatus()
# Show the address of the instance the current db is connected to
db.getMongo()
# View logs
show logs
show log global
# Database backup
mongodump -h dbhost -d dbname -o dbdirectory
# -h  server address and port, short for --host=<address> --port=<port>
# -d  name of the database to back up, same as --db=<name>
# -o  directory to save the dump in (create the directory in advance)
# Database restore
mongorestore -h dbhost -d dbname --dir dbdirectory
# -h  server address and port, short for --host=<address> --port=<port>
# -d  name of the database to restore, same as --db=<name>
# --dir  directory containing the backup data
# --drop  drop the existing data in MongoDB before restoring
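Putting the two commands together, a minimal backup-then-restore sketch. The database name, host, and paths are assumptions, and the commands are echoed as a dry run so the sketch works without a live server; remove the echoes to actually execute them.

```shell
# Assumed names: db "shop", local mongod on 127.0.0.1:27017, dump dir ./backup.
DB=shop
OUT=./backup
mkdir -p "$OUT"   # create the output directory in advance
DUMP="mongodump --host=127.0.0.1 --port=27017 --db=$DB --out=$OUT"
# --drop clears the existing data in MongoDB before restoring the dump
RESTORE="mongorestore --host=127.0.0.1 --port=27017 --db=$DB --drop --dir=$OUT/$DB"
# Dry run: print the commands; remove the echoes to run against a live server.
echo "$DUMP"
echo "$RESTORE"
```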
# Data export
mongoexport -d dbname -c collectionname -o file --type json/csv -f field
# -d  database name, same as --db=<name>
# -c  collection to export, same as --collection=<name>
# -o  file to save the exported data to
# --type  output format, JSON by default, or CSV; for CSV you must also pass -f "field1,field2,..."
# Data import
mongoimport -d dbname -c collectionname --file filename --headerline --type json/csv -f field
# -d  target database, same as --db=<name>
# -c  target collection, same as --collection=<name>
# --file  file to import the data from
# --type  input format, json by default, or csv
# For CSV: 1. pass -f "field1,field2,..."  2. optionally pass --headerline to treat the first row as the field names
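A concrete CSV round-trip sketch. The database, collection, and field names are assumptions, and the commands are echoed as a dry run so the sketch works without a live server.

```shell
# Assumed names: export "users" from db "shop" as CSV, then import it back.
DB=shop
COLL=users
FILE=users.csv
# CSV export requires an explicit field list via -f.
EXPORT="mongoexport --db=$DB --collection=$COLL --type=csv -f name,age -o $FILE"
# --headerline tells mongoimport to read the field names from the first CSV row.
IMPORT="mongoimport --db=$DB --collection=$COLL --type=csv --headerline --file=$FILE"
# Dry run: print the commands; remove the echoes to run against a live server.
echo "$EXPORT"
echo "$IMPORT"
```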
Sample db.serverStatus() output (truncated):
{
"host" : "meow.local",
"version" : "4.4.5",
"process" : "mongod",
"pid" : NumberLong(5020),
"uptime" : 906,
"uptimeMillis" : NumberLong(906236),
"uptimeEstimate" : NumberLong(906),
"localTime" : ISODate("2021-06-15T13:32:31.604Z"),
"asserts" : {
"regular" : 0,
"warning" : 0,
"msg" : 0,
"user" : 12,
"rollovers" : 0
},
"connections" : {
"current" : 1,
"available" : 51199,
"totalCreated" : 1,
"active" : 1,
"exhaustIsMaster" : 0,
"exhaustHello" : 0,
"awaitingTopologyChanges" : 0
},
"electionMetrics" : {
"stepUpCmd" : {
"called" : NumberLong(0),
"successful" : NumberLong(0)
},
"priorityTakeover" : {
"called" : NumberLong(0),
"successful" : NumberLong(0)
},
"catchUpTakeover" : {
"called" : NumberLong(0),
"successful" : NumberLong(0)
},
"electionTimeout" : {
"called" : NumberLong(0),
"successful" : NumberLong(0)
},
"freezeTimeout" : {
"called" : NumberLong(0),
"successful" : NumberLong(0)
},
"numStepDownsCausedByHigherTerm" : NumberLong(0),
"numCatchUps" : NumberLong(0),
"numCatchUpsSucceeded" : NumberLong(0),
"numCatchUpsAlreadyCaughtUp" : NumberLong(0),
"numCatchUpsSkipped" : NumberLong(0),
"numCatchUpsTimedOut" : NumberLong(0),
"numCatchUpsFailedWithError" : NumberLong(0),
"numCatchUpsFailedWithNewTerm" : NumberLong(0),
"numCatchUpsFailedWithReplSetAbortPrimaryCatchUpCmd" : NumberLong(0),
"averageCatchUpOps" : 0
},
"extra_info" : {
"note" : "fields vary by platform",
"page_faults" : 0
},
"flowControl" : {
"enabled" : true,
"targetRateLimit" : 1000000000,
"timeAcquiringMicros" : NumberLong(16),
"locksPerKiloOp" : 0,
"sustainerRate" : 0,
"isLagged" : false,
"isLaggedCount" : 0,
"isLaggedTimeMicros" : NumberLong(0)
},
"freeMonitoring" : {
"state" : "undecided"
},
"globalLock" : {
"totalTime" : NumberLong(906233000),
"currentQueue" : {
"total" : 0,
"readers" : 0,
"writers" : 0
},
"activeClients" : {
"total" : 0,
"readers" : 0,
"writers" : 0
}
},
"locks" : {
"ParallelBatchWriterMode" : {
"acquireCount" : {
"r" : NumberLong(57)
}
},
"ReplicationStateTransition" : {
"acquireCount" : {
"w" : NumberLong(3660)
}
},
"Global" : {
"acquireCount" : {
"r" : NumberLong(3631),
"w" : NumberLong(25),
"W" : NumberLong(4)
}
},
"Database" : {
"acquireCount" : {
"r" : NumberLong(45),
"w" : NumberLong(19),
"W" : NumberLong(6)
}
},
"Collection" : {
"acquireCount" : {
"r" : NumberLong(53),
"w" : NumberLong(20),
"W" : NumberLong(2)
}
},
"Mutex" : {
"acquireCount" : {
"r" : NumberLong(71)
}
}
},
"logicalSessionRecordCache" : {
"activeSessionsCount" : 1,
"sessionsCollectionJobCount" : 4,
"lastSessionsCollectionJobDurationMillis" : 0,
"lastSessionsCollectionJobTimestamp" : ISODate("2021-06-15T13:32:26.804Z"),
"lastSessionsCollectionJobEntriesRefreshed" : 0,
"lastSessionsCollectionJobEntriesEnded" : 0,
"lastSessionsCollectionJobCursorsClosed" : 0,
"transactionReaperJobCount" : 4,
"lastTransactionReaperJobDurationMillis" : 0,
"lastTransactionReaperJobTimestamp" : ISODate("2021-06-15T13:32:26.804Z"),
"lastTransactionReaperJobEntriesCleanedUp" : 0,
"sessionCatalogSize" : 0
},
"network" : {
"bytesIn" : NumberLong(2027),
"bytesOut" : NumberLong(12436),
"physicalBytesIn" : NumberLong(2027),
"physicalBytesOut" : NumberLong(12436),
"numSlowDNSOperations" : NumberLong(0),
"numSlowSSLOperations" : NumberLong(0),
"numRequests" : NumberLong(21),
"tcpFastOpen" : {
"serverSupported" : false,
"clientSupported" : false,
"accepted" : NumberLong(0)
},
"compression" : {
"snappy" : {
"compressor" : {
"bytesIn" : NumberLong(0),
"bytesOut" : NumberLong(0)
},
"decompressor" : {
"bytesIn" : NumberLong(0),
"bytesOut" : NumberLong(0)
}
},
"zstd" : {
"compressor" : {
"bytesIn" : NumberLong(0),
"bytesOut" : NumberLong(0)
},
"decompressor" : {
"bytesIn" : NumberLong(0),
"bytesOut" : NumberLong(0)
}
},
"zlib" : {
"compressor" : {
"bytesIn" : NumberLong(0),
"bytesOut" : NumberLong(0)
},
"decompressor" : {
"bytesIn" : NumberLong(0),
"bytesOut" : NumberLong(0)
}
}
},
"serviceExecutorTaskStats" : {
"executor" : "passthrough",
"threadsRunning" : 1
}
},
"opLatencies" : {
"reads" : {
"latency" : NumberLong(0),
"ops" : NumberLong(0)
},
"writes" : {
"latency" : NumberLong(0),
"ops" : NumberLong(0)
},
"commands" : {
"latency" : NumberLong(1229),
"ops" : NumberLong(20)
},
"transactions" : {
"latency" : NumberLong(0),
"ops" : NumberLong(0)
}
},
"opReadConcernCounters" : {
"available" : NumberLong(0),
"linearizable" : NumberLong(0),
"local" : NumberLong(0),
"majority" : NumberLong(0),
"snapshot" : NumberLong(0),
"none" : NumberLong(4)
},
"opcounters" : {
"insert" : NumberLong(0),
"query" : NumberLong(4),
"update" : NumberLong(1),
"delete" : NumberLong(0),
"getmore" : NumberLong(0),
"command" : NumberLong(30)
},
"opcountersRepl" : {
"insert" : NumberLong(0),
"query" : NumberLong(0),
"update" : NumberLong(0),
"delete" : NumberLong(0),
"getmore" : NumberLong(0),
"command" : NumberLong(0)
},
"security" : {
"authentication" : {
"mechanisms" : {
"MONGODB-X509" : {
"speculativeAuthenticate" : {
"received" : NumberLong(0),
"successful" : NumberLong(0)
},
"authenticate" : {
"received" : NumberLong(0),
"successful" : NumberLong(0)
}
},
"SCRAM-SHA-1" : {
"speculativeAuthenticate" : {
"received" : NumberLong(0),
"successful" : NumberLong(0)
},
"authenticate" : {
"received" : NumberLong(0),
"successful" : NumberLong(0)
}
},
"SCRAM-SHA-256" : {
"speculativeAuthenticate" : {
"received" : NumberLong(0),
"successful" : NumberLong(0)
},
"authenticate" : {
"received" : NumberLong(0),
"successful" : NumberLong(0)
}
}
}
}
},
"storageEngine" : {
"name" : "wiredTiger",
"supportsCommittedReads" : true,
"oldestRequiredTimestampForCrashRecovery" : Timestamp(0, 0),
"supportsPendingDrops" : true,
"dropPendingIdents" : NumberLong(0),
"supportsTwoPhaseIndexBuild" : true,
"supportsSnapshotReadConcern" : true,
"readOnly" : false,
"persistent" : true,
"backupCursorOpen" : false
},
"trafficRecording" : {
"running" : false
},
"transactions" : {
"retriedCommandsCount" : NumberLong(0),
"retriedStatementsCount" : NumberLong(0),
"transactionsCollectionWriteCount" : NumberLong(0),
"currentActive" : NumberLong(0),
"currentInactive" : NumberLong(0),
"currentOpen" : NumberLong(0),
"totalAborted" : NumberLong(0),
"totalCommitted" : NumberLong(0),
"totalStarted" : NumberLong(0),
"totalPrepared" : NumberLong(0),
"totalPreparedThenCommitted" : NumberLong(0),
"totalPreparedThenAborted" : NumberLong(0),
"currentPrepared" : NumberLong(0)
},
"transportSecurity" : {
"1.0" : NumberLong(0),
"1.1" : NumberLong(0),
"1.2" : NumberLong(0),
"1.3" : NumberLong(0),
"unknown" : NumberLong(0)
},
"twoPhaseCommitCoordinator" : {
"totalCreated" : NumberLong(0),
"totalStartedTwoPhaseCommit" : NumberLong(0),
"totalAbortedTwoPhaseCommit" : NumberLong(0),
"totalCommittedTwoPhaseCommit" : NumberLong(0),
"currentInSteps" : {
"writingParticipantList" : NumberLong(0),
"waitingForVotes" : NumberLong(0),
"writingDecision" : NumberLong(0),
"waitingForDecisionAcks" : NumberLong(0),
"deletingCoordinatorDoc" : NumberLong(0)
}
},
"wiredTiger" : {
"uri" : "statistics:",
"block-manager" : {
"blocks pre-loaded" : 0,
"blocks read" : 15,
"blocks written" : 100,
"bytes read" : 61440,
"bytes read via memory map API" : 0,
"bytes read via system call API" : 0,
"bytes written" : 655360,
"bytes written for checkpoint" : 655360,
"bytes written via memory map API" : 0,
"bytes written via system call API" : 0,
"mapped blocks read" : 0,
"mapped bytes read" : 0,
"number of times the file was remapped because it changed size via fallocate or truncate" : 0,
"number of times the region was remapped via write" : 0
},
"cache" : {
"application threads page read from disk to cache count" : 0,
"application threads page read from disk to cache time (usecs)" : 0,
"application threads page write from cache to disk count" : 50,
"application threads page write from cache to disk time (usecs)" : 14387,
"bytes allocated for updates" : 50279,
"bytes belonging to page images in the cache" : 0,
"bytes belonging to the history store table in the cache" : 173,
"bytes not belonging to page images in the cache" : 56252,
"cache overflow score" : 0,
"eviction calls to get a page" : 44,
"eviction calls to get a page found queue empty" : 44,
"eviction calls to get a page found queue empty after locking" : 0,
"eviction currently operating in aggressive mode" : 0,
"eviction empty score" : 0,
"eviction passes of a file" : 0,
"eviction server candidate queue empty when topping up" : 0,
"eviction server candidate queue not empty when topping up" : 0,
"eviction server evicting pages" : 0,
"eviction server slept, because we did not make progress with eviction" : 0,
"eviction server unable to reach eviction goal" : 0,
"eviction server waiting for a leaf page" : 0,
"eviction state" : 64,
"eviction walk target strategy both clean and dirty pages" : 0,
"eviction walk target strategy only clean pages" : 0,
"eviction walk target strategy only dirty pages" : 0,
"eviction worker thread active" : 4,
"eviction worker thread created" : 0,
"eviction worker thread evicting pages" : 0,
"eviction worker thread removed" : 0,
"eviction worker thread stable number" : 0,
"files with active eviction walks" : 0,
"files with new eviction walks started" : 0,
"force re-tuning of eviction workers once in a while" : 0,
"forced eviction - history store pages failed to evict while session has history store cursor open" : 0,
"forced eviction - history store pages selected while session has history store cursor open" : 0,
"forced eviction - history store pages successfully evicted while session has history store cursor open" : 0,
"forced eviction - pages evicted that were clean count" : 0,
"forced eviction - pages evicted that were clean time (usecs)" : 0,
"forced eviction - pages evicted that were dirty count" : 0,
"forced eviction - pages evicted that were dirty time (usecs)" : 0,
"forced eviction - pages selected because of too many deleted items count" : 0,
"forced eviction - pages selected count" : 0,
"forced eviction - pages selected unable to be evicted count" : 0,
"forced eviction - pages selected unable to be evicted time" : 0,
"forced eviction - session returned rollback error while force evicting due to being oldest" : 0,
"hazard pointer check calls" : 0,
"hazard pointer check entries walked" : 0,
"hazard pointer maximum array length" : 0,
"history store score" : 0,
"history store table max on-disk size" : 0,
"history store table on-disk size" : 0,
"internal pages queued for eviction" : 0,
"internal pages seen by eviction walk" : 0,
"internal pages seen by eviction walk that are already queued" : 0,
"maximum bytes configured" : 16642998272,
"maximum page size at eviction" : 0,
"modified pages evicted by application threads" : 0,
"operations timed out waiting for space in cache" : 0,
"pages currently held in the cache" : 21,
"pages evicted by application threads" : 0,
"pages evicted in parallel with checkpoint" : 0,
"pages queued for eviction" : 0,
"pages queued for eviction post lru sorting" : 0,
"pages queued for urgent eviction" : 0,
"pages queued for urgent eviction during walk" : 0,
"pages queued for urgent eviction from history store due to high dirty content" : 0,
"pages seen by eviction walk that are already queued" : 0,
"pages selected for eviction unable to be evicted" : 0,
"pages selected for eviction unable to be evicted as the parent page has overflow items" : 0,
"pages selected for eviction unable to be evicted because of active children on an internal page" : 0,
"pages selected for eviction unable to be evicted because of failure in reconciliation" : 0,
"pages walked for eviction" : 0,
"percentage overhead" : 8,
"tracked bytes belonging to internal pages in the cache" : 5109,
"tracked bytes belonging to leaf pages in the cache" : 51143,
"tracked dirty pages in the cache" : 0,
"bytes currently in the cache" : 56252,
"bytes dirty in the cache cumulative" : 517929,
"bytes read into cache" : 0,
"bytes written from cache" : 280420,
"checkpoint blocked page eviction" : 0,
"eviction walk target pages histogram - 0-9" : 0,
"eviction walk target pages histogram - 10-31" : 0,
"eviction walk target pages histogram - 128 and higher" : 0,
"eviction walk target pages histogram - 32-63" : 0,
"eviction walk target pages histogram - 64-128" : 0,
"eviction walk target pages reduced due to history store cache pressure" : 0,
"eviction walks abandoned" : 0,
"eviction walks gave up because they restarted their walk twice" : 0,
"eviction walks gave up because they saw too many pages and found no candidates" : 0,
"eviction walks gave up because they saw too many pages and found too few candidates" : 0,
"eviction walks reached end of tree" : 0,
"eviction walks restarted" : 0,
"eviction walks started from root of tree" : 0,
"eviction walks started from saved location in tree" : 0,
"hazard pointer blocked page eviction" : 0,
"history store table insert calls" : 0,
"history store table insert calls that returned restart" : 0,
"history store table out-of-order resolved updates that lose their durable timestamp" : 0,
"history store table out-of-order updates that were fixed up by moving existing records" : 0,
"history store table out-of-order updates that were fixed up during insertion" : 0,
"history store table reads" : 0,
"history store table reads missed" : 0,
"history store table reads requiring squashed modifies" : 0,
"history store table truncation by rollback to stable to remove an unstable update" : 0,
"history store table truncation by rollback to stable to remove an update" : 0,
"history store table truncation to remove an update" : 0,
"history store table truncation to remove range of updates due to key being removed from the data page during reconciliation" : 0,
"history store table truncation to remove range of updates due to non timestamped update on data page" : 0,
"history store table writes requiring squashed modifies" : 0,
"in-memory page passed criteria to be split" : 0,
"in-memory page splits" : 0,
"internal pages evicted" : 0,
"internal pages split during eviction" : 0,
"leaf pages split during eviction" : 0,
"modified pages evicted" : 0,
"overflow pages read into cache" : 0,
"page split during eviction deepened the tree" : 0,
"page written requiring history store records" : 0,
"pages read into cache" : 0,
"pages read into cache after truncate" : 10,
"pages read into cache after truncate in prepare state" : 0,
"pages requested from the cache" : 646,
"pages seen by eviction walk" : 0,
"pages written from cache" : 50,
"pages written requiring in-memory restoration" : 0,
"tracked dirty bytes in the cache" : 0,
"unmodified pages evicted" : 0
},
"capacity" : {
"background fsync file handles considered" : 0,
"background fsync file handles synced" : 0,
"background fsync time (msecs)" : 0,
"bytes read" : 0,
"bytes written for checkpoint" : 279105,
"bytes written for eviction" : 0,
"bytes written for log" : 33408,
"bytes written total" : 312513,
"threshold to call fsync" : 0,
"time waiting due to total capacity (usecs)" : 0,
"time waiting during checkpoint (usecs)" : 0,
"time waiting during eviction (usecs)" : 0,
"time waiting during logging (usecs)" : 0,
"time waiting during read (usecs)" : 0
},
"checkpoint-cleanup" : {
"pages added for eviction" : 0,
"pages removed" : 0,
"pages skipped during tree walk" : 0,
"pages visited" : 25
},
"connection" : {
"auto adjusting condition resets" : 57,
"auto adjusting condition wait calls" : 5354,
"auto adjusting condition wait raced to update timeout and skipped updating" : 0,
"detected system time went backwards" : 0,
"files currently open" : 14,
"hash bucket array size for data handles" : 512,
"hash bucket array size general" : 512,
"memory allocations" : 43807,
"memory frees" : 42822,
"memory re-allocations" : 3050,
"pthread mutex condition wait calls" : 11181,
"pthread mutex shared lock read-lock calls" : 15357,
"pthread mutex shared lock write-lock calls" : 998,
"total fsync I/Os" : 134,
"total read I/Os" : 46,
"total write I/Os" : 171
},
"cursor" : {
"cached cursor count" : 21,
"cursor bulk loaded cursor insert calls" : 0,
"cursor close calls that result in cache" : 7168,
"cursor create calls" : 66,
"cursor insert calls" : 99,
"cursor insert key and value bytes" : 48453,
"cursor modify calls" : 0,
"cursor modify key and value bytes affected" : 0,
"cursor modify value bytes modified" : 0,
"cursor next calls" : 38,
"cursor operation restarted" : 0,
"cursor prev calls" : 10,
"cursor remove calls" : 0,
"cursor remove key bytes removed" : 0,
"cursor reserve calls" : 0,
"cursor reset calls" : 7787,
"cursor search calls" : 369,
"cursor search history store calls" : 0,
"cursor search near calls" : 22,
"cursor sweep buckets" : 1230,
"cursor sweep cursors closed" : 0,
"cursor sweep cursors examined" : 3,
"cursor sweeps" : 205,
"cursor truncate calls" : 0,
"cursor update calls" : 0,
"cursor update key and value bytes" : 0,
"cursor update value size change" : 0,
"cursors reused from cache" : 7147,
"Total number of entries skipped by cursor next calls" : 0,
"Total number of entries skipped by cursor prev calls" : 0,
"Total number of entries skipped to position the history store cursor" : 0,
"cursor next calls that skip due to a globally visible history store tombstone" : 0,
"cursor next calls that skip greater than or equal to 100 entries" : 0,
"cursor next calls that skip less than 100 entries" : 38,
"cursor prev calls that skip due to a globally visible history store tombstone" : 0,
"cursor prev calls that skip greater than or equal to 100 entries" : 0,
"cursor prev calls that skip less than 100 entries" : 10,
"open cursor count" : 7
},
"data-handle" : {
"connection data handle size" : 456,
"connection data handles currently active" : 21,
"connection sweep candidate became referenced" : 0,
"connection sweep dhandles closed" : 0,
"connection sweep dhandles removed from hash list" : 8,
"connection sweep time-of-death sets" : 148,
"connection sweeps" : 90,
"connection sweeps skipped due to checkpoint gathering handles" : 0,
"session dhandles swept" : 11,
"session sweep attempts" : 36
},
"lock" : {
"checkpoint lock acquisitions" : 15,
"checkpoint lock application thread wait time (usecs)" : 0,
"checkpoint lock internal thread wait time (usecs)" : 0,
"dhandle lock application thread time waiting (usecs)" : 0,
"dhandle lock internal thread time waiting (usecs)" : 0,
"dhandle read lock acquisitions" : 3534,
"dhandle write lock acquisitions" : 39,
"durable timestamp queue lock application thread time waiting (usecs)" : 0,
"durable timestamp queue lock internal thread time waiting (usecs)" : 0,
"durable timestamp queue read lock acquisitions" : 0,
"durable timestamp queue write lock acquisitions" : 0,
"metadata lock acquisitions" : 15,
"metadata lock application thread wait time (usecs)" : 0,
"metadata lock internal thread wait time (usecs)" : 0,
"read timestamp queue lock application thread time waiting (usecs)" : 0,
"read timestamp queue lock internal thread time waiting (usecs)" : 0,
"read timestamp queue read lock acquisitions" : 0,
"read timestamp queue write lock acquisitions" : 0,
"schema lock acquisitions" : 27,
"schema lock application thread wait time (usecs)" : 0,
"schema lock internal thread wait time (usecs)" : 0,
"table lock application thread time waiting for the table lock (usecs)" : 0,
"table lock internal thread time waiting for the table lock (usecs)" : 0,
"table read lock acquisitions" : 0,
"table write lock acquisitions" : 10,
"txn global lock application thread time waiting (usecs)" : 0,
"txn global lock internal thread time waiting (usecs)" : 0,
"txn global read lock acquisitions" : 56,
"txn global write lock acquisitions" : 47
},
"log" : {
"busy returns attempting to switch slots" : 0,
"force archive time sleeping (usecs)" : 0,
"log bytes of payload data" : 26078,
"log bytes written" : 33280,
"log files manually zero-filled" : 0,
"log flush operations" : 5683,
"log force write operations" : 6628,
"log force write operations skipped" : 6614,
"log records compressed" : 45,
"log records not compressed" : 3,
"log records too small to compress" : 44,
"log release advances write LSN" : 26,
"log scan operations" : 0,
"log scan records requiring two reads" : 0,
"log server thread advances write LSN" : 14,
"log server thread write LSN walk skipped" : 3855,
"log sync operations" : 37,
"log sync time duration (usecs)" : 1011125,
"log sync_dir operations" : 1,
"log sync_dir time duration (usecs)" : 20275,
"log write operations" : 92,
"logging bytes consolidated" : 32768,
"maximum log file size" : 104857600,
"number of pre-allocated log files to create" : 2,
"pre-allocated log files not ready and missed" : 1,
"pre-allocated log files prepared" : 2,
"pre-allocated log files used" : 0,
"records processed by log scan" : 0,
"slot close lost race" : 0,
"slot close unbuffered waits" : 0,
"slot closures" : 40,
"slot join atomic update races" : 0,
"slot join calls atomic updates raced" : 0,
"slot join calls did not yield" : 92,
"slot join calls found active slot closed" : 0,
"slot join calls slept" : 0,
"slot join calls yielded" : 0,
"slot join found active slot closed" : 0,
"slot joins yield time (usecs)" : 0,
"slot transitions unable to find free slot" : 0,
"slot unbuffered writes" : 0,
"total in-memory size of compressed records" : 48831,
"total log buffer size" : 33554432,
"total size of compressed records" : 24236,
"written slots coalesced" : 0,
"yields waiting for previous log file close" : 0
},
"perf" : {
"file system read latency histogram (bucket 1) - 10-49ms" : 0,
"file system read latency histogram (bucket 2) - 50-99ms" : 0,
"file system read latency histogram (bucket 3) - 100-249ms" : 0,
"file system read latency histogram (bucket 4) - 250-499ms" : 0,
"file system read latency histogram (bucket 5) - 500-999ms" : 0,
"file system read latency histogram (bucket 6) - 1000ms+" : 0,
"file system write latency histogram (bucket 1) - 10-49ms" : 2,
"file system write latency histogram (bucket 2) - 50-99ms" : 0,
"file system write latency histogram (bucket 3) - 100-249ms" : 0,
"file system write latency histogram (bucket 4) - 250-499ms" : 0,
"file system write latency histogram (bucket 5) - 500-999ms" : 0,
"file system write latency histogram (bucket 6) - 1000ms+" : 0,
"operation read latency histogram (bucket 1) - 100-249us" : 0,
"operation read latency histogram (bucket 2) - 250-499us" : 0,
"operation read latency histogram (bucket 3) - 500-999us" : 0,
"operation read latency histogram (bucket 4) - 1000-9999us" : 0,
"operation read latency histogram (bucket 5) - 10000us+" : 0,
"operation write latency histogram (bucket 1) - 100-249us" : 0,
"operation write latency histogram (bucket 2) - 250-499us" : 0,
"operation write latency histogram (bucket 3) - 500-999us" : 0,
"operation write latency histogram (bucket 4) - 1000-9999us" : 0,
"operation write latency histogram (bucket 5) - 10000us+" : 0
},
"reconciliation" : {
"internal-page overflow keys" : 0,
"leaf-page overflow keys" : 0,
"maximum seconds spent in a reconciliation call" : 0,
"page reconciliation calls that resulted in values with prepared transaction metadata" : 0,
"page reconciliation calls that resulted in values with timestamps" : 0,
"page reconciliation calls that resulted in values with transaction ids" : 15,
"pages written including at least one prepare state" : 0,
"pages written including at least one start timestamp" : 0,
"records written including a prepare state" : 0,
"split bytes currently awaiting free" : 0,
"split objects currently awaiting free" : 0,
"approximate byte size of timestamps in pages written" : 0,
"approximate byte size of transaction IDs in pages written" : 384,
"fast-path pages deleted" : 0,
"page reconciliation calls" : 50,
"page reconciliation calls for eviction" : 0,
"pages deleted" : 0,
"pages written including an aggregated newest start durable timestamp " : 0,
"pages written including an aggregated newest stop durable timestamp " : 0,
"pages written including an aggregated newest stop timestamp " : 0,
"pages written including an aggregated newest stop transaction ID" : 0,
"pages written including an aggregated newest transaction ID " : 0,
"pages written including an aggregated oldest start timestamp " : 0,
"pages written including an aggregated prepare" : 0,
"pages written including at least one start durable timestamp" : 0,
"pages written including at least one start transaction ID" : 15,
"pages written including at least one stop durable timestamp" : 0,
"pages written including at least one stop timestamp" : 0,
"pages written including at least one stop transaction ID" : 0,
"records written including a start durable timestamp" : 0,
"records written including a start timestamp" : 0,
"records written including a start transaction ID" : 48,
"records written including a stop durable timestamp" : 0,
"records written including a stop timestamp" : 0,
"records written including a stop transaction ID" : 0
},
"session" : {
"flush_tier operation calls" : 0,
"open session count" : 14,
"session query timestamp calls" : 0,
"table alter failed calls" : 0,
"table alter successful calls" : 0,
"table alter unchanged and skipped" : 0,
"table compact failed calls" : 0,
"table compact successful calls" : 0,
"table create failed calls" : 0,
"table create successful calls" : 9,
"table drop failed calls" : 0,
"table drop successful calls" : 0,
"table rename failed calls" : 0,
"table rename successful calls" : 0,
"table salvage failed calls" : 0,
"table salvage successful calls" : 0,
"table truncate failed calls" : 0,
"table truncate successful calls" : 0,
"table verify failed calls" : 0,
"table verify successful calls" : 0,
"tiered storage local retention time (secs)" : 0,
"tiered storage object size" : 0
},
"thread-state" : {
"active filesystem fsync calls" : 0,
"active filesystem read calls" : 0,
"active filesystem write calls" : 0
},
"thread-yield" : {
"application thread time evicting (usecs)" : 0,
"application thread time waiting for cache (usecs)" : 0,
"connection close blocked waiting for transaction state stabilization" : 0,
"connection close yielded for lsm manager shutdown" : 0,
"data handle lock yielded" : 0,
"get reference for page index and slot time sleeping (usecs)" : 0,
"log server sync yielded for log write" : 0,
"page access yielded due to prepare state change" : 0,
"page acquire busy blocked" : 0,
"page acquire eviction blocked" : 0,
"page acquire locked blocked" : 0,
"page acquire read blocked" : 0,
"page acquire time sleeping (usecs)" : 0,
"page delete rollback time sleeping for state change (usecs)" : 0,
"page reconciliation yielded due to child modification" : 0
},
"transaction" : {
"Number of prepared updates" : 0,
"prepared transactions" : 0,
"prepared transactions committed" : 0,
"prepared transactions currently active" : 0,
"prepared transactions rolled back" : 0,
"query timestamp calls" : 871,
"rollback to stable calls" : 0,
"rollback to stable pages visited" : 0,
"rollback to stable tree walk skipping pages" : 0,
"rollback to stable updates aborted" : 0,
"set timestamp calls" : 0,
"set timestamp durable calls" : 0,
"set timestamp durable updates" : 0,
"set timestamp oldest calls" : 0,
"set timestamp oldest updates" : 0,
"set timestamp stable calls" : 0,
"set timestamp stable updates" : 0,
"transaction begins" : 54,
"transaction checkpoint currently running" : 0,
"transaction checkpoint generation" : 16,
"transaction checkpoint history store file duration (usecs)" : 1,
"transaction checkpoint max time (msecs)" : 233,
"transaction checkpoint min time (msecs)" : 87,
"transaction checkpoint most recent duration for gathering all handles (usecs)" : 33,
"transaction checkpoint most recent duration for gathering applied handles (usecs)" : 0,
"transaction checkpoint most recent duration for gathering skipped handles (usecs)" : 17,
"transaction checkpoint most recent handles applied" : 0,
"transaction checkpoint most recent handles skipped" : 10,
"transaction checkpoint most recent handles walked" : 21,
"transaction checkpoint most recent time (msecs)" : 87,
"transaction checkpoint prepare currently running" : 0,
"transaction checkpoint prepare max time (msecs)" : 0,
"transaction checkpoint prepare min time (msecs)" : 0,
"transaction checkpoint prepare most recent time (msecs)" : 0,
"transaction checkpoint prepare total time (msecs)" : 0,
"transaction checkpoint scrub dirty target" : 0,
"transaction checkpoint scrub time (msecs)" : 0,
"transaction checkpoint total time (msecs)" : 1748,
"transaction checkpoints" : 15,
"transaction checkpoints skipped because database was clean" : 0,
"transaction failures due to history store" : 0,
"transaction fsync calls for checkpoint after allocating the transaction ID" : 15,
"transaction fsync duration for checkpoint after allocating the transaction ID (usecs)" : 21765,
"transaction range of IDs currently pinned" : 0,
"transaction range of IDs currently pinned by a checkpoint" : 0,
"transaction range of timestamps currently pinned" : 0,
"transaction range of timestamps pinned by a checkpoint" : 0,
"transaction range of timestamps pinned by the oldest active read timestamp" : 0,
"transaction range of timestamps pinned by the oldest timestamp" : 0,
"transaction read timestamp of the oldest active reader" : 0,
"transaction sync calls" : 0,
"transaction walk of concurrent sessions" : 2123,
"transactions committed" : 9,
"transactions rolled back" : 45,
"race to read prepared update retry" : 0,
"rollback to stable history store records with stop timestamps older than newer records" : 0,
"rollback to stable inconsistent checkpoint" : 0,
"rollback to stable keys removed" : 0,
"rollback to stable keys restored" : 0,
"rollback to stable restored tombstones from history store" : 0,
"rollback to stable restored updates from history store" : 0,
"rollback to stable sweeping history store keys" : 0,
"rollback to stable updates removed from history store" : 0,
"transaction checkpoints due to obsolete pages" : 0,
"update conflicts" : 0
},
"concurrentTransactions" : {
"write" : {
"out" : 0,
"available" : 128,
"totalTickets" : 128
},
"read" : {
"out" : 1,
"available" : 127,
"totalTickets" : 128
}
},
"snapshot-window-settings" : {
"cache pressure percentage threshold" : 95,
"current cache pressure percentage" : NumberLong(0),
"total number of SnapshotTooOld errors" : NumberLong(0),
"max target available snapshots window size in seconds" : 5,
"target available snapshots window size in seconds" : 5,
"current available snapshots window size in seconds" : 0,
"latest majority snapshot timestamp available" : "Jan 1 08:00:00:0",
"oldest majority snapshot timestamp available" : "Jan 1 08:00:00:0"
},
"oplog" : {
"visibility timestamp" : Timestamp(0, 0)
}
},
"mem" : {
"bits" : 64,
"resident" : 41,
"virtual" : 6941,
"supported" : true
},
"metrics" : {
"aggStageCounters" : {
"$_internalInhibitOptimization" : NumberLong(0),
"$_internalSplitPipeline" : NumberLong(0),
"$addFields" : NumberLong(0),
"$bucket" : NumberLong(0),
"$bucketAuto" : NumberLong(0),
"$changeStream" : NumberLong(0),
"$collStats" : NumberLong(0),
"$count" : NumberLong(0),
"$currentOp" : NumberLong(0),
"$facet" : NumberLong(0),
"$geoNear" : NumberLong(0),
"$graphLookup" : NumberLong(0),
"$group" : NumberLong(0),
"$indexStats" : NumberLong(0),
"$limit" : NumberLong(0),
"$listLocalSessions" : NumberLong(0),
"$listSessions" : NumberLong(0),
"$lookup" : NumberLong(0),
"$match" : NumberLong(0),
"$merge" : NumberLong(0),
"$mergeCursors" : NumberLong(0),
"$out" : NumberLong(0),
"$planCacheStats" : NumberLong(0),
"$project" : NumberLong(0),
"$redact" : NumberLong(0),
"$replaceRoot" : NumberLong(0),
"$replaceWith" : NumberLong(0),
"$sample" : NumberLong(0),
"$set" : NumberLong(1),
"$skip" : NumberLong(0),
"$sort" : NumberLong(0),
"$sortByCount" : NumberLong(0),
"$unionWith" : NumberLong(0),
"$unset" : NumberLong(0),
"$unwind" : NumberLong(0)
},
"commands" : {
"buildInfo" : {
"failed" : NumberLong(0),
"total" : NumberLong(3)
},
"createIndexes" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
},
"find" : {
"failed" : NumberLong(0),
"total" : NumberLong(4)
},
"getCmdLineOpts" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
},
"getFreeMonitoringStatus" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
},
"getLog" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
},
"isMaster" : {
"failed" : NumberLong(0),
"total" : NumberLong(11)
},
"listDatabases" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
},
"listIndexes" : {
"failed" : NumberLong(2),
"total" : NumberLong(8)
},
"replSetGetStatus" : {
"failed" : NumberLong(1),
"total" : NumberLong(1)
},
"serverStatus" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
},
"update" : {
"arrayFilters" : NumberLong(0),
"failed" : NumberLong(0),
"pipeline" : NumberLong(1),
"total" : NumberLong(1)
},
"whatsmyuri" : {
"failed" : NumberLong(0),
"total" : NumberLong(1)
}
},
"cursor" : {
"timedOut" : NumberLong(0),
"open" : {
"noTimeout" : NumberLong(0),
"pinned" : NumberLong(0),
"total" : NumberLong(0)
}
},
"document" : {
"deleted" : NumberLong(0),
"inserted" : NumberLong(0),
"returned" : NumberLong(0),
"updated" : NumberLong(0)
},
"getLastError" : {
"wtime" : {
"num" : 0,
"totalMillis" : 0
},
"wtimeouts" : NumberLong(0),
"default" : {
"unsatisfiable" : NumberLong(0),
"wtimeouts" : NumberLong(0)
}
},
"operation" : {
"scanAndOrder" : NumberLong(0),
"writeConflicts" : NumberLong(0)
},
"query" : {
"planCacheTotalSizeEstimateBytes" : NumberLong(0),
"updateOneOpStyleBroadcastWithExactIDCount" : NumberLong(0)
},
"queryExecutor" : {
"scanned" : NumberLong(0),
"scannedObjects" : NumberLong(0),
"collectionScans" : {
"nonTailable" : NumberLong(0),
"total" : NumberLong(0)
}
},
"record" : {
"moves" : NumberLong(0)
},
"repl" : {
"executor" : {
"pool" : {
"inProgressCount" : 0
},
"queues" : {
"networkInProgress" : 0,
"sleepers" : 0
},
"unsignaledEvents" : 0,
"shuttingDown" : false,
"networkInterface" : "DEPRECATED: getDiagnosticString is deprecated in NetworkInterfaceTL"
},
"apply" : {
"attemptsToBecomeSecondary" : NumberLong(0),
"batchSize" : NumberLong(0),
"batches" : {
"num" : 0,
"totalMillis" : 0
},
"ops" : NumberLong(0)
},
"buffer" : {
"count" : NumberLong(0),
"maxSizeBytes" : NumberLong(0),
"sizeBytes" : NumberLong(0)
},
"initialSync" : {
"completed" : NumberLong(0),
"failedAttempts" : NumberLong(0),
"failures" : NumberLong(0)
},
"network" : {
"bytes" : NumberLong(0),
"getmores" : {
"num" : 0,
"totalMillis" : 0,
"numEmptyBatches" : NumberLong(0)
},
"notPrimaryLegacyUnacknowledgedWrites" : NumberLong(0),
"notPrimaryUnacknowledgedWrites" : NumberLong(0),
"oplogGetMoresProcessed" : {
"num" : 0,
"totalMillis" : 0
},
"ops" : NumberLong(0),
"readersCreated" : NumberLong(0),
"replSetUpdatePosition" : {
"num" : NumberLong(0)
}
},
"stateTransition" : {
"lastStateTransition" : "",
"userOperationsKilled" : NumberLong(0),
"userOperationsRunning" : NumberLong(0)
},
"syncSource" : {
"numSelections" : NumberLong(0),
"numTimesChoseDifferent" : NumberLong(0),
"numTimesChoseSame" : NumberLong(0),
"numTimesCouldNotFind" : NumberLong(0)
}
},
"ttl" : {
"deletedDocuments" : NumberLong(0),
"passes" : NumberLong(15)
}
},
"ok" : 1
}
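The `metrics.commands` subdocument in the output above records a `failed`/`total` pair per command. Once the serverStatus() document has been parsed (e.g. from a driver), it is easy to pick out the commands that have ever failed. The sketch below is plain Python; the dict literal is a hand-transcribed excerpt of the output above standing in for a real parsed document:

```python
# Excerpt of serverStatus()["metrics"]["commands"], transcribed from the
# output above (normally you would get this dict from a MongoDB driver).
commands = {
    "listIndexes": {"failed": 2, "total": 8},
    "replSetGetStatus": {"failed": 1, "total": 1},
    "find": {"failed": 0, "total": 4},
}

# Keep only the commands that have failed at least once.
failures = {name: c["failed"] for name, c in commands.items() if c["failed"] > 0}
print(failures)
```

With the values shown above, this reports the 2 failed `listIndexes` calls and the 1 failed `replSetGetStatus` call.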
User Management
Creating Users and a Superuser
Switch to the admin database:
use admin
// create the account (user) administrator
db.createUser({
    user: "aa",
    pwd: "12345",
    roles: [
        { "role": "userAdminAnyDatabase", db: "admin" }
    ]
});
// result:
Successfully added user: {
"user" : "aa",
"roles" : [
{
"role" : "userAdminAnyDatabase",
"db" : "admin"
}
]
}
Create a superuser
// switch to the admin database
use admin
// create the superuser account
db.createUser({
    user: "super",
    pwd: "123456",
    roles: [
        "root"   // equivalent to { role: "root", db: "admin" }
    ]
})
db.createUser({
user: "python",
pwd: "123456",
roles: [
{role:"root", db:"admin"},
]
})
Built-in Roles
- Database user roles: read, readWrite
- Database administration roles: dbAdmin, dbOwner, userAdmin
- Cluster administration roles: clusterAdmin, clusterManager, clusterMonitor, hostManager
- Backup and restore roles: backup, restore
- All-database roles: readAnyDatabase, readWriteAnyDatabase, userAdminAnyDatabase, dbAdminAnyDatabase
- Superuser role: root
Built-in Role Descriptions
- read: read-only access to the specified database
- readWrite: read and write access to the specified database
- dbAdmin: administrative functions on the specified database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
- userAdmin: write access to the system.users collection; can create, delete, and manage users in the specified database
- clusterAdmin: only available in the admin database; grants administrative rights over all sharding and replica-set functions
- readAnyDatabase: only available in the admin database; grants read access to all databases
- readWriteAnyDatabase: only available in the admin database; grants read and write access to all databases
- userAdminAnyDatabase: only available in the admin database; grants userAdmin privileges on all databases
- dbAdminAnyDatabase: only available in the admin database; grants dbAdmin privileges on all databases
- root: only available in the admin database; the superuser role with full privileges
Creating a User for Your Own Database
// switch databases; the database is created automatically if it does not exist
use mofang
// to avoid a duplicate-user error, first remove any existing user with the same name: db.system.users.remove({user:"mofang"});
db.createUser({
user: "mofang",
pwd: "123",
roles: [
{ role: "dbOwner", db: "mofang"}
]
})
User Information
// list the users of the current database
use mofang
show users
// list all users in the system (switch to admin first and authenticate as an account administrator)
use admin
db.auth("root", "123456")
db.system.users.find()   // only usable in the admin database
Deleting Users
There are several ways to delete a user; below we delete by the user field (db.dropUser("mofang") also works):
db.system.users.remove({user:"mofang"});
// result:
WriteResult({ "nRemoved" : 1 })   // nRemoved > 0 means the user was deleted; 0 means nothing was deleted
Changing a Password
// switch to the corresponding database first
use mofang
// the user must already exist in this database
db.changeUserPassword("mofang", "123456")
MongoDB Authentication
Setting Up Authentication
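The demo below assumes access control has already been turned on, but the notes do not show that step. One common way (a sketch, not taken from the original notes) is to enable `security.authorization` in the mongod configuration file and restart the service; the file path varies by install, so the paths in the comment are assumptions:

```yaml
# mongod.conf -- enable role-based access control.
# Typical Homebrew locations (may differ on your machine):
#   /usr/local/etc/mongod.conf or /opt/homebrew/etc/mongod.conf
security:
  authorization: enabled
```

After editing the file, restart MongoDB (e.g. `brew services restart mongodb/brew/mongodb-community`). Alternatively, start mongod directly with the `--auth` flag.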
// with authentication enabled, reconnect and switch to mofang
mongo
use mofang
show users                  // errors before authenticating, e.g.: uncaught exception: Error: command usersInfo requires authentication
db.auth("mofang","123")     // authenticating with a wrong password fails:
// Error: Authentication failed.
// 0
db.auth("mofang","123456")  // authenticating with the correct password succeeds:
// 1
show users                  // once authenticated, the command is no longer blocked
Database Management
- Show all databases; empty databases are not listed (MongoDB reclaims databases that contain no data)
- Switch databases; the database is created if it does not exist
- Show the current working database
- Drop the current database; if the database does not exist, the command still succeeds and returns {"ok" : 1}
- Show the current database's statistics
show dbs
show databases
// switch to (or create) a database
use <database>
// show the current database
db
db.getName()
// drop the current database
db.dropDatabase()
// show the current database's statistics
db.stats()
Collection Management
In MongoDB you usually do not need to create collections explicitly; inserting a document creates the collection automatically.
// name is required, options are optional; if capped is true, size must also be set
db.createCollection( <collection_name>,
   {
     capped : <boolean>,    // create a capped collection: fixed-size; once full, new documents overwrite the oldest ones
     size : <bytes_size>,   // maximum size of the capped collection, in bytes
     max : <max_documents>  // maximum number of documents in the capped collection (a count, not bytes)
   }
);
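The capped-collection semantics described in the comments above (a fixed maximum, with the oldest entries overwritten once it is full) behave like a FIFO ring buffer. The sketch below is plain Python illustrating that eviction rule with a bounded deque, not MongoDB code:

```python
from collections import deque

# A capped collection with max = 3: once full, each new insert
# evicts the oldest document (FIFO overwrite).
capped = deque(maxlen=3)
for i in range(5):
    capped.append({"n": i})

# Only the three most recently inserted documents survive.
print(list(capped))
```

Inserting five documents into the max-3 "collection" leaves only the last three, mirroring how a capped collection discards its earliest records.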
// inserting a document into a nonexistent collection creates the collection automatically
db.<collection>.insert({"name":"python入门","price" : 31.4})
// list collections
show collections   // or: show tables, or db.getCollectionNames()
// drop a collection
db.<collection>.drop()
// access a collection
db.getCollection("<collection>")
db.<collection>
// print statistics for every collection in the current database
db.printCollectionStats()