This is a continuation of the previous article "Elasticsearch: Data streams (Part 1)". In that article, I described in detail how to create an ILM policy, an index template, and a data stream. After setting up a data stream, you can perform the following operations:
- Add documents to the data stream
- Search the data stream
- Get statistics for the data stream
- Manually roll over the data stream
- Open and close backing indices
- Reindex with the data stream
- Update documents in the data stream by query
- Delete documents in the data stream by query
- Update or delete documents in a backing index
Add documents to a data stream
To add a single document, use the index API. Ingest pipelines are supported. First, let's create the following ingest pipeline:
```
PUT _ingest/pipeline/add-timestamp
{
  "processors": [
    {
      "set": {
        "field": "@timestamp",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}
```
The command above adds a timestamp to each event, set to the time the ingest pipeline runs. We can then write a document into Elasticsearch with the following command:
```
POST /my-data-stream/_doc/?pipeline=add-timestamp
{
  "user": {
    "id": "8a4f500d"
  },
  "message": "Login successful"
}
```
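As a mental model of what the `set` processor above does, here is a minimal Python sketch (our own toy code, not Elasticsearch's implementation) that resolves the `{{_ingest.timestamp}}` template and writes it into the document the way the pipeline would:

```python
from datetime import datetime, timezone

def apply_set_processor(doc: dict, field: str, value: str) -> dict:
    """Toy model of an ingest 'set' processor: resolve the
    {{_ingest.timestamp}} template and set the target field."""
    if value == "{{_ingest.timestamp}}":
        value = datetime.now(timezone.utc).isoformat()
    doc[field] = value
    return doc

doc = {"user": {"id": "8a4f500d"}, "message": "Login successful"}
enriched = apply_set_processor(doc, "@timestamp", "{{_ingest.timestamp}}")
```

The real processor supports many more options (templated field names, `override`, etc.); this only illustrates why every document indexed through the pipeline ends up with an `@timestamp` field.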
You cannot add new documents to a data stream using the index API's PUT /&lt;target&gt;/_doc/&lt;_id&gt; request format. To specify a document ID, use the PUT /&lt;target&gt;/_create/&lt;_id&gt; format instead. Only an op_type of create is supported.
To add multiple documents with a single request, use the _bulk API. Only create actions are supported:
```
PUT /my-data-stream/_bulk?pipeline=add-timestamp&refresh
{"create":{ }}
{ "user": { "id": "vlb44hny" }, "message": "Login attempt failed" }
{"create":{ }}
{ "user": { "id": "8a4f500d" }, "message": "Login successful" }
{"create":{ }}
{ "user": { "id": "l7gk7f82" }, "message": "Logout successful" }
```
Search a data stream
The search APIs support data streams. As an example, let's use the search API to search the documents written above:
GET my-data-stream/_search?filter_path=**.hits
The response to the command above is:
```
{
  "hits": {
    "hits": [
      {
        "_index": ".ds-my-data-stream-2022.11.17-000001",
        "_id": "ODWUhIQBSwCQ4y3lc_VM",
        "_score": 1,
        "_source": {
          "message": "Login attempt failed",
          "user": {
            "id": "vlb44hny"
          },
          "@timestamp": "2022-11-17T07:53:52.203397799Z"
        }
      },
      {
        "_index": ".ds-my-data-stream-2022.11.17-000001",
        "_id": "OTWUhIQBSwCQ4y3lc_VM",
        "_score": 1,
        "_source": {
          "message": "Login successful",
          "user": {
            "id": "8a4f500d"
          },
          "@timestamp": "2022-11-17T07:53:52.203707924Z"
        }
      },
      {
        "_index": ".ds-my-data-stream-2022.11.17-000001",
        "_id": "OjWUhIQBSwCQ4y3lc_VM",
        "_score": 1,
        "_source": {
          "message": "Logout successful",
          "user": {
            "id": "l7gk7f82"
          },
          "@timestamp": "2022-11-17T07:53:52.203796507Z"
        }
      },
      {
        "_index": ".ds-my-data-stream-2022.11.17-000001",
        "_id": "NzWPhIQBSwCQ4y3lz_V1",
        "_score": 1,
        "_source": {
          "message": "Login successful",
          "user": {
            "id": "8a4f500d"
          },
          "@timestamp": "2022-11-17T07:48:47.915655422Z"
        }
      }
    ]
  }
}
```
Get statistics for a data stream
Use the data stream stats API to get statistics for one or more data streams:
GET /_data_stream/my-data-stream/_stats?human=true
The response to the command above is:
```
{
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "data_stream_count": 1,
  "backing_indices": 1,
  "total_store_size": "9.5kb",
  "total_store_size_bytes": 9762,
  "data_streams": [
    {
      "data_stream": "my-data-stream",
      "backing_indices": 1,
      "store_size": "9.5kb",
      "store_size_bytes": 9762,
      "maximum_timestamp": 1668671632203
    }
  ]
}
```
Manually roll over a data stream
Use the rollover API to manually roll over a data stream:
POST /my-data-stream/_rollover/
The command above returns:
```
{
  "acknowledged": true,
  "shards_acknowledged": true,
  "old_index": ".ds-my-data-stream-2022.11.17-000001",
  "new_index": ".ds-my-data-stream-2022.11.17-000002",
  "rolled_over": true,
  "dry_run": false,
  "conditions": {}
}
```
The response shows that a rollover happened: .ds-my-data-stream-2022.11.17-000002 is the new write index, so there are now two backing indices. We can confirm this with the _stats API:
GET /_data_stream/my-data-stream/_stats?human=true
```
{
  "_shards": {
    "total": 3,
    "successful": 2,
    "failed": 0
  },
  "data_stream_count": 1,
  "backing_indices": 2,
  "total_store_size": "5.3kb",
  "total_store_size_bytes": 5475,
  "data_streams": [
    {
      "data_stream": "my-data-stream",
      "backing_indices": 2,
      "store_size": "5.3kb",
      "store_size_bytes": 5475,
      "maximum_timestamp": 1668671632203
    }
  ]
}
```
From the output above, we can see that there are two backing indices. Let's check the document count of the first one:
GET .ds-my-data-stream-2022.11.17-000001/_count
The command above returns:
```
{
  "count": 4,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  }
}
```
In other words, the 4 documents written earlier went into the .ds-my-data-stream-2022.11.17-000001 index, while documents written from now on will go into .ds-my-data-stream-2022.11.17-000002. Its document count is currently 0:
GET .ds-my-data-stream-2022.11.17-000002/_count
The command above returns:
```
{
  "count": 0,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  }
}
```
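Backing index names follow the pattern .ds-&lt;data-stream&gt;-&lt;yyyy.MM.dd&gt;-&lt;generation&gt;, where the date is the index's creation date and the six-digit generation number increments on each rollover. A small sketch of that naming convention:

```python
def backing_index_name(stream: str, date: str, generation: int) -> str:
    """Build a backing index name following the
    .ds-<stream>-<yyyy.MM.dd>-<generation> convention,
    with the generation zero-padded to six digits."""
    return f".ds-{stream}-{date}-{generation:06d}"

# The second-generation index from the rollover above:
name = backing_index_name("my-data-stream", "2022.11.17", 2)
```

Note that because the date reflects creation time, a backing index created by a rollover on a later day carries that later date, so consecutive generations do not always share the same date component.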
Open closed backing indices
You cannot search a closed backing index, even by searching its data stream. You also cannot update or delete documents in a closed index.
To re-open a closed backing index, submit an open index API request directly to the index:
POST /.ds-my-data-stream-2022.11.17-000002/_open
To re-open all closed backing indices for a data stream, submit an open index API request to the stream:
POST /my-data-stream/_open/
Reindex with a data stream
Use the reindex API to copy documents from an existing index, alias, or data stream to a data stream. Because data streams are append-only, a reindex into a data stream must use an op_type of create. A reindex cannot update existing documents in a data stream.
```
POST /_reindex
{
  "source": {
    "index": "archive"
  },
  "dest": {
    "index": "my-data-stream",
    "op_type": "create"
  }
}
```
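Append-only here means that a create succeeds only for a new document ID; it never overwrites an existing one. The following sketch models that semantics with an in-memory store (our own toy class, not the Elasticsearch client):

```python
class AppendOnlyStore:
    """Toy model of a data stream's create-only write semantics."""

    def __init__(self):
        self._docs = {}

    def create(self, doc_id: str, doc: dict) -> bool:
        """Index the document only if the ID is new; refuse to overwrite.
        Elasticsearch would reject the duplicate with a version conflict."""
        if doc_id in self._docs:
            return False
        self._docs[doc_id] = doc
        return True

store = AppendOnlyStore()
first = store.create("doc-1", {"message": "Login successful"})
second = store.create("doc-1", {"message": "overwrite attempt"})
```

This is why a reindex into a data stream can add missing documents but can never update ones that are already there.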
Update documents in a data stream by query
Use the update by query API to update documents in a data stream that match a provided query:
```
POST /my-data-stream/_update_by_query
{
  "query": {
    "match": {
      "user.id": "l7gk7f82"
    }
  },
  "script": {
    "source": "ctx._source.user.id = params.new_id",
    "params": {
      "new_id": "XgdX0NoX"
    }
  }
}
```
Delete documents in a data stream by query
Use the delete by query API to delete documents in a data stream that match a provided query:
```
POST /my-data-stream/_delete_by_query
{
  "query": {
    "match": {
      "user.id": "vlb44hny"
    }
  }
}
```
Update or delete documents in a backing index
If needed, you can update or delete documents in a data stream by sending requests to the backing index containing the document. You'll need:
- The document ID
- The name of the backing index containing the document
- If updating the document, its sequence number and primary term

To get this information, use a search request:
```
GET /my-data-stream/_search
{
  "seq_no_primary_term": true,
  "query": {
    "match": {
      "user.id": "yWIumJd7"
    }
  }
}
```
The response:
```
{
  "took": 20,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 1,
      "relation": "eq"
    },
    "max_score": 0.2876821,
    "hits": [
      {
        "_index": ".ds-my-data-stream-2099.03.08-000003",   #1
        "_id": "bfspvnIBr7VVZlfp2lqX",                      #2
        "_seq_no": 0,                                       #3
        "_primary_term": 1,                                 #4
        "_score": 0.2876821,
        "_source": {
          "@timestamp": "2099-03-08T11:06:07.000Z",
          "user": {
            "id": "yWIumJd7"
          },
          "message": "Login successful"
        }
      }
    ]
  }
}
```
Notes:
- #1 The backing index containing the matching document
- #2 The document ID of the document
- #3 The current sequence number of the document
- #4 The primary term of the document

To update the document, use an index API request with valid if_seq_no and if_primary_term arguments:
```
PUT /.ds-my-data-stream-2099.03.08-000003/_doc/bfspvnIBr7VVZlfp2lqX?if_seq_no=0&if_primary_term=1
{
  "@timestamp": "2099-03-08T11:06:07.000Z",
  "user": {
    "id": "8a4f500d"
  },
  "message": "Login successful"
}
```
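The if_seq_no/if_primary_term pair is optimistic concurrency control: the write only succeeds if the document has not changed since you read it. A minimal sketch of the check Elasticsearch performs (toy code, not the actual implementation):

```python
def conditional_index(doc_meta: dict, if_seq_no: int, if_primary_term: int) -> bool:
    """Accept the write only when the stored document still carries the
    sequence number and primary term the caller observed at read time."""
    return (doc_meta["_seq_no"] == if_seq_no
            and doc_meta["_primary_term"] == if_primary_term)

meta = {"_seq_no": 0, "_primary_term": 1}   # values from the search response above
ok = conditional_index(meta, if_seq_no=0, if_primary_term=1)      # write accepted
stale = conditional_index(meta, if_seq_no=3, if_primary_term=1)   # version conflict
```

If another process updated the document in between, the stored `_seq_no` would have advanced and Elasticsearch would reject the request with a version conflict instead of silently overwriting the newer data.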
For more on updating documents, read the article "Elasticsearch: Understanding document versions and optimistic concurrency control".
To delete the document, use the delete API:
DELETE /.ds-my-data-stream-2099.03.08-000003/_doc/bfspvnIBr7VVZlfp2lqX
To delete or update multiple documents with a single request, use the _bulk API's delete, index, and update actions. For index actions, include valid if_seq_no and if_primary_term arguments:
```
PUT /_bulk?refresh
{ "index": { "_index": ".ds-my-data-stream-2099.03.08-000003", "_id": "bfspvnIBr7VVZlfp2lqX", "if_seq_no": 0, "if_primary_term": 1 } }
{ "@timestamp": "2099-03-08T11:06:07.000Z", "user": { "id": "8a4f500d" }, "message": "Login successful" }
```
Data stream lifecycle demo
In this section we demonstrate lifecycle management for documents written into Elasticsearch. If you have already done the exercises above, then three minutes after a rollover occurred, the documents written earlier will have been deleted, while the documents in the current write index remain. To make sure we start from a clean environment, run:
DELETE _data_stream/my-data-stream
Next, run the following command 5 times:
```
POST /my-data-stream/_doc/?pipeline=add-timestamp
{
  "user": {
    "id": "8a4f500d"
  },
  "message": "Login successful"
}
```
We check with the following command:
GET _data_stream/my-data-stream
The command above shows:
```
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-2022.11.18-000001",
          "index_uuid": "ln2AtGG0S4CKGs9kAvsSsQ"
        }
      ],
      "generation": 1,
      "_meta": {
        "my-custom-meta-field": "More arbitrary metadata",
        "description": "Template for my time series data"
      },
      "status": "YELLOW",
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false
    }
  ]
}
```
From the output above, we can see that an index named .ds-my-data-stream-2022.11.18-000001 has been created. In our lifecycle policy we set the maximum number of documents in a primary shard to 5, so if we write more documents to this data stream a rollover will happen, and within 3 minutes of that rollover .ds-my-data-stream-2022.11.18-000001 will be deleted automatically. After the rollover, the previous write index moves into the warm phase. After waiting a while, we check:
GET _data_stream/my-data-stream
```
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-2022.11.18-000001",
          "index_uuid": "ln2AtGG0S4CKGs9kAvsSsQ"
        },
        {
          "index_name": ".ds-my-data-stream-2022.11.18-000002",
          "index_uuid": "azlD_LO9QJqXW1akhLRGAA"
        }
      ],
      "generation": 2,
      "_meta": {
        "my-custom-meta-field": "More arbitrary metadata",
        "description": "Template for my time series data"
      },
      "status": "YELLOW",
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false
    }
  ]
}
```
GET my-data-stream/_ilm/explain
```
{
  "indices": {
    ".ds-my-data-stream-2022.11.18-000002": {
      "index": ".ds-my-data-stream-2022.11.18-000002",
      "managed": true,
      "policy": "my-lifecycle-policy",
      "index_creation_date_millis": 1668746714257,
      "time_since_index_creation": "4.22m",
      "lifecycle_date_millis": 1668746714257,
      "age": "4.22m",
      "phase": "hot",
      "phase_time_millis": 1668746714298,
      "action": "rollover",
      "action_time_millis": 1668746714498,
      "step": "check-rollover-ready",
      "step_time_millis": 1668746714498,
      "phase_execution": {
        "policy": "my-lifecycle-policy",
        "phase_definition": {
          "min_age": "0ms",
          "actions": {
            "rollover": {
              "max_primary_shard_size": "50gb",
              "max_age": "30d",
              "max_docs": 5,
              "max_primary_shard_docs": 5
            },
            "set_priority": {
              "priority": 204
            }
          }
        },
        "version": 1,
        "modified_date_in_millis": 1668666436429
      }
    }
  }
}
```
The output above shows that the current write index is .ds-my-data-stream-2022.11.18-000002. It is in the hot phase and its current action is rollover.
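ILM rolls the write index over as soon as any one of the rollover conditions in the hot phase is met. A sketch of that any-of evaluation (toy code; the thresholds are copied from the policy shown in the explain output):

```python
def should_rollover(doc_count: int, age_days: float, primary_shard_gb: float) -> bool:
    """Rollover triggers when ANY configured condition is satisfied."""
    conditions = [
        doc_count >= 5,           # max_docs / max_primary_shard_docs: 5
        age_days >= 30,           # max_age: 30d
        primary_shard_gb >= 50,   # max_primary_shard_size: 50gb
    ]
    return any(conditions)

trigger = should_rollover(doc_count=5, age_days=0.1, primary_shard_gb=0.001)  # 5 docs written
hold = should_rollover(doc_count=4, age_days=0.1, primary_shard_gb=0.001)     # not yet
```

Keep in mind that ILM only evaluates these conditions periodically (the indices.lifecycle.poll_interval, 10 minutes by default unless changed), so the rollover does not happen the instant the fifth document arrives.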
We check again with the following command:
GET /_data_stream/my-data-stream/_stats?human=true
The command above returns:
```
{
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "data_stream_count": 1,
  "backing_indices": 1,
  "total_store_size": "225b",
  "total_store_size_bytes": 225,
  "data_streams": [
    {
      "data_stream": "my-data-stream",
      "backing_indices": 1,
      "store_size": "225b",
      "store_size_bytes": 225,
      "maximum_timestamp": 0
    }
  ]
}
```
That is, there is only one backing index now, .ds-my-data-stream-2022.11.18-000002; the earlier .ds-my-data-stream-2022.11.18-000001 was automatically deleted three minutes after the rollover.
GET .ds-my-data-stream-2022.11.18-000002/_count
The command above shows:
```
{
  "count": 0,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  }
}
```
That is, the document count is 0. We run the following command 5 more times:
```
POST /my-data-stream/_doc/?pipeline=add-timestamp
{
  "user": {
    "id": "8a4f500d"
  },
  "message": "Login successful"
}
```
GET my-data-stream/_ilm/explain
```
{
  "indices": {
    ".ds-my-data-stream-2022.11.18-000002": {
      "index": ".ds-my-data-stream-2022.11.18-000002",
      "managed": true,
      "policy": "my-lifecycle-policy",
      "index_creation_date_millis": 1668746714257,
      "time_since_index_creation": "11.85m",
      "lifecycle_date_millis": 1668746714257,
      "age": "11.85m",
      "phase": "hot",
      "phase_time_millis": 1668746714298,
      "action": "rollover",
      "action_time_millis": 1668746714498,
      "step": "check-rollover-ready",
      "step_time_millis": 1668746714498,
      "phase_execution": {
        "policy": "my-lifecycle-policy",
        "phase_definition": {
          "min_age": "0ms",
          "actions": {
            "rollover": {
              "max_primary_shard_size": "50gb",
              "max_age": "30d",
              "max_docs": 5,
              "max_primary_shard_docs": 5
            },
            "set_priority": {
              "priority": 204
            }
          }
        },
        "version": 1,
        "modified_date_in_millis": 1668666436429
      }
    }
  }
}
```
After a while, run the command again:
GET my-data-stream/_ilm/explain
```
{
  "indices": {
    ".ds-my-data-stream-2022.11.18-000002": {
      "index": ".ds-my-data-stream-2022.11.18-000002",
      "managed": true,
      "policy": "my-lifecycle-policy",
      "index_creation_date_millis": 1668746714257,
      "time_since_index_creation": "12.86m",
      "lifecycle_date_millis": 1668747484089,
      "age": "2.06s",
      "phase": "warm",
      "phase_time_millis": 1668747484490,
      "action": "forcemerge",
      "action_time_millis": 1668747485091,
      "step": "segment-count",
      "step_time_millis": 1668747485091,
      "phase_execution": {
        "policy": "my-lifecycle-policy",
        "phase_definition": {
          "min_age": "0d",
          "actions": {
            "allocate": {
              "number_of_replicas": 0,
              "include": {},
              "exclude": {},
              "require": {}
            },
            "forcemerge": {
              "max_num_segments": 1
            },
            "set_priority": {
              "priority": 50
            },
            "shrink": {
              "number_of_shards": 1
            }
          }
        },
        "version": 1,
        "modified_date_in_millis": 1668666436429
      }
    },
    ".ds-my-data-stream-2022.11.18-000004": {
      "index": ".ds-my-data-stream-2022.11.18-000004",
      "managed": true,
      "policy": "my-lifecycle-policy",
      "index_creation_date_millis": 1668747484227,
      "time_since_index_creation": "1.93s",
      "lifecycle_date_millis": 1668747484227,
      "age": "1.93s",
      "phase": "hot",
      "phase_time_millis": 1668747484290,
      "action": "rollover",
      "action_time_millis": 1668747484490,
      "step": "check-rollover-ready",
      "step_time_millis": 1668747484490,
      "phase_execution": {
        "policy": "my-lifecycle-policy",
        "phase_definition": {
          "min_age": "0ms",
          "actions": {
            "rollover": {
              "max_primary_shard_size": "50gb",
              "max_age": "30d",
              "max_docs": 5,
              "max_primary_shard_docs": 5
            },
            "set_priority": {
              "priority": 204
            }
          }
        },
        "version": 1,
        "modified_date_in_millis": 1668666436429
      }
    }
  }
}
```
After another while, run the same command again:
```
{
  "indices": {
    ".ds-my-data-stream-2022.11.18-000002": {
      "index": ".ds-my-data-stream-2022.11.18-000002",
      "managed": true,
      "policy": "my-lifecycle-policy",
      "index_creation_date_millis": 1668746714257,
      "time_since_index_creation": "13.51m",
      "lifecycle_date_millis": 1668747484089,
      "age": "41.29s",
      "phase": "warm",
      "phase_time_millis": 1668747484490,
      "action": "complete",
      "action_time_millis": 1668747493998,
      "step": "complete",
      "step_time_millis": 1668747493998,
      "phase_execution": {
        "policy": "my-lifecycle-policy",
        "phase_definition": {
          "min_age": "0d",
          "actions": {
            "allocate": {
              "number_of_replicas": 0,
              "include": {},
              "exclude": {},
              "require": {}
            },
            "forcemerge": {
              "max_num_segments": 1
            },
            "set_priority": {
              "priority": 50
            },
            "shrink": {
              "number_of_shards": 1
            }
          }
        },
        "version": 1,
        "modified_date_in_millis": 1668666436429
      }
    },
    ".ds-my-data-stream-2022.11.18-000004": {
      "index": ".ds-my-data-stream-2022.11.18-000004",
      "managed": true,
      "policy": "my-lifecycle-policy",
      "index_creation_date_millis": 1668747484227,
      "time_since_index_creation": "41.15s",
      "lifecycle_date_millis": 1668747484227,
      "age": "41.15s",
      "phase": "hot",
      "phase_time_millis": 1668747484290,
      "action": "rollover",
      "action_time_millis": 1668747484490,
      "step": "check-rollover-ready",
      "step_time_millis": 1668747484490,
      "phase_execution": {
        "policy": "my-lifecycle-policy",
        "phase_definition": {
          "min_age": "0ms",
          "actions": {
            "rollover": {
              "max_primary_shard_size": "50gb",
              "max_age": "30d",
              "max_docs": 5,
              "max_primary_shard_docs": 5
            },
            "set_priority": {
              "priority": 204
            }
          }
        },
        "version": 1,
        "modified_date_in_millis": 1668666436429
      }
    }
  }
}
```
We can see that for .ds-my-data-stream-2022.11.18-000002 the warm phase is now complete.
Run the command again:
GET _data_stream/my-data-stream
```
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-2022.11.18-000002",
          "index_uuid": "azlD_LO9QJqXW1akhLRGAA"
        },
        {
          "index_name": ".ds-my-data-stream-2022.11.18-000004",
          "index_uuid": "2nq9klz0Qiir5UA2_s1I1w"
        }
      ],
      "generation": 4,
      "_meta": {
        "my-custom-meta-field": "More arbitrary metadata",
        "description": "Template for my time series data"
      },
      "status": "YELLOW",
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false
    }
  ]
}
```
The output above shows two indices existing at the same time: .ds-my-data-stream-2022.11.18-000002 and .ds-my-data-stream-2022.11.18-000004.
After some more time, run again:
GET _data_stream/my-data-stream
The command above shows:
```
{
  "data_streams": [
    {
      "name": "my-data-stream",
      "timestamp_field": {
        "name": "@timestamp"
      },
      "indices": [
        {
          "index_name": ".ds-my-data-stream-2022.11.18-000004",
          "index_uuid": "2nq9klz0Qiir5UA2_s1I1w"
        }
      ],
      "generation": 5,
      "_meta": {
        "my-custom-meta-field": "More arbitrary metadata",
        "description": "Template for my time series data"
      },
      "status": "YELLOW",
      "template": "my-index-template",
      "ilm_policy": "my-lifecycle-policy",
      "hidden": false,
      "system": false,
      "allow_custom_routing": false,
      "replicated": false
    }
  ]
}
```
From the returned data, we can see that only the .ds-my-data-stream-2022.11.18-000004 index remains; the earlier index .ds-my-data-stream-2022.11.18-000002 has been deleted.