There is no perfect program in this world, but that does not discourage us: writing programs is itself a continuous pursuit of perfection.
Custom analyzers (see the settings sketch after this outline):
- Character filters:
    1. Purpose: add, remove, or transform characters
    2. Count: zero or more allowed
    3. Built-in character filters:
        1. HTML Strip Character Filter: strips HTML tags
        2. Mapping Character Filter: replaces characters via a mapping table
        3. Pattern Replace Character Filter: regex-based replacement
- Tokenizer:
    1. Purpose:
        1. Splits the text into tokens
        2. Records the order and position of each token (used by phrase queries)
        3. Records the start and end character offsets of each token (used by highlighting)
        4. Records the type of each token (used for classification)
    2. Count: exactly one required
    3. Categories:
        1. Word-oriented tokenizers:
            1. Standard
            2. Letter
            3. Lowercase
            4. Whitespace
            5. UAX URL Email
            6. Classic
            7. Thai
        2. Partial word tokenizers:
            1. N-Gram
            2. Edge N-Gram
        3. Structured text tokenizers:
            1. Keyword
            2. Pattern
            3. Simple Pattern
            4. Char Group
            5. Simple Pattern Split
            6. Path Hierarchy
- Token filters:
    1. Purpose: add, remove, or transform tokens
    2. Count: zero or more allowed
    3. Categories:
        1. apostrophe
        2. asciifolding
        3. cjk bigram
        4. cjk width
        5. classic
        6. common grams
        7. conditional
        8. decimal digit
        9. delimited payload
        10. dictionary decompounder
        11. edge ngram
        12. elision
        13. fingerprint
        14. flatten_graph
        15. hunspell
        16. hyphenation decompounder
        17. keep types
        18. keep words
        19. keyword marker
        20. keyword repeat
        21. kstem
        22. length
        23. limit token count
        24. lowercase
        25. min_hash
        26. multiplexer
        27. ngram
        28. normalization
        29. pattern_capture
        30. pattern replace
        31. porter stem
        32. predicate script
        33. remove duplicates
        34. reverse
        35. shingle
        36. snowball
        37. stemmer
        38. stemmer override
        39. stop
        40. synonym
        41. synonym graph
        42. trim
        43. truncate
        44. unique
        45. uppercase
        46. word delimiter
        47. word delimiter graph
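To tie the three component types together before the demos, here is a minimal sketch of a custom analyzer declared in index settings: zero or more char_filter entries, exactly one tokenizer, zero or more filter entries. The index name my_custom_idx and analyzer name my_analyzer are made up for illustration; html_strip, standard, and lowercase are the built-ins listed above.
# A custom analyzer combining all three component types
# (my_custom_idx and my_analyzer are illustrative names)
PUT /my_custom_idx
{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "my_analyzer" : {
          "type" : "custom",
          "char_filter" : ["html_strip"],
          "tokenizer" : "standard",
          "filter" : ["lowercase"]
        }
      }
    }
  }
}
# Exercise the analyzer against the new index
GET /my_custom_idx/_analyze
{
  "analyzer" : "my_analyzer",
  "text" : ["<p>Hello World</p>"]
}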
Today we demonstrate filters 42-47.
# trim token filter
# Purpose: removes leading and trailing whitespace from each token
GET /_analyze
{
"tokenizer" : "keyword",
"filter" : ["trim"],
"text" : [" hello gooding me "]
}
# Result (note that trim does not adjust offsets: end_offset still spans the original, untrimmed input)
{
"tokens" : [
{
"token" : "hello gooding me",
"start_offset" : 0,
"end_offset" : 18,
"type" : "word",
"position" : 0
}
]
}
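The keyword tokenizer above is precisely where trim is useful: tokenizers such as standard or whitespace already drop surrounding whitespace while tokenizing, so trim only matters when the tokenizer can emit padded tokens. A minimal sketch using an inline pattern tokenizer that splits on commas and leaves the padding in place:
# Split on commas, then trim the padded pieces
GET /_analyze
{
  "tokenizer" : {
    "type" : "pattern",
    "pattern" : ","
  },
  "filter" : ["trim"],
  "text" : [" one , two , three "]
}
# Without trim the tokens would be " one ", " two ", " three "; with it: "one", "two", "three"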
# truncate token filter
# Purpose: shortens tokens that exceed a given length down to that length
# Options:
#   1. length: the target length, defaults to 10
GET /_analyze
{
"tokenizer" : "whitespace",
"filter" : [{
"type" : "truncate",
"length" : 4
}],
"text" : ["hello gooding me"]
}
# Result
{
"tokens" : [
{
"token" : "hell",
"start_offset" : 0,
"end_offset" : 5,
"type" : "word",
"position" : 0
},
{
"token" : "good",
"start_offset" : 6,
"end_offset" : 13,
"type" : "word",
"position" : 1
},
{
"token" : "me",
"start_offset" : 14,
"end_offset" : 16,
"type" : "word",
"position" : 2
}
]
}
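An inline filter definition like the one above is convenient for one-off testing; to reuse a configured truncate, it can be registered under analysis.filter in the index settings. A sketch, with truncate_example, my_truncate, and my_analyzer as illustrative names:
# Register a configured truncate filter for reuse
PUT /truncate_example
{
  "settings" : {
    "analysis" : {
      "filter" : {
        "my_truncate" : {
          "type" : "truncate",
          "length" : 4
        }
      },
      "analyzer" : {
        "my_analyzer" : {
          "type" : "custom",
          "tokenizer" : "whitespace",
          "filter" : ["my_truncate"]
        }
      }
    }
  }
}
GET /truncate_example/_analyze
{
  "analyzer" : "my_analyzer",
  "text" : ["hello gooding me"]
}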
# unique token filter
# Purpose: removes duplicate tokens
# Options:
#   1. only_on_same_position: whether to remove only duplicates at the same position, defaults to false
GET /_analyze
{
"tokenizer" : "whitespace",
"filter" : ["unique"],
"text" : ["hello gooding gooding me me"]
}
# Result
{
"tokens" : [
{
"token" : "hello",
"start_offset" : 0,
"end_offset" : 5,
"type" : "word",
"position" : 0
},
{
"token" : "gooding",
"start_offset" : 6,
"end_offset" : 13,
"type" : "word",
"position" : 1
},
{
"token" : "me",
"start_offset" : 22,
"end_offset" : 24,
"type" : "word",
"position" : 2
}
]
}
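The only_on_same_position option can be exercised the same way with an inline definition. When true, a duplicate should be removed only if it occupies the same position as the previous copy (the situation a synonym filter can produce); since each repeated word below advances the position, the expectation, as a sketch, is that all five tokens survive:
# Dedupe only tokens stacked at the same position
GET /_analyze
{
  "tokenizer" : "whitespace",
  "filter" : [{
    "type" : "unique",
    "only_on_same_position" : true
  }],
  "text" : ["hello gooding gooding me me"]
}
# Expected: all five tokens are kept, since no two duplicates share a position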
# uppercase token filter
# Purpose: converts tokens to uppercase
GET /_analyze
{
"tokenizer" : "whitespace",
"filter" : ["uppercase"],
"text" : ["hello gooding me"]
}
# Result
{
"tokens" : [
{
"token" : "HELLO",
"start_offset" : 0,
"end_offset" : 5,
"type" : "word",
"position" : 0
},
{
"token" : "GOODING",
"start_offset" : 6,
"end_offset" : 13,
"type" : "word",
"position" : 1
},
{
"token" : "ME",
"start_offset" : 14,
"end_offset" : 16,
"type" : "word",
"position" : 2
}
]
}
# word delimiter token filter
# Purpose:
#   1. Splits tokens on non-alphanumeric characters
#   2. Splits tokens on case transitions (camelCase)
#   3. Splits tokens on letter-number transitions
# Options: too many to list here
# Note: not recommended; prefer word_delimiter_graph
GET /_analyze
{
"tokenizer" : "keyword",
"filter" : ["word_delimiter"],
"text" : ["hello gooding me HelloGood Hello123 hello-good"]
}
# Result
{
"tokens" : [
{
"token" : "hello",
"start_offset" : 0,
"end_offset" : 5,
"type" : "word",
"position" : 0
},
{
"token" : "gooding",
"start_offset" : 6,
"end_offset" : 13,
"type" : "word",
"position" : 1
},
{
"token" : "me",
"start_offset" : 14,
"end_offset" : 16,
"type" : "word",
"position" : 2
},
{
"token" : "Hello",
"start_offset" : 17,
"end_offset" : 22,
"type" : "word",
"position" : 3
},
{
"token" : "Good",
"start_offset" : 22,
"end_offset" : 26,
"type" : "word",
"position" : 4
},
{
"token" : "Hello",
"start_offset" : 27,
"end_offset" : 32,
"type" : "word",
"position" : 5
},
{
"token" : "123",
"start_offset" : 32,
"end_offset" : 35,
"type" : "word",
"position" : 6
},
{
"token" : "hello",
"start_offset" : 36,
"end_offset" : 41,
"type" : "word",
"position" : 7
},
{
"token" : "good",
"start_offset" : 42,
"end_offset" : 46,
"type" : "word",
"position" : 8
}
]
}
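As a taste of those many options, preserve_original keeps the unsplit token alongside the generated parts. A minimal sketch:
# Keep the original token in addition to its parts
GET /_analyze
{
  "tokenizer" : "keyword",
  "filter" : [{
    "type" : "word_delimiter",
    "preserve_original" : true
  }],
  "text" : ["hello-good"]
}
# Expected: "hello-good" (the original), plus "hello" and "good"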
# word delimiter graph token filter
# Purpose: same as word_delimiter
# Options: too many to cover here
# Note: the recommended variant; works best with the keyword tokenizer
GET /_analyze
{
"tokenizer" : "keyword",
"filter" : ["word_delimiter_graph"],
"text" : ["hello gooding me HelloGood Hello123 hello-good"]
}
# Result
{
"tokens" : [
{
"token" : "hello",
"start_offset" : 0,
"end_offset" : 5,
"type" : "word",
"position" : 0
},
{
"token" : "gooding",
"start_offset" : 6,
"end_offset" : 13,
"type" : "word",
"position" : 1
},
{
"token" : "me",
"start_offset" : 14,
"end_offset" : 16,
"type" : "word",
"position" : 2
},
{
"token" : "Hello",
"start_offset" : 17,
"end_offset" : 22,
"type" : "word",
"position" : 3
},
{
"token" : "Good",
"start_offset" : 22,
"end_offset" : 26,
"type" : "word",
"position" : 4
},
{
"token" : "Hello",
"start_offset" : 27,
"end_offset" : 32,
"type" : "word",
"position" : 5
},
{
"token" : "123",
"start_offset" : 32,
"end_offset" : 35,
"type" : "word",
"position" : 6
},
{
"token" : "hello",
"start_offset" : 36,
"end_offset" : 41,
"type" : "word",
"position" : 7
},
{
"token" : "good",
"start_offset" : 42,
"end_offset" : 46,
"type" : "word",
"position" : 8
}
]
}
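Following the note above, here is a sketch that registers word_delimiter_graph behind the keyword tokenizer in index settings (wdg_example and my_wdg_analyzer are illustrative names). Because this filter emits a token graph, it is typically applied in a search-time analyzer, where the graph keeps phrase queries on the split parts accurate.
# word_delimiter_graph behind the keyword tokenizer
PUT /wdg_example
{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "my_wdg_analyzer" : {
          "type" : "custom",
          "tokenizer" : "keyword",
          "filter" : ["word_delimiter_graph"]
        }
      }
    }
  }
}
GET /wdg_example/_analyze
{
  "analyzer" : "my_wdg_analyzer",
  "text" : ["HelloGood Hello123"]
}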