TODO
Spring
Servlet
Spring bean lifecycle
Spring MVC request flow
Three ways to write a controller; see each HandlerAdapter's supports() method
- @Component + implements org.springframework.web.servlet.mvc.Controller
- @RestController or @Controller
- @Component + implements org.springframework.web.HttpRequestHandler
When the container initializes it scans every controller for methods annotated with @RequestMapping and registers them with the URI as key and the method as value.
Request --> DispatcherServlet --> HandlerExecutionChain --> getHandlerAdapter --> handle --> processDispatchResult
protected void doDispatch(HttpServletRequest request, HttpServletResponse response) throws Exception {
    HttpServletRequest processedRequest = request;
    HandlerExecutionChain mappedHandler = null;
    boolean multipartRequestParsed = false;
    WebAsyncManager asyncManager = WebAsyncUtils.getAsyncManager(request);
    try {
        ModelAndView mv = null;
        Exception dispatchException = null;
        try {
            processedRequest = checkMultipart(request);
            multipartRequestParsed = (processedRequest != request);
            // Determine handler for the current request.
            // Look up the HandlerExecutionChain (handler + interceptors)
            mappedHandler = getHandler(processedRequest);
            if (mappedHandler == null) {
                noHandlerFound(processedRequest, response);
                return;
            }
            // Determine the HandlerAdapter for the resolved handler
            HandlerAdapter ha = getHandlerAdapter(mappedHandler.getHandler());
            // Process last-modified header, if supported by the handler.
            String method = request.getMethod();
            boolean isGet = "GET".equals(method);
            if (isGet || "HEAD".equals(method)) {
                long lastModified = ha.getLastModified(request, mappedHandler.getHandler());
                if (new ServletWebRequest(request, response).checkNotModified(lastModified) && isGet) {
                    return;
                }
            }
            // Run the interceptors' preHandle callbacks
            if (!mappedHandler.applyPreHandle(processedRequest, response)) {
                return;
            }
            // Actually invoke the handler method
            mv = ha.handle(processedRequest, response, mappedHandler.getHandler());
            if (asyncManager.isConcurrentHandlingStarted()) {
                return;
            }
            applyDefaultViewName(processedRequest, mv);
            // Run the interceptors' postHandle callbacks
            mappedHandler.applyPostHandle(processedRequest, response, mv);
        }
        catch (Exception ex) {
            dispatchException = ex;
        }
        catch (Throwable err) {
            // As of 4.3, we're processing Errors thrown from handler methods as well,
            // making them available for @ExceptionHandler methods and other scenarios.
            dispatchException = new NestedServletException("Handler dispatch failed", err);
        }
        processDispatchResult(processedRequest, response, mappedHandler, mv, dispatchException);
    }
    catch (Exception ex) {
        triggerAfterCompletion(processedRequest, response, mappedHandler, ex);
    }
    catch (Throwable err) {
        triggerAfterCompletion(processedRequest, response, mappedHandler,
                new NestedServletException("Handler processing failed", err));
    }
    finally {
        if (asyncManager.isConcurrentHandlingStarted()) {
            // Instead of postHandle and afterCompletion
            if (mappedHandler != null) {
                mappedHandler.applyAfterConcurrentHandlingStarted(processedRequest, response);
            }
        }
        else {
            // Clean up any resources used by a multipart request.
            if (multipartRequestParsed) {
                cleanupMultipart(processedRequest);
            }
        }
    }
}
Filter
HandlerInterceptor
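A minimal HandlerInterceptor sketch (assuming a Spring Boot 2.x / javax.servlet project; the class names and timing logic are illustrative), showing the preHandle/afterCompletion hooks that the doDispatch() flow above drives:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.HandlerInterceptor;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

// Driven from the dispatch flow: applyPreHandle() -> handler method -> applyPostHandle() -> afterCompletion()
public class CostTimeInterceptor implements HandlerInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        request.setAttribute("startTime", System.currentTimeMillis());
        return true; // returning false short-circuits the chain and doDispatch() returns immediately
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) {
        long cost = System.currentTimeMillis() - (Long) request.getAttribute("startTime");
        System.out.println(request.getRequestURI() + " took " + cost + " ms");
    }
}

// Registration through a WebMvcConfigurer
@Configuration
class WebConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new CostTimeInterceptor()).addPathPatterns("/**");
    }
}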
AOP
@Aspect: marks the current class as an aspect for the container to pick up
@Pointcut: a pointcut is the trigger condition into which advice is woven. Each pointcut definition has two parts: the expression and a method signature. The method must be public and return void. The method acts as a named handle that advice refers to, because the raw expression is not very readable; only the signature matters, the body stays empty.
@Around: around advice, equivalent to MethodInterceptor
@AfterReturning: after-returning advice, equivalent to AfterReturningAdvice; runs when the method exits normally
@Before: marks a before-advice method, equivalent to BeforeAdvice
@AfterThrowing: after-throwing advice, equivalent to ThrowsAdvice
@After: "finally" advice; runs whether the method exits normally or throws
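A minimal @Aspect sketch of the annotations above, assuming business beans live under a com.example.service package (the package and class names are illustrative):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class TimingAspect {

    // Pointcut = expression + an empty public void method that names it
    @Pointcut("execution(* com.example.service..*(..))")
    public void serviceLayer() {
    }

    // Around advice referring to the named pointcut
    @Around("serviceLayer()")
    public Object logCost(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return pjp.proceed(); // invoke the target method
        } finally {
            System.out.println(pjp.getSignature() + " took " + (System.currentTimeMillis() - start) + " ms");
        }
    }
}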
Mybatis
- MyBatis interceptors: add logic before/after the intercepted methods run, e.g. SQL logging, parameter encryption/decryption, pagination, and so on (see the plugin sketch below)
At project startup every mapper.xml in the project is scanned and parsed:
org.apache.ibatis.builder.xml.XMLMapperBuilder#configurationElement
At invocation time the dynamic SQL is assembled; see org.apache.ibatis.binding.MapperMethod#execute
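A minimal sketch of the SQL-logging use case above as a MyBatis plugin, assuming MyBatis 3.4+ (the class name is illustrative, and the interceptor still has to be registered in mybatis-config.xml or the Spring configuration):

import java.sql.Connection;
import java.util.Properties;

import org.apache.ibatis.executor.statement.StatementHandler;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.plugin.Intercepts;
import org.apache.ibatis.plugin.Invocation;
import org.apache.ibatis.plugin.Plugin;
import org.apache.ibatis.plugin.Signature;

// Intercepts StatementHandler#prepare and logs the SQL about to be executed
@Intercepts(@Signature(type = StatementHandler.class, method = "prepare",
        args = {Connection.class, Integer.class}))
public class SqlLogInterceptor implements Interceptor {

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        StatementHandler handler = (StatementHandler) invocation.getTarget();
        System.out.println("SQL: " + handler.getBoundSql().getSql());
        return invocation.proceed(); // continue with the original call
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this); // wrap the target in a dynamic proxy
    }

    @Override
    public void setProperties(Properties properties) {
    }
}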
SPI
Servlet SPI specification
The file (under META-INF/services) is named after the fully qualified interface name and its content is the fully qualified implementation class name
@HandlesTypes(xx.class) makes the container scan all classes of type xx into a Set and hand it to onStartup (see the JDK SPI sketch below)
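A minimal JDK SPI sketch that follows the same file convention (interface, implementation, and file path are illustrative; in a real project each type sits in its own file and the implementation must be public with a no-arg constructor):

import java.util.ServiceLoader;

interface PayService {
    void pay();
}

class AliPayService implements PayService {
    @Override
    public void pay() {
        System.out.println("pay with alipay");
    }
}

public class SpiDemo {
    public static void main(String[] args) {
        // Reads META-INF/services/com.example.PayService, whose content is the
        // fully qualified name of each implementation, e.g. com.example.AliPayService
        ServiceLoader<PayService> loader = ServiceLoader.load(PayService.class);
        for (PayService service : loader) {
            service.pay();
        }
    }
}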
dubbo spi
In the SPI file the key is the extension (bean) name and the value is the fully qualified implementation class
What @Adaptive does when placed on a method
@Activate provides configuration for conditional loading, e.g. filtering by group or by key; similar to Spring Boot's @Conditional, the extension is activated automatically when the condition matches
Dynamic proxy
Why a JDK proxy requires an interface: the generated proxy class already extends Proxy, and with single inheritance the only way for it to expose the target's methods is to implement an interface
juejin.cn/post/684490…
Advantage of dynamic proxies over static proxies: no proxy class to hand-write; even if the delegate has 100 methods you do not implement each one, a single invoke() handles them all (see the sketch below)
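A minimal sketch of the single-invoke() point above (interface and class names are illustrative):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface UserService {
    void save(String name);
}

class UserServiceImpl implements UserService {
    @Override
    public void save(String name) {
        System.out.println("save " + name);
    }
}

public class JdkProxyDemo {
    public static void main(String[] args) {
        UserService target = new UserServiceImpl();
        // One InvocationHandler covers every method of the interface
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("before " + method.getName());
            Object result = method.invoke(target, methodArgs);
            System.out.println("after " + method.getName());
            return result;
        };
        UserService proxy = (UserService) Proxy.newProxyInstance(
                UserService.class.getClassLoader(),
                new Class<?>[]{UserService.class},
                handler);
        proxy.save("zsk"); // routed through invoke()
    }
}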
Map source code
ThreadPoolExecutor
public ThreadPoolExecutor(int corePoolSize,        // core thread count
                          int maximumPoolSize,     // max thread count; non-core threads start only when core threads are busy and the queue is full
                          long keepAliveTime,      // idle keep-alive time for non-core threads
                          TimeUnit unit,           // time unit
                          BlockingQueue<Runnable> workQueue, // queue holding pending tasks
                          // ArrayBlockingQueue LinkedBlockingQueue SynchronousQueue DelayQueue
                          ThreadFactory threadFactory,       // factory for new threads
                          RejectedExecutionHandler handler) {
    // Rejection policies: AbortPolicy throws, DiscardPolicy drops silently,
    // DiscardOldestPolicy drops the head of the queue, CallerRunsPolicy runs in the caller thread.
    // Rejection kicks in when core threads are busy, the queue is full,
    // and no more non-core threads can be created.
    if (corePoolSize < 0 ||
        maximumPoolSize <= 0 ||
        maximumPoolSize < corePoolSize ||
        keepAliveTime < 0)
        throw new IllegalArgumentException();
    if (workQueue == null || threadFactory == null || handler == null)
        throw new NullPointerException();
    this.acc = System.getSecurityManager() == null ?
        null :
        AccessController.getContext();
    this.corePoolSize = corePoolSize;
    this.maximumPoolSize = maximumPoolSize;
    this.workQueue = workQueue;
    this.keepAliveTime = unit.toNanos(keepAliveTime);
    this.threadFactory = threadFactory;
    this.handler = handler;
}
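A small usage sketch of the constructor above (the sizing numbers are illustrative, not a recommendation):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                          // corePoolSize
                4,                                          // maximumPoolSize
                60, TimeUnit.SECONDS,                       // keepAliveTime for non-core threads
                new ArrayBlockingQueue<>(100),              // bounded work queue
                Thread::new,                                // simple ThreadFactory
                new ThreadPoolExecutor.CallerRunsPolicy()); // run in the caller when saturated
        pool.execute(() -> System.out.println(Thread.currentThread().getName()));
        pool.shutdown();
    }
}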
BlockingQueue
ClassLoader
ZskController.class.getClassLoader();      // AppClassLoader: user-defined classes
String.class.getClassLoader();             // Bootstrap class loader (returns null): core JDK classes
CallSiteDescriptor.class.getClassLoader(); // ExtClassLoader: classes under the JDK ext directory
String.intern()
If "123" is not in the string constant pool, intern() adds it and returns the pool reference; if it is already there, intern() returns the existing pool reference
String a = "123";
String b = new String("123");
b = b.intern();
System.out.println(a == b); // true: both now point to the pooled "123"
CAS
Drawbacks
- Spin retries burn CPU and add system overhead under contention from other threads
- Only a single shared variable can be updated atomically; compound operations across several variables are not covered
- ABA problem; fixed by attaching a version/stamp to every update, similar to Elasticsearch's document version (see the sketch below)
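A minimal sketch of the version/stamp fix using AtomicStampedReference from java.util.concurrent:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // value 100 with initial stamp (version) 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        int stamp = ref.getStamp();
        // Succeeds only if both the value and the stamp still match
        boolean updated = ref.compareAndSet(100, 101, stamp, stamp + 1);
        System.out.println(updated + ", value=" + ref.getReference() + ", stamp=" + ref.getStamp());

        // A CAS carrying a stale stamp fails even if the value looks right
        boolean stale = ref.compareAndSet(101, 102, stamp, stamp + 1);
        System.out.println(stale); // false
    }
}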
Redis
Why it is fast
- Data lives in memory
- Non-blocking I/O with I/O multiplexing
- Single-threaded command execution, so every operation is atomic and there is no thread context switching
- Written in C
Bloom filter
A value may exist only if every hash function maps it to a bit that is set to 1; if any bit is 0 the value definitely does not exist
Lowering the false-positive rate requires more hash functions and a larger bit array, which makes lookups slower (see the sketch below)
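A minimal hand-rolled Bloom filter sketch of the trade-off above (bit-array size and number of hash functions are the tuning knobs; the hash functions here are simplistic placeholders):

import java.util.BitSet;

public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int[] seeds; // one seed per hash function

    public SimpleBloomFilter(int size, int hashCount) {
        this.size = size;
        this.bits = new BitSet(size);
        this.seeds = new int[hashCount];
        for (int i = 0; i < hashCount; i++) {
            seeds[i] = 31 * (i + 1);
        }
    }

    public void add(String value) {
        for (int seed : seeds) {
            bits.set(hash(value, seed)); // set every bit the hash functions point to
        }
    }

    // false => definitely absent; true => possibly present (false positives allowed)
    public boolean mightContain(String value) {
        for (int seed : seeds) {
            if (!bits.get(hash(value, seed))) {
                return false;
            }
        }
        return true;
    }

    private int hash(String value, int seed) {
        int h = 0;
        for (char c : value.toCharArray()) {
            h = seed * h + c;
        }
        return Math.abs(h % size);
    }

    public static void main(String[] args) {
        SimpleBloomFilter filter = new SimpleBloomFilter(1 << 20, 3);
        filter.add("user:1001");
        System.out.println(filter.mightContain("user:1001")); // true
        System.out.println(filter.mightContain("user:9999")); // almost certainly false
    }
}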
RDB (the default)
- Compared with AOF, startup/recovery is faster when the dataset is large
- A good fit for disaster recovery; restoring from a snapshot is fast
- Weaker completeness and consistency: a crash after the last snapshot loses everything written since then
AOF
- Appends writes at a fine granularity, so each sync is much cheaper than a full RDB dump
- Recovers data more precisely than RDB, depending on the configured fsync policy; after a crash mid-append, redis-check-aof repairs the file
- Rewrite mechanism: once the AOF file exceeds the configured threshold, a new file is started, the current in-memory state is written to it and subsequent operations are appended to it
- The file keeps growing and replaying it on recovery keeps getting slower
Redis deployment modes
Master-replica: the master takes writes and replicates them to the replicas; replicas serve reads
Sentinel: sentinels send commands and wait for the Redis servers' responses, monitoring multiple running Redis instances
Redis Cluster internals
todo
Key expiration strategy
Periodic deletion + lazy deletion
- Periodic: sample a batch of keys at intervals and remove the expired ones
- Lazy: check for expiry when the key is accessed (on get())
Eviction policies
- volatile-lru: among keys with a TTL, evict the least recently used (recommended)
- volatile-ttl: among keys with a TTL, evict the ones closest to expiring first
- volatile-random: among keys with a TTL, evict a random key (not recommended)
- allkeys-lru: evict the least recently used key across all keys
- allkeys-random: evict a random key across all keys (not recommended)
- noeviction: evict nothing and return an error on writes (not recommended)
Cache penetration, cache avalanche, cache breakdown
- Penetration: the key exists in neither the cache nor the database; mitigate with a Bloom filter, caching null values, and rate limiting
- Breakdown: a hot key exists but expires in Redis under heavy concurrent traffic; mitigate with a mutex around the rebuild and rate limiting (see the sketch below)
- Avalanche: a large number of keys expire in the same time window; mitigate by spreading expiry times, making the cache highly available, and rate limiting
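A minimal in-process sketch of the mutex-around-rebuild and null-caching ideas above. Local maps stand in for Redis and the database; a real setup would use a distributed lock (SETNX or Redisson) and a TTL on the cached null:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheBreakdownDemo {
    private final Map<String, String> cache = new ConcurrentHashMap<>();        // stand-in for Redis
    private final Map<String, Object> rebuildLocks = new ConcurrentHashMap<>(); // one lock per key

    public String get(String key) {
        String value = cache.get(key);
        if (value != null) {
            return value;
        }
        // Only one thread per key rebuilds; the others wait on the same monitor
        Object lock = rebuildLocks.computeIfAbsent(key, k -> new Object());
        synchronized (lock) {
            value = cache.get(key); // double-check after acquiring the lock
            if (value == null) {
                value = loadFromDb(key);
                // Cache a placeholder for missing rows to stop penetration
                cache.put(key, value == null ? "<null>" : value);
            }
        }
        return value;
    }

    private String loadFromDb(String key) {
        return "row-for-" + key; // stand-in for the real database query
    }
}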
Reference types
- Strong reference: never reclaimed by GC while reachable
- Soft reference: kept through GC as long as memory is sufficient, reclaimed when memory is tight
- Weak reference: reclaimed at the next GC regardless of how much memory is available
- Phantom reference
Sharding (splitting databases and tables)
- Horizontal: rows of one table are spread across multiple tables by some rule on the key, e.g. hash(key) modulo or range(key) intervals
  - hash: no hotspot problem, but scaling out is painful because changing the modulus means migrating data
  - range: no data migration needed, but hotspots appear since new orders always hit a single table
- Vertical: different kinds of tables go to different databases (split databases, not tables)
  - only practical if you can afford the extra database servers
Multiple data sources
Extend Spring's AbstractRoutingDataSource with operation-specific routing to a fixed data source, or use sharding middleware such as baomidou (dynamic-datasource) or shardingsphere
Consistent hashing
- The data-skew problem it introduces can be solved with virtual nodes, i.e. one node gets multiple hash positions on the ring (see the ring sketch after the hash function below)
/**
 * MurmurHash: fast with a low collision rate
 *
 * @param key String
 * @return Long
 */
public Long hash(String key) {
    ByteBuffer buf = ByteBuffer.wrap(key.getBytes());
    int seed = 0x1234ABCD;
    ByteOrder byteOrder = buf.order();
    buf.order(ByteOrder.LITTLE_ENDIAN);
    long m = 0xc6a4a7935bd1e995L;
    int r = 47;
    long h = seed ^ (buf.remaining() * m);
    long k;
    while (buf.remaining() >= 8) {
        k = buf.getLong();
        k *= m;
        k ^= k >>> r;
        k *= m;
        h ^= k;
        h *= m;
    }
    if (buf.remaining() > 0) {
        ByteBuffer finish = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
        finish.put(buf).rewind();
        h ^= finish.getLong();
        h *= m;
    }
    h ^= h >>> r;
    h *= m;
    h ^= h >>> r;
    buf.order(byteOrder);
    return h;
}
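A minimal consistent-hash ring with virtual nodes, matching the note above (node names are illustrative; the MurmurHash above can replace the stand-in hash):

import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();
    private final int virtualNodes; // more virtual nodes => smoother key distribution

    public ConsistentHashRing(int virtualNodes) {
        this.virtualNodes = virtualNodes;
    }

    public void addNode(String node) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.put(hash(node + "#VN" + i), node); // one physical node occupies many ring positions
        }
    }

    public void removeNode(String node) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.remove(hash(node + "#VN" + i));
        }
    }

    // Walk clockwise to the first virtual node at or after the key's hash
    public String route(String key) {
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    // Stand-in hash (FNV-1a 64-bit); the MurmurHash above could be dropped in instead
    private long hash(String key) {
        long h = 0xcbf29ce484222325L;
        for (char c : key.toCharArray()) {
            h ^= c;
            h *= 0x100000001b3L;
        }
        return h;
    }

    public static void main(String[] args) {
        ConsistentHashRing ring = new ConsistentHashRing(100);
        ring.addNode("redis-node-1");
        ring.addNode("redis-node-2");
        ring.addNode("redis-node-3");
        System.out.println(ring.route("order:10086"));
    }
}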
Sharding configuration
spring:
  # ShardingSphere configuration
  shardingsphere:
    datasource:
      # names of all data sources
      names: ds-orders-0, ds-orders-1
      # orders data source 00
      ds-orders-0:
        type: com.zaxxer.hikari.HikariDataSource # Hikari connection pool
        driver-class-name: com.mysql.jdbc.Driver
        jdbc-url: jdbc:mysql://127.0.0.1:3306/lab18_orders_0?useSSL=false&useUnicode=true&characterEncoding=UTF-8
        username: root
        password:
      # orders data source 01
      ds-orders-1:
        type: com.zaxxer.hikari.HikariDataSource # Hikari connection pool
        driver-class-name: com.mysql.jdbc.Driver
        jdbc-url: jdbc:mysql://127.0.0.1:3306/lab18_orders_1?useSSL=false&useUnicode=true&characterEncoding=UTF-8
        username: root
        password:
    # sharding rules
    sharding:
      tables:
        # orders table
        orders:
          # actualDataNodes: ds-orders-$->{0..1}.orders_$->{0..4} # map to the orders tables of the ds-orders data sources
          # actualDataNodes: ds-orders-0.orders_0, ds-orders-0.orders_2, ds-orders-0.orders_4, ds-orders-0.orders_6, ds-orders-1.orders_1, ds-orders-1.orders_3, ds-orders-1.orders_5, ds-orders-1.orders_7
          actualDataNodes: ds-orders-0.orders_$->{[0,2,4,6]}, ds-orders-1.orders_$->{[1,3,5,7]} # orders tables spread across ds-orders-0 and ds-orders-1
          key-generator: # primary key generation strategy
            column: id
            type: SNOWFLAKE
          database-strategy:
            inline:
              algorithm-expression: ds-orders-$->{user_id % 2}
              sharding-column: user_id
          table-strategy:
            inline:
              algorithm-expression: orders_$->{user_id % 8}
              sharding-column: user_id
        # order_config table
        order_config:
          actualDataNodes: ds-orders-0.order_config # mapped only to the ds-orders-0 data source
    # extra properties
    props:
      sql:
        show: true # print SQL
Read/write splitting
spring:
  # ShardingSphere configuration
  shardingsphere:
    # data source configuration
    datasource:
      # names of all data sources
      names: ds-master, ds-slave-1, ds-slave-2
      # orders master data source
      ds-master:
        type: com.zaxxer.hikari.HikariDataSource # Hikari connection pool
        driver-class-name: com.mysql.jdbc.Driver
        jdbc-url: jdbc:mysql://127.0.0.1:3306/test_orders?useSSL=false&useUnicode=true&characterEncoding=UTF-8
        username: root
        password:
      # orders slave data source
      ds-slave-1:
        type: com.zaxxer.hikari.HikariDataSource # Hikari connection pool
        driver-class-name: com.mysql.jdbc.Driver
        jdbc-url: jdbc:mysql://127.0.0.1:3306/test_orders_01?useSSL=false&useUnicode=true&characterEncoding=UTF-8
        username: root
        password:
      # orders slave data source
      ds-slave-2:
        type: com.zaxxer.hikari.HikariDataSource # Hikari connection pool
        driver-class-name: com.mysql.jdbc.Driver
        jdbc-url: jdbc:mysql://127.0.0.1:3306/test_orders_02?useSSL=false&useUnicode=true&characterEncoding=UTF-8
        username: root
        password:
    # read/write splitting, maps to the YamlMasterSlaveRuleConfiguration class
    masterslave:
      name: ms # arbitrary name, must be unique
      master-data-source-name: ds-master # master data source
      slave-data-source-names: ds-slave-1, ds-slave-2 # slave data sources
    # extra properties
    props:
      sql:
        show: true # print SQL
TCP three-way handshake
Distributed transactions
- 2PC (two-phase commit): step 1, the TM asks the RMs to execute and they report the operation done; step 2, the TM issues the commit, the RMs report success or failure, and the TM completes or rolls back. Drawback: resources stay locked across both phases.
- 3PC (three-phase commit): adds a canCommit phase and timeouts on top of 2PC. Step 1, the TM asks whether the transaction can be committed and the RMs confirm; step 2, the TM sends the transaction request and the RMs execute; step 3, the TM issues the commit, the RMs report success or failure, and the TM completes or rolls back.
- TCC (Try / Confirm / Cancel): Try reserves the resources, Confirm performs the real business action, Cancel releases the resources reserved in the Try phase.
- MQ transactional message (eventual consistency): the initiator finishes its local transaction and then emits a message; the participant (the consumer) is guaranteed, with retries, to receive the message and complete its part.
- Best-effort notification: after receiving the message, send a notification to the passive side and record it; if no response comes back, keep re-notifying based on the record.
Distributed transaction frameworks: LCN, Seata
When @Transactional silently fails
- The SQL exception is caught inside the method, so the proxy never sees it
- @Transactional is applied to a non-public method
- @Transactional is misconfigured, e.g. rollbackFor does not cover the thrown exception, or the propagation attribute is wrong:
  REQUIRED: join the current transaction, create one if none exists
  SUPPORTS: join if one exists, otherwise run without a transaction
  MANDATORY: join if one exists, otherwise throw
  REQUIRES_NEW: suspend the current transaction and create a new one
  NOT_SUPPORTED: run without a transaction, suspending any current one
  NEVER: run without a transaction, throw if one exists
- Self-invocation within the same class: only calls that come from outside the class go through the Spring-generated proxy, so an internal call bypasses transaction management (see the sketch below)
- The database engine does not support transactions, e.g. MySQL MyISAM
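A minimal sketch of the self-invocation case, assuming a Spring-managed bean (class and method names are illustrative):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    // Called from outside the class: goes through the proxy, but is not transactional itself
    public void placeOrder() {
        saveOrder(); // self-invocation is really this.saveOrder(), which bypasses the proxy,
                     // so the @Transactional below has no effect on this call
    }

    @Transactional(rollbackFor = Exception.class)
    public void saveOrder() {
        // database writes that are expected to commit or roll back together
    }
}

Common fixes are moving saveOrder() into another bean, injecting the bean into itself, or (with exposeProxy enabled) calling through AopContext.currentProxy().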
mysql
ACID
Atomicity, Consistency, Isolation, Durability
log
- redo log
  Durability: when a transaction commits, the modified data is first recorded in the redo log and only then written to the real data pages (InnoDB level)
- binlog: records the write operations executed by the database (MySQL server level)
- undo log
  Atomicity: the rollback image of each SQL statement
When indexes stop being used
- The leftmost-prefix rule of a composite index is violated
- Any operation on the indexed column, e.g. function(index_col) = xxx, prevents index use
- Using <>, LIKE with a leading %, BETWEEN ... AND, IS NOT NULL, OR, NOT IN can make MySQL decide a full table scan is cheaper
- > or <: sometimes the index is used, sometimes not
Distributed locks
- Database-based
- Cache-based, e.g. Redis SETNX with a TTL; redisson-spring-boot-starter
- ZooKeeper-based: ZK is organised as a tree of nodes; each client creates an ephemeral sequential node, the smallest sequence number holds the lock, and storing the client identity in the node makes the lock reentrant; curator-recipes
Leaky bucket and token bucket algorithms
- Leaky bucket: the drain rate is fixed, which caps the production rate; when the bucket is full, requests fall back or are rejected
- Token bucket: handles bursts well because all accumulated tokens can be taken at once (see the sketch below)
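A minimal token-bucket sketch of the burst behaviour above (Guava's RateLimiter is the usual off-the-shelf choice):

public class TokenBucket {
    private final long capacity;         // maximum number of tokens (burst size)
    private final double refillPerNano;  // refill rate in tokens per nanosecond
    private double tokens;
    private long lastRefill;

    public TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity;
        this.lastRefill = System.nanoTime();
    }

    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Top up tokens accumulated since the last call, capped at capacity
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false; // bucket empty: reject or queue the request
    }

    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(10, 5); // burst of 10, refills 5 tokens per second
        for (int i = 0; i < 12; i++) {
            System.out.println(i + " allowed=" + bucket.tryAcquire());
        }
    }
}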
docker
pom.xml
<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <configuration>
        <!-- hub.zlt.com:8080/microservices-platform/ -->
        <imageName>${docker.image.prefix}/${project.artifactId}</imageName>
        <imageTags>
            <imageTag>${project.version}</imageTag>
            <imageTag>latest</imageTag>
        </imageTags>
        <forceTags>true</forceTags>
        <!-- openjdk:8-jre-alpine -->
        <baseImage>${docker.baseImage}</baseImage>
        <volumes>${docker.volumes}</volumes>
        <env>
            <!-- -Xms128m -Xmx128m -->
            <JAVA_OPTS>${docker.java.opts}</JAVA_OPTS>
        </env>
        <!-- -Djava.security.egd=file:/dev/./urandom -->
        <entryPoint>["java ","$JAVA_OPTS ${docker.java.security.egd} ","-jar ","/${project.build.finalName}.jar"]</entryPoint>
        <resources>
            <resource>
                <targetPath>/</targetPath>
                <!-- target -->
                <directory>${project.build.directory}</directory>
                <include>${project.build.finalName}.jar</include>
            </resource>
        </resources>
    </configuration>
</plugin>
DockerFile
FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
ADD /target/tld-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 8090
ENTRYPOINT ["java","-jar","/app.jar"]
docker-compose
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.9.3
    container_name: es
    environment:
      - cluster.name=my-application
      - node.name=node
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "cluster.initial_master_nodes=node,node1,node2"
      - "discovery.seed_hosts=elasticsearch,elasticsearch1,elasticsearch2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - esnet
    volumes:
      - es_data:/usr/share/elasticsearch/data
      - es_plugin:/usr/share/elasticsearch/plugins
  elasticsearch1:
    image: elasticsearch:7.9.3
    container_name: es1
    environment:
      - cluster.name=my-application
      - node.name=node1
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "cluster.initial_master_nodes=node,node1,node2"
      - "discovery.seed_hosts=elasticsearch,elasticsearch1,elasticsearch2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9201:9200
      - 9301:9300
    networks:
      - esnet
    volumes:
      - es1_data:/usr/share/elasticsearch/data
      - es_plugin:/usr/share/elasticsearch/plugins
  elasticsearch2:
    image: elasticsearch:7.9.3
    container_name: es2
    environment:
      - cluster.name=my-application
      - node.name=node2
      - bootstrap.memory_lock=true
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - "cluster.initial_master_nodes=node,node1,node2"
      - "discovery.seed_hosts=elasticsearch,elasticsearch1,elasticsearch2"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9202:9200
      - 9302:9300
    networks:
      - esnet
    volumes:
      - es2_data:/usr/share/elasticsearch/data
      - es_plugin:/usr/share/elasticsearch/plugins
  kibana:
    image: kibana:7.9.3
    container_name: kibana
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
    networks:
      - esnet
volumes:
  es_data:
    driver: local
  es1_data:
    driver: local
  es2_data:
    driver: local
  es_plugin:
    driver: local
networks:
  esnet:
Design patterns (todo)
State pattern
Chain of responsibility pattern
Template method pattern
Strategy pattern
Singleton pattern
MQ
Pros: asynchrony and peak shaving. Cons: extra system complexity, reduced availability, consistency problems
Kafka
Sequential disk writes, zero-copy
The leader handles reads and writes; followers only replicate as backups
A topic's partitions are spread across brokers and each has a leader and followers; when the leader goes down a new one is elected from the ISR (the replicas most closely in sync with the leader; the leader itself is also in the ISR)
group - topic - partition
Before v0.9 consumer offsets were stored in ZooKeeper; later versions store them in an internal Kafka offsets topic
Data is retained for 7 days by default under the log directory; both are configurable
HW (high watermark): the smallest LEO among the replicas
LEO (log end offset): the largest offset within each replica
Data storage
The offset is looked up in the .index file to get the message's starting offset and size, from which its real position in the .log file is computed
A follower that fails to sync within the configured time is removed from the ISR
acks
- 0: no ack from the leader required
- 1: ack after the leader has written to disk
- -1/all: ack after the leader and all ISR replicas have written to disk
Duplicate produced data
- All ISR replicas have synced but the leader crashes before sending the ack; a new leader is elected and the producer, having received no ack, retries and produces a duplicate
Duplicate consumption
Use consumer groups and commit offsets manually (see the sketch below)
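A minimal manual-commit consumer sketch (broker address, topic, and group id are illustrative); committing only after processing succeeds trades duplicates for no data loss, so processing should be idempotent:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-group");
        props.put("enable.auto.commit", "false"); // turn off auto commit
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // process idempotently; a crash before commitSync() means redelivery
                    System.out.println(record.offset() + " -> " + record.value());
                }
                consumer.commitSync(); // commit only after processing succeeds
            }
        }
    }
}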
Leader election
ElasticSearch
A request can be sent to any node in the cluster and is forwarded to the other nodes as needed
Query keywords
- match: analyzes the search text into terms and queries them
- match_phrase: the terms must appear in the same order, within the slop distance
- match_phrase_prefix
- operator: joins the analyzed terms with and (the default is or)
- boost: relevance weight of a clause
- minimum_should_match: how many of the terms must match
shard
primary shard plus its replica shards (replicas also take read load)
An index is split into several shards; each primary shard has its replica shards (backup copies)
Inverted index
An analyzer splits the text into individual terms, and the index records which docs each term appears in
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: elasticsearch
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-2
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /path/to/data2
#
# Path to log files:
#
path.logs: /path/to/logs2
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
http.port: 9202
transport.tcp.port: 9302
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["127.0.0.1:9302","127.0.0.1:9300"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
Master election
private DiscoveryNode findMaster() {
    // Ping the hosts configured under discovery.seed_hosts in the yml
    List<ZenPing.PingResponse> fullPingResponses = pingAndWait(pingTimeout).toList();
    if (fullPingResponses == null) {
        return null;
    }
    // Add the local node
    final DiscoveryNode localNode = transportService.getLocalNode();
    fullPingResponses.add(new ZenPing.PingResponse(localNode, null, this.clusterState()));
    // Depending on configuration, filter out responses from non-master-eligible nodes
    final List<ZenPing.PingResponse> pingResponses = filterPingResponses(fullPingResponses, masterElectionIgnoreNonMasters, logger);
    // Master nodes that already exist in the cluster
    List<DiscoveryNode> activeMasters = new ArrayList<>();
    for (ZenPing.PingResponse pingResponse : pingResponses) {
        if (pingResponse.master() != null && !localNode.equals(pingResponse.master())) {
            activeMasters.add(pingResponse.master());
        }
    }
    // Nodes with the master role, i.e. eligible to become master
    List<ElectMasterService.MasterCandidate> masterCandidates = new ArrayList<>();
    for (ZenPing.PingResponse pingResponse : pingResponses) {
        if (pingResponse.node().isMasterNode()) {
            masterCandidates.add(new ElectMasterService.MasterCandidate(pingResponse.node(), pingResponse.getClusterStateVersion()));
        }
    }
    // No active master: let the candidates run an election
    if (activeMasters.isEmpty()) {
        // Check whether the min_master_nodes quorum requirement is satisfied
        if (electMaster.hasEnoughCandidates(masterCandidates)) {
            final ElectMasterService.MasterCandidate winner = electMaster.electMaster(masterCandidates);
            return winner.getNode();
        } else {
            return null;
        }
    } else {
        // Otherwise pick the active master with the smallest node id
        return electMaster.tieBreakActiveMasters(activeMasters);
    }
}
query&fetch
Performance
- Ideally the filesystem cache can hold about half of the total data volume
- ES + HBase: keep only the searchable fields in ES, then fetch the remaining fields from HBase using the search results
Caching
neo4j
CREATE (TheMatrix:Movie {title:'The Matrix', released:1999, tagline:'Welcome to the Real World'})
CREATE (Keanu:Person {name:'Keanu Reeves', born:1964})
CREATE (Carrie:Person {name:'Carrie-Anne Moss', born:1967})
CREATE (Laurence:Person {name:'Laurence Fishburne', born:1961})
CREATE (Hugo:Person {name:'Hugo Weaving', born:1960})
CREATE (LillyW:Person {name:'Lilly Wachowski', born:1967})
CREATE (LanaW:Person {name:'Lana Wachowski', born:1965})
CREATE (JoelS:Person {name:'Joel Silver', born:1952})
CREATE
(Keanu)-[:ACTED_IN {roles:['Neo']}]->(TheMatrix),
(Carrie)-[:ACTED_IN {roles:['Trinity']}]->(TheMatrix),
(Laurence)-[:ACTED_IN {roles:['Morpheus']}]->(TheMatrix),
(Hugo)-[:ACTED_IN {roles:['Agent Smith']}]->(TheMatrix),
(LillyW)-[:DIRECTED]->(TheMatrix),
(LanaW)-[:DIRECTED]->(TheMatrix),
(JoelS)-[:PRODUCED]->(TheMatrix)
Cypher syntax
create
create(nodeName:NodeLabel {property: value, property: value, ...})
CREATE (JoelS:Person {name: 'Joel Silver', born: 1952})
match
MATCH (nineties:Movie) WHERE nineties.released >= 1990 AND nineties.released < 2000 RETURN nineties.title
set
delete
- () denotes a node
- {} denotes a property map
- [] denotes a relationship
Hive
- UDF: one row in, one row out
- UDAF: many rows in, one row out
- UDTF: one row in, many rows out
Architecture
Hive data is stored on HDFS, not in local files
Loading files
MySQL merely replaces Derby as the metastore database that backs Hive
Specify the field delimiter when creating a table
Without the external keyword a table is managed (internal) by default; dropping it removes both the HDFS data and the metadata in the metastore database (MySQL)
update table
Partitions
Manually creating the partition directory and uploading data requires a repair (MSCK REPAIR TABLE)
Uploading the data and then creating the table with the partition specified needs no repair
Flink
Third generation: low latency, high throughput
Submitting a job
-c entry class, -p parallelism, path to the jar, custom program arguments
Submitting a job in standalone mode
Job submission flow in YARN mode
task
job
slot
How many slots are occupied is determined by the job's maximum task parallelism
ssl
This demo shows how to integrate HTTPS into Spring Boot
1. Generate a certificate
First generate a certificate with the JDK's keytool command and copy it into the project's resources directory (the generated keystore usually lands in the user directory, e.g. C:\Users\Administrator\server.keystore)
Browsers warn about self-signed certificates; a certificate bought from an SSL vendor does not trigger the warning
2. Add configuration
- Configure the generated certificate in the application config file
server:
  ssl:
    # certificate path
    key-store: classpath:server.keystore
    key-alias: tomcat
    enabled: true
    key-store-type: JKS
    # must match the password entered when the keystore was created
    key-store-password: 123456
  # default HTTPS port, analogous to 80 for HTTP
  port: 443
- Configure Tomcat
/**
 * <p>
 * HTTPS configuration
 * </p>
 *
 * @author yangkai.shen
 * @date Created in 2020-01-19 10:31
 */
@Configuration
public class HttpsConfig {
    /**
     * Redirect http (80) to https (443)
     */
    @Bean
    public Connector connector() {
        Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
        connector.setScheme("http");
        connector.setPort(80);
        connector.setSecure(false);
        connector.setRedirectPort(443);
        return connector;
    }

    @Bean
    public TomcatServletWebServerFactory tomcatServletWebServerFactory(Connector connector) {
        TomcatServletWebServerFactory tomcat = new TomcatServletWebServerFactory() {
            @Override
            protected void postProcessContext(Context context) {
                SecurityConstraint securityConstraint = new SecurityConstraint();
                securityConstraint.setUserConstraint("CONFIDENTIAL");
                SecurityCollection collection = new SecurityCollection();
                collection.addPattern("/*");
                securityConstraint.addCollection(collection);
                context.addConstraint(securityConstraint);
            }
        };
        tomcat.addAdditionalTomcatConnectors(connector);
        return tomcat;
    }
}
3. Test
Start the project; visiting http://localhost in a browser automatically redirects to https://localhost
4. Reference
keytool command reference
$ keytool --help
Key and Certificate Management Tool
Commands:
 -certreq            Generates a certificate request
 -changealias        Changes an entry's alias
 -delete             Deletes an entry
 -exportcert         Exports a certificate
 -genkeypair         Generates a key pair
 -genseckey          Generates a secret key
 -gencert            Generates a certificate from a certificate request
 -importcert         Imports a certificate or a certificate chain
 -importpass         Imports a password
 -importkeystore     Imports one or all entries from another keystore
 -keypasswd          Changes the key password of an entry
 -list               Lists entries in a keystore
 -printcert          Prints the content of a certificate
 -printcertreq       Prints the content of a certificate request
 -printcrl           Prints the content of a CRL file
 -storepasswd        Changes the store password of a keystore
Use "keytool -command_name -help" for usage of command_name
Reporting
ureport2
<dependency>
    <groupId>com.pig4cloud.plugin</groupId>
    <artifactId>ureport-spring-boot-starter</artifactId>
    <version>0.0.1</version>
</dependency>
CPU info (via OSHI)
private void setCpuInfo(CentralProcessor processor) {
    // Sample the CPU tick counters twice to compute deltas
    long[] prevTicks = processor.getSystemCpuLoadTicks();
    Util.sleep(OSHI_WAIT_SECOND);
    long[] ticks = processor.getSystemCpuLoadTicks();
    long nice = ticks[TickType.NICE.getIndex()] - prevTicks[TickType.NICE.getIndex()];
    long irq = ticks[TickType.IRQ.getIndex()] - prevTicks[TickType.IRQ.getIndex()];
    long softirq = ticks[TickType.SOFTIRQ.getIndex()] - prevTicks[TickType.SOFTIRQ.getIndex()];
    long steal = ticks[TickType.STEAL.getIndex()] - prevTicks[TickType.STEAL.getIndex()];
    long cSys = ticks[TickType.SYSTEM.getIndex()] - prevTicks[TickType.SYSTEM.getIndex()];
    long user = ticks[TickType.USER.getIndex()] - prevTicks[TickType.USER.getIndex()];
    long iowait = ticks[TickType.IOWAIT.getIndex()] - prevTicks[TickType.IOWAIT.getIndex()];
    long idle = ticks[TickType.IDLE.getIndex()] - prevTicks[TickType.IDLE.getIndex()];
    long totalCpu = user + nice + cSys + idle + iowait + irq + softirq + steal;
    cpu.setCpuNum(processor.getLogicalProcessorCount());
    cpu.setTotal(totalCpu);
    cpu.setSys(cSys);
    cpu.setUsed(user);
    cpu.setWait(iowait);
    cpu.setFree(idle);
}
WebFlux
Routes can be configured with the traditional MVC-style @GetMapping annotations or with a functional RouterFunction bean as below
@Configuration
public class UserRouter {

    @Bean
    public RouterFunction<ServerResponse> routeCity(UserHandler userHandler) {
        return RouterFunctions
                .route(RequestPredicates.GET("/listUser")
                                .and(RequestPredicates.accept(MediaType.APPLICATION_JSON)),
                        userHandler::listUser)
                .andRoute(RequestPredicates.GET("/user/{id}")
                                .and(RequestPredicates.accept(MediaType.APPLICATION_JSON)),
                        userHandler::getUser)
                .andRoute(RequestPredicates.GET("/deleteUser/{id}")
                                .and(RequestPredicates.accept(MediaType.APPLICATION_JSON)),
                        userHandler::deleteUser)
                .andRoute(RequestPredicates.POST("/saveUser")
                                .and(RequestPredicates.accept(MediaType.APPLICATION_JSON)),
                        userHandler::saveUser);
    }
}
Shell
Kill the process running a given jar
- ps -ef | grep visual.jar | grep -v grep | awk '{print $2}' | xargs kill -9
TransmittableThreadLocal
Solves the problem that InheritableThreadLocal cannot pass values from a parent thread to pooled worker threads that are reused (see the sketch below)
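A minimal usage sketch, assuming the com.alibaba transmittable-thread-local dependency is on the classpath (pool size and values are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import com.alibaba.ttl.TransmittableThreadLocal;
import com.alibaba.ttl.threadpool.TtlExecutors;

public class TtlDemo {
    // Value that should follow each task into the pooled worker thread
    private static final TransmittableThreadLocal<String> CONTEXT = new TransmittableThreadLocal<>();

    public static void main(String[] args) {
        // Wrap the pool so the value is captured at submit time and replayed in the worker
        ExecutorService pool = TtlExecutors.getTtlExecutorService(Executors.newFixedThreadPool(1));

        CONTEXT.set("trace-id-1");
        pool.execute(() -> System.out.println("worker sees " + CONTEXT.get())); // trace-id-1

        CONTEXT.set("trace-id-2");
        pool.execute(() -> System.out.println("worker sees " + CONTEXT.get())); // trace-id-2

        pool.shutdown();
    }
}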
Canal
How MySQL primary-replica replication works
The MySQL master writes data changes to the binary log (the records are called binary log events and can be viewed with show binlog events)
The MySQL slave copies the master's binary log events into its relay log
The MySQL slave replays the events in the relay log, applying the changes to its own data
How canal works
canal speaks the MySQL slave protocol, pretending to be a MySQL slave and sending a dump request to the MySQL master
The MySQL master receives the dump request and starts pushing the binary log to the slave (canal)
canal parses the binary log objects (originally a byte stream)
Hands-on steps
Enable the MySQL binlog