I recently worked on a project built on the asynchronous Spring WebFlux framework, and it had a logging requirement. Recording only the request/response status or headers would have been straightforward, but we needed to capture the request and response bodies as well. Since a request/response stream can only be read once, things got complicated. Below is an account of my exploration and the solution we ended up with.
Environment:
- Spring Boot 2.7.1
- spring-cloud-starter-sleuth 3.1.3
1. Capturing requests and responses on the WebFlux server side
In Spring WebFlux, requests can be intercepted by implementing WebFilter, which gives us a place to process both the request and the response.
After reviewing many solutions online, I built my modifications on top of this blog post: www.javacodegeeks.com/2021/07/kee…
The implementation looks like this:
```java
public class BodyCaptureExchange extends ServerWebExchangeDecorator {

    private final BodyCaptureRequest bodyCaptureRequest;
    private final BodyCaptureResponse bodyCaptureResponse;

    public BodyCaptureExchange(ServerWebExchange exchange) {
        this(exchange,
                new BodyCaptureRequest(exchange.getRequest()),
                new BodyCaptureResponse(exchange.getResponse()));
    }

    public BodyCaptureExchange(ServerWebExchange exchange, BodyCaptureRequest bodyCaptureRequest,
                               BodyCaptureResponse bodyCaptureResponse) {
        super(exchange);
        this.bodyCaptureRequest = bodyCaptureRequest;
        this.bodyCaptureResponse = bodyCaptureResponse;
    }

    @Override
    public BodyCaptureRequest getRequest() {
        return bodyCaptureRequest;
    }

    @Override
    public BodyCaptureResponse getResponse() {
        return bodyCaptureResponse;
    }
}
```
```java
public class BodyCaptureRequest extends ServerHttpRequestDecorator {

    private final StringBuilder body = new StringBuilder();

    public BodyCaptureRequest(ServerHttpRequest delegate) {
        super(delegate);
    }

    @Override
    public Flux<DataBuffer> getBody() {
        // Record each chunk as it is read by the downstream consumer
        return super.getBody().doOnNext(this::capture);
    }

    private void capture(DataBuffer buffer) {
        // asByteBuffer() exposes the readable bytes without advancing the
        // DataBuffer's read position, so the consumer still sees the full body
        this.body.append(StandardCharsets.UTF_8.decode(buffer.asByteBuffer()));
    }

    public String getFullBody() {
        return this.body.toString();
    }
}
```
```java
public class BodyCaptureResponse extends ServerHttpResponseDecorator {

    private final StringBuilder body = new StringBuilder();

    public BodyCaptureResponse(ServerHttpResponse delegate) {
        super(delegate);
    }

    @Override
    public Mono<Void> writeWith(Publisher<? extends DataBuffer> body) {
        // Record each chunk as it is written to the network
        Flux<DataBuffer> buffer = Flux.from(body);
        return super.writeWith(buffer.doOnNext(this::capture));
    }

    private void capture(DataBuffer buffer) {
        this.body.append(StandardCharsets.UTF_8.decode(buffer.asByteBuffer()));
    }

    public String getFullBody() {
        return this.body.toString();
    }
}
```
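One caveat with this capture approach: each DataBuffer is decoded independently, so a multi-byte UTF-8 character that straddles a chunk boundary gets mangled into replacement characters. This is harmless for ASCII payloads but worth knowing for CJK content. A plain-Java illustration of the effect (ByteBuffer standing in for DataBuffer, with a hand-picked split point):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class SplitDecodeDemo {

    // Decodes the byte array in two independent chunks, the way capture()
    // above decodes each DataBuffer on its own.
    static String decodePerChunk(byte[] data, int splitAt) {
        StringBuilder body = new StringBuilder();
        body.append(StandardCharsets.UTF_8.decode(ByteBuffer.wrap(data, 0, splitAt)));
        body.append(StandardCharsets.UTF_8.decode(ByteBuffer.wrap(data, splitAt, data.length - splitAt)));
        return body.toString();
    }

    public static void main(String[] args) {
        // "值" is 3 bytes in UTF-8; splitting after byte 2 breaks the sequence,
        // and Charset.decode() substitutes replacement characters
        System.out.println(decodePerChunk("值".getBytes(StandardCharsets.UTF_8), 2));
        // ASCII is always safe: every character fits in one byte
        System.out.println(decodePerChunk("ab".getBytes(StandardCharsets.UTF_8), 1)); // prints "ab"
    }
}
```

A safer variant would accumulate the raw bytes and decode them once in getFullBody().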
```java
@Slf4j
@Component
@RequiredArgsConstructor
public class WebServerLoggerFilter implements WebFilter {

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        ServerHttpRequest request = exchange.getRequest();
        URI requestUri = request.getURI();
        String scheme = requestUri.getScheme();
        // Only log plain HTTP(S) traffic; let other schemes pass through untouched
        boolean isHttpRequest = "http".equals(scheme) || "https".equals(scheme);
        if (!isHttpRequest) {
            return chain.filter(exchange);
        }
        String methodValue = request.getMethodValue();
        String path = requestUri.getPath();
        String query = requestUri.getQuery();
        String headers = request.getHeaders().toString();
        long start = System.currentTimeMillis();
        log.info("Received {} request, path: [{}], query: {}, headers: {}", methodValue, path, query, headers);
        BodyCaptureRequest bodyCaptureRequest = new BodyCaptureRequest(exchange.getRequest()) {
            @Override
            public Flux<DataBuffer> getBody() {
                return super.getBody().doOnNext(s ->
                        log.info("{} request [{}], request body: {}", methodValue, path, this.getFullBody())
                );
            }
        };
        BodyCaptureResponse bodyCaptureResponse = new BodyCaptureResponse(exchange.getResponse()) {
            @Override
            public Mono<Void> writeWith(Publisher<? extends DataBuffer> body) {
                return super.writeWith(body).doFinally(s ->
                        log.info("Completed {} request [{}], took {}ms, status: {}, response body: {}",
                                methodValue, path, System.currentTimeMillis() - start,
                                this.getRawStatusCode(), this.getFullBody())
                );
            }
        };
        BodyCaptureExchange bodyCaptureExchange = new BodyCaptureExchange(exchange, bodyCaptureRequest, bodyCaptureResponse);
        return chain.filter(bodyCaptureExchange);
    }
}
```
2. Capturing WebClient requests and responses
Besides the inbound requests from outside the service, the outbound requests it sends must be logged as well. The project uses WebClient, implemented on top of the Netty HttpClient:
```java
@Bean
public WebClient webClient() {
    return WebClient.builder()
            .clientConnector(new ReactorClientHttpConnector(HttpClient.create()))
            .build();
}
```
WebClient ships with its own logging facility; for details see: www.baeldung.com/spring-log-…
```java
HttpClient.create()
        .wiretap(HttpClient.class.getSimpleName(), LogLevel.INFO, AdvancedByteBufFormat.TEXTUAL);
```
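For context, the `[timestamp] [traceId] [spanId]` columns in the log samples in this article come from Sleuth's MDC values. A logback pattern along these lines would produce that layout (an assumed example for illustration, not the project's actual configuration):

```xml
<!-- Hypothetical logback-spring.xml pattern; traceId and spanId are the
     MDC keys that Spring Cloud Sleuth populates -->
<pattern>[%d{yyyy-MM-dd HH:mm:ss.SSS}] [%X{traceId}] [%X{spanId}] [%thread] [%-5level] [%logger{0}] [%M] [%L] => %msg%n</pattern>
```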
After enabling HttpClient's wiretap(), however, the Sleuth traceId disappeared from the log output:
```
[2023-03-18 17:20:24.880] [b78c58f07ae8fca7] [b78c58f07ae8fca7] [reactor-http-nio-3] [INFO ] [HttpClient] [log] [148] => [cf21f25d] REGISTERED
[2023-03-18 17:20:24.938] [] [] [reactor-http-nio-3] [INFO ] [HttpClient] [log] [148] => [cf21f25d] CONNECT: localhost/127.0.0.1:8081
[2023-03-18 17:20:24.955] [] [] [reactor-http-nio-3] [INFO ] [HttpClient] [log] [148] => [cf21f25d, L:/127.0.0.1:52188 - R:localhost/127.0.0.1:8081] ACTIVE
[2023-03-18 17:20:25.009] [] [] [reactor-http-nio-3] [INFO ] [HttpClient] [log] [148] => [cf21f25d-1, L:/127.0.0.1:52188 - R:localhost/127.0.0.1:8081] WRITE: 485B POST /pay/channel/fetch HTTP/1.1
user-agent: ReactorNetty/1.0.20
host: localhost:8081
accept: */*
Content-Type: application/json; charset=UTF-8
X-B3-TraceId: b78c58f07ae8fca7
X-B3-SpanId: a460c2c793aeba20
X-B3-ParentSpanId: b78c58f07ae8fca7
X-B3-Sampled: 1
content-length: ***
{"type":"wxpay","id":"id2345678","code_type":"0","amount":1}
[2023-03-18 17:20:25.013] [] [] [reactor-http-nio-3] [INFO ] [HttpClient] [log] [148] => [cf21f25d-1, L:/127.0.0.1:52188 - R:localhost/127.0.0.1:8081] FLUSH
[2023-03-18 17:20:25.026] [] [] [reactor-http-nio-3] [INFO ] [HttpClient] [log] [148] => [cf21f25d-1, L:/127.0.0.1:52188 - R:localhost/127.0.0.1:8081] READ: 151B HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: ***
{"ret_code":"00","ret_msg":"success","order_no":"122121212"}
[2023-03-18 17:20:25.106] [] [] [reactor-http-nio-3] [INFO ] [HttpClient] [log] [148] => [cf21f25d, L:/127.0.0.1:52188 - R:localhost/127.0.0.1:8081] READ COMPLETE
```
It looked as if Sleuth had not been adapted to Netty. Being new to Sleuth myself, I did some last-minute reading of its documentation: docs.spring.io/spring-clou… There I learned that Sleuth auto-instruments both WebClient and the Netty HttpClient. Testing showed that when one service calls another through WebClient, the callee's request logs do carry the traceId, and part of the wiretap output carries it as well; the lines missing a traceId all come from the io.netty package. The official Sleuth documentation also describes a logging configuration specifically for WebFlux applications: docs.spring.io/spring-clou…
So I tried wrapping the Netty HttpClient and HTTP server with ReactorNettyHttpTracing from the reactor-netty-http-brave package (a distributed-tracing adapter):
```java
@Bean
ReactorNettyHttpTracing reactorNettyHttpTracing(final HttpTracing httpTracing) {
    return ReactorNettyHttpTracing.create(httpTracing);
}

@Bean
NettyServerCustomizer nettyServerCustomizer(final ReactorNettyHttpTracing reactorNettyHttpTracing) {
    return reactorNettyHttpTracing::decorateHttpServer;
}

@Bean
HttpClient httpClient(final ReactorNettyHttpTracing reactorNettyHttpTracing) {
    return reactorNettyHttpTracing.decorateHttpClient(HttpClient.create()
            .wiretap(HttpClient.class.getSimpleName(), LogLevel.INFO, AdvancedByteBufFormat.TEXTUAL));
}
```
This time it worked: most WebClient log lines now carried the traceId, except for the one containing the response body:
```
[2023-03-18 17:07:20.462] [6ab8a68d1709b496] [f7ab11d605bb26f9] [reactor-http-nio-3] [INFO ] [HttpClient] [log] [148] => [adadbf21-2, L:/127.0.0.1:58298 - R:localhost/127.0.0.1:8081] WRITE: 289B POST /pay/channel/fetch HTTP/1.1
user-agent: ReactorNetty/1.0.20
host: localhost:8081
accept: */*
Content-Type: application/json; charset=UTF-8
X-B3-TraceId: 6ab8a68d1709b496
X-B3-SpanId: 30df9e26807040c8
X-B3-ParentSpanId: 6ab8a68d1709b496
X-B3-Sampled: 1
content-length: ***
[2023-03-18 17:07:20.465] [6ab8a68d1709b496] [f7ab11d605bb26f9] [reactor-http-nio-3] [INFO ] [HttpClient] [log] [148] => [adadbf21-2, L:/127.0.0.1:58298 - R:localhost/127.0.0.1:8081] WRITE: ***B {"type":"wxpay","id":"id2345678","code_type":"0","amount":1}
[2023-03-18 17:07:20.466] [6ab8a68d1709b496] [f7ab11d605bb26f9] [reactor-http-nio-3] [INFO ] [HttpClient] [log] [148] => [adadbf21-2, L:/127.0.0.1:58298 - R:localhost/127.0.0.1:8081] FLUSH
[2023-03-18 17:07:20.486] [] [] [reactor-http-nio-3] [INFO ] [HttpClient] [log] [148] => [adadbf21-2, L:/127.0.0.1:58298 - R:localhost/127.0.0.1:8081] READ: 151B HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 80
{"ret_code":"00","ret_msg":"success","order_no":"122121212"}
[2023-03-18 17:07:20.490] [] [] [reactor-http-nio-3] [INFO ] [HttpClient] [log] [148] => [adadbf21, L:/127.0.0.1:58298 - R:localhost/127.0.0.1:8081] READ COMPLETE
```
A response line without a traceId still doesn't cut it. I tried to track down the cause, but with my limited knowledge of the internals I couldn't pin it down.
The same problem is mentioned in an issue report as well: since Sleuth 3.1.x is the final Sleuth release (its functionality has been folded into the Micrometer Tracing project under Spring Boot), the maintainers have not provided a new fix.
I then experimented with WebClient's ExchangeFilterFunction interceptor; I may cover that part in detail another time. In the end we abandoned that approach and went with the logging facility of the Logbook project instead, which prints the traceId successfully. Logbook's output is fairly complete, so the earlier WebFilter can be removed.
Usage is as follows.
First add the Logbook WebFlux dependencies:
```xml
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-spring-boot-webflux-autoconfigure</artifactId>
    <version>2.14.0</version>
</dependency>
<dependency>
    <groupId>io.projectreactor.netty</groupId>
    <artifactId>reactor-netty-http-brave</artifactId>
    <version>1.0.20</version>
</dependency>
```
Then configure Logbook's logging behavior:
```java
@Configuration
public class LogBookConfiguration {

    @Bean
    public CorrelationId correlationId(final Tracer tracer) {
        // Use the current span id as Logbook's correlation id
        return request -> requireNonNull(tracer.currentSpan()).context().spanId();
    }

    @Bean
    public Logbook logbook(final SensitiveLogProperties sensitiveLogProperties,
                           final CorrelationId correlationId) {
        return Logbook.builder()
                .correlationId(correlationId)
                .condition(exclude(
                        requestTo("/actuator/**"),
                        contentType(MediaType.APPLICATION_OCTET_STREAM_VALUE, MediaType.MULTIPART_FORM_DATA_VALUE)))
                .headerFilter(headers -> HttpHeaders.empty())
                // Mask sensitive fields in the body
                .bodyFilter((contentType, body) -> {
                    if (MediaType.APPLICATION_JSON_VALUE.equals(contentType)) {
                        String maskBody = MaskUtil.maskJson(body, sensitiveLogProperties.getFields());
                        return Optional.ofNullable(maskBody).orElse("");
                    }
                    return body;
                })
                // Logbook logs at TRACE by default; raise it to INFO
                .sink(new DefaultSink(new DefaultHttpLogFormatter(), new HttpLogWriter() {

                    private final Logger log = LoggerFactory.getLogger(Logbook.class);

                    @Override
                    public boolean isActive() {
                        return this.log.isInfoEnabled();
                    }

                    @Override
                    public void write(final Precorrelation precorrelation, final String request) {
                        this.log.info(request);
                    }

                    @Override
                    public void write(final Correlation correlation, final String response) {
                        this.log.info(response);
                    }
                }))
                .build();
    }
}
```
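The MaskUtil.maskJson call above is a project-internal helper together with SensitiveLogProperties, not a Logbook API. As a rough idea of what such a helper might look like, here is a minimal regex-based sketch (the name and behavior are assumptions; the real implementation may differ):

```java
import java.util.List;
import java.util.regex.Pattern;

public class MaskUtil {

    // Replaces the string value of each sensitive field in a JSON payload
    // with "***". A regex sketch only: a production implementation would
    // likely walk the JSON tree (e.g. with Jackson) to handle nesting and
    // non-string values correctly.
    public static String maskJson(String json, List<String> fields) {
        if (json == null) {
            return null;
        }
        String masked = json;
        for (String field : fields) {
            // Matches "field":"value" with optional whitespace around the colon
            Pattern p = Pattern.compile("(\"" + Pattern.quote(field) + "\"\\s*:\\s*\")[^\"]*(\")");
            masked = p.matcher(masked).replaceAll("$1***$2");
        }
        return masked;
    }
}
```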
Keep wrapping the Netty HttpServer/HttpClient with ReactorNettyHttpTracing:
```java
@Configuration
public class WebClientConfig {

    @Bean
    ReactorNettyHttpTracing reactorNettyHttpTracing(final HttpTracing httpTracing) {
        return ReactorNettyHttpTracing.create(httpTracing);
    }

    @Bean
    NettyServerCustomizer nettyServerCustomizer(final ReactorNettyHttpTracing reactorNettyHttpTracing) {
        return reactorNettyHttpTracing::decorateHttpServer;
    }

    @Bean
    public HttpClient httpClient(final ReactorNettyHttpTracing reactorNettyHttpTracing,
                                 final ObjectProvider<ReactorNettyHttpClientMapper> mapperProvider) {
        // Apply any ReactorNettyHttpClientMapper beans (such as the one
        // contributed by Logbook's auto-configuration), then wrap the
        // resulting client with Brave tracing
        ReactorNettyHttpClientMapper mapper = mapperProvider.orderedStream()
                .reduce((before, after) -> (client) -> after.configure(before.configure(client)))
                .orElse((client) -> client);
        return reactorNettyHttpTracing.decorateHttpClient(mapper.configure(HttpClient.create()));
    }

    @Bean
    public WebClient webClient(final HttpClient httpClient) {
        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(httpClient))
                .build();
    }
}
```
The resulting log output:
```
[2023-03-18 17:43:10.567] [780b6c23ba8f4bd1] [780b6c23ba8f4bd1] [reactor-http-nio-3] [INFO ] [org.zalando.logbook.Logbook] [write] [55] => Incoming Request: 780b6c23ba8f4bd1
Remote: /0:0:0:0:0:0:0:1:52776
POST http://localhost:8080/pay/channel HTTP/1.1
{
  "pay_type": "wxpay",
  "amount": "1",
  ···
}
[2023-03-18 17:43:10.687] [780b6c23ba8f4bd1] [d29e0646f4577fdf] [reactor-http-nio-3] [INFO ] [org.zalando.logbook.Logbook] [write] [55] => Outgoing Request: 780b6c23ba8f4bd1
Remote: localhost/127.0.0.1:8081
POST http://localhost:8081/pay/channel/fetch HTTP/1.1
{"type":"wxpay","id":"id2345678","code_type":"0","amount":1}
[2023-03-18 17:43:10.709] [780b6c23ba8f4bd1] [780b6c23ba8f4bd1] [reactor-http-nio-3] [INFO ] [org.zalando.logbook.Logbook] [write] [60] => Incoming Response: 780b6c23ba8f4bd1
Duration: 23 ms
HTTP/1.1 200 OK
{"ret_code":"00","ret_msg":"success","order_no":"122121212"}
[2023-03-18 17:43:10.714] [780b6c23ba8f4bd1] [780b6c23ba8f4bd1] [reactor-http-nio-3] [INFO ] [org.zalando.logbook.Logbook] [write] [60] => Outgoing Response: 780b6c23ba8f4bd1
Duration: 310 ms
HTTP/1.1 200 OK
{"ret_code":"00","ret_msg":"success"}
```