Source Code Notes on the Tomcat Connector Processing Flow


1、About the Tomcat Connector

(Image taken from 《深入拆解tomcat&jetty》.) The Connector is a key component of a Tomcat Service and is responsible for handling Tomcat's connections. Tomcat supports several network I/O models such as NIO and NIO2; NIO is the default and also the most widely used.

In Tomcat, the EndPoint handles the TCP connection itself, using Java NIO components such as Selector and SocketChannel; the Processor receives the socket from the EndPoint and converts the byte stream into Tomcat Request and Response objects, which the Adapter then hands over to the container for processing.

2、Connector Startup

The walkthrough below uses the NIO model with HTTP.
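In server.xml, the default HTTP Connector already resolves to the NIO implementation; an explicit entry might look like the following (the attribute values here are illustrative, not recommendations):

<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000"
           maxConnections="8192"
           acceptCount="100"
           redirectPort="8443" />

During endpoint initialization, the endpoint's bind() method (NioEndpoint.bind()) is called: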

@Override
public void bind() throws Exception {
    // Initialize the NIO ServerSocket
    initServerSocket();
    // Set up the CountDownLatch used during shutdown
    setStopLatch(new CountDownLatch(1));
    // Initialize SSL
    initialiseSsl();
}

protected void initServerSocket() throws Exception {
    // Non-core code omitted ...
    // Open a server-side NIO channel
    serverSock = ServerSocketChannel.open();
    // Apply the configured parameters to the ServerSocket
    socketProperties.setProperties(serverSock.socket());
    InetSocketAddress addr = new InetSocketAddress(getAddress(), getPortWithOffset());
    // Bind the address; acceptCount becomes the backlog
    serverSock.bind(addr, getAcceptCount());
    // The listening channel itself stays in blocking mode (mimics APR behavior);
    // accepted SocketChannels are switched to non-blocking later on
    serverSock.configureBlocking(true); //mimic APR behavior
}

Initializing the Connector is essentially the standard Java NIO sequence of creating, configuring, and binding a server channel.
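Stripped of the Tomcat-specific wrapping, this is the standard Java NIO server bootstrap. Below is a minimal standalone sketch (my own illustration, not Tomcat code; the port, backlog, and buffer size are arbitrary):

import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class NioBindSketch {
    public static void main(String[] args) throws Exception {
        // Open the server-side channel, as initServerSocket() does
        ServerSocketChannel serverSock = ServerSocketChannel.open();
        // Tune the underlying ServerSocket (value chosen arbitrarily)
        serverSock.socket().setReceiveBufferSize(64 * 1024);
        // Bind with a backlog, the equivalent of acceptCount in Tomcat
        serverSock.bind(new InetSocketAddress(8080), 100);
        // Keep the listening channel blocking, as NioEndpoint does
        serverSock.configureBlocking(true);
        System.out.println("Listening on " + serverSock.getLocalAddress());
    }
}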

3、Endpoint

public void startInternal() throws Exception {
    // running is initially false
    if (!running) {
        running = true;
        paused = false;

        if (socketProperties.getProcessorCache() != 0) {
            processorCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                    socketProperties.getProcessorCache());
        }
        if (socketProperties.getEventCache() != 0) {
            eventCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                    socketProperties.getEventCache());
        }
        int actualBufferPool =
                socketProperties.getActualBufferPool(isSSLEnabled() ? getSniParseLimit() * 2 : 0);
        if (actualBufferPool != 0) {
            nioChannels = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                    actualBufferPool);
        }

        // Create worker collection
        // Create the Tomcat worker thread pool; it differs somewhat from the plain JDK pool
        if (getExecutor() == null) {
            createExecutor();
        }
        // Initialize the connection limit (LimitLatch)
        initializeConnectionLatch();

        // Start poller thread
        poller = new Poller();
        Thread pollerThread = new Thread(poller, getName() + "-Poller");
        pollerThread.setPriority(threadPriority);
        // Mark the poller thread as a daemon
        pollerThread.setDaemon(true);
        pollerThread.start();
        // Start the Acceptor thread
        startAcceptorThread();
    }
}

Let's start with the Acceptor thread. It listens on the configured port and accepts connection requests from clients; when a client connects, the Acceptor accepts the request and passes it on to the appropriate handler.

@Override
public void run() {

    int errorDelay = 0;
    long pauseStart = 0;

    try {
        // Keep looping until stopped; stopCalled is volatile
        while (!stopCalled) {
            // Some non-core code omitted
            ....
            // Mark the acceptor as running
            state = AcceptorState.RUNNING;

            try {
                //if we have reached max connections, wait
                // If the maximum connection count has been reached, wait here;
                // Tomcat throttles connections with LimitLatch, an AQS-based latch
                endpoint.countUpOrAwaitConnection();

                // Endpoint might have been paused while waiting for latch
                // If that is the case, don't accept new connections
                // If the endpoint is paused, go back to the top of the loop
                if (endpoint.isPaused()) {
                    continue;
                }

                U socket = null;
                try {
                    // Accept the next incoming connection from the server
                    // socket
                    // Calls ServerSocketChannel.accept(), which blocks until a new connection arrives
                    socket = endpoint.serverSocketAccept();
                } catch (Exception ioe) {
                    // We didn't get a socket
                    endpoint.countDownConnection();
                    if (endpoint.isRunning()) {
                        // Introduce delay if necessary
                        errorDelay = handleExceptionWithDelay(errorDelay);
                        // re-throw
                        throw ioe;
                    } else {
                        break;
                    }
                }
                // Successful accept, reset the error delay
                errorDelay = 0;

                // Configure the newly accepted socket
                if (!stopCalled && !endpoint.isPaused()) {
                    // setSocketOptions() will hand the socket off to
                    // an appropriate processor if successful
                    // Configure the socket and register it with the Poller
                    if (!endpoint.setSocketOptions(socket)) {
                        endpoint.closeSocket(socket);
                    }
                } else {
                    endpoint.destroySocket(socket);
                }
            } catch (Throwable t) {
                ExceptionUtils.handleThrowable(t);
                String msg = sm.getString("endpoint.accept.fail");
                log.error(msg, t);
            }
        }
    } finally {
        // Cleanup (stop latch countdown, state update) omitted ...
    }
}
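The countUpOrAwaitConnection() call above is where the maxConnections throttle kicks in: Tomcat implements it with LimitLatch, an AQS-based latch. The effect can be pictured with a plain Semaphore; this is a simplified stand-in of my own, not Tomcat's LimitLatch:

import java.util.concurrent.Semaphore;

// Simplified stand-in for the maxConnections throttle (not Tomcat's LimitLatch)
class ConnectionLimitSketch {
    private final Semaphore permits;

    ConnectionLimitSketch(int maxConnections) {
        this.permits = new Semaphore(maxConnections);
    }

    // Like countUpOrAwaitConnection(): block until a connection slot is available
    void countUpOrAwait() throws InterruptedException {
        permits.acquire();
    }

    // Like countDownConnection(): free the slot when an accept fails or a connection closes
    void countDown() {
        permits.release();
    }
}

After a successful accept, setSocketOptions() configures the SocketChannel and hands it to the Poller: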

@Override
protected boolean setSocketOptions(SocketChannel socket) {
    NioSocketWrapper socketWrapper = null;
    try {
        // Allocate channel and wrapper
        // Try to reuse a cached NioChannel from nioChannels first
        NioChannel channel = null;
        if (nioChannels != null) {
            channel = nioChannels.pop();
        }
        // No cached channel available, create a new one
        if (channel == null) {
            // Set up the channel's read/write buffers
            SocketBufferHandler bufhandler = new SocketBufferHandler(
                    socketProperties.getAppReadBufSize(),
                    socketProperties.getAppWriteBufSize(),
                    socketProperties.getDirectBuffer());
            // Use a secure channel if SSL is enabled
            if (isSSLEnabled()) {
                channel = new SecureNioChannel(bufhandler, this);
            } else {
                channel = new NioChannel(bufhandler);
            }
        }
        // Wrap the channel in a NioSocketWrapper
        NioSocketWrapper newWrapper = new NioSocketWrapper(channel, this);
        channel.reset(socket, newWrapper);
        // Record the connection in the connections map: key is the raw SocketChannel, value is the wrapper
        connections.put(socket, newWrapper);
        socketWrapper = newWrapper;

        // Set socket properties
        // Disable blocking, polling will be used
        // Switch to non-blocking; the Poller will drive I/O from here on
        socket.configureBlocking(false);
        if (getUnixDomainSocketPath() == null) {
            socketProperties.setProperties(socket.socket());
        }
        // Apply timeouts and keep-alive settings to the wrapper
        socketWrapper.setReadTimeout(getConnectionTimeout());
        socketWrapper.setWriteTimeout(getConnectionTimeout());
        socketWrapper.setKeepAliveLeft(NioEndpoint.this.getMaxKeepAliveRequests());
        // Wrap the connection in a PollerEvent and add it to the Poller's event queue
        poller.register(socketWrapper);
        return true;
    } catch (Throwable t) {
        ExceptionUtils.handleThrowable(t);
        try {
            log.error(sm.getString("endpoint.socketOptionsError"), t);
        } catch (Throwable tt) {
            ExceptionUtils.handleThrowable(tt);
        }
        if (socketWrapper == null) {
            destroySocket(socket);
        }
    }
    // Tell to close the socket if needed
    return false;
}

Now for the Poller thread. It uses an NIO Selector to watch the SocketChannels registered with it; when a readable event is detected, the Poller creates a SocketProcessor task and hands it to the Executor for processing.

@Override
public void run() {
    // Loop until destroy() is called
    // Loop indefinitely
    while (true) {

        boolean hasEvents = false;

        try {
            if (!close) {
                // Process any queued events, mainly newly registered connections
                hasEvents = events();
                // wakeupCounter > 0 means events were queued since the last pass
                // (it is incremented via poller.register(socketWrapper)), so a non-blocking selectNow() is enough
                if (wakeupCounter.getAndSet(-1) > 0) {
                    // If we are here, means we have other stuff to do
                    // Do a non blocking select
                    keyCount = selector.selectNow();
                } else {
                    keyCount = selector.select(selectorTimeout);
                }
                wakeupCounter.set(0);
            }
            if (close) {
                events();
                timeout(0, false);
                try {
                    selector.close();
                } catch (IOException ioe) {
                    log.error(sm.getString("endpoint.nio.selectorCloseFail"), ioe);
                }
                break;
            }
            // Either we timed out or we woke up, process events first
            // Timed out or woke up with no keys: check the event queue once more
            if (keyCount == 0) {
                hasEvents = (hasEvents | events());
            }
        } catch (Throwable x) {
            ExceptionUtils.handleThrowable(x);
            log.error(sm.getString("endpoint.nio.selectorLoopError"), x);
            continue;
        }
        // keyCount > 0 means some channels are ready (e.g. readable)
        Iterator<SelectionKey> iterator =
            keyCount > 0 ? selector.selectedKeys().iterator() : null;
        // Walk through the collection of ready keys and dispatch
        // any active event.
        while (iterator != null && iterator.hasNext()) {
            SelectionKey sk = iterator.next();
            iterator.remove();
            NioSocketWrapper socketWrapper = (NioSocketWrapper) sk.attachment();
            // Attachment may be null if another thread has called
            // cancelledKey()
            // Hand the channel to the next component for processing
            if (socketWrapper != null) {
                processKey(sk, socketWrapper);
            }
        }

        // Process timeouts
        timeout(keyCount,hasEvents);
    }

    getStopLatch().countDown();
}

// Check whether any events are queued; mainly processes connection-registration events
public boolean events() {
    boolean result = false;

    PollerEvent pe = null;
    for (int i = 0, size = events.size(); i < size && (pe = events.poll()) != null; i++ ) {
        // Entering the loop means at least one event is queued, so the result is true
        result = true;
        // Get the underlying SocketChannel
        NioSocketWrapper socketWrapper = pe.getSocketWrapper();
        SocketChannel sc = socketWrapper.getSocket().getIOChannel();
        int interestOps = pe.getInterestOps();
        if (sc == null) {
            log.warn(sm.getString("endpoint.nio.nullSocketChannel"));
            socketWrapper.close();
        } else if (interestOps == OP_REGISTER) {
            // A registration event for a new connection
            try {
                // Register with the Selector, interested in read events
                sc.register(getSelector(), SelectionKey.OP_READ, socketWrapper);
            } catch (Exception x) {
                log.error(sm.getString("endpoint.nio.registerFail"), x);
            }
        } 
        // Non-core code omitted
        // .....
    }
   
    return result;
}

In Poller's run method, wakeupCounter > 0 is checked before selecting. Why not simply call selectNow() or select(selectorTimeout) directly?

My take: since Tomcat serves HTTP, a read event (the request content) usually arrives right after the connection is established, and wakeupCounter is incremented when a connection is registered. On the next pass the Poller can therefore use the non-blocking selectNow() and avoid an unnecessary blocking select(), although in practice the difference between the two calls is not huge.
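The interplay is easier to see in isolation. Below is a self-contained sketch of this wakeup-counter pattern; the class and method names are mine, not Tomcat's, and the real logic lives in NioEndpoint.Poller:

import java.nio.channels.Selector;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the Poller's wakeup-counter pattern (not Tomcat code)
class WakeupCounterSketch {
    private final Selector selector;
    private final Queue<Runnable> events = new ConcurrentLinkedQueue<>();
    private final AtomicLong wakeupCounter = new AtomicLong(0);

    WakeupCounterSketch() throws Exception {
        this.selector = Selector.open();
    }

    // Producer side (the Acceptor in Tomcat): queue work for the selector thread
    void addEvent(Runnable event) {
        events.offer(event);
        // incrementAndGet() == 0 means the counter was -1, i.e. the consumer is
        // (or is about to be) blocked in select(), so wake the Selector up
        if (wakeupCounter.incrementAndGet() == 0) {
            selector.wakeup();
        }
    }

    // One pass of the consumer loop (the Poller in Tomcat)
    void selectOnce(long timeout) throws Exception {
        Runnable event;
        while ((event = events.poll()) != null) {
            event.run();
        }
        int keyCount;
        if (wakeupCounter.getAndSet(-1) > 0) {
            // Events were queued since the last pass: don't block
            keyCount = selector.selectNow();
        } else {
            keyCount = selector.select(timeout);
        }
        wakeupCounter.set(0);
        System.out.println("ready keys: " + keyCount);
    }
}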

Next, look at how the Poller passes the channel to the next component once a read event fires.

protected void processKey(SelectionKey sk, NioSocketWrapper socketWrapper) {
    try {
        if (sk.isReadable() || sk.isWritable()) {
            if (socketWrapper.getSendfileData() != null) {
                processSendfile(sk, socketWrapper, false);
            } else {
                // Clear the ready ops from the key's interest set so they are not selected again
                unreg(sk, socketWrapper, sk.readyOps());
                boolean closeSocket = false;
                // Read goes before write
                // Readable
                if (sk.isReadable()) {
                    if (!processSocket(socketWrapper, SocketEvent.OPEN_READ, true)) {
                        closeSocket = true;
                    }
                }
                // Write handling and socket-close logic omitted ...
            }
        }
    } catch (Throwable t) {
        // Error handling omitted ...
    }
}

public boolean processSocket(SocketWrapperBase<S> socketWrapper,
        SocketEvent event, boolean dispatch) {
    try {
        if (socketWrapper == null) {
            return false;
        }
        SocketProcessorBase<S> sc = null;
        if (processorCache != null) {
            sc = processorCache.pop();
        }
        if (sc == null) {
            // Create a new SocketProcessorBase
            sc = createSocketProcessor(socketWrapper, event);
        } else {
            sc.reset(socketWrapper, event);
        }
        // Get the worker thread pool
        Executor executor = getExecutor();
        if (dispatch && executor != null) {
            // Execute the SocketProcessorBase's run method on the thread pool
            executor.execute(sc);
        } else {
            sc.run();
        }
    } catch (RejectedExecutionException ree) {
        getLog().warn(sm.getString("endpoint.executor.fail", socketWrapper) , ree);
        return false;
    } catch (Throwable t) {
        ExceptionUtils.handleThrowable(t);
        // This means we got an OOM or similar creating a thread, or that
        // the pool and its queue are full
        getLog().error(sm.getString("endpoint.process.fail"), t);
        return false;
    }
    return true;
}

With that, the EndPoint's job is done: it uses Java NIO to handle TCP, covering everything from accepting the socket connection to dispatching its events.
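Condensed to its skeleton, the EndPoint side is the classic acceptor / poller / worker split. The toy pipeline below is my own sketch rather than Tomcat code (port, pool size, and buffer size are arbitrary, and re-registering read interest after each read is omitted):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Toy acceptor/poller/worker pipeline in the spirit of NioEndpoint (not Tomcat code)
public class MiniEndpoint {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080), 100);              // blocking accept, like Tomcat
        Selector selector = Selector.open();
        Queue<SocketChannel> pending = new ConcurrentLinkedQueue<>();
        ExecutorService workers = Executors.newFixedThreadPool(4);  // the worker "executor"

        // Acceptor thread: blocking accept, then queue the channel for the poller
        Thread acceptor = new Thread(() -> {
            while (true) {
                try {
                    SocketChannel socket = server.accept();
                    socket.configureBlocking(false);
                    pending.offer(socket);
                    selector.wakeup();                               // analogous to the PollerEvent + wakeup
                } catch (Exception e) {
                    e.printStackTrace();
                    return;
                }
            }
        }, "mini-acceptor");
        acceptor.setDaemon(true);
        acceptor.start();

        // Poller loop: register queued channels, select, dispatch reads to workers
        while (true) {
            SocketChannel socket;
            while ((socket = pending.poll()) != null) {
                socket.register(selector, SelectionKey.OP_READ);     // the OP_REGISTER event
            }
            if (selector.select(1000) == 0) {
                continue;
            }
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                key.interestOps(0);                                  // like unreg(), avoid re-selecting
                SocketChannel ch = (SocketChannel) key.channel();
                workers.execute(() -> {                              // stand-in for SocketProcessor
                    try {
                        ByteBuffer buf = ByteBuffer.allocate(4096);
                        int n = ch.read(buf);
                        System.out.println("read " + n + " bytes from " + ch.getRemoteAddress());
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            }
        }
    }
}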

4、Processor

Now let's look at how the Processor handles the HTTP protocol.

At the end of the EndPoint stage, the channel is wrapped in a SocketProcessorBase and its run method is executed on the thread pool.

protected void doRun() {
        try {
            int handshake = -1;
            try {
                // For a plain NioChannel this always returns true; the TCP handshake finished long before this point
                if (socketWrapper.getSocket().isHandshakeComplete()) {
                    // No TLS handshaking required. Let the handler
                    // process this socket / event combination.
                    handshake = 0;
                } else if (event == SocketEvent.STOP || event == SocketEvent.DISCONNECT ||
                        event == SocketEvent.ERROR) {
                    // The connection is stopping, disconnecting, or in error
                    handshake = -1;
                } else {
                    handshake = socketWrapper.getSocket().handshake(event == SocketEvent.OPEN_READ, event == SocketEvent.OPEN_WRITE);
                 
                    event = SocketEvent.OPEN_READ;
                }
            } catch (IOException x) {
                handshake = -1;
                if (logHandshake.isDebugEnabled()) {
                    logHandshake.debug(sm.getString("endpoint.err.handshake",
                            socketWrapper.getRemoteAddr(), Integer.toString(socketWrapper.getRemotePort())), x);
                }
            } catch (CancelledKeyException ckx) {
                handshake = -1;
            }
            if (handshake == 0) {
                SocketState state = SocketState.OPEN;
                // Process the request from this socket
                if (event == null) {
                    state = getHandler().process(socketWrapper, SocketEvent.OPEN_READ);
                } else {
                    // Process the read event
                    state = getHandler().process(socketWrapper, event);
                }
                if (state == SocketState.CLOSED) {
                    socketWrapper.close();
                }
            } else if (handshake == -1 ) {
                getHandler().process(socketWrapper, SocketEvent.CONNECT_FAIL);
                socketWrapper.close();
            } else if (handshake == SelectionKey.OP_READ){
                socketWrapper.registerReadInterest();
            } else if (handshake == SelectionKey.OP_WRITE){
                socketWrapper.registerWriteInterest();
            }
        } catch (CancelledKeyException cx) {
            socketWrapper.close();
        } catch (VirtualMachineError vme) {
            ExceptionUtils.handleThrowable(vme);
        } catch (Throwable t) {
            log.error(sm.getString("endpoint.processing.fail"), t);
            socketWrapper.close();
        } finally {
            socketWrapper = null;
            event = null;
            //return to cache
            if (running && processorCache != null) {
                processorCache.push(this);
            }
        }
    }


After a chain of calls, execution eventually reaches Http11Processor's service method, where the HTTP conversion actually begins.

public SocketState service(SocketWrapperBase<?> socketWrapper) throws IOException {
    RequestInfo rp = request.getRequestProcessor();
    // Mark the current stage as request parsing
    rp.setStage(org.apache.coyote.Constants.STAGE_PARSE);

    // Initialize the input buffer with this socket wrapper
    setSocketWrapper(socketWrapper);

    // Flags
    keepAlive = true;
    openSocket = false;
    readComplete = true;
    boolean keptAlive = false;
    SendfileState sendfileState = SendfileState.DONE;

    while (!getErrorState().isError() && keepAlive && !isAsync() && upgradeToken == null &&
            sendfileState == SendfileState.DONE && !protocol.isPaused()) {

        // Parsing the request header
        try {
            // Parse the request line via the input buffer
            if (!inputBuffer.parseRequestLine(keptAlive, protocol.getConnectionTimeout(),
                    protocol.getKeepAliveTimeout())) {
                if (inputBuffer.getParsingRequestLinePhase() == -1) {
                    return SocketState.UPGRADING;
                } else if (handleIncompleteRequestLineRead()) {
                    break;
                }
            }

            // Determine whether this is an HTTP/0.9 request
            prepareRequestProtocol();
            // Header parsing and related handling omitted ...
        } catch (Throwable t) {
            // Exception handling omitted ...
        }

        if (getErrorState().isIoAllowed()) {
            // Setting up filters, and parse some request headers
            // Mark the prepare stage
            rp.setStage(org.apache.coyote.Constants.STAGE_PREPARE);
            try {
                // Prepare the request (set up filters, validate headers)
                prepareRequest();
            } catch (Throwable t) {
                ExceptionUtils.handleThrowable(t);
                if (log.isDebugEnabled()) {
                    log.debug(sm.getString("http11processor.request.prepare"), t);
                }
                // 500 - Internal Server Error
                response.setStatus(500);
                setErrorState(ErrorState.CLOSE_CLEAN, t);
            }
        }

        int maxKeepAliveRequests = protocol.getMaxKeepAliveRequests();
        if (maxKeepAliveRequests == 1) {
            keepAlive = false;
        } else if (maxKeepAliveRequests > 0 && socketWrapper.decrementKeepAlive() <= 0) {
            keepAlive = false;
        }

        // Process the request in the adapter
        if (getErrorState().isIoAllowed()) {
            try {
                // Mark the service stage
                rp.setStage(org.apache.coyote.Constants.STAGE_SERVICE);
                // Adapter component: hand the request off to the container
                getAdapter().service(request, response);
                // Handle when the response was committed before a serious
                // error occurred. Throwing a ServletException should both
                // set the status to 500 and set the errorException.
                // If we fail here, then the response is likely already
                // committed, so we can't try and set headers.
                if (keepAlive && !getErrorState().isError() && !isAsync() &&
                        statusDropsConnection(response.getStatus())) {
                    setErrorState(ErrorState.CLOSE_CLEAN, null);
                }
            } catch (InterruptedIOException e) {
                setErrorState(ErrorState.CLOSE_CONNECTION_NOW, e);
            } catch (HeadersTooLargeException e) {
                log.error(sm.getString("http11processor.request.process"), e);
                // The response should not have been committed but check it
                // anyway to be safe
                if (response.isCommitted()) {
                    setErrorState(ErrorState.CLOSE_NOW, e);
                } else {
                    response.reset();
                    response.setStatus(500);
                    setErrorState(ErrorState.CLOSE_CLEAN, e);
                    response.setHeader("Connection", "close"); // TODO: Remove
                }
            } catch (Throwable t) {
                ExceptionUtils.handleThrowable(t);
                log.error(sm.getString("http11processor.request.process"), t);
                // 500 - Internal Server Error
                response.setStatus(500);
                setErrorState(ErrorState.CLOSE_CLEAN, t);
                getAdapter().log(request, response, 0);
            }
        }

        // Finish the handling of the request
        rp.setStage(org.apache.coyote.Constants.STAGE_ENDINPUT);
        if (!isAsync()) {
            // If this is an async request then the request ends when it has
            // been completed. The AsyncContext is responsible for calling
            // endRequest() in that case.
            endRequest();
        }
        rp.setStage(org.apache.coyote.Constants.STAGE_ENDOUTPUT);

        // If there was an error, make sure the request is counted as
        // and error, and update the statistics counter
        if (getErrorState().isError()) {
            response.setStatus(500);
        }

        if (!isAsync() || getErrorState().isError()) {
            request.updateCounters();
            if (getErrorState().isIoAllowed()) {
                inputBuffer.nextRequest();
                outputBuffer.nextRequest();
            }
        }

        if (!protocol.getDisableUploadTimeout()) {
            int connectionTimeout = protocol.getConnectionTimeout();
            if (connectionTimeout > 0) {
                socketWrapper.setReadTimeout(connectionTimeout);
            } else {
                socketWrapper.setReadTimeout(0);
            }
        }

        rp.setStage(org.apache.coyote.Constants.STAGE_KEEPALIVE);

        sendfileState = processSendfile(socketWrapper);
    }

    rp.setStage(org.apache.coyote.Constants.STAGE_ENDED);

    if (getErrorState().isError() || (protocol.isPaused() && !isAsync())) {
        return SocketState.CLOSED;
    } else if (isAsync()) {
        return SocketState.LONG;
    } else if (isUpgrade()) {
        return SocketState.UPGRADING;
    } else {
        if (sendfileState == SendfileState.PENDING) {
            return SocketState.SENDFILE;
        } else {
            if (openSocket) {
                if (readComplete) {
                    return SocketState.OPEN;
                } else {
                    return SocketState.LONG;
                }
            } else {
                return SocketState.CLOSED;
            }
        }
    }
}

That covers the end-to-end flow of the Tomcat connector handling a connection and gives a rough picture of how Tomcat processes connections: the EndPoint takes care of low-level socket communication, and the Processor takes care of application-layer protocol parsing.

Note: the images in this article are taken from 《深入拆解tomcat&jetty》.