Tomcat Source Code Walkthrough: How Tomcat Handles Web Requests (Part 1)

In the previous four articles, we covered how Tomcat's startup process is implemented.

Once Tomcat has started, it is ready to respond to web requests from clients. In this article we look at how the Tomcat container handles a web request.

1. How the Servlet Takes Effect

Anyone who has done Java web development knows that the actual business logic lives in Servlets, so it is easy to guess that a request handled by Tomcat is ultimately completed by a Servlet. Given that Tomcat defines so many layers of containers, how does it decide which Wrapper container's Servlet should handle a given request? The answer is the Mapper component.

The Mapper component's job is to locate the Servlet for a requested URL. It works like this: the Mapper holds the web application's configuration, which is essentially the mapping between container components and access paths: the domain names configured on Host containers, the web application paths of Context containers, and the Servlet mapping paths of Wrapper containers. You can picture this configuration as a multi-level Map.

When a request arrives, the Mapper parses the domain name and path out of the request URL, looks them up in the Map it maintains, and locates a Servlet. Note that one request URL ultimately maps to exactly one Wrapper container, i.e. one Servlet. Let's walk through an example of this lookup.
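
To make the "multi-level Map" idea more concrete, here is a minimal, self-contained Java sketch of such a lookup. This is not Tomcat's Mapper; all class and method names below are hypothetical, and the real Mapper uses richer data structures and longest-prefix matching rules.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SimpleMapper {

    // host -> context path -> servlet path -> servlet name (hypothetical structure)
    private final Map<String, Map<String, Map<String, String>>> mappings = new HashMap<>();

    public void addWrapper(String host, String contextPath, String servletPath, String servletName) {
        mappings.computeIfAbsent(host, h -> new HashMap<>())
                .computeIfAbsent(contextPath, c -> new HashMap<>())
                .put(servletPath, servletName);
    }

    // Resolve something like host = "user.shopping.com", uri = "/order/buy"
    public String map(String host, String uri) {
        Map<String, Map<String, String>> contexts = mappings.getOrDefault(host, Collections.emptyMap());
        for (Map.Entry<String, Map<String, String>> context : contexts.entrySet()) {
            if (!uri.startsWith(context.getKey())) {
                continue;                                   // wrong web application path
            }
            String remainder = uri.substring(context.getKey().length());
            for (Map.Entry<String, String> wrapper : context.getValue().entrySet()) {
                if (remainder.startsWith(wrapper.getKey())) {
                    return wrapper.getValue();              // found the Wrapper (Servlet)
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        SimpleMapper mapper = new SimpleMapper();
        mapper.addWrapper("user.shopping.com", "/order", "/buy", "BuyServlet");
        System.out.println(mapper.map("user.shopping.com", "/order/buy")); // prints BuyServlet
    }
}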

Suppose we have an online shopping site that consists of a back-office management system for site administrators and an online shopping system for end users. Both run on the same Tomcat instance (you would never deploy it this way in production; it is only an example). To keep their access domains separate, two virtual hosts are configured: manage.shopping.com and user.shopping.com. Administrators use manage.shopping.com to manage users and products, and user management and product management are two separate web applications. End users go through user.shopping.com to search for products and place orders, and search and order management are likewise two independent web applications.

For this deployment, Tomcat creates one Service component and one Engine container, creates two Host child containers under the Engine, and two Context child containers under each Host. Since a web application usually contains multiple Servlets, Tomcat also creates multiple Wrapper child containers inside each Context. Each container has its own access path, as shown in the figure below:

Now suppose a user accesses a URL such as user.shopping.com:8080/order/buy from the figure. How does Tomcat locate the Servlet for it? The lookup involves the following steps.

1.1 Determine the Service and Engine by protocol and port

Each Tomcat connector listens on a different port; for example, the default HTTP connector listens on port 8080 and the default AJP connector on port 8009. The URL in our example targets port 8080, so the request is accepted by the HTTP connector. A connector belongs to exactly one Service component, so the Service is now determined. We also know that besides its connectors, a Service contains exactly one container component, namely an Engine, so once the Service is determined the Engine is determined as well.

1.2 Determine the Host by domain name

Once the Service and Engine are determined, the Mapper looks up the Host container by the domain name in the URL. In our example the domain is user.shopping.com, so the Mapper finds the Host2 container.

1.3 Determine the Context by URL path

Once the Host is determined, the Mapper matches the URL path against the web applications' paths. In our example the request path is /order, so the Context4 container is selected.

1.4 Determine the Wrapper (Servlet) by URL path

Once the Context is determined, the Mapper uses the Servlet mapping paths configured in web.xml to find the specific Wrapper and Servlet.

By now it should be clear what a container is and how Tomcat walks down the parent-child container hierarchy to find the Servlet that handles a request. Note that it is not only the Servlet that processes the request; every container along the lookup path does some work on it. The Adapter in the connector calls the container's service method to eventually execute the Servlet: the Engine container receives the request first, does some processing, and passes it to its Host child container; this continues down the chain until the request reaches the Wrapper container, which invokes the final Servlet.
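
To illustrate the idea that each container does some processing and then delegates to its child, here is a tiny stand-alone sketch. The types below are hypothetical and only an analogy; the real mechanism is Tomcat's Pipeline-Valve chain, which we revisit in the source analysis below.

// Hypothetical types for illustration only; Tomcat really uses the Pipeline-Valve mechanism.
interface Container {
    void service(String uri);
}

class Wrapper implements Container {
    @Override
    public void service(String uri) {
        System.out.println("Wrapper invokes the Servlet mapped to " + uri);
    }
}

class DelegatingContainer implements Container {
    private final String name;
    private final Container child;

    DelegatingContainer(String name, Container child) {
        this.name = name;
        this.child = child;
    }

    @Override
    public void service(String uri) {
        System.out.println(name + " pre-processes " + uri); // e.g. access logging, error handling
        child.service(uri);                                  // then hands the request to its child
    }
}

public class ContainerChainDemo {
    public static void main(String[] args) {
        Container engine = new DelegatingContainer("Engine",
                new DelegatingContainer("Host",
                        new DelegatingContainer("Context", new Wrapper())));
        engine.service("/order/buy");
    }
}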

2. Source Code Analysis

That is the big picture of how Tomcat serves a request. Next, let's look at the relevant source code, focusing on the following aspects:

  • How Tomcat's worker threads are created
  • How a client request is turned into internal objects
  • How the Servlet takes effect
  • How the response is written back to the client browser

In the earlier article Tomcat源码解读『基础类介绍』 we mentioned that Tomcat starts background threads through the Connector to listen on the configured socket port and serve external requests, as shown in the figure below:

Since the discussion below is closely tied to the components in that figure, let's recap the earlier introduction of these components here:

  • ProtocolHandler

The ProtocolHandler field is initialized in the Connector constructor. If the no-argument Connector constructor is used, the ProtocolHandler defaults to the HTTP/1.1 NIO implementation, Http11NioProtocol. We will use Http11NioProtocol as the example when introducing ProtocolHandler.

The ProtocolHandler handles the network connection and the application-layer protocol, and contains two important sub-components: Endpoint and Processor.

  • Endpoint

The Endpoint is the communication endpoint, i.e. the interface the server listens on. It is the concrete socket receiver and sender and an abstraction over the transport layer, so the Endpoint is where TCP/IP is handled.

Endpoint is an interface whose abstract base implementation is AbstractEndpoint. Its concrete subclasses, such as NioEndpoint and Nio2Endpoint, contain two important sub-components: the Acceptor and the SocketProcessor.

The Acceptor listens for incoming socket connection requests. The SocketProcessor handles an accepted socket: it implements Runnable, and its run() method calls the protocol-processing component, the Processor. To increase throughput, SocketProcessor tasks are submitted to a thread pool for execution.

  • Processor

While the Endpoint implements TCP/IP, the Processor implements the application-layer protocol (HTTP, AJP, etc.). It receives the socket from the Endpoint, reads the byte stream and parses it into Tomcat Request and Response objects, and hands them to the container through the Adapter.

  • Adapter

Because protocols differ, the request information sent by clients differs as well, so Tomcat defines its own Request class to hold it. The ProtocolHandler is responsible for parsing the request and producing this Tomcat Request, but it is not a standard ServletRequest, which means it cannot be passed to the container directly. The Tomcat designers' solution is the CoyoteAdapter, a classic use of the adapter pattern: the connector calls CoyoteAdapter's service method with the Tomcat Request, and CoyoteAdapter converts it into a ServletRequest and then invokes the Engine container's pipeline (the pipeline-valve mechanism covered in earlier articles) to eventually call the Servlet.
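
As a rough illustration of the adapter idea (not the real CoyoteAdapter; the types below are hypothetical and heavily simplified), the connector-side request is wrapped so that the container sees the interface it expects:

// Hypothetical, simplified types just to show the adapter pattern.
class CoyoteRequest {                       // connector-side request (~ org.apache.coyote.Request)
    String uri;
}

interface ServletStyleRequest {             // what the container expects to work with
    String getRequestURI();
}

class RequestAdapter implements ServletStyleRequest {
    private final CoyoteRequest coyote;

    RequestAdapter(CoyoteRequest coyote) {
        this.coyote = coyote;
    }

    @Override
    public String getRequestURI() {
        return coyote.uri;                  // delegate to the wrapped connector-side object
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        CoyoteRequest raw = new CoyoteRequest();
        raw.uri = "/order/buy";
        ServletStyleRequest req = new RequestAdapter(raw); // ~ what CoyoteAdapter#service sets up
        System.out.println(req.getRequestURI());
    }
}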

2.1 Tomcat Worker Threads

2.1.1 Connector Component Types

The Tomcat connector, Connector, contains two major components: ProtocolHandler and Adapter. They are initialized in the Connector constructor and in Connector's initInternal method, respectively.

When we previously looked at how the Digester parses server.xml, the parsing rule for the Connector node was:

digester.addRule("Server/Service/Connector", new ConnectorCreateRule());

So when the Digester encounters the "Server/Service/Connector" node in server.xml, it triggers the begin method of ConnectorCreateRule:

public void begin(String namespace, String name, Attributes attributes)
        throws Exception {
    Service svc = (Service) digester.peek();
    Executor ex = null;
    String executorName = attributes.getValue("executor");
    if (executorName != null ) {
        ex = svc.getExecutor(executorName);
    }
    String protocolName = attributes.getValue("protocol");
    Connector con = new Connector(protocolName);
    if (ex != null) {
        setExecutor(con, ex);
    }
    String sslImplementationName = attributes.getValue("sslImplementationName");
    if (sslImplementationName != null) {
        setSSLImplementationName(con, sslImplementationName);
    }
    digester.push(con);
}

So when the Connector is instantiated here, the protocol attribute of the Connector node in server.xml is read first and passed to the Connector(String protocol) constructor. For HTTP, this attribute is usually configured as "HTTP/1.1".

public Connector(String protocol) {
    boolean apr = AprLifecycleListener.isAprAvailable() &&
            AprLifecycleListener.getUseAprConnector();
    ProtocolHandler p = null;
    try {
        p = ProtocolHandler.create(protocol, apr);
    } catch (Exception e) {
        log.error(sm.getString(
                "coyoteConnector.protocolHandlerInstantiationFailed"), e);
    }
    if (p != null) {
        protocolHandler = p;
        protocolHandlerClassName = protocolHandler.getClass().getName();
    } else {
        protocolHandler = null;
        protocolHandlerClassName = protocol;
    }
    // Default for Connector depends on this system property
    setThrowOnFailure(Boolean.getBoolean("org.apache.catalina.startup.EXIT_ON_INIT_FAILURE"));
}

public static ProtocolHandler create(String protocol, boolean apr)
        throws ClassNotFoundException, InstantiationException, IllegalAccessException,
        IllegalArgumentException, InvocationTargetException, NoSuchMethodException, SecurityException {
    if (protocol == null || "HTTP/1.1".equals(protocol)
            || (!apr && org.apache.coyote.http11.Http11NioProtocol.class.getName().equals(protocol))
            || (apr && org.apache.coyote.http11.Http11AprProtocol.class.getName().equals(protocol))) {
        if (apr) {
            return new org.apache.coyote.http11.Http11AprProtocol();
        } else {
            return new org.apache.coyote.http11.Http11NioProtocol();
        }
    } else if ("AJP/1.3".equals(protocol)
            || (!apr && org.apache.coyote.ajp.AjpNioProtocol.class.getName().equals(protocol))
            || (apr && org.apache.coyote.ajp.AjpAprProtocol.class.getName().equals(protocol))) {
        if (apr) {
            return new org.apache.coyote.ajp.AjpAprProtocol();
        } else {
            return new org.apache.coyote.ajp.AjpNioProtocol();
        }
    } else {
        // Instantiate protocol handler
        Class<?> clazz = Class.forName(protocol);
        return (ProtocolHandler) clazz.getConstructor().newInstance();
    }
}

It is not hard to see that the Connector's internal ProtocolHandler component is of type Http11NioProtocol (assuming APR is not in use).

The ProtocolHandler here is instantiated simply by calling Http11NioProtocol's no-argument constructor (reflection is only used when a fully qualified handler class name is configured as the protocol):

public Http11NioProtocol() {
    super(new NioEndpoint());
}

So the ProtocolHandler's internal Endpoint component is of type NioEndpoint.

In Connector's initInternal method, the Connector's internal Adapter component is assigned an instance of CoyoteAdapter:

adapter = new CoyoteAdapter(this);

The ProtocolHandler's Processor component is of type Http11Processor; the details are a bit involved, so we will come back to it later.

So for the components above, we can draw the following conclusions:

Connector = ProtocolHandler + Adapter
ProtocolHandler = Endpoint + Processor 

ProtocolHandler = Http11NioProtocol
Adapter = CoyoteAdapter
Endpoint = NioEndpoint
Processor = Http11Processor

2.1.2 Starting the Connector's Processing Threads

Above we looked at the Connector's component types and how they are built; now let's look at the Connector's worker threads. The Connector's start method calls the ProtocolHandler's start method. Since we determined that the ProtocolHandler is an org.apache.coyote.http11.Http11NioProtocol, that class's start method is invoked; it does not override start from its parent AbstractProtocol, so the call ends up in org.apache.coyote.AbstractProtocol#start:

public void start() throws Exception {
    if (getLog().isInfoEnabled()) {
        getLog().info(sm.getString("abstractProtocolHandler.start", getName()));
        logPortOffset();
    }

    endpoint.start();
    monitorFuture = getUtilityExecutor().scheduleWithFixedDelay(
            new Runnable() {
                @Override
                public void run() {
                    if (!isPaused()) {
                        startAsyncTimeout();
                    }
                }
            }, 0, 60, TimeUnit.SECONDS);
}

This calls the Endpoint's start method. We already know that Http11NioProtocol's Endpoint component is an org.apache.tomcat.util.net.NioEndpoint, so the call eventually reaches that class's startInternal method:

/**
 * Start the NIO endpoint, creating acceptor, poller threads.
 */
@Override
public void startInternal() throws Exception {

    if (!running) {
        running = true;
        paused = false;

        if (socketProperties.getProcessorCache() != 0) {
            processorCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                    socketProperties.getProcessorCache());
        }
        if (socketProperties.getEventCache() != 0) {
            eventCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                    socketProperties.getEventCache());
        }
        if (socketProperties.getBufferPool() != 0) {
            nioChannels = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                    socketProperties.getBufferPool());
        }

        // Create worker collection
        if (getExecutor() == null) {
            createExecutor();
        }

        initializeConnectionLatch();

        // Start poller thread
        poller = new Poller();
        Thread pollerThread = new Thread(poller, getName() + "-ClientPoller");
        pollerThread.setPriority(threadPriority);
        pollerThread.setDaemon(true);
        pollerThread.start();

        // create acceptor
        startAcceptorThread();
    }
}

As the Javadoc comment says, this method does two main things:

  • Create the Poller thread, which handles the client connections accepted by the Acceptor thread

  • Create the Acceptor thread, which listens for client socket connection requests (see startAcceptorThread below)

protected void startAcceptorThread() {
    acceptor = new Acceptor<>(this);
    String threadName = getName() + "-Acceptor";
    acceptor.setThreadName(threadName);
    Thread t = new Thread(acceptor, threadName);
    t.setPriority(getAcceptorThreadPriority());
    t.setDaemon(getDaemon());
    t.start();
}

In startAcceptorThread, an Acceptor object is constructed via the org.apache.tomcat.util.net.Acceptor constructor and the Acceptor thread is started; this thread accepts incoming client socket connections. So NioEndpoint's internal accepting thread, the Acceptor, is of type org.apache.tomcat.util.net.Acceptor.

2.1.3 The Acceptor Thread

From the Acceptor's run method we can see that the core of the Acceptor thread is two things: accepting a socket and then handing it off for processing (the two calls highlighted in red in the figure above).
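
A minimal stand-alone sketch of that accept loop, assuming a plain blocking ServerSocketChannel, looks like the following. This is not Tomcat's Acceptor class; handOff is a placeholder for what Tomcat does in endpoint.setSocketOptions.

import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class AcceptLoopSketch {

    public static void main(String[] args) throws Exception {
        ServerSocketChannel serverSock = ServerSocketChannel.open();
        serverSock.socket().bind(new InetSocketAddress(8080), 100); // roughly what initServerSocket does
        serverSock.configureBlocking(true);                         // the accept loop blocks on accept()

        while (true) {
            SocketChannel socket = serverSock.accept();             // ~ endpoint.serverSocketAccept()
            handOff(socket);                                        // ~ endpoint.setSocketOptions(socket)
        }
    }

    private static void handOff(SocketChannel socket) {
        // Placeholder: in Tomcat this wraps the channel and registers it with the Poller.
    }
}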

We have skipped over one question: endpoint.serverSocketAccept() can return a socket, so there must be an NIO component inside NioEndpoint listening on some port. How is that component instantiated and initialized? NioEndpoint's serverSocketAccept method is:

protected SocketChannel serverSocketAccept() throws Exception {
    return serverSock.accept();
}

So the NIO component in question is serverSock, of type java.nio.channels.ServerSocketChannel. When does serverSock get bound to the port? NioEndpoint has a bind method:

public void bind() throws Exception {
    initServerSocket();

    setStopLatch(new CountDownLatch(1));

    // Initialize SSL if needed
    initialiseSsl();

    selectorPool.open(getName());
}

// Separated out to make it easier for folks that extend NioEndpoint to
// implement custom [server]sockets
protected void initServerSocket() throws Exception {
    if (!getUseInheritedChannel()) {
        serverSock = ServerSocketChannel.open();
        socketProperties.setProperties(serverSock.socket());
        InetSocketAddress addr = new InetSocketAddress(getAddress(), getPortWithOffset());
        serverSock.socket().bind(addr,getAcceptCount());
    } else {
        // Retrieve the channel provided by the OS
        Channel ic = System.inheritedChannel();
        if (ic instanceof ServerSocketChannel) {
            serverSock = (ServerSocketChannel) ic;
        }
        if (serverSock == null) {
            throw new IllegalArgumentException(sm.getString("endpoint.init.bind.inherited"));
        }
    }
    serverSock.configureBlocking(true); //mimic APR behavior
}

This method is where the ServerSocketChannel gets bound to the port. So when is bind invoked? It turns out to happen from Connector's initInternal method, which calls:

protocolHandler.init();

The call path from protocolHandler.init() down to the bind method above is roughly: Connector#initInternal → protocolHandler.init() → AbstractProtocol#init → endpoint.init() → AbstractEndpoint#init → bindWithCleanup() → NioEndpoint#bind.

Back to the topic of this section: how does the Acceptor process the socket it accepted? That is the setSocketOptions method. It is declared abstract in AbstractEndpoint, and since our Endpoint is a NioEndpoint, the method invoked here is NioEndpoint's setSocketOptions:

protected boolean setSocketOptions(SocketChannel socket) {
    NioSocketWrapper socketWrapper = null;
    try {
        // Allocate channel and wrapper
        NioChannel channel = null;
        if (nioChannels != null) {
            channel = nioChannels.pop();
        }
        if (channel == null) {
            SocketBufferHandler bufhandler = new SocketBufferHandler(
                    socketProperties.getAppReadBufSize(),
                    socketProperties.getAppWriteBufSize(),
                    socketProperties.getDirectBuffer());
            if (isSSLEnabled()) {
                channel = new SecureNioChannel(bufhandler, selectorPool, this);
            } else {
                channel = new NioChannel(bufhandler);
            }
        }
        NioSocketWrapper newWrapper = new NioSocketWrapper(channel, this);
        channel.reset(socket, newWrapper);
        connections.put(socket, newWrapper);
        socketWrapper = newWrapper;

        // Set socket properties
        // Disable blocking, polling will be used
        socket.configureBlocking(false);
        socketProperties.setProperties(socket.socket());

        socketWrapper.setReadTimeout(getConnectionTimeout());
        socketWrapper.setWriteTimeout(getConnectionTimeout());
        socketWrapper.setKeepAliveLeft(NioEndpoint.this.getMaxKeepAliveRequests());
        poller.register(channel, socketWrapper);
        return true;
    } catch (Throwable t) {
        ExceptionUtils.handleThrowable(t);
        try {
            log.error(sm.getString("endpoint.socketOptionsError"), t);
        } catch (Throwable tt) {
            ExceptionUtils.handleThrowable(tt);
        }
        if (socketWrapper == null) {
            destroySocket(socket);
        }
    }
    // Tell to close the socket if needed
    return false;
}

This method wraps the accepted SocketChannel in a NioChannel and a NioSocketWrapper, and then calls poller.register(channel, socketWrapper) to hand the connection over to the Poller. This Poller is exactly the pollerThread created in NioEndpoint's startInternal method that we saw above.

2.1.4 The Poller Thread

First, let's look at the Poller's register method that the Acceptor calls above:

public void register(final NioChannel socket, final NioSocketWrapper socketWrapper) {
    socketWrapper.interestOps(SelectionKey.OP_READ);//this is what OP_REGISTER turns into.
    PollerEvent event = null;
    if (eventCache != null) {
        event = eventCache.pop();
    }
    if (event == null) {
        event = new PollerEvent(socket, OP_REGISTER);
    } else {
        event.reset(socket, OP_REGISTER);
    }
    addEvent(event);
}

The core is to build a PollerEvent and call addEvent to append it to the Poller's events field. events is a SynchronizedQueue, so the add is thread-safe. In other words, by calling the Poller's register method, the Acceptor wraps the socket to be processed in a PollerEvent and appends it to the Poller's events queue, so you can expect the Poller's run method to be busy consuming the PollerEvents in events:

The Poller thread's handling of a PollerEvent submitted by the Acceptor boils down to creating a SocketProcessor task and submitting it to the executor thread pool. Because our Endpoint is a NioEndpoint, the createSocketProcessor method produces an org.apache.tomcat.util.net.NioEndpoint.SocketProcessor.
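
A minimal stand-alone sketch of that selector loop idea is shown below. This is a hypothetical class, not Tomcat's Poller: it drains a queue of pending registrations into a Selector, waits for readable channels, and hands each one to a worker pool, roughly mirroring events(), select() and processKey()/processSocket().

import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PollerSketch implements Runnable {

    private final Selector selector;
    private final Queue<SocketChannel> events = new ConcurrentLinkedQueue<>(); // ~ Poller's events queue
    private final ExecutorService workers = Executors.newFixedThreadPool(10);  // ~ the endpoint's executor

    public PollerSketch() throws Exception {
        this.selector = Selector.open();
    }

    // Called from the acceptor thread, like Poller#register(...)
    public void register(SocketChannel socket) {
        events.offer(socket);
        selector.wakeup();
    }

    @Override
    public void run() {
        try {
            while (true) {
                // 1. Drain queued registrations, like Poller#events()
                SocketChannel pending;
                while ((pending = events.poll()) != null) {
                    pending.configureBlocking(false);
                    pending.register(selector, SelectionKey.OP_READ);
                }
                // 2. Wait for readable sockets, then hand each one to a worker,
                //    like processKey() -> processSocket() -> a SocketProcessor on the executor
                selector.select(1000);
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    key.interestOps(0); // stop selecting this channel while a worker handles it
                    workers.execute(() -> handle((SocketChannel) key.channel()));
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void handle(SocketChannel socket) {
        // Placeholder for what SocketProcessor#doRun() does with the socket.
    }
}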

2.1.5 SocketProcessor

SocketProcessor extends SocketProcessorBase, an abstract class that implements the Runnable interface.

The run method is defined in the parent class and delegates to the template method doRun(), which the subclass implements, so the core logic of SocketProcessor actually lives in org.apache.tomcat.util.net.NioEndpoint.SocketProcessor#doRun().

The core of doRun is to obtain the Handler via NioEndpoint's getHandler method and call that Handler's process method on the socketWrapper.

NioEndpoint's Handler field is set while Http11NioProtocol is being constructed, as follows:

public Http11NioProtocol() {
    super(new NioEndpoint());
}

public AbstractHttp11Protocol(AbstractEndpoint<S,?> endpoint) {
    super(endpoint);
    setConnectionTimeout(Constants.DEFAULT_CONNECTION_TIMEOUT);
    ConnectionHandler<S> cHandler = new ConnectionHandler<>(this);
    setHandler(cHandler);
    getEndpoint().setHandler(cHandler);
}

So the handler is of type org.apache.coyote.AbstractProtocol.ConnectionHandler. Next, let's look at ConnectionHandler's process method:

@Override
public SocketState process(SocketWrapperBase<S> wrapper, SocketEvent status) {
    if (getLog().isDebugEnabled()) {
        getLog().debug(sm.getString("abstractConnectionHandler.process",
                wrapper.getSocket(), status));
    }
    if (wrapper == null) {
        // Nothing to do. Socket has been closed.
        return SocketState.CLOSED;
    }

    S socket = wrapper.getSocket();

    Processor processor = (Processor) wrapper.getCurrentProcessor();
    if (getLog().isDebugEnabled()) {
        getLog().debug(sm.getString("abstractConnectionHandler.connectionsGet",
                processor, socket));
    }

    // Timeouts are calculated on a dedicated thread and then
    // dispatched. Because of delays in the dispatch process, the
    // timeout may no longer be required. Check here and avoid
    // unnecessary processing.
    if (SocketEvent.TIMEOUT == status &&
            (processor == null ||
            !processor.isAsync() && !processor.isUpgrade() ||
            processor.isAsync() && !processor.checkAsyncTimeoutGeneration())) {
        // This is effectively a NO-OP
        return SocketState.OPEN;
    }

    if (processor != null) {
        // Make sure an async timeout doesn't fire
        getProtocol().removeWaitingProcessor(processor);
    } else if (status == SocketEvent.DISCONNECT || status == SocketEvent.ERROR) {
        // Nothing to do. Endpoint requested a close and there is no
        // longer a processor associated with this socket.
        return SocketState.CLOSED;
    }

    ContainerThreadMarker.set();

    try {
        if (processor == null) {
            String negotiatedProtocol = wrapper.getNegotiatedProtocol();
            // OpenSSL typically returns null whereas JSSE typically
            // returns "" when no protocol is negotiated
            if (negotiatedProtocol != null && negotiatedProtocol.length() > 0) {
                UpgradeProtocol upgradeProtocol = getProtocol().getNegotiatedProtocol(negotiatedProtocol);
                if (upgradeProtocol != null) {
                    processor = upgradeProtocol.getProcessor(wrapper, getProtocol().getAdapter());
                    if (getLog().isDebugEnabled()) {
                        getLog().debug(sm.getString("abstractConnectionHandler.processorCreate", processor));
                    }
                } else if (negotiatedProtocol.equals("http/1.1")) {
                    // Explicitly negotiated the default protocol.
                    // Obtain a processor below.
                } else {
                    // TODO:
                    // OpenSSL 1.0.2's ALPN callback doesn't support
                    // failing the handshake with an error if no
                    // protocol can be negotiated. Therefore, we need to
                    // fail the connection here. Once this is fixed,
                    // replace the code below with the commented out
                    // block.
                    if (getLog().isDebugEnabled()) {
                        getLog().debug(sm.getString("abstractConnectionHandler.negotiatedProcessor.fail",
                                negotiatedProtocol));
                    }
                    return SocketState.CLOSED;
                    /*
                     * To replace the code above once OpenSSL 1.1.0 is
                     * used.
                    // Failed to create processor. This is a bug.
                    throw new IllegalStateException(sm.getString(
                            "abstractConnectionHandler.negotiatedProcessor.fail",
                            negotiatedProtocol));
                    */
                }
            }
        }
        if (processor == null) {
            processor = recycledProcessors.pop();
            if (getLog().isDebugEnabled()) {
                getLog().debug(sm.getString("abstractConnectionHandler.processorPop", processor));
            }
        }
        if (processor == null) {
            processor = getProtocol().createProcessor();
            register(processor);
            if (getLog().isDebugEnabled()) {
                getLog().debug(sm.getString("abstractConnectionHandler.processorCreate", processor));
            }
        }

        processor.setSslSupport(
                wrapper.getSslSupport(getProtocol().getClientCertProvider()));

        // Associate the processor with the connection
        wrapper.setCurrentProcessor(processor);

        SocketState state = SocketState.CLOSED;
        do {
            state = processor.process(wrapper, status);

            if (state == SocketState.UPGRADING) {
                // Get the HTTP upgrade handler
                UpgradeToken upgradeToken = processor.getUpgradeToken();
                // Retrieve leftover input
                ByteBuffer leftOverInput = processor.getLeftoverInput();
                if (upgradeToken == null) {
                    // Assume direct HTTP/2 connection
                    UpgradeProtocol upgradeProtocol = getProtocol().getUpgradeProtocol("h2c");
                    if (upgradeProtocol != null) {
                        // Release the Http11 processor to be re-used
                        release(processor);
                        // Create the upgrade processor
                        processor = upgradeProtocol.getProcessor(wrapper, getProtocol().getAdapter());
                        wrapper.unRead(leftOverInput);
                        // Associate with the processor with the connection
                        wrapper.setCurrentProcessor(processor);
                    } else {
                        if (getLog().isDebugEnabled()) {
                            getLog().debug(sm.getString(
                                "abstractConnectionHandler.negotiatedProcessor.fail",
                                "h2c"));
                        }
                        // Exit loop and trigger appropriate clean-up
                        state = SocketState.CLOSED;
                    }
                } else {
                    HttpUpgradeHandler httpUpgradeHandler = upgradeToken.getHttpUpgradeHandler();
                    // Release the Http11 processor to be re-used
                    release(processor);
                    // Create the upgrade processor
                    processor = getProtocol().createUpgradeProcessor(wrapper, upgradeToken);
                    if (getLog().isDebugEnabled()) {
                        getLog().debug(sm.getString("abstractConnectionHandler.upgradeCreate",
                                processor, wrapper));
                    }
                    wrapper.unRead(leftOverInput);
                    // Associate with the processor with the connection
                    wrapper.setCurrentProcessor(processor);
                    // Initialise the upgrade handler (which may trigger
                    // some IO using the new protocol which is why the lines
                    // above are necessary)
                    // This cast should be safe. If it fails the error
                    // handling for the surrounding try/catch will deal with
                    // it.
                    if (upgradeToken.getInstanceManager() == null) {
                        httpUpgradeHandler.init((WebConnection) processor);
                    } else {
                        ClassLoader oldCL = upgradeToken.getContextBind().bind(false, null);
                        try {
                            httpUpgradeHandler.init((WebConnection) processor);
                        } finally {
                            upgradeToken.getContextBind().unbind(false, oldCL);
                        }
                    }
                    if (httpUpgradeHandler instanceof InternalHttpUpgradeHandler) {
                        if (((InternalHttpUpgradeHandler) httpUpgradeHandler).hasAsyncIO()) {
                            // The handler will initiate all further I/O
                            state = SocketState.LONG;
                        }
                    }
                }
            }
        } while ( state == SocketState.UPGRADING);

        if (state == SocketState.LONG) {
            // In the middle of processing a request/response. Keep the
            // socket associated with the processor. Exact requirements
            // depend on type of long poll
            longPoll(wrapper, processor);
            if (processor.isAsync()) {
                getProtocol().addWaitingProcessor(processor);
            }
        } else if (state == SocketState.OPEN) {
            // In keep-alive but between requests. OK to recycle
            // processor. Continue to poll for the next request.
            wrapper.setCurrentProcessor(null);
            release(processor);
            wrapper.registerReadInterest();
        } else if (state == SocketState.SENDFILE) {
            // Sendfile in progress. If it fails, the socket will be
            // closed. If it works, the socket either be added to the
            // poller (or equivalent) to await more data or processed
            // if there are any pipe-lined requests remaining.
        } else if (state == SocketState.UPGRADED) {
            // Don't add sockets back to the poller if this was a
            // non-blocking write otherwise the poller may trigger
            // multiple read events which may lead to thread starvation
            // in the connector. The write() method will add this socket
            // to the poller if necessary.
            if (status != SocketEvent.OPEN_WRITE) {
                longPoll(wrapper, processor);
                getProtocol().addWaitingProcessor(processor);
            }
        } else if (state == SocketState.SUSPENDED) {
            // Don't add sockets back to the poller.
            // The resumeProcessing() method will add this socket
            // to the poller.
        } else {
            // Connection closed. OK to recycle the processor.
            // Processors handling upgrades require additional clean-up
            // before release.
            wrapper.setCurrentProcessor(null);
            if (processor.isUpgrade()) {
                UpgradeToken upgradeToken = processor.getUpgradeToken();
                HttpUpgradeHandler httpUpgradeHandler = upgradeToken.getHttpUpgradeHandler();
                InstanceManager instanceManager = upgradeToken.getInstanceManager();
                if (instanceManager == null) {
                    httpUpgradeHandler.destroy();
                } else {
                    ClassLoader oldCL = upgradeToken.getContextBind().bind(false, null);
                    try {
                        httpUpgradeHandler.destroy();
                    } finally {
                        try {
                            instanceManager.destroyInstance(httpUpgradeHandler);
                        } catch (Throwable e) {
                            ExceptionUtils.handleThrowable(e);
                            getLog().error(sm.getString("abstractConnectionHandler.error"), e);
                        }
                        upgradeToken.getContextBind().unbind(false, oldCL);
                    }
                }
            }
            release(processor);
        }
        return state;
    } catch(java.net.SocketException e) {
        // SocketExceptions are normal
        getLog().debug(sm.getString(
                "abstractConnectionHandler.socketexception.debug"), e);
    } catch (java.io.IOException e) {
        // IOExceptions are normal
        getLog().debug(sm.getString(
                "abstractConnectionHandler.ioexception.debug"), e);
    } catch (ProtocolException e) {
        // Protocol exceptions normally mean the client sent invalid or
        // incomplete data.
        getLog().debug(sm.getString(
                "abstractConnectionHandler.protocolexception.debug"), e);
    }
    // Future developers: if you discover any other
    // rare-but-nonfatal exceptions, catch them here, and log as
    // above.
    catch (OutOfMemoryError oome) {
        // Try and handle this here to give Tomcat a chance to close the
        // connection and prevent clients waiting until they time out.
        // Worst case, it isn't recoverable and the attempt at logging
        // will trigger another OOME.
        getLog().error(sm.getString("abstractConnectionHandler.oome"), oome);
    } catch (Throwable e) {
        ExceptionUtils.handleThrowable(e);
        // any other exception or error is odd. Here we log it
        // with "ERROR" level, so it will show up even on
        // less-than-verbose logs.
        getLog().error(sm.getString("abstractConnectionHandler.error"), e);
    } finally {
        ContainerThreadMarker.clear();
    }

    // Make sure socket/processor is removed from the list of current
    // connections
    wrapper.setCurrentProcessor(null);
    release(processor);
    return SocketState.CLOSED;
}

The core of this method is calling the processor's process method on the socket. This processor is the other ProtocolHandler sub-component mentioned earlier (ProtocolHandler = Endpoint + Processor), and it is created by the createProcessor call inside this method, as follows:

# org.apache.coyote.AbstractProtocol.ConnectionHandler#process
processor = getProtocol().createProcessor();

# org.apache.coyote.http11.AbstractHttp11Protocol#createProcessor
protected Processor createProcessor() {
    Http11Processor processor = new Http11Processor(this, adapter);
    return processor;
}

# org.apache.coyote.http11.Http11Processor#Http11Processor
public Http11Processor(AbstractHttp11Protocol<?> protocol, Adapter adapter) {
    super(adapter);
    this.protocol = protocol;

    httpParser = new HttpParser(protocol.getRelaxedPathChars(),
            protocol.getRelaxedQueryChars());

    inputBuffer = new Http11InputBuffer(request, protocol.getMaxHttpHeaderSize(),
            protocol.getRejectIllegalHeader(), httpParser);
    request.setInputBuffer(inputBuffer);

    outputBuffer = new Http11OutputBuffer(response, protocol.getMaxHttpHeaderSize());
    response.setOutputBuffer(outputBuffer);

    // Create and add the identity filters.
    inputBuffer.addFilter(new IdentityInputFilter(protocol.getMaxSwallowSize()));
    outputBuffer.addFilter(new IdentityOutputFilter());

    // Create and add the chunked filters.
    inputBuffer.addFilter(new ChunkedInputFilter(protocol.getMaxTrailerSize(),
            protocol.getAllowedTrailerHeadersInternal(), protocol.getMaxExtensionSize(),
            protocol.getMaxSwallowSize()));
    outputBuffer.addFilter(new ChunkedOutputFilter());

    // Create and add the void filters.
    inputBuffer.addFilter(new VoidInputFilter());
    outputBuffer.addFilter(new VoidOutputFilter());

    // Create and add buffered input filter
    inputBuffer.addFilter(new BufferedInputFilter());

    // Create and add the gzip filters.
    //inputBuffer.addFilter(new GzipInputFilter());
    outputBuffer.addFilter(new GzipOutputFilter());

    pluggableFilterIndex = inputBuffer.getFilters().length;
}

So the ProtocolHandler's Processor component is of type Http11Processor. Http11Processor inherits the process method from its parent class AbstractProcessorLight, and the core of that inherited method is the call to service(socketWrapper). service is declared abstract in AbstractProcessorLight, and the concrete implementation is in the subclass Http11Processor, as follows:

public SocketState service(SocketWrapperBase<?> socketWrapper)
    throws IOException {
    RequestInfo rp = request.getRequestProcessor();
    rp.setStage(org.apache.coyote.Constants.STAGE_PARSE);

    // Setting up the I/O
    setSocketWrapper(socketWrapper);

    // Flags
    keepAlive = true;
    openSocket = false;
    readComplete = true;
    boolean keptAlive = false;
    SendfileState sendfileState = SendfileState.DONE;

    while (!getErrorState().isError() && keepAlive && !isAsync() && upgradeToken == null &&
            sendfileState == SendfileState.DONE && !protocol.isPaused()) {

        // Parsing the request header
        try {
            if (!inputBuffer.parseRequestLine(keptAlive, protocol.getConnectionTimeout(),
                    protocol.getKeepAliveTimeout())) {
                if (inputBuffer.getParsingRequestLinePhase() == -1) {
                    return SocketState.UPGRADING;
                } else if (handleIncompleteRequestLineRead()) {
                    break;
                }
            }

            // Process the Protocol component of the request line
            // Need to know if this is an HTTP 0.9 request before trying to
            // parse headers.
            prepareRequestProtocol();

            if (protocol.isPaused()) {
                // 503 - Service unavailable
                response.setStatus(503);
                setErrorState(ErrorState.CLOSE_CLEAN, null);
            } else {
                keptAlive = true;
                // Set this every time in case limit has been changed via JMX
                request.getMimeHeaders().setLimit(protocol.getMaxHeaderCount());
                // Don't parse headers for HTTP/0.9
                if (!http09 && !inputBuffer.parseHeaders()) {
                    // We've read part of the request, don't recycle it
                    // instead associate it with the socket
                    openSocket = true;
                    readComplete = false;
                    break;
                }
                if (!protocol.getDisableUploadTimeout()) {
                    socketWrapper.setReadTimeout(protocol.getConnectionUploadTimeout());
                }
            }
        } catch (IOException e) {
            if (log.isDebugEnabled()) {
                log.debug(sm.getString("http11processor.header.parse"), e);
            }
            setErrorState(ErrorState.CLOSE_CONNECTION_NOW, e);
            break;
        } catch (Throwable t) {
            ExceptionUtils.handleThrowable(t);
            UserDataHelper.Mode logMode = userDataHelper.getNextMode();
            if (logMode != null) {
                String message = sm.getString("http11processor.header.parse");
                switch (logMode) {
                    case INFO_THEN_DEBUG:
                        message += sm.getString("http11processor.fallToDebug");
                        //$FALL-THROUGH$
                    case INFO:
                        log.info(message, t);
                        break;
                    case DEBUG:
                        log.debug(message, t);
                }
            }
            // 400 - Bad Request
            response.setStatus(400);
            setErrorState(ErrorState.CLOSE_CLEAN, t);
        }

        // Has an upgrade been requested?
        if (isConnectionToken(request.getMimeHeaders(), "upgrade")) {
            // Check the protocol
            String requestedProtocol = request.getHeader("Upgrade");

            UpgradeProtocol upgradeProtocol = protocol.getUpgradeProtocol(requestedProtocol);
            if (upgradeProtocol != null) {
                if (upgradeProtocol.accept(request)) {
                    response.setStatus(HttpServletResponse.SC_SWITCHING_PROTOCOLS);
                    response.setHeader("Connection", "Upgrade");
                    response.setHeader("Upgrade", requestedProtocol);
                    action(ActionCode.CLOSE,  null);
                    getAdapter().log(request, response, 0);

                    InternalHttpUpgradeHandler upgradeHandler =
                            upgradeProtocol.getInternalUpgradeHandler(
                                    socketWrapper, getAdapter(), cloneRequest(request));
                    UpgradeToken upgradeToken = new UpgradeToken(upgradeHandler, null, null);
                    action(ActionCode.UPGRADE, upgradeToken);
                    return SocketState.UPGRADING;
                }
            }
        }

        if (getErrorState().isIoAllowed()) {
            // Setting up filters, and parse some request headers
            rp.setStage(org.apache.coyote.Constants.STAGE_PREPARE);
            try {
                prepareRequest();
            } catch (Throwable t) {
                ExceptionUtils.handleThrowable(t);
                if (log.isDebugEnabled()) {
                    log.debug(sm.getString("http11processor.request.prepare"), t);
                }
                // 500 - Internal Server Error
                response.setStatus(500);
                setErrorState(ErrorState.CLOSE_CLEAN, t);
            }
        }

        int maxKeepAliveRequests = protocol.getMaxKeepAliveRequests();
        if (maxKeepAliveRequests == 1) {
            keepAlive = false;
        } else if (maxKeepAliveRequests > 0 &&
                socketWrapper.decrementKeepAlive() <= 0) {
            keepAlive = false;
        }

        // Process the request in the adapter
        if (getErrorState().isIoAllowed()) {
            try {
                rp.setStage(org.apache.coyote.Constants.STAGE_SERVICE);
                getAdapter().service(request, response);
                // Handle when the response was committed before a serious
                // error occurred.  Throwing a ServletException should both
                // set the status to 500 and set the errorException.
                // If we fail here, then the response is likely already
                // committed, so we can't try and set headers.
                if(keepAlive && !getErrorState().isError() && !isAsync() &&
                        statusDropsConnection(response.getStatus())) {
                    setErrorState(ErrorState.CLOSE_CLEAN, null);
                }
            } catch (InterruptedIOException e) {
                setErrorState(ErrorState.CLOSE_CONNECTION_NOW, e);
            } catch (HeadersTooLargeException e) {
                log.error(sm.getString("http11processor.request.process"), e);
                // The response should not have been committed but check it
                // anyway to be safe
                if (response.isCommitted()) {
                    setErrorState(ErrorState.CLOSE_NOW, e);
                } else {
                    response.reset();
                    response.setStatus(500);
                    setErrorState(ErrorState.CLOSE_CLEAN, e);
                    response.setHeader("Connection", "close"); // TODO: Remove
                }
            } catch (Throwable t) {
                ExceptionUtils.handleThrowable(t);
                log.error(sm.getString("http11processor.request.process"), t);
                // 500 - Internal Server Error
                response.setStatus(500);
                setErrorState(ErrorState.CLOSE_CLEAN, t);
                getAdapter().log(request, response, 0);
            }
        }

        // Finish the handling of the request
        rp.setStage(org.apache.coyote.Constants.STAGE_ENDINPUT);
        if (!isAsync()) {
            // If this is an async request then the request ends when it has
            // been completed. The AsyncContext is responsible for calling
            // endRequest() in that case.
            endRequest();
        }
        rp.setStage(org.apache.coyote.Constants.STAGE_ENDOUTPUT);

        // If there was an error, make sure the request is counted as
        // and error, and update the statistics counter
        if (getErrorState().isError()) {
            response.setStatus(500);
        }

        if (!isAsync() || getErrorState().isError()) {
            request.updateCounters();
            if (getErrorState().isIoAllowed()) {
                inputBuffer.nextRequest();
                outputBuffer.nextRequest();
            }
        }

        if (!protocol.getDisableUploadTimeout()) {
            int connectionTimeout = protocol.getConnectionTimeout();
            if(connectionTimeout > 0) {
                socketWrapper.setReadTimeout(connectionTimeout);
            } else {
                socketWrapper.setReadTimeout(0);
            }
        }

        rp.setStage(org.apache.coyote.Constants.STAGE_KEEPALIVE);

        sendfileState = processSendfile(socketWrapper);
    }

    rp.setStage(org.apache.coyote.Constants.STAGE_ENDED);

    if (getErrorState().isError() || (protocol.isPaused() && !isAsync())) {
        return SocketState.CLOSED;
    } else if (isAsync()) {
        return SocketState.LONG;
    } else if (isUpgrade()) {
        return SocketState.UPGRADING;
    } else {
        if (sendfileState == SendfileState.PENDING) {
            return SocketState.SENDFILE;
        } else {
            if (openSocket) {
                if (readComplete) {
                    return SocketState.OPEN;
                } else {
                    return SocketState.LONG;
                }
            } else {
                return SocketState.CLOSED;
            }
        }
    }
}

This method does two main things:

  • Read the request's input stream from the socket, parse it, and populate the request (org.apache.coyote.Request)
  • Call the adapter's service method to hand the request and response to the CoyoteAdapter, which adapts them and then passes the request to the container

Let's look at request parsing first. It happens mainly in the calls to parseRequestLine(), inputBuffer.parseHeaders(), prepareRequest() and postParseRequest(): parseRequestLine parses the HTTP request line, parseHeaders parses the HTTP request headers, prepareRequest sets up the handling of the request body, and postParseRequest() initializes the Container and Pipeline information associated with the request. There are many details here; see Tomcat之Http11Processor源码分析 for more.
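
As a toy illustration of what "parsing the request line" produces (not Tomcat's Http11InputBuffer, which parses raw bytes incrementally and far more carefully), consider:

public class RequestLineDemo {

    public static void main(String[] args) {
        String requestLine = "GET /order/buy?item=1 HTTP/1.1";

        String[] parts = requestLine.split(" ");
        String method = parts[0];                                  // ends up as the coyote Request method
        String uriAndQuery = parts[1];
        String uri = uriAndQuery.split("\\?")[0];                  // ends up as the request URI
        String query = uriAndQuery.contains("?")
                ? uriAndQuery.split("\\?", 2)[1] : "";             // ends up as the query string
        String protocol = parts[2];                                // ends up as the protocol

        System.out.println(method + " " + uri + " ?" + query + " " + protocol);
    }
}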

The second thing is the call to getAdapter().service(request, response), which hands the request and response to the Adapter. As discussed earlier, the Adapter is an org.apache.catalina.connector.CoyoteAdapter, and its service method looks like this:

public void service(org.apache.coyote.Request req, org.apache.coyote.Response res)
        throws Exception {

    Request request = (Request) req.getNote(ADAPTER_NOTES);
    Response response = (Response) res.getNote(ADAPTER_NOTES);

    if (request == null) {
        // Create objects
        request = connector.createRequest();
        request.setCoyoteRequest(req);
        response = connector.createResponse();
        response.setCoyoteResponse(res);

        // Link objects
        request.setResponse(response);
        response.setRequest(request);

        // Set as notes
        req.setNote(ADAPTER_NOTES, request);
        res.setNote(ADAPTER_NOTES, response);

        // Set query string encoding
        req.getParameters().setQueryStringCharset(connector.getURICharset());
    }

    if (connector.getXpoweredBy()) {
        response.addHeader("X-Powered-By", POWERED_BY);
    }

    boolean async = false;
    boolean postParseSuccess = false;

    req.getRequestProcessor().setWorkerThreadName(THREAD_NAME.get());

    try {
        // Parse and set Catalina and configuration specific
        // request parameters
        postParseSuccess = postParseRequest(req, request, res, response);
        if (postParseSuccess) {
            //check valves if we support async
            request.setAsyncSupported(
                    connector.getService().getContainer().getPipeline().isAsyncSupported());
            // Calling the container
            connector.getService().getContainer().getPipeline().getFirst().invoke(
                    request, response);
        }
        if (request.isAsync()) {
            async = true;
            ReadListener readListener = req.getReadListener();
            if (readListener != null && request.isFinished()) {
                // Possible the all data may have been read during service()
                // method so this needs to be checked here
                ClassLoader oldCL = null;
                try {
                    oldCL = request.getContext().bind(false, null);
                    if (req.sendAllDataReadEvent()) {
                        req.getReadListener().onAllDataRead();
                    }
                } finally {
                    request.getContext().unbind(false, oldCL);
                }
            }

            Throwable throwable =
                    (Throwable) request.getAttribute(RequestDispatcher.ERROR_EXCEPTION);

            // If an async request was started, is not going to end once
            // this container thread finishes and an error occurred, trigger
            // the async error process
            if (!request.isAsyncCompleting() && throwable != null) {
                request.getAsyncContextInternal().setErrorState(throwable, true);
            }
        } else {
            request.finishRequest();
            response.finishResponse();
        }

    } catch (IOException e) {
        // Ignore
    } finally {
        AtomicBoolean error = new AtomicBoolean(false);
        res.action(ActionCode.IS_ERROR, error);

        if (request.isAsyncCompleting() && error.get()) {
            // Connection will be forcibly closed which will prevent
            // completion happening at the usual point. Need to trigger
            // call to onComplete() here.
            res.action(ActionCode.ASYNC_POST_PROCESS,  null);
            async = false;
        }

        // Access log
        if (!async && postParseSuccess) {
            // Log only if processing was invoked.
            // If postParseRequest() failed, it has already logged it.
            Context context = request.getContext();
            // If the context is null, it is likely that the endpoint was
            // shutdown, this connection closed and the request recycled in
            // a different thread. That thread will have updated the access
            // log so it is OK not to update the access log here in that
            // case.
            if (context != null) {
                context.logAccess(request, response,
                        System.currentTimeMillis() - req.getStartTime(), false);
            }
        }

        req.getRequestProcessor().setWorkerThreadName(null);

        // Recycle the wrapper request and response
        if (!async) {
            request.recycle();
            response.recycle();
        }
    }
}

First, the org.apache.catalina.connector.Request and org.apache.catalina.connector.Response objects are built; they are looked up from (and, on first use, created for and attached to) the org.apache.coyote.Request and org.apache.coyote.Response objects produced by the Http11Processor above.

Then the invoke method of the top-level Engine container's pipeline is called, and through the pipeline-valve mechanism the request is handed down to the containers at each level for processing.

// Calling the container
connector.getService().getContainer().getPipeline().getFirst().invoke(
        request, response);

At this point we understand how Tomcat accepts a request and forwards it to the top-level Engine container. Due to length constraints, the next article will continue with how the Servlet takes effect inside the container.