Volley: Usage and Analysis


Volley is a network request framework that Google released in 2013.

Its advantages:

  • Automatic scheduling of network requests
  • Multiple concurrent network requests
  • Caching of HTTP responses
  • Support for request prioritization
  • A cancellation API: a single request can be cancelled, or a whole scope of requests can be cancelled at once via tags (a sketch follows after these lists)

Its drawbacks:

  • It is built on HttpClient / HttpURLConnection. HttpClient is no longer supported as of Android 6.0, so to keep using it you have to add org.apache.http.legacy.jar.
  • It is a poor fit for large streaming transfers such as uploads and downloads, because Volley buffers the entire server response in memory while parsing it.
  • Image loading performance is only average.
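To make the priority and cancellation points above concrete, here is a minimal sketch (the tag name and the HIGH priority are illustrative choices, and queue stands for an already created RequestQueue, which is shown in the next section):

StringRequest request = new StringRequest(Request.Method.GET, "https://example.com/feed",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                Log.d("TAG", "response length = " + response.length());
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("TAG", "request failed", error);
            }
        }) {
    @Override
    public Request.Priority getPriority() {
        // Raise this request above the default NORMAL priority.
        return Request.Priority.HIGH;
    }
};
// Tag the request so it can be cancelled together with everything else sharing the tag,
// for example all requests started by one Activity.
request.setTag("home-screen");
queue.add(request);

// Later, e.g. in onStop(), cancel every request carrying that tag:
queue.cancelAll("home-screen");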

Using Volley: add the dependency in build.gradle

implementation 'com.android.volley:volley:1.1.1'

In practice the networking code is usually wrapped in a shared class such as a NetworkManager; we will not build a full wrapper here and will go straight to basic usage. Step one is to create a RequestQueue (with a wrapper this only needs to happen once):

requestQueue = Volley.newRequestQueue(getApplicationContext());
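A minimal sketch of such a wrapper (the class name NetworkManager and its exact shape are just an illustration, not part of Volley):

public class NetworkManager {
    private static volatile NetworkManager sInstance;
    private final RequestQueue mRequestQueue;

    private NetworkManager(Context context) {
        // Use the application context so the queue outlives any single Activity.
        mRequestQueue = Volley.newRequestQueue(context.getApplicationContext());
    }

    public static NetworkManager getInstance(Context context) {
        if (sInstance == null) {
            synchronized (NetworkManager.class) {
                if (sInstance == null) {
                    sInstance = new NetworkManager(context);
                }
            }
        }
        return sInstance;
    }

    public <T> void add(Request<T> request) {
        mRequestQueue.add(request);
    }
}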

Step two is to create a request. Volley ships with several ready-made request types such as StringRequest and JsonObjectRequest. For anything custom you extend the Request base class, or simply extend one of the existing requests, for example to add parameters to the request headers. The example below covers a common case: the response is JSON and every request must carry a token.

public class CustomJsonRequest extends JsonObjectRequest {
    private Map<String, String> headers = new ArrayMap<>();

    public CustomJsonRequest(String url, @Nullable JSONObject jsonRequest, Response.Listener<JSONObject> listener, @Nullable Response.ErrorListener errorListener) {
        super(Method.POST, url, jsonRequest, listener, errorListener);
        // Attach the auth headers to every request; in a real app these values would come from the session.
        headers.put("auth-uid", "d9b380e01a6d47668714958aad3cd808");
        headers.put("auth-token", "948bfe10a0e64202848ad2a6d8d54f1a");
    }


    @Override
    public Map<String, String> getHeaders() throws AuthFailureError {
        return headers;
    }
}

All we need to do is extend JsonObjectRequest and override its getHeaders method; why this is enough to take effect will become clear shortly. Once defined, it is used like this:

public void postRequest(View view) throws JSONException {
    String account = "zhangsan";
    String pwd = "123456";

    final JSONObject jsonObject = new JSONObject();
    jsonObject.put("userName", account);
    jsonObject.put("password", pwd);

    CustomJsonRequest customJsonRequest = new CustomJsonRequest("http://192.168.3.174:8080/login", jsonObject, new Response.Listener<JSONObject>() {
        @Override
        public void onResponse(JSONObject response) {
            Result result = gson.fromJson(response.toString(), Result.class);

            Log.e("TAG", "resultCode = " + result.getCode() + "  resultData = " + result.getData().toString());

            LoginBean loginBean = gson.fromJson(result.getData().toString(), LoginBean.class);

            Log.e("TAG", "loginBean.user = " + loginBean.getUser().getUserName());

        }
    }, new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {

        }
    });

    requestQueue.add(customJsonRequest);
}

Here we build a request from the newly defined class and add it to the request queue. In onResponse the raw response is first converted into a common wrapper type, defined as:

public class Result<T> {
    private String code;
    private String result;
    private String message;
    private T data;

...
}
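As an aside, the two Gson calls in onResponse can be collapsed into one by parsing the generic type directly; a sketch, reusing the gson instance and the LoginBean from the example above:

// com.google.gson.reflect.TypeToken preserves the generic type information,
// so data is deserialized straight into LoginBean without a second fromJson call.
Type type = new TypeToken<Result<LoginBean>>() {}.getType(); // java.lang.reflect.Type
Result<LoginBean> result = gson.fromJson(response.toString(), type);
LoginBean loginBean = result.getData();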

In onResponse above, the data field is then converted into the concrete bean according to the business logic, again with Gson. The example shows a POST request; GET requests work the same way, so they are not repeated here. Now let's walk through how Volley works internally, starting with the creation of the queue:

public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, (BaseHttpStack) null);
}

which then calls:

public static RequestQueue newRequestQueue(Context context, BaseHttpStack stack) {
    BasicNetwork network;
    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            network = new BasicNetwork(new HurlStack());
        } else {
            ...

            network =
                    new BasicNetwork(
                            new HttpClientStack(AndroidHttpClient.newInstance(userAgent)));
        }
    } else {
        network = new BasicNetwork(stack);
    }

    return newRequestQueue(context, network);
}

This just picks the BaseHttpStack based on the SDK version: on API 9 and above it uses HurlStack, which is built on HttpURLConnection; otherwise it falls back to HttpClientStack, built on HttpClient. Only then is the queue actually created:

private static RequestQueue newRequestQueue(Context context, Network network) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();
    return queue;
}

Here a cache directory is created first, a RequestQueue is built from a DiskBasedCache on that directory plus the Network chosen above, and the queue is started. All of the code so far lives in Volley.java.
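For reference, the same wiring can also be done by hand when the defaults do not fit, for example to get a larger disk cache; a sketch (the 10 MB figure is arbitrary):

File cacheDir = new File(context.getCacheDir(), "volley");
// DiskBasedCache has an overload taking an explicit maximum size in bytes.
Cache cache = new DiskBasedCache(cacheDir, 10 * 1024 * 1024);
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(cache, network);
queue.start();

Now let's look at how the RequestQueue itself is constructed: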

/**
 * Creates the worker pool. Processing will not begin until {@link #start()} is called.
 *
 * @param cache A Cache to use for persisting responses to disk
 * @param network A Network interface for performing HTTP requests
 */
public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

As the Javadoc says, this creates the worker pool, and processing does not begin until start() is called; the constructor itself is pure preparation. The last argument, DEFAULT_NETWORK_THREAD_POOL_SIZE, is the number of network dispatcher threads and defaults to 4; more on that shortly. Continuing:

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(
            cache,
            network,
            threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

This overload forwards the arguments from the previous step and additionally creates an ExecutorDelivery, whose constructor takes a Handler; here it is a Handler bound to the main looper. Its definition is:

public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster =
            new Executor() {
                @Override
                public void execute(Runnable command) {
                    handler.post(command);
                }
            };
}

The handler.post call shows that this executor posts work onto the thread behind the supplied Handler, here the main thread, and the name mResponsePoster already suggests that it is what delivers network results back to the main thread; whether that is really the case we will see shortly.
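Since the delivery is nothing more than an Executor wrapping a Handler, responses do not have to be delivered on the main thread; a sketch of delivering them on a dedicated HandlerThread instead (only sensible if your listeners do not touch the UI directly):

HandlerThread deliveryThread = new HandlerThread("volley-delivery");
deliveryThread.start();
ResponseDelivery delivery = new ExecutorDelivery(new Handler(deliveryThread.getLooper()));
// Uses the four-argument RequestQueue constructor shown below; 4 is the default pool size.
RequestQueue queue = new RequestQueue(cache, network, 4, delivery);
queue.start();

The four-argument constructor that all of the overloads eventually funnel into is: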

public RequestQueue(
        Cache cache, Network network, int threadPoolSize, ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}

Besides the plain field assignments, this also allocates the NetworkDispatcher array; NetworkDispatcher is itself a thread, as we will see in a moment. Going back up, the last call in newRequestQueue was queue.start():

/** Starts the dispatchers in this queue. */
public void start() {
    stop(); // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher =
                new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

/** Stops the cache and network dispatchers. */
public void stop() {
    if (mCacheDispatcher != null) {
        mCacheDispatcher.quit();
    }
    for (final NetworkDispatcher mDispatcher : mDispatchers) {
        if (mDispatcher != null) {
            mDispatcher.quit();
        }
    }
}

So start() first quits any running dispatcher threads (cache and network), then creates and starts a new CacheDispatcher, and finally creates and starts the network dispatcher threads (4 by default). That completes queue creation. Now let's see what these two dispatchers actually do, starting with CacheDispatcher:

public class CacheDispatcher extends Thread {

    private static final boolean DEBUG = VolleyLog.DEBUG;

    /** The queue of requests coming in for triage. */
    private final BlockingQueue<Request<?>> mCacheQueue;

    /** The queue of requests going out to the network. */
    private final BlockingQueue<Request<?>> mNetworkQueue;

    /** The cache to read from. */
    private final Cache mCache;

    /** For posting responses. */
    private final ResponseDelivery mDelivery;

    /** Used for telling us to die. */
    private volatile boolean mQuit = false;

    /** Manage list of waiting requests and de-duplicate requests with same cache key. */
    private final WaitingRequestManager mWaitingRequestManager;

    /**
     * Creates a new cache triage dispatcher thread. You must call {@link #start()} in order to
     * begin processing.
     *
     * @param cacheQueue Queue of incoming requests for triage
     * @param networkQueue Queue to post requests that require network to
     * @param cache Cache interface to use for resolution
     * @param delivery Delivery interface to use for posting responses
     */
    public CacheDispatcher(
            BlockingQueue<Request<?>> cacheQueue,
            BlockingQueue<Request<?>> networkQueue,
            Cache cache,
            ResponseDelivery delivery) {
        mCacheQueue = cacheQueue;
        mNetworkQueue = networkQueue;
        mCache = cache;
        mDelivery = delivery;
        mWaitingRequestManager = new WaitingRequestManager(this);
    }...
}

CacheDispatcher is simply a Thread. Its constructor receives the cache queue and the network queue (both blocking queues), the cache itself, and the delivery used to post results. Starting it runs its run() method:

@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            processRequest();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                Thread.currentThread().interrupt();
                return;
            }
            VolleyLog.e(
                    "Ignoring spurious interrupt of CacheDispatcher thread; "
                            + "use quit() to terminate it");
        }
    }
}

The thread lowers its priority to background, initializes the cache, and then loops forever calling processRequest() until it is told to quit.

private void processRequest() throws InterruptedException {
    // Get a request from the cache triage queue, blocking until
    // at least one is available.
    // Take a request from the cache queue; if the queue is empty this call blocks
    // until a request is added (that is how a BlockingQueue behaves), so right after
    // startup the thread simply parks here.
    final Request<?> request = mCacheQueue.take();
    processRequest(request);
}

@VisibleForTesting
void processRequest(final Request<?> request) throws InterruptedException {
    request.addMarker("cache-queue-take");

    // If the request has been marked as cancelled, do not dispatch it.
    if (request.isCanceled()) {
        request.finish("cache-discard-canceled");
        return;
    }

    // Not cancelled: try to find it in the cache.
    Cache.Entry entry = mCache.get(request.getCacheKey());
    // Cache miss: hand it over to the network dispatcher.
    if (entry == null) {
        request.addMarker("cache-miss");
        // Cache miss; send off to the network dispatcher.
        if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
            mNetworkQueue.put(request);
        }
        return;
    }

    // Cache hit but expired: fetch from the network again, attaching the cached entry to the request.
    if (entry.isExpired()) {
        request.addMarker("cache-hit-expired");
        request.setCacheEntry(entry);
        if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
            mNetworkQueue.put(request);
        }
        return;
    }

    
    // Cache hit and still valid: build the response from the cached data.
    request.addMarker("cache-hit");
    Response<?> response =
            request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
    request.addMarker("cache-hit-parsed");

    // If no refresh is needed, deliver the cached result directly; otherwise also send the request to the network dispatcher for a refresh.
    if (!entry.refreshNeeded()) {
        // Completely unexpired cache hit. Just deliver the response.
        mDelivery.postResponse(request, response);
    } else {
        // Soft-expired cache hit. We can deliver the cached response,
        // but we need to also send the request to the network for
        // refreshing.
        request.addMarker("cache-hit-refresh-needed");
        request.setCacheEntry(entry);
        // Mark the response as intermediate.
        response.intermediate = true;

        if (!mWaitingRequestManager.maybeAddToWaitingRequests(request)) {
            // Post the intermediate response back to the user and have
            // the delivery then forward the request along to the network.
            mDelivery.postResponse(
                    request,
                    response,
                    new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Restore the interrupted status
                                Thread.currentThread().interrupt();
                            }
                        }
                    });
        } else {
            // request has been added to list of waiting requests
            // to receive the network response from the first request once it returns.
            mDelivery.postResponse(request, response);
        }
    }
}

The overall flow is spelled out in the comments above. Now let's see how results are actually delivered, i.e. postResponse. As noted earlier the delivery is an ExecutorDelivery, so we go straight to that class and its postResponse methods:

@Override
public void postResponse(Request<?> request, Response<?> response) {
    postResponse(request, response, null);
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}

It wraps the request, the response and an optional runnable into a ResponseDeliveryRunnable and hands it to the executor, which posts it to the main thread and ultimately invokes its run() method:

public void run() {
    // NOTE: If cancel() is called off the thread that we're currently running in (by
    // default, the main thread), we cannot guarantee that deliverResponse()/deliverError()
    // won't be called, since it may be canceled after we check isCanceled() but before we
    // deliver the response. Apps concerned about this guarantee must either call cancel()
    // from the same thread or implement their own guarantee about not invoking their
    // listener after cancel() has been called.

    // If this request has canceled, finish it and don't deliver.
    // If the request has been cancelled, just finish it and do not deliver anything.
    if (mRequest.isCanceled()) {
        mRequest.finish("canceled-at-delivery");
        return;
    }

    // Deliver a normal response or error, depending.
    // Not cancelled: deliver the result on success, or the error on failure.
    if (mResponse.isSuccess()) {
        mRequest.deliverResponse(mResponse.result);
    } else {
        mRequest.deliverError(mResponse.error);
    }

    // If this is an intermediate response, add a marker, otherwise we're done
    // and the request can be finished.
    // Finish the request unless this is only an intermediate (soft-expired cache) response.
    if (mResponse.intermediate) {
        mRequest.addMarker("intermediate-response");
    } else {
        mRequest.finish("done");
    }

    // If we have been provided a post-delivery runnable, run it.
    if (mRunnable != null) {
        mRunnable.run();
    }
}

That wraps up the cache dispatcher. In short: when the RequestQueue is created, the cache queue is set up and the CacheDispatcher immediately starts trying to take requests from it on a background thread. Initially the queue is empty, so the thread blocks; as soon as a request is added it is taken and processed. The dispatcher first looks in the cache: on a miss, or when the entry has expired, the request is handed over to the network queue, where one of the already running network dispatcher threads (described next) executes it; on a valid hit, the cached result is delivered back to the main thread via the delivery. Now let's look at the network dispatcher thread:

public class NetworkDispatcher extends Thread {

    /** The queue of requests to service. */
    private final BlockingQueue<Request<?>> mQueue;
    /** The network interface for processing requests. */
    private final Network mNetwork;
    /** The cache to write to. */
    private final Cache mCache;
    /** For posting responses and errors. */
    private final ResponseDelivery mDelivery;
    /** Used for telling us to die. */
    private volatile boolean mQuit = false;

    /**
     * Creates a new network dispatcher thread. You must call {@link #start()} in order to begin
     * processing.
     *
     * @param queue Queue of incoming requests for triage
     * @param network Network interface to use for performing requests
     * @param cache Cache interface to use for writing responses to cache
     * @param delivery Delivery interface to use for posting responses
     */
    public NetworkDispatcher(
            BlockingQueue<Request<?>> queue,
            Network network,
            Cache cache,
            ResponseDelivery delivery) {
        mQueue = queue;
        mNetwork = network;
        mCache = cache;
        mDelivery = delivery;
    }
...}

NetworkDispatcher is also a thread. Its constructor receives the network request queue, the Network (the BasicNetwork created when the queue was built), the cache, and the delivery. Since start() is called right after construction, let's look at its run() method:

@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    while (true) {
        try {
            processRequest();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                Thread.currentThread().interrupt();
                return;
            }
            VolleyLog.e(
                    "Ignoring spurious interrupt of NetworkDispatcher thread; "
                            + "use quit() to terminate it");
        }
    }
}

This mirrors CacheDispatcher: the thread blocks on its queue and keeps processing requests. Continuing:

private void processRequest() throws InterruptedException {
    // Take a request from the network queue; on the first pass there is nothing in it,
    // so the thread parks here until a request is added to the queue.
    Request<?> request = mQueue.take();
    processRequest(request);
}

@VisibleForTesting
void processRequest(Request<?> request) {
    long startTimeMs = SystemClock.elapsedRealtime();
    try {
        request.addMarker("network-queue-take");

        // If the request has already been cancelled, skip the network call entirely;
        // just finish it and notify the listener that the response will not be used.
        if (request.isCanceled()) {
            request.finish("network-discard-cancelled");
            request.notifyListenerResponseNotUsable();
            return;
        }

        addTrafficStatsTag(request);

        // This is where the actual network request is performed.
        NetworkResponse networkResponse = mNetwork.performRequest(request);
        request.addMarker("network-http-complete");

        // If the server returned 304 AND we delivered a response already,
        // we're done -- don't deliver a second identical response.
        if (networkResponse.notModified && request.hasHadResponseDelivered()) {
            request.finish("not-modified");
            request.notifyListenerResponseNotUsable();
            return;
        }

        // Parse the response on this worker thread.
        Response<?> response = request.parseNetworkResponse(networkResponse);
        request.addMarker("network-parse-complete");

        // Write to cache if applicable.
        // TODO: Only update cache metadata instead of entire record for 304s.
        if (request.shouldCache() && response.cacheEntry != null) {
            // Write the response entry to the cache.
            mCache.put(request.getCacheKey(), response.cacheEntry);
            request.addMarker("network-cache-written");
        }

        // Post the response back.
        request.markDelivered();
        // Deliver the result back to the main thread.
        mDelivery.postResponse(request, response);
        request.notifyListenerResponseReceived(response);
    } ..
}

At this point the basic request / cache / delivery flow is complete. As we saw above, mNetwork.performRequest(request) is where the request is really issued, so let's see how that is implemented. The method lives in BasicNetwork:

@Override
public NetworkResponse performRequest(Request<?> request) throws VolleyError {
    long requestStart = SystemClock.elapsedRealtime();
    while (true) {
        HttpResponse httpResponse = null;
        byte[] responseContents = null;
        List<Header> responseHeaders = Collections.emptyList();
        try {
            // Gather headers.
            Map<String, String> additionalRequestHeaders =
                    getCacheHeaders(request.getCacheEntry());
            // This is where the real HTTP call happens, delegated to the BaseHttpStack.
            httpResponse = mBaseHttpStack.executeRequest(request, additionalRequestHeaders);
            int statusCode = httpResponse.getStatusCode();

            responseHeaders = httpResponse.getHeaders();
            ...
}

So performRequest delegates to executeRequest on the BaseHttpStack that was chosen when the RequestQueue was built, e.g. HurlStack on SDK >= 9. Let's look at that method in HurlStack:

@Override
public HttpResponse executeRequest(Request<?> request, Map<String, String> additionalHeaders)
        throws IOException, AuthFailureError {
    String url = request.getUrl();
    HashMap<String, String> map = new HashMap<>();
    map.putAll(additionalHeaders);
    // Request.getHeaders() takes precedence over the given additional (cache) headers).
    map.putAll(request.getHeaders());
    if (mUrlRewriter != null) {
        String rewritten = mUrlRewriter.rewriteUrl(url);
        if (rewritten == null) {
            throw new IOException("URL blocked by rewriter: " + url);
        }
        url = rewritten;
    }
    URL parsedUrl = new URL(url);
    HttpURLConnection connection = openConnection(parsedUrl, request);
    boolean keepConnectionOpen = false;
    try {
        for (String headerName : map.keySet()) {
            connection.setRequestProperty(headerName, map.get(headerName));
        }
        setConnectionParametersForRequest(connection, request);
        // Initialize HttpResponse with data from the HttpURLConnection.
        int responseCode = connection.getResponseCode();
        if (responseCode == -1) {
            // -1 is returned by getResponseCode() if the response code could not be retrieved.
            // Signal to the caller that something was wrong with the connection.
            throw new IOException("Could not retrieve response code from HttpUrlConnection.");
        }

        if (!hasResponseBody(request.getMethod(), responseCode)) {
            return new HttpResponse(responseCode, convertHeaders(connection.getHeaderFields()));
        }

        // Need to keep the connection open until the stream is consumed by the caller. Wrap the
        // stream such that close() will disconnect the connection.
        keepConnectionOpen = true;
        return new HttpResponse(
                responseCode,
                convertHeaders(connection.getHeaderFields()),
                connection.getContentLength(),
                new UrlConnectionInputStream(connection));
    } finally {
        if (!keepConnectionOpen) {
            connection.disconnect();
        }
    }
}

The method first builds a map from additionalHeaders and then from request.getHeaders(); the latter should look familiar, since that is exactly where the CustomJsonRequest at the beginning added its auth headers (and request headers take precedence over the cache headers). It then calls openConnection to open a connection, writes the headers into it with setRequestProperty, calls setConnectionParametersForRequest to set the HTTP method and body, and finally reads the response code and builds the HttpResponse. Let's go back and look at openConnection and setConnectionParametersForRequest:

private HttpURLConnection openConnection(URL url, Request<?> request) throws IOException {
    HttpURLConnection connection = createConnection(url);

    int timeoutMs = request.getTimeoutMs();
    connection.setConnectTimeout(timeoutMs);
    connection.setReadTimeout(timeoutMs);
    connection.setUseCaches(false);
    connection.setDoInput(true);

    // use caller-provided custom SslSocketFactory, if any, for HTTPS
    if ("https".equals(url.getProtocol()) && mSslSocketFactory != null) {
        ((HttpsURLConnection) connection).setSSLSocketFactory(mSslSocketFactory);
    }

    return connection;
}

This also makes clear that Volley supports HTTPS: a caller-supplied SSLSocketFactory can be installed on the HttpsURLConnection, which is what you would use for a custom trust setup such as self-signed certificates. With the default HurlStack the factory is null, so the platform's standard TLS configuration is used. Note also that the connect and read timeouts set here come from request.getTimeoutMs(), which is driven by the request's retry policy.
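Those timeouts, and what happens when they are exceeded, are controlled per request by its RetryPolicy; a sketch of tuning it with Volley's DefaultRetryPolicy (the concrete numbers are arbitrary):

request.setRetryPolicy(new DefaultRetryPolicy(
        10000,                                     // initial timeout in ms, returned by getTimeoutMs()
        2,                                         // retry at most twice on timeouts and similar errors
        DefaultRetryPolicy.DEFAULT_BACKOFF_MULT)); // multiplier applied to the timeout on each retry

Now for the last helper, setConnectionParametersForRequest: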

static void setConnectionParametersForRequest(
        HttpURLConnection connection, Request<?> request) throws IOException, AuthFailureError {
    switch (request.getMethod()) {
        case Method.DEPRECATED_GET_OR_POST:
            // This is the deprecated way that needs to be handled for backwards compatibility.
            // If the request's post body is null, then the assumption is that the request is
            // GET.  Otherwise, it is assumed that the request is a POST.
            byte[] postBody = request.getPostBody();
            if (postBody != null) {
                connection.setRequestMethod("POST");
                addBody(connection, request, postBody);
            }
            break;
        case Method.GET:
            // Not necessary to set the request method because connection defaults to GET but
            // being explicit here.
            connection.setRequestMethod("GET");
            break;
        case Method.DELETE:
            connection.setRequestMethod("DELETE");
            break;
        case Method.POST:
            connection.setRequestMethod("POST");
            addBodyIfExists(connection, request);
            break;
        case Method.PUT:
            connection.setRequestMethod("PUT");
            addBodyIfExists(connection, request);
            break;
        case Method.HEAD:
            connection.setRequestMethod("HEAD");
            break;
        case Method.OPTIONS:
            connection.setRequestMethod("OPTIONS");
            break;
        case Method.TRACE:
            connection.setRequestMethod("TRACE");
            break;
        case Method.PATCH:
            connection.setRequestMethod("PATCH");
            addBodyIfExists(connection, request);
            break;
        default:
            throw new IllegalStateException("Unknown method type.");
    }
}

This simply sets the HTTP method on the connection according to the request's method and, for methods that can carry one, writes the request body:

private static void addBodyIfExists(HttpURLConnection connection, Request<?> request)
        throws IOException, AuthFailureError {
    byte[] body = request.getBody();
    if (body != null) {
        addBody(connection, request, body);
    }
}

private static void addBody(HttpURLConnection connection, Request<?> request, byte[] body)
        throws IOException {
    // Prepare output. There is no need to set Content-Length explicitly,
    // since this is handled by HttpURLConnection using the size of the prepared
    // output stream.
    connection.setDoOutput(true);
    // Set the content-type unless it was already set (by Request#getHeaders).
    if (!connection.getRequestProperties().containsKey(HttpHeaderParser.HEADER_CONTENT_TYPE)) {
        connection.setRequestProperty(
                HttpHeaderParser.HEADER_CONTENT_TYPE, request.getBodyContentType());
    }
    DataOutputStream out = new DataOutputStream(connection.getOutputStream());
    out.write(body);
    out.close();
}
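The body bytes come from request.getBody(). On the base Request class, the default getBody() URL-encodes whatever getParams() returns, so a classic form POST can be sketched by simply overriding getParams() (url, listener and errorListener are assumed to exist as in the earlier example):

StringRequest loginRequest = new StringRequest(Request.Method.POST, url, listener, errorListener) {
    @Override
    protected Map<String, String> getParams() throws AuthFailureError {
        Map<String, String> params = new HashMap<>();
        params.put("userName", "zhangsan");
        params.put("password", "123456");
        // The default getBody() encodes this map as application/x-www-form-urlencoded,
        // which is also the Content-Type reported by getBodyContentType().
        return params;
    }
};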

addBodyIfExists and addBody are easy to follow. Once the request has been executed, the raw NetworkResponse has to be parsed; recall that in the network dispatcher this is done by Response<?> response = request.parseNetworkResponse(networkResponse). parseNetworkResponse is abstract on Request and implemented by each concrete request type, for example the JsonObjectRequest used above:

@Override
protected Response<JSONObject> parseNetworkResponse(NetworkResponse response) {
    try {
        String jsonString =
                new String(
                        response.data,
                        HttpHeaderParser.parseCharset(response.headers, PROTOCOL_CHARSET));
        return Response.success(
                new JSONObject(jsonString), HttpHeaderParser.parseCacheHeaders(response));
    } catch (UnsupportedEncodingException e) {
        return Response.error(new ParseError(e));
    } catch (JSONException je) {
        return Response.error(new ParseError(je));
    }
}
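Because parseNetworkResponse runs on the worker thread, it is also a natural place to do the Gson conversion from the usage example once, instead of repeating it in every listener; a sketch of a generic Gson-backed request (this class is illustrative, not part of Volley):

public class GsonRequest<T> extends Request<T> {
    private final Gson gson = new Gson();
    private final Class<T> clazz;
    private final Response.Listener<T> listener;

    public GsonRequest(int method, String url, Class<T> clazz,
                       Response.Listener<T> listener, Response.ErrorListener errorListener) {
        super(method, url, errorListener);
        this.clazz = clazz;
        this.listener = listener;
    }

    @Override
    protected void deliverResponse(T response) {
        listener.onResponse(response);
    }

    @Override
    protected Response<T> parseNetworkResponse(NetworkResponse response) {
        try {
            // Decode using the charset from the response headers, then let Gson build the bean.
            String json = new String(response.data,
                    HttpHeaderParser.parseCharset(response.headers));
            return Response.success(gson.fromJson(json, clazz),
                    HttpHeaderParser.parseCacheHeaders(response));
        } catch (UnsupportedEncodingException e) {
            return Response.error(new ParseError(e));
        } catch (JsonSyntaxException e) {
            return Response.error(new ParseError(e));
        }
    }
}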

What deserves a closer look here is parseCacheHeaders:

public static Cache.Entry parseCacheHeaders(NetworkResponse response) {
    long now = System.currentTimeMillis();

    Map<String, String> headers = response.headers;

    long serverDate = 0;
    long lastModified = 0;
    long serverExpires = 0;
    long softExpire = 0;
    long finalExpire = 0;
    long maxAge = 0;
    long staleWhileRevalidate = 0;
    boolean hasCacheControl = false;
    boolean mustRevalidate = false;

    String serverEtag = null;
    String headerValue;

    headerValue = headers.get("Date");
    if (headerValue != null) {
        serverDate = parseDateAsEpoch(headerValue);
    }

    headerValue = headers.get("Cache-Control");
    if (headerValue != null) {
        hasCacheControl = true;
        String[] tokens = headerValue.split(",", 0);
        for (int i = 0; i < tokens.length; i++) {
            String token = tokens[i].trim();
            if (token.equals("no-cache") || token.equals("no-store")) {
                return null;
            } else if (token.startsWith("max-age=")) {
                try {
                    maxAge = Long.parseLong(token.substring(8));
                } catch (Exception e) {
                }
            } else if (token.startsWith("stale-while-revalidate=")) {
                try {
                    staleWhileRevalidate = Long.parseLong(token.substring(23));
                } catch (Exception e) {
                }
            } else if (token.equals("must-revalidate") || token.equals("proxy-revalidate")) {
                mustRevalidate = true;
            }
        }
    }

    headerValue = headers.get("Expires");
    if (headerValue != null) {
        serverExpires = parseDateAsEpoch(headerValue);
    }

    headerValue = headers.get("Last-Modified");
    if (headerValue != null) {
        lastModified = parseDateAsEpoch(headerValue);
    }

    serverEtag = headers.get("ETag");

    // Cache-Control takes precedence over an Expires header, even if both exist and Expires
    // is more restrictive.
    if (hasCacheControl) {
        softExpire = now + maxAge * 1000;
        finalExpire = mustRevalidate ? softExpire : softExpire + staleWhileRevalidate * 1000;
    } else if (serverDate > 0 && serverExpires >= serverDate) {
        // Default semantic for Expire header in HTTP specification is softExpire.
        softExpire = now + (serverExpires - serverDate);
        finalExpire = softExpire;
    }

    Cache.Entry entry = new Cache.Entry();
    entry.data = response.data;
    entry.etag = serverEtag;
    entry.softTtl = softExpire;
    entry.ttl = finalExpire;
    entry.serverDate = serverDate;
    entry.lastModified = lastModified;
    entry.responseHeaders = headers;
    entry.allResponseHeaders = response.allHeaders;

    return entry;
}

It parses the usual caching headers (Date, Cache-Control, Expires, Last-Modified, ETag) and turns them into a Cache.Entry with a soft TTL (softTtl) and a hard TTL (ttl).
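To make the TTL arithmetic concrete, here is a worked example (the header values are assumed): a response arrives with Cache-Control: max-age=300, stale-while-revalidate=60 and no must-revalidate.

// hasCacheControl == true, maxAge == 300, staleWhileRevalidate == 60, mustRevalidate == false
long softExpire  = now + 300 * 1000;        // entry.softTtl: fresh for 5 minutes
long finalExpire = softExpire + 60 * 1000;  // entry.ttl: plus a 60 s stale-while-revalidate window
// Between softTtl and ttl the CacheDispatcher sees a soft-expired hit: refreshNeeded() is true,
// so the cached response is delivered as intermediate and the request is also sent to the network.
// With must-revalidate present, ttl would equal softTtl and the entry would simply expire.

With the headers covered, the only step left in the whole flow is the add() method we called at the very beginning: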

public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }
    mCacheQueue.add(request);
    return request;
}

add() itself is simple: it tags the request with this queue, records it in mCurrentRequests, assigns it a sequence number, and then puts it on one of the two blocking queues: requests that should not be cached go straight onto the network queue, everything else goes onto the cache queue. Putting it together with the usage at the top: creating the queue starts the cache dispatcher and the network dispatchers, all of which block on their empty queues. The first add() call (unless the request was explicitly marked as not cacheable) lands on the cache queue and wakes the cache dispatcher; on a miss or an expired entry the request is forwarded to the network queue and picked up by a network dispatcher. In other words, the network queue is fed mostly by the CacheDispatcher rather than by add() directly, as long as shouldCache has not been set to false.
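One last practical note: a request can opt out of the cache path entirely, in which case add() drops it straight onto the network queue; a sketch, reusing the request from the usage example:

// Skip the cache for this request: add() puts it on mNetworkQueue directly,
// and the NetworkDispatcher will not write the response back to the DiskBasedCache either.
customJsonRequest.setShouldCache(false);
requestQueue.add(customJsonRequest);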