I haven't blogged at all since my internship started. Now that things have settled down, I've decided to get back into the habit and aim for one post a week.
I'm planning a Volley series. Yes, there are already plenty of articles about Volley online, and quite a few good ones, but I'm writing it anyway. Partly because Volley is such a classic, and partly because its source code is very approachable: you can learn a lot from it, like design patterns, caching strategies, logging techniques, and networking knowledge. Treat it like a textbook: read it line by line, study it, and think about why it does things a certain way, along with the benefits and drawbacks of each choice. Enough preamble, let's begin our Volley journey!
This article is mainly meant to spark your interest; reading with questions and curiosity in mind works much better. Let's see what makes Volley impressive enough that so many people study it:
1. Strong extensibility: the design is largely interface-based, and it's highly configurable too.
2. Single responsibility shows up in many places; we'll get to the details in due course.
3. Parts of the cache design are remarkably clever, genuinely impressive: like a mid laner and a jungler pulling off a perfectly coordinated play.
4. Handling of network issues, such as redirects and duplicate requests.
5. How to set up logging to record request information for debugging.
6. How so many requests are managed, and how they get dispatched.
7. How composition, rather than inheritance, keeps the code flexible and easy to extend.
Those are just a few highlights; there are many more secrets waiting to be explored. I'd still recommend reading another blog post on Volley's basic usage first, but below I'll briefly introduce Volley's architecture and the request dispatch flow. All of this lays the groundwork for the articles to come.
Let's start with Volley's most basic usage:
RequestQueue mQueue = Volley.newRequestQueue(context);
StringRequest stringRequest = new StringRequest("http://www.baidu.com",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                Log.d("TAG", response);
            }
        }, new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("TAG", error.getMessage(), error);
            }
        });
mQueue.add(stringRequest);
First we get a RequestQueue from Volley; the name says it all, it's a request queue. Then we create a Request and add it to the RequestQueue, and that's one complete network request. Looks dead simple, but behind the scenes, how is the network request actually sent, and how does the returned data end up in the two listeners we attached to the Request? Read on:
public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue;
    if (maxDiskCacheBytes <= -1) {
        // No maximum size specified
        queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    } else {
        // Disk cache size specified
        queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
    }

    queue.start();
    return queue;
}
Let's pick out the essentials first; the details will all come later. Start with the return value, which is a RequestQueue:
queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
or
queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);
You can see it takes two parameters, a DiskBasedCache and a Network. Network is an interface, and the implementation actually passed in is BasicNetwork:
Network network = new BasicNetwork(stack);
All the code above exists just to construct those two parameters, so let's skip it for now, grasp the overall skeleton first, and the details will fall into place. Notice that the second-to-last line also calls queue.start(). Everything before it merely builds a RequestQueue, and then this method gets called on it; that can't be trivial, so let's step inside.
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
Scroll back up to the diagram; does this look familiar? All this method does is create a CacheDispatcher and some NetworkDispatchers and call their respective start() methods. Both are essentially threads, so calling start() just starts the thread. mDispatchers.length defaults to 4, which means four NetworkDispatcher threads get started (don't worry too much about why it's 4; that's simply the default, presumably settled on through testing as a reasonable fit for typical workloads). What the threads actually execute, we'll see in a moment.
public class CacheDispatcher extends Thread
public class NetworkDispatcher extends Thread
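As a rough illustration of this Thread plus blocking-queue pattern, here is a minimal dispatcher pool in plain JDK Java. This is a simplified sketch, not Volley's actual code: the names Dispatcher, dispatch, and the Runnable "requests" are made up for illustration.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class DispatcherSketch {
    // Simplified stand-in for CacheDispatcher/NetworkDispatcher: a thread
    // that blocks on a shared queue and processes whatever arrives.
    static class Dispatcher extends Thread {
        private final BlockingQueue<Runnable> queue;
        private volatile boolean mQuit = false;

        Dispatcher(BlockingQueue<Runnable> queue) { this.queue = queue; }

        void quit() {
            mQuit = true;
            interrupt();  // wake the thread if it is blocked in take()
        }

        @Override
        public void run() {
            while (true) {
                Runnable request;
                try {
                    request = queue.take();  // blocks until a request is queued
                } catch (InterruptedException e) {
                    if (mQuit) {
                        return;  // interrupted because it was time to quit
                    }
                    continue;
                }
                request.run();  // "perform" the request
            }
        }
    }

    /** Starts a pool of dispatchers, feeds them tasks, and returns how many ran. */
    static int dispatch(int poolSize, int tasks) throws InterruptedException {
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
        Dispatcher[] dispatchers = new Dispatcher[poolSize];
        for (int i = 0; i < dispatchers.length; i++) {
            dispatchers[i] = new Dispatcher(queue);
            dispatchers[i].start();
        }
        AtomicInteger done = new AtomicInteger();
        CountDownLatch latch = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            queue.add(() -> { done.incrementAndGet(); latch.countDown(); });
        }
        latch.await();
        for (Dispatcher d : dispatchers) {
            d.quit();
        }
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 4 mirrors RequestQueue's default network dispatcher pool size.
        System.out.println(dispatch(4, 8) + " requests dispatched");
    }
}
```

Volley's real dispatchers have this same shape: quit() sets a flag and interrupts the thread, and the InterruptedException handler decides between returning and retrying.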
Alright, let's catch our breath and go back to where it all started, the "bloody case" set off by one RequestQueue. We now know that
RequestQueue mQueue = Volley.newRequestQueue(context);
this single line of code ends up starting several threads. Let's continue along the main path:
StringRequest stringRequest = new StringRequest("http://www.baidu.com",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                Log.d("TAG", response);
            }
        }, new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("TAG", error.getMessage(), error);
            }
        });
mQueue.add(stringRequest);
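The two anonymous classes in the StringRequest snippet above are just callback interfaces. Here is a minimal sketch of that pattern; FakeRequest and these cut-down Listener/ErrorListener interfaces are stand-ins invented for illustration, not Volley's real classes.

```java
public class ListenerSketch {
    // Cut-down stand-ins for Response.Listener<T> and Response.ErrorListener.
    interface Listener<T> { void onResponse(T response); }
    interface ErrorListener { void onErrorResponse(Exception error); }

    // Stand-in for a request: holds a URL plus the two callbacks, and hands
    // the outcome to exactly one of them.
    static class FakeRequest {
        final String url;
        final Listener<String> listener;
        final ErrorListener errorListener;

        FakeRequest(String url, Listener<String> listener, ErrorListener errorListener) {
            this.url = url;
            this.listener = listener;
            this.errorListener = errorListener;
        }

        void deliverResponse(String response) { listener.onResponse(response); }
        void deliverError(Exception error) { errorListener.onErrorResponse(error); }
    }

    public static void main(String[] args) {
        FakeRequest request = new FakeRequest("http://www.baidu.com",
                response -> System.out.println("onResponse: " + response),
                error -> System.out.println("onErrorResponse: " + error.getMessage()));
        // Whoever completes the request picks which callback fires.
        request.deliverResponse("<html>hello</html>");
    }
}
```

In Volley the "whoever" is the delivery object, which the dispatchers use to post results; the request just carries the callbacks along until a result is ready.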
Most of that big chunk of code just builds a StringRequest, which takes three parameters: a URL and two listeners, one for handling a successful response and one for handling errors. (A little spoiler: don't worry, we'll cover both later.) Finally the StringRequest is added to the queue. Let's step inside add() and take a look.
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
The code is fairly long, but don't panic; we'll go through it piece by piece.
// Tag the request as belonging to this queue and add it to the set of current requests.
request.setRequestQueue(this);
synchronized (mCurrentRequests) {
    mCurrentRequests.add(request);
}
First, the current queue is stored on the request as a member variable (we won't dig into why just yet; it comes up later), and then the request is added to the current request set. Note the synchronized block above, which guarantees thread safety.
// Process requests in the order they are added.
request.setSequence(getSequenceNumber());
request.addMarker("add-to-queue");

// If the request is uncacheable, skip the cache queue and go straight to the network.
if (!request.shouldCache()) {
    mNetworkQueue.add(request);
    return request;
}
Next, the request is given a sequence number, which matters for request ordering and priority; we'll cover that later too, so do stick with the series. Then it checks whether the request should be cached. If not, the request goes straight into the network queue. Remember how start() initialized the two kinds of dispatchers and passed mNetworkQueue into the network dispatchers? Exactly: a NetworkDispatcher will take the request from that queue and perform the network request. The method then returns the request. By default, though, requests are cacheable.
// Insert request into stage if there's already a request with the same cache key in flight.
synchronized (mWaitingRequests) {
    String cacheKey = request.getCacheKey();
    if (mWaitingRequests.containsKey(cacheKey)) {
        // There is already a request in flight. Queue up.
        Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
        if (stagedRequests == null) {
            stagedRequests = new LinkedList<Request<?>>();
        }
        stagedRequests.add(request);
        mWaitingRequests.put(cacheKey, stagedRequests);
        if (VolleyLog.DEBUG) {
            VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
        }
    } else {
        // Insert 'null' queue for this cacheKey, indicating there is now a request in
        // flight.
        mWaitingRequests.put(cacheKey, null);
        mCacheQueue.add(request);
    }
    return request;
}
This block is again wrapped in synchronized; you'll notice that almost every collection operation here is made thread safe. Moving on, it first obtains the request's cacheKey, which is essentially a string derived from the URL. The rest of the code exists to avoid duplicate requests: if an identical request is already in flight, later ones are parked and can be served from its result instead of hitting the network again. Here's how it works. If mWaitingRequests already contains the cacheKey, get the corresponding queue (creating one if it's null), add the request to it, and put the queue back into mWaitingRequests under that cacheKey. If mWaitingRequests does not contain the key, no identical request is in flight, so the cacheKey is mapped to null as an in-flight marker and the request is dropped into mCacheQueue, which in turn was passed into the CacheDispatcher. So what exactly happens inside the dispatchers? Itching to find out?
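The mWaitingRequests bookkeeping boils down to a tiny sketch. StagingSketch and its String "requests" are invented for illustration; Volley stores Request objects and replays the staged ones once the in-flight request finishes.

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

public class StagingSketch {
    // cacheKey -> queue of requests waiting on an identical in-flight request.
    // A null value means "one request for this key is in flight, none waiting".
    private final Map<String, Queue<String>> waiting = new HashMap<>();

    /** Returns true if the request should actually be dispatched. */
    synchronized boolean add(String cacheKey, String request) {
        if (waiting.containsKey(cacheKey)) {
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) {
                staged = new LinkedList<>();
            }
            staged.add(request);
            waiting.put(cacheKey, staged);
            return false;             // identical request in flight: hold it
        }
        waiting.put(cacheKey, null);  // mark this key as in flight
        return true;                  // first request for this key: dispatch
    }

    /** Called when the in-flight request finishes; returns the held requests. */
    synchronized Queue<String> finish(String cacheKey) {
        Queue<String> staged = waiting.remove(cacheKey);
        return staged == null ? new LinkedList<>() : staged;
    }

    public static void main(String[] args) {
        StagingSketch staging = new StagingSketch();
        System.out.println(staging.add("http://www.baidu.com", "r1"));  // true: dispatched
        System.out.println(staging.add("http://www.baidu.com", "r2"));  // false: held
        System.out.println(staging.finish("http://www.baidu.com"));     // [r2]
    }
}
```

The null-as-marker trick saves allocating an empty queue for the common case where no duplicate ever arrives.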
Hold on, let's summarize first. From the basic usage above we've learned that, after a series of twists and turns, a few threads get started and the request lands in either mCacheQueue or mNetworkQueue. That's it. Now let's look at what the two dispatchers actually do. As mentioned, both are essentially threads, so calling start() on them runs their run() methods. Here is CacheDispatcher's run() first:
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    Request<?> request;
    while (true) {
        // release previous request object to avoid leaking request object when mQueue is drained.
        request = null;
        try {
            // Take a request from the queue.
            request = mCacheQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
        try {
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                final Request<?> finalRequest = request;
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(finalRequest);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
        }
    }
}
At first glance it looks scary long, but much of it is comments plus some exception handling; strip those away and not much remains. First it sets the thread priority and initializes the cache. Then comes a while (true), so the thread runs forever, taking requests from the queue. Sharp-eyed readers may notice something: where is the thread synchronization?
/** The cache triage queue. */
private final PriorityBlockingQueue<Request<?>> mCacheQueue =
        new PriorityBlockingQueue<Request<?>>();

/** The queue of requests that are actually going out to the network. */
private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
        new PriorityBlockingQueue<Request<?>>();
Because both are PriorityBlockingQueues: unbounded, thread-safe priority queues backed by a priority heap. Two points here. First, they're thread safe, so no synchronized keyword is needed. Second, they're ordered by priority: you can set a request's priority, and requests of equal priority are served first-in-first-out via the sequence numbers we saw earlier (how to set a request's priority comes later). Reading on: if the request has been cancelled, it just finishes it and moves to the next one. Otherwise it looks up the cache entry by the request's cacheKey. If the entry is null, the request goes into mNetworkQueue to be fetched from the network; likewise if the entry has expired. If everything checks out, a response is parsed from the cache entry, and then it checks whether the entry needs refreshing. If not, the response is simply delivered for the request; if it does, the cached response is delivered as an intermediate result and the request is also sent to the network for a fresh response. Some details are glossed over here; we're sticking to the trunk, and they'll all come up later.
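Here is a small runnable demo of that ordering. FakeRequest is a stand-in invented for this demo; the "lower number = more urgent" convention is ours, while Volley's real Request.compareTo compares its Priority enum first and the sequence number second, which achieves the same effect.

```java
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityQueueDemo {
    // Stand-in for Volley's Request, made comparable so the priority
    // queue knows how to order it.
    static class FakeRequest implements Comparable<FakeRequest> {
        final String name;
        final int priority;  // lower = more urgent (our convention)
        final int sequence;  // FIFO tie-breaker among equal priorities

        FakeRequest(String name, int priority, int sequence) {
            this.name = name;
            this.priority = priority;
            this.sequence = sequence;
        }

        @Override
        public int compareTo(FakeRequest other) {
            return priority == other.priority
                    ? Integer.compare(sequence, other.sequence)
                    : Integer.compare(priority, other.priority);
        }

        @Override
        public String toString() { return name; }
    }

    public static void main(String[] args) {
        PriorityBlockingQueue<FakeRequest> queue = new PriorityBlockingQueue<>();
        queue.add(new FakeRequest("low", 2, 0));
        queue.add(new FakeRequest("high", 0, 1));
        queue.add(new FakeRequest("normal", 1, 2));
        // poll() hands back the most urgent element, not the oldest one:
        System.out.println(queue.poll() + " " + queue.poll() + " " + queue.poll());
        // prints: high normal low
    }
}
```

Note that "low" was added first but comes out last; that is exactly why the dispatchers can take() from these queues without any extra locking or FIFO assumptions.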
To summarize: after a series of checks, if the response is in the cache it's served from the cache; otherwise the request goes into mNetworkQueue and is fetched from the network. Now let's see how NetworkDispatcher does that fetching.
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    Request<?> request;
    while (true) {
        long startTimeMs = SystemClock.elapsedRealtime();
        // release previous request object to avoid leaking request object when mQueue is drained.
        request = null;
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the
            // network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            addTrafficStatsTag(request);

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            VolleyError volleyError = new VolleyError(e);
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            mDelivery.postError(request, volleyError);
        }
    }
}
Scary again, but in Chairman Mao's words, it's all a paper tiger (it really is). Look closer and it follows the same pattern as the last one, so we'll examine just one line:
// Perform the network request.
NetworkResponse networkResponse = mNetwork.performRequest(request);
This is where the actual network fetch happens. mNetwork is an interface with concrete implementations, and the real request is performed by one of its subclasses. We won't chase it further here; later articles will cover them one by one.
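This interface-based wiring is the same pattern we saw back in newRequestQueue: build a Cache and a Network, then inject both into the queue. A minimal sketch under that reading (MapCache, FakeNetwork, and SimpleQueue are invented stand-ins, not Volley classes):

```java
import java.util.HashMap;
import java.util.Map;

public class WiringSketch {
    // Cut-down stand-ins for Volley's Cache and Network interfaces.
    interface Cache { String get(String key); void put(String key, String value); }
    interface Network { String performRequest(String url); }

    // Stand-in for DiskBasedCache: just an in-memory map here.
    static class MapCache implements Cache {
        private final Map<String, String> data = new HashMap<>();
        public String get(String key) { return data.get(key); }
        public void put(String key, String value) { data.put(key, value); }
    }

    // Stand-in for BasicNetwork: pretends every request succeeds.
    static class FakeNetwork implements Network {
        public String performRequest(String url) { return "response-for-" + url; }
    }

    // Stand-in for RequestQueue: depends only on the two interfaces, so
    // either implementation can be swapped without touching this class.
    static class SimpleQueue {
        private final Cache cache;
        private final Network network;

        SimpleQueue(Cache cache, Network network) {
            this.cache = cache;
            this.network = network;
        }

        String fetch(String url) {
            String cached = cache.get(url);
            if (cached != null) {
                return cached;                           // cache hit
            }
            String fresh = network.performRequest(url);  // cache miss: go to network
            cache.put(url, fresh);
            return fresh;
        }
    }

    public static void main(String[] args) {
        // Mirrors: new RequestQueue(new DiskBasedCache(cacheDir), new BasicNetwork(stack))
        SimpleQueue queue = new SimpleQueue(new MapCache(), new FakeNetwork());
        System.out.println(queue.fetch("http://www.baidu.com"));
    }
}
```

Because SimpleQueue sees only the interfaces, swapping the disk cache for a memory cache, or one HTTP stack for another, requires no change to the queue itself; that is the extensibility point 1 at the top of this article was about.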
Alright, this article is nearly at its end. Let's recap: we get a RequestQueue from Volley, which starts several threads; we create a request and add it to the RequestQueue; if the request has a cached response it's served from the cache, otherwise it's fetched from the network. Take another look at the diagram above; doesn't it all feel much clearer now?
The point of all this was to give you a feel for Volley's flow and lay the groundwork for the articles to come. It's fine if it isn't all crystal clear yet; the goal is just a rough mental model. Some of you may have noticed the diagram is from Guo Lin (郭神), and the article reads a bit like his too. Indeed, I learned from his articles myself; standing on the shoulders of giants lets you see further. The next article will use caching as the entry point and show how cleverly Volley designs its cache. It's genuinely good stuff, so stay tuned!