Pure Gold! A Must-Have for the Sassy: Who Isn't a Little Cutie?


The moment a disagreement breaks out, it's meme-battle time! But what if you run out of images at the height of battle? What if you can't find the one you want? Just go scrape a meme site directly.

First, pick a site with a large stock of meme images. How do we scrape them? A quick analysis shows that everything we need is available right in the page, so the main thing to work out is pagination; clicking through a few different pages makes the pagination rule easy to spot, as the quick sketch below shows.
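
For example, a minimal sketch of that rule: the list URLs differ only in the page query parameter, the same pattern the full spider uses below.

# The pagination rule: only the "page" parameter changes between list pages.
base = 'https://www.doutula.com/article/list/?page={}'
for page in range(1, 4):
    print(base.format(page))
# -> https://www.doutula.com/article/list/?page=1
# -> https://www.doutula.com/article/list/?page=2
# -> https://www.doutula.com/article/list/?page=3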

The image links are all in the page source, so once you understand how the paginated URLs are constructed, you can write code to grab the images! The full source of the scraper follows:

# -*- coding: utf-8 -*-
import os

import requests
from bs4 import BeautifulSoup


class doutuSpider(object):
    headers = {
        "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
                      "(KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36"}

    def get_url(self, url):
        # Fetch one list page and visit every album linked from it.
        data = requests.get(url, headers=self.headers)
        soup = BeautifulSoup(data.content, 'lxml')
        totals = soup.find_all("a", {"class": "list-group-item"})
        for one in totals:
            sub_url = one.get('href')
            # One folder per album, named after the last URL segment.
            self.path = os.path.join('J:\\train\\image', sub_url.split('/')[-1])
            os.makedirs(self.path, exist_ok=True)
            try:
                self.get_img_url(sub_url)
            except requests.RequestException:
                pass

    def get_img_url(self, url):
        # Collect every image link on an album page.
        data = requests.get(url, headers=self.headers)
        soup = BeautifulSoup(data.content, 'lxml')
        totals = soup.find_all('div', {'class': 'artile_des'})
        for one in totals:
            img = one.find('img')
            if img is None or not img.get('src'):
                continue
            src = img.get('src')
            # The site serves protocol-relative links ("//..."), so
            # prepend the scheme before downloading.
            img_url = 'http:' + src if src.startswith('//') else src
            try:
                self.get_img(img_url)
            except requests.RequestException:
                pass

    def get_img(self, url):
        # Download a single image into the current album folder.
        filename = url.split('/')[-1]
        img_path = os.path.join(self.path, filename)
        img = requests.get(url, headers=self.headers)
        with open(img_path, 'wb') as f:
            f.write(img.content)

    def create(self):
        # Walk the first 30 list pages; they differ only in ?page=N.
        for count in range(1, 31):
            url = 'https://www.doutula.com/article/list/?page={}'.format(count)
            print('Downloading page {}'.format(count))
            self.get_url(url)


if __name__ == '__main__':
    doutu = doutuSpider()
    doutu.create()

Once it finishes, you'll have a folder holding the memes from all those pages, the very ones you just saw on the site. Now go challenge your friends to a meme battle!
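
If you want a quick sanity check on the haul, a small sketch like this (assuming the same J:\train\image save directory as the spider above) counts how many images actually landed on disk:

import os

root = 'J:\\train\\image'  # same save directory as in the spider
total = 0
for folder, _, files in os.walk(root):
    total += len(files)
print('{} images downloaded into {}'.format(total, root))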