Advanced usage of requests
SSL verification
What is the difference between HTTP and HTTPS:
-HTTPS = HTTP + SSL/TLS (certificates). If a site's certificate was not issued by a trusted CA, the browser will warn that the connection is not secure.
SSL verification:
1. Skip certificate verification:
import requests

# verify=False skips certificate validation (useful for self-signed certs)
response = requests.get('https://www.12306.cn', verify=False)
print(response.status_code)
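With verify=False, urllib3 emits an InsecureRequestWarning on every request; it can be silenced first:
import urllib3
urllib3.disable_warnings()  # silence the InsecureRequestWarning that verify=False triggers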
2. Carry the certificate manually:
# pass the client certificate and private key explicitly
response = requests.get('https://www.12306.cn', cert=('/path/server.crt', '/path/key'))
Using proxies
When crawling, a site may fight back with anti-scraping measures: rate limiting, account bans, IP bans, per-user-ID restrictions... A crawler has to work around these.
-IP ban ----> use proxies
-Account ban ----> register many throwaway accounts
-What is a proxy:
Forward proxy: stands in for the client
Reverse proxy: stands in for the server (nginx is a reverse proxy server)
-Proxies are almost always paid services
-How a proxy works: the HTTP request goes out from the proxy's IP instead of yours
proxies = {
    # map each URL scheme to a proxy address; the value must carry the scheme,
    # and an https:// request only uses the proxy if there is an 'https' key
    'http': 'http://192.168.10.102:9003',
    'https': 'http://192.168.10.102:9003',
}
response = requests.get('https://www.baidu.com', proxies=proxies)
print(response.text)
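To confirm the request really goes out through the proxy, one quick check is an IP-echo service (httpbin.org/ip here; any echo endpoint works):
import requests

proxies = {'http': 'http://192.168.10.102:9003', 'https': 'http://192.168.10.102:9003'}
# httpbin echoes back the origin IP; it should show the proxy's address, not yours
print(requests.get('http://httpbin.org/ip', proxies=proxies).json())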
Timeout settings
# raise an exception if the server doesn't respond within 3 seconds
response = requests.get('https://www.baidu23.com', timeout=3)
print(response)
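timeout can also be a (connect, read) tuple to bound the two phases separately:
import requests

# up to 3s to establish the connection, up to 10s waiting for the response
response = requests.get('https://www.baidu.com', timeout=(3, 10))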
Exception handling
import requests
from requests.exceptions import ReadTimeout, ConnectionError, Timeout, RequestException

try:
    r = requests.get('http://www.baidu7.com', timeout=0.00001)
except ReadTimeout:
    print('read timeout')
except ConnectionError:
    print('connection error (bad host / unreachable network)')
except Timeout:
    print('timeout')
except RequestException:
    print('generic request error')
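Instead of catching exceptions by hand, transient failures can be retried automatically with a Session plus urllib3's Retry; a minimal sketch:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# retry up to 3 times on connection problems, backing off between attempts
adapter = HTTPAdapter(max_retries=Retry(total=3, backoff_factor=0.5))
session.mount('http://', adapter)
session.mount('https://', adapter)
res = session.get('https://www.baidu.com', timeout=3)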
Uploading files
# files: form field name -> open file object; requests builds the multipart body
files = {'files': open('a.txt', 'rb')}
res = requests.post('http://httpbin.org/post', files=files)
print(res.text)  # httpbin echoes the uploaded data back
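The value can also be a tuple, to control the filename and content type sent in the multipart body:
files = {'files': ('report.txt', open('a.txt', 'rb'), 'text/plain')}
res = requests.post('http://httpbin.org/post', files=files)
print(res.json()['files'])  # the uploaded content, echoed back by httpbin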
Building a proxy pool
There is an open-source proxy pool project on GitHub ------> https://github.com/jhao104/proxy_pool
-Crawler side: scrapes free-proxy websites, validates each proxy, and stores the working ones locally
-Web side: a Flask backend exposes an endpoint that returns a random usable proxy (a minimal sketch of this idea follows)
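A minimal sketch of the web side, assuming the validated proxies sit in a Redis set named 'proxies' (the names are illustrative, not the actual proxy_pool code):
from flask import Flask
import redis

app = Flask(__name__)
r = redis.Redis(host='127.0.0.1', port=6379, db=0)

@app.route('/get')
def get_proxy():
    # srandmember returns a random element of the set (or None if the set is empty)
    proxy = r.srandmember('proxies')
    return {'proxy': proxy.decode()} if proxy else {'error': 'pool is empty'}

if __name__ == '__main__':
    app.run(port=5010)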
Setup steps
1 git clone https://github.com/jhao104/proxy_pool.git
2 Create a virtual environment and install the dependencies: pip install -r requirements.txt
3 Edit the config file setting.py ---> make sure the Redis service is running
HOST = "0.0.0.0"
PORT = 5010
DB_CONN = 'redis://127.0.0.1:8888/0'
PROXY_FETCHER = [
    "freeProxy01",  # which built-in free-proxy fetchers to enable
    "freeProxy02",
]
4 Start the scheduler (fetches and validates proxies) and the web server:
python proxyPool.py schedule
python proxyPool.py server
5 Fetch a random proxy:
127.0.0.1:5010/get
import requests

# ask the local proxy pool for a random proxy
res = requests.get('http://127.0.0.1:5010/get/').json()
if res['https']:
    protocol = 'https'
else:
    protocol = 'http'
# the proxy value must carry the scheme, e.g. 'http://ip:port'
proxies = {
    protocol: protocol + '://' + res['proxy']
}
print(proxies)
res = requests.get('https://www.cnblogs.com/liuqingzheng/p/16005896.html', proxies=proxies)
print(res.status_code)
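Free proxies die fast; when one fails, the pool's delete endpoint can drop it (endpoint per the proxy_pool README, verify against your version):
import requests

proxy = requests.get('http://127.0.0.1:5010/get/').json()['proxy']
try:
    requests.get('https://www.baidu.com', proxies={'http': 'http://' + proxy}, timeout=3)
except Exception:
    # the proxy is dead -- ask the pool to remove it
    requests.get('http://127.0.0.1:5010/delete/', params={'proxy': proxy})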
Getting the client IP in a Django backend
from django.http import HttpResponse

def ip_test(request):
    # REMOTE_ADDR is the address of the peer that connected to Django
    ip = request.META.get('REMOTE_ADDR')
    return HttpResponse('Your IP is: %s' % ip)
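Note that REMOTE_ADDR only sees the directly connected peer, so behind nginx it reports the proxy's address. A hedged variant reading X-Forwarded-For (assuming the front proxy sets that header; the view name is illustrative):
from django.http import HttpResponse

def real_ip_test(request):
    # assumption: the reverse proxy sets X-Forwarded-For; its first entry is the client
    forwarded = request.META.get('HTTP_X_FORWARDED_FOR')
    ip = forwarded.split(',')[0].strip() if forwarded else request.META.get('REMOTE_ADDR')
    return HttpResponse('Your IP is: %s' % ip)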
import requests

res = requests.get('http://127.0.0.1:5010/get/').json()
if res['https']:
    protocol = 'https'
else:
    protocol = 'http'
proxies = {
    protocol: protocol + '://' + res['proxy']
}
print(proxies)
# hit the deployed ip-echo view through the proxy; it should print the proxy's IP
res = requests.get('http://101.133.225.166/ip/', proxies=proxies)
print(res.text)
Crawling a video site
import os
import re
import requests

os.makedirs('./video', exist_ok=True)  # make sure the output directory exists

# the list page returns HTML fragments; pull out the per-video links
res = requests.get('https://www.pearvideo.com/category_loading.jsp?reqType=5&categoryId=1&start=1')
video_list = re.findall('<a href="(.*?)" class="vervideo-lilink actplay">', res.text)
for video in video_list:
    video_id = video.split('_')[-1]
    # the videoStatus API checks the Referer as anti-leech protection
    header = {
        'Referer': 'https://www.pearvideo.com/%s' % video
    }
    res = requests.get('https://www.pearvideo.com/videoStatus.jsp?contId=%s&mrd=0.6761335369801458' % video_id,
                       headers=header).json()
    real_mp4_url = res['videoInfo']['videos']['srcUrl']
    # the srcUrl carries a fake timestamp segment; swap it for 'cont-<id>' to get the real address
    real_mp4_url = real_mp4_url.replace(real_mp4_url.rsplit('/', 1)[-1].split('-')[0], 'cont-%s' % video_id)
    print(real_mp4_url)
    res = requests.get(real_mp4_url)
    with open('./video/%s.mp4' % video_id, 'wb') as f:
        for line in res.iter_content(chunk_size=1024):  # stream to disk in chunks
            f.write(line)
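Downloading one file at a time is slow; a sketch parallelizing the downloads with a thread pool (download_video and the tasks list are illustrative names, assuming the (url, id) pairs were collected in the loop above):
import requests
from concurrent.futures import ThreadPoolExecutor

def download_video(url, video_id):
    # hypothetical helper: same streaming logic as the loop body above
    res = requests.get(url)
    with open('./video/%s.mp4' % video_id, 'wb') as f:
        for chunk in res.iter_content(chunk_size=1024):
            f.write(chunk)

# tasks: a list of (real_mp4_url, video_id) pairs collected earlier
with ThreadPoolExecutor(max_workers=5) as pool:
    for url, vid in tasks:
        pool.submit(download_video, url, vid)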
Crawling news
import requests
import pymysql
from bs4 import BeautifulSoup

conn = pymysql.connect(
    host='127.0.0.1',
    port=3306,
    user='root',
    password='',
    database='db111',
    charset='utf8mb4',
    autocommit=True  # every execute is committed automatically
)
cursor = conn.cursor(cursor=pymysql.cursors.DictCursor)  # create the cursor once, not per row

res = requests.get('https://www.autohome.com.cn/news/1/#liststart')
soup = BeautifulSoup(res.text, 'html.parser')
ul_list = soup.find_all(name='ul', class_='article')
for ul in ul_list:
    li_list = ul.find_all(name='li')
    for li in li_list:
        h3 = li.find(name='h3')
        if h3:  # some li elements are ads without a title
            title = h3.text
            description = li.find(name='p').text
            url = 'https:' + li.find(name='a').attrs.get('href')
            img = li.find(name='img').attrs.get('src')
            if not img.startswith('http'):
                img = 'https:' + img  # some images come as protocol-relative //... URLs
            # parameterized query keeps scraped text from breaking the SQL
            sql = "insert into article values(%s, %s, %s, %s);"
            cursor.execute(sql, (title, description, url, img))
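The insert assumes a four-column article table already exists; a possible schema, executed with the same cursor (the column names are illustrative, not from the source):
# one possible schema matching the 4-column insert above
cursor.execute("""
create table if not exists article (
    title       varchar(255),
    description varchar(1000),
    url         varchar(500),
    img         varchar(500)
)
""")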
Introduction to BeautifulSoup4
BeautifulSoup('the HTML/XML string to parse', "html.parser")  # parser from the standard library
BeautifulSoup('the HTML/XML string to parse', "lxml")  # faster and more fault-tolerant; needs pip install lxml
bs4: traversing the document tree
from bs4 import BeautifulSoup
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" id='id_p' name='lqz' xx='yy'>lqz is handsome <b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'lxml')
'''
Traversing the document tree: select directly by tag name. It's fast, but if several tags share a name, only the first one is returned.
1. Basic usage
2. Get a tag's name
3. Get a tag's attributes
4. Get a tag's text content
5. Nested selection
6. Children and descendants
7. Parents and ancestors
8. Siblings
'''
# 8. siblings: everything after / before the first <a> at the same level
print(list(soup.a.next_siblings))
print('-----')
print(list(soup.a.previous_siblings))
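The sibling calls above cover item 8; quick examples of items 1 through 7 on the same soup:
# 1. usage: soup.<tag> grabs the first tag with that name
print(soup.p)
# 2. tag name
print(soup.p.name)
# 3. attributes come back as a dict; class is multi-valued, so it is a list
print(soup.p.attrs)
# 4. .text joins the text of all descendants
print(soup.p.text)
# 5. nested selection: attribute access can be chained
print(soup.head.title)
# 6. children (direct) vs descendants (recursive)
print(list(soup.p.children))
print(list(soup.p.descendants))
# 7. parent and ancestors
print(soup.a.parent)
print(list(soup.a.parents))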