Lesson 5: BeautifulSoup4

1. What BeautifulSoup4 Is For

Like lxml, Beautiful Soup is an HTML/XML parser; its main job is likewise to parse HTML/XML documents and extract data from them.

2. Installation and Documentation

  • Installation
pip install beautifulsoup4
  • Chinese documentation
<https://www.crummy.com/software/BeautifulSoup/bs4/doc/index.zh.html>
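  • The examples in this lesson pass 'lxml' as the parser, so lxml needs to be installed as well (the standard-library 'html.parser' also works if you prefer not to install it)
pip install lxml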

3. Comparison of Parsing Tools

Parser                Speed      Ease of use
BeautifulSoup4        slowest    easiest
lxml                  fast       simple
Regular expressions   fastest    hardest
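
Whichever backend you choose, the BeautifulSoup API stays the same; only the second argument changes. A minimal sketch (assuming lxml is installed; 'html.parser' needs no extra install):

from bs4 import BeautifulSoup

html = '<p class="title"><b>Hello</b></p>'
# Same calls, different parser backend; only speed and leniency differ.
print(BeautifulSoup(html, 'html.parser').p.b.string)   # Hello
print(BeautifulSoup(html, 'lxml').p.b.string)          # Hello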

4. Basic Usage

from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# 1. Create a BeautifulSoup object
soup = BeautifulSoup(html, 'lxml')
print(soup.prettify())

5. The Four Common Object Types in bs4

  • Tag: every tag in the document is of type Tag, and the BeautifulSoup object itself is essentially a Tag as well. So methods such as find and find_all actually belong to Tag, not to BeautifulSoup.
  • NavigableString: inherits from Python's str and is used just like a str. It represents the text inside a tag.
  • BeautifulSoup: inherits from Tag and represents the whole parsed BeautifulSoup tree. Search methods such as find and select still come from Tag.
  • Comment: nothing special; it inherits from NavigableString and represents the content of an HTML comment.

Code example

from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

# 1. Create a BeautifulSoup object
soup = BeautifulSoup(html, 'lxml')
# 1.Tag
print(soup.p)
# 1.1 Two important Tag attributes
# name: the tag's name
print(soup.p.name)
# attrs: all of the tag's attributes as a dict
print(soup.p.attrs)  # {'class': ['title'], 'name': 'dromouse'}
print(soup.p['class'])   # ['title']
soup.p['class'] = 'love'
print(soup.p['class'])   # love

# 2. NavigableString: the text content inside the tag
print(soup.p.string)

# 3. BeautifulSoup: the entire parsed document
print(soup)

# 4. Comment: the content of the HTML comment
print(soup.a.string)   # Elsie
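
To see the four types directly, a short check can be appended to the example above (the printed class paths may vary slightly between bs4 versions):

print(type(soup.p))          # Tag
print(type(soup.p.string))   # NavigableString
print(type(soup))            # BeautifulSoup
print(type(soup.a.string))   # Comment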

6. contents and children

Both return the direct children of a tag, including strings. The difference between them: contents returns a list, while children returns an iterator.
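
A quick way to see the difference, reusing the soup object from the earlier example (the exact iterator repr depends on the bs4 version):

print(type(soup.head.contents))   # <class 'list'>
print(type(soup.head.children))   # an iterator, e.g. <class 'list_iterator'>
print(soup.head.contents[0])      # <title>The Dormouse's story</title>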

7. The string, strings, and stripped_strings Properties and the get_text Method

from bs4 import BeautifulSoup

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

soup = BeautifulSoup(html, 'lxml')
# contents
print(soup.head.contents)  # [<title>The Dormouse's story</title>]
# children
print(soup.children)       # <list_iterator object at 0x000002233D116FA0>
for i in soup.children:
    print(i)

# string
print(soup.head.string)    # The Dormouse's story
# strings
print(soup.strings)   # <generator object Tag._all_strings at 0x000001F695959270>
for i in soup.strings:
    print(repr(i))
"""
"The Dormouse's story"
'\n'
'\n'
"The Dormouse's story"
'\n'
'Once upon a time there were three little sisters; and their names were\n'
',\n'
'Lacie'
' and\n'
'Tillie'
';\nand they lived at the bottom of a well.'
'\n'
'...'
'\n'
"""
print(soup.stripped_strings)     # <generator object PageElement.stripped_strings at 0x000001E05BA89270>
for i in soup.stripped_strings:
    print(i)
"""
The Dormouse's story
Once upon a time there were three little sisters; and their names were
,
Lacie
and
Tillie
;
and they lived at the bottom of a well.
...
"""

8. The find and find_all Methods

Using find_all

  • When extracting tags, the first argument is the tag name. To filter by tag attributes as well, pass the attribute names and their values to the method as keyword arguments, or put all of the attributes and values into a dict and pass it to the attrs parameter.

  • Sometimes you do not want every match; use the limit parameter to cap how many tags are extracted.

The difference between find and find_all

  • find: returns as soon as the first matching tag is found, so only a single element is returned.

  • find_all: returns every matching tag, as a list.
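
For a concrete picture (a small sketch, assuming the soup object built from the job-listing HTML in the code example below):

print(soup.find('tr'))            # the first <tr> only
print(len(soup.find_all('tr')))   # 11 -- every <tr> in the table
print(soup.find('h1'))            # None when nothing matches
print(soup.find_all('h1'))        # [] when nothing matches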

Filter conditions for find and find_all

  • Keyword arguments: use the attribute name as the keyword argument's name and the attribute value as its value to filter.

  • The attrs parameter: put the attribute conditions into a dict and pass it to the attrs parameter.

Getting a tag's attributes

  • By subscript, indexing the tag directly:
href = a['href']
  • Via the attrs property, for example:
href = a.attrs['href']
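  • Both forms above raise a KeyError if the attribute is missing; Tag.get returns None (or a default) instead, which is often safer:
href = a.get('href')               # None if the tag has no href
target = a.get('target', '_self')  # with a fallback value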

Code example

from bs4 import BeautifulSoup

html = """
<table class="tablelist" cellpadding="0" cellspacing="0">
    <tbody>
        <tr class="h">
            <td class="l" width="374">职位名称</td>
            <td>职位类别</td>
            <td>人数</td>
            <td>地点</td>
            <td>发布时间</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=33824&keywords=python&tid=87&lid=2218">22989-金融云区块链高级研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=29938&keywords=python&tid=87&lid=2218">22989-金融云高级后台开发</a></td>
            <td>技术类</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31236&keywords=python&tid=87&lid=2218">SNG16-腾讯音乐运营开发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31235&keywords=python&tid=87&lid=2218">SNG16-腾讯音乐业务运维工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34531&keywords=python&tid=87&lid=2218">TEG03-高级研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34532&keywords=python&tid=87&lid=2218">TEG03-高级图像算法研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31648&keywords=python&tid=87&lid=2218">TEG11-高级AI开发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>4</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32218&keywords=python&tid=87&lid=2218">15851-后台开发工程师</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32217&keywords=python&tid=87&lid=2218">15851-后台开发工程师</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a id="test" class="test" target='_blank' href="position_detail.php?id=34511&keywords=python&tid=87&lid=2218">SNG11-高级业务运维工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
    </tbody>
</table>
"""

soup = BeautifulSoup(html, 'lxml')

# 1. Get all tr tags
tr_list = soup.find_all('tr')
for tr in tr_list:
    print(tr)
    print('-'*30)
# 2. Get the second tr tag
tr = soup.find_all('tr', limit=2)[1]
print(tr)

# 3. Get all tr tags whose class equals 'even'
tr_list = soup.find_all('tr', class_='even')
for tr in tr_list:
    print(tr)
    print('-'*30)

trs_list = soup.find_all('tr', attrs={"class":"even"})
for tr in trs_list:
    print(tr)
    print('-' * 30)

# 4. Extract all a tags whose id equals 'test' and whose class also equals 'test'.
trs_list = soup.find_all('a', attrs={"id":"test", "class":"test"})
for i in trs_list:
    print(i)
    print('-'*30)

trs_list = soup.find_all('a', id='test', class_='test')
print(trs_list)

# 5. Get the href attribute of every a tag
trs_list = soup.find_all('a')
for tr in trs_list:
    href = tr['href']
    print(href)

for tr in trs_list:
    href = tr.attrs['href']
    print(href)

# 6. Get all the job information (plain text)
trs = soup.find_all('tr')[1:]
lists = []
for tr in trs:
    info = {}
    tds = tr.find_all('td')
    name = tds[0].string
    category = tds[1].string
    info['name']=name
    info['category']=category
    infos = list(tr.stripped_strings)   # all the cell text as a list
    print(infos)
    print(tr.get_text())                # or as one combined string via get_text()

    lists.append(info)
print(lists)
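
One caveat about the loop above: tds[0].string works because each job-title <td> contains exactly one child (the <a> tag). When a tag has several children, .string returns None and get_text() is the safer choice, as in this small illustration:

p = BeautifulSoup('<p><b>Job</b> <i>hot</i></p>', 'lxml').p
print(p.string)      # None: the <p> has more than one child
print(p.get_text())  # Job hot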

9. BeautifulSoup4's select Method (CSS Selectors)

The methods above make it easy to locate elements, but CSS selectors are sometimes even more convenient. To use CSS selector syntax, call the select method. Several commonly used selector patterns are listed below:

Finding by tag name

print(soup.select('a'))

Finding by class name

print(soup.select('.sister'))

Finding by id

print(soup.select('#link1'))

Combined selectors

print(soup.select('p #link1'))
# Finding direct children
print(soup.select("head > title"))

Finding by attribute

print(soup.select('a[href="http://example.com/elsie"]'))

Getting the content

soup = BeautifulSoup(html, 'lxml')
print(type(soup.select('title')))
print(soup.select('title')[0].get_text())

for title in soup.select('title'):
    print(title.get_text())
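
select always returns a list; when only the first match is needed, select_one returns a single element, or None if nothing matches (a small sketch, assuming the Dormouse soup used in this section):

print(soup.select_one('title').get_text())   # The Dormouse's story
print(soup.select_one('h1'))                 # None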

10. Exercises

from bs4 import BeautifulSoup

html = """
<table class="tablelist" cellpadding="0" cellspacing="0">
    <tbody>
        <tr class="h">
            <td class="l" width="374">职位名称</td>
            <td>职位类别</td>
            <td>人数</td>
            <td>地点</td>
            <td>发布时间</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=33824&keywords=python&tid=87&lid=2218">22989-金融云区块链高级研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=29938&keywords=python&tid=87&lid=2218">22989-金融云高级后台开发</a></td>
            <td>技术类</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31236&keywords=python&tid=87&lid=2218">SNG16-腾讯音乐运营开发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>2</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31235&keywords=python&tid=87&lid=2218">SNG16-腾讯音乐业务运维工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-25</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34531&keywords=python&tid=87&lid=2218">TEG03-高级研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=34532&keywords=python&tid=87&lid=2218">TEG03-高级图像算法研发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=31648&keywords=python&tid=87&lid=2218">TEG11-高级AI开发工程师(深圳)</a></td>
            <td>技术类</td>
            <td>4</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32218&keywords=python&tid=87&lid=2218">15851-后台开发工程师</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="even">
            <td class="l square"><a target="_blank" href="position_detail.php?id=32217&keywords=python&tid=87&lid=2218">15851-后台开发工程师</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
        <tr class="odd">
            <td class="l square"><a id="test" class="test" target='_blank' href="position_detail.php?id=34511&keywords=python&tid=87&lid=2218">SNG11-高级业务运维工程师(深圳)</a></td>
            <td>技术类</td>
            <td>1</td>
            <td>深圳</td>
            <td>2017-11-24</td>
        </tr>
    </tbody>
</table>
"""

soup = BeautifulSoup(html, 'lxml')
# 1. Get all tr tags
trs = soup.select('tr')
for tr in trs:
    print(tr)
    print('-'*30)
# 2. Get the second tr tag
tr = soup.select('tr')[1]
print(tr)

# 3. Get all tr tags whose class equals 'even'
trs = soup.select('tr[class="even"]')
for tr in trs:
    print(tr)
    print('-' * 30)

# 4. Get the href attribute of every a tag
alist = soup.select('a')
for a in alist:
    href = a['href']
    print(href)

# 5. Get all the job information (plain text)
trs = soup.select('tr')
for tr in trs:
    info = list(tr.stripped_strings)
    print(info)

11. A Practical Scraper

Scraping the Douban Top 250 movie data

# -*- coding: utf-8 -*-

import requests
from bs4 import BeautifulSoup


headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'
}
# Get the URLs of the detail pages
def get_detail_urls(url):
    resp = requests.get(url, headers=headers)
    # print(resp.text)
    html = resp.text
    soup = BeautifulSoup(html, 'lxml')
    lis = soup.find('ol', class_='grid_view').find_all('li')
    detail_urls = []
    for li in lis:
        detail_url = li.find('a')['href']
        # print(detail_url)
        detail_urls.append(detail_url)
    return detail_urls
# Parse the content of a detail page

def parse_detail_url(url, f):
    # Parse the detail page and write one CSV row to f
    resp = requests.get(url, headers=headers)
    # print(resp.text)
    html = resp.text
    soup = BeautifulSoup(html, 'lxml')
    # Movie title
    name = list(soup.find('div', id='content').find('h1').stripped_strings)
    name = ''.join(name)
    # print(name)
    # Director
    director = list(soup.find('div', id='info').find('span').find('span', class_='attrs').stripped_strings)
    # print(director)
    # Screenwriter
    screenwriter = list(soup.find('div', id='info').find_all('span')[3].find('span', class_='attrs').stripped_strings)
    # print(screenwriter)
    # Cast
    actor = list(soup.find('span', class_='actor').find('span', class_='attrs').stripped_strings)
    # print(actor)
    # Rating
    score = soup.find('strong', class_='ll rating_num').string
    print(score)

    f.write('{},{},{},{},{}\n'.format(name,''.join(director),''.join(screenwriter),''.join(actor),score))

def main():
    base_url = 'https://movie.douban.com/top250?start={}&filter='
    with open('Top250.csv', 'a', encoding='utf-8') as f:
        for x in range(0, 250, 25):   # page offsets 0, 25, ..., 225 cover all 250 movies
            url = base_url.format(x)
            detail_urls = get_detail_urls(url)
            for detail_url in detail_urls:
               parse_detail_url(detail_url,f)


if __name__ == '__main__':
    main()
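
One caveat in parse_detail_url: building the CSV row with '{},{},...'.format will produce broken rows whenever a title or a joined name list itself contains a comma. A minimal sketch of the same write using the standard-library csv module, which quotes such fields automatically (write_row is a hypothetical helper; the csv.writer would be created once in main and passed around instead of f):

import csv

def write_row(writer, name, director, screenwriter, actor, score):
    # csv.writer quotes fields containing commas, so the row stays intact
    writer.writerow([name, ''.join(director), ''.join(screenwriter),
                     ''.join(actor), score])

with open('Top250.csv', 'a', encoding='utf-8', newline='') as f:
    writer = csv.writer(f)
    # write_row(writer, ...) would replace the f.write(...) call above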