Python can scrape WeChat Official Accounts: scraping WeChat Official Account articles with Python
The goal is to scrape a textbook made up of a hundred-odd pages of images from a WeChat Official Account article. The plan: first extract every page's image URL from the article page, then fetch each image in turn. This post tries four methods, listed below.
Article URL: https://mp.weixin.qq.com/s/UxjZIqnyiJgjjTA4Nfxn6w
Figure 1
Methods:
1. requests + BeautifulSoup parsing
2. requests + PyQuery parsing
3. urlopen + regex parsing
4. selenium parsing
Open the page, right-click anywhere on it (Figure 1), and choose "Inspect"; the browser DevTools console pops up (Figure 2).
Figure 2
The red box in Figure 2 marks where a single image's URL lives in the page source. The raw HTML is shown first, followed by a reformatted copy for easier reading. From it we can see that data-src is the URL we need to extract.
<p style="max-width: 100%;min-height: 1em;font-family: -apple-system-font, BlinkMacSystemFont, "Helvetica Neue", "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei UI", "Microsoft YaHei", Arial, sans-serif;letter-spacing: 0.544px;white-space: normal;line-height: 27.2px;widows: 1;background-color: rgb(255, 255, 255);box-sizing: border-box !important;word-wrap: break-word !important;"><img data-copyright="0" data-ratio="0.66796875" data-s="300,640" data-type="jpeg" data-w="612" width="auto" data-src="https://mmbiz.qpic.cn/mmbiz_jpg/icD7BmF4oE3RRJKz4erWPBia5HUL13wgp17PwZibUqmfgLqyAsICzlsRs2S2V0dbI082ararWnSUoM50YtAjlXKEg/640?wx_fmt=jpeg" style="box-sizing: border-box !important; overflow-wrap: break-word !important; visibility: visible !important; width: auto !important; height: auto !important;" _width="auto" class="" src="https://mmbiz.qpic.cn/mmbiz_jpg/icD7BmF4oE3RRJKz4erWPBia5HUL13wgp17PwZibUqmfgLqyAsICzlsRs2S2V0dbI082ararWnSUoM50YtAjlXKEg/640?wx_fmt=jpeg&tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" crossorigin="anonymous" data-fail="0"></p>
<img data-copyright="0" data-ratio="0.66796875" data-s="300,640" data-type="jpeg" data-w="612" width="auto"
data-src="https://mmbiz.qpic.cn/mmbiz_jpg/icD7BmF4oE3RRJKz4erWPBia5HUL13wgp17PwZibUqmfgLqyAsICzlsRs2S2V0dbI082ararWnSUoM50YtAjlXKEg/640?wx_fmt=jpeg"
style="box-sizing: border-box !important; overflow-wrap: break-word !important; visibility: visible !important; width: auto !important; height: auto !important;"
_width="auto" class="" src="https://mmbiz.qpic.cn/mmbiz_jpg/icD7BmF4oE3RRJKz4erWPBia5HUL13wgp17PwZibUqmfgLqyAsICzlsRs2S2V0dbI082ararWnSUoM50YtAjlXKEg/640?wx_fmt=jpeg&tp=webp&wxfrom=5&wx_lazy=1&wx_co=1"
crossorigin="anonymous" data-fail="0">
As the analysis shows, data-src sits on the img tag, which is nested inside a p tag. The element carries several distinctive attributes, so there is plenty of choice in what to key the parsing on.
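Before committing to a parser, a quick sanity check confirms that data-src is a reliable hook; a sketch (107 is simply what this particular article contains, per the results at the end):

import re, requests

# Fetch the article and count occurrences of data-src as a sanity check
html = requests.get('https://mp.weixin.qq.com/s/UxjZIqnyiJgjjTA4Nfxn6w').text
print(len(re.findall(r'data-src="', html)))  # expect one match per page image, 107 here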
1. requests + BeautifulSoup parsing
import requests, time
from bs4 import BeautifulSoup

url = 'https://mp.weixin.qq.com/s/UxjZIqnyiJgjjTA4Nfxn6w'
t = time.time()
response = requests.get(url).content                   # fetch the page (bytes)
soup = BeautifulSoup(response, 'lxml')                 # parse the HTML
a = soup.find_all('img', attrs={'data-type': 'jpeg'})  # all img tags whose data-type is "jpeg"
n = 0
for i in a:                                            # iterate over the matched tags
    n += 1
    with open('./result/' + str(n) + '.jpg', 'wb') as f:
        f.write(requests.get(i['data-src']).content)   # i['data-src'] is the image URL; fetch and save it
    print('Downloading......{}/{}'.format(n, len(a)))
print('Total time: ', time.time() - t)                 # print the elapsed time
with open('./time.txt', 'a+', encoding='utf-8') as f:
    f.write(str(time.time() - t) + '\n')               # log the elapsed time
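As noted earlier, the element offers several attributes to key on. If your bs4 install has CSS selector support (the soupsieve backend), an equivalent one-liner selects every img carrying data-src, regardless of data-type; a sketch, not part of the original script:

a = soup.select('img[data-src]')  # CSS attribute selector: every lazy-loaded image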
2. requests + PyQuery parsing
import requests, time
from pyquery import PyQuery as pq

url = 'https://mp.weixin.qq.com/s/UxjZIqnyiJgjjTA4Nfxn6w'
t = time.time()
html = requests.get(url).content.decode()     # note: decode the bytes, pq() wants a str
doc = pq(html)                                # parse the page
a = doc('img').items()                        # select by img tag name
n = 0
for i in a:
    if i.attr('data-src'):                    # keep only tags that carry data-src
        n += 1
        with open('./result/' + str(n) + '.jpg', 'wb') as f:
            f.write(requests.get(i.attr('data-src')).content)  # i.attr('data-src') is the image URL
        print('Downloading......{}/{}'.format(n, 107))  # 107 hard-coded: items() is a generator, no len()
print('Total time: ', time.time() - t)
with open('./time.txt', 'a+', encoding='utf-8') as f:
    f.write(str(time.time() - t) + '\n')
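A side note on the .decode() comment above: PyQuery's pq() needs a str, whereas BeautifulSoup accepts raw bytes. requests also provides Response.text, which decodes for you from the response encoding, so for this page these two lines should be interchangeable:

html = requests.get(url).content.decode()  # manual: bytes -> str
html = requests.get(url).text              # requests decodes using the response encoding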
3. urlopen + regex parsing
import requests, re, time
from urllib.request import urlopen

url = 'https://mp.weixin.qq.com/s/UxjZIqnyiJgjjTA4Nfxn6w'
t = time.time()
content = urlopen(url).read().decode()        # fetch and decode the page
pp = re.compile('data-src="(.+?)"')           # regex keyed on data-src="..."
result = re.findall(pp, content)              # extract the image URLs directly
n = 0
for r in result:
    n += 1
    with open('./result/' + str(n) + '.jpg', 'wb') as f:
        f.write(requests.get(r).content)      # fetch the image data from its URL
    print('Downloading......{}/{}'.format(n, len(result)))
print('Total time: ', time.time() - t)
with open('./time.txt', 'a+', encoding='utf-8') as f:
    f.write(str(time.time() - t) + '\n')
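One caveat with the regex route: it captures every data-src on the page, not only the JPEG pages of the book. Judging by the URL format in the HTML shown earlier (wx_fmt=jpeg), a simple filter could be added; a sketch, since other WeChat articles may embed other formats:

result = [r for r in result if 'wx_fmt=jpeg' in r]  # keep only the JPEG image URLs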
4. selenium parsing
selenium is overkill for a page like this; it is included here purely as practice.
from selenium import webdriver
import time, requests

t = time.time()
# Create a Chrome browser instance and open the target page with it
driver = webdriver.Chrome(executable_path=r'C:\Program Files (x86)\Google\Chrome\Application\chromedriver.exe')
driver.get('https://mp.weixin.qq.com/s/UxjZIqnyiJgjjTA4Nfxn6w')
# Locate via XPath; you can right-click the element in DevTools and "Copy XPath",
# which saves the analysis work
ps = driver.find_elements_by_xpath('//*[@id="js_content"]/p/img')
n = 0
for p in ps:
    n += 1
    pic_url = p.get_attribute('data-src')     # read the image URL off the element
    with open('./result/' + str(n) + '.jpg', 'wb') as f:
        f.write(requests.get(pic_url).content)  # fetch the image data with requests
    print('Downloading......{}/{}'.format(n, len(ps)))
driver.close()
print('Total time: ', time.time() - t)
with open('./time.txt', 'a+', encoding='utf-8') as f:
    f.write(str(time.time() - t) + '\n')
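Part of selenium's extra cost is rendering a visible browser window. If only the DOM is needed, Chrome's headless mode trims some of that overhead. A sketch for the selenium 3 API used above (same local chromedriver path assumption as in the script):

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')  # render off-screen, no visible window
driver = webdriver.Chrome(
    executable_path=r'C:\Program Files (x86)\Google\Chrome\Application\chromedriver.exe',
    chrome_options=options)         # selenium 3 keyword; selenium 4 renamed it to options=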
This article yielded 107 downloaded images. The four methods took, respectively:
1. requests + BeautifulSoup parsing: 26 s
2. requests + PyQuery parsing: 27 s
3. urlopen + regex parsing: 30 s
4. selenium parsing: 39 s
Network speed and the serving CDN node differ from run to run, so these timings do not strictly rank the methods; treat them as a rough reference. A brief summary:
selenium has to launch a real browser, so its extra cost is unavoidable. requests grew out of the urlopen approach and polishes it, and its code is the easiest to read and pick up; the difference from the PyQuery and urlopen versions is that BeautifulSoup parses the raw content bytes directly, with no .decode() step needed. Every parsing library has a trace of regular expressions at its core, yet all are easier to drive than raw regex; so regex is the foundation of page parsing, but few people use it directly. My personal favourite is still the first method. I also parsed the page with etree, but the result was much the same, so it is not listed.
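For reference, the etree variant mentioned above might look like this; a minimal sketch with lxml, reusing the XPath from the selenium version, not the author's exact code:

import requests
from lxml import etree

url = 'https://mp.weixin.qq.com/s/UxjZIqnyiJgjjTA4Nfxn6w'
html = etree.HTML(requests.get(url).text)                   # parse into an element tree
urls = html.xpath('//*[@id="js_content"]/p/img/@data-src')  # data-src values as a list of str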
These are just a few personal thoughts; if anything is off, corrections are welcome.