A web crawler is code that simulates a browser sending requests, so that the program can fetch data from the network on its own.
- General-purpose crawler: fetches an entire page's source, the way a search engine collects whole pages.
- Focused crawler: extracts only the specific portion of a page's data that is of interest.
- Relationship between the two: a focused crawler is built on top of a general-purpose crawl — the page is first fetched in full, then the data of interest is extracted from it.
- Incremental crawler: only crawls data that has been added or updated since the last crawl.
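The incremental idea can be sketched in a few lines: keep a record of what has already been crawled and only fetch what is new. `fetch_once` and the example URL below are illustrative names, not part of any library; in a real crawler the `seen` set would be persisted to disk between runs.

```python
import requests

def fetch_once(url, seen, getter=requests.get):
    """Incremental fetch: request `url` only if it is not in `seen` yet."""
    if url in seen:
        return None              # already collected in an earlier pass -- skip
    resp = getter(url, timeout=10)
    seen.add(url)                # remember it for the next pass
    return resp

# usage (URL is illustrative):
# crawled = set()
# page = fetch_once('https://www.example.com', crawled)
```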
Common User-Agent strings, by browser:

1) Chrome
   Win7: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.835.163 Safari/535.1
   Win10 64-bit: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36
2) Firefox
   Win7: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20100101 Firefox/6.0
3) Safari
   Win7: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50
4) Opera
   Win7: Opera/9.80 (Windows NT 6.1; U; zh-cn) Presto/2.9.168 Version/11.50
5) IE
   Win7 + IE9: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; Tablet PC 2.0; .NET4.0E)
   Win7 + IE8: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; InfoPath.3)
   WinXP + IE8: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB7.0)
   WinXP + IE7: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)
   WinXP + IE6: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)
6) Maxthon (傲游)
   Maxthon 3.1.7 on Win7 + IE9, high-speed mode: Mozilla/5.0 (Windows; U; Windows NT 6.1; ) AppleWebKit/534.12 (KHTML, like Gecko) Maxthon/3.0 Safari/534.12
   Maxthon 3.1.7 on Win7 + IE9, IE-kernel compatibility mode: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E)
7) Sogou (搜狗)
   Sogou 3.0 on Win7 + IE9, IE-kernel compatibility mode: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E; SE 2.X MetaSr 1.0)
   Sogou 3.0 on Win7 + IE9, high-speed mode: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.3 (KHTML, like Gecko) Chrome/6.0.472.33 Safari/534.3 SE 2.X MetaSr 1.0
8) 360 Browser
   360 Browser 3.0 on Win7 + IE9: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E)
9) QQ Browser
   QQ Browser 6.9 (11079) on Win7 + IE9, speed mode: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.41 Safari/535.1 QQBrowser/6.9.11079.201
   QQ Browser 6.9 (11079) on Win7 + IE9, IE-kernel compatibility mode: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.3; .NET4.0C; .NET4.0E) QQBrowser/6.9.11079.201
10) Ayun Browser (阿云浏览器)
   Ayun Browser 1.3.0.1724 Beta (built 2011-12-05) on Win7 + IE9: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
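A table like the one above is usually kept around for User-Agent rotation: picking a different string for each request makes the traffic look less uniform. A minimal sketch, with the list abbreviated to three entries from the table:

```python
import random

# Abbreviated from the table above; extend with any of the other strings.
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20100101 Firefox/6.0',
    'Opera/9.80 (Windows NT 6.1; U; zh-cn) Presto/2.9.168 Version/11.50',
]

def random_headers():
    """Build request headers with a randomly chosen User-Agent."""
    return {'User-Agent': random.choice(USER_AGENTS)}

# usage with requests:
# resp = requests.get('https://www.sogou.com/web', headers=random_headers())
```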
""" 搜狗首页的源码数据爬取 需要ua伪装 """ import requests inp = input('搜索:') params = { # url携带的请求参数 'query': inp, } # ua伪装,请求头信息 headers = { 'user-agent': 'mozilla/5.0 (windows nt 10.0; wow64) applewebkit/537.36 (khtml, like gecko) chrome/79.0.3945.130 safari/537.36' } # 指定url(网址) url = f'https://www.sogou.com/web' # get发起请求,携带请求参数,返回的是一个响应对象 # url 网址,params 参数动态化,headers ua伪装 response = requests.get(url=url, params=params, headers=headers) response.encoding = 'utf-8' # 手动修改编码格式指定utf8,处理乱码问题 # 获取响应数据 .test page_text = response.text filename = inp + '.html' # 持久化存储 with open(filename, 'w', encoding='utf-8') as f: f.write(page_text) print('ok')
```python
import requests

# UA spoofing: request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36'
}
# query parameters carried with the request
params = {
    'type': '15',
    'interval_id': '100:90',
    'action': '',
    'start': '0',   # index of the first record to return
    'limit': '20',  # number of records per request
}
# target URL
url = 'https://movie.douban.com/j/chart/top_list'
# send the request: url + params (dynamic parameters) + headers (UA spoofing)
res = requests.get(url=url, params=params, headers=headers)
data_list = res.json()  # .json() deserializes the JSON response into a list
for movie in data_list:
    print(movie['title'], movie['types'])
```
```python
import requests

url = 'http://www.kfc.com.cn/kfccda/ashx/GetStoreList.ashx?op=keyword'
city = input('city name: ')
# UA spoofing: request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'
}
# site-wide crawl: walk through the pages
for page in range(1, 111):
    # dynamic parameters for the store list; every value is a string
    data = {
        'cname': '',
        'pid': '',
        'keyword': city,         # city to search in
        'pageIndex': str(page),  # page number
        'pageSize': '10',        # records per page
    }
    # post() returns a Response object; .json() deserializes it into a dict,
    # and ['Table1'] picks the list of stores out of that dict
    pos_list = requests.post(url=url, headers=headers, data=data).json()['Table1']
    for store in pos_list:
        print(store['addressDetail'])
```
In requests, the practical difference between the two calls is where the parameters travel: get() takes them via params, which are encoded into the URL's query string, while post() takes them via data, which is form-encoded into the request body.
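The difference is visible without sending anything: preparing a request with requests' own Request class shows where each kind of parameter ends up. The example.com URL is a placeholder.

```python
import requests

# GET: `params` are URL-encoded into the query string; the body stays empty.
get_req = requests.Request('GET', 'https://example.com/search',
                           params={'query': 'python'}).prepare()
print(get_req.url)    # https://example.com/search?query=python
print(get_req.body)   # None

# POST: `data` is form-encoded into the request body; the URL is untouched.
post_req = requests.Request('POST', 'https://example.com/search',
                            data={'query': 'python'}).prepare()
print(post_req.url)   # https://example.com/search
print(post_req.body)  # query=python
```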
```python
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'
}
url = 'http://125.35.6.84:81/xk/itownet/portalAction.do?method=getXkzsList'
# collect the company IDs into a list
id_list = []
# crawl the site page by page (only 9 pages here; requesting too many gets blocked)
for page in range(1, 10):
    data = {
        'on': 'true',
        'page': str(page),
        'pageSize': '15',
        'productName': '',
        'conditionType': '1',
        'applyname': '',
        'applysn': '',
    }
    # .json() returns a dict; ['list'] picks the record list out of it
    ret_list = requests.post(url=url, data=data, headers=headers).json()['list']
    # capture each company's ID
    for record in ret_list:
        id_list.append(record['ID'])

detail_url = 'http://125.35.6.84:81/xk/itownet/portalAction.do?method=getXkzsById'
for _id in id_list:
    data = {
        'id': _id,
    }
    com_data = requests.post(url=detail_url, headers=headers, data=data).json()
    print(com_data['legalPerson'], com_data['epsName'])
# tested once: after about 50 pages the site started blocking requests
```