Fetching a single table, code as follows:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import csv
from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup

try:
    html = urlopen("http://en.wikipedia.org/wiki/Comparison_of_text_editors")
except HTTPError as e:
    print("not found")
    exit(1)

bsObj = BeautifulSoup(html, "html.parser")
# find() returns the first matching table, or None if there is no match
table = bsObj.find("table", {"class": "wikitable"})
if table is None:
    print("no table")
    exit(1)

rows = table.findAll("tr")
csvFile = open("editors.csv", 'wt', newline='', encoding='utf-8')
writer = csv.writer(csvFile)
try:
    for row in rows:
        csvRow = []
        for cell in row.findAll(['td', 'th']):
            csvRow.append(cell.get_text())
        writer.writerow(csvRow)
finally:
    csvFile.close()
Fetching all the tables, code as follows:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import csv
from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup

try:
    html = urlopen("http://en.wikipedia.org/wiki/Comparison_of_text_editors")
except HTTPError as e:
    print("not found")
    exit(1)

bsObj = BeautifulSoup(html, "html.parser")
tables = bsObj.findAll("table", {"class": "wikitable"})
if not tables:  # findAll returns an empty list (not None) when nothing matches
    print("no table")
    exit(1)

i = 1
for table in tables:
    fileName = "table%s.csv" % i
    rows = table.findAll("tr")
    csvFile = open(fileName, 'wt', newline='', encoding='utf-8')
    writer = csv.writer(csvFile)
    try:
        for row in rows:
            csvRow = []
            for cell in row.findAll(['td', 'th']):
                csvRow.append(cell.get_text())
            writer.writerow(csvRow)
    finally:
        csvFile.close()
    i += 1
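The row-and-cell extraction logic above can be tried offline as well. Below is a minimal sketch of the same table-to-CSV flow run against an inline HTML snippet instead of the live Wikipedia page; the sample table contents are made up for illustration, and the output is written to an in-memory buffer rather than a file:

```python
import csv
import io
from bs4 import BeautifulSoup

# A made-up stand-in for the Wikipedia page, just to exercise the logic.
html = """
<table class="wikitable">
  <tr><th>Editor</th><th>License</th></tr>
  <tr><td>Vim</td><td>Vim license</td></tr>
  <tr><td>Emacs</td><td>GPL</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
table = soup.find("table", {"class": "wikitable"})

buf = io.StringIO()          # write CSV to memory instead of a file
writer = csv.writer(buf)
for row in table.findAll("tr"):
    # strip=True trims the whitespace that get_text() keeps from the markup
    writer.writerow([cell.get_text(strip=True)
                     for cell in row.findAll(["td", "th"])])

print(buf.getvalue())
```

Swapping `io.StringIO()` for `open("editors.csv", "wt", newline="", encoding="utf-8")` turns this back into the file-writing version above.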
That is everything in this walkthrough on fetching a page's table data and saving it to CSV. I hope it serves as a useful reference, and please continue to support 移动技术网.