- 1 Data Extraction Approaches
  - 1.1 Regular Expressions
  - 1.2 The Popular BeautifulSoup Module
  - 1.3 The Powerful Lxml Module
- 2 Performance Comparison
- 3 Adding a Scrape Callback to the Link Crawler
  - 3.1 Callback Function One
  - 3.2 Callback Function Two
  - 3.3 Reusing the Link Crawler Code from the Previous Chapter
In this chapter we make the crawler extract some data from each web page and then do something with it. This practice is known as scraping.
1 Data Extraction Approaches
This chapter covers three approaches to extracting data:

- Regular expressions
- The BeautifulSoup module (popular)
- Lxml (powerful)

1.1 Regular Expressions
Below is an example of extracting a country's area data with a regular expression. Regular expression documentation: https://docs.python.org/3/howto/regex.html
```
# -*- coding: utf-8 -*-
import urllib2
import re

def scrape(html):
    area = re.findall('<tr id="places_area__row">.*?<td\s*class=["\']w2p_fw["\']>(.*?)</td>', html)[0]
    return area

if __name__ == '__main__':
    html = urllib2.urlopen('http://example.webscraping.com/view/China-47').read()
    print scrape(html)
```

Regular expressions are difficult to construct, hard to read, and cope badly with even minor layout changes, so they adapt poorly to future changes in the website.
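To see why minor changes matter, here is a minimal sketch of a slightly more tolerant pattern that allows extra whitespace and extra attributes on the tag; the HTML fragment and the added style attribute are made up for illustration:

```
import re

# Hypothetical snapshot of the table row: extra whitespace and a new
# "style" attribute simulate a minor layout change on the site.
html = '''<tr id="places_area__row">
  <td class="w2p_fl"><label>Area: </label></td>
  <td class="w2p_fw" style="color:black">9,596,960 square kilometres</td>
</tr>'''

# Allow arbitrary whitespace and extra attributes before the closing ">"
pattern = r'<tr id="places_area__row">.*?<td\s+class="w2p_fw"[^>]*>(.*?)</td>'
area = re.search(pattern, html, re.DOTALL).group(1)
print(area)  # 9,596,960 square kilometres
```

Even this tolerant version still breaks if the id or class names change, which is the fundamental weakness of regex-based scraping.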
1.2 The Popular BeautifulSoup Module
Installation: pip install beautifulsoup4. Some web pages do not have well-formed HTML. For example, the HTML below is missing the quotes around an attribute value and leaves tags unclosed.
```
<ul class=country>
    <li>Area
    <li>Population
</ul>
```

Extracting data from HTML like this often does not give the expected result, but Beautiful Soup can be used to clean it up first.
```
>>> from bs4 import BeautifulSoup
>>> broken_html = '<ul class=country><li>Area<li>Population</ul>'
>>> soup = BeautifulSoup(broken_html, 'html.parser')
>>> fixed_html = soup.prettify()
>>> print fixed_html
<ul class="country">
 <li>
  Area
  <li>
   Population
  </li>
 </li>
</ul>
>>> ul = soup.find('ul', attrs={'class':'country'})
>>> ul.find('li')
<li>Area<li>Population</li></li>
>>> ul.find_all('li')
[<li>Area<li>Population</li></li>, <li>Population</li>]
```

Official BeautifulSoup documentation: https://www.crummy.com/software/BeautifulSoup/bs4/doc/ Below is an example of extracting the country area data with BeautifulSoup.
```
# -*- coding: utf-8 -*-
import urllib2
from bs4 import BeautifulSoup

def scrape(html):
    soup = BeautifulSoup(html, 'html.parser')
    tr = soup.find(attrs={'id':'places_area__row'}) # locate the area row
    # 'class' is a reserved word in Python, so attrs={} (or class_) is used instead
    td = tr.find(attrs={'class':'w2p_fw'}) # locate the area tag
    area = td.text # extract the area contents from this tag
    return area

if __name__ == '__main__':
    html = urllib2.urlopen('http://example.webscraping.com/view/United-Kingdom-239').read()
    print scrape(html)
```

Although the BeautifulSoup version is more verbose than the regular expression, it is easier to construct and understand, and there is no need to worry about minor layout changes such as extra whitespace or additional tag attributes.
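As a quick illustration of that robustness, the following sketch runs the same find() chain against a hypothetical fragment whose whitespace and attributes differ from the live page; the fragment and its values are invented for this example:

```
from bs4 import BeautifulSoup

# Hypothetical fragment: indentation, extra attributes and spacing differ
# from the live page, but the id/class hooks are unchanged.
fragment = '''
<table>
  <tr id="places_area__row" data-extra="1">
    <td class="w2p_fl">  Area:  </td>
    <td class="w2p_fw">  244,820 square kilometres  </td>
  </tr>
</table>'''

soup = BeautifulSoup(fragment, 'html.parser')
tr = soup.find(attrs={'id': 'places_area__row'})   # still found despite the new attribute
td = tr.find(attrs={'class': 'w2p_fw'})            # class lookup ignores the extra whitespace
print(td.text.strip())  # 244,820 square kilometres
```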
1.3 The Powerful Lxml Module
Lxml is a Python wrapper around the libxml2 XML parsing library. The module is written in C, so it parses faster than Beautiful Soup, although its installation is also more involved. The latest installation instructions are at http://Lxml.de/installation.html . As with Beautiful Soup, the first step when using lxml is to parse the potentially invalid HTML into a consistent format.
```
>>> import lxml.html
>>> broken_html = '<ul class=country><li>Area<li>Population</ul>'
>>> tree = lxml.html.fromstring(broken_html) # parse the HTML
>>> fixed_html = lxml.html.tostring(tree, pretty_print=True)
>>> print fixed_html
<ul class="country">
<li>Area</li>
<li>Population</li>
</ul>
```

lxml also correctly adds the missing quotes around attribute values and closes the unclosed tags. After parsing the input, the next step is selecting elements, and here lxml offers several different approaches:
- XPath selectors (similar to Beautiful Soup's find() method); see the short sketch below
- CSS selectors (similar to jQuery selectors)
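For comparison, a minimal XPath sketch on the same broken snippet might look like this; the XPath expression is my own equivalent of the ul.country > li selector used below, not taken from the book:

```
import lxml.html

broken_html = '<ul class=country><li>Area<li>Population</ul>'
tree = lxml.html.fromstring(broken_html)

# Equivalent of the CSS selector 'ul.country > li', written as XPath
lis = tree.xpath('//ul[@class="country"]/li')
print([li.text_content() for li in lis])  # ['Area', 'Population']
```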
In this chapter we use CSS selectors, which are more concise and can also be reused later when parsing dynamic content.
```
>>> li = tree.cssselect('ul.country > li')[0]
>>> area = li.text_content()
>>> print area
Area
```

Some commonly used CSS selectors:

| Description | Example |
| --- | --- |
| Select all tags | * |
| Select the <a> tag | a |
| Select all tags with class="link" | .link |
| Select the <a> tag with class="link" | a.link |
| Select the <a> tag with id="home" | a#home |
| Select all <span> tags whose parent is an <a> tag | a > span |
| Select all <span> tags inside an <a> tag | a span |
| Select all <a> tags whose title attribute is "Home" | a[title=Home] |
Below is an example of extracting the country area data with a CSS selector.
```
# -*- coding: utf-8 -*-
import urllib2
import lxml.html

def scrape(html):
    tree = lxml.html.fromstring(html)
    td = tree.cssselect('tr#places_area__row > td.w2p_fw')[0]
    area = td.text_content()
    return area

if __name__ == '__main__':
    html = urllib2.urlopen('http://127.0.0.1:8000/places/default/view/China-47').read()
    print scrape(html)
```

W3C has published the CSS3 selectors specification at http://www.w3c.org/TR/2011/REC-css3-selectors-20110929/ . Lxml already implements most CSS3 properties; the features it does not support are listed at http://pythonhosted.org/cssselect/#supported-selectors . Note that, internally, lxml actually converts CSS selectors into equivalent XPath selectors.
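To make that internal translation visible, a small sketch using the cssselect package (which lxml relies on for its CSS support) can print the XPath that a selector becomes; the exact output text shown in the comment is approximate:

```
from cssselect import GenericTranslator

# Translate the selector used in the example above into XPath.
css = 'tr#places_area__row > td.w2p_fw'
print(GenericTranslator().css_to_xpath(css))
# Prints something like:
# descendant-or-self::tr[@id = 'places_area__row']/td[@class and contains(
#   concat(' ', normalize-space(@class), ' '), ' w2p_fw ')]
```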
2 Performance Comparison
```
# -*- coding: utf-8 -*-
import csv
import time
import urllib2
import re
import timeit
from bs4 import BeautifulSoup
import lxml.html

FIELDS = ('area', 'population', 'iso', 'country', 'capital', 'continent', 'tld', 'currency_code', 'currency_name', 'phone', 'postal_code_format', 'postal_code_regex', 'languages', 'neighbours')

def regex_scraper(html):
    results = {}
    for field in FIELDS:
        results[field] = re.search('<tr id="places_{}__row">.*?<td class="w2p_fw">(.*?)</td>'.format(field), html).groups()[0]
    return results

def beautiful_soup_scraper(html):
    soup = BeautifulSoup(html, 'html.parser')
    results = {}
    for field in FIELDS:
        results[field] = soup.find('table').find('tr', id='places_{}__row'.format(field)).find('td', class_='w2p_fw').text
    return results

def lxml_scraper(html):
    tree = lxml.html.fromstring(html)
    results = {}
    for field in FIELDS:
        results[field] = tree.cssselect('table > tr#places_{}__row > td.w2p_fw'.format(field))[0].text_content()
    return results

def main():
    times = {}
    html = urllib2.urlopen('http://127.0.0.1:8000/places/default/view/China-47').read()
    NUM_ITERATIONS = 1000 # number of times to test each scraper
    for name, scraper in ('Regular expressions', regex_scraper), ('Beautiful Soup', beautiful_soup_scraper), ('Lxml', lxml_scraper):
        times[name] = []
        # record start time of scrape
        start = time.time()
        for i in range(NUM_ITERATIONS):
            if scraper == regex_scraper:
                # the regular expression module will cache results
                # so need to purge this cache for meaningful timings
                re.purge()
            result = scraper(html)
            # check scraped result is as expected
            assert(result['area'] == '9596960 square kilometres')
            times[name].append(time.time() - start)
        # record end time of scrape and output the total
        end = time.time()
        print '{}: {:.2f} seconds'.format(name, end - start)

    writer = csv.writer(open('times.csv', 'w'))
    header = sorted(times.keys())
    writer.writerow(header)
    for row in zip(*[times[scraper] for scraper in header]):
        writer.writerow(row)

if __name__ == '__main__':
    main()
```

This code runs each scraper 1000 times, checks the scraped result on every iteration, prints the total time taken, and writes all of the timing records to a CSV file. Because the regular expression module caches search results, we call re.purge() on each iteration so that the timings are meaningful.
```
wu_being@Ubuntukylin64:~/GitHub/WebScrapingWithPython/2.數據抓取$ python 2performance.py 
Regular expressions: 6.65 seconds
Beautiful Soup: 61.61 seconds
Lxml: 8.57 seconds
```

| Extraction approach | Performance | Usage difficulty | Installation difficulty |
| --- | --- | --- | --- |
| Regular expressions | Fast | Hard | Easy (built-in module) |
| Beautiful Soup | Slow | Easy | Easy (pure Python) |
| Lxml | Fast | Easy | Relatively hard |
3 Adding a Scrape Callback to the Link Crawler
To integrate the extraction code into the link crawler from the previous chapter, we add a callback parameter; a callback is a function passed in as a parameter that carries the data extraction behaviour. In this example the callback is invoked after each web page has been downloaded: it takes two parameters, url and html, and returns a list of URLs still to be crawled.
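As a bare-bones illustration of that contract (the function name here is made up), any callable taking url and html and optionally returning extra URLs will do:

```
# Hypothetical minimal callback: log the page and queue nothing extra.
def log_page_callback(url, html):
    print('downloaded %s (%d bytes)' % (url, len(html)))
    return []  # no additional URLs to crawl

# The crawler invokes it after each successful download, e.g.:
# links.extend(log_page_callback(url, html) or [])
```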
```
def link_crawler(seed_url, link_regex=None, ..., scrape_callback=None):
    ...
    html = download(url, headers, proxy=proxy, num_retries=num_retries)
    links = []
    if scrape_callback:
        links.extend(scrape_callback(url, html) or []) ## the callbacks used here do not return a list of URLs to crawl
    ...
```

3.1 Callback Function One
Now we only need to customize the scrape_callback function that we pass in.
```
# -*- coding: utf-8 -*-
import csv
import re
import urlparse
import lxml.html
from link_crawler import link_crawler

FIELDS = ('area', 'population', 'iso', 'country', 'capital', 'continent', 'tld', 'currency_code', 'currency_name', 'phone', 'postal_code_format', 'postal_code_regex', 'languages', 'neighbours')

def scrape_callback(url, html):
    if re.search('/view/', url):
        tree = lxml.html.fromstring(html)
        row = [tree.cssselect('table > tr#places_{}__row > td.w2p_fw'.format(field))[0].text_content() for field in FIELDS]
        print url, row

if __name__ == '__main__':
    link_crawler('http://example.webscraping.com/', '/(index|view)', scrape_callback=scrape_callback)
```

Output with the first callback:
```
wu_being@ubuntukylin64:~/GitHub/WebScrapingWithPython/2.數據抓取$ python 3scrape_callback1.py 
Downloading: http://example.webscraping.com/
Downloading: http://example.webscraping.com/index/1
...
Downloading: http://example.webscraping.com/index/25
Downloading: http://example.webscraping.com/view/Zimbabwe-252
http://example.webscraping.com/view/Zimbabwe-252 ['390,580 square kilometres', '11,651,858', 'ZW', 'Zimbabwe', 'Harare', 'AF', '.zw', 'ZWL', 'Dollar', '263', '', '', 'en-ZW,sn,nr,nd', 'ZA MZ BW ZM ']
Downloading: http://example.webscraping.com/view/Zambia-251
http://example.webscraping.com/view/Zambia-251 ['752,614 square kilometres', '13,460,305', 'ZM', 'Zambia', 'Lusaka', 'AF', '.zm', 'ZMW', 'Kwacha', '260', '#####', '^(\\d{5})$', 'en-ZM,bem,loz,lun,lue,ny,toi', 'ZW TZ MZ CD NA MW AO ']
Downloading: http://example.webscraping.com/view/Yemen-250
...
```

3.2 Callback Function Two
Next we extend this functionality so that the scraped results are saved to a CSV file. To do this we use a callback class rather than a function, so that the state of the csv writer can be maintained. The csv writer is instantiated in the constructor and then written to repeatedly in the __call__ method. Note that __call__ is a special method invoked when an object is called like a function, which is how the link crawler invokes scrape_callback; in other words, scrape_callback(url, html) is equivalent to scrape_callback.__call__(url, html). See https://docs.python.org/2/reference/datamodel.html#special-method-names for details of Python's special class methods.
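A tiny sketch (with an invented class name) shows why the two call forms are interchangeable:

```
class EchoCallback:
    """Hypothetical callable class mirroring ScrapeCallback's structure."""
    def __call__(self, url, html):
        return 'called with %s' % url

cb = EchoCallback()
print(cb('http://example.webscraping.com/view/China-47', '<html></html>'))
print(cb.__call__('http://example.webscraping.com/view/China-47', '<html></html>'))
# Both lines print the same result: the instance itself acts as the callback.
```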
```
# -*- coding: utf-8 -*-
import csv
import re
import urlparse
import lxml.html
from link_crawler import link_crawler

class ScrapeCallback:
    def __init__(self):
        self.writer = csv.writer(open('countries.csv', 'w'))
        self.fields = ('area', 'population', 'iso', 'country', 'capital', 'continent', 'tld', 'currency_code', 'currency_name', 'phone', 'postal_code_format', 'postal_code_regex', 'languages', 'neighbours')
        self.writer.writerow(self.fields)

    def __call__(self, url, html):
        if re.search('/view/', url):
            tree = lxml.html.fromstring(html)
            row = []
            for field in self.fields:
                row.append(tree.cssselect('table > tr#places_{}__row > td.w2p_fw'.format(field))[0].text_content())
            self.writer.writerow(row)

if __name__ == '__main__':
    link_crawler('http://127.0.0.1:8000/places', '/places/default/(index|view)', scrape_callback=ScrapeCallback())
    #link_crawler('http://example.webscraping.com/', '/(index|view)', scrape_callback=ScrapeCallback())
```

3.3 Reusing the Link Crawler Code from the Previous Chapter
```
# -*- coding: utf-8 -*-
import re
import urlparse
import urllib2
import time
from datetime import datetime
import robotparser
import Queue

def link_crawler(seed_url, link_regex=None, delay=0, max_depth=-1, max_urls=-1, headers=None, user_agent='wswp', proxy=None, num_retries=1, scrape_callback=None):
    """Crawl from the given seed URL following links matched by link_regex
    """
    # the queue of URL's that still need to be crawled
    crawl_queue = [seed_url]
    # the URL's that have been seen and at what depth
    seen = {seed_url: 0}
    # track how many URL's have been downloaded
    num_urls = 0
    rp = get_robots(seed_url)
    throttle = Throttle(delay)
    headers = headers or {}
    if user_agent:
        headers['User-agent'] = user_agent

    while crawl_queue:
        url = crawl_queue.pop()
        depth = seen[url]
        # check url passes robots.txt restrictions
        if rp.can_fetch(user_agent, url):
            throttle.wait(url)
            html = download(url, headers, proxy=proxy, num_retries=num_retries)
            links = []
            if scrape_callback:
                links.extend(scrape_callback(url, html) or []) ## the callbacks used here do not return a list of URLs to crawl

            if depth != max_depth:
                # can still crawl further
                if link_regex:
                    # filter for links matching our regular expression
                    links.extend(link for link in get_links(html) if re.match(link_regex, link))

                for link in links:
                    link = normalize(seed_url, link)
                    # check whether already crawled this link
                    if link not in seen:
                        seen[link] = depth + 1
                        # check link is within same domain
                        if same_domain(seed_url, link):
                            # success! add this new link to queue
                            crawl_queue.append(link)

            # check whether have reached downloaded maximum
            num_urls += 1
            if num_urls == max_urls:
                break
        else:
            print 'Blocked by robots.txt:', url

class Throttle:
    """Throttle downloading by sleeping between requests to same domain
    """
    def __init__(self, delay):
        # amount of delay between downloads for each domain
        self.delay = delay
        # timestamp of when a domain was last accessed
        self.domains = {}

    def wait(self, url):
        """Delay if have accessed this domain recently
        """
        domain = urlparse.urlsplit(url).netloc
        last_accessed = self.domains.get(domain)
        if self.delay > 0 and last_accessed is not None:
            sleep_secs = self.delay - (datetime.now() - last_accessed).seconds
            if sleep_secs > 0:
                time.sleep(sleep_secs)
        self.domains[domain] = datetime.now()

def download(url, headers, proxy, num_retries, data=None):
    print 'Downloading:', url
    request = urllib2.Request(url, data, headers)
    opener = urllib2.build_opener()
    if proxy:
        proxy_params = {urlparse.urlparse(url).scheme: proxy}
        opener.add_handler(urllib2.ProxyHandler(proxy_params))
    try:
        response = opener.open(request)
        html = response.read()
        code = response.code
    except urllib2.URLError as e:
        print 'Download error:', e.reason
        html = ''
        if hasattr(e, 'code'):
            code = e.code
            if num_retries > 0 and 500 <= code < 600:
                # retry 5XX HTTP errors
                html = download(url, headers, proxy, num_retries-1, data)
        else:
            code = None
    return html

def normalize(seed_url, link):
    """Normalize this URL by removing hash and adding domain
    """
    link, _ = urlparse.urldefrag(link) # remove hash to avoid duplicates
    return urlparse.urljoin(seed_url, link)

def same_domain(url1, url2):
    """Return True if both URL's belong to same domain
    """
    return urlparse.urlparse(url1).netloc == urlparse.urlparse(url2).netloc

def get_robots(url):
    """Initialize robots parser for this domain
    """
    rp = robotparser.RobotFileParser()
    rp.set_url(urlparse.urljoin(url, '/robots.txt'))
    rp.read()
    return rp

def get_links(html):
    """Return a list of links from html
    """
    # a regular expression to extract all links from the webpage
    webpage_regex = re.compile('<a[^>]+href=["\'](.*?)["\']', re.IGNORECASE)
    # list of all links from the webpage
    return webpage_regex.findall(html)

if __name__ == '__main__':
    link_crawler('http://example.webscraping.com', '/(index|view)', delay=0, num_retries=1, user_agent='BadCrawler')
    link_crawler('http://example.webscraping.com', '/(index|view)', delay=0, num_retries=1, max_depth=1, user_agent='GoodCrawler')
```

Wu_Being's blog notice: reposting is welcome; please credit the original article and link. Thanks!
Python crawler series: "[Python Crawler 2] Extracting Data from Web Pages" http://blog.csdn.net/u014134180/article/details/55506973
GitHub code for the Python crawler series: https://github.com/1040003585/WebScrapingWithPython