1.Why write this?
Some simple pages do not need a full-blown framework to crawl, yet writing everything by hand is tedious.
So talonspider was written for exactly this need:
•1.Item extraction from a single page - see the detailed introduction here
•2.The spider module - see the detailed introduction here
2.Introduction && Usage
2.1.item
This module can be used on its own. For sites whose requests are simple (for example, only GET requests are needed), this module alone is enough to quickly write the crawler you want, for example (the code below uses Python 3; for Python 2 see the examples directory):
2.1.1.Single page, single target
For example, to get the book information, cover, and other details from http://book.qidian.com/info/1004608738, you can simply write:
from talonspider import Item, TextField, AttrField
from pprint import pprint


class TestSpider(Item):
    title = TextField(css_select='.book-info>h1>em')
    author = TextField(css_select='a.writer')
    cover = AttrField(css_select='a#bookImg>img', attr='src')

    def tal_title(self, title):
        return title

    def tal_cover(self, cover):
        # The src attribute is protocol-relative, so prepend the scheme
        return 'http:' + cover


if __name__ == '__main__':
    item_data = TestSpider.get_item(url='http://book.qidian.com/info/1004608738')
    pprint(item_data)
See qidian_details_by_item.py for the full example.
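Judging from the example, the tal_* methods act as per-field clean-up hooks: the raw value extracted by the matching TextField/AttrField is passed in, and whatever the method returns becomes the final value (tal_cover, for instance, prepends the scheme to the protocol-relative src attribute). As a minimal follow-up sketch, assuming the object returned by get_item exposes the fields as attributes in the same way the multi-target example below reads item.title, the result could be persisted with nothing but the standard library (the file name book.json is my own choice, not part of talonspider):

import json

# Reuse the TestSpider class defined above; attribute access on the returned
# item is an assumption based on the multi-target example (item.title etc.).
item_data = TestSpider.get_item(url='http://book.qidian.com/info/1004608738')
book = {
    'title': item_data.title,
    'author': item_data.author,
    'cover': item_data.cover,
}
with open('book.json', 'w', encoding='utf-8') as f:
    json.dump(book, f, ensure_ascii=False, indent=2)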
2.1.2.Single page, multiple targets
For example, to get the 25 movies shown on the first page of the Douban Top 250, a single page that contains 25 targets, you can simply write:
from talonspider import Item, TextField, AttrField
from pprint import pprint


# Define the spider class, inheriting from Item
class DoubanSpider(Item):
    target_item = TextField(css_select='div.item')
    title = TextField(css_select='span.title')
    cover = AttrField(css_select='div.pic>a>img', attr='src')
    abstract = TextField(css_select='span.inq')

    def tal_title(self, title):
        if isinstance(title, str):
            return title
        else:
            return ''.join([i.text.strip().replace('\xa0', '') for i in title])


if __name__ == '__main__':
    items_data = DoubanSpider.get_items(url='https://movie.douban.com/top250')
    result = []
    for item in items_data:
        result.append({
            'title': item.title,
            'cover': item.cover,
            'abstract': item.abstract,
        })
    pprint(result)

See douban_page_by_item.py for the full example.
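The target_item field appears to be what drives multi-target extraction here: it selects each repeated div.item block, and title, cover, and abstract are then read from inside each block. As a small usage sketch that reuses the DoubanSpider class above (the csv output and the file name douban_top25.csv are my own additions, not part of talonspider), the 25 extracted movies could be written to a CSV file:

import csv

# Extract the 25 movies again and write them out as CSV rows
items_data = DoubanSpider.get_items(url='https://movie.douban.com/top250')
with open('douban_top25.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=['title', 'cover', 'abstract'])
    writer.writeheader()
    for item in items_data:
        writer.writerow({
            'title': item.title,
            'cover': item.cover,
            'abstract': item.abstract,
        })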
2.2.spider
When you need to crawl pages with multiple levels, for example all of the Douban Top 250 movies, the spider part comes into play:
#!/usr/bin/env python
from talonspider import Spider, Item, TextField, AttrField, Request
from talonspider.utils import get_random_user_agent


# Define the item class, inheriting from Item
class DoubanItem(Item):
    target_item = TextField(css_select='div.item')
    title = TextField(css_select='span.title')
    cover = AttrField(css_select='div.pic>a>img', attr='src')
    abstract = TextField(css_select='span.inq')

    def tal_title(self, title):
        if isinstance(title, str):
            return title
        else:
            return ''.join([i.text.strip().replace('\xa0', '') for i in title])


class DoubanSpider(Spider):
    # Start urls, required
    start_urls = ['https://movie.douban.com/top250']
    # Request configuration
    request_config = {
        'RETRIES': 3,
        'DELAY': 0,
        'TIMEOUT': 20
    }

    # Parse function, required
    def parse(self, html):
        # Turn the html into an etree
        etree = self.e_html(html)
        # Extract the pagination links to build the next urls
        pages = [i.get('href') for i in etree.cssselect('.paginator>a')]
        pages.insert(0, '?start=0&filter=')
        headers = {
            "User-Agent": get_random_user_agent()
        }
        for page in pages:
            url = self.start_urls[0] + page
            yield Request(url, request_config=self.request_config, headers=headers, callback=self.parse_item)

    def parse_item(self, html):
        items_data = DoubanItem.get_items(html=html)
        for item in items_data:
            # Save the title of each movie
            with open('douban250.txt', 'a+') as f:
                f.writelines(item.title + '\n')


if __name__ == '__main__':
    DoubanSpider.start()
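Reading the example, the flow is: parse receives the HTML of each start url, collects the pagination links, and yields a Request for every page with parse_item as the callback; parse_item then extracts that page's 25 movies through DoubanItem. As a purely illustrative variation (the json module, the JSON-lines format, and the file name douban250.jsonl are my own additions), the parse_item method in DoubanSpider above could store structured records instead of bare titles:

import json

def parse_item(self, html):
    # Drop-in replacement for DoubanSpider.parse_item above:
    # same extraction, but each movie becomes one JSON line
    items_data = DoubanItem.get_items(html=html)
    with open('douban250.jsonl', 'a', encoding='utf-8') as f:
        for item in items_data:
            record = {
                'title': item.title,
                'cover': item.cover,
                'abstract': item.abstract,
            }
            f.write(json.dumps(record, ensure_ascii=False) + '\n')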