You can clone the full source code from GitHub.
GitHub: https://github.com/williamzxl/Scrapy_CrawlMeiziTu
Scrapy official documentation (Chinese translation): http://scrapy-chs.readthedocs.io/zh_CN/latest/index.html
Working through the documentation once is basically enough to get comfortable with Scrapy.
Step 1:
Before crawling, you must create a new Scrapy project. Enter the directory where you want to store the code and run:

```
scrapy startproject CrawlMeiziTu
```

This command creates a CrawlMeiziTu directory with the following contents:

```
CrawlMeiziTu/
    scrapy.cfg
    CrawlMeiziTu/
        __init__.py
        items.py
        pipelines.py
        settings.py
        middlewares.py
        spiders/
            __init__.py
            ...
```

Then change into the project directory and generate a spider:

```
cd CrawlMeiziTu
scrapy genspider Meizitu http://www.meizitu.com/a/list_1_1.html
```

This command adds a Meizitu.py spider under spiders/, so the directory now looks like this:

```
CrawlMeiziTu/
    scrapy.cfg
    CrawlMeiziTu/
        __init__.py
        items.py
        pipelines.py
        settings.py
        middlewares.py
        spiders/
            Meizitu.py
            __init__.py
            ...
```
We will mainly edit the following files: items.py, pipelines.py, settings.py, the generated Meizitu.py spider, and a new main.py.

main.py was added later. It contains just two lines, so the spider can be run directly from an editor or IDE instead of the command line:

```python
from scrapy import cmdline

cmdline.execute("scrapy crawl Meizitu".split())
```
Step 2: Edit settings.py as shown below. The main settings are USER_AGENT, the download path (IMAGES_STORE), and the download delay (DOWNLOAD_DELAY):

```python
BOT_NAME = 'CrawlMeiziTu'

SPIDER_MODULES = ['CrawlMeiziTu.spiders']
NEWSPIDER_MODULE = 'CrawlMeiziTu.spiders'

ITEM_PIPELINES = {
    'CrawlMeiziTu.pipelines.CrawlmeizituPipeline': 300,
}
IMAGES_STORE = 'D://pic2'
DOWNLOAD_DELAY = 0.3

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
ROBOTSTXT_OBEY = True
```

Step 3: Edit items.py.
Items store the information scraped by the spider. Since we are crawling picture galleries, we capture each picture's name, link, tags, and so on:

```python
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy


class CrawlmeizituItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # title is used as the folder name
    title = scrapy.Field()
    url = scrapy.Field()
    tags = scrapy.Field()
    # image link
    src = scrapy.Field()
    # alt is the image name
    alt = scrapy.Field()
```

Step 4: Edit pipelines.py.
Pipelines process the information collected in items, for example creating a folder and file names from the title, and downloading each image from its link:

```python
# -*- coding: utf-8 -*-
import os

import requests

from CrawlMeiziTu.settings import IMAGES_STORE


class CrawlmeizituPipeline(object):
    def process_item(self, item, spider):
        fold_name = "".join(item['title'])
        header = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
            # The site checks the cookie; without it the downloaded images cannot be viewed
            'Cookie': 'b963ef2d97e050aaf90fd5fab8e78633',
        }
        images = []
        # All images go into one folder
        dir_path = '{}'.format(IMAGES_STORE)
        if not os.path.exists(dir_path) and len(item['src']) != 0:
            os.mkdir(dir_path)
        if len(item['src']) == 0:
            # Log items that yielded no image links
            with open('..//check.txt', 'a+') as fp:
                fp.write("".join(item['title']) + ":" + "".join(item['url']))
                fp.write("\n")
        for jpg_url, name, num in zip(item['src'], item['alt'], range(0, 100)):
            file_name = name + str(num)
            file_path = '{}//{}'.format(dir_path, file_name)
            images.append(file_path)
            if os.path.exists(file_path) or os.path.exists(file_name):
                continue
            with open('{}//{}.jpg'.format(dir_path, file_name), 'wb') as f:
                req = requests.get(jpg_url, headers=header)
                f.write(req.content)
        return item
```
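The core of the download loop above is the file-naming scheme: zip pairs each image URL with its alt text and a running index, and range(0, 100) caps an item at 100 images. It can be seen in isolation (made-up URLs, no network access):

```python
# Standalone sketch of the pipeline's file-naming loop; the URLs
# are made up and nothing is downloaded here.
src = ['http://example.com/1.jpg', 'http://example.com/2.jpg']
alt = ['girl', 'girl']

file_names = []
for jpg_url, name, num in zip(src, alt, range(0, 100)):
    # alt text plus index gives a unique name per image
    file_names.append(name + str(num))

print(file_names)  # ['girl0', 'girl1']
```

Because zip stops at the shortest input, the loop ends after the last image URL even though the range runs to 100.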