Using the CSS selector basics covered above, we can now implement the field extraction. Start with the title again: open the browser's developer tools and locate the source code corresponding to the title.

It turns out to be in the h1 node under div class="entry-header", so open scrapy shell to debug.
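A sketch of that debugging session (the exact results depend on the live page, which may no longer be online):

```shell
# Launch an interactive shell against the article being parsed
scrapy shell http://blog.jobbole.com/113549/

# Inside the shell, try the selector interactively, e.g.:
#   >>> response.css(".entry-header h1").extract()
# which returns the matched <h1> elements as raw HTML strings.
```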

But what if I don't want the `<h1>` tag itself, only the text inside it? That is what Scrapy's `::text` pseudo-element syntax for CSS selectors is for, as shown below.

Note the two colons. CSS selectors really are convenient. Applying the same approach to the remaining fields, the code looks like this:
```python
# -*- coding: utf-8 -*-
import scrapy
import re


class JobboleSpider(scrapy.Spider):
    name = 'jobbole'
    allowed_domains = ['blog.jobbole.com']
    start_urls = ['http://blog.jobbole.com/113549/']

    def parse(self, response):
        # The earlier XPath versions, kept for comparison:
        # title = response.xpath('//div[@class = "entry-header"]/h1/text()').extract()[0]
        # create_date = response.xpath("//p[@class = 'entry-meta-hide-on-mobile']/text()").extract()[0].strip().replace("·", "").strip()
        # praise_numbers = response.xpath("//span[contains(@class,'vote-post-up')]/h10/text()").extract()[0]
        # fav_nums = response.xpath("//span[contains(@class,'bookmark-btn')]/text()").extract()[0]
        # match_re = re.match(r".*?(\d+).*", fav_nums)
        # if match_re:
        #     fav_nums = match_re.group(1)
        # comment_nums = response.xpath("//a[@href='#article-comment']/span").extract()[0]
        # match_re = re.match(r".*?(\d+).*", comment_nums)
        # if match_re:
        #     comment_nums = match_re.group(1)
        # content = response.xpath("//div[@class='entry']").extract()[0]

        # Extract the same fields with CSS selectors
        title = response.css(".entry-header h1::text").extract()[0]
        create_date = response.css(".entry-meta-hide-on-mobile::text").extract()[0].strip().replace("·", "").strip()
        praise_numbers = response.css(".vote-post-up h10::text").extract()[0]
        fav_nums = response.css("span.bookmark-btn::text").extract()[0]
        match_re = re.match(r".*?(\d+).*", fav_nums)
        if match_re:
            fav_nums = match_re.group(1)
        comment_nums = response.css("a[href='#article-comment'] span::text").extract()[0]
        match_re = re.match(r".*?(\d+).*", comment_nums)
        if match_re:
            comment_nums = match_re.group(1)
        content = response.css("div.entry").extract()[0]
        tags = response.css("p.entry-meta-hide-on-mobile a::text").extract()[0]
        pass
```
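One detail worth isolating is the digit-extraction step. The favourites count arrives as text such as " 2 收藏" ("2 bookmarks"), and the number is pulled out with a regex. Note the escape: `\d` matches a digit, whereas the typo `/d` (which appeared in an earlier draft of this code) would only match a literal slash followed by the letter "d". The sample string below is a hypothetical stand-in for the scraped value:

```python
import re

# A stand-in for the raw text scraped from the bookmark button
fav_nums = " 2 收藏"

# Lazily skip any leading characters, then capture the first run of digits
match_re = re.match(r".*?(\d+).*", fav_nums)
if match_re:
    fav_nums = match_re.group(1)

print(fav_nums)  # → '2'
```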