

Scraping Taobao product information with a Python spider (Selenium + PhantomJS)

2020-02-22 23:18:35
Source: reposted, contributed by a reader

This article shares the full code of a Python spider that scrapes Taobao product listings, for your reference. The details are as follows.

1. Goal:

Open the Taobao search page, search for the keyword "Nike" (耐克), and scrape each listing's title, link, price, city, Wangwang (seller) ID, and number of buyers. Then follow each listing into its detail page and scrape the sales count and the style number (款號).
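Taobao's search results are paginated through the `s` query parameter (44 listings per page), with the keyword percent-encoded as UTF-8. A minimal Python 3 sketch of building the page URLs, keeping only the parameters that matter for pagination (the full URL in the source code below carries additional tracking parameters):

```python
# -*- coding: utf-8 -*-
from urllib.parse import quote


def search_url(keyword, page):
    """Build a Taobao search URL for the given results page.

    Taobao paginates with the `s` parameter: page 0 starts at s=0,
    page 1 at s=44, i.e. 44 listings per page.
    """
    return ("https://s.taobao.com/search?q=" + quote(keyword)
            + "&ie=utf8&s=" + str(page * 44))


print(search_url("耐克", 1))
```

`quote` percent-encodes the keyword as UTF-8, which is why 耐克 appears as `%E8%80%90%E5%85%8B` in the scraper's hard-coded URL.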


2. Results

(The original post shows a screenshot of the exported results here.)

3. Source code

# encoding: utf-8
# Python 2 spider: Selenium + PhantomJS fetch, lxml/regex extraction.
import re
import sys
import time

import pandas as pd
from lxml import etree
from selenium import webdriver

reload(sys)
sys.setdefaultencoding('utf-8')

time1 = time.time()

# Drive a headless PhantomJS browser (path to the local phantomjs binary).
driver = webdriver.PhantomJS(executable_path='D:/Python27/Scripts/phantomjs.exe')

# Lists that accumulate one value per scraped listing.
title = []
price = []
city = []
shop_name = []
num = []
link = []
sale = []
number = []

# Search keyword "Nike" (耐克), percent-encoded as UTF-8.
keyword = "%E8%80%90%E5%85%8B"

for i in range(0, 1):
    try:
        print "Scraping results page " + str(i) + "..."
        # Taobao paginates with the `s` parameter, 44 listings per page.
        url = ("https://s.taobao.com/search?q=" + keyword
               + "&imgfile=&js=1&stats_click=search_radio_all%3A1"
               + "&initiative_id=staobaoz_20170710&ie=utf8"
               + "&bcoffset=4&ntoffset=4&p4ppushleft=1%2C48&s=" + str(i * 44))
        driver.get(url)
        time.sleep(5)  # wait for the page to render
        html = driver.page_source
        selector = etree.HTML(html)

        # Listing-page fields.
        for each in selector.xpath('//div[@class="row row-2 title"]/a'):
            print each.xpath('string(.)').strip()
            title.append(each.xpath('string(.)').strip())
        for each in selector.xpath('//div[@class="price g_price g_price-highlight"]/strong/text()'):
            price.append(each)
        for each in selector.xpath('//div[@class="location"]/text()'):
            city.append(each)
        for each in selector.xpath('//div[@class="deal-cnt"]/text()'):
            num.append(each)
        for each in selector.xpath('//div[@class="shop"]/a/span[2]/text()'):
            shop_name.append(each)

        # Follow each listing into its detail page. Search-result hrefs are
        # often scheme-relative ("//item.taobao.com/..."), so normalize them.
        for each in selector.xpath('//div[@class="row row-2 title"]/a/@href'):
            full_link = each if each.startswith("http") else "https://" + each.lstrip("/")
            print full_link
            link.append(full_link)
            driver.get(full_link)
            time.sleep(3)
            html2 = driver.page_source
            selector2 = etree.HTML(html2)

            # Sales count: Tmall and Taobao detail pages use different markup.
            for s in selector2.xpath('//*[@id="J_DetailMeta"]/div[1]/div[1]/div/ul/li[1]/div/span[2]/text()'):
                print s
                sale.append(s)
            for s in selector2.xpath('//strong[@id="J_SellCounter"]/text()'):
                print s
                sale.append(s)

            # Style number (款號): pulled from the attribute list with a regex.
            if "click" in full_link:
                # Ad redirect links (s.click.taobao.com) carry no attribute list.
                number.append("NULL")
            elif "tmall" in full_link:
                for block in re.findall('<ul id="J_AttrUL">(.*?)</ul>', html2, re.S):
                    m = re.findall('>*號: (.*?)</li>', str(block).strip(), re.S)
                    if len(m) > 0:
                        for each1 in m:
                            print each1
                            number.append(each1)
                    else:
                        number.append("NULL")
            elif "taobao" in full_link:
                for block in re.findall('<ul class="attributes-list">(.*?)</ul>', html2, re.S):
                    h = re.findall('>*號: (.*?)</li>', str(block).strip(), re.S)
                    if len(h) > 0:  # fixed: the original tested len(m) here
                        for each2 in h:
                            print each2
                            number.append(each2)
                    else:
                        number.append("NULL")
    except:
        # Swallow per-page errors so one bad page does not kill the whole run.
        pass

print len(title), len(city), len(price), len(num), len(shop_name), len(link), len(sale), len(number)

# Assemble the results into a DataFrame and write it out to Excel.
data1 = pd.DataFrame({"title": title, "price": price, "shop": shop_name,
                      "city": city, "buyers": num, "link": link,
                      "sales": sale, "style_no": number})
print data1

writer = pd.ExcelWriter(r'C:/taobao_spider2.xlsx', engine='xlsxwriter',
                        options={'strings_to_urls': False})
data1.to_excel(writer, index=False)
writer.close()

time2 = time.time()
print u'Done, spider finished!'
print u'Total time: ' + str(time2 - time1) + 's'

# Shut down the browser.
driver.close()
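Since PhantomJS only supplies the rendered page source, the extraction itself is ordinary lxml XPath plus regular expressions. The following self-contained sketch demonstrates both steps against a mocked page fragment; the HTML here is an illustrative stand-in, not Taobao's real markup, which changes frequently and may break the selectors above at any time:

```python
# -*- coding: utf-8 -*-
import re
from lxml import etree

# Mocked listing fragment mirroring the class names the spider targets.
html = '''
<div class="item">
  <div class="price g_price g_price-highlight"><strong>299.00</strong></div>
  <div class="row row-2 title"><a href="//item.taobao.com/item.htm?id=1">Nike shoe</a></div>
  <div class="location">Shanghai</div>
</div>
'''
selector = etree.HTML(html)
# string(.) concatenates all text inside the <a>, as in the spider above.
title = selector.xpath('//div[@class="row row-2 title"]/a')[0].xpath('string(.)').strip()
price = selector.xpath('//div[@class="price g_price g_price-highlight"]/strong/text()')[0]
city = selector.xpath('//div[@class="location"]/text()')[0]

# Detail page: the style number is cut out of the attribute <ul> with a
# two-stage regex (first the list, then the "...号: value" item inside it).
detail = '<ul id="J_AttrUL"><li>货号: ABC-123</li></ul>'
block = re.findall('<ul id="J_AttrUL">(.*?)</ul>', detail, re.S)[0]
style_no = re.findall('号: (.*?)</li>', block, re.S)[0]

print(title, price, city, style_no)
```

Testing the selectors against a static fragment like this is a cheap way to verify the XPath and regex logic without launching a browser.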