

Converting JSON to a DataFrame with pandas in Python

2020-02-15 21:58:15 · Source: reprinted · Contributed by: a reader

This article demonstrates, with a working example, how to convert JSON-format data into a DataFrame using pandas in Python. It is shared for your reference; the details are as follows.

```python
# -*- coding: utf-8 -*-
#!python3
import os
import re

import pandas as pd
import requests
from bs4 import BeautifulSoup
# Deprecated since pandas 1.0; on newer versions use pd.json_normalize instead.
from pandas.io.json import json_normalize


class image_structs():
    def __init__(self):
        self.picture_url = {
            "image_id": '',
            "picture_url": ''
        }


class data_structs():
    def __init__(self):
        # columns=['title', 'item_url', 'id', 'picture_url', 'std_desc',
        #          'description', 'information', 'fitment']
        self.info = {
            "title": '',
            "item_url": '',
            "id": 0,
            "picture_url": [],
            "std_desc": '',
            "description": '',
            "information": '',
            "fitment": ''
        }


# "https://waldoch.com/store/catalogsearch/result/index/?cat=0&limit=200&p=1&q=nerf+bar"
# https://waldoch.com/store/new-oem-ford-f-150-f150-5-running-boards-nerf-bar-crew-cab-2015-w-brackets-fl34-16451-ge5fm6.html
def get_item_list(outfile):
    """Crawl the search-result pages and save (title, item_url) pairs to Excel."""
    result = []
    for i in range(6):
        print(i)
        i = str(i + 1)
        url = ("https://waldoch.com/store/catalogsearch/result/index/"
               "?cat=0&limit=200&p=" + i + "&q=nerf+bar")
        web = requests.get(url)
        soup = BeautifulSoup(web.text, "html.parser")
        alink = soup.find_all("a", class_="product-image")
        for a in alink:
            title = a["title"]
            item_url = a["href"]
            result.append([title, item_url])
    df = pd.DataFrame(result, columns=["title", "item_url"])
    df = df.drop_duplicates()
    df["id"] = df.index
    df.to_excel(outfile, index=False)


def get_item_info(file, outfile):
    """Visit each item page, collect its fields into a nested dict, and flatten it."""
    DEFAULT_FALSE = ""
    df = pd.read_excel(file)
    for i in df.index:
        id = df.loc[i, "id"]
        if os.path.exists(str(int(id)) + ".xlsx"):
            continue
        item_url = df.loc[i, "item_url"]
        url = item_url
        web = requests.get(url)
        soup = BeautifulSoup(web.text, "html.parser")
        # Images
        imglink = soup.find_all("img", class_=re.compile("^gallery-image"))
        data = data_structs()
        data.info["title"] = df.loc[i, "title"]
        data.info["id"] = id
        data.info["item_url"] = item_url
        for a in imglink:
            image = image_structs()
            image.picture_url["image_id"] = a["id"]
            image.picture_url["picture_url"] = a["src"]
            print(image.picture_url)
            data.info["picture_url"].append(image.picture_url)
        print(data.info)
        # std_desc
        std_desc = soup.find("div", itemprop="description")
        try:
            strings_desc = []
            for ii in std_desc.stripped_strings:
                strings_desc.append(ii)
            strings_desc = "\n".join(strings_desc)
        except AttributeError:  # find() returned None
            strings_desc = DEFAULT_FALSE
        # description
        try:
            desc = soup.find('h2', text="Description")
            desc = desc.find_next()
        except AttributeError:
            desc = DEFAULT_FALSE
        description = desc
        # information
        try:
            information = soup.find("h2", text='Information')
            desc = information.find_next()
        except AttributeError:
            desc = DEFAULT_FALSE
        information = desc
        # fitment
        try:
            fitment = soup.find('h2', text='Fitment')
            desc = fitment.find_next()
        except AttributeError:
            desc = DEFAULT_FALSE
        fitment = desc
        data.info["std_desc"] = strings_desc
        data.info["description"] = str(description)
        data.info["information"] = str(information)
        data.info["fitment"] = str(fitment)
        print(data.info.keys())
        # Flatten the nested dict: one row per entry in "picture_url",
        # with the scalar fields repeated as meta columns.
        singledf = json_normalize(
            data.info, "picture_url",
            ['title', 'item_url', 'id', 'std_desc', 'description',
             'information', 'fitment'])
        singledf.to_excel("test.xlsx", index=False)
        exit()  # stop after the first item (debugging)
        # print(df.ix[i])
    df.to_excel(outfile, index=False)


# get_item_list("item_urls.xlsx")
get_item_info("item_urls.xlsx", "item_urls_info.xlsx")
```
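The JSON-to-DataFrame step above can be isolated from the scraping code. The sketch below uses a hypothetical record shaped like `data.info` (the field values are made up for illustration) and the modern `pd.json_normalize` spelling: `record_path` names the list to expand into rows, and `meta` lists the scalar fields to repeat on every row.

```python
import pandas as pd

# A nested record like data.info in the article: scalar fields plus a
# list of picture dicts under "picture_url" (sample values are invented).
info = {
    "title": "nerf bar",
    "id": 1,
    "picture_url": [
        {"image_id": "img-1", "picture_url": "https://example.com/a.jpg"},
        {"image_id": "img-2", "picture_url": "https://example.com/b.jpg"},
    ],
}

# Expand the "picture_url" list into one row per element; "title" and "id"
# are carried along as meta columns.
df = pd.json_normalize(info, record_path="picture_url", meta=["title", "id"])
print(df)
# Two rows; columns: image_id, picture_url, title, id
```

Passing a dict with a list-valued field directly to `pd.DataFrame` would instead produce one cell holding the whole list, which is why the article reaches for `json_normalize`.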