Elasticsearch is a distributed, RESTful search and analytics server. Like Apache Solr, it is an index server built on Lucene, but in my view Elasticsearch has the following advantages over Solr:
Lightweight: easy to install and start; a single command launches the server after download.
Schema free: you can submit a JSON object of any structure to the server, whereas Solr requires the index structure to be specified in schema.xml.
Multiple index files: passing a different index parameter is enough to create another index, whereas Solr needs separate configuration.
Distributed: Solr Cloud is relatively complicated to configure.

Environment Setup
Start Elasticsearch. The service listens on port 9200, and you can view the returned JSON data in a browser. Elasticsearch accepts and returns data in JSON format.
>> bin/elasticsearch -f
Install the official Python API. On OS X I ran into some Python runtime errors after installing it; they turned out to be caused by an outdated setuptools, and removing and reinstalling setuptools fixed the problem.
>> pip install elasticsearch
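To verify that both the server and the client are working, here is a minimal sketch, assuming the default localhost:9200 setup described above:

from elasticsearch import Elasticsearch

es = Elasticsearch()  # connects to http://localhost:9200 by default
print(es.ping())      # True if the server is reachable
print(es.info())      # the same cluster metadata the browser shows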
Indexing Operations
For indexing a single document, you can call either the create or the index method.
from datetime import datetime
from elasticsearch import Elasticsearch

es = Elasticsearch()  # create a localhost server connection, or Elasticsearch("ip")
es.create(index="test-index", doc_type="test-type", id=1,
          body={"any": "data", "timestamp": datetime.now()})
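The practical difference between the two methods: index overwrites a document that already has the same id, while create refuses and raises a conflict error. A minimal sketch of that behavior, reusing the test-index above:

from elasticsearch import Elasticsearch
from elasticsearch.exceptions import ConflictError

es = Elasticsearch()
# index() creates document 1, or overwrites it if it already exists
es.index(index="test-index", doc_type="test-type", id=1, body={"any": "data"})
try:
    # create() with the same id is rejected by the server
    es.create(index="test-index", doc_type="test-type", id=1, body={"any": "data"})
except ConflictError:
    print("document 1 already exists")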
Elasticsearch's batch indexing command is bulk. The Python API documentation currently offers few examples for it, and it took me quite some time reading the source code to work out the submission format for batch indexing.
from datetime import datetime
from elasticsearch import Elasticsearch
from elasticsearch import helpers

es = Elasticsearch("10.18.13.3")
j = 0
count = int(df[0].count())
actions = []
while (j < count):
    action = {
        "_index": "tickets-index",
        "_type": "tickets",
        "_id": j + 1,
        "_source": {
            "crawaldate": df[0][j],
            "flight": df[1][j],
            "price": float(df[2][j]),
            "discount": float(df[3][j]),
            "date": df[4][j],
            "takeoff": df[5][j],
            "land": df[6][j],
            "source": df[7][j],
            "timestamp": datetime.now()}
    }
    actions.append(action)
    j += 1
    if (len(actions) == 500000):
        helpers.bulk(es, actions)
        del actions[0:len(actions)]

if (len(actions) > 0):
    helpers.bulk(es, actions)
    del actions[0:len(actions)]
Here I found that the Python API has rather limited data type support when serializing JSON: the numpy.int32 values in the raw data had to be converted to int before they could be indexed. In addition, the bulk operation currently submits 500 records per request by default; when I raised this to 5000 or even 50000 for testing, some documents failed to index.
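Before digging into the bulk internals, here is a small sketch of the type issue, assuming the values come out of the DataFrame as numpy.int32 (the value 42 is just an illustration):

import json
import numpy as np

raw = np.int32(42)  # hypothetical value, as typically read from a NumPy-backed column
try:
    json.dumps({"price": raw})  # fails: numpy scalar types are not JSON serializable
except TypeError as err:
    print(err)
json.dumps({"price": int(raw)})  # succeeds after casting to a native Python int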
# helpers.py source code
def streaming_bulk(client, actions, chunk_size=500, raise_on_error=False,
                   expand_action_callback=expand_action, **kwargs):
    actions = map(expand_action_callback, actions)
    # if raise on error is set, we need to collect errors per chunk before raising them
    errors = []
    while True:
        chunk = islice(actions, chunk_size)
        bulk_actions = []
        for action, data in chunk:
            bulk_actions.append(action)
            if data is not None:
                bulk_actions.append(data)
        if not bulk_actions:
            return

def bulk(client, actions, stats_only=False, **kwargs):
    success, failed = 0, 0
    # list of errors to be collected if not stats_only
    errors = []
    for ok, item in streaming_bulk(client, actions, **kwargs):
        # go through request-response pairs and detect failures
        if not ok:
            if not stats_only:
                errors.append(item)
            failed += 1
        else:
            success += 1
    return success, failed if stats_only else errors
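As the excerpt shows, bulk forwards its extra keyword arguments straight into streaming_bulk, so the per-request chunk size can be tuned from the caller without patching the library. A minimal sketch, with a hypothetical one-document action list in the format used earlier:

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("10.18.13.3")
actions = [{"_index": "tickets-index", "_type": "tickets", "_id": 1,
            "_source": {"flight": "XX1234", "price": 1200.0}}]  # hypothetical sample document
# chunk_size is passed through **kwargs into streaming_bulk
success, errors = helpers.bulk(es, actions, chunk_size=500)
print(success, errors)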