

Writing a Naive Bayes Classifier for Text Classification in Python

2020-02-16 11:14:33

Naive Bayes Estimation

Naive Bayes is a classification method based on Bayes' theorem together with the assumption that the features are conditionally independent given the class. It first learns the joint probability distribution of the input and output from the training data under this independence assumption; then, for a given input x, it applies Bayes' theorem to find the output y with the largest posterior probability.
Specifically, from the training data set, the maximum likelihood estimate of the prior probability is

$$
P(Y=c_k)=\frac{\sum\limits_{i=1}^{N} I(y_i=c_k)}{N},\qquad k=1,2,\dots,K
$$

and the conditional probability is

$$
P(X=x \mid Y=c_k)=P(X^{(1)}=x^{(1)},\dots,X^{(n)}=x^{(n)} \mid Y=c_k)
$$

where $X^{(l)}$ denotes the l-th feature. By the conditional-independence assumption,

$$
P(X=x \mid Y=c_k)=\prod_{l=1}^{n} P(X^{(l)}=x^{(l)} \mid Y=c_k)
$$

The maximum likelihood estimate of each conditional probability is

$$
P(X^{(l)}=a_{jl} \mid Y=c_k)=\frac{\sum\limits_{i=1}^{N} I(x_i^{(l)}=a_{jl},\, y_i=c_k)}{\sum\limits_{i=1}^{N} I(y_i=c_k)}
$$

where $a_{jl}$ is the j-th possible value of the l-th feature. By Bayes' theorem,

$$
P(Y=c_k \mid X=x)=\frac{P(X=x \mid Y=c_k)\,P(Y=c_k)}{\sum\limits_{k} P(X=x \mid Y=c_k)\,P(Y=c_k)}
$$

which yields the posterior probability P(Y=c_k | X=x).
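To make the estimates concrete, here is a minimal sketch (the toy data is made up for illustration, not from the article): the prior and conditional MLEs are just relative frequencies.

import numpy as np

# Hypothetical data: one binary feature x^(1) and binary labels y
X = np.array([1, 0, 1, 1, 0, 1])   # feature values x_i^(1)
y = np.array([1, 0, 1, 0, 0, 1])   # class labels y_i

N = len(y)
prior = np.sum(y == 1) / N                           # P(Y=1) = (1/N) * sum I(y_i = 1)
cond = np.sum((X == 1) & (y == 1)) / np.sum(y == 1)  # P(X^(1)=1 | Y=1)
print(prior, cond)                                   # 0.5 1.0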

Bayesian Estimation

With maximum likelihood estimation, some estimated probabilities can come out exactly zero. A zero estimate then propagates into the posterior computation and biases the classification. This is solved as follows.
The Bayesian estimate of the conditional probability becomes

$$
P_\lambda(X^{(l)}=a_{jl} \mid Y=c_k)=\frac{\sum\limits_{i=1}^{N} I(x_i^{(l)}=a_{jl},\, y_i=c_k)+\lambda}{\sum\limits_{i=1}^{N} I(y_i=c_k)+S_l\lambda}
$$

where $S_l$ is the number of possible values of the l-th feature.
Likewise, the Bayesian estimate of the prior probability becomes

$$
P_\lambda(Y=c_k) = \frac{\sum\limits_{i=1}^{N} I(y_i=c_k)+\lambda}{N+K\lambda}
$$

where $K$ is the number of possible values of Y, i.e., the number of classes.
Concretely, taking λ = 1 (Laplace smoothing) gives every possible value an initial count of one. Every value has then "appeared" at least once, so no estimate can be zero.
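As a minimal sketch of the effect (made-up data, not from the article), compare the MLE with the λ = 1 Laplace-smoothed estimate when a feature value never co-occurs with a class:

import numpy as np

X = np.array([0, 0, 1, 1])   # feature with S_l = 2 possible values
y = np.array([1, 1, 0, 0])   # X=1 never occurs together with Y=1

count = np.sum((X == 1) & (y == 1))
n_y1 = np.sum(y == 1)
mle = count / n_y1                          # 0.0: zeroes out the whole posterior product
lam, S_l = 1, 2
bayes = (count + lam) / (n_y1 + S_l * lam)  # (0 + 1) / (2 + 2) = 0.25
print(mle, bayes)                           # 0.0 0.25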

Text Classification

A naive Bayes classifier outputs the most likely class for an input, together with its estimated probability; it is commonly used for text classification.
The core idea is to choose the class with the largest posterior probability. Bayes' formula reads:

$$
p(c_i \mid \mathbf{w})=\frac{p(\mathbf{w} \mid c_i)\,p(c_i)}{p(\mathbf{w})}
$$

Tokens: the number of occurrences of each word is used as a feature (the bag-of-words model), so w is the word-count vector of a document.
Assume the features are mutually independent, i.e., the words are independent of one another and uncorrelated. Then

$$
p(\mathbf{w} \mid c_i)=p(w_0 \mid c_i)\,p(w_1 \mid c_i)\cdots p(w_n \mid c_i)
$$
The complete code is as follows:

import numpy as np
import re
import feedparser
import operator

def loadDataSet():
    postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                   ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                   ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                   ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                   ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                   ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec = [0, 1, 0, 1, 0, 1]  # 1 is abusive, 0 not
    return postingList, classVec

def createVocabList(data):
    # Build the vocabulary: the union of all words in the corpus
    returnList = set([])
    for subdata in data:
        returnList = returnList | set(subdata)
    return list(returnList)

def setofWords2Vec(vocabList, data):
    # Convert a document into a word-count vector over the vocabulary
    returnList = [0] * len(vocabList)
    for vocab in data:
        if vocab in vocabList:
            returnList[vocabList.index(vocab)] += 1
    return returnList

def trainNB0(trainMatrix, trainCategory):
    # Train: estimate the class prior and per-word conditional probabilities
    pAbusive = sum(trainCategory) / len(trainCategory)
    p1num = np.ones(len(trainMatrix[0]))  # counts start at 1 (Laplace smoothing)
    p0num = np.ones(len(trainMatrix[0]))
    p1Denom = 2
    p0Denom = 2
    for i in range(len(trainCategory)):
        if trainCategory[i] == 1:
            p1num = p1num + trainMatrix[i]
            p1Denom = p1Denom + sum(trainMatrix[i])
        else:
            p0num = p0num + trainMatrix[i]
            p0Denom = p0Denom + sum(trainMatrix[i])
    p1Vect = np.log(p1num / p1Denom)  # log probabilities avoid underflow
    p0Vect = np.log(p0num / p0Denom)
    return p0Vect, p1Vect, pAbusive

def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
    # Classify: compare the two log posteriors and pick the larger
    p0 = sum(vec2Classify * p0Vec) + np.log(1 - pClass1)
    p1 = sum(vec2Classify * p1Vec) + np.log(pClass1)
    if p1 > p0:
        return 1
    else:
        return 0

def textParse(bigString):
    # Tokenize: split on non-word characters, keep lowercased tokens longer than 2 chars
    splitdata = re.split(r'\W+', bigString)
    splitdata = [token.lower() for token in splitdata if len(token) > 2]
    return splitdata

def spamTest():
    docList = []
    classList = []
    for i in range(1, 26):
        with open('spam/%d.txt' % i) as f:
            docList.append(textParse(f.read()))
        classList.append(1)
        with open('ham/%d.txt' % i) as f:
            docList.append(textParse(f.read()))
        classList.append(0)
    vocabList = createVocabList(docList)
    trainList = list(range(50))
    testList = []
    for i in range(13):  # hold out 13 documents at random for testing
        num = int(np.random.uniform(0, len(trainList)))
        testList.append(trainList[num])
        del trainList[num]
    docMatrix = []
    docClass = []
    for i in trainList:
        docMatrix.append(setofWords2Vec(vocabList, docList[i]))
        docClass.append(classList[i])
    p0v, p1v, pAb = trainNB0(docMatrix, docClass)
    errorCount = 0
    for i in testList:
        subVec = setofWords2Vec(vocabList, docList[i])
        if classList[i] != classifyNB(subVec, p0v, p1v, pAb):
            errorCount += 1
    return errorCount / len(testList)

def calcMostFreq(vocabList, fullText):
    # Return the 30 most frequent words in the corpus
    count = {}
    for vocab in vocabList:
        count[vocab] = fullText.count(vocab)
    sortedFreq = sorted(count.items(), key=operator.itemgetter(1), reverse=True)
    return sortedFreq[:30]

def localWords(feed1, feed0):
    docList = []
    classList = []
    fullText = []
    numList = min(len(feed1['entries']), len(feed0['entries']))
    for i in range(numList):
        doc1 = textParse(feed1['entries'][i]['summary'])
        docList.append(doc1)
        classList.append(1)
        fullText.extend(doc1)
        doc0 = textParse(feed0['entries'][i]['summary'])
        docList.append(doc0)
        classList.append(0)
        fullText.extend(doc0)
    vocabList = createVocabList(docList)
    top30Words = calcMostFreq(vocabList, fullText)
    for word in top30Words:  # remove the most frequent words (stop words)
        if word[0] in vocabList:
            vocabList.remove(word[0])
    trainingSet = list(range(2 * numList))
    testSet = []
    for i in range(20):  # hold out 20 documents at random for testing
        randnum = int(np.random.uniform(0, len(trainingSet)))
        testSet.append(trainingSet[randnum])
        del trainingSet[randnum]
    trainMat = []
    trainClass = []
    for i in trainingSet:
        trainClass.append(classList[i])
        trainMat.append(setofWords2Vec(vocabList, docList[i]))
    p0V, p1V, pSpam = trainNB0(trainMat, trainClass)
    errCount = 0
    for i in testSet:
        testData = setofWords2Vec(vocabList, docList[i])
        if classList[i] != classifyNB(testData, p0V, p1V, pSpam):
            errCount += 1
    return errCount / len(testSet)

if __name__ == "__main__":
    ny = feedparser.parse('http://newyork.craigslist.org/stp/index.rss')
    sf = feedparser.parse('http://sfbay.craigslist.org/stp/index.rss')
    print(localWords(ny, sf))
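The spam/ham text files and the Craigslist RSS feeds are external dependencies (and those feeds are no longer served), so a quick way to exercise the pipeline is the built-in toy dataset. This usage sketch is an addition, not part of the original article:

posts, labels = loadDataSet()
vocab = createVocabList(posts)
trainMat = [setofWords2Vec(vocab, post) for post in posts]
p0V, p1V, pAb = trainNB0(trainMat, labels)
testDoc = ['stupid', 'garbage']  # words that only occur in abusive posts
print(classifyNB(np.array(setofWords2Vec(vocab, testDoc)), p0V, p1V, pAb))  # 1 (abusive)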