http://www.cnblogs.com/batteryhp/p/5040342.html
3. Data Transformation
Having covered data rearrangement, we now turn to filtering, cleaning, and other transformations.
#-*- encoding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import Series, DataFrame

# Removing duplicate rows from a DataFrame
data = DataFrame({'k1': ['one'] * 3 + ['two'] * 4,
                  'k2': [1, 1, 2, 3, 3, 4, 4]})
#print data
print data.duplicated()   # returns a boolean Series: True for duplicated rows, False otherwise
# Get the deduplicated DataFrame -- you should realize this is extremely common
print data.drop_duplicates().reset_index(drop = True)
# You can restrict deduplication to selected columns
print data.drop_duplicates(['k1'])                    # by default the first occurrence is kept
print data.drop_duplicates(['k1'], take_last = True)  # keep the last occurrence instead
>>>
0 False
1 True
2 False
3 False
4 True
5 False
6 True
k1 k2
0 one 1
1 one 2
2 two 3
3 two 4
k1 k2
0 one 1
3 two 3
k1 k2
2 one 2
6 two 4
[Finished in 0.7s]
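A hedged version note: take_last was later deprecated in favor of the keep argument. A minimal sketch of the equivalent calls, assuming pandas 0.17 or newer:

import pandas as pd
from pandas import DataFrame

data = DataFrame({'k1': ['one'] * 3 + ['two'] * 4,
                  'k2': [1, 1, 2, 3, 3, 4, 4]})
# keep = 'last' is the newer spelling of take_last = True
print data.drop_duplicates(['k1'], keep = 'last')
# keep = 'first' is the default, matching plain drop_duplicates(['k1'])
print data.drop_duplicates(['k1'], keep = 'first')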
#-*- encoding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import Series, DataFrame

data = DataFrame({'food': ['bacon', 'pulled pork', 'bacon', 'Pastrami', 'corned beef',
                           'Bacon', 'pastrami', 'honey ham', 'nova lox'],
                  'ounces': [4, 3, 12, 6, 7.5, 8, 3, 5, 6]})
print data
# Suppose you want to add a column indicating the animal each food came from.
# Start by writing a mapping from meat to animal.
meat_to_animal = {
    'bacon': 'pig',
    'pulled pork': 'pig',
    'pastrami': 'cow',
    'corned beef': 'cow',
    'honey ham': 'pig',
    'nova lox': 'salmon'
}
# Series.map accepts a function or a dict-like object holding the mapping. One catch
# here: some values are capitalized and some are not, so we lowercase them first
# (note the data-cleaning step). A very practical method.
data['animal'] = data['food'].map(str.lower).map(meat_to_animal)
print data
# map can also apply a function: here a lambda is applied to each element of data['food']
print data['food'].map(lambda x: meat_to_animal[x.lower()])
>>>
food ounces
0 bacon 4.0
1 pulled pork 3.0
2 bacon 12.0
3 Pastrami 6.0
4 corned beef 7.5
5 Bacon 8.0
6 pastrami 3.0
7 honey ham 5.0
8 nova lox 6.0
food ounces animal
0 bacon 4.0 pig
1 pulled pork 3.0 pig
2 bacon 12.0 pig
3 Pastrami 6.0 cow
4 corned beef 7.5 cow
5 Bacon 8.0 pig
6 pastrami 3.0 cow
7 honey ham 5.0 pig
8 nova lox 6.0 salmon
0 pig
1 pig
2 pig
3 cow
4 cow
5 pig
6 cow
7 pig
8 salmon
Name: food
[Finished in 0.8s]
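One behavior the example relies on implicitly: when map is given a dict, elements missing from the dict become NaN rather than raising (unlike the lambda version, which would raise a KeyError). A minimal sketch with a made-up unmapped value:

import pandas as pd
from pandas import Series

meat_to_animal = {'bacon': 'pig'}
s = Series(['bacon', 'tofu'])   # 'tofu' is a hypothetical value with no mapping
print s.map(meat_to_animal)     # 'tofu' becomes NaN instead of raising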
#-*- encoding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import Series, DataFrame

# The replace method
data = Series([1., -999., 2., -999., -1000., 3.])
print data
# Replace -999 and -1000 with NaN; note that replace works directly on the Series,
# i.e. the operation is vectorized
print data.replace([-999, -1000], np.nan)
# A plain numpy array has no replace or map method:
#data1 = np.arange(10)
#print data1.replace(0, np.nan)
#print data1.map(lambda x: x + 1)
print data.replace([-999, -1000], [np.nan, 0])   # a different replacement per value
print data.replace({-999: np.nan, -1000: 0})     # the dict form does the same
>>>
0 1
1 -999
2 2
3 -999
4 -1000
5 3
0 1
1 NaN
2 2
3 NaN
4 NaN
5 3
0 1
1 NaN
2 2
3 NaN
4 0
5 3
0 1
1 NaN
2 2
3 NaN
4 0
5 3
[Finished in 0.8s]
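Since ndarrays lack replace and map (as the commented-out lines above note), the usual numpy idiom is np.where. A minimal sketch, not from the original text:

import numpy as np

arr = np.array([1., -999., 2., -999., -1000., 3.])
# np.where(condition, x, y) picks x elementwise where the condition holds, else y
arr = np.where(arr == -999, np.nan, arr)
arr = np.where(arr == -1000, 0, arr)
print arr   # 1. nan 2. nan 0. 3.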
Like the values of a Series, axis labels can be transformed by a function or a mapping to produce a new object. The axes can also be modified in place without creating a new data structure.
#-*- encoding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import Series, DataFrame

data = DataFrame(np.arange(12).reshape((3, 4)),
                 index = ['Ohio', 'Colorado', 'New York'],
                 columns = ['one', 'two', 'three', 'four'])
print data
# The axis index has a map method too
print data.index.map(str.upper)
# Modify the index in place
data.index = data.index.map(str.upper)
print data
# rename returns a transformed copy
print data.rename(index = str.title, columns = str.upper)
# rename can take dicts to update only a subset of the labels
print data.rename(index = {'OHIO': 'INDIANA'}, columns = {'three': 'peekaboo'})
# rename saves you from copying the DataFrame and assigning its index and
# columns attributes by hand. To modify in place, pass inplace = True
_ = data.rename(index = {'OHIO': 'INDIANA'}, inplace = True)
print data
print '\n', type(_)
print _
>>>
one two three four
Ohio 0 1 2 3
Colorado 4 5 6 7
New York 8 9 10 11
[OHIO COLORADO NEW YORK]
one two three four
OHIO 0 1 2 3
COLORADO 4 5 6 7
NEW YORK 8 9 10 11
ONE TWO THREE FOUR
Ohio 0 1 2 3
Colorado 4 5 6 7
New York 8 9 10 11
one two peekaboo four
INDIANA 0 1 2 3
COLORADO 4 5 6 7
NEW YORK 8 9 10 11
one two three four
INDIANA 0 1 2 3
COLORADO 4 5 6 7
NEW YORK 8 9 10 11
<class 'pandas.core.frame.DataFrame'>
one two three four
INDIANA 0 1 2 3
COLORADO 4 5 6 7
NEW YORK 8 9 10 11
[Finished in 0.8s]
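A caveat worth hedging: the output above shows rename(..., inplace = True) returning the DataFrame itself, which is old-pandas behavior; in later releases an inplace rename returns None. A minimal sketch assuming a newer version:

import numpy as np
from pandas import DataFrame

data = DataFrame(np.arange(4).reshape((2, 2)),
                 index = ['OHIO', 'COLORADO'], columns = ['one', 'two'])
result = data.rename(index = {'OHIO': 'INDIANA'}, inplace = True)
print result   # None in newer pandas -- rely on data itself, not the return value
print data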
To ease analysis, continuous data is often discretized, i.e. split into bins (groups).
#-*- encoding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import Series, DataFrame

ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
bins = [18, 25, 35, 60, 100]
# Use the cut function
cats = pd.cut(ages, bins)
print cats
# The result is a special Categorical object, which you can treat as an array of
# strings naming the bins. It has a levels array of the distinct category names
# and a labels attribute:
print cats.labels   # the bin number each age falls into
print cats.levels
print pd.value_counts(cats)
# The intervals are open on the left and closed on the right; pass right = False
# to close on the left and open on the right instead
print pd.cut(ages, [18, 26, 36, 61, 100], right = False)
# You can supply your own bin names via the labels argument
group_names = ['Youth', 'YoungAdult', 'MiddleAged', 'Senior']
print pd.cut(ages, bins, labels = group_names)
# You can also pass cut a number of bins instead of explicit edges;
# it then computes equal-width bins from the data
data = np.random.randn(20)
print data
# Split into 4 bins, with 2 digits of precision
print pd.cut(data, 4, precision = 2)
# qcut is similar to cut but bins the data by sample quantiles. Depending on the
# data, cut usually cannot give each bin the same number of points, whereas qcut
# uses quantiles and so yields bins of roughly equal size.
data = np.random.randn(1000)
cats = pd.qcut(data, 4)   # cut into quartiles
print cats
print pd.value_counts(cats)
# Custom quantiles (values between 0 and 1) also work
print pd.qcut(data, [0, 0.1, 0.5, 0.9, 1.])
>>>
Categorical:
array([(18, 25], (18, 25], (18, 25], (25, 35], (18, 25], (18, 25],
(35, 60], (25, 35], (60, 100], (35, 60], (35, 60], (25, 35]], dtype=object)
Levels (4): Index([(18, 25], (25, 35], (35, 60], (60, 100]], dtype=object)
[0 0 0 1 0 0 2 1 3 2 2 1]
array([(18, 25], (25, 35], (35, 60], (60, 100]], dtype=object)
(18, 25] 5
(35, 60] 3
(25, 35] 3
(60, 100] 1
Categorical:
array([[18, 26), [18, 26), [18, 26), [26, 36), [18, 26), [18, 26),
[36, 61), [26, 36), [61, 100), [36, 61), [36, 61), [26, 36)], dtype=object)
Levels (4): Index([[18, 26), [26, 36), [36, 61), [61, 100)], dtype=object)
Categorical:
array([Youth, Youth, Youth, YoungAdult, Youth, Youth, MiddleAged,
YoungAdult, Senior, MiddleAged, MiddleAged, YoungAdult], dtype=object)
Levels (4): Index([Youth, YoungAdult, MiddleAged, Senior], dtype=object)
Categorical:
array([(-0.5, 0.66], (0.66, 1.82], (0.66, 1.82], (-0.5, 0.66],
(-1.67, -0.5], (0.66, 1.82], (-0.5, 0.66], (-1.67, -0.5],
(0.66, 1.82], (-1.67, -0.5], (-1.67, -0.5], (-1.67, -0.5],
(-1.67, -0.5], (-0.5, 0.66], (-0.5, 0.66], (-0.5, 0.66],
(-0.5, 0.66], (1.82, 2.98], (-0.5, 0.66], (-0.5, 0.66]], dtype=object)
Levels (4): Index([(-1.67, -0.5], (-0.5, 0.66], (0.66, 1.82],
(1.82, 2.98]], dtype=object)
[-3.161, -0.624] 250
(0.69, 2.982] 250
(0.0578, 0.69] 250
(-0.624, 0.0578] 250
[Finished in 0.7s]
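A hedged version note: labels and levels as printed above are old-pandas attribute names; later releases expose the same information as codes and categories. A minimal sketch assuming a newer pandas:

import pandas as pd

ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
cats = pd.cut(ages, [18, 25, 35, 60, 100])
print cats.codes        # bin number per element -- the newer name for cats.labels
print cats.categories   # the distinct bins -- the newer name for cats.levels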
Filtering out or transforming outliers is largely a matter of array operations.
#-*- encoding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import Series, DataFrame

np.random.seed(12345)
data = DataFrame(np.random.randn(1000, 4))
print data.describe()
# Suppose you want the values in one column whose absolute value exceeds 3
col = data[3]
#print col
print col[np.abs(col) > 3]
# Select all rows containing a value above 3 or below -3
print data[(np.abs(data) > 3).any(1)]
# Cap those values to the interval [-3, 3]
data[np.abs(data) > 3] = np.sign(data) * 3
print data.describe()
>>>
0 1 2 3
count 1000.000000 1000.000000 1000.000000 1000.000000
mean -0.067684 0.067924 0.025598 -0.002298
std 0.998035 0.992106 1.006835 0.996794
min -3.428254 -3.548824 -3.184377 -3.745356
25% -0.774890 -0.591841 -0.641675 -0.644144
50% -0.116401 0.101143 0.002073 -0.013611
75% 0.616366 0.780282 0.680391 0.654328
max 3.366626 2.653656 3.260383 3.927528
97 3.927528
305 -3.399312
400 -3.745356
Name: 3
0 1 2 3
5 -0.539741 0.476985 3.248944 -1.021228
97 -0.774363 0.552936 0.106061 3.927528
102 -0.655054 -0.565230 3.176873 0.959533
305 -2.315555 0.457246 -0.025907 -3.399312
324 0.050188 1.951312 3.260383 0.963301
400 0.146326 0.508391 -0.196713 -3.745356
499 -0.293333 -0.242459 -3.056990 1.918403
523 -3.428254 -0.296336 -0.439938 -0.867165
586 0.275144 1.179227 -3.184377 1.369891
808 -0.362528 -3.548824 1.553205 -2.186301
900 3.366626 -2.372214 0.851010 1.332846
0 1 2 3
count 1000.000000 1000.000000 1000.000000 1000.000000
mean -0.067623 0.068473 0.025153 -0.002081
std 0.995485 0.990253 1.003977 0.989736
min -3.000000 -3.000000 -3.000000 -3.000000
25% -0.774890 -0.591841 -0.641675 -0.644144
50% -0.116401 0.101143 0.002073 -0.013611
75% 0.616366 0.780282 0.680391 0.654328
max 3.000000 2.653656 3.000000 3.000000
[Finished in 0.8s]
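The np.sign trick caps values at the +/-3 boundary; pandas' clip method expresses the same capping in one call. A minimal sketch, not from the original text:

import numpy as np
from pandas import DataFrame

np.random.seed(12345)
data = DataFrame(np.random.randn(1000, 4))
capped = data.clip(lower = -3, upper = 3)   # cap every value into [-3, 3]
print capped.describe()   # min and max now sit exactly at -3 / 3 where capping occurred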
Next, randomly selecting some rows of a DataFrame: generate random row numbers, then select with them.
#-*- encoding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import Series, DataFrame

df = DataFrame(np.arange(5 * 4).reshape(5, 4))
sampler = np.random.permutation(5)   # a random permutation of 0..4
print df
print sampler
# The permutation can be used with ix-based indexing or the take function
print df.take(sampler)
# The author calls this sampling without replacement -- I take that to mean
# no row is drawn twice. Truncate the permutation to sample a subset:
print df.take(np.random.permutation(len(df))[:3])
# Sampling with replacement: draw random integers as indices into the array
bag = np.array([5, 7, -1, 6, 4])
sampler = np.random.randint(0, len(bag), size = 10)
print sampler
draws = bag.take(sampler)
print draws
>>>
0 1 2 3
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
3 12 13 14 15
4 16 17 18 19
[3 2 1 0 4]
0 1 2 3
3 12 13 14 15
2 8 9 10 11
1 4 5 6 7
0 0 1 2 3
4 16 17 18 19
0 1 2 3
4 16 17 18 19
0 0 1 2 3
3 12 13 14 15
[1 0 1 3 4 3 3 2 0 2]
[ 7 5 7 6 4 6 6 -1 5 -1]
[Finished in 0.7s]
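A hedged aside: newer pandas (0.16.1+) wraps both sampling patterns in a sample method, making the permutation dance optional there:

import numpy as np
from pandas import DataFrame

df = DataFrame(np.arange(5 * 4).reshape(5, 4))
print df.sample(n = 3)                    # 3 rows without replacement
print df.sample(n = 10, replace = True)   # with replacement, like the bag example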
Another transformation commonly used for statistical modeling or machine learning is converting a categorical variable into a "dummy matrix" or "indicator matrix". If a DataFrame column has k distinct values, you can derive a k-column matrix or DataFrame of ones and zeros. This technique shows up in the map example of the next chapter (Chapter 8) -- having read Chapter 8 first, I remember thinking what a nice trick it was; it turns out this is where it comes from.
#-*- encoding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import Series, DataFrame

df = DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'], 'data1': range(6)})
print df
print pd.get_dummies(df['key'])   # the dummy-variable DataFrame
# Sometimes you want to prefix the indicator DataFrame's columns before merging.
# A nice feature -- but note the prefix goes on the indicator DataFrame's column names.
dummies = pd.get_dummies(df['key'], prefix = 'key')
print dummies
df_with_dummy = df[['data1']].join(dummies)   # join on the row index
print df_with_dummy
# A hidden trick here: df['data1'] yields a Series, while df[['data1']] yields a DataFrame
print type(df['data1'])     # just a Series -- the column name is dropped
print type(df[['data1']])   # a DataFrame -- the column name is kept
# What if a row can belong to several categories at once? Use the MovieLens data from ch02.
names = ['movie_id', 'title', 'genres']
movies = pd.read_table('E:\\movies.dat', sep = '::', header = None, names = names)
print movies[:10]
# Adding indicator variables for genre requires some wrangling first.
# Start by extracting the set of all genres
genre_iter = (set(x.split('|')) for x in movies.genres)
genres = sorted(set.union(*genre_iter))
dummies = DataFrame(np.zeros((len(movies), len(genres))), columns = genres)
# Next, iterate over the movies and set the appropriate entries of each row to 1
for i, gen in enumerate(movies.genres):
    dummies.ix[i, gen.split('|')] = 1
# Then join the result with movies
movies_windic = movies.join(dummies.add_prefix('Genre_'))
print movies_windic.ix[0]
# For much larger data, building indicators this way is quite slow; you would need
# a lower-level function that exploits the DataFrame internals.
# A useful recipe for statistical applications: combine get_dummies with a
# discretization function such as cut
values = np.random.rand(10)
print values
bins = [0, 0.2, 0.4, 0.6, 0.8, 1]
print pd.get_dummies(pd.cut(values, bins))
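On the speed complaint in the comments above: newer pandas also provides Series.str.get_dummies, which splits on a delimiter and builds the whole indicator matrix in one vectorized call. A minimal sketch with made-up genre strings in the same '|'-separated format:

from pandas import Series

genres = Series(['Animation|Comedy', 'Drama', 'Comedy|Drama'])   # hypothetical data
print genres.str.get_dummies(sep = '|')   # one 0/1 column per distinct genre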
4. String Manipulation
Python has simple, easy-to-use string and text processing facilities. Most text operations are built-in methods of string objects, and regular expressions are available when you need more. pandas builds on this, letting you apply string and regex operations to whole arrays of data while handling the annoyance of missing values.
#-*- encoding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import Series, DataFrame

# String object methods
# For most strings, the built-in methods are sufficient
val = 'a,b, guido'
print val.split(',')   # returns a list
pieces = [x.strip() for x in val.split(',')]   # strip trims whitespace
print pieces
# Substrings can be concatenated with +; note the unpacking assignment below
first, second, third = pieces
print first + '::' + second + '::' + third
# That doesn't scale well; a faster, more Pythonic style is join
print '::'.join(pieces)
# Another common need is locating substrings: in, index, and find
print 'guido' in val   # boolean: is the substring present?
print val.index(',')   # position of the first occurrence; raises if not found
print val.find(':')    # position of the first occurrence; returns -1 if not found;
                       # start and end positions can also be given
print val.count(',')   # number of occurrences
print val.replace(',', '::')
print val.replace(',', '')   # pass '' to delete the substring
>>>
['a', 'b', ' guido']
['a', 'b', 'guido']
a::b::guido
a::b::guido
True
1
-1
2
a::b:: guido
ab guido
[Finished in 0.6s]
#All of the above can also be accomplished with regular expressions
Python's built-in string methods include the following; see the short demo below:
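The table itself is not reproduced here; as a stand-in, a quick hedged tour of the methods it commonly lists (all standard Python, nothing pandas-specific):

val = 'a,b, guido'
print val.count(',')             # count: number of non-overlapping occurrences
print val.startswith('a')        # startswith / endswith: prefix / suffix tests
print val.endswith('guido')
print val.rfind(',')             # rfind: position of the last occurrence, -1 if absent
print '::'.join(['a', 'b'])      # join: concatenate with the string as delimiter
print val.upper(), val.lower()   # case conversion
print '  padded  '.strip()       # strip / rstrip / lstrip: trim whitespace
print 'one'.ljust(10, '-')       # ljust / rjust: pad to a given width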
Regular expressions (regex) provide a flexible way to search for and match string patterns in text, via the re module. The re module's functions fall into three categories: pattern matching, substitution, and splitting. For a summary of regular expressions, see: http://www.cnblogs.com/huxi/archive/2010/07/04/1771073.html (thanks to its author).
#-*- encoding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import Series, DataFrame
import re

text = "foo bar\t baz \tqux"
print re.split('\s+', text)
# The line above first compiles the regex \s+ (one or more whitespace characters)
# and then calls split; you can do the compilation explicitly:
regex = re.compile('\s+')
print regex.split(text)
# Find all substrings matching the regex
print regex.findall(text)
# Note: to keep \ from acting as an escape character, prefix the literal
# with r to get a raw string
text1 = r'foo \t'
print text1
# If you will apply the same regex to many strings, compile it first to save time.
# findall returns all matches in the string, search returns only the first,
# and match is stricter still: it only matches at the beginning of the string
text = """Dave dave@google.com
Steve steve@gmail.com
Rob rob@gmail.com
Ryan ryan@yahoo.com
"""
pattern = r'[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}'
# The flags argument below makes the regex case-insensitive
regex = re.compile(pattern, flags = re.IGNORECASE)
print regex.findall(text)   # returns a list
# search returns only the first address, as a special match object that only
# tells us the start and end positions of the pattern in the original string
m = regex.search(text)
print m
print text[m.start():m.end()]
# regex.match returns None, because it only matches a pattern at the start of
# the string; you cannot give it arbitrary start and end positions
print regex.match(text)
# The sub method replaces matched patterns with the given string and
# returns the new string
print regex.sub('REDACTED', text)
# To segment the matches, wrap the parts of the pattern in parentheses
pattern = r'([A-Z0-9._%+-]+)@([A-Z0-9.-]+)\.([A-Z]{2,4})'
regex = re.compile(pattern, flags = re.IGNORECASE)
m = regex.match('wesm@bright.net')   # returns a match object
print m
print m.groups()   # the groups as a tuple
print regex.findall(text)   # a list where each item is a tuple
# sub can refer to the matched groups with \1, \2, \3
print regex.sub(r'Username: \1, Domain: \2, Suffix: \3', text)
# A small example with named groups, which yields a handy dict:
regex = re.compile(r"""
    (?P<username>[A-Z0-9._%+-]+)
    @
    (?P<domain>[A-Z0-9.-]+)
    \.
    (?P<suffix>[A-Z]{2,4})""", flags = re.IGNORECASE | re.VERBOSE)
m = regex.match('wesm@bright.net')
print m.groupdict()
>>>
['foo', 'bar', 'baz', 'qux']
['foo', 'bar', 'baz', 'qux']
[' ', '\t ', ' \t']
foo \t
['dave@google.com', 'steve@gmail.com', 'rob@gmail.com', 'ryan@yahoo.com']
<_sre.SRE_Match object at 0x03343758>
dave@google.com
None
Dave REDACTED
Steve REDACTED
Rob REDACTED
Ryan REDACTED
<_sre.SRE_Match object at 0x03342A70>
('wesm', 'bright', 'net')
[('dave', 'google', 'com'), ('steve', 'gmail', 'com'), ('rob', 'gmail', 'com'), ('ryan', 'yahoo', 'com')]
Dave Username: dave, Domain: google, Suffix: com
Steve Username: steve, Domain: gmail, Suffix: com
Rob Username: rob, Domain: gmail, Suffix: com
Ryan Username: ryan, Domain: yahoo, Suffix: com
{'username': 'wesm', 'domain': 'bright', 'suffix': 'net'}
[Finished in 0.8s]
The regular expression methods include the following; see the sketch below:
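Again the summary table is not reproduced; a short hedged demo of the re methods not already exercised above (finditer and subn), reusing the email-pattern idea:

import re

regex = re.compile(r'[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,4}')
text = 'Dave dave@google.com Steve steve@gmail.com'
for m in regex.finditer(text):       # finditer: like findall, but yields match objects
    print m.group(), m.start()
print regex.subn('REDACTED', text)   # subn: returns (new string, number of replacements)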
#-*- encoding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import Series, DataFrame
import re

data = {'Dave': 'dave@google.com', 'Steve': 'steve@gmail.com',
        'Rob': 'rob@gmail.com', 'Web': np.nan}
data = Series(data)
print data
print data.isnull()
# Strings and regexes could be applied to each value through map (via a lambda or
# other function), but that fails on NA values. Series therefore has string methods
# that skip NA, accessed through its str attribute.
print '\n', data.str.contains('gmail'), '\n'   # does each value contain 'gmail'?
pattern = r'([A-Z0-9._%+-]+)@([A-Z0-9.-]+)\.([A-Z]{2,4})'
print data.str.findall(pattern, flags = re.IGNORECASE), '\n'
#print data.str.replace('@', '')   # this replace is applied element-wise, vectorized
# There are two ways to do vectorized element retrieval: str.get, or indexing
# on the str attribute
matches = data.str.match(pattern, flags = re.IGNORECASE)
# (in this old pandas, str.match returns the groups; newer versions return
# booleans, with str.extract used for group extraction)
print matches, '\n'
print matches.str.get(1), '\n'
print matches.str[0], '\n'
# Strings can also be sliced this way
print data.str[:5], '\n'
# Whereas this selects just the first two elements
print data[:2]
>>>
Dave dave@google.com
Rob rob@gmail.com
Steve steve@gmail.com
Web NaN
Dave False
Rob False
Steve False
Web True
Dave False
Rob True
Steve True
Web NaN
Dave [('dave', 'google', 'com')]
Rob [('rob', 'gmail', 'com')]
Steve [('steve', 'gmail', 'com')]
Web NaN
Dave ('dave', 'google', 'com')
Rob ('rob', 'gmail', 'com')
Steve ('steve', 'gmail', 'com')
Web NaN
Dave google
Rob gmail
Steve gmail
Web NaN
Dave dave
Rob rob
Steve steve
Web NaN
Dave dave@
Rob rob@g
Steve steve
Web NaN
Dave dave@google.com
Rob rob@gmail.com
[Finished in 0.7s]
The vectorized string methods below are fairly important.
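The table of methods is not reproduced here either; as a stand-in, a brief hedged tour of commonly used entries in the Series.str namespace:

import numpy as np
from pandas import Series

s = Series(['dave@google.com', 'rob@gmail.com', np.nan])
print s.str.len()                # per-element length; NaN stays NaN
print s.str.upper()              # vectorized case conversion
print s.str.split('@')           # split each element into a list
print s.str.startswith('dave')   # boolean test per element
print s.str.cat(sep = ', ')      # concatenate the non-null values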