Fun and Useful Python Libraries

Image processing (Pillow)

pip install pillow
from PIL import Image
import numpy as np

a = np.array(Image.open('test.jpg'))
b = [255, 255, 255] - a  # invert every channel via NumPy broadcasting
im = Image.fromarray(b.astype('uint8'))
im.save('new.jpg')
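The inversion works because `[255, 255, 255] - a` subtracts every channel value from 255 via NumPy broadcasting. The same arithmetic on a toy nested-list "image" (values made up for illustration), with no dependencies at all:

```python
# A toy 2x2 RGB "image" as nested lists; each pixel is [R, G, B].
pixels = [
    [[0, 0, 0], [255, 255, 255]],
    [[10, 20, 30], [200, 100, 50]],
]

# Invert: every channel value v becomes 255 - v, which is exactly
# what [255, 255, 255] - a does over the whole array at once.
inverted = [[[255 - v for v in px] for px in row] for row in pixels]

print(inverted[1][1])  # [55, 155, 205]
```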

Parse Redis dump.rdb

pip install rdbtools
> rdb --command json /var/redis/6379/dump.rdb

[{
"user003":{"fname":"Ron","sname":"Bumquist"},
"lizards":["Bush anole","Jackson's chameleon","Komodo dragon","Ground agama","Bearded dragon"],
"user001":{"fname":"Raoul","sname":"Duke"},
"user002":{"fname":"Gonzo","sname":"Dr"},
"user_list":["user003","user002","user001"]},{
"baloon":{"helium":"birthdays","medical":"angioplasty","weather":"meteorology"},
"armadillo":["chacoan naked-tailed","giant","Andean hairy","nine-banded","pink fairy"],
"aroma":{"pungent":"vinegar","putrid":"rotten eggs","floral":"roses"}}]

youtube-dl: download videos

pip install youtube-dl     # install youtube-dl
pip install -U youtube-dl  # install or upgrade youtube-dl
youtube-dl "http://www.youtube.com/watch?v=-wNyEUrxzFU"

asciinema: record terminal sessions

pip3 install asciinema

asciinema rec
asciinema play https://asciinema.org/a/132560
<script type="text/javascript" src="https://asciinema.org/a/132560.js"
id="asciicast-132560" async></script>

Inspect all attributes and methods of an object (pdir2)

pip install pdir2
>>> import pdir,requests
>>> pdir(requests)
module attribute:
    __cached__, __file__, __loader__, __name__, __package__, __path__, __spec__
other:
    __author__, __build__, __builtins__, __copyright__, __license__, __title__,
    __version__, _internal_utils, adapters, api, auth, certs, codes, compat,
    cookies, exceptions, hooks, logging, models, packages, pyopenssl, sessions,
    status_codes, structures, utils, warnings
special attribute:
    __doc__
class:
    NullHandler: This handler does nothing. It's intended to be used to avoid the
    PreparedRequest: The fully mutable :class:`PreparedRequest <PreparedRequest>` object,
    Request: A user-created :class:`Request <Request>` object.
    Response: The :class:`Response <Response>` object, which contains a
    Session: A Requests session.
exception:
    ConnectTimeout: The request timed out while trying to connect to the remote server.
    ConnectionError: A Connection error occurred.
    DependencyWarning: Warned when an attempt is made to import a module with missing optional
    FileModeWarning: A file was opened in text mode, but Requests determined its binary length.
    HTTPError: An HTTP error occurred.
    ReadTimeout: The server did not send any data in the allotted amount of time.
    RequestException: There was an ambiguous exception that occurred while handling your
    Timeout: The request timed out.
    TooManyRedirects: Too many redirects.
    URLRequired: A valid URL is required to make a request.
function:
    delete: Sends a DELETE request.
    get: Sends a GET request.
    head: Sends a HEAD request.
    options: Sends a OPTIONS request.
    patch: Sends a PATCH request.
    post: Sends a POST request.
    put: Sends a PUT request.
    request: Constructs and sends a :class:`Request <Request>`.
    session: Returns a :class:`Session` for context-management.
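pdir2 groups an object's attributes by kind before printing them. A rough, stdlib-only approximation of that grouping, using the `json` module as a demo target:

```python
import inspect
import json  # any module works as a demonstration target

# Bucket attributes the way pdir's output above is organized.
groups = {"function": [], "class": [], "special attribute": [], "other": []}
for name in dir(json):
    value = getattr(json, name)
    if name.startswith("__") and name.endswith("__"):
        groups["special attribute"].append(name)
    elif inspect.isclass(value):
        groups["class"].append(name)
    elif callable(value):
        groups["function"].append(name)
    else:
        groups["other"].append(name)

print(groups["function"])  # includes 'dumps', 'loads', ...
print(groups["class"])     # includes 'JSONDecoder', 'JSONEncoder', ...
```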

NetEase Cloud Music with Python

# alternative: https://github.com/ziwenxie/netease-dl (pip install netease-dl)
pip install ncmbot
import ncmbot
# log in
bot = ncmbot.login(phone='xxx', password='yyy')
bot.content  # or bot.json()
# fetch a user's playlists
ncmbot.user_play_list(uid='36554272')

Download video subtitles (getsub)

pip install getsub


Financial data with tushare

pip install tushare
import tushare as ts
# fetch trading data for every stock on the most recent trading day
ts.get_today_all()

Columns: code, name, change %, current price, open, high, low, previous close, volume, turnover rate
      code    name     changepercent  trade   open   high    low  settlement \  
0     002738  中礦資源         10.023  19.32  19.32  19.32  19.32       17.56   
1     300410  正業科技         10.022  25.03  25.03  25.03  25.03       22.75   
2     002736  國信證券         10.013  16.37  16.37  16.37  16.37       14.88   
3     300412  迦南科技         10.010  31.54  31.54  31.54  31.54       28.67   
4     300411  金盾股份         10.007  29.68  29.68  29.68  29.68       26.98   
5     603636  南威軟件         10.006  38.15  38.15  38.15  38.15       34.68   
6     002664  信質電機         10.004  30.68  29.00  30.68  28.30       27.89   
7     300367  東方網力         10.004  86.76  78.00  86.76  77.87       78.87   
8     601299  中國北車         10.000  11.44  11.44  11.44  11.29       10.40   
9     601880   大連港         10.000   5.72   5.34   5.72   5.22        5.20   
10    000856  冀東裝備         10.000   8.91   8.18   8.91   8.18        8.10
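The changepercent column is simply the percentage change of the current price (trade) over the previous close (settlement), which the first row of the table confirms:

```python
# changepercent = (trade - settlement) / settlement * 100
trade, settlement = 19.32, 17.56  # row 0 (002738) from the table above
change_percent = (trade - settlement) / settlement * 100
print(round(change_percent, 3))   # 10.023, matching the table
```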

Open-source vulnerability environments (vulhub)

# install pip
curl -s https://bootstrap.pypa.io/get-pip.py | python3

# install docker
apt-get update && apt-get install docker.io

# start the docker service
service docker start

# install docker-compose
pip install docker-compose
# clone the project
git clone git@github.com:phith0n/vulhub.git
cd vulhub

# enter the directory of a particular vulnerability/environment
cd nginx_php5_mysql

# build the environment
docker-compose build

# bring the whole environment up
docker-compose up -d
# tear the environment down when testing is done
docker-compose down

Beijing real-time bus data

pip install -r requirements.txt   # install dependencies
python manage.py build_cache      # fetch offline data and build the local cache
# the project ships a terminal query tool as an example; run it with: python manage.py cli
>>> from beijing_bus import BeijingBus
>>> lines = BeijingBus.get_all_lines()
>>> lines
[<Line: 運通122(農業展覽館-華紡易城公交場站)>, <Line: 運通101(廣順南大街北口-藍龍家園)>, ...]
>>> lines = BeijingBus.search_lines('847')
>>> lines
[<Line: 847(馬甸橋西-雷莊村)>, <Line: 847(雷莊村-馬甸橋西)>]
>>> line = lines[0]
>>> print(line.id, line.name)
541 847(馬甸橋西-雷莊村)
>>> line.stations
[<Station 馬甸橋西>, <Station 馬甸橋東>, <Station 安華橋西>, ...]
>>> station = line.stations[0]
>>> print(station.name, station.lat, station.lon)
馬甸橋西 39.967721 116.372921
>>> line.get_realtime_data(1)  # argument is the 1-based station index
[
    {
        'id': bus id,
        'lat': bus latitude,
        'lon': bus longitude,
        'next_station_name': name of the next station,
        'next_station_num': index of the next station,
        'next_station_distance': distance to the next station,
        'next_station_arriving_time': estimated arrival time at the next station,
        'station_distance': distance to this station,
        'station_arriving_time': estimated arrival time at this station,
    },
    ...
]

Article extractor (python-goose)

git clone https://github.com/grangier/python-goose.git
cd python-goose
pip install -r requirements.txt
python setup.py install

>>> from goose import Goose
>>> from goose.text import StopWordsChinese
>>> url  = 'http://www.bbc.co.uk/zhongwen/simp/chinese_news/2012/12/121210_hongkong_politics.shtml'
>>> g = Goose({'stopwords_class': StopWordsChinese})
>>> article = g.extract(url=url)
>>> print(article.cleaned_text[:150])
香港行政長官梁振英在各方壓力下就其大宅的違章建築(僭建)問題到立法會接受質詢,並向香港民衆道歉。

梁振英在星期二(12月10日)的答問大會開始之際在其演說中道歉,但強調他在違章建築問題上沒有隱瞞的意圖和動機。

一些親北京陣營議員歡迎梁振英道歉,且認爲應能得到香港民衆接受,但這些議員也質問梁振英有

Artistic QR code generator (MyQR)

pip install MyQR
myqr https://github.com
myqr https://github.com -v 10 -l Q


Fake a browser User-Agent (fake-useragent)

pip install fake-useragent
from fake_useragent import UserAgent
ua = UserAgent()

ua.ie
# Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US);
ua.msie
# Mozilla/5.0 (compatible; MSIE 10.0; Macintosh; Intel Mac OS X 10_7_3; Trident/6.0)'
ua['Internet Explorer']
# Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; GTB7.4; InfoPath.2; SV1; .NET CLR 3.3.69573; WOW64; en-US)
ua.opera
# Opera/9.80 (X11; Linux i686; U; ru) Presto/2.8.131 Version/11.11
ua.chrome
# Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.2 (KHTML, like Gecko) Chrome/22.0.1216.0 Safari/537.2'

A prettier curl (httpstat)

pip install httpstat
httpstat httpbin.org/get


Shell commands from Python (sh)

pip install sh
from sh import ifconfig
print(ifconfig("eth0"))

Sentiment analysis for English and Chinese text

pip install -U textblob  # sentiment analysis for English text
pip install snownlp      # sentiment analysis for Chinese text
from textblob import TextBlob
from snownlp import SnowNLP

text = "I am happy today. I feel sad today."
blob = TextBlob(text)
blob.sentiment
# Sentiment(polarity=0.15000000000000002, subjectivity=1.0)


s = SnowNLP(u'這個東西真心很贊')

s.words         # [u'這個', u'東西', u'真心',
                #  u'很', u'贊']

s.tags          # [(u'這個', u'r'), (u'東西', u'n'),
                #  (u'真心', u'd'), (u'很', u'd'),
                #  (u'贊', u'Vg')]

s.sentiments    # 0.9769663402895832, probability that the sentiment is positive

s.pinyin        # [u'zhe', u'ge', u'dong', u'xi',
                #  u'zhen', u'xin', u'hen', u'zan']

s = SnowNLP(u'「繁體字」「繁體中文」的叫法在臺灣亦很常見。')

s.han           # u'「繁體字」「繁體中文」的叫法
                # 在臺灣亦很常見。'

Scrape and validate free proxies (getproxy)

pip install -U getproxy
➜ ~ getproxy --help
Usage: getproxy [OPTIONS]

Options:
--in-proxy TEXT Input proxy file
--out-proxy TEXT Output proxy file
--help Show this message and exit.
  • --in-proxy: optional; file of proxies to validate
  • --out-proxy: optional; file to write validated proxies to; if omitted, results go to the terminal

The --in-proxy and --out-proxy files share the same format.

Zhihu API

pip install git+git://github.com/lzjun567/zhihu-api --upgrade
from zhihu import Zhihu
zhihu = Zhihu()
zhihu.user(user_slug="xiaoxiaodouzi")

{'avatar_url_template': 'https://pic1.zhimg.com/v2-ca13758626bd7367febde704c66249ec_{size}.jpg',
     'badge': [],
     'name': '我是小號',
     'headline': '程序員',
     'gender': -1,
     'user_type': 'people',
     'is_advertiser': False,
     'avatar_url': 'https://pic1.zhimg.com/v2-ca13758626bd7367febde704c66249ec_is.jpg',
     'url': 'http://www.zhihu.com/api/v4/people/1da75b85900e00adb072e91c56fd9149', 'type': 'people',
     'url_token': 'xiaoxiaodouzi',
     'id': '1da75b85900e00adb072e91c56fd9149',
     'is_org': False}

Password-leak lookup (leakPasswd)

pip install leakPasswd
import leakPasswd
leakPasswd.findBreach('taobao')


Parse nginx access logs and pretty-print statistics (ngxtop)

pip install ngxtop
$ ngxtop
running for 411 seconds, 64332 records processed: 156.60 req/sec

Summary:
|   count |   avg_bytes_sent |   2xx |   3xx |   4xx |   5xx |
|---------+------------------+-------+-------+-------+-------|
|   64332 |         2775.251 | 61262 |  2994 |    71 |     5 |

Detailed:
| request_path                             |   count |   avg_bytes_sent |   2xx |   3xx |   4xx |   5xx |
|------------------------------------------+---------+------------------+-------+-------+-------+-------|
| /abc/xyz/xxxx                            |   20946 |          434.693 | 20935 |     0 |    11 |     0 |
| /xxxxx.json                              |    5633 |         1483.723 |  5633 |     0 |     0 |     0 |
| /xxxxx/xxx/xxxxxxxxxxxxx                 |    3629 |         6835.499 |  3626 |     0 |     3 |     0 |
| /xxxxx/xxx/xxxxxxxx                      |    3627 |        15971.885 |  3623 |     0 |     4 |     0 |
| /xxxxx/xxx/xxxxxxx                       |    3624 |         7830.236 |  3621 |     0 |     3 |     0 |
| /static/js/minified/utils.min.js         |    3031 |         1781.155 |  2104 |   927 |     0 |     0 |
| /static/js/minified/xxxxxxx.min.v1.js    |    2889 |         2210.235 |  2068 |   821 |     0 |     0 |
| /static/tracking/js/xxxxxxxx.js          |    2594 |         1325.681 |  1927 |   667 |     0 |     0 |
| /xxxxx/xxx.html                          |    2521 |          573.597 |  2520 |     0 |     1 |     0 |
| /xxxxx/xxxx.json                         |    1840 |          800.542 |  1839 |     0 |     1 |     0 |

Train ticket availability lookup (iquery)

pip install iquery
 Usage:
        iquery (-c|彩票)
        iquery (-m|電影)
        iquery -p <city>
        iquery -l song [singer]
        iquery -p <city> <hospital>
        iquery <city> <show> [<days>]
        iquery [-dgktz] <from> <to> <date>

    Arguments:
        from             departure station
        to               arrival station
        date             date to query

        city             city to query
        show             type of performance
        days             look for shows within the next few days; defaults to 15

        city             city name; placed after -p, lists all Putian-system hospitals in that city
        hospital         hospital name; placed after city, checks whether it is a Putian-system hospital


    Options:
        -h, --help       show this help message.
        -dgktz           D-series, high-speed, fast, express, or direct trains
        -m               movies now showing
        -p               Putian-system hospital lookup
        -l               lyrics lookup
        -c               lottery lookup

    Show:
        concert, recital, musical, song-and-dance show, children's theatre, stage play,
        opera, competition, dance, traditional opera, crosstalk, acrobatics, circus, magic

Transfer files between computers (magic-wormhole)

pip install magic-wormhole
Sender:

% wormhole send README.md
Sending 7924 byte file named 'README.md'
On the other computer, please run: wormhole receive
Wormhole code is: 7-crossover-clockwork
 
Sending (<-10.0.1.43:58988)..
100%|=========================| 7.92K/7.92K [00:00<00:00, 6.02MB/s]
File sent.. waiting for confirmation
Confirmation received. Transfer complete.
Receiver:

% wormhole receive
Enter receive wormhole code: 7-crossover-clockwork
Receiving file (7924 bytes) into: README.md
ok? (y/n): y
Receiving (->tcp:10.0.1.43:58986)..
100%|===========================| 7.92K/7.92K [00:00<00:00, 120KB/s]
Received file written to README.md

Data visualization with pyecharts

pip install pyecharts
from pyecharts import Bar

bar = Bar("My first chart", "here is the subtitle")
bar.add("Clothing", ["shirts", "sweaters", "chiffon tops", "pants", "heels", "socks"], [5, 20, 36, 10, 75, 90])
bar.show_config()
bar.render()  # writes render.html to the working directory; open it in a browser
# word clouds with pyecharts
from pyecharts import WordCloud

wordlist = ['Sam', 'Club','Macys', 'Amy Schumer', 'Jurassic World', 'Charter','Communications','Chick Fil A', 'Planet Fitness', 'Pitch Perfect', 'Express', 'Home', 'Johnny Depp','Lena Dunham', 'Lewis', 'Hamilton','KXAN', 'Mary Ellen Mark', 'Farrah','Abraham','Rita Ora', 'Serena Williams', 'NCAA', ' baseball',' tournament','Point Break']

# frequency of each word in wordlist
freq = [10000, 6181, 6000, 4386, 4055, 2467, 2244, 1898, 1484, 1112,1112,1112, 965, 847, 847, 555, 555,555,550, 462, 366, 360, 282, 273, 265]

# set the chart dimensions
wordcloud = WordCloud(width=1000, height=620)

wordcloud.add(name="", 
              attr=wordlist, 
              shape='circle', 
              value=freq, 
              word_size_range=[20, 100])

# render the word cloud inline in a notebook
wordcloud

# render the word cloud and save it to an HTML file
#wordcloud.render(path='wordcloud.html')



freq = [7000, 6181, 6000, 4386, 4055, 2467, 2244, 1898, 1484, 1112,1112,1112, 965, 847, 847, 555, 555,555,550, 462, 366, 360,299, 10000, 7000]

wordcloud = WordCloud(width=1000, height=620)

wordcloud.add(name="", 
              attr=wordlist, 
              shape='star', 
              value=freq, 
              word_size_range=[20, 100])

wordcloud


WeChat official-account crawler (wechatsogou)

pip install wechatsogou
from wechatsogou import *
wechats = WechatSogouApi()
name = '南京航空航天大學'
wechat_infos = wechats.search_gzh_info(name)

Elegant retries (tenacity)

pip install tenacity
import json
import requests

from tenacity import retry, stop_after_attempt

# give up after three attempts
@retry(stop=stop_after_attempt(3))
def extract(url):
    info_json = requests.get(url).content.decode()
    info_dict = json.loads(info_json)
    data = info_dict['data']
    save(data)  # save() is application code, defined elsewhere
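What `@retry(stop=stop_after_attempt(3))` does can be sketched as a plain decorator; `retry_up_to` and `flaky_fetch` below are made-up names, not tenacity's API:

```python
import functools

def retry_up_to(attempts):
    """Minimal sketch of tenacity's @retry(stop=stop_after_attempt(n))."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise  # out of attempts: propagate the last error
        return wrapper
    return decorator

calls = []

@retry_up_to(3)
def flaky_fetch():
    """Hypothetical function that fails twice, then succeeds."""
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = flaky_fetch()
print(result, len(calls))  # ok 3
```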

Look up where an IP address is registered (qqwry)

pip install qqwry-py3 
from qqwry import QQwry 
q = QQwry() 
q.load_file('qqwry.dat', loadindex=False) 
result = q.lookup('8.8.8.8')

Export a project's Python dependency list (pipreqs)

# `pip freeze` dumps every library in the current environment;
# pipreqs writes only what the project actually imports
$ pip install pipreqs
$ pipreqs /home/project/location
Successfully saved requirements file in /home/project/location/requirements.txt

Google Chrome DevTools Protocol (pychrome)

pip install -U pychrome
google-chrome --remote-debugging-port=9222
import pychrome

# create a browser instance
browser = pychrome.Browser(url="http://127.0.0.1:9222")

# list all tabs (default has a blank tab)
tabs = browser.list_tab()

if not tabs:
    tab = browser.new_tab()
else:
    tab = tabs[0]


# register callback if you want
def request_will_be_sent(**kwargs):
    print("loading: %s" % kwargs.get('request').get('url'))

tab.Network.requestWillBeSent = request_will_be_sent

# call method
tab.Network.enable()
# call method with timeout
tab.Page.navigate(url="https://github.com/fate0/pychrome", _timeout=5)

# wait for loading
tab.wait(5)

# stop the tab (stop handling events and receiving messages from chrome)
tab.stop()

# close the tab
browser.close_tab(tab)

Fuzzy string matching (fuzzywuzzy)

pip install fuzzywuzzy
>>> from fuzzywuzzy import fuzz
>>> from fuzzywuzzy import process
>>> fuzz.ratio("this is a test", "this is a test!")
97
>>> choices = ["Atlanta Falcons", "New York Jets", "New York Giants", "Dallas Cowboys"]
>>> process.extract("new york jets", choices, limit=2)
[('New York Jets', 100), ('New York Giants', 78)]
>>> process.extractOne("cowboys", choices)
("Dallas Cowboys", 90)

Learning algorithms (pygorithm)

pip install pygorithm
from pygorithm.sorting import bubble_sort
myList = [12, 4, 3, 5, 13, 1, 17, 19, 15]
sortedList = bubble_sort.sort(myList)
print(sortedList)
[1, 3, 4, 5, 12, 13, 15, 17, 19]
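bubble_sort.sort above is the textbook algorithm; a minimal pure-Python sketch for comparison:

```python
def bubble_sort(items):
    """Textbook bubble sort: repeatedly swap adjacent out-of-order pairs."""
    result = list(items)  # work on a copy; don't mutate the caller's list
    n = len(result)
    for i in range(n - 1):
        # after pass i, the last i+1 slots hold their final values
        for j in range(n - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

print(bubble_sort([12, 4, 3, 5, 13, 1, 17, 19, 15]))
# [1, 3, 4, 5, 12, 13, 15, 17, 19]
```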

Command-line torrent search (torrench)

pip install torrench --upgrade
$ torrench "ubuntu desktop 16.04"    ## Search Linuxtracker for Ubuntu Desktop 16.04 distro ISO
$ torrench "fedora workstation"    ## Search for Fedora Workstation distro ISO
$ torrench -d "opensuse" ## Search distrowatch for opensuse ISO
$ torrench -d "solus" ## Search distrowatch for solus ISO


Guess gender from a Chinese name (ngender)

pip install ngender
$ ng 趙本山 宋丹丹
name: 趙本山 => gender: male, probability: 0.9836229687547046
name: 宋丹丹 => gender: female, probability: 0.9759486128949907
>>> import ngender
>>> ngender.guess('趙本山')
('male', 0.9836229687547046)

A simple WeChat client in Python (pywxclient)

pip install pywxclient
pip install git+https://github.com/justdoit0823/pywxclient
>>> from pywxclient.core import Session, SyncClient

>>> s1 = Session()

>>> c1 = SyncClient(s1)

>>> c1.get_authorize_url()  # Open the url in web browser

>>> c1.authorize()  # Continue authorize when returning False

>>> c1.login()

>>> c1.sync_check()

>>> msgs = c1.sync_message()  # Here are your wechat messages

>>> c1.flush_sync_key()

Compare image similarity (image_match)

$ pip install numpy
$ pip install scipy
$ pip install image_match
from image_match.goldberg import ImageSignature
gis = ImageSignature()
a = gis.generate_signature('https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg/687px-Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg')
b = gis.generate_signature('https://pixabay.com/static/uploads/photo/2012/11/28/08/56/mona-lisa-67506_960_720.jpg')
gis.normalized_distance(a, b)
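image_match compares compact signatures rather than raw pixels. The general idea can be illustrated with a toy difference hash over made-up grayscale grids (this is a simplification, not the library's actual Goldberg algorithm):

```python
def dhash_bits(grid):
    """Toy difference hash: 1 if a pixel is brighter than its right neighbor."""
    return [int(row[i] > row[i + 1])
            for row in grid
            for i in range(len(row) - 1)]

def hamming_distance(a, b):
    """Fraction of differing bits between two hashes (0.0 = identical)."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

original = [[10, 200, 30], [90, 80, 70]]   # tiny grayscale "images"
brighter = [[20, 210, 40], [100, 90, 80]]  # same image, uniformly brighter
other    = [[5, 5, 250], [1, 250, 3]]      # unrelated image

# Brightness shifts preserve the relative ordering, so the hash is stable:
print(hamming_distance(dhash_bits(original), dhash_bits(brighter)))  # 0.0
print(hamming_distance(dhash_bits(original), dhash_bits(other)))     # 0.5
```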

ID-card OCR

Generate all kinds of fake data (mimesis)

pip install mimesis
>>> import mimesis
>>> person = mimesis.Personal(locale='en')

>>> person.full_name(gender='female')
'Antonetta Garrison'

>>> person.occupation()
'Backend Developer'

Video editing (moviepy)

pip install moviepy
from moviepy.editor import *

video = VideoFileClip("myHolidays.mp4").subclip(50,60)

# Make the text. Many more options are available.
txt_clip = ( TextClip("My Holidays 2013",fontsize=70,color='white')
             .set_position('center')
             .set_duration(10) )

result = CompositeVideoClip([video, txt_clip]) # Overlay text on video
result.write_videofile("myHolidays_edited.webm",fps=25) # Many options...

Export and analyze WeChat chat history (wechat-explorer)

pip install wechat-explorer
wexp list_chatrooms ../Documents user_id
wexp list_friends ../Documents user_id
wexp get_chatroom_stats ../Documents user_id chatroom_id@chatroom 2015-08-01 2015-09-01
wexp export_chatroom_records ../Documents user_id chatroom_id@chatroom 2015-10-01 2015-10-07 ../
wexp get_friend_label_stats ../Documents user_id
wkhtmltopdf --dpi 300 records.html records.pdf

Image crawlers (icrawler)

pip install icrawler
from icrawler.builtin import BaiduImageCrawler, BingImageCrawler, GoogleImageCrawler

google_crawler = GoogleImageCrawler(parser_threads=2, downloader_threads=4,
                                    storage={'root_dir': 'your_image_dir'})
google_crawler.crawl(keyword='sunny', offset=0, max_num=1000,
                     date_min=None, date_max=None,
                     min_size=(200,200), max_size=None)
bing_crawler = BingImageCrawler(downloader_threads=4,
                                storage={'root_dir': 'your_image_dir'})
bing_crawler.crawl(keyword='sunny', offset=0, max_num=1000,
                   min_size=None, max_size=None)
baidu_crawler = BaiduImageCrawler(storage={'root_dir': 'your_image_dir'})
baidu_crawler.crawl(keyword='sunny', offset=0, max_num=1000,
                    min_size=None, max_size=None)
from icrawler.builtin import GreedyImageCrawler

storage= {'root_dir': '/'}
greedy_crawler = GreedyImageCrawler(storage=storage)
greedy_crawler.crawl(domains='http://qq.com', 
                     max_num=6)

Clipboard access (pyperclip)

pip install pyperclip
from pyperclip import copy, paste

copy('2333')  # write '2333' to the clipboard

paste()  # returns the clipboard contents

Face recognition (face_recognition)

pip install face_recognition
import face_recognition
image = face_recognition.load_image_file("your_file.jpg")
face_locations = face_recognition.face_locations(image)


Browser automation with splinter

pip install splinter
# chromedriver downloads: chromedriver.storage.googleapis.com/index.html
>>> from splinter.browser import Browser

>>> xx = Browser(driver_name="chrome")

>>> xx.visit("http://www.zhihu.com/")

Generate a JSON API for a SQLite database (datasette)

pip3 install datasette
datasette serve path/to/database.db
http://localhost:8001/History/downloads.json

A NumPy-compatible array library on CUDA (cupy)

>>> import cupy as cp
>>> x = cp.arange(6).reshape(2, 3).astype('f')
>>> x
array([[ 0.,  1.,  2.],
       [ 3.,  4.,  5.]], dtype=float32)
>>> x.sum(axis=1)
array([  3.,  12.], dtype=float32)
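Because CuPy mirrors NumPy's API, the session above runs unchanged on the CPU with NumPy; swapping the import is the only change:

```python
import numpy as np  # swap for `import cupy as cp` to run the same code on a GPU

x = np.arange(6).reshape(2, 3).astype('f')
print(x)              # [[0. 1. 2.]
                      #  [3. 4. 5.]]
print(x.sum(axis=1))  # [ 3. 12.]
```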

See what a command will do before it runs (maybe)

pip install maybe
maybe rm -r ostechnix/
maybe has prevented rm -r ostechnix/ from performing 5 file system operations:
 delete /home/sk/inboxer-0.4.0-x86_64.AppImage
 delete /home/sk/Docker.pdf
 delete /home/sk/Idhayathai Oru Nodi.mp3
 delete /home/sk/dThmLbB334_1398236878432.jpg
 delete /home/sk/ostechnix
Do you want to rerun rm -r ostechnix/ and permit these operations? [y/N] y

Get IP addresses and Wi-Fi passwords (ng)

pip install ng
$ ng ip

local_ip:192.168.1.114
public_ip:49.4.160.250
$ ng wp
$ ng wp flyfish_5g

flyfish_5g:hitflyfish123456

Image-hosting uploads (qu)

$ pip install qu
$ qu up /somewhere/1.png
$ qu up /somewhere/1.png 2.png

get .gitignore

#https://www.gitignore.io/
$ pip install gy
$ gy generate python java lisp

PyGithub

pip install PyGithub
from github import Github

g = Github("xxxxx", "passwd")
my_forks = []
for repo in g.get_user().get_repos():
    if repo.fork:
        my_forks.append(repo)

Crawler helper: headers and cookies from a curl command (lazyspider)

pip install lazyspider
from lazyspider.lazyheaders import LazyHeaders

# note: wrap the copied curl string in triple quotes or double quotes
curl = "curl 'https://pypi.python.org/pypi' -H 'cookie: .....balabala...."

lh = LazyHeaders(curl)

headers = lh.getHeaders()
cookies = lh.getCookies()

print('*' * 40)
print('Headers: {}'.format(headers))
print('*' * 40)
print('Cookies: {}'.format(cookies))
print('*' * 40)

import requests
r = requests.get('https://pypi.python.org/pypi',
                 headers=headers, cookies=cookies)
print(r.status_code)
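Most of LazyHeaders' work is pulling the -H options out of the copied curl string; a minimal regex sketch of the same idea (headers_from_curl is a made-up helper, not the library's API):

```python
import re

def headers_from_curl(curl_command):
    """Extract `-H 'Name: value'` pairs from a copied curl command."""
    headers = {}
    for match in re.finditer(r"-H '([^:]+): ([^']*)'", curl_command):
        headers[match.group(1)] = match.group(2)
    return headers

curl = ("curl 'https://pypi.python.org/pypi' "
        "-H 'user-agent: Mozilla/5.0' -H 'accept: text/html'")

print(headers_from_curl(curl))
# {'user-agent': 'Mozilla/5.0', 'accept': 'text/html'}
```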

Chinese synonyms toolkit (synonyms)

pip install -U synonyms
>>> import synonyms
>>> synonyms.display("飛機")
'飛機'近義詞:
  1. 架飛機:0.837399
  2. 客機:0.764609
  3. 直升機:0.762116
  4. 民航機:0.750519
  5. 航機:0.750116
  6. 起飛:0.735736
  7. 戰機:0.734975
  8. 飛行中:0.732649
  9. 航空器:0.723945
  10. 運輸機:0.720578

WeChat official account: 蘇生不惑

