IPFS can be accessed from Python; this is the API reference for the `ipfsapi` client library.
All of the methods below accept additional keyword arguments (**kwargs) that are forwarded to the daemon's HTTP API.
ipfsapi.connect(host='localhost', port=5001, base='api/v0', chunk_size=4096, **defaults)

Create a new Client instance and connect to the daemon to validate that its version is supported.

All parameters are identical to those passed to the constructor of the Client class.

Returns: ipfsapi.Client
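The host, port and base parameters together determine where API requests are sent. The following is a minimal illustrative sketch of how such an endpoint URL could be derived; the helper name and exact URL layout are assumptions, not the library's actual internals.

```python
# Sketch: derive an HTTP API endpoint URL from the connect() parameters.
# build_api_url is a hypothetical helper for illustration only.

def build_api_url(host, port, base, command):
    """Join host, port, API base path and command name into a request URL."""
    return "http://{0}:{1}/{2}/{3}".format(host, port, base.strip("/"), command)

print(build_api_url("localhost", 5001, "api/v0", "version"))
# http://localhost:5001/api/v0/version
```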
ipfsapi.assert_version(version, minimum='0.4.3', maximum='0.5.0')

Make sure that the given daemon version is supported by this client version.

Raises: VersionMismatch

Parameters: version (str) -- The daemon version to check against the supported range
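The check above can be pictured as a comparison of dotted version strings against a supported range. This is a hedged sketch of the idea, assuming an inclusive minimum and exclusive maximum; it is not the library's actual implementation.

```python
# Sketch of a version-range check in the spirit of assert_version().
# The inclusive/exclusive bounds here are assumptions for illustration.

def version_tuple(version):
    """Parse '0.4.10' into (0, 4, 10) for ordered comparison."""
    return tuple(int(part) for part in version.split("."))

def is_supported(version, minimum="0.4.3", maximum="0.5.0"):
    """Return True if minimum <= version < maximum."""
    return version_tuple(minimum) <= version_tuple(version) < version_tuple(maximum)

print(is_supported("0.4.10"))  # True
print(is_supported("0.5.0"))   # False
```

Note that tuple comparison handles multi-digit components correctly, which a plain string comparison ("0.4.10" < "0.4.3") would not.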
ipfsapi.Client(host='localhost', port=5001, base='api/v0', chunk_size=4096, **defaults)

Bases: object

A TCP client for interacting with an IPFS daemon.

A Client instance will not actually establish a connection to the daemon until at least one of its methods is called.
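The chunk_size parameter controls how much data is moved per request chunk when transferring files. A minimal sketch of splitting a payload into chunk_size pieces; this illustrates the idea only and is not the library's transfer code.

```python
# Sketch: split a byte payload into chunk_size pieces, as a client might
# do when streaming file data to or from the daemon. iter_chunks is a
# hypothetical helper for illustration.

def iter_chunks(data, chunk_size=4096):
    """Yield successive chunk_size-sized slices of data."""
    for offset in range(0, len(data), chunk_size):
        yield data[offset:offset + chunk_size]

chunks = list(iter_chunks(b"Mary had a little lamb", chunk_size=8))
print(chunks)  # [b'Mary had', b' a littl', b'e lamb']
```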
add(files, recursive=False, pattern='**', *args, **kwargs)

Add a file, or directory of files, to IPFS.

>>> with io.open('nurseryrhyme.txt', 'w', encoding='utf-8') as f:
...     numbytes = f.write('Mary had a little lamb')
>>> c.add('nurseryrhyme.txt')
{'Hash': 'QmZfF6C9j4VtoCsTp4KSrhYH47QMd3DNXVZBKaxJdhaPab',
 'Name': 'nurseryrhyme.txt'}

Returns: dict (File name and hash of the added file node)
get(multihash, **kwargs)

Downloads a file, or directory of files, from IPFS.

Files are placed in the current working directory.

Parameters: multihash (str) -- The path to the IPFS object(s) to be outputted
cat(multihash, **kwargs)

Retrieves the contents of a file identified by hash.

>>> c.cat('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
Traceback (most recent call last):
  ...
ipfsapi.exceptions.Error: this dag node is a directory
>>> c.cat('QmeKozNssnkJ4NcyRidYgDY2jfRZqVEoRGfipkgath71bX')
b'<!DOCTYPE html>\n<html>\n\n<head>\n<title>ipfs example viewer</…'

Parameters: multihash (str) -- The path to the IPFS object(s) to be retrieved

Returns: str (File contents)
ls(multihash, **kwargs)

Returns a list of objects linked to by the given hash.

>>> c.ls('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
{'Objects': [
  {'Hash': 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D',
   'Links': [
     {'Hash': 'Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7dtPNFkcNMV',
      'Name': 'Makefile', 'Size': 174, 'Type': 2},
     …
     {'Hash': 'QmSY8RfVntt3VdxWppv9w5hWgNrE31uctgTiYwKir8eXJY',
      'Name': 'published-version', 'Size': 55, 'Type': 2}
   ]}
]}

Parameters: multihash (str) -- The path to the IPFS object(s) to list links from

Returns: dict (Directory information and contents)
refs(multihash, **kwargs)

Returns a list of hashes of objects referenced by the given hash.

>>> c.refs('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
[{'Ref': 'Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7 … cNMV', 'Err': ''},
 …
 {'Ref': 'QmSY8RfVntt3VdxWppv9w5hWgNrE31uctgTi … eXJY', 'Err': ''}]

Parameters: multihash (str) -- Path to the object(s) to list refs from

Returns: list
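Each entry in the returned list pairs a 'Ref' hash with an 'Err' string. A small client-side helper (hypothetical, for illustration over the documented return shape) that keeps only the hashes of entries without errors might look like:

```python
# Sketch: extract hashes from a refs()/refs_local() style result.
# ref_hashes is a hypothetical helper; the sample hashes are placeholders.

def ref_hashes(refs):
    """Return the 'Ref' values of all entries whose 'Err' field is empty."""
    return [entry["Ref"] for entry in refs if not entry.get("Err")]

sample = [
    {"Ref": "QmHashOne", "Err": ""},
    {"Ref": "QmHashTwo", "Err": "some error"},
]
print(ref_hashes(sample))  # ['QmHashOne']
```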
refs_local(**kwargs)

Displays the hashes of all local objects.

>>> c.refs_local()
[{'Ref': 'Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7 … cNMV', 'Err': ''},
 …
 {'Ref': 'QmSY8RfVntt3VdxWppv9w5hWgNrE31uctgTi … eXJY', 'Err': ''}]

Returns: list
block_stat(multihash, **kwargs)

Returns a dict with the size of the block with the given hash.

>>> c.block_stat('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
{'Key': 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D',
 'Size': 258}

Parameters: multihash (str) -- The base58 multihash of an existing block to stat

Returns: dict (Information about the requested block)
block_get(multihash, **kwargs)

Returns the raw contents of a block.

>>> c.block_get('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
b'\x121\n"\x12 \xdaW>\x14\xe5\xc1\xf6\xe4\x92\xd1 … \n\x02\x08\x01'

Parameters: multihash (str) -- The base58 multihash of an existing block to get

Returns: str (Value of the requested block)
block_put(file, **kwargs)

Stores the contents of the given file object as an IPFS block.

>>> c.block_put(io.BytesIO(b'Mary had a little lamb'))
{'Key': 'QmeV6C6XVt1wf7V7as7Yak3mxPma8jzpqyhtRtCvpKcfBb',
 'Size': 22}

Parameters: file (io.RawIOBase) -- The data to be stored as an IPFS block

Returns: dict (Information about the new block) -- See block_stat()
bitswap_wantlist(peer=None, **kwargs)

Returns blocks currently on the bitswap wantlist.

>>> c.bitswap_wantlist()
{'Keys': [
  'QmeV6C6XVt1wf7V7as7Yak3mxPma8jzpqyhtRtCvpKcfBb',
  'QmdCWFLDXqgdWQY9kVubbEHBbkieKd3uo7MtCm7nTZZE9K',
  'QmVQ1XvYGF19X4eJqz1s7FJYJqAxFC4oqh3vWJJEXn66cp'
]}

Parameters: peer (str) -- Peer to show wantlist for

Returns: dict (List of wanted blocks)
bitswap_stat(**kwargs)

Returns some diagnostic information from the bitswap agent.

>>> c.bitswap_stat()
{'BlocksReceived': 96,
 'DupBlksReceived': 73,
 'DupDataReceived': 2560601,
 'ProviderBufLen': 0,
 'Peers': [
   'QmNZFQRxt9RMNm2VVtuV2Qx7q69bcMWRVXmr5CEkJEgJJP',
   'QmNfCubGpwYZAQxX8LQDsYgB48C4GbfZHuYdexpX9mbNyT',
   'QmNfnZ8SCs3jAtNPc8kf3WJqJqSoX7wsX7VqkLdEYMao4u',
   …
 ],
 'Wantlist': [
   'QmeV6C6XVt1wf7V7as7Yak3mxPma8jzpqyhtRtCvpKcfBb',
   'QmdCWFLDXqgdWQY9kVubbEHBbkieKd3uo7MtCm7nTZZE9K',
   'QmVQ1XvYGF19X4eJqz1s7FJYJqAxFC4oqh3vWJJEXn66cp'
 ]}

Returns: dict (Statistics, peers and wanted blocks)
bitswap_unwant(key, **kwargs)

Removes a given block from the wantlist.

Parameters: key (str) -- Key to remove from the wantlist
object_data(multihash, **kwargs)

Returns the raw bytes in an IPFS object.

>>> c.object_data('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
b'\x08\x01'

Parameters: multihash (str) -- Key of the object to retrieve, in base58-encoded multihash format

Returns: str (Raw object data)
object_new(template=None, **kwargs)

Creates a new object from an IPFS template.

By default this creates and returns a new empty merkledag node, but you may pass an optional template argument to create a preformatted node.

>>> c.object_new()
{'Hash': 'QmdfTbBqBPQ7VNxZEYEj14VmRuZBkqFbiwReogJgS1zR1n'}

Parameters: template (str) -- Blueprint from which to construct the new object

Returns: dict (Object hash)
object_links(multihash, **kwargs)

Returns the links pointed to by the specified object.

>>> c.object_links('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDx … ca7D')
{'Hash': 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D',
 'Links': [
   {'Hash': 'Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7dtPNFkcNMV',
    'Name': 'Makefile', 'Size': 174},
   {'Hash': 'QmeKozNssnkJ4NcyRidYgDY2jfRZqVEoRGfipkgath71bX',
    'Name': 'example', 'Size': 1474},
   {'Hash': 'QmZAL3oHMQYqsV61tGvoAVtQLs1WzRe1zkkamv9qxqnDuK',
    'Name': 'home', 'Size': 3947},
   {'Hash': 'QmZNPyKVriMsZwJSNXeQtVQSNU4v4KEKGUQaMT61LPahso',
    'Name': 'lib', 'Size': 268261},
   {'Hash': 'QmSY8RfVntt3VdxWppv9w5hWgNrE31uctgTiYwKir8eXJY',
    'Name': 'published-version', 'Size': 55}]}

Parameters: multihash (str) -- Key of the object to retrieve, in base58-encoded multihash format

Returns: dict (Object hash and merkledag links)
object_get(multihash, **kwargs)

Get and serialize the DAG node named by multihash.

>>> c.object_get('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
{'Data': '',
 'Links': [
   {'Hash': 'Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7dtPNFkcNMV',
    'Name': 'Makefile', 'Size': 174},
   {'Hash': 'QmeKozNssnkJ4NcyRidYgDY2jfRZqVEoRGfipkgath71bX',
    'Name': 'example', 'Size': 1474},
   {'Hash': 'QmZAL3oHMQYqsV61tGvoAVtQLs1WzRe1zkkamv9qxqnDuK',
    'Name': 'home', 'Size': 3947},
   {'Hash': 'QmZNPyKVriMsZwJSNXeQtVQSNU4v4KEKGUQaMT61LPahso',
    'Name': 'lib', 'Size': 268261},
   {'Hash': 'QmSY8RfVntt3VdxWppv9w5hWgNrE31uctgTiYwKir8eXJY',
    'Name': 'published-version', 'Size': 55}]}

Parameters: multihash (str) -- Key of the object to retrieve, in base58-encoded multihash format

Returns: dict (Object data and links)
object_put(file, **kwargs)

Stores input as a DAG object and returns its key.

>>> c.object_put(io.BytesIO(b'''
... {
...     "Data": "another",
...     "Links": [ {
...         "Name": "some link",
...         "Hash": "QmXg9Pp2ytZ14xgmQjYEiHjVjMFXzCV … R39V",
...         "Size": 8
...     } ]
... }'''))
{'Hash': 'QmZZmY4KCu9r3e7M2Pcn46Fc5qbn6NpzaAGaYb22kbfTqm',
 'Links': [
   {'Hash': 'QmXg9Pp2ytZ14xgmQjYEiHjVjMFXzCVVEcRTWJBmLgR39V',
    'Size': 8, 'Name': 'some link'}
 ]}

Parameters: file (io.RawIOBase) -- (JSON) object from which the DAG object will be created

Returns: dict (Hash and links of the created DAG object) -- See object_links()
object_stat(multihash, **kwargs)

Get stats for the DAG node named by multihash.

>>> c.object_stat('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
{'LinksSize': 256, 'NumLinks': 5,
 'Hash': 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D',
 'BlockSize': 258, 'CumulativeSize': 274169, 'DataSize': 2}

Parameters: multihash (str) -- Key of the object to retrieve, in base58-encoded multihash format

Returns: dict
object_patch_append_data(multihash, new_data, **kwargs)

Creates a new merkledag object based on an existing one.

The new object will have the provided data appended to it, and will thus have a new Hash.

>>> c.object_patch_append_data("QmZZmY … fTqm", io.BytesIO(b"bla"))
{'Hash': 'QmR79zQQj2aDfnrNgczUhvf2qWapEfQ82YQRt3QjrbhSb2'}

Returns: dict (Hash of new object)
object_patch_add_link(root, name, ref, create=False, **kwargs)

Creates a new merkledag object based on an existing one.

The new object will have a link to the provided object.

>>> c.object_patch_add_link(
...     'QmR79zQQj2aDfnrNgczUhvf2qWapEfQ82YQRt3QjrbhSb2',
...     'Johnny',
...     'QmR79zQQj2aDfnrNgczUhvf2qWapEfQ82YQRt3QjrbhSb2'
... )
{'Hash': 'QmNtXbF3AjAk59gQKRgEdVabHcSsiPUnJwHnZKyj2x8Z3k'}

Returns: dict (Hash of new object)
object_patch_rm_link(root, link, **kwargs)

Creates a new merkledag object based on an existing one.

The new object will lack a link to the specified object.

>>> c.object_patch_rm_link(
...     'QmNtXbF3AjAk59gQKRgEdVabHcSsiPUnJwHnZKyj2x8Z3k',
...     'Johnny'
... )
{'Hash': 'QmR79zQQj2aDfnrNgczUhvf2qWapEfQ82YQRt3QjrbhSb2'}

Returns: dict (Hash of new object)
object_patch_set_data(root, data, **kwargs)

Creates a new merkledag object based on an existing one.

The new object will have the same links as the old object, but with the provided data instead of the old object's data contents.

>>> c.object_patch_set_data(
...     'QmNtXbF3AjAk59gQKRgEdVabHcSsiPUnJwHnZKyj2x8Z3k',
...     io.BytesIO(b'bla')
... )
{'Hash': 'QmSw3k2qkv4ZPsbu9DVEJaTMszAQWNgM1FTFYpfZeNQWrd'}

Returns: dict (Hash of new object)
file_ls(multihash, **kwargs)

Lists directory contents for Unix filesystem objects.

The result contains size information. For files, the child size is the total size of the file contents. For directories, the child size is the IPFS link size.

The path can be a prefixless reference; in this case, it is assumed that it is an /ipfs/ reference and not /ipns/.

>>> c.file_ls('QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D')
{'Arguments': {'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D':
               'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D'},
 'Objects': {
   'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D': {
     'Hash': 'QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D',
     'Size': 0, 'Type': 'Directory',
     'Links': [
       {'Hash': 'Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7dtPNFkcNMV',
        'Name': 'Makefile', 'Size': 163, 'Type': 'File'},
       {'Hash': 'QmeKozNssnkJ4NcyRidYgDY2jfRZqVEoRGfipkgath71bX',
        'Name': 'example', 'Size': 1463, 'Type': 'File'},
       {'Hash': 'QmZAL3oHMQYqsV61tGvoAVtQLs1WzRe1zkkamv9qxqnDuK',
        'Name': 'home', 'Size': 3947, 'Type': 'Directory'},
       {'Hash': 'QmZNPyKVriMsZwJSNXeQtVQSNU4v4KEKGUQaMT61LPahso',
        'Name': 'lib', 'Size': 268261, 'Type': 'Directory'},
       {'Hash': 'QmSY8RfVntt3VdxWppv9w5hWgNrE31uctgTiYwKir8eXJY',
        'Name': 'published-version', 'Size': 47, 'Type': 'File'}
     ]
   }
 }}

Parameters: multihash (str) -- The path to the object(s) to list links from

Returns: dict
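The prefixless-reference rule above (a bare hash is treated as an /ipfs/ path) can be sketched as a tiny normalization step. The helper name is hypothetical; this only illustrates the documented convention.

```python
# Sketch: normalize a prefixless reference to an /ipfs/ path, mirroring
# the rule described for file_ls(). normalize_path is illustrative only.

def normalize_path(path):
    """Prepend /ipfs/ to bare references; leave /ipfs/ and /ipns/ paths alone."""
    if path.startswith("/ipfs/") or path.startswith("/ipns/"):
        return path
    return "/ipfs/" + path

print(normalize_path("QmExampleHash"))  # /ipfs/QmExampleHash
print(normalize_path("/ipns/ipfs.io"))  # /ipns/ipfs.io
```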
resolve(name, recursive=False, **kwargs)

Accepts an identifier and resolves it to the referenced item.

There are a number of mutable name protocols that can link among themselves and into IPNS. For example, IPNS references can (currently) point at an IPFS object, and DNS links can point at other DNS links, IPNS entries, or IPFS objects. This command accepts any of these identifiers.

>>> c.resolve("/ipfs/QmTkzDwWqPbnAh5YiV5VwcTLnGdw … ca7D/Makefile")
{'Path': '/ipfs/Qmd2xkBfEwEs9oMTk77A6jrsgurpF3ugXSg7dtPNFkcNMV'}
>>> c.resolve("/ipns/ipfs.io")
{'Path': '/ipfs/QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n'}

Returns: dict (IPFS path of resource)
key_list(**kwargs)

Returns a list of generated public keys that can be used with name_publish().

>>> c.key_list()
[{'Name': 'self',
  'Id': 'QmQf22bZar3WKmojipms22PkXH1MZGmvsqzQtuSvQE3uhm'},
 {'Name': 'example_key_name',
  'Id': 'QmQLaT5ZrCfSkXTH6rUKtVidcxj8jrW3X2h75Lug1AV7g8'}
]

Returns: list (List of dictionaries with Names and Ids of public keys)
key_gen(key_name, type, size=2048, **kwargs)

Adds a new public key that can be used for name_publish().

>>> c.key_gen('example_key_name')
{'Name': 'example_key_name',
 'Id': 'QmQLaT5ZrCfSkXTH6rUKtVidcxj8jrW3X2h75Lug1AV7g8'}

Returns: dict (Key name and Key Id)
key_rm(key_name, *key_names, **kwargs)

Remove a keypair.

>>> c.key_rm("bla")
{"Keys": [
  {"Name": "bla",
   "Id": "QmfJpR6paB6h891y7SYXGe6gapyNgepBeAYMbyejWA4FWA"}
]}

Parameters: key_name (str) -- Name of the key(s) to remove

Returns: dict (List of key names and IDs that have been removed)
key_rename(key_name, new_key_name, **kwargs)

Rename a keypair.

>>> c.key_rename("bla", "personal")
{"Was": "bla",
 "Now": "personal",
 "Id": "QmeyrRNxXaasZaoDXcCZgryoBCga9shaHQ4suHAYXbNZF3",
 "Overwrite": False}

Returns: dict (Old and new name, ID and overwrite flag of the renamed key)
name_publish(ipfs_path, resolve=True, lifetime='24h', ttl=None, key=None, **kwargs)

Publishes an object to IPNS.

IPNS is a PKI namespace, where names are the hashes of public keys, and the private key enables publishing new (signed) values. In publish, the default value of name is your own identity public key.

>>> c.name_publish('/ipfs/QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZK … GZ5d')
{'Value': '/ipfs/QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZKZMTodAtmvyGZ5d',
 'Name': 'QmVgNoP89mzpgEAAqK8owYoDEyB97MkcGvoWZir8otE9Uc'}

Returns: dict (IPNS hash and the IPFS path it points at)
name_resolve(name=None, recursive=False, nocache=False, **kwargs)

Gets the value currently published at an IPNS name.

IPNS is a PKI namespace, where names are the hashes of public keys, and the private key enables publishing new (signed) values. In resolve, the default value of name is your own identity public key.

>>> c.name_resolve()
{'Path': '/ipfs/QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZKZMTodAtmvyGZ5d'}

Returns: dict (The IPFS path the IPNS hash points at)
dns(domain_name, recursive=False, **kwargs)

Resolves DNS links to the referenced object.

Multihashes are hard to remember, but domain names are usually easy to remember. To create memorable aliases for multihashes, DNS TXT records can point to other DNS links, IPFS objects, IPNS keys, etc. This command resolves those links to the referenced object.

For example, with this DNS TXT record:

>>> import dns.resolver
>>> a = dns.resolver.query("ipfs.io", "TXT")
>>> a.response.answer[0].items[0].to_text()
'"dnslink=/ipfs/QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n"'

The resolver will give:

>>> c.dns("ipfs.io")
{'Path': '/ipfs/QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n'}

Returns: dict (The resource to which the DNS entry points)
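The TXT record shown above follows the "dnslink=<path>" convention. A minimal parser for that string form (illustrative only; the daemon performs this resolution server-side, and the helper name is hypothetical):

```python
# Sketch: extract the path from a dnslink TXT record value, per the
# "dnslink=/ipfs/<hash>" convention shown in the dns() example.

def parse_dnslink(txt_value):
    """Strip surrounding quotes and the dnslink= prefix, returning the path."""
    value = txt_value.strip('"')
    if not value.startswith("dnslink="):
        raise ValueError("not a dnslink TXT record")
    return value[len("dnslink="):]

print(parse_dnslink('"dnslink=/ipfs/QmExampleHash"'))  # /ipfs/QmExampleHash
```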
pin_add(path, *paths, **kwargs)

Pins objects to local storage.

Stores IPFS object(s) from a given path locally to disk.

>>> c.pin_add("QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZKZMTodAtmvyGZ5d")
{'Pins': ['QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZKZMTodAtmvyGZ5d']}

Returns: dict (List of IPFS objects that have been pinned)
pin_rm(path, *paths, **kwargs)

Removes a pinned object from local storage.

Removes the pin from the given object, allowing it to be garbage collected if needed.

>>> c.pin_rm('QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZKZMTodAtmvyGZ5d')
{'Pins': ['QmfZY61ukoQuCX8e5Pt7v8pRfhkyxwZKZMTodAtmvyGZ5d']}

Returns: dict (List of IPFS objects that have been unpinned)
pin_ls(type='all', **kwargs)

Lists objects pinned to local storage.

By default, all pinned objects are returned, but the type flag or arguments can restrict that to a specific pin type or to some specific objects respectively.

>>> c.pin_ls()
{'Keys': {
  'QmNNPMA1eGUbKxeph6yqV8ZmRkdVat … YMuz': {'Type': 'recursive'},
  'QmNPZUCeSN5458Uwny8mXSWubjjr6J … kP5e': {'Type': 'recursive'},
  'QmNg5zWpRMxzRAVg7FTQ3tUxVbKj8E … gHPz': {'Type': 'indirect'},
  …
  'QmNiuVapnYCrLjxyweHeuk6Xdqfvts … wCCe': {'Type': 'indirect'}}}

Parameters: type (str) -- The type of pinned keys to list

Returns: dict (Hashes of pinned IPFS objects and why they are pinned)
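Since pin_ls() returns a {'Keys': {<hash>: {'Type': <pin-type>}, …}} mapping, it is easy to filter the result client-side by pin type. The helper below is hypothetical and operates on the documented return shape; the sample hashes are placeholders.

```python
# Sketch: filter a pin_ls()-style result by pin type.

def pins_of_type(pin_ls_result, pin_type):
    """Return the hashes whose pin entry has the given 'Type'."""
    return sorted(key for key, info in pin_ls_result["Keys"].items()
                  if info["Type"] == pin_type)

sample = {"Keys": {
    "QmHashA": {"Type": "recursive"},
    "QmHashB": {"Type": "indirect"},
    "QmHashC": {"Type": "recursive"},
}}
print(pins_of_type(sample, "recursive"))  # ['QmHashA', 'QmHashC']
```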
pin_update(from_path, to_path, **kwargs)

Replaces one pin with another.

Updates one pin to another, making sure that all objects in the new pin are local, then removes the old pin. This is an optimized version of first using pin_add() to add a new pin for an object and then using pin_rm() to remove the pin for the old object.

>>> c.pin_update("QmXMqez83NU77ifmcPs5CkNRTMQksBLkyfBf4H5g1NZ52P",
...              "QmUykHAi1aSjMzHw3KmBoJjqRUQYNkFXm8K1y7ZsJxpfPH")
{"Pins": ["/ipfs/QmXMqez83NU77ifmcPs5CkNRTMQksBLkyfBf4H5g1NZ52P",
          "/ipfs/QmUykHAi1aSjMzHw3KmBoJjqRUQYNkFXm8K1y7ZsJxpfPH"]}

Returns: dict (List of IPFS objects affected by the pinning operation)
pin_verify(path, *paths, **kwargs)

Verify that recursive pins are complete.

Scan the repo for pinned object graphs and check their integrity. Issues will be reported back with a helpful human-readable error message to aid in error recovery. This is useful to help recover from datastore corruptions (such as when accidentally deleting files added using the filestore backend).

>>> for item in c.pin_verify("QmNuvmuFeeWWpx…wTTZ", verbose=True):
...     print(item)
...
{"Cid":"QmVkNdzCBukBRdpyFiKPyL2R15qPExMr9rV9RFV2kf9eeV","Ok":True}
{"Cid":"QmbPzQruAEFjUU3gQfupns6b8USr8VrD9H71GrqGDXQSxm","Ok":True}
{"Cid":"Qmcns1nUvbeWiecdGDPw8JxWeUfxCV8JKhTfgzs3F8JM4P","Ok":True}
…

Returns: iterable
repo_gc(**kwargs)

Removes stored objects that are not pinned from the repo.

Performs a garbage collection sweep of the local set of stored objects and removes the ones that are not pinned in order to reclaim hard disk space. Returns the hashes of all collected objects.

>>> c.repo_gc()
[{'Key': 'QmNPXDC6wTXVmZ9Uoc8X1oqxRRJr4f1sDuyQuwaHG2mpW2'},
 {'Key': 'QmNtXbF3AjAk59gQKRgEdVabHcSsiPUnJwHnZKyj2x8Z3k'},
 {'Key': 'QmRVBnxUCsD57ic5FksKYadtyUbMsyo9KYQKKELajqAp4q'},
 …
 {'Key': 'QmYp4TeCurXrhsxnzt5wqLqqUz8ZRg5zsc7GuUrUSDtwzP'}]

Returns: dict (List of IPFS objects that have been removed)
repo_stat(**kwargs)

Displays the repo's status.

Returns the number of objects in the repo and the repo's size, version, and path.

>>> c.repo_stat()
{'NumObjects': 354,
 'RepoPath': '…/.local/share/ipfs',
 'Version': 'fs-repo@4',
 'RepoSize': 13789310}

Returns: dict (General information about the IPFS file repository)

NumObjects -- Number of objects in the local repo.
RepoPath   -- The path to the repo being currently used.
RepoSize   -- Size in bytes that the repo is currently using.
Version    -- The repo version.
id(peer=None, **kwargs)

Shows IPFS Node ID info.

Returns the PublicKey, ProtocolVersion, ID, AgentVersion and Addresses of the connected daemon or some other node.

>>> c.id()
{'ID': 'QmVgNoP89mzpgEAAqK8owYoDEyB97MkcGvoWZir8otE9Uc',
 'PublicKey': 'CAASpgIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggE … BAAE=',
 'AgentVersion': 'go-libp2p/3.3.4',
 'ProtocolVersion': 'ipfs/0.1.0',
 'Addresses': [
   '/ip4/127.0.0.1/tcp/4001/ipfs/QmVgNoP89mzpgEAAqK8owYo … E9Uc',
   '/ip4/10.1.0.172/tcp/4001/ipfs/QmVgNoP89mzpgEAAqK8owY … E9Uc',
   '/ip4/172.18.0.1/tcp/4001/ipfs/QmVgNoP89mzpgEAAqK8owY … E9Uc',
   '/ip6/::1/tcp/4001/ipfs/QmVgNoP89mzpgEAAqK8owYoDEyB97 … E9Uc',
   '/ip6/fccc:7904:b05b:a579:957b:deef:f066:cad9/tcp/400 … E9Uc',
   '/ip6/fd56:1966:efd8::212/tcp/4001/ipfs/QmVgNoP89mzpg … E9Uc',
   '/ip6/fd56:1966:efd8:0:def1:34d0:773:48f/tcp/4001/ipf … E9Uc',
   '/ip6/2001:db8:1::1/tcp/4001/ipfs/QmVgNoP89mzpgEAAqK8 … E9Uc',
   '/ip4/77.116.233.54/tcp/4001/ipfs/QmVgNoP89mzpgEAAqK8 … E9Uc',
   '/ip4/77.116.233.54/tcp/10842/ipfs/QmVgNoP89mzpgEAAqK … E9Uc']}

Parameters: peer (str) -- Peer.ID of the node to look up (local node if None)

Returns: dict (Information about the IPFS node)
bootstrap(**kwargs)

Compatibility alias for bootstrap_list().
bootstrap_list(**kwargs)

Returns the addresses of peers used during initial discovery of the IPFS network.

Peers are output in the format <multiaddr>/<peerID>.

>>> c.bootstrap_list()
{'Peers': [
  '/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYER … uvuJ',
  '/ip4/104.236.176.52/tcp/4001/ipfs/QmSoLnSGccFuZQJzRa … ca9z',
  '/ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKD … KrGM',
  …
  '/ip4/178.62.61.185/tcp/4001/ipfs/QmSoLMeWqB7YGVLJN3p … QBU3']}

Returns: dict (List of known bootstrap peers)
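Given the <multiaddr>/<peerID> format above, the peer ID is the component after the final /ipfs/ segment. A small sketch of splitting the two parts (the helper name and sample peer ID are illustrative, not from the library):

```python
# Sketch: split a bootstrap address of the form <multiaddr>/ipfs/<peerID>
# into its transport multiaddr and peer ID parts.

def split_bootstrap_addr(addr):
    """Split on the last /ipfs/ segment; returns (transport, peer_id)."""
    transport, _, peer_id = addr.rpartition("/ipfs/")
    return transport, peer_id

addr = "/ip4/104.131.131.82/tcp/4001/ipfs/QmExamplePeerID"
print(split_bootstrap_addr(addr))
# ('/ip4/104.131.131.82/tcp/4001', 'QmExamplePeerID')
```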
bootstrap_add(peer, *peers, **kwargs)

Adds peers to the bootstrap list.

Parameters: peer (str) -- IPFS MultiAddr of a peer to add to the list

Returns: dict
bootstrap_rm(peer, *peers, **kwargs)

Removes peers from the bootstrap list.

Parameters: peer (str) -- IPFS MultiAddr of a peer to remove from the list

Returns: dict
swarm_peers(**kwargs)

Returns the addresses & IDs of currently connected peers.

>>> c.swarm_peers()
{'Strings': [
  '/ip4/101.201.40.124/tcp/40001/ipfs/QmZDYAhmMDtnoC6XZ … kPZc',
  '/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYER … uvuJ',
  '/ip4/104.223.59.174/tcp/4001/ipfs/QmeWdgoZezpdHz1PX8 … 1jB6',
  …
  '/ip6/fce3: … :f140/tcp/43901/ipfs/QmSoLnSGccFuZQJzRa … ca9z']}

Returns: dict (List of multiaddrs of currently connected peers)
swarm_addrs(**kwargs)

Returns the addresses of currently connected peers, by peer id.

>>> pprint(c.swarm_addrs())
{'Addrs': {
  'QmNMVHJTSZHTWMWBbmBrQgkA1hZPWYuVJx2DpSGESWW6Kn': [
    '/ip4/10.1.0.1/tcp/4001',
    '/ip4/127.0.0.1/tcp/4001',
    '/ip4/51.254.25.16/tcp/4001',
    '/ip6/2001:41d0:b:587:3cae:6eff:fe40:94d8/tcp/4001',
    '/ip6/2001:470:7812:1045::1/tcp/4001',
    '/ip6/::1/tcp/4001',
    '/ip6/fc02:2735:e595:bb70:8ffc:5293:8af8:c4b7/tcp/4001',
    '/ip6/fd00:7374:6172:100::1/tcp/4001',
    '/ip6/fd20:f8be:a41:0:c495:aff:fe7e:44ee/tcp/4001',
    '/ip6/fd20:f8be:a41::953/tcp/4001'],
  'QmNQsK1Tnhe2Uh2t9s49MJjrz7wgPHj4VyrZzjRe8dj7KQ': [
    '/ip4/10.16.0.5/tcp/4001',
    '/ip4/127.0.0.1/tcp/4001',
    '/ip4/172.17.0.1/tcp/4001',
    '/ip4/178.62.107.36/tcp/4001',
    '/ip6/::1/tcp/4001'],
  …
}}

Returns: dict (Multiaddrs of peers by peer id)
swarm_connect(address, *addresses, **kwargs)

Opens a connection to a given address.

This will open a new direct connection to a peer address. The address format is an IPFS multiaddr:

/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ

>>> c.swarm_connect("/ip4/104.131.131.82/tcp/4001/ipfs/Qma … uvuJ")
{'Strings': ['connect QmaCpDMGvV2BGHeYERUEnRQAwe3 … uvuJ success']}

Parameters: address (str) -- Address of peer to connect to

Returns: dict (Textual connection status report)
swarm_disconnect(address, *addresses, **kwargs)

Closes the connection to a given address.

This will close a connection to a peer address. The address format is an IPFS multiaddr:

/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ

The disconnect is not permanent; if IPFS needs to talk to that address later, it will reconnect.

>>> c.swarm_disconnect("/ip4/104.131.131.82/tcp/4001/ipfs/Qm … uJ")
{'Strings': ['disconnect QmaCpDMGvV2BGHeYERUEnRQA … uvuJ success']}

Parameters: address (str) -- Address of peer to disconnect from

Returns: dict (Textual connection status report)
swarm_filters_add(address, *addresses, **kwargs)

Adds a given multiaddr filter to the filter list.

This will add an address filter to the daemon's swarm. Filters applied this way will not persist across daemon reboots; to achieve that, add your filters to the configuration file.

>>> c.swarm_filters_add("/ip4/192.168.0.0/ipcidr/16")
{'Strings': ['/ip4/192.168.0.0/ipcidr/16']}

Parameters: address (str) -- Multiaddr to filter

Returns: dict (List of swarm filters added)
swarm_filters_rm(address, *addresses, **kwargs)

Removes a given multiaddr filter from the filter list.

This will remove an address filter from the daemon's swarm. Filters removed this way will not persist across daemon reboots; to achieve that, remove your filters from the configuration file.

>>> c.swarm_filters_rm("/ip4/192.168.0.0/ipcidr/16")
{'Strings': ['/ip4/192.168.0.0/ipcidr/16']}

Parameters: address (str) -- Multiaddr filter to remove

Returns: dict (List of swarm filters removed)
dht_query(peer_id, *peer_ids, **kwargs)

Finds the closest Peer IDs to a given Peer ID by querying the DHT.

>>> c.dht_query("/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDM … uvuJ")
[{'ID': 'QmPkFbxAQ7DeKD5VGSh9HQrdS574pyNzDmxJeGrRJxoucF',
  'Extra': '', 'Type': 2, 'Responses': None},
 {'ID': 'QmR1MhHVLJSLt9ZthsNNhudb1ny1WdhY4FPW21ZYFWec4f',
  'Extra': '', 'Type': 2, 'Responses': None},
 {'ID': 'Qmcwx1K5aVme45ab6NYWb52K2TFBeABgCLccC7ntUeDsAs',
  'Extra': '', 'Type': 2, 'Responses': None},
 …
 {'ID': 'QmYYy8L3YD1nsF4xtt4xmsc14yqvAAnKksjo3F3iZs5jPv',
  'Extra': '', 'Type': 1, 'Responses': []}]

Parameters: peer_id (str) -- The peerID to run the query against

Returns: dict (List of peer IDs)
dht_findprovs(multihash, *multihashes, **kwargs)

Finds peers in the DHT that can provide a specific value.

>>> c.dht_findprovs("QmNPXDC6wTXVmZ9Uoc8X1oqxRRJr4f1sDuyQu … mpW2")
[{'ID': 'QmaxqKpiYNr62uSFBhxJAMmEMkT6dvc3oHkrZNpH2VMTLZ',
  'Extra': '', 'Type': 6, 'Responses': None},
 {'ID': 'QmaK6Aj5WXkfnWGoWq7V8pGUYzcHPZp4jKQ5JtmRvSzQGk',
  'Extra': '', 'Type': 6, 'Responses': None},
 {'ID': 'QmdUdLu8dNvr4MVW1iWXxKoQrbG6y1vAVWPdkeGK4xppds',
  'Extra': '', 'Type': 6, 'Responses': None},
 …
 {'ID': '', 'Extra': '', 'Type': 4, 'Responses': [
   {'ID': 'QmVgNoP89mzpgEAAqK8owYoDEyB97Mk … E9Uc', 'Addrs': None}
 ]},
 {'ID': 'QmaxqKpiYNr62uSFBhxJAMmEMkT6dvc3oHkrZNpH2VMTLZ',
  'Extra': '', 'Type': 1, 'Responses': [
   {'ID': 'QmSHXfsmN3ZduwFDjeqBn1C8b1tcLkxK6yd … waXw',
    'Addrs': [
      '/ip4/127.0.0.1/tcp/4001',
      '/ip4/172.17.0.8/tcp/4001',
      '/ip6/::1/tcp/4001',
      '/ip4/52.32.109.74/tcp/1028'
    ]}
 ]}]

Parameters: multihash (str) -- The DHT key to find providers for

Returns: dict (List of provider Peer IDs)
dht_findpeer(peer_id, *peer_ids, **kwargs)

Queries the DHT for all of the associated multiaddresses.

>>> c.dht_findpeer("QmaxqKpiYNr62uSFBhxJAMmEMkT6dvc3oHkrZN … MTLZ")
[{'ID': 'QmfVGMFrwW6AV6fTWmD6eocaTybffqAvkVLXQEFrYdk6yc',
  'Extra': '', 'Type': 6, 'Responses': None},
 {'ID': 'QmTKiUdjbRjeN9yPhNhG1X38YNuBdjeiV9JXYWzCAJ4mj5',
  'Extra': '', 'Type': 6, 'Responses': None},
 {'ID': 'QmTGkgHSsULk8p3AKTAqKixxidZQXFyF7mCURcutPqrwjQ',
  'Extra': '', 'Type': 6, 'Responses': None},
 …
 {'ID': '', 'Extra': '', 'Type': 2, 'Responses': [
   {'ID': 'QmaxqKpiYNr62uSFBhxJAMmEMkT6dvc3oHkrZNpH2VMTLZ',
    'Addrs': [
      '/ip4/10.9.8.1/tcp/4001',
      '/ip6/::1/tcp/4001',
      '/ip4/164.132.197.107/tcp/4001',
      '/ip4/127.0.0.1/tcp/4001']}
 ]}]

Parameters: peer_id (str) -- The ID of the peer to search for

Returns: dict (List of multiaddrs)
dht_get(key, *keys, **kwargs)

Queries the DHT for its best value related to a given key.

There may be several different values for a given key stored in the DHT; in this context, best means the record that is most desirable. There is no one metric for best: it depends entirely on the key type. For IPNS, best is the record that is both valid and has the highest sequence number (freshest). Different key types may specify other rules for what they consider to be the best.

Parameters: key (str) -- One or more keys whose values should be looked up

Returns: str
dht_put(key, value, **kwargs)

Writes a key/value pair to the DHT.

Given a key of the form /foo/bar and a value of any form, this will write that value to the DHT with that key.

Keys have two parts: a keytype (foo) and the key name (bar). IPNS uses the /ipns/ keytype and expects the key name to be a Peer ID. IPNS entries are formatted with a special structure.

You may only use keytypes that are supported by your ipfs binary: go-ipfs currently only supports the /ipns/ keytype. Unless you have a relatively deep understanding of the key's internal structure, you likely want to use name_publish() instead.

Value is arbitrary text.

>>> c.dht_put("QmVgNoP89mzpgEAAqK8owYoDEyB97Mkc … E9Uc", "test123")
[{'ID': 'QmfLy2aqbhU1RqZnGQyqHSovV8tDufLUaPfN1LNtg5CvDZ',
  'Extra': '', 'Type': 5, 'Responses': None},
 {'ID': 'QmZ5qTkNvvZ5eFq9T4dcCEK7kX8L7iysYEpvQmij9vokGE',
  'Extra': '', 'Type': 5, 'Responses': None},
 {'ID': 'QmYqa6QHCbe6eKiiW6YoThU5yBy8c3eQzpiuW22SgVWSB8',
  'Extra': '', 'Type': 6, 'Responses': None},
 …
 {'ID': 'QmP6TAKVDCziLmx9NV8QGekwtf7ZMuJnmbeHMjcfoZbRMd',
  'Extra': '', 'Type': 1, 'Responses': []}]

Returns: list
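The /foo/bar key structure described above (a keytype and a key name) can be sketched as a tiny split helper. The function name and sample peer ID are illustrative, not part of the library.

```python
# Sketch: split a DHT key of the form /<keytype>/<name> into its parts,
# per the key structure described for dht_put().

def split_dht_key(key):
    """Return (keytype, name) for a key like /ipns/<peer-id>."""
    parts = key.split("/")
    if len(parts) != 3 or parts[0] != "":
        raise ValueError("expected a key of the form /keytype/name")
    return parts[1], parts[2]

print(split_dht_key("/ipns/QmExamplePeerID"))  # ('ipns', 'QmExamplePeerID')
```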
ping(peer, *peers, **kwargs)

Provides round-trip latency information for the routing system.

Finds nodes via the routing system, sends pings, waits for pongs, and prints out round-trip latency information.

>>> c.ping("QmTzQ1JRkWErjk39mryYw2WVaphAZNAREyMchXzYQ7c15n")
[{'Success': True, 'Time': 0,
  'Text': 'Looking up peer QmTzQ1JRkWErjk39mryYw2WVaphAZN … c15n'},
 {'Success': False, 'Time': 0,
  'Text': 'Peer lookup error: routing: not found'}]

Returns: list (Progress reports from the ping)
config(key, value=None, **kwargs)

Controls configuration variables.

>>> c.config("Addresses.Gateway")
{'Key': 'Addresses.Gateway', 'Value': '/ip4/127.0.0.1/tcp/8080'}
>>> c.config("Addresses.Gateway", "/ip4/127.0.0.1/tcp/8081")
{'Key': 'Addresses.Gateway', 'Value': '/ip4/127.0.0.1/tcp/8081'}

Returns: dict (Requested/updated key and its (new) value)
config_show(**kwargs)

Returns a dict containing the server's configuration.

Warning: The configuration file contains private key data that must be handled with care.

>>> config = c.config_show()
>>> config['Addresses']
{'API': '/ip4/127.0.0.1/tcp/5001',
 'Gateway': '/ip4/127.0.0.1/tcp/8080',
 'Swarm': ['/ip4/0.0.0.0/tcp/4001', '/ip6/::/tcp/4001']}
>>> config['Discovery']
{'MDNS': {'Enabled': True, 'Interval': 10}}

Returns: dict (The entire IPFS daemon configuration)
config_replace(*args, **kwargs)

Replaces the existing configuration with a user-defined configuration.

Make sure to back up the config file first if necessary, as this operation cannot be undone.
log_level(subsystem, level, **kwargs)

Changes the logging output of a running daemon.

>>> c.log_level("path", "info")
{'Message': "Changed log level of 'path' to 'info'\n"}

Returns: dict (Status message)
log_ls(**kwargs)

Lists the logging subsystems of a running daemon.

>>> c.log_ls()
{'Strings': [
  'github.com/ipfs/go-libp2p/p2p/host', 'net/identify', 'merkledag',
  'providers', 'routing/record', 'chunk', 'mfs', 'ipns-repub',
  'flatfs', 'ping', 'mockrouter', 'dagio', 'cmds/files', 'blockset',
  'engine', 'mocknet', 'config', 'commands/http', 'cmd/ipfs',
  'command', 'conn', 'gc', 'peerstore', 'core', 'coreunix', 'fsrepo',
  'core/server', 'boguskey', 'github.com/ipfs/go-libp2p/p2p/host/routed',
  'diagnostics', 'namesys', 'fuse/ipfs', 'node', 'secio',
  'core/commands', 'supernode', 'mdns', 'path', 'table', 'swarm2',
  'peerqueue', 'mount', 'fuse/ipns', 'blockstore',
  'github.com/ipfs/go-libp2p/p2p/host/basic', 'lock', 'nat',
  'importer', 'corerepo', 'dht.pb', 'pin', 'bitswap_network',
  'github.com/ipfs/go-libp2p/p2p/protocol/relay', 'peer', 'transport',
  'dht', 'offlinerouting', 'tarfmt', 'eventlog', 'ipfsaddr',
  'github.com/ipfs/go-libp2p/p2p/net/swarm/addr', 'bitswap',
  'reprovider', 'supernode/proxy', 'crypto', 'tour', 'commands/cli',
  'blockservice']}

Returns: dict (List of daemon logging subsystems)
log_tail
(**kwargs)[source]Reads log outputs as they are written.
This function returns a response object that can be iterated over by the user. The user should make sure to close the response object when they are done reading from it.
>>> for item in c.log_tail():
...     print(item)
...
{"event":"updatePeer","system":"dht",
 "peerID":"QmepsDPxWtLDuKvEoafkpJxGij4kMax11uTH7WnKqD25Dq",
 "session":"7770b5e0-25ec-47cd-aa64-f42e65a10023",
 "time":"2016-08-22T13:25:27.43353297Z"}
{"event":"handleAddProviderBegin","system":"dht",
 "peer":"QmepsDPxWtLDuKvEoafkpJxGij4kMax11uTH7WnKqD25Dq",
 "session":"7770b5e0-25ec-47cd-aa64-f42e65a10023",
 "time":"2016-08-22T13:25:27.433642581Z"}
{"event":"handleAddProvider","system":"dht","duration":91704,
 "key":"QmNT9Tejg6t57Vs8XM2TVJXCwevWiGsZh3kB4HQXUZRK1o",
 "peer":"QmepsDPxWtLDuKvEoafkpJxGij4kMax11uTH7WnKqD25Dq",
 "session":"7770b5e0-25ec-47cd-aa64-f42e65a10023",
 "time":"2016-08-22T13:25:27.433747513Z"}
{"event":"updatePeer","system":"dht",
 "peerID":"QmepsDPxWtLDuKvEoafkpJxGij4kMax11uTH7WnKqD25Dq",
 "session":"7770b5e0-25ec-47cd-aa64-f42e65a10023",
 "time":"2016-08-22T13:25:27.435843012Z"}
…
Returns: | iterable |
---|
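Since the stream stays open until explicitly closed, `contextlib.closing` is a convenient way to honour that requirement. The `FakeLogStream` class below is a hypothetical stand-in that only mimics the iterate-then-close shape of the real response object, so the pattern can be shown without a running daemon:

```python
import contextlib


class FakeLogStream:
    """Stand-in for the response object returned by Client.log_tail():
    iterable and closeable (the real object streams JSON log events)."""
    def __init__(self, events):
        self._events = events
        self.closed = False

    def __iter__(self):
        return iter(self._events)

    def close(self):
        self.closed = True


# The recommended pattern: always close the stream when done reading.
# With a real client, `c.log_tail()` would replace the stand-in here.
stream = FakeLogStream(['{"event":"updatePeer","system":"dht"}'])
with contextlib.closing(stream) as log:
    events = [line for line in log]

assert stream.closed
```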
version
(**kwargs)[source]Returns the software version of the currently connected node.
>>> c.version()
{'Version': '0.4.3-rc2', 'Repo': '4', 'Commit': '', 'System': 'amd64/linux', 'Golang': 'go1.6.2'}
Returns: | dict (Daemon and system version information) |
---|
files_cp
(source, dest, **kwargs)[source]Copies files within the MFS.
Due to the nature of IPFS this will not actually involve any of the file's content being copied.
>>> c.files_ls("/")
{'Entries': [
    {'Size': 0, 'Hash': '', 'Name': 'Software', 'Type': 0},
    {'Size': 0, 'Hash': '', 'Name': 'test', 'Type': 0}
]}
>>> c.files_cp("/test", "/bla")
''
>>> c.files_ls("/")
{'Entries': [
    {'Size': 0, 'Hash': '', 'Name': 'Software', 'Type': 0},
    {'Size': 0, 'Hash': '', 'Name': 'bla', 'Type': 0},
    {'Size': 0, 'Hash': '', 'Name': 'test', 'Type': 0}
]}
Parameters: | source (str) -- Filepath within the MFS to copy; dest (str) -- Destination filepath within the MFS |
---|---|
files_ls
(path, **kwargs)[source]Lists contents of a directory in the MFS.
>>> c.files_ls("/")
{'Entries': [
    {'Size': 0, 'Hash': '', 'Name': 'Software', 'Type': 0}
]}
Parameters: | path (str) -- Filepath within the MFS |
---|---|
Returns: | dict (Directory entries) |
files_mkdir
(path, parents=False, **kwargs)[source]Creates a directory within the MFS.
>>> c.files_mkdir("/test")
b''
Parameters: | path (str) -- Filepath within the MFS; parents (bool) -- Create parent directories as needed and do not raise an error if the path already exists |
---|---|
files_stat
(path, **kwargs)[source]Returns basic stat information for an MFS file (including its hash).
>>> c.files_stat("/test")
{'Hash': 'QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn',
 'Size': 0, 'CumulativeSize': 4, 'Type': 'directory', 'Blocks': 0}
Parameters: | path (str) -- Filepath within the MFS |
---|---|
Returns: | dict (MFS file information) |
files_rm
(path, recursive=False, **kwargs)[source]Removes a file from the MFS.
>>> c.files_rm("/bla/file")
b''
Parameters: | path (str) -- Filepath within the MFS; recursive (bool) -- Recursively remove directories |
---|---|
files_read
(path, offset=0, count=None, **kwargs)[source]Reads a file stored in the MFS.
>>> c.files_read("/bla/file")
b'hi'
Parameters: | path (str) -- Filepath within the MFS; offset (int) -- Byte offset at which to begin reading; count (int) -- Maximum number of bytes to read (reads to the end of the file if None) |
---|---|
Returns: | bytes (MFS file contents) |
files_write
(path, file, offset=0, create=False, truncate=False, count=None, **kwargs)[source]Writes to a mutable file in the MFS.
>>> c.files_write("/test/file", io.BytesIO(b"hi"), create=True)
b''
Parameters: | path (str) -- Filepath within the MFS; file (io stream) -- IO stream containing the content to write; offset (int) -- Byte offset at which to begin writing; create (bool) -- Create the file if it does not exist; truncate (bool) -- Truncate the file to size zero before writing; count (int) -- Maximum number of bytes to read from the source stream |
---|---|
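The interaction of the `offset` and `truncate` flags can be illustrated locally on plain bytes. The `write_at` helper below is hypothetical: it mimics the assumed semantics on an in-memory byte string rather than implementing the MFS call:

```python
def write_at(existing, data, offset=0, truncate=False):
    """Mimic files_write flag semantics on an in-memory byte string."""
    if truncate:
        existing = b""  # drop all previous content first
    if offset > len(existing):
        # assumed behaviour: writing past the end zero-fills the gap
        existing += b"\x00" * (offset - len(existing))
    return existing[:offset] + data + existing[offset + len(data):]


assert write_at(b"hello world", b"HELLO") == b"HELLO world"
assert write_at(b"hello world", b"", truncate=True) == b""
assert write_at(b"ab", b"c", offset=4) == b"ab\x00\x00c"
```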
files_mv
(source, dest, **kwargs)[source]Moves files and directories within the MFS.
>>> c.files_mv("/test/file", "/bla/file")
b''
Parameters: | source (str) -- Existing filepath within the MFS; dest (str) -- Destination filepath within the MFS |
---|---|
shutdown
()[source]Stop the connected IPFS daemon instance.
Sending any further requests after this will fail with ipfsapi.exceptions.ConnectionError, until you start another IPFS daemon instance.
add_bytes
(data, **kwargs)[source]Adds a set of bytes as a file to IPFS.
>>> c.add_bytes(b"Mary had a little lamb")
'QmZfF6C9j4VtoCsTp4KSrhYH47QMd3DNXVZBKaxJdhaPab'
Also accepts and will stream generator objects.
Parameters: | data (bytes) -- Content to be added as a file |
---|---|
Returns: | str (Hash of the added IPFS object) |
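Because add_bytes also streams generators, a large file can be added without loading it fully into memory. The `chunks` generator below is a hypothetical helper sketching that pattern:

```python
def chunks(path, size=4096):
    """Yield a file's content as fixed-size byte blocks, lazily."""
    with open(path, "rb") as f:
        while True:
            block = f.read(size)
            if not block:
                return
            yield block


# Against a live daemon (assumed at localhost:5001):
#   c.add_bytes(chunks("big-file.bin"))  # streamed, never fully buffered
```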
add_str
(string, **kwargs)[source]Adds a Python string as a file to IPFS.
>>> c.add_str(u"Mary had a little lamb")
'QmZfF6C9j4VtoCsTp4KSrhYH47QMd3DNXVZBKaxJdhaPab'
Also accepts and will stream generator objects.
Parameters: | string (str) -- Content to be added as a file |
---|---|
Returns: | str (Hash of the added IPFS object) |
add_json
(json_obj, **kwargs)[source]Adds a json-serializable Python dict as a json file to IPFS.
>>> c.add_json({'one': 1, 'two': 2, 'three': 3})
'QmVz9g7m5u3oHiNKHj2CJX1dbG1gtismRS3g9NaPBBLbob'
Parameters: | json_obj (dict) -- A json-serializable Python dictionary |
---|---|
Returns: | str (Hash of the added IPFS object) |
get_json
(multihash, **kwargs)[source]Loads a json object from IPFS.
>>> c.get_json('QmVz9g7m5u3oHiNKHj2CJX1dbG1gtismRS3g9NaPBBLbob')
{'one': 1, 'two': 2, 'three': 3}
Parameters: | multihash (str) -- Multihash of the IPFS object to load |
---|---|
Returns: | object (Deserialized IPFS JSON object value) |
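The pair is symmetric: whatever add_json serializes, get_json deserializes back to an equal Python value. A local sketch of that round-trip contract (the `sort_keys` detail is an assumption about the wire format, not a documented guarantee):

```python
import json

doc = {'one': 1, 'two': 2, 'three': 3}

# What add_json conceptually uploads: a UTF-8 encoded JSON document ...
payload = json.dumps(doc, sort_keys=True).encode('utf-8')

# ... and what get_json conceptually yields back after fetching it:
assert json.loads(payload.decode('utf-8')) == doc

# Against a live daemon (assumed at localhost:5001):
#   h = c.add_json(doc)
#   assert c.get_json(h) == doc
```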
add_pyobj
(py_obj, **kwargs)[source]Adds a picklable Python object as a file to IPFS.
Deprecated since version 0.4.2: The *_pyobj APIs allow for arbitrary code execution if abused. Either switch to add_json() or use client.add_bytes(pickle.dumps(py_obj)) instead.
Please see get_pyobj() for the security risks of using these methods!
>>> c.add_pyobj([0, 1.0, 2j, '3', 4e5])
'QmWgXZSUTNNDD8LdkdJ8UXSn55KfFnNvTP1r7SyaQd74Ji'
Parameters: | py_obj (object) -- A picklable Python object |
---|---|
Returns: | str (Hash of the added IPFS object) |
get_pyobj
(multihash, **kwargs)[source]Loads a pickled Python object from IPFS.
Deprecated since version 0.4.2: The *_pyobj APIs allow for arbitrary code execution if abused. Either switch to get_json() or use pickle.loads(client.cat(multihash)) instead.
Note
The pickle module is not intended to be secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.
Please read this article to understand the security risks of using this method!
>>> c.get_pyobj('QmWgXZSUTNNDD8LdkdJ8UXSn55KfFnNvTP1r7SyaQd74Ji')
[0, 1.0, 2j, '3', 400000.0]
Parameters: | multihash (str) -- Multihash of the IPFS object to load |
---|---|
Returns: | object (Deserialized IPFS Python object) |
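The warning above can be made concrete: unpickling executes whatever callable the payload names, which is why a multihash from an untrusted source must never be fed to get_pyobj. A harmless local demonstration (the `Payload` class is illustrative, not part of the library):

```python
import pickle


class Payload:
    """Pickles to a call of print() -- a real attacker would pick
    something like os.system instead."""
    def __reduce__(self):
        return (print, ("arbitrary code ran during unpickling",))


malicious = pickle.dumps(Payload())
result = pickle.loads(malicious)  # executes print() before anything can be inspected
assert result is None             # we got print's return value, not a Payload object
```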