IPFS (3) Source Code Walkthrough: add

What `add` does is upload a file to IPFS: the file is split into blocks, and those blocks are stored in the local blockstore.

The blocks directory under the IPFS installation directory holds all the block data stored by the current local node. As for whether the data is encrypted, I hadn't looked closely at first.

P.S. I went and checked: it is not encrypted, the content is stored as-is. That deserves some criticism...

First, the entry point for add is the file core/commands/add.go. This is a command-line tool: it provides the interaction, defines the command and its options, and maps each configuration flag to its behavior. It really is just parsing; the results of an upload are reported through the AddedObject struct, which carries the per-file information emitted during an add.

type AddedObject struct {
   Name        string
   Hash        string `json:",omitempty"`
   Bytes       int64  `json:",omitempty"`
   Size        string `json:",omitempty"`
   VID         string `json:",omitempty"`
   VersionInfo *utils.VersionInfo
}

Next, the file data is uploaded through the somewhat odd-looking pattern below. We will not dig into block generation here; we only look at how the file is read from the local machine and how that stream is fed into blocks.

What happens below is actually very simple. First, the addAllAndPin function is defined. Its main job is to iterate over the file paths given on the command line, read each file's contents, and hand the file to the next stage via fileAdder.AddFile(file).

The goroutine below watches for upload completion and errors: it pushes any error into the errCh channel and closes the output channel.

These two channels drive the console output: outChan reports upload progress and the block hashes once done, while errCh carries any error.

addAllAndPin := func(f files.File) error {
   // Iterate over each top-level file and add individually. Otherwise the
   // single files.File f is treated as a directory, affecting hidden file
   // semantics.
   for {
      file, err := f.NextFile()
      if err == io.EOF {
         // Finished the list of files.
         break
      } else if err != nil {
         return err
      }
      if err := fileAdder.AddFile(file); err != nil {
         return err
      }
   }

   // copy intermediary nodes from editor to our actual dagservice
   _, err := fileAdder.Finalize()
   if err != nil {
      return err
   }

   if hash {
      return nil
   }

   return fileAdder.PinRoot()
}

errCh := make(chan error)
go func() {
   var err error
   defer func() { errCh <- err }()
   defer close(outChan)
   err = addAllAndPin(req.Files)
}()
defer res.Close()

err = res.Emit(outChan)
if err != nil {
	log.Error(err)
	return
}
err = <-errCh
if err != nil {
	res.SetError(err, cmdkit.ErrNormal)
}

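The goroutine-plus-two-channels shape above (defer the error send, defer the close, drain the output, then read the error) is a common Go pattern. A stripped-down, framework-free sketch of the same shape (runJobs and the job names are mine, not go-ipfs code):

```go
package main

import (
	"errors"
	"fmt"
)

// runJobs streams results on out and reports the worker's final error on errCh,
// mirroring the addAllAndPin goroutine above.
func runJobs(jobs []string) (<-chan string, <-chan error) {
	out := make(chan string, 8)
	errCh := make(chan error, 1)
	go func() {
		var err error
		// Deferred LIFO: close(out) runs first to unblock the consumer,
		// then the final error (possibly nil) is reported on errCh.
		defer func() { errCh <- err }()
		defer close(out)
		for _, j := range jobs {
			if j == "bad" {
				err = errors.New("failed on " + j)
				return
			}
			out <- "added " + j
		}
	}()
	return out, errCh
}

func main() {
	out, errCh := runJobs([]string{"a.txt", "b.txt"})
	for line := range out { // drain output until the worker closes it
		fmt.Println(line)
	}
	if err := <-errCh; err != nil { // then collect the final error
		fmt.Println("error:", err)
	}
}
```

Note the defer order matters: because deferred calls run last-in-first-out, the output channel is closed before the error is sent, exactly as in the original code.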

Now to the function that does the actual upload work, fileAdder.AddFile(file). fileAdder is the Adder created above (the object that emits AddedObject values); it carries several helper methods, and AddFile is its public entry point for uploads, defined in core/coreunix/add.go.

func (adder *Adder) AddFile(file files.File) error {
   if adder.Pin {
      adder.unlocker = adder.blockstore.PinLock()
   }
   defer func() {
      if adder.unlocker != nil {
         adder.unlocker.Unlock()
      }
   }()

   return adder.addFile(file, false, nil)
}

This mainly sets up a lock, then calls the internal addFile method; at this point the upload is already underway. I won't analyze the rest here; interested readers can dig in themselves.
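The shape of AddFile is worth noting: take a lock conditionally, then unconditionally defer a nil-checked unlock, which keeps every return path correct. A generic sketch of the same shape (pinLock and unlocker here are stand-ins of my own; the real PinLock guards the blockstore against garbage collection during the add):

```go
package main

import (
	"fmt"
	"sync"
)

// unlocker mimics the small interface returned by blockstore.PinLock().
type unlocker interface{ Unlock() }

type mutexUnlocker struct{ mu *sync.Mutex }

func (m *mutexUnlocker) Unlock() { m.mu.Unlock() }

var gcLock sync.Mutex

// pinLock takes the GC lock and hands back an unlocker, as PinLock does.
func pinLock() unlocker {
	gcLock.Lock()
	return &mutexUnlocker{mu: &gcLock}
}

// addFile mirrors the conditional-lock / nil-checked-unlock pattern.
func addFile(pin bool) {
	var u unlocker
	if pin {
		u = pinLock() // only lock when we will actually pin
	}
	defer func() {
		if u != nil { // nil check: we may never have locked
			u.Unlock()
		}
	}()
	fmt.Println("adding with pin =", pin)
}

func main() {
	addFile(true)
	addFile(false) // no lock taken; the deferred unlock is a no-op
}
```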

Below is the full content of the command-line entry file add.go:

package commands

import (
   "errors"
   "fmt"
   "io"
   "os"
   "strings"

   // block service interface
   blockservice "github.com/ipfs/go-ipfs/blockservice"
   // core API
   core "github.com/ipfs/go-ipfs/core"
   // helper methods and structs for add
   "github.com/ipfs/go-ipfs/core/coreunix"
   // filestore interface
   filestore "github.com/ipfs/go-ipfs/filestore"
   // DAG service interface
   dag "github.com/ipfs/go-ipfs/merkledag"
   // provides a new thread-safe DAG for testing
   dagtest "github.com/ipfs/go-ipfs/merkledag/test"
   // in-memory model of a mutable IPFS filesystem
   mfs "github.com/ipfs/go-ipfs/mfs"
   // unixfs formats
   ft "github.com/ipfs/go-ipfs/unixfs"

   // console entry helpers; the following are all utility packages

   cmds "gx/ipfs/QmNueRyPRQiV7PUEpnP4GgGLuK1rKQLaRW7sfPvUetYig1/go-ipfs-cmds"
   mh "gx/ipfs/QmPnFwZ2JXKnXgMw8CdBPxn7FWh6LLdjUjxV1fKHuJnkr8/go-multihash"
   pb "gx/ipfs/QmPtj12fdwuAqj9sBSTNUxBNu8kCGNp8b3o8yUzMm5GHpq/pb"
   offline "gx/ipfs/QmS6mo1dPpHdYsVkm27BRZDLxpKBCiJKUH8fHX15XFfMez/go-ipfs-exchange-offline"
   bstore "gx/ipfs/QmadMhXJLHMFjpRmh85XjpmVDkEtQpNYEZNRpWRvYVLrvb/go-ipfs-blockstore"
   cmdkit "gx/ipfs/QmdE4gMduCKCGAcczM2F5ioYDfdeKuPix138wrES1YSr7f/go-ipfs-cmdkit"
   files "gx/ipfs/QmdE4gMduCKCGAcczM2F5ioYDfdeKuPix138wrES1YSr7f/go-ipfs-cmdkit/files"
)

// ErrDepthLimitExceeded indicates that the max depth has been exceeded.
var ErrDepthLimitExceeded = fmt.Errorf("depth limit exceeded")

// option-name constants for the command
const (
   quietOptionName       = "quiet"
   quieterOptionName     = "quieter"
   silentOptionName      = "silent"
   progressOptionName    = "progress"
   trickleOptionName     = "trickle"
   wrapOptionName        = "wrap-with-directory"
   hiddenOptionName      = "hidden"
   onlyHashOptionName    = "only-hash"
   chunkerOptionName     = "chunker"
   pinOptionName         = "pin"
   rawLeavesOptionName   = "raw-leaves"
   noCopyOptionName      = "nocopy"
   fstoreCacheOptionName = "fscache"
   cidVersionOptionName  = "cid-version"
   hashOptionName        = "hash"
)

// output channel buffer size
const adderOutChanSize = 8

// the command definition
var AddCmd = &cmds.Command{
   // help text for the command
   Helptext: cmdkit.HelpText{
      Tagline: "Add a file or directory to ipfs.",
      ShortDescription: `
Adds contents of <path> to ipfs. Use -r to add directories (recursively).
`,
      LongDescription: `
Adds contents of <path> to ipfs. Use -r to add directories.
Note that directories are added recursively, to form the ipfs MerkleDAG.

The wrap option, '-w', wraps the file (or files, if using the recursive option) in a directory. This directory contains only the files which have been added, and means that the file retains its filename. For example:

  > ipfs add example.jpg
  added QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH example.jpg
  > ipfs add example.jpg -w
  added QmbFMke1KXqnYyBBWxB74N4c5SBnJMVAiMNRcGu6x1AwQH example.jpg
  added QmaG4FuMqEBnQNn3C8XJ5bpW8kLs7zq2ZXgHptJHbKDDVx

You can now refer to the added file in a gateway, like so:

/ipfs/QmaG4FuMqEBnQNn3C8XJ5bpW8kLs7zq2ZXgHptJHbKDDVx/example.jpg

The chunker option, '-s', specifies the chunking strategy that dictates how to break files into blocks. Blocks with same content can be deduplicated. The default is a fixed block size of 256 * 1024 bytes, 'size-262144'. Alternatively, you can use the rabin chunker for content defined chunking by specifying rabin-[min]-[avg]-[max] (where min/avg/max refer to the resulting chunk sizes). Using other chunking strategies will produce different hashes for the same file.

  > ipfs add --chunker=size-2048 ipfs-logo.svg
  added QmafrLBfzRLV4XSH1XcaMMeaXEUhDJjmtDfsYU95TrWG87 ipfs-logo.svg
  > ipfs add --chunker=rabin-512-1024-2048 ipfs-logo.svg
  added Qmf1hDN65tR55Ubh2RN1FPxr69xq3giVBz1KApsresY8Gn ipfs-logo.svg

You can now check what blocks have been created by:

  > ipfs object links QmafrLBfzRLV4XSH1XcaMMeaXEUhDJjmtDfsYU95TrWG87
  QmY6yj1GsermExDXoosVE3aSPxdMNYr6aKuw3nA8LoWPRS 2059
  Qmf7ZQeSxq2fJVJbCmgTrLLVN9tDR9Wy5k75DxQKuz5Gyt 1195
  > ipfs object links Qmf1hDN65tR55Ubh2RN1FPxr69xq3giVBz1KApsresY8Gn
  QmY6yj1GsermExDXoosVE3aSPxdMNYr6aKuw3nA8LoWPRS 2059
  QmerURi9k4XzKCaaPbsK6BL5pMEjF7PGphjDvkkjDtsVf3 868
  QmQB28iwSriSUSMqG2nXDTLtdPHgWb4rebBrU7Q1j4vxPv 338
`,
   },
   // argument specification
   Arguments: []cmdkit.Argument{
      cmdkit.FileArg("path", true, true, "The path to a file to be added to ipfs.").EnableRecursive().EnableStdin(),
   },
   // option definitions
   Options: []cmdkit.Option{
      // note: every option marked (experimental) must be enabled in the config before use
      cmds.OptionRecursivePath, // a builtin option that allows recursive paths (-r, --recursive)
      cmdkit.BoolOption(quietOptionName, "q", "Write minimal output."),
      cmdkit.BoolOption(quieterOptionName, "Q", "Write only final hash."),
      cmdkit.BoolOption(silentOptionName, "Write no output."),
      cmdkit.BoolOption(progressOptionName, "p", "Stream progress data."),
      cmdkit.BoolOption(trickleOptionName, "t", "Use trickle-dag format for dag generation."),
      cmdkit.BoolOption(onlyHashOptionName, "n", "Only chunk and hash - do not write to disk."),
      cmdkit.BoolOption(wrapOptionName, "w", "Wrap files with a directory object."),
      cmdkit.BoolOption(hiddenOptionName, "H", "Include files that are hidden. Only takes effect on recursive add."),
      cmdkit.StringOption(chunkerOptionName, "s", "Chunking algorithm, size-[bytes] or rabin-[min]-[avg]-[max]").WithDefault("size-262144"),
      cmdkit.BoolOption(pinOptionName, "Pin this object when adding.").WithDefault(true),
      cmdkit.BoolOption(rawLeavesOptionName, "Use raw blocks for leaf nodes. (experimental)"),
      cmdkit.BoolOption(noCopyOptionName, "Add the file using filestore. Implies raw-leaves. (experimental)"),
      cmdkit.BoolOption(fstoreCacheOptionName, "Check the filestore for pre-existing blocks. (experimental)"),
      cmdkit.IntOption(cidVersionOptionName, "CID version. Defaults to 0 unless an option that depends on CIDv1 is passed. (experimental)"),
      cmdkit.StringOption(hashOptionName, "Hash function to use. Implies CIDv1 if not sha2-256. (experimental)").WithDefault("sha2-256"),
   },
   // PreRun sets option defaults before Run executes
   PreRun: func(req *cmds.Request, env cmds.Environment) error {
      quiet, _ := req.Options[quietOptionName].(bool)
      quieter, _ := req.Options[quieterOptionName].(bool)
      quiet = quiet || quieter

silent, _ := req.Options[silentOptionName].(bool)

  if quiet || silent {
     return nil
  }

  // ipfs cli progress bar defaults to true unless quiet or silent is used
  _, found := req.Options[progressOptionName].(bool)
  if !found {
     req.Options[progressOptionName] = true
  }

  return nil

   },
   // Run executes when the command is invoked from the console
   Run: func(req *cmds.Request, res cmds.ResponseEmitter, env cmds.Environment) {
      // get the IPFS node
      n, err := GetNode(env)
      if err != nil {
         res.SetError(err, cmdkit.ErrNormal)
         return
      }
      // get the node's global repo configuration
      cfg, err := n.Repo.Config()
      if err != nil {
         res.SetError(err, cmdkit.ErrNormal)
         return
      }

      // check if repo will exceed storage limit if added
      // TODO: this doesn't handle the case if the hashed file is already in blocks (deduplicated)
      // TODO: conditional GC is disabled due to it is somehow not possible to pass the size to the daemon
      //if err := corerepo.ConditionalGC(req.Context(), n, uint64(size)); err != nil {
      //   res.SetError(err, cmdkit.ErrNormal)
      //   return
      //}

  // type-assert each option value to see whether the option was set
  progress, _ := req.Options[progressOptionName].(bool)
  trickle, _ := req.Options[trickleOptionName].(bool)
  wrap, _ := req.Options[wrapOptionName].(bool)
  hash, _ := req.Options[onlyHashOptionName].(bool)
  hidden, _ := req.Options[hiddenOptionName].(bool)
  silent, _ := req.Options[silentOptionName].(bool)
  chunker, _ := req.Options[chunkerOptionName].(string)
  dopin, _ := req.Options[pinOptionName].(bool)
  rawblks, rbset := req.Options[rawLeavesOptionName].(bool)
  nocopy, _ := req.Options[noCopyOptionName].(bool)
  fscache, _ := req.Options[fstoreCacheOptionName].(bool)
  cidVer, cidVerSet := req.Options[cidVersionOptionName].(int)
  hashFunStr, _ := req.Options[hashOptionName].(string)

  // The arguments are subject to the following constraints.
  //
  // nocopy -> filestoreEnabled
  // nocopy -> rawblocks
  // (hash != sha2-256) -> cidv1

  // NOTE: 'rawblocks -> cidv1' is missing. Legacy reasons.

  // nocopy -> filestoreEnabled
   // experimental feature; the filestore must be enabled in the config
  if nocopy && !cfg.Experimental.FilestoreEnabled {
     res.SetError(filestore.ErrFilestoreNotEnabled, cmdkit.ErrClient)
     return
  }
	// experimental feature
  // nocopy -> rawblocks
  if nocopy && !rawblks {
     // fixed?
     if rbset {
        res.SetError(
           fmt.Errorf("nocopy option requires '--raw-leaves' to be enabled as well"),
           cmdkit.ErrNormal,
        )
        return
     }
     // No, satisfy mandatory constraint.
     rawblks = true
  }
  // experimental feature
  // (hash != "sha2-256") -> CIDv1
  if hashFunStr != "sha2-256" && cidVer == 0 {
     if cidVerSet {
        res.SetError(
           errors.New("CIDv0 only supports sha2-256"),
           cmdkit.ErrClient,
        )
        return
     }
     cidVer = 1
  }
  // experimental feature
  // cidV1 -> raw blocks (by default)
  if cidVer > 0 && !rbset {
     rawblks = true
  }
  // experimental feature
  prefix, err := dag.PrefixForCidVersion(cidVer)
  if err != nil {
     res.SetError(err, cmdkit.ErrNormal)
     return
  }

  hashFunCode, ok := mh.Names[strings.ToLower(hashFunStr)]
  if !ok {
     res.SetError(fmt.Errorf("unrecognized hash function: %s", strings.ToLower(hashFunStr)), cmdkit.ErrNormal)
     return
  }

  prefix.MhType = hashFunCode
  prefix.MhLength = -1
	// with -n (only-hash), only compute block hashes; write nothing to disk
  if hash {
     nilnode, err := core.NewNode(n.Context(), &core.BuildCfg{
        //TODO: need this to be true or all files
        // hashed will be stored in memory!
        NilRepo: true,
     })
     if err != nil {
        res.SetError(err, cmdkit.ErrNormal)
        return
     }
     n = nilnode
  }
	// the blockstore used for this add
  addblockstore := n.Blockstore
   // unless fscache/nocopy is in play, wrap it in a GC-locked blockstore
  if !(fscache || nocopy) {
     addblockstore = bstore.NewGCBlockstore(n.BaseBlocks, n.GCLocker)
  }
	// rarely taken path; possibly leftover from an older version
  exch := n.Exchange
  local, _ := req.Options["local"].(bool)
  if local {
     exch = offline.Exchange(addblockstore)
  }
  // build a block service over the blockstore with the chosen exchange strategy
  bserv := blockservice.New(addblockstore, exch) // hash security 001
  // hand the blocks over to the DAG service
  dserv := dag.NewDAGService(bserv)
  // new output channel with a fixed buffer size
  outChan := make(chan interface{}, adderOutChanSize)
	// create a new Adder for this file-add operation
  fileAdder, err := coreunix.NewAdder(req.Context, n.Pinning, n.Blockstore, dserv)
  if err != nil {
     res.SetError(err, cmdkit.ErrNormal)
     return
  }
	// configure the adder from the parsed options
  fileAdder.Out = outChan
  fileAdder.Chunker = chunker
  fileAdder.Progress = progress
  fileAdder.Hidden = hidden
  fileAdder.Trickle = trickle
  fileAdder.Wrap = wrap
  fileAdder.Pin = dopin
  fileAdder.Silent = silent
  fileAdder.RawLeaves = rawblks
  fileAdder.NoCopy = nocopy
  fileAdder.Prefix = &prefix
	// with -n (only-hash), only compute block hashes; write nothing to disk
  if hash {
      // use a new thread-safe mock DAG so nothing is written to disk
     md := dagtest.Mock()
     emptyDirNode := ft.EmptyDirNode()
     // Use the same prefix for the "empty" MFS root as for the file adder.
     emptyDirNode.Prefix = *fileAdder.Prefix
     mr, err := mfs.NewRoot(req.Context, md, emptyDirNode, nil)
     if err != nil {
        res.SetError(err, cmdkit.ErrNormal)
        return
     }

     fileAdder.SetMfsRoot(mr)
  }
	// the upload routine
  addAllAndPin := func(f files.File) error {
     // Iterate over each top-level file and add individually. Otherwise the
     // single files.File f is treated as a directory, affecting hidden file
     // semantics.
      // read one file at a time and feed it to fileAdder
     for {
        file, err := f.NextFile()
        if err == io.EOF {
           // Finished the list of files.
           break
        } else if err != nil {
           return err
        }
        if err := fileAdder.AddFile(file); err != nil {
           return err
        }
     }
	
     // copy intermediary nodes from editor to our actual dagservice
      // Finalize flushes the mfs root directory and returns its root node.
     _, err := fileAdder.Finalize()
     if err != nil {
        return err
     }

     if hash {
        return nil
     }
	// PinRoot recursively pins the root node,
	// writing the pin state to the backing datastore.
     return fileAdder.PinRoot()
  }

  errCh := make(chan error)
   
   // run the upload in a goroutine
  go func() {
     var err error
     // deferred: report the final error (runs after close(outChan))
     defer func() { errCh <- err }()
     defer close(outChan)
     // do the upload, capturing any error
     err = addAllAndPin(req.Files)
  }()
	// close the response emitter when Run returns
  defer res.Close()
	// stream the output channel through the emitter
  err = res.Emit(outChan)
  if err != nil {
     log.Error(err)
     return
  }
   // wait for the upload goroutine's final error
  err = <-errCh
  if err != nil {
     res.SetError(err, cmdkit.ErrNormal)
  }

   },
   // PostRun formats the results for the command line
   PostRun: cmds.PostRunMap{
      cmds.CLI: func(req *cmds.Request, re cmds.ResponseEmitter) cmds.ResponseEmitter {
         // create a channel-backed request/response pair
         reNext, res := cmds.NewChanResponsePair(req)
         // channel for added-object output (file hashes etc.)
         outChan := make(chan interface{})
         // channel for the total input size
         sizeChan := make(chan int64, 1)

         // try to learn the input size for the progress bar
         sizeFile, ok := req.Files.(files.SizeFile)
         if ok {
            // Could be slow.
            go func() {
               size, err := sizeFile.Size()
               if err != nil {
                  log.Warningf("error getting files size: %s", err)
                  // see comment above
                  return
               }
               sizeChan <- size
            }()
         } else {
            // we don't need to error, the progress bar just
            // won't know how big the files are
            log.Warning("cannot determine size of input file")
         }

         // render the progress bar and print results
         progressBar := func(wait chan struct{}) {
            defer close(wait)

quiet, _ := req.Options[quietOptionName].(bool)
        quieter, _ := req.Options[quieterOptionName].(bool)
        quiet = quiet || quieter

        progress, _ := req.Options[progressOptionName].(bool)
		
        var bar *pb.ProgressBar
        if progress {
           bar = pb.New64(0).SetUnits(pb.U_BYTES)
           bar.ManualUpdate = true
           bar.ShowTimeLeft = false
           bar.ShowPercent = false
           bar.Output = os.Stderr
           bar.Start()
        }

        lastFile := ""
        lastHash := ""
        var totalProgress, prevFiles, lastBytes int64

     LOOP:
        for {
           select {
           case out, ok := <-outChan:
              if !ok {
                 if quieter {
                    fmt.Fprintln(os.Stdout, lastHash)
                 }

                 break LOOP
              }
              output := out.(*coreunix.AddedObject)
              if len(output.Hash) > 0 {
                 lastHash = output.Hash
                 if quieter {
                    continue
                 }

                 if progress {
                    // clear progress bar line before we print "added x" output
                    fmt.Fprintf(os.Stderr, "\033[2K\r")
                 }
                 if quiet {
                    fmt.Fprintf(os.Stdout, "%s\n", output.Hash)
                 } else {
                    fmt.Fprintf(os.Stdout, "added %s %s\n", output.Hash, output.Name)
                 }

              } else {
                 if !progress {
                    continue
                 }

                 if len(lastFile) == 0 {
                    lastFile = output.Name
                 }
                 if output.Name != lastFile || output.Bytes < lastBytes {
                    prevFiles += lastBytes
                    lastFile = output.Name
                 }
                 lastBytes = output.Bytes
                 delta := prevFiles + lastBytes - totalProgress
                 totalProgress = bar.Add64(delta)
              }

              if progress {
                 bar.Update()
              }
           case size := <-sizeChan:
              if progress {
                 bar.Total = size
                 bar.ShowPercent = true
                 bar.ShowBar = true
                 bar.ShowTimeLeft = true
              }
           case <-req.Context.Done():
              // don't set or print error here, that happens in the goroutine below
              return
           }
        }
     }
		// drive output handling and the progress bar during the upload
     go func() {
        // defer order important! First close outChan, then wait for output to finish, then close re
        defer re.Close()

        if e := res.Error(); e != nil {
           defer close(outChan)
           re.SetError(e.Message, e.Code)
           return
        }

        wait := make(chan struct{})
        go progressBar(wait)

        defer func() { <-wait }()
        defer close(outChan)

        for {
           v, err := res.Next()
           if !cmds.HandleError(err, res, re) {
              break
           }

           select {
           case outChan <- v:
           case <-req.Context.Done():
              re.SetError(req.Context.Err(), cmdkit.ErrNormal)
              return
           }
        }
     }()

     return reNext
  },

   },
   // the type of object this command emits
   Type: coreunix.AddedObject{},
}
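As an aside, the option constraints that Run enforces (nocopy implies raw-leaves, a non-sha2-256 hash implies CIDv1, and CIDv1 defaults to raw leaves) can be collapsed into a small pure function. A sketch under hypothetical names (addOpts and resolve are mine, not go-ipfs identifiers), mirroring the checks in the source above:

```go
package main

import (
	"errors"
	"fmt"
)

// addOpts mirrors the option values Run extracts from the request.
type addOpts struct {
	nocopy    bool
	rawLeaves bool
	rbSet     bool // raw-leaves explicitly set by the user
	cidVer    int
	cidSet    bool // cid-version explicitly set by the user
	hashFun   string
}

// resolve applies the constraints in the same order as add.go's Run.
func resolve(o addOpts) (addOpts, error) {
	// nocopy -> rawblocks
	if o.nocopy && !o.rawLeaves {
		if o.rbSet {
			return o, errors.New("nocopy option requires '--raw-leaves' to be enabled as well")
		}
		o.rawLeaves = true
	}
	// (hash != "sha2-256") -> CIDv1
	if o.hashFun != "sha2-256" && o.cidVer == 0 {
		if o.cidSet {
			return o, errors.New("CIDv0 only supports sha2-256")
		}
		o.cidVer = 1
	}
	// cidV1 -> raw blocks (by default)
	if o.cidVer > 0 && !o.rbSet {
		o.rawLeaves = true
	}
	return o, nil
}

func main() {
	// a non-default hash silently upgrades to CIDv1 and raw leaves
	got, _ := resolve(addOpts{hashFun: "blake2b-256"})
	fmt.Printf("cidVer=%d rawLeaves=%v\n", got.cidVer, got.rawLeaves)
}
```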
