XIV(5)-- Data Recovery Protection (XDRP)

Like most storage systems, XIV provides multi-site disaster recovery solutions. XIV Data Recovery Protection (XDRP) can be implemented in three ways: Synchronous Mirroring, Asynchronous Mirroring, and Data Migration. In addition, it naturally also supports FlashCopy and Volume Copy.

1. Synchronous Mirroring

XDRP performs a real-time copy between two or more XIV systems over Fibre Channel or iSCSI links (for long-distance disaster recovery, FC is usually preferred for reliability and bandwidth). One system is the Master and the other is the Slave. The figure below shows how the two XIV systems work together; a minimal sketch of the synchronous write path follows it.

[Figure: synchronous mirroring between a Master and a Slave XIV system]
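
Below is a minimal sketch, in Python, of the synchronous write path shown in the figure: the host write is acknowledged only after both the Master and the Slave have committed it. All names here (MasterVolume, slave_link, replicate) are hypothetical and are not real XIV APIs.

class MasterVolume:
    def __init__(self, slave_link):
        self.local_store = {}         # block address -> data on the Master
        self.slave_link = slave_link  # forwards writes to the Slave over FC/iSCSI

    def write(self, block, data):
        # 1. Commit the write locally on the Master.
        self.local_store[block] = data
        # 2. Replicate the same write synchronously to the Slave.
        self.slave_link.replicate(block, data)
        # 3. Only now is the write acknowledged back to the host.
        return "ACK"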

 

 

  • Three states:

–Initialization: synchronization is in progress; data is being copied from the Master to the Slave

–Synchronized: synchronization is complete; the data on both sides is consistent

–Unsynchronized: something has gone wrong with the remote mirror

 

Once the remote mirror runs into a problem, for example the link goes down, the Master starts tracking the changes to its source volumes.

–When the link is restored, all tracked changes are copied to the Slave immediately; this data is called "uncommitted data". While the volume is being resynced, the system automatically creates a special snapshot on the Slave, called the "last consistent snapshot". It is a system snapshot: users cannot delete it manually, it is deleted automatically once the volume is fully resynced, and it ignores size limits, taking up all remaining disk space if needed.
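
A minimal sketch of this link-failure behaviour, assuming hypothetical Master and Slave objects (none of these names are real XIV APIs): changed blocks are tracked while the link is down, and on resync the Slave first preserves a last consistent snapshot before the uncommitted data is applied.

class Master:
    def __init__(self, slave):
        self.slave = slave
        self.link_up = True
        self.local_store = {}
        self.changed_blocks = {}  # blocks written while the link was down

    def write(self, block, data):
        self.local_store[block] = data
        if self.link_up:
            self.slave.apply(block, data)      # normal synchronous path
        else:
            self.changed_blocks[block] = data  # tracked change ("uncommitted data")

    def resync(self):
        # Before the uncommitted data is applied, the Slave preserves its last
        # consistent image in a system snapshot (the "last consistent snapshot").
        self.slave.create_last_consistent_snapshot()
        for block, data in self.changed_blocks.items():
            self.slave.apply(block, data)
        self.changed_blocks.clear()
        # Once the volume is fully resynced, the snapshot is deleted automatically.
        self.slave.delete_last_consistent_snapshot()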

 

Consistency Groups -- take a snapshot of a group of volumes at the same time. For example, in a database application the database files and the log files may sit on different volumes; to guarantee consistency you must create a consistency group containing both the database-file volume and the log-file volume, and then run the FlashCopy against the consistency group.


  • You can switch the role of the entire consistency group

–This will change the Master/Slave location of all member volumes in the consistency group

  • Both the Master and the Slave system must be configured with an empty CG that is configured for mirroring

  • After the CG mirror is created and synchronized you can start adding volumes to it

–All volumes must be in a synchronized state and of the same sync type (a minimal setup sketch follows this list)
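
A minimal sketch of the consistency-group rules listed above; the helper names (add_volume_to_mirrored_cg, switch_cg_role) and the attributes they use are hypothetical, not real XCLI commands or XIV APIs.

def add_volume_to_mirrored_cg(cg, volume):
    # The CG mirror must already exist and be synchronized on both systems.
    if not cg.mirror_defined or not cg.mirror_synchronized:
        raise RuntimeError("CG mirror must be created and synchronized first")
    # Each volume must already be synchronized and use the same sync type as the CG.
    if volume.mirror_state != "synchronized":
        raise RuntimeError("volume must be in a synchronized state")
    if volume.sync_type != cg.sync_type:
        raise RuntimeError("volume sync type must match the CG sync type")
    cg.volumes.append(volume)

def switch_cg_role(cg):
    # Switching the role of the CG changes Master/Slave for every member volume.
    for volume in cg.volumes:
        volume.role = "Slave" if volume.role == "Master" else "Master"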

 

2. Asynchronous Mirroring

Only the changed data blocks are sent to the Slave, and synchronization runs at a predefined interval, for example 20s (min_interval), 30s, 1m, 2m, 5m, 10m, 15m, 30m, 1h, 2h, 3h, 6h, 8h, or 12h.

[Figure: asynchronous mirroring overview]

Mirroring starts with initialization, which can be either online initialization or offline initialization (both are explained below).

 

–Once initialization is complete, the Master determines the scope of each synchronization

After a new mirror is defined, the Master takes a snapshot before mirroring starts to represent the initial state.

[Figure: snapshot taken when a new mirror is defined]

 

The XIV system uses special snapshots to determine the scope of each sync; a small sketch of how they are used follows the figure below.

Two snapshots are maintained on the Master:

–The most_recent snapshot denotes the most recent mirroring-related snapshot of the Master

–The last_replicated snapshot reflects the most recent state of the Master which has a consistent replica on the Slave

One snapshot is maintained on the Slave

–The last_replicated snapshot reflects the most recent state of the Master which has a consistent replica on the Slave

[Figure: most_recent and last_replicated snapshots on the Master and the Slave]
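
A minimal sketch of one asynchronous sync interval using the most_recent and last_replicated snapshots described above; the object and method names are hypothetical, not the actual XIV implementation.

def run_sync_interval(master, slave):
    # Take a new point-in-time image of the Master.
    master.most_recent = master.take_snapshot()
    # Only the blocks that changed since the last replicated state are sent.
    delta = master.diff(master.last_replicated, master.most_recent)
    for block, data in delta.items():
        slave.apply(block, data)
    # The Slave now holds a consistent replica of most_recent,
    # so last_replicated advances on both sides.
    slave.last_replicated = slave.take_snapshot()
    master.last_replicated = master.most_recent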

 

Offline initialization

-Offline Initialization (previously dubbed ‘Truck‘ initialization) enables initialization of a remote mirror peer (the ‘Slave’) without being required to replicate the contents of the local peer (the ‘Master’) over the link (a.k.a. online initialization)

-The feature applies only to asynchronous mirroring and entails validation of the replica data prior to ongoing mirroring; it reduces bandwidth consumption and shortens initialization time, and the replica that was "transported" to the remote site is checked before mirroring starts

 

Asynchronous Replication Offline Initialization ("Truck" Mode) process


1.Create Snapshot of future Master volume

2.Backup Snapshot to transportable media (e.g. tape)

3.Transport media to remote site

4.Restore future Slave volume from transported media

5.Create async mirror specifying ‘offline initialization’

6.Activate async mirror

–Offline initialization will begin

- Checksum exchange on 64K boundaries

- Reduces bandwidth and time required for initialization (see the checksum sketch after the figure below)

[Figure: offline initialization ("Truck" mode) process]
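
A simplified illustration of the checksum exchange on 64 KB boundaries: only the chunks whose checksums differ between Master and Slave need to be re-sent over the link. The use of MD5 here is an assumption for illustration only; it is not the actual XIV wire protocol.

import hashlib

CHUNK = 64 * 1024  # 64 KB boundary

def checksums(volume_bytes):
    # One checksum per 64 KB chunk of the volume.
    return [hashlib.md5(volume_bytes[i:i + CHUNK]).hexdigest()
            for i in range(0, len(volume_bytes), CHUNK)]

def offline_init_delta(master_bytes, slave_bytes):
    """Return the indexes of the 64 KB chunks that must be re-sent to the Slave."""
    return [i for i, (m, s) in enumerate(zip(checksums(master_bytes),
                                             checksums(slave_bytes)))
            if m != s]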

3. Data Migration

  XIV DM can migrate data from any other storage system to XIV over FC or iSCSI without taking the application down, enabling online migration in a production environment. (Strictly speaking this is not entirely accurate: there is a short outage in the middle, namely Step 2 of the DM process below.)

 

While data is being migrated, XIV continues to serve the I/O coming from the host.

–All reads are handled according to where the data currently resides:

 If the data has already been written to XIV, it is read from XIV

 If the data has not yet been written to XIV, the read that the host sent to XIV is fetched from the legacy storage and returned to the host

–XIV handles all writes coming from the host:

  Writes can be handled in two ways, depending on whether you chose Source Updating or No Source Updating when you defined the data migration

Source Updating

--- Data is written to both storage systems (the legacy storage and XIV), i.e. the source storage is kept updated during the migration. As with XDRP, the write is acknowledged to the host only after it has been written to both XIV and the legacy storage. If communication with the legacy storage is lost during the migration, XIV fails the write as well.

No Source Updating

--- No data is written to the legacy storage during the migration, i.e. the data on the two storage systems is not kept in sync.
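
A minimal sketch of how host I/O might be routed during migration, following the read and write rules above; the class and method names are hypothetical, not the real XIV implementation.

class DataMigration:
    def __init__(self, xiv_blocks, legacy, source_updating=True):
        self.xiv = xiv_blocks           # blocks already migrated or written to XIV
        self.legacy = legacy            # the legacy storage system
        self.source_updating = source_updating

    def read(self, block):
        if block in self.xiv:           # data already on XIV: read it locally
            return self.xiv[block]
        return self.legacy.read(block)  # otherwise fetch from the legacy storage

    def write(self, block, data):
        self.xiv[block] = data
        if self.source_updating:
            # Source Updating: the legacy storage is kept updated, and the write is
            # acknowledged only after both systems have it; a legacy failure fails
            # the write.
            self.legacy.write(block, data)
        return "ACK"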


XIV Data Migration Process

1.Server is connected to legacy system and accessing legacy LUNs

2.Unmap LUNs and disconnect server from legacy system

–Remove any proprietary device drivers from server

–Prepare server to use native multipathing (MPIO)

3.Connect XIV to legacy system

–Define XIV as a Linux ‘host’ to legacy system

–Map legacy LUNs to XIV ‘host’

4.Start XIV data migration

–"Keep Source Updated" is recommended

–XIV reads LUN sequentially

5.Connect server to XIV

–Map new XIV LUN to server

6.Production resumes and continues during migration

7.Disconnect legacy storage from XIV after migration is complete

8.Discard or repurpose legacy storage

 

[Figure: XIV data migration process]


Appendix:

XIV_1300023>>target_list

Name          SCSI Type   Connected   

XIV_1310138       FC          yes   


XIV_1300023>>target_connectivity_list

Target Name   Remote Port        FC Port         IP Interface   Active   Up    

XIV_1310138   50017380279A0152   1:FC_Port:7:2                  yes      yes   


For more on Copy Services and Data Migration, see the Redbook SG24-7759-02, IBM XIV Storage System: Copy Services and Migration. BTW, all IBM Redbooks can be found at http://www.redbooks.ibm.com/; it is a very handy site and a must-have for engineers.

-------------------------

This wraps up the XIV series. The coverage here is very basic, only scratching the surface; if future work involves XIV I will study it in more detail.
