Sharing in-memory data across nodes with mnesia

Overview

Many scenarios require sharing in-memory data across a set of nodes. For example, with a set of horizontally peered gateways, any gateway node should be able to read specific in-memory state from all the gateways.
The usual approach is to use a service that provides distributed-consistency guarantees, such as ZooKeeper or etcd.
Using ZooKeeper or etcd for inter-node data synchronization certainly works. However:

  • Erlang's built-in data types (such as pids) need extra serialization/deserialization handling.
  • We did not want to introduce a complex external system.

I ultimately used a gossip protocol to share the data: it is simple and controllable, solves the pain points above, and still achieves eventual consistency across nodes.
Erlang's built-in mnesia also looks like a good fit for this scenario. When I made the original choice I did not have a thorough understanding of mnesia's implementation, so this post explores the feasibility of using mnesia and how it is implemented:
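As an aside, the gossip approach boils down to a tiny anti-entropy merge. The sketch below is illustrative Python (names are my own, not from any library): each node keeps a version-stamped key/value map, and a pairwise exchange converges both sides.

```python
# Minimal sketch of gossip-style anti-entropy (illustrative, not production code).
# Each node holds {key: (value, version)}; on exchange, the higher version wins.
def merge(local, remote):
    """Merge a peer's state into ours: for each key, keep the higher version."""
    for key, (value, version) in remote.items():
        if key not in local or local[key][1] < version:
            local[key] = (value, version)
    return local

# Two nodes that have each seen different updates converge after one round trip.
node_a = {"gw1": ("pid_a", 2)}
node_b = {"gw1": ("pid_a", 1), "gw2": ("pid_b", 1)}
merge(node_a, node_b)
merge(node_b, node_a)
assert node_a == node_b  # eventual consistency via repeated pairwise merges
```

Concurrent writes to the same key still need a tie-breaking rule (e.g. node id), but for the gateway-state use case each node only writes its own keys, so version-wins is enough.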

  • How are distributed transactions implemented?
  • How is data synchronized when a new node joins?
  • Is there a notion of a master node? How does recovery work after a network partition?
  • What level of consistency does it guarantee?

Sharing data across nodes with mnesia

~/platform/launcher(master*) » iex --sname t1
Erlang/OTP 21 [erts-10.3.5.6] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe]

Interactive Elixir (1.7.3) - press Ctrl+C to exit (type h() ENTER for help)
iex(t1@ubuntu)1> alias :mnesia, as: Mnesia
:mnesia
iex(t1@ubuntu)2> Mnesia.start()
:ok
iex(t1@ubuntu)3> Mnesia.create_table(Person, [attributes: [:id, :name, :job]])                
{:atomic, :ok}
iex(t1@ubuntu)4> Mnesia.dirty_write({Person, 1, "Seymour Skinner", "Principal"})
:ok
iex(t1@ubuntu)5> Mnesia.dirty_read({Person, 1})
[{Person, 1, "Seymour Skinner", "Principal"}]
iex(t1@ubuntu)6> Mnesia.table_info(Person, :all)
[
  access_mode: :read_write,
  active_replicas: [:t1@ubuntu],
  all_nodes: [:t1@ubuntu],
  arity: 4,
  attributes: [:id, :name, :job],
  checkpoints: [],
  commit_work: [],
  cookie: {{1593853684922256987, -576460752303423391, 1}, :t1@ubuntu},
  cstruct: {:cstruct, Person, :set, [:t1@ubuntu], [], [], [], 0, :read_write,
   false, [], [], false, Person, [:id, :name, :job], [], [], [],
   {{1593853684922256987, -576460752303423391, 1}, :t1@ubuntu}, {{2, 0}, []}},
  disc_copies: [],
  disc_only_copies: [],
  external_copies: [],
  frag_properties: [],
  index: [],
  index_info: {:index, :set, []},
  load_by_force: false,
  load_node: :t1@ubuntu,
  load_order: 0,
  load_reason: {:dumper, :create_table},
  local_content: false,
  majority: false,
  master_nodes: [],
  memory: 321,
  ram_copies: [:t1@ubuntu],
  record_name: Person,
  record_validation: {Person, 4, :set},
  size: 1,
  snmp: [],
  storage_properties: [],
  storage_type: :ram_copies,
  subscribers: [],
  type: :set,
  user_properties: [],
  version: {{2, 0}, []},
  where_to_commit: [t1@ubuntu: :ram_copies],
  where_to_read: :t1@ubuntu, 
  where_to_wlock: {[:t1@ubuntu], false},
  where_to_write: [:t1@ubuntu],
  wild_pattern: {Person, :_, :_, :_}
]

Start t2:

~/platform/launcher(master*) » iex --sname t2
Erlang/OTP 21 [erts-10.3.5.6] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe]

Interactive Elixir (1.7.3) - press Ctrl+C to exit (type h() ENTER for help)
iex(t2@ubuntu)1> alias :mnesia, as: Mnesia
:mnesia
iex(t2@ubuntu)2> Mnesia.start()
:ok

Copy the table to t2 and verify on t2:

iex(t1@ubuntu)9> Mnesia.add_table_copy(Person, :t2@ubuntu, :ram_copies)
{:atomic, :ok}
iex(t1@ubuntu)10> Mnesia.change_config(:extra_db_nodes, [:t2@ubuntu]) 
iex(t2@ubuntu)4> Mnesia.dirty_read({Person, 1})
[{Person, 1, "Seymour Skinner", "Principal"}]

Writes made on t2 are also readable on t1:

iex(t2@ubuntu)5> Mnesia.dirty_write({Person, 2, "Homer Simpson", "Safety Inspector"})
:ok
iex(t1@ubuntu)11> Mnesia.dirty_read({Person, 2})
[{Person, 2, "Homer Simpson", "Safety Inspector"}]

If t2 restarts, t1 must call change_config again before t2 re-synchronizes data from t1:

~/platform/launcher(master*) » iex --sname t2
Erlang/OTP 21 [erts-10.3.5.6] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe]

Interactive Elixir (1.7.3) - press Ctrl+C to exit (type h() ENTER for help)
iex(t2@ubuntu)1> alias :mnesia, as: Mnesia
:mnesia
iex(t2@ubuntu)2> Mnesia.start()
:ok
iex(t2@ubuntu)3> Mnesia.dirty_read({Person, 1})
** (exit) {:aborted, {:no_exists, [Person, 1]}}
    (mnesia) mnesia.erl:355: :mnesia.abort/1
iex(t1@ubuntu)12> Mnesia.change_config(:extra_db_nodes, [:t2@ubuntu])   
{:ok, [:t2@ubuntu]}
iex(t2@ubuntu)3> Mnesia.dirty_read({Person, 1})
[{Person, 1, "Seymour Skinner", "Principal"}]
iex(t2@ubuntu)4> Mnesia.dirty_read({Person, 2})
[{Person, 2, "Homer Simpson", "Safety Inspector"}]

If a new node t3 joins, running change_config on t1/t2 is enough; add_table_copy is not required for reads, but a node without add_table_copy cannot write. After both t1 and t2 are shut down, writes on t3 fail. If t1 is then restarted and change_config is run on t3, the schema can be copied back to t1 and commits succeed again, but all the earlier data is lost:

iex(t3@ubuntu)6> Mnesia.dirty_write({Person, 4, "Person 4", "Safety Inspector"})
** (exit) {:aborted, {:no_exists, Person}}
    (mnesia) mnesia.erl:355: :mnesia.abort/1
    (mnesia) mnesia_tm.erl:1061: :mnesia_tm.dirty/2
iex(t3@ubuntu)6> Mnesia.table_info(Person, :all)                                
[
  access_mode: :read_write,
  active_replicas: [],
  all_nodes: [:t2@ubuntu, :t1@ubuntu],
  arity: 4,
  attributes: [:id, :name, :job],
  checkpoints: [],
  commit_work: [],
  cookie: {{1593853684922256987, -576460752303423391, 1}, :t1@ubuntu},
  cstruct: {:cstruct, Person, :set, [:t2@ubuntu, :t1@ubuntu], [], [], [], 0,
   :read_write, false, [], [], false, Person, [:id, :name, :job], [], [], [],
   {{1593853684922256987, -576460752303423391, 1}, :t1@ubuntu},
   {{3, 0}, {:t1@ubuntu, {1593, 853838, 540527}}}},
  disc_copies: [],
  disc_only_copies: [],
  external_copies: [],
  frag_properties: [],
  index: [],
  index_info: {:index, :set, []},
  load_by_force: false,
  load_node: :unknown,
  load_order: 0,
  load_reason: :unknown,
  local_content: false,
  majority: false,
  master_nodes: [],
  memory: 0,
  ram_copies: [:t2@ubuntu, :t1@ubuntu],
  record_name: Person,
  record_validation: {Person, 4, :set},
  size: 0,
  snmp: [],
  storage_properties: [],
  storage_type: :unknown,
  subscribers: [],
  type: :set,
  user_properties: [],
  version: {{3, 0}, {:t1@ubuntu, {1593, 853838, 540527}}},
  where_to_commit: [],
  where_to_read: :nowhere,
  where_to_wlock: {[], false},
  where_to_write: [],
  wild_pattern: {Person, :_, :_, :_}
]
iex(t3@ubuntu)7> Mnesia.change_config(:extra_db_nodes, [:t1@ubuntu])   
{:ok, [:t1@ubuntu]}
iex(t3@ubuntu)8> Mnesia.table_info(Person, :all)                       
[
  access_mode: :read_write,
  active_replicas: [:t1@ubuntu],
  all_nodes: [:t2@ubuntu, :t1@ubuntu],
  arity: 4,
  attributes: [:id, :name, :job],
  checkpoints: [],
  commit_work: [],
  cookie: {{1593853684922256987, -576460752303423391, 1}, :t1@ubuntu},
  cstruct: {:cstruct, Person, :set, [:t2@ubuntu, :t1@ubuntu], [], [], [], 0,
   :read_write, false, [], [], false, Person, [:id, :name, :job], [], [], [],
   {{1593853684922256987, -576460752303423391, 1}, :t1@ubuntu},
   {{3, 0}, {:t1@ubuntu, {1593, 853838, 540527}}}},
  disc_copies: [],
  disc_only_copies: [],
  external_copies: [],
  frag_properties: [],
  index: [],
  index_info: {:index, :set, []},
  load_by_force: false,
  load_node: :unknown,
  load_order: 0,
  load_reason: :unknown,
  local_content: false,
  majority: false,
  master_nodes: [],
  memory: 0,
  ram_copies: [:t2@ubuntu, :t1@ubuntu],
  record_name: Person,
  record_validation: {Person, 4, :set},
  size: 0,
  snmp: [],
  storage_properties: [],
  storage_type: :unknown,
  subscribers: [],
  type: :set,
  user_properties: [],
  version: {{3, 0}, {:t1@ubuntu, {1593, 853838, 540527}}},
  where_to_commit: [t1@ubuntu: :ram_copies],
  where_to_read: :t1@ubuntu,
  where_to_wlock: {[:t1@ubuntu], false},
  where_to_write: [:t1@ubuntu],
  wild_pattern: {Person, :_, :_, :_}
]
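The data loss in the transcript above can be modeled as a copy-on-join rule: a joining replica loads the whole table from whichever replica is currently active, even when that replica has just restarted with an empty RAM copy. This is a hypothetical model of the observed behavior, not mnesia's actual code; all names below are illustrative.

```python
# Hypothetical model of table loading on join: the joining replica discards
# its local copy and takes a full copy from the first active replica.
def join(joining, active_replicas):
    """Load the table from an active replica, if any; otherwise keep our copy."""
    if active_replicas:
        source = active_replicas[0]
        joining["table"] = dict(source["table"])  # full copy from the loader
    return joining

# t1 restarted with an empty ram_copies table; t3 still holds old records,
# but joining t1 replaces t3's copy with t1's empty one.
t1 = {"name": "t1", "table": {}}           # restarted; RAM copy was lost
t3 = {"name": "t3", "table": {1: "old"}}   # had data before the cluster broke
join(t3, [t1])
assert t3["table"] == {}  # the earlier records are gone
```

With disc_copies (and master_nodes configured) the restarted node would reload from disk instead, which avoids this particular loss for persistent tables.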

After t1 and t3 are shut down, t2 can still write successfully.

Conjectures

  • mnesia uses multi-master locking and commit.
  • When a new node joins, data is actively copied from a specific node via change_config.
  • No strong consistency guarantee; split brain is possible (writes do not require more than half of the nodes to be alive).
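The missing safeguard in the last point is a majority check. The table_info output above shows majority: false; mnesia does offer a per-table majority option, but it is off by default, so both sides of a partition keep accepting writes. An illustrative version of the check (my own sketch, not mnesia's code):

```python
# Sketch of a majority (quorum) check: a write is allowed only if strictly
# more than half of all replicas are reachable from this node.
def write_allowed(reachable, all_replicas):
    """True iff the reachable replicas form a strict majority."""
    return len(reachable) * 2 > len(all_replicas)

replicas = ["t1", "t2", "t3"]
# A partitioned minority side should refuse writes...
assert not write_allowed(["t2"], replicas)
# ...while the majority side may proceed.
assert write_allowed(["t1", "t3"], replicas)
# Without this check, both sides write and the replicas diverge (split brain).
```

Note that an exact half is not a majority: in a two-node cluster, a single surviving node fails the check, which is why quorum systems prefer odd replica counts.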

Summary

mnesia is not suitable for scenarios with strong consistency requirements.
