Ansible is an operations tool written in Python. My work required using Ansible and frequently integrating things into it, so over time I have come to know it fairly well.
So what exactly is Ansible? The way I understand it: operations that used to require logging in to a server and running a pile of commands by hand can be handed to Ansible, which runs those commands for us. Ansible can also control many machines at once and orchestrate and execute tasks across them; in Ansible that orchestration is called a playbook.
How does Ansible manage this? Put simply, Ansible turns the commands we want to run into a script, uploads that script to the target server over SFTP, then executes it over SSH and brings the result back.
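The flow is essentially "copy a script over, run it, collect the output". The sketch below illustrates that idea with paramiko; it is not Ansible's actual implementation, and the host, user, key, and file paths are made-up placeholders.

import paramiko

# Rough illustration of the transfer-then-execute flow (not Ansible code).
# Host, user, key and paths are placeholders.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('192.168.0.10', username='root', key_filename='/root/.ssh/id_rsa')

# "upload the generated script over SFTP"
sftp = client.open_sftp()
sftp.put('module_script.py', '/tmp/module_script.py')
sftp.close()

# "execute it over SSH and read the result back"
stdin, stdout, stderr = client.exec_command('python /tmp/module_script.py')
print(stdout.read())

client.close()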
How is this done concretely? Below we look at how a single module execution is carried out, from the perspective of modules and plugins.
PS: the analysis below comes from reading the source code after some hands-on experience with Ansible, so it assumes a basic familiarity with Ansible, or at least with its core concepts such as inventory, modules, and playbooks.
A module is the smallest unit of execution in Ansible. It can be written in Python, in shell, or in any other language. A module defines the concrete steps of an operation and the parameters needed when it is actually used.
The script that actually gets executed is an executable script generated from the module.
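To give a sense of what a module looks like, here is a minimal sketch of a Python module built on AnsibleModule; the parameters are made up for illustration and have nothing to do with the NFS example later in this post.

#!/usr/bin/env python
# Minimal illustrative module: declares its parameters and returns JSON.
# The parameters here (name/state) are purely hypothetical.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(required=True, type='str'),
            state=dict(default='present', choices=['present', 'absent']),
        ),
        supports_check_mode=True,
    )
    # the concrete steps of the operation would go here;
    # this sketch just echoes the arguments back
    module.exit_json(changed=False, name=module.params['name'], state=module.params['state'])


if __name__ == '__main__':
    main()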
So how does Ansible upload this script to the server, execute it, and fetch the result? Several kinds of plugins work together:
Connection plugin: connects to the target server with the given SSH parameters and provides the interface through which commands are actually executed.
Shell plugin: for a given shell type (sh by default), generates the command line that the connection plugin will run.
Strategy plugin: by default the linear strategy, which runs tasks one after another in order, handing each task to the executor.
動做插件,實質就是任務模塊的全部動做,若是ansible的模塊沒有特別編寫的action插件,默認狀況下是normal或者async(這兩個根據模塊是否async來選擇),normal和async中定義的就是模塊的執行步驟。例如,本地建立臨時文件,上傳臨時文件,執行腳本,刪除腳本等等,若是想在全部的模塊中增長一些特殊步驟,能夠經過增長action插件的方式來擴展。
In our actual work, some of the Ansible modules we extend need third-party libraries, and installing those libraries on every node is hard to manage. Since executing a module really just means running the generated script under the node's Python environment, our approach is to pin the Python interpreter used on each node and share a single Python environment on the LAN over NFS. By extending the action plugins, we mount the NFS share on the node before execution and unmount it after execution finishes. The concrete steps are as follows:
Extension code:
Override ActionBase's _execute_module method:
# _execute_module
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import json
import pipes

from ansible.compat.six import text_type, iteritems
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.release import __version__

try:
    from __main__ import display
except ImportError:
    from ansible.utils.display import Display
    display = Display()


class MagicStackBase(object):

    def _mount_nfs(self, ansible_nfs_src, ansible_nfs_dest):
        cmd = ['mount', ansible_nfs_src, ansible_nfs_dest]
        cmd = [pipes.quote(c) for c in cmd]
        cmd = ' '.join(cmd)
        result = self._low_level_execute_command(cmd=cmd, sudoable=True)
        return result

    def _umount_nfs(self, ansible_nfs_dest):
        cmd = ['umount', ansible_nfs_dest]
        cmd = [pipes.quote(c) for c in cmd]
        cmd = ' '.join(cmd)
        result = self._low_level_execute_command(cmd=cmd, sudoable=True)
        return result

    def _execute_module(self, module_name=None, module_args=None, tmp=None, task_vars=None, persist_files=False, delete_remote_tmp=True):
        '''
        Transfer and run a module along with its arguments.
        '''
        # display.v(task_vars)
        if task_vars is None:
            task_vars = dict()

        # if a module name was not specified for this execution, use
        # the action from the task
        if module_name is None:
            module_name = self._task.action
        if module_args is None:
            module_args = self._task.args

        # set check mode in the module arguments, if required
        if self._play_context.check_mode:
            if not self._supports_check_mode:
                raise AnsibleError("check mode is not supported for this operation")
            module_args['_ansible_check_mode'] = True
        else:
            module_args['_ansible_check_mode'] = False

        # Get the connection user for permission checks
        remote_user = task_vars.get('ansible_ssh_user') or self._play_context.remote_user

        # set no log in the module arguments, if required
        module_args['_ansible_no_log'] = self._play_context.no_log or C.DEFAULT_NO_TARGET_SYSLOG

        # set debug in the module arguments, if required
        module_args['_ansible_debug'] = C.DEFAULT_DEBUG

        # let module know we are in diff mode
        module_args['_ansible_diff'] = self._play_context.diff

        # let module know our verbosity
        module_args['_ansible_verbosity'] = display.verbosity

        # give the module information about the ansible version
        module_args['_ansible_version'] = __version__

        # set the syslog facility to be used in the module
        module_args['_ansible_syslog_facility'] = task_vars.get('ansible_syslog_facility', C.DEFAULT_SYSLOG_FACILITY)

        # let module know about filesystems that selinux treats specially
        module_args['_ansible_selinux_special_fs'] = C.DEFAULT_SELINUX_SPECIAL_FS

        (module_style, shebang, module_data) = self._configure_module(module_name=module_name, module_args=module_args, task_vars=task_vars)
        if not shebang:
            raise AnsibleError("module (%s) is missing interpreter line" % module_name)

        # get nfs info for mount python packages
        ansible_nfs_src = task_vars.get("ansible_nfs_src", None)
        ansible_nfs_dest = task_vars.get("ansible_nfs_dest", None)

        # a remote tmp path may be necessary and not already created
        remote_module_path = None
        args_file_path = None
        if not tmp and self._late_needs_tmp_path(tmp, module_style):
            tmp = self._make_tmp_path(remote_user)

        if tmp:
            remote_module_filename = self._connection._shell.get_remote_filename(module_name)
            remote_module_path = self._connection._shell.join_path(tmp, remote_module_filename)
            if module_style in ['old', 'non_native_want_json']:
                # we'll also need a temp file to hold our module arguments
                args_file_path = self._connection._shell.join_path(tmp, 'args')

        if remote_module_path or module_style != 'new':
            display.debug("transferring module to remote")
            self._transfer_data(remote_module_path, module_data)
            if module_style == 'old':
                # we need to dump the module args to a k=v string in a file on
                # the remote system, which can be read and parsed by the module
                args_data = ""
                for k, v in iteritems(module_args):
                    args_data += '%s=%s ' % (k, pipes.quote(text_type(v)))
                self._transfer_data(args_file_path, args_data)
            elif module_style == 'non_native_want_json':
                self._transfer_data(args_file_path, json.dumps(module_args))
            display.debug("done transferring module to remote")

        environment_string = self._compute_environment_string()

        remote_files = None

        if args_file_path:
            remote_files = tmp, remote_module_path, args_file_path
        elif remote_module_path:
            remote_files = tmp, remote_module_path

        # Fix permissions of the tmp path and tmp files. This should be
        # called after all files have been transferred.
        if remote_files:
            self._fixup_perms2(remote_files, remote_user)

        # mount nfs
        if ansible_nfs_src and ansible_nfs_dest:
            result = self._mount_nfs(ansible_nfs_src, ansible_nfs_dest)
            if result['rc'] != 0:
                raise AnsibleError("mount nfs failed!!! {0}".format(result['stderr']))

        cmd = ""
        in_data = None

        if self._connection.has_pipelining and self._play_context.pipelining and not C.DEFAULT_KEEP_REMOTE_FILES and module_style == 'new':
            in_data = module_data
        else:
            if remote_module_path:
                cmd = remote_module_path

        rm_tmp = None
        if tmp and "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
            if not self._play_context.become or self._play_context.become_user == 'root':
                # not sudoing or sudoing to root, so can cleanup files in the same step
                rm_tmp = tmp

        cmd = self._connection._shell.build_module_command(environment_string, shebang, cmd, arg_path=args_file_path, rm_tmp=rm_tmp)
        cmd = cmd.strip()

        sudoable = True
        if module_name == "accelerate":
            # always run the accelerate module as the user
            # specified in the play, not the sudo_user
            sudoable = False

        res = self._low_level_execute_command(cmd, sudoable=sudoable, in_data=in_data)

        # umount nfs
        if ansible_nfs_src and ansible_nfs_dest:
            result = self._umount_nfs(ansible_nfs_dest)
            if result['rc'] != 0:
                raise AnsibleError("umount nfs failed!!! {0}".format(result['stderr']))

        if tmp and "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
            if self._play_context.become and self._play_context.become_user != 'root':
                # not sudoing to root, so maybe can't delete files as that other user
                # have to clean up temp files as original user in a second step
                tmp_rm_cmd = self._connection._shell.remove(tmp, recurse=True)
                tmp_rm_res = self._low_level_execute_command(tmp_rm_cmd, sudoable=False)
                tmp_rm_data = self._parse_returned_data(tmp_rm_res)
                if tmp_rm_data.get('rc', 0) != 0:
                    display.warning('Error deleting remote temporary files (rc: {0}, stderr: {1})'.format(tmp_rm_res.get('rc'), tmp_rm_res.get('stderr', 'No error string available.')))

        # parse the main result
        data = self._parse_returned_data(res)

        # pre-split stdout into lines, if stdout is in the data and there
        # isn't already a stdout_lines value there
        if 'stdout' in data and 'stdout_lines' not in data:
            data['stdout_lines'] = data.get('stdout', u'').splitlines()

        display.debug("done with _execute_module (%s, %s)" % (module_name, module_args))
        return data
Integrate the mixin into normal.py and async.py, and remember to point Ansible at these two plugins in ansible.cfg (see the snippet after the plugin code below). The normal.py action plugin looks like this:
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.plugins.action import ActionBase
from ansible.utils.vars import merge_hash

from common.ansible_plugins import MagicStackBase


class ActionModule(MagicStackBase, ActionBase):

    def run(self, tmp=None, task_vars=None):
        if task_vars is None:
            task_vars = dict()

        results = super(ActionModule, self).run(tmp, task_vars)
        # remove as modules might hide due to nolog
        del results['invocation']['module_args']
        results = merge_hash(results, self._execute_module(tmp=tmp, task_vars=task_vars))

        # Remove special fields from the result, which can only be set
        # internally by the executor engine. We do this only here in
        # the 'normal' action, as other action plugins may set this.
        #
        # We don't want modules to determine that running the module fires
        # notify handlers. That's for the playbook to decide.
        for field in ('_ansible_notify',):
            if field in results:
                results.pop(field)

        return results
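For Ansible to pick up the overridden normal.py and async.py instead of the built-in ones, the action plugin search path has to be set in ansible.cfg. A minimal sketch is below; the directory path is an assumption and should point at wherever the customized plugins live.

[defaults]
# assumed path: the directory containing the customized normal.py and async.py
action_plugins = /etc/ansible/plugins/action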
Finally, run a module that needs the shared libraries, passing the NFS source, the mount point, and the pinned Python interpreter as extra variables, for example:
ansible 51 -m mysql_db -a "state=dump name=all target=/tmp/test.sql" -i hosts -u root -v -e "ansible_nfs_src=172.16.30.170:/web/proxy_env/lib64/python2.7/site-packages ansible_nfs_dest=/root/.pyenv/versions/2.7.10/lib/python2.7/site-packages ansible_python_interpreter=/root/.pyenv/versions/2.7.10/bin/python"