pyetl is an ETL framework written in pure Python. Compared with ETL tools such as Sqoop and DataX, pyetl lets you attach a UDF (user-defined function) to each individual field, which makes the data transformation step far more flexible. Compared with professional ETL tools, pyetl is also more lightweight: everything is done in plain Python code, which fits developers' habits better.
Installation:

```shell
pip3 install pyetl
```
Syncing data between database tables
```python
from pyetl import Task, DatabaseReader, DatabaseWriter

reader = DatabaseReader("sqlite:///db1.sqlite3", table_name="source")
writer = DatabaseWriter("sqlite:///db2.sqlite3", table_name="target")
Task(reader, writer).start()
```
Syncing a database table to a Hive table
```python
from pyetl import Task, DatabaseReader, HiveWriter2

reader = DatabaseReader("sqlite:///db1.sqlite3", table_name="source")
writer = HiveWriter2("hive://localhost:10000/default", table_name="target")
Task(reader, writer).start()
```
Syncing a database table to Elasticsearch
```python
from pyetl import Task, DatabaseReader, ElasticSearchWriter

reader = DatabaseReader("sqlite:///db1.sqlite3", table_name="source")
writer = ElasticSearchWriter(hosts=["localhost"], index_name="target")
Task(reader, writer).start()
```
When the source and target tables have different column names, add a column mapping
```python
# The source table contains the columns uuid and full_name
reader = DatabaseReader("sqlite:///db.sqlite3", table_name="source")
# The target table contains the columns id and name
writer = DatabaseWriter("sqlite:///db.sqlite3", table_name="target")
# columns maps target-table columns to source-table columns
columns = {"id": "uuid", "name": "full_name"}
Task(reader, writer, columns=columns).start()
```
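Conceptually, the `columns` mapping just renames each source field to its target field for every record. The following is a minimal pure-Python sketch of the idea (not pyetl's actual implementation; `map_record` is a hypothetical helper):

```python
# {"target_field": "source_field"}, mirroring the columns dict above
columns = {"id": "uuid", "name": "full_name"}

def map_record(record, columns):
    """Build a target record by pulling each mapped source field."""
    return {target: record[source] for target, source in columns.items()}

source_record = {"uuid": "a1b2", "full_name": "Alice Zhang"}
print(map_record(source_record, columns))
# → {'id': 'a1b2', 'name': 'Alice Zhang'}
```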
Add per-column UDF mappings to validate, standardize, or clean the data
```python
# functions maps columns to UDFs; here, id is cast to a string
# and name has leading/trailing whitespace stripped
functions = {"id": str, "name": lambda x: x.strip()}
Task(reader, writer, columns=columns, functions=functions).start()
```
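The `functions` mapping behaves like a per-column transform applied to every record as it flows through the task. Roughly, the effect is the following (a sketch of the idea, not pyetl internals; `apply_udfs` is a hypothetical helper):

```python
functions = {"id": str, "name": lambda x: x.strip()}

def apply_udfs(record, functions):
    """Apply each column's UDF; columns without a UDF pass through unchanged."""
    return {col: functions[col](val) if col in functions else val
            for col, val in record.items()}

print(apply_udfs({"id": 42, "name": "  Alice "}, functions))
# → {'id': '42', 'name': 'Alice'}
```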
For finer control over mappings and the task lifecycle, subclass `Task`:

```python
import json

from pyetl import Task, DatabaseReader, DatabaseWriter


class NewTask(Task):
    reader = DatabaseReader("sqlite:///db.sqlite3", table_name="source")
    writer = DatabaseWriter("sqlite:///db.sqlite3", table_name="target")

    def get_columns(self):
        """Generate the column mapping from a function, which is more flexible."""
        # Example: fetch the mapping stored in the database and return it as a dict
        sql = "select columns from task where name='new_task'"
        columns = self.writer.db.read_one(sql)["columns"]
        return json.loads(columns)

    def get_functions(self):
        """Generate the per-column UDF mapping from a function."""
        # Example: convert every column's value to a string
        return {col: str for col in self.columns}

    def apply_function(self, record):
        """A UDF applied to each complete record in the data stream."""
        record["flag"] = int(record["id"]) % 2
        return record

    def before(self):
        """Runs before the task starts, e.g. to initialize a task table
        or create the target table."""
        sql = "create table destination_table(id int, name varchar(100))"
        self.writer.db.execute(sql)

    def after(self):
        """Runs after the task finishes, e.g. to update the task status."""
        sql = "update task set status='done' where name='new_task'"
        self.writer.db.execute(sql)


NewTask().start()
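Putting the pieces together, a task's data flow can be thought of as: read records, rename fields via `columns`, transform values via per-column UDFs, apply the record-level UDF, then write. A toy end-to-end sketch of that flow (pure Python with hypothetical helper names, not pyetl's actual code path):

```python
def run_pipeline(records, columns, functions, apply_function, write):
    """Toy version of the read → map → transform → write flow."""
    for record in records:
        # 1. rename source fields to target fields (columns mapping)
        mapped = {t: record[s] for t, s in columns.items()}
        # 2. apply per-column UDFs, passing unmapped columns through
        transformed = {c: functions.get(c, lambda v: v)(v) for c, v in mapped.items()}
        # 3. record-level UDF, then hand off to the writer
        write(apply_function(transformed))

out = []
run_pipeline(
    records=[{"uuid": "1", "full_name": " Bob "}],
    columns={"id": "uuid", "name": "full_name"},
    functions={"name": str.strip},
    apply_function=lambda r: {**r, "flag": int(r["id"]) % 2},
    write=out.append,
)
print(out)
# → [{'id': '1', 'name': 'Bob', 'flag': 1}]
```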
Reader | Description |
---|---|
DatabaseReader | Reads from any relational database |
FileReader | Reads structured text data, such as CSV files |
ExcelReader | Reads Excel files |
Writer | Description |
---|---|
DatabaseWriter | Writes to any relational database |
ElasticSearchWriter | Bulk-writes data to an Elasticsearch index |
HiveWriter | Bulk inserts into a Hive table |
HiveWriter2 | Imports into a Hive table via `load data` (recommended) |
FileWriter | Writes data to a text file |
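For a feel of what a structured-text reader such as FileReader deals with: delimited text naturally maps to one dict per row, which is exactly the record shape the examples above pass through the pipeline. With only the standard library, the equivalent read step looks like this (a sketch of the concept, not FileReader's implementation):

```python
import csv
import io

# One dict per row, keyed by the header line
raw = "uuid,full_name\n1,Alice\n2,Bob\n"
rows = list(csv.DictReader(io.StringIO(raw)))
print(rows)
# → [{'uuid': '1', 'full_name': 'Alice'}, {'uuid': '2', 'full_name': 'Bob'}]
```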
If you have any questions while using pyetl, feel free to discuss in the comments.
Project repository: pyetl