Programmatic Access to Ceph with librados

Introduction

I need to access Ceph's object store programmatically in order to compare performance with and without the gateway. The gateway-based access example is already working; what I want to test now is bypassing the gateway and talking to the Ceph cluster directly through librados.

Environment Setup

1. Ceph cluster: you need an already configured Ceph cluster; running ceph -s should show the cluster status.
2. Development libraries: my system is CentOS 6.5; I installed the C/C++ development packages with the following command:

sudo yum install librados2-devel

After installation, you can find the corresponding header files under /usr/include/rados.

Sample Program

The sample program comes from the official documentation; see
http://docs.ceph.com/docs/master/rados/api/librados-intro/

#include <rados/librados.hpp>
#include <cstdlib>
#include <iostream>
#include <list>
#include <string>

int main(int argc, const char **argv)
{
    int ret = 0;

    // Get a cluster handle and connect to the cluster
    std::cout << "ceph Cluster connect begin." << std::endl;
    std::string cluster_name("ceph");
    std::string user_name("client.admin");
    librados::Rados cluster;
    ret = cluster.init2(user_name.c_str(), cluster_name.c_str(), 0);
    if (ret < 0) {
        std::cerr << "Couldn't initialize the cluster handle! error " << ret << std::endl;
        return EXIT_FAILURE;
    } else {
        std::cout << "Created a cluster handle." << std::endl;
    }

    ret = cluster.conf_read_file("/etc/ceph/ceph.conf");
    if (ret < 0) {
        std::cerr << "Couldn't read the Ceph configuration file! error " << ret << std::endl;
        return EXIT_FAILURE;
    } else {
        std::cout << "Read the Ceph configuration file Succeed." << std::endl;
    }

    ret = cluster.connect();
    if (ret < 0) {
        std::cerr << "Couldn't connect to cluster! error " << ret << std::endl;
        return EXIT_FAILURE;
    } else {
        std::cout << "Connected to the cluster." << std::endl;
    }
    std::cout << "ceph Cluster connect end." << std::endl;

    // Create an I/O context for the pool "pool-1"
    std::cout << "ceph Cluster create io context for pool begin." << std::endl;
    librados::IoCtx io_ctx;
    std::string pool_name("pool-1");
    ret = cluster.ioctx_create(pool_name.c_str(), io_ctx);
    if (ret < 0) {
        std::cerr << "Couldn't set up ioctx! error " << ret << std::endl;
        exit(EXIT_FAILURE);
    } else {
        std::cout << "Created an ioctx for the pool." << std::endl;
    }
    std::cout << "ceph Cluster create io context for pool end." << std::endl;

    // Write an object synchronously
    std::cout << "Write an object synchronously begin." << std::endl;
    librados::bufferlist bl;
    std::string objectId("hw");
    std::string objectContent("Hello World!");
    bl.append(objectContent);
    ret = io_ctx.write_full(objectId, bl);
    if (ret < 0) {
        std::cerr << "Couldn't write object! error " << ret << std::endl;
        exit(EXIT_FAILURE);
    } else {
        std::cout << "Wrote new object 'hw' " << std::endl;
    }
    std::cout << "Write an object synchronously end." << std::endl;

    // Add an xattr to the object
    librados::bufferlist lang_bl;
    lang_bl.append("en_US");
    io_ctx.setxattr(objectId, "lang", lang_bl);

    // Read the object back asynchronously
    librados::bufferlist read_buf;
    int read_len = 4194304;
    // Create an I/O completion and send the read request
    librados::AioCompletion *read_completion = librados::Rados::aio_create_completion();
    io_ctx.aio_read(objectId, read_completion, &read_buf, read_len, 0);
    // Wait for the request to complete, check the result, and print the content
    read_completion->wait_for_complete();
    ret = read_completion->get_return_value();
    read_completion->release();
    if (ret < 0) {
        std::cerr << "Couldn't read object! error " << ret << std::endl;
        exit(EXIT_FAILURE);
    }
    std::cout << "Object name: " << objectId << "\n"
              << "Content: " << read_buf.c_str() << std::endl;

    // Read the xattr back
    librados::bufferlist lang_res;
    io_ctx.getxattr(objectId, "lang", lang_res);
    std::cout << "Object xattr: " << lang_res.c_str() << std::endl;

    // Print the list of pools
    std::list<std::string> pools;
    cluster.pool_list(pools);
    std::cout << "List of pools from this cluster handle" << std::endl;
    for (std::list<std::string>::iterator i = pools.begin(); i != pools.end(); ++i)
        std::cout << *i << std::endl;

    // Print the list of objects in the pool
    librados::ObjectIterator oit = io_ctx.objects_begin();
    librados::ObjectIterator oet = io_ctx.objects_end();
    std::cout << "List of objects from this pool" << std::endl;
    for (; oit != oet; ++oit) {
        std::cout << "\t" << oit->first << std::endl;
    }

    // Remove the xattr and then the object
    io_ctx.rmxattr(objectId, "lang");
    io_ctx.remove(objectId);

    // Cleanup
    io_ctx.close();
    cluster.shutdown();
    return 0;
}

Compilation

g++ -g -c cephclient.cxx -o cephclient.o
g++ -g cephclient.o -lrados -o cephclient

Output

[root@gnop029-ct-zhejiang_wenzhou-16-34 ceph-rados]# ./cephclient 
ceph Cluster connect begin.
Created a cluster handle.
Read the Ceph configuration file Succeed.
Connected to the cluster.
ceph Cluster connect end.
ceph Cluster create io context for pool begin.
Created an ioctx for the pool.
ceph Cluster create io context for pool end.
Write an object synchronously begin.
Wrote new object 'hw' 
Write an object synchronously end.
Object name: hw
Content: Hello World!
Object xattr: en_US
List of pools from this cluster handle
rbd
pool-1
pool-2
.rgw
.rgw.root
.rgw.control
.rgw.gc
.rgw.buckets
.rgw.buckets.index
.log
.intent-log
.usage
.users
.users.email
.users.swift
.users.uid
List of objects from this pool
rb.0.d402.238e1f29.00000000ee00
rb.0.d402.238e1f29.000000015000
rb.0.d402.238e1f29.00000000fa2f
rb.0.d402.238e1f29.00000001ac00
rb.0.d402.238e1f29.000000012000

API Summary

The sample code exercises the main interfaces:
1. Creating a cluster handle
2. Connecting to the cluster
3. Initializing the I/O context
4. Reading and writing objects
5. Closing the I/O context
6. Shutting down the cluster handle

Notes

I walked through the whole process myself after consulting the official documentation; refer to the official site for anything that is unclear.

Performance

The official documentation describes access from C, C++, Java, Python, and PHP.

PS: test data to be added.
