Workload file for librados

15 messages

Workload file for librados

nmtadam
Is anyone using CosBench with rados? If so, could you please post a sample workload file.

Thanks,
Adam

Re: Workload file for librados

jrgruher
I didn't think this was supported, but if it is, I would be very interested in trying it as well, and could use some guidance on how to set it up also.  Thanks!

Re: Workload file for librados

ywang19
Administrator
In reply to this post by nmtadam
COSBench already includes a librados adapter, contributed by Niklas Goerke; unfortunately, no sample configuration is provided. But from the implementation, the storage section should be as follows:
  <storage type="librados" config="accesskey=<accesskey>;secretkey=<secretkey>;endpoint=<endpoint>" />

I have no Ceph setup at hand so far; if the above configuration works, please let me know, and I can include a librados sample in the package.
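For reference, a complete workload file built around that storage line might look like the sketch below. Only the storage element comes from the adapter itself; the workload name, container/object counts, object sizes, and worker numbers are illustrative assumptions following the standard COSBench workload format:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<workload name="librados-sample" description="sketch of a librados workload">
  <!-- storage line from above; fill in your own username, key, and monitor address -->
  <storage type="librados" config="accesskey=admin;secretkey=<secretkey>;endpoint=<monitor-ip>" />
  <workflow>
    <workstage name="init">
      <work type="init" workers="1" config="cprefix=rados-test;containers=r(1,2)" />
    </workstage>
    <workstage name="prepare">
      <work type="prepare" workers="1"
            config="cprefix=rados-test;containers=r(1,2);objects=r(1,50);sizes=c(64)KB" />
    </workstage>
    <workstage name="main">
      <work name="main" workers="4" runtime="60">
        <operation type="read" ratio="80"
                   config="cprefix=rados-test;containers=u(1,2);objects=u(1,50)" />
        <operation type="write" ratio="20"
                   config="cprefix=rados-test;containers=u(1,2);objects=u(51,100);sizes=c(64)KB" />
      </work>
    </workstage>
    <workstage name="cleanup">
      <work type="cleanup" workers="1"
            config="cprefix=rados-test;containers=r(1,2);objects=r(1,100)" />
    </workstage>
    <workstage name="dispose">
      <work type="dispose" workers="1" config="cprefix=rados-test;containers=r(1,2)" />
    </workstage>
  </workflow>
</workload>
```

The five stages (init/prepare/main/cleanup/dispose) mirror the stage names COSBench reports while a workload runs.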

-Y.G.

Re: Workload file for librados

Niklas Goerke
Hi there

I did implement a librados adapter.
The storage section ywang posted looks good to me. Note that you must use the whole secretkey, including the "=" characters at the end. Just copy it from your ceph.client.admin.keyring or ceph.conf file (if using the admin user). I'll upload a sample config file within the next few days.

Niklas

Re: Workload file for librados

nmtadam
Thanks for all of the information. I did figure out the type="librados" bit, but I have some other issues at this point. I am also going to set up radosgw and attempt to use your swift and s3 storage types through radosgw. I'll post more info/questions as I move forward.


RE: Workload file for librados

jrgruher
In reply to this post by Niklas Goerke
For the required values:
  <storage type="librados" config="accesskey=<accesskey>;secretkey=<secretkey>;endpoint=<endpoint>" />

Is accesskey the Ceph admin key, and secretkey the Ceph rados gateway key?  Would endpoint be the name/IP of a rados gateway?

ceph@cephtest05:~$ cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
        key = AQCJ7jxSGHsiHRAALTEOIzcSsMosVn2HHgwUfw==

ceph@cephtest05:~$ cat /etc/ceph/keyring.radosgw.gateway
[client.radosgw.gateway]
        key = AQAzTENScN2XCxAAwXlYRYYRIawjxlCZFBlxfg==
        caps mon = "allow rw"
        caps osd = "allow rwx"

RE: Workload file for librados

Niklas Goerke
Hi
No, the names are a bit misleading, but I used the default COSBench names: accesskey is the "username", and secretkey is the actual key.
For your example this should work:

<storage type="librados" config="accesskey=admin;secretkey=AQCJ7jxSGHsiHRAALTEOIzcSsMosVn2HHgwUfw==;endpoint=192.168.1.1" />

The librados adapter does not use radosgw; it accesses librados directly (see [1]). For the endpoint, you can put the IP (or DNS-resolvable name) of your monitor. librados will only get metadata (probably the osdmap and pgmap) from the monitor and will connect directly to the OSDs for data transfer.

The librados adapter should of course work with the radosgw user, but it will not use radosgw itself; it would just use the same credentials as radosgw. Other users should also work, but I have to admit I never tried it (I don't see why it wouldn't, as long as the user is configured correctly on the Ceph side).
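As an illustration of that point, plugging the radosgw keyring posted above into the same storage line would look something like the following sketch; it assumes the accesskey for client.radosgw.gateway is "radosgw.gateway", and the endpoint is a placeholder for your monitor's address:

```xml
<!-- sketch: reusing the client.radosgw.gateway credentials with the librados adapter -->
<storage type="librados"
         config="accesskey=radosgw.gateway;secretkey=AQAzTENScN2XCxAAwXlYRYYRIawjxlCZFBlxfg==;endpoint=<monitor-ip>" />
```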

Good luck, and don't hesitate to ask or report problems!

Niklas

[1] http://ceph.com/docs/master/architecture/

RE: Workload file for librados

jrgruher
Thanks for the details Niklas!  I should be able to test next week and I will report back on results.

RE: Workload file for librados

ywang19
Administrator
In reply to this post by Niklas Goerke
Actually, COSBench has a few ways to talk to Ceph: one is through the librados adapter, another is through radosgw.

radosgw can expose a swift-compatible or s3-compatible interface, so you can enable radosgw in your Ceph setup and use the swift or s3 adapter to talk to Ceph. We have tested radosgw + the swift adapter; radosgw + the s3 adapter should also work (not verified yet).
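For the radosgw + swift path, a minimal sketch of the auth and storage sections might look like the following. Everything here is an assumption, not a verified config: the subuser name, swift secret key, and URL are placeholders, and it presumes a swift subuser has already been created on the radosgw side:

```xml
<!-- sketch: swift adapter pointed at a radosgw swift-compatible endpoint;
     username, key, and host are placeholders -->
<auth type="swauth"
      config="username=testuser:swift;password=<swift-secret-key>;auth_url=http://<radosgw-host>/auth/1.0" />
<storage type="swift" config="" />
```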


-yaguang

RE: Workload file for librados

jrgruher
In reply to this post by jrgruher
I'm unable to get COSBench to work with the librados adapter, but I don't think it is necessarily a problem with the adapter. I'm not very familiar with writing workloads at the XML level (generally I use the GUI to create them), so I suspect I'm just not creating my workload correctly. If anyone is able to post an example, that would be helpful. Or, if anyone wants to create a workload with my credentials (posted above), I'd be happy to test it against my Ceph cluster and report back.

I've successfully tested COSBench against Ceph using the Swift interface before, so I can confirm that works.  Those workloads are easily generated through the GUI.

RE: Workload file for librados

jrgruher
Hi all-

I would still love to try this... can anyone provide a working workload example using the librados adapter?  Can't seem to get it quite right on my own.

And, will librados support be coming to the workload generator in the web GUI anytime soon?  That would be ideal.

Thanks!

RE: Workload file for librados

Niklas Goerke
I added an annotated sample config for librados [1]. If you can't get it to work, please post a detailed error report so I can help you.

I will not implement librados support for the workload generator in the web GUI, as I don't need it. If anyone else needs it, feel free to implement it.


[1] https://github.com/Niklas974/cosbench/blob/7a89e3db081175c9eee98ffb0d99cd4acca6d64e/release/conf/librados-sample-annotated.xml

RE: Workload file for librados

jrgruher

Thanks for providing the annotated sample file!  My ceph cluster is in the process of being rebuilt at the moment but I should be able to test this next week and I’ll report back on my results.

 


RE: Workload file for librados

chitr
In reply to this post by Niklas Goerke
Hi
I am new to COSBench. I am getting the following error (it seems I am missing something basic in the configuration):
================================================== stage: s1-init ==================================================
--------------------------------- mission: null, driver: driver1 ----------------------------------
[N/A]================================================== stage: s2-prepare ==================================================
================================================== stage: s3-main ==================================================
================================================== stage: s4-cleanup ==================================================
================================================== stage: s5-dispose ==================================================

RE: Workload file for librados

Niklas Goerke
Hi there,

could you please provide your config file, and take a look at the log files (in the /log folder of your COSBench directory) for any suspicious messages?

Merry Christmas
Niklas