COSBench already includes a librados adapter, contributed by Niklas Goerke; unfortunately, no sample configuration is provided. But judging from the implementation, the storage section should look like the following:
<storage type="librados" config="accesskey=<accesskey>;secretkey=<secretkey>;endpoint=<endpoint>" />
I don't have a Ceph setup on hand at the moment, so if the above configuration works, please let me know and I'll include a librados sample in the package.
I did implement a librados adapter.
The storage section ywang posted looks good to me. Please note that you must use the whole secretkey, including the "=" characters at the end. Just copy it from your ceph.client.admin.keyring or ceph.conf file (if using the admin user). I'll upload a sample config file within the next few days.
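For reference, an admin keyring typically looks like the snippet below; the `key` value is what goes into `secretkey`, trailing "==" and all (the key shown here is made up):

```ini
[client.admin]
    key = AQDLpYBTgHkHDhAAq5vDeBPfWzXYxHZ2GGVZxA==
```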
Thanks for all of the information. I did figure out the type="librados" bit, but I have some other issues at this point. I am also going to set up radosgw and attempt to use your Swift and S3 storage types through it. I'll post more info/questions as I move forward.
The librados adapter does not use radosgw; it accesses librados directly. As for the endpoint, you can put the IP (or DNS-resolvable name) of your monitor. librados will only fetch metadata (probably the osdmap and pgmap) from the monitor and will connect directly to the OSDs for data transfer.
The librados adapter should of course work with the radosgw user, but it will not use radosgw itself; it would just use the same credentials as radosgw. Other users should also work, though I have to admit I never tried it (I don't see why it wouldn't, as long as the user is configured correctly on the Ceph side).
Good luck, and don't hesitate to ask or report problems!
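Putting the pieces from this thread together, a concrete storage line might look like this (monitor IP and key below are hypothetical placeholders; substitute your own):

```xml
<storage type="librados" config="accesskey=admin;secretkey=AQDLpYBTgHkHDhAAq5vDeBPfWzXYxHZ2GGVZxA==;endpoint=10.0.0.1" />
```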
Actually, COSBench can talk to Ceph in a few ways: one is through the librados adapter, another is through radosgw.
radosgw can expose a Swift-compatible or S3-compatible interface, so you can enable radosgw in your Ceph setup and use the Swift or S3 adapter to talk to Ceph. We tested radosgw + Swift adapter; radosgw + S3 adapter should also work (not verified yet).
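For the radosgw + Swift route, the auth/storage sections of the workload would be along these lines, following COSBench's Swift sample config; the account, user, key, and radosgw host below are placeholders for your own radosgw user's credentials:

```xml
<auth type="swauth" config="username=<account>:<user>;password=<swift_key>;auth_url=http://<radosgw-host>/auth/v1.0" />
<storage type="swift" config="" />
```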
I'm unable to get COSBench to work with the librados adapter, but I don't think it is necessarily a problem with the adapter. I'm not very familiar with writing workloads at the XML level; generally I use the GUI to create them, so I think I'm just not creating my workload correctly. If anyone is able to post an example, that would be helpful. Or, if anyone wants to create a workload with my credentials (posted above), I'd be happy to test it against my Ceph cluster and report back.
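In case it helps, here is a sketch of a complete librados workload file. The overall structure follows the standard COSBench workload schema (init/prepare/main/cleanup/dispose stages); the key, endpoint, container/object counts, sizes, and runtime are placeholder values you would need to adjust for your cluster:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<workload name="librados-sample" description="sample benchmark against Ceph via librados">
  <!-- accesskey = Ceph user name; secretkey = that user's key (hypothetical value here) -->
  <storage type="librados" config="accesskey=admin;secretkey=AQDLpYBTgHkHDhAAq5vDeBPfWzXYxHZ2GGVZxA==;endpoint=<monitor-ip>" />
  <workflow>
    <workstage name="init">
      <work type="init" workers="1" config="containers=r(1,2)" />
    </workstage>
    <workstage name="prepare">
      <work type="prepare" workers="1" config="containers=r(1,2);objects=r(1,10);sizes=c(64)KB" />
    </workstage>
    <workstage name="main">
      <work name="main" workers="8" runtime="60">
        <operation type="read" ratio="80" config="containers=u(1,2);objects=u(1,10)" />
        <operation type="write" ratio="20" config="containers=u(1,2);objects=u(11,20);sizes=c(64)KB" />
      </work>
    </workstage>
    <workstage name="cleanup">
      <work type="cleanup" workers="1" config="containers=r(1,2);objects=r(1,20)" />
    </workstage>
    <workstage name="dispose">
      <work type="dispose" workers="1" config="containers=r(1,2)" />
    </workstage>
  </workflow>
</workload>
```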
I've successfully tested COSBench against Ceph using the Swift interface before, so I can confirm that works. Those workloads are easily generated through the GUI.
I am new to COSBench.
I am getting the following error (it seems I am missing something basic in the configuration):
================================================== stage: s1-init ==================================================
--------------------------------- mission: null, driver: driver1 ----------------------------------
[N/A]
================================================== stage: s2-prepare ==================================================
================================================== stage: s3-main ==================================================
================================================== stage: s4-cleanup ==================================================
================================================== stage: s5-dispose ==================================================