

Statistics on CeTune

  • Number of watchers on GitHub: 76
  • Number of open issues: 49
  • Average time to close an issue: 6 days
  • Main language: Python
  • Average time to merge a PR: 3 days
  • Open pull requests: 28+
  • Closed pull requests: 36+
  • Last commit: over 1 year ago
  • Repo created: over 4 years ago
  • Repo last updated: over 1 year ago
  • Size: 46.9 MB
  • Organization / Author: 01org
  • Contributors: 18

Functionality Description

  • CeTune is a toolkit/framework to deploy, benchmark, profile, and tune Ceph cluster performance.
  • It aims to speed up Ceph performance benchmarking and provides clear charts of system metrics and latency-breakdown data to help users analyze Ceph performance.
  • CeTune evaluates Ceph through three interfaces: block, file system, and object.

Maintenance

  • CeTune is an open source project under LGPL v2.1, driven by the Intel BDT CSE team.
  • Project repository: https://github.com/01org/CeTune
  • Subscribe to the mailing list: https://lists.01.org/mailman/listinfo/cephperformance

Prepare

  • One node acts as the CeTune controller (a.k.a. head); the other nodes act as CeTune workers.
  • The head must be able to autossh (password-less SSH) to every worker, including itself, and must have a 'hosts' file containing the information for all workers (see the illustrative example after this list).
  • All nodes must be able to reach a yum/apt-get repository and to wget/curl from ceph.com.
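
Assuming the 'hosts' file follows the standard /etc/hosts format, it can be a plain mapping of worker addresses to hostnames; the IPs and hostnames below are placeholders, not values prescribed by CeTune:

# illustrative /etc/hosts-style entries on the head (placeholder IPs and hostnames)
192.168.0.10    head
192.168.0.11    worker01
192.168.0.12    worker02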

Installation

  • Install on the head and all workers:
# configure apt-get, wget and pip proxies on the head and workers if your environment requires them
apt-get install -y python
  • Install on the head:
git clone https://github.com/01org/CeTune.git

cd /CeTune/deploy/
python controller_dependencies_install.py

# make sure the head can autossh to all worker nodes and to 127.0.0.1
cd ${CeTune_PATH}/deploy/prepare-scripts; ./configure_autossh.sh ${host} ${ssh_password}
  • Install on the workers:
cd /CeTune/deploy/
python worker_dependencies_install.py

Start CeTune with WebUI

# install webpy python module
cd ${CeTune_PATH}/webui/ 
git clone https://github.com/webpy/webpy.git

cd webpy
python setup.py install

# run CeTune webui
cd ${CeTune_PATH}/webui/
python webui.py

# you should see output like the following
root@client01:/CeTune/webui# python webui.py
http://0.0.0.0:8080/
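
Assuming port 8080 from the output above is reachable, you can quickly verify the WebUI from any machine that can reach the head node (replace ${head_ip} with the head's address):

# quick reachability check of the CeTune WebUI
curl -I http://${head_ip}:8080/
# or simply open http://${head_ip}:8080/ in a browser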

Add user for CeTune

cd /CeTune/visualizer/
# show help
python user_Management.py --help

# add a user
cd /CeTune/visualizer/
python user_Management.py -o add --user_name {set username} --passwd {set passwd} --role {set user role[admin|readonly]}

# delete a user
python user_Management.py -o del --user_name {username}

# list all users
python user_Management.py -o list

# update a user role
python user_Management.py -o up --user_name {username} --role {set user role[admin|readonly]}
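
For example, to create an administrator account and confirm it exists (the username and password below are placeholders):

# add an admin user (placeholder credentials)
python user_Management.py -o add --user_name cetune_admin --passwd change_me --role admin

# confirm the user was created
python user_Management.py -o list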
  • CeTune WebUI

[Screenshot: webui.png]


Configure

  • On the WebUI 'Test Configuration' page, you can specify all of the configuration required for deployment and benchmarking.
  • Users can also edit conf/all.conf, conf/tuner.yaml, and conf/cases.conf directly.
  • A configuration helper is available under the 'Helper' tab (right after 'User Guide') and is also shown on the configuration page.
  • Below is a brief overview of each configuration file's purpose (an illustrative editing workflow follows this list):
    • conf/all.conf
      • Describes the cluster and the benchmark.
    • conf/tuner.yaml
      • Tunes the Ceph cluster: pool configuration, ceph.conf settings, disk tuning, etc.
    • conf/cases.conf
      • Decides which test cases to run.
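
If you prefer editing the configuration files directly rather than using the WebUI, the workflow is simply to edit them under the CeTune checkout before starting a run; this assumes the conf/ directory sits at the top of the checkout, and the editor choice is only an example:

# edit the configuration files in place
cd ${CeTune_PATH}/conf/
vi all.conf      # cluster and benchmark description
vi tuner.yaml    # ceph tuning: pools, ceph.conf, disk settings
vi cases.conf    # which test cases to run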

Deploy Ceph

Assuming Ceph packages are installed on all nodes, this part demonstrates the workflow of using CeTune to deploy OSDs and MONs and bring up a healthy Ceph cluster.

  • Configure node info under 'Cluster Configuration' (a sketch of these keys in conf/all.conf follows this section):
    • clean build (true / false): set true to clean the currently deployed Ceph and redeploy a new cluster; set false to reuse the current cluster layout and add new OSDs to the existing cluster.
    • head (${hostname}): CeTune controller node hostname.
    • user (root): only root is supported currently.
    • enable_rgw (true / false): set true and CeTune will also deploy radosgw; set false to deploy only OSD and RBD nodes.
    • list_server (${hostname1},${hostname2},...): list OSD nodes here, separated by ','.
    • list_client (${hostname1},${hostname2},...): list client (RBD/COSBench worker) nodes here, separated by ','.
    • list_mon (${hostname1},${hostname2},...): list MON nodes here, separated by ','.
    • ${server_name} (${osd_device1}:${journal_device1},${osd_device2}:${journal_device2},...): after nodes are added to 'list_server', CeTune adds new lines keyed by each server's name; add osd:journal device pairs for the corresponding node, separated by ','.
  • Uncheck 'Benchmark', check only 'Deploy', and click 'Execute'.

[Screenshot: webui_deploy.png]

  • The WebUI will jump to 'CeTune Status', where you will see console logs like below.

[Screenshot: webui_deploy_detail.png]
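
As a rough sketch of the 'Cluster Configuration' keys listed above, the corresponding entries in conf/all.conf might look like the following; the hostnames and device paths are placeholders, and the exact key/value syntax may differ between CeTune versions:

# illustrative sketch only -- hostnames and devices are placeholders
clean build: true
head: head01
user: root
enable_rgw: false
list_server: server01,server02
list_client: client01
list_mon: server01
server01: /dev/sdb:/dev/sde1,/dev/sdc:/dev/sde2
server02: /dev/sdb:/dev/sde1,/dev/sdc:/dev/sde2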


Benchmark Ceph

  • Users can configure disk_read_ahead, scheduler, etc. under the 'System' settings.
  • ceph.conf tuning parameters can be added under 'Ceph Tuning'; CeTune applies them to the Ceph cluster at runtime.
  • 'Benchmark Configuration' controls the benchmark process; a detailed explanation is given below.
    • There are two parts under 'Benchmark Configuration'.
    • The first table controls basic settings such as where to store result data and which data will be collected.
    • The second table controls which test cases will be run; users can add multiple test cases, and they are executed one by one (a hypothetical sketch of a test-case definition follows this list).
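
The exact column layout of a test case is defined by your CeTune release and described in the configuration helper; purely as a hypothetical illustration, a test-case row in conf/cases.conf (or in the second table of 'Benchmark Configuration') typically ties a benchmark engine to an I/O pattern and its parameters:

# hypothetical test-case sketch -- field names, order and syntax are not authoritative
# engine  pattern    block_size  queue_depth  rampup  runtime
fio       randwrite  4k          64           100     400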

Check Benchmark Results

[Screenshot: webui_result.png]

[Screenshot: webui_result_detail.png]

[Screenshot: webui_result_detail2.png]


User Guidance PDF

CeTune Documents Download Url

CeTune open issues
  • about 2 years ago: RGW default pools are wrong since jewel
  • about 2 years ago: AttributeError: 'ThreadedDict' object has no attribute 'userrole'
  • about 2 years ago: allow to specify keyring if cephx enabled
  • about 2 years ago: disk performance on clients
  • about 2 years ago: CentOS dependency install
  • over 2 years ago: Add user role permit to cetune
  • over 2 years ago: kraken & bluestore disk_format
  • over 2 years ago: conf/common.py: Incorrect IP address returned by getIpByHostInSubnet()
  • over 2 years ago: Result report export to excel
CeTune open pull requests
  • Support testing on more than 1 ceph pools
  • parse the parameter 'disk_num_per_client' more flexibly
  • show cpu core log list
  • fix_cetune_cancel_function
  • fix_cetune_cancel_function
  • modify the method for get the value of cur_space
  • cetune log level set
  • Wip jian
  • do reweight for balance the PGS of osds
  • asynchronous processing the make_osd_fs
  • fix 'add cancel button on cetune status page' pr bug
  • [common.py] fix the code format
  • Deploy: fix bugs when creating partition
  • User control UI coding
  • Fio log format
  • [generic]delete the cluster["run_time_extend"]
  • [analyzer] update get_execute_time()
  • add some parameters for lttng
  • vdbench modify some method and optimize run method
  • ui add sysbench
  • save the interrupt to the node
  • Updated wait_workload_to_stop and stop_workload and Add new method to check vdbench
  • do analyzer at nodes
  • do analyzer at nodes
  • Improvement cases
  • calculate nvme different value
  • add the workflow test
  • Rgw deploy imprevoment