Statistics on virtual-storage-manager

Number of watchers on Github 153
Number of open issues 56
Average time to close an issue 9 days
Main language Python
Average time to merge a PR 1 day
Open pull requests 16+
Closed pull requests 15+
Last commit over 2 years ago
Repo Created about 5 years ago
Repo Last Updated over 1 year ago
Size 16.2 MB
Organization / Author: 01org
Latest Release: v2.1.0
Contributors: 13

VSM - Virtual Storage Manager

Virtual Storage Manager (VSM) is software that Intel has developed to help manage Ceph clusters. VSM simplifies the creation and day-to-day management of Ceph clusters for cloud and datacenter storage administrators.

VSM enables OEMs and system integrators to ensure consistent cluster configuration through the use of pre-defined, standard cluster configurations. As a result, it improves ease of cluster installation and operational reliability, and reduces maintenance and support costs.

VSM supports the creation of clusters containing a mix of hard disk drives (HDDs), solid-state drives (SSDs), and SSD-cached HDDs. It simplifies management of the Ceph cluster by organizing servers and storage devices according to performance characteristics, intended use, and failure domain.
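Conceptually, this organization maps each device into a group keyed by its performance class and failure domain. The sketch below is purely illustrative of that idea; the class and field names are hypothetical and are not VSM's actual data model or API.

```python
# Illustrative sketch: grouping storage devices by performance class and
# failure domain, as VSM's storage groups do conceptually. All names here
# are hypothetical, not taken from the VSM codebase.
from collections import defaultdict
from dataclasses import dataclass


@dataclass(frozen=True)
class Device:
    server: str       # host the device is attached to
    path: str         # e.g. /dev/sdb
    perf_class: str   # "hdd", "ssd", or "ssd_cached_hdd"
    zone: str         # failure domain (rack, room, ...)


def build_storage_groups(devices):
    """Group devices so placement can respect performance class and failure domain."""
    groups = defaultdict(list)
    for d in devices:
        groups[(d.perf_class, d.zone)].append(d)
    return dict(groups)


devices = [
    Device("node1", "/dev/sdb", "hdd", "zone-a"),
    Device("node2", "/dev/sdc", "ssd", "zone-a"),
    Device("node3", "/dev/sdd", "hdd", "zone-b"),
]
groups = build_storage_groups(devices)
# Each (perf_class, zone) pair becomes its own group.
```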

The VSM web-based user interface provides the operator with the ability to monitor overall cluster status, manage cluster hardware and storage capacity, inspect detailed operation status of Ceph subsystems, and attach Ceph pools to OpenStack Cinder.
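Attaching a Ceph pool to OpenStack Cinder ultimately amounts to configuring an RBD backend on the Cinder side. A minimal illustrative cinder.conf fragment is shown below; the pool name, user, and secret UUID are placeholders, not values VSM generates.

```ini
[DEFAULT]
enabled_backends = ceph

[ceph]
# Standard Cinder RBD backend options; all values below are placeholders.
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
```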

VSM has been developed in Python using OpenStack Horizon as the starting point for the application framework, and has a familiar look and feel for both software developers and OpenStack administrators.

Important Notice and Contact Information

a) Open source VSM does not have a full-time support team, so it is generally not suitable for production use unless you can support it yourself or obtain support from a third party. Before using VSM, be prepared to invest enough effort to learn how to use it effectively and to address possible bugs.

b) To help VSM develop further, please become an active member of the community and consider giving back by making contributions. We intend to make all open source VSM feature proposals public, and do all development publicly.

For other questions, contact yaguang.wang@intel.com or ferber.dan@intel.com

Licensing

a) Intel source code is released under the Apache 2.0 license.

b) Additional libraries used with VSM have their own licensing; refer to NOTICE for details.

Installation & Usage

Please refer to INSTALL.md or INSTALL.pdf for installation instructions, and to the wiki page to get started.

Contributing

Please refer to the wiki page to learn how to get involved.

Resources

Wiki: https://github.com/01org/virtual-storage-manager/wiki

Issue tracking: https://01.org/jira/browse/VSM

Mailing list: http://vsm-discuss.33411.n7.nabble.com/

*Other names and brands may be claimed as the property of others.

virtual-storage-manager open issues
  • about 3 years install error
  • about 3 years succeed to remove a server but fail to add it again
  • over 3 years Problem with Keystone
  • over 3 years Add existing cluster
  • over 3 years auto detect ceph.conf doesn't work as expected
  • over 3 years debuild fails on ubuntu 14
  • over 3 years receive "ERROR: checking monitor status" for VSM 2.10 with Jewel on Centos 7
  • over 3 years remove a server, then add it back, the server's osds seem invisible
  • over 3 years PPA Repo?
  • over 3 years Add a background service that updates the VSM database as out-of-band cluster topology changes are made.
  • over 3 years to improve the performance of the sql queries, especially for performance metrics related
  • over 3 years view vsm logs from UI
  • over 3 years High availability for VSM controller
  • over 3 years Add complete set of Juno features in Horizon; pluggable settings, etc.
  • over 3 years Configuration Management: as more and more features are added, and different scenarios require different settings, configuration management is required.
  • over 3 years build up automation testing framework
  • over 3 years support to configure and manage ceph client
  • over 3 years requires access control for different features
  • over 3 years embed VSM into Openstack dashboard
  • over 3 years list all vsm related service status on UI
  • over 3 years Automated migration of data from one EC pool to another EC pool.
  • over 3 years require MDS management
  • over 3 years require RBD level management
  • over 3 years Mirantis OpenStack OS / Fuel Support 6.X
  • over 3 years RedHat OpenStack OSP Support V6
  • over 3 years SuSE Linux Support
  • over 3 years Inktank Ceph Enterprise 1.2 Support
  • over 3 years support to manage volume backup/snapshot
  • over 3 years Support email alerts
  • over 3 years Support SNMP alerts
virtual-storage-manager open pull requests
  • Display capacities less than 1000 with appropriate units on dashboard & pg status page.
  • Reduce interval on OSD status check from 10 min to 1 min.
  • Add non-interactive flag to apt-get in preinstall.
  • Detect when OSDs are removed out of band and remove them from VSM's db.
  • Fix mds summary data and add exception handler to summary loop.
  • Add the complete file set to be removed by uninstall.sh
  • Master cleanup duplication
  • Remove agent vsm-deps cache functionality.
  • Added changes for installation on Ubuntu.
  • Fix accidentally reverted changes to DB API
  • allow openstack glance to use ceph pools
  • Fix 'Cluster Status' widget when cluster is in error state
  • Modify config parser to use the most recently touched config file as the authoritative source.
  • scheduler: add flag to judge success or error in add_storage_group_to…
  • Translaction and other related stuff for Russian interface
  • Add page for changing a language of interface
virtual-storage-manager latest release notes
v2.2.0 beta release

Resolved bugs

  • VSM-533 support to create multiple RGW instances.
  • VSM-532 allow openstack glance to use ceph pools
  • VSM-519 Does this tool support Ceph Jewel now?
  • VSM-512 IndexError: list index out of range, when checking cluster.manifest with latest code

from master branch

  • VSM-504 there is no cluster_manifest supplied during the installation
  • VSM-495 the pagination feature is broken at rbd status page (work in progress)
  • VSM-480 Share keystone, mysql and rabbitmq between vsm and openstack (work in progress)
  • VSM-463 VSM 2.1 ignores ceph parameters defined in cluster.manifest file during cluster creation.

  • VSM-459 When importing a cluster, if there are no OSD and MON sections, the import will fail (work in progress)

  • VSM-451 make file system mount options configurable

Known issues


v2.1.0 v2.1.0 final release

Special Notes

  • This is the 2.1 final release. It has been tested extensively, especially cluster import and Infernalis support.
  • 8 new features are added, and 27 bugs are fixed in this release.
  • Spotlights:
    • Import existing cluster (see Cluster Management\Import Cluster): users can import an existing cluster, monitor its status, and manipulate it by adding or removing servers or devices.
    • Manage storage group (see Cluster Management\Manage Storage Group): users can see the CRUSH map hierarchy and understand storage group coverage of the topology.
    • Manage zone (see Cluster Management\Manage Zone): users can add new zones and then associate servers with them.

New Features

  • Key Summary
  • VSM-53 Allow VSM to attach to and manage an existing Ceph cluster
  • VSM-185 show SMART information for storage devices
  • VSM-409 add storage group column on OSD list page
  • VSM-418 Add support to Ceph infernalis
  • VSM-385 Change PG count even after ceph cluster deployed
  • VSM-337 Upgrade: expect to upgrade from 1.0 to 2.0
  • VSM-407 suggest to split disk status and capacity utilization in different column in device list page
  • VSM-396 Support DNS lookup beside current /etc/hosts lookup

Resolved bugs

  • Key Summary
  • VSM-427 when updating storage group, an hourglass should be displayed.
  • VSM-428 if storage group is referenced by some pools, it still allows updating its marker.
  • VSM-423 when creating a new pool, after the spinner disappears, the pool does not show up
  • VSM-419 Stop Server error in 2.1 beta
  • VSM-172 cluster.manifest - request for improvement from Marcin
  • VSM-261 when executing vsm-controller, a TypeError raises
  • VSM-330 When upgrading from Firefly to Giant, the upgrade complains missing python-rados package
  • VSM-344 after a few install, clean-up, reinstall cycles, we have seen some EADDRINUSE errors on starting rabbitmq
  • VSM-354 when the 'create cluster' page is opened, other pages can still be opened
  • VSM-371 When installing with dependent packages pre-prepared, the installer stops with complaints
  • VSM-395 Remove then restore an OSD, the OSD can't hit to in-up state again.
  • VSM-422 stop/start servers should display an hour glass during operation
  • VSM-425 Dashboard Monitor widget: boxes too small for hostname font
  • VSM-426 blocked on process indicator
  • VSM-388 VSM fails to create a Monitor daemon (after clicking OK in a popup warning window)
  • VSM-429 VSM agent fail to save ceph.conf when size is above 127 KB.
  • VSM-225 VSM Creates Very Small pg_num and pgp_num Size for EC Pool
  • VSM-420 VSM 2.0 is_server_manifest_error
  • VSM-421 During cluster import, vsm agents don't have sufficient rights to /var/lib/ceph
  • VSM-380 it would be nice if the install.sh script took care of agent-tokens, even when install.sh is restarted after prior errors.
  • VSM-406 servers can't be stopped after cluster imported
  • VSM-410 The scrollbar of manage server is missing now.
  • VSM-413 Manage device, click new, popup internal ERROR on the top right corner.
  • VSM-416 at import cluster page, the scroll bar for crushmap and ceph.conf can scroll
  • VSM-417 when creating cluster, the monitor status is incorrect
  • VSM-387 ceph version conflicts when reinstalling VSM after the previous install upgraded Ceph
  • VSM-377 VSM installation failure

Known issues

v2.1.0-b1 v2.1.0 beta1 release

Special Notes

  • This is the 2.1 beta 1 release; it has received only limited testing.
  • 27 new features are added, and 51 bugs are fixed in this release.

New Features

  • VSM-66 VSM interoperates with Ceph update
  • VSM-226 Need Documentation for VSM REST API
  • VSM-78 Integration with Cinder capacity filter
  • VSM-58 Calamari Integration
  • VSM-56 Report total RBD Image commit
  • VSM-32 support to define storage group with storage node or rack as unit
  • VSM-376 support to label device with by-uuid
  • VSM-382 undeploy/redeploy ceph cluster
  • VSM-389 Need to automatically retrieve the osd info from existing cluster.
  • VSM-386 batch add osds
  • VSM-355 some already-used disk paths are listed in the data path and journal disk path fields on the 'add new osd' page
  • VSM-90 Monitor Status page improvements.
  • VSM-96 UI responsiveness
  • VSM-98 Server Configuration Change.
  • VSM-341 expect to have some utilities to help make automation tests.
  • VSM-352 servermgmt page autorefreshes too frequently
  • VSM-372 narrow the attack surface from VSM to Openstack cluster
  • VSM-373 at adding new OSD page, expect to list device name like /dev/sdd1 instead of pci bus address.
  • VSM-140 Ceph Development (Epic 6): Prototype Calamari/VSM dashboard implementation
  • VSM-88 On monitor status page, report what server each monitor is running on.
  • VSM-242 Allow user to modify ceph.conf outside VSM
  • VSM-15 VSM-backup prompt info not correct
  • VSM-124 [CDEK-1852] VSM | adding possibility to manipulate ceph values in cluster.manifest file
  • VSM-4 Average Response Time missing in dashboard Overview panel "VSM Status" section.
  • VSM-184 add automated script to help deploy VSM on multiple nodes
  • VSM-159 add issue reporting tool
  • VSM-156 add sanity check tool to help identify potential issues before or after deployment

Resolved bugs

  • VSM-349 clicking 'create cluster' gives the tip: there are some zones with no monitor created
  • VSM-411 When creating a cluster with four servers, choosing all of them as storage nodes but only three as monitors, the cluster cannot be created successfully.
  • VSM-329 Remove Monitors button in Manage Servers hangs when Monitor node also has MDS daemon
  • VSM-400 the UI of all server operator pages will appear loading without any operation
  • VSM-356 I got a warning that the number of PGs per OSD is too large after upgrading Ceph from a lower version to Hammer
  • VSM-397 mysqld takes up 100% CPU on one core and cause VSM dashboard to become unusable
  • VSM-412 After remove server or remove monitor, failed to add the monitor back.
  • VSM-391 the ceph df number is not consistent with pool quota
  • VSM-399 the UI messy of the manage servers page
  • VSM-402 after stop server, then start server, the osd tree changes
  • VSM-392 Have removed the volume from the openstack, but from the vsm rbd status page, the rbd list still include the volume
  • VSM-384 stuck at restart all ceph servers after stopped them all from UI
  • VSM-394 present more than one pool to openstack cinder, it always creates volumes on a pool
  • VSM-321 no upstart mechanism used for ubuntu when controlling ceph service
  • VSM-336 On Dashboard, even no cluster is created, the VSM version and uptime should be displayed
  • VSM-24 [CDEK-1661] VSM Dashboard | Manage Servers | Reset server status - works not correctly.
  • VSM-365 Creating Cluster gets stuck at ceph.conf creation when running VSM on CentOS 7.1
  • VSM-19 [CDEK-1613] VSM | Reset Server Status button - returns Error: Network error
  • VSM-379 Trace back in browser when using reset server status action buttons
  • VSM-381 run diamond through service instead of current process launching
  • VSM-378 Performance data is retrieved from outside nodes
  • VSM-374 the down server is not reflected in VSM
  • VSM-375 Malformed JSON in 'Integrate Cluster' function
  • VSM-366 the password for openstack access is shown as plain text
  • VSM-312 vsm-node sets node status=Need more IP if a Monitor-only node does not have a cluster IP address.
  • VSM-367 can't create cluster at public cloud environment
  • VSM-368 The default password does not follow the password policy of including uppercase letters and digits.
  • VSM-369 Change password: '!' is not supported in passwords even though the prompt message says it is OK
  • VSM-244 Internal server error when installing v1.1
  • VSM-224 Controller node error in /var/log/httpd/error_log - constantly ongoing messages [error]
  • VSM-239 with automatic deployment, the execution is blocked at asking if start mysql service
  • VSM-193 hard-coded cluster id
  • VSM-179 keep ceph.conf up to date when executing remove server operations.
  • VSM-176 SSL certificate password is stored in a plain text file
  • VSM-177 wrong /etc/fstab entry for osd device mount point
  • VSM-166 cluster_manifest sanity check program gives incorrect advice for auth_keys
  • VSM-171 [CDEK1672] VSM_CLI | list shows Admin network in Public IP section
  • VSM-168 [CDEK1800] VSM_CLI | remove mds - doesn't update vsm database
  • VSM-121 Storage node unable to connect to controller although network is OK and all setting correct
  • VSM-123 Storage node will not be able to contact controller node to install if http proxy set
  • VSM-260 the check_network in server_manifest will be wrong when it has a single network card
  • VSM-236 no way to check manifest correctness after editing them
  • VSM-233 console blocks when running automatic installation procedure
  • VSM-33 negative update time in RBD list
  • VSM-216 Add storage group requires at least 3 nodes
  • VSM-113 [CDEK-1835] VSM | /var/log/httpd/error_log - constantly ongoing messages [error]
  • VSM-51 Install Fails for VSM 0.8.0 Engineering Build Release
  • VSM-29 vsm-agent process causes high i/o on os disk
  • VSM-230 when presenting pool to openstack, cache tiering pools should be listed.
  • VSM-207 can't assume eth0 device name
  • VSM-26 [CDEK-1664] VSM | Not possible to replace node if ceph contain only 3 nodes.

Known issues
