OpenMetal Cloud FAQ
The following FAQs cover the main categories of OpenMetal Cloud: the platform and hardware technology, the network setup, the Private Cloud Core, the service level agreement, and how to run your very first private cloud!
OpenStack
OpenStack is the overarching cloud management software; it handles networking, compute, storage, access levels, and much more. More information can be found on our OpenStack platform page.
A security group acts as a virtual firewall for servers and other resources on a network. Security group rules are enforced on the hardware nodes to protect the VMs running on each individual node. This allows you to expose public IP addresses where needed, while individual departments keep their own private network space for their VMs, separated from other departments.
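For teams automating this, here is a minimal sketch using the openstacksdk (the cloud name "openmetal", the group name, and the CIDR are placeholders for your own values) that creates a security group and allows inbound SSH:

```python
import openstack

# Connect using a clouds.yaml entry; "openmetal" is a placeholder name.
conn = openstack.connect(cloud="openmetal")

# Create the security group that VMs will attach to.
group = conn.network.create_security_group(
    name="web-servers",
    description="Allow SSH from a trusted network only",
)

# Allow inbound SSH (TCP/22) from an example CIDR (RFC 5737 documentation range).
conn.network.create_security_group_rule(
    security_group_id=group.id,
    direction="ingress",
    ethertype="IPv4",
    protocol="tcp",
    port_range_min=22,
    port_range_max=22,
    remote_ip_prefix="203.0.113.0/24",
)
```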
Within your private "hard" VLANs you can also create overlay networks that provide networking, management control, control panels, APIs, and more to your compute and storage.
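As an illustration of an overlay, the sketch below uses the openstacksdk to create a VXLAN-backed tenant network, subnet, and router for a department; the names and CIDR are placeholders:

```python
import openstack

conn = openstack.connect(cloud="openmetal")  # placeholder clouds.yaml entry

# A tenant network is an overlay (VXLAN-backed on typical deployments).
net = conn.network.create_network(name="dept-a-overlay")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="dept-a-subnet",
    ip_version=4,
    cidr="10.10.0.0/24",
)

# Attach the subnet to a router so VMs can reach other networks.
router = conn.network.create_router(name="dept-a-router")
conn.network.add_interface_to_router(router, subnet_id=subnet.id)
```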
Yes, this is your private cloud!
Ceph
Ceph provides the network storage, including block storage, object storage, and, if needed, an NFS-compatible file system called CephFS.
If you need to maximize your usable disk space, we generally prefer Replica 2. We supply only data center grade SATA SSD and NVMe drives. The mean time between failures (MTBF) of a typical hard drive is 300,000 hours, and most recommendations for (and the history of) selecting 3 replicas come from hard drive use cases that take this failure rate into account. Both our SATA SSDs and our NVMe drives have an MTBF of 2 million hours. Though failures will certainly still occur, they are roughly 6 times less likely than with an HDD. The usable Ceph disk space savings are significant (estimated, not exact):

HC Small, Replica 3: 960GB × 3 servers / 3 replicas = 960GB usable
HC Small, Replica 2: 960GB × 3 servers / 2 replicas = 1,440GB usable
HC Standard, Replica 3: 3.2TB × 3 servers / 3 replicas = 3.2TB usable
HC Standard, Replica 2: 3.2TB × 3 servers / 2 replicas = 4.8TB usable
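The arithmetic is simple enough to check yourself; the sketch below computes usable Ceph capacity from raw capacity per server, server count, and replica count (these are estimates, and real-world overhead reduces them somewhat):

```python
def usable_capacity_gb(raw_per_server_gb: float, servers: int, replicas: int) -> float:
    """Usable Ceph capacity: total raw capacity divided by the replica count."""
    return raw_per_server_gb * servers / replicas

print(usable_capacity_gb(960, 3, 3))   # HC Small, Replica 3:     960.0 GB
print(usable_capacity_gb(960, 3, 2))   # HC Small, Replica 2:    1440.0 GB
print(usable_capacity_gb(3200, 3, 3))  # HC Standard, Replica 3:  3200.0 GB (3.2TB)
print(usable_capacity_gb(3200, 3, 2))  # HC Standard, Replica 2:  4800.0 GB (4.8TB)
```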
No, that is not recommended unless it is an emergency or similar temporary situation. Those drives are not intended for heavy use and are not rated for high disk writes per day.
Bare Metal
When provisioning bare metal servers within your network, they will be on your private VLANs by default. You can then use OpenStack's Firewall as a Service to allow selected public traffic through to that bare metal server. You also have the option to place any bare metal server on the public VLAN by overriding the VLAN tagging on that individual server. If you are running bare metal servers that are not part of the OpenStack cluster, those servers will be within the private or public VLAN you assigned and must traverse one of the private OpenStack routers to connect to a VM that is on a VXLAN.
OpenMetal Cloud Platform and Hardware
We offer OpenMetal Central as a GUI as well as by API. OpenStack and Ceph both have an administrative GUI and a "Self Service User" GUI. As OpenStack and Ceph are often considered "API first" or "Infrastructure as Code first" applications, more administrative features are available via the API or command line than within the administrative interface. For users to whom you grant self-service access, OpenStack and Ceph have strong capabilities within the GUI.
These servers are dedicated to you. IOPS will vary by the hardware you purchase and the technology you are using to access the hardware. The drives used are data center grade Intel NVMe or SATA SSDs. For extremely high IOPS, we recommend using the NVMe or SATA SSD drives directly from your application. For very high IOPS with built-in data protection, Ceph with a replication of 2 on NVMe drives is popular. A replica level of 3 will slightly reduce the IOPS but is a recommended choice.
We are currently researching the right hardware for bulk availability. Please contact your account manager for access to GPUs.
In OpenStack, the control plane is made up of all the services necessary to manage the cloud itself, such as the API, scheduling, database, and messaging services. Separately, each server's IPMI port is connected to an IPMI network that only allows traffic between your port and our central management IP.
This depends on the size of the OpenMetal Cloud and the services being used from the OpenStack control plane. For small OpenMetal Clouds, this might only be a few CPU cores and 2-4GB of RAM per Private Cloud Core server. For very large OpenMetal Clouds, such as several hundred server nodes, the control plane can use enough of the PCC's resources that best practice advises against using the PCC for compute and storage.
Capacity and redundancy benefits come with the 5-server PCC footprint, which is typically appropriate for very large deployments. The use of 3 replicas has typically been the standard for storage systems like Ceph, meaning 3 copies exist at all times in normal operation to prevent data loss. With data center grade SATA SSD and NVMe drives, the mean time between failures (MTBF) is better than that of traditional spinning drives, leading many cloud administrators to move to 2 replicas for Ceph when running on data center grade SSDs.
We supply IPv4 addresses for lease, and they will be terminated on your VLANs. We aim to supply IPv6 as a no-charge lease in a near-future release. You can also SWIP your IPv4 blocks to us.
Your servers are 100% dedicated to you. The crossover between your OpenMetal Cloud and the overall data center comes at the physical switch level for internet traffic and for IPMI traffic. For internet traffic, you are assigned a set of VLANs within the physical switches. Those VLANs only terminate on your hardware.
There are several ways to grow your compute and storage past what is within your PCC. You can add additional matching or non-matching compute nodes. You can add additional matching converged servers to your PCC. You can create a new converged cluster with servers different from your PCC servers. You can also create a new storage cloud, typically done for large-scale implementations when economies of scale favor separating compute and storage.
This depends on your use case. It typically happens naturally as you scale up. You will find that you have some "marooned" resources in your cluster. For example, you may have disk space left over but your RAM/CPU has been consumed. In this scenario, you will just need to add compute. If you are out of storage but have plenty of RAM/CPU, you have the choice of creating a new storage only cluster or shifting your base converged node. Consult with your account manager for advice.
We generally recommend that clouds are not run much over 80% of their theoretical capacity. Monitoring performance and node health is key.
OpenMetal uses open source technology for all major systems. There are no license fees for any OpenStack feature supplied in the standard cloud system or for any features supplied by additional OpenStack components. We do include access to Datadog for hardware node monitoring in the cost of the cloud.
Support and Service
In general, we manage the networks above your OpenMetal Clouds and we supply the hardware and parts replacements as needed for hardware in your OpenMetal Clouds. OpenMetal Clouds themselves are managed by your team. If your team has not managed OpenStack and Ceph private clouds before, we have several options including complimentary onboarding training, self-paced free onboarding guides, free test clouds, paid additional training and coaching, and complimentary emergency service.
You will need to add one of our public keys to the server in question. These keys are rotated periodically. You should remove our public keys after service has been rendered. To access the public key, log into your OpenMetal Central account, click on "Requests" on the left side panel, then click the button labeled "Support Agent Access" on the top right.
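If you prefer to script this, a minimal sketch follows; it assumes you have saved the support key from OpenMetal Central to a local file named support_key.pub (a placeholder name):

```python
from pathlib import Path

AUTH_KEYS = Path.home() / ".ssh" / "authorized_keys"
SUPPORT_KEY = Path("support_key.pub").read_text().strip()

def grant_access() -> None:
    """Append the support public key if it is not already present."""
    existing = AUTH_KEYS.read_text() if AUTH_KEYS.exists() else ""
    if SUPPORT_KEY not in existing:
        with AUTH_KEYS.open("a") as f:
            f.write(SUPPORT_KEY + "\n")

def revoke_access() -> None:
    """Remove the support key after service has been rendered."""
    lines = AUTH_KEYS.read_text().splitlines()
    AUTH_KEYS.write_text(
        "\n".join(line for line in lines if line.strip() != SUPPORT_KEY) + "\n"
    )
```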
Self-service access to VMs, networking space, storage, and other OpenStack services is handled through the Horizon interface or through automation against the OpenStack APIs. As the cloud administrator, you will set up projects for those departments or people. You can set resource limitations that will be enforced by OpenStack.
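As a sketch of what that setup can look like via the openstacksdk (the project name and quota values are placeholders):

```python
import openstack

conn = openstack.connect(cloud="openmetal")  # placeholder clouds.yaml entry

# Create a project for a department.
project = conn.identity.create_project(
    name="marketing",
    description="Self-service project for the marketing department",
)

# Cap the project's compute resources; OpenStack enforces these limits.
conn.set_compute_quotas(project.id, cores=16, ram=32768, instances=8)
```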
Probably not unless you are in a paid training program. If you are not sure or would like clarification on your unique situation, ask your account manager or contact us.
You can return a server by simply removing all running cloud services then requesting removal via API or from OpenMetal Central. To safely remove the server: spin down or move off any VMs, direct your OpenStack to drop management of this server, and detach Ceph from using any drives on this server.
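For teams that script these steps, a minimal sketch of the VM-evacuation portion using the openstacksdk might look like the following; the cloud name and hostname are placeholders, and disabling the node's nova-compute service and removing its Ceph OSDs remain separate follow-up steps:

```python
import openstack

conn = openstack.connect(cloud="openmetal")  # placeholder clouds.yaml entry
NODE = "compute-node-3"  # placeholder hostname of the server being returned

# List every VM currently scheduled on the node (admin-only query) and
# live-migrate each one, letting the scheduler pick the destination host.
for server in conn.compute.servers(all_projects=True, host=NODE):
    print(f"Migrating {server.name} ({server.id}) off {NODE}")
    conn.compute.live_migrate_server(server, host=None, block_migration="auto")

# After the node is empty: disable its nova-compute service and remove any
# Ceph OSDs on its drives, then request removal from OpenMetal Central.
```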
Yes. We have yet to find a workload that cannot be run on either our OpenStack Cloud or our Bare Metal. If you can dream it, we can help you run it! Also, if for any reason it is not a fit, you have a self-service 30-day money-back guarantee or the 30 days of free PoC time. No obligations or lock-in.
We have done this before for customers in many different situations. Key factors include: whether you are over the "Tipping Point" in cloud spend, whether your team is highly technical, and the scale of your deployment. Public Cloud is a good solution when cloud spend is around $10k/month or less. Hosted Private Cloud is a close cousin to public cloud, but scale is important to get the most value. Also, if for any reason it is not a fit, you have a self-service 30-day money-back guarantee or the 30 days of free PoC time.
You can count on the following: An Account Manager, Account Engineer, and an Executive Sponsor will be assigned from our side. You will be invited to our Slack for Engineer to Engineer support. Your Account Manager will collect your goals and we will align our efforts to your success. You can meet with your support team via video up to weekly. We also provide migration planning and discussion on agreements and potential discounts.
We offer two levels of support. All clouds come with the first level included in base prices: hardware management, procurement, provisioning of initial cloud software, providing new versions for upgrades, and support for cloud health issues. The second level, Assisted Management, includes a named Account Engineer, engineer-to-engineer support, upgrade assistance, joint cloud health monitoring with 24/7 team response, and monthly proactive health checks.
Building an OpenStack and Ceph cloud is much harder than running a well-architected cloud. Your OpenMetal Hosted Private Cloud is "Day 2 ready" and is relatively easy to maintain. Skilled Linux System Administrators can learn to maintain an OpenMetal Cloud in about 40 hours using our provided Cloud Administrator Guides, and we will give you free time on non-production test clouds for this purpose.
Your OpenMetal Hosted Private Cloud is "Day 2 ready" and is relatively easy to maintain but does require solid Linux System Administration basics. For companies without a Linux Admin Ops team, we recommend our Assisted Management level of service. It covers most situations, including joint cloud health monitoring with our 24/7 team. A junior Linux System Administrator can learn to maintain an OpenMetal Cloud in about 120 hours using our provided guides.
Speed and Connectivity
HC Smalls have 2x1Gbit ports. All other servers have 2x10Gbit ports. They are bonded by default to provide redundancy and greater throughput.
OpenMetal Clouds are organized into "pods". Each pod has a minimum of 200Gbit of connectivity that can be upgraded based on usage. Pods and the overall network may also have direct peering with other cloud providers for maximum throughput.
Network uptime for 2023 was 99.994%, and for 2024 it is tracking similarly. The base SLA is 99.96%.
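To put those percentages in perspective, here is a quick conversion (assuming a 30-day month) from uptime percentage to allowed downtime:

```python
def downtime_minutes_per_month(uptime_pct: float, days: int = 30) -> float:
    """Minutes of downtime permitted per month at a given uptime percentage."""
    return (1 - uptime_pct / 100) * days * 24 * 60

print(downtime_minutes_per_month(99.96))   # base SLA:    ~17.3 minutes/month
print(downtime_minutes_per_month(99.994))  # 2023 actual:  ~2.6 minutes/month
```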
Ashburn, Virginia (East Coast USA). Los Angeles, California (West Coast USA). Amsterdam, Netherlands (Central Europe). International Business Park, Singapore (Asia Pacific). Data center certifications include: SOC 1, SOC 2, SOC 3, PCI-DSS, NIST 800-53/FISMA, HIPAA, ISO 27001, ISO 22301, ISO 50001, LEED Gold.
Disaster Recovery
Several things come into play. First, your default available expansion capacity will likely be less than what you need. You will need to work with your account manager ahead of time to have that capacity available. Very large deployments do require an agreement for this service. Second, it is likely that you will be SWIPing IP addresses to us to broadcast from our routers. It is wise to understand these processes ahead of time and potentially perform a yearly dry run.
This will depend on your situation, but Ceph has native remote replication options. Use of more than one of our locations can often meet your DR requirements. There are also several companies that specialize in Ceph data replication if your rules require a third party. For backups in general, the Ceph object storage system is one of the best in the industry and that is native to any of your OpenMetal storage clouds.
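As one illustration (not a complete DR plan), Ceph's Python bindings can enable RBD mirroring, its native block-level remote replication, on a pool. This is a sketch assuming the python rados/rbd bindings are installed and a pool named "volumes" exists; bootstrapping the peer connection between sites is a separate step:

```python
import rados
import rbd

# Connect to the local cluster using the standard config file.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("volumes")  # placeholder pool name
    # Pool mode mirrors every image in the pool once peering is configured.
    rbd.RBD().mirror_mode_set(ioctx, rbd.RBD_MIRROR_MODE_POOL)
    ioctx.close()
finally:
    cluster.shutdown()
```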
Have additional questions?
If you have any additional questions, your account manager will assist you with your needs. Get in touch with our team today.