It’s important to know your compute options when you’re looking at hosting an application or website.

Is there a difference between a cloud server and a dedicated server? This is a question we’re frequently asked … and the start of a new year feels like a perfect time to answer it.

In this piece, we’ll look specifically at cloud servers.

What is a cloud server?

A cloud server is a shared section of a server. It’s allocated for your use (via a virtual environment) and controlled by the service or cloud provider. On your end, it appears that you are running your own compute and storage space. However—and this is an important point—there are usually a large number of other people using the same compute and storage resources within their own virtualized environment.

Cloud servers let you save on the purchasing and management costs that would otherwise go toward building and maintaining your own infrastructure. Additionally, with cloud storage, you're billed only for the storage you actually use, which means you can grow your compute and storage services and scale as needed.

All workloads are not created equal. Your cloud provider should be willing and able to help you architect critical and non-critical workloads depending on your particular requirements.

You also benefit from an economy of scale, meaning you share hardware with other businesses but pay only for the compute and storage capacity you use. If you need to increase your usage, you can do so through a dashboard. You have the flexibility to increase or decrease your resources and can change server specifications as the demands of your business change.
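To make the pay-for-use model concrete, here is a toy billing calculation. The per-unit rates are invented for the example and don't reflect any real provider's pricing:

```python
# Toy illustration of usage-based cloud billing.
# The rates below are hypothetical, not any provider's actual pricing.

def monthly_bill(gb_stored, vcpu_hours,
                 rate_per_gb=0.02, rate_per_vcpu_hour=0.04):
    """Charge only for the storage and compute actually consumed."""
    return gb_stored * rate_per_gb + vcpu_hours * rate_per_vcpu_hour

# 500 GB stored plus two vCPUs running all month (~1,460 vCPU-hours).
print(round(monthly_bill(500, 1460), 2))  # the bill scales with usage
```

Halve the usage and the bill halves with it; there's no idle hardware you're paying to keep powered on.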

The good news?

Your business is supported by large amounts of processing power and storage.

The bad news?

Your business is supported by large amounts of processing power and storage.

When you share resources with many other businesses (sometimes dozens), your cloud performance can drop when one of those businesses posts a viral hamster video.

As IBM has shown everyone, don’t underestimate the impact of a viral hamster video.

Finally, there is a myth out there about cloud redundancy. Many people believe that in the event of an outage or server crash, the cloud “automagically” fails over to a working box, ensuring minimal to no interruption on your end.

This is not true.

Cloud servers are not, by definition or default, high availability or redundant.

Simply put, high availability or redundant cloud capabilities must be architected at the server and application layers.

The first question to ask is whether your applications are architected to take advantage of multiple cloud servers in multiple availability zones. If not, that's a great place to start a conversation with your cloud provider.
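What "architected at the application layer" means in practice can be as simple as the application knowing about more than one server. The sketch below is a minimal, hypothetical illustration: the zone endpoints and health-check are made up, and real deployments would typically use a load balancer, DNS failover, or an orchestrator rather than an inline loop like this.

```python
# Minimal sketch of application-layer failover across availability
# zones. Endpoints and health-check logic are hypothetical; real
# systems would use load balancers or DNS failover instead.

def pick_healthy_endpoint(endpoints, is_healthy):
    """Return the first endpoint that passes its health check, or None."""
    for endpoint in endpoints:
        if is_healthy(endpoint):
            return endpoint
    return None  # every zone is down; nothing to fail over to

# Hypothetical cloud servers in two availability zones.
ZONE_ENDPOINTS = [
    "https://app.zone-a.example.com",
    "https://app.zone-b.example.com",
]

if __name__ == "__main__":
    # Pretend zone A has crashed; the app fails over to zone B.
    healthy = pick_healthy_endpoint(
        ZONE_ENDPOINTS,
        is_healthy=lambda ep: "zone-b" in ep,
    )
    print(healthy)
```

The point isn't the ten lines of Python; it's that the failover decision lives in your architecture, not in the cloud "automagically." If your application only knows about one server in one zone, there is nothing for it to fail over to.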