The concept of cloud computing isn’t new. References to it have been around since the 1960s, although at the time, “cloud” wasn’t part of the lexicon. Back then, using remote computing and processing elements was simply called “network-based” computing.
Multiple enterprise providers, including Amazon, Google, Microsoft, and IBM, latched onto the idea of using remote servers to deliver higher-capacity computing in the mid-2000s, so it’s not surprising they are credited with the modern development of the cloud. Amazon’s Elastic Compute Cloud debuted in 2006; IBM’s cluster computing initiative, in cooperation with Google, arrived in 2007; Microsoft’s Azure launched in 2010; and Google’s Compute Engine followed in 2012.
Software as a service (SaaS) offerings, like Salesforce’s CRM platform, helped accelerate cloud computing adoption in every industry and in every respect. SaaS opened the world up to the cloud and its ability to lower costs through usage-based purchasing, while still providing firepower equivalent to—or better than—on-premises options.
Why companies have adopted the cloud
Cost savings alone make a strong case for cloud adoption. Pre-cloud IT costs were high and landed in the realm of CapEx (capital expenditure): large up-front payments for software, servers, or other physical infrastructure. Cloud computing moves many of those CapEx costs into OpEx (operational expenditure), or smaller, pay-per-use costs. How? Usage scalability.
The biggest problem with CapEx is that businesses can end up overbuying. With cloud computing, users pay for what they, well, use. A cloud solution that you need four licenses for? Alright, pay for four. Now you need nine? Pay for nine.
This kind of scalability is beneficial across market segments. Small businesses, which often have the tightest purses, pay for what they need and nothing more; that alone lowers barriers to entry significantly. Enterprises looking to optimize operations and cut costs rely on the cloud to scale up and down with them through growth and maintenance periods. Particularly when it comes to infrastructure, companies can pay for more when they need more and pay for less when less will do. That elastic capacity is what lets an e-commerce provider, for example, expand its server capacity during the massive influx of web traffic on Cyber Monday.
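At its core, that elasticity is a simple control loop: compare observed load against per-server capacity and adjust the fleet size. Here is a minimal sketch in Python; the request rates, per-instance capacity, and fleet bounds are illustrative assumptions, not any provider’s actual autoscaling policy:

```python
import math

def desired_instances(current_rps, rps_per_instance=500,
                      min_instances=2, max_instances=50):
    """Return how many servers we'd want for the observed request rate.

    Hypothetical numbers: each instance handles ~500 requests/sec,
    and the fleet is clamped between 2 and 50 instances.
    """
    needed = math.ceil(current_rps / rps_per_instance)
    return max(min_instances, min(needed, max_instances))

# A quiet weekday vs. a Cyber Monday traffic spike
print(desired_instances(800))    # 2  (floor of the fleet)
print(desired_instances(20000))  # 40 (scaled up to meet demand)
```

Under CapEx, that company would have had to buy and rack 40 servers’ worth of hardware year-round; under OpEx, the extra 38 instances exist only while the spike does.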
Flexibility doesn’t come just from scaling. Since cloud services and infrastructure are all remotely accessible, the capacity for borderless business models increases. Associates in Bangalore, London, Chicago, and San Francisco can all work on the same projects and access the same data. Or a business could keep a small office and have staff work remotely, thanks to cloud data accessibility. The business world takes this for granted nowadays, but it cannot be overstated how critically important remote access is. Without this key capability, the modern enterprise’s productivity and ability to offer services across countries would slow to a crawl.
We compiled some G2 user reviews to find out how different-sized businesses adopted the cloud. You can see a breakdown for small businesses, mid-sized businesses, and enterprise businesses below.
Since cloud computing’s inception, we’ve seen three “classes” of cloud computing emerge.
Public cloud computing resources are publicly available at the subscription or pay-as-you-go level and often take the form of storage or virtual infrastructure (e.g., VDI). Public cloud resources are managed by the provider, removing routine infrastructure tasks from users’ workloads. Instead, users can devote themselves to more in-depth, demanding tasks.
Private cloud computing brings firepower and capability equivalent to or better than the public cloud, along with the greater control and customization you might expect from on-premises solutions. In exchange for that control over dedicated resources and security, companies using private cloud resources need to devote some of their own resources toward maintenance and upkeep.
Hybrid cloud computing mixes on-premises and public or private cloud resources. You’ll likely see this when companies want more demanding, high-security, or compliance-related resources retained on site (e.g., medical PII), but don’t mind having other lower-stakes resources running on a public or private cloud (e.g., a patient access portal).
With the increasing availability of cloud resources, the idea of the multicloud has emerged. In a multicloud setup, a company invests in multiple cloud resources and providers, using each for a distinct set of functions: one cloud specifically for containerization, say, and a different one for databases and storage, or one provider for IaaS, another for PaaS, and a variety for SaaS. The multicloud approach lets businesses utilize each cloud provider’s greatest strengths, investing in best-of-breed for each needed business function and optimizing the business’s overall performance. Multicloud also mitigates some of the risks of relying on a single vendor (e.g., technical dependence on that vendor maintaining 100% uptime).
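The “each provider for a distinct function” idea can be pictured as a simple routing table from workload type to provider. This is a toy sketch; the provider names and workload categories are hypothetical placeholders, not recommendations:

```python
# Hypothetical best-of-breed mapping; provider names are placeholders.
MULTICLOUD_PLACEMENT = {
    "containers": "provider-a",  # e.g., strongest managed container platform
    "database":   "provider-b",  # e.g., best managed database offering
    "storage":    "provider-b",
    "analytics":  "provider-c",
}

def place_workload(workload_type):
    """Route a workload to the provider chosen for that business function."""
    try:
        return MULTICLOUD_PLACEMENT[workload_type]
    except KeyError:
        raise ValueError(f"No provider assigned for workload: {workload_type}")

print(place_workload("containers"))  # provider-a
print(place_workload("database"))    # provider-b
```

The point of the table isn’t the lookup itself but the decision it records: each function goes to whichever vendor does it best, and no single vendor’s outage takes down every function at once.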
Future considerations for cloud resources
Because the consequences reach across individual companies and the market as a whole, it’s important to watch a few key areas as cloud computing expands in availability and use.
Failover and recovery
Part of investing in cloud resources is that you are at the mercy of their uptime and downtime. Unexpected outages, especially during peak service times, can be costly even if they last only a few minutes. Many cloud providers include failover capabilities or a level of redundancy and recovery as a selling point to mitigate outages. As more resources move into the cloud, however, backups, redundancies, and DRaaS will become baseline expectations of cloud providers. Consolidating these capabilities with existing cloud resources will reduce customer frustration and increase retention for cloud providers.
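The failover idea itself is straightforward: try the primary resource, and if it’s down, transparently fall through to a redundant replica. A minimal sketch, with callables standing in for hypothetical regional endpoints:

```python
def call_with_failover(endpoints, request):
    """Try each redundant endpoint in order; return the first success.

    `endpoints` is a list of callables standing in for regional replicas.
    """
    errors = []
    for endpoint in endpoints:
        try:
            return endpoint(request)
        except ConnectionError as exc:
            errors.append(exc)  # record the outage and try the next replica
    raise RuntimeError(f"All {len(endpoints)} endpoints failed: {errors}")

# Simulated outage: the primary region is down, the standby answers.
def primary(req):
    raise ConnectionError("primary region unavailable")

def standby(req):
    return f"handled {req} in standby region"

print(call_with_failover([primary, standby], "order-42"))
# handled order-42 in standby region
```

Real DRaaS offerings layer replication, health checks, and automated DNS or traffic switching on top of this pattern, but the contract is the same: the caller never needs to know which replica answered.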
Securing the cloud

Securing cloud resources presents its own concerns for many companies. I asked Aaron Walker, our research analyst specializing in security, for his thoughts. He identifies three major challenges companies face when securing their cloud resources: identity management, data management and encryption, and a skills shortage.
"Identity management is very important and one of the easiest threats to data to remediate, though it can be tiring to manage and maintain. Role-based identities should be set in place and required to conduct any administrative actions. Multi-factor authentication and risk-based authentication tools should be required for accessing any kind of sensitive information.
Data management and encryption are the next layers to protect. All sensitive information stored at rest should be encrypted. Data-centric security tools are useful in discovering and organizing information that needs to be encrypted. They can also help to manage and cluster data to enforce policies according to their labels, characteristics, and requirements.
Lastly, there is a shortage of skilled cybersecurity professionals, and with cloud security being one of the newest and most rapidly evolving security markets, cloud security professionals are few and far between. Novice security professionals are more likely to misconfigure policies and mismanage identities."
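The identity-management practice Walker describes (role-based identities, with multi-factor authentication required for sensitive actions) can be sketched in a few lines. This is a toy model under assumed role names and permissions; real deployments would delegate all of this to an IAM service rather than hand-roll it:

```python
# Hypothetical role model; roles and permission names are illustrative.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete-user", "rotate-keys"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def authorize(role, action, mfa_verified=False):
    """Allow an action only if the role grants it.

    Administrative roles must also have passed multi-factor authentication,
    mirroring the requirement that MFA gate any sensitive access.
    """
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    if allowed and role == "admin" and not mfa_verified:
        return False  # admin actions are denied without a verified MFA check
    return allowed

print(authorize("viewer", "write"))                          # False
print(authorize("admin", "rotate-keys"))                     # False (no MFA)
print(authorize("admin", "rotate-keys", mfa_verified=True))  # True
```

The value of the pattern is that every administrative action flows through one explicit check, which is exactly what makes identity one of the "easiest threats to remediate" while still being tiring to maintain.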
Responsibility for securing cloud resources falls largely on companies, not providers. Walker notes that, especially in the case of data breaches, providers have tended to write off responsibility to businesses themselves. So companies utilizing cloud resources need to be vigilant in securing what they use. To help mitigate cloud security risks, Walker points to ensuring that web application firewalls (WAFs) are well configured; network security policy management (NSPM) tools help make this possible.
The new monopolies?
As with most new things in the tech world, organizations leading the charge are usually the ones with the most available funds. It’s not surprising to see providers like AWS, Microsoft, and Google dominating the cloud computing market. However, as markets mature, the field of providers deepens, and what previously was a monopoly or duopoly becomes competitive.
A healthy industry requires the cloud behemoths to loosen their grip. In the public cloud alone, just 16% of workloads are hosted outside AWS, Azure, Google, or Alibaba. AWS on its own hosts almost 50% of the public cloud, and together with Azure it takes about 70% of the market. That’s a whole lot of data in only two places. Consolidating that much public cloud data, or even that much of the market, in one or two companies spells trouble. Data ownership will likely be what drives an overhaul of antitrust and monopolization laws, sooner rather than later.
In the meantime, companies looking to chip away at Amazon’s and Azure’s lion’s share will need to beat them at their own game, but at a market or industry level. They’ll need to specialize. We’re likely to see the advent of new cloud providers targeting, for example, small businesses or health care service providers specifically, designing cloud infrastructure or services especially for their needs.
Keep an eye out for G2 researchers at tech events later this year, including AWS re:Invent and Dreamforce. In the meantime, learn more about our researchers and what we do.
Zack is a former G2 senior research analyst for IT and development software. He leveraged years of national and international vendor relations experience, working with software vendors of all markets and regions to improve product and market representation on G2 and to build better cross-company relationships. Using authenticated review data, he analyzed product and competitor data to find trends in buyer/user preferences around software implementation, support, and functionality. This data enabled thought leadership initiatives around topics such as cloud infrastructure, monitoring, backup, and ITSM.