Why We Need To Talk About Sustainable Cloud

Sustainability is an important topic for everyone, especially those in the technology space whose products and services can have a big impact on the environment. At Nasstar, we're committed to sustainability initiatives, both our own and those of our partners like AWS. We actively seek out partners and suppliers with tangible sustainability initiatives, and we strive to become a more environmentally conscious business wherever we can.

The cloud is one area in particular where we have plenty of scope to deliver more sustainability for our customers. Our very own cloud expert, APN Ambassador and AWS Technical Practice Lead, Jason Oliver, has written the article below to help us understand more about sustainable cloud with AWS and what we do to deliver on our sustainability objectives.

A Personal Journey

Next to my passion for innovative technology is my passion for the environment, which I appreciate may seem something of a contradiction. I have been following the green agenda since well before the iconic Greta Thunberg was even born!

My initiation to the topic was at high school; I recall learning about the depleting ozone layer and renewable energy. As a sidebar, I am still puzzled why the hydropower solutions I learned about then haven't been more widely adopted across the water-locked islands that make up the UK.

The older I get, the more mindful I become about my legacy and the state of the planet I am leaving my children. I am, seemingly, the only one in the house constantly running around switching things off when they are not in use, and constantly on a rant to anyone who will listen about the virtues of intelligent energy and resource consumption.

Like many cloud professionals, I came from a background in traditional IT. I considered myself a server-hugger, challenging myself to step away from the tin and embrace the cloud throughout my own journey. I even used to design and build small data centres, so I fully appreciate the plant needed to operate a successful facility: hot aisles, cold aisles, heating, ventilation and air conditioning (HVAC) systems, and so on.

A Cloud Computing Provider's Journey

Nasstar first became an AWS Partner in 2010 and was the UK's first Premier Partner in 2014. We became a Managed Services Provider (MSP) in 2015 and currently hold multiple high-value competencies that differentiate us in the marketplace.

Below are our top reasons for adopting AWS as our preferred cloud computing provider:

  • On September 19th 2019, Amazon and Global Optimism announced The Climate Pledge, a commitment to meet the Paris Agreement 10 years early. In this, Amazon and all subsidiaries pledge to be net-zero carbon by 2040.
  • AWS are on the path to using 100% renewable energy by 2025, five years ahead of their initial goal of 2030.
  • In 2020, Amazon announced The Climate Pledge Fund, a dedicated investment program — with an initial $2 billion in funding to invest in visionary companies whose products and solutions will facilitate the transition to a low-carbon economy.
  • Amazon is the world’s largest corporate purchaser of renewable energy, on a path to powering its operations with 100% renewable energy by 2025.

I recall feeling moved after watching the AWS re:Invent 2020 Infrastructure Keynote with Peter DeSantis. The session is lengthy and detailed and well worth watching in its entirety; however, skip forward to 1h:02m into the keynote for some mind-blowing innovations in data centre efficiency.

In isolation, these are all incredible and ambitious efforts; together, they amount to legendary, legacy-building commitments to do right by industry and planet. Their desire to do the right thing cannot be ignored, and it aligns with my own and Nasstar's sustainability ambitions.

A Partner’s Journey

Nasstar bolsters AWS's sustainability commitments by leveraging innovative design patterns and solutions, saving its customers money whilst helping to save the planet: a winning combination!

Energy-efficient best practice

It is reasonable to assume that reducing the amount of compute capacity would yield efficiencies in both cost and energy; the greenest energy is the energy you don't use. An obvious target is to minimise the amount of compute capacity that is online but not currently required to service a workload. The same thinking applies to the other core services consumed in the cloud, such as storage, networking and security, so reducing consumption across all of them bears fruit in cutting energy use.

In addition to selecting the right partner and cloud computing provider, employing innovative design patterns over traditional hosting can further enhance these benefits. Significant energy-efficient cloud design patterns and best practices include the following:

  • Instance scheduling. A reduction of over 70% in power consumption and costs can be achieved simply by scheduling popular cloud resources such as Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS) instances so that workloads and environments only run at times when the business requires them, for example, 09:00-17:00 Mon-Fri (see the sketch after this list). This is in contrast with an on-premises solution that may not have the necessary controls to shut down resources on a timed basis to reduce consumption and the attack surface.
  • Elasticity. Similarly, various scaling policies can be employed to ensure that cloud resources are used in line with metered demand and not over-provisioned to account for a likely maximum peak. This is in contrast with an on-premises solution that may need to maintain a considerable amount of available capacity online but not operate productively to deal with transient workload peaks, hardware failure, and DC unavailability.
  • Rightsizing. Rightsizing is the process of matching instance types and sizes to your workload's performance and capacity requirements at the lowest possible cost. It is also the process of reviewing deployed instances and identifying opportunities to eliminate or downsize them without compromising capacity or other requirements, which results in lower costs. Adopting newer, more efficient instance families lowers the energy consumed too. This is in contrast with an on-premises solution that may wait months or years for the next hardware refresh to implement such efficiencies.
  • Transactional computing. Serverless resources are managed by AWS, which performs optimal capacity management, and services are consumed as and when needed, much like electricity itself. This makes them efficient to use within solutions that can adopt a transactional consumption model. For example, an AWS Lambda function consumes no energy when it is not executing.
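To illustrate the instance scheduling pattern, here is a minimal sketch of a scheduler implemented as a Lambda function using boto3. The tag key and value (Schedule=office-hours), the function name, and the idea of triggering it from two Amazon EventBridge cron rules are assumptions for illustration only, not a description of Nasstar's or AWS's own tooling (AWS also publishes an off-the-shelf Instance Scheduler solution).

```python
"""Sketch: out-of-hours instance scheduler (illustrative assumptions only).

Assumed setup: instances to be scheduled carry a tag Schedule=office-hours,
and this handler is invoked by two EventBridge cron rules (e.g. 09:00 and
17:00 Mon-Fri) passing {"action": "start"} or {"action": "stop"}.
"""
import boto3

ec2 = boto3.client("ec2")


def handler(event, context):
    action = event.get("action", "stop")
    # Only act on instances in the opposite state to the requested action.
    state = "running" if action == "stop" else "stopped"

    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": [state]},
        ]
    )["Reservations"]

    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not instance_ids:
        return {"action": action, "instances": []}

    if action == "stop":
        ec2.stop_instances(InstanceIds=instance_ids)   # power down out of hours
    else:
        ec2.start_instances(InstanceIds=instance_ids)  # bring back for 09:00

    return {"action": action, "instances": instance_ids}
```

The same pattern extends to RDS instances via the stop_db_instance and start_db_instance API calls, so whole development and test environments can be parked outside business hours.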

Disaster recovery in the cloud

Another opportunity for significant reductions in energy consumption is eliminating the use of existing data centres for disaster recovery. The disaster recovery (DR) strategies available can be broadly categorised into four approaches, listed below in order of increasing energy consumption / decreasing Recovery Time Objective (RTO):

  • Backup and restore. Backup and restore is a suitable approach for mitigating data loss or corruption. No energy is consumed by the recovery environment until a recovery is triggered (a minimal sketch follows this list).
  • Pilot light. With the pilot light approach, you replicate your data from one region to another and provision a copy of your core workload infrastructure. Resources required to support data replication and backups, such as databases and object storage, are always active. Other elements, such as application servers, are loaded with application code and configurations but are only provisioned at the point of use, during testing or when disaster recovery failover is invoked. Unlike the backup and restore approach, your core infrastructure is always available, with the option to quickly provision a full-scale production environment by switching on and scaling out your application servers.
  • Warm standby. The warm standby approach involves ensuring that there is a scaled-down but fully functional copy of your production environment in another region. This approach extends the pilot light concept and decreases the time to recovery because your workload is always on in another region. This approach also allows you to more easily perform testing or implement continuous testing to increase confidence in your ability to recover from a disaster.
  • Multi-site active/active. You can run your workload simultaneously in multiple regions as part of a multi-site active/active or hot standby active/passive strategy. Multi-site active/active serves traffic from all regions to which it is deployed, whereas hot standby serves traffic only from a single region, with the other region(s) used only for disaster recovery. With a multi-site active/active approach, users can access your workload in any of the regions in which it is deployed. While great for high availability, it does not save any energy at all. That may be unavoidable for some critical customer-facing workloads; less critical workloads should consider one of the cheaper, more energy-efficient approaches above.
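To give a concrete flavour of the backup and restore approach, the sketch below copies the most recent automated RDS snapshot into a second region, so data can be recovered there without any compute running until a disaster is declared. The region names, identifiers and scheduling mechanism are placeholder assumptions for illustration, not a prescribed implementation.

```python
"""Sketch: cross-region snapshot copy as a backup-and-restore DR building block.

Assumed setup: region names and identifiers below are placeholders, and in
practice this would run on a schedule (e.g. an EventBridge rule) after
automated snapshots complete.
"""
import boto3

SOURCE_REGION = "eu-west-2"  # primary region
DR_REGION = "eu-west-1"      # recovery region: no compute running here


def copy_latest_snapshot(db_instance_id: str) -> str:
    """Copy the newest automated RDS snapshot of the given instance to the DR region."""
    source = boto3.client("rds", region_name=SOURCE_REGION)
    target = boto3.client("rds", region_name=DR_REGION)

    snapshots = source.describe_db_snapshots(
        DBInstanceIdentifier=db_instance_id, SnapshotType="automated"
    )["DBSnapshots"]
    latest = max(snapshots, key=lambda s: s["SnapshotCreateTime"])

    # Automated snapshot IDs contain a colon, which target identifiers do not allow.
    target_id = "dr-" + latest["DBSnapshotIdentifier"].replace(":", "-")

    # Note: encrypted snapshots would additionally need a KmsKeyId in the DR region.
    copy = target.copy_db_snapshot(
        SourceDBSnapshotIdentifier=latest["DBSnapshotArn"],
        TargetDBSnapshotIdentifier=target_id,
        SourceRegion=SOURCE_REGION,  # triggers a cross-region copy
    )
    return copy["DBSnapshot"]["DBSnapshotIdentifier"]
```

Until that snapshot is restored, the recovery region holds only storage, which is the point of the approach: the energy and cost of a standby environment are deferred until a recovery is actually needed.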

A recurring theme is that sustainability and application performance/availability/cost need not be contradictory goals. Elastic scalability of compute resources is desirable for cost optimisation, but it also enables the customer to avoid running servers that are not immediately required to service the workload.

Conclusion

In this post, I introduced my passion for the environment and for AWS, an inspirational leader in this field. We looked at what AWS has been building towards and its unprecedented commitment to becoming the dominant player in the public cloud while leading the way on environmentally friendly computing. We also reviewed some innovative design patterns, best practices, and strategies that passionate partners can employ with a common interest in preserving the environment and reducing customer spend.