Use OpenStack Senlin, Heat templates to enable autoscaling

As with other cloud platforms, autoscaling in OpenStack is important to meet changing workload demands. Here's how to enable that process with Senlin and Heat templates.

The OpenStack platform constantly evolves, and as a result, some processes that worked in the past are no longer valid -- which is now the case for autoscaling. Previously, users set up OpenStack autoscaling with the Ceilometer service and Heat templates alone. Now, they need a new tool.

OpenStack Senlin is a service that lets admins create and manage clusters of related cloud resources to simplify orchestration. It integrates with Heat and Ceilometer to perform autoscaling tasks, and these tools work together to ensure that a cluster scales in and out based on workload demands.

When you use OpenStack Senlin with Heat, you use specific Heat cluster resources. Use YAML code to define the cluster, as well as your scale-in and scale-out policies. The core code of such a configuration might look as follows, where Listing 1 defines the cluster and Listing 2 defines the scale-in and scale-out policies. Both sample configurations are from OpenStack documentation.

Listing 1: Define the Senlin cluster in Heat


  cluster:
    type: OS::Senlin::Cluster
    properties:
      desired_capacity: 2
      min_size: 2
      profile: {get_resource: profile}
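The cluster in Listing 1 references a separate profile resource, which describes the objects -- here, Nova servers -- that Senlin creates as cluster nodes. A minimal sketch of such a profile might look as follows; note that the flavor, image and network names are placeholder assumptions you would replace with values from your own environment:

```yaml
  profile:
    type: OS::Senlin::Profile
    properties:
      type: os.nova.server-1.0
      properties:
        # Placeholder values -- substitute your own flavor, image and network.
        flavor: m1.small
        image: cirros-0.4.0
        networks:
          - network: private
```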

Listing 2: Define the scale-in and scale-out policies


  scale_in_policy:
    type: OS::Senlin::Policy
    properties:
      type: senlin.policy.scaling-1.0
      bindings:
        - cluster: {get_resource: cluster}
      properties:
        event: CLUSTER_SCALE_IN
        adjustment:
          type: CHANGE_IN_CAPACITY
          number: 1

  scale_out_policy:
    type: OS::Senlin::Policy
    properties:
      type: senlin.policy.scaling-1.0
      bindings:
        - cluster: {get_resource: cluster}
      properties:
        event: CLUSTER_SCALE_OUT
        adjustment:
          type: CHANGE_IN_CAPACITY
          number: 1

While the resources in Listings 1 and 2 define the basis of the OpenStack Senlin cluster, you need to do more to make it fully operational. A load-balancing policy is a mandatory part of the configuration; when you attach it to the cluster, Senlin uses the OpenStack Neutron load-balancing service to create and manage a load balancer for the cluster nodes automatically. On top of that, you need to define receiver resources that trigger the scale-in and scale-out actions, which completes the basic framework you define in Heat.
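To illustrate, a load-balancing policy and a pair of receivers might be sketched as follows. Treat this as an assumption-laden example: the policy version, subnet name, protocol and port are placeholders, and the actual values depend on how Neutron load balancing is set up in your cloud.

```yaml
  lb_policy:
    type: OS::Senlin::Policy
    properties:
      # Version and pool/vip settings below are illustrative assumptions.
      type: senlin.policy.loadbalance-1.1
      bindings:
        - cluster: {get_resource: cluster}
      properties:
        pool:
          protocol: HTTP
          protocol_port: 80
          subnet: private-subnet
          lb_method: ROUND_ROBIN
        vip:
          subnet: private-subnet
          protocol: HTTP
          protocol_port: 80

  receiver_scale_in:
    type: OS::Senlin::Receiver
    properties:
      cluster: {get_resource: cluster}
      action: CLUSTER_SCALE_IN
      type: webhook

  receiver_scale_out:
    type: OS::Senlin::Receiver
    properties:
      cluster: {get_resource: cluster}
      action: CLUSTER_SCALE_OUT
      type: webhook
```

Each receiver exposes a webhook URL; anything that calls that URL -- an alarm service, a script, a monitoring tool -- triggers the corresponding scaling action on the cluster.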

The last required element is the alarm trigger. You must include Ceilometer alarms in the Heat template so that the alarm action determines which Heat trigger to pull -- either receiver_scale_in or receiver_scale_out, based on what needs to be done.
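As a sketch, an alarm that pulls the scale-out trigger might look like the following. The meter name, threshold and period are illustrative assumptions, and depending on your OpenStack release, the alarm resource may be OS::Aodh::Alarm rather than OS::Ceilometer::Alarm:

```yaml
  scale_out_alarm:
    type: OS::Ceilometer::Alarm
    properties:
      # Fire when average CPU use exceeds 70% over a 60-second period
      # (meter, threshold and period are assumptions -- tune for your workload).
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 70
      comparison_operator: gt
      alarm_actions:
        - {get_attr: [receiver_scale_out, channel, alarm_url]}
```

A matching alarm with a lower threshold and the receiver_scale_in webhook handles the scale-in side.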

Next Steps

Why would an enterprise choose OpenStack?

Simplify OpenStack management with these five tips

Review OpenStack Kolla for container deployments

This was last published in October 2017
