OpenStack

Image tags

Sometimes it is necessary to specify the image tag to be used for a specific service or a specific image of a service. All available images and tags are listed in the 002-images-kolla.yml file.

The image tags can be set in the environments/kolla/images.yml file.

  • Use a specific tag for all images of a service:

    environments/kolla/images.yml
    barbican_tag: "2023.1"
  • Use a specific tag for a specific image of a service:

    environments/kolla/images.yml
    barbican_worker_tag: "2023.1"

Network interfaces

Parameter | Default
network_interface | eth0
neutron_external_interface | {{ network_interface }}
kolla_external_vip_interface | {{ network_interface }}
api_interface | {{ network_interface }}
migration_interface | {{ api_interface }}
tunnel_interface | {{ network_interface }}
octavia_network_interface | {{ 'o-hm0' if octavia_network_type == 'tenant' else api_interface }}
dns_interface | {{ network_interface }}
dpdk_tunnel_interface | {{ neutron_external_interface }}
ironic_http_interface | {{ api_interface }}
ironic_tftp_interface | {{ api_interface }}
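
These defaults can be overridden in environments/kolla/configuration.yml. A minimal sketch, assuming bonded interfaces named bond0 and bond1 (the interface names are illustrative and depend on your hosts):

environments/kolla/configuration.yml
network_interface: "bond0"
neutron_external_interface: "bond1"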

Customization of the service configurations

info

The following content is based on the kolla-ansible upstream documentation.

OSISM will generally look for files in environments/kolla/files/overlays/CONFIGFILE, environments/kolla/files/overlays/SERVICENAME/CONFIGFILE or environments/kolla/files/overlays/SERVICENAME/HOSTNAME/CONFIGFILE in the configuration repository. These locations sometimes vary and you should check the config task in the appropriate Ansible role for a full list of supported locations. For example, in the case of nova.conf the following locations are supported, assuming that you have services using nova.conf running on hosts called ctl1, ctl2 and ctl3:

  • environments/kolla/files/overlays/nova.conf
  • environments/kolla/files/overlays/nova/ctl1/nova.conf
  • environments/kolla/files/overlays/nova/ctl2/nova.conf
  • environments/kolla/files/overlays/nova/ctl3/nova.conf
  • environments/kolla/files/overlays/nova/nova-scheduler.conf

Using this mechanism, overrides can be configured per-project (Nova), per-project-service (Nova scheduler service) or per-project-service-on-specified-host (Nova services on ctl1).

Overriding an option is as simple as setting the option under the relevant section. For example, to override scheduler_max_attempts in the Nova scheduler service, the operator could create environments/kolla/files/overlays/nova/nova-scheduler.conf in the configuration repository with this content:

[DEFAULT]
scheduler_max_attempts = 100

If the operator wants to configure the initial disk, CPU and RAM allocation ratios on the compute node com1, the operator needs to create the file environments/kolla/files/overlays/nova/com1/nova.conf with this content:

[DEFAULT]
initial_cpu_allocation_ratio = 3.0
initial_ram_allocation_ratio = 1.0
initial_disk_allocation_ratio = 1.0

Note that the values shown here, with an initial_cpu_allocation_ratio of 3.0, match the requirements of the SCS-nV-* (moderate oversubscription) flavors. If you do not use SMT/hyperthreading, SCS would allow 5.0 here (for the V flavors).

This method of merging configuration sections is supported for all services using oslo.config, which includes the vast majority of OpenStack services, and in some cases for services using YAML configuration. Since the INI format is an informal standard, not all INI files can be merged in this way. In these cases OSISM supports overriding the entire config file.
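
Because sections are merged, an overlay only needs to contain the options that should change; all other options rendered by the kolla-ansible role template remain in effect. A sketch of the rendered result for the scheduler example above, assuming the role template also sets debug (the surrounding option is illustrative):

Rendered nova.conf excerpt
[DEFAULT]
debug = False
scheduler_max_attempts = 100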

Additional flexibility can be introduced by using Jinja conditionals in the config files. For example, you may create Nova cells which are homogeneous with respect to the hypervisor model. In each cell, you may wish to configure the hypervisors differently; the following override shows one way of setting the bandwidth_poll_interval variable as a function of the cell:

[DEFAULT]
{% if 'cell0001' in group_names %}
bandwidth_poll_interval = 100
{% elif 'cell0002' in group_names %}
bandwidth_poll_interval = -1
{% else %}
bandwidth_poll_interval = 300
{% endif %}

An alternative to Jinja conditionals would be to define a variable for the bandwidth_poll_interval and set it according to your requirements in the inventory group or host vars:

[DEFAULT]
bandwidth_poll_interval = {{ bandwidth_poll_interval }}
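
The variable can then be set per group or per host in the inventory. A minimal sketch, assuming a group named cell0001 (the path and value are illustrative):

inventory/group_vars/cell0001.yml
bandwidth_poll_interval: 100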

OSISM allows the operator to override configuration globally for all services. It will look for a file called environments/kolla/files/overlays/global.conf in the configuration repository.

For example, to modify the database connection pool size for all services, the operator needs to create environments/kolla/files/overlays/global.conf in the configuration repository with this content:

[database]
max_pool_size = 100

How does the configuration get into services?

Using the OpenSearch service as an example, the following steps explain how the configuration for OpenSearch is created and gets into the container.

  • The task Copying over opensearch service config file merges the individual sources of the files.

    Copying over opensearch service config file task
    - name: Copying over opensearch service config file
      merge_yaml:
        sources:
          - "{{ role_path }}/templates/opensearch.yml.j2"
          - "{{ node_custom_config }}/opensearch.yml"
          - "{{ node_custom_config }}/opensearch/opensearch.yml"
          - "{{ node_custom_config }}/opensearch/{{ inventory_hostname }}/opensearch.yml"
        dest: "{{ node_config_directory }}/opensearch/opensearch.yml"
        mode: "0660"
      become: true
      when:
        - inventory_hostname in groups['opensearch']
        - opensearch_services['opensearch'].enabled | bool
      notify:
        - Restart opensearch container
  • As a basis, the template opensearch.yml.j2 is used, which is part of the OpenSearch service role.

    opensearch.yml.j2 template
    {% set num_nodes = groups['opensearch'] | length %}
    {% set recover_after_nodes = (num_nodes * 2 / 3) | round(0, 'floor') | int if num_nodes > 1 else 1 %}
    plugins.security.disabled: "true"

    node.name: "{{ 'api' | kolla_address | put_address_in_context('url') }}"
    network.host: "{{ 'api' | kolla_address | put_address_in_context('url') }}"

    cluster.name: "{{ opensearch_cluster_name }}"
    cluster.initial_master_nodes: [{% for host in groups['opensearch'] %}"{{ 'api' | kolla_address(host) }}"{% if not loop.last %},{% endif %}{% endfor %}]
    node.master: true
    node.data: true
    discovery.seed_hosts: [{% for host in groups['opensearch'] %}"{{ 'api' | kolla_address(host) | put_address_in_context('url') }}"{% if not loop.last %},{% endif %}{% endfor %}]

    http.port: {{ opensearch_port }}
    gateway.expected_nodes: {{ num_nodes }}
    gateway.recover_after_time: "5m"
    gateway.recover_after_nodes: {{ recover_after_nodes }}
    path.data: "/var/lib/opensearch/data"
    path.logs: "/var/log/kolla/opensearch"
    indices.fielddata.cache.size: 40%
    action.auto_create_index: "true"
  • For OpenSearch, overlay files can additionally be stored in 3 places in the configuration repository.

    • environments/kolla/files/overlays/opensearch.yml
    • environments/kolla/files/overlays/opensearch/opensearch.yml
    • environments/kolla/files/overlays/opensearch/{{ inventory_hostname }}/opensearch.yml

    When merging files, the file found last has the highest precedence. For example, if the service role template opensearch.yml.j2 of the OpenSearch service sets node.master: true and you set node.master: false in environments/kolla/files/overlays/opensearch.yml, then the finished opensearch.yml will contain node.master: false. A minimal overlay example is sketched after this list.

  • After the merge, the task Copying over opensearch service config file copies the content into the service's configuration directory /etc/kolla/opensearch.

    /etc/kolla/opensearch/opensearch.yml
    action.auto_create_index: 'true'
    cluster.initial_master_nodes:
    - 192.168.16.10
    cluster.name: kolla_logging
    discovery.seed_hosts:
    - 192.168.16.10
    gateway.expected_nodes: 1
    gateway.recover_after_nodes: 1
    gateway.recover_after_time: 5m
    http.port: 9200
    indices.fielddata.cache.size: 40%
    network.host: 192.168.16.10
    node.data: true
    node.master: true
    node.name: 192.168.16.10
    path.data: /var/lib/opensearch/data
    path.logs: /var/log/kolla/opensearch
    plugins.security.disabled: 'true'
  • The configuration directory /etc/kolla/opensearch is mounted in each container of the OpenSearch service to /var/lib/kolla/config_files.

    Output of docker inspect opensearch
    "Mounts": [
    {
    "Type": "bind",
    "Source": "/etc/kolla/opensearch",
    "Destination": "/var/lib/kolla/config_files",
    "Mode": "rw",
    "RW": true,
    "Propagation": "rprivate"
    },
  • The entrypoint of a service is always kolla_start. This script calls set_configs.py, which takes care of copying files from /var/lib/kolla/config_files to the right place inside the container. For this purpose, the container has a config.json in which the individual actions are configured.

    The file /var/lib/kolla/config_files/opensearch.yml is copied to /etc/opensearch/opensearch.yml.

    The permissions of /var/lib/opensearch and /var/log/kolla/opensearch are set accordingly.

    /etc/kolla/opensearch/config.json
    {
        "command": "/usr/share/opensearch/bin/opensearch",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/opensearch.yml",
                "dest": "/etc/opensearch/opensearch.yml",
                "owner": "opensearch",
                "perm": "0600"
            }
        ],
        "permissions": [
            {
                "path": "/var/lib/opensearch",
                "owner": "opensearch:opensearch",
                "recurse": true
            },
            {
                "path": "/var/log/kolla/opensearch",
                "owner": "opensearch:opensearch",
                "recurse": true
            }
        ]
    }
  • The config.json of the service also defines the command that is executed once the preparations are finished. In the case of OpenSearch this is /usr/share/opensearch/bin/opensearch.

    /etc/kolla/opensearch/config.json
    {
        "command": "/usr/share/opensearch/bin/opensearch",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/opensearch.yml",
                "dest": "/etc/opensearch/opensearch.yml",
                "owner": "opensearch",
                "perm": "0600"
            }
        ],
        "permissions": [
            {
                "path": "/var/lib/opensearch",
                "owner": "opensearch:opensearch",
                "recurse": true
            },
            {
                "path": "/var/log/kolla/opensearch",
                "owner": "opensearch:opensearch",
                "recurse": true
            }
        ]
    }
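
Picking up the merge example referenced above: a minimal overlay sketch that overrides two parameters of the role template opensearch.yml.j2 (the values are illustrative, not a recommendation):

environments/kolla/files/overlays/opensearch.yml
indices.fielddata.cache.size: 20%
action.auto_create_index: "false"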

Number of service workers

The number of workers used for the individual services can generally be configured using two parameters.

openstack_service_workers: "{{ [ansible_facts.processor_vcpus, 5] | min }}"
openstack_service_rpc_workers: "{{ [ansible_facts.processor_vcpus, 3] | min }}"

The default for openstack_service_workers is set to 5 when using the cookiecutter for the initial creation of the configuration.

This value can be overridden for individual services. The default for all parameters in the following table is {{ openstack_service_workers }}. The parameter aodh_api_workers, for example, can then be used to explicitly set the number of workers for the Aodh API. After such a change, the affected services must be reconfigured, e.g. with osism apply -a reconfigure aodh in this example.

These parameters are all set in environments/kolla/configuration.yml; an example is sketched after the following table.

Parameter
aodh_api_workers
barbican_api_workers
cinder_api_workers
designate_api_workers
designate_worker_workers
designate_producer_workers
designate_central_workers
designate_sink_workers
designate_mdns_workers
glance_api_workers
gnocchi_metricd_workers
gnocchi_api_workers
heat_api_cfn_workers
heat_api_workers
heat_engine_workers
horizon_wsgi_processes
ironic_api_workers
keystone_api_workers
proxysql_workers
magnum_api_workers
magnum_conductor_workers
manila_api_workers
neutron_api_workers
neutron_metadata_workers
nova_api_workers
nova_superconductor_workers
nova_metadata_api_workers
nova_scheduler_workers
nova_cell_conductor_workers
octavia_api_workers
octavia_healthmanager_health_workers
octavia_healthmanager_stats_workers
placement_api_workers
skyline_gunicorn_workers
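
A minimal sketch with illustrative values (the appropriate numbers depend on the vCPUs and memory available on the control plane nodes):

environments/kolla/configuration.yml
openstack_service_workers: 8
nova_api_workers: 16
neutron_api_workers: 16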