
In praise of systemd

There was definitely a lot of hate around systemd in the Linux community over the last few years as all the major distros moved over to it. Things change all the time, and I personally never understood this one. It’s not as if the random collection of barely working shell scripts that may or may not have implemented “status” was a technically superior solution.

Recently I’ve been plumbing more native systemd support into DevStack (OpenStack‘s development environment setup tool). To me, systemd is just another way to start services: it does a few things better at the cost of having to learn a new interface. The huge win in a developer context, though, comes from the logging system that ships with it: journald. (A sketch of what a unit file for such a service can look like follows below.)
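For reference, the unit files behind this are tiny ini-style configs. Here is a hypothetical minimal sketch of a unit for a service like nova-api; the service name and paths are illustrative, not DevStack’s literal output:

# /etc/systemd/system/devstack@n-api.service (illustrative name and path)
[Unit]
Description=DevStack nova-api

[Service]
ExecStart=/usr/local/bin/nova-api
User=stack
Restart=on-failure

[Install]
WantedBy=multi-user.target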

Systemd has its own internal journaling system, which can reflect out to syslog and friends. It is inherently much richer, however: journal messages can include not only text but arbitrary metadata, and you can then query the journal on that metadata.
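Emitting such a message is straightforward. A minimal sketch using the python-systemd bindings (the field name REQUEST_ID and the message text here are my own illustration, not OpenStack’s actual logging code):

from systemd import journal

# The positional argument becomes the MESSAGE field; any additional
# UPPERCASE keyword arguments are stored as structured fields on the
# entry, each of which journalctl can later match on directly.
journal.send('Instance build started',
             REQUEST_ID='req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb')

Every field attached this way becomes an index you can match on, not just a substring to grep for.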

A concrete example of this is the request-id in OpenStack. On any inbound REST API call, a request-id is generated, then passed around between the workers of that service. In theory this lets you trace a request through multiple services to see what is going on. In practice, you end up having to build some other system to collect and search those logs. But with some changes to OpenStack’s logging subsystem to emit journal-native messages, you can do the following:

journalctl REQUEST_ID=req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb

And get the following:

-- Logs begin at Wed 2017-03-29 08:23:23 EDT, end at Wed 2017-03-29 16:19:46 EDT. --
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.wsgi [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Action: 'create', calling method: <bound method ServersController.create of <nova.api.openstack.compute.servers.ServersController object at 0x7fa7c93584d0>>, body: {"server": {"min_count": 1, "flavorRef": "d2", "name": "test1", "imageRef": "f69a1669-34b6-4a2a-9f98-6c4ae0a9d21a", "max_count": 1}} from (pid=14042) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:621
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=MultipleCreate, alias=os-multiple-create, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=BlockDeviceMapping, alias=os-block-device-mapping, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=BlockDeviceMappingV1, alias=os-block-device-mapping-v1, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=AvailabilityZone, alias=os-availability-zone, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=UserData, alias=os-user-data, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=Keypairs, alias=os-keypairs, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=SchedulerHints, alias=os-scheduler-hints, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=SecurityGroups, alias=os-security-groups, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=ConfigDrive, alias=os-config-drive, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.network.neutronv2.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] validate_networks() for None from (pid=14042) validate_networks /opt/stack/nova/nova/network/neutronv2/api.py:1622
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/networks.json?tenant_id=70162ab521a5455faff0680e923dd091&shared=False used request id req-19605b00-631d-41fe-9d2e-78583e4b0999 from (pid=14042) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/networks.json?shared=True used request id req-f53ce1bb-d261-4a15-a75f-4320b7342023 from (pid=14042) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/quotas/70162ab521a5455faff0680e923dd091.json used request id req-77f7c098-8cc0-4396-b6bb-ff53db580a5b from (pid=14042) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/ports.json?fields=id&tenant_id=70162ab521a5455faff0680e923dd091 used request id req-1183c82b-cd80-4a19-ae07-888f5cab0aec from (pid=14042) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.virt.hardware [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] emulator threads policy constraint: None from (pid=14042) get_emulator_threads_constraint /opt/stack/nova/nova/virt/hardware.py:1286
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for project 70162ab521a5455faff0680e923dd091. Resources: ['metadata_items'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:369
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for user 444f517480154b5588b354e64c50c075 and project 70162ab521a5455faff0680e923dd091. Resources: ['metadata_items'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:361
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for project 70162ab521a5455faff0680e923dd091. Resources: ['injected_files'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:369
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for user 444f517480154b5588b354e64c50c075 and project 70162ab521a5455faff0680e923dd091. Resources: ['injected_files'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:361
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for project 70162ab521a5455faff0680e923dd091. Resources: ['injected_file_content_bytes', 'injected_file_path_bytes'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:369
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for user 444f517480154b5588b354e64c50c075 and project 70162ab521a5455faff0680e923dd091. Resources: ['injected_file_content_bytes', 'injected_file_path_bytes'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:361
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Reserving resources using context.project_id: 70162ab521a5455faff0680e923dd091 from (pid=14042) reserve /opt/stack/nova/nova/quota.py:500
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Reserving resources using context.user_id: 444f517480154b5588b354e64c50c075 from (pid=14042) reserve /opt/stack/nova/nova/quota.py:505
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Attempting to reserve resources for project 70162ab521a5455faff0680e923dd091 and user 444f517480154b5588b354e64c50c075. Deltas: {'instances': 1, 'ram': 1024, 'cores': 1} from (pid=14042) reserve /opt/stack/nova/nova/quota.py:510
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Quota limits for project 70162ab521a5455faff0680e923dd091: {'project_id': u'70162ab521a5455faff0680e923dd091'} from (pid=14042) reserve /opt/stack/nova/nova/quota.py:519
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for project 70162ab521a5455faff0680e923dd091. Resources: ['instances', 'ram', 'cores'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:369
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Quotas for project 70162ab521a5455faff0680e923dd091 after resource sync: {'instances': 10, 'ram': 51200, 'cores': 20} from (pid=14042) reserve /opt/stack/nova/nova/quota.py:525
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for user 444f517480154b5588b354e64c50c075 and project 70162ab521a5455faff0680e923dd091. Resources: ['instances', 'ram', 'cores'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:361
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Quotas for project 70162ab521a5455faff0680e923dd091 and user 444f517480154b5588b354e64c50c075 after resource sync: {'instances': 10, 'ram': 51200, 'cores': 20} from (pid=14042) reserve /opt/stack/nova/nova/quota.py:533
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.db.sqlalchemy.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] QuotaUsage has not changed, refresh is unnecessary for: {'user_id': u'444f517480154b5588b354e64c50c075', 'deleted_at': None, 'reserved': 0, 'in_use': 0, 'deleted': 0, 'created_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 803775), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 802794), 'until_refresh': None, 'resource': 'instances', 'project_id': u'70162ab521a5455faff0680e923dd091', 'id': 1} from (pid=14042) _refresh_quota_usages /opt/stack/nova/nova/db/sqlalchemy/api.py:3694
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.db.sqlalchemy.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] QuotaUsage has not changed, refresh is unnecessary for: {'user_id': u'444f517480154b5588b354e64c50c075', 'deleted_at': None, 'reserved': 0, 'in_use': 0, 'deleted': 0, 'created_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 808250), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 807797), 'until_refresh': None, 'resource': 'ram', 'project_id': u'70162ab521a5455faff0680e923dd091', 'id': 2} from (pid=14042) _refresh_quota_usages /opt/stack/nova/nova/db/sqlalchemy/api.py:3694
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.db.sqlalchemy.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] QuotaUsage has not changed, refresh is unnecessary for: {'user_id': u'444f517480154b5588b354e64c50c075', 'deleted_at': None, 'reserved': 0, 'in_use': 0, 'deleted': 0, 'created_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 810384), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 809969), 'until_refresh': None, 'resource': 'cores', 'project_id': u'70162ab521a5455faff0680e923dd091', 'id': 3} from (pid=14042) _refresh_quota_usages /opt/stack/nova/nova/db/sqlalchemy/api.py:3694
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.db.sqlalchemy.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Copying QuotaUsage for resource "instances" from user_usages into project_usages: {'user_id': u'444f517480154b5588b354e64c50c075', 'deleted_at': None, 'reserved': 0, 'in_use': 0, 'deleted': 0, 'created_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 803775), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 802794), 'until_refresh': None, 'resource': 'instances', 'project_id': u'70162ab521a5455faff0680e923dd091', 'id': 1} from (pid=14042) quota_reserve /opt/stack/nova/nova/db/sqlalchemy/api.py:3842
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.db.sqlalchemy.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Copying QuotaUsage for resource "ram" from user_usages into project_usages: {'user_id': u'444f517480154b5588b354e64c50c075', 'deleted_at': None, 'reserved': 0, 'in_use': 0, 'deleted': 0, 'created_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 808250), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 807797), 'until_refresh': None, 'resource': 'ram', 'project_id': u'70162ab521a5455faff0680e923dd091', 'id': 2} from (pid=14042) quota_reserve /opt/stack/nova/nova/db/sqlalchemy/api.py:3842
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.db.sqlalchemy.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Copying QuotaUsage for resource "cores" from user_usages into project_usages: {'user_id': u'444f517480154b5588b354e64c50c075', 'deleted_at': None, 'reserved': 0, 'in_use': 0, 'deleted': 0, 'created_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 810384), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 809969), 'until_refresh': None, 'resource': 'cores', 'project_id': u'70162ab521a5455faff0680e923dd091', 'id': 3} from (pid=14042) quota_reserve /opt/stack/nova/nova/db/sqlalchemy/api.py:3842
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Created reservations ['b8e99024-107c-4992-b7ea-c96284033336', '15691130-ae33-4dc6-b9cb-cbdc6dc7b407', 'e0fe21ab-eb0e-43a2-898d-631871b5142e'] from (pid=14042) reserve /opt/stack/nova/nova/quota.py:1383
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.compute.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Going to run 1 instances... from (pid=14042) _provision_instances /opt/stack/nova/nova/compute/api.py:1002
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.compute.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] block_device_mapping [BlockDeviceMapping(attachment_id=<?>,boot_index=0,connection_info=None,created_at=<?>,delete_on_termination=True,deleted=<?>,deleted_at=<?>,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=<?>,image_id='f69a1669-34b6-4a2a-9f98-6c4ae0a9d21a',instance=<?>,instance_uuid=<?>,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=<?>,volume_id=None,volume_size=None)] from (pid=14042) _bdm_validate_set_size_and_instance /opt/stack/nova/nova/compute/api.py:1339
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Committed reservations [u'b8e99024-107c-4992-b7ea-c96284033336', u'15691130-ae33-4dc6-b9cb-cbdc6dc7b407', u'e0fe21ab-eb0e-43a2-898d-631871b5142e'] from (pid=14042) commit /opt/stack/nova/nova/quota.py:1409
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CAST unique_id: ce8a6568de9f4cc98ff3f06e0ca4004a exchange 'nova' topic 'conductor' from (pid=14042) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:05 os4 nova-api[14042]: INFO nova.osapi_compute.wsgi.server [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] 10.42.0.53 "POST /v2.1/servers HTTP/1.1" status: 202 len: 795 time: 0.8296130
Mar 29 16:16:05 os4 nova-conductor[14326]: WARNING oslo_config.cfg [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Option "scheduler_default_filters" from group "DEFAULT" is deprecated. Use option "enabled_filters" from group "filter_scheduler".
Mar 29 16:16:05 os4 nova-conductor[14326]: DEBUG oslo_db.sqlalchemy.engines [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION from (pid=14326) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
Mar 29 16:16:05 os4 nova-conductor[14326]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 917d9b546f5f4809a5dc38d3399bc714 exchange 'nova' topic 'scheduler' from (pid=14326) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CAST unique_id: ac54c940a3064721907949fbf6f3af48 NOTIFY exchange 'nova' topic 'notifications.info' from (pid=14269) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:473
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Found 2 cells: 00000000-0000-0000-0000-000000000000, 31d98bab-f734-4463-8f3f-87b604d0bd2b from (pid=14269) _get_computes_all_cells /opt/stack/nova/nova/scheduler/host_manager.py:586
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting compute nodes and services for cell 00000000-0000-0000-0000-000000000000(cell0) from (pid=14269) _get_computes_all_cells /opt/stack/nova/nova/scheduler/host_manager.py:591
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_db.sqlalchemy.engines [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION from (pid=14269) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting compute nodes and services for cell 31d98bab-f734-4463-8f3f-87b604d0bd2b(cell1) from (pid=14269) _get_computes_all_cells /opt/stack/nova/nova/scheduler/host_manager.py:591
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_db.sqlalchemy.engines [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION from (pid=14269) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "(u'os4', u'os4')" acquired by "nova.scheduler.host_manager._locked_update" :: waited 0.000s from (pid=14269) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Update host state from compute node: ComputeNode(cpu_allocation_ratio=16.0,cpu_info='{"vendor": "Intel", "model": "Broadwell", "arch": "x86_64", "features": ["smap", "avx", "clflush", "sep", "rtm", "vme", "dtes64", "invpcid", "tsc", "fsgsbase", "xsave", "pge", "vmx", "erms", "xtpr", "cmov", "hle", "smep", "ssse3", "est", "pat", "monitor", "smx", "pbe", "lm", "msr", "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "tm", "sse4.1", "pae", "sse4.2", "pclmuldq", "acpi", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "tm2", "ht", "pse", "lahf_lm", "abm", "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "ds", "invtsc", "pni", "rdtscp", "avx2", "aes", "sse2", "ss", "ds_cpl", "bmi1", "bmi2", "pcid", "fpu", "cx16", "pse36", "mtrr", "movbe", "pdcm", "rdrand", "x2apic"], "topology": {"cores": 2, "cells": 1, "threads": 2, "sockets": 1}}',created_at=2017-03-29T10:37:36Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=195,free_disk_gb=218,free_ram_mb=15435,host='os4',host_ip=10.42.0.53,hypervisor_hostname='os4',hypervisor_type='QEMU',hypervisor_version=2005000,id=1,local_gb=218,local_gb_used=0,memory_mb=15947,memory_mb_used=512,metrics='[]',numa_topology='{"nova_object.version": "1.2", "nova_object.changes": ["cells"], "nova_object.name": "NUMATopology", "nova_object.data": {"cells": [{"nova_object.version": "1.2", "nova_object.changes": ["cpu_usage", "memory_usage", "cpuset", "mempages", "pinned_cpus", "memory", "siblings", "id"], "nova_object.name": "NUMACell", "nova_object.data": {"cpu_usage": 0, "memory_usage": 0, "cpuset": [0, 1, 2, 3], "pinned_cpus": [], "siblings": [[1, 3], [0, 2]], "memory": 15947, "mempages": [{"nova_object.version": "1.1", "nova_object.changes": ["total", "used", "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 4082564, "reserved": 0, "size_kb": 4}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["total", "used", "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb": 2048}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["total", "used", "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb": 1048576}, "nova_object.namespace": "nova"}], "id": 0}, "nova_object.namespace": "nova"}]}, "nova_object.namespace": "nova"}',pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.5,running_vms=0,service_id=None,stats={},supported_hv_specs=[HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec],updated_at=2017-03-29T20:16:00Z,uuid=aa91aff1-89f9-47f7-a66c-8a60d87aa76d,vcpus=4,vcpus_used=0) from (pid=14269) _locked_update /opt/stack/nova/nova/scheduler/host_manager.py:168
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Update host state with aggregates: [] from (pid=14269) _locked_update /opt/stack/nova/nova/scheduler/host_manager.py:171
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Update host state with service dict: {'binary': u'nova-compute', 'deleted': False, 'created_at': datetime.datetime(2017, 3, 29, 10, 37, 36, tzinfo=<iso8601.Utc>), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, tzinfo=<iso8601.Utc>), 'report_count': 3471, 'topic': u'compute', 'host': u'os4', 'version': 17, 'disabled': False, 'forced_down': False, 'last_seen_up': datetime.datetime(2017, 3, 29, 20, 16, 5, tzinfo=<iso8601.Utc>), 'deleted_at': None, 'disabled_reason': None, 'id': 6} from (pid=14269) _locked_update /opt/stack/nova/nova/scheduler/host_manager.py:174
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Update host state with instances: {} from (pid=14269) _locked_update /opt/stack/nova/nova/scheduler/host_manager.py:177
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "(u'os4', u'os4')" released by "nova.scheduler.host_manager._locked_update" :: held 0.005s from (pid=14269) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Starting with 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:70
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.filters.retry_filter [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Re-scheduling is disabled from (pid=14269) host_passes /opt/stack/nova/nova/scheduler/filters/retry_filter.py:34
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter RetryFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter AvailabilityZoneFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter RamFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter DiskFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter ComputeFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter ComputeCapabilitiesFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter ImagePropertiesFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter ServerGroupAntiAffinityFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter ServerGroupAffinityFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter SameHostFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter DifferentHostFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.filter_scheduler [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filtered [(os4, os4) ram: 15435MB disk: 199680MB io_ops: 0 instances: 0] from (pid=14269) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:114
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.filter_scheduler [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Weighed [WeighedHost [host: (os4, os4) ram: 15435MB disk: 199680MB io_ops: 0 instances: 0, weight: 0.0]] from (pid=14269) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:119
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.filter_scheduler [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Selected host: WeighedHost [host: (os4, os4) ram: 15435MB disk: 199680MB io_ops: 0 instances: 0, weight: 0.0] from (pid=14269) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:126
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "(u'os4', u'os4')" acquired by "nova.scheduler.host_manager._locked" :: waited 0.000s from (pid=14269) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.virt.hardware [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Require both a host and instance NUMA topology to fit instance on host. from (pid=14269) numa_fit_instance_to_host /opt/stack/nova/nova/virt/hardware.py:1466
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "(u'os4', u'os4')" released by "nova.scheduler.host_manager._locked" :: held 0.003s from (pid=14269) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CAST unique_id: fcea183f04574d61901f435df9d4fba0 NOTIFY exchange 'nova' topic 'notifications.info' from (pid=14269) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:473
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 917d9b546f5f4809a5dc38d3399bc714 reply queue: reply_8feb4f5bbc7d493c8821bc41ffee5e1a time elapsed: 0.215702679008s from (pid=14269) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-conductor[14326]: DEBUG oslo_db.sqlalchemy.engines [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION from (pid=14326) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
Mar 29 16:16:06 os4 nova-conductor[14326]: DEBUG oslo_db.sqlalchemy.engines [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION from (pid=14326) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
Mar 29 16:16:06 os4 nova-conductor[14326]: DEBUG nova.conductor.manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] block_device_mapping [BlockDeviceMapping(attachment_id=<?>,boot_index=0,connection_info=None,created_at=<?>,delete_on_termination=True,deleted=<?>,deleted_at=<?>,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=<?>,image_id='f69a1669-34b6-4a2a-9f98-6c4ae0a9d21a',instance=<?>,instance_uuid=<?>,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=<?>,volume_id=None,volume_size=None)] from (pid=14326) _create_block_device_mapping /opt/stack/nova/nova/conductor/manager.py:841
Mar 29 16:16:06 os4 nova-conductor[14326]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CAST unique_id: cb281d7a070a47e29aa664ff22879597 exchange 'nova' topic 'compute.os4' from (pid=14326) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "a70f2d77-f140-4edf-bf38-8b07076be37a" acquired by "nova.compute.manager._locked_do_build_and_run_instance" :: waited 0.000s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 38331690f1924074ba2146e3e22ec717 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 38331690f1924074ba2146e3e22ec717 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0114914269943s from (pid=14327) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.compute.manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Starting instance... from (pid=14334) _do_build_and_run_instance /opt/stack/nova/nova/compute/manager.py:1747
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 5a0709355daf4e4f8dd4589455fdffd8 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-conductor[14326]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 5a0709355daf4e4f8dd4589455fdffd8 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0587505440053s from (pid=14326) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CAST unique_id: b400e8e03fed4cc19a7af04d81a25417 NOTIFY exchange 'nova' topic 'notifications.info' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:473
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.notifications.objects.base [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Defaulting the value of the field 'projects' to None in FlavorPayload due to 'Cannot call _load_projects on orphaned Flavor object' from (pid=14334) populate_schema /opt/stack/nova/nova/notifications/objects/base.py:118
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CAST unique_id: dff4adf357e940ae91140eb047dbed8a NOTIFY exchange 'nova' topic 'versioned_notifications.info' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:473
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "compute_resources" acquired by "nova.compute.resource_tracker.instance_claim" :: waited 0.000s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.compute.resource_tracker [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Memory overhead for 1024 MB instance; 0 MB from (pid=14334) instance_claim /opt/stack/nova/nova/compute/resource_tracker.py:147
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.compute.resource_tracker [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Disk overhead for 10 GB instance; 0 GB from (pid=14334) instance_claim /opt/stack/nova/nova/compute/resource_tracker.py:150
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 57afd33c5efd4638badeb6efd1242afa exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 57afd33c5efd4638badeb6efd1242afa reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.00463710498298s from (pid=14327) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Attempting claim on node os4: memory 1024 MB, disk 10 GB, vcpus 1 CPU
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Total memory: 15947 MB, used: 512.00 MB
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] memory limit: 23920.50 MB, free: 23408.50 MB
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Total disk: 218 GB, used: 0.00 GB
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] disk limit: 218.00 GB, free: 218.00 GB
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Total vcpu: 4 VCPU, used: 0.00 VCPU
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] vcpu limit not specified, defaulting to unlimited
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.virt.hardware [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Require both a host and instance NUMA topology to fit instance on host. from (pid=14334) numa_fit_instance_to_host /opt/stack/nova/nova/virt/hardware.py:1466
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Claim successful on node os4
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 572a6e0e3cd642bab51844504c947944 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-conductor[14326]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 572a6e0e3cd642bab51844504c947944 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0616553190048s from (pid=14326) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 7f329ad9e6ce406cae034a83ad6dcb92 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 7f329ad9e6ce406cae034a83ad6dcb92 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.00696902099298s from (pid=14327) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.scheduler.client.report [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Sending allocation for instance {'MEMORY_MB': 1024, 'VCPU': 1, 'DISK_GB': 10} from (pid=14334) _allocate_for_instance /opt/stack/nova/nova/scheduler/client/report.py:807
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.scheduler.client.report [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Submitted allocation for instance
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 36a079cbc7304ed6bc5d2e0ca0b3404b exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.scheduler.client.report [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Refreshing aggregate associations for resource provider aa91aff1-89f9-47f7-a66c-8a60d87aa76d from (pid=14334) _ensure_resource_provider /opt/stack/nova/nova/scheduler/client/report.py:437
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.scheduler.client.report [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Updating our resource provider generation from 1 to 2 from (pid=14334) _get_inventory_and_update_provider_generation /opt/stack/nova/nova/scheduler/client/report.py:480
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "compute_resources" released by "nova.compute.resource_tracker.instance_claim" :: held 0.291s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.compute.manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Start building networks asynchronously for instance. from (pid=14334) _build_resources /opt/stack/nova/nova/compute/manager.py:2062
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: f70037f919264d8db2021cef722e9f97 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: f70037f919264d8db2021cef722e9f97 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0664471710043s from (pid=14327) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "compute_resources" acquired by "nova.compute.resource_tracker.update_usage" :: waited 0.000s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 76a1008be9ea4a0f882eaed972093de8 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.scheduler.client.report [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Refreshing aggregate associations for resource provider aa91aff1-89f9-47f7-a66c-8a60d87aa76d from (pid=14334) _ensure_resource_provider /opt/stack/nova/nova/scheduler/client/report.py:437
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "compute_resources" released by "nova.compute.resource_tracker.update_usage" :: held 0.096s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:07 os4 nova-compute[14334]: INFO nova.virt.osinfo [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Cannot load Libosinfo: (No module named gi.repository.Libosinfo)
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: f545ef909ec64b169fa53a601952279d exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.compute.manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Allocating IP information in the background. from (pid=14334) _allocate_network_async /opt/stack/nova/nova/compute/manager.py:1384
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.network.neutronv2.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] allocate_for_instance() from (pid=14334) allocate_for_instance /opt/stack/nova/nova/network/neutronv2/api.py:829
Mar 29 16:16:07 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: f545ef909ec64b169fa53a601952279d reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0140517560067s from (pid=14327) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.block_device [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] block_device_list [] from (pid=14334) volume_in_mapping /opt/stack/nova/nova/block_device.py:585
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.compute.manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Start building block device mappings for instance. from (pid=14334) _build_resources /opt/stack/nova/nova/compute/manager.py:2088
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: e28395f19cde4e4ca2698e93de02a63b exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:07 os4 nova-conductor[14326]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: e28395f19cde4e4ca2698e93de02a63b reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0703596939857s from (pid=14326) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 013ddb3c758b4cbfaade984871fb2671 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:07 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 013ddb3c758b4cbfaade984871fb2671 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0648256909917s from (pid=14327) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.compute.manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Start spawning the instance on the hypervisor. from (pid=14334) _build_and_run_instance /opt/stack/nova/nova/compute/manager.py:1921
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.block_device [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] block_device_list [] from (pid=14334) volume_in_mapping /opt/stack/nova/nova/block_device.py:585
Mar 29 16:16:07 os4 nova-compute[14334]: INFO nova.virt.libvirt.driver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Creating image
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "/opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk.info" acquired by "nova.virt.libvirt.imagebackend.write_to_disk_info_file" :: waited 0.000s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "/opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk.info" released by "nova.virt.libvirt.imagebackend.write_to_disk_info_file" :: held 0.001s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "ff5dedb42be7cda8422f529dde9a3096688be2c5" acquired by "nova.virt.libvirt.imagebackend.fetch_func_sync" :: waited 0.000s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/networks.json?tenant_id=70162ab521a5455faff0680e923dd091&shared=False used request id req-3e9bf7cf-1951-49b1-9754-69aba6b56634 from (pid=14334) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/networks.json?shared=True used request id req-92909c76-6784-4de4-9522-2b12c2e55215 from (pid=14334) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.policy [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Policy check for network:attach_external_network failed with credentials {'service_roles': [], 'user_id': u'444f517480154b5588b354e64c50c075', 'roles': [u'anotherrole', u'Member'], 'user_domain_id': None, 'service_project_id': None, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_domain_id': None, 'is_admin_project': True, 'is_admin': False, 'project_id': u'70162ab521a5455faff0680e923dd091', 'project_domain_id': None} from (pid=14334) authorize /opt/stack/nova/nova/policy.py:168
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.part from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.part" returned: 0 in 0.055s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.virt.images [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] f69a1669-34b6-4a2a-9f98-6c4ae0a9d21a was qcow2, converting to raw from (pid=14334) fetch_to_raw /opt/stack/nova/nova/virt/images.py:146
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): qemu-img convert -O raw -f qcow2 /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.part /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.converted from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "qemu-img convert -O raw -f qcow2 /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.part /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.converted" returned: 0 in 0.118s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.converted from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.converted" returned: 0 in 0.067s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "ff5dedb42be7cda8422f529dde9a3096688be2c5" released by "nova.virt.libvirt.imagebackend.fetch_func_sync" :: held 0.573s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf touch -c /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5 from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "sudo nova-rootwrap /etc/nova/rootwrap.conf touch -c /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5" returned: 0 in 0.145s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5 from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5" returned: 0 in 0.057s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "ff5dedb42be7cda8422f529dde9a3096688be2c5" acquired by "nova.virt.libvirt.imagebackend.copy_qcow2_image" :: waited 0.000s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5 from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5" returned: 0 in 0.050s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): qemu-img create -f qcow2 -o backing_file=/opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5 /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "qemu-img create -f qcow2 -o backing_file=/opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5 /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk" returned: 0 in 0.031s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG nova.virt.disk.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Checking if we can resize image /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk. size=10737418240 from (pid=14334) can_resize_image /opt/stack/nova/nova/virt/disk/api.py:214
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk" returned: 0 in 0.049s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): qemu-img resize /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk 10737418240 from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "qemu-img resize /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk 10737418240" returned: 0 in 0.027s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "ff5dedb42be7cda8422f529dde9a3096688be2c5" released by "nova.virt.libvirt.imagebackend.copy_qcow2_image" :: held 0.164s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5 from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5" returned: 0 in 0.049s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG nova.virt.disk.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Checking if we can resize image /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk. size=10737418240 from (pid=14334) can_resize_image /opt/stack/nova/nova/virt/disk/api.py:214
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk" returned: 0 in 0.052s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG nova.virt.disk.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Cannot resize image /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk to a smaller size. from (pid=14334) can_resize_image /opt/stack/nova/nova/virt/disk/api.py:220
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG nova.virt.libvirt.driver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Ensure instance console log exists: /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/console.log from (pid=14334) _ensure_console_log_for_instance /opt/stack/nova/nova/virt/libvirt/driver.py:2936
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] POST call to neutron for http://10.42.0.53:9696/v2.0/ports.json used request id req-3e2a6228-9e64-4a3b-b25c-e0f441738a4f from (pid=14334) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG nova.network.neutronv2.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Successfully created port: 0c051449-cc3a-4745-b0c7-ff65c445b798 from (pid=14334) _create_port_minimal /opt/stack/nova/nova/network/neutronv2/api.py:396
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/extensions.json used request id req-9bc517c3-5106-471e-a388-7379bf33a82a from (pid=14334) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:09 os4 nova-compute[14334]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] PUT call to neutron for http://10.42.0.53:9696/v2.0/ports/0c051449-cc3a-4745-b0c7-ff65c445b798.json used request id req-06dff966-90bd-45dd-83f4-42f63b4d370a from (pid=14334) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:09 os4 nova-compute[14334]: DEBUG nova.network.neutronv2.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Successfully updated port: 0c051449-cc3a-4745-b0c7-ff65c445b798 from (pid=14334) _update_port /opt/stack/nova/nova/network/neutronv2/api.py:433
Mar 29 16:16:09 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 2adf28dab4814f31826e2ecbd43cf462 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:09 os4 nova-conductor[14326]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 2adf28dab4814f31826e2ecbd43cf462 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.00981185701676s from (pid=14326) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:09 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Acquired semaphore "refresh_cache-a70f2d77-f140-4edf-bf38-8b07076be37a" from (pid=14334) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:212
Mar 29 16:16:09 os4 nova-compute[14334]: DEBUG nova.network.neutronv2.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] _get_instance_nw_info() from (pid=14334) _get_instance_nw_info /opt/stack/nova/nova/network/neutronv2/api.py:1280
Mar 29 16:16:09 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: eea079f2759b4c7e8e37f5920b1be459 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:09 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: eea079f2759b4c7e8e37f5920b1be459 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2

That is the flow of creating a server, across all the Nova processes.

That’s the kind of analysis most people added Elastic Search into their environments to get. And while this clearly doesn’t replicate all the very cool things you can do with Elastic Search, it really does give you a bunch of new tools as a developer.

Note: the enablement for everything above is still very much work in progress. The DevStack patches have landed, but are off by default. The oslo.log patch is in the proof of concept phase for review.

This idea of programs being able to send not just log messages, but other metadata around them, is really powerful. It means, for instance, you can always send the code function / line number where a message was emitted. It’s not clogging up the log message itself, but if you want it later you can go ask for it. Or even if it is in the log message, by extracting the critical metadata in advance you can give your consumers structured fields instead of relying on things like grok to parse the logs, which simplifies ingest for things like Elastic Search.
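You can experiment with this from the shell today. Here’s a minimal sketch (assuming a reasonably recent util-linux) that uses logger in its --journald mode to emit an entry with a custom field, which journalctl can then query directly; the message and field value are just illustrative:

# emit a journal entry with an arbitrary REQUEST_ID field
logger --journald <<EOF
MESSAGE=testing out structured logging
REQUEST_ID=req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb
EOF

# and pull back only the entries carrying that field value
journalctl REQUEST_ID=req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb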

There is a lot to explore here to provide a better developer and operator experience, which is only possible because we have systemd. I for one look forward to exploring the space.

The Nova API in Kilo and Beyond

Over the past couple of years we’ve been trying to find a path forward with the Nova API. The Nova v2.0 API defined a very small core set of interfaces that were basically unchangeable, copied from Rackspace early in the history of the project. We then added a way to extend it with extensions, both in the upstream community, as well as vendor extensions out of tree.

This created a user experience that was… suboptimal. We had 80+ extensions in tree, of varying quality and documentation. For example: floating IPs were an extension (officially outside the core API), but were used so extensively that they were a de facto part of the core API. Except, of course, officially they weren’t, so people trying to use the API had a few things they could count on, and a ton of things that might or might not be there.

This was a disaster for the promise of interoperable clouds.

Even figuring out what a cloud could do was pretty terrible. You could approximate it by listing the extensions of the API, then having a bunch of logic in your code to work out which extensions turned certain features on or off, or added new data to payloads.

We took a very long journey to come up with a better way, with lots of wrong turns and dead ends. That’s a story for another day. Today I’d just like to explain where we got to, and why.

A User-First Perspective

Let’s take a step back and think about some stakeholders involved in OpenStack, and what they need and want out of it.

Jackson the Absent

Last year Jackson wrote an application that works against OpenStack; it’s been deployed in production and is part of the workflow at his organization.

Jackson has decided to leave tech and take up goat farming. His application should continue to work without changes even after the OpenStack cloud it’s running against has been upgraded multiple times.

Emma the Active

Emma is the kind of person that loves new features and eagerly awaits her OpenStack cloud upgrades to see the new things she can do. She should be able to have her application work fine across upgrades, and then be able to modify it to take advantages of new features exposed in the API.

Sophia the Multi-Cloud Integrator

Sophia has an application that spans multiple OpenStack clouds run by different organizations. These clouds will be at different versions of OpenStack, and thus will expose different API features.

Sophia needs to be able to talk to all these clouds, know what features they expose, and have a single program that can talk to them all simultaneously.

(Note: this is what our own nodepool, which runs all the OpenStack upstream tests, does.)

Aiden the Cloud Operator

Aiden knows he has a lot of users of the OpenStack API at his location. He’d really like to know who the Jacksons and Emmas of the world are, so that he can keep an eye on whether it will ever be safe to disable really old features in his cloud.

Olivia the Contributor

Olivia wants to get her feature added to Nova which needs exposing through the API. She’d like to be able to get that landed during a release, and not have to wait 3 years for an eventual API rewrite.

Requirements

Considering the needs of these various users helps to determine the key requirements for an API to something like OpenStack: an Open Source project that’s deployed in many different companies and environments, with possibly years of difference in the version of code deployed at any of these locations.

  • The API should be as common as possible between deploys. Every optional feature is a feature that can’t be depended on by an application writer. Or worse: if it’s used, it’s a lock-in to that cloud. That means software has to be rewritten for every cloud, or written with a horrible kluge layer.
  • It needs to be really clear exactly what a particular cloud’s API supports.
  • Older applications must not be broken by new features, or need to be rewritten after their OpenStack cloud is upgraded.
  • We have to have a way to get new features out in a timely basis.
  • We have to be able to evolve the API one piece at a time, as the Nova API is sufficiently large that a major version bump is no longer possible (we learned this the hard way).

The Backwards Compatibility Fallacy

Nearly every conversation with a developer around this issue starts with “why can’t we just add more data to structures in backwards compatible ways”. This is how service providers like Amazon, Meetup, and others work.

The problem is we aren’t a proprietary company with 1 revision of our API stack in the wild at a time. We are an Open Source project with thousands of deployments all over the world, all on different code revisions and upgrade cadence.

Let’s play out the exercise where we assume additive is good enough.

A great example currently exists: IPv6 filtering on server lists. Currently, the Nova client erroneously says it’s supported. It’s not; it’s actually completely ignored on the server. A suggestion was that this is a backwards compatible addition, so we should just do it, and we don’t need to signal to the user that this was an API change.

However, that assumes that time moves monotonically forward; it doesn’t. Sophia might run across a cloud that had gotten this version of the code and realize she could filter by IPv6 address. Great! She writes her code to use and depend on that feature.

Then she runs her code against another cloud, which runs a version of Nova that predates this change. She’s now effectively gone back in time. Her code now returns thousands of records instead of 1, and she’s terribly confused why. She also has no way to figure out whether random cloud Z is going to support this feature or not. So the only safe thing to do is implement the filtering client side instead, which means the server side filtering actually gained her very little. It’s not something she can ever determine will work ahead of time. It’s an API that is untrustworthy, so it’s something that’s best avoided.

We’ve seen this problem before on the World Wide Web. Javascript APIs in browsers are different, and while parts of them converge towards standardization over time, you end up writing a lot of shim compatibility layers to take advantage of new features but not break on old implementations. This is the way of decentralized API evolution and deployment. Once you get past one implementation controlled by a single organization you have to deal with it.

API Microversions

There are some amazing things in the HTTP specification, some really great ideas that I am amazed were thought about back in the early days of the web. One of which is Content Negotiation. A resource is addressed by a URL (Uniform Resource Locator). However, that resource might be available in multiple representations: there might be a text version, an html version, and a pdf version. The HTTP spec provides a header that allows you to tell the server what kind of representation you would like for your resource. The server can say “that’s not possible” and then you try again with something different, but it gives you as the client a lot of control in what you are going to get.

What if APIs worked like that? It’s always a server, but I’d really like the 2.253 representation of it, which has some fields that are really handy for me.

Microversions are like Content Negotiation for the API.

Like Content Negotiation, the requested Microversion is passed as an HTTP header. Unlike Content Negotiation we don’t support ranges, as the complexity to client programming gets out of control. Like Content Negotiation, if nothing is provided, we do a sane thing: send the minimum supported version.

Nova v2.1

Nova v2.1 is a new, cleaner backend implementation of the Nova v2.0 API. The one thing it adds is consistent input validation, so we catch bad requests at the API layer and return a sane error to the user. This is much more straightforward than our old model of trying to translate a stack trace (possibly triggered by a database violation) into a meaningful error message for the user.

Applications that work on v2.0 can be pointed to v2.1, and will just work. It should be transparent enough to the application authors that they’ll never notice the transition.

And onto this Nova v2.1 API endpoint, we start adding microversion features. If you want features in the 2.3 microversion, you specify that fact in your header. Then you’ll get the v2.3 versions of all the resources.
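On the wire this is just one extra header. Here’s a sketch with curl (the endpoint URL is hypothetical, and you’d need a real token in $TOKEN; the header name is the one Nova uses for microversions):

# ask for the 2.3 representation of the servers resource
curl -s http://cloud.example.com:8774/v2.1/servers \
    -H "X-Auth-Token: $TOKEN" \
    -H "X-OpenStack-Nova-API-Version: 2.3"

If the cloud can’t honor the version you ask for, the request is rejected outright rather than silently doing something different.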

If you specify nothing, you get the minimum supported version, which rolls back to v2.1, which is the same as the v2.0 API. So all existing applications just work without doing anything. Only when an application wants to opt into new features does it need to make changes.

Solving for Stakeholders

Let’s look at how this solves things for our stakeholders:

  • Jackson: his application keeps running, v2.1 is v2.0. His application needed to make 0 changes to run as it did before.
  • Emma: she can poll the Versions endpoint and discover that her cloud now supports 2.4. So she can start coding to those features in her application and put a 2.4 version request into all of her code.
  • Sophia: she can now probe all of the clouds she’s working with to find out what feature levels they support, based on the information provided by the Versions endpoint (see the sketch after this list). As the requested version is per request, she can either figure out some API version that intersects all her clouds and write to that, or she can write client-side paths based on different versions she’s sure she can support and has tested (a 2.1 path, a 2.4 path, a 2.52 path) and dynamically use the best path supported on a particular cloud. This approach works even after BleedingEdgeCo cloud has set a minimum supported version of 2.50, even though ImSlowWithUpgradesCo cloud is still only up to 2.4. Sophia’s job was never going to be fun, but it’s now possible, without building a giant autoconf-like system to probe and detect what clouds actually support, or worse: trying to piece it together from a range of service provider and product documentation.
  • Aiden: he’s now collecting client request version information on inbound requests, which means that he can figure out which of his users are still using older code. That provides the ability to know when, if ever, it’s safe to move forward. Or even be able to have a chat with folks using Jackson’s ancient tools to figure out what their long term support strategy should be.
  • Olivia: she can now help evolve the Nova API in a safe way, knowing that she’s not going to break existing users, but will still be able to support new things needed by OpenStack.
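The probing Sophia does is just a GET on Nova’s versions document, which doesn’t even need an auth token. A minimal sketch (the endpoint is hypothetical):

# dump the version document, pretty printed
curl -s http://cloud.example.com:8774/ | python -m json.tool

The v2.1 entry in the response carries the minimum and maximum microversions that cloud supports, which is all a client needs to pick its code path.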

Nova v2.1/v2.0 Forever (nearly)

There are some details about how we implement microversions internally in Nova, which means our assumption is that we’re supporting the base v2.1 API forever. We have the facility to raise the minimum version; however we’ve stated the first time we’re even going to have that conversation is in Barcelona in Fall of 2016. That doesn’t mean we’ll raise the minimum in Orzo, but we’ll have our first conversation, with lots of data from operators and application developers to see how things are going, and what’s a realistic path moving forward.

One API – no more extensions

There were multiple ways we could have gone about microversioning; one of the original suggestions was versions per resource. But the moment you start thinking about what the client code would look like to talk to that, you want to throw up a little bit. It was clear that to provide the best user experience we needed to draw a line in the sand and stop thinking about the Nova API as a toolkit to extend, and declare it as a solid thing that all users can expect from their clouds.

The extensions mechanism is deprecated. All the extensions in the Nova tree are now in the Nova API. Over the next couple of cycles we’ll be consolidating some of the code to make this more consistent, and eventually remove the possibility of out-of-tree extensions working at all. This allows the API to have a meaningful monotonically increasing API version that will mean the same thing across all deploys.

This is also a signal to people working on Nova that “innovating on the API out-of-tree” is a thing we not only don’t find valuable, but find fundamentally hostile to the creation of an application ecosystem for OpenStack.

If you need new things, come talk to us. Let’s figure out how to do it together, in tree, in an interop-friendly way. And yes, this means some features won’t be welcomed, or will be delayed as we consider a way to make them something that could work for a majority of deployers / hypervisors, and that could be a contract we could support long-term.

Never another API endpoint

You should never expect another API endpoint from Nova. Our versioning mechanism is no longer about the endpoint name, it’s the Nova API with a Microversion header in the request. Applications will never need to think about a big rewrite because they have to jump API endpoints. They can instead take advantage of new features on a time table that makes sense to them.

Next Steps

Microversioning is a new thing, but it’s already shown quite a bit of promise. It’s also been implemented by other projects like Ironic to address the same kinds of concerns that we saw. It’s very exciting to see this spread across OpenStack.

This mechanism will also let us bring in things like JSON Home to expose the resource tree in Nova as a resource itself. And work on concepts like the Tasks API to provide a better workflow for creating multi-part resources in Nova (a server with networks and volumes that should get built as an atomic unit).

Discoverability is not yet fully solved, as the policy that applies to a user is still hidden. We hope that with some of the upcoming work on Dynamic Policy in Keystone we can get that built into the API as well. That will give us a complete story where an application can know what it can do against a given cloud before it gets started (as a combination of supported microversion, and exposed policy).

And there is a lot of internal cleaning up, documentation, and testing still to do. The bulk of the Liberty cycle API work is going to be there, to put some polish on what we’ve got.

A big thanks to Chris Yeoh

The journey that got us here was a long and hard one. It started 5 cycles ago at the Grizzly summit. A year ago no one on the team was convinced we had a path forward. Really hard things are sometimes really hard.

Chris Yeoh was the point person for the API work through this whole time, and without his amazing patience and perseverance as we realized some terrible corners we’d painted ourselves into, and work that had to be dropped on the floor, we probably wouldn’t have come out the other side with anything nearly as productive as what we now have. We lost him this spring at far too young an age. This work is part of his legacy, and will be foundational for much of OpenStack for a long time to come.

Thank you Chris, and we miss you.

Do you want to learn more?

If you somehow made it this far and want even more details about what’s going on, the following is great follow-up reading:

OpenStack Emacs Tools

Over the past few weeks I’ve been tweaking my emacs configs, and writing some new modes to help with OpenStack development. Here is a non-comprehensive list of some of the emacs integrations I’m finding useful in developing for OpenStack (especially things that have come up in conversation). URLs are provided, though I won’t walk through all the configuration in depth.

tramp

Tramp is a built-in facility in emacs that lets you open files on remote machines via ssh (and other protocols). This means your emacs runs locally, with all the latency gains that has, configured as you would like, but editing can be done across multiple systems. A remote file URL looks like /servername:/file/path. All the normal file completion that you expect works after that point.

I tend to do code that doesn’t need a full stack run locally on my laptop or desktop, but full stack code happens on my NUC devstack box, and tramp lets me do that from a single emacs instance.

More info about tramp at emacswiki.

fly-hack

Emacs has an in-buffer syntax checker called flymake. There are various tutorials out there for integrating pyflakes or flake8 into it. However, in OpenStack we have hacking, which extends flake8 with new rules. On top of that, every project turns on custom ignores, and many projects extend flake8 further with custom rules for that repo.

[Screenshot: fly-hack flagging a hacking error (H306) in a buffer]

fly-hack uses the flake8 in the .tox/pep8 venv for each project, and uses the tox.ini config for each project, so when in Nova, Nova rules will be enforced, and when in Cinder, Cinder rules will be enforced. Mousing over the error will pop up what it is (you can see the H306 in that screenshot). It has a fallback mode when used over tramp that’s a reasonably sane flake8 least common denominator for OpenStack projects.

More info at fly-hack github page – https://github.com/sdague/fly-hack

stacktest

Our testr / subunit / testtools testing toolchain gets a bit heavy handed when trying to iterate on test development. testr discovery takes 20-30s on the Nova source tree, even if you are trying to run only 1 test. I became inspired at the Ops Summit in Philly to see if I could do better. And stacktest.el was born.

[Screenshot: stacktest running a test from an emacs buffer]

It’s mostly a port from nosemacs, which allows you to run tests from an emacs buffer; however, it does so using tox, subunit, or testtools, depending on whether you want to run a top level target, test a file, test an individual test function, and/or use pdb. It works over tramp, it works with pdb, and it uses the subunit-trace tooling if available.

I’ve now bound F12 to stacktest-one, which is a super fast way to iterate on test development.

More info at the stacktest github page – https://github.com/sdague/stacktest

pycscope

OpenStack is a lot of code, and uses a ton of libraries. git grep works ok in a single repo, but the moment some piece of code ends up calling into an oslo library, that breaks down.

Peter Portante, OpenStack Swift contributor, maintains a pythonized version of cscope. It parses the AST of all the files to build quite a rich cscope symbol database. This lets you search for definitions (searching down), calling points (searching up), and references (searching sideways), which very quickly lets you dive through a code path and figure out where you end up.

[Screenshot: pycscope symbol search results]

The only drawback is that the AST parse is time consuming on something as large as the Nova tree, especially if you index all the .tox directories, which I do to let myself get all the way back through the library stacks that we include.

You can learn more about pycscope at its github page – https://github.com/portante/pycscope

flyspell-prog-mode

Emacs includes a spell checker called flyspell. Very useful for text files. What I only learned last year is that there is also a flyspell-prog-mode, which is like flyspell but only acts on the comments and strings that Emacs semantically parses. This helps avoid spelling mistakes when writing inline documentation.

[Screenshot: flyspell-prog-mode flagging a typo in a comment]

More information at Emacs wiki.

lambda-mode

This is totally gratuitous, but fun. There is a small mode that does a display adjustment of the word ‘lambda’ to an actual ‘λ’. It’s a display adjustment only; the buffer still contains the 6 characters. But it makes the lambda expressions pop out a bit more.

[Screenshot: lambda-mode displaying the word lambda as λ]

More information at Emacs wiki.

The important thing about having an extensible editor is actually extending it to fit your workflow. If you are an emacs user, hopefully this will give you some fun things to dive into. If you use something else, hopefully this will inspire you to go looking into your toolchain for similar optimizations.

My IRC proxy setup

IRC (Internet Relay Chat) is a pretty important communication medium for a lot of Open Source projects nowadays. While email is universal and lives forever, IRC is the equivalent of the hallway chat you’d have with a coworker to bounce ideas around. IRC has the advantage of being a reasonably simple and open (and old) protocol, so writing things that interface with it is about as easy as writing email clients. But it has a pretty substantial drawback: you only get messages when you are connected to the channels in question.

Again, because it’s an open protocol, this is actually a solvable problem: have a piece of software on an always-on system somewhere that remains connected for you. There are 2 schools of thought here:

  • Run a text IRC client in screen or tmux on a system, and reconnect to the terminal session when you come in. WeeChat falls into this camp.
  • Run an irc proxy on a server, and have your IRC client connect to the proxy which replays all the traffic since the last time you were connected. Bip, ZNC, and a bunch of others fall into this camp.

I’m in Camp #2, because I find my reading comprehension of fixed width fonts is far worse than of variable width ones. So I need my IRC client to be in a variable width font, which means console solutions aren’t going to help me.

ZNC

ZNC is my current proxy of choice. I’ve tried a few others, and dumped them for reasons I don’t entirely remember at this point. So ZNC it is.

I have a long standing VPS with Linode to host a few community websites. For something like ZNC you don’t need much horsepower, and could use cloud instances anywhere. If you are running debian or ubuntu on this cloud instance: apt-get install znc gets you rolling.

Run ZNC from the command line and you’ll get something like this:

[Screenshot: znc exiting with an error on first run, because no config exists yet]

That’s because the first time up it needs to create a base configuration. Fortunately it’s pretty straightforward what that needs to be.

znc --makeconf takes you through a pretty interactive configuration screen to build a base configuration. The defaults are mostly fine. The only thing to keep in mind is what port you make ZNC listen on, as you’ll have to remember to punch that port open on the firewall/security group for your cloud instance.

I also find the default of 50 lines of scrollback to be massively insufficient. I usually bounce that to 5000 or 10000.

Now connect your client to the server and off you go. If you have other issues with basic ZNC configuration, I’d suggest checking out the project website.

ZNC as a service

The one place ZNC kind of falls down is that out of the box (at least on ubuntu) it doesn’t have init scripts. Part of this is because the configuration file is very user specific and, as we saw with the interactive mode, is designed around asking you a bunch of questions. That means if your cloud instance reboots, your ZNC doesn’t come back.

I fixed this particular shortcoming with Monit. Monit is a program that monitors other programs on your system and starts or restarts them if they have faulted out. You can apt-get install it on debian/ubuntu.

Here is my base znc monit script:

[Screenshot: my base znc monit script]

Because znc doesn’t do pid files right, this just matches on a process name. It has a start command which includes the user / group for running it, a stop command, and some out of bounds criteria. All in a nice little DSL.
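For a sense of the shape of it, here is a sketch of such a stanza; the znc path, user, and out of bounds criteria are illustrative, not my exact config:

# /etc/monit/conf.d/znc (a sketch, values illustrative)
check process znc matching "znc"
  start program = "/usr/bin/znc --datadir=/home/znc/.znc"
      as uid znc and gid znc
  stop program = "/usr/bin/killall znc"
  if totalmem > 200.0 MB for 5 cycles then restart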

All of the above will get you a basic ZNC server running, surviving cloud instance reboots, and making sure you never miss a minute of IRC.

But… what if we want to go further.

ZNC on ZNC

The idea for this comes from Dan Smith, so full credit where it is due.

If you regularly connect to IRC from more than one computer, but only have 1 ZNC proxy setup, the issue is the scrollback gets replayed to the first computer that connects to the proxy. So jumping between computers to have conversations ends up being a very fragmented experience.

ZNC presents as just an IRC Server to your client. So you can layer ZNC on top of ZNC to create independent scrollback buffers for every client device. My setup looks something like this:

[Diagram: my ZNC on ZNC setup, with a per-device ZNC layered on top of a single upstream ZNC]

Which means that all devices have all the context for IRC, but I’m only presented as a single user on the freenode network.

Going down this path requires a bit more effort, which is why I’ve got the whole thing automated with puppet: znc-puppet.tar. You’ll probably need to do a little bit of futzing with it to make it work for your puppet managed servers (you do puppet all your systems, right?), but hopefully this provides a good starting point.

IRC on Mobile

Honestly, the Android IRC experience is… lacking. Most of the applications out there that do IRC on Android provide what is very much a desktop experience, which works poorly on a small phone.

Monty Taylor pointed me at IRCCloud which is a service that provides a lot of the same offline connectivity as the ZNC stack provides. They have a webui, and an android app, which actually provides a really great mobile experience. So if Mobile is a primary end point for you, it’s probably worth checking out.

IRC optimizations for the Desktop

In the one last thing category, I should share the last piece of glue that I created.

I work from home, with a dedicated home office in the house. Most days I’m working on my desktop. I like to have IRC make sounds when my nick hits, mostly so that I have some awareness that someone wants to talk to me. I rarely flip to IRC right then; it just registers as a “will get to it later” so I can largely keep my concentration wherever I’m at.

That being said, OpenStack is a 24hr a day project. People ping me in the middle of the night. And if I’m not at my computer, I don’t want it making noise. Ideally I’d even like them to see me as ‘away’ in IRC.

Fortunately, most desktop software in Linux integrates with a common messaging bus: dbus. The screensaver in Ubuntu emits a signal on lock and unlock. So I created a custom script that mutes audio on screen lock, unmutes it on screen unlock, as well as sends ‘AWAY’ and ‘BACK’ commands to xchat for those state transitions.
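The audio half of that is only a few lines of shell. A rough sketch, assuming GNOME’s screensaver D-Bus interface and PulseAudio (the xchat AWAY/BACK half is left out here):

# watch the session bus for screensaver lock/unlock signals
dbus-monitor --session "type='signal',interface='org.gnome.ScreenSaver',member='ActiveChanged'" |
while read -r line; do
    case "$line" in
        *"boolean true"*)  pactl set-sink-mute @DEFAULT_SINK@ 1 ;;  # locked: mute
        *"boolean false"*) pactl set-sink-mute @DEFAULT_SINK@ 0 ;;  # unlocked: unmute
    esac
done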

You can find my full script as a gist.

So… this was probably a lot to take in. However, hopefully getting an idea of what an advanced IRC workflow looks like will give folks ideas. As always, I’m interested in hearing about other things people have done. Please leave a comment if you’ve got an interesting productivity hack around IRC.

OpenStack as Layers

Last week at LinuxCon I gave a presentation on DevStack, which gave me the proper excuse to turn an idea that Dean Troyer floated a year ago about OpenStack Layers into pictures (I highly recommend reading that for background; I won’t justify every part of it here again). This abstraction has actually served us well as we think about projects coming into DevStack.

[Diagram: OpenStack Layers]

Some assumptions are made here about which essential services are present as we build up the model.

Layer 1: Base Compute Infrastructure

We assume that compute infrastructure is the common starting point of the minimum functional OpenStack that people are deploying. The output of the last OpenStack User Survey shows that the top 3 deployed services, regardless of type of cloud (Dev/Test, POC, or Production), are Nova / Glance / Keystone. So I don’t think this is a huge stretch. There are definitely users that take other slices (like Swift only), but compute seems to be what the majority of people coming to OpenStack are focused on.

Basic compute services need 3 services to get running: Nova, Glance, and Keystone. That will give you a stateless compute cloud, which is a starting point for many people getting into the space for the first time.

Layer 2: Extended Infrastructure

Once you have a basic bit of compute infrastructure in place, there are some quite common features that you really need to do more interesting work. These are basically enhancements on the storage, networking, or compute aspects of OpenStack. Looking at the User Survey these are all deployed by people, in various ways, at a pretty high rate.

This is the first place we see new projects integrating into OpenStack. Ironic extends the compute infrastructure to baremetal, and Designate adds a missing piece of the networking side with DNS management as part of your compute creation.

Hopefully nothing all that controversial here.

Layer 3: Optional Enhancements

Now we get a set of currently integrated services that integrate north bound and south bound. Horizon integrates on the north bound APIs for all the services; it requires services further down in the layers (today it also integrates with pieces further up the stack). Ceilometer consumes south bound parts of OpenStack (notifications) and polls north bound interfaces.

From the user survey Horizon is deployed a ton. Ceilometer, not nearly as much. Part of this is due to how long things have been integrated, but even if you adjust for that (take the Cinder / Neutron numbers and delete all the Folsom deploys, the first release where those projects were integrated) you still see a picture where Ceilometer is behind on adoption. Recent mailing list discussions hint at why, including some of the scaling issues and a number of alternative communities in this space.

Let’s punt on Barbican, because honestly, it’s new since we came up with this map, and maybe it’s really a layer 2 service.

Layer 4: Consumption Services

I actually don’t like this name, but I failed to come up with something better. Layer 4 in Dean’s post was “Turtles all the way down”, which isn’t great at describing things either.

This is a set of things which consume other OpenStack services to create new services. Trove is the canonical example: it creates a database as a service by orchestrating Nova compute instances with mysql installed in them.

The rest of the layer 4 services all fit the same pattern, even Heat. Heat really is about taking the rest of the components in OpenStack and building a super API for their creation, and it also includes auto scaling functionality based on this. All of these integrated services need a guest agent to do a piece of their function, which means that when testing them in OpenStack we don’t get very far with the Cirros minimal guest that we use for Layer 3 and down.

But as we look at the user survey, we can see that deployment of all of these Layer 4 services is lighter still. And this is what you’d expect as you go up these layers. These are all useful services to a set of users, but they aren’t all useful to all users.

I’d argue that the confusion around Marconi’s place in the OpenStack ecosystem comes from the fact that by analogy it looks and feels like a Layer 4 service like Trove (where a starting point would be allocating computes), but is implemented like a Layer 2 one (a straight up raw service expected to be deployed on bare metal, out of band). And yet it’s not consumable as the queue service for the other Layer 1 & 2 services.

Leaky Taxonomy

This is not the end-all-be-all way to look at OpenStack. However, this layered view of the world confuses people a lot less than the normal view we show them: the giant spider diagram (aka the mandatory architecture slide for all OpenStack presentations):

[Diagram: the OpenStack spider diagram]

This picture is in every deep dive on OpenStack, and scares the crap out of people who think they might want to deploy it. There is no starting point, there is no end point. How do you bite that off in a manageable chunk as the spider grows?

I had one person come up to me after my DevStack talk giving a big thank you. He’d seen a presentation on Cloudstack and OpenStack previously and OpenStack’s complexity from the outside so confused him that he’d run away from our community. Explaining this with the layer framing, and showing how you could experiment with this quickly with DevStack cleared away a ton of confusion and fear. And he’s going to go dive in now.

Tents and Ecosystems

Today the OpenStack Technical Committee is in charge of deciding the size of the “tent” that is OpenStack. The approach to date has been a big tent philosophy, where anything that’s related, and has a REST API, is free to apply to the TC for incubation.

But a big tent is often detrimental to the ecosystem. A new project’s first goal often seems to be to become incubated, to get the gold star of TC legitimacy that they believe is required to build a successful project. But as we’ve seen recently, a TC star doesn’t guarantee success, and honestly, the constraints on being inside the tent are actually pretty high.

And then there is a language question, because OpenStack’s stance on everything being in Python is pretty clear. An ecosystem that only exists to spawn incubated projects, and incubated projects only being allowed to be in Python, basically means an ecosystem devoid of interesting software in other languages. That’s a situation that I don’t think any of us want.

So what if OpenStack were a smaller tent, and not all the layers that are in OpenStack today were part of the integrated release in the future? Projects could be considered a success based on their users and usage out of the ecosystem, and not whether they have a TC gold star. Stackforge wouldn’t have some stigma of “not cool enough”, it would be the normal place to exist as part of the OpenStack ecosystem.

Mesos is an interesting cloud community that functions like that today. Mesos has a small core framework, and a big ecosystem. The smaller core actually helps grow the ecosystem by not making the ecosystem 2nd class citizens.

I think that everyone who works on OpenStack itself, and on all the ecosystem projects, wants this whole thing to be successful. We want a future with interoperable, stable, open source cloud fabric as a given. There are lots of thoughts on how we get there, and since no one has ever created a universal open source cloud fabric that lets users have the freedom to move between providers, public and private, it’s no surprise that as a community we haven’t figured everything out yet.

But here’s another idea into the pool, under the assumption that we are all smarter together with all the ideas on the table, than any of us are on our own.

Splitting up Git Commits

Human review of code takes a bunch of time. It takes even longer if the proposed code has a bunch of unrelated things going on in it. A very common piece of review commentary is “this is unrelated, please put it in a different patch”. You may be thinking to yourself “gah, so much work”, but it turns out git has built-in tools to do this. Let me introduce you to git add -p.

Let’s look at this Grenade review – https://review.openstack.org/#/c/109122/1. This was the result of a day’s worth of hacking to get some things in order. Joe correctly pointed out there was at least 1 unrelated change in that patch (I think he was being nice; there were probably at least 4 things going on that should have been separate). Those things are:

  • The quiesce time for shutdown, which actually fixes bug 1285323 all on its own.
  • The reordering on the directory creates so it works on a system without /opt/stack
  • The conditional upgrade function
  • The removal of the stop short circuits (which probably shouldn’t have been done)

So how do I turn this 1 patch, which is at the bottom of a patch series, into 3 patches, plus drop out the bit that I did wrong?

Step 1: rebase -i master

Start by running git rebase -i master on your tree to put yourself into the interactive rebase mode. In this case I want to be editing the first commit to split it out.

[Screenshot: the interactive rebase editor, with the first commit marked for edit]

Step 2: reset the changes

git reset ##### will unstage all the changes back to the referenced commit, so I’ll be working from a blank slate to add the changes back in. So in this case I need to figure out the last commit before the one I want to change, and do a git reset to that hash.

[Screenshot: git reset to the hash before the commit being split]

Step 3: commit in whole files

Unrelated change #1 was fully isolated in a whole file (stop-base), so that’s easy enough to do a git add stop-base and then git commit to build a new commit with those changes. When splitting commits always do the easiest stuff first to get it out of the way for tricky things later.

Step 4: git add -p 

In this change grenade.sh needs to be split up all by itself, so I ran git add -p to start the interactive git add process. You will be presented with a series of patch hunks and a prompt about what to do with them: y = yes add it, n = no don’t, and lots of other options for being trickier.

[Screenshot: git add -p prompting on the first hunk]

In my particular case the first hunk is actually 2 different pieces of function, so y/n isn’t going to cut it. In that case I can type ‘e’ (edit), and I’m dumped into my editor staring at the patch, which I can interactively modify to be the patch I want.

[Screenshot: editing the hunk in the editor]

I can then delete the pieces I don’t want in this commit. Those deleted pieces will still exist in the uncommitted work, so I’m not losing any work, I’m just not yet dealing with it.

[Screenshot: the hunk trimmed down to just the relevant piece]

Ok, that looks like just the part I want, as I’ll come back to the upgrade_service function in patch #3. So save it, and find all the other hunks in the file that are related to that change to add them to this patch as well.

[Screenshot: accepting the remaining related hunks]

Yes, to both of these, as well as one other towards the end, and this commit is ready to be ‘git commit’ed.

Now what’s left is basically just the upgrade_service function changes, which means I can git add grenade.sh as a whole. I actually decided to fix up the stop calls before doing that, just by editing grenade.sh before adding the final changes. After it’s done, git rebase --continue rebases the rest of the changes on this, giving me a new shiny 5 patch series that’s a lot more clear than the 3 patch one I had before.
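Pulled together, the whole flow looks roughly like this (which files go where is specific to this particular review; yours will differ):

git rebase -i master     # mark the bottom commit as 'edit'
git reset HEAD^          # unstage that commit, back to a blank slate
git add stop-base        # unrelated change #1: a whole file
git commit               # the quiesce fix on its own
git add -p grenade.sh    # interactively pick the hunks for change #2
git commit
git add grenade.sh       # what's left: the upgrade_service change
git commit
git rebase --continue    # replay the rest of the series on top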

Step 5: Don’t forget the idempotent ID

One last important thing. This was a patch to gerrit before, which means when I started I had an idempotent ID on every change. In splitting 1 change into 3, I added that id back to patch #3 so that reviewers would understand this was an update to something they had reviewed before.

It’s almost magic

As a git user, git add -p is one of those things like git rebase -i that you really need in your toolkit to work with anything more than trivial patches. It takes practice to have the right intuition here, but once you do, you can really slice up patches in a way that are much easier for reviewers to work with, even if that wasn’t how the code was written the first time.

Code that is easier for reviewers to review wins you lots of points, and will help with landing your patches in OpenStack faster. So taking the time upfront to get used to this is well worth your time.

OpenStack Failures

Last week we had the bulk of the brain power of the OpenStack QA and Infra teams all in one room, which gave us a great opportunity to spend a bunch of time diving deep into the current state of the Gate, figure out what’s going on, and how we might make things better.

Over the course of 45 minutes we came up with this picture of the world.

[Photo: the whiteboard picture of the world we came up with]

We have a system that’s designed to merge good code, and keep bugs out. The problem is that while it’s doing a great job of keeping big bugs out, subtle bugs, ones that are low percentage (showing up in only 1% of test runs, say), can slip through. These bugs don’t go away; they instead just build up inside of OpenStack.

As OpenStack expands in scope and function, these bugs increase as well. They might grow or shrink based on seemingly unrelated changes, dependency changes (which we don’t gate on), or timing impacts from anything in the underlying OS.

As OpenStack has grown, no one has a full view of the system any more, so even identifying whether a bug might or might not be related to their patch is something most developers can’t do. The focus of an individual developer is typically just wanting to land their code, not diving into the system as a whole. This might be because they are on a schedule, or just that landing code feels more fun and productive than digging into existing bugs.

From a social aspect we seem to have found that there is some threshold failure rate in the gate that we always return to. Everyone ignores the baseline races until we get to that failure rate, and once we get above it for long periods of time, everyone assumes fixing it is someone else’s responsibility. We had an interesting experiment recently where we dropped 300 Tempest tests in turning off Nova v3 by default, which gave us a short term failure drop, but within a couple of months we were back up to our unpleasant failure rate in the gate.

Part of the visibility question is also that most developers in OpenStack don’t actually understand how the CI system works today, so when it fails, they feel powerless. It’s just a big black box blocking their code, and they don’t know why. That’s incredibly demotivating.

Towards Solutions

Every time the gate fail rates get high, debates show up in IRC channels and on the mailing list with ideas to fix it. Many of these ideas are actually features that were added to the system years ago. Some are ideas that are provably wrong, like autorecheck, which would just increase the rate of bug accumulation in the OpenStack code base.

A lot of good ideas were brought up in the room, over the next week Jim Blair and I are going to try to turn these into something a little more coherent to bring to the community. The OpenStack CI system tries to be the living and evolving embodiment of community values at any point in time. One of the important things to remember is those values aren’t fixed points either.

The gate doesn’t exist to serve itself, it exists because before OpenStack had one, back in the Diablo days, OpenStack simply did not work. HP Cloud had 1000 patches to Diablo to be able to put it into production, and took 2 years to migrate from it to another version of OpenStack.

Processing OpenStack GPG keys in Thunderbird

If you were part of the OpenStack keysigning party from the summit, you are probably getting a bunch of emails sent by caff right now. This is an easy way to let a key signer send you your signed key.

These are really easy to process if you are using Thunderbird + Enigmail as your signed/encrypted mail platform. Just open up the mail attachments, right click, and import key:

[Screenshot: importing the attached key via Enigmail’s right-click menu]

Once you’ve done this you’ll have included the signature in your local database. Then from the command line you can:

gpg --send-key YOURKEYID

And then you are done.
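If you aren’t a Thunderbird user, the same processing works from the command line. A rough sketch (the attachment filename is illustrative; caff encrypts each mail to your key, so you decrypt it and then import):

gpg --decrypt 0xYOURKEYID.1.signed-by-0xTHEIRKEYID.asc | gpg --import
gpg --send-key YOURKEYID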

Happy GPGing!

OpenStack Summit Preview: Elastic Recheck

With OpenStack Summit only a few days away, I’ve been preparing materials for my Elastic Recheck talk. Elastic Recheck is a system that we built over the last 7 months to help us data mine failures out of test results in the OpenStack test system and find patterns.

The Problem

OpenStack is a complicated system, with lots of components working in an asynchronous way. This means that small timing changes can often expose some interesting issues. This is especially true in an environment like the upstream gate where we are running tests in parallel.

A good example of this is a currently open bug. If you run a security group list against all tenants at the same time someone is deleting a security group, the listing returns a 404. This is because of a nesting behavior in the list, which includes running a db get over all the items in the list to get additional details. There is an exposure window where a security group is in the list, it’s deleted by another user, and then we go back to get it, and that fails. That failure currently propagates a set of exceptions which become a 404 to the end user, which is totally unexpected.

That window seems really small, right? Like it never could actually happen. Well, in the gate, even with only 2-4 API calls happening at a time, we see this 7 times a day:

[Graph: daily occurrences of the security group listing failure]

The Solution

Starting during the Havana RC phase, we turned this into a search problem. Using logstash and elastic search on the back end, we find fingerprints for known bugs. These fingerprints are queries that will give us back only the test runs which seem to have failed on this particular bug.
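A fingerprint is nothing more than a query. A hypothetical one for the security group race above might look something like:

message:"SecurityGroupNotFound" AND tags:"screen-n-api.txt"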

The system includes real time reporting to Gerrit and IRC when we detect that a job failed with a known issue, and bulk reporting every 30 minutes to let us understand trends and classification rates.

[Diagram: the Elastic Recheck flow]

Overall this has been a huge boon towards really identifying some of the key issues we expose during normal testing. What’s also been really interesting is that having a system like this impacts the way people write core project code, so that errors are more uniquely discoverable. That is a win not only for our detection, but for debugging OpenStack in a production environment.

Learning More

If you are going to be in Atlanta, and would like to know more, you’ll have lots of opportunities.

My summit talk is going to be an overview intended for people that want to learn more about the project and the technique.

Elastic Recheck – Tools for Finding Race Conditions in OpenStack
Date: Thursday, May 15th
Time: 2:20pm
Room: B206
Track: Related Open Source Projects

We’ll also be doing a design summit session for people that are interested in contributing to the project, and helping us set priorities for the next cycle. Wed, 9:50am in the Infrastructure Track.

Also, feel free to find me anywhere to chat about Elastic Recheck. I’m always happy to talk about it, especially if you are interested in getting involved in the effort.

I believe the summit talk will be recorded, and I’ll post links to the video once it’s online for people that can’t make it to Atlanta.

Helpful Gerrit Queries (Gerrit 2.8 edition)

Gerrit got a very nice upgrade recently which brings in a whole host of really interesting new features. Here are some of the things you should know to make use of them. Before getting started, you might want to read up on the basics of gerrit searches in Gerrit queries to avoid review overload.

Labels

Gone are the days of -CodeReview-1; we now have a more generic mechanism called labels. Labels are a lot more powerful because they can match both ranges of values and specific users!

For instance, to select everything without negative code reviews:

status:open NOT label:Code-Review<=-1

Because we now have operators, we can select a whole range of values, so any negative vote (-1, -2, or any higher negative value should one be implemented in the future) matches. Note also that negation is done with the ‘NOT’ keyword, and that CodeReview becomes label:Code-Review in the new system.

Labels exist for all three columns. Verified is what CI bots vote in, and Workflow is a combination of the Work in Progress (Workflow=-1) and Approved (Workflow=1) states that we used to have.
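For example, to hide work-in-progress changes, you can filter on the Workflow label the same way you filter on Code-Review:

status:open NOT label:Workflow<=-1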

Labels with Users

Labels get really powerful when you start adding users to them. Now that we have a ton of CI bots voting, each with occasional issues in their systems, you might want to filter down to changes that Jenkins currently has a positive vote on.

status:open label:Verified>=1,jenkins

This means that changes which do not yet have a Jenkins +1 or +2 won’t be shown in your list, hiding patches which are currently blocked by Jenkins or which it hasn’t reported on yet. If you also want to see changes Jenkins hasn’t voted on, change that to >=0.
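Spelled out, that looser variant is:

status:open label:Verified>=0,jenkins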

Labels with Self

This is where it gets really fun. There is a special user, self, which means your logged-in id.

status:open NOT label:Code-Review>=0,self label:Verified>=1,jenkins NOT label:Code-Review<=-1

This is a list of all changes that you have not yet commented on, that don’t have negative code reviews, and that Jenkins has passing results on. That means this query becomes a todo list: as you comment on changes, positively, negatively, or otherwise, they drop out of the query.

If you also drop all the work in progress patches:

status:open NOT label:Code-Review>=0,self label:Verified>=1,jenkins NOT label:Code-Review<=-1 
  NOT label:Workflow<=-1

then I consider this a basic “Inbox zero” review query. You can apply this to specific projects with “project:openstack/nova”, for instance. Out of this basic chunk I’ve built a bunch of local links to work through reviews.
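For example, the nova-specific version is the same chunk with the project filter appended:

status:open project:openstack/nova NOT label:Code-Review>=0,self label:Verified>=1,jenkins
  NOT label:Code-Review<=-1 NOT label:Workflow<=-1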

File Matching

With this version of gerrit we get secondary indexes, which add some more interesting search capabilities: essentially, a search engine for certain other types of queries. This includes matching changes against the files they touch.

status:open file:^.*/db/.*/versions/.* project:^openstack.*

is a query that looks at all the outstanding changes in OpenStack that change a database migration. It’s currently showing glance, heat, nova, neutron, trove, and storyboard changes.

Very helpful if, as a reviewer, you want to keep an eye on a cross section of changes regardless of project.
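The same trick works for other cross-cutting concerns; for instance, a hypothetical query for outstanding changes that touch a requirements.txt across all OpenStack projects:

status:open file:^requirements.txt project:^openstack.*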

Learning more

There are also plenty of other new parts of this query language. You can learn all the details in the gerrit documentation.

We’re also going to work on making some of these “inbox zero” queries available in the gerrit review system as custom dashboards, making them easy to use on any project in the system without building local bookmarks to queries.

Happy reviewing!