In praise of systemd

There was definitely a lot of hate around systemd in the Linux community over the last few years as all the Linux distros moved over to it. Things change all the time, and I personally never understood this one. It wasn't like the random collection of just barely working shell scripts, which may or may not have implemented "status", was really a technically superior solution.

Recently I've been plumbing more native systemd support into DevStack (OpenStack's development environment setup tool). To me, systemd is just another way to start services: it does a few things better, at the cost of having to learn a new interface. But in a developer context, some huge wins come from the logging system that arrives with it, journald.
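For a feel of that "new interface": with DevStack's generated units (the `devstack@*.service` names below match what DevStack produces on my setup; adjust to yours), restarting and inspecting a single service looks like:

```shell
# Restart just the nova-api service and check that it came back up
sudo systemctl restart devstack@n-api.service
sudo systemctl status devstack@n-api.service

# Tail the logs for that one unit, like tail -f on an old log file
sudo journalctl -f --unit devstack@n-api.service
```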

Systemd has its own internal journaling system, which can reflect out to syslog and friends. However, it's inherently much richer: journal messages can include not only text but arbitrary metadata, and you can then query the journal with that arbitrary metadata.
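Under the hood this is surprisingly simple: journald accepts datagrams of `FIELD=value` lines on a unix socket. A minimal sketch of sending a structured entry (the helper names are mine; real code would use the `systemd.journal` Python bindings, and this omits the length-prefixed encoding journald requires for values that themselves contain newlines):

```python
import os
import socket

# journald's native protocol socket on a systemd machine
JOURNAL_SOCKET = "/run/systemd/journal/socket"


def serialize(fields):
    """Encode fields into the native wire format: one FIELD=value
    line per field, concatenated into a single datagram."""
    payload = b""
    for name, value in fields.items():
        payload += name.encode() + b"=" + str(value).encode() + b"\n"
    return payload


def journal_send(fields):
    """Send one structured entry to journald, if it is running here."""
    if not os.path.exists(JOURNAL_SOCKET):
        return False  # no journald on this machine; nothing to do
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.sendto(serialize(fields), JOURNAL_SOCKET)
    finally:
        sock.close()
    return True


# MESSAGE and PRIORITY are standard journal fields; REQUEST_ID is the
# kind of custom application field discussed below.
journal_send({
    "MESSAGE": "booting instance",
    "PRIORITY": 6,  # syslog "info"
    "REQUEST_ID": "req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb",
})
```

Every extra field rides along with the message and becomes queryable, which is what makes the request-id example below work.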

A concrete example of this is the request-id in OpenStack. On any inbound REST API call, a request-id is generated and then passed around between the workers of that service. In theory this lets you trace a request through multiple systems to see what is going on. In practice, you end up having to build some other system to collect and search these logs. But with some changes to OpenStack's logging subsystem to emit journal-native messages, you can do the following:

journalctl REQUEST_ID=req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb

And get the following:

-- Logs begin at Wed 2017-03-29 08:23:23 EDT, end at Wed 2017-03-29 16:19:46 EDT. --
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.wsgi [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Action: 'create', calling method: <bound method ServersController.create of <nova.api.openstack.compute.servers.ServersController object at 0x7fa7c93584d0>>, body: {"server": {"min_count": 1, "flavorRef": "d2", "name": "test1", "imageRef": "f69a1669-34b6-4a2a-9f98-6c4ae0a9d21a", "max_count": 1}} from (pid=14042) _process_stack /opt/stack/nova/nova/api/openstack/wsgi.py:621
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=MultipleCreate, alias=os-multiple-create, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=BlockDeviceMapping, alias=os-block-device-mapping, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=BlockDeviceMappingV1, alias=os-block-device-mapping-v1, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=AvailabilityZone, alias=os-availability-zone, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=UserData, alias=os-user-data, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=Keypairs, alias=os-keypairs, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=SchedulerHints, alias=os-scheduler-hints, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=SecurityGroups, alias=os-security-groups, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.api.openstack.compute.servers [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running _create_extension_point for <Extension: name=ConfigDrive, alias=os-config-drive, version=1> from (pid=14042) _create_extension_point /opt/stack/nova/nova/api/openstack/compute/servers.py:740
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.network.neutronv2.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] validate_networks() for None from (pid=14042) validate_networks /opt/stack/nova/nova/network/neutronv2/api.py:1622
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/networks.json?tenant_id=70162ab521a5455faff0680e923dd091&shared=False used request id req-19605b00-631d-41fe-9d2e-78583e4b0999 from (pid=14042) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/networks.json?shared=True used request id req-f53ce1bb-d261-4a15-a75f-4320b7342023 from (pid=14042) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/quotas/70162ab521a5455faff0680e923dd091.json used request id req-77f7c098-8cc0-4396-b6bb-ff53db580a5b from (pid=14042) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/ports.json?fields=id&tenant_id=70162ab521a5455faff0680e923dd091 used request id req-1183c82b-cd80-4a19-ae07-888f5cab0aec from (pid=14042) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.virt.hardware [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] emulator threads policy constraint: None from (pid=14042) get_emulator_threads_constraint /opt/stack/nova/nova/virt/hardware.py:1286
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for project 70162ab521a5455faff0680e923dd091. Resources: ['metadata_items'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:369
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for user 444f517480154b5588b354e64c50c075 and project 70162ab521a5455faff0680e923dd091. Resources: ['metadata_items'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:361
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for project 70162ab521a5455faff0680e923dd091. Resources: ['injected_files'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:369
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for user 444f517480154b5588b354e64c50c075 and project 70162ab521a5455faff0680e923dd091. Resources: ['injected_files'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:361
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for project 70162ab521a5455faff0680e923dd091. Resources: ['injected_file_content_bytes', 'injected_file_path_bytes'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:369
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for user 444f517480154b5588b354e64c50c075 and project 70162ab521a5455faff0680e923dd091. Resources: ['injected_file_content_bytes', 'injected_file_path_bytes'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:361
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Reserving resources using context.project_id: 70162ab521a5455faff0680e923dd091 from (pid=14042) reserve /opt/stack/nova/nova/quota.py:500
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Reserving resources using context.user_id: 444f517480154b5588b354e64c50c075 from (pid=14042) reserve /opt/stack/nova/nova/quota.py:505
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Attempting to reserve resources for project 70162ab521a5455faff0680e923dd091 and user 444f517480154b5588b354e64c50c075. Deltas: {'instances': 1, 'ram': 1024, 'cores': 1} from (pid=14042) reserve /opt/stack/nova/nova/quota.py:510
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Quota limits for project 70162ab521a5455faff0680e923dd091: {'project_id': u'70162ab521a5455faff0680e923dd091'} from (pid=14042) reserve /opt/stack/nova/nova/quota.py:519
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for project 70162ab521a5455faff0680e923dd091. Resources: ['instances', 'ram', 'cores'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:369
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Quotas for project 70162ab521a5455faff0680e923dd091 after resource sync: {'instances': 10, 'ram': 51200, 'cores': 20} from (pid=14042) reserve /opt/stack/nova/nova/quota.py:525
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting quotas for user 444f517480154b5588b354e64c50c075 and project 70162ab521a5455faff0680e923dd091. Resources: ['instances', 'ram', 'cores'] from (pid=14042) _get_quotas /opt/stack/nova/nova/quota.py:361
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Quotas for project 70162ab521a5455faff0680e923dd091 and user 444f517480154b5588b354e64c50c075 after resource sync: {'instances': 10, 'ram': 51200, 'cores': 20} from (pid=14042) reserve /opt/stack/nova/nova/quota.py:533
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.db.sqlalchemy.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] QuotaUsage has not changed, refresh is unnecessary for: {'user_id': u'444f517480154b5588b354e64c50c075', 'deleted_at': None, 'reserved': 0, 'in_use': 0, 'deleted': 0, 'created_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 803775), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 802794), 'until_refresh': None, 'resource': 'instances', 'project_id': u'70162ab521a5455faff0680e923dd091', 'id': 1} from (pid=14042) _refresh_quota_usages /opt/stack/nova/nova/db/sqlalchemy/api.py:3694
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.db.sqlalchemy.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] QuotaUsage has not changed, refresh is unnecessary for: {'user_id': u'444f517480154b5588b354e64c50c075', 'deleted_at': None, 'reserved': 0, 'in_use': 0, 'deleted': 0, 'created_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 808250), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 807797), 'until_refresh': None, 'resource': 'ram', 'project_id': u'70162ab521a5455faff0680e923dd091', 'id': 2} from (pid=14042) _refresh_quota_usages /opt/stack/nova/nova/db/sqlalchemy/api.py:3694
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.db.sqlalchemy.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] QuotaUsage has not changed, refresh is unnecessary for: {'user_id': u'444f517480154b5588b354e64c50c075', 'deleted_at': None, 'reserved': 0, 'in_use': 0, 'deleted': 0, 'created_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 810384), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 809969), 'until_refresh': None, 'resource': 'cores', 'project_id': u'70162ab521a5455faff0680e923dd091', 'id': 3} from (pid=14042) _refresh_quota_usages /opt/stack/nova/nova/db/sqlalchemy/api.py:3694
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.db.sqlalchemy.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Copying QuotaUsage for resource "instances" from user_usages into project_usages: {'user_id': u'444f517480154b5588b354e64c50c075', 'deleted_at': None, 'reserved': 0, 'in_use': 0, 'deleted': 0, 'created_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 803775), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 802794), 'until_refresh': None, 'resource': 'instances', 'project_id': u'70162ab521a5455faff0680e923dd091', 'id': 1} from (pid=14042) quota_reserve /opt/stack/nova/nova/db/sqlalchemy/api.py:3842
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.db.sqlalchemy.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Copying QuotaUsage for resource "ram" from user_usages into project_usages: {'user_id': u'444f517480154b5588b354e64c50c075', 'deleted_at': None, 'reserved': 0, 'in_use': 0, 'deleted': 0, 'created_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 808250), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 807797), 'until_refresh': None, 'resource': 'ram', 'project_id': u'70162ab521a5455faff0680e923dd091', 'id': 2} from (pid=14042) quota_reserve /opt/stack/nova/nova/db/sqlalchemy/api.py:3842
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.db.sqlalchemy.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Copying QuotaUsage for resource "cores" from user_usages into project_usages: {'user_id': u'444f517480154b5588b354e64c50c075', 'deleted_at': None, 'reserved': 0, 'in_use': 0, 'deleted': 0, 'created_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 810384), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, 809969), 'until_refresh': None, 'resource': 'cores', 'project_id': u'70162ab521a5455faff0680e923dd091', 'id': 3} from (pid=14042) quota_reserve /opt/stack/nova/nova/db/sqlalchemy/api.py:3842
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Created reservations ['b8e99024-107c-4992-b7ea-c96284033336', '15691130-ae33-4dc6-b9cb-cbdc6dc7b407', 'e0fe21ab-eb0e-43a2-898d-631871b5142e'] from (pid=14042) reserve /opt/stack/nova/nova/quota.py:1383
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.compute.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Going to run 1 instances... from (pid=14042) _provision_instances /opt/stack/nova/nova/compute/api.py:1002
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.compute.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] block_device_mapping [BlockDeviceMapping(attachment_id=<?>,boot_index=0,connection_info=None,created_at=<?>,delete_on_termination=True,deleted=<?>,deleted_at=<?>,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=<?>,image_id='f69a1669-34b6-4a2a-9f98-6c4ae0a9d21a',instance=<?>,instance_uuid=<?>,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=<?>,volume_id=None,volume_size=None)] from (pid=14042) _bdm_validate_set_size_and_instance /opt/stack/nova/nova/compute/api.py:1339
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG nova.quota [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Committed reservations [u'b8e99024-107c-4992-b7ea-c96284033336', u'15691130-ae33-4dc6-b9cb-cbdc6dc7b407', u'e0fe21ab-eb0e-43a2-898d-631871b5142e'] from (pid=14042) commit /opt/stack/nova/nova/quota.py:1409
Mar 29 16:16:05 os4 nova-api[14042]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CAST unique_id: ce8a6568de9f4cc98ff3f06e0ca4004a exchange 'nova' topic 'conductor' from (pid=14042) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:05 os4 nova-api[14042]: INFO nova.osapi_compute.wsgi.server [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] 10.42.0.53 "POST /v2.1/servers HTTP/1.1" status: 202 len: 795 time: 0.8296130
Mar 29 16:16:05 os4 nova-conductor[14326]: WARNING oslo_config.cfg [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Option "scheduler_default_filters" from group "DEFAULT" is deprecated. Use option "enabled_filters" from group "filter_scheduler".
Mar 29 16:16:05 os4 nova-conductor[14326]: DEBUG oslo_db.sqlalchemy.engines [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION from (pid=14326) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
Mar 29 16:16:05 os4 nova-conductor[14326]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 917d9b546f5f4809a5dc38d3399bc714 exchange 'nova' topic 'scheduler' from (pid=14326) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CAST unique_id: ac54c940a3064721907949fbf6f3af48 NOTIFY exchange 'nova' topic 'notifications.info' from (pid=14269) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:473
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Found 2 cells: 00000000-0000-0000-0000-000000000000, 31d98bab-f734-4463-8f3f-87b604d0bd2b from (pid=14269) _get_computes_all_cells /opt/stack/nova/nova/scheduler/host_manager.py:586
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting compute nodes and services for cell 00000000-0000-0000-0000-000000000000(cell0) from (pid=14269) _get_computes_all_cells /opt/stack/nova/nova/scheduler/host_manager.py:591
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_db.sqlalchemy.engines [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION from (pid=14269) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Getting compute nodes and services for cell 31d98bab-f734-4463-8f3f-87b604d0bd2b(cell1) from (pid=14269) _get_computes_all_cells /opt/stack/nova/nova/scheduler/host_manager.py:591
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_db.sqlalchemy.engines [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION from (pid=14269) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "(u'os4', u'os4')" acquired by "nova.scheduler.host_manager._locked_update" :: waited 0.000s from (pid=14269) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Update host state from compute node: ComputeNode(cpu_allocation_ratio=16.0,cpu_info='{"vendor": "Intel", "model": "Broadwell", "arch": "x86_64", "features": ["smap", "avx", "clflush", "sep", "rtm", "vme", "dtes64", "invpcid", "tsc", "fsgsbase", "xsave", "pge", "vmx", "erms", "xtpr", "cmov", "hle", "smep", "ssse3", "est", "pat", "monitor", "smx", "pbe", "lm", "msr", "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "tm", "sse4.1", "pae", "sse4.2", "pclmuldq", "acpi", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "tm2", "ht", "pse", "lahf_lm", "abm", "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "ds", "invtsc", "pni", "rdtscp", "avx2", "aes", "sse2", "ss", "ds_cpl", "bmi1", "bmi2", "pcid", "fpu", "cx16", "pse36", "mtrr", "movbe", "pdcm", "rdrand", "x2apic"], "topology": {"cores": 2, "cells": 1, "threads": 2, "sockets": 1}}',created_at=2017-03-29T10:37:36Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=195,free_disk_gb=218,free_ram_mb=15435,host='os4',host_ip=10.42.0.53,hypervisor_hostname='os4',hypervisor_type='QEMU',hypervisor_version=2005000,id=1,local_gb=218,local_gb_used=0,memory_mb=15947,memory_mb_used=512,metrics='[]',numa_topology='{"nova_object.version": "1.2", "nova_object.changes": ["cells"], "nova_object.name": "NUMATopology", "nova_object.data": {"cells": [{"nova_object.version": "1.2", "nova_object.changes": ["cpu_usage", "memory_usage", "cpuset", "mempages", "pinned_cpus", "memory", "siblings", "id"], "nova_object.name": "NUMACell", "nova_object.data": {"cpu_usage": 0, "memory_usage": 0, "cpuset": [0, 1, 2, 3], "pinned_cpus": [], "siblings": [[1, 3], [0, 2]], "memory": 15947, "mempages": [{"nova_object.version": "1.1", "nova_object.changes": ["total", "used", "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": 
{"used": 0, "total": 4082564, "reserved": 0, "size_kb": 4}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["total", "used", "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb": 2048}, "nova_object.namespace": "nova"}, {"nova_object.version": "1.1", "nova_object.changes": ["total", "used", "reserved", "size_kb"], "nova_object.name": "NUMAPagesTopology", "nova_object.data": {"used": 0, "total": 0, "reserved": 0, "size_kb": 1048576}, "nova_object.namespace": "nova"}], "id": 0}, "nova_object.namespace": "nova"}]}, "nova_object.namespace": "nova"}',pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.5,running_vms=0,service_id=None,stats={},supported_hv_specs=[HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec],updated_at=2017-03-29T20:16:00Z,uuid=aa91aff1-89f9-47f7-a66c-8a60d87aa76d,vcpus=4,vcpus_used=0) from (pid=14269) _locked_update /opt/stack/nova/nova/scheduler/host_manager.py:168
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Update host state with aggregates: [] from (pid=14269) _locked_update /opt/stack/nova/nova/scheduler/host_manager.py:171
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Update host state with service dict: {'binary': u'nova-compute', 'deleted': False, 'created_at': datetime.datetime(2017, 3, 29, 10, 37, 36, tzinfo=<iso8601.Utc>), 'updated_at': datetime.datetime(2017, 3, 29, 20, 16, 5, tzinfo=<iso8601.Utc>), 'report_count': 3471, 'topic': u'compute', 'host': u'os4', 'version': 17, 'disabled': False, 'forced_down': False, 'last_seen_up': datetime.datetime(2017, 3, 29, 20, 16, 5, tzinfo=<iso8601.Utc>), 'deleted_at': None, 'disabled_reason': None, 'id': 6} from (pid=14269) _locked_update /opt/stack/nova/nova/scheduler/host_manager.py:174
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.host_manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Update host state with instances: {} from (pid=14269) _locked_update /opt/stack/nova/nova/scheduler/host_manager.py:177
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "(u'os4', u'os4')" released by "nova.scheduler.host_manager._locked_update" :: held 0.005s from (pid=14269) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Starting with 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:70
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.filters.retry_filter [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Re-scheduling is disabled from (pid=14269) host_passes /opt/stack/nova/nova/scheduler/filters/retry_filter.py:34
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter RetryFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter AvailabilityZoneFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter RamFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter DiskFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter ComputeFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter ComputeCapabilitiesFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter ImagePropertiesFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter ServerGroupAntiAffinityFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter ServerGroupAffinityFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter SameHostFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.filters [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filter DifferentHostFilter returned 1 host(s) from (pid=14269) get_filtered_objects /opt/stack/nova/nova/filters.py:104
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.filter_scheduler [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Filtered [(os4, os4) ram: 15435MB disk: 199680MB io_ops: 0 instances: 0] from (pid=14269) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:114
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.filter_scheduler [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Weighed [WeighedHost [host: (os4, os4) ram: 15435MB disk: 199680MB io_ops: 0 instances: 0, weight: 0.0]] from (pid=14269) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:119
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.scheduler.filter_scheduler [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Selected host: WeighedHost [host: (os4, os4) ram: 15435MB disk: 199680MB io_ops: 0 instances: 0, weight: 0.0] from (pid=14269) _schedule /opt/stack/nova/nova/scheduler/filter_scheduler.py:126
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "(u'os4', u'os4')" acquired by "nova.scheduler.host_manager._locked" :: waited 0.000s from (pid=14269) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG nova.virt.hardware [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Require both a host and instance NUMA topology to fit instance on host. from (pid=14269) numa_fit_instance_to_host /opt/stack/nova/nova/virt/hardware.py:1466
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "(u'os4', u'os4')" released by "nova.scheduler.host_manager._locked" :: held 0.003s from (pid=14269) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CAST unique_id: fcea183f04574d61901f435df9d4fba0 NOTIFY exchange 'nova' topic 'notifications.info' from (pid=14269) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:473
Mar 29 16:16:06 os4 nova-scheduler[14269]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 917d9b546f5f4809a5dc38d3399bc714 reply queue: reply_8feb4f5bbc7d493c8821bc41ffee5e1a time elapsed: 0.215702679008s from (pid=14269) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-conductor[14326]: DEBUG oslo_db.sqlalchemy.engines [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION from (pid=14326) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
Mar 29 16:16:06 os4 nova-conductor[14326]: DEBUG oslo_db.sqlalchemy.engines [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION from (pid=14326) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:261
Mar 29 16:16:06 os4 nova-conductor[14326]: DEBUG nova.conductor.manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] block_device_mapping [BlockDeviceMapping(attachment_id=<?>,boot_index=0,connection_info=None,created_at=<?>,delete_on_termination=True,deleted=<?>,deleted_at=<?>,destination_type='local',device_name=None,device_type='disk',disk_bus=None,guest_format=None,id=<?>,image_id='f69a1669-34b6-4a2a-9f98-6c4ae0a9d21a',instance=<?>,instance_uuid=<?>,no_device=False,snapshot_id=None,source_type='image',tag=None,updated_at=<?>,volume_id=None,volume_size=None)] from (pid=14326) _create_block_device_mapping /opt/stack/nova/nova/conductor/manager.py:841
Mar 29 16:16:06 os4 nova-conductor[14326]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CAST unique_id: cb281d7a070a47e29aa664ff22879597 exchange 'nova' topic 'compute.os4' from (pid=14326) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "a70f2d77-f140-4edf-bf38-8b07076be37a" acquired by "nova.compute.manager._locked_do_build_and_run_instance" :: waited 0.000s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 38331690f1924074ba2146e3e22ec717 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 38331690f1924074ba2146e3e22ec717 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0114914269943s from (pid=14327) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.compute.manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Starting instance... from (pid=14334) _do_build_and_run_instance /opt/stack/nova/nova/compute/manager.py:1747
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 5a0709355daf4e4f8dd4589455fdffd8 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-conductor[14326]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 5a0709355daf4e4f8dd4589455fdffd8 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0587505440053s from (pid=14326) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CAST unique_id: b400e8e03fed4cc19a7af04d81a25417 NOTIFY exchange 'nova' topic 'notifications.info' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:473
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.notifications.objects.base [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Defaulting the value of the field 'projects' to None in FlavorPayload due to 'Cannot call _load_projects on orphaned Flavor object' from (pid=14334) populate_schema /opt/stack/nova/nova/notifications/objects/base.py:118
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CAST unique_id: dff4adf357e940ae91140eb047dbed8a NOTIFY exchange 'nova' topic 'versioned_notifications.info' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:473
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "compute_resources" acquired by "nova.compute.resource_tracker.instance_claim" :: waited 0.000s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.compute.resource_tracker [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Memory overhead for 1024 MB instance; 0 MB from (pid=14334) instance_claim /opt/stack/nova/nova/compute/resource_tracker.py:147
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.compute.resource_tracker [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Disk overhead for 10 GB instance; 0 GB from (pid=14334) instance_claim /opt/stack/nova/nova/compute/resource_tracker.py:150
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 57afd33c5efd4638badeb6efd1242afa exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 57afd33c5efd4638badeb6efd1242afa reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.00463710498298s from (pid=14327) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Attempting claim on node os4: memory 1024 MB, disk 10 GB, vcpus 1 CPU
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Total memory: 15947 MB, used: 512.00 MB
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] memory limit: 23920.50 MB, free: 23408.50 MB
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Total disk: 218 GB, used: 0.00 GB
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] disk limit: 218.00 GB, free: 218.00 GB
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Total vcpu: 4 VCPU, used: 0.00 VCPU
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] vcpu limit not specified, defaulting to unlimited
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.virt.hardware [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Require both a host and instance NUMA topology to fit instance on host. from (pid=14334) numa_fit_instance_to_host /opt/stack/nova/nova/virt/hardware.py:1466
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.compute.claims [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Claim successful on node os4
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 572a6e0e3cd642bab51844504c947944 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-conductor[14326]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 572a6e0e3cd642bab51844504c947944 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0616553190048s from (pid=14326) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 7f329ad9e6ce406cae034a83ad6dcb92 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 7f329ad9e6ce406cae034a83ad6dcb92 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.00696902099298s from (pid=14327) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.scheduler.client.report [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Sending allocation for instance {'MEMORY_MB': 1024, 'VCPU': 1, 'DISK_GB': 10} from (pid=14334) _allocate_for_instance /opt/stack/nova/nova/scheduler/client/report.py:807
Mar 29 16:16:06 os4 nova-compute[14334]: INFO nova.scheduler.client.report [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Submitted allocation for instance
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 36a079cbc7304ed6bc5d2e0ca0b3404b exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.scheduler.client.report [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Refreshing aggregate associations for resource provider aa91aff1-89f9-47f7-a66c-8a60d87aa76d from (pid=14334) _ensure_resource_provider /opt/stack/nova/nova/scheduler/client/report.py:437
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.scheduler.client.report [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Updating our resource provider generation from 1 to 2 from (pid=14334) _get_inventory_and_update_provider_generation /opt/stack/nova/nova/scheduler/client/report.py:480
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "compute_resources" released by "nova.compute.resource_tracker.instance_claim" :: held 0.291s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.compute.manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Start building networks asynchronously for instance. from (pid=14334) _build_resources /opt/stack/nova/nova/compute/manager.py:2062
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: f70037f919264d8db2021cef722e9f97 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: f70037f919264d8db2021cef722e9f97 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0664471710043s from (pid=14327) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "compute_resources" acquired by "nova.compute.resource_tracker.update_usage" :: waited 0.000s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 76a1008be9ea4a0f882eaed972093de8 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:06 os4 nova-compute[14334]: DEBUG nova.scheduler.client.report [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Refreshing aggregate associations for resource provider aa91aff1-89f9-47f7-a66c-8a60d87aa76d from (pid=14334) _ensure_resource_provider /opt/stack/nova/nova/scheduler/client/report.py:437
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "compute_resources" released by "nova.compute.resource_tracker.update_usage" :: held 0.096s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:07 os4 nova-compute[14334]: INFO nova.virt.osinfo [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Cannot load Libosinfo: (No module named gi.repository.Libosinfo)
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: f545ef909ec64b169fa53a601952279d exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.compute.manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Allocating IP information in the background. from (pid=14334) _allocate_network_async /opt/stack/nova/nova/compute/manager.py:1384
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.network.neutronv2.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] allocate_for_instance() from (pid=14334) allocate_for_instance /opt/stack/nova/nova/network/neutronv2/api.py:829
Mar 29 16:16:07 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: f545ef909ec64b169fa53a601952279d reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0140517560067s from (pid=14327) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.block_device [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] block_device_list [] from (pid=14334) volume_in_mapping /opt/stack/nova/nova/block_device.py:585
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.compute.manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Start building block device mappings for instance. from (pid=14334) _build_resources /opt/stack/nova/nova/compute/manager.py:2088
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: e28395f19cde4e4ca2698e93de02a63b exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:07 os4 nova-conductor[14326]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: e28395f19cde4e4ca2698e93de02a63b reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0703596939857s from (pid=14326) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 013ddb3c758b4cbfaade984871fb2671 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:07 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 013ddb3c758b4cbfaade984871fb2671 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.0648256909917s from (pid=14327) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.compute.manager [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Start spawning the instance on the hypervisor. from (pid=14334) _build_and_run_instance /opt/stack/nova/nova/compute/manager.py:1921
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.block_device [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] block_device_list [] from (pid=14334) volume_in_mapping /opt/stack/nova/nova/block_device.py:585
Mar 29 16:16:07 os4 nova-compute[14334]: INFO nova.virt.libvirt.driver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Creating image
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "/opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk.info" acquired by "nova.virt.libvirt.imagebackend.write_to_disk_info_file" :: waited 0.000s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "/opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk.info" released by "nova.virt.libvirt.imagebackend.write_to_disk_info_file" :: held 0.001s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "ff5dedb42be7cda8422f529dde9a3096688be2c5" acquired by "nova.virt.libvirt.imagebackend.fetch_func_sync" :: waited 0.000s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/networks.json?tenant_id=70162ab521a5455faff0680e923dd091&shared=False used request id req-3e9bf7cf-1951-49b1-9754-69aba6b56634 from (pid=14334) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/networks.json?shared=True used request id req-92909c76-6784-4de4-9522-2b12c2e55215 from (pid=14334) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.policy [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Policy check for network:attach_external_network failed with credentials {'service_roles': [], 'user_id': u'444f517480154b5588b354e64c50c075', 'roles': [u'anotherrole', u'Member'], 'user_domain_id': None, 'service_project_id': None, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_domain_id': None, 'is_admin_project': True, 'is_admin': False, 'project_id': u'70162ab521a5455faff0680e923dd091', 'project_domain_id': None} from (pid=14334) authorize /opt/stack/nova/nova/policy.py:168
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.part from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.part" returned: 0 in 0.055s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG nova.virt.images [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] f69a1669-34b6-4a2a-9f98-6c4ae0a9d21a was qcow2, converting to raw from (pid=14334) fetch_to_raw /opt/stack/nova/nova/virt/images.py:146
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): qemu-img convert -O raw -f qcow2 /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.part /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.converted from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "qemu-img convert -O raw -f qcow2 /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.part /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.converted" returned: 0 in 0.118s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.converted from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5.converted" returned: 0 in 0.067s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "ff5dedb42be7cda8422f529dde9a3096688be2c5" released by "nova.virt.libvirt.imagebackend.fetch_func_sync" :: held 0.573s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf touch -c /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5 from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "sudo nova-rootwrap /etc/nova/rootwrap.conf touch -c /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5" returned: 0 in 0.145s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5 from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5" returned: 0 in 0.057s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "ff5dedb42be7cda8422f529dde9a3096688be2c5" acquired by "nova.virt.libvirt.imagebackend.copy_qcow2_image" :: waited 0.000s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
Mar 29 16:16:07 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5 from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5" returned: 0 in 0.050s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): qemu-img create -f qcow2 -o backing_file=/opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5 /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "qemu-img create -f qcow2 -o backing_file=/opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5 /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk" returned: 0 in 0.031s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG nova.virt.disk.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Checking if we can resize image /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk. size=10737418240 from (pid=14334) can_resize_image /opt/stack/nova/nova/virt/disk/api.py:214
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk" returned: 0 in 0.049s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): qemu-img resize /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk 10737418240 from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "qemu-img resize /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk 10737418240" returned: 0 in 0.027s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Lock "ff5dedb42be7cda8422f529dde9a3096688be2c5" released by "nova.virt.libvirt.imagebackend.copy_qcow2_image" :: held 0.164s from (pid=14334) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:282
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5 from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/_base/ff5dedb42be7cda8422f529dde9a3096688be2c5" returned: 0 in 0.049s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG nova.virt.disk.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Checking if we can resize image /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk. size=10737418240 from (pid=14334) can_resize_image /opt/stack/nova/nova/virt/disk/api.py:214
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Running cmd (subprocess): /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG oslo_concurrency.processutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CMD "/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk" returned: 0 in 0.052s from (pid=14334) execute /usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG nova.virt.disk.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Cannot resize image /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/disk to a smaller size. from (pid=14334) can_resize_image /opt/stack/nova/nova/virt/disk/api.py:220
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG nova.virt.libvirt.driver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Ensure instance console log exists: /opt/stack/data/nova/instances/a70f2d77-f140-4edf-bf38-8b07076be37a/console.log from (pid=14334) _ensure_console_log_for_instance /opt/stack/nova/nova/virt/libvirt/driver.py:2936
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] POST call to neutron for http://10.42.0.53:9696/v2.0/ports.json used request id req-3e2a6228-9e64-4a3b-b25c-e0f441738a4f from (pid=14334) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG nova.network.neutronv2.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Successfully created port: 0c051449-cc3a-4745-b0c7-ff65c445b798 from (pid=14334) _create_port_minimal /opt/stack/nova/nova/network/neutronv2/api.py:396
Mar 29 16:16:08 os4 nova-compute[14334]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] GET call to neutron for http://10.42.0.53:9696/v2.0/extensions.json used request id req-9bc517c3-5106-471e-a388-7379bf33a82a from (pid=14334) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:09 os4 nova-compute[14334]: DEBUG neutronclient.v2_0.client [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] PUT call to neutron for http://10.42.0.53:9696/v2.0/ports/0c051449-cc3a-4745-b0c7-ff65c445b798.json used request id req-06dff966-90bd-45dd-83f4-42f63b4d370a from (pid=14334) _append_request_id /usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:128
Mar 29 16:16:09 os4 nova-compute[14334]: DEBUG nova.network.neutronv2.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] Successfully updated port: 0c051449-cc3a-4745-b0c7-ff65c445b798 from (pid=14334) _update_port /opt/stack/nova/nova/network/neutronv2/api.py:433
Mar 29 16:16:09 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: 2adf28dab4814f31826e2ecbd43cf462 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:09 os4 nova-conductor[14326]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: 2adf28dab4814f31826e2ecbd43cf462 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2 time elapsed: 0.00981185701676s from (pid=14326) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:90
Mar 29 16:16:09 os4 nova-compute[14334]: DEBUG oslo_concurrency.lockutils [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] Acquired semaphore "refresh_cache-a70f2d77-f140-4edf-bf38-8b07076be37a" from (pid=14334) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:212
Mar 29 16:16:09 os4 nova-compute[14334]: DEBUG nova.network.neutronv2.api [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] [instance: a70f2d77-f140-4edf-bf38-8b07076be37a] _get_instance_nw_info() from (pid=14334) _get_instance_nw_info /opt/stack/nova/nova/network/neutronv2/api.py:1280
Mar 29 16:16:09 os4 nova-compute[14334]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] CALL msg_id: eea079f2759b4c7e8e37f5920b1be459 exchange 'nova' topic 'conductor' from (pid=14334) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:489
Mar 29 16:16:09 os4 nova-conductor[14327]: DEBUG oslo_messaging._drivers.amqpdriver [req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb demo demo] sending reply msg_id: eea079f2759b4c7e8e37f5920b1be459 reply queue: reply_2d509f4c3c904a8d83ad6a659f755bf2

That is the flow of creating a server across all the Nova processes.

That’s the kind of analysis most people add Elasticsearch to their environments to get. And while this clearly doesn’t replicate all the very cool things you can do with Elasticsearch, it really does give you a bunch of new tools as a developer.

Note: the enablement for everything above is still very much work in progress. The DevStack patches have landed, but are off by default. The oslo.log patch is in the proof of concept phase for review.

This idea of programs being able to send not just log messages, but other metadata around them, is really powerful. It means, for instance, you can always send the code function / line number where a message was emitted. It’s not clogging up the log message itself, but if you want it later you can go ask for it. Or even if it is in the log message, by extracting the critical metadata in advance you can give your consumers structured fields instead of relying on things like grok to parse the logs, which simplifies ingest by tools like Elasticsearch.
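As a sketch of what this looks like from the shell (the request id and CODE_* values here are made up for illustration, echoing the example above), util-linux’s logger can write arbitrary FIELD=value pairs straight into the journal, which journalctl can then match on:

```shell
# Hypothetical example: emit a journal entry carrying arbitrary
# metadata fields, then query the journal by one of those fields.
# The request id and CODE_* values are placeholders.
REQ="req-3e4fa169-80ec-4635-8a7e-b60c16eddbcb"

payload="MESSAGE=Action: 'create', calling method create
REQUEST_ID=${REQ}
CODE_FILE=nova/api/openstack/wsgi.py
CODE_LINE=621"

# logger --journald reads FIELD=value pairs from stdin; guard the
# calls so the sketch degrades gracefully on systems without journald
if command -v journalctl >/dev/null 2>&1; then
    printf '%s\n' "$payload" | logger --journald || true
    # pull back every message tagged with that request id
    journalctl "REQUEST_ID=${REQ}" --no-pager || true
fi
```

The journal stores every FIELD=value pair alongside the message, so the query side needs no parsing at all.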

There is a lot to explore here to provide a better developer and operator experience, which is only possible because we have systemd. I for one look forward to exploring the space.

OpenStack as Layers

Last week at LinuxCon I gave a presentation on DevStack, which gave me the proper excuse to turn an idea about OpenStack Layers that Dean Troyer floated a year ago into pictures (I highly recommend reading that post for background; I won’t justify every part of it again here). This abstraction has actually served us well as we think about projects coming into DevStack.

OpenStack_Layers

Some assumptions are made here about which services are essential as we build up the model.

Layer 1: Base Compute Infrastructure

We assume that compute infrastructure is the common starting point of the minimum functional OpenStack that people are deploying. The output of the last OpenStack User Survey shows that the top 3 deployed services, regardless of type of cloud (Dev/Test, POC, or Production), are Nova / Glance / Keystone. So I don’t think this is a huge stretch. There are definitely users that take other slices (like Swift only), but compute seems to be what the majority of people coming to OpenStack are focused on.

Basic compute infrastructure needs 3 services to get running: Nova, Glance, and Keystone. That will give you a stateless compute cloud, which is a starting point for many people getting into the space for the first time.

Layer 2: Extended Infrastructure

Once you have a basic bit of compute infrastructure in place, there are some quite common features that you really need in order to do more interesting work. These are basically enhancements on the Storage, Networking, or Compute aspects of OpenStack. Looking at the User Survey, these are all deployed by people, in various ways, at a pretty high rate.

This is the first place we see new projects integrating into OpenStack. Ironic extends the compute infrastructure to baremetal, and Designate adds a missing piece of the networking side with DNS management as part of your compute creation.

Hopefully nothing all that controversial here.

Layer 3: Optional Enhancements

Now we get a set of currently integrated services that integrate both north bound and south bound. Horizon integrates on the north bound APIs for all the services; it requires services further down in the layers (today it also integrates with pieces further up that are integrated). Ceilometer consumes south bound parts of OpenStack (notifications) and polls north bound interfaces.

From the user survey, Horizon is deployed a ton. Ceilometer, not nearly as much. Part of this is due to how long things have been integrated, but even if you do analysis like taking the Cinder / Neutron numbers and deleting all the Folsom deploys (Folsom being the first release where those projects were integrated), you still see a picture where Ceilometer is behind on adoption. Recent mailing list discussions hint at why, including some of the scaling issues, and a number of alternative communities in this space.

Let’s punt on Barbican, because honestly, it’s new since we came up with this map, and maybe it’s really a layer 2 service.

Layer 4: Consumption Services

I actually don’t like this name, but I failed to come up with something better. Layer 4 in Dean’s post was “Turtles all the way down”, which isn’t a great description either.

This is a set of things which consume other OpenStack services to create new services. Trove is the canonical example: it creates database-as-a-service by orchestrating Nova compute instances with MySQL installed in them.

The rest of the layer 4 services all fit the same pattern, even Heat. Heat really is about taking the rest of the components in OpenStack and building a super API for their creation. It also includes auto scaling functionality based on this. All of these integrated services need a guest agent to do a piece of their function, which means that when testing them in OpenStack we don’t get very far with the Cirros minimal guest that we use for Layer 3 and down.

But again, as we look at the user survey we can see deployment of all of these Layer 4 services is lighter again. And this is what you’d expect as you go up these layers. These are all useful services to a set of users, but they aren’t all useful to all users.

I’d argue that the confusion around Marconi’s place in the OpenStack ecosystem comes with the fact that by analogy it looks and feels like a Layer 4 service like Trove (where a starting point would be allocating computes), but is implemented like a Layer 2 one (straight up raw service expected to be deployed on bare metal out of band). And yet it’s not consumable as the Queue service for the other Layer 1 & 2 services.

Leaky Taxonomy

This is not the be-all and end-all way to look at OpenStack. However, this layered view of the world confuses people a lot less than the normal view we show them — the giant spider diagram (aka the mandatory architecture slide for all OpenStack presentations):

OpenStack_Spider_Diagram

This picture is in every deep dive on OpenStack, and scares the crap out of people who think they might want to deploy it. There is no starting point, there is no end point. How do you bite that off in a manageable chunk as the spider grows?

I had one person come up to me after my DevStack talk giving a big thank you. He’d seen a presentation on Cloudstack and OpenStack previously and OpenStack’s complexity from the outside so confused him that he’d run away from our community. Explaining this with the layer framing, and showing how you could experiment with this quickly with DevStack cleared away a ton of confusion and fear. And he’s going to go dive in now.

Tents and Ecosystems

Today the OpenStack Technical Committee is in charge of deciding the size of the “tent” that is OpenStack. The approach to date has been a big tent philosophy, where anything that’s related, and has a REST API, is free to apply to the TC for incubation.

But a big tent is often detrimental to the ecosystem. A new project’s first goal often seems to be getting incubated, to earn the gold star of TC legitimacy that they believe is required to build a successful project. But as we’ve seen recently, a TC star doesn’t guarantee success, and honestly, the constraints on being inside the tent are actually pretty high.

And then there is a language question, because OpenStack’s stance on everything being in Python is pretty clear. An ecosystem that only exists to spawn incubated projects, and incubated projects only being allowed to be in Python, basically means an ecosystem devoid of interesting software in other languages. That’s a situation that I don’t think any of us want.

So what if OpenStack were a smaller tent, and not all the layers that are in OpenStack today were part of the integrated release in the future? Projects could be considered a success based on their users and usage out of the ecosystem, and not whether they have a TC gold star. Stackforge wouldn’t have some stigma of “not cool enough”, it would be the normal place to exist as part of the OpenStack ecosystem.

Mesos is an interesting cloud community that functions like that today. Mesos has a small core framework, and a big ecosystem. The smaller core actually helps grow the ecosystem by not making the ecosystem 2nd class citizens.

I think that everyone who works on OpenStack itself, and all the ecosystem projects, wants this whole thing to be successful. We want a future with interoperable, stable, open source cloud fabric as a given. There are lots of thoughts on how we get there, and since no one has ever created a universal open source cloud fabric that lets users move freely between providers, public and private, it’s no surprise that as a community we haven’t figured everything out yet.

But here’s another idea into the pool, under the assumption that we are all smarter together with all the ideas on the table, than any of us are on our own.

Bash trick of the week – call stacks

For someone who used to be very vocal about hating shell scripting, I seem to be building more and more tools related to it every day. The latest is caller (from “man bash”):

caller [expr]
Returns the context of any active subroutine call (a shell function or a script executed with the . or source builtins). Without expr, caller displays the line number and source filename of the current subroutine call. If a non-negative integer is supplied as expr, caller displays the line number, subroutine name, and source file corresponding to that position in the current execution call stack. This extra information may be used, for example, to print a stack trace. The current frame is frame 0. The return value is 0 unless the shell is not executing a subroutine call or expr does not correspond to a valid position in the call stack.

This means that if your bash code makes heavy use of functions, you can get the call stack back out. This turns out to be really handy for things like writing testing scripts. I recently added some more unit testing to devstack-gate, and used this to make it easy to see what was going on:

# Utility function for tests
function assert_list_equal {
    local source=$(echo $1 | awk 'BEGIN{RS=",";} {print $1}' | sort -V | xargs echo)
    local target=$(echo $2 | awk 'BEGIN{RS=",";} {print $1}' | sort -V | xargs echo)
    if [[ "$target" != "$source" ]]; then
        echo -n $(caller 0 | awk '{print $2}')
        echo -e " - ERROR\n $target \n != $source"
        ERRORS=1
    else
        # simple backtrace progress detector
        echo -n $(caller 0 | awk '{print $2}')
        echo " - ok"
    fi
}

The output ends up looking like this:

ribos:~/code/openstack/devstack-gate(master)> ./test-features.sh 
test_full_master - ok
test_full_feature_ec - ok
test_neutron_master - ok
test_heat_slow_master - ok
test_grenade_new_master - ok
test_grenade_old_master - ok
test_full_havana - ok

I never thought I’d know this much bash, and I still think data structure manipulation in bash is craziness, but for imperative programming that’s largely a lot of command calls, it works pretty well.
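To make the caller mechanics concrete, here is a toy backtrace helper (not from devstack-gate, just a sketch) that walks the stack one frame at a time until caller runs out of frames:

```shell
# Toy example: walk the call stack with caller until it returns
# non-zero, printing "line function file" for each frame.
function backtrace {
    local frame=0
    while caller $frame; do
        frame=$((frame + 1))
    done
    return 0
}

function inner {
    backtrace
}

function outer {
    inner
}

outer
```

Each line of output names the function one frame up the stack, which is exactly the trick the assert helper above uses to report which test it was called from.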

Devstack Vagrant

Devstack is tooling for OpenStack to make it easy to bring up an OpenStack environment based on the latest git trees. It’s used extensively in upstream testing, and by many OpenStack developers to set up dev/test environments.

One of my personal challenges in working on Devstack was testing devstack itself. Relying on the upstream gate means we have a limited number of configurations, and when something goes wrong, iterating on a fix is hard. Even more importantly, the upstream gate is currently only a single node test environment.

A month ago I hacked out a new tool – devstack-vagrant (available on github).

DevstackVagrant

Devstack vagrant provides a customized Vagrant environment that will build a 2 node devstack cluster under VirtualBox. The basic model is 2 devstack nodes (a controller and a compute) that bridge through a physical interface on your host. The bridged interface is set as the default route in the nodes so that 2nd level guests created on top of this 2 node devstack can route to the outside world.

The nodes start and build from official Ubuntu 12.04 cloud images, and are customized using the puppet provisioning support in vagrant. There are a few config variables you need to set in a config.yaml, including hostnames, bridge interface, and the password hash you want your stack user to have. Basically enough to bootstrap the environment and then run devstack from upstream git.
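For the password hash piece, one way to produce a crypt-style hash (my assumption about the expected format; check the project README for the exact config.yaml key names) is via openssl:

```shell
# Generate an MD5-crypt password hash for the stack user, the kind
# a 12.04-era /etc/shadow understands. The salt and password here
# are placeholders; substitute your own.
stack_pass_hash=$(openssl passwd -1 -salt D3vStack secretpass)
echo "$stack_pass_hash"
```

The resulting `$1$...$...` string is what you would paste into config.yaml for the stack user.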

I added a bit of additional logic to the end of the normal devstack process that includes installing an Ubuntu 12.04 and Fedora 20 cloud image in your glance, injecting the ssh public key for the stack user into the keyserver, and opening up ping and ssh in all the security groups.

I still consider this an expert tool at this point, as in, if it breaks you get to keep all the pieces. However, this has been useful to me so far, and given the pull requests I got the other day, seemingly is useful for others as well. Patches definitely welcomed. And if it looks like more folks want to contribute I’ll happily move to stackforge.

One of the things I’d love to do is sort out a functioning libvirt backend for vagrant (there are 2, they are both a little wonky) because then the 2nd level guests could use nested KVM and not be ridiculously slow.

This tool has already proved very useful to me, so hopefully it will be useful to others as well.