Thursday, 2018-04-12

*** felipemonteiro_ has joined #openstack-nova00:00
mnaserhey00:04
mnaserfun bug time00:04
*** suresh12 has quit IRC00:04
*** suresh12 has joined #openstack-nova00:04
mnaserif a user builds an instance that is boot from volume (where nova creates the volume) and nova-compute fails to create the volume, the vm build fails, and it actually counts as a 'consecutive build failure'00:05
*** suresh12 has quit IRC00:05
mnaserso if i'm $bad_user and i try to launch 40 instances but keep hitting quota limits but keep trying again, i will effectively slowly but surely disable every compute node in the cloud00:05
*** suresh12 has joined #openstack-nova00:05
mnaserso while that feature is nice, i think we have to find a way to identify the type of failure it is00:06
mnaseri mean we could disable that feature but that feels dirty00:08
mriedemmnaser: yeah https://bugs.launchpad.net/nova/+bug/174210200:14
openstackLaunchpad bug 1742102 in OpenStack Compute (nova) "Simple user can disable compute" [Undecided,Confirmed] - Assigned to jichenjc (jichenjc)00:14
mriedemthere is an ops ML thread about similar issues00:14
mriedemi think we likely need to consider a whitelist of acceptable, non-threshold-inducing exceptions00:14
mriedemthe latest comments in there from jichen are actually the exact issue you're describing00:15
*** mriedem_afk has quit IRC00:16
mriedembut we can also be smarter and check quota before trying to create volumes00:17
mriedemlike we do for ports00:17
mnasermriedem: yeah, i think that's better, do we check quota at api layer for ports?00:17
mriedemcould do that in conductor so we don't block the API response00:17
mriedemmnaser: yes00:17
mriedemit's the validate_networks call00:17
mnaserthat sounds reasonable and backport-able too i think00:17
mriedembased on the number of requested networks and instances, we check quota00:17
mnaserthe thing is if i have a quota of 10 volumes, and i launch a 100 instances (one by one), i might still run into that issue i guess00:18
mriedemyeah - either way you'd fail after the 202 you get from the API00:18
mriedemyou either fail in conductor or you fail in compute00:18
mriedemsure but we don't disable your compute :)00:18
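A minimal sketch of the conductor-side pre-flight check being discussed, done against Cinder's limits API with python-cinderclient. The helper name, the wiring and the exact absolute-limit key names are assumptions for illustration, not existing nova code, and as the next messages point out it only narrows the race rather than closing it.

```python
from cinderclient import client as cinder_client


def enough_volume_quota(session, num_new_volumes):
    """Advisory check that the project can still create num_new_volumes.

    Hypothetical helper: another request can still eat the quota between
    this check and the actual volume creation (the race discussed below),
    so compute-side failure handling is still needed.
    """
    cinder = cinder_client.Client('3', session=session)
    # Key names assumed from Cinder's absolute limits API.
    limits = {limit.name: limit.value
              for limit in cinder.limits.get().absolute}
    max_volumes = limits.get('maxTotalVolumes', -1)  # -1 means unlimited
    used_volumes = limits.get('totalVolumesUsed', 0)
    if max_volumes < 0:
        return True
    return used_volumes + num_new_volumes <= max_volumes
```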
*** fragatina has quit IRC00:18
mnaserbecause of the race: conductor still sees you using 0 volumes, but then the computes start creating volumes00:18
mriedemoh, sure00:19
mriedemwe could also just handle volume quota issues in the compute as a whitelist of things to not disable the compute00:19
mriedemmulti-part fixes00:19
mnaseri think both out-of-ports and out-of-volumes seem like reasonable 'skip' failures00:19
mriedemyeah00:20
mriedemthe bug above started as a port / fixed ips quota issue00:20
mriedembut also extends to volumes00:20
mnaseri think any api exception should be skipped tbh00:20
mnaserbecause that means the compute node is fine00:20
*** sdague has quit IRC00:20
mnasersure it sounds vague but if your cinder is not having a good time then you're slowly disabling all compute nodes00:20
mriedemwell, depends on what it is,00:20
*** suresh12 has quit IRC00:21
mriedemif nova.conf is misconfigured on the compute to talk to neutron, that's a different problem00:21
mriedemlikely a 40300:21
mnaserah yes00:21
*** suresh12 has joined #openstack-nova00:21
mriedemif we get NeutronClientException, then we made a request and it failed but the endpoint, config and token should be ok00:21
mriedemsame with CinderClientException00:21
mnaserso we can skip those 2 exceptions because if we get them, it doesn't mean that the compute node has any problems00:22
*** r-daneel has quit IRC00:22
*** mdbooth has quit IRC00:22
mriedemwell, if it did turn out that excluding those were too broad, we could narrow it down over time00:22
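A rough sketch of the whitelist idea from the last few messages: treat failures that clearly come from the external service (quota, bad request) as ones that should not count toward nova's consecutive-build-failure threshold. The helper and its wiring into the compute manager are hypothetical; only the two client exception base classes named above are real library classes.

```python
from cinderclient import exceptions as cinder_exc
from neutronclient.common import exceptions as neutron_exc

# Failures raised by the external APIs: the request was made and rejected,
# which says nothing about the health of this compute host.
SKIP_BUILD_FAILURE_EXC = (
    cinder_exc.ClientException,          # e.g. over quota creating a volume
    neutron_exc.NeutronClientException,  # e.g. over quota creating a port
)


def counts_toward_build_failures(exc):
    """Return True if a failed build should bump the compute's failure count."""
    return not isinstance(exc, SKIP_BUILD_FAILURE_EXC)
```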
mriedemvif plug is something that could fail...00:23
mriedembased on bad config of the host00:23
mriedemthat's a hard one though00:23
mriedembecause we don't get vif plug failures directly00:23
*** sree has joined #openstack-nova00:23
mriedemwe get a callback event from neutron that just says plug failed but not why00:23
mriedembut yeah, that's not a NeutronClientException, so ignore me00:24
mnaseri can't imagine a single time that i remember seeing a failed to plug callback00:24
mnaserit either never comes or comes ok :P00:24
mnaserhaha00:24
mnaserhttps://github.com/openstack/python-cinderclient/blob/master/cinderclient/exceptions.py (ClientException) and https://github.com/openstack/python-neutronclient/blob/master/neutronclient/common/exceptions.py (NeutronClientException)00:27
mnaserboth seem to be fairly reasonable in terms of things they handle00:27
mnaserthing is NoAuthURLProvided/EndpointNotFound/EndpointTypeNotFound/AmbiguousEndpoints/ConnectionFailed and a bunch of other stuff00:28
mnaserare also under neutronclientexception00:28
*** sree has quit IRC00:28
mriedemwe won't hit those anymore,00:32
mriedembecause we constructor neutronclient with a ksa session00:32
mriedemso if we fail auth stuff, it will be early with ksa00:32
mriedem*construct00:32
mriedemthat's true in at least queens, i'm not sure about pike...00:33
mnaserso that affects the backportable-itiy00:35
mnasernot that i want to stay on pike for much longer00:35
mriedemmaybe, would have to dig into it00:36
mriedemtrying to fix bug 1679750 atm00:36
openstackbug 1679750 in OpenStack Compute (nova) queens "Allocations are not cleaned up in placement for instance 'local delete' case" [Medium,Confirmed] https://launchpad.net/bugs/167975000:36
mnasero00:37
mnaserthat's a bad time00:37
arvindn05mriedem: quick question, i think you answered it once but wanted to reconfirm. "A rebuild is staying on the same host, with optionally a new image, but the flavor stays the same."00:48
arvindn05https://developer.openstack.org/api-guide/compute/server_concepts.html the documentation does not specify that it stays on the same host...just wanted to confirm if rebuild ALWAYS means staying on same host00:49
*** odyssey4me has quit IRC00:51
*** odyssey4me has joined #openstack-nova00:51
*** yamamoto has quit IRC00:52
mriedemyes it's always the same host00:53
*** zhaochao has joined #openstack-nova00:54
mriedemhttps://developer.openstack.org/api-guide/compute/server_concepts.html#recover-from-a-failed-compute-host could probably be tightened up00:54
*** caisan has joined #openstack-nova00:54
mriedemwhere it says, "Evacuate does the same operation as a rebuild. " - that could say, "Evacuate does the same operation as a rebuild except evacuate is on a new host while rebuild is on the same host."00:55
mriedemsomething like that00:55
mriedemand you can't evacuate with a new image00:55
mriedemarvindn05: when you start talking about evacuate and rebuild, you must read http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all/00:56
arvindn05thx for the hint :)00:56
*** hoangcx has joined #openstack-nova00:57
arvindn05where is the same host enforced?00:57
arvindn05in code i mean...all that the manager checks is if the hints have the check_type set to rebuild00:58
*** mdbooth has joined #openstack-nova00:58
arvindn05as per scheduler.utils.request_is_rebuild00:59
*** salv-orlando has joined #openstack-nova00:59
arvindn05thinking out loud, why would select destination be called in the first place if we don't need to select a host? is it just for some validation?01:02
*** phuongnh has joined #openstack-nova01:02
*** salv-orlando has quit IRC01:03
openstackgerritNaichuan Sun proposed openstack/nova master: xenapi: Use XAPI pool instead of aggregate pool for shared SR migration  https://review.openstack.org/55415401:09
arvindn05nvm...we want to run image related filters so thats why we want to call select destination...01:13
*** tiendc has joined #openstack-nova01:22
*** namnh has joined #openstack-nova01:25
*** yamahata has quit IRC01:25
Kevin_Zhengmriedem as for your comments in https://review.openstack.org/#/c/560296/01:26
Kevin_ZhengI thought that making the order constant in all files makes it easier to read and compare....01:27
mriedemKevin_Zheng: i wouldn't mix that into the deduplicate payload change01:30
mriedemif you want to sort the keys globally for those samples, i'd do that once all of the dedup stuff is done01:30
Kevin_ZhengOK, then I will undo them01:30
mriedemarvindn05: for rebuild we set a destination on the request_spec which forces the scheduler to only consider the host that the instance is already on01:31
mriedemarvindn05: here https://github.com/openstack/nova/blob/master/nova/compute/api.py#L315801:32
*** mdbooth has quit IRC01:33
arvindn05mriedem: thank you. starting to understand more of the workflow01:34
mriedemarvindn05: np, it only took me about 5 years to understand how rebuild works01:35
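A simplified sketch of the mechanism mriedem links above; the names mirror nova's request spec objects, but this is an illustration of the idea rather than the literal code: rebuild pins the request spec's destination to the instance's current host, so the scheduler can only return that host while still running its filters.

```python
from nova import objects


def pin_rebuild_to_current_host(request_spec, instance):
    # Force the scheduler to only consider the instance's current host/node;
    # the filters still run (e.g. to validate a new image against that host).
    request_spec.requested_destination = objects.Destination(
        host=instance.host, node=instance.node)
```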
*** ssurana has quit IRC01:37
arvindn05mriedem: lol...i am shifting languages as well so its harder to find workflows01:37
*** _ix has joined #openstack-nova01:38
openstackgerritMatt Riedemann proposed openstack/nova master: Cleanup RP and HM records while deleting a compute service.  https://review.openstack.org/55492001:38
openstackgerritMatt Riedemann proposed openstack/nova master: Delete allocations from API if nova-compute is down  https://review.openstack.org/56070601:38
arvindn05mriedem: i saw def _reset_image_metadata and did not read any code after since in java world seeing a function definition meant i was done with the original function01:38
mriedeminner method01:39
mriedemyou can have inline functions in java01:39
arvindn05inline functions? you mean lambda's?01:40
mriedemno,01:40
mriedemmaybe i'm thinking of inline class impls, like inline interfaces01:40
mriedemidk, it's been 7 years since i've written java01:40
arvindn05yea..anonymous classes you can define them01:41
*** _ix_ has joined #openstack-nova01:43
arvindn05but cannot define a method within - quora speaks https://www.quora.com/Can-we-write-a-method-inside-a-method-in-Java01:43
*** _ix has quit IRC01:44
arvindn05anyway...i learn something new everyday...weird pattern to have methods within methods....haven't seen it a lot in openstack code either01:45
arvindn05will keep my eye out from now on01:45
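For anyone else tripping over the same thing, a toy example of the Python nested-function pattern being discussed; the names are made up and only loosely echo the code above.

```python
def rebuild_instance(instance, image_meta):
    """Toy outer function with a local helper, nothing nova-specific."""

    def _reset_image_metadata():
        # Only visible inside rebuild_instance; the Java equivalent would be
        # a private method or an anonymous class, not a nested method.
        instance['image_ref'] = image_meta['id']

    _reset_image_metadata()
    return instance
```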
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 1  https://review.openstack.org/56029602:16
*** edmondsw has joined #openstack-nova02:19
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 1  https://review.openstack.org/56029602:22
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 2  https://review.openstack.org/56029902:31
Spaz-WorkMorning Nova02:55
*** suresh12 has quit IRC02:56
openstackgerritArvind Nadendla proposed openstack/nova-specs master: Handle rebuild of instance with new image  https://review.openstack.org/56071802:56
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 3  https://review.openstack.org/56072703:21
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 4  https://review.openstack.org/56073103:31
*** gyee has quit IRC03:33
*** armaan has quit IRC03:34
*** armaan has joined #openstack-nova03:35
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 4  https://review.openstack.org/56073403:37
*** armaan has quit IRC03:38
*** armaan has joined #openstack-nova03:39
*** psachin has joined #openstack-nova03:40
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 6  https://review.openstack.org/56073703:45
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 7  https://review.openstack.org/56074804:04
*** salv-orlando has quit IRC04:05
*** edmondsw has joined #openstack-nova04:07
*** caisan has quit IRC04:08
*** gouthamr has quit IRC04:10
*** edmondsw has quit IRC04:12
*** fragatin_ has quit IRC04:15
*** dustfalling has left #openstack-nova04:18
*** takashin has quit IRC04:22
*** markvoelker has joined #openstack-nova04:23
*** moshele has joined #openstack-nova04:25
*** claudiub has joined #openstack-nova04:26
*** liuzz__ has quit IRC04:26
*** liuzz has joined #openstack-nova04:26
*** markvoelker has quit IRC04:27
*** abhishekk has joined #openstack-nova04:30
*** jichen has joined #openstack-nova04:34
*** markvoelker has joined #openstack-nova04:38
*** hongbin has quit IRC04:38
*** suresh12 has joined #openstack-nova04:39
openstackgerritMerged openstack/nova master: Fix race fail in test_resize_with_reschedule_then_live_migrate  https://review.openstack.org/56045404:40
openstackgerritjichenjc proposed openstack/nova master: remove ec2 in service and cmd  https://review.openstack.org/55677805:11
openstackgerritjichenjc proposed openstack/nova master: remove ec2 object definitions  https://review.openstack.org/55715005:11
openstackgerritjichenjc proposed openstack/nova master: remove ec2 db functions  https://review.openstack.org/55757205:11
openstackgerritjichenjc proposed openstack/nova master: Avoid live migrate to same host  https://review.openstack.org/54268906:20
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 1  https://review.openstack.org/56029606:30
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 2  https://review.openstack.org/56029906:41
*** salv-orlando has joined #openstack-nova06:42
openstackgerritNaichuan Sun proposed openstack/nova master: xenapi: Use XAPI pool instead of aggregate pool for shared SR migration  https://review.openstack.org/55415406:42
openstackgerritNaichuan Sun proposed openstack/nova master: xenapi: Use XAPI pool instead of aggregate pool for shared SR migration  https://review.openstack.org/55415406:54
openstackgerritNaichuan Sun proposed openstack/nova master: xenapi: Use XAPI pool instead of aggregate pool for shared SR migration  https://review.openstack.org/55415407:08
*** armaan has joined #openstack-nova07:08
*** slaweq has joined #openstack-nova07:08
openstackgerritjichenjc proposed openstack/nova master: Move update_task_state out of try/except  https://review.openstack.org/55715207:08
*** sidx64 has quit IRC07:14
*** _ix has joined #openstack-nova07:14
openstackgerritNaichuan Sun proposed openstack/nova master: xenapi(N-R-P): Add API to support compute node resource provider update and create  https://review.openstack.org/52104107:14
*** markvoelker has joined #openstack-nova07:15
*** ralonsoh has joined #openstack-nova07:15
*** sidx64 has joined #openstack-nova07:15
*** mgoddard has joined #openstack-nova07:20
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 3  https://review.openstack.org/56072707:24
*** tiendc has quit IRC07:25
openstackgerritNaichuan Sun proposed openstack/nova master: xenapi(N-R-P): Add API to support compute node resource provider update and create  https://review.openstack.org/52104107:25
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 4  https://review.openstack.org/56073107:38
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 4  https://review.openstack.org/56073107:55
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 4  https://review.openstack.org/56073107:59
openstackgerritLee Yarwood proposed openstack/nova stable/pike: libvirt: Block swap volume attempts with encrypted volumes prior to Queens  https://review.openstack.org/54356908:09
* lyarwood -> offline until tomorrow \o_08:10
*** masayukig has quit IRC08:12
*** pcaruana has joined #openstack-nova08:12
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in test_serversV21.py (1)  https://review.openstack.org/56082108:12
*** masayukig_ has joined #openstack-nova08:13
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in test_serversV21.py (1)  https://review.openstack.org/56082108:14
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 4  https://review.openstack.org/56073108:32
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in test_serversV21.py (2)  https://review.openstack.org/56082908:53
*** armaan_ has joined #openstack-nova08:54
*** ttsiouts has quit IRC08:54
*** ttsiouts has joined #openstack-nova08:55
*** armaan has quit IRC08:57
openstackgerritYikun Jiang (Kero) proposed openstack/nova master: WIP: complex policy  https://review.openstack.org/55377608:58
openstackgerritYikun Jiang (Kero) proposed openstack/nova master: Add rules column to instance_group_policy table.  https://review.openstack.org/56083208:58
*** _ix has quit IRC09:00
*** sdague has joined #openstack-nova09:01
*** dpawlik has quit IRC09:02
openstackgerritjichenjc proposed openstack/nova master: remove ec2 object definitions  https://review.openstack.org/55715009:03
openstackgerritjichenjc proposed openstack/nova master: remove ec2 db functions  https://review.openstack.org/55757209:03
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 5  https://review.openstack.org/56073409:33
phuongnhhi jaypipes, I would like to add one more line in file https://review.openstack.org/#/c/462759/3/os_traits/hw/cpu/x86.py for CPU_FPGA09:33
phuongnhthe link to this is: https://en.wikipedia.org/wiki/Field-programmable_gate_array09:34
phuongnhDo I need to propose a blueprint in launchpad?09:34
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 6  https://review.openstack.org/56073709:50
openstackgerritMerged openstack/nova master: Remove mox in test_virt_drivers.py  https://review.openstack.org/55987811:04
*** elmaciej_ has quit IRC11:06
*** gongysh has quit IRC11:06
*** pchavva has quit IRC11:07
*** yamamoto has joined #openstack-nova11:08
*** elmaciej_ has joined #openstack-nova11:08
*** elmaciej has quit IRC11:11
*** yamamoto has quit IRC11:11
*** yamamoto has joined #openstack-nova11:11
Shilpastephenfin: Hi11:12
openstackgerritMerged openstack/nova master: Remove mox in unit/virt/xenapi/test_vm_utils.py (1)  https://review.openstack.org/55870411:20
stephenfinShilpa: o/11:26
*** elmaciej_ has quit IRC11:27
Shilpastephenfin: i have added patch set https://review.openstack.org/#/c/550172/2, and added comment there, can you please go through the same.11:28
Shilpastephenfin: its ok to proceed with noVNC v1.0.0 tag, i have seen issue reported here at https://github.com/novnc/noVNC/issues/103411:29
*** nicolasbock has joined #openstack-nova11:30
stephenfinShilpa: It looks like the CI is failing though?11:31
stephenfinwith a valid failure, no less11:31
*** lucasagomes is now known as lucas-hungry11:31
Shilpastephenfin: i have noted that, but they were failing on earlier patch set too, so just ignored, will check logs11:32
openstackgerritZhenyu Zheng proposed openstack/nova-specs master: Amend allow abort live migrations in queued status spec  https://review.openstack.org/56087211:46
bhagyashrisalex_xu: Hi, I just want to know about the spec https://docs.openstack.org/oslo.config/latest/reference/mutable.html which basically gives the provision to make a conf parameter mutable11:58
Shilpastephenfin: checked logs http://logs.openstack.org/72/550172/2/check/tempest-full/f6945b6/job-output.txt.gz, and observed that 1 test case is failed, and that is tempest.api.compute.servers.test_novnc.NoVNCConsoleTestJSON.test_novnc[id-c640fdff-8ab4-45a4-a5d8-7e6146cbd0dc]11:59
alex_xubhagyashris: maybe gcb is good person, but looks like he isn't in this channel11:59
bhagyashrisalex_xu: there is one patch submitted in nova https://review.openstack.org/#/c/319203/3 so what are the criteria to make a conf parameter mutable in any project11:59
*** elmaciej has joined #openstack-nova12:00
alex_xubhagyashris: ok12:00
bhagyashrisalex_xu: from the operators' point of view every parameter should be mutable, but why are only these two parameters made mutable12:01
openstackgerritMatthew Booth proposed openstack/nova master: Give volume DriverBlockDevice classes a common prefix  https://review.openstack.org/52634612:02
openstackgerritMatthew Booth proposed openstack/nova master: Add DriverLocalImageBlockDevice  https://review.openstack.org/52634712:02
openstackgerritMatthew Booth proposed openstack/nova master: Expose driver_block_device fields consistently  https://review.openstack.org/52836212:02
openstackgerritMatthew Booth proposed openstack/nova master: Add local_root to block_device_info  https://review.openstack.org/52902912:02
openstackgerritMatthew Booth proposed openstack/nova master: Pass DriverBlockDevice to driver.attach_volume  https://review.openstack.org/52836312:02
openstackgerritMatthew Booth proposed openstack/nova master: Fix libvirt volume tests passing invalid disk_info  https://review.openstack.org/52932812:02
*** ralonsoh_ has joined #openstack-nova12:02
bhagyashrisalex_xu: in this patch https://review.openstack.org/#/c/319203/312:03
*** elmaciej_ has quit IRC12:03
*** ratailor has quit IRC12:04
*** moshele has joined #openstack-nova12:04
alex_xubhagyashris: I guess that BP only targeted something people needed at that point12:05
*** udesale__ has quit IRC12:05
*** ralonsoh has quit IRC12:06
openstackgerritZhenyu Zheng proposed openstack/nova master: Deduplicate notification samples Rocky - 7  https://review.openstack.org/56074812:07
alex_xubhagyashris: I'm not sure it works for all the config options, but I also didn't see any rejection of options other than the ones in that BP12:07
*** suresh12 has joined #openstack-nova12:10
bhagyashrisalex_xu: Actually I have a few queries: in nova only those two parameters are marked as mutable=True in (https://review.openstack.org/#/c/319203/3), so does that mean all the other conf parameters are non-mutable?12:10
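For context on the mutable-config discussion, a small illustrative snippet of how an option is registered as mutable with oslo.config. The option names are borrowed from the linked nova patch but the registration here is just an example; only options flagged mutable=True get re-read when the service mutates its config (e.g. on SIGHUP).

```python
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.IntOpt('live_migration_completion_timeout',
               default=800,
               mutable=True,   # re-read when the config is mutated
               help='Example of an option flagged as mutable'),
    cfg.IntOpt('reserved_host_disk_mb',
               default=0,      # no mutable=True: needs a service restart
               help='Example of a non-mutable option'),
])

# A service wired for mutable config reloads the mutable options with:
# CONF.mutate_config_files()
```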
*** dougshelley66 has joined #openstack-nova12:11
openstackgerritpippo proposed openstack/nova master: Update eventlet  https://review.openstack.org/56087612:11
efriedGood UGT morning bhagyashris12:32
*** dave-mccowan has joined #openstack-nova12:33
efriedbhagyashris: Are you going to take over https://review.openstack.org/560444 and https://review.openstack.org/560459 ?12:33
* efried waves shiny co-author credit12:33
*** yangyapeng has quit IRC12:34
bhagyashrisefried: Good morning :)12:34
bhagyashrisefried: ok i will take.12:42
efriedbhagyashris: Great.  Can you put your name down on the etherpad https://etherpad.openstack.org/p/rocky-nova-priorities-tracking see L11512:43
efriedjaypipes: FYI ^12:43
*** yangyapeng has quit IRC12:44
*** yangyapeng has joined #openstack-nova12:44
bauzasstephenfin: quick question, if I'm providing an extra spec saying --property hw:numa_nodes=2, does it mean that the instance will use 2 physical NUMA nodes, or just that the instance XML will have 2 nodes ?12:47
*** yamamoto has joined #openstack-nova12:47
bauzasif the latter, do we support not sharding across the NUMA nodes ?12:48
efriedsean-k-mooney, cfriesen: ^12:48
bauzasstephenfin: context is https://review.openstack.org/#/c/552924/7/specs/rocky/approved/numa-topology-with-rps.rst@18512:48
*** yangyapeng has quit IRC12:49
*** yamamoto has quit IRC12:49
*** dklyle has quit IRC12:51
bauzasefried: as you can see, I wonder if we would still need a specific sharding param12:52
bauzasefried: because when we discussed yesterday, I was only thinking of vGPUs affinited to CPUs, not about vCPUs being anti-affinited by NUMA nodes12:52
bauzasso the latter would need something missing now12:53
bauzasand then, in that case, that would be in my spec :12:53
efriedbauzas: Yup; and I totally can't answer that question (whether we need explicit anti-affinity/sharding)12:57
bauzasefried: when looking at the doc, see "  FLAVOR-NODES: (integer) The number of host NUMA nodes to restrict execution of instance vCPU threads to. If not specified, the vCPU threads can run on any number of the host NUMA nodes available."12:57
efriedBut your spec definitely must.12:57
bauzashttps://docs.openstack.org/nova/latest/user/flavors.html#extra-specs-numa-topology12:57
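As a concrete illustration of the extra specs in the doc bauzas links, here is what a two-NUMA-node flavor might look like when set up with python-novaclient; the flavor name and sizes are invented and `sess` is assumed to be an authenticated keystoneauth1 session.

```python
from novaclient import client as nova_client

# 'sess' is assumed to be an authenticated keystoneauth1 session.
nova = nova_client.Client('2.1', session=sess)

flavor = nova.flavors.create('numa.example', ram=4096, vcpus=4, disk=20)
flavor.set_keys({
    'hw:numa_nodes': '2',      # guest gets two NUMA nodes
    'hw:numa_cpus.0': '0,1',   # vCPUs 0 and 1 on guest node 0
    'hw:numa_cpus.1': '2,3',   # vCPUs 2 and 3 on guest node 1
    'hw:numa_mem.0': '2048',   # MB of memory on guest node 0
    'hw:numa_mem.1': '2048',   # MB of memory on guest node 1
})
```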
*** dklyle has joined #openstack-nova12:57
efriedbauzas: Oh, that's interesting.  I read that as a *maximum*.12:58
efriedIn which case, we're covered.12:58
efriedsort of12:58
efriedWe need to split into N numbered request groups12:58
*** pchavva has joined #openstack-nova12:59
efriedThe only issue is that we can't specify that in a flexible way - it would have to be specific numbers of procs in each group.12:59
efriedand the only reasonable way to do that is to divide them evenly.12:59
efriedThat still means we get *at most* N separate NUMA nodes, but because granular doesn't guarantee separation, we could get anywhere from 1..N13:00
bauzasthat said, in https://docs.openstack.org/nova/latest/admin/cpu-topologies.html13:00
*** sree has joined #openstack-nova13:00
bauzasit says " Inadequate per-node resources will result in scheduling failures. Resources that are specific to a node include not only CPUs and memory, but also PCI and SR-IOV resources. It is not possible to use multiple resources from different nodes without requesting a multi-node layout. As such, it may be necessary to ensure PCI or SR-IOV resources are associated with the same NUMA node or force a multi-node layout."13:00
*** dpawlik has joined #openstack-nova13:00
efriedThat's fair.13:01
*** edmondsw has joined #openstack-nova13:01
bauzaswhat I understand from the above is that if I'm asking for 2 nodes but then only have 1 node, then NoValidHost13:01
*** yamamoto has joined #openstack-nova13:01
bauzasthat's honestly confusing13:01
efriedno, that's not how I read it.13:01
bauzasalso given https://bugs.launchpad.net/nova/+bug/1466780 and " There is no correlation required between the NUMA topology exposed in the instance and how the instance is actually pinned on the host. This is by design. See this invalid bug for more information."13:01
openstackLaunchpad bug 1466780 in OpenStack Compute (nova) "nova libvirt pinning not reflected in VirtCPUTopology" [Undecided,Invalid] - Assigned to Stephen Finucane (stephenfinucane)13:01
*** yamamoto has quit IRC13:01
bauzasbut I take that only for CPU pinning13:02
efriedI read it to mean, "if you're restricting to one NUMA node, but no single node has all the resources available, NoValidHost"  Which is totally legit.13:02
bauzaslet's wait for the others, then13:02
kholkinathe spec for user-data update need your review https://review.openstack.org/#/c/547964/13:02
efriedBut it brings us to a different issue, which is whether your spec claims to handle device affinity as well as proc/mem13:02
bauzasefried: that said, if we need to shard, what kind of query param are you thinking ?13:03
*** lyan has joined #openstack-nova13:03
*** lyan is now known as Guest3093613:03
openstackgerritsahid proposed openstack/nova master: libvirt: move version to string in utils  https://review.openstack.org/56045513:03
openstackgerritsahid proposed openstack/nova master: libvirt: refactor get_base_config to accept host arg  https://review.openstack.org/56045613:03
openstackgerritsahid proposed openstack/nova master: libvirt: add support for virtio-net rx/tx queue sizes  https://review.openstack.org/48499713:03
bauzasthat's the problem with numbered request groups13:03
efriedbauzas: For explicitly separating request groups to separate resource providers?  I was thinking something like ?separate_providers=resources1,resources2,...13:03
efried...if placement is where we want to handle it.13:03
*** ralonsoh__ has joined #openstack-nova13:04
efriedbauzas: But the other option is to handle it in the NUMATopologyFilter.  Placement would give us back all the candidates, which would include the ones that are sharded and the ones that are combined.  And the filter would just pick the ones that are appropriately sharded.13:04
bauzasso, ?separate_providers=resources1,resources2&resources1:VCPU=1&resources2:VCPU=1 ?13:04
efriedbauzas: yes13:04
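To make the shape of that request concrete: a hypothetical GET /allocation_candidates query with two numbered request groups. The separate_providers parameter floated above does not exist; the group_policy knob shown here is borrowed from the in-progress granular resource requests work and is an assumption, not something nova sends today.

```python
# Two numbered groups, each wanting its own provider (e.g. its own NUMA node
# modelled as a child resource provider).
params = {
    'resources1': 'VCPU:2,MEMORY_MB:2048',
    'resources2': 'VCPU:2,MEMORY_MB:2048',
    'group_policy': 'isolate',  # assumed: ask placement not to collapse groups
}
# i.e. GET /allocation_candidates?resources1=VCPU:2,MEMORY_MB:2048&
#          resources2=VCPU:2,MEMORY_MB:2048&group_policy=isolate
```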
*** yamamoto has joined #openstack-nova13:05
*** lucas-hungry is now known as lucasagomes13:05
efriedImplementing that will be a bear, but yes.13:05
bauzasif placement folks are accepting that, then I'd prefer to do that by Placement instead of the filter13:05
bauzasbecause the less we have in the filter, the better it will be13:05
efriedI don't see it making Rocky, tbh13:05
bauzaslonger term of course13:05
bauzasefried: yeah, I know13:05
*** ralonsoh_ has quit IRC13:05
bauzasefried: what I'd love is some consensus on that spec for Rocky13:06
efriedBut yes, eventually we recognize we're going to need that functionality in placement.13:06
efriedbauzas: which spec?13:06
bauzasbut then, starting the work early, Rocky-3 if I'm lucky, so we can land Stein-113:06
bauzasefried: the NUMA one13:06
efriedbauzas: Sure, agreed; but we need to crisp up what exactly you're trying to address, and how.13:06
efriedThat's not yet clear IMO13:07
bauzasefried: for Rocky, I'm only planning to implement nested RPs for vGPUs and fix the vGPU caveats13:07
bauzasplus that spec13:07
bauzasbut implementing that spec for Rocky-3 or later13:07
*** jchhatbar is now known as janki13:07
bauzasunless someone picks the ball13:07
*** pchavva has quit IRC13:07
efriedOkay, gotcha.13:07
efriedSo you're not thinking to land the code related to the NUMA spec until "later".  But you want the spec baked by Rocky-313:08
*** yamamoto has quit IRC13:09
*** dklyle has quit IRC13:09
stephenfinbauzas: The latter13:10
efriedjohnthetubaguy_: You around?  I approved https://review.openstack.org/#/c/553605/ on the basis that your -1 has now been addressed, but there's still time to pull it out if you disagree.13:10
stephenfinbauzas: typically those guest nodes would be scheduled to different NUMA nodes, but if that's not possible they'll be squeezed onto the same one13:10
bauzasefried: that's correct13:11
stephenfinbauzas: At least that's the case to the best of my recollection13:11
*** amodi has joined #openstack-nova13:11
efriedahjeez. "anti-affinity preferred" is a use case we haven't even thought about yet.13:11
bauzasefried: trying to get a consensus for Rocky-1 or Rocky-213:11
bauzasefried: while I'm implementing other things13:11
efriedbauzas: roger that13:11
bauzasefried: but once I'm done with the other things, going back to NUMA for implementing13:11
bauzashonestly, if we don't need new Placement version, I think it can be done very quickly13:12
bauzasefried: ^13:12
efriedyes13:12
bauzasefried: it's just about translating specs into Placement queries13:12
efriedyes13:12
bauzasso, at least the NUMAFilter would still check the hosts, but it would only check the accepted ones13:12
efriedAnd I actually like the idea of translating the existing numa-related flavor specs into placement queries, rather than asking folks to rewrite their flavors with placement-y syntax.13:13
*** mdbooth has joined #openstack-nova13:13
bauzasand given the default flag value for NUMA will be None, nothing should change actually13:13
*** ssurana has joined #openstack-nova13:13
bauzasunless someone wants to test that magic bullet to restrict hosts passed to the scheduler13:13
efriedBecause that way we can make placement queries that would be... unreasonable for humans to come up with.13:13
bauzasit's more about an upgrade question for me13:14
bauzasefried: modifying flavors could be a problem for upgrading13:14
*** ssurana has quit IRC13:14
*** ssurana has joined #openstack-nova13:14
bauzasif we support the existing flavors, that's better13:14
efried++13:14
bauzasstephenfin: so, to clarify, you mean that we don't explicitly shard between NUMA nodes ?13:15
stephenfinbauzas: We do but it's best effort13:15
bauzasstephenfin: tbc, if you're asking for hw:numa_nodes=2 but you only have one NUMA node (or even a UMA topology), then we accept the host, right?13:16
stephenfinbauzas: yup13:16
bauzasperfect, efried ^13:16
efriedCool beans.  One down, three to go.13:17
bauzasstephenfin: so, it's more about NUMA "affinity" of multiple resources, rather than NUMA 'anti-affinity' of different CPUs13:17
bauzasstill right?13:17
bauzasif so, numbered request groups is the perfect expression13:17
stephenfinIt's not really anything to do with NUMA affinity, tbh. It's purely to do with the guest topology13:17
bauzasperfect13:17
bauzasit doesn't guarantee you'll land all your CPUs on specific NUMA nodes13:18
stephenfinThe fact that we don't split a guest NUMA node across a host NUMA node is a performance improvement but not essential by any means13:18
bauzasyou could have all your CPUs on the same13:18
stephenfincorrect13:18
bauzasexcellent, good news13:18
bauzasand tbh, I understand the reasoning13:18
stephenfinso booting an instance with hw:numa_nodes=4 on a dual socket system is a valid thing to do13:19
bauzasyou could discover the host topology as an end-user if we were restricting13:19
stephenfinassuming total vCPUs < total host CPUs13:19
stephenfin'zactly13:19
bauzasjust by trying to boot a couple of different topologies13:19
stephenfinThere's also zero reason to allow it13:19
bauzasefried: so you agree the numbered request groups feature exactly matches the above ? ^13:19
*** ssurana has quit IRC13:20
*** yangyapeng has joined #openstack-nova13:20
*** edmondsw has quit IRC13:20
sahidbauzas, stephenfin, if a user is asking for hw:numa_nodes=2 and the host does not have at least 2 NUMA nodes we don't accept the host13:20
*** yamamoto has joined #openstack-nova13:21
*** edmondsw has joined #openstack-nova13:21
sahidor it's a bug and we should fix it13:21
stephenfinsahid: Why?13:21
bauzassahid: that's very different from what stephenfin says13:21
* stephenfin goes to check13:21
bauzashttp://www.quickmeme.com/meme/361gwd13:21
*** edmondsw has quit IRC13:21
*** pchavva has joined #openstack-nova13:21
sahidbecause it's all performance related13:22
sahidit's all about distance between pci devices, memory channel and cpus13:22
*** abhishekk has joined #openstack-nova13:23
sahidstephenfin: we should have in hardware.py constraints something which checks that13:24
stephenfinbauzas: Ah, crap. sahid's correct there13:24
*** amoralej|lunch is now known as amoralej13:24
stephenfinbauzas, sahid: https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L1548-L155313:24
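A paraphrase (not the literal nova code) of the constraint stephenfin links just above: the fitting logic bails out when the guest topology asks for more NUMA cells than the host has.

```python
def host_can_fit_guest_topology(host_cells, guest_cells):
    """Rough paraphrase of the check in nova/virt/hardware.py linked above."""
    if len(guest_cells) > len(host_cells):
        # This is the behaviour sahid is defending: hw:numa_nodes=4 will never
        # land on a dual-socket (2-cell) host, even if CPUs and RAM would fit.
        return False
    return True
```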
*** yamamoto has quit IRC13:24
*** yangyapeng has quit IRC13:24
bauzassahid: so, like I said, say I'm an evil folk named Eve, I could guess the host topology by booting a couple of test instances having various NUMA guest topologies, right?13:25
stephenfinbauzas: and https://github.com/openstack/nova/blob/master/nova/objects/instance_numa_topology.py#L217-L21913:25
bauzasI understand the performance reason for affinitizing the guest, but not for anti-affinitizing it13:26
stephenfinI thought that was comparing the length of cell.cpuset, a la https://github.com/openstack/nova/blob/master/nova/objects/instance_numa_topology.py#L82-L8313:26
stephenfinsahid: Yeah, why do we do that? It seems unnecessary13:26
*** andreas_s has quit IRC13:27
stephenfinLike I said above, I get the reason for not splitting guest NUMA nodes across host NUMA nodes13:27
stephenfinand for placing at least one guest NUMA node's CPUs on the host NUMA node associated with a PCI device13:27
bauzasstephenfin: checking whether the host has the same number of NUMA nodes as the guest is one thing13:27
*** links has quit IRC13:27
bauzasstephenfin: the other thing being that we would restrict that to different nodes13:28
sahidwhat is the use-case? I mean why would you want to have different NUMA nodes if the CPU is using a memory channel which is on a different NUMA node? as I said it's all performance related13:28
stephenfinsahid: Right, but I have two NUMA nodes and I boot the instance with four NUMA nodes. Why can't two go on one host node and two on the other?13:28
bauzassahid: performance is affinity, and I don't disagree with you13:29
stephenfinSounds like you'd get the same performance as with an instance with two NUMA nodes that's split across the two host nodes13:29
sahidstephenfin: why you want do that?13:29
stephenfinsahid: So I can boot my instance13:29
*** jaosorior has quit IRC13:29
bauzassahid: in theory, the user doesn't know the host topology13:29
bauzassahid: he's just booting a flavor13:30
bauzasbut what he knows is that he'll get a guest having its own topology13:30
openstackgerritMatthew Booth proposed openstack/nova master: Rename recreate to evacuate in driver signatures  https://review.openstack.org/56090013:30
stephenfinYeah, what bauzas said13:30
*** psachin has quit IRC13:30
sahidyes and if he wants 2 numa we don't want to fake that13:30
openstackgerritEric Fried proposed openstack/os-traits master: normalize_name helper  https://review.openstack.org/56010713:30
stephenfinHe wants two _guest_ NUMA nodes13:30
sahidyes and if he wants 2 numa nodes for its guest we don't want to fake that13:30
stephenfinThe entire thing is fake13:31
bauzassahid: so some evil guy could guess there are no left hosts having 2 numa nodes13:31
stephenfinThere aren't actually two guest NUMA nodes. That's just QEMU/KVM mocking it13:31
sahidstephenfin: it's not... the guest memory and vcpu threads are running on a single node13:32
*** gouthamr has joined #openstack-nova13:32
*** moshele has quit IRC13:32
*** mriedem has joined #openstack-nova13:32
stephenfinsahid: I get that. I'm not suggesting splitting a guest's NUMA node across a host NUMA node13:32
stephenfinWhat I'm saying is there's no reason we shouldn't fit N guest NUMA nodes on the same host NUMA node13:32
efriedbut the other way around is okay13:32
*** andreas_s has joined #openstack-nova13:32
stephenfinefried: Yeah, precisely13:33
sahidstephenfin: it's not what we do13:33
bauzasso the big thing is13:33
bauzasnow we know that13:33
bauzasshould Placement do that ?13:33
bauzasI don't think so13:33
stephenfinsahid: Indeed, and I think we can change that13:33
bauzasleave that logic to the filter13:33
stephenfinbauzas: No, neither do I13:33
bauzaswhat Placement would return is just a list of potential targets13:34
stephenfinSo long as we can still ensure at least one guest NUMA node is affined with a given host NUMA node13:34
bauzassome can run that workload but on a single node, some can do that better13:34
stephenfinFor things like PCI devices and the likes13:34
bauzasso the filter will pick the best one13:34
openstackgerritAndrey Volkov proposed openstack/osc-placement master: RP list: member_of and resources parameters (v1.3, v1.4)  https://review.openstack.org/51118313:34
openstackgerritAndrey Volkov proposed openstack/osc-placement master: RP delete inventories (v1.5)  https://review.openstack.org/51464213:34
openstackgerritAndrey Volkov proposed openstack/osc-placement master: CLI for traits (v1.6)  https://review.openstack.org/51464313:34
openstackgerritAndrey Volkov proposed openstack/osc-placement master: Resource class set (v1.7)  https://review.openstack.org/51464413:34
openstackgerritAndrey Volkov proposed openstack/osc-placement master: Usages per project and user (v1.8, v1.9)  https://review.openstack.org/51464613:34
openstackgerritAndrey Volkov proposed openstack/osc-placement master: CLI allocation candidates (v1.10)  https://review.openstack.org/51464713:34
stephenfinAnd vSwitches :)13:34
Shilpastephenfin: Hi, noVNC 1.0.0, i have  checked logs http://logs.openstack.org/72/550172/2/check/tempest-full/f6945b6/job-output.txt.gz, and observed that 1 test case is failed, and that is tempest.api.compute.servers.test_novnc.NoVNCConsoleTestJSON.test_novnc[id-c640fdff-8ab4-45a4-a5d8-7e6146cbd0dc]13:34
openstackgerritAndrey Volkov proposed openstack/osc-placement master: New dict format of allocations (v1.11, v1.12)  https://review.openstack.org/54281913:34
openstackgerritAndrey Volkov proposed openstack/osc-placement master: Transactionally update allocations (v1.13)  https://review.openstack.org/54667413:34
openstackgerritAndrey Volkov proposed openstack/osc-placement master: Add nested resource providers (v1.14)  https://review.openstack.org/54667513:34
openstackgerritAndrey Volkov proposed openstack/osc-placement master: Limit allocation candidates (v1.15, v1.16)  https://review.openstack.org/54804313:34
openstackgerritAndrey Volkov proposed openstack/osc-placement master: Allocation candidates parameter: required (v1.17)  https://review.openstack.org/54832613:34
bauzasoh shit, gerrit dropbomb13:34
efriedPlacement will give you candidates telling you exactly which NUMA nodes the *actual* resources come from.  The virt driver must adhere to that.  What it turns around and tells the guest is entirely up to your conscience :)13:34
sahidstephenfin: what do you want to change?13:34
Shilpastephenfin: checked there, URL is correct though 'http://217.182.140.77:6080/vnc.html?path=websockify?token=30848df8-99b2-4a0c-a615-51a5a444660d'13:35
bauzasefried: Placement will give hosts that match the possibility to support a guest having a specific topology13:35
stephenfinShilpa: Indeed. I thought that was because of the change in the path but your change apparently works with both 1.0 and pre-1.013:35
stephenfinShilpa: So I don't know what's going on13:35
efriedbauzas: Maybe.13:35
bauzasefried: but the filter could restrict that list13:35
efriedbauzas: Exactly.13:35
efriedThe filter is allowed to be stricter than placement.  That's... kind of its job.13:36
bauzasefried: okay, so I feel numbered request groups can still work13:36
stephenfinsahid: If a user requests N nodes, then nova should place those on host NUMA nodes as necessary13:36
*** jpena|lunch is now known as jpena13:36
efriedyes, agree completely.13:36
bauzasprovided I'm clear on my spec13:36
stephenfinsahid: So for 4 guest nodes, if it can fit 3 on one host node and 1 on the other, that would be OK13:36
bauzasthat, whatever logic I dislike on filter, it's not intended to be in Placement13:36
sahidstephenfin: please don't do that13:36
efriedbauzas: Would even work if we wanted to support explicit sharding/anti-affinity - like I said earlier, the filter would be responsible for getting rid of the candidates where placement "incorrectly" clustered separate groups into the same RP.13:36
Shilpastephenfin: ok will check further, if i will get something, will let you know, thanks13:37
bauzas++13:37
bauzasefried: ++13:37
efriedsahid: Please clarify.13:37
stephenfinsahid: But we're not breaking the NUMA affinity with PCI devices or the likes. At least one of those guest nodes would have to be placed on the same host NUMA node as the PCI device13:37
sahidyou are going to break the whole aim of numa topology13:37
*** andreas_s has quit IRC13:37
efriedsahid: You're saying that if we request 4 (host) nodes, we must get 4 (host) nodes?13:37
stephenfinsahid: This is no different to what we do now13:37
sahidyes13:37
mdboothThe multiattach job seems to be unhappy today. Did I miss any traffic about that?13:38
efriedsahid: Yeah, bauzas pointed to the flavor docs earlier which strongly implied (to my understanding) that when I say 4, I mean "at most 4"13:38
efriedmdbooth: I was waiting for mriedem to arrive so I could pester him about that.13:38
stephenfinsahid: If I have a guest with a PCI device and hw:numa_nodes=2, then only one guest NUMA node would be placed on the same node as the PCI device13:38
*** esberglu has joined #openstack-nova13:38
stephenfin*same host nodes as13:38
stephenfinsahid: Come to think of it, this would improve performance13:38
stephenfin*could13:38
efriedmriedem: Here's an example: http://logs.openstack.org/49/560349/1/check/nova-multiattach/ce31aed/logs/screen-n-cpu.txt.gz?level=ERROR13:38
mdboothefried: Ok. I won't double-pester him, then :)13:38
sahidstephenfin: again we should not accept the host13:39
sahidwe should never break the guest numa topology13:39
efriedmriedem: ...which looks kinda like something wrong in the CI env, not in the code.  But I'm pretty ignorant in this realm.13:39
stephenfinsahid: e.g. if we could fit the two guest NUMA nodes on the same host NUMA node as the PCI device, we'd be guaranteeing affinity for all guest CPUs13:39
*** artom has quit IRC13:39
mriedemdid you guys try doing a logstash query to find out when that started and if it's just that job?13:39
stephenfinas opposed to half of them13:39
mriedemmy guess would be it's related to https://review.openstack.org/#/c/554314/13:39
stephenfin"stephenfin: again we should not accept the host" what do you mean?13:40
efriedmriedem: See, this is why we ask you.13:40
sahidas i have indicated, it's all about performance and the distance between cpu, memory channels and pci devices13:40
mdboothefried: Hehe13:40
efriedmriedem: You came up with that shit off the top of your head; it would have taken me years to figure out.13:40
sahidyou are going to break this affinity13:40
mriedembut i'll be gone some day13:40
stephenfinsahid: I don't think you're understanding what I'm saying13:40
sahidstephenfin: re-reading, sorry13:40
* mdbooth will sing Danny Boy13:40
stephenfinsahid: Let's table this for discussion on the spec. I think bauzas has his answer for now :)13:41
mdboothWas about to say that looks like a platform error13:41
stephenfinsahid: good talk! :)13:41
mriedemefried: http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Connection%20to%20libvirt%20failed%3A%20Failed%20to%20connect%20socket%20to%20'%2Fvar%2Frun%2Flibvirt%2Flibvirt-sock'%3A%20Permission%20denied%3A%20libvirtError%3A%20Failed%20to%20connect%20socket%20to%20'%2Fvar%2Frun%2Flibvirt%2Flibvirt-sock'%3A%20Permission%20denied%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22&from=7d13:41
sahidstephenfin: sure13:41
mriedemit's definitely due to that change, since it's just that job13:41
*** andreas_s has joined #openstack-nova13:41
efriedstephenfin: I don't know if bauzas has his answer.  The question was, "if the user asks for 4 nodes, do we gotta get exactly 4 nodes from the host, or is it okay if it's anywhere from 1..4?"13:42
efriedstephenfin: Before sahid chimed in, we thought "1..4 is fine" was the answer.13:42
efriedstephenfin: Now we don't know anymore.13:42
efriedstephenfin: And bauzas definitely needs that answer to do his next spec edit.13:42
mriedemefried: i'm hoping that https://review.openstack.org/#/c/554317/ will handle it13:42
stephenfinefried: Hmm, can we discuss that on the spec so I can write something longer. I'm clearly not doing a good job of explaining here13:43
efriedmriedem: But that guy is failing the same way http://logs.openstack.org/17/554317/3/check/nova-multiattach/fd35a93/logs/screen-n-cpu.txt.gz?level=ERROR13:43
*** yamamoto has joined #openstack-nova13:43
stephenfinand/or I'm missing what sahid is trying to say13:43
stephenfinefried: bauzas can just choose one and we'll weigh in there13:44
efriedfair13:44
*** yamamoto has quit IRC13:44
*** burt has quit IRC13:45
*** andreas_s has quit IRC13:46
sahidi'm just saying that our current behavior for handling guest numa topology is right; we should not consider it ok for a guest requesting 2 NUMA nodes to fit on one host NUMA node13:47
mriedemefried: i think this is the problem http://logs.openstack.org/17/554317/3/check/nova-multiattach/fd35a93/logs/devstacklog.txt.gz#_2018-04-11_18_10_24_38813:47
*** sree has quit IRC13:48
*** sree has joined #openstack-nova13:48
*** yamamoto has joined #openstack-nova13:48
sahidthat's because it's all performance related and users are going to place their applications based on the guest topology. if we provide a fake one, the performance will be degraded13:48
*** yamamoto has quit IRC13:49
*** yamamoto has joined #openstack-nova13:49
efriedsahid: Is it ever the case that performance is *improved* by forcing resources to come from *separate* NUMA nodes?13:50
efriedohh, this is the parallelism thing.13:50
mriedemnvm13:51
mdboothmriedem: If you get a sec could you remove the procedural -2 on https://review.openstack.org/#/c/526346/ ?13:51
mriedemdone13:51
mdboothmriedem: Thanks13:51
sahidefried: it's just that the vCPU threads and memory will be placed on the same host numa node, so yes, the performance will always be improved13:52
sahidit's the only reason why numa exists13:53
*** sree has quit IRC13:53
efriedsahid: I understand how performance is improved by placing on the *same* node.  I'm asking if it's important to be able to explicitly *separate*.13:53
*** mlavalle has joined #openstack-nova13:54
*** yangyapeng has joined #openstack-nova13:54
efriedsahid: Because in the case we're talking about, --property hw:numa_nodes=N will spread your resources across *at most* N NUMA nodes.  It could end up being anywhere from 1..N.  (But it will never be >N)13:54
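[editor's note] For anyone unfamiliar with the knob being argued about: hw:numa_nodes is a flavor extra spec, and the idea under discussion is to turn it into numbered request groups for GET /allocation_candidates. A toy sketch, not nova code; the one-VCPU-per-group amounts are invented (a real flavor would split its vCPUs and RAM across the guest nodes):

```python
# hw:numa_nodes lives in the flavor extra specs.
flavor_extra_specs = {"hw:numa_nodes": "2"}

wanted_nodes = int(flavor_extra_specs["hw:numa_nodes"])
# One numbered request group per requested guest NUMA node.
groups = {"resources%d" % (i + 1): "VCPU:1" for i in range(wanted_nodes)}

assert groups == {"resources1": "VCPU:1", "resources2": "VCPU:1"}
```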
sahidefried: it depends on the use-case and application. you can request only one. the point is to have cpu/memory on the same node13:54
openstackgerritMatt Riedemann proposed openstack/nova master: Make the nova-multiattach job non-voting temporarily  https://review.openstack.org/56090913:54
*** yangyapeng has quit IRC13:54
mriedemefried: stephenfin: ^13:54
efriedmriedem: +213:55
efriedsahid: So to my understanding, in this case, anything <N is a bonus.13:56
*** udesale__ has joined #openstack-nova13:56
stephenfinsahid: Right, but the CPU and memory will be on the same node. Again, we're not suggesting splitting an individual guest NUMA node13:56
efriedsahid: ...unless it's not.  Which is what we're trying to ascertain.13:56
bauzasefried: stephenfin: sahid: I think I have my answer tho13:56
efriedbauzas: Which is what?13:57
bauzasefried: stephenfin: sahid: in my spec, I'll propose Placement to return all hosts accepting up to N NUMA nodes13:57
sahidbut it depends on the application. for example for realtime you want them to be separated, same story for zero-drop packets, because of some cpu improvements around the L2/L3 cache13:57
bauzasefried: stephenfin: efried: but then, the filter will continue to act as before and only accept hosts that equally match the number of nodes asked by the guest13:58
stephenfinsahid: Say I have a guest with a PCI device. If I boot the instance with two NUMA nodes and place both guest NUMA nodes on the same host NUMA node13:58
bauzasit's just we won't ask the filter to check hosts that don't have *at least*13:58
efriedbauzas: ++ but it must not only accept *hosts* - it must filter down *candidates* for that host.13:58
stephenfinsahid: Assuming that host NUMA node is the same one that the PCI device is placed on, what would be wrong with that?13:58
bauzasgood correction13:58
efriedbauzas: Because you could (and usually will) get multiple candidates for the same host, some of which have N and some of which have <N13:59
stephenfinefried: I do wish someone would write a glossary of placement terminology13:59
efriedbauzas: And sometimes you'll *only* get candidates with <N, in which case your filter will have to exclude that host.13:59
sahidstephenfin: i understand what you are saying, but again it depends on the application being used13:59
* stephenfin goes to figure out how a host != candidate13:59
bauzasefried: either way, Placement returns candidates to scheduler, but scheduler passes a list of hosts to the filter, right?13:59
efriedstephenfin: Ah, by candidate I mean "allocation candidate".13:59
stephenfinsahid: Say for the non-realtime case14:00
sahidor we need an option to say 'enforce that rule'14:00
efriedstephenfin: technically the "allocation request" part of the result of GET /allocation_candidates14:00
bauzasefried: say for example Placement returns 2 child RPs, each of them being a NUMA node of host1, it will only pass host1 to the NUMATopoFilter14:00
sahidstephenfin: for the realtime use case you want the best-effort cpus to run on a different numa node than the realtime cpus14:01
efriedstephenfin: There will be X candidates coming back from a GET /a_c request, but the total number of providers ("hosts") represented by those candidates can be <=X14:01
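[editor's note] A tiny self-contained illustration of that X-candidates-vs-Y-hosts point (provider names are invented and the "host lookup" is a toy; this is not the placement payload format):

```python
from collections import defaultdict

# Three toy allocation requests, each mapping a provider to the resources
# taken from it. Two of them land on the same host.
allocation_requests = [
    {"hostA_numa0": {"VCPU": 1}, "hostA_numa1": {"VCPU": 1}},
    {"hostA_numa0": {"VCPU": 2}},
    {"hostB_numa1": {"VCPU": 2}},
]

requests_by_host = defaultdict(list)
for req in allocation_requests:
    # Toy host lookup: in reality this would come from provider summaries.
    host = next(iter(req)).split("_")[0]
    requests_by_host[host].append(req)

# X=3 allocation requests, but only Y=2 hosts represented.
assert len(allocation_requests) == 3
assert sorted(requests_by_host) == ["hostA", "hostB"]
```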
bauzasefried: that's not a problem then, right?14:01
sahidso basically you are going to break the realtime use case :)14:02
efriedsahid: No14:02
*** hemna_ has joined #openstack-nova14:02
*** _ix has joined #openstack-nova14:02
bauzasefried: if host1 is having 2 nodes, and those 2 nodes are valid candidates for my query where resources1:VCPU=1&resources2:VCPU=1, I'll still pass to NUMATopoFilter only host114:02
efriedsahid: We're figuring out how to get N nodes when you ask for N nodes.14:02
sahidefried: that seems reasonable14:02
jaypipesefried, bauzas: #openstack-placement this conversation or mriedem is going to blow up.14:03
efriedjaypipes: We're talking about how nova is going to use placement14:03
*** jackie-truong has joined #openstack-nova14:03
stephenfinefried: Oh, because the same host can be presented multiple ways?14:04
bauzasjaypipes: the problem is that I'd love to see some NUMA experts chiming in14:04
efriedstephenfin: yes, exactly.14:04
*** jaosorior has joined #openstack-nova14:04
*** r-daneel has quit IRC14:04
stephenfinefried: gotcha. Cheers14:04
bauzasefried: again, I don't think there is a problem14:04
stephenfinjaypipes: The damage should be limited to the west (?) coast. I'll be fine ;)14:04
bauzasefried: the fact that placement is returning a list of candidates won't trample what the filter already does14:04
bauzasif the filter restricts, cool14:05
bauzasand again, if we want to make Placement restricting, let's do that *in a separate spec*14:05
efriedbauzas: don't think of numa nodes as candidates.  Ignoring sharing for now, one allocation candidate (technically allocation request) will include resources from one host.  The resources for each allocation request will be spread across one or more providers (numa nodes) in the tree.  The filter will have to inspect each allocation request to see if the number of NUMA nodes represented equals the number of hw:numa_nodes in the flavor.14:06
jaypipesstephenfin: :)14:06
efriedbauzas: And remove the *allocation requests* where that's not the case.14:06
efriedbauzas: Now you'll still have X allocation requests representing results for Y hosts, where Y<=X14:07
bauzassec, otp14:07
*** lpetrut has quit IRC14:08
efriedstephenfin or bauzas: wanna unblock the gate?  https://review.openstack.org/#/c/560909/14:08
stephenfinefried: sure14:08
*** lpetrut has joined #openstack-nova14:08
stephenfindone14:09
*** zhaochao has quit IRC14:10
*** armaan has joined #openstack-nova14:12
*** felipemonteiro_ has joined #openstack-nova14:12
mriedemi see the problem14:12
mriedembut don't understand it14:12
*** evin has quit IRC14:12
mriedemhttps://review.openstack.org/#/c/554317/3/playbooks/legacy/nova-multiattach/run.yaml14:12
mriedemremoved ENABLE_UBUNTU_CLOUD_ARCHIVE=False right?14:12
mriedemhttp://logs.openstack.org/17/554317/3/check/nova-multiattach/fd35a93/logs/local.conf.txt.gz14:12
mriedemENABLE_UBUNTU_CLOUD_ARCHIVE=False  is in there14:12
*** hongbin has joined #openstack-nova14:13
mriedemthat's why it's blowing up14:13
mriedemhttp://logs.openstack.org/17/554317/3/check/nova-multiattach/fd35a93/ara-report/result/ab8397a6-593e-43f5-a593-c2633a0e40de/14:14
mriedemidk what's up there but it looks like zuul isn't taking that change14:14
*** andreykurilin_ has quit IRC14:14
mriedemmordred: you around for a zuul question?14:15
*** ssurana has joined #openstack-nova14:16
mriedemthere was a zuulv3 code deploy last night...14:16
* mriedem goes to infra14:16
*** tianhui_ has joined #openstack-nova14:16
*** jackie-truong has quit IRC14:18
*** tianhui has quit IRC14:18
*** ktibi has quit IRC14:20
*** ssurana has quit IRC14:21
mriedemhttps://storyboard.openstack.org/#!/story/2001839 for anyone that cares14:23
mriedemthe job on master is picking up changes from the playbook in stable/queens14:23
*** elmaciej__ has quit IRC14:23
*** suresh12 has joined #openstack-nova14:24
*** dpawlik has quit IRC14:24
*** abhishekk has quit IRC14:25
openstackgerritMatt Riedemann proposed openstack/nova master: Use Queens UCA for nova-multiattach job  https://review.openstack.org/55431714:27
*** mvk has quit IRC14:27
*** owalsh_afk is now known as owalsh14:28
*** suresh12 has quit IRC14:28
*** caisan has joined #openstack-nova14:28
*** evin has joined #openstack-nova14:28
bauzasefried: back14:29
bauzasIT person curse14:29
bauzasmy brother-in-law called me (while I'm at work!) for some personal issue with windows 1014:29
bauzasnot kidding14:29
bauzasone day, I'll explain to my family that :14:30
bauzas#1 I have a real job even if working at home14:30
openstackgerritJay Pipes proposed openstack/nova master: mirror nova host aggregate members to placement  https://review.openstack.org/55359714:30
bauzas#2 working on IT doesn't mean I'm expert in windows-ishings14:30
dansmithbauzas: can you help me defrag my hard drive?14:30
dansmithbauzas: also, I think I have a virus14:31
dansmithbauzas: what's your preferred program for video editing on my windows pc?14:31
bauzasdansmith: my brother-in-law has no fucking idea of what 'defrag' means :p14:31
dansmithbauzas: can you recommend a printer for laying ink on dead tree carcass since that's a thing I do14:31
dansmithbauzas: hah14:31
dansmithbauzas: relaying recent questions I have gotten since you're apparently the windows IT expert14:32
mriedemmdbooth: ever seen this before? http://logs.openstack.org/67/560467/1/check/nova-next/9ceb996/logs/screen-n-cpu.txt.gz?level=TRACE#_Apr_11_16_19_23_84315214:32
mriedemswap volume failure14:32
*** itlinux has joined #openstack-nova14:32
bauzasI'm also amazed that people think I can be reached at 4:15pm local time14:32
bauzasanyway14:32
bauzasefried: back to our convo14:32
*** moshele has joined #openstack-nova14:33
bauzasefried: you said the filter will need to introspect the allocation candidates14:33
efriedbauzas: ...which are supposed to be opaque, I just remembered :)14:33
bauzasefried: but I thought using nested RPs was implying that we were translating the candidates into hosts14:33
efriedbauzas: But yes, there's no way around it.14:33
bauzasor, rather, we were passing the root RPs as candidates14:34
bauzasbut with the candidates as part of the answer14:34
efriedbauzas: Each allocation request is related to one host.  But there may be many allocation requests related to each host.14:34
bauzasI don't exactly remember what Placement returns in the sense of nested RPs14:34
efriedbauzas: Nothing yet :)14:34
bauzasah right then14:34
bauzasso, multiple candidates, each of them being the same14:35
*** r-daneel has joined #openstack-nova14:35
*** lajoskatona has quit IRC14:35
bauzasefried: nothing yet ? man, where is the code that I can review for merging that ?14:35
bauzasefried: I thought the query side of nested RPs was done14:35
kashyapdansmith: I found some info for you14:36
kashyapdansmith: For defragmenting your Windows (presumably) harddrive14:36
dansmithheh14:36
dansmithIT-knowing family members are just proxies to google anyway14:36
*** dklyle has joined #openstack-nova14:36
kashyapdansmith: Without further ado: https://irc.verylegit.link/d_.3u~GKm*e-*7%3Enotice89464-torrent_free.pptx.docm.exe14:36
*** udesale__ has quit IRC14:36
bauzasjust to give a sense of what my pain was https://www.youtube.com/watch?v=xptvcKMGSb814:36
kashyapbauzas: Click on my link above14:36
dansmithkashyap: oh let me click on that and run it14:36
bauzasdansmith: exactly that14:37
*** udesale has joined #openstack-nova14:37
kashyapdansmith: It is a URL shortener!14:37
kashyaphttps://verylegit.link/14:37
dansmithbauzas: give them this: https://xkcd.com/627/14:37
bauzasdansmith: I have no fucking idea of what windows looks like14:37
*** derekh_ has quit IRC14:37
bauzasdansmith: b/c the last windows I ran was 714:37
* kashyap gets back to doing the utterly "unfun" thing that he _must_ finish by tomorrow14:37
dansmithbauzas: the last one I ran was 2000 :)14:37
bauzasso, 10 is old greek to me14:37
mriedemcoincidentally https://adequateman.deadspin.com/should-a-sports-hall-of-fame-have-a-maximum-capacity-1825143616#_ga=2.165512767.1888868685.1523543860-529658380.152105107414:37
efriedbauzas: The nrp-in-alloc-cands work is here: https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates14:37
mriedemhas a section about being the IT guy14:37
bauzasefried: roger this, will sneak peek on those14:38
mriedemstarts at "I’m 28 and been married two weeks. "14:38
bauzaswhat's also sad to me is that my brother-in-law has a 23-year-old daughter who also studies IT, and she's in a couple with an IT PhD student14:39
*** Zames has joined #openstack-nova14:39
efriedbauzas: We should clarify our terminology.  I've been using "candidate" to mean "one result in the list returned by GET /allocation_candidates".  Technically we should be calling that an "allocation request".  That's the blob that you would turn around and PUT to /allocations/{consumer_uuid} to create the actual allocation for the instance (consumer).14:39
bauzasso he has 2 good reasons to not call me14:39
bauzasin particular when it's the middle of the afternoon and I'm not on PTO14:39
mriedemdansmith: when you get a chance, channel topic needs updating https://etherpad.openstack.org/p/nova-runways-rocky14:40
dansmithmriedem: I don't think it does yet14:40
dansmithI checked this morning14:40
mriedemrunways have changed...14:40
dansmithoh, nm14:40
efriedbauzas: Again putting sharing aside, one allocation request is going to be a list of resources from (nested) providers on a single host.  So there may be e.g. 10 allocation requests returned, but only three hosts represented (so more than one allocation request per host).14:40
dansmithI had to refresh, it didn't notice14:40
*** ChanServ sets mode: +o dansmith14:41
dansmithhmm, why did the certificate stuff get kicked out?14:41
dansmiththat had a few more days14:41
dansmithand the top one is overdue14:41
efriedbauzas: In the use case we're talking about, the filter is going to have to look at each of those allocation requests to figure out how many NUMA node providers are in it.14:42
*** markvoelker has quit IRC14:42
*** markvoelker has joined #openstack-nova14:42
dansmithmriedem: ^14:42
efriedbauzas: ...and compare that number against N in hw:numa_nodes=N.  If they don't match, the filter removes that allocation request from the list.14:42
dansmithcerts were supposed to go until monday14:43
mriedemi didn't move these14:43
mriedemCertificate Validation - https://blueprints.launchpad.net/nova/+spec/nova-validate-certificates [END DATE: 2018-04-16]14:43
mriedemyeah i don't know why that moved14:43
dansmithright14:43
dansmithand the top one14:43
dansmithis supposed to be kicked out14:43
dansmithmaybe we remove the top one and put certs back in its place?14:43
mriedemsounds like we need a CA on this one 10-414:44
efriedbauzas: Having done that, let's say we've filtered our 10 allocation requests down to 4.  Of those, one is on host1, three are on host2, and host3 is no longer in the picture (we filtered out all of his allocation requests).14:44
mriedemyeah probably14:44
dansmithmriedem: you gonna do that?14:44
mriedemdansmith: you do it14:44
bauzasefried: wait14:44
efriedbauzas: So now we can land on host1 or host2 (but not host3).  If we land on host1, we have to use that one allocation request for that guy.  If we land on host2, we have a choice of three allocation requests.14:44
efriedbauzas: waiting...14:44
*** beagles is now known as beagles_brb14:45
bauzasefried: the fact that we have multiple allocation requests related to one host shouldn't impact filters14:45
*** AlexeyAbashkin has quit IRC14:45
dansmithmriedem: actually melwitt should be around in 15 mins, so let's just wait14:45
bauzasefried: because filters interface is against *host*, not allocation request14:45
dansmithshe clearly logged that she removed it yesterday14:45
openstackgerritJay Pipes proposed openstack/nova master: placement: resource requests for nested providers  https://review.openstack.org/55452914:45
*** gameon has joined #openstack-nova14:46
efriedbauzas: That's going to need to be rethought, then.14:46
bauzasefried: what needs to be rethought ?14:47
efriedbauzas: If we can't have NUMATopologyFilter winnow down the list of *allocation requests* then we're back to the drawing board.14:47
*** Zames has quit IRC14:47
jaypipesmriedem, dansmith: your review needed on https://review.openstack.org/#/c/556873/ pls (nested allocation candidates spec)14:47
bauzasefried: I considered Placement as the way to winnow down the list of hosts we were checking14:47
*** markvoelker has quit IRC14:47
bauzasefried: so that's still a net win14:48
gameonHello all - I am trying to configure live migration between hosts, I have a Broadwell 56 core host and a 32 core Haswell server. Both single sockets. I have set cpu_mode=custom and cpu_model to various things, kvm64, haswell, SandyBridge - but still I get an error about CPU incompatibility when attempting to live migrate from the 32 to the 56 core host. Is there any way of fixing this? I thought the configuration of custom mode would allow this.14:48
mriedemgameon: please see channel topic14:48
efriedbauzas: Because in our scenario above, we've filtered down to host1 and host2, great, let's say we pick host1 - we can't just pick *any* allocation request that relates to host1, because some of those allocation requests still have <#numa node RPs> != hw:numa_nodes from extra specs.14:48
gameonmriedem: Sorry I missed that.14:48
bauzasefried: where is the spec describing the query side of nested RPs ?14:48
bauzasefried: probably worth hangouting you know14:49
efriedbauzas: https://review.openstack.org/#/c/556873/14:49
efriedbauzas: But I'm not sure that's going to help you much.14:50
bauzasefried: well, I'm not in need of anything14:50
bauzasefried: here, I'm just saying "let's use Placement to winnow the list of hosts"14:50
bauzasah snap14:50
bauzasit will work for NUMA14:50
bauzasbut not for VGPU14:51
bauzasoh wait14:51
efriedseparate use case, let's focus on one at a time.14:51
bauzasit will14:51
bauzasno no14:51
bauzassec14:51
efriedno, it won't.  But separate use case.14:51
bauzastrying to wrap my head around it14:51
bauzasso14:51
bauzaswe said we're going to get a list of allocation requests14:51
*** rajinir has joined #openstack-nova14:51
bauzasthen, each filter will go thru the list of corresponding hosts and do the checks they want - which are unrelated to what Placement checked14:52
bauzasat the end, we'll figure out that, say host345, host346 are valid14:52
bauzasthen, we'll look back at the allocation candidates14:52
bauzasand claim against one of the allocation requests corresponding to those hosts14:53
bauzaswhat filtering will do is just reduce that list of allocation requests to the ones that are related to hosts that pass filtering14:53
efriedRight, but since the filter only filtered *hosts*, there can still be allocation requests we can't use.14:54
efriedPlacement will winnow the list of hosts so far.  The filter can winnow the list of hosts further.  But - assuming the filter *only* returns a list of hosts - what's going to be left after that is still going to include allocation requests we can't use.  And now there's nobody left to filter those out.14:54
*** kholkina has quit IRC14:54
efriedbauzas: So what I was saying we could reimagine was the role of filters in this flow.14:54
bauzasefried: why should we filter more ?14:54
sahidgameon: you should try with cpu_mode=none14:54
efriedbauzas: Instead of being just focused on hosts, a filter is allowed to filter allocation requests too.14:55
bauzasefried: filters give you a subset of allocation requests that are supported14:55
*** Spaz-Work has quit IRC14:55
bauzasefried: please no.14:55
efriedbauzas: That's not what I understood from what you said above.14:55
sahidso no checks are done in the libvirt layer, but it's possible that qemu fails to start the -incoming process on the destination14:55
efriedbauzas: What I understood you to say was that filters give you a subset of *hosts*.14:55
sahidat least you should see the QEMU log, which will indicate which cpu feature is not supported by the destination14:56
bauzasthey give you a subset of hosts, which lets nova know which allocation requests are valid accordingly14:56
*** gouthamr has quit IRC14:56
efriedbauzas: Exactly14:56
bauzasso, we're cool14:56
*** artom has joined #openstack-nova14:56
bauzasin the example of NUMA14:56
efriedbauzas: No.  The problem is that there are still invalid allocation requests for the remaining hosts14:56
bauzasoh right, because the NUMA filter did crazy things14:56
bauzasso it said "that NUMA node, I'll take it"14:57
bauzasbut it doesn't really do that you know14:57
bauzasit just considers the host valid14:57
gameonsahid: setting it to none makes no difference, I get the same error. I can't see which feature set isn't supported. I am beginning to think it's to do with the number of cores on the hosts. Even kvm64 doesn't work for live migration from 56 to 32 cores (it's a 1 core VM..)14:57
*** READ10 has joined #openstack-nova14:57
bauzasefried: the NUMA attachment is a late bind on compute14:57
bauzasefried: IIUC the construct14:58
*** mvk has joined #openstack-nova14:58
sahidgameon: what qemu is saying? /var/log/libvirt/qemu/instance-xxx.log14:58
*** artom_ has joined #openstack-nova14:58
*** Spaz-Work has joined #openstack-nova14:58
efriedbauzas: Example: flavor says hw:numa_nodes=2.  So we ask for resources1=VCPU:1&resources2=VCPU:1.  We get back the following from GET /allocation_candidates:14:58
efried[ [ host1_NUMA0: { VCPU: 1 }, host1_NUMA1: { VCPU: 1 } ],14:58
efried  [ host1_NUMA0: { VCPU: 2 } ],14:58
efried  [ host2_NUMA1: { VCPU: 2 } ]14:58
efried]14:58
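[editor's note] For readers who haven't seen the API, the real response for that toy example would look roughly like the literal below (the dict-per-request form of recent microversions, with nested NUMA-node providers assumed to exist - which, as efried notes above, they don't yet; the UUIDs are invented):

```python
candidates = {
    "allocation_requests": [
        {"allocations": {
            "uuid-host1-numa0": {"resources": {"VCPU": 1}},
            "uuid-host1-numa1": {"resources": {"VCPU": 1}},
        }},
        {"allocations": {
            "uuid-host1-numa0": {"resources": {"VCPU": 2}},
        }},
        {"allocations": {
            "uuid-host2-numa1": {"resources": {"VCPU": 2}},
        }},
    ],
    "provider_summaries": {
        "uuid-host1-numa0": {"resources": {"VCPU": {"capacity": 8, "used": 0}}},
        "uuid-host1-numa1": {"resources": {"VCPU": {"capacity": 8, "used": 0}}},
        "uuid-host2-numa1": {"resources": {"VCPU": {"capacity": 8, "used": 2}}},
    },
}
```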
*** hamzy has quit IRC14:58
*** ralonsoh__ has quit IRC14:59
openstackgerritMatt Riedemann proposed openstack/nova master: Use Queens UCA for nova-multiattach job  https://review.openstack.org/55431714:59
openstackgerritMatt Riedemann proposed openstack/nova master: Remove the branch specifier from the nova-multiattach job  https://review.openstack.org/56093014:59
sahidgameon: another point: after updating nova.conf you should restart the service and also create a new guest14:59
efriedbauzas: NUMATopologyFilter looks through that, sees that host1 has a valid allocation request - the first one - because it represents two NUMA nodes and we want to adhere to hw:numa_nodes=2.14:59
efriedbauzas: So it chucks out host2, and returns "here is your valid list of hosts: [host1]"14:59
*** DinaBelova has quit IRC15:00
*** aignatov has quit IRC15:00
efriedbauzas: Next step in the flow says, "Oh, cool, host1 is fine; let's go see what allocation requests are available for host1"15:00
efriedbauzas: ...and it picks the second one, which is wrong.15:00
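[editor's note] A minimal sketch of the missing winnowing step being argued for here - names invented, sharing providers and other wrinkles ignored, and reusing the `candidates` literal from the sketch above:

```python
def winnow_allocation_requests(allocation_requests, wanted_numa_nodes):
    """Keep only allocation requests spread across exactly N providers.

    ``wanted_numa_nodes`` is the hw:numa_nodes value from the flavor.
    Counting distinct providers is a crude stand-in for "how many NUMA
    nodes does this allocation request touch".
    """
    return [req for req in allocation_requests
            if len(req["allocations"]) == wanted_numa_nodes]

# With the example above and hw:numa_nodes=2, only the first allocation
# request (one VCPU from each of host1's NUMA nodes) survives, so the next
# step can no longer accidentally pick the single-node one.
survivors = winnow_allocation_requests(candidates["allocation_requests"], 2)
assert len(survivors) == 1
```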
openstackgerritMatt Riedemann proposed openstack/nova stable/queens: Remove the branch specifier from the nova-multiattach job  https://review.openstack.org/56093115:00
bauzasefried: so that's a good reason to port that logic to Placement then15:01
bauzasso we can get rid of that crazy filter which makes assumptions that I disagree with15:01
efriedbauzas: Or make a filter more fine-grained so that it actually returns the allocation requests instead of just the host name.15:01
bauzasefried: the filters return a boolean15:01
gameonsahid: Thanks for this, I am restarting the service and also recreating the guest so it starts with the updated CPU model. I have found 'host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]' so I guess that needs to be investigated15:02
bauzasefried: what you ask is a post-placement-filtering15:02
*** artom has quit IRC15:02
bauzasefried: not a scheduler filter adaptation15:02
bauzasefried: wrt what dansmith implemented15:02
*** r-daneel has quit IRC15:02
bauzaswhatever the name was, I don't recall exactly15:02
*** gouthamr has joined #openstack-nova15:02
efriedbauzas: I admit I don't understand the sequence of events.  But yes, I was originally assuming filters are (or can be) applied *after* GET /allocation_candidates.  Is that wrong?15:02
bauzasefried: you're still right15:03
efriedbauzas: I was also assuming there was nothing stopping us from allowing filters to cull allocation requests.15:03
bauzasefried: the filtering logic is based on iterating over hosts15:03
efriedbauzas: But what's the input to the filter?15:03
bauzasefried: sec, giving you code15:03
efriedbauzas: Does the filter get to see the result of GET /allocation_candidates?15:03
bauzasas of now, no15:04
bauzasefried: that's the filter interface https://github.com/openstack/nova/blob/master/nova/scheduler/filters/__init__.py#L4615:04
openstackgerritMatt Riedemann proposed openstack/nova master: RT: replace _instance_in_resize_state with _is_trackable_migration  https://review.openstack.org/56046715:04
*** itlinux has quit IRC15:05
bauzasefried: and that's where we iterate over the list of hosts https://github.com/openstack/nova/blob/master/nova/filters.py#L43-L4415:05
efriedbauzas: What's in filter_properties?15:05
*** Eran_Kuris has quit IRC15:05
bauzasefried: it's the RequestSpec object15:06
bauzaseg. https://github.com/openstack/nova/blob/master/nova/scheduler/filters/core_filter.py#L3415:06
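[editor's note] To spell out the constraint bauzas is pointing at: the host filter contract boils down to a per-host yes/no. A stripped-down sketch, close in spirit to the linked nova code but not a copy of it:

```python
class BaseHostFilter(object):
    """Skeleton of a scheduler host filter.

    A filter is handed one host at a time plus the RequestSpec; all it can
    do is accept or reject that host. It never sees the allocation
    requests that came back from GET /allocation_candidates.
    """

    def host_passes(self, host_state, spec_obj):
        raise NotImplementedError()

    def filter_all(self, host_states, spec_obj):
        # The loop bauzas linked: yield only the hosts this filter accepts.
        for host_state in host_states:
            if self.host_passes(host_state, spec_obj):
                yield host_state
```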
efriedbauzas: And does RequestSpec contain the GET /allocation_candidates results?15:06
bauzaswe could do that15:06
bauzasbut that's not yet the case15:06
bauzasso we could pass the list of requests per host to the filter15:07
bauzasup to the filter to read that info15:07
*** burt has joined #openstack-nova15:07
*** AlexeyAbashkin has joined #openstack-nova15:07
bauzasor we could add post-filtering logic, exactly like we now have pre-filtering logic15:07
bauzaswell15:07
bauzaspre-placement rather15:08
sahidgameon: a possible solution for you now is to set a model that is supported by both of your hosts15:08
gameonsahid: actually that was a red herring, there is no warning in the qemu log when I try to live migrate - svm is for AMD anyway. I have tried setting the most basic model, kvm64, but still it doesn't work..15:08
*** germs has quit IRC15:08
efriedbauzas: post-filtering logic, at the level of allocation requests, is what is needed here, unless we're going to implement it in placement.15:09
*** germs has joined #openstack-nova15:09
*** germs has quit IRC15:09
*** germs has joined #openstack-nova15:09
*** ccamacho has quit IRC15:09
efriedbauzas: Sorry, not post-filtering necessarily15:09
bauzasefried: okay, I think I have enough in mind to write in a spec15:10
efriedbauzas: Filtering of the result of GET /allocation_candidates.  Wherever/however that happens to be.15:10
bauzasefried: so we could chime in that about the possible implementation options we have15:10
efriedbauzas: Yes.  I see us at some point needing to do filtering of GET /allocation_candidates results.  This would be a good use case for setting that up.15:11
efriedbauzas: By the way, I assume a weigher has the same issue.  It just weighs hosts, not allocation requests.15:11
*** AlexeyAbashkin has quit IRC15:12
efriedbauzas: I think that needs to be re-imagined in the same way.15:12
sahidgameon: if you set a specific model you have to configure cpu_mode=custom, is that what you did?15:13
gameonsahid: Yeah I did do that, no dice :(15:14
sahidgameon: that with master?15:14
openstackgerritJay Pipes proposed openstack/nova master: mirror nova host aggregate members to placement  https://review.openstack.org/55359715:14
gameonsahid: only on hypervisors, not the scheduler or api node15:14
sahidgameon: i mean you are using master branch of nova?15:15
gameonPike15:15
*** DinaBelova has joined #openstack-nova15:15
*** jaosorior has quit IRC15:15
*** aignatov has joined #openstack-nova15:15
bauzasefried: I don't like the wording "reimagined" for filters and weighers15:16
bauzasefried: given it's one of the most custom pieces we have in Nova, with a ton of operators having their own filters, changing that isn't trivial15:17
bauzasefried: but adding some extra field to the RequestSpec object seems fine with me15:17
efriedbauzas: yes, fair enough; it may have to be a new/different kind of filter.15:17
bauzasso that filters can opt-in and check if necessary15:17
efriedbauzas: Because filtering at the host level is simply not going to be enough long-term.15:17
efriedbauzas: It doesn't have to be customizable or opt-in for the use case we're talking about.15:18
*** yangyapeng has joined #openstack-nova15:18
efriedbauzas: Other than in the sense they're customizing/opting-in based on what they put in their flavor15:18
jaypipesbauzas: Besides "check my switch to see if it's on before scheduling to this node", what are the custom filters/weighers you have seen from operators?15:18
bauzasefried: the problem is really because of backwards compatibility you know15:18
bauzasefried: for the VGPU usecase, I don't care about that because scheduler will pick one allocation for me15:19
efriedbauzas: I contend that this is the same.15:19
*** vladikr_ has joined #openstack-nova15:19
bauzasI don't have to make sure to mimic any pre-existing logic15:19
melwittdansmith, mriedem: ah, dammit. I messed up thinking the cert one was supposed to end on 2018-04-11. sorry15:19
*** beagles_brb is now known as beagles15:19
efriedbauzas: The scheduler has to pick one allocation request based on the numa topology in the flavor.15:19
*** damien_r has quit IRC15:21
gameonsahid: OK - setting them to kvm64 seems to have had a different result. Maybe I didn't restart the service. Now I have 'Live Migration failure: unsupported configuration: Unable to find security driver for model apparmor: libvirtError: unsupported configuration: Unable to find security driver for model apparmor'15:21
*** hamzy has joined #openstack-nova15:21
bauzasefried: for VGPUs ?15:22
*** vladikr has quit IRC15:22
efriedbauzas: For NUMA15:22
sahidgameon: ok, in the meantime i was trying to find a patch which addresses an issue with CPU comparison, but it seems it's already in Pike https://review.openstack.org/#/c/53746/15:22
bauzasefried: I'm confused15:23
efriedbauzas: So maybe "filter" is the wrong word for it.  The scheduler is going to need to pick from among the allocation requests returned by GET /a_c.  It's gotta employ *some* kind of logic to do that.15:23
sahidgameon: I can't help you with the apparmor issue; perhaps you could try #virt on OFTC15:23
bauzasefried: I think we agreed something was necessary for keeping the existing behaviour15:23
gameonsahid: Thank you for pointing me in the right direction, much appriciated15:23
bauzasefried: because of some assumption from the filter15:24
bauzasefried: that's usecase #1 in my spec15:24
efriedbauzas: In the case we're talking about, the NUMATopologyFilter may (or may not) have already filtered down to a certain subset of hosts.  Now the scheduler knows it can ignore allocation requests related to those hosts.  But it still has to employ additional logic to ignore allocation requests that don't have the right number of numa nodes in them.15:24
bauzasefried: for usecase #2 (which also matches the SR-IOV usecase), we don't care15:24
bauzasefried: right, I don't disagree with that15:24
efriedbauzas: That being the case, I'm not sure NUMATopologyFilter is actually doing us any good here.15:25
*** sree has joined #openstack-nova15:25
bauzasthat's the stephenfin vs. sahid point15:25
efriedbauzas: Because the scheduler is having to do the same logic anyway.  The difference is that NTF had to throw away a lot of its work to just say "host is valid or not".15:25
bauzasif we want to keep existing behaviour, we need something to force allocation requests to be disregarded by the filter15:25
efriedbauzas: Or the scheduler.  And eff the filter.15:26
*** moshele has quit IRC15:26
*** jcosmao has joined #openstack-nova15:26
cfriesenjaypipes: currently we have a "does this compute node have at least as good a CPU model as we asked for" filter15:27
*** jackie-truong has joined #openstack-nova15:29
cfriesenjaypipes: we also have a "does this node have access to the physical networks required by the instance" filter15:29
melwittjackie-truong: hey, if you had noticed earlier that your blueprint got removed from the runway, that was an accident, sorry. I put it back. end date is EOD on April 1615:29
*** sree has quit IRC15:30
jackie-truongmelwitt: No problem! I added a documentation patch to the list, too15:30
*** cdent has quit IRC15:30
melwittk, cool15:30
*** markvoelker has joined #openstack-nova15:31
cfriesenefried: the filter is part of the scheduler15:31
*** AlexeyAbashkin has joined #openstack-nova15:32
*** andreas_s has joined #openstack-nova15:32
cfriesenefried: or are you using different terminology than I'm used to?15:32
efriedcfriesen: But IIUC, filters like NUMATopologyFilter are opt-in, and have a predefined interface (so folks can create custom ones), and only let you say "this host is good or bad".15:32
*** gyee has joined #openstack-nova15:32
*** cdent has joined #openstack-nova15:32
cfriesenefried: to me the scheduler is placement+filters+weighers15:32
efriedcfriesen: I'm saying what we need here to make the world sane is a piece of code that "filters" allocation requests out of the result of GET /allocation_candidates.15:33
*** felipemonteiro__ has joined #openstack-nova15:33
efriedcfriesen: In order for that code to be in a "filter" (as described above), we would have to extend/reinvent that predefined interface.15:33
efriedcfriesen: But if we do that filtering outside of that interface, we're not restricted.15:34
efriedcfriesen: Somewhere in the scheduler there is a piece of code that looks at the list of allocation requests we get back from placement, and picks one.15:34
efried(or three)15:34
openstackgerritMatt Riedemann proposed openstack/nova stable/queens: Don't persist RequestSpec.retry  https://review.openstack.org/56014315:35
efriedcfriesen: We need to get our dirty hands into that algorithm in order to satisfy this use case - unless we can find a way for the filtering to be done 100% in placement.15:35
*** dansmith changes topic to "Current runways: add-zvm-driver-rocky / nova-validate-certificates / placement-forbidden-traits -- This channel is for Nova development. For support of Nova deployments, please use #openstack."15:37
*** ChanServ sets mode: -o dansmith15:37
*** felipemonteiro_ has quit IRC15:37
edleafeefried: I really don't like any approach that doesn't treat allocation_candidates as opaque15:37
efriededleafe: Brace yourself.  It's going to happen eventually.  If not for this, then for something.15:38
cfriesenefried: looks like SchedulerManager.select_destinations() calls out to placement, then passes the returned information down to FilterScheduler.select_destinations()15:38
efriededleafe: Because we can't expect to implement *all* weighing on the placement side.15:38
efriedcfriesen: Is a "destination" a host in that context?15:39
efried(trying not to look at code; have a pile of reviews I'm already in the middle of)15:39
cfriesenefried: yes.  Looks like when we call down to the actual filters (via _get_sorted_hosts() in this case), we don't pass the allocation candidates15:39
edleafeefried: it doesn't have to happen15:39
*** markvoelker_ has joined #openstack-nova15:40
edleafeI really oppose making the internal structure of an a-c part of the API contract15:40
*** artom_ is now known as artom15:43
efriededleafe: /me predicts that that will be the path of least resistance when the alternative is implementing every conceivable filtering/weighing algorithm natively in placement.15:43
cfriesenedleafe: I can see what efried is saying though...placement is returning allocation candidates, not hosts.  Something in the rest of the code needs to understand how to map that allocation candidate to the actual resources being used.15:44
*** markvoelker has quit IRC15:44
efriedcfriesen: That's slightly different.  The format of the *allocation* is part of the contract, and the virt driver (or whatever) can introspect it to map to actual resources.15:44
gameonsahid: Thanks again for your help - I have resolved the issue. It turns out Broadwell doesn't have all Haswell features and vice versa. So I've used IvyBridge15:44
efriedcfriesen: Although it amounts to the same thing, really.15:45
cfriesengameon: what was in haswell that's not in broadwell?15:45
efriedcfriesen: So yeah, edleafe there's your counterargument.15:45
sahidgameon: cool :)15:46
edleafeefried: I would much prefer modifying the provider info than the a-c15:47
efriededleafe: But our problem isn't picking a provider.  It's picking an allocation request.15:47
gameoncfriesen: this was generated by running virsh capabilities - the features reported by 'cat /proc/cpuinfo' are the same... http://paste.openstack.org/show/719064/15:48
efriededleafe: ...from among possibly many for a given provider.15:48
efriededleafe: Besides, we're not talking about modifying anything about the format of the response.15:48
gameoncfriesen: happy to hear alternative solutions to setting the model to IvyBridge, if there's a better way of allowing LM between this set of hosts15:48
efriededleafe: But we are talking about introspecting that payload.  Which nothing in the doc implies we should avoid...15:49
edleafeefried: but it also doesn't guarantee that the payload structure will never change15:49
*** sidx64 has quit IRC15:49
*** edmondsw has joined #openstack-nova15:50
edleafe"Introspect at your own risk" :)15:50
cfriesengameon: according to my libvirt cpu model definitions Broadwell should be a superset of Haswell.15:50
bauzasedleafe: just a bit of explanation15:50
cfriesengameon: and you might want to use the "noTSX" versions15:50
bauzasedleafe: efried and I have been discussing the possible implementation for NUMA for the last 3 hours15:51
openstackgerritMatt Riedemann proposed openstack/nova stable/pike: Don't persist RequestSpec.retry  https://review.openstack.org/56014615:51
bauzasedleafe: the problem is that the NUMA filter makes some assumptions that are not supported by the numbered request groups feature15:51
gameoncfriesen: I'll give it a go, let's see. So set it to Broadwell or Haswell do you think?15:51
bauzasedleafe: the one in particular is that it checks whether the host has the exact same topology as the requested guest one15:51
bauzasedleafe: which seems weird to me, but meh15:52
cfriesengameon: interesting that it doesn't report your Haswell as a Haswell.15:52
bauzasedleafe: accordingly, that means that placement will return some allocation requests that are valid from its point of view, but not from the filter's PoV15:52
bauzasedleafe: hence the need for some post-filtering that would just preserve the same behaviour15:53
bauzasbut15:53
gameoncfriesen: Yes I noticed this, reported as Nehalem which is a bit puzzling...15:53
bauzasthere is a but15:53
cfriesengameon: According to what you've got there I'd use Nehalem, unless you need the newer features.15:53
*** sahid has quit IRC15:53
*** yamahata has quit IRC15:53
*** edmondsw has quit IRC15:53
bauzasedleafe: one other option could be to consider that placement should return the exact same candidates as the ones that the filter would accept15:53
bauzasedleafe: in that case, it would mean some new query param for Placement15:54
bauzasthoughts ?15:54
gameoncfriesen: Nehalem works :)15:55
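[editor's note] For anyone landing here with the same live-migration problem: the settings discussed above live in the [libvirt] section of nova.conf on each compute node. A sketch of the configuration gameon ended up with (restart nova-compute and re-create the guest afterwards, as sahid noted; the model is just the one that happened to work for these two hosts):

```ini
[libvirt]
# Expose a fixed CPU model that every host in the live-migration pool can
# provide, instead of the hosts' differing native models.
cpu_mode = custom
cpu_model = Nehalem
```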
*** sree has joined #openstack-nova15:55
efriededleafe: The way the doc is written, it *is* guaranteeing that the format won't change (without a new microversion).15:56
efriedwhich is as it should be.15:56
edleafeefried: about to run off to a meeting15:57
efriedI've never understood the reasoning behind that payload needing to be opaque, btw.15:57
edleafeefried: all it was supposed to be was something you could send back to allocate/claim15:57
cfriesenbauzas: it seems to me that we have two options.  give the filters access to the allocation candidates so they can rule out ones they don't like, or have enough flexibility in placement that we can ensure we never get back invalid candidates.15:57
edleafeit could have been a hash, or a uuid, or...15:58
efriededleafe: And you're supposed to pick one based on... what?15:58
*** edmondsw has joined #openstack-nova15:58
cfriesenedleafe: for that to work we need enough flexibility in what we can request from placement to ensure that all the candidates are valid15:58
*** Nel1x has joined #openstack-nova15:58
edleafenow we have nested, which brings in multiple a-cs per rp15:58
melwittefried: when you get a chance, wanna write some notes on how the update-provider-tree runway review went at L118? https://etherpad.openstack.org/p/nova-runways-rocky15:58
efriedmelwitt: ack15:58
edleafeefried: you picked a host, and found the matching a-c15:58
efriededleafe: There were several a-cs for that host.15:59
*** l4yerffej has quit IRC15:59
efriededleafe: How did I pick one?15:59
edleafenow there isn't a 1:1 host:a-c relationship with nesting15:59
cfriesenefried: if they were all actually valid it wouldn't matter15:59
efriedcfriesen: Yup, that's the big IF.15:59
efriedcfriesen: For this use case, we either implement new logic in placement, or they're *not* all valid and we can't just pick one at random.16:00
*** felipemonteiro__ has quit IRC16:00
cfriesenefried: yes, agreed16:00
efriededleafe: There was never a 1:1 host:a-c relationship.16:00
*** efried has quit IRC16:03
*** efried has joined #openstack-nova16:04
*** do3meli has quit IRC16:04
openstackgerritMatt Riedemann proposed openstack/nova stable/ocata: Don't persist RequestSpec.retry  https://review.openstack.org/56095516:05
bauzasefried: just for the context, they're not all valid because of some specific implementation details of the filter that I don't particularly like16:06
*** caisan has quit IRC16:06
efriedmelwitt: done16:06
bauzasadding more debt to either placement or the filters looks terrible to me16:06
openstackgerritMatt Riedemann proposed openstack/nova stable/ocata: Don't persist RequestSpec.retry  https://review.openstack.org/56016716:07
melwittefried: thanks16:07
*** l4yerffej has joined #openstack-nova16:07
bauzasefried: from a placement perspective, we can ask for a query that would *shard* resources between children, but that's the only trade-off I'd make16:07
bauzasefried: if we implement such thing, then we wouldn't need to pass the candidates down to the filters16:08
efriedbauzas: It's starting to sound like that might be the "easier" option16:08
bauzasnot the easier16:08
efriedbauzas: And as I've said, we know we're going to want that logic in placement eventually regardless.16:08
bauzasthe less debtful16:08
efriedbauzas: So this might as well be the motivation.16:08
efriedYeah16:08
bauzasok, I'll amend my spec accordingly16:09
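[editor's note] If the winnowing does move into placement, the request side might end up looking something like the sketch below. The `group_policy` knob is purely illustrative here - no such parameter existed at the time of this conversation, and the name is borrowed from the granular-request-groups discussion:

```python
from urllib.parse import urlencode

# Two numbered request groups, one per requested guest NUMA node, plus a
# hypothetical flag telling placement the groups must be satisfied by
# *different* providers, so every returned candidate is already valid.
query = urlencode({
    "resources1": "VCPU:1",
    "resources2": "VCPU:1",
    "group_policy": "isolate",   # hypothetical at the time of this chat
})
url = "/allocation_candidates?" + query
print(url)
```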
edleafeefried: sorry, in the API-SIG meeting16:09
edleafeefried: https://github.com/openstack/nova/blob/master/nova/scheduler/manager.py#L154-L16016:09
edleafeThat creates one a-c per rp16:09
melwittcdent: the placement-forbidden-traits blueprint has been added to a review runway. please ack if the next two weeks work for you for quick iteration on review16:10
edleafewe always just grab the first one of that list16:10
*** lucasagomes is now known as lucas-afk16:11
cdentmelwitt: thanks, it does16:11
melwittk, great16:11
efriededleafe: It creates a *list* of allocation requests per rp_uuid16:11
efriededleafe: ...by introspecting the payload, by the way :P16:11
*** gameon has quit IRC16:13
mriedemlyarwood: for https://review.openstack.org/#/c/543569/ - do we want a security reno for https://bugs.launchpad.net/nova/+bug/1739593 and CVE-2017-18191?16:13
openstackLaunchpad bug 1739593 in OpenStack Security Advisory "Swapping encrypted volumes can lead to data loss and a possible compute host DOS attack (CVE-2017-18191)" [Undecided,Incomplete]16:13
mriedemi see the ossa isn't published16:13
*** itlinux has joined #openstack-nova16:13
melwittjichen: the z/VM driver series has been added to a review runway. I know you have been active on the patches already but please let us know if there are any problems with the next two weeks for quick iteration on review16:14
edleafeefried: this is the part that I was referring to: https://github.com/openstack/nova/blob/master/nova/scheduler/filter_scheduler.py#L212-L21716:14
edleafe"information in the provider summaries"16:14
*** armaan has quit IRC16:15
*** armaan has joined #openstack-nova16:15
*** fragatina has joined #openstack-nova16:17
*** fragatina has quit IRC16:17
efriededleafe: Yeah, I get that we can do *some* weighing/filtering based on the provider summaries; but that still only gets us down to the list of allocation requests for a given host.  It doesn't help us pick among those.16:17
*** fragatina has joined #openstack-nova16:18
*** ssurana has joined #openstack-nova16:18
*** ssurana has quit IRC16:19
*** ssurana has joined #openstack-nova16:19
mriedemlyarwood: sounds like the ossa is blocked until the stable/ocata patch is up16:20
mriedemi'm +1 on the stable/pike change now if you want to start on the stable/ocata backport16:20
*** ccamacho has joined #openstack-nova16:22
*** ssurana has quit IRC16:23
*** ccamacho has quit IRC16:24
*** ccamacho has joined #openstack-nova16:24
*** yamahata has joined #openstack-nova16:26
openstackgerritBrianna Poulos proposed openstack/nova master: Implement certificate_utils  https://review.openstack.org/47994916:27
cfriesenmriedem: lyarwood: I've got https://review.openstack.org/#/c/560690/ up for the stable/pike backport, but there's a complication in that Pike treats the encryption stuff a bit differently.  Wondering how you want to handle it.  I wrote it up in the review.16:28
mriedemcfriesen: heh, see https://review.openstack.org/#/c/543569/16:29
mriedemcfriesen: comments inline16:32
*** udesale has quit IRC16:32
openstackgerritBrianna Poulos proposed openstack/nova master: Add trusted_image_certificates to REST API  https://review.openstack.org/48620416:32
*** tssurya has quit IRC16:34
mriedemcfriesen: oh yeah reading https://review.openstack.org/#/c/460243/ i see the problem kind of,16:34
mriedemthat was really a frankenstein of a patch, and should have been split up16:34
*** sdague has quit IRC16:35
mriedemhttps://review.openstack.org/#/c/460243/16/nova/virt/libvirt/driver.py@1453 specifically16:36
mriedemif that's its own bug, we'd have to backport separately before your change, but would need to talk to lyarwood16:36
mriedemi don't know if that applies before the changes to _disconnect_volume though16:36
mriedemif not, don't worry about it16:36
*** gouthamr has quit IRC16:38
cfriesenmriedem: If we wanted to backport "proper" handling of encrypted volumes in the error case it could be done in a separate patch, I don't think the ordering really matters.  For now I'll rework the backport to go off the stable/queens one.16:38
*** tbachman has quit IRC16:39
mriedemi think the encrypted volume stuff only needed to change because of the behavior change in _disconnect_volume for luks native encryption16:39
mriedemwhich we're not going to backport16:39
*** mdbooth has quit IRC16:39
cfriesenmriedem: well...with my fix if we hit exception.DeviceNotFound we'll continue on without ever calling encryptor.detach_volume()16:41
*** tbachman has joined #openstack-nova16:42
cfriesenmriedem: so I was wondering if we should move the call to encryptor.detach_volume() down right above the call to self._disconnect_volume(), but I didn't know enough about that code to know if that was okay.16:44
cfriesenit *seems* analogous to what happens in the newer code, but I could be missing something16:44
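[editor's note] For readers trying to follow the detach discussion, the ordering question is roughly the one sketched below - heavily simplified, with stand-in names, and not the actual libvirt driver code:

```python
class DeviceNotFound(Exception):
    """Stand-in for nova.exception.DeviceNotFound."""


def detach_volume(guest, encryptor, connection_info):
    """Simplified pseudo-flow of the detach path under discussion."""
    try:
        guest.detach_device(connection_info)   # may raise DeviceNotFound
    except DeviceNotFound:
        # Device already gone from the domain; keep going so the host-side
        # cleanup below still happens instead of returning early.
        pass
    # The question above: should the encryptor teardown sit here, right
    # before the host disconnect, so the DeviceNotFound path can't skip it?
    if encryptor is not None:
        encryptor.detach_volume(connection_info)
    guest.disconnect_volume(connection_info)    # host-side cleanup
```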
*** moshele has joined #openstack-nova16:44
mriedemi defer to lyarwood16:45
*** yamamoto has quit IRC16:47
*** fragatina has quit IRC16:47
*** andreas_s has quit IRC16:49
*** jmccarthy1 has joined #openstack-nova16:49
*** jmccarthy1 has left #openstack-nova16:49
openstackgerritChris Friesen proposed openstack/nova stable/pike: libvirt: disconnect volume from host during detach  https://review.openstack.org/56069016:50
melwittcfriesen: see the earlier version of the patch from before the encryption-related refactor https://review.openstack.org/#/c/515008/9/nova/virt/libvirt/driver.py16:50
*** udesale has joined #openstack-nova16:51
*** dpawlik has joined #openstack-nova16:52
efriedmelwitt: Care to have a look at https://review.openstack.org/#/c/553475/ ?  Then we can put update_provider_tree to bed.16:53
efriedmelwitt: Should be an easy one.16:53
melwittsure16:54
cfriesenmelwitt: perfect, that's exactly what I was thinking about doing16:55
*** dpawlik has quit IRC16:56
*** dtruong_ has quit IRC16:56
*** udesale has quit IRC16:56
*** dtruong has joined #openstack-nova16:57
*** fragatina has joined #openstack-nova16:57
*** arvindn05_ has joined #openstack-nova16:57
*** udesale has joined #openstack-nova16:58
*** udesale has quit IRC16:58
*** mdnadeem has quit IRC16:58
*** yamamoto has joined #openstack-nova16:58
*** ssurana has joined #openstack-nova17:00
*** elmaciej has joined #openstack-nova17:00
*** fragatina has quit IRC17:02
*** yamamoto has quit IRC17:03
*** dklyle has quit IRC17:07
efriedmikal: What's your feel on deferring the requirements issue out of the zvm driver series?17:08
openstackgerritJay Pipes proposed openstack/nova-specs master: Numbered request groups use different providers  https://review.openstack.org/56097417:09
*** sdague has joined #openstack-nova17:09
*** janki has quit IRC17:09
mriedemefried: like we did in the powervm series? :)17:10
efriedmriedem: Sure.  I.e. nobody cared enough to pursue it, so it dropped.  That's as it should be, if nobody cares enough to pursue it.17:10
mriedemi was also going to mention in that ML thread, btw, that if we did get pedantic about requirements, os-brick would also fall into that camp since only the libvirt and hyperv driver use it17:11
efriedmriedem: The ML thread is making it clearer with every note that this is a bigger issue than we can/should expect to solve in the zvm driver series.17:11
openstackgerritChris Friesen proposed openstack/nova stable/pike: libvirt: disconnect volume from host during detach  https://review.openstack.org/56069017:13
openstackgerritJay Pipes proposed openstack/nova-specs master: Standardize CPU resource tracking  https://review.openstack.org/55508117:13
mriedemi haven't read the latest17:13
mriedemi know what we'll do,17:14
mriedemi'll run for TC,17:14
mriedemand then push through that all projects must define optional requirements in [extras]17:14
*** gouthamr has joined #openstack-nova17:14
mriedemas a community wide goal17:14
dansmithmriedem: os-brick isn't really environment specific as much though17:14
dansmithwell, maybe that's not true, I guess it runs linux commands17:14
mriedemdansmith: this zvm lib dep isn't conditional on arch right?17:15
dansmithbut, it seems less confusingly installed than my linux machine with powervm and zvm stuff both installed17:15
dansmithmriedem: arch or platform?17:15
dansmithmriedem: I assume you run nova in linux land on z, so not platform17:15
dansmithand maybe not even on an s390x if it's like an HMC17:16
mriedemi assume this zvm driver runs on a linux host, and then calls REST APIs to some zvm hypervisor17:16
dansmithyeah17:16
mriedemlike powervm17:16
dansmithlinux for sure, but might even be on x8617:16
mriedemyeah totes17:16
*** fragatina has joined #openstack-nova17:16
mriedemdoesn't need to be linux on s390x17:16
mriedemthat would be dumb17:16
dansmithyeah, the arch thing isn't the concern as much as it's a lib for a hypervisor I don't need17:16
mriedemsure, but the point is, we are all over the board17:17
dansmithbrick is kinda the same-ish, although it's not as weird I think17:17
dansmithyes, definitely17:17
mriedemos-xenapi is also in requirements.txt17:17
mriedemtaskflow is also only used by powervm but in requirements.txt17:17
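A hypothetical sketch of the "[extras]" idea being discussed here: hypervisor-specific libraries become opt-in install targets instead of unconditional entries in requirements.txt. Nova itself uses pbr, where the equivalent is an [extras] section in setup.cfg; the project name and package lists below are illustrative only, not nova's actual packaging.

    from setuptools import setup

    setup(
        name='example-compute-driver-packaging',  # hypothetical project name
        version='0.0.1',
        extras_require={
            # Installed only on request, e.g.:
            #   pip install example-compute-driver-packaging[zvm]
            'zvm': ['zVMCloudConnector'],
            'powervm': ['pypowervm', 'taskflow'],
            'osbrick': ['os-brick'],
        },
    )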
*** jpena is now known as jpena|off17:17
dansmithI think that for people who get government audits for every line of installed code, it'd be a harder sell than os-brick being there but not used17:17
dansmithbut just a guess17:17
mriedemyou know what i miss? COOs17:18
dansmithI'd be happy using this as an opportunity to get right with the loahd on here17:18
dansmithmriedem: really? that's funny, I don't miss them at all17:18
mriedemi was being sarcastic17:18
dansmithWAT17:18
dansmithI had no idea17:18
mriedembut your audit comment got me reminiscing17:18
dansmithI know :)17:18
mriedemand now this https://www.youtube.com/watch?v=CZ_3G4xqSDQ17:18
mriedemlooking at those guys reminds me i need to schedule a haircut17:19
*** hamzy_ has joined #openstack-nova17:20
mriedemalright wtf was i doing now17:21
*** hamzy has quit IRC17:22
mriedemefried: i have replied for great posterity17:24
efriedmriedem: thanks17:24
openstackgerritMatt Riedemann proposed openstack/nova master: Wait for network-vif-plugged before starting live migration  https://review.openstack.org/55800117:25
openstackgerritMatt Riedemann proposed openstack/nova master: Add check if neutron "binding-extended" extension is available  https://review.openstack.org/52354817:25
openstackgerritMatt Riedemann proposed openstack/nova master: Add "bind_ports_to_host" neutron API method  https://review.openstack.org/52360417:25
openstackgerritMatt Riedemann proposed openstack/nova master: Add "delete_port_binding" network API method  https://review.openstack.org/55217017:25
openstackgerritMatt Riedemann proposed openstack/nova master: Add "activate_port_binding" neutron API method  https://review.openstack.org/55594717:25
openstackgerritMatt Riedemann proposed openstack/nova master: Delete port bindings in setup_networks_on_host if teardown=True  https://review.openstack.org/55633317:25
openstackgerritMatt Riedemann proposed openstack/nova master: Implement migrate_instance_start method for neutron  https://review.openstack.org/55633417:25
openstackgerritMatt Riedemann proposed openstack/nova master: Add VIFMigrateData object for live migration  https://review.openstack.org/51542317:25
openstackgerritMatt Riedemann proposed openstack/nova master: WIP: libvirt: use dest host vif migrate details for live migration  https://review.openstack.org/55137017:25
openstackgerritMatt Riedemann proposed openstack/nova master: WIP: compute: use port binding extended API during live migration  https://review.openstack.org/55137117:25
openstackgerritMatt Riedemann proposed openstack/nova master: WIP: Port binding based on events during live migration  https://review.openstack.org/43487017:25
openstackgerritMatt Riedemann proposed openstack/nova master: conductor: use port binding extended API in during live migrate  https://review.openstack.org/52253717:25
*** moshele has quit IRC17:27
mriedemarvindn05: if you're just asking questions in a patch ( https://review.openstack.org/#/c/546357/ ) you don't need to -117:27
*** suresh12 has joined #openstack-nova17:27
*** suresh12 has quit IRC17:28
*** andreas_s has joined #openstack-nova17:29
*** itlinux has quit IRC17:30
*** corvus is now known as jeblair17:31
*** ssurana has quit IRC17:31
*** jeblair is now known as corvus17:31
*** ssurana has joined #openstack-nova17:31
*** andreas_s has quit IRC17:34
*** ssurana has quit IRC17:35
*** Nel1x has quit IRC17:35
*** Nel1x has joined #openstack-nova17:39
*** wolverineav has joined #openstack-nova17:44
*** lpetrut has quit IRC17:44
*** amoralej is now known as amoralej|off17:45
arvindn05_mriedem: thanks...will keep in mind.17:47
*** sambetts is now known as sambetts|afk17:48
mriedemarvindn05_: i put some comments in your spec amendment, thanks for starting that https://review.openstack.org/#/c/560718/117:48
mriedemjaypipes: dansmith: efried: ^ that's going to require some placement love17:49
dansmithugh17:49
mriedemf yeah, rebuild17:49
*** eharney_ has joined #openstack-nova17:51
*** eharney has quit IRC17:51
*** moshele has joined #openstack-nova17:51
*** eharney_ is now known as eharney17:51
mriedemif we can request resources=VCPU:0 and in_tree=<compute node uuid> then i think we'd be ok17:52
efriedmriedem: I think jaypipes wanted to deal with that by using the GET /resource_providers API instead of GET /a_c.  (I disagreed; I think ?in_tree in GET /a_c is a fine idea)17:53
*** amodi has quit IRC17:53
dansmithbut asking for zero resources seems weird17:53
mriedemactually if we added in_tree to GET /allocation_candidates, just make resources optional in the same microversion17:53
*** itlinux has joined #openstack-nova17:53
mriedemso GET /allocation_candidates?in_tree=1234&required=CUSTOM_NEW_IMAGE_TRAIT17:54
*** tesseract has quit IRC17:55
dansmiththat also seems a little weird to me17:55
mriedembecause we aren't going to actually allocate anything?17:55
*** dougshelley66 has quit IRC17:56
dansmithwell, not that we aren't going to allocate anything really17:56
dansmithjust that we're asking a weird question there17:56
mriedemok, well, some options17:56
mriedemthis doesn't seem like an impossibly hard issue though17:57
*** jackie-truong has quit IRC17:57
dansmithnot to validate it no, just weird to do it that way is all18:00
*** suresh12 has joined #openstack-nova18:00
*** suresh12 has quit IRC18:01
*** suresh12_ has joined #openstack-nova18:01
*** gjayavelu has joined #openstack-nova18:01
*** yamamoto has joined #openstack-nova18:04
*** dklyle has joined #openstack-nova18:05
*** wolverineav has quit IRC18:06
*** wolverin_ has joined #openstack-nova18:06
*** pcaruana has quit IRC18:09
*** wolverineav has joined #openstack-nova18:10
*** wolverin_ has quit IRC18:10
*** sidx64 has joined #openstack-nova18:10
dansmithmriedem: why can't we just GET /rp/$uuid and look at the traits?18:13
*** sidx64_ has joined #openstack-nova18:13
*** sidx64 has quit IRC18:15
*** ssurana has joined #openstack-nova18:15
*** felipemonteiro has joined #openstack-nova18:17
*** sree has quit IRC18:18
*** felipemonteiro_ has joined #openstack-nova18:18
*** sree has joined #openstack-nova18:18
*** openstackgerrit has quit IRC18:19
*** itlinux has quit IRC18:20
*** mvk has quit IRC18:20
*** breton_ is now known as breton18:20
*** felipemonteiro has quit IRC18:22
*** itlinux has joined #openstack-nova18:23
mriedemwe could18:23
*** moshele has quit IRC18:23
*** damien_r has joined #openstack-nova18:23
mriedemin fact,18:23
arvindn05_mriedem: that was the suggestion in the bp18:23
arvindn05_:)18:23
mriedemGET /resource_providers?in_tree=<node uuid>&required=<traits>18:23
*** huanxie has quit IRC18:23
mriedemarvindn05_: if you mean, "For the above issue, the scheduler can request traits of current host and try18:24
mriedemto match those traits with the traits specified in the image."18:24
mriedemthat wasn't clear to me as a proposed solution at all18:24
mriedemi might have gotten hung up on the 'current host' thing, thinking about RPs18:24
mriedemarvindn05_: if you can clarify that specifically talking about https://developer.openstack.org/api-ref/placement/#list-resource-provider-traits then it's probably fine18:25
arvindn05_sorry if it was unclear...i will add that we will call /resource_providers/{uuid}/traits18:26
mriedemok that makes more sense for the alternative now18:27
arvindn05_i am looking at the comments now...will add the above to make it clearer18:27
arvindn05_i thought about the /resource_providers/{uuid}/traits solution after we discussed the issue yesterday to minimize changes18:29
arvindn05_mriedem: my original proposal to modify the API to accept no resources is now the alternate suggestion18:30
mriedemyeah now it makes more sense to me18:31
*** links has joined #openstack-nova18:31
mriedemjust need to clarify18:31
mriedemi don't exactly know where we'd do this validation...in a new filter?18:31
*** arvindn05 has quit IRC18:31
mriedemor maybe in the ImagePropertiesFilter if we know we're doing a rebuild?18:32
*** sree has quit IRC18:32
dansmithor just in rebuild code in general18:35
dansmithor, I guess we need to know if they had imagepropfilter enabled in that case?18:36
dansmithbleh18:36
*** felipemonteiro_ has quit IRC18:37
*** openstackgerrit has joined #openstack-nova18:37
openstackgerritEric Berglund proposed openstack/nova master: PowerVM Driver: Localdisk  https://review.openstack.org/54930018:37
*** lpetrut has joined #openstack-nova18:37
*** amodi has joined #openstack-nova18:40
*** damien_r has quit IRC18:42
*** itlinux has quit IRC18:42
*** Nel1x has quit IRC18:42
*** damien_r has joined #openstack-nova18:43
openstackgerritMerged openstack/nova master: Make the nova-multiattach job non-voting temporarily  https://review.openstack.org/56090918:44
efried^ time to queue up the rechecks18:45
*** arvindn05 has joined #openstack-nova18:46
openstackgerritJackie Truong proposed openstack/nova master: Implement certificate_utils  https://review.openstack.org/47994918:47
openstackgerritJackie Truong proposed openstack/nova master: Add trusted_image_certificates to REST API  https://review.openstack.org/48620418:47
openstackgerritJackie Truong proposed openstack/nova master: Add certificate validation docs  https://review.openstack.org/56015818:47
*** mgoddard has quit IRC18:47
arvindn05_dansmith: mriedem: my assumption was we'd do the check in select destination before the filters https://github.com/openstack/nova/blob/master/nova/scheduler/manager.py#L13518:48
arvindn05_an else case where, in the case of a rebuild, we'd make a call to /resource_providers/{uuid}/traits to get the traits and match them18:49
mriedemidk, today we only restrict a rebuild with a new image based on the enabled filters, as dansmith mentions18:51
mriedemso i kind of think we should stick with that behavior18:51
mriedemi think we'd have everything we need within the ImagePropertiesFilter itself to understand if we need to do this check18:52
mriedemit should all be in the reqspec18:52
mriedemand the HostState has the compute node uuid to get the resource provider18:52
mriedemrest api calls from a filter isn't awesome, but if we just do this conditionally when we know we're doing a rebuild, it's the same rest api call, just in a different place18:53
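A minimal sketch of the conditional check described above, not nova's actual filter code: on a same-host rebuild with a new image, fetch the compute node's resource provider traits from placement and verify that the image's required traits are a subset. The placement URL, token handling, and function names are assumptions for illustration.

    import requests


    def get_provider_traits(placement_url, token, provider_uuid):
        # GET /resource_providers/{uuid}/traits (placement microversion >= 1.6)
        # returns a body like {"resource_provider_generation": N, "traits": [...]}.
        resp = requests.get(
            '%s/resource_providers/%s/traits' % (placement_url, provider_uuid),
            headers={'X-Auth-Token': token,
                     'OpenStack-API-Version': 'placement 1.6'})
        resp.raise_for_status()
        return set(resp.json()['traits'])


    def image_traits_ok(image_props, provider_traits):
        # Traits required by the new image are expressed as image properties
        # of the form trait:CUSTOM_FOO=required.
        required = {name[len('trait:'):] for name, value in image_props.items()
                    if name.startswith('trait:') and value == 'required'}
        # The rebuild is acceptable only if the host already exposes every
        # trait the new image requires.
        return required <= provider_traits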
dansmithyeah if you have that filter disabled, no fair rejecting a rebuild based on stuff in there18:53
arvindn05_not sure i understand...if the image traits have been updated, shouldnt the rebuild take that into account?18:54
*** r-daneel has joined #openstack-nova18:54
*** sree has joined #openstack-nova18:54
arvindn05_regardless of whether the image properties filter is enabled18:54
arvindn05_if the image changed completely with new traits, and image filter was never enabled, then the traits will be ignored18:55
mriedemif you don't care about scheduling based on image properties, then why should we require it?18:55
mriedemif you do care about scheduling based on image properties, then the filter will be enabled18:56
*** Nel1x has joined #openstack-nova18:56
mriedemit's also a default filter, which i don't think anyone disables, so it's probably fine18:56
arvindn05_the image properties and traits are parallel concepts though18:57
mriedemumm18:58
mriedemwe get the traits via the image properties18:58
arvindn05_i dont need to enable the image properties filter to use the traits filtering mechanism18:58
mriedemi guess you could argue that we do traits-based filtering in placement via flavor extra specs regardless of any flavor extra spec specific filters being enabled18:58
arvindn05_if you completely remove image prop filter...the traits filtering still works since it goes through placement18:58
arvindn05_yup18:59
*** sree has quit IRC18:59
mriedemidk, i don't want to have to make this decision18:59
openstackgerritMatt Riedemann proposed openstack/osc-placement master: RP list: member_of and resources parameters (v1.3, v1.4)  https://review.openstack.org/51118319:00
openstackgerritMatt Riedemann proposed openstack/osc-placement master: RP delete inventories (v1.5)  https://review.openstack.org/51464219:00
openstackgerritMatt Riedemann proposed openstack/osc-placement master: CLI for traits (v1.6)  https://review.openstack.org/51464319:00
openstackgerritMatt Riedemann proposed openstack/osc-placement master: Resource class set (v1.7)  https://review.openstack.org/51464419:00
openstackgerritMatt Riedemann proposed openstack/osc-placement master: Usages per project and user (v1.8, v1.9)  https://review.openstack.org/51464619:00
openstackgerritMatt Riedemann proposed openstack/osc-placement master: CLI allocation candidates (v1.10)  https://review.openstack.org/51464719:00
openstackgerritMatt Riedemann proposed openstack/osc-placement master: New dict format of allocations (v1.11, v1.12)  https://review.openstack.org/54281919:00
*** sree has joined #openstack-nova19:00
openstackgerritMatt Riedemann proposed openstack/osc-placement master: Transactionally update allocations (v1.13)  https://review.openstack.org/54667419:00
openstackgerritMatt Riedemann proposed openstack/osc-placement master: Add nested resource providers (v1.14)  https://review.openstack.org/54667519:00
openstackgerritMatt Riedemann proposed openstack/osc-placement master: Limit allocation candidates (v1.15, v1.16)  https://review.openstack.org/54804319:00
openstackgerritMatt Riedemann proposed openstack/osc-placement master: Allocation candidates parameter: required (v1.17)  https://review.openstack.org/54832619:00
mriedemarvindn05_: i'm sure bauzas would love to think about this19:00
arvindn05_will wait for their comments....will update the spec to make it clear based on your comments for now...thanks mriedem19:01
*** amodi has quit IRC19:02
*** eharney has quit IRC19:03
*** salv-orlando has quit IRC19:04
arvindn05_mriedem: btw are you convinced with the argument that image properties filter and using traits on images are independent of each other?19:04
*** salv-orlando has joined #openstack-nova19:04
*** sree has quit IRC19:05
*** AlexeyAbashkin has quit IRC19:06
*** damien_r has quit IRC19:07
mriedemarvindn05_: no19:08
mriedembut,19:08
mriedemi also don't feel strongly about it19:08
arvindn05_that's half the battle won :)19:08
mriedemha, now you're getting it19:09
*** salv-orlando has quit IRC19:09
openstackgerritMatt Riedemann proposed openstack/nova master: Skip placement on rebuild in same host  https://review.openstack.org/54635719:09
arvindn05_but ignoring the rebuild issue, like i mentioned, if tomorrow someone removed the image properties filter then, based on the original BP, there would be no regression of the feature19:09
mriedemhongbin: when you get a minute, can you propose backports for https://review.openstack.org/#/c/546357/ to queens and pike?19:10
hongbinmriedem: sure19:10
mriedemthanks19:10
mriedemyou'll have to do it from command line19:10
hongbini see, sure, no problem19:11
openstackgerritMerged openstack/nova master: uncap eventlet in nova  https://review.openstack.org/56042019:12
openstackgerritMerged openstack/nova master: xenapi: Support live migration in pooled multi-nodes environment  https://review.openstack.org/48945119:12
*** felipemonteiro has joined #openstack-nova19:13
*** felipemonteiro_ has joined #openstack-nova19:14
openstackgerritMerged openstack/nova master: Remove mox in unit/virt/xenapi/test_vm_utils.py (2)  https://review.openstack.org/55899319:15
*** sree has joined #openstack-nova19:15
openstackgerritMerged openstack/nova master: Remove mox in unit/virt/xenapi/test_vm_utils.py (3)  https://review.openstack.org/55925819:15
openstackgerritHongbin Lu proposed openstack/nova stable/queens: Skip placement on rebuild in same host  https://review.openstack.org/56101419:16
*** felipemonteiro has quit IRC19:17
*** sree has quit IRC19:20
*** gouthamr has quit IRC19:20
*** sree has joined #openstack-nova19:21
*** amodi has joined #openstack-nova19:21
openstackgerritHongbin Lu proposed openstack/nova stable/pike: Skip placement on rebuild in same host  https://review.openstack.org/56101519:22
*** sree has quit IRC19:25
openstackgerritArvind Nadendla proposed openstack/nova-specs master: Handle rebuild of instance with new image  https://review.openstack.org/56071819:26
*** sree has joined #openstack-nova19:26
*** sdeath has joined #openstack-nova19:30
*** gouthamr has joined #openstack-nova19:30
*** andreas_s has joined #openstack-nova19:30
*** sree has quit IRC19:31
*** amodi has quit IRC19:32
*** wolverin_ has joined #openstack-nova19:34
*** wolverineav has quit IRC19:34
openstackgerritArvind Nadendla proposed openstack/nova-specs master: Handle rebuild of instance with new image  https://review.openstack.org/56071819:35
*** andreas_s has quit IRC19:35
*** pchavva1 has joined #openstack-nova19:36
*** sree has joined #openstack-nova19:38
cfriesenmelwitt: for the detach_volume() change, are you suggesting a try/except block around the call to encryptor.detach_volume()?19:43
*** sree has quit IRC19:43
melwittcfriesen: yes. because if you get there on a second attempt and you already ran it in the past, os-brick will raise "unknown device" because it can't find the attached device19:43
*** sridharg has quit IRC19:43
cfriesenmakes sense, will do19:44
melwittwe backported the os-brick change back to pike but we can't bump requirements.txt on stable19:44
melwittso it's not guaranteed that someone running stable/pike will have the os-brick version that will ignore exit code 4 (unknown) for you19:45
*** links has quit IRC19:46
*** amodi has joined #openstack-nova19:47
melwitts/you/them/19:48
*** linkmark has quit IRC19:49
*** gouthamr has quit IRC19:50
*** gouthamr has joined #openstack-nova19:51
*** pchavva1 has quit IRC19:51
*** pchavva has quit IRC19:51
*** pchavva has joined #openstack-nova19:51
*** moshele has joined #openstack-nova19:54
*** jackie-truong has joined #openstack-nova19:54
*** suresh12_ has quit IRC19:55
*** suresh12 has joined #openstack-nova19:56
*** mvk has joined #openstack-nova19:56
*** suresh12 has quit IRC19:57
*** suresh12 has joined #openstack-nova19:57
*** moshele has quit IRC19:58
*** artom has quit IRC19:59
mriedemi found another cells upcall20:00
mriedemhttps://github.com/openstack/nova/blob/c531b7905f5a9f8a5bfa355a2047d04032cbf847/nova/virt/xenapi/host.py#L8620:01
mriedemhowever, it's for that completely broken and deprecated anyway host maintenance / power action stuff20:01
*** damien_r has joined #openstack-nova20:02
*** felipemonteiro_ has quit IRC20:03
*** felipemonteiro_ has joined #openstack-nova20:03
*** oanson has quit IRC20:05
*** salv-orlando has joined #openstack-nova20:05
*** damien_r has quit IRC20:06
openstackgerritMatt Riedemann proposed openstack/nova master: Use Queens UCA for nova-multiattach job  https://review.openstack.org/55431720:08
cfriesenmelwitt: I don't have the actual "target" to report in the error message.  do you know how to get it, or can I just use "disk_dev" or something?20:09
*** salv-orlando has quit IRC20:09
*** salv-orlando has joined #openstack-nova20:09
melwittcfriesen: yeah, disk_dev is the equivalent20:10
melwittthis version of the patch was almost right, it just also needed to reraise if not exit code 4 https://review.openstack.org/#/c/515008/3/nova/virt/libvirt/driver.py20:11
cfriesenah, okay20:11
melwittbecause we still want to fail if a legit detach error happened. but we'll want to ignore "unknown" only20:11
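A minimal sketch of the retry-tolerant detach being discussed, assuming an os-brick volume encryptor (`encryptor`), its keyword arguments (`encryption`), and the guest device name (`disk_dev`) are already in scope; where exactly this lands in the libvirt driver is up to the actual patch.

    from oslo_concurrency import processutils
    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)


    def detach_encryptor_ignoring_missing(encryptor, encryption, disk_dev):
        try:
            encryptor.detach_volume(**encryption)
        except processutils.ProcessExecutionError as exc:
            # Exit code 4 means "unknown device": the encrypted volume was
            # already torn down by an earlier detach attempt, so a retry should
            # treat it as done rather than fail the whole detach.
            if exc.exit_code == 4:
                LOG.warning('Ignoring missing encrypted volume %s during '
                            'detach', disk_dev)
                return
            # Anything else is a genuine detach failure and must be re-raised.
            raise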
*** oanson has joined #openstack-nova20:12
*** sree has joined #openstack-nova20:15
openstackgerritMatt Riedemann proposed openstack/nova master: DNM: test live_migration_wait_for_vif_plug=True  https://review.openstack.org/55800620:17
*** itlinux has joined #openstack-nova20:17
openstackgerritMatt Riedemann proposed openstack/nova master: DNM: test live_migration_wait_for_vif_plug=True  https://review.openstack.org/55800620:17
*** sree has quit IRC20:19
*** yamamoto has quit IRC20:21
*** yamamoto has joined #openstack-nova20:22
*** markvoelker_ has quit IRC20:23
*** elmaciej has quit IRC20:23
*** archit has joined #openstack-nova20:33
*** sidx64_ has quit IRC20:33
*** amodi has quit IRC20:33
*** archit is now known as amodi20:33
*** wolverin_ has quit IRC20:34
*** wolverineav has joined #openstack-nova20:34
*** tbachman has quit IRC20:38
*** sree has joined #openstack-nova20:38
openstackgerritChris Friesen proposed openstack/nova stable/pike: libvirt: disconnect volume from host during detach  https://review.openstack.org/56069020:39
*** wolverineav has quit IRC20:39
*** elmaciej has joined #openstack-nova20:42
*** pchavva has quit IRC20:42
mriedemmelwitt: i'll be skipping the meeting today for fun times, left some links to gate bugs in the meeting agenda this morning, but the multiattach one is already semi resolved20:42
*** sree has quit IRC20:43
*** mriedem is now known as mriedem_funtime20:43
melwittmriedem: we shall miss you. okay, I'll take a look at those bugs20:43
*** tbachman has joined #openstack-nova20:46
*** wolverineav has joined #openstack-nova20:46
*** tbachman has quit IRC20:47
openstackgerritJay Pipes proposed openstack/nova-specs master: Standardize CPU resource tracking  https://review.openstack.org/55508120:47
*** salv-orlando has quit IRC20:47
*** wolverineav has quit IRC20:47
*** salv-orlando has joined #openstack-nova20:48
*** wolverineav has joined #openstack-nova20:48
*** takashin has joined #openstack-nova20:49
*** hamzy_ has quit IRC20:49
melwittnova meeting in 10 minutes20:50
*** eharney has joined #openstack-nova21:00
*** itlinux has quit IRC21:01
openstackgerritEric Berglund proposed openstack/nova master: PowerVM Driver: Localdisk  https://review.openstack.org/54930021:02
*** gouthamr has quit IRC21:06
*** slaweq has quit IRC21:07
*** slaweq has joined #openstack-nova21:08
*** evin has quit IRC21:11
*** slaweq has quit IRC21:13
*** Nel1x has quit IRC21:14
*** gouthamr has joined #openstack-nova21:15
*** Nel1x has joined #openstack-nova21:15
*** Nel1x has quit IRC21:16
*** Nel1x has joined #openstack-nova21:16
*** dpawlik has joined #openstack-nova21:25
*** openstackstatus has quit IRC21:27
*** openstack has joined #openstack-nova21:29
*** ChanServ sets mode: +o openstack21:29
*** dpawlik has quit IRC21:30
*** yangyape_ has joined #openstack-nova21:31
*** yangyapeng has quit IRC21:31
*** sree has joined #openstack-nova21:32
*** lpetrut has quit IRC21:35
*** sambetts|afk has quit IRC21:36
*** sree has quit IRC21:36
*** sambetts_ has joined #openstack-nova21:38
*** sree has joined #openstack-nova21:38
*** Nel1x has quit IRC21:39
*** Nel1x has joined #openstack-nova21:39
*** burt has quit IRC21:42
*** sree has quit IRC21:43
*** jackie-truong has quit IRC21:46
*** tbachman has joined #openstack-nova21:51
cfriesenmelwitt: I've got a theory for https://bugs.launchpad.net/nova/+bug/1763181 but I guess mriedem and bauzas are away21:52
openstackLaunchpad bug 1763181 in OpenStack Compute (nova) "test_parallel_evacuate_with_server_group intermittently fails" [Medium,Confirmed]21:52
*** gouthamr has quit IRC21:54
*** tbachman has quit IRC21:55
*** sree has joined #openstack-nova21:55
*** esberglu has quit IRC21:58
openstackgerritMichael Still proposed openstack/nova master: Move xenapi disk resizing to privsep.  https://review.openstack.org/55224221:59
openstackgerritMichael Still proposed openstack/nova master: Sync xenapi and libvirt on what flags to pass e2fsck.  https://review.openstack.org/55407821:59
openstackgerritMichael Still proposed openstack/nova master: Move xenapi partition copies to privsep.  https://review.openstack.org/55360521:59
openstackgerritMichael Still proposed openstack/nova master: Move image conversion to privsep.  https://review.openstack.org/55443721:59
openstackgerritMichael Still proposed openstack/nova master: We don't need utils.trycmd any more.  https://review.openstack.org/55443921:59
openstackgerritMichael Still proposed openstack/nova master: We no longer need rootwrap.  https://review.openstack.org/55443821:59
*** sree has quit IRC22:00
*** tbachman has joined #openstack-nova22:03
*** eharney has quit IRC22:03
openstackgerritEric Fried proposed openstack/nova master: add lower-constraints job  https://review.openstack.org/55596122:05
openstackgerritEric Fried proposed openstack/nova master: add lower-constraints job  https://review.openstack.org/55596122:05
melwittcfriesen: noyce, thanks for the comment on the bug22:07
*** felipemonteiro__ has joined #openstack-nova22:08
*** felipemonteiro_ has quit IRC22:12
cfriesenmelwitt: there's another variant that can affect migrations, where we'd want to look at the migration records and include any hosts that are the destination or source of a migration for instances in the group.22:13
cfriesenthe joys of race conditions22:14
melwittguh22:15
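A purely illustrative sketch of the variant cfriesen describes: when checking anti-affinity, the set of hosts considered occupied by the group should include both ends of any in-progress migration for its instances. The simple attribute names below are assumptions, not nova's actual Instance/Migration objects.

    def hosts_used_by_group(group_instances, in_progress_migrations):
        # Hosts the group's instances currently sit on.
        hosts = {inst.host for inst in group_instances if inst.host}
        for mig in in_progress_migrations:
            # Until a migration finishes or rolls back, the instance may end
            # up on either side, so both hosts have to count as used.
            if mig.source_compute:
                hosts.add(mig.source_compute)
            if mig.dest_compute:
                hosts.add(mig.dest_compute)
        return hosts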
*** felipemonteiro__ has quit IRC22:15
*** felipemonteiro__ has joined #openstack-nova22:16
*** lbragstad has quit IRC22:16
*** salv-orlando has quit IRC22:18
*** liverpooler has quit IRC22:18
*** salv-orlando has joined #openstack-nova22:19
*** salv-orlando has quit IRC22:23
*** harlowja has joined #openstack-nova22:24
*** rcernin has joined #openstack-nova22:30
*** arvindn05_ has quit IRC22:32
*** arvindn0_ has joined #openstack-nova22:33
*** Sukhdev has joined #openstack-nova22:35
*** linkmark has joined #openstack-nova22:39
*** wolverineav has quit IRC22:44
*** wolverineav has joined #openstack-nova22:44
*** sree has joined #openstack-nova22:50
*** yikun has quit IRC22:51
*** yikun has joined #openstack-nova22:52
*** tbachman has quit IRC22:52
*** sree has quit IRC22:54
*** suresh12 has quit IRC22:55
*** hongbin has quit IRC22:58
*** andreas_s has joined #openstack-nova23:03
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in test_serversV21.py (1)  https://review.openstack.org/56082123:05
*** _ix has quit IRC23:06
*** andreas_s has quit IRC23:09
*** swamireddy has quit IRC23:14
*** Guest30936 has quit IRC23:14
mriedem_funtimenew unit test race fail http://logs.openstack.org/78/554378/3/check/openstack-tox-py27/54b38c0/testr_results.html.gz23:20
*** mriedem_funtime is now known as mriedem23:20
*** sdague has quit IRC23:21
*** gouthamr has joined #openstack-nova23:21
*** cdent has quit IRC23:21
mriedemhttps://bugs.launchpad.net/nova/+bug/176353523:22
openstackLaunchpad bug 1763535 in OpenStack Compute (nova) "nova.tests.unit.virt.xenapi.test_vm_utils.StreamDiskTestCase.test_non_ami intermittently fails" [High,Confirmed]23:22
*** sdeath has quit IRC23:22
mriedemtakashin: ^ you should probably look at that23:28
*** Nel1x has quit IRC23:30
*** itlinux has joined #openstack-nova23:35
mriedemmelwitt: https://review.openstack.org/#/q/I78dead482e2ed8262745cb08c9f4ab71035adb33 are what fix the multiattach job23:36
mriedemhttps://review.openstack.org/#/c/554317/ on top has to depend on the queens backport,23:36
mriedemotherwise zuul pulls the queens job def which has the bad branch specifier in it and borks everything up23:37
mriedemso once those patches to remove the in-tree branch specifier are merged, we can make the multiattach job voting and gating again23:37
*** mriedem is now known as mriedem_afk23:38
*** felipemonteiro_ has joined #openstack-nova23:39
-openstackstatus- NOTICE: The Etherpad service at https://etherpad.openstack.org/ is being restarted to pick up the latest release version; browsers should see only a brief ~1min blip before reconnecting automatically to active pads23:40
*** elmaciej has quit IRC23:40
*** felipemonteiro__ has quit IRC23:43
*** mlavalle has quit IRC23:45
*** amodi has quit IRC23:47
*** arvindn0_ has quit IRC23:51
openstackgerritMerged openstack/nova master: Remove :return from update_provider_tree docstring  https://review.openstack.org/56044223:51
*** claudiub|2 has quit IRC23:53
*** didelspk has joined #openstack-nova23:59
