Monday, 2019-01-14

00:11 *** slaweq has joined #openstack-nova
00:15 *** slaweq has quit IRC
00:17 *** amodi_ has quit IRC
00:19 <eandersson> What kind of impact does getting additional filtering / weighting data have on scheduler performance?
00:19 <eandersson> e.g. flavor information
00:20 <eandersson> It's easy enough to test with a few computes and VMs, but not sure how to test it on more realistic PROD scales.
00:51 *** ileixe has joined #openstack-nova
00:57 *** jamesdenton has quit IRC
00:57 *** jamesdenton has joined #openstack-nova
01:09 *** markvoelker has joined #openstack-nova
01:11 *** alex_xu has joined #openstack-nova
01:45 *** markvoelker has quit IRC
01:59 *** TxGirlGeek has joined #openstack-nova
02:11 *** slaweq has joined #openstack-nova
02:14 *** hongbin has joined #openstack-nova
02:15 *** slaweq has quit IRC
02:33 *** mschuppert has quit IRC
02:52 *** mhen has quit IRC
02:57 *** sapd__x has joined #openstack-nova
02:57 *** psachin has joined #openstack-nova
03:00 *** mhen has joined #openstack-nova
03:01 *** remi_ness has quit IRC
03:04 *** brault has joined #openstack-nova
03:08 *** sapd__x has quit IRC
03:09 *** brault has quit IRC
03:15 *** remi_ness has joined #openstack-nova
03:18 *** markvoelker has joined #openstack-nova
03:19 *** ircuser-1 has joined #openstack-nova
03:25 *** sapd__x has joined #openstack-nova
03:30 *** whoami-rajat has joined #openstack-nova
03:34 *** markvoelker has quit IRC
03:34 *** markvoelker has joined #openstack-nova
03:39 *** markvoelker has quit IRC
03:39 *** TxGirlGeek has quit IRC
03:48 *** markvoelker has joined #openstack-nova
03:51 *** diga has joined #openstack-nova
03:54 *** markvoelker has quit IRC
03:55 *** markvoelker has joined #openstack-nova
04:08 *** Kevin_Zheng has quit IRC
04:11 *** slaweq has joined #openstack-nova
04:16 *** slaweq has quit IRC
04:23 *** udesale has joined #openstack-nova
04:29 *** tbachman has joined #openstack-nova
04:33 *** sapd__x has quit IRC
04:33 *** owalsh_ has joined #openstack-nova
04:33 *** sridharg has joined #openstack-nova
04:37 *** owalsh has quit IRC
04:45 *** sapd__x has joined #openstack-nova
04:55 *** wolverineav has joined #openstack-nova
05:07 *** _pewp_ has joined #openstack-nova
05:10 *** ratailor has joined #openstack-nova
05:39 *** markvoelker has quit IRC
05:51 *** wolverineav has quit IRC
05:55 *** wolverineav has joined #openstack-nova
05:59 *** hongbin_ has joined #openstack-nova
06:00 *** brault has joined #openstack-nova
06:02 *** hongbin has quit IRC
06:02 *** hongbin_ has quit IRC
06:05 *** brault has quit IRC
06:06 *** macza has joined #openstack-nova
06:07 *** wolverineav has quit IRC
06:07 *** cfriesen has joined #openstack-nova
06:11 *** slaweq has joined #openstack-nova
06:11 *** udesale has quit IRC
06:12 *** udesale has joined #openstack-nova
06:12 *** markvoelker has joined #openstack-nova
06:15 *** slaweq has quit IRC
06:18 *** brinzhang has joined #openstack-nova
06:35 *** belmoreira has quit IRC
06:41 *** wolverineav has joined #openstack-nova
06:45 *** wolverineav has quit IRC
06:46 *** slaweq has joined #openstack-nova
06:50 *** slaweq has quit IRC
06:51 *** remi_ness has quit IRC
06:53 *** Luzi has joined #openstack-nova
06:58 *** udesale has quit IRC
06:58 *** udesale has joined #openstack-nova
06:58 *** maciejjozefczyk has quit IRC
07:02 *** maciejjozefczyk has joined #openstack-nova
07:06 *** slaweq has joined #openstack-nova
07:09 *** ileixe has quit IRC
07:11 *** pcaruana has joined #openstack-nova
07:13 *** wolverineav has joined #openstack-nova
07:15 *** slaweq has quit IRC
07:29 *** dpawlik has joined #openstack-nova
07:29 *** sapd1 has quit IRC
07:29 *** sapd__x has quit IRC
07:29 *** sapd1 has joined #openstack-nova
07:29 *** sapd__x has joined #openstack-nova
07:42 *** brault has joined #openstack-nova
07:42 *** ccamacho has joined #openstack-nova
07:44 <openstackgerrit> Zhenyu Zheng proposed openstack/nova master: Allow run metadata api per cell  https://review.openstack.org/624612
07:48 *** maciejjozefczyk has quit IRC
07:49 *** belmoreira has joined #openstack-nova
07:58 *** slaweq has joined #openstack-nova
07:58 *** markvoelker has quit IRC
08:01 *** lpetrut has joined #openstack-nova
08:03 *** markvoelker has joined #openstack-nova
08:04 *** maciejjozefczyk has joined #openstack-nova
08:05 <maciejjozefczyk> jaypipes: hey. I did a little update on counters here: https://review.openstack.org/#/c/614167/ Please check that ;) Sorry for spamming a lot about this
08:06 *** moshele has joined #openstack-nova
08:08 *** wolverineav has quit IRC
08:09 *** rpittau has joined #openstack-nova
08:10 *** markvoelker has quit IRC
08:11 *** helenafm has joined #openstack-nova
08:15 *** sapd1 has quit IRC
08:15 *** sapd1_x has joined #openstack-nova
08:16 *** sapd1 has joined #openstack-nova
08:18 *** ratailor_ has joined #openstack-nova
08:19 *** sapd__x has quit IRC
08:21 *** ratailor has quit IRC
08:24 *** jangutter has joined #openstack-nova
08:24 *** _alastor_ has joined #openstack-nova
08:29 *** _alastor_ has quit IRC
08:29 *** markvoelker has joined #openstack-nova
08:30 *** markvoelker has quit IRC
08:32 *** markvoelker has joined #openstack-nova
08:33 *** markvoelker has quit IRC
08:35 *** markvoelker has joined #openstack-nova
08:37 *** liuyulong has joined #openstack-nova
08:43 *** ralonsoh has joined #openstack-nova
08:48 *** liuyulong has quit IRC
09:03 *** sridharg has quit IRC
09:04 *** sridharg has joined #openstack-nova
09:04 *** sridharg has quit IRC
09:04 *** sridharg has joined #openstack-nova
09:12 *** panda|off is now known as panda
09:16 *** sridharg has quit IRC
09:32 *** derekh has joined #openstack-nova
09:34 *** owalsh_ is now known as owalsh
09:52 *** markvoelker has quit IRC
09:55 *** wolverineav has joined #openstack-nova
09:56 *** ratailor__ has joined #openstack-nova
09:58 *** ratailor_ has quit IRC
09:59 *** wolverineav has quit IRC
10:01 *** macza has quit IRC
10:04 *** cfriesen has quit IRC
10:05 *** ratailor__ has quit IRC
10:16 *** diga has quit IRC
10:19 *** ratailor has joined #openstack-nova
10:20 *** awalende has joined #openstack-nova
10:35 *** yan0s has joined #openstack-nova
10:39 *** dtantsur|afk is now known as dtantsur
10:40 *** diga has joined #openstack-nova
10:41 *** erlon has joined #openstack-nova
10:42 *** moshele has quit IRC
10:58 *** ratailor_ has joined #openstack-nova
10:59 *** macza has joined #openstack-nova
11:00 *** diga has quit IRC
11:02 *** ratailor has quit IRC
11:04 *** macza has quit IRC
11:08 *** Dinesh_Bhor has joined #openstack-nova
11:13 *** udesale has quit IRC
11:20 *** macza has joined #openstack-nova
11:24 *** macza has quit IRC
11:26 *** mvkr has quit IRC
11:29 *** erlon has quit IRC
11:33 *** erlon has joined #openstack-nova
11:36 *** erlon has quit IRC
11:37 *** erlon has joined #openstack-nova
11:46 *** cdent has joined #openstack-nova
11:47 *** sapd1_x has quit IRC
11:48 *** moshele has joined #openstack-nova
11:48 <finucannot> bauzas: If you're about today, fancy pushing https://review.openstack.org/630154 and https://review.openstack.org/630138 ?
11:49 *** finucannot is now known as stephenfin
11:56 *** mvkr has joined #openstack-nova
11:57 *** ratailor__ has joined #openstack-nova
11:59 *** ratailor_ has quit IRC
12:00 *** awalende has quit IRC
12:01 *** markvoelker has joined #openstack-nova
12:02 *** ratailor__ has quit IRC
12:03 *** awalende has joined #openstack-nova
12:04 *** sridharg has joined #openstack-nova
12:06 *** sapd1_x has joined #openstack-nova
12:08 *** awalende has quit IRC
12:10 *** mvkr has quit IRC
12:10 *** mvkr has joined #openstack-nova
12:26 *** sapd1_x has quit IRC
12:27 *** awalende has joined #openstack-nova
12:35 <jaypipes> maciejjozefczyk: no worries! will look at it shortly.
12:38 *** brinzhang has quit IRC
12:40 *** cdent has quit IRC
12:44 *** erlon_ has joined #openstack-nova
12:45 *** Dinesh_Bhor has quit IRC
12:48 *** erlon has quit IRC
12:49 *** macza has joined #openstack-nova
12:54 *** macza has quit IRC
12:55 *** awalende has quit IRC
12:59 *** rpittau is now known as rpittau|lunch
13:02 *** needssleep is now known as TheJulia
13:03 *** awalende has joined #openstack-nova
13:11 *** macza has joined #openstack-nova
13:14 *** dave-mccowan has joined #openstack-nova
13:16 *** macza has quit IRC
13:28 <sean-k-mooney> o/
13:28 *** _alastor_ has joined #openstack-nova
13:31 *** awaugama has joined #openstack-nova
13:31 *** wolverineav has joined #openstack-nova
13:33 *** _alastor_ has quit IRC
13:36 *** wolverineav has quit IRC
13:39 *** ttsiouts has joined #openstack-nova
13:44 *** ttsiouts has quit IRC
13:44 *** ttsiouts has joined #openstack-nova
13:44 *** mriedem has joined #openstack-nova
13:50 *** temka is now known as artom
13:51 <gibi> mriedem: hi! welcome back! I'm trying to find a slot for a hangouts call about the bandwidth patches and need your input. Possible slots this week: Wednesday 18:00 UTC (after cdent's placement hangouts), Thursday 17:00 UTC, Friday 17:00 UTC
13:51 <openstackgerrit> Matthew Booth proposed openstack/nova master: Fix configure() called after DatabaseAtVersion fixture  https://review.openstack.org/619723
13:51 *** tetsuro has joined #openstack-nova
13:53 <gibi> mriedem: and one more not so good spot on Tuesday 17:00 UTC
13:54 *** takashin has joined #openstack-nova
13:55 <mriedem> any of those should work for me
13:56 <efried> n-sch meeting in 10 minutes in #openstack-meeting-alt
13:56 *** fragatina has joined #openstack-nova
14:00 <gibi> mriedem: thanks
14:00 <gibi> mriedem: will post the final time on the ML
14:05 <openstackgerrit> caoyuan proposed openstack/nova stable/rocky: Update the description to make it more accuracy  https://review.openstack.org/630681
14:05 *** rpittau|lunch is now known as rpittau
14:09 *** sapd1_x has joined #openstack-nova
14:12 *** mmethot has joined #openstack-nova
14:16 *** mchlumsky has joined #openstack-nova
14:19 *** beekneemech is now known as bnemec
14:19 *** lbragstad has joined #openstack-nova
14:20 *** awalende has quit IRC
14:31 *** markvoelker has quit IRC
14:33 <efried> belmoreira: Howdy. Are you around today?
14:34 *** markvoelker has joined #openstack-nova
14:38 *** asmita has joined #openstack-nova
14:39 <asmita> gibi: Hi
14:40 *** ShilpaSD has joined #openstack-nova
14:40 *** fragatina has quit IRC
14:41 <gibi> asmita: hu
14:41 <gibi> asmita: hi
14:42 *** takashin has left #openstack-nova
14:42 <sean-k-mooney> adrianc: do you mind if i rebase the sriov patch series and start addressing some of stephenfin's nits
14:43 <asmita> gibi: How do we use this mehod?[1]:https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/unit/utils.py#L30
14:43 <asmita> method*
14:44 <sean-k-mooney> mriedem: are you ok with removing your procedual -2 in https://review.openstack.org/#/c/616120/ since the spec has been merged and blueprint approved?
14:44 <mriedem> you don't
14:44 <mriedem> asmita: ^
14:44 <mriedem> it's patching the mock library at import time to avoid adding tests which call invalid assert methods
14:44 <mriedem> which used to be a problem, but is fixed in latest mock i believe
14:45 <gibi> asmita: mriedem was faster. But I think the mock library already fixed to reject those wrong asserts
14:45 *** tetsuro has quit IRC
14:45 <gibi> asmita: so you don't even need that patch any more
14:45 <mriedem> sean-k-mooney: done
14:46 <sean-k-mooney> mriedem: thanks it need to be rebased and some nits addressed but i think the feature is effectivly complete. we just need to test it properly and clean it up at this point.
14:48 <asmita> Okay.There is a test scenario in  test_hypervisors.py file of novaclient which I need to implement.I need to raise exception in that test case..But because of the fixtures,I get a error as "No mock address".So how do I overcome this situation?
14:49 <gibi> asmita: do you have a patch up on gerrit I can look at?
14:51 <asmita> gibi: Not yet.
14:51 <gibi> asmita: just push the not working code up so I can look at it
14:54 *** mlavalle has joined #openstack-nova
14:56 *** psachin has quit IRC
14:56 *** macza has joined #openstack-nova
14:56 <asmita> gibi: Please have a look.https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/unit/utils.py#L30
14:56 *** sridharg has quit IRC
14:57 <asmita> gibi: https://github.com/openstack/python-novaclient/blob/master/novaclient/tests/unit/utils.py#L30
14:57 <asmita> gibi : Sorry wron link
14:57 <asmita> wrong link
14:58 <asmita> gibi: paste.openstack.org/show/742306..Thanks.
14:58 <gibi> asmita: looking...
14:59 <tobias-urdin> any good way to actually health check novncproxy? we've had some fallouts where it stops working with "handler exception: [Errno 32] Broken pipe" on console requests
15:00 <tobias-urdin> i guess i'll have to look into some doing some vnc client that does a handshake and tries to find some response that could be checked
15:00 *** sean-k-mooney has quit IRC
15:04 <gibi> asmita: this patch you pasted works for me, both py27 and py35 test passes for me on it
15:05 <gibi> asmita: http://paste.openstack.org/show/742307/
15:06 *** sean-k-mooney has joined #openstack-nova
15:07 <asmita> gibi :Thanks .
15:07 <gibi> asmita: so I don't know why it doesn't work for you
15:09 <asmita> gibi : My concern here is whether it is okay to mock.patch and raise exception,the way I have done?
15:09 *** cfriesen has joined #openstack-nova
15:10 *** Luzi has quit IRC
15:11 *** derekh has quit IRC
15:11 *** dpawlik has quit IRC
15:12 *** jangutter has quit IRC
15:13 <asmita> gibi : Thanks.I will look into it.
15:14 *** mrch_ has joined #openstack-nova
15:15 *** xek has joined #openstack-nova
15:17 <gibi> amotoki: is the self.cs.hypervisors.search() you call and the novaclient.v2.hypervisors.HypervisorManager.search that you mocked the same thing? if they are the same then you are not testing anything (except the mock library)
15:17 <gibi> amotoki: sorry
15:17 <gibi> asmita: ^^
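[Editor's note] gibi's point is worth spelling out: if the method the test calls is the very method that was mocked, no production code runs and the assertion only exercises the mock library. A contrived sketch (the class here is a hypothetical stand-in, not novaclient's real `HypervisorManager`):

```python
from unittest import mock

class HypervisorManager:
    """Hypothetical stand-in; the real search() would call the API."""
    def search(self, hostname):
        raise RuntimeError("pretend this performs an HTTP request")

# Anti-pattern: the mocked method and the called method are the same
# attribute, so the assertion can never fail for a real reason -- it
# only verifies that mock.patch works.
with mock.patch.object(HypervisorManager, 'search',
                       return_value=['hv1']):
    assert HypervisorManager().search('hv') == ['hv1']
```

A useful test would instead mock one layer down (e.g. the HTTP client the manager uses) and let the real `search()` run.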
15:20 *** markvoelker has quit IRC
15:20 *** markvoelker has joined #openstack-nova
15:20 *** markvoelker has quit IRC
15:21 <openstackgerrit> Matt Riedemann proposed openstack/nova master: Allow run metadata api per cell  https://review.openstack.org/624612
15:21 *** derekh has joined #openstack-nova
15:22 <mriedem> dansmith: the channel topic is pretty out of date, maybe we should just link to https://etherpad.openstack.org/p/nova-runways-stein ?
15:22 <mriedem> "Current review runways: https://etherpad.openstack.org/p/nova-runways-stein" or something
15:22 <dansmith> aye
15:22 <dansmith> will update in a bit
15:22 *** ttsiouts has quit IRC
15:22 <mriedem> thanks
15:23 *** ttsiouts has joined #openstack-nova
15:24 *** efried1 has joined #openstack-nova
15:25 *** efried has quit IRC
15:25 *** efried1 is now known as efried
15:26 *** jangutter has joined #openstack-nova
15:27 <mriedem> easy runway bp change with a +2 https://review.openstack.org/#/c/624612/ - run meta-api per cell
15:27 <openstackgerrit> Jan Gutter proposed openstack/os-vif master: Extend port profiles with datapath offload type  https://review.openstack.org/572081
15:27 <belmoreira> efried: hi
15:28 *** ttsiouts has quit IRC
15:28 <efried> belmoreira: Greetings. Today in the nova-scheduler meeting we brought up a couple of patches that CERN had showed interest in.
15:28 <efried> We were looking to find out if y'all could (re)deploy them to make sure they do what's intended and don't break the world.
15:29 <efried> (jaypipes heads up)
15:29 *** _alastor_ has joined #openstack-nova
15:29 <efried> The first one was the placement traffic improvement series starting at https://review.openstack.org/#/c/615677/
15:30 *** mmethot has quit IRC
15:30 <efried> IIRC, you had deployed it and demonstrated the expected improvements, which is great.
15:30 <efried> but
15:30 *** mmethot_ has joined #openstack-nova
15:30 <efried> there was some concern about how it would behave in large ironic deployments.
15:30 <efried> Late last week I made some updates to hopefully take care of that
15:31 <efried> Last I recall, you were about to start deploying to your ironic nodes - not sure how far you got on that or whether you stalled it because of the perceived problems with the patches.
15:31 <belmoreira> efried: that are great news
15:31 *** ChanServ sets mode: +o dansmith
15:31 <efried> Anyway, we were hoping you could redeploy the series in its current form to see if it's copacetic
15:32 *** dansmith changes topic to "Current runways: https://etherpad.openstack.org/p/nova-runways-stein -- This channel is for Nova development. For support of Nova deployments, please use #openstack."
15:34 <belmoreira> before the end of the year I was trying it with the ironic compute nodes. It was in our small test cell, and it was OK. Scalability issues will only be noticed in our prod instance. Was planning to do it next week.
15:34 *** _alastor_ has quit IRC
15:34 <belmoreira> will check the new code, thanks.
15:36 <efried> belmoreira: Great.
15:36 <efried> The other one was jaypipes: https://review.openstack.org/#/c/623558/
15:36 <efried> I don't know anything about this, will let him speak to it.
15:37 *** jangutter has quit IRC
15:40 *** TxGirlGeek has joined #openstack-nova
15:43 <cfriesen> artom: have you made any progress with the resource-aware live migration stuff?  I don't see any reviews up since the spec review.
15:43 *** mmethot_ has quit IRC
15:43 *** mmethot_ has joined #openstack-nova
15:43 <sean-k-mooney> cfriesen: i haven't seen artom online downstream yet today so not sure if he is about yet
15:44 <cfriesen> sean-k-mooney: thanks
15:44 <belmoreira> efried: I didn't try jaypipes code because merge/dependencies issues. Ended up with a custom code that only gets the info from the nodes in the cell. Will revisit it next week.
15:45 <efried> ight
15:45 <jaypipes> belmoreira: https://review.openstack.org/#/c/623558/ commit message should be self-explanatory. It should dramatically increase performance of your scheduler due to not doing N queries and only doing 1 query per cell.
15:45 <sean-k-mooney> cfriesen: i know he plans to work on it this sprint and once i finsih off the last few bits for the sriov migraton ill likely try and spend my time helping him with that work
15:46 <cfriesen> sean-k-mooney: good to hear.  I'm hoping to have some review cycles for this.
15:46 *** mmethot_ has quit IRC
15:47 *** mmethot_ has joined #openstack-nova
15:49 *** mmethot_ has quit IRC
15:50 *** mmethot_ has joined #openstack-nova
15:50 *** _alastor_ has joined #openstack-nova
15:52 <belmoreira> jaypipes: let me revisit again your patch. I wasn't able to use your code directly because it was relying in few things that are new to rocky and I got scared to push them to prod. But we cooked a light similar thing. This is the result in the schedule time: https://docs.google.com/document/d/1atA0wbUNBCdojiq8WyBKA0k8Ulyj91bzXDMPOIN9gRI
15:52 <belmoreira> jaypipes: I think yours will improve it even more
15:54 *** _alastor_ has quit IRC
15:55 <belmoreira> jaypipes: after this what we observed is that now most of the time is because the placement call... But needs more investigation in our side.
15:56 <belmoreira> jaypipes: will let you know when we start again our tests. Thanks for all this work
15:57 <jaypipes> belmoreira: rock on :)
15:58 *** lpetrut has quit IRC
16:01 *** igordc has joined #openstack-nova
16:02 *** eharney has joined #openstack-nova
16:03 *** moshele has quit IRC
16:04 *** igordc has quit IRC
16:11 *** _alastor_ has joined #openstack-nova
16:11 <adrianc> sean-k-mooney: working on the nits, PS will be uploaded by tommorow
16:12 <adrianc> sean-k-mooney: left to deal with several on the indirect implementation commit and ill upload
16:13 <sean-k-mooney> ok ill wait untill you have uploaded it so to rebase
16:13 <artom> sean-k-mooney, I'm around
16:13 <artom> cfriesen, nothing posted yet, working on it
16:13 <adrianc> sean-k-mooney: thanks
16:13 <artom> sean-k-mooney, just because I don't say anything, doesn't mean I'm not here ;)
16:13 <cfriesen> artom: cool, thanks.
16:14 *** cdent has joined #openstack-nova
16:18 *** mmethot has joined #openstack-nova
16:20 *** pcaruana has quit IRC
16:21 *** jangutter has joined #openstack-nova
16:21 <sean-k-mooney> artom: ah lurking in the backgound. sneaky :)
16:21 *** mmethot has quit IRC
16:22 *** mmethot has joined #openstack-nova
16:22 *** mmethot_ has quit IRC
16:23 <artom> sean-k-mooney, I mean, it's sort of what everyone does by default, no?
16:24 *** amodi has joined #openstack-nova
16:25 *** fragatina has joined #openstack-nova
16:25 *** eharney has quit IRC
16:26 <sean-k-mooney> artom: ya true  but i usually say hi in the morning
16:27 <artom> sean-k-mooney, maybe I'll start doing that :)
16:29 *** mmethot_ has joined #openstack-nova
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Reject interface attach with QoS aware port  https://review.openstack.org/570078
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Reject networks with QoS policy  https://review.openstack.org/570079
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Create RequestGroup from neutron port  https://review.openstack.org/625941
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Include requested_resources to allocation candidate query  https://review.openstack.org/625942
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Transfer port.resource_request to the scheduler  https://review.openstack.org/567268
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Extend RequestGroup object for mapping  https://review.openstack.org/619527
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Calculate RequestGroup resource provider mapping  https://review.openstack.org/616239
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Fill the RequestGroup mapping during schedule  https://review.openstack.org/619528
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Pass resource provider mapping to neutronv2 api  https://review.openstack.org/616240
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Recalculate request group - RP mapping during re-schedule  https://review.openstack.org/619529
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Send RP uuid in the port binding  https://review.openstack.org/569459
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Test boot with more ports with bandwidth request  https://review.openstack.org/573317
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Remove port allocation during detach  https://review.openstack.org/622421
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Refactor PortResourceRequestBasedSchedulingTestBase  https://review.openstack.org/624080
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Record requester in the InstancePCIRequest  https://review.openstack.org/625310
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Add pf_interface_name tag to passthrough_whitelist  https://review.openstack.org/625311
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Ensure that bandwidth and VF are from the same PF  https://review.openstack.org/623543
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Extend NeutronFixture to return port with resource request  https://review.openstack.org/630719
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Read port resource request from Neutron  https://review.openstack.org/630720
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Reject server create with port having resource request  https://review.openstack.org/630721
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Reject resize with port having resource request  https://review.openstack.org/630722
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Reject migrate with port having resource request  https://review.openstack.org/630723
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Reject evacuate with port having resource request  https://review.openstack.org/630724
16:29 <openstackgerrit> Balazs Gibizer proposed openstack/nova master: Reject unshelve with port having resource request  https://review.openstack.org/630725
16:30 * gibi is trully sorry about the lenght of the series
16:31 <sean-k-mooney> hehe i takeit you have resovled all the merge conflits
16:32 <gibi> sean-k-mooney: not just that, but make it so that the new external behavior is only active in a new microversion (which still needs to be added on the top)
16:32 <gibi> mriedem: ^^ fyi
16:32 *** _mmethot_ has joined #openstack-nova
16:32 *** mmethot_ has quit IRC
16:33 *** mmethot has quit IRC
16:33 <sean-k-mooney> gibi: cool and for the old micro version dose it ignore the request or reject
16:33 <gibi> sean-k-mooney: today it rejects. Ignoring the request leads to some hard to manage edge cases that I added to the etherpad
16:34 <gibi> sean-k-mooney: like create a server with a new microversion so there is port allocation but then migrate it with an old microversion
16:34 <sean-k-mooney> ok  well we can disucss those  on the call i guess
16:34 <gibi> sean-k-mooney: yes, definitely
16:35 <sean-k-mooney> on blance that might be safer but we need a failly big upgrade impact section to call it out
16:35 <sean-k-mooney> did you choose a time by the way
16:35 <gibi> sean-k-mooney: I still have to talk to mlavalle about it
16:35 <gibi> sean-k-mooney: as the last proposed time, Tuesday, does not seem to work any more
16:36 <mlavalle> gibi: will get back to you in ~30 mins
16:36 <gibi> mlavalle: OK, no worries
16:36 <sean-k-mooney> ok cool ill try and be there but i also dont need to be so ill keep an eye out on the ml
16:37 *** hongbin has joined #openstack-nova
16:37 <gibi> sean-k-mooney: i will post to the ML in any case
16:37 *** lpetrut has joined #openstack-nova
16:37 *** mmethot_ has joined #openstack-nova
16:38 *** _mmethot_ has quit IRC
16:38 <gibi> sean-k-mooney: btw if we go with the reject solution in nova, we can still turn off sending the resource request in neutron. That way we can get back the true legacy behavior, i.e. no resource request in the port et all
16:39 <sean-k-mooney> gibi: im not quite sure how you neutron would know when to do that. are you suggesting a config opion for it?
16:40 <gibi> sean-k-mooney: yes, it is config driven in this WIP patch https://review.openstack.org/#/c/627978/
16:40 *** eharney has joined #openstack-nova
16:40 <gibi> sean-k-mooney: it turned out that neutron api extensions are not selectively loaded but loaded by the service plugin
16:41 <gibi> sean-k-mooney: but everything in the neutron code is ready to make an api extension like our resource request selectively loaded
16:42 <sean-k-mooney> ah cool. so if you have stien neutron and rocky nova or older you would set the config option to disable it
16:42 <sean-k-mooney> gibi: well there are selectivly loaded in that you have to enable the service plugin or extentinon in the neutron.conf or in the agent/ml2_conf.ini
16:43 <gibi> sean-k-mooney: if you have stein neutron and rocky nova then nova does not check the resource request at all. if you have stein neutron and stein nova, but you want to go back to the legacy behavior then you can turn off the extension in neutron
16:43 <gibi> sean-k-mooney: as far as I know if you enable a service plugin then it loads every extension of that plugin
16:44 *** ttsiouts has joined #openstack-nova
16:44 <sean-k-mooney> ok so its not that granular in say enable ingress qos only you can either enable qos or not
16:45 <gibi> sean-k-mooney: yes, as far as I understand
16:45 <sean-k-mooney> and you get all the qos oplicys or none today. i honestly did not look quite that closely at least not in a few years
16:45 *** ade_lee has joined #openstack-nova
16:47 <gibi> sean-k-mooney: I've never looked into it deeply. I rely on rubasov's and lajoskatona's neutron experties
16:48 <ade_lee> hi all -- we have a grenade test that keeps failing due to an error which is related to volume attachment.  Is there anyone here that knows about how nova volume attach works and can help?
16:48 *** fragatina has quit IRC
16:49 <jangutter> moshel, lennyb: I suspect the OVS_HW_offload CI is having some issues. Would you be able to check it?
16:50 <ade_lee> specifically, we're running into an error where we try to write to a volume before its ready --
16:51 <ade_lee> we do wait for cinder to show the volume as being "in-use", but that seems to be insufficient.
16:51 <ade_lee> what would we wait for to ensure that the volume is ready for use?
ade_leeand the instance knows about it.16:52
*** mdbooth has joined #openstack-nova16:52
*** rpittau has quit IRC16:57
*** fragatina has joined #openstack-nova17:00
stephenfinbauzas: I think you need to submit those gantt reviews by yourself, as it's just yourself and johnthetubaguy that have access. melwitt is OK with us retiring it17:01
bauzasstephenfin: WDYM ?17:01
bauzasfast-approving ?17:01
stephenfinyup17:01
stephenfinbauzas: Pretty much everyone else is no longer working on OpenStack https://review.openstack.org/#/admin/groups/253,members17:02
sean-k-mooneystephenfin: well you can submit the change and bauzas and johnthetubaguy can approve17:02
mdboothAre folks aware that openstack-placement isn't listed in either requirements or test-requirements, btw?17:02
*** helenafm has quit IRC17:02
stephenfinmdbooth: Neither is neutron, I'd imagine17:02
mdboothstephenfin: How's it supposed to be installed?17:03
stephenfinThey're all applications rather than libraries. You don't list applications in requirements.txt17:03
mdboothstephenfin: We pull in placement in functional17:03
stephenfinIf you're DevStack, usually by cloning the repo locally?17:03
mdboothFunctional tests don't work17:03
stephenfinOh, then that's different17:03
sean-k-mooneystephenfin: it should be in test-requiremetns if its needed to run the tests17:03
stephenfinsean-k-mooney: Yup. It shouldn't be required to run tests though, otherwise we now have integration tests17:04
stephenfinmdbooth: Then I can't offer any more info. Sorry for the noise :)17:04
sean-k-mooneystephenfin: we import the test fixture form placement now so we can share it between nova and placement i think17:05
mdboothhttps://github.com/openstack/nova/blob/master/nova/tests/functional/fixtures.py#L1917:05
sean-k-mooneycdent: ^ you would know better then i17:05
* mdbooth wonders how it's working anywhere17:05
*** fragatina has quit IRC17:06
cdentin the gate, it is happening a a result of tox-siblings17:06
openstackgerritJan Gutter proposed openstack/nova master: Convert vrouter legacy plugging to os-vif  https://review.openstack.org/57132517:06
janguttermriedem: would you be able to remove your procedural -2 from ^^ (a tag has been made!)17:06
mdboothSounds cosy17:06
sean-k-mooneycdent: how would you run tox locally to get the same effect.  just pip install placement after the fact?17:07
cdentone mo17:07
cdentsean-k-mooney, mdbooth : look at the functional job def in tox.ini17:08
cdentplacement is installed explicitly for functional only because unit tests should not know about placement17:08
cdentfrom git master17:08
cdenttox-siblings makes sure in the gate that depends-on type things are respected17:08
mriedemjangutter: done17:08
cdentif you need that locally, then pip install the git ref17:08
sean-k-mooneycdent: mdbout is have import errors when runing placement17:08
sean-k-mooney* nova fucntional tests17:09
cdentmdbooth: got any more details than that?17:09
janguttermriedem: thanks! (finally, the dominoes have started falling!)17:09
*** sapd1_x has quit IRC17:10
mdboothcdent: Ah, ha!17:10
mdboothI was running functional-py37 which looks like it's missing a line17:10
cdentmdbooth: it does indeed appear to be missing a line17:11
mdboothpy3{5,6} both have an explicit deps= referring to functional deps17:11
cdenti bet that's a bad merge17:11
*** yan0s has quit IRC17:11
cdentfunctiona-py37 was added around the same time the functional test changes were added17:11
cdentbut functional-py37 is not in the gate17:11
cdentmdbooth: you happy to fix that as part of whatever else you are doing, or would like me to get it?17:12
mdboothcdent: 'whatever I'm doing' will merge shortly after the decay of the last proton, so you go ahead17:12
* cdent hugs mdbooth 17:12
*** lpetrut has quit IRC17:14
*** pcaruana has joined #openstack-nova17:15
janguttermdbooth: you made me look up the current lower limit for proton decay. I think the only think that will beat that is COBOL code running in production.17:15
mdboothjangutter: lol17:16
gibimlavalle, sean-k-mooney, efried, mriedem: So the (hopefully) final timeslot for the bandwidth hangouts is Friday 17:00 UTC. Posted on the ML17:18
gibihttp://lists.openstack.org/pipermail/openstack-discuss/2019-January/001710.html17:18
mlavallegibi: ack17:18
efriedack17:18
* gibi needs to run away17:19
efriedgibi: Hangout link to be provided day-of?17:20
*** dtantsur is now known as dtantsur|afk17:23
openstackgerritChris Dent proposed openstack/nova master: Make functional-py37 job work like others  https://review.openstack.org/63074517:29
*** itlinux has joined #openstack-nova17:33
*** lpetrut has joined #openstack-nova17:33
*** ccamacho has quit IRC17:36
*** ttsiouts has quit IRC17:37
*** ttsiouts has joined #openstack-nova17:38
stephenfinsean-k-mooney: thoughts on https://review.openstack.org/#/c/621528/5/nova/network/linux_net.py@1455 ?17:41
*** mvkr has quit IRC17:41
*** ttsiouts has quit IRC17:42
*** fragatina has joined #openstack-nova17:42
sean-k-mooney exit code 2 is file not found, or interface not found in this case17:42
sean-k-mooney254 I'm not sure about, and 0 is success17:42
sean-k-mooneywhat is that patch for17:43
sean-k-mooneyoh it's the privsep changes. I'm not sure if that code is used, as os-vif should be used even for nova-network17:44
stephenfinsean-k-mooney: Aye, could be nova-network only code17:45
sean-k-mooneyyou mean unrelated to vif plugging17:45
stephenfinNo I mean the file I'm linking to seems to be nova-network only code and unrelated to the neutron/os-vif code path17:46
stephenfinWe don't use os-vif for nova-net, do we?17:46
sean-k-mooneyyes we do17:46
sean-k-mooneyos-vif should be used for both nova-network and neutron in the libvirt driver17:47
sean-k-mooneythis is the legacy linux bridge code path which i think is unused currently17:47
sean-k-mooneyin any case you are correct that it's a change in behavior, but i think exit codes [0, 2, 254] would have been the correct set to use in this case17:48
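The pattern under discussion, treating a whitelist of exit codes as success so that e.g. "interface already gone" does not raise, can be sketched like this. `run_checked` is a hypothetical stand-in for nova's execute helper, not the real privsep code:

```python
import subprocess

def run_checked(cmd, ok_codes=(0,)):
    """Run cmd, raising only if the exit code is not in ok_codes.

    Mirrors the check_exit_code=[0, 2, 254] style discussed: some
    commands exit non-zero in ways the caller wants to tolerate,
    e.g. deleting a device that was never created.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode not in ok_codes:
        raise RuntimeError(
            "%s failed with code %d: %s" % (cmd, proc.returncode, proc.stderr))
    return proc.returncode

# `false` exits 1; tolerated here because 1 is in the allowed set.
code = run_checked(["false"], ok_codes=(0, 1))
print(code)
```

Narrowing the allowed set (here `(0, 1)` for illustration; `[0, 2, 254]` in the patch) is the behavior change being reviewed: codes outside the set now raise where they previously may not have.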
jaypipesefried: got a sec?17:52
jaypipesefried: re: https://review.openstack.org/#/c/617042/14/nova/compute/manager.py@55517:52
jaypipesefried: I'm still super-confused why we need that.17:52
jaypipesefried: and why we can't just instantiate the reportclient variable on __init__ of the compute manager17:52
jaypipesefried: is this to minimize the number of ProviderTree caches or something?17:53
*** derekh has quit IRC18:00
*** jangutter has quit IRC18:02
*** lpetrut has quit IRC18:03
jaypipesefried: never mind... I see you've addressed it in https://review.openstack.org/#/c/620711/1318:06
*** panda is now known as panda|off18:12
mriedemit's funny that an az description field has never been requested for the api18:16
*** ttsiouts has joined #openstack-nova18:20
*** wolverineav has joined #openstack-nova18:22
jaypipesmriedem: perhaps because nobody knows wtf an AZ is in Nova?18:25
*** itlinux_ has joined #openstack-nova18:26
*** dave-mccowan has quit IRC18:26
*** itlinux_ has quit IRC18:28
*** itlinux has quit IRC18:29
*** igordc has joined #openstack-nova18:31
sean-k-mooneyjaypipes: it's like an AWS AZ, right, with all the semantics, or a totally separate fault domain, and is totally not a metadata key on an aggregate18:31
efriedjaypipes: sorry was afk. Assume any remaining action items are denoted in review comments at this point?18:32
*** dave-mccowan has joined #openstack-nova18:32
jaypipesefried: yuppers.18:36
jaypipessean-k-mooney: lol, yeah right :)18:36
efriedjaypipes: Thanks for the review. I'm pretty excited to get this series going.18:36
* cdent will dance when it merges18:37
efriedcdent: tape that ^18:37
eanderssonAre there any actual dependencies on the new libvirt / qemu versions for nova-compute?18:40
cdentefried: no sir18:44
*** ralonsoh has quit IRC18:45
*** moshele has joined #openstack-nova18:47
*** remi_ness has joined #openstack-nova18:47
*** remi_ness has quit IRC18:54
lennybjangutter, moshele. yes, internal environment. I will check it18:54
moshelelennyb: ?18:55
*** mvkr has joined #openstack-nova18:56
mriedemjaypipes: heh, well, all the more reason to allow a cloud operator to describe their AZs18:56
mriedemeandersson: what do you mean? the libvirt driver has a minimum required libvirt/qemu otherwise nova-compute won't start18:57
mriedemwe bump that minimum every other release or so18:58
mriedemsome features in the libvirt driver require newer versions of the binaries and those have conditional version checks on them18:58
mriedeme.g. file-backed memory18:58
*** wolverineav has quit IRC19:04
*** pcaruana has quit IRC19:08
*** moshele has quit IRC19:22
*** remi_ness has joined #openstack-nova19:25
*** wolverineav has joined #openstack-nova19:29
*** wolverineav has quit IRC19:32
*** wolverineav has joined #openstack-nova19:32
*** ttsiouts has quit IRC19:41
openstackgerritAdrian Chiris proposed openstack/nova master: Add free for claimed, allocated devices  https://review.openstack.org/61612019:41
openstackgerritAdrian Chiris proposed openstack/nova master: Allow per-port modification of vnic_type and profile  https://review.openstack.org/60736519:42
openstackgerritAdrian Chiris proposed openstack/nova master: Add get_instance_pci_request_from_vif  https://review.openstack.org/61992919:42
openstackgerritAdrian Chiris proposed openstack/nova master: SR-IOV Live migration indirect port support  https://review.openstack.org/62011519:42
*** ttsiouts has joined #openstack-nova19:42
*** jangutter has joined #openstack-nova19:42
*** igordc has quit IRC19:43
*** ttsiouts has quit IRC19:47
adriancsean-k-mooney: ^19:47
*** itlinux has joined #openstack-nova19:51
lennybjangutter, moshele. fixed and passed20:04
*** itlinux has quit IRC20:04
*** erlon_ has quit IRC20:04
jangutterlennyb: thanks very much!20:04
*** remi_ness has quit IRC20:05
*** ttsiouts has joined #openstack-nova20:05
*** cfriesen has quit IRC20:07
openstackgerritMatt Riedemann proposed openstack/nova master: Share image membership with instance owner  https://review.openstack.org/63076920:36
ade_leemriedem, hey - any idea who I could talk to about volume attachments?20:38
openstackgerritMatt Riedemann proposed openstack/nova master: Share snapshot image membership with instance owner  https://review.openstack.org/63076920:38
mriedemade_lee: likely me20:38
ade_leemriedem, so -- I have a gate job that is failing20:38
ade_leebecause I'm trying to attach a volume and then use it - and there is some timing issue20:39
ade_leewhereby it looks like we're trying to use it before it's fully attached.20:39
ade_leewe're waiting to see if the volume is "in-use",20:39
*** whoami-rajat has quit IRC20:40
ade_leebut that's checking if cinder thinks the volume is in use20:40
ade_leeand not if the instance thinks the volume is attached ..20:40
ade_leeso how/what do I wait on instead?20:40
ade_leemriedem, the specific gate job that is failing is here -- http://logs.openstack.org/67/628667/13/check/grenade-devstack-barbican/613998f/20:41
mriedemthat should be sufficient,20:41
mriedemnova tells cinder when the local connection happens for the guest vm,20:41
mriedemand that's what should change the volume status to in-use20:41
smcginnisYeah, I didn't think we marked it as in-use until things were ready.20:41
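The wait being described, polling cinder's volume status until it flips to "in-use" (which nova triggers once the guest connection is made), can be sketched as below. `get_status` is a hypothetical stand-in for the tempest/cinder client call; this is not the actual tempest waiter:

```python
import time

def wait_for_status(get_status, wanted="in-use", timeout=60, interval=1):
    """Poll get_status() until it returns `wanted` or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == wanted:
            return status
        if status == "error":
            # Fail fast rather than burning the whole timeout.
            raise RuntimeError("volume went to error state")
        time.sleep(interval)
    raise TimeoutError("volume never reached %r" % wanted)

# Simulated status sequence for illustration.
statuses = iter(["attaching", "attaching", "in-use"])
print(wait_for_status(lambda: next(statuses), interval=0))
```

As mriedem notes, this should be sufficient on the cinder side; the failure ade_lee is chasing is inside the guest, after the status transition.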
ade_leemriedem, smcginnis interesting -- any idea what's going on in the above test then?20:42
smcginnisCould it be that the local device is getting set wrong for some reason? http://logs.openstack.org/67/628667/13/check/grenade-devstack-barbican/613998f/job-output.txt.gz#_2019-01-14_15_22_54_28103320:42
*** moshele has joined #openstack-nova20:42
ade_leenote - the code we are executing is here -- https://github.com/openstack/barbican-tempest-plugin/blob/master/barbican_tempest_plugin/tests/scenario/manager.py  line 32620:43
mriedemis something specifically waiting on /dev/vdb ?20:45
mriedembecause nova, or at least the libvirt driver, totally ignores the user-requested device name for the attach20:45
mriedemso https://github.com/openstack/barbican-tempest-plugin/blob/master/barbican_tempest_plugin/tests/scenario/manager.py#L328 is kind of pointless to pass the device kwarg20:45
mriedemhttps://github.com/openstack/barbican-tempest-plugin/blob/master/barbican_tempest_plugin/tests/scenario/manager.py#L34620:46
mriedemlooks like it probably is20:46
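Since the libvirt driver ignores the requested device name, a more robust test pattern is to diff the guest's visible block devices before and after the attach and use whatever actually appeared. A minimal sketch; the sets are illustrative, and a real test would gather them by listing e.g. /sys/block in the guest over ssh:

```python
def find_new_disk(before, after):
    """Return the single block device present in `after` but not `before`."""
    new = after - before
    if len(new) != 1:
        raise RuntimeError(
            "expected exactly one new disk, got %s" % sorted(new))
    return new.pop()

# Illustrative: guest had only the boot disk, then the attach lands
# wherever the hypervisor chose, regardless of the requested /dev/vdb.
before = {"vda"}
after = {"vda", "vdb"}
print(find_new_disk(before, after))
```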
ade_leemriedem, well - that fails when we actually try to write to the device ..20:46
smcginnisThere's only a /dev/vda20:46
mriedemthis is the POST to attach the volume it looks like http://logs.openstack.org/67/628667/13/check/grenade-devstack-barbican/613998f/job-output.txt.gz#_2019-01-14_15_21_32_86064220:46
ade_leemriedem, so this is the test -- https://github.com/openstack/barbican-tempest-plugin/blob/master/barbican_tempest_plugin/tests/scenario/test_volume_encryption.py line 10120:47
mriedemwtf this is nice http://logs.openstack.org/67/628667/13/check/grenade-devstack-barbican/613998f/logs/screen-n-cpu.txt.gz#_Jan_14_15_14_46_11550820:47
mriedemunrelated though20:47
mriedemhttp://logs.openstack.org/67/628667/13/check/grenade-devstack-barbican/613998f/logs/screen-n-cpu.txt.gz#_Jan_14_15_20_47_77030220:48
mriedemJan 14 15:20:47.770302 ubuntu-xenial-rax-ord-0001683696 nova-compute[343]: INFO nova.virt.libvirt.driver [None req-c127b555-17c3-4ce7-9c75-b3df8b653f76 tempest-VolumeEncryptionTest-1809383190 tempest-VolumeEncryptionTest-1809383190] [instance: 1f45a2d6-3811-4b85-a13b-68c3eb605dea] Ignoring supplied device name: /dev/vdb20:48
mriedemsays it's attaching it to /dev/vdb though20:49
mriedemhttp://logs.openstack.org/67/628667/13/check/grenade-devstack-barbican/613998f/logs/screen-n-cpu.txt.gz#_Jan_14_15_20_48_01293920:49
mriedemJan 14 15:20:48.012939 ubuntu-xenial-rax-ord-0001683696 nova-compute[343]: INFO nova.compute.manager [None req-c127b555-17c3-4ce7-9c75-b3df8b653f76 tempest-VolumeEncryptionTest-1809383190 tempest-VolumeEncryptionTest-1809383190] [instance: 1f45a2d6-3811-4b85-a13b-68c3eb605dea] Attaching volume ed7fc802-ba1c-4639-8723-47ed02bd11de to /dev/vdb20:49
lyarwoodade_lee: are you only seeing this with the _cryptsetup test?20:51
mriedemhmm, i wonder if it's connected at sda20:51
mriedemhttp://logs.openstack.org/67/628667/13/check/grenade-devstack-barbican/613998f/logs/screen-n-cpu.txt.gz#_Jan_14_15_20_51_03357020:51
ade_leelyarwood, seems so20:51
mriedemsomething sure enjoys calling the barbican secrets API a lot during this attach flow20:53
lyarwoodade_lee: if the _luks version of the test is passing I'd put money on there being an issue with how os-brick is wiring up the encrypted volume on the underlying host20:54
mriedemJan 14 15:20:51.786765 ubuntu-xenial-rax-ord-0001683696 nova-compute[343]: DEBUG os_brick.encryptors.cryptsetup [None req-c127b555-17c3-4ce7-9c75-b3df8b653f76 tempest-VolumeEncryptionTest-1809383190 tempest-VolumeEncryptionTest-1809383190] opening encrypted volume /dev/sda {{(pid=343) _open_volume /usr/local/lib/python2.7/dist-packages/os_brick/encryptors/cryptsetup.py:109}}20:54
mriedemit's on sda but the test is looking for /dev/vdb20:54
lyarwoodthat's on the compute host itself20:54
mriedemhttp://logs.openstack.org/67/628667/13/check/grenade-devstack-barbican/613998f/logs/screen-n-cpu.txt.gz#_Jan_14_15_20_51_78676520:54
mriedem<target bus="virtio" dev="vdb"/>20:55
ade_leemriedem, not sure if I understand what that means -- does that mean we asked for it to be on vdb and it put it on sda instead?20:56
mriedemmaybe not, tempest is ssh'ing into the guest looking for it at vdb20:57
mriedemif the sda thing is host only, then ignore me20:57
mriedemlyarwood seems to know something about cryptsetup20:57
mriedemi remember something about that coming up in berlin too20:57
lyarwoodunfortunately, yeah I'd like to drop support for it tbh20:58
mriedemL22 https://etherpad.openstack.org/p/BER-volume-encryption-forum20:58
lyarwoodade_lee: so os-brick currently hacks the device paths on the host to point the instance at the decrypted volume using the original encrypted device path.20:59
ade_leelyarwood, so os-brick is setting it up on sda instead of vdb -- and luks is setting it up on vdb?21:01
smcginnisI don't think so. /dev/vda should be the boot volume.21:02
*** hamzy_ has joined #openstack-nova21:03
*** alanmeadows_ has joined #openstack-nova21:04
lyarwoodade_lee: os-brick points /dev/disk/by-id/scsi-360000000000000000e00000000010001 to the decrypted device on the host, for some reason I can't see the actual commands in the logs anymore.21:05
*** cdent has quit IRC21:07
*** tbachman_ has joined #openstack-nova21:09
*** tbachman has quit IRC21:09
*** tbachman_ is now known as tbachman21:09
*** alanmeadows has quit IRC21:11
*** mgoddard has quit IRC21:11
*** bzhao__ has quit IRC21:11
*** awaugama has quit IRC21:11
*** hamzy has quit IRC21:11
*** jroll has quit IRC21:11
*** alanmeadows_ is now known as alanmeadows21:11
*** awaugama has joined #openstack-nova21:11
*** jroll has joined #openstack-nova21:12
ade_leelyarwood, sorry - did you find anything?  I'm looking at a couple runs -- one where it worked and one where it didn't - but not sure how to make heads or tails of it21:13
lyarwoodade_lee: yeah sorry I can't see anything obvious on the host / os-brick side of things tbh21:13
*** trident has quit IRC21:13
lyarwoodade_lee: /dev/disk/by-id/scsi-360000000000000000e00000000010001 should point to the decrypted device that in turn points to encrypted /dev/sda21:14
ade_leelyarwood, well sdb right?21:15
*** mgoddard has joined #openstack-nova21:15
*** efried has quit IRC21:16
*** trident has joined #openstack-nova21:16
*** efried has joined #openstack-nova21:17
lyarwoodade_lee: nope, this is on the host sorry21:18
lyarwoodade_lee: like I said there's a load of hacks with this legacy encryptor to decrypt the volume on the host itself21:19
ade_leelyarwood, it seems to work intermittently .. so not sure what's going on21:19
*** mrjk has quit IRC21:19
lyarwoodade_lee: hard to say but if we can't read/write to the device once it's connected within the instance then I'd start by looking at the devicemapper layer on the compute host itself.21:21
lyarwoodade_lee: if it's intermittent it might be a race between os-brick connecting another device while also setting up the decrypt devices for this volume.21:23
ade_leelyarwood, would that show up in the logs somehow?21:24
efriedmriedem: Doesn't deleting the compute service remove its node provider(s) from placement?21:25
lyarwoodade_lee: yeah, os-brick keeps reporting the device path as /dev/disk/by-id/scsi-360000000000000000e00000000010001 once each volume is connected on the host.21:27
lyarwoodade_lee: we use `scsi-360000000000000000e00000000010001` as part of the device name when using cryptsetup to decrypt21:28
*** xek has quit IRC21:28
*** mrjk has joined #openstack-nova21:31
lyarwoodade_lee: I need to head offline now but it does look like we are overloading the /dev/disk/by-id/scsi-360000000000000000e00000000010001 path during these tests somehow21:31
lyarwoodade_lee: do you have an env to hand btw?21:31
lyarwoodade_lee: I'd be interested to see what `qemu-img info /dev/vdb` has to say within the instance21:32
ade_leelyarwood, no - I've just been trying to run in the gate ..21:32
*** moshele has quit IRC21:32
lyarwoodade_lee: kk, I can take another look at this during the morning in EMEA if it ends up in a bug somewhere.21:32
ade_leelyarwood, thanks - I'll send you an email and/or open a bug21:33
ade_leelyarwood, oh  -- this is interesting ..21:34
ade_leeJan 14 15:21:51.361034 ubuntu-xenial-rax-ord-0001683696 nova-compute[343]: WARNING os_brick.encryptors.luks [None req-b8bc15a0-3faf-41ee-bda7-65ede3c3e432 tempest-VolumeEncryptionTest-1809383190 tempest-VolumeEncryptionTest-1809383190] isLuks exited abnormally (status 1): Device /dev/disk/by-id/scsi-360000000000000000e00000000010001 is not a valid LUKS device.21:34
ade_leeJan 14 15:21:51.361319 ubuntu-xenial-rax-ord-0001683696 nova-compute[343]: Command failed with code 22: Device /dev/disk/by-id/scsi-360000000000000000e00000000010001 is not a valid LUKS device.21:34
ade_leeJan 14 15:21:51.361579 ubuntu-xenial-rax-ord-0001683696 nova-compute[343]: : ProcessExecutionError: Unexpected error while running command.21:34
*** igordc has joined #openstack-nova21:35
lyarwoodade_lee: as horrid as that is, it's expected the first time we connect to an empty encrypted volume21:35
ade_leelyarwood, ok :)21:35
lyarwoodade_lee: as c-vol hasn't had to write and thus encrypt any data to the volume21:35
mriedemefried: should21:35
mriedemefried: are you on master?21:35
mriedemand is the api configured to talk to placement?21:35
lyarwoodade_lee: so n-cpu comes along and ensures the LUKS / crypt headers are written to the volume before we use it21:36
efriedmriedem: I found the code. It looks like for ironic deployments, it'll only delete the "first" compute node (whatever that means in practice).21:36
efriedmriedem: cf https://github.com/openstack/nova/blob/da98f4ba4554139b3901103aa0d26876b11e1d9a/nova/api/openstack/compute/services.py#L244-L247 and https://github.com/openstack/nova/blob/da98f4ba4554139b3901103aa0d26876b11e1d9a/nova/objects/service.py#L308-L31121:36
mriedemefried: sure enough21:37
mriedembugify it21:37
efriedmriedem: Well, a) I can't prove it (not having an ironic setup); but b) as jaypipes points out here, we might actually not want to do that: https://review.openstack.org/#/c/615677/17//COMMIT_MSG@3021:38
efriedbut I can open a bug anyway :)21:39
mriedemefried: well, the "has instances on them" is caught here https://github.com/openstack/nova/blob/da98f4ba4554139b3901103aa0d26876b11e1d9a/nova/api/openstack/compute/services.py#L23221:40
*** mrjk has quit IRC21:40
efriedmriedem: Yeah, I didn't see that pre-check, but knew placement would bounce the delete in any case if we got that far.21:41
mriedemyeah that's also a relatively recent validation21:41
efriedmriedem: But do we really want to delete "potentially thousands" of compute node records from placement, assuming they're non-spawned?21:42
*** mrjk has joined #openstack-nova21:42
mriedemironic is special since we use the ironic node uuid to set the compute node uuid21:42
mriedemso we can actually link those21:42
mriedemwhich is also relatively recent...21:42
mriedemrocky or queens21:42
mriedemotherwise if you delete the service record and the compute_nodes records, the providers in placement would be orphaned if you restart the compute service which creates new compute nodes records with new uuids21:43
efriedmriedem: I saw code elsewhere to delete those orphans.21:43
mriedemor just simply forget to stop the nova-compute service...21:43
efriedseems like a bass-ackwards way of doing it.21:43
mriedemsays the guy that +2d the change https://review.openstack.org/#/c/554920/21:44
* efried wipes egg21:45
mriedemso let's say we delete a compute service hosting no instances and we delete 1000 ironic compute nodes records and related resource providers,21:46
mriedemthe next time you restart that compute service (or update_available_resources runs), nova-compute will just create 1000 new compute nodes records with uuids that match the ironic nodes that service is hosting,21:46
mriedemand create 1000 new resource providers with those same uuids21:46
mriedemif we don't delete the resource providers, the compute_nodes records are gone either way i think21:47
mriedemwhen the services table record is deleted21:47
mriedembecause of this https://github.com/openstack/nova/blob/da98f4ba4554139b3901103aa0d26876b11e1d9a/nova/db/sqlalchemy/api.py#L40621:48
efriedIf I'm following, you're saying that deleting the service is going to cause deletion-and-recreation of all the ironic nodes even today? Just with different-than-optimal timing?21:48
mriedemnow if for some reason that service is only hosting 800 of those ironic nodes, n-cpu will create 800 compute_nodes records and you'll have orphaned 200 resource providers in placement21:48
mriedemit will delete all of the compute_nodes table records for that service, yes21:49
mriedemand always did21:49
mriedemif some other compute service starts managing those other 200 nodes, then cool - their providers are already in placement21:49
mriedembut if not, you're reporting things to the scheduler that might not be managed anywhere21:49
*** awaugama has quit IRC21:50
mriedemthe scheduler should filter those out but you could get NoValidHost if you're like CERN and have a real low config for how many placement allocation candidates you want to get back21:50
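The "real low config" mriedem mentions is presumably nova's cap on allocation candidates requested from placement. A hedged nova.conf sketch (option name from memory of the Queens-era scheduler options; the value is illustrative):

```ini
[scheduler]
# Cap on allocation candidates fetched from placement per scheduling
# request; large clouds tune this down for performance, which makes
# orphaned providers more likely to crowd out real candidates and
# surface as NoValidHost.
max_placement_results = 10
```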
efriedI will try to express this in the bug report.21:51
*** eharney has quit IRC21:51
mriedemi could be wrong of course, i don't have a multinode ironic deployment to play with21:52
mriedembut we could easily simulate this in functional tests21:52
mriedemi guess the bug is, in the case of ironic, we should either delete 0 or all providers in placement for the related compute nodes21:52
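The "0 or all" fix direction can be sketched as a loop over every compute node the service hosts, rather than deleting only the first node's provider. Everything here (`FakeReportClient`, `delete_service_providers`) is a hypothetical stand-in to illustrate the shape, not nova's real objects:

```python
class FakeReportClient:
    """Stand-in for the placement report client, recording deletions."""
    def __init__(self):
        self.deleted = []

    def delete_resource_provider(self, node_uuid):
        self.deleted.append(node_uuid)

def delete_service_providers(report_client, compute_node_uuids):
    # An ironic-backed service can host many compute nodes; delete the
    # resource provider for all of them, not just compute_nodes[0].
    for uuid in compute_node_uuids:
        report_client.delete_resource_provider(uuid)

rc = FakeReportClient()
delete_service_providers(rc, ["uuid-1", "uuid-2", "uuid-3"])
print(len(rc.deleted))
```

Deleting all of them matches the fact that the compute_nodes records for the service are cascade-deleted either way; deleting none would instead rely on the providers being re-adopted on restart.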
mriedemsure is fun needing nova to mirror everything to placement isn't it o-)21:53
efriedmriedem: https://bugs.launchpad.net/nova/+bug/1811726 -- please see if I expressed this correctly, kthx21:58
openstackLaunchpad bug 1811726 in OpenStack Compute (nova) "Deleting compute service only deletes "first" ironic node from placement" [Undecided,New]21:58
mriedemcommented22:03
mriedembut i don't think that cleanup orphan code you pointed out is going to clean up orphans22:03
mriedemthat code in update_available_resource is for nodes that the driver no longer reports (user deleted the ironic node in the ironic api) but nova was still tracking it22:04
*** jangutter has quit IRC22:05
*** imacdonn has quit IRC22:06
*** imacdonn has joined #openstack-nova22:06
openstackgerritJack Ding proposed openstack/nova master: [WIP] Flavor extra spec and image properties validation  https://review.openstack.org/62070622:07
*** jangutter has joined #openstack-nova22:20
* efried needs to never again let vacation time accumulate22:23
*** jangutter has quit IRC22:26
*** rcernin has joined #openstack-nova22:31
*** itlinux has joined #openstack-nova22:47
openstackgerritMatt Riedemann proposed openstack/nova master: Share snapshot image membership with instance owner  https://review.openstack.org/63076922:52
jaypipesmriedem: unless I'm badly mistaken, if you attempt to delete a resource provider and the provider has allocations against it, that will fail.22:55
efriedmriedem: When you get a chance, would you mind having a quick look at https://review.openstack.org/#/c/615677/17/nova/compute/resource_tracker.py@824 ?22:55
efriedPerhaps you can explain a circumstance under which is_new_compute_node==True but the report client's provider tree cache has an entry for the "new" node.22:55
mriedemjaypipes: yes i believe that's a 40922:55
edleafegit ls22:55
jaypipesmriedem: and we are catching that 409 in the DELETE /os-services/{id} API?22:55
edleafedoh!22:55
efriedcause I'm sure I put that code there for a reason. In fact, I want to say it was the/a main thing in this patch.22:56
mriedemjaypipes: we don't attempt to delete services/compute nodes/resource providers if the service host has instances on it22:56
mriedemjaypipes: https://github.com/openstack/nova/blob/da98f4ba4554139b3901103aa0d26876b11e1d9a/nova/api/openstack/compute/services.py#L23222:56
jaypipesmriedem: ack.22:57
mriedemwe don't explicitly catch that 409 though no,22:57
mriedemit'd be a 500 if we raced there22:57
jaypipesright.22:57
mriedembut we also wouldn't delete the service record or compute nodes table entries, so you could retry22:57
jaypipesmriedem: which I'm fine with, really, since it's a data corruption error if that happened IMHO (no InstanceList.get_by_service() results but there ARE allocations is a corruption in my book)22:58
mriedemefried: replied inline23:00
*** mdbooth_ has joined #openstack-nova23:02
efriedthanks23:03
*** mchlumsky has quit IRC23:03
*** mdbooth has quit IRC23:04
*** mriedem is now known as mriedem_away23:07
*** efried has quit IRC23:08
*** mlavalle has quit IRC23:11
*** mmethot_ has quit IRC23:14
*** efried has joined #openstack-nova23:15
*** ttsiouts has quit IRC23:47
*** ttsiouts has joined #openstack-nova23:47
*** ttsiouts has quit IRC23:52
*** erlon_ has joined #openstack-nova23:53
*** hongbin has quit IRC23:57
*** markvoelker has joined #openstack-nova23:58

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!