Friday, 2019-02-15

*** whoami-rajat has quit IRC00:00
*** markvoelker has joined #openstack-nova00:09
*** macza has joined #openstack-nova00:27
openstackgerritTakashi NATSUME proposed openstack/nova-specs master: Fix warnings in the document generation  https://review.openstack.org/63115000:29
*** tbachman has quit IRC00:31
*** _alastor_ has quit IRC00:31
*** macza has quit IRC00:32
*** lbragstad has quit IRC00:38
*** lxkong has joined #openstack-nova00:41
*** gyee has quit IRC00:41
*** markvoelker has quit IRC00:42
lxkonghi nova, how can an admin user create a VM using a normal user's port?00:43
*** ileixe has joined #openstack-nova01:02
*** tiendc has joined #openstack-nova01:04
*** wolverineav has quit IRC01:08
*** wolverineav has joined #openstack-nova01:10
*** wolverineav has quit IRC01:15
*** tbachman has joined #openstack-nova01:15
openstackgerrityuanliu proposed openstack/nova master: Fix query method for nova compute services by compute_driver types  https://review.openstack.org/63708301:18
*** brinzhang has joined #openstack-nova01:28
*** wolverineav has joined #openstack-nova01:32
*** _alastor_ has joined #openstack-nova01:33
*** _alastor_ has quit IRC01:38
*** markvoelker has joined #openstack-nova01:39
*** wolverineav has quit IRC01:48
*** wolverineav has joined #openstack-nova01:50
*** _fragatina has quit IRC01:54
*** wolverineav has quit IRC01:55
openstackgerritMerged openstack/nova master: Remove deprecated 'os-flavor-manage' policy  https://review.openstack.org/63365601:56
openstackgerritMerged openstack/nova master: Fix a missing policy in test policy data  https://review.openstack.org/63368601:56
*** hongbin has joined #openstack-nova01:56
*** cfriesen has quit IRC01:59
*** wolverineav has joined #openstack-nova02:02
openstackgerritTakashi NATSUME proposed openstack/nova stable/rocky: Fix a missing policy in test policy data  https://review.openstack.org/63708502:03
openstackgerritMerged openstack/nova master: Change nova-next tempest test regex  https://review.openstack.org/63645902:05
openstackgerritTakashi NATSUME proposed openstack/nova stable/queens: Fix a missing policy in test policy data  https://review.openstack.org/63711202:07
*** erlon_ has quit IRC02:07
*** mriedem has quit IRC02:11
*** markvoelker has quit IRC02:12
*** sapd1 has joined #openstack-nova02:13
openstackgerritMerged openstack/nova master: Default zero disk flavor to RULE_ADMIN_API in Stein  https://review.openstack.org/60391002:18
openstackgerritMerged openstack/nova master: Fix deps for api-samples tox env  https://review.openstack.org/63362002:18
*** wolverineav has quit IRC02:32
*** wolverineav has joined #openstack-nova02:33
*** wolverineav has quit IRC02:37
*** Dinesh_Bhor has joined #openstack-nova02:39
openstackgerritYongli He proposed openstack/nova master: Add server subresouce toplogy API  https://review.openstack.org/62147602:44
*** hongbin has quit IRC02:48
*** mlavalle has quit IRC02:53
openstackgerritYongli He proposed openstack/nova master: Adds the server group info into show server detail API.  https://review.openstack.org/62147402:54
*** psachin has joined #openstack-nova02:56
*** markvoelker has joined #openstack-nova03:09
*** tbachman has quit IRC03:28
*** liumk2233 has joined #openstack-nova03:30
*** tbachman has joined #openstack-nova03:31
*** lbragstad has joined #openstack-nova03:42
*** markvoelker has quit IRC03:43
*** janki has joined #openstack-nova03:58
openstackgerritMerged openstack/python-novaclient master: Microversion 2.68: Remove 'forced' live migrations, evacuations  https://review.openstack.org/63513104:07
openstackgerritMerged openstack/nova master: Remove get_config_vhostuser  https://review.openstack.org/56547104:34
*** wolverineav has joined #openstack-nova04:36
*** markvoelker has joined #openstack-nova04:40
*** igordc has quit IRC04:40
*** cfriesen has joined #openstack-nova04:59
*** wolverineav has quit IRC05:04
*** wolverineav has joined #openstack-nova05:05
*** ratailor has joined #openstack-nova05:05
openstackgerritYongli He proposed openstack/nova master: Add server subresouce toplogy API  https://review.openstack.org/62147605:09
*** markvoelker has quit IRC05:12
*** raghav has joined #openstack-nova05:15
*** rnoriega has quit IRC05:15
*** rnoriega has joined #openstack-nova05:16
*** janki has quit IRC05:18
*** janki has joined #openstack-nova05:18
openstackgerritMerged openstack/nova master: api-ref: Add descriptions for vol-backed snapshots  https://review.openstack.org/61508405:24
*** psachin has quit IRC05:38
*** moshele has joined #openstack-nova05:38
*** wolverineav has quit IRC05:40
*** wolverineav has joined #openstack-nova05:53
*** wolverineav has quit IRC05:54
*** moshele has quit IRC05:55
melwittthanks gmann ++06:05
*** whoami-rajat has joined #openstack-nova06:08
*** markvoelker has joined #openstack-nova06:09
*** eandersson has quit IRC06:12
*** eandersson has joined #openstack-nova06:13
*** eandersson has quit IRC06:13
*** eandersson has joined #openstack-nova06:14
*** cfriesen has quit IRC06:17
*** lbragstad has quit IRC06:18
*** sridharg has joined #openstack-nova06:23
*** tkajinam_ has joined #openstack-nova06:33
*** tkajinam has quit IRC06:35
ileixeHi guys, I'm not sure this is the right place to ask, but I wish someone would shed light on a newbie question.06:37
ileixeIs there any way to change the timezone (from UTC to any different one) for nova?06:37
*** dpawlik has joined #openstack-nova06:40
*** markvoelker has quit IRC06:43
*** psachin has joined #openstack-nova06:49
*** wolverineav has joined #openstack-nova06:53
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (6)  https://review.openstack.org/57411306:57
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (7)  https://review.openstack.org/57497406:57
*** wolverineav has quit IRC06:58
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (8)  https://review.openstack.org/57531106:58
*** Luzi has joined #openstack-nova07:02
*** maciejjozefczyk has joined #openstack-nova07:06
*** tkajinam_ has quit IRC07:14
*** pbing19 has joined #openstack-nova07:17
*** Dinesh_Bhor has quit IRC07:27
*** Dinesh_Bhor has joined #openstack-nova07:27
*** tkajinam has joined #openstack-nova07:28
*** tkajinam_ has joined #openstack-nova07:29
*** tkajinam has quit IRC07:32
*** markvoelker has joined #openstack-nova07:40
*** maciejjozefczyk has quit IRC07:40
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (9)  https://review.openstack.org/57558107:41
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (10)  https://review.openstack.org/57601707:41
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (11)  https://review.openstack.org/57601807:41
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (12)  https://review.openstack.org/57601907:41
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (13)  https://review.openstack.org/57602007:42
*** matoef1 has joined #openstack-nova07:47
*** maciejjozefczyk has joined #openstack-nova07:55
*** pbing19 has quit IRC07:57
*** wolverineav has joined #openstack-nova07:58
*** bhagyashris has joined #openstack-nova07:58
openstackgerritYongli He proposed openstack/nova master: Add server subresouce toplogy API  https://review.openstack.org/62147607:59
*** pbing19 has joined #openstack-nova08:00
*** rpittau has joined #openstack-nova08:04
matoef1Hi Folks,08:06
matoef1After enabling SSL on all devstack endpoints, I'm unable to create a cluster because Nova returns a 400 HTTP status (user_data too long) to Heat. But my user_data size is under 64K. Message from Nova: http://paste.openstack.org/show/745137/08:06
matoef1Event and resource lists and nova, heat versions: http://paste.openstack.org/show/745139/08:06
matoef1Thank you in advance for any help you can provide.08:06
*** ade_lee_ has quit IRC08:08
*** ade_lee_ has joined #openstack-nova08:08
*** rchurch has quit IRC08:09
*** manjeets has quit IRC08:09
*** rchurch has joined #openstack-nova08:09
*** manjeets has joined #openstack-nova08:11
*** markvoelker has quit IRC08:12
*** awalende has joined #openstack-nova08:13
*** ade_lee_ has quit IRC08:14
*** ade_lee_ has joined #openstack-nova08:14
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (14)  https://review.openstack.org/57602708:17
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (15)  https://review.openstack.org/57603108:18
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (16)  https://review.openstack.org/57629908:18
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (17)  https://review.openstack.org/57634408:18
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (18)  https://review.openstack.org/57667308:19
*** tkajinam_ has quit IRC08:20
*** tesseract has joined #openstack-nova08:22
*** yikun has quit IRC08:25
*** tssurya has joined #openstack-nova08:39
*** mcgiggler has joined #openstack-nova08:45
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (19)  https://review.openstack.org/57667608:51
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (20)  https://review.openstack.org/57668908:51
openstackgerritTakashi NATSUME proposed openstack/nova master: Remove mox in unit/network/test_neutronv2.py (21)  https://review.openstack.org/57670908:51
openstackgerritTakashi NATSUME proposed openstack/nova stable/queens: Add description of custom resource classes  https://review.openstack.org/61912508:53
*** takashin has left #openstack-nova09:03
*** xek has joined #openstack-nova09:05
*** erlon_ has joined #openstack-nova09:06
*** markvoelker has joined #openstack-nova09:10
*** ccamacho has joined #openstack-nova09:10
*** priteau has joined #openstack-nova09:20
gibiefried: nice catch about the lock copy.09:21
openstackgerritKashyap Chamarthy proposed openstack/nova master: libvirt: Bump MIN_{LIBVIRT,QEMU}_VERSION for "Stein"  https://review.openstack.org/63250709:24
openstackgerritKashyap Chamarthy proposed openstack/nova master: libvirt: Drop MIN_LIBVIRT_PARALLELS_SET_ADMIN_PASSWD  https://review.openstack.org/63251409:24
openstackgerritKashyap Chamarthy proposed openstack/nova master: libvirt: Rewrite _create_pty_device() to be clearer  https://review.openstack.org/63715209:24
gibiefried: I read back your discussion with mriedem about the VCPU handling in the request_groups09:25
gibiefried: there is a note in the code where the field is initialized which is the place where the unnumbered group (VCPU and the rest) should be added09:26
gibiefried: I can add another note to the definition of the field, but that note will be in a place that doesn't need to be changed when the actual refactor adds the unnumbered group to the list, so that note will be subject to drift09:27
*** derekh has joined #openstack-nova09:28
*** zhanglong has joined #openstack-nova09:32
*** zhanglong has left #openstack-nova09:32
*** moshele has joined #openstack-nova09:41
*** markvoelker has quit IRC09:42
*** awalende has quit IRC09:45
*** bhagyashris has quit IRC09:47
*** awalende has joined #openstack-nova09:47
*** ccamacho has quit IRC09:48
*** ccamacho has joined #openstack-nova09:48
*** awalende has quit IRC09:52
*** awalende has joined #openstack-nova09:54
*** erlon_ has quit IRC09:56
*** Dinesh_Bhor has quit IRC09:58
*** ralonsoh has joined #openstack-nova10:07
*** dtantsur|afk is now known as dtantsur10:09
*** erlon has joined #openstack-nova10:24
*** ociuhandu has joined #openstack-nova10:24
*** liumk2233 has quit IRC10:30
*** liumk2233 has joined #openstack-nova10:30
*** erlon has quit IRC10:31
*** sridharg has quit IRC10:32
*** priteau has quit IRC10:33
*** moshele has quit IRC10:33
*** moshele has joined #openstack-nova10:34
*** cdent has joined #openstack-nova10:36
*** markvoelker has joined #openstack-nova10:39
*** sridharg has joined #openstack-nova10:46
*** erlon has joined #openstack-nova10:52
*** erlon has quit IRC10:52
openstackgerritSylvain Bauza proposed openstack/nova master: WIP: Use the correct mdev allocated from the pGPU  https://review.openstack.org/63659110:52
openstackgerritSylvain Bauza proposed openstack/nova master: libvirt: implement reshaper for vgpu  https://review.openstack.org/59920810:52
openstackgerritSylvain Bauza proposed openstack/nova master: Add functional test for libvirt vgpu reshape  https://review.openstack.org/63155910:52
*** erlon has joined #openstack-nova10:53
*** erlon has quit IRC10:54
*** erlon has joined #openstack-nova10:54
*** erlon has quit IRC10:54
*** erlon has joined #openstack-nova10:55
*** moshele has quit IRC10:56
*** ileixe has quit IRC10:56
*** pbing19 has quit IRC11:00
*** pbing19 has joined #openstack-nova11:00
*** cdent has quit IRC11:02
*** cdent has joined #openstack-nova11:06
*** yan0s has joined #openstack-nova11:08
*** sridharg has quit IRC11:08
*** brinzhang has quit IRC11:10
*** wolverineav has quit IRC11:11
*** markvoelker has quit IRC11:13
gibijaypipes, efried: responded in https://review.openstack.org/#/c/616239 . I will respin the patch with some fixes you suggested11:13
*** moshele has joined #openstack-nova11:17
rhaIs anybody here who could do a review on https://review.openstack.org/#/c/420026 please?11:21
*** tbachman has quit IRC11:22
*** moshele has quit IRC11:32
*** _alastor_ has joined #openstack-nova11:45
*** moshele has joined #openstack-nova11:45
*** _alastor_ has quit IRC11:50
*** tbachman has joined #openstack-nova11:52
*** liumk2233 has quit IRC11:55
*** moshele has quit IRC12:04
*** markvoelker has joined #openstack-nova12:10
*** tiendc has quit IRC12:10
*** janki has quit IRC12:16
*** tbachman has quit IRC12:22
*** avolkov has joined #openstack-nova12:22
*** pbing19 has quit IRC12:22
*** pbing19 has joined #openstack-nova12:25
*** Luzi has quit IRC12:29
*** pbing19 has quit IRC12:30
*** pbing19 has joined #openstack-nova12:33
*** lpetrut has joined #openstack-nova12:40
*** markvoelker has quit IRC12:42
*** raghav has quit IRC12:43
*** ratailor has quit IRC12:43
*** ccamacho has quit IRC12:55
*** ccamacho has joined #openstack-nova12:55
*** whoami-rajat has quit IRC12:57
openstackgerrithyunsik Yang proposed openstack/nova stable/pike: Manage Compute services in nova manual typos  https://review.openstack.org/63717812:58
openstackgerritAndrey Volkov proposed openstack/nova master: Check hosts have no instances for AZ rename  https://review.openstack.org/50920613:03
*** pbing19 has quit IRC13:11
*** thgcorrea has joined #openstack-nova13:16
*** dave-mccowan has joined #openstack-nova13:18
jaypipesrha: wow, >2 year old patch... sure, I'll take a look.13:22
openstackgerritBalazs Gibizer proposed openstack/nova master: Calculate RequestGroup resource provider mapping  https://review.openstack.org/61623913:24
openstackgerritBalazs Gibizer proposed openstack/nova master: Fill the RequestGroup mapping during schedule  https://review.openstack.org/61952813:24
openstackgerritBalazs Gibizer proposed openstack/nova master: Pass resource provider mapping to neutronv2 api  https://review.openstack.org/61624013:24
openstackgerritBalazs Gibizer proposed openstack/nova master: Recalculate request group - RP mapping during re-schedule  https://review.openstack.org/61952913:24
openstackgerritBalazs Gibizer proposed openstack/nova master: Send RP uuid in the port binding  https://review.openstack.org/56945913:24
openstackgerritBalazs Gibizer proposed openstack/nova master: Test boot with more ports with bandwidth request  https://review.openstack.org/57331713:24
openstackgerritBalazs Gibizer proposed openstack/nova master: Remove port allocation during detach  https://review.openstack.org/62242113:24
openstackgerritBalazs Gibizer proposed openstack/nova master: Record requester in the InstancePCIRequest  https://review.openstack.org/62531013:24
openstackgerritBalazs Gibizer proposed openstack/nova master: Add pf_interface_name tag to passthrough_whitelist  https://review.openstack.org/62531113:24
openstackgerritBalazs Gibizer proposed openstack/nova master: Ensure that bandwidth and VF are from the same PF  https://review.openstack.org/62354313:24
rhajaypipes: Yeah, it has quite a history already  :). Thanks!13:24
openstackgerritBalazs Gibizer proposed openstack/nova master: Support server create with ports having resource request  https://review.openstack.org/63636013:24
openstackgerritSurya Seetharaman proposed openstack/nova master: API microversion 2.69: Handles Down Cells  https://review.openstack.org/59165713:25
openstackgerritSurya Seetharaman proposed openstack/nova master: API microversion 2.69: Handles Down Cells Documentation  https://review.openstack.org/63514713:25
openstackgerritSurya Seetharaman proposed openstack/nova master: Add context.target_cell() stub to DownCellFixture  https://review.openstack.org/63718213:25
jangutterrha: Ah, SIGHUP and how it seems to screw everyone over....13:25
jaypipesgibi: ok, cool. my recommendation made sense then?13:27
gibijaypipes: renaming and func docing for sure, the actual algo change not so much13:28
*** mriedem has joined #openstack-nova13:34
*** psachin has quit IRC13:36
*** efried is now known as fried_rice13:37
fried_riceo/13:37
fried_ricegibi: I agree about the drift in that comment, but I feel like it's a chance worth taking, since it's SO non-obvious.13:38
openstackgerritBalazs Gibizer proposed openstack/nova master: Calculate RequestGroup resource provider mapping  https://review.openstack.org/61623913:38
openstackgerritBalazs Gibizer proposed openstack/nova master: Fill the RequestGroup mapping during schedule  https://review.openstack.org/61952813:38
openstackgerritBalazs Gibizer proposed openstack/nova master: Pass resource provider mapping to neutronv2 api  https://review.openstack.org/61624013:38
openstackgerritBalazs Gibizer proposed openstack/nova master: Recalculate request group - RP mapping during re-schedule  https://review.openstack.org/61952913:38
openstackgerritBalazs Gibizer proposed openstack/nova master: Send RP uuid in the port binding  https://review.openstack.org/56945913:38
openstackgerritBalazs Gibizer proposed openstack/nova master: Test boot with more ports with bandwidth request  https://review.openstack.org/57331713:38
openstackgerritBalazs Gibizer proposed openstack/nova master: Remove port allocation during detach  https://review.openstack.org/62242113:38
openstackgerritBalazs Gibizer proposed openstack/nova master: Record requester in the InstancePCIRequest  https://review.openstack.org/62531013:38
openstackgerritBalazs Gibizer proposed openstack/nova master: Add pf_interface_name tag to passthrough_whitelist  https://review.openstack.org/62531113:38
openstackgerritBalazs Gibizer proposed openstack/nova master: Ensure that bandwidth and VF are from the same PF  https://review.openstack.org/62354313:38
openstackgerritBalazs Gibizer proposed openstack/nova master: Support server create with ports having resource request  https://review.openstack.org/63636013:38
gibifried_rice: I've added a NOTE :)13:38
fried_ricethank you13:38
fried_riceDid you rebase on latest master by chance?13:38
gibifried_rice: yes13:39
fried_ricenice13:39
gibifried_rice: there was some merge conflict in the middle due to os-resource-classes13:39
gibinow everything is shiny13:39
fried_riceSundar and I are working on cyborg stuff, and need to base it on your bottom couple of patches, and there was merge conflict on exactly that, so that'll clear it up for us. Thanks.13:39
*** markvoelker has joined #openstack-nova13:39
gibifried_rice: yeah, I saw that cyborg patches appeared on top of my series; hope you can reuse some of my stuff13:40
*** whoami-rajat has joined #openstack-nova13:41
fried_ricegibi: that's the plan. I'm really glad you led the charge with that.13:41
*** awaugama has joined #openstack-nova13:42
*** awaugama_ has joined #openstack-nova13:43
*** awaugama_ has quit IRC13:43
*** moshele has joined #openstack-nova13:44
*** awaugama has quit IRC13:45
*** awaugama has joined #openstack-nova13:45
gibifried_rice: cool13:45
cdentfeels like a nice rush of progress lately. exciting.13:46
*** moshele has quit IRC13:47
*** dpawlik has quit IRC13:49
gibilet's break nova before the feature freeze ;)13:50
*** dpawlik has joined #openstack-nova13:50
cdent\o/13:50
cdentbreaking things is a great way to make things better13:50
bauzasplease, no.13:55
*** sapd1 has quit IRC13:55
bauzasI was off most of this cycle because $bugs13:55
gibibauzas: we're making your job security better :)13:56
bauzasremember, when you say "it's good to merge first and then see bugs", some people actually have to deal with those bugs13:56
bauzasgibi: well, no13:56
bauzasgibi: being off upstream is bad for me :(13:56
bauzasof course, it's nice for our customers13:56
bauzasbut tbh, I prefer to be conservative and not have bugs because we tried to merge very quickly without verifying13:57
bauzasso customers would be happy and *I* would be happy13:57
gibibauzas: do you have an overall feeling that OpenStack has produced more bugs lately than before? or is what changed that you are more involved with customer support?13:57
bauzasfor placement and instance groups ? surely13:58
*** agopi_ is now known as agopi13:58
bauzaswhat folks don't know if that most of our customers were using Newton until 6 months ago13:59
bauzasnow they use Queens13:59
cdentso clearly any changes we are making now aren't really all that relevant13:59
bauzascdent: until the next year, for sure13:59
bauzasbut then ?13:59
bauzasunless you want me going to another company... /o\13:59
*** lbragstad has joined #openstack-nova14:00
gibibauzas: do you see some missing test coverage that OpenStack should add to avoid some of your painful bugs?14:01
bauzasgood question14:01
*** mlavalle has joined #openstack-nova14:02
bauzaswhat I know is that we have less people working on OpenStack14:02
bauzasso, for example, if we have a problem, only a few from our team could help fix it14:03
bauzasalso, we have less operators testing OpenStack14:03
bauzasso, somehow, it means that when we merge something, we discover bugs later14:03
bauzasthan previously14:03
bauzasnow, the real question is : if we were having better test coverage, would that be better ? oh yes, of course14:04
bauzasbut14:04
bauzaswe would still miss bugs14:04
bauzasso at the end of the day, that would still impact us14:05
gibiyeah, I feel the less people part too. Unfortunately half of the reason I'm paid to work on OpenStack is doing features, so I cannot just say no to feature work :/14:06
*** dpawlik has quit IRC14:06
bauzasof course14:07
bauzaswe're all paid by customers14:07
bauzasI'm not saying no to features14:07
bauzastbc, I just gave my opinion now because it looked like we were not thinking about what quickly merging things means for us14:08
kashyapbauzas: Not quite; less features, more on: maintainability, stability, robustness, performance (a feature)14:08
bauzasthe reshape series is a good example14:08
bauzasI wish I could have worked earlier on it14:08
kashyapAnd ... "don't fall apart if you sneeze", so on14:08
*** dpawlik has joined #openstack-nova14:08
* kashyap jumped in randomly on just the last point "not saying no to features", without reading the full context 14:09
bauzasso, now, I'm just trying to test a lot of things14:09
bauzasto make sure we don't have problems14:09
cdentI reckon things could be improved quite a bit if there were a closer tie between the code being worked on now and the people using it, so that more people were making sure it was okay14:09
bauzasanyway, back to https://review.openstack.org/#/c/636591/3/nova/virt/libvirt/driver.py14:09
cdentthat time-related disconnect is a problem14:10
bauzascdent: I agree with you14:10
gibicdent: true14:10
cdentbut it is also a problem that our systems are so complex (much of the time) that they are hard to test and experiment with14:10
bauzasit's difficult since we have some time between the merge and the usage14:10
cdentit would be nice to figure out a way to change that14:10
bauzascdent: heh, Nova is 2M LOCs AFAIK14:10
*** eharney has joined #openstack-nova14:10
cdentouch14:10
cdentwell I've done my part to make nova smaller, I hope someone else will too14:11
cdentbrb14:11
gibicdent: our downstream development switched from mitaka to pike recently and that already helped me to get actionable feedback from the downstream teams14:11
bauzasanyway, we're digressing a lot now from real work14:11
bauzasit's more a hallway discussion, or a PTG one if you prefer14:11
bauzasbut I'm very sad to not have time to help more Nova14:11
bauzaslike for example, doing upstream bug triage14:12
*** markvoelker has quit IRC14:12
*** _alastor_ has joined #openstack-nova14:19
*** maciejjozefczyk has quit IRC14:24
mriedemdansmith: i'm +1 on the down cell series up through the api change https://review.openstack.org/#/c/591657/ just waiting on zuul results if you want to step through those today14:28
belmoreiramriedem: dansmith: does it make sense? https://bugs.launchpad.net/nova/+bug/1816034 or am I missing something?14:29
openstackLaunchpad bug 1816034 in OpenStack Compute (nova) "Ironic flavor migration and default resource classes" [Undecided,New]14:29
dansmithmriedem: cool, I meant to get to them yesterday and got distracted14:30
dansmithbelmoreira: mriedem probably remembers better than I, but there was something recently about this14:31
mriedembelmoreira: and you have this enabled yes? https://review.openstack.org/#/c/609043/14:38
mriedemwhich is probably why the ironic driver isn't reporting standard resource class inventory14:38
*** lpetrut has quit IRC14:39
*** ociuhandu_ has quit IRC14:41
*** ociuhandu_ has joined #openstack-nova14:42
*** maciejjozefczyk has joined #openstack-nova14:43
mriedemi wonder if we should just set IronicDriver.requires_allocation_refresh = not CONF.workarounds.report_ironic_standard_resource_class_inventory14:44
mriedembecause if we're not going to report the standard inventory we shouldn't try to put allocations for those standard inventory resource classes14:45
openstackgerritMerged openstack/nova master: Libvirt: do not set MAC when unplugging macvtap VF  https://review.openstack.org/62484214:46
belmoreiramriedem: is set to False14:46
mriedemyeah so i think simply setting IronicDriver.requires_allocation_refresh = not CONF.workarounds.report_ironic_standard_resource_class_inventory would fix the issue14:47
mriedemerr,14:47
mriedemnix the 'not'14:47
mriedemthat's probably easier than mucking with the _pike_flavor_migration code in stable branches since that code is all gone in stein14:48
*** awalende has quit IRC14:49
belmoreiramriedem: not sure if I'm following. What you are suggesting is to disable IronicDriver.requires_allocation_refresh14:49
mriedemcorrect, which is what you said you did as a workaround in the bug14:50
*** thgcorrea has quit IRC14:51
mriedemunless that will regress the fix for bug 1724589 somehow14:52
openstackbug 1724589 in OpenStack Compute (nova) pike "Unable to transition to Ironic Node Resource Classes in Pike" [High,Fix committed] https://launchpad.net/bugs/1724589 - Assigned to Matt Riedemann (mriedem)14:52
*** dpawlik has quit IRC14:52
*** dpawlik has joined #openstack-nova14:53
belmoreiramriedem: we are fixing this in our instances_extra because we will need to recreate the allocations/RP, to make sure the RP uuid will not change in a sharded nova-compute setup with ironic14:54
belmoreiraotherwise allocations will not be recreated14:54
mriedemyou mean for old instances14:54
mriedemotherwise the scheduler would create the allocations14:54
belmoreirayes, for old instances14:55
belmoreirabecause I would like to have https://review.openstack.org/#/c/571535/14:57
mriedemand it looks like http://git.openstack.org/cgit/openstack/nova/tree/nova/scheduler/client/report.py#n136 will do the right thing and filter out any allocations from the flavor that are 014:58
mriedemso requires_allocation_refresh can remain true and only report the custom node resource class allocation14:58
mriedemand the driver will only report custom resource class inventory since CONF.workarounds.report_ironic_standard_resource_class_inventory=False14:58
belmoreirayes14:59
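[editor's note: the zero-filtering behavior discussed above, where the report client drops allocations for resource classes the flavor zeroes out, can be sketched as follows. This is a minimal illustration, not nova's actual report.py code; the function name and data shape are assumptions.]

```python
# Minimal sketch of the behavior discussed above: before reporting
# allocations to placement, drop any resource class whose requested
# amount is zero, so an ironic instance whose flavor zeroes out
# VCPU/MEMORY_MB/DISK_GB only allocates against its custom class.
# (Illustrative only -- function name and data shape are assumed.)

def filter_zero_allocations(allocations):
    """Return allocations with zero-valued resource amounts removed."""
    return {
        rp_uuid: {'resources': {rc: amt
                                for rc, amt in alloc['resources'].items()
                                if amt > 0}}
        for rp_uuid, alloc in allocations.items()
    }
```

For example, an allocation of {'VCPU': 0, 'CUSTOM_BAREMETAL_SMALL': 1} against a node's resource provider would be reported as just the custom class.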
mriedemso i guess this whole thing was missed when the code was dropped14:59
mriedemhttps://github.com/openstack/nova/blob/stable/rocky/nova/virt/ironic/driver.py#L55414:59
mriedem"This code can be removed in Queens, and will need to be updated to also alter extra_specs to zero-out the old-style standard resource classes of VCPU, MEMORY_MB, and DISK_GB."14:59
mriedemheh, "remove this code, but also change it"14:59
belmoreiraI guess existing ironic deployments will hit this when they upgrade to rocky. The impact depends on their expectations and config15:00
mriedemyeah so CONF.workarounds.report_ironic_standard_resource_class_inventory=False actually causes a problem15:01
mriedemfixes one issue, exposes another15:01
*** thgcorrea has joined #openstack-nova15:01
mriedemdansmith: jaypipes: jroll: edleafe: do you remember why we didn't just zero out the standard resource classes on embedded instance.flavor for ironic instances in the first place?15:03
mriedemi.e. https://github.com/openstack/nova/blob/stable/rocky/nova/virt/ironic/driver.py#L55415:03
edleafemriedem: IIRC, it was for display purposes15:03
edleafethe flavor would show what the user would get15:04
mriedembut can't we still have that *and* override by setting resources:VCPU=0?15:04
*** dpawlik has quit IRC15:05
*** matoef1 has quit IRC15:05
mriedemhttps://docs.openstack.org/nova/latest/user/flavors.html15:05
mriedem"Custom resource classes and standard resource classes to override"15:05
mriedemyeah15:05
mriedem resources:CUSTOM_BAREMETAL_SMALL=1 resources:VCPU=015:05
edleafeThat was how it was supposed to be done. Keep the values in the flavor, and zero out in extra_specs15:06
mriedemok i don't know why that standard resource class 0 override wasn't done at the same time as the embedded instance.flavor migration once the node was reporting a custom resource class15:06
mriedemplus that's what the ironic docs tell operators to do https://docs.openstack.org/ironic/latest/install/configure-nova-flavors15:07
edleafemriedem: https://docs.openstack.org/ironic/latest/install/configure-nova-flavors describes it pretty well15:07
edleafejinx15:07
dansmithmriedem: yeah, for display and to avoid triggering any of our other magic behaviors with a value is zero, like root size15:08
mriedemsure but we can override via the extra spec15:09
*** thgcorrea has quit IRC15:09
mriedemthat seems to be the missing piece15:09
mriedemnow whether or not the code that's reporting allocations actually looks at that override...15:09
dansmithright, the point being.. leave the real values for display, override what we actually ask for in the resource overrides15:09
*** markvoelker has joined #openstack-nova15:10
*** takamatsu has joined #openstack-nova15:10
mriedemok i think this is where the override happens https://github.com/openstack/nova/blob/stable/rocky/nova/scheduler/utils.py#L36615:11
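[editor's note: the override mechanism discussed above, keeping the real flavor values for display while "resources:*" extra specs control what is actually requested, can be sketched as follows. This is a simplified illustration of the idea, not nova's scheduler utils code; the function name is an assumption.]

```python
# Simplified sketch of "resources:<CLASS>=<amount>" extra-spec
# overrides for an ironic flavor, per the discussion above: the flavor
# keeps its display values (vcpus, ram, disk), the extra specs zero
# out the standard classes and request the custom node class instead.
# (Illustrative only -- the function name is assumed.)

def requested_resources(flavor, extra_specs):
    """Build the resource amounts a boot request would ask for."""
    resources = {
        'VCPU': flavor['vcpus'],
        'MEMORY_MB': flavor['memory_mb'],
        'DISK_GB': flavor['root_gb'],
    }
    # "resources:<CLASS>=<amount>" extra specs override the defaults.
    for key, value in extra_specs.items():
        if key.startswith('resources:'):
            resources[key[len('resources:'):]] = int(value)
    # Zero-valued amounts mean "do not request this class at all".
    return {rc: amt for rc, amt in resources.items() if amt > 0}
```

So a baremetal flavor that still shows 40 VCPUs to the user, but carries resources:VCPU=0 (and the same for memory/disk) plus resources:CUSTOM_BAREMETAL_SMALL=1, ends up requesting only the custom resource class, which matches what the ironic configure-nova-flavors docs tell operators to set up.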
*** thgcorrea has joined #openstack-nova15:14
*** bnemec is now known as bnemec-pto15:16
*** dpawlik has joined #openstack-nova15:17
*** mchlumsky has joined #openstack-nova15:18
*** ratailor has joined #openstack-nova15:18
*** lbragstad is now known as elbragstad15:19
*** cfriesen has joined #openstack-nova15:19
*** _alastor_ has quit IRC15:19
mriedembelmoreira: ok i'm working on a patch which you can take a look at - knowing this isn't all totally fubar will require a functional test though given how tightly coupled all of this is between the virt driver, resource tracker and scheduler15:22
*** dpawlik has quit IRC15:22
*** yan0s has quit IRC15:23
*** BlackDex_ is now known as BlackDex15:28
*** maciejjozefczyk has quit IRC15:28
*** ratailor has quit IRC15:30
*** jistr is now known as jistr|mtg15:32
*** kmalloc is now known as needscoffee15:32
belmoreiramriedem: thanks15:37
mriedemthe commit message on this is going to be fun15:38
mriedem"first belmiro lost a finger here, and then he blew off a toe here"15:38
belmoreiraas you may have noticed, I upgraded nova-compute for ironic only today :) it's been an interesting day15:39
belmoreirathe next issue is: https://bugs.launchpad.net/nova/+bug/181608615:40
openstackLaunchpad bug 1816086 in OpenStack Compute (nova) "Resource Tracker performance with Ironic driver" [Undecided,New]15:40
fried_riceuh oh15:41
fried_riceI thought I fixed that. belmoreira are you using latest master?15:41
mriedemhe's upgrading ironic queens to rocky i assume15:42
mriedemfor compute services15:42
mriedemso they would have had to backport your patches15:42
fried_riceoh, that'd do it15:42
fried_riceyeah15:42
*** maciejjozefczyk has joined #openstack-nova15:42
*** markvoelker has quit IRC15:42
belmoreiraI'm using rocky (back ported the patches that we discussed in the past)15:42
belmoreirafried_rice: so what I'm saying in the bug doesn't make sense anymore?15:45
fried_ricebelmoreira: Backported which patches, though?15:45
belmoreirawe couldn't cherrypick the patches in a clean way, we may have missed something15:45
fried_ricebelmoreira: Latest master has changes that made sure to only update the ironic node being touched.15:45
fried_ricelet me find the patch that did that...15:46
fried_ricebelmoreira: https://review.openstack.org/#/c/615677/15:47
fried_ricebelmoreira: I think if you picked that up prior to about PS17, you would have the ironic explosion behavior you're seeing. But after that it *should* be fixed.15:49
jaypipesgibi: hmm, couldn't my algorithm work if we just sorted the request groups in descending order of amount of resources being requested in the group?15:49
jaypipesgibi: then the "greedy" approach would work fine, no?15:49
*** mrjk has joined #openstack-nova15:49
fried_ricebelmoreira: and I know you guys were picking up that series at various points along the way15:49
gibijaypipes: if only one resource class were used then yes, but if you have two resource classes requested then which one would be sorted first?15:49
fried_ricejaypipes: I didn't feel strongly enough to say so in the review, but fwiw I liked gibi's algorithm as is.15:50
jaypipesgibi: do we have tests that show the behaviour of your algorithm is correct for such situations?15:50
belmoreirafried_rice: thanks, I'm checking15:50
tssuryafried_rice: porting was done after that patch got merged15:50
jaypipesfried_rice: it's difficult to read and understand, IMHO.15:50
fried_ricetssurya: roger that, thanks.15:50
fried_ricejaypipes: I don't disagree, but I'm not sure yours is more readable/understandable. (It is to you because you formulated it.)15:51
jaypipesfried_rice: as are most things around multiple use_single_provider request groups, FWIW15:51
gibijaypipes: it's worth stepping through https://review.openstack.org/#/c/616239/26/nova/tests/unit/objects/test_request_spec.py@130515:51
fried_ricetssurya: How far up the series did you port?15:52
gibijaypipes: there group1 could fit first on 3 RPs but finally only RP #3 leads to a solution15:52
jaypipesfried_rice, gibi: this all just leaves an odd taste in my mouth.15:54
jaypipesfried_rice, gibi: a taste that says "oh, we're totally leaking the placement selection details out of the placement API"15:54
gibijaypipes: totally agree15:54
jaypipesand yes, I understand this is a workaround15:54
fried_ricejaypipes: Yeah, it sucks, but is hopefully temporary, to be whacked when placement lets ... just so15:54
jaypipesuntil such point that the result of allocation_candidates contains an indicator of which request group "belongs" to which allocation provider15:55
gibijaypipes: yes15:55
fried_riceSo let's get that spec pushed through so we can implement it early in Train and get rid of this spaghetti.15:55
jaypipesack, ok15:55
gibifried_rice: unfortunately I cannot focus both on that spec and the bandwidth series at the same time (I'm limited)15:56
gibifried_rice: but after feature freeze I can update the spec15:56
fried_ricegibi: Would you like me to take over that spec?15:56
jaypipesgibi, fried_rice: question for you... are different request groups containing same resource class requests guaranteed to land on different resource providers?15:56
gibifried_rice: if it is OK to push it after the feature freeze then I'd like to keep the spec15:56
jaypipes(I can never remember...)15:56
fried_ricejaypipes: if group_policy=none, no15:56
jaypipesah, yes, group_policy=isolate..15:57
jaypipesgibi, fried_rice: is group_policy=isolate taken account of in this mapping patch?15:57
fried_ricegibi: okay. Yes, that should be fine. Just lmk if you want help (other than review)15:57
fried_ricejaypipes: oo, good question.15:57
jaypipesor does it even need to be? /shurg15:57
gibifried_rice: thanks, I will need your review! :)15:57
*** jangutter has quit IRC15:58
fried_ricejaypipes: I think it might need to be, yes.15:58
gibijaypipes, fried_rice: bahh, good question15:58
belmoreirafried_rice: what is taking time is the last for in the update_from_provider_tree15:58
fried_riceas is, gibi's algorithm takes the first of many possible matchups; but it assumes group_policy=none15:58
jaypipesyeah15:58
belmoreirahttps://github.com/openstack/nova/blob/d231a420d0d4865bb19da513af416cb8bc89010f/nova/scheduler/client/report.py#L143915:59
gibifried_rice, jaypipes: if the request was made with isolate but the algo finds an overlapping solution then we are screwed15:59
gibifried_rice: correct15:59
gibifried_rice: so I have to enhance this algo15:59
gibifried_rice: to handle isolate15:59
gibifried_rice: and in that case throw away overlapping solutions15:59
fried_ricegibi: correct16:00
jaypipesgibi, fried_rice: well, maybe... maybe not. :) after all, we *KNOW* that the returned allocations properly meet the group_policy=isolate constraint because the placement service is shown to be correct in that.16:00
* gibi needs to craft examples16:00
fried_ricemm, nooo, I still think there's a hole16:00
jaypipesso we can assume that the provider that is assigned a particular amount of allocated resources does not have any "doubled up" allocations from multiple request groups, right?16:01
gibiin case of isolate, yes16:01
fried_ricebelmoreira: noted16:01
jaypipesthis is what's "eww" about this...  :) it's essentially needing to re-implement the entire request group selection algorithm that's in the placement service already. :)16:01
fried_ricebelmoreira: because new_uuids == {every ironic node}16:02
fried_ricejaypipes: Exactly.16:02
gibijaypipes: you are totally right16:02
belmoreirafried_rice: correct16:02
fried_ricejaypipes: and we haven't even done forbidden traits and aggregates yet16:02
gibifried_rice: and I hope we never have to in this algo16:02
fried_ricebelmoreira: set_inventory_for_provider et al should be shorting out, not talking to the placement API, for any but the changed node. Are you seeing those hit placement?16:03
gibiso I need to find an example where there are two possible valid mappings, one with overlap between groups and one without. If I can make such an example then I need to change the algo16:03
fried_ricebelmoreira: Because https://github.com/openstack/nova/blob/d231a420d0d4865bb19da513af416cb8bc89010f/nova/scheduler/client/report.py#L920-L92416:04
fried_ricegibi: ++. If anyone can do it...16:04
* gibi wondering why these issues surface at Friday after 5 pm 16:04
jaypipesgibi, fried_rice: so, thinking through the group_policy=isolate case... as mentioned, we can assume that placement returned allocations of same-resource-class requests against *different* providers so I don't think we need to modify this algorithm to account for group_policy=isolate, because there's no way the algorithm *could* place amounts of the same resource class on the same provider.16:04
*** jistr|mtg is now known as jistr16:04
jaypipesgibi: sorry :(16:05
gibijaypipes: don't be sorry, you found a deep question here16:05
gibijaypipes: so I'm glad16:05
jaypipesgibi: I'm not sure it's actually a scenario we need to be concerned about. :)16:05
gibijaypipes: it is just Murphy's law16:05
jaypipesheh16:06
gibiisolate is global for the whole request, isn't it?16:06
fried_riceyes16:06
gibiso every group will have its own RP in the allocation16:06
openstackgerritMatt Riedemann proposed openstack/nova master: ironic: complete the flavor data migration started in pike  https://review.openstack.org/63721716:06
mriedembelmoreira: jaypipes: dansmith: edleafe: ^ probably the most detailed commit message ever16:06
mriedemjroll: ^16:06
gibiand the size of the allocation is always the size of the request16:07
gibiso there is no way that in this allocation an overlapping mapping can be found16:07
gibibecause overlap means the size of an allocation is at least the size of two requests16:08
gibiso even if a partial solution is found with overlap, the remaining RPs cannot fulfill the remaining groups16:08
fried_ricegibi: right, I think you would either end up short an allocation, or short a provider.16:09
*** jaypipes is now known as leakypipes16:09
gibifried_rice: yeah, feels like it16:09
mriedemmgoddard: johnthetubaguy: you might also care about https://review.openstack.org/#/c/637217/16:09
* leakypipes hands mriedem a cookie (where's hansmoleman!?)16:09
fried_riceso, maybe safe by serendipity16:09
leakypipesgibi: exactly my thoughts.16:10
leakypipesfried_rice: ya16:10
mgoddardmriedem: that looks like something I would care about, adding to review list16:10
leakypipesfried_rice: well, not necessarily serendipity. safe due to placement's alloc cands selection algorithm already working properly :)16:10
gibileakypipes, fried_rice: OK, I'll still write up a partial overlap case to see that I'm right16:11
*** takamatsu_ has joined #openstack-nova16:11
*** takamatsu has quit IRC16:12
gibihttps://etherpad.openstack.org/p/CdbhX84nKI16:13
gibileakypipes, fried_rice: I think I rest my case now16:13
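The mapping problem debated above can be sketched like this. This is a hedged illustration of the idea only, not gibi's patch under review; the function name, group/allocation shapes, and example data are all invented. Each numbered request group has to be matched to one of the providers in the allocation candidate; a greedy first fit can paint itself into a corner, so the sketch backtracks, and `group_policy=isolate` simply forbids assigning two groups to the same provider.

```python
# Hedged sketch: map request groups (index -> {rc: amount}) onto the
# providers of one allocation candidate (provider -> {rc: amount}) via
# backtracking. Not nova's actual code; names are illustrative.
def find_mapping(groups, allocations, isolate=False,
                 _gid=0, _mapping=None, _used=None):
    """Return {group_index: provider} or None if no consistent mapping."""
    if _mapping is None:
        _mapping = {}
        # Remaining capacity per provider, consumed as groups are placed.
        _used = {rp: dict(res) for rp, res in allocations.items()}
    if _gid == len(groups):
        return dict(_mapping)
    group = groups[_gid]
    for rp, remaining in _used.items():
        if isolate and rp in _mapping.values():
            continue  # group_policy=isolate: one group per provider
        if all(remaining.get(rc, 0) >= amt for rc, amt in group.items()):
            # Tentatively assign and recurse; undo on failure (backtrack).
            for rc, amt in group.items():
                remaining[rc] -= amt
            _mapping[_gid] = rp
            result = find_mapping(groups, allocations, isolate,
                                  _gid + 1, _mapping, _used)
            if result is not None:
                return result
            del _mapping[_gid]
            for rc, amt in group.items():
                remaining[rc] += amt
    return None

# Greedy would assign group 0 to rp2 first and then fail; backtracking
# recovers by moving group 0 to rp1.
allocations = {'rp2': {'VCPU': 1, 'NET_BW_EGR_KILOBIT_PER_SEC': 1000},
               'rp1': {'VCPU': 1}}
groups = [{'VCPU': 1},
          {'VCPU': 1, 'NET_BW_EGR_KILOBIT_PER_SEC': 1000}]
print(find_mapping(groups, allocations))  # → {0: 'rp1', 1: 'rp2'}
```

This also makes gibi's counting argument concrete: with isolate, an "overlapping" mapping would need one provider's allocation to cover at least two full groups, which placement would never have returned for an isolated request in the first place.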
belmoreirafried_rice: yes, we have those changes. I think it's simply the iteration...16:15
fried_ricebelmoreira: I'm asking whether you're seeing 1700x3 calls to placement from that loop, or if all the thrashing is happening locally16:18
mriedemtssurya: you can drop your -W on https://review.openstack.org/#/c/635146/16:18
*** artom has quit IRC16:19
belmoreirafried_rice: we don't see placement calls in the loop16:19
fried_ricebelmoreira: I find it hard to imagine that 5100 local dict compares on the ProviderTree are taking 6h16:19
fried_ricewow16:19
leakypipesgibi: yep, the current algo in the mapping patch does indeed have the possibility of assigning the same RP to different groups erroneously. :(16:20
leakypipesWell, cool, I'm glad my rambling and injection of nonsense into this conversation has resulted in something of actual value.16:20
fried_ricebelmoreira: Good thing ProviderTree isn't an ovo :P16:20
leakypipesjob done. weekend awaits.16:20
belmoreirafried_rice: it's not 5100 compares16:21
tssuryamriedem: thanks a lot! done16:22
fried_ricebelmoreira: Based on: 1700 iterations of the loop; each loop is doing three set_*_for_provider()s; and each set_*_for_provider is asking ProviderTree.has_*_changed()16:23
fried_rice(ugh, my IRC client made a mess of that)16:23
fried_riceohhh16:24
fried_riceSo, here's a thing.16:24
fried_riceThe roots are stored in a list16:24
fried_ricewhich we're iterating over to find the one we're asking about16:24
fried_riceO(N) just to find the provider to do the compare on.16:24
fried_riceSo O(N^2) for that loop.16:24
fried_riceTimes three.16:25
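The lookup cost being described can be sketched as follows. These classes are illustrative stand-ins, not nova's actual ProviderTree: each `has_*_changed()` check first has to find the node's provider record, and with the roots in a list that find is a full scan, so the update loop over all N ironic nodes goes quadratic; keying the roots by UUID makes each find a hash lookup instead.

```python
# Hedged sketch of list-backed vs dict-backed root lookup; class names
# are made up for the example.
import uuid

class ListTree:
    def __init__(self):
        self.roots = []          # list of (uuid, data) tuples

    def find(self, target):
        for rp_uuid, data in self.roots:   # O(N) scan per lookup
            if rp_uuid == target:
                return data
        return None

class DictTree:
    def __init__(self):
        self.roots_by_uuid = {}  # uuid -> data

    def find(self, target):
        return self.roots_by_uuid.get(target)  # O(1) hash lookup

nodes = [str(uuid.uuid4()) for _ in range(1700)]
lt, dt = ListTree(), DictTree()
for n in nodes:
    lt.roots.append((n, {'name': n}))
    dt.roots_by_uuid[n] = {'name': n}

# The periodic update does ~3 lookups per node: with the list that is
# ~1700 * 3 scans of a 1700-entry list (~8.7M comparisons); with the
# dict it is ~5100 hash lookups.
assert lt.find(nodes[-1]) == dt.find(nodes[-1])
```

That is the shape of the change in the "Use dicts for ProviderTree roots" patch pushed later in this log, modulo whatever the real data structures look like.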
*** takamatsu_ has quit IRC16:25
fried_ricebelmoreira: You have a way to shove in a patch and test this easily?16:25
tssuryadansmith, mriedem: yes !!! thanks a lot for the reviews on the API changes :) and sorry for giving such a hard time on that series :D but I am seriously happy that's done. I will respin something for the nits16:25
gibileakypipes, fried_rice: thanks16:25
belmoreirafried_rice: I just did it... in the16:25
mriedemtssurya: no problem, it's a complicated change16:25
mriedemnow just figure out how to not grind the API on startup when there is a down cell16:26
mriedem:)16:26
tssuryahaha yea16:26
dansmithtssurya: np, thanks for your persistence16:27
belmoreirafried_rice: for the "update_from_provider_tree" I'm also passing the compute_node and removed the last "for". It's only to update/check that resource provider16:28
*** takamatsu_ has joined #openstack-nova16:28
*** TxGirlGeek has joined #openstack-nova16:28
*** maciejjozefczyk has quit IRC16:28
fried_ricebelmoreira: That would be one way to do it.16:29
fried_ricebelmoreira: care to push that patch?16:29
fried_ricebelmoreira: But I don't think that's a generic solution long-term. I think it's going to break as soon as we have shared storage providers, for example.16:30
fried_ricebelmoreira: Let me play with the efficiency of ProviderTree itself and see if I can't get that loop a lot tighter...16:30
belmoreirafried_rice: ohh no! this was emergency mode :) just to keep Ironic alive16:31
fried_riceCode blue, get a crash cart in here, got it.16:31
*** dims has joined #openstack-nova16:34
melwittle sigh, the ceph job is consistently timing out taking too long to run. not sure what could have changed16:36
*** wwriverrat has joined #openstack-nova16:37
*** dims has quit IRC16:38
*** macza has joined #openstack-nova16:39
*** markvoelker has joined #openstack-nova16:39
fried_ricebelmoreira: are you running py2 or py3?16:40
belmoreirafried_rice: py216:41
fried_riceight16:41
*** ociuhandu_ has quit IRC16:41
*** ociuhandu_ has joined #openstack-nova16:42
belmoreirav16:42
melwittdansmith: your review on the spec amendment for detach boot volume would be appreciated https://review.openstack.org/61916116:42
dansmithmelwitt: appreciated how much?16:43
* dansmith is looking for cash prizes or .. candy16:43
melwittI have chicago style popcorn16:43
* dansmith googles16:44
*** ociuhandu_ has quit IRC16:44
dansmithyeah, alright.16:44
*** dims has joined #openstack-nova16:45
melwittlol16:45
*** mcgiggler has quit IRC16:46
*** _fragatina has joined #openstack-nova16:48
openstackgerritMerged openstack/nova master: Make VolumeAttachmentsSampleV249 test other methods  https://review.openstack.org/63362116:48
belmoreiraI need to leave, thanks for all the help16:49
*** belmoreira has quit IRC16:51
*** tssurya has quit IRC16:51
*** dims has quit IRC16:53
openstackgerritMerged openstack/nova-specs master: Amend the detach-boot-volume design  https://review.openstack.org/61916116:54
*** dims has joined #openstack-nova16:55
*** eharney has quit IRC16:59
mriedemmelwitt: for the same reasons the tempest-full job times out17:00
mriedemhttp://status.openstack.org/elastic-recheck/#178340517:00
mriedemhttps://review.openstack.org/#/q/topic:bug/1783405+(status:open+OR+status:merged)17:00
mriedem{1} tempest.scenario.test_network_advanced_server_ops.TestNetworkAdvancedServerOps.test_server_connectivity_reboot [278.769116s] ... ok17:01
mriedem{1} tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern [231.855718s] ... ok17:01
mriedem{2} tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume [224.248984s] ... ok17:01
*** dpawlik has joined #openstack-nova17:01
mriedem{0} tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario [266.396758s] ... ok17:01
*** gyee has joined #openstack-nova17:01
melwittoh, I see. thank you. I didn't know the test times are on the tests17:02
mriedemactually it would be worse17:02
* melwitt embarrassed17:02
mriedembecause tempest-full skips slow tests17:02
mriedembut the ceph job doesn't17:03
openstackgerritLee Yarwood proposed openstack/nova master: WIP Use migration_status during volume migrating and retyping  https://review.openstack.org/63722417:03
mriedemand the ceph job is also running cinder tempest plugin tests17:03
melwittthe ceph job does it all!17:03
mriedemhttp://git.openstack.org/cgit/openstack/devstack-plugin-ceph/tree/.zuul.yaml#n1617:03
mriedem^ should be increased probably17:03
melwittit was fine until recently, now it's timing out constantly17:03
openstackgerritEric Fried proposed openstack/nova master: WIP: Perf: Use dicts for ProviderTree roots  https://review.openstack.org/63722517:03
fried_ricebelmoreira, tssurya: ^ try that out, if you please.17:04
mriedemwell, skipping slow tests would help quite a bit17:04
mriedemat the sacrifice of coverage17:05
*** dpawlik has quit IRC17:05
melwittyeah, that's what I was about to say. a separate ceph slow tests job comes to mind but "yet another job"17:05
mriedemcompare to the timeout on the tempest-slow job which only runs slow tests17:05
mriedemhttp://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n27617:05
mriedemthat timeout is 3 hours rather than 217:05
mriedemthe easy fix is bumping the timeout17:06
mriedembut if that job is consistently taking over 2 hours, that means all nova changes are waiting at least that long in the check queue for results from a non-voting job17:06
melwitthm, yeah17:07
mriedemso if we're going to wait that long for that job, i think it needs to be voting17:07
mriedemand if it's going to take that long and be non-voting, i think its place is in the experimental queue17:07
melwittyeah, that's fair17:07
mriedemprobably something to be brought up in the ML since it likely also affects other projects (cinder/glance at least)17:07
smcginnis++17:08
*** dtantsur is now known as dtantsur|afk17:08
melwittthe job was as stable as the normal tempest-full (I need to get the graph again) for quite a while, but I never got around to trying to make it voting again17:09
melwittyeah, ok. I'll grab the data and post to the ML17:09
mriedemspeaking of jobs, it looks like while i fixed legacy-grenade-dsvm-neutron-multinode-live-migration the zuul.yaml change on irrelevant-files means it's not running on nova changes now...17:10
mriedemhttps://review.openstack.org/#/c/634962/5/.zuul.yaml@24217:11
*** markvoelker has quit IRC17:13
melwitt?17:15
mriedeme.g. that job isn't running on https://review.openstack.org/#/c/591657/17:15
mriedemit's now only running if there are changes to nova/tests/live_migration/17:16
melwittoh.... oops17:16
mriedemnova/tests/live_migration/ should probably move under nova/gate/17:16
*** tesseract has quit IRC17:17
*** artom has joined #openstack-nova17:17
*** artom has quit IRC17:18
*** artom has joined #openstack-nova17:18
melwittthat's weird, I thought we've used that regex before to run on nova/tests/live_migration in addition to non-test nova changes17:18
*** rpittau has quit IRC17:20
mriedemi've got a fix17:20
*** TxGirlGeek has quit IRC17:21
openstackgerritMatt Riedemann proposed openstack/nova master: Fix irrelevant-files for legacy-grenade-dsvm-neutron-multinode-live-migration  https://review.openstack.org/63723117:25
openstackgerritBalazs Gibizer proposed openstack/nova master: Trim fake_deserialize_context in test_conductor  https://review.openstack.org/63585917:29
openstackgerritBalazs Gibizer proposed openstack/nova master: Cleanup inflight rpc messages between test cases  https://review.openstack.org/63723317:29
gibimriedem: fyi, I think I got the fkr ^^17:29
* gibi leave the computer for the weekend17:30
mriedemdansmith: maybe we were too hasty https://review.openstack.org/#/c/635146/10/nova/api/openstack/compute/views/servers.py@11417:30
mriedemhttps://review.openstack.org/#/c/635147/13/api-guide/source/down_cells.rst@2817:31
*** igordc has joined #openstack-nova17:32
dansmithmriedem: hmm, is that going to refer to links in other apis that we won't support?17:33
dansmithlike /servers/uuid/somethingelse ?17:33
mriedemno it's a ref back to itself17:33
mriedemso nova show17:33
dansmithI thought there were a couple of links, like to self, flavor, etc17:33
mriedemflavor links would be under the flavor dict in the response17:33
mriedemself is just GET /servers/{server_id}17:33
mriedem"href": "http://openstack.example.com/v2/6f70656e737461636b20342065766572/servers/22c91117-08de-4894-9aa9-6ef382400985"17:34
dansmithokay17:34
dansmithhmm, she's not around and it's friday.. perhaps someone else should just fix it up for her?17:38
mriedemyeah first thing is pulling it from the gate17:38
mriedemso i guess i'll jack the commit message17:38
openstackgerritMatt Riedemann proposed openstack/nova master: Plumbing required in servers ViewBuilder to construct partial results  https://review.openstack.org/63514617:39
openstackgerritLajos Katona proposed openstack/python-novaclient master: Add support for microversion v2.69  https://review.openstack.org/63723417:39
*** ccamacho has quit IRC17:42
*** TxGirlGeek has joined #openstack-nova17:43
*** eharney has joined #openstack-nova17:46
*** igordc has quit IRC17:46
mriedemalright then, looks like i'm going to have lunch and then i'll add the links and such, and probably also fix up my comments in the docs patch at the end17:52
mriedemso much for cross-cell resize fun today17:53
*** ccamacho has joined #openstack-nova18:00
*** derekh has quit IRC18:04
*** ociuhandu_ has joined #openstack-nova18:06
*** ociuhandu has quit IRC18:09
*** markvoelker has joined #openstack-nova18:10
*** ociuhandu_ has quit IRC18:10
*** pbing19 has joined #openstack-nova18:16
*** mriedem has quit IRC18:18
*** ralonsoh has quit IRC18:20
*** mriedem has joined #openstack-nova18:24
*** ociuhandu has joined #openstack-nova18:24
*** tssurya has joined #openstack-nova18:25
*** ociuhandu has quit IRC18:28
*** awalende has joined #openstack-nova18:30
*** awalende has quit IRC18:31
*** awalende has joined #openstack-nova18:31
*** awalende has quit IRC18:36
*** mchlumsky has quit IRC18:42
*** mrjk_ has joined #openstack-nova18:43
*** mchlumsky has joined #openstack-nova18:43
*** markvoelker has quit IRC18:43
*** mrjk has quit IRC18:44
*** psachin has joined #openstack-nova18:52
*** thgcorrea has quit IRC18:56
*** erlon has quit IRC18:57
*** moshele has joined #openstack-nova19:04
*** TxGirlGeek has quit IRC19:07
*** TxGirlGe_ has joined #openstack-nova19:07
*** moshele has quit IRC19:09
*** moshele has joined #openstack-nova19:10
*** wolverineav has joined #openstack-nova19:12
*** wolverineav has quit IRC19:16
*** wolverineav has joined #openstack-nova19:18
*** moshele has quit IRC19:23
*** gyee has quit IRC19:32
mriedemdansmith: here it comes19:34
openstackgerritMatt Riedemann proposed openstack/nova master: Plumbing required in servers ViewBuilder to construct partial results  https://review.openstack.org/63514619:35
openstackgerritMatt Riedemann proposed openstack/nova master: Add context.target_cell() stub to DownCellFixture  https://review.openstack.org/63718219:35
openstackgerritMatt Riedemann proposed openstack/nova master: API microversion 2.69: Handles Down Cells  https://review.openstack.org/59165719:35
openstackgerritMatt Riedemann proposed openstack/nova master: API microversion 2.69: Handles Down Cells Documentation  https://review.openstack.org/63514719:35
dansmithmriedem: omg -1 so hard19:37
mriedemhere it comes19:39
openstackgerritMatt Riedemann proposed openstack/nova master: Plumbing required in servers ViewBuilder to construct partial results  https://review.openstack.org/63514619:39
openstackgerritMatt Riedemann proposed openstack/nova master: Add context.target_cell() stub to DownCellFixture  https://review.openstack.org/63718219:39
openstackgerritMatt Riedemann proposed openstack/nova master: API microversion 2.69: Handles Down Cells  https://review.openstack.org/59165719:39
openstackgerritMatt Riedemann proposed openstack/nova master: API microversion 2.69: Handles Down Cells Documentation  https://review.openstack.org/63514719:39
*** markvoelker has joined #openstack-nova19:41
dansmithmriedem: and the ones above are unchanged so I can stamp them?19:41
*** wolverineav has quit IRC19:42
mriedemminor changes19:42
mriedemthe api samples had to change b/c of the links key in the response19:42
dansmithsure, but I see other stuff too19:44
dansmithanyway, looks okay19:45
mriedemcdent: you want to propose for stable/rocky and friends? https://review.openstack.org/#/c/636701/19:45
mriedemor send out the gary signal?19:45
*** gyee has joined #openstack-nova19:46
mriedemdansmith: i'm +2 on the docs change at the end if you want to flush it all19:46
mriedemi think you do19:46
*** wolverineav has joined #openstack-nova19:48
dansmith420 lines..19:48
mriedemyou live in portland, you can get down with that19:49
dansmithheh19:49
*** wolverineav has quit IRC19:53
*** psachin has quit IRC19:58
openstackgerritCorey Bryant proposed openstack/nova master: add python 3.7 unit test job  https://review.openstack.org/61069419:59
openstackgerritMatt Riedemann proposed openstack/python-novaclient master: API microversion 2.69: Handles Down Cells  https://review.openstack.org/57956319:59
dansmithmriedem: comments on the docs patch.. I could just +2 and we could discuss/fix later, but probably not as much of a rush on that one I'm thinking20:01
*** awalende has joined #openstack-nova20:01
-openstackstatus- NOTICE: The StoryBoard service on storyboard.openstack.org is offline momentarily for maintenance: http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002666.html20:04
tssuryamriedem, dansmith: you had to respin for the links part ? sorry about that and again my never-ending list of "thank you"'s20:04
openstackgerritCorey Bryant proposed openstack/python-novaclient master: add python 3.7 unit test job  https://review.openstack.org/63729020:07
*** zer0c00l has joined #openstack-nova20:07
mriedemdansmith: tssurya: i also just thought about this https://review.openstack.org/#/c/591657/44/nova/api/openstack/compute/services.py@7820:11
*** TxGirlGe_ has quit IRC20:11
*** markvoelker has quit IRC20:13
tssuryamriedem: hmm so you want it to be false if those filters are passed ?20:13
*** TxGirlGeek has joined #openstack-nova20:13
mriedemdansmith: yeah i'm ok with those docs changes20:14
mriedemtssurya: idk, hence the question20:14
mriedemtssurya: if we were being consistent, we'd set cell_down_support=False if there were any filters on the request20:14
dansmithI don't really have an opinion on the servers stuff20:14
dansmither, services20:14
mriedembut like i said, we're not filtering in the db query, we're doing it in python once we get results20:14
mriedemso *shrug*?20:14
tssuryayea20:16
tssuryabesides technically the edge cases are only for "listing servers" :D20:16
tssuryaat least in the docs and everywhere its only for the server details20:17
mriedemoh well if the docs say so...20:19
mriedemanyway i'm fine with the way it is, not really worth losing sleep over it20:20
*** wolverineav has joined #openstack-nova20:21
*** wolverineav has quit IRC20:27
*** itlinux has joined #openstack-nova20:30
mriedemlooks like a whole bunch of 3rd party CIs are failing on this20:31
mriedemConnectionError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max retries exceeded with url: /packages/ed/39/15045ae46f2a123019aa968dfcba0396c161c20f855f11dea6796bcaae95/PyMySQL-0.9.3-py2.py3-none-any.whl (Caused by ReadTimeoutError("HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out. (read timeout=15)",))20:31
*** agopi is now known as agopi|bbiab20:32
*** itlinux has quit IRC20:34
*** ccamacho has quit IRC20:36
*** agopi|bbiab has quit IRC20:37
*** itlinux has joined #openstack-nova20:39
*** agopi|bbiab has joined #openstack-nova20:43
fried_riceokay, but cmon, do we actually *need* PyMySQL?20:43
melwittnah20:43
fried_ricetssurya: Did you see the patch I proposed to improve the ironic ufpt thing?20:44
fried_ricegibi: Is it too late to make the bw part of the binding profile look more like the resource request syntax in extra specs?20:44
tssuryafried_rice: saw the WIP, but didn't review it yet20:45
*** TxGirlGeek has quit IRC20:45
fried_ricetssurya: Less about review, more about trying it in your env and seeing if it takes the 6h processing ironic nodes down to something sane.20:45
tssuryahaha we kind of hacked the bit of code that tries to do the N^2 thing and already got the node up20:46
tssuryafor now20:46
tssuryaI will be able to try your patch only on Monday :(20:46
fried_ricetssurya: yeah, IIUC you're only processing the one node, but that's not a long term solution.20:46
fried_riceokay20:46
tssuryabut trust me getting that to work properly is our priority20:47
fried_ricegibi: IMO that's a more consistent user experience; and then also we would get to use common code for parsing into RequestGroup etc.20:47
tssuryayour concern is for provider sharing right ?20:47
fried_ricetssurya: That's one example. It's really a question of why that code is there in the first place: it's because upt is allowed to muck with anything in the ProviderTree object.20:48
fried_ricetssurya: Like what if ironic actually *wanted* to dork with more than one node at a time?20:48
tssuryafried_rice: since we are talking about why that code exists, can you give me a more concrete example of when ironic would want that ?20:53
tssuryalike I am trying to understand more use cases for this whole thing20:53
tssuryasince I am pretty sure we don't need them for now right ?20:53
tssuryaor maybe I am missing some documentation20:53
fried_ricetssurya: Yeah, I don't have a specific use case; I could only contrive hypothetical ones. It's just that update_provider_tree is a generic method which passes the *whole* ProviderTree object down to the virt driver for modification, and then update_from_provider_tree is responsible for flushing any changes back to placement.20:54
fried_riceI really didn't want upt/ufpt to be tightly bound to specific virt drivers' implementations or needs in that regard.20:54
fried_riceand it's specifically designed to be future-looking to when we have nested providers, sharing providers, etc.20:55
fried_rice...just like the code that broke your Queens performance :)20:55
tssuryayea I get that, it's just that for ironic I don't know how this would scale with the more nodes we keep adding in future20:55
fried_rice...which wasn't actually going to be used until... I think stein actually.20:55
cdentmriedem: i'll either get to it or get gary, but yeah20:55
fried_ricetssurya: Well, let's see how this does.20:55
fried_ricetssurya: Note that there are already a number of anticipatory optimizations in that code.20:56
fried_ricetssurya: Like the fact that we do local compares and only call out to placement if something has changed.20:56
openstackgerritmelanie witt proposed openstack/nova master: Add user_id field to InstanceMapping  https://review.openstack.org/63335020:57
openstackgerritmelanie witt proposed openstack/nova master: Add online data migration for populating user_id  https://review.openstack.org/63335120:57
tssuryafried_rice: yea I mean for most of the things we do right now the flushes to placement are very few20:57
fried_ricetssurya: I just didn't anticipate that iterating through a list would bring the service to its knees. Even if you had told me the list was going to be a couple thousand long.20:57
tssuryafried_rice: of course yea we didn't know this as well until we timed this, our test env is not exactly the size of our actual prod env20:58
*** cdent has quit IRC21:00
openstackgerritMatt Riedemann proposed openstack/nova master: Check hosts have no instances for AZ rename  https://review.openstack.org/50920621:03
mriedemsounds like someone needs their nova-computes running on s390x mainframes21:03
mriedemor Power9!21:03
*** wolverineav has joined #openstack-nova21:03
tssuryafried_rice: just saw your patch, so basically you are saving on the lookup time instead of iterating. We will try this first thing Monday and let you know21:04
openstackgerritMatt Riedemann proposed openstack/nova master: Check hosts have no instances for AZ rename  https://review.openstack.org/50920621:04
fried_ricetssurya: Yes. We already have the list of UUIDs. Now instead of that loop taking O(N) to look up each node's provider object, it'll take O(1).21:05
tssuryayeap nice21:05
fried_ricetssurya: now, it's possible that that lookup wasn't the problem: it's possible that what's killing us is 1700 ProviderTree.data() calls (which copies the guts of the ProviderTree) plus 5100 compares of that information.21:06
fried_ricethat may be a tad harder to optimize.21:06
fried_rice(...guts of the individual _Provider in the ProviderTree, that is)21:08
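The O(N) → O(1) change fried_rice describes can be sketched like this: with roughly 1700 compute-node providers, looking up each node's provider object by scanning a list is O(N) per lookup (O(N²) across the whole loop), while a dict keyed by UUID makes each lookup O(1). Names here are hypothetical stand-ins, not nova's actual ProviderTree API:

```python
import uuid

class FakeProvider:
    """Hypothetical stand-in for a ProviderTree node."""
    def __init__(self, uid):
        self.uuid = uid

def find_linear(providers, target):
    # O(N) per lookup: scan the whole list each time.
    for p in providers:
        if p.uuid == target:
            return p
    return None

def build_index(providers):
    # O(N) once up front; every later lookup is an O(1) dict access.
    return {p.uuid: p for p in providers}

providers = [FakeProvider(str(uuid.uuid4())) for _ in range(1700)]
index = build_index(providers)
target = providers[1234].uuid
hit_linear = find_linear(providers, target)
hit_indexed = index.get(target)
```

As noted above, this only helps if the lookup was the bottleneck; it does nothing about the cost of copying each provider's data for the comparisons.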
*** wolverineav has quit IRC21:08
*** markvoelker has joined #openstack-nova21:10
tssuryayea you mean this part here (https://github.com/openstack/nova/blob/880327cc31fea7328d23355730d5458f3b74662b/nova/scheduler/client/report.py#L1440)21:15
tssuryaand then the set aggregates and inventories comparisons21:15
*** mchlumsky has quit IRC21:17
*** wolverineav has joined #openstack-nova21:23
melwittmriedem: I'm trying to remove the console-auth workaround in stein, and nova-status upgrade check has the check for console auths, which warns if compute services older than rocky are found and advises to set the [workarounds] option if so. if I'm removing the workaround, it seems like I should remove the upgrade check too. is that right?21:24
*** wolverineav has quit IRC21:25
*** wolverineav has joined #openstack-nova21:28
mriedemdoes the upgrade check just warn today?21:30
melwittyes21:30
mriedemand is that upgrade check in stable/rocky?21:30
mriedemi think you'd either remove the upgrade check in stein or change it to a hard failure21:30
*** whoami-rajat has quit IRC21:30
mriedemand then maybe mark it for removal in Train21:30
melwittyes the check is in stable/rocky21:31
openstackgerritArtom Lifshitz proposed openstack/nova master: New objects to transmit NUMA config from dest to source  https://review.openstack.org/63482721:31
openstackgerritArtom Lifshitz proposed openstack/nova master: Introduce live_migration_claim()  https://review.openstack.org/63566921:31
openstackgerritArtom Lifshitz proposed openstack/nova master: [WIP] Use live_migration_claim() to check dest resources  https://review.openstack.org/63460621:31
openstackgerritArtom Lifshitz proposed openstack/nova master: LM: Make dest send NUMAMigrateData to the source  https://review.openstack.org/63482821:31
openstackgerritArtom Lifshitz proposed openstack/nova master: LM: update NUMA-related XML on the source  https://review.openstack.org/63522921:31
mriedemoff the top of my head i think i'd go conservative and make the upgrade check a hard failure in stein and remove it in train21:31
melwittok, I considered changing it to a hard failure if any computes older than rocky are found. I guess the action would be "upgrade your computes"21:31
mriedembecause someone could run the stein upgrade checks against a rocky deployment from a venv/container21:31
mriedemdansmith might have an opinion21:32
melwittI was struggling a bit with what the action item is for something that generic. all of the other failures say, "run this command"21:32
mriedemyeah i think the only recent upgrade check we removed was the one for placement resource providers because of the extracted placement21:32
mriedemwe could have re-worked that to hit the placement API rather than the nova_api db, but it had been around since ocata so figured it was ok to just drop it21:32
mriedemthis consoleauth one seems to have really confused some people and caught them off guard21:33
mriedemso dropping the check with the workaround in the same release might be too aggressive21:33
melwittI wondered that too, because of how problematic it's been. whether to just keep the workaround around another cycle21:34
mriedemif it's not hurting anything that might be fine21:34
mriedemsafer than sorryier21:34
melwittI was thinking to follow through with TODOs before they get too old and forgotten, but this seems like one of those where it's best not to21:35
melwittyeah, it doesn't hurt anything21:35
melwittok, well, that's easy then21:35
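The warn-versus-hard-failure choice being debated above can be sketched as follows. This is a simplified model and an assumption on my part: real nova-status checks return richer result objects, and the version predicate here is purely illustrative:

```python
import enum

class UpgradeCheckCode(enum.IntEnum):
    """Simplified sketch of nova-status upgrade check result codes."""
    SUCCESS = 0
    WARNING = 1
    FAILURE = 2

def check_console_auths(all_computes_rocky_or_newer, hard_fail=False):
    """Model of the consoleauth check's warn-vs-fail choice.

    hard_fail=False models the current warning behaviour; flipping it
    to True models the 'hard failure in stein, remove in train' idea.
    """
    if all_computes_rocky_or_newer:
        return UpgradeCheckCode.SUCCESS
    return UpgradeCheckCode.FAILURE if hard_fail else UpgradeCheckCode.WARNING

warn = check_console_auths(False)
fail = check_console_auths(False, hard_fail=True)
ok = check_console_auths(True, hard_fail=True)
```

The conservative path discussed above is to keep `hard_fail=False` for another cycle, since the check costs nothing when it passes.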
mriedemwe've still got lots of todos to remove things from several releases ago21:37
mriedemhell we've still got nova-net and cells v121:37
*** artom has quit IRC21:37
melwittoh I know. I was thinking instead, I should propose one of the old TODOs. I think the instance group migration to API database that I worked on is still around :(21:38
melwittoh, looks like you removed that last year. heh21:41
*** markvoelker has quit IRC21:43
*** awalende has quit IRC21:56
*** awaugama has quit IRC21:56
*** awalende has joined #openstack-nova21:57
mriedemyou still have to write the API side changes for the counting quotas from placement thing right?21:59
mriedems/quotas/usage/21:59
*** awalende has quit IRC22:01
*** wolverineav has quit IRC22:09
*** wolverineav has joined #openstack-nova22:09
melwittmriedem: right. I'm trying to debug why my online data migration does an infinite loop in grenade, at the moment :(22:11
*** itlinux has quit IRC22:12
*** awalende has joined #openstack-nova22:18
*** agopi|bbiab has quit IRC22:22
mriedemlooks like you're mostly just copying populate_queued_for_delete22:23
mriedemthe only major difference i see is you're using filter_by and the other uses filter()22:24
mriedemthat shouldn't really matter though22:24
melwittyeah. I did that because I was thinking that's a better approach than what I was originally planning, to get all instance mappings where user_id=None and then lookup cell by project_id and get instance22:26
*** wolverineav has quit IRC22:27
melwitttssurya mentioned to me that when she was working on queued_for_delete, the loop happened because originally she didn't have a default in InstanceMapping.create(), so it was creating NULL fields that would get found again by the migration (creates must have been happening during the online data migrations too, I guess)22:27
*** wolverineav has joined #openstack-nova22:28
melwittso I added a default for create(), and also added setting of user_id in save() in conductor, in case any instance builds are in-flight during an upgrade: if an instance mapping was created with cell=None before the upgrade and then gets scheduled during the upgrade, save() sets the user_id22:29
melwittI don't know what case grenade is hitting. it's looping not in the initial online migration run before starting services, but in the second online migration after starting services22:29
melwittI added a print statement to the online migration to help with debugging in the next gate run22:30
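The loop contract being debugged can be sketched like this: an online data migration is called repeatedly and reports (found, done) each pass, and the runner stops only when found reaches 0. If concurrent create() calls keep inserting rows with user_id=None (no default), found never reaches 0 and the runner loops forever. This uses a hypothetical in-memory list as a stand-in for the real API-database query:

```python
def populate_user_id(db, max_count):
    """One pass of a hypothetical online data migration.

    Returns (found, done); the migration runner keeps calling this
    until found == 0. 'db' is an in-memory stand-in, not nova's ORM.
    """
    unmigrated = [m for m in db if m['user_id'] is None]
    batch = unmigrated[:max_count]
    for mapping in batch:
        # Illustrative backfill source only; the real migration reads
        # the user_id from the instance record in the cell database.
        mapping['user_id'] = mapping['project_id']
    return len(unmigrated), len(batch)

db = [{'user_id': None, 'project_id': 'p1'},
      {'user_id': 'u2', 'project_id': 'p2'}]
first = populate_user_id(db, 50)
second = populate_user_id(db, 50)
```

A second pass returning (0, 0) is what terminates the runner, which is why a default on create() matters: it guarantees new rows never re-enter the unmigrated set.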
mriedemi don't see anything obviously wrong22:30
openstackgerritMatt Riedemann proposed openstack/nova master: WIP: Add confirm_snapshot_based_resize_at_source  https://review.openstack.org/63705822:30
openstackgerritMatt Riedemann proposed openstack/nova master: WIP: Add ConfirmResizeTask  https://review.openstack.org/63707022:30
openstackgerritMatt Riedemann proposed openstack/nova master: WIP: Add confirm_snapshot_based_resize conductor RPC method  https://review.openstack.org/63707522:30
openstackgerritMatt Riedemann proposed openstack/nova master: WIP: Confirm cross-cell resize from the API  https://review.openstack.org/63731622:30
mriedemdansmith: and we've got confirm ^22:30
melwittthanks for taking a look22:30
mriedemplus functional22:30
*** wolverineav has quit IRC22:33
*** markvoelker has joined #openstack-nova22:40
melwittI think I need to take a break22:41
*** itlinux has joined #openstack-nova22:42
*** dave-mccowan has quit IRC22:45
*** awalende has quit IRC22:46
*** awalende has joined #openstack-nova22:47
*** awalende has quit IRC22:51
*** pbing19 has quit IRC22:53
*** itlinux has quit IRC22:54
*** itlinux has joined #openstack-nova22:55
*** wolverineav has joined #openstack-nova22:59
*** wolverineav has quit IRC23:04
*** markvoelker has quit IRC23:13
*** wolverineav has joined #openstack-nova23:13
*** wolverineav has quit IRC23:16
*** itlinux has quit IRC23:17
*** wolverineav has joined #openstack-nova23:20
*** xek has quit IRC23:21
*** tssurya has quit IRC23:22
*** wolverineav has quit IRC23:25
*** panda is now known as panda|off23:28
*** takamatsu_ has quit IRC23:29
*** betherly has quit IRC23:45
*** wolverineav has joined #openstack-nova23:47
*** gyee has quit IRC23:50
*** wolverineav has quit IRC23:52
*** gyee has joined #openstack-nova23:54
*** moshele has joined #openstack-nova23:59
