Wednesday, 2018-01-10

*** AlexeyAbashkin has joined #openstack-nova00:01
*** AlexeyAbashkin has quit IRC00:05
*** penick has quit IRC00:09
*** claudiub has quit IRC00:10
*** yangyapeng has quit IRC00:27
*** yangyapeng has joined #openstack-nova00:27
*** yangyapeng has quit IRC00:31
*** liuzz has joined #openstack-nova00:34
*** andymccr has quit IRC00:36
*** mlavalle has quit IRC00:38
*** FL1SK has quit IRC00:39
*** felipemonteiro has quit IRC00:40
*** chyka has quit IRC00:40
*** andymccr has joined #openstack-nova00:40
*** FL1SK has joined #openstack-nova00:41
*** markvoelker has joined #openstack-nova00:43
*** corvus has quit IRC00:44
*** markvoelker has quit IRC00:45
*** markvoelker has joined #openstack-nova00:49
*** markvoelker has quit IRC00:50
*** andymccr has quit IRC00:51
*** Dinesh_Bhor has joined #openstack-nova00:52
*** andymccr has joined #openstack-nova00:55
*** tidwellr has joined #openstack-nova00:57
*** david-lyle has joined #openstack-nova00:58
00:59 <openstackgerrit> Matt Riedemann proposed openstack/nova master: Fix comment in MigrationSortContext  https://review.openstack.org/532368
*** yangyapeng has joined #openstack-nova01:00
01:01 <mriedem> alex_xu: need another reviewer on https://review.openstack.org/#/c/330406/ - we have +2s on the 2 changes below it
*** Dinesh_Bhor has quit IRC01:02
*** jichen has joined #openstack-nova01:05
*** yangyapeng has quit IRC01:06
*** dave-mccowan has joined #openstack-nova01:08
*** markvoelker has joined #openstack-nova01:08
*** markvoelker has quit IRC01:09
01:11 <mriedem> efried: i think you're wrong about https://bugs.launchpad.net/nova/+bug/1742311
01:11 <openstack> Launchpad bug 1742311 in OpenStack Compute (nova) "AttributeError in report client error path" [Undecided,Invalid]
01:11 <mriedem> because https://github.com/requests/requests/blob/v2.18.4/requests/models.py#L663
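The requests property mriedem links behaves roughly like this; the class below is a hypothetical minimal stand-in (not the real requests.Response), illustrating why accessing .text with no encoding set falls back to a detected encoding rather than raising AttributeError:

```python
class FakeResponse:
    """Hypothetical stand-in for requests v2.18.4 Response (models.py#L663)."""

    def __init__(self, content, encoding=None):
        self._content = content   # raw bytes, as requests stores them
        self.encoding = encoding  # may legitimately be None

    @property
    def apparent_encoding(self):
        # The real library asks chardet here; assume utf-8 for the sketch.
        return "utf-8"

    @property
    def text(self):
        # Same shape as requests: prefer self.encoding, else fall back
        # to the detected encoding -- no attribute is ever missing.
        encoding = self.encoding or self.apparent_encoding
        return str(self._content, encoding, errors="replace")
```

Under that shape, the error path in the report client would get a decoded string back, not an AttributeError.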
*** yangyapeng has joined #openstack-nova01:13
*** Dinesh_Bhor has joined #openstack-nova01:14
*** hoangcx has quit IRC01:16
*** hieulq has quit IRC01:16
*** hieulq has joined #openstack-nova01:17
*** hoangcx has joined #openstack-nova01:17
*** yangyapeng has quit IRC01:18
*** yangyapeng has joined #openstack-nova01:22
*** hongbin has joined #openstack-nova01:24
*** zhurong has joined #openstack-nova01:24
*** mdnadeem has joined #openstack-nova01:25
*** felipemonteiro has joined #openstack-nova01:25
01:25 <openstackgerrit> Zhenyu Zheng proposed openstack/nova master: Use neutron port_list when filtering instance by ip  https://review.openstack.org/525505
*** phuongnh has joined #openstack-nova01:29
*** sbezverk has left #openstack-nova01:31
*** tidwellr has quit IRC01:41
*** dave-mccowan has quit IRC01:42
*** Eran_Kuris has quit IRC01:42
*** Eran_Kuris has joined #openstack-nova01:42
*** Apoorva_ has joined #openstack-nova01:43
*** Apoorva has quit IRC01:46
*** felipemonteiro_ has joined #openstack-nova01:47
*** Apoorva_ has quit IRC01:47
*** felipemonteiro__ has joined #openstack-nova01:51
*** felipemonteiro_ has quit IRC01:51
*** felipemonteiro has quit IRC01:51
*** jafeha__ has joined #openstack-nova01:52
*** jafeha has quit IRC01:54
*** felipemonteiro__ has quit IRC01:55
*** penick has joined #openstack-nova01:57
*** penick_ has joined #openstack-nova02:00
*** sdague has quit IRC02:01
*** penick has quit IRC02:01
*** Tom-Tom has quit IRC02:03
*** Tom-Tom has joined #openstack-nova02:03
*** felipemonteiro__ has joined #openstack-nova02:07
*** david-lyle has quit IRC02:07
02:09 <alex_xu> mriedem: got it, will get it done today
*** slaweq has joined #openstack-nova02:09
*** liuzz has quit IRC02:09
*** liverpooler has joined #openstack-nova02:10
*** liuzz has joined #openstack-nova02:10
*** chyka has joined #openstack-nova02:12
*** slaweq has quit IRC02:13
*** dave-mccowan has joined #openstack-nova02:15
*** mriedem has quit IRC02:16
*** chyka has quit IRC02:17
*** felipemonteiro__ has quit IRC02:18
*** harlowja has quit IRC02:24
*** jeblair has joined #openstack-nova02:24
*** jeblair has quit IRC02:24
*** hiro-kobayashi has joined #openstack-nova02:26
*** Apoorva has joined #openstack-nova02:29
02:31 <gmann> alex_xu: if you can check this simple one, we can close the BP - https://review.openstack.org/#/c/531061/
02:32 <alex_xu> gmann: got it
02:32 <gmann> alex_xu: thanks
02:32 <alex_xu> gmann: np
*** penick_ has quit IRC02:32
02:35 <takashin>
02:39 <openstackgerrit> Kevin Zhao proposed openstack/nova master: Modify the test case of get_disk_mapping_rescue_with_config  https://review.openstack.org/494156
*** mordred has quit IRC02:42
*** Dinesh_Bhor has quit IRC02:43
*** mordred has joined #openstack-nova02:43
*** dave-mccowan has quit IRC02:47
*** liverpooler has quit IRC02:47
*** Apoorva has quit IRC02:52
*** lbragstad has quit IRC02:54
02:58 <openstackgerrit> Eric Berglund proposed openstack/nova master: WIP: PowerVM Driver: vSCSI  https://review.openstack.org/526094
*** tuanla____ has joined #openstack-nova02:59
*** tetsuro has joined #openstack-nova03:00
*** Dinesh_Bhor has joined #openstack-nova03:02
*** Dinesh_Bhor has quit IRC03:02
*** liverpooler has joined #openstack-nova03:08
*** annp has joined #openstack-nova03:20
*** zhurong has quit IRC03:23
*** Tom-Tom has quit IRC03:28
*** Apoorva has joined #openstack-nova03:30
*** jistr has quit IRC03:31
*** takashin has quit IRC03:31
*** zhurong has joined #openstack-nova03:32
*** hshiina has quit IRC03:36
*** takashin has joined #openstack-nova03:39
*** jeblair has joined #openstack-nova03:40
*** jeblair has quit IRC03:41
*** liverpooler has quit IRC03:42
*** DinaBelova has quit IRC03:44
*** andreykurilin has quit IRC03:44
*** andreykurilin has joined #openstack-nova03:44
*** DinaBelova has joined #openstack-nova03:46
*** abhishekk has joined #openstack-nova03:47
*** lyarwood has quit IRC03:49
*** lyarwood has joined #openstack-nova03:50
*** AlexeyAbashkin has joined #openstack-nova03:52
*** Dinesh_Bhor has joined #openstack-nova03:55
*** AlexeyAbashkin has quit IRC03:57
*** udesale has joined #openstack-nova03:59
*** hshiina has joined #openstack-nova03:59
*** tbh_ has joined #openstack-nova04:00
*** digambar has joined #openstack-nova04:02
*** sree has joined #openstack-nova04:07
*** sree has quit IRC04:11
*** jeblair has joined #openstack-nova04:12
*** jeblair has quit IRC04:24
*** AlexeyAbashkin has joined #openstack-nova04:25
*** sree has joined #openstack-nova04:26
*** AlexeyAbashkin has quit IRC04:29
*** hshiina2 has joined #openstack-nova04:30
*** hshiina has quit IRC04:33
*** vladikr has quit IRC04:34
*** Dinesh_Bhor has quit IRC04:34
*** links has joined #openstack-nova04:37
*** Dinesh_Bhor has joined #openstack-nova04:37
*** Dinesh_Bhor has quit IRC04:38
*** hongbin has quit IRC04:38
*** itlinux has quit IRC04:39
*** zhurong has quit IRC04:40
*** links has quit IRC04:40
*** gouthamr has quit IRC04:41
*** janki has joined #openstack-nova04:41
*** Tom-Tom has joined #openstack-nova04:43
*** namnh has joined #openstack-nova04:44
*** david-lyle has joined #openstack-nova04:46
*** links has joined #openstack-nova04:46
*** david-lyle has quit IRC04:53
*** Tom-Tom has quit IRC04:55
*** Tom-Tom has joined #openstack-nova04:55
*** kumarmn has joined #openstack-nova04:57
*** Apoorva has quit IRC04:58
*** yamamoto has joined #openstack-nova04:58
*** Tom-Tom has quit IRC05:00
*** Dinesh_Bhor has joined #openstack-nova05:00
*** hoonetorg has quit IRC05:00
*** kumarmn has quit IRC05:02
*** kumarmn has joined #openstack-nova05:06
*** sree has quit IRC05:10
*** brad[] has quit IRC05:10
*** karthiks has joined #openstack-nova05:10
*** sree has joined #openstack-nova05:10
*** kumarmn has quit IRC05:11
*** artom_ has quit IRC05:11
*** sree has quit IRC05:12
*** sree has joined #openstack-nova05:12
*** hoonetorg has joined #openstack-nova05:14
*** jeblair has joined #openstack-nova05:14
*** sree has quit IRC05:15
*** sridharg has joined #openstack-nova05:17
*** sree has joined #openstack-nova05:17
*** jeblair has quit IRC05:20
*** jeblair has joined #openstack-nova05:21
*** jeblair is now known as corvus05:21
*** sree has quit IRC05:23
*** harlowja has joined #openstack-nova05:24
*** ratailor has joined #openstack-nova05:25
05:25 <openstackgerrit> Takashi NATSUME proposed openstack/nova master: Transform rescue/unrescue instance notifications  https://review.openstack.org/385644
*** hshiina3 has joined #openstack-nova05:26
*** yamamoto_ has joined #openstack-nova05:29
*** chyka has joined #openstack-nova05:29
*** hshiina2 has quit IRC05:30
*** Tom-Tom has joined #openstack-nova05:30
*** gouthamr has joined #openstack-nova05:32
*** takashin has left #openstack-nova05:33
*** chyka has quit IRC05:33
*** yamamoto has quit IRC05:33
05:34 <openstackgerrit> Rajesh Tailor proposed openstack/nova master: Host addition host-aggregate should be case-sensitive  https://review.openstack.org/498334
*** gyee has quit IRC05:36
*** artom has joined #openstack-nova05:37
*** markvoelker has joined #openstack-nova05:39
*** psachin has joined #openstack-nova05:40
*** sree has joined #openstack-nova05:42
*** yamamoto has joined #openstack-nova05:46
*** yamamoto_ has quit IRC05:49
*** sree_ has joined #openstack-nova05:50
*** sree_ is now known as Guest4063405:50
*** sree has quit IRC05:50
*** sandanar has joined #openstack-nova05:51
05:51 <openstackgerrit> Jie Li proposed openstack/nova-specs master: Support volume-backed server rebuild  https://review.openstack.org/532407
*** Dinesh_Bhor has quit IRC05:59
*** edmondsw has joined #openstack-nova06:00
06:00 <openstackgerrit> Jie Li proposed openstack/nova-specs master: Support volume-backed server rescue  https://review.openstack.org/532410
*** karthiks has quit IRC06:04
*** edmondsw has quit IRC06:04
*** claudiub has joined #openstack-nova06:05
*** oanson has joined #openstack-nova06:06
06:11 <openstackgerrit> Zhenyu Zheng proposed openstack/nova master: Use neutron port_list when filtering instance by ip  https://review.openstack.org/525505
*** tbachman_ has joined #openstack-nova06:16
*** zhurong has joined #openstack-nova06:16
*** oanson has quit IRC06:18
*** tbachman has quit IRC06:18
*** tbachman_ is now known as tbachman06:18
*** oanson has joined #openstack-nova06:22
*** moshele has joined #openstack-nova06:23
*** gouthamr has quit IRC06:23
*** pcaruana has joined #openstack-nova06:27
*** licanwei has joined #openstack-nova06:29
*** avolkov has joined #openstack-nova06:30
*** Dinesh_Bhor has joined #openstack-nova06:33
*** pcaruana has quit IRC06:33
*** pcaruana has joined #openstack-nova06:34
*** karthiks has joined #openstack-nova06:34
*** lajoskatona has joined #openstack-nova06:35
*** fragatina has quit IRC06:36
*** sree has joined #openstack-nova06:40
06:40 <openstackgerrit> Jie Li proposed openstack/nova-specs master: Support volume-backed server rescue  https://review.openstack.org/532410
*** Guest40634 has quit IRC06:41
*** sree has quit IRC06:44
*** sree has joined #openstack-nova06:44
*** sree has quit IRC06:50
06:50 <openstackgerrit> Jie Li proposed openstack/nova-specs master: Support volume-backed server rebuild  https://review.openstack.org/532407
*** gcb has quit IRC06:55
*** udesale__ has joined #openstack-nova06:57
*** udesale has quit IRC06:59
07:04 <openstackgerrit> Lajos Katona proposed openstack/nova master: Deduplicate service status notification samples  https://review.openstack.org/531381
*** tuanla____ has quit IRC07:04
*** tuanla____ has joined #openstack-nova07:05
*** masber has quit IRC07:08
*** damien_r has joined #openstack-nova07:08
*** slaweq has joined #openstack-nova07:09
07:14 <openstackgerrit> Jie Li proposed openstack/nova-specs master: Support volume-backed server rescue  https://review.openstack.org/532410
*** slaweq has quit IRC07:14
*** rcernin has quit IRC07:15
*** sree has joined #openstack-nova07:19
*** udesale has joined #openstack-nova07:19
*** fnordahl has quit IRC07:20
*** udesale has quit IRC07:20
*** udesale__ has quit IRC07:21
07:22 <openstackgerrit> Jie Li proposed openstack/nova-specs master: Support volume-backed server rebuild  https://review.openstack.org/532407
*** kumarmn has joined #openstack-nova07:22
*** fnordahl has joined #openstack-nova07:22
*** sree has quit IRC07:23
*** kumarmn has quit IRC07:27
*** sree has joined #openstack-nova07:29
*** moshele has quit IRC07:29
*** moshele has joined #openstack-nova07:31
*** sree has quit IRC07:34
*** armax has quit IRC07:35
*** threestrands has quit IRC07:36
*** sree has joined #openstack-nova07:38
*** Dinesh_Bhor has quit IRC07:39
*** sree has quit IRC07:43
07:44 <openstackgerrit> Jie Li proposed openstack/nova master: Support volume-backed server rescue  https://review.openstack.org/531524
*** sree has joined #openstack-nova07:44
*** armax has joined #openstack-nova07:47
*** digambar has quit IRC07:47
*** harlowja has quit IRC07:49
*** sree has quit IRC07:49
*** hoonetorg has quit IRC07:50
*** kholkina has joined #openstack-nova07:55
*** ameeda has joined #openstack-nova07:57
*** AlexeyAbashkin has joined #openstack-nova07:57
*** Dinesh_Bhor has joined #openstack-nova07:57
07:59 <ameeda> Morning :)
07:59 <ameeda> please review my code here "https://review.openstack.org/#/c/526900/" and let me know if I need to write microversions.
07:59 <ameeda> Yikun Jiang (Kero) asked me to write microversions since he said that I changed a status code on a particular response.
08:00 <ameeda> Thanks!
*** kumarmn has joined #openstack-nova08:00
*** armax has quit IRC08:01
*** matrohon has joined #openstack-nova08:05
08:05 <yikun> @ameeda, actually, I'm not sure whether we should add a new microversion in this change or not.
08:05 <yikun> yeah, we change the status code and API behaviour when "metadata property value > 255" (400 -> 200).
08:06 <yikun> but as @jichen mentioned: we didn't accept it before and now we accept it, so no impact to end users?
08:06 <yikun> I think you can maybe get some help from @alex_xu or @takashi.
*** kumarmn has quit IRC08:07
*** hoonetorg has joined #openstack-nova08:07
*** armax has joined #openstack-nova08:08
08:10 <ameeda> yikun: thanks a lot :)
08:10 <ameeda> alex_xu: are you around?
08:10 <ameeda> takashi: also?
08:12 <alex_xu> ameeda: I added it to my review list, will try it asap
*** Dinesh_Bhor has quit IRC08:13
08:13 <ameeda> alex_xu: thank you very much :)
*** tesseract has joined #openstack-nova08:14
*** sree has joined #openstack-nova08:16
*** ralonsoh has joined #openstack-nova08:16
*** Dinesh_Bhor has joined #openstack-nova08:17
*** Dinesh_Bhor has quit IRC08:19
*** Dinesh_Bhor has joined #openstack-nova08:20
*** damien_r has quit IRC08:20
*** karthiks has quit IRC08:20
*** tbh_ has quit IRC08:20
*** sree has quit IRC08:20
*** sree has joined #openstack-nova08:21
*** zhaochao has joined #openstack-nova08:23
*** alexchadin has joined #openstack-nova08:24
*** sahid has joined #openstack-nova08:26
*** alex_xu has quit IRC08:28
*** alex_xu has joined #openstack-nova08:29
*** karthiks has joined #openstack-nova08:32
08:36 <gmann> ameeda: alex_xu: we are all good in terms of microversion; this does not change the API behavior
08:37 <gmann> need to check whether it fixes the reported bug or not
*** tetsuro has quit IRC08:39
*** sree has quit IRC08:39
*** ragiman has joined #openstack-nova08:40
*** sree has joined #openstack-nova08:41
*** karthiks has quit IRC08:44
*** jpena|off is now known as jpena08:45
*** damien_r has joined #openstack-nova08:45
*** sree has quit IRC08:45
*** hiro-kobayashi has quit IRC08:51
08:52 <gmann> ameeda: left a comment there
*** Dinesh_Bhor has quit IRC08:53
*** jaosorior has joined #openstack-nova08:54
openstackgerritJie Li proposed openstack/nova master: Support volume-backed server rescue  https://review.openstack.org/53152408:55
*** jistr has joined #openstack-nova08:57
*** sree has joined #openstack-nova08:59
08:59 <bauzas> morning Novaers
*** karthiks has joined #openstack-nova08:59
09:01 <gmann> morning
09:01 <ameeda> gmann: thank you for the review. So I need to restore the master code that restricts the length of the value, since I removed that limitation? I will change the reno and also add a functional regression test.
09:03 <gmann> ameeda: you mean this? - https://review.openstack.org/#/c/526900/17/nova/compute/api.py
09:04 <ameeda> gmann: yes
*** sree has quit IRC09:04
09:04 <gmann> ameeda: i checked and it should not have an impact, but let me confirm again.
*** moshele has quit IRC09:04
*** Dinesh_Bhor has joined #openstack-nova09:04
09:05 <ameeda> gmann: thanks :)
09:05 <gmann> ameeda: np!
*** Dinesh_Bhor has quit IRC09:06
*** tommylikehu has quit IRC09:07
*** markvoelker has quit IRC09:08
*** tommylikehu has joined #openstack-nova09:08
*** moshele has joined #openstack-nova09:08
*** owalsh has quit IRC09:09
*** owalsh has joined #openstack-nova09:10
*** hshiina3 has quit IRC09:11
*** slaweq_ has joined #openstack-nova09:11
*** diga has joined #openstack-nova09:12
*** slaweq_ has quit IRC09:15
09:22 <gmann> ameeda: i still cannot find whether that code truncates the value in the case of system metadata
09:22 <gmann> ameeda: any failure link etc?
*** gnuoy has joined #openstack-nova09:24
09:24 <ameeda> gmann: I am not sure. I am still a beginner with nova and openstack, so you suggest that I need to restore the original code?
09:25 <gmann> ameeda: np! let's not change that and see whether the bug is fixed or not. you can check locally and also by adding functional tests
09:26 <gmann> ameeda: if some compute/api.py change needs to be done we can do it later, but i am sure changing the DB field and the util function should work
09:27 <ameeda> gmann: what about functional tests, are they important? if so, how can I do that?
*** Dinesh_Bhor has joined #openstack-nova09:29
*** andreas_s has joined #openstack-nova09:30
*** sree has joined #openstack-nova09:30
*** lucas-afk is now known as lucasagomes09:31
*** licanwei has quit IRC09:31
09:31 <gmann> ameeda: yea, it is important to see that the bug is fixed and does not regress. examples are tests like these - https://github.com/openstack/nova/tree/master/nova/tests/functional/regressions
09:32 <ameeda> gmann: thanks for your help and your time.
09:32 <gmann> ameeda: with that we can get to know whether the bug is actually fixed and there is no more hidden restriction/truncation etc
09:32 <gmann> ameeda: np!
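The distinction gmann is probing (reject up front vs. silently truncate) can be sketched like this; the helpers below are purely illustrative, not nova code, and the 255-character limit is the one under discussion:

```python
MAX_METADATA_VALUE_LEN = 255  # the limit being discussed


def validate(value):
    # API-layer style: an over-long value is rejected outright
    # (this is what surfaces to the user as a 400).
    if len(value) > MAX_METADATA_VALUE_LEN:
        raise ValueError("metadata value longer than %d chars"
                         % MAX_METADATA_VALUE_LEN)
    return value


def truncate(value):
    # The "hidden truncation" failure mode to test against: the DB
    # layer silently shortens the value and no error ever surfaces.
    return value[:MAX_METADATA_VALUE_LEN]
```

A functional regression test along the lines gmann suggests would assert that a long value either round-trips intact or fails loudly, never the silently truncated middle ground.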
*** Dinesh_Bhor has quit IRC09:32
*** sree has quit IRC09:35
09:35 <openstackgerrit> sahid proposed openstack/nova master: libvirt: slow live-migration to ensure network is ready  https://review.openstack.org/497457
*** sandanar has quit IRC09:42
*** derekh has joined #openstack-nova09:42
*** sandanar has joined #openstack-nova09:43
*** tianhui has quit IRC09:44
*** liuzz has quit IRC09:49
*** liuzz_ has joined #openstack-nova09:49
*** lpetrut has joined #openstack-nova09:50
09:53 <openstackgerrit> caishan proposed openstack/nova master: Unit testing test_driver.py indent issue  https://review.openstack.org/532473
09:57 <openstackgerrit> caishan proposed openstack/nova master: Unit testing test_driver.py indent issue  https://review.openstack.org/532473
*** diga has quit IRC10:01
*** kumarmn has joined #openstack-nova10:03
*** sridharg has quit IRC10:04
10:07 <openstackgerrit> Stephen Finucane proposed openstack/nova master: Fix typo in release note  https://review.openstack.org/531854
*** kumarmn has quit IRC10:07
*** yamamoto has quit IRC10:11
*** mdnadeem has quit IRC10:13
*** sambetts|afk is now known as sambetts10:13
*** diga has joined #openstack-nova10:15
*** kumarmn has joined #openstack-nova10:18
*** jichen has quit IRC10:20
*** threestrands has joined #openstack-nova10:21
*** kumarmn has quit IRC10:22
10:24 <mdbooth> lyarwood: Could you take a look at https://review.openstack.org/#/c/531524/ ? I'd like to encourage the author to resurrect your stable rescue series instead.
*** dtantsur|afk is now known as dtantsur10:24
10:26 <lyarwood> mdbooth: sure, there's a spec up for review for this, might provide the feedback there - https://review.openstack.org/#/c/532410/3/specs/rocky/approved/volume-backed-server-rescue.rst
10:27 <mdbooth> Hmm. I didn't see that linked from the bp.
10:28 <lyarwood> mdbooth: it isn't, it's on the gerrit topic.
10:28 <mdbooth> Eurgh.
10:29 <stephenfin> lyarwood: Small question here https://review.openstack.org/#/c/460243/13/nova/virt/libvirt/driver.py@1273
10:29 <mdbooth> I reviewed the patch anyway, which had issues. I pointed out your series as a much more thorough alternative which was also previously nearly across the line.
*** alexchadin has quit IRC10:32
*** alexchadin has joined #openstack-nova10:33
*** annp has quit IRC10:34
*** gszasz has joined #openstack-nova10:34
*** phuongnh has quit IRC10:34
10:36 <mdbooth> stephenfin: I answered for him :)
10:38 <stephenfin> mdbooth: Ta!
10:38 <stephenfin> That's done now, as promised
10:45 <openstackgerrit> sahid proposed openstack/nova master: hardware: only take into account small pages  https://review.openstack.org/532168
*** ebbex has quit IRC10:49
10:50 <lyarwood> stephenfin / mdbooth: yup thanks, context is still required for attach
*** cdent has joined #openstack-nova10:50
*** namnh has quit IRC10:51
*** matrohon has quit IRC10:54
*** tuanla____ has quit IRC10:55
*** yamamoto has joined #openstack-nova10:55
*** zhurong has quit IRC10:56
*** abhishekk has quit IRC10:57
*** Tom-Tom has quit IRC10:59
*** Tom-Tom has joined #openstack-nova11:00
*** alexchadin has quit IRC11:06
*** alexchadin has joined #openstack-nova11:06
*** Tom-Tom has quit IRC11:07
11:07 <openstackgerrit> Jie Li proposed openstack/nova-specs master: Support volume-backed server rescue  https://review.openstack.org/532410
*** andreas_s has quit IRC11:08
*** markvoelker has joined #openstack-nova11:08
*** alexchadin has quit IRC11:09
*** andreas_s has joined #openstack-nova11:09
11:11 <openstackgerrit> Liam Young proposed openstack/nova master: Add exception to no-upcall note of cells doc  https://review.openstack.org/532491
*** yangyapeng has quit IRC11:12
11:14 <sean-k-mooney> did they upgrade gerrit recently? i just commented on patch set 15 of something and it included old draft comments i had on patch set 4 also...
11:15 <sean-k-mooney> granted it did at least include them on the patch set 4 version, but still, that's annoying when you can't see that they are there
11:17 <gibi> sean-k-mooney: yeah, this is a new feature from the last gerrit upgrade
11:17 <gibi> sean-k-mooney: but the upgrade happened a couple of months ago
*** AlexeyAbashkin has quit IRC11:18
*** AlexeyAbashkin has joined #openstack-nova11:18
*** yamahata has quit IRC11:22
*** andreas_s has quit IRC11:23
*** andreas_s has joined #openstack-nova11:23
*** yangyapeng has joined #openstack-nova11:25
*** sree has joined #openstack-nova11:25
11:30 <sean-k-mooney> gibi: oh really, i guess i have just been lucky enough not to hit it till now. if i start a review and there are new revisions in between, i normally start again and copy the comments, but i don't always delete the old ones since they were ignored
*** breton has left #openstack-nova11:30
*** sree has quit IRC11:30
11:30 <openstackgerrit> Jie Li proposed openstack/nova-specs master: Support volume-backed server rebuild  https://review.openstack.org/532407
*** andreas_s has quit IRC11:37
*** andreas_s has joined #openstack-nova11:38
*** alexchadin has joined #openstack-nova11:39
*** moshele has quit IRC11:39
*** moshele has joined #openstack-nova11:40
*** alexchadin has quit IRC11:41
11:42 <openstackgerrit> Marcin Juszkiewicz proposed openstack/nova stable/pike: libvirt: use 'host-passthrough' as default on AArch64  https://review.openstack.org/532504
11:42 <hrw> as it went to master, it would be good to have it in pike too ;D
*** markvoelker has quit IRC11:42
*** moshele has quit IRC11:42
11:44 <openstackgerrit> Jie Li proposed openstack/nova master: Support volume-backed server rescue  https://review.openstack.org/531524
*** sridharg has joined #openstack-nova11:48
*** andreas_s has quit IRC11:48
*** andreas_s has joined #openstack-nova11:49
*** diga has quit IRC11:50
11:53 <mdbooth> lyarwood: Sorry :/ https://review.openstack.org/#/c/460243/
*** purplerbot has quit IRC11:53
11:54 <openstackgerrit> Jie Li proposed openstack/nova master: Support volume-backed server rescue  https://review.openstack.org/531524
*** yamamoto has quit IRC11:55
*** abalutoiu has joined #openstack-nova11:57
*** nicolasbock has joined #openstack-nova11:57
*** sdague has joined #openstack-nova11:58
*** alexchadin has joined #openstack-nova12:00
*** yamamoto has joined #openstack-nova12:00
*** ratailor has quit IRC12:04
*** Eran_Kuris has quit IRC12:09
*** yamamoto has quit IRC12:11
*** smatzek has joined #openstack-nova12:20
*** Rambo has joined #openstack-nova12:22
*** purplerbot has joined #openstack-nova12:23
12:24 <Rambo> Hi everyone, can you help me review the spec? The link is https://review.openstack.org/#/c/532410/
12:25 <Rambo> Another is: https://review.openstack.org/#/c/532407/
12:25 <Rambo> Thank you very much
*** yamamoto has joined #openstack-nova12:26
*** lucasagomes is now known as lucas-hungry12:27
*** janki has quit IRC12:27
12:29 <sean-k-mooney> melwitt: stephenfin: bauzas: o/ do any of ye know the url to the ptg etherpad matt started? i'll grab it from the irc logs if not, so no worries if you don't have it to hand.
*** brad[] has joined #openstack-nova12:29
12:30 <sean-k-mooney> melwitt: stephenfin: bauzas: found it https://etherpad.openstack.org/p/nova-ptg-rocky
*** sdague has quit IRC12:31
12:31 <lyarwood> mdbooth: np, fired back, I really don't like that comment with the encryptor.detach_volume call removed
12:32 <lyarwood> mdbooth: it should be pretty obvious that you can't disconnect a volume before detaching it from the guest tbh
12:32 <lyarwood> mdbooth: and the tests should catch anyone trying to do this
*** yamamoto has quit IRC12:32
12:33 <kashyap> lyarwood: Sometimes explicit is better than implicit. What is obvious to you would be a far cry from it for a fresh pair of eyes trying to get up to speed.
12:34 <kashyap> I'm personally a fan of documenting in comments, even if it's sometimes a bit obvious if you're in the know
*** moshele has joined #openstack-nova12:35
*** sean-k-mooney has quit IRC12:35
*** ygl has joined #openstack-nova12:36
12:36 <ygl> hi all
12:36 <ygl> can someone help me with my issue
*** ygl has left #openstack-nova12:37
*** markvoelker has joined #openstack-nova12:39
12:43 <openstackgerrit> sahid proposed openstack/nova master: hardware: only take into account small pages  https://review.openstack.org/532168
*** gcb has joined #openstack-nova12:43
*** dtruong has quit IRC12:44
*** dtruong has joined #openstack-nova12:44
*** jpena is now known as jpena|lunch12:48
12:51 <lyarwood> *sigh*
12:52 <lyarwood> kashyap: so how would the comment help someone reviewing that method for the first time?
12:52 <lyarwood> kashyap: given that the call to detach the encryptor is now hidden from them in _disconnect_volume
12:53 <lyarwood> kashyap: I'm all for helping first-time readers through code, but it makes the entire thing more confusing IMHO
*** vivsoni has quit IRC12:54
*** Rambo has quit IRC12:54
*** vivsoni has joined #openstack-nova12:54
12:55 <kashyap> lyarwood: Hmm, if you think it'll confuse more, I'll defer to you.
12:55 <kashyap> lyarwood: I noticed what you said is missing in your review comment
12:56 <kashyap> lyarwood: Then maybe you'd want to note that the call to detach the encryptor is elsewhere :-)
*** sean-k-mooney has joined #openstack-nova12:56
*** takashin has joined #openstack-nova12:58
*** links has quit IRC12:58
13:01 <takashin> alex_xu: Are you around?
*** edmondsw has joined #openstack-nova13:01
13:02 <alex_xu> takashin: yea
13:02 <takashin> alex_xu: Is there an API meeting today?
13:03 <alex_xu> takashin: yes, but for the past few weeks no people showed up, so I didn't run it. if there is anything you want to discuss, we can discuss it here I think
13:03 <takashin> alex_xu: Okay. I have 2 patches for review.
13:04 <takashin> api-ref: Parameter verification for servers.inc: https://review.openstack.org/#/c/528201/
13:04 <gmann> alex_xu: takashin: i was away too for the last 2-3 weeks. we can resume from next week maybe
13:04 <takashin> api-ref: Example verification for servers.inc: https://review.openstack.org/#/c/529520/
13:04 <takashin> gmann: thanks.
13:05 <takashin> alex_xu: gmann: Would you review the patches?
13:05 <alex_xu> gmann: cool, we will run it
13:05 <alex_xu> takashin: I added them to my review list, will try to reach them
13:05 <gmann> takashin: yea, i remember reviewing those halfway last week, will do tomorrow for sure
13:05 <takashin> alex_xu: gmann: Thank you.
13:06 <takashin> That's all.
*** yikun_ has joined #openstack-nova13:06
13:06 <alex_xu> takashin: did you see my comment on https://review.openstack.org/#/c/459483/? I think that is a thing we should keep consistent, and I think it isn't worth another microversion
13:07 <takashin> alex_xu: I saw your comment. I will fix it tomorrow.
13:07 <alex_xu> takashin: thanks
13:08 <alex_xu> takashin: gmann: btw, there is an API patch close to merging https://review.openstack.org/#/c/330406 - I'm reviewing it, but still found something. it would be great if you guys can help review it also; the API patch is really huge :)
13:08 <mdbooth> lyarwood: Anyway, like I said the patch is a huge improvement, and I don't see any issues in the code. It's simpler and it fixes at least 3 bugs. I'm just cautious about removing context from a driver which is already plenty opaque in places.
13:09 <gmann> alex_xu: sure, added to my tomorrow list
13:09 <takashin> alex_xu: okay. I will review it tomorrow.
*** smatzek has quit IRC13:09
13:10 <alex_xu> gmann: takashin: thanks!
*** markvoelker has quit IRC13:12
*** smatzek has joined #openstack-nova13:14
13:14 <kashyap> mdbooth: Do you recall, off the top of your head, in what scenarios Nova calls 'qemu-img info' for _running_ guests?
*** mvk has quit IRC13:14
13:14 <kashyap> If not, don't worry, I'll go look into the code
13:15 <mdbooth> kashyap: Not off the top of my head, but pretty sure there are some.
*** janki has joined #openstack-nova13:15
*** takashin has left #openstack-nova13:15
13:16 <mdbooth> Look at live migration. Maybe the imagecache reaper.
13:19 <kashyap> mdbooth: Okido, I'm in a discussion w/ the QEMU Block folks, and they're asking this.
13:19 <kashyap> I'm sure we do, just have to audit
13:19 <efried> mgoddard: Where's set_traits_for_provider (https://review.openstack.org/#/c/532290/1/nova/compute/resource_tracker.py@890) defined? I can't find it in master or in your series.
13:20 <efried> mgoddard: I ask because I'm actually in the process of implementing that method right now.
*** moshele has quit IRC13:20
13:20 <efried> mgoddard: Want to avoid duplication of effort if possible.
*** yamamoto has joined #openstack-nova13:21
*** ttx has quit IRC13:21
*** lucas-hungry is now known as lucasagomes13:22
*** ttx has joined #openstack-nova13:23
*** andreas_s has quit IRC13:27
*** sree has joined #openstack-nova13:30
13:31 <mgoddard> efried: hi. I'm still implementing that one. I can submit what I have for review if you'd like to see it
13:32 <efried> mgoddard: I would, yes. I'll show you mine if you show me yours :)
*** andreas_s has joined #openstack-nova13:32
13:33 <mgoddard> efried: well I don't usually do this, but go on then
13:34 <efried> mgoddard: You're probably a little further along, BUT one of the key things I'm doing there is exposing a new exception base class for placement API conflicts and raising subclasses thereof from this method and its brethren (e.g. set_aggregates_for_provider, tbd) when they encounter 409s.
13:34 <efried> mgoddard: It's in the middle of a rather messy restack, won't be ready to show for a little while yet.
13:35 <efried> mgoddard: But now that I know you're also wanting it for the ironic traits bp, it seems it needs to be peeled out of that series.
13:36 <efried> which is probably not super hard. Though at the moment it's based on a change that raises a conflict exception for RP creation, which I think *is* tied pretty heavily into that series.
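What efried describes might look something like this sketch; every name below is invented for illustration, not the eventual nova code:

```python
class PlacementAPIConflict(Exception):
    """Hypothetical base class for 409 responses from the placement API."""


class ProviderGenerationConflict(PlacementAPIConflict):
    """E.g. a resource provider generation mismatch on update."""


def set_traits_for_provider(status_code):
    # Error path only: translate an HTTP 409 from placement into a
    # typed exception so callers can catch conflicts distinctly from
    # other failures, instead of inspecting raw status codes.
    if status_code == 409:
        raise ProviderGenerationConflict("placement returned 409")
```

The point of the base class is that callers like the resource tracker can catch PlacementAPIConflict once, whichever sibling method (traits, aggregates, RP creation) raised it.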
*** andreas_s has quit IRC13:37
*** alexchadin has quit IRC13:38
*** andreas_s has joined #openstack-nova13:38
*** andreas_s has quit IRC13:40
*** andreas_s has joined #openstack-nova13:40
*** andreas__ has joined #openstack-nova13:41
*** sree has quit IRC13:43
13:44 <kashyap> mdbooth: Just noting for the record, looked for the past few minutes:
13:44 <kashyap> _rebase_with_qemu_img(), _live_snapshot() and _get_instance_disk_info_from_config() [from nova/virt/libvirt/driver.py]
13:44 <kashyap> nova/virt/libvirt/imagebackend.py:
13:44 <kashyap>  - cache() --> fetch_func_sync() --> get_disk_size() --> qemu_img_info()
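For reference, those call sites ultimately run something like `qemu-img info --output=json <disk>` and parse the result; the JSON below is an illustrative sample, not output from a real guest:

```python
import json

# Illustrative shape of `qemu-img info --output=json disk.qcow2` output,
# of the kind nova's qemu_img_info() helpers end up parsing.
SAMPLE_OUTPUT = """
{
    "virtual-size": 1073741824,
    "filename": "disk.qcow2",
    "format": "qcow2",
    "actual-size": 200704
}
"""

info = json.loads(SAMPLE_OUTPUT)
```

Fields like virtual-size and actual-size are what helpers such as get_disk_size() care about; running the tool against an image that a live QEMU process is writing to is presumably why the question about running guests comes up.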
sean-k-mooneyefried: QQ is there a top level api for placement aggregates? e.g. can i list all aggregates or list all resource providers in an aggregate given the aggregate uuid?13:45
*** andreas_s has quit IRC13:45
efriedsean-k-mooney sec...13:45
*** moshele has joined #openstack-nova13:45
efriedsean-k-mooney Hum, I thought there was, cause I'm gonna need it.  Still looking...13:46
*** ameeda_ has joined #openstack-nova13:46
sean-k-mooneyefried: no rush. you can get the aggregates a resource provider is part of but at least looking at the master docs the reciprocal api does not appear to exist13:46
cdentyou can use member_of to get all rps in a given aggregate13:46
efriedAhh, that's it, thanks cdent13:47
cdenthttps://developer.openstack.org/api-ref/placement/#list-resource-providers13:47
cdentIt's not clear how you're supposed to discover an aggregate, though, other than by looking at https://developer.openstack.org/api-ref/placement/#list-resource-provider-aggregates13:47
cdentor knowing the uuid prior13:48
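The member_of lookup cdent points at can be sketched as below. Only the endpoint and query parameter come from the placement API ref (GET /resource_providers with member_of, microversion >= 1.3); the helper name and the choice to just build the URL rather than issue the request are assumptions for illustration.

```python
def providers_in_aggregate_url(base_url, agg_uuid):
    """Return the GET URL that lists every resource provider
    belonging to the given placement aggregate uuid."""
    return '%s/resource_providers?member_of=%s' % (base_url, agg_uuid)

url = providers_in_aggregate_url('http://placement', 'agg-uuid')
```

There is no inverse call: you must already know the aggregate uuid, which is exactly the discovery gap discussed above.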
*** pchavva has joined #openstack-nova13:48
mdboothkashyap: I'm not convinced cache() would be called on a running instance. As always with that code, though, it's far from obvious without checking carefully.13:48
mdboothkashyap: cache() mostly means 'create'13:48
kashyapmdbooth: Yeah, I should've been careful in pointing that out13:48
efriedcdent Yeah.  That said, is there a use case for that?13:48
*** sree has joined #openstack-nova13:48
sean-k-mooneycdent: ah yes cool.  that still leaves me with one question. how do i create the aggregate in the first place without a top level aggregates api?13:48
cdentefried: not that I'm aware, but just as I was thinking of it I stumbled on "How do I know the aggregates"13:49
cdentsean-k-mooney: it gets created when you use it13:49
*** yamamoto has quit IRC13:49
kashyapmdbooth: I didn't do a thorough audit, though.  Taking notes as I find instances of it & then see where they're called on a running guest13:49
kashyapFor live snapshot we do for sure13:49
cdentso if you PUT to /resource_provider/{uuid}/aggregates with a new uuid there ya go13:50
efriedsean-k-mooney You "create" it by assigning it to a provider via PUT /rp/{uuid}/aggs13:50
sean-k-mooneycdent: so the first time i add a resource provider to an aggregate it creates the uuid13:50
efriedyeah, what he said.13:50
cdentyes13:50
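The implicit-creation flow just described can be sketched like this: the client mints a uuid itself, and the aggregate "exists" the first time it appears in some provider's PUT /resource_providers/{uuid}/aggregates body. The bare JSON-list body matches the early (microversion 1.1) request format current at the time of this log; later microversions wrap it with a resource_provider_generation, so treat this shape as a snapshot, and the helper name as invented.

```python
import json
import uuid

def aggregates_put_body(current_aggs, new_agg_uuid):
    """Return the JSON body that associates one more aggregate with a
    provider while preserving its existing associations."""
    return json.dumps(sorted(set(current_aggs) | {new_agg_uuid}))

agg_uuid = str(uuid.uuid4())       # the client, not placement, picks it
body = aggregates_put_body([], agg_uuid)
```

Nothing registers the uuid centrally, which is why avoiding collisions (and later discovering the aggregate at all) is left to the client.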
mdboothkashyap: Did you look in the imagecache periodic task?13:50
kashyapmdbooth: Not yet; so far I'm just noting down instances where it's called.  E.g:13:50
kashyapnova/virt/libvirt/imagebackend.py13:50
kashyap - cache() --> fetch_func_sync()13:50
kashyap    --> get_disk_size()  [from nova/virt/libvirt/driver.py]13:50
kashyap         --> qemu_img_info() [from nova/virt/images.py]13:50
kashyap - verify_base_size() --> get_disk_size()13:50
kashyap - class LVM() --> create_image() --> create_lvm_image() --> get_disk_size() --> qemu_img_info()13:50
sean-k-mooneycdent: so the problem of determining an unused uuid for the aggregate is left to the client13:51
cdentyes13:51
efriedsean-k-mooney I knew I wrote that code: https://github.com/openstack/nova/blob/master/nova/scheduler/client/report.py#L43113:51
kashyap(And then see the calls for the live guest.)13:51
*** jpena|lunch is now known as jpena13:51
sean-k-mooneycdent: ok cool. more reading for me to do :)13:51
cdentsean-k-mooney: there's some stuff in the very early rp specs about such things13:51
efriedsean-k-mooney There shouldn't be any such thing as an "unused" aggregate UUID.  Unless you mean "used only for one provider", which is pointless.13:51
*** moshele has quit IRC13:51
cdentbut if I remember right, the thinking was that the client would already have some identifier in mind13:52
*** alexchadin has joined #openstack-nova13:52
efried...because when you remove the last association, placement ought to get rid of that agg ID.13:52
efriedcdent Yeah, that's a weird one.  In the PowerVM SSP case, the only thing that makes sense is to give the agg the same UUID as the shared storage pool13:53
efriedThough I *suppose* I could give the agg the UUID of the cluster instead.  It's effectively the same thing in powervm land.13:53
*** sree has quit IRC13:53
sean-k-mooneyefried: well what i mean is if i am creating a set of resource providers and i want to make them part of an aggregate to group them i need to choose a uuid that is not used by anyone else.13:53
cdentplacement does not clean up the unused agg uuids. that was an early design decision, sort of resulting from how the tables were being normalized to not use uuid keys (and a few others things, it's so long ago)13:54
efriedsean-k-mooney Yes, which one would normally do by generating one randomly, BUT that breaks down quickly if you have more than one point of control for that agg.13:54
efriedcdent Whoah, so how do they get cleaned up?13:54
cdentthey don't13:54
cdentat least not last I checked13:55
efriedThey just... leak?13:55
mgoddardefried: I have some concerns with the storage of the generation in the provider tree13:55
mgoddardefried: there is a single generation per-provider that covers inventory, traits, and aggregates13:55
mgoddardefried: but if we call e.g. pt.update_inventory() with a new generation, the traits or aggregates for that provider may not necessarily correspond to the generation that gets set13:55
mgoddardefried: I can't see anything that's using the tree's stored generations currently, but presumably they're in there for some future purpose?13:55
efriedmgoddard That's precisely what I'm working on right now.13:55
cdentsean-k-mooney: selecting an unused uuid is kind of the easy part of why use uuids?13:56
*** sree has joined #openstack-nova13:56
cdentbut you can always check it, with member_of13:56
efriedcdent Except for the multi-source sync issue, aforementioned.13:56
cdentlink?13:57
efriedmgoddard The client code should never be setting/incrementing the generation directly.  It should be blindly passing the generation from the API response into those ProviderTree methods.13:57
*** vladikr has joined #openstack-nova13:57
sean-k-mooneycdent: oh yes i know i can just gen a new one. the reason i brought this up is i kind of have a use case where really i would like to have a resource provider with two parents, but alternatively i could use an aggregate to model the second relationship13:57
efriedcdent Word salad on that review we discussed in the sched mtg, stand by...13:57
mdboothkashyap: Ah, looks like I removed qemu-img from that task.13:58
cdentefried: I think in that context were assuming that we're "choosing" a know uuid from some authority13:58
cdentknown13:58
sean-k-mooneycdent: yes13:58
efriedcdent Which would be fine, if that's what we're doing, AND the multiple control points understand the semantics of the provider having multiple aggs associated or not.13:59
cdentsean-k-mooney: I will keep you in beverages of your choice for the duration of dublin if you promise me to never bring up multi-parents again :)13:59
*** sdague has joined #openstack-nova14:00
sean-k-mooneycdent: efried so in the case of vhost-vfio interfaces i was considering them to be owned by neutron not nova, and yes i want to model the numa relationship too14:00
efriedmgoddard Which is what report client should be doing, but isn't yet, but I'm working on it.14:00
kashyapmdbooth: I see, stopped the audit for a bit; will get back.14:00
kashyapmdbooth: https://lists.nongnu.org/archive/html/qemu-devel/2018-01/msg00845.html -- [PATCH 0/2] qemu-img: Let "info" warn and go ahead without -U14:00
sean-k-mooneycdent:hehe yes i know we dont want to have multiple parents so im exploring other options.14:00
kashyap(The "-U" == '--force-share')14:01
efriedmgoddard It also sends the generation - whatever was the last value it got when it retrieved the RP record - whenever it updates the RP or associations (traits, aggs*, etc.).  And then the placement API bounces with a 409 if that's not the same as the generation it thinks that RP should have.14:01
efriedmgoddard The client can/should respond to that by re-GETting the provider/associations, redriving the change, and re-PUTting with the new generation.14:01
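The generation/409 redrive pattern efried describes can be modeled minimally: every PUT carries the provider generation last seen in a GET, and if placement's generation has moved on the PUT bounces with a conflict, so the client re-GETs, reapplies its change, and PUTs again. The function and callback names here are invented for the sketch; they are not nova's real report client methods.

```python
class Conflict(Exception):
    """Stands in for an HTTP 409 conflict from the placement API."""

def set_traits_with_redrive(get_provider, put_traits, rp_uuid, traits,
                            max_attempts=4):
    for _ in range(max_attempts):
        gen = get_provider(rp_uuid)['generation']  # refresh our view
        try:
            return put_traits(rp_uuid, traits, gen)
        except Conflict:
            continue  # someone else won the race; redrive
    raise Conflict('still conflicting after %d attempts' % max_attempts)
```

The key property is that the client never increments the generation itself; it only echoes back what placement last reported.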
sean-k-mooneycdent: basically i want to model the resource as a sub resource of a numa node (eventually) and have an easy way to associate them with the neutron agent that manages them14:02
sean-k-mooneycdent: without the numa aspect i had originally considered modeling the neutron agent as a resource provider and making it the parent of the sub resources14:02
cdentthat sounds a bit like the model of shared provider where the neutron agent is a resource provider14:02
efriedsean-k-mooney Placement won't stop you from doing that, but it would be highly irregular for neutron to own (create/modify/control) a provider in the middle of a tree that's otherwise owned by compute.14:03
cdentI think we've already talked in the past about considering being able to parent neutron rps into a compute generated tree, as leafs14:03
sean-k-mooneyefried: well the reason for this design is i dont think nova long term should be responsible for tracking networking resources in placement14:03
efriedsean-k-mooney It would be preferable if you found a way to model it such that the provider(s) owned by neutron were sharing providers associated by aggregate with ones in the compute's tree.14:04
cdentthere was some of that in denver. not that we _will_ do it, but might be able to do it14:04
*** sree has quit IRC14:04
*** stvnoyes has joined #openstack-nova14:05
*** ttsiouts has joined #openstack-nova14:05
cdentI have some vague concerns that the provider tree model in compute is going to make it harder to manage rps from multiple places, which was an original goal14:05
*** sree has joined #openstack-nova14:05
efriedI mean, I guess, as long as compute and the neutron agent both agree on how that dance is done.  It worries me, though, because different vendors' drivers have to talk to the same neutron agent sometimes.14:06
cdentI need to unvague those concerns, but at the moment I can't even get devstack to do the right things for my simple experiments, so...14:06
* cdent nods at efried 14:06
sean-k-mooneycdent: well the issue is i dont think nova should own the root node of the tree. i think we should consider the root node to be something each service can create a subtree from14:07
*** Tom-Tom has joined #openstack-nova14:07
efriedsean-k-mooney What do you call that root node?14:08
efriedIs it a host?14:08
efrieda "cloud"?14:08
efriedsome nebulous as-yet-unnamed entity that exists solely as an anchor point for this multi-owner provider tree?14:08
efried...but doesn't correspond to anything in the real world?14:09
cdentsean-k-mooney: That's kind of related to why I disputed that root provider be a thing in the data struture or representation. I think we should be able to access a tree of providers anywhere in whatever trees people like, and declare subtrees to be whole if that's what suits them. That is, if we're gonna have trees, let's have _trees_.14:09
cdent(I'd rather just not have trees, but that ship sailed)14:09
*** lyan has joined #openstack-nova14:09
*** markvoelker has joined #openstack-nova14:10
*** sree has quit IRC14:10
sean-k-mooneyefried: in the case of a nova created tree i was assuming a compute node. e.g. a host yes14:11
efriedsean-k-mooney Right, I was referring to what you said earlier about "i dont think nova should own the root node of the tree" and "the root node [is] something each service can create subtree from"14:12
sean-k-mooneyyep i think we are violently agreeing on that point :)14:12
mgoddardefried: so with your change, will a generation change in the provider tree cause inventory, traits, and aggregates to be updated, and checked that all those responses contain the same generation?14:12
efriedIn that picture, who owns the root node, and what does it represent?14:12
*** ameeda_ has quit IRC14:13
sean-k-mooneyefried: everyone owns it.  i think we just need to agree on how the root node is created, for example by saying that the node is identified by the host_id which defaults to the hostname14:14
efriedsean-k-mooney But if the root node represents a compute host, doesn't it make sense for compute to own/create it?14:14
sean-k-mooneyefried: not in a converged deployment14:14
sean-k-mooneyfor example if you have cinder running on the same physical server then its also a storage node not just a compute node14:15
sean-k-mooneyif cinder and nova agree on how to create/select that root node then both can create subtrees for that physical server14:16
efriedmgoddard At the report client level, the plan for set_traits_for_provider is: If PUT /rp/{uuid}/traits succeeds (200), I'll update_traits on the ProviderTree, setting the generation based on what's in the PUT response.  (The generation is an attribute of the provider, so updating it via update_traits updates it for anything else associated with that provider.)14:16
*** gouthamr has joined #openstack-nova14:17
efriedmgoddard If the PUT fails 409, set_traits_for_provider will raise (a subclass of) this new conflict exception.14:17
efriedmgoddard The report client consumer (i.e. resource tracker) should then redrive the overarching operation, which should entail first asking report client to refresh its cached representation of that provider in the ProviderTree.14:18
efriedmgoddard Which means reGETting the provider and all its associated bits (traits, aggs, etc.) and calling the appropriate ProviderTree methods to update them all.14:19
efriedmgoddard Then the consumer (resource tracker) would make whatever change again (in this example setting the traits).  Rinse, repeat.14:19
efriedmgoddard Not sure if I'm explaining this particularly well.14:20
sean-k-mooneyefried: cdent anyway that went a little off topic but i think one design would be for neutron to create its resources under the numa nodes created by nova and add the resources it creates to an aggregate using the neutron agent uuid as the aggregate uuid.14:20
efriedsean-k-mooney Why do you need the aggregate in that picture?14:21
efriedsean-k-mooney Are the neutron resources common to multiple hosts?14:21
efriedsean-k-mooney If so, they should *not* be created as providers in the tree under the numa nodes.14:21
sean-k-mooneyefried: no they are specific to that compute but i want a simple way to look up all the resource providers created by the neutron agent on that host14:22
*** smatzek has quit IRC14:22
*** mvk has joined #openstack-nova14:22
*** smatzek has joined #openstack-nova14:22
efriedsean-k-mooney Ahhh, that makes sense.14:22
efriedsean-k-mooney So the aggregate is really just saying "neutron owns these".  It's not associating the providers with any shared providers.14:23
efriedsean-k-mooney Which is a crucial distinction, because you wouldn't want an allocation candidate request to bleed across compute trees.14:23
efriedsean-k-mooney So in this scenario, the providers are associated with an aggregate, but nobody in that aggregate has the MISC_SHARES_VIA_AGGREGATE trait.14:24
*** smatzek has quit IRC14:24
sean-k-mooneyefried: yes exactly14:25
efriedmgoddard But note that report client is the pinch point for everything.  You can't directly change the report client's cached ProviderTree from outside the report client.  Conversely and more importantly, you can't effect a change to placement by changing that cached ProviderTree.  You can only do that by calling report client methods like set_*_on_provider etc.14:26
efriedmgoddard Where it gets confusing is that ComputeDriver.update_provider_tree *sounds* like it's doing just that (effecting a change to placement by changing the cached ProviderTree).14:27
efriedmgoddard Which is almost but not quite the case.14:27
efriedmgoddard RT will ask RC for the provider tree; RC will return a *copy*.  RT will then ask ComputeDriver to update_provider_tree.  RT will then ask RC to diff and flush any changes back to placement, which will update RC's cached ProviderTree in the process.14:28
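The copy/diff/flush flow efried lays out can be toy-modeled as follows. Everything here (a plain dict standing in for ProviderTree, the class and method names) is an illustrative assumption, not nova code; the point is only that callers mutate a copy, and the report client alone decides what actually gets pushed to placement while updating its cache in the process.

```python
import copy

class ReportClient(object):
    def __init__(self, cached_tree):
        self._tree = cached_tree     # the one true cached "ProviderTree"
        self.flushed = []            # provider names we would PUT to placement

    def get_provider_tree(self):
        # Callers can never mutate the cache directly.
        return copy.deepcopy(self._tree)

    def update_from_provider_tree(self, updated):
        for name, data in updated.items():
            if self._tree.get(name) != data:
                self.flushed.append(name)    # placement PUT happens here
                self._tree[name] = data      # cache updated in the process

rc = ReportClient({'cn1': {'VCPU': 8}})
tree = rc.get_provider_tree()                # RT gets a copy
tree['cn1']['VCPU'] = 16                     # driver edits the copy
rc.update_from_provider_tree(tree)           # RC diffs and flushes
```

Re-running the flush with an unchanged copy is a no-op, which is what makes the periodic task cheap when nothing moved.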
efriedsean-k-mooney Is there any reason that agg ID has to be the same across all the computes for which that neutron instance is handling the network providers?14:30
efriedsean-k-mooney I guess I can see the benefits either way.14:30
sean-k-mooneyefried: in this case i was assuming the aggregate would not span multiple computes14:30
openstackgerritMark Goddard proposed openstack/nova master: Add support to scheduler client for setting traits  https://review.openstack.org/53253914:31
sean-k-mooneyi am considering if aggregates can be used to group resource providers in a single compute that are related in some way14:31
sean-k-mooneyefried: do you remember https://review.openstack.org/#/c/502306/14/specs/queens/approved/bandwidth-resource-provider.rst effectively im wondering how best to remove the agent resource providers so that we can also model numa affinity of networking resources14:32
mgoddardefried: Ah, I think I see now. We can only get a 200 back if no other changes have occurred other than those we just PUT, therefore our provider tree must be up to date. Thanks for explaining14:32
openstackgerritLee Yarwood proposed openstack/nova master: libvirt: Collocate encryptor and volume driver calls  https://review.openstack.org/46024314:33
lyarwoodmdbooth / stephenfin ; ^ one final respin with the comment and nits taken care of14:33
*** hongbin has joined #openstack-nova14:34
*** Eran_Kuris has joined #openstack-nova14:35
efriedmgoddard Just so.  Given that report client is the pinch point for placement in nova, and there's only one report client, and it's running on one compute node, under only one thread, we shouldn't actually see these conflicts if nova is the only thingy managing all the providers.  But as you can see from the conversation above, there are plans to potentially have multiple sources of ownership for a given provider.14:36
sean-k-mooneyefried: https://review.openstack.org/#/c/502306/14/specs/queens/approved/bandwidth-resource-provider.rst was assuming numa would either be a trait or we would have multiple parents. i like having numa nodes be resource providers but that means the neutron agent cant be one if its sub resources have numa affinity.14:36
efriedmgoddard BTW, I put a * above: today aggregates aren't included in the generation thing.  cdent is working on fixing that.  It's unclear whether that's going to get deferred to Rocky.  IMO sean-k-mooney just described a non-sharing-RP use case where we need it.14:37
efriedsean-k-mooney Multiple parents, eh?  How so?14:38
*** mriedem has joined #openstack-nova14:38
efriedsean-k-mooney I *think* the best way to model NUMA affinity is going to be using a numbered group with granular syntax.14:38
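The "numbered group with granular syntax" efried mentions for NUMA affinity was still a spec in flight at the time of this log, so the resourcesN=/requiredN= parameter names below reflect the proposed shape and should be read as assumptions: each numbered suffix asks placement to satisfy that group of resources and traits from a single provider (e.g. one NUMA node), rather than from anywhere in the tree.

```python
try:
    from urllib.parse import urlencode       # py3
except ImportError:
    from urllib import urlencode             # py2

def granular_ac_url(base, groups):
    """groups: list of (resources_dict, required_traits) tuples; each
    tuple becomes a numbered resourcesN/requiredN query pair."""
    params = []
    for n, (resources, required) in enumerate(groups, start=1):
        res = ','.join('%s:%d' % kv for kv in sorted(resources.items()))
        params.append(('resources%d' % n, res))
        if required:
            params.append(('required%d' % n, ','.join(sorted(required))))
    return '%s/allocation_candidates?%s' % (base, urlencode(params))

url = granular_ac_url('http://placement',
                      [({'VCPU': 2, 'MEMORY_MB': 1024},
                        ['CUSTOM_PHYSNET_A'])])
```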
*** lbragstad has joined #openstack-nova14:38
mriedemyikun: if you wanted to start looking at something, this part of the spec hasn't been started https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/neutron-new-port-binding-api.html14:38
mriedemyikun: "Prior to the RPC call from live_migration on the source host to pre_live_migration on the dest host, start a wait thread for the vif-plugged event from Neutron, similar to during initial spawn."14:38
*** jmlowe has quit IRC14:39
sean-k-mooneyefried: if resources are allowed to have 1+ parents then one parent is the numa node and the other the neutron agent, but that makes the trees a graph which we dont want in general14:39
*** armax has quit IRC14:41
*** sahid has quit IRC14:41
ameedacan anyone help me write a functional regression test for review.openstack.org/#/c/526900/ ?14:41
*** READ10 has joined #openstack-nova14:42
*** Guest20076 has quit IRC14:42
efriedsean-k-mooney Agree.  The tree should be a tree (to quote cdent), the neutron agent can control the providers (at least the inventories thereof, if not the RP records themselves), and we can use aggregates to "tag" those providers as being controlled thusly.14:42
efriedsean-k-mooney An alternative approach for such tagging would be to use a trait.14:42
*** markvoelker has quit IRC14:43
sean-k-mooneyefried: ya that was the other thing i was going to explore14:43
efriedsean-k-mooney Which might engender less confusion wrt sharing providers (which these ain't) as well as remove the requirement for the agg generation thing cdent is working.14:43
*** alexchadin has quit IRC14:43
sean-k-mooneye.g. CUSTOM_AGENT_<uuid goes here>14:43
cdentefried: you're welcome to quote me efried, but you're taking me out of context. the implication i was trying to make there is that the trees should be big and anything can be a root14:43
efriedcdent Sorry, I take it back.  Just didn't want to plagiarize :)14:44
*** yassine has joined #openstack-nova14:44
*** yassine is now known as Guest8314:44
efriedsean-k-mooney Or just CUSTOM_MANAGED_BY_NEUTRON.  Why would you need a UUID in the trait?14:44
cdentmy utterances here are cca14:44
efriedI'll quote you on that.14:45
efried(but not give you credit)14:45
sean-k-mooneyefried: again so i have a single api call i can make to retrieve all resource providers that are managed by that agent14:45
sean-k-mooneyboth aggregates via member_of and traits with the uuid give me that within the existing model14:46
efriedsean-k-mooney Isn't the scope of one neutron in this scenario the same as the scope of one placement service?14:46
*** cleong has joined #openstack-nova14:46
sean-k-mooneyone placement service? is someone suggesting there would be more than one placement service?14:47
cdentefried: sean's assertion was neutron agent, not neutron14:47
sean-k-mooneyoh yes what cdent said14:48
cdentsean-k-mooney: in the dark corners of the universe there has been talk of nested placement services14:48
*** eharney has joined #openstack-nova14:48
efried(cdent that's not what I was talking about)14:48
efriedcdent sean-k-mooney In that case, there's one neutron agent per compute host, yah?14:48
sean-k-mooneycdent: beyond one per cell that sounds like i need something stronger than the coffee im drinking14:48
cdentsean-k-mooney: yeah, it's the sort of thing where you should keep _me_ in drinks in dublin to not raise again14:49
*** burt has joined #openstack-nova14:49
sean-k-mooneyefried: actually you could have several per compute node. e.g. sriov + ovs on the same node, though in practice yes14:49
*** yamamoto has joined #openstack-nova14:50
efriedsean-k-mooney But not one agent for multiple computes.14:50
efriedsean-k-mooney So my point is, you can ask for providers (having CUSTOM_MANAGED_BY_NEUTRON) && (in tree <compute RP UUID>)14:50
sean-k-mooneyefried: :( well for agent based neutron no. but this is odl....14:50
sean-k-mooney* there is14:50
efriedcdent I thought GET /resource_providers had a queryparam for "having traits".  I don't see it at a glance.14:51
*** markvoelker has joined #openstack-nova14:51
*** awaugama has joined #openstack-nova14:51
cdenti'm not sure that got merged (yet)14:52
cdentI do think code to do it somewhere though14:52
efriedcdent Oh, okay, in flight14:52
efriedIt would be somewhere in that fabulous placement update email summary...14:53
mriedemalex_xu: i'm still +2 on https://review.openstack.org/#/c/330406/ - i think the migration_links thing is correct; it's consistent with other APIs that support paging. you have a good point about the changes-since before 2.59 though, but that could be addressed in a follow up.14:54
mriedemsince it assumes people would actually do that14:54
sean-k-mooneyefried: worst comes to worst neutron can just get the whole tree and walk it to see if the inventories it creates exist or not.14:54
cdentefried: I think it is something that alex_xu was working on but a lot of his trait related stuff got abandoned14:54
efriedcdent Yah, I can't find any such thing in open state (assuming it would have 'trait' somewhere in the title/description)14:55
efriedmgoddard So at a glance, it looks like what you've done here is modeled after the inventory updating stuff.14:56
mgoddardefried: Correct14:56
cdentefried: I suspect that quite a few query style things, on /resource_providers, got dropped when /allocation_candidates took the focus, especially if the use cases on /resource_providers weren't yet fully formed14:57
*** kumarmn has joined #openstack-nova14:57
efriedmgoddard I haven't fully synthesized this stance yet, but I *think* I'm going to come to the conclusion that that's unnecessarily complicated (even for inventory) - and even incorrect in that it does retries at this low level rather than at the consumer level.14:57
*** felipemonteiro__ has joined #openstack-nova14:57
*** yamamoto has quit IRC14:58
-openstackstatus- NOTICE: Gerrit is being restarted due to slowness and to apply kernel patches14:58
*** felipemonteiro_ has joined #openstack-nova14:59
efriedcdent GET /resource_providers?having_all=T1,T2 and/or ?having_any=T3,T4 seems like a fairly natural thing to expect, but of course there needs to be a real use case for it.  What sean-k-mooney described could count as such.14:59
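The having_all/having_any spelling efried floats here is purely hypothetical; no trait filter on GET /resource_providers had merged at the time of this log (a `required` parameter landed in a later microversion). This sketch just shows the shape of the query being discussed, combined with the real in_tree parameter to scope the lookup to one compute node's provider tree, per sean-k-mooney's use case.

```python
try:
    from urllib.parse import urlencode       # py3
except ImportError:
    from urllib import urlencode             # py2

def provider_filter_url(base, having_all=(), in_tree=None):
    """Build a hypothetical trait-filtered provider listing URL;
    having_all is the speculative parameter from the discussion."""
    params = []
    if having_all:
        params.append(('having_all', ','.join(having_all)))
    if in_tree:
        params.append(('in_tree', in_tree))
    return '%s/resource_providers?%s' % (base, urlencode(params))

url = provider_filter_url('http://placement',
                          having_all=['CUSTOM_MANAGED_BY_NEUTRON'],
                          in_tree='compute-rp-uuid')
```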
sean-k-mooneyefried: one of the other issues is that nova uses the nova compute node uuid or the host_id (hostname by default), which is stored in the name field of the compute node RP i think, so we cant use in_tree in this case and need to use name14:59
efriedI believe in_tree accepts name or UUID, doesn't it?15:00
*** abalutoiu_ has joined #openstack-nova15:00
efriedno, never mind.15:00
sean-k-mooneygot to run to a meeting but efried did you not have a syntax for this described also in your generic device management proposal.15:01
sean-k-mooneybe back in 30 mins15:01
*** openstackgerrit has quit IRC15:01
efriedsean-k-mooney Syntax for what?  And I doubt it, I don't recall getting to a 'syntax' level of detail in the generic device management discussions.15:02
mriedemalex_xu: makes me wonder if we've added other query strings in higher microversions to apis that allowed additionalProperties before :)15:03
*** felipemonteiro__ has quit IRC15:03
*** markvoelker has quit IRC15:03
*** abalutoiu has quit IRC15:04
efriedmgoddard I think we got away with retries at the report client level for inventory because at the time inventory was the only thing that could affect generation, AND we were guaranteed to be the only thing messing with that provider, AND there were no trees or sharing providers.  dansmith cdent and Jay should check me on this, but I think we're going to want to pull those retries outta there (at least for 409s) and subsume them in the wholesale retries from the resource tracker level.15:04
cdentthat's probably right and aligns with what was said monday15:04
*** smatzek has joined #openstack-nova15:05
kashyapmriedem: A heads-up: Given your Nova commit 8075797, https://lists.nongnu.org/archive/html/qemu-devel/2018-01/msg00845.html -- [PATCH 0/2] qemu-img: Let "info" warn and go ahead without -U ['--force-share']15:05
kashyapI (& DanPB too) pointed out that Nova already added support to it15:05
kashyapWhere the QEMU folks were asking if Nova / other management tools use it -- https://lists.nongnu.org/archive/html/qemu-devel/2018-01/msg01816.html15:06
*** smatzek_ has joined #openstack-nova15:06
*** smatzek has quit IRC15:08
*** archit has joined #openstack-nova15:08
*** dave-mccowan has joined #openstack-nova15:09
mriedemkashyap: so they are talking about deprecating and removing the locking thing because everyone is just bypassing it to get their code working again?15:09
alex_xuefried: cdent anything I can help on trait?15:10
mriedemi think nova hits qemu-info from a lot of places15:10
*** sahid has joined #openstack-nova15:10
kashyapmriedem: The discussion is still in flux.  I don't think they're going to _remove_ it.15:10
mriedemso auditing when we can just ignore it and bypass the lock would be difficult15:10
kashyapThe aim of the locking change is to not let users shoot themselves in the foot15:10
mriedemyeah i realize15:10
kashyapBut that WILL cause some inconvenience, in terms of usage behaviour15:10
cdentalex_xu: I don't think we need to do anything immediately but we were discussing needing to be able to get a list of resource providers that have a particular trait15:11
kashyapTrying to get a sense of what is the behaviour across versions15:11
alex_xumriedem: so...after that patch merges, we have a window where the older version API is broken15:11
mriedemkashyap: so for the shareable disk thing in libvirt 3.10, does that just bypass the lock in qemu 2.10?15:11
kashyapmriedem: Also, I was just adding a TODO item is that, we should investigate using the run-time command 'query-block'15:11
mriedemkashyap: or is it telling qemu, 'this is intentionally a shared thing, so be cool with it'?15:11
cdentit occurs to me now, after thinking about it a bit, that we can probably use the 'resources' param for that, and pass in the one single trait we care about ( <- efried )15:11
kashyapInstead of 'qemu-img' in a loop every few seconds; as 'query-block' will give more consistent results15:11
kashyapmriedem: Yep15:11
mriedemalex_xu: i wouldn't say the older API version is broken15:12
*** moshele has joined #openstack-nova15:12
mriedemalex_xu: it's assuming someone actually passes changes_since, something that wasn't supported before the new microversion15:12
efriedcdent How do you specify traits to ?resources ?15:12
kashyapmriedem: So the upcoming behaviour (not set in stone) is that: *even* if you _don't_ specify '--force-share', it'll go ahead with the run, but will print a warning, so as to prime your brain15:12
efriedcdent I should know that answer, shouldn't I15:12
mriedemkashyap: we won't see those warnings most likely15:12
kashyapmriedem: Just noticed your other question about shareable thing15:12
alex_xumriedem: yes....15:12
cdentefried: i'm not certain, and i'm not certain we do, yet15:12
*** dave-mcc_ has joined #openstack-nova15:12
efriedcdent It's in flight, yeah.15:13
kashyapmriedem: Did you see Peter's comment here, to your question: https://bugzilla.redhat.com/show_bug.cgi?id=1378242#c2115:13
openstackbugzilla.redhat.com bug 1378242 in libvirt "QEMU image file locking (libvirt)" [Unspecified,On_qa] - Assigned to pkrempa15:13
cdentwas generalizing that that's how it _should_ work15:13
mriedemalex_xu: if you're really concerned about it, i can make the quick change to check for it if version<2.59 and just pop it off the req.GET15:13
cdentand it should work for both /rp and /ac15:13
kashyapmriedem: Yep, we won't see, because Nova already baked in (correctly so) the '--force-share' with your commit15:13
mriedemalex_xu: i'm just trying to get done as much as i can before i'm out next week15:13
mriedemkashyap: i meant nova won't see b/c it would be in the qemu/libvirtd logs,15:13
mriedemand we don't look there unless it's an error15:13
efriedcdent Ah, that's it, actually the code I have up is only applying it to /ac.  https://review.openstack.org/#/c/517757/115:14
kashyapAh, like that.15:14
*** armax has joined #openstack-nova15:14
cdentefried: thus my comment on line 23 on https://etherpad.openstack.org/p/nova-ptg-rocky15:14
*** dave-mccowan has quit IRC15:14
alex_xumriedem: got it, you can have my promise to review that patch again tomorrow15:14
mriedemalex_xu: ok i'll update it today then15:15
mriedemthanks for the solid review as always15:15
alex_xumriedem: thanks15:15
simondodsleyHi - hope I'm on the correct channel to ask these questions...15:15
simondodsleyWhat I'm trying to find out is when and if Nova supported/supports the use of the ```virsh --unsafe``` switch when ```cachemode != none```?15:15
simondodsleyI can see that this switch was added in libvirt 0.9.11 back in 2012, but I'm struggling to find references to it or VIR_MIGRATE_UNSAFE as valid options in Kilo or later releases of OpenStack (other than just comments)15:15
simondodsleyAny idea when it became a valid option to add to the ```live_migration_flags``` parameter in ```nova.conf```, and since this parameter was deprecated in Mitaka does Nova now automatically use 'unsafe' or is there something else that needs to be set to force the 'unsafe' switch?15:15
mriedemcdent: efried: dansmith: klindgren_ pinged me last night about the number of REST calls from the compute to placement during the update_available_resource periodic task which runs by default every minute,15:16
alex_xucdent: efried, for the trait, the only thing left is exposing the 'required' parameter in the 'GET /allocation_candidates' API15:16
simondodsleysorry about the format there :)15:16
mriedemfrom his pike deployment it's 5 calls https://paste.ubuntu.com/26356656/15:16
mriedemat least15:16
mriedemper compute15:16
mriedemcdent: efried: dansmith: the thing i noted was the 2 calls for aggregates,15:16
cdentmriedem: yes, you remember that post i made mid year about such things ?15:16
mriedemwhich if you look at the code, the provider aggregate map is there in the report client but not used,15:16
mriedemb/c we don't support shared providers yet15:17
mriedemcdent: not the details, no15:17
*** hrw has quit IRC15:17
mriedemcdent: can you summarize?15:17
mriedemwe might be on the same page15:17
dansmithtwo hits to inventories?15:17
*** moshele has quit IRC15:17
mriedemdansmith: i wondered about that too15:17
mriedemfor the aggregates ones, i told him the obvious thing to do is just comment out that code as it's totally unused15:18
cdentit was also five, iirc, and I was able to do some tricks to trim it but they were deemed risky. agree that one way to reduce it is to not make the agg map15:18
cdentlet me find the message, because I think it had something to say about the double inventory15:18
mriedem_get_inventory is only called by _get_inventory_and_update_provider_generation which is only called to check if we need to update inventory (if things changed), or delete inventory15:19
mriedemi wonder if he's on baremetal15:21
mriedembecause there are cases where the driver.get_inventory call for ironic will return an empty dict which is an indication to delete the inventory for the provider15:21
mriedemklindgren_: ^15:21
mriedemwere you seeing those inventory calls to placement on libvirt or ironic computes?15:21
*** hrw has joined #openstack-nova15:22
*** psachin has quit IRC15:22
*** jaypipes has joined #openstack-nova15:25
*** lajoskatona has quit IRC15:26
*** openstackgerrit has joined #openstack-nova15:27
openstackgerritAndrey Volkov proposed openstack/nova master: [placement] Fix resource provider delete  https://review.openstack.org/52951915:27
cdentsigh, took me forever to find http://lists.openstack.org/pipermail/openstack-dev/2017-January/110953.html15:27
mriedemoh i see,15:27
mriedem_update_inventory_attempt is called in a loop15:27
cdentnot yet clear if it will be any use15:27
mriedemso if we get a 409 trying to update inventory we try again15:28
mriedemthat's why there are multiple GETs for inventory15:28
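[editor's note: a hedged sketch of the retry loop mriedem is describing; function names and shapes are illustrative, not the exact nova report-client code]

```python
def update_inventory(put_inventory, refresh_generation, attempts=3):
    """Retry an inventory PUT on generation conflicts.

    put_inventory() returns an HTTP status code; on a 409 generation
    conflict we re-GET the provider generation and try again, which is
    where the extra GETs per periodic run come from.
    """
    for i in range(attempts):
        status = put_inventory()
        if status != 409:
            return status
        refresh_generation()  # the extra GET to placement happens here
    return 409
```

A usage sketch: if the first PUT conflicts and the second succeeds, the call trace is PUT, GET, PUT, matching the duplicated inventory GETs seen in the paste.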
sean-k-mooneyefried: syntax for traits to resources. there was discussion of a resource_1=<class x>,required1=<trait y>,<trait z> query arg syntax for get allocation candidates15:28
cdentmriedem: that may not be right, have a look at point B1 in the list posting above15:28
openstackgerritBalazs Gibizer proposed openstack/nova master: Deduplicate service status notification samples  https://review.openstack.org/53138115:29
cdent(it may also be right, though, but without logs, hard to say)15:29
sean-k-mooneyefried: i dont think we have similar functionality for the resource providers api however.15:29
*** kholkina has quit IRC15:30
mriedemcdent: yeah, you might be right, because if we hit an inventory conflict, we delete the rp_uuid from the cache and then get the resource provider again to update the generation, but i don't see a GET to just /resource_providers in klindgren_'s output15:31
mriedemso likely hitting something that's not a 409, but would need logs15:31
*** felipemonteiro_ has quit IRC15:33
efriedsean-k-mooney Correct.15:33
*** felipemonteiro_ has joined #openstack-nova15:34
efriedmriedem FYI, I'm trying to rework all of this atm15:34
*** Tom-Tom has quit IRC15:34
efriedmriedem But are you looking for a way to get this "fixed" in pike?15:34
*** matrohon has joined #openstack-nova15:34
*** kumarmn has left #openstack-nova15:35
*** Apoorva has joined #openstack-nova15:37
*** markvoelker has joined #openstack-nova15:37
mriedemefried: well, kind of depends on how many operators rolling up to ocata and pike are going to be complaining about the new load their computes are making because of a lot of http traffic to placement every minute15:38
mriedemi think klindgren_ is working around it by turning down the update_available_resource report interval so it doesn't run every minute15:38
efriedOkay, well, keep me posted.  I'll be interested in contributing to (or at least reviewing) the code if we go there.15:39
mriedemcdent: ha "After that every 60s or so, five requests are made:"15:41
mriedemright on15:41
mriedemtracking here btw https://bugs.launchpad.net/nova/+bug/174246715:41
openstackLaunchpad bug 1742467 in OpenStack Compute (nova) "Compute unnecessarily gets resource provider aggregates during every update_available_resource run" [Undecided,New]15:41
cdentmriedem: in your thinking just now did you get any clearer picture on the why of double inventory GET?15:43
openstackgerritMark Goddard proposed openstack/nova master: WIP: Send traits to ironic on server boot  https://review.openstack.org/50811615:43
openstackgerritMark Goddard proposed openstack/nova master: Add get_traits() method to ComputeDriver  https://review.openstack.org/53228715:43
openstackgerritMark Goddard proposed openstack/nova master: Add has_any_traits() to provider tree  https://review.openstack.org/53228915:43
openstackgerritMark Goddard proposed openstack/nova master: Implement get_traits() for the ironic virt driver  https://review.openstack.org/53228815:43
openstackgerritMark Goddard proposed openstack/nova master: Add support to scheduler client for setting traits  https://review.openstack.org/53253915:43
openstackgerritMark Goddard proposed openstack/nova master: Call get_traits() in the resource tracker  https://review.openstack.org/53229015:43
*** david-lyle has joined #openstack-nova15:43
mriedemcdent: yes it's the RT15:43
mriedem_update_available_resource is the call from the compute periodic task,15:44
mriedemwhich calls _init_compute_node15:44
mriedemwhen we already have the compute node, it calls _update15:44
mriedemwhich eventually does the update_inventory_attempt stuff in the report client15:44
mriedemthen at the end of _update_available_resource,15:44
mriedemwe call _update again15:44
mriedemso that's your 2 inventory updates15:44
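[editor's note: the call flow mriedem just traced can be sketched as a toy model; method names come from nova's resource tracker, bodies are illustrative only]

```python
class FakeResourceTracker:
    """Toy model of the periodic-task call flow described above."""

    def __init__(self):
        self.inventory_pushes = 0

    def _update(self, compute_node):
        # In real nova this ends up in the report client's
        # update-inventory path (a GET plus a possible PUT).
        self.inventory_pushes += 1

    def _init_compute_node(self, compute_node):
        # When the compute node already exists, it calls _update ...
        self._update(compute_node)

    def _update_available_resource(self, compute_node):
        self._init_compute_node(compute_node)
        # ... instance claims / usage accounting elided ...
        self._update(compute_node)  # ... and _update is called again here
```

Running one periodic pass shows two inventory pushes, matching the doubled inventory calls in the paste.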
mriedemwhich, johnthetubaguy changed in queens15:45
mriedemor wait,no15:45
*** janki has quit IRC15:45
mriedemhttps://review.openstack.org/#/c/520024/15:45
*** sandanar has quit IRC15:45
mriedemthat would fix the double GET inventories15:45
*** yamahata has joined #openstack-nova15:47
*** zhaochao has quit IRC15:47
mriedemmaciejjozefczyk: have you figured out anything more about https://review.openstack.org/#/c/520024/ ?15:48
maciejjozefczykmriedem: hah!15:49
maciejjozefczykmriedem: aready working on this15:49
maciejjozefczykand yes, I found something strange, but I need solid proof of it15:49
maciejjozefczykI'll post it today in review15:49
maciejjozefczykmriedem: basically: each time self._provider_tree.has_inventory_changed() returns False here:15:52
maciejjozefczykhttps://github.com/openstack/nova/blob/cf33de28b15bb445d34bbdda1897130812e3b5c5/nova/scheduler/client/report.py#L69615:52
maciejjozefczykwithout my change15:52
maciejjozefczykwith my change: It tries to update inventory_data to placement and then placement raises this strange Exception15:53
maciejjozefczykso for now in upstream we update only DB (once with faulty values, second time with proper ones)15:54
*** Eran_Kuris has quit IRC15:54
mriedemso it's failing this check? https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L23115:54
maciejjozefczykmriedem: exactly15:55
*** abalutoiu__ has joined #openstack-nova15:55
*** abalutoiu__ has quit IRC15:55
mriedemdo you see this debug message with the PUT inventory request body in it when this fails? https://github.com/openstack/nova/blob/cf33de28b15bb445d34bbdda1897130812e3b5c5/nova/scheduler/client/report.py#L76515:56
maciejjozefczykmriedem: yes15:57
maciejjozefczykWARNING nova.scheduler.client.report [None req-89e951da-7283-473f-9de0-53854741839a None None] [req-b6f884cd-883a-4964-8e99-a8f0f754c0df] Failed to update inventory for resource provider 52559824-5fb1-424b-a4cf-79da9199447d: 400 {"errors": [{"status": 400, "request_id": "req-b6f884cd-883a-4964-8e99-a8f0f754c0df", "detail": "The server could not comply with the request since it is either malformed or otherwise incorrect.\n\n Unable to update inventory for resource provider 52559824-5fb1-424b-a4cf-79da9199447d: Invalid inventory for 'VCPU' on resource provider '52559824-5fb1-424b-a4cf-79da9199447d'. The reserved value is greater than or equal to total.", "title": "Bad Request"}]}15:57
*** felipemonteiro__ has joined #openstack-nova15:57
mriedemthat's the warning, do you have this debug log? https://github.com/openstack/nova/blob/cf33de28b15bb445d34bbdda1897130812e3b5c5/nova/scheduler/client/report.py#L76515:58
*** abalutoiu_ has quit IRC15:59
mriedemi want to see if the request body has "The reserved value is greater than or equal to total." in it15:59
cdentgreater than empty value?15:59
maciejjozefczykmriedem:  checking15:59
*** tidwellr has joined #openstack-nova16:00
*** jmlowe has joined #openstack-nova16:00
*** moshele has joined #openstack-nova16:00
*** felipemonteiro_ has quit IRC16:01
mriedemcdent: so, we can likely at least turn down the 5 calls per periodic to 2 if we nix the 2 aggregate calls and turn the 2 inventory calls to 1; and then i think the GET /allocations in here is only if you have ocata computes or are using the ironic driver16:02
*** moshele has quit IRC16:02
cdentthat makes sense16:02
*** dklyle has joined #openstack-nova16:04
*** david-lyle has quit IRC16:04
melwittlyarwood: thanks for the stable/pike reviews, could you please hit these stable/ocata versions too? https://review.openstack.org/531422 and https://review.openstack.org/#/c/52391116:05
lyarwoodmelwitt: ack np, I'll get to them tonight16:06
*** mkoderer_6 has joined #openstack-nova16:07
*** mfisch` has joined #openstack-nova16:07
melwittthanks16:07
*** yikun_ has quit IRC16:08
maciejjozefczykmriedem: debug log: DEBUG nova.scheduler.client.report [None req-ad586aa8-27d1-494d-9c4d-bb8f15439fca None None] [req-a4ea518f-1da0-43fd-8348-64704210cb49] Failed inventory update request for resource provider 52559824-5fb1-424b-a4cf-79da9199447d with body: {'resource_provider_generation': 4, 'inventories': {'VCPU': {'allocation_ratio': 0.0, 'total': 2, 'reserved': 0, 'step_size': 1, 'min_unit': 1, 'max_unit': 2}, 'MEMORY_MB': {'allocation_ratio': 0.0, 'total': 29449, 'reserved': 512, 'step_size': 1, 'min_unit': 1, 'max_unit': 29449}, 'DISK_GB': {'allocation_ratio': 0.0, 'total': 193, 'reserved': 0, 'step_size': 1, 'min_unit': 1, 'max_unit': 193}}} {{(pid=19609) _update_inventory_attempt /opt/stack/nova/nova/scheduler/client/report.py:765}}16:09
cdentallocation_ratio being 0 is not supposed to happen16:09
*** mfisch has quit IRC16:10
*** itlinux has joined #openstack-nova16:10
cdentwe've had bug fixes for that since then16:10
mriedemreserved < total in all of those16:10
cdentbut capacity is a calculation that involves allocation_ratio as a multiplier16:11
cdentif it is 016:11
cdent...16:11
mriedemyeah16:11
mriedemrecheck gerrit restart16:11
mriedemooops16:11
cdentI suspect we've got bad exception trapping happening16:11
mriedemreturn int((self.total - self.reserved) * self.allocation_ratio)16:11
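[editor's note: the line mriedem quotes is the crux; a minimal sketch of how a 0.0 ratio zeroes out capacity. The formula is copied from the quoted snippet, the surrounding numbers are taken from the failing PUT body above]

```python
def capacity(total, reserved, allocation_ratio):
    # the exact formula quoted above from the Inventory object
    return int((total - reserved) * allocation_ratio)

# VCPU inventory from the failing PUT: total=2, reserved=0.
# With the raw config default of 0.0 the usable capacity collapses to
# zero, so placement rejects the update even though reserved < total
# in the payload itself.
zero_ratio = capacity(2, 0, 0.0)    # no usable capacity at all
sane_ratio = capacity(2, 0, 16.0)   # with the usual CPU default
```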
mriedemmaciejjozefczyk: is that from master with your patch? or pike/ocata?16:12
maciejjozefczykmaster, I think from 20 DEC 2017~ when I worked on that16:12
maciejjozefczykshould I pull?16:12
mriedemcdent: BASE_INVENTORY_SCHEMA shows only a max for allocation_ratio, not a min16:13
mriedemin pike anyway16:13
maciejjozefczykIm on 04c8fa469109098a0ba8e8774f6176c43b7ed19a16:13
cdentmriedem: that's still true, I checked16:13
cdentwhere things got changed was on the resource tracker side16:13
cdentwhere it was possible to default to 0, but that was changed, but I'm not sure when/where16:13
openstackgerritEric Fried proposed openstack/nova master: WIP: Scheduler[Report]Client.get_provider_tree  https://review.openstack.org/52109816:14
openstackgerritEric Fried proposed openstack/nova master: WIP: ComputeDriver.update_provider_tree()  https://review.openstack.org/52118716:14
openstackgerritEric Fried proposed openstack/nova master: WIP: Use update_provider_tree from resource tracker  https://review.openstack.org/52024616:14
openstackgerritEric Fried proposed openstack/nova master: Fix nits in update_provider_tree series  https://review.openstack.org/53126016:14
openstackgerritEric Fried proposed openstack/nova master: Raise conflict exception on RP create 409  https://review.openstack.org/53256316:14
openstackgerritEric Fried proposed openstack/nova master: WIP: SchedulerReportClient.set_traits_for_provider  https://review.openstack.org/53256416:14
mriedemcdent: _normalize_inventory_from_cn_obj ?16:14
cdentnot sure, but sounds likely16:14
efriedmgoddard WIP: SchedulerReportClient.set_traits_for_provider  https://review.openstack.org/532564 is where I was planning to go.16:14
mriedemthat's the thing in the RT that sets the allocation_ratio in the inventory payload if the driver.get_inventory() method didn't include allocation ratios16:15
mriedemand all 3 of those default to 0.0 in config :)16:15
mriedemso yeah...we're always sending an inventory update with a capacity that's not going to be accepted16:15
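[editor's note: a hypothetical simplification of the normalization step being discussed; the real _normalize_inventory_from_cn_obj signature differs, this only shows how a 0.0 compute-node ratio leaks into the payload]

```python
def normalize_inventory(inventory, cn_ratios):
    """Fill in allocation_ratio for resource classes the driver omitted.

    driver.get_inventory() for libvirt doesn't return allocation_ratio,
    so it gets filled from compute-node fields (cn_ratios here, e.g.
    {'VCPU': cpu_allocation_ratio}). If those carry the 0.0 config
    default, the 0.0 goes straight into the PUT /inventories body.
    """
    for rc, inv in inventory.items():
        inv.setdefault('allocation_ratio', cn_ratios.get(rc, 1.0))
    return inventory
```

For example, normalizing a bare VCPU inventory against a compute node whose ratio field is 0.0 reproduces the bad payload shown in the debug log.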
efriedouch16:16
maciejjozefczykmriedem: yes16:17
*** john51 has quit IRC16:17
mriedemleft notes in https://review.openstack.org/#/c/520024/16:18
mriedemso how does this work today?16:18
maciejjozefczyktoday, you mean without my patch?16:19
cdenti swear we've seen this before and changed it16:19
mriedemmaciejjozefczyk: yeah16:19
maciejjozefczykmriedem: https://github.com/openstack/nova/blob/cf33de28b15bb445d34bbdda1897130812e3b5c5/nova/scheduler/client/report.py#L696 always pass16:19
*** markvoelker has quit IRC16:19
maciejjozefczykso we dont send anything to scheduler16:19
maciejjozefczykpff, placement*16:19
maciejjozefczykonly DB is updated16:19
ameedaHello novaers, when I try to deploy overcloud on baremetal using undercloud "installed at vm" I got this error "No compute node record for host undercloud: ComputeHostNotFound_Remote: Compute host undercloud could not be found." from nova-compute.log file16:20
cfriesenmriedem: reading your "working towards feature freeze" email...given reviewer resources is there any point in trying to refresh the live-migration resource tracking patches for Q or is that now basically an R thing?16:20
mriedemlive migration resource tracking patches?16:20
mriedemmaciejjozefczyk: but at some point we have to set the initial inventory for the provider16:21
cfriesenmriedem: yeah, the ones that have been around forever to fix pinned cpus, hugepages, etc. on live migration16:21
mriedemcfriesen: i didn't realize you were trying to get those into Q16:21
cfriesenmriedem: I haven't actively16:21
cfriesenmriedem: sfinucan refreshed one of them in November16:22
*** mlavalle has joined #openstack-nova16:22
*** markvoelker has joined #openstack-nova16:22
mriedemmaciejjozefczyk: so i'm trying to figure out why the initial PUT for inventories for that compute node doesn't fail16:22
mriedemlike, the first time the compute is created16:23
maciejjozefczykmriedem: Im curious too16:23
maciejjozefczykmriedem: also looking16:23
*** john51 has joined #openstack-nova16:23
mriedemon the first PUT we'd get here https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L37116:24
mriedemwhich also checks capacity https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L20016:24
cdentat create time is the inventory constructed by something else, which does not set allocation_ratio in the request? If so, the server side will default to 1.016:25
mriedemi don't think so, should be the same code,16:25
mriedemcalls driver.get_inventory()16:25
mriedemwhich for libvirt doesn't return allocation_ratio16:25
*** markvoelker has quit IRC16:27
*** jamiec has quit IRC16:29
*** andreas__ has quit IRC16:30
cdentmriedem: I think maybe it just busted, but in our functional tests and tempest we always set those config values16:30
cdenttempest has allocation_ratio conf defaults16:30
cdentand the gabbi test fixture for function does too16:30
*** chyka has joined #openstack-nova16:31
mriedemtempest? you mean devstack?16:31
mriedemi don't see allocation_ratio set in nova.conf http://logs.openstack.org/24/520024/6/check/legacy-tempest-dsvm-neutron-full/255f8c4/logs/etc/nova/nova-cpu.conf.txt.gz16:31
cdentyeah, sorry16:31
mriedemhttp://logs.openstack.org/24/520024/6/check/legacy-tempest-dsvm-neutron-full/255f8c4/logs/screen-n-cpu.txt.gz#_Dec_15_15_20_27_56021416:31
cdent(I'm simultaneously working on some tempest stuff)16:32
*** andreas_s has joined #openstack-nova16:32
melwittI've noticed the test_live_migration_actions functional test has been intermittently failing in the gate again http://logs.openstack.org/90/333990/22/check/openstack-tox-functional/24eeb2b/testr_results.html.gz16:32
mriedemcdent: in this test run, this is the first inventory update after the compute node RP is created http://logs.openstack.org/24/520024/6/check/legacy-tempest-dsvm-neutron-full/255f8c4/logs/screen-n-cpu.txt.gz#_Dec_15_15_20_29_16293416:33
melwittgibi ^16:33
*** andreas__ has joined #openstack-nova16:33
cdentmriedem: presumably the set to 0.0 here shouldn't be? http://logs.openstack.org/24/520024/6/check/legacy-tempest-dsvm-neutron-full/255f8c4/logs/screen-n-cpu.txt.gz#_Dec_15_15_20_27_56021416:33
mriedemcdent: 0.0 is the default in the configs16:34
cdentI get that.16:34
cdentBut one wouldn't expect that to work, would one?16:35
cdentunless 0 is meant to mean 116:35
cdentand that's the conversation I seem to remember having, but can't find the thread of16:35
mriedemi don't see how it works with the capacity checks on the placement side for updating inventory16:35
mriedemfor this ci run, this is the first inventory update http://logs.openstack.org/24/520024/6/check/legacy-tempest-dsvm-neutron-full/255f8c4/logs/screen-placement-api.txt.gz#_Dec_15_15_20_29_13889316:36
mriedembut doesn't tell us much16:36
gibimelwitt: thanks for the heads up, I will look into it16:36
*** andreas_s has quit IRC16:36
melwittthanks gibi16:36
cdentmriedem: ah, it is the compute node object that does some jiggery pokery if it doesn't like the values of the allocation ratios: see _from_db_object in class ComputeNode16:37
*** aloga_ has joined #openstack-nova16:37
mriedemyup i just found that16:37
mriedemmaciejjozefczyk:16:37
cdentso presumably somewhere in the first step that pokery is happening, but then getting ruined in the second16:37
* cdent 's brain melts16:38
mriedemmaciejjozefczyk: https://github.com/openstack/nova/blob/master/nova/objects/compute_node.py#L19816:38
maciejjozefczykmriedem: mmmmm16:40
*** READ10 has quit IRC16:41
*** andreas__ has quit IRC16:41
maciejjozefczykmriedem: and yes, I dont have it specified in config at all16:41
*** READ10 has joined #openstack-nova16:41
*** tidwellr has quit IRC16:43
mriedemi don't really understand why the inventory request wouldn't have the allocation_ratio values from the compute node object though16:43
mriedembecause in your patch, _update is called after the compute node record is created,16:43
mriedemand ComputeNode.create() eventually calls _from_db_object to set the 'default' allocation ratios on the object itself16:43
cfriesenjust stumbled over something odd...if the os_type isn't set, then in some scenarios the ephemeral disks are formatted vfat.  if the ephemeral size is too big for vfat, this chokes.  Should we default to something else in the code or just not format it at all if a default isn't specified in the config file?16:43
*** tidwellr has joined #openstack-nova16:43
mriedemwhich will be used in _normalize_inventory_from_cn_obj16:44
cfriesenmdbooth: ^16:44
*** jaosorior has quit IRC16:47
mriedemmaciejjozefczyk: if you have a recreate of that failure, try logging the compute node record in _update() before calling reportclient.set_inventory_for_provider16:48
*** felipemonteiro__ has quit IRC16:48
*** felipemonteiro__ has joined #openstack-nova16:48
*** clarkb has quit IRC16:53
*** gyee has joined #openstack-nova16:54
*** ragiman has quit IRC16:54
*** AlexeyAbashkin has quit IRC16:55
maciejjozefczykmriedem: ok16:57
*** moshele has joined #openstack-nova16:57
*** liverpooler has joined #openstack-nova16:57
*** diga has joined #openstack-nova17:00
digajaypipes: Hi17:01
digajaypipes: have you seen my mail on this bug - https://bugs.launchpad.net/nova/+bug/171993317:01
openstackLaunchpad bug 1719933 in OpenStack Compute (nova) "placement server needs to retry allocations, server-side" [Medium,Triaged] - Assigned to Jay Pipes (jaypipes)17:01
*** Apoorva has quit IRC17:01
*** dklyle has quit IRC17:03
cdentdiga: twitter suggests that jay is rather ill today17:03
digacdent: ohh17:04
*** archit has quit IRC17:05
*** yamahata has quit IRC17:05
maciejjozefczykmriedem: Ok, I'll paste it tomorrow, need to go17:05
mdboothcfriesen: The behaviour of that formatting is lost in the depths of time. That sounds like a bona fide bug, though.17:05
digacdent: do you have sometime, can you help me ?17:06
mriedemmaciejjozefczyk: o/17:06
cdentdiga: I might be able to yeah, what's up?17:06
mdboothcfriesen: Do other clouds format blank ephemeral disks for you? It seems like such a weird thing to do in Nova.17:06
maciejjozefczykat first sight, strange: https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L763 gives me https://pastebin.com/skR0X0Mm (at init)17:07
mdboothcfriesen: I don't think we can change that behaviour without at least a microversion bump, btw. Probably a cycle of ops discussion, a prominent release note, and a microversion bump.17:07
maciejjozefczykmriedem: and cpu_allocation_ratio is set as 16.017:07
openstackgerritEric Berglund proposed openstack/nova master: PowerVM Driver: vSCSI volume driver  https://review.openstack.org/52609417:08
mriedemmaciejjozefczyk: that's because it pulls the object from the db and runs it through ComputeNode._from_db_object which sets the 'default' allocation ratios (hardcoded in code) if not set in config17:08
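[editor's note: a sketch of the sentinel-replacement behavior mriedem describes in ComputeNode._from_db_object; the hardcoded numbers are assumptions, though 16.0 for CPU matches the value maciejjozefczyk reports seeing]

```python
# assumed hardcoded fallbacks; only cpu_allocation_ratio = 16.0 is
# confirmed by the discussion above
RATIO_DEFAULTS = {'cpu_allocation_ratio': 16.0,
                  'ram_allocation_ratio': 1.5,
                  'disk_allocation_ratio': 1.0}

def apply_ratio_defaults(cn):
    """Mimic what _from_db_object is described as doing: replace a
    0.0/None sentinel (the raw config default) with a real ratio."""
    for field, default in RATIO_DEFAULTS.items():
        if not cn.get(field):  # 0.0 from config, or None from the db
            cn[field] = default
    return cn
```

This is why the object pulled from the db shows 16.0, which makes the 0.0 in the eventual PUT body all the more puzzling.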
mdboothcfriesen: When the ephemeral disk is too big, does the mkfs choke causing the spawn to fail?17:08
maciejjozefczykmriedem: I'll check it tomorrow why then it is changed to 017:09
maciejjozefczykmriedem: thats the real cause, yes?17:09
mriedemyeah i don't know what is changing it to 0.0 before we hit placement17:09
maciejjozefczykmriedem: I'll find it for you ;) bb17:10
mriedemgodspeed17:10
*** dave-mcc_ has quit IRC17:11
*** damien_r has quit IRC17:11
*** pcaruana has quit IRC17:11
digacdent:17:11
* cdent listens17:11
digacdent: - Here I got the error after adding scenarios - http://paste.openstack.org/show/641294/17:11
*** lpetrut has quit IRC17:12
digacdent: changes are made in git diff - http://paste.openstack.org/show/641293/17:12
cdentdiga: do you have a work in progress that you post up to gerrit so there's code to look at?17:12
digacdent: I have submit the patch yet, Let me submit the patch then17:13
digas/have/haven't17:13
digacdent: yes, I am working on this as per jaypipes suggestions17:14
cdentdiga: note that DbDeadLock is already handled, what's not handled is ConcurrentUpdate17:14
digacdent: okay17:15
cdent_set_allocations method already has the retry handling on it17:15
cdentbut it is only set up for handling db api exceptions, which ConcurrentUpdate is not17:16
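[editor's note: the gap cdent describes can be sketched like this; the decorator and exception classes are simplified stand-ins, not the real oslo.db/placement code]

```python
class DBDeadlock(Exception):
    """Stand-in for the db-api deadlock the retry already handles."""

class ConcurrentUpdateDetected(Exception):
    """Stand-in for the placement generation-conflict exception."""

def retry_on(exc_type, attempts=3):
    # simplified stand-in for the retry decorator on _set_allocations
    def deco(fn):
        def wrapper(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except exc_type:
                    if i == attempts - 1:
                        raise
        return wrapper
    return deco

@retry_on(DBDeadlock)
def set_allocations(failures):
    """Raises the queued exceptions first, then succeeds."""
    if failures:
        raise failures.pop(0)
    return 'ok'
```

A DBDeadlock gets retried and the call succeeds, while a ConcurrentUpdateDetected sails straight past the decorator, which is the behavior the bug asks to change.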
*** moshele has quit IRC17:16
mriedemsdague: this is the novaclient change for the file injection deprecation and userdata + rebuild stuff https://review.openstack.org/#/c/528128/ - closes out that bp and unblocks the next novaclient change in the series for the next microversion; client release freeze is creeping up so i'd like to get some reviews on this stuff17:16
*** harlowja has joined #openstack-nova17:16
*** david-lyle has joined #openstack-nova17:16
digaokk, I will work on it then17:16
cdentdiga: let me know if/how I can help17:17
digacdent: currently the main challenge in nova is to reproduce the issue; the db part is reproduced, but how to reproduce ConcurrentUpdate?17:17
digacdent: some pointers can be helpful17:18
mriedemdiga: i have a devstack patch that reproduces it...17:18
*** chyka has quit IRC17:18
cdentdiga: I think you can probably do something similar to what you've done in your existing test, but put the side effect on the method that increase the generation17:18
mriedemhttps://review.openstack.org/#/c/507918/17:18
cdentdiga: _increment_provider_generation17:19
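[editor's note: cdent's suggestion of putting a side effect on the generation bump can be sketched roughly like this; the class and method bodies are illustrative, only the _increment_provider_generation name comes from the discussion]

```python
from unittest import mock

class ConcurrentUpdateDetected(Exception):
    pass

class FakeReportClient:
    """Stand-in for the object owning _increment_provider_generation."""

    def _increment_provider_generation(self):
        return 1

    def set_allocations(self):
        # write path that bumps the provider generation
        self._increment_provider_generation()
        return 'ok'

def provoke_conflict(client):
    # Patch the generation bump so it behaves as if another writer
    # changed the provider between our read and our write.
    with mock.patch.object(client, '_increment_provider_generation',
                           side_effect=ConcurrentUpdateDetected):
        try:
            client.set_allocations()
        except ConcurrentUpdateDetected:
            return True
    return False
```

Outside the patch the write path succeeds; inside it, the conflict fires every time, which is enough to exercise a retry in a unit test.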
*** chyka has joined #openstack-nova17:19
*** tidwellr has quit IRC17:19
*** dave-mccowan has joined #openstack-nova17:19
*** lucasagomes is now known as lucas-afk17:19
digacdent: okay17:19
digamriedem: I will take a look at it17:20
mriedemit's probably not very helpful though for recreating a concurrent update failure in a unit test17:20
mriedemit was just something i think i noticed while investigating failures in that devstack patch17:21
*** vivsoni__ has joined #openstack-nova17:21
digamriedem: but it needs to rerun devstack with this change17:21
*** matrohon has quit IRC17:21
digamriedem: ok, got it17:21
*** mfisch` has quit IRC17:23
*** sree has joined #openstack-nova17:23
*** tidwellr has joined #openstack-nova17:24
*** jamiec has joined #openstack-nova17:29
*** tidwellr has quit IRC17:29
*** egonzalez has joined #openstack-nova17:29
stephenfinartom: https://review.rdoproject.org/r/#/c/11283/17:29
* stephenfin heads home17:29
digamriedem: Thanks for sharing the link, it will be certainly helpful17:34
digacdent: thanks for your help17:34
digacdent: mriedem : will ping you if I need any help, will update you by tomorrow17:35
*** itlinux has quit IRC17:35
*** jaypipes has quit IRC17:35
cfriesenmdbooth: sorry, was off in a meeting.  Yes, the initial spawn fails when the ephemeral disk is too big.   I wonder if we could change the default based on size, to either use something else or just not format it instead of choking.17:38
*** sree_ has joined #openstack-nova17:39
mdboothcfriesen: I think it has to be considered part of the api. I think the only change we can make to it without a microversion bump is a minimal change to make it not fail.17:39
*** sree_ is now known as Guest4841817:40
mdboothcfriesen: So we could, for eg, not format it at all, but only in the case that we know it would cause a failure to build.17:40
*** itlinux has joined #openstack-nova17:41
cfriesenmdbooth: agreed.   I'll open a bug and maybe propose a fix.17:41
*** Guest48418 has quit IRC17:42
*** sree has quit IRC17:42
*** penick has joined #openstack-nova17:43
*** felipemonteiro_ has joined #openstack-nova17:43
*** egonzalez has quit IRC17:46
*** felipemonteiro__ has quit IRC17:46
*** sahid has quit IRC17:49
*** diga has quit IRC17:51
*** sree has joined #openstack-nova17:53
*** sree has quit IRC17:58
*** derekh has quit IRC18:00
*** Apoorva has joined #openstack-nova18:00
*** Apoorva has quit IRC18:00
*** Apoorva has joined #openstack-nova18:01
*** vivsoni__ has quit IRC18:02
*** moshele has joined #openstack-nova18:03
*** archit has joined #openstack-nova18:05
*** david-lyle has quit IRC18:07
*** moshele has quit IRC18:09
*** gouthamr has quit IRC18:09
*** karthiks has quit IRC18:12
*** jpena is now known as jpena|off18:14
*** felipemonteiro_ has quit IRC18:19
*** felipemonteiro_ has joined #openstack-nova18:19
*** hemna_ has joined #openstack-nova18:25
*** openstack has joined #openstack-nova18:31
*** ChanServ sets mode: +o openstack18:31
*** AlexeyAbashkin has joined #openstack-nova18:31
*** AlexeyAbashkin has quit IRC18:35
*** jackie-truong has joined #openstack-nova18:35
*** tesseract has quit IRC18:36
*** avolkov has quit IRC18:37
*** harlowja has quit IRC18:37
*** jmlowe has quit IRC18:40
*** jmlowe has joined #openstack-nova18:41
*** gouthamr has joined #openstack-nova18:47
*** gszasz has quit IRC18:50
*** xinliang has quit IRC18:53
*** gouthamr has quit IRC18:53
openstackgerritmelanie witt proposed openstack/nova master: Detach volume after deleting instance with no host  https://review.openstack.org/34061418:53
*** xinliang has joined #openstack-nova18:54
*** tidwellr has joined #openstack-nova18:54
*** READ10 has quit IRC18:54
mriedemstvnoyes: finally got that multiattach snapshot test to pass http://logs.openstack.org/86/531386/7/check/tempest-full/2f25c03/job-output.txt.gz#_2018-01-10_02_15_35_84000818:55
*** READ10 has joined #openstack-nova18:55
stvnoyesexcellent. I've been working on libvirt 3.10. Finally got past the dependency issues, working on getting a stack up now18:56
mriedemfor the tempest patch, it's getting big, so i might need to think about splitting it up18:56
mriedemmtreinish: how do you feel about this? https://review.openstack.org/#/c/266605/25/tempest/api/compute/volumes/test_attach_volume.py18:56
melwittis anyone willing to please review the local delete patch ^ if an instance with attached volumes etc goes into error state, when it's deleted the volumes and networks aren't disconnected from the instance and have to be manually disconnected in order to be used again18:56
mriedemshould i do the first patch with the config option and 1 test, then add the other tests in subsequent patches?18:56
*** mvk has quit IRC18:57
*** david-lyle has joined #openstack-nova18:59
openstackgerritEric Berglund proposed openstack/nova master: PowerVM Driver: vSCSI volume driver  https://review.openstack.org/52609419:01
*** moshele has joined #openstack-nova19:02
*** felipemonteiro__ has joined #openstack-nova19:03
*** aloga_ has quit IRC19:03
openstackgerritMatt Riedemann proposed openstack/nova master: Add pagination and Changes-since filter support for os-migrations.  https://review.openstack.org/33040619:04
openstackgerritMatt Riedemann proposed openstack/nova master: Add index(updated_at) on migrations table.  https://review.openstack.org/53113219:04
openstackgerritMatt Riedemann proposed openstack/nova master: Fix comment in MigrationSortContext  https://review.openstack.org/53236819:04
ameedaHello, when I try to deploy overcloud on baremetal using undercloud "installed at vm" I got this error "No compute node record for host undercloud: ComputeHostNotFound_Remote: Compute host undercloud could not be found." from nova-compute.log file19:05
ameedawhat can I do with that ?19:05
*** felipemonteiro_ has quit IRC19:06
*** moshele has quit IRC19:07
openstackgerritMark Goddard proposed openstack/nova master: WIP: Send traits to ironic on server boot  https://review.openstack.org/50811619:10
openstackgerritMark Goddard proposed openstack/nova master: Add get_traits() method to ComputeDriver  https://review.openstack.org/53228719:10
openstackgerritMark Goddard proposed openstack/nova master: Add support to scheduler client for setting traits  https://review.openstack.org/53253919:10
openstackgerritMark Goddard proposed openstack/nova master: Call get_traits() in the resource tracker  https://review.openstack.org/53229019:10
openstackgerritMark Goddard proposed openstack/nova master: Implement get_traits() for the ironic virt driver  https://review.openstack.org/53228819:10
*** sambetts is now known as sambetts|afk19:10
*** sridharg has quit IRC19:10
rybridgesHey guys. Got another question for you today. Is there a way to update the user-data on an instance after it is in the build state during the boot flow? I have a hunch that it is not possible because once the instance is in the build state, the user-data and config drive stuff have already been written out onto that instance's partition on the hypervisor. Meaning in order to update the user data19:10
rybridgeson the instance after it is built, we would need some api capable of modifying user data on the instance. does something like that exist?19:10
*** harlowja has joined #openstack-nova19:13
*** hemna_ has quit IRC19:16
*** harlowja_ has joined #openstack-nova19:16
*** jobewan has joined #openstack-nova19:18
melwittI'm not sure if it's done yet but the plan was to allow user data to be provided during a rebuild19:19
*** harlowja has quit IRC19:19
rybridgeshmm19:19
rybridgesi am talking more from a coding point of view rather than from a user's point of view19:19
rybridgeslike in the code of the boot flow19:19
rybridgesis there a way to update that user data after the instance has already been built19:20
*** READ10 has quit IRC19:20
rybridgesso for example19:21
rybridgesif you update the user data at this point https://github.com/OpenStack/nova/blob/stable/ocata/nova/compute/api.py#L94419:22
rybridgeswhich is before the instance is actually created on the HV19:22
rybridgesthen when the instance is actually created on the HV, it will get your user data19:22
rybridgesbut if you do it after that... I am thinking it wont update because updating would involve rewriting a file on the hypervisor19:22
*** Apoorva_ has joined #openstack-nova19:32
*** mvk has joined #openstack-nova19:33
melwittrybridges: so you're saying you want to modify the user data after it's been provided by the end user?19:34
melwittbut it's not the end user themselves who want to update it?19:34
*** Apoorva has quit IRC19:34
melwittit sounds like what you want is the vendor data stuff19:35
*** jackie-truong has quit IRC19:40
melwitthttps://docs.openstack.org/nova/latest/user/vendordata.html19:41
*** penick has quit IRC19:43
*** hemna_ has joined #openstack-nova19:45
*** moshele has joined #openstack-nova19:45
*** cdent has quit IRC19:48
rybridgeseh.. not quite. That's okay. Thanks melwitt19:48
*** moshele has quit IRC19:49
*** aloga_ has joined #openstack-nova19:50
mriedemhe's looking for the server create hook19:52
mriedemwhich has been deprecated forever19:52
*** dtantsur is now known as dtantsur|afk19:53
*** nicolasbock has quit IRC19:56
mtreinishmriedem: what do you want me to look at there?20:01
*** cleong has quit IRC20:03
mriedemmtreinish: so i've got the 3 tests in there passing,20:04
mriedembut, that patch also has some setup stuff so it's getting large,20:04
mriedemwas thinking about changing that to just be the first test, and then put 1 patch per new test on top of that in a series20:04
mriedemsince there are a bunch of TODOs in there for more tests, i didn't want to hold that single patch for all of the tests20:04
mtreinishmriedem: sure, that sounds like a sane way to handle it20:05
mriedemok20:05
mriedemjust wanted to make sure since it's going to be a bit of work20:05
mtreinishI did quickly look at the tests the other day when you linked me to them and they seemed fine to me20:05
mtreinishmriedem: I mean it's not that long a patch with the 3 tests in one. I'd be fine reviewing it as is too20:06
mriedemthe two patches below that one are ready to go20:06
mtreinishit's really your call20:06
mriedemyeah  i knew you would probably, but for others20:06
mriedemi'll split it up20:06
mriedemplus then it will rock my tempest stats!20:06
mtreinishheh, got to maintain your top 10 committer status :)20:07
mriedemyou know it20:07
mriedemheh i didn't know that was still a thing http://stackalytics.com/?release=all&module=tempest&metric=commits20:08
mriedemsoon i can catch up with old man dague20:08
*** slaweq has joined #openstack-nova20:09
*** ralonsoh has quit IRC20:09
mtreinishmriedem: heh, nice20:09
sdaguemriedem: yeh, well... you youngins20:20
mriedemi can hear your fist shaking from here20:20
*** tidwellr_ has joined #openstack-nova20:24
*** tidwellr has quit IRC20:24
*** dave-mccowan has quit IRC20:27
*** imacdonn has quit IRC20:32
*** imacdonn has joined #openstack-nova20:33
*** archit has quit IRC20:34
*** dave-mccowan has joined #openstack-nova20:38
*** awaugama has quit IRC20:45
*** chyka has quit IRC20:49
*** chyka has joined #openstack-nova20:50
openstackgerritEric Berglund proposed openstack/nova master: PowerVM driver: ovs vif  https://review.openstack.org/42251220:52
*** Jeffrey4l has quit IRC20:52
openstackgerritEric Berglund proposed openstack/nova master: PowerVM Driver: SEA  https://review.openstack.org/52321620:53
openstackgerritEric Berglund proposed openstack/nova master: PowerVM Driver: vSCSI volume driver  https://review.openstack.org/52609420:53
mriedemthis was added in 2014: https://github.com/openstack/nova/blob/master/nova/compute/api.py#L102220:57
mriedemseems it might be time to remove the get()20:57
mriedemsince the default in cinder for new volumes is bootable=False20:57
*** chyka has quit IRC20:59
*** chyka_ has joined #openstack-nova20:59
*** AlexeyAbashkin has joined #openstack-nova21:00
*** Jeffrey4l has joined #openstack-nova21:03
*** eharney has quit IRC21:03
*** felipemonteiro__ has quit IRC21:03
*** felipemonteiro__ has joined #openstack-nova21:04
*** Guest83 has quit IRC21:04
openstackgerritSylvain Bauza proposed openstack/nova master: libvirt: create vGPU for instance  https://review.openstack.org/52883221:04
openstackgerritSylvain Bauza proposed openstack/nova master: libvirt : Force a specificly static UUID for a mediated device  https://review.openstack.org/53175221:04
openstackgerritSylvain Bauza proposed openstack/nova master: WIP: libvirt: Use only existing mdevs if kernel race  https://review.openstack.org/53185321:04
*** AlexeyAbashkin has quit IRC21:04
openstackgerritEric Berglund proposed openstack/nova master: PowerVM Driver: vSCSI volume driver  https://review.openstack.org/52609421:04
*** Guest83 has joined #openstack-nova21:07
*** yamahata has joined #openstack-nova21:12
*** moshele has joined #openstack-nova21:13
*** Apoorva_ has quit IRC21:15
*** Apoorva has joined #openstack-nova21:16
*** moshele has quit IRC21:16
*** kylek3h has joined #openstack-nova21:16
*** smatzek_ has quit IRC21:19
*** smatzek has joined #openstack-nova21:20
*** felipemonteiro_ has joined #openstack-nova21:20
*** smatzek_ has joined #openstack-nova21:22
*** felipemonteiro__ has quit IRC21:23
*** smatzek has quit IRC21:24
*** smatzek_ has quit IRC21:26
*** awaugama has joined #openstack-nova21:28
*** dave-mccowan has quit IRC21:29
*** felipemonteiro_ has quit IRC21:32
*** gouthamr has joined #openstack-nova21:32
*** felipemonteiro_ has joined #openstack-nova21:33
*** gouthamr_ has joined #openstack-nova21:35
*** gouthamr has quit IRC21:37
*** kylek3h has quit IRC21:38
mriedemildikov: on the multiattach api patch, i've got the rest api controller tests done, added a happy path functional test for boot from volume and attach to an existing server, and now working on negative tests for the error conditions in the API code - that should wrap it up21:39
mriedemthen ill split up the tempest test patch tomorrow probably21:39
ildikovmriedem: ack21:40
ildikovmriedem: is there anything stvnoyes or me should/could do?21:40
mriedemyeah, something on my todo list is we're going to need a CI job defined for multiattach, probably in the nova experimental queue21:41
mriedembased on the devstack patches i have21:41
mriedemhttps://review.openstack.org/#/c/531386/21:41
*** pcaruana has joined #openstack-nova21:42
mriedemi'm not sure if that job definition should live in nova with zuulv3, or if it should live in openstack-zuul-jobs since we'd want to run it on nova/cinder/tempest/devstack patches21:42
*** jmlowe has quit IRC21:42
mriedemmordred: ^ is there guidance on where a job should live if it's going to be run by multiple projects?21:42
*** pcaruana has quit IRC21:43
ildikovmriedem: ok21:43
stvnoyesmriedem: i am testing multiattach on libvirt 3.10.. so far it's working ok. But I just noticed that one of the libvirt modules, libvirt-bin, is at 3.6.  The Debian site says "This is a transitional package." Do you think it matters?21:43
stvnoyesI'm having trouble finding a 3.10 version of that.21:44
mriedemi don't know what a transitional package is21:44
mriedemwould probably have to ask zigo21:44
ildikovstvnoyes: do you have bandwidth to look into the CI job too?21:45
mordredmriedem: in general the idea is to have it live as close to the people who would be the most natural 'owners'21:45
stvnoyesI left out the important note that followed - "This is a transitional package. You can safely remove it." That's why I was thinking it might not matter.21:46
mriedemmordred: hmm, ok, i guess that is probably nova...21:46
mriedemor...devstack21:47
*** dr_gogeta86 has quit IRC21:47
mordredmriedem: it's perfectly acceptable to make a nova-devstack-multiattach in the nova repo and then have cinder, devstack, tempest repos add it to their .zuul.yaml files21:47
mriedemyeah i guess we'll just start with it in nova21:47
mordredor it could totally go in devstack, or tempest :)21:47
mriedemsince the only tests so far are compute api tests in tempest21:47
mriedemildikov: stvnoyes: if one of you do start on that, the first nova patch for zuulv3 layout is https://review.openstack.org/#/c/514309/ so you'd likely build on that21:48
stvnoyesmriedem ildikov: as for the CI job, I can take a look. I haven't played around with CI jobs before, but I can see how far I get21:48
stvnoyesok I'll take a look at that21:48
mriedemstvnoyes: it's mostly copy and tweak21:48
mriedemwe just need it to set this devstack variable https://review.openstack.org/#/c/531386/7/stackrc21:48
mgagnemriedem: when deprecating a rule name in oslo.policy, will the generator create an alias for the old name? My concern is with Horizon which might/will still use the legacy name until updated.21:48
mordredmriedem: this could be your first nova zuulv3-native job :)21:49
mriedemlike https://review.openstack.org/#/c/514309/10/playbooks/legacy/nova-lvm/run.yaml@3321:49
stvnoyesmriedem: kk, btw, is there any specific test you'd like to see with libvirt 3.10? So far, attach & detach are working ok. Was there a particular scenario that was failing?21:49
mriedemmordred: that i did?21:49
mriedemstvnoyes: the tempest patch is testing attach/detach to 2 servers, boot from volume with 1 server, and boot from volume and snapshot that volume-backed server21:50
ildikovstvnoyes: thanks much!21:50
mriedemstvnoyes: i think we also need testing for resize/cold migrate and swap volume21:50
stvnoyesall manual for now  I presume?21:50
mriedemstvnoyes: sure, until we write tempest patches21:50
mriedemstvnoyes: if you want to add a resize test on top of the tempest patch https://review.openstack.org/#/c/266605/ go ahead21:51
mriedemshould be pretty simple21:51
stvnoyeskk I'll do that21:51
mriedemswap volume gets tricky b/c that's an admin-only operation so i think that has to live in a different tree structure in tempest21:51
mriedemtempest/api/compute/admin/21:51
*** takashin has joined #openstack-nova21:51
*** threestrands_ has joined #openstack-nova21:52
*** threestrands_ has joined #openstack-nova21:52
mriedemwould live in here somewhere https://github.com/openstack/tempest/blob/master/tempest/api/compute/admin/test_volume_swap.py21:52
mriedemnew test class  to use the new microversion21:52
*** threestrands_ has quit IRC21:53
*** threestrands_ has joined #openstack-nova21:53
*** threestrands_ has quit IRC21:53
*** threestrands_ has joined #openstack-nova21:53
*** threestrands has quit IRC21:54
mriedemmgagne: not sure, lbragstad might know21:56
mriedemlbragstad: "when deprecating a rule name in oslo.policy, will the generator create an alias for the old name? My concern is with Horizon which might/will still use the legacy name until updated."21:56
* lbragstad goes digging21:56
*** chyka_ has quit IRC21:57
*** jackie-truong has joined #openstack-nova21:58
*** archit has joined #openstack-nova21:59
*** smatzek has joined #openstack-nova22:00
*** matrohon has joined #openstack-nova22:00
lbragstadmriedem: mgagne yeah - oslo.policy supports that case22:00
lbragstadmriedem: mgagne https://github.com/openstack/oslo.policy/blob/master/oslo_policy/policy.py#L590-L60922:01
lbragstadit takes the new policy and adds an OrCheck to it with the deprecated one22:01
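The OrCheck combination described above can be sketched with plain Python; RoleCheck and OrCheck here are simplified stand-ins for oslo.policy's check classes, not the real API:

```python
# Simplified stand-ins for oslo.policy's check classes (illustrative only):
# when a rule carries a deprecated check string, the library effectively
# enforces new_check OR deprecated_check during the deprecation period.

class RoleCheck:
    def __init__(self, role):
        self.role = role

    def __call__(self, creds):
        # True if the caller's credentials carry the required role
        return self.role in creds.get("roles", [])


class OrCheck:
    def __init__(self, checks):
        self.checks = checks

    def __call__(self, creds):
        # passes if any sub-check passes
        return any(check(creds) for check in self.checks)


# new check string "role:member" OR'd with the deprecated "role:owner"
rule = OrCheck([RoleCheck("member"), RoleCheck("owner")])

print(rule({"roles": ["owner"]}))   # legacy role still passes
print(rule({"roles": ["reader"]}))  # neither role: denied
```

During the deprecation window the effective rule is new-check OR deprecated-check, so callers who only satisfy the old check string keep working until operators migrate.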
mgagnelbragstad: this is about the check string, not the rule name22:01
mgagneno?22:01
lbragstadyes - for the check string22:01
lbragstadnot the name22:01
mgagneok, I'm asking about name22:02
lbragstadsorry - misread the question22:02
mgagnebecause I got caught with Horizon still reading legacy name22:02
mgagneI used oslo policy generator to get a policy file and used that in horizon, big mistake22:02
*** masber has joined #openstack-nova22:02
*** yamahata has quit IRC22:03
*** yamahata has joined #openstack-nova22:04
mgagneso now what I'm trying to do is create a mapping file (still testing atm): https://gist.github.com/mgagne/c98982290ed72aecf668e5291b5ee02722:04
lbragstadmgagne: so you're interested in this case - https://github.com/openstack/oslo.policy/blob/master/oslo_policy/policy.py#L1143-L116022:04
*** smatzek has quit IRC22:05
lbragstadhmm - we might have some work to do there22:05
*** gouthamr_ has quit IRC22:06
mgagneok so there is nothing in place for that, legacy v2 got replaced by v2.1 (with new policy names) but never got some form of deprecation period or mapping22:07
lbragstadmgagne: today we iterate through all the policies, but the rule is only registered with the non-deprecated name https://github.com/openstack/oslo.policy/blob/master/oslo_policy/policy.py#L62722:07
mgagneI think that if a policy name is deprecated and replaced by something else, sample file shouldn't include the literal string check in the legacy one but an alias to the new policy name so you can update the check string once22:08
lbragstadi suppose we could register another entry in that process with the deprecated name and check string iff that policy is deprecated22:08
mgagnebasically, what I'm trying to test above22:09
lbragstadthis is only the case when the policy *name* is changing, right?22:09
mgagneyes22:09
*** yamahata has quit IRC22:09
mgagneI'm sure there is more use cases but that's the one that seems to not be handled right now22:10
mgagneand which is causing some issues with horizon which expects to find the legacy names (well, the version of horizon I'm using)22:11
lbragstadthis is because horizon uses the policy file to customize UI22:11
mgagne(ocata)22:11
*** rcernin has joined #openstack-nova22:11
mgagneif I generate a sample file for Nova, update it to fit my needs and then use it for Nova, Horizon will mostly read the "default" rule because it can't find the "legacy rules"22:12
mgagneand default is not included anymore iirc22:12
mgagnewill have to double check on that one22:12
lbragstadwouldn't the problem be that horizon is looking for a policy name that no longer exists?22:13
mgagneyep, no default22:13
lbragstadso wouldn't you need an entry for each?22:13
lbragstadthe deprecate option and the new option?22:13
mlavallemriedem: have you seen the way the substring query is done in https://review.openstack.org/#/c/521683/?22:13
lbragstadin the generated policy file?22:13
mlavalleI want to make sure you are happy with it22:13
mgagnesure but... this also means you need to install the same version as Nova? can't install Horizon Ocata with Nova Newton?22:13
mgagnelbragstad: yes22:14
mgagnethat's what I will end up with22:14
mgagnebecause I need to satisfy Horizon22:14
lbragstadso you'd need an ocata nova policy file... right?22:14
mriedemmlavalle: hmm,22:17
mgagneCurrent use case: Nova Newton with Horizon Ocata. Horizon Ocata still expects legacy policy names while the policy file generated by Nova Newton does not contain legacy names.22:18
mriedem'%s%%'22:18
mriedemmlavalle: not sure why it's not just '%%%s%%'22:18
mriedemso the substring could be anywhere within the IP address22:18
mgagnelbragstad: even if I had an ocata policy file, it will never contain legacy names ever again, they got removed and no mapping was created, even in-code.22:19
mriedemmlavalle: like this http://git.openstack.org/cgit/openstack/nova/tree/nova/db/sqlalchemy/api.py#n70922:19
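The difference under discussion is only where the SQL LIKE wildcards land after Python %-formatting; a minimal illustration of the two pattern strings (no database involved):

```python
substr = "10.0.0"

# right-anchored match: only addresses that start with the substring
prefix_pattern = '%s%%' % substr      # -> '10.0.0%'

# anywhere match: the substring may appear at any position in the address
anywhere_pattern = '%%%s%%' % substr  # -> '%10.0.0%'

print(prefix_pattern)
print(anywhere_pattern)
```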
lbragstadmgagne: well - one thing we could do is emit deprecated policies if we detect one22:20
mgagnelbragstad: this also means that even if Horizon got updated, in future, you would need to update Horizon and Nova in lockstep which I suspect is something we don't want to encourage or promote22:20
mlavallemriedem: exactly. I want to make sure we deliver what you need on the Nova side. Thanks!22:20
mriedemhongbin: ^22:21
* lbragstad finds an example22:21
lbragstadmgagne: ok - let's say we're doing this https://github.com/openstack/oslo.policy/blob/master/oslo_policy/policy.py#L1143-L116022:21
*** corvus is now known as jeblair22:21
*** jackie-truong has quit IRC22:21
lbragstadrenaming foo:post_bar to foo:create_bar22:22
hongbino/22:22
*** jeblair is now known as corvus22:22
mriedemhongbin: see the questions about the IP substring filtering in neutron22:22
mgagnelbragstad: if you end up with: "foo:post_bar": "rule:foo:create_bar", that would be great I guess22:22
lbragstador if you have "foo:post_bar": "role:fizz" and "foo:create_bar": "role:fizz"22:23
lbragstadeven though they are the same thing22:23
-openstackstatus- NOTICE: The zuul system is being restarted to apply security updates and will be offline for several minutes. It will be restarted and changes re-equeued; changes approved during the downtime will need to be rechecked or re-approved.22:23
mgagnelbragstad: so I need to update 2 rules if I change a check string?22:23
lbragstadah - right, i see what you mean22:24
mgagnebut we need to determine for how long you want to support legacy names22:24
hongbinmriedem: mlavalle : the patch was originally proposed to support full substring (%%%s%%), zhenyu commented on it to give preference to a right hand substring (%s%%), so i made the revision22:24
lbragstadmgagne: i think that would depend in how long you want to support a deprecated policy22:25
lbragstadwhen it is removed, it's no longer rendered in the policy file22:25
mgagnebecause do you want to be able to run Horizon on release Xylophone but with Nova Mitaka?22:25
mriedemhongbin: hmm, ok we should ask Kevin_Zheng then probably22:25
mriedemwhen he's awake22:25
lbragstadmgagne: that's a good question - but i'm not sure i'm qualified to answer it :)22:25
mgagneyea, just something to consider22:25
hongbinmriedem: sure, i will send the email22:26
mgagnebut the one major release deprecation period might not be enough in that case22:26
lbragstadwould you expect to run deprecated configuration options from Mitaka in Xylophone?22:26
mgagnebecause the*22:26
mgagnecurrently, that's what I'm doing22:26
mgagnewas Nova Kilo with Horizon Ocata until very recently.22:26
mgagnenow Mitaka/Ocata22:27
lbragstadack22:27
mgagnebut also... Horizon should use new names...22:27
lbragstadright - i also expect this issue to be limited to horizon feeding off a generated policy file22:27
* bauzas raises fist at devstack22:27
mgagnelbragstad: afaik, there is no way to consume policy through the API so there might be some third-party software consuming policy files too22:28
bauzasany idea why it sticks my oslo.policy version to be 1.28.1 while I'm upgrading the package before ?22:28
lbragstadbecause if nova deprecates a policy name, they are likely going to start using the new policy name in the service around the same time they deprecate it22:28
*** jmlowe has joined #openstack-nova22:28
bauzasbecause it raises an exception when running nova api_db sync22:28
*** felipemonteiro__ has joined #openstack-nova22:29
mlavallehongbin, mriedem: thanks!22:29
mgagnelbragstad: would need to find a way to detect legacy policy names usage in horizon22:29
lbragstadmgagne: yeah22:29
mgagnelbragstad: policy names got updated in Horizon Pike22:29
mgagnehttps://github.com/openstack/horizon/commit/c61ae4f0834253e523c4443cecb3ce5eb06bf89b22:29
mgagnebut issue still remain, can't update a policy name without breaking horizon22:30
lbragstadthis is neither here nor there, but i'm hoping to have a PoC of a capabilities API that removes the need for rendered policy files by dublin22:31
bauzasoh snap, I need to git pull my requirements directory23:32
lbragstadmgagne: would you want to open a bug against oslo.policy for this?22:33
*** felipemonteiro_ has quit IRC22:33
mgagnelbragstad: project is using LP?22:33
lbragstadyes22:33
mgagnecool cool22:33
mgagnewill do22:33
lbragstadhttps://launchpad.net/oslo.policy22:33
mgagnesure, just wanted to make sure it's not storyboard =)22:34
lbragstadfwiw - i think we should be able to support rendering deprecated policy names pretty easy22:34
lbragstadbut getting horizon to figure out if a policy name is deprecated is going to require a bit more work22:35
*** gouthamr has joined #openstack-nova22:37
*** threestrands_ has quit IRC22:41
*** tidwellr_ has quit IRC22:41
*** tidwellr has joined #openstack-nova22:41
*** gouthamr has quit IRC22:42
*** matrohon has quit IRC22:44
*** lyan has quit IRC22:45
*** slaweq has quit IRC22:47
*** lyan has joined #openstack-nova22:47
openstackgerritEric Berglund proposed openstack/nova master: PowerVM driver: ovs vif  https://review.openstack.org/42251222:47
*** lyan has quit IRC22:48
openstackgerritEric Berglund proposed openstack/nova master: PowerVM Driver: SEA  https://review.openstack.org/52321622:48
openstackgerritMatt Riedemann proposed openstack/nova master: [api] Allow multi-attach in compute api  https://review.openstack.org/27104722:50
*** archit has quit IRC22:50
mriedemildikov: johnthetubaguy: stvnoyes: ^ done with the API change, ready for review22:51
mriedemtests are done22:51
mriedemand gibi ^22:51
mriedemstvnoyes: i'll work on the zuulv3 job if you didn't start on that yet22:52
*** jmlowe has quit IRC22:52
ildikovmriedem: looks great, thanks!22:52
mriedemsdague: if you're looking for an easy patch, this undumbifies our USE_NEUTRON usage in the functional tests https://review.openstack.org/#/c/529456/22:53
*** threestrands has joined #openstack-nova22:55
*** threestrands has quit IRC22:55
*** threestrands has joined #openstack-nova22:55
lbragstadmgagne: we might be able to do something like - https://review.openstack.org/#/c/532685/22:56
lbragstadmgagne: let me know if that helps you work around the issue22:57
*** edmondsw has quit IRC22:59
mgagnelbragstad: looks mostly good, commented already. I think the other issue is that Nova didn't register the legacy names and I needed to dig in code to find the expected name.23:01
lbragstadmgagne: responded - yeah the deprecation bits in oslo.policy are pretty new23:02
lbragstadi'm not sure if nova had a pre-existing deprecation implementation in nova23:02
mgagnenone that I'm aware of23:02
*** aloga_ has quit IRC23:04
*** burt has quit IRC23:05
*** awaugama has quit IRC23:06
mgagnelbragstad: ok, finally tested gist I posted above. the alias thing works fine.23:07
mgagne=> "compute:create": "rule:os_compute_api:servers:create"23:07
lbragstadsweet23:08
mgagneso legacy policy name "compute:create" becomes an alias of "os_compute_api:servers:create"23:08
lbragstadnice - that makes sense23:08
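The alias shown above can be sketched with a plain dict standing in for the policy file; the rule names and the resolution loop are illustrative (oslo.policy's Enforcer performs the real "rule:" resolution):

```python
# a "rule:<name>" check string delegates to another named rule, so the
# legacy name stays valid and automatically tracks any later change to
# the new rule's check string
policy_file = {
    "compute:create": "rule:os_compute_api:servers:create",  # legacy alias
    "os_compute_api:servers:create": "role:member",          # real check
}

def resolve(name, policies):
    """Follow 'rule:' aliases until a concrete check string is reached."""
    check = policies[name]
    while check.startswith("rule:"):
        check = policies[check[len("rule:"):]]
    return check

print(resolve("compute:create", policy_file))  # role:member
```

With this mapping, an operator only edits the check string on the new name once; the legacy name Horizon still looks up resolves to the same result.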
*** tidwellr has quit IRC23:09
lbragstadmgagne: pushed a new patch23:10
*** felipemonteiro__ has quit IRC23:10
mgagnethis works for me. only use case I'm not sure about is if both name and check string are deprecated. =)23:11
*** threestrands has quit IRC23:11
mgagnelike you get a new name AND a new check string :D23:11
lbragstadin that case, the policy is being removed all together, right?23:11
*** jmlowe has joined #openstack-nova23:12
lbragstador it can be the name and the check_str is changing at the same time...23:12
mgagneI don't know tbh. but it would still be a supported use case by oslo policy, won't fail with: can't deprecate both name and check string.23:12
mgagneyes, that's what I'm referring to23:13
mgagnelbragstad: I'm super bad with bug description, feel free to update =) https://bugs.launchpad.net/oslo.policy/+bug/174256923:17
openstackLaunchpad bug 1742569 in oslo.policy "Including deprecated policy names in sample file" [Undecided,New]23:17
lbragstadmgagne: looks good - thanks for the report23:18
lbragstadmgagne: i'm sure we'll be able to get that addressed before library freeze23:18
mgagnecool, thanks! =)23:19
mgagnefor now, will use my legacy mapping, looks to work fine with Horizon Ocata and Nova Newton23:20
*** threestrands has joined #openstack-nova23:20
*** hongbin has quit IRC23:23
openstackgerritMatt Riedemann proposed openstack/nova master: Add the nova-multiattach experimental queue job  https://review.openstack.org/53268923:27
mriedemildikov: stvnoyes: the new multiattach job ^23:27
ildikovmriedem: I owe you23:29
*** armax has quit IRC23:36
gmannmriedem: we can mark this complete now -https://blueprints.launchpad.net/nova/+spec/api-extensions-policy-removal23:39
mriedemgmann: cool thanks23:40
*** efried has quit IRC23:44
*** liverpooler has quit IRC23:44
*** jmlowe has quit IRC23:44
*** jmlowe has joined #openstack-nova23:46
*** sdague has quit IRC23:47
*** kumarmn has joined #openstack-nova23:47
*** pchavva has quit IRC23:50
*** andreykurilin has quit IRC23:52
*** kumarmn has quit IRC23:52
*** andreykurilin has joined #openstack-nova23:53
*** edmondsw has joined #openstack-nova23:55
*** yamamoto has joined #openstack-nova23:58

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!