Wednesday, 2017-10-18

*** namnh has joined #openstack-nova00:00
*** namnh has quit IRC00:00
*** esberglu has quit IRC00:01
*** ijw has quit IRC00:03
*** tbachman has quit IRC00:05
*** gjayavelu has quit IRC00:09
*** tbachman has joined #openstack-nova00:10
*** flanders_ has quit IRC00:11
*** gjayavelu has joined #openstack-nova00:12
*** tbachman has quit IRC00:15
*** tbachman has joined #openstack-nova00:17
*** jmlowe has quit IRC00:19
*** yingjun has joined #openstack-nova00:19
*** yikun has quit IRC00:20
*** yikun_ is now known as yikun00:20
*** smatzek has joined #openstack-nova00:20
*** liusheng has quit IRC00:23
*** smatzek has quit IRC00:25
*** tbachman_ has joined #openstack-nova00:26
*** tbachman has quit IRC00:27
*** tbachman_ is now known as tbachman00:27
*** gjayavelu has quit IRC00:27
*** andreas_s has joined #openstack-nova00:28
*** gjayavelu has joined #openstack-nova00:28
-openstackstatus- NOTICE: due to unscheduled restart of zuulv3.o.o you will need to 'recheck' your jobs that were last running. Sorry for the inconvenience.  00:32
*** psachin has joined #openstack-nova00:32
*** yikun_jiang has joined #openstack-nova00:34
*** liusheng has joined #openstack-nova00:34
*** Apoorva has quit IRC00:35
<mriedem> yikun_jiang: did you see the updates to ?  00:35
*** andreas_s has quit IRC00:37
*** xinliang has quit IRC00:37
*** suresh12 has quit IRC00:40
*** cjvolzka has quit IRC00:43
*** isq has quit IRC00:45
*** isq has joined #openstack-nova00:45
*** ijw has joined #openstack-nova00:45
*** salv-orlando has joined #openstack-nova00:48
*** xinliang has joined #openstack-nova00:50
*** yangyapeng has quit IRC00:51
*** yangyapeng has joined #openstack-nova00:51
*** salv-orlando has quit IRC00:52
*** suresh12 has joined #openstack-nova00:53
*** markvoelker_ has quit IRC00:53
*** markvoelker has joined #openstack-nova00:54
*** TuanLA has joined #openstack-nova00:54
*** andreas_s has joined #openstack-nova00:55
*** yangyapeng has quit IRC00:56
*** thorst has joined #openstack-nova00:57
*** markvoelker has quit IRC00:58
*** smatzek has joined #openstack-nova01:02
*** thorst has quit IRC01:02
*** thorst has joined #openstack-nova01:03
*** andreas_s has quit IRC01:04
*** andreas_s has joined #openstack-nova01:04
<openstackgerrit> Yikun Jiang proposed openstack/nova-specs master: Add pagination and changes since filter support for os-instance-action API
*** huanxie has joined #openstack-nova01:05
*** thorst has quit IRC01:08
<Kevin_Zheng> mriedem hi, for
<Kevin_Zheng> I was thinking that adding a config option would let the operator choose between 1. improved performance and 2. regex matching  01:11
<Kevin_Zheng> if neutron is old  01:11
<mriedem> Kevin_Zheng: we don't really want config-driven API behavior  01:11
*** phuongnh has joined #openstack-nova01:11
<mriedem> if neutron has the new regex filtering support, we use it, else we fall back to what we do today  01:12
<Kevin_Zheng> OK, that's also good, I was just adding it for discussion  01:12
<mriedem> if someone really really needed to both filter by IPs and list deleted instances, they could still do that client side if necessary  01:12
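mriedem's fallback above — use neutron's server-side regex filtering when it is available, otherwise filter on the client — can be sketched roughly as follows. This is purely illustrative: the `supports_ip_regex` flag, the client object, and the server dict shape are hypothetical stand-ins, not nova's actual network API.

```python
import re

def list_servers_by_ip(client, ip_regex):
    """Filter servers by IP regex, server-side when the backend supports it."""
    if client.supports_ip_regex:
        # Newer neutron: let the server apply the regex filter (faster).
        return client.list_servers(ip_filter=ip_regex)
    # Older neutron: fetch everything and filter client side, much as a
    # caller who also needs deleted instances could do for themselves.
    pattern = re.compile(ip_regex)
    return [s for s in client.list_servers()
            if any(pattern.search(ip) for ip in s["addresses"])]
```

The point of the pattern is that the API behavior stays the same either way; only the place where the filtering happens changes with the backend's capability.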
*** andreas_s has quit IRC01:13
*** crushil has quit IRC01:19
*** mriedem has quit IRC01:20
*** AlexeyAbashkin has joined #openstack-nova01:23
*** smatzek has quit IRC01:25
*** smatzek has joined #openstack-nova01:26
*** yamahata has quit IRC01:27
*** andreas_s has joined #openstack-nova01:27
*** AlexeyAbashkin has quit IRC01:27
*** gjayavelu has quit IRC01:28
*** ijw has quit IRC01:30
*** suresh12 has quit IRC01:33
*** suresh12 has joined #openstack-nova01:42
*** yikun has quit IRC01:45
*** suresh12 has quit IRC01:46
*** yangyapeng has joined #openstack-nova01:48
*** salv-orlando has joined #openstack-nova01:48
*** yangyape_ has joined #openstack-nova01:49
*** andreas_s has quit IRC01:50
*** smatzek has quit IRC01:50
*** yangyapeng has quit IRC01:52
*** salv-orlando has quit IRC01:53
*** hongbin has joined #openstack-nova01:54
*** yamamoto has joined #openstack-nova01:56
*** andreas_s has joined #openstack-nova01:59
*** abhi89 has quit IRC02:00
<openstackgerrit> jichenjc proposed openstack/nova master: Refactor placement version check
<openstackgerrit> jichenjc proposed openstack/nova master: mv generate_glance_url to get_image_endpoint_url
<openstackgerrit> jichenjc proposed openstack/nova master: Remove duplicate error info
*** thorst has joined #openstack-nova02:03
*** takashin has quit IRC02:04
*** andreas_s has quit IRC02:07
*** andreas_s has joined #openstack-nova02:08
*** thorst has quit IRC02:08
*** hemna_ has quit IRC02:09
*** huanxie has quit IRC02:09
*** huanxie has joined #openstack-nova02:10
*** suresh12 has joined #openstack-nova02:10
<openstackgerrit> jichenjc proposed openstack/nova master: Add description for resource class creation
*** suresh12 has quit IRC02:15
*** gjayavelu has joined #openstack-nova02:15
*** andreas_s has quit IRC02:17
*** vladikr has joined #openstack-nova02:17
*** takashin has joined #openstack-nova02:17
*** Shunli has joined #openstack-nova02:18
*** andreas_s has joined #openstack-nova02:22
*** AlexeyAbashkin has joined #openstack-nova02:23
*** vladikr has quit IRC02:27
*** AlexeyAbashkin has quit IRC02:27
*** vladikr has joined #openstack-nova02:27
<openstackgerrit> Huan Xie proposed openstack/nova master: VGPU: Define vgpu resource class
*** pramodrj07 has quit IRC02:29
*** MasterOfBugs has quit IRC02:29
*** andreas_s has quit IRC02:30
<openstackgerrit> Hironori Shiina proposed openstack/nova-specs master: Ironic: Resize and cold migration support
*** gouthamr has joined #openstack-nova02:32
*** huanxie has quit IRC02:42
*** gouthamr has quit IRC02:42
*** gouthamr has joined #openstack-nova02:43
*** dave-mccowan has quit IRC02:44
<openstackgerrit> Huan Xie proposed openstack/nova master: VGPU: Define vgpu resource class
*** hieulq has quit IRC02:47
*** TuanLA has quit IRC02:47
*** yamahata has joined #openstack-nova02:47
<openstackgerrit> jichenjc proposed openstack/nova master: fix race condition of instance host
*** hieulq has joined #openstack-nova02:48
*** TuanLA has joined #openstack-nova02:48
*** dave-mccowan has joined #openstack-nova02:48
*** salv-orlando has joined #openstack-nova02:50
<openstackgerrit> Yikun Jiang proposed openstack/nova-specs master: Add pagination and timestamp filtering support for os-migrations API
*** nicolasbock has quit IRC02:53
*** salv-orlando has quit IRC02:54
*** tbachman has quit IRC02:55
<openstackgerrit> jichenjc proposed openstack/nova master: Add create inventories doc for placement
<openstackgerrit> jichenjc proposed openstack/nova master: check query param for server groups function
*** TuanLA has quit IRC02:56
*** hieulq has quit IRC02:56
*** TuanLA has joined #openstack-nova02:57
*** tbachman has joined #openstack-nova02:57
*** hieulq has joined #openstack-nova02:57
*** phuongnh has quit IRC03:01
*** andreas_s has joined #openstack-nova03:02
*** phuongnh has joined #openstack-nova03:03
*** thorst has joined #openstack-nova03:04
*** vladikr has quit IRC03:07
*** vladikr has joined #openstack-nova03:07
*** thorst has quit IRC03:09
*** andreas_s has quit IRC03:11
*** andreas_s has joined #openstack-nova03:12
*** jmlowe has joined #openstack-nova03:12
*** gjayavelu has quit IRC03:17
*** andreas_s has quit IRC03:20
*** openstackgerrit has quit IRC03:22
*** AlexeyAbashkin has joined #openstack-nova03:23
*** AlexeyAbashkin has quit IRC03:27
*** huanxie has joined #openstack-nova03:27
*** liusheng has quit IRC03:28
*** liusheng has joined #openstack-nova03:29
*** pratapagoutham has quit IRC03:31
*** shewless has quit IRC03:31
*** ijw has joined #openstack-nova03:31
*** mdnadeem has joined #openstack-nova03:36
*** yamamoto_ has joined #openstack-nova03:39
*** hongbin has quit IRC03:40
*** andreas_s has joined #openstack-nova03:41
*** yamamoto has quit IRC03:42
*** andreas_s has quit IRC03:45
*** links has joined #openstack-nova03:45
*** salv-orlando has joined #openstack-nova03:50
*** smatzek has joined #openstack-nova03:51
*** salv-orlando has quit IRC03:55
*** smatzek has quit IRC03:55
*** huanxie has quit IRC03:58
*** gongysh has joined #openstack-nova04:02
*** Jeffrey4l has quit IRC04:02
*** Jeffrey4l has joined #openstack-nova04:03
*** chyka has joined #openstack-nova04:06
*** udesale has joined #openstack-nova04:09
*** chyka has quit IRC04:10
*** claudiub has joined #openstack-nova04:11
*** udesale has quit IRC04:14
*** udesale has joined #openstack-nova04:15
*** hshiina has joined #openstack-nova04:23
*** diga has joined #openstack-nova04:25
*** abhi89 has joined #openstack-nova04:25
*** huanxie has joined #openstack-nova04:26
*** vladikr has quit IRC04:28
*** vladikr has joined #openstack-nova04:29
*** trinaths has joined #openstack-nova04:36
*** vks1 has joined #openstack-nova04:39
*** zsli_ has joined #openstack-nova04:47
*** vvargaszte has joined #openstack-nova04:47
*** dave-mccowan has quit IRC04:47
*** ijw has quit IRC04:48
*** Shunli has quit IRC04:50
*** vks1 has quit IRC04:51
*** salv-orlando has joined #openstack-nova04:51
*** vks1 has joined #openstack-nova04:53
*** openstackgerrit has joined #openstack-nova04:54
<openstackgerrit> Ian Wienand proposed openstack/nova master: [DNM] testing nova changes
*** salv-orlando has quit IRC04:57
*** suresh12 has joined #openstack-nova04:57
*** avolkov`` has joined #openstack-nova04:59
*** avolkov` has quit IRC05:00
*** gjayavelu has joined #openstack-nova05:01
*** huanxie has quit IRC05:05
*** thorst has joined #openstack-nova05:05
*** huanxie has joined #openstack-nova05:07
*** thorst has quit IRC05:10
*** josecastroleon has quit IRC05:10
*** suresh12 has quit IRC05:11
*** josecastroleon has joined #openstack-nova05:11
*** suresh12 has joined #openstack-nova05:11
*** suresh12 has quit IRC05:11
<openstackgerrit> Naichuan Sun proposed openstack/nova master: VGPU_support: add enabled white list
*** suresh12 has joined #openstack-nova05:12
*** salv-orlando has joined #openstack-nova05:15
<openstackgerrit> Naichuan Sun proposed openstack/nova master: VGPU_support: add enabled white list
*** cheneydc has joined #openstack-nova05:18
*** lajoskatona has joined #openstack-nova05:23
*** jmlowe has quit IRC05:28
*** gongysh has quit IRC05:31
*** gouthamr has quit IRC05:34
*** markvoelker has joined #openstack-nova05:35
*** trinaths has left #openstack-nova05:37
*** markvoelker_ has joined #openstack-nova05:37
*** markvoelker has quit IRC05:39
*** markvoelker_ has quit IRC05:46
*** markvoelker has joined #openstack-nova05:47
*** chyka has joined #openstack-nova05:48
*** markvoelker has quit IRC05:50
*** markvoelker has joined #openstack-nova05:50
*** markvoelker has quit IRC05:51
*** chyka has quit IRC05:53
*** huanxie has quit IRC05:53
*** edand has joined #openstack-nova05:56
*** Tom has joined #openstack-nova05:56
<openstackgerrit> Takashi NATSUME proposed openstack/nova master: Fix 500 error while passing 4-byte unicode data
*** suresh12 has quit IRC06:00
*** Oku_OS-away is now known as Oku_OS06:06
*** TuanLA has quit IRC06:11
*** andreas_s has joined #openstack-nova06:14
*** xinliang has quit IRC06:17
*** Tom has quit IRC06:19
*** sahid has joined #openstack-nova06:26
*** xinliang has joined #openstack-nova06:26
*** sahid has quit IRC06:27
*** sahid has joined #openstack-nova06:27
*** liusheng has quit IRC06:30
*** liusheng has joined #openstack-nova06:31
*** sridharg has joined #openstack-nova06:31
*** cfriesen has quit IRC06:32
*** salv-orlando has quit IRC06:33
*** TuanLA has joined #openstack-nova06:33
*** trungnv has joined #openstack-nova06:34
*** salv-orlando has joined #openstack-nova06:35
*** Tom has joined #openstack-nova06:39
*** belmoreira has joined #openstack-nova06:40
<openstackgerrit> jichenjc proposed openstack/nova master: Refactor placement version check
*** gjayavelu has quit IRC06:42
*** Tom has quit IRC06:42
*** Tom has joined #openstack-nova06:42
<openstackgerrit> Yikun Jiang proposed openstack/nova-specs master: Add pagination and changes since filter support for os-instance-action API
*** pcaruana has joined #openstack-nova06:45
*** markvoelker has joined #openstack-nova06:45
*** armax has quit IRC06:45
*** thorst has joined #openstack-nova06:58
*** hshiina has quit IRC06:58
<openstackgerrit> Naichuan Sun proposed openstack/nova master: VGPU_support: add enabled white list
*** josecastroleon has quit IRC06:59
*** chyka has joined #openstack-nova07:00
*** chyka has quit IRC07:00
*** josecastroleon has joined #openstack-nova07:01
*** thorst has quit IRC07:03
*** suresh12 has joined #openstack-nova07:04
*** chyka has joined #openstack-nova07:07
*** suresh12 has quit IRC07:09
*** chyka has quit IRC07:11
*** sshwarts has joined #openstack-nova07:12
*** cheneydc_ has joined #openstack-nova07:14
*** cheneydc has quit IRC07:14
*** tesseract has joined #openstack-nova07:17
*** avolkov`` has quit IRC07:21
*** avolkov`` has joined #openstack-nova07:21
*** belmoreira has quit IRC07:26
*** markus_z has joined #openstack-nova07:31
*** hoonetorg has joined #openstack-nova07:32
*** kuzko has quit IRC07:32
*** ttsiouts has quit IRC07:33
*** ttsiouts has joined #openstack-nova07:33
*** gcb has quit IRC07:34
*** wasmum has joined #openstack-nova07:34
*** kuzko has joined #openstack-nova07:35
*** yingjun has quit IRC07:36
*** vvargaszte has quit IRC07:39
*** Yingxin has quit IRC07:39
*** rodolof has joined #openstack-nova07:40
*** Yingxin has joined #openstack-nova07:40
*** mikal has quit IRC07:40
*** ragiman has joined #openstack-nova07:40
*** mikal has joined #openstack-nova07:41
*** jpena|off is now known as jpena07:42
*** huanxie has joined #openstack-nova07:48
*** gjayavelu has joined #openstack-nova07:51
* lyarwood is finally back online, catching up with now  07:53
*** gjayavelu has quit IRC07:54
*** udesale has quit IRC07:55
*** AlexeyAbashkin has joined #openstack-nova07:57
*** edand has quit IRC08:04
*** edand has joined #openstack-nova08:04
<gibi> efried: I left one comment in
*** kashyap has quit IRC08:07
*** lucas-afk has quit IRC08:07
*** weshay|ruck has quit IRC08:07
*** jpena has quit IRC08:07
*** ansiwen has quit IRC08:08
*** mdbooth has quit IRC08:08
*** jpena has joined #openstack-nova08:08
*** kashyap has joined #openstack-nova08:09
*** weshay has joined #openstack-nova08:09
*** lucasagomes has joined #openstack-nova08:09
*** ansiwen has joined #openstack-nova08:10
*** mdbooth has joined #openstack-nova08:12
*** yassine has quit IRC08:15
*** flanders_ has joined #openstack-nova08:21
*** slaweq has joined #openstack-nova08:23
*** ralonsoh has joined #openstack-nova08:23
*** Qiming_ has joined #openstack-nova08:25
<gibi> a pretty trivial patch for already awake cores:
*** ralonsoh_ has joined #openstack-nova08:26
<openstackgerrit> Jianghua Wang proposed openstack/nova master: vGPU: XenAPI: get vgpu stats from hypervisor
*** clayton_ has joined #openstack-nova08:27
*** john51 has quit IRC08:28
*** ujjain has quit IRC08:28
*** clayton has quit IRC08:28
*** Qiming has quit IRC08:28
*** john51 has joined #openstack-nova08:28
*** clayton_ is now known as clayton08:28
*** ralonsoh has quit IRC08:29
*** ujjain has joined #openstack-nova08:30
*** ujjain has quit IRC08:30
*** ujjain has joined #openstack-nova08:30
*** fragatina has joined #openstack-nova08:30
*** yassine has joined #openstack-nova08:30
*** mnaser has quit IRC08:30
*** fragatina has quit IRC08:31
*** fragatina has joined #openstack-nova08:31
*** mnaser has joined #openstack-nova08:35
*** yamamoto_ has quit IRC08:37
*** yamamoto has joined #openstack-nova08:40
*** yamamoto has quit IRC08:40
*** dtantsur|afk is now known as dtantsur08:40
*** diga has quit IRC08:41
*** chyka has joined #openstack-nova08:44
<openstackgerrit> jichenjc proposed openstack/nova master: Refactor placement version check
*** derekh has joined #openstack-nova08:45
*** chyka has quit IRC08:48
*** ralonsoh_ is now known as ralonsoh08:48
<openstackgerrit> Naichuan Sun proposed openstack/nova master: VGPU_support: add enabled white list
*** david-lyle has quit IRC08:53
*** karthiks has joined #openstack-nova08:55
*** yamahata has quit IRC08:55
<openstackgerrit> Naichuan Sun proposed openstack/nova master: VGPU_support: add enabled white list
*** thorst has joined #openstack-nova08:59
*** ociuhandu has quit IRC08:59
*** thorst has quit IRC09:04
*** yamamoto has joined #openstack-nova09:07
*** seba has quit IRC09:10
*** yamamoto has quit IRC09:11
*** sridharg is now known as sridharg|afk09:14
*** yamamoto has joined #openstack-nova09:17
*** Tom has quit IRC09:18
*** yamamoto has quit IRC09:21
*** Tom____ has joined #openstack-nova09:24
*** jled_ has joined #openstack-nova09:24
*** gcb has joined #openstack-nova09:25
*** yamamoto has joined #openstack-nova09:28
*** jled_ has quit IRC09:28
*** Tom____ has quit IRC09:28
*** zsli_ has quit IRC09:30
<stephenfin> sean-k-mooney, ralonsoh: Sean brought this up at the PTG, but what bus do the on-die FPGAs on recent Xeon chips hook into? QPI?  09:32
<ralonsoh> stephenfin: that's what I can give you now:
<ralonsoh> stephenfin: take a look at slide 15. As you said, it's QPI  09:34
*** yamamoto_ has joined #openstack-nova09:34
<stephenfin> ralonsoh: Just what I wanted. Thanks!  09:35
*** diga has joined #openstack-nova09:35
*** yamamoto has quit IRC09:37
*** edand has quit IRC09:43
<openstackgerrit> Merged openstack/nova stable/pike: Target context for build notification in conductor
*** sapd__ has quit IRC09:45
*** sapd_ has joined #openstack-nova09:45
*** cdent has joined #openstack-nova09:45
*** ociuhandu has joined #openstack-nova09:47
*** sapd_ has quit IRC09:47
<openstackgerrit> Stephen Finucane proposed openstack/nova-specs master: PCI NUMA Policies
*** sapd_ has joined #openstack-nova09:48
*** AlexeyAbashkin has quit IRC09:49
*** mvk has quit IRC09:50
<openstackgerrit> Dinesh Bhor proposed openstack/nova-specs master: Schedule VM's on default host aggregate
*** yamamoto_ has quit IRC09:56
*** andreas_s has quit IRC09:57
*** andreas_s has joined #openstack-nova09:58
*** cdent has quit IRC10:03
*** gmann is now known as gmann_afk10:07
<openstackgerrit> Jianghua Wang proposed openstack/nova master: vGPU: XenAPI: get vgpu stats from hypervisor
*** andreas_s has quit IRC10:07
*** yangyape_ has quit IRC10:09
*** cheneydc_ has quit IRC10:09
*** andreas_s has joined #openstack-nova10:12
*** diga has quit IRC10:12
*** baoli has joined #openstack-nova10:14
*** takashin has left #openstack-nova10:18
*** baoli has quit IRC10:19
*** edand has joined #openstack-nova10:21
*** andreas_s has quit IRC10:21
<openstackgerrit> Michael Still proposed openstack/nova master: Cleanup mount / umount and associated rmdir calls
<openstackgerrit> Michael Still proposed openstack/nova master: Move lvm handling to privsep.
<openstackgerrit> Michael Still proposed openstack/nova master: Move shred to privsep.
<openstackgerrit> Michael Still proposed openstack/nova master: Move xend existence probes to privsep.
<openstackgerrit> Michael Still proposed openstack/nova master: Move the idmapshift binary into privsep.
<openstackgerrit> Michael Still proposed openstack/nova master: Move loopback setup and removal to privsep.
<openstackgerrit> Michael Still proposed openstack/nova master: Move nbd commands to privsep.
<openstackgerrit> Michael Still proposed openstack/nova master: Move kpartx calls to privsep.
<openstackgerrit> Michael Still proposed openstack/nova master: Move blkid calls to privsep.
*** TuanLA has quit IRC10:24
*** yamamoto has joined #openstack-nova10:26
*** andreas_s has joined #openstack-nova10:27
*** sdague has joined #openstack-nova10:28
*** flanders_ has quit IRC10:31
*** andreas_s has quit IRC10:36
*** gszasz has joined #openstack-nova10:36
*** edand has quit IRC10:37
*** andreas_s has joined #openstack-nova10:41
*** andreas_s has quit IRC10:42
*** andreas_s has joined #openstack-nova10:43
*** edand has joined #openstack-nova10:51
<openstackgerrit> Merged openstack/nova master: Remove unnecessary BDM destroy during instance delete
*** AlexeyAbashkin has joined #openstack-nova10:55
<openstackgerrit> Merged openstack/nova master: api-ref: fix server status values in GET /servers docs
<openstackgerrit> Merged openstack/nova master: Update "SHUTOFF" description in API guide
<openstackgerrit> Merged openstack/nova master: Fix binary name
<openstackgerrit> Merged openstack/nova master: Make etree.tostring() emit unicode everywhere
<openstackgerrit> Merged openstack/nova master: Modernize set_vm_state_and_notify
<openstackgerrit> John Garbutt proposed openstack/nova master: Only add CUSTOM_ prefix if required
<openstackgerrit> John Garbutt proposed openstack/nova master: Only add CUSTOM_ prefix if required
<openstackgerrit> John Garbutt proposed openstack/nova master: Only add CUSTOM_ prefix if required
*** thorst has joined #openstack-nova11:00
*** thorst has quit IRC11:05
*** chyka has joined #openstack-nova11:08
<openstackgerrit> Naichuan Sun proposed openstack/nova master: VGPU_support: add enabled white list
*** zzzeek has quit IRC11:10
*** chyka has quit IRC11:12
*** huanxie has quit IRC11:13
*** zzzeek has joined #openstack-nova11:14
*** vks1 has quit IRC11:17
*** smatzek has joined #openstack-nova11:19
<openstackgerrit> jichenjc proposed openstack/nova master: Refactor placement version check
*** phuongnh has quit IRC11:23
<sean-k-mooney> stephenfin: ralonsoh: technically PCIe and QPI pre-Skylake; UPI + PCIe from Skylake on  11:26
*** vladikr has quit IRC11:27
<sean-k-mooney> stephenfin: ralonsoh: UPI is the replacement for QPI  11:27
*** vladikr has joined #openstack-nova11:27
<efried> gibi Ack, thanks  11:30
*** dave-mccowan has joined #openstack-nova11:31
*** janki has joined #openstack-nova11:33
*** dave-mcc_ has joined #openstack-nova11:35
*** rodolof has quit IRC11:36
<janki> Hi. Is there a way to know which microversion of the API is used in Pike? According to , it's 2.53 but API calls show /compute/v2.1/..  11:36
*** dave-mccowan has quit IRC11:37
*** lucasagomes is now known as lucas-hungry11:38
*** acormier has joined #openstack-nova11:39
*** nicolasbock has joined #openstack-nova11:40
*** liverpooler has quit IRC11:43
<openstackgerrit> Eric Fried proposed openstack/nova master: Send Allocations to spawn
<efried> gibi ^ Thanks for the find.  11:44
*** acormier has quit IRC11:44
<efried> janki The microversion is in an HTTP header  11:44
*** acormier has joined #openstack-nova11:44
<efried> Not the endpoint URL  11:44
*** gcb has quit IRC11:45
<gibi> efried: thanks. I'm +2 now  11:45
<efried> gibi Thanks!  11:45
*** yangyapeng has joined #openstack-nova11:45
*** huanxie has joined #openstack-nova11:46
<efried> janki In what context are you wanting to know?  The nova CLI?  The openstack CLI?  Some external API call?  11:46
*** gcb has joined #openstack-nova11:47
<janki> efried, I am running tempest with ODL + openstack Pike and this test fails with 404 POST
<efried> janki Do you have a link to the error trace?  11:49
<janki> efried, the nova doc says this API is deprecated from microversion 2.36 and will return 404. Pike uses microversion 2.53. How do I know which microversion is used?  11:50
<janki> efried, is the traceback  11:50
*** belmoreira has joined #openstack-nova11:50
<janki> efried, please expand the first test which says "fail"  11:51
<rgerganov> efried, could you please answer my last question at
*** thorst has joined #openstack-nova11:52
<efried> janki You're looking for an HTTP header like: "OpenStack-API-Version: compute 2.53".  If it's not there, I *think* we default to the earliest in the API.  11:52
<efried> rgerganov Looking...  11:52
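The check efried describes can be done mechanically: the microversion rides in a response header, not in the endpoint URL. A minimal sketch of reading it (the helper function is mine, not nova's; servers may also send the older `X-OpenStack-Nova-API-Version` header, which this sketch ignores):

```python
DEFAULT = "2.1"  # what a v2.1 endpoint reports when no microversion is requested

def negotiated_microversion(headers):
    """Extract the compute microversion from response headers.

    Looks for 'OpenStack-API-Version: compute 2.53'-style values and
    falls back to the v2.1 default when the header is missing.
    """
    value = headers.get("OpenStack-API-Version", "")
    parts = value.split()
    if len(parts) == 2 and parts[0] == "compute":
        return parts[1]
    return DEFAULT
```

So a log line showing no such header on the failing call is consistent with the request having run at the 2.1 default, as discussed below.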
*** litao__ has quit IRC11:53
<janki> efried, so the API version is 2.1 as seen in
<efried> janki Is it possible that the floating IP pool actually isn't there? :)  11:55
<janki> efried, that's my next step to debug. I first wanted to confirm the API version. I will need tempest.conf for it and am searching for that file. Or is there any other way I can verify?  11:56
<efried> janki It's not something you can set in the config file.  The microversion is per API call.  Looking at the log...  11:57
<efried> janki It looks like that header is being returned by nova (so it's not being set by the caller) only for one call in that log, and it's not the call you're concerned about because it's a 200.  11:59
*** yamamoto has quit IRC11:59
<janki> efried, all jenkins build logs are at
<efried> rgerganov Responded, sorry I missed that the first time.  I would have seen it eventually :)  12:01
*** dtantsur is now known as dtantsur|brb12:01
<efried> rgerganov Let me know if you want to talk it through some more.  There's a lot TBD about how drivers will model their RPs, and we especially haven't talked a lot about IP addresses as generic resources.  12:02
<efried> rgerganov At least *I* haven't.  sean-k-mooney might know more on that front.  12:02
*** acormier has quit IRC12:03
<sean-k-mooney> i heard my name :)  12:03
<sean-k-mooney> IP address as a generic resource?  12:04
<sean-k-mooney> as in like routed networks today, or something different?  12:04
<janki> efried, I don't find any API call before running this test. Any specific file I should be looking into?  12:04
*** eharney has joined #openstack-nova12:05
<efried> janki I was poking around for your n-api log, haven't found it yet.  Was skimming the n-cpu log, but kinda doubt I'm going to find anything useful in there.  12:05
<efried> sean-k-mooney Greetings.  rgerganov was asking about how would play with IP addresses.  12:05
<rgerganov> efried, thanks for the response, it seems that our longstanding problem with VMW shared datastores won't get fixed any time soon :)  12:05
*** jpena is now known as jpena|lunch12:06
<efried> rgerganov Tell me more about the problem, if you have time.  Perhaps there's a nice placement-y solution...  12:06
<sean-k-mooney> rgerganov: i think the intent is that if you need to boot a VM on a specific NFS share, you will associate the boot request with the NFS share by modeling the share as a shared resource provider and using an aggregate to associate the hosts that can connect to that share  12:06
*** mvk has joined #openstack-nova12:07
<efried> sean-k-mooney ++  12:07
<efried> But note that shared RPs are not in scope for Queens.  12:07
<efried> Though there's some code already in place that's gonna have to be trodden carefully around.  12:07
<sean-k-mooney> ya, that's the rub. rgerganov: shared resource providers is the feature you are looking for to correctly model VMW shared datastores  12:08
<rgerganov> even if it is a shared RP, we still need to know how to connect to it in order to use it, right  12:08
<rgerganov> there should be some way for virt drivers to obtain connection info  12:09
<sean-k-mooney> rgerganov: yes, that info can be stored in the nova-compute agent's config  12:09
<rgerganov> sean-k-mooney, is that some kind of static data?  12:09
*** Dinesh_Bhor has quit IRC12:09
<sean-k-mooney> rgerganov: well, in the case of the storage backend for the specific compute node, yes, it's static  12:10
<sean-k-mooney> for libvirt, for example, by default we use the filesystem under /var/lib/libvirt, but I can configure it to use an NFS share or RBD to Ceph by default instead  12:11
<rgerganov> sean-k-mooney, I don't think this will work well for the vmware driver  12:12
<rgerganov> it's hard to believe that after we added so much complexity with placement, RPs, allocations, etc. we can't solve a simple use case with shared datastores  12:13
<efried> janki Search for 'Floating IP' - confirms that the call is using microversion 2.1.  There's a policy check failure a few lines earlier, not sure if that's related.  12:13
<sean-k-mooney> rgerganov: well, we could, but we don't have the shared resource providers code merged yet  12:14
<janki> efried, ya. didn't find anything useful in the cpu logs  12:14
<efried> rgerganov Is there some kind of API endpoint that can be queried to get the details on the NFS store?  12:14
<janki> efried, ya. policies are one of the reasons for a 404 as per the api doc  12:14
<efried> rgerganov That would be another way to do it.  12:14
<sean-k-mooney> if we did, we could store info such as the ip address as a trait on the resource provider  12:14
<efried> janki Oh, then maybe that's the whole problem.  12:14
<efried> sean-k-mooney Just so.  12:14
<efried> sean-k-mooney Though I'm not sure that's a great idea.  More likely you would want to maintain a mapping of RP UUIDs to configs.  12:15
<sean-k-mooney> efried: ya, the downside to that is it is a public api...  12:15
<rgerganov> efried, yeah, this is what I am calling "connect info" for a resource provider  12:16
<sean-k-mooney> at least i think it is. placement is now admin only, correct?  12:16
<rgerganov> maybe the right term is just "config", idk  12:16
<efried> A trait like CUSTOM_NFS_SHARE_IP_192_168_0_55 is a pretty hideous thought.  12:16
*** huanxie has quit IRC12:16
<sean-k-mooney> efried: ya, at least that was not an ipv6 address ...  12:17
<efried> Hah, totally  12:17
*** sridharg|afk is now known as sridharg12:17
<efried> sean-k-mooney rgerganov As with PCI devices, we would rather maintain the specifics (like the PCI address) external to placement, and have the driver responsible for mapping RP UUIDs to whatever entry is in that external data store.  12:17
<janki> efried, POST /compute/v2.1/os-floating-ips => generated 73 bytes in 180 msecs (HTTP/1.1 404) 9 headers in 378 bytes (1 switches on core 0) - "generated" in this line means the HTTP package is generated, and not that a FIP is generated, right?  12:18
<efried> janki I don't know what FIP is.  I think that's just saying the response payload was 73 bytes.  12:19
<rgerganov> efried, the problem with this is that it may go out of sync; why not store this info in placement itself?  12:19
<efried> rgerganov How does it go out of sync?  You would have to keep it up to date in placement somehow too, nah?  12:19
<janki> efried, FIP = Floating IP. sorry for the short form  12:19
<efried> janki Oh, yeah, it's talking about the response payload.  The API message knows nothing about the internals of what you're doing with the call.  12:20
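The division of labor efried suggests above — placement names *which* provider was allocated, while the driver privately knows *how* to reach it — could look something like this minimal sketch. The class name and data shapes are invented for illustration; nothing like this exists in nova as such.

```python
# Illustrative only: a driver-side registry mapping resource provider
# UUIDs to out-of-band connection info, so placement never has to store
# datastore IPs or credentials as traits or provider fields.
class ProviderConfigMap:
    def __init__(self):
        self._by_uuid = {}

    def register(self, rp_uuid, connect_info):
        """Record how to reach the backend behind a resource provider."""
        self._by_uuid[rp_uuid] = connect_info

    def connect_info(self, rp_uuid):
        # Placement's allocation tells us which provider was chosen;
        # the driver looks up the connection details locally.
        return self._by_uuid[rp_uuid]
```

The trade-off, as the discussion notes, is that this local mapping can drift from placement's view, whereas storing it in placement would expose operational details through a public (if admin-only) API.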
*** edmondsw has joined #openstack-nova12:21
<janki> efried, ack. "Policy check for os_compute_api:os-extended-server-attributes failed with credentials" does not mean that os-floating-ip has policy issues, and I could not find anything in the log which suggests that  12:22
<rgerganov> efried, ok, maybe keeping them in sync is not a real problem; for me it just feels natural for the config info to be associated with the RP and be available through the placement api  12:22
<sean-k-mooney> im sure we can come up with a clean way to store config info for a resource provider, such as adding a new field to the rp itself or reusing the description field  12:22
<rgerganov> sean-k-mooney, +1  12:23
<sean-k-mooney> that config section, though, should be admin only  12:23
<janki> efried, but again there is no os-extended-server-attributes API  12:23
<sean-k-mooney> normal users never need to see it, only openstack services like the virt driver  12:23
<efried> janki I really don't understand the test case, but if the policy check failed, is it possible it didn't even get to the API you're concerned about?  12:24
<sean-k-mooney> janki: that api is being moved into the normal server response, i believe  12:24
<efried> janki The API call in question, if it needed to use a specific microversion, would presumably be coded up to do that.  12:25
<janki> efried, I don't think it needed a specific microversion. I was just checking if that is the reason for failure  12:25
<efried> janki Okay, well, it sounds like the policy is the first thing to sort out.  12:26
*** salv-orlando has quit IRC12:26
<sean-k-mooney> janki: this might be of interest to you
<sean-k-mooney> you will see on line 75 that the proposal is to add the extended attributes to the get server response  12:28
*** salv-orlando has joined #openstack-nova12:28
<janki> sean-k-mooney, ya. looks like os-extended-server-attributes is a collection of REST APIs exposed by Nova. right?  12:29
*** dave-mcc_ is now known as dave-mccowan12:29
<sean-k-mooney> janki: it was technically a bunch of rest apis exposed by a nova api extension  12:30
*** huanxie has joined #openstack-nova12:30
*** lucas-hungry is now known as lucasagomes12:30
<sean-k-mooney> janki: as of pike we no longer support extending the api like that, but the existing extensions were more or less kept  12:31
<sean-k-mooney> janki: i think the useful ones will be adopted into the main api and the unused ones will be dropped. i would expect the extended server attributes to be adopted into the server resource  12:31
<janki> sean-k-mooney, ohk. so I am getting 404 on os-floating-ips and a policy error for os-extended-server-attributes. could this be related?  12:32
*** MVenesio has joined #openstack-nova12:32
sean-k-mooneyjanki: if you query with admin privileges does the same happen?12:32
sean-k-mooneyjanki: the reason for the cleanup spec is that there are two sets of policies applied to the extensions12:33
sean-k-mooneythe main API policies and the extension-level policies that we are looking to remove12:33
jankisean-k-mooney, I don't have access to the setup; it's a Jenkins build. But I think the answer is no. there is one more variable in policy.json, "os_compute_api:os-floating-ips": "rule:admin_or_owner", so they are not related12:34
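For reference, the two entries being compared would look something like this in a Pike-era policy.json (the rules shown are the upstream defaults as best I recall; treat this as a sketch, not the build's actual file):

```json
{
    "os_compute_api:os-floating-ips": "rule:admin_or_owner",
    "os_compute_api:os-extended-server-attributes": "rule:admin_api"
}
```

With those defaults, os-extended-server-attributes is admin-only while os-floating-ips is open to the owner, which would explain seeing a policy failure for one and not the other.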
*** liverpooler has joined #openstack-nova12:34
sean-k-mooneydumb question: i'm assuming the setup is using neutron. i'm not sure how the extensions were enabled in the past, but if it was using nova-network you would have no floating-ips12:36
alex_xujanki: it looks like the floating pool isn't there12:37
jankialex_xu, yes. because the API that creates floating ip is failing12:38
*** jhesketh_ has joined #openstack-nova12:38
alex_xujanki: ok, great, you already know that, I didn't read the full chat log yet :)12:39
jankialex_xu, I am trying to figure out the reason for the API to fail. I checked, with the help of efried, that the microversion used is 2.1 (< 2.36).12:40
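Janki's microversion check can be illustrated with a toy comparison: the nova network proxy APIs (os-floating-ips among them) only started returning 404 at microversion 2.36, so a 2.1 request cannot be 404ing for that reason. The helper names here are ours, not nova's.

```python
# Toy sketch of the microversion reasoning above (helper names hypothetical).

def parse_microversion(ver):
    """Turn a string like '2.36' into a comparable (2, 36) tuple."""
    major, minor = ver.split(".")
    return (int(major), int(minor))

def floating_ips_api_available(requested):
    """os-floating-ips was only removed at microversions >= 2.36."""
    return parse_microversion(requested) < parse_microversion("2.36")

print(floating_ips_api_available("2.1"))   # True: 2.1 < 2.36, API still exposed
print(floating_ips_api_available("2.36"))  # False: proxy APIs removed
```

Note the tuple comparison: a naive string compare would wrongly put "2.4" after "2.36".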
jankialex_xu, also there is a policy check failure for os-extended-server-attributes in the log but no such failure for os-floating-ips12:42
*** jhesketh has quit IRC12:43
*** swamireddy has quit IRC12:43
*** links has quit IRC12:44
alex_xujanki: they sound unrelated12:44
jankialex_xu, Ya. initially I thought they were related, but looking at the sample policy file, I now think the same (that they are not)12:45
*** clayton has quit IRC12:45
*** ujjain has quit IRC12:46
openstackgerritJan Zerebecki proposed openstack/nova master: Only log not correcting allocation once per period
*** clayton has joined #openstack-nova12:49
*** gjayavelu has joined #openstack-nova12:49
openstackgerritJan Zerebecki proposed openstack/nova master: Only log not correcting allocation once per period
*** ujjain has joined #openstack-nova12:51
*** tssurya has joined #openstack-nova12:51
*** ujjain has quit IRC12:51
*** ujjain has joined #openstack-nova12:51
*** gmann_afk is now known as gmann12:52
*** takashin has joined #openstack-nova12:55
alex_xujanki: looks like you needn't worry about that policy check failure; the log will be emitted each time the user doesn't have permission12:55
alex_xuand that log looks annoying12:55
*** cdent has joined #openstack-nova12:55
jankialex_xu, well then, the next step is to look into tempest.conf12:56
*** rodolof has joined #openstack-nova12:56
*** lyan has joined #openstack-nova12:56
*** mriedem has joined #openstack-nova12:59
*** yamamoto has joined #openstack-nova12:59
alex_xunova api meeting is running at #openstack-meeting-413:00
*** huanxie has quit IRC13:01
mriedemsdague: johnthetubaguy: can you take a look at this ocata-only fix? turns out we backported something awhile ago that requires some special handling based on the version of libvirt you're running13:01
*** vdrok has quit IRC13:02
*** tommylikehu has quit IRC13:02
*** vdrok has joined #openstack-nova13:02
*** tommylikehu has joined #openstack-nova13:03
gmannjanki: what is the error actually? i can help from the tempest.conf side13:03
*** jpena|lunch is now known as jpena13:03
*** belmoreira has quit IRC13:04
jankigmann, so I am running tempest for an ODL + Pike setup and it is failing with error
stephenfinmriedem: Any suggestions on who else I can ask to review in jaypipes' absence?13:07
*** yamamoto has quit IRC13:07
jankigmann, as per, I verified that the microversion is 2.1 and there are no logs about policy failure for os-floating-ips13:08
mriedemstephenfin: i think that dansmith guy is pretty smart13:08
jankigmann, next is to check tempest.conf for proper flag settings13:09
gmannjanki: its not policy related. what FIP pool did you configure on nova?13:10
jankigmann, I did not manually configure anything. These logs are from a Jenkins build. Which file would those be in? I can dig that file up13:11
stephenfinmriedem: I agree: dansmith is a super smart guy who'd surely love to review
gmannjanki: sure, in api meeting; will ping/debug after that13:11
*** jaypipes has joined #openstack-nova13:12
jankigmann, ack.13:12
stephenfinThen again, if jaypipes is back from the dead, he might also like to review ...13:13
stephenfinNo rest for the weary :)13:13
*** felipemonteiro_ has joined #openstack-nova13:15
*** lbragstad has joined #openstack-nova13:16
*** felipemonteiro__ has joined #openstack-nova13:17
*** lyan has quit IRC13:20
*** felipemonteiro_ has quit IRC13:20
*** gouthamr has joined #openstack-nova13:25
efriedstephenfin I'm probably just missing something fundamental on
* stephenfin looks13:27
lyarwoodmdbooth: re - do you recall why context.auth_token isn't set?13:29
mdboothlyarwood: looking13:29
lyarwoodmdbooth: that introduced the workaround doesn't really say why it isn't there via init_host13:29
cdentefried: going back through the log looking at the conversation you had with sean-k-mooney and rgerganov; where in the powervm universe is the mapping from rp uuid to <other> going to live?13:29
stephenfinefried: The problem I'm trying to get at is, even if we somehow managed to keep non-PCI-needing instances off of PCI-having NUMA nodes, we can still end up in the situation where we have no free PCI-having NUMA nodes for PCI-needing instances13:30
mdboothlyarwood: Could it be because there's no request context at that point?13:31
mdboothi.e. no user, no api call?13:31
* mdbooth is guessing13:31
efriedcdent That's a pretty big TBD.  We don't have any persistent data on our "hypervisor" (the NovaLink partition), which is a fairly fundamental point of architecture.  You're supposed to be able to trash the partition and recreate it with no loss.13:31
stephenfinThis approach reduces the possibility but it doesn't mitigate it entirely. We want to use the two in tandem, which is what you've kind of hinted at (I think)13:31
lyarwoodmdbooth: yeah sorry,
sean-k-mooneystephenfin: that was the usecase we created to address13:31
openstackgerritChris Dent proposed openstack/nova-specs master: Add spec for symmetric GET and PUT of allocations
efriedstephenfin Yes.  You'll never get around that limitation.  Unless you want to totally deny non-PCI-needing instances from booting on PCI-having NUMA nodes.  Which ain't reasonable IMO.13:32
* lyarwood elevates13:32
sean-k-mooneyit allows you to create aggregates of nodes with scarce resources and require that they are requested in the flavor to schedule to those nodes13:32
stephenfinefried: But you will with this spec, which allows PCI-needing instances to use non-PCI-having NUMA nodes13:32
efriedcdent Off the cuff, the idea would be to make some part of the RP relate to some part of whatever thingy we're "mapping" to.13:32
sean-k-mooneythis filter will work with traits in request in the flavor too by the way13:33
mdboothlyarwood: Yeah, that would be required for a glance request, I guess13:33
*** smatzek has quit IRC13:33
stephenfinsean-k-mooney: I saw that, but it was too inflexible, hence
mdboothAlthough there's useful work it can do without that13:33
sean-k-mooneythat is your new weigher right13:33
*** lajoskatona has quit IRC13:33
*** smatzek has joined #openstack-nova13:33
lyarwoodmdbooth: it's the cinder encryption metadata lookup that's failing at the moment13:34
sean-k-mooneystephenfin: i thought that merged in pike?13:34
efriedstephenfin The "problem" you won't get around is where I boot a hundred non-PCI-needing instances and run out of non-PCI-having NUMA nodes after the first fifty, so I start using my PCI-having NUMA nodes.  Once those are full, I can no longer boot "required"-affinity PCI-needing instances.13:34
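efried's hundred-instance scenario can be sketched as a toy model (all class and function names here are hypothetical illustrations, not nova code): non-PCI instances prefer PCI-less NUMA nodes, but spill onto PCI-having nodes once the PCI-less ones fill up, eventually starving PCI-needing instances that "require" affinity.

```python
# Toy model of the exhaustion scenario (hypothetical names, tiny numbers).

class NumaNode:
    def __init__(self, has_pci):
        self.has_pci = has_pci
        self.free = True

def boot(nodes, needs_pci):
    """Pick a node for an instance; non-PCI instances prefer PCI-less nodes."""
    candidates = [n for n in nodes if n.free and (n.has_pci or not needs_pci)]
    if not needs_pci:
        # weigher-style preference: PCI-less nodes sort first (False < True)
        candidates.sort(key=lambda n: n.has_pci)
    if not candidates:
        return None
    candidates[0].free = False
    return candidates[0]

nodes = [NumaNode(has_pci=False) for _ in range(2)] + \
        [NumaNode(has_pci=True) for _ in range(2)]

# Four non-PCI instances fill everything, spilling onto the PCI nodes...
assert all(boot(nodes, needs_pci=False) for _ in range(4))
# ...so a PCI-needing instance with "required" affinity now fails to land.
assert boot(nodes, needs_pci=True) is None
```

The weigher only delays the spill; it cannot prevent it without outright denying non-PCI instances the PCI-having nodes, which is the alternative efried rejects.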
stephenfinsean-k-mooney: Correct. If you had to divide your cloud into aggregates for PCI devices, it made it very inflexible if you wanted to, say, start using a lot of PCI-having instances or vice versa13:34
stephenfinsean-k-mooney: It did. I'm saying forms a one-two combo :)13:34
mdboothlyarwood: Hmm, yeah. That's not gonna work unless we've got some special-sauce cinder admin login, or we cached it.13:35
stephenfinefried: Yup, there's nothing we can do about that, I'm afraid. If you want to use 'required' then this is what you signed up for. Go buy more PCI devices :P13:35
sean-k-mooneystephenfin: right, we really need to merge and implement that this cycle; it was originally meant to work in icehouse..13:36
*** takashin has left #openstack-nova13:36
efriedstephenfin Cool, then we're on the same page mentally; but then I don't think that chunk of the spec is expressing any reasonable "Alternatives".13:36
cdentefried: k, thanks. I’d rather we avoid putting that kind of mapping into placement if possible, but it seems there is a fairly generic need for a translation. I presume using is insufficient. Next comes the sort of config info that rgerganov mentioned (things like the IP for the shared storage piece?). That, to me, especially should not go into placement as we’d be committing serious mission creep at that point.13:36
stephenfinefried: However, the should help mitigate that, and the 'preferred' policy in should solve it completely13:36
sean-k-mooneystephenfin: as an employee of a hardware vendor that sells PCI devices i feel obliged to +1 that response :P13:36
*** esberglu has joined #openstack-nova13:36
*** burt has joined #openstack-nova13:37
stephenfin*if* you're ok living with the downsides (possible performance impact)13:37
stephenfinsean-k-mooney: :P13:37
stephenfinefried: Sure, I'm happy to drop it, if that makes sense? :)13:37
stephenfin*if that would help?13:37
efriedstephenfin If you don't mind, perhaps give me a crack at rewording it?13:37
stephenfinefried: Knock yourself out13:38
*** smatzek has quit IRC13:38
*** gcb has quit IRC13:38
openstackgerritMatt Riedemann proposed openstack/nova-specs master: Add pagination and timestamp filtering support for os-migrations API
*** gcb has joined #openstack-nova13:38
markus_zHere's a doc change waiting for review for 5 weeks:
stephenfinefried: Also, saw your reply on the "two spaces indicate age" comment. It was just an amusing article I read once upon a time that stuck with me. Not an actual ageist thing or anything!13:38
* stephenfin only got the "want to comment on my hair" comment after seeing that, heh13:39
efriedcdent I agree with you (at least in the sense that I've accepted the stated limitations of the placement architecture).  We could conceivably get some mileage out of overloading the 'name' field, or embedding information into gross custom traits.  But I would be wanting to look for cleaner alternatives pretty hard before resorting to that.13:39
stephenfinmarkus_z: Sure, I'll take a look13:39
markus_zstephenfin: thanks!13:39
openstackgerritStephen Finucane proposed openstack/nova master: docs: Explain the flow of the "serial console" feature
*** awaugama has joined #openstack-nova13:40
*** edand has quit IRC13:41
efriedstephenfin No offense taken, of course.  It's an argument that's been made to me before, so you may have caught some blowback from arguments I've had with others who were... um... more serious about it (to the point of -1ing stuff).  My take on it is that the reasons for "never do that" don't apply to programmers.13:41
*** peter-hamilton has joined #openstack-nova13:42
dansmithmriedem: tell me you don't care and I'll +W:
* cdent starts using three spaces, since he is older than efried 13:42
* edleafe uses 13 spaces13:43
openstackgerritGhanshyam Mann proposed openstack/nova-specs master: Spec for API extensions policy removal
efriedstephenfin Before I start, can you confirm that was indeed supposed to be "without"?13:44
gmannjohnthetubaguy:  sdague  alex_xu ^^ updated spec -
mriedemdansmith: i could fix quick, but don't care too much13:45
gmannjanki: where i can see the nova logs? i see all networking things there13:45
stephenfinefried: Ah, cool, just making sure. I don't do it because it was never something I was taught to do, though I have been known to strip them out when I rewrite stuff. All good points though13:46
stephenfinefried: Yes, without13:46
dansmithmriedem: if it were master I'd want it fixed, but I guess you could make the argument that it doesn't matter on a frozen branch13:46
peter-hamiltonhi everyone, i'm hoping to get final feedback on the updated cert validation spec:
mriedemdansmith: i'll fix it quick, 2 minutes13:46
peter-hamiltonlet me know if you have any questions13:46
mdboothdansmith: The problem with that is the hair pulling when some random other test fails because of it, and then you end up having to bisect a testrun to determine the ordering which causes a failure.13:47
*** pchavva has joined #openstack-nova13:47
jankigmann, nova-cpu logs
dansmithmdbooth: that's what I said in my review, yes13:47
dansmithmdbooth: the thing is that it's in a branch that shouldn't really get a lot of debug anymore13:47
mdboothdansmith: I was agreeing with you, but with additional angst.13:48
* dansmith shakes his fist in angry agreement at mdbooth13:48
johnthetubaguygmann: there is a bit of wording in there I am not totally sure about, agreed with what I think you mean.13:48
* mdbooth has spent a couple of days swearing at those in the past.13:49
gmannjohnthetubaguy: right, will update, thanks13:50
*** Tom____ has joined #openstack-nova13:50
gmannjanki: error is floating ip pool is not found13:50
*** armax has joined #openstack-nova13:50
jankigmann, yes. because the API call to create it fails right.13:51
gmannjanki: the API fails because there is no floating ip pool in your env13:51
jankigmann, isn't that what /compute/v2.1/os-floating-ips does?13:52
gmannjanki: you mean POST?13:52
jankigmann, ya. POST on compute/v2.1/os-floating-ips returns 40413:53
gmannjanki: error is raised from here but i can double check the logs to confirm the same -
jankigmann, - search for os-floating-ips13:54
*** Tom____ has quit IRC13:54
*** yamamoto has joined #openstack-nova13:54
*** yamamoto has quit IRC13:55
*** coreywright has quit IRC13:55
openstackgerritMatthew Booth proposed openstack/nova master: libvirt: Don't VIR_MIGRATE_NON_SHARED_INC without migrate_disks
*** edand has joined #openstack-nova13:56
*** amodi has joined #openstack-nova13:57
dansmithmriedem: fyi, this is the last thing i think is critical to land for fixing up our weird cell0 listing wart:
dansmiththe smartness patches after that are not critical, just gravy13:58
dansmiththat jenkins -1 isn't going to disappear, in case that has been deterring review13:58
openstackgerritMatt Riedemann proposed openstack/nova stable/ocata: libvirt: add check for VIR_DOMAIN_BLOCK_REBASE_COPY_DEV
mriedemdansmith: ^13:58
mriedemdansmith: ok, what's been deterring review is the looming newton eol and spec freeze13:59
dansmithmriedem: ack13:59
gmannjanki: yea, a floating ip pool is needed to create the floating ip, and if there is none then it raises the error.13:59
gmannjanki: what is the value of default_floating_pool in the conf? under the [DEFAULT] or [neutron] section?14:00
*** abhi89 has quit IRC14:00
openstackgerritBalazs Gibizer proposed openstack/nova master: Extract instance allocation removal code
dansmithmriedem: then speaking of spec freeze:
openstackgerritMerged openstack/nova-specs master: Add pagination and timestamp filtering support for os-migrations API
dansmithwe *have* to have that for a variety of things14:00
dansmithmriedem: it's a thick review, so you might just want to assume that efried has worded the ass off it and stamp it through14:01
efriedstephenfin Are you still looking at ^?14:02
*** markus_z has quit IRC14:02
jankigmann, it's "public" under [DEFAULT] in nova.conf14:03
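For reference, the knob being discussed looks roughly like this in nova.conf (value hypothetical; if I recall correctly, Pike added a [neutron]-group version of the option, the [DEFAULT] one being the legacy nova-network setting):

```ini
# legacy nova-network style option
[DEFAULT]
default_floating_pool = public

# neutron deployments (Pike onwards, if memory serves)
[neutron]
default_floating_pool = public
```

If only the [DEFAULT] option is set on a neutron deployment, nova may fall back to a pool name that does not match any neutron network, which fits the "floating ip pool not found" symptom.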
*** vks1 has joined #openstack-nova14:03
jankigmann, these will also depend on values in tempest.conf right?14:03
stephenfinefried: At what now?14:03
efriedstephenfin The granular resource request spec14:03
stephenfinefried: It's on my backlog, but I think I saw it merge this morning?14:04
stephenfinOr at least get some +2s?14:04
*** eharney has quit IRC14:04
efriedstephenfin Not merged yet, has dansmith +2 and some +1s.14:04
*** psachin has quit IRC14:05
gmannjanki: that tests did not pass the pool, so the default is being used and not found in neutron14:05
jankigmann, what next? I still doubt if tempest.conf has anything to do with this14:07
stephenfinefried: Then yes, I should get to it before EOD14:08
gmannjanki: did you specify this in tempest.conf - floating_network_name14:08
*** crushil has joined #openstack-nova14:08
efriedstephenfin Cool, thanks.14:09
*** coreywright has joined #openstack-nova14:09
jankigmann, that's the blockage. I have no access to tempest.conf and am trying to find it somewhere in the logs :(14:09
gibibauzas: hi! I pushed the follow up patch to refactor allocation removal as you suggested
gmannjanki: no prob, i got it from the log and it's None14:10
jankigmann, but then again floating IP related calls are passing for other tempest tests except this one.14:10
jankigmann, where did you find it?14:10
openstackgerritSteve Noyes proposed openstack/nova master: Update live migration to use v3 cinder api
*** salv-orlando has quit IRC14:11
*** salv-orlando has joined #openstack-nova14:11
*** huanxie has joined #openstack-nova14:12
johnthetubaguyI am looking at ironic and resource classes, and hitting some problems with the transition around claims, is that a known issue / known user error?14:13
*** sambetts|afk is now known as sambetts14:14
johnthetubaguybasically we update the resource class in the flavor, but the allocations don't get updated14:14
*** lyan has joined #openstack-nova14:14
johnthetubaguydunno if that is as designed14:14
johnthetubaguyit seems to cause problems14:14
cdentjohnthetubaguy: you mean already existing allocations?14:15
johnthetubaguycdent: yes14:15
cdentI think you’d have to do some kind of move/migration/resize/whatever for them to change14:15
mriedemjohnthetubaguy: isn't that similar to editing a flavor on an existing instance? which we don't allow outside of resize?14:15
*** salv-orlando has quit IRC14:16
mriedemwe probably never considered that, but now that flavor resource allocations are going to be tied to classes in the flavor extra specs, and you can edit extra specs at will,14:16
johnthetubaguyso... I should roll back, this is basically trying to do the Pike resource class transition for ironic14:16
mriedempeople might think that will auto-adjust the instance using that flavor somehow14:16
johnthetubaguyso I update my ironic nodes to have resources classes, thats all cool14:17
johnthetubaguybut the existing instances have only allocations for some of the resources now14:17
johnthetubaguyso if I update my existing flavors to request the new resource class, and stop requesting VCPU I have a problem14:18
johnthetubaguywhen I do a build instance, obviously I see all the nodes with existing instances as candidate hosts, as they still have the resources I need14:18
johnthetubaguyboom... my transition path is busted14:18
johnthetubaguynow what does work is to keep claiming VCPU and RAM in the flavor14:19
*** baoli has joined #openstack-nova14:19
johnthetubaguybut I think that will cause problems in queens when we stop reporting those resources for ironic14:19
johnthetubaguy... wondering if I am missing something here14:19
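For concreteness, the Pike transition johnthetubaguy is attempting looks roughly like this (node and flavor names hypothetical; commands as I recall them from the ironic flavor migration docs, so double-check before copying):

```console
$ openstack --os-baremetal-api-version 1.21 baremetal node set node-0 \
      --resource-class baremetal.gold
$ openstack flavor set bm.gold \
      --property resources:CUSTOM_BAREMETAL_GOLD=1 \
      --property resources:VCPU=0 \
      --property resources:MEMORY_MB=0 \
      --property resources:DISK_GB=0
```

The node's resource_class "baremetal.gold" is normalized to the placement class "CUSTOM_BAREMETAL_GOLD" (uppercased, punctuation replaced, CUSTOM_ prefix). The problem described above is that pre-existing instances hold no allocation against that custom class, so nodes they occupy still look free to the scheduler once the flavor stops requesting VCPU.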
cdentdoes it make any difference if you keep the old flavors and make entirely new flavors?14:20
johnthetubaguyno, same resource request problem14:21
*** baoli has quit IRC14:21
* cdent is just scratching around spitballing, etc14:21
mriedemjohnthetubaguy: are you seeing logs in the compute from here?
cdentwhat about not updating the in use node to have resource classes?14:21
johnthetubaguyall my nodes have a resource class now14:22
johnthetubaguymriedem: that stuff all works, all my instances have their flavor updated, but the allocations are not refreshed14:22
johnthetubaguy(basically I did update the in use nodes to have a resource class)14:22
mriedemjohnthetubaguy: did you change the allocation amounts?14:23
efriedmriedem dansmith Can we get pushed through, please?  Clean cherry-pick, and got a vendor request for it.14:23
johnthetubaguymriedem: where would I do that?14:23
efried(cc gibi)14:23
mriedemjohnthetubaguy: well i'm confused what you mean by allocations being refreshed14:23
mriedemthe allocation amounts shouldn't change just because there is a resource class now14:23
mriedemdo you mean the custom resource class allocation isn't showing up for the instance?14:24
johnthetubaguyI mean old instances don't have allocations for the new resource, which breaks the scheduling of new flavors that use the resource class14:24
mriedemso instance A had vcpu/ram/disk allocations before the node.resource_class was set, then you set node.resource_class = baremetal and now you expect to see a 'baremetal' allocation for instance A in placement14:24
*** janki has quit IRC14:25
johnthetubaguythat is what I expected, yes14:25
johnthetubaguy(clearly incorrectly)14:25
mriedemthat's probably because the RT isn't reporting allocations anymore once all of your computes are pike14:25
cdentmore importantly, with that expectation not met, scheduling breaks, right?14:25
* edleafe tries to remember dtantsur|brb's advice on this...14:25
johnthetubaguycdent: +114:25
edleafejohnthetubaguy: this came up last cycle14:26
gibiefried: I'm +1 on pushing that notification backport through14:26
mriedemin the before times, the update_available_resource periodic task would update the allocations for the instances running on that node14:26
mriedemuntil we squashed that14:26
johnthetubaguymriedem: yeah, that is what I was thinking14:26
mriedemefried: i can't +W my own backport14:26
*** ragiman has quit IRC14:26
edleafejohnthetubaguy: unfortunately, it was in the middle of a bunch of other discussions about the ironic transition to custom RCs14:26
cdentedleafe: yeah, I seem to recall dtantsur|brb had something to say about this. since he is brb, maybe he’ll brb14:27
mriedemjohnthetubaguy: ok so the scheduling issue is a new instance request can try to claim the 'baremetal' resource on node A even though instance A is already using it14:27
mriedemwhich causes a scheduler failure yes?14:27
efriedmriedem Right, and I'm led to understand the set of cores on stable is not the same as master, but I don't really understand how it works.  Guess I'm asking what needs to be done to get it in?14:27
johnthetubaguymriedem: yes14:27
mriedemjohnthetubaguy: ok yeah i seem to remember this coming up too....maybe dansmith remembers14:28
*** baoli has joined #openstack-nova14:28
mriedemefried: yes different core group on stable,members14:28
mriedemb/c different rules14:28
dansmithum what14:29
dansmithjohnthetubaguy: mriedem I'm not sure what you're talking about14:29
dansmithI'm on a call right now so I'm a bit distracted14:29
mriedemefried: we could use more stable cores, so if that's something you're interested in helping with, please dig in, get to know the rules, and do reviews14:29
*** yikun has joined #openstack-nova14:29
efriedmriedem ack14:29
*** smatzek has joined #openstack-nova14:29
mriedembasically means don't backport features, or backward incompatible changes, all things start on master and go backward, and there are support phases for what's appropriate to backport based on severity14:30
mriedemjohnthetubaguy: ok, got it - want to start by reporting a bug?14:30
*** sridharg has quit IRC14:31
johnthetubaguymriedem: yeah, will do, I was hoping I had just misread the docs14:31
*** sridharg has joined #openstack-nova14:31
*** erlon has joined #openstack-nova14:32
mriedemjohnthetubaguy: is the change i was thinking of,14:33
mriedemslightly different though14:33
mriedemthat's about reporting inventory, not allocations14:33
openstackgerritBalazs Gibizer proposed openstack/nova master: Extract instance allocation removal code
gibimriedem: fixed your nit in ^^14:33
*** gmann is now known as gmann_afk14:33
*** smatzek has quit IRC14:34
openstackgerritEric Fried proposed openstack/nova-specs master: PCI NUMA Policies
efriedstephenfin ^14:35
efriedI basically just took that nonsensical chunk out, and fixed up those couple of words.14:35
*** smatzek has joined #openstack-nova14:36
openstackgerritGhanshyam Mann proposed openstack/nova-specs master: Spec for API extensions policy removal
mriedemjohnthetubaguy: ok so normally, at least if you have 1 ocata compute, the update_available_resource periodic would run, that would get available nodes, which would refresh nodes from ironic, and that refresh does the resource class / flavor migration thing,14:38
*** david-lyle has joined #openstack-nova14:38
mriedemand then as part of the update_available_resource periodic, the RT would update allocations for each instance on the node (if you have at least 1 ocata compute)14:38
mriedemwe don't really have a hook between the virt driver and the RT to say if the allocations should be updated,14:39
stephenfinefried: Looks good to me. Thank you :)14:39
openstackgerritBalazs Gibizer proposed openstack/nova master: Moving more utils to ServerResourceAllocationTestBase
openstackgerritBalazs Gibizer proposed openstack/nova master: factor out compute service start in ServerMovingTest
openstackgerritBalazs Gibizer proposed openstack/nova master: Test resource allocation during soft delete
stephenfindansmith: Could you stick on your review backlog?14:39
stephenfinsean-k-mooney, cdent: Ye might like to look at that again too ^14:40
mriedemjohnthetubaguy: random thoughts: we could have the ironic driver set a flag when a flavor was migrated and the RT calls into the driver to see if the flag was set and allocations should be forcefully updated (kind of gross and maybe racy),14:40
*** smatzek has quit IRC14:40
mriedemjohnthetubaguy: we could have the driver update allocations on its own... also kind of gross but at least a very specific case14:40
dansmithstephenfin: I can, but the queue is long and I'm not sure I'm the best person to review that14:40
dansmithI guess I don't really know who is though14:40
johnthetubaguymriedem: yeah, being part of the instance flavor migration doesn't seem totally crazy14:41
stephenfindansmith: That's the problem :( jaypipes would be the best person, but he has his hands full with nested-resource-providers14:41
dansmithstephenfin: he has his hands full with being a slacker14:41
stephenfindansmith: You could just approve it and assume it's perfect? :P14:41
johnthetubaguymriedem: I attempted to write it all up here:
openstackLaunchpad bug 1724589 in OpenStack Compute (nova) "Unable to transition to Ironic Node Resource Classes in Pike" [High,New]14:41
stephenfinWell, that goes without saying14:41
mriedemanother idea is the RT could check to see if there are any new resource classes in the flavor, but that would be complicated - and the only way to tell new from old is by checking existing allocations in placement - not something we want to do while holding a lock in the RT14:41
*** smatzek has joined #openstack-nova14:42
*** huanxie has quit IRC14:42
*** tonygunk has joined #openstack-nova14:43
johnthetubaguymriedem: I don't mind a nova-manage cmd you have to run to update the allocations for the node you are on?14:43
jaypipesstephenfin: I'm currently reviewing the PCI NUMA policy spec.14:43
jaypipesstephenfin: also, still on vacation..14:43
edmondswefried here are the nova stable cores:,members14:43
johnthetubaguymriedem: although that breaks our upgrade ABI...14:43
jaypipesstephenfin: but I just got some coffee and fuck it, might as well do some work.14:43
openstackgerritChris Dent proposed openstack/nova master: [placement] Clean up TODOs in allocations.yaml gabbit
kashyapjaypipes: On vacation and on specs?  Sheesh14:44
kashyapjaypipes: Setting a baaaaad example, I tell ya14:44
mriedemjohnthetubaguy: i'd rather we do it automatically14:45
stephenfinjaypipes: You're a better man than I. I didn't know you were on vacation14:45
johnthetubaguymriedem: yeah, I will have a dig14:45
mriedemjohnthetubaguy: one issue though is i think once we migrate the flavor, we no longer check this stuff in the ironic driver,14:45
mriedemso anyone that has already upgraded and migrated the flavors, not sure if we can correct the issue for them14:46
mriedemwithout detecting if the instance allocations include the resource class or not, and if not, add it14:46
mriedemwhich sucks since that would run every time14:46
*** smatzek has quit IRC14:46
jaypipesstephenfin: it's cool duder. I'll be back into my normal swing of things later today.14:46
jaypipesstephenfin: just got back from Ohio at midnight last night.14:46
johnthetubaguymriedem: it would only be once per process restart to do that allocation check14:46
*** xyang1 has joined #openstack-nova14:46
*** ociuhandu has quit IRC14:46
mriedemjohnthetubaguy: the flavor migration thing happens on every node refresh14:47
mriedembecause of the hash ring stuff14:47
johnthetubaguymriedem: will find you the link14:47
*** smatzek has joined #openstack-nova14:48
johnthetubaguymriedem: before this line it is only once per process restart, I think:14:48
*** lyan has quit IRC14:48
johnthetubaguymriedem: well, per instance you create too, there is the _migrated_instance_uuids guard to stop the worst of it14:48
stephenfinjaypipes: Gotcha. Looking through efried's spec rn, but if there's anything you want reviews on let me know14:49
jaypipesstephenfin: will do. cheers.14:49
*** josecastroleon has quit IRC14:50
mriedem"By adding just the custom RC to the existing flavor extra_specs, the periodic call to update_available_resources() will add an allocation against the custom resource class, and prevent placement from thinking that that node is available."14:50
mriedemexcept it won't...14:50
cdentI kinda wish we could figure out some way to have that back.14:51
cdentif we can’t, then I think we should be okay with the virt driver doing it14:52
mriedemjohnthetubaguy: this is where we start the migration checking
*** smatzek has quit IRC14:52
cdentwe’re seeing increasing situations where the virt driver being able to talk to placement is useful14:52
mriedemany time a node is refreshed14:52
dansmithcdent: you mean compute14:53
cdentno, I mean virt driver14:53
mriedemdansmith: not in this case14:53
efriedbauzas Could you take a look at please?14:53
johnthetubaguymriedem: but most of it doesn't happen, as we check
mriedemjohnthetubaguy: so consider i've upgraded to pike, added resource classes to my existing nodes, and those migrated the flavor extra specs14:53
mriedemwe get to
dansmithmriedem: I really want to avoid the virt driver talking to placement14:53
*** josecastroleon has joined #openstack-nova14:53
bauzasefried: sure, top prio just being specs reviews for today14:53
johnthetubaguymriedem: yes14:53
efriedbauzas Of course; that one should be an easy +A hopefully.14:54
mriedemjohnthetubaguy: before we'd have to check if there is an existing allocation for normalized_rc in placement for that instance14:54
mriedemand if not, add it14:54
bauzascdent: I'm pretty against the idea to see the virt driver talking to placement14:54
mriedemjohnthetubaguy: that is at most redundant once per restart of nova-compute i agree14:54
johnthetubaguymriedem: +1 that's what I was trying to say above14:54
mriedembecause then self._migrated_instance_uuids.add(node.instance_uuid)14:54
johnthetubaguyyep, yep14:54
johnthetubaguyI mean its horrid, but seems the least horrid14:54
cdentbauzas: yes, it is icky, but the non libvirt drivers (notably powervm and vmware) may very well need to do it despite the ickiness14:55
*** yamamoto has joined #openstack-nova14:55
dansmithcdent: why?14:55
*** suresh12 has joined #openstack-nova14:55
*** suresh12 has quit IRC14:56
mriedemjohnthetubaguy: ok we both said the same thing in :)14:56
openstackLaunchpad bug 1724589 in OpenStack Compute (nova) "Unable to transition to Ironic Node Resource Classes in Pike" [High,New]14:56
johnthetubaguymriedem: I can go and try code that up, to see how bad it looks14:56
*** cfriesen has joined #openstack-nova14:56
cdentdansmith: one example is the conversation that efried, rgerganov and efried were having earlier today about device management. /me locates link14:56
mriedemjohnthetubaguy: go nuts - you've got the recreate so you'll be able to tell if it fixes it14:57
efriedcdent I think having the allocations passed into the virt driver obviates any need to query placement further14:57
dansmithcdent: I want compute asking the virt driver what it thinks it needs and then compute doing that work14:57
efriedor that ^14:57
mriedemso, what john and i are talking about is a special one off case14:57
dansmithcdent: having two things managing allocations or RCs or anything like that gets us further into the territory we had before where things are stomping on each other14:57
mriedemfor the ironic flavor migration stuff14:57
cdentdansmith: I agree we probably don’t want multiple places doing writes14:58
cdentbut reads, I’m not sure14:58
cdentI don’t know for certain, at this stage I’m merely speculating14:58
dansmithwe should be passing anything in that it needs, otherwise we're likely duplicating queries14:59
*** vks1 has quit IRC14:59
dansmithmriedem: can you explain what you think you need for ironic?14:59
openstackLaunchpad bug 1724589 in OpenStack Compute (nova) "Unable to transition to Ironic Node Resource Classes in Pike" [High,In progress] - Assigned to John Garbutt (johngarbutt)14:59
*** baoli has quit IRC14:59
mriedemdansmith: it's in ^14:59
mriedemjohnthetubaguy: one issue might be a chicken and egg with the custom RC inventory being available when we PUT /allocations/{consumer_uuid}15:00
*** baoli has joined #openstack-nova15:00
mriedembecause the inventory gets updated via the RT periodic15:00
mriedemso if i just set a custom rc on my ironic node, the periodic runs, it detects the new rc and will update inventory, but while in the driver we're migrating the instance because its node has an rc now, if we try updating the allocation for that rc before the node RP inventory has that rc, it will fail with a 40915:01
johnthetubaguymriedem: dang, yeah15:01
mriedem"409 Conflict if there is no available inventory in any of the resource providers for any specified resource classes or inventories are updated by another thread while attempting the operation."15:01
johnthetubaguymriedem: I guess we skip adding it as migration complete, and go around again15:01
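The "go around again" approach suggested here — treating a 409 from `PUT /allocations/{consumer_uuid}` as "inventory not updated yet, retry on the next periodic pass" — might look like this rough sketch. The function, the fake exception, and the placement-client method are hypothetical, not nova code:

```python
class ConflictError(Exception):
    """Stand-in for a placement 409 Conflict response."""


def migrate_allocation(placement, instance_uuid, custom_rc, migrated):
    """Try to add the custom RC allocation for an instance.

    On 409 (node RP inventory doesn't carry the custom resource class
    yet), leave the instance unmarked so the next periodic pass retries
    after the inventory update has landed.
    """
    try:
        placement.put_allocation(instance_uuid, custom_rc)
    except ConflictError:
        return False  # inventory not ready; go around again
    migrated.add(instance_uuid)
    return True
```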
dansmithmriedem: is comment #1 what I should read?15:01
mriedemdansmith: or 215:01
mriedemoh yeah 1 is my description of the problem15:02
mriedemin 'merican15:02
johnthetubaguymriedem: I have a related thing here, I am not keen on what it could do across upgrade mind:
*** baoli has quit IRC15:02
*** baoli has joined #openstack-nova15:02
dansmithmriedem: this is because pike doesn't do healing of allocations right?15:03
johnthetubaguydansmith: +115:03
*** mdnadeem has quit IRC15:03
mriedemthe flavor migration code has a comment asserting that the allocations are updated via the RT periodic15:03
mriedemprobably b/c it was written 1/2 a day before that changed :)15:03
dansmithhaving the virt driver do that thing we decided was bad to have compute (at all) do, seems like a bad call to me,15:04
dansmithalthough I understand something needs to do it15:04
*** acormier has joined #openstack-nova15:04
dansmithit'd be nice if we had some way to let the driver signal to compute that something fundamental has changed about the instance to force the heal,15:04
mriedemcould we get the scheduler to do it? probably not b/c the issue during scheduling isn't with the instance already consuming that node, it's a new build request and the scheduler thinking the node is free15:04
mriedemdansmith: that's what i was thinking as an option above,15:05
*** yamamoto has quit IRC15:05
mriedemhave a way for the RT to call into the driver to ask if the allocation should be forced15:05
mriedemthat might also fix the issue with trying to update the allocation before we've posted inventory for the newly set node.rc15:05
dansmithwhich could apply to other virt drivers that have things happen underneath them (like vmware moving an instance within a cluster)15:05
johnthetubaguyI am missing the trigger, a new build fails and triggers the refresh?15:06
jaypipesstephenfin: k, done on the pci numa policies spec.15:06
mriedemjohnthetubaguy: where does the new build fail? scheduler or compute?15:06
mriedemi thought it was compute and triggered a retry15:06
johnthetubaguyyeah, it's on the compute, ironic finds the node is already in use by another instance15:06
mriedemso spawn fails?15:06
*** Oku_OS is now known as Oku_OS-away15:07
johnthetubaguyI should double check the logs on that15:07
stephenfinjaypipes: Thank you sir. Much appreciated, as always15:07
jaypipesstephenfin: no problem. or however one says no problem in Gaelic.15:07
*** acormier has quit IRC15:08
*** penick has joined #openstack-nova15:08
*** lyan has joined #openstack-nova15:08
mriedembauzas: you were +2 on before so are you still good15:09
efriedAn dtugann stephenfin Gaeilge?  I was kind of guessing not...15:09
mriedemjohnthetubaguy: not sure i like the spawn fail / refresh thing,15:10
mriedemtrying to think of how we can set a flag for instances that have migrated but need their allocations updated by the RT15:10
mriedemincluding instances that have had their flavors migrated already15:10
*** ragiman has joined #openstack-nova15:11
dansmiththe problem being we do the migration async right?15:11
dansmithwe come up and before we can do all of them we might get a boot request?15:11
mriedemah crap didn't think about that either15:11
dansmithwe could disable ourselves until the migration is complete15:11
mriedemthe async migration also means that if the RT asked the driver for any instances to forcefully update allocations, they might not be set yet, but the next pass or the periodic would hit them15:12
mriedemi was thinking the driver is shoving stuff onto a work queue,15:12
mriedemthe RT pulls off that queue15:12
mriedemonce all instances are processed the queue remains empty15:12
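The work-queue idea outlined in the last few messages — the driver shoves instances needing allocation updates onto a queue, and the resource tracker drains it during its periodic — could be sketched as follows; the class and method names here are invented for illustration:

```python
import collections


class DriverMigrationQueue:
    """Driver side: enqueue instances whose allocations need updating.

    RT side: drain the queue during the update_available_resource
    periodic. Once all instances are processed the queue stays empty.
    """

    def __init__(self):
        self._pending = collections.deque()

    def add_instance(self, instance_uuid, resource_class):
        # Driver pushes work as it discovers migrated flavors.
        self._pending.append((instance_uuid, resource_class))

    def pop_instances_to_heal(self):
        # RT pulls everything queued so far in one pass.
        work = list(self._pending)
        self._pending = collections.deque()
        return work
```

Because migration is async, a pass may find the queue partially filled; anything enqueued later is simply picked up on the next periodic, which matches the behavior discussed above.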
johnthetubaguyget_inventory, could we do the migration on that call, and return something if the allocation needs refreshing?15:13
dansmithif we disable ourselves we can still do management operations of existing instances, and the scheduler won't send us new stuff until we're ready15:13
dansmithjohnthetubaguy: that's kinda icky15:13
*** catintheroof has joined #openstack-nova15:13
dansmithjohnthetubaguy: I mean, really icky15:13
dansmithjohnthetubaguy: gross even15:13
johnthetubaguyworse than virt driver calling to placement?15:14
dansmithalso recall that people can do this before anything starts back up during an upgrade15:14
*** vks1 has joined #openstack-nova15:14
*** lpetrut has joined #openstack-nova15:14
openstackgerritBalazs Gibizer proposed openstack/nova master: Transform instance.exists notification
openstackgerritBalazs Gibizer proposed openstack/nova master: Add sample test for instance audit
dansmithjohnthetubaguy: it's just overloading get_inventory() as a general hook to do potentially a ton of db maintenance15:15
johnthetubaguyso the alternative is dropping the resources:VCPU=0 in the new flavor15:15
dansmithjohnthetubaguy: it'd be better to just provide a proper hook I think15:15
johnthetubaguydansmith: I am agreeing its horrid, just thinking relative horrid here really15:15
dansmithjohnthetubaguy: it's relatively horrid yes :)15:16
dansmithjohnthetubaguy: you don't like the self disable?15:16
dansmithif we do it in get_inventory we still have a race right?15:16
dansmithwith the scheduler15:16
johnthetubaguyso its a race with when the flavor gets updated I guess15:16
mriedemi'm not sure at which point you auto-disable15:17
dansmithmriedem: on init_host(), if there are unmigrated instances15:17
dansmithand then re-enable when you're done with the migration15:18
johnthetubaguybut if the resource class isn't set on the node, we are never done15:18
mriedemso originally when we did the flavor migration it was on init_host and then that moved because of the hash ring stuff15:18
dansmithjohnthetubaguy: that had to be set before queens, right?15:19
mriedemyeah i'm trying to think if we re-enable too soon, or disable too long15:19
*** pcaruana has quit IRC15:19
dansmithso queens starting up can assume it's set no?15:19
johnthetubaguydansmith: yes, but this is really about making pike work15:19
johnthetubaguyso right now I can't update the flavors in pike, for them to be set in time for queens, because of this bug15:20
dansmithoh, was this migration in pike? I'm misremembering the timing15:20
johnthetubaguyyeah, sadly15:20
*** tesseract has quit IRC15:20
mriedemyeah we have to backport the fix15:20
dansmithjohnthetubaguy: but you can set the node class before pike, and you can run the migration before you start anything else up right?15:21
mriedemso i'm still thinking queue and hook callback from RT to driver15:21
dansmithmriedem: explain your queue thing again?15:21
mriedemso before we're going to have to see if the instance already has an allocation for the normalized_rc,15:22
cdentdoes anyone have a reference to previous discussion/decisions on why we don’t do regular allocation updates? (something I can read, rather than taking us off track here)15:22
mriedemif it doesn't, we have to put the instance/rc into a queue,15:22
mriedemwhen the RT periodic runs, it calls into the driver to say, give me any instances that need their allocations updated and we'd dequeue at that point15:22
mriedemthen ^ should auto-heal during the periodic as the admin is setting node rcs in ironic15:23
mriedemwhile things are running15:23
dansmithmriedem: you mean the first time we run the RT periodic we end up migrating all the instances the driver needs before we update inventory?15:24
mriedemcdent: in a nutshell, in pike we want the scheduler to create allocations, including doubling them up for moves and shared providers (before we stopped trying to make shared providers work in pike) - the problem is the periodic task in ocata computes will overwrite the allocations created by the scheduler15:25
mriedemdansmith: need to see if the RT updates inventory before allocations15:25
*** eharney has joined #openstack-nova15:25
*** gjayavelu has quit IRC15:25
mriedemit doesn't15:26
mriedemthe RT would update allocations before inventory15:26
dansmitheven still, we run most of that code at other times than just the perioidic15:26
*** jgriffith has quit IRC15:26
*** Sukhdev_ has joined #openstack-nova15:26
mriedemthe allocation update does happen during instance_claim15:27
dansmithmriedem: so we still do allocation updates if we have ocata computes.. what if we keep doing them if we have ocata nodes and the driver says we need to?15:27
*** hongbin has joined #openstack-nova15:27
*** MVenesio has quit IRC15:28
*** slaweq_ has joined #openstack-nova15:28
mriedemand the driver says we need to only for ironic and only until we remove the flavor migration code15:28
sean-k-mooneystephenfin: gladly. ill review it in the next 20 mins or so. i was pretty happy with the previous version so i dont expect that ill see anything wrong with it15:28
*** andreas_s has quit IRC15:28
dansmithmriedem: yeah15:28
johnthetubaguy... now that I like15:29
mriedemdansmith: that would be simpler15:29
johnthetubaguywill code that up now15:29
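The design the room converges on here — the RT keeps doing Ocata-style allocation writes if old computes are present, or if the driver (ironic only, until the flavor migration code is removed) says it still needs them — might be sketched like this. The flag name and helper are assumptions for illustration, not necessarily what the eventual patch used:

```python
class ComputeDriver:
    # Default: the RT should not overwrite scheduler-created allocations.
    requires_allocation_refresh = False


class IronicDriver(ComputeDriver):
    # Temporary: keep healing allocations until the ironic flavor
    # migration code is removed.
    requires_allocation_refresh = True


def should_update_allocations(driver, has_ocata_computes):
    """RT decision: keep doing allocation writes while Ocata computes
    remain in the deployment, or while the driver asks for it."""
    return has_ocata_computes or driver.requires_allocation_refresh
```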
dansmiththe commit message would need to be "Extend existing shitpile with more shit because.. why not"15:29
mriedemyou can't migrate ironic instances anyway right15:29
mriedemso the ocata compute allocation overwrite thing is less of a concern there15:30
johnthetubaguymriedem: there is a spec on that ;)15:30
dansmithno migrations at all?15:30
mriedemjohnthetubaguy: sure, but not in pike15:30
dansmithI thought evac worked at least15:30
mriedemevac is the only one i can think of15:30
johnthetubaguyoh, rebuild does I guess15:30
*** gszasz has quit IRC15:30
mriedemyou don't care about the allocations on the dead source host15:30
mriedemalthough these are nodes, not hosts15:30
* dansmith points to his shitpile comment15:31
mriedemhmm, so how does evac work for ironic - the nodes might not be down, but the nova-compute service is,15:31
mriedemso you evacuate and move all of those instances to other ironic nodes managed by another compute host?15:31
* dansmith points to his shitpile comment again15:31
* johnthetubaguy has no idea how evacuate works15:32
mriedemwell regardless,15:32
mriedembecause of gibi's fixes,15:32
mriedemwhen/if the source compute comes back up, we remove the allocations from the old instances that were on it15:32
*** gszasz_ has joined #openstack-nova15:32
mriedemso yeah the callback to the driver to ask if it should update allocations is probably ok for ironic,15:33
mriedemlesser of all evils,15:33
mriedemand temporary15:33
dansmithnot not evil, but less evil15:34
mriedemlike mike pence15:34
* cdent blinks15:36
*** dtantsur|brb is now known as dtantsur15:36
dtantsurjohnthetubaguy, cdent, whas was the question, could you please give a tl;dr?15:40
johnthetubaguydtantsur: hit this bug:
openstackLaunchpad bug 1724589 in OpenStack Compute (nova) "Unable to transition to Ironic Node Resource Classes in Pike" [High,In progress] - Assigned to John Garbutt (johngarbutt)15:41
edleafedtantsur: if you update an ironic flavor with a custom RC, nova may schedule to an existing node15:41
*** jgriffit1 has joined #openstack-nova15:41
edleafebecause we don't create the allocation for nodes with that custom RC15:41
*** fragatina has quit IRC15:42
dtantsurwait, wasn't that periodic task in the ironic virt driver written to update the allocations?15:42
*** fragatina has joined #openstack-nova15:42
dansmithdtantsur: no, the flavors15:43
dansmithdtantsur: when it was written the compute node would unceremoniously heal allocations15:43
edleafedtantsur: that was supposed to be part of it, but only the flavor migration was added15:43
dansmithdtantsur: right over top of the scheduler, which is why we made it stop,15:43
dansmithdtantsur: but that periodic in the ironic driver was depending on that15:43
dtantsurwas the update bit removed in Pike or Queens or both?15:44
*** jgriffit1 has quit IRC15:44
dansmithdtantsur: pike once all compute nodes have been updated15:44
*** jgriffith has joined #openstack-nova15:44
dansmithdtantsur: it would keep doing it as long as ocata nodes were in the cluster15:44
*** jgriffith has quit IRC15:45
dtantsurso, is our only option to update the periodic task to fix up allocations as well?15:46
dtantsurI don't know nova well enough to suggest anything else :)15:46
dansmithdtantsur: we're going to let ironic have a vote on the "should we keep healing" decision15:48
*** felipemonteiro__ has quit IRC15:48
efriedcdent I wrote that test case, but I can't get it to fail properly.  I suspect it may be due to that bug, though.15:50
*** smatzek has joined #openstack-nova15:50
efriedcdent Mebbe I post it and see if you can find holes.15:50
cdentefried: sounds like a grand plan15:50
*** Sukhdev has joined #openstack-nova15:51
dtantsurdansmith: I just don't see other options. I don't think we should ask operators to go into Placement and fix up stuff themselves. or should we?15:52
dansmithdtantsur: nobody is suggesting that15:52
dtantsurjohnthetubaguy: btw you may also hit in your testing15:52
openstackLaunchpad bug 1723423 in OpenStack Compute (nova) "Ironic node cannot be used if it does not report VCPU" [Undecided,In progress] - Assigned to Dmitry Tantsur (divius)15:52
*** sahid has quit IRC15:52
johnthetubaguydtantsur: so ironic should eventually return 0 for all the resources that are no longer being claimed?15:53
dtantsurjohnthetubaguy: sorry, I don't quite get it. Which of the bugs are you referring to?15:54
*** jmlowe has joined #openstack-nova15:54
*** links has joined #openstack-nova15:54
openstackgerritAndrey Volkov proposed openstack/osc-placement master: CLI for resource classes
openstackgerritAndrey Volkov proposed openstack/osc-placement master: [WIP] RP list: member_of and resources parameters
johnthetubaguydtantsur: the one you just linked to, that might be a bit of the picture I was missing15:55
johnthetubaguydtantsur: once the resource class is updated, we stop reporting the vcpu mem and disk from ironic?15:55
*** chyka has joined #openstack-nova15:56
dtantsurjohnthetubaguy: we don't, no. only if you remove 'cpus' key from node.properties15:56
efriedcdent Actually the problem is that AllocationCandidates refuses to find anything in the shared RP as long as the compute node also has inventory in the same RC (DISK_GB).15:56
dtantsurwhich is what I tried doing in my testing15:56
efriedalex_xu yt?15:57
cdentefried: are you based on alex’s stuff? that may be a semi-intentional holdover from the early policy of “we don’t do that”15:57
johnthetubaguydtantsur: OK, I didn't know that was a thing, makes sense now, thanks.15:57
cdentin which case you’ve found a version (there are many) of the bug15:58
efriedcdent Yes, I'm based on his in-flight patch to implement traits in allocation candidates.15:58
efriedcdent Got a handy dashboard of placement bugs?  This one is worth filing separately if we don't already have it.15:59
dtantsurso, back to the first bug. are there any options except for fixing the periodic task? I really don't know15:59
*** sahid has joined #openstack-nova15:59
cdentefried: just make a nova one tagged placement:
*** slaweq_ has quit IRC16:00
johnthetubaguydtantsur: we went through some, they all seem a lot worse, like calling into placement to correct the allocations after we migrate the instance flavor16:00
efriedcdent Cool, but don't want to dup if there's already one out there.16:00
efriedcdent Oh.  That's the dashboard.  Got it :)16:00
*** andreas_s has joined #openstack-nova16:01
openstackgerritMerged openstack/nova-specs master: PCI NUMA Policies
*** gjayavelu has joined #openstack-nova16:02
dtantsurjohnthetubaguy: okay.. wanna try to make a patch? at least you have a reproducer already16:05
efriedcdent What do we call the "main" RP to distinguish it from the shared RPs?  Today it's always the "compute node RP" - but do we have a generic name for it?16:05
*** jmccarthy has joined #openstack-nova16:06
cdentefried: we haven’t really settled on anything. alex_xu tried to use “root” in his stuff but that overlaps with nested16:06
*** sridharg has quit IRC16:06
efriedcdent Right16:06
efriedI'll use "compute node" for the sake of the bug, but eventually we should have a word.16:06
cdentthe other one is the “sharing” rp, so maybe the main one is the “non-sharing” (that’s a bit weak though)16:07
*** vks1 has quit IRC16:07
cfriesencdent: we could have multiple non-shared RPs per compute node, no?  I think I remember hearing that some Intel folks were talking about creating a separate non-shared RP to provide L3 cache.16:09
*** Apoorva has joined #openstack-nova16:09
cfriesenif so then it's really the "nova compute node RP" or something like that. :)16:09
openstackgerritJohn Garbutt proposed openstack/nova master: Keep updating allocations for Ironic
johnthetubaguydtantsur: that is my first stab at it ^16:09
*** andreas_s has quit IRC16:09
cdentcfriesen: non-sharing-root-provider is more complete then16:11
cdentwas trying to leave that complexity out for now16:11
*** gszasz_ has quit IRC16:12
cfriesencdent: although in the case of L3 cache it would be properly modelled as a per-NUMA-node resource and would need nested providers. :)16:12
*** salv-orlando has joined #openstack-nova16:12
cdentlet’s hope so16:13
*** ijw has joined #openstack-nova16:14
*** andreas_s has joined #openstack-nova16:14
*** jgriffith has joined #openstack-nova16:15
edleafe"greedy rp"? (i.e., not sharing)16:15
*** mmehan has joined #openstack-nova16:15
*** lucasagomes is now known as lucas-afk16:15
*** baoli has quit IRC16:15
*** jgriffith has quit IRC16:15
mriedemefried: dansmith: i went through for the granular allocation candidates request syntax,16:16
mriedemonly sticking point right now is this means you can only specify one set of resource classes/traits which are spread across providers, right?16:16
mriedembecause all numbered groups are applied to the same RP16:16
efriedmriedem Correct.16:16
efriedmriedem Why would you need more than... oh.16:16
mriedemi'm trying to think if that screws us later16:16
efriedmriedem Actually, no, I think that's aight.16:17
mriedemi'm mostly worried about the shared storage scenario,16:17
mriedemwhich i know we're punting, but16:17
*** salv-orlando has quit IRC16:17
*** amodi has quit IRC16:17
dansmithwait, what?16:17
dansmiththe point of this is you specifying traits/classes that go together16:17
efriedmriedem Well, the spec deliberately leaves out shared RP semantics.16:17
openstackgerritRodolfo Alonso Hernandez proposed openstack/nova-specs master: Network bandwidth resource provider
*** jgriffith has joined #openstack-nova16:18
efriedmriedem But we do have a couple of pretty big holes even without this, when it comes to traits + aggregates.16:18
efriedI'm writing one up now :)16:18
*** ttsiouts has quit IRC16:18
* dansmith actually reads the review16:19
dansmithah, I see16:19
mriedemso with shared storage, i think we wanted the scheduler to ask for vcpu/memory_mb and disk_gb, and we could get 2 providers back,16:19
mriedemone compute node RP for the vcpu/memory_mb, and one shared storage provider for the disk_gb16:19
mriedemscheduler would claim on both16:19
dansmithwith this,16:19
mriedemmaybe that still works ok with this new thing16:19
mriedemyou request those in an unnumbered resources group16:19
mriedemso the resources can be spread across providers16:19
dansmithyou could ask for resources2:DISK=1,required2=SHARED16:19
dansmithto make sure you get a shared disk, right?16:20
mriedemin previous thinking about shared storage i didn't think the flavor had to be specific as to the type of storage16:20
dansmithit doesn't have to be16:20
mriedemso like i said, i think we don't lose the ability to do what we can do today with an unnumbered request,16:21
mriedemyou just can't have more than one unnumbered request16:21
mriedembut i can't think of any reason why that might be bad16:21
efriedCorrect.  Except "what we can do today with an unnumbered request" is poorly defined and broken.16:21
dansmithis there some special behavior assigned to resources= that isn't there for resourcesN=?16:22
dansmithI must have missed that if so16:22
mriedemalso, now that i'm thinking about this, what is preventing placement from returning 2 compute node providers, one that satisfies the vcpu requirement and one that satisfies the memory_mb requirement, scheduler claiming on both and the build failing?16:22
efriedSorry, by "today" I actually mean "not today at all".  I mean when both traits and aggregates are considered.16:22
mriedemdansmith: my comment here
*** peter-hamilton has quit IRC16:23
dansmithmriedem: ah, I see, I had read that first bullet differently I guess16:23
*** gszasz_ has joined #openstack-nova16:23
dansmithI read this as "if you only have one grouping"16:24
*** alex_xu has quit IRC16:24
efried"same tree" is the key there, mriedem.  That's how you don't get VCPU and MEMORY_MB from separate computes.16:24
*** Yingxin has quit IRC16:25
mriedemok so compute node today is a tree of 1 node16:25
mriedemthe alpha and omega16:25
*** salv-orlando has joined #openstack-nova16:25
mriedemwhew :)16:26
efriedI would think we want to enforce that one allocation can only ever get resources from one tree plus zero or more shared-via-aggregate16:26
mriedemyeah i think so16:26
mriedemotherwise it gets crazy16:26
efriedFor the unnumbered group, the resources from one class are always from the same RP; but resources from different classes can be spread throughout the tree + aggregates16:26
mriedemi guess we never answered the question from the other day about whether or not a compute node provider can report both local disk_gb and disk_gb via a shared-with-aggregate storage pool16:27
efriedFor the numbered group, all resources are always from the same RP (one node within a tree)... but I don't know how aggregates come into play there.16:27
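Under the granular syntax being reviewed, an unnumbered `resources` group may be spread across a provider tree plus shared providers, while each numbered `resourcesN` group must be satisfied by a single provider. A `GET /allocation_candidates` request mixing the two might be encoded like this; the resource amounts and the custom trait name are made up for illustration:

```python
from urllib.parse import urlencode

# Unnumbered group: VCPU and MEMORY_MB may come from different
# providers in the tree (plus shared RPs). Numbered group 1: the
# DISK_GB must all come from one provider carrying the given trait.
params = {
    'resources': 'VCPU:1,MEMORY_MB:2048',
    'resources1': 'DISK_GB:50',
    'required1': 'CUSTOM_SHARED_STORAGE',
}
query = urlencode(params)
```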
efriedmriedem Just so.  That's the bug I'm writing up now.  As currently implemented, we ignore the aggregate if the compute node has DISK_GB.16:27
efriedand I don't think that's the right answer, generally/long-term.16:28
*** andreas_s has quit IRC16:28
mriedemah, yeah, i'd think we'd pull from the aggregate inventory16:28
*** Yingxin has joined #openstack-nova16:28
*** links has quit IRC16:28
mriedemsince you're assuming that's a pool the operator wants you to use16:28
efriedwell, we should return candidates for both.16:28
mriedemmaybe that becomes a traits tihng16:28
efriedmriedem we're busted there too16:29
efriedBecause my compute RP can have e.g. RAID trait, and my shared RP can have e.g. SSD trait.16:29
efriedBut placement has no way to know that those should stick together.16:29
efriedSo if I ask for RAID+SSD, I'll get candidates, but I shouldn't.16:29
efriedthat was in fact the bug I was trying to express (by writing a test for it) when I discovered the previous.16:30
*** baoli has joined #openstack-nova16:30
*** baoli has quit IRC16:30
mriedemdansmith: so no major issues with you for this unnumbered resource limitation thing?16:30
mriedemas noted, it's no worse than what we have today16:31
*** alex_xu has joined #openstack-nova16:31
dansmithmriedem: no, I didn't have it in my head, but Idon't think it changes anything16:31
cfriesenmriedem: efried: why would the compute node provider report the shared storage amounts?  shouldn't it just report that it has access to a particular shared storage?16:31
dansmithI don't like the asymmetry, but I think we probably need to keep it to avoid breakage in the meantime16:31
efriedcfriesen Nono, the compute node *has* local storage.16:31
dansmithit's really the resourcesN being different that I hadn't grokked16:31
*** baoli has joined #openstack-nova16:32
efriedcfriesen So it's got a local disk, and it's also attached to a SAN or whatever.  The former is reported in the compute node RP; the latter via the shared RP.16:32
dansmithwhat efried said16:32
dansmiththat's not possible today, but should be eventually16:32
*** markvoelker has quit IRC16:32
cfriesenefried: yeah, that makes sense.   mriedem's comment made it sound like the compute node RP was reporting on sizes of available shared storage16:33
*** markvoelker has joined #openstack-nova16:33
mriedemdon't compute nodes that are getting storage from NFS today report disk_gb for the entire NFS cluster?16:34
cfriesenmriedem: yeah, and that's been a bug for a long time I think.16:34
mriedemso it makes it look like you have potentially 100 computes with 1TB of storage each, but not really16:34
efried < there's the first one.16:34
openstackLaunchpad bug 1724613 in OpenStack Compute (nova) "AllocationCandidates.get_by_filters ignores shared RPs when the RC exists in both places" [Undecided,New]16:34
cfriesenincidentally, who updates the stats for the shared storage RP?   is there an auditor somewhere?16:35
mriedemyeah so i wasn't sure how we are going to fix the NFS disk reporting thing with shared providers, because the compute service is reporting that disk_gb16:35
mriedemcfriesen: was supposed to be external to nova16:35
mriedemlike how neutron reports IP allocation pools16:35
efriedmriedem Multiple compute services report the same inventory, but to the same RP, because the shared RP has a UUID that they can all agree on.16:36
efriedI think that's what jaypipes update_inventory_if_needed thingy is for (or whatever it's called)16:36
mriedemefried: i don't think that's quite accurate16:36
mriedemeach compute node get_inventory is going to report what it thinks it's local disk is,16:36
mriedembut it can't tell if it's shared or not16:36
efriedWe talking Q or later?16:37
mriedemthis is why we have the 'is_shared_storage' ssh stuff during migration16:37
mriedemwell, shared storage is not Q, so later,16:37
mriedembut this is why it's not Q, among other reasons16:37
efriedYeah, so each compute node happily reports all the storage, but the conductor (or whatever is doing the rollup) can see that those inventories are coming from the same RP, so it can report the total just once rather than adding it up.16:38
*** sshwarts has quit IRC16:38
*** markvoelker has quit IRC16:38
dansmithefried: mriedem right, computes will need to not report shared storage16:38
efriedBut wait, the compute node shouldn't be reporting inventory in the compute node RP for storage that's in an aggregate.16:38
dansmithnot just report the same16:38
efriedyeah, that %16:38
dansmithcomputes will report their storage if they have some, else none16:38
openstackgerritMerged openstack/nova-specs master: Granular Resource Request Syntax
mriedemefried: the computes aren't aware of the aggregate16:38
efrieddansmith So who's responsible for creating the shared storage RP and its inventory?16:39
dansmithefried: some storage agent16:39
dansmithlike neutron does16:39
mriedemthinking back on pike issues, this was also a thing where the scheduler would claim disk_gb on the shared storage RP, but the compute would overwrite the instance disk_gb allocation against it's local compute node16:39
cdentmriedem: the rt has an  aggregate map (as yet unused)16:39
mriedembecause it wasn't aware of the aggregate relationship for disk16:39
cdentthat was supposed to allow it to be able to report inventory correctly16:40
cdentonce shared exists16:40
mriedemcdent: ah, fun16:40
mriedemi remember working a patch for one afternoon late in pike rc time trying to sort out how to not get the rt to overwrite the shared disk allocation and it went down the hole fast16:40
mriedeminvolved basically reverse engineering the logic in placement and the scheduler, from the rt16:41
mriedembut, another reason why we don't want the RT trying to figure out allocations16:41
cdentthe aggregate_map still leaves open the question of whether a compute node can have both local and shared, which, unsure16:41
*** rushiagr has quit IRC16:41
*** andreas_s has joined #openstack-nova16:41
*** derekh has quit IRC16:42
mriedemyeah idk, couldn't you mount an NFS share on a compute and configure the instance path to use that share, but leave root and everything else that's local disk for the OS and running nova-compute?16:42
mriedemlike, i want all my instance and image crap to go in the NFS share16:42
mriedemleave local disk for everything else16:42
*** mvk has quit IRC16:42
dansmiththere's lots of things that need to change on compute to make that duality possible16:43
dansmiththe easiest to do today would be local disk + ceph I think,16:43
*** gyee has joined #openstack-nova16:43
dansmithsince it's not fighting over /var/lib/instances16:43
* dansmith imagines the above, but with correct grammar16:43
*** suresh12 has joined #openstack-nova16:44
mriedemok well this is why no shared storage support in queens :)16:44
mriedemgranular request syntax spec approved16:44
dansmithefried: get to work16:44
* dansmith runs off for a bit16:44
efriedThanks for that.  (jaypipes should really read it at some point, since it was his idea.)16:45
mriedemartom: you want to start working on squashing into ?16:45
*** AlexeyAbashkin has quit IRC16:47
*** vks1 has joined #openstack-nova16:48
*** suresh12 has quit IRC16:48
jmccarthyTrying to figure out about using horizon to view instance console (instance on xen compute) - the console.log works, but not the console, any ideas ?16:50
*** andreas_s has quit IRC16:50
*** Sukhdev has quit IRC16:50
*** andreas_s has joined #openstack-nova16:51
artommriedem, sure16:52
artomI wonder what'll come first - CI allowed you to merge it, or the next solar eclipse16:52
mriedemor me going to lunch16:54
artomMaybe you could eat the sun, two birds with one stone16:55
*** andreas_s has quit IRC16:55
mriedemi'm gonna need some psilocybin to pull that off16:56
openstackgerritsean mooney proposed openstack/nova-specs master: Use neutron's new port binding API
artomThat was an awfully erudite way of asking for shrooms16:57
*** jmlowe has quit IRC16:58
*** yikun has quit IRC16:59
openstackgerritArtom Lifshitz proposed openstack/nova stable/newton: Use VIR_DOMAIN_BLOCK_REBASE_COPY_DEV when rebasing
artommriedem, ^^17:00
*** ralonsoh has quit IRC17:01
*** sdague has quit IRC17:04
*** ociuhandu has joined #openstack-nova17:10
*** jpena is now known as jpena|off17:11
*** Swami has joined #openstack-nova17:12
openstackgerritJohn Garbutt proposed openstack/nova master: Keep updating allocations for Ironic
*** rha has quit IRC17:14
*** vvargaszte has joined #openstack-nova17:14
*** rushiagr has joined #openstack-nova17:14
*** suresh12 has joined #openstack-nova17:15
*** Swami has quit IRC17:15
*** Swami has joined #openstack-nova17:16
*** andreas_s has joined #openstack-nova17:18
*** rha has joined #openstack-nova17:19
*** suresh12 has quit IRC17:20
*** felipemonteiro_ has joined #openstack-nova17:20
*** felipemonteiro__ has joined #openstack-nova17:21
jmccarthyAny ideas about how the console connection is set up so it can work via horizon ? (with instances on xen)17:21
*** huanxie has joined #openstack-nova17:22
sean-k-mooneystephenfin: just looked at it; there are some minor inaccuracies in the spec but nothing i care about enough to warrant updating now that it's merged17:23
*** Apoorva_ has joined #openstack-nova17:23
*** dtantsur is now known as dtantsur|afk17:24
*** lpetrut has quit IRC17:25
*** felipemonteiro_ has quit IRC17:25
*** Apoorva has quit IRC17:26
sean-k-mooneyjmccarthy: i believe with xen you can still use spice/vnc or xen's own console17:26
openstackLaunchpad bug 1724633 in OpenStack Compute (nova) "AllocationCandidates.get_by_filters hits incorrectly when traits are split across the main RP and aggregates" [Undecided,New]17:26
jmccarthysean-k-mooney: True, those do work (at least the xen console that I tried earlier). I was hoping to see about getting it going via horizon17:27
sean-k-mooneyjmccarthy: connecting it to horizon should work; all you need to do is deploy the same proxy service on dom0 that you would use for kvm17:28
sean-k-mooneyhorizon basically uses an html5 canvas and a websocket to deliver the vnc/spice console17:29
melwittlooks like zuul was restarted again, so check to see if you need to recheck your patches17:29
jmccarthysean-k-mooney: Not sure I follow here about the proxy service that is needed on the xen compute ?17:30
*** suresh12 has joined #openstack-nova17:31
cdentthanks efried will pay real attention a bit later17:31
sean-k-mooneyjmccarthy: actually it does not need to be on the computes. for kvm we deploy nova-novncproxy on the controller and it connects to the hypervisor vnc console and proxies it to horizon via a websocket17:31
*** andreas_s has quit IRC17:31
efriedcdent The bug doesn't have anything you don't already know.  I'll post the reviews in a bit and poke ya.17:31
cdents’cool, good to have on radar17:32
jmccarthysean-k-mooney: Ah ok yep I have that - I have both types of computes in this setup actually17:32
*** suresh12 has quit IRC17:33
*** amodi has joined #openstack-nova17:34
sean-k-mooneyjmccarthy: ya so basically on the nova compute nodes you have to add either a vnc or spice section to the nova.conf or nova-compute.conf that points the hypervisor at the proxy17:34
sean-k-mooneyjmccarthy: something like this
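(The paste link above didn't survive the log. As a hedged sketch, the kind of [vnc] section being described might look like the following; the option names and addresses are illustrative, and exact names vary by nova release — older releases use vncserver_listen / vncserver_proxyclient_address.)

```ini
[vnc]
enabled = true
# address the hypervisor's VNC server listens on (on the compute node)
server_listen =
# address nova-novncproxy uses to reach the compute node
server_proxyclient_address =
# URL horizon hands to the browser, served by nova-novncproxy
novncproxy_base_url =
```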
mriedemneed another specs core on this one
jmccarthysean-k-mooney: I think I have that tho ? The console in horizon is working for instances on kvm compute17:35
sean-k-mooneyjmccarthy: is xen configured to use a xen console for guests or a vnc console? you will have to match them up17:35
*** Sukhdev_ has quit IRC17:36
*** andreas_s has joined #openstack-nova17:36
sean-k-mooneyjmccarthy: are you using xen via libvirt17:38
jmccarthysean-k-mooney: yep17:38
sean-k-mooneyjmccarthy: if so dump the xml of the xen instance and check if it's using vnc like kvm or using the xen console17:38
sean-k-mooneyyou have to pass the -c xen:// option to virsh to see the instance by default17:39
sean-k-mooneyso virsh -c xen:// list   then virsh -c xen:// dumpxml <domain name>17:39
*** suresh12 has joined #openstack-nova17:39
jmccarthysean-k-mooney: I see this console bit ?
sean-k-mooneyjmccarthy: can you show me the full xml17:41
sean-k-mooneyjmccarthy: it should look something like
sean-k-mooney<graphics type='vnc' port='5900' autoport='yes' listen='' keymap='en-us'>17:42
sean-k-mooney      <listen type='address' address=''/>17:42
sean-k-mooney    </graphics>17:42
sean-k-mooneyso its using vnc17:42
sean-k-mooneyso you need to configure the nova conf also for vnc17:43
*** Apoorva_ has quit IRC17:44
*** gszasz_ has quit IRC17:44
*** Apoorva has joined #openstack-nova17:44
jmccarthysean-k-mooney: Oh I think vnc is off, spice is enabled17:45
sean-k-mooneyjmccarthy: :) that would cause issues. vnc works better in general17:46
*** sambetts is now known as sambetts|afk17:46
sean-k-mooneyat least via horizon17:46
openstackgerritEric Fried proposed openstack/nova master: Test alloc candidates with same RC in cn & shared
efriedcdent ^ thar she blows.17:47
efriedmriedem dansmith ^ demonstrates those two bugs I mentioned earlier.17:47
*** mvk has joined #openstack-nova17:48
jmccarthysean-k-mooney: Ok I have to re-jig this and try it again17:49
jmccarthysean-k-mooney: Thanks :) !17:49
*** abhi89 has joined #openstack-nova17:49
*** catinthe_ has joined #openstack-nova17:53
*** catintheroof has quit IRC17:53
*** andreas_s has quit IRC17:54
*** sahid has quit IRC17:54
*** rodolof has quit IRC17:54
*** rodolof has joined #openstack-nova17:55
*** MVenesio has joined #openstack-nova17:56
*** lbragstad has quit IRC17:56
* edleafe notices that efried studied at the jaypipes school of variable naming17:58
*** catintheroof has joined #openstack-nova17:59
*** vks1 has quit IRC17:59
*** andreas_s has joined #openstack-nova17:59
*** vvargaszte has quit IRC18:00
*** ociuhandu has quit IRC18:00
*** catinthe_ has quit IRC18:01
*** avolkov`` has quit IRC18:01
*** baoli has quit IRC18:02
*** huanxie has quit IRC18:03
*** baoli has joined #openstack-nova18:04
*** ragiman has quit IRC18:04
*** catinthe_ has joined #openstack-nova18:05
*** catintheroof has quit IRC18:07
jmccarthysean-k-mooney: Oh actually it is vnc hmm18:08
*** baoli has quit IRC18:13
dimsedleafe : ah ivy league school :)18:13
edleafedims: :)18:14
*** baoli has joined #openstack-nova18:14
*** catintheroof has joined #openstack-nova18:14
*** catinthe_ has quit IRC18:15
*** andreas_s has quit IRC18:16
*** lbragstad has joined #openstack-nova18:16
*** lpetrut has joined #openstack-nova18:19
*** AlexeyAbashkin has joined #openstack-nova18:19
*** Sukhdev has joined #openstack-nova18:20
*** AlexeyAbashkin has quit IRC18:24
*** tonygunk has quit IRC18:24
openstackgerritElod Illes proposed openstack/nova master: Transform keypair.import notification
*** andreas_s has joined #openstack-nova18:26
*** ijw has quit IRC18:27
*** Tom__ has joined #openstack-nova18:27
*** vvargaszte has joined #openstack-nova18:29
*** weshay is now known as weshay|ruck|brb18:30
openstackgerritmelanie witt proposed openstack/nova master: DNM: Test websocketproxy with TLS
*** slaweq_ has joined #openstack-nova18:33
*** sdague has joined #openstack-nova18:33
*** andreas_s has quit IRC18:33
efriededleafe Where, in that test case?  I was just following examples in the rest of the file.  Thought I was mimicking alex_xu - but apparently he was mimicking jaypipes :)18:34
openstackgerritmelanie witt proposed openstack/nova master: DNM: Test websocketproxy with TLS
efriedjaypipes I was about to start tackling the functional test failures in but don't want to step on you if you're on it.18:37
*** slaweq_ has quit IRC18:37
*** jmccarthy has left #openstack-nova18:40
jaypipesefried: looking..18:40
jaypipesdims: that would be an i v lg schl18:41
jaypipesdims: or, just to piss off edleafe and dansmith, it would be:18:43
*** markvoelker has joined #openstack-nova18:44
*** crushil has quit IRC18:46
* penick thinks winnie stepped on Jay's keyboard18:49
*** vvargaszte has quit IRC18:50
jaypipespenick: :)18:50
dansmithpenick: that's jay's coding style... "indistinguishable from dog-on-keyboard"18:51
* dansmith kids18:51
dansmithit's like pep192 or something I think18:52
dansmithsounds more like it18:53
edleafe"dog_on_keyboard" sounds about right :)18:53
openstackgerritMerged openstack/nova master: placement: set/check if inventory change in tree
jaypipesefried: so, can you brief me on any changes, if any, you made in the n-r-p series please?18:55
jaypipesefried: just rebases or anything functional?18:55
efriedjaypipes Almost entirely rebases, fixing nits, a couple of bugs.  I don't think there was anything functional - lemme skim real quick...18:56
jaypipesefried: I'm gonna pull the top patch (385693) and run tests locally.18:56
openstackgerritChris Dent proposed openstack/nova master: [placement] manage cache headers for resource classes
edleafejaypipes: I know you got a ton to catch up with, but the alternate hosts series is ready for review, starting with
jaypipesedleafe: rock on, will do that as soon as I get those tests running.18:56
efriedjaypipes Okay, should I assume you're taking the reins back at this point, or do you want to do some pair dev on that series?18:56
jaypipesefried: oh, I can take it over again, no prob.18:57
efriedjaypipes I'm totally prepared to fix those test cases if you have more important (or jaypipes-needin) things to do.18:57
jaypipesefried: nope, I'm good to take it back18:57
jaypipesappreciate you, edleafe and dansmith pushing on this while I was off18:57
efriedjaypipes At some point you should give a once-over, make sure it gels with what you conceived at the PTG.18:59
jaypipesefried: yep, it's near the top of my queue.19:00
*** tbachman has quit IRC19:02
efriedjaypipes The only changes of any substance were: 1) -- please double-check me to make sure I referenced the exception from the right spot19:02
efriedjaypipes 2) moved no-op reformatting stuff to a new patch19:03
efriedjaypipes That's it.  The rest were rebases & spelling.19:04
jaypipesefried: coolio. thx.19:05
mriedemdansmith: this newton backport of artom's looks good to me now - it's kind of a frankenpatch with squashes but does the job19:05
mriedemwould be the last one for newton before we release and tag for eol19:05
* dansmith looks19:05
*** ociuhandu has joined #openstack-nova19:06
efriedjaypipes And as soon as you come up for air, would like your input on the possibility of ripping out some of the existing aggregate code until we can figure out how tf it's ever going to work with traits & nested & numbered.19:07
jaypipesefried: well, on that, if it isn't currently affecting anything (other than read queries which cdent has patches up to address perf issues), I'd just as soon leave it be for now. We have lots of other things to get done by SYD19:08
dansmithme too19:08
efriedjaypipes That's just it: it very well may be affecting "things".19:08
*** slaweq_ has joined #openstack-nova19:08
jaypipesefried: well, I'm perfectly happy to see your thoughts on it. just not a huge priority unless you can show me it's causing bugs.19:10
jaypipesefried: besides the general "tech debt bugs" which I certainly admit there are many.19:10
jaypipesok, lemme read this granular spec :)19:10
mriedemdansmith: john put up a patch for the ironic flavor migration allocations thing
mriedemseems ok to me19:13
dansmithmriedem: yeah, sorry, I'm just behind19:14
*** suresh12 has quit IRC19:14
mriedemnp, i just got back from running errands19:14
mriedemyour context targeting thing in pike is also happy for ci now
mriedemsdague: ^ we need to get that released soon19:15
dansmithokay I can hit that one quick19:16
dansmithbecause I know it's perfect19:16
*** weshay|ruck|brb is now known as weshay|ruck19:18
*** edand has quit IRC19:18
*** AlexeyAbashkin has joined #openstack-nova19:19
*** huanxie has joined #openstack-nova19:19
sdaguemriedem: yeh, that seems fine. I wasn't originally sure that the sc reinflates over that, but it looks like it does19:21
*** AlexeyAbashkin has quit IRC19:24
efriedcdent What was the thing where you could modify an allocations/{consumer_uuid} - is that gonna be via PUT or POST?19:26
melwittmriedem: have you seen live migration job fails like this before? raises InstanceNotFound because of VIR_ERR_NO_DOMAIN?
cdentefried: that’s always PUT, and it always replaces19:26
cdentthe post version is in progress at:
efriedcdent Okay.  I'm looking at your schema here and I think it's a little strict.  Will see if you "fixed" it via ^.19:27
melwittmriedem: the "target build notification" patch for ocata failed the live migration job on that and I was wondering if it's just something to recheck or if it's something more19:27
cdentefried: strict how? (entirely possible my jsonschema fu doesn’t allow for handstands)19:28
*** jdavis has joined #openstack-nova19:28
efriedcdent I think you might have done it, but still pondering.  Basically by saying minProperties=1 in a couple places, you would be disallowing resources: {} which was (I thought) how you would say you're deleting those pieces of the allocation.19:28
efriedThough I guess omitting that chunk entirely would also work, since it's replacing.19:29
efriedSo maybe moot.19:29
cdentminProperties is disallowing anything other than the resource provider keyed object19:29
cdentwhich is required on PUT, but will not be required on POST19:30
mriedemmelwitt: was just looking at that one19:30
efriedcdent So what's the POST supposed to do?  Oh, allow you to modify allocations for multiple consumers at once?19:30
mriedemmelwitt: it's this
mriedemknown issue19:31
mriedemqemu-system-x86_64: /build/qemu-GGGtkw/qemu-2.8+dfsg/block/io.c:1514: bdrv_co_pwritev: Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.19:31
melwittokay. I looked at it earlier and it's not related to that patch but I hesitated to blindly recheck19:31
*** salv-orlando has quit IRC19:31
cdentefried: yes, so that the migration uuid stuff that dansmith has is not racey19:31
* efried is up to speed now. Thanks cdent.19:31
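(Editorial aside: the minProperties/required distinction cdent and efried are debating can be sketched in plain Python. This is an illustrative stand-in, not the real placement API jsonschema: minProperties=1 rejects an empty resources object, while a missing resources key trips the separate required check.)

```python
def check_allocation_resources(body):
    """Loose mimic of the two schema checks discussed above: on PUT,
    'resources' is required, and minProperties=1 means it cannot be
    an empty object.  Illustrative only, not placement's real schema."""
    if "resources" not in body:
        return "missing"   # would fail the 'required' check
    if not body["resources"]:
        return "empty"     # would fail minProperties=1
    return "ok"

# Omitting the key and sending an empty object fail differently:
print(check_allocation_resources({}))                          # missing
print(check_allocation_resources({"resources": {}}))           # empty
print(check_allocation_resources({"resources": {"VCPU": 1}}))  # ok
```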
melwittmriedem: cool, thanks19:31
openstackLaunchpad bug 1706377 in OpenStack Compute (nova) "(libvirt) live migration fails on source host due to "Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed."" [Undecided,Confirmed]19:31
mriedemwe can't fingerprint that in e-r because the failure is in the qemu instance logs, which we don't index19:32
jaypipesefried: k, just read the granular resource requests spec. lgtm.19:34
efriedjaypipes Cool, thanks for going through it.19:35
*** salv-orlando has joined #openstack-nova19:35
efriedcdent Is there a patch somewhere that's adding the project_id/user_id to the GET /allocations/{consumer_uuid} response?19:35
cdentyeah, it’s an antecedent to the thing that adds the schema:
cdentsorry wrong paste19:36
mriedemmelwitt: now you just need to find another stable core to approve it :)19:38
efriedcdent Okay, and where does the api-ref get updated?19:38
efriedcdent This sucker:
mriedemefried: ?19:38
mriedemor the actual infra that updates the page?19:39
cdentefried: it’s not done yet. I haven’t decided where to put it. Could go in a more generic
cdentefried: so please feel free to put a -1 somewhere in there that says “oi, you’ve not done the docs yet"19:40
*** suresh12 has joined #openstack-nova19:40
efriedcdent Okay, cool, that's what I wanted to know.  (mriedem specifically the doc change that adds the project/user id to that api-ref piece)19:40
cdentefried: thanks for the attention to detail, the content of that stack has expanded a lot since I started, so some steps have been lost19:42
efriedcdent Swhat I'm here for.  Eventually I'll be useful for something other than pointing out typos and missed paperwork.19:42
dansmithefried: don't forget rebasing19:43
dansmithyou can rebase a mofo like a sonofabee19:43
*** mlavalle has joined #openstack-nova19:44
mriedemsdague: can you get this pike backport and the one below?19:44
mlavallemriedem: any chance we can get nova team eyes on the latest revisions of and
dansmithedleafe: cdent: so that limits or {} thing.. limits is nullable in reqspec.. are we sure we need do that thing?19:45
dansmithalso, I'm not clear on what part of this series requires this when we didn't before19:46
* dansmith bets edleafe is super glad he's bringing this one line change up again19:46
mriedemmlavalle: honestly i think the bandwidth provider thing is not going to happen in queens19:46
mriedemgiven the various dependencies and overall complexity19:46
*** csuttles_ has quit IRC19:47
mlavallemriedem: dependencies meaning functionality that is being developed on the Nova side?19:47
mriedemmlavalle: yes19:47
mriedemthere are some major challenges to overcome before we can even get to bandwidth provider19:48
mriedemmlavalle: for example, it relies on the compute pushing the bandwidth allocations and we're moving away from the computes trying to create allocations19:48
*** huanxie has quit IRC19:50
openstackgerritChris Dent proposed openstack/nova master: [placement] manage cache headers for usages
edleafedansmith: it caused multiple unit test failures without the 'or {}19:54
edleafedansmith: not sure if the change back to including limits makes that no longer the case.19:54
dansmithedleafe: lol. well we should put that in the comment then, eh?19:55
dansmithedleafe: I'd like to understand why we're making that change if we are19:55
openstackgerritJay Pipes proposed openstack/nova master: placement: adds REST API for nested providers
openstackgerritJay Pipes proposed openstack/nova master: placement: update client to set parent provider
jaypipesefried: fixed. ^19:55
cdentgoodnight all19:56
*** cdent has quit IRC19:56
mriedemgdi i hate reviewing specs in new gerrit19:57
*** crushil has joined #openstack-nova19:57
mriedemjumps all over the place when trying to expand comments19:57
jaypipesmriedem: yuuuuup. drives me friggin nuts.19:57
*** slaweq_ has quit IRC19:57
*** penick has quit IRC19:57
edleafedansmith: re-running tests with that change reverted - will let you know19:58
dansmithedleafe: thanks19:58
dansmithmriedem: jaypipes not just in firefox19:58
dansmithmriedem: left one nit and one real comment in johnthetubaguy's ironic patch19:59
dansmithI can fix the nit (and the functional bit too I guess) if you agree19:59
*** xyang1 has quit IRC19:59
*** armax has quit IRC19:59
*** lpetrut has quit IRC20:00
*** lpetrut has joined #openstack-nova20:00
*** penick has joined #openstack-nova20:01
*** salv-orlando has quit IRC20:02
*** salv-orlando has joined #openstack-nova20:02
*** gjayavelu has quit IRC20:04
dansmithmriedem: I do it in with statements because python requires it :P20:05
*** yamahata has joined #openstack-nova20:05
dansmithmriedem: so you think we have no situations, other than moves that ironic doesn't support, where we don't want said stomping?20:06
*** gyee has quit IRC20:07
*** salv-orlando has quit IRC20:07
mriedemmy brain is kind of fried,20:08
mriedemat this point, you're likely to blow your foot off either way20:08
mriedembecause of the minefield i mean20:08
mriedemthe stomp was really because of moves from ocata to pike nodes i thought, scheduler would claim on both (double stuff) and ocata periodic would overwrite the allocations created by the scheduler,20:09
mriedemand then the dest node would overwrite the allocations again once the move was done20:09
mriedemyou can't resize, live migrate or unshelve an ironic instance20:09
mriedemand with evacuate, the source host isn't running the stomper20:09
*** gjayavelu has joined #openstack-nova20:09
*** abhi89 has quit IRC20:10
mriedemi could be missing something20:12
mriedemlike i said, minefield20:12
*** MVenesio has quit IRC20:12
mriedemseinfeld minefield20:12
*** pchavva has quit IRC20:12
*** vvargaszte has joined #openstack-nova20:12
mriedem"what's the deal with all these allocations?!"20:12
*** esberglu has quit IRC20:14
openstackgerritDan Smith proposed openstack/nova master: Keep updating allocations for Ironic
*** esberglu has joined #openstack-nova20:15
*** awaugama has quit IRC20:15
mriedemdansmith: jaypipes: change the rendering setting to 'slow'20:16
mriedemseems to help20:16
mriedemsmcginnis: ++20:16
smcginnismriedem: It helped even more in the last gerrit. This one has some more quirks, but I think it's better.20:17
dansmithI don't even know what that would mean, but... cool20:17
*** openstackgerrit has quit IRC20:17
*** lpetrut has quit IRC20:17
mriedemgerrit settings20:18
dansmithI mean I dunno what fast rendering would be20:18
*** vvargaszte has quit IRC20:18
dansmithit all seems pretty slow to me :)20:18
*** salv-orlando has joined #openstack-nova20:18
mriedemthey should rename the setting to "stop fucking up my gd cursor placement fucker"20:19
mriedemi agree20:19
*** eharney has quit IRC20:19
*** esberglu has quit IRC20:19
mriedemefried: thanks for helping to review this
mriedemit's definitely hairy20:20
efriedmriedem phew, thought I was just being dense.20:20
smcginnismriedem: +1 to that name, it would be much more accurate.20:21
mriedemno man, it's building on like all of the worst things20:21
mriedemport orchestration, nested resource providers, scheduling, neutron agent stuff, etc etc20:21
*** Sukhdev has quit IRC20:21
*** acormier has joined #openstack-nova20:22
*** liverpooler has quit IRC20:22
*** smatzek has quit IRC20:24
*** smatzek has joined #openstack-nova20:25
*** smatzek_ has joined #openstack-nova20:27
mriedemsean-k-mooney: in - so when nova binds a port to a selected host, neutron will put some kind of allocation information in the port binding profile, and nova will proxy that allocation information to placement?20:29
mriedemand the suggested flow is:20:29
mriedem1. conductor asks scheduler for a host20:29
*** smatzek has quit IRC20:29
mriedem2. scheduler filter looks for ports with a qos policy and if found, gets allocation candidates for hosts that have a nested bw provider20:29
mriedem3. scheduler returns host to conductor20:29
mriedem4. conductor binds the port to the host20:30
mriedem5. the bound port profile has some allocation juju that nova proxies to placement as an allocation request for the port on the bw provider20:30
mriedem6. conductor sends to compute to build the instance20:30
mriedem7. compute activates the bound port20:30
mriedem8. compute plugs vifs20:30
mriedem9. profit?!20:30
*** smatzek_ has quit IRC20:31
mriedemwhy can't the allocation juju happen on the neutron side when the port is bound20:31
*** suresh12 has quit IRC20:33
mriedemi seem to remember some discussion at the ptg about it being cool that nova will proxy the allocation creation for neutron, but i'm not sure why20:33
mriedemmlavalle: ^ do you remember?20:33
mriedemultimately we don't want nova managing resource allocations for network things20:34
*** esberglu has joined #openstack-nova20:34
mriedemespecially out of band things like qos policies in neutron20:34
*** armax has joined #openstack-nova20:35
efriedmriedem This may be slightly off topic, but who will be the consumer of the allocations related to the port?  The port itself, or the instance?20:36
*** tpatzig_ has joined #openstack-nova20:37
mriedemi think the port20:38
*** tpatzig_ has quit IRC20:39
*** suresh12 has joined #openstack-nova20:39
*** esberglu has quit IRC20:39
*** esberglu has joined #openstack-nova20:40
mriedemit's a bit confusing in the spec20:40
mriedem"A virtual machine port can consume bandwidth from one of these Resources20:40
mriedem"By the time the port is bound, the bandwidth allocated to the instance is already recorded."20:40
sdaguemriedem: I looked at the stable stuff, should be good. You still need somone else on the top patch20:40
mriedemsdague: yeah, thanks - i can find someone20:40
mriedemsomeone named dan20:41
dansmithlink m20:41
*** esberglu has quit IRC20:41
*** esberglu has joined #openstack-nova20:41
mriedemcan't find it20:42
dansmithoffer expires in 3....2...20:42
*** gouthamr has quit IRC20:43
*** tssurya_ has joined #openstack-nova20:48
efriedmriedem The port is actually the only thing that makes sense.20:48
efriedOtherwise we would either have to a) allocate the network resources and the other instance resources at the same time, i.e. from the scheduler; or b) overwrite the allocation from the scheduler so it goes from [just the port] to [port plus other instance resources]20:49
efriedEither one of those things would be pretty weird.20:50
efriedmm, maybe a) wouldn't be so weird.20:50
mriedemthe spec is proposing that we do (a)20:52
mriedemit's saying the neutron agent populates the bw provider inventory,20:53
mriedemand when nova binds a port to a host, neutron will return information about the allocation in the binding profile,20:53
mriedemand nova will create the allocation in placement for the port on the bw resource class on behalf of neutron20:53
mriedemthat's what i don't agree with20:53
*** catintheroof has quit IRC20:53
mriedemwe don't want nova-compute managing port bw allocations20:53
*** catintheroof has joined #openstack-nova20:54
mriedemwhich also means nova would then be responsible for removing allocations when migrating the instance to another host20:54
mriedemi'm suggesting that neutron manage the allocations when the port is bound or unbound, which is triggered by nova20:54
*** sbezverk has quit IRC20:54
mriedemif the allocation on the neutron side fails, the port binding fails and we have to reschedule with another host providing bw resources20:54
mriedemif we do the port binding in the conductor, then that reschedule isn't as expensive as doing it in the compute20:55
mriedembut that also means this depends on picking up johnthetubaguy's network-aware scheduling stuff to move port creation etc to conductor20:55
mriedemso let's recap - deps are nested RPs, port binding API integration in nova, prep for nw aware scheduling, plus some API changes in neutron20:56
mriedem== me telling mlavalle it's probably DOA for queens20:56
*** andreas_s has joined #openstack-nova20:57
*** tbachman has joined #openstack-nova20:57
*** catintheroof has quit IRC20:58
melwittare we having a cells meeting today or no?21:01
mriedemi forgot21:02
mriedemdon't think we have much to talk about do we?21:02
mriedemneed reviews on alternate hosts21:02
melwittI don't, but just wanted to ask in case I missed something21:02
mriedemand dan has one more thing for the efficient instance listing weirdness with cell021:02
mriedemi think i tipped dan over with one too many stable reviews21:03
*** suresh12 has quit IRC21:03
tssurya_so that means no meeting right ?21:03
*** suresh12 has joined #openstack-nova21:04
*** thorst has quit IRC21:04
melwitttssurya_: yeah, I guess so. did you have anything you wanted to talk about or bring to attention?21:04
tssurya_melwitt : nothing in particular21:05
melwittk, me either21:05
tssurya_melwitt : just wanted to discuss about this : which we can do in the channel I guess21:05
openstackLaunchpad bug 1724621 in OpenStack Compute (nova) "nova-manage cell_v2 verify_instance returns a valid instance mapping even after the instance is deleted" [Undecided,New] - Assigned to Surya Seetharaman (tssurya)21:05
*** andreas_s has quit IRC21:06
dansmithsorry I apparently hit dismiss earlier on my cells reminder21:06
dansmithand then got distracted21:06
dansmithmriedem: tssurya_ melwitt sorry about that21:06
tssurya_melwitt : saw your comment,21:06
*** andreas_s has joined #openstack-nova21:06
tssurya_dansmith : no problem :)21:07
dansmithI literally did my errands earlier so I'd be around for it,21:07
dansmithand then one click spoiled it all :)21:07
dansmithtssurya_: did your patch merge?21:07
tssurya_dansmith : yes it did :D21:07
efriedmriedem Okay, you may want to call out specifically then.  It says the conductor is handling the resource classes and allocations.21:08
dansmithtssurya_: so regarding the bug above, melwitt is right, we can't remove those instance mappings until the instance is removed from the cell db21:08
dansmithtssurya_: however, there's a work item that needs to be done if you're looking for another thing to work on21:08
mriedemefried: i did elsewhere above21:09
dansmithtssurya_: (related to this I mean)21:09
efriedOh, dansmith is in stable easy +A mode?  you're welcome21:09
tssurya_dansmith :  sure but is there a way to remove the instances ?21:09
dansmithefried: haven't I approved enough stuff for you lately?21:09
mriedemtssurya_: yeah, nova-manage db archive_deleted_rows21:09
dansmithtssurya_: yes, that's the work item :)21:09
dansmithmriedem: DUDE21:09
dansmithmriedem: let me tell the story21:09
tssurya_just like we now have a way to remove the hosts
efrieddansmith "for me"?  It's all "for nova"!21:09
mriedemoh, oops21:09
efriedThere is no efried.  Only zuulv321:09
mriedemwell gather round children,21:10
mriedemit's uncle dan's story time21:10
*** gyee has joined #openstack-nova21:10
dansmithtssurya_: so we remove instances once they're marked as deleted by "archiving" them with the command that mriedem mentioned21:10
mriedemlong ago in a land called north carolina lived a boy named dan21:10
*** gyee has quit IRC21:10
*** andreas_s has quit IRC21:11
dansmithtssurya_: that moves them from the instances table to shadow_instances, where there are no constraints and then they can be deleted or dumped out to an archive log or something21:11
dansmithtssurya_: right now, if you do this, you leave instance mappings for those instances which are no longer there forever, which is bad21:11
tssurya_mriedem, dansmith : I have seen the review of this nova-manage db archive_deleted_rows21:11
*** gyee has joined #openstack-nova21:11
dansmithtssurya_: while they're deleted=yes, they need their mapping, but once they get archived, the mapping should go away21:11
*** jdavis has quit IRC21:11
*** gyee has quit IRC21:11
dansmithtssurya_: so that archive command needs to follow up after the archival by deleting instance mappings from the api database21:11
dansmithtssurya_: if you want to fix that since you're familiar with nova-manage now, that would be super awesome21:12
dansmithgenerations of DBAs will thank you for years to come21:12
tssurya_dansmith : yes please would love to do it21:12
dansmithwoot ;)21:12
mriedemuh, deleted=id21:12
dansmithmriedem: in the object, it's boolean, c'mon21:13
mriedemoh sheesh21:13
dansmithtssurya_: you could re-use your bug for this, just maybe add a comment or change the title slightly to s/deleted/archived/21:13
mriedemonce you've mastered removing instance mappings for super gone instances,21:13
mriedemyou can do the same for request specs21:13
dansmithoh yeah, tssurya_ you should delete the reqspec at the same time21:13
dansmithgood call21:14
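(Editorial aside: the work item dansmith hands off here can be sketched in pure Python. The real fix would be SQL against the API database; the dicts below are hypothetical stand-ins for the instance_mappings and request_specs tables.)

```python
def purge_after_archive(archived_uuids, instance_mappings, request_specs):
    """After archive_deleted_rows moves instances to the shadow tables,
    drop the now-orphaned instance_mappings and request_specs entries,
    keyed by instance UUID.  Illustrative sketch only."""
    for uuid in archived_uuids:
        instance_mappings.pop(uuid, None)
        request_specs.pop(uuid, None)

mappings = {"uuid-1": "cell1", "uuid-2": "cell1"}
specs = {"uuid-1": {"flavor": "m1.small"}, "uuid-2": {"flavor": "m1.large"}}
purge_after_archive(["uuid-1"], mappings, specs)
print(sorted(mappings))  # ['uuid-2']
```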
tssurya_dansmith, mriedem : yes shall do ; is it this one ?
openstackLaunchpad bug 1678056 in OpenStack Compute (nova) "RequestSpec records are never deleted when destroying an instance" [High,In progress] - Assigned to Sylvain Bauza (sylvain-bauza)21:14
dansmithtssurya_: yep21:14
*** john5223 has quit IRC21:14
dansmithtssurya_: you'll also have bauzas' gratitude for finishing his unfinished work :P21:15
tssurya_dansmith , mriedem : will get them done then :D21:15
mriedemalso, my porch is suddenly full of wasps and spiders and i need someone to take care of that for me21:15
mriedemalthough it is kind of spooky just in time for halloween21:15
*** john5223 has joined #openstack-nova21:15
*** salv-orlando has quit IRC21:16
*** salv-orlando has joined #openstack-nova21:18
*** ijw has joined #openstack-nova21:18
*** AlexeyAbashkin has joined #openstack-nova21:19
*** suresh12 has quit IRC21:20
*** gouthamr has joined #openstack-nova21:20
*** salv-orlando has quit IRC21:21
*** suresh12 has joined #openstack-nova21:21
*** salv-orlando has joined #openstack-nova21:23
*** smatzek has joined #openstack-nova21:23
*** ociuhandu has quit IRC21:24
*** AlexeyAbashkin has quit IRC21:24
*** andreas_s has joined #openstack-nova21:25
<mlavalle> mriedem: I responded to your comment in the spec21:25
<mlavalle> I don't recall either that conversation during the PTG, but your feedback in the spec makes sense21:26
*** eharney has joined #openstack-nova21:26
*** acormier has quit IRC21:27
*** andreas_s has quit IRC21:29
*** Sukhdev has joined #openstack-nova21:29
*** openstackgerrit has joined #openstack-nova21:30
<openstackgerrit> Eric Fried proposed openstack/nova master: Service user token requested with no auth
*** chyka has quit IRC21:32
<mriedem> mlavalle: looks like the spec was based on the notes in
<mriedem> and i asked in there about the proxy thing too21:33
* mlavalle looking21:35
*** baoli has quit IRC21:35
*** peter-hamilton has joined #openstack-nova21:35
<openstackgerrit> Chris Dent proposed openstack/nova master: [placement] Clean up TODOs in allocations.yaml gabbit
*** smatzek has quit IRC21:36
*** claudiub has quit IRC21:38
*** tbachman has quit IRC21:39
*** jmlowe has joined #openstack-nova21:39
<efried> Thanks dansmith.  edmondsw FYI, is in the gate.21:41
<openstackgerrit> Merged openstack/nova master: Send Allocations to spawn
<efried> Thanks johnthetubaguy too ^21:41
<openstackgerrit> Merged openstack/nova master: docs: Explain the flow of the "serial console" feature
<edmondsw> efried saw... tx everyone21:41
<tonyb> Will writing an external app to listen to the notifications bus 'void my warranty?' or otherwise make y'all sad?21:42
*** andreas_s has joined #openstack-nova21:42
<tonyb> I need to clean up an external system when an instance is deleted and I think that's the least gross way to do it21:43
<mriedem> there are lots of things that do that already21:43
<mriedem> telemetry, searchlight21:43
<mriedem> designate sink i think21:43
<mriedem> tonyb: use the versioned notifications21:43
<tonyb> okay cool.  That's kinda what I thought, but they're OpenStack and this thing would be far from it21:44
*** edmondsw has quit IRC21:44
<mriedem> oh well if it's not openstack it's not allowed21:44
<tonyb> mriedem: Yup, no point writing to the old thing21:44
<tonyb> mriedem: okay21:44
<mriedem> i'm joking21:44
* tonyb hangs head low and goes to look for another approach21:44
<mriedem> about the not openstack thing21:44
<mriedem> cross-pollination is the name of the game nowadays21:45
<tonyb> mriedem: of course I'll push for it but the decision isn't really mine to make21:46
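[Editor's note: the approach mriedem recommends above — an external service consuming nova's versioned notifications to clean up when an instance is deleted — looks roughly like the sketch below. In a real deployment this endpoint class would be registered with an oslo.messaging notification listener on the versioned notifications topic; here the dispatch is simulated, and the payload shape is a simplified assumption.]

```python
# Minimal sketch of an external consumer of nova versioned notifications.
class InstanceCleanupEndpoint:
    """Handles notification dispatch for an external system (hypothetical)."""

    def __init__(self):
        self.cleaned = []  # stand-in for the external system's own records

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # Versioned notifications carry the event type separately from a
        # versioned-object-style payload; key off instance deletion only.
        if event_type == "instance.delete.end":
            uuid = payload["nova_object.data"]["uuid"]
            self.cleaned.append(uuid)
```

As the conversation notes, telemetry, searchlight, and designate sink already consume these notifications in a similar way, so a non-OpenStack consumer is following an established pattern.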
*** openstackgerrit has quit IRC21:48
*** takashin has joined #openstack-nova21:49
*** acormier has joined #openstack-nova21:50
*** edmondsw has joined #openstack-nova21:50
*** andreas_s has quit IRC21:51
*** openstackgerrit has joined #openstack-nova21:53
<openstackgerrit> Takashi NATSUME proposed openstack/nova-specs master: Abort Cold Migration
<openstackgerrit> Takashi NATSUME proposed openstack/nova master: Add 'delete_host' command in 'nova-manage cell_v2'
<openstackgerrit> Takashi NATSUME proposed openstack/nova master: List/show all server migration types (1/2)
*** acormier has quit IRC21:54
*** edmondsw has quit IRC21:54
<openstackgerrit> Takashi NATSUME proposed openstack/nova master: List/show all server migration types (2/2)
<openstackgerrit> Takashi NATSUME proposed openstack/python-novaclient master: Microversion 2.54 - List/Show all server migration types
<openstackgerrit> Takashi NATSUME proposed openstack/nova master: Enable cold migration with target host (1/2)
*** edmondsw has joined #openstack-nova21:56
<openstackgerrit> Takashi NATSUME proposed openstack/nova master: Enable cold migration with target host (2/2)
*** markvoelker_ has joined #openstack-nova21:57
*** edmondsw_ has joined #openstack-nova21:58
*** slaweq_ has joined #openstack-nova21:58
<openstackgerrit> Ed Leafe proposed openstack/nova master: Return Selection objects from the scheduler driver
<openstackgerrit> Ed Leafe proposed openstack/nova master: Change RPC for select_destinations()
<openstackgerrit> Ed Leafe proposed openstack/nova master: Move the claim_resources method to scheduler utils
<openstackgerrit> Ed Leafe proposed openstack/nova master: Make conductor pass and use host_lists
*** edmondsw has quit IRC22:00
*** markvoelker has quit IRC22:00
*** edmondsw_ has quit IRC22:02
*** slaweq_ has quit IRC22:02
*** tssurya_ has quit IRC22:02
*** tbachman has joined #openstack-nova22:02
*** lbragstad has quit IRC22:03
*** tssurya__ has joined #openstack-nova22:04
<edleafe> dansmith: ^^ tests passed now, so I removed the 'or {}'22:05
*** acormier has joined #openstack-nova22:05
*** tssurya__ has quit IRC22:06
<openstackgerrit> Merged openstack/nova master: Move restart_compute_service to a common place
*** gjayavelu has quit IRC22:08
*** gjayavelu has joined #openstack-nova22:09
<openstackgerrit> Merged openstack/nova stable/pike: Regenerate context during targeting
<openstackgerrit> Merged openstack/nova stable/ocata: libvirt: add check for VIR_DOMAIN_BLOCK_REBASE_COPY_DEV
*** acormier has quit IRC22:10
<openstackgerrit> Merged openstack/nova stable/pike: Reproduce bug 1721652 in the functional test env
<openstack> bug 1721652 in OpenStack Compute (nova) pike "Evacuate cleanup fails at _delete_allocation_for_moved_instance" [High,In progress] - Assigned to Matt Riedemann (mriedem)22:10
*** dave-mccowan has quit IRC22:11
*** acormier has joined #openstack-nova22:16
*** jmlowe has quit IRC22:16
*** gyee has joined #openstack-nova22:17
<mriedem> mlavalle: johnthetubaguy: sean-k-mooney: ok more comments in the port binding spec, specifically
<mlavalle> mriedem: thanks I saw them22:17
<mriedem> i think there is a cleaner way to orchestrate some of the old/new flow stuff, which would also allow you to turn on the new flow if the source/dest during live migration are both running queens code22:17
<mriedem> make conductor create the dest host binding, shove that into the migrate_data object, and have the computes key off that for doing stuff with the new dest host binding22:18
<mriedem> also, don't have the virt driver call the neutron api to activate the dest host port binding, that needs to happen via compute manager in a generic way22:18
*** burt has quit IRC22:19
<mriedem> i think the dest host port binding contains the details that are required to be put into the domain xml that goes on the dest host, but the spec isn't real clear about that22:19
<mriedem> but that's probably the stuff that goes in the migrate_data, and the compute keys off it to decide whether it's doing new or old style stuff22:19
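[Editor's note: the flow mriedem outlines above — conductor creates the dest host port binding, stashes it in the migrate_data object, and each compute keys off its presence to pick the new or old live-migration flow — can be sketched as follows. The class and field names are illustrative stand-ins, not nova's real objects.]

```python
# Hypothetical sketch of keying the live-migration flow off migrate_data.
class LiveMigrateData:
    """Simplified stand-in for the migrate_data object in the discussion."""

    def __init__(self, vifs_with_dest_binding=None):
        # Populated by conductor only when both source and dest computes run
        # new enough (Queens-era, per the conversation) code.
        self.vifs_with_dest_binding = vifs_with_dest_binding

def pick_migration_flow(migrate_data):
    """Return which flow a compute should run, keyed off migrate_data."""
    if migrate_data.vifs_with_dest_binding:
        # New flow: the dest binding details feed the domain XML built for
        # the destination host; activating the binding happens in the
        # compute manager in a generic way, not in the virt driver.
        return "new"
    # Old flow: no dest binding was created up front.
    return "old"
```

The benefit of this shape is that the presence of the dest binding in migrate_data is the single switch: conductor decides once, and both computes behave consistently without separately negotiating versions.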
*** acormier has quit IRC22:19
*** acormier has joined #openstack-nova22:20
<mlavalle> mriedem: cool, thanks. I'll probably shoot this to sean-k-mooney later in an email to make sure he sees it first thing in the morning22:21
<mriedem> mlavalle: ok we might need a hangout in the morning22:21
<mriedem> including johnthetubaguy22:21
<mriedem> i feel like the spec goes 2 steps forward and 1 back each time i review it22:21
<mlavalle> mriedem: do you have a time in mind?22:21
<mriedem> nova team meeting is at 9am CT22:22
<mriedem> so how about 10am CT?22:22
*** chyka has joined #openstack-nova22:22
*** lbragstad has joined #openstack-nova22:22
<mriedem> that would be 4pm for john22:22
<mlavalle> mriedem: that sounds good. I have a meeting at that time, but I think I can delegate it22:22
<mriedem> neutron drivers meeting?22:22
<mlavalle> no, L3 sub-team22:23
*** med_ has quit IRC22:23
*** Guest34657 has quit IRC22:23
*** brad[] has quit IRC22:23
*** felipemonteiro__ has quit IRC22:23
<mlavalle> drivers is in the afternoon our time22:23
*** med_ has joined #openstack-nova22:23
<mriedem> well i think if i can get sean and john on a hangout we can work through the major issues22:23
*** mfisch has joined #openstack-nova22:23
<mriedem> i'll set something up22:23
*** mfisch has quit IRC22:23
*** mfisch has joined #openstack-nova22:23
*** brad[] has joined #openstack-nova22:23
*** med_ is now known as Guest9906022:23
<mlavalle> mriedem: cool, thanks22:24
*** acormier has quit IRC22:24
<mlavalle> mriedem: would 8 be too early?22:24
*** lyan has quit IRC22:24
<mriedem> for me yeah22:26
<mriedem> i've got to get my kid cleaned up and on the bus22:26
<mlavalle> ok, cool, I suspected that22:26
<mlavalle> and I know how un-clean kids can be :-)22:27
<mriedem> dansmith: i assume you don't want to be part of this port binding live migration love fest22:28
*** mlavalle has quit IRC22:28
<mriedem> or melwitt22:28
<mriedem> or anyone else in their right mind22:28
<dansmith> mriedem: I certainly can be if you'd like22:28
<mriedem> ok added22:31
<dansmith> I can't friggin wait22:31
<mriedem> i think my recent comments in the spec are the way to do this,22:31
<mriedem> just need sean and john to tell me if i'm missing something22:32
*** edmondsw has joined #openstack-nova22:32
*** amodi has quit IRC22:32
<openstackgerrit> Matt Riedemann proposed openstack/nova stable/pike: Keep updating allocations for Ironic
*** esberglu has quit IRC22:33
*** salv-orlando has quit IRC22:33
*** esberglu has joined #openstack-nova22:33
*** jmlowe has joined #openstack-nova22:33
*** penick has quit IRC22:34
<melwitt> mriedem: I've got another call at that time, so I have to miss out :)22:34
<mriedem> your loss22:36
<melwitt> oh I know22:36
*** edmondsw has quit IRC22:36
*** gouthamr has quit IRC22:41
*** andreas_s has joined #openstack-nova22:42
*** nicolasbock has quit IRC22:43
*** esberglu has quit IRC22:43
*** rodolof has quit IRC22:45
*** andreas_s has quit IRC22:46
<openstackgerrit> Merged openstack/nova stable/pike: fix cleaning up evacuated instances
*** eharney has quit IRC22:52
*** andreas_s has joined #openstack-nova22:56
*** tbachman has quit IRC22:57
*** andreas_s has quit IRC23:00
*** Tom__ has quit IRC23:00
*** Sukhdev has quit IRC23:00
*** dave-mccowan has joined #openstack-nova23:06
*** sdague has quit IRC23:07
*** lbragstad has quit IRC23:09
*** jdavis has joined #openstack-nova23:12
*** ijw has quit IRC23:13
*** jdavis has quit IRC23:17
*** andreas_s has joined #openstack-nova23:18
*** AlexeyAbashkin has joined #openstack-nova23:19
*** hongbin has quit IRC23:21
*** andreas_s has quit IRC23:23
*** AlexeyAbashkin has quit IRC23:23
*** chyka has quit IRC23:31
*** thorst has joined #openstack-nova23:32
*** salv-orlando has joined #openstack-nova23:34
*** ijw has joined #openstack-nova23:34
*** thorst has quit IRC23:36
*** smatzek has joined #openstack-nova23:37
*** ijw has quit IRC23:38
*** salv-orlando has quit IRC23:39
*** Apoorva_ has joined #openstack-nova23:40
*** smatzek has quit IRC23:41
*** Apoorva has quit IRC23:43
*** ijw has joined #openstack-nova23:44
*** acormier has joined #openstack-nova23:44
*** andreas_s has joined #openstack-nova23:46
*** smatzek has joined #openstack-nova23:46
*** Apoorva_ has quit IRC23:47
*** Apoorva has joined #openstack-nova23:47
*** acormier has quit IRC23:49
*** andreas_s has quit IRC23:50
*** ijw has quit IRC23:51
*** tbachman has joined #openstack-nova23:51
*** acormier has joined #openstack-nova23:55

Generated by 2.15.3 by Marius Gedminas - find it at!