14:00:49 <efried> #startmeeting nova
14:00:50 <openstack> Meeting started Thu Feb 20 14:00:49 2020 UTC and is due to finish in 60 minutes.  The chair is efried. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:53 <openstack> The meeting name has been set to 'nova'
14:01:24 <LiangFang> o/
14:01:25 <gibi> o/
14:01:29 <kevinz> o/
14:01:34 <brinzhang__> o/
14:01:35 <alex_xu> o/
14:02:02 <gmann> o/
14:02:03 <stephenfin> o/
14:02:10 <lyarwood> o/
14:02:59 <efried> Hello all!
14:03:04 <efried> #link agenda https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
14:03:09 <efried> Let's roll
14:03:27 <efried> #topic Last meeting
14:03:27 <efried> #link Minutes from last meeting: http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.html
14:03:27 * efried lyarwood to curate rocky EM list (from two weeks ago)
14:03:28 <sean-k-mooney> o/
14:04:06 <lyarwood> efried: yup apologies but I've not found the time to get to this as yet
14:04:17 <lyarwood> efried: I'll try to find time before the end of the week to send this out
14:04:22 <efried> Okay, no worries. Have they officially EM'd the thing yet?
14:04:45 <lyarwood> not officially AFAIK
14:04:48 <gibi> lyarwood: ping elod, he might be able to help
14:04:50 <lyarwood> but it's pending
14:04:57 <lyarwood> gibi: ack will do
14:05:10 <efried> cool, I guess we have until... whenever they do that :)
14:05:22 <efried> I'll keep this on the agenda for next time.
14:05:28 <efried> any other old business?
14:06:06 <efried> #topic Bugs (stuck/critical)
14:06:06 <efried> No Critical bugs
14:06:06 <efried> However, our untriaged bug counts are still climbing.
14:06:06 <efried> 101 'new untriaged' as of yesterday
14:06:14 <efried> #help need help with bug triage
14:06:31 <efried> #link 101 new untriaged bugs (+5 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
14:06:31 <efried> #link 27 untagged untriaged bugs (+6 since the last meeting): https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
14:06:45 <efried> any comments on bugs?
14:06:51 <gibi> >100 so we are doomed.
14:06:58 <gibi> lets move on :)
14:06:59 <efried> ikr
14:07:20 <efried> #topic Release Planning
14:07:20 <efried> #link ussuri planning etherpad https://etherpad.openstack.org/p/nova-ussuri-planning
14:07:20 <efried> Spec freeze has passed.
14:07:21 <gibi> I will try to do some triage at some point in the future :)
14:07:26 <efried> thanks gibi
14:07:38 <efried> so, there are a couple of exceptions on the agenda
14:07:40 <efried> first:
14:07:47 <efried> #link support-volume-local-cache http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012615.html
14:08:06 <gibi> the cinder spec has been approved today
14:08:16 <gibi> I'm supportive of approving the nova spec
14:08:31 <alex_xu> +1
14:08:32 <brinzhang__> gibi: agree, approve this spec
14:09:20 <gibi> dansmith is not supportive but he is not -1 either
14:09:21 <LiangFang> hope the team can approve it, so customers can try this feature and give feedback to improve it in the next release. thanks
14:09:24 <efried> Okay, sounds like gibi and alex_xu are willing to approve the spec. Is it ready now?
14:09:41 <efried> looks like it is, I see those +2s.
14:09:52 <efried> Okay, let's grant the sfe here. Any objections?
14:09:54 <gibi> yeah, it looks good to me
14:10:28 <efried> #action efried unblock and +W the support-volume-local-cache spec after the meeting https://review.opendev.org/#/c/689070/
14:10:33 <efried> next:
14:10:41 <LiangFang> thanks team
14:10:46 <efried> #link destroy-instance-with-datavolume http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012616.html
14:11:16 <efried> Looks like the spec needs a bit of work, but is it pretty close?
14:11:33 <gibi> gmann has a request to use PATCH instead of PUT. I can accept that
14:11:50 <efried> do we have a second core on board here?
14:11:57 <gibi> I think that is the biggest change
14:12:12 <gmann> IMO, that is best way to go and keep already complex swap API as it is.
14:12:19 <sean-k-mooney> the rest are mainly wording nits that can be addressed in a FUP if needed
14:13:01 <gibi> could somebody proxy gmann's and sean-k-mooney's +1 as a +2 ? :)
14:13:37 <alex_xu> what is the body for PATCH action?
14:13:39 <lyarwood> *if* we switch to PATCH right?
14:13:40 <efried> I... don't feel _great_ about that
14:14:15 <efried> brinzhang__: --^
14:14:25 <brinzhang__> I don’t know much about PATCH, please forgive me for not saying too much.
14:14:30 <gmann> delete-termination flag and volume id
14:14:33 <sean-k-mooney> alex_xu: the body would just be {delete_on_terminate:True|false}
14:15:15 <sean-k-mooney> gmann: we could take the volume_id from the url
14:15:18 <gmann> and volume id in the url, or something; we can decide the best possible design
14:15:19 <sean-k-mooney> im not sure that needs to be there
14:15:20 <gmann> yeah
14:15:26 <brinzhang__> using swap volume API, and keep that condition inline, is it ok?
14:15:43 <alex_xu> I'm not sure about the PATCH, I'm ok with existing PUT, the only concern is the policy. there should be another policy for delete_on_termination
14:16:32 <sean-k-mooney> it should be admin or owner right
14:16:47 <alex_xu> yea
14:16:52 <brinzhang__> sean-k-mooney: yeah
14:17:15 <gmann> alex_xu: but existing PUT is for updating the "attachments of server" not updating the attachments property.
14:17:17 <sean-k-mooney> i would have assumed that was the same policy for swap volume
14:17:17 <alex_xu> the existing swap API is admin-only, I don't think the usecase is for admin-only
14:17:25 <sean-k-mooney> oh i see
14:17:32 <sean-k-mooney> ya ok makes sense
14:18:05 <lyarwood> another reason for going with PUT over UPDATE tbh
14:18:13 <lyarwood> overloading the swap volume API is just wrong
14:18:29 <sean-k-mooney> lyarwood: did you mean PATCH
14:18:33 <lyarwood> PATCH even sorry
14:18:35 <lyarwood> yeah
14:18:41 <gmann> yeah. from a user's point of view it would be mixing too many things into a single API
14:19:11 <gmann> especially when we call the PUT the swap volume API in our docs.
14:19:15 <efried> So look, I want to be permissive here, but it's tough to justify this exception if we're still discussing nontrivial design details after spec freeze.
14:19:16 <sean-k-mooney> so PATCH we could make admin_or_owner by default and restrict it to just the delete_on_termination property of the attachment
14:19:36 <lyarwood> efried: agreed
14:19:37 <gibi> efried: agree
14:19:44 <gibi> this seems to be still open
14:20:12 <gmann> sean-k-mooney: +1. owner should be able to update it.
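Putting the pieces of the discussion above together, a hypothetical sketch of the proposed PATCH interface (volume id in the URL, only the flag in the body) might look like the following. The path and payload shape are assumptions for illustration, not the final design from the spec:

```python
import json

# Hypothetical sketch of the PATCH discussed above: a request against a
# single volume attachment that updates only delete_on_termination.
# The URL layout and body shape are assumptions, not the approved design.

def build_patch_request(server_id, volume_id, delete_on_termination):
    """Build the (hypothetical) PATCH URL and JSON body."""
    # volume_id lives in the URL, per sean-k-mooney's suggestion,
    # so the body carries only the single flag being changed.
    url = f"/servers/{server_id}/os-volume_attachments/{volume_id}"
    body = json.dumps({"delete_on_termination": delete_on_termination})
    return url, body

url, body = build_patch_request("inst-1", "vol-1", True)
print(url)   # /servers/inst-1/os-volume_attachments/vol-1
print(body)  # {"delete_on_termination": true}
```

Keeping the body to a single property is what lets the policy check be scoped to just that field, as suggested above.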
14:20:27 <efried> There seems to be genuine desire among the team to make something work here, which is nice to see.
14:21:05 <sean-k-mooney> efried: if it was not for the legacy of swap volume this seems like a thing we would have just trivially approved
14:21:08 <efried> What about this: If y'all can figure out a way to get the spec updated and approved by EOB tomorrow, we'll allow the exception?
14:21:24 <sean-k-mooney> i would be fine with ^
14:21:29 <lyarwood> sounds good
14:21:47 <efried> gibi, alex_xu: those +2s would be on you. Are you on board with that?
14:22:21 <alex_xu> ok for me
14:22:22 <gibi> efried: sure
14:22:35 <gibi> my EOB is in 3 hours
14:22:36 <efried> brinzhang__: seem fair?
14:22:53 <efried> yes, I figure it is more likely to happen tomorrow "morning".
14:23:08 <brinzhang__> efried: agree, I will ask gibi and alex_xu to do something
14:23:40 <efried> alex_xu, gibi: if you leave your +2s on it by your EOB, I'll swap the CR-2 for W+1 in my daytime.
14:24:11 <alex_xu> got it
14:24:17 <efried> Thanks all. Anything further on sfes before we move on?
14:24:31 <gibi> got it
14:24:47 <brinzhang__> thanks all
14:25:01 <efried> #agreed to grant sfe for support-volume-local-cache if two +2s by EOB Friday 20200221
14:25:20 <gibi> delete on terminate ?
14:25:25 <efried> whoops
14:25:26 <efried> #undo
14:25:26 <openstack> Removing item from minutes: #agreed to grant sfe for support-volume-local-cache if two +2s by EOB Friday 20200221
14:25:46 <efried> #agreed to grant sfe for destroy-instance-with-datavolume if two +2s by EOB Friday 20200221
14:25:50 <efried> thanks gibi
14:25:53 <gibi> :)
14:26:08 <efried> okay, moving on...
14:26:11 <brinzhang__> gibi, alex_xu: thanks
14:26:21 <efried> #link Proposal to scrub five Definition:Approved blueprints http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012612.html
14:26:55 <efried> bauzas expressed his opinion on the thread. TLDR: why do this? It ain't broke, don't fix it.
14:27:18 <efried> I've gotten the impression many of you agree, or are at best neutral
14:27:23 <efried> thoughts from those present?
14:28:02 <gibi> is there any bp that is easy to cut out?
14:28:08 <gibi> if not then lets go with 30
14:28:12 <stephenfin> I get where you're coming from but I have to agree with bauzas - I think this stuff will work itself out naturally
14:29:30 <gibi> there are no clear -1s on the etherpad so I'm on the side to continue with 30
14:29:37 <efried> Yes, it always does work itself out naturally.
14:29:51 <efried> Okay then, I stand down.
14:30:06 <efried> #agreed to Direction:Approve all Definition:Approved blueprints
14:30:24 <efried> moving on
14:30:28 <efried> we already talked about rocky EM
14:30:38 <efried> #action efried to fup with lyarwood next week
14:30:44 <efried> #undo
14:30:45 <openstack> Removing item from minutes: #action efried to fup with lyarwood next week
14:30:53 <efried> #action efried to fup with lyarwood next week about rocky EM
14:31:04 <efried> #topic PTG/Summit planning
14:31:04 <efried> Please mark attendance and topics on
14:31:04 <efried> #link PTG etherpad https://etherpad.openstack.org/p/nova-victoria-ptg
14:31:20 <efried> I submitted the attendance survey
14:31:32 <efried> stating we would have about 20 people
14:32:10 <efried> and asking for a 20-person "room", with the note that one of nova/cinder/ironic/neutron ought to have a 40-person room for xproj stuff.
14:32:44 <efried> and saying we would need a minimum of 1 day
14:32:57 <gibi> make sense
14:33:23 <efried> but under the "who's gonna run the room" question, a big shrug. I guess it will be clearer after we have a Victoria PTL.
14:33:36 <efried> I'm sure diablo_rojo_phon understands that.
14:33:44 <efried> any comments questions concerns?
14:33:44 <gibi> in Shanghai we made that dynamic
14:34:00 <efried> ++
14:34:09 <gibi> there were always 2-3 cores in the room who handled the agenda together
14:34:38 <sean-k-mooney> ya i think that makes sense
14:35:09 <efried> cool
14:35:10 <efried> #topic Sub/related team Highlights
14:35:10 <efried> Placement (tetsuro)
14:35:33 <efried> It seems melwitt is driving the consumer types work along. Otherwise nothing going here that I'm aware of.
14:36:33 <efried> API (gmann)
14:36:42 <efried> There's an update from last week: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012563.html
14:36:50 <efried> not sure if there's anything new we haven't already talked about above
14:37:04 <gmann> that's all for this week. something we need to start working on and add to the report is API bug triage. I have not looked at the numbers yet
14:37:26 <efried> bug-in-general triage. Perhaps we should reinstate the 'bug czar'
14:37:36 <efried> gmann: care to volunteer? :P
14:37:52 <gmann> efried: i can do but after policy and py2 drop work
14:38:04 <efried> that would be really great, thank you.
14:38:13 <efried> IIUC the bug czar is responsible for bugging people about bugs
14:38:32 <efried> not solely responsible for triage etc, but coordinates the effort
14:38:57 <gmann> ok.
14:39:13 <efried> moving on before gmann changes his mind...
14:39:14 <efried> #topic Stuck Reviews
14:39:18 <efried> any?
14:39:56 <efried> #topic Open discussion
14:39:56 <efried> [efried] Exiting OpenStack
14:39:56 <efried> #help PTL pro tem needed
14:39:56 <efried> #link call for volunteers http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012663.html
14:40:26 <efried> A deafening silence followed last week's call for volunteers. Ditto email responses on that topic.
14:40:58 <efried> There's very little official/documented process for replacing a PTL mid cycle https://docs.openstack.org/project-team-guide/ptl.html#handing-over-ptl-duties
14:41:15 <efried> I'll just point out the ominous "figure it out or the TC gets involved" bit.
14:41:54 <gibi> I totally agree that we have to solve the situation somehow
14:41:59 <efried> wrt knowledge transfer, I really don't feel like anybody who would be volunteering here would need a huge amount of handoff; you all pretty much know how to run this thing.
14:42:40 <gmann> and the nova PTL guide covers most of this; it's a really nice doc that only a few projects have.
14:42:54 <gibi> efried: if you see someting to update in ^^ then that would be appreciated
14:43:25 <efried> Yes, good call, I've had that TODO for a while to take a swipe at that doc
14:43:31 <efried> anybody have that link handy?
14:43:38 <efried> I have it *somewhere*...
14:43:44 <gibi> I hope gmann has :)
14:43:53 <efried> this one? https://docs.openstack.org/nova/latest/contributor/ptl-guide.html
14:43:56 <sean-k-mooney> gmann: i think its a cross project goal to add it to other projects
14:44:19 <gibi> efried: that is the doc
14:44:37 <gmann> yeah. efried is fast
14:44:42 <efried> #action efried to look at the nova PTL guide and update if/as appropriate
14:44:51 <efried> #link nova ptl guide https://docs.openstack.org/nova/latest/contributor/ptl-guide.html
14:45:19 <efried> Okay, that's everything on the agenda. Anything else to discuss before we close?
14:45:25 <kevinz> yes
14:45:32 <efried> kevinz: your floor
14:45:37 <kevinz> I have one about bringing up arm64 CI
14:45:59 <kevinz> we have donated some nodes to nodepool already, and want to define some jobs
14:46:09 <kevinz> a draft here: https://etherpad.openstack.org/p/arm64-nova-ci
14:47:43 <kevinz> but I'm actually not sure which multi-arch-related jobs to enable first
14:48:47 <kevinz> The draft is picked from an earlier submission, with some jobs removed that don't look architecture-related
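For context, an arm64 variant of an existing job in Zuul might be sketched roughly like this. The job name, parent job, and nodeset label here are assumptions for illustration, not what the etherpad proposes:

```yaml
# Hypothetical .zuul.yaml sketch only: an arm64 variant of an existing
# devstack-based job, started non-voting in the experimental queue.
- job:
    name: nova-tempest-full-arm64
    parent: tempest-full-py3        # assumed parent job name
    description: Run tempest-full-py3 on arm64 nodes.
    nodeset: ubuntu-bionic-arm64    # assumed arm64 nodeset label
    voting: false

- project:
    experimental:
      jobs:
        - nova-tempest-full-arm64
```

Whether this belongs in the experimental queue, a dedicated pipeline, or third-party CI is exactly the open question raised below.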
14:48:55 <efried> At a glance, there are a couple of things that look odd to me, like creating a whole pipeline for this. It's unclear to me whether this should just be a job in the experimental queue (for now) or a 3pCI or...
14:48:55 <efried> sean-k-mooney, gmann: would one of you be willing to liaise with kevinz to work out the kinks here?
14:49:33 <sean-k-mooney> am sure i can try and help
14:49:44 <gmann> sure.
14:49:51 <sean-k-mooney> kevinz: i noticed there is a separate check-arm64 pipeline right
14:49:52 <kevinz> thanks a lot!
14:49:57 <efried> Thanks.
14:50:22 <efried> kevinz: are you able to hang out in #openstack-nova and/or #openstack-qa to chat with gmann and sean-k-mooney?
14:50:26 <kevinz> sean-k-mooney: yes, due to a lack of essential nodes, so we defined a separate pipeline
14:50:35 <efried> (we don't want to discuss it here)
14:50:45 <kevinz> sure np
14:50:47 <kevinz> :D
14:50:58 <efried> Great.
14:50:58 <efried> Anything else to discuss?
14:51:04 <sean-k-mooney> ya we can discuss it in either just ping me
14:51:22 <kevinz> OK, thx
14:51:36 <efried> Okay, thanks for a productive meeting.
14:51:36 <efried> o/
14:51:36 <efried> #endmeeting