16:01:35 #startmeeting Cinder
16:01:36 Meeting started Wed Oct 16 16:01:35 2019 UTC and is due to finish in 60 minutes. The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:37 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:38 o/
16:01:40 The meeting name has been set to 'cinder'
16:01:48 o/
16:01:50 hi! o/
16:01:54 @!
16:01:54 <_pewp_> jungleboyj (✧∇✧)╯
16:01:56 hi
16:02:07 hi
16:02:21 o/
16:02:26 hi
16:02:33 hi
16:02:34 o/
16:02:58 Howdy everyone.
16:03:13 greetings all
16:03:20 Give everyone another minute to join.
16:04:34 hi all
16:05:25 Ok. Let's get started.
16:05:44 Our last Train meeting.
16:05:48 #link https://etherpad.openstack.org/p/cinder-train-meetings
16:06:05 And, therefore, I guess the last meeting that I will be running. :-)
16:06:21 #topic announcements
16:06:50 So, next week we will switch to the Ussuri meeting etherpad.
16:06:55 #link https://etherpad.openstack.org/p/cinder-ussuri-meetings
16:07:53 If you have topics for next week please put them in there.
16:08:21 BTW, Brian isn't feeling well today so I am going to run things.
16:08:25 For this meeting.
16:08:40 Anyway, I want to get feedback on how to handle the ping list.
16:09:01 Brian has copied it over right now. Do we want to keep the current list or remove it and start fresh?
16:09:37 * jungleboyj hears crickets
16:09:56 AFAIR, we usually start with an empty list each release
16:09:57 I think start fresh.
16:09:57 start fresh to trim dead weight
16:10:10 Kind of the reason to have a new etherpad anyway...
16:10:31 Ok, so, I will add a new ping list out there and give it a couple of meetings, then we will remove the old list.
16:11:04 Thanks for the feedback.
16:11:32 Reminder to please add to the PTG planning list:
16:11:35 #link https://etherpad.openstack.org/p/cinder-ussuri-ptg-planning
16:11:56 Not a lot of content right now. Don't want to go all the way to Shanghai for nothing. ;-)
16:12:30 frequent flier miles for that trip are not exactly nothing ;)
16:12:41 Maybe a good task for Brian to post to the ML to try to get some topics on there from the Chinese community.
16:13:03 davee__: True. And I am already Comfort+ there and back.
16:13:22 Hi
16:14:05 Ok. So, Forum Sessions. We have these two accepted:
16:14:17 #link https://www.openstack.org/summit/shanghai-2019/summit-schedule/events/24404/how-are-you-using-cinders-volume-types
16:14:28 #link https://www.openstack.org/summit/shanghai-2019/summit-schedule/events/24403/are-you-using-upgrade-checks
16:14:51 If you have notes, discussion points, etc. on these please add them here:
16:14:54 https://etherpad.openstack.org/p/cinder-shanghai-forum-proposals
16:15:03 #link https://etherpad.openstack.org/p/cinder-shanghai-forum-proposals
16:15:15 We will get them into etherpads for discussion at the summit.
16:15:28 Ok. That was what I had for announcements.
16:15:46 #topic follow-up discussion on Removing Legacy Attach Code in Nova
16:15:57 So, we said we would continue this discussion this week.
16:16:09 geguileo: mriedem Updates here?
16:16:46 jungleboyj: I had a look at the mail thread, replied, and then got dragged into fixing FC in OS-Brick, so I'm not up to date with the conversation
16:17:19 geguileo: Ok. I don't think there was a lot more discussion after your last note.
16:17:39 jungleboyj: I think there was a reply
16:18:02 jungleboyj: but in summary, I don't think Nova should try to find a flow with what we have that works for all cases
16:18:07 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010066.html
16:18:19 I think we should add new functionality to Cinder
16:18:29 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010069.html
16:18:32 That sounded like the best path forward.
16:18:36 one that allows them to add the connection_info
16:18:44 #link http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010071.html
16:18:46 on the Cinder attachment object
16:19:18 geguileo: Ok, and I think that was the general agreement from last week's meeting as well.
16:19:23 I get a vague feeling that was discussed with John as these APIs were being designed and shot down, but I have no idea why at this point.
16:19:30 Maybe it was a slightly different idea though.
16:19:48 I don't know
16:19:56 smcginnis: That does feel slightly familiar.
16:20:23 but I think that that's the option that reduces the chances of finding weird bugs
16:20:33 Well, unless someone steps forward with an argument for why it shouldn't be done, I think it makes sense given the current use case.
16:20:35 that will be blamed on Cinder even if it's not our "fault"
16:20:50 smcginnis: +1
16:21:23 Tangential, but Matt had a good idea for a tempest plugin test to enforce the initialize_connection issue.
16:21:43 But then, that's only really useful if the CIs actually run with the tempest plugin, which I think right now they do not.
16:21:49 At least the majority of them.
16:21:51 user: I didn't say it was your fault, I said I blamed you
16:22:00 smcginnis: what was that meant to test?
16:22:17 that creating more than 1 attachment with the same host connector doesn't result in multiple backend connections
16:22:30 i.e. that the driver code was idempotent i think
16:22:32 The after-the-fact assumption enforced by nova that calling initialize_connection multiple times would be idempotent.
16:22:47 mriedem: but that's not a requirement in Cinder
16:22:50 davee__: ++ :D
16:23:01 mriedem: it's something that we should have been explicit about, but we weren't
16:23:16 because we don't have the driver interface properly documented
16:23:37 given the api isn't even documented i'm not sure how anyone outside of digging into the cinder code would know the intentions of the api
16:23:48 smcginnis: let's start from the basics: how do we force the CIs to run the tests from cinder_tempest_plugin?
16:24:13 tosky: Best we've been able to come up with is a periodic review of what they are actually running.
16:24:14 tosky: Oh, that is a whole other discussion.
16:24:21 smcginnis: ++
16:24:27 Since everyone reports a slightly different format, it's hard to automate the checking.
16:24:31 it may be useful for other tests too
16:24:32 before going off and adding a spec and new API for updating the connection_info on an attachment, geguileo should probably read my response
16:24:51 because i also feel like jgriffith would be rolling in his grave talking about adding that :)
16:24:58 :D
16:25:06 Ha!
16:25:17 mriedem: I will read the reply, but I don't think my recommendation will change...
16:25:31 If we say his name three times, maybe he'll show up and review it. ;)
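For readers following the thread: the "attachment object" being debated above is the new-style Cinder attachment API (microversion 3.27+ for create/delete, 3.44 for complete), reachable from python-cinderclient. Below is a minimal, hedged sketch of that flow only; every value in it (auth URL, credentials, volume/instance IDs, connector contents) is a placeholder, and the "update connection_info on an existing attachment" call that geguileo proposes does not exist today, so it is deliberately not shown.

```python
# Hedged sketch of the new-style Cinder attachment workflow referenced above,
# using python-cinderclient. All IDs, credentials, and connector values are
# placeholders, not taken from the meeting.
from keystoneauth1 import loading, session
from cinderclient import client as cinder_client

# Build an authenticated session (placeholder credentials).
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',      # placeholder
    username='admin', password='secret',       # placeholders
    project_name='admin',
    user_domain_id='default', project_domain_id='default')
sess = session.Session(auth=auth)

# Microversion 3.44 is needed for attachment complete.
cinder = cinder_client.Client('3.44', session=sess)

volume_id = 'VOLUME-UUID'      # placeholder
instance_id = 'INSTANCE-UUID'  # placeholder
connector = {                  # roughly what os-brick reports for a host
    'platform': 'x86_64', 'os_type': 'linux', 'multipath': False,
    'initiator': 'iqn.1994-05.com.redhat:example', 'host': 'compute-1'}

# 1. Create an attachment record, handing Cinder the host connector.
attachment = cinder.attachments.create(volume_id, connector, instance_id)
attachment_id = attachment['id']   # recent clients return a dict here

# 2. Tell Cinder the host has finished connecting the volume.
cinder.attachments.complete(attachment_id)

# 3. New-style detach: delete the attachment (no os-terminate_connection).
cinder.attachments.delete(attachment_id)
```

The point raised later in the meeting, that the legacy os-terminate_connection path and the attachment-delete path are completely separate code paths in Cinder, is why mixing the two styles in one flow worries the Cinder side.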
16:26:03 https://gph.is/1u23SD7
16:26:13 smcginnis: You mean jgriffith
16:26:15 the 2 APIs were not meant to be mixed
16:26:21 smcginnis: or stab you in the ear with a Q-tip
16:26:29 what is wrong with this again:
16:26:36 1. create new attachment,
16:26:41 2. complete that attachment,
16:26:56 3. call os-terminate_connection for the old attachment with the old host connector?
16:27:09 well, the order there is wrong
16:27:09 #2 may create a different mapping on the backend or may not
16:28:06 how is that different from migrating a server?
16:29:20 I was going to say that this would be good for a cross-project discussion (yelling across the room we will be in) but mriedem is lucky enough to avoid the trip across the pond.
16:29:26 i.e. create a new attachment on the dest host, terminate the connection for the source host, complete the dest host attachment to put the volume back into in-use status
16:29:44 mriedem: because migrating uses 2 different hosts
16:30:00 mriedem: and the bug in Nova where it was calling Cinder initialization a second time was fixed
16:30:07 i'd have to confirm but i wouldn't be surprised if we do ^ for same-host resize as well
16:30:25 geguileo: you're talking about live migration right
16:30:26 ?
16:30:31 mriedem: yup
16:30:38 i mean even just simple same-host resize
16:31:00 mriedem: then you may get a similar bug
16:31:22 I don't know the Nova code
16:31:24 which would have been around since....forever
16:31:51 i would need to confirm we do that for same-host resize but i don't have anything coming to mind that we treat same-host resize differently wrt volumes
16:32:49 anyway we can move to the ML thread again
16:32:52 and yeah i won't be in shanghai
16:34:33 Ok. Let's move this to the ML if you can keep working this, geguileo?
16:34:39 mriedem: the paths of old terminate and removing an attachment are completely different
16:35:12 not from a nova pov
16:35:21 rofl rofl rofl
16:35:24 good for you
16:35:42 my point is if this is a problem where the host doesn't change for some drivers, it's been a problem forever
16:36:24 you say it as if that would surprise you...
16:37:08 no it wouldn't
16:37:13 nothing surprises me in openstack anymore
16:37:21 i'm surprised when shit *works*
16:37:29 * jungleboyj shakes my head
16:38:15 Ok. Let's move on to the other topics and we can keep working this one in the channel or through the ML.
16:38:57 Any disagreement?
16:39:07 nope
16:41:00 Okie dokie.
16:41:10 #topic Team Dinner at Shanghai PTG.
16:41:17 Anyone interested in doing this?
16:41:43 +1
16:41:56 There is 1 yes.
16:42:03 e0ne: You will be there?
16:42:12 jungleboyj: +1
16:42:33 jungleboyj: yes, just booked a hotel and flights
16:42:39 I like food.
16:42:51 Oh yeah, there have been some updates in the etherpad.
16:43:00 Actually have a good list of people there.
16:43:06 smcginnis: So do I. Too much.
16:44:00 Ok, so, it sounds like there is interest. I will work with Brian to plan something.
16:44:22 +1
16:44:40 jungleboyj: thanks!
16:44:48 #action jungleboyj to work with rosmaita to plan a dinner. Can talk about nights, etc. in next week's meeting.
16:45:26 #topic recording of PTG meetings
16:45:43 So, given the Great Firewall, I am trying to decide what we want to do here.
16:46:54 I am going to sign up for a VPN that is supposed to work, but who knows.
16:47:08 I am also only taking my work laptop. Not my personal one.
16:47:26 as a backup plan, we can record it and publish videos after the PTG
16:47:34 I see that we have a few people listed as remote attendees.
16:48:01 e0ne: True. Also concerned by the fact that we are all going to be in one big room.
16:48:13 I like that plan, e0ne.
16:48:29 jungleboyj: true, everyone in one room should not be great for remotees
16:48:31 Even if we can't do real time, we can at least record things for others to watch later.
16:48:39 You know, if they have trouble sleeping or something.
16:48:58 smcginnis: So we will at least have content like the other events.
16:49:23 yeah, that would be nice
16:49:35 Glad I'm not remote this time :)
16:49:51 can anyone point out where to find more info on doing that remotely since I cannot attend this one?
16:49:52 Ok. So I will still bring my big mic and camera.
16:50:19 davee__: Well, we will put info in the etherpad and IRC as it happens if we are able to do so.
16:50:58 I will need someone to take over recording on the second day as I will be in TC meetings.
16:51:28 Ok. That answers my question there.
16:51:55 #topic Update of legacy jobs for moving to py3 only.
16:52:00 Who added this topic?
16:52:43 Ah, that'd be me.
16:52:48 :-)
16:52:50 Just a heads up to start thinking or researching.
16:52:52 smcginnis: Take it away.
16:53:03 I know we have a PTG topic to talk about moving to py3 only.
16:53:23 One thing that I saw somewhere was a mention that some of the legacy jobs may not be set up right to run on a py3-only node.
16:53:29 I'm not sure if we are impacted or not.
16:53:36 We have the LIO-barbican job.
16:53:49 And I think we run a few others that are not in-tree.
16:53:54 if a certain zuul patch lands, we may quickly kill a few of the legacy jobs
16:54:05 OK, great. Thanks tosky.
16:54:23 I don't really have the bandwidth to investigate, so I wanted to at least make sure others were aware of it.
16:54:26 (see my pending patches; of course you all can start checking if they do what they are supposed to do in the meantime :)
16:54:37 tosky: Do you have a link to that patch?
16:54:51 Would be great if folks could review those and get them through.
16:54:58 That would be one less concern for the migration.
16:55:15 Nothing's finalized with the overall plan, but the hope is to be able to drop py2 support by milestone 1.
16:55:28 So really just a couple months to identify any blockers to doing that.
16:55:40 And personally, I really wish I had more time to rip out all that compat code. :)
16:55:51 uhm, https://review.opendev.org/#/q/status:open+owner:%22Luigi+Toscano+%253Cltoscano%2540redhat.com%253E%22++topic:zuulv3
16:56:26 Awesome, thanks tosky
16:56:28 https://gph.is/2F9U7tV
16:56:32 and also this zuul patch: https://review.opendev.org/#/c/674334/ (but you can see that from the dependency)
16:57:01 Cool, I will try to review those later.
16:57:08 Cool.
16:57:15 I guess that's all from me. Running out of time anyway.
16:57:17 Anything else there, smcginnis?
16:57:22 Nope
16:57:23 Ok.
16:57:28 #topic open discussion
16:57:35 Any topics for the last 3 minutes?
16:57:46 Lastly, I wanted to say hello to everyone; I am a newbie in Cinder
16:57:59 anastzhyr: Welcome!
16:58:02 And I wanted to contribute to Cinder+Tempest
16:58:04 as a follow-up to my previous question (how do we force 3rd party CIs to run the cinder_tempest_plugin?)
16:58:22 anastzhyr: Let us know if you have questions.
16:58:30 welcome anastzhyr
16:58:37 And I would be happy for any help and support in my first steps in open source
16:58:45 tosky: Need to go through the logs and see who is running them and not.
16:58:54 Thanks a lot
16:59:03 Welcome anastzhyr!
16:59:08 jungleboyj: I guess we don't have a unified log place; but do all of them at least publish the subunit file?
16:59:17 we can probably discuss it after the meeting, or next week
16:59:23 tosky: I think improving the test coverage in cinder-tempest-plugin is also a related topic
16:59:31 whoami-rajat: ++
16:59:43 tosky: They should be. But then, they also should be running the plugin tests.
16:59:59 And should be running py3, and should be .....
17:00:02 Right now I start with unit tests for creating and deleting volumes
17:00:05 There isn't a unified log location.
17:00:09 the two issues can be solved in parallel (making sure that the CIs run the plugin, and that the test coverage is increased)
17:00:10 Time's up
17:00:17 I usually start with http://cinderstats.ivehearditbothways.com/cireport.txt and go look at what they push up.
17:00:36 ack, thanks!
17:00:37 Ok. We need to stop for today. We can take this discussion to the cinder channel.
17:00:51 anastzhyr: Join us in #openstack-cinder if you have more questions.
17:00:55 Thanks everyone!
17:00:57 Thanks!
17:00:59 #endmeeting
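As a small follow-up to the third-party CI question raised in open discussion: the report jungleboyj mentioned is plain text, so a few lines of Python are enough to pull it down and scan it for a driver or CI name. This is only a sketch; it assumes the `requests` package is installed and that the URL quoted in the log still serves the report, and it makes no assumption about the report's exact format beyond filtering lines.

```python
# Hedged sketch: fetch the third-party CI report mentioned above and print the
# lines matching an optional search term (e.g. a driver or CI system name).
# Assumes the `requests` package and that the URL from the log is still valid.
import sys

import requests

CI_REPORT_URL = 'http://cinderstats.ivehearditbothways.com/cireport.txt'


def main():
    # Optional case-insensitive filter string passed on the command line.
    needle = sys.argv[1].lower() if len(sys.argv) > 1 else None
    resp = requests.get(CI_REPORT_URL, timeout=30)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        if needle is None or needle in line.lower():
            print(line)


if __name__ == '__main__':
    main()
```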