15:00:12 #startmeeting manila
15:00:13 Meeting started Thu Oct 19 15:00:12 2017 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:16 The meeting name has been set to 'manila'
15:00:19 hello all
15:00:20 o/
15:00:20 hello
15:00:23 hi
15:00:28 Hi
15:00:36 hi
15:01:02 hi
15:01:07 gouthamr vponomaryov toabctl cknight: courtesy ping
15:01:14 hello o/
15:01:24 \o
15:01:28 hi
15:01:33 #topic announcements
15:01:50 Queens milestone one is TODAY
15:02:18 assuming that the release team has sorted out their pipelines we'll be tagging a milestone release today
15:02:37 also according to the schedule, the spec freeze date is TODAY
15:02:45 but I have a topic to discuss about that next
15:03:17 that's all for announcements
15:03:19 o/
15:03:31 #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:03:43 #topic Spec freeze
15:04:00 #link https://review.openstack.org/#/q/status:open+project:openstack/manila-specs
15:04:26 are we ready for this?
15:04:37 so we've had some issues over the past few weeks with gate failures
15:04:42 i think we have fewer specs to review, but it doesn't look like we're ready to merge them all today :)
15:04:46 thanks to the zuul upgrades
15:05:15 gouthamr had suggested we give ourselves more time because some of the core reviewers have been distracted by the gate issues
15:05:31 +1
15:05:35 +1
15:05:41 +1
15:05:47 gouthamr: how many more specs do you plan to look at?
15:05:59 +1
15:06:10 I'm happy to review some more specs
15:06:17 the question is how much time is enough
15:06:51 there are about 4 unmerged at this point, 3 from zhongjun and 1 from tbarron - i'd like to review them all if possible..
15:06:53 we can't push the deadline too far because there's a ton of holidays between milestones 2 and 3 and I expect little work to get done during that milestone
15:07:02 is 1 week enough?
15:08:16 anyone want more time than that?
15:08:22 who will be reviewing?
15:08:39 I can review some more specs if needed
15:08:54 it's needed, we're very short on reviewer bandwidth
15:09:12 I agree with that, I could move my spec to the next version
15:09:28 I'm leaning towards just 1 more week, to avoid encouraging bad review habits
15:09:50 are there any remaining gate issues we're still struggling with?
15:09:51 that's ok with me, one week and be ruthless about approvals
15:09:53 1 week's fine..
15:09:56 tbarron: +1
15:10:07 okay
15:10:14 getting kicked out to the next release is not failure
15:10:42 #agreed blanket 1 week extension of the spec freeze due to reviewer distractions
15:10:47 mostly it means that the system can't handle the throughput -- back pressure
15:10:58 #topic py2 -> py3
15:11:06 vkmc: you're up
15:11:14 sure
15:11:41 so, as discussed during the ptg (https://etherpad.openstack.org/p/manila-ptg-queens), I was trying to understand what was missing for the py2 -> py3 migration
15:12:01 most of the work was done by Valeriy
15:12:09 as part of this bp
15:12:31 #link https://blueprints.launchpad.net/manila/+spec/py3-compatibility
15:12:44 the tempest jobs should run with py3 instead of py2
15:12:55 there is presumably only one thing missing
15:13:05 and any other jobs currently running with py2 only as well
15:13:11 and that is that SSL tests are skipped because of the bug where requests to SSL-wrapped sockets hang while reading using py3
15:13:14 yes
15:13:28 there is a bug filed for this https://bugs.launchpad.net/manila/+bug/1482633
15:13:29 Launchpad bug 1482633 in Manila "requests to SSL wrapped sockets hang while reading using py3" [Low,Triaged]
15:13:42 yeah we believe that the code works with py3, but we focus our automated testing on py2 still -- that's what needs to change
15:13:58 all right
15:14:11 the only py2 job we should have in the gate is the gate-manila-python27-ubuntu-xenial job
15:14:42 to avoid breakage of py2 support until py2 can be officially dropped
15:14:43 to address bug #1482633
15:14:43 bug 1482633 in Manila "requests to SSL wrapped sockets hang while reading using py3" [Low,Triaged] https://launchpad.net/bugs/1482633
15:14:51 we should revive this review https://review.openstack.org/#/c/289382/
15:14:57 and yes we should fix that bug
15:15:45 I've synced up with some people who drove the effort of the py2 -> py3 migration on other projects
15:16:25 vkmc: I suspect most other groups are still testing py2 primarily, although they are finding and fixing py3 bugs
15:16:45 and from what I could get from them... dropping py2 should be a community move... as soon as we are all ready to do it
15:17:24 well dropping py2 can only happen after the vast majority of deployments are py3 based -- we're very far from that goal AFAIK
15:17:29 yeah we still need to test py27 some too until the community drops it
15:17:29 yeah
15:17:55 there are some project-specific roadblocks for the py3 migration -- but fortunately we're not affected
15:18:06 good
15:18:16 so we currently test py2 primarily and py3 secondarily and I'd like to flip that
15:18:33 both need testing, but py3 should be the "preferred" way to run manila
15:19:01 so... the quick next step is to revive https://review.openstack.org/#/c/289382/
15:19:11 and then fix the gates you mentioned
15:19:12 yes
15:19:42 ready to move on?
15:19:44 I was wondering if there was something preventing us from consuming sslutils from oslo.service
15:19:56 oh I don't know the specifics of that bug
15:19:57 gate test fixes should be sequenced after raissa's work, but that will come up later in the meeting.
15:20:00 something that has been discussed in the past that I was not aware of
15:20:06 all right
15:20:08 yeah
15:20:11 so I'll move forward with that then
15:21:05 that's all from my side
15:21:46 dustins I'm going to save your topic for last
15:21:55 bswartz: Works for me!
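(For context on vkmc's question above about consuming sslutils from oslo.service: a minimal sketch of what that could look like. It assumes oslo.service's sslutils interface -- is_enabled(conf) and wrap(conf, sock), driven by [ssl] config options -- and is illustrative only, not manila's actual wsgi code. Presumably the appeal is that the py3 SSL socket handling would then be maintained in one shared place.)

```python
# Illustrative sketch only -- not manila's actual code. Assumes the
# oslo_service.sslutils interface: is_enabled(conf) reports whether the
# [ssl] cert/key options are set (and raises if the files are missing),
# and wrap(conf, sock) returns a TLS-wrapped server socket.

import socket

from oslo_config import cfg
from oslo_service import sslutils

CONF = cfg.CONF


def make_server_socket(host, port):
    """Open a listening socket, TLS-wrapped when [ssl] options are set."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, port))
    sock.listen(128)
    if sslutils.is_enabled(CONF):
        sock = sslutils.wrap(CONF, sock)
    return sock
```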
15:21:56 #topic Zuul V3 migration status
15:22:02 raissa: you're up
15:22:17 hey, so while doing the splitting of the manila tempest plugin
15:22:23 I had to see how to adapt the jobs
15:22:32 and that was in the middle of the whole migration to v3
15:22:44 so I ended up getting that migration work started
15:22:51 following this guide https://docs.openstack.org/infra/manual/zuulv3.html#legacy-job-migration-details
15:23:08 especially the step-by-step under Moving Legacy Jobs to Projects
15:23:30 now I have 3 patches up that I think are ready for reviews from manila and infra folks
15:23:53 #link https://review.openstack.org/#/c/512559/
15:23:58 #link https://review.openstack.org/#/c/513075/
15:24:03 #link https://review.openstack.org/#/c/513076/
15:24:14 right, thanks (was pasting also)
15:24:31 the one I sent to manila also fixes intermittent issues with centos jobs
15:24:44 that tom helped me figure out
15:25:30 2 of those are failing the check jobs for project-config
15:26:26 the one in openstack-zuul-jobs will fail
15:26:32 "The openstack-zuul-jobs patch will give a config error because the project-config patch removing use of the jobs hasn't landed. That's ok. We'll recheck it once the project-config patch lands."
15:26:39 (from the doc)
15:26:47 ok
15:26:50 and the one in manila has a -1 because of the intermittent issue
15:26:56 raissa: do you need help with any of this?
15:26:58 in 512559 the new migrated jobs passed
15:27:01 raissa: do you know if we can now test a project-config change by making a dummy change in manila that depends on it?
15:27:26 for now I need reviews
15:27:38 gouthamr: no, but you can see the results for the in-tree .zuul.yaml
15:27:40 in the patch
15:27:59 the jobs that are without the "legacy" in front of them
15:28:40 raissa: oh, another noob Q, will https://review.openstack.org/#/c/512559/ need to be backported to all the supported branches?
15:29:03 part of it afaik
15:29:04 note that glusterfs-native and hdfs have been failing since prior to the zuulv3 migration, so they don't count
15:29:05 the playbooks
15:29:09 wait a minute
15:29:13 and .zuul.yaml
15:29:25 I failed to grasp this before, but it appears that the job definitions will be in our own repo now
15:29:28 and the cephfs-nfs job failed there b/c of a timeout getting to the ceph repo
15:29:33 bswartz: +1
15:29:33 see the "Stable Branches" section in the doc I pasted
15:29:41 bswartz: yeah
15:29:56 that's one of the selling points for zuulv3
15:30:11 that's a big step forward in some ways, but it raises concerns
15:30:13 nice : "the jobs defined in the master branch will be available in any branch. But it does at least need a project stanza" answers my question, thanks raissa
15:30:21 there's a hierarchy of job definition places
15:30:22 gouthamr: \o/ cool
15:30:45 we can inherity and customize
15:30:50 inherit
15:30:54 * tbarron can't type today
15:31:17 * gouthamr new startup/band name inherity
15:31:23 yeah, I mostly wanted reviews from infra folks as well as they are more aware of whether we're on the right track
15:31:29 tbarron: you should ask dustins about keyboards -- he might be able to recommend a better one
15:31:36 :)
15:31:43 hahaha
15:31:43 but I'm sure they'll want the ptl's +1
15:32:03 raissa: tell me when you're happy with the patches and want my review
15:32:14 bswartz: you can review them right now
15:32:19 I think I'm done tweaking
15:32:23 tbarron: das keyboard
15:32:33 in what order do you expect them to merge?
15:32:41 not right right now, but when you have the time :)
15:32:55 as far as I understand
15:33:06 manila's -> project-config -> openstack-zuul
15:33:13 *-jobs
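(To illustrate raissa's point above about in-repo job definitions and the project stanza needed on stable branches, a hedged .zuul.yaml sketch -- placeholder job names, not the actual contents of https://review.openstack.org/#/c/512559/.)

```yaml
# Illustrative only -- not the actual contents of the patch under review.
# With Zuul v3 the job definitions live in the repo itself; a stable
# branch doesn't need the definitions copied over, but it does need a
# project stanza like this one to attach the jobs defined on master.
- project:
    check:
      jobs:
        - openstack-tox-pep8
        - openstack-tox-py27
        - manila-tempest-dsvm-dummy   # placeholder in-tree job name
    gate:
      jobs:
        - openstack-tox-pep8
        - openstack-tox-py27
```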
15:33:15 my main question is whether the jobs are substantially the same, or whether any changes were required other than the reorg
15:33:38 it's copy-paste, so they should be the same
15:33:53 and they ran together at the gates in the patch
15:33:57 so you can see the results
15:34:12 k
15:34:44 there are also some jobs that I didn't move related to legacy-manila-ui, but those I think someone should move to the manila-ui repo
15:35:03 I can do that later if no one's up for it, but let's see how the manila ones go
15:35:24 and we're running cookiecutter jobs for pep8, unit tests, etc.
15:35:33 and the client?
15:35:34 i'm fixing the cover job
15:36:18 raissa: we haven't looked at the client yet, right, except for the jenkins->$USER fix?
15:36:21 can also be moved
15:36:46 the client work is probably more challenging than the UI
15:36:48 yeah, those are easier because there are fewer things to move and check (I think :))
15:36:50 and more valuable
15:37:23 I think the client is not urgent though now that it's working again
15:38:00 but it would be good to have in tree
15:38:28 okay let's move on to make sure dustins has enough time for his topic
15:38:40 #topic Let's Go Over New Bugs
15:38:45 all right thanks :)
15:38:55 dustins: you're up
15:39:01 bswartz: Thanks!
15:39:03 #link https://etherpad.openstack.org/p/manila-bug-triage-pad
15:39:17 Heh, took the words right out of my buffer
15:39:33 So these are some new/confirmed bugs that need some owners
15:39:55 Well, minus the Manila service image one, but that's just a follow-up
15:40:04 did zhongjun volunteer for the share groups API ref changes?
15:40:14 yes
15:40:30 We submitted share group and share group type docs before, but they got few reviews and haven't merged yet, so we cannot see those in the API ref doc
15:40:39 https://review.openstack.org/#/q/status:open+project:openstack/manila+branch:master+topic:share_group_doc
15:40:47 zhongjun: I assigned the bug to you
15:40:47 #link https://review.openstack.org/#/q/status:open+project:openstack/manila+branch:master+topic:share_group_doc
15:41:08 And I marked it as in progress, thanks, zhongjun!
15:41:31 bswartz: okay
15:41:49 So next on the list is: https://bugs.launchpad.net/manila/+bug/1720283
15:41:50 Launchpad bug 1720283 in Manila "use openflow to set security group, create port failed" [Undecided,New]
15:42:09 that bug isn't actionable, no details on how to reproduce
15:42:09 It's a little sparse on details, but does anyone know what OpenFlow is?
15:42:18 https://en.wikipedia.org/wiki/OpenFlow
15:42:32 bswartz: Could we put more eyes on the doc reviews :) Thanks
15:42:37 openflow shouldn't be hard to figure out how to use, but I've never tried it
15:42:47 tbarron: Agreed, not much we can do here given the description
15:42:57 that's not the issue, there's no mention of the back end, the release of openstack, what was done to cause the issue, etc.
15:43:21 Indeed, I've marked it as incomplete and will ask for more information
15:43:38 Next one is pretty similar: https://bugs.launchpad.net/manila/+bug/1719837
15:43:39 Launchpad bug 1719837 in Manila "Verify the domain quota when updata the project quota" [Undecided,New]
15:43:47 the bug wasn't filed long ago -- if we can track down haobing1 and get more info out of him/her maybe we can add more details to the bug
15:43:48 they are using "contrail" which is a Juniper thing
15:44:28 haobing1 is from zte.com.cn
15:44:56 tripleo has instructions for filing a bug when you try to file, a template, etc. We should look into that. I can help dustins.
15:45:00 He is from china
15:45:13 These reports lack sufficient information.
15:45:21 zhongjun: if you see him online please ask for additional details in the bug
15:45:26 tbarron: Sounds good, thanks!
15:45:35 bswartz: I will
15:45:36 And I'll remark the same on the bugs themselves
15:46:06 Fourth is https://bugs.launchpad.net/manila/+bug/1719467
15:46:07 Launchpad bug 1719467 in Manila "manila service image panics" [Critical,Fix committed] - Assigned to Tom Barron (tpb)
15:46:07 what is a "domain" quota?
15:46:22 err are we skipping this one?
15:46:41 I didn't know I had that :)
15:46:49 Oh, I thought the conversation was going toward skipping that one, my apologies
15:46:51 dustins: It looks like we already did that before
15:47:11 there are 2 sparsely-worded bugs from haobing1
15:47:16 I'd like to understand the issue in the second bug more
15:47:24 what is a domain quota?
15:47:29 Yeah, I was just grabbing New bugs from within the last few weeks
15:47:34 am I just dense?
15:48:15 I don't understand the bug report.
15:48:29 does anyone?
15:48:34 There may well be a real issue though.
15:48:59 yeah I don't doubt haobing1 has a real problem I just don't know what it is
15:49:29 I could ask haobing1 what the real meaning of that is
15:49:31 I was hoping that someone with some greater knowledge of quotas might have an idea as to what's going on with this one
15:50:02 zhongjun: I can do the same thing on the bug itself as well
15:50:37 dustins: okay we're all stumped, let's move on
15:50:55 Right, so this one is just a follow-up on the Manila service image
15:51:29 bswartz built a new image, successfully pushed it up to tarballs.xxx and the issue is resolved
15:51:39 Oh, so it did get updated?
15:51:54 well the underlying issue remains a mystery
15:52:05 "it" meaning the image at tarballs... ? Yes.
15:52:18 the fix here was the equivalent of hitting Ctrl-Alt-Delete
15:52:21 tbarron: Yeah, sorry about the ambiguity
15:52:31 we don't understand why the image that was there earlier was corrupted.
15:52:50 bit rot on the wire?
15:52:53 if there's a real issue, it probably lies in the build process of manila-image-elements or the gate jobs thereof
15:53:17 do we checksum before and after the image transfer?
15:53:37 no there's no SHA1 verification if that's what you're thinking of
15:53:53 but it's still more likely that the build produced a bad image and the testing didn't catch it
15:54:05 ack
15:54:33 Is there any concrete near-term action that we expect to take w.r.t. this one?
15:55:06 we could fix the job to test the correct image
15:55:16 We know the build verification tests don't work, they test the upstream image rather than the one that has been produced.
15:55:43 We should probably close this bug and make a new one for that.
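(bswartz notes above that there is no checksum verification of the uploaded image. A minimal sketch of what such a post-transfer check could look like -- the local path and published URL are placeholders, and this is not an existing manila-image-elements feature.)

```python
# Hypothetical post-transfer verification sketch -- not an existing
# manila-image-elements feature. The image path and URL are placeholders.

import hashlib
import sys
import urllib.request


def sha256sum(path, chunk_size=1 << 20):
    """Hash a file in chunks so large images don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()


def verify_upload(local_image, published_url):
    """Fail loudly if the published image differs from the one just built."""
    built = sha256sum(local_image)
    downloaded_path, _ = urllib.request.urlretrieve(published_url)
    published = sha256sum(downloaded_path)
    if built != published:
        sys.exit('checksum mismatch: built %s vs published %s'
                 % (built, published))


if __name__ == '__main__':
    verify_upload('manila-service-image.qcow2',
                  'https://tarballs.example.org/manila-service-image.qcow2')
```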
15:55:46 IIRC, vponomaryov created a gate job for manila-image-elements that runs a dsvm job, but the job tests the previous image, not the newly created one
15:56:01 ^^^ right, that's what I was trying to say
15:56:22 so the job needs enhancement to actually test the image being produced, so future bad builds don't get uploaded to tarballs.o.o
15:56:26 I don't know if anyone is planning to work on that issue right away though.
15:56:26 it tests the newly created one iirc.. not the tarball
15:56:39 gouthamr: logs show otherwise :)
15:56:55 though that's what was intended
15:57:01 aside from that, all we can do is work around the issue by fixing bad images quickly
15:57:12 tbarron: gate-manila-tempest-dsvm-generic-scenario-custom-image-ubuntu-xenial-nv is for the "custom" image, i.e., the current code change
15:57:15 no? :P
15:57:29 gouthamr: that's the intent, not the reality
15:57:31 intended to be
15:57:44 oh..
15:57:51 logs show it downloading the tarball and checking that
15:58:16 so in summary, the gate job for manila-image-elements doesn't work
15:58:23 and it allows bad changes to get through
15:58:45 anyway I'm for closing this one and opening a new one for the tech debt, or renaming this one and noting the tech debt
15:58:54 so be extra careful when working on manila-image-elements
15:59:00 +1
15:59:02 Sounds like a plan
15:59:16 but I am not myself planning on working on the tech debt issue in the next few weeks so will unassign if we rename it
15:59:17 50 seconds for the last bug
15:59:24 https://bugs.launchpad.net/manila/+bug/1717261
15:59:25 Launchpad bug 1717261 in Manila "NetApp drivers don’t create share of requested size when creating from snapshot" [Low,Confirmed]
15:59:42 Just needs NetAppers to ack
15:59:46 dustins: just confirmed the bug, the fix is ready to be pushed up
15:59:50 last time, bswartz pushed a new patch and built the new image, but our image still wasn't updated at the tarball link
15:59:56 gouthamr: That was fast :)
15:59:57 sounds good
16:00:02 we're out of time
16:00:11 thanks all
16:00:16 #endmeeting
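(On the last bug above: the usual fix pattern for this class of driver bug is to clone from the snapshot and then grow the clone when the requested size is larger. A hedged sketch with hypothetical helper names -- not the actual NetApp patch gouthamr refers to.)

```python
# Hedged sketch of the common fix pattern for bug 1717261-style issues.
# The helper names are hypothetical, not the real NetApp driver API.

def create_share_from_snapshot(self, context, share, snapshot):
    # Cloning yields a share the size of the snapshot's source share.
    export_location = self._clone_from_snapshot(share, snapshot)
    # Without this step the new share silently keeps the old size
    # instead of the size the user requested.
    if share['size'] > snapshot['size']:
        self._extend_share(share, share['size'])
    return export_location
```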