15:01:39 <gouthamr> #startmeeting manila
15:01:40 <openstack> Meeting started Thu Jul  9 15:01:39 2020 UTC and is due to finish in 60 minutes.  The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:43 <openstack> The meeting name has been set to 'manila'
15:01:45 <carloss> hi o/
15:01:51 <dviroel_> o/
15:01:51 <danielarthurt> hi!
15:02:08 <gouthamr> courtesy ping: ganso vkmc amito lseki tbarron andrebeltrami
15:02:08 <tbarron> Hi!
15:02:09 <vhari> o/
15:02:11 <lseki> o/
15:02:12 <carthaca> Hi
15:02:20 <andrebeltrami> o/
15:02:31 <gouthamr> hey everyone! thanks for joining
15:02:44 <gouthamr> here's the agenda for today: https://wiki.openstack.org/wiki/Manila/Meetings#Next_meeting
15:03:49 <gouthamr> irc has been a bit flaky, so let me save this meeting just in case :)
15:03:54 <gouthamr> #chair tbarron
15:03:54 <openstack> Current chairs: gouthamr tbarron
15:04:10 <tbarron> ack
15:04:24 <gouthamr> ^ i know he doesn't use irccloud or rdocloud or any other thing that's down at the moment :)
15:04:33 <gouthamr> #topic Announcements
15:05:14 <gouthamr> so we're at R-14 - about the midway point in the victoria release cycle
15:05:20 <gouthamr> #link https://releases.openstack.org/victoria/schedule.html
15:05:31 <gouthamr> this is our specifications deadline
15:05:41 <gouthamr> we'll discuss these in a bit...
15:05:43 <tbarron> my bouncer is in the DigitalOcean -- hope I didn't just jinx them
15:06:16 <gouthamr> the week of Jul 27 - Jul 31 is the new driver deadline
15:06:39 <gouthamr> i've seen some queries for new driver inclusions, and there's one blueprint filed
15:07:14 <gouthamr> let's hope they can have their code submitted and tested by that deadline
15:08:30 <gouthamr> if you have any questions or concerns about these deadlines, do let us know - these are helpful safeguards and save reviewer bandwidth by planning the release better...
15:09:17 <gouthamr> but, we've had issues come up that take your focus away... and we'll be accommodative of that in a reasonable manner
15:09:49 <gouthamr> next up is some good news
15:10:03 <gouthamr> we've had a new tc tag approved in the last week
15:10:07 <gouthamr> #link https://governance.openstack.org/tc/reference/tags/tc_approved-release.html
15:10:50 <tbarron> gouthamr: thanks for your great work on this!  and thanks also to gmann and others who supported the initiative.
15:11:26 <gouthamr> absolutely, we had a great discussion about this at the PTG and we have much to follow up on..
15:11:55 <gmann> +1.
15:12:49 <gouthamr> while we're on this topic, i'd encourage you to take a look at another proposal if you haven't:
15:12:57 <gouthamr> #link https://review.opendev.org/#/c/736369/ (Create starter-kit:kubernetes-in-virt tag)
15:13:18 <gouthamr> it needs a trivial rebase, but please share your thoughts when you get a chance
15:13:31 <gouthamr> any other announcements?
15:14:33 <gouthamr> #topic CI status
15:15:18 <gouthamr> late last week, we had a breakage - and we haven't root-caused it yet
15:16:19 <gouthamr> our voting LVM jobs - the legacy one that we run on manila's main branch, as well as the native-zuulv3 style job - fail when run on some infra
15:16:44 <gouthamr> #link https://zuul.opendev.org/t/openstack/builds?job_name=manila-tempest-plugin-lvm
15:17:12 <gouthamr> or a better link:
15:17:18 <gouthamr> #link https://zuul.opendev.org/t/openstack/builds?job_name=manila-tempest-plugin-lvm&job_name=manila-tempest-minimal-lvm-ipv6-only (LVM jobs)
15:18:07 <gouthamr> the cause seems to be a reboot that occurs on the test node in the middle of running tests
15:18:29 <tbarron> +1
15:18:41 <dviroel> and it only happens with LVM?
15:19:13 <tbarron> yes, so far as i can tell
15:19:25 <gouthamr> before that reboot occurs, a bunch of tests run and pass - however, after the reboot, i don't know how, but testr_results.html reports all tests as having failed
15:20:16 <tbarron> gouthamr: do we still think the issue tracks with rax nodes?
15:20:23 <gouthamr> we've been playing with reducing the test concurrency, or modifying the backing file size given that we do thick provisioning, and may be running out of space
15:20:27 <tbarron> i do but am checking
15:20:45 <gouthamr> tbarron: yes, i have seen the job run on other clouds and pass or fail one or more scenario tests
15:21:05 <tbarron> i don't think it's a resource issue (and that was my main guess b/c rax nodes have less disk space on root than e.g. vexxhost)
15:21:07 <gouthamr> tbarron: apart from rax, i haven't seen this reboot behavior occurring elsewhere
15:21:20 <tbarron> https://zuul.opendev.org/t/openstack/build/51980ae05c024e35b73614a9db4925bd/log/logs/screen-m-shr.txt#8814-8829
15:21:31 <tbarron> that's on rax right before a reboot
15:21:45 <tbarron> instrumented for free memory and disk
15:23:15 <tbarron> the issue is complicated by unreliable syslog and journal. Both can lose info before the reboot, and syslog usually has limited info anyway.
15:23:26 <tbarron> Though I found one syslog with perhaps a clue:
15:23:41 <tbarron> https://zuul.opendev.org/t/openstack/build/1bc61a82ace14217807f28c5b8c9debe/log/controller/logs/syslog.txt#8463
15:24:34 <tbarron> but that's with the same kernel that is running on vexxhost
15:24:44 <tbarron> where we don't see an issue
15:25:13 <tbarron> this particular GPF seems to pertain to packet filtering, and the packets in question are ipv6
15:27:05 <gouthamr> good find, tbarron - we do run ipv6 tests even with cephfs-nfs
15:27:34 <tbarron> gouthamr: true and i haven't seen this issue there
15:28:22 <gouthamr> okay, let's make a patch disabling ipv6 data path tests, and see if that changes anything
15:28:52 <gouthamr> would help to isolate the issue if anything, so we can follow up
15:28:58 <tbarron> +1
15:29:38 <tbarron> the lvm job is updating kernel nfs exports whereas the cephfs-nfs job is using ganesha, but i'm just speculating again
15:30:34 <gouthamr> cool, we'll continue debugging then and take this to #openstack-manila
15:31:09 <gouthamr> but, in general, folks - do not recheck on failures in this job..
15:31:43 <dviroel> tbarron gouthamr great job investigating that, hope to find some time to join the debugging
15:31:54 <gouthamr> thank you dviroel
15:32:02 <gouthamr> anything else regarding $topic?
15:32:03 <carloss> tbarron gouthamr ++
15:33:07 <gouthamr> #topic Stable branch status
15:33:28 <gouthamr> we meant to talk about this one during the PTG
15:33:54 <gouthamr> i wanted to run this by the group here, before opening the discussion on the mailing list
15:34:10 <gouthamr> we have ETOOMANY branches that are in Extended Maintenance
15:34:23 <gouthamr> ocata through rocky
15:35:15 <gouthamr> the last time anyone's proposed a patch to ocata was a year ago
15:35:20 <gouthamr> #link https://review.opendev.org/#/q/project:openstack/manila+branch:stable/ocata
15:36:05 <gouthamr> pike's seen some activity this year, but the last patch merged three months ago:
15:36:07 <gouthamr> #link https://review.opendev.org/#/q/project:openstack/manila+branch:stable/pike
15:36:58 <gouthamr> i'm not sure any tempest jobs work on these branches.. and it's not clear if anyone benefits from us keeping these alive
15:37:54 <gouthamr> so should we just pull the trigger on EOL-ing these branches?
15:38:22 <gouthamr> that means we'll be creating a tag off their HEAD, and deleting the branches
15:38:50 <gouthamr> it'll save gate resources, because bitrot jobs are still running against these branches
15:39:06 <tbarron> +1
15:39:48 <dviroel> +1
15:40:00 <carloss> +1
15:40:47 <gouthamr> cool, does it mean others disagree, or don't care either way? :)
15:41:07 <carthaca> I don't care ;)
15:41:14 <lseki> +1
15:41:27 <danielarthurt> +1
15:41:43 <andrebeltrami> +1
15:41:53 <gouthamr> haha, that was my thought too, i don't feel like anyone is relying on us keeping these branches alive
15:42:13 <gouthamr> #action gouthamr will send an email asking to EOL stable/pike and stable/ocata branches
15:43:20 <gouthamr> we still have stable/queens and stable/rocky - for the moment, wearing my red fedora, i want to help keep stable/queens alive a bit longer, to allow backports that can make their way into rdo and rhosp
15:44:18 <dviroel> +1
15:44:29 <gouthamr> but, it's not a priority to get the gate up and running against these two branches - devstack's failing atm, i'll take a look when we're done fixing all things on the main branch
15:45:28 <gouthamr> however, we should actively keep EOL'ing older branches so we don't give the impression that they're meant to keep working for these extended periods while all development is focused ahead
15:46:06 <gouthamr> anyway, that's all i had to say about that..
15:46:10 <gouthamr> any other thoughts?
15:46:49 <gouthamr> #topic Specifications
15:47:01 <dviroel> \o/
15:47:15 <gouthamr> i see four proposed for this release, but only two actively being worked on..
15:47:31 <gouthamr> #link https://review.opendev.org/#/c/735970/ (Share server migration)
15:47:43 <gouthamr> #link https://review.opendev.org/#/c/739136/ (Add lite spec for share server limits)
15:48:05 <gouthamr> #link https://review.opendev.org/#/c/729292/ (Improve security service update --- deferred to Wallaby?)
15:48:27 <gouthamr> #link https://review.opendev.org/#/c/710166/ (Add share group replica)
15:49:51 <gouthamr> do you think we can actively review these this week
15:49:59 <gouthamr> and bring up any concerns at the next meeting?
15:50:05 <tbarron> dviroel: carloss thanks for #1
15:50:18 <tbarron> keeping focus on current release, and
15:50:37 <tbarron> #2 discussing issues publicly, upstream
15:51:05 <tbarron> gouthamr: that sounds like a good plan to me
15:51:22 <carloss> :)
15:51:33 <tbarron> Can we think of people who should review besides the usual suspects?
15:51:35 <carloss> I agree. Will take a look at the others I haven't reviewed yet
15:51:55 <gouthamr> we can take longer on this one:
15:51:56 <dviroel> would be good to have some feedback on #3 too, but I will ask for more eyes on #1 and #2 at this moment
15:51:56 <gouthamr> #link https://review.opendev.org/#/c/729292/ (Improve security service update --- deferred to Wallaby?)
15:52:07 <gouthamr> yeah..
15:52:19 <tbarron> anybody particularly impacted by share server migration or share server limits?
15:52:39 <carthaca> I will have a look, too
15:52:49 <tbarron> umm, him ^^
15:52:56 <dviroel> haha
15:52:58 <gouthamr> :) ty carthaca
15:53:19 <carthaca> Naturally, because I requested those features
15:53:42 <gouthamr> carloss: i'd reach out to Jon Vondra for the server limits spec because he had some thoughts
15:54:03 <carloss> oh, good
15:54:17 <carloss> will tag him in the patch as well
15:54:22 <gouthamr> thanks..
15:54:45 <gouthamr> okay, anything else about specs?
15:55:15 <gouthamr> we've ~6 minutes, maybe we can talk about one or two bugs, vhari?
15:55:24 <gouthamr> #topic Bugs (vhari)
15:55:27 <vhari> sure ..
15:55:39 <gouthamr> thanks, and sorry - it's been that sort of week :)
15:55:40 <vhari> gouthamr, may have time to consider closing this
15:55:44 <vhari> #link https://bugs.launchpad.net/manila/+bug/1838936
15:55:44 <openstack> Launchpad bug 1838936 in OpenStack Shared File Systems Service (Manila) "manila-share not working with ceph mimic (13.2) nor ceph nautilus (14.2)" [Undecided,Confirmed]
15:55:54 <gouthamr> ah yes!
15:56:34 <gouthamr> ty vhari, i'll do that
15:56:46 <vhari> gouthamr, gr8 ty
15:56:47 <gouthamr> vkmc is out today, but we know what might have happened in their environments
15:56:56 <gouthamr> i'll comment and close that bug
15:56:58 <vhari> that's a wrap for bugs, out of time ..
15:57:07 <gouthamr> thank you!
15:57:12 <gouthamr> #topic Open Discussion
15:58:24 <gouthamr> looks like we have none, and can save a whole minute :)
15:58:49 <gouthamr> thank you all for joining - let's get to reviewing and debugging these gate issues on #openstack-manila
15:59:01 <gouthamr> see you here next week, stay safe!
15:59:03 <dviroel> thanks!!!
15:59:07 <carloss> thanks!
15:59:09 <gouthamr> #endmeeting