15:01:01 <gouthamr> #startmeeting manila
15:01:02 <openstack> Meeting started Thu Jun 18 15:01:01 2020 UTC and is due to finish in 60 minutes.  The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:05 <openstack> The meeting name has been set to 'manila'
15:01:06 <tbarron> Hi
15:01:22 <lseki> o/
15:01:23 <andrebeltrami> o/
15:01:25 <carloss> o/
15:01:32 <vhari> hi
15:01:35 <gouthamr> courtesy ping: ganso vkmc amito dviroel danielarthurt
15:01:43 <danielarthurt> o/
15:02:01 <carthaca> hi
15:02:02 <gouthamr> hello everyone o/
15:02:25 <gouthamr> thanks for joining, i hope you're all doing well! The agenda for this meeting is here: https://wiki.openstack.org/wiki/Manila/Meetings#Next_meeting
15:02:25 <dviroel> o/
15:02:52 <gouthamr> let's begin as usual with,
15:02:55 <gouthamr> #topic Announcements
15:03:27 <gouthamr> i hope you all had some light reading from this ML post
15:03:29 <gouthamr> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015494.html (PTG Summary)
15:04:34 <tbarron> gouthamr: ty for the work doing this summary
15:04:43 <dviroel> gouthamr++
15:04:54 <gouthamr> sorry for the novel, but, if you feel there's something missing there, let me know - we'll revisit this when we review/commit the code we spoke about
15:05:13 <tbarron> it's a great reference even if maybe not a potboiler thriller
15:05:31 <gouthamr> haha, it's like a memoir of a thriller
15:05:37 <gouthamr> We're at milestone-1
15:05:59 <gouthamr> that means we're three weeks away from our specifications deadline
15:06:02 <gouthamr> #link https://releases.openstack.org/victoria/schedule.html (victoria release schedule)
15:06:40 <gouthamr> so this is a reminder: if you want to propose specs for the victoria cycle, please do so by Jul 10
15:07:33 <gouthamr> we'll discuss milestone-1 deliverables in a minute
15:07:47 <gouthamr> but, that's all i had in terms of announcements
15:07:55 <gouthamr> anyone else got any?
15:08:08 * vkmc sneaks in
15:08:21 <gouthamr> with an announcement? :)
15:08:36 <gouthamr> cool, let's move on with an ad-hoc topic
15:08:42 <gouthamr> #topic Milestone-1 rollcall
15:09:03 <gouthamr> no bugs in python-manilaclient marked for m-1
15:09:03 <gouthamr> #link https://launchpad.net/python-manilaclient/+milestone/victoria-1 (milestone-1 bugs in python-manilaclient)
15:10:06 <gouthamr> there's a new release already posted, it'll show up here https://releases.openstack.org/victoria/ soon-ish
15:10:18 <gouthamr> #link https://review.opendev.org/#/c/735710/
15:10:29 <gouthamr> there's nothing from manila-ui either
15:10:29 <gouthamr> #link https://launchpad.net/manila-ui/+milestone/victoria-1 (milestone-1 bugs in Manila UI)
15:10:39 <gouthamr> let's review this list
15:10:39 <gouthamr> #link https://launchpad.net/manila/+milestone/victoria-1 (milestone-1 bugs in Manila)
15:11:26 <gouthamr> there are 12 bugs in progress, 4 not in progress
15:11:36 <gouthamr> i'll move all 4 of these to milestone-2
15:12:03 <gouthamr> is there anything in the in-progress list that we should pay attention to?
15:12:17 <gouthamr> if not, we'll retarget them as well
15:13:17 <gouthamr> we're using these milestones to track work through the cycle, and not much else - we don't make a release for manila at milestones anymore - but trunk consumers can see when a bugfix lands based on the milestone we target
15:14:29 <gouthamr> *crickets*
15:14:54 <gouthamr> :) alright, please ping me/follow up on bugs that you own, or are reviewing
15:15:02 <gouthamr> and let's update the statuses accordingly
15:15:17 <gouthamr> any questions/concerns?
15:15:31 <gouthamr> #topic CI/Gate Status
15:16:14 <gouthamr> late last week we hit an issue with uwsgi
15:16:15 <gouthamr> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015432.html
15:16:21 <gouthamr> #link https://bugs.launchpad.net/devstack/+bug/1883468 (stack.sh fails because uWSGI directory not found in lib/apache)
15:16:21 <openstack> Launchpad bug 1883468 in devstack "stack.sh fails because uWSGI directory not found in lib/apache" [Critical,In progress]
15:16:23 <gouthamr> #link https://bugs.launchpad.net/manila/+bug/1883715 (Manila API fails to initialize uwsgi)
15:16:23 <openstack> Launchpad bug 1883715 in OpenStack Shared File Systems Service (Manila) "Manila API fails to initialize uwsgi" [Critical,Fix released] - Assigned to Douglas Viroel (dviroel)
15:16:47 <gouthamr> there have been fixes in devstack, and in manila (thanks, dviroel)
15:16:54 <dviroel> gouthamr: np
15:17:05 <gouthamr> on the main branch, as well as stable/ussuri, train and stein branches
15:17:14 <tbarron> dviroel++
15:17:22 <carloss> dviroel++
15:17:23 <gouthamr> but, everything older - i.e., the extended maintenance branches are still broken
15:18:33 <gouthamr> we don't use uwsgi on queens i think
15:19:20 <gouthamr> by "we", i mean when deploying manila-api - but, because the rest of the projects do, devstack would be broken
15:19:34 <gouthamr> https://review.opendev.org/#/c/631338/ --- iirc, we added this during stein
15:20:33 <gouthamr> please keep a lookout for this before "rechecking" on these older branches
15:20:46 <gouthamr> anything else to add here, dviroel?
15:21:19 <dviroel> i don't think so, but we'll need stable/rocky and stable/queens working again soon, to land an important fix
15:21:21 <gouthamr> if you saw third party CI failing - this is likely the issue, but, i think dviroel/andrebeltrami hit some other issues in their CI
15:21:57 <dviroel> gouthamr: yeah, it's almost healthy again
15:22:18 <dviroel> gouthamr: we are updating the images at this moment
15:23:01 <dviroel> do we have any job running on ubuntu 16.04?
15:23:02 <gouthamr> dviroel: what OS do you run the NetApp CI on?
15:23:12 <gouthamr> dviroel: we do, in rocky and queens
15:23:50 <gouthamr> dviroel: actually, most dsvm jobs in rocky/queens should be running on xenial
15:23:53 <dviroel> gouthamr: migrating to 18.04 now
15:25:08 <gouthamr> dviroel: ack, devstack was building uwsgi from source for xenial - so i'm not sure how that can be resolved
15:25:37 <dviroel> gouthamr: yeah, we might need a workaround for that
15:25:57 <gouthamr> dviroel: we'll follow that up - one alternative is to use mod-wsgi, but i'm not sure the rest of the projects support that - devstack itself had a deprecation warning when we were testing with it
15:26:47 <dviroel> gouthamr: ack, we can try that also
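[Aside: for anyone reproducing this on an extended-maintenance branch, a minimal local.conf sketch of the stopgap discussed above, assuming the manila devstack plugin on those branches still exposes the MANILA_USE_UWSGI / MANILA_USE_MOD_WSGI toggles:

    # local.conf - fall back to mod_wsgi for manila-api where the
    # distro-packaged uwsgi is unavailable (e.g. Ubuntu Xenial)
    [[local|localrc]]
    MANILA_USE_UWSGI=False
    MANILA_USE_MOD_WSGI=True

As noted in the discussion, devstack printed a deprecation warning for mod_wsgi when this was last tested, so treat this as a workaround rather than a fix.]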
15:26:50 <gouthamr> dviroel: if you're updating images, i suggest you update main branch testing with focal fossa
15:27:29 <dviroel> gouthamr: we'll do that next
15:27:33 <gouthamr> ^ same goes for all third party CI systems - given that we'll likely run into more issues like this in the future
15:28:16 <gouthamr> any other concerns / questions?
15:28:55 <gouthamr> cool, let's move on
15:29:06 <gouthamr> #topic Reviews needing attention
15:29:10 <gouthamr> #link https://etherpad.openstack.org/p/manila-victoria-review-focus
15:29:45 <gouthamr> is there anything on that list that needs to be discussed?
15:30:53 <gouthamr> i haven't gotten back to the zuulv3 goal since before the PTG - we should probably status check next week
15:31:27 <gouthamr> but it generally looks like we're not paying attention to the reviews on this etherpad
15:32:08 <gouthamr> :) kinda defeats the purpose - i'll start assigning reviews and reaching out this week
15:32:49 <dviroel> yeah, it doesn't seem to work for small changes/fixes, but it does work when we're close to feature freeze
15:33:18 <gouthamr> possibly, since we have deadlines for those
15:33:39 <gouthamr> and bugfix deadlines are more relaxed
15:34:26 <gouthamr> let's see, i'll go down that list and chase people for status - if you own a change on the list, or have reviewed it, please leave a status message under the change
15:34:43 <gouthamr> let's move on and discuss some bug backlog
15:34:46 <gouthamr> #topic Bugs (vhari)
15:34:55 <gouthamr> o/ vhari - floor is yours
15:35:08 <vhari> gouthamr, ofc we have bugs to scrub :)
15:35:11 <vhari> #link https://bugs.launchpad.net/manila/+bug/1639662
15:35:11 <openstack> Launchpad bug 1639662 in OpenStack Shared File Systems Service (Manila) "Share service VM system to restart stuck" [Undecided,In progress] - Assigned to Xiaoyang Zhang (es-xiaoyang)
15:35:24 <vhari> easy one.. fix merged ..
15:35:34 <gouthamr> oh, yes
15:35:37 <vhari> can this be closed now?
15:36:03 <gouthamr> vhari: yes, we can move this to "Fix Released", a milestone isn't necessary
15:36:21 <gouthamr> bot went on a holiday
15:36:33 <vhari> k
15:36:35 <vhari> next up
15:36:39 <vhari> #link https://bugs.launchpad.net/manila/+bug/1754428
15:36:39 <openstack> Launchpad bug 1754428 in OpenStack Shared File Systems Service (Manila) "Tempest failure: manila_tempest_tests.tests.api.test_rules.ShareIpRulesForNFSTest.test_create_delete_access_rule_with_cidr fails intermittently" [Medium,Triaged]
15:37:26 <vhari> need to know if this is still an issue ..
15:37:53 <gouthamr> i think it is, we'll need to look on the logserver for recent failures
15:38:10 <vhari> if so need to add minor triage info
15:39:32 <gouthamr> vhari: ack, probably isn't an easy fix
15:39:53 <gouthamr> vhari: i'll check if http://logstash.openstack.org/#/dashboard/ shows any occurrences and we can update the bug
15:40:34 <gouthamr> let's loop back on this one after the meeting
15:40:45 <vhari> gouthamr, ack .. repro info will help
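[Aside: a starting point for the logstash check gouthamr mentions, assuming the standard Kibana/Lucene query fields used on logstash.openstack.org; the test name is taken from the bug title:

    message:"test_create_delete_access_rule_with_cidr" AND build_status:"FAILURE"

Narrowing further by project ("openstack/manila") and a recent time window should show whether the intermittent failure still occurs.]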
15:41:06 <vhari> #link https://bugs.launchpad.net/manila/+bug/1772029
15:41:06 <openstack> Launchpad bug 1772029 in OpenStack Shared File Systems Service (Manila) "ganesha library: update of export counter and rados object url index object racy" [Medium,In progress] - Assigned to Ramana Raja (rraja)
15:42:15 <vhari> looking for some update on this one too
15:42:18 <gouthamr> hmmm, this looks like rraja's tracker - i wonder what's left to do in manila
15:42:34 <gouthamr> the issue of atomicity of updates seems to be fixed in ceph itself
15:42:35 <tbarron> I wonder if it's actually a supported configuration.
15:42:49 <gouthamr> but he mentions, "The fix could will require changes to the ganesha library (in manila) and the ceph_volume_client library (in Ceph)."
15:42:57 <tbarron> multiple manila driver instances using the same Ganesha server
15:43:28 <gouthamr> tbarron: hmmm, that's a good point - what is supported/recommended when using multibackend?
15:44:22 <tbarron> He might have seen that the code would be racy in such a circumstance but I think CephFS folks including rraja will say that there may
15:44:23 <gouthamr> multifs hasn't been supported in ceph - until then, should we say you can only have one manila ceph-nfs backend per ceph cluster?
15:44:37 <tbarron> be other issues Ceph side that you'd hit first.
15:45:14 <tbarron> Also, the plan is over time to migrate off the python-specific ceph_volume_client library, right?
15:46:21 <tbarron> And it looks like rraja envisioned the fix to be in that library, not in manila driver itself?
15:47:04 <gouthamr> hmmm, we won't stop using this code path (ceph-volume-client to write exports into rados) until ceph-mgr replaces that operation (it doesn't today)
15:47:44 <tbarron> Agree, but it seems like he's saying it's over on the ceph library/mgr side of the fence.
15:48:10 <tbarron> it == the work to fix
15:48:17 <gouthamr> the bug report also says "The fix could will require changes to the ganesha library (in manila)..."
15:48:25 <tbarron> i see
15:48:48 <tbarron> because something changes on the other side and we need to adapt?
15:48:53 <gouthamr> perhaps
15:50:50 <gouthamr> vhari: we need some investigation
15:51:11 <vhari> gouthamr, ack
15:51:17 <gouthamr> i'll post on this bug after this meeting
15:51:20 <tbarron> gouthamr: I like your suggestion that we add a note in the ganesha doc indicating that currently running multiple ceph-nfs backends with the same ganesha server is not safe, with a link to this bug.
15:51:35 <gouthamr> tbarron: +1
15:51:41 <gouthamr> https://bugzilla.redhat.com/show_bug.cgi?id=1600068
15:51:41 <openstack> bugzilla.redhat.com bug 1600068 in CephFS "ceph_volume_client: allow atomic updates of object" [Low,Closed: errata] - Assigned to ridave
15:51:45 <gouthamr> ^ related bugzilla
15:51:46 <tbarron> I doubt anyone is actually doing this though.
15:52:05 <gouthamr> ack
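[Aside: a possible wording for the documentation note tbarron and gouthamr agreed on, sketched in the RST convention the admin docs use; the exact file and phrasing are up to whoever picks this up:

    .. note::

       Running multiple manila CephFS-NFS back ends against the same
       NFS-Ganesha server is currently not safe: concurrent updates to
       the export counter and the export index object can race. See
       https://bugs.launchpad.net/manila/+bug/1772029 for details.

]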
15:52:17 <vhari> gouthamr, so that's a wrap for bugs
15:52:21 <gouthamr> awesome, ty vhari
15:52:26 <vhari> yw gouthamr
15:52:29 <gouthamr> #topic Open Discussion
15:53:14 <gouthamr> so we had some concerning news from the neutron folks this week on the ML
15:53:20 <gouthamr> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-June/015480.html ([neutron][neutron-dynamic-routing] Call for maintainers)
15:53:42 <gouthamr> it appears that they're looking for helping hands, or they'll be deprecating this project
15:54:30 <dviroel> oh
15:54:50 <tbarron> Where do we rely on it? Only for IPv6 environments?
15:55:09 <gouthamr> we've used this service, in combination with quagga/zebra to test ipv6 data path
15:56:07 <tbarron> specifically there, we need a way to advertise more specific cidr networks so that they are reachable on return paths
15:56:24 <tbarron> at least that is my understanding, someone correct me if there's more to it
15:56:33 <gouthamr> yes that's mostly it
15:56:38 <tbarron> mostly?
15:57:17 <gouthamr> our initial solution was to have something set up static return routes - but it got complicated real quick
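[Aside: the static-route alternative gouthamr refers to amounts to installing a more-specific IPv6 route toward the share network on each router carrying return traffic; the addresses below are illustrative documentation prefixes, not from the original discussion:

    # on the router that carries return traffic to the share network
    sudo ip -6 route replace 2001:db8:face::/64 via 2001:db8:cafe::10

Doing this per subnet and per router is what "got complicated real quick"; BGP speakers advertised via neutron-dynamic-routing automate exactly this.]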
15:57:20 <tbarron> Cause I think we should discover what will be the gap and ask how to fill it, it can't be a manila-only need
15:58:15 <gouthamr> it can't be, but i only see one response there so far
15:58:42 <gouthamr> so if anyone here is interested in continuing to use that project, let's look at sharing the maintenance responsibilities
15:59:06 <tbarron> I disagree, we shouldn't work on this.  It's a diversion of resources.
15:59:26 <tbarron> And there must be another good way to solve this problem if the project is languishing.
16:00:06 <tbarron> There are major openstack distros that support IPv6 but don't support dynamic routing in OpenStack.
16:00:19 <gouthamr> agreed; we're at the hour - let's take this to #openstack-manila
16:00:31 <tbarron> To be clear, if someone is really enthusiastic and interested
16:00:32 <tbarron> kk
16:00:34 <gouthamr> thank you all for attending, stay safe!
16:00:39 <gouthamr> #endmeeting