15:00:38 #startmeeting manila
15:00:40 Meeting started Thu Jun 18 15:00:38 2015 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:44 The meeting name has been set to 'manila'
15:00:47 Hi
15:00:49 hello
15:00:51 hi
15:00:52 Hello
15:00:52 hello
15:00:55 hi
15:00:57 hi
15:01:05 #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:01:13 hello
15:01:44 real quick before we get started, I wanted to remind everyone that L-1 is next week
15:01:47 hi
15:01:51 hi
15:02:23 only a few things targeted at L-1 have been merged, so there will be a lot of retargeting
15:03:10 but the priority for reviews the next week should be stuff that's targeted at L-1
15:03:48 #topic manila-service-image hosting
15:04:13 u_glide: you're up
15:04:56 we agreed with the infra team that manila-service-image will be hosted as a regular release on the tarballs site
15:05:42 all releases + the latest master build will be hosted
15:05:54 only one question is left
15:06:22 hi
15:06:22 which build will we use in the devstack plugin
15:06:37 latest master or some stable release
15:06:57 master
15:06:59 I think we should use stable, with periodic updates to newly created releases
15:07:12 -1
15:07:16 to avoid nice surprises
15:07:32 this team is going to control the image project though
15:07:35 how can we surprise ourselves?
15:07:51 stable branches should use stable, but I'd think master would use latest
15:07:55 I think for devstack we are supposed to always test the latest
15:08:00 easy - tempest tests are running mostly in gates =)
15:08:01 bswartz: the service image could be incompatible between releases
15:09:10 for example manila L will be compatible only with service image <= 0.2.0
15:09:19 I agree to use latest only if we run all tempest tests against that image on its project commits
15:09:28 okay so if we pin the release of the service image that runs in the gate, how will we get ourselves unstuck when there's a bug that needs fixing?
15:11:06 bswartz: you mean cross-changes?
15:11:13 can we not have a default known-good image URL recorded in the manila tree itself? and pull that (or a conf-overridden alternative if something goes wrong)?
15:12:08 updates to the image are going to be approved after review, correct?
15:12:21 hmm
15:12:34 if something breaks the image, thus breaking the gate, then we revert, pull, or fix
15:12:35 ganso_: yes, standard gerrit-review process
15:12:40 ganso_: yes, the image project will have gerrit change control like everything else
15:13:03 the image will be tested on its own gate then, so we will know when a patch breaks it
15:13:22 can we just run our latest tempest tests from manila master against every image review request?
15:13:33 toabctl: yes, we can
15:13:40 toabctl: yes, that's what I was thinking
15:13:42 toabctl: and should do
15:13:57 ok, that's also what I would prefer to do
15:14:17 okay so assuming we do it that way, is there any danger in using the master version?
15:14:25 do we plan to run ALL jobs, or some of them?
15:14:25 so we can tempest test and use latest for master devstack
15:14:36 what about someone running "stable/x" devstack?
15:14:55 markstur: for such things we keep the old image as is
15:14:56 bswartz: old releases will download incompatible images
15:15:00 u_glide: hm, good question. I think we don't need all.
15:15:06 u_glide: I would say just 1 job
15:15:14 markstur: or push a new commit with a new image
15:15:44 images just have stable/liberty and so on branches, which are used for the corresponding manila branches
15:15:49 I was thinking the old branches would need to be told to use an old image
15:16:19 yeah, I think stable branches will use the stable versions of the image
15:16:28 markstur: actually the newly created image is going to have the same functionality plus additions, without incompatibilities
15:16:37 stable/liberty will track the latest commit to that branch just like master does
15:16:44 vponomaryov: it should, but we are not sure we can guarantee that
15:16:51 hi
15:17:08 vponomaryov, backwards compat would be best, but I think u_glide already suggested we could break compat
15:17:09 ganso_: I mean the difference between the current one and the first new one
15:17:17 oh
15:17:23 markstur: +1
15:17:40 vponomaryov: I see, like an upgrade plan
15:17:49 vponomaryov: always backwards compatible with the previous one
15:17:54 there is a time period (when you've accepted an image change and the image rebuilds and needs to be published) where manila uses an outdated image
15:18:50 hm, but that shouldn't be a problem I guess...
15:19:16 so, we agreed on running tempest tests against the image project, and using the latest build for the master branch, right?
15:19:29 and stable builds for stable branches?
15:19:34 +1
15:19:41 ++
15:20:02 +1
15:20:09 +1
15:20:10 +1
15:20:41 okay sounds good
15:20:54 u_glide: you satisfied with this plan?
15:21:01 bswartz: yes
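[A minimal sketch of the image-selection policy agreed above: devstack on master pulls the latest master build of manila-service-image from the tarballs site, while a stable branch pulls the build published for its series. The helper name and URL layout below are assumptions for illustration, not the actual devstack plugin code.]

```python
# Hypothetical helper modeling the agreed policy; the tarballs URL layout
# is an assumption, not a documented contract.
TARBALLS = 'https://tarballs.openstack.org/manila-image-elements/images'

def service_image_url(branch):
    """Pick the manila-service-image build for a given manila branch."""
    if branch == 'master':
        # master devstack always tests the latest master build
        return '%s/manila-service-image-master.qcow2' % TARBALLS
    # stable/<series> devstack pins the build published for that series
    series = branch.split('/')[-1]
    return '%s/manila-service-image-%s.qcow2' % (TARBALLS, series)
```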
15:21:20 #topic Separate ID approach for Share Migration and Replication
15:21:43 I replied to the mail thread here
15:21:47 #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/067376.html
15:21:55 we don't need to hash this one out again in the meeting
15:22:08 I think it makes sense to keep the discussion on the ML
15:22:43 but I wanted to mention that I'm now in favor of implementing "share instances" even though it will be a lot of work/code change
15:23:11 ganso_: this will affect your migration proposal, but hopefully in a good way
15:23:16 right, so if we make some progress on the share-instance idea and more people agree it looks promising, then we could step forward
15:23:52 bswartz: yes, I was about to start coding a new DB column, but preferred to wait for this meeting's discussion
15:24:09 ganso_: well, the share instance stuff will take a long time
15:24:17 you'd be better off continuing your development in parallel
15:24:38 there will be a lot of work to do in the data copy service and on the network side, independent of this change
15:24:42 bswartz: I think your email summarized my proposals in a good way
15:25:00 my proposal only addresses the thorny problem of how the 2 shares are tracked in the database and what IDs are used
15:25:11 bswartz: we have 2 big classes of changes: one that involves changing the ID and the vendor implementing a method, and another that requires a lot of effort on the core code
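[A minimal sketch of the "share instances" idea under discussion, assuming a simple parent/child split: the tenant-facing share keeps a stable ID, while each physical copy (e.g. a migration source and destination) gets its own instance row. Column names are illustrative assumptions, not the schema that was eventually merged.]

```python
# Sketch of the share/share-instance split discussed on the ML thread.
from sqlalchemy import Column, ForeignKey, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Share(Base):
    __tablename__ = 'shares'
    id = Column(String(36), primary_key=True)    # ID the tenant sees; never changes
    display_name = Column(String(255))

class ShareInstance(Base):
    __tablename__ = 'share_instances'
    id = Column(String(36), primary_key=True)    # internal ID, one per physical copy
    share_id = Column(String(36), ForeignKey('shares.id'))
    host = Column(String(255))                   # backend hosting this copy
    status = Column(String(255))                 # e.g. 'available', 'migrating'
    share = relationship(Share, backref='instances')
```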
15:25:55 ganso_: maybe better to direct effort at another blocker - the admin network that is required for mounting both shares?
15:26:04 vponomaryov: +1
15:26:08 while the ID approach is under development
15:26:41 there are still a lot of unresolved questions
15:26:46 vponomaryov: that's the next topic
15:26:57 :-)
15:27:03 oh, right =)
15:27:12 vponomaryov: yes, that is also important, but bswartz mentioned that the VM approach is not interesting; if more people hop on that bandwagon, then we are left with the vendors implementing the necessary methods and partial support in Liberty
15:27:34 #topic Network accessibility for share migration
15:27:43 vponomaryov: and it is already coded...
15:27:52 this was another new problem ganso_ brought up last week
15:27:58 vponomaryov: there is that benefit
15:28:01 I replied to that thread too
15:28:04 bswartz, ganso: why exactly a VM? it can be any host accessible to the manila host and the share backends
15:28:07 #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/067378.html
15:28:47 vponomaryov: a VM would be inside OpenStack; the manila host is in the admin network. Admin network access to shares is an unusual use case at this time
15:28:51 in short, the problem is that some shares are not mountable by the manila services responsible for migrating data in the current proposal
15:29:16 because those shares are only exported on private/segmented tenant networks
15:29:24 for that case we would have a dependency on such a proxy that is accessible
15:30:19 my proposal to fix this problem is to require backends which support exporting shares on private networks (which is only backends with driver_handles_share_servers=true) to also export shares on the admin network
15:31:06 vponomaryov: would the use of a proxy be something standard for all driver vendors?
15:31:15 this should not be a very difficult technical requirement for drivers
15:31:17 bswartz: +1 The network plug-in mechanism we have should be reusable for the admin network.
15:31:22 vponomaryov: like, something we can code in core code?
15:31:47 vponomaryov: what kind of proxy do you have in mind?
15:31:54 ganso_: for drivers that support migration
15:32:20 bswartz: a host that is available to share backends with migration support
15:32:21 if it's based on a nova VM, then I have a problem with it, because I don't want to make share migration dependent on nova
15:32:56 bswartz: it does not matter whether it is a Nova VM or not
15:33:07 more and more openstack clouds will be based on other forms of compute, such as ironic and magnum
15:33:21 bswartz: +1
15:33:26 ironic is a backend for Nova
15:34:08 bswartz: what if it is dependent on neutron? if I understand this correctly, it would be possible to work some neutron magic to make the manila host, or data copy service node, connect to the backend directly through the openstack network, like... if admins, or even DHSS=true drivers, usually make use of a provider network (FLAT or VLAN) to make that possible, what if the node in the admin network can also connect to that provider network inside openstack?
15:34:09 and I'm also not forgetting standalone manila without the rest of openstack
15:35:05 I'm concerned that a network bridge/proxy approach will be too complicated -- we already have several network plugins, and we're likely to get more
15:35:24 a proxy would have to understand every network system supported by manila
15:35:40 putting the requirement on the drivers to simply provide an accessible mount point makes the problem far easier
15:35:45 vponomaryov: the generic driver creates a VIF so the manila node can access the manila service network inside OpenStack; is it possible to create something like this for the provider network the backend is in?
15:36:01 I'm curious to know if there are drivers that can't do that, however
15:36:17 ganso_: Yes, neutron magic could be used, but that doesn't solve the non-OpenStack use case. And neutron is already complicated without resorting to magic. If we define a singleton admin network, it could be defined using any of our 5 network plugins.
15:36:46 it seems like a fairly modest requirement to provide access to shares directly on a flat network in addition to whatever other tenant networks require access
15:36:56 ganso_: And the dhss drivers already know how to handle network resources, this is just incremental to that.
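[A hypothetical sketch of what bswartz's proposal could look like at the driver level: a DHSS=true driver returns one export location per network it serves and tags the admin-network entry so tenants never see it. The _provision_* helpers and the 'is_admin_only' flag are illustrative assumptions, not an agreed interface.]

```python
# Hypothetical driver method illustrating the "also export on the admin
# network" proposal for driver_handles_share_servers=True backends.
def create_share(self, context, share, share_server=None):
    tenant_ip = self._provision_on_tenant_network(share, share_server)
    admin_ip = self._provision_on_admin_network(share, share_server)
    return [
        # what the tenant mounts, on its private/segmented network
        {'path': '%s:/shares/%s' % (tenant_ip, share['id']),
         'is_admin_only': False},
        # what the data copy service mounts during migration
        {'path': '%s:/shares/%s' % (admin_ip, share['id']),
         'is_admin_only': True},
    ]
```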
15:37:44 cknight: +1 for the non-OpenStack use case
15:38:02 cknight: that can be quite a deal breaker
15:38:18 so, no neutron magic
15:39:08 so, requiring a network path is already coded in the prototype; as soon as it is merged, driver vendors will probably focus more on that
15:39:27 fwiw, I think the proxy approach would also work, but I feel that it's likely to be more effort to maintain over time
15:39:29 or we will get complaints from ops that this requirement is too complex to satisfy
15:40:37 bswartz: we can have more than one solution
15:40:59 vponomaryov: +1, a fallback solution maybe
15:41:54 vponomaryov: if all of the drivers that support share_servers and segmented networking can be modified to provide an additional export_location on the admin network, then no other solution is necessary
15:42:02 and that's only a few drivers
15:42:10 bswartz: solutions can be dedicated to different network plugins or installations in general
15:42:33 vponomaryov: An alternate fallback may be interesting, but let's get the primary design working first.
15:42:51 I don't mind
15:42:54 bswartz: what if we delegate the network plugin improvement to the driver vendors that cannot provide the network path? I remember I was going to contribute to the network plugin when I was coding my driver
15:43:06 I'd rather avoid doing extra work if it can be avoided
15:43:47 the main thing that would convince me that we need a general proxy approach would be if there are drivers that simply can't implement my proposal
15:44:14 dhss=false drivers
15:44:21 my current understanding of what would be needed for a proxy that would work in all cases is that it would be a ton of work
15:44:39 vponomaryov: those don't have this problem though
15:45:33 for any driver that doesn't manage networking, it is the administrator's job to ensure connectivity, both to the tenant networks and to the admin network
15:46:15 we should consider scaling of this operation
15:46:24 scaling what
15:46:40 scaling the load of migration operations
15:46:46 scaling of copying from one backend to another
15:46:51 u_glide: the data movement service should be horizontally scalable
15:46:58 that's what the data copy service is for
15:47:00 u_glide: run as many of them as you need
15:47:18 cknight: ok
15:48:08 yes, I think having a separate service for actually doing the data copying is essential to avoid bottlenecks
15:48:18 as cknight says, horizontally scalable
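[A minimal sketch of the horizontally scalable data copy idea: each worker mounts the admin-network exports of the source and destination shares and copies files, so running more workers spreads the load. NFS, the temp mount points, and the rsync invocation are illustrative assumptions, not the actual data copy service.]

```python
# Sketch of one data-copy worker: mount both admin-network exports,
# copy with rsync, unmount. Requires root; paths are illustrative.
import subprocess
import tempfile

def copy_share(src_export, dst_export):
    src = tempfile.mkdtemp(prefix='migr-src-')
    dst = tempfile.mkdtemp(prefix='migr-dst-')
    subprocess.check_call(['mount', '-t', 'nfs', src_export, src])
    subprocess.check_call(['mount', '-t', 'nfs', dst_export, dst])
    try:
        # -a preserves ownership/permissions/timestamps; the trailing
        # slash copies the directory contents rather than the directory
        subprocess.check_call(['rsync', '-a', src + '/', dst + '/'])
    finally:
        subprocess.check_call(['umount', src])
        subprocess.check_call(['umount', dst])
```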
15:49:07 anyways, there is a ML thread on this topic
15:49:51 my proposal is there, and I'd like any maintainers of share-server-supporting drivers to provide feedback
15:49:52 I guess this topic is not such a blocker for the ID topic after all
15:50:15 for the generic driver it should be really easy to add another admin-network-facing network interface to export the shares
15:50:27 * bswartz hopes that's true
15:50:47 #topic open discussion
15:50:49 bswartz: NetApp and HP have already said they can do it. What other DHSS drivers are there?
15:50:54 anything else today?
15:51:02 bswartz: I already coded that in my prototype, but it kinda looks like a workaround approach
15:51:02 cknight: one of EMC's I think
15:51:03 EMC
15:51:20 bswartz: there was another topic in the list
15:51:29 OK, xyang1 can weigh in, then.
15:51:30 bswartz: 4. Quick review of Minimum Requirements?
15:51:42 oh!
15:51:51 someone modified the agenda in mid-meeting....
15:51:59 sneaky
15:52:03 #topic Quick review of Minimum Requirements
15:52:04 cknight: sorry, I missed the latest discussion
15:52:12 bswartz: no, 1 minute prior to the meeting :P
15:52:23 ganso_: confirm =)
15:52:27 someone removed the upcoming minimum requirements
15:52:35 xyang1: No worries, it's in the ML thread Ben linked to.
15:53:01 xyang1: ping me later today or tomorrow if you want to discuss
15:53:10 #link https://etherpad.openstack.org/p/manila-minimum-driver-requirements
15:53:12 cknight, bswartz: thanks, I'll take a look
15:54:12 ganso_: any specific questions, or are you just asking if there is more feedback on what you have?
15:54:14 so, I had included an "upcoming minimum requirements" section so driver vendors can anticipate what's coming that they need to support
15:54:20 I think QoS support is advanced
15:54:37 but someone removed that
15:54:46 yes, QoS isn't required yet
15:54:57 it is in the minimum required list
15:55:04 it is not even implemented yet
15:55:28 not every driver can support that
15:55:32 someone from the future came to us =)
15:55:38 ganso_: manage/unmanage snapshot and thin provisioning, right?
15:55:40 xyang1: I added it because it was a default capability; thanks for the feedback, I will remove it :)
15:55:48 rraja: yes :) thanks!
15:56:01 ganso_: default to False, like in Cinder :)
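[A sketch of the capability-reporting convention being referenced, assuming a Cinder-like scheme where optional capabilities such as thin provisioning default to False unless a driver explicitly reports them. The keys shown are examples, not the canonical minimum-requirements list.]

```python
# Illustrative driver stats update following the "default to False,
# like in Cinder" convention for optional capabilities.
def _update_share_stats(self):
    self._stats = {
        'share_backend_name': 'EXAMPLE',
        'driver_handles_share_servers': True,
        'snapshot_support': True,
        # optional capability: the scheduler would treat a missing or
        # False value as "not supported", so drivers lacking it simply
        # don't have to report it
        'thin_provisioning': False,
    }
```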
15:56:34 I'm not sure we need to enumerate future minimum requirements until the features are actually merged and shown to be working
15:56:44 ok so, Thin Provisioning will not be a minimum requirement, but Manage/Unmanage Snapshot will, correct?
15:57:12 we have lots of proposals for features, but without working code it's hard to even have a discussion about when they'll be required, if ever
15:57:18 bswartz: I can change the title to "Planned Upcoming Minimum Requirements", so it will make more sense that they are still uncertain
15:57:23 ganso_: that is future, not implemented yet
15:57:25 yes, that helps
15:58:08 btw, can Access Rule be RW only?
15:58:16 ro/rw
15:58:39 no idea if that was discussed in the last meetings, but what is the progress on the external CI systems from the different vendors? I guess that's the mini-minimum requirement...
15:58:42 vponomaryov: ok, thanks
15:59:00 toabctl, mini???
15:59:07 mega
15:59:09 Somewhere we should also enumerate optional features.
15:59:14 markstur: ah, right! :)
15:59:19 And which drivers support them.
15:59:20 markstur: ^^
15:59:32 toabctl: the upcoming deadline next week is to have all the driver maintainers sign up and commit to doing CI
15:59:45 I'm going to track down those who haven't committed and find out why they're not paying attention
16:00:17 bswartz: so you are in personal contact with the different maintainers?
16:00:21 #link https://etherpad.openstack.org/p/manila-driver-maintainers
16:00:37 we know who they are from the git history
16:00:46 and I've talked to all of them in the past
16:01:03 some seem to be less active recently
16:01:04 we should continue the "minimum" req list discussion next week
16:01:05 we're out of time though
16:01:14 bswartz: so even if they don't read openstack-dev, they all know that there is a deadline?
16:01:45 they should be following the manila ML at least
16:01:56 those who aren't, I will reach out to personally
16:02:07 bswartz: ok, that's good news.
16:02:11 that's why we have this early deadline
16:02:19 thanks everyone
16:02:21 #endmeeting