15:00:24 #startmeeting manila
15:00:24 Meeting started Thu May 28 15:00:24 2015 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:28 The meeting name has been set to 'manila'
15:00:31 Hi!
15:00:32 Hello
15:00:34 hello folks
15:00:36 hi
15:00:37 hello
15:00:42 hello
15:00:42 Hello
15:00:43 hi
15:00:52 bswartz: Any news on Baby Liberty?
15:00:55 #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:01:04 cknight: lol not yet
15:01:15 we do have a name picked out, and it's not Liberty
15:01:27 bswartz: :-)
15:01:43 bswartz: you have time to change your mind =)
15:02:04 heh
15:02:14 okay so first topic
15:02:20 #topic Manila Midcycle Meetup
15:02:31 this is something I had hoped to figure out before the summit
15:03:00 the cinder team has set their midcycle meetup for August 4-7
15:03:40 o/
15:03:41 my thinking is that in case there are people who want to travel for both, we should set the Manila one 2 weeks before or after that
15:04:12 bswartz: +1
15:04:53 that makes the prospective dates either July 21-23 or August 18-20
15:05:20 July 21-23 is the week before L-2
15:05:29 August 18-20 is right in the middle of L-3
15:05:45 I'm inclined to go earlier
15:05:58 bswartz: Middle of L-3 seems too late.
15:06:09 so I'd like to propose we hold the meetup July 21-23
15:06:12 cknight: +1
15:06:32 I'm not aware of any serious conflicts; please let me know if those dates are bad for you
15:06:44 I think I'd miss that (vacation).
15:06:59 also regarding location, NetApp would like to volunteer our RTP office
15:08:02 we will do an online format like last time because I know many people can't travel
15:08:34 but I'm hoping that by planning it father in advance, we will get a good number to join in person
15:09:13 bswartz: "father in advance" nice
15:09:25 if anyone else wants to volunteer a location, respond to the email thread that I'm going to start after this meeting, and hopefully we can get it settled this week
15:09:31 s/father/farther/
15:09:34 cknight, +1
15:09:35 doh
15:10:07 markstur: I'll talk to you offline about your vacation dates
15:10:21 anything else about the meetup plan?
15:11:01 #topic Liberty-1 Milestone
15:11:25 #link https://wiki.openstack.org/wiki/Liberty_Release_Schedule
15:11:33 #link https://launchpad.net/manila/+milestone/liberty-1
15:11:48 there's only 4 weeks until L-1
15:12:09 it looks like each milestone is just 5 weeks long in Liberty
15:12:40 we have a couple of targeted blueprints, but now is the time to start getting that list in order
15:13:52 Anything anyone is working on now should have a blueprint targeted to Liberty, so write them up and ask me to target them when they're ready
15:15:17 #topic 3rd party vendor CI
15:15:37 so I started the ML thread before the summit
15:16:13 there were some helpful suggestions from anteaya but no other feedback on the plan
15:16:42 feedback: looks good
15:16:55 I linked to your email thread at both my third party meetings this week
15:17:11 so I'm going to assume everyone is happy with it and start implementing the plan as it's written in the email
15:17:20 anteaya: thanks!
15:17:35 the key for me is that all manila core reviewers are familiar with http://docs.openstack.org/infra/system-config/third_party.html
15:17:39 bswartz: welcome
15:18:04 if core reviewers from manila have questions please talk to me
15:18:08 anteaya: thanks for the link
15:18:31 my preference is to support the manila team so you can support the third party operators
15:18:41 of course all are welcome at third party meetings
15:18:41 core reviewers, please read that page - I've glanced over it before but I haven't read every word, so I'll do that today
15:18:48 thank you
15:19:02 ask me if you need any clarifications
15:19:34 so the first part of the plan calls for identifying all the driver maintainers and collecting contact info
15:19:41 #link https://etherpad.openstack.org/p/manila-driver-maintainers
15:19:47 I've created an etherpad for that purpose
15:20:33 can we eliminate the etherpad and go straight to https://wiki.openstack.org/wiki/ThirdPartySystems ?
15:20:34 I actually know who most of the driver maintainers are, but I'd like people to add themselves so they're aware that they're signing up for this
15:20:46 the etherpad just creates more work for yourself
15:21:04 they need to add themselves to that wiki page anyway, may as well start there
15:21:18 saves confusion
15:21:23 bswartz: we have just started planning for manila CIs; both the isilon and vnx teams are aware of your email on CI
15:21:30 anteaya: I'd like to do the etherpad exercise anyway, as there are multiple drivers from some vendors
15:21:43 and some vendors are involved with multiple projects
15:21:54 this just creates more work for you and more confusion for them
15:22:02 yes it's double work, but 1 line of text won't kill anyone
15:22:08 but it is your call how all of them end up on that page
15:22:10 bswartz, anteaya: one of the drivers is running in OpenStack CI
15:22:23 vponomaryov: you mean generic?
15:22:30 bswartz, anteaya: so, it should not be added to https://wiki.openstack.org/wiki/ThirdPartySystems , right?
15:22:37 bswartz: yes
15:22:39 vponomaryov: generic won't need CI -- it's part of our gate
15:22:43 vponomaryov: which driver?
15:22:51 I just listed it on the etherpad for completeness
15:23:13 anteaya: the "Generic" driver that uses Cinder as a backend
15:23:23 the default driver that manila can't run without?
15:23:25 anteaya: it is used in our gates currently
15:23:43 bswartz: if the one who maintains the driver code and the one who will work on CI are different people, should both of them be indicated as "maintainer"?
15:23:44 anteaya: manila can run without it -- it's just the default choice
15:23:59 that is considered a first party driver
15:24:04 not a third party driver
15:24:05 csaba: list both and their roles if there are multiple people involved
15:24:19 bswartz: OK. thx
15:24:40 I want to get a list of "throats to choke" for various driver issues -- CI being the main one
15:25:35 anteaya: thanks for the info
15:25:36 I know where to find you all anyway, but if people sign up here, it signals acknowledgement that they're paying attention
15:25:52 vponomaryov: welcome
15:26:28 oh and just for the record, the email thread I referred to above is
15:26:33 #link http://lists.openstack.org/pipermail/openstack-dev/2015-May/064086.html
15:26:49 bswartz: Ganesha is not a driver by itself. It's a library that is currently only used by the glusterfs driver.
15:27:33 rraja: what about IBM GPFS?
15:28:02 bswartz: ATM GPFS uses its own private Ganesha instrumentation
15:28:05 I thought you were working together on ganesha stuff
15:28:13 okay, thanks for the clarification
15:28:26 that's planned; ATM we support different versions of ganesha
15:28:50 csaba/rraja: please consolidate the gluster/ganesha entries on the etherpad as appropriate
15:29:03 bswartz: sure
15:29:49 okay, anything else on CI?
15:30:15 NetApp is reporting!
15:30:20 good job
15:30:27 oh yeah -- I forgot about that
15:30:42 at least our single_svm driver mode is already reporting
15:30:50 thanks to the work of akerr
15:30:54 and cknight
15:31:34 NetApp uses CI internally for testing changes before we upload them, as well as reporting on upstream changes
15:31:56 okay moving on
15:31:59 #topic Availability Zones
15:32:16 #link https://review.openstack.org/#/c/184544/
15:32:31 so there was a proposal to set the Nova AZ for service VMs created by the generic driver
15:32:43 I think we actually need a more general facility
15:33:27 the basic problem is: in a relatively large cloud, locality of compute and data starts to matter for performance reasons
15:33:54 thus you would like manila to be able to ensure that your shares end up relatively close to the nova VMs that are accessing them
15:34:54 assuming that Nova AZs are even the right concept to control this, then what we really want is a way for manila users to ask for a share in a specific AZ
15:35:13 so I have 2 questions
15:35:26 are AZs the right abstraction to control locality of compute and data?
15:35:55 and if so, what API changes should we make to give users more control over them?
15:36:21 maybe mention two existing variants?
15:36:46 vponomaryov: what do you mean?
15:37:05 that we are able to do it by configuration and extra specs
15:37:16 ok
15:37:24 if both cases we have no API issues
15:37:37 so the linked change above provides a way for the admin to pin all the service VMs to a specific AZ
15:37:42 s/if/in/
15:38:01 that doesn't solve the problem I mentioned -- it just gives the admin a tool to control service VMs
15:38:11 and for the moment he can not change it =)
15:38:46 if we created some extra specs on the generic backend for nova_az, then the admin could offer multiple AZs controlled by share type, which would start to address the problem (with no API changes), but I don't think that goes far enough
15:39:15 can we use "host aggregates"?
15:39:22 even though other drivers could do something similar to what we implement for the generic driver, there would be no enforcement of a common implementation
15:39:29 markstur: we can use everything the Nova API provides
15:39:51 * bswartz doesn't know what host aggregates are :-(
15:39:56 bswartz: another way is to add some hints in the scheduler. there is a locality filter in cinder that allows you to schedule a volume and a VM on the same host. that is something we can look at
15:40:27 xyang: same host as what?
15:40:52 vponomaryov: cinder volume and compute on the same host
15:41:04 vponomaryov: make the volume local to the VM
15:41:21 vponomaryov: yeah, cinder can actually use disks on the compute nodes for LVM volumes
15:41:34 xyang: but if the user does not have a VM yet, what then?
15:41:43 and it makes sense to use local disks rather than remote disks where possible
15:42:00 vponomaryov: that only solves a specific problem, but not all problems
15:42:32 bswartz: what use cases do we have that can not be addressed by extra specs?
15:42:36 in the case of Manila, the kind of locality we will be looking for will probably be "same rack" in a situation where there are multiple racks of gear with fast backplanes and relatively slow backbone links between racks
15:43:31 vponomaryov: the problem comes with scaling the number of AZs
15:43:47 with the extra spec proposal, you'd need a share_type for every AZ
15:44:01 bswartz: not really
15:44:03 and if you wanted multiple types of shares you'd end up with a matrix of share types
15:44:06 bswartz: it depends on the implementation
15:44:14 bswartz: we can implement it as a list of AZs
15:44:40 wouldn't you have repeated share types for each AZ?
15:44:42 I feel like the user needs a way to specify the specific AZ he wants the share to be in (again, assuming AZs are the right construct)
15:44:59 like "bronze_AZ1, bronze_AZ2", etc?
15:45:14 ganso_: that's my fear
15:45:28 bswartz: +1 I think extra_specs are the way to go
15:45:29 ganso_: right, we shouldn't go there
15:46:14 I think AZ should be on the create share call. It's for isolation.
15:46:37 markstur: it should not, because it is driver-specific
15:46:58 markstur: that's my gut feeling, but I want to make sure there's not a better option than AZ
15:47:09 markstur: in the case of the generic driver we can set the AZ for Nova as well as for Cinder
15:47:29 it should accommodate share migration semantics, I think
15:47:33 For Cinder, create volume has an AZ pull-down (there is an "all" though)
15:47:36 vponomaryov: if we allow the user to specify the AZ, then the scheduler would have to know which AZ/AZs each backend can create shares in
15:47:44 I would think create share would follow a similar construct.
15:47:57 i.e. migrate this share to this other AZ and then I'll run my compute over there too
15:48:17 some drivers might be limited to a single AZ, while others might be able to use all the AZs (like generic)
15:48:20 I'm thinking host aggregates are for admin hinting to the scheduler about location -- but I'm not very familiar with aggregate use
15:49:03 jasonsb: yes, this locality concept might apply to migration and also replication
15:49:32 much like we would like to be able to re-type a share using migration, we might want to change-AZ a share also using migration
15:50:03 it sounds powerful
15:50:13 bswartz: +1
15:50:58 so I don't have the answer yet
15:51:52 bswartz: +1
15:51:55 but I'm pretty confident that the change in 184544 doesn't go far enough
15:52:18 so I'm going to read about host aggregates
15:52:20 we need a user
15:52:35 marc?
15:53:12 you looking for mkoderer?
15:53:16 I could be wrong. It just sounds tricky to get right
15:54:03 bswartz: yes, I was thinking he could give feedback?
15:54:04 well I'm also curious to see exactly what cinder does with its AZ field
15:54:21 because I haven't experimented with it
15:54:52 okay, that's about all I had
15:54:53 bswartz: I think igor's implementation is similar to cinder's; you can specify it in cinder.conf
15:55:11 xyang: where in cinder.conf?
15:55:26 for each backend? or something in the default section?
15:55:30 I hear from "cinder" that AZ is for isolation and host aggregates are for scheduler hints (I assume e.g. co-locate)
15:55:43 there is a field for availability zone; I have to find the exact option
15:55:47 but I also hear developers don't do this much. So it would be nice to hear from an operator
15:56:11 markstur: yeah, I think us developers spend all our time on single-node devstack systems
15:56:29 so we forget that real clouds have multiple hosts involved and they're not all equal
15:56:47 devstack secure physically isolated multi-tenancy rocks!
15:56:49 xyang: I see 'default_availability_zone' and 'storage_availability_zone'
15:57:18 tbarron: I think you are right. I ran into it when working on CG
15:57:38 tbarron: the storage_availability_zone overrides the default one
15:57:51 xyang: but I live in a devstack world, as bswartz indicates, so I don't know how they work
15:58:22 tbarron: I'm not playing with it either. I only have one zone :)
15:58:24 okay, so I'll follow up on those too
15:58:43 I don't want to rush into a solution without understanding how the existing stuff works
15:58:54 and I also want to hear from users/deployers on this topic
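
A minimal sketch of the per-AZ share type matrix that ganso_ and bswartz warn about above, assuming a hypothetical availability_zone extra spec (no such key is defined in manila at this point); the type-create/type-key commands are the usual manila CLI for share types:

    # Hypothetical availability_zone extra spec: one share type per
    # (service level, AZ) pair, which is exactly the matrix the team
    # would rather avoid.
    manila type-create bronze_az1 true    # second arg: driver_handles_share_servers
    manila type-key bronze_az1 set availability_zone=nova_az1
    manila type-create bronze_az2 true
    manila type-key bronze_az2 set availability_zone=nova_az2
    # ...repeated again for silver, gold, etc. in every AZ

This is why a user-facing AZ parameter on share create, rather than encoding the AZ into share types, is being discussed.
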
15:58:58 #topic open discussion
15:59:01 1 minute left!
15:59:05 any last topics?
15:59:17 https://bugs.launchpad.net/cinder/+bug/1426205
15:59:17 Launchpad bug 1426205 in Cinder "useless storage_availability_zone import" [Low,Fix released] - Assigned to QiangGuan (hzguanqiang)
15:59:37 FYI -- from searching for storage_availability_zone.
15:59:57 interesting link, markstur
16:00:06 thanks everyone
16:00:11 markstur: interesting
16:00:16 look for the mail thread on the midcycle meetup
16:00:25 #endmeeting
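
For reference on the two Cinder options xyang and tbarron mention in the availability zones discussion, a minimal cinder.conf sketch; the comments paraphrase the discussion, since the meeting leaves the exact behavior as a follow-up item:

    [DEFAULT]
    # AZ that this cinder-volume service advertises itself in
    storage_availability_zone = az1
    # AZ assigned to new volumes when the request does not specify one;
    # if unset it falls back to storage_availability_zone
    default_availability_zone = az1
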