15:00:24 <bswartz> #startmeeting manila
15:00:24 <openstack> Meeting started Thu May 28 15:00:24 2015 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:28 <openstack> The meeting name has been set to 'manila'
15:00:31 <cknight> Hi!
15:00:32 <vponomaryov> Hello
15:00:34 <bswartz> hello folks
15:00:36 <xyang> hi
15:00:37 <ganso_> hello
15:00:42 <u_glide> hello
15:00:42 <markstur> Hello
15:00:43 <rraja> hi
15:00:52 <cknight> bswartz: Any news on Baby Liberty?
15:00:55 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:01:04 <bswartz> cknight: lol not yet
15:01:15 <bswartz> we do have a name picked out, and it's not Liberty
15:01:27 <cknight> bswartz: :-)
15:01:43 <vponomaryov> bswartz: you have time to change your mind =)
15:02:04 <bswartz> heh
15:02:14 <bswartz> okay so first topic
15:02:20 <bswartz> #topic Manila Midcycle Meetup
15:02:31 <bswartz> this is something I had hoped to figure out before the summit
15:03:00 <bswartz> the cinder team has set their midcycle meetup for August 4-7
15:03:40 <tbarron> o/
15:03:41 <bswartz> my thinking is that in case there are people who want to travel for both, we should set the Manila one 2 weeks before or after that
15:04:12 <cknight> bswartz: +1
15:04:53 <bswartz> that makes the prospective dates either July 21-23 or August 18-20
15:05:20 <bswartz> July 21-23 is the week before L-2
15:05:29 <bswartz> August 18-20 is right in the middle of L-3
15:05:45 <bswartz> I'm inclined to go earlier
15:05:58 <cknight> bswartz: Middle of L-3 seems too late.
15:06:09 <bswartz> so I'd like to propose we hold the meetup July 21-23
15:06:12 <u_glide> cknight: +1
15:06:32 <bswartz> I'm not aware of any serious conflicts, please let me know if those dates are bad for you
15:06:44 <markstur> I think I'd miss that (vacation).
15:06:59 <bswartz> also regarding location, NetApp would like to volunteer our RTP office
15:08:02 <bswartz> we will do an online format like last time because I know many people can't travel
15:08:34 <bswartz> but I'm hoping that by planning it father in advance, we will get a good number to join in person
15:09:13 <cknight> bswartz: "father in advance"  nice
15:09:25 <bswartz> if anyone else wants to volunteer a location, respond to the email thread that I'm going to start after this meeting, and hopefully we can get it settled this week
15:09:31 <bswartz> s/father/farther/
15:09:34 <markstur> cknight, +1
15:09:35 <bswartz> doh
15:10:07 <bswartz> markstur: I'll talk to you offline about your vacation dates
15:10:21 <bswartz> anything else about the meetup plan?
15:11:01 <bswartz> #topic Liberty-1 Milestone
15:11:25 <bswartz> #link https://wiki.openstack.org/wiki/Liberty_Release_Schedule
15:11:33 <bswartz> #link https://launchpad.net/manila/+milestone/liberty-1
15:11:48 <bswartz> there's only 4 weeks until L-1
15:12:09 <bswartz> it looks like each milestone is just 5 weeks long in Liberty
15:12:40 <bswartz> we have a couple of targeted blueprints, but now is the time to start getting that list in order
15:13:52 <bswartz> Anything anyone is working on now should have a blueprint targeted to liberty, so write them up and ask me to target them when they're ready
15:15:17 <bswartz> #topic 3rd party vendor CI
15:15:37 <bswartz> so I started the ML thread before the summit
15:16:13 <bswartz> there were some helpful suggestions from anteaya but no other feedback on the plan
15:16:42 <markstur> feedback:  looks good
15:16:55 <anteaya> I linked to your email thread at both my third party meetings this week
15:17:11 <bswartz> so I'm going to assume everyone is happy with it and start implementing the plan as it's written in the email
15:17:20 <bswartz> anteaya: thanks!
15:17:35 <anteaya> the key for me is if all manila core reviewers are familiar with http://docs.openstack.org/infra/system-config/third_party.html
15:17:39 <anteaya> bswartz: welcome
15:18:04 <anteaya> if core reviewers from manila have questions please talk to me
15:18:08 <bswartz> anteaya: thanks for the link
15:18:31 <anteaya> my preference is to support the manila team so you can support the third party operators
15:18:41 <anteaya> of course all are welcome at third party meetings
15:18:41 <bswartz> core reviewers please read that page - I've glanced over it before but I haven't read every word, so I'll do that today
15:18:48 <anteaya> thank you
15:19:02 <anteaya> ask me if you need any clarifications
15:19:34 <bswartz> so the first part of the plan calls for identifying all the driver maintainers and collecting contact info
15:19:41 <bswartz> #link https://etherpad.openstack.org/p/manila-driver-maintainers
15:19:47 <bswartz> I've created an etherpad for that purpose
15:20:33 <anteaya> can we eliminate the etherpad and go straight to https://wiki.openstack.org/wiki/ThirdPartySystems
15:20:34 <bswartz> I actually know who most of the driver maintainers are but I'd like people to add themselves so they're aware that they're signing up for this
15:20:46 <anteaya> the etherpad just creates more work for yourself
15:21:04 <anteaya> they need to add themselves to that wikipage anyway, may as well start there
15:21:18 <anteaya> saves confusion
15:21:23 <xyang> bswartz: we have just started planning for Manila CIs; both the Isilon and VNX teams are aware of your email on CI
15:21:30 <bswartz> anteaya: I'd like to do the etherpad exercise anyways, as there are multiple drivers from some vendors
15:21:43 <bswartz> and some vendors are involved with multiple projects
15:21:54 <anteaya> this just creates more work for you and more confusion for them
15:22:02 <bswartz> yes it's double work but 1 line of text won't kill anyone
15:22:08 <anteaya> but it is your call how all of them end up on that page
15:22:10 <vponomaryov> bswartz, anteaya: one of the drivers is running in OpenStack CI
15:22:23 <bswartz> vponomaryov: you mean generic?
15:22:30 <vponomaryov> bswartz, anteaya: so, it should not be added to https://wiki.openstack.org/wiki/ThirdPartySystems , right?
15:22:37 <vponomaryov> bswartz: yes
15:22:39 <bswartz> vponomaryov: generic won't need CI -- it's part of our gate
15:22:43 <anteaya> vponomaryov: which driver
15:22:51 <bswartz> I just listed it on the etherpad for completeness
15:23:13 <vponomaryov> anteaya: "Generic" driver that uses Cinder as backend
15:23:23 <anteaya> the default driver that manila can't run without?
15:23:25 <vponomaryov> anteaya: it is used in our gates currently
15:23:43 <csaba> bswartz: if the one who maintains the driver code and the one who will work on CI are different, should both of them be indicated as "maintainer"?
15:23:44 <bswartz> anteaya: manila can run without it -- it's just the default choice
15:23:59 <anteaya> that is considered a first party driver
15:24:04 <anteaya> not a third party driver
15:24:05 <bswartz> csaba: list both and their roles if there are multiple people involved
15:24:19 <csaba> bswartz: OK. thx
15:24:40 <bswartz> I want to get a list of "throats to choke" for various driver issues -- CI being the main one
15:25:35 <vponomaryov> anteaya: thanks for info
15:25:36 <bswartz> I know where to find you all anyways but if people sign up here, it signals acknowledgement that they're paying attention
15:25:52 <anteaya> vponomaryov: welcome
15:26:28 <bswartz> oh and just for the record, the email thread I referred to above is
15:26:33 <bswartz> #link http://lists.openstack.org/pipermail/openstack-dev/2015-May/064086.html
15:26:49 <rraja> bswartz: Ganesha is not a driver by itself. It's a library that is currently only used by glusterfs driver.
15:27:33 <bswartz> rraja: what about IBM GPFS
15:28:02 <csaba> bswartz: ATM GPFS uses their own private Ganesha instrumentation
15:28:05 <bswartz> I thought you were working together on ganesha stuff
15:28:13 <bswartz> okay thanks for the clarification
15:28:26 <csaba> that's planned, ATM we support different versions of ganesha
15:28:50 <bswartz> csaba/rraja: please consolidate the gluster/ganesha entries on the etherpad as appropriate
15:29:03 <csaba> bswartz: sure
15:29:49 <bswartz> okay, anything else on CI?
15:30:15 <markstur> NetApp is reporting!
15:30:20 <markstur> good job
15:30:27 <bswartz> oh yeah -- I forgot about that
15:30:42 <bswartz> at least our single_svm driver mode is already reporting
15:30:50 <bswartz> thanks to the work of akerr
15:30:54 <bswartz> and cknight
15:31:34 <bswartz> NetApp uses CI internally for testing changes before we upload them, as well as reporting on upstream changes
15:31:56 <bswartz> okay moving on
15:31:59 <bswartz> #topic Availability Zones
15:32:16 <bswartz> #link https://review.openstack.org/#/c/184544/
15:32:31 <bswartz> so there was a proposal to set the Nova AZ for service VMs created by the generic driver
15:32:43 <bswartz> I think we actually need a more general facility
15:33:27 <bswartz> the basic problem is: in a relatively large cloud, locality of compute and data starts to matter for performance reasons
15:33:54 <bswartz> thus you would like for manila to be able to ensure that your shares end up relatively close to the nova VMs that are accessing them
15:34:54 <bswartz> assuming that Nova AZs are even the right concept to control this, then what we really want is a way for manila users to ask for a share in a specific AZ
15:35:13 <bswartz> so I have 2 questions
15:35:26 <bswartz> are AZs the right abstraction to control locality of compute and data?
15:35:55 <bswartz> and if so, what API changes should we make to give users more control over them?
15:36:21 <vponomaryov> maybe mention two existing variants?
15:36:46 <bswartz> vponomaryov: what do you mean?
15:37:05 <vponomaryov> that we are able to do it by configuration and extra specs
15:37:16 <bswartz> ok
15:37:24 <vponomaryov> if both cases we have no API issues
15:37:37 <bswartz> so the linked change above provides a way for the admin to pin all the service VMs to a specific AZ
15:37:42 <vponomaryov> s/if/in/
15:38:01 <bswartz> that doesn't solve the problem I mentioned -- it just gives the admin a tool to control service VMs
15:38:11 <vponomaryov> and for the moment he can not change it =)
15:38:46 <bswartz> if we created some extra specs on the generic backend for nova_az, then the admin could offer multiple AZs controlled by share type, which would start to address the problem (with no API changes) but I don't think that goes far enough
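For illustration, a minimal sketch of the extra-spec approach discussed here, using the existing manila type-create / type-key commands but with a hypothetical spec key (nova_availability_zone is an assumed name, not something Manila defines at this point):

    manila type-create bronze_az1 true
    manila type-key bronze_az1 set nova_availability_zone=az1
    manila create --share-type bronze_az1 NFS 1

Offering several AZs this way means creating a share type per AZ, which is the scaling concern raised a few lines below.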
15:39:15 <markstur> can we use "host aggregates"?
15:39:22 <bswartz> even though other drivers could do something similar to what we implement for the generic driver, there would be no enforcement of a common implementation
15:39:29 <vponomaryov> markstur: we can use everything Nova API provides
15:39:51 * bswartz doesn't know what host aggregates are :-(
15:39:56 <xyang> bswartz: another way is to add some hints in the scheduler.  there is a locality filter in cinder that allows you to schedule volume and VM on the same host.  that is something we can look at
15:40:27 <vponomaryov> xyang: same host as what?
15:40:52 <xyang> vponomaryov: cinder volume and compute on same host
15:41:04 <xyang> vponomaryov: make volume local to vm
15:41:21 <bswartz> vponomaryov: yeah cinder can actually use disks on the compute nodes for LVM volumes
15:41:34 <vponomaryov> xyang: but if user does not have VM yet, what then?
15:41:43 <bswartz> and it makes sense to use local disks rather than remote disks where possible
15:42:00 <xyang> vponomaryov: that only solves a specific problem, but not all problems
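For reference, the Cinder locality filter xyang mentions is driven by a scheduler hint at volume-create time, roughly as below, assuming the InstanceLocalityFilter is enabled in the Cinder scheduler's filter list:

    cinder create --hint local_to_instance=<instance-uuid> 1

It co-locates the volume with the given Nova instance on the same host, which is why it only covers the host-level case and not AZ-level placement.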
15:42:32 <vponomaryov> bswartz: what use cases do we have that can not be addressed by extra specs?
15:42:36 <bswartz> in the case of Manila, the kind of locality we will be looking for will probably be "same rack" in a situation where there are multiple racks of gear with fast backplanes and relatively slow backbone links between racks
15:43:31 <bswartz> vponomaryov: the problem comes with scaling the number of AZs
15:43:47 <bswartz> with the extra spec proposal, you'd need a share_type for every AZ
15:44:01 <vponomaryov> bswartz: not really
15:44:03 <bswartz> and if you wanted multiple types of shares you'd end up with a matrix of share types
15:44:06 <vponomaryov> bswartz: it depends on implementation
15:44:14 <vponomaryov> bswartz: we can implement it as a list of AZs
15:44:40 <ganso_> wouldn't you have repeated share types for each AZ?
15:44:42 <bswartz> I feel like the user needs a way to specify the specific AZ he wants the share to be in (again, assuming AZs are the right construct)
15:44:59 <ganso_> like "bronze_AZ1, bronze_AZ2", etc?
15:45:14 <bswartz> ganso_: that's my fear
15:45:28 <ganso_> bswartz: +1 I think extra_specs are the way to go
15:45:29 <cknight> ganso_: right, we shouldn't go there
15:46:14 <markstur> I think AZ should be on the create share.  It's for isolation.
15:46:37 <vponomaryov> markstur: it should not, because it is driver-specific
15:46:58 <bswartz> markstur: that's my gut feeling, but I want to make sure there's not a better option than AZ
15:47:09 <vponomaryov> markstur: in the case of the generic driver we can set the AZ for Nova as well as for Cinder
15:47:29 <jasonsb> it should accommodate share migration semantics i think
15:47:33 <markstur> For Cinder, create volume has an AZ pull-down (there is an "all" though)
15:47:36 <bswartz> vponomaryov: if we allow the user to specify the AZ, then the scheduler would have to know which AZ/AZs each backend can create shares in
15:47:44 <markstur> I would think create share would follow a similar construct.
15:47:57 <jasonsb> ie migrate this share to this other AZ and then i'll run my compute over there too
15:48:17 <bswartz> some drivers might be limited to a single AZ, while others might be able to use all the AZs (like generic)
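A hypothetical sketch of what bswartz describes: if each backend reported the AZs it can serve and the user's request carried a desired AZ, a scheduler filter could match them. Names and structure here are illustrative only, not Manila code:

    # Hypothetical AZ filter: keep only backends able to serve the requested AZ.
    def filter_backends_by_az(backends, requested_az):
        # each backend dict carries an 'availability_zones' list reported by its driver
        if requested_az is None:
            return backends  # nothing requested, any backend will do
        return [b for b in backends
                if requested_az in b.get('availability_zones', [])]

    backends = [
        {'name': 'generic@cloud', 'availability_zones': ['az1', 'az2']},  # multi-AZ driver
        {'name': 'array1@nfs', 'availability_zones': ['az1']},            # single-AZ driver
    ]
    print(filter_backends_by_az(backends, 'az2'))  # -> only the generic backend

This matches the point that the generic driver could serve every Nova AZ while another driver might be limited to one.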
15:48:20 <markstur> I'm thinking host aggregates are for admin hinting to the scheduler about location -- but I'm not very familiar with aggregate use
15:49:03 <bswartz> jasonsb: yes this locality concept might apply to migration and also replication
15:49:32 <bswartz> much like we would like to be able to re-type a share using migration, we might want to change-AZ a share also using migration
15:50:03 <jasonsb> it sounds powerful
15:50:13 <cknight> bswartz: +1
15:50:58 <bswartz> so I don't have the answer yet
15:51:52 <ganso_> bswartz: +1
15:51:55 <bswartz> but I'm pretty confident that the change in 184544 doesn't go far enough
15:52:18 <bswartz> so I'm going to read about host aggregates
15:52:20 <jasonsb> we need a user
15:52:35 <jasonsb> marc?
15:53:12 <bswartz> you looking for mkoderer?
15:53:16 <jasonsb> i could be wrong.  it just sounds tricky to get right
15:54:03 <jasonsb> bswartz: yes, i was thinking he could give feedback?
15:54:04 <bswartz> well I'm also curious to see exactly what cinder does with its AZ field
15:54:21 <bswartz> because I haven't experimented with it
15:54:52 <bswartz> okay that's about all I had
15:54:53 <xyang> bswartz: I think Igor's implementation is similar to Cinder's; you can specify it in cinder.conf
15:55:11 <bswartz> xyang: where in cinder.conf?
15:55:26 <bswartz> for each backend? or something in the default section?
15:55:30 <markstur> I hear from "cinder" that AZ is for isolation and host-aggregates is for scheduler hints (I assume e.g. co-locate)
15:55:43 <xyang> there is a field for availability zone, I have to find the exact option
15:55:47 <markstur> but I also hear developers don't do this much.  So it would be nice to hear from an operator
15:56:11 <bswartz> markstur: yeah I think us developers spend all our time on single-node devstack systems
15:56:29 <bswartz> so we forget that real clouds have multiple hosts involved and they're not all equal
15:56:47 <markstur> devstack secure physically isolated multi-tenancy rocks!
15:56:49 <tbarron> xyang: I see 'default_availability_zone' and 'storage_availability_zone'
15:57:18 <xyang> tbarron: I think  you are right. I ran into it when working on CG
15:57:38 <xyang> tbarron: the storage_availability_zone overrides the default one
15:57:51 <tbarron> xyang: but I live in a devstack world, as bswartz indicates, so I don't know how they work
15:58:22 <xyang> tbarron: I'm not playing with it either.  I only have one zone:)
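For reference, the two cinder.conf options tbarron mentions live in the [DEFAULT] section. Per their option help, storage_availability_zone is the availability zone of the node itself, and default_availability_zone is the zone assigned to new volumes when none is requested, falling back to storage_availability_zone when unset:

    [DEFAULT]
    storage_availability_zone = az1
    default_availability_zone = az1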
15:58:24 <bswartz> okay so I'll follow up on those too
15:58:43 <bswartz> I don't want to rush into a solution without understanding how existing stuff works
15:58:54 <bswartz> and I also want to hear from users/deployers on this topic
15:58:58 <bswartz> #topic open discussion
15:59:01 <bswartz> 1 minute left!
15:59:05 <bswartz> any last topics?
15:59:17 <markstur> https://bugs.launchpad.net/cinder/+bug/1426205
15:59:17 <openstack> Launchpad bug 1426205 in Cinder "useless storage_availability_zone import" [Low,Fix released] - Assigned to QiangGuan (hzguanqiang)
15:59:37 <markstur> FYI -- searching storage_availability_zone.
15:59:57 <bswartz> interesting link markstur
16:00:06 <bswartz> thanks everyone
16:00:11 <xyang> markstur: interesting
16:00:16 <bswartz> look for mail thread on midcycle meetup
16:00:25 <bswartz> #endmeeting