15:00:16 <bswartz> #startmeeting manila
15:00:17 <openstack> Meeting started Thu Jan  8 15:00:16 2015 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:20 <openstack> The meeting name has been set to 'manila'
15:00:23 <bswartz> hello all
15:00:28 <vponomaryov> Hello
15:00:32 <chen> hello
15:00:34 <xyang1> hi
15:00:42 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:00:42 <jasonsb_> hny
15:00:46 <nileshb> hi
15:00:57 <bswartz> hope you all took some time off over the holidays
15:01:16 <bswartz> and happy new year!
15:01:44 <bswartz> #topic dev status
15:01:52 <bswartz> vponomaryov: I know you've been busy
15:01:59 <vponomaryov> dev status:
15:02:10 <vponomaryov> 1) Tempest CI jobs for Manila have been improved and now should be more stable.
15:02:10 <bswartz> merging a lot of tempest-stability patches
15:02:19 <vponomaryov> 2) Manage/unmanage shares/share-servers
15:02:24 <vponomaryov> BP: #link https://blueprints.launchpad.net/manila/+spec/manage-shares
15:02:24 <vponomaryov> status: work in progress
15:02:33 <vponomaryov> 3) Single SVM mode for Generic driver
15:02:38 <vponomaryov> BP: #link https://blueprints.launchpad.net/manila/+spec/single-svm-mode-for-generic-driver
15:02:41 <vponomaryov> gerrit: #link https://review.openstack.org/#/c/142403/
15:02:55 <vponomaryov> those are the main items; the rest are bells and whistles
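A rough sketch of the manage/unmanage interface being developed in (2); the flag names and the volume_id driver option shown here are illustrative assumptions, not the merged design:

    # bring a preexisting backend export under manila's control
    manila manage --name legacy-share \
        --driver_options volume_id=<id> \
        <host@backend> NFS <export-path>

    # release it from manila again without touching the backend data
    manila unmanage <share-id>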
15:03:16 <bswartz> is there a WIP for (2)?
15:03:35 <vponomaryov> (2) contains lots of subtasks
15:03:48 <toabctl> hi
15:03:56 <bswartz> oh I see them
15:03:58 <vponomaryov> so, the BP overall is in WIP
15:04:16 <bswartz> 3 changes in gerrit
15:04:31 <vponomaryov> will be more
15:04:39 <bswartz> yeah I'm sure
15:04:48 <bswartz> ty vponomaryov
15:04:54 <bswartz> anyone have questions about the above?
15:05:14 <bswartz> I have 1 question
15:05:32 <bswartz> why do our tempest-dsvm jobs sometimes still fail?
15:05:35 <bswartz> I saw one failure this morning
15:05:47 <vponomaryov> this time devstack did not start
15:05:49 <vponomaryov> at all
15:05:53 <vponomaryov> happens
15:05:56 <bswartz> anything we can do about that?
15:06:15 <vponomaryov> I do not think so
15:06:33 <bswartz> why don't other projects have this issue?
15:06:43 <vponomaryov> who says they don't?
15:07:00 <vponomaryov> When we pushed a fix to Cinder
15:07:08 <bswartz> I'm asking because my next question is, can we make the tempest-dsvm jobs voting now?
15:07:13 <vponomaryov> it succeeded only on the third attempt
15:07:26 <vponomaryov> bswartz: I think yes
15:07:36 <bswartz> ok
15:07:38 <vponomaryov> that time is near
15:07:48 <bswartz> thanks, great news
15:08:06 <bswartz> next topic
15:08:10 <bswartz> #topic rename driver mode
15:08:18 <bswartz> #link http://lists.openstack.org/pipermail/openstack-dev/2015-January/053960.html
15:08:24 <bswartz> chen, you're up
15:08:51 <chen> I want to change current driver mode name because they're confusing
15:09:36 <chen> I'd like to suggest changing single_svm_mode to static mod_mode and multi_svm_mode to dynamic_mode
15:09:53 <csaba> chen: mod_mode?
15:10:09 <chen> static_mode
15:10:11 <bswartz> thanks for putting much of the discussion on the ML
15:10:12 <chen> sorry
15:10:13 <csaba> ok
15:10:30 <bswartz> I read through the thread and responded with my comments
15:10:42 <chen> I see
15:10:44 <bswartz> those of you who haven't followed should read the ML
15:10:50 <bswartz> chen I agree with you
15:11:01 <bswartz> the names are probably a bit confusing and could be better
15:12:08 <bswartz> so first of all, does anyone disagree and want to keep the current names?
15:12:30 <bswartz> current driver modes are "single_svm" and "multi_svm"
15:12:55 <vponomaryov> I do mind 'static' and 'dynamic'
15:13:04 <bswartz> single_svm mode implies no share servers will be created, and no networking config is needed within manila
15:13:06 <xyang1> the current names are okay with me as those were proposed from the start
15:13:11 <vponomaryov> we can rename, but need good new names
15:13:18 <xyang1> I don't like static
15:13:23 <bswartz> multi_svm mode implies that share servers will be created and they will consume network resources
15:14:09 <vponomaryov> either created or reused; the relation is 1:many
15:14:29 <lpabon> o/ (late)
15:14:41 <jasonsb_> from a practical aspect i think the multi-svm pattern is closer to east-west traffic
15:14:53 <jasonsb_> and single is more north-south
15:15:00 <ganso1> I am not a big fan of the new names
15:15:03 <bswartz> I think one valid complaint is that "svm" is an acronym not used elsewhere and not understood
15:15:21 <jasonsb_> but i suspect it will change alot over time
15:15:31 <lpabon> ganso1: bswartz, i agree
15:15:42 <vponomaryov> ganso also proposed the variants 'basic' and 'advanced' in the manila chat
15:15:47 <bswartz> no_share_servers and multi_share_servers might be more accurate
15:16:00 <ganso1> bswartz: definitely
15:16:12 <ganso1> I think the term "Share_server" must be included
15:16:23 <ganso1> it is the term we are using throughout Manila
15:16:24 <xyang1> basic and advanced are not good.  it implies drivers supporting basic are not as good
15:16:32 <lpabon> bswartz: from my point of view, it seems that no_share_servers have no shares.. is that what is meant?
15:16:32 <bswartz> xyang1: +1
15:16:38 <toabctl> xyang1: +1
15:16:41 <marcusvrn> xyang1: I agree
15:16:45 <bswartz> lpabon: well no
15:16:55 <marcusvrn> xyang1: +1
15:17:08 <bswartz> no_share_servers would mean the driver doesn't create share servers because it's using something preexisting
15:17:35 <bswartz> okay so we may need to brainstorm on this topic
15:18:00 <chen> bswartz, I considered "no_share_servers", but in single_svm_mode for the generic driver an instance needs to be configured; so when the admin works in this mode there is no share server, yet one instance is still needed
15:18:05 <bswartz> can I suggest that we resolve this by continuing the ML thread and people can suggest better alternatives? then next week we can pick one?
15:18:07 <jasonsb_> mind if i make it more complicated?
15:18:23 <bswartz> jasonsb_: go ahead
15:18:29 <marcusvrn> xyang1: what's the problem with static and dynamic?
15:18:31 <jasonsb_> i'm confronting situation where i would like to load balance over several share servers
15:18:33 <ganso1> bswartz: +1
15:18:47 <jasonsb_> so i might be single_svm but there are many of them
15:18:57 <chen> jasonsb_, +1
15:19:02 <lpabon> bswartz: +1
15:19:04 <jasonsb_> i suspect i'm not alone
15:19:06 <marcusvrn> bswartz: +1
15:19:08 <bswartz> jasonsb_: okay so that's part of the confusion here
15:19:11 <xyang1> "static" sounds like the capability is not flexible enough
15:19:33 <ganso1> xyang1: +1
15:19:36 <xyang1> Let's also not keep changing names
15:19:38 <bswartz> we don't want to prevent backends from doing what they need to do -- which is why the definition of a share server is intentionally vague
15:19:40 <vponomaryov> static and dynamic are not good because the real criterion is whether we create additional resources or not
15:19:53 <jasonsb_> so i think its hard to pigeon hole this at this time
15:20:07 <rushil> svm seems fine to me
15:20:13 <bswartz> in the case of netapp, a "share server" actually has multiple IP addresses and lives on multiple physical nodes
15:20:28 <xyang1> we used single tenant and multi tenant before
15:20:29 <bswartz> and our driver can create them and destroy them as needed
15:20:31 <jasonsb_> are there multiple IPs that can host a given share?
15:21:01 <vponomaryov> jasonsb_: Manila is able to provide only one export location, right now
15:21:15 <vponomaryov> but server can have more than 1 net interface
15:21:18 <jasonsb_> vponomaryov: yes i discovered that )
15:21:19 <bswartz> the only important aspect of a share_server is that it's something created by manila, so manila expects to own its lifecycle
15:21:24 <ganso1> jasonsb_: my driver actually may fall into that category
15:21:28 <vponomaryov> common case - a service net interface and a tenant net interface for export
15:21:39 <bswartz> if your driver uses something preexisting, then it's not a share server (from manila's perspective)
15:21:51 <bswartz> that doesn't mean that it can't serve shares
15:22:21 <bswartz> this split is what we were trying to capture with the single/multi svm thing
15:22:35 <toabctl> bswartz: then something like 'share_server_needed' and 'share_server_included' could be possible names?
15:22:36 <bswartz> it's perfectly fine to have a "single_svm" driver which is backed by a large cluster of servers
15:22:39 <jasonsb_> bswartz: that makes sense
15:22:52 <ganso1> toabctl: -1
15:23:00 <bswartz> the difference that manila cares about is that manila is not responsible for creating/destroying the servers themselves
15:24:05 <vponomaryov> we can replace the string mode with a boolean named "driver_handles_share_server = True/False"
15:24:08 <ganso1> I think changing "single_svm" to "single_share_server" and "multi_svm" to "multi_share_server" is the simplest change we can make
15:24:10 <jasonsb_> manage share or manage share+network assets
15:24:13 <bswartz> one thing that's clear to me is that regardless of what we do with the name, we need much better documentation on what these modes and share servers are all about
15:24:23 <ganso1> vponomaryov: +1
15:24:34 <xyang1> vponomaryov: I think that's better
15:24:57 <bswartz> which modes do true and false map to?
15:25:07 <vponomaryov> true - multi_svm
15:25:10 <bswartz> true = single_svm, false = multi_svm?
15:25:11 <bswartz> oh
15:25:19 <jasonsb_> or perhaps just enumerate the assets and who manages?
15:25:36 <jasonsb_> (vponomaryov idea)
15:25:36 <bswartz> so the option means "driver supports share server creation"
15:25:44 <toabctl> vponomaryov: yes. it's not really a mode. it's just a flag which indicates that there is some more stuff to do during creation/deletion of a share
15:26:13 <ganso1> for now it looks like a great solution
15:26:15 <vponomaryov> toabctl: right - for the driver developer it means implementing additional interfaces
15:26:28 <bswartz> toabctl: it's still sort of a mode, because when you set it to true, there are additional expectations from the config
15:26:55 <bswartz> and the manager will interact with the driver differently if the flag is set to true
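A minimal sketch of vponomaryov's boolean proposal as it might appear in manila.conf - the option name is taken verbatim from his message above; the section name is illustrative:

    [generic_backend]
    share_driver = manila.share.drivers.generic.GenericShareDriver
    # True: manila creates/destroys share servers itself (today's multi_svm)
    # False: the driver relies on a preexisting server (today's single_svm)
    driver_handles_share_server = True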
15:27:33 <xyang1> how about we just keep the current names but with better explanation in the code and doc
15:28:21 <vponomaryov> I am open to changes, but I do not insist on it.
15:28:32 <chen> xyang1, -1
15:28:51 <ganso1> vponomaryov: +1
15:28:55 <bswartz> xyang1: that's one option, but I want to give some time to make better proposals
15:29:03 <chen> I still don't understand what "single" means in single_svm_mode.
15:29:06 <vponomaryov> if we change it, then to a boolean, because we will not have a third value
15:29:20 <bswartz> I'll put an agenda item next week to decide whether to rename the option and if so, what the new names should be
15:29:22 <jasonsb_> perhaps the thing to do is to write some stub drivers as documentation
15:29:26 <bswartz> let's keep this discussion going on the ML
15:29:30 <jasonsb_> and see how many patterns develop
15:29:42 <jasonsb_> then revisit
15:29:45 <bswartz> so far I like valeriy's proposal best
15:30:19 <lpabon> bswartz: thanks, that's a good idea (the agenda)
15:30:24 <xyang1> I think vponomaryov's proposal is more straightforward
15:30:33 <bswartz> everyone okay with pushing the decision to next week and giving everyone time to consider?
15:30:36 <xyang1> I just don't like to keep changing names
15:30:42 <csaba> bswartz: and then how do you answer your own argument that it's a mode b/c it implies a different scheme on the part of the manager?
15:30:49 <xyang1> we just got rid of single tenant and multi tenant
15:30:58 <rushil> xyang1: +1
15:31:00 <bswartz> xyang1: I agree, but this change went in during kilo so we haven't actually released the new option
15:31:14 <csaba> bswartz: +1
15:31:27 <ganso1> bswartz: +1
15:31:33 <bswartz> I want to get this right during kilo because it will be much harder to change it during L
15:31:38 <lpabon> bswartz: aye!
15:31:49 <marcusvrn> bswartz: +1
15:32:02 <xyang1> bswartz: if we can settle down in Kilo, that will be great
15:32:09 <bswartz> okay
15:32:15 <vponomaryov> let's decide it next meeting
15:32:25 <vponomaryov> with a poll
15:32:29 <bswartz> #topic level-of-access-for-shares BP
15:32:40 <bswartz> #link https://blueprints.launchpad.net/manila/+spec/level-of-access-for-shares
15:32:53 <vponomaryov> The idea of this^ sprang from the following use case:
15:32:54 <bswartz> vponomaryov: you're up
15:33:04 <vponomaryov> use case: public share with different access levels for different users of different projects.
15:33:10 <vponomaryov> Like a publisher with 'rw' access and readers with only 'ro' access.
15:33:16 <vponomaryov> This is useful together with the implementation of another idea described in BP: #link https://blueprints.launchpad.net/manila/+spec/level-of-visibility-for-shares where we can make a share visible to all.
15:33:27 <vponomaryov> So, question for maintainers of drivers. Will it be possible to implement it with your drivers?
15:33:36 <vponomaryov> if such interface appears
15:34:28 <vponomaryov> three possible levels are planned - ro, rw and su
15:34:31 <bswartz> so the share is still owned by 1 tenant, but they can do access-allow with rw/ro instead of just rw?
15:34:39 <vponomaryov> right
15:34:44 <bswartz> okay ro/rw/su
15:34:54 <bswartz> those 3 levels only make sense for NFS btw
15:35:03 <bswartz> for CIFS the allowed "levels" might be different
15:35:18 <vponomaryov> let's keep it at the abstraction level
15:35:26 <vponomaryov> the idea of more than one level
15:35:30 <jasonsb_> manila access-list would have additional field?
15:35:31 <ganso1> the difference between su and rw is not clear to me
15:35:47 <bswartz> well if we support it in the manila API, then the implementation must be standard across all backends
15:35:48 <vponomaryov> ganso1: su has execution rights
15:35:49 <ganso1> bswartz: +1
15:36:04 <ganso1> vponomaryov: humm ok
15:36:14 <bswartz> we can't have some backends that support some levels and other backends that support different levels
15:36:17 <ganso1> vponomaryov: is this mode supported by both CIFS and NFS?
15:36:23 <vponomaryov> rwx or rw- or r--
15:36:39 <bswartz> the difference between rw and su is that su means "root_squash" is turned off
15:37:03 <ganso1> bswartz: thanks, but root_squash is only for NFS, correct me if I am wrong please
15:37:12 <vponomaryov> ganso1: I did not look deeply into CIFS regarding that
15:37:12 <bswartz> correct
15:37:26 <bswartz> vponomaryov: -1
15:37:30 <bswartz> su has nothing to do with the x bit
15:37:33 <xyang1> vponomaryov: what about r-x?
15:37:35 <ganso1> also, I believe changing permissions manually via a script is out of scope, correct?
15:37:42 <bswartz> su only has to do with root_squash
15:38:41 <vponomaryov> ganso1: permission for who? all at once or some?
15:39:13 <ganso1> vponomaryov: I meant that those modes will apply to the share as a whole, such as the options configurable in NFS export, not the files themselves
15:39:21 <bswartz> an NFS client can directly chmod files inside the NFS share, and access is controlled inside the NFS protocol
15:39:53 <bswartz> some NFS servers can squash root, meaning that clients cannot obtain root access under any circumstances
15:39:56 <vponomaryov> it is not about files, it is about access for whole share
15:40:12 <bswartz> some NFS servers can also force read-only access, regardless of the underlying mode bits on the filesystem
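For concreteness, the three proposed levels map onto standard Linux /etc/exports options roughly like this - paths and networks are made up:

    /srv/share-42  10.0.0.0/24(ro,root_squash)     # "ro": read-only
    /srv/share-42  10.0.1.0/24(rw,root_squash)     # "rw": writable, root squashed
    /srv/share-42  10.0.2.5(rw,no_root_squash)     # "su": client root keeps root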
15:40:44 <bswartz> the NFS server has no control of whether the client can execute stuff or not
15:41:41 <bswartz> so in order to make progress on this
15:41:55 <bswartz> we need to find out if all of the existing drivers can even support a feature like this
15:42:09 <bswartz> I'm pretty sure the generic driver can (for NFS)
15:42:15 <bswartz> and the NetApp driver also could
15:42:35 <bswartz> we'd need to define some levels for CIFS and find out if everyone can support those levels
15:42:51 <bswartz> but there is the separate question of whether these is even demand for this
15:43:02 <bswartz> s/these/there/
15:43:12 <vponomaryov> the use case mentioned above
15:43:31 <vponomaryov> that belongs to a public deployment
15:43:38 <bswartz> so there is a theoretical use case, but are any real users asking for this?
15:44:01 <vponomaryov> I know about 1 case in a driver development project
15:44:07 <vponomaryov> it was implemented using metadata
15:44:13 <vponomaryov> like a workaround
15:44:20 <bswartz> which driver
15:44:25 <vponomaryov> WFA
15:44:32 <bswartz> ah
15:44:55 <bswartz> what the use case for RO or for something else?
15:45:11 <ganso1> I think read only is a must have
15:45:11 <vponomaryov> when we need to share info, but keep it safe
15:45:12 <bswartz> s/what/was/
15:45:34 <ganso1> since for a big company, the IT admin may put several files there and it should prevent users from deleting them
15:45:41 <bswartz> if we only implemented RO and RW, would that be enough?
15:45:41 <ganso1> so it should have this option, RO
15:45:49 <vponomaryov> ganso: +1
15:46:09 <bswartz> RO and RW both have fairly obvious semantics and I'm sure we can support them for both NFS and CIFS
15:46:09 <marcusvrn> bswartz: ganso1: +1
15:46:23 <xyang1> bswartz: that's my question too.  why not allow setting r, w, x, any combination?
15:46:27 <rprakash> #info I have been participating starting last summit and keeping tab on the same
15:46:29 <bswartz> other "levels" like SU are less obvious and might not be supported universally
15:46:58 <bswartz> xyang1: that's not how any NFS server I'm aware of works
15:47:03 <vponomaryov> bswartz: we have no interfaces that are supported by all
15:47:06 <ganso1> there may be less use cases for SU
15:47:12 <bswartz> xyang1: these would be export-wide settings
15:47:32 <toabctl> starting with RW and RO sounds good to me.
15:47:32 <marcusvrn> bswartz: yes, I think our driver (hdi-driver) does not support su
15:47:36 <ganso1> I think it is safe to assume that we can start partially, with RO and RW... and add SU if needed
15:47:36 <vponomaryov> bswartz: so "supported by all" should not be a hard requirement
15:47:49 <bswartz> xyang1: the mode bits for individual files would remain as-is
15:47:51 <xyang1> bswartz:ok, I'll check our backend too
15:48:06 <lpabon> ganso1: i think you are correct
15:48:46 <jasonsb_> i like the idea of rw and ro but let's try to make it general enough to add su later
15:49:01 <vponomaryov> so, the main question is answered: access levels are wanted
15:49:02 <bswartz> #agreed implementing read-only and read-write access levels seems like something everyone can do and there are obvious use cases
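A sketch of how the agreed levels might surface in the client - the --access-level flag is the interface proposed here, not something that existed at the time:

    manila access-allow <share-id> ip 10.0.0.0/24 --access-level ro
    manila access-allow <share-id> ip 10.0.1.0/24 --access-level rw
    manila access-list <share-id>    # would grow an access_level column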
15:49:13 <rprakash> #topic Boot Get started on wiki
15:49:37 <bswartz> also read-only and read-write makes sense for both NFS and CIFS and (hopefully) other protocols
15:49:51 <rprakash> #info established in December frank says
15:50:01 <bswartz> rprakash: can we help you?
15:50:29 <bswartz> #topic open discussion
15:50:49 <chen> do we have logs for irc chat? didn't find manila at http://eavesdrop.openstack.org/irclogs/
15:50:56 <ganso1> chen: +1
15:51:13 <bswartz> chen: yes
15:51:17 <toabctl> https://wiki.openstack.org/wiki/Manila/Meetings
15:51:20 <bswartz> http://eavesdrop.openstack.org/meetings/manila/
15:51:29 <vponomaryov> not meeting
15:51:31 <bswartz> oh!
15:51:34 <chen> bswartz, this is only for meeting
15:51:35 <vponomaryov> room of manila
15:51:38 <bswartz> IRC logs for the channel
15:51:47 <chen> bswartz, yep
15:51:50 <bswartz> no I don't believe that infra logs our channel
15:51:53 <toabctl> oops. that's the link I wanted to post. thanks bswartz. it's mentioned on the wiki page
15:52:02 <bswartz> I log the channel, but my logs are not public
15:52:16 <jasonsb_> bswartz: interested in discussing export_location in db?
15:52:54 <bswartz> jasonsb_: is it a quick topic?
15:53:01 <bswartz> we've got 7 minutes
15:53:09 <jasonsb_> not sure
15:53:34 <bswartz> go ahead and ask the question
15:53:50 <jasonsb_> are there existing patterns for changing the endpoint address depending on some circumstance
15:53:55 <rprakash> #info are the boxes in Oregon for VPN access at Linuxfoundations or are they at Ericsson DCs?
15:53:55 <jasonsb_> (load balancing perhaps)
15:54:41 <ganso1> jasonsb_: this sounds like "share migration"
15:54:44 <jasonsb_> in my case I have many IP addresses I can use but I see that the IP address is coded into the database
15:55:10 <vponomaryov> jasonsb_: you can write any address but only one
15:55:17 <bswartz> yeah...
15:55:18 <vponomaryov> but idea is good
15:55:19 <toabctl> jasonsb_: endpoint address of what? the share-server? the manila api service?
15:55:23 <bswartz> this seems like a limitation
15:55:38 <rprakash> ##action can we get access to BGS hardware for contributions?
15:55:49 <bswartz> clustered NFS server implementations often have a list of IPs through which the share can be accessed
15:55:54 <marcusvrn> rprakash: 0.o ???
15:56:02 <bswartz> rprakash: please stop spamming!
15:56:06 <jasonsb_> I was wondering what other drivers might do where there are many IPs to choose from
15:56:45 <bswartz> jasonsb_: we only return 1 IP address, and then rely on in-band negotiation between the NFS client and NFS server to discover other IP addresses
15:56:54 <ganso1> jasonsb_: maybe a workaround for this limitation is setting up a proxy. But getting rid of this limitation is a good proposal
15:57:01 <bswartz> that's what PNFS is all about
15:57:46 <chen> I have the same question as jasonsb_: is there a way to change the glusterFS driver to add more than one "glusterfs_target", where all glusterfs_targets are replicas of each other? Then when manila creates a share, it chooses one target to use. This would distribute data traffic across the cluster: higher bandwidth, higher performance
15:58:17 <bswartz> a proxy is not the answer
15:58:41 <bswartz> I think manila may need to allow multiple mount points to be stored in the DB
15:58:43 <vponomaryov> we need to implement a list of exports instead of a single export string
15:58:54 <bswartz> the question is whether those would change over time
15:58:55 <marcusvrn> chen: it's a good idea to implement, but I don't think it's possible today
15:59:13 <jasonsb_> i was thinking that the driver itself could be involved in scheduling context
15:59:13 <bswartz> because we currently store that export one time and never change it
15:59:16 <mkwiek> hello
15:59:16 <jasonsb_> to determine this
15:59:41 <jasonsb_> so it's an interesting variable in the single/multi_svm discussion
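A minimal Python sketch of vponomaryov's list-of-exports suggestion - field names are hypothetical, assuming the share model replaced its single string with a list:

    # today: one export location, written once at share creation
    share['export_location'] = '10.254.0.3:/shares/share-42'

    # suggested: a list, so clustered backends can expose every usable IP
    share['export_locations'] = [
        '10.254.0.3:/shares/share-42',
        '10.254.0.4:/shares/share-42',  # same share via a second interface
    ]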
15:59:47 <bswartz> jasonsb_ it's a good idea, but we're out of time
15:59:51 <bswartz> I'm sure we can revisit this topic
16:00:02 <jasonsb_> sounds good
16:00:08 <bswartz> it's not related to the single/multi_svm discussion though
16:00:14 <ganso1> let's discuss this again next meeting or start a ML thread :)
16:00:15 <marcusvrn> bswartz: jasonsb_ +1
16:00:17 <bswartz> if you think it is then you don't understand the driver modes
16:00:30 <bswartz> I'll try to explain why in the ML thread
16:00:42 <bswartz> thanks everyone!
16:00:45 <vponomaryov> thanks
16:00:53 <chen> thanks!
16:00:55 <ganso1> thanks!
16:00:55 <bswartz> #endmeeting