15:00:23 #startmeeting manila
15:00:24 Meeting started Thu Jul 9 15:00:23 2015 UTC and is due to finish in 60 minutes. The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:29 The meeting name has been set to 'manila'
15:00:30 hello all
15:00:35 Hi
15:00:36 hello
15:00:37 hello
15:00:37 Hello
15:01:04 #agenda https://wiki.openstack.org/wiki/Manila/Meetings#Next_meeting
15:01:50 only 2 topics today, so hopefully a short meeting
15:02:02 #topic Midcycle Meetup
15:02:05 hi
15:02:08 #link https://etherpad.openstack.org/p/manila-liberty-midcycle-meetup
15:02:25 so the midcycle meetup is just 3 weeks away
15:03:04 a number of people have said they can't attend in person and would like to attend remotely, so we will be doing a video conference like last time
15:03:21 hi
15:03:24 Hi
15:03:33 anyone who can travel to join locally should contact me to find out about accommodations
15:04:09 * lpabon would really like to do remotely
15:04:23 I just created the etherpad to collect attendance information and to start collecting topic proposals
15:04:53 I'll be working to get a more formal agenda/schedule in the next 2 weeks
15:05:11 because there will be remote attendees, we will have timezone challenges like before
15:05:54 this time I will create timeslots for topics and we will try to stick with the schedule
15:06:33 also, if there will be a lot of remote joiners, we might have a severe race for hangout slots
15:07:14 csaba: what is the max on hangouts?
15:07:19 csaba: we will have audio conference as well as hangout
15:07:24 lpabon: 15
15:07:29 if not I can do bluejeans
15:07:33 i think bluejeans is 99
15:07:39 so if the hangout fills up we can get additional people in audio-only
15:07:41 and we can record them
15:07:44 there should be a way to live stream the video so people on the audio bridge can get the video feed
15:08:12 lpabon: my experience with bluejeans so far has not been great
15:08:17 what kind of video do you expect?
15:08:20 I'm not sure how well it will work for others
15:08:38 bswartz: well, I can set one up if you need. Let me know
15:08:39 google hangouts are tried and trusted, although the 15-person limit is a big problem
15:08:54 vponomaryov: slides, demos, code, etc.
15:09:06 vponomaryov: at least it's useful if the screen displays who's speaking, as we can't expect all of us to recognize each other by voice
15:09:07 we can start with bluejeans and fall back to hangouts if you want
15:09:20 csaba: +1
15:09:24 I like that feature too
15:09:32 I don't know everyone's voice
15:09:46 csaba: not a musician? =)
15:10:13 I'm open to other conference technologies, but we should probably test it out if it's something we haven't used before
15:10:26 vponomaryov: I'm almost deaf in one of my ears ;)
15:10:37 webex is another alternative to google hangouts, but webex has its own set of problems
15:10:41 bswartz: sure, let me know.. i use bluejeans almost every day
15:10:50 bswartz: +1 for webex, why not?
15:11:04 vponomaryov: webex works poorly on Linux
15:11:18 webex doesn't love linux :(
15:11:27 bswartz: it allows joining via skype/phone
15:11:33 for audio
15:11:46 bluejeans allows audio phone dial-in also
15:11:53 it works fine for audio and slide presentations
15:12:08 * lpabon has talked enough about bluejeans :)
15:12:33 but I'm not sure about video, and if anyone has never used webex before they might have fits trying to get it to work
15:13:17 * bswartz remembers doing some unholy things to install a 32-bit JVM
15:13:38 apage satanas!
15:14:14 anyways, please add your name to the etherpad if you plan to join the meetup
15:14:19 bswartz: we are doing lots of unholy things just using openstack =)
15:14:30 and please propose some more topics so next week we can get a more formal agenda/schedule started
15:15:23 #topic share dismantling policies
15:15:31 #link http://thread.gmane.org/gmane.comp.cloud.openstack.devel/58419
15:15:43 rraja: just in time, my topic is on ;)
15:15:48 csaba: you're up
15:16:28 so there is that email that's linked; the question could probably be addressed on the ML, but as we don't have too much for today I thought we could talk it over here
15:16:32 * bswartz assumes everyone is reading the thread
15:17:03 csaba: are you familiar with what cinder does?
15:17:19 bswartz: no, I'd be happy to hear that
15:17:45 bswartz: I think you mean the first situation, when deleting shares
15:17:57 s/think/assume/
15:17:59 the general contract of a delete is: when a user (tenant) deletes a share/volume, the data from that share/volume should never be accessible again to any other user/tenant
15:18:15 there's no guarantee that an administrator couldn't recover the data
15:18:29 bswartz: OK, that's a clear statement
15:18:59 s/any other user/any user/
15:19:11 and with unmanage the difference is that the data can come back?
15:19:18 markstur: yes
15:19:44 well, unmanage is an admin-only operation
15:20:03 csaba, bswartz: unmanage just takes the share out of Manila's control; the data will still exist
15:20:26 yeah, let's not bring unmanage into the discussion
15:20:35 vponomaryov: but according to bswartz the data can still exist with delete too ..
15:20:42 bswartz: OK
15:21:15 in cinder there is a problem where deleted user data can actually still be accessible to users (including OTHER users) if you don't set a specific option for the LVM driver
15:21:25 that's definitely NOT okay
15:21:43 bswartz: it takes lots of extra time
15:21:58 bswartz: and is disabled for testing
15:21:59 but we don't need to securely delete data so that nobody, including the administrator, can access it
15:22:09 vponomaryov: correct
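The LVM driver option alluded to above is presumably cinder's volume_clear setting; a minimal cinder.conf sketch of it, with an illustrative backend section name:

    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    # Scrub deleted volumes so a later tenant can never read the old
    # data -- the delete contract discussed above. "none" skips the
    # scrub (fast, but unsafe); the scrub is the extra time vponomaryov
    # mentions, and why it is often disabled in test environments.
    volume_clear = zero
    volume_clear_size = 0  # 0 = scrub the entire volume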
15:22:32 so then cinder adheres to the cited contract, and implicitly the generic driver adheres to that contract in manila ... what's the situation with other drivers?
15:22:42 csaba: yes
15:23:11 csaba: I would expect that when you create a new share, it's always empty (no data in it)
15:23:16 is this contract consensual enough in cloud computing that we don't have to make an explicit statement about it?
15:23:25 unless the share is created from a snapshot, of course
15:24:02 csaba: in a cloud context you have to trust the cloud admin with your data in any case, because the admin can ALWAYS read your data if he wants to
15:24:26 so there's little point in going to the extra effort of securely deleting data on the platters
15:24:36 OK, so I'm happy with these answers; we can get to the second part
15:25:22 the disruptiveness of deny-access, where "disruptiveness" is my home-brewed terminology
15:25:36 I'm not sure how to understand the second question
15:25:43 is it about access-deny?
15:25:49 to mean that on access revocation, all users who are not authorized are kicked out
15:26:01 bswartz: indeed, only deny
15:26:18 csaba: just think how it is without Manila, using vanilla NFS
15:26:35 access rights are verified at "mount" time
15:26:36 NFS and other protocols have defined semantics for when access is removed, I believe
15:26:57 stale mount
15:27:03 vponomaryov: according to my testing that is disruptive... after the export is removed, further syscalls fail with EACCES
15:27:12 if the admin tells the server to revoke access for a client, the server immediately starts denying requests from that client
15:27:24 bswartz: yes, that's what I call disruptive
15:27:32 csaba: what error is returned would be up to the client, though
15:27:52 clients cache data and could in theory continue operating from their cache for some time before they find out the server has cut them off
15:28:11 there's nothing we can do about that in any case -- that's NFS semantics
15:28:17 yeah sure, it can all take effect only when data needs to go through the wire
15:28:40 but other drivers might not drop the mount upon access being revoked
15:28:51 so I still don't understand what the alternative might be
15:28:57 (which actually is the case with gluster_native)
15:29:04 deny-access means to deny access right now
15:29:13 how else could it be interpreted
15:29:22 if the tenant has a mount of the share at the point of denial, the mount remains functional
15:29:27 oh
15:29:38 however, new mounts can't be made
15:29:55 that's what I call "non-disruptive denial"
15:29:56 so maybe deny access could be interpreted as: no new mounts can be made, but existing mounts can continue?
15:30:00 okay
15:30:14 yeah, I would say that's the wrong interpretation
15:30:18 yeah, that's the semantic ambiguity I'm concerned about
15:30:42 I believe that deny should be disruptive, to use your term
15:30:54 OK, clear point then...
15:30:58 does anyone disagree about that?
15:31:06 csaba: Can you fix gluster_native to revoke access immediately upon access_deny?
15:31:07 agree
15:31:10 agree
15:31:14 it should, but do we consider it a requirement for Manila?
15:31:33 are we ready to kick out some backend support because of it?
15:31:41 cknight: it will take some time; glusterfs changes will be needed
15:31:43 vponomaryov: yes, I think we should have a scenario test that verifies it
15:32:10 bswartz: +1
15:32:29 scenario test:
15:32:29 1) create share
15:32:29 2) grant access
15:32:29 3) mount share
15:32:29 4) write some data
15:32:30 5) deny access
15:32:30 6) write more data
15:32:30 7) validate that the new data didn't get written
15:32:32 bswartz: the scenario test is low-hanging fruit; I am asking in general
15:32:58 bswartz: when some backend does not behave so and cannot
15:33:17 ... and cannot behave as expected
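A minimal sketch of that scenario test, driven via the shell from a test client. The manila CLI verbs are real; the share name, client IP, export path, and access-id lookup are placeholders:

    import subprocess

    def sh(cmd):
        # Run a shell command and return its exit status.
        return subprocess.call(cmd, shell=True)

    def test_deny_access_is_disruptive():
        sh("manila create NFS 1 --name deny-test")           # 1) create share
        sh("manila access-allow deny-test ip 192.0.2.10")    # 2) grant access
        sh("mount -t nfs server:/deny-test /mnt/deny-test")  # 3) mount share
        # 4) write some data; conv=fsync forces it over the wire
        sh("dd if=/dev/zero of=/mnt/deny-test/before bs=1M count=1 conv=fsync")
        sh("manila access-deny deny-test <access-id>")       # 5) deny access
        # 6) write more data; fsync again defeats client-side caching
        rc = sh("dd if=/dev/zero of=/mnt/deny-test/after bs=1M count=1 conv=fsync")
        # 7) the write must fail; per the discussion that follows, a
        # second still-authorized client should also confirm that no
        # 'after' file appeared on the server side.
        assert rc != 0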
15:33:29 well, I'm curious to know which backend could not implement those semantics
15:33:52 csaba: said that gluster_native does so
15:33:57 so then, as a reward for bringing this up, may I ask for some grace time? :)
15:33:58 it seems like a severe limitation to not have a way to cut off user access on the server side
15:33:58 csaba: right?
15:34:10 csaba: just consider it a bug
15:34:11 vponomaryov: yeah, that's what we just discovered
15:34:33 bswartz: yeah, we'll file one
15:34:58 bswartz: how do you mean to get at 7)?
15:35:03 csaba: I would fix it in liberty (the sooner the better) and also propose a backport, because it's a security issue for your driver
15:35:30 csaba: allow access and re-mount :)
15:35:31 csaba: for the scenario test there would need to be a second client with access that was not denied
15:35:43 or what u_glide said
15:36:17 u_glide, bswartz: I thought we'd verify whether the mount becomes defunct on denial
15:36:19 IMO a second client would be a cleaner way to write the test
15:36:53 csaba: we don't care what happens on the client side -- as I said, the client defines its own behavior when it loses access
15:37:10 what we care about is that from the server side, nothing got modified after the access was denied
15:37:31 a similar test could be done to check that read access was also revoked, which would definitely require a second client
15:38:14 bswartz: so there would be a window of time across with you check if the server content is unchanged after denial?
15:38:29 s/with/which/
15:38:49 it should be pretty immediate once the API request succeeds
15:39:16 to make the test run stably we might need to insert some fences or fsyncs
15:39:43 bswartz: OK, that makes sense
15:40:16 we wouldn't want client caching behavior to make the test non-deterministic
15:40:28 bswartz: yeah, that was my concern
15:40:46 okay, so did we answer your questions?
15:40:55 I'll post a reply to the ML thread
15:41:07 I'm just slow on email so I hadn't gotten to it yet
15:41:20 yep, just let me know: what exactly is a fence here? an informal term, or does it have an exact technical meaning?
15:41:57 in the case of the scenario test we'd just need to flush write caches after every data write
15:42:08 before going to the next step
15:42:18 OK
15:42:25 I think a simple sync will do that
15:42:41 yeah, so my questions got proper answers, thanks.
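The "fence" here is just a cache flush between test steps; one way a test client could do it, with a placeholder path:

    import os

    def write_and_fence(path, data):
        # Write and then fsync, so the I/O reaches the server before
        # the next test step -- without this, client caching could
        # make the deny-access check non-deterministic.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        try:
            os.write(fd, data)
            os.fsync(fd)  # flush this file's dirty pages to the server
        finally:
            os.close(fd)

    # e.g. write_and_fence("/mnt/deny-test/before", b"x" * 4096)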
15:42:51 #topic open discussion
15:43:35 anyone have another topic?
15:43:42 bswartz: Care to update folks on the progress of the replication design?
15:43:50 yes
15:43:55 cknight: no ;-)
15:44:05 lol
15:44:09 I have the topic of offline share expectations on resize
15:44:11 bswartz: :-)
15:44:18 Should I send that out on the ML instead?
15:44:27 lpabon: go ahead
15:44:30 ok
15:45:37 for those interested in replication -- I think I mentioned last week that I think manila needs to support AZs, and that the replication proposal needs updating based on that, and the update is not done
15:46:35 lpabon: ?
15:46:51 oh, I was just going to send it to the ML :)
15:46:57 oh, okay
15:47:08 we've got 13 minutes here, but the ML also works
15:47:16 ok, i'll start it
15:47:19 anyone interested in that can read the chat history in #manila from the last hour
15:47:45 it all started when I was reviewing the generic driver's
15:47:55 support for shrinking a share
15:48:14 It takes the volume offline to be able to shrink the volume
15:48:42 It seemed to me that probably most storage systems can do this while online, but some may not
15:48:53 So, I asked on the #manila channel
15:49:02 I'm curious if we know of any system other than the generic driver that can't do an online expand of a share
15:49:26 I'm almost 100% sure that gluster can do it online (expansion)
15:49:40 because if it's just a matter of making the generic driver better, then we can undertake that challenge
15:49:55 bswartz: yeah, i agree
15:50:25 maybe the question should be: What drivers/storage systems can resize while keeping the share online?
15:50:37 and which cannot..
15:50:39 lpabon: we had a poll on the ML
15:50:58 lpabon: Resize behavior may differ between shrink and extend for some backends
15:51:09 ganso_: agreed
15:51:20 a backend may be able to extend online, but require taking the share offline to shrink
15:51:31 yeah, I'm more interested in extend than shrink
15:51:41 The concern is that, to the user, there could be different behaviors on different shares, since they could be coming from different vendors
15:51:43 ganso_: +1
15:51:48 me too
15:52:23 I can live with shrink requiring the share to go offline, but extend really should be online if at all possible
15:52:34 But shrinking is important... Imagine paying $$ for some amount of storage space one month due to a project... then not needing that much storage the next...
15:52:46 hadn't we agreed before that all vendors could extend online? I don't remember
15:52:47 But what I am asking is really this:
15:53:04 ganso_: yes, we had
15:53:15 Should we have everyone conform to the same behavior?
15:53:16 lpabon: you have 2 recourses: you can suffer the downtime caused by a shrink, or you can copy your data to a smaller share and delete the larger one
15:54:17 bswartz: absolutely true. (it would be nice to have a "copy" function :-))
15:54:28 lpabon: I am working on that :)
15:54:44 I am implementing a copy function for share migration
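A sketch of the copy-based recourse bswartz describes, i.e. what a shrink fallback built on such a copy function might look like; every helper name here is hypothetical:

    def shrink_by_copy(manager, share, new_size):
        # Backend can't shrink in place: create a smaller share, copy
        # the data over (e.g. via the share-migration copy function
        # mentioned above), and delete the original.
        new_share = manager.create_share(size=new_size)
        manager.copy_data(src=share, dest=new_share)
        manager.delete_share(share)
        # Caveat raised below: this takes time, and the new share's
        # export location differs, so clients must re-mount.
        return new_share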
15:54:54 but still the question remains: should the expected behavior be the same across all share vendors?
15:55:14 that's really more of a philosophical question
15:55:34 we could have a shrink fallback that does exactly as bswartz said
15:55:49 create a smaller share, copy all data to it
15:56:03 and use the copy function implemented in manila core
15:56:04 but that takes time, plus the mount point may be different
15:56:12 generally speaking, I'm in favor of complete consistency
15:56:18 true
15:56:30 I'm treating the generic driver and extend as a special case -- maybe it's the wrong thing to do
15:56:47 bswartz: yeah, I am not really coming up with an answer, just asking what we think the correct course should be
15:57:27 i think users will expect consistency across any share
15:57:36 consistency of behavior, that is
15:57:48 inconsistent behavior makes me very unhappy, and I try to avoid it with designs that allow/enforce consistency
15:58:18 yeah, that is my concern also
15:58:19 the trouble here is that extend is such an important operation that I'm not willing to block it completely because one driver can't do it right
15:58:34 so I'm being hypocritical
15:59:22 bswartz: true, i agree; I'm just thinking we need to write down in the API what the expected behavior is, so that customers selling Manila can tell their customers
15:59:28 we're almost out of time, though, so I think an ML post is called for
15:59:35 sure. will do
15:59:48 bswartz: +1
15:59:53 thanks everytone
15:59:59 everyone*
16:00:07 #endmeeting