19:01:27 #startmeeting swift
19:01:28 Meeting started Wed Dec 11 19:01:27 2013 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:31 The meeting name has been set to 'swift'
19:01:56 agenda this week is light. https://wiki.openstack.org/wiki/Meetings/Swift
19:02:19 who's here?
19:02:28 hola
19:02:36 avast, ye salty sea dog
19:02:38 o/
19:02:40 here
19:02:47 here
19:02:54 here
19:03:33 cschwede: now that I know we're online at the same time, I had to add a .mailmap entry for you recently (you had multiple addresses). please review and offer a patch if I chose the wrong one
19:03:43 great, let's get started
19:03:52 #topic 1.11.0 status update
19:04:14 RC for 1.11.0 has been cut. tons of great stuff in it (see the changelog)
19:04:23 it's currently living on the milestone-proposed branch
19:04:34 great
19:04:51 will it be final tomorrow?
19:04:53 if nothing comes up in the next ~20 hours, we'll have our final 1.11.0 release
19:05:15 the only thing that came up so far was the sphinx versioning requirement change
19:05:45 since without it, new versions of sphinx (which -infra has installed) break
19:06:08 anyone know about anything else for 1.11?
19:06:37 not from red hat
19:07:01 acoles: cschwede: anything from our EU contingent? :-)
19:07:14 acoles: wake up!
19:07:19 no
19:07:20 heh
19:07:20 we know it's late. :)
19:07:21 no :)
19:07:42 that was no to notmyname :)
19:07:42 ok. if anyone sees anything troubling, please let me know ASAP
19:08:02 #topic python-swiftclient
19:08:10 moving on to the next thing...
19:08:32 the client has had some patches languish and needs some lovin'
19:08:51 there are a couple of patches I'd like to see land in it https://wiki.openstack.org/wiki/Swift/PriorityReviews
19:09:05 here's the plan I'm working towards:
19:09:35 (1) land those last 2 patches (2) cut the last 1.X rev (3) land the cert checking patch (4) cut the 2.0 rev
19:10:08 as to the other patches outstanding for python-swiftclient, most are stylistic changes or py3k changes (neither of which I consider high priority)
19:10:25 any objections to this plan?
19:10:48 none from red hat
19:11:23 I think https://review.openstack.org/#/c/59673/ (one of those 2 last patches) needs only another opinion from someone else, should be easy to merge.
19:12:11 ya, I think both are fairly easy patches to review
19:12:56 I'll look for activity on these and start nagging if nothing has happened by the end of the week
19:13:06 anything else on python-swiftclient?
19:13:53 ok. moving on...
19:14:02 #topic get_diskfile
19:14:11 is gholt here?
19:14:13 this is a change that portante has.
19:14:16 portante: got a link?
19:14:19 sec
19:14:21 doesn't look like it
19:14:34 https://review.openstack.org/#/c/60629/
19:14:45 thanks
19:14:46 I don't look like that link, but there it is
19:15:06 gholt has placed a -1 on that patch. are there any other concerns with it?
19:15:36 * torgomatic is entirely neutral
19:15:56 not sure I fully understand his concern
19:15:58 as long as storage policy index can make it down to the filesystem in a manner that nobody hates too much, I'm on board
19:16:16 portante: to summarize what I've heard on it, the get_diskfile change is something that would be expected to move over time, so this isn't good or bad, per se
19:16:38 okay, but that is true for lots of the code
19:16:44 portante: ie anyone writing a DiskFile would be expected to know what versions of swift they are compatible with
19:16:50 portante: sure :-)
19:17:01 Frankly I don't see why an implementation cannot conform to the old style of parameters and then simply ignore device and partition.
19:17:01 certainly, that is true today
19:17:20 it can, this is to line things up with what is coming with storage policies
19:17:42 so that new parameters from the environment that are implementation specific land as kwargs only
19:17:59 and BTW, I have the patch almost ready that removes DATADIR and replaces it with a method that takes the policy index in and returns the obj dir
19:18:11 this is a very tight change, and one which all the other out-of-tree consumers have to adjust to anyway today
19:18:18 peluse: great
19:18:19 yup
19:18:30 yup to your first comment, not the 2nd one :)
19:19:06 do we need to 'policy-ize' ASYNCDIR as well?
19:19:29 peluse: before we get there, let's finish up with portante's patch
19:19:30 asyncdir-1, asyncdir-2, etc
19:19:46 sorry, jumping ahead
19:20:08 notmyname: not much more we can discuss
19:20:16 gholt is not here, so it is kinda moot
19:20:41 I am happy to move on to the next topic and just work this as best we can via -swift
19:20:54 portante: I'm on board with a, c, o, **kwargs. seems like it needs to go through normal review and land when it gets 2 +2s
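A minimal sketch of the a, c, o, **kwargs shape endorsed just above, showing why it helps alternative backends: account/container/object stay the stable interface, while implementation-specific values arrive as keyword arguments a backend may ignore. This is illustrative only (not the code from https://review.openstack.org/#/c/60629/), and the keyword names are assumptions.

```python
# Illustrative sketch, not the patch under review. Keyword names
# (device, partition, policy_idx) are assumed for the example.

class InMemoryDiskFileManager(object):
    """Toy backend showing the benefit of the a, c, o + **kwargs shape."""

    def __init__(self):
        self._store = {}

    def get_diskfile(self, account, container, obj, **kwargs):
        # A filesystem backend would care about kwargs['device'] and
        # kwargs['partition']; this one can ignore them, along with any
        # future additions such as a storage policy index.
        key = '/%s/%s/%s' % (account, container, obj)
        return self._store.setdefault(key, {})
```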
19:21:43 ok, moving on... to "open"
19:21:49 #topic open discussion
19:21:57 peluse: asyncdirs?
19:22:11 yes, question for portante mostly I guess, and torgomatic
19:22:31 just one for all policies or a dir per policy like we are doing with objects
19:22:43 What about the topic on the agenda: gatekeeper middleware (https://review.openstack.org/51228)?
19:22:44 I don't think we need to do it for asyncdirs, or quarantine for that matter, as the hash will always be different
19:22:54 cool
19:23:08 lincolnt: I snuck that in there at about UTC 16:59:59, so we can just do that in open discussion :)
19:23:09 makes sense
19:23:19 lincolnt: whoops. sorry. I didn't reload my agenda page :-)
19:23:27 lincolnt: I'll come back to that
19:23:28 there's plenty of time
19:23:31 ok
19:23:37 notmyname: Do you think we need more reviewers, and if yes, are there any good ideas? I basically do nothing while I'm working on mem_backend for a,c and I feel like reviews pile up.
19:23:59 ok, let me organize this
19:24:03 #topic async dirs
19:24:20 got my answer already :)
19:24:22 peluse: clayg: torgomatic: any resolution here? yes or no?
19:24:24 ok
19:24:50 answer is no (no need to make changes there)
19:25:11 peluse: at least for now. they may be needed at a later time
19:25:30 I have it almost done so may submit with the patch and reviewers can decide, easy to back out
19:25:34 ok
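To make the directory scheme concrete, here is a hedged sketch of the DATADIR change peluse describes earlier (a function taking a policy index and returning the object dir), plus the decision just reached for async dirs. The 'objects-<index>' naming is an assumption for illustration, not the merged patch.

```python
# Sketch of the idea only, not peluse's actual patch. The naming pattern
# is assumed, echoing the "asyncdir-1, asyncdir-2" pattern floated above.

DATADIR_BASE = 'objects'

def get_data_dir(policy_idx):
    # policy 0 keeps the legacy 'objects' layout, so existing clusters
    # need no data migration
    if policy_idx == 0:
        return DATADIR_BASE
    return '%s-%d' % (DATADIR_BASE, policy_idx)

# Per the resolution above, async_pending (and quarantine) stay a single
# directory regardless of policy, since the hashes always differ.
```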
19:26:11 #topic review backlog
19:26:37 zaitcev: in general, ya, I'm somewhat worried about the review backlog
19:26:57 part of that has to do with the amount of reviews being done
19:27:40 however, we have seen some recent movement as some of the core devs who weren't participating as much in the past did some more reviews (swifterdarrell)
19:28:18 someone should set up a graph of review-queue depth
19:28:20 but for a healthy community we also need to be sure folks from different companies are reviewing other folks' work
19:28:21 :)
19:28:46 portante: yes, indeed
19:28:52 it is not just about review backlog, it is also about getting the community mind share on changes
19:29:09 hear that
19:29:15 I agree with portante in the abstract, but it's a luxury for me. I just grab any reviews I can complete, grab the next, and it never ends. Can't even think about what company backs what.
19:29:40 aside from one or two changes recently, I don't think this is actually a problem we face currently
19:29:55 portante: do you disagree?
19:30:31 it would be nice to see the data on the commits and how they were approved
19:30:41 but not that concerned right now
19:30:46 ok
19:31:14 I think the only case when out-of-band sit-in-same-building authority forced a dubious patch in was Alex's DB cursors thing... and even in that case I doubt anyone would've caught that. BTW, IIRC it was ultimately fixed by a dude from HP.
19:31:24 we have partners who are not participating in the swift community because they don't feel it is open enough
19:31:26 if anyone has ideas on how to do reviews differently, please let me know. let's be free with trying new things
19:31:29 we are trying to change that
19:32:15 that's funny - not participating because they don't think it's open enough is kinda part of the problem
19:32:17 portante: every time I hear that complaint I ask for data and I haven't gotten any yet
19:32:21 peluse: +1
19:32:36 agreed, but it is what it is
19:32:42 I do wonder if I could drum up more support or critique for the backends if I were physically in SF
19:33:11 portante: I'd also be happy to talk to anyone about that if it would help
19:33:11 I used to work in the RH office in Sunnyvale, which is somewhat close.
19:33:28 yes, and I have recommended that
19:33:41 zaitcev: yes, we all need to get on your db backends reviews (myself included)
19:34:01 Tangentially re. community, remember that dude who posted a Ceph back-end
19:34:12 yes
19:34:13 notmyname: maybe a periodic 'review-fest' online event or something...
19:34:23 zaitcev: I'm looking at the ceph one, but I'm no expert - it takes time
19:34:47 We told him "go to a separate repo". I'm wondering if he understood it right, i.e. as making sure he uses stable APIs, with something like a welcome test case, and not just as giving him a cold shoulder.
19:34:49 zaitcev: same for db backends - I don't have a strong vision for what it *should* look like - everything you've got seems sane
19:35:06 https://review.openstack.org/60215 - Babu Shanmugam
19:36:06 zaitcev: I think chmouel should be able to keep it on the right page
19:36:23 zaitcev: i think chmouel is in contact with him
19:36:30 same company right?
19:36:30 cschwede: beat ya!
19:36:34 enovance
19:36:55 okay
19:36:56 ya, I'm happy that it came from a company that has already been very good at working with upstream swift
19:37:34 but it did get jumped on pretty quickly. not incorrectly, but it may have been seen as harsh or sudden. I'm glad to hear chmouel and cschwede can help guide it
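On the review-queue graph suggestion above, a small poller could feed the time series. A rough sketch, assuming Gerrit's standard REST /changes/ endpoint is available on review.openstack.org (not verified here), written in Python 2 idioms to match the Swift of the era.

```python
# Hypothetical sketch: sample the count of open swift reviews for graphing.
# Assumes the standard Gerrit REST API; raise the query limit (or paginate)
# if the queue ever exceeds it.
import json
import time
import urllib2

GERRIT = 'https://review.openstack.org'
QUERY = 'status:open+project:openstack/swift'

def open_review_count():
    resp = urllib2.urlopen('%s/changes/?q=%s&n=500' % (GERRIT, QUERY))
    body = resp.read()
    # Gerrit prefixes JSON responses with ")]}'" to defeat XSSI
    return len(json.loads(body[len(")]}'"):]))

if __name__ == '__main__':
    # one "timestamp count" line per run; cron it and plot the output
    print '%d %d' % (int(time.time()), open_review_count())
```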
19:37:48 ok, let's move on to lincolnt's topic
19:38:00 #topic gatekeeper middleware
19:38:22 alright, so there's this patch here that adds a new, mandatory middleware: https://review.openstack.org/51228
19:38:24 lincolnt: what do you have?
19:38:51 We just noticed it, wanted to discuss, might be related to our https://wiki.openstack.org/wiki/MetadataSearch topic
19:38:57 so why must this middleware be the first one?
19:39:04 it's my patch
19:39:04 tomesh and I here, from HP
19:39:18 i thought it had to be the second, after catch_errors
19:39:18 oh, sorry, it is torgomatic's topic, not lincolnt's
19:39:30 sure, the second one, sorry.
19:39:34 yeah, I should have put my name on it
19:39:38 * torgomatic is lazy sometimes
19:40:11 anyhow, when the (mandatory) gatekeeper middleware isn't in the pipeline, this patch adds it
19:40:13 my concern is with the metadata search middleware
19:40:28 tomesh: I don't think this is related to search, sorry
19:40:55 so we offer search on both custom metadata as well as system metadata
19:40:58 We will be capturing additional system metadata on requests to add to the TBD metadata DB for searching
19:41:15 and when gatekeeper isn't second in the pipeline, this patch moves it
19:41:32 that's what I want to get thoughts on: the automatic moving of middlewares
19:41:48 IMO, if an operator writes their pipeline in a particular order, we should respect that
19:41:57 I agree
19:41:57 like if someone sticks mempeek at index 0 to look for memory leaks
19:42:29 but I don't want to send acoles off on a snipe hunt if other folks think that gatekeeper should get relocated
19:42:29 torgomatic: I still cannot find it, what's the review #
19:42:39 zaitcev: https://review.openstack.org/51228
19:42:40 51228
19:42:58 the code in question is in swift/proxy/server.py
19:43:01 https://review.openstack.org/#/c/51228/
19:43:37 I believe the proxy server, and really any WSGI server, needs control of the beginning and end of the pipeline
19:43:57 so basically what I'm after here is: should Swift re-order middleware or not?
19:44:00 catch_errors is the one that I think needs to be mandatory
19:44:16 portante: i like to turn off catch_errors in dev sometimes (so lazy)
19:44:18 I don't think it should reorder middleware that is definable in the configuration file
19:44:25 torgomatic: in some cases I can see that it should (eg cache after ratelimit)
19:44:34 clayg: sure, so have a proxy-server section switch for that
19:44:49 in other cases, no. eg I have my own custom middleware where I want/need it
19:44:54 portante: sure, but if gatekeeper is config definable and someone screws up then sysmeta gets leaked
19:45:05 I think that dynamic pipelines patch had a good idea in this regard - you can either ask for the pipeline to be auto ordered or you can say explicitly what you want
19:45:24 So, related to metadata search: How can new middleware like ours grab new system metadata like (say) x-container-sysmeta-target-container-pointer if it strips it off before our middleware later in the pipe sees it?
19:45:29 ya, I want alpha_ori to resurrect that patch
19:45:48 but I don't think we are talking about arbitrary middleware, we seem to be talking about how to define the environment that middleware lives in within a WSGI server for swift
19:46:17 so if we are trying to prevent headers, why wouldn't we force that to the beginning of the pipeline?
19:46:36 lincolnt: gatekeeper just strips from the incoming client request (which is why it needs to be at the start of the pipe)
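For readers following along, a simplified sketch of the strip-on-ingress behavior just described: remove client-supplied headers in the reserved sysmeta namespace before anything downstream sees them. The prefix pattern is an assumption for illustration; the real rules are in the patch at https://review.openstack.org/51228.

```python
# Simplified illustration of the behavior described above; this is not the
# code under review. The reserved-prefix pattern is assumed.
import re

INBOUND_RESERVED = re.compile(r'^x-(account|container|object)-sysmeta-', re.I)

class GatekeeperSketch(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, env, start_response):
        # WSGI spells X-Container-Sysmeta-Foo as HTTP_X_CONTAINER_SYSMETA_FOO
        for key in list(env):
            if key.startswith('HTTP_'):
                header = key[5:].replace('_', '-')
                if INBOUND_RESERVED.match(header):
                    del env[key]  # clients never get to set sysmeta
        return self.app(env, start_response)
```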
19:46:38 portante: because I might want to profile the whole middleware chain, including gatekeeper
19:46:39 and how would the administrator know their middleware isn't at the beginning?
19:46:47 and if you force gatekeeper in front of my profiler, I can't do that
19:47:11 so then we need to do the profiling differently
19:47:23 but how many other middlewares will we want to force to the front there?
19:47:31 profiling seems to be internal type stuff
19:47:47 so are we talking about internal mechanism or general middleware?
19:48:25 I'm talking about general stuff... if I type out a pipeline with a bunch of stuff in it in a particular order, I want the proxy to just do it
19:48:35 and it will
19:48:51 not if we move gatekeeper around
19:49:06 why think of it as middleware?
19:49:38 two modes: (1) explicit, and warn if it's not up front, and (2) auto-manage, where it moves things
19:49:38 torgomatic: there is lots of code in WSGI handling that won't get profiled with profiling middleware
19:49:40 more concretely, if I have "pipeline = thing-one thing-two catch_errors gatekeeper", the proposed patch will turn that into "pipeline = catch_errors gatekeeper thing-one thing-two"
19:49:41 how about that ^
19:50:23 torgomatic: in this proposed scenario catch_errors and gatekeeper are not middleware
19:50:26 right. why does gatekeeper need to be second
19:50:27 notmyname: Some people, when confronted with a problem, think "I'll make it configurable!" Now they have 2^N problems.
19:50:32 they would be part of WSGI
19:50:53 torgomatic: sure sure :-)
19:51:12 tomesh: catch_errors is a generic error handling thing for the pipeline, also adds txid
19:51:25 sure
19:51:34 * clayg searches for gatekeeper in pep333
19:51:40 gatekeeper is about preventing user-specified reserved headers from entering into the WSGI handling
19:51:42 my question is why do we force gatekeeper to be the second middleware
19:51:58 I understand
19:52:00 portante: ...but they're not? WSGI stuff isn't usually subject to me goofing it up, but these modules are
19:52:00 so that catch_errors can be first
19:52:04 and I support this functionality
19:52:14 torgomatic: what?
19:52:19 but why does it NEED to be second?
19:52:34 make gatekeeper first but let it whitelist earlier middleware?
19:53:12 portante: eventlet.wsgi or Apache or whatever is outside the scope of things I can fix by patching Swift, but gatekeeper isn't, so I want it to be examinable
19:53:28 tomesh: so that no other middleware will be affected by user attempts to set reserved headers
19:54:05 notmyname: worth considering, but the issue concerns catch_errors too
19:54:07 but what if we have a middleware that needs to see the reserved headers
19:54:46 tomesh: the point is that those middlewares need to be assured that if a reserved header is present it came from other middleware and was not set by the user
19:54:56 tomesh: the reserved headers should never be set by a client
19:55:00 torgomatic: why isn't gatekeeper?
19:55:25 We want our metadata search middleware to be in front of gatekeeper so we catch system metadata that gatekeeper (with this patch) would strip off, e.g. the example x-container-sysmeta-target-container-pointer
19:55:25 (my name example)
19:55:43 portante: because it lives in the Swift source tree, and gets packaged with Swift
19:55:44 lincolnt: why?
19:56:16 torgomatic: how much of swift do you want examinable? just gatekeeper?
19:56:26 what about the rest of the wsgi code in common?
19:56:41 why would gatekeeper be any different from that
19:56:45 Isn't it a use case that the general mechanism being added, to store any new system metadata, might be metadata that someone could want to search on? Like custom metadata
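Before the wrap-up, the crux of the disagreement as a sketch: mode (2) silently rewrites the operator's pipeline so catch_errors and gatekeeper come first (and adds them if absent), while mode (1) respects the configured order and only warns. Function and flag names here are invented; this is not the logic from swift/proxy/server.py in the patch.

```python
# Illustration of the two modes discussed above; names are invented and
# this is not the code under review.
import sys

REQUIRED_HEAD = ['catch_errors', 'gatekeeper']

def enforce_pipeline(pipeline, auto_manage=True):
    """pipeline: list of filter names as parsed from the proxy config."""
    rest = [name for name in pipeline if name not in REQUIRED_HEAD]
    wanted = REQUIRED_HEAD + rest  # also inserts them if absent entirely
    if pipeline == wanted:
        return pipeline
    if not auto_manage:
        # mode (1): explicit; respect the operator's order, just warn
        sys.stderr.write('WARNING: pipeline does not begin with %r\n'
                         % REQUIRED_HEAD)
        return pipeline
    # mode (2): auto-manage; what the patch under review proposes
    return wanted

# enforce_pipeline(['thing-one', 'thing-two', 'catch_errors', 'gatekeeper'])
# returns ['catch_errors', 'gatekeeper', 'thing-one', 'thing-two'],
# matching the concrete example given at 19:49:40.
```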
19:56:47 I don't think we've got a consensus here (or will get one in the next 4 minutes), so let's move the discussion to #openstack-swift and the patch review
19:56:56 okay
19:57:06 sure
19:57:14 but now we all know what's at stake, so go review the code :-)
19:57:22 ok. btw thanks folks for the reviews and interest
19:57:35 acoles: I think you've got a lot of interest :-)
19:57:50 and have done a good job with this
19:57:55 #topic open discussion (redux)
19:58:05 anything else to bring up in the last 3 minutes?
19:58:14 EC
19:58:21 doubt we can do it in 3 min though :)
19:58:25 heh
19:58:38 we can pick it up on regular IRC
19:58:40 ok
19:58:43 works
19:58:53 next meeting is scheduled for Dec 25. I propose we skip it
19:59:03 how come?
19:59:04 :)
19:59:07 any objections?
19:59:11 none here
19:59:22 sounds good
19:59:23 if so, have the meeting without anyone else ;-)
19:59:38 none here, so we'll meet again in the new year. happy new year!
19:59:40 international audience
19:59:49 ok, off to -swift and gerrit for EC and gatekeeper discussions
19:59:53 thanks for coming
19:59:55 #endmeeting