19:00:05 #startmeeting swift
19:00:05 Meeting started Wed Mar 19 19:00:05 2014 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:08 The meeting name has been set to 'swift'
19:00:15 who's here for the swift meeting?
19:00:43 here
19:00:50 here :)
19:00:50 * creiht enters triumpantly from the north
19:00:54 hello
19:01:07 * creiht also can't spell
19:01:10 hey
19:01:33 creiht: figuratively or are you not in San Antonio right now?
19:01:52 heh
19:02:08 ('cause SA is south of just about everyone else here)
19:02:14 was just thinking we should have meetings in a MUD :)
19:02:41 here
19:02:44 it's dark and you're likely to be eaten by a grue
19:02:59 notmyname: that's IF, not a MUD :)
19:03:05 anyways...
19:03:08 :)
19:03:34 #link https://wiki.openstack.org/wiki/Meetings/Swift
19:03:49 not a ton on the agenda this week, so maybe short (?)
19:03:58 #topic storage policies
19:04:09 first up, a status update on storage policies
19:04:24 lots of people have been working on it
19:04:27 great work being done
19:04:31 but not done yet
19:04:52 and not really any way to safely get it into master with enough testing/docs/etc before Icehouse
19:05:02 so here's what that means:
19:05:29 Icehouse will be Swift 1.13.1 or 1.14.0 and not include the functionality currently on the feature/ec branch
19:05:54 and we'll cut that later this month, probably. (maybe first week of April)
19:06:10 and for storage policies, here's the plan to get it on master:
19:06:35 take the functionality that's been written and refactor it into more logical commits and propose them to master
19:06:39 * clayg hopes the second step is ???
19:07:07 this should give us 8-10 patches or so that can be reviewed sanely and so everyone can see what's happening there
19:07:16 lol @ clayg
19:07:43 and those will land after the Swift 1.next tag point
19:07:52 * clayg notes torgomatic did NOT l*OL* for the record
19:08:01 haha
19:08:05 * clayg hopes step 3 is profit
19:08:09 torgomatic is currently taking the lead on refactoring the git history for that work
19:08:15 loloti
19:08:20 clayg: then we land it all on amster and profit!
19:08:24 lol on the inside
19:08:28 amsterdam
19:08:37 portante: brussels
19:09:31 any questions on what's happening with storage policies? I'll be putting that into (probably long) prose and sending it to the openstack-dev mailing list later this week
19:09:33 does anyone have any idea how we should do the reviews? Or basically the same plan as normal?
19:09:48 notmyname: I think that is a good call
19:09:57 I know, a tough decision
19:10:01 I think some are going to be sorta smallish and trivial, others will probably require some thought if you don't know much about sp or the final end state
19:10:02 indeed :-)
19:10:03 I'd like to see us consider code coverage in the reviews in general
19:10:33 and treat storage policies as no different
19:10:43 creiht: it's really the only way to be good citizens to the rest of the contributing community and the user base
19:11:02 portante: you mean there are some tests missing in sp?
19:11:34 we also want to clearly document up front what all the patches are that make up storage policies, so that mid-stream of patches landing folks know the tree is functional, but not feature complete for sp
19:11:48 cschwede_: I think func test coverage of multiple policies is light
19:12:25 cschwede_: otoh you could switch your default policy to 1 and now functest coverage for storage policies is AWESOME
19:12:25 cschwede_: I am not sure, but would like to think we can mitigate the impact of code changes by raising the bar on coverage, both for unit tests and the functional tests
19:12:58 clayg: nice way to increase coverage ;-)
19:12:58 yay moar tests!
19:13:10 portante: +1!
19:13:12 * torgomatic is all for more testing
19:13:15 portante: yup. and that's a more general thing that I'd like to see us all focus more on for the rest of the year (regardless of storage policies)
19:13:21 * clayg was sorta under the impression we kinda had ok coverage as far as opensource/stack projects go
19:13:23 great
19:13:57 we do, but I think dfg's concerns about code churn can be mitigated in part by raising that bar further
19:14:31 clayg: we do, but we also have some pretty important gaps
19:14:34 clayg: so in answer to your question, I think these are "normal" reviews, but a few key ones should get some extra love beyond a standard 2 +2
19:14:35 like in the proxy server
19:14:44 * creiht just re-discovered today
19:14:45 creiht: you're fixing that now, right?
19:14:48 ;-)
19:14:49 lol
19:14:50 i'm not entirely sure, he went on to say that he didn't think some of the issues gatekeeper caused for sos and cdn could really have been caught without their targeted integration testing
19:14:51 fixing some
19:14:53 we need to start +3ing some reviews
19:15:58 failing that, 3x+2
19:16:06 I think these meetings could help highlight the reviews that should have more eyes - I think it's easier to say "let's wait for core XXX to look at this before we merge" if it hasn't been up for weeks already with some other core devoting a bunch of time reviewing it
19:16:18 where weeks ~= months
19:16:28 * creiht gives glange the evil eye
19:16:43 :)
19:17:11 clayg: do you have some in mind that we should bring up later in the meeting?
19:17:21 * clayg gives glange a warm smile
19:17:32 * portante feels the love spreading
19:17:48 for now, are there any more questions about what's going on with storage policies?
19:17:49 notmyname: no not atm - but the ec branch in general is a good point - please start studying up folks - the merge train is COMING! :P
19:17:57 choo choo!
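For context on clayg's suggestion above about switching the default policy: that is a swift.conf change. Below is a minimal sketch of what a multi-policy swift.conf might look like, based on the storage-policy work on the feature/ec branch; the section and option names here are assumptions, not copied from the actual patches.

```ini
# swift.conf -- hypothetical sketch of multi-policy configuration
# (names and values are illustrative; the feature/ec branch defines the real format)

[swift-hash]
swift_hash_path_suffix = changeme

# Policy 0 is the legacy policy backed by object.ring.gz
[storage-policy:0]
name = gold

# A second policy backed by object-1.ring.gz; marking it default means
# containers created without an explicit policy land here
[storage-policy:1]
name = silver
default = yes
```

With `default = yes` on policy 1, the existing functional test suite ends up exercising the non-zero policy without any test changes, which is the effect clayg describes.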
19:18:02 is this a good time to bring up 47713
19:18:14 We had a hilarious occurrence, if nobody noticed
19:18:30 zaitcev: just noticed after you talked about it...
19:18:37 portante's uuid middleware name change?
19:18:37 zaitcev: only 177 days old!
19:18:38 clayg posted a fix 81104
19:18:57 which is a code fix for a bug which was fixed in 47713 months ago
19:18:59 glange: and here I thought nobody noticed
19:19:10 :)
19:19:14 like really exactly the same code change
19:19:24 sorry I'm late... here :)
19:19:29 zaitcev: well the other thing there I noted was that I had to dig around in three different files to piece together the fix which the other patch isolated into a single function
19:19:33 er... method
19:19:50 #topic patch 47713 -- "Pluggable Back Ends"
19:19:56 seems we've moved on :-)
19:20:07 notmyname: nice
19:20:07 I took Clay's new test from 81104 and applied it to 47713, and it worked
19:20:37 zaitcev: did you know you fixed it when you did? opening bugs for some of those and attaching them to the review couldn't hurt
19:22:05 zaitcev: I've recently been digging through some of the container/backend -> common/db relations working on a change sam has going into sp - and I've found I don't have any good intuition for which methods on a broker are defined in common/db or type/backend
19:22:09 clayg: I did not know it was real, e.g. showing in the field. However, since PBE involves all this deep refactoring, a lot of oddities floated up, including that one. Apparently we have a ton of strange cruft, and I went and tried to prove some general statements about what the code does
19:22:54 clayg: zaitcev's changes cap that off, I thought
19:23:00 zaitcev: I saw you had common/db flesh out the interface with some NotImplementedErrors - which I think could help
19:23:02 what is in master is really mid-stream
19:23:28 clayg: the only hard line I've had is that all SQL statements go in the backend
19:23:53 torgomatic: db_file too
19:24:40 zaitcev: noted
19:24:40 portante: I will probably need something like the call-tree you did for diskfile - maybe if I can bring up the docs on PBE
19:24:46 torgomatic, clayg: from the gluster work, having the server API code call simple methods on the objects, like what zaitcev has in the account and container server, cleans things up quite a bit
19:24:50 because if you have, just for example, GlusterFS with a library implementation, it talks to volume servers through a little library. so there's literally no db_file that can be given to open().
19:24:52 I haven't needed to reference db_file yet in what I'm doing
19:25:44 to me, it makes it so that you can do stuff like an in-memory backend for the account and container server to isolate the API handling code
19:26:36 We want to give the sysadmin a view into what breaks, which includes the backend-specific file, which is why I added a __str__ method. For Gluster it would be "host://volume/dirdirdir" or something, and a 100% compatible db_file for legacy.
19:26:39 there are tendrils of the response handling seeping down into the backend code which gets cleaned up (last I looked)
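To make the split being discussed concrete, here is a hypothetical sketch of the idea: the generic broker in common/db defines the interface and raises NotImplementedError for anything storage-specific, while backend modules supply the SQL (or Gluster, or in-memory) details. Class, method, and column names below are made up for illustration and do not come from patch 47713 itself.

```python
# Hypothetical sketch of the common/db vs. type/backend split discussed above.
# Names are illustrative only; they do not match the actual patch.

import sqlite3


class DatabaseBroker(object):
    """Generic broker: request-handling code calls these methods and never
    sees SQL or file paths directly."""

    def __init__(self, identifier):
        self._id = identifier

    def __str__(self):
        # Backend-specific identifier shown to the sysadmin when things break,
        # e.g. a db_file path for SQLite or "host://volume/dir" for Gluster.
        return self._id

    def get_info(self):
        raise NotImplementedError  # backends must implement

    def put_object(self, name, timestamp, size):
        raise NotImplementedError


class SQLiteBroker(DatabaseBroker):
    """Legacy backend: all SQL statements live here and nowhere else."""

    def __init__(self, db_file):
        super(SQLiteBroker, self).__init__(db_file)
        self.conn = sqlite3.connect(db_file)

    def get_info(self):
        row = self.conn.execute(
            'SELECT object_count, bytes_used FROM container_stat').fetchone()
        return {'object_count': row[0], 'bytes_used': row[1]}

    def put_object(self, name, timestamp, size):
        self.conn.execute(
            'INSERT INTO object (name, created_at, size) VALUES (?, ?, ?)',
            (name, timestamp, size))
        self.conn.commit()


class InMemoryBroker(DatabaseBroker):
    """Toy backend with no db_file at all, useful for isolating and testing
    the server's API-handling code."""

    def __init__(self):
        super(InMemoryBroker, self).__init__('memory://')
        self.objects = {}

    def get_info(self):
        return {'object_count': len(self.objects),
                'bytes_used': sum(s for _, s in self.objects.values())}

    def put_object(self, name, timestamp, size):
        self.objects[name] = (timestamp, size)
```

The __str__ override is zaitcev's point above: error messages keep working whether the identifier is a filesystem path or something Gluster-specific.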
19:27:25 I'm more or less on board with the idea, and the patch has come a long way (the in-memory example is neat) - but I still struggle with reviewing it
19:27:34 clayg: +1
19:27:59 okay, anyway. I'm thinking about splitting 47713 into pieces and feeding them in piece by piece. Each given piece will make less sense by itself, but they'll be easier to review. I'll make sure each is somewhat self-contained and passes tests.
19:27:59 agreed, I do too
19:28:32 portante: is zaitcev's suggestion good based on your DiskFile experience?
19:28:40 seems like a good idea to me
19:28:47 I don't know what to think about large patches
19:28:56 sometimes they need a holistic review
19:29:05 notmyname: it's the opposite of portante's experience because we reviewed DiskFile in person at the hackathon
19:29:06 sometimes they need spoon feeding
19:29:08 with big patches like this I'm never sure if "all at once for the whole picture" or "small pieces one at a time" is better
19:29:18 agreed
19:29:51 well, I think it depends on how carvable the patch is (if that's a word)
19:29:52 portante: peluse: clayg: creiht: what would you prefer to see?
19:30:03 I think what might help here is if unit test coverage before and after the change on those modules is really high, and functional test coverage can be shown to be high
19:30:20 portante: back to the "more testing" theme, eh?
19:30:25 I would rather take the pain of reviewing the whole patch
19:30:25 yes
19:30:27 if zaitcev can make a logical series of patches with a high level design flow, that would be my preference
19:30:33 ;)
19:30:39 well, small changes are great when they make sense on their own - sometimes it takes a while to find the small pieces though - I think DiskFile was making progress with small changes and the last mile ended up coming in sorta biggish
19:30:47 I don't want to rely on in-person review. What if something does not come together, like I have a flat tire in Raton on my way to Colorado.
19:30:49 smaller patches are nice as long as splitting up a big patch doesn't hide the overall goal
19:31:02 we can send a taxi for you :)
19:31:04 zaitcev: I've done that before :-(
19:31:21 zaitcev: e-rated tires might help
19:31:29 cschwede_: that's why I added the 'along with a high level design flow' to my comment...
19:31:38 ok, how's this:
19:31:52 put together the call flow for PBE (before and after)
19:32:00 I think some context would help me a lot, like a high-bandwidth overview of the patch - these are the big pieces, this is how they fit together
19:32:01 peluse: yep!
19:32:03 compare before/after testing
19:32:04 notmyname: noted
19:32:26 if I can see it in my head - yeah that makes sense, I want it to work like that - then I can review the patch and decide if it does what I think it should
19:32:35 zaitcev: and as I reviewed I struggled a bit to understand what problem(s) you were trying to solve. Some were clear, others not so much, so some info there would be great too
19:32:36 and, since it's pretty obvious that the one-big-patch approach hasn't worked for 6 months for this one, break it up into discrete chunks
19:33:21 zaitcev: but it seems like an overview of the goals and methods would help a lot for everyone. can you make that?
19:33:33 zaitcev: (not necessarily on your own, if others can help)
19:34:00 notmyname: what form should it take? e-mail to openstack-dev, or in-changelog overview?
19:34:02 I can help with that, let's do that right after the meeting if that is okay
19:34:11 sure
19:34:32 I like the in-changelog overview
19:34:39 zaitcev: portante: a ML post wouldn't impact people who already look at the changeset. just keep it there, IMO
19:34:50 ie no extra eyes from a ML post
19:35:01 there being gerrit?
19:35:12 sorry for my choppy attendance today, have to run... ttl
19:35:45 zaitcev: portante: yes, referenced in gerrit, if not in the commit message or in the patch itself
19:35:57 k
19:36:10 sound workable for everyone?
19:36:15 sure
19:36:21 yes
19:36:34 awesome. thanks zaitcev and portante
19:36:37 #topic swift3
19:36:54 chmouel: zaitcev: you want swift3 back in tree?
19:37:07 is that the s3 mapper?
19:37:11 ya
19:37:39 notmyname: I would prefer it, although I have an ulterior motive: I do not like packaging non-tagged versions and Tomo hasn't tagged since 1.7
19:37:40 https://github.com/fujita/swift3
19:37:46 ah
19:38:34 so, my understanding is that the openstack-tc decision from a while back still stands: no non-openstack APIs should be included in openstack projects
19:38:38 notmyname: I would have sworn we like *had* to take it out because of either a) some openstack foundation promoting-the-openstack-api perception thing or b) some aws s3 clone questionable legal thing?
19:38:38 he's not a bad maintainer and responds to pull requests, but I have a feeling it's not his favourite project
19:38:52 oooh, sorry. I completely forgot
19:39:04 which was the reason given for excluding it, although it may have had a little to do with us doing some tree pruning at the same time
19:39:05 i think one idea is to move it to stackforge: https://github.com/fujita/swift3/issues/62
19:39:35 also, at the same time there was the CDMI API proposed patch. the same decision killed both wrt being in swift's tree
19:39:51 yeah! stackforge! tomo said he'd go for that but it didn't happen - can't we just fork - there's no bad blood or anything
19:39:59 cschwede_: as long as that doesn't mean adding on the openstack dependency party, that might be a good idea
19:40:16 I have a hard time testing that sort of stuff
19:40:29 what about a common place for additional middlewares for swift on stackforge? ie swift3, cdmi, what else (if their authors are ok with that)
19:40:42 like the ceph back-end
19:40:49 zaitcev: :)
19:40:56 not only do I need a SAIO, I need to take out my credit card and go sign up for S3, and then tell my wife WTF this recurring 18-cent charge is when I forget to delete some test data
19:41:17 i think for the ceph backend the idea is already to put it on stackforge
19:41:19 cschwede_: which, generalized, is the concept of a "contrib"
19:41:48 torgomatic: that's a real concern. I suppose you could just deploy eucalyptus ;-)
19:42:14 torgomatic: i think there is a free tier on s3?
19:42:15 cschwede_: what "cost" does stackforge incur?
19:42:36 cschwede_: no, you'll still have a cc registered, and probably BW charges for testing
19:42:50 cschwede_: I thought that was only EC2 with the little free tier, not S3
19:42:52 I could be wrong though
19:43:08 torgomatic: at least they stopped sending me 18 cent bills ...
19:43:17 * torgomatic hasn't worked with Amazon's stuff in a couple years
19:43:42 so I don't think it's appropriate to include it back into swift's source tree. stackforge could be interesting. basically, how do we keep good ecosystem repos alive
19:43:48 notmyname: regarding stackforge: afaik it's like a "regular" openstack project, ie you have core reviewers and everything else
19:43:49 if i am not mistaken, it's not legal to expose the S3 API... it's legal to implement it to access S3, but not to implement it and expose it as an API of Swift...
19:44:42 gvernik: yeah, that could make some trouble
19:44:47 gvernik: we're not adding it back to the swift tree (or getting into legal questions/distractions for the tech side of things)
19:45:01 cschwede_: ok. that's full of good and bad :-)
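For anyone unfamiliar with the middleware under discussion: swift3 is deployed as a filter in the proxy pipeline. The sketch below shows the usual shape of that wiring; the pipeline ordering and the egg entry point are assumptions based on the project's documented pattern, so check the fujita/swift3 README for the authoritative config for your auth system.

```ini
# proxy-server.conf -- sketch of wiring swift3 in as proxy middleware
# (filter placement and entry point assumed; tempauth shown for simplicity)

[pipeline:main]
pipeline = catch_errors healthcheck cache swift3 tempauth proxy-server

[filter:swift3]
use = egg:swift3#swift3
```

The governance question above is only about where that code lives (swift's tree, stackforge, or fujita's repo), not about how it gets deployed.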
19:45:33 #topic open discussion
19:45:40 what else is on your mind?
19:45:42 other patches?
19:45:46 Hackathon
19:45:46 other questions?
19:46:04 cross-account copy!
19:46:06 What are we going to do there and why it's important to get budget to travel there.
19:46:17 zaitcev: ok, I was waiting for a bit to make that public. but I'll talk about it now :-)
19:46:31 Also, why is it after Atlanta? The HK one was before HK, so this is the Juno cycle hackathon, right?
19:47:22 so we had a hackathon last november in austin. it was great
19:47:32 there's been interest in doing another one
19:47:53 so it's something that's been scheduled for June in the Denver/Longmont CO area
19:48:45 We've sent invites to the core devs, and I'll open the registration publicly once the core team has had a chance to register if they are going (so that we don't have a rush). space is limited, like in austin
19:48:49 why june?
19:49:04 because april and may are very busy and march is too soon for logistics
19:49:33 april == openstack release and red hat summit (which many of us are involved with). may == atlanta summit
19:50:02 so that's the info, and more will be shared in the next 2-3 weeks
19:50:24 oh, this one is sponsored by Intel. thanks peluse
19:50:39 So, we're going to focus on EC/SP, right?
19:51:14 I think we'll focus on EC, but SP should be pretty much done, I think.
19:51:24 oh man I hope so
19:51:25 Any specific accomplishment we can do, like reviews? I mean how do I prepare - just read the theory and code for EC?
19:51:33 * torgomatic is tired
19:51:43 there will also be other topics too. I hope there will be a lot around performance, testing, and efficiency improvements actually
19:51:45 * peluse just popped back in for a few
19:52:07 zaitcev: keving has some good EC general info out there. I'll post a link
19:52:55 zaitcev: think of it like the openstack summit, but with no power-point and no "what is swift?" discussions.
19:53:10 this is for the actual python EC library but has good general info also and links to other papers. We'll also be posting some flows/diagrams on the IO path and reconstructor over the coming weeks
19:53:15 https://pypi.python.org/pypi/PyECLib/0.2.2
19:53:20 just like last time in austin. some coding, lots of reviews
19:53:57 zaitcev: let's talk more, if necessary, after this meeting
19:54:04 torgomatic: did you want to discuss cross-account copy?
19:54:16 notmyname: thanks, I got it
19:54:35 zaitcev: cool. let me know if you have more questions
19:55:08 well, I think cross-account copy is looking pretty good, and it's something I want for a project I have
19:55:22 I'd like to get other eyes on it so that it makes it into the next release
19:55:29 yeah that would be nice to see
19:55:36 did the profiling middleware make it in?
19:56:02 #link cross-account copy: https://review.openstack.org/#/c/72157/
19:56:05 not yet
19:56:12 cschwede_: thanks
19:56:21 it would be nice to see that in
19:57:17 for that patch, gholt, chmouel, and portante have all commented on it
19:57:54 I'm hoping one of you can take another look
19:59:05 creiht: agreed on that one too
19:59:19 ...and we're out of time this week (saved by the bell)
19:59:23 #endmeeting
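For anyone studying up on EC ahead of the hackathon, here is a minimal sketch of what using the PyECLib library linked above looks like. This is a hypothetical illustration only: the ec_type string and the exact constructor arguments are assumptions drawn from later PyECLib documentation and may not match the 0.2.2 release.

```python
# Hypothetical PyECLib sketch: split an object into k data + m parity
# fragments and rebuild it from a subset. The ec_type value and argument
# names are assumptions; check the PyECLib docs for your installed version.
from pyeclib.ec_iface import ECDriver

ec = ECDriver(k=4, m=2, ec_type='jerasure_rs_vand')

data = b'some object body' * 1000
fragments = ec.encode(data)   # k + m fragments suitable for spreading across nodes

# Up to m fragments can be lost and the object should still decode
survivors = fragments[:4]
decoded = ec.decode(survivors)   # expected to equal the original data
```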