19:11:56 #startmeeting swift
19:11:57 Meeting started Wed Oct 2 19:11:56 2013 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:11:58 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:12:01 The meeting name has been set to 'swift'
19:12:32 here for the meeting
19:12:37 again, sorry for the delay. seems that I'm having wifi problems
19:12:48 shall we postpone?
19:12:51 here for the free beer
19:13:03 topics we need to address this week include the status of the havana release (first)
19:13:19 portante: I'd like to at least get a status update on the patches
19:13:24 okay
19:13:34 did I see the memcache patch merge earlier today?
19:13:52 I think it is in the queue, the pool one, right?
19:14:10 ya, the pool one
19:14:29 in the zuul queue, behind like 20 other patches
19:14:31 the other one I'm curious about is the dropped connection one by torgomatic
19:15:09 portante: kk, so by tomorrow :-)
19:15:12 torgomatic: the fix he came up with didn't really cut the mustard - he's posted a comment on it so I think he's gunna try again
19:15:29 ok
19:16:00 portante: zaitcev: any progress on reviews for your respective refactorings?
19:16:33 a few folks have reviewed them, but not much movement
19:16:35 notmyname: Thanks for reviewing the trickiest one for me, but I failed to drum up support.
19:17:01 bah. sorry zaitcev
19:17:38 any other outstanding patches that need to be addressed?
19:17:58 I'm new to this project
19:18:08 grapsus: welcome!
19:18:15 hi
19:18:33 grapsus: right now we're wrapping up the release for Swift's contribution to havana
19:18:36 have you finished designing the API between EC codecs and swift-ec?
19:18:39 notmyname: I believe we need to at least address the ondisk changes
19:18:55 clayg: have you considered that further?
19:19:06 grapsus: still WIP, do you have info on the Trello board where we discuss?
19:19:12 portante: what do you mean by "address"?
19:19:18 peluse: yep I read it
19:19:41 grapsus: that's the latest status, still open for mods. We plan to discuss at the hackathon as well
19:19:42 seems like this PyECLib isn't public
19:19:52 sweepthelegjohnn: ^
19:20:05 grapsus: i'll give you access
19:20:05 I've mostly been looking at https://review.openstack.org/#/c/46957/
19:20:05 portante: can you confirm that's top of the tree for now?
19:20:26 oh sorry, I think there are two conversations going on
19:20:28 peluse: I work on EC with Adri2000; I believe he already spoke to you
19:20:38 clayg: my bad, sorry
19:20:53 I spoke with notmyname actually. maybe finish the patches discussion and move on to EC afterwards?
19:20:55 clayg: I think I need to rebase, but not much difference from top-of-tree
19:21:05 grapsus: let's let the topic of existing patches finish first
19:21:14 peluse: I have a very simple implementation of RS in pure python (~200 lines + tests), I wrote it to be sure about the API
19:21:24 peluse: ok, sorry
19:22:02 portante: ok, i'll keep on that one then
19:22:05 * clayg is done
19:22:31 clayg: portante: thanks
19:23:16 what else on the topic of patches for havana?
19:23:51 clayg: regarding ondisk changes that is based on
19:23:53 yeah it is still a bit back in the queue
19:23:58 sorry I'm late :)
19:24:18 don't we want to fix the ondisk changes so that hash_path is put back to utils?
19:24:59 portante: sounds reasonable, but I want to look at it again. clayg, you had thoughts on this, right?
19:25:37 clayg, notmyname: my concern is that we release havana with a code move we don't really want
19:26:17 portante: right
19:26:19 I think i had a different perspective on why you want an ondisk module, my initial thoughts made it pretty clear that things like hash_path and normalize_timestamp that are consumed by things that are NOT ondisk should not be in the ondisk module
19:26:41 mea culpa, after reviewing that change more thoroughly, I made a mistake moving hash_path, though I still believe normalize_timestamp should be in ondisk
19:26:58 another way to look at it is to mark everything that is currently imported/used by the current in-tree implementation as "ondisk"
19:27:14 but I don't know when that stops... cause like readconf, half of utils...
19:27:33 I tried to talk Peter into giving ondisk a rest for a moment and just focusing on getting the API in. This of course means 1) replicators use the old API (but GlusterFS isn't using replicators), 2) no .setup method. On the upside, we actually deliver what we promised. And it's not some crap code cooked up in a hurry to meet a deadline, just the most useful part of it in.
19:27:42 that is why I believe hash_path should go back to utils where it is shared and has unit tests to verify it does not change
19:28:10 It's nicer with ondisk, no doubt.
19:28:26 anyway I think it's ugly but I don't have the chutzpah to carry a big "change the import path of a function just 'cause" patch through - so I shouldn't be whining about it
19:29:21 clayg: "it" == the normalize_timestamp location?
19:29:21 clayg: it seems we both arrived at a good case for hash_path NOT being in ondisk, so it seems worth it to put it back
19:29:56 portante: maybe, it's not worth it to me, normalize_timestamp is the one where I actually have code that sits outside of swift that got broken by the change
19:30:09 like you can't talk directly to backend servers without importing from... ondisk!?
19:30:20 i mean you could re-implement it
19:30:49 clayg: today that is true, because the backend servers rely on that for their ondisk format
19:31:07 clayg: We didn't realize there was such code. I grepped through my out-of-tree stuff, ghole grepped through his, surprisingly normalize_timestamp wasn't used anywhere.
19:31:16 Heheh ghole
19:31:20 we'd have to change the backend code to use a different method and not share it
19:31:27 I've been called worse; wait, maybe I haven't.
19:31:40 ouch, sorry
19:31:42 not twice. by the same person that is
19:31:46 :)
19:31:56 lol
19:31:59 it's fine, I'm cool with it...
19:32:15 I'm *really* not going to be able to carry a patch to relocate it though - so I'm not going to whine about it
19:32:29 The one use I found for normalize_timestamp turned out to be an unused import. So I was whining needlessly
19:32:36 lol
19:32:45 portante: clayg: it seems that it's being used for 2 different things. so maybe it should be 2 different functions (and that might be a bad idea too)
19:32:47 I don't mind carrying that patch if folks want it
19:33:24 this is the kind of small thing that we might regret not being satisfied with before the release
19:33:34 hi
19:33:49 davidhaddas
19:33:50 hello
19:33:57 -d
19:34:14 Yay I'm not alone with nick typos
19:34:48 I'd like to get to an EC status update. are we good with what needs to be done for havana patches?
19:35:23 i'll propose the hash_path restoration and you guys can vote on it that way, is that fair?
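(Editor's note: a minimal sketch, not part of the meeting log, of why out-of-tree code that talks directly to the backend servers ends up needing normalize_timestamp: backend requests carry an X-Timestamp header in Swift's fixed-width format. The format string below mirrors Swift's behavior around this time, but treat the snippet as illustrative rather than the project's exact code.)

```python
# Illustrative only: why normalize_timestamp's import path matters to code
# outside the Swift tree that speaks directly to object/container servers.
import time

def normalize_timestamp(timestamp):
    """Zero-pad a Unix timestamp into Swift's fixed-width, 5-decimal form."""
    return '%016.05f' % float(timestamp)

# A direct backend PUT/DELETE needs a header like this:
headers = {'X-Timestamp': normalize_timestamp(time.time())}
```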
19:35:35 sounds good
19:35:44 one last thing
19:36:12 regarding the DiskFile API changes as a whole, would it make it easier to review if I also posted a unified combined patch set?
19:36:35 Right now it is broken out into four
19:36:52 * notmyname finds it useful to see the end result
19:36:55 but I can go either way
19:36:58 I would appreciate that - I have not been active in that effort and the size/complexity for a newb is one of the reasons why
19:37:10 maybe like https://github.com/portante/swift/commits/acctcont-api only what you say, and then we can clone one, clone two, diff -urpN -X dontdiff one two
19:37:39 or I am the last guy remaining in the git age who does things like that
19:38:48 portante: I think you have the unified set (or can make it easily). go ahead and put it up, perhaps marked as WIP
19:38:57 k done
19:39:02 portante: thanks
19:39:10 #action portante unified patch set?
19:39:18 #action portante unified patch set
19:39:30 ok, EC status update
19:39:44 peluse: sweepthelegjohnn: grapsus
19:39:47 policies: freshly rebased and proxy/container code is ready for review
19:39:59 PyECLib is functional and has a bunch of tests
19:40:10 EC lib API: still being defined/discussed on Trello, plan to talk at the hackathon more
19:40:13 shrink-wrapping for v0.1 release
19:40:22 sweepthelegjohnn: when are you opening it? (I thought you already had)
19:40:36 grapsus: if you have a lib it would be great if you can get in on the Trello discussion and make sure the current state meets your requirements and if not make some suggestions
19:40:49 peluse: grapsus +1
19:41:04 i can; i am not giving out access until a few i's are dotted
19:41:07 peluse: yep I don't have access to my work computer, I can post my API and code tomorrow
19:41:23 grapsus: what do you have?
19:41:47 grapsus: reed-solomon or something else?
19:41:54 RS EC lib in pure python, it's very compact (< 200 lines of python), slow, but it works
19:41:58 oh
19:42:01 I wrote it to be sure about the API
19:42:07 cool, so I suspect between what sweepthelegjohnn has and what Intel is doing we should have a pretty well ironed out API by the end of this month, Kevin? (I can't keep typing that nick :))
19:42:30 peluse: yes.
19:42:40 in fact, i think the api is good to go
19:42:40 sounds great :-)
19:42:49 a few more things, if you guys don't mind
19:42:50 where can we find the state of the current proposed API? in PyECLib?
19:42:56 I think so too, but wanted to reserve that strong of a statement until after the hackathon
19:43:01 sweepthelegjohnn: go for it
19:43:07 ok
19:43:33 it's 100% test covered and the API is documented, I can post it, but we'd like to see your API too to make sure we haven't forgotten anything
19:43:41 so, i talked with jim plank this week and our GF-Complete (really fast galois field stuff) will apparently be added to Jerasure 2.0
19:43:55 awesome!
19:43:56 i will incorporate that into v1.0 of PyECLib
19:44:16 we think that integrating our GF work into Jerasure should be pretty easy
19:44:44 wrt grapsus's question: Kevin, is the latest on Trello up to date with your last conversations with Tushar? If not maybe you can post a "latest" just to level set
19:44:53 yes
19:44:59 I'd really love for the multi-ring stuff to land on master. Is there a chance that could move there instead of the ec branch?
19:45:05 peluse: yeah, i made the updates to PyECLib
19:45:15 gholt: man I would love that too!
19:45:31 peluse: the only comment was specifying word size in the init function
19:45:32 gholt: the plan is for multi-ring to get finished, then merge ec -> master, then do erasure-code stuff, then merge again
19:45:34 IIRC
19:45:43 Ah okay
19:45:45 gholt: peluse: ya, let's figure out how that can happen (not right now while I have 7 second ping times)
19:46:00 BTW: the policy stuff (multi-ring) is in multiple patch sets, some of which aren't done yet
19:46:15 there are still a few more commits, like what to do if two diff. replicas of a container have different policies *and* objects in them
19:46:20 peluse: so the init only takes (k, m, type)… E.g., (12, 2, "rs_vand")
19:46:26 I had hoped that the ec branch would help isolate some changes, but I think the jury is out on the effectiveness of it
19:46:28 the one up there now (the big one) covers the basic plumbing in proxy and container. there's still some obj module work and replicator work as well, not nearly as big though
19:46:48 notmyname: let's judge effectiveness when we merge back into master the first time
19:46:54 torgomatic: sounds good
19:46:55 Yessir, maybe a hackathon thing
19:47:09 that would be a perfect time :-)
19:47:10 grapsus: what is your bitbucket account name?
19:47:11 agree w/that! (judge effectiveness later)
19:47:18 sweepthelegjohnn: grapsus
19:47:45 Note however that I have been rebasing the policy patch each week after notmyname merges from master so it's up to date
19:47:50 I have another question about EC, it is about server-side chunking
19:47:58 shoot
19:48:07 grapsus: invite sent
19:48:17 what's your plan for that? let's say I send a 2 GB file, will you buffer it entirely before calling EC?
19:49:02 grapsus: no. EC each chunk as it's read off the wire
19:49:36 so chunking is done server side? this is not implemented yet, is it?
19:49:45 notmyname: is there some list somewhere about subjects we want talked about at the hackathon? or do we figure it out when we get there?
19:49:46 sweepthelegjohnn: excellent, thank you! I'm looking at the API, I will send mine tomorrow, but it looks very similar
19:49:52 BTW, they fixed fedora's openoffice so I was able to read the EC PPT preso, it explained a lot
19:50:10 dfg: somewhat in person, but I'm also working on a list that I'll publish
19:50:19 I just wish you guys weren't such... well... Just export it to PDF next time or something.
19:50:22 notmyname: ok - cause I have one.
19:50:25 grapsus: please run the tests in the README and let me know if everything works and how the performance is
19:50:31 zaitcev: what EC PPT preso?
19:50:40 notmyname: Adri2000: so where will this server chunk size be determined and how?
19:50:41 dfg: cool, make a wiki page on the openstack wiki. I'll use that
19:51:15 J1-erasureCode.pptx
19:51:32 waaait a moment
19:51:40 sweepthelegjohnn: ok, I'll do that, but if you're using 128-bit registers, performance should be stellar compared to my POC with python ints
19:52:00 ok, moving on from low-level EC implementation details... :-)
19:52:02 even without that, i can get ~1 GB/s in some cases
19:52:20 what else do we need to discuss in here in the next 3 minutes?
19:52:36 that went by fast
19:52:39 is the priority list of reviews still up to date?
19:52:47 is that working?
19:53:20 portante: I believe it's up to date and works for me. I'd like other feedback too
19:54:15 I have another meeting in a few minutes and have to run. any last minute things?
19:55:05 portante: where is the priority list of reviews available?
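(Editor's note: a rough sketch of the two EC points discussed above, under assumed names: an erasure-code driver initialized with just (k, m, type), e.g. (12, 2, "rs_vand"), and server-side chunking where each chunk is encoded as it is read off the wire rather than buffering the whole object. ECDriver, encode/decode, put_object, and send_fragment are illustrative placeholders, not the actual PyECLib or Swift interfaces.)

```python
# Hypothetical sketch only; names do not correspond to real PyECLib/Swift APIs.
class ECDriver(object):
    def __init__(self, k, m, ec_type):
        # k data fragments, m parity fragments, and the scheme name,
        # matching the (k, m, type) init style mentioned in the meeting.
        self.k, self.m, self.ec_type = k, m, ec_type

    def encode(self, chunk):
        """Return k + m fragments for one chunk of object data."""
        raise NotImplementedError  # a real lib would apply e.g. Reed-Solomon

    def decode(self, fragments):
        """Rebuild the original chunk from any k surviving fragments."""
        raise NotImplementedError


driver = ECDriver(12, 2, "rs_vand")

def put_object(wsgi_input, send_fragment, chunk_size=65536):
    # Server-side chunking: encode each chunk as it arrives, so a 2 GB upload
    # is never buffered in full before erasure coding is applied.
    while True:
        chunk = wsgi_input.read(chunk_size)
        if not chunk:
            break
        for index, fragment in enumerate(driver.encode(chunk)):
            send_fragment(index, fragment)  # e.g. stream to storage node `index`
```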
19:55:12 I'm going to look at the timestamp comparison, since I screwed up the mempool so well
19:55:13 sec
19:55:15 peluse: topic in the -swift channel
19:55:20 https://wiki.openstack.org/wiki/Swift/PriorityReviews
19:55:21 https://wiki.openstack.org/wiki/Swift/PriorityReviews
19:55:26 thanks guys!
19:55:39 was the irc topic at one point
19:55:59 still is
19:56:09 of #openstack-swift, that is
19:56:29 thanks everyone for attending and participating today. we're out of time.
19:56:31 #endmeeting