21:01:37 #startmeeting swift
21:01:37 Meeting started Wed Jun 17 21:01:37 2015 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:38 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:40 The meeting name has been set to 'swift'
21:01:46 hello, everyone. who's here for the swift team meeting?
21:01:49 👈
21:01:57 yo
21:01:58 o/
21:02:00 Hi
21:02:01 🐄
21:02:02 hi
21:02:02 yup
21:02:02 hi
21:02:05 Hi.
21:02:05 o/
21:02:07 howdy
21:02:09 torgomatic: I think you spent the whole week looking for "I'm here" emojis
21:02:09 o/
21:02:35 I memorized the codepoint for my go-to emoji :3
21:02:37 notmyname: there's so many to choose from!
21:02:42 hello
21:02:50 Hi
21:02:54 MooingLemur: yeah, that one looks appropriate :-)
21:02:54 here
21:03:08 welcome, everyone. thanks for coming
21:03:13 agenda this week is at
21:03:15 #link https://wiki.openstack.org/wiki/Meetings/Swift
21:03:44 first up, general stuff
21:03:48 #topic general
21:04:07 as mentioned in -swift and on the mailing list, we've slightly changed the core team structure
21:04:18 we now have swift-core and swiftclient-core
21:04:37 swiftclient-core can merge code for python-swiftclient
21:04:45 and it's made up of "swift-core" and joeljwright
21:04:54 joeljwright: thanks for your work
21:05:06 yay joeljwright
21:05:13 :)
21:05:14 this is one thing that came out of recent conversations, especially in vancouver
21:05:31 we need to renew our focus on the swiftclient
21:06:03 o/
21:06:05 the reality is that when people write apps for openstack compute, well, they don't have to change anything. they provision a server and put their existing app on it
21:06:21 however, when they use swift, the app has to change to use the swift api
21:06:51 therefore, our client, as the official CLI/SDK for swift, becomes a huge entry point for anyone adopting swift
21:07:20 therefore, if it's bad, it's blocking swift's growth and deployments. and if it's good, it means swift goes more places
21:07:28 so we need it to be good :-)
21:07:32 well said
21:07:40 +1
21:08:02 so in an effort to renew focus on swiftclient and to continue to make it better, we have swiftclient
21:08:13 I'm hoping that this will allow patches to move more quickly
21:08:44 and I'm hoping that it will be able to grow and develop with people who are dedicated and passionate about the CLI/SDK
21:08:52 so, no pressure joeljwright :-)
21:08:58 hehehe, yeah, thanks!
21:09:02 s/swiftclient/swiftclient-core/
21:09:11 joeljwright: good luck :P
21:09:19 however, an important point is that just because there is a swiftclient-core, it does NOT mean swift-core can ignore it! ;-)
21:09:34 we're all responsible for both swift and swiftclient
21:09:37 yeah, but it just happens that way
21:09:51 zaitcev: it's natural for people to specialize
21:10:11 part of that's on me, so you'll see me spending more time highlighting swiftclient patches alongside swift-server patches
21:10:31 any questions about that? that's all I have on that topic
21:10:50 do we have a good sense of what's good and what's bad?
21:11:07 barker: good and bad about the client? or in general?
21:11:20 the client
21:11:40 barker: a lot of it seems to be historic
21:11:50 the API and CLI are a bit confusing
21:11:55 and badly documented
21:12:04 there were some good notes taken at the summit https://etherpad.openstack.org/p/python-swiftclient-liberty-discussion
21:12:05 and we have leftovers from pre-requests days etc
21:12:16 agreed - and this is great by the way
21:12:26 IIRC I have already conflicted with Joel about those details, where we have api functions A and B and someone implemented A calling B, whereas it was obvious to me that it's much nicer if A and B call _C. Something like that, about printouts and their manager.
21:12:50 And _C was set up to handle unicode already
21:13:29 zaitcev: it's always nice to just have people involved
21:13:32 zaitcev: ok, if there's a specific thing we need to address, let's do that. in general, though, it's good to challenge one another on everything. I'm happy to hear it :-)
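
For readers outside the project, a minimal sketch of what "the app has to change to use the swift api" means through the python-swiftclient SDK discussed above; the auth endpoint and credentials are placeholders, not real values:

    # minimal python-swiftclient sketch: upload one object and read it back
    from swiftclient.client import Connection

    # placeholder v1-auth endpoint and credentials (e.g. a SAIO dev box)
    conn = Connection(authurl='http://saio.example.com/auth/v1.0',
                      user='test:tester', key='testing')
    conn.put_container('photos')
    conn.put_object('photos', 'hello.txt', contents=b'hello swift')
    headers, body = conn.get_object('photos', 'hello.txt')
    print(body)  # b'hello swift'

The equivalent CLI flow is "swift upload photos hello.txt" - the same tool the meeting wants to keep improving.
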
21:14:12 eranrom: are you here?
21:14:15 hey
21:14:23 good. you're up next :-)
21:14:32 thanks
21:14:46 ok, moving on to container sync. we ran out of time in last week's meeting
21:14:50 #topic container sync
21:15:05 eranrom: this is yours. take it away. what did you want to discuss?
21:15:17 container metadata sync
21:15:32 So today, the container metadata is not being replicated
21:15:49 I have opened a bug which suggests the following:
21:16:08 Hope this is not too much content for the IRC discussion...
21:16:09 The container sync functionality does not include syncing the container's metadata.
21:16:10 Proposed changes below:
21:16:10 container info table
21:16:10 --------------------
21:16:10 Add a string field "last_metadata_sync_timestamp"
21:16:11 Sync Process
21:16:11 ------------
21:16:11 1. In ContainerSync.container_sync() attempt to sync the metadata just before the loop doing the container_sync_row()
21:16:13 2. Given:
21:16:45 eranrom: which bug on launchpad? Do I get the link?
21:16:45 (A) the metadata json kept in the container info table
21:16:48 haven't I seen a LP link?
21:16:54 (B) the last_metadata_sync_timestamp
21:17:08 link: https://bugs.launchpad.net/swift/+bug/1464022
21:17:09 Launchpad bug 1464022 in OpenStack Object Storage (swift) "Container sync does not replicate container metadata" [Undecided,New]
21:17:19 thx!
21:17:50 #link https://bugs.launchpad.net/swift/+bug/1464022
21:17:52 (for the notes)
21:18:32 eranrom: so i think this is a good idea to add to container sync, assuming we figure out which metadata to sync
21:19:00 I suggest two config options
21:19:05 eranrom: is this something that you have some code for already or will be writing soon?
21:19:23 I am halfway through
21:19:32 can post a patch in the coming days
21:19:36 great
21:20:27 The bug is in a 'new' state. Should I just post a patch?
21:20:31 yes!
21:20:34 ok np
21:20:40 eranrom: I'm pretty sure it's use-case specific - not deployer specific? I mean maybe in sysmeta space you may have some custom middleware and you want to sync the sysmeta - but for usermeta I think it should be data driven - basically a new container-level metadata key that says what metadata keys on the container to sync
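
Pulling those two threads together, a hypothetical sketch of the proposed step before the container_sync_row() loop, combining eranrom's last_metadata_sync_timestamp with clayg's opt-in key idea; the key name and put_remote_container() are illustrative guesses, not the actual patch:

    # hypothetical container-level key listing which metadata names to sync
    SYNC_KEYS = 'X-Container-Meta-Sync-Keys'

    def sync_container_metadata(broker, sync_to, last_sync_ts):
        # ContainerBroker.metadata maps each name to a (value, timestamp) pair
        metadata = broker.metadata
        wanted = metadata.get(SYNC_KEYS, ('', None))[0].split()
        headers = {}
        newest = last_sync_ts
        for name in wanted:
            value, timestamp = metadata.get(name, (None, None))
            # only push values that changed since the last metadata sync
            if value is not None and timestamp > last_sync_ts:
                headers[name] = value
                newest = max(newest, timestamp)
        if headers:
            put_remote_container(sync_to, headers)  # container POST; elided here
        return newest  # stored back as last_metadata_sync_timestamp
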
21:20:56 best to put the patch in Gerrit; patches attached to LP bugs tend to languish
21:21:41 clayg: yes this is my thinking
21:21:44 idk, i feel like there's a sussing out of the use-case here that will need to happen before a patch can really be reviewed
21:21:54 that is, have a sysmeta that defines what metadata to sync
21:22:11 I mean we can do it *on* the patch *with* the code in the commit message or w/e...
21:22:32 clayg: would you rather that eranrom add some more info as a comment in LP?
21:22:37 eranrom: idk, maybe - you'd have to apply the sysmeta to existing containers?
21:22:55 +1 for making it configurable, need to be careful not to surprise existing users who may not want their metadata sync'd
21:23:01 notmyname: maybe - i haven't read all the content that's there
21:23:09 acoles: +1
21:23:33 acoles: I'm sure there's use-cases where you absolutely want the same key to have different values on two containers that are sync partners!
21:23:40 yah
21:23:48 clayg: storage policies, encryption keys, etc
21:24:04 notmyname: you typed it faster than me
21:24:06 :)
21:24:16 acoles: really I'd like to start with the use-case of "here is an example of a metadata that almost *everyone* would want to be synced if they're syncing containers" and then try to work toward the feature from that use-case
21:24:21 eranrom: you've got some great info on a design plan in LP now, but can you add another comment from the user perspective there?
21:24:44 notmyname: doesn't our readme tell us how to do this?
21:24:45 eranrom: then, as you have code, put it in gerrit for people to look at
21:24:53 clayg: specs?
21:25:00 was thinking the same
21:25:10 notmyname: contributing.md
21:25:12 refactoring the LP bug report as a spec?
21:25:17 although don't want to 'make work'
21:25:33 acoles: right!
21:25:35 Start with the use case ... then design from the cluster operator up
21:25:54 ok will add a use case comment
21:26:02 eranrom: I think that would be helpful
21:26:06 sure
21:26:20 anything else to discuss in this meeting on this topic?
21:26:23 eranrom: thanks for bringing it up
21:26:36 sure, thanks
21:26:44 ok, moving on
21:26:52 #topic next release
21:27:19 we've had several bugfixes land, and there are still a few outstanding EC patches that need to land (more on that later)
21:27:27 so the example in the lp bug is acl's - but I don't think that's a great use-case because there are probably already containers synced between clusters today where the acl strings might not even translate from one cluster to the other
21:27:30 I'd like to see the EC patches land ASAP and then cut a release
21:27:49 mostly so we can have a better-testable version of EC for people
21:28:15 yeah, would be great to land these things sooner than later so we can be positive (those testing) that we have all the right stuff
21:28:18 notmyname: peluse is working on it!
21:28:24 ha!
21:28:34 so there's only one "big" thing that's landed since the last release
21:28:40 the new dependency on six
21:28:48 however there aren't any places it's used yet
21:29:13 if the EC stuff lands soon, then I'm inclined to revert the six dependency, release 2.3.1, then reapply the new dependency
21:29:40 however, if there's more "big" stuff that lands, then we've got a 2.4 release no matter what and the new dependency will stay in
21:30:15 notmyname: i don't think reverting six is worth the trouble
21:30:26 notmyname: people are going to eat it at some point - may as well do it now
21:30:45 true
21:30:47 notmyname: and it's way less of a pita to package six than it was to package pyeclib - so... really people testing ec should be desensitized to it anyway
21:30:53 just IMHO
21:31:21 mostly I'm mad at myself for not stating all that earlier and letting the six dependency go in before the EC patches. so it's all my fault ;-)
21:31:39 but yeah, probably not a huge deal either way
21:31:41 I'd tend to agree in principle, people can handle the extra dependency
21:31:53 hang notmyname up by his toes!
21:31:57 (subject to the consensus of everyone here)
21:32:12 notmyname: forgive yourself ;)
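
For anyone packaging that release: six is the de-facto py2/py3 compatibility shim, and a generic example of the kind of code it enables looks like the following (illustrative only - as noted above, swift doesn't actually use it anywhere yet):

    import six
    from six.moves.urllib.parse import quote  # urllib.quote on py2

    def quote_path(value):
        # text must become utf-8 bytes before percent-encoding, on both pythons
        if isinstance(value, six.text_type):
            value = value.encode('utf-8')
        return quote(value)

    print(quote_path(u'caf\u00e9/obj'))  # 'caf%C3%A9/obj'
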
21:32:17 ok, moving on then...
21:32:23 #topic hummingbird update
21:32:36 hurricanerix_: I hope you can let us know what you're seeing in go-land
21:32:46 dfg: ^^^ :)
21:33:05 hey
21:33:10 hi dfg_ :-)
21:33:25 dfg_: hurricanerix_ just punted to you about hummingbird status
21:33:27 i just joined- we're talking about what's been going on with hummingbird?
21:33:31 ok
21:33:31 yup
21:34:03 the biggest change is that the # of read Timeouts has almost gone away
21:34:20 we've also seen an increase in performance of GET calls
21:34:43 that's great
21:34:51 do you have some relative numbers you can share?
21:35:13 * notmyname is hoping for something similar to what swifterdarrell showed with the 1+ server-per-port patch
21:35:23 um- not sure about specific numbers but the # of read timeouts has decreased by like x30
21:35:35 and i think we can tune it to be even better
21:35:58 these are on production nodes
21:36:10 what's the current focus of the dev work on hummingbird? where are you making changes?
21:36:45 we've also seen a much better and much much more consistent time to first byte on object gets
21:37:06 the current focus is the object replicator daemon
21:37:25 ok
21:37:40 dfg_: at the summit scott gave us the number of GETs as 10946/s. do you have more good performance numbers?
21:38:38 but they are compatible, are they not? I mean it's possible just to remote the -server but not -replicator?
21:38:48 we have seen an increase in performance on the production nodes but it's not as big. the nodes we put them on have a lot going on right now :)
21:38:50 s/to remote/to replace/
21:39:15 dfg_: ok thanks!
21:39:22 zaitcev: yeah the go object-server works with the old replicator - or at least it did
21:39:26 zaitcev: yes they are compatible. what we have deployed is the python replicator daemon talking to the hummingbird object server
21:39:36 zaitcev: I don't think the goal is for the go replicator to work with the python object server
21:39:44 but the hummingbird replicator can't talk to the python object server
21:39:46 what we are working on is using the hummingbird replicator daemon, which uses the new SYNC calls and not rsync
21:40:19 it still uses the same disk layout / hashes.pkl and everything.
21:40:37 dfg_: it'd be *so* helpful if rick could work up an "overview" of how the sync calls work - just for some context when digging into the code
21:41:04 clayg: we can put something together- you don't have to volunteer rick :)
21:41:09 heh
21:41:17 that's what we do :)
21:41:17 hurricanerix_: dfg says you'll do it
21:41:22 lol
21:41:22 haha
21:41:36 dfg_: thanks for the update
21:41:43 yeah that was great!
21:41:49 I'm glad you're seeing good performance improvements
21:41:59 sounds cool
21:42:01 whew. look at the time....
21:42:06 anyway. the replicator daemon switch out is what i'm most interested in. i don't even care about customer requests really :p
21:42:15 * dfg_ mostly joking
21:42:22 #topic oslo.config
21:42:27 lol
21:42:27 ok, this one is a big thing
21:42:30 notmyname: and more consistent responses - sounds like a lot of the i/o isolation issues are negated when you move away from the eventlet hub
21:42:41 clayg: shocking!! ;-)
21:42:59 ya- the consistency thing has been huge.
21:43:01 ok, ho has been working on getting keystone's policy.json support into swift
21:43:11 notmyname: ho: that'd be great!
21:43:22 and to do that, the keystone middleware has to bring oslo.config into swift
21:43:32 clayg: thanks!
21:43:36 #link https://review.openstack.org/#/c/192094/
21:43:42 and more importantly..
21:43:48 #link http://paste.openstack.org/show/297679/
21:43:59 that second one is _why_ oslo.config is needed currently
21:44:15 notmyname: darrell almost wrote common.utils.get_int_from_conf_value - so.... I think getting oslo.config to work with our conf.d paste.ini's would be a valuable thing to do
21:44:26 however, that's a Big Deal since it's a huge addition to swift with a long-term impact
21:44:31 I really really don't like oslo.config
21:44:43 torgomatic: whahhhhhaaaaaaa?!
21:45:02 let's stuff everything into a giant mutable global variable and reference it *everywhere*!
21:45:03 regardless of oslo.config, I really really don't like having to have multiple config systems that we have to support forever
21:45:05 what can go wrong?
21:45:45 summary is, policy.json == good, but oslo.config == bad. so we have to resolve something somewhere
21:45:53 right now, if I want an ObjectController with a certain config, I make the config ("conf = blah()") and pass it to the ObjectController ("ObjectController(conf)")
21:46:17 a while back we talked about oslo.config. those notes are on https://etherpad.openstack.org/p/swift_gap_scratchpad
21:46:31 to do that with oslo.config means that I take my config, stuff it in a global, then instantiate an ObjectController() that looks at my global
21:46:37 ho: so is the motivator for oslo.config to get oslo.policy or is it the "project option" patch we have been discussing?
21:46:45 I think global config is fine - arguably better than the crazy shit we do in swift.conf/constraints/utils/etc
21:46:56 so here's the good thing about what ho has done: no new config files and only middleware that needs oslo.config uses it
21:47:17 acoles: my main motivation is oslo.policy
21:47:23 ho: ok thx
21:47:30 torgomatic: can't we do it like we do with the loggers? self.logger = logger or get_logger()
21:47:40 self.config = config or GLOBAL_CONFIG
21:47:56 then everywhere we pass in {} in a test, we just make a local instance of the config and let the test use that
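
As a concrete sketch of that suggestion - GLOBAL_CONF is a hypothetical module-level default, mirroring how get_logger() provides a fallback logger today, not existing swift code:

    from swift.common.utils import get_logger

    GLOBAL_CONF = {}  # filled in once at startup from the ini files

    class ObjectController(object):
        def __init__(self, conf=None, logger=None):
            # fall back to the process-wide config only when none is passed,
            # so a test can still hand in a plain (even empty) dict
            self.conf = GLOBAL_CONF if conf is None else conf
            self.logger = logger or get_logger(self.conf, log_route='object')

    # a test keeps everything local and never touches the global:
    controller = ObjectController(conf={'workers': '1'})
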
21:48:36 i don't think oslo.config is bad - our hodge podge of shit is bad - the only reason we don't use something better is because our hodge podge of shit *works* and who gives a flying flip about how you do configs
21:49:20 clayg: well, more specifically, we've got a _lot_ of systems out there already doing it one way, and we'd likely have to support current + new for any value of new
21:49:23 the main blocker right now is all the things our existing stuff does that oslo.config isn't going to support
21:49:23 the biggest -ve i remember about oslo.config is that you cannot do instance specific config, only class specific
21:49:30 clayg: I guess, it just seems like extra work so we can make the same mistakes as Nova did
21:49:31 unless they fixed that
21:49:50 the dynamic section name and prefixed config var subsection stuff is going to require some crazy hacks
21:49:55 not to mention conf.d for paste
21:50:01 and ain't nobody got time for that
21:50:23 clayg: right, it's not like the other projects *just* use oslo stuff; they've all got paste lurking in the back somewhere
21:50:31 acoles: i don't even know what instance specific vs class specific means
21:50:51 clayg: every ProxyLoggingMiddleware vs an instance of that class
21:50:56 it means if you have two instances of say the same middleware class then they both get the same config.
21:51:05 torgomatic: yeah but everyone has built up their own abstractions away-from-and-over paste
21:51:07 clayg: ie you have one config per middleware, not 2. yeah. that ^
21:51:11 vs separate filter sections in paste ini
21:51:27 I think most pipelines are like "choose from a menu by selecting one of the three preconfigured pipelines"
21:52:04 ok, so what do we do to make progress?
21:52:11 acoles: what the fuck does that even mean?
21:52:24 acoles: I think we can use oslo.config without having to pick up all the fucked up shit they did to paste
21:52:29 we have our own fucked up shit we did to paste
21:52:45 what do we need to see in oslo.config? what do we need to see for ho's patch to get policy.json
21:52:50 clayg: right, point is that adding oslo.config doesn't take us from "our stuff is clunky and sucks" to "our stuff is actually fairly okay"; it's more like "our stuff is clunky and sucks and also has oslo.config in it"
21:52:55 ... but i don't really care - I don't want to discourage someone who wants to try and make oslo.config work for swift
21:53:08 it's gunna be a bunch of work tho
21:53:25 torgomatic: WELL PUT!
21:53:41 well, is it possible to only have oslo config in keystone middleware?
21:53:43 I'm not sure whether that's reasonable or realistic
21:54:02 torgomatic: but the "also has oslo.config in it" might be useful for a few subclasses of annoying shit our config system doesn't currently do well
21:54:05 So isn't ho just proposing adding a tiny bit of oslo.config where it's needed, not a wholesale replacement?
21:54:07 I mean, we want oslo.policy (I guess) to work in Swift; how hard is it to alter oslo.policy to *not* require oslo.config? Like you said, my_config = passed_in_config or CONF
21:54:21 ho: what do you want to see to move forward?
21:54:46 er, like clayg said
21:54:46 my idea is "hybrid(partial!?)" oslo config support in swift. I would like to keep paste ini deployment but want to use oslo libraries. it's reality i think :-)
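
A rough sketch of what such a hybrid could look like for a single piece of middleware - PolicyMiddleware and the option names are made up for illustration, and this is not ho's actual patch (see the review linked above):

    # the filter still reads its settings from the paste ini, and only this
    # middleware feeds them into a private oslo.config object for oslo.policy
    from oslo_config import cfg
    from oslo_policy import policy

    def filter_factory(global_conf, **local_conf):
        conf = dict(global_conf, **local_conf)

        # a private ConfigOpts instead of the process-wide cfg.CONF global
        oslo_conf = cfg.ConfigOpts()
        oslo_conf(args=[], default_config_files=[])  # no CLI args, no files

        # Enforcer registers the oslo_policy options on the conf it is given
        enforcer = policy.Enforcer(
            oslo_conf, policy_file=conf.get('policy_file', 'policy.json'))

        def policy_filter(app):
            return PolicyMiddleware(app, conf, enforcer)  # hypothetical class
        return policy_filter

The middleware would then check requests with enforcer.enforce(rule, target, creds), while the rest of swift keeps its plain conf dicts.
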
21:54:56 torgomatic: from a brief look at the oslo.policy code, that might be possible, but not simple
21:54:57 notmyname: torgomatic: I like the stated goal of "our stuff is clunky and sucks and also has oslo.config in it"
21:55:36 ho: sounds perfect! what can we do to help?
21:55:52 see the links above
21:56:21 notmyname: well i read the paste - and it was like "we need oslo.config"
21:56:43 notmyname: the other one is a review that only changes docs and the example config?
21:56:53 https://review.openstack.org/#/c/149930/ this one is the patch. the other was the spec
21:56:54 notmyname: or the spec?
21:57:02 clayg: https://review.openstack.org/#/c/192094/ spec please.
21:58:07 bah! we've only got through half of our agenda and time's pretty much up
21:58:25 it means we have a lot of interesting stuff going on :)
21:58:32 ok, to finish up oslo config for this meeting..
21:58:47 it's a Big Deal and needs a lot of people to support it
21:58:52 both the patch and the spec
21:58:57 ho: i like the intent, i have some questions, will comment on spec. thx
21:58:59 the spec first, since it lays out the idea
21:59:23 acoles: thanks!
21:59:39 #topic other
21:59:49 * notmyname needs to trim the agenda for next week. ;-)
22:00:03 check out the EC patches. bugs are listed at https://bugs.launchpad.net/swift/+bugs?field.tag=ec
22:00:21 https://review.openstack.org/#/c/191970/ and https://review.openstack.org/#/c/184189/ need review
22:00:29 thanks everyone for coming
22:00:41 eranrom: dfg_: ho: thanks for your topics
22:00:49 #endmeeting