21:01:37 <notmyname> #startmeeting swift
21:01:37 <openstack> Meeting started Wed Jun 17 21:01:37 2015 UTC and is due to finish in 60 minutes.  The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:40 <openstack> The meeting name has been set to 'swift'
21:01:46 <notmyname> hello, everyone. who's here for the swift team meeting?
21:01:49 <torgomatic> 👈
21:01:57 <jrichli> yo
21:01:58 <hurricanerix_> o/
21:02:00 <AndroUser> Hi
21:02:01 <MooingLemur> 🐄
21:02:02 <kota_> hi
21:02:02 <joeljwright> yup
21:02:02 <ho> hi
21:02:05 <minwoob_> Hi.
21:02:05 <mattoliverau> o/
21:02:07 <peluse> howdy
21:02:09 <notmyname> torgomatic: I think you spend the whole week looking for "I'm here" emojis
21:02:09 <blmartin_> o/
21:02:35 <MooingLemur> I memorized the codepoint for my go-to emoji :3
21:02:37 <torgomatic> notmyname: there's so many to choose from!
21:02:42 <tdasilva> hello
21:02:50 <AndroUser> Hi
21:02:54 <notmyname> MooingLemur: yeah, that one looks appropriate :-)
21:02:54 <acoles> here
21:03:08 <notmyname> welcome, everyone. thanks for coming
21:03:13 <notmyname> agenda this week is at
21:03:15 <notmyname> #link https://wiki.openstack.org/wiki/Meetings/Swift
21:03:44 <notmyname> first up, general stuff
21:03:48 <notmyname> #topic general
21:04:07 <notmyname> as mentioned in -swift and on the mailing list, we've slightly changed the core team structure
21:04:18 <notmyname> we now have swift-core and swiftclient-core
21:04:37 <notmyname> swiftclient-core can merge code for python-swiftclient
21:04:45 <notmyname> and it's made up of "swift-core" and joeljwright
21:04:54 <notmyname> joeljwright: thanks for your work
21:05:06 <mattoliverau> yay joeljwright
21:05:13 <joeljwright> :)
21:05:14 <notmyname> this is one thing that came out of recent conversations, especially in vancouver
21:05:31 <notmyname> we need to renew our focus on the swiftclient
21:06:03 <dmorita> o/
21:06:05 <notmyname> the reality is that when people write apps for openstack compute, well, they don't have to change anything. they provision a server and put their existing app on it
21:06:21 <notmyname> however, when they use swift, the app has to change to use the swift api
21:06:51 <notmyname> therefore, our client, as the official CLI/SDK for swift, becomes a huge entry point for anyone adopting swift
21:07:20 <notmyname> therefore, if it's bad, it's blocking swift's growth and deployments. and if it's good, it means swift goes more places
21:07:28 <notmyname> so we need it to be good :-)
21:07:32 <mattoliverau> well said
21:07:40 <joeljwright> +1
21:08:02 <notmyname> so in an effort to renew focus on swiftclient and to continue to make it better, we have swiftclient
21:08:13 <notmyname> I'm hoping that this will allow patches to move more quickly
21:08:44 <notmyname> and I'm hoping that it will be able to grow and develop with people who are dedicated and passionate about the CLI/SDK
21:08:52 <notmyname> so, no pressure joeljwright :-)
21:08:58 <joeljwright> hehehe, yeah, thanks!
21:09:02 <mattoliverau> s/swiftclient/swiftclient-core/
21:09:11 <mattoliverau> joeljwright: good luck :P
21:09:19 <notmyname> however, an important point is that just because there is a swiftclient, it does NOT mean swift-core can ignore it! ;-)
21:09:34 <notmyname> we're all responsible for both swift and swiftclient
21:09:37 <zaitcev> yeah, but it just happens that way
21:09:51 <notmyname> zaitcev: it's natural for people to specialize
21:10:11 <notmyname> part of that's on me, so you'll see me spending more time highlighting swiftclient patches alongside swift-server patches
21:10:31 <notmyname> any questions about that? that's all I have on that topic
21:10:50 <barker> do we have a good sense of what's good and what's bad?
21:11:07 <notmyname> barker: good and bad about the client? or in general?
21:11:20 <barker> the client
21:11:40 <joeljwright> barker: a lot of it seems to be historic
21:11:50 <joeljwright> the API and CLI are a bit confusing
21:11:55 <joeljwright> and badly documented
21:12:04 <notmyname> there were some good notes taken at the summit https://etherpad.openstack.org/p/python-swiftclient-liberty-discussion
21:12:05 <joeljwright> and we have leftovers from pre-requests days etc
21:12:16 <barker> agreed - and this is great by the way
21:12:26 <zaitcev> IIRC I have already conflicted with Joel about those details, where we have api functions A and B and someone implemented A calling B, whereas it was obvious to me that it's much nicer if A and B call _C. Something like that, about printouts and their manager.
21:12:50 <zaitcev> And _C was set up to handle unicode already
21:13:29 <joeljwright> zaitcev: it's always nice to just have people involved
21:13:32 <notmyname> zaitcev: ok, if there's a specific thing we need to address, let's do that. in general, though, it's good to challenge one another on everything. I'm happy to hear it :-)
21:14:12 <notmyname> eranrom: are you here?
21:14:15 <eranrom> hey
21:14:23 <notmyname> good. you're up next :-)
21:14:32 <eranrom> thanks
21:14:46 <notmyname> ok, moving on to container sync. we ran out of time in last week's meeting
21:14:50 <notmyname> #topic container sync
21:15:05 <notmyname> eranrom: this is yours. take it away. what did you want to discuss?
21:15:17 <eranrom> container metadata sync
21:15:32 <eranrom> So today, the container metadata is not being replicated
21:15:49 <eranrom> I have opened a bug which suggests the following:
21:16:08 <eranrom> Hope this is not too much content for the IRC discussion...
21:16:09 <eranrom> The container sync functionality does not include syncing the container's metadata.
21:16:10 <eranrom> Proposed changes below:
21:16:10 <eranrom> container info table
21:16:10 <eranrom> --------------------
21:16:10 <eranrom> Add a string field 'last_metadata_sync_timestamp'
21:16:11 <eranrom> Sync Process
21:16:11 <eranrom> ------------
21:16:11 <eranrom> 1. In ContainerSync.container_sync() attempt to sync the metadata
21:16:12 <eranrom> just before the loop doing the container_sync_row()
21:16:13 <eranrom> 2. Given:
21:16:45 <kota_> eranrom: which bug on launchpad? Do I get the link?
21:16:45 <eranrom> (A) the metadata json kept in the container info table
21:16:48 <notmyname> haven't I seen a LP link?
21:16:54 <eranrom> (B) the last_metadata_sync_timestamp
21:17:08 <eranrom> link: https://bugs.launchpad.net/swift/+bug/1464022
21:17:09 <openstack> Launchpad bug 1464022 in OpenStack Object Storage (swift) "Container sync does not replicate container metadata" [Undecided,New]
21:17:19 <kota_> thx!
21:17:50 <notmyname> #link https://bugs.launchpad.net/swift/+bug/1464022
21:17:52 <notmyname> (for the notes)
21:18:32 <notmyname> eranrom: so i think this is a good idea to add to container sync, assuming we figure out which metadata to sync
21:19:00 <eranrom> I suggest two config options
21:19:05 <notmyname> eranrom: is this something that you have some code for already or will be writing soon?
21:19:23 <eranrom> I am half way through
21:19:32 <eranrom> can post a patch in the coming days
21:19:36 <notmyname> great
21:20:27 <eranrom> The bug is in a 'new' state. Should I just post a patch?
21:20:31 <notmyname> yes!
21:20:34 <eranrom> ok np
21:20:40 <clayg> eranrom: I'm pretty sure it's use-case specific - not deployer specific?  I mean maybe in sysmeta space you may have some custom middleware and you want to sync the sysmeta - but for usermeta I think it should be data driven - basically a new container level metadata key that says what metadata keys on the container to sync
21:20:56 <torgomatic> best to put the patch in Gerrit; patches attached to LP bugs tend to languish
21:21:41 <eranrom> clayg: yes this is my thinking
21:21:44 <clayg> idk, i feel like there's a sussing out of the use-case here that will need to happen before a patch can really be reviewed
21:21:54 <eranrom> that is have a sysmeta that defines what metadata to sync
21:22:11 <clayg> I mean we can do it *on* the patch *with* the code in the commit message or w/e...
21:22:32 <notmyname> clayg: would you rather that eranrom add some more info as a comment in LP?
21:22:37 <clayg> eranrom: idk, maybe - you'd have to apply the sysmeta to existing containers?
21:22:55 <acoles> +1 for making it configurable, need to be careful not to surprise existing users who may not want their metadata sync'd
21:23:01 <clayg> notmyname: maybe - i haven't read all the content that's there
21:23:09 <clayg> acoles: +1
21:23:33 <clayg> acoles: I'm sure there's a use-case where you absolutely want the same key to have different values on two containers that are sync partners!
21:23:40 <acoles> yah
21:23:48 <notmyname> clayg: storage policies, encryption keys, etc
21:24:04 <acoles> notmyname: you typed it faster than me
21:24:06 <acoles> :)
21:24:16 <clayg> acoles: really I'd like to start with the use-case of "here is an example of a metadata that almost *everyone* would want to be synced if they're syncing containers" and then try to work toward the feature from that use-case
21:24:21 <notmyname> eranrom: you've got some great info on a design plan in LP now, but can you add another comment from the user perspective there?
21:24:44 <clayg> notmyname: doesn't our readme tell us how to do this?
21:24:45 <notmyname> eranrom: then, as you have code, put it in gerrit for people to look at
21:24:53 <notmyname> clayg: specs?
21:25:00 <acoles> was thinking the same
21:25:10 <clayg> notmyname: contributing.md
21:25:12 <notmyname> refactoring the LP bug report as a spec?
21:25:17 <acoles> although don't want to 'make work'
21:25:33 <notmyname> acoles: right!
21:25:35 <clayg> Start with the use case ... then design from the cluster operator up
21:25:54 <eranrom> ok will add a use case comment
21:26:02 <notmyname> eranrom: I think that would be helpful
21:26:06 <eranrom> sure
21:26:20 <notmyname> anything else to discuss in this meeting on this topic?
21:26:23 <notmyname> eranrom: thanks for bringing it up
21:26:36 <eranrom> sure, thanks
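(A minimal sketch of the metadata selection step proposed above, assuming container metadata is stored as key -> (value, timestamp) pairs as in the container info table; the function name and the whitelist argument are illustrative only, not existing Swift code.)

    # Illustrative sketch of the proposed step in ContainerSync.container_sync(),
    # run just before the container_sync_row() loop: pick out the container
    # metadata items that are newer than the last metadata sync point,
    # optionally restricted to a sysmeta-defined list of keys to sync.
    def metadata_to_sync(metadata, last_metadata_sync_timestamp, allowed_keys=None):
        newer = {}
        for key, (value, timestamp) in metadata.items():
            if allowed_keys is not None and key not in allowed_keys:
                continue
            if float(timestamp) > float(last_metadata_sync_timestamp):
                newer[key] = (value, timestamp)
        return newer

    # Example: only the item stamped after the last sync point is returned;
    # the caller would PUT these to the peer container and then advance
    # last_metadata_sync_timestamp.
    print(metadata_to_sync(
        {'X-Container-Meta-Color': ('blue', '1434500000.00000'),
         'X-Container-Meta-Old': ('stale', '1430000000.00000')},
        last_metadata_sync_timestamp='1434000000.00000'))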
21:26:44 <notmyname> ok, moving on
21:26:52 <notmyname> #topic next release
21:27:19 <notmyname> we've had several bugfixes land, and there are still a few outstanding EC patches that need to land (more on that later)
21:27:27 <clayg> so the example in the lp bug is acl's - but I don't think that's a great use-case because there is probably already today containers synced between clusters where the acl strings might not even translate from one cluster to the other
21:27:30 <notmyname> I'd like to see the EC patches land ASAP and then cut a release
21:27:49 <notmyname> mostly so we can have a better-testable version of EC for people
21:28:15 <peluse> yeah, would be great to land these things sooner rather than later so those testing can be positive that we have all the right stuff
21:28:18 <clayg> notmyname: peluse is working on it!
21:28:24 <peluse> ha!
21:28:34 <notmyname> so there's only one "big" thing that's landed since the last release
21:28:40 <notmyname> the new dependency on six
21:28:48 <notmyname> however there aren't any places it's used yet
21:29:13 <notmyname> if the EC stuff lands soon, then I'm inclined to revert the six dependency, release 2.3.1, then reapply the new dependency
21:29:40 <notmyname> however, if there's more "big" stuff that lands, then we've got a 2.4 release no matter what and the new dependency will stay in
21:30:15 <clayg> notmyname: i don't think reverting six is worth the trouble
21:30:26 <clayg> notmyname: people are going to eat it at some point - may as well do it now
21:30:45 <notmyname> true
21:30:47 <clayg> notmyname: and it's way less of a pita to package six than it was to package pyeclib - so... really people testing ec should be desensitized to it anyway
21:30:53 <clayg> just IMHO
21:31:21 <notmyname> mostly I'm mad at myself for not stating all that earlier and letting the six dependency go in before the EC patches. so it's all my fault ;-)
21:31:39 <notmyname> but yeah, probably not a huge deal either way
21:31:41 <MooingLemur> I'd tend to agree in principle, people can handle the extra dependency
21:31:53 <clayg> hang notmyname up by his toes!
21:31:57 <notmyname> (subject to the consensus of everyone here)
21:32:12 <acoles> notmyname: forgive yourself ;)
21:32:17 <notmyname> ok, moving on then...
21:32:23 <notmyname> #topic hummingbird update
21:32:36 <notmyname> hurricanerix_: I hope you can let us know what you're seeing in go-land
21:32:46 <hurricanerix_> dfg: ^^^ :)
21:33:05 <dfg_> hey
21:33:10 <notmyname> hi dfg_ :-)
21:33:25 <notmyname> dfg_: hurricanerix_ just punted to you about hummingbird status
21:33:27 <dfg_> i just joined- we're talking about what's been going on with hummingbird?
21:33:31 <dfg_> ok
21:33:31 <notmyname> yup
21:34:03 <dfg_> the biggest change is that the # of read Timeouts has almost gone away
21:34:20 <dfg_> we've also seen an increase in performance of GET calls
21:34:43 <notmyname> that's great
21:34:51 <notmyname> do you have some relative numbers you can share?
21:35:13 * notmyname is hoping for something similar to what swifterdarrell showed with the 1+ server-per-port patch
21:35:23 <dfg_> um- not sure about specific numbers but the # of read timeouts has decreased by like 30x
21:35:35 <dfg_> and i think we can tune it to be even better
21:35:58 <dfg_> these are on production nodes
21:36:10 <notmyname> what's the current focus of the dev work on hummingbird? where are you making changes?
21:36:45 <dfg_> we've also seen a much better and much much more consistent time to first byte on object gets
21:37:06 <dfg_> the current focus is the object replicator daemon
21:37:25 <notmyname> ok
21:37:40 <ho> dfg_: at the summit scott gave us a GET rate of 10946/s. do you have even better performance now?
21:38:38 <zaitcev> but they are compatible, are they not? I mean it's possible just to remote the -server but not -replicator?
21:38:48 <dfg_> we have seen an increase in performance on the production nodes but it's not as big. the nodes we put them on have a lot going on right now :)
21:38:50 <zaitcev> s/to remote/to replace/
21:39:15 <ho> dfg_: ok thanks!
21:39:22 <clayg> zaitcev: yeah the go object-server works with the old replicator - or at least it did
21:39:26 <dfg_> zaitcev: yes they are compatible. what we have deployed is the python replicator daemon talking to the hummingbird object server
21:39:36 <clayg> zaitcev: I don't think the goal is for the go replicator to work with the python object server
21:39:44 <glange> but the hummingbird replicator can't talk to the python object server
21:39:46 <dfg_> what we are working on is using the hummingbird replicator daemon. which uses the new SYNC calls and not rsync
21:40:19 <dfg_> it still uses the same disk layout / hashes.pkl and everything.
21:40:37 <clayg> dfg_: it'd be *so* helpful if rick could work up an "overview" of how the sync calls work - just for some context when digging into the code
21:41:04 <dfg_> clayg: we can put something together- you don't have to volunteer rick :)
21:41:09 <notmyname> heh
21:41:17 <dfg_> thats what we do :)
21:41:17 <clayg> hurricanerix_: dfg says you'll do it
21:41:22 <hurricanerix_> lol
21:41:22 <dfg_> haha
21:41:36 <notmyname> dfg_: thanks for the update
21:41:43 <clayg> yeah that was great!
21:41:49 <notmyname> I'm glad you're seeing good performance improvements
21:41:59 <mattoliverau> sounds cool
21:42:01 <notmyname> whew. look at the time....
21:42:06 <dfg_> anyway. the replicator daemon switch out is what i'm most interested in. i don't even care about customer requests really :p
21:42:15 * dfg_ mostly joking
21:42:22 <notmyname> #topic oslo.config
21:42:27 <mattoliverau> lol
21:42:27 <notmyname> ok, this one is a big thing
21:42:30 <clayg> notmyname: and more consistent responses - sounds like a lot of the i/o isolation issues are negated when you move away from the eventlet hub
21:42:41 <notmyname> clayg: shocking!! ;-)
21:42:59 <dfg_> ya- the consistency thing has been huge.
21:43:01 <notmyname> ok, ho has been working on getting keystone's policy.json support into swift
21:43:11 <clayg> notmyname: ho: that'd be great!
21:43:22 <notmyname> and to do that, keystone has to bring in oslo.config
21:43:32 <ho> clayg: thanks!
21:43:36 <notmyname> #link https://review.openstack.org/#/c/192094/
21:43:42 <notmyname> and more importantly..
21:43:48 <notmyname> #link http://paste.openstack.org/show/297679/
21:43:59 <notmyname> that second one is _why_ oslo.config is needed currently
21:44:15 <clayg> notmyname: darrell almost wrote common.utils.get_int_from_conf_value - so.... I think getting oslo.config to work with our conf.d paste.ini's would be a valuable thing to do
21:44:26 <notmyname> however, that's a Big Deal since it's a huge addition to swift with a long-term impact
21:44:31 <torgomatic> I really really don't like oslo.config
21:44:43 <clayg> torgomatic: whahhhhhaaaaaaa?!
21:45:02 <torgomatic> let's stuff everything into a giant mutable global variable and reference it *everywhere*!
21:45:03 <notmyname> regardless of oslo.config, I really really don't like having to have multiple config systems that we have to support forever
21:45:05 <torgomatic> what can go wrong?
21:45:45 <notmyname> summary is, policy.json == good, but oslo.config == bad. so we have to resolve something somewhere
21:45:53 <torgomatic> right now, if I want an ObjectController with a certain config, I make the config ("conf = blah()") and pass it to the ObjectController ("ObjectController(conf)")
21:46:17 <notmyname> a while back we talked about oslo.config. those notes are on https://etherpad.openstack.org/p/swift_gap_scratchpad
21:46:31 <torgomatic> to do that with oslo.config means that I take my config, stuff it in a global, then instantiate an ObjectController() that looks at my global
21:46:37 <acoles> ho: so is the motivator for oslo.config to get oslo.policy or is it the "project option" patch we have been discussing?
21:46:45 <clayg> I think global config is fine - arguably better than the crazy shit we do in swift.conf/constraints/utils/etc
21:46:56 <notmyname> so here's the good thing about what ho has done: no new config files and only middleware that needs oslo.config uses it
21:47:17 <ho> acoles: my main motivation is oslo.policy
21:47:23 <acoles> ho: ok thx
21:47:30 <clayg> torgomatic: can't we do it like we do with the loggers?  self.logger = logger or get_logger()
21:47:40 <clayg> self.config = config or GLOBAL_CONFIG
21:47:56 <clayg> then everywhere we pass in {} in a test, we just make local instance of the config and let the test use that
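(A rough sketch of the pattern clayg describes, assuming a module-level GLOBAL_CONFIG that gets populated once at startup; the class and attribute names are illustrative, not existing Swift code.)

    # Fall back to a process-wide config only when none is passed in, so tests
    # can still hand each controller its own dict, mirroring the existing
    # "self.logger = logger or get_logger()" idiom.
    GLOBAL_CONFIG = {}  # would be filled once at startup, e.g. by oslo.config

    class ObjectController(object):
        def __init__(self, conf=None):
            self.conf = conf if conf is not None else GLOBAL_CONFIG
            self.workers = int(self.conf.get('workers', 1))

    isolated = ObjectController(conf={'workers': 4})  # a test passes its own dict
    default = ObjectController()                      # the daemon uses the global
    print(isolated.workers, default.workers)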
21:48:36 <clayg> i don't think oslo.config is bad - our hodge podge of shit is bad - the only reason we don't use something better is because our hodge podge of shit *works* and who gives a flying flip about how you do configs
21:49:20 <notmyname> clayg: well, more specifically, we've got a _lot_ of systems out there already doing it one way, and we'd likely have to support current + new for any value of new
21:49:23 <clayg> the main blocker right now is all the things our existing stuff does that oslo.config isn't going to support
21:49:23 <acoles> the biggest -ve i remember about oslo.config is that you cannot do instance specific config, only class specific
21:49:30 <torgomatic> clayg: I guess, it just seems like extra work so we can make the same mistakes as Nova did
21:49:31 <acoles> unless they fixed that
21:49:50 <clayg> the dynamic section name and prefixed config var subsection stuff is going to require some crazy hacks
21:49:55 <clayg> not to mention conf.d for paste
21:50:01 <clayg> and ain't nobody got time for that
21:50:23 <torgomatic> clayg: right, it's not like the other projects *just* use oslo stuff; they've all got paste lurking in the back somewhere
21:50:31 <clayg> acoles: i don't even know whot instance specific vs class specific means
21:50:51 <notmyname> clayg: every ProxyLoggingMiddleware vs an instance of that class
21:50:56 <acoles> it means if you have two instances of say the same middleware class then they both get the same config.
21:51:05 <clayg> torgomatic: yeah but everyone has built up their own abstractions away-from-and-over paste
21:51:07 <notmyname> clayg: ie you have one config per middleware, not 2. yeah. that ^
21:51:11 <acoles> vs separate filter sections in paste ini
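(An illustration of the distinction acoles is drawing, with made-up names: a paste-style filter factory gives every filter section its own local_conf, so two instances of the same middleware class can be configured differently, whereas a single process-wide CONF object would give them identical settings.)

    # Paste-deploy style: each filter section produces its own local_conf.
    def filter_factory(global_conf, **local_conf):
        def make_filter(app):
            return LoggingMiddleware(app, local_conf)
        return make_filter

    class LoggingMiddleware(object):
        def __init__(self, app, conf):
            self.app = app
            self.access_log_name = conf.get('access_log_name', 'swift')

    # Two pipeline placements of the same class, configured differently; with
    # one global CONF (the oslo.config model) both would be forced to share a
    # single access_log_name.
    first = filter_factory({}, access_log_name='proxy-access')(None)
    second = filter_factory({}, access_log_name='mirror-access')(None)
    print(first.access_log_name, second.access_log_name)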
21:51:27 <clayg> I think most pipelines are like "choose from a menu by selecting one of the three preconfigured pipelines"
21:52:04 <notmyname> ok, so what do we do to make progress?
21:52:11 <clayg> acoles: what the fuck does that even mean?
21:52:24 <clayg> acoles: I think we can use oslo.config without having to pickup all the fucked up shit they did to paste
21:52:29 <clayg> we have our own fucked up shit we did to paste
21:52:45 <notmyname> what do we need to see in oslo.config? what do we need to see for ho's patch to get policy.json
21:52:45 <torgomatic> clayg: right, point is that adding oslo.config doesn't take us from "our stuff is clunky and sucks" to "our stuff is actually fairly okay"; it's more like "our stuff is clunky and sucks and also has oslo.config in it"
21:52:50 <clayg> ... but i don't really care - I don't want to discourage someone who wants to try and make oslo.config work for swift
21:52:55 <clayg> it's gunna be a bunch of work tho
21:53:08 <clayg> torgomatic: WELL PUT!
21:53:25 <notmyname> well, is it possible to only have oslo config in keystone middleware?
21:53:41 <notmyname> I'm not sure that's reasonable or realistic or not
21:53:43 <clayg> torgomatic: but the "also has olso.config" in it might be useful for a few subclasses of annoying shit our config system doesn't currently do well
21:54:02 <acoles> So isn't ho just proposing adding a tiny bit of oslo.config where it's needed, not a wholesale replacement?
21:54:05 <torgomatic> I mean, we want oslo.policy (I guess) to work in Swift; how hard is it to alter oslo.policy to *not* require oslo.config? Like you said, my_config = passed_in_config or CONF
21:54:07 <notmyname> ho: what do you want to see to move forward?
21:54:21 <torgomatic> er, like clayg said
21:54:46 <ho> my idea is "hybrid(partial!?)" oslo config support in swift. I would like to keep the paste ini deployment but want to use the oslo libraries. it's realistic i think :-)
21:54:56 <notmyname> torgomatic: from a brief look at the oslo.policy code, that might be possible, but not simple
21:54:57 <clayg> notmyname: torgomatic: I like the stated goal of "our stuff is clunky and sucks and also has oslo.config in it"
21:55:36 <clayg> ho: sounds perfect!  what can we do to help?
21:55:52 <notmyname> see the links above
21:56:21 <clayg> notmyname: well i read the paste - and it was like "we need oslo.config"
21:56:43 <clayg> notmyname: the other one is a review that only changes doc and example config?
21:56:53 <notmyname> https://review.openstack.org/#/c/149930/ this one is the patch. the other was the spec
21:56:54 <clayg> notmyname: or the spec?
21:57:02 <ho> clayg: https://review.openstack.org/#/c/192094/ spec please.
21:58:07 <notmyname> bah! we've got through half of our agenda and time's pretty much up
21:58:25 <mattoliverau> it means we have a lot of interesting stuff on :)
21:58:32 <notmyname> ok, to finish up oslo config for this meeting..
21:58:47 <notmyname> it's a Big Deal and needs a lot of people to support it
21:58:52 <notmyname> both the patch and the spec
21:58:57 <acoles> ho: i like the intent, i have some questions, will comment on spec. thx
21:58:59 <notmyname> the spec first, since it lays out the idea
21:59:23 <ho> acoles: thanks!
21:59:39 <notmyname> #topic other
21:59:49 * notmyname needs to trim the agenda for next week. ;-)
22:00:03 <notmyname> check out the EC patches. bugs are listed at https://bugs.launchpad.net/swift/+bugs?field.tag=ec
22:00:21 <notmyname> https://review.openstack.org/#/c/191970/ and https://review.openstack.org/#/c/184189/ need reviews
22:00:29 <notmyname> thanks everyone for coming
22:00:41 <notmyname> eranrom: dfg_: ho: thanks for your topics
22:00:49 <notmyname> #endmeeting