19:00:30 <notmyname> #startmeeting swift
19:00:31 <openstack> Meeting started Wed Sep 24 19:00:30 2014 UTC and is due to finish in 60 minutes.  The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:34 <openstack> The meeting name has been set to 'swift'
19:00:42 <notmyname> hello, world. who's here for the swift meeting?
19:00:48 <zaitcev> o7
19:00:50 <cschwede> o/
19:00:56 <tdasilva> hello
19:00:56 <cutforth> yo
19:01:03 <torgomatic> <-
19:01:11 <hurricanerix> hi
19:01:29 <notmyname> #link https://wiki.openstack.org/wiki/Meetings/Swift
19:01:50 <notmyname> this week we'll talk about stuff for the juno release mostly, I think
19:01:59 <notmyname> #topic general
19:02:01 <mattoliverau> o/
19:02:11 <notmyname> the hackathon in the Boston area is next week
19:02:21 <notmyname> so many of us will see each other in-person there
19:02:30 * cutforth is bummed he cannot make it to the hackathon
19:02:36 <notmyname> and therefore I propose we have no online meeting next week
19:02:50 <notmyname> cutforth: too bad
19:03:19 <cutforth> notmyname: thanks
19:03:25 <notmyname> like at previous ones, next week's schedule will be determined when we're there
19:03:48 <notmyname> I'm looking forward to it and to seeing many of you again in person!
19:04:01 <notmyname> any questions about it (that I can punt to tdasilva)?
19:04:29 <notmyname> next general thing...
19:04:39 <notmyname> go update bash on your servers. do it today
19:04:41 <tdasilva> just a quick note about the hackathon: red hat is taking us to Kimball Farm on Tuesday night for a fun night and dinner, so try not to plan anything for that night
19:04:41 * acoles joins meeting late
19:04:51 <notmyname> (I don't think that one has anything to do with swift, but do it anyway)
19:04:58 <notmyname> tdasilva: cool
19:05:04 <zaitcev> go go gadget lwn
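(Context for readers of the log: this meeting fell on the day CVE-2014-6271, "Shellshock", was disclosed, which is what the bash advice and the lwn reference point at. A minimal sketch of the widely circulated vulnerability probe, wrapped in Python; the /bin/bash path is an assumption:)

```python
import subprocess

# Classic CVE-2014-6271 probe: a vulnerable bash executes the command
# trailing a function definition imported from an environment variable.
# A patched bash prints only the echoed test string.
env = {'PATH': '/usr/bin:/bin',
       'x': '() { :;}; echo VULNERABLE'}
out = subprocess.check_output(['/bin/bash', '-c', 'echo shellshock test'],
                              env=env)
print(out)  # an unpatched bash prints VULNERABLE before the test string
```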
19:05:18 <notmyname> next up, the autoabandoner
19:05:31 <notmyname> we've been running it for long enough to actually have stuff to abandon
19:05:38 <notmyname> any comments or concerns on it?
19:05:51 <notmyname> I'd also love for mattoliverau to share his thoughts
19:06:19 <mattoliverau> it looks like it might be working: since notmyname went through the list last week, it has remained empty
19:06:42 <mattoliverau> has anyone been annoyed by the email spamming?
19:07:03 <notmyname> mattoliverau: and has it caused too much of a burden on you?
19:08:02 <mattoliverau> no, not at all, it now has whitelist functionality which I added last week, so now you can whitelist by name, email, subject, username or number
19:08:20 <mattoliverau> #link http://abandoner.oliver.net.au/abandoned_changes.html
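(A purely hypothetical sketch of the whitelist check described above; the abandoner's real field names and data source are not shown in the meeting, so the Gerrit-style keys below are illustrative:)

```python
# Hypothetical whitelist for the autoabandoner: a change is skipped if
# any listed attribute matches.  Field names loosely follow the Gerrit
# REST "change" resource; the tool's actual implementation may differ.
WHITELIST = {
    'name': {'Jane Doe'},
    'email': {'jane@example.com'},
    'subject': {'WIP: do not abandon'},
    'username': {'jdoe'},
    'number': {123456},
}

def is_whitelisted(change):
    owner = change.get('owner', {})
    return (owner.get('name') in WHITELIST['name'] or
            owner.get('email') in WHITELIST['email'] or
            change.get('subject') in WHITELIST['subject'] or
            owner.get('username') in WHITELIST['username'] or
            change.get('_number') in WHITELIST['number'])
```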
19:09:13 <notmyname> mattoliverau: thanks for writing it
19:09:25 <notmyname> doesn't sound like there are any concerns, so let's move on :-)
19:09:31 <notmyname> #topic juno release
19:09:38 <notmyname> first swiftclient
19:09:55 <notmyname> acoles discovered that the SwiftService patch broke keystone v3 compat
19:10:12 <notmyname> which is in part because of a lack of keystone v3 test coverage
19:10:28 <notmyname> however, acoles was also awesome by fixing it and adding keystone v3 test coverage
19:10:42 <notmyname> so https://review.openstack.org/#/c/123168/ has landed (or is going through jenkins)
19:11:08 <notmyname> and that will necessitate a v2.3.1 release of python-swiftclient
19:11:20 <notmyname> acoles: any other commentary on why it broke or the fix?
19:11:24 <mattoliverau> cool, nice work acoles
19:11:29 <notmyname> acoles: thanks for working on it
19:11:42 <acoles> well, with hindsight i'd have put those tests in earlier
19:12:05 <mattoliverau> with hindsight there'd be no bugs :P
19:12:13 <acoles> i think the interface between shell and service is a little confused,
19:12:31 <acoles> i have spoken with joel wright and agreed there could be some tidy up,
19:12:43 <acoles> but for now wanted to just fix the bug
19:12:57 <acoles> There are some other regressions I have noticed,
19:13:30 <acoles> e.g. error scenarios (bad CLI options) now dump stack traces rather than helpful hints.
19:14:07 <acoles> I'm hoping we can put some dev time on adding swiftclient tests
19:14:11 <acoles> (we == hp)
19:15:26 <notmyname> acoles: anything that you've seen that should prevent the release with v3 support fixed?
19:15:55 <acoles> notmyname: just looking for the other bug i filed...
19:16:55 <acoles> https://bugs.launchpad.net/python-swiftclient/+bug/1372589
19:16:56 <uvirtbot> Launchpad bug 1372589 in python-swiftclient "'swift stat' prints exception trace when auth fails" [Undecided,New]
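(For reference, a minimal sketch of keystone v3 auth through python-swiftclient's Connection API, roughly the path the new v3 tests exercise; the endpoint, credentials, and domain names are placeholders:)

```python
from swiftclient.client import Connection

# Keystone v3 needs domain scoping via os_options on top of the usual
# user/key pair.  All values below are placeholders.
conn = Connection(
    authurl='https://keystone.example.com:5000/v3',
    user='demo',
    key='secret',
    auth_version='3',
    os_options={
        'user_domain_name': 'Default',
        'project_domain_name': 'Default',
        'project_name': 'demo',
    },
)
headers, containers = conn.get_account()
print(headers.get('x-account-object-count'))
```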
19:17:50 <notmyname> do we need to revert the SwiftService patch until those regressions are straightened out? or is it cheaper to move forward as-is?
19:18:42 <notmyname> thoughts from everyone on this question would be good^
19:18:51 <zaitcev> i'm thinking, i'm thinking
19:19:06 <torgomatic> just move forward; SwiftService encapsulates a lot of complicated stuff
19:19:12 <acoles> BTW, that bug report is an example, there are others
19:19:23 <cschwede> acoles: more errors?
19:19:30 <zaitcev> yeah. I mostly just verified that download works... with that blasted threading and stuff
19:20:00 <torgomatic> if we revert it, we go back to a world where you have to shell out to bin/swift or implement your own manifest building just to get a large object
19:20:11 <acoles> cschwede: another example was post (I think) with insufficient auth options just returns silently
19:20:25 <zaitcev> Since I'm not a user of v3, I have not seen anything fatal yet. I think the move to Requests was far more disruptive.
19:21:22 <notmyname> yeah, this does remind me of the move to requests. mostly better, but some nasty regressions that were (mostly) fixed subsequently
19:21:36 <notmyname> I'm of the mind to move forward
19:22:48 <notmyname> doesn't sound like anyone wants to revert it, so let's consider it settled :-)
19:22:48 <acoles> so, to be clear, beyond the v3 fix that is in flight, the issues i have seen are yuk output when cli options are bad/missing, i'm not aware of any other functional errors
19:23:12 <notmyname> thanks
19:23:48 <notmyname> ok, moving forward with juno things...
19:23:50 <notmyname> #link https://wiki.openstack.org/wiki/Swift/PriorityReviews
19:24:05 <notmyname> the juno list has been mostly getting shorter. yay
19:24:14 <notmyname> with a mind to timeframes, here's what I know
19:24:24 <notmyname> the juno release is happening oct 16
19:24:39 <notmyname> we (I) gave a tentative date a while back of a swift RC on oct 6
19:24:45 <notmyname> that's one week from this coming monday
19:25:02 <notmyname> next week is the hackathon, so we'll either get everything done then or not work on anything
19:25:26 <notmyname> the per-policy container counts is making its way through the gate now
19:25:39 <notmyname> the zerocopy GET needs one more +2
19:26:05 <notmyname> I strongly suspect the global replication improvements and the zerocopy PUT won't make it
19:26:06 <zaitcev> hopefully the hackathon helps rather than hinders despite all the travel
19:26:14 <notmyname> :-)
19:26:17 * mattoliverau puts on his extra big reviewing and testing hat for the next week or so then.
19:26:21 <torgomatic> zerocopy PUT isn't even rebased on anything useful at the moment, so... :)
19:26:44 <notmyname> cschwede: the partition movement one has a scary commit message https://review.openstack.org/#/c/121422/
19:27:12 <torgomatic> besides, I'm working on something that helps EC but drops a bunch of garbage right in the middle of zero-copy PUT
19:27:12 <cschwede> notmyname: ?
19:27:21 <notmyname> as in, I thought we had already figured that out a while back by considering weight before failure domains
19:28:00 <cschwede> notmyname: it's a little bit scary; torgomatic had a patch that respected device weight which helped a lot, but that's not enough
19:28:07 <notmyname> ok
19:28:38 <notmyname> I guess I mean "scary" as in "there's a use case of adding a new region to an existing cluster that has some known issues"
19:28:52 <cschwede> notmyname: so the problem is this: when adding a new tier, all devices in the tier are "at risk" and selected for reassignment. they aren't reassigned to another tier, thus moving inside the existing tier
19:28:54 <notmyname> and I thought we had solved that, but apparently not well enough
19:29:30 <notmyname> so lots of data shuffling without actually improving placement
19:29:31 <cschwede> the added test can be run against the current master branch, then you'll see the difference (number of moved partitions)
19:29:38 <cschwede> notmyname: exactly
19:29:51 <notmyname> and your patch makes sure partitions are only moved if it improves placement
19:30:13 <notmyname> also "tier" is more than just a region. it would be affected by adding another zone in a region, too, right?
19:30:17 <cschwede> yes, by checking first if there are partitions on other tiers that want more partitions
19:30:28 <cschwede> yes, regions, zones, nodes, …
19:30:53 <cschwede> there was a lot of back-and-forth, but i think Florent and I got it with the latest patchset
19:30:59 <notmyname> is it only if you are moving up the placement tree? eg you have only one zone and you add another? or what if you already have N zones and add to it?
19:31:30 <cschwede> if you have N zones and add another one it is not as bad, and if you have a large N you might not even notice this
19:32:02 <notmyname> ok
19:32:46 <notmyname> so all that being said, this patch (https://review.openstack.org/#/c/121422/) should land in juno. but it depends on https://review.openstack.org/#/c/120713/7 which needs one more +2
19:33:30 <torgomatic> the dependency is pretty straightforward; instead of doing some arithmetic spread throughout the builder, just compare old to new and print the number of differences
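(An illustrative sketch of that old-versus-new comparison using RingBuilder directly; note that _replica2part2dev is an internal attribute, and the device layout below is made up:)

```python
from swift.common.ring.builder import RingBuilder

# Build a small one-region ring, then add a second region and count how
# many partition-replica assignments actually change on rebalance.
builder = RingBuilder(8, 3, 0)  # part_power, replicas, min_part_hours
for i in range(4):
    builder.add_dev({'id': i, 'region': 1, 'zone': 1, 'weight': 100.0,
                     'ip': '10.0.0.%d' % (i + 1), 'port': 6000,
                     'device': 'sda'})
builder.rebalance()
old = [list(row) for row in builder._replica2part2dev]

builder.add_dev({'id': 4, 'region': 2, 'zone': 1, 'weight': 100.0,
                 'ip': '10.0.1.1', 'port': 6000, 'device': 'sda'})
builder.rebalance()
new = [list(row) for row in builder._replica2part2dev]

moved = sum(o != n for old_row, new_row in zip(old, new)
            for o, n in zip(old_row, new_row))
print('replica assignments changed: %d' % moved)
```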
19:33:30 <notmyname> acoles: you had already given a +1 on the dependent patch. will you be able to follow-up or do you need someone else to look at it?
19:34:06 <acoles> notmyname: yes, tomorrow
19:34:11 <notmyname> acoles: ack. thanks
19:34:12 <torgomatic> it's hard not to like, unless you're worried about the builder's memory consumption... and given that you run swift-ring-builder on a machine not in the swift cluster, and then the process exits, who cares?
19:34:47 <notmyname> torgomatic: oh I'm sure we could find someone not to like it. I'm told swift devs are cranky ;-)
19:35:17 <zaitcev> yeah, I run it on proxies and man it's sloooooow for me for some reason
19:35:30 <zaitcev> but I know that it's wrong and I shouldn't do it
19:35:33 <acoles> torgomatic: yeah, i just thought i should run it
19:35:56 <notmyname> is there anything else that needs to be addressed for the juno release?
19:36:23 <notmyname> also, the swift release for juno will have swift 2.2.0
19:36:59 <notmyname> if something does come up, then please let me know asap
19:37:24 <notmyname> #topic paris summit scheduling
19:37:32 <notmyname> here's what I know
19:38:03 <notmyname> we've been advised not to use the site we've used in the past
19:38:18 <notmyname> many other projects are using communal etherpads to come up with topics and figure out scheduling
19:38:43 <notmyname> I tend to think that's a pretty terrible idea (not the communal part, the part about trying to track who's doing what)
19:39:04 <notmyname> so I'm asking ttx about using the old site.
19:39:31 <notmyname> that being said, I certainly want to get broad feedback on what to discuss in paris
19:39:47 <notmyname> we'll actually have slightly more time, in aggregate, than we did in atlanta
19:40:09 <notmyname> we'll have 6 assigned sessions (as opposed to 8 in atlanta), but we'll also have a free-form half-day
19:40:41 <notmyname> any questions or thoughts you'd like to share on summit scheduling?
19:41:22 <notmyname> ok :-)
19:41:47 <notmyname> I'll figure out how we're doing scheduling and let you know when I know
19:41:51 <acoles> notmyname: do we get a pod/table like in atlanta? that was useful
19:41:51 <notmyname> #topic open discussion
19:42:01 <notmyname> acoles: no, no room for that in paris
19:42:07 <notmyname> well, actually, we don't get one for just us
19:42:18 <notmyname> there are some tables, but they will need to be shared
19:42:36 <notmyname> eg pick up the "swift" placard and put it on the table when we talk about stuff
19:42:41 <mattoliverau> it's paris, we can just find a cafe ;)
19:42:50 <notmyname> mattoliverau: great idea!
19:42:55 <notmyname> I have 2 small other things to bring up
19:43:43 <notmyname> for those of you who have been following the defcore stuff and were concerned about its exclusion of swift, that came to a head last week and was rejected by the foundation board of directors
19:44:14 <notmyname> so they are moving forward with a slight change in direction to look at having multiple trademarks instead of just one by which to label "openstack"
19:44:34 <notmyname> related to that, pay attention to the "big tent" email thread on the -dev mailing list
19:45:14 <notmyname> there are some potential big changes being discussed that could affect openstack organization. if you want to know what's going on or influence it, now's the time
19:46:04 <notmyname> finally, some of you may have heard of GNOME's Outreach Program for Women. it officially kicks off in january, but we have a lady interested in contributing to swift as part of that
19:46:22 <notmyname> so welcome mahatic and help her out as she works on swift :-)
19:46:56 <notmyname> and that's all I've got.
19:46:59 <mahatic> :) thank you!
19:47:03 <notmyname> anything else to bring up this week?
19:47:08 <mattoliverau> yay! welcome mahatic
19:47:12 <brnelson> I would like to ask a quick question
19:47:18 <notmyname> brnelson: ask away
19:47:19 <brnelson> I have a question about the sqlite journaling mode.  Right now it's DELETE, but that causes some performance issues on a shared file system.  Does anyone know if other possibilities are being investigated?  I saw this blueprint as one possible alternative:
19:47:26 <brnelson> Use WAL journal mode for sqlite:  https://blueprints.launchpad.net/swift/+spec/consider-wal-mode-sqlite
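(A minimal sketch of what the blueprint proposes: flipping a container DB from the default DELETE rollback journal to WAL. The path is a placeholder; note that sqlite's documentation cautions WAL relies on shared memory, which is relevant to network filesystems:)

```python
import sqlite3

# Switch an existing container DB from DELETE (rollback-journal) mode
# to WAL.  WAL mode is persistent, so this only needs doing once per
# database file.  Path is a placeholder.
conn = sqlite3.connect('/srv/node/sda1/containers/container.db')
mode = conn.execute('PRAGMA journal_mode = WAL').fetchone()[0]
assert mode == 'wal', 'journal mode switch failed: %s' % mode
conn.close()
```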
19:47:48 <torgomatic> you mean running multiple container servers against a single file system?
19:47:57 <torgomatic> do people really do that outside of a SAIO?
19:48:22 <brnelson> Yes, but we manage the access to the container dbs through individual nodes so that only one node accesses one container
19:48:26 <mahatic> thank you mattoliverau. Looking forward to it.
19:49:47 <torgomatic> interesting... that's the first I've heard of anyone doing that. out of curiosity, why do that instead of using plain old SSDs?
19:49:53 <torgomatic> (sorry if I'm derailing the conversation)
19:50:33 <brnelson> our main product is a shared file system and some customers want to have a Swift object solution using our filesystem
19:50:41 <torgomatic> huh. well, TIL
19:50:42 <torgomatic> :)
19:51:29 <brnelson> I'll be at the hackathon and can discuss some of the adventures we've had getting it to work
19:51:39 <acoles> notmyname: i have a quick update re data migration, https://review.openstack.org/#/c/64430/
19:51:40 <torgomatic> I think the primary use case is still directly-attached plain old filesystems, but if WAL mode speeds up (or doesn't harm) that use case, it may be worth considering
19:52:10 <notmyname> torgomatic: ++
19:52:13 <torgomatic> IOW, I'm not too terribly interested in changes that benefit the shared-FS case while harming the XFS-on-a-disk case, but anything that helps both is fair game
19:52:46 <notmyname> brnelson: let's dive into it next week!
19:53:02 <brnelson> it could perhaps be a configuration property used in the shared filesystem case.  It wouldn't necessarily have to be always enabled
19:53:35 <notmyname> brnelson: but specifically for your question, I don't know of anyone currently working in that area (for that problem or others)
19:53:44 <brnelson> ok. thanks
19:53:56 <notmyname> but I'm glad you're looking at it
19:54:06 <notmyname> acoles: what's your update for the migration middleware?
19:54:47 <acoles> so imho it's now in good shape, i would +2 but I've had some input so ideally would leave it for others to +2
19:55:44 <notmyname> good to hear
19:56:05 <notmyname> thanks for working on it so much up to this point
19:56:33 <acoles> well, gvernik has been very willing to adapt (btw he's on vacation so not here)
19:56:54 <notmyname> ok
19:56:56 <acoles> and tdasilva made some great input which has been incorporated.
19:57:05 <notmyname> do you remember if he's going to be in boston next week?
19:57:16 <tdasilva> i don't think so
19:57:20 <acoles> i think not, but he is going to paris
19:57:47 <notmyname> ok
19:57:56 <notmyname> anything else from anyone?
19:58:39 <notmyname> ok. remember that we don't have an online meeting next week, since we'll be meeting in person at the hackathon
19:58:46 <notmyname> thanks for coming
19:58:47 <mattoliverau> :(
19:58:50 <notmyname> thanks for your work on swift!
19:58:55 <notmyname> #endmeeting