Monday, 2015-12-14

00:00 *** zaitcev has joined #openstack-swift
00:00 *** ChanServ sets mode: +v zaitcev
00:48 *** chlong has joined #openstack-swift
00:49 *** changbl has joined #openstack-swift
00:52 *** takashi has joined #openstack-swift
00:53 <kota_> good morning
00:54 <takashi> kota_: good morning!
00:54 <kota_> takashi: o/
00:58 *** Guest91264 has quit IRC
01:14 *** blmartin has joined #openstack-swift
01:23 <blmartin> mattoliverau: o/
01:24 <mattoliverau> kota_, blmartin: o/
01:29 <blmartin> mattoliverau: I've got a sharding question for you, should you have a moment! Eventually, when I'm sharding a '.gt' container (it's never the same one, but always a '.gt' container), it creates the child '.le' and '.gt' containers, but all the objects in the parent and in the child containers go missing.
01:29 <blmartin> I think this may have something to do with an OSError (directory not found, I think) that gets thrown from broker.update_metadata in get_and_fill_shard_broker (I can send a stacktrace). I'm using an SAIO with a single replica. I'm going to keep looking into it, but I wanted to see if you had ideas.
01:30 *** klrmn has quit IRC
01:30 <blmartin> I'm not ruling out the possibility that this is a setup issue on my end, though.
01:31 <mattoliverau> Yeah, paste me a stacktrace. A single replica might cause some issues, but let's figure it out.
01:40 <kota_> mattoliverau, blmartin: good morning :)
01:40 <blmartin> here you go: http://paste.openstack.org/show/Yx9aQMRxm6dh0YrNBoOC/
01:40 <blmartin> kota_: Good morning!
01:45 *** dewsday has joined #openstack-swift
01:51 *** dmorita has joined #openstack-swift
01:56 *** dmorita has quit IRC
01:56 *** noark9 has joined #openstack-swift
01:56 *** nakagawamsa has joined #openstack-swift
01:57 *** dmorita has joined #openstack-swift
02:06 *** garthb has joined #openstack-swift
02:07 *** ianbrown has quit IRC
02:08 *** ianbrown has joined #openstack-swift
02:17 *** dmorita has quit IRC
02:54 *** dmorita has joined #openstack-swift
02:59 *** dmorita has quit IRC
03:02 *** ianbrown has quit IRC
03:02 *** ianbrown has joined #openstack-swift
03:25 *** ianbrown has quit IRC
03:25 *** ianbrown has joined #openstack-swift
03:25 *** janonymous has joined #openstack-swift
03:32 *** noark9 has quit IRC
03:36 *** ianbrown has quit IRC
03:36 *** ianbrown has joined #openstack-swift
03:37 *** tsg has joined #openstack-swift
03:37 *** dewsday has quit IRC
03:41 *** ianbrown has quit IRC
03:41 *** ianbrown has joined #openstack-swift
04:00 *** trifon has joined #openstack-swift
04:04 *** yatin has joined #openstack-swift
04:11 *** nakagawamsa has quit IRC
04:12 *** wolsen_ is now known as wolsen
04:18 *** mahatic has quit IRC
04:19 *** blmartin has quit IRC
04:27 *** ppai has joined #openstack-swift
04:29 *** trifon has quit IRC
04:34 *** pdardeau- has quit IRC
04:36 *** wasmum has quit IRC
04:37 *** pdardeau- has joined #openstack-swift
04:42 *** wasmum has joined #openstack-swift
04:46 *** proteusguy_ has quit IRC
04:50 *** ig0r_ has quit IRC
04:50 *** km has joined #openstack-swift
04:50 *** km is now known as Guest18949
04:54 *** ig0r_ has joined #openstack-swift
04:59 *** proteusguy_ has joined #openstack-swift
05:06 <yatin> notmyname: Thanks for that excellent YouTube video link of yours :)
05:07 <notmyname> yatin: was it the old "playing with swift" one from LCA?
05:07 <yatin> notmyname: LCA?
05:08 <notmyname> Linux Conf Australia
05:08 <notmyname> I don't remember which one I linked :-)
05:09 <yatin> notmyname: I didn't see LCA, but it was on a single node with 4 different failure scenarios
05:09 <yatin> notmyname: like disk failure, server failure, bit rot
05:09 <yatin> notmyname: disk full
05:09 <notmyname> oh yeah. that's the same one :-)
05:09 <notmyname> I hope it was helpful
05:10 <yatin> notmyname: in the case of a server failure, swift doesn't go to a handoff node, hoping that the server will come back up
05:10 <notmyname> right. availability vs durability issues
05:10 <yatin> notmyname: it was to the point, quick, and damn helpful
05:10 <notmyname> great! I'm glad to hear that
05:11 <mattoliverau> I remember a talk like that, it could've been at an LCA :)
05:11 <notmyname> mattoliverau: yeah. IIRC the Canberra one.
05:12 <yatin> notmyname: but if a server never comes back up, does swift just live with one copy less?
05:12 *** ppai has quit IRC
05:13 <notmyname> yatin: no, not for new data. new data will remain fully replicated
05:13 *** tsg has quit IRC
05:13 <yatin> notmyname: yes. and old object copies?
05:13 <notmyname> yatin: but otherwise, yeah. it's up to the operator to mark it as down (ie remove it from the ring) or otherwise fix the server
05:14 <notmyname> yatin: the reason we made that choice is to require human intervention on potentially large replication events. you don't want a big "replication storm" that turns into some death spiral in the cluster because a server hung for a few moments and then 100TB of data gets dumped on the wire
05:15 <yatin> notmyname: oh! so if it's removed from the ring, the replicator will notice one copy is missing and start creating the next one
05:15 <notmyname> right
05:15 <notmyname> it will readjust where the partitions are assigned, and replication will put stuff in the right place
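
(The recovery workflow described above — removing a dead server from the ring so replication re-creates the missing copies — is normally done with the swift-ring-builder CLI, which wraps swift's RingBuilder class. A rough Python sketch of the equivalent steps; the builder path and the failed server's IP are made-up examples:)

    # Sketch: drop a dead server's devices from the object ring so that
    # replication restores full durability. Assumes swift's RingBuilder
    # API; the path and IP below are hypothetical.
    from swift.common.ring import RingBuilder

    builder = RingBuilder.load('/etc/swift/object.builder')

    for dev in builder.devs:
        if dev is not None and dev['ip'] == '10.0.0.7':  # the dead server
            builder.remove_dev(dev['id'])  # schedule device for removal

    builder.rebalance()  # reassign its partitions to remaining devices
    builder.save('/etc/swift/object.builder')
    # then write out and distribute the new ring file to all nodes
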
05:17 *** ianbrown has quit IRC
05:17 <yatin> notmyname: great! where can I find more such resources/links?
05:17 *** ianbrown has joined #openstack-swift
05:18 *** noark9 has joined #openstack-swift
05:18 <yatin> notmyname: I watched a couple of videos over the weekend, but none can match the LCA one you gave
05:22 <notmyname> yatin: are you looking for something particular?
05:22 <yatin> notmyname: developer- and performance-centric resources
05:23 <yatin> notmyname: on one side, I want to start looking to contribute, and on the other, to measure performance for swift
05:27 <yatin> notmyname: there's an endless number of resources if I search your name on YouTube..
05:27 <notmyname> lol
05:28 <notmyname> let me pastebin an email response I gave recently to the "how to contribute" question
05:28 <notmyname> https://gist.github.com/notmyname/253cc726c084db998e57
05:28 *** ppai has joined #openstack-swift
05:29 <notmyname> yatin: updated with line wrapping ;-)
05:29 <notmyname> yatin: but for more detail about how to get started contributing, there are a few places I'd recommend starting
05:30 <notmyname> yatin: first, of course, is to install swift and play with it. set up the swift all in one (SAIO)
05:30 <notmyname> you can find docs at swift.openstack.org
05:30 <notmyname> there's also a vagrant script that automates it all for you (but you miss all the fun of doing it by hand!)
05:30 <notmyname> https://github.com/swiftstack/vagrant-swift-all-in-one
05:31 <notmyname> after that, you'll want to work on something
05:31 <notmyname> there are 2 main ways to help, in general, and both are vital
05:31 <notmyname> first is doing code reviews (see the review dashboard link in the topic). read some code, figure out what it's doing, play with it. leave comments
05:32 <notmyname> second is fixing a bug or writing a feature
05:32 <notmyname> if you look at the bug tracker (https://bugs.launchpad.net/swift) there's a lot you can work on
05:33 <notmyname> if you look at "wishlist" bugs, those are things that someone has thought would be neat. most of them are fairly small and isolated
05:35 <yatin> notmyname: I did SAIO, then started with code reviews, but then fell back, as I felt the need to know how swift is designed to work, and why it's designed that way..
05:36 <notmyname> cool!
05:36 <notmyname> have you looked at the docs at http://swift.openstack.org ?
05:36 <yatin> notmyname: based on your responses above, I could build a to-do list for myself..
05:37 <yatin> notmyname: yes, on day one I looked at them, but didn't understand much, since I had no context and no experience with object storage. but now I would like to go through them once again...
05:38 <notmyname> yatin: but like learning music, you don't go learn a bunch of theory before you pick up an instrument. you just go try to play something. then you learn the technical parts
05:39 <notmyname> same here. don't be scared to go play with stuff. maybe make a middleware that does something simple. maybe play with the on-disk format
05:39 <notmyname> learn by playing :-)
05:40 <yatin> notmyname: sounds good. thanks :)
05:41 <yatin> notmyname: middleware goes into the pipeline?
05:41 <notmyname> yeah. it's a way to modify a request or response. many of swift's features are implemented as middleware, and you can write others to do other cool stuff
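
(To make that concrete: a swift middleware is a WSGI filter configured into the proxy server's pipeline. A minimal sketch of a "does something simple" middleware — the header, class, and filter names here are made up for illustration:)

    # Minimal swift middleware sketch: add a custom header to every
    # response. Names are hypothetical examples.
    class SimpleHeaderMiddleware(object):
        def __init__(self, app, conf):
            self.app = app
            self.conf = conf

        def __call__(self, env, start_response):
            def my_start_response(status, headers, exc_info=None):
                headers.append(('X-Hello', 'swift'))  # the "something simple"
                return start_response(status, headers, exc_info)
            return self.app(env, my_start_response)

    def filter_factory(global_conf, **local_conf):
        conf = dict(global_conf, **local_conf)
        def make_filter(app):
            return SimpleHeaderMiddleware(app, conf)
        return make_filter

(It would then be wired up with a [filter:...] section in proxy-server.conf and its name added to the pipeline = line.)
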
05:47 *** trifon has joined #openstack-swift
06:02 *** garthb has quit IRC
06:08 *** SkyRocknRoll_ has joined #openstack-swift
06:09 *** SkyRocknRoll_ has quit IRC
06:12 *** ig0r_ has quit IRC
06:41 *** noark9 has quit IRC
06:47 *** zaitcev has quit IRC
06:59 *** sudorandom has quit IRC
07:01 *** rcernin has joined #openstack-swift
07:06 *** openstackstatus has joined #openstack-swift
07:06 *** ChanServ sets mode: +v openstackstatus
07:10 *** openstackgerrit has joined #openstack-swift
07:12 *** jamielennox is now known as jamielennox|away
07:28 *** ppai has quit IRC
07:29 *** ppai has joined #openstack-swift
07:33 *** sudorandom has joined #openstack-swift
07:46 *** chlong has quit IRC
08:04 *** arnox has joined #openstack-swift
08:09 <openstackgerrit> Victor Stinner proposed openstack/swift: py3: Use the six module in the xprofile middleware  https://review.openstack.org/233145
08:09 <openstackgerrit> Victor Stinner proposed openstack/swift: Port swift.common.utils.Timestamp to Python 3  https://review.openstack.org/237015
08:09 <openstackgerrit> Victor Stinner proposed openstack/swift: Port swift.common.utils.StatsdClient to Python 3  https://review.openstack.org/237005
08:10 <openstackgerrit> Victor Stinner proposed openstack/swift: Port get_hmac() and hash_path() to Python 3  https://review.openstack.org/236998
08:10 <openstackgerrit> Victor Stinner proposed openstack/swift: Add __next__() methods to utils iterators for py3  https://review.openstack.org/237040
08:10 <openstackgerrit> Victor Stinner proposed openstack/swift: Port parse_mime_headers() to Python 3  https://review.openstack.org/237027
08:10 <openstackgerrit> Victor Stinner proposed openstack/swift: Port FileLikeIter to Python 3  https://review.openstack.org/237019
08:14 *** rledisez has joined #openstack-swift
08:30 *** ppai has quit IRC
08:42 *** ppai has joined #openstack-swift
08:46 *** jordanP has joined #openstack-swift
08:47 *** eranrom has joined #openstack-swift
08:57 *** hseipp has joined #openstack-swift
08:58 *** rminmin has joined #openstack-swift
09:02 *** geaaru has joined #openstack-swift
09:12 *** jistr has joined #openstack-swift
09:36 *** aix has joined #openstack-swift
09:38 *** ig0r_ has joined #openstack-swift
10:06 *** takashi has quit IRC
10:15 *** rminmin has quit IRC
10:25 *** rcernin has quit IRC
10:30 *** ntt has joined #openstack-swift
10:39 <janonymous> Hi, is this test running fine for everyone? tox -e py27 test.unit.container.test_reconciler:TestReconcilerUtils.test_parse_raw_obj
10:40 *** Guest18949 has quit IRC
10:40 <janonymous> I am getting this failure: http://paste.openstack.org/show/481791/
10:44 <ntt> Hi, can someone help me with some questions related to a one-replica swift configuration?
10:53 *** janonymous__ has joined #openstack-swift
10:57 <tdasilva> ntt: it's better to state your question, and someone who knows will try to answer... maybe not immediately, but eventually ;)
10:58 <ntt> tdasilva: ok, thanks. Actually, I'm using swift with a 1-replica configuration, but I see too much traffic from openstack-swift-object-auditor and openstack-swift-object-updater. I'm using the kilo release on CentOS 7. Can someone help me please?
10:59 <ntt> I see the traffic because I have a SAN that exposes 4 disks inside the storage node
11:00 <janonymous__> tdasilva: hey, does the failure in the unit test look known to you?
11:00 <ntt> Furthermore, I see a lot of "object replication" tasks in the logs... but is this a necessary task in a one-replica configuration?
11:02 <tdasilva> janonymous: don't think I've seen them before, but I'm running tests to see if I can replicate
11:03 <janonymous__> tdasilva: thanks, that would be helpful
11:04 <tdasilva> brb
11:09 *** kei_yama has quit IRC
11:36 *** janonymous__ has quit IRC
11:39 *** chlong has joined #openstack-swift
11:39 *** ppai has quit IRC
11:50 <openstackgerrit> Yatin Kumbhare proposed openstack/swift: Fixed inconsistencies in docstrings  https://review.openstack.org/257293
11:52 *** ppai has joined #openstack-swift
11:54 *** yatin has quit IRC
12:03 <ntt> If I have one replica, is using more than one zone wrong?
12:04 *** ppai has quit IRC
12:09 <kota_> ntt: I think that's not wrong, but it *may* be meaningless
12:11 <ntt> kota_: and is it possible to reduce the number of zones in a cluster?
12:11 <ntt> I can reduce disks... but I don't know if it's possible to reduce zones
12:11 <kota_> maybe
12:11 <ahale> sure, just delete all the hosts in a zone to remove it
12:12 *** aix has quit IRC
12:13 <kota_> ntt: is your swift already running?
12:13 <ntt> yes... not a production environment, but I want to save the data
12:14 <kota_> I think a zone-reduction operation might affect some swift behavior
12:14 <kota_> with 1 replica, I think it's not such a big problem, but...
12:15 <kota_> it could trigger some warnings from swift-ring-builder about unbalanced zones.
12:15 <kota_> not sure.
12:17 <kota_> and when you remove the devices from the unnecessary zone, you *must* set weight 0 on them before removing the devices.
12:18 <kota_> because if 1 replica remains on a removed device and the ring is missing it, swift won't be able to find the object anymore
12:18 *** ppai has joined #openstack-swift
12:19 <kota_> so we have to wait for the 1 replica to move to available devices before removing them from the ring.
12:21 <kota_> I think removing a disk with 1 replica is a sensitive operation.
12:22 <kota_> ntt: good luck
12:22 <ntt> kota_: is there a command for removing zones? or should I just remove disks?
12:23 *** janonymous__ has joined #openstack-swift
12:27 <kota_> set weight 0 on the devices in the zone being removed, and then deploy the ring. after that (wait enough time for all replicas to move to available disks), remove the disks from the ring.
12:30 <kota_> I don't think there is a direct command to remove a zone from your ring.
12:31 <ahale> you can do like: swift-ring-builder blah.builder remove z5
12:31 <kota_> ahale: ah, really
12:31 <ahale> yeah, it will match on lots of things like that, the same as the search options
12:32 <ntt> ahale: but should I remove the disks first?
12:32 <kota_> ahale: nice, I didn't notice that
12:32 <ahale> (I spent the weekend messing with rings, so all this is fresh)
12:32 <ahale> you should set them to 0.0 weight, wait a while until they replicate all data off the disks, then remove, I guess
12:32 *** lpabon has joined #openstack-swift
12:33 <ahale> I'm not a huge fan of remove now though - I didn't realise some of the implications until recently
12:33 <ntt> ahale: 1) I should set weights to 0.0 for every kind of ring, right?
12:33 <ntt> with 3 separate commands?
12:34 <ahale> like if you want to drain account and container as well as object off a server? yep
12:34 <ntt> yes, I want to remove everything from the zone
12:34 *** janonymous_ has joined #openstack-swift
12:35 <ahale> so yeah - you would drop all drives in the zone to 0.0 weight, then wait for the data to get replicated to its new location
12:36 <ntt> ok... and will partitions be rebalanced automatically?
12:36 <kota_> ahale: oh yeah, "swift-ring-builder remove z1" looks like it removes all devices matching "z1"
12:36 <ntt> actually I have 1024 partitions, 512 per zone
12:37 *** janonymous__ has quit IRC
12:37 <kota_> ntt: when you set 0.0 weight on the zone, the partitions will be rebalanced to their new locations.
12:38 <ntt> ok
12:38 <ntt> ok... so this is a 2-command task: 1) set 0.0 for each ring and wait for the rebalance, then 2) remove the zone
12:38 <ntt> right?
12:39 <kota_> ntt: yes
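
(Putting ahale's and kota_'s steps together: the drain-then-remove workflow that the swift-ring-builder commands above perform can be sketched in Python against swift's RingBuilder API — the paths and the zone number are made-up examples:)

    # Step 1: drain the zone. Set every device in it to weight 0.0 and
    # rebalance, so replication moves its partitions to the other zone(s).
    from swift.common.ring import RingBuilder

    for ring in ('account', 'container', 'object'):  # all three rings
        builder = RingBuilder.load('/etc/swift/%s.builder' % ring)
        for dev in builder.devs:
            if dev is not None and dev['zone'] == 2:  # zone 2: an example
                builder.set_dev_weight(dev['id'], 0.0)
        builder.rebalance()
        builder.save('/etc/swift/%s.builder' % ring)
        # ...write and distribute the ring, then wait for replication

    # Step 2, once replication has drained the disks: remove the devices,
    # equivalent to "swift-ring-builder <ring>.builder remove z2".
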
12:39 <kota_> assuming you don't want to reuse the disks.
12:39 <ntt> I want to reuse the disks
12:40 <kota_> (the removed disks, I mean)
12:40 <ntt> I think that after these 2 commands, I should re-add the disks in z1
12:40 <kota_> ntt: right
12:42 <ntt> Ok... I will try it in a small environment with 5 or 6 GB of total space. Then, if it works, I need to set to 0.0 disks with a capacity of 5TB (with only a few GB really used by swift)
12:42 <ntt> how much time should I expect?
12:42 <kota_> depends on the environment
12:43 *** trifon has quit IRC
12:43 <kota_> basically, disk I/O or network I/O
12:43 <ntt> I'm using one server, and both disks are connected through iSCSI to a SAN
12:43 <ntt> ok... I understand
12:45 <kota_> ntt: good luck.
12:45 <kota_> have to be heading home
12:46 <ntt> thanks... I will give feedback soon
12:46 *** janonymous_ has quit IRC
12:46 *** jmccarthy has quit IRC
12:47 *** jmccarthy has joined #openstack-swift
12:50 <tdasilva> janonymous: unit tests ran on master without any issues
12:53 *** jmccarthy has quit IRC
12:54 *** janonymous_ has joined #openstack-swift
12:55 *** aix has joined #openstack-swift
13:01 *** janonymous_ has quit IRC
13:07 *** janonymous_ has joined #openstack-swift
13:08 *** jmccarthy has joined #openstack-swift
13:09 *** jmccarthy has quit IRC
13:09 *** jmccarthy has joined #openstack-swift
13:10 *** lpabon has quit IRC
13:14 *** janonymous_ has quit IRC
13:18 *** ppai has quit IRC
13:19 *** admin0 has joined #openstack-swift
13:23 <openstackgerrit> Merged openstack/swift: Fix some inconsistency in docstrings  https://review.openstack.org/256197
13:25 *** akle has joined #openstack-swift
13:29 *** SkyRocknRoll has quit IRC
13:44 *** SkyRocknRoll has joined #openstack-swift
13:55 *** dslevin_ has joined #openstack-swift
14:07 *** joeljwright has joined #openstack-swift
14:07 *** ChanServ sets mode: +v joeljwright
14:16 *** admin0 has quit IRC
14:17 *** changbl has quit IRC
14:19 *** changbl has joined #openstack-swift
14:22 *** Vinsh has quit IRC
14:26 *** janonymous_ has joined #openstack-swift
14:29 <gmmaha> notmyname: thanks for the tips. unfortunately I headed out before I could see them.. :( hit Rita's by the riverwalk, and a jazz concert last night. was a good weekend. :)
14:29 *** dustins|out is now known as dustins
14:37 <ntt> How can I see the zone names in a running swift cluster?
14:39 <janonymous_> tdasilva: thanks, I did something wrong with my env I guess :( I'll try to figure it out :)
14:44 *** petertr7_away is now known as petertr7
14:46 *** blmartin has joined #openstack-swift
14:47 <openstackgerrit> Alistair Coles proposed openstack/swift: Merge branch 'master' into feature/crypto  https://review.openstack.org/257416
14:47 <janonymous_> please review https://review.openstack.org/#/c/208222/ ; https://review.openstack.org/#/c/254276/
14:48 <janonymous_> notmyname: hi
14:49 <janonymous_> notmyname: Ondrej asked me to gather some opinions on https://review.openstack.org/#/c/227855/, so I added it to the meeting topics. Please feel free to comment.
14:50 <acoles> pchng: jrichli mahatic patch 257416 will need to land on feature/crypto to fix failing tests (the pyeclib version on feature/crypto is currently in conflict with openstack global requirements)
14:50 <patchbot> acoles: https://review.openstack.org/#/c/257416/ - Merge branch 'master' into feature/crypto
15:04 *** ig0r_ has quit IRC
15:12 *** eranrom has quit IRC
15:16 *** noark9 has joined #openstack-swift
15:16 *** sgundur1 has joined #openstack-swift
15:20 *** ttilley has joined #openstack-swift
15:21 *** ttilley has left #openstack-swift
15:27 *** tsg has joined #openstack-swift
15:27 *** breitz has quit IRC
15:28 *** breitz has joined #openstack-swift
15:31 *** jmccarthy has quit IRC
15:36 *** dslevin_ has quit IRC
15:43 *** janonymous_ has quit IRC
15:43 *** admin0 has joined #openstack-swift
15:45 *** janonymous_ has joined #openstack-swift
15:50 *** pgbridge has joined #openstack-swift
15:50 *** jmccarthy has joined #openstack-swift
15:51 <peluse> notmyname: you there?
15:51 <notmyname> peluse: just sat down at my computer to check email. what's up?
15:52 <peluse> I just approved patch 214206 after testing, but realized I was on the wrong VM. It doesn't pass for me, and apparently for a few others as well. Is there a way to stop it from merging?
15:52 <patchbot> peluse: https://review.openstack.org/#/c/214206/ - Modify functional tests to use testr
15:52 <notmyname> peluse: yes. add a -2 to it
15:52 <notmyname> or I can
15:52 <peluse> notmyname: gracias
15:52 <peluse> done
15:53 <notmyname> peluse: did you remove your +A too?
15:53 <notmyname> and I was all excited when I saw that you had added a +A ;-)
15:54 *** petertr7 is now known as petertr7_away
15:54 <peluse> heh, that's what I get for having 2 VMs running and not color-coding my term windows...
15:54 <peluse> will look into what's failing on my end and see if it's the same as some of the others
15:55 <notmyname> ok, thanks
15:58 *** petertr7_away is now known as petertr7
15:59 *** sgundur1 has quit IRC
16:02 *** sgundur1 has joined #openstack-swift
16:03 *** jistr has quit IRC
16:04 <wbhuber> notmyname: timburke: re: patch 256580, it seems the verdict hasn't been solidified. I'd be OK with implementing a new GET response format or tweaking the input to accept the new format.
16:05 <patchbot> wbhuber: https://review.openstack.org/#/c/256580/ - SLO multipart-manfest=get call returns json in inc...
16:07 *** noark9 has quit IRC
16:12 *** petertr7 is now known as petertr7_away
16:12 <openstackgerrit> Gleb Samsonov proposed openstack/swift: go: fix requests with Range header for 0-bytes files (DLO or special objects like links for example)  https://review.openstack.org/257461
16:14 *** SkyRocknRoll has quit IRC
16:19 *** admin0 has quit IRC
16:20 *** fthiagogv has joined #openstack-swift
16:24 *** garthb has joined #openstack-swift
16:26 *** yatin has joined #openstack-swift
16:29 <wbhuber> timburke: notmyname: let me know your thoughts. thx.
16:33 *** blmartin has quit IRC
16:38 *** blmartin has joined #openstack-swift
16:39 *** dslevin has quit IRC
16:40 *** dslevin has joined #openstack-swift
16:42 *** dslevin has quit IRC
16:43 *** dslevin has joined #openstack-swift
16:44 *** dslevin has quit IRC
16:47 *** zaitcev has joined #openstack-swift
16:47 *** ChanServ sets mode: +v zaitcev
16:48 *** dslevin has joined #openstack-swift
16:53 *** gyee has joined #openstack-swift
16:56 *** breitz has quit IRC
16:56 *** breitz has joined #openstack-swift
16:59 *** diazjf has joined #openstack-swift
17:05 <diazjf> notmyname, jrichli, I have updated the spec for a new auth system in Barbican. It will be discussed today at the Barbican meeting. https://wiki.openstack.org/wiki/Meetings/Barbican
17:05 <diazjf> notmyname, jrichli, feel free to check it out and ask me any questions :)
17:19 *** jmccarthy has quit IRC
17:20 *** openstackstatus has quit IRC
17:22 *** openstackstatus has joined #openstack-swift
17:23 *** ChanServ sets mode: +v openstackstatus
17:26 <notmyname> good morning (here for real now)
17:28 <notmyname> wbhuber: added a comment. I'm fine with either way, and since you're the one writing the code, I'll support whichever you choose ;-)
17:28 <notmyname> wbhuber: looks like dfg would prefer the formatted GET form, so that might be easier to get through review
17:30 <notmyname> diazjf: do you have swift-specific questions?
17:30 *** dmorita has joined #openstack-swift
17:30 <diazjf> notmyname, not really, just wanted you to see the new spec to see if it is suitable for using Castellan in the keymaster
17:31 *** sgundur1 has quit IRC
17:31 *** petertr7_away is now known as petertr7
17:32 *** hseipp has quit IRC
17:33 <openstackgerrit> Christian Schwede proposed openstack/swift: Fix full_listing in internal_client  https://review.openstack.org/257502
17:33 *** sgundur1 has joined #openstack-swift
17:36 *** rledisez has quit IRC
17:36 <notmyname> wbhuber: of course, if others think that the broader accept format is the better way, that's good too
17:38 <notmyname> wbhuber: FWIW, I'd prefer the broader accept format, but I can go either way. like I said, you're the one writing the code ;-)
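
(Background on the two formats being debated in patch 256580: the JSON manifest a client PUTs with ?multipart-manifest=put and the stored manifest returned by GET ?multipart-manifest=get use different keys, so the GET output can't be fed straight back into a PUT. A sketch of the difference — the values are made-up examples:)

    # Upload format (PUT ?multipart-manifest=put):
    put_manifest = [
        {"path": "/segments/obj/part-001",
         "etag": "d41d8cd98f00b204e9800998ecf8427e",  # example md5
         "size_bytes": 1048576},
    ]

    # Stored format (returned by GET ?multipart-manifest=get), with
    # different key names, so it isn't directly PUT-able:
    stored_manifest = [
        {"name": "/segments/obj/part-001",
         "hash": "d41d8cd98f00b204e9800998ecf8427e",
         "bytes": 1048576,
         "content_type": "application/octet-stream"},
    ]
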
17:39 <jrichli> diazjf: thanks! I will be there. what time is the meeting in our timezone again?
17:40 *** lyrrad has joined #openstack-swift
17:40 <diazjf> notmyname, jrichli, 3PM EST, so 2PM our time :)
17:41 <notmyname> noon ;-)
17:41 <jrichli> :-)
17:41 <jrichli> you didn't want lunch, right?
17:42 *** janonymous_ has quit IRC
17:42 *** aix has quit IRC
notmynamediazjf: jrichli: remind me why swift + barbican = no keystone. I don't remember, and I'm not sure it makes sense17:43
*** jordanP has quit IRC17:43
jrichlinotmyname: I think it was that we want swift + castellan to not require keystone, so you *could* use a different keystore17:44
jrichlibut if you use swift + castellan -> backed by Barbican specifically, then its ok to bind to keystone17:44
notmynameok17:45
notmynamebut I think that's where I keep getting confused17:45
diazjfnotmyname, its done so that deployments without keystone can take advantage of swift encryption using Barbican17:46
notmynamesomehow I've reduced castellan in my mind to "a wrapper around some key manager functionality" and as such I don't see why it itself requires auth (ie instead of passing auth through to whatever the backend is17:46
notmynamediazjf: but that's different than what jrichli just said17:46
jrichliif we don't remove the dependency of castellan upon keystone, then we couldn't actually make use of the power of the castellan API in our standalone swift.17:46
notmynamejrichli: yeah. but I'm not clear on why castellan itself requires keystone17:47
notmynameie instead of barbican as used by castellan17:47
jrichliI don't know either.  But it does, right diazjf?17:47
diazjfnotmyname, jrichli, what happens now is that with a Barbican backend, keystone is required; the only other supported auth in Barbican is no auth.17:47
diazjfBarbican only supports Keystone or no Auth17:48
notmynameok, I'm with you on that17:48
jrichlidiazjf: we know that about BB17:48
notmynamewhen would castellan be used without barbican?17:48
*** joeljwright has quit IRC17:49
diazjfIn the future it will be able to switch between Barbican and KMIP, and any other future backends17:49
jrichliI had gotten the impression from somebody that there was even a build dependency17:49
jrichlidiazjf: I am wondering: if I use Castellan, and then write my own implementation of the Castellan API - not barbican - would I still have to have keystone libraries to build and/or deploy?17:50
diazjfjrichli, so you can write your own keymaster, but if you plan on having Barbican in the backend, you will need Keystone no matter what17:50
jrichlidiazjf: since we can chat in person, I will come by :-)17:51
diazjfjrichli, awesome ok :)17:51
hroudiazjf, jrichli  I didn't think castellan had any keystone or barbican build dependencies as of today, from what I recall (sure, today you can't use it with anything else .. :) ..)17:51
openstackgerritOndřej Nový proposed openstack/swift: Deprecated param timeout removed from memcached  https://review.openstack.org/25706617:51
notmynamejrichli: ok, tell me what the answer is ;-)17:52
notmynamedoes anyone here know or use the glance driver for swift?17:54
notmynamecschwede: tdasilva: ^ maybe?17:54
tdasilvaflaper87 does17:54
notmynametdasilva: well yeah, of course he does. I'm looking for a swift dev 'cause flaper87 is trying to get rid of it ;-)17:55
notmynamere http://lists.openstack.org/pipermail/openstack-dev/2015-December/081966.html17:55
notmynamebasically, I'm wondering if (1) we could bring the glance driver under the swift project umbrella and (2) if any swift user/dev already knows anything about it17:56
notmynameI haven't had a chance to look at the code yet, but it's something I want to try to do soon17:56
tdasilvanotmyname: mm...was just reading that email now17:58
tdasilvaflaper87 and others in red hat have asked me questions before regarding glance and swift integration. Honestly it seems to make sense to have some kind of expertise in that driver as I think it could make better use of swift's functionality18:00
*** pberis has quit IRC18:00
notmynametdasilva: yeah18:00
*** haomaiwang has quit IRC18:01
*** haomaiwang has joined #openstack-swift18:01
*** yatin has quit IRC18:03
jrichlihrou: ok.  because last time you had looked - and another person working with me here - there was at least a build dependency, and probably more.18:04
hroujrichli, Oh !  Yea I looked a while back and I could have sworn there wasn't but I could be wrong ;)  I'll check out the repo18:05
jrichlihrou: thanks.  i guess i could do the same ;-)18:06
*** petertr7 is now known as petertr7_away18:06
jrichlinotmyname: some people in our group have made use of the glance driver for swift.  but they didn't have to do anything except from change the config file settings, and it just worked.18:08
notmynameyeah, that's the next question: how much maintenance/work does it need?18:08
tdasilvanotmyname: so, in a way I can see why it would make sense to bring under the swift umbrella, OTOH, we could fall into the same issue, just reversed, where we try to maintain the driver but don't have enough knowledge of glance18:08
notmynametdasilva: is it much different than the keystoneauth middleware we maintain? there's an interface contract, and we have the swift-specific translation in our repo18:09
tdasilvagood point18:09
tdasilvai honestly don't know18:09
notmynameso a glance driver in its own repo under the swift project umbrella seems reasonable to me18:09
jrichlinotmyname: our systems are probably too young to get that info from.  so far, nobody had to dig.  just works.18:10
notmynameflaper87: what's the current state of or health of the swift driver in glance?18:10
notmynamedfg: did you see this patch from torgomatic? patch 25209618:14
patchbotnotmyname: https://review.openstack.org/#/c/252096/ - Allow smaller segments in static large objects18:15
notmynamedfg: swaps the SLO min limit for some ratelimiting18:15
*** klrmn has joined #openstack-swift18:17
*** eranrom has joined #openstack-swift18:28
*** lpabon has joined #openstack-swift18:28
jrichlinotmyname: I think the main concern was for testing.  But I have the words from the last meeting.  I put a snippet here http://paste.openstack.org/show/481847/ that might help get us back to where we were18:28
*** eranrom has quit IRC18:29
*** arnox has quit IRC18:30
notmynamejrichli: I like this one "<rellerreller> notmyname the goal of castellan is not to depend on Keystone or any other authentication scheme."18:30
notmynamejrichli: FWIW, I can make yaml by hand with a text editor, so it's not like that brings in much of a dependency (I think).18:31
nttHow can I discover how much time I have to wait before min_part_hours expire?18:32
jrichlinotmyname: ok, good to know.  :-)18:32
notmynamejrichli: unless there is some fairly complex thing to manage that I don't know about. if it's "just" for creds, I'd imagine it would be simple18:33
notmynamejrichli: to which you respond, "yeah, but that's what you thought before you ever saw keystone!"18:33
jrichlilol18:34
notmynamentt: that is a great question. I don't think we expose that, but I think we probably should18:34
notmynamentt: I added a wishlist bug for that https://bugs.launchpad.net/swift/+bug/152601718:35
openstackLaunchpad bug 1526017 in OpenStack Object Storage (swift) "expose time remaining in min_part_hours" [Wishlist,New]18:35
diazjfnotmyname, jrichli, my thoughts are that we will have a gate with Swift and Barbican running and that swift will access Barbican by passing a username and password. No Keystone involved. And swift deployers can use Barbican and not need keystone.18:36
nttnotmyname: min_part_hours is 1 hour for me. Now when I try to rebalance, swift tells me that I have to wait. But how long? I'm not asking about the right value for min_part_hours at this moment18:36
notmynamentt: right18:36
stevemardiazjf: won't barbican require keystone ?18:36
stevemaror does it not require keystone?18:37
diazjfstevemar it currently does, but there is a spec to add a new SAML based authentication18:37
notmynamentt: the point of min_part_hours is to allow a replication cycle to complete in the cluster before data is moved around. this keeps the data available to the end user18:37
diazjfstevemar, notmyname, https://review.openstack.org/#/c/241068/18:37
notmynamentt: however, there are times when you want to do rebalances faster (mostly testing, but also maybe with some special circumstances in your cluster)18:37
nttyes, ... in my case replication takes less than 1 minute. But I have to wait 1 hour to continue operations18:38
notmynamentt: ok. you can do that. here's how..18:38
nttthanks :)18:38
notmynamentt: use `swift-ring-builder <builder> pretend_min_part_hours_passed`18:38
diazjfstevemar, since swift is most commonly deployed without keystone, we need some way of using Barbican without that dependency18:38
notmynamentt: that will reset the clock18:38
nttnotmyname: so I have to run this command for all the affected rings (in my case all rings) and then I can continue with the rebalance?18:39
*** ig0r_ has joined #openstack-swift18:39
notmynamentt: yeah. let me build an example18:40
dfgnotmyname: ya- i saw it. i guess i could try to review it18:40
dfg:)18:40
notmynamedfg: even a +/-1 on the idea would be good :-)18:41
dfgif thats what you are getting at :)18:41
notmynameyeah, thanks :-)18:41
stevemardiazjf: is the assumption that barbican will have access to keystone?18:41
stevemardiazjf: or just swift and barbican with no keystone anywhere?18:41
*** david-lyle_ has joined #openstack-swift18:42
diazjfstevemar, Barbican will be able to be used with or without Keystone. Most deployments will most likely be Swift, Barbican, no-Keystone. notmyname, could you confirm this?18:43
notmynamediazjf: barbican with keystone makes sense to me (in most cases, I think). isn't the question about castellan and keystone?18:43
blmartincan I take that wishlist item you just mentioned (I'm assuming it is just a change to the ring builder cli output)?18:43
notmynameblmartin: yeah18:43
notmynameblmartin: I could imagine it in 2 places. one is in the output when no command is given. the other is in the output of rebalance when the rebalance command fails18:44
diazjfnotmyname, If you are using Castellan with Barbican in the backend this still applies, but its fine using any other backend.18:44
notmynamediazjf: what is fine?18:44
onovynotmyname, hey! why is pretend_min_part_hours_passed not in the manual? it's cool :]18:44
blmartinnice! Thanks notmyname18:45
notmynameonovy: because it's a huge loaded foot-gun!18:45
*** david-lyle has quit IRC18:45
notmynamebut I do agree (after getting a lot of questions) that it should probably be more documented, and with lots of scary words around it18:45
onovyhmm, ring-builder --help-i-m-expert ? :)18:45
blmartin--help-trust-me-im-a-developer18:46
diazjfnotmyname, that if you do not have Barbican in the backend, but something else, you can write a keymanager and choose how to authenticate, but if you want Barbican you need Keystone.18:46
stevemardiazjf: sounds awkward18:47
notmynamentt: here's an example using pretend_min_part_hours_passed https://gist.github.com/notmyname/0f62708048f69ee820c718:48
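A minimal sketch of the sequence the gist walks through, assuming a builder file named object.builder (testing use only: it defeats the availability guarantee that min_part_hours exists to provide):

    # rebalance refuses to reassign partitions before min_part_hours elapses
    swift-ring-builder object.builder rebalance
    # reset the clock, then rebalance again
    swift-ring-builder object.builder pretend_min_part_hours_passed
    swift-ring-builder object.builder rebalance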
*** dslevin_ has joined #openstack-swift18:48
diazjfstevemar, notmyname, my thoughts were that we wanted access to Barbican from Castellan not to depend on Keystone :/18:49
onovynotmyname, can you look at https://review.openstack.org/#/c/256597/ and https://review.openstack.org/#/c/256715/ and https://review.openstack.org/#/c/256725/ pls? 30+ other OS projects already merged the same change :)18:51
*** rvasilets__ has joined #openstack-swift18:55
jrichlidiazjf, stevemar, notmyname: I think this discussion was brought up before because oslo.context is being used for auth when using castellan.  so, for testing, you would still most likely want to use the oslo.context.18:55
openstackgerritJohn Dickinson proposed openstack/swift: Document pretend_min_part_hours_passed  https://review.openstack.org/25753218:55
notmynameonovy: ntt: dfg: docstring added for pretend_min_part_hours_passed ^18:55
jrichliand we didn't want to use that when testing18:55
diazjfjrichli, stevemar, notmyname, if we are not using Barbican in our testing, we can just write another castellan key-manager18:57
diazjfand use that instead of the Barbican one18:57
diazjfif we are using Barbican then we either need Keystone or a new auth method must be developed for Barbican18:58
jrichlidiazjf: yes, but the interface takes a context.  technically, we could make it different for testing, i suppose ...18:58
diazjfjrichli, correct!18:59
diazjfnotmyname, jrichli, I'm just not sure which path we want to take :(19:00
onovynotmyname, perfect. maybe you should point to swift-recon for replication pass (time)19:00
*** changbl has quit IRC19:00
*** haomaiwang has quit IRC19:01
*** changbl has joined #openstack-swift19:01
*** haomaiwang has joined #openstack-swift19:01
acolesnotmyname: peluse gmmaha: I see a TypeError from functional tests on patch 214206 when using --until-failure - any of you seen this or had success with that flag?19:02
patchbotacoles: https://review.openstack.org/#/c/214206/ - Modify functional tests to use testr19:02
acolesI want to do the equivalent of nosetests -x19:02
nttnotmyname: this works in kilo ?19:03
peluseacoles: just got back to my desk, running now...19:04
acolespeluse: here's what i see http://paste.openstack.org/show/481864/19:05
*** nadeem has joined #openstack-swift19:06
acolespeluse: i'm not sure --until-failure is the equivalent to nose -x, the os-testr docs say it *loops* until failure19:06
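A rough illustration of the difference acoles is pointing at (exact behavior may vary by os-testr version):

    # nose stops partway through a single run at the first failure:
    nosetests -x test/functional
    # os-testr instead re-runs the whole suite in a loop until a run fails:
    ostestr --until-failure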
peluseacoles: ahh shit, now I'm getting pyeclib not installed complaints19:06
acolespeluse: heh! thats how i spent my morning!19:07
* peluse picked the wrong day to stop sniffing glue...19:07
acolesldconfig ?19:07
peluseacoles: could be, I'll check when its done "thinking" or whatever its doing19:07
acolespeluse: i found tox -r would always fail, i had to pip install pyeclib outside of the tox envs, ldconfig then tox -e py27 etc19:08
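Spelled out, the workaround acoles describes might look like this, assuming a system-wide install is acceptable (pyeclib links against liberasurecode, hence the ldconfig):

    sudo pip install pyeclib   # outside any tox virtualenv
    sudo ldconfig              # refresh the shared-library cache
    tox -e py27                # or tox -r to recreate the envs first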
*** siva_krishnan_ has joined #openstack-swift19:08
peluseacoles: yeah, I just tried that and the pip install failed.  trying ldconfig followed by tox -r now and if that doesn't work will go see what pip is bitching about this time19:09
peluseman pip is harder to please than my wife!19:09
acolescareful ;)19:09
gmmahaacoles: sorry just got back to desk19:11
onovyntt, yes, kilo too19:11
hroupeluse, lol ...... so true though, errr was fighting with pip problems yesterday19:11
peluseacoles: its OK, she has no idea what IRC is :)19:11
gmmahaacoles: sorry, used that flag once and it did work for me.19:13
gmmahalet me try again19:13
peterlisaknotmyname, hi, sorry to bother, I made an older patch 238799 (based on onovy's idea to set schedule priorities via swift configs). Not sure if it is right to have this setting in swift. Maybe we could put it on the meeting agenda so it is seen by more people and we can make a decision?19:14
patchbotpeterlisak: https://review.openstack.org/#/c/238799/ - Change schedule priority of daemon/server in config19:14
acolesgmmaha: interesting, so what does it do, does it loop forever until failure or run one set of tests (stopping if any failure)?19:14
*** diazjf has quit IRC19:15
gmmahait runs all and stops running on its first failure19:15
acolesgmmaha: and if no failure it repeats all the tests, again, and again...?19:16
acolesgmmaha: (sorry, I would try myself but it breaks :/)19:16
gmmahaacoles: no worries at all.. let me go and try it out now19:17
* gmmaha has a setup that will always fail19:17
acolesgmmaha: btw your suggestion to get rid of the log spew works for me19:17
acolesgmmaha: try it on a setup that always passes :)19:17
gmmahaacoles: thanks.. yeah with that it works and all tests pass19:17
peluseok tox working again (not yet tried func) and I had to pip install pyeclib, ldconfig and then tox -r19:17
gmmahabut when you let it spew logs, the tests start to fail19:17
peluseacoles: what's the exact cmd you are running that's failing on the testr patch?19:21
gmmahaacoles: are you running fedora or ubuntu? and which version if i may ask19:21
acolespeluse:  http://paste.openstack.org/show/481864/19:22
onovypeterlisak, you wrote it, you should be sure!19:23
nttonovy: thanks..... actually I'm waiting for min_part_hours.... but it seems that it doesn't work19:23
acolespeluse: the second attempt is same but going through tox19:23
*** diazjf has joined #openstack-swift19:23
peluseacoles: does plain old tox -e func work for you?19:24
acolespeluse: yes, its just the --until-failures flag that causes the error19:25
peluseacoles: OK, same here19:25
pelusewtf is ostestr though?  I get cmd not found on that one19:25
*** tsg has quit IRC19:26
acolesmaybe source your tox venv first, or install with sudo pip install ostestr19:26
*** petertr7_away is now known as petertr719:27
peterlisakonovy, i mean some developers have different opinion :)19:27
onovyntt, sry, don't know how or if it works. just looked at the source code, branch stable/kilo, and the function is there :]19:28
nttonovy: thanks.... I will give feedback soon. Another question: if I have a disk with weight = 0, this disk will be excluded from the write process, right?19:29
peluseacoles: ok, yeah.  its os-testr though.  so I get the same things as you in both cases now19:29
onovyntt, yes. no new data will be placed there and everything that is there will be moved somewhere else19:29
acolespeluse: sorry, yeah missed the hyphen - package is os-testr, command is ostestr19:30
acolespeluse: i'm just leaving a comment on gerrit cos i have to leave soon. The thing is, the nosetests -x option could be a real time saver, not having to wait for all tests to run before knowing there's a failure.19:31
acolespeluse: that's progress i guess :P19:31
peluseacoles: agree, thanks!19:31
nttonovy: ok.... all data was moved and now I'm trying to remove the zone... This seems to work, but I need to rebalance. However, I can still use the cluster because no new data will be placed on the disk with weight=0. thanks.19:32
gmmahaacoles: you are right.. getting the same error19:32
onovyntt, yep19:32
acolesgmmaha: ok thanks for confirming19:33
onovyntt, and there was a bug. you can't remove a disk with weight=0 if i remember it correctly. i think peterlisak fixed it19:33
nttonovy: I need to remove the entire zone. Asking here, I learned that I should set weight=0 for all disks in the zone (I have just 1 disk in the zone)19:36
nttand then I should remove the zone with swift-ring-builder remove Z319:36
onovyntt, yes thats right19:36
onovyntt, and now you can't rebalance19:37
*** petertr7 is now known as petertr7_away19:38
nttI don't know if this is a bug or if I should wait ....19:38
onovyntt, any stdout/stderr message?19:38
ntt"Either none need to be or none can be due to min_part_hours [1]" .... this is the error when I try19:38
onovyntt, bug19:38
onovyhttps://review.openstack.org/#/c/233096/ this19:38
onovyntt, add --force19:39
acolespeluse: gmmaha: i have to go, will try to play with os-testr some more tomorrow19:39
nttShould I use pretend_min_part_hours_passed?19:39
nttor just --force?19:39
gmmahaacoles: thanks.. will do the same here..19:39
onovyntt, rebalance --force19:39
gmmahasorry, need to finish some administrative stuff before end of the year.. getting stuck with that19:40
nttok.... it works. the zone disappeared :)19:40
onovyntt, this is fixed in swift "master" already19:41
onovyit should work without --force19:41
nttonovy: I need to remove other zones. Should I wait min_part_hours in any case or can I just --force?19:42
*** acoles is now known as acoles_19:42
onovyntt, set_weight, rebalance, push rings to cluster, WAIT, remove, rebalance --force, push rings to cluster19:43
nttonovy: really thank you!19:45
*** albertom has quit IRC19:45
onovyntt, you are welcome19:45
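As commands, onovy's sequence above might look like the following for one ring (a sketch only; the device string and weight are borrowed from examples elsewhere in this log, and the same steps have to be repeated for the account and container builders):

    swift-ring-builder object.builder set_weight r1z3-192.168.254.15:6000/sdd1 0
    swift-ring-builder object.builder rebalance
    # push the new *.ring.gz files to the cluster, then WAIT for
    # replication to drain the device
    swift-ring-builder object.builder remove r1z3-192.168.254.15:6000/sdd1
    swift-ring-builder object.builder rebalance --force
    # push the *.ring.gz files to the cluster again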
*** haomaiwang has quit IRC19:46
nttonovy: another question: if I set replica 1 and I have 1 zone, will the replicator try to replicate objects?19:46
onovyntt, hmm. never had 1 replica19:46
onovythe replicator will run, but only do .ts cleanup i think19:47
nttok19:47
*** albertom has joined #openstack-swift19:52
nttonovy: I'm trying to set_weight to 0 and then rebalance.... but it seems that the space used on the disk with weight=0 doesn't decrease19:53
onovyso, after rebalance you pushed rings19:53
onovyreplicators are running19:53
onovyand space is not decreasing?19:54
nttjust one node, no need to push rings. replicators are running. yes19:54
*** geaaru has quit IRC19:55
diazjfnotmyname, stevemar, jrichli, Barbican meeting in 5 min, #openstack-meeting-alt19:55
onovyso you have 1 node, 2+ disks, right?19:55
nttstarted.... ok19:55
nttI'm sorry19:55
onovylook into object-replicator log19:56
nttonovy: I'm a bit worried :)19:56
nttbut now it's ok19:56
onovyntt, just to check: 2 disks, 1 node, 1 replica. And now you are removing one disk, right?19:56
nttonovy: due to an error in a script, my initial configuration was: 1 region, 4 zones, 1 replica, 4 disks19:57
nttNow I'm trying to obtain: 1 region, 1 zone, 1 replica, 4 disks19:57
onovyso you will remove 1 disk and then readd it with the correct zone19:58
nttyes.... but the used space is small, so I can first remove 3 disks and then readd them to the right zone19:58
onovythis should work: remove 1 disk, add the same disk again with the correct zone, rebalance19:59
onovypartitions will move from the removed disk to the newly added disk19:59
*** tsg has joined #openstack-swift19:59
onovyno migration needed :)19:59
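A sketch of that shortcut, again with the device string and weight borrowed from this log: because the remove and the add land in the same rebalance, partitions flow straight from the old entry to the new one, so nothing has to be drained first:

    swift-ring-builder object.builder remove r1z3-192.168.254.15:6000/sdd1
    swift-ring-builder object.builder add r1z1-192.168.254.15:6000/sdd1 5000
    swift-ring-builder object.builder rebalance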
nttah ok...20:00
nttcool20:00
ntt :)20:00
nttactually I'm removing the second disk.... I will give a try with the third20:00
onovyand a good idea is to do ring changes on another server (or in another directory) than production20:00
onovyyou should check changes before "pushing"20:01
nttactually in my case, if something goes wrong no users will kill me :)20:01
*** haomaiwang has joined #openstack-swift20:01
nttbut yes..... this is a good idea20:01
nttonovy: replication logs stopped, but the disk still has 3.8 GB of used space20:06
nttis this a completely async process?20:06
onovyntt, stopped => finished?20:06
nttno, not finished.... paused, I think, because it restarted after some tasks related to the auditor20:07
*** whydidyoustealmy has quit IRC20:07
*** shakamunyi has quit IRC20:07
onovythe replicator automatically restarts after every ring change20:07
onovythe replicator and the auditor are two different daemons20:08
nttok... now the space used on the disk is 0.  :)20:09
onovytry ls /<disk>20:09
onovythere should be no partitions (directory with number in name)20:10
nttInside the disk I have just these folders: {accounts, containers, objects, tmp}20:10
nttbut they are empty20:10
onovyperfect20:10
onovythat's correct -> no partitions inside those folders20:10
nttNow .... I have to wait or can I remove the zone?20:10
*** david-lyle_ has quit IRC20:10
onovyyou can remove20:11
*** sgundur1 has quit IRC20:11
nttok... I need to remove the zone for each ring, so I launch the command 3 times20:11
nttright?20:11
onovyyep20:11
nttok... now I need to rebalance --force, right?20:12
*** vinsh has joined #openstack-swift20:12
*** david-lyle has joined #openstack-swift20:13
*** haomaiwang has quit IRC20:13
onovyremove, rebalance --force20:14
nttyes... remove is ok. now I rebalance with --force20:15
ntt"d2r1z3-192.168.254.15:6000R192.168.254.15:6000/sdd1_"" marked for removal and will be removed next rebalance." this is the output of remove20:15
onovyyep20:16
nttok....zone 3 disappeared :)20:17
onovynow readd and rebalance20:17
nttnow... I have 2 zones with one disk in each zone. And 2 other free disks20:18
nttWhat should I try to do? first add free disks to zone1?20:18
onovyyes20:19
onovyi think it's better20:19
onovyadd both of them, then rebalance20:20
*** siva_krishnan_ has left #openstack-swift20:20
*** siva_krishnan_ has joined #openstack-swift20:20
nttok... I can add 2 disks with only 1 rebalance, right?20:20
onovyyes20:21
nttswift-ring-builder object.builder add r1z1-192.168.254.15:6000/sdd1 500020:21
*** dslevin_ has quit IRC20:21
nttit's ok?20:21
onovylooks good, but don't know your ips, ports, mountpoints, etc.20:22
nttyes :)20:22
*** siva_krishnan has quit IRC20:23
nttonovy: can I rebalance without --force?20:25
*** siva_krishnan_ has left #openstack-swift20:26
onovytry it20:26
*** siva_krishnan_ has joined #openstack-swift20:26
*** siva_krishnan_ is now known as siva_krishnan20:27
nttok, it works. the number of partitions is correct20:27
nttin the background the replicator is moving files20:28
onovyok, wait now20:28
nttyes :)20:28
nttHow can I know when this process ends?20:29
onovylook into the replicator log20:29
*** sgundur has left #openstack-swift20:30
nttIt's not very clear.... logs just stop when the replicator ends..... without any message20:31
onovyand wait for the replicator to finish20:31
onovyso wait for the next replicator run :)20:31
nttok20:32
*** sgundur has joined #openstack-swift20:33
*** lpabon has quit IRC20:36
blmartinnotmyname:  how does this look20:38
blmartinThe minimum number of hours before a partition can be reassigned is 120:38
blmartinTime until partitions can be rebalanced: 0:38:3720:38
blmartinThe overload factor is 0.00% (0.000000)20:38
notmynameblmartin: what about "The minimum number of hours before a partition can be reassigned is 1 (0:38:37 remaining)"20:40
blmartinI'm not opposed to saving screen real estate20:43
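A hypothetical sketch of how the suggested one-liner could be computed; the names mirror the builder internals discussed below (_last_part_moves_epoch, min_part_hours), but the real patch (https://review.openstack.org/257577) may differ:

    import time
    from datetime import timedelta

    def min_part_hours_remaining(builder):
        # seconds since the builder last credited partition-move time
        elapsed = time.time() - builder._last_part_moves_epoch
        locked = builder.min_part_hours * 3600 - elapsed
        return timedelta(seconds=max(0, int(locked)))

    # e.g. "The minimum number of hours before a partition can be
    # reassigned is 1 (0:38:37 remaining)"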
nttonovy: object-replicator: 809/1024 (79.00%) partitions replicated in 900.01s20:43
onovyntt, perfect20:44
*** dmorita has quit IRC20:50
*** fthiagogv has quit IRC20:56
nttonovy: 1024/1024 (100.00%) partitions replicated20:56
onovyntt, ok. now remove one disk and readd it with the correct zone20:57
*** garthb_ has joined #openstack-swift20:57
onovynotmyname, blmartin: i like that one-line solution. I have some bash scripts for parsing swift-ring-builder dump, so don't add new lines please :))20:59
*** garthb has quit IRC20:59
nttonovy: swift-ring-builder account.builder remove r1z1-192.168.254.15/sdc121:00
nttright?21:00
nttand then21:00
onovyntt, or --id <id>21:00
ntt--id ?21:00
onovybut you are removing r1z1?21:00
nttno...r1z221:00
nttonovy: swift-ring-builder account.builder remove r1z2-192.168.254.15/sdc121:00
onovyswift-ring-builder object.builder dumps builder info. first column is id21:00
onovybut this works too21:01
nttswift-ring-builder account.builder remove r1z2-192.168.254.15/sdc1 and then swift-ring-builder container.builder add r1z1-192.168.254.15:6001/sdc1 500021:01
*** david-lyle has quit IRC21:01
*** haomaiwa_ has joined #openstack-swift21:01
gmmahanotmyname: peluse: mattoliverau: seeing this in my logs when running the tests21:01
onovyntt, yes, but both for account21:02
onovythen both for container, etc.21:02
nttyes21:02
gmmahahttp://paste.openstack.org/show/481879/21:02
gmmahamemory failure causing the kernel to kill testr :(21:02
nttonovy: I did "remove". Now weight is 0 for the disk21:04
nttright?21:04
onovyremove sets weight=0 and flags the disk for removal21:04
nttyes.... done21:05
nttwithout set_weight21:05
nttjust remove21:05
onovyyep21:05
nttnow I have to add?21:05
nttwithout rebalance?21:05
onovyyes21:05
*** mac_ified has joined #openstack-swift21:06
ntt swift-ring-builder object.builder add  r1z1-192.168.254.15:6000/sdc1 500021:06
nttDevice 1 already uses 192.168.254.15:6000/sdc1.21:06
nttThe on-disk ring builder is unchanged.21:06
onovyah, my fault21:06
ntt?21:07
onovyone moment21:07
*** Zyric_ has joined #openstack-swift21:07
*** david-lyle has joined #openstack-swift21:07
onovyyou need to (in a different folder than /etc/swift!): rebalance --force, readd, pretend_min_part_hours_passed, rebalance21:08
*** dslevin_ has joined #openstack-swift21:09
nttmmmmm..... can you help me? I don't understand21:09
gmmahathanks to lifeless. would have never thought to look at that without his suggestion21:09
onovyntt, copy /etc/swift/*.ring.gz and *.builder somewhere else21:10
nttok...now I have 6 files in /root/swift_builder21:11
onovyntt, "cd" to different location. rebalance rings now21:11
nttok.... done21:12
nttI have only 3 disks21:12
*** david-lyle has quit IRC21:12
onovyyep. add the 4th disk back21:12
ntthow?21:12
onovyswift-ring-builder container.builder add r1z1-192.168.254.15:6001/sdc1 5000 ?21:13
nttswift-ring-builder object.builder add  r1z1-192.168.254.15:6000/sdc1 500021:13
*** zhill has joined #openstack-swift21:13
nttok21:13
blmartinhmmmm, so this is a little interesting (assuming I'm reading this correctly).21:13
blmartinevery time rebalance is called, the builder's _last_part_moves_epoch is updated to the current time.21:13
blmartinso if you keep calling rebalance at intervals < 1 hour, the partitions will never register as having aged because no time has passed according to _last_part_moves_epoch.21:14
blmartinIs there a reason for this? otherwise I think this can be fixed by not updating _last_part_moves_epoch if the elapsed_hours is == 021:14
nttok... now I have 4 disks  but last disk with 0 partitions21:14
onovyntt, rebalance21:14
blmartinwell, <= 0 to be safe, have to watch out for those negative hours21:14
nttonovy : http://pastebin.com/XLvk6e9E21:15
onovyntt, pretend_min_part_hours_passed21:15
onovyand then rebalance21:15
nttwait .... I have to rebalance all rings....21:15
nttok.... rebalance is ok21:16
nttnow I have to use pretend_min_part_hours_passed ?21:16
onovyntt, yes and rebalance again. it didn't rebalance enough21:16
nttswift-ring-builder <builder> pretend_min_part_hours_passed    <--- right?21:17
onovyyep21:17
*** jamielennox|away is now known as jamielennox21:17
notmynameblmartin: things devs wrongly believe about time: it always moves forward21:18
onovyntt, are you reading/writing to this cluster now?21:18
flaper87notmyname: tdasilva hey, just read the backlog. TBH, it'd be super awesome to get support from you guys on that driver.21:18
flaper87There are folks using it for sure21:18
notmynameblmartin: http://infiniteundo.com/post/25326999628/falsehoods-programmers-believe-about-time ;-)21:19
flaper87I know Rackspace uses it, HP cloud used to use it, entercloudsuite uses it, etc21:19
nttonovy: actually I stopped the proxy server. But wait.... I used the wrong port for account and container :(21:19
nttsorry21:19
notmynameflaper87: so what's the impetus for not maintaining the swift driver in glance?21:19
notmynameflaper87: what's the current maintenance burden on it21:20
nttonovy: how can I solve?21:20
onovyntt, ideally remove the copy and get fresh files from /etc/swift21:21
nttuff... I'm sorry21:21
nttok.... so I just delete /root/swift_builders/* and then copy files from /etc/swift21:21
ntt?21:21
flaper87notmyname: oh wait, I hope my message didn't come across like I'm trying to get rid of it. I'd like to have commitment from ppl that are using it (and know that code) as it's hard to maintain as it is right now21:21
onovythen do this: rebalance, readd, pretend_min_part_hours_passed, rebalance21:21
onovyntt, in this order ^21:21
flaper87also, that code needs some love, tbh.21:22
notmynameflaper87: I understood it as "if nobody steps up, we're dropping it"21:22
blmartinnotmyname: I like #24, sums a lot of things up21:22
flaper87yeah, that's the message. If you read between the lines you'll hear me screaming: "please step up, please step up" :)21:23
notmynameblmartin: lol21:23
flaper87jokes apart, I'm not as worried about the swift driver as I am about others21:23
flaper87ppl are using it and I'll get someone to maintain it. If it's someone from Swift, it'd be even better as you guys know swift way better than us21:23
notmynameflaper87: which leads me to asking: what about it is broken or needs love? why threaten to deprecate it? why is it painful for the glance team to maintain?21:24
*** badari has joined #openstack-swift21:25
*** dmorita has joined #openstack-swift21:25
nttonovy: I finished21:26
onovyntt, send me swift-ring-builder object.builder pls21:27
flaper87notmyname: the Single/Multi Tenant implementation is fragile. That's part of the feedback I've gotten from folks running it in production. Just ~10% of the glance team knows how that driver works and those folks are not providing as many reviews/care to that driver as we need. There's a refactor to the glance_store library that will require drivers to be refactored too and this will need support from21:28
flaper87folks maintaining those drivers. So, TL;DR: The glance team doesn't have the bandwidth to support all these drivers and we need people interested in these drivers to help maintain them21:28
nttonovy: http://pastebin.com/HQYGysT121:28
gmmahaacoles_: any chance your machine is running more than 4G of RAM?21:29
onovyntt, ok, push it (copy back to /etc)21:29
nttwithout stopping any daemon?21:29
notmynameflaper87: are you anticipating that these drivers move somewhere else or stay in the glance repo?21:30
onovyntt, just proxy stopped21:30
nttok... it is already stopped21:30
onovyntt, so fine, push it21:30
nttonovy: done. 6 files updated.21:31
onovynow wait21:31
onovyand look into logs21:31
nttwhich logs? replicator? and what should I expect?21:32
onovyreplicator21:32
onovyit will move some data21:32
flaper87notmyname: stay in glance repo, not really planning to have them go somewhere else unless they are completely unmaintained (sheepdog, for instance) in which case I'll happily remove them21:32
nttI have an rsync failed21:32
notmynameflaper87: ah interesting. my initial reaction is that if the swift team is maintaining the swift glance driver, it would move to be a repo under swift21:32
onovyntt, pastebin pls21:33
notmynameflaper87: how isolated are the drivers? how much of an official driver interface/API is there?21:33
onovyntt, or you can use our pastebin: http://paste.openstack.org/21:34
ntthttp://paste.openstack.org/show/481883/21:34
nttnow rsync seems to work21:35
onovyntt, you pushed rings at :31:09, more or less?21:35
nttyes21:35
onovyso it's ok21:35
nttbut now replicator is working21:36
flaper87notmyname: the infrastructure to have those drivers living outside glance_store is there. We use stevedore, etc. But there's not really a point, right now, to have them in a separate repo. The API exists but it's a bit "old" and that's part of what we're looking to refactor in this spec: https://review.openstack.org/#/c/188050/21:36
flaper87the spec is still a draft and needs quite some work21:36
nttI see messages that files are moving21:36
onovyntt, that's expected21:36
nttok21:36
nttnow I have to wait for the replicator to finish before restarting the proxy?21:37
flaper87The api for the driver is very simple. It has to implement this class https://github.com/openstack/glance_store/blob/master/glance_store/driver.py#L3421:37
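In outline, a driver subclasses that Store class; a hypothetical skeleton (method names and signatures here are illustrative, the linked driver.py defines the actual contract):

    from glance_store import driver

    class MyStore(driver.Store):
        def get(self, location, offset=0, chunk_size=None, context=None):
            # return an iterator over the stored image bytes plus its size
            raise NotImplementedError

        def add(self, image_id, image_file, image_size, context=None):
            # store the image; return location, size, checksum, metadata
            raise NotImplementedError

        def delete(self, location, context=None):
            raise NotImplementedError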
onovyi thought my solution would be better, but it's not :) but no data lost so far :))21:37
onovyntt, yes, wait21:37
flaper87but again, that comes from when glance_store was in Glance.21:37
flaper87The mistake there was that we pulled it out before refactoring it and now we need to refactor it and have a deprecation path for the old stuff. But I digress21:38
notmynameflaper87: so then what are you asking for? if you don't want "us" (for any definition of not-glance-devs) to move it into another repo and maintain it, what are you looking for?21:38
nttuff... onovy.... I was a bit nervous :)21:38
onovyntt, i'm still a bit nervous :)21:38
nttahahahah :)21:38
onovyntt,  but i think it will be fine21:38
nttyes.... replicator is working fine21:39
mattoliveraumorning21:39
mattoliveraugmmaha: interesting21:39
onovymy solution was wrong, sry for it. it will replicate many partions now, but it will finish21:39
gmmahagood morning mattoliverau21:39
flaper87notmyname: fwiw, it's not that I don't want you to do that. I hadn't thought about maintainers willing to provide support by pulling the driver out. I'd personally be ok with that, TBH.21:39
gmmahayeah, but at least it's not a head scratcher. we know where it's failing :)21:39
notmynameflaper87: I don't know either if it's a good idea or not. I don't know how the swift team would be able to maintain it otherwise though21:40
notmynameflaper87: so if that's not what you were originally thinking, what were you asking for?21:40
flaper87notmyname: what about we both bring this up in $project's meetings and see the reaction of folks about these drivers living outside glance_store21:40
notmynameflaper87: that's actually why I was bringing it up today. to prep for our project meeting ;-)21:41
flaper87notmyname: what I was looking for is a name I can ping whenever there's something broken in the driver, reviews are needed and/or there are bugs to fix.21:41
notmynameflaper87: ok21:41
flaper87I'm more than happy to bring swift folks into glance_store's core if keeping the driver there makes more sense21:41
notmynameflaper87: so more of an advisor than a maintainer. a "liaison" if you will ;-)21:41
flaper87notmyname: sure... someone to blame^Wcall21:42
flaper87:P21:42
mattoliveraulol21:42
notmynameflaper87: ah? so you've got a separate -core team for these?21:42
flaper87notmyname: not yet but that's very simple to set up21:42
notmynameflaper87: does this look like a decent summary? first agenda item https://wiki.openstack.org/wiki/Meetings/Swift21:44
* flaper87 clicks21:44
*** Zyric has joined #openstack-swift21:48
tdasilvanotmyname: flaper87: would it make sense to create a swift-driver-core team made up of swift-core + glance-core?21:49
flaper87notmyname: looks good21:49
notmynameflaper87: ok, thanks21:49
notmynametdasilva: interesting idea21:49
onovyntt, is it done?21:49
flaper87tdasilva: sure, we can do that and give permissions to that team for things under glance_store/_drivers/swift*21:50
tdasilvaright21:50
flaper87it's just a matter of finding the best way to make it work well for you folks and us. The more we can join efforts, the better21:50
*** chlong has quit IRC21:51
notmynameflaper87: FWIW, I think it's great to have the swift driver in glance. because defaults matter and the out-of-the-box experience should be good21:51
mattoliverau+121:52
notmynameflaper87: just didn't go there first in my mind since that's not how I read the email21:52
*** petertr7_away is now known as petertr721:53
flaper87notmyname: I agree. As I said, I hadn't actually thought about *existing* drivers being moved out of glance_store as it'd complicate things a bit more21:53
flaper87so, +1 on creating that joint core team and having you guys be a contact point for the driver21:53
mattoliverautdasilva's idea, or just some point people (more than 1), is where I'm leaning. If the latter, I'm happy to be one of them if that helps keep swift + glance going.21:53
flaper87just gimme a name :D21:53
mattoliverautho I guess I'd be one in whatever way forward we go :P21:54
notmynameflaper87: seems that mattoliverau just volunteered ;-)21:54
* flaper87 hugs mattoliverau and ties him to the etherpad forever21:54
mattoliveraulol21:54
notmynameflaper87: but, seriously, if it's a joint core thing, then it would be PTL, I'd guess. but really a team thing21:54
* mattoliverau needs to think harder in a pre morning coffee state :P21:55
flaper87in the swift and cinder case it's easier because it's OpenStack. There are other drivers where I've no freaking idea what they do (again, sheepdog)21:55
flaper87mattoliverau: mind putting your contact info here: https://etherpad.openstack.org/p/glance-store-drivers-status21:55
flaper87at least for now, we can update it later21:56
flaper87mattoliverau: please, do that before you drink your coffee21:56
flaper87I don't want you to change your mind21:56
flaper87:P21:56
mattoliverauflaper87: sure21:56
*** lcurtis has joined #openstack-swift21:57
mattoliverauflaper87: done21:57
notmynamemattoliverau: thanks (don't overcommit!!)21:58
nttonovy:  22:58:05 proxy1 object-replicator: 1024/1024 (100.00%) partitions replicated21:58
notmynameflaper87: I kinda like tdasilva's idea of the joint core team. I added that to the list of options21:58
notmynameflaper87: our meeting is wednesday. we'll see what happens21:58
flaper87notmyname: tdasilva mattoliverau ++21:59
*** tongli has joined #openstack-swift21:59
nttonovy: can I restart proxy?21:59
flaper87ours is on Thursday, I'll follow up with the Glance team based on the outcome of your meeting21:59
notmynameflaper87: ok. thanks for stopping by with the extra info22:00
flaper87anytime, sorry for the confusing email. Glad you asked/pinged me22:00
*** haomaiwa_ has quit IRC22:01
onovyntt, yep22:01
*** haomaiwang has joined #openstack-swift22:01
nttonovy: it works :)22:02
nttactually there is one missing id in the ring, but I think this is not a problem22:02
openstackgerritBen Martin proposed openstack/swift: Print min_part_hours lockout time remaining  https://review.openstack.org/25757722:02
onovyntt, missing id?22:05
onovyntt, 0 2 3 4 - missing 1? it's ok22:06
nttyes22:08
*** tsg has quit IRC22:08
nttok.... but when I add a new disk, will id 1 be used?22:09
nttis this an important thing?22:09
onovyntt, it's not important22:09
nttok....22:09
onovyjust "primary key"22:09
ntthowever, thank you22:09
onovyntt, you are welcome22:10
nttit seems that swift does a lot of read operations on the disks. Is this normal behavior?22:10
onovyntt, try to stop auditor daemon22:10
onovy(just for test)22:11
nttburst of 10MB on the network (disks are on a SAN, so I see network traffic)22:11
*** tongli has quit IRC22:11
nttyes.... the problem is the auditor daemon22:12
nttis this normal behavior? can I reduce this traffic?22:12
onovyntt, you can lower it22:12
onovythere are a few limits in the config, just tune them22:12
onovyfiles per second, bytes per second, etc.22:12
onovyi have many many small files in my production swift cluster, so i have files per second as low as 522:13
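The knobs onovy is referring to live in the [object-auditor] section of object-server.conf; a sketch with illustrative values (option names as in the sample config of this era, defaults may differ):

    [object-auditor]
    # throttle how aggressively the auditor re-reads objects off disk
    files_per_second = 5
    bytes_per_second = 10000000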
nttmmmmm.... can we talk about this problem tomorrow?22:13
onovyntt, in what TZ are you?22:14
nttI'm in Italy... 11:15PM now22:15
nttand I'm working from 8AM :)22:15
onovyntt, ok, same TZ here (CZ). so tomorrow morning is fine22:16
nttok.... thank you onovy. I need to sleep :)22:17
onovyntt, good night22:17
nttbye22:17
*** ntt has quit IRC22:17
openstackgerritBen Martin proposed openstack/swift: Print min_part_hours lockout time remaining  https://review.openstack.org/25757722:19
*** tsg has joined #openstack-swift22:22
*** zhill has quit IRC22:25
*** petertr7 is now known as petertr7_away22:26
blmartinonovy: Thanks for catching that error22:32
onovyblmartin, np :)22:32
onovyi will read that patch tomorrow22:33
blmartinsounds good to me. That will give me plenty of time to spell check everything else :D22:33
notmynameblmartin: yeah, I think you did catch a bug wrt the last part moves epoch22:35
*** david-lyle has joined #openstack-swift22:39
onovyi bet unit tests will fail22:45
*** nadeem has quit IRC22:47
*** dmorita has quit IRC22:47
*** dmorita has joined #openstack-swift22:48
*** diazjf has quit IRC22:51
notmynamedmorita: I forgot that you were in the US now. I thought you were just keeping really weird hours ;-)22:51
dmoritanotmyname: hehe :)22:52
*** tsg has quit IRC22:53
onovymattoliverau, notmyname, can you look at #256715 #256725 #256597 pls? same three patches22:54
notmynameonovy: looking now22:54
onovythanks :)22:55
notmynameonovy: what's the urgency behind these?22:55
onovypriority: zero22:55
onovyjust want to have my "review queue" not so long22:55
onovynow it has +-200 outgoing reviews :)22:55
onovyand your opinion here would be perfect too: https://review.openstack.org/#/c/251151/22:58
blmartinonovy: locally I had 1 failure and 1 error, I'm looking into it right now22:59
onovyblmartin, ok :)22:59
*** jamielennox is now known as jamielennox|away23:01
*** haomaiwang has quit IRC23:01
*** jamielennox|away is now known as jamielennox23:01
*** haomaiwang has joined #openstack-swift23:01
*** zhill has joined #openstack-swift23:02
notmynameonovy: done23:02
onovythanks23:03
openstackgerritMerged openstack/swift-bench: Deprecated tox -downloadcache option removed  https://review.openstack.org/25672523:03
notmynameit's so nice how quickly things land in that repo23:04
onovyin swift-bench?23:04
notmynameyeah23:04
onovyswauth is fast too :)23:04
onovyno func tests so far23:04
notmynameyup23:04
mattoliverauonovy: will look post meeting (i'm just going into one)23:04
notmynameno long-running gate jobs and no really big gate queue23:04
notmynamemattoliverau: I already approved them all23:04
onovymattoliverau, yep, notmyname already +a that, thanks23:05
mattoliverauTa23:05
*** chlong has joined #openstack-swift23:05
onovyif you can comment #251151 i will be happy :)23:05
*** asettle is now known as asettle-afk23:06
onovyjust say if you ack christian, and i will change that patch23:06
onovyclassic fight: local or utc time...23:06
notmynameonovy: if you use the form "patch 251151" you'll get a helpful clicky link (the trigger is the word "patch")23:06
patchbotnotmyname: https://review.openstack.org/#/c/251151/ - Show local time in swift-recon in replication part23:06
onovyah, that's trigger :) thx23:06
blmartinyeah, the failure is because it compares the output of default to a string embedded in the test. :(23:07
blmartinMaybe I can control time with mock.....23:07
onovyi will try to find where that regexp is and fix it. i want #<number> to work too :)23:07
onovyblmartin, yep, mock is your friend :)23:08
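A minimal sketch of the mock approach: freeze time.time() so that output embedding a countdown is deterministic. run_default() here is a hypothetical stand-in for however the test invokes the ring-builder's default output:

    import mock

    with mock.patch('time.time', return_value=1450000000.0):
        output = run_default()
        # the "(H:MM:SS remaining)" fragment is now stable across runs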
notmynamehurricanerix: gmmaha: mattoliverau: running the testr functests patch again. seeing what I see vs what others are seeing23:11
*** km has joined #openstack-swift23:14
onovygood night23:14
hurricanerixnotmyname: i think that patch hates us =)23:14
openstackgerritJonathan Hinson proposed openstack/swift: WIP Conditional GETs Fix  https://review.openstack.org/25760323:15
mattoliveraunotmyname: mine worked properly on OSX but not the func tests on my SAIO. Might be a memory leak and things not being cleaned up. But I swear it worked last week, so I wonder if something has been updated (or I was very lucky last week)23:15
blmartinGood Morning mattoliverau23:18
mattoliveraublmartin: hey man23:18
*** asettle-afk is now known as asettle23:20
hurricanerixmattoliverau testr gremlins?23:22
hurricanerix=)23:22
mattoliverauhurricanerix: yeah, things that shouldn't be so hard end up taking over your life.. damn testr :P23:23
*** kei_yama has joined #openstack-swift23:32
notmynamemattoliverau: "subunit2pyunit: error: no such option: --no-passthough"23:33
mattoliverauWhat? Yes it does..23:34
notmynamemattoliverau: hmm...works on the command line23:36
*** zhill has quit IRC23:37
*** ho has joined #openstack-swift23:45
hogood morning!23:46
blmartinwhen running the unittests, has anyone gotten an error importing six.viewkeys?23:46
blmartinho: Good morning23:47
hoblmartin: helllo23:47
mattoliverauho: morning23:54
hoblmartin: last week I ran the unit tests but didn't get the error.23:54
homattoliverau: morning!23:55
notmynamemattoliverau: oh interesting! if I use your comment (ie not ostestr) then it suppresses the log spew and looks mostly good. with gmmaha's version using ostestr, I was having problems23:56
blmartinho: Thanks. I think it is probably just me since the gate test did not show the same error23:57
mattoliveraunotmyname: let me try the same thing. Maybe worst case is we don't use ostestr and have another wrapper script in our repo.23:58
notmynamemattoliverau: I'm using the "python setup.py test --coverage ..." line from your comment (put into .functests). no wrappers23:59
notmynamewell tests are still running. maybe it will try to run more than it should23:59
