Monday, 2016-03-14

*** baojg has quit IRC00:13
*** haomaiwang has joined #openstack-swift00:27
*** haomaiwang has quit IRC00:32
*** mingdang1 has joined #openstack-swift00:42
*** chlong has quit IRC00:51
*** takashi has joined #openstack-swift00:51
<takashi> good morning  00:53
*** david-lyle has quit IRC00:54
*** david-lyle has joined #openstack-swift00:54
<kota_> Good morning!  00:56
<kota_> ho, takashi: \o/  00:57
*** km has quit IRC00:58
*** km has joined #openstack-swift00:58
<mingdang1> oh, morning  01:00
*** StraubTW has joined #openstack-swift01:03
*** chlong has joined #openstack-swift01:05
<takashi> kota_, ho_, mingdang1: morning  01:06
<kota_> mingdang1: o/  01:06
*** StraubTW has quit IRC01:07
*** NM has joined #openstack-swift01:29
*** StraubTW has joined #openstack-swift01:40
*** panda has quit IRC01:40
*** panda has joined #openstack-swift01:41
*** NM has quit IRC01:46
*** tsg has joined #openstack-swift01:49
<tsg> kota_: ping  01:50
<kota_> tsg: pong  01:50
<kota_> tsg: alright, we are talking about patch 282578  01:51
<patchbot> kota_: https://review.openstack.org/#/c/282578/ - swift - Set backend content length for fallocate - EC Policy  01:51
<tsg> comments on https://review.openstack.org/#/c/282578/3/swift/proxy/controllers/obj.py - in particular the "TODO" comment at #2116  01:51
<patchbot> tsg: patch 282578 - swift - Set backend content length for fallocate - EC Policy  01:51
<kota_> :)  01:51
<kota_> directly speaking, in my eyes, the current PyECLib.get_segment_info seems to round the trailing remainder into the last segment when it is small compared to the segment_size.  01:52
<kota_> e.g.  01:52
<kota_> (i'm not sure I'm making a correct example)  01:53
<kota_> segment_size = 1MB, data_len = 1MB + 1B -> num_segments = 1, last_segment_size = 1MB + 1B...  01:54
<kota_> like this?  01:54
<kota_> actually, Swift will make 2 segments, i.e. num_segments = 2, last_segment_size = 1B  01:54
<kota_> tsg: is that correct?  01:55
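
A rough sketch of the two behaviors kota_ contrasts above; the function names and arithmetic are illustrative, not the real PyECLib internals:

```python
MB = 1024 * 1024

def segments_pyeclib_style(data_len, segment_size):
    # per kota_'s reading, get_segment_info folds a small remainder
    # into the last segment instead of emitting a tiny extra one
    num = max(data_len // segment_size, 1)
    return num, data_len - (num - 1) * segment_size

def segments_swift_style(data_len, segment_size):
    # Swift's EC PUT path emits a short final segment for any remainder
    num = -(-data_len // segment_size)  # ceiling division
    return num, data_len - (num - 1) * segment_size

print(segments_pyeclib_style(MB + 1, MB))  # (1, 1048577): one 1MB+1B segment
print(segments_swift_style(MB + 1, MB))    # (2, 1): a 1MB segment plus a 1B segment
```
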
*** baojg has joined #openstack-swift01:55
<tsg> kota_: that's how it was planned early on - I assume that's what you see happen?  01:55
<tsg> kota_: is there a bug?  or is this not what's desired?  01:56
<kota_> tsg: I don't know why PyECLib does so for now, i.e. I don't know whether it's desired or not.  01:56
<kota_> but to be simple, I prefer the way current Swift does it, even if it wastes the device space a bit.  01:57
<tsg> kota_: it was a design choice :)  01:58
<kota_> anyway, we found the difference: what PyECLib expects and what Swift does differ.  01:59
<kota_> tsg: exactly  01:59
<tsg> kota_: padding a smaller segment was also a part of the design discussion at the time but we chose to go the other route  01:59
<kota_> tsg: (honestly that's the reason I didn't raise a bug report for it, because it's actually a design choice :))  01:59
<tsg> kota_: I see - did you discover this when playing with the 1MB + 1b* like case?  02:01
<tsg> kota_: (do you have a test case for Swift where this becomes an issue, is the q)  02:01
<kota_> tsg: yup; at first, the first author, Janie, was confused about why the values differ.  02:02
<kota_> wait  02:02
*** haomaiwang has joined #openstack-swift02:04
<kota_> tsg: not found; I might not have saved the test, sorry.  02:04
<jrichli> kota_: the functests that had failed for me were test_slo_copy and test_slo_copy_account  02:04
<tsg> jrichli: o/  02:05
<kota_> jrichli: \o/  02:05
<jrichli> tsg kota_: o/  02:05
<tsg> jrichli: did these tests fail after you added code to handle the "non-chunked" case (ie when CL is in the PUT headers)?  02:06
<jrichli> I would get a 499 on a PUT because fsize != upload_size. actual size was +80  02:06
<jrichli> these tests do not fail with the latest code in the patch  02:07
*** chlong has quit IRC02:07
<tsg> jrichli: ok .. I am curious what made these tests fail (they were passing earlier)  02:07
<kota_> because PyECLib rounds the 1 byte into the previous segment, but Swift transfers the fragment header + 1 byte as the last fragment.  02:07
<kota_> the fragment header is now 80 bytes, right?  02:07
<kota_> tsg: ^^  02:08
<tsg> kota_: correct  02:08
<tsg> kota_: it has always been :)  02:08
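
Back-of-envelope arithmetic for the +80 jrichli saw, assuming (as tsg confirms above) an 80-byte header on every fragment; a sketch, not the proxy's actual accounting:

```python
FRAGMENT_HEADER_LEN = 80  # bytes, per tsg

def archive_len(fragment_payload_lens):
    """Bytes stored for one fragment archive: an 80-byte header in
    front of each per-segment fragment, plus the fragment payload."""
    return sum(FRAGMENT_HEADER_LEN + n for n in fragment_payload_lens)

# Swift sends the trailing byte as its own segment, so its archive has
# one more header than PyECLib's folded layout predicts: off by +80.
assert archive_len([256, 1]) - archive_len([257]) == 80
```
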
<kota_> tsg: does that make sense to you?  02:09
<tsg> kota_: I am cross-checking everything from libec, pyeclib to the original ec put code .. but wondering why these tests didn't fail all this time. :-)  02:10
<tsg> jrichli, kota_: nm .. maybe we were not running functests after all .. sorry  02:10
<kota_> tsg: and currently we don't use get_segment_info for the fallocate, just for chunked transfer.  02:11
<kota_> tsg: it was found when we tried to use it in Swift itself :)  02:11
<tsg> kota_: there is a long history to the "chunked-only" decision :)  02:12
<kota_> tsg: I know, tons of effort :)  02:12
<tsg> kota_: but I do see the value in making sure there is enough disk space before streaming the fragments  02:12
<tsg> kota_: given get_segment_info() is not part of the common(ly used) API and Swift is the only caller at the moment, it should be possible to make a change - let's also chat with Kevin - I will start a thread. thank you for bringing this up!  02:17
<kota_> tsg: thanks a lot!  02:17
<tsg> thank you jrichli! for the patch  02:18
*** tsg has quit IRC02:18
<jrichli> tsg: np!  thank you  02:19
*** chlong has joined #openstack-swift02:20
*** km has quit IRC02:23
*** km has joined #openstack-swift02:24
*** StraubTW has quit IRC02:34
*** StraubTW has joined #openstack-swift02:35
*** StraubTW has quit IRC02:39
*** StraubTW has joined #openstack-swift02:40
*** nadeem has joined #openstack-swift02:50
*** StraubTW has quit IRC02:55
*** haomaiwang has quit IRC03:01
*** haomaiwang has joined #openstack-swift03:01
*** asettle has quit IRC03:20
*** asettle has joined #openstack-swift03:20
*** chlong has quit IRC03:22
*** chlong has joined #openstack-swift03:41
*** asettle has quit IRC03:44
*** sanchitmalhotra has joined #openstack-swift03:45
*** sanchitmalhotra has quit IRC03:47
*** links has joined #openstack-swift03:50
*** ekarlso- has quit IRC04:00
*** haomaiwang has quit IRC04:01
*** haomaiwang has joined #openstack-swift04:01
*** ekarlso- has joined #openstack-swift04:13
*** silor has joined #openstack-swift04:26
*** klrmn has quit IRC04:31
*** treaki__ has joined #openstack-swift04:32
*** ppai has joined #openstack-swift04:33
*** chlong has quit IRC04:35
*** treaki_ has quit IRC04:36
*** chlong has joined #openstack-swift04:47
*** baojg has quit IRC04:52
*** baojg has joined #openstack-swift04:53
*** haomaiwang has quit IRC05:01
*** haomaiwang has joined #openstack-swift05:01
*** haomaiwang has quit IRC05:02
*** chlong has quit IRC05:03
*** chlong has joined #openstack-swift05:16
*** haomaiwa_ has joined #openstack-swift05:18
<openstackgerrit> Brian Cline proposed openstack/swift: Don't report recon mount/usage status on files  https://review.openstack.org/292206  05:18
*** chlong has quit IRC05:22
*** haomaiwa_ has quit IRC05:22
*** haomaiwang has joined #openstack-swift05:33
*** chlong has joined #openstack-swift05:34
*** haomaiwang has quit IRC05:38
*** haomaiwang has joined #openstack-swift05:49
*** haomaiwang has quit IRC05:52
*** haomaiwang has joined #openstack-swift05:52
*** haomaiwang has quit IRC06:01
*** haomaiwa_ has joined #openstack-swift06:01
*** trifon has joined #openstack-swift06:01
*** nadeem has quit IRC06:10
*** nadeem has joined #openstack-swift06:10
*** andreaponza has joined #openstack-swift06:13
<openstackgerrit> OpenStack Proposal Bot proposed openstack/swift: Imported Translations from Zanata  https://review.openstack.org/292217  06:13
*** asettle has joined #openstack-swift06:18
*** ChubYann has quit IRC06:19
*** andreaponza has quit IRC06:34
*** daemontool has quit IRC06:34
*** pcaruana has quit IRC06:39
<openstackgerrit> Nadeem Syed proposed openstack/swift: go: add ability to dump goroutines stacktrace with SIGABRT  https://review.openstack.org/292229  06:46
*** chlong has quit IRC06:48
*** siva_krishnan has joined #openstack-swift06:51
*** ChanServ sets mode: +v cschwede06:53
*** haomaiwa_ has quit IRC07:01
*** haomaiwa_ has joined #openstack-swift07:01
*** bhakta_ has quit IRC07:02
*** Lickitysplitted_ has joined #openstack-swift07:04
*** patchbot has quit IRC07:04
*** patchbot` has joined #openstack-swift07:04
*** andymccr_ has joined #openstack-swift07:05
*** jlhinson_ has joined #openstack-swift07:05
*** patchbot` is now known as patchbot07:05
*** timburke_ has joined #openstack-swift07:06
*** ChanServ sets mode: +v timburke_07:06
*** chrisnelson_ has joined #openstack-swift07:06
*** acorwin_ has joined #openstack-swift07:07
*** asettle has quit IRC07:07
*** bhakta has joined #openstack-swift07:08
*** redbo_ has joined #openstack-swift07:10
*** csmart_ has joined #openstack-swift07:10
*** km has quit IRC07:12
*** cschwede_ has joined #openstack-swift07:14
*** ChanServ sets mode: +v cschwede_07:14
*** Lickitysplitted has quit IRC07:15
*** cschwede has quit IRC07:15
*** redbo has quit IRC07:15
*** jlhinson has quit IRC07:15
*** mathiasb has quit IRC07:15
*** chrisnelson has quit IRC07:15
*** sileht has quit IRC07:15
*** timburke has quit IRC07:15
*** ajiang has quit IRC07:15
*** csmart has quit IRC07:15
*** clyps__ has quit IRC07:15
*** wbhuber has quit IRC07:15
*** dabukalam has quit IRC07:15
*** acorwin has quit IRC07:15
*** andymccr has quit IRC07:15
*** km has joined #openstack-swift07:15
*** cschwede_ is now known as cschwede07:15
*** ajiang has joined #openstack-swift07:16
*** mathiasb has joined #openstack-swift07:16
*** clyps__ has joined #openstack-swift07:16
*** wbhuber has joined #openstack-swift07:16
*** dabukalam has joined #openstack-swift07:16
*** sileht has joined #openstack-swift07:16
*** treyd_ has quit IRC07:18
*** treyd has joined #openstack-swift07:21
*** timur has joined #openstack-swift07:32
*** timur has left #openstack-swift07:32
*** csmart_ is now known as csmart07:34
*** mmcardle has joined #openstack-swift07:37
*** andymccr_ is now known as andymccr07:39
*** timur has joined #openstack-swift07:43
*** mmcardle1 has joined #openstack-swift07:49
*** mmcardle has quit IRC07:51
*** tesseract has joined #openstack-swift07:52
*** tesseract is now known as Guest5918707:52
*** baojg has quit IRC07:58
*** haomaiwa_ has quit IRC08:01
*** haomaiwang has joined #openstack-swift08:01
*** jmccarthy has quit IRC08:02
*** jmccarthy has joined #openstack-swift08:03
*** baojg has joined #openstack-swift08:05
<openstackgerrit> Kota Tsuyuzaki proposed openstack/swift: Fix reclaimable PUT racing .durable/.data cleanup  https://review.openstack.org/289756  08:13
*** rledisez has joined #openstack-swift08:13
<openstackgerrit> Kota Tsuyuzaki proposed openstack/swift: Fix reclaimable PUT racing .durable/.data cleanup  https://review.openstack.org/289756  08:22
*** rcernin has joined #openstack-swift08:25
*** pcaruana has joined #openstack-swift08:43
*** permalac has joined #openstack-swift08:50
<permalac> Guys, how do you manage large swift environments? I'm with a small swift (3 nodes and 3 proxies on the openstack controllers), and this is difficult to follow.  08:51
<permalac> I do not manage to see where my glance images go, and that makes me nervous.  08:51
<permalac> What am I missing?  08:52
<openstackgerrit> Kota Tsuyuzaki proposed openstack/swift: Fix ssync related object-server config docs  https://review.openstack.org/292257  08:52
*** asettle has joined #openstack-swift08:55
*** haomaiwang has quit IRC09:01
*** jordanP has joined #openstack-swift09:02
*** stantonnet has quit IRC09:02
*** stantonnet has joined #openstack-swift09:05
*** haomaiwang has joined #openstack-swift09:06
*** asettle has quit IRC09:09
<mingdang1> @kota_ excuse me, i have a question. When I run a data migration in a swift cluster, how do I ensure the service stays normal?  09:09
*** asettle has joined #openstack-swift09:11
*** asettle has quit IRC09:15
*** McMurlock has quit IRC09:19
*** McMurlock has joined #openstack-swift09:23
*** McMurlock has left #openstack-swift09:24
*** jistr has joined #openstack-swift09:28
*** nadeem has quit IRC09:29
*** acoles_ is now known as acoles09:55
*** haomaiwang has quit IRC10:01
*** haomaiwang has joined #openstack-swift10:01
*** kei_yama has quit IRC10:02
*** ho_ has quit IRC10:07
*** mingdang1 has quit IRC10:19
*** haomaiwang has quit IRC10:23
*** mvk has joined #openstack-swift10:24
*** haomaiwang has joined #openstack-swift10:26
*** haomaiwang has quit IRC10:32
*** haomaiwang has joined #openstack-swift10:33
*** mingdang1 has joined #openstack-swift10:55
*** mingdang1 has joined #openstack-swift10:55
<kota_> mingdang: I'm back  10:58
<kota_> mingdang1: ^^  10:58
<kota_> mingdang1: what does "migrate" mean for you?  10:59
*** haomaiwang has quit IRC11:01
*** haomaiwang has joined #openstack-swift11:01
<mingdang1> when i rebalance a ring, and swift-replicator is working on "update_deleted"  11:02
<mingdang1> @kota_ :)  11:02
*** baojg has quit IRC11:03
<kota_> mingdang1: I think the swift service stays normal the whole way through.  11:05
<mingdang1> yeah?  11:06
<kota_> mingdang1: if you want to know the status of the replication, you can see the log of the object-replicator  11:06
*** silor has quit IRC11:07
<kota_> it might depend on what you mean by 'normal', though.  11:07
<mingdang1> maybe i request an object, but one replica of the object is replicating...  11:07
<mingdang1> after I rebalance, the partition that an object belongs to is moved from one node to another node  11:10
<kota_> yes  11:10
<mingdang1> now i get it: the ring records the node as old, but it is moving to a new node  11:11
<mingdang1> oh, I was wrong. now i get it: the ring records the node as new, and it is moving to the new node  11:13
<kota_> yup  11:13
<kota_> the primary that should have the replica was changed.  11:14
<mingdang1> maybe the object has not moved to the new node completely, and i request it from this node  11:15
<mingdang1> it will return a 404 not found?  11:16
<kota_> mingdang1: it may return 404 *but*  11:17
<kota_> mingdang1: basically Swift will move only one replica of each partition at once  11:17
<kota_> mingdang1: when you do the "rebalance" command for swift-ring-builder  11:18
<kota_> mingdang1: so 2 replicas will still remain on the primary nodes.  11:18
<mingdang1> but those 2 replicas' nodes are old  11:19
<mingdang1> when i get the nodes from the ring i find the new one  11:19
<kota_> mingdang1: and then, if the proxy got 404 not found from the first primary, the proxy will attempt the next primary (which should have one of the replicas)  11:19
<kota_> mingdang1: you mean all primary nodes were replaced with completely fresh new nodes?  11:20
<kota_> mingdang1: can i make sure of your migration scenario?  11:22
<kota_> mingdang1: I thought...  11:22
<mingdang1> when i run the rebalance, not all replicas are changed in the ring?  11:22
<kota_> mingdang1: 1. add new devices to the ring, 2. do rebalance, 3. deploy the ring, 4. wait, 5. remove old devices, 6. do rebalance, 7. deploy the ring, 8. wait  11:23
<kota_> in my scenario.  11:23
<kota_> mingdang1: exactly  11:24
<kota_> mingdang1: except when removing all the devices from the ring, maybe.  11:24
<kota_> except? unless?  11:24
<kota_> lack of English skill on my part :/  11:25
<kota_> just one replica will be moved at once per rebalance.  11:25
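
kota_'s scenario, sketched as a script with subprocess; the builder file, device strings, and weights are placeholders, and the deploy/wait steps are comments because they depend on the deployment:

```python
import subprocess

def srb(*args):
    # thin wrapper around the swift-ring-builder CLI
    subprocess.run(['swift-ring-builder', 'object.builder'] + list(args), check=True)

# 1-4: add new devices, rebalance, deploy the ring, wait for replication
srb('add', 'r1z1-10.0.0.4:6000/d1', '100.0')
srb('rebalance')
# ... copy object.ring.gz to all nodes, wait for replication to settle ...

# 5-8: remove the old devices, rebalance, deploy, wait again
srb('remove', 'r1z1-10.0.0.1:6000/d1')
srb('rebalance')
# ... deploy the new object.ring.gz and wait again ...
```
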
<mingdang1> now I add a new device, and rebalance, and the partition where object A is located is moving to another node  11:25
<mingdang1> now if i get object A, will it return a 404?  11:26
<kota_> mingdang1: did you get a 404?  11:27
<mingdang1> no, no, i'm guessing :)  11:27
<mingdang1> on the basis of the code i read :)  11:28
*** km has quit IRC11:28
<mingdang1> maybe i am wrong  11:28
<kota_> mingdang1: mostly you can get a 200 unless the 2 primaries (which are not moving) are down.  11:29
<mingdang1> swift reads one replica; what if that one is the one not yet moved completely?  11:29
<mingdang1> primaries means?  11:29
<kota_> alright, primary means the nodes that should have the replica in the current ring.  11:30
<kota_> the first 3 devices (if you set 3 replicas) you can see when you run 'swift-get-nodes <ring> account container object'  11:32
<kota_> mingdang1: note that swift can redirect the get request to another node if an object-server is down, not found, whatever for 4xx, 5xx.  11:34
<kota_> mingdang1: exactly; the proxy tries to read one replica, and the replica might be moving due to the rebalance, but the proxy can get the object from another node.  11:35
<mingdang1> when I run rebalance and don't copy the ring to the storage nodes, the storage nodes have the old ring and the proxy the new ring?  11:37
<kota_> mingdang1: wow, scary  11:37
<kota_> mingdang1: I think you shouldn't do so.  11:38
<kota_> mingdang1: that means the Swift cluster has 2 different rings, right?  11:38
<mingdang1> yes?  11:39
<mingdang1> yes  11:39
*** ujjain- is now known as ujjain11:39
<mingdang1> if i don't copy the ring to the storage nodes manually, how do i ensure the ring is the same?  11:40
<kota_> Swift is designed so that all nodes have the same ring.  11:40
<kota_> depends on operation, e.g. md5sum?  11:41
<mingdang1> where do you run rebalance ?  11:41
<kota_> outside of the Swift cluster  11:42
<kota_> on what we'd call something like a management node  11:42
<mingdang1> yes  11:42
<mingdang1> then?  11:42
<kota_> deploy the ring to the nodes in various ways  11:43
<mingdang1> is there any process to ensure they're the same?  11:43
<kota_> e.g. ansible? scp? git and agent pulling?  11:43
<kota_> depends on your operation model.  11:43
<kota_> no process in Swift itself.  11:44
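
A minimal sketch of the out-of-band check kota_ suggests, assuming ssh access and placeholder hostnames (swift-recon --md5 can run a similar comparison for you):

```python
import subprocess

NODES = ['proxy1', 'storage1', 'storage2', 'storage3']  # placeholders

def ring_md5(host, ring='/etc/swift/object.ring.gz'):
    # md5sum prints "<digest>  <path>"; keep just the digest
    out = subprocess.check_output(['ssh', host, 'md5sum', ring])
    return out.split()[0]

sums = {host: ring_md5(host) for host in NODES}
if len(set(sums.values())) != 1:
    print('ring mismatch:', sums)
```
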
<mingdang1> oh  11:44
<mingdang1> maybe i left out a storage node..... :(  11:45
<kota_> hmm  11:47
<kota_> mingdang1: you can see how HPE cloud managed the ring here: https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/maintaining-and-operating-swift-at-public-cloud-scale  11:49
<kota_> mingdang1: starting at about 26:00-ish  11:50
*** cdelatte has joined #openstack-swift11:52
*** cdelatte has quit IRC11:53
*** cdelatte has joined #openstack-swift11:53
<mingdang1> @kota_ ok, thanks very much :)  11:56
*** haomaiwang has quit IRC12:01
*** haomaiwang has joined #openstack-swift12:01
*** chlong has joined #openstack-swift12:01
*** ppai_ has joined #openstack-swift12:01
*** ppai has quit IRC12:03
*** delattec has joined #openstack-swift12:25
*** cdelatte has quit IRC12:28
*** silor has joined #openstack-swift12:32
*** haomaiwang has quit IRC12:32
*** NM has joined #openstack-swift12:35
*** silor has quit IRC12:39
*** silor has joined #openstack-swift12:39
*** MVenesio has joined #openstack-swift12:40
*** silor1 has joined #openstack-swift12:42
*** silor has quit IRC12:44
*** silor1 is now known as silor12:44
*** links has quit IRC12:46
*** StraubTW has joined #openstack-swift12:46
*** links has joined #openstack-swift12:46
*** ppai_ has quit IRC12:50
*** _JZ_ has joined #openstack-swift12:59
*** StraubTW has quit IRC13:01
<openstackgerrit> Gleb Samsonov proposed openstack/swift: go: proxyserver's version with mongodb backend This is our implemetation for some swift proxy functions. Not product-ready yet.  https://review.openstack.org/287157  13:01
*** delatte has joined #openstack-swift13:03
*** haomaiwang has joined #openstack-swift13:05
*** delattec has quit IRC13:05
*** esker has quit IRC13:13
*** esker has joined #openstack-swift13:14
*** yarkot_ has joined #openstack-swift13:19
*** StraubTW has joined #openstack-swift13:24
*** BigWillie has joined #openstack-swift13:24
<openstackgerrit> Gleb Samsonov proposed openstack/swift: go: proxyserver's version with mongodb backend This is our implemetation for some swift proxy functions. Not product-ready yet.  https://review.openstack.org/287157  13:25
*** yarkot_ has quit IRC13:27
*** cbartz has joined #openstack-swift13:34
*** panda has quit IRC13:40
*** panda has joined #openstack-swift13:40
<openstackgerrit> Gleb Samsonov proposed openstack/swift: go: proxyserver's version with mongodb backend This is our implemetation for some swift proxy functions. Not product-ready yet.  https://review.openstack.org/287157  13:41
*** mvk has quit IRC13:41
*** mingdang1 has quit IRC13:47
*** haomaiwang has quit IRC14:01
*** haomaiwang has joined #openstack-swift14:01
*** ig0r_ has joined #openstack-swift14:02
*** ametts has joined #openstack-swift14:04
*** tongli has joined #openstack-swift14:10
*** daemontool has joined #openstack-swift14:10
*** mvk has joined #openstack-swift14:12
*** asettle has joined #openstack-swift14:12
*** david-lyle has quit IRC14:16
*** david-lyle has joined #openstack-swift14:19
*** asettle has quit IRC14:19
*** pcaruana has quit IRC14:28
*** CaioBrentano has joined #openstack-swift14:35
*** vinsh has joined #openstack-swift14:38
<gmmaha> good morning  14:40
*** gmmaha has left #openstack-swift14:42
*** gmmaha has joined #openstack-swift14:43
*** zaitcev has joined #openstack-swift14:45
*** ChanServ sets mode: +v zaitcev14:45
*** cbartz has left #openstack-swift14:49
*** twm2016 has joined #openstack-swift14:56
*** haomaiwang has quit IRC15:01
*** tmoreira has quit IRC15:01
*** haomaiwa_ has joined #openstack-swift15:01
<pdardeau> good morning gmmaha  15:04
<gmmaha> pdardeau: o/  15:04
*** nchristia has joined #openstack-swift15:04
*** tmoreira has joined #openstack-swift15:05
<twm2016> Hello everyone, I am working on this bug https://bugs.launchpad.net/swift/+bug/1537811 and am trying to write a functional test. I have never done this before and am looking for some guidance. I think I want to add an if statement before this one here: https://github.com/openstack/swift/blob/master/test/functional/swift_test_client.py#L292  15:08
<openstack> Launchpad bug 1537811 in OpenStack Object Storage (swift) "204 No Content responses have Content-Length specified" [Low,In progress] - Assigned to Trevor McCasland (twm2016)  15:08
*** gmmaha has quit IRC15:08
*** gmmaha has joined #openstack-swift15:08
<twm2016> My change checks for response status 204 and removes the "Content-Length" header if it exists.  15:09
<twm2016> I have the unit test written but the functional test is a bit different.  15:09
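
The shape of the change twm2016 describes, as a sketch rather than the actual patch: strip Content-Length from 204 responses, and have the test assert its absence:

```python
def strip_content_length_on_204(status_int, headers):
    """Drop Content-Length from a 204 No Content response; a 204
    carries no body, so the header should not be present.  `headers`
    is a plain dict here for illustration."""
    if status_int == 204:
        headers.pop('Content-Length', None)
    return headers

# what a functional test would then assert about a 204 response:
headers = strip_content_length_on_204(204, {'Content-Length': '0'})
assert 'Content-Length' not in headers
```
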
*** arch-nemesis has joined #openstack-swift15:10
*** klrmn has joined #openstack-swift15:13
*** esker has quit IRC15:15
*** corvus is now known as jeblair15:21
*** daemontool has quit IRC15:22
*** daemontool has joined #openstack-swift15:22
*** links has quit IRC15:23
*** ig0r__ has joined #openstack-swift15:23
*** StraubTW has quit IRC15:23
*** ig0r_ has quit IRC15:26
*** StraubTW has joined #openstack-swift15:26
*** tsg has joined #openstack-swift15:29
<siva_krishnan> good morning!  15:40
*** fthiagogv has joined #openstack-swift15:44
*** zul has joined #openstack-swift15:50
*** garthb has joined #openstack-swift15:53
<notmyname> good morning, everyone  15:53
*** proteusguy_ has quit IRC15:54
<notmyname> I was kinda absent late last week so I could recover from whatever sickness I had. I hope to catch up today  15:54
<notmyname> jrichli: I hope you're feeling better, too  15:54
<jrichli> notmyname: thanks, my plan to sleep it off worked!  It never really fully developed :-)  15:55
<jrichli> glad you are feeling better!  15:56
<notmyname> great :-)  15:57
*** haomaiwa_ has quit IRC16:01
*** haomaiwang has joined #openstack-swift16:01
*** StraubTW has quit IRC16:01
*** daemontool has quit IRC16:01
*** StraubTW has joined #openstack-swift16:04
*** pcaruana has joined #openstack-swift16:09
*** proteusguy_ has joined #openstack-swift16:11
<jidar> is there no documented process for recovering objects from quarantine?  16:13
<jidar> I've searched through a few books and some of the docs and the admin guide, and while it's mentioned a few times I don't actually see the process outlined anywhere  16:13
<notmyname> jidar: quarantined objects are put into a "quarantine" directory (sibling of "objects"). you can examine them there. however, note that they are only put there if something is wrong with them. so you almost certainly don't want to put them back  16:14
<notmyname> after a replica is quarantined, replication will replace that replica with a good one (ie copy over another replica)  16:15
<jidar> notmyname: I can explain why they are there, but the solution has already been run (path_hash settings gone awry)  16:15
<notmyname> ah. yikes  16:15
<jidar> for a few hours the hash was wrong  16:15
<jidar> now I've got a bunch of glance images sitting there unable to be used  16:15
<notmyname> ok, so you got a lot of good stuff into the quarantine directory and want to put it back. is that for every drive or just for one drive?  16:16
<jidar> three servers, all controllers and all under /srv/node/d1  16:16
<openstackgerrit> Trevor McCasland proposed openstack/swift: Remove Content-Length from 204 No Content Response  https://review.openstack.org/291461  16:17
*** gyee has joined #openstack-swift16:17
*** dmorita has joined #openstack-swift16:19
<notmyname> so under quarantine, you have objects/<hash>/<ts>.data  16:19
<jidar> something along these lines: /srv/node/d1/quarantined/objects/04b27334c0de225af769837593324876/1452023491.12492.data  16:19
<notmyname> and under objects (the good one), you have the pattern <part>/<suffix>/<hash>/<ts>.data  16:20
<notmyname> so here's how that works  16:20
<notmyname> note that they both have <hash>. that's the same thing  16:20
<jidar> similar: /srv/node/d1/objects/532/f1e/852369f73b2efe65167c43af382b0f1e/1452030564.69473.ts  16:21
<acoles> notmyname: glad you're feeling better. i'm confused by the status of this patch 289890, which seems to have stalled - I can only wonder that maybe you needed to add your +2 *followed by* your +A  16:21
<patchbot> acoles: https://review.openstack.org/#/c/289890/ - python-swiftclient (stable/liberty) - Do not reveal auth token in swiftclient log messag...  16:21
*** twm2016 has quit IRC16:21
<notmyname> it's the hash of the object name according to the ring (including the hash_suffix and hash_prefix values in swift.conf)  16:21
<acoles> notmyname: along with patch 284644  16:21
<patchbot> acoles: https://review.openstack.org/#/c/284644/ - python-swiftclient (stable/liberty) - Fix the http request headers being overwritten in ...  16:21
<zaitcev> do the renames while the daemons are stopped  16:22
<notmyname> jidar: yes, what zaitcev said  16:22
<jidar> oh  16:22
<notmyname> jidar: so, the other parts of the path  16:22
<jidar> just take down one of my swift servers and move the directories, restart services?  16:22
<notmyname> jidar: the <suffix> is the last 3 characters of the hex representation of the hash  16:22
*** klrmn has quit IRC16:23
<notmyname> jidar: yeah. "just" ;-)  16:23
<jidar> notmyname: hahaha  16:23
<notmyname> jidar: so for your hash 852369f73b2efe65167c43af382b0f1e, see that the suffix is f1e  16:23
<notmyname> ok, so the last piece is the partition. that's the decimal representation of the ring partition the object hashes to  16:24
<zaitcev> jidar: most likely, if the objects are recovered properly, as soon as one of them is up it'll replicate to the rest, meaning 2x space everywhere while quarantine is still full of them; make sure there is enough space  16:24
<notmyname> jidar: so if your part power is 12, then the partition is "int(hash, 16) >> (32-12)"  16:24
<jidar> I suppose this is why mirantis made this post : https://www.mirantis.com/blog/openstack-swift-hash_path_suffix-can-go-wrong/  16:25
<notmyname> wait, that snippet was wrong. trying to figure out what it should be  16:26
<zaitcev> I'd try to get the object name, including the account after auth and the container, then run swift-get-nodes instead of calculating  16:27
<notmyname> zaitcev: yeah, that's probably best  16:28
<jidar> so I've done a few swift-object-info commands  16:28
<notmyname> jidar: what is your part power?  16:28
<jidar> I don't see it defined in the config  16:28
*** dmorita has quit IRC16:29
<notmyname> jidar: swift-ring-builder will tell you the number of partitions. what's that?  16:29
<jidar> 1024 partitions  16:30
<jidar> on all hosts  16:30
<notmyname> ok, so that's a part power of 10  16:30
<notmyname> 2**10 == 1024  16:30
<notmyname> so the actual math is "int(x, 16) >> (128-10)". but zaitcev is right that using swift-object-info would probably be safer  16:31
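
That math as a function, checked against the path jidar pasted above (part power 10, since 2**10 == 1024):

```python
def partition(hexdigest, part_power=10):
    # the partition is the top `part_power` bits of the 128-bit md5
    return int(hexdigest, 16) >> (128 - part_power)

# matches /srv/node/d1/objects/532/f1e/852369f73b2efe65167c43af382b0f1e/...
print(partition('852369f73b2efe65167c43af382b0f1e'))  # -> 532
```
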
*** Guest59187 has quit IRC16:31
<jidar> https://gist.github.com/0b5bbe6a16ee58b7cc9b  16:31
<jidar> an example of swift-object-info output  16:32
<notmyname> sorry, I meant to say swift-get-nodes  16:33
<notmyname> so you could take one object you know of. suppose it's "AUTH_foo/bar_container/my_awesome_image.quuz". and you'd run `swift-get-nodes /etc/swift/object.ring.gz AUTH_foo/bar_container/my_awesome_image.quuz`  16:33
<notmyname> jidar: so I get something like https://gist.github.com/notmyname/529fe493d21b3360bd14 on my dev box  16:33
<zaitcev> and hopefully it matches lines 31-33 in the gist, assuming the hash prefix is correct now  16:34
<zaitcev> and suffix  16:34
<notmyname> which gives you a path name on the right servers to use (note this is based on the current values in swift.conf, so make sure that is right)  16:34
<notmyname> yeah  16:34
<zaitcev> Sorry, I keep talking across you...  16:34
<notmyname> and then move one into the right place and run replication (eg `swift-init object replication once`) and you should get it into the other places  16:34
<notmyname> zaitcev: no, you're saying all the right things :-)  16:35
<jidar> let me try with the AUTH_ part filled out and a correct glance image ID  16:35
<acoles> notmyname: ohh! so they just needed a simple 'recheck'! thanks  16:37
<jidar> https://gist.github.com/b12394f7071c8d7dfcd4  16:38
<jidar> what I'm having trouble figuring out is the portion listed in quarantine, with the .data bits on it  16:38
<notmyname> acoles: I hope :-)  16:39
<acoles> notmyname: they went to zuul this time, so far so good  16:39
<notmyname> jidar: keep those the same. that's the timestamp of when the object was created  16:40
<jidar> from this: /srv/node/d1/quarantined/objects/04b27334c0de225af769837593324876/1452023491.12492.data, removing quarantined, and replacing the last few bits, what .... hurm  16:40
<notmyname> jidar: in swift's on-disk format, an object is actually a directory. so just keep the contents of that directory the same  16:40
<jidar> so am I just renaming the object-id there?  16:41
<jidar> the 04b27334c0de225af769837593324876 bit?  16:41
<jidar> sorry to be a bit daft at this, I haven't really gotten to work with this very much prior to having an issue :(  16:42
*** lyrrad has joined #openstack-swift16:43
<notmyname> jidar: right. that's the hash. you are keeping the 1452023491.12492.data file and moving it to a different directory. that's basically it. (the trick is putting it into the *right* directory)  16:43
<notmyname> jidar: also, you need to go to the root of your data drives and run `find . -name \*.pkl` and delete anything you find  16:45
<notmyname> jidar: note that all of this work is going to (1) result in a *lot* (like 100%) of data movement in your cluster (2) totally should only be a last resort (3) pretty much an unsupported use case (4) definitely will have downtime in your cluster  16:46
<notmyname> jidar: basically, if it's at all possible to reupload the data, that will be easier and safer  16:46
*** cdelatte has joined #openstack-swift16:48
*** delatte has quit IRC16:51
*** dmorita has quit IRC16:52
*** delattec has joined #openstack-swift16:53
<jidar> heh  16:54
<jidar> that's the conclusion I've come to  16:54
<jidar> I don't mind moving the data, it's only 20 gigs or so  16:54
<jidar> even if that's 2 or 3x over, it's all on 10gigE  16:55
<notmyname> lesson 0: don't change the hash path suffix or prefix. take those notes in the sample config file seriously ;-)  16:55
<jidar> notmyname: full disclosure, running the tripleo overcloud deploy command from the wrong directory results in a new hash_suffix being created  16:55
<zaitcev> I suspect someone has run TripleO or Director twice  16:55
<jidar> hahahahahaha  16:55
<notmyname> yikes  16:55
<zaitcev> or that  16:56
<notmyname> actually, that's what I was about to ask  16:56
<jidar> zaitcev: hit the nail on the head  16:56
*** rcernin has quit IRC16:56
*** cdelatte has quit IRC16:56
<notmyname> what is it we can do on the swift side to prevent this from happening?  16:56
<jidar> you can run it twice, but it has to be from the same directory  16:56
*** chlong has quit IRC16:56
<notmyname> what is it about those things that causes this to happen?  16:56
*** dmorita has joined #openstack-swift16:56
<jidar> so the undercloud doesn't know what the overcloud's hash_suffix is at run time; it's going to create a new one  16:57
<jidar> is hash_suffix there for security reasons?  16:57
<jidar> I'd seen some people advocating not using it  16:57
<notmyname> you should use both hash_suffix and hash_prefix. they are mixed into the hashing so that an end user can't target a particular partition and attack the cluster (or, in general, know the hash of an object)  16:58
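
Roughly how that mixing works, modeled on swift.common.utils.hash_path (a sketch; the prefix/suffix values are placeholders for what's in swift.conf):

```python
from hashlib import md5

def hash_path(account, container, obj, prefix=b'', suffix=b'changeme'):
    # prefix and suffix are folded into the md5 of the object path,
    # which is why changing either one relocates every object
    path = '/' + '/'.join((account, container, obj))
    return md5(prefix + path.encode('utf-8') + suffix).hexdigest()

# a different suffix yields a completely different on-disk location
assert hash_path('AUTH_a', 'c', 'o') != hash_path('AUTH_a', 'c', 'o', suffix=b'other')
```
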
<zaitcev> I saw clusters with hash_suffix=%CHANGEME% (straight from RPMs)  16:58
<notmyname> :-(  16:59
*** haomaiwang has quit IRC17:01
*** haomaiwang has joined #openstack-swift17:01
<jidar> man, this sucks, to go back to my customers and tell them to re-upload :/  17:01
<zaitcev> You can probably re-upload for them... You know their credentials, right? You have the object's body in the quarantine directory. It's the same actual data; the etag is going to be the same.  17:02
<jidar> oh I see  17:02
<zaitcev> slip in curl in the night  17:02
<zaitcev> nobody will ever know  17:03
<jidar> I'm still not 100% sure I understand how to do that though  17:03
<jidar> let me poke around with it a bit  17:03
*** nadeem has joined #openstack-swift17:03
<jidar> like how do I find all of the right .data objects to throw into a directory?  17:05
<notmyname> it's all of the ones in the quarantine directory, right?  17:06
<jidar> right, but they belong to different objects, no?  17:08
<jidar> [root@overcloud-controller-0 objects]# find . -iname \*.data -exec swift-object-info '{}' \; | grep ETag  17:08
<jidar> ETag: 50bdc35edb03a38d91b1b071afb20a3c (valid)  17:08
<jidar> for instance, do I find everything with the same ETag and throw them together?  17:09
<notmyname> no, I wouldn't do that  17:10
<notmyname> the etag is the md5 of the contents. I can upload the same object to different names and get the same etag for both  17:11
<jidar> it looks like I have about 20 ETags  17:11
<jidar> but about 100 objects  17:11
<jidar> er, .data files  17:11
<notmyname> you shouldn't worry about the etag at all for anything you're doing here (I don't think)  17:11
<notmyname> you only care about the object names and the .data files  17:11
<notmyname> do you have anything other than the .data files? do you have any .meta files or .ts files?  17:12
<jidar> no, not in quarantine  17:12
<notmyname> ok, that's good  17:12
<notmyname> and just to check, do you have more than one storage policy?  17:12
<jidar> let me double check  17:12
<jidar> don't think so  17:13
<notmyname> good  17:13
<notmyname> and it's a replicated policy?  17:13
<jidar> yea  17:13
<notmyname> good  17:13
<jidar> so these two files belong to the same etag, ./a426aa3c2de4acf1809691ef515dbb7f/1450768165.06062.data ./98ed8dcf30af8dadb0deadf46e08a203/1450773487.96097.data, but they have no data or object id that looks similar to them  17:16
<jidar> nothing based on the file name is what I mean  17:16
<jidar> how do I know to upload them together?  17:16
<notmyname> what do you mean?  17:16
<jidar> is this not multi-part data?  17:16
<notmyname> if you run swift-object-info on them, you should see that they have different metadata  17:16
<notmyname> doesn't matter if it's part of a large object or not. you don't care (just like you shouldn't care what the etag is)  17:17
<jidar> https://gist.github.com/cdcc5f15cc2d157f50da  17:18
<notmyname> oh, when you do the copy to the other directory, be sure you're using the cp option that preserves extended attributes  17:18
<jidar> so these files, even though they have the same ETag, are not part of the same object?  17:18
<notmyname> doesn't matter. stop worrying about the etag  17:18
<jidar> will do!  17:18
*** klrmn has joined #openstack-swift17:18
<notmyname> yes. you could worry about the etag and probably get some network efficiency. but that is an optimization that will only add complexity  17:19
<jidar> it looks like all objects are limited to 200mb?  17:19
<notmyname> treat the etag like any other piece of metadata: an opaque blob of bytes. you do not care about any of it  17:19
<notmyname> you only want to make sure that the object is in the right on-disk place based on its name  17:19
<notmyname> so in this case, you can take the account, container, and object name reported by swift-object-info, then copy it to the right place (preserving xattrs). that's it  17:20
<notmyname> so eg with that one you just pasted...  17:20
<notmyname> if you're on the .11 machine, then copy the .data file to the directory <mount point location>/d1/objects/158/63e/27b89b0e67aff4385678b1c4bc19b63e  17:21
<notmyname> after you make sure that directory exists  17:21
<notmyname> (that's for the .06062.data file)  17:22
<jidar> so objects/158 exists, but not objects/158/63e  17:23
<jidar> and objects/158/hashes.pkl is in there  17:23
<notmyname> do all the file moving first, then delete the hashes.pkl. then start replication  17:23
<notmyname> hmm...  17:24
<notmyname> you said you have 3 servers, right?  17:24
<notmyname> only one drive on each?  17:24
<jidar> yea, so just for clarity's sake I'm thinking, unless you want to correct me, that it might just be easier to try and upload these behind the scenes  17:24
<jidar> yes  17:24
<jidar> as new images  17:24
*** jordanP has quit IRC17:25
<notmyname> do you have the same set of quarantined objects on each?  17:25
<jidar> let me double check  17:25
<notmyname> I think you'll be able to recover this without having to reupload  17:25
<jidar> yea, all 15 gigs  17:25
*** twm2016 has joined #openstack-swift17:25
*** rledisez has quit IRC17:25
<jidar> on all 3 servers, same directories and everything  17:25
<notmyname> cool. so you should be able to do this on just one machine and then it will move it out. that's more inefficient from a network sense, but simpler from your recovery script perspective  17:26
<notmyname> ie you can do it on one machine, run replication, and things should be good  17:26
<jidar> yea, and because the data set is small, it wouldn't take long  17:27
<notmyname> right  17:27
<notmyname> so for every .data file in quarantine, run swift-object-info, find the correct place it should be, and move it back to that place. that's it  17:27
<notmyname> (assuming you've already corrected your hash suffix/prefix)  17:27
<jidar> while the swift services are down; and after I'm done, run the replication once  17:28
*** jistr has quit IRC17:28
<notmyname> yeah, it's probably best to do it with swift replication turned off. if you can afford the downtime, you could turn everything off  17:28
<jidar> let me try this once and see what happens  17:28
<notmyname> then after moving it, start up the main services, and run replication once. check that it's ok, then start up everything normally  17:29
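
A sketch of that recovery loop under the assumptions above (one drive per server, hash prefix/suffix already fixed, services stopped). The quarantined directory name is the *old* hash, so the new location is recomputed from the object name; `parse_path_from` is a hypothetical helper for reading the account/container/object out of swift-object-info output:

```python
import os
import subprocess

DRIVE = '/srv/node/d1'
PART_POWER = 10
QUARANTINE = os.path.join(DRIVE, 'quarantined', 'objects')

for hash_dir in os.listdir(QUARANTINE):
    src_dir = os.path.join(QUARANTINE, hash_dir)
    for name in os.listdir(src_dir):
        if not name.endswith('.data'):
            continue
        info = subprocess.check_output(['swift-object-info', os.path.join(src_dir, name)])
        account, container, obj = parse_path_from(info)   # hypothetical parser
        new_hash = hash_path(account, container, obj)     # earlier sketch, with the real prefix/suffix
        part = int(new_hash, 16) >> (128 - PART_POWER)
        dest = os.path.join(DRIVE, 'objects', str(part), new_hash[-3:], new_hash)
        os.makedirs(dest, exist_ok=True)
        # cp -a keeps the xattrs, where swift stores object metadata
        subprocess.run(['cp', '-a', os.path.join(src_dir, name), dest], check=True)

# afterwards, per notmyname: delete stale hashes.pkl files, then run
#   swift-init object-replicator once
```
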
<notmyname> try it once == try with one file?  17:29
<jidar> I'm thinking yes  17:29
<openstackgerrit> Trevor McCasland proposed openstack/swift: Remove Content-Length from 204 No Content Response  https://review.openstack.org/291461  17:30
*** nadeem has quit IRC17:34
*** nadeem has joined #openstack-swift17:35
*** panda has quit IRC17:40
*** panda has joined #openstack-swift17:40
*** dmorita_ has joined #openstack-swift17:43
*** dmorita has quit IRC17:45
<openstackgerrit> Merged openstack/swift: go: add ability to dump goroutines stacktrace with SIGABRT  https://review.openstack.org/292229  17:47
*** chsc has joined #openstack-swift17:48
*** alejandrito has joined #openstack-swift17:51
<clayg> heyoh!  17:55
*** haomaiwang has quit IRC18:01
*** haomaiwang has joined #openstack-swift18:01
*** nadeem has quit IRC18:05
*** mmcardle1 has quit IRC18:05
*** mvk has quit IRC18:09
*** fthiagogv has quit IRC18:12
*** fthiagogv has joined #openstack-swift18:12
*** fthiagogv has quit IRC18:13
*** fthiagogv has joined #openstack-swift18:13
*** ejat has quit IRC18:13
*** ejat has joined #openstack-swift18:14
*** ejat has quit IRC18:14
*** ejat has joined #openstack-swift18:14
<openstackgerrit> Merged openstack/swift: Imported Translations from Zanata  https://review.openstack.org/292217  18:15
*** nadeem has joined #openstack-swift18:15
*** permalac has quit IRC18:19
<openstackgerrit> Trevor McCasland proposed openstack/swift: Remove Content-Length from 204 No Content Response  https://review.openstack.org/291461  18:21
*** ChubYann has joined #openstack-swift18:34
*** twm2016 has quit IRC18:34
*** gyee has quit IRC18:46
*** haomaiwang has quit IRC19:01
*** haomaiwang has joined #openstack-swift19:01
*** zul has quit IRC19:04
*** mvk has joined #openstack-swift19:12
*** esker has joined #openstack-swift19:20
<acoles> notmyname: i'm holding my breath...  19:33
<notmyname> acoles: have I ever mentioned that I don't like gerrit, and I think the new gerrit is worse than the old? ;-)  19:33
<acoles> notmyname: can you imagine how long i stared at gerrit looking for some clue before summoning the courage to raise my hand in -infra ? ;)  19:34
<notmyname> I can now!  19:34
<acoles> +1 for -infra though, immediate helpful response  19:35
<acoles> notmyname: ...and they're in the gate queue :) thanks for clicking the right places the right number of times!  19:36
<notmyname> when all else fails, I'm happy to do it all over again in a different order  19:36
<acoles> good night  19:37
<notmyname> acoles: thanks for tracking it down. good night  19:37
*** acoles is now known as acoles_19:38
<briancline> so I know nobody likes to talk about tempest... but why wouldn't it have failed while testing patch #291461?  19:46
<patchbot> briancline: https://review.openstack.org/#/c/291461/ - swift - Remove Content-Length from 204 No Content Response  19:46
<briancline> seeing as it's explicitly checking for it at least here: https://github.com/openstack/tempest/blob/master/tempest/api/object_storage/test_account_services.py#L68  19:47
*** insanidade has joined #openstack-swift19:56
<insanidade> hi all. quick question: is it possible to server  19:56
<insanidade> oops. sorry.  19:56
<insanidade> hi all. quick question: is it possible to copy data from one swift cluster to another swift cluster without downloading and uploading all data?  19:57
*** macgyver_ has joined #openstack-swift20:00
*** haomaiwang has quit IRC20:01
*** haomaiwang has joined #openstack-swift20:01
<insanidade> anyone ?  20:01
<MooingLemur> if both clusters have it enabled and available, you can do per-container sync.  It's not necessarily fast  20:02
<insanidade> MooingLemur: but that would be a better solution than downloading and then uploading every file, right ?  20:03
*** ig0r_ has joined #openstack-swift20:03
<MooingLemur> it's the same amount of data transfer happening either way  20:04
<MooingLemur> I have some 10Gb-connected servers that I could use to download/reupload, so that'd probably be how I'd do it.  20:04
<insanidade> MooingLemur: it takes me a day and a half just to download all the data.  20:05
<MooingLemur> container-sync might take longer; depends on how many objects and how fast the pipes are  20:05
<insanidade> MooingLemur: my understanding from your first sentence is that I could make both clusters sync if that feature was enabled on both sides. am I wrong ?  20:06
<MooingLemur> container-sync will (somewhat lazily) migrate all objects and subsequent changes from one container to another (on a different cluster, or the same cluster)  20:06
*** ig0r__ has quit IRC20:07
<MooingLemur> and by migrate, I mean it'll make a copy of all uploads and propagate deletes  20:08
<insanidade> MooingLemur: hmmm. I've managed to sync containers in the same cluster. Would it be a much different task to sync containers in different clusters ?  20:08
<MooingLemur> do you control both clusters?  20:10
<insanidade> yes  20:11
<insanidade> I mean: I have an account in both clusters but I do not configure them.  20:11
<insanidade> so I don't control them. I'm a user.  20:11
*** MVenesio has quit IRC20:12
<MooingLemur> so, the target cluster might have to be configured by the administrator to allow container-sync from the source cluster.  20:14
<MooingLemur> http://docs.openstack.org/developer/swift/overview_container_sync.html explains the feature both in terms of cluster configuration and as a user  20:14
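
The user-side half, per that doc, using python-swiftclient; every value below is a placeholder, and both clusters' operators must have set up the realm in container-sync-realms.conf first:

```python
from swiftclient import client

url, token = 'http://cluster1/v1/AUTH_src', 'tk-placeholder'

# point the source container at the destination and set a shared key
client.post_container(url, token, 'src_container', headers={
    'X-Container-Sync-To': '//realm_name/cluster2_name/AUTH_dst/dst_container',
    'X-Container-Sync-Key': 'shared-secret',
})
# the destination container needs the same X-Container-Sync-Key set
```
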
*** StraubTW has quit IRC20:17
*** fthiagogv has quit IRC20:18
<jidar> I'm having trouble understanding something: I don't see any files larger than 200004 in my quarantine directory, but I see several 1-gig-or-so files in my regular objects directory. is it possible that objects are split up as part of quarantine? I could just be wrong here but it's confusing me  20:32
<jidar> notmyname: zaitcev, I didn't get to thank you guys earlier for all of the help, thanks :)  20:33
<notmyname> jidar: these are glance images, right? doesn't glance split the data into smaller chunks like that?  20:38
*** silor has quit IRC20:39
<jidar> only in quarantine?  20:39
<jidar> that's what's throwing me for a loop  20:39
*** asettle has joined #openstack-swift20:46
*** BigWillie has quit IRC20:46
*** bapalm has quit IRC20:55
*** bapalm has joined #openstack-swift20:58
*** haomaiwang has quit IRC21:01
*** haomaiwang has joined #openstack-swift21:01
<jidar> ./glance/glance-cache.conf:117:#swift_store_large_object_chunk_size=200  21:09
<jidar> appears to be commented out, but I wonder if that's the default somehow  21:09
*** chlong has joined #openstack-swift21:14
*** dmorita_ has quit IRC21:24
*** dmorita has joined #openstack-swift21:24
<torgomatic> quarantining doesn't modify the objects  21:25
<mattoliverau> morning all. yesterday was a holiday here in Oz. Still no baby. Feels like the kid will be middle-aged before she's born :P  21:28
<notmyname> :-)  21:28
<timburke_> mattoliverau: my wife and i are convinced our daughter was born a month old :)  21:29
<mattoliverau> timburke_: lol, I'm beginning to understand that feeling :P  21:30
*** mathiasb has quit IRC21:31
*** timburke_ is now known as timburke21:31
*** NM has quit IRC21:34
<jidar> torgomatic: damn  21:35
*** mathiasb has joined #openstack-swift21:37
*** panda has quit IRC21:40
*** panda has joined #openstack-swift21:41
*** dmorita has quit IRC21:48
<clayg> jrichli: looks like you and acoles_ managed to get patch 158401 merged!  21:48
<patchbot> clayg: https://review.openstack.org/#/c/158401/ - swift (feature/crypto) - Enable middleware to set metadata on object POST (MERGED)  21:48
<clayg> jrichli: I had started to review an earlier patch set on Thursday but lost track of my comments - i was mainly still loading it into my head  21:50
*** dmorita has joined #openstack-swift21:54
*** nadeem has quit IRC21:55
<clayg> oh, nm - i found it - submitted (patch set 10)  21:55
<clayg> jrichli: so what's the next one?  probably patch 291458 ???  21:55
<patchbot> clayg: https://review.openstack.org/#/c/291458/ - swift (feature/crypto) - Changes crypto to use transient-sysmeta for crypto...  21:55
<notmyname> clayg: yes, that one is next  21:56
<notmyname> clayg: I just forwarded you an email about it  21:56
<clayg> notmyname: just saw that - thanks!  21:57
*** haomaiwang has quit IRC22:01
*** haomaiwang has joined #openstack-swift22:01
*** vinsh has quit IRC22:02
*** vinsh has joined #openstack-swift22:02
*** garthb_ has joined #openstack-swift22:03
*** vinsh_ has joined #openstack-swift22:03
*** vinsh_ has joined #openstack-swift22:04
*** garthb has quit IRC22:06
*** vinsh has quit IRC22:07
*** vinsh has joined #openstack-swift22:09
*** vinsh_ has quit IRC22:12
*** MVenesio has joined #openstack-swift22:13
<timur> I'm observing a peculiar behavior with Swift that I'm trying to figure out whether it's intended or not. When submitting a HEAD or a GET request to retrieve the account metadata, container list, container metadata, or object metadata, the HTTP connection to the proxy server remains open. However, after submitting a GET request for an object, the connection is closed and the "Connection: close"  22:15
<timur> header is set. Is this intended or is it a bug? I couldn't find any documentation about this behavior  22:15
*** timur has quit IRC22:18
*** MVenesio has quit IRC22:18
*** timur has joined #openstack-swift22:19
<timur> I'm observing a peculiar behavior with Swift that I'm trying to figure out whether it's intended or not. When submitting a HEAD or a GET request to retrieve the account metadata, container list, container metadata, or object metadata, the HTTP connection to the proxy server remains open. However, after submitting a GET request for an object, the connection is closed and the header is set. Is this  22:19
<timur> intended or is it a bug? I couldn't find any documentation about this behavior  22:19
* notmyname gives timur a look from across the room  22:20
<notmyname> (for double-posting)  22:20
<timur> right, configuring irssi with sasl on ec2 is to blame for that. sorry :(  22:20
<notmyname> heh, no worries  22:20
<notmyname> timur: for your question...  22:20
<notmyname> timur: yes, being undocumented is either unintended or a bug  22:21
<notmyname> that's what you were asking about, right? ;-)  22:21
<timur> notmyname: it's not clear to me why the client's proxy server connection would need to be closed after fulfilling a GET request  22:21
<timur> I'm happy to dig in to fix it, unless there is a reason it's done this way (which I guess I may or may not find out during the digging)  22:22
<notmyname> yeah, swift should support multiple requests pipelined on a single connection  22:23
<notmyname> I'd guess that the object connection close may have snuck in at some point. but no, I don't know the reason it's one way or another  22:23
<timur> notmyname: thanks! I'll try to figure out why that's happening and submit a patch!  22:24
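
One way to observe what timur reports (host, port, and token are placeholders): reuse one connection for a container listing and then an object GET, and print the Connection header from each response.

```python
import http.client

conn = http.client.HTTPConnection('proxy.example.com', 8080)
for path in ('/v1/AUTH_test/c',        # container GET: connection stays open
             '/v1/AUTH_test/c/obj'):   # object GET: reportedly "Connection: close"
    conn.request('GET', path, headers={'X-Auth-Token': 'tk-placeholder'})
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused
    print(path, resp.status, resp.getheader('Connection'))
```
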
*** gyee has joined #openstack-swift22:27
<notmyname> mattoliverau: will you have a chance to handle the merge conflict on concurrent gets?  22:37
*** alejandrito has quit IRC22:42
*** mvk_ has joined #openstack-swift22:48
*** mvk has quit IRC22:52
*** trifon has quit IRC22:55
*** haomaiwang has quit IRC23:01
*** haomaiwang has joined #openstack-swift23:01
<briancline> anyone mind taking a quick peek at patch #292206?  23:02
<patchbot> briancline: https://review.openstack.org/#/c/292206/ - swift - Don't report recon mount/usage status on files  23:02
*** _JZ_ has quit IRC23:05
*** km has joined #openstack-swift23:05
*** ametts has quit IRC23:20
*** chsc has quit IRC23:27
*** arch-nemesis has quit IRC23:32
*** kei_yama has joined #openstack-swift23:32
<notmyname> zaitcev: on patch 248867 you left a +1. it's got another +2 from cschwede on it now. I'm curious about your +1 instead of +2. do you want to leave it as-is? your comment said you're ok but the patch violates the RFC. what do you want to do?  23:32
<patchbot> notmyname: https://review.openstack.org/#/c/248867/ - swift - Stop staticweb revealing container existence to un...  23:32
*** gyee has quit IRC23:39
*** macgyver_ has left #openstack-swift23:43
<mattoliverau> notmyname: I will get a new version up today :)  23:45
<notmyname> mattoliverau: thanks  23:45
* mattoliverau has been in a meeting but back now  23:45
*** mkrcmari__ has joined #openstack-swift23:48
<notmyname> ptl candidacy submitted  23:50
<torgomatic> I wonder what would happen if you didn't run for PTL  23:50
<timburke> torgomatic: i'd guess there'd be a write-in campaign and he'd still get elected  23:51
<torgomatic> whether he likes it or not, eh?  23:51
<notmyname> actually, the TC would appoint someone  23:51
*** mvk_ has quit IRC23:52
<timburke> notmyname: that's not as much fun. although you may *still* find yourself stuck with the position  23:52
*** ho_ has joined #openstack-swift23:58

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!