Monday, 2016-04-18

*** kei_yama_ has joined #openstack-swift00:01
*** kei_yama has quit IRC00:02
*** asettle has quit IRC00:04
*** ober37_ has joined #openstack-swift00:05
*** hosanai has joined #openstack-swift00:06
*** ChanServ sets mode: +v hosanai00:06
*** asettle has joined #openstack-swift00:06
*** ober37 has quit IRC00:07
*** ober37 has joined #openstack-swift00:18
*** ober37_ has quit IRC00:18
*** ober37 has left #openstack-swift00:18
*** asettle has quit IRC00:19
*** asettle has joined #openstack-swift00:21
*** ober37 has joined #openstack-swift00:28
*** ekarlso has quit IRC00:32
*** asettle has quit IRC00:32
*** asettle has joined #openstack-swift00:32
*** ober37 has quit IRC00:33
*** asettle has left #openstack-swift00:33
*** asettle has joined #openstack-swift00:33
*** ober37 has joined #openstack-swift00:34
*** ekarlso has joined #openstack-swift00:44
*** ober37 has quit IRC00:44
*** ober37 has joined #openstack-swift00:46
*** mingdang1 has joined #openstack-swift00:47
*** kota_ has joined #openstack-swift00:54
*** ChanServ sets mode: +v kota_00:54
<kota_> good morning  [00:54]
*** kei_yama_ has quit IRC  [00:58]
<mattoliverau> kota_: morning!  [01:01]
*** ober37 has quit IRC  [01:04]
<kota_> mattoliverau: \o/  [01:05]
<kota_> mattoliverau: how's it going?  [01:05]
<mattoliverau> i'm back! well, reading lots of email, but back  [01:05]
<mattoliverau> kota_: it's going well, I'm very tired, you parents are crazy, newborns don't sleep well at night :P  [01:06]
<kota_> great to hear, welcome back!!!  [01:06]
<kota_> mattoliverau: exactly, it's heavy work for everyone :)  [01:06]
*** asettle has quit IRC01:13
*** ober37 has joined #openstack-swift01:14
*** mingdang1 has quit IRC01:17
*** ober37_ has joined #openstack-swift01:17
*** ober37 has quit IRC01:18
*** kei_yama has joined #openstack-swift01:20
*** ober37_ has quit IRC01:22
*** ober37 has joined #openstack-swift01:22
*** asettle has joined #openstack-swift01:25
*** rchurch_ has quit IRC01:34
*** takashi has joined #openstack-swift01:34
*** rchurch has joined #openstack-swift01:34
<takashi> good morning  [01:34]
*** asettle has quit IRC01:49
*** asettle has joined #openstack-swift01:50
*** ober37 has quit IRC01:54
*** ober37 has joined #openstack-swift01:54
*** ober37 has quit IRC01:59
*** rchurch has quit IRC01:59
<notmyname> hello, world  [02:00]
*** asettle has quit IRC  [02:04]
*** asettle has joined #openstack-swift  [02:04]
<kota_> takashi, notmyname: hello  [02:11]
<notmyname> http://lists.openstack.org/pipermail/openstack-dev/2016-April/092505.html is a very interesting email thread (and etherpad)  [02:11]
<kota_> looks interesting.  [02:12]
<notmyname> the question of "what to do with global requirements" (if that's ever come up for you) seems to be addressed there  [02:14]
<notmyname> including, "should swift merge global requirements updates or not?"  [02:15]
*** Jeffrey4l has joined #openstack-swift  [02:15]
<zaitcev> "propose that we drop our requirement for co-installability of all services ... each project manage its own versions of dependencies ... by relying on containers for deployment"  [02:16]
<zaitcev> This guy is mad  [02:16]
<notmyname> zaitcev: maybe. or maybe he's just in the whole new docker docker docker world and we haven't caught up yet ;-)  [02:17]
<notmyname> my point being, it's good to talk about it  [02:17]
<zaitcev> notmyname: yeah...  [02:17]
<jrichli> mattoliverau: welcome back!  Will you be at the summit, or spending time with baby that week?  [02:28]
*** asettle has quit IRC02:33
*** asettle has joined #openstack-swift02:34
*** haomaiwang has joined #openstack-swift02:45
<mattoliverau> jrichli: I'll be in Austin :)  [02:53]
<mattoliverau> takashi, notmyname, jrichli zaitcev: o/  [02:53]
<zaitcev> mattoliverau: sounds great, see you there  [02:54]
*** janonymous has joined #openstack-swift  [02:58]
<notmyname> Vinsh: someone just pointed this out to me (actually it was hugokuo) https://github.com/openstack/keystonemiddleware/blob/6e58f8620ae60eb4f26984258d15a9823345c310/releasenotes/notes/deprecate-caching-tokens-in-process-a412b0f1dea84cb9.yaml  [02:59]
*** haomaiwang has quit IRC03:01
*** haomaiwang has joined #openstack-swift03:01
<takashi> kota_, mattoliverau: o/  [03:17]
*** asettle has quit IRC03:20
*** takashi has quit IRC03:24
*** takashi has joined #openstack-swift03:31
*** haomaiwang has quit IRC03:41
*** haomaiwa_ has joined #openstack-swift03:41
*** garthb has joined #openstack-swift03:42
*** trifon has joined #openstack-swift03:52
*** haomaiwa_ has quit IRC04:01
*** haomaiwa_ has joined #openstack-swift04:01
*** haomaiwa_ has quit IRC04:01
*** asettle has joined #openstack-swift04:01
*** 7JTAAPGO1 has joined #openstack-swift04:01
*** garthb has quit IRC04:11
*** garthb has joined #openstack-swift04:11
<mahatic> good morning  [04:14]
*** garthb has quit IRC  [04:14]
*** garthb has joined #openstack-swift  [04:14]
*** mingdang1 has joined #openstack-swift  [04:18]
*** trifon has quit IRC  [04:18]
<mattoliverau> mahatic: morning  [04:18]
<mahatic> mattoliverau: hello! welcome back :)  [04:18]
<mattoliverau> mahatic: thanks :)  [04:19]
*** garthb has quit IRC04:21
*** mingdang1 has quit IRC04:22
*** ekarlso has quit IRC04:27
<openstackgerrit> OpenStack Proposal Bot proposed openstack/swift: Updated from global requirements  https://review.openstack.org/88736  [04:28]
*** ekarlso has joined #openstack-swift04:40
*** links has joined #openstack-swift04:41
*** ober37 has joined #openstack-swift04:56
*** trifon has joined #openstack-swift04:57
*** ober37 has quit IRC05:00
*** 7JTAAPGO1 has quit IRC05:01
*** haomaiwang has joined #openstack-swift05:01
*** rcernin has joined #openstack-swift05:04
*** citytiming has joined #openstack-swift05:15
*** arch-nemesis has joined #openstack-swift05:16
*** nadeem has joined #openstack-swift05:18
*** ppai has joined #openstack-swift05:25
*** citytiming has quit IRC05:25
*** haomaiwang has quit IRC05:28
*** trifon has quit IRC05:32
*** klrmn has quit IRC05:48
*** zaitcev has quit IRC05:50
*** ChubYann has quit IRC05:50
*** Jeffrey4l has quit IRC05:50
*** ober37 has joined #openstack-swift05:57
*** ober37 has quit IRC06:01
*** mingdang1 has joined #openstack-swift06:05
*** nadeem has quit IRC06:16
*** nadeem has joined #openstack-swift06:17
<openstackgerrit> OpenStack Proposal Bot proposed openstack/swift: Imported Translations from Zanata  https://review.openstack.org/306924  [06:31]
*** haomaiwang has joined #openstack-swift06:46
*** haomaiwang has quit IRC07:01
*** haomaiwang has joined #openstack-swift07:01
*** wuhg has joined #openstack-swift07:02
*** mingdang1 has quit IRC07:08
*** haomaiwang has quit IRC07:11
*** sw3 has quit IRC07:15
*** pcaruana has joined #openstack-swift07:15
*** rledisez has joined #openstack-swift07:16
*** mmcardle has joined #openstack-swift07:18
<onovy> hi all. already read https://swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/ and have a question :)  [07:20]
<onovy> can i upgrade the swift storage nodes inside one region, then upgrade all proxies inside that region, and then move on to another region?  [07:20]
<onovy> the proxy is "forward" compatible, but is it backward compatible too?  [07:21]
<onovy> so can a 2.5.0 proxy communicate with a 2.3.0 store?  [07:21]
<onovy> i guess yes, but want to be sure :)  [07:21]
*** mmcardle1 has joined #openstack-swift  [07:22]
<openstackgerrit> Merged openstack/swift: Per container stat. report  https://review.openstack.org/281814  [07:23]
*** mmcardle has quit IRC07:23
*** arch-nemesis has quit IRC07:31
*** asettle has quit IRC07:52
*** geaaru has joined #openstack-swift07:53
*** oro has joined #openstack-swift07:58
*** ober37 has joined #openstack-swift07:59
*** asettle has joined #openstack-swift08:02
*** ober37 has quit IRC08:03
*** asettle has quit IRC08:07
*** siva_kri- has quit IRC08:12
*** siva_krishnan_ has joined #openstack-swift08:16
*** jmccarthy has quit IRC08:16
*** daemontool has quit IRC08:17
*** jmccarthy has joined #openstack-swift08:17
*** nadeem has quit IRC08:20
*** joeljwright has joined #openstack-swift08:20
*** ChanServ sets mode: +v joeljwright08:20
*** jistr has joined #openstack-swift08:21
<openstackgerrit> Merged openstack/swift: Imported Translations from Zanata  https://review.openstack.org/306924  [08:34]
*** acoles_ is now known as acoles  [08:53]
<acoles> mattoliverau: welcome back!  [08:57]
*** tesseract has joined #openstack-swift09:03
*** tesseract is now known as Guest3039409:03
*** oro has quit IRC09:04
*** jistr has quit IRC09:05
*** ozeri has joined #openstack-swift09:08
*** jistr has joined #openstack-swift09:12
*** sheel has joined #openstack-swift09:15
<openstackgerrit> dharmendra kushwaha proposed openstack/swift-specs: Pass environment variables of proxy to tox  https://review.openstack.org/264140  [09:24]
*** asettle has joined #openstack-swift09:36
*** mingdang1 has joined #openstack-swift09:45
*** mingdang1 has quit IRC09:49
*** asettle has quit IRC10:03
*** ober37 has joined #openstack-swift10:13
*** hosanai has quit IRC10:17
*** mingdang1 has joined #openstack-swift10:17
*** ober37 has quit IRC10:20
<alholano1a> I tested Swift a few months ago and now I'm trying out Storage Policies, but I've hit a few problems. I have a 5-node configuration (2 proxy nodes, 3 storage nodes) and I set the rings to two replicas, but containers are still stored on all three storage nodes. I'm not sure whether I should set the Storage Policy in swift.conf on all nodes - proxy and storage nodes - or only on some of them?  [10:22]
*** SkyRocknRoll has joined #openstack-swift10:30
*** daemontool has joined #openstack-swift10:38
*** asettle has joined #openstack-swift10:54
*** mahatic_ has joined #openstack-swift11:02
*** zhiyan_ has joined #openstack-swift11:02
*** marcusvrn__ has joined #openstack-swift11:02
*** Lickitysplitted_ has joined #openstack-swift11:02
*** cppforlife__ has joined #openstack-swift11:02
*** swiftfan_ has joined #openstack-swift11:04
*** dfg_ has joined #openstack-swift11:04
*** redbo_ has joined #openstack-swift11:04
*** tristanC_ has joined #openstack-swift11:04
*** BAKfr_ has joined #openstack-swift11:04
*** mariusv_ has joined #openstack-swift11:05
*** thurloat_ has joined #openstack-swift11:05
*** joearnold_ has joined #openstack-swift11:05
*** dmellado_ has joined #openstack-swift11:05
*** cschwede_ has joined #openstack-swift11:05
*** ChanServ sets mode: +v cschwede_11:05
*** hugokuo_ has joined #openstack-swift11:05
*** cppforlife_ has quit IRC11:05
*** marcusvrn_ has quit IRC11:05
*** blair has quit IRC11:05
*** BAKfr has quit IRC11:05
*** mmotiani has quit IRC11:05
*** timburke has quit IRC11:05
*** hrou_ has quit IRC11:05
*** joearnold has quit IRC11:05
*** Lickitysplitted has quit IRC11:05
*** tdasilva has quit IRC11:05
*** swiftfan has quit IRC11:05
*** mathiasb_ has quit IRC11:05
*** thurloat has quit IRC11:05
*** pdardeau has quit IRC11:06
*** serverascode has quit IRC11:06
*** zhiyan has quit IRC11:06
*** rushiagr has quit IRC11:06
*** briancli1e has quit IRC11:06
*** hugokuo has quit IRC11:06
*** dmellado has quit IRC11:06
*** dfg has quit IRC11:06
*** mariusv has quit IRC11:06
*** saltsa has quit IRC11:06
*** tristanC has quit IRC11:06
*** redbo has quit IRC11:06
*** cschwede has quit IRC11:06
*** tanee has quit IRC11:06
*** sc has quit IRC11:06
*** jaakkos has quit IRC11:06
*** mahatic has quit IRC11:06
*** briancline has joined #openstack-swift11:06
*** jaakkos has joined #openstack-swift11:06
*** BAKfr_ is now known as BAKfr11:06
*** thurloat_ is now known as thurloat11:06
*** cschwede_ is now known as cschwede11:06
*** taneee has joined #openstack-swift11:06
*** hrou has joined #openstack-swift11:06
*** serverascode_ has joined #openstack-swift11:06
*** timburke has joined #openstack-swift11:06
*** ChanServ sets mode: +v timburke11:06
*** hugokuo_ is now known as hugokuo11:06
*** hrou has quit IRC11:06
*** hrou has joined #openstack-swift11:06
*** marcusvrn__ is now known as marcusvrn_11:07
*** joearnold_ is now known as joearnold11:07
*** mmotiani has joined #openstack-swift11:07
*** cppforlife__ is now known as cppforlife_11:10
*** zhiyan_ is now known as zhiyan11:10
*** saltsa has joined #openstack-swift11:11
*** rcernin has quit IRC11:12
*** mathiasb has joined #openstack-swift11:12
*** serverascode_ is now known as serverascode11:12
*** pdardeau has joined #openstack-swift11:14
*** tdasilva has joined #openstack-swift11:18
*** rcernin has joined #openstack-swift11:23
*** okdas has quit IRC11:25
*** ozeri has quit IRC11:25
*** sc has joined #openstack-swift11:26
*** okdas has joined #openstack-swift11:26
*** okdas has joined #openstack-swift11:26
*** sc is now known as Guest9247711:26
*** oro has joined #openstack-swift11:28
*** blair has joined #openstack-swift11:29
*** oro has quit IRC11:30
*** ppai has quit IRC11:32
*** ozeri has joined #openstack-swift11:32
*** taneee is now known as tanee11:33
*** ozeri has quit IRC11:37
*** rushiagr has joined #openstack-swift11:39
*** mmcardle1 has quit IRC11:39
<openstackgerrit> Alistair Coles proposed openstack/swift: Encrypt/decrypt etag in container listing  https://review.openstack.org/275492  [11:41]
<openstackgerrit> Alistair Coles proposed openstack/swift: crypto - add functional test to verify container listing detail  https://review.openstack.org/307121  [11:41]
*** ppai has joined #openstack-swift11:46
*** asettle has quit IRC11:50
*** asettle has joined #openstack-swift11:50
*** cdelatte has joined #openstack-swift11:52
*** takashi has quit IRC11:57
*** asettle has quit IRC12:03
*** mmcardle has joined #openstack-swift12:03
*** ppai has quit IRC12:03
*** mtreinish has joined #openstack-swift12:03
*** ozeri has joined #openstack-swift12:06
*** Guest23578 has quit IRC12:09
*** ppai has joined #openstack-swift12:19
*** christian has quit IRC12:23
*** wuhg has quit IRC12:30
*** oshritf has joined #openstack-swift12:34
*** oshritf has quit IRC12:34
*** ober37 has joined #openstack-swift12:39
*** ober37_ has joined #openstack-swift12:40
*** mwheckmann has joined #openstack-swift12:42
*** ober37 has quit IRC12:43
<openstackgerrit> Alistair Coles proposed openstack/swift: crypto - verify that x-object-meta-etag does not confuse  https://review.openstack.org/305374  [12:46]
*** pauloewerton has joined #openstack-swift12:55
<openstackgerrit> Alistair Coles proposed openstack/swift: crypto - only set body crypto meta if body is read and encrypted  https://review.openstack.org/305794  [12:56]
*** kei_yama has quit IRC12:56
<Vinsh> notmyname: Interesting RE the keystone-middleware.. I have not been using in-process cache though.  [12:59]
*** klamath has joined #openstack-swift13:11
*** mingdang1 has quit IRC13:18
*** ppai has quit IRC13:34
*** rchurch has joined #openstack-swift13:40
*** BigWillie has joined #openstack-swift13:41
*** ChanServ sets mode: +v tdasilva13:47
*** sgundur1 has joined #openstack-swift13:48
*** ober37_ has left #openstack-swift13:49
*** ober37_ has joined #openstack-swift13:49
*** SkyRocknRoll has quit IRC13:54
*** SkyRocknRoll_ has joined #openstack-swift13:54
*** links has quit IRC13:57
*** bhakta has quit IRC13:58
*** ajiang has quit IRC13:58
*** ajiang has joined #openstack-swift13:59
*** bhakta has joined #openstack-swift13:59
*** bkeller` has quit IRC14:01
*** aj701_ has quit IRC14:01
*** zaitcev has joined #openstack-swift14:02
*** ChanServ sets mode: +v zaitcev14:02
*** ametts has joined #openstack-swift14:04
*** bkeller` has joined #openstack-swift14:04
*** aj701 has joined #openstack-swift14:05
*** arch-nemesis has joined #openstack-swift14:07
*** janonymous has quit IRC14:22
*** wasmum has quit IRC14:23
*** ober37_ has quit IRC14:26
*** ober37 has joined #openstack-swift14:28
*** wasmum has joined #openstack-swift14:32
<openstackgerrit> Alistair Coles proposed openstack/swift: crypto - only set body crypto meta if body is read and encrypted  https://review.openstack.org/305794  [14:36]
<openstackgerrit> Alistair Coles proposed openstack/swift: Encrypt/decrypt etag in container listing  https://review.openstack.org/275492  [14:36]
<openstackgerrit> Alistair Coles proposed openstack/swift: crypto - verify that x-object-meta-etag does not confuse  https://review.openstack.org/305374  [14:36]
<openstackgerrit> Alistair Coles proposed openstack/swift: crypto - cleanup decrypter exception handling  https://review.openstack.org/304806  [14:36]
*** dmsimard has quit IRC14:42
*** dmsimard has joined #openstack-swift14:43
*** ozeri has quit IRC14:55
*** asettle has joined #openstack-swift15:08
*** garthb has joined #openstack-swift15:12
*** asettle has quit IRC15:13
*** SkyRocknRoll_ has quit IRC15:14
*** ober37 has quit IRC15:15
*** ober37 has joined #openstack-swift15:16
<pdardeau> mattoliverau: welcome back!  [15:17]
*** links has joined #openstack-swift  [15:19]
*** ober37 has quit IRC  [15:20]
<gmmaha> mattoliverau: welcome back  [15:21]
*** m3m0 has joined #openstack-swift  [15:41]
<m3m0> hey guys, is there a way to get notified if an object gets uploaded to swift?  [15:41]
*** hurricanerix has joined #openstack-swift15:43
*** gyee has joined #openstack-swift15:43
*** Guest67912 has joined #openstack-swift15:44
*** Guest67912 is now known as scotticus15:46
*** lakshmiS has joined #openstack-swift15:50
<clayg> heyoh!  [15:57]
<clayg> alholano1a: storage policies are for object rings only - the account & container rings are separate configuration from the policies defined in swift.conf  [15:58]
<clayg> onovy: it's more commonly the case that a 2.3.0 proxy can communicate with a 2.5.0 storage layer just fine  [15:59]
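(For anyone hitting the same confusion as alholano1a: storage policies only affect which *object* ring a container's data lands on; the account and container rings keep their own replica counts, and swift.conf is expected to be identical on every node, proxies and storage nodes alike. A minimal sketch of the swift.conf side, with illustrative policy names - the two-replica behaviour comes from the object-1 ring you build, not from swift.conf itself:

    [storage-policy:0]
    name = gold
    default = yes

    [storage-policy:1]
    name = two-replica
    # the replica count lives in the matching ring, built separately, e.g.
    #   swift-ring-builder object-1.builder create 10 2 1

A container created with the X-Storage-Policy: two-replica header would then store its objects twice, while the container databases themselves still follow the container ring's replica count.)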
*** tqtran has joined #openstack-swift16:00
*** links has quit IRC16:01
*** Guest30394 has quit IRC16:05
<timburke> good morning  [16:05]
*** raginbajin has quit IRC16:05
*** ober37 has joined #openstack-swift16:05
*** gyee has quit IRC16:06
*** ober37 has quit IRC16:07
*** ober37 has joined #openstack-swift16:07
*** raginbajin has joined #openstack-swift16:10
*** chsc has joined #openstack-swift16:10
*** lyrrad has joined #openstack-swift16:17
*** rledisez has quit IRC16:21
*** pcaruana has quit IRC16:21
<notmyname> good morning  [16:31]
<openstackgerrit> OpenStack Proposal Bot proposed openstack/swift: Updated from global requirements  https://review.openstack.org/88736  [16:35]
*** jistr has quit IRC16:39
*** tqtran has quit IRC16:40
*** itlinux has joined #openstack-swift16:42
*** rcernin has quit IRC16:43
*** dmorita has joined #openstack-swift16:56
*** joeljwright has quit IRC17:06
*** ozeri has joined #openstack-swift17:07
*** tqtran has joined #openstack-swift17:10
*** ozeri has quit IRC17:12
*** nadeem has joined #openstack-swift17:16
*** nadeem has quit IRC17:16
*** klrmn has joined #openstack-swift17:18
*** nadeem has joined #openstack-swift17:23
*** mmcardle has quit IRC17:26
*** jmccarthy1 has joined #openstack-swift17:34
*** ChubYann has joined #openstack-swift17:39
*** geaaru has quit IRC17:39
*** ober37 has left #openstack-swift17:41
*** stevemar has quit IRC17:41
*** stevemar has joined #openstack-swift17:44
*** joeljwright has joined #openstack-swift17:48
*** ChanServ sets mode: +v joeljwright17:48
*** joeljwright has quit IRC17:53
<openstackgerrit> Eran Rom proposed openstack/swift: Add process level concurrency to container sync  https://review.openstack.org/210099  [17:54]
<openstackgerrit> Tim Burke proposed openstack/python-swiftclient: Guard against post-as-copy flattening large objects  https://review.openstack.org/303647  [17:55]
<notmyname> jrichli: I know the goal has been for a good, working crypto branch by the end of this week. where are we?  [18:07]
*** gyee has joined #openstack-swift  [18:07]
<notmyname> https://wiki.openstack.org/wiki/Swift/PriorityReviews only has 2 patches listed (that's great!), but the copy middleware still hasn't landed  [18:07]
<jrichli> oh, good, acoles is still around for this answer :-)  [18:07]
<notmyname> kota_ and timburke and tdasilva have been looking at the copy middleware, it seems  [18:08]
<notmyname> :-)  [18:08]
<tdasilva> notmyname: yep, kota_ left some comments, I need to get back to him  [18:08]
<jrichli> acoles has been working like a beast the last couple weeks!  A lot of improvements have been made.  he has been creating one or two new patches for every one that gets merged ;-)  [18:08]
<acoles> notmyname: i moved some of the less urgent crypto reviews off the priority review page and replaced them with a link to gerrit  [18:09]
<notmyname> ah, ok. makes sense  [18:09]
<jrichli> but yes, COPY is still the thing missing  [18:09]
<notmyname> jrichli: yeah, those high-productivity devs who are so hard to keep up with ;-)  [18:09]
<jrichli> I know!  since he is pretty much all crypto all the time now, it is even more clear how productive he is :-)  [18:10]
<acoles> notmyname: this week jrichli and i are focussing on re-instating some of the key management stuff that got taken out to cope with missing support that is now in place  [18:10]
<acoles> and prep for summit  [18:10]
<notmyname> great  [18:11]
<notmyname> please let me know asap if/how I can help with your summit talk  [18:11]
<jrichli> I am also focusing on crypto explanations - especially with the etag scheme  [18:11]
<notmyname> mahatic_: ^ you too  [18:11]
<acoles> notmyname: thanks  [18:11]
<notmyname> jrichli: this is how I normally feel about crypto explanations: https://s-media-cache-ak0.pinimg.com/736x/17/f3/95/17f395a4fe61b28a4efbd92002231848.jpg  [18:12]
<jrichli> lol!  I won't get too deep unless provoked ;-)  [18:14]
<notmyname> tdasilva: with copy as post, who's been reviewing that? is it just kota_ and timburke?  [18:14]
<acoles> notmyname: possibly stupid question... if we were to merge the 2 copy-related patches on feature/crypto simply so we have something working and easy to manage (i.e. not having to cherry-pick), how hard is it to then revert those when the equivalent patches land on master? would that be a bad way to go? i'm just after an easy life this week!  [18:14]
<jrichli> but I do want to be able to get deep outside the talk as well  [18:14]
<jrichli> we do have some concerns to bring to reaperhulk  [18:15]
<tdasilva> notmyname: mattoliverau had also reviewed before his paternity leave  [18:15]
<tdasilva> mattoliverau: welcome back!!! :)  [18:15]
<notmyname> acoles: it all depends on if/how they change before landing on master. I *think* if they land as-is, then it should be pretty clean. however, I'd probably want to do a simulated test to actually verify that (eg build a small git repo, land a patch on both, then try a merge and see what happens)  [18:16]
<acoles> mattoliverau's memory will have been erased of everything BC (before child)  [18:16]
<notmyname> heh  [18:16]
<notmyname> yeah  [18:16]
<tdasilva> lol  [18:16]
<notmyname> ok, so we need mattoliverau to dive back into it then (and/or another core). kota_'s going to be mostly unavailable this week  [18:16]
<notmyname> acoles: which means it might be good to try the early merge to crypto  [18:17]
<tdasilva> notmyname: oh, ok  [18:17]
<kota_> Sorry  [18:17]
<notmyname> kota_: no worries!  [18:17]
<acoles> notmyname: k, i'll ponder it some more  [18:18]
<notmyname> kota_: if I remember your most recent review on it, there was just a minor thing. tdasilva, if that's the case, then it would probably be ok for 1 +2 after addressing it (depending on the scope of change)  [18:18]
<notmyname> does that make sense? or make anyone extra nervous?  [18:19]
<kota_> K  [18:20]
<kota_> So, maybe mergeable  [18:22]
<kota_> Wait, booting up my laptop  [18:22]
<clayg> whooooo!  Is everyone having fun!?  OSD is right around the corner!  [18:23]
<kota_> back with laptop keyboard.  [18:25]
<kota_> tdasilva: the last piece I'm wondering about in the versioned_writes patch is whether swift can return EntityTooLarge when the source response has no content-length.  [18:26]
<kota_> probably it keeps current Swift behavior, because handle_copy_request in the proxy app looks to have similar code.  [18:27]
<kota_> but I was wondering how it works with a transfer-encoding: chunked response (storlets things might get nervous).  [18:28]
<kota_> not sure for now, but... it might be ok to consider that area *after* it lands.  [18:30]
<kota_> so, sorry, I'm waffling.  [18:33]
<kota_> anyways, it definitely looks to be in good shape for the scope.  [18:34]
<timburke> kota_: i'm reasonably confident that swift will properly 413; the existing content_length is None check is an attempt to save us some bandwidth for a request that would *probably* fail  [18:34]
<tdasilva> kota_: no worries, but you are right, the code is very similar to what was already done in handle_copy_request and also in the copy middleware  [18:34]
<timburke> kota_: also, you should be asleep right now. go sleep.  [18:34]
* tdasilva was wondering if kota_ is already in the US....  [18:35]
<kota_> timburke: yup, it would *probably* fail...  [18:37]
<kota_> tdasilva: I'm in JST :)  [18:37]
<tdasilva> kota_: have to agree with timburke, go sleep ;)  [18:38]
*** silor has joined #openstack-swift18:39
kota_tdasilva, timburke: thanks18:41
*** silor1 has joined #openstack-swift18:44
*** silor has quit IRC18:44
*** silor1 is now known as silor18:44
*** jmccarthy1 has quit IRC18:49
*** StraubTW has joined #openstack-swift18:51
*** ametts has quit IRC19:00
*** bkeller^ has joined #openstack-swift19:04
*** bkeller^ has quit IRC19:04
*** ober37 has joined #openstack-swift19:08
*** ober37 has quit IRC19:08
*** ober37 has joined #openstack-swift19:08
*** bkeller^ has joined #openstack-swift19:09
*** sheel has quit IRC19:15
*** raginbajin has quit IRC19:16
*** bkeller`- has joined #openstack-swift19:16
*** ametts has joined #openstack-swift19:17
*** acoles is now known as acoles_19:17
*** raginbajin has joined #openstack-swift19:21
*** bkeller`- has quit IRC19:23
*** bkeller^ has quit IRC19:23
*** bkeller^ has joined #openstack-swift19:23
*** bkeller` has quit IRC19:24
*** bkeller^ is now known as bkeller`19:24
*** ober37` has joined #openstack-swift19:25
*** mminesh has joined #openstack-swift19:27
*** chsc has quit IRC19:35
*** _erick0zcr has joined #openstack-swift19:35
*** mmcardle has joined #openstack-swift19:37
*** mmcardle has quit IRC19:56
*** delattec has joined #openstack-swift19:57
*** lakshmiS has quit IRC19:59
*** lakshmiS_ has joined #openstack-swift19:59
*** cdelatte has quit IRC19:59
<openstackgerrit> Merged openstack/swift: Removing unused clause  https://review.openstack.org/254278  [20:01]
*** delattec has quit IRC20:02
*** delattec has joined #openstack-swift20:07
*** diazjf has joined #openstack-swift20:20
*** diazjf has quit IRC20:23
*** ober37 has quit IRC20:27
*** ober37` is now known as ober3720:27
*** zaitcev has quit IRC20:29
*** zaitcev has joined #openstack-swift20:30
*** ChanServ sets mode: +v zaitcev20:30
<MooingLemur> is there any process written that could look for leaked/orphaned objects?  I suspect they'd be rare, but I suppose it could happen if you introduced a node that had been gone beyond the maximum age of a tombstone.  Unlike some other forms of stale data (such as rsync dotfiles), it would apparently persist due to replication, because there's no interaction with container listings in any of the object auditing code that I can find.  [20:30]
<notmyname> no  [20:31]
<MooingLemur> I suppose looking for and acting upon such things could be race-like though anyway  [20:31]
<notmyname> what you'd need to do is sync the objects against the container DBs. and yeah. there are still some races there too  [20:31]
*** alholano1a has quit IRC  [20:32]
*** ober37_ has joined #openstack-swift  [20:32]
<notmyname> (like is the delete=true row still in the container db; what about differences between replicas of a container db; etc)  [20:32]
<clayg> leaking stale datafiles from old nodes over reclaimed tombstones is definitely something that happens - we should evangelize being more careful with devices that have been "out of the cluster" for longer than reclaim age  [20:33]
<MooingLemur> I saw somewhere someone referred to stale object files as "dark matter".  [20:33]
<clayg> MooingLemur: you really have to audit from the disks back up to the container listings - it's a real hozer  [20:33]
<MooingLemur> thought that was a good description :)  [20:33]
<clayg> "dark *data*"  [20:34]
<clayg> it's like big data - but less well understood by scientists - we know it exists tho - but the theoretical models' math doesn't always work out in practice ;)  [20:34]
<notmyname> lol  [20:34]
<MooingLemur> :)  [20:34]
*** BigWillie has quit IRC  [20:34]
<clayg> MooingLemur: this one time someone had a brilliant idea of making the object-auditor look into this sort of thing and it ended up being way too aggressive to beat on the container layer like that :\  [20:36]
*** asettle has joined #openstack-swift  [20:37]
*** rickflare has quit IRC  [20:40]
<MooingLemur> clayg: This seems like one of the only types of space-claiming cruft that won't decay over time with disk replacements.  Maybe there's a middle ground where the object-auditor could take on this role but not beat up the container dbs so much.  Most of the ideas that are floating in my head don't seem all that good and are at best kludges.  [20:41]
*** rickflare has joined #openstack-swift  [20:41]
*** silor has quit IRC  [20:44]
<MooingLemur> but I think the best solution for me might be a standalone one: audit like zbf but hit the container db for each one (maybe only objects older than reclaim_age), log something if it isn't found, and recheck those objects all at a later time.  [20:44]
<MooingLemur> I'll keep thinking about it though.  Thanks :)  [20:45]
<clayg> MooingLemur: yeah I think it'll be more queries up to the container layer than you're really going to want to see if you run it all the time :\  [20:52]
<ahale_> i advise just trying not to think about stuff like that  [20:54]
<clayg> ahale_: didn't redbo have an idea with a bloom filter or something one time?  [20:56]
*** ober37_ has quit IRC  [20:56]
<ahale_> oh maybe, he has lots of ideas  [20:57]
<notmyname> ahale_: I know, right? hard drives are just getting bigger--just buy more of them  [20:57]
<clayg> if you could manage to iter every page of every container in some sort of reasonable time period (less than reclaim age) it seems like you could build up a collection of bloom filters on the cardinality of container servers  [20:57]
<clayg> ... that shouldn't be too big (that's kind of the key thing with bloom filters)  [20:58]
<clayg> then you just ship them out to the object servers - anything that's older than reclaim age and not in any of the container-layer bloom filters should be dark data  [20:58]
<clayg> MooingLemur: ^  [20:58]
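(A rough illustration of the bloom-filter scheme clayg is sketching, in plain Python. The bit-array size, hash count, and name format below are made up for illustration; nothing like this ships with swift:

    import hashlib

    class BloomFilter(object):
        """Fixed-size bit array with k derived hash positions per name."""

        def __init__(self, num_bits=1 << 24, num_hashes=7):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8)

        def _positions(self, name):
            # double hashing: derive k positions from one md5 digest
            digest = hashlib.md5(name.encode('utf-8')).hexdigest()
            h1, h2 = int(digest[:16], 16), int(digest[16:], 16)
            for i in range(self.num_hashes):
                yield (h1 + i * h2) % self.num_bits

        def add(self, name):
            for pos in self._positions(name):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, name):
            # no false negatives; false positive rate depends on the sizing
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(name))

    # container layer: add every row from every listing page, e.g.
    #   filt.add('/AUTH_test/photos/cat.jpg')
    # object layer: a .data file older than reclaim_age whose name is not in
    # any shipped filter is a candidate for dark data

The false-positive direction is the safe one here: a real object can never be flagged as dark, only some dark data can be missed until the next sweep.)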
<MooingLemur> oh, interesting thought  [20:58]
<clayg> the "iter every page of every container" is *very* much dependent on your containers - a few monsters and you're maybe doing as much harm as good :\  [20:59]
<clayg> idk, a week or two is a long time - you can fit a lot of i/o in a week - if you maybe keep track of where you're at and do 10K rows at a time or something, you could just keep sweeping with a little sleep w/o beating on any one container for too long (maybe?)  [21:00]
<clayg> redbo_: how was that all supposed to work again?  [21:00]
<redbo_> clayg: that's basically what we're working on, yeah.  Still in the "seeing how long it'll take and how big it'll get" stage.  [21:00]
<MooingLemur> clayg: true, but there could still be some hash size that would be small enough to be reasonable for the object-auditor to know about  [21:01]
<notmyname> clayg: keep track of where you're at? like have some table in the container that has a set of high-water marks for various background processes? sounds like a neat idea ;-)  [21:01]
<clayg> MooingLemur: that's the hope  [21:01]
<clayg> notmyname: ROFL - i didn't even think of that  [21:01]
<clayg> notmyname: but you also need to reap out tombstone rows - i had assumed they would expire - like you'd constantly be resetting your high-water marks - I don't think you just keep the same bloom filters going forever - your false hit rates would keep getting worse and worse the longer you use them  [21:02]
<clayg> notmyname: but still - you could keep track of where you're at for a time, and once your cycles start looking "done-ish" you ship 'em out to the object servers and start a fresh run over the containers (reset the sync points)  [21:03]
*** _erick0zcr has quit IRC  [21:03]
<clayg> notmyname: but there's not a really terribly compelling argument for keeping this information in the container dbs themselves, except that maybe it'd be more robust to rebalances... actually that might be a somewhat compelling argument :)  [21:03]
<clayg> notmyname: either way you start with some scripts - it's all going to be highly manual for ages anyways - sort of like our approach to part-power adjustment  [21:04]
<MooingLemur> Either I'm not thinking of it correctly or... the watermarks being in the container db won't necessarily help if you introduce a ridiculously old set of objects after the process has already begun though  [21:04]
<clayg> just build up the core support in swift - let ops figure out best practices before trying to automate a disaster waiting to happen  [21:04]
<redbo_> But basically what we're going to do is pick a date at least reclaim_age old, and old enough that there can't be any async pending files laying around from before that date, and only clean up object files older than that date.  [21:05]
<clayg> redbo_: +2  [21:05]
*** redbo_ is now known as redbo  [21:05]
*** ChanServ sets mode: +v redbo  [21:06]
<clayg> MooingLemur: yeah I think your goal is to sweep out everything older than... 4 months or something - if you've been running the cluster for years that's still a ton of unique hashes that need to be checked tho  [21:06]
<MooingLemur> or perhaps we can also do something to the process to help ensure old data doesn't get replicated out in the first place  [21:06]
<clayg> MooingLemur: if there's some newish stuff (that's actually still dark data) you just have to catch it on the next go round  [21:06]
<redbo> Then we currently use xfs bulkstat to scan object files.  [21:07]
<clayg> MooingLemur: yeah - normally if we have a node out for a couple of days those devices get pulled from the ring - and before the old dead boy comes back in he's just wiped and added in as fresh capacity (new swift drive ids)  [21:07]
<clayg> redbo: how currently uses xfs bulkstat to scan for object files?!  [21:07]
<clayg> *who  [21:08]
<redbo> That's what our dark data cleanup script uses  [21:08]
<clayg> oh... how do you know they're dark?  [21:08]
*** ametts has quit IRC  [21:08]
<redbo> We bulkstat, read the metadata for each .data file, then compare it to the bloom filter.  [21:08]
<clayg> OH!  so you *have* already done this!  well then... how long will it take and how big do they get!?  [21:09]
<MooingLemur> clayg: yeah, that's usually the process.. but in my case where I suspect I may have introduced dark data, I had messed up some permissions for a couple of months by rsyncing some partitions manually off drives I was removing, so they ended up as root:root rather than swift:swift.  So the replicator couldn't get at them, and I didn't notice.  I fixed the permissions but didn't consider that there was probably old data (that couldn't be deleted)  [21:09]
<MooingLemur> Granted, this is likely a tiny drop in the bucket with respect to space on the cluster, but I can't easily verify that that's the case :)  [21:11]
*** asettle has quit IRC  [21:11]
<clayg> yeah... I mean depending on your situation you may just be able to get away with a script that rolls over the objects dirs and uses internal_client to start paging container queries into memory?  [21:11]
<redbo> Well, on our biggest cluster with I don't know how many tens of billions of objects, with an acceptable error rate, the bloom filters are too big.  So we're having to build them with some partitioning.  [21:11]
<clayg> MooingLemur: I was sorta hoping something simple and ad-hoc like that might already be floating around the operations toolboxes out there somewhere?  [21:12]
<clayg> MooingLemur: I think we have something like that - but it's baked into some other tools that might not be appropriate for the public domain :\  [21:12]
<notmyname> redbo: have you considered writing it in go?  [21:12]
<redbo> Then getting the names of all the objects on a healthy drive takes about 30 minutes (but we find a lot of drives aren't healthy and take some hours).  And checking those objects against the bloom filter takes a few minutes.  [21:12]
<clayg> redbo: great info!  [21:13]
<redbo> How do you know it isn't already written in go??  And no, I don't think it would help this very much.  [21:13]
<notmyname> just messing with you ;-)  [21:14]
*** StraubTW has quit IRC  [21:15]
*** gyee has quit IRC  [21:17]
*** ametts has joined #openstack-swift  [21:17]
<MooingLemur> I think I'll do what clayg suggested: run this against the objects on one drive, query the container for every qualified object using internal_client, and see what it looks like  [21:17]
*** StraubTW has joined #openstack-swift  [21:18]
*** StraubTW has quit IRC  [21:18]
*** StraubTW has joined #openstack-swift  [21:18]
<redbo> We have a script that does that and it helps, but it's so slow we filter it to only check containers for objects above a certain size.  [21:19]
<MooingLemur> even if I run it sequentially on one host and one drive at a time, and it takes a year for a single pass, it's way better than no audit at all.  [21:19]
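(A very rough sketch of that one-drive audit, assuming swift's InternalClient and the diskfile read_metadata helper; the device path, internal-client config path, and age cutoff are illustrative, and a real script would want rate limiting, error handling, and a recheck pass before trusting any result:

    import json
    import os
    import time
    import urllib

    from swift.common.internal_client import InternalClient
    from swift.obj.diskfile import read_metadata

    OBJECTS_DIR = '/srv/node/sdb1/objects'        # hypothetical device to audit
    IC_CONF = '/etc/swift/internal-client.conf'   # hypothetical config path
    RECLAIM_AGE = 7 * 24 * 3600                   # skip anything newer than this

    def iter_datafiles(objects_dir):
        # walk objects/<part>/<suffix>/<hash>/<timestamp>.data
        for root, dirs, files in os.walk(objects_dir):
            for name in files:
                if name.endswith('.data'):
                    yield os.path.join(root, name)

    def listed_in_container(client, account, container, obj):
        # ask the container layer whether the object shows up in its listing
        path = '/v1/%s/%s?format=json&limit=1&prefix=%s' % (
            urllib.quote(account), urllib.quote(container), urllib.quote(obj))
        resp = client.make_request('GET', path, {}, (2, 404))
        if resp.status_int // 100 != 2:
            return False
        return any(row.get('name') == obj for row in json.loads(resp.body))

    def main():
        client = InternalClient(IC_CONF, 'dark-data-audit', 3)
        cutoff = time.time() - RECLAIM_AGE
        for datafile in iter_datafiles(OBJECTS_DIR):
            if os.path.getmtime(datafile) > cutoff:
                continue  # too new; an async pending might not have landed yet
            with open(datafile, 'rb') as fp:
                meta = read_metadata(fp)
            # the stored 'name' looks like '/<account>/<container>/<object>'
            account, container, obj = meta['name'].split('/', 3)[1:]
            if not listed_in_container(client, account, container, obj):
                print('possible dark data: %s (%s)' % (datafile, meta['name']))

    if __name__ == '__main__':
        main()

Anything flagged should only be logged, never deleted automatically, since a listing miss can also mean the container replica queried was simply behind.)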
<clayg> redbo: nice!  there's dark data - then there's *big* dark data ;)  [21:21]
<MooingLemur> and then there's dark big data  [21:21]
<MooingLemur> surveillance :)  [21:22]
<notmyname> it's a reverse ZBF auditor. you know that most of the data in the cluster is from large objects. so just scan those  [21:22]
<notmyname> (as opposed to most of the objects in the cluster being small)  [21:22]
<clayg> MooingLemur: I'm trying to think how you could have the walker feed a worker container names as it finds them - then he pages a container listing into an in-memory set - and only after listing the whole container does he check to see if the name is in the container's set  [21:22]
<notmyname> redbo: also, if that script is slow, have you considered writing it in go?  [21:23]
*** mwheckmann has quit IRC  [21:23]
<clayg> MooingLemur: there's lots of trade-offs that have to do with how many containers you have and how big they are :\  [21:23]
<MooingLemur> clayg: yeah, I think we no longer have the customer that had 100 million objects in a single container :)  [21:23]
*** asettle has joined #openstack-swift  [21:23]
<notmyname> MooingLemur: no longer have the customer? or that customer no longer has those containers? ;-)  [21:23]
* notmyname hopes the latter  [21:24]
<MooingLemur> both, heh.  [21:24]
<MooingLemur> (oh well)  [21:24]
<notmyname> problem solved!  [21:24]
<clayg> MooingLemur: you know it's funny - but even 100 million objects fits in a python set somewhat manageably!  I mean I wouldn't want it sitting there in memory for all time... but... well YMMV  [21:24]
<redbo> Well to really speed it up, I'd have to write the container server in go too.  [21:25]
<notmyname> heh  [21:25]
<clayg> BOOM!  [21:25]
<notmyname> you haven't already?!  [21:25]
<clayg> notmyname: you're on fire today?  [21:25]
*** asettle has quit IRC  [21:27]
<redbo> I mean, I have some prototypes.. trying out leveldb and rocksdb and some others.  But I haven't worked on them in a while.  [21:27]
<notmyname> redbo: I hear mongodb might be a good idea in hummingbird  [21:28]
*** lakshmiS_ has quit IRC  [21:28]
<redbo> If you put everything in mongo, you don't need all these other servers.  [21:29]
<redbo> Then you only have one problem.  Unfortunately it's mongo.  [21:30]
<notmyname> lol  [21:30]
*** pauloewerton has quit IRC  [21:33]
*** nadeem has quit IRC  [21:35]
*** _JZ_ has joined #openstack-swift  [21:39]
*** gyee has joined #openstack-swift  [21:43]
*** wer has joined #openstack-swift  [21:45]
<clayg> redbo: boltdb maybe?  https://github.com/boltdb/bolt  [21:47]
<clayg> redbo: honestly it's not super clear to me how a generic k/v db replaces a container db - i mean I get that at some level everything is k/v (it's all block addresses down there somewhere) - but it seems like you'd have to write a bunch of stuff sqlite already gets right to really have the indexes on name work the way we want?  [21:48]
*** wasmum has quit IRC  [21:48]
<redbo> clayg: Yeah it would be more complex, you'd need to add indexes as separate keys and stuff, which sucks and is why I have a bunch of half-finished prototypes sitting around.  The only appeal is if it ends up way faster.  [21:52]
<clayg> redbo: faster on insert I guess is your primary metric?  I feel like indexes gunna be indexes - if you want to keep 'em lean and mean we just gotta shard :\  [21:54]
<redbo> I wonder if sqlite4 is dead.  They were basing it on an LSM tree k/v backend similar to those newer databases.  [21:57]
<clayg> whoa  [21:57]
<notmyname> zigo: I really enjoy reading your emails  [22:30]
<zigo> notmyname: About the global-reqs ?  [22:30]
<notmyname> all of them :-)  [22:31]
<notmyname> but yeah  [22:31]
<notmyname> zigo: and I think most of them are summarized with "look, packaging is hard, nobody is working on it, please stop making my job harder"  [22:31]
<zigo> :)  [22:31]
<notmyname> and bringing some degree of ops sanity/perspective to how software is actually consumed  [22:31]
<notmyname> zigo: like you, I'm very interested and curious about the outcome of these global reqs discussions  [22:32]
<zigo> notmyname: Well, I agreed to try to make the global-reqs management less work.  [22:32]
<zigo> I do believe a lot of it can be just not done.  [22:33]
<notmyname> I don't have a particularly strong opinion one way or the other. except that ops people who deploy swift tell me that they don't want to keep redoing all their packaging every time some other openstack project decides they need to upgrade some dependency  [22:33]
<notmyname> so as long as people who deploy swift are happy, I really don't care what the outcome is :-)  [22:34]
<zigo> Yup, that too !  [22:34]
<notmyname> (ie it's really easy for me to +2 or -2 a global requirements patch. really! takes about 3 seconds!)  [22:34]
<zigo> If we could test the lower bounds of dependencies for each and every project, we wouldn't have any issue; it'd be fully tested and automated.  [22:34]
<notmyname> yes! I like that idea  [22:35]
<notmyname> from time to time I run my swift all-in-one with all the minimum versions allowed in our dependencies  [22:35]
<notmyname> and sometimes I find stuff we need to bump up  [22:35]
<zigo> Oh, really? Thanks, that's really helpful.  [22:35]
<clarkb> notmyname: zigo please remember anyone including you can write that test and have it run against openstack, either on every patchset or periodically or after merges or whatever makes sense  [22:36]
<notmyname> FWIW I'm not doing that currently, so I couldn't tell you what the current state is. I think I ended up upgrading everything a few months ago  [22:36]
<clarkb> it doesn't have to be a special "oh I do this for swift locally to catch bugs"  [22:37]
<notmyname> clarkb: right! I think the reason I haven't is lack of knowledge (or comfort) with how to modify devstack dependencies to be something different to run tests. my understanding is that devstack just slams in global requirements regardless of anything else  [22:38]
<clarkb> especially now that pip honors constraints (it was hard to actually enforce the lower bounds previously)  [22:38]
<clarkb> notmyname: sort of, it uses pip's upper constraints today. But you could add in lower constraints and everything else would be the same  [22:38]
<notmyname> zigo: separate question: how are projects supposed to identify dependencies that aren't in requirements.txt? (ie stuff that can't be installed from pypi)?  [22:39]
<clarkb> it's just a matter of generating the constraints file with the correct constraints and feeding that into devstack/pip  [22:39]
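(For the lower-bounds idea, a sketch of what that could look like with pip's constraints support; the file name and pinned versions below are purely illustrative, and no such gate job existed at the time:

    # lower-constraints.txt (hypothetical): every dependency pinned to the
    # minimum that requirements.txt claims to support
    eventlet==0.17.4
    PyECLib==1.0.7
    six==1.9.0

    $ pip install -c lower-constraints.txt -r requirements.txt -r test-requirements.txt
    $ ./.unittests && ./.functests

Constraints only pin versions of packages that get installed anyway, so the same requirements files keep working; the constraints file just flips the resolution from "newest allowed" to "oldest allowed".)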
<clarkb> notmyname: bindep  [22:39]
<clarkb> notmyname: you can have an other-requirements.txt file that uses bindep's specification format to specify system deps  [22:39]
* clarkb looks for an example  [22:40]
<notmyname> whew  [22:40]
<zigo> notmyname: I'm not sure I understand your question ...  [22:41]
<zigo> notmyname: Will it work if something is missing?  [22:42]
<zigo> Most likely, it will just break hard, no?  [22:42]
<clarkb> notmyname: http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/data/bindep-fallback.txt this is the fallback list we use if projects do not have an other-requirements.txt file  [22:42]
<clarkb> notmyname: it is fairly comprehensive of all the various things various projects need, but if you add one to, say, swift you can make it the minimal set that only swift needs  [22:42]
<notmyname> clarkb: meta comment, thanks for jumping in recently to conversations with "hey instead of doing your own thing, why don't you help the community...here's how". personally, I don't know how most of the -infra stuff works or what's available to me. so these things help :-)  [22:43]
<notmyname> clarkb: so if we have a root-level "other-requirements.txt" then that's what will get picked up instead of this one?  [22:43]
<clarkb> notmyname: ok :) I find that generally things get better when we do them upstream like this. Because then not only swift but the other projects as well can have tests that do that  [22:44]
<clarkb> notmyname: correct  [22:44]
<zigo> clarkb: Clearly, what I can do to help the community is get this packaging-deb project up and running...  [22:44]
<clarkb> notmyname: I don't think devstack is using it yet, but for all of the non-devstack stuff we rely on either the in-project list or the fallback list  [22:44]
<zigo> clarkb: Though, as you know, it's currently stuck with this sbuild issue... :(  [22:44]
<clarkb> zigo: yes and I tried to help you earlier today but you ignored my advice  [22:44]
<clarkb> zigo: so I am not sure what else I can do  [22:45]
<zigo> clarkb: What advice was that?  [22:45]
<zigo> To use bash --login ?  [22:45]
<notmyname> clarkb: and then the installed dependencies are the union of other-requirements.txt and requirements.txt and test-requirements.txt? (with versions specified for each listed dependency as defined by upper-constraints?)  [22:45]
<zigo> I tried...  [22:45]
<clarkb> zigo: to try su and to make sure the group is being installed properly  [22:45]
<zigo> clarkb: *IT IS* installed properly in /etc/group, I checked ...  [22:45]
<clarkb> notmyname: yes, other-requirements gets installed out of the system package manager (you will see platform:dpkg and similar in the fallback file), and the requirements.txt and test-requirements.txt via pi  [22:46]
<zigo> (not in infra, but I know the script, and ran it on an image...)  [22:46]
<clarkb> *via pip  [22:46]
<notmyname> clarkb: this is great info  [22:46]
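(For reference, a hypothetical minimal other-requirements.txt for a project like swift, using bindep's "package [selector]" syntax; the package names here are illustrative, and the authoritative cross-project list is the bindep-fallback.txt file clarkb linked above:

    # system packages needed to build and test, one per line, with optional
    # platform selectors in brackets
    liberasurecode-dev [platform:dpkg]
    liberasurecode-devel [platform:rpm]
    libffi-dev [platform:dpkg]
    libffi-devel [platform:rpm]
    memcached
    rsync

The brackets let one file describe both Debian/Ubuntu and RPM-based test nodes; lines without a selector apply everywhere.)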
<m3m0> hey guys is there a way to get notified when an object gets uploaded to swift?  [22:48]
<clayg> timur: ^ cc :D  [22:49]
<clarkb> notmyname: the goal is that each project can have better control over that stuff both for documentation purposes and for easier management of test deps  [22:50]
<notmyname> m3m0: short answer is "not yet"  [22:50]
<notmyname> m3m0: longer answer is that there are some good ideas out there, it's an area of active discussion (including a topic for next week's summit)  [22:51]
*** StraubTW has quit IRC  [22:51]
<notmyname> m3m0: but it's definitely possible, if you include "write code" in your scope of what's possible  [22:53]
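(To give a flavour of the "write code" option: a minimal sketch of a proxy middleware that posts a JSON notification for every successful object PUT. The webhook URL, the webhook_url config knob, and the choice to notify synchronously in the request path are all illustrative assumptions - nothing like this ships with swift, and a production version would queue notifications out-of-band:

    import json
    import urllib2

    from swift.common.swob import wsgify
    from swift.common.utils import split_path

    class NotifyMiddleware(object):
        """Fire a webhook after each successful object PUT (sketch only)."""

        def __init__(self, app, conf):
            self.app = app
            # hypothetical config option; not a real swift setting
            self.webhook = conf.get('webhook_url', 'http://localhost:9999/notify')

        @wsgify
        def __call__(self, req):
            resp = req.get_response(self.app)
            if req.method == 'PUT' and resp.status_int // 100 == 2:
                try:
                    # expect /v1/<account>/<container>/<object>
                    vrs, account, container, obj = split_path(req.path, 4, 4, True)
                except ValueError:
                    return resp  # account or container request, not an object
                if obj:
                    payload = json.dumps({'account': account,
                                          'container': container,
                                          'object': obj})
                    try:
                        urllib2.urlopen(self.webhook, payload, timeout=1)
                    except Exception:
                        pass  # never fail the client request over a notification
            return resp

    def filter_factory(global_conf, **local_conf):
        conf = global_conf.copy()
        conf.update(local_conf)

        def notify_filter(app):
            return NotifyMiddleware(app, conf)
        return notify_filter

It would be wired into the proxy pipeline like any other filter via a [filter:notify] section pointing at this factory.)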
<mattoliverau> morning y'all  [22:53]
<notmyname> hey mattoliverau!  [22:53]
<mattoliverau> notmyname: hey!  [22:53]
<timburke> good morning mattoliverau!  [22:53]
<notmyname> mattoliverau: are you getting any sleep?  [22:53]
<mattoliverau> notmyname: pfft sleep, it's overrated (says someone not getting any)  [22:53]
<notmyname> heh  [22:54]
<notmyname> it will get better :-)  [22:54]
<mattoliverau> for the first time in my life I'm looking forward to the long flight to the US :P  [22:54]
<notmyname> lol  [22:54]
<mattoliverau> timburke: hey man, and congrats on core-ifying some more.  [22:54]
<mattoliverau> notmyname: newborns are hard, they're so cute during the day.. and turn into gremlins at night.  [22:55]
<timburke> mattoliverau: i have so much more to do and worry about now :-(  [22:56]
<timburke> i'm sure you can relate ;-)  [22:56]
<zigo> m3m0: You can pay an employee to look at the logs? :)  [22:56]
<mattoliverau> timburke: lol, yup  [22:56]
<mattoliverau> timburke: no pressure :P  [22:56]
* zigo hides after this bad joke  [22:56]
<clarkb> mattoliverau: now imagine having two at the same time :)  [22:57]
<notmyname> clarkb: twins?  [22:57]
<clarkb> notmyname: ya I have 10 month old twins  [22:57]
*** wasmum has joined #openstack-swift  [22:57]
<mattoliverau> clarkb: that is just ridiculous.. I think parents who go back extra times are crazy :P (but apparently you forget about that in the future).  [22:58]
<notmyname> I gotta admit, I don't envy you (or clayg) ;-)  [22:58]
<mattoliverau> clarkb: wow, you poor poor man, I'll never complain (around you) again :P  [22:58]
*** km__ has joined #openstack-swift  [23:00]
*** km__ is now known as Guest47358  [23:01]
<pdardeau> hey mattoliverau! welcome back  [23:06]
<mattoliverau> pdardeau: hey man! thanks  [23:07]
*** rickyrem has left #openstack-swift  [23:08]
<mattoliverau> So it turns out the world doesn't stop when you're away, people keep on emailing.  [23:08]
<notmyname> mattoliverau: mark all as read. if it's important, then you'll probably get fired for not responding. good luck  [23:10]
<mattoliverau> notmyname: lol, great, job done :P  [23:11]
*** asettle has joined #openstack-swift  [23:13]
<mattoliverau> notmyname: it's what most of my yesterday was. that, and rebasing the sharding code and now trying to debug it in the post-fast-POST world (which involved a lot of rewording).  [23:13]
*** MrsWr0ng has joined #openstack-swift  [23:16]
<MrsWr0ng> hi  [23:16]
<notmyname> hello MrsWr0ng  [23:20]
<MrsWr0ng> hi are you a bot  [23:20]
<gmmaha> howdy mattoliverau .. welcome back!!  [23:22]
<mattoliverau> gmmaha: hey!  [23:23]
*** kei_yama has joined #openstack-swift  [23:35]
*** MrsWr0ng has quit IRC  [23:36]
*** klamath has quit IRC  [23:38]
<openstackgerrit> Merged openstack/swift: Encrypt/decrypt etag in container listing  https://review.openstack.org/275492  [23:55]

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!