Wednesday, 2015-09-09

*** jrichli has joined #openstack-swift00:06
*** garthb has quit IRC00:13
*** kota_ has joined #openstack-swift00:16
*** ChanServ sets mode: +v kota_00:16
kota_good morning00:16
jrichlikota_ : good morning!00:19
*** zhill has quit IRC00:19
kota_jrichli: morning :D00:19
jrichlikota_ : I have gotten farther in my learning Japanese, but I feel that I don't really know how to say anything :-)00:20
jrichliLearning a new language is very hard!00:20
kota_jrichli: exactly, I am still not good at English :/00:21
jrichliI was learning the difference between wa and ga.  I think I will end up knowing some single words, but making sentences is another thing.00:21
jrichliwe seem to all understand you just fine :-)00:21
kota_thanks00:21
jrichliI should say, understand you well.  I see that sentence now, and I understand it might not translate well.00:22
jrichliI guess this is really a good thing for me to do.00:22
kota_basically 'wa' and 'ga' have the same meaning. Japanese speakers can understand you even if you mix up the use cases.00:23
jrichliyes, but I read some examples where it can make the difference between a compliment and an insult!00:23
kota_In my opinion, we often use 'ha' for statements... maybe00:24
jrichlioh, I haven't gotten to that one yet00:24
kota_for the be verb: am, are, was, like that00:24
kaliyatorgomatic, it can, it's 777 with swift:swift...00:25
kota_and 'ga' is used for action verbs like do, play00:25
kota_or so00:25
jrichliI haven't seen that yet either - I don't think.  I am using Rosetta Stone, which is good, but I have to supplement it by reading info on the internet.00:26
kota_e.g. if you want to say 'I AM a worker', you should use 'ha' in the statement.00:27
kota_oh, Rosetta Stone is popular in US, too?00:27
jrichliok, that makes sense.  That type of statement hasn't been covered yet.00:27
jrichliyep00:27
kota_I saw the commercial in Japan, too, I didn't try it yet tho00:28
kota_jrichli: if you have any question, feel free to ask me :P00:30
kota_about Japanese.00:30
*** flwang1 has quit IRC00:30
*** m_kazuhiro has joined #openstack-swift00:30
jrichliGreat, thanks!  I am sure I will take you up on that at some point.00:31
*** gyee has quit IRC00:33
*** rjaiswal has joined #openstack-swift00:39
kaliyaTokyo summit? :) I am trying with "Japanese for busy people" but I'm too busy00:45
kota_kaliya: lol00:46
kaliyaSo far, I know how to say sumimasen ("excuse me")00:48
jrichlilol.  Well, some words are nice because they sound very similar to the English equivalent.  Like pen, computer, internet, dining room, to name a few.00:48
*** zacksh_ has quit IRC00:48
*** darrenc has quit IRC00:48
*** sudorandom has quit IRC00:48
*** mattoliverau has quit IRC00:48
*** darrenc has joined #openstack-swift00:49
*** jamielennox has quit IRC00:49
*** flwang has quit IRC00:49
*** CJM has quit IRC00:49
*** StevenK has quit IRC00:50
*** donagh has quit IRC00:50
*** jroll has quit IRC00:50
*** kevinc___ has quit IRC00:50
kota_jrichli: yes, they came from outside of Japan. We have a way of calling them just as they are.00:50
*** mattoliverau has joined #openstack-swift00:51
*** StevenK has joined #openstack-swift00:51
*** zacksh has joined #openstack-swift00:51
jrichliah, i see.  I even thought "Tishatsu" was pretty similar.  Easy to remember anyway, and I thought applicable to the summit :-)00:51
*** ChanServ sets mode: +v mattoliverau00:52
jrichlimore applicable than dining room anyway :-)00:53
*** jroll has joined #openstack-swift00:53
*** donagh has joined #openstack-swift00:53
kota_jrichli: yup, before a kind of "shirt" came to Japan, Japanese people wore "kimono", traditional clothes.00:54
*** jamielennox has joined #openstack-swift00:54
*** sudorandom has joined #openstack-swift00:54
kota_that's why we have the same pronunciation for "shirt"00:54
*** CrackerJackMack has joined #openstack-swift00:54
jrichliah.  this very interesting00:54
jrichliI mean, that is very interesting00:55
kota_:)00:55
jrichlibut then, why is laptop so different?00:55
kota_I don't know :/00:56
*** flwang has joined #openstack-swift00:56
jrichli:-)  oh well, I knew it wouldn't be that easy00:56
kota_Japanese speakers sometimes make new words by combining foreign languages.00:56
kota_AFAIK, older Japanese usage was "notebook + personal computer" for the laptop.00:57
kota_before we got the word "laptop"00:57
jrichlioh, ic.00:57
kota_and it became the de facto term, like note-PC00:58
*** jkugel has joined #openstack-swift00:58
kota_jrichli: sorry, I have a meeting, away from here.01:00
*** kota_ has quit IRC01:00
jrichlikota_: no problem :-)01:05
*** kei_yama has quit IRC01:14
*** kei_yama has joined #openstack-swift01:15
mattoliveraumorning all01:15
*** flwang1 has joined #openstack-swift01:15
mattoliverauhad a busy morning01:15
jrichlimattoliverau: morning!  That is great to hear that you will be vacationing in Japan for 2 weeks!01:17
mattoliverauyeah! gonna be awesome!01:18
mattoliveraualso looks like there is another swift dev in the making: http://postimg.org/image/63kefpeyp/01:19
mattoliverau^ that's why I was late this morn.. I'm a little excited now :)01:19
jrichliOMG!  congratulations!!!01:20
mattoliveraujrichli: thanks :)01:20
mattoliveraujust following in tdasilva's and ho's foot steps :)01:20
jrichliso, that says he/she is 12 weeks and 6 days old?01:21
mattoliverauyeah, it was the 12 week scan; by the length it looks closer to 12 weeks 6 days. Won't know the sex until 20 weeks01:22
mattoliverauand now in the safety zone so can finally tell people :)01:22
jrichliic.  I bet it was hard not to say anything :-)01:22
charzmattoliverau: Congratulations!!!01:22
mattoliverauyeah, my remote work mates and the swift community are first to know LOL! (other than family of course).01:24
jrichlias it should be ;-)01:25
jrichliso soon, you will need tiny swift shirts01:26
mattoliveraulol01:27
hroumattoliverau, oh wow that's awesome, congratulations !!!01:30
mattoliverauhrou: thanks man.. what are you still doing online, shouldn't you be sleeping :P01:30
hroumattoliverau, I recall chatting about this very subject during the hackathon!  It's funny, here at work I find kids come in batches, literally for a 6 month period, kids popping out everywhere!  Then nothing for a year.01:31
hrouha, no just 9:30 here.01:33
mattoliveraulol, so whos next?01:33
hrouI love the 'mattoliverau v2.0', that's a great pic.  But wait, rcbau ? : ) should there be a 2.0 appended to that as well01:34
*** haomaiwang has joined #openstack-swift01:37
*** darrenc is now known as darrenc_afk01:42
mattoliverauhrou: rcbau is Rackspace Cloud Builders Australia, it's the meme I used to tell my work team :)01:47
mattoliverauCause memes are important01:47
hroumattoliverau, ah ! : )  Ok that's hilarious and now makes perfect sense01:48
*** jkugel has quit IRC01:54
*** haomaiwang has quit IRC02:01
*** haomaiwang has joined #openstack-swift02:01
*** m_kazuhiro has quit IRC02:04
tdasilvamattoliverau: Congrats!!02:16
*** darrenc_afk is now known as darrenc02:17
homattoliverau: Congratulations!!!02:18
openstackgerritZack M. Davis proposed openstack/swift: given Python 3, use http.client.parse_headers instead of rfc822.Message  https://review.openstack.org/20330402:23
*** kei_yama has quit IRC02:23
openstackgerritZack M. Davis proposed openstack/swift: given Python 3, use parse_headers instead of rfc822.Message  https://review.openstack.org/20330402:24
*** kei_yama has joined #openstack-swift02:24
notmynamemattoliverau: that's awesome! congrats!02:25
*** changbl has joined #openstack-swift02:28
mattoliverauThanks y'all02:40
jrichlinotmyname: I was excited to see women sizes on the t-shirt survey :-).  also, you said you have the design?  Can it be revealed?02:44
*** eranrom has joined #openstack-swift02:53
*** haomaiwang has quit IRC02:55
*** albertom has quit IRC02:56
*** haomaiwa_ has joined #openstack-swift02:57
*** albertom has joined #openstack-swift02:59
*** rjaiswal has quit IRC03:00
*** haomaiwa_ has quit IRC03:01
*** eranrom has quit IRC03:01
*** haomaiwa_ has joined #openstack-swift03:01
*** ppai has joined #openstack-swift03:28
notmynamejrichli: heh. not yet ;-)03:28
notmynamejrichli: and yes, I wanted to make sure I had some women's sizes too03:28
jrichlinotmyname: mahatic will be happy too!  The men's shirts were pretty big on her03:29
mattoliverauJust an FYI, I may miss the meeting tomorrow, or at least a chunk of it cause I need to fly to Sydney at the crack of dawn.03:55
*** sanchitmalhotra has joined #openstack-swift03:57
*** haomaiwa_ has quit IRC04:01
*** 7F1AAJQF9 has joined #openstack-swift04:01
*** m_kazuhiro has joined #openstack-swift04:03
*** hrou has quit IRC04:18
*** jrichli has quit IRC04:23
*** mahatic has joined #openstack-swift04:40
mahaticgood morning04:43
*** wbhuber has joined #openstack-swift04:48
*** baojg has joined #openstack-swift04:49
*** wbhuber has quit IRC04:53
*** kota_ has joined #openstack-swift04:58
*** ChanServ sets mode: +v kota_04:58
kota_mattoliverau: congrats!04:59
mattoliveraukota_: thanks man :)05:00
*** 7F1AAJQF9 has quit IRC05:01
*** haomaiwa_ has joined #openstack-swift05:01
kota_Just what I wanted to say... and now I have to go offline for a meeting, again :(05:02
*** mahatic has quit IRC05:06
*** kota_ has quit IRC05:06
*** mahatic has joined #openstack-swift05:06
*** flwang1 has quit IRC05:06
mahaticmattoliverau: congratulations!!05:07
mahatic:)05:07
mattoliveraumahatic: thanks :)05:07
mahaticand awesome that you're vacationing in Japan!05:07
mattoliveraumahatic: are you coming to Japan?05:13
mahaticmattoliverau: my flights are done, visa on the way! (Japanese visa is super complicated, I am actually getting docs shipped from Tokyo! :D)05:21
*** SkyRocknRoll has joined #openstack-swift05:21
mattoliverauwow, ok.. well travelled documents05:21
mahaticlol yeah05:22
*** silor has joined #openstack-swift05:25
*** silor has quit IRC05:26
*** silor has joined #openstack-swift05:26
*** changbl has quit IRC05:30
*** silor1 has joined #openstack-swift05:30
*** silor has quit IRC05:30
*** silor1 is now known as silor05:30
*** ekarlso- has joined #openstack-swift05:37
*** ekarlso- has quit IRC05:50
*** km has quit IRC05:52
*** baojg has quit IRC05:52
*** km has joined #openstack-swift05:52
*** haomaiwa_ has quit IRC06:01
*** haomaiwang has joined #openstack-swift06:01
*** mahatic has quit IRC06:08
*** mahatic has joined #openstack-swift06:14
*** baojg has joined #openstack-swift06:20
*** xnox has quit IRC06:34
*** xnox has joined #openstack-swift06:35
*** xnox has quit IRC06:44
*** xnox has joined #openstack-swift06:46
*** akle has joined #openstack-swift06:51
*** haomaiwang has quit IRC07:01
*** haomaiwang has joined #openstack-swift07:01
openstackgerritMerged openstack/swift: Improving statistics sent to Graphite.  https://review.openstack.org/20265707:12
*** rledisez has joined #openstack-swift07:14
*** pberis has joined #openstack-swift07:25
*** baojg has quit IRC07:39
*** geaaru has joined #openstack-swift07:40
*** pberis has quit IRC07:53
*** jordanP has joined #openstack-swift07:53
*** hseipp has joined #openstack-swift07:54
*** baojg has joined #openstack-swift07:57
*** haomaiwang has quit IRC08:01
*** haomaiwang has joined #openstack-swift08:01
*** acoles_ is now known as acoles08:01
*** jordanP has quit IRC08:02
*** jordanP has joined #openstack-swift08:02
*** cazino has joined #openstack-swift08:03
*** cazino has left #openstack-swift08:03
*** m_kazuhiro has quit IRC08:12
*** jordanP has quit IRC08:17
*** jistr has joined #openstack-swift08:32
*** jordanP has joined #openstack-swift08:43
*** itlinux has joined #openstack-swift08:43
acolesmattoliverau: congrats! sharding irl ;)08:45
*** flwang1 has joined #openstack-swift08:49
*** SkyRocknRoll has quit IRC08:49
*** T0m_ has joined #openstack-swift08:56
*** haomaiwang has quit IRC09:01
*** haomaiwang has joined #openstack-swift09:01
*** baojg has quit IRC09:10
*** kei_yama has quit IRC09:16
*** Kennan_Vacation2 has quit IRC09:17
*** trifon_ has quit IRC09:19
*** SkyRocknRoll has joined #openstack-swift09:22
*** Kennan_Vacation has joined #openstack-swift09:25
*** m_kazuhiro has joined #openstack-swift09:29
*** km has quit IRC09:33
mattoliverauLol, thanks acoles09:35
*** mahatic has quit IRC09:36
*** haomaiwang has quit IRC10:01
*** haomaiwang has joined #openstack-swift10:02
*** trifon_ has joined #openstack-swift10:03
*** itlinux has quit IRC10:24
*** logan2 has quit IRC10:24
*** logan2 has joined #openstack-swift10:28
*** T0m_ has left #openstack-swift10:43
*** itlinux has joined #openstack-swift10:47
*** hseipp has quit IRC10:53
*** haomaiwang has quit IRC11:01
*** haomaiwang has joined #openstack-swift11:01
*** aix has quit IRC11:06
*** T0m_ has joined #openstack-swift11:22
T0m_Is there a way to query a swift object to see what partition it's part of?11:23
*** mahatic has joined #openstack-swift11:26
acolesT0m_: you can use swift-get-nodes command, like this:11:27
acoles swift-get-nodes /etc/swift/object.ring.gz AUTH_acc/cont/obj11:27
acolesthat's if you have access to a node. there is no way to find partition via the swift API.11:29
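For reference, a sketch of a swift-get-nodes run on a node that has the ring files (the account/container/object names here are hypothetical and the output is abridged; exact fields vary by release):

    $ swift-get-nodes /etc/swift/object.ring.gz AUTH_test/mycontainer/myobject
    Account         AUTH_test
    Container       mycontainer
    Object          myobject
    Partition       123456
    Hash            d41d8cd98f00b204e9800998ecf8427e
    Server:Port Device      192.168.1.10:6000 sdb1
    Server:Port Device      192.168.1.11:6000 sdb1
    ...

The "Partition" line answers T0m_'s question; the command also prints suggested curl and ssh commands for locating the object's files on each node.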
T0m_ok will give that a go - thanks11:30
mahaticacoles: hello, I need to skip probe tests when we're not using encryption. Would I need to create a parameter of sorts in swiftclient, like we have for healthcheck, to turn it on/off? I'm not sure11:32
*** logan2 has quit IRC11:38
*** logan2 has joined #openstack-swift11:40
mahaticI don't think we know if encryption is on/off by default. At least, I don't11:43
*** logan2 has quit IRC11:50
*** resker has joined #openstack-swift11:54
*** logan2 has joined #openstack-swift11:54
*** lpabon has joined #openstack-swift11:57
*** esker has quit IRC11:57
*** thurloat is now known as thurloat_isgone11:59
*** haomaiwang has quit IRC12:01
*** haomaiwa_ has joined #openstack-swift12:01
*** Protux has quit IRC12:03
*** lpabon has quit IRC12:07
*** ppai has quit IRC12:08
*** logan2 has quit IRC12:17
*** logan2 has joined #openstack-swift12:20
*** ppai has joined #openstack-swift12:22
*** marzif has quit IRC12:25
*** marzif has joined #openstack-swift12:25
*** changbl has joined #openstack-swift12:37
*** logan2 has quit IRC12:41
*** logan2 has joined #openstack-swift12:42
*** haomaiwa_ has quit IRC12:50
*** aix has joined #openstack-swift12:51
*** m_kazuhiro has quit IRC12:54
*** resker has quit IRC12:55
*** ctrath has joined #openstack-swift12:57
*** ppai has quit IRC13:02
*** ctrath has quit IRC13:04
*** dustins has joined #openstack-swift13:07
*** hrou has joined #openstack-swift13:08
*** nadeem has joined #openstack-swift13:15
*** nadeem has quit IRC13:15
*** nadeem has joined #openstack-swift13:16
*** marcusvrn_ has joined #openstack-swift13:16
*** jkugel has joined #openstack-swift13:19
*** SkyRocknRoll has quit IRC13:21
*** breitz has quit IRC13:23
*** breitz has joined #openstack-swift13:23
openstackgerritDavid Goetz proposed openstack/swift: go: add a couple timers to see about GetHashes vrs Sync time  https://review.openstack.org/22175313:24
*** haomaiwang has joined #openstack-swift13:28
*** flwang1 has quit IRC13:31
*** tongli has joined #openstack-swift13:33
*** mahatic has quit IRC13:37
*** flwang1 has joined #openstack-swift13:38
*** ctrath has joined #openstack-swift13:43
*** lcurtis has joined #openstack-swift13:48
*** mahatic has joined #openstack-swift13:50
*** flwang1 has quit IRC13:50
*** wbhuber has joined #openstack-swift13:51
*** SkyRocknRoll has joined #openstack-swift13:52
*** nadeem has quit IRC13:58
*** ho has quit IRC14:00
*** haomaiwang has quit IRC14:01
*** haomaiwang has joined #openstack-swift14:01
*** wbhuber_ has joined #openstack-swift14:02
*** wbhuber has quit IRC14:03
*** mahatic has quit IRC14:03
*** mahatic has joined #openstack-swift14:04
*** mahatic has quit IRC14:04
*** sanchitmalhotra has quit IRC14:05
*** flwang1 has joined #openstack-swift14:06
*** trifon_ has quit IRC14:15
*** jlhinson has joined #openstack-swift14:15
*** jrichli has joined #openstack-swift14:17
*** flwang1 has quit IRC14:18
tdasilvanotmyname: good morning, is there a meeting today?14:23
*** proteusguy__ has quit IRC14:32
*** SkyRocknRoll has quit IRC14:34
*** jlhinson has quit IRC14:34
*** ujjain- has quit IRC14:37
*** ujjain- has joined #openstack-swift14:37
jrichlimahatic: are you around?14:39
jrichlimahatic: I dont see you on.  I will send you email14:40
openstackgerritDavid Goetz proposed openstack/swift: go: assign startTime to Now at beginning of replication run  https://review.openstack.org/22151214:42
*** wbhuber_ is now known as wbhuber14:45
*** jlhinson has joined #openstack-swift14:46
openstackgerritAlistair Coles proposed openstack/swift: Trivial Key Master for encryption  https://review.openstack.org/19374914:47
openstackgerritDavid Goetz proposed openstack/swift: go: use recon diskusage so you can weed out unmounted drives  https://review.openstack.org/22144614:48
*** proteusguy__ has joined #openstack-swift14:50
*** pberis1 has joined #openstack-swift14:55
*** pberis1 is now known as pberis14:55
*** mahatic has joined #openstack-swift14:58
*** minwoob has joined #openstack-swift14:58
*** esker has joined #openstack-swift14:59
*** haomaiwang has quit IRC15:01
*** haomaiwang has joined #openstack-swift15:01
*** jistr is now known as jistr|call15:02
*** esker has quit IRC15:04
*** marzif has quit IRC15:10
*** akle has quit IRC15:16
notmynamegood morning15:24
jrichligood morning15:25
*** haomaiwang has quit IRC15:26
openstackgerritAlistair Coles proposed openstack/swift: Trivial Key Master for encryption  https://review.openstack.org/19374915:28
*** haomaiwang has joined #openstack-swift15:30
*** flwang1 has joined #openstack-swift15:30
*** SkyRocknRoll has joined #openstack-swift15:33
*** esker has joined #openstack-swift15:33
notmynametdasilva: yes, there's a meeting today15:33
tdasilvanotmyname: cool, thanks!15:33
*** bill_az has joined #openstack-swift15:36
openstackgerritAlistair Coles proposed openstack/swift: Cryptography module to be used by middleware  https://review.openstack.org/19382615:38
*** garthb has joined #openstack-swift15:38
*** trifon_ has joined #openstack-swift15:44
*** marcusvrn_ has quit IRC15:45
*** adutta has joined #openstack-swift15:48
*** adutta has quit IRC15:48
*** gyee has joined #openstack-swift15:49
*** esker has quit IRC15:50
*** tongli has quit IRC15:51
*** tongli has joined #openstack-swift15:51
*** oddsam91 has joined #openstack-swift15:53
*** flwang1 has quit IRC15:54
openstackgerritMerged openstack/swift: go: add a couple timers to see about GetHashes vrs Sync time  https://review.openstack.org/22175315:54
*** jistr|call is now known as jistr15:55
*** tongli has quit IRC15:55
*** haomaiwang has quit IRC16:01
*** itlinux has quit IRC16:01
*** haomaiwang has joined #openstack-swift16:01
*** esker has joined #openstack-swift16:04
notmynameif you haven't taken the survey on tshirt sizes and colors, please do so https://www.surveymonkey.com/r/5WQ7Y9F16:05
*** esker has quit IRC16:09
hrouHey timburke around ?16:13
*** flwang1 has joined #openstack-swift16:21
*** oddsam91 has quit IRC16:31
*** rledisez has quit IRC16:32
*** sasik has joined #openstack-swift16:33
*** jordanP has quit IRC16:34
*** prnk28 has joined #openstack-swift16:37
*** jith_ has joined #openstack-swift16:37
jith_Hi all, how do i clear the contents of swift as a whole?16:39
jith_i need to make all the storage nodes empty16:40
ctennisturn off the swift services and reformat the drives16:40
*** oddsam91 has joined #openstack-swift16:41
*** jistr has quit IRC16:42
jith_ctennis: thanks.. then do i need to add the drives again in the ring?16:42
ctennisno, as long as you don't change or remove the ring files16:42
jith_initially i set up one keystone authentication, and uploaded a certain number of GBs of data..  Later i configured the same swift cluster with another keystone for authentication.. so how do i delete the previous cluster's contents??16:43
ctennisyou will have to manually delete each object you want to be gone via the API, or just wipe all of the data16:43
ctennisor you can delete the swift account you want explicitly and the reaper will take care of removing all of the object data for you16:44
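A sketch of that explicit account deletion (names and URL hypothetical; assumes allow_account_management = true in proxy-server.conf and a token with reseller admin rights):

    $ curl -i -X DELETE \
        -H "X-Auth-Token: $ADMIN_TOKEN" \
        http://proxy.example.com:8080/v1/AUTH_oldaccount

The account reaper then removes the account's containers and objects asynchronously in the background.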
jith_i lost the old authentication.. so i can't do anything with the api without the previous authentication, right??16:45
*** sasik has left #openstack-swift16:45
jith_i just need to erase all the data16:46
jith_ctennis: is this the procedure swift-init all stop16:46
jith_mkfs.xfs /dev/sdb1 .. i am sorry if i am wrong16:46
ctennisif you have a new user who is a "superadmin" it can delete other accounts16:47
ctennisyou just need to know the account name16:47
ctennisotherwise yes that looks right16:47
jith_ctennis.. then should i do all the mounting steps again??16:47
ctennisyou will need to unmount it before you can format it, but yes16:47
jith_i followed this http://docs.openstack.org/kilo/install-guide/install/apt/content/swift-install-storage-node.html16:48
jith_so i should complete the 3rd step in the previous link... is it so?16:49
*** oddsam91 has left #openstack-swift16:49
*** jistr has joined #openstack-swift16:55
openstackgerritMerged openstack/python-swiftclient: Add links for release notes tool  https://review.openstack.org/22132916:56
*** marcusvrn_ has joined #openstack-swift16:57
openstackgerritMerged openstack/swift: go: assign startTime to Now at beginning of replication run  https://review.openstack.org/22151216:57
*** tongli has joined #openstack-swift16:58
*** haomaiwang has quit IRC17:01
*** haomaiwang has joined #openstack-swift17:01
*** luapsil has joined #openstack-swift17:02
*** luapsil has quit IRC17:04
*** esker has joined #openstack-swift17:10
*** esker has quit IRC17:11
*** zhill has joined #openstack-swift17:15
*** mahatic has quit IRC17:17
*** delattec has joined #openstack-swift17:18
*** cdelatte has joined #openstack-swift17:18
*** jistr has quit IRC17:28
*** haomaiwang has quit IRC17:28
*** devlaps has joined #openstack-swift17:32
*** geaaru has quit IRC17:34
claygacoles: you about?17:35
acolesclayg: yep17:35
claygacoles: patch 217830 fixes the ugly ValueError when we int('None')17:36
patchbotclayg: https://review.openstack.org/#/c/217830/17:36
acolesyes17:36
wbhuberbil: do a "source openrc"17:37
acolesclayg: 217830 is good, i was wondering if we needed a probe test to validate it, but then saw the probe test you changed in patch 21802317:38
patchbotacoles: https://review.openstack.org/#/c/218023/17:38
acoleswhich seems to validate the change in 21783017:38
*** esker has joined #openstack-swift17:38
acolesbut i couldn't see *why* it did17:38
claygacoles: ok, yeah I need to rebase that change17:39
acolesclayg: so did i understand correctly that we only saw the int('None') when a handoff with only tombstones reverts to another handoff, so there is no frag index nor node index?17:41
acolesthe patch is fine its me thats confused :)17:42
claygacoles: yes that's correct17:42
claygacoles: well - i'm confused too17:42
claygacoles: weekend in between when I wrote the patch17:42
acolesclayg: sorry - i gotta go - maybe catch you later around meeting time17:43
claygi don't understand why my primary runs reconstructor on second handoff then first instead of first handoff then second - i suppose it shouldn't matter - but the test acts like it does17:44
claygi guess I'll try swapping them when I rebase the test and try to remember why/if it matters17:45
*** acoles is now known as acoles_17:51
*** T0m_ has left #openstack-swift17:53
*** SkyRocknRoll has quit IRC18:20
*** eranrom has joined #openstack-swift18:21
wbhuberclayg: acoles: not sure if there's an existing swift command that lists containers specifically for a policy.  im in a situation where i have three policies running and each policy has several containers with numerous objects.  So, I'd like to list the containers per policy to deduce some more info.18:25
*** aix has quit IRC18:29
*** tongli has quit IRC18:35
*** esker has quit IRC18:36
openstackgerritAlan Erwin proposed openstack/swift-specs: Adding expiring objecs spec  https://review.openstack.org/22191418:55
tdasilvais Alan Erwin around in the channel?19:01
tdasilvaaerwin3: ^^ maybe??19:02
aerwin3I am here. :)19:02
tdasilvahey! how's it going... I remember us talking about the expiring objects issue during the last hackathon19:03
aerwin3Yes sir. How is it going?19:03
tdasilvaI guess the part i'm a bit confused is how does having the auditor expire fix the "expire can't catch up issue"19:04
tdasilvais the auditor just faster than the expirer?19:04
aerwin3I just put in a review for a spec on the fix for it.19:04
aerwin3 https://review.openstack.org/22191419:04
tdasilvaaerwin3: yeah..i was just reading it19:04
*** esker has joined #openstack-swift19:05
*** esker has quit IRC19:05
ahalei guess auditors just walk the local disk and scale easily when you add more storage, compared to expirers which do network reqs and are more annoying to run lots of (we dont run it on every object server) ?19:08
*** esker has joined #openstack-swift19:08
aerwin3ahale: thats right. Scaling the expirer is a serious pain.19:11
tdasilvaahale: expirer is annoying to run :) took me a while to figure out the whole 'process' and 'processes' options19:11
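The 'process'/'processes' options tdasilva mentions split the expirer's work across several daemons. A minimal sketch of two coordinated expirers (values illustrative):

    # /etc/swift/object-expirer.conf on node A
    [object-expirer]
    processes = 2    # total number of expirer daemons sharing the work
    process = 0      # this daemon's zero-based slot; node B sets process = 1

Each daemon then handles only its own share of the .expiring_objects entries.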
ahalei like that spec btw :)19:11
*** oddsam91 has joined #openstack-swift19:11
aerwin3Thanks. Hopefully this will reduce the amount of moving parts needed for this feature.19:12
*** oddsam91 has left #openstack-swift19:13
aerwin3tdasilva: One thing to note about this spec is that when an object's expire_at time hits, it doesn't mean the object is removed right away. The cluster will need to be able to handle having the object around until the auditor deletes it.19:14
tdasilvaaerwin3, ahale: an issue it introduces is that projects using swift + third-party storage backends can use the expirer feature, and they typically don't need the auditor (as their storage system takes care of data consistency)19:14
tdasilvatdasilva: not that i'm a fan of the expirer, but i can use it19:14
tdasilvaaerwin3: ^19:14
ahalewith expire info in the users container, it could be interesting to eventually have an account header showing how much unexpired expiring data an account has ?19:15
aerwin3ahale: I can see that. Maybe being able to see how much data will be expiring soon can help customers make decisions.19:17
*** silor has quit IRC19:18
openstackgerritEran Rom proposed openstack/swift: Container-Sync to iterate only over synced containers  https://review.openstack.org/20580319:19
aerwin3tdasilva: That is an interesting observation.19:19
tdasilvaaerwin3: so, just for example, with the Swift-on-File project we make use of the expirer daemon, but we don't use the auditor (we let gluster or gpfs take care of keeping the data replication consistent)19:19
tdasilvaaerwin3: in that case, it is nice to be able to get the info from the container listing19:20
*** esker has quit IRC19:20
*** esker has joined #openstack-swift19:30
openstackgerritAlan Erwin proposed openstack/swift-specs: Adding expiring objecs spec  https://review.openstack.org/22191419:31
*** aix has joined #openstack-swift19:33
aerwin3tdasilva: so for this use case, checking to see what objects are left to expire is needed? Do I have that correct?19:34
tdasilvaaerwin3: not sure what you mean by "checking"19:35
tdasilvado you mean a "checking" by doing a container listing?19:36
*** resker has joined #openstack-swift19:37
aerwin3yes. So, currently if you want to know what objects still need to expire, a container listing on the .expiring_objects account would be where you would start.19:37
tdasilvaaerwin3: correct19:37
aerwin3Is that the kind of information that is needed for the use case?19:37
tdasilvaaerwin3: right19:37
aerwin3Would being able to query an account to see what objects have/have not expired solve the need?19:38
tdasilvaaerwin3: so, your spec is actually trying to solve two (related) problems. One is that a container listing has "stale" data and by adding the expire-at column, you could filter that out19:40
*** esker has quit IRC19:40
tdasilvasecond, is the expirer daemon not being fast enough to expire objects19:40
aerwin3tdasilva: Correct.19:41
tdasilvawhile I agree with both issues, the proposed solution for the second issue is what could be a problem for us19:41
aerwin3tdasilva: I can see where that would be considering that auditor isn't running in your use case.19:41
openstackgerritOpenStack Proposal Bot proposed openstack/python-swiftclient: Updated from global requirements  https://review.openstack.org/8925019:42
tdasilvayeah...I understand the expirer is a problem, but it has this nice characteristic of getting the info from the container as opposed to getting the info from the underlying file19:42
tdasilvait would be nice to be able to keep that19:42
openstackgerritOpenStack Proposal Bot proposed openstack/swift: Updated from global requirements  https://review.openstack.org/8873619:43
*** devlaps has quit IRC19:43
aerwin3tdasilva: what is the 'info from the container' that would be nice to keep?19:47
tdasilvaaerwin3: sorry...i meant to say that the expirer daemon today is doing a GET on the container (.expiring_objects/<expiring_container_name>)19:48
tdasilvathat's how it knows what objects to expire...as opposed to looking in the underlying filesystem and scanning the file (as the auditor would do)19:49
mattoliverauMorning all from am airport lounge.19:50
* mattoliverau needs coffee stat19:50
tdasilvamattoliverau: good morning!19:50
aerwin3tdasilva: Oh I see. The deletion of the object by the auditor is the issue. Because the container listing could be made to show objects that need to expire.19:53
aerwin3mattoliverau: morning!19:53
*** flwang1 has quit IRC19:53
*** resker has quit IRC19:54
tdasilvaaerwin3: correct19:54
tdasilvaaerwin3: i need to take off for a bit to drive home, but will be back in about 1 hour for the swift meeting19:55
*** esker has joined #openstack-swift19:56
aerwin3tdasilva: Ok I will need to think on an approach to address your use case.19:56
openstackgerritDavid Goetz proposed openstack/swift: go: replicator log time spent getting remote hashes  https://review.openstack.org/22194219:58
mattoliverauSo I will be flying during meeting, I apologize, make sure someone is sarcastic in my absence.. I will be reading the meeting log when I land ;)19:58
*** delattec has quit IRC20:08
*** cdelatte has quit IRC20:08
*** mvandijk has joined #openstack-swift20:11
*** eranrom has quit IRC20:28
*** esker has quit IRC20:35
openstackgerritAlan Erwin proposed openstack/swift-specs: Adding expiring objecs spec  https://review.openstack.org/22191420:40
*** prnk28 has quit IRC20:44
openstackgerritMinwoo Bae proposed openstack/swift: Reconstructor logging to omit 404 warnings  https://review.openstack.org/22195620:49
*** esker has joined #openstack-swift20:52
*** kota_ has joined #openstack-swift20:55
*** ChanServ sets mode: +v kota_20:55
kota_good morning20:56
wbhuberkota_: good morning20:56
notmynamehello kota_20:56
*** esker has quit IRC20:56
aerwin3kota_: morning20:57
*** ho has joined #openstack-swift20:58
timburkegood morning kota_!20:58
notmynamemeeting time20:59
*** acoles_ is now known as acoles21:00
*** darrenc_ has joined #openstack-swift21:02
*** darrenc has quit IRC21:03
*** dustins has quit IRC21:08
*** trifon_ has quit IRC21:09
acolesclayg: i think i figured out my confusion with that probe test while driving home21:14
*** pberis has quit IRC21:17
acolesclayg: i'd forgotten that a handoff will revert tombstones to all primaries PLUS another handoff if one of the primaries is down21:18
claygacoles: that's true21:22
acolesclayg: yeah. so with the probe test change you made in patch 218023, 2nd handoff reverts its tombstone first while one primary is still down, so attempts to revert to first handoff, which would fail without the fix in patch 21783021:25
patchbotacoles: https://review.openstack.org/#/c/218023/21:25
hokota_: how about 燕軍団 ("swallow corps")?21:27
kota_just 燕 ("swallow") seems better.21:28
hokota_: difference is team swift or just swift. :-)21:29
*** flwang1 has joined #openstack-swift21:37
kota_ho: yup21:45
*** ChanServ changes topic to "Review Dashboard: https://goo.gl/eqeGwE | Summary Dashboard: https://goo.gl/jL0byl | Summit planning: https://etherpad.openstack.org/p/tokyo-summit-swift | Logs: http://eavesdrop.openstack.org/irclogs/%23openstack-swift/"21:46
notmynamethe summit planning etherpad is now in the topic message21:46
kota_if you want to mean "a team", "仲間" ("companions") or "家族" ("family") seems better?21:46
hokota_: my image is たけし軍団 (Takeshi Kitano's team) :-)21:47
hokota_: the meaning of 迅速 ("rapid") does not directly fit swift but the characters look cool to me.21:48
wbhuberkota_: acoles: minwoob: notmyname: I've added my preliminary (or alloying) .02 cents on EC patch/bug prioritization in the following page: https://wiki.openstack.org/wiki/Swift/PriorityReviews21:51
notmynamewbhuber: thanks21:51
kota_wbhuber: thanks!21:51
minwoobwbhuber: thanks!21:52
*** hrou has quit IRC21:52
notmynamekota_: ho: what is あまつばめ21:53
*** jlhinson has quit IRC21:54
kota_notmyname: scientific name21:54
kota_written in "Hiragana"21:54
acoleskota_: ho: アマツバメ科 (the swift family, Apodidae)?21:56
acoleswbhuber: k. thanks21:56
notmynameminwoob: I had a question on https://bugs.launchpad.net/swift/+bug/149188321:59
openstackLaunchpad bug 1491883 in OpenStack Object Storage (swift) "Reconstructor complains 404 is invalid responce when building object" [Undecided,In progress] - Assigned to Minwoo Bae (minwoob)21:59
notmynamelet me see if I can go find it again...21:59
minwoobnotmyname: Okay22:00
*** acoles is now known as acoles_22:00
notmynameminwoob: ah. so on replication, we do have the possibility of logging the 404 in update() but not in update_deleted(). and the reconstructor will log the 40422:01
notmynameminwoob: so I'm trying to reconcile in my head what ctennis and you are saying on the bug report22:02
claygacoles_: ok it's important to run the second handoff first because it actually *can* delete its handoff after it verifies it's in sync with the alive primaries + the first handoff22:02
claygacoles_: if you ran the first handoff first - it would find the failed primary and then realize the tombstone on itself is about as good as it's going to get and keep it around22:03
kota_acoles: same meaning between あまつばめ and アマツバメ.22:03
claygI think the tombstone revert jobs could be dramatically more optimistic about when it's safe to clean themselves up22:03
claygbut... different issue22:03
*** jkugel has quit IRC22:03
*** jkugel has joined #openstack-swift22:04
mattoliverauAnd landed.. Sorry I missed the meeting22:04
minwoobnotmyname: It seems that our consensus last time was to remove the reconstructor 404 logging since apparently the warnings given by the object-server were argued to be good enough, and this way we won't fill up the logs.22:06
kota_mattoliverau: np22:07
minwoobnotmyname: ctennis: I was originally for keeping the reconstructor 404 logs, but the above seemed convincing, especially since reconstructor 404s aren't considered "unusual".22:08
ctennisnotmyname: I'm trying to say that the "Invalid response 404" is unnecessary22:08
notmynamectennis: minwoob: yeah, that makes sense. What doesn't make sense to me is that object replication logs 404s22:08
minwoobnotmyname: I'll verify whether we should log the 404 in update() and not in update_deleted(), as per your suggestion.22:09
minwoobnotmyname: For replication22:10
notmynameminwoob: heh, I'm not suggesting anything. just trying to understand22:11
minwoobAh.22:11
*** jkugel has quit IRC22:11
*** ctrath has quit IRC22:12
honotmyname, acoles, kota_: I'm not sure but  あまつばめ is a kind of swallow22:18
notmynamewbhuber: I'm not sure that https://bugs.launchpad.net/swift/+bug/1491908 is just a logging thing22:21
openstackLaunchpad bug 1491908 in OpenStack Object Storage (swift) "proxy server reports 202 even though EC backend didn't accept object" [Undecided,New]22:21
wbhubernotmyname: taking another look at it...22:22
minwoobnotmyname: wbhuber: That one has to do with the container sync dependency.22:22
wbhubernotmyname: again, the list i posted is up for debate22:22
minwoobFixing the 202 to 409 will break current container sync'ed clusters.22:22
wbhuberminwoob: don't you think this one will be handled pre-Liberty?22:23
minwoobnotmyname: wbhuber: So before that one can be fixed, it looks like the 100-continue + timestamp idea for container sync needs to be implemented first, and then decide how to handle the 409 within that context.22:24
torgomaticit might be worth breaking container sync to fix EC though; if right now you can PUT an object and have the cluster say 202 but not actually store the object, that's bad22:25
torgomaticbesides, wouldn't it only break container sync for EC containers? or is that incorrect?22:26
ctennistorgomatic: it's not EC dependent22:27
torgomaticoh, boo22:27
torgomaticnever mind then22:27
wbhubernotmyname: you're right.  i'll take that out of the logging set - we can set its priority to something other than low, but the timetable is not limited to Liberty at least.22:27
ctennisit's just that in my case, with a 3 node EC cluster, 1 node had a majority of responses, so if the clock is skewed back you get a majority of 409s22:28
torgomaticOTOH, it only happens when your time rolls backwards...?22:28
ctennisyes22:28
torgomaticokay, so not as big a deal as I thought it was then22:28
torgomaticstill important, of course, but not oh-shit-stop-the-world22:28
ctennisright22:28
wbhuberctennis: i'd be interested to try to recreate it on my cluster... looks quirky22:28
ctennisjust upload an object, stop ntp on some nodes, set the clock back on one or more nodes and reupload a new object.22:29
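ctennis's reproduction, sketched as shell steps (container/object names hypothetical; the ntp service name varies by distro):

    swift upload c1 obj.dat               # initial upload
    # on one or more storage nodes:
    systemctl stop ntpd                   # stop time sync
    date -s '10 minutes ago'              # skew the clock backwards
    # back on the client:
    swift upload c1 obj.dat               # re-upload: skewed nodes answer 409,
                                          # yet the proxy can still report success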
wbhuberctennis: notmyname: minwoob: med-high?22:29
notmynameset the clock back by how much? clocks get out of sync22:30
ctennisI think it does highlight the importance that accurate and synchronized clocks are paramount for EC though22:30
ctennisnotmyname: well, any time <= to the original timestamp triggers the issue22:30
ctennisthe container-sync thing relies on the fact that if the objects are synchronized, the timestamps are equal, and thus you get a 409 if you try to overwrite it22:31
ctennisso it says "a 409 is okay, so let's treat that as a 202"22:31
jrichlitorgomatic: results of my quick attempt to find out what a fish goal is : http://www.standard.co.uk/news/goal-fish-thierry-shows-his-finishing-power-6685791.html22:32
*** AndreiaKumpera has quit IRC22:32
*** kota_ has quit IRC22:33
torgomaticjrichli: it's a shame the final score wasn't eeleven to twona22:35
*** lcurtis has quit IRC22:35
jrichlihee hee22:35
openstackgerritClay Gerrard proposed openstack/swift: Fix purge for tombstone only REVERT job  https://review.openstack.org/21802322:38
notmynameminwoob: where is the 100-continue timestamp patch/bug you referenced?22:38
claygnotmyname: I think there was one that ctennis filed that was more like 409 should emit 20222:39
clayg*shouldn't22:39
notmynameclayg: https://bugs.launchpad.net/swift/+bug/1491908 ?22:39
openstackLaunchpad bug 1491908 in OpenStack Object Storage (swift) "proxy server reports 202 even though EC backend didn't accept object" [Undecided,New]22:39
notmynameclayg: minwoob was saying it might depend on a different one22:39
minwoobnotmyname: eranrom: It was the one for allowing the proper handling of multiple uploads in the container sync scenario.22:40
minwoobnotmyname: That if we encounter a 100-continue22:41
minwoobThen switch the 409 to a 41722:41
notmynamechanging to a different marker for detecting that the data has been synced22:43
minwoobFor 409 to 202, only do that if the request has a container sync auth header22:45
minwoobAs a workaround to support existing container sync clients.22:45
notmynameso why is this an EC specific thing? wouldn't the same thing happen for replicated data?22:45
minwoobThat seems to be the case.22:46
minwoobI think ctennis must have filed it as they were doing testing on their EC cluster.22:46
notmynamesure22:46
minwoobAlong with the other bugs.22:46
openstackgerritMerged openstack/swift: go: replicator log time spent getting remote hashes  https://review.openstack.org/22194222:47
*** jrichli has quit IRC22:49
*** darrenc_ is now known as darrenc22:51
wbhuberctennis: regarding https://bugs.launchpad.net/swift/+bug/1491605, is there any assessment that randomizing reconstructor jobs would improve performance as opposed to serially processing jobs disk by disk?22:52
openstackLaunchpad bug 1491605 in OpenStack Object Storage (swift) "Reconstructor jobs are ordered by disk instead of randomized" [Undecided,New]22:52
ctenniswbhuber: that's my hunch yeah22:52
ctenniswbhuber: but I'm not sure if it has all of the jobs at first..it seems like it builds them per disk.22:53
notmynamehttps://bugs.launchpad.net/swift/+bug/1484598 and https://review.openstack.org/#/c/213147/ seems to be a Big Deal22:53
openstackLaunchpad bug 1484598 in OpenStack Object Storage (swift) "Proxy server ignores additional fragments on primary nodes" [Undecided,In progress] - Assigned to paul luse (paul-e-luse)22:53
notmynameclayg: right? ^22:54
notmynameie should be very high priority22:54
wbhubernotmyname: +122:54
*** rjaiswal has joined #openstack-swift22:55
wbhuberctennis: i'd need to re-test the job handling logic first to see how exactly that works. so basically, it'd go through each of a single node's disks and process the jobs therein.  it'd be troublesome if the node itself is hitting some latency.22:55
claygnotmyname: I think it's a lower priority than optomistic GETs22:56
ctenniswbhuber: yeah that sounds right22:56
notmynameclayg: optimistic gets is https://review.openstack.org/#/c/215276/ ?22:57
claygignoring multiple frag index for the same etag/timestamp coupled with the better bucketing of Ib710a133ce1be278365067fd0d6610d80f1f7372 should be sufficient to "work around" the multiple frag issues during *normal* rebalances (where less than parity part-replica need to move between a reconstrcutor cycle-time/ring-rebalance)22:58
notmynamethat one has landed22:59
claygyeah, etag buckets is in - but it's not throwing out potential duplicates22:59
claygyeah acoles patch 215276 is the beginning of the work on optimistic gets23:00
patchbotclayg: https://review.openstack.org/#/c/215276/23:00
notmynameok23:00
claygtimeouts on write resulting in missing .durables is a thing that we've all observed under load - that coupled with some disk failures or network timeouts on GETs is causing data that should be able to be served to come up 404 :'(23:00
notmynameso that one depends on the patch that closes https://bugs.launchpad.net/swift/+bug/1484598, so I'm setting that bug to critical. it's important and is blocking other important stuff23:01
openstackLaunchpad bug 1484598 in OpenStack Object Storage (swift) "Proxy server ignores additional fragments on primary nodes" [Critical,In progress] - Assigned to paul luse (paul-e-luse)23:01
claygthe rebalance multiple frag holding primary thing is not as well understood IMHO23:01
*** km has joined #openstack-swift23:01
notmynamealso, acoles_ needs to rebase his patch ;-)23:01
clayg... but it definitely requires a rebalance and if we *just* threw out duplicate frags (even without going back to re-read the second frag from a primary holding multiples) that may be enough23:02
claygwell - I think the chain is sorta out of whack - and the patch doesn't really take on the whole thing - I think it started with just making it possible to *ask* an object server to return a non-durable frag23:02
claygcurrent thinking is we'd be better off most of the time if we just get the non-durable frags with a marker that says "hopefully someone else can vouch for the reconstructability of this - but here's the latest timestamp I have for the name"23:03
claygwhere "current" is circa Austin23:04
*** hrou has joined #openstack-swift23:04
clayg... but I think we lost track of some of it during the reconstructor fallout coming out of the intel benchmarks23:04
notmynametorgomatic: can you please look at https://bugs.launchpad.net/swift/+bug/1491669 and set the "importance" flag?23:05
openstackLaunchpad bug 1491669 in OpenStack Object Storage (swift) "Balance of EC ring seems odd compared to replica ring" [Undecided,New]23:05
claygI think those fixes are all up for review - and ctennis & paul have commented that they are critical to getting through stressful benchmarks successfully23:05
ctennisnotmyname: this one doesn't show up in my normal list but I think it's probably medium: https://bugs.launchpad.net/swift/+bug/149167623:06
openstackLaunchpad bug 1491676 in OpenStack Object Storage (swift) "Reconstructor has some troubles with tombstones on full drives" [Undecided,New]23:06
notmynamectennis: I was just looking at that one23:06
ctennisah ok23:06
notmynameclayg: yeah, my next step is to go through associated patches and see what's proposed23:06
ctenniswhen I look at the list of bugs I've reported, it doesn't show up for some reason23:06
ctennisah nm, there it is, it's just not sorted in order23:06
torgomaticnotmyname: done23:07
notmynametorgomatic: thanks23:07
claygtorgomatic:  you got there so much quicker than I did!23:08
claygnotmyname:  well did you find the bug for timeouts writing .durables?  if you're going to make lp bug #1484598 a release blocker you have to do the same with that one too23:10
openstackLaunchpad bug 1484598 in OpenStack Object Storage (swift) "Proxy server ignores additional fragments on primary nodes" [Critical,In progress] https://launchpad.net/bugs/1484598 - Assigned to paul luse (paul-e-luse)23:10
*** david-lyle has quit IRC23:11
notmynamectennis: https://bugs.launchpad.net/swift/+bug/1491676 looks like it might be the normal "full clusters are bad, m'kay?"23:11
openstackLaunchpad bug 1491676 in OpenStack Object Storage (swift) "Reconstructor has some troubles with tombstones on full drives" [Undecided,New]23:11
ctennisnotmyname: not even full cluster, just full disk.23:11
notmynamectennis: well, ok. same thing ;-)23:12
*** david-lyle has joined #openstack-swift23:12
ctennisperhaps, but it's for a different reason.  and based on this, fixing a full EC cluster is going to be much more difficult.23:12
claygctennis: :'(23:13
ctennisclayg: don't frown, I'm guessing there's a fix :)23:14
notmynameclayg: https://bugs.launchpad.net/swift/+bug/1469094 I hadn't looked at that yet since it was already ranked23:14
openstackLaunchpad bug 1469094 in OpenStack Object Storage (swift) "Timeout writing .durable can cause error on GET (under failure)" [High,Confirmed]23:14
torgomaticnot harder than replication, right? rsync can't push tombstones to a full disk either23:14
ctennisbut that wasn't the issue here.  in a 24 disk replicated system, if 3 disks "fill" I would easily expect them to be able to clear themselves up on their own23:15
claygctennis: do the full disks have any handoff partitions (partitions balanced *off* of them) still lying about?  or is it just that the parts that belong on the disk won't fit?23:15
ctennisthe issue in replication is things can't unfill because other disks are full so they have nowhere to go23:15
ctennisin this setup, the disks just couldn't unfill even though there were plenty of places for data to go23:16
ctennisclayg: I've since wiped things23:16
ctennisin other words, in a replicated cluster..if I was full and added new drives, things would start spreading out.  here I had full disks, added new drives and they never unfilled.23:19
claygctennis: k - I think there's some missing tracebacks23:19
ctennispossible.  I can recreate it again23:20
claygctennis: it's not clean what was failing in the push data off the full disks path23:20
clayg*clear23:20
ctennisok23:20
claygctennis: maybe the re-hash?23:20
*** jkugel has joined #openstack-swift23:20
*** darrenc is now known as darrenc_afk23:20
*** amit213 has quit IRC23:22
*** amit213 has joined #openstack-swift23:22
ctennisclayg: I still have the all.log from that day23:22
*** david-lyle has quit IRC23:22
notmynamectennis: thanks. I marked it as incomplete23:22
*** david-lyle has joined #openstack-swift23:23
claygctennis: could be useful!  upload it somewhere I can get at it?23:25
notmynameok, (except for that last one about full drives) all of the EC tagged bugs have been given a priority. next up is to review and set ones that should be release blockers23:25
notmynamewhy does LP show me bugs that are Fix Committed by default?23:29
claygnotmyname: because if it didn't you'd forget about them!23:30
notmynameI'm ok with that! they're "done" ;-)23:30
clayg:)23:30
notmynametests pass, deploy to prod! ;-)23:31
notmynametorgomatic: for https://bugs.launchpad.net/swift/+bug/1452431, this is more likely in EC, but could happen in replicated storage too? and the requirement is that it will happen when the device count is < primary nodes?23:32
openstackLaunchpad bug 1452431 in OpenStack Object Storage (swift) "some parts replicas assigned to duplicate devices in the ring" [Medium,Confirmed] - Assigned to Samuel Merritt (torgomatic)23:32
torgomaticnotmyname: yes and no, respectively23:32
notmynameoh, ok. so it's more likely than that? what's the trigger?23:33
torgomaticit *will* happen if #devs < #replicas, but it *can* happen if things aren't well-balanced23:33
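A sketch of the will-happen case with swift-ring-builder (paths and weights illustrative): with 3 replicas but only 2 devices, some partitions necessarily land on the same device more than once.

    swift-ring-builder test.builder create 10 3 1
    swift-ring-builder test.builder add r1z1-127.0.0.1:6010/sdb1 100
    swift-ring-builder test.builder add r1z1-127.0.0.1:6020/sdb2 100
    swift-ring-builder test.builder rebalance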
notmynamelike the "region less than 1 regionth of the cluster" situation?23:34
torgomaticyep23:34
torgomatic3 zones of unequal weights might make it happen23:34
clayg"replicanth"23:35
notmynameclayg: yes, a better word. more cromulent in every way23:36
notmynametorgomatic: ok, I'll leave it at medium and remove the EC tag23:36
*** kei_yama has joined #openstack-swift23:36
*** bill_az_ has joined #openstack-swift23:43
*** minwoob has quit IRC23:50
*** kota_ has joined #openstack-swift23:58
*** ChanServ sets mode: +v kota_23:58
kota_I'm back here, sitting in my office.23:59
