Monday, 2014-12-22

*** rdaly2 has quit IRC00:04
*** fandi has quit IRC00:17
*** Masahiro has joined #openstack-swift00:18
*** Masahiro has quit IRC00:22
*** sungju has joined #openstack-swift00:30
*** bill_az has joined #openstack-swift00:30
*** ho has joined #openstack-swift00:50
<ho> good morning!  00:57
*** Masahiro has joined #openstack-swift01:01
*** addnull has joined #openstack-swift01:28
*** masonhsiung has joined #openstack-swift01:36
*** rdaly2 has joined #openstack-swift02:01
<openstackgerrit> Yuan Zhou proposed openstack/swift: Fixes versioning SLO objects  https://review.openstack.org/123765  02:01
*** rdaly2 has quit IRC02:05
<mattoliverau> ho: morning (sorry, was at lunch when you came online)  02:06
<ho> mattoliverau: morning!  02:07
*** nosnos has joined #openstack-swift02:10
<ho> mattoliverau: I was thinking about why KeystoneAuth doesn't support multiple reseller admins in the configuration file.  02:15
<ho> mattoliverau: I got a question about it, but I cannot find any reason why we don't support it.  02:15
<ho> mattoliverau: I think it would be better to allow configuring it the same way as operator roles. What do you think?  02:15
*** oomichi has joined #openstack-swift02:23
<mattoliverau> ho: according to the documentation, it does: users with the keystone role defined in 'reseller_admin_role' will be reseller admins. Sure, this looks like it's one role, but any user you add to it will be a reseller admin: https://github.com/openstack/swift/blob/master/doc/source/overview_auth.rst#access-control-using-keystoneauth  02:27
<ho> mattoliverau: Thanks for the reference. Users can specify an account for reseller_admin_role. Any account... but it's not allowed to specify multiple accounts.  02:35
<mattoliverau> You can specify a role (group) that a user must be in to be a reseller admin. That is my understanding.  02:36
<mattoliverau> I'll need to look at the code, but I think you can only specify one  02:36
<ho> mattoliverau: I read the code. It only allows specifying one role.  02:37
*** haomaiwa_ has joined #openstack-swift02:38
<mattoliverau> ho: yeah, looks like it :) So yeah, you specify a role/group and any user who is a part of it will be a reseller admin, so you can have many reseller admins, but they are defined by being members of one certain role in keystone.  02:39
<ho> mattoliverau: Thanks for double-checking! I may propose a patch for this. Keystone extends its authentication to policy-based RBAC (many roles), but swift has three roles: reseller, operator, and others.  02:42
<mattoliverau> ho: you might want to read up on the composite token spec, as I think it has some relevance to many roles in keystone auth; here is a patch in flight that will update the existing composite spec: https://review.openstack.org/#/c/138771/  02:47
<mattoliverau> so you have the most up-to-date reasoning behind the spec.  02:47
<ho> mattoliverau: thanks! I will read it first.  02:49
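[For context: the configuration being discussed lives in the keystoneauth filter section of proxy-server.conf. A minimal illustration of the two options in question; the role names are examples, not requirements:]

```ini
[filter:keystoneauth]
use = egg:swift#keystoneauth
# operator_roles already accepts a comma-separated list of roles
operator_roles = admin, swiftoperator
# reseller_admin_role accepts only a single role; every keystone user
# holding it is a reseller admin -- this is the option ho would like
# to generalize to a list
reseller_admin_role = ResellerAdmin
```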
*** panbalag has joined #openstack-swift02:50
*** panbalag has left #openstack-swift02:53
*** rdaly2 has joined #openstack-swift03:24
*** Masahiro has quit IRC03:26
*** nosnos has quit IRC03:41
*** nosnos has joined #openstack-swift03:42
*** nosnos has quit IRC03:46
*** nosnos has joined #openstack-swift04:15
*** serverascode____ has quit IRC04:16
*** wer has quit IRC04:16
*** mitz has quit IRC04:16
*** mitz has joined #openstack-swift04:18
*** serverascode____ has joined #openstack-swift04:19
*** cebruns has quit IRC04:19
*** cebruns has joined #openstack-swift04:20
*** wer has joined #openstack-swift04:20
*** ppai has joined #openstack-swift04:27
*** Masahiro has joined #openstack-swift04:27
*** lpabon has joined #openstack-swift04:32
*** SkyRocknRoll has joined #openstack-swift04:45
*** SkyRocknRoll has joined #openstack-swift04:45
*** wer has quit IRC04:57
*** wer has joined #openstack-swift04:58
*** rdaly2_ has joined #openstack-swift05:03
*** ahonda has quit IRC05:05
*** hurricanerix has quit IRC05:05
*** ahonda has joined #openstack-swift05:05
*** hurricanerix has joined #openstack-swift05:05
*** rdaly2 has quit IRC05:06
*** xianghui has quit IRC05:06
*** xianghui has joined #openstack-swift05:06
*** jdaggett_ has joined #openstack-swift05:09
*** mlanner_ has joined #openstack-swift05:10
*** notmyname_ has joined #openstack-swift05:10
*** ChanServ sets mode: +v notmyname_05:10
*** mtreinish has quit IRC05:11
*** mlanner has quit IRC05:11
*** jdaggett has quit IRC05:11
*** dvorkbjel has quit IRC05:11
*** notmyname has quit IRC05:11
*** omame has quit IRC05:11
*** omame has joined #openstack-swift05:11
*** dmsimard_away has quit IRC05:11
*** mlanner_ is now known as mlanner05:11
*** notmyname_ is now known as notmyname05:11
*** jdaggett_ is now known as jdaggett05:11
*** mtreinish has joined #openstack-swift05:12
*** dmsimard_away has joined #openstack-swift05:13
*** dvorkbjel has joined #openstack-swift05:13
*** dmsimard_away is now known as dmsimard05:13
*** lpabon has quit IRC05:21
*** kopparam has joined #openstack-swift05:44
*** oomichi has quit IRC05:59
*** addnull has quit IRC06:03
<ho> mattoliverau: I read the spec. I thought it would be better to support multiple reseller admins in proxy-server.conf. But for affinity with other OpenStack components, I think the keystoneauth middleware in swift should handle policy.json-based RBAC in addition to (or instead of) the user interface of the spec (https://github.com/openstack/swift-specs/blob/master/specs/in_progress/service_token.rst).  06:28
*** rdaly2_ has quit IRC06:34
<ho> mattoliverau: Until keystoneauth supports the policy.json-based RBAC with composite authorization, I would like to have the ability to specify multiple reseller admins in proxy-server.conf because, as I mentioned before, swift has only three roles while keystone can have more, so swift needs flexibility in the mapping.  06:36
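[A minimal sketch of the change ho is proposing -- hypothetical helper names, not the actual swift keystoneauth code: the single-role check generalizes naturally to a comma-separated list, parsed the same way operator_roles already is:]

```python
def parse_roles(conf_value):
    """Split a comma-separated config value into lowercase role names."""
    return [r.strip().lower() for r in conf_value.split(',') if r.strip()]

def is_reseller_admin(user_roles, reseller_admin_roles):
    """True if the user holds any of the configured reseller-admin roles."""
    held = {r.lower() for r in user_roles}
    return any(role in held for role in reseller_admin_roles)

# a single role (today's behavior) and a multi-role config both work
admins = parse_roles('ResellerAdmin, ProjectAdmin')
print(is_reseller_admin(['Member', 'projectadmin'], admins))
```

With only one role configured this reduces exactly to the existing behavior, which is why the mapping flexibility ho describes is cheap to add.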
*** exploreshaifali has joined #openstack-swift06:37
*** bkopilov has joined #openstack-swift06:40
*** exploreshaifali has quit IRC06:46
*** ppai has quit IRC06:49
*** kopparam has quit IRC06:55
*** kopparam has joined #openstack-swift06:55
*** ttrumm has joined #openstack-swift06:57
*** ppai has joined #openstack-swift07:00
<ho> mattoliverau: Thanks for the info. I commented above on https://review.openstack.org/#/c/138771/  07:02
*** addnull has joined #openstack-swift07:02
*** ttrumm has quit IRC07:06
*** addnull has quit IRC07:17
*** ttrumm has joined #openstack-swift07:20
*** kopparam has quit IRC07:20
*** sungju has quit IRC07:29
*** k4n0 has joined #openstack-swift07:30
*** ttrumm has quit IRC07:31
*** rdaly2 has joined #openstack-swift07:35
*** addnull has joined #openstack-swift07:37
*** rdaly2 has quit IRC07:39
*** ttrumm has joined #openstack-swift07:47
*** ttrumm has quit IRC07:48
*** kopparam has joined #openstack-swift07:54
*** rledisez has joined #openstack-swift08:11
*** geaaru has joined #openstack-swift08:14
*** bill_az has quit IRC08:15
*** dorry has quit IRC08:23
*** addnull has quit IRC08:28
*** kopparam has quit IRC08:36
*** kopparam has joined #openstack-swift08:36
*** jordanP has joined #openstack-swift08:42
*** ttrumm has joined #openstack-swift08:56
*** jordan__ has joined #openstack-swift09:05
*** jordan__ has quit IRC09:05
*** jordanP has quit IRC09:08
*** exploreshaifali has joined #openstack-swift09:13
*** fandi has joined #openstack-swift09:15
*** kopparam has quit IRC09:32
*** addnull has joined #openstack-swift09:39
*** addnull has quit IRC09:43
*** jordanP has joined #openstack-swift09:54
*** aix has joined #openstack-swift10:10
*** nellysmitt has joined #openstack-swift10:12
*** rawat_vedams has joined #openstack-swift10:16
*** Masahiro has quit IRC10:21
*** kopparam has joined #openstack-swift10:21
*** Masahiro has joined #openstack-swift10:21
*** sungju has joined #openstack-swift10:23
*** sungju has quit IRC10:23
<openstackgerrit> Nicolas Trangez proposed openstack/swift: Add test coverage for `splice` and `tee` failure scenarios  https://review.openstack.org/143031  10:30
*** SkyRocknRoll has quit IRC10:34
*** Masahiro has quit IRC10:37
*** addnull has joined #openstack-swift10:43
*** ppai has quit IRC10:46
*** SkyRocknRoll has joined #openstack-swift10:46
*** addnull has quit IRC10:53
*** ppai has joined #openstack-swift10:59
*** tristanC has quit IRC11:01
*** tristanC has joined #openstack-swift11:02
*** masonhsi_ has joined #openstack-swift11:05
*** tristanC has quit IRC11:06
*** addnull has joined #openstack-swift11:06
*** tristanC has joined #openstack-swift11:07
*** masonhsiung has quit IRC11:08
*** masonhsi_ has quit IRC11:09
*** nosnos has quit IRC11:13
*** nosnos has joined #openstack-swift11:13
*** nosnos has quit IRC11:17
*** Masahiro has joined #openstack-swift11:18
*** Masahiro has quit IRC11:20
*** ho has quit IRC11:20
*** addnull has quit IRC11:32
*** ppai has quit IRC11:37
*** haomaiwa_ has quit IRC11:41
*** haomaiwang has joined #openstack-swift11:41
*** haomaiwang has quit IRC11:46
*** ppai has joined #openstack-swift11:51
*** fifieldt__ has quit IRC12:02
*** fifieldt has joined #openstack-swift12:07
*** exploreshaifali has quit IRC12:37
*** kopparam has quit IRC12:57
*** infotection has quit IRC13:25
*** infotection has joined #openstack-swift13:28
*** Masahiro has joined #openstack-swift13:29
*** jokke__ is now known as jokke_13:30
*** aswadr has joined #openstack-swift13:31
*** Masahiro_ has joined #openstack-swift13:32
*** Masahiro has quit IRC13:34
*** acoles_away is now known as acoles13:34
*** Masahiro has joined #openstack-swift13:35
*** Masahiro_ has quit IRC13:36
*** ppai has quit IRC13:40
*** Masahiro has quit IRC13:40
*** Masahiro has joined #openstack-swift13:41
*** Masahiro has quit IRC13:49
*** bill_az has joined #openstack-swift13:50
*** Masahiro has joined #openstack-swift13:54
*** Masahiro has quit IRC13:58
*** Masahiro has joined #openstack-swift13:58
*** Masahiro has quit IRC14:03
*** mahatic has joined #openstack-swift14:03
*** Masahiro has joined #openstack-swift14:03
<openstackgerrit> Merged openstack/swift: EC: Allow tuning ec_object_segment_size per policy  https://review.openstack.org/132389  14:07
*** Masahiro has quit IRC14:12
*** Guest32161 has joined #openstack-swift14:15
*** Masahiro has joined #openstack-swift14:15
*** fandi has quit IRC14:16
*** fandi_ has joined #openstack-swift14:16
*** infotection has quit IRC14:18
*** Masahiro has quit IRC14:20
*** mahatic has quit IRC14:21
*** mahatic has joined #openstack-swift14:23
*** infotection has joined #openstack-swift14:23
*** ttrumm has quit IRC14:48
*** masonhsiung has joined #openstack-swift15:03
*** rdaly2 has joined #openstack-swift15:03
*** SkyRocknRoll has quit IRC15:11
*** Masahiro has joined #openstack-swift15:16
*** Masahiro has quit IRC15:21
*** masonhsiung has quit IRC15:21
*** SkyRocknRoll has joined #openstack-swift15:24
*** annegentle has joined #openstack-swift15:24
*** masonhsiung has joined #openstack-swift15:28
*** cutforth has joined #openstack-swift15:33
*** tdasilva has joined #openstack-swift15:43
*** EmilienM is now known as EmilienM|afk15:47
*** masonhsiung has quit IRC15:51
*** masonhsiung has joined #openstack-swift15:52
*** masonhsiung has quit IRC16:09
<acoles> tdasilva: hi  16:12
<tdasilva> acoles: hey! how are you?  16:12
<acoles> tdasilva: good thanks, pretty quiet in the office here today  16:12
<acoles> tdasilva: about manifest versions, I read your conversation with notmyname in scrollback ...  16:13
<tdasilva> acoles: yeah, I guess people are just getting ready for the xmas/new year's break  16:13
<tdasilva> sure... what do you think?  16:13
<acoles> tdasilva: well, I'm curious what difference you perceive between a DLO and SLO 'content' being significant wrt a version. If the DLO X-Object-Manifest header changes to point to another container/prefix, does that not constitute a change in the same way as an SLO manifest JSON body changing?  16:15
<acoles> or am I missing something? (quite possible!)  16:16
<tdasilva> acoles: no, you have a good point and I did think about that, but I was trying to go more for what *I thought* made more sense... I was trying not to change things up so much, so assuming the document had been written that way, I just assumed it meant DLO  16:18
<notmyname> good morning  16:19
<tdasilva> so I assumed that people would just create a new object with a new header instead of changing headers  16:19
<tdasilva> but like you said, it is entirely possible  16:19
<notmyname> hmm... versioning  16:19
<tdasilva> lol  16:19
<acoles> notmyname: morning!  16:19
<notmyname> I don't think there's a significant difference between versioning SLOs and DLOs (i.e. I disagree with that particular point that tdasilva made)  16:20
<notmyname> however, getting versioning working with manifests, long term, seems interesting  16:20
<notmyname> so if we fix DLO to match what's documented and make progress on versioning SLOs (assuming we can do that without breaking old clients), then I think that's cool  16:21
<mahatic> good morning  16:21
<acoles> tdasilva: yeah, it's interesting to try to figure out the various use cases. I've just been thinking about whether there was a fundamental reason to treat them differently.  16:22
<tdasilva> notmyname: well, but I think what we are saying is that even the DLO doc would be wrong, so we would need to change that too, which BTW I'm ok with...  16:22
<notmyname> acoles: IMO, there is no fundamental difference (and it makes our jobs easier if we keep it that way)  16:23
<notmyname> tdasilva: how? doesn't it say "versions + manifest = no bueno"?  16:23
<acoles> notmyname: yeah, that's what I have concluded, and I agree that if we can figure out how to move towards them being versioned then that is cool  16:24
<tdasilva> notmyname: yeah, but you said that you disagree with my point, so I'm assuming you want both (SLO and DLO) to be versionable, no?  16:24
<notmyname> tdasilva: long term? or this week? ;-)  16:24
<acoles> tdasilva: the doc is not specific to only DLO, right? http://docs.openstack.org/developer/swift/api/object_versioning.html  16:24
<notmyname> tdasilva: ya, long-term I'd prefer that they be the same  16:24
<tdasilva> notmyname: mmm... well well  16:24
<tdasilva> acoles: that doc is not, but I thought I had read a DLO doc saying the same, so I just assumed the object versioning doc was also referring to DLO, my mistake  16:28
<tdasilva> notmyname: so, for now we have already merged that fix to DLO to not allow versioning  16:28
<tdasilva> notmyname: I could work in the object versioning middleware to allow versioning for both DLO and SLO; does that sound like a good plan?  16:29
<acoles> tdasilva: I'm ok with that patch landing; current behavior was broken (as in, when deleting a manifest, a non-manifest zero-sized object gets put back in its place :( )  16:30
<notmyname> tdasilva: assuming it can be done without breaking old clients. that's my only concern.  16:30
<notmyname> acoles: right. and also agree with the :-(  16:30
<notmyname> tdasilva: obviously, that's not my _only_ concern, but... ya know...  16:31
<acoles> tdasilva: notmyname: so imho the right tactical thing was to merge that fix  16:31
<notmyname> yes, absolutely  16:31
<notmyname> fix the broken stuff first. then make it better  16:32
<tdasilva> acoles, notmyname: and what to do about this patch: https://review.openstack.org/#/c/123765/ I am concerned I may have given Yuan different instructions  16:32
<acoles> tdasilva: if you can work it into the middleware, great! if it proves complex then perhaps treat it separately, i.e. land the middleware then look at enabling manifest versioning after  16:33
<acoles> tdasilva: :) yeah, I read 123765 today  16:34
<tdasilva> acoles: ok, I will consider that...  16:34
<tdasilva> acoles: it might be easier to get it all done at once  16:34
<notmyname> yuanzz: ^^  16:35
<acoles> tdasilva: re the comment on 123765, seems like we can't think of a good reason why DLO and SLO would be different  16:36
<tdasilva> acoles: I understand... that's fine... just trying to think whether we should change it to not allow it for SLO either (for the short term)  16:37
<notmyname> tdasilva: +1 (i.e. fix first -- get docs and code and expectations in sync)  16:38
<tdasilva> acoles: or just put in the support now for both SLO and DLO like yuanzz initially intended  16:38
<acoles> tdasilva: that's what I am wondering too  16:38
<tdasilva> acoles: then what is missing on yuanzz's patch is just to update the docs accordingly and make sure clients don't break, as notmyname mentioned  16:39
<acoles> tdasilva: oh, so 123765 enables it for DLO too - the commit message says just SLO.  16:40
<tdasilva> acoles: well... it did in patch set 1, I believe  16:41
<acoles> tdasilva: ok, I will go look at it more closely  16:42
*** exploreshaifali has joined #openstack-swift16:42
<acoles> tdasilva: I'll look and ping you again tomorrow  16:43
<tdasilva> acoles: ok, let me know... thanks for your help!  16:44
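[For readers following the thread: the two manifest flavors being compared differ only in where the manifest lives. A DLO carries it in the X-Object-Manifest header (a container/prefix to concatenate), while an SLO's manifest is the JSON object body listing explicit segments. A sketch with made-up names and etags, not real cluster data:]

```python
import json

# DLO: manifest is a header pointing at a container/prefix; body is empty.
# Changing this header silently changes what a GET of the object returns.
dlo_headers = {'X-Object-Manifest': 'segments_container/myobj/'}

# SLO: manifest is the JSON body PUT with ?multipart-manifest=put;
# each entry pins one segment by path, etag, and size.
slo_manifest = json.dumps([
    {'path': '/segments_container/myobj/000001',
     'etag': 'd41d8cd98f00b204e9800998ecf8427e',
     'size_bytes': 1048576},
    {'path': '/segments_container/myobj/000002',
     'etag': 'd41d8cd98f00b204e9800998ecf8427e',
     'size_bytes': 524288},
])

# In both cases, editing the manifest (header or body) changes the
# downloadable content -- which is the argument above for versioning
# DLOs and SLOs the same way.
print(len(json.loads(slo_manifest)))
```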
*** gyee has joined #openstack-swift16:56
*** openstack has joined #openstack-swift17:02
*** Masahiro has joined #openstack-swift17:05
*** Nadeem has joined #openstack-swift17:07
*** Masahiro has quit IRC17:10
*** annegentle has quit IRC17:11
*** SkyRocknRoll has quit IRC17:12
*** k4n0 has quit IRC17:13
*** gyee has quit IRC17:19
*** EmilienM|afk is now known as EmilienM17:19
*** masonhsiung has joined #openstack-swift17:20
*** masonhsiung has quit IRC17:24
*** geaaru has quit IRC17:34
*** gyee has joined #openstack-swift17:36
*** rledisez has quit IRC17:36
*** EmilienM is now known as EmilienM|afk17:42
*** nellysmitt has quit IRC17:43
*** pcaruana has quit IRC17:53
*** annegentle has joined #openstack-swift17:57
*** annegentle has quit IRC17:58
*** annegentle has joined #openstack-swift18:00
*** abhirc has joined #openstack-swift18:03
<cutforth> notmyname: nmn, I've got a question about use cases of swift  18:15
*** acoles is now known as acoles_away18:16
<cutforth> or anybody, for that matter... are there any swift clusters with sizes larger than 1PB?  18:17
<notmyname> cutforth: many  18:25
<notmyname> cutforth: what sort of use cases are you wondering about? or is it just for total capacity capability?  18:27
<cutforth> notmyname: so more specifically, is there public info on these sizes? I'm being asked by a Seagate director and I want to show evidence other than saying "trust me, they are really big"  18:27
<cutforth> I'm most interested in capacity. if they are also geo-replicated, that would be nice to know also  18:28
<notmyname> cutforth: I can't give details on swiftstack customers, other than what we've said before (yes, multiple 1PB+ customers). Time Warner is featured (today!) on the openstack site: http://superuser.openstack.org/articles/pass-the-mic-matt-haines-time-warner-cable Also GA Tech has talked about theirs: http://superuser.openstack.org/articles/case-study-georgia-tech-university  18:30
<notmyname> cutforth: enovance has talked about cloudwatt (20+PB). Rackspace has talked about theirs (probably ~100PB now)  18:31
<notmyname> cutforth: hubic (EU dropbox app) run by OVH has ~3PB. OVH also runs runabove (public swift service provider)  18:32
<notmyname> cutforth: HP has many PB, but they've been cagey about the amount  18:32
<notmyname> NTT always has interesting presentations and ideas on global clusters. I don't know their size, but being NTT, I'm expecting "not tiny"  18:33
<notmyname> cutforth: there was an edu in Canada that has mentioned 1PB+ storage in swift for their library system  18:34
<peluse> there's also Mercado Libre; in their HK preso they state 1.2PB  18:34
<peluse> see https://www.openstack.org/summit/portland-2013/session-videos/presentation/openstack-swift-mercadolibre-case-study  18:34
<notmyname> ^ also important since it runs mission-critical data (i.e. the pictures for an auction site. can't make money without those)  18:35
<ctennis> does softlayer have stats on size?  18:35
<notmyname> cutforth: not that I've seen (publicly)  18:35
<cutforth> peluse: I saw that Mercado listed 1.4 billion images, but I missed the size of 1.2PB. was that in the video or the slides?  18:35
<cutforth> notmyname: thanks, I'll poke around for these. is wikipedia swiftstack, or just swift? any info on it?  18:36
<peluse> yeah, I think that was the wrong link; this one. I just saw it, I'll tell you the marker here in a sec: http://www.confreaks.com/videos/4269-openstacksummithongkong2013-how-mercadolibre-stores-1-4-billion-images-on-openstack-object-storage  18:36
<notmyname> cutforth: wikipedia is swift (not swiftstack). they have about 300 million images but less than 100TB  18:36
<peluse> then go to 11:19  18:37
<cutforth> notmyname: thx for the wikipedia info  18:37
<cutforth> peluse: thanks for the link. I anxiously await a time marker :)  18:37
<notmyname> cutforth: wikimedia is pretty public about what they are doing. you can get a lot of info from them (in general)  18:37
<peluse> cutforth, it's 11:19  18:37
<notmyname> cutforth: e.g. you can actually look at their monitoring pages  18:37
<notmyname> cutforth: also, wikimedia has a global cluster, AFAIK  18:38
<notmyname> cutforth: http://ganglia.wikimedia.org/latest/  18:39
<cutforth> notmyname: thx  18:40
<notmyname> at least, you can see 3 DCs in that ganglia page that have something with "swift"  18:40
<cutforth> peluse: thx  18:40
<cutforth> yup  18:40
<notmyname> cutforth: https://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/an-intimate-look-at-running-openstack-swift-at-scale  <-- RAX preso in HK  18:41
<cutforth> notmyname: peluse: awesome, this should be a good start  18:41
<notmyname> cutforth: what are you selling (or convincing)?  18:42
<cutforth> notmyname: it's one of our new principals. he's not new to data centers, but new to object storage. he asked for this info. admittedly I'm in the swift corner, so I want to provide facts.  18:44
*** amandap_ is now known as amandap18:45
<notmyname> cutforth: cool. let me know if there are other ways I can help there  18:45
<cutforth> notmyname: will do, thanks again  18:45
*** Masahiro has joined #openstack-swift18:54
*** zaitcev has joined #openstack-swift18:55
*** ChanServ sets mode: +v zaitcev18:55
*** Masahiro has quit IRC18:58
*** bkopilov has quit IRC19:10
*** jordanP has quit IRC19:14
*** tab____ has joined #openstack-swift19:42
*** tab____ has quit IRC19:43
*** tab____ has joined #openstack-swift19:43
*** nellysmitt has joined #openstack-swift19:44
*** nellysmitt has quit IRC19:49
*** annegentle has quit IRC19:52
*** themadcanudist has joined #openstack-swift19:57
<themadcanudist> notmyname: greetings! I wanted to follow up with you on that "issue" I found on my swift cluster last week. Throttling the container-updater helped. However, it led me down a path of wondering why the updater was working significantly harder on one node vs. the others, and I discovered that there were about 100x more containers on one. I'm wondering why that may be the case and if there is a way to rebalance them?  19:58
<themadcanudist> hey guys, anyone here familiar with swift nodes having a 100x difference in # of containers? basically a huge imbalance even though all the devices on all nodes are weighted equally?  20:12
*** annegentle has joined #openstack-swift20:13
*** annegentle has quit IRC20:21
*** silor has joined #openstack-swift20:21
*** exploreshaifali has quit IRC20:29
*** jwang__ has quit IRC20:30
*** EmilienM|afk is now known as EmilienM20:34
*** Masahiro has joined #openstack-swift20:42
*** annegentle has joined #openstack-swift20:43
*** annegentle has quit IRC20:44
*** Guest32161 is now known as annegentle20:44
*** Masahiro has quit IRC20:47
<clayg> is cschwede around today? or is anyone else looking at the ring warning patch?  20:50
<swifterdarrell> themadcanudist: you could acquire and run the swift-ring-builder here https://review.openstack.org/#/c/140478 on your container.builder file?  20:51
<themadcanudist> swifterdarrell: oh yeah, I definitely have. That's how I built them in the first place  20:51
<themadcanudist> and have rebalanced over the years  20:51
<swifterdarrell> themadcanudist: is there any tier (region, node, etc) whose aggregate weight underneath is unbalanced?  20:52
<swifterdarrell> themadcanudist: you've run that patch? it's pretty recent...  20:52
<themadcanudist> oh sorry  20:52
<themadcanudist> I just meant swift-ring-builder  20:52
<themadcanudist> no, I haven't run that.  20:52
<swifterdarrell> themadcanudist: and it's also not committed into Swift yet  20:52
<swifterdarrell> themadcanudist: have you balanced the rings with Swift 2.0+ yet?  20:52
<swifterdarrell> themadcanudist: or maybe the cutoff is 2.2?  20:53
*** tdasilva has quit IRC20:53
<themadcanudist> not yet  20:53
<themadcanudist> still on an older version  20:53
<swifterdarrell> themadcanudist: oh, then maybe it's not the issue that patch checks for (but still worth running it against your builder)  20:54
<clayg> themadcanudist: does the one node have more total weight than the others (more disks, bigger disks, maybe the others have failed disks?)  21:03
<themadcanudist> clayg: no  21:03
<themadcanudist> the space consumption is identical  21:03
<themadcanudist> the container-updater just scans 100x more directories  21:03
<themadcanudist> weight across ALL devices is equal  21:03
<clayg> themadcanudist: more directories or more databases? could just be this bug: https://review.openstack.org/#/c/138524/  21:04
<themadcanudist> clayg: that's definitely what I think it is  21:05
<themadcanudist> there are a lot of directories that are empty  21:05
<themadcanudist> i.e. no hashes/sqlite dbs  21:05
<themadcanudist> but they're still crawled  21:05
*** aswadr has quit IRC21:06
<notmyname> that one was backported. themadcanudist: it should be possible to apply that pretty cleanly in your environment  21:08
<themadcanudist> hrm, is it safe to clear these out manually?  21:08
<themadcanudist> looking at the commit  21:08
<notmyname> here's the backport: https://review.openstack.org/#/c/139255/  21:08
<themadcanudist> two lines of code  21:09
<themadcanudist> yeah, this suggests it's safe to clean up manually as well  21:11
<themadcanudist> clayg: thank you so much! =D  21:12
<mattoliverau> Morning  21:13
<clayg> I guess... it may have been me that wrote the bug in the first place - you should thank ctennis  21:13
<themadcanudist> I'm thanking you for your expertise in searching the bug database and your successful find!  21:13
<themadcanudist> =D  21:13
* clayg good for something  21:13
*** Nadeem has quit IRC21:23
<clayg> torgomatic: so I had a ring that was all undispersed 3:1:1, but when I "fixed" the over-weighted device so the ring was 2:1:1 - it should be solvable, and I thought after a few runs of pretend/rebalance things would work out, but it turns out the balance is perfect, so the undispersed parts on the 2x node don't think they have anywhere to go (but that's only because some of the parts on the 1x nodes could double up on the 2x node)  21:24
<themadcanudist> clayg: Any definitive comments about getting rid of these empty container dirs?  21:32
<themadcanudist> doesn't seem to affect accounts  21:32
<themadcanudist> or at least it's not a problem on my cluster  21:32
<themadcanudist> i.e. is there ANY risk here?  21:33
<notmyname> themadcanudist: you're talking about partition and suffix directories, right?  21:38
<themadcanudist> yeah  21:38
<notmyname> themadcanudist: no risk  21:39
<themadcanudist> * /srv/node/*/containers/{empty dir}  21:39
<notmyname> themadcanudist: you might want to stop replication temporarily while you delete the empty dirs. i.e. you don't want to fight with the system if another server is trying to add the dirs back while you are deleting them  21:41
<themadcanudist> right. I'd only move the dirs out of the way if they're older than 24 hours anyway  21:41
<themadcanudist> but yeah  21:41
<themadcanudist> point taken  21:41
<themadcanudist> and will do  21:42
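[A dry-run sketch of the cleanup being described; the /srv/node path and the 24-hour cutoff come from the conversation above, everything else (function name, printing rather than deleting) is illustrative. Verify the listing, and stop replication, before actually removing anything:]

```python
import os
import time

def empty_partition_dirs(containers_path, min_age=24 * 3600):
    """Yield empty partition dirs under a device's containers/ tree
    that have not been touched for at least min_age seconds."""
    cutoff = time.time() - min_age
    for name in os.listdir(containers_path):
        path = os.path.join(containers_path, name)
        if (os.path.isdir(path) and not os.listdir(path)
                and os.path.getmtime(path) < cutoff):
            yield path

# e.g. for each device: print first, then os.rmdir() once replication
# is stopped, per notmyname's advice above
# for d in empty_partition_dirs('/srv/node/sda/containers'):
#     print(d)
```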
*** nellysmitt has joined #openstack-swift21:45
*** nellysmitt has quit IRC21:50
*** zaitcev has quit IRC21:50
*** nellysmitt has joined #openstack-swift22:01
*** cutforth has quit IRC22:03
*** nellysmitt has quit IRC22:08
*** mahatic has quit IRC22:22
*** Masahiro has joined #openstack-swift22:31
*** Masahiro has quit IRC22:36
*** EmilienM is now known as EmilienM|afk22:38
*** sungju has joined #openstack-swift22:56
*** sungju has left #openstack-swift23:02
*** tab____ has quit IRC23:41
*** dmsimard is now known as dmsimard_away23:44
*** dmsimard_away is now known as dmsimard23:44
*** rdaly2 has quit IRC23:53
<clayg> does anyone have an intuitive sense for the range of the "balance" float in the ring?  23:57
<clayg> like, before an initial rebalance all devices say they are -100.0 "balanced", which looks like they have -100% of the parts they want (0/parts_wanted)  23:58
<clayg> but then my ring says its "balance" is "100.0" (?)  23:58
*** silor has quit IRC23:59
<clayg> so then I rebalance and things mostly get cooled off - a few devices have 0.10 balance, some have -0.39 - i.e. a few points away from center, nothing to worry about - and my ring says its "balance" is "0.39" (?)  23:59
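[Both of clayg's data points are consistent with the ring-wide figure being the worst *absolute* percentage deviation across devices. A sketch of that interpretation -- an assumption about the formula, not a quote of the swift-ring-builder code:]

```python
def ring_balance(assigned, desired):
    """Worst absolute percentage deviation of any device's partition
    count from the count it wants."""
    return max(abs(100.0 * a / d - 100.0) for a, d in zip(assigned, desired))

# before the first rebalance every device holds 0 parts: each is at
# -100% of its target, so the ring-wide number shows as 100.0
print(ring_balance([0, 0, 0], [256, 256, 256]))

# after rebalancing, devices at +0.10% and -0.39% give a ring balance
# of 0.39, matching the log above
print(round(ring_balance([1001, 9961], [1000, 10000]), 2))
```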

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!