Saturday, 2015-02-14

claygso bknudson says he doesn't think i'm doing anything wrong on my devstack - I'm not so sure - is acoles_away the only guy that's gotten keystone/swift up and running any time recently?00:03
clayghurricanerix_: aren't you neck deep in a mess of keystone currently?  Maybe we can cry on each other's shoulders?00:04
*** rdaly2 has quit IRC00:24
*** rdaly2 has joined #openstack-swift00:26
*** rdaly2 has quit IRC00:27
*** gyee has joined #openstack-swift00:28
openstackgerritClay Gerrard proposed openstack/swift-specs: Add containeralias spec  https://review.openstack.org/15552400:32
klrmndoes the built-in pre-commit hook in the swift repo not do the same pep8 checks as jenkins?00:35
torgomaticklrmn: no, I think the pre-commit hook there is just whatever git-review puts in, and that doesn't enforce much of anything00:45
torgomatic$ tox -e pep8   # just run pep8 and other linters00:45
klrmntorgomatic: the pre-commit hook runs pep8 and tox runs flake800:46
*** bill_az has quit IRC00:46
klrmntorgomatic: i'm endeavoring to change that (locally) but i get "./test/probe/test_container_merge_policy_index.py:375:34: F812 list comprehension redefines 'metadata' from line 371" which looks like perfectly valid code to me00:47
torgomaticklrmn: huh; my pre-commit hook doesn't seem to do that... in fact, the only hook I have is commit-msg, and that does the Change-Id stuff00:49
torgomaticso whatever you've got, it's not Swift's, although it probably is a good idea00:49
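(A minimal sketch of the kind of local pre-commit hook being discussed; the hook path and contents are assumptions, since neither the Swift repo nor git-review installs one. Running the gate's lint target through tox means the hook uses the flake8/hacking versions pinned in test-requirements.txt, so local results should match Jenkins.)

    #!/usr/bin/env python
    # Hypothetical local .git/hooks/pre-commit (not shipped with Swift):
    # run the same lint target the gate runs; tox installs the linter
    # versions pinned in test-requirements.txt, so results match the gate.
    import subprocess
    import sys

    # a non-zero exit status aborts the commit
    sys.exit(subprocess.call(['tox', '-e', 'pep8']))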
klrmntorgomatic: given this is the third time jenkins has told me my flake8 sucks...00:50
torgomaticklrmn: the gate is just complaining about an unused Manager import, not whatever F812 is... maybe your local system has a newer pep800:52
torgomaticthat's one of the nice things about tox; it uses the versions in test-requirements.txt, and I think we have those locked down pretty well00:53
klrmntorgomatic: yes, i know. but when i changed my pre-commit hook to use flake8, i got the other error00:54
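(For reference, a hypothetical two-liner, not the actual probe-test code, showing the pattern pyflakes reports as F812: on Python 2 a list comprehension's loop variable leaks into the enclosing scope, so reusing an existing name as the loop variable rebinds it, which is why the checker complains even though the code runs fine.)

    # made-up names, for illustration only
    metadata = {'X-Container-Meta-Color': 'blue'}
    lowered = [metadata.lower() for metadata in metadata.keys()]  # F812: redefines 'metadata'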
openstackgerritLeah Klearman proposed openstack/swift: more probe test refactoring  https://review.openstack.org/15589500:56
*** lnxnut has joined #openstack-swift01:10
*** abhirc has joined #openstack-swift01:15
*** rdaly2 has joined #openstack-swift01:28
*** david-lyle has joined #openstack-swift01:32
*** rdaly2 has quit IRC01:32
*** david-lyle is now known as david-lyle_afk01:33
*** gyee has quit IRC01:38
mattoliverauAt SFO airport now, thanks for a great week all!01:40
notmynamemattoliverau: have a safe trip01:40
*** zhill_ has quit IRC01:44
mattoliverauLooks like I'm going to miss valentines day completely, as it won't exist as a day. I go from Friday straight to Sunday.. #badhusband01:52
claygwelp, that's it for me today I think01:54
claygtorgomatic: can you help me remember next week that we need to update https://review.openstack.org/#/c/155421/ with the proposal for the "new" versioned object ideas that otherjon had been kicking around01:55
torgomaticclayg: sure, I'll make a note01:55
claygtorgomatic: I really do think you're the most familiar with his ideas, so I'd appreciate your help tricking cschwede into writing^W^W^W^W writing up the spec01:56
torgomaticclayg: sounds good; we'll get that on Tuesday01:57
*** lnxnut has quit IRC02:08
mattoliverauHave a great long weekend guys02:09
*** lnxnut has joined #openstack-swift02:30
*** lnxnut has quit IRC02:34
*** lnxnut has joined #openstack-swift02:38
klrmnclayg: ok, jenkins has +1-ed it, and i do not plan to make any more changes02:53
*** lnxnut has quit IRC04:03
*** dmsimard_away is now known as dmsimard04:04
*** dmsimard is now known as dmsimard_away04:06
*** dmsimard_away is now known as dmsimard04:07
*** lnxnut has joined #openstack-swift04:21
*** dmsimard is now known as dmsimard_away04:34
*** lnxnut has quit IRC04:37
*** IRTermite has joined #openstack-swift04:53
*** abhirc has quit IRC05:37
openstackgerritOpenStack Proposal Bot proposed openstack/swift: Imported Translations from Transifex  https://review.openstack.org/15596706:09
*** panbalag has quit IRC07:13
*** panbalag has joined #openstack-swift07:27
*** glange has quit IRC07:28
*** jd__ has quit IRC07:28
*** jd__ has joined #openstack-swift07:32
*** glange has joined #openstack-swift07:35
*** ChanServ sets mode: +v glange07:35
*** joeljwright has joined #openstack-swift07:43
*** sileht has quit IRC09:49
*** silor has joined #openstack-swift11:00
*** joeljwright has quit IRC11:55
*** silor has quit IRC12:16
*** silor has joined #openstack-swift12:16
*** bkopilov has quit IRC12:19
*** mahatic has joined #openstack-swift13:19
*** dmsimard_away is now known as dmsimard13:48
*** sileht has joined #openstack-swift14:35
*** tsg has joined #openstack-swift14:38
*** tsg has quit IRC14:52
*** dmsimard is now known as dmsimard_away15:16
openstackgerritRichard Hawkins proposed openstack/swift: Add functional tests for container TempURLs  https://review.openstack.org/15551315:49
openstackgerritRichard Hawkins proposed openstack/swift: Add functional tests for container TempURLs  https://review.openstack.org/15551316:05
openstackgerritRichard Hawkins proposed openstack/swift: Add additional func tests for TempURLs  https://review.openstack.org/15598516:20
openstackgerritRichard Hawkins proposed openstack/swift: Add additional func tests for TempURLs  https://review.openstack.org/15598516:21
*** otoolee has quit IRC16:43
*** otoolee has joined #openstack-swift16:48
*** geaaru has joined #openstack-swift17:06
*** Anayag has joined #openstack-swift17:33
AnayagHi, I am installing SAIO on a server and everything worked fine. Now I just changed the proxy IP to the local IP instead of 127.0.0.1, added the same local IP to the memcache config, and restarted the proxy. Then the TempAuth token changes on every request17:35
Anayagdo you have any idea to resolve this?17:35
ctennisdid you restart memcache too?17:40
Anayagyes17:42
Anayagbut now I've reverted the memcache to 127.0.0.117:42
Anayagthen it works fine17:42
Anayagwas it wrong to set the memcache ip?17:43
ctennisdid you update the proxy server configuration to tell it memcache is listening on a different ip?17:43
Anayagahh that I did not17:44
AnayagHow do I set it in the proxy config?17:45
ctennishttps://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L33817:46
AnayagThanks a lot17:47
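(For anyone hitting the same thing: the line ctennis linked lives in the cache middleware section of proxy-server.conf. A sketch, assuming memcached is bound to a made-up address 10.0.0.10; if memcache_servers is unset, the middleware falls back to /etc/swift/memcache.conf and ultimately to the SAIO default of 127.0.0.1:11211.)

    [filter:cache]
    use = egg:swift#memcache
    # must match the address memcached is actually listening on
    memcache_servers = 10.0.0.10:11211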
*** jrichli has joined #openstack-swift18:52
mattoliveraujrichli is here!18:55
jrichlimattoliverau: hey Matt!  I guess you made it home safely?18:56
mattoliverauI'm about to take off on my last leg of the trip home, only 4-5 hours to go!18:56
mattoliverauSo no not yet :(18:56
jrichlihave you fully recovered yet?18:56
jrichlior are you still a bit under the weather?18:57
mattoliverauFeeling much better, but just came off a 13.5 hour flight and didn't really sleep.. So more like a dead man walking now :p18:58
notmynamemattoliverau: did you get the fancy AirNZ seats on the long flight back?18:58
mattoliverauI take it you got home safe :)18:58
jrichliI am back in town having some great coffee in a cafe near UT.  About to learn me some more swift :-)18:59
mattoliveraunotmyname: yeah, and I did sleep better.. Got 3-4 hours, which is the best so far ;p18:59
notmynamenice18:59
notmynamejrichli: sounds like a great plan!!18:59
mattoliverauNice, valentines day is the best day to learn about swift :p19:00
jrichlilol, well ... I am sure we will eat somewhere exciting tonight.  Hubby is here shopping to buy me more laptop stickers!19:02
mattoliverauHaha, we've created a momster... And love it ;) OK phone going off again, have a great rest of you day jrichli and notmyname (and anyone else reading) :)19:03
mattoliverau*monster19:03
notmynamemattoliverau: hope you have a nice flight and get a couple more hours of sleep19:03
mattoliverauThanks, I'm tired enough too so should be fine :)19:04
jrichli+119:04
*** doxavore has joined #openstack-swift19:05
notmynamemattoliverau: jrichli: I wrote up something short about the hackathon (and have both your pictures) https://swiftstack.com/blog/2015/02/13/openstack-swift-hackathon/19:05
mattoliveraunotmyname: oh and I talked to jogo yest(?) And there is a new zuul feature that will be awesome for swift check and gate tests, the ability to skip tests, eg don't run full integration tests if only docs have changed.. I'll make a patch on Monday :)19:07
doxavoreAre there any detailed docs on how placement actually works with regard to multiple regions and weights? I'd like to make sure there is 1 replica in a different region, but I haven't been able to find any operational or admin docs that describe how weights come into play.19:07
notmynamemattoliverau: cool19:07
notmynamedoxavore: yes19:08
notmynamedoxavore: I'll give you some links, but there have been some subtle changes recently that we're currently in the process of writing up more completely that might affect things in some cases19:08
*** jrichli has quit IRC19:08
*** jrichli has joined #openstack-swift19:09
doxavorenotmyname: that would be great, thank you!19:09
notmynamedoxavore: http://docs.openstack.org/developer/swift/overview_ring.html and https://swiftstack.com/blog/2012/11/21/how-the-ring-works-in-openstack-swift/19:09
mattoliverauOK phone really going off now! I feel the glare from the airplane staff :p19:09
jrichlinotmyname: Great summary!19:10
notmynamedoxavore: but the short answer is "yes, if you have 2 regions and 3 replicas (and the regions are the same size), then you'll have at least 1 replica in each region"19:10
notmynamejrichli: thanks19:10
notmynamedoxavore: also, https://swiftstack.com/openstack-swift/architecture/ is a high-level summary of all of swift19:12
doxavorenotmyname: by the same size, you mean that to ensure i have 2 copies in region 1 and 1 copy in region 2, i need to make sure the total weight of region 2 is _at least_ 1/3 of my total cluster weight?19:12
notmynamedoxavore: first (because things have recently changed slightly) what version of swift are you using?19:13
doxavoresuch that the weight is given priority, then it uses its "most unique" placements?19:13
notmynameright19:13
notmynamethat's the current strategy19:13
notmynamedoxavore: or to slightly rephrase, things are placed as widely as possible across failure domains until you have a failure domain that is full with respect to weight. then you can get more than one replica in a failure domain tier19:14
doxavoreiirc, swift-2.2.0 from Ubuntu Cloud Archive (Juno on 14.04) - apologies, VPN issues are preventing me from grabbing the exact version :-/19:15
notmynamedoxavore: latest version is 2.2.219:16
*** bkopilov has joined #openstack-swift19:16
doxavoreokay that makes sense. you said that could be changing soon though?19:17
notmynamewhat could?19:17
notmynamethe placement?19:17
notmynamethat did change in 2.2.219:17
notmynamedoxavore: https://github.com/openstack/swift/blob/master/CHANGELOG19:18
doxavoreoh okay. you mentioned some "subtle changes" coming :>19:18
notmynamedoxavore: ya, those are what are in 2.2.219:18
notmynamedoxavore: and mostly they affect deployments that are unbalanceable19:18
notmynamedoxavore: one cool thing is that the `swift-ring-builder` tool in 2.2.2 includes a command to print out the "durability". It will report if there are partitions in the ring that are too heavily concentrated in one failure domain19:20
doxavoreinteresting - looks like we are on 2.2.0 still. is there a different recommended install from the juno docs (which seem to suggest using the ubuntu-cloud-archive repo)?19:21
*** silor1 has joined #openstack-swift19:21
notmynamedoxavore: I'd always recommend people use the latest tagged version of swift :-)19:21
*** silor has quit IRC19:22
notmynamedoxavore: but I don't know what distros have packaged right now19:22
doxavorenotmyname: i'll have to see if we can get up to 2.2.2 then. in the meantime, is my understanding of how the placement works different in 2.2.0, or are there just some gotchas I should watch out for? :>19:23
notmynamedoxavore: how many regions do you have?19:24
doxavore2 regions, 3 replicas. just trying to use one of the regions for DR insurance more than anything (but there are some application services that will use it periodically)19:25
notmynameok. so an active-active DR thing. makes sense19:25
doxavorebut that insurance is only valid if we can actually make sure there is 1 copy of everything in that 2nd region :)19:25
notmynameright19:25
notmynameare the regions similarly sized? ie capacity (and specifically, weights)19:26
doxavorewe're still setting things up, so I can play with the weights however I need to. generally though, it should be 2/3 vs 1/3 of total cluster weight, yes19:27
doxavoreif that's all we need to do to make sure we're getting the DR we want, then I can make sure we stay configured that way. With some drives dropping out, etc, it can be difficult to make sure it's always exactly 2/3 - 1/3 split19:28
notmynamedoxavore: weights should always be related to the available capacity. the best rule of thumb is "number of GB". so 3000 = 3 TB drive, 6000 = 6TB drive. etc19:29
notmynameok, you aren't going to get what you expect there19:29
notmynameyou need to have 1/2 and 1/219:29
notmynamebecause you'll get 2 replicas in one region and 1 in the other (for each partition. so you'd have at least 1 replica in each region)19:30
notmynamethe way you have it now...19:30
notmynameyou'll have that for 1/3 of your cluster, then the smaller region gets "full" (wrt weight) and then you'll start getting all 3 replicas in the larger region19:31
doxavorehmm... is there not a way i could make sure 2 of my 3 replicas are always in 1 region? e.g. our truly active data center vs our higher-latency offsite19:31
notmynamehow far apart are your regions? in latency19:31
doxavorewe will be running with 2/3 of disk storage in region 1 and 1/3 in region 2, so i had planned on using the rule to make weight = GB19:32
doxavore20ms, give or take19:32
notmynameso you might be able to get away with calling it one region and then splitting it into 3 evenly sized zones in that one region19:33
notmynamethen each zone will fill up evenly19:33
notmynameand give you what you're currently expecting/looking for19:33
doxavorehrm. that's unfortunate. so 3 evenly-sized zones or (presumably) 3 evenly-sized regions where 2 of them are in that primary facility and 1 in the offsite is the only way to get that..19:38
doxavoreas even saying weights in region 2 are 2*GB would result in 2 copies being placed there (half the time)19:39
notmynamedoxavore: right. so think of it as tiers (drive->server->zone->region). and swift fills up evenly across the available failure domains at a given tier. so if you have 2 regions, you'll get 2 replicas in one and 1 in the other. before it goes to the lower tiers (zone, etc)19:40
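(To make the zone suggestion concrete, a minimal swift-ring-builder sketch of a 3-replica object ring with one region and three evenly weighted zones, where z3 stands in for the offsite facility. The IPs, ports, device names, and part power are made up; the weights follow the GB rule of thumb above, treating each zone as a single 3TB drive.)

    # 2^10 partitions, 3 replicas, at least 1 hour between rebalances
    swift-ring-builder object.builder create 10 3 1
    # three evenly weighted zones; z3 is the offsite facility
    swift-ring-builder object.builder add r1z1-10.1.0.1:6000/sdb1 3000
    swift-ring-builder object.builder add r1z2-10.1.0.2:6000/sdb1 3000
    swift-ring-builder object.builder add r1z3-10.2.0.1:6000/sdb1 3000
    swift-ring-builder object.builder rebalance

With equal zone weights each partition gets one replica per zone, so the offsite zone holds a copy of everything, which is the guarantee discussed above.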
notmynamedoxavore: I'm being called away, but I'll be back online later. also, there are several other people who will be able to answer similar questions (mostly during the workweek though)19:42
doxavorebeautifully stated. that's exactly what I was trying to wrap my head around. I was having trouble finding that in any of the docs. :) thank you very much for your help.19:42
notmynamedoxavore: glad to have helped19:42
notmynamedoxavore: good luck with your deployment. please let us know if you have other questions19:42
notmynamedoxavore: and I always love hearing about new clusters and their use cases. I'd be happy to hear anything you can share about it19:43
* notmyname out19:43
doxavorewill do. I'm sure i'll be around in the coming days/weeks. :>19:43
*** MasterPiece has joined #openstack-swift19:50
*** silor1 has quit IRC20:18
*** MasterPiece has quit IRC20:21
*** zul has joined #openstack-swift20:47
*** doxavore has quit IRC21:03
*** jrichli_ has joined #openstack-swift21:18
*** jrichli has quit IRC21:18
*** mahatic has quit IRC21:23
*** mahatic has joined #openstack-swift21:40
*** mahatic has quit IRC21:52
*** mahatic has joined #openstack-swift21:53
*** cppforlife_ has quit IRC22:24
*** cppforlife_ has joined #openstack-swift22:45
*** jrichli_ has quit IRC22:52
*** tsg has joined #openstack-swift23:06
*** gsilvis has quit IRC23:38
*** gsilvis has joined #openstack-swift23:40

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!