Tuesday, 2014-12-09

00:08 *** tab___ has quit IRC
00:11 *** miurahr has joined #openstack-swift
00:13 *** annegent_ has quit IRC
00:26 *** miurahr has quit IRC
00:29 *** miurahr has joined #openstack-swift
00:32 *** tellesnobrega has joined #openstack-swift
00:32 *** dmorita has joined #openstack-swift
00:38 *** Masahiro has joined #openstack-swift
00:39 *** tellesnobrega has quit IRC
00:39 *** nellysmitt has joined #openstack-swift
00:43 *** Masahiro has quit IRC
00:43 *** nellysmitt has quit IRC
00:43 *** annegent_ has joined #openstack-swift
00:45 *** miurahr has quit IRC
00:45 *** shri has quit IRC
00:49 *** aix has quit IRC
00:52 *** annegent_ has quit IRC
01:13 *** addnull has joined #openstack-swift
01:19 *** gyee has quit IRC
01:35 *** tdasilva has joined #openstack-swift
01:42 *** nexusz99 has joined #openstack-swift
01:45 *** lpabon has joined #openstack-swift
02:00 *** lpabon has quit IRC
02:01 *** bill_az has quit IRC
02:07 *** addnull has quit IRC
02:08 *** haomaiwang has joined #openstack-swift
02:26 *** Masahiro has joined #openstack-swift
02:31 *** Masahiro has quit IRC
02:32 <openstackgerrit> Thiago da Silva proposed openstack/swift: fix dlo manifest file getting versioned  https://review.openstack.org/140206
02:33 *** addnull has joined #openstack-swift
02:40 *** nellysmitt has joined #openstack-swift
02:43 <openstackgerrit> Thiago da Silva proposed openstack/swift: fix dlo manifest file getting versioned  https://review.openstack.org/140206
02:44 *** imkarrer has joined #openstack-swift
02:44 *** nellysmitt has quit IRC
02:54 *** tdasilva has quit IRC
02:54 <imkarrer> Good evening everyone! I have a question about handoff partitions. Reading the source, it appears that handoff nodes are determined by the partition and the consistent hash ring when get_more_nodes is called. Is there a way to specify a handoff node?
02:57 <imkarrer> If, for example, you want to designate a certain device as a handoff device
03:13 *** addnull has quit IRC
03:24 *** david-lyle is now known as david-lyle_afk
03:32 <notmyname> imkarrer: no, that's not possible
03:38 *** addnull has joined #openstack-swift
03:42 <imkarrer> Thanks! I did not think so, wanted to check.
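
For readers skimming this exchange: handoffs are not configured anywhere, they are derived from the ring itself. A minimal sketch of enumerating them with the public Ring API, assuming a ring directory at /etc/swift and made-up account/container/object names:

    # List primary and handoff nodes for one object.
    from swift.common.ring import Ring

    ring = Ring('/etc/swift', ring_name='object')
    partition, primaries = ring.get_nodes('AUTH_test', 'mycontainer', 'myobject')

    print('primaries for partition %d:' % partition)
    for node in primaries:
        print('  %(ip)s:%(port)s/%(device)s' % node)

    # get_more_nodes() yields handoffs in a deterministic, ring-defined
    # order; there is no knob to pin a particular device as a handoff.
    for node in ring.get_more_nodes(partition):
        print('  handoff: %(ip)s:%(port)s/%(device)s' % node)
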
03:42 *** abhirc_ has quit IRC
03:42 *** Masahiro has joined #openstack-swift
03:46 *** annegent_ has joined #openstack-swift
03:47 *** Masahiro has quit IRC
03:48 *** annegent_ has quit IRC
03:48 *** annegent_ has joined #openstack-swift
03:50 *** erlon has quit IRC
03:51 *** erlon has joined #openstack-swift
04:24 *** imkarrer has quit IRC
04:27 *** SkyRocknRoll has joined #openstack-swift
04:33 *** tkay has quit IRC
04:40 *** nellysmitt has joined #openstack-swift
04:43 *** Masahiro has joined #openstack-swift
04:45 *** nellysmitt has quit IRC
04:47 *** addnull has quit IRC
04:47 *** Masahiro has quit IRC
04:57 *** ppai has joined #openstack-swift
05:05 *** rebelshrug has quit IRC
05:06 *** TaiSHi has quit IRC
05:18 *** kopparam has joined #openstack-swift
05:22 *** addnull has joined #openstack-swift
05:31 *** kopparam has quit IRC
05:31 *** kopparam_ has joined #openstack-swift
05:44 *** Masahiro has joined #openstack-swift
05:48 *** Masahiro has quit IRC
06:08 *** zaitcev has quit IRC
06:12 *** CrackerJackMack has quit IRC
06:13 *** CrackerJackMack has joined #openstack-swift
06:24 *** kopparam has joined #openstack-swift
06:26 *** kopparam_ has quit IRC
06:28 *** annegent_ has quit IRC
06:39 *** bkopilov has quit IRC
06:41 *** jyoti-ranjan has joined #openstack-swift
06:41 *** addnull has quit IRC
06:42 *** nellysmitt has joined #openstack-swift
06:44 *** kopparam has quit IRC
06:45 *** kopparam has joined #openstack-swift
06:46 *** nellysmitt has quit IRC
06:47 *** addnull has joined #openstack-swift
06:57 *** xianghui has quit IRC
07:03 *** nshaikh has joined #openstack-swift
07:03 *** kopparam has quit IRC
07:03 *** kopparam has joined #openstack-swift
07:05 *** CybergeekDK has quit IRC
07:06 *** CybergeekDK has joined #openstack-swift
07:14 *** pberis has quit IRC
07:14 *** pberis has joined #openstack-swift
07:15 *** k4n0 has joined #openstack-swift
07:18 *** sungju has quit IRC
07:23 *** nellysmitt has joined #openstack-swift
07:28 *** nellysmitt has quit IRC
07:28 *** annegent_ has joined #openstack-swift
07:30 *** NellyK has joined #openstack-swift
07:33 *** Masahiro has joined #openstack-swift
07:33 *** annegent_ has quit IRC
07:36 *** NellyK is now known as nellysmitt
07:37 *** Masahiro has quit IRC
07:40 <openstackgerrit> Hisashi Osanai proposed openstack/swift: Fix the GET's response code when there is a missing segment in LO  https://review.openstack.org/136258
07:46 *** nellysmitt has quit IRC
08:02 *** xianghui has joined #openstack-swift
08:04 *** rledisez has joined #openstack-swift
08:14 *** NellyK has joined #openstack-swift
08:20 *** jistr has joined #openstack-swift
08:23 *** NellyK has quit IRC
08:24 *** bkopilov has joined #openstack-swift
08:24 *** nellysmitt has joined #openstack-swift
08:33 *** Masahiro has joined #openstack-swift
08:38 *** Masahiro has quit IRC
08:51 *** jordanP has joined #openstack-swift
09:06 *** jwang has quit IRC
09:31 *** kopparam has quit IRC
09:31 *** kopparam_ has joined #openstack-swift
09:34 *** addnull has quit IRC
09:35 *** aix has joined #openstack-swift
09:43 *** ppai has quit IRC
09:56 *** ppai has joined #openstack-swift
09:58 *** nellysmitt has left #openstack-swift
10:04 *** addnull has joined #openstack-swift
10:14 *** addnull has quit IRC
10:18 *** haomaiwang has quit IRC
10:21 *** jistr has quit IRC
10:22 *** Masahiro has joined #openstack-swift
10:24 *** nshaikh has quit IRC
10:27 *** Masahiro has quit IRC
10:29 *** nshaikh has joined #openstack-swift
10:42 *** addnull has joined #openstack-swift
10:49 *** jistr has joined #openstack-swift
10:55 *** nshaikh has quit IRC
10:58 *** kopparam_ has quit IRC
10:59 *** kopparam has joined #openstack-swift
11:19 *** aix has quit IRC
11:24 *** addnull has quit IRC
11:28 *** SkyRocknRoll has quit IRC
11:32 *** aix has joined #openstack-swift
11:34 *** mahatic has joined #openstack-swift
11:37 *** dmsimard_away is now known as dmsimard
11:42 *** nshaikh has joined #openstack-swift
11:42 *** tellesnobrega has joined #openstack-swift
11:43 <openstackgerrit> Xiang Hui proposed openstack/swift: Fix getaddrinfo if dnspython is installed.  https://review.openstack.org/116618
11:46 *** addnull has joined #openstack-swift
11:54 *** tellesnobrega has quit IRC
11:57 *** lpabon has joined #openstack-swift
12:10 *** addnull has quit IRC
12:12 *** Masahiro has joined #openstack-swift
12:14 *** tdasilva has joined #openstack-swift
12:14 *** addnull has joined #openstack-swift
12:15 *** Masahiro has quit IRC
12:21 *** nshaikh has quit IRC
12:21 *** nshaikh has joined #openstack-swift
12:26 *** dmorita has quit IRC
12:29 *** delatte has quit IRC
12:36 *** oomichi has quit IRC
12:41 *** xianghui has quit IRC
12:45 *** kopparam has quit IRC
12:47 *** cdelatte has joined #openstack-swift
13:00 *** pberis has quit IRC
13:08 *** xianghui has joined #openstack-swift
13:14 *** xianghui has quit IRC
13:19 *** bill_az has joined #openstack-swift
13:19 *** foexle has joined #openstack-swift
13:34 *** ppai has quit IRC
13:42 *** miqui has joined #openstack-swift
13:58 *** annegent_ has joined #openstack-swift
14:00 *** Masahiro has joined #openstack-swift
14:04 *** Masahiro has quit IRC
14:06 *** annegent_ has quit IRC
14:09 *** tellesnobrega has joined #openstack-swift
14:21 *** k4n0 has quit IRC
14:36 *** nshaikh has quit IRC
14:45 <openstackgerrit> Daniel Wakefield proposed openstack/python-swiftclient: Verify MD5 of uploaded objects.  https://review.openstack.org/129254
14:50 *** rebelshrug has joined #openstack-swift
14:52 *** tdasilva has quit IRC
15:01 *** k4n0 has joined #openstack-swift
15:08 *** neoteo has joined #openstack-swift
15:08 *** neoteo has left #openstack-swift
15:08 *** pberis has joined #openstack-swift
15:18 *** rdaly2 has joined #openstack-swift
15:21 *** imkarrer has joined #openstack-swift
15:23 <imkarrer> Another question about handoff devices. Say there is a deployment maintaining 3 replicas with 3 devices shared between the account and container rings. If one of the devices goes down, where does the replica go, since there are no handoffs? Is the replica copied to one of the two working devices?
15:25 <ctennis> imkarrer: no, because they will already have one of the replicas anyway. In this case, there won't be a handoff.
15:26 <ahale> yeah, they will, won't they
15:26 *** jyoti-ranjan has quit IRC
15:26 <ahale> since in a swift-get-nodes -a you see every disk is a possible handoff
15:27 <ahale> well, i guess unless each node is just one drive
15:27 <imkarrer> So there would only be 2 copies of the data until the drive is restored? Doesn't the unique-as-possible placement algorithm maintain 3 copies?
15:27 <imkarrer> With only 3 devices in a ring with 3 replicas, there are no handoffs enumerated
15:27 <ctennis> yes, but the drive is the lowest level in that scheme
15:28 <ctennis> there would only be 2 copies until the drive is restored
15:28 <imkarrer> Thank you ctennis.
15:32 *** bkopilov has quit IRC
15:38 <imkarrer> ctennis, is it possible that rsync copies an object over to one of the drives to maintain three replicas? I don't think so, but I figure I should ask. I think rsync places replicas based on the ring. If the ring has no 4th device name to place a copy on during a failure, then there will be no third copy.
15:39 <imkarrer> And the 'Unique as Possible' placement is decided when the rings are created, correct?
15:41 <ctennis> imkarrer: yes, when the rings are "rebalanced", actually. but again, the drive is the lowest level of uniqueness for the data. There won't be multiple copies of the same replica on the same drive.
15:42 <imkarrer> Thanks for the clarification ctennis.
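
The 3-devices/3-replicas case above is easy to verify with a throwaway ring. A sketch using the public RingBuilder and Ring classes; the part power, zones, IPs, and weights are arbitrary:

    # With as many devices as replicas, every device is a primary for
    # every partition, so no handoffs can be enumerated.
    import os
    import tempfile
    from swift.common.ring import Ring, RingBuilder

    builder = RingBuilder(8, 3, 1)   # part_power, replicas, min_part_hours
    for i in range(3):
        builder.add_dev({'id': i, 'region': 1, 'zone': i, 'weight': 100,
                         'ip': '10.0.0.%d' % (i + 1), 'port': 6000,
                         'device': 'sda'})
    builder.rebalance()

    path = os.path.join(tempfile.mkdtemp(), 'object.ring.gz')
    builder.get_ring().save(path)
    ring = Ring(path)

    part, primaries = ring.get_nodes('AUTH_test', 'c', 'o')
    print('primaries: %d' % len(primaries))                        # -> 3
    print('handoffs:  %d' % len(list(ring.get_more_nodes(part))))  # -> 0
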
15:47 *** bkopilov has joined #openstack-swift
15:48 *** Masahiro has joined #openstack-swift
15:50 *** lpabon has quit IRC
15:53 *** Masahiro has quit IRC
16:00 *** tdasilva has joined #openstack-swift
16:02 *** addnull has quit IRC
16:25 *** jamieh_ has joined #openstack-swift
16:27 *** abhirc has joined #openstack-swift
16:34 *** k4n0 has quit IRC
16:35 *** bkopilov has quit IRC
16:46 <notmyname> good morning
16:47 <mahatic> good morning!
16:47 *** gyee has joined #openstack-swift
16:54 <peluse> mornin'
17:20 *** rdaly2 has quit IRC
17:22 *** rledisez has quit IRC
17:23 *** anderstj has quit IRC
17:23 *** otherjon has quit IRC
17:23 *** alpha_ori has quit IRC
17:23 *** chrisnelson has quit IRC
17:23 *** zackmdavis has quit IRC
17:23 *** bobby2 has quit IRC
17:23 *** ctennis has quit IRC
17:23 *** acorwin has quit IRC
17:23 *** swifterdarrell has quit IRC
17:23 *** hugokuo has quit IRC
17:23 *** amandap has quit IRC
17:23 *** joearnold has quit IRC
17:23 *** mlanner has quit IRC
17:23 *** charz has quit IRC
17:24 *** acorwin has joined #openstack-swift
17:25 *** alpha_ori has joined #openstack-swift
17:26 *** otherjon has joined #openstack-swift
17:27 *** swifterdarrell has joined #openstack-swift
17:27 *** ChanServ sets mode: +v swifterdarrell
17:29 *** zackmdavis has joined #openstack-swift
17:29 *** amandap has joined #openstack-swift
17:30 *** tkay has joined #openstack-swift
17:30 *** anderstj has joined #openstack-swift
17:30 *** bobby2 has joined #openstack-swift
17:31 *** charz has joined #openstack-swift
17:31 *** chrisnelson has joined #openstack-swift
17:32 *** ctennis has joined #openstack-swift
17:34 *** hugokuo has joined #openstack-swift
17:34 *** joearnold has joined #openstack-swift
17:35 *** ctennis has quit IRC
17:35 *** ctennis has joined #openstack-swift
17:35 *** mlanner has joined #openstack-swift
17:37 *** Masahiro has joined #openstack-swift
17:42 *** Masahiro has quit IRC
17:53 <cschwede> notmyname: torgomatic: i’m currently looking at this bug report: https://bugs.launchpad.net/swift/+bug/1400497 - it’s related to https://review.openstack.org/#/c/121422/
17:53 <cschwede> notmyname: torgomatic: so i think the behaviour is correct, because the total device weight in zone 2 in the example is twice as big as zone 1
17:54 <cschwede> notmyname: torgomatic: so i’m wondering if this is a bug or more like a missing warning in the documentation. wdyt?
17:55 *** abhirc has quit IRC
17:55 <notmyname> cschwede: just got out of a meeting. give me a moment and I'll look
17:58 *** david-lyle_afk is now known as david-lyle
17:58 <swifterdarrell> cschwede: notmyname: (cc torgomatic): that's definitely a fall-out of better taking the weight into consideration during ring rebalancing
17:59 <swifterdarrell> cschwede: notmyname: (cc torgomatic): we're planning on being able to generate a metric relating to "amount of reduced availability" that falls out of this
17:59 <cschwede> swifterdarrell: is this something you’re already working on (ie a patch for swift-ring-builder)?
17:59 <swifterdarrell> cschwede: notmyname: (cc torgomatic): if you have one failure-domain with less than 1/N*100% of the total weight (where N == replica count), then you're likely to get some amount of reduced availability
17:59 <swifterdarrell> cschwede: not atm, no
18:03 <cschwede> swifterdarrell: yes, the 1/N*100% makes sense. i can work on a patch for this
18:04 <cschwede> swifterdarrell: so, it’s mostly two patches: 1. raise a warning to the user + update docs 2. add an option to swift-ring-builder to show some general info+statistics
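
The 1/N*100% rule of thumb above is plain arithmetic and can be checked without touching a builder. A sketch with invented zone weights (not the weights from the actual bug report):

    # Flag failure domains holding less than 1/N of total ring weight,
    # where N is the replica count, per the heuristic described above.
    replicas = 3
    zone_weights = {'z1': 100.0, 'z2': 100.0, 'z3': 50.0}

    total = sum(zone_weights.values())
    for zone, weight in sorted(zone_weights.items()):
        share = weight / total
        ok = share >= 1.0 / replicas
        print('%s: %4.1f%% of weight (%s)'
              % (zone, share * 100, 'ok' if ok else 'below 1/N, at risk'))
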
18:12 *** jwang has joined #openstack-swift
18:14 *** bkopilov has joined #openstack-swift
18:19 <cschwede> swifterdarrell: hmm, i think this is also only a problem if there are fewer than N failure domains with a weight of (1/N*totalweight). look at this: http://paste.openstack.org/show/148054/
18:19 <notmyname> cschwede: I just talked to swifterdarrell about it and got a little whiteboard drawing
18:20 <cschwede> notmyname: now i’m curious about your discussions :)
18:21 <notmyname> cschwede: nah, he was explaining the problem to me. and talking about building the exact same thing you were just talking about building
18:22 <swifterdarrell> cschwede: what I'm planning on doing is writing a piece of code that takes a builder file as input and constructs a forest of nodes (with regions at top, then zones, then nodes; but stopping before drives) with each node containing the count of total partitions underneath the node and the count of partitions for which all N replicas are underneath the node.
18:22 <cschwede> swifterdarrell: ah, that sounds nice!
18:23 <swifterdarrell> cschwede: the percentage parts_with_all_replicas_under / all_parts_under * 100% is proportional to the probability of not having data available if the failure domain in question fails.
18:23 <swifterdarrell> cschwede: it's some kind of badness metric, IOW
18:23 <swifterdarrell> cschwede: that would fit well as an instance method on a builder object
18:24 <swifterdarrell> cschwede: simple function that generates a graph... then something else can decide how to display/interpret/act on that data
18:24 <swifterdarrell> torgomatic: notmyname: clayg: ^^^^^^^^^^^^^^
18:25 <notmyname> +1
18:27 <cschwede> swifterdarrell: notmyname: i think a separate patch with a simple warning and doc update makes sense. wdyt?
18:27 <notmyname> cschwede: yup
18:29 <swifterdarrell> cschwede: +1
18:29 <cschwede> notmyname: swifterdarrell: ok, i'll start working on this. thx for your time!
18:30 <swifterdarrell> cschwede: please point me at the review when you post it... I'm extremely interested in this :)
18:30 <cschwede> swifterdarrell: sure, will do
18:30 <swifterdarrell> cschwede: and thanks for picking it up!
18:31 <cschwede> swifterdarrell: well it’s my patch that changed the behaviour, so… ;)
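
A zone-level-only sketch of the metric swifterdarrell outlines (the real proposal walks regions, zones, and nodes). It reads RingBuilder._replica2part2dev, a private attribute holding one part-to-device array per replica, so treat this as illustration rather than a stable API; it also assumes an integral replica count:

    from collections import Counter

    def parts_fully_in_one_zone(builder):
        """Count partitions whose replicas all land in a single zone."""
        dev_zone = dict((d['id'], d['zone']) for d in builder.devs if d)
        counts = Counter()
        for part in range(builder.parts):
            zones = set(dev_zone[row[part]]
                        for row in builder._replica2part2dev)
            if len(zones) == 1:
                counts[zones.pop()] += 1
        return counts

    # counts[z] / builder.parts * 100 is the "badness" percentage for
    # zone z: partitions that become unavailable if the whole zone fails.
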
18:31 *** geaaru has joined #openstack-swift
18:32 *** IRTermite has quit IRC
18:32 *** jamieh_ has quit IRC
18:36 *** tkay has left #openstack-swift
18:38 *** ToMfromTO has joined #openstack-swift
18:39 *** ToMfromTO has left #openstack-swift
18:40 <notmyname> cschwede: are you working with Tim Leak, the reporter of that bug?
18:40 *** ToMfromTO has joined #openstack-swift
18:41 *** ToMfromTO has left #openstack-swift
18:42 <cschwede> notmyname: no, not yet. do you think i should ask him for a patch first?
18:43 <notmyname> cschwede: no. I was wondering if he was a coworker or customer and you had already talked to him
18:44 <cschwede> notmyname: no, just saw this bug and immediately thought that this needs a warning and doc update
18:45 <notmyname> ok, thanks. if possible, it would be nice to have these things land by the end of the weekend so we can include them in the 2.2.1 release
18:49 *** jistr has quit IRC
18:52 *** mahatic has quit IRC
18:53 *** jamieh_ has joined #openstack-swift
18:55 *** jamieh_ has quit IRC
18:57 *** jordanP has quit IRC
19:09 *** IRTermite has joined #openstack-swift
19:11 *** mahatic has joined #openstack-swift
19:12 *** remix_tj has joined #openstack-swift
19:20 *** aix has quit IRC
19:26 *** Masahiro has joined #openstack-swift
19:30 <remix_tj> hi, is this the right channel to ask about architectural requirements and how to properly configure swift for a multi-zone/multi-region cluster?
19:30 *** Masahiro has quit IRC
19:31 *** zul has quit IRC
19:31 <remix_tj> i have some trouble understanding the requirements for the various dedicated networks
19:34 <remix_tj> i also asked on Ask OpenStack, but it seems no one is answering
19:34 *** zul has joined #openstack-swift
19:34 <notmyname> remix_tj: ya, you can get some answers here. I'm in a meeting, but feel free to ask
19:35 <remix_tj> notmyname: thank you
19:35 *** zul has quit IRC
19:36 <remix_tj> i'm planning a deployment of a swift cluster involving multiple zones and two regions (starting with one, then i'll add the second one)
19:37 *** zul has joined #openstack-swift
19:38 <remix_tj> i want to use dedicated networks for the cluster-facing network and for the replication network
19:39 <notmyname> ok
19:40 <remix_tj> first question: does the proxy server of one region (supposing i deploy only one, for simplicity) need access to all the storage nodes in all zones of that region?
19:40 <remix_tj> and does it also need access to the storage nodes of the other regions?
19:47 *** exploreshaifali has joined #openstack-swift
19:48 <remix_tj> anyway, where can i find documentation about an implementation of a cluster like the one i want to build, with dedicated networks? it seems no one has done this kind of setup before
19:49 <peluse> looking to save myself a little googling here... was asked internally what current options there are for customers looking to migrate from file-based storage to Swift, and I assume they meant tools/etc to help both on the app side as well as with migration of data... anyone have any pre-canned info on this?
19:52 <notmyname> peluse: mostly that's done with some gateway. there's a few things out there that can present swift with some set of posix semantics. the other way is looking at explorer or dashboard tools that give you (pseudo) directories and drag-drop functionality
19:53 <notmyname> peluse: clayg just gave torgomatic and me a run-down of your current work on https://review.openstack.org/#/c/131872/
19:54 <peluse> notmyname, cool, I'm about to push another update that fixes a few things, cleans shit up, and makes existing test code work. Will start adding new test code after that, but I think it's the right direction this time
19:54 <notmyname> remix_tj: yup. totally possible. the nodes in the swift cluster need to be able to talk to one another (all the storage nodes need to talk to one another, and the proxies should be able to talk to the storage nodes)
19:55 <notmyname> peluse: everything I heard sounded good. still a bunch of "fun" problems to solve. seems a good direction though
19:55 <peluse> notmyname, patch includes all the plumbing for new hashes.pkl and functional update and update_delete for both reconstructor and repl (well, the reconstructor side won't actually reconstruct until GET is done, but it will reuse all of that)
19:56 <peluse> notmyname, yeah, I'll update the design doc too after I push the cleaned up version....
19:56 <peluse> one more existing unit test failure to fix...
19:56 <notmyname> peluse: oh, and my "pre-canned" info for file->swift is basically spelled "swiftstack sales pitch" :-)
19:58 <peluse> heh
19:58 <peluse> i have no problem giving them that answer
19:58 <notmyname> remix_tj: https://swiftstack.com/blog/2012/09/16/globally-distributed-openstack-swift-cluster/ https://www.swiftstack.com/docs/admin/cluster_management/regions.html https://www.youtube.com/watch?v=mcaTwhP_rPE https://www.youtube.com/watch?v=LpmBRqevuVU
20:04 <peluse> notmyname, so does the SS Filesystem Gateway require SS Controller or is it a standalone thing?
20:05 <peluse> heh, watching Joe's video on it now :)
20:05 <notmyname> peluse: (I'm told we aren't supposed to call it "SS" anything). it's not stand-alone. only part of the product
20:05 <peluse> well, you didn't do that, I did :)
20:06 *** gyee has quit IRC
20:15 *** lpabon has joined #openstack-swift
20:40 <remix_tj> notmyname: so, summing up, every network needs to have global visibility of all the nodes of the cluster
20:41 *** rdaly2 has joined #openstack-swift
20:43 <openstackgerrit> paul luse proposed openstack/swift: New hashes.pkl format and several other reconstructor related changes  https://review.openstack.org/131872
20:43 <openstackgerrit> paul luse proposed openstack/swift: Build up reconstructor with correct node selection  https://review.openstack.org/129361
20:43 <openstackgerrit> paul luse proposed openstack/swift: Add node/pair index patch back into feature/EC  https://review.openstack.org/134065
20:46 <peluse> clayg, getting close... https://review.openstack.org/131872 has all the stuff we talked about (pretty sure) and existing test code updated to work with it. Still a little stitching to do on the reconstructor side, but will start putting in test code for the new functions here shortly....
20:47 <peluse> well, I have an eye dr appt and will be dilated, so maybe not so shortly :)
20:58 *** prontotest has joined #openstack-swift
20:58 *** prontotest has left #openstack-swift
20:59 *** rdaly2 has quit IRC
21:00 *** cdelatte has quit IRC
21:05 <notmyname> remix_tj: yes
21:07 <remix_tj> ok, this doesn't emerge from the docs and books. maybe it's implicit, but when i ask our netadmins for new vlans/networks it's the first question they'll ask :-\
21:09 *** dmsimard is now known as dmsimard_away
21:10 <mattoliverau> Morning
21:10 <swifterdarrell> remix_tj: it's not simple from an implementation standpoint, but conceptually it's simple: every proxy-server needs to be able to route to every IP/port defined in all rings (with the possible exception of the replication IPs/ports, if they differ?); every storage node needs to be able to route to every IP/port defined in all rings. notmyname, that sound about right?
21:10 <notmyname> yup
21:11 <swifterdarrell> remix_tj: notmyname: all proxy-servers also need to be able to route to every memcached IP/port defined in the configs (these are often co-deployed on the proxy-servers themselves)
21:11 <remix_tj> very good
21:11 <remix_tj> swifterdarrell: even a remote proxy server?
21:13 <swifterdarrell> remix_tj: for distributed proxies (say, 2 regions, 2 proxies each for simplicity), you have 2 choices: one memcached pool per region (defined by the set of memcached server IPs in each proxy's configs) with the potential issue that each proxy will not have access to cache members in the other region; OR one large memcached pool that's coherent but introduces WAN latency into some proportion of your requests.
21:14 <openstackgerrit> Christian Schwede proposed openstack/swift: [WIP] Warn if multiple replicas are stored within same region/zone  https://review.openstack.org/140478
21:15 *** Masahiro has joined #openstack-swift
21:19 <remix_tj> swifterdarrell: the idea is that the proxy in the second region is only contacted in case of disaster recovery
21:19 <remix_tj> (more or less)
21:19 *** Masahiro has quit IRC
21:19 <swifterdarrell> remix_tj: then you'd still need full routability, but you could deploy disjoint memcached pools
21:20 <swifterdarrell> remix_tj: and after a fail-over, you'll have a cold cache (auth tokens will become invalid and/or need to be re-injected into memcached, etc.)
21:21 <remix_tj> ok, auth won't be a problem since servers and applications will be restarted in a specific order
21:21 <remix_tj> so they'll reauth
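
The "one memcached pool per region" option above comes down to what each proxy lists in its cache section. A minimal proxy-server.conf fragment as a sketch (IPs invented; the other region's proxies would list their own local memcached servers instead):

    [filter:cache]
    use = egg:swift#memcache
    # Only this region's memcached servers: no WAN hops on cache lookups,
    # at the cost of a cold cache after failing over to the other region.
    memcache_servers = 10.1.0.10:11211,10.1.0.11:11211
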
21:23 <swifterdarrell> remix_tj: note also that you'll need >= 33% of the raw storage capacity (as given to the ring as the sum of all devices' weights) in the DR region, or you run the risk of > 0 partitions having all 3 replicas in the non-DR region (that's assuming 3 replicas, where you want 2 replicas in the primary region and 1 in the DR region)
21:24 *** tab___ has joined #openstack-swift
21:27 <remix_tj> yeah, i know. at the moment, for testing purposes, we planned a local region composed of 6 storage nodes in two datacenters at campus distance, and 6 storage nodes in the remote region, so raw capacity will theoretically be 50/50
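
Checking a planned layout against that 33% rule is quick arithmetic. A sketch assuming 3 replicas with 2 in the primary region and 1 in DR, and invented per-node weights:

    # The DR region must hold at least dr_replicas/replicas of the total
    # ring weight, or some partitions will keep all their replicas in
    # the primary region.
    replicas, dr_replicas = 3, 1
    primary_weight = 6 * 4000.0   # 6 local storage nodes, made-up weights
    dr_weight = 6 * 4000.0        # 6 remote storage nodes
    share = dr_weight / (primary_weight + dr_weight)
    need = float(dr_replicas) / replicas
    print('DR share %.0f%%, minimum %.0f%% -> %s'
          % (share * 100, need * 100, 'ok' if share >= need else 'at risk'))
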
21:33 *** pberis has quit IRC
21:35 *** lpabon has quit IRC
21:53 *** tdasilva has quit IRC
21:57 <openstackgerrit> Christian Schwede proposed openstack/swift: [WIP] Warn if multiple replicas are stored within same region/zone  https://review.openstack.org/140478
21:59 *** pberis has joined #openstack-swift
22:06 *** exploreshaifali has quit IRC
22:07 *** foexle has quit IRC
22:14 *** dmsimard_away is now known as dmsimard
22:15 *** dmsimard is now known as dmsimard_away
22:22 *** miurahr has joined #openstack-swift
23:00 *** StevenK has quit IRC
23:04 *** Masahiro has joined #openstack-swift
23:05 *** tacticus_v1 is now known as tacticus
23:08 *** Masahiro has quit IRC
23:09 *** rmcall has joined #openstack-swift
23:19 *** tab___ has quit IRC
23:19 *** tab____ has joined #openstack-swift
23:33 *** gyee has joined #openstack-swift
23:49 *** miurahr has quit IRC
