Wednesday, 2014-07-09

<mattoliverau> notmyname: a new pull request, added some colours so large graphs should be easier to read.  00:00
<openstackgerrit> A change was merged to openstack/swift: Merge tag '2.0.0'  https://review.openstack.org/105222  02:22
<blazesurfer> hi all  02:30
<blazesurfer> question around a full disk: I've been trying for a while to rebalance or remove a disk from my Swift cluster. Is there a way to ensure all data has been evacuated from the disk?  02:31
<blazesurfer> I've gradually dropped the weight of the disk, but that doesn't seem to move data off it. Anyone able to clear up my understanding?  02:31
<zaitcev> Just remove it from the ring.  02:33
<zaitcev> Rings.  02:33
<blazesurfer> The only issue with that is I want to make sure not to lose any data. This is one of 2 drives that were originally in a single-replica, single-node cluster. I've expanded it to multiple nodes now and have it replicating, though I'm not 100% sure how to confirm that I have two replicas of all the data in the cluster.  02:35
<blazesurfer> does that make sense?  02:35
<blazesurfer> thank you for your response, zaitcev  02:35
<zaitcev> Are you running with just 2? Most people build rings with  swift-ring-builder 18 3 1  or something. Not 2.  02:37
<zaitcev> Oh, wait. It's even worse - single replica.  02:37
<zaitcev> I don't know what to do in such a case. Normally one just bumps one drive off and the replicators restore redundancy. If you only have 2, there's a probability that something, somewhere, got 2 replicas assigned to this drive.  02:39
<torgomatic> If the ring only has two drives in it, there will be one replica per disk. You can just remove it from the ring, then cross your fingers and hope the one remaining disk doesn't die.  02:49
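
For reference, removing a device from a ring as suggested above looks roughly like this sketch. The builder file name object.builder and the device id d2 are placeholders; the same steps apply to the account and container builders, and the rebalanced ring.gz files then need to be redistributed to every node:

    swift-ring-builder object.builder remove d2
    swift-ring-builder object.builder rebalance
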
<blazesurfer> ok  02:52
<blazesurfer> I started with one server, 4 disks, 1 replica (running on an enterprise SAN array) - the typical "don't do it this way" design, but it's what I had to start with.  02:53
<blazesurfer> Now I have 2 servers running, 4 drives per server, 2 zones (each server is a zone), and replicas set to 2  02:54
<blazesurfer> will be adding a 3rd zone/server/replica once I have it set up.  02:54
<blazesurfer> does that read right?  02:55
<torgomatic> Sounds reasonable so far.  02:56
<blazesurfer> So from what I understand, disks can be added gradually and removed gradually, though I'm just not seeing it happen.  02:57
<blazesurfer> Also, do you know of a way to confirm there is a replica in both zones? (Note I own all the accounts on the cluster, so I can authenticate as any of them, if that helps.)  02:57
<torgomatic> swift-get-nodes will tell you where the primary nodes are for any particular object.  02:59
<torgomatic> Or account or container  02:59
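
A sketch of that command, assuming the default ring location; the account, container, and object names are placeholders:

    swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject

It prints the partition and the primary (and handoff) nodes for that object, along with ready-made curl and ssh commands for inspecting each copy - handy for confirming a replica exists in both zones.
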
<torgomatic> If you are trying to remove an entire server, gradual removal won't work. Durability trumps even distribution. You'll just have to remove the disks and let replication go a while  03:01
<blazesurfer> ok, my issue is 2 disks are full, and that's why I'm trying to remove them gradually. The other disks have 800GB free on them, so I'm trying to balance the data  03:06
<blazesurfer> as I'm getting a lot of disk-full errors in the logs  03:06
<blazesurfer> not trying to remove the entire server, just the two full drives.  03:06
<torgomatic> So you reduce the weights, rebalance, and distribute the rings to both servers?  03:07
<blazesurfer> i have, yes  03:08
<blazesurfer> So: all 2TB drives; I reduced the two full drives to 1500 weight, the rest are at 2000; rebalanced a few times; still 100% full  03:09
<torgomatic> I think that swift-ring-builder will tell you how many partitions are on each drive; do the 1500-weight drives have fewer?  03:12
<blazesurfer> http://paste.openstack.org/show/HpI3amYYG5QcE2jOKp0i/  03:13
<blazesurfer> yes, it reports that fewer are meant to be there  03:13
<blazesurfer> that paste is of the swift-get-nodes command  03:14
<blazesurfer> node 10.42.123.13, drives sdb1 and sdc1 are the two that are full  03:15
<blazesurfer> will triple-check partitions again  03:15
<blazesurfer> ok, sorry, yes they do report fewer partitions when I check the rings using the swift-ring-builder command, though they don't appear to be shrinking in size on disk.  03:17
<torgomatic> If fewer partitions are allocated to the 1500-weight drives, things are working as intended. You may just have some partitions with lots of big objects in them, and you got unlucky.  03:17
<torgomatic> Try dropping the weights to 1000 and see what happens, maybe?  03:18
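
The gradual drain being discussed, as a sketch; object.builder and the device ids d1/d2 for the two full drives are placeholders:

    swift-ring-builder object.builder set_weight d1 1000
    swift-ring-builder object.builder set_weight d2 1000
    swift-ring-builder object.builder rebalance
    swift-ring-builder object.builder

The last command, with no subcommand, prints the device table including the partition count per drive. Push the rebalanced ring.gz files to all nodes, let replication catch up, then step the weights down further.
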
<blazesurfer> ok, so should the balance figure in the ring be at anything in particular for it to be healthy? It changes each time  03:19
<zaitcev> Isn't it the case that nobody actually deletes objects that do not need to be there?  03:21
<torgomatic> The replicator will delete things.  03:22
<openstackgerrit> Matthew Oliver proposed a change to openstack/swift: Swift configuration parameter audit  https://review.openstack.org/104760  06:12
<openstackgerrit> Zhang Hua proposed a change to openstack/swift: Add distributed tracing capablities in logging.  https://review.openstack.org/93677  06:32
<mattoliverau> I'm off to the in-laws for dinner, so need to head off a little early. Have a great night all.  06:35
<openstackgerrit> Zhang Hua proposed a change to openstack/swift: Add distributed tracing capablities in logging.  https://review.openstack.org/93677  06:36
<charz> notmyname: I found a bug in swift3 (current repo: stackforge/swift3) and created a bug report at https://launchpad.net/swift3. I also have a patch for it.  07:27
<charz> notmyname: Should I post this bug to the swift project, or keep it in https://launchpad.net/swift3?  07:28
<openstackgerrit> Christian Schwede proposed a change to openstack/swift: Limit changed partitions when rebalancing  https://review.openstack.org/105666  07:49
<openstackgerrit> Christian Schwede proposed a change to openstack/swift: Limit changed partitions when rebalancing  https://review.openstack.org/105666  08:20
<Guest24271> Hi, can anyone please answer: how is the storage URL generated?  09:26
<Guest24271> and where in the code?  09:26
<Guest24271> Where is the setting for the IP of the proxy service?  09:46
<Shivani> Hi, anyone there?  10:14
<Shivani> Can someone tell me where to get the data and metadata for PUT, upload, etc.?  10:31
<Shivani> ?  10:31
<omame> Shivani: what do you mean exactly? what is it that you're trying to do?  10:34
<Shivani> I have two issues. I've done an installation using SAIO. Now I want to change the server from 127.0.0.1 to my local IP so GET/PUT requests go there, but I'm unable to do this. I am adding *memcache_servers = 10.0.9.18:8080* in /etc/swift/proxy-server.conf, but the connection still isn't being established. What should I do?  10:53
<ctennis> Shivani: you need to change the bind_ip line in the proxy-server.conf file  11:00
<ctennis> memcache_servers points to where memcached is listening  11:00
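
A sketch of the relevant /etc/swift/proxy-server.conf pieces; the bind address is a placeholder, and note that memcache_servers stays pointed at memcached's port (11211 by default), not at the proxy:

    [DEFAULT]
    bind_ip = 10.0.9.18
    bind_port = 8080

    [filter:cache]
    use = egg:swift#memcache
    memcache_servers = 127.0.0.1:11211

Restart the proxy afterwards (e.g. swift-init proxy restart) for the new bind address to take effect.
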
<notmyname> good morning  16:11
<notmyname> charz: if it's a bug in swift3, then it needs to be tracked and patched there  16:11
<charz> notmyname: ok, got it. thanks.  16:13
<notmyname> glange: I like reading your sports analysis  16:25
<notmyname> (http://greglange.blogspot.com/2014/07/what-lebron-james-should-do.html)  16:26
<zaitcev> charz: and while you're at it, make Tomo cut a 2.0 release, for crissakes  16:39
<charz> zaitcev: ok. I don't see any tag or branch for a 2.0 release; I will ask Tomo.  16:46
<notmyname> reminder that the swift team meeting is in a little less than 2 hours  17:12
<glange> notmyname: thanks, that blog post is 100% true :)  18:41
<elmiko> hi, i'm having some difficulty with swiftclient.client.Connection using the preauthtoken. do i need to specify more than preauthtoken when creating the Connection?  18:53
<torgomatic> elmiko: probably the storage URL too  18:53
<elmiko> torgomatic: thanks, i'll give that a try  18:53
<elmiko> torgomatic: one more thing, will preauthurl understand 'swift://container/object' or do i need the full http form?  18:54
<torgomatic> elmiko: the full HTTP URL, otherwise it doesn't know which host to talk to or what the account name is (or which protocol, or...)  18:55
<elmiko> torgomatic: cool, thanks again :)  18:55
<torgomatic> np :)  18:56
<elmiko> that did it  18:57
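
What that looks like in code, as a minimal sketch; the storage URL and token values here are placeholders:

    from swiftclient.client import Connection

    # preauthurl must be the full HTTP storage URL (protocol, host, account),
    # not a swift:// form; preauthtoken is the already-obtained auth token
    conn = Connection(
        preauthurl='http://10.0.9.18:8080/v1/AUTH_projectid',
        preauthtoken='fbd7f5d0-8896-482a-babc-c2718b2e18e2',
    )
    headers, body = conn.get_object('container', 'object')
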
<notmyname> meeting time in #openstack-meeting in a couple of minutes  18:58
<elmiko> i've been using `$ swift stat -v` to get the StorageURL, is there a better way to do this?  19:07
<torgomatic> elmiko: not that I know of; you have to auth to get the storage URL, and that's as good a way to auth as any  19:11
<elmiko> ok  19:12
<elmiko> is the storageURL guaranteed to be ip:port/AUTH_projectid ?  19:12
<openstackgerrit> Samuel Merritt proposed a change to openstack/swift: added process pid to the end of storage node log lines  https://review.openstack.org/105309  19:12
<elmiko> torgomatic: i'm working on an issue where i want to use keystone delegated trust tokens to access swift objects, so the final user of the container/object might not have sufficient privilege to get the storageURL. i'm trying to make sure i have all the details of what i need to bundle with the trust token for the final consumer.  19:14
<torgomatic> elmiko: storage URL (includes protocol, hostname, and account) + token + container + object should do it  19:15
<elmiko> torgomatic: that's what it's looking like. i was just curious about discovering the storageURL.  19:16
<elmiko> also, sorry for interrupting the meeting :)  19:17
<torgomatic> heh, IRC meetings are slow anyway :)  19:18
<elmiko> yea  19:18
* notmyname makes a note to talk to torgomatic more in the meeting  19:21
<elmiko> lol, sorry torgomatic  19:22
<Tyger> Whoever talks in here gets the most responsibility in the meeting?  19:28
<zaitcev> elmiko: You're supposed to receive the storage URL alongside the token. It should not be "discovered".  19:47
<cschwede> clayg: thanks for the review! am i missing something when adding new regions or zones? maybe i don’t need https://review.openstack.org/#/c/105666/ at all?  19:56
<mattoliverau> OK, I'm going back to bed, be back in a few hours :)  19:58
<elmiko> zaitcev: as part of the endpoint catalog?  20:00
<zaitcev> elmiko: You can call it that, but normally it's like  {"access": {"token": {"expires": "2011-11-30T14:52:29.768403", "id": "fbd7f5d0-8896-482a-babc-c2718b2e18e2", "tenant": {"id": "1", "name": "adminTenant"}}, "serviceCatalog": [{"endpoints": [{"....... "publicURL": "http://localhost:8774/v1.1/"}],  20:03
<zaitcev> Okay, that's an example from 3 years ago :-)  20:04
<zaitcev> And uses Nova  20:04
<elmiko> lol  20:04
<elmiko> zaitcev: yea, it looks a little different now but i get your meaning  20:04
<zaitcev> So you can look it up in the endpoint catalog, then interpolate as needed, but why bother  20:05
<elmiko> well yea, when i get the token i also get the endpoints available  20:06
<elmiko> i'm trying to work this from the angle that i will need to discover these things using the keystone client  20:06
<zaitcev> so the library does an equiv of  20:07
<zaitcev> curl -X POST http://127.0.0.1:5000/v2.0/tokens -d '{"auth": {"tenantName":"tsa17", "passwordCredentials":{"username":"spare17", "password":"secrete"}}}' -H 'Content-Type: application/json'  20:07
<elmiko> that makes sense  20:07
<zaitcev> The returned serviceCatalog entries are already interpolated  20:08
<zaitcev> Well, I haven't done it by hand in years.  20:08
<zaitcev> And holey moley, these new-fangled PKI tokens are huge  20:09
<elmiko> lol yes  20:09
<elmiko> thanks for the help zaitcev  20:10
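
The client-side equivalent of that curl call, sketched with the v2.0 python-keystoneclient of the era; the credentials mirror zaitcev's example above:

    from keystoneclient.v2_0 import client as ks_client

    ks = ks_client.Client(username='spare17', password='secrete',
                          tenant_name='tsa17',
                          auth_url='http://127.0.0.1:5000/v2.0')
    # the serviceCatalog entries come back already interpolated
    storage_url = ks.service_catalog.url_for(service_type='object-store',
                                             endpoint_type='publicURL')
    token = ks.auth_token

Those two values are exactly what python-swiftclient's preauthurl and preauthtoken expect.
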
<clayg> cschwede: i need to look at those gists and figure out what you're seeing - i guess if you go from 3-replica 2-zone to 3-replica 3-zone, one replica of every partition is going to move regardless of weight?  that doesn't sound right...  20:15
<cschwede> clayg: exactly! in my case it was from 3-replica 1-zone 1-region to 3-replica 2-regions. This would put a lot of stress on the replication network  20:17
<cschwede> clayg: but if you think it is a bug i will open a ticket and fix this instead  20:17
<clayg> cschwede: yeah, i think that seems like a bug?  torgomatic: can you grok on 105666 when you get a chance (gholt?)  20:18
<torgomatic> notmyname: can you poke at https://review.openstack.org/#/c/103783/ and try it again? (it's real fast)  20:33
<clayg> cschwede: so you can also see it when adding a second device to a 2-replica 1-device ring - rebalance thinks it *has* to move the parts that are duped on the device now that it has a second one - weights be damned!  20:35
<notmyname> torgomatic: when I go to http://wiki.openstack.org/HowToContribute#If_you.27re_a_developer.2C_start_here I get redirected to the top of the page (no anchor)  20:36
<torgomatic> http://wiki.openstack.org/HowToContribute#If_you.27re_a_developer  20:38
<torgomatic> notmyname: try ^^  20:38
<cschwede> clayg: yep, you're right. i'm currently trying to find the bug in swift-ring-builder  20:39
<notmyname> torgomatic: actually a browser issue, not the link  20:39
<torgomatic> cschwede: clayg: it's a feature?  20:39
<ahale> when I saw 105666 I was thinking it would be nice to be able to specify a specific partition you want to move in a rebalance sometimes  20:39
<torgomatic> it is currently expected that going from 2 regions to 3, in a 3-replica ring, will move one replica of every partition into the new region  20:39
<clayg> torgomatic: ok, well can I make that not get me fired by net-eng  20:40
<ahale> like in the situation we were in with the db preallocation issue - there was a particularly large one it would have been nice to ensure a copy of moved  20:40
<torgomatic> durability trumps balance, at least how it is now  20:40
<torgomatic> clayg: feel free... I'm not saying what's good or bad, just how it's expected to work now  20:40
<cschwede> torgomatic: yes, but this will create a lot of replication traffic and load, and i'd like to be able to control this a little bit more  20:41
<torgomatic> ahale: that also sounds like a fine idea, though orthogonal to what's being discussed  20:41
<torgomatic> cschwede: sounds reasonable to me  20:41
<ahale> oh indeed, just mentioning it as an ops PoV thought I had this morning reading that gerrit  20:41
<torgomatic> there was the question of whether slowly turning up the weights would do any good, and I was just clarifying that no, it doesn't help jack  20:42
<torgomatic> now, when going 3-region to 4-region with 3 replicas, the weights do help  20:42
<cschwede> torgomatic: ah, yes, that makes sense after thinking about how it works  20:45
<cschwede> clayg: torgomatic: so you think that it makes sense to continue working on https://review.openstack.org/#/c/105666/ instead of changing the current behavior?  20:48
<torgomatic> cschwede: sure, seems good to me. I think the current behavior is pretty good; it just has some rough edges when you have fewer regions/zones than you do replicas  20:52
<cschwede> torgomatic: ok great, thanks for your feedback!  20:54
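
To make the behavior above concrete, adding the first device in a new region looks something like this sketch (the IP, port, device name, and weight are placeholders):

    swift-ring-builder object.builder add r2z1-10.0.1.5:6000/sdb 100
    swift-ring-builder object.builder rebalance

With 3 replicas and 2 regions before the add, that rebalance moves one replica of every partition into region 2 no matter how small the initial weight; only once there are at least as many regions as replicas do the weights throttle how much moves.
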
<openstackgerrit> Samuel Merritt proposed a change to openstack/swift: account to account copy implementation  https://review.openstack.org/72157  21:30
<clayg> cschwede: torgomatic: so we'll have two ways of slowly rebalancing - one is per device, and the other is per rebalance?  21:40
<torgomatic> clayg: looks that way  21:40
<clayg> cschwede: torgomatic: what happens if you have a cluster with 100 devices in region0 and you add 10 devices in region1, if rebalance doesn't respect weight?  21:40
<torgomatic> clayg: region1 fills up in a darn hurry  21:40
<cschwede> torgomatic: clayg: exactly, this is my problem ;)  21:41
<clayg> cschwede: I'm not sure having two crappy ways to do the same thing is as good as one way that does what we want  21:41
<cschwede> clayg: so, from my tests it looks like the weight is considered inside a region or zone  21:42
<cschwede> as long as there are fewer regions or zones than replicas  21:42
<openstackgerrit> A change was merged to openstack/python-swiftclient: Add context sensitive help  https://review.openstack.org/102510  21:48
<zaitcev> Oh great, Swift in Docker.  21:59
<zaitcev> I seem to recall someone did it before...  21:59
<notmyname> zaitcev: http://serverascode.com/2014/06/12/run-swift-in-docker.html  22:07
<notmyname> ?  22:07
<zaitcev> notmyname: thanks a lot  22:09
<mattoliverau> Morning all... Again :)  22:18
<clayg> oh goody, are we doing that thing where the gate fails everything  22:21
<openstackgerrit> Clay Gerrard proposed a change to openstack/swift: Let eventlet.wsgi.server log tracebacks when eventlet_debug is enabled  https://review.openstack.org/105918  23:31
<openstackgerrit> A change was merged to openstack/swift: Fix the section name in CONTRIBUTING.rst  https://review.openstack.org/103783  23:41
<openstackgerrit> paul luse proposed a change to openstack/swift: Allow Object Auditor to Specify a Different disk_chunk_size  https://review.openstack.org/105920  23:51

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!