Wednesday, 2015-01-21

notmyname>>> x = {'x':'eval(open(\'/etc/passwd\',\'rb\').read())'}00:04
notmyname>>> x00:04
notmyname{'x': "eval(open('/etc/passwd','rb').read())"}00:04
notmyname>>> import pickle00:04
notmyname>>> pickle.dumps(x)00:04
notmyname'(dp0\nS\'x\'\np1\nS"eval(open(\'/etc/passwd\',\'rb\').read())"\np2\ns.'00:04
notmyname>>> pickle.loads(pickle.dumps(x))00:04
notmyname{'x': "eval(open('/etc/passwd','rb').read())"}00:04
notmynametorgomatic: like that?00:04
*** lcurtis has quit IRC00:04
torgomaticnotmyname: I have no idea what just happened here00:05
brianclinetorgomatic: yeah, I thought so. I knew I remembered seeing the xattr discussion a while back but couldn't find it, and I had to satisfy a bit of a red herring question about it for someone at work00:05
torgomaticyou mean that strings remain strings even after a run through pickle?00:05
notmynameya. ie the eval didn't happen00:05
notmynamebut I'm not sure the right way to exploit the pickle thing00:06
torgomaticyup00:06
torgomaticreally, if you have RW access to Swift data on disk, you can craft a pickle to gain control of the Swift daemons, which you can then use to... modify data on disk?00:06
*** Nadeem has quit IRC00:06
torgomaticit's sort of pointless00:06
swifterdarrelltorgomatic: NEFARIOUS!00:07
torgomaticthe memcache thing would let you get from "access to internal network" to "control of Swift daemons", and that's an actual escalation so we fixed it00:07
brianclineyeah. if you have disk access, why even bother with pickling an exploit anyway00:08
torgomaticexactly.00:08
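The exchange above condenses into a short runnable sketch: a string that merely *contains* code survives a pickle round-trip as inert data, while the actual pickle hazard is that a crafted payload can name an arbitrary callable (via `__reduce__`) that the unpickler calls at load time. Nothing here is Swift code; the `Evil` class is illustrative, and a harmless `eval('2 + 2')` stands in for an attacker's payload.

```python
import pickle

# 1) A string that looks like code is still just a string after pickling:
x = {'x': "eval(open('/etc/passwd','rb').read())"}
assert pickle.loads(pickle.dumps(x)) == x  # round-trips as data; no eval happens

# 2) The real hazard: __reduce__ tells pickle "call this callable with
#    these args to rebuild me", and pickle.loads() obliges. A benign
#    expression stands in for an attacker's os.system/eval payload.
class Evil:
    def __reduce__(self):
        return (eval, ('2 + 2',))

assert pickle.loads(pickle.dumps(Evil())) == 4  # eval ran at load time
```

This is why torgomatic's point holds: pickling attacker-controlled *bytes* is dangerous, but pickling your own dict of strings and reading it back is not an escalation by itself.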
brianclinethanks for enduring the question. just checking off my "yes I asked your question in the community" box00:10
*** lihkin has quit IRC00:11
torgomaticnp :)00:11
*** dmorita has joined #openstack-swift00:34
*** ho has joined #openstack-swift00:41
notmynamefor those who have logged on since my morning, note that there is a registration link in the topic for the hackathon00:43
notmynamespace is limited, so sign up early if you're coming00:44
*** abhirc has joined #openstack-swift00:51
*** ahonda has joined #openstack-swift01:03
mattoliveraunotmyname: so looks like I'm super efficient... I don't remember signing up earlier, but looks like I did! So now I have 2 tickets, way for the eventbrite app to notice.01:05
notmynamemattoliverau: heh. I think you can change it01:09
notmynamemattoliverau: otherwise you're saying that we'll have to up the limit just to account for you registering twice!01:09
mattoliveraunotmyname: No I just have to take up  twice as much space and be twice as productive :p01:10
*** abhirc has quit IRC01:10
notmynamebut if you are twice as productive and only twice as big, then your effective "productivity for unit matt" is actually constant. you need to be 4 times as effective if you take up 2 slots!01:12
mattoliveraunotmyname: Lol, damn, better figure out how to cancel one then :p01:13
mattoliverauTurns out you can only cancel tickets from the web interface NOT the app (for those playing along at home)01:15
mattoliverauLol eventbrite says your new office is 100+ miles away... I guess that is a correct statement, but it's also 1+ mile away :p01:17
notmynameand also 1000+ miles away01:18
*** km has joined #openstack-swift01:18
*** abhirc has joined #openstack-swift01:19
*** lpabon has quit IRC01:20
*** zigo_ has joined #openstack-swift01:21
*** abhirc has quit IRC01:21
*** abhirc has joined #openstack-swift01:21
*** zigo has quit IRC01:21
*** kei_yama has joined #openstack-swift01:21
*** zigo has joined #openstack-swift01:27
*** zigo_ has quit IRC01:27
charzmahatic01:31
*** abhirc has quit IRC01:42
*** zigo has quit IRC01:46
notmynamemattoliverau: FWIW I only see one ticket with your name on it01:47
notmynameso maybe you deleted it?01:47
*** zigo has joined #openstack-swift01:47
*** addnull has joined #openstack-swift01:48
mattoliveraunotmyname: yeah, I cancelled it, turns out you can cancel it from the web UI but not the APP01:50
*** abhirc has joined #openstack-swift01:52
*** bill_az has quit IRC01:54
charznotmyname: mahatic I'm going to turn jenkins to silent mode until the swift community cluster is online. Sorry for the inconvenience.01:56
mahaticcharz, okay. thanks for letting me know01:59
*** zigo has quit IRC02:00
*** zigo has joined #openstack-swift02:00
*** tellesnobrega has quit IRC02:06
hogood morning!02:15
charzho: good morning02:17
openstackgerritYuan Zhou proposed openstack/swift: Update contianer sync to use internal client  https://review.openstack.org/14379102:30
*** lcurtis has joined #openstack-swift02:44
*** addnull has quit IRC03:08
*** erlon has quit IRC03:30
*** echevemaster has joined #openstack-swift03:39
*** mahatic has quit IRC03:48
*** fandi has joined #openstack-swift03:51
*** erlon has joined #openstack-swift03:53
*** erlon has quit IRC03:58
*** erlon has joined #openstack-swift04:17
*** abhirc has quit IRC04:44
*** lcurtis has quit IRC04:48
*** zaitcev has quit IRC04:59
openstackgerritParantap Roy proposed openstack/python-swiftclient: SwiftClient object beginning with / or ./  https://review.openstack.org/14879105:11
*** SkyRocknRoll has joined #openstack-swift05:11
*** echevemaster has quit IRC05:18
*** nshaikh has joined #openstack-swift05:48
*** erlon has quit IRC05:50
*** addnull has joined #openstack-swift05:59
*** tellesnobrega_ has quit IRC06:22
*** silor has joined #openstack-swift06:29
*** fandi has quit IRC06:42
*** oomichi has quit IRC06:59
*** dmsimard_away is now known as dmsimard07:13
*** dmsimard is now known as dmsimard_away07:31
openstackgerritParantap Roy proposed openstack/python-swiftclient: SwiftClient object beginning with / or ./  https://review.openstack.org/14879107:40
*** aix has joined #openstack-swift07:45
*** chlong has quit IRC07:46
*** oomichi has joined #openstack-swift07:52
*** ppai has joined #openstack-swift08:24
*** geaaru has joined #openstack-swift08:27
*** oomichi has quit IRC08:28
*** addnull has quit IRC08:30
*** addnull has joined #openstack-swift08:31
*** addnull has joined #openstack-swift08:31
openstackgerritDaisuke Morita proposed openstack/swift: Output logs of policy index  https://review.openstack.org/13699508:32
*** kei_yama has quit IRC08:41
*** km has quit IRC08:43
*** ppai has quit IRC08:52
*** ppai has joined #openstack-swift09:06
*** acoles_away is now known as acoles09:08
*** jistr has joined #openstack-swift09:09
*** addnull has quit IRC09:14
*** addnull has joined #openstack-swift09:15
*** addnull has quit IRC09:18
*** fandi has joined #openstack-swift09:20
openstackgerritDaisuke Morita proposed openstack/swift: Show each policy's information on audited results in recon  https://review.openstack.org/13869709:25
*** SkyRocknRoll has quit IRC09:40
*** jordanP has joined #openstack-swift09:46
*** jistr has quit IRC09:50
*** ppai has quit IRC09:52
*** jistr has joined #openstack-swift09:56
*** ppai has joined #openstack-swift10:05
*** addnull has joined #openstack-swift10:08
*** foexle has joined #openstack-swift10:08
*** jordanP has quit IRC10:09
*** addnull has quit IRC10:11
*** mkerrin has joined #openstack-swift10:12
*** ho has quit IRC10:12
*** tellesnobrega_ has joined #openstack-swift10:15
openstackgerritDaisuke Morita proposed openstack/swift: Show each policy's information on audited results in recon  https://review.openstack.org/13869710:17
*** addnull has joined #openstack-swift10:18
*** silor has quit IRC10:18
*** jordanP has joined #openstack-swift10:23
openstackgerritDonagh McCabe proposed openstack/swift: Add multiple reseller prefixes and composite tokens  https://review.openstack.org/13708610:25
*** addnull has quit IRC10:30
*** addnull has joined #openstack-swift10:32
*** nellysmitt has joined #openstack-swift10:32
*** jordanP has quit IRC10:37
*** aix has quit IRC10:45
*** jordanP has joined #openstack-swift10:50
*** rledisez has joined #openstack-swift10:54
*** tellesnobrega_ has quit IRC11:13
*** aix has joined #openstack-swift11:16
*** tellesnobrega has joined #openstack-swift11:47
*** ppai has quit IRC11:59
*** chlong has joined #openstack-swift12:02
*** saltsa has quit IRC12:02
*** saltsa has joined #openstack-swift12:03
*** ppai has joined #openstack-swift12:13
*** ppai has quit IRC12:22
*** ppai has joined #openstack-swift12:34
*** nellysmitt has quit IRC13:01
*** silor has joined #openstack-swift13:06
*** fandi has quit IRC13:08
*** erlon has joined #openstack-swift13:22
*** abhirc has joined #openstack-swift13:28
*** ppai has quit IRC13:39
*** abhirc has quit IRC13:45
*** bill_az has joined #openstack-swift13:48
*** ppai has joined #openstack-swift13:48
*** abhirc has joined #openstack-swift13:49
*** abhirc has quit IRC13:52
*** mikehn has quit IRC13:54
*** tellesnobrega has quit IRC13:54
*** mikehn has joined #openstack-swift13:57
*** tellesnobrega has joined #openstack-swift14:06
*** ppai has quit IRC14:07
*** mahatic has joined #openstack-swift14:08
*** dmsimard_away is now known as dmsimard14:31
*** lcurtis has joined #openstack-swift14:35
*** lcurtis has quit IRC14:35
*** dmsimard is now known as dmsimard_away14:36
*** lihkin has joined #openstack-swift14:37
*** nshaikh has quit IRC14:47
*** addnull has quit IRC14:50
*** nellysmitt has joined #openstack-swift15:02
*** zaitcev has joined #openstack-swift15:04
*** ChanServ sets mode: +v zaitcev15:04
*** nellysmitt has quit IRC15:06
*** jasondotstar has joined #openstack-swift15:08
openstackgerritTakashi Kajinami proposed openstack/swift: Add process name checking into swift-init  https://review.openstack.org/11620315:08
*** jasondotstar has quit IRC15:08
*** jasondotstar has joined #openstack-swift15:09
*** dmsimard_away is now known as dmsimard15:11
*** lpabon has joined #openstack-swift15:13
*** tdasilva has joined #openstack-swift15:13
*** abhirc has joined #openstack-swift15:22
*** jasondotstar has quit IRC15:38
*** lihkin has quit IRC15:41
*** lihkin has joined #openstack-swift15:43
openstackgerritMahati proposed openstack/swift: Implement OPTIONS verb for storage nodes.  https://review.openstack.org/14010315:52
*** fandi has joined #openstack-swift15:57
*** lcurtis has joined #openstack-swift15:57
*** jasondotstar has joined #openstack-swift16:01
*** lihkin has quit IRC16:08
*** lihkin has joined #openstack-swift16:09
*** briancurtin has joined #openstack-swift16:14
*** abhirc has quit IRC16:25
*** foexle has quit IRC16:26
*** jasondotstar has quit IRC16:27
*** abhirc has joined #openstack-swift16:33
notmynamegood morning16:34
openstackgerritpaul luse proposed openstack/swift: Merge master to feature/ec  https://review.openstack.org/14898316:39
peluseclayg, torgomatic :  FYI the merge above resolves a conflict between a few of your patches... please take a quick look (not urgent though)16:42
pelusemorning!16:42
*** nellysmitt has joined #openstack-swift16:48
*** lcurtis has quit IRC16:51
*** lcurtis has joined #openstack-swift16:57
notmynamehttps://wiki.openstack.org/wiki/Swift/PriorityReviews is updated with a section for the next release. just a few things there (I'd love to see them land soon)16:57
notmynameanyone have anything for the meeting today? that ^ is all I had16:58
notmynameoh, and the hackathon (link in topic)16:58
*** tongli has joined #openstack-swift17:03
notmynamemahatic: around?17:06
*** atan8 has joined #openstack-swift17:09
*** atan8 has quit IRC17:10
*** atan8 has joined #openstack-swift17:11
*** rledisez has quit IRC17:15
*** mjfork has joined #openstack-swift17:17
mjforkI have a node in my cluster where I reduced all of the drive weights by 20% relative to the other nodes, but they continue to show the same or higher capacity used than their higher-weighted counterparts.  Since dropping we have completed multiple replication cycles.  Any suggestions on where to start looking?17:19
mahaticnotmyname, sorry!17:20
mahaticnotmyname, around now17:20
mahaticnotmyname, I was cooking and lost track of time :P17:20
notmynamemjfork: no worries. what were you cooking?17:20
notmynamemjfork: what version of swift are you running?17:21
mahaticnotmyname, just some bread and eggs17:21
notmynamemahatic:  no worries. what were you cooking?  <-- now with the right nick :-)17:21
mjfork1.12.017:21
mahaticnotmyname, lol yes17:22
mahaticnotmyname, did you happen to take a look at the patch? I updated it for the rest of the two containers17:24
notmynamemahatic: since my review yesterday?17:27
notmynamelooking now17:27
mahaticyeah17:27
*** echevemaster has joined #openstack-swift17:27
mahatici mean rest of the two servers17:30
notmynameyup. looks good17:30
notmynamegood catch by ppai17:30
mahaticyeah!17:30
notmynamemahatic: ok, since this one looks ready to go, then the next step (while you're waiting for this to be reviewed and land) is to work on the small patch to add the server type header17:31
notmynamemahatic: how do you think that should look?17:31
mahaticI also have a question on that, in the test - test_call_incorrect_replication_method, if I replace the obj_methods with 'REPLICATE'17:31
mahaticnotmyname, obj/server.py returns 200, but account and container return 400 - bad request. It's a value error and I think it is because of the input that is being passed to REPLICATE method17:33
notmynamehmm17:33
mahaticam i correct?17:33
notmynamelet me check17:33
mjforknotmyname: just realized i didn't tag you in my answer - we are running 1.12.0 (I know...back level)17:34
notmynamemjfork: I saw. I didn't have an immediate answer. sorry :-)17:34
notmynamemjfork: how "balanced" are your failure domains?17:35
notmynamemjfork: ie looking at total available raw capacity in each region/zone/server, how even is it?17:35
mjforknotmyname: very unbalanced, we are gradually adding 2x the number of nodes (about 20% in at this point)17:36
notmynamemjfork: ah17:36
notmynamemahatic: ok, just to talk through it...17:41
notmynamemahatic: if you set up the server to be a replication server, then give a REPLICATE verb. the object server gives 200 and the account/container give 40017:42
notmynameright?17:42
notmynamemjfork: that's ... not unexpected. as in, I'm not surprised to hear that answer, and there's been some work recently to fix some of the data placement when there are very uneven failure domains17:43
mahaticnotmyname, yeah, in the current scenario itself - replication:true, so if I go ahead and replace the verbs with REPLICATE, that's what I get (the result you just mentioned)17:43
notmynamemjfork: note I'm not saying that it's good. just that you aren't seeing a "special snowflake" situation. probably17:43
mahaticreplication_server:true17:44
*** jordanP has quit IRC17:44
*** jasondotstar has joined #openstack-swift17:44
mjforknotmyname: thanks.  what i expected.17:44
notmynamemjfork: you're on the east coast of the US, right?17:45
mjforknotmyname: yes17:45
notmynamemjfork: ok. mahatic is in India, so her timezone is way off from us. let me finish working with her, and then I can help you out (more details or whatever you need)17:46
mjforknotmyname: ok, works for me. thanks.17:46
notmynamemahatic: I'm looking at the differences in implementation17:47
mahaticnotmyname, sure17:47
*** gvernik has joined #openstack-swift17:53
openstackgerritDonagh McCabe proposed openstack/swift-specs: Minor updates to composite token spec  https://review.openstack.org/13877117:53
mahaticnotmyname, mjfork thanks for working out the timings!17:53
notmynamemahatic: what you're seeing makes sense, I was just reminding myself of the "why"17:53
*** lihkin has quit IRC17:54
mahaticnotmyname, hmm, json.load(req.environ['wsgi.input']) where wsgi.input = stringIO() doesn't go right I guess17:55
notmynamemahatic: so the object server expects /drive/part/suffix/policy and the account+container servers expects /drive/partition/hash17:55
*** lihkin has joined #openstack-swift17:56
mahaticyup17:56
notmynameand since the test has "/sdb1/p/a/c" for all of the test paths, the account+container servers fail with 400 because of the bad path17:56
*** jistr has quit IRC17:56
notmynamemahatic: so, technically, the requests you're building in the tests for account+container are wrong. however, what you're testing is that the other verbs respond with 405. so it still tests the right thing17:57
notmynamemahatic: does that make sense? do you agree?17:57
mahaticnotmyname, yes, it makes sense. Why the existing is working. But not the sdb1/p/a/c part. If it's a bad path, why is it not affecting the object server?17:57
notmynamemahatic: because the "is this a valid verb" check happens first, before the "is this a valid path" check17:58
mahaticexisting test*17:58
notmynamemahatic: the path parsing is done _in_ the verb implementation17:58
mahaticnotmyname, correct, so it should fail for object server too, correct?17:59
mahaticam I missing something?18:00
notmynamemahatic: no, because the fake path you have happens to match the pattern the object server's REPLICATE verb expects, and the implementation of the object's REPLICATE verb ends up not failing if the given thing referenced on the path doesn't exist18:01
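The ordering notmyname describes can be modeled in a few lines. This is a toy, not Swift's actual code (the class, paths, and status handling are all illustrative): the allowed-verb check runs first, so a disallowed verb gets 405 no matter how bogus the path is, while path validation only happens inside the verb implementation and surfaces as 400.

```python
# Toy model of "verb check happens before path parsing"
class ToyServer:
    allowed_methods = {'REPLICATE'}  # i.e. replication_server = true

    def handle(self, method, path):
        if method not in self.allowed_methods:
            return 405                       # verb check happens first
        try:
            # account/container-style path parsing, done *inside* the verb
            drive, part, rest = path.strip('/').split('/', 2)
        except ValueError:
            return 400                       # bad path only caught here
        return 200

srv = ToyServer()
assert srv.handle('GET', '/sdb1/p/a/c') == 405        # wrong verb -> 405
assert srv.handle('REPLICATE', '/sdb1/p/a/c') == 200  # path happens to parse
assert srv.handle('REPLICATE', '/sdb1') == 400        # too few path segments
```

This matches what mahatic observed: the test's fake path "/sdb1/p/a/c" happens to satisfy the object server's expected pattern, so only the account/container servers (which also expect a JSON body) reject it.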
mahaticnotmyname, okay18:02
*** tongli has quit IRC18:02
mahaticnotmyname, so the existing test does make the correct check, should I write one more test each for account + container that contains the correct path?18:03
notmynameah. the account+container servers are expecting json in the request body18:05
mahaticyes and hence I thought it was a ValueError18:05
notmynamemahatic: well, maybe. but that would be a different test. right now, your test checks that other verbs respond with 405 if replication server is true18:06
notmynameso that's a good test18:06
mahaticokay18:06
notmynamebut what you're talking about is a different check that the replicate verb does something different18:06
notmyname(I'm guessing there's already some sort of test for that)18:06
*** acoles is now known as acoles_away18:07
mahatichmm18:07
notmynamehmmm indeed18:08
notmynameinitial check says "no"18:08
mahaticah18:08
mahaticand another check says no too18:11
notmynamelooks like there are several tests on the object server18:13
notmynameso at least there's that :-/18:13
mahaticyup18:13
notmynamemahatic: ok, so it seems that there is a gap there18:14
notmynameand yup. completely confirmed. unittests never execute those lines in the container server (so says the coverage report)18:14
mahaticah18:15
mahaticI see18:15
notmynamemahatic: however, while important, it's orthogonal to the OPTIONS/recon stuff you're working on18:15
notmynamebut thanks for pointing it out :-)18:15
mahaticso it ends up being a different patch/work altogether?18:16
mahaticokay! no problem :)18:17
notmynameya, I think so18:17
notmynamemahatic: what are your next steps? do you know yet?18:18
mahaticnotmyname, so getting back to the existing patch, server type header?18:18
notmynameyes18:18
mahaticnotmyname, er I don't understand what it is about18:18
mahaticwhere should I make a change? any reference?18:19
notmynamemahatic: ok, at least on the OPTIONS response (but maybe on all of them--but that might be harder), there should be a header returned saying what kind of server it is18:19
notmynamemahatic: so there are 2 parts: 1) set the value and 2) return the value18:19
*** geaaru has quit IRC18:20
mahaticnotmyname, okay18:20
notmynamemahatic: setting is easy: have a default in the base class and then set it in each child class (ie a class attribute)18:20
notmynamemahatic: for returning it, it should be trivially simple to set the header on the OPTIONS response. you can find many many places in the code on how to set headers on a response, if you don't remember how to do it off the top of your head18:21
notmynamemahatic: but the question remains as to what header to set (ie the header name)18:21
mahaticnotmyname, okay. don't we already have something like X-Server-Type?18:22
notmynamemahatic: so to figure that out, I'd like you to do some research and find if there is a standard header (eg defined in an RFC) that we should use18:22
notmynamewe don't have anything yet18:22
mahaticokay18:22
notmynameso we should use something standard if there is something appropriate18:22
notmynamemaybe "Server". I remember something about that somewhere, but I don't remember details18:23
mahaticokay18:23
mahaticwill look into that18:23
*** atan8 has quit IRC18:23
notmynamemahatic: if you don't find anything standard that we should use, then you get to invent one. eg X-Server-Type or X-Swift-Server or X-Mahatis-Header or whatever18:23
mahaticnotmyname, :D yeah, last one sounds cool, so i'll go with that ;)18:24
notmynamelol18:24
mahaticbefore whatever that is :D18:24
notmynamepoint is, you set the header and then, finally (next step after this), you read that header with the recon tool18:24
*** jasondotstar has quit IRC18:24
notmynamemahatic: now for extra bonus points, the header could be returned with every response from the server. but figure out just doing the OPTIONS response first18:25
mahaticoh okay18:25
notmynamemahatic: make sense? does that give you enough to go on for now?18:25
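The two steps notmyname outlines (set a default as a class attribute in the base class, override it per child, and return it as a header on the OPTIONS response) can be sketched as below. The names `server_type` and `X-Backend-Server-Type` are placeholders, not Swift's final choice; at this point in the discussion the actual header name was still to be researched.

```python
# Sketch of the class-attribute approach, with placeholder names
class BaseStorageServer:
    server_type = 'Base'  # default, overridden by each child class

    def OPTIONS(self):
        # response headers are just a dict set on the response object;
        # X-Backend-Server-Type is a hypothetical header name
        return {'Allow': 'OPTIONS',
                'X-Backend-Server-Type': self.server_type}

class ObjectServer(BaseStorageServer):
    server_type = 'object-server'

class ContainerServer(BaseStorageServer):
    server_type = 'container-server'

assert ObjectServer().OPTIONS()['X-Backend-Server-Type'] == 'object-server'
assert ContainerServer().OPTIONS()['X-Backend-Server-Type'] == 'container-server'
```

The later "bonus points" step would attach the same header to every response rather than only OPTIONS, and the recon tool would then read it back.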
*** Nadeem has joined #openstack-swift18:25
mahaticnotmyname, sure, I believe so. I'll do a quick check to confirm I understood things, or I'll post some questions here18:25
notmynamemahatic: great!18:27
mahaticnotmyname, I see that x-delete-at or X-Backend-Timestamp kind of headers being set in the class, are they all defined or created in some config file?18:28
notmynamewhere?18:29
mahaticin container/server.py -> gen_resp_headers method ->18:30
mahaticX-Backend-Timestamp, X-Backend-PUT-Timestamp etc are set18:31
notmynameya. doesn't look like that's used for every response, but that's similar to how it'd be done for every response18:32
mahaticno, I'm not saying it's used for every response. I wanted to know where are they all initially defined18:32
*** nshaikh has joined #openstack-swift18:33
mahaticthey are not standard headers if I'm not wrong? So aren't they defined/created somewhere?18:33
notmynamethe only way headers are "defined" is that they get added to a headers dictionary somewhere. so no there isn't a formal "here's how to define a header" thing anywhere.18:34
notmynamebut on the other hand, yes, that may be where those are used and set (and thus defined) for that server18:35
notmynamedoes that make sense?18:35
mahaticyeah where do I find that headers dictionary? I'll just have to navigate through the code and find out?18:36
mahaticnotmyname, that's what I meant to find out. If there is any common place for this sort of a dictionary18:38
notmynameah18:39
notmynamemaybe. I mean, that would be nice. probably not though18:39
notmynamethe gen_resp_headers() is probably one of the closest things18:39
mahaticokay, no problem, will look through18:39
*** annegent_ has joined #openstack-swift18:40
mahaticokay18:40
notmynamemjfork: still around? I've got about 20 minutes now18:41
mahaticnotmyname, thank you for the inputs. Will look through and post back any questions18:42
notmynamemahatic: you're welcome18:42
*** annegent_ has quit IRC18:52
*** acoles_away is now known as acoles18:54
notmynamegood drive info from backblaze. good info when looking at selecting drives for swift too https://www.backblaze.com/blog/best-hard-drive/18:54
ahalesome shocking seagate numbers !18:55
*** cutforth has joined #openstack-swift18:56
notmynameya. looks like just the 3TB drives18:56
notmynamedon't buy those ;-)18:56
notmynameinteresting summary at the bottom: 4TB drives are good and based on price about equal18:57
notmynameswift meeting time19:00
*** lihkin has quit IRC19:02
*** aix has quit IRC19:04
mattoliverauWell that was short and sweet. I'm going back to bed then :) cya all in an hour or two.19:15
notmyname:-)19:16
claygnotmyname: if mjfork is on 1.12 he's in the exact situation as cschwede that caused us to let weights fight dispersion when a tier has fewer failure domains than replicas - changing the weight on old code had zero effect on partition placement if you have fewer (or equal) failure domains at a tier than replicas in that tier19:17
*** acoles is now known as acoles_away19:17
notmynameright19:17
notmynameand thanks for the detail19:17
* notmyname steps out for an early lunch19:22
*** mahatic has quit IRC19:28
*** silor has quit IRC19:37
*** annegent_ has joined #openstack-swift19:39
*** annegent_ has quit IRC19:44
*** fifieldt__ has joined #openstack-swift19:53
*** jrichli has joined #openstack-swift19:55
*** fifieldt_ has quit IRC19:56
*** pberis has quit IRC19:57
*** erlon has quit IRC20:00
*** jasondotstar has joined #openstack-swift20:04
*** jasondotstar has quit IRC20:09
*** gvernik has quit IRC20:18
*** nshaikh has left #openstack-swift20:24
*** erlon has joined #openstack-swift20:28
notmynamenew office space means a whole new set of lunch options20:29
*** fandi has quit IRC20:29
occupantI'm about to move into office space that's in a mall, so there'll be gigantic food court directly underneath me.20:30
notmynamesbarro every day? ;-)20:30
occupantit's a westfield mall, so the food court is pretty decent.20:31
notmynameoh, nice20:31
notmynameoccupant: how's the sound isolation? I'd expect that to be pretty noisy20:32
occupantit's several floors above the actual mall, so just like any normal building. my employer is providing the noise by ripping out all the cubes and replacing them with a bullshit open plan.20:34
notmynameugh20:34
occupantbecause it's real fucking "collaborative" when everyone needs to have noise cancelling headphones strapped to their head just so they can hear themselves think20:34
occupantI'm pretty sure it just started out as a bullshit ex post facto justification to cram a bunch of people into a small space on cheap ikea desks20:35
occupantbut people accidentally started believing it for real20:35
notmynameya, I think that's pretty common. cram a lot of people in and realize it's relatively cheap and then justify it20:36
*** nellysmitt has quit IRC20:37
*** nellysmitt has joined #openstack-swift20:38
openstackgerritIan Cordasco proposed openstack/python-swiftclient: Release connection after consuming the content  https://review.openstack.org/14904320:39
*** annegent_ has joined #openstack-swift20:40
*** sigmavirus24 has joined #openstack-swift20:40
sigmavirus24Hey everyone, I just submitted https://review.openstack.org/#/c/149043/ but I was wondering if anyone could provide some guidance on where the best place to add a test for that would be. Thanks!20:41
*** mjfork has quit IRC20:44
*** annegent_ has quit IRC20:45
*** bpap has joined #openstack-swift20:47
*** fandi has joined #openstack-swift20:48
*** fandi has quit IRC20:53
*** fandi has joined #openstack-swift20:53
*** omame has quit IRC20:58
*** fandi has quit IRC20:58
*** omame has joined #openstack-swift20:58
dmsimardSeeing a lot of ChunkWriteTimeout by swift-proxy from image uploads through glance on a virtual CI setup. I'm not expecting everything to be super performant but is there anything I should be especially looking at ?20:58
*** fandi has joined #openstack-swift20:58
*** tdasilva has quit IRC21:07
zaitcevThe fundamental problem, as I understand, is that Swift API is HTTP and clients time out eventually. They cannot know if servers are loaded or not. So, say they close after a minute, which is actually a lot for libraries that support browsers.21:08
zaitcevIn that minute, the proxy must time out duff nodes and retry.21:09
zaitcevTherefore, nodes have to have a latency that's less than a minute. In that there are DB ops with sqlite.21:09
zaitcevLong story short, you start seeing the above when service times on disks start climbing.21:10
zaitcevIf utilization breaks 80% and elevator starts working, you see queue going into 10 sec, and it's game over21:11
zaitcevSo Swift nodes cannot live on overloaded CI machines. They just can't.21:11
zaitcevnotmyname et.al. should correct my understanding here, but generally it's what I see from attempts to place Swift on busy VM hosts.21:12
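Zaitcev's point about utilization has a standard back-of-envelope form. This is an illustration of the shape of the curve under M/M/1 queueing assumptions, not a measurement of any real cluster, and the 25 ms service time is assumed: mean time in system is S / (1 - rho) for service time S and utilization rho, which explodes as rho approaches 1.

```python
# M/M/1 back-of-envelope: latency vs. disk utilization
service_time = 0.025  # seconds per request, assumed

def time_in_system(utilization):
    # mean time a request spends queued + in service
    return service_time / (1.0 - utilization)

assert round(time_in_system(0.80), 3) == 0.125   # 80% busy: still tolerable
assert round(time_in_system(0.9975), 2) == 10.0  # "queue going into 10 sec"
```

The cliff between those two numbers is why Swift nodes on oversubscribed CI hosts hit ChunkWriteTimeout long before the disks look "full".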
*** tdasilva has joined #openstack-swift21:20
*** jeblair has joined #openstack-swift21:22
*** fandi has quit IRC21:26
*** tdasilva has quit IRC21:28
openstackgerritIan Cordasco proposed openstack/python-swiftclient: Release connection after consuming the content  https://review.openstack.org/14904321:39
*** annegent_ has joined #openstack-swift21:39
*** annegent_ has quit IRC21:44
*** david-lyle has joined #openstack-swift21:46
*** mjfork has joined #openstack-swift21:47
*** fandi has joined #openstack-swift21:47
*** sigmavirus24 has left #openstack-swift21:55
mjforkclayg: sorry for delay ... we are on 1.12 and we have 20+ zones and only 3 replicas.  Does that mean the issue you mentioned would not apply?21:56
*** chlong has quit IRC22:05
*** nellysmitt has quit IRC22:08
*** nellysmitt has joined #openstack-swift22:09
*** nellysmitt has quit IRC22:09
*** wasmum has left #openstack-swift22:10
*** david-lyle has quit IRC22:11
*** david-lyle has joined #openstack-swift22:11
clayghrmmm.... yeah with >3 zones in the same region 1.12 should acknowledge the lower weight in one of the zones22:14
claygmjfork: ^22:15
claygmjfork: maybe i misunderstood your description of the issue - you reduced the weight of the devices in one of the zones and the part count went down but you're not seeing replication move those parts off and free up disk space?22:15
*** david-lyle has quit IRC22:15
dmsimardzaitcev: Just saw your reply - thanks22:20
brianclineclayg: yeah, they were reduced in a zone by about 20% and it didn't seem to free up much of any space on devices within that zone22:22
claygbriancline: did parts move in balance, and just replication was slow/not-working?  Or is rebalance not moving parts off the de-weighted devices?22:25
claygbriancline: mjfork: those two problems are kind of different - swift-ring-builder something.builder should tell us how many parts are assigned to the devices - it should be ~20% less than the other devices (a factor of weight)22:25
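The check clayg suggests can be run with the ring-builder CLI (the builder filename, device search value `d42`, and weights are made-up examples): compare per-device partition counts before and after the weight change and rebalance, to separate "the ring never moved parts" from "replication hasn't caught up".

```shell
swift-ring-builder object.builder                      # list per-device partition counts
swift-ring-builder object.builder set_weight d42 80.0  # ~20% below peers at 100.0
swift-ring-builder object.builder rebalance
swift-ring-builder object.builder                      # d42's part count should now be ~20% lower
```

If the part counts did drop but disk usage didn't, the problem is replication lag or cleanup, not ring placement.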
brianclineclayg: checking on the part counts for those, one moment22:30
clayghey hey!  gil's back!  https://review.openstack.org/#/c/143791/22:30
*** tellesnobrega_ has joined #openstack-swift22:32
notmynameclayg: yup. he was chatting in here earlier22:32
*** annegent_ has joined #openstack-swift22:39
*** erlon has quit IRC22:40
claygwhat the fuck is up with testr!?  I can't even tell what test is failing!?22:41
*** tellesnobrega_ has quit IRC22:41
claygwhy is there so much output?22:41
portanteclayg: not to be of any help here, but testr likes to run tests and tell you all about them!22:42
*** annegent_ has quit IRC22:44
claygapparently there's like a follow on command that knows how to make sense of the output maybe?22:45
clayg'\n'.join("I hate python-swiftclient's test suite." for i in range(100))22:49
*** tellesnobrega_ has joined #openstack-swift22:49
*** mjfork has quit IRC22:51
claygtime tox -e py27 tests.unit.test_service.TestServiceUtils.test_process_options_defaults ... Ran 1 tests in 0.044s (-0.002s) ... real1m3.785s22:53
clayg^ how can we be expected to live like this!?22:53
clayga'ignt nobody got time for that!?22:54
notmynameclayg: http://www.quickmeme.com/img/ef/ef5c72d07e23f4e1423e708c874cb54114f9f89beae1ce019c35a6066604d43a.jpg22:55
claygthis better be a trick to make tox run faster22:55
clayg... i'll give you half credit22:55
notmyname:-)22:55
*** david-lyle has joined #openstack-swift22:58
*** tellesnobrega_ has quit IRC23:00
claygahhhhh..... real   0m4.492s23:00
claygah..... even better!  real   0m0.779s23:01
claygpython -m testtools.run <test> was the answer - "thanks" https://wiki.openstack.org/wiki/Testr#FAQ23:01
peluseclayg, ahh, handy.. I rarely run just one, but when I do I've been editing tox.ini -- sheesh that's much easier.23:03
* peluse takes note: always check the FAQs23:03
claygpeluse: that was more like my first google result for 'god i hate you testr!' or something similar23:04
peluseclayg, I say that kinda shit to siri all the time but it never seems to get me results23:05
*** david-lyle has quit IRC23:09
clayggd, i'm sure there is some combination of environment variables that will cause this test to pass...23:12
claygahhh unset ST_AUTH23:12
claygthat doesn't make any sense - we already patch that dictionary23:14
*** jrichli has quit IRC23:20
*** tdasilva has joined #openstack-swift23:23
*** km has joined #openstack-swift23:25
*** chlong has joined #openstack-swift23:33
*** IRTermite has quit IRC23:37
*** annegent_ has joined #openstack-swift23:39
*** echevemaster has quit IRC23:43
*** tdasilva has quit IRC23:44
*** echevemaster has joined #openstack-swift23:44
*** annegent_ has quit IRC23:44
claygbah, i don't know an easy way to patch the environ before you import the module23:50
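One common workaround for clayg's problem (an assumption about a reasonable test setup, not a quote from swiftclient's tests): patch `os.environ` with `mock.patch.dict` first, then import (or `importlib.reload`) the module inside the patch so its import-time reads see the patched value. The `envmod` module here is hypothetical, created on the fly just to demonstrate.

```python
import importlib
import os
import sys
import tempfile
from unittest import mock

# Hypothetical module that reads ST_AUTH at import time -- the awkward case
d = tempfile.mkdtemp()
with open(os.path.join(d, 'envmod.py'), 'w') as f:
    f.write("import os\nAUTH_URL = os.environ.get('ST_AUTH', 'unset')\n")
sys.path.insert(0, d)

with mock.patch.dict(os.environ, {'ST_AUTH': 'http://auth.example'}):
    import envmod
    importlib.reload(envmod)  # ensure import-time code runs under the patch

assert envmod.AUTH_URL == 'http://auth.example'
```

The `reload` guards against the module having been imported earlier (e.g. by another test) with the unpatched environment; `patch.dict` restores the original environ on exit either way.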
*** oomichi_ has joined #openstack-swift23:50
*** dmsimard is now known as dmsimard_away23:57

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!