Tuesday, 2013-08-06

*** dina_belova has quit IRC00:00
*** rfolco has quit IRC00:00
clarkbjeblair: mordred: do you want to review https://review.openstack.org/#/c/39992/1 it checks that the IPV6 flag is set in the launch node env files before checking that ipv6 works, to accommodate providers without ipv600:00
openstackgerritA change was merged to openstack-infra/config: pbx: update SIP config to help deal with NAT issues  https://review.openstack.org/3961600:03
bodepdclarkb: let me get back to a recreat...00:03
jeblairclarkb: has the flag been set in the rc files?00:03
clarkbjeblair: yup, fungi set them according to IRC and the first comment on that change00:04
jeblairclarkb: ah yes00:04
jeblairjust read that00:04
clarkbI missed it too initially :)00:04
openstackgerritA change was merged to openstack-infra/config: More launch improvements  https://review.openstack.org/3999200:06
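The guard discussed above (review 39992) can be sketched as a small shell check. This is a hypothetical reconstruction, not the actual launch-node code; the `IPV6` variable name is taken from the conversation, and the probe itself is elided:

```shell
#!/bin/sh
# Hypothetical sketch of the idea behind review 39992: only attempt the
# IPv6 reachability test when the provider's env file set the flag, so
# launches on providers without IPv6 don't fail spuriously.
IPV6="${IPV6:-false}"   # normally sourced from the provider's rc/env file

check_ipv6() {
    if [ "$IPV6" != "true" ]; then
        echo "ipv6 check skipped (provider has no ipv6)"
        return 0
    fi
    # the real script would probe connectivity here (e.g. ping6 the node)
    echo "ipv6 check would run"
}

check_ipv6
```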
*** gyee has quit IRC00:06
*** pabelanger has quit IRC00:11
*** pcrews has joined #openstack-infra00:12
*** nijaba_ has joined #openstack-infra00:12
*** nijaba has quit IRC00:13
* fungi needs to use bold+underline font decoration more00:13
fungigerrit wishlist item: support <blink> tag in review comments ;)00:13
clarkbfungi: I have colors turned off in my irc client config. I am glad now :)00:14
fungime too actually. too many years on monochrome displays to tolerate too much color assault00:14
notmynameyou say that, but do you use syntax coloring in source editors?00:15
*** sarob has joined #openstack-infra00:16
clarkbnotmyname: that's different. vim doesn't give me rainbow text of random things00:16
clarkbrainbow text in IRC is quite annoying00:16
fungitame, subdued highlighting, yes. controlled/structured colorization is good for me. random user-selected color highlighting not so much00:16
lifelesspaint all the things00:17
fungibikesheds first!00:17
notmynameheh. kinda like how I have auto inlined images turned off in my irc client (who would do that?!), but I occasionally do like to see an animated gif. (openstackreactions notwithstanding)00:17
fungimy irc client wouldn't do inlined images unless aalib counts00:18
jeblairopenstackreactions are animated?  i don't do animated gifs in my browser, so i guess i've been "missing out".00:18
clarkbjeblair: yes :)00:18
notmynamejeblair: I wouldn't say you've been "missing" it00:18
jeblairi am so sad.00:18
clarkba couple of them are really good. I like the crash test dummy one >_>00:19
notmynamejeblair: also, if you want to stop using lynx, I hear netscape has a new browser you can try out00:19
clarkbpleia2: I have left comments on some of your cgit related changes. Let me know if you have any questions because at least one of them is sad panda making00:19
jeblairnotmyname: it's harder to have it email me web pages.00:20
*** sarob has quit IRC00:20
pleia2clarkb: the name one I knew would be trouble, trying to figure out how to do it in a way that makes lint happy (I really don't know yet)00:21
notmynamejeblair: don't worry. all projects expand in scope until they include email.00:21
mgagnepleia2: see inline comment, I'm sorry ^^'00:22
jeblairnotmyname: that has proven to be true even for openstack-infra.00:22
funginotmyname: s/netscape/ncsa mosaic/00:22
pleia2mgagne :)00:22
*** wu_wenxiang has quit IRC00:23
notmynamefungi: well, yeah, if you want the old stuff. but navigator has support for HTML4 and even the marquee tag. I think it may even be better than that newfangled IE400:23
notmynamehard to believe that the netscape IPO (ie start of first bubble) was 19 years ago00:24
fungimy first impression of mosaic was "inline images will never catch on"00:24
*** jinkoo has joined #openstack-infra00:24
openstackgerritA change was merged to openstack-infra/devstack-gate: Use Jenkins credentials store if specified  https://review.openstack.org/4031000:25
notmynamefungi: "you mean like anyone can put an image on my screen? have you seen the kind of people who are on the internet?"00:25
fungibasically. and then it only got worse00:25
notmynamefungi: when life hands you lemons, throw a party ;-)00:26
* fungi shudders00:26
clarkbNow I have Cave Johnson talking in my head. Thankfully that is much much better than the alternative00:28
fungij.k. simmons was such a great choice of voice actor. i too hear him in my head all the time. telling me to do things. science things00:29
clarkbWhat was the line "If life gives you lemons, make hand grenades!"?00:30
fungiit was far more verbose00:30
fungia full on diatribe00:31
clarkbya just found it on the HL2 wiki00:31
clarkb"All right, I've been thinking. When life gives you lemons, don't make lemonade. Make life take the lemons back! Get mad! I don't want your damn lemons, what am I supposed to do with these? Demand to see life's manager! Make life rue the day it thought it could give Cave Johnson lemons! Do you know who I am? I'm the man who's gonna burn your house down! With the lemons! I'm gonna get my engineers to00:31
clarkbinvent a combustible lemon that burns your house down!"00:31
clarkbso much win00:32
clarkbfungi: if you haven't seen kerbal space program you should play the demo00:32
fungii have not been finding sufficient time for video games of late. but i do mean to check that one out00:32
clarkbI technically do not have time either, but my last Mun attempt ended with me on a Kerbin escape vector. It was awesome00:33
*** rcleere has joined #openstack-infra00:36
*** rcleere has quit IRC00:37
clarkbI feel like I need to do something that isn't logstash or code review related. Anything in particular people would like to see get done?00:37
*** jinkoo_ has joined #openstack-infra00:43
*** jinkoo has quit IRC00:44
*** jinkoo_ is now known as jinkoo00:44
lifelessclarkb: do you really want to ask that :>00:47
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Prepare to test git-review  https://review.openstack.org/4031900:48
fungiclarkb: ^00:48
fungiclarkb: once we land that, we can try the pending integration tests live00:50
clarkblifeless: probably not, but I really feel like I need a change of scenery for tomorrow00:50
clarkbfungi: reviewing00:50
lifelessclarkb: so something that would be cool00:50
lifelessclarkb: would be a zuul element for dib/tripleo - as part of the whole 'and you can CI as a downstream easily.'00:50
lifelessclarkb: clearly not on the -infra roadmap, but if you just want a change of pace...00:50
clarkbfungi: reviewed00:51
clarkblifeless: dib is something I have been meaning to get into. I will probably start with mordred's kexec awesomesauce for d-g but can look at bigger things too00:52
fungioops, forgot the py33 jobs aren't part of the job group00:52
mgagneclarkb: try updating all puppet modules to the latest version :D00:52
clarkbmgagne: uh that isn't a change of pace that is a death march :)00:52
mgagneclarkb: well, now that I think of it... sorry ^^'00:53
*** jinkoo has quit IRC00:53
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Prepare to test git-review  https://review.openstack.org/4031900:53
fungishould be better ^00:53
jeblairclarkb: there are also 153 open openstack-ci bugs :)00:54
clarkbjeblair: ya, I was going to resort to looking at the list if no one had a pressing thing00:54
*** jinkoo has joined #openstack-infra00:54
jog0is it possible to get py33 jobs for all clients?00:54
jog0as non gating00:55
jog0since those are some of the early targets for py33 compat00:55
jeblairjenkins01 and jenkins02 are up again00:55
clarkbyes, we may want more slaves. Not sure how much contention exists for those yet00:55
*** dina_belova has joined #openstack-infra00:56
clarkbjog0: the only concern I have enabling them all like that is whether or not we will see active development to correct the issues in all of them00:56
fungii'm not sure what volume of changes the clients get, but i get the impression it's only a fraction of the changes for server projects00:56
clarkbjog0: maybe we should start with a few that we know will get attention?00:56
jog0clarkb: nova client is getting some00:58
*** jinkoo has quit IRC00:58
jog0I was thinking if we have the py33 tests people may try fixing them00:58
*** ^d has quit IRC00:59
jog0there are some Canonical guys making sure things are py33 compat00:59
jeblairI'm going to simplify the overview tab on the new jenkins servers; it's slow as-is.00:59
clarkbjeblair: ++ I expect with multiple masters jenkins will get even less direct viewership00:59
*** dina_belova has quit IRC01:00
clarkbjog0: did you want to propose the change? it should be pretty straightforward01:02
clarkbedit openstack-infra/config/modules/openstack_project/files/jenkins_job_builder/config/projects.yaml to include gate-{name}-python33 under the job list for each client, then add the new jobs to each client's check tests in modules/openstack_project/files/zuul/layout.yaml01:03
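The two edits clarkb describes would look roughly like this. A hedged sketch only: the client name is an example, and surrounding entries in both files are omitted:

```yaml
# modules/openstack_project/files/jenkins_job_builder/config/projects.yaml
# (sketch: add the py33 job template to a client's job list)
- project:
    name: python-novaclient
    jobs:
      - python-jobs
      - gate-{name}-python33

# modules/openstack_project/files/zuul/layout.yaml
# (sketch: run the expanded job in that client's check pipeline)
  - name: openstack/python-novaclient
    check:
      - gate-python-novaclient-python33
```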
jeblairclarkb: maybe we should add it to python-jobs in jjb?01:03
jeblairthe number of py33 jobs seems to be increasing at a rate that might be useful.01:04
clarkbjeblair: we can do that too01:04
clarkbI asked jd__ to leave it out of the group initially because I wasn't sure what demand would be like01:04
jeblairyeah, i think at the point we're discussing adding it to "all clients" is probably the time to reconsider that01:05
fungiso it might make sense to double the precisepy3k slave count to 8... or do we want to take a wait-and-see approach to contention over those for now?01:07
clarkbfungi: we can probably wait and see on that. You are right in that their patchset load is low01:07
jog0jeblair: https://review.openstack.org/#/dashboard/2401:08
jeblairjog0: nice.  i support this.  it does seem like tests there would be useful.01:08
jog0zul: ^ has been doing py33 compat01:08
jeblairalso, we should have git-review do something better with detached branches.01:09
jog0jeblair: heh yeah01:09
jog0the py33 stuff is mostly low hanging fruit01:09
jog0(fixing wise)01:09
* clarkb whips a change up really quick01:10
zulhmmm?01:11
jog0also a nice email to the ML would help get attention01:11
jog0zul: talking about getting py33 tests for all clients01:12
*** nijaba_ has quit IRC01:12
zuljog0:  ah yeah i was going to do that this week, i had the day off today01:12
*** nijaba has joined #openstack-infra01:12
*** jhesketh_ is now known as jhesketh01:12
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Make the python33 template part of python-jobs  https://review.openstack.org/4032101:13
fungijog0: clarkb: jeblair: ^01:13
clarkbfungi: cool one less thing I need to do in this change :)01:13
jog0zul: think I just talked clarkb / fungi into doing it for you01:13
zuljog0:  yay!01:13
* locke105 is off to blow up some rockets in kerbal space program. :)01:13
fungiwell, i figured the "make py33 standard in the job group" deserved to be a separate change from "add to all clients"01:14
zulbetter make it non-voting though01:14
fungizul: i assumed it would be for those, yes01:14
zulfungi:  ack01:14
clarkblocke105: have fun01:15
fungiseparate non-voting settings for each of the clients, and then we can peel those back individually as each one reaches compliance01:15
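The per-client arrangement fungi describes maps onto zuul's layout roughly like this (illustrative sketch; the job names are examples, and `voting: false` is the layout knob for a non-voting job):

```yaml
# zuul layout.yaml sketch: mark each client's py33 job non-voting until
# that client reaches py33 compliance, then remove its entry so the job
# becomes voting again.
jobs:
  - name: gate-python-novaclient-python33
    voting: false
  - name: gate-python-keystoneclient-python33
    voting: false
```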
locke105clarkb: i'm actually pretty good at this game... been playing since version 0.15 or so01:16
zulfungi:  i was going to make sure the tox.ini/requirements are kosher as well, keystoneclient needed a newer oslo.config01:16
locke105they redid all the SAS stuff in the latest update so I have to figure out how to fly again though :(01:16
fungizul: awesome01:17
jeblairclarkb, fungi: devstack-gate updated to target 7 ready nodes from each az on each server (7*3*3=63)01:17
jeblairso all three jenkins should equalize on 21 ready nodes each01:18
clarkbjeblair: perfect01:18
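jeblair's ready-node arithmetic, spelled out (the reading of "server" as a Jenkins master is an assumption from context):

```python
# 7 ready nodes per availability zone, 3 AZs, 3 Jenkins masters
# (jenkins.o.o, jenkins01, jenkins02), shared evenly by gearman.
ready_per_az = 7
azs = 3
masters = 3

total = ready_per_az * azs * masters   # 7*3*3 = 63
per_master = total // masters          # 21 ready nodes per jenkins

print(total, per_master)  # 63 21
```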
openstackgerritSteve Baker proposed a change to openstack-infra/config: Enable pypi jobs for diskimage-builder  https://review.openstack.org/4032201:18
fungijeblair: sounds ideal01:19
jeblairfungi, clarkb, mordred: if you add or edit a slave on the new nodes, be sure to re-use the existing jenkins ssh credential (in the dropdown)01:19
openstackgerritClark Boylan proposed a change to openstack-infra/config: Add python33 tests to all openstack python clients  https://review.openstack.org/4032301:19
jeblairsee https://jenkins02.openstack.org/computer/centos6-2/configure  for an example01:19
clarkbjog0: ^ and that adds the jobs to the clients01:19
clarkbjeblair: so by setting the credential in the config and using it in the jenkins API calls you don't have to create a new one for each host?01:20
jeblairclarkb, fungi: puppet is running on all the jenkins and devstack-launch nodes.01:20
jeblairclarkb: correct; the version of jenkins.o.o created and deleted one for every slave01:20
stevebakerjeblair: hey, could you please take a look at my reply to your comments? https://review.openstack.org/#/c/38226/01:20
jeblairclarkb: the version on jenkins01/02 created but did not delete them, which was obviously problematic.01:20
jeblairstevebaker: sure thing01:21
clarkbjeblair: is there a manual step in there of creating a credential so that launch node can use that?01:21
clarkbzaro: jeblair the ZMQ event publisher plugin just showed up on http://updates.jenkins-ci.org/download/plugins/01:22
jeblairclarkb: sort-of -- yes this once, but it's in an xml file that can be copied into place (i have it in my tarball of boilerplate openstack jenkins config)01:22
jeblairclarkb: it was created on 01, and copied to 02 that way.01:22
*** tianst20 has joined #openstack-infra01:24
*** tian has quit IRC01:24
clarkbdo you plan to have puppet untar that on jenkins masters?01:25
jeblairclarkb: the tarball is for convenience, until the files are in puppet individually.01:26
clarkbgotcha01:26
clarkbok time for me to head home01:26
clarkbmulti master jenkins is very exciting01:26
clarkbI will need to remove my jenkins.o.o pinned tab now :)01:27
*** jrex_laptop has quit IRC01:27
clarkbsdague fwiw the devstack neutron instability affected non neutron too01:29
clarkbif you look at nati's graphite graphs you see they spike together01:29
jeblairstevebaker: i guess i still don't see why stackforge is the right place.  it looks to me like you made an excellent argument that the repo shouldn't exist at all.01:31
jeblair(which i agree with!)01:31
jeblairas a project, we're clearly not averse to hosting deprecated code in the openstack org: https://github.com/openstack/melange01:32
jeblair(though maybe we should be)01:32
*** markmcclain has quit IRC01:32
clarkb++01:32
stevebakerquite probably it won't in the long term, but I thought doing one step at a time would have the best buy-in - otherwise the debates get bogged down in orthogonal issues01:33
*** lcestari has quit IRC01:34
fungiclarkb: i doubt that's neutron/devstack/tempest instability when they spike together. more likely changes which got approved but were broken and failing tests legitimately01:35
clarkbfungi good point01:35
clarkbalso possibly d-g + jenkins unhappyness01:35
*** reed has quit IRC01:35
stevebakerjeblair: the dependencies and timeline for heat-cfn, heat-boto and heat-watch being deleted is different for each one01:35
*** pcrews has quit IRC01:37
jeblairstevebaker: i think it should go into stackforge if it really is not an openstack project at all.  that means, different core group, its own ptl, its own bug tracker, and no support from infra, docs, qa, the tc, etc.01:38
jeblairstevebaker: but if it's part of the heat project, even if it's a part you want to deprecate, it seems like it should go in the openstack org.01:38
*** pcrews has joined #openstack-infra01:38
jeblairstevebaker: or perhaps openstack-dev, if it's just a 'developer tool' as you suggest in your 4th point.01:38
stevebakeropenstack-dev doesn't seem like a great fit either01:39
jeblairhacking, pbr and devstack are all in openstack-dev because they're not part of the finished product, but are used in developing it.01:40
lifelessso the boto stuff is closer to heat-client than to devs-of-heat01:40
lifelessisn't it?01:40
stevebakerat this point it is only useful to heat developers who are debugging cfn api issues01:41
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Prepare to test git-review  https://review.openstack.org/4031901:41
*** UtahDave has quit IRC01:43
*** jrex_laptop has joined #openstack-infra01:44
fungirebased that ^ on the change to add python33 tests to python-jobs01:44
*** pcrews has quit IRC01:45
jeblairstevebaker: i have to run to dinner now; we'll have to continue this later, sorry.01:45
stevebakerjeblair: no problem01:45
*** yaguang has joined #openstack-infra01:45
Alex_GaynorFollowing links to jenkins builds from http://status.openstack.org/zuul/ leads you to pages where the SSL cert isn't right, known?01:49
*** pcrews has joined #openstack-infra01:49
Alex_Gaynorerr, or its just a self signed cert?01:49
clarkbAlex_Gaynor: are they links to jenkins01.o.o or jenkins02.o.o?01:52
Alex_Gaynorclarkb: 0101:52
clarkbAlex_Gaynor: I am guessing jeblair used self signed certs on those new hosts. Probably just missed it01:52
clarkbAlex_Gaynor: we now have multi master jenkins :)01:52
Alex_Gaynorclarkb: but... this means there won't be monotonically increasing job numbers. This is bad for my number fetish.01:53
bodepdclarkb: just recreated01:53
clarkbAlex_Gaynor: it is also bad for our old log dir format, which we thankfully fixed01:53
clarkbAlex_Gaynor: but it is awesome in so many ways01:53
clarkbAlex_Gaynor: zero downtime jenkins upgrades01:54
clarkbAlex_Gaynor: jenkins can scale now01:54
clarkband so on01:54
Alex_GaynorYeah that's cool I guess, but the numbers!01:54
dstufftjenkins… scale..? Im not sure I believe you01:54
clarkbdstufft: you just spin up more :)01:54
Alex_Gaynorclarkb: how's it work, anything in jenkins itself, or does zuul/gerrit just distribute jobs amongst them?01:54
clarkbAlex_Gaynor: zuul speaks gearman and there is a jenkins gearman plugin01:55
clarkbso gearman distributes jobs among them01:55
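The arrangement clarkb describes can be illustrated with a toy dispatcher. This is not zuul's code and uses no gearman library; it just shows why a shared job queue lets multiple masters scale, since any idle worker pulls the next job:

```python
# Toy illustration of gearman-style dispatch: jobs sit in one shared
# queue and each "master" pulls work whenever it is idle, so adding
# masters adds capacity without any central scheduler.
import queue
import threading

jobs = queue.Queue()
for i in range(6):
    jobs.put(f"job-{i}")

completed = []
lock = threading.Lock()

def worker(name):
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return  # queue drained; this master goes idle
        with lock:
            completed.append((name, job))

threads = [threading.Thread(target=worker, args=(f"jenkins{n:02d}",))
           for n in (1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(completed))  # all six jobs ran, split across the two masters
```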
clarkbbodepd: now is the time to try the weird setuptools upgrade stuff01:55
clarkbbodepd: does pip --version work?01:55
bodepdI just did that unintall you recommended.01:56
*** dina_belova has joined #openstack-infra01:56
clarkboh cool01:56
clarkbbodepd: now you need to reinstall and before you do anything else use pip to upgrade setuptools01:56
*** nati_ueno has quit IRC01:57
clarkbthen try puppet again01:58
harlowja_qq, have u guys seen this01:58
harlowja_https://review.openstack.org/#/c/29862/01:58
harlowja_http://logs.openstack.org/62/29862/59/check/gate-grenade-devstack-vm/a0dbf6d : LOST in 11m 01s :-/01:59
harlowja_someone lost it, ha01:59
*** thomasem has joined #openstack-infra01:59
bodepdclarkb: https://gist.github.com/bodepd/616137101:59
clarkbbodepd: weird02:00
clarkbharlowja_: yes, that is a bug in the current build system02:00
harlowja_kk02:00
harlowja_recheck should be ok then?02:00
clarkbharlowja_: I think it means zuul was not updated on the progress of that job within its timeout period02:00
harlowja_kk02:00
clarkbharlowja_: you will want jeblair to confirm that though02:00
harlowja_thx clarkb02:00
clarkband yes a recheck should be fine02:00
harlowja_sounds great02:01
bodepdclarkb: I can re-run puppet and see if anything magical happens...02:01
clarkbbodepd: worth a shot I guess02:01
*** dina_belova has quit IRC02:01
fungiclarkb: the self-signed nature of the jenkinsXX.o.o certs was discussed previously and jeblair settled on not getting separate ca-signed certs for them02:01
fungibut maybe we revisit if people complain02:01
openstackgerritlifeless proposed a change to openstack-dev/hacking: Fix typo in HACKING.rst.  https://review.openstack.org/4032602:01
openstackgerritlifeless proposed a change to openstack-dev/hacking: Add editor files to .gitignore.  https://review.openstack.org/4032702:01
lifelessjog0: I bet those ^ are going to error on H803.02:02
lifelessjog0: any objections to making H803 be ignored in hacking itself ?02:02
clarkbfungi: when was that? I seem to have completely missed it. Or I paid attention and simply don't remember02:02
fungiclarkb: late last week i think? i'll grep my log02:04
clarkboh if it was on Friday then I probably did pay attention but then completely forgot as I had a cold and Friday was not good02:05
*** prad has joined #openstack-infra02:06
fungiclarkb: a couple weeks looks like. set wabac machine for 2013-07-29 16:21:1602:07
fungier, about a week i guess02:07
clarkbI am fine with it. We deemphasizing jenkins itself so shouldn't be a major issue02:09
*** nijaba has quit IRC02:12
mordredgod scrollback02:14
mordredjenkins is a bullshit02:14
mordredhaving more than one jenkins lets us care less about jenkins02:14
openstackgerritJoe Gordon proposed a change to openstack-dev/hacking: Fix typo in HACKING.rst  https://review.openstack.org/4032902:14
clarkbmordred: now tell us how you feel about buildbot02:14
mordredclarkb: hah. not even real code02:15
*** nijaba has joined #openstack-infra02:15
*** nijaba has quit IRC02:15
*** nijaba has joined #openstack-infra02:15
mordredclarkb: total pile of garbage02:15
jog0lifeless: I would object on principle02:15
jog0hacking shouldn't ignore any of its own rules02:15
mordredif you have an error in your config, you find out when the twisted python sends you error logs that tell you about a callback that didn't work for unknown reasons02:15
*** jrex_laptop has quit IRC02:16
* mordred agrees with jog002:16
*** lifeless has quit IRC02:16
clarkbme too02:18
mordredbefore agreeing with jog0, I had lovely drinks on a rooftop02:19
clarkbI know what I can do tomorrow to mix it up. I can fix BREACH02:19
clarkbmordred: I am drinking a beer called "PigWar"02:19
mordredclarkb: your beer is good02:20
clarkbnamed after http://en.wikipedia.org/wiki/Pig_War02:20
*** melwitt has quit IRC02:21
mordredclarkb: goo.gl/5zo84902:23
clarkbmordred: that is much nicer than my apartment02:23
SpamapSWhats the status on the Babel issue?02:25
Alex_Gaynorclarkb: FWIW jenkins02 also seems to have a self-signed cert02:25
clarkbAlex_Gaynor: ya, see above. Apparently jeblair decided not to pay for certs02:25
clarkbSpamapS: I think it was resolved shortly after breaking unless there is a new babel issue02:26
SpamapS  File "/opt/stack/venvs/heat/local/lib/python2.7/site-packages/heat/openstack/common/gettextutils.py", line 34, in <module>02:26
clarkbSpamapS: we pinned to the old version and after that upstream fixed the problem02:26
SpamapS    from babel import localedata02:26
mordredSpamapS: there is a babel issue?02:26
SpamapSImportError: No module named babel02:26
SpamapSclarkb: after pip installing heat in a virtualenv I get that..02:26
mordredSpamapS: I blame evil02:26
SpamapSChecking why now. Just wanted to see if pypi resolved it or if we're all still carrying hacks.02:26
SpamapSmordred: if you're into evil you're a friend of mine02:27
clarkbSpamapS: we may still be carrying the hack and upstream fix didn't fix everything02:27
SpamapSHeat does not have the >=0.9.602:27
clarkblooks like we unpinned the upper bound https://github.com/openstack/requirements/blob/master/global-requirements.txt#L502:27
clarkbSpamapS: you are probably installing 1.0.X02:27
clarkber 1.X02:28
clarkbSpamapS: works locally. `virtualenv venv ; source venv/bin/activate ; pip install babel ; python -> from babel import localedata`02:29
clarkbSpamapS: perhaps babel is not part of your requirements?02:30
clarkbmaybe it is only in test-requirements?02:30
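clarkb's manual `from babel import localedata` check can be done without importing the package at all. A small sketch, not from the discussion above, using the standard library to ask whether a dependency landed in the current environment:

```python
# Sketch: the programmatic version of clarkb's virtualenv check --
# is a given dependency importable here, without side effects?
import importlib.util

def has_module(name):
    """Return True if `name` resolves to an importable module."""
    return importlib.util.find_spec(name) is not None

print(has_module("json"))                 # stdlib, always present
print(has_module("definitely_not_here"))  # a missing dep reports False
```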
*** prad has quit IRC02:36
*** nijaba_ has joined #openstack-infra02:42
*** changbl_ has joined #openstack-infra02:43
*** jhesketh has quit IRC02:46
*** jhesketh has joined #openstack-infra02:46
*** nijaba has quit IRC02:47
*** changbl has quit IRC02:47
*** morganfainberg has quit IRC02:47
*** adalbas has quit IRC02:47
*** morganfainberg has joined #openstack-infra02:47
SpamapSclarkb: it is not, I think a sync was done from oslo without testing heat-manage02:47
*** morganfainberg has quit IRC02:48
*** morganfainberg has joined #openstack-infra02:48
*** pabelanger has joined #openstack-infra02:49
mordredSpamapS: where is heat-manage02:51
mordredSpamapS: and is its code path exercised in devstack-gate?02:52
mordredSpamapS: we just landed comprehensive requirements gating today02:52
SpamapSmordred: heat/bin ... and if heat is installed in devstack, heat-manage is run02:53
SpamapS./lib/heat:    $HEAT_DIR/bin/heat-manage db_sync02:53
mordredis heat enabled int he main gate? I thought it was?02:54
mordredalso - we're now forcing all projects to sync withopenstack/requirements inside of devstack02:54
mordredbut that just started today02:54
SpamapS2013-08-05 21:06:48.460 | 2013-08-05 21:06:48 + /opt/stack/new/heat/bin/heat-manage db_sync02:54
SpamapSmordred: yeah it is run02:54
SpamapSbut, devstack puts everything on one system02:55
SpamapSso you'd end up with babel02:55
SpamapSbut we're putting it in a venv02:55
* SpamapS files and fixes bug02:56
*** yaguang has quit IRC02:56
mordredbut would we wind up with the right or wrong version of bable?02:56
bodepdclarkb: if I remove that setuptools package resource, it works02:56
*** xchu has joined #openstack-infra02:56
*** dina_belova has joined #openstack-infra02:57
*** adalbas has joined #openstack-infra02:57
clarkbbodepd: there is a setuptools package resource?02:57
clarkbbodepd: we should've removed that a while back02:58
*** lifeless has joined #openstack-infra03:00
SpamapSmordred: right version03:01
*** dina_belova has quit IRC03:01
lifelessjog0: I don't follow what your link to the git ignore docs is for03:01
lifelessjog0: Are you saying you'd like to remove all the in-repo rules ?03:02
*** yaguang has joined #openstack-infra03:08
bodepdclarkb: can I just remove it?03:13
*** nijaba_ has quit IRC03:14
*** nijaba has joined #openstack-infra03:15
*** dguitarbite has joined #openstack-infra03:19
*** UtahDave has joined #openstack-infra03:36
*** beagles has quit IRC03:39
*** pcrews has quit IRC03:42
*** jhesketh has quit IRC03:43
bodepdone other thing that I noticed, I was installing python-setuptools for jjb03:43
bodepdand Puppet installs both03:43
*** jhesketh has joined #openstack-infra03:43
zaroclarkb: hey, the zmq plugin made it on the jenkins plugin manager.  yeah!03:47
*** afazekas has quit IRC03:47
fungitrying to scrape branch-specific pypi mirroring out of my branes... i think we'll want to update all mirrors periodically and when a new release of a requirement under our control is uploaded to pypi03:50
fungithe only situation where we might only need to update the mirror for a single branch is when a change is merged to openstack/requirements03:51
fungiand i'm not sure it's worth the effort to optimize the jobs for that?03:51
*** yaguang has quit IRC03:55
SpamapShttp://logs.openstack.org/34/40334/1/check/gate-heat-requirements/bf3d116/console.html .. is there a way to tell what reviews were included there?03:56
fungiSpamapS: what do you mean by included?03:57
SpamapSfungi: well zuul gathers all the pending things to merge right, its not just one merge.03:57
*** dina_belova has joined #openstack-infra03:57
fungiSpamapS: zuul tests your change against the tip of the target branch03:58
SpamapSI think we got our swords crossed.. two approved changes did the same fix.. but.. can't find the one that wasn't mine. :-P03:58
SpamapSfungi: but it doesn't just test one change at a time.03:59
SpamapSn/m03:59
SpamapSmy fix is just borked03:59
fungiSpamapS: it does, in fact. in an integration test your change is tested against the currently merged state of the relevant branches of all other projects being integrated03:59
*** dina_belova has quit IRC04:00
fungiyou're probably thinking of in the gate pipeline, where tests are run merged on top of other changes being gated, but the end result you see is only testing on top of other changes which successfully merged04:01
fungiso what you're probably interested in, in that case, is what the state of that project's branch was at the time your change was being tested merged onto it04:02
*** SergeyLukjanov has joined #openstack-infra04:03
fungiin that log the relevant lines are...04:04
fungiHEAD is now at c8634c2 Move heat-cfn, heat-boto, heat-watch out of heat.04:04
fungiHEAD is now at 019c878 Add Babel missing requirement04:05
fungithe first is the change yours got merged onto04:05
*** rcleere has joined #openstack-infra04:10
*** yaguang has joined #openstack-infra04:13
*** jinkoo has joined #openstack-infra04:13
*** nijaba has quit IRC04:14
*** nijaba has joined #openstack-infra04:16
*** vogxn has joined #openstack-infra04:16
*** changbl_ has quit IRC04:22
*** jinkoo has quit IRC04:25
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: Branch-specific PyPI mirrors  https://review.openstack.org/4000304:27
*** SergeyLukjanov has quit IRC04:30
*** SergeyLukjanov has joined #openstack-infra04:31
*** UtahDave has quit IRC04:33
*** boris-42 has joined #openstack-infra04:44
*** zul has quit IRC04:48
*** rcleere has quit IRC04:52
*** dina_belova has joined #openstack-infra04:58
*** changbl_ has joined #openstack-infra05:00
*** dina_belova has quit IRC05:02
*** jjmb has joined #openstack-infra05:04
*** jjmb has quit IRC05:04
*** jhesketh has quit IRC05:05
*** jhesketh has joined #openstack-infra05:05
*** changbl_ has quit IRC05:09
*** nijaba has quit IRC05:15
*** nijaba has joined #openstack-infra05:16
*** nijaba has quit IRC05:16
*** nijaba has joined #openstack-infra05:16
*** SergeyLukjanov has quit IRC05:24
*** nicedice has quit IRC05:27
*** amotoki has joined #openstack-infra05:34
*** dguitarbite has quit IRC05:35
*** vogxn has quit IRC05:52
*** vogxn has joined #openstack-infra06:04
*** dkliban_afk has quit IRC06:13
*** nijaba has quit IRC06:13
*** nijaba has joined #openstack-infra06:16
*** nijaba has quit IRC06:16
*** nijaba has joined #openstack-infra06:16
*** Ryan_Lane has joined #openstack-infra06:31
*** odyssey4me has joined #openstack-infra06:34
*** yolanda has joined #openstack-infra06:43
*** olaph has quit IRC06:58
*** dina_belova has joined #openstack-infra06:59
*** dina_belova has quit IRC07:03
*** tianst20 has quit IRC07:07
*** yaguang has quit IRC07:07
*** dina_belova has joined #openstack-infra07:11
*** olaph has joined #openstack-infra07:13
*** nijaba has quit IRC07:15
*** Ryan_Lane has quit IRC07:16
*** nijaba has joined #openstack-infra07:16
*** yaguang has joined #openstack-infra07:20
*** dina_belova has quit IRC07:25
amotokihi, i have a problem that devstack on Ubuntu 12.04 fails due to some version conflicts in python modules (boto, paramiko, cmd2).07:25
amotokidevstack updates requirements in each project based on global-requirements.07:26
amotokion the other hand, some python modules are installed from the distribution. It is defined in files/apts/*. This causes a version conflict and nova-api fails to start.07:27
*** Ryan_Lane has joined #openstack-infra07:28
amotokiAfter removing euca2ools, python-boto, python-cmd2 from files/apts/*, stack.sh succeeded.07:29
*** CliMz has joined #openstack-infra07:29
*** yaguang has quit IRC07:31
*** vogxn has quit IRC07:34
*** jpich has joined #openstack-infra07:35
ttxamotoki: sounds like a devstack bug you should file07:36
amotokittx: sure.07:36
amotokiI am searching around devstack patches, but it is not filed so far.07:36
*** fbo_away is now known as fbo07:39
*** yaguang has joined #openstack-infra07:44
*** dina_belova has joined #openstack-infra07:46
*** vogxn has joined #openstack-infra07:50
*** Ryan_Lane has quit IRC07:54
*** Ryan_Lane has joined #openstack-infra08:04
CliMzhi08:07
*** SergeyLukjanov has joined #openstack-infra08:08
*** nijaba has quit IRC08:13
*** nijaba has joined #openstack-infra08:17
*** derekh has joined #openstack-infra08:23
openstackgerritA change was merged to openstack-infra/odsreg: Allow multiple allocations for a topic  https://review.openstack.org/4021208:29
*** Ryan_Lane has quit IRC08:29
*** sdake_ has quit IRC08:30
*** jjmb has joined #openstack-infra09:11
*** dina_belova has quit IRC09:13
*** Ng is now known as Ng_holiday09:15
*** nijaba has quit IRC09:16
*** nijaba has joined #openstack-infra09:17
*** jjmb has quit IRC09:19
*** woodspa has joined #openstack-infra09:24
*** woodspa has quit IRC09:24
kiallAny of the infra team online?09:26
*** dina_belova has joined #openstack-infra09:37
amotoki (FYI) I reported the devstack issue on Ubuntu in https://bugs.launchpad.net/devstack/+bug/120871809:57
uvirtbotLaunchpad bug 1208718 in devstack "n-api fails to start with the latest devstack on Ubuntu 12.04" [Undecided,New]09:57
*** giulivo has joined #openstack-infra09:58
giulivoguys, if I wanted to learn more (and maybe write a few lines) about which/how many gate configurations and periodic jobs we have, where should I start?09:59
*** dina_belova has quit IRC09:59
kiallgiulivo: probably these links..10:00
kiallhttp://ci.openstack.org/jenkins-job-builder/ <-- The tool used to describe and configure the jenkins jobs10:00
giulivooh kiall , thanks10:02
giulivoso that is the tool and where is the YAML repo?10:02
*** vogxn has quit IRC10:04
kiallhttp://ci.openstack.org/zuul/ <-- "Zuul", the tool used for the gating and coordination of jenkins jobs with Gerrit reviews10:04
kiall(sorry - was AFK for a min)10:04
kiallhttps://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/zuul/layout.yaml10:04
kialland JJB config: https://github.com/openstack-infra/config/tree/master/modules/openstack_project/files/jenkins_job_builder/config10:04
kiallThat should keep you going for a while ;)10:05
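For a feel of what jenkins-job-builder consumes from that config directory, a job definition looks roughly like this (job name and builder here are illustrative, not a real OpenStack job):

```yaml
# Minimal jenkins-job-builder job: JJB expands this YAML into a Jenkins
# config.xml and pushes it to the master.
- job:
    name: example-python27-demo
    node: precise
    builders:
      - shell: "tox -e py27"
```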
giulivoindeed, thanks a lot10:07
kiallno problem10:07
*** xchu has quit IRC10:09
*** vogxn has joined #openstack-infra10:10
*** boris-42 has quit IRC10:15
*** nijaba has quit IRC10:16
*** nijaba has joined #openstack-infra10:20
*** dina_belova has joined #openstack-infra10:20
*** nayward has joined #openstack-infra10:30
*** CliMz has quit IRC10:35
*** yaguang has quit IRC10:43
*** dina_belova has quit IRC10:46
*** vogxn has quit IRC10:46
*** dina_belova has joined #openstack-infra10:50
*** nayward has quit IRC10:54
*** lcestari has joined #openstack-infra11:00
*** ruhe has joined #openstack-infra11:08
*** CliMz has joined #openstack-infra11:11
*** vogxn has joined #openstack-infra11:12
*** nijaba has quit IRC11:17
*** nijaba has joined #openstack-infra11:18
ianwBobBall: ping11:26
*** woodspa has joined #openstack-infra11:27
*** zul has joined #openstack-infra11:33
*** boris-42 has joined #openstack-infra11:34
*** vogxn has quit IRC11:34
*** Shrews has joined #openstack-infra11:41
*** beagles has joined #openstack-infra11:50
*** weshay has joined #openstack-infra11:52
*** dina_belova has quit IRC11:53
*** thomasem has quit IRC11:54
*** ArxCruz has joined #openstack-infra12:03
*** sandywalsh has quit IRC12:03
*** nayward has joined #openstack-infra12:03
BobBallianw: here now12:07
*** dkranz has joined #openstack-infra12:09
ianwBobBall: have you ever looked at Anvil?12:10
BobBallonly in the last couple of days12:10
BobBallbut not to any real degree12:10
BobBalli.e. only very vaguely12:10
BobBalldo you think its method for RPMs is the one we should adopt then?12:11
ianwwell, I've surely missed a lot of context being fairly new12:11
ianwbut it seems like a good idea to me :)12:11
BobBallI think my main concern is that it solves the issue in the "same" sort of way that virtual environments do12:12
ianwharlowja_ helped me this morning (au time) and we got it working fairly quickly12:12
BobBalli.e. it side-steps what the distributions are providing and just compiles things ourselves12:12
BobBallI'm not convinced I understand why we don't use venvs in devstack12:12
ianwi'm not sure either, maybe daemons calling out to other things via various paths "breaks out" of the venv12:13
ianwjust a guess12:13
BobBallhttp://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2013-08-01.log - look for TRACK_DEPENDS for some discussion around venv12:15
BobBall"the reason we can't use venv in the gate is because we need to be able to have a system that works without them" - installing the python libs globally12:16
*** sandywalsh has joined #openstack-infra12:17
BobBallso it seems that we don't want to use packages, but we don't want to use anything that means we can't use packages... hehe12:17
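The TRACK_DEPENDS setting in the linked discussion refers to a devstack localrc knob; sketched below (check the devstack source for the exact current behavior):

```shell
# localrc excerpt: with TRACK_DEPENDS=True, stack.sh builds a virtualenv
# and pip-installs the Python dependencies into it instead of into the
# system site-packages.
TRACK_DEPENDS=True
```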
*** nijaba has quit IRC12:17
*** nijaba has joined #openstack-infra12:18
ianwi see12:20
BobBallI suppose if anvil is creating RPMs for everything that needs them automatically then that's okay12:21
BobBallbut isn't that effectively gating on packages12:21
*** dina_belova has joined #openstack-infra12:24
ianwBobBall: what do you mean by that (gating on packages?)12:28
BobBallHaving the OpenStack gate depending on packages and packaging.  i.e. if something changes in the python dependencies then the package scripts need to be updated (or is that automatic in Anvil?) for the gate to pass - thus putting an extra burden of package maintenance on the openstack developers12:29
ianwI believe it's all automatic12:30
ianwanvil scans the requirements.txt and downloads anything it can from yum12:30
*** rfolco has joined #openstack-infra12:30
ianwit then downloads the rest from pip ... which drags lots of deps back with it12:30
ianwit then scans those deps again, and kicks out any of them that are packages12:30
ianwfinally it creates rpms of the remaining pip downloads12:30
ianwthen it installs everything in one transaction12:31
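The flow ianw describes can be sketched as a partitioning step; the function and variable names below are stand-ins, not Anvil's actual API:

```python
# Sketch of Anvil's dependency handling as described above: anything yum
# can supply comes from yum, the remainder is pip-downloaded and repackaged
# as RPMs so the whole set installs in one yum transaction.

def partition_requirements(requirements, yum_available):
    """Split requirement names into distro-packaged vs build-from-pip."""
    from_yum = [r for r in requirements if r in yum_available]
    build_rpm = [r for r in requirements if r not in yum_available]
    return from_yum, build_rpm

# e.g. only lxml has a usable distro package here; pbr and paramiko would
# be fetched with pip and turned into local RPMs
yum_pkgs, rpm_builds = partition_requirements(
    ["lxml", "pbr", "paramiko"], {"lxml"})
```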
*** dina_belova has quit IRC12:32
giulivoianw, I think sdague is a supporter of that approach; yesterday we discussed a bit about it12:33
giulivomy proposal is that packaging should be a task for the distributors, not for the developers12:33
giulivodevelopers set and maintain the requirements; distributors eventually package those if they want to12:34
giulivonot that my idea counts much, I'm just sharing my 2 cents12:37
giulivowhy would you want to use some distro packages instead?12:37
ianwgiulivo: because the rest of the distro is (presumably) working well with them12:39
ianwfrom a rh point of view, it would be nice to know that the RDO packages are being used for example12:40
giulivoyeah so as per previous discussion with sdague , I think it really boils down to if we want to "gate" the actual capability of the new code to run on some particular distro12:40
giulivoor if we want to "gate" the code changes and ensure they work in a vanilla environment12:40
BobBallI agree ianw - and that's something that dan prince would also push for I'm sure - but the problem there is the RDO packages can't (by definition) keep up with the commits that update the requirements12:41
ianwBobBall: no, but when they do update, we start using them automatically12:41
giulivoianw, I'm not sure what the plans are for RH, but I think we're not discussing the opportunity to run on RDO, just to use some pre-packaged python modules12:41
*** psedlak has joined #openstack-infra12:42
ianwgiulivo: well, you get that for free.  you add the RDO rpm, and if anvil finds packages there it uses them instead of building it itself12:42
BobBallI guess the problem is that sometimes it will break - and when it does we don't want the gating tests to prevent people to commit to the OS repos12:42
ianws/rpm/repo/12:42
*** psedlak has quit IRC12:44
giulivoianw, as per BobBall comment, how could RDO keep up with the actual commit you are gating?12:44
ianwBobBall: yes, possibly there is a bug in a distro package, which isn't your fault but stops you committing i guess.  is that what you mean?12:44
ianwgiulivo: i'm not really talking about the gate.  but you or i or anyone interested could run a CI job that runs anvil with the RDO repos12:45
BobBallor we add a new dependency and if it's not in anvil (or the anvil auto-generator doesn't know how to cope with that new dependency or auto-generating based on a change) then things could break there12:45
giulivoianw, okay so I've probably brought up a different topic then because what has been partially discussed with sdague was the usage of packages vs. pip for the devstack requirements (and at gate); a discussion mostly related to this https://review.openstack.org/#/c/40019/12:47
ianwBobBall: harlowja_ should speak to that when it's a normal time for him; but to my understanding, it's building an rpm from a pip download.  so if pip can get it, there isn't really much to go wrong; it's a glorified copy12:47
giulivoand not the usage of distro packages for the openstack components themselves12:48
*** dprince has joined #openstack-infra12:48
ianwgiulivo: yes, well that change comes into it.  that throws out all distro packages.  but as my comment there says, it doesn't fix the case of, say python-setuptools the package being needed by xen12:49
*** ruhe has quit IRC12:50
BobBallactually it's python-lxml needed by xen :)12:52
ianwoh, sorry, but yeah12:53
*** whayutin_ has joined #openstack-infra12:57
*** weshay has quit IRC12:57
*** anteaya has joined #openstack-infra12:58
giulivoping kiall13:00
*** dkehn_ has joined #openstack-infra13:00
*** dina_belova has joined #openstack-infra13:00
giulivoI see the gating jobs are in devstack-gate.yaml ; may I ask what is in python-jobs.yaml and what in python-bitrot-jobs.yaml ?13:01
*** dkehn has quit IRC13:01
*** dkehn_ is now known as dkehn13:02
*** vogxn has joined #openstack-infra13:04
kiallgiulivo: I'm not totally familiar with the jobs :) But devstack-gate.yaml is not ALL the gating jobs, just the devstack ones13:06
kiallpython-jobs has things like the py27 / py26 / pep8 gate jobs, and some publication stuff etc13:06
openstackgerritA change was merged to openstack-dev/hacking: python3: Fix tracebacks while running testsuites  https://review.openstack.org/4005213:06
openstackgerritA change was merged to openstack-dev/hacking: Fix typo in HACKING.rst  https://review.openstack.org/4032913:07
openstackgerritA change was merged to openstack-dev/hacking: Import exceptions list is now configurable  https://review.openstack.org/3914013:09
*** fbo is now known as fbo_away13:11
*** woodspa_ has joined #openstack-infra13:13
*** dkliban_afk has joined #openstack-infra13:14
*** psedlak has joined #openstack-infra13:14
*** woodspa has quit IRC13:16
*** nijaba has quit IRC13:17
*** nijaba has joined #openstack-infra13:18
*** nijaba has joined #openstack-infra13:18
*** dina_belova has quit IRC13:19
*** fbo_away is now known as fbo13:20
*** mriedem has joined #openstack-infra13:22
mordredamotoki: I removed netaddr the other day for a similar reason13:24
*** dina_belova has joined #openstack-infra13:24
*** zul has quit IRC13:29
*** ruhe has joined #openstack-infra13:31
*** krtaylor has quit IRC13:36
*** changbl_ has joined #openstack-infra13:36
*** pentameter has joined #openstack-infra13:36
mordredsdague: https://review.openstack.org/4041813:42
mordredsdague: and https://review.openstack.org/4041713:42
fungigiulivo: the bitrot jobs are periodic re-runs of unit/functional and integration tests on supported stable release branches, to confirm new dependency releases and tooling changes don't break them13:42
* mordred is happy - there is so much scrollback from all sorts of different people this morning!13:42
BobBallour aim in life is to make you spend the first half hour of the day reading scrollback!13:43
* fungi feels like he spends most of his day reading scrollback13:43
giulivoyou guys indeed turn the lights on and off13:44
fungii prefer to work with the lights off13:44
fungimakes the loud techno music feel even louder13:44
sdaguesorry folks, was getting car serviced... reading scrollback with lots of my name pinged13:45
pentameterHey mordred, is a package headed my way?13:45
mordredgiulivo, ianw, BobBall: the only reason I'd really be interested in auto-generating packages is if it made things easier13:46
mordredpentameter: no. I went to ship it before I left and the store was closed. I really need a personal assistant for this sort of thing13:46
BobBallwell at the moment I'm struggling to see many alternatives mordred...13:47
sdaguemordred: so why - https://review.openstack.org/#/c/40418/1/functions ?13:47
sdaguewe don't protect people from borking over their local changes elsewhere13:47
mordredsdague: we don't change their source repos elsewhere13:47
sdaguereclone=yes13:48
sdaguewe sure do13:48
mordredbut they ask us to do that, no?13:48
mordredchmouel: can you chime in on this one? ^^13:48
sdaguemy feeling is that it's actually dangerous to run stack.sh on changed repos for lots of other reasons already13:48
*** woodspa__ has joined #openstack-infra13:49
sdagueif you want to rerun stack.sh you really need to be running with local forks elsewhere that you reference in localrc13:49
* BobBall has been doing that lots in the last few days13:49
chmouelsdague: well i guess for most people you use devstack to dev13:49
chmouelso you would modify your source code13:49
chmouelrerun devstack to clean etc..13:49
mordredBobBall: I think the question, as you guys were discussing above, is what we want to test and and we want to solve by adding packaging in to the mix13:50
chmoueltest/restart the services etc...13:50
*** changbl_ has quit IRC13:50
sdaguechmouel: right, so manually going into screen and stop / restarting a service, all good13:50
sdaguebut stack.sh goes and clobbers all kinds of things13:51
chmouelsdague: what about cleaning volumes and such?13:51
chmouelclobbering the repo before?13:51
mordredBobBall: first and foremost, the most important thing we want to test is the code itself and how that interacts with the library versions _we_ have specified we require13:52
*** dina_belova has quit IRC13:52
chmouelsdague: we probably can't expect that a user is not going to rerun a stack.sh over and over again13:52
*** woodspa_ has quit IRC13:52
BobBallmordred: Perhaps - but I'm starting from the point of we've got a broken system at the moment because we're subverting the packaging system of the distro we're installing on.  I think we either need to be fully independent of distro packaging (perhaps dropping support for RHEL or using venvs) or we have no real choice but to play in the packaging sandpit...13:52
sdaguechmouel: reclone does that indiscriminately13:52
*** vijendar has joined #openstack-infra13:52
BobBallchmouel, sdague: I run stack.sh many times - reclone is off (of course) and it's very useful13:53
BobBallif we don't want people to run stack.sh multiple times, let's delete unstack.sh?13:53
sdagueBobBall: right, but there is actually a completely supported workflow for that13:53
mordredBobBall: I've been arguing the opposite direction - we should stop trying to install _any_ of our depends via distro packages13:53
chmouelsdague: right, I wasn't using reclone - so should we force people to only be able to rerun stack.sh with reclone?13:53
sdagueSWIFT_REPO=/home/myuser/code/swift13:53
sdaguein localrc13:53
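sdague's supported workflow, spelled out: point devstack at a local fork via localrc so repeated stack.sh runs use your working tree (path and branch names are illustrative):

```shell
# localrc excerpt -- devstack clones swift from the local fork instead of
# the upstream URL, so in-progress changes survive reruns of stack.sh
SWIFT_REPO=/home/myuser/code/swift
SWIFT_BRANCH=my-feature-branch
```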
mordredBobBall: because we _know_ that the distros do not have the right requirements for us13:53
BobBallmordred: I'm happy with that as long as we don't have a conflict like we do with RHEL - if we can install things independently then it works fine13:54
mordredBobBall: there shouldn't be a conflict if we don't try to install the same thing with both packaging systems13:54
chmouelsdague: what's the difference from having it as the default ?13:54
chmouelsdague: we still are going to have the requirements files updated automatically there,right?13:55
BobBallmordred: but the problem is we have to in the case of python-lxml and python-crypto isn't it?13:55
mordrednope13:55
mordredwhy?13:55
BobBallmordred: or anyone might have installed _anything_ in their system13:55
mordred_that_ I don't care about13:55
chmouelI thought Alex_Gaynor was working on removing those C deps ^13:55
mordredthe second thing13:55
BobBallwell, ok, devstack doesn't have to install them, but if they exist already we have to tolerate them being there13:55
mordredI cannot control how a person might have broken their system before running devstack13:55
mordredif they are doing something more complex, they should use something that isn't devstack to install13:56
BobBallwhy is it broken?  They have just installed python-lxml13:56
mordreddevstack is not an arbitrary and rich deployment tool13:56
mordredwhy have they installed _anything_13:56
BobBallIn my case, xenserver-core depends on xen which depends on python-lxml13:56
BobBallso that package exists in the system already13:56
mordredright. so xenserver is a specific issue we need to solve13:56
BobBallso my assertion is that devstack needs to tolerate python-lxml (or any other python-*) packages being installed13:57
sdaguechmouel: ok, I'm a soft -0 right now on the patch right now.13:57
BobBallotherwise you might remove packages that the user is expecting to exist13:57
sdagueI'll let dtroyer weigh in later13:57
BobBallI don't agree that devstack should break a system by removing packages a user has installed13:57
sdagueand think about it more today13:57
mordredBobBall: I think we need to be more specific. I think devstack needs to engineer for python-lxml for sure, because xenserver is a thing we can be expected to know about13:57
chmouelsdague: cool thanks13:57
chmouelsdague: it does really feel like that isn't it http://openstackreactions.enovance.com/2013/08/understanding-global-requirements-in-the-gate/13:58
mordredBobBall: is the removing packages thing about lxml getting removed by us removing setuptools?13:58
sdague:)13:58
BobBallBut what if the user has a differencing tool installed that uses python-foo which we want to install the pip version of... python-foo gets removed, and so does the tool they are using13:58
mordredwhy would python-foo get removed?13:58
BobBallbecause devstack force removes things that will conflict by installing them through pip13:59
BobBallatm that's only python-lxml and python-crypto - but if we say we'll do everything through pip and ignore all packages, it's the same breakage problem13:59
mordredactually, I think if we go my route, we should not remove things either14:00
BobBallstack.sh line 60214:00
mordredI know - we're talking planning here - how do we fix it - ignore what it does right now14:00
*** _TheDodd_ has left #openstack-infra14:01
BobBallok14:01
*** _TheDodd_ has joined #openstack-infra14:01
BobBallmy point was that if we install things through pip (even if it's everything) then we're still overwriting files that might be installed by the packaging system and thus potentially breaking things14:02
mordredwe won't overwrite things if the system depend satisfies the requirement14:02
mordredbecause pip won't install14:02
mordredit's only if the python-lxml on the system installed by rpms is too old and does not meet the minimum version we assert we need, that we will take action14:03
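The check mordred describes is essentially pip's requirement-satisfaction test; a rough Python equivalent (the requirement string in the comment is illustrative):

```python
# If the already-installed lxml satisfies the version we assert we need,
# pip leaves it alone; only on a version conflict (or a missing dist)
# would pip install over the distro-owned files.
import pkg_resources

def already_satisfied(requirement):
    try:
        pkg_resources.require(requirement)  # e.g. "lxml>=2.3"
        return True
    except (pkg_resources.DistributionNotFound,
            pkg_resources.VersionConflict):
        return False
```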
BobBallbut if it doesn't then we'll upgrade the files underneath the RPM14:03
mordredright14:03
mordredso14:03
mordredthing a) someone needs to make an rpm for lxml that is new enough and can be installed on xenserver boxes14:03
mordredthat, it seems, is a known important task14:03
BobBallwhich violates what the packaging systems expect - perhaps I'm thinking too theoretically :)14:03
mordredyes. you can't solve the full theory here14:03
mordredbecause you try to solve a meta distribution that emerges magically without a distro team making it14:04
mordredbelieve me, I fought that battle for about 2 years14:04
BobBallhehe14:04
mordredit's a really tempting path to explore though14:04
mordredon this specific issue, if you, or someone, made a python-lxml rpm of a more modern lxml14:05
mordredand installed that on your xenserver node14:05
mordredwould that break xenserver?14:05
BobBallwe'd need to change stack.sh to not forcibly remove it14:05
*** dina_belova has joined #openstack-infra14:05
BobBalljust that change makes it work for me ;)14:05
*** whayutin_ has quit IRC14:05
*** dkranz has quit IRC14:05
mordredright. I mean pre-devstack14:05
mordredwithout devstack in the mix14:06
mordredwould that lxml package break the xenserver?14:06
BobBallI'm sure a new python-lxml would work with xen, sure14:06
mordredgreat. because if it wouldn't, there would be bigger fish to fry14:06
BobBallindeed14:06
mordredso what we need is to get one of those packages and get it into a location that can be trusted14:06
BobBallI'm running a system with the old package but replaced via pip and that seems to work through most tests so far (other tests fail for known reasons that we're working on)14:07
mordredso, removing lxml was done because not removing it was breaking someone else, no?14:07
BobBalland assume that python-lxml won't need to be upgraded again (or assert that if it does then the gate fails until it's fixed?)14:07
mordredwait - I think I don't understand you there14:08
BobBalltbh I got confused by the comment in devstack...14:08
BobBallI mean that if we require another newer version of python-lxml then we shouldn't accept that until the package is built for that newer version - otherwise we have the same problem14:08
BobBallI have a meeting now - can we resume this later? :)14:09
mordredwe can14:09
mordredI don't think what you are suggesting will work14:09
BobBallgreat14:09
mordredbecause we used to do it that way :)14:09
BobBallyou're probably right! :D14:09
mordredbut come back after your meeting14:09
*** nayward has quit IRC14:10
*** weshay has joined #openstack-infra14:10
*** rnirmal has joined #openstack-infra14:14
*** datsun180b has joined #openstack-infra14:15
*** cppcabrera has joined #openstack-infra14:15
*** nijaba has quit IRC14:15
*** psedlak has quit IRC14:15
*** krtaylor has joined #openstack-infra14:16
*** yaguang has joined #openstack-infra14:17
*** markmcclain has joined #openstack-infra14:18
*** nijaba has joined #openstack-infra14:19
sdagueclarkb: did I make logstash sad?14:21
*** ^d has joined #openstack-infra14:24
*** ^d has joined #openstack-infra14:24
clarkbsdague: maybe, did you ask it for info across a big chunk of time?14:24
*** burt has joined #openstack-infra14:24
clarkbit seems to not like that but should recover if the cache settings work like they should14:25
dkehnclarkb, I was bitching on Sat that neutron devstack was all f-ed up, which it was, seems mo better now, FYI, built 2 VMs and all is working as it should14:26
*** zul has joined #openstack-infra14:26
clarkbdkehn yup it is passing the gate now too14:27
sdagueyes14:27
*** thomasbiege has joined #openstack-infra14:27
dkehnclarkb, I love how changes make it in as working14:27
sdagueI was trying to figure out the floating ip fail occurrence14:27
*** yaguang has quit IRC14:28
clarkbsdague: I believe the issue here is elasticsearch needs to load more stuff into memory than it is capable of to perform the query14:29
clarkbbecause our test logs are too big... more restricted queries by test time and other fields should help14:29
*** pabelanger_ has joined #openstack-infra14:31
fungisdague: are you still working with grenade on that temporary 166.78.161.26 or should i tear it back down now?14:32
*** dolphm has joined #openstack-infra14:33
*** pabelanger_ has quit IRC14:33
*** pabelanger_ has joined #openstack-infra14:33
*** pabelanger has quit IRC14:33
anteayacan this storyboard patch get some eyes on it please? https://review.openstack.org/#/c/40014/14:33
*** pabelanger_ is now known as pabelanger14:33
*** pabelanger_ has joined #openstack-infra14:34
*** pabelanger has quit IRC14:34
*** pabelanger has joined #openstack-infra14:34
jeblairfungi: know anything about "Fatal error: puppet-3.1.1-1.fc18.noarch requires hiera >= 1.0.0 : Success - empty transaction14:37
sdaguefungi: oh, that's solved14:37
sdaguesorry14:37
jeblairfungi: fedora18-1.slave is sending emails with that14:37
sdaguefungi: so kill it14:37
mordredjeblair: wow. that looks amazing14:37
mordredand no14:37
fungijeblair: i thought i'd downed fedora18-1's puppetry14:37
fungii'll go ahead and tear that slave down14:38
*** cody-somerville_ is now known as cody-somerville14:38
LinuxJedimordred: any way to convert a draft review into a non-draft review if the person who created it is away on vacation?14:39
fungiLinuxJedi: by editing a row in the database14:39
LinuxJediouch, ok14:40
*** yaguang has joined #openstack-infra14:40
fungiLinuxJedi: "draft" is a bool column in the patchsets table. just toggle it14:40
mordredLinuxJedi: or - grab it, and recommit it removing the change-id line14:40
mordredLinuxJedi: and upload it s a new changeset14:40
fungiLinuxJedi: your gerrit or openstack's?14:40
LinuxJedimordred: I've already pushed up a changeset on top of the draft one, so that will make a mess14:41
LinuxJedifungi: openstack14:41
fungii can un-draft something in review.o.o, just let me know the change number14:41
LinuxJedifungi: 3960814:41
LinuxJedinormally it wouldn't matter, but I need to deploy this feature before he gets back from vacation :)14:42
clarkbsdague: I am able to perform queries over the last hour. I think you are safe searching 12 hour chunks or so14:44
clarkbjeblair: I think the zuul status page isn't properly accounting for d-g nodes belonging to the new jenkins servers14:44
jeblairclarkb: that's correct; the 3 dg systems are overwriting each other14:45
*** tianst has joined #openstack-infra14:45
jeblairLinuxJedi: tell him not to use drafts next time.  only unhappiness can result.14:45
LinuxJediwhy do we support them if they are bad? :)14:46
mordredBobBall, sdague: https://review.openstack.org/4043114:46
*** edleafe has joined #openstack-infra14:46
mordredjeblair: do we need to turn d-g into a system driven by gearman with a single reporting entity? :)14:47
*** ruhe has quit IRC14:48
*** changbl_ has joined #openstack-infra14:51
fungiLinuxJedi: turns out i had to update the draft column for that patchset from Y to N in the patch_sets table but also update the status column for the change from d to w in the changes table14:53
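For the record, the manual un-draft fungi describes boils down to two updates in Gerrit's ReviewDB; a sketch (run with care, e.g. via Gerrit's gsql console; 39608 is the change number from this conversation, and you may need to restrict to the specific patch_set_id):

```shell
mysql reviewdb <<'EOF'
-- flip the patchset out of draft state
UPDATE patch_sets SET draft = 'N' WHERE change_id = 39608;
-- and move the change itself from (d)raft to revie(w)
UPDATE changes SET status = 'w' WHERE change_id = 39608;
EOF
```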
edleafemordred: I'm getting a pbr installation on travis-ci: https://travis-ci.org/rackspace/pyrax/jobs/990196014:53
edleafemordred: alex gaynor suggested pinging you about it14:53
LinuxJedifungi: eek... sorry to be a pain.  Many thanks for sorting that :)14:53
mordrededleafe: wow. that's the weirdest problem anyone has come to me with in this channel :)14:53
mordredI see travis-ci.org and pyrax in that question ;)14:53
fungiLinuxJedi: no worries--i wanted to have a better understanding of how that worked underneath anyway14:53
edleafemordred: well, I maintain pyrax14:54
mordrededleafe: yes yes. I'm being snarky. one sec14:54
mordrededleafe: I believe you have hit the distribute upgrade problem ...14:55
edleafemordred: fwiw, it installs without issue on every other system I've tried14:55
mordrededleafe: does travis give the ability to tell it what version of setuptools you want in the virtualenv it uses?14:55
edleafemordred: not that I know of. it seems to be a known issue after googling: https://github.com/travis-ci/travis-ci/issues/119714:56
mordredyes. basically, anything that depends on anything that depends on distribute is going to break14:57
mordredif the travis guys don't do something systemically to fix it14:57
edleafebut setup.py is using setuptools14:57
edleafenot distribute14:57
mordrededleafe: nono. this has nothing to do with you14:58
mordredthis has to do with a 3rd level transitive dependency causing things to get upgraded in a specific sequence which breaks things14:58
edleafemordred: yeah, I know - just wanted to clarify14:58
mordredso - in your install: section, before your python setup.py install14:59
mordrededleafe: add "pip install -U setuptools"14:59
mordredand everything should work14:59
*** ftcjeff has joined #openstack-infra15:00
edleafemordred: trying it now...15:01
edleafemordred: no love from travis: https://travis-ci.org/rackspace/pyrax/jobs/990441715:05
*** ruhe has joined #openstack-infra15:05
mordredlet the record show, I have been helpful to travis people: https://github.com/travis-ci/travis-ci/issues/119715:05
clarkbis the virtualenv being created with distribute?15:06
edleafemordred: duly noted15:06
edleafemordred: and thanks for looking into this15:06
mordrededleafe: that is a different issue15:07
mordredor, maybe it's not. hrm... one sec15:07
mordrededleafe: will you try one more thing for me, just for giggles?15:08
mordrededleafe: will you replace "python setup.py install" with "pip install ."15:09
edleafemordred: ok, gimme a sec...15:09
edleafemordred: leaving in the 'pip install -U setuptools'?15:09
mordrededleafe: yes15:09
*** ianw has quit IRC15:10
*** yaguang has quit IRC15:10
*** pcrews has joined #openstack-infra15:12
mordredWOW. why in the WORLD is pbr's override called by python-swiftclient getting called by pyrax's easy_install call15:13
mordredthat's AMAZING bleeding15:13
*** dolphm has quit IRC15:13
mordredoh -right - easy_install does everything in function calls in the same process15:14
fungijeblair: fyi, i've deleted fedora18-1.slave.openstack.org from jenkins.o.o, from rackspace nova and from rackspace dns (a and aaaa rrs). i checked jenkins01 (and 02) also just to be safe, but didn't see it in there yet. also would have deleted it from puppet-dashboard but looks like i may have already done that in the past couple weeks15:14
mordredso python-swiftclient consumes pbr, which does things for its install which are still in process afterwards, so subsequent easy_install invocations FAIL15:14
BobBallmordred: looks like your change will work - but I'm just testing it to be doubly sure15:14
mordredWOW15:14
mordredwhat a giant pile of terrible design!15:14
BobBallmordred: and while I think a long term solution is probably needed let's get the short term fix in too :)15:15
jeblairfungi: cool, thx!15:15
fungijeblair: out of curiosity where were you seeing those errors show up? puppet agent was disabled and the cron job was commented out for weeks15:15
mordredBobBall: yeah. that way we won't feel pressure to solve the very tricky and intricate problem NOW15:15
edleafemordred: travis is all unicorns and rainbows now15:15
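The combination that finally worked maps to a .travis.yml roughly like this (illustrative excerpt; only the two install commands come from the conversation):

```yaml
install:
  # upgrade away the stale distribute/setuptools in Travis's virtualenv first
  - pip install -U setuptools
  # then install via pip instead of the easy_install-based "setup.py install"
  - pip install .
```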
fungijeblair: oh, nevermind. it was on the autoupdate cronspam--i see it now15:16
clarkbjeblair: fungi mordred translation change proposals to gerrit are working again on proposal.slave. I think we can safely delete the old tx slave now15:16
mordrededleafe: awesome. that's because a) easy_install is TERRIBLE and b) easy_install is TERRIBLE and c) you guessed it, easy_install is the worst piece of software ever written by humans15:16
fungiclarkb: yay!15:16
mordredclarkb: woot15:16
jeblairclarkb: cool15:16
edleafemordred: yeah, but it's *easy*!!15:16
*** reed has joined #openstack-infra15:17
*** dkranz has joined #openstack-infra15:17
clarkbmost patchsets ever? https://review.openstack.org/#/c/27091/15:18
*** nijaba has quit IRC15:18
*** danger_fo_away is now known as danger_fo15:19
* mordred throws setuptools at edleafe15:19
*** nijaba has joined #openstack-infra15:19
*** nijaba has quit IRC15:19
*** nijaba has joined #openstack-infra15:19
fungiclarkb: and no votes on it for nearly 4 months... does anybody look at glance translation update proposals? i'm guessing not :/15:20
mordredclarkb: do we have a policy that a single core can +2 a translation change?15:21
clarkbmordred: I don't think we do15:21
clarkbI am open to that though15:22
*** dolphm has joined #openstack-infra15:22
clarkbglance dev seems to be really slow right now15:22
*** vogxn has quit IRC15:22
* mordred just pinged in channel15:23
jeblairfungi: you mentioned yesterday you might be interested in helping to move jenkins slave nodes... still up for that?15:23
fungijeblair: sure, i'm happy to pitch in on that15:24
jeblairfungi: okay, want to work on the precise nodes while i continue on centos?15:24
fungiyou're setting the node offline in jenkins.o.o, then waiting for any running job to complete, then adding the node on jenkins01 or 02 (odd vs even), then deleting it in jenkins.o.o?15:25
fungiany steps i'm missing there?15:25
jeblairfungi: yes -- except make sure to click the delete button on jenkins before hitting the save button on jenkins0[12] so that there's no chance of stepping on toes15:25
fungiaha, got it. so delete from jenkins.o.o before adding to a new jenkins15:26
jeblairyep15:26
fungii'll tackle precises15:26
funginow that my advisories for the day are sent15:26
jeblairkeepin ya busy, huh?15:27
fungii like being busy ;)15:27
*** CliMz has quit IRC15:27
fungii do want to pick your brain about branch-specific mirroring at some point today too. i have a wip change for the jobs up, but i'll wait to pester you until i have the corresponding jeepyb patch up for review this afternoon15:28
jeblairok15:28
*** yaguang has joined #openstack-infra15:28
*** vijendar has quit IRC15:31
*** markmcclain has quit IRC15:31
jeblairfungi: i'm done with centos, i'll do precise3k now15:33
fungik15:33
*** psedlak has joined #openstack-infra15:34
HenryGThis is new to me. What happened here?  http://logs.openstack.org/17/39017/4/gate/gate-neutron-python27/ed7be4315:35
*** odyssey4me has quit IRC15:36
jeblairclarkb: can you triage that? ^ it doesn't make sense to me.15:36
jeblairfungi: ok, done with precise3k;  i'm going to go turn down the devstack-gate knobs15:37
clarkbjeblair: ya15:38
clarkbHenryG: if you open up the subunit log and scroll to the end15:38
clarkbHenryG: it appears that the test failures are captured in there. Still not sure why testr didn't report them to stderr correctly15:38
*** pabelanger has quit IRC15:39
clarkboh interesting the thing I thought was a failure reported successful. /me keeps looking15:39
*** pabelanger__ has joined #openstack-infra15:39
fungijeblair: okay. i'm slowly ramping up on the precise slaves and getting the migration pattern down, but will be a bit before i get through the rest of them15:39
*** pabelanger__ has quit IRC15:39
*** pabelanger__ has joined #openstack-infra15:39
jeblairfungi: *nod*15:39
*** pabelanger__ is now known as pabelanger15:39
clarkboh I see15:40
*** pabelanger has joined #openstack-infra15:40
clarkbHenryG: those are process return code failures, indicating the test runners exited uncleanly without returning 0 to the calling testr process15:40
clarkbHenryG: is the code under test prematurely exiting?15:41
HenryGclarkb: My change did not touch these bits :(15:42
clarkblooks like py26 and py27 failed in the same way so it isn't completely inconsistent15:43
clarkbHenryG: is it possible that your change rebased atop master tickled something?15:43
HenryGclarkb: Trying to think how that might be, but extremely unlikely for unit tests15:44
*** beagles has quit IRC15:44
clarkbHenryG: looking in the subunit logs I think the log capturing has the ERRORs15:45
clarkbthings like 2013-08-06 14:37:29,904    ERROR [neutron.api.v2.resource] create failed15:45
*** vijendar has joined #openstack-infra15:45
clarkband 2013-08-06 14:37:34,512    ERROR [neutron.api.v2.resource] update failed15:46
clarkbthese feel more like functional tests and they are bombing out because something changed15:46
HenryGclarkb: please bear with me while I ask ignorant questions ...15:49
*** rcleere has joined #openstack-infra15:49
HenryGAre these the same tests that I can run locally with 'tox -e py27' in neutron?15:50
clarkbHenryG: yes15:50
clarkbHenryG: however, your change was rebased atop the state of the gate when it was tested and failed15:50
clarkbHenryG: so if you want that exact state you need to fetch the ref that was tested15:51
clarkbgit fetch http://zuul.openstack.org/p/openstack/neutron refs/zuul/master/Z333d40755cb1488680f87e4af90ec7d6 && git checkout FETCH_HEAD15:51
clarkbthat ref is near the beginning of the job's console log15:51
*** sarob has joined #openstack-infra15:52
fungiclarkb: mordred: looks like we recently broke pip on the centos slaves? http://puppet-dashboard.openstack.org:3000/reports/778887 "Could not locate the pip command." started on sunday...15:53
clarkbon sunday? I didn't approve anything over the weekend /me looks in git logs15:54
fungii'm going to guess something locally depending on pbr, upgrading pbr, doing something to globally-installed pip15:55
clarkbpbr did do a new release right?15:55
fungiso might have been a pbr change or release tagged on sunday which triggered it?15:55
clarkbfungi: new pbr release on the 4th15:55
clarkb(which was sunday) I bet that is what caused the problem15:56
fungiwe have a loose temporal correspondence with that then15:56
fungistronger if that changed handling of pip15:56
*** tianst has quit IRC15:56
mordredit did not change handling of pip15:56
zaroclarkb: did you see that the zmq plugin is available now?15:56
*** avtar has joined #openstack-infra15:56
clarkbzaro: yup I saw that. It showed up late yesterday15:57
clarkbzaro: so you think the pom.xml upload did it?15:57
zaroyes, for sure.15:57
clarkbif so we should add uploading that file to the plugin upload job15:57
zaroclarkb: https://bugs.launchpad.net/openstack-ci/+bug/120890115:57
uvirtbotLaunchpad bug 1208901 in openstack-ci "deploy jenkins plugin pom.xml file " [Undecided,New]15:57
clarkbperfect15:58
clarkbfungi: mordred remember centos is broken and doesn't use /usr/local15:58
clarkbfungi: mordred isn't it possible that in upgrading pbr or $other thing dependencies were munged and pip was removed? similar to what we see with devstack on rhel?15:59
* anteaya wonders what life would be like if smalltalk had gained more traction and we could just issue devstack images15:59
*** beagles has joined #openstack-infra16:00
BobBallhmmmm16:00
BobBallRunning the latest devstack removes python-setuptools which in turn removes nose, coverage, pip, numpy...16:00
clarkbanteaya: ship around squeak VMs?16:00
anteayaclarkb: would eliminate the dependency issues, would it not?16:01
mordredclarkb: oh god16:01
anteayaI take it mordred doesn't like the idea16:01
mordredok. so - I'm starting to believe that we REALLY need a setuptools 0.9.8 rpm16:01
mordredbecause the dance we're doing with redhat right now is not really working for me16:01
*** mrodden has quit IRC16:02
clarkbanteaya: it would, but have you used squeak? I can't get over the fact that it tries to be so self contained to the point of uselessness16:02
mordredI don't mean we need all of the rpms in the world - but a setuptools 0.9.8 rpm will solve MANY things16:02
clarkbanteaya: turns out my existing editor and browser and all these other tools are better than the conglomeration they ship.16:02
clarkbmordred: do we need a corresponding deb?16:03
anteayaclarkb: I have just done the tutorials, but I know a guy who has used smalltalk for years, he loves it16:03
clarkbmordred: so that we can apply it symmetrically across platforms?16:03
anteayaclarkb: fair enough16:03
mordredclarkb: debian isn't broken in quite the same way - but yeah, that would be nice16:03
clarkbanteaya: one of my professors was a big Squeak fan. wrote http://www.squeakbyexample.org/16:03
mordredclarkb: I've asked zul for one16:03
clarkbanteaya: we were happy when he made us use mosml16:03
anteayabut devstack is fairly self contained, and meant to be so, no?16:03
anteayaclarkb: ? https://www.itu.dk/~sestoft/mosml.html16:05
clarkbanteaya: ya16:06
anteayayou liked it better than squeak?16:06
pleia2good morning16:06
clarkbanteaya: yes, I mean it is completely different, but we felt the pain of mosml was more bearable than squeak16:06
anteayamorning pleia216:07
clarkbpleia2: morning16:07
anteayaclarkb: okay, guess I have never felt the pain of squeak, just that it requires a structure that is very limited because the paradigm was never as popular as files16:07
anteayabut I like prolog and forth, so I am odd16:08
*** Ryan_Lane has joined #openstack-infra16:08
*** salv-orlando has joined #openstack-infra16:08
clarkbHenryG: any luck?16:08
*** sarob_ has joined #openstack-infra16:08
*** sarob has quit IRC16:08
HenryGclarkb: I fetched that ref and ran tox locally. No problems.16:09
clarkbzaro: for deploying the pom.xml I think we should extract the file from the hpi archive in the publish job then push both. Instead of pushing both to tarballs.o.o16:09
clarkbHenryG: it is possible this is an interaction between tests in that case16:10
clarkbHenryG: with that ref checked out, you can download the subunit file from the failing test run. Then `source .tox/py27/bin/activate ; testr load $path_to_downloaded_subunit_file ; testr run --analyze isolation` this will attempt to determine which tests interact poorly with each other given the order tests were run in in the gate16:11
salv-orlandoclarkb, HenryG: we were chasing this issue in openstack-neutron16:11
clarkbthe other thing to try is running those tests on their own. It may be that they have some hidden dependency on other tests16:12
salv-orlandoas a matter of fact, we have already seen this particular failure intermittently, where the test runner appears to crash16:12
*** mrodden has joined #openstack-infra16:12
salv-orlandohas there been any recent change in the gate which might have increased the level of test concurrency?16:13
clarkbsalv-orlando: no, the unittest slaves are still 4 core machines16:13
clarkbsalv-orlando: but order isn't necessarily deterministic16:13
clarkbtestr will attempt to group tests optimally given previous test run times, jenkins does not keep the .testrepository dir though16:14
salv-orlandoclarkb: thanks. My question was about the fact that from today we're noticing a much higher impact of this particular failure16:14
clarkbso to reproduce locally you may have an easier time if you moved .testrepository aside each time you run the tests16:14
*** cppcabrera has left #openstack-infra16:14
clarkbsalv-orlando: last time we saw something like this in neutron there was a sys.exit() call in a plugin iirc16:15
clarkbwhich caused the test suite to bomb out early16:15
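clarkb's diagnosis can be sketched with a minimal, hypothetical example (not the actual neutron code): SystemExit derives from BaseException, so a broad `except Exception` in a test or plugin will not swallow it, and a stray sys.exit() buried in imported code kills the whole worker process, which then exits non-zero without reporting any more results — exactly the process return code failure testr showed.

```python
import subprocess
import sys

# SystemExit derives from BaseException, not Exception, so a broad
# "except Exception" handler will not stop it.
try:
    sys.exit(1)
except Exception:
    swallowed = True
except SystemExit:
    swallowed = False

# A stray sys.exit() deep in imported code kills the worker process:
# it exits non-zero and never emits further subunit results, which
# testr surfaces as a process return code failure.
worker = subprocess.run(
    [sys.executable, "-c", "import sys; sys.exit(1)"],
    capture_output=True,
)

print(swallowed, worker.returncode)  # → False 1
```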
*** nicedice has joined #openstack-infra16:16
*** nayward has joined #openstack-infra16:16
salv-orlandosys.exit however should be deterministically reproducible… shouldn't it?16:17
clarkbsalv-orlando: not if it is hidden in error handling code or some other dark corner16:17
clarkbbut sys.exit does seem less likely if this is not deterministic16:17
*** vogxn has joined #openstack-infra16:18
salv-orlandoI am more inclined towards a concurrency issue, but I'll check the sys.exit too.16:18
*** ^d is now known as ^demon|away16:18
salv-orlandoThis particular test case basically just executes a set of common unit tests with another plugin - but in its turn this plugin just uses the same mixin16:19
salv-orlandoas all the other plugins.16:19
*** nijaba has quit IRC16:19
salv-orlandoThe thing that is probably weird is that this test case inherits from another plugin's test case.16:19
*** yolanda has quit IRC16:19
*** nijaba has joined #openstack-infra16:19
salv-orlandoAnyway, I will keep trying to fix it. I Might stress the gate a bit with a few patches.16:20
mordrednotmyname: any chance I could convince you to release python-swiftclient?16:20
salv-orlandoIf the gate becomes pretty much blocked by this, I suggest temporarily removing this test case. It adds very little to code coverage, and it's plugin specific.16:20
*** sarob_ has quit IRC16:21
mordrednotmyname: with the requirements update that drops d2to1 from the reqs? it's causing people to trip over the distribute/setuptools bug.16:21
clarkbsalv-orlando: good to know. Let me know if you have testr or jenkins related questions. happy to help as much as I can16:21
clarkbHenryG: ^16:21
*** vogxn has quit IRC16:22
mordrednotmyname: this one: https://review.openstack.org/#/c/40286/ specifically16:22
*** sarob has joined #openstack-infra16:22
*** vogxn has joined #openstack-infra16:22
*** dkranz has quit IRC16:22
mordredclarkb: one of the ways we can stop tripping over this setuptools/distribute thing16:22
* clarkb AFKs a bit to facilitate a move to the office16:22
mordredclarkb: is to get everything out of our pipeline that tries to install distribute16:22
clarkbmordred: basically kill distribute with fire?16:23
mordredwhich, at the moment, are pip installs of d2to1 and python-mysql16:23
mordredyeah16:23
mordredwe can safely install python-mysql from apt in both devstack and our machines that need it16:23
mordredand if we can get python-*client to cut new releases with the pbr dep updated to post d2to116:23
mordredI think we're out of the weeks16:23
mordredwoods16:23
mordredI'm thinking that instead of trying to solve setuptools globally, we just stop using things that trip the bug16:24
mordredand go back to system installed setuptools16:24
jeblairthat would be great16:24
mordredyeah - I think we've been chasing the wrong rabbit16:24
clarkb++16:25
clarkbcan we also work with mysql-python to fix their depend?16:25
reedinteresting... the UX team is evaluating using askbot to manage their discussions16:26
clarkbthat dependency on distribute has caused other problems16:26
mordredyup. well, I think mysql-python is actually safe for us right now. I have no idea when andy might cut another release16:27
mordredbut yeah16:27
*** yaguang has quit IRC16:27
salv-orlandoclarkb: good news. enikanorov found the nasty sys.exit16:28
clarkb\o/ can you link me to the review that fixes the problem? I am curious now16:30
*** odyssey4me has joined #openstack-infra16:32
salv-orlandoclarkb: enikanorov is working. The sys.exit merely uncovered the real reason for the exception16:33
reeddo you know if there is a special process that users need to follow in order to change their SSH key in gerrit?16:33
reedor just change the key and go push a review?16:33
salv-orlandoreed: settings --> ssh public keys?16:34
reedI have a user reporting that reviews are refused after changing the key16:34
fungireed: they go to https://review.openstack.org/#/settings/ssh-keys16:34
reedsalv-orlando, that's what I assumed16:34
pleia2do we want to allow http access to cgit, or force https for everything? came up in the add ssl review: https://review.openstack.org/#/c/40253/5/modules/cgit/templates/git.vhost.erb16:34
*** dkranz has joined #openstack-infra16:35
clarkbpleia2: for all other https capable hosts we force https16:35
*** BobBall is now known as BobBall_Away16:35
*** sdake_ has joined #openstack-infra16:35
pleia2clarkb: ah, makes sense then16:36
fungii'm torn. on the one hand we've got people who have trouble getting git+https through their work proxies reliably (ibm?), on the other hand https-everywhere is a compelling idea16:36
fungiand since we're adding git protocol too, which should be faster than plaintext http in theory, maybe the http benefits are not significant16:37
*** thomasbiege has quit IRC16:37
fungialso, having http and https copies of everything visible to crawlers means search results are diluted16:38
fungibecause they index it twive16:38
fungitwice16:38
*** sandywalsh has quit IRC16:39
*** UtahDave has joined #openstack-infra16:40
*** yaguang has joined #openstack-infra16:40
*** boris-42 has quit IRC16:41
*** fbo is now known as fbo_away16:43
*** zul has quit IRC16:43
openstackgerritElizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add httpd ssl support to git.openstack.org  https://review.openstack.org/4025316:46
*** gyee has joined #openstack-infra16:48
clarkbfungi: ya git protocol should be much better and I think we can recommend it in cases where authenticating the remote end isn't super important16:51
jeblair+116:52
jeblairthe devstack nodes on jenkins.o.o are almost run down; i'll increase the numbers on 01 and 02 now.16:52
Alex_Gaynorjeblair: can't we just yell at the tempest authors to make their tests faster? (snark)16:52
Alex_GaynorAlso, has anyone tried to quantify if openstack has the most used CI system in all of open source?16:53
jeblairAlex_Gaynor: we do that too ;)16:53
*** krtaylor has quit IRC16:53
Alex_Gaynorjeblair: yell, or quantify?16:53
clarkbAlex_Gaynor: making the tests faster means we need more nodes :) as we can run more tests in a shorter period of time16:53
jeblairAlex_Gaynor: a little of both?16:53
clarkbAlex_Gaynor: jeblair had a test hours per hour graph16:53
jeblairclarkb: it's not done yet, i'm pretty sure the last version was wrong16:54
fungijeblair: precise slaves are all moved to their respective new masters, and i've confirmed at least one completed job ran on each now16:54
jeblairAlex_Gaynor: but snark aside, what i'm actually doing is moving the load off of our old jenkins server and on to two newly minted jenkins masters.16:54
clarkbjeblair: it occurred to me that jenkins + gearman is what you would need to compete with travis >_>16:55
*** sandywalsh has joined #openstack-infra16:55
Alex_Gaynorjeblair: are we going to get a nice shiny SSL cert for 01 and 02?16:55
clarkbAlex_Gaynor: I think travisci as a whole does a lot more open source testing16:55
Alex_Gaynorclarkb: I guess OS is 2nd to "all of travis", which isn't half bad :)16:56
clarkbAlex_Gaynor: but on a per project basis I think we run more tests than any travisci project16:56
Alex_GaynorMaybe GCC, they have some crazy platform testing16:56
jeblairAlex_Gaynor: well, if it's important.... we've been doing a lot to make it so you don't actually have to visit jenkins most of the time...16:56
jeblairAlex_Gaynor: so given that, i didn't think it was worth the hassle, especially if we start spinning up more jenkins with more weird names...16:56
Alex_Gaynorjeblair: I often watch the stdout, since I find jenkins time estimates are... inaccurate16:56
Alex_GaynorBut if it's too much of a hassle, no worries16:57
*** vogxn has quit IRC16:57
jeblairwe could: ignore it (i have 'accepted this certificate permanently'); turn off https (but some of us still log in so it might be nice to avoid passing openid nonces in the clear); reverse proxy it from jenkins.o.o; or buy some certs.16:58
clarkbAlex_Gaynor: you can create an exception and trust us :)16:58
fungii think reverse-proxying the realtime log stream through status.o.o might make sense16:58
Alex_Gaynorclarkb: THat's what I did :)16:59
fungiAlex_Gaynor: TRUST US16:59
fungi;)16:59
*** yaguang has quit IRC16:59
*** sandywalsh has quit IRC17:00
*** dkranz has quit IRC17:00
*** ruhe has quit IRC17:04
mordredjeblair: we could also publish the ca that we use to allow people to choose to trust us if they wanted :)17:06
mgagneIs an additional right required to approve stuff on the stable/grizzly branch? ref.: https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/gerrit/acls/stackforge/puppet-modules.config17:07
sdagueclarkb: so is there anything that can be done in the logstash config to get > 24hr searches to work?17:10
*** odyssey4me has quit IRC17:11
jeblairmgagne: yes, there's a global rule that allows the release team exclusive access to the stable branches.  stackforge projects need an extra acl to override the override.17:11
jeblairs/release/stable-maint/17:12
mgagneI'll grep for it, thanks!17:12
jeblairmgagne: i don't think the all-projects acls have made it into puppet yet; you can see them here: https://review.openstack.org/gitweb?p=All-Projects.git;a=history;hb=refs%2Fmeta%2Fconfig;f=project.config17:12
openstackgerritKhai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file.  https://review.openstack.org/4045517:12
*** sandywalsh has joined #openstack-infra17:13
jeblairmgagne: also, documentation here may explain some of the reasoning: http://ci.openstack.org/gerrit.html#access-controls17:13
*** zul has joined #openstack-infra17:13
openstackgerritKhai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file.  https://review.openstack.org/4045517:14
mgagnejeblair: as a core member, I should therefore be able to review stuff?17:14
mgagnejeblair: is a reload of gerrit required?17:14
clarkbsdague: more bigger nodes (not likely to happen) or index less stuff17:15
jeblairmgagne: what project?17:15
sdagueclarkb: hmph, the shards aren't working out?17:15
*** derekh has quit IRC17:15
clarkbsdague also right now I realize I am not getting events from the new servers will fix in a bit17:16
*** dkranz has joined #openstack-infra17:16
sdagueok, yeh, I was just trying to confirm if that floating ips bug was seen on any non neutron jobs17:16
mgagnejeblair: stackforge/puppet-quantum17:16
clarkbsdague not really as the amount of data over a 24 hour period doesn't change17:16
sdagueit only shows up twice in the last 24 hrs, both neutron jobs17:16
clarkbso you can add shards but without a bunch of nodes there is little value17:17
*** Ryan_Lane has quit IRC17:17
clarkbsdague you can search older time periods just put upper and lower time bounds17:17
clarkbor search on particular indexes17:17
*** nijaba has quit IRC17:19
*** yaguang has joined #openstack-infra17:19
mordredmgagne: no, if you read the doc above, it will explain why17:19
mordredmgagne: you can override that inyour own project17:19
mgagnemordred: thanks, I'll reread the doc17:19
*** nijaba has joined #openstack-infra17:20
*** nijaba has quit IRC17:20
*** nijaba has joined #openstack-infra17:20
*** afazekas has joined #openstack-infra17:20
mordredmgagne: look at modules/openstack_project/files/gerrit/acls/openstack-dev/devstack.config for an example of a project that overrides it17:20
clarkbsdague we could try indexing only the gate to reign in the amount of data17:21
mgagnemordred: thanks, that's what I was looking for17:21
*** rfolco has quit IRC17:22
sdagueclarkb: hmmmm.... what if we discarded the DEBUG logs in the services17:22
sdaguebecause I don't think in the base case we're going to need that17:23
*** amotoki has quit IRC17:23
sdaguewe'll punch back out to the real logs for DEBUG17:23
*** nati_ueno has joined #openstack-infra17:23
openstackgerritMathieu Gagné proposed a change to openstack-infra/config: Allow puppet-manager-core to review stable branches  https://review.openstack.org/4045617:23
clarkbsdague we can do that17:24
zarojenkins01 is exciting, but orig jenkins looks boring now. why so?17:24
clarkbI will write that patch17:24
clarkbzaro we are killing it17:25
zaroclarkb: to be replaced with new one? jenkins02?17:26
*** odyssey4me has joined #openstack-infra17:26
*** Ryan_Lane has joined #openstack-infra17:27
clarkbyes17:27
*** dina_belova has quit IRC17:28
zaroahh.. can't wait!17:28
*** thomasm has joined #openstack-infra17:30
*** krtaylor has joined #openstack-infra17:31
*** SergeyLukjanov has quit IRC17:32
*** SergeyLukjanov has joined #openstack-infra17:33
*** ruhe has joined #openstack-infra17:33
*** ruhe has quit IRC17:34
*** SergeyLukjanov has quit IRC17:34
fungizaro: for the near term, untrusted slaves will be split between jenkins01 and jenkins02 (odds and evens), while jenkins.openstack.org will continue to handle trusted slaves (proposal, pypi, mirrors, et cetera)17:34
fungithough it sounds like jenkins.openstack.org will likely also get a rebuilt replacement to move the trusted slaves onto17:35
*** Ryan_Lane has quit IRC17:36
harlowja_woah lots of scrollback, ha17:37
harlowja_i blame mordred17:37
harlowja_ha17:37
*** dina_belova has joined #openstack-infra17:38
zarofungi: i was wondering about that, thnx.17:38
*** yaguang has quit IRC17:38
*** dina_belova has quit IRC17:40
*** vipul is now known as vipul-away17:40
fungimordred: clarkb: i may have missed part of the discussion, but did we have a suggested solution to the broken pip installation on our centos6 slaves? reinstall python-pip from rpm?17:43
mordredfungi: oh - no, we got sidetracked17:43
mordredfungi: there is a script in devstack that does the right thing... but I'm not really sure that's what we want to do on our centos slaves17:44
mordredlet me think about it for another second17:44
fungirpm -qa shows python-pip installed, but no pip executable in the path17:45
dtroyerfungi: I've been testing a script to encapsulate all of that, try https://review.openstack.org/#/c/39827/17:45
dtroyertools/install_pip.sh17:45
fungimmm, well these aren't devstack machines, but maybe17:46
*** ftcjeff has quit IRC17:46
openstackgerritClark Boylan proposed a change to openstack-infra/config: Fix logstash.o.o elasticsearch discover node list.  https://review.openstack.org/4045917:46
fungiif this is generally useful outside of devstack, maybe it needs a different home17:46
dtroyermaybe…it uses devstack's functions file but could be re-done standalone17:47
*** ruhe has joined #openstack-infra17:47
dtroyerhmmm…that one is old17:47
dtroyerI just pushed up my current one.  it installs pip from a tarball as get-pip.py was not reliable over the weekend17:48
*** dolphm has quit IRC17:51
openstackgerritKhai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file.  https://review.openstack.org/4045517:51
openstackgerritJames E. Blair proposed a change to openstack-infra/gear: Server: make job handle safer  https://review.openstack.org/4046217:51
clarkbfungi: mordred for our centos slaves maybe we should prioritize the puppet refactor17:51
*** edleafe has left #openstack-infra17:51
clarkband fix this across the board?17:52
mordredclarkb: can you expand on "prioritize the puppet refactor"17:52
mordredtoo many balls in air - I don't know which thing you mean there17:52
openstackgerritJames E. Blair proposed a change to openstack-infra/gear: Server: make job handle safer  https://review.openstack.org/4046217:53
clarkbmordred: if we refactor the setuptools/pip madness out of our base manifest we can apply it to the static slaves without breaking d-g17:53
mordredah - right17:53
clarkbmordred: basically fix it in a generic way in puppet but only where we need it so that d-g and devstack are still independently testable17:53
*** salv-orlando has quit IRC17:53
mordredyes. I agree. in fact, I don't think we need it on our static slaves17:53
clarkbsdague: logstash should now be talking to all three jenkins masters. Will work on the filtering of debug messages shortly17:54
jeblairdon't need what on our static slaves?17:54
clarkbjeblair: I think we do as it would correct the centos problems aiui17:55
jeblair(i'm having trouble following the conversation because mordred said he agreed with clarkb, but appeared to contradict him)17:55
openstackgerritElizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add replication of git from gerrit to git.o.o  https://review.openstack.org/3779417:55
clarkbjeblair: I think he did that17:56
jeblairokay, i'll check back in a minute and see if this conversation got any more linear.17:56
sdagueclarkb: coolness17:56
clarkbsdague: do you know if the oslo.config log levels that are not standard are documented somewhere?17:56
clarkbsdague: so that I can drop DEBUG and below17:56
*** dkranz has quit IRC17:57
clarkbalso can I just say that doing non standard python logging log levels seems like a bug17:58
mordredclarkb, jeblair: I agree that we should do the refactor which should allow us to apply the pip/setuptools to things where they are needed17:58
*** pabelanger has quit IRC17:58
mordredseparately, I do not believe that we need the pip/setuptools fix on our static slaves17:59
clarkbmordred: can you explain the reason you don't believe that is necessary?17:59
openstackgerritElizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add replication of git from gerrit to git.o.o  https://review.openstack.org/3779417:59
sdagueclarkb: I thought it was standard somewhere17:59
*** dolphm has joined #openstack-infra18:00
*** boris-42 has joined #openstack-infra18:00
mordredclarkb: I do not believe we are performing global actions on those machines that would cause setuptools/pip breakage18:01
*** dina_belova has joined #openstack-infra18:02
*** afazekas has quit IRC18:02
*** vipul-away is now known as vipul18:02
*** sarob has quit IRC18:03
*** sarob has joined #openstack-infra18:03
jeblairmordred: aside from running pip globally in puppet, which eventually always leads to breakage, which is why we want to replace the use of system pip with packages on our slaves.18:03
mordredjeblair: yes18:04
*** markmcclain has joined #openstack-infra18:05
clarkbsdague: python logging only does DEBUG INFO WARNING ERROR CRITICAL18:05
clarkbsdague: oslo.config does TRACE and AUDIT too18:05
clarkbs/config//18:06
clarkbI don't know why I wanted to say config there, it is in oslo logging which is still in incubation18:07
*** koolhead17 has joined #openstack-infra18:07
*** sarob has quit IRC18:07
koolhead17can some one help me with finding monty`s nick. I keep forgetting it18:07
clarkbhttps://github.com/openstack/oslo-incubator/blob/master/openstack/common/log.py#L163-L16718:08
clarkbkoolhead17: mordred18:08
koolhead17clarkb, aah thanks. stupid me18:08
*** melwitt has joined #openstack-infra18:09
clarkbsdague: looks like the new levels are all above the debug level18:09
clarkbso I don't have to worry about them in this case18:09
*** dkranz has joined #openstack-infra18:10
sdagueclarkb: yep18:12
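clarkb's point — that oslo's extra levels are bolted onto stdlib logging and sit above DEBUG, so dropping DEBUG-and-below leaves them intact — can be sketched with stdlib alone. AUDIT = INFO + 1 mirrors the linked oslo-incubator log.py; TRACE is registered the same way (its exact numeric value is not shown in this sketch).

```python
import logging

# Register a non-standard AUDIT level the way the linked oslo code
# does: pick a numeric value and give it a name. INFO + 1 places it
# above DEBUG, so a DEBUG cut-off never filters it.
AUDIT = logging.INFO + 1
logging.addLevelName(AUDIT, "AUDIT")

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)  # DEBUG records are dropped at this level

# AUDIT records survive the DEBUG cut-off because INFO + 1 > DEBUG
print(logging.getLevelName(AUDIT), AUDIT > logging.DEBUG)  # → AUDIT True
```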
*** SergeyLukjanov has joined #openstack-infra18:16
openstackgerritJames E. Blair proposed a change to openstack-infra/gearman-plugin: Add OFFLINE_NODE_WHEN_COMPLETE option  https://review.openstack.org/4046818:16
jeblairclarkb, mordred: ^ the beginning of the end for the devstack inprogress/complete jobs.  :)18:17
bodepdclarkb: should I submit a patch to not install setuptools as a pip?18:19
*** nijaba has quit IRC18:19
bodepdclarkb: that is the thing that was causing errors from the pip class18:19
clarkbbodepd: I think we already removed that18:20
*** nijaba has joined #openstack-infra18:20
clarkbbodepd: it caused problems with devstack gate. Unless we do it in multiple places18:20
mordredjeblair: well - so I think the global breakage when using pip globally is the thing we decided to chase the other rabbit on18:21
mordredjeblair: as in, actually, nothing we install _should_ be trying to upgrade setuptools18:21
mordredand now that new pbr is out that doesn't use d2to1, that should be the case with all of the software that we do for ourselves18:21
mordredwhich means system apt/yum installed setuptools should be fine18:22
*** ruhe has quit IRC18:22
jeblairok that makes me happy.  :)18:22
bodepdclarkb: oh. that must have been recently...18:23
mordredjeblair, clarkb, bodepd: I think we got down a rabbit hole, and I'd like to try to unwind the complexity18:24
bodepdclarkb mordred jeblair I am probably good to go18:24
mordredcool18:24
bodepdit looks like that resource was already removed18:24
bodepdI still have a jenkins-job reload failure, but I'll sort that out today18:25
*** dina_belova has quit IRC18:25
openstackgerritA change was merged to openstack-infra/config: Add more jobs for Savanna projects  https://review.openstack.org/3798718:26
fungimordred: so... just yum reinstall python-pip on those slaves for now and we should be fine?18:28
mordredfungi: let's call that a yes18:29
fungimordred: clarkb: jeblair: i'm happy to patch up the system pip on centos6-dev1 in that case, and then follow up on the production ones if that works out18:31
dstufftmordred: do you use pip install -U18:32
mordreddstufft: for?18:32
dstufftinstalling things globally18:32
dstufftin openstack infra18:32
fungioh, though right now that server's got its own separate issues... "Failed to apply catalog: Invalid parameter psql_path at /etc/puppet/modules/postgresql/manifests/role.pp:54"18:32
mordreddstufft: we use the puppet pip module18:32
*** harlowja_ has quit IRC18:32
dstufftsorry I'm only catching the tail end of things18:32
mordreddstufft: it's all good... it's a long and confusing tail18:33
*** ftcjeff has joined #openstack-infra18:33
mordreddstufft: new theory - stop trying to upgrade setuptools everywhere, because it's a nightmare18:33
dstufftbut if anything depends on setuptools at all, and you execute ``pip install -U something-with-a-dep-on-setuptools-somewhere-in-dep-graph`` it will try to upgrade setuptools18:33
dstufftbecause pip recursively upgrades18:33
mordreddstufft: instead, stop installing things via pip that depend on setuptools or distribute18:33
mordredright18:33
mordredI believe the only things we've been installing so far that depend on setuptools|distribute18:33
dstufftmordred: ok, just making sure that you know the recursive behavior18:34
mordredare things that were using pbr, which was depending on d2to1 which depended on distribute18:34
mordredbut we killed d2to118:34
*** morganfainberg has quit IRC18:34
mordredso I _believe_ we're in good shape on our servers18:34
dstufftare you writing plain setup.py files now?18:34
mordredgod no18:34
*** morganfainberg has joined #openstack-infra18:34
*** chuck__ has joined #openstack-infra18:34
mordredthat's a disaster and crazy18:34
*** dina_belova has joined #openstack-infra18:34
mordredpbr just no longer depends on d2to118:34
*** zul has quit IRC18:34
dstufftAre you parsing the setup.cfg yourself?18:35
*** harlowja has joined #openstack-infra18:35
mordredyup. we merged d2to1 code directly in to our  then fixed it up when upstream d2to1 would not respond to patches removing the distribute dep18:35
*** rfolco has joined #openstack-infra18:35
openstackgerritJames E. Blair proposed a change to openstack-infra/config: Add jenkins-job-builder-core group  https://review.openstack.org/4047018:36
dstufftgotcha18:36
mordredsince I talk to you all the time, I think that gives us more ability to morph and be compliant as the world in general progresses18:36
dstufftmakes sense18:36
dstufftmordred: ALso how come no SSL on pypi.openstack.org18:37
clarkbdstufft: because when we built it pip and easy install didn't care :)18:37
dstufftclarkb: makes sense18:38
mordreddstufft: good point18:38
dstufftclarkb: so who do I bug to make it forced ssl? ;P18:38
mordredclarkb: we should fix that18:38
*** chuck__ is now known as zul18:39
mordreddstufft: us18:39
openstackgerritA change was merged to openstack-infra/jenkins-job-builder: Added some more scm options  https://review.openstack.org/3929818:39
dstufftmordred: *bugs*18:39
dstufft:)18:39
mordredjeblair: when you get a chance, can you buy an ssl cert for pypi.openstack.org ?18:40
* mordred is assuming that jeblair is the ssl cert buying ninja18:43
jeblairmordred: why?18:44
mordredjeblair: don't you normally do it?18:46
jeblairmordred: no i mean why ssl for that host?18:46
mordredso that pip installs from that host can be assured that they aren't being MITM'd now that dstufft has added ssl support to pip for us18:47
*** sarob has joined #openstack-infra18:47
*** sarob has quit IRC18:47
mordredalso, pip 1.5 is going to be ssl only, iirc18:47
*** sarob has joined #openstack-infra18:48
dstufftNah it won't be SSL only, but there will be scary messages if your index doesn't have SSL18:48
*** dina_belova has quit IRC18:48
openstackgerritlin-hua-cheng proposed a change to openstack/requirements: Add support for Keystone V3 Auth in Horizon.  https://review.openstack.org/3977918:50
jeblairmordred: i'm a big fan of ssl, but this is getting to be a death by a thousand cuts.  if we want to start adding ssl to non-user-facing hosts, we may want to consider an alternate setup.18:50
*** dina_belova has joined #openstack-infra18:51
mordredjeblair: I believe openstack devs consume pypi.openstack.org from time to time because it's so much faster ... but I'm also fine with talking about different approaches18:51
jeblairmordred: we'd need to split that out into another host to get a new ip address18:54
mordredjeblair: ok. let's come back to it later then18:55
jeblairmordred: i'd like to defer this until we've solved some of our more pressing problems.18:55
mordred++18:55
openstackgerritClark Boylan proposed a change to openstack-infra/config: Don't index logs with DEBUG log level.  https://review.openstack.org/4047418:55
clarkbsdague: ^ testing that took far too much time... but I have locally tested it so it should work18:55
mordredjeblair, clarkb: re global pip - we'll also need the new release of python-swiftclient released, since that currently also depends on d2to118:55
mordredand we install that as a dep of things18:55
mordredbut that should be coming real soon now18:55
clarkbok good to know18:56
mordredpython-novaclient has been fixed already18:56
clarkbjeblair: mordred: we could possibly use a floating IP for the pypi.o.o IP18:56
clarkbwe should probably start trying to use those things where sensible18:56
clarkbbut yes focus on more pressing needs18:56
jeblairmordred: are there stable branches of things that will break due to d2to1 issues?18:57
openstackgerritDan Bode proposed a change to openstack-infra/config: Add puppet-pip  https://review.openstack.org/3983318:57
clarkbjeblair: I don't think so as d2to1 and pbr have been a havana thing right? or did we sneak it in at the end of grizzly?18:57
jeblairclarkb: last i checked, rax didn't have floating ips.18:57
clarkb:(18:57
jeblairclarkb: that was a long time ago.18:57
mordredjeblair: what clarkb said18:58
clarkbmeeting time19:00
*** UtahDave has quit IRC19:00
*** krtaylor has quit IRC19:01
*** vijendar has quit IRC19:02
*** vijendar has joined #openstack-infra19:02
hub_caphey guys im going to tag and bag the troveclient, and i had a quick question. i follow https://wiki.openstack.org/wiki/GerritJenkinsGithub#Tagging_a_Release right? but just pull from master and tag off that?19:03
mordredyup19:03
mordredmake sure you use git tag -s19:03
mordred:)19:03
hub_capcoo. and the tag is _just_ the version?19:03
clarkbhub_cap: make sure your tip of master matches github/gerrit19:03
*** vijendar has quit IRC19:03
mordredhub_cap: yup19:03
hub_capyes yes with the gpg stuff!19:03
mordredso, git tag -s 1.2.319:03
mordredand then git push gerrit 1.2.319:04
mordredand you're gold19:04
hub_capmellow gold?19:04
hub_capclarkb: you threw me a wrench. how do i go about doing that?19:04
hub_capps said wrench was caught in my beard. it might be gone forever19:04
clarkbhub_cap: make sure the commit sha1 matches the commit sha1 on github19:04
clarkbhub_cap: which will be the case if you have kept your local master pristine, e.g. no local dev or merges19:05
hub_capoh ya iz gonna check out fresh to make sure19:05
nati_uenoclarkb: Hi How can I rename quantum to neutron in this page http://status.openstack.org/reviews/ ?19:08
pleia2nati_ueno: the script is called reviewday, hang on19:09
pleia2nati_ueno: https://github.com/openstack-infra/reviewday19:09
nati_uenopleia2: gotcha!19:09
hub_capclarkb: mordred do the tag msgs matter at _all_19:09
pleia2nati_ueno: the bin/reviewday file in there is where they are defined19:10
clarkbhub_cap: I like to put something in there. They show up if you git show $tag19:10
hub_capi just noticed that19:10
hub_capthe last troveclient says19:10
hub_capA19:10
nati_uenopleia2: it looks easy fix. I'll send a patch19:10
hub_capclassy19:10
pleia2nati_ueno: yep! thanks19:10
nati_uenopleia2: Thanks!19:11
*** cp16net has quit IRC19:12
*** cp16net has joined #openstack-infra19:13
markmcclainOk running into a strange dependency problem… this failed to merge https://review.openstack.org/#/c/37461/19:13
markmcclainyet we're getting unit tests failing with requests=1.2.3 installed19:13
markmcclainhttps://jenkins02.openstack.org/job/gate-neutron-python27/4/console19:14
openstackgerritNachi Ueno proposed a change to openstack-infra/reviewday: Rename quantum to neutron  https://review.openstack.org/4048019:15
clarkbmarkmcclain: enikanorov and salv-orlando were fixing that19:15
clarkbmarkmcclain: there is apparently an error and a bad sys.exit()19:16
markmcclainno the error is in entrypoints loading19:16
markmcclainand mismatched deps19:16
markmcclainwhich triggers the exit()19:16
mordredhrm19:17
markmcclainlatest nova client is 1.2.2 but somehow 1.2.3 is getting installed19:17
markmcclainlooks like we have 1.2.3 in the mirror19:18
clarkboh new fail I should look closer apparently19:18
*** enikanorov_ has joined #openstack-infra19:18
clarkbfungi: ^19:18
enikanorov_hi folks19:18
SergeyLukjanovthere are both 1.2.2 and 1.2.3 versions in mirror - http://pypi.openstack.org/openstack/requests/19:18
SergeyLukjanovbtw we have the same problem with requests 1.2.3 in savanna19:19
*** anteaya has quit IRC19:19
enikanorov_oh, i see you're already discussing this19:19
*** zul has quit IRC19:19
*** prad has joined #openstack-infra19:19
*** gyee has quit IRC19:19
*** nijaba has quit IRC19:20
*** nijaba has joined #openstack-infra19:20
*** nijaba has quit IRC19:20
*** nijaba has joined #openstack-infra19:20
mordredso - we just rolled out new stuff in devstack19:22
clarkbmordred: this is unittests19:23
mordredyah. just making sure19:23
clarkbI think it is related to removing the requests upper bound19:23
clarkbas nova fixed their issues19:23
clarkbapparently other projects aren't so happy with it19:23
mordredah. interesting that they break in unittests but not in devstack :(19:26
*** krtaylor has joined #openstack-infra19:27
mordredmarkmcclain: we're trying to get more and more testing on openstack/requirements to prevent us from getting into these, but I'm guessing there are still a couple to sort out19:27
markmcclainyeah. was just wondering if the failed merge left it in a strange state19:28
*** dolphm has quit IRC19:28
clarkbmarkmcclain: it shouldn't as Gerrit should handle that cleanly19:30
clarkbit is possible that requests 1.2.3 snuck in transitively19:30
*** vijendar has joined #openstack-infra19:31
markmcclainyeah.. I'm guessing something required it uncapped19:31
clarkboh you know what19:32
markmcclainkeystone client is uncapped and nova is capped19:32
*** prad has quit IRC19:32
*** prad has joined #openstack-infra19:32
clarkbmarkmcclain: at this point you may need to cap in neutron and -1 37461. I will remove my approval and +2 from there19:33
mordredrequests>=1.1,<1.2.3 is the thing in openstack/requirements19:33
mordredwe need to get all of the client libs synced with openstack/requirements asap19:33
mordredalthough I did just request that everyone do that19:33
*** vipul is now known as vipul-away19:34
clarkbmarkmcclain: my approval is gone so we won't directly break you19:34
mordredso hopefully they will comply soon19:34
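The specifier mordred quotes (`requests>=1.1,<1.2.3`) can be sanity-checked mechanically; here is a minimal sketch using setuptools' `pkg_resources`, which was the usual way to evaluate requirement strings at the time. The requirement string is the one from openstack/requirements above; everything else is just illustration.

```python
from pkg_resources import Requirement

# The pin from openstack/requirements quoted in the discussion above.
req = Requirement.parse("requests>=1.1,<1.2.3")

# A bare version string tested with `in` is checked against the specifier.
print("1.2.2" in req)  # True  - satisfies the upper bound
print("1.2.3" in req)  # False - this is the version the mirror served anyway
print("1.1" in req)    # True  - lower bound is inclusive
```

This is why an uncapped transitive dependency still pulls in 1.2.3: the cap only constrains installs that actually evaluate this requirement.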
mtreinishmordred: so devstack isn't working for me with the current master: http://paste.openstack.org/show/43347/19:34
*** anteaya has joined #openstack-infra19:34
mtreinishit looks like it's installing the old version of jsonschema for glanceclient and openstackclient and using the new correct version for glance19:34
mtreinishbut when glance launches it uses .8 and fails19:35
mordredwhy is it installing old versions of anything?19:35
markmcclainclarkb: it is better to do that or just update novaclient which is still capped?19:36
mtreinishmordred: don't know, I didn't think it should be, but that looks like what is happening19:36
sdaguemordred: yeh, things are weird for sure19:36
clarkbmarkmcclain: if you can get a proper fix through quickly I would go with that19:37
clarkbmtreinish: I just wouldn't count on that being speedy19:37
mordredmarkmcclain: wait - what state do we _want_ to be in?19:37
markmcclainI'm happy with them uncapped19:38
mordredmarkmcclain: do we want to be in the state where everyone is uncapped and we're installing 1.2.3?19:38
mordredgreat19:38
mordredso what we want is to get requirements updated, which might mean fixing code somewhere first?19:38
mordredand then getting the clients to sync requirements?19:38
markmcclainright19:38
markmcclainthe only client that caps is nova19:38
markmcclainI pushed this up: https://review.openstack.org/#/c/40483/19:39
fungimarkmcclain: the revert in nova happened in https://review.openstack.org/#/c/38012/19:39
fungirationale is in that commit message and the commit message of the commit it was reverting (including links to the bug)19:40
mordredbut I thought 1.2.3 is breaking neutron?19:41
markmcclain1.2.3 is breaking because of transitive deps19:41
markmcclainentrypoints picking up the nova limit19:41
markmcclainand then when the obj is loaded the installation has changed to a newer version19:42
mordredAH19:42
mordredI fully understand the problem now19:42
mordredso we _do_ want to land the requirements update, and then we want to get novaclient to sync it, and then honestly you want novaclient to do another release19:43
mordredbecause that's the only thing that's going to unbreak your unittests19:43
markmcclainwell I don't think we can now19:43
markmcclainfungi linked the revert from aug 119:43
markmcclainbut that was in nova server not nova client19:44
clarkbI think capping in the near future will unblock you, then you can do what mordred said as things fall into place19:44
clarkbunless you can get a couple things to happen really quick19:44
markmcclainrussellb around?19:44
fungii can read the scrollback in more detail after the meeting concludes19:44
russellbmarkmcclain: yep19:44
mordredthen - I think sdague and I will need to spend some time thinking about how to avoid this19:45
sdaguemordred: ok, looking at scrollback, what's the new issue?19:45
markmcclainrussellb: tl;dr capped version of requests in novaclient is transitively breaking things19:45
mordredsdague: trapping for incompat requirements things around client libs and unittests19:46
markmcclainrussellb: thoughts on moving this through quickly: https://review.openstack.org/#/c/40483/19:46
mordredsdague: it might be something that will shake out once we're up to date with requirements on a more frequent basis19:46
sdagueright, this is pushing back towards running unit tests on devstack nodes, right?19:46
mordredmaybe - or maybe not19:46
mordredI'm not sure I have thought about it long enough to know whether or not I think something new needs to change19:47
mordredor whether we just need to give the current changes we just made time to percolate19:47
sdagueso.... ceilometer pulls the upstream master tarball for nova in its unit tests19:47
mordredthat's a whole other issue19:47
mordredalthough also needs addressing19:47
russellbmarkmcclain: sure, if that went into global requirements, that's fine19:48
russellbmordred: sdague ^  yes?19:48
mordredI think that seems correct19:48
openstackgerritKhai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file.  https://review.openstack.org/4045519:48
sdagueright, so we land this:  https://review.openstack.org/#/c/40483/1, then get everyone to update19:49
*** boris-42 has quit IRC19:50
sdaguehmmmm.... isn't zuul supposed to be running jobs on that?19:50
enikanorov_it is19:50
sdagueoh, sorry, that's a python-novaclient change19:51
sdagueso global-requirements is still capped in master19:51
*** _TheDodd_ has quit IRC19:51
*** dolphm has joined #openstack-infra19:51
clarkbsdague: yes, 37461 needs a rebase19:52
sdagueclarkb: ok, let me do that19:52
sdagueoh, right, everyone has now modified the wrong file19:53
*** UtahDave has joined #openstack-infra19:53
*** ftcjeff has quit IRC19:54
*** vipul-away is now known as vipul19:54
openstackgerritMatthew Treinish proposed a change to openstack-dev/pbr: Add option to run testr serially  https://review.openstack.org/3981119:54
openstackgerritSean Dague proposed a change to openstack/requirements: removal invalid pin of python-requests<=1.2.2  https://review.openstack.org/3746119:54
sdagueok, so that should give us test runs to see if it works19:55
*** _TheDodd_ has joined #openstack-infra19:55
*** odyssey4me has quit IRC19:56
*** dina_belova has quit IRC19:58
*** odyssey4me has joined #openstack-infra19:58
sdaguemordred: so something about the latest pip-ness has meant that we are no longer overwriting local versions it looks like19:58
sdaguewhich we used to do19:58
openstackgerritDan Bode proposed a change to openstack-infra/config: Add puppet-pip  https://review.openstack.org/3983319:59
*** dprince has quit IRC20:00
anteayajeblair: thanks for shoehorning me in20:01
clarkbI didn't get to sneak this in but I will probably be AFK for much of Thursday and definitely most of Friday20:01
jeblairanteaya: sorry we didn't have much time20:01
anteayaI'll see if I can mix up a 1.5 upgrade patch for storyboard20:01
anteayajeblair: no worries20:01
mordredsdague: can you point me to something?20:01
* jeblair points mordred to sdague20:02
*** rfolco has quit IRC20:02
anteayayou have a lot to cover20:02
anteayathat was about all I needed anyway20:02
anteayattx said he isn't opposed to a 1.5 patch, so that is something20:02
clarkbanteaya: maybe start with trying to support both like horizon?20:02
ttxIf the choice is =1.4 or >=1.5.1, I prefer the latter20:03
clarkbhorizon has an example of running tests to check both20:03
anteayaclarkb: can you expand that comment?20:03
ttxbut I think >=1.4 is possible too20:03
clarkbanteaya: it is possible to support 1.4 and 1.5 concurrently. Horizon does this. They enforce it with two unittest jobs. One runs with 1.4, the other 1.520:03
anteayaah I did not know, I will look at horizons tests, I have to do that anyway20:03
clarkbanteaya: check out the tox.ini20:03
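A sketch of the tox.ini approach clarkb is pointing at, with one test environment per Django release so both stay gated. The env names and version pins here are hypothetical; Horizon's actual tox.ini is the authoritative example.

```ini
# Hypothetical fragment in the spirit of Horizon's setup: each env pins
# a different Django series, so a change breaking either one fails a job.
[testenv:py27dj14]
basepython = python2.7
deps = Django>=1.4,<1.5
commands = python manage.py test

[testenv:py27dj15]
basepython = python2.7
deps = Django>=1.5,<1.6
commands = python manage.py test
```

CI then just runs `tox -e py27dj14` and `tox -e py27dj15` as two separate jobs.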
clarkbfungi: will you be working from seattle?20:04
anteayattx ack >=1.5.1 is preferred to =1.420:04
fungiclarkb: yeah, though hours will be fragmented i expect since we'll want to do some touristy things during the day20:04
*** lcestari has quit IRC20:04
clarkbcool, we should definitely grab beers if you can swing it20:04
fungiclarkb: so might be mornings and late nights hacking to accommodate that20:04
anteayaclarkb: I will, thank you. though since we are early in the game, would there be much fallout if we just supported >=1.5.1?20:05
fungiclarkb: definitely. any time you're free that week. zaro too20:05
clarkbanteaya: the problem with that is it may restrict your potential install base to people willing to install from source or build their own packages20:05
clarkbanteaya: not an issue for us, but if you want to get outside contributors it may become important20:05
zarofungi: thought we were going to computer museum?20:06
anteayaclarkb: what would happen if we supported >=1.5.1 and welcomed anyone who wanted to support =1.4?20:06
fungizaro: that's on the to do list for that week too20:06
openstackgerritDan Bode proposed a change to openstack-infra/config: Add puppet-pip  https://review.openstack.org/3983320:06
clarkbfungi: early in the week will probably be better. I have a second round of family/friend stuff beginning the 23rd, but I can do whenever20:06
fungizaro: so we can always work in computer museum and beer in one time slot20:07
anteayasince we have NO storyboard tests, it isn't like we currently have created an expectation that we are dropping20:07
anteayawell no storyboard tests of consequence20:07
clarkbanteaya: that is doable too, but if you have to put in extra work it might scare some people away20:07
zarofungi: wfm20:07
clarkbanteaya: I don't think there is a solid answer, just things to consider :)20:07
fungiclarkb: that works better for us anyway, because there are more people from our cruise group showing up later in the week as we get closer to departing, and they'll have stuff they want to do anyway20:07
clarkbelysian fields is down that way >_>20:07
anteayaclarkb: fair enough20:07
anteayaconsidering my django foo at present, I want to keep the focus as narrow as possible20:08
anteayait increases my ability to actually have some output20:08
anteayarather than getting lost in documentation20:08
sdaguemordred: what's up?20:08
sdaguesorry, was in mtreinish's office20:08
clarkbfood time. back in a bit20:09
mordredsdague: god. I've already forgotten20:09
sdagueso warlock 0.8.1, which is satisfied by g-r, requires incompatible jsonschema20:09
mordredand you have offices?20:09
clarkbmordred: I was just about to ask that same question20:09
sdagueyeh, he's 3 doors down from me20:09
mordredsdague: oh - - yeah - can you link me to the pip based problem evil?20:09
*** krtaylor has quit IRC20:10
sdagueI think mtreinish had the dump20:10
*** odyssey4me has quit IRC20:11
openstackgerritA change was merged to openstack-infra/config: Add Python 3.3 PyPI mirror jobs  https://review.openstack.org/3999920:11
fungii'll watch those ^ later this evening when they kick off20:12
openstackgerritA change was merged to openstack-infra/config: Make the python33 template part of python-jobs  https://review.openstack.org/4032120:12
mgagneI would like a review on this change: https://review.openstack.org/#/c/40456/20:12
fungiand that ^ one frees us up to start running the empty pyXX jobs for git-review changes now20:12
*** dolphm has left #openstack-infra20:12
openstackgerritSean Dague proposed a change to openstack/requirements: Raise warlock requirement  https://review.openstack.org/3761620:13
sdaguemordred: honestly, I think part of the problem is we've had a pretty big backup on requirements review20:14
sdaguejd__ is really the only person who's been reviewing20:14
fungimgagne: lgtm20:14
sdagueI'm going to run around and rebase things which should probably land20:14
jd__hm, can I help?20:14
openstackgerritA change was merged to openstack-infra/config: Ensure /var/lib/zuul is owned by zuul  https://review.openstack.org/3961220:14
fungijd__: apparently you already do help20:14
jd__fantastic!20:15
fungiyes, i agree20:15
*** psedlak has quit IRC20:16
mordredsdague: I've been waiting on them until we got the gating in place20:17
sdagueyep, that's cool20:17
sdaguethey all need rebases though20:17
sdagueand now we'll get test results!20:17
mordredw00t20:18
mordredzaro: next time you hack on gerrit ...20:19
*** nijaba has quit IRC20:20
mordredI think that topic links defaulting to creating a link that also includes the project name is crazypants20:20
openstackgerritSean Dague proposed a change to openstack/requirements: Removes the cap for SQLAlchemy  https://review.openstack.org/3803520:21
*** nijaba has joined #openstack-infra20:21
*** nijaba has quit IRC20:21
*** nijaba has joined #openstack-infra20:21
sdaguealso, auto merge is funny20:21
sdagueand in the sqla case changed tests/files/gr-base.txt instead of global-requirements.txt20:21
sdaguemaybe we should take these in batches, as I wasn't smart enough to rebase them all on each other20:22
mordredhaha20:25
dstufftmordred: jeblair just catching up on things, I just assumed Openstack had a wildcard cert :/20:28
jeblairdstufft: if we did, i'm not sure we'd put it on some of the hosts we run (at least, not yet)20:28
jeblairthose things are dangerous if they get loose.  :)20:29
dstufftjeblair: heh, yea I tend to run a single host that unwraps TLS and use internal networks (or self signed certs) going from that single node to everything else20:30
*** dkliban_afk has quit IRC20:30
*** sdake has quit IRC20:31
fungidstufft: what are "internal" networks? ;)20:31
jeblairdstufft: yeah, that's one of the things i think we should consider if we want to do https more places.  might be easier if we had access to www.openstack.org and openstack.org webservers.20:31
jeblairfungi: tor hidden services!  mordred: ;)20:32
fungifor us, internal networks would be ptp vpn20:32
fungiheh, onion routing ftw!20:32
dstufftfungi: that's the "or self signed certs" option ;P20:32
fungidstufft: self-signed certs steal food from starving certificate authority executives20:32
dstufftgood, they're assholes anyways20:33
fungihow will they ever make their car payments now? what about the mortgage on their third vacation home?20:33
*** dkliban_afk has joined #openstack-infra20:35
bodepdridiculously vague jjb question20:35
bodepdI'm trying to install/configure jenkins with jjb in one run20:36
bodepdit always fails during the puppet run, but then always works as soon as I invoke manually from the machine20:36
mgagnebodepd: logs?20:36
bodepdjenkins-jobs update /etc/jenkins_jobs/config20:36
jeblairbodepd: maybe jenkins hasn't really started yet (it takes a while?)20:36
mgagnebodepd: did you find a way to get the API token after the Jenkins installation?20:37
bodepdhttps://gist.github.com/bodepd/616838220:37
bodepdI also added a 15 second sleep before jenkins-jobs update20:38
mgagnebodepd: jenkins takes time to start and shows "Jenkins is preparing to work..." for a while before being available20:38
bodepdin the exec20:38
*** ^demon|away is now known as ^d20:38
bodepd20 seconds though?20:38
*** sarob has quit IRC20:38
bodepdI guess I could bump it up to a whole minute and try it out20:39
mgagnebodepd: just to make sure it's this problem and not something else20:39
bodepdmgagne: I am using the password (and setting that with some nasty XML templates)20:39
bodepdmgagne: I can also set the api key via XML template20:39
jeblairbodepd, mgagne: depending on whether you are trying to manage an existing install, or bootstrap a new one, you can use a predetermined api key20:39
fungimore complication, but maybe you want something retrying to query the api endpoint for a configurable length of time20:39
jeblairbodepd: ah, you're probably onto this then.  but yeah, if you supply 'secret.key' and the user xml file for the jjb user, you can set all that up beforehand20:40
*** sdake has joined #openstack-infra20:40
*** sdake has quit IRC20:40
*** sdake has joined #openstack-infra20:40
jeblair(assuming you have already obtained those from an existing jenkins install; doesn't help the new user who is spinning one up for the first time)20:41
jeblair(supporting that would probably require implementing jenkins brain-dead encryption protocol in puppet)20:41
*** sdake has quit IRC20:41
*** sdake has joined #openstack-infra20:41
*** sdake has quit IRC20:41
*** sdake has joined #openstack-infra20:41
fungijenkins rolled its own encryption protocol? i'm afraid to look20:42
mordredand then you can hit an api endpoint, and if it responds, then jenkins is up - and if it doesn't, it's not20:42
*** gyee has joined #openstack-infra20:42
*** gyee_ has joined #openstack-infra20:43
*** dkliban_afk has quit IRC20:43
*** gyee_ has quit IRC20:43
jeblairbodepd: maybe that's how you should exit from the jenkins service restart? ^20:43
*** sarob has joined #openstack-infra20:43
bodepdjeblair: by trying to connect?20:44
bodepdjeblair: of course that is the real solution20:44
bodepdjeblair: but I'd rather just have it work atm20:44
bodepd(the real solution is for the jjb update script to be a native type that blocks and waits a predetermined time for the service to be ready)20:45
mordredbodepd: thing is - I have seen jenkins take >30 minutes to start before20:45
bodepdalso, I noticed in the code, that it tries to create a job, then checks if the job is there20:45
bodepdso the logs don't really lead me to the right spot in the code20:45
bodepdmordred: say what :)20:45
mordredbodepd: not kidding.20:46
bodepdthis goes back to my question from yesterday about just ripping it out :)20:46
bodepdbuild bot anyone?20:46
* mordred stabs bodepd20:47
mordredbodepd: use it for anything at all for more than 5 minutes and then come back and say that20:47
mordred:)20:47
bodepdI'm mostly just making an educated guess it can't be worse than jenkins20:48
mordredyou'd almost be right20:48
mordredthing is- we've already added most of what we need to jenkins20:48
mordredand we'd have to start completely from scratch with buildbot20:49
mordredas in, no part of our current infrastructure would carry over20:49
mordredand if we're going to do that - then we might as well just continue working on non-jenkins gearman workers for zuul20:49
bodepdmordred: I understand. I'm just so frustrated with its lack of usable APIs20:49
mordred:)20:49
Mithrandirbuildbot is made of cheese.20:50
fungiis it runny?20:51
bodepdmordred: non-jenkins gearman workers you say...20:51
bodepdmordred: and zuul becomes the portal?20:51
mordredbodepd: yes20:51
mordredthis is already in the brainstorm stage20:52
Mithrandirfungi: moldy.20:52
mordredbecause, it turns out - gearman is pretty amazing at distributing work to be done by people :)20:52
*** dkliban_afk has joined #openstack-infra20:56
*** dina_belova has joined #openstack-infra20:58
clarkbback from lunch and so much scrollback21:00
openstackgerritMonty Taylor proposed a change to openstack-infra/config: Add oslo.version  https://review.openstack.org/4049821:00
*** CaptTofu has quit IRC21:00
*** CaptTofu has joined #openstack-infra21:01
*** krtaylor has joined #openstack-infra21:01
*** dina_belova has quit IRC21:02
clarkbmordred: I just tried your topic without project search and it is slow, maybe that is why the behavior today defaults to including the project21:02
*** ArxCruz has quit IRC21:03
mordredhrm.21:04
mordredmaybe an ... INDEX ... would be helpful21:04
mordredthat was an index on topic, project21:04
mordredso that both topic and topic, project could benefit21:04
*** dina_belova has joined #openstack-infra21:08
markmcclainanyone want to +2, Approve this: https://review.openstack.org/#/c/37461/21:08
*** SergeyLukjanov has quit IRC21:08
clarkbmarkmcclain: looking21:09
*** pabelanger_ has quit IRC21:09
clarkbdone21:09
markmcclainclarkb: thanks21:09
* clarkb settles in to do more code review21:09
*** pabelanger has joined #openstack-infra21:09
bodepdI got it working! https://gist.github.com/bodepd/616866321:09
bodepd(but I'm not proud of the solution :( )21:10
clarkbbodepd: you need a while loop that curls the jenkins server and doesn't exit until after the please wait page goes away >_>21:10
bodepdI probably shouldn't use the word solution either21:10
bodepdyeah, I'll open a ticker for that <_<21:11
*** emagana has joined #openstack-infra21:11
bodepdticket21:11
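The wait loop clarkb and jeblair describe (poll the Jenkins API, only proceed once it answers) might look like the sketch below. The probe is injected rather than hard-coded, since the actual endpoint and timeout would be deployment-specific assumptions; the URL in the docstring is purely illustrative.

```python
import time

def wait_for_jenkins(probe, timeout=1800, interval=5,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll `probe` until it returns True or `timeout` seconds pass.

    `probe` is any zero-argument callable that returns True once Jenkins
    answers its API, e.g. something wrapping an HTTP GET of a made-up
    http://127.0.0.1:8080/api/json and checking for a 200.
    Returns True when Jenkins came up, False on timeout.
    """
    deadline = clock() + timeout
    while True:
        try:
            if probe():
                return True
        except OSError:
            pass  # connection refused while Jenkins is still starting
        if clock() >= deadline:
            return False
        sleep(interval)
```

A puppet exec (or the jjb update script itself) could call this instead of a fixed sleep, which also covers mordred's 30-minutes-to-start case by just raising the timeout.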
*** derekh has joined #openstack-infra21:12
zaromordred: i'll inquire about it next time the gerrit channel is active.  seems pretty dead right now.21:13
*** dina_belova has quit IRC21:13
mordredzaro: or, ignore me really. I'm just bitching21:13
clarkbha21:14
*** sdake has quit IRC21:14
openstackgerritA change was merged to openstack-infra/gear: Server: make job handle safer  https://review.openstack.org/4046221:15
*** vipul is now known as vipul-away21:16
*** sarob_ has joined #openstack-infra21:17
*** salv-orlando has joined #openstack-infra21:17
*** sdake has joined #openstack-infra21:19
*** sdake has quit IRC21:19
*** sdake has joined #openstack-infra21:19
*** sarob has quit IRC21:20
*** sarob has joined #openstack-infra21:20
fungipleia2: on 36593 were you wanting to amend that part of the init.pp with commentary about being centos-only, or are we clear to merge it?21:20
*** nijaba has quit IRC21:20
*** sarob_ has quit IRC21:21
*** nijaba has joined #openstack-infra21:21
*** nijaba has quit IRC21:21
*** nijaba has joined #openstack-infra21:21
*** nati_ueno has quit IRC21:21
fungimordred: clarkb: jeblair: worth noting, the "missing pip" problem seems to have struck all our centos machines. git and pbx are both suffering from it too21:23
fungii did 'yum reinstall python-pip' on git.o.o just now, and will see how that works out for subsequent puppet runs21:24
*** thomasm has quit IRC21:24
clarkbgood to know21:25
*** vijendar has quit IRC21:25
mordredfungi: awesome21:26
clarkbI am going to ninja approve a couple things related to proposal.slave and logstash21:27
mordredgo forit21:27
fungiclarkb: i apologize for not reviewing those yet, if i haven't21:29
clarkbfungi: you reviewed most of them. thank you21:29
jeblairi'm working in chrono order21:29
fungii guess i feel more behind than i am, in that case21:29
openstackgerritA change was merged to openstack-infra/config: Replace tx node label with proposal in jobs.  https://review.openstack.org/4001821:29
openstackgerritA change was merged to openstack-infra/config: Fix logstash.o.o elasticsearch discover node list.  https://review.openstack.org/4045921:29
clarkbThey are all low impact with relatively high return rates. ^ Is like the last step before turning off tx.slave21:30
openstackgerritA change was merged to openstack-infra/config: Don't index logs with DEBUG log level.  https://review.openstack.org/4047421:30
fungii'm all for that21:30
clarkbmordred: what do you think about my comments on https://review.openstack.org/#/c/39967/121:31
*** woodspa__ has quit IRC21:31
fungihuh, so the next puppet run seems to have blown away pip again... http://puppet-dashboard.openstack.org:3000/reports/78098721:32
clarkbfungi: mordred: for the project registry, I am almost beginning to think we need to host that with etcd or in a database, e.g. someplace queryable through an api21:32
fungiclarkb: probably so. we keep churning out more bits and pieces which depend on project-specific metadata lists21:33
clarkbfungi: any chance the symlink is breaking things?21:34
*** emagana has quit IRC21:34
fungiclarkb: it's entirely possible21:34
fungiif something is trying to pip install -U pip21:34
fungiOR! maybe epel just added a new python-pip rpm...21:35
fungii will start investigating in that direction21:35
clarkbLOLOLOLOL21:35
clarkbfungi: that is the problem21:35
clarkbpip is a symlink to pip-python. pip-python is a symlink to pip21:35
*** echohead has joined #openstack-infra21:36
clarkbLOLs all the way down21:36
fungiyep21:36
clarkbthat is hilarious21:36
echoheadhi, #openstack-infra.21:36
clarkbechohead: ohai21:36
echoheadi'm having some trouble with cloning things from review.openstack.org on ipv6.21:36
fungiwe saw this on fedora 18, so i already have a workaround in the provider for it. just need to make that more ubiquitous21:36
*** sdake has quit IRC21:36
clarkb"Tonight on when hacks go bad, pip, epel and symlinks"21:37
echoheadlike the ipv6 address for review.openstack.org always times out.  is this expected?21:37
* fungi whips up emergency puppetry21:37
clarkbechohead: it is not expected21:37
*** sdake has joined #openstack-infra21:37
*** sdake has quit IRC21:37
*** sdake has joined #openstack-infra21:37
*** mriedem has quit IRC21:37
harlowjaclarkb i think they recently did that weirdness with pip, pip-python, python-pip symlinks21:37
harlowjanot quite sure why, ha21:37
clarkbechohead: does ssh -p user@review.openstack.org gerrit ls-projects timeout too?21:37
fungiechohead: are you seeing it do that when connecting via ssh from a virtual machine in rackspace possibly?21:37
clarkbechohead: and does adding -vvv to that ssh command show anything interesting?21:38
echoheadfungi: no, this is a physical machine in my co's datacenter.21:38
echoheadclarkb: trying now.21:38
fungiah, okay. cloning via http then i take it?21:38
clarkbechohead: you need -p29418 sorry missed the port21:38
*** sparkycollier has joined #openstack-infra21:38
mordredclarkb: if it's in a database, I still want the source of it to be a thing in the repo21:39
*** ^d has quit IRC21:39
fungiharlowja: yep. the puppet pip package provider had a workaround to symlink pip-python to pip, so having it suddenly move took it by surprise21:39
clarkbmordred: ya, I think that will be necessary to make it editable21:39
harlowjadef fungi21:39
echoheadfungi: yes, http.21:39
fungier, pip to pip-python, but regardless21:39
clarkbmordred: but curl foo/projects is a lot easier than clone this thing, update it make sure you don't diverge and so on21:39
jeblairclarkb: project registry?21:39
harlowjai think the new epel package even symlinks python-pip21:40
clarkbjeblair: the list of projects that we copy and paste all over for different reasons21:40
harlowjaall variations possible, ha21:40
clarkbechohead: I ask because we have seen weirdness with ipv6 and different protocols, just wondering if ssh performs any differently21:40
fungiharlowja: it does. what we ended up with was a circular symlink between pip and python-pip pointing at each other21:40
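The circular symlink fungi describes is easy to reproduce and detect; a minimal Python sketch (the scratch directory and helper name are invented for illustration, not the actual rpm layout):

```python
import os
import tempfile

def find_symlink_loop(path, max_hops=40):
    """Follow symlinks from path; return the chain if a loop is found, else None."""
    seen = []
    cur = path
    while os.path.islink(cur) and len(seen) <= max_hops:
        if cur in seen:
            return seen + [cur]  # we have been here before: circular
        seen.append(cur)
        target = os.readlink(cur)
        # resolve relative link targets against the link's own directory
        cur = os.path.normpath(os.path.join(os.path.dirname(cur), target))
    return None

# recreate the pip <-> pip-python tangle in a scratch directory
d = tempfile.mkdtemp()
pip = os.path.join(d, 'pip')
pip_python = os.path.join(d, 'pip-python')
os.symlink(pip_python, pip)      # pip -> pip-python
os.symlink(pip, pip_python)      # pip-python -> pip
print(find_symlink_loop(pip))    # shows the loop: pip -> pip-python -> pip
```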
echoheadclarkb: ssh hangs while trying to establish a connection:21:40
echoheaddebug1: Connecting to review.openstack.org [2001:4800:780d:509:3bc3:d7f6:ff04:39f0] port 29418.21:40
harlowjafungi nice, good ole epel :-p21:40
clarkbyay it is at least consistent21:40
clarkbechohead: do you have routable ipv6?21:41
clarkbechohead: eg can you hit anything else over ipv6?21:41
*** nati_ueno has joined #openstack-infra21:41
echoheadyes, i can hit ipv6.google.com, for example.21:41
fungiechohead: ping6 2001:470:8:d2f::34 (it's my house)21:41
echoheadfungi: i can ping you just fine.21:42
fungiechohead: but ping6 2001:4800:780d:509:3bc3:d7f6:ff04:39f0 fails for you?21:42
echoheadfungi: that's correct.21:42
clarkbwhat about 2001:4800:780d:509:3bc3:d7f6:ff04:359b jenkins.o.o21:42
fungii wonder if rackspace is having trouble with v6 at one of their peering points21:42
clarkbwhich is in the same DC presumably21:42
echoheadclarkb: that address works fine from here too.21:42
fungithat would be a good confirmation21:42
clarkbechohead: hmm ok. do you have mtr available? can you mtr review.o.o via ipv6?21:43
fungihuh, so two addresses within the same /112 and you can reach one but not the other21:43
*** vipul-away is now known as vipul21:44
echoheadclarkb: http://paste.openstack.org/show/43363/21:44
clarkbso packets are getting lost within rackspace21:45
clarkbnot a peering issue21:45
fungihis hop #13 there is the penultimate hop for me as well21:46
clarkbjlk: ^ any idea of what is going on there?21:46
fungiechohead: what global v6 address are you coming from? i'll do a return trace and see if the path is majorly asymmetrical21:47
jeblairclarkb: seems like just having the projects.yaml in the config repo should be sufficient, and ensure it's copied to /etc/projects.yaml everywhere it's needed21:47
echoheadfungi: 2607:f700:3460:fa0:230:48ff:fecc:d63221:47
jlkclarkb: looking, but networks… ugh.21:47
clarkbjeblair: that will work too21:48
fungiechohead: looks like the packets might be vanishing on the return path after the hand-off from he to spectrum21:48
jeblairmordred: you may want to revisit https://review.openstack.org/#/c/40068/21:48
jlkand ipv6 -- even more eww.21:48
mordredjeblair: I'm excited to21:48
fungiechohead: oh, actually routing loop now in bbgrp.net21:48
clarkbfungi: asymmetrical routes? ugh21:48
clarkbasymmetrical routes is how you confuse all the firewalls21:49
mordredjeblair: I will fix after dinner21:49
clarkb*stateful firewalls21:49
jlkclarkb: I'm not seeing any systemic issues our outages21:49
clarkbjlk: thanks, fungi seems to have found some asymmetric badness upstream21:49
jeblairi have noticed dropped packets from here21:49
fungiechohead: clarkb: http://paste.openstack.org/show/4336421:50
*** dkliban_afk is now known as dkliban21:51
echoheadok, so it looks like you can route to me, but i can't route to you?21:51
fungiechohead: so i'll wager your packets are getting to review.o.o and then the responses are hitting a loop in your provider21:51
sparkycollierhope this is not too off topic, but wondering if any mac users out there prefer macports or homebrew?21:52
fungiechohead: probably they have multiple paths in their network and one of them has a bad default back to the other or similar, and load is distributed based on an address hash so it works for some remote addresses and not others seemingly at random21:52
echoheadfungi: i see. thanks for your help.  i'll ask the network guy here about it.21:53
jeblairsparkycollier: it's so nice to see you, regardless of the topic! :)21:54
fungiechohead: specifically, it looks like they have a secondary-side core router sending traffic back up to a border router, judging from their naming conventions. maybe cr02 is announcing routes via an igp which it doesn't have a non-default next hop to21:54
sparkycollierjeblair it's a new half-year's resolution21:54
echoheadfungi: showed your traceroute to a guy here and he's working to remove the loop now.  thanks!21:56
fungiechohead: np. that used to be my job, so he's got my sympathies21:56
clarkbjeblair: any reason for not merging https://review.openstack.org/#/c/36593/21:57
jeblairclarkb: just to give fungi or mordred a chance to see it21:58
dtroyersparkycollier: I've been happy with homebrew for a couple of years now.22:00
clarkbjeblair: ok22:01
clarkbpleia2: the https for cgit change has been rereviewed22:01
ttxanteaya: so, to recap: i would prefer to make storyboard compatible with >=1.4, but if for whatever reason we need an 1.5-specific feature (or have to choose between being 1.4 or 1.5-compatible) then we'd pick 1.522:02
*** sarob has quit IRC22:02
fungiclarkb: on 36593 i was wondering if pleia2 was wanting to amend that part of the init.pp with commentary about being centos-only, or if we were clear to merge it22:03
*** sarob has joined #openstack-infra22:03
ttxanteaya: I haven't seen that we were in that hard place yet, but maybe you know better (I run 1.4)22:03
*** giulivo has quit IRC22:03
fungii'll +2 and ask in the review instead22:03
ttxsparkycollier: will skip the phone call and go to bed, nothing specific to report22:03
ttxsparkycollier: put all on the etherpad22:03
clarkbfungi: I think that can happen in a subsequent change because cgit being attached to centos has been an operating assumption aiui22:04
jeblairfungi, clarkb: i don't think that's important, just mentioning it as an interesting assumption.22:04
fungiclarkb: just saw your new comment there and agree. +2 and approved22:05
* ttx eods22:05
openstackgerritA change was merged to openstack-infra/config: Add git-daemon to cgit server.  https://review.openstack.org/3659322:06
clarkbmordred: https://review.openstack.org/#/c/36634/ if you haven't seen that yet you should take a look22:06
anteayattx okay, I was just noticing that 1.5 deals with urls in a completely different fashion than 1.422:07
*** _TheDodd_ has quit IRC22:07
anteayaand if we are building something new, I favour the newest release to build it on22:07
*** sarob has quit IRC22:07
anteayaplus we don't have db migrations to worry about yet, but that is coming soon22:07
clarkbhttps://www.djangoproject.com/weblog/2013/aug/06/breach-and-django/22:07
openstackgerritA change was merged to openstack/requirements: removal invalid pin of python-requests<=1.2.2  https://review.openstack.org/3746122:07
anteayaso if we were to upgrade I would favour the upgrade sooner rather than later22:07
clarkbfungi: jeblair mordred ^ we may want to disable compression on review.o.o and jenkins.o.o maybe?22:08
openstackgerritA change was merged to openstack/requirements: Add support for Keystone V3 Auth in Horizon.  https://review.openstack.org/3977922:08
clarkbI think jenkins does a lot of compression internally, not sure about gerrit22:08
*** sarob has joined #openstack-infra22:08
fungioh, ouch22:09
*** dina_belova has joined #openstack-infra22:09
*** CaptTofu has quit IRC22:09
clarkband we should probably update our ciphersuite like ryan_lane did to reduce the broadness of available attacks22:09
clarkbfungi: did you want to take a swipe at that? I can propose a change but I feel like I will be cargo culting what other people say is secure22:10
*** burt has quit IRC22:10
clarkbapparently ubuntu apache2.2.2 supports turning SSLCompression off22:11
fungiclarkb: i'll take a look. compressing before encrypting is known to leak badly22:12
*** CaptTofu has joined #openstack-infra22:12
fungiso turning it off for https sites is probably a good call22:12
clarkbit will still be a problem on centos but there is no private data there22:12
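For reference, the knob being discussed is a single mod_ssl directive in the SSL vhost (availability in a given 2.2.x build depends on distro patching, as clarkb notes above):

```
# Disable TLS-level compression to mitigate CRIME-style attacks
SSLCompression off
```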
*** dina_belova has quit IRC22:13
clarkbsdague: we are now not indexing DEBUG level logs in the d-g screen logs22:18
clarkbsdague: should make a difference as the vast bulk of the data was there I think22:19
SlickNikHey guys...22:19
SlickNikdid jenkins.openstack.org change? I don't see the usual jobs there…22:19
clarkbsdague: I am still indexing all of the console logs and things like syslog, swift, and so on22:19
clarkbsdague: because they don't differentiate levels22:19
clarkbSlickNik: yes22:19
clarkbSlickNik: we now have 3 jenkins masters. Jenkins.o.o jenkins01.o.o and jenkins02.o.o22:19
clarkbSlickNik: jenkins.o.o is going to slowly die with 01 and 02 running most of the jobs22:20
SlickNikah, I see22:20
SlickNikthanks!22:20
clarkbnote the zuul status page will link you directly to the job on the correct master22:20
*** nijaba has quit IRC22:20
lifelessclarkb: https://review.openstack.org/#/c/40322/ <- seen this?22:20
clarkblifeless: I hadn't I am slowly getting through the list. reviewing that one now22:21
lifelessfungi: the compress-encrypt leaks are quite specific22:21
lifelessclarkb: thanks!22:21
*** nijaba has joined #openstack-infra22:21
lifelessfungi: IIRC you need a block cipher with constant prefixed compressed output, same as the cookie leak issues22:22
*** weshay has quit IRC22:22
*** jrex_laptop has joined #openstack-infra22:22
openstackgerritA change was merged to openstack-infra/config: Enable pypi jobs for diskimage-builder  https://review.openstack.org/4032222:26
fungilifeless: yeah, nobody's demonstrated similar material leaks with stream ciphers *yet* anyway22:26
lifelessclarkb: thanks!22:26
clarkbfungi: lifeless see this is why I get you guys to look at it. Less cargo cult and more understanding :)22:27
clarkbfungi: lifeless I would be interested in the why when we end up with a config that we want to use though22:28
*** datsun180b has quit IRC22:28
*** jrex_laptop has quit IRC22:28
fungiclarkb: i'm not a cryptographer, i only play one on cypherpunks mailing lists22:28
lifelesswhat's needed to get https://review.openstack.org/#/c/40140/ landed ?22:30
lifelessclarkb: ^ fungi: ^22:30
clarkblifeless: jeblair22:30
openstackgerritKhai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file.  https://review.openstack.org/4045522:30
openstackgerritSteve Baker proposed a change to openstack-infra/config: Create new repo to host legacy heat-cfn client.  https://review.openstack.org/3822622:32
*** changbl_ has quit IRC22:33
openstackgerritKhai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file.  https://review.openstack.org/4045522:34
fungiclarkb: yeah, the puppetlabs rsync module was actually a very serendipitous find. i thought to look for one before writing it because... you never know... but didn't expect it to be so nearly spot on for what we actually wanted to implement22:34
*** avtar has quit IRC22:35
clarkbfungi: with default parameters even :)22:36
openstackgerritKhai Do proposed a change to openstack-infra/config: deploy jenkins plugin pom.xml file.  https://review.openstack.org/4045522:37
lifelessclarkb: fungi: http://support.novell.com/security/cve/CVE-2012-4929.html22:37
uvirtbotlifeless: The TLS protocol 1.2 and earlier, as used in Mozilla Firefox, Google Chrome, Qt, and other products, can encrypt compressed data without properly obfuscating the length of the unencrypted data, which allows man-in-the-middle attackers to obtain plaintext HTTP headers by observing length differences during a series of guesses in which a string in an HTTP request potentially matches an unknown string in an HTTP header, aka a "CRIME" attack22:37
clarkbthank you uvirtbot22:38
lifelessclarkb: fungi: and http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2039356 claims22:38
lifelessIt has been confirmed that CRIME is ineffective against vCenter Operations Manager (vCOps) 5.6 and higher. The TLS CRIME vulnerability appears to be isolated to the use of the libqt4 libraries for compression.22:38
lifeless(not that we're using vcenter)22:38
lifelessjust - tl;dr afaict - a) server can obfuscate the length of the data and avoid the compressor state leak crime depends on, and upstreams are doing that now.22:39
lifelessand b) browsers generally disable tls compression themselves now22:39
lifelessso I wouldn't worry about the sslcompression setting; but turn it off if you have any concerns.22:39
*** sparkycollier has quit IRC22:40
*** prad has quit IRC22:42
clarkbgotcha, thanks22:43
*** zaro0508 has joined #openstack-infra22:45
*** CaptTofu has quit IRC22:45
*** sarob has quit IRC22:46
*** jpich has quit IRC22:46
*** sarob has joined #openstack-infra22:46
*** ianw has joined #openstack-infra22:47
*** CaptTofu has joined #openstack-infra22:48
pabelangerclarkb, pyflakes was kept because jeblair asked for it.  Since JJB gates on it, I had another review up to remove it22:49
clarkbpabelanger: I see, feel free to ignore that comment then22:49
pabelangernp22:49
clarkbI will note that in the review22:49
*** sarob has quit IRC22:51
fungilifeless: so, crime relied on tls compression but breach extends those techniques to leverage compression applied by higher layers (such as gzip encoding of responses from the server)22:52
fungiso while the attack is similar, the attack surface is not22:53
clarkbI think BREACH is worth worrying about22:54
*** sarob has joined #openstack-infra22:55
fungibreach basically happens when the connection is mitm'd below the encryption layer and the attacker doesn't have the ability to decrypt the session but can coerce the victim to elicit server responses with some content determined by the attacker22:55
clarkbmordred: https://review.openstack.org/#/c/40470/ adds a JJB core group. Did you want to review that before I approve?22:56
fungisay i'm connected to an https web application and you're able to force my packets to flow through your sniffer, and you're also able to get me to send requests to the site with a string of your choosing which the server will echo within the rest of its response, then you can trigger the request and observe the compression ratio to fine-tune guesses of a session key which is also contained within the22:58
fungiresponse22:58
fungithat sort of fun22:58
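The length side channel fungi describes can be demonstrated with nothing but zlib; a toy sketch (the token and page template are invented for illustration):

```python
import zlib

SECRET = "csrf=9f3a7c"   # the token the attacker wants to recover (invented)

def response_len(guess):
    # The server reflects the attacker-chosen string into the page next to
    # the secret, then the whole response is DEFLATE-compressed before TLS
    # encryption -- only the compressed length is observable on the wire.
    body = "<input name='q' value='%s'>%s" % (guess, SECRET)
    return len(zlib.compress(body.encode()))

# A guess that matches the secret gets folded into a single back-reference,
# so the compressed response is shorter than for a same-length wrong guess.
print(response_len("csrf=9f3a7c"), response_len("csrf=QWERTY"))
```

The attacker refines the guess one character at a time, keeping whichever candidate yields the shortest ciphertext.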
clarkbthere is now a Wine-like emulator for OS X22:59
jeblairclarkb, fungi: about rsync -- how will/could this interact with the other things we serve there?22:59
jeblairit looks like it defines a pypi module which is appropriately rooted22:59
jeblairand i guess we could add a tarballs module too?23:00
jeblairif we wanted to make tarballs rsyncable?23:00
clarkbjeblair: yes we could have multiple defines for each of the paths we want to make rsyncable23:00
fungijeblair: yes, it allows for multiple modules each with a distinct name23:00
fungiso we could even add tarballs right now if we wanted, same way, just one more rsync::server::module block23:01
jeblairand i assume rsync://pypi.o.o or rsync://static.o.o would work equally well?23:01
jeblairwe may want to only publicise rsync://pypi.o.o/pypi though, so we have flexibility to move it around23:01
fungijeblair: yes, rsync does not have anything like http 1.1 host headers23:01
fungiso if we wanted to limit it to a specific host name, that name would need to resolve to one and only one ip address where the daemon was listening23:02
*** pentameter has quit IRC23:02
fungibut i agree just limiting the name we publicize with it should be sufficient to deter people being surprised if it moves in the future23:03
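A sketch of what the resulting rsyncd.conf could look like with two modules, roughly what the puppetlabs rsync module would render (paths and comments are assumptions, not the deployed config):

```
# /etc/rsyncd.conf (sketch)
uid = nobody
gid = nobody
use chroot = yes

[pypi]
    path = /srv/static/pypi
    comment = PyPI mirror
    read only = yes

[tarballs]
    path = /srv/static/tarballs
    comment = Release tarballs
    read only = yes
```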
stevebakerjeblair: hey, I updated https://review.openstack.org/#/c/38226/ to host heat-cfnclient in openstack-dev23:03
jeblairfungi: are you okay with adding a new daemon on a host that (will eventually) host our distribution artifacts?23:04
jeblairincreased attack surface and all23:04
clarkbjeblair: distribution artifacts == deb/rpm packages?23:05
clarkbjeblair: that may be an argument to host those things elsewhere?23:05
jeblairclarkb: no, we make tarballs.  :)23:06
clarkboh those things23:06
*** rcleere has quit IRC23:07
clarkbin other security news researchers are recommending a switch to elliptic curve crypto (eg ecdsa) as RSA may become vulnerable after new progress made on solving the discrete logarithm problem23:09
*** dina_belova has joined #openstack-infra23:09
clarkbI think our centos6 hosts have too old openssl/openssh to support ecdsa :(23:09
lifelessfungi: clarkb: yes, the higher level stuff has made the http/2 design 'fun'.23:09
*** zaro0508 has quit IRC23:10
*** david-lyle has quit IRC23:11
*** david-lyle has joined #openstack-infra23:11
*** danger_fo is now known as danger_fo_away23:12
lifelessjeblair: the rsync equivalent of vhosts is the exported path - like nfs23:12
fungijeblair: i am not terribly worried about rsyncd itself, and we can at least audit its configuration to make sure it's not more promiscuous. though maybe we should consider adding a dedicated unprivileged account to run it under. i'll amend the patch to do that (the module takes it as an option i believe)23:13
jeblairfungi: probably a good idea; nobody has a notoriously high level of access23:13
lifelessjeblair: in terms of being able to move it round, I think just docs - tell folk clearly that it's rsync on pypi.o.o/pypi; in the description for the pypi export note the canonical name for it.23:13
*** dina_belova has quit IRC23:13
fungijeblair: i'm more worried about people trusting downloads obtained from an unauthenticated protocol without actually comparing signed checksum lists23:13
*** mrodden has quit IRC23:14
fungiand the rsync protocol does not cryptographically authenticate the server to the client (which is why rsync over ssh is generally better)23:14
lifelessfungi: thats the TUF stuff which will kick in as upstream adopts it23:14
fungilifeless: yes, i'm very much looking forward to that23:14
lifelessI do take your point that there is a risk here...23:15
lifelessdoes run-mirror retrieve gpg sigs from pypi?23:16
clarkbthe signatures are independent of the package right? in that case no23:16
clarkbor are they in the tarball?23:16
fungilifeless: at the moment our auto-published pypi tarballs aren't even signed either23:16
lifeless.asc files23:16
lifelessfungi: ok, so this isn't really a new risk23:16
fungisomething else i'd love to see improved23:16
jeblairfungi: i was more getting at the fact that we're increasing the attack surface of the server itself by running another C daemon.23:16
fungijeblair: agreed. i think we rely on local permissions enforcement to protect us some there23:17
fungiand rsyncd hasn't had too ugly of a history of remote exploits23:18
fungilast one i remember well was about a decade ago23:18
jeblairwhich could be good news or bad.  :)23:19
openstackgerritA change was merged to openstack-infra/config: Add jenkins-job-builder-core group  https://review.openstack.org/4047023:19
*** rnirmal has quit IRC23:19
* fungi will not comment on the positive and negative aspects of his lack of memories... smells like a set up23:20
clarkbI went ahead and approved ^ I didn't hear any disagreement of the idea when it was proposed to the list23:20
*** UtahDave has quit IRC23:21
*** nijaba has quit IRC23:21
lifelessjeblair: fungi: 2011 last vuln I think?23:21
*** nijaba has joined #openstack-infra23:22
lifelessbut that affects receiver only23:22
jeblairso they either fixed everything or no one's paying attention anymore.23:22
*** dnavale has joined #openstack-infra23:22
lifelessindeed23:23
reeddnavale, what happened with your new ssh key?23:23
dnavalei'd followed all the steps in the https://wiki.openstack.org/wiki/Documentation/HowTo#First-time_Contributors.23:24
dnavalehad everything set up but hadn't submitted anything until yesterday23:24
clarkbfungi: were you going to write the fix for centos pip symlinkage?23:24
clarkbfungi: I can if you are busy23:24
dnavalewhen i tried to, it kept asking me for the passphrase and would then give an error..23:25
clarkbdnavale: this is happening when running git-review?23:25
dnavaleso one of my colleagues who had similar issues suggested to get a new ssh key.. did that..23:25
dnavaleyes..23:25
dnavalethats right23:25
jeblairso tbh, i'm not sure i'm keen about running an entirely new public facing service for this.  it just seems like such a heavyweight solution to lifeless's problem.23:25
*** sarob has quit IRC23:25
dnavalewith the new key, the git review still gave me the error.. and asked me to sign the agreement..23:26
jeblairlifeless: but then i still don't understand why running 'run-mirror' doesn't work.  it keeps a pip cache, it should be quite fast.23:26
*** sarob has joined #openstack-infra23:26
clarkbdnavale: can you go to https://review.openstack.org/#/settings/ and give me your account id number?23:26
jeblairdnavale, reed: that page looks like it needs updating23:26
jeblairyou do not need to add an ssh key to launchpad23:27
openstackgerritElizabeth Krumbach Joseph proposed a change to openstack-infra/config: Add httpd ssl support to git.openstack.org  https://review.openstack.org/4025323:27
jeblairdnavale: you do need to add an ssh key to gerrit23:27
lifelessjeblair: you have to have a suitable dev environment, so you have to install mysql-dev etc etc etc23:27
reedjeblair, indeed23:27
dnavalesure.. account id# 810323:27
jeblairdnavale: https://review.openstack.org/#/settings/ssh-keys23:27
lifelessjeblair: this is a big footprint on the host machine, and overlaps with the actions inside the chroot as well23:27
jeblairdnavale: make sure your ssh key is in there23:27
pleia2had a bit of a messy rebase there, I think I got it though23:27
* reed yells at the gods of online passwords!23:27
dnavaleyes.. it is in there23:27
lifelessjeblair: just copying a working mirror is /much/ easier. I'm not against plumbing up a chrooted dedicated mechanism for running run-mirror eventually.23:28
lifelessjeblair: but it's also /slow/, because run-mirror is spidering.23:28
jeblairlifeless: what's the 'host machine' you're talking about here?23:28
lifelessjeblair: my laptop, for instance.23:28
clarkbdnavale: looking in the DB you haven't signed a CLA agreement23:28
clarkbdnavale: https://review.openstack.org/#/settings/agreements on that page is there a row that says verified individual account agreement?23:29
dnavaleyes.. but when i try to, it says the address is not one of the registered ones and asks me to fill in a new form23:29
clarkb*OpenStack Individual Contributor Account Agreement23:29
lifelessjeblair: pypi, like most REST services, doesn't understand latency at all, and dies the death of a thousand cuts.23:29
reedwhy is the documentation team saying that you need a github account?23:30
lifelessjeblair: the cache avoids repeated bulk data download by run-mirror, but the spider-everything overhead doesn't go away23:30
clarkbdnavale: you need to use the same email address in gerrit as was provided to the openstack foundation. And if you haven't signed up with the openstack foundation you will need to do that first23:30
dnavaleApplication Error23:30
dnavaleServer Error23:30
dnavaleThe request could not be completed. You may not be a member of the foundation registered under this email address. Before continuing, please make sure you have joined the foundation at http://openstack.org/register/23:30
*** sarob has quit IRC23:30
clarkbdnavale: have you done that?23:30
dnavalehmmm.. i've been using the same email id everywhere23:30
harlowjaianw: all that is needed for qpid, seems to work when i tried it on my new vm, https://review.openstack.org/#/c/40481/23:31
ianwharlowja: thanks, was just restoring my vm to give it a try :)23:31
jeblairlifeless: i see, so this is for your local dev environment.  i know mordred uses run-mirror, but i guess he doesn't have the latency issues due to only occasionally being in the southern hemisphere.23:31
harlowjaianw cool23:32
dnavalei dont think so, i was told that since we are a part of red hat, thats already done..23:32
dnavalebut i'll try and do that now..23:32
harlowjaianw make sure u use -p to specify that persona, instead of the default rabbit one23:32
clarkbdnavale: they were probably talking about the corporate CLA stuff, all of this is done at the individual level on top of that23:32
dnavaleAh.. ok.. thanks.. i'll register and try again..23:32
lifelessjeblair: I wrote up a fuller explanation about run-mirror at the end of the listing-files patch we've abandoned; dunno if you saw that23:32
*** mrodden has joined #openstack-infra23:33
*** mrodden1 has joined #openstack-infra23:35
reeddnavale, what were you told exactly?23:37
dnavaleclarkb: thanks.. i was able to submit the CLA..23:37
*** mrodden has quit IRC23:37
fungiclarkb: i had started and then got sucked away, sorry. settled in now and can focus on patches23:38
*** CaptTofu has quit IRC23:39
dnavalei guess i thought that we wouldn't need to fill in the individual form when contributing as part of red hat.. sorry about that..23:39
*** CaptTofu has joined #openstack-infra23:39
reeddnavale, oh, ok. Please tell the person that told you the incorrect info to talk to me about that and fix the internal documentation23:40
dnavaleok.. will do..23:43
clarkbdnavale: are you able to submit changes to gerrit now?23:45
dnavalethanks again..23:45
dnavaleyes.. i'm able to do that..23:46
jeblairlifeless: why do you notice run-mirror's runtime?  i mean, we run it in the background, we're not exactly waiting for it to finish.23:46
openstackgerritJeremy Stanley proposed a change to openstack-infra/config: No longer link pip to pip-python on Red Hat  https://review.openstack.org/4051623:47
fungiclarkb: ^ testing that on git.o.o momentarily23:47
lifelessjeblair: you don't notice it because you have a long lived server you can just use; start from scratch some day23:49
*** zul has joined #openstack-infra23:49
lifelessjeblair: you're on a low latency inter-server environment in the gate; someone that just downloaded tripleo and is building an image is not.23:49
lifelessjeblair: if you'd be happier I can spin up a dedicated HP Cloud instance to deliver this, but that really seems like shadowing the -infra role, which I really don't want to do.23:51
jeblairlifeless: when we set up the pypi mirror, we explicitly agreed it would not be a publicized and recommended distribution channel for openstack dependencies23:51
lifelessjeblair: interesting; I didn't know that.23:52
jeblairlifeless: i thought this was for you on your laptop?  lots of people use cloud servers for their own development.  is this a public service that the tripleo program needs to provide for its users?23:52
lifelessjeblair: thats the intent yes.23:53
lifeless(service for tripleo users, on by default)23:53
jeblairlifeless: you threw me with the laptop thing there.23:53
lifelesshttps://review.openstack.org/#/c/38543/ in the comments in there23:53
lifeless5th up from the bottom23:53
lifeless"Our developers - myself, spamaps, pleia2, ng etc are all building lots of images, and each image has up to ~12 virtual envs. We spend a huge amount of time downloading stuff from pypi, since the ssl change means squid no longer caches the content."23:53
clarkbwait what? why can't squid mitm you?23:54
lifelessclarkb: pip forces SSL now.23:54
clarkbdoes this have to do with the proxy behavior of pip?23:54
clarkblifeless: yeah but squid can mitm ssl23:54
lifelessclarkb: Getting every tripleo dev+user to configure ssl-bump with a snakeoil cert is a huge barrier to entry.23:55
clarkbthats fair23:55
clarkbit does make the proxy setup more complex23:55
lifelessyou'd also need to configure tcp interception23:55
lifelesswhich is doable, but again - barrier to entry.23:56
lifelessand since iptables works on ip's not dns names, /complex/ to get right23:56
lifelessyou'd need a thing intercepting all dns lookups, pulling out the A and CNAME records for the pypi CDN, mapping those to iptables rules just in time...23:56
nati_uenoclarkb: do you have a bug report for this morning's neutron unit test issue? It looks like it's still broken, so I wanna know the current status23:56
lifelessor you need squid wildcard mitming all SSL23:57
lifelesswhich frankly is a bad idea as most SSL only websites have terrible caching headers and squid will end up caching stuff not meant to be written to disk.23:57
lifelessclarkb: ^23:57
clarkbnati_ueno: which issue? I got a late ish start today and don't remember any neutron things23:58
clarkbnati_ueno: is this what sdague was talking about yesterday?23:58
openstackgerritA change was merged to openstack-infra/config: No longer link pip to pip-python on Red Hat  https://review.openstack.org/4051623:58
fungiso that ^ kept puppet from blowing away the pip executable on git.o.o after i reinstalled the rpm. once the master pulls it down, i'll reinstall python-pip on git, pbx and all the centos6 jenkins slaves23:58
nati_uenoclarkb: Ahh sorry. May be I talked with  sdague23:58
lifelessjeblair: so the basic issue then is that what i'm asking for is in contradiction with a prior -infra decision. Do we revisit that decision?23:58
clarkbnati_ueno: I know he filed a bug about something. Not sure if that is the same thing you are talking about23:59
nati_uenoclarkb: so we faced a requirements version issue23:59
jeblairlifeless: i had not seen the latest comments in the original change (my review workload has something like a 4 day cycle currently).  they clarify quite a bit, thanks.23:59
clarkbnati_ueno: with requests?23:59
nati_uenoclarkb: changes to requirements.txt broke the neutron unit tests23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!