19:01:28 #startmeeting infra
19:01:29 Meeting started Tue Aug 26 19:01:28 2014 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:31 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:33 The meeting name has been set to 'infra'
19:01:33 o/
19:01:36 #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:01:38 o/
19:01:43 #link last meeting http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-08-19-19.02.html
19:01:45 o/
19:01:55 #topic Actions from last meeting
19:02:01 jeblair Publish python-jenkins 0.3.3 (last release without pbr) tarball to pypi
19:02:03 that happened
19:02:07 o/
19:02:09 so did 0.3.4
19:02:09 yay
19:02:15 because 0.3.3 was broken
19:02:19 :(
19:02:32 if there were no broken releases we'd never have new releases ;)
19:02:38 ha ha ha
19:02:47 heh, true
19:02:50 but now i think that's all taken care of, and the next (0.4.0?) release can be done by zaro by pushing a tag in the normal way
19:03:12 o/
19:03:19 pleia2 create new mailing lists
19:03:20 i think that will happen in about a month.
19:03:34 that was done, anteaya will talk about it later in the meeting
19:03:40 #link http://lists.openstack.org/cgi-bin/mailman/listinfo/third-party-announce
19:03:41 I can talk now
19:03:45 pleia2: i reordered :)
19:03:48 they are up
19:03:50 oh good :)
19:03:51 thanks pleia2
19:03:59 #link http://lists.openstack.org/cgi-bin/mailman/listinfo/third-party-request
19:04:07 this is my first time admining lists, feedback welcome
19:04:07 yea!
19:04:10 #link https://review.openstack.org/#/c/116989
19:04:26 and that is the patch to redirect folks to use the lists
19:04:41 a speedy iteration prevents a week of transition
19:04:55 yeah, probably best to do this quickly to avoid confusion
19:05:05 are there outstanding requests from folks we have received on the infra list?
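[Editor's note] For context on "pushing a tag in the normal way": with pbr, a release is cut from a signed, annotated git tag pushed to gerrit. A minimal local sketch of the tagging step — the repo path, committer identity, and remote name are illustrative, not the real python-jenkins setup:

```shell
# Demo of the tag-based pbr release workflow in a throwaway repo.
set -e
rm -rf /tmp/python-jenkins-demo
git init -q /tmp/python-jenkins-demo
cd /tmp/python-jenkins-demo
git config user.email zaro@example.org    # illustrative identity
git config user.name "Release Manager"
git commit -q --allow-empty -m "work since 0.3.4"
git tag -a 0.4.0 -m "python-jenkins 0.4.0"   # in practice a signed tag: git tag -s
git describe --tags                          # prints the version pbr would derive
# git push gerrit 0.4.0   # pushing the tag triggers the release automation
```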
19:05:16 jeblair: there are a few yes
19:05:17 a couple
19:05:29 I'm standing by to offer new patchsets for 116989 if folks have -1s
19:05:44 okay, we should probably avoid asking people to resubmit on the new list, that's just silly
19:05:49 agreed
19:05:58 i think when anteaya's change merges, we should announce the new policy to the old list
19:06:02 ask everyone to subscribe to the announce list
19:06:07 agreed
19:06:09 and new requests go to the request list
19:06:09 agreed
19:06:19 and explicitly mention that old requests don't need to be resubmitted
19:06:37 I can bulk import gerrit email addresses to the announce list
19:06:46 anyone think that is a bad idea?
19:07:11 anteaya: we could consider inviting them, but not actually subscribing them
19:07:21 okay, I can invite them
19:07:27 anteaya: mailman lets you do either in bulk
19:07:32 anteaya: that's an option in the interface
19:07:34 yeah
19:07:41 if we prefer I invite, I will invite
19:07:42 (they'll get a message saying 'click here to confirm' etc)
19:07:53 ++
19:07:54 yeah. i'd be very much opposed to subscribing without confirmation
19:07:54 right, never bulk-subscribe people without prior consent or a very, very good reason
19:07:56 should reduce some cruft right off the bat
19:08:08 invite it shall be
19:08:21 I think that was all I needed here
19:08:27 anything I am missing?
19:08:41 okay thanks
19:09:05 #action anteaya invite third-party email addrs from gerrit to announce list
19:09:32 who wants to send the email to -infra announcing the new lists/policies after 116989 merges?
19:09:51 * jeblair is happy to if no one else loves that idea
19:10:26 I can do it
19:10:26 i can as well
19:10:40 * fungi bows to pleia2 ;)
19:10:44 everyone wants that job
19:10:54 #action pleia2 send the email to -infra announcing the new lists/policies after 116989 merges
19:11:03 anteaya, pleia2: thanks!
19:11:05 anteaya: left a comment in the review
19:11:17 #topic Priority Specs (jeblair)
19:11:19 I left one too which expanded on pleia2's
19:11:27 #link https://review.openstack.org/#/c/100363/
19:11:58 i think that one's probably ready for approval unless more people want to go through it?
19:12:10 I was happy with it
19:12:19 * nibalizer here to answer questions
19:12:21 or unless nibalizer wants to address any of the suggestions on that last patchset
19:13:01 ill update it for those suggestions
19:13:14 clarkb: did you want to talk more about the symlink?
19:13:31 nibalizer: not here, if you update the spec about what that is for I think that is enough
19:13:35 okay sweet
19:13:36 will do
19:13:44 nibalizer: also, "Set hiera.yaml appropriately to source both dirs in order" -- does that need more detail too?
19:13:58 that's an infra-root action, right?
19:14:15 jeblair: no hiera.yaml will be managed by puppet
19:14:24 it should be a file resource
19:14:31 ok nevermind then. it can be documented in the puppet
19:14:44 nibalizer, as a side note, we use hieradata structure on forj.io, we manage this with puppet, no symlinks required
19:14:54 nibalizer: so we'll wait for your update about the rationale for the symlink, then it looks like it's ready to approve
19:15:06 okay cool
19:15:38 wenlock: maybe you could weigh in on https://review.openstack.org/#/c/100363 after nibalizer writes the update?
19:15:47 wenlock: let's follow up after the meeting
19:15:49 yes, i'll read up on it today
19:16:09 wenlock: that way we'll know if we're doing something different than you are that requires it, or if we've missed something and don't really need it :)
19:16:21 nibalizer, sounds good, we can point you to our source on that
19:16:31 #link https://review.openstack.org/#/c/99990/
19:16:58 i think this is probably ready modulo syntax errors?
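[Editor's note] A rough idea of the "hiera.yaml sourcing both dirs in order" item discussed above, in hiera 1.x syntax — the datadir and hierarchy paths are assumptions for illustration, not what the spec mandates:

```yaml
# Hypothetical hiera.yaml, managed by puppet as a file resource.
# Lookups consult the private hieradata first, then the public defaults.
---
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet
:hierarchy:
  - "hieradata/%{::environment}/common"        # private data dir
  - "public-hieradata/%{::environment}/common" # public data dir
```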
19:17:08 i think at this point that one is us fighting with restructured text parsing
19:17:10 jeblair, i'd like to propose that you use librarian-puppet as option in install_module.sh to manage the repos
19:17:19 I need to rereview it with the new problem statement but yes I think it is pretty close
19:17:30 wenlock: I will -2 that :)
19:17:31 but i think for the ideas there we're largely getting consensus
19:17:35 wenlock: we can talk about why after the meeting
19:17:37 i think i cleaned up all the syntax errors, but the work items could use a re-review
19:17:41 jeblair +2 for subtree workflow too, this worked nice on our side
19:17:59 clarkb, yes, i'd like to understand how you will manage tags/revs, etc.
19:18:04 jesusaurus: it's currently failing tests
19:18:15 nibalizer: I agree with the content of 99990 if we can get the rst parsing figured out
19:18:34 jeblair: that's an old test, it should get through the check queue in like 6 hours or so
19:18:38 oh ok
19:18:58 #link https://review.openstack.org/#/c/110730/
19:19:03 oh no, i have unaddressed comments
19:19:28 jesusaurus: i added some comments to the latest patchset as well (i had them in draft for an earlier one)
19:20:15 anyway, i think resolution to the comments already in that one is straightforward, so please go ahead and review with that in mind
19:20:16 fungi: thanks
19:20:36 jeblair: i have some more in draft i need to add too, though i can port them forward to the next iteration if i don't get to it before you update
19:21:28 #link https://review.openstack.org/#/c/110793/
19:21:55 a few updates noted inline there
19:22:03 jeblair: I think you need to make at least one edit to 110793 as well
19:22:06 jeblair: but it is pretty close
19:22:24 probably the main thing is that there's a section where i list 3 options for openstack-manuals
19:23:21 option 2 is yucky. option 3 is probably the default option because it doesn't really change anything from the docs pov
19:23:54 option 1 actually simplifies the jobs somewhat, at the cost of some extra cpu time
19:24:00 I am a fan on 1
19:24:02 s/on/of/
19:24:21 i should probably work with AJaeger on quantifying that and seeing what the impact would be
19:25:13 anything else on these specs?
19:25:37 yeah, i think i'm really okay with any of the three options, keeping in mind that eventual optimization with afs will probably trump any of them for efficiency anyway
19:25:57 yup
19:26:07 so simpler==better for now probably
19:26:26 #topic Translations demos active, sent to Daisy (pleia2)
19:26:51 this is mostly an FYI
19:27:04 we have demos up for both zanata on wildfly and pootle 2.6
19:27:32 pleia2: do you have an idea of the relative ease of puppeting those two options?
19:27:43 note the pootle 2.6 demo has not had translate-dev dns updated to point at it because we lack openid and I want to keep it in a position where it's really just a demo and not a dev server
19:27:53 we have step by step instructions for how they both were deployed in etherpads, but neither of them will be easy
19:28:23 zanata is a bit trickier since the wildfly support is not official yet, so it's a bit hacky as they continue to work on it
19:28:32 I think the biggest thing from pootle side is that we will have to run the various django managementy commands some of which expect human input (may need a graphite like hack for that)
19:28:32 pleia2: we do run one django app via puppet (graphite)
19:28:33 pleia2: also a gut feel for which would require more ongoing care and feeding (infra core effort)?
19:28:57 fungi: zanata, since we don't work with jboss/wildfly today, and we do work with django
19:29:14 I do have a couple concerns about pootle. The first is allauth makes the openid stuff harder not easier :/ and second the UI is thoroughly unintuitive at least to me
19:29:26 but for the second thing I defer to the translation team(s)
19:29:46 how long do the translation folks need to complete their assessment?
19:29:57 I haven't looked deeply into zanata's openid support, but right now in our demo it is taking the https://launchpad.net/~lyz addresses as logins (we'll need to simplify and restrict this)
19:30:07 clarkb: what do you think needs to happen for openid?
19:30:25 pleia2: in either case, i think nothing short of openid sso is acceptable
19:30:31 anteaya: uncertain, we're getting to the point in the cycle where they will be really busy with translations work
19:30:32 clarkb: for pootle
19:30:40 pleia2: fair
19:30:46 jeblair: allauth needs to support a single openid provider (there is an open bug for this). also allauth and/or pootle need to learn how to do a single type of auth
19:31:08 jeblair: today it looks like you always get local auth in addition to whatever allauth other mechanisms you have enabled
19:31:26 as far as evaluation, Daisy has admin on both systems and I'll be working with her to help present the demos to the team on a schedule she determines
19:31:35 there is an open bug against pootle for the second thing which I need to follow up on and possibly file a bug against allauth for
19:31:38 yay Daisy
19:31:40 if anyone else wants admin, lmk
19:32:00 pleia2: you're a great admin :D
19:32:04 pleia2: do you have any idea how receptive the pootle dev(s) would be to this kind of work?
19:32:32 jeblair: they've really been leaning on django for all openid stuff, clarkb has some open bugs that they've responded to
19:32:50 jeblair: pleia2: ya I think they would be receptive but have already said go fix it in allauth
19:32:56 yeah
19:34:04 how about zanata? what's it take for openid there?
19:34:20 it works, haven't looked into restricting to only openid
19:34:37 i should say openid sso
19:34:48 cause it sounds like right now, it asks for your openid, right?
19:35:04 http://15.126.226.230:8080/account/register
19:35:24 yeah, so where it wants openid just put in your http://launchpad.net/~user address
19:35:34 and it also has the "and a local account" problem
19:35:41 yeah
19:36:22 well, i added openid sso to gerrit; i could probably add it to zanata too :)
19:36:25 I don't know what mechanism they're using for this, but they've been eager to help us get support for other things we need
19:36:34 even better if they do it :)
19:36:46 i also added it to pootle, but that's neither here nor there
19:37:51 I think that's it for this week, we'll tackle issues down the road as we come to them and the translations team gets a better idea of what works for them
19:37:56 i think from our pov, openid-sso is critical, and ease of installation/maintenance is the next most important
19:38:02 * pleia2 nods
19:38:24 ++
19:38:26 zanata install steps: https://etherpad.openstack.org/p/zanata-install
19:38:34 #link https://etherpad.openstack.org/p/zanata-install
19:38:55 pootle install steps (down at line 47 and below): https://etherpad.openstack.org/p/pootle-250-upgrade
19:39:21 #link https://etherpad.openstack.org/p/pootle-250-upgrade
19:40:09 we probably have some anti-patters in puppet for gerrit that we can crib for some of the zanata stuff
19:40:13 oh, and zanata will probably be available via ansible..recipes? so hopefully we'd be able to convert them to puppet
19:40:14 anti-patterns
19:40:45 pleia2: cool, thanks
19:40:55 #topic Open discussion
19:41:08 so something has come out of the reviews so far on my patch
19:41:27 what are the expectations for third party folks regarding the two new lists
19:41:30 the upcoming renames list has the dashboard puppet module on it, proposed to go to openstack-attic... that can come off the list right?
19:41:41 I want all third party folks to subscribe to both lists
19:41:54 to announce to keep alert for their system if it is disabled
19:42:00 to request to help me out
19:42:18 fungi: yeah, we should just merge a change to readme saying it's dead
19:42:28 https://review.openstack.org/#/c/116989
19:42:53 jeblair: i'll propose a bunch of "it's dead jim" patches to projects in the same boat in that case
19:43:09 anteaya: i think that would be useful. i think announce should be considered a requirement (i don't intend on policing it); requests is a nice-to-have
19:43:12 only if I can call you bones, fungi
19:43:25 jeblair: I can live with that
19:43:39 ++
19:43:43 I'm not going to police either list
19:43:55 but if they don't know their system is disabled it is on them
19:44:05 i think announce should be a very strong suggestion, since if we're making changes you need to be aware of or taking your systems offline, that's where you're going to find out about it
19:44:09 anteaya: exactly
19:44:34 fungi: I tried to capture that strong suggestion in the wording of my patch
19:44:34 I vote requirement
19:44:40 krtaylor: no
19:44:40 and if you aren't subscribed to the announce list or aren't paying attention, then it's your problem not ours
19:44:53 requirements are basis for disabling a system if they aren't met
19:45:01 anteaya, why not?
19:45:15 if they aren't subscribed to announce disabling their system isn't going to be my action
19:45:39 anteaya: it will also be where we send notifications of policy changes (lack of adherence to which would get systems disabled too)
19:45:48 jeblair: agreed
19:46:11 I am thinking more in terms of announcements that CI teams will want to know, that is important
19:46:19 what jeblair said
19:46:22 fungi: also I have tried to capture the strong wording in the message on the landing page for the list
19:46:32 krtaylor: right
19:47:21 so why wouldn't it be a requirement? we need a global communication channel
19:47:38 requirements are what gets your system disabled if you don't have them
19:47:49 or not granted in the first place
19:47:58 I am not going to disable a system if they don't subscribe to a mailing list
19:48:40 they are foolish if they don't, but that is on them
19:48:42 I don't think that would be a problem, but listing it in the requirements section would be ok
19:48:49 I disagree
19:48:56 however they may get disabled because they missed announcements on that mailing list, or they might not find out in a timely manner that they were disabled because we reached out to them via that list
19:49:22 fungi: true
19:49:28 again their responsibility
19:49:43 some of the onus has to be on them for their actions
19:50:24 true, but not a good track record of that, at least initially
19:50:30 no
19:50:51 so anyway I hope that clarifies my patch
19:51:19 I am happy we have the lists, hopefully everyone will sign up
19:51:36 ++
19:53:44 well, if that's it, i reckon we can end early
19:53:56 oh I did the tox and trusty stuff
19:53:58 it went mostly ok
19:54:07 oh cool
19:54:13 it went remarkably well, i thought
19:54:21 glance hiccuped and so did a couple of our tools and stackforge puppet
19:54:22 yay tox and trusty
19:54:28 but considering we have several hundred projects it went well :)
19:54:53 :)
19:54:54 that's it
19:54:57 and some review teams got a bit of a learning experience about prioritizing prerequisite changes for scheduled infra activities
19:55:06 haha
19:55:36 but we are future proofed until tox does their next release
19:55:50 great
19:55:53 clarkb: is there more to do with trusty? eg, zuul layout?
19:56:06 hey. is that open discussion yet? :D
19:56:16 jeblair: nope, not unless the stackforge puppet team wants to move puppet 2.7 back to precise
19:56:19 hashar: has been for quite some time :)
19:56:23 great
19:56:23 clarkb: yeaaaaa
19:56:27 so i think that gate is busted
19:56:35 nibalizer: even with mgagne's fix?
19:56:41 did that land?
19:56:43 yes
19:56:43 https://review.openstack.org/#/c/116915/ ?
19:56:45 so for those that don't know me, I am the continuous guy at wikimedia foundation and I have basically copy pasted Zuul setup to our infra.
19:57:11 hashar: and improved zuul :)
19:57:13 I would like to hereby publicly thank you in the name of me and the wikimedia foundation folks for all your hard work supporting third party installation
19:57:16 clarkb: its like i was saying yesterday, 'double puppet install' isnt the problem, ruby 1.9.3 is the problem
19:57:25 from Zuul (which is awesome) to kindly maintaining python-jenkins
19:57:41 hashar: oh, you're very welcome!
19:57:50 hashar: you're welcome, and thanks for all your help too!
19:57:51 clarkb: so much lag, no result yet on my test :-/
19:57:53 hashar: thank you for using it and helping us make it better
19:58:11 hashar: thanks in turn for your contributions!
19:58:13 our jobs are stressful, and little time is spent to say thanks. So here you have: merci beaucoup !
19:58:35 thanks hashar
19:58:46 -> Thank you for helping us help you help us all.
19:58:52 :)
19:58:59 and we're going to start using hashar's zuul-cloner soon :)
19:59:00 and I got Zuul cloner deployed in production last week. It is now voting as of today ! :D
19:59:04 mgagne: well put! :)
19:59:14 hashar: also, in a broader context, thanks for keeping mediawiki and by extension wikipedia working. i use them both a lot
19:59:26 (((that is on Wikimedia production, not OpenStack! )))
19:59:28 yes
19:59:30 I used that too
19:59:38 hashar: nice!
19:59:40 fungi, hashar: ++
19:59:42 with GLaDOS' voice
19:59:47 yeah MediaWiki is quite fun :]
19:59:53 hashar: and thank you!
20:00:08 though it can be scary and is not yet as tested as openstack can be. But we are working on it!
20:00:18 what a nice way to end a meeting :)
20:00:22 #endmeeting