19:01:58 #startmeeting infra
19:01:59 Meeting started Tue Feb 17 19:01:58 2015 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:02 The meeting name has been set to 'infra'
19:02:03 #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:07 #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-02-10-19.01.html
19:02:19 o/
19:02:21 #topic Announcements
19:02:25 o/
19:02:36 o/
19:02:37 i have two things i'd like to start off with...
19:02:55 first, a recap of the meeting format we're currently using
19:03:07 o/
19:03:25 in general, what we're trying to do here is make sure that things that need discussion get time here
19:03:54 the agenda is always open for people to add topics that they'd like to talk about, get agreement on, brainstorm on, just let people know about, etc...
19:04:08 the other thing we do is identify some priority efforts
19:04:41 usually things that affect a large group; things we've identified (perhaps at summits) which affect the openstack project as a whole...
19:05:08 and we identify individual changes that might be blocking those efforts that need high-priority review
19:05:40 one thing that i don't want this meeting to become is a place where we list individual changes that need review
19:05:51 o/
19:06:07 by my reckoning, there are usually a few hundred infra changes outstanding
19:06:27 for large values of "a few"
19:06:56 ya I have been trying to dig myself out of that hole recently. It's hard to make progress
19:07:00 and part of the priority efforts is an attempt to deal with that by making sure that we don't sacrifice progress on efforts we feel are most important by getting swamped by the deluge
19:07:22 so anyway, please keep that in mind when you add things to the agenda
19:07:31 #info meeting should be used for discussion and for identifying changes related to priority efforts
19:07:31 #info please refrain from simply listing changes that you would like reviewed
19:07:56 the other announcement is much more fun
19:08:05 #info pleia2 has been nominated for red hat's women in open source award
19:08:05 #link http://www.redhat.com/en/about/women-in-open-source
19:08:10 yay
19:08:15 congrats!
19:08:18 woot woot
19:08:19 pleia2: congratulations on your nomination! :)
19:08:28 nice! congrats
19:08:28 congrats pleia2 !
19:08:34 awesome!
19:08:42 congrats!
19:08:44 i'll note that none of her competition lists as many years of open source experience either... she's a shoo-in!
19:09:04 #topic Actions from last meeting
19:09:20 fungi collapse image types
19:09:27 it's in progress
19:09:32 fungi: i think this is an ongoing thing... but you've definitely started!
19:09:41 #link https://review.openstack.org/156698
19:09:51 i'm manually testing that on a held worker right now
19:09:58 it's still missing at least a few packages
19:10:09 but yeah, it's not going to be a quick transition
19:10:25 fungi: cool... we'll come back to that in a minute then. thanks.
19:10:25 we'll need to pilot it a little and then migrate somewhat piecemeal i think
19:10:29 yep
19:10:33 jhesketh look into log copying times
19:10:33 jhesketh move more jobs over
19:10:53 So more jobs over is here https://review.openstack.org/#/q/status:open+project:openstack-infra/project-config+branch:master+topic:enable_swift,n,z
19:11:09 But unfortunately I haven't looked into the copying times yet
19:11:11 jhesketh: I just got through those changes, -1 on the last one due to indexes not being right if you run the swift upload multiple times
19:11:20 #action jhesketh look into log copying times
19:11:31 jhesketh: but the others I am +2 on, approved the first but wanted more consensus we are ready for the other jobs
19:11:41 clarkb: thanks
19:11:55 sdague look at devstack swift logs for usability
19:12:04 sdague: did you have a chance to look at those?
19:12:10 The copying times probably require looking into what the requests library is doing
19:12:44 if it gets opaque, sigmavirus24 is the requests maintainer and always very helpful
19:12:50 jeblair: yeh, they seemed to be roughly the same fetch times from what I saw
19:12:53 hi
19:12:56 sdague gave me some feedback, which I've addressed here
19:12:58 https://review.openstack.org/#/q/status:open+project:openstack-infra/project-config+branch:master+topic:swift_log_index_improvements,n,z
19:13:00 at least within margin of error
19:13:07 sdague: and the index listing format, etc, was okay?
19:13:21 sigmavirus24: will probably ping you post meeting if that works to talk about requests
19:13:22 fungi: oh cool, thanks
19:13:26 jeblair: oh, I provided jhesketh with feedback
19:13:28 clarkb: :thumbsup:
19:13:36 I haven't looked at those patches yet, will put that on my queue
19:13:36 sdague, jhesketh: oh cool
19:14:01 zaro to chat with trove folks about review-dev db problems
19:14:06 Still need to decide how to do the help pages for logs
19:14:13 zaro: i think we figured this out, right?
19:14:22 correct.
19:14:36 since i upped the wait timeout on the review-dev instance, i haven't seen it recur
19:14:36 i believe it's fixed as well, right fungi ?
19:14:51 i concur with fungi
19:14:56 however we still need to apply similar updates to other trove instances for consistency
19:15:07 the default timeout was set differently based on when our instances were created, and indeed, review-dev was set to a very low value
19:15:14 we can get into details as to what that entails after the meeting
19:15:17 so we'll want to be cognizant of that in the future
19:15:37 zaro, fungi: thanks!
19:15:38 #topic Priority Specs
19:16:05 i took a quick look at the specs, and i think there are at least two we should look at and get merged soon
19:16:15 #link openstack_project refactor https://review.openstack.org/137471
19:16:28 #link In tree third-party CI https://review.openstack.org/139745
19:17:00 i think both will help us continue on the path to downstream reusability... thereby putting us all in the same boat and in a better position for collaboration
19:17:12 * nibalizer agree
19:17:25 +1
19:17:29 will take a look at those today
19:17:31 on a related note, should the (formerly priority) puppet module split spec get moved to implemented? if so i'll whip up a quick change to take care of that
19:17:34 ++
19:17:42 fungi: +1
19:17:54 (I was ++-ing jeblair - but also agree with fungi )
19:17:59 fungi: yes.... i think there is one last thing in the storyboard, let me check
19:18:16 nibalizer: oh, cool. we can dig in after the meeting in that case
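
(For context on the log-copying action item above at 19:11-19:12: a minimal, hypothetical sketch of timing a single log upload through python-requests. The Swift URL, token, and file name are placeholders, not the values the upload jobs actually use.)

    import time
    import requests

    # Placeholders -- a real run would use the job's actual Swift endpoint
    # and credentials rather than these made-up values.
    SWIFT_OBJECT_URL = 'https://swift.example.com/v1/AUTH_logs/console.html'
    TOKEN = 'REPLACE_ME'

    def timed_upload(path):
        # Upload one log file and report how long requests spends on it.
        with open(path, 'rb') as f:
            start = time.time()
            resp = requests.put(SWIFT_OBJECT_URL, data=f,
                                headers={'X-Auth-Token': TOKEN})
        elapsed = time.time() - start
        print('%s -> HTTP %s in %.2fs' % (path, resp.status_code, elapsed))
        return elapsed

    if __name__ == '__main__':
        timed_upload('console.html')
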
19:18:25 #topic Priority Efforts (New efforts?)
19:18:54 so our current priority efforts are: swift logs, nodepool dib, docs publishing (waiting on swift logs completion), and zanata
19:19:01 i think we have room to add 2 or 3 more
19:19:06 jeblair: askbot?
19:19:09 especially since swift logs is hopefully winding down
19:19:14 we are running without backup
19:19:19 would the two new specs be candidates?
19:19:37 mrmartin: i think askbot is a good candidate due to the importance to the community, yeah
19:19:47 anteaya: i think so; i'd probably lump them together
19:19:49 the spec is in the review queue
19:20:01 jeblair: makes sense
19:20:03 and the patches also
19:20:17 askbot and the puppet items make good candidates imo
19:20:26 but it requires some effort from the infra side: launch instance, migrate db, etc.
19:20:28 fungi's work on images maybe too...
19:20:29 isn't gerrit upgrade in PE somewhere?
19:20:38 zaro that's a good one
19:20:41 jeblair: yeah, that's sort of taking on a life of its own outside of the dib work
19:20:59 fungi: jeblair though still very tightly coupled to the dib efforts
19:22:26 so, how about: add gerrit upgrade, askbot, and the puppet work; include image consolidation in the dib item?
19:22:40 I can buy that
19:22:50 sounds good to me
19:22:56 +1
19:23:10 wfm
19:23:36 #agreed new priority efforts: gerrit upgrade; askbot; third-party and openstack_project puppet efforts
19:23:37 +1
19:23:52 #agreed image consolidation to be included in nodepool dib effort
19:24:05 #topic Priority Efforts (Swift logs)
19:24:16 so i think we probably covered most of this in the actions section
19:24:23 #link https://review.openstack.org/#/q/status:open+project:openstack-infra/project-config+branch:master+topic:swift_log_index_improvements,n,z
19:24:26 #link https://review.openstack.org/#/q/status:open+project:openstack-infra/project-config+branch:master+topic:enable_swift,n,z
19:24:32 i'll just throw those in here for reference
19:24:32 Yeah I've got nothing else this week sorry
19:24:52 np
19:25:00 #topic #topic Priority Efforts (Nodepool DIB)
19:25:18 #topic Priority Efforts (Nodepool DIB)
19:25:37 :)
19:25:59 I have verified image upload and launch to work on both clouds
19:26:03 at least on ubuntu
19:26:21 mordred: is this with current or pending nodepool changes, or with shade?
19:26:37 with shade - yolanda has started hacking on my pending nodepool changes to get them finished up
19:26:38 There are still a handful of nodepool bug fixes that have come out of the dib work up for review. I have added tests for nodepool commands too. Reviews would be great if only because a less buggy nodepool makes the dibification less confusing
19:26:48 yes, i wanted to raise a pair of questions
19:26:55 i'm hoping to finish banging out the package list later today and get an experimental job landed to run nova python27 unit tests on devstack-trusty nodes
19:27:11 so i was talking with mordred about some ensureKeypair method, jeblair, do you know more details about it?
19:27:13 fungi: wow!
19:27:24 fungi: woot
19:27:35 well, i think what i have in wip is already close, just need to finish confirming the missing bits
19:27:49 yolanda: can you expand for the rest of us?
19:27:52 jeblair: I was telling yolanda about our discussion around keypair management and realized that I had not written down good notes - so I think it might be worth the three of us syncing up on that topic again
19:28:10 clarkb: basically right now the logic for handling keypairs is in nodepool.py not provider_manager.py
19:28:33 sure, i think the general thing is that nodepool should not require a pre-existing keypair
19:28:34 so in figuring out how to put an api in front of it similar to the one we use for floatingip
19:28:35 so i moved that logic to provider manager, but i don't feel that's what we will need
19:28:45 jeblair: ++
19:29:05 seems like a good improvement
19:29:09 currently, it creates one per instance; i think it would be okay to create one and cache it, but then it would also need to keep track of it
19:29:34 How does this interact with dib?
19:29:36 yup. also - it's entirely feasible that nodepool might not need to create one at all - if one considers the dib case
19:29:36 (that can be done, it's just a bit more work)
19:29:40 would hopefully minimize future keypair leaks
19:30:00 where the keypair can quite happily be directly baked into the image
19:30:09 so the nodepool logic should almost certainly handle those deployment choices
19:30:19 yeah, it may not be worth a lot of effort for us
19:30:19 mordred: the only issue with that potentially is that downstream consumers of our images wouldn't have access to our private key(s)
19:30:30 mordred: So, we do still need the other thing
19:30:31 right now, if we don't pass a key name, it generates one based on hostname, then adds it
19:30:35 if the image fails, it removes it
19:30:41 clarkb: right. turns out dib already has an element which says "please inject the key from the user that built this image"
19:30:58 mordred: yes but that doesn't fix the issue for people taking our images that we prebuilt
19:31:00 clarkb: so there are ways we could choose to do that and still be friendly to downstreams
19:31:01 mordred: but if we publish our images, other people can't reuse them easily
19:31:04 clarkb: this is correct
19:31:15 mordred: so if we do that while running as "the nodepool/jenkins/whateverwewantocallit user" then we and downstream users should be set
19:31:32 jeblair: yes - other than the direct-binary-reconsumption thing
19:31:40 jeblair: not downstream users that consume the image directly
19:31:48 I don't think we can have nodepool stop managing keys
19:31:48 yeah, was responding to the earlier thing
19:32:02 could be toggleable but the key management is still important
19:32:06 yes
19:32:14 clarkb: it's important for snapshot images, yes
19:32:14 I totally think nodepool needs the ability to manage keys
19:32:30 I'm just saying that we may not choose to use that feature in our config - although we _might_ choose to use it
19:32:42 so nodepool needs to grok that someone can make a choice about that
19:33:10 any other dib related things to talk about?
19:33:20 mm, related to the same work
19:33:27 there is a missing feature for get capabilities
19:33:32 this should be created in shade, right?
19:33:35 yes
19:33:41 get capabilities?
19:33:47 if shade is missing a thing that nodepool needs, it should be added
19:33:56 clarkb: ask the cloud what its capabilities are
19:34:16 I see, floating ips and so on for example?
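
(A minimal, hypothetical sketch of the "ensureKeypair"-style behaviour discussed above at 19:28-19:32: look up a provider-side keypair by name and create it only if missing, so the pool neither requires a pre-existing keypair nor generates one per instance. The function name and the 'nodepool' key name are illustrative assumptions; nodepool's real logic lives in nodepool.py/provider_manager.py and differs in detail.)

    import novaclient.exceptions

    def ensure_keypair(nova, name='nodepool', public_key=None):
        # `nova` is an authenticated python-novaclient v2 client.
        # Reuse the named keypair if the cloud already has it...
        try:
            return nova.keypairs.get(name)
        except novaclient.exceptions.NotFound:
            # ...otherwise create it once; callers would cache the result
            # (the bookkeeping mentioned at 19:29:09) so it can be cleaned
            # up later instead of leaking.
            return nova.keypairs.create(name, public_key=public_key)
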
19:34:23 in nodepool there is some code that checks if cloud has os-floating-ips capabilities
19:34:27 yep
19:34:34 right
19:34:36 HOWEVER
19:34:42 nodepool will stop doing that particular thing
19:34:52 since shade does the right thing in that case already
19:35:04 i just removed that from nodepool directly
19:35:08 yay
19:35:10 also how is this related to dib? (I am trying to make sure I understand if these things are requirements or nice to have or whatever)
19:35:12 yah. but it's an excellent example
19:35:18 but i wanted to check whether that was really fine, or if it needs some extra work on shade
19:35:39 clarkb: it's related to dib because the logic to deal with glance across clouds is in shade - and it's VERY ugly
19:35:52 mordred: specifically get capabilities though
19:36:02 mordred: we don't need to accommodate capabilities today for dib or do we?
19:36:14 no - but we do need to deal with it in the shade patch
19:36:19 I see
19:36:41 so the question winds up being "add get capabilities support to shade" OR "remove need for it by adding logic to shade"
19:36:50 o/
19:36:52 I prefer the second, but the first could also work/be better
19:37:08 are there more use cases that need capabilities?
19:37:10 depending on scope/effort and how general the check is vs. specific to nodepool logic
19:37:20 yolanda: the only other one I know of is the keypair extension
19:37:27 i think the end-state is that nodepool should not need it; it should all be in shade...
19:37:40 so if that's easy, go with that; if it makes something too big and complicated, then we can do it piecemeal
19:37:48 ++
19:37:56 see, jeblair says things clearly
19:38:20 mordred: he is a good translator for you, keep him close
19:38:31 * mordred hands jeblair a fluffy emu
19:38:39 #topic Priority Efforts (Migration to Zanata)
19:39:04 I guess pleia2 is not here today, she uploaded a new patch, requires review / testing
19:39:13 #link https://review.openstack.org/#/c/147947/
19:39:35 #topic Priority Efforts (Askbot migration)
19:39:38 mrmartin: yeah, she's travelling for several speaking engagements at a couple of free software conferences
19:40:10 #link https://review.openstack.org/154061
19:40:18 that's probably the first thing we ought to do -- review that spec :)
19:40:19 ok for askbot, I guess the tasks are clear, if anything comes up, let's discuss that.
19:41:41 #link https://review.openstack.org/140043
19:41:46 #link https://review.openstack.org/151912
19:41:58 also i did update the https cert for the current non-infra-managed server and stuck it in hiera using the key names from the proposed system-config change
19:42:14 and it looks like there's a solr module we can use without needing our own, which is good (i hope)
19:42:17 fungi: thanks
19:42:34 #topic Priority Efforts (Upgrading Gerrit)
19:42:36 jeblair, yes, with some limitations the solr works, I've tested it
19:42:51 so we set a date last week and sent an announcement
19:42:54 for the trusty upgrade
19:43:13 i believe we're planning on doing that first, and then upgrading gerrit to 2.9 later?
19:43:25 3rd party folks paying attention seem to know about the ip change coming up
19:43:32 ya since 2.9 needs a package only available on trusty (or not available on precise)
19:43:32 so here are the changes for that: https://review.openstack.org/#/q/topic:+Gerrit-2.9-upgrade,n,z
19:43:42 that's for after moving to trusty
19:43:48 clarkb: right, but specifically i meant not doing both at once
19:43:59 I added it to tomorrow's cinder meeting
19:44:04 how much time do we need between the os upgrade and the gerrit upgrade?
19:44:32 asselin: cool
19:45:43 if we don't have a reason to wait, why would we wait?
19:45:48 will the server be up beforehand to test, and the swith made only on the date specified?
19:45:59 asselin: yes
19:45:59 *switch
19:46:06 asselin: it will not be available to test
19:46:10 well, also part of the question is, does only doing the distro upgrade by itself shrink the window we need for the outage (including time needed to roll back if necessary)
19:46:47 but yes, more important is that changing too many things at once makes it harder to untangle where a bug came from
19:46:47 fungi: i don't... i think rollback is probably faster when doing a server switch
19:46:52 i can't seem to find the announcement, could someone please provide a link?
19:46:56 jeblair, it would be good for 3rd party folks to test our firewall settings, if there's "something" on the other end
19:47:07 #link http://lists.openstack.org/pipermail/openstack-dev/2015-February/056508.html
19:48:35 i agree with fungi, maybe we should wait a week between going to trusty and then gerrit 2.9 ?
19:48:35 so if folks don't want to do both at once, then we probably need to let it sit for at least a week or two before we attempt the gerrit upgrade
19:48:55 zaro: can you look at the schedule and propose a time for the gerrit upgrade during our next meeting?
19:49:08 will do
19:49:10 #action zaro propose dates for gerrit 2.9 upgrade
19:49:15 zaro: also take the release schedule into account there
19:49:19 last time we did this we separated OS from Gerrit upgrades. So that plan of action sounds good to me
19:49:36 thanks
19:49:37 #topic supporting project-config new-repo reviews with some clarity (anteaya)
19:49:49 #link https://etherpad.openstack.org/p/new-repo-reviewing-sanity-checks
19:50:05 so I'm frustrated with my reviews on project-config
19:50:12 * asselin notes that users won't have access to new gerrit https during time between new os and gerrit upgrade
19:50:16 anteaya: agree that adding expectations to README in project is a good idea
19:50:23 so I tried to capture some thoughts on this etherpad
19:50:55 anteaya: so far I asked for a patch to governance - you want to have it merged first?
19:50:56 that captures the essence of what I am feeling
19:51:16 well the name is still in doubt as far as the tc is concerned
19:51:17 also note that new project creation has a race
19:51:22 so I haven't really been reviewing any of them
19:51:26 yet it is a reality in git.o.o right now
19:51:43 clarkb: oh, er, that should probably be the first thing we talk about in this meeting :(
19:52:01 clarkb: let's defer that discussion though
19:52:25 does anyone have any other thoughts?
19:52:47 anteaya: my concern is if we make this so complicated reviewers and users won't want to touch it
19:52:55 clarkb: fair enough
19:53:04 I can't bring myself to review project-config right now
19:53:06 traditionally we haven't been stackforge police
19:53:10 if we go with governance patch being a prereq, then using depends-on crd in the commit message could ensure that even if we approve the new project it won't get created until the tc approves the governance change (for official projects)
19:53:15 since my concerns don't seem to be incorporated
19:53:37 fungi, I would be fine with that one
19:53:59 but using CRD for the governance change dep for openstack/ projects seems reasonable
19:54:11 anteaya: your first point is regarding stackforge; i agree with clark, i'm not certain we should care
19:54:27 well actually it is the use of openstack names in stackforge
19:54:42 since once they are used there openstack loses the ability to say how they get used
19:54:44 anteaya: the second point, yes, things that go into the openstack namespace should be part of an openstack project (for the moment at least)
19:54:48 since as you say, we don't care
19:55:09 which as i pointed out in the earlier discussion, there are lots of existing cases of project names in stackforge repo names
19:55:18 anteaya: i feel like in many cases those may be nominative... like "neutron-foo" is the foo for neutron; hard to describe it otherwise...
19:55:28 especially for the config management projects in stackforge
19:55:35 networking-foo is what neutron has been using
19:55:49 anteaya: at your suggestion?
19:55:52 when they have reviewed patches that concern the name use
19:55:56 yes
19:56:02 since they agreed with my point
19:56:14 about losing the ability to use the name for themselves
19:56:20 i'm not certain that i agree that all uses of the word neutron in a project name are incorrect
19:56:23 and determine its value by their actions
19:56:28 when they go from stackforge to openstack as big-tent projects/teams should they also rename themselves from stackforge/networking-foo to openstack/neutron-foo?
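
(A minimal, hypothetical example of the depends-on cross-repo dependency mentioned above at 19:53:10: the project-config commit message carries a Depends-On footer naming the governance change's Change-Id, so Zuul holds the new-project change back until the governance change merges. The project name and Change-Id below are placeholders.)

    Add openstack/example-new-project to the project list

    The governance change adding this project to the official list
    must merge first; the Depends-On footer keeps this change from
    merging until it does.

    Depends-On: I0123456789abcdef0123456789abcdef01234567
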
19:56:37 yeah - but I tend to agree with jeblair - stackforge/puppet-neutron, for instance, I do not think needs any blessing from anyone - it is descriptive, it is puppet modules to deploy neutron
19:56:52 I don't think we need to police that
19:56:59 config repos are different from driver repos
19:57:02 to me
19:57:13 also the amount of work to change puppet-neutron to anything else is nontrivial
19:57:14 that's hard to write a policy around though
19:57:23 but all I am asking for is for someone from the project to weigh in
19:57:25 it seems even weirder that a neutron driver wouldn't use neutron in the name
19:57:31 I can see that - but I don't think it's an area where we want to have an opinion, and should really only have one if we need to
19:57:33 "this is bad, except sometimes it's not" is a reviewing policy nightmare
19:58:12 i feel like policing stackforge project names is deep, deep bikeshed
19:58:23 but all I am asking for is for someone from the project to weigh in
19:58:32 anteaya: you have been asking them to change names
19:58:40 yes I have been
19:58:50 anteaya: I think that implies that someone from the project has the right to an opinion
19:58:53 you even suggest that in your point in the etherpad
19:59:01 and at this point I am asking for someone from the project to weigh in on a patch that uses the project name
19:59:02 that is somehow more valid than someone else's
19:59:19 is that not the point?
19:59:25 what else is the point of the name
19:59:29 and I'm not sure I agree that is the case - as it puts people into a position to make a value judgement potentially unnecessarily
19:59:40 the ptl of the project probably has plenty of things to worry about, and so when we say 'hey need input' they're gonna say 'link to policy doc plz?' and then the problem is right back on us
20:00:09 Hi folks, just watching the flow of things, first time IRC to anything OpenStack. :p
20:00:20 specifically, it's a potential point for corruption and abuse - if someone doesn't like ewindisch, they might be more inclined to say "I don't think nova-docker should get to use nova in its name" - when at the end of the day, it is a docker driver for nova
20:00:43 I see it as abuse the other way
20:00:48 and that is merely a factual statement. now - if it was a request for openstack/nova-docker
20:00:50 anteaya: also once you publish software, I'm basically entitled to write puppet- that's kind of not your call if i do it or not
20:01:03 does that make sense?
20:01:12 then it implies a level of credibility related to openstack/nova and I would then think that the nova-ptl should be involved
20:01:28 how do we do that after the fact
20:01:42 as we have agreed stackforge can do whatever they want
20:02:10 keep in mind that stackforge is just one of many, many hosting options for a free software project. if projectname-foo wants to register its name on github, pypi, rtfd, et cetera we have no control there
20:02:13 yup
20:02:19 correct
20:02:39 yet there is a feeling of attachment to openstack via stackforge
20:02:46 so exercising that control over stackforge seems counter-productive
20:03:04 It would be nice to have some guidelines, e.g. on how to name drivers for nova like nova-docker or plugins for neutron like networking-midonet
20:03:23 to have similarities
20:03:26 I think naming things only has value if the name means something
20:03:36 and the value can easily degrade
20:03:46 yeah - I hear that - but stackforge is very explicitly not the place for that
20:03:46 for example, turbo-hipster?
20:03:53 right
20:04:02 like, it exists to be a place that does not have a governance opinion
20:04:13 right
20:04:26 which is why I feel the way I do
20:04:26 we don't want to make a secondary policy board for stackforge that governs choices people make there
20:04:29 there will always be exceptions to any naming pedigree, places where something doesn't fit or is covered by two rules
20:04:30 let me take the action to add information about CRD on governance patch to the infra-manual. ok?
20:04:32 if I am the only one, that is fine
20:04:35 move ahead
20:04:36 oh, we're way over time too. sdague wasn't going to do a tc meeting today, correct?
20:04:40 I said what I feel
20:04:41 fyi, for anyone looking for the TC meeting, it is cancelled today.
20:04:59 fungi: correct, cancelled, just wanted to make sure no one was hanging out waiting for it
20:05:09 sdague: well, i was, but...
20:05:10 you guys can keep the channel
20:05:13 #action AJaeger update infra manual with info on setting up depends on for CRD between new openstack/* project changes and the governance change accepting that project
20:05:18 and yeah, i thought a few extra minutes would be helpful here
20:05:24 agreed
20:05:37 sorry if I jumped the gun on the CRD thing but it seemed like we had consensus on that point
20:05:42 i think so
20:05:49 i think we're in a good place to continue this in a change review
20:06:00 thanks AJaeger
20:06:18 sorry we didn't get to the other items on the agenda
20:06:32 they'll be up first after priority efforts next time
20:06:51 but if you're curious, there are some links in today's agenda if you want to explore on your own
20:06:55 thanks everyone!
20:06:58 #endmeeting