22:03:07 #startmeeting zuul
22:03:08 Meeting started Mon Dec 12 22:03:07 2016 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:03:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:03:12 The meeting name has been set to 'zuul'
22:03:19 #link agenda https://wiki.openstack.org/wiki/Meetings/Zuul#Agenda_for_next_meeting
22:03:30 o/
22:03:31 o/
22:03:33 #link previous meeting http://eavesdrop.openstack.org/meetings/zuul/2016/zuul.2016-12-05-22.02.html
22:03:59 #topic Actions from last meeting
22:04:07 jeblair update nodepool system-config docs with zk info
22:04:16 * jesusaur lurks
22:04:29 i think that merged?
22:04:37 I +2'd it iirc
22:04:44 i believe i did too
22:04:45 yep
22:04:47 so it should be close to merging if not merged
22:04:49 #link http://docs.openstack.org/infra/system-config/nodepool.html
22:04:53 it's published even!
22:04:55 marvellous!
22:05:00 sometimes this stuff works
22:05:08 jeblair merge https://review.openstack.org/406342
22:05:12 jeblair merge https://review.openstack.org/406411
22:05:21 those were merged
22:05:28 pabelanger test zk disaster scenarios with nodepool-builder
22:05:43 pabelanger did that and i think was happy with the result
22:05:57 pabelanger launch nb02
22:06:10 pabelanger: did that as well and i think was happy with that result as well :)
22:06:39 nothing's come crashing down around our ears, so i call that success
22:06:53 all things done!
22:07:00 what's our oldest image?
22:07:10 #topic Status updates: Nodepool Zookeeper work
22:07:10 or how old is the oldest image
22:07:16 (mostly just curious)
22:07:17 clarkb: built or uploaded?
22:07:22 uploaded
22:07:37 since it's uploading where we have struggled the most in the past, and having multiple uploaders should help
22:08:09 01:23:03:58
22:08:20 smells like success to me
22:08:20 yes, happy here
22:08:27 <2
22:08:33 that's right on the nose
22:09:10 the oldest built image is from around then too
22:09:45 yeah, so barring deeper wrongness, the new process doesn't seem out-and-out broken anyway
22:09:59 ya, that's a good indicator of it generally working
22:10:07 those action items were all the pre-production blockers; which of course is why we switched it into production on friday
22:10:41 i think the next thing was "i send out an email", then merge the branch, then switch prod to master, then start using the v3 branch for the next phase of work
22:10:49 also gave us an opportunity to make sure it's all working before we get too far into milestone week
22:10:58 (the spec for the next phase landed last week)
22:11:15 o/
22:11:30 so how about i send out the email today, merge the branch and switch prod tomorrow?
22:11:49 wfm
22:11:52 (and we'll also make a release from master sometime in the near future, timing isn't important)
22:11:55 by switch prod you mean back to master, right?
22:11:58 clarkb: yep
22:12:05 ya that sounds good
22:12:18 #action jeblair send email announcing impending merge and release
22:12:27 #action jeblair switch production to run from master after merge
22:12:44 exciting
22:12:48 we should switch over too
22:12:52 whee!
22:13:27 anything else nodepool related?
22:14:29 #topic Status updates: Devstack-gate roles refactoring
22:14:45 i just put this on the agenda, but did not give rcarrillocruz a heads up about it :)
22:15:18 the base three changes of the stack are ready for review and have my +2 on them
22:15:23 ah nice
22:15:30 but i wanted to start calling attention to it, we should probably start thinking about this in parallel with the other efforts
22:15:38 also, if folks are ok with it, i'm more than happy to continue refactoring other stuff beyond setup_host
22:15:56 it's worth noting something clarkb said in channel about those last week or the week before ...
22:16:02 rcarrillocruz: my only concern is that we have already had a hard time just with the logistics around setup_host, so if we add more stuff it might get harder, not easier
22:16:03 unless dmsimard or others have started doing so?
22:16:16 which is that given the functionality they touch - being thorough/careful on reviews of them is extra important
22:16:18 (it's kind of our "premier" job, so it'll be good to have this ready as a demonstration of what v3 can do when we roll it out)
22:16:19 yeah, not doing it in parallel, just continuing afterwards
22:16:31 mordred: yup, specifically make sure that we don't introduce new behavior that could be a regression
22:17:46 clarkb: ++
22:18:16 also, d-g is very self-testing, so make sure to check out the logs and see what the result looks like (it's important that the logs be easy to follow)
22:18:20 along with that, i'd vote on just having feature parity with what we have now and leaving optimizations for later
22:18:37 ++
22:18:43 (to both things)
22:18:44 rcarrillocruz: yes, incremental changes are good here
22:19:58 at any rate, anyone who knows anything about either openstack or ansible can make a contribution by reviewing these :)
22:20:14 i dunno if that's anyone here
22:20:15 ++, the replacements are easy enough
22:20:24 jeblair: I'd like to give it a look
22:20:39 not really openstack knowledge required, just replacing bash with ansible
22:20:59 does anyone really know bash?
22:21:09 though it at least helps having seen devstack-gate run before and knowing what it is/does ;)
22:21:14 topic:zuulv3 project:openstack-infra/devstack-gate
22:21:20 is there a link to the topic of the PRs?
22:22:07 #link https://review.openstack.org/#/q/is:open+topic:zuulv3+project:openstack-infra/devstack-gate
22:22:15 https://review.openstack.org/#/q/status:open+project:openstack-infra/devstack-gate+branch:master+topic:zuulv3
22:22:17 y that
22:22:21 jeblair: lol. "knowing bash" oy
22:22:38 i know some bash, but don't know d-g bash, it's a variant on its own :D
22:22:56 rcarrillocruz: i concur with that assessment
22:23:01 true -- as the author of the "bash unit tests" in devstack-gate, i will be happy to see them go. :)
22:23:12 jeblair, clarkb: can you either tag or branch the current master before the production merge?
22:23:29 how to know when you should rewrite your shell scripts in another language: you invent shell-based unit testing for them
22:23:42 jamielennox: there should be a tag already from before the zuulv3 work started
22:23:55 fungi: I mean ksh has a debugger even
22:24:03 ph33r
22:24:53 * fungi has some not-so-great memories of debugging the ksh-based userland most of sco was implemented in
22:24:58 #topic Status updates: Zuul test enablement
22:25:00 clarkb: yea, but master has moved since then
22:25:01 clarkb: I do think we merged some things into master between the tag and the zuulv3 stuff - might not be _terrible_ to do 2 back to back: "here is the thing that is before zk - here is the thing that is after"
22:25:12 tags are cheap, after all
22:25:22 ya, no objection from me
22:25:31 #topic Status updates: Nodepool Zookeeper work
22:25:31 thanks!
22:25:36 mordred: it never hurts to tag pre-big change
22:25:46 (though I am likely not going to be doing that tagging/merging)
22:27:19 yeah, i think we can tag a 0.3.1
22:27:31 before performing the merge
22:28:16 mordred: i'm not planning on tagging after the merge until we're ready for a release of the zk stuff
22:28:25 jeblair: works for me
22:28:33 mordred: i don't know what else we might want to do before such a release, but i figure we should take stock before doing it
22:28:47 so will probably ask folks about that next week
22:28:51 ++
22:28:58 #topic Status updates: Zuul test enablement
22:29:40 jamielennox came to me with a question recently that i had difficulty answering
22:31:20 it was related to the zuul trigger, and interacted with the tenants v3 change and some of the reorganization i did earlier in v3 around connections. but also, i've been trying to think about how we want to present an api for extensions to support other systems (eg, github, etc)
22:32:07 * jlk perks up
22:32:30 i had an idea to slightly reorganize the connections/sources/triggers/reporters under a driver
22:33:01 jeblair: unless you have come up with something, i'd vote to drop zuul triggers for the v3 release. My understanding is we don't need them for openstack-infra and i'd prefer to drop non-required things in the breaking change and reintroduce them with more design later when required
22:33:24 i'm partway through a change implementing this; it's not ready for real review yet, but you can see where i'm starting to go if you look at https://review.openstack.org/408849 in zuul/driver/__init__.py
22:34:16 well, if we know it's something we're going to want to support later, reorganizing now to make that easier still seems like a sane choice
22:34:19 I think that's going to make the interface much nicerer
22:34:36 the nicerer the betterer
22:34:38 jamielennox: yeah, i was leaning that way, however, i'm almost certain they'd end up back in sooner rather than later, and i think my change will make it easy to keep them and support drivers with similar needs -- as i was thinking about it, i realized that the 'timer' system shares quite a bit of the same issues
22:35:38 anyway, i think what i want to do is finish getting that into a workable state, and then see if folks like the general direction.
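[Editor's note: the driver reorganization described above groups each external system's connection, source, trigger, and reporter behind a single driver object. The sketch below is only a minimal illustration of that idea; the class and method names here are assumptions made for clarity, not the actual API, which is being worked out in the in-progress change https://review.openstack.org/408849 under zuul/driver/__init__.py.]

```python
# Minimal illustrative sketch of the "group connections/sources/triggers/
# reporters under a driver" idea. Names are assumptions, not the real
# interface from change 408849.


class Driver(object):
    """Everything Zuul needs to talk to one external system.

    One driver (e.g. for Gerrit, a timer, or a future GitHub driver)
    owns its connection handling plus the source, trigger, and reporter
    implementations, so global, per-tenant, and per-pipeline state can
    all hang off the same object.
    """

    name = None  # e.g. 'gerrit', 'timer', 'zuul'

    def getConnection(self, name, config):
        """Return a connection to the external system (may be trivial
        for connection-less drivers such as a timer)."""
        raise NotImplementedError

    def getSource(self, connection):
        """Return a source that can look up changes/refs."""
        raise NotImplementedError

    def getTrigger(self, connection, config=None):
        """Return a trigger that turns external events into pipeline events."""
        raise NotImplementedError

    def getReporter(self, connection, config=None):
        """Return a reporter that records results back to the system."""
        raise NotImplementedError
```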
22:35:48 jeblair: i've only looked at that review a little and i think the idea of a driver interface is a nice idea, just thinking it shouldn't be driven by zuultrigger
22:36:21 i think once that's there, we might have some conversations about whether that's exactly the right approach, or if we want to do more things with different interface classes, or abstract base classes, etc.
22:36:38 anyway, +1 WFM
22:36:56 (i can see a couple of different ways of actually specifying the API, all of which still match the general outline i'm drawing up)
22:37:21 jamielennox: yeah, that's a good point. i've been trying to think about it holistically, considering all of the uses i know about
22:37:32 (and can imagine)
22:37:50 like yolanda's url trigger desire
22:38:06 fundamentally, this lets us have global state, tenant state, and pipeline state, so it's very flexible.
22:39:22 anyway, just wanted to let people know about that since it might affect other trigger/reporter/etc work (though the actual migration is pretty simple and mechanical)
22:39:41 on the "might affect other stuff" train, I started trying to re-enable the swift tests
22:39:47 and once it's generally ready, we will want to put some polish on it as it's going to be an externally facing api surface
22:39:54 and found that auth plumbing isn't quite written yet, nor is swift connection plumbing
22:40:08 yep
22:40:16 i'm back in business from vaca
22:40:24 i will get back to it this week SpamapS
22:40:31 for the auth stuff, not swift tho
22:40:51 rcarrillocruz: cool, I will probably move that to blocked and pick it back up when you finish.
22:40:56 ++
22:40:56 SpamapS: yeah, though we do have the swift code from v2, so there may be some pieces that need connecting up
22:41:22 blocking on the auth/secrets stuff makes sense to me
22:41:37 jeblair: seems like rcarrillocruz is more familiar with those pieces, so I may be more productive with something else.
22:42:20 though the swift authentication is a lot more fine-grained than the general auth provisions in the v3 spec
22:42:44 indeed, but one needs a base on which to stand.
22:42:52 since it's zuul handing out per-build credentials for specific subtrees in a container
22:43:14 There's also a question of whether there should be per-tenant swift connections.
22:43:22 lots of stuff that I just don't have in my head yet
22:43:38 so having any of it filled in first should help get it done.
22:44:08 feels a bit like the tests that I find still skipped are more in-depth and require code inside zuul now vs. just reworking of tests.
22:44:09 we also have an opportunity in v3 we didn't have in v2, of having *trusted* swift creds
22:44:18 i wip'd some auth stuff already before going on vaca, https://review.openstack.org/#/c/406382/ , when i get that sorted i'll work on getting the pipeline allow-secrets flag, followed by the actual job auth code handling
22:44:19 which is a good thing.. getting to the meat of it.
22:44:55 ++
22:45:22 #topic Progress summary
22:45:38 * jeblair waits for SpamapS to post link
22:45:41 * SpamapS resists link paste finally
22:45:46 :)
22:46:06 SpamapS: please, the floor is yours :)
22:46:10 oh haha
22:46:14 #link https://storyboard.openstack.org/#!/board/41
22:46:21 So we have _a ton_ in flight.
22:46:49 And a ton of "new" tasks, which are tasks we haven't evaluated yet for one of the other lanes.
22:47:12 But the backlog and Todo are already long enough, I'm not worried about people not being able to find more work to do.
22:47:29 I would like to see more tasks in Todo.. Right now it is all stories.
22:48:00 not really sure why https://storyboard.openstack.org/#!/story/2000767 is still "in progress". the 3304 task in that doesn't seem related to the builder, IMO
22:48:00 SpamapS: i'll take a pass at updating those
22:48:06 jeblair: also your script has been working well, so thank you. :)
22:48:16 \o/
22:48:40 i'm happy to take ansible-based deployment, since i've battled that a bit in the past
22:48:49 and happy to leverage pabelanger_'s roles for it
22:48:52 Shrews: 3305 is still marked as in progress
22:48:56 unless pabelanger_ has interest in taking it?
22:48:58 i.e. https://storyboard.openstack.org/#!/story/2000791
22:49:02 "re-enable nodepool builder and devstack tests"
22:49:29 SpamapS: it's in production. tests are re-enabled, so it's done
22:49:45 rcarrillocruz: sure, I can show you what I've been working on.
22:49:50 Shrews: well then let's mark 3305 as merged. :)
22:50:04 could've sworn we moved this to the Done label last week
22:50:05 k, i'll pick up the assignment then
22:50:06 [and there was much rejoicing]
22:50:12 SpamapS: ^ , taking https://storyboard.openstack.org/#!/story/2000791
22:50:42 Shrews: Notice there's no Done lane anymore. Just mark the task as Merged and it will disappear.
22:50:53 Shrews: we might have moved the card, but my auto-script might have moved it back because it has an in-progress task; updating the task to 'merged' will cause it to be auto-removed from the board
22:51:14 * Shrews just avoids SB until the weekly meetings
22:51:16 :)
22:51:18 I updated the description of the board, but maybe we weren't clear enough in here. :-P
22:51:58 The board is now mostly a reflection of SB task status, except we use the lanes to break some statuses out into finer-grained statuses so people can find appropriate tasks to work on.
22:52:07 (and so we can see how busy / not busy things are)
22:52:50 right now I'd say our WIP is a bit high, and we should make sure cores double down on reviewing to keep the in-progress list manageable.
22:53:12 I also don't think we have enough in Todo.
22:53:30 fine to grab things off the backlog, but Todo is where we should drop things that feel like they're "ready to go next"
22:53:38 and i want to say a big chunk of the wip is the devstack-gate changes
22:53:48 jeblair: any thoughts on ansible deployment or any other tasks/stories moving to Todo?
22:54:43 BTW, I'm removing 2000767 (the story) from the board. Stories (not tasks) are managed manually.
22:54:48 (woot again? ;)
22:55:19 SpamapS: i think there's nothing blocking the ansible deployment, however, it's not a blocker itself for us getting into production. so we should move it to Todo if we've run out of higher-priority things
22:55:37 jeblair: ACK, I'll leave the story in Backlog
22:56:14 okay, one more quick topic:
22:56:21 #topic Zuul v3: update with Ansible role information
22:56:31 jeblair: what about "make change a subclass of ref"? Isn't that kind of a big refactor that we want to get in there soon?
22:56:37 #link https://review.openstack.org/#/c/381329/
22:57:14 SpamapS: yes, that's probably a good one
22:57:24 SpamapS: i'm hopeful it won't actually be too intrusive
22:57:30 ACK, ok, carry on :)
22:57:41 this is, i think, the last outstanding spec update
22:57:57 it adds some detail around how we expect roles to work
22:58:25 i think with that in place, we'll be ready to actually write the "really run an ansible-based job for real" code, as i like to think of it. :)
22:58:28 and it looks like it's getting proposed tomorrow for an infra council vote
22:58:38 fungi: indeed!
22:59:06 i gave a heads-up about this last week, but no time like the present to take a look at it
23:00:03 and that's all we have time for
23:00:06 thanks everyone!
23:00:09 #endmeeting