14:00:03 #startmeeting tripleo
14:00:04 Meeting started Tue Jun 28 14:00:03 2016 UTC and is due to finish in 60 minutes. The chair is shardy. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:06 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:08 The meeting name has been set to 'tripleo'
14:00:09 #topic rollcall
14:00:10 o/
14:00:16 o/
14:00:17 Hi all, who's around?
14:00:17 \o
14:00:18 Hello guys o/
14:00:18 o/
14:00:18 hello
14:00:21 o/
14:00:21 o/
14:00:22 ohai
14:00:22 \o
14:00:25 o/
14:00:40 \o
14:00:40 hey
14:00:44 o/
14:00:46 o/
14:00:57 o/
14:01:18 #link https://wiki.openstack.org/wiki/Meetings/TripleO
14:01:35 #topic agenda
14:01:35 * one off agenda items
14:01:35 * bugs
14:01:35 * Projects releases or stable backports
14:01:35 * CI
14:01:37 * Specs
14:01:40 * open discussion
14:01:52 Anyone have anything else to add to the one-off items before we get started?
14:02:19 #topic one off agenda items
14:02:44 o/
14:02:50 So there are two - first was from me, following discussions with EmilienM re CI outages
14:03:13 we wanted to start some discussion on whether there's a more effective way to spread the load of constantly fixing CI
14:03:34 and also, see if there are things (such as moving onto a managed cloud) which will help reduce the overhead of these tasks
14:03:48 we can cover this now, or in the CI section, I don't mind
14:04:04 we can cover it during the CI topic
14:04:10 a weekly rotation of those willing to make it a top priority for a week at a time?
14:04:32 I just wanted to make sure we start the oooq/ooo-ci discussion sometime during this meeting
14:04:33 o/
14:04:46 slagle: something like that would probably help - I know folks easily burn out if there's too small a pool of people regularly battling the breakages
14:05:06 +1 to easily burning out when it is a small group
14:05:31 yes, the person "on call" so to speak really just makes sure there is a bz filed, someone somewhere is looking at it, communicates the outage, etc
14:05:32 So, yeah that's the other part, weshay indicated there may be more opportunity to reduce fragmentation of CI effort if we were to start using tripleo-quickstart in upstream CI
14:05:34 if there is a noob CI guide I would like to help
14:05:35 weekly volunteering groups?
14:05:37 what do folks think about that?
14:05:40 shardy: right, some people (me included) burned a lot of time recently
14:05:48 I think the tripleo core reviewer team has to be responsible for keeping CI working
14:05:55 * shardy realises we're doing this now not in the CI section, but nevermind, we've started now ;)
14:05:56 shardy: it has a huge impact on the features we wanted to deliver
14:06:14 +1.. any way to get mentored and more involved
14:06:39 Yeah, knowing how to help is tricky
14:06:55 I also would like to propore some automation, dprince & I discussed it last week: something that would tell you what changed between the latest successful job and the current broken situation so we would easily find what broke tripleo
14:07:04 propose*
14:07:07 sanjayu: There isn't, but perhaps having folks willing to help get people involved by helping them learn how to debug etc would be a good idea
14:07:10 if some of the more experienced folks volunteered to mentor 1 newbie each that would really help a lot
14:07:17 +1 keeping CI working, based on weekly groups not to get burned out
14:07:36 +1 to being mentored :)
14:07:40 +1 to weshay's and sanjayu's sentiment.
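(Aside: a minimal sketch of the "what changed between the latest successful job and the current broken one" automation proposed above. It assumes each CI run archives a plain-text listing of installed package versions, one "name version" pair per line; the filenames and that format are placeholders for illustration, not existing tripleo-ci artifacts.)

    #!/usr/bin/env python
    """Sketch: report packages that differ between the last green CI run and
    the current broken one. Input file names and format are hypothetical."""
    import sys

    def load_versions(path):
        # Parse "name version" pairs from a per-job package listing.
        versions = {}
        with open(path) as listing:
            for line in listing:
                parts = line.split()
                if len(parts) >= 2:
                    versions[parts[0]] = parts[1]
        return versions

    def diff_versions(good, broken):
        # Return packages that were added, removed, or changed version.
        changes = []
        for name in sorted(set(good) | set(broken)):
            before = good.get(name, "<absent>")
            after = broken.get(name, "<absent>")
            if before != after:
                changes.append((name, before, after))
        return changes

    if __name__ == "__main__":
        # Usage: python ci_diff.py last_green_pkgs.txt current_pkgs.txt
        good = load_versions(sys.argv[1])
        broken = load_versions(sys.argv[2])
        for name, before, after in diff_versions(good, broken):
            print("%-40s %s -> %s" % (name, before, after))

Run against the two listings, it prints only the packages whose versions moved between the runs, narrowing the list of suspects for a trunk regression.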
14:07:48 weshay: I think we can do that, if there's a commitment those folks can dedicate time to helping
14:07:59 EmilienM, yep, I thought about the same, I can take an action item for this
14:07:59 the problem is that some failures are sysadmin things on systems only a few have access to
14:08:11 weshay, what do you need? should we organize something like a session to get some overall idea about what to check when seeing errors
14:08:12 I think sshnaidm and derek have already built a good relationship and derek has been helping sshnaidm around
14:08:19 so even if we mentor people, they won't have access to the system to restart things or fix space issues, etc
14:08:34 sshnaidm: sure, go ahead
14:08:45 at least initially, but as they become more comfortable and are trusted, maybe more people can be given access to offload
14:08:57 I think any newb should review the bootcamp materials.. first off
14:08:58 +1 ^
14:09:00 EmilienM: ya, I think there are two separate failure scenarios, the one you mention is only fixable by 2 or 3 people
14:09:11 EmilienM: you're correct we could do with more sys admins, but most of our issues don't require sysadmin rights to fix
14:09:26 that situation would be alleviated by a managed cloud
14:09:29 It seems like there are three distinct tasks (1) Keeping the CI cloud running (2) keeping the CI scripts running (3) dealing with constant trunk regressions
14:09:37 there may be other ways of triggering restarts that don't require direct access, like maybe a script can monitor commits somewhere, and certain people give commit access; then you have a log, and not direct SSH access (controls on what you can run, etc)
14:09:47 alternatively sudo could be used to control restart commands
14:09:48 EmilienM: you keep bringing that up, and i think it's a little unfairly negative
14:10:02 trown: no it wouldn't, we'd still need to decide who has access to the squid server and mirror server for example
14:10:04 EmilienM: we've regularly given access to people in the past who 1) ask, 2) show they're trustworthy
14:10:09 (1) may be helped by moving to a managed cloud which I know is planned, but (2) & (3) are likely to remain I think
14:10:12 slagle: well, gearman needed to be restarted a few times lately
14:10:16 slagle: this weekend included
14:10:16 EmilienM: those are about the only requirements
14:10:40 it's better to have somebody from tripleo-core in each weekly group for merging and system access, and any others could join
14:11:12 ya, I think the rotation idea requires one tripleo core reviewer at a minimum
14:11:14 EmilienM: yes, we needed to restart gearman pretty much every day last week, this was fallout from infra's move away from jenkins
14:11:16 I'd like to match panda up w/ someone, maybe that could happen offline
14:11:39 sshnaidm: +1
14:11:42 EmilienM: i'm not saying access wasn't required. i'm saying going on about who does/doesn't have access isn't the root of the problem
14:11:43 EmilienM: and even on a managed cloud, only a subset of people would have access to it (if we had a gearman)
14:11:55 EmilienM: the people who have access are the ones who have participated in fixing ci in the past, and have asked for access
14:12:12 right, root access is required for only a small number of our failures.
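(Aside: a sketch of the "a script can monitor commits somewhere ... then you have a log, and not direct SSH access" idea above, paired with the whitelist that a narrow sudo rule would enforce. The request directory, service name, and overall workflow are hypothetical; nothing like this exists in tripleo-ci.)

    #!/usr/bin/env python
    """Sketch: watch a git-managed directory of restart requests and only
    ever run whitelisted commands, so restarts are logged and reviewable
    rather than done over ad-hoc SSH. Paths and service names are
    illustrative placeholders."""
    import os
    import subprocess
    import time

    REQUEST_DIR = "/srv/ci-restart-requests"   # hypothetical git checkout
    DONE_SUFFIX = ".done"
    ALLOWED = {"gearman": ["sudo", "systemctl", "restart", "gearman"]}

    def pending_requests():
        # Request files that have not been marked as handled yet.
        for name in sorted(os.listdir(REQUEST_DIR)):
            path = os.path.join(REQUEST_DIR, name)
            if (os.path.isfile(path) and not path.endswith(DONE_SUFFIX)
                    and not os.path.exists(path + DONE_SUFFIX)):
                yield path

    def handle(path):
        # Run the whitelisted command named in the request file, then mark it done.
        service = open(path).read().strip()
        command = ALLOWED.get(service)
        if command is None:
            print("ignoring request for unknown service: %s" % service)
        else:
            print("restarting %s as requested in %s" % (service, path))
            subprocess.call(command)
        open(path + DONE_SUFFIX, "w").close()

    if __name__ == "__main__":
        while True:
            # A "git pull" here would pick up newly committed requests; omitted.
            for request in pending_requests():
                handle(request)
            time.sleep(60)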
14:12:48 Ok, so we seem to have agreed on two things here:
14:13:02 1. Set up a rolling rota of folks willing to drop everything when CI breaks
14:13:23 2. Arrange to mentor new folks so they can be included on that rota (along with existing/core folks)
14:13:36 The third question was around tripleo-quickstart
14:13:43 derekh: what are your thoughts on that?
14:13:55 shardy: I think we need to define which of the three scenarios are involved with 1, i.e. I constantly work on trunk regressions, but not much on the other two
14:13:56 well, when CI breaks, I don't see how the rest of the team can work
14:14:33 EmilienM: people have all kinds of other priorities competing for their time
14:14:37 I'm a little skeptical about the rolling rota, a person (or more) needs to have tripleo-ci as the only thing they do, that person should look at the ci status page constantly and jump on any failures they see
14:14:37 I personally can't resist working on CI issues when CI is red
14:14:42 tripleo-quickstart is not a drop-in replacement for our current CI, but I do think it begins to make sense when we have an OVB cloud
14:14:52 and I'm not doing composable services work when CI is broken, because I consider CI > everything else
14:15:03 others can and should help when it's red also
14:15:18 +1
14:15:29 someone who is 100% working on tripleo can't ignore a tripleo CI outage
14:15:42 it's like sending patches and knowing they'll fail
14:15:51 what is the goal of that ^?
14:15:58 the point of the rotation was just to help spread the load
14:16:04 not rotate the responsibility of CI
14:16:09 +1 for the rotation though
14:16:26 w/o the rotation and just saying "everyone does it", that's what we have now
14:16:33 Yep, to encourage wider participation and spread the load
14:16:42 it sounds like a good plan, let's try it
14:16:44 slagle +1
14:16:48 the rotation doesn't mean it's just that person's job for a week
14:16:53 how many people for each rotation are needed?
14:16:55 it's still everyone's job, all the time
14:17:09 it's not like we're going to discourage anyone not on the rota from fixing stuff, it's just providing some folks to act as coordinators and make sure stuff gets fixed
14:17:10 panda: I would say 1 expert mentoring 1 'noob'
14:17:18 so by the time we have more experts
14:17:21 +1
14:17:33 we need to take care of timezones also
14:17:51 maybe do 2 groups, 1 in EMEA and one in NA
14:17:55 why don't we make the rotation 1 experienced and 1 noob
14:18:01 who work together
14:18:06 +1
14:18:06 EmilienM: +1 maybe 2 'shifts' per rotation
14:18:08 1 expert (derek or shardy) in EMEA with panda + etc
14:18:10 Ok, we probably need to timebox this - I'll start a ML thread and we can follow up on this discussion there, if that's OK with everyone?
14:18:24 +1
14:18:24 it was an example ^ I didn't mean to name all folks
14:18:27 +1 ya, this could take the whole meeting
14:18:38 +1 for ML
14:18:38 +1
14:18:47 +1 on combining some kind of rotation with mentoring
14:18:54 woot
14:18:57 +1
14:18:58 ++ on that
14:19:00 slagle: want to cover your item now?
14:19:22 yes, i added: Investigating using https://storyboard-dev.openstack.org to track tripleo-ci tasks (slagle)
14:19:35 marios: right, so we cover both tz without forcing people to stay late
14:19:38 for those that don't know, the long term plan is to migrate all openstack projects to storyboard
14:19:59 Unless somebody is full-time on CI (in addition to the rota of people to help out), I don't think it will work
14:19:59 i've also been getting some feedback about the visibility (mostly the lack thereof) of our work in launchpad
14:20:00 hasn't that been the plan for, like, several years now?
14:20:17 shardy: it's more concrete now, at least in that there is an accepted spec
14:20:19 the puppet group investigated it 2 years ago and storyboard was really not ready for us, but now it looks much better
14:20:32 yes, it's better, and keeps getting better
14:20:41 a big +1 on this idea
14:20:41 Ok, well we can investigate - but what are the complaints about our launchpad content?
14:20:55 so what i'm proposing is tracking our ci related tasks in storyboard
14:21:00 since we don't currently use launchpad for that
14:21:02 I think our blueprints are currently too big in some cases, but otherwise things are working OK I think
14:21:03 we'll need to re-write our bot that sends alerts, we like this thing
14:21:05 and i don't want to duplicate anything
14:21:19 we would still have bugs in lp that affect CI
14:21:23 slagle: so this would replace the CI trello board then?
14:21:35 shardy: um :), maybe?
14:21:40 i don't know of a ci trello board
14:21:56 so.. that would be used to determine who is working on which failing job?
14:21:57 which one?
14:21:57 I don't really care what tool we use tbh, but I really, really dislike tracking the same work in multiple places
14:22:04 https://trello.com/b/0jIoMrdo/tripleo
14:22:06 +1000
14:22:13 shardy: if it's the one I was working on at one stage, people mostly didn't use it
14:22:29 shardy: no one uses that
14:22:33 derekh: the puppet group has the same situation, we have a trello but it's barely used.
14:22:38 derekh: Yeah, critical bugs with the alert bot seem to be working OK AFAICS
14:22:40 my first idea was to revive that
14:22:52 but then decided to investigate storyboard instead
14:22:57 slagle: I guess that was my point ;)
14:23:07 so in storyboard we'd have things like the planned ovb work, multinode work, rh2 rack work, etc
14:23:15 stuff that we aren't actually tracking anywhere right now
14:23:17 what does storyboard offer which will make actually using it more likely?
14:23:20 +1 to being aligned with OpenStack projects, so +1 for storyboard, it's a sane long term solution I think
14:23:28 vs the trello board
14:24:03 shardy: it's integrated with OpenStack workflows
14:24:10 shardy: dunno. this isn't about comparing the tools honestly
14:24:17 bugs, blueprints, interactions with gerrit, etc
14:24:26 EmilienM: Ok, that's a compelling reason :)
14:24:44 EmilienM: that's only for -dev btw right now
14:24:50 so the main gerrit is not yet integrated
14:24:52 slagle: sure, I'm just trying to figure out if this will be different, not really comparing the tools
14:24:53 ya I think it is more about needing to track that stuff, and if we are going to try and revive trello we might as well use the infra tool instead
14:25:02 slagle: right, it's WIP
14:25:02 trown: yes exactly
14:25:21 Ok, well +1 on giving it a try
14:25:39 plus, the storyboard team is excited about getting our feedback
14:25:51 and they want to work with project teams
14:26:07 maybe we can iterate by using it for one project
14:26:18 like
14:26:22 tht or something
14:26:22 EmilienM: i'm proposing just tripleo-ci
14:26:24 ya CI seems like a good place to start
14:26:27 EmilienM: no on tht
14:26:30 cool ok
14:26:32 we use lp for that
14:26:36 we don't want to duplicate
14:26:44 Ok, sounds like we're agreed to give it a try - shall we move on?
14:26:55 +1
14:27:00 i'll mail the list with a summary
14:27:06 slagle: +1, thanks!
14:27:09 #topic bugs
14:27:31 So, other than the CI impacting issues we're fighting, anyone have any specific bugs to mention?
14:27:50 #link https://bugs.launchpad.net/tripleo/
14:27:57 weshay has a bug with neutron master
14:28:14 I like the storyboard idea, I was trying to create some kind of nit finder, as I was investing a lot of time finding nits in tht submissions, with storyboard we might log them also.. http://mosquito-ccamacho.rhcloud.com/_plugin/tripleo-nit-finder/ and find config_settings..
14:28:22 EmilienM, we're looking at that again.. rechecking w/ more resources on the undercloud
14:28:52 hehe tripleo-nit-finder sounds like a code-review bot ;)
14:29:33 seems like a lot of not-in-progress critical bugs?
14:29:58 trown: Yeah, that was going to be my next comment - there are still a lot of stale/obsolete bugs
14:30:18 please can everyone review the list of bugs reported by them, and close those either fixed or no longer valid
14:30:40 i think we need a purge of blueprints too :)
14:30:46 i can look through them this week
14:30:47 quite a few expired from incomplete recently, but there are still many I think we should close or mark invalid in there
14:31:08 also probably some duplicates
14:31:12 slagle: Yeah, maybe - I did a purge of some BPs at the start of newton, but any help much appreciated
14:31:34 Ok, anything else on bugs before we move on?
14:32:02 #topic Projects releases or stable backports
14:32:14 So, it's only ~2 weeks until n-2
14:32:34 #link http://releases.openstack.org/newton/schedule.html
14:32:47 #link https://launchpad.net/tripleo/+milestone/newton-2
14:33:07 There's a huge amount of in-progress stuff there :(
14:33:36 So when we fix CI, can folks please prioritize reviews for patches on that list, and help burn down the in-progress things so they land
14:33:40 e.g. both bugs and blueprints
14:34:02 If we don't do that, a bunch of stuff will slip to n3, and I'd really rather avoid that if we can
14:34:17 Composable Services Within Roles is making slow progress recently, due to CI outages, but once CI is back, we'll finish it, almost everything is WIP and under review
14:34:52 EmilienM: excellent, well that sounds encouraging provided we can get a clear few days of CI working
14:35:00 yeah, I'm confident we can make it
14:35:21 beagles: what's the status of https://blueprints.launchpad.net/tripleo/+spec/neutron-dvr-support?
14:35:31 should we bump that to n-3 as it's not started?
14:35:57 https://blueprints.launchpad.net/tripleo/+spec/overcloud-upgrades-workflow-mitaka-to-newton
14:36:06 marios: same question re ^^
14:37:02 Ok, well if I don't see the implementation set soon, I'll bump them to n3
14:37:09 shardy: it's actually pretty decent
14:37:14 (wrong room)
14:37:26 I just have to fix up the patch a bit and it should work okay
14:37:44 it's what I'm mainly focused on finishing up at the moment
14:37:53 beagles: Ok, good news - can you change the implementation on the BP to reflect the current status, e.g. "good progress" or whatever?
14:38:00 yup
14:38:02 thanks!
14:38:14 Ok, anything else to discuss re releases atm?
14:38:46 I plan to do a 0.1 release or tripleo-quickstart this week, then start on switching virt docs for it
14:38:53 s/or/of/
14:39:17 trown: Ok, sounds good - there has been some confusion around docs for quickstart/non-quickstart
14:39:40 #topic CI
14:39:48 ya we were missing some bits to reproduce the tripleo-ci workflow, but panda++ has implemented those
14:40:11 So, we covered the trunk regressions etc earlier, but derekh did you want to discuss the current state of switching to OVB?
14:40:14 ok jistr is not around but we're trying to revert a patch, see https://review.openstack.org/#/c/335008/
14:40:24 the current situation is that the HA & upgrade jobs are broken
14:40:27 that seems very relevant from a moving to a managed cloud perspective
14:40:34 we think it's related to https://review.openstack.org/#/c/330096/
14:40:45 pradk: ^^ FYI we'll have to revert that if it proves to fix CI
14:41:03 I'm telnetting into the job and it just failed the HA job... I'll investigate
14:41:08 we can then work out what the issue is and un-revert it later
14:42:00 are there any questions re: the development of quickstart?
14:42:56 weshay: I believe there are several questions around developer workflow with quickstart
14:43:16 I tried it a while back and reported some bugs, but ultimately couldn't switch over to using it for my daily workflow
14:43:34 shardy: that's what we're working on at the moment
14:43:36 we'll have to revisit some of those things as stuff like docs and potentially CI gets changed to use it
14:43:39 panda is validating that atm, ensuring there is a good user experience. I know he has found and is fixing at least two bugs thee
14:43:41 there
14:43:48 weshay: i'd be interested to know what the goals of the project are
14:44:05 shardy: I'm very interested in your feedback on the previous experience
14:44:12 i think it started as a virt-setup replacement
14:44:19 now we're talking about a CI switch
14:44:26 to have a very fast, user friendly install of tripleo with pre-validated content
14:44:43 that is a bit broad... but the main goal of the project has always been to "provide a good new user experience of tripleo"
14:45:10 in order to do that, I think that "experience" needs to be thoroughly CI'd
14:45:13 panda: I reported several bugs, the main blocker is it's hard to deploy trunk easily
14:45:15 i would like to see the work tracked a bit more openly in launchpad
14:45:31 using it in CI might solve that if we made the images available for developer usage
14:45:35 so the plans for project direction are more visible
14:45:45 shardy: sry, on a call
14:45:49 If we were to move our CI to all ovb clouds, how does quickstart fit in with that? does it need to?
14:45:58 e.g., what is this usbkey installer, what are the goals, where are we going with it, etc
14:46:06 there is a lot tracked in launchpad... I think more than most other tripleo projects actually
14:46:31 there may be other folks with ideas about that, who would like to participate
14:46:38 trown: are there bp's for any of the new features?
14:46:40 #link https://bugs.launchpad.net/tripleo-quickstart
14:46:43 shardy: ok, I'll ping you later to get more details and to see what you consider an acceptable workflow. Should be easier to deploy trunk now.
14:46:55 we had ovb working w/ some previous CI, rlandy is preparing ovb for quickstart now
14:46:56 slagle: good questions, I also wonder why we have some external bits like IPA server integration, etc
14:47:45 ooo-usbkey was something rbowen requested.. basically tripleo-quickstart src on a usbkey paired w/ an undercloud/overcloud image on the key
14:47:58 slagle: all of the work to enable a developer workflow is tracked in launchpad bugs
14:48:25 EmilienM, re: ipa and quickstart?
14:48:27 slagle: I am guessing by new features you are referring to the external roles that quickstart can use?
14:48:30 weshay: yeah
14:48:34 weshay: what is the goal?
14:49:00 trown: i don't think so, unless those are part of quickstart directly
14:49:09 EmilienM, we're showing folks how to integrate w/ external repos/roles so that all these features are not in a tool that is meant to replace instack-virt-setup
14:49:16 slagle: those are being developed independently as things that can be plugged into quickstart, but are not required for a good new user experience
14:49:25 using composable ansible roles
14:49:28 trown: i'm thinking things like usbkey, baremetal
14:49:37 virt-setup was the original goal
14:49:40 baremetal is an external role
14:49:44 ya.. baremetal is also done outside of quickstart
14:49:51 so we have composable ansible roles in oooq vs composable roles in THT
14:49:58 baremetal is something I've been exploring as well with oooq
14:49:58 not vs
14:50:05 rlandy, ^
14:50:12 EmilienM: two different meanings of role
14:50:21 trown: ok i guess i don't really know what is where
14:50:21 a role is an ansible-specific term equivalent to a puppet module
14:50:21 we have baremetal jobs running
14:50:24 i see full-deploy-baremetal.sh
14:50:32 that sounds like baremetal
14:50:38 but maybe i'm jumping to conclusions :)
14:50:41 and I am working on the ovb solution
14:50:41 so we have quickstart pulling in 3rd party repos and using those as part of the extra deployment steps
14:50:48 one needs for bare metal
14:50:58 slagle: ack - I just added that
14:51:06 for the baremetal jjb
14:51:12 ok, so maybe we should move the ci-scripts in quickstart to somewhere in RDO... it is just easier to have them in-tree, but I can see how if one just looks at that dir it would be confusing
14:51:13 weshay: getting that into the main repo seems like it would be a good thing to aim for?
14:51:29 there are three roles added to enable baremetal testing
14:51:30 but the ci-scripts dir is really rdo-ci-scripts
14:51:43 weshay: but the IPA server would run on the undercloud? or on the host?
14:51:48 shardy, not sure.. I see the repos in redhat-openstack as a good place to incubate roles until we're 100% sure they are solid
14:51:52 trown: right, the ci-scripts dir makes me think this is a replacement for tripleo-ci
14:51:59 trown: which...may be fine eventually
14:52:01 shardy: we need to take care of the timing
14:52:09 but it's a planning discussion yet to be had
14:52:19 but it seems oooq is already moving into being a ci tool
14:52:25 but I think they could always stay external.. or be moved in.. doesn't really matter
14:52:32 slagle: well it has worked very well in RDO
14:52:33 it's composable :)
14:52:55 weshay: Ok, I guess I'm just worried about switching the docs for VM to oooq and then having the baremetal docs be completely different
14:53:00 and it kind of makes sense to have the most CI run against the thing new users try first
14:53:05 trown: and it might work very well for tripleo one day. but right now the project is a tripleo project
14:53:07 but we can work that out as the docs stuff gets proposed I guess
14:53:08 not an rdo project
14:53:17 shardy, so we're developing w/ that in mind...
14:53:28 slagle: not sure I get your point
14:53:29 weshay: my question is more "why not use instack-undercloud" for such roles
14:53:44 trown: these ci scripts are for what? rdo or tripleo?
14:53:48 EmilienM, not sure what you mean
14:53:52 slagle: RDO
14:54:00 we use instack-undercloud.. you mean the repo?
14:54:13 trown: then maybe they shouldn't be in an upstream tripleo repo
14:54:29 we have a repo for that already, tripleo-ci
14:54:31 slagle: well, does the project have a different way to gate itself?
14:54:32 we have a similar set of ci-scripts in an internal repo as well for rhos
14:54:40 shardy: one of my goals is to get the oooq baremetal doc in place
14:54:45 weshay: for the IPA server, why not set it up on the undercloud? just a thought
14:54:57 slagle: to be fair, the discussion around potentially using quickstart in upstream CI was based on a desire to improve collaboration between RDO and upstream CI
14:54:59 EmilienM, you'd have to ask adam young that
14:55:12 so just saying let's remove it may not be the best path towards that
14:55:16 it's not something we're driving
14:55:19 weshay: so that's leading to the "what is the scope of quickstart"
14:55:27 shardy: that's not what i'm saying
14:55:30 but we are showing him how to integrate it via a 3rd party git repo
14:55:33 EmilienM: because the undercloud is a single point of failure
14:55:35 weshay: do we accept any new role? so why not deploy openstack services then?
14:55:38 shardy: right
14:55:40 shardy: i'm trying to understand where we are going
14:55:45 EmilienM, define accept
14:55:49 shardy: we have duplication between oooq and tripleo-ci
14:56:00 but imho, oooq should not be a place to put services that we don't want on the undercloud
14:56:05 slagle: I don't think we do
14:56:06 we'll add new roles to a role gate, but thus far, only one role has been promoted to first class afaict
14:56:06 i.e.: new services, like IPA
14:56:21 so there is a fairly long incubation period
14:56:32 slagle: tripleo-ci is not able to gate tripleo-quickstart, which was discussed in the spec
14:56:47 weshay: just a note, I personally like oooq, I use it almost every day. Just asking questions about roles ;-)
14:56:49 slagle: it was agreed to have it be third-party CI via RDO
14:57:06 trown: yes, for virt-setup
14:57:14 EmilienM, thus far the criteria has been: a role is in our role-gate, running daily somewhere, and part of the rdo promotion gate
14:57:16 since that is not covered by tripleo-ci
14:57:23 slagle: yes we have duplication also with composable roles in oooq and what we do on instack-undercloud I think
14:57:33 how so?
14:57:43 I'm confused by that
14:57:47 slagle: ok so would it make you happy to move all of the ci scripts out that are not gating virt-setup?
14:57:47 because we're deploying services that we used to manage with puppet, etc
14:58:13 it is pretty trivial to do so, though I don't personally see the benefit to tripleo
14:58:14 let's say instack-undercloud is multi-node, why not use it for the IPA server?
14:58:21 trown: it would make me happy if we discussed a plan about where we are trying to go with oooq
14:58:32 EmilienM, oh.. so.. we are not driving any of the IPA work
14:58:37 I see what you are saying now
14:58:43 I just see people sending roles to OOOQ because instack is single node, shouldn't we focus efforts on getting instack multi-node?
14:58:56 ok, let's put it this way:
14:59:04 why don't we work on getting instack-undercloud multi-node
14:59:11 trown: that's all. i'm not asking for things to be removed
14:59:12 and use the same thing as the composable roles work that we do
14:59:27 instead of using OOOQ to deploy new services on the undercloud or host
14:59:29 slagle: k, what form should that take... ML to start?
14:59:45 EmilienM, that sounds fine, again.. we have nothing to do w/ the IPA work..
14:59:46 We're out of time folks, and apologies we've missed specs and open discussion this week (although we've pretty much ended up in open discussion I think)
14:59:50 trown: would just like to get some alignment...so we can see how to improve tripleo-ci to maybe have it use oooq, or whatever we come up with
14:59:53 let's continue in #tripleo :)
14:59:56 weshay: yeah, it's just an example
15:00:00 thanks everyone!
15:00:02 \o
15:00:04 #endmeeting