14:00:15 #startmeeting tripleo
14:00:16 Meeting started Tue Jul 19 14:00:15 2016 UTC and is due to finish in 60 minutes. The chair is shardy. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:20 The meeting name has been set to 'tripleo'
14:00:24 #topic rollcall
14:00:26 bonjour :)
14:00:30 o/
14:00:31 o/
14:00:32 Hi all, who's around?
14:00:35 o/
14:00:35 hi
14:00:37 o/
14:00:39 hello
14:00:41 o/
14:00:48 \o
14:00:48 o/
14:00:55 hi o/
14:01:00 o/
14:01:11 o/
14:01:20 o/
14:01:24 o/
14:01:32 o/
14:01:34 o/
14:01:57 \o
14:02:21 #link https://wiki.openstack.org/wiki/Meetings/TripleO
14:02:37 #link https://etherpad.openstack.org/p/tripleo-meeting-items
14:02:49 #topic agenda
14:02:49 * one off agenda items
14:02:50 * bugs
14:02:50 * Projects releases or stable backports
14:02:50 * CI
14:02:52 * Specs
14:02:54 * open discussion
14:03:06 So I see one one-off item from EmilienM, anyone have anything else to add?
14:03:32 #topic one off agenda items
14:03:47 my item should be quick, I just want people interested in service validation to read my email and give feedback about the proposal.
14:04:05 the idea is to "stop a deployment if something fails in a step"
14:04:17 so we save time for deployers and give feedback on what failed
14:04:24 shardy I added the item in the last meeting.
14:04:31 sorry..
14:04:53 EmilienM++ I like that idea to save time
14:05:09 hi
14:05:24 EmilienM: I like the idea of fail-fast, but it'd be kinda nice if the validation steps weren't combined with the puppet configuration?
14:05:40 I made it on purpose
14:05:46 I thought that was the approach SpinalStack took with serverspec, or am I remembering incorrectly?
14:05:48 so the validation is as close as possible to our profiles
14:06:02 shardy: we used another approach in SpinalStack with serverspec
14:06:09 might at least a --fail-fast option work?
14:06:10 the idea here is to make the validation within our profiles
14:06:28 EmilienM: Ok, I guess I was thinking about it differently, where a given service template has an expected validation, regardless of the tool used
14:06:30 so we don't have to wire validation scripts with our roles
14:06:36 EmilienM: I think I would like to consider keeping the validations within the heat composable services
14:07:01 EmilienM: I'm thinking particularly of folks interested in plugging in non-puppet tools (such as ceph-ansible which I know some folks have interest in reusing)
14:07:16 I guess that's all dependent on containers anyway, just thinking ahead
14:07:18 they can write their validation in ceph-ansible itself!
14:07:24 my PoC is just for Puppet profiles
14:07:32 EmilienM: https://review.openstack.org/#/c/174150/
14:07:36 but anyone plugged into TripleO (i.e. ceph-ansible) could validate their own way
14:07:37 shardy: ^^
14:07:52 that was our initial attempt at generic "script" style validations a while back...
14:08:01 EmilienM: ++
14:08:08 I think we should make validation as close as possible to our profiles
14:08:14 whatever tool (puppet, ansible, etc)
14:08:21 yeah I was a bit confused as well about how this can work together with the tooling shadower is working on
14:08:24 we just need to fail fast
14:08:26 dprince: Yeah, that was the pattern I was thinking of
14:08:33 EmilienM: makes sense that the component that defines what and how also validates that the outcome was what was intended
14:08:35 gfidente: it's different, post deployment
14:08:40 EmilienM: that works for a single-node validation framework I think. But what about multi-node stuff?
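[Editor's note: a minimal sketch of the fail-fast idea being discussed — a per-step check that exits non-zero so the deployment halts at the failing step. This is illustrative only; the `validate` helper and the example check are hypothetical, not taken from EmilienM's actual 10-line patch.]

```sh
#!/bin/sh
# Hypothetical fail-fast validation sketch: each check runs right after the
# configuration step it belongs to; a non-zero exit fails that deployment
# step immediately instead of letting later steps run and obscure the error.
validate() {
  desc=$1
  shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc" >&2
    exit 1   # stop the deployment at this step
  fi
}

# Example check (illustrative): assert an expected file exists post-step.
validate "hosts file present" test -e /etc/hosts
```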
14:08:41 and this is not the same validation
14:08:50 gfidente: Yeah, I don't think it can, because the ansible based validations aren't integrated with the heat deployment steps
14:08:53 my stuff is really low level validation
14:09:00 that's more about pre-flight-checks I think
14:09:23 I don't aim to test APIs
14:09:25 or boot a VM
14:09:28 etc
14:09:28 a validations "framework" will have all of the above: pre, during, and post validations
14:10:17 anyway, I produced a PoC based on operators' feedback
14:10:27 feel free to use the ML thread to comment and give feedback
14:10:36 and if you have a better idea, please submit it
14:10:36 EmilienM: ack, thanks, let's follow up on the ML :)
14:10:42 shardy: yup
14:10:47 ccamacho: you had a question about tripleo-ui?
14:10:56 jtomasek: Can you provide an update on the status there?
14:10:57 EmilienM: do we think validations is more important right now than finishing the composable services work and composable roles?
14:10:59 yeah, a quick one. Is anyone using it?
14:11:11 dprince: not at all. I just implemented a 5min idea
14:11:15 EmilienM: perhaps this is a next release feature
14:11:24 dprince: as you can see, the patch is 10LOC
14:11:26 ccamacho: AFAIK it's blocked on the mistral API landing, which is still in-progress, but hopefully jtomasek can confirm
14:11:33 dprince: for sure, at this stage of the cycle.
14:11:51 dprince: I'm preparing for the next Summit ;-)
14:12:04 shardy ack, I was reading the docs and found a lot of empty spots there.. That's why I was asking
14:12:14 shardy: I am happy to answer anything about TripleO UI, I am currently at the meeting, so I am slow to respond. Which part specifically is the concern?
14:12:33 jtomasek: when folks can start using it on upstream builds
14:12:52 jtomasek: e.g. when can we get it integrated with the undercloud install, are we blocked on the remaining mistral patches?
14:13:04 e.g. the actions/workflows going into tripleo-common
14:13:16 jtomasek, about using it and testing it in upstream tripleo
14:13:22 shardy: no blockers really, the biggest obstacle now is resolving the RDO packaging of the GUI
14:13:42 shardy: the work on integrating it with the undercloud starts this week too
14:13:47 jtomasek, but from source it should work, right?
14:13:51 jtomasek: do you have someone working on a puppet module to install it?
14:13:59 ccamacho: yes
14:14:28 dprince: flfuchs is about to start on it, but any help is welcome and would definitely speed up the process
14:14:37 jtomasek, ack, it would be nice to have some puppet automation for that, nice!
14:14:44 jtomasek: cool.
14:14:47 jtomasek: Ok, sounds good, shout if we can provide any help and/or test/review things
14:14:57 shardy++
14:15:12 shardy, ccamacho, dprince: thanks!
14:15:20 Any other one-off items before we move on?
14:15:42 #topic bugs
14:16:08 So, related to bugs, jpich started a discussion on the ML about the various launchpad projects
14:16:19 dprince: can you add the tag for your hieradata bugs? https://bugs.launchpad.net/tripleo/+bugs?field.tag=composable-roles
14:16:22 in particular there seems to be a redundant one related to tripleo-common we can probably remove
14:16:31 sorry guys, one step back to one-off: is anybody looking into why mitaka/ci fails?
14:16:46 gfidente: wasn't it the gnocchi thing?
14:17:10 gfidente: shall we cover that in the CI section?
14:17:24 ack
14:17:33 Do folks have any opinions about the launchpad tracking?
14:17:50 gfidente: nevermind, it looks like something else, it fails after 15 min
14:17:56 EmilienM: tag should be 'composable-services' but whatever
14:18:02 Personally I'd rather have one for most tripleo things, e.g. https://bugs.launchpad.net/tripleo
14:18:23 vs lots and lots of LP projects to manage
14:18:36 shardy: I'm fine to file bugs for things
14:18:39 it's easier from a release tracking/milestones perspective if we're going to release everything each milestone IMO
14:18:46 dprince: yeah, please add it to the 2 bugs
14:19:04 shardy: given the number of repos, I'd prefer a single tracker with tags
14:19:16 dprince, shardy: +1 for launchpad/bugs
14:19:18 EmilienM: already done
14:19:21 dprince: ++
14:19:58 https://bugs.launchpad.net/tripleo-common https://bugs.launchpad.net/tripleo-ui https://bugs.launchpad.net/tripleo-validations https://bugs.launchpad.net/tripleo-quickstart all also exist
14:20:14 I'd suggest we remove the tripleo-common one, and probably the tripleo-validations one?
14:20:14 yeah I was going to say that we probably want something in between
14:20:36 It's probably fine to have a separate one for the UI at this point though I think
14:20:38 a few projects seem better tracked with their own project but I wouldn't split them all
14:20:58 shardy: what's in those two? (sorry for my ignorance still)
14:21:22 jokke_: Not much, hence my proposal to remove them :)
14:21:29 shadower: any thoughts re the validations one?
14:22:13 Ok, well we can follow up on the ML anyway
14:22:15 shardy: +1 on removing the tripleo-common and tripleo-validations bugs
14:22:29 any other bug-related things to mention today?
14:23:04 #topic Projects releases or stable backports
14:23:20 So, we had the n2 release, but I was thinking we're probably overdue a release of the stable branches
14:23:35 shardy: we can take care of it this week
14:23:38 coolsvap: Were you planning to propose those, or shall I go ahead and do it?
14:24:01 shardy: i can do that
14:24:41 coolsvap: Ok, thanks - perhaps you can ping me and EmilienM when you've posted the patch and we can review
14:24:52 shardy: sure
14:24:59 but we need to make sure Mitaka jobs are working
14:25:04 (currently broken)
14:25:16 Yeah, obviously we'll need to get a good promote before we can release
14:25:28 yup we'll take care of it ^
14:25:42 ++ Ok, sounds good, thanks :)
14:25:49 #topic CI
14:25:54 so the Mitaka job looks broken
14:25:57 http://logs.openstack.org/25/342725/3/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/3dd5eb0/console.html#_2016-07-19_13_48_29_492778
14:26:06 ERROR:dlrn:cmd failed. See logs at: /opt/stack/new/delorean/data/repos/f1/13/f113c9e4103b7ed593c74c2c8517363843e99ed0_969c6c49/
14:26:08 One issue is with building delorean rpms for THT mitaka and liberty - asked apevec to look at it, but still haven't received a response from him yet: https://bugs.launchpad.net/tripleo/+bug/1604039
14:26:08 Launchpad bug 1604039 in tripleo "CI: delorean build of tripleo-heat-templates fails because wrong spec" [High,Triaged]
14:26:11 INFO:dlrn:Skipping notify email to ['jslagle@redhat.com', 'dprince@redhat.com']
14:26:18 EmilienM, ^^
14:26:20 slagle and dprince must have received an email :)
14:26:26 can we add folks to that list?
14:26:43 * dprince isn't getting emails about this
14:26:45 *Skipping* notify email...
14:26:48 yeah :)
14:26:53 I guess we can try reproducing this locally with STABLE_RELEASE=mitaka tripleo.sh --delorean-build openstack/foo
14:27:21 shardy, for tht
14:27:23 Actually we'll need to do --repo-setup with REPO_PREFIX set too
14:27:35 why is it skipping?
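[Editor's note: putting the local reproduction steps mentioned above together — a hedged sketch only. It assumes a tripleo-ci checkout providing `tripleo.sh`, and the `REPO_PREFIX` value is a placeholder; the package name substitutes `openstack/tripleo-heat-templates` for the `openstack/foo` example in the discussion.]

```sh
#!/bin/sh
# Sketch: reproduce the failing mitaka delorean build of t-h-t locally,
# per the commands discussed in the meeting (flags as of Newton-era tripleo.sh).
export STABLE_RELEASE=mitaka
export REPO_PREFIX="${REPO_PREFIX:-$HOME/tripleo-repos}"  # assumption: any writable dir

if [ -x ./tripleo.sh ]; then
  ./tripleo.sh --repo-setup
  ./tripleo.sh --delorean-build openstack/tripleo-heat-templates
else
  echo "tripleo.sh not found; run this from a tripleo-ci checkout"
fi
```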
14:28:05 https://dashboards.rdoproject.org/rdo-dev
14:28:13 panda: it skips in CI because we don't have a mail server configured, and we don't want it in CI
14:28:30 current-tripleo promoted 2d ago, but it's been 13 days since RDO promotion, some of which look tripleo related
14:28:32 but it should be configured on the trunk server running delorean
14:28:42 e.g. the gnocchi db sync issue and a couple of others
14:29:33 And all periodic jobs failed tonight because: https://bugs.launchpad.net/tripleo/+bug/1604380
14:29:33 Launchpad bug 1604380 in tripleo "CI: nodes registration in periodic jobs fail because of bug in old pecan (fixed in 1.0.5)" [High,New]
14:29:45 yeah, same for Puppet CI
14:29:52 we have the same issue, good to know :)
14:29:56 the new build of pecan is waiting on a sync in RDO
14:29:57 sshnaidm: ack, thanks for pointing that out
14:30:04 will hit at 16:12 UTC
14:30:06 ya. and rdo ci :)
14:30:07 trown: excellent news
14:30:08 can we add a periodic_cistatus page to tripleo.org?
14:30:13 I would find that useful
14:30:39 weshay_mtg: Do you have any updates re the status of third-party CI?
14:31:04 IIRC you were looking to enable upgrades and trunk deployments on RHEL?
14:31:17 shardy: sshnaidm might be able to get his status page there
14:31:33 (If we're getting RHEL coverage I'll abandon https://review.openstack.org/#/c/340503/)
14:31:46 derekh: Yeah that'd be good
14:31:48 it might be helpful to sync with the work harlowja is doing with the oslobot
14:31:50 shardy, ya.. apetrich has a few required patches merged as of yesterday, now we're testing and getting some inconsistent installs atm..
14:32:22 that's providing status of all periodic oslo integration tests to the channel
14:32:27 attila also just went on PTO, so that will slow down the 3rd party rhel gate.. by a few days
14:32:51 weshay_mtg: ack, Ok no worries, thanks for the update :)
14:33:12 derekh: So, what's up with rh1, have you OVB-ified it yet?
14:33:58 shardy: zoli is working on it, he only started on Thursday and is hitting problems with foreman (don't ask)
14:34:25 derekh: Ok - anything you need help with, or is it just a matter of more time?
14:34:33 shardy: anyways I've been helping him work through them, progress is being made but we're still at the early stages
14:35:34 shardy: time at the moment, but if he hits any brick walls others can assist, we'll pick people
14:35:46 derekh: Ok, sounds good, thanks for the update :)
14:35:57 yeah, pick us if needed
14:36:44 EmilienM: will do
14:37:08 Ok, anything else re CI before we continue?
14:37:16 However, can anybody from the RDO people help with the mitaka/liberty delorean fails of tht? it's already been 2 days https://bugs.launchpad.net/tripleo/+bug/1604039
14:37:16 Launchpad bug 1604039 in tripleo "CI: delorean build of tripleo-heat-templates fails because wrong spec" [High,Triaged]
14:37:38 I'd like to be sure that somebody cares about it
14:38:01 sshnaidm: I'd jump into #rdo and chat with apevec and trown about it
14:38:26 #topic Specs
14:38:33 shardy, great, thanks
14:39:10 https://review.openstack.org/#/q/project:openstack/tripleo-specs+status:open
14:39:44 It'd be good to review the dpdk, sr-iov, Ironic, lightweight-HA and validations specs if anyone has time
14:40:02 It'd be good to land them if we expect the features to get done for newton
14:40:42 https://launchpad.net/tripleo/+milestone/newton-3
14:40:52 Would like more feedback on the idea of having a policies section: https://review.openstack.org/339236
14:40:53 That's the feature list I'm working on planning for the n-3 release
14:41:49 bnemec: +1, I'm fine with it, and I see you replied to my comment re -docs vs -specs
14:42:10 shardy: Cool, thanks
14:43:03 #link http://osdir.com/ml/openstack-dev/2016-07/msg00724.html
14:43:06 bnemec: will look after the meeting
14:43:16 I sent that re the feature-freeze process after we discussed it last week
14:43:27 feel free to reply on the ML if there are any comments
14:44:05 basically we can probably land some things as FFEs but we can't rely on it for the big backlog of n-3 targeted features
14:44:37 #topic open discussion
14:44:48 Anyone have anything else they would like to add?
14:44:56 I have a question
14:45:18 who would like to be the package maintainer for the os-*-config repos in RDO? :)
14:45:32 I have an FYI: I dropped the weekly composable-roles meeting since we moved all the puppet code. The remaining work does not require a meeting imho, let me know if you disagree
14:45:44 currently there is nobody listed in https://github.com/redhat-openstack/rdoinfo/blob/master/rdo.yml
14:46:44 EmilienM: no need for a meeting now I think. Let's drop it
14:46:45 nobody?
14:46:56 trown: I'm happy to do it if nobody else is keen
14:47:00 dprince: done
14:47:03 I'd like to poke a bit at people's feelings
14:47:46 shardy: k, I am getting pinged by people to merge https://review.rdoproject.org/r/#/c/1678/1 but wanted to have an actual maintainer for those because I do not totally understand them
14:48:02 The ceph-ansible work towards Ocata. Are folks comfy with that or is there something we should consider on that work?
14:48:25 I know there is quite a bit of push from the Ceph folks to move to that ansible blob
14:48:38 I'm really curious about this work, is it going to kill puppet-ceph usage in tripleo?
14:48:40 jokke_: It's not even been properly discussed yet, I think we'll need a summit session to consider how non-puppet deployments can be integrated into tripleo
14:48:58 EmilienM: There are folks who don't want to maintain both puppet-ceph and ceph-ansible
14:49:04 right
14:49:06 shardy: That's why I wanted to bring the discussion to people's minds
14:49:08 shardy: so in the short term if you wanted to do a sanity check on that review, that would be great... and maybe I can put you down in rdoinfo until you can appoint a replacement?
14:49:25 trown: slagle and I are the maintainers of the fedora package.
14:49:25 so they want to wire ceph-ansible into tripleo - which I think will be possible only after we've got a way to isolate the puppet and ansible configuration (e.g. containers)
14:49:46 bnemec: ok, can I put you guys down for RDO then?
14:50:14 I think that finishing our abstraction to use ansible would be nice, ceph-ansible could be a first consumer of that
14:50:15 bnemec: and could you review https://review.rdoproject.org/r/#/c/1678/1 :)
14:50:16 trown: I would be fine with that.
14:50:28 trown: I would like to talk with steve baker about https://bugs.launchpad.net/tripleo/+bug/1603144
14:50:28 Launchpad bug 1603144 in tripleo "older os-collect-config can't be updated or upgraded via heat" [High,Invalid] - Assigned to Marios Andreou (marios-b)
14:50:31 guys quick question, are CI jobs taking ~2+ hours?
14:50:50 maybe it's just me
14:50:52 trown: specifically the implication there might be that we have some os-collect-config code for the reexec that isn't being used I think
14:50:58 so I'd like a summit session about the more general introduction of ansible as well
14:50:58 ccamacho: ~1h40
14:51:03 trown: and I'd like to get rid of that code anyways...
14:51:05 dprince: ya he is on PTO, and people are bugging me to merge that... I don't totally understand what is going on and so I am not comfortable merging it myself
14:51:07 ccamacho: see http://tripleo.org/cistatus.html
14:51:14 EmilienM ack, thanks!
14:51:16 FWIW I did a little PoC with one of the ceph-ansible folks: https://github.com/hardys/heat-ceph-templates/blob/master/mon_cluster.yaml#L61
14:51:31 It shows that we can wire in ansible roles in a similar way to how we deploy puppet modules
14:51:40 so we're going to mix puppet & ansible within the same deployment? interesting
14:51:40 but it doesn't solve how we drive a mixture of two tools in t-h-t
14:51:53 EmilienM: I would prefer that we didn't, but it's what some users are asking for
14:52:07 dprince: so as long as we don't think it is totally broken, I think we should merge it, and discuss with sbaker when he is back from PTO
14:52:12 I think it only really makes sense when we have a fully containerized overcloud tho tbh
14:52:25 shardy: I'd like to treat the Ceph ansible as perhaps an opt-in (3rd party) feature
14:52:36 so we probably need to focus on that, and ensure the abstractions around puppet are sufficient to enable alternative tooling to plug in
14:52:53 dprince: Sure, I don't think anyone is proposing changing any defaults at this point
14:52:59 EmilienM: that's why I wanted to ask if people are comfortable with the idea in principle ... 'cause there is definitely push for it but I'd like to keep the expectations realistic
14:53:04 dprince: do you disagree with the packaging fixup there?
14:53:04 only figuring out if/how this may be possible to integrate
14:53:05 shardy: specifically because it may cost some integration features. Specifically, the ability to configure advanced things with extra_config
14:53:15 dprince: I mean for sbaker, the SIGKILL on occ
14:53:24 dprince: Yeah, I think it's understood that all *ExtraConfig hieradata overrides will break
14:53:28 marios: I haven't tried it. It just made me curious
14:53:37 what is missing in puppet-ceph (existing setup) that we have in ceph-ansible?
14:53:51 I think the goal here is to "make Ceph deployment better"
14:54:00 so let's figure out why we need ceph-ansible
14:54:31 dprince: ok. apparently it was done downstream in K already. but we can discuss on the bug/offline if you like
14:54:34 EmilienM: exactly, I'm not sure there is anything that can't be done with puppet-ceph as well. Perhaps both implementations can live with parity
14:54:38 EmilienM: It's because that's where the ceph team is making all of their improvements, so unless we want to duplicate them we have to use it.
14:54:47 dprince: (downstream _in_ K, Kilo I mean)
14:54:52 EmilienM: folks are invested in ceph-ansible as "the" tool, they're not willing to contribute to puppet-ceph as well AFAICT
14:54:55 EmilienM: iiuc the push is mostly just about the maintenance overhead of two of them
14:55:14 the reasoning why there is ceph-ansible in the first place is something I can't answer
14:55:16 ok
14:55:22 I still think there may be a community around puppet-ceph though
14:55:36 after all our efforts in puppet-ceph
14:55:37 well there is one for sure, fuel
14:55:42 lol we'll just kill it
14:56:01 but I suggest we don't look at the two things together
14:56:07 perhaps even a larger one than the ansible-ceph thing. I don't know... but I think pulling the rug on puppet-ceph would be premature
14:56:08 the last thing I want to see is that we spend lots of time getting it integrated and receive a -2 for principle reasons
14:56:12 gfidente: right, they overlap a lot
14:56:16 testing tripleo's ability to use ansible can be done as a pre-requisite
14:56:48 the risks of migration to ceph-ansible have pros and cons I suppose and it's not the same problem
14:56:50 * bnemec thinks we should just let the Ceph installer install Ceph and drop the tripleo deployment of it entirely
14:56:50 jokke_: As a replacement for puppet-ceph at this point I would likely -2 it
14:57:00 * bnemec also wants a unicorn for his birthday
14:57:08 jokke_: as an opt-in (live beside) it may be okay though
14:57:10 bnemec: yes - 100% agree
14:57:23 bnemec: The problem is folks want hyper-converged deployments, not only standalone external ceph clusters
14:57:36 * trown gets bnemec a hornless unicorn for his birthday
14:57:36 if it was all external we could just drive ansible via a mistral action or something
14:57:40 bnemec: we have this integration with nova/cinder/glance/gnocchi for which we don't need puppet-ceph or ceph-ansible
14:57:51 dprince: thanks ... I'll feed that back into the expectations loop
14:58:04 EmilienM, well that's the point of ceph-ansible
14:58:14 we don't need puppet-ceph to configure ceph, we can use ceph-ansible for that
14:58:17 gfidente: what, it also configures nova.conf, etc?
14:58:27 but we'll continue to use puppet-nova for the nova configuration
14:58:32 Ok, well I guess this is the start of a longer discussion - jokke_ do you want to start a ML thread or shall I?
14:58:48 shardy: either way is fine by me
14:58:53 my point is: we could use ceph-ansible to configure Ceph, and puppet-nova,glance,cinder,... to configure services like nova, glance, cinder to use the ceph backend
14:58:58 I can add it to my todo list
14:59:02 this said, don't misinterpret me: I would only push back on ceph-ansible until we've successfully discussed the integration of ansible
14:59:06 Ok, I'll do it after the meeting & people can add their thoughts
14:59:08 shardy: 1 min left
14:59:08 as I don't see technical reasons for ceph-ansible today
14:59:33 Ok, out of time, thanks all!
14:59:38 #endmeeting