19:02:43 #startmeeting tripleo
19:02:44 Meeting started Tue Dec 17 19:02:43 2013 UTC and is due to finish in 60 minutes. The chair is lifeless. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:45 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:48 The meeting name has been set to 'tripleo'
19:02:51 SpamapS: got your UTC clock on ? :)
19:02:56 #topic agenda
19:03:03 bugs
19:03:04 reviews
19:03:04 Projects needing releases
19:03:04 CD Cloud status
19:03:04 CI virtualized testing progress
19:03:06 Insert one-off agenda items here
19:03:08 moving TripleO UI under Horizon codebase
19:03:11 open discussion
19:03:14 #topic bugs
19:03:16 #link https://bugs.launchpad.net/tripleo/
19:03:16 #link https://bugs.launchpad.net/diskimage-builder/
19:03:18 #link https://bugs.launchpad.net/os-refresh-config
19:03:21 #link https://bugs.launchpad.net/os-apply-config
19:03:23 #link https://bugs.launchpad.net/os-collect-config
19:03:26 #link https://bugs.launchpad.net/tuskar
19:03:28 #link https://bugs.launchpad.net/tuskar-ui
19:03:31 #link https://bugs.launchpad.net/python-tuskarclient
19:04:10 hi
19:04:24 well done - all bugs triaged now
19:04:32 still one incomplete we haven't chased to ground
19:04:55 criticals
19:05:01 https://bugs.launchpad.net/tripleo/+bug/1254246
19:05:06 https://bugs.launchpad.net/tripleo/+bug/1254555
19:05:11 https://bugs.launchpad.net/tripleo/+bug/1261253
19:05:50 I think we can close the first one, it's fixed in neutron
19:06:04 the first one should have been fixed by now, but the proposed fix revealed another interesting issue with DB schema migrations in Neutron
19:06:17 oh!
it's marked fix committed
19:06:48 yeah, but this one is needed too https://review.openstack.org/#/c/61677/1
19:06:53 *to be merged first
19:07:20 long story short: ML2 migrations have been broken in Neutron for a long time and worked by accident
19:07:26 arghhh
19:07:29 *fun*
19:07:30 ok
19:07:33 so it stays open
19:07:44 the second - can someone agitate about that in Neutron? it's marked 'low'
19:07:52 which I really find a bit bizarre
19:07:57 anteaya: ^
19:08:13 lifeless: i can ping enikanorov__ about it tomorrow
19:08:15 o/
19:08:36 marios: interestingly it says 'symptoms fixed but issue remains'
19:08:58 perhaps we should try removing clint's workaround, and if that works, close it in tripleo?
19:09:10 rpodolyaka1: can you be available after the meeting to discuss this in -neutron?
19:09:24 anteaya: sure
19:09:40 thanks
19:10:08 lifeless: marios: enikanorov told me he had fixed only one particular issue leading to problems with policies, though there might be others
19:10:14 rpodolyaka1: ah
19:10:26 rpodolyaka1: ah k
19:10:30 rpodolyaka1: so should we try removing the bandaid?
19:10:33 lifeless: though, I agree, we should try to remove the workaround
19:10:49 lifeless: at least to provide more information on the errors we have
19:11:07 looks like a good plan to me
19:11:09 is there a volunteer here to try that (not you rpodolyaka1 :P) or should we ask on the list?
19:11:40 sure i can give it a go
19:11:44 matrohon: btw i'm here
19:11:59 #action marios to try removing workaround for bug 1254555
19:12:47 bug 1261253 we can work around very easily - it's just a matter of manually installing d2to1 into the mirror, and we document other cases of that already in the pypi element; after that we can downgrade the bug to a medium
19:13:17 see bug 1222306 for another example
19:13:27 any volunteers here, or should I ask on the list?
19:13:55 o/
19:14:20 #action marios to document workaround in pypi element for bug 1261253
19:14:33 Any other bug business?
19:14:51 sorry for being late
19:14:56 #topic reviews
19:14:59 #link
19:15:00 http://russellbryant.net/openstack-stats/tripleo-openreviews.html
19:15:03 erm
19:15:07 #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
19:15:42
19:15:43 Stats since the last revision without -1 or -2 :
19:15:43 Average wait time: 0 days, 5 hours, 28 minutes
19:15:43 1st quartile wait time: 0 days, 2 hours, 31 minutes
19:15:43 Median wait time: 0 days, 3 hours, 42 minutes
19:15:45 3rd quartile wait time: 0 days, 5 hours, 27 minutes
19:15:49 So, still in good shape. \o/
19:15:53 nice
19:16:07 Any discussion needed around reviews? People happy with the quality, helpfulness etc that they are receiving?
19:16:37 happy
19:16:45 very happy :-)
19:17:16 ok, cool
19:17:28 #topic projects needing releases
19:17:39 We've landed code -> we need to do a release of projects.
19:17:44 Can I have a volunteer?
19:17:56 o/
19:17:59 :)
19:18:06 #action rpodolyaka1 to release all the things
19:18:22 Any discussion points around releases?
19:19:56 #topic CD Cloud status
19:20:08 SpamapS: you here?
19:20:23 (He may be stuck in face to face meetings today)
19:21:13 going to take that as no :)
19:21:14 ok so
19:21:21 we're now back to somewhat reliable
19:21:26 but we found some major
19:21:28 issues
19:21:47 a) we were missing -o pipefail in a number of places, which with tee leads to undetected failures
19:22:08 specifically we were failing to build the noncompute image for weeks, deploying with the old one
19:22:29 we found this out when nova broke compat between the noncompute and compute images :)
19:22:47 fixing that led to a cascade of small fixes that we used the two-reviewers-for-CD rule to land
19:22:57 as no one else was around, and we were down
19:23:10 so - please use -o pipefail :)
19:23:31 b) cinder was basically never working, we have no idea how we ever succeeded with it included; fixed now.
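The `-o pipefail` point made in the CD status above can be sketched in a few lines of shell. This is a generic illustration, not the actual TripleO scripts; `false` stands in for a failing image-build command:

```shell
# Minimal illustration of the pipefail issue described above.
# 'false' stands in for a failing image-build command.

# Default behaviour: a pipeline's exit status is that of its LAST command,
# so piping a failing build through tee masks the failure.
set +o pipefail
false | tee /tmp/build.log
echo "without pipefail: $?"   # prints 0 - the failure went undetected

# With pipefail, the pipeline fails if ANY command in it fails,
# even though tee itself exits 0.
set -o pipefail
false | tee /tmp/build.log
echo "with pipefail: $?"      # prints 1 - the failure is caught
```

This is why a `set -eu -o pipefail` prologue is a common convention in CI shell scripts: without `pipefail`, `set -e` alone will not abort on a failure upstream of a `tee`.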
19:23:55 The preserve-ephemeral patchset is now complete enough to experiment with!
19:24:36 lifeless: unless we have a test or something in the overcloud that uses Cinder it could still break at any time right?
19:24:38 See the patch to devtest_overcloud (I0efa8f52864f49ccdb885f6f655c732c951b3f7a) for references.
19:24:50 dprince: less so, but yes.
19:25:07 dprince: we know all the current failures because we fixed the reporting chain so we detected them
19:25:39 * dprince likes checks in multiple places
19:25:40 I'm sure rpodolyaka1 & jog0 & whoever else is hacking on the preserve-ephemeral patchset would love folk to try early adoption
19:25:52 I've shoehorned it into the current undercloud
19:26:04 to get real world experience
19:26:09 test it, break it, review it :)
19:26:13 ++
19:26:16 good news - it deploys; bad news - we're not trying the new codepath entirely yet.
19:26:42 Need to land I0efa8f52864f49ccdb885f6f655c732c951b3f7a first
19:27:09 Anything else on the CD cloud status?
19:28:00 #topic virtualised testing
19:28:16 Anyone have news on zis?
19:28:52 dprince: pleia2: ?
19:28:59 Well. I've started pushing some things to the incubator to get parity w/ TripleO CI.
19:29:00 I might lose my spot in this conference room in a couple minutes, so I'll be quick
19:29:15 Soon I'll rip TripleO CI apart and have it use the devtest scripts.
19:29:16 https://review.openstack.org/#/c/61052/ is the main patch from derekh that I'm reviewing
19:29:19 (and testing)
19:29:59 And derekh just about has the test worker stuff in the bag.
19:30:02 ok, cool
19:30:08 I believe derekh was going to look into confirming that the networking that works ok now still works with the overlay network
19:30:42 Specific to our Red Hat test environment, we are still working on getting a small set (about 30) of public IPs.
19:31:00 that is mostly it I think.
19:31:01 done.
19:31:10 that's it for me too
19:31:52 dprince: you and derekh available for a sync up tomorrow?
19:32:10 dprince: ok - hey the RH environment should be in the CD cloud section :)
19:32:28 dprince: but thanks!
19:32:43 ##topic moving TripleO UI under Horizon codebase
19:32:45 pleia2: I am!
19:32:53 lifeless: yay
19:32:55 lifeless: okay, we can add it
19:33:05 #topic moving TripleO UI under Horizon codebase
19:33:07 ok
19:33:11 pleia2/lifeless: I'll send out an invite for the normal time.
19:33:15 dprince: thanks
19:33:19 lsmola_: tag :)
19:33:29 dprince: great
19:33:39 so i have sent an email with 2 plans, merging under the Horizon program and merging directly into the Horizon codebase
19:34:23 we will know more after today's meeting
19:34:26 cool
19:34:28 we are leaning towards merging directly into the codebase, although we have some conditions and development can be slightly slower
19:34:31 is there anything you need from us here?
19:34:39 or any concerns you want to talk about?
19:34:43 though it will be all done properly upstream :-)
19:34:56 I don't think there are tripleo related concerns
19:35:05 not really I think, it will all be discussed at the Horizon meeting today
19:35:12 Horizon meeting is more important today :)
19:35:17 Yup
19:35:23 :-)
19:35:30 ok
19:35:48 Now I see there's a new item on the wiki page
19:35:56 lsmola_: please put them in the list at the top as well :)
19:36:01 #topic After heat stack-create init operations (lsmola)
19:36:06 ok
19:36:09 "Regarding this discussion http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg11671.html Does this initialization belong to the Heat template or to Tuskar-API?
shardy says: but it is possible to update the configuration subsequently using cfn-hup, or os-collect-config/os-apply-config, which read updated resource Metadata and apply it "
19:36:15 I wasn't sure
19:36:22 dunno if that all came through - there should be a trailing " if it did
19:37:13 so
19:37:36 basically tuskar was calling these init scripts after the stack-create
19:37:56 though I am not sure if that is the right approach
19:38:06 Ok
19:38:24 It's an initialization step and it should be part of stack-create
19:38:27 so this is probably something to tease out on the list - I mean, I can say why it's the way it is today for the CLI
19:39:13 so are there any concerns with packing it into Heat or occ and oac?
19:39:14 we have a principle in the design that one-time things - basically API calls - should be externally orchestrated, not done by local machine scripts
19:39:26 all of this setup stuff is in that category.
19:39:41 Consider deployment of an HA setup. Which machine - and only one can do it - should run pki_setup?
19:39:58 Ditto initial neutron setup
19:40:03 and keystone registrations
19:40:44 hm
19:40:59 not sure if i am right, but we will be using different heat templates for an HA setup right?
19:41:29 Not sure at this point
19:41:32 I'd like to avoid that
19:41:42 just merge in with a count for the control plane of N != 1
19:41:50 hm
19:41:56 Remember that we build the undercloud by doing one node and handing over then scaling up
19:42:12 because you have to also define resources like load balancers, and things like that, in the Heat template right?
19:42:25 for a given stack, yeah
19:42:38 We can divide these tasks into two categories
19:42:44 there are things that we /should/ be automating
19:42:45 so it can be kind of hard to define all options in one template
19:42:50 like endpoint registration
19:42:58 but rather have multiple tested templates with different setups
19:43:14 by which I mean that if an endpoint moves it needs to be re-registered
19:43:33 right
19:43:44 but even that's not 100% clear - with VIPs you register once and any subsequent registration is deliberate and orthogonal to redeployment stuff
19:43:54 the other category is human tweaking
19:43:56 like network setup
19:44:29 until we have baremetal neutron to give outbound automatic policy stuff, the public network setup is entirely a matter of driving the Neutron API in the overcloud once
19:44:41 which users can do via the Admin tab of the deployed overcloud
19:45:09 Either way, none of these things need to be done on changes to the cloud, it's just initial bringup
19:45:31 hm
19:46:01 so i take it there is no place in the heat templates that is good for these init scripts?
19:46:14 heat doesn't know how to orchestrate APIs that it's deploying
19:46:23 only how to orchestrate APIs that provide it with resources
19:46:39 the 'software config' work that is ongoing is tangentially related, but not the same.
19:47:03 If we could get rid of the SSH for keystone, it would be all API all the time
19:47:17 hmm ok
19:47:18 and then we could focus on addressing that in Heat
19:47:30 we will put it separately then
19:47:34 anyhow - let's say: it is an issue, it should be fixed, but how isn't clear; -> the list
19:47:41 IMNSHO :)
19:47:48 and we will discuss with the heat guys how to make this happen
19:48:01 Well, also a broader tripleo discussion
19:48:07 lifeless: ok, cool
19:48:09 meetings aren't a good place to get everyone's thoughts
19:48:15 like - this has been just you and me :)
19:48:25 no SpamapS, no ng, ...
19:48:26 hehe, ok, point taken :-)
19:48:45 #topic open discussion
19:49:13 12 minutes, get it while it's hot
19:50:10 cue crickets
19:50:59 and
19:51:01 #endmeeting