19:08:24 #startmeeting tripleo
19:08:25 Meeting started Tue Apr 8 19:08:24 2014 UTC and is due to finish in 60 minutes. The chair is slagle. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:08:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:08:29 The meeting name has been set to 'tripleo'
19:08:34 hi
19:08:39 #topic Agenda
19:08:41 hello
19:08:49 bugs reviews Projects needing releases CD Cloud status CI Insert one-off agenda items here open discussion
19:08:56 doh, failed at that
19:09:09 heh
19:09:10 you get the idea, let's move on
19:09:14 #topic bugs
19:09:24 #link https://bugs.launchpad.net/tripleo/
19:09:24 #link https://bugs.launchpad.net/diskimage-builder/
19:09:24 #link https://bugs.launchpad.net/os-refresh-config
19:09:24 #link https://bugs.launchpad.net/os-apply-config
19:09:27 #link https://bugs.launchpad.net/os-collect-config
19:09:29 #link https://bugs.launchpad.net/tuskar
19:09:32 #link https://bugs.launchpad.net/python-tuskarclient
19:09:46 oh nuts
19:09:47 incidentally, i was just triaging as we were waiting for the meeting
19:09:52 I have this in my calendar for an hour later
19:09:54 hi!
19:10:03 we had several untriaged bugs on tripleo :(
19:10:05 close to 10
19:10:11 lifeless: you want to take over? :)
19:11:01 I can take a look at some of those after the meeting
19:11:23 slagle: nooo
19:11:31 slagle: I *do* very much want to talk about configs in the context of reviews
19:11:41 slagle: but I'm more than happy for you to run the meeting
19:11:42 lifeless: ack
19:11:58 so for the untriaged stuff, i did see a pattern on a few...
19:12:05 if you assign the bug to yourself, please triage it :)
19:12:31 set a priority, mark as in progress if you're working on it, etc
19:13:36 unassigned crit: https://bugs.launchpad.net/tripleo/+bug/1304085
19:13:42 derekh: you want that one ^^?
19:13:59 slagle: yup, will take
19:14:15 unassigned crit: https://bugs.launchpad.net/tripleo/+bug/1304424
19:14:36 i just triaged that, and marked as critical. but it needs an assignee
19:15:06 i actually think some of dprince's patches in queue may address it
19:15:17 around bringing the network stack back up
19:15:26 any volunteers?
19:15:32 I can check it tomorrow
19:15:47 thx
19:15:53 np
19:16:19 so thats the openvswitch issue
19:16:45 dprince: ^ - I haven't checked, but I gave pointers on what I would really prefer to see for Ubuntu, I haven't had time to try to write it up myself
19:16:56 is there a bug already opened?
19:17:12 pretty sure, lets see
19:17:23 "The" openvswitch issue?
19:17:36 bug 1272969 is part of it
19:17:39 lifeless: I left you a reply about that.
19:17:52 dprince: oh cool; let me go hunt that down
19:18:09 lifeless: My choice was to go with what all the distros do for openvswitch and let the bridge get destroyed
19:18:53 lifeless: and I would point out that not destroying the bridge is exactly why DHCP gets broken today on a reboot (thus my initial approach to use neutron-ovs-cleanup to work around this)
19:19:30 dprince: wouldn't it be much simpler then to just put the ovs db in tmpfs, if we're going to be stateless?
19:19:43 dprince: It's really confusing to be half and half
19:20:03 lifeless: my take is this: everyone cleans up openvswitch ports on a reboot. We do it in openstack (aka. neutron-ovs-cleanup). So do all the distros via their init scripts (mostly so things are compatible with linux bridge perhaps). So why not us too? Especially since it doesn't cause any problems.
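
(As context for the exchange above and below: dprince's workaround is to clear stale Open vSwitch ports at boot with neutron-ovs-cleanup, so the bridge state matches what neutron will recreate. A minimal sketch of what such a boot hook could look like follows; the wrapper script, its shape, and its placement in a boot phase are illustrative assumptions, not the actual patches under review.)

#!/usr/bin/env python
# Hypothetical boot-time hook: clear stale Open vSwitch ports left over
# from before the reboot so neutron can re-plug them cleanly.
# neutron-ovs-cleanup is the real utility; wrapping it like this is only
# an illustration of the workaround described above.
import subprocess
import sys

try:
    subprocess.check_call([
        'neutron-ovs-cleanup',
        '--config-file', '/etc/neutron/neutron.conf',
    ])
except subprocess.CalledProcessError as exc:
    sys.exit('ovs cleanup failed: %s' % exc)

(In practice neutron-ovs-cleanup is usually also given the plugin config file; the single --config-file here just keeps the sketch short.)
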
19:20:23 dprince: my problem is that 'ifdown foo; ifup foo' will break neutron flows and thats non obvious
19:20:32 dprince: its not the ports I'm concerned about per se - its the flows
19:21:04 lifeless: ifup/down manually might yes. But for that matter so would many things (like calling neutron-ovs-cleanup).
19:21:04 dprince: what if we land your stuff to unbreak things and someone can work on revising it later - would you object if that happened ?
19:21:29 lifeless: no, in fact I put a session on for Atlanta to hash through this stuff
19:21:37 ok
19:21:39 so lets do that
19:21:46 lifeless: but before it lands we need to land the MAC addresses fix
19:21:59 lifeless: otherwise all the virtual dev environments are hosed
19:22:06 dprince: thats in incubator right?
19:22:23 lifeless: yes, https://review.openstack.org/#/c/83867/
19:23:14 ok, back to slagle :)
19:23:43 any additional bug business?
19:23:53 the other criticals all have assignees
19:24:14 i don't think we need to go through those individually unless folks are blocked on them...
19:24:30 agreed
19:24:40 #topic reviews
19:24:51 #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
19:25:34 sounds like we should have some new cores soon :)
19:25:40 \o/
19:25:55 i think i saw votes from most existing cores on the ML
19:26:04 there's 2 threads
19:26:10 so reply to both if you haven't yet
19:27:02 #action lifeless to do core updates after everyone has voted
19:27:25 the first thread is actioned
19:27:33 since I know dan is already interested ;)
19:27:41 the second thread is waiting for clear consensus from -core
19:27:47 and then the folk to commit to 3/day
19:27:50 cool
19:27:59 also
19:28:05 so, if you've been nominated, plz reply and say if you're willing to commit
19:28:12 o/
19:28:12 lifeless: something formal or can I just say I'm good with that -- nevermind, I'll reply
19:28:18 my brain melted doing the meta-review for that, 3 hours or so of mass review reply reading
19:28:26 will do
19:28:27 thanks jdon :)
19:28:43 lol
19:28:49 :D
19:29:01 better than the time I was called "jbod" for a good two weeks
19:29:02 we are about where we were last week in terms of "Stats since the last revision without -1 or -2"
19:29:08 Average wait time: 4 days, 13 hours, 32 minutes
19:29:38 oh no, scratch that
19:29:45 it was 3 days last week
19:29:49 given we had a massive CI fail last week, I'm not surprised
19:29:56 we're clawing it back I think
19:30:06 but this is actually what I want to talk about
19:30:07 yea, we feel back a bit
19:30:08 *fell
19:30:13 for context
19:30:30 HP has a bunch of very experienced product folk spinning up on TripleO right now
19:30:34 you may have noticed :)
19:30:59 many of their reviews are tied into expanding the configuration surface area
19:31:07 lifeless: is that where all the config changes came from!
19:31:39 now
19:32:11 every single one of those changes, more or less, is what HP is running in its existing, at-scale configs, which are different to what tripleo delivers today
19:33:00 so I think we've got a big opportunity to pull together a consistent view of the delta between that particular production cloud and the defaults
19:33:12 we've got a thread going at the moment on the list about the topic *in general*
19:33:25 but I'd like to avoid us all churning around what to do with these options in the very short term
19:34:14 lifeless: who is going to work on this?
19:34:15 I have a commitment from the team's manager that they'll work on the bigger picture in the medium term - but right now its a) killing us and b) killing them to get all these settings in play
19:34:30 * dprince is interested
19:34:31 lifeless: when you say "product folk", do you mean people who have adminned OpenStack in the past?
19:34:52 jdob: yes, folk who are running thousands of nodes of OpenStack right now :)
19:35:22 awesome, i'm psyched to have admin experience v. just a developer presence
19:35:39 so anyhow
19:35:54 Indeed
19:35:57 what I'd like to achieve is some way to get past this huge bloat of reviews
19:36:49 and get back to incremental improvements - and put making a scalable config system a high priority post-bloat - since, as I said, I have commitment from the management chain (up several levels in fact) that they're here for the long term, working upstream on TripleO now.
19:37:00 does anyone have thoughts on how we might do this?
19:37:30 dprince: 'who will do the work' - review work is us; but work to make things better - the new folk are here to do such work
19:38:02 dprince: their first order of business is essentially bringing across all the learnt experience - which is where this huge influx of stuff came from
19:38:35 lifeless: when you said avoiding churning earlier, do you mean you pretty much want to just get all the config changes landed?
19:39:03 Ng: so right now we have nearly 200 open reviews
19:39:26 lifeless: right, I was more talking about proof-of-concepting some new config implementation to make configuring everything possible, without having to constantly review and keep our elements in sync
19:39:37 Ng: and a review team that is able to give deep thoughtful reviews on all of that, but its going to take time to work through and make it all really good and orthogonal
19:39:50 lifeless: if we do this right many of those could go away I think
19:39:54 Ng: *and* there's the open-ended aspect we're talking about too which would make 95% of said reviews just Go Away
19:39:59 dprince: yeah
19:40:17 dprince: but we'll need something similar on the heat side too
19:40:21 i guess there is a question of how soon they need support for what they've submitted to land?
19:40:31 dprince: +1
19:40:44 dprince: as well as good consistent answers for when to make something part of the UI vs exposed plumbing
19:40:48 lifeless: are there more similar reviews to come or do we have the complete list ?
19:40:53 i don't want to see folks getting turned off by slow review times
19:41:14 lifeless: on the heat side we could follow a model similar to what derek and yourself are doing for the CI environments... essentially allowing people to have their own site-specific stuff that would get merge.py'd in
19:41:24 slagle: they're suffering right now; aggressive deadlines internally, and caught on the other hand with 'work upstream'
19:41:40 dprince: Ideally thats all tuskar really
19:41:45 ok, so why don't we just slog through the reviews that are out there in the short term
19:42:03 recognizing we have a problem, and there are some ideas that have been brought up on how to fix it
19:42:10 so actually
19:42:23 but it sounds like those aren't likely to get implemented quickly enough to satisfy these folks?
19:42:41 what I'd like to do is find a volunteer - and those teams may well provide one - to implement a config pass-through system and have us prioritise reviewing and approving that
19:42:41 slagle: I would like to be cautious about top-level element options because if we eventually remove them we may break compat for someone
19:42:47 get that landed tomorrowish.
19:43:35 I was considering asking if folk would be open to landing stuff with a light touch, but I think it would be hard for them to post-review fix them up effectively - too easy to drop stuff through the cracks.
19:43:59 yeah I was just mulling around the idea of dropping the two-cores requirement for this specific set of reviews
19:44:21 so here's my proposal - how about we: get *a* pass-through config system in place ASAP, with knowledge that its first gen and we can replace or fix it down the track.
19:44:30 For both TIE and THT
19:44:42 we say that for this bulk set of options they should all be done passthrough
19:45:15 and the team will come back and help us (all of openstack) have better defaults in the medium term?
19:45:24 i would be ok with that, as long as it's understood we may very well take a different route later on
19:45:47 The specific ask of the review team is to a) help get the passthrough thing in place and b) understand its form may change
19:46:05 lifeless: when you say "config passthrough", you mean a generic config option setter? sounds ok to me
19:46:05 and then close out the bulk of the open reviews instead of merging?
19:46:13 derekh: e.g. dprince's or my strawmen in the list thread
19:46:17 jdob: right
19:46:28 lifeless: my thoughts exactly
19:46:31 jdob: there will be a bunch of reviews needed to enable the passthrough thing I suspect.
19:46:39 lifeless: ok, cool
19:46:40 i'm guessing this is implied, but their ambitious deadlines are ok with that?
19:46:55 I believe so
19:46:55 lifeless: I was more or less asking who is going to do this initial pass-through, and do we have an approach we like best?
19:46:55 assuming, like you said, it lands tomorrow/thursday
19:47:02 lifeless: sounds good
19:47:49 what do folk think of the config schema in my strawman
19:47:54 it'll certainly help flush out the generic one having so many use cases that quickly, and if they are ok with the few more days delay it sounds like a good plan
19:47:55 - that is in my reply to dan ?
19:49:34 i like the section nesting better than dot namespacing
19:49:40 http://lists.openstack.org/pipermail/openstack-dev/2014-April/032183.html specifically
19:50:06 I think it makes sense
19:50:27 lifeless: it looks like XML converted to JSON
19:50:53 lifeless: why not my suggestion above it?
19:51:15 lifeless: unless we are eventually going to go for XML too ;)
19:51:20 how about a generic inifile setter? it seems the examples in the mail will need knowledge of file locations etc...
19:51:22 dprince: that requires more parsing and a new tool e.g. augeas, so it will replace the existing templates - more work to bring in
19:51:54 the vast bulk of the problem we have is openstack settings in openstack files we already know about
19:52:23 lifeless: so just adding those template lines you show in the mail to the existing templates will produce the config options from your yaml example?
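
(The "generic config option setter" being discussed is, roughly, a pass-through from a nested schema of config file / section / option down to the rendered ini files. The authoritative schema is the strawman in lifeless's mail linked above; the YAML layout, helper function, and example options below are only an illustrative sketch of the idea, not that strawman.)

# Illustrative sketch of a generic ini passthrough: take a nested mapping of
# config file -> section -> option -> value and append the options at the
# bottom of the already-rendered files. The schema layout and the example
# options here are assumptions made for this sketch.
import yaml

EXAMPLE = """
/etc/nova/nova.conf:
  DEFAULT:
    ram_allocation_ratio: 1.0
  libvirt:
    cpu_mode: host-passthrough
"""

def apply_passthrough(schema_yaml):
    schema = yaml.safe_load(schema_yaml)
    for path, sections in schema.items():
        with open(path, 'a') as conf:
            for section, options in sections.items():
                conf.write('\n[%s]\n' % section)
                for key, value in options.items():
                    conf.write('%s = %s\n' % (key, value))

if __name__ == '__main__':
    apply_passthrough(EXAMPLE)

(On the heat side, as noted above, the same structure would still need to be passed through as template parameters, e.g. via the site-specific merge.py approach, which is why a separate heat passthrough comes up below.)
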
19:52:31 slagle: I believe so
19:52:40 (just making sure i understand)
19:52:42 lifeless: or, we could have a simple glue layer which auto-templatizes the upstream example config (nova.conf.sample for example)
19:52:51 yea, that's rather slick actually
19:52:55 slagle: shove them down the bottom of the file
19:52:58 lifeless: and with the glue we have our way with the config format
19:53:11 lifeless: but what about the heat templates? you still need all the yaml there?
19:53:13 won't we end up adding multiple DEFAULT sections?
19:54:11 derekh: I'm fairly sure iniparser doesn't care about that, I'd need to check. We can of course translate our *current* stuff in heat into the schema and have templates that are solely this format
19:54:24 derekh: which avoids that problem.
19:54:32 lifeless: ok
19:54:36 slagle: thats why we need a passthrough for heat too
19:54:50 slagle: which I'll nab e.g. stevebaker or SpamapS on in a few minutes
19:54:56 ok
19:55:14 well, i'm fine with reviewing this stuff lightly so it can be fast-tracked
19:55:41 so am I
19:55:50 oh...but still want to see passing CI though
19:55:54 ok
19:55:59 thank you!
19:56:25 slagle: review what stuff lightly, the stuff up for review now? or the stuff we are going to do for pass-through?
19:56:29 dprince: I think those options are ones to explore for a rework later, which as I said - I have folk tagged to do whatever remedial work we need for helping them through this hump
19:57:09 dprince: the pass-through
19:57:24 lifeless: sure, fundamental work...
19:57:33 we're down to 3 minutes btw, do we have anything else we want to squeeze in to the meeting?
19:57:52 do we have enough consensus or should I raise this on the list ?
19:57:56 or both ?
19:58:10 list would still be good, for absent folks
19:58:17 i'd think
19:58:32 action for me
19:58:45 #action lifeless to mail ML about short-term config pass-through
19:59:01 Ok, any other business in 2 minutes?
19:59:10 Hey, if anyone was able to look at the tuskar proposal for Juno mentioned in an upstream email - http://lists.openstack.org/pipermail/openstack-dev/2014-April/032034.html - that would be greatly appreciated!
19:59:24 slagle: releasing the things?
19:59:26 tzumainn: indeed!
19:59:29 ccrouch: oh yes
19:59:35 i will release this week
19:59:44 tzumainn: still have that sticky note on my screen, sorry :(
19:59:45 new ssl certs for ci/cd overcloud endpoints are on the way. just waiting for the verification/processing. I guess all the ssl registrars are pretty busy today ;)
19:59:51 need to bump the .Y's now that the stable branches are set up.
19:59:56 jistr, lol, no worries
19:59:59 Ng: that's an understatement :)
20:00:04 #action slagle to release the things
20:00:15 #link http://lists.openstack.org/pipermail/openstack-dev/2014-April/032034.html
20:00:39 thanks everyone, plz continue in #tripleo. sorry for the rush at the end
20:00:44 goodnight tripleo
20:01:00 #endmeeting
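
(A footnote to derekh's question above about ending up with multiple DEFAULT sections when options are appended at the bottom of a file: with Python's stock configparser, a non-strict parse tolerates a repeated [DEFAULT] section and the later value wins. Whether oslo.config's parser behaves the same way is exactly the "I'd need to check" from the meeting; the snippet below only demonstrates the stock-library behaviour.)

# Demonstrates how a stock ini parser handles a repeated [DEFAULT] section.
# With strict=False, configparser merges duplicate sections and duplicate
# options, with the later (appended) value winning.
import configparser
import io

SAMPLE = """
[DEFAULT]
verbose = False

[glance]
api_servers = 192.0.2.10

[DEFAULT]
verbose = True
"""

parser = configparser.ConfigParser(strict=False)
parser.read_file(io.StringIO(SAMPLE))
print(parser['DEFAULT']['verbose'])      # -> True: the appended value wins
print(parser['glance']['api_servers'])   # -> 192.0.2.10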