20:01:27 #startmeeting tc
20:01:28 Meeting started Tue Dec 8 20:01:27 2015 UTC and is due to finish in 60 minutes. The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:32 The meeting name has been set to 'tc'
20:01:33 Hi everyone!
20:01:39 mestery sends his regrets
20:01:43 Our agenda for today:
20:01:49 #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
20:01:56 * flaper87 is connected through H+ (tethering)
20:01:58 expect some lag :D
20:02:03 russellb: good, we'll freely bitch about the Neutron stadium then
20:02:19 oh my
20:02:28 But let's start with another topic
20:02:34 #topic Should DefCore explicitly require running Linux as a compute capability
20:02:43 #link http://lists.openstack.org/pipermail/openstack-tc/2015-December/001085.html
20:02:56 an even more fun topic!
20:03:00 Do we have anyone wanting to introduce the topic ?
20:03:02 #link https://review.openstack.org/#/c/244782/ the critical review in question
20:03:09 hogepodge maybe
20:03:18 markvoelker: ?
20:03:28 I can proxy if neither of them are here
20:03:31 Sure, if hogepodge isn't around....
20:03:32 o/
20:03:32 o/
20:03:43 #link https://docs.google.com/document/d/1Q_N93hJ-8WK4C3Ktcrex0mxv4VqoAjBzP9g6cDe0JoY/edit
20:03:46 Also a report of what happened at the last board meeting could be helpful
20:03:49 ok, looks like we've got hogepodge & markvoelker
20:03:55 and zehicle
20:03:57 hey hogepodge zehicle markvoelker
20:03:57 since it was discussed there (and no report was done yet)
20:04:07 last board meeting was mostly informational for board members, no real action to report IMO
20:04:11 * markvoelker yields the floor to hogepodge since he got this on the agenda for the day
20:04:20 * zehicle waves in an OS agnostic way
20:04:27 some opinions expressed, but no consensus or any real new perspectives
20:04:30 so, given that some tc members might not be fully up on the topic, it would be good to have a summary presented
20:04:34 * mordred also sent an email to the tc list recently if you haven't seen it
20:04:35 before we get to discussion
20:04:40 * ttx throws interop tests at hogepodge
20:04:47 * markmcclain sneaks in late
20:04:52 #link http://lists.openstack.org/pipermail/openstack-tc/2015-December/001088.html mordred's email
20:05:00 Some tempest tests used for defcore implicitly require linux
20:05:19 clarification: requires Linux guests be running on the cloud
20:05:30 This has raised the question of whether booting Linux should be an explicitly required capability for passing DefCore interoperability tests.
20:05:52 because they attempt to log into those guests after booting them to ensure that the OS actually is on the network, and the cloud is working
20:06:08 I kinda like mordred's vision of it, but it goes a bit beyond the question (i.e. requires image upload)
20:06:11 do those tests boot the image using a hard-coded name?
20:06:13 yah.
it's a test of the inbound API presented by the cloud to the guest
20:06:22 clarification: since those tests ARE required now, Linux guests are required until we fix/flag the tests or change the rules
20:06:27 sdague: correct, it checks # cpus, hostname, and other capabilities
20:06:27 i'd be curious what disagreements there are with mordred's position
20:06:28 dhellmann: no, the image_ref is specified in tempest.conf
20:06:34 could save some time with just restating agreement there
20:06:35 sdague : ok
20:06:41 russellb: ++
20:06:53 zehicle: a guest could possibly wrap the commands and pass
20:07:07 I do wonder why we keep saying "linux" when really it's just a small set of tools that could run on unix or cygwin, right? eg maybe we should say ssh, ifconfig, etc are required and not worry about the kernel ?
20:07:08 russellb, people want to keep image upload and what runs in the guest as different concerns
20:07:10 I'm fine supporting mordred's line
20:07:14 I mean, I know it goes beyond the specific question, but if we're all in agreement with the one-step-past position, it will make the situation pretty clear
20:07:18 but it is simpler to say "linux"
20:07:22 russellb : I agree with mordred's email
20:07:40 right, but I think this brings up the more important heart and soul of what it means to be OpenStack, there are lots of technical ways to work around this, but the point is we don't really want to do that
20:07:42 clarkb: yes, that's more accurate to say, but most linux distros give you the tools outright.
20:07:47 sdague: ++
20:07:50 sdague: ++
20:07:52 clarkb: as would linux containers
20:08:01 mordred: I think it's a good medium-term goal yes
20:08:07 well, if the test requires the cloud to be able to boot a linux guest and there isn't one, we need to be able to boot one
20:08:10 clarkb: that's helpful, thanks
20:08:16 clarkb, we could and ignore the larger question. it's worth discussing the broader question too
20:08:31 yeah, let's not identify technicalities that let someone bypass this and just more clearly declare the intent
20:08:34 and I think that's why mordred also mentioned uploading an image
20:08:42 dhellmann: agreed, this is about intentions
20:08:44 right, because what is an OpenStack application going to look like in the future that you'd want to take off the shelf and apply to a cloud
20:08:46 but I'll admit I'm not 100% clear on why we are asking this question. Is that because someone wants to call a cloud that can't boot a Linux VM "an OpenStack cloud" ?
20:08:51 ttx: yes
20:08:57 flaper87: so a container-based openstack cloud would fail on that point
20:08:58 yes, solaris zones
20:08:58 ttx: yes, see the link to the review
20:09:01 ttx: precisely
20:09:03 i agree with mordred and sdague on the goals and reasoning
20:09:08 i think it's an important clarification -- it's important that a user *be able to run anything*, including linux, which happens to be the most common factor in the hardware/os matrix
20:09:10 it's in there in pretty specific detail
20:09:18 Yeah, one of the points that's been brought up is that we already have a way to identify required capabilities for OpenStack Powered(TM) clouds. So if we think "can boot linux" should be a required capability, perhaps we should just use them.
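For reference, the Tempest checks being discussed boot an image (taken from the image_ref option in tempest.conf rather than a hard-coded name), SSH into the resulting guest, and run commands such as checking the hostname and CPU count. The following is a minimal sketch of that kind of in-guest check, written here with paramiko rather than Tempest's own SSH helpers; guest_ip, username, key_file and expected_cpus are hypothetical placeholders, not the actual test's parameters.

    # Minimal sketch (not the actual Tempest code) of an SSH-based guest check,
    # assuming a reachable Linux guest with an SSH key injected at boot.
    import paramiko


    def verify_linux_guest(guest_ip, username, key_file, expected_cpus=1):
        """SSH into a freshly booted guest and confirm basic Linux tooling works."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(guest_ip, username=username, key_filename=key_file, timeout=60)
        try:
            # Confirm the guest answers on the network and reports a hostname.
            _, stdout, _ = client.exec_command('hostname')
            hostname = stdout.read().decode().strip()
            assert hostname, 'guest returned an empty hostname'

            # Confirm the guest sees the expected number of CPUs, analogous to
            # the flavor-vs-guest comparison mentioned in the discussion.
            _, stdout, _ = client.exec_command('grep -c ^processor /proc/cpuinfo')
            cpus = int(stdout.read().decode().strip())
            assert cpus >= expected_cpus, 'guest reports fewer CPUs than expected'
            return hostname, cpus
        finally:
            client.close()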
20:09:18 mordred, sdague I read it -- was checking that was the only reason why the question was asked
20:09:32 jeblair : I would be happy to support going further, if we had tests for other operating systems
20:09:36 markvoelker: I'd love that
20:09:37 hogepodge: can a linux container be uploaded ? I think that's good as well
20:09:37 Which means we score that capability against the 12 Criteria, etc.
20:09:57 i think "can boot linux" is the wrong thing to say. it's "can provide full machine virt"
20:10:05 russellb: ya
20:10:07 yah
20:10:08 flaper87: a container cloud can't boot cirros, though. It's a distinction
20:10:18 "can boot linux" is an easy way to _verify_ full machine virt
20:10:19 russellb: that certainly clears up any confusion I might have had
20:10:20 russellb: well, maybe, it's really "can show up with own arbitrary image"
20:10:21 russellb: sure, both have been brought up
20:10:24 that does not present an undue burden
20:10:27 but it is a detail
20:10:29 which baremetal also supports
20:10:29 russellb: that's a good clarification
20:10:36 we also need to realize that defcore is a lookback
20:10:37 sdague: ++
20:10:39 OK, I think the TC's opinion is requested and we seem to be OK holding the line where mordred eloquently set it.
20:10:43 yep, baremetal is fine with me too
20:10:57 ttx: ++
20:10:58 FWIW, baremetal will fail other tests anyway IIRC
20:11:00 ++
20:11:01 I'm not sure how to best formalize that
20:11:07 TC resolution?
20:11:08 ttx: ++
20:11:11 russellb: ++
20:11:13 with a formal vote
20:11:13 ttx: there was a suggestion on the list for a resolution
20:11:15 russellb : ++
20:11:19 russellb: ++
20:11:21 markvoelker, hogepodge: would you require a TC resolution ?
20:11:21 ++
20:11:24 let's write it up and vote
20:11:25 there are capabilities we check for that bare metal will fail.
20:11:27 ttx, the technical community also needs to be aware of the issue from a fix-test perspective too. Or add tests to make the intent clear
20:11:36 DefCore is limited to what exists as tests
20:11:42 I don't like saying "fix test"
20:11:43 IMHO not necessary since DefCore is a Board body, but helpful.
20:11:45 hogepodge : there's no need to conflate VM and baremetal. Separate tests, separate things, separate capabilities.
20:11:46 because that implies it's broken :)
20:12:03 Just as a point of curiosity, would taking this position mean that clouds that currently support non-x86 architectures would no longer meet the test, or is that just an interim position from mordred?
20:12:03 I think we can write a resolution that captures this
20:12:09 zehicle: right -- this originally comes from a test, but what we affirm here is that that was actually a good thing
20:12:15 right, i don't think a TC resolution is required, but it's a very helpful way of formally adopting a position communicated in important cases like this
20:12:15 persia: interim
20:12:27 dhellmann: it's making a statement that OpenStack Powered is VM only, which is important. There are some opinions that OpenStack should be about compute in general. Just want to make sure that point is raised.
20:12:36 Ah, so existing multiarchitecture clouds may continue to claim to be compliant?
20:12:44 I think that will let us come up with clear wording that represents our position
20:12:44 yes, if we want to say "the tc feels this way" we should vote on a resolution
20:12:47 (a resolution)
20:12:53 persia: if they can boot the linux needed for the test, sure
20:12:56 mordred: feeling ready to draft it ?
20:12:57 hogepodge: I think the spirit is the user can show up with their own image and get that to work, and being able to run Linux, as it is a free and open OS, is very reasonable validation of that
20:12:57 jeblair: ++
20:12:59 persia : yes, the point isn't that all of your cloud supports this feature, it's that some part of your cloud supports it
20:12:59 So, FYI it's currently too late to add stuff to the Guideline that goes up for a vote in January. Next one will go up for a vote in July, so if folks want to take some time to craft their opinions there is some time
20:13:01 ttx: yeah. I'll write it
20:13:06 ttx: I can just clean up the email
20:13:09 ttx, I don't think the current test is a good way to ensure the objectives being stated here. it's really a side effect.
20:13:11 mordred: thanks!
20:13:12 We can iterate on it on the review all week
20:13:14 The more urgent thing is just to decide whether we grant the flag request for this Guideline.
20:13:15 I know of no multiarchitecture clouds that do not have x86 flavours, making that easy.
20:13:16 and vote on it next week
20:13:18 persia: yes, an OpenStack can do more things
20:13:21 DefCore will do better with explicit tests so we can create and score capabilities
20:13:23 damnit, too much lag here. I can help with the resolution
20:13:24 markvoelker: I would request that you do not
20:13:27 but there is some base commonality
20:13:34 tests can be improved, but i think avoiding a step backward here is important
20:13:44 russellb : ++
20:13:46 markvoelker: there is nothing broken in the test, nor any intent on the part of the tech community to alter it
20:13:51 defcore is admittedly already such a low bar, let's not lower it more
20:13:52 oh, mmh, mordred took it already
20:14:03 zehicle : one of those scoring criteria is also "future technical direction", which we're trying to give you
20:14:16 dhellmann, that's true.
20:14:23 markvoelker: admitting the flag would imply that the thing being flagged is a potential issue, and it's not a door that is likely to change in the direction the flag is requesting any time in the foreseeable future
20:14:29 markvoelker: I believe there is a pretty clear voice that the flag request should not be approved
20:14:45 we discussed having "runs Linux", "runs Windows" and "runs Solaris" capabilities. that would be VERY explicit
20:14:45 Also FWIW, one thing that came out of the Board meeting is that there are some very different opinions on what "interoperability" means. Some seem to think it should stop at the API, others expect full workload portability, etc.
20:14:50 zehicle: agreed. I think what was a side-effect needs to have its own clear capabilities/tests
20:14:58 May not change anyone's minds, but worth considering: https://etherpad.openstack.org/p/UnofficialBoardNotes-Dec3-2015
20:14:59 if we are to follow that road
20:15:01 markvoelker: we can make that part of the TC resolution if you don't think it's clear enough with the 3 TC -1s on the review
20:15:01 "stop at the API" is meaningless to me
20:15:13 if you can't say anything about expected input and output, what the heck does that mean
20:15:15 russellb: ++
20:15:18 anyway ..
20:15:19 markvoelker: anybody who thinks it stops at the API
20:15:19 russellb : ++
20:15:23 markvoelker: has never run anything on a cloud
20:15:33 certainly not 2 clouds
20:15:35 markvoelker: and CERTAINLY has not run anything on multiple clouds
20:15:36 mordred: ++
20:15:39 that means the nova unit tests could be an OpenStack (tm)
20:15:45 * zehicle recalls that API vs Implementation has always been a challenge for DefCore discussions [e.g. designated sections]
20:15:51 zehicle: yup
20:15:58 russellb: ++
20:15:59 mordred: we can flag in waiting for test or functionality improvements, and we have done that in the past. In this case the flag wouldn't eliminate a capability, but defer testing until the suite is "better". I'm not advocating any position, just trying to clarify the process.
20:16:26 hogepodge: right, and I think the point is we explicitly don't want this
20:16:26 hogepodge: totally. but I think the entirety of the TC is saying "the test is not broken and is by design"
20:16:31 hogepodge : I would not expect this test to be changed at all.
20:16:35 hogepodge: so, we _could_ do that
20:16:37 hogepodge: that's like exactly why I submitted the patch...
20:16:40 but the test is spot on
20:16:45 +1
20:16:45 I don't see the point in disabling a test that ends up having the side-effect of testing something we also want
20:16:58 The test is fine, I just want it to not use hard-coded commands
20:17:03 ttx: agreed
20:17:13 mfisher_ora: but we're saying that those commands are not going to change
20:17:14 I guess my position is that "boot linux" needs to have a test so we can test for it as a capability. That is my preference, and I would be happy to work with qa to write that test.
20:17:16 mfisher_ora: right, and we disagree with you
20:17:20 I agree with zehicle it should end up having its own test if that's something we want
20:17:21 hogepodge : ++
20:17:24 mfisher_ora: because upstream cannot test changed commands
20:17:33 rather than just being another test side effect
20:17:34 and we do not accept changes we cannot test
20:17:45 sdague, mordred: yep, we get that now
20:17:46 ttx: I'm fine with tests depending on things that are also required, the bone here is that "boot linux as a full-virt guest" currently isn't. So a useful conversation has ensued. =)
20:17:52 ttx, that is not a consensus position. some see that it is a side effect as reason enough to kill it
20:18:09 it's consensus here :)
20:18:13 mfisher_ora: thank you, by the way, for highlighting the issue and diving in to this with us
20:18:19 which is what we're trying to arrive at
20:18:23 TC opinion to communicate back
20:18:24 zehicle : as I said before, we are trying to clarify the technical direction as intended by the contributors. Are you getting that feedback from contributors?
20:18:29 russellb, clearly. wanted to speak for other points of view I've heard
20:18:34 mordred: uh I'd say no problem, but obviously I'm a little disappointed at the moment :)
20:18:45 mfisher_ora: sure - totally understand :)
20:18:49 dhellmann, DefCore has a very broad audience.
20:18:51 mordred: we might want to address some of these common other opinions in the resolution
20:19:03 russellb: happy to
20:19:03 zehicle : I think maybe you give too much weight to some of them.
20:19:19 mordred: maybe it's covered well enough in your mail, i'll think it over again and comment on the resolution if i think of something
20:19:25 zehicle : but that's a discussion for another day, I think
20:19:26 russellb: thanks!
20:19:29 dhellmann, my goal is to hear that from everyone. then I know I got it right
20:19:48 zehicle: the right answer isn't the median one
20:19:57 sdague : ++
20:20:00 OK, I think we can move on. mordred will draft a resolution to formalize the TC's position on that question
20:20:14 we'll hopefully iterate on it fast enough to be able to vote it next week
20:20:20 sdague, totally agree! point is to hear and understand everyone.
20:20:27 ttx: o/
20:20:34 Thanks for your time and attention everyone. It's great to see the community input to defcore.
20:20:42 ttx: had some ECHILD at top of the hour, sorry
20:20:46 hogepodge: ++
20:20:51 zehicle: ++
20:20:53 lifeless: That happens
20:20:56 lifeless: That happens
20:20:59 * zehicle thanks TC for being direct and vocal. it helps discussion to have a position taken
20:21:27 It's good that we seem to have consensus on a position too
20:21:43 could have turned out a lot less clear
20:21:48 zehicle: it's not every day we can be this clear :)
20:21:58 Alright, moving on to the other topics
20:22:01 yes, quite clear. thank you.
20:22:01 I think cloud providers want to serve customers, and customers benefit from a clear understanding.
20:22:05 * dhellmann puts a big red circle around today on the calendar
20:22:08 mordred: lol, unfortunately, that's true :D
20:22:10 zehicle: hogepodge markvoelker: thx
20:22:26 #topic Open discussion
20:22:35 We have a number of topics to cover in Open discussion
20:22:38 * Standardizes name of freezer service to match conventions
20:22:44 This review (not needing formal vote) seems stuck:
20:22:49 #link https://review.openstack.org/249788
20:22:56 If we could come to an agreement on the color of that bikeshed and be done with it...
20:23:22 I think we should store shovels in the shed
20:23:23 please :)
20:23:24 red shovels
20:23:27 I don't know what a "Recover service" is but if we get that changed to "Recovery" I think it looks fine
20:23:31 dhellmann: ok
20:23:41 i'd like to recover some service
20:23:43 Yeah I agree Recover doesn't make sense
20:23:44 flaper87: did you find where freezer does disaster recovery?
20:23:51 flaper87: that was the other concern to address
20:23:58 annegentle : that's what they said they're building
20:23:59 (if you change it to recovery, do you need to change it to restoration?)
20:24:07 annegentle : line 561 of the same file
20:24:20 dhellmann: but. but... I couldn't find proof of it so do we publish that in the service name?
20:24:36 annegentle: not in their code, but I did ping folks from the freezer team and asked them to chime in.
20:24:36 dhellmann: so...
20:24:38 that didn't happen, unfortunately
20:24:40 annegentle : I don't think we've asked that question of other projects?
20:24:41 I think the mission statement has it, though
20:24:48 flaper87: ah ok. well thanks for following up
20:24:51 Backup, restore and recovery might not be 3 verbs, they are three operations of Freezer
20:24:58 So I'm fine with that
20:24:59 "prove you are building the thing you claim but haven't yet finished"?
20:25:01 yeh
20:25:13 dhellmann: heh, well. ok. names without claims, we're ok with?
20:25:18 yeah, "backup and restore" is common parlance, if not good grammar
20:25:22 Restoration and Recover are not backupland concepts
20:25:25 +1 to mordred's mail
20:25:25 or claims in names
20:25:31 annegentle : trust?
20:25:34 yup, that's in their mission statement
20:25:49 dhellmann: sure, seems fair to extend
20:26:00 so, I'll change to Disaster Recovery and gtg?
20:26:01 annegentle: how strongly do you feel on verb consistency ?
20:26:11 annegentle: +1
20:26:13 annegentle : ++
20:26:15 ttx: I like it. A lot. But I'm bad at catching it early enough :)
20:26:25 annegentle: ++
20:26:34 ttx: and recovery/restore is particularly badly patterned in the industry
20:26:51 annegentle: probably why I don't care about that industry and use tar
20:26:55 hee
20:27:21 Alright sounds like we have a way forward
20:27:43 #agreed annegentle to change "Recover" to "Disaster Recovery"
20:27:45 check!
20:27:51 whew. i guess they can't always be as easy as the defcore topic can they?
20:28:00 freezer is really about recovering the cloud right?
20:28:06 it's not tenant backup/restore
20:28:08 hahahah
20:28:10 jeblair : it depends on the size of the shed, right?
20:28:14 covering the last topic first since the second one will likely take us until the top of the hour
20:28:20 * N/O naming status
20:28:21 just to make sure that distinction is out there
20:28:24 We are now getting very late for the N naming, where voting was supposed to start 2015-11-30
20:28:28 mordred: need help with this one ?
20:29:03 I will try to get to this today
20:29:14 kewl. I don't care that much about O
20:29:15 sorry - I'm running mildly behind
20:29:17 well
20:29:19 ruhroh, ECHILD again.
20:29:22 I'm going to do them both at the same time
20:29:39 sure, but if that is the main reason why it's late... better do N first
20:29:47 can we start N even if O is blocked for any reason?
20:29:52 that was the plan, AFAIR
20:30:20 ttx: it's not. literally splitting a file into 20 smaller files and then propping my feet up to click "submit" 40 times is what I'm waiting on
20:30:32 neither are blocked by anything
20:30:32 I just suck
20:30:36 #agreed mordred will try to get to start the N/O voting today
20:30:41 * dhellmann hands mordred an intern
20:30:44 that's all I wanted to hear.
20:30:45 go go mordred
20:30:45 mordred: don't you have minions for that?
20:30:55 :)
20:30:56 dougwig: BWAHAHAHAHAHAHAHAHAHAHA
20:31:16 * The neutron stadium discussion
20:31:18 can i give some background and context on this one?
20:31:23 #link http://lists.openstack.org/pipermail/openstack-dev/2015-December/080865.html
20:31:30 russellb: please
20:31:32 This started when Neutron split its drivers out into other git repos. I suggested those repos be adopted as officially part of Neutron.
20:31:33 Wanted to discuss this a bit -- I was a bit alarmed to hear the neutron PTL say he can't vouch for anything in the neutron "stadium"
20:31:40 It has since grown to include other types of repos, so it's hard to generalize now. I attempted to start breaking down the different types here: http://lists.openstack.org/pipermail/openstack-dev/2015-December/080876.html
20:31:42 I read through the thread but I wouldn't mind some background
20:31:48 The key point is that the Neutron PTL feels that the PTL/delegates can't appropriately track all of these efforts. How to deal with that is the question at hand. There are a few different possibilities in my view.
20:32:02 that's the tl;dr
20:32:34 my opinion varies depending on which type of thing we're talking about
20:32:35 russellb: is the outcome to avoid PTL overwhelm or to better manage expectations for stadium inclusion?
20:32:39 Currently we have governance reviews stuck because the PTL defers to others accepting new things into it (like mentioned on https://review.openstack.org/#/c/230699/)
20:32:53 so I would agree the stadium is pretty big at this point, it would be nice to have more things really stand on their own
20:32:58 We let project teams freely add repositories on the basis that the project team (and its PTL) are responsible for anything in them
20:32:59 there are some easy ones that should probably be independent
20:33:04 the least obvious is a basic neutron driver
20:33:14 It doesn't seem to be the case in the neutron stadium...
20:33:24 I wish armax were here
20:33:27 but right, if neutron says "we can't keep track" then we can split it all up, it honestly doesn't matter to me much
20:33:29 russellb : are you saying the driver should or should not be independent?
20:33:31 note that armax is unavailable right now, and he'd like to be part of this discussion/decision.
20:33:36 however a lot of the advanced services, which would be really good to stand on their own, still directly import neutron code, which makes that hard
20:33:39 dhellmann: i'm saying it should not, it's a weird thing to set
20:33:44 russellb: the categories are helpful
20:33:47 sdague: right.
20:33:51 russellb : ok, that's what I thought, I agree with that I think
20:34:02 dougwig: I don't expect a decision to be made, mostly get a temperature reading from the TC on that question
20:34:04 sdague: right, so it can't happen immediately.
20:34:05 russellb: I agree a driver should not be independent
20:34:12 it's "open discussion" time after all
20:34:23 sdague: there is work to make the shared neutron bits a true library
20:34:24 dougwig: yeah this is for us to ensure we have understanding also
20:34:34 no worries, just wanted to mention it. :)
20:34:45 dougwig: oh absolutely :)
20:34:56 markmcclain: right, it just seems that should be the #1 priority at the moment, if it would help remove overload
20:35:08 markmcclain: ironically, via a stadium repo. :)
20:35:11 markmcclain: would the categories change much if a shared library came to be?
20:35:18 part of the issue is that neutron has a workflow it sets out for the stadium and then should a member not follow that workflow it creates a lot more work
20:35:22 dougwig : I would expect that library to stay a stadium repo, no?
20:35:23 sdague: not sure about it being #1 prio but yeah, it'd be good for the project
20:35:23 not only for neutron
20:35:29 dhellmann: yes.
20:35:36 even making advanced services use a shared lib doesn't change the fact that they're tightly coupled to neutron
20:35:40 I guess I felt the split kind of happened backwards, and that more git repos without clear interfaces just creates more work for everyone
20:35:40 they run in the neutron-server process
20:35:50 sdague : ++
20:35:51 russellb: right, that seems problematic
20:36:02 russellb: well, that part is more easily broken than you'd think, since they can export their own endpoint, once the co-dependency is broken.
20:36:03 I think russellb's type classification on that thread is great
20:36:17 could they be really, truly, stand-alone processes?
20:36:35 sdague: lbaas already is, via octavia. the plan is for neutron-lbaas in neutron-server to become a passthrough.
20:36:50 dougwig : is it likely that this problem is going to get worse (by adding more repos) before it gets better (by finishing that library work)?
20:36:56 dougwig: so why not a separate sc entry, why even a pass through?
20:37:10 sdague : backwards compat
20:37:11 ?
20:37:13 sdague: both, for backwards compatibility, IMO.
20:37:15 i don't think the library helps here
20:37:27 dougwig: ok, like the volumes proxy in nova, I can live with that
20:37:40 dougwig: wait, what does "export their own endpoint" mean?
20:37:41 russellb : oh, I thought that would be required for splitting the teams apart, but you think not?
20:37:41 dhellmann: i think most of the repo explosion was due to the vendor decomp, and that wave has likely crested. it'll be a trickle from here, until the lib is more mature.
20:37:45 russellb: yeh, it very well may not
20:37:54 annegentle: lbaas has a service endpoint in keystone.
20:37:57 e.g.
20:37:58 annegentle : new rest services
20:38:09 dhellmann: depends on what we're talking about. part of the library is for each of the plugin interfaces
20:38:18 I think the more important thing is to figure out how more things really stand on their own, don't run in the neutron processes, only use the published neutron REST API to communicate with the rest of neutron
20:38:20 dhellmann: but you can't have lbaas without neutron? Or is that where the shared library bit comes in?
20:38:22 Personally I'd like to see the neutron project team deliverables reduced to what the neutron team can handle, and the rest split out into their own project teams (since that's what they actually are)
20:38:32 that gives the loose coupling to be independent
20:38:34 sdague: yes.
20:38:49 sdague: and there's only 1 example, maybe 2, of that.
20:38:50 ttx: ++
20:38:52 annegentle : that's where the shared lib comes in. lbaas will need neutron, but neutron won't "contain" lbaas
20:38:58 dhellmann: got it.
20:38:59 The current situation feels like a bit of a bypass of our "are you OpenStack" review
20:39:02 sdague: the lib is for stable, versioned internal interfaces, since we've kicked all the plugins out of core.
20:39:04 ttx: ++
20:39:06 and now we pay that price
20:39:10 ttx: +
20:39:19 ttx: for which components specifically?
20:39:19 lbaas needs neutron ports, but doesn't need to import neutron, IMO.
20:39:25 yeah, it's also muddying up the release stuff, as anteaya pointed out, when some of the teams don't follow processes
20:39:31 that is probably true for some things, but i don't think that's true for all.
20:39:35 dougwig: right, that seems like a more durable architecture
20:39:38 i think kuryr is the best example of something that should be independent
20:39:44 ttx: sorry, afk for ~25m I suspect
20:39:50 but a little neutron plugin? meh.
20:40:01 perhaps a committee could be struck to work with neutron leadership to come up with a plan and details?
20:40:03 I think things consuming neutron should be (kuryr, for example)
20:40:12 should be independent*
20:40:19 everyone with opinions should really raise them on the thread btw
20:40:27 to include armax
20:40:34 I guess the question is, is the neutron team asking for help from beyond the team boundaries here?
20:40:41 russellb : drivers are ok, as long as the neutron team can actually manage the release processes for them.
We've slowed down adding libs to oslo in part because as the number grew we started hitting issues tracking them all
20:40:54 sdague: not formally, but it impacts the TC since it's blocking repo reviews, at least
20:40:55 russellb: I'm just saying that if the teams producing some of them are so separate the Neutron PTL doesn't want to hear about them, I think those should be dropped from governance and formally apply to become OpenStack project teams
20:40:59 it sounded like there was some overwhelming of folks, but things weren't super clear to me
20:41:00 sdague: I didn't get the sense they were
20:41:01 just IMO, but i think we need to strike a balance between what should be separate/using strict interfaces, and still moving forward, since the former isn't ready yet.
20:41:14 ttx: yep, that's fine
20:41:22 sdague: i think armax was raising the point to spark a discussion.
20:41:26 i'm actually OK with that at this point, it's not worth arguing over
20:41:42 as a user I like it when things that can run on instead of in a cloud do so
20:41:54 i just think about it from a precedence POV too
20:41:55 ttx: yeh, I agree. Either the neutron team vouches for stuff, or those things aren't neutron, and shouldn't be included under it in governance
20:42:00 then I am not stuck to the features the cloud deploys
20:42:04 do we want a new top level project for every driver of every project?
20:42:07 dougwig : is there a plan to get things to a technical point to let neutron get past this governance issue?
20:42:10 if not? what's our guidance?
20:42:13 russellb : I don't think anyone is suggesting that
20:42:27 dhellmann: the original proposal suggested that
20:42:33 I think some groups will be very successful as independent teams. Some others will stay in neutron and be handled by the neutron team. And some others may struggle to adopt enough of the OpenStack Way to become recognized as their own project team
20:42:40 russellb : ok, anyone here
20:42:49 sure
20:42:52 but at least that's more in line with how we handle everything
20:43:07 well maybe the question is, what does the neutron core team feel comfortable vouching for, and make the governance list that. And it might open up a driver question.
20:43:07 ttx: yep, that's fine
20:43:07 russellb : the team needs to manage its growth so that it is only trying to take on work it can handle, and adding everything networking related to one team may not line up with that
20:43:08 dhellmann: no, i think that's what armax is hoping to accomplish. the neutron-lib plan to break the co-dependency hell is targeted at mitaka/N, at which point some of this will happen organically. but i think he wants something more concrete.
20:43:34 dougwig : ok, I think that's a mistake
20:43:47 russellb: With the caveat that sdague mentioned about interfaces not really being clean yet
20:43:56 right, none of them are, really
20:43:58 it's fun
20:44:02 dhellmann: wanting a concrete plan is a mistake? parse failure.
20:44:08 as someone who's spent a lot of time decoupling things in OpenStack.... it does not ever happen organically
20:44:27 dougwig : we have previously said that projects run by different teams talk to each other over rest interfaces as a way of clearly delineating boundaries
20:44:27 true, true.
20:44:28 sdague: not even with compost
20:44:30 it happens with a machete and a blow torch, and lots of sweat
20:44:41 dougwig : splitting all repos to their own projects, but maybe I misunderstood you
20:44:56 sdague: I'd rather have the neutron team focus on cleaning those interfaces rather than handling everything in the neutron stadium though
20:45:06 ttx: sure
20:45:07 dhellmann: agree, cutting everything loose is likely too far.
20:45:16 At least that gives us a way out of the maze
20:45:21 ttx: I'm with you on that, and it lets services give better thought to the API design of their own service
20:46:07 dougwig: I wouldn't say "everything". Just the things where the teams are so disjoint you don't feel confident about them
20:46:22 dhellmann: the rest boundary is interesting. so, something that creates a neutron api extension, but is otherwise 100% separate, should never be a separate openstack project? even if it's neutron-related, but neutron isn't interested in managing it? they must have their own service endpoint?
20:46:40 I bet there are a few things in that list of deliverables that have a lot of overlap with the "core" neutron team
20:46:50 if the neutron team doesn't like the API extension, i don't think it should be in openstack at all
20:47:18 building some consensus around common APIs is kind of the point :)
20:47:31 dougwig : it's difficult to set a hard rule, but I would lean toward finding ways to add more independent services to the networking feature space and stop cramming everything into one service
20:47:47 also, what russellb said about extensions: +1
20:47:56 "cramming everything into one service" is often required, because it all has to interact with the same network plumbing
20:47:56 russellb: agreed, I don't think anything should be extending the neutron API that isn't controlled by the neutron core team.
20:47:56 not arguing. the way we do apis is just a lot of duplication and overhead, today.
20:47:57 dougwig: extensions are awful for end user experience so I wouldn't advocate for those specifically but the neutron core API needs to be understandable/reviewable/consumable
20:47:59 it's complicated.
20:48:05 rest is hard for a few things... there also has to be a southbound code-level defined interface otherwise we'll have a pyramid of rest for drivers for particular technologies
20:48:07 russellb: "it depends"
20:48:07 russellb: you seem to have a pretty good handle on that issue, so maybe it's best to let you calmly come up with solutions there ?
20:48:34 ttx: i have my opinions, at least
20:48:40 russellb : sure, if they're tightly coupled they should be the same service. That doesn't sound like an extension, though.
20:48:43 i think it's on the neutron group in general to keep working through it
20:48:51 russellb : ++
20:48:52 good for TC members to be aware of it though, and please weigh in if you'd like
20:48:56 ++
20:48:59 russellb: categories are a great way for them to discuss amongst themselves for starters
20:49:01 russellb: well, except we do hand off between services to get a working network for a guest already with nova / neutron via rest things
20:49:07 it's harder, but it's doable
20:49:08 I'm not strongly attached to one specific solution anyway, I just don't think we can continue in the current situation
20:49:17 sdague: ah, right, the VIF plugging bit
20:49:31 the lines. they are blurry.
20:49:36 sdague: isn't the vif plugging library meant to simplify that, though?
20:49:39 russellb: and the network proxy
20:49:40 ttx: agreed
20:49:47 sdague: in the nova api? heh yeah
20:49:50 ttx: agreed. I'm worried about getting all of those things released if the neutron ptl/release liaison feel they can't manage them all
20:50:06 dougwig: there will still be per-hypervisor code in nova for it as well
20:50:13 if i'm hearing one thing today, it's that defining a hard boundary for a separate project is likely never going to substitute for a judgement call.
20:50:31 #agreed let russellb drive and propose solutions to move away from the current deadlock
20:50:40 ttx: I don't think we got milestones for most of them, for example
20:50:45 thanks russellb, it's good stuff
20:50:46 * russellb will start by giving armax another hug
20:51:01 dhellmann: most are marked release independent, and ping mestery when they want something put up, i think.
20:51:12 dhellmann: there is also the whole "I won't be ready at release time" thing making them release:independent
20:51:40 dougwig: right, but that mostly seems like a punt, because how does a consumer compose a working set of these things
20:51:41 dougwig : ok. I didn't see any on http://docs.openstack.org/releases/releases/mitaka.html so I don't know if they were released quietly or not at all
20:51:42 ttx: true
20:51:55 russellb: agreed. I actually am very thankful of armax for putting that dead fish on the table for everyone to see
20:52:12 ttx: agreed, very thankful of armax kicking off this thread
20:52:19 armax: when you read this backlog, *hug*
20:52:19 there are a few on http://docs.openstack.org/releases/independent.html
20:52:23 sdague: fair point. many are vendor code, further muddying it pu.
20:52:23 otherwise the issue would have gone underground for more time
20:52:25 /pu/up/
20:52:50 dougwig: right, so that all seems like remove from neutron stadium, and make no guarantees about it
20:53:00 because that seems to be the actual state of things anyway
20:53:28 note that i'm not arguing with y'all. the stadium does feel like it's gotten "too big" for its intended purpose to me, too. i just don't know what the end result should look like, or its timeline.
20:53:33 sdague: yeah, it's about setting expectations more clearly. Currently we assume support due to being under the neutron project team, but that support is not really there
20:53:43 dougwig : understood
20:53:58 bar is/was quite low
20:54:03 Consider the floor open for other open discussion topics
20:54:07 Anything else, anyone ?
20:54:15 (we can continue on the neutron thing in parallel)
20:54:15 dougwig: oh yes, I don't see this as argumentative at all, quite constructive
20:54:19 raising the bar and evaluating against it adds up to a lot of work
20:54:23 that nobody is thrilled about doing
20:54:26 that's part of the deadlock
20:54:28 ttx: it's neutron, the bikeshedding can never end on neutron. or else openstack might cease to exist.
20:55:07 heh
20:55:25 haha
20:55:35 quantum physics after all
20:55:42 dougwig: I want to store shovels in neutron. red shovels
20:55:50 * fungi groans at ttx
20:56:20 fwiw, i don't think this is bikeshedding at all.
20:56:50 it's an important discussion about how openstack teams are operating in reality
20:57:00 yeah this is team definition stuff
20:57:01 very true
20:57:06 and how best to organize and reflect it
20:57:19 storming and norming!
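As an aside on the "talk to each other over rest interfaces" point raised above: a loosely coupled service can consume Neutron strictly through its published API, via a keystoneauth session and python-neutronclient, rather than importing neutron server code. The following is a rough sketch under that assumption; the endpoint URL, service account, and device_id lookup are placeholders for illustration, not anything from the meeting.

    # Rough sketch of consuming Neutron only through its public REST API,
    # instead of importing neutron server internals. keystoneauth1 and
    # python-neutronclient are assumed installed; credentials are placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from neutronclient.v2_0 import client as neutron_client


    def get_ports_for_device(device_id):
        """Look up Neutron ports for a device using only the published API."""
        auth = v3.Password(
            auth_url='http://controller:5000/v3',   # placeholder endpoint
            username='example-service',             # placeholder service account
            password='secret',
            project_name='service',
            user_domain_id='default',
            project_domain_id='default',
        )
        sess = session.Session(auth=auth)
        neutron = neutron_client.Client(session=sess)
        # The same call works against any deployment exposing the Neutron v2
        # API, which is the loose coupling being argued for in the discussion.
        return neutron.list_ports(device_id=device_id)['ports']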
20:57:29 this is exactly why the oslo team asked the os-win folks to start their own project team
20:57:43 dhellmann: ah good parallel I might not have seen
20:58:34 not the same team, then not the same team.
20:58:35 right now, it's a TC within a TC.
20:58:49 dougwig: huh?
20:58:56 armax is nipping it before it can form its own mini-bureaucracy.
20:59:15 the stadium is its own governance model.
20:59:17 it's not so different than other projects with groups working on new APIs or new drivers ...
20:59:23 just that it's separate repos
20:59:25 Every time we tried to fit two separate project teams into a single one it failed. We created the big tent to escape that issue and remove the friction in creating new teams
20:59:45 thing is, once the separate repos exist, clearly not everyone looks at all of them
20:59:52 so ... now what
21:00:14 Oh well, time is up
21:00:18 russellb: and who's responsible/accountable, etc.
21:00:22 Thanks everyone, was a good one
21:00:23 kthxbai
21:00:26 #endmeeting