20:01:39 <ttx> #startmeeting tc
20:01:40 <openstack> Meeting started Tue May  7 20:01:39 2013 UTC.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:43 <openstack> The meeting name has been set to 'tc'
20:01:49 <ttx> Agenda is pretty busy, we'll see how far we manage to go today
20:01:59 <ttx> #link https://wiki.openstack.org/wiki/Governance/TechnicalCommittee
20:02:02 <hub_cap> we can keep reddwarf short ;)
20:02:13 <ttx> #topic RedDwarf Application for Incubation
20:02:19 <ttx> speaking of which
20:02:22 <ttx> #link https://wiki.openstack.org/wiki/ReddwarfAppliesForIncubation
20:02:26 <hub_cap> #link https://wiki.openstack.org/wiki/ReddwarfUsingHeat
20:02:30 <hub_cap> #link https://review.openstack.org/#/c/28328/
20:02:34 <ttx> This is a continuation of last week's discussion.
20:02:36 <hub_cap> those were my homework items
20:02:49 <ttx> hub_cap posted the Heat integration plan we requested, see above
20:02:50 <hub_cap> the latter is a review showing the effort to get postgres into reddwarf
20:03:08 <ttx> I haven't seen new questions clearly being raised in the public discussion
20:03:15 <hub_cap> its not a full impl, but it _works_ ie i can create users/dbs
20:03:29 <hub_cap> nope ttx and i updated w/ shardy comments about the refactor
20:03:31 <ttx> hub_cap: I had one question. In this doc you mention that Heat integration would be optional as a first step...
20:03:38 <hub_cap> sure ttx, let me splain
20:03:39 <ttx> I think the benefit of using Heat is to avoid duplicating orchestration functionality, so if the code is still around in RedDwarf it's not so great
20:03:50 <hub_cap> sure i agree
20:03:51 <ttx> I'd definitely like to see the orchestration code dropped in RedDwarf before it graduates from incubation.
20:03:57 <hub_cap> grad, yes
20:04:01 <ttx> So if for any reason (n+1 deprecation ?) it's not doable in a few months, maybe it's a bit early to file for incubation ?
20:04:12 <ttx> I see two options: (1) have it optional in a few months, file for incubation *then*, get visibility at summit, remove legacy during the I cycle and graduate to integrated for the J cycle
20:04:12 <hub_cap> oh its doable
20:04:23 <ttx> (2) have it optional real quick, remove legacy during the H cycle, and graduate to integrated for the I cycle
20:04:48 <hub_cap> the reason i mentioned optional was
20:05:04 <hub_cap> but someone could stand up reddwarf right now, and point it to say, hp cloud, and not need to self install heat, or have hp cloud install heat
20:05:11 <hub_cap> ie, it works against a cloud _right now_
20:05:14 * mordred likes the idea but doesn't necessarily think we need to expect them to have the work done before incubation - I think having a scope and a road map is quite fair and in keeping with past projects
20:05:34 <ttx> mordred: I'd just consider it a condition for graduation, personally
20:05:36 <mordred> hub_cap: and I am a big fan of things not requiring all of the infra
20:05:39 <mordred> ttx: ++
20:05:45 <markmc> hub_cap, is that a use case for the project, though?
20:05:46 <dolphm> scope = RDaaS or simply RDBaaS?
20:06:06 <dolphm> s/RDaaS/DBaas/
20:06:07 <markmc> hub_cap, would you not expect reddwarf and the cloud to be deployed together always?
20:06:17 <hub_cap> thx markmc for elaborating
20:06:36 <markmc> hub_cap, for clouds that don't have heat, heat would just be an impl detail of them adding the reddwarf service?
20:06:42 <russellb> and the cloud could have heat running internally and not necessary exposed to customers
20:06:47 <hub_cap> yes markmc
20:06:49 <russellb> yes, that.
20:06:50 <markmc> cool
20:06:51 <mordred> markmc: ++
20:07:00 <mordred> heat should hopefully soon be able to also run outside of a cloud
20:07:01 <hub_cap> either they could, or could not, have heat and still get this puppy fired up
20:07:06 <ttx> hub_cap: see dolph's question
20:07:14 <ttx> <dolphm> scope = DBaaS or simply RDBaaS?
20:07:21 <vishy> markmc, hub_cap: it sounds like you could run reddwarf locally and have it use a public cloud?
20:07:31 <vishy> is that correct?
20:07:34 <hub_cap> sure, im not sure weve fully answered that, but last meeting i thought we decided RDBaaS was fine for _now_
20:07:41 <hub_cap> vishy: correct, u dont have to own the cloud
20:07:44 <hub_cap> so to speak
20:07:50 <russellb> using heat doesn't change that though
20:07:59 <russellb> in theory.
20:08:01 * mordred would like for our accepted scope to be RDBaaS and for increase in that to require further vote
20:08:02 <vishy> russellb: it would mean you would have to run heat locally as well
20:08:05 <russellb> yes
20:08:10 <dolphm> hub_cap: that was my impression from last meeting, i just wanted to double check today
20:08:13 <hub_cap> im fine w/ RDBaaS for now
20:08:16 <markwash> re scope, it makes sense to me to treat it as "DB Provisioning aaS"
20:08:24 * markmc is fine with RDBaaS scope too
20:08:28 <markwash> rather than focusing on relational vs non
20:08:29 <hub_cap> i _do_ know we are going to be doing a cluster api
20:08:37 <hub_cap> and that facilitates things like non RDB's
20:08:45 <hub_cap> so it might fall in line quite well
20:08:47 <vishy> seems like keeping the option to run without heat might be valuable until heat is ubiquitous in public clouds
20:08:56 <markwash> vishy: +1
20:09:05 <mordred> vishy: if heat can also run locally easily too?
20:09:11 <hub_cap> and if someone wants to make a NRDB, they shoud consider reddwarf as a viable option before going from scratch
20:09:13 <shardy> vishy: unless it leads to lots of duplication and maintenance overhead..
20:09:13 <ttx> vishy: i'm a bit concerned with code duplication
20:09:23 <russellb> same here
20:09:24 * mordred is pushing hard for non-colocated heat so that openstack-infra can start using it
20:09:43 <russellb> and ttx had a good point earlier around when heat becomes "the way it works", and how that affects the incubation timeline
20:09:45 <shardy> mordred: I think (with some auth tweaks) heat could run locally too relatively easily
20:09:54 <russellb> i'd like to explore that a bit more
20:09:55 <mordred> shardy: I believe cody-somerville is working on it
20:09:59 <ttx> vishy: someone would run Reddwarf+Heat outside of the cloud
20:10:00 <vishy> mordred: I like the idea, I'm just thinking in terms of user adoption. It is nice if i could try it out without having to start up heat.
20:10:01 <shardy> mordred: IIRC there's a patch up adding much of what we need right now
20:10:08 <hub_cap> ttx thats def possible
20:10:16 <shardy> mordred: yup, that's what I was referring to
20:10:17 <mordred> vishy: ++
20:10:19 <hub_cap> they can run reddwarf w/o a cloud now
20:10:44 <hub_cap> so, heat as a incubation graduation req?
20:11:00 <russellb> and if so, what does the timeline look like for that?
20:11:13 <ttx> I'd rather avoid us having to deprecate a feature, I've lived through enough project splits
20:11:28 <markwash> can we really set rules for graduation requirements at this point? that would be putting constraints on future TC decisions that I don't think we have the power to make
20:11:43 <russellb> possible in H timeframe?
20:11:51 <hub_cap> russellb: def
20:11:53 <dolphm> markwash: i think that would be for the future tc to overrule?
20:11:54 <mordred> I think markwash makes a good point - I think requirement is a bit strong
20:11:56 <hub_cap> ttx: sure, one Q, is heat a required openstack service at this point? id hate to say u _have_ to have heat but heat is optional
20:11:56 <mikal> markwash: I agree. We should note it and let the future TC decide.
20:12:01 <markmc> markwash, it's totally reasonable to say "here's the things we think you'll need to graduate"
20:12:08 <ttx> markwash: not requirement. Direction.
20:12:09 <russellb> markwash: i think it's fair to set the roadmap for what we expect during incubation, even if that could change
20:12:14 <markwash> markmc: +1, direction, not req
20:12:16 <mordred> ++
20:12:18 <dolphm> markwash: +1
20:12:36 <dolphm> markmc: *
20:12:45 <russellb> but honestly, if we expect to keep the old way around, this whole pushing to use heat thing seems like a waste
20:12:53 <russellb> what's the point if that's not going to be *the* way to do it
20:13:01 <ttx> hub_cap: no service is required. But for example, I don't think Nova should have its own image service, when we have Glance
20:13:08 <hub_cap> so its just a matter of heat/optional, and id say we make heat default, and those who have already deployed w/o heat, can use the legacy code
20:13:18 <hub_cap> ttx: sure but do u think that no user should use nova if they have heat?
20:13:34 <russellb> but the legacy code goes away when?
20:13:45 <markmc> agree with russellb on two impls being pointless
20:13:47 <ttx> hub_cap: err... not sure I understand that question
20:13:47 <markmc> long term
20:14:00 <ttx> heat uses nova
20:14:02 <russellb> either the legacy code is on its way out asap, or the heat idea is scrapped
20:14:04 <russellb> imo
20:14:12 <hub_cap> ttx: heh what i mean is that reddwarf is a user of nova
20:14:19 <hub_cap> russellb: why is that? heat is the long term vision
20:14:24 <shardy> hub_cap: it's not just nova though, you're orchestrating clusters of instances, post-install customization, managing dependencies, potentially supporting scale-out etc, etc
20:14:29 <hub_cap> i sure as hell dont want to write clustering code :)
20:14:32 <shardy> all of which we already do
20:14:35 <hub_cap> for things like just what shardy said :D
20:14:39 <russellb> hub_cap: ok, so you see the legacy code being removed then.
20:14:46 <hub_cap> def russellb
20:14:53 <hub_cap> it wont be around forever heck no :)
20:14:54 <russellb> on what timeline?
20:15:04 <russellb> guess, not commitment
20:15:08 <ttx> hub_cap: hmmm... I see what you mean. i guess it's valid to directly address nova for a very basic scenario
20:15:23 <hub_cap> russellb: i was hoping to have yall help me w/ that
20:15:31 <hub_cap> im not sure how deprecating features, so to speak, works
20:15:34 <russellb> ok
20:15:40 <hub_cap> n+1? or _for_ graduation?
20:15:53 <hub_cap> i thnk those are the valid answers but i dont know whats best overall
20:15:53 <russellb> well ideally at this point in the process we wouldn't have to worry so much about the cost of deprecation .... :(
20:16:18 <ttx> hub_cap: my understanding is that you're covering more than just a basic scenario, and duplicating a lot of functionality from Heat
20:16:26 <hub_cap> ttx: as of now its only the basic
20:16:30 <hub_cap> single instance
20:16:31 <markmc> s/deprecating features/deprecating legacy implementation/ :)
20:16:47 <russellb> markmc: mhm
20:16:52 <mordred> markmc: ++
20:17:03 <russellb> as a project in incubation, i honestly don't think you should have to worry about deprecation cycles
20:17:12 <hub_cap> ah ic
20:17:15 <hub_cap> that makes sense
20:17:19 <russellb> ... ideally, anyway.
20:17:22 <gabrielhurley> Even with proper deprecation I don't see it as a huge problem to mark as deprecated for the H release, potentially graduate to Integrated in I and actually remove the code during that cycle...
20:17:35 <hub_cap> im fine w/ that gabrielhurley that was my hope
20:17:49 <hub_cap> we can make heat default for all installs
20:17:53 <gabrielhurley> anyone who's new to RD in the H release should know better than to start using something that's already deprecated ;-)
20:17:53 <ttx> russellb: +1
20:17:53 <shardy> hub_cap: if you currently only support single instance, I'd be interested to see a comparison of functionality wrt our RDS nested stack resource
20:18:05 <ttx> yes, mark deprecated in H is fine
20:18:06 <shardy> may be a good starting point for your integration w/heat
20:18:13 <hub_cap> shardy: does the nested stack do backups/restores/etc?
20:18:34 <hub_cap> we are working on replication now too, so this is really the ideal time to start integrating heat
20:18:35 <ttx> More questions before we vote ?
20:18:41 <hub_cap> cuz we will need master/slave VERY soon :)
20:18:42 <shardy> hub_cap: Not currently, no, that kind of matrix is what I'm interested in discovering
20:18:53 * mordred moves that we vote
20:18:58 <hub_cap> shardy: we should def chat then after
20:19:07 <shardy> hub_cap: may be stuff we need to add to support your use-case etc
20:19:08 * hub_cap moves out of the way
20:19:10 <ttx> raise you hand if you still have questions
20:19:19 <ttx> your*
20:19:20 <mordred> wait - we don't have to make motions to vote here... so nice and civilized...
20:19:27 <hub_cap> shardy: def. id love to work together on it
20:19:33 <jgriffith> mordred: I move we vote
20:19:39 <mordred> jgriffith: I second!
20:19:39 <shardy> hub_cap: sounds good :)
20:20:02 <ttx> No more questions, setting up vote
20:20:08 <ttx> #startvote Approve RedDwarf for Incubation? yes, no, abstain
20:20:09 <openstack> Begin voting on: Approve RedDwarf for Incubation? Valid vote options are yes, no, abstain.
20:20:10 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
20:20:37 <hub_cap> sweet! didnt know that existed
20:20:41 <dolphm> #vote yes
20:20:45 <shardy> #vote yes
20:20:45 <mordred> hub_cap: we're fancy around here
20:20:48 <mordred> #vote yes
20:20:50 <jgriffith> #vote yes
20:20:50 <mikal> #vote yes
20:20:53 <ttx> #vote yes
20:20:54 <markmcclain> #vote yes
20:20:54 <vishy> #vote yes
20:20:54 <annegentle> #vote yes
20:20:57 <markwash> I #vote yes even without the requirements of deprecating non heat approaches
20:21:07 <ttx> markwash: that won't count.
20:21:10 <mordred> :)
20:21:14 <gabrielhurley> #vote yes
20:21:14 <mordred> ok. we're not that fancy
20:21:17 <markwash> just wanted to let people know
20:21:19 <markwash> #vote yes
20:21:20 <jgriffith> haha
20:21:21 <jeblair> but we accept patches
20:21:22 <markmc> #vote yes
20:21:22 <dolphm> mordred: i'll file a bug
20:21:27 <SlickNik> heh
20:21:37 <ttx> 30 more seconds
20:21:52 <russellb> #vote yes
20:21:54 <jgriffith> markwash: care to vote officially?
20:22:00 <vishy> he did
20:22:01 <dolphm> jgriffith: he did
20:22:01 <jgriffith> doh
20:22:02 <jgriffith> nm
20:22:06 <ttx> #endvote
20:22:07 <jgriffith> sorry.. just saw it
20:22:07 <openstack> Voted on "Approve RedDwarf for Incubation?" Results are
20:22:08 <openstack> yes (13): markmc, ttx, vishy, shardy, annegentle, russellb, jgriffith, mikal, mordred, gabrielhurley, dolphm, markwash, markmcclain
20:22:16 <hub_cap> wow
20:22:18 <hub_cap> thx so much guys
20:22:19 * russellb considered abstain because of the movement in the heat area ... but taking leap of faith that it'll work out.
20:22:20 <ttx> Awesome, congrats guys
20:22:24 <imsplitbit> :-)
20:22:30 <hub_cap> russellb: i dont blame ya
20:22:35 <ttx> russellb: we can vote them off the island if they misbehave
20:22:39 <hub_cap> its on the top of my list of todos
20:22:41 <gabrielhurley> I also considered abstaining on questions of scope of openstack, but I want to use red dwarf myself, so....
20:22:42 <SlickNik> thanks for the faith, russellb.
20:22:43 <russellb> ttx: ok, cool :)
20:22:44 <mordred> ttx: wait - there's an island?
20:22:50 <hub_cap> hah ttx, who was the idol?
20:22:53 * ttx votes mordred for today
20:22:55 <hub_cap> *err has
20:22:57 * markmc leaping of faith too :)
20:22:57 <ttx> #topic Ironic / Baremetal split - Nova scope evolution
20:23:05 <ttx> #link https://wiki.openstack.org/wiki/BaremetalSplitRationale
20:23:11 <ttx> This is the first part of the discussion about splitting the baremetal-specific code from Nova into its own project
20:23:15 <markwash> we should steal the name Ironic for the I release
20:23:21 <ttx> We must first decide that this code doesn't belong in Nova anymore
20:23:31 <russellb> +1
20:23:33 <ttx> Which, I think, is a no-brainer since we didn't really decide to have it in in the first place, and the Nova crew seems to agree to remove it
20:23:47 <markwash> +10 fwiw
20:23:48 <ttx> Questions ?
20:23:49 <markmc> definitely think this has a lot of potential for use outside of nova
20:24:05 <dolphm> Ironic is an awesomely relevant project name #notaquestion
20:24:20 <gabrielhurley> My biggest question is "how much code will be duplicated?" (I get that this makes the remaining code simpler, but still worry about another copy-and-paste of Nova's source)
20:24:25 <ttx> (second part of the discussion will be about the incubation of the separate project)
20:24:26 <mikal> markmc: I wanted "incarceration" for that release
20:24:45 <ttx> gabrielhurley: maybe that belongs to the second part ?
20:24:51 <gabrielhurley> ::shrug::
20:25:00 <russellb> hoping we'll have the nova code removed asap
20:25:04 <russellb> so that there's no duplication
20:25:22 <mikal> baremetal is very different from other virt drivers
20:25:24 <mikal> Own DB etc
20:25:30 <mikal> I think it belongs elsewhere
20:25:30 <devananda> gabrielhurley: i've been digging into that over the weekend. short answer is: a lot, unless ironic starts fresh w.r.t. api, service, etc.
20:25:31 <gabrielhurley> russellb: there must be some... it wouldn't be *in* nova if it didn't rely on *any* nova code currently
20:25:32 <mordred> ++
20:25:41 <gabrielhurley> devananda: that's more what I expected to hear ;-)
20:25:43 <markmc> russellb, think gabrielhurley means the service infrastructure and such
20:25:49 <mordred> the virt driver itself that will talk to ironic will still be in the nova tree though, right?
20:25:52 <markmc> russellb, as opposed to the legacy virt driver
20:25:52 <russellb> ah yes, like cinder...
20:25:57 <gabrielhurley> yeah
20:25:59 <markmc> yes, like cinder :)
20:26:05 <devananda> besides api and service, it relies on nova/virt/disk for file injection, which i want to abandon anyway
20:26:08 <markwash> gabrielhurley: is this "bad" duplication or just "use cases for oslo" duplication?
20:26:13 <gabrielhurley> the copy-paste snowballing of problems/flaws/bugs makes me a sad panda.
20:26:14 <markmc> think it's going to be much more different from nova than cinder was
20:26:27 <mordred> I thnk there's going to be some of both
20:26:34 <gabrielhurley> markmc: can you elaborate?
20:26:40 <markmc> gabrielhurley, no scheduler e.g.
20:26:45 <devananda> it's _very_ different code from nova.
20:27:16 <ttx> Ready to vote on the Nova scope reduction ?
20:27:23 <gabrielhurley> sure
20:27:24 <markmc> quick q
20:27:32 <markmc> will the legacy virt driver be feature frozen
20:27:34 <markmc> during H?
20:27:54 <ttx> markmc: I suppose
20:28:10 <devananda> markmc: fwiw, I would like it to be, except for bug fixes
20:28:15 <devananda> there are several open BPs
20:28:16 <markmc> devananda, cool
20:28:32 <ttx> ok, ready to vote on the first part ?
20:28:34 <devananda> one in particular will have a big impact in terms of simplifying deployment of the baremetal driver in nova
20:28:38 <mikal> There are a few security caveats too
20:28:55 <devananda> mikal: i dont think those are any different in vs. out of nova?
20:29:12 <jgriffith> so stupid point of clarification, that means we're voting to skip incubation correct?
20:29:19 <ttx> jgriffith: no
20:29:19 <mikal> devananda: sure, but I don't want a nova freeze stopping us from fixing / documenting them
20:29:19 <russellb> no
20:29:37 <jgriffith> ttx: then how can we say "no features on existing code for"
20:29:38 <ttx> jgriffith: just voting on removing baremetal code from Nova's scope for the moment. More at next topic
20:29:42 <jgriffith> K
20:29:46 <ttx> we don't say that, YET
20:29:56 <jgriffith> k... I'll be patient
20:30:09 <ttx> #startvote Agree on long-term removal of baremetal code from Nova's scope? yes, no, abstain
20:30:10 <openstack> Begin voting on: Agree on long-term removal of baremetal code from Nova's scope? Valid vote options are yes, no, abstain.
20:30:11 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
20:30:11 <markwash> #vote yes
20:30:13 <russellb> #vote yes
20:30:15 <mordred> #vote yes
20:30:15 <mikal> #vote yes
20:30:15 <jgriffith> #vote yes
20:30:16 <gabrielhurley> #vote yes
20:30:19 <ttx> #vote yes
20:30:20 <shardy> #vote yes
20:30:21 <markmcclain> #vote yes
20:30:28 <ttx> 30 more seconds
20:30:32 <markmc> #vote yes
20:30:39 <annegentle> #vote yes
20:30:40 <markmc> (yes when ironic is ready I guess)
20:30:52 <markmc> we're not reducing the scope really until the legacy driver is removed
20:30:56 * markmc shrugs
20:30:56 <ttx> #endvote
20:30:56 <openstack> Voted on "Agree on long-term removal of baremetal code from Nova's scope?" Results are
20:30:57 <openstack> yes (11): markmc, ttx, shardy, annegentle, russellb, jgriffith, mikal, mordred, gabrielhurley, markwash, markmcclain
20:31:04 <ttx> #topic Ironic / Baremetal split - Incubation request
20:31:11 <russellb> markmc: agreeing on direction to reduce scope, i guess
20:31:18 <ttx> This is the second part of the project split decision... create a project from Nova baremetal code and accept that new "Ironic" project into Incubation
20:31:31 <ttx> The idea being that Ironic could make a first drop by Havana release and we'd mark baremetal code deprecated in Nova in Havana...
20:31:33 <markmc> russellb, yeah - we could change our minds if Ironic fails is my point, I think
20:31:38 <markmc> russellb, (unlikely, but ...)
20:31:41 <ttx> Then if everything goes well have the code removed and Ironic integrated during the I cycle
20:31:43 <russellb> fair enough
20:31:52 <ttx> Fire questions
20:31:52 <mordred> ttx: makes sense to me
20:31:55 <ttx> My main question would be... is this code OpenStack-specific ? Should it become an OpenStack integrated project rather than, say, a generic Python library ?
20:32:19 <markmc> is glance OpenStack specific? keystone?
20:32:23 <devananda> ttx: swift is an openstack project, but aiui can be deployed separately. how is this different?
20:32:26 <markmc> IMHO this is as OpenStack specific as anything else
20:32:32 <mordred> I believe that the reason I've been arguing that it's openstack and not just generic - is that I think there are potentially several services who might want to integrate with its apis
20:32:37 <russellb> i look at it as something in the openstack brand, that may or may not be used in combination with openstack services
20:32:47 <jgriffith> mordred: +1
20:32:50 <ttx> No, I mean... if this is generally useful to more than just openstack...
20:32:54 <markwash> plus it adds IMHO a key piece to OpenStack
20:32:58 <devananda> so far we def want interaction between ironic and nova, cinder, and quantum
20:33:02 <mordred> for instance - a pan-project scheduler might want to talk to the baremetal service for information about rack locality
20:33:16 <jgriffith> ttx: I think your point is fine as well
20:33:22 <markwash> devananda: not glance )-;
20:33:25 <jgriffith> ttx: in other words it can be useful stand-alone
20:33:32 <jgriffith> nothing wrong with that
20:33:37 <devananda> markwash: actually, yes, glance too!
20:33:39 <ttx> mordred: ok, makes sense
20:33:44 <mordred> glance is generally useful outside of openstack :) canonical run a public one for people to get their images from
20:34:00 <mordred> oh. I read markwash's comment wrong :)
20:34:07 * mordred shuts up
20:34:11 <vishy> mordred: it includes a rest api as well
20:34:31 <markmc> vishy, Ironic will have a REST API
20:34:43 <vishy> right which makes it more of a project than a library imo
20:34:52 <markmc> ah, ok
20:34:54 <ttx> vishy: agreed.
20:34:55 <markmc> yes
20:35:25 <ttx> Other questions ?
20:35:47 <devananda> so i have a question for folks -- in splitting ironic, should i aim to preserve as much code from nova as possible, or start fresh so the result is less bloated? and does that affect incubation in any way?
20:35:47 <markwash> russellb: +1 to OS brand #notaquestion
20:36:12 <mordred> I believe you should do things cleanly if it's possible and doesn't kill you
20:36:22 <ttx> devananda: since we do a deprecation cycle, you have some room for cleanup
20:36:23 <mordred> but I do not believe that's in scope for us here really
20:36:27 <gabrielhurley> mordred++
20:36:32 <markwash> I agree with mordred, but that would really be your call
20:36:53 <devananda> ack. good to know that doesn't affect incubation
20:37:01 <markmc> you can ask us for opinions as the 18 wise people of openstack
20:37:04 <russellb> just be aware of the time impact
20:37:05 <gabrielhurley> the less snowballing the better. this is a chance for cleaning house. but as everyone said, not a requirement.
20:37:07 <markmc> but you probably know best :)
20:37:23 <russellb> like, look at how long it has taken to get quantum up to where we can make it the default, vs cinder
20:37:45 <ttx> devananda: if you want to hit the incubation targets to get integrated in I you'll have to produce working code very fast... so the "doesn't kill you" remark from mordred applies
20:38:00 <devananda> ack
20:38:01 <russellb> yes, that :)
20:38:13 <ttx> worst case scenario, you do one more cycle as incubated, not so much of a big deal
20:38:25 <devananda> right.
20:38:28 <vishy> the cinder approach is definitely faster
20:38:33 <markmc> ttx, well, it would be another cycle of the baremetal driver being feature frozen
20:38:39 <vishy> but it delays adding new features for a long time
20:39:13 <devananda> cinder approach = ?
20:39:17 <vishy> nova -> cinder transition was pretty painless (much less painful than nova -> quantum)
20:39:30 <russellb> reusing code as much as possible, as opposed to starting over
20:39:41 <markwash> but quantum has lots of context that could influence that
20:39:49 <vishy> russellb: yes, also replicating the api directly
20:40:07 <jgriffith> devananda: I'm happy to share my thoughts offline if you're interested
20:40:09 <ttx> devananda: so you can refactor a bit, but would be better to reuse as much as you can so that you iterate faster
20:40:11 <vishy> and just adding a python-*client wrapper to talk to the same apis exposed via rest
20:40:22 <vishy> with no change at all to the backend
20:40:27 <jgriffith> but yes, copy out of nova and modify was a life saver for me
20:40:28 <devananda> jgriffith: thanks, will def take you up on that after this meeting
20:40:29 <markmc> the Nova API probably wouldn't make much sense as a starting point for Ironic?
20:40:29 <mordred> devananda and I worked on a hybrid split - which involved git filter-branch on the nova tree to pull out the existing baremetal code, but leaving the other bits out
20:40:44 <vishy> but that meant 6 months of no changes to the first six months of cinder essentially
20:40:50 <ttx> More questions ?
20:40:54 <markmcclain> yeah.. we made a bunch of changes which is why we moved at a different pace
20:41:20 <devananda> ttx: no more q from me
20:41:32 <russellb> excuse me, not quantum, the project formerly known as quantum
20:41:38 <ttx> Everyone ready to vote ?
20:41:44 <markmc> russellb, still known as quantum
20:41:45 <mikal> I am
20:41:47 * jgriffith moves we vote
20:41:51 <markmc> russellb, soon to be formerly known as quantum :)
20:41:55 <gabrielhurley> OpenStack Networking
20:42:00 <markmc> gabrielhurley, nope
20:42:03 <mordred> mutnauq!
20:42:08 <markmc> quebec!
20:42:08 <gabrielhurley> I still vote we rename it "quality"
20:42:13 <markmc> that works
20:42:14 <markwash> markmc: lol!
20:42:23 <gabrielhurley> starts with Q, same number of letters... Quality!
20:42:24 <ttx> #startvote Approve Ironic for Incubation? yes, no, abstain
20:42:24 <openstack> Begin voting on: Approve Ironic for Incubation? Valid vote options are yes, no, abstain.
20:42:25 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
20:42:27 <markmc> #vote yes
20:42:29 <russellb> #vote yes
20:42:29 <mordred> #vote yes
20:42:30 * markmcclain is still accepting name nominations
20:42:32 <mikal> #vote yes
20:42:32 <dolphm> #vote yes
20:42:32 <markmcclain> #vote yes
20:42:33 <gabrielhurley> #vote yes
20:42:35 <jgriffith> #vote yes
20:42:35 <shardy> #vote yest
20:42:35 <openstack> shardy: yest is not a valid option. Valid options are yes, no, abstain.
20:42:35 <markwash> #vote yes
20:42:40 <ttx> #vote yes
20:42:41 <gabrielhurley> lol
20:42:41 <shardy> #vote yes
20:42:42 <mordred> haahahaha
20:42:47 <shardy> oops
20:42:48 <vishy> #vote yes
20:42:51 <ttx> 30 more seconds
20:43:19 <ttx> #endvote
20:43:20 <openstack> Voted on "Approve Ironic for Incubation?" Results are
20:43:21 <openstack> yes (12): markmc, ttx, vishy, shardy, russellb, jgriffith, mikal, mordred, gabrielhurley, dolphm, markwash, markmcclain
20:43:29 <ttx> devananda: congrats!
20:43:30 <gabrielhurley> we're a very agreeable bunch today
20:43:33 <ttx> yay process!
20:43:34 <devananda> thanks! :)
20:43:45 <ttx> gabrielhurley: that's because we are missing the devil's advocate member
20:43:49 <gabrielhurley> lol
20:43:52 <jgriffith> haha
20:43:52 <russellb> devananda: make it happen! go go go!
20:43:52 <markmc> zero no votes or abstains so far?
20:44:08 * markmc is sure devananda feels suitably empowered now :)
20:44:13 <ttx> markmc: that's what I call managed lazy consensus
20:44:15 <ttx> #topic Discussion: API version discovery
20:44:19 <gabrielhurley> yay!
20:44:23 <ttx> #link http://lists.openstack.org/pipermail/openstack-tc/2013-May/000223.html
20:44:28 <ttx> This is preliminary discussion on API version discovery
20:44:31 <markmc> #vote yes
20:44:35 <ttx> Personally I'm not sure this needs formal TC blessing, unless things get ugly at individual project-level
20:44:35 <russellb> lulz.
20:44:37 <gabrielhurley> I cleaned things up into a nice reST document for y'all
20:44:38 <gabrielhurley> https://gist.github.com/gabrielhurley/5499434
20:44:38 <markmc> oh, no vote yet?
20:44:38 <jgriffith> haha
20:44:40 <annegentle> I was agreeable too but missed the vote :)
20:44:42 <ttx> But I guess we can still discuss it :)
20:44:48 <ttx> gabrielhurley: care to introduce the topic ?
20:44:51 <annegentle> sowwy
20:44:55 <gabrielhurley> yep yep
20:44:58 <gabrielhurley> so
20:45:01 <gabrielhurley> short version
20:45:25 <ttx> We were less agreeable last week, poor jgriffith
20:45:26 <gabrielhurley> We now have a Keystone v2 and v3 API, Glance, v1 and v2, and a Nova v2 and soon-to-be v3
20:45:31 <gabrielhurley> people want to use these
20:45:41 <gabrielhurley> people want to use these across various clouds
20:45:42 * jgriffith 's head still hurts
20:45:46 <gabrielhurley> and use multiple versions within the same cloud
20:46:24 <gabrielhurley> that means we need to start dealing with the issues of how to let consumers of these APIs (clients, Horizon, etc.) understand what versions are available, what capabilities (extensions, etc.) are deployed for each version, and more
20:46:53 <gabrielhurley> short version of the proposed fix (see gist, ML thread, and etherpad for long version)
20:46:57 <gabrielhurley> :
20:47:28 <mordred> gabrielhurley: I am in favor of things that sensibly let me consume multiple clouds
20:47:37 <annegentle> is an extension always a capability?
20:47:41 <ttx> gabrielhurley: Any reason to believe there would be resistance to this ?
20:47:49 <gabrielhurley> Move the Keystone service catalog towards solely providing root service endpoints and let the clients/consumers do the work of interpreting a (standardized) "discovery" response from GET /
20:47:57 <gabrielhurley> ttx: nope, everyone's been very positive so far
20:48:05 <mordred> gabrielhurley: as long as it doesn't mean a) tons of branching logic in my consumer code because b) we're tending towards Least Common Denominator in some way
20:48:09 <dolphm> annegentle: extensions can provide multiple capabilities, i believe
20:48:12 <gabrielhurley> and consensus has formed around most of the ideas in the latest revision of the proposal
20:48:13 <mordred> but I don't think that's what you're proposing
20:48:23 <gabrielhurley> but it is a huge cross-project effort to implement, hence TC involvement
20:48:30 <mordred> ++
20:48:38 <annegentle> gabrielhurley: it seems like a huge doc effort as well?
20:48:53 <gabrielhurley> annegentle: see https://gist.github.com/gabrielhurley/5499434#extensions-vs-capabilities for "extension" vs. "capability"
20:49:08 <gabrielhurley> annegentle: when I said "cross-project" I meant it ;-)
20:49:12 <ttx> gabrielhurley: we can bless it, but I'm not sure we can mandate it
20:49:29 <gabrielhurley> ttx: it's a fine line. if one project opts out the whole thing breaks
20:49:31 <vishy> gabrielhurley: so the idea is that we continue to provide /extensions for the existing apis and add /capabilities for new apis?
20:49:33 <annegentle> gabrielhurley: so most projects keep /extensions but add /capabilities? (I did read the doc and still had Qs)
20:49:38 <dolphm> ttx: can we mandate that projects return a proper 300 response with an expected format?
20:49:39 <gabrielhurley> vishy: correct
20:49:46 <vishy> gabrielhurley: it seems like we need a standard format for the capabilities resource as well
20:49:49 * annegentle thinks like a vish
20:49:54 <ttx> gabrielhurley: would be good to engage with all PTLs and check they are all OK with that
20:50:08 <gabrielhurley> vishy: correct, we do. I recommend versioning that response as well, in case we need to tweak it over time.
20:50:13 <markwash> I'm a little "meh" about capabilities being described exclusively as endpoint-level details
20:50:17 <gabrielhurley> ttx: I have gotten feedback from more than half of them
20:50:23 <gabrielhurley> but I can try and pin down the rest
20:50:23 <ttx> gabrielhurley: but yes, we can weigh in and say it's a very good idea
20:50:36 <markwash> seems like capabilities could be finer grained
20:50:36 <annegentle> if we don't version extensions now, how do we version capabilities?
20:50:37 <gabrielhurley> markwash: care to elaborate?
20:50:51 <gabrielhurley> annegentle: simply saying to version the response format
20:50:52 <markwash> gabrielhurley: I'll probably just muddy the waters
20:51:03 <markwash> gabrielhurley: and the granularity probably isn't a TC level issue
20:51:04 <ttx> gabrielhurley: basically, if one project doesn't like it, I'm not sure we have a lot of ways to enforce it, apart from threatening to remove them from the integrated release.
20:51:04 <gabrielhurley> interpretation of that data is a larger problem
20:51:18 <gabrielhurley> ttx: hopefully it won't come to that
20:51:25 <ttx> gabrielhurley: so consensus would be a much better way to get to it
20:51:25 <gabrielhurley> and I don't think it will
20:51:26 <dolphm> annegentle: capabilities are versioned along with the API version, i think? GET /<version>/capabilities
20:51:38 <gabrielhurley> dolphm: most likely yes
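An illustrative sketch of the per-version capability check dolphm and gabrielhurley agree on above (GET /&lt;version&gt;/capabilities). The response format had not been standardized at this point in the discussion, so the document shape and the `has_capability` helper are assumptions for illustration only.

```python
# Assumed shape for a GET /<version>/capabilities response; the real
# standardized format was still under discussion on the ML.

def has_capability(capabilities_doc, name):
    """True if the named capability appears in a capabilities response."""
    return any(cap.get("name") == name
               for cap in capabilities_doc.get("capabilities", []))

doc = {"capabilities": [{"name": "os-volumes", "updated": "2013-05-07"}]}
print(has_capability(doc, "os-volumes"))  # True
```

Because the document is fetched under a version prefix, capabilities are naturally versioned along with the API itself, which is dolphm's point above.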
20:51:58 <ttx> gabrielhurley: and I don't want our "blessing" to look like a mandate and cause some allergic reaction
20:52:07 <ttx> where there shouldn't be any
20:52:20 <notmyname> gabrielhurley: dolphm: I'd like something other than that (since that breaks existing swift clusters)
20:52:21 <mordred> ttx: ++
20:52:28 <vishy> gabrielhurley: it seems like we need multiple capabilities
20:52:41 <vishy> the global one saying which endpoints are hittable
20:52:53 <vishy> then some way of exposing schema for the endpoints
20:53:12 <vishy> for example if i have an extension that adds a parameter to a response
20:53:24 <vishy> sticking it in the global capabilities list seems odd
20:53:27 <gabrielhurley> notmyname: I switched it to /<version>/capabilities at your suggestion since you were already using "extensions" in a valid way... or did I misunderstand your comment?
20:53:38 <notmyname> vishy: sounds like the rfc2616 OPTIONS verb ;-)
20:53:59 <vishy> notmyname: yeah something like that
20:54:00 <notmyname> gabrielhurley: not important for the tc meeting. we can discuss later, if you want
20:54:00 <gabrielhurley> vishy: I'm not convinced that a /capabilities is actually useful... it'd have to describe all the capabilities for all the versions
20:54:19 <gabrielhurley> I was proposing GET / gets you endpoint discovery for supported versions
20:54:20 <vishy> sorry i didn't mean /capabilities
20:54:36 <gabrielhurley> and /<version>/capabilities describes what's possible for that version
20:54:42 <vishy> i mean that /<version>/capabilities could respond with all of the endpoints for that version
20:54:44 <annegentle> dolphm: gabrielhurley: but an extension's definition can change release to release (underlying release, not API release)? Is that why we'd use capabilities?
20:54:55 <vishy> but sticking extra params there seems a bit messy
20:54:59 <dolphm> gabrielhurley: does /capabilities need to be in scope here?
20:55:08 <ttx> 5 minutes left, and there are two more things I wanted to raise -- can we move this discussion to the ML and follow the result at the next meeting ?
20:55:14 <gabrielhurley> vishy: oh, I see, you're talking specifically about multi-endpoint
20:55:20 <ttx> I think Gabriel needs to track down the remaining PTLs
20:55:27 <gabrielhurley> vishy: I don't think that's a good thing to try and solve now since we can't agree on that in Keystone anyway
20:55:40 <vishy> gabrielhurley: no sorry endpoint is a bad word. i mean all of the paths that are reachable
20:55:41 <gabrielhurley> dolphm: only because the standardization is helpful and related
20:55:50 <dolphm> gabrielhurley: agree, but it seems like a second step
20:55:51 <gabrielhurley> vishy: gotcha. we can discuss more later
20:55:57 <gabrielhurley> ttx: will do
20:56:07 <ttx> I think you got the ball rolling here
20:56:15 <gabrielhurley> yep
20:56:21 <ttx> we'll definitely track this in future meetings
20:56:25 <ttx> #topic Discussion: I naming candidates
20:56:30 <ttx> #link https://wiki.openstack.org/wiki/ReleaseNaming#.22I.22_release_cycle_naming
20:56:37 <ttx> The only suggestion which strictly fits in the current guidelines is "Ili".
20:56:43 <annegentle> I like Ili
20:56:46 <ttx> So I propose that we slightly extend the rules to accept street names in Hong Kong, which should add a few options
20:56:47 <gabrielhurley> short and to the point
20:56:56 <ttx> or we can just accept Ili.
20:57:01 <gabrielhurley> what are the other options?
20:57:01 <dolphm> what about Ichang violates guidelines?
20:57:02 <hub_cap> i was hoping for innermongolia :/
20:57:05 <vishy> no one suggested Imperial :(
20:57:09 <gabrielhurley> hub_cap: lol
20:57:22 <dolphm> ( ili is my first choice, after icehouse ;)
20:57:24 <ttx> The rules are rather strict, and Ichang is a bit borderline
20:57:25 <russellb> Influenza?
20:57:32 <gabrielhurley> -1
20:57:44 <russellb> sorry.
20:57:47 <annegentle> oh like Grizzly followed the rules
20:57:47 <mikal> russellb: !
20:57:47 <ttx> Is that OK for everyone ? (extending to street names to have 3-4 candidates total)
20:57:54 <vishy> is it ili or illi ?
20:57:55 <gabrielhurley> Bear Flag Revolt!
20:57:56 <jgriffith> annegentle: haha
20:57:58 <dolphm> #vote yes
20:58:01 <ttx> I'll take that as a YES
20:58:05 <gabrielhurley> lol
20:58:05 <mikal> Yeah, works for me
20:58:06 <ttx> #topic Open discussion
20:58:10 <ttx> mordred: you had a communication to make ?
20:58:21 <mordred> ttx: yup. thanks
20:58:39 <mordred> fwiw... openstack-infra is going to change how we're running tests on stuff
20:58:41 <ttx> mordred: told ya I'd save one minute for you
20:58:51 <mordred> we believe we're still in compliance with the python support decision
20:59:18 <mordred> but based on canonical dropping support for non-LTS releases to 9 months, we're now planning on running 2.7 tests on LTS+cloud archive
20:59:27 <mordred> and not attempting to run test slaves on latest ubuntu
20:59:34 <mordred> basically, nobody should notice
20:59:39 <mordred> but we thought we'd mention
21:00:14 <mordred> also, for background, we have actually NEVER moved to the latest ubuntu as soon as it comes out in the CI system
21:00:28 <russellb> seems reasonable.
21:00:30 <reed> Sooner than later we should start talking about the Design session in Hong Kong: need to make sure that we have successful design summit there, which means make sure all relevant people are able to travel there and the ones that can't, can still join the conversations
21:00:40 <mordred> reed: +1000
21:00:50 <mordred> also - I names
21:00:52 <mordred> :)
21:01:00 <russellb> i'm very concerned about the number of people that won't be able to make it from the US (or elsewhere) because of budget
21:01:01 <ttx> annndd
21:01:06 <ttx> we are out of time
21:01:18 <ttx> #endmeeting