20:02:07 <ttx> #startmeeting tc
20:02:08 <openstack> Meeting started Tue Feb 19 20:02:07 2013 UTC.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:02:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:02:10 <jaypipes> o/
20:02:12 <openstack> The meeting name has been set to 'tc'
20:02:15 <ttx> Agenda for today is:
20:02:19 <ttx> #link http://wiki.openstack.org/Governance/TechnicalCommittee
20:02:20 <vishy> hi
20:02:32 <ttx> (on brand-new wiki)
20:02:37 <ttx> #topic Joint Board / TC F2F meeting on April 14
20:02:50 <ttx> The Board is proposing that we hold a common BoD/TC face-to-face meeting on the Sunday before the Summit
20:02:58 <ttx> Something like 2pm to 5pm then dinner at 6pm.
20:03:12 <ttx> This raises a few issues... some of us (including me) already have booked non-refundable conflicting plane tickets
20:03:13 <bcwaldon> ttx: hey hey
20:03:21 <ttx> Also we don't really know who will be on the TC, as we are renewing 10/12 members in the upcoming elections
20:03:35 <ttx> Comments ? Should we answer that we can try to be there, but best effort only ?
20:04:10 <ttx> The goal was to wrap up the conclusions of the Incubation/Core committee
20:04:15 <mordred> I think that best-effort is probably all we _can_ offer
20:04:22 <ttx> which should hopefully be completed by then
20:04:32 <markmc> if folks have tickets booked, that's bad news
20:04:36 <markmc> especially ttx
20:04:42 <markmc> could we do a breakfast together?
20:05:29 <ttx> We could also do an evening thing, starting at 6pm and ending in dinner
20:05:45 <russellb> probably the best if we actually want everyone there
20:06:08 <ttx> OK, I'll answer that depending on how needed people are, we'd prefer a late or early thing during the week
20:06:16 <ttx> if optional, then sunday is fine
20:06:22 <jgriffith> o/
20:06:25 * heckj nods
20:06:30 <ttx> does that sound good ?
20:06:31 <annegentle> Sunday's tough, maybe another week day
20:06:48 <ttx> they know that getting people in regular hours will be impossible
20:06:54 <mordred> the rest of the week is out because of the summit
20:07:03 <ttx> so that means doing early or late in the day
20:07:14 <markmc> and keeping it short
20:07:16 <mordred> and the board members yelled at bryce after the last time that we had a meeting scheduled over top of sessions
20:07:23 <mordred> keeping it short is definitely ++
20:07:41 <ttx> OK, I'll come up with an answer, though I'm pretty sure AlanClark will see this log
20:08:01 <ttx> #topic End-of-cycle graduation review (cont'd)
20:08:17 <ttx> Last week both projects presented why they think they are ready to be integrated in the Havana release cycle
20:08:26 <ttx> We also reviewed their release process alignment status, which was positive
20:08:34 <ttx> Brian suggested we continue the review serially rather than in parallel
20:08:43 <ttx> If there is no objection to that...
20:08:56 <ttx> ...then I suggest that Heat goes first... since nijaba from Ceilometer is not around for this meeting
20:09:10 <markmc> ttx, do we absolutely need to reach a decision today?
20:09:13 <eglynn> serially within a single meeting, or?
20:09:22 <eglynn> or across multiple meetings?
20:09:23 <ttx> serially within one or two meetings
20:09:26 <eglynn> k
20:09:33 <ttx> depending how fast we go
20:09:50 <ttx> objections ?
20:09:51 <eglynn> I can speak for ceilo in any case if we get to it today
20:10:07 <ttx> eglynn: we should touch it, but maybe not finish it, today.
20:10:26 <eglynn> k
20:10:31 <ttx> #topic Technical stability and architecture maturity assessment, Heat
20:10:50 <ttx> In this section I'd like to make sure that (1) the project is currently usable and (2) the architecture is now stable (no full rewrite needed during Havana cycle)
20:11:10 <sdake_z> sure
20:11:14 <ttx> For Heat the only questions seem to be around template / API, and the need to support more than just AWS cloudformation.
20:11:33 <ttx> sdake: could you give us your view on that ?
20:11:47 <sdake_z> heat is basically a parser
20:11:57 <sdake_z> which is contained in one file
20:12:07 <sdake_z> if someone wants another template format, simply write another parser.py
20:12:13 <sdake_z> so no rewrite required
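For illustration, a minimal sketch of the one-parser-module-per-format layout sdake_z describes, with a small registry so a new template format is a new file rather than a rewrite (the module, function, and registry names here are hypothetical, not Heat's actual code):

```python
# Hypothetical sketch of the pluggable-parser idea: each template
# format gets its own parser exposing a common entry point, so adding
# a format means adding a file, not rewriting the engine.
import json


class TemplateFormatError(Exception):
    """Raised when a template cannot be parsed."""


def parse_cfn(template_body):
    """Parse an AWS CloudFormation-style (JSON) template."""
    try:
        tpl = json.loads(template_body)
    except ValueError as exc:
        raise TemplateFormatError("invalid JSON template: %s" % exc)
    if "Resources" not in tpl:
        raise TemplateFormatError("template has no Resources section")
    return tpl


# Registry mapping a format name to its parser; a new format is just
# one more entry pointing at one more parser module.
PARSERS = {"cfn": parse_cfn}


def parse(template_body, fmt="cfn"):
    return PARSERS[fmt](template_body)
```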
20:12:39 <sdake_z> api as far as I am concerned is in good shape
20:12:43 <ttx> so the architecture is pretty simple... and stable ?
20:12:53 <sdake_z> our architecture has not changed in 9 months
20:13:11 <sdake_z> code base was stable when we went into incubation, but even more bugfree now ;)
20:13:23 <sdake_z> one area where our code base will change..
20:13:28 <sdake_z> we have this directory called resources
20:13:38 <sdake_z> it contains things that launch vms, eips, etc.
20:13:47 <sdake_z> it contains something called nested stacks - ie: RDS
20:13:52 <sdake_z> a relational database service
20:14:05 <sdake_z> we would prefer those not be nested stacks and instead use openstack apis where available
20:14:21 <sdake_z> atm there are no openstack apis for rds but if there are, we will merge to use those apis
20:14:34 <sdake_z> there are other resource types as well
20:14:41 <russellb> so to be clear, in the absence of a db service, you have code that knows how to set up an instance with a database on it, that kind of thing, right?
20:14:43 <sdake_z> load balancer
20:14:49 <sdake_z> right
20:14:53 <russellb> k
20:15:09 <sdake_z> but those are not major architectural changes, only changes in how we interface with openstack
20:15:11 <ttx> makes sense
20:15:15 <russellb> but you want to kill those off when an API is available ... makes sense
20:15:27 <ttx> other questions on technical stability and architecture maturity, before we talk about the scope ?
20:15:34 <sdake_z> would like to do so yes - and add more resources as projects like moniker hit the wire
20:15:42 <markmc> what's the story with the cloudwatch impl?
20:15:45 <heckj> sdake_z: does the architecture support switching those out without major rewrites as new things come available?
20:15:48 <markmc> is there a deprecation plan?
20:15:48 <ttx> s/as/if/
20:15:57 <markmc> or is it already optional and you can use another implementation?
20:15:59 <sdake_z> yes, each resource is a .py file
20:16:08 <sdake_z> with a standard programming api
20:16:12 <shardy> markmc: we plan to move to using ceilometer when the features we need are there
20:16:42 <markmc> shardy, will there be much work to make that move?
20:16:51 <heckj> sdake_z: thanks
20:16:52 <markmc> shardy, they should be compatible, so no major user impact right?
20:16:53 <sdake_z> re cloudwatch, want to remove it from the code base as soon as ceilo is in place, have had discussions with ceilo team about alerting and that seems positive
20:17:06 <markmc> cool
20:17:08 <shardy> markmc: There will be a bit of rework in the engine to decouple things, but nothing major, no
20:17:35 <markmc> how about the metadata server, heat still has its own read/write server?
20:17:40 <shardy> markmc: I've been putting off that rework waiting for the ceilo stuff to be ready
20:18:00 <shardy> markmc: No, all resource metadata is now served via the CFN or ReST API
20:18:08 <shardy> we removed the metadata server
20:18:11 <sdake_z> the metadata server would disappear once ceilo is in
20:18:18 <gabrielhurley> how do you determine which resource modules to use? do you use keystone's service catalog, or config flags, or...?
20:18:35 <sdake_z> gabrielhurley not sure i understand question
20:18:50 <gabrielhurley> you were talking about trading out resource .py modules
20:18:50 <sdake_z> shardy I think markmc was talking about cloudwatch server process
20:19:12 <gabrielhurley> if there are competing implementations, how do you determine which ones to use?
20:19:15 <markmc> no, I was asking about the metadata server that at one point you wanted to use nova's metadata server
20:19:20 <markmc> but the issue was that it was readonly
20:19:27 <shardy> sdake_z: well he mentioned CW then metadata - the CW stuff is in progress, but the separate metadata service has been removed now
20:19:39 <sdake_z> gabrielhurley there is a name space - for example OS::HEAT::Reddwarf
20:19:41 <markmc> the server that e.g. cfn-trigger or whatever talks to?
20:19:44 * markmc waves hands
20:20:01 <jd__> hi
20:20:15 <sdake_z> each resource.py contains the namespace resource it is responsible for
20:20:24 <shardy> markmc: It all talks to either the CFN or Cloudwatch API now, there is no heat-metadata anymore
20:20:35 <shardy> which means everything is authenticated/signed
20:20:36 <markmc> shardy, ok, thanks
20:20:52 <gabrielhurley> sdake_z so the resource files are namespaced for each implementor, but how do you determine which one to use? I'm trying to understand if this is an "operator must configure" or "dynamically determine what's available" situation.
20:21:15 <sdake_z> gabrielhurley you put in the template file which resource you want to use and which parameters you want to pass it
20:21:34 <shardy> gabrielhurley: there is a mapping of class name to template name in each resource implementation
20:21:35 <gabrielhurley> what happens if I (as an end user) put in my template a resource which is not available
20:21:40 <markmc> gabrielhurley, I don't think there are any competing implementations of a single resource type atm
20:21:45 <gabrielhurley> or worse, a resource which is available from a different implementor
20:21:57 <sdake_z> gabrielhurley you get a parse error if there is no resource available
20:22:00 <shardy> gabrielhurley: I'd expect template validation to fail
20:22:30 <sdake_z> in the example of databases...
20:22:34 <gabrielhurley> okay. that's what I was trying to understand. I think that architecture is gonna need more work as the ecosystem expands, but that's fine for now.
20:22:45 <sdake_z> ok sounds good ;)
20:22:55 <ttx> any more questions before we discuss scope ?
20:23:27 <shardy> gabrielhurley: the resource implementations are pluggable now, so the architecture is extensible
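For illustration, a rough sketch of the pluggable resource scheme discussed above: each resource is a class with a common programming API in its own .py file, and a per-module mapping ties template type names (e.g. the OS::Heat::... namespace) to implementing classes, so validation can fail cleanly on unknown types. All names below are hypothetical, not Heat's actual classes:

```python
# Hypothetical sketch of a Heat-style resource plugin.


class Resource(object):
    """Common programming API every resource plugin implements."""

    def __init__(self, name, properties):
        self.name = name
        self.properties = properties

    def create(self):
        raise NotImplementedError

    def delete(self):
        raise NotImplementedError


class ElasticIp(Resource):
    def create(self):
        print("allocating floating IP for %s" % self.name)

    def delete(self):
        print("releasing floating IP for %s" % self.name)


# Mapping of template type name to implementation; template
# validation fails with a parse error if a resource's declared type
# is absent from the merged mapping of all plugin modules.
def resource_mapping():
    return {"AWS::EC2::EIP": ElasticIp}
```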
20:23:53 <ttx> #topic Scope complementarity, Heat
20:24:02 <ttx> In this section I'd like to discuss the desirability of integrating Ceilometer in the common OpenStack Havana release
20:24:15 <ttx> We don't really have guidelines yet that define what is off-limits for "OpenStack" scope
20:24:17 <markmc> you mean Heat :)
20:24:19 <eglynn> s/Ceilometer/Heat/
20:24:19 <mordred> s/Ceilometer/Heat/
20:24:19 <gabrielhurley> lol
20:24:22 <russellb> s/Ceilometer/Heat/
20:24:22 <ttx> oops
20:24:24 <markmc> heh
20:24:26 <ttx> So at this point we can only apply technical guidelines
20:24:33 <ttx> Is the project complementary in scope, or overlapping with others ?
20:24:39 <ttx> Are there other projects in our community covering the same scope ?
20:24:45 <ttx> Does it integrate well with current integrated projects ? Does it form a coherent product ?
20:24:55 <ttx> (that's what you get by reshuffling me)
20:24:56 <sdake_z> we are the only project in this space inside incubation/core
20:25:12 <sdake_z> integrates extremely well with other projects including full keystone auth
20:25:28 <notmyname> ttx: "does it form a coherent product" gets into openstack in-general guidelines, not technical things
20:25:46 <sdake_z> as far as coherent product, again, i'd like to see the rds and autoscaling and other features come out of heat into other projects so we could use those directly
20:25:56 <ttx> personally I place it in the same category as Horizon, an integration point
20:25:59 <ttx> notmyname: true
20:26:05 <markmc> as a service which pulls together our APIs, I love it
20:26:13 <markmc> it's a pretty natural expansion of scope, I think
20:26:23 <mordred> ++
20:26:34 <ttx> notmyname: for some pretty large definition of "coherent"
20:26:34 <markmc> obviously, one of our largest competitors has something similar
20:26:44 <mordred> hehe
20:26:45 <gabrielhurley> it provides a functionality which a lot of openstack consumers are clamoring for... but it's definitely an expansion more than a complement.
20:26:46 <markmc> and there's a lot of interest in application level orchestration
20:27:39 <mordred> fwiw, we also had conversations with rob at dell about ways that heat can be complementary to crowbar
20:27:51 <annegentle> how much do changes to the OpenStack APIs affect Heat's templates? Are templates versioned somehow?
20:28:19 * annegentle wonders about integrated releases
20:28:26 <sdake_z> annegentle we generate a version against a specific version of openstack - ie heat for havana integrates against havana apis
20:28:31 <vishy> my crowbar, you are looking fine today...
20:28:48 <mordred> vishy: :)
20:28:50 <markmc> heh
20:29:21 <shardy> annegentle: but changes in the service api's heat uses will not change the template syntax (unless you're using a new feature specific to a release, e.g the new Quantum based resources)
20:29:33 <ttx> more questions on scope ?
20:29:43 <gabrielhurley> sdake_z: one of openstack's goals is to be version N-1 compatible. how does Heat feel on that front?
20:29:57 <ttx> it feels the heat
20:29:58 <sdake_z> we follow openstack processes - so that seems reasonable
20:30:02 <annegentle> shardy: sdake_z ok, thanks
20:30:05 <gabrielhurley> cool
20:30:17 <sdake_z> although atm that is not implemented in the architecture
20:31:06 <sdake_z> gabrielhurley i would expect python-* libs to be backwards compatible for the most part though so should be straightforward
20:31:21 <gabrielhurley> sdake_z: you'd think that, wouldn't you. ;-)
20:31:33 <heckj> heh
20:31:35 <mordred> gabrielhurley: :)
20:31:36 <sdake_z> naive i guess :)
20:31:50 <gabrielhurley> you'll learn :-D
20:31:54 <ttx> see why I put horizon and heat in the same bag, they are already forming a group
20:31:59 <shardy> gabrielhurley: sorry, do you mean python version?
20:32:08 <gabrielhurley> shardy: no, openstack release version
20:32:19 <shardy> gabrielhurley: k, thanks
20:32:31 <ttx> #topic Final Q&A and vote, Heat
20:32:43 <ttx> Final questions/discussion before we vote on Heat graduation ?
20:33:03 <ttx> doubts, objections...
20:33:07 <annegentle> o/
20:33:18 <annegentle> one more Q, are you documenting your own API somewhere?
20:33:41 <sdake_z> that needs to be done - although there is some basic docs already in the source tree
20:33:47 <sdake_z> but they need love
20:33:56 <sdake_z> we should speak offline about your expectations re documentation
20:34:01 <sdake_z> so we can deliver what you want
20:34:31 <annegentle> sdake_z: yes and I want to be sure we meet user expectations for docs
20:34:35 <ttx> other final questions ?
20:34:39 <danwent> i think something like heat is very valuable.  To me the only question is whether it makes sense to put one such template/orchestration approach as the "official" one.
20:34:47 * mordred registers his support for both the concept and the codebase
20:34:58 <annegentle> sdake_z: is the size of the API like "16 calls" -- basically CRUD on templates? Want a ballpark
20:35:10 <danwent> if no one has concerns, I think the team has done a great job building heat and integrating, so i'm generally supportive
20:35:11 <sdake_z> I believe 9 - just a guess tho
20:35:29 <mordred> especially since we've got projects moving towards figuring out how to integrate other orchestration system with heat, rather than just competitive
20:35:31 <annegentle> sdake_z: ok thanks, yeah that sounds right
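As a hedged illustration of that ballpark: a CloudFormation-compatible stack API of the kind Heat exposes centres on a small set of stack CRUD actions, roughly the following (the list is illustrative, not an authoritative enumeration of Heat's API):

```python
# Illustrative (not authoritative) list of CloudFormation-style stack
# actions a Heat-like API would expose -- essentially CRUD on stacks
# built from templates, in line with sdake_z's "about 9" ballpark.
STACK_ACTIONS = [
    "CreateStack", "UpdateStack", "DeleteStack",
    "DescribeStacks", "ListStacks", "DescribeStackEvents",
    "GetTemplate", "ValidateTemplate", "EstimateTemplateCost",
]
```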
20:35:48 <ttx> ok, ready to vote ?
20:36:03 <mordred> so from my end, it doesn't seem like adding heat will block other things from playing in the ecosystem - but rather will enable things
20:36:15 <mordred> (that's related to danwent's concern above)
20:36:40 <danwent> mordred: i agree, as long as others see it that way as well.
20:36:52 <ttx> #startvote Approve graduation of Heat (to be integrated in common Havana release)? yes, no, abstain
20:36:53 <openstack> Begin voting on: Approve graduation of Heat (to be integrated in common Havana release)? Valid vote options are yes, no, abstain.
20:36:54 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
20:37:02 <markmc> #vote yes
20:37:04 <mordred> #vote yes
20:37:06 <russellb> #vote yes
20:37:23 <gabrielhurley> #vote yes
20:37:24 <ttx> #vote yes
20:37:24 <jgriffith> #vote yes
20:37:25 <danwent> #vote yes
20:37:35 <heckj> #vote abstain
20:37:53 <notmyname> #vote no
20:38:17 <ttx> 30 more seconds
20:38:52 <annegentle> #vote yes
20:38:57 <gabrielhurley> ttx: in general, it might be good to ping people at the start of a vote
20:39:07 <ttx> hmm
20:39:13 <gabrielhurley> for those who aren't looking at IRC, it's not always apparent that a timed vote is happening
20:39:13 <ttx> who are we missing
20:39:14 <danwent> gabrielhurley: do you mean you're not hanging on every word of the discussion? :P
20:39:18 <russellb> vishy: ping
20:39:19 * annegentle sets up a notification for vote :)
20:39:19 <ttx> jaypipes, vishy ^
20:39:25 <gabrielhurley> bcwaldon: ping
20:39:30 <vishy> #vote yes
20:39:40 <bcwaldon> I'm looking!
20:39:43 <gabrielhurley> lol
20:39:48 <heckj> heh
20:39:49 <gabrielhurley> tick tock
20:39:54 <bcwaldon> #vote yes
20:40:09 <ttx> ok, 10 more seconds
20:40:21 <ttx> #endvote
20:40:22 <openstack> Voted on "Approve graduation of Heat (to be integrated in common Havana release)?" Results are
20:40:23 <jaypipes> #vote yes
20:40:24 <openstack> yes (10): markmc, bcwaldon, ttx, vishy, annegentle, russellb, jgriffith, mordred, gabrielhurley, danwent
20:40:25 <openstack> abstain (1): heckj
20:40:26 <openstack> no (1): notmyname
20:40:35 <ttx> jaypipes: :P
20:40:36 <gabrielhurley> jaypipes: just missed it ;-)
20:40:45 <jaypipes> yeh, sorry, on yet another call :(
20:40:52 <ttx> sdake: congrats
20:40:56 <ttx> #topic Technical stability and architecture maturity assessment, Ceilometer
20:40:57 <markmc> congrats sdake_z, shardy and co.
20:41:00 <sdake_z> tx - blame the developers ;)
20:41:24 <eglynn> #link https://wiki.openstack.org/wiki/Ceilometer/Graduation#Is_our_architecture_stable.3F
20:41:26 <ttx> so... making sure that (1) the project is currently usable and (2) the architecture is now stable (no full rewrite needed during Havana cycle)
20:41:42 <ttx> During the Grizzly cycle, Ceilometer appeared to adapt its architecture to external pressure from new consumers
20:41:58 <dhellmann> ttx, we made incremental changes but nothing major
20:42:02 * mordred has to drop off for a plane flight ... thinks ceilometer is great
20:42:03 <ttx> But the linked doc explains the architecture is now pretty flexible and shouldn't change, iirc
20:42:18 <eglynn> we could see that pressure as a positive (connoting a healthy, sustainable community attracting wide interest)
20:42:41 <ttx> certainly
20:42:44 <jd__> thanks mordred :)
20:42:46 * gabrielhurley appreciates that ceilometer has pushed to move useful functionality into oslo
20:42:50 <notmyname> I'm concerned that CM is generating monitoring data but claiming usefulness for billing calculations
20:43:15 <heckj> gabrielhurley: +1
20:43:15 * annegentle will be back shortly
20:43:21 <eglynn> notmyname: we want the data acquisition for metering and monitoring to use shared infrastructure
20:43:38 <notmyname> eglynn: but they have very different requirements
20:44:02 <notmyname> eglynn: specifically, you must be able to reliably recreate your numbers when used in a billing context
20:44:16 <eglynn> yes and the architecture is intended to be flexible enough to address these differing requirements
20:44:24 <gabrielhurley> what I've heard from the ceilometer team is that they're doing metrics, and if some people try to use that for billing that's their choice (or folly, if you prefer)
20:44:36 <notmyname> and the logs are the persistent source of those numbers, but you are simply using messages sent in the course of the request
20:44:51 <dhellmann> notmyname: and polling, and auditing notifications
20:45:07 * notmyname has mostly looked at the swift parts, since that's what I know
20:45:18 <eglynn> notmyname: we have a multi-publish framework that allows measurements for different backends to travel via different conduits
20:45:34 <eglynn> notmyname: so for example for metrics, trade off currency versus completeness
20:45:51 <eglynn> notmyname: for metering/billing ... do the opposite trade-off
20:46:36 <eglynn> notmyname: the idea is not to force the metrics and metering data to be shoehorned into the same bucket
20:46:55 <eglynn> (with the exact same QoS etc.)
20:47:10 <notmyname> eglynn: is CM an aggregation point for whatever metering/etc you are using (like keystone for auth, cinder for blocks)?
20:47:55 <eglynn> notmyname: yes, CM can acquire and aggregate measurements from many different sources including the one you mention
20:48:48 <eglynn> notmyname: and also publish to different backends
20:49:06 <eglynn> notmyname: e.g. one backend would be the CM metering store
20:49:19 <eglynn> notmyname: another would be the future CW service
20:49:30 <eglynn> (i.e. the integrated Synaps engine)
20:49:39 <notmyname> what backends do you provide as part of the code today?
20:50:06 <eglynn> notmyname: just one, the CM collector/metering store/CM API service
20:50:32 <eglynn> notmyname: but the architecture was specifically evolved during G to accommodate others
20:50:59 <notmyname> and so, eg, if you wanted something for correctness (like billing), you'd provide your own?
20:51:37 <eglynn> notmyname: well we envisage the metering store mentioned above would be suitable for that purpose
20:52:22 <eglynn> notmyname: (i.e. we don't throw metering data on the floor, though another backend might sample or discard delayed data etc.)
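For illustration, a minimal sketch of the multi-publish trade-off eglynn describes: the same samples fan out to several publishers, one that never drops data (metering/billing) and one that favours freshness over completeness (monitoring). Class and method names are hypothetical, not Ceilometer's actual plugin API:

```python
# Hypothetical sketch of multi-publish fan-out.


class ReliablePublisher(object):
    """Metering/billing conduit: never drops samples, may lag."""

    def __init__(self, store):
        self.store = store

    def publish(self, samples):
        # Persist every sample durably; retry rather than discard.
        self.store.extend(samples)


class BestEffortPublisher(object):
    """Monitoring conduit: favours freshness, may drop on overload."""

    def __init__(self, max_queue=1000):
        self.queue = []
        self.max_queue = max_queue

    def publish(self, samples):
        # Under pressure, discard the oldest samples instead of
        # blocking the pipeline.
        self.queue = (self.queue + list(samples))[-self.max_queue:]


def fan_out(samples, publishers):
    # Each publisher applies its own QoS trade-off to the same data.
    for publisher in publishers:
        publisher.publish(samples)
```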
20:52:43 <notmyname> what scale has it been tested at?
20:52:51 <jd__> it's actually already used by some to do that, like DreamHost
20:52:58 <eglynn> notmyname: it's currently deployed at DreamHost
20:53:08 <eglynn> dhellmann can speak to the scale there
20:53:19 <dhellmann> our current cluster is fairly small
20:54:00 <notmyname> small == 100s req/sec? 10s req/sec?
20:54:17 <notmyname> I understand if you can't really share that ;-)
20:54:28 <dhellmann> I would share, but I don't have those numbers
20:55:08 <eglynn> we can agree though that it's a non-trivial production deployment, or?
20:55:46 <eglynn> s/agree/surmise/
20:55:48 <notmyname> if others have questions, please ask. /me is ready to vote
20:56:01 <ttx> we need to talk about scope a bit first
20:56:09 <dhellmann> eglynn: yes, it's a non-trivial deployment, just not seeing a lot of traffic at this point
20:56:12 <notmyname> ya, I know my thoughts there ;-)
20:56:18 <ttx> any more questions on technical stability and architecture maturity ?
20:56:23 <agentle_> are the meters in https://wiki.openstack.org/wiki/EfficientMetering in production?
20:57:02 <dhellmann> agentle_: yes. a better list is at http://docs.openstack.org/developer/ceilometer/measurements.html
20:57:23 <notmyname> why would you emit volume units not in bytes?
20:57:24 <dhellmann> IIRC, we've updated the formal docs, and not gone back and updated that design document
20:57:32 * agentle_ sighs :)
20:57:33 <dhellmann> notmyname: where?
20:58:06 <notmyname> dhellmann: on agentle_'s link
20:58:10 <agentle_> mostly I sigh because Google finds the wiki first
20:58:15 <dhellmann> notmyname: ok, that's out of date
20:58:16 <eglynn> agentle_: note that adding a meter is a relatively straightforward task, the architecture is highly extensible ... so we'd expect that list to grow
20:58:31 <agentle_> eglynn: yup, understood
20:58:31 <heckj> http://docs.openstack.org/developer/ceilometer/index.html
20:58:38 <jgriffith> notmyname: Cinder does everything in GB so makes sense to me
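For illustration, a hedged sketch of why per-sample units settle this kind of question: each measurement carries an explicit unit alongside its volume, so "GB versus bytes" is answered per sample rather than by convention. The field names below are loosely modelled on Ceilometer's counter structure, not copied from it:

```python
# Hypothetical measurement record with an explicit unit field.
from collections import namedtuple

Sample = namedtuple("Sample", "name type unit volume resource_id timestamp")

disk_sample = Sample(
    name="volume.size",
    type="gauge",          # gauge / delta / cumulative
    unit="GB",             # explicit unit, matching what Cinder reports
    volume=20,
    resource_id="vol-0001",
    timestamp="2013-02-19T20:58:00Z",
)
```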
20:58:46 <ttx> looks like we are running out of time -- I propose we do scope complementarity and vote next week, unless nobody has any question on scope
20:58:49 <russellb> should add the up to date link to the top of the wiki page
20:58:51 <russellb> that should avoid confusion
20:59:01 <dhellmann> russellb: good idea, I'm doing that now
20:59:34 <agentle_> what is kwapi?
20:59:37 <eglynn> on units, we rationalized our units usage during G so that it's now much more consistent & logical
20:59:52 <ttx> eglynn: yay incubation.
20:59:53 <jgriffith> +1 for GiB BTW :)
21:00:10 <jd__> agentle_: an energy monitoring tool, see https://github.com/stackforge/kwapi
21:00:11 <ttx> ok, we'll continue (and finish) the ceilometer review next week, thanks everyone
21:00:19 <ttx> big meeting ahead
21:00:24 <ttx> #endmeeting