21:04:42 <russellb> #startmeeting nova
21:04:43 <openstack> Meeting started Thu Nov 15 21:04:42 2012 UTC.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:04:44 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:04:45 <openstack> The meeting name has been set to 'nova'
21:04:58 <russellb> #link http://wiki.openstack.org/Meetings/Nova
21:05:09 <russellb> There's the agenda ... who's here?
21:05:09 <alexpilotti> hi guys
21:05:11 <dansmith> <-- here in body, at least
21:05:26 * jog0 says hi
21:05:29 <mikal> Ahoy!
21:05:36 <russellb> comstud: here?  cells on deck first
21:05:42 <sdague> <-- here
21:06:05 * devananda waves
21:06:05 <russellb> k, we'll skip cells and wait for comstud
21:06:16 <russellb> #topic state of the baremetal driver
21:06:29 <russellb> devananda: hey, want to give an update on this one?
21:06:30 <comstud> sorry
21:06:33 <comstud> lunchies
21:06:39 <devananda> russellb: sure
21:06:41 <russellb> comstud: no problem, we'll do cells next
21:06:49 <devananda> two more patches landed last night, total of 3 now
21:07:01 <devananda> there are two more large-ish ones, and a few tiny ones, left
21:07:09 <devananda> big one is replacing the baremetal driver code
21:07:18 <devananda> after that is adding the pxe functionality to it
21:07:38 <russellb> #link https://review.openstack.org/#/q/status:open+topic:bp/general-bare-metal-provisioning-framework,n,z
21:07:42 <devananda> meanwhile i've also been working on devstack integration
21:07:52 <devananda> and formulating a plan with the CI team for how to get it all going
21:08:08 <devananda> to be shared shortly with the dev list (waiting on approval from the NTT guys)
21:08:10 <russellb> has there been much feedback on the next big one?
21:08:31 <devananda> little to none
21:08:31 <devananda> #link https://review.openstack.org/#/c/11354/
21:09:00 <vishy> the big one is a little tough because there aren't any real users of the old baremetal code
21:09:02 <russellb> #help need reviews on the next big baremetal driver patch
21:09:11 <vishy> so it is hard to say if it is breaking anything
21:09:18 <russellb> vishy: so shouldn't have qualms about removing it if nobody uses it ...
21:09:24 <russellb> could do a post on openstack@ just in case
21:09:31 <vishy> but perhaps breakage doesn't matter for that reason
21:09:38 <russellb> at least we'd have something to point to and say "well, we tried to see if it mattered ..."
21:09:44 <russellb> we can also clearly define it in release notes
21:09:50 <devananda> vishy: i can at least say that the _new_ code works :)
21:09:52 <vishy> yes i propose a quick message to openstack-operators
21:09:56 <vishy> and/or openstack
21:10:06 <russellb> great, who's going to send the message?
21:10:11 <vishy> and reviews can stick to style / code-smell
21:10:14 <rmk> I don't even think the old baremetal code works
21:10:20 <vishy> devananda?
21:10:27 <vishy> my guess is that it doesn't
21:10:37 <rmk> it can't work, we pass args from compute into it that it can't accept
21:10:42 <devananda> vishy: i haven't heard anyone say anything about the old code working
21:10:43 <russellb> devananda: you good for putting out a post just making sure nobody freaks out that we rip out the old stuff?
21:10:59 <russellb> well if it's that bad ... i guess no message needed
21:11:00 <devananda> russellb: sure. destinations are just openstack@ and openstack-operators@ ?
21:11:05 <russellb> yeah
21:11:11 <russellb> a message still wouldn't hurt
21:11:11 <rmk> I know this because we didn't update it a number of times when we changed driver calls for everything else
21:11:21 <russellb> want to show the ops that we care about them :-)
21:11:26 <rmk> message is always good but I'd say it's very safe to rip it out
21:11:38 <dansmith> rmk: I updated it for no-db-virt :)
21:11:52 <russellb> #action devananda to post to openstack/openstack-operators about removing old baremetal code, just to make sure everyone is good with it
21:12:09 <russellb> alright, anything else on baremetal?  lots of movement since last week.
21:12:27 <devananda> that was the high level stuff
21:12:32 <rmk> dansmith: cool -- do you know if all the entry points from compute match the required number of args?  Unless someone changed it recently I don't think it does
21:12:43 <russellb> yeah, anything deeper we should hit on the -dev list
21:12:46 <dansmith> rmk: no idea, but the tests seem to run
21:12:51 <rmk> interesting
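(Editor's aside: a toy sketch of the mismatch rmk describes, assuming a driver method whose signature was never updated while the compute-side call site grew arguments. The class, method, and argument names here are illustrative, not the actual code.)

    class OldBareMetalDriver(object):
        # old signature, never updated alongside the compute manager
        def spawn(self, context, instance, image_meta):
            pass

    driver = OldBareMetalDriver()
    try:
        # compute-side call with the newer, longer argument list
        driver.spawn(None, None, None, [], 'secret', None, None)
    except TypeError as e:
        print(e)  # spawn() takes exactly 4 arguments (8 given)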
21:13:01 <russellb> devananda: just want to hit status here to make sure it keeps moving
21:13:06 <devananda> :)
21:13:07 <russellb> alright, let's talk cells!
21:13:12 <comstud> ok
21:13:12 <russellb> #topic cells status
21:13:14 <russellb> comstud: you're up
21:14:01 <comstud> i spent a number of days breaking out the scheduling filter/weighting into code that is now somewhat shared with the host scheduler.  all of the base stuff landed, including the host scheduler changes and new weighting plugin stuff
21:14:11 <comstud> cells has been rebased against that stuff that landed (master, really)...
21:14:20 <comstud> i broke out the filtering/weighting scheduler stuff in cells into its own review
21:14:23 <russellb> #link https://review.openstack.org/#/q/status:open+topic:bp/nova-compute-cells,n,z
21:14:32 <comstud> all of this lowers the main review down to < 4000 lines now (still large)
21:14:45 <comstud> anyway... was in rebase hell for a while with all of the FLAGS -> CONF changes...
21:14:56 <comstud> but I addressed a number of comments in the initial feedback
21:15:00 <comstud> and those reviews are up
21:15:12 <comstud> next is... trying to clarify some of the basic routing stuff.
21:15:24 <comstud> and seeing if I can make the code make more sense there
21:15:29 <comstud> w/ better doc strings
21:15:30 <russellb> yeah, that would help me :)
21:15:50 <russellb> code seems sane, but docstrings, and some overview on how the routing works would be useful for folks diving into it
21:15:56 <comstud> yep
21:16:08 <russellb> #action russellb to do another pass on reviewing the updated cells patches
21:16:14 <comstud> I think I've thought of a way to clean it up
21:16:26 <russellb> we're going to need at least one other person to do a deep dive into nova-cells to get these reviews done
21:16:27 <comstud> then it becomes more obvious even without docstrings, but we'll see
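(Editor's aside: for context on the "new weighting plugin stuff" that landed, a minimal sketch of a weigher plugin, assuming a BaseHostWeigher-style interface; the class name and the free_ram_mb attribute are assumptions for illustration.)

    from nova.scheduler import weights

    class RAMWeigher(weights.BaseHostWeigher):
        """Toy weigher: hosts with more free RAM score higher."""
        def _weigh_object(self, host_state, weight_properties):
            # a larger return value makes the host more desirable
            return host_state.free_ram_mb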
21:16:32 <zykes-> how's cells with quantum integration btw?
21:16:44 <comstud> cells works with a global quantum
21:16:59 <comstud> and individual nova-network's in each cell
21:17:09 <russellb> anyone else want to volunteer to start doing a deep dive in this code to help the review along?
21:17:11 <comstud> except there's some stickiness if you use different networks in different cells right now
21:17:11 <zykes-> comstud: quantum v2 ?
21:17:14 <comstud> that needs to be addressed
21:17:21 <comstud> zykes-: ya
21:17:22 <comstud> we use it
21:17:23 <comstud> :)
21:17:25 <zykes-> ok
21:17:33 <vishy> comstud: ? nova-network shouldn't be necessary with v2
21:17:42 <vishy> it proxied to quantum in v1
21:17:47 <russellb> #help need one more nova-core reviewer to volunteer to start reviewing the cells patches
21:17:57 <vishy> but in v2 the api nodes should be calling quantum directly..
21:18:01 <comstud> does network_api handle it ?
21:18:06 <vishy> comstud: yeah
21:18:11 <zykes-> what's the diff though in cells and aggregates if I can ask ?
21:18:12 <comstud> it should be fine then
21:18:19 <comstud> configure network_api in each child cell appropriately
21:18:29 <vishy> zykes-: cells each have their own db and queue
21:18:34 <comstud> the only thing that will not work potentially is the API extension
21:18:37 <vishy> aggregates are groups of hosts in a single install
21:18:49 <comstud> if you point API cells and child cells at same quantum, then you're ok
21:18:52 <russellb> cells contain aggregates
21:18:53 <comstud> but that's a limitation right now
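(Editor's aside: a hedged sketch of the "global quantum" arrangement comstud describes, with every cell's nova.conf pointing at the same quantum v2 endpoint; option names are assumptions based on the Grizzly-era quantum integration.)

    # nova.conf in each API cell and child cell (illustrative)
    network_api_class = nova.network.quantumv2.api.API
    quantum_url = http://global-quantum:9696/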
21:19:11 <zykes-> vishy: ok, so aggregates are within that
21:19:14 <russellb> would be good to keep a documented set of limitations somewhere
21:19:20 <comstud> agree
21:19:21 <russellb> not sure where ...
21:19:40 <jog0> devref?
21:19:49 <russellb> jog0: but it's a user doc, really
21:19:57 <russellb> maybe wiki.openstack.org for now while it's still moving a lot
21:20:48 <russellb> anything else on cells?
21:20:58 <comstud> in any case.. that's about all i have.  probably the largest issues (just understanding how the routing works) will be addressed shortly.
21:20:59 <jog0> russellb: works for me, but long term openstack-manuals sounds like the right home
21:21:13 <russellb> jog0: agreed
21:21:15 <russellb> #topic bugs
21:21:24 <russellb> #link http://webnumbr.com/untouched-nova-bugs
21:21:28 <russellb> only 20 untouched bugs, very nice.
21:21:48 <russellb> many thanks to those putting time into triage.  if we all just get a few issues here and there, we'll stay on top of them
21:21:52 <sdague> did mikal create his cookie report?
21:21:57 <mikal> Yep, please hold
21:22:00 <mikal> http://www.stillhq.com/openstack/nova/triage-20121116.txt
21:22:08 <mikal> So, we're not as good as last week, but still ok
21:22:15 <mikal> Mostly that's cause russellb has fallen off the wagon...
21:22:25 <dansmith> woohoo!
21:22:25 <sdague> mikal steals all the cookies
21:22:28 <russellb> mikal gets this week's triage gold star
21:22:28 <mikal> Heh
21:22:36 <mikal> It's still not all the nova-core people doing triage
21:22:37 <mikal> Hint hint
21:22:54 <comstud> i've got a focus right now, sorry :)
21:23:00 <comstud> +different
21:23:16 <mikal> It's ok
21:23:20 <mikal> I shall just mock you
21:23:20 <russellb> #help please help triage nova bugs!  Even just a few would be awesome.
21:23:25 <comstud> ok :)
21:23:29 <russellb> cool, anything else on bugs?
21:23:40 <russellb> i actually have one thing on bugs
21:23:48 <russellb> We have 4 active nova security bugs (private) that need nova-core attention
21:24:21 <russellb> please look at these bugs (and don't discuss them here): https://bugs.launchpad.net/nova/+bug/1074343 https://bugs.launchpad.net/nova/+bug/1070539 https://bugs.launchpad.net/nova/+bug/1069904 https://bugs.launchpad.net/nova/+bug/1058077
21:24:22 <uvirtbot> Launchpad bug 1074343 in nova "ec2 describe instances does not filter by project_id" [Undecided,Incomplete]
21:24:36 <russellb> unless the bug is already public, heh
21:24:40 <sdague> heh
21:24:44 <russellb> #topic grizzly-1
21:24:53 <russellb> Alrighty, grizzly-1 is one week out
21:24:53 <dansmith> uvirtbot: loose lips sink ships!
21:24:54 <uvirtbot> dansmith: Error: "loose" is not a valid command.
21:25:06 <dansmith> hrm
21:25:22 <russellb> #link https://launchpad.net/nova/+milestone/grizzly-1
21:25:42 <russellb> so,  let's take a look at status of some of these things
21:25:45 <alexpilotti> we have a few things on the Hyper-V side that we'd like to have in for G-1 if possible
21:26:05 <russellb> alexpilotti: ok, is there a blueprint or a bug?
21:26:21 <alexpilotti> russellb: we have a bp, let me fetch it
21:26:22 <dansmith> russellb: were you going to target no-db-virt for G-1?
21:26:26 <dansmith> I think we can call it done, right?
21:26:35 <russellb> dansmith: yeah, we can, link?
21:26:35 <sdague> russellb: the audit's going to be a couple weeks past G-1, I just moved it to G-2
21:26:44 <devananda> is there a chance the remaining baremetal patches might land before G1?
21:26:44 <russellb> sdague: ok great thanks
21:26:56 <dansmith> https://blueprints.launchpad.net/nova/+spec/no-db-virt
21:27:11 <russellb> devananda: it's possible ... don't want to rush the reviews just to meet grizzly-1 though, since we're still early in grizzly
21:27:25 <devananda> ack
21:27:26 <alexpilotti> https://blueprints.launchpad.net/nova/+spec/grizzly-hyper-v-nova-compute
21:27:58 <russellb> dansmith: done
21:28:02 <dansmith> thanks
21:28:23 <alexpilotti> for the moment we have ConfigDriveV2 implemented
21:28:27 <russellb> alexpilotti: so is everything for this blueprint up for review?
21:28:44 <vishy> alexpilotti: really should split the features into individual blueprints
21:28:51 <vishy> instead of 1 blueprint for all features
21:28:54 <alexpilotti> russellb: the BP is still generic, we have to update it, I know
21:29:14 <alexpilotti> vishy: I'm going to do it ASAP, in the next days
21:29:14 <russellb> yeah, hard to target if it doesn't have a defined endpoint
21:29:38 <russellb> alexpilotti: ok, well if you get a blueprint for config drive support, we can target that, that will probably go in for grizzly-1
21:29:40 <alexpilotti> anyway on the Nova side we don't have too much, we are targeting mainly Grizzly
21:29:49 <alexpilotti> for Quantum
21:30:02 <alexpilotti> russellb: ok tx
21:30:05 <russellb> gotcha.
21:30:20 <alexpilotti> mikal: can I please ask you to re-approve the review?
21:30:29 <alexpilotti> mikal: https://review.openstack.org/#/c/15743/
21:30:31 <mikal> Sure
21:30:38 <russellb> is Mate Lakat here?  don't know the status of that xenapi volume driver blueprint
21:30:43 <alexpilotti> mikal: since you approved I had to rebase and fix 2 lines
21:30:55 <mikal> Yep, I'll re-review this morning
21:31:22 <russellb> xenapi-volume-drivers seems to be stalled
21:31:25 <russellb> vishy: know anything on that one?
21:31:38 <russellb> guess we can just bump early next week if needed
21:31:57 <vishy> there are a bunch of reviews in
21:31:59 <vishy> but not done
21:32:05 <russellb> k
21:32:19 <russellb> #topic Open Discussion
21:32:38 <russellb> #help nova-core: review queue is longer than ideal, please help knock it down
21:32:53 <vishy> ok i have an idea on that point
21:32:56 <vishy> might be crazy
21:33:08 <russellb> i like how this is starting
21:33:12 <vishy> what if we allowed certain people to have +a access to subdirectories
21:33:33 <russellb> so, if it only touched say ... nova/virt/libvirt/* ?
21:33:39 <vishy> right
21:33:39 <sdague> is that doable in the current ci system?
21:33:42 <alexpilotti> vishy: +1 :-)
21:33:46 <vishy> sdague: no i asked
21:33:53 <vishy> but we could do it through convention
21:34:07 <vishy> does that seem useful? or just a crazy idea
21:34:24 <sdague> sure, I suppose. It does bring up a question of creating consistent quality across the project
21:34:24 <dansmith> seems like we kinda already have that
21:34:29 <mikal> Are there really that many extra reviewers that would add?
21:34:39 <vishy> it isn't for extra reviewers
21:34:42 <russellb> seems potentially useful ... but if you trust someone enough to approve nova/virt/libvirt/, we should be able to trust them to approve anything that is within their comfort zone
21:34:49 <dansmith> in that, if the hyperv guys propose something, everyone mostly trusts them that it's the right thing, as long as it doesn't break anything, right?
21:35:03 <russellb> dansmith: yeah, that's what I do on those patches :-)
21:35:05 <vishy> it is so alexpilotti can approve stuff in virt/hyperv/ without having to go track down two core members to +2 a
21:35:20 <alexpilotti> russellb: I keep adding beers for you ;-)
21:35:23 <russellb> but his +1 is effectively that to me already
21:35:27 <dansmith> right
21:35:42 <vishy> sure but we need a core member to take the time and go push the button
21:35:44 <russellb> if I see that a domain expert has already reviewed it, i spend much less time on it
21:35:49 <russellb> true
21:36:03 <russellb> but it doesn't take that long :)
21:36:09 <vishy> maybe we just need to get better about pushing the button
21:36:10 <dansmith> I dunno, I think that's opening the doors a bit,
21:36:12 <vishy> :)
21:36:12 <russellb> and it's good to have at least a 1 minute sanity check
21:36:24 <vishy> fair enough, just wanted to throw the idea out
21:36:30 <sdague> vishy: we do, but at the end of the day it's going to be a core member that might have to do an emergency fix. So it seems like they should have looked at it at some point :)
21:36:32 <dansmith> even if it's just for small stylistic things and conventions we avoid elsewhere but aren't in hacking.py
21:36:38 <vishy> i have a general topic to bring up as well
21:36:41 <dansmith> sdague: ++ :)
21:37:03 <vishy> cross tenant apis
21:37:21 <vishy> currently we have things like all_tenants=True for admins
21:37:37 <vishy> so some things an admin can do across tenants
21:37:49 <vishy> is this correct / useful?
21:38:01 <vishy> this sort of plays into the tenant_ids in urls for future versions of the api
21:38:11 <vishy> and whether that is good or bad
21:38:40 <russellb> i thought there was no concept of a tenant specific admin
21:38:51 <russellb> and that an admin in keystone was admin of the world
21:39:28 * russellb must just not understand the question
21:39:52 <vishy> yeah i'm discussing how we fix that
21:40:05 <mikal> Well, we still want super-admins right?
21:40:12 <mikal> I see that as useful functionality
21:40:17 <russellb> oh, ok.
21:40:23 <vishy> mikal: useful yes
21:40:35 <jog0> presumably we want both  tenant admin and operators(super-admin)
21:40:49 <mikal> jog0: that would be good
21:40:56 <vishy> but the question is should operators be using the normal urls
21:41:19 <vishy> can an operator access http://api/<other-tenant-id>/servers
21:41:28 <vishy> or just:
21:41:51 <jog0> wouldn't using the normal urls make it harder to  do cross tenant operations?
21:41:53 <vishy> http://api/<my-tenant-id>/servers?all_tenants=True
21:42:10 <vishy> jog0: yes
21:43:35 <vishy> anyway something to chew on
21:43:49 <jog0> according to the keystone readme, a role is "a first-class piece of metadata associated with many user-tenant pairs"
21:43:54 <jog0> I am not sure where admin fits in to that
21:44:25 <russellb> might be a good candidate for an openstack-dev thread to get more attention
21:44:32 <jog0> russellb:  +1
21:44:44 <sdague> sounds like a good plan
21:44:54 <sdague> on the subject of openstack-dev....
21:45:00 <sdague> new other topic - http://lists.openstack.org/pipermail/openstack-dev/2012-November/002889.html - was hoping to get some other eyes on this thread
21:45:18 <sdague> it's i18n again
21:45:56 <russellb> i18n in the API itself seems evil
21:45:59 <russellb> but i'm an ignorant american
21:46:23 <sdague> yeh... the problem is that when you're an operator in .cn, assuming English often isn't an option
21:47:06 <sdague> anyway, just wanted to have people take a look and discuss out there. Not let it just wither on the vine
21:47:26 * russellb tends to just listen to the people that live elsewhere and this is a big issue for them
21:47:53 <jog0> while talking about ML threads: http://lists.openstack.org/pipermail/openstack-dev/2012-November/002822.html
21:48:09 <russellb> aw, how'd i miss that
21:48:54 <russellb> i don't have a strong opinion on it.  my gut reaction is to say leave a default quota in place, but i wouldn't fight it if enough people thought it made sense to have no quotas by default
21:49:50 <russellb> anything else?  or we'll #endmeeting
21:49:55 <eglynn> #topic ceilometer/nova interaction as discussed on the ML #link http://wiki.openstack.org/EfficientMetering/FutureNovaInteractionModel
21:50:02 <jog0> russellb: while I would like to change it, I don't have a strong opinion on it either.  was hoping to get more feedback
21:50:06 <russellb> ooh, good one eglynn
21:50:21 <eglynn> we need to get closure/convergence on that thread
21:50:31 <russellb> #topic ceilometer/nova interaction as discussed on the ML
21:50:34 <russellb> #link http://wiki.openstack.org/EfficientMetering/FutureNovaInteractionModel
21:50:46 <eglynn> so the question for you nova folks, is either option #4a or #5 a runner?
21:50:57 <russellb> eglynn: yeah ... i took a step back to see where everyone took it.  there were a lot of different opinions flying around.
21:50:58 <eglynn> (both require buy-in from nova)
21:51:32 <eglynn> yep, lots of noise on the thread
21:51:42 <vishy> i'm leaning towards #5
21:51:44 <russellb> i guess i'd like to look closer at what 4a does, and figure out if it's something that could just be a part of nova-compute
21:52:20 <vishy> that was: define a common interface that each driver could implement as a library, right?
21:52:21 <russellb> vishy: so we'd distribute 2 copies of the virt layer?
21:52:26 <eglynn> vishy: cool
21:52:28 <eglynn> russellb: my concern about 4a is the separate daemon complicating deployment
21:52:47 <russellb> eglynn: that's why i was saying we could look at adding whatever it does to the existing daemon
21:53:14 <eglynn> russellb: yeah, but the timeliness issue is a worry
21:53:21 <russellb> i don't think i fully get 5 ...
21:53:41 <eglynn> (as in there seem to be just fairly weak guarantees on the periodic_tasks)
21:53:49 <russellb> eglynn: correct
21:53:58 <vishy> the only real difference between 4 and 5 is nova-compute-pollster lives in ceilo instead of nova
21:54:02 <eglynn> russellb: any haziness around 5 may be my fault
21:54:04 <russellb> the frequency they run is directly affected by how long they take
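(Editor's aside: a toy illustration of the weak guarantee under discussion; nothing nova-specific, just why a fixed sleep between runs lets slow tasks stretch the effective period.)

    import time

    def run_periodic(tasks, interval=60):
        while True:
            for task in tasks:
                task()            # if the tasks take 30s...
            time.sleep(interval)  # ...the effective period is ~90s, not 60s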
21:54:20 <russellb> so, what does this library from nova have in it
21:54:25 <russellb> in #5
21:54:44 <vishy> an implementation of an interface defined by ceilo
21:54:47 <russellb> does it contain the virt drivers?
21:54:48 <eglynn> a cut-down subset of the hypervisor driver API
21:54:53 <vishy> like get_bw_for_period
21:55:00 <vishy> instance_exists()
21:55:02 <vishy> etc.
21:55:08 <russellb> ah ok ...
21:55:15 <russellb> and then nova-compute could consume this same library then?
21:55:16 <eglynn> yep, all read-only, non destructive
21:55:23 <eglynn> sure
21:55:29 <russellb> for the bits it uses too
21:55:37 <eglynn> though the versioning might be oriented to external consumers
21:55:46 <russellb> sure.
21:55:46 <eglynn> (slowly evolving, stable)
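(Editor's aside: a sketch of what option #5's read-only library interface might look like. The method names come from vishy's examples above; the class name and signatures are pure assumption.)

    import abc

    class ComputeInspector(object):
        """Read-only, non-destructive hypervisor queries (option #5 sketch)."""
        __metaclass__ = abc.ABCMeta

        @abc.abstractmethod
        def instance_exists(self, instance_name):
            """Return True if the hypervisor knows about this instance."""

        @abc.abstractmethod
        def get_bw_for_period(self, instance_name, start, end):
            """Return bytes sent/received by the instance over [start, end)."""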
21:56:00 <russellb> that doesn't seem so evil ... i just don't want to see duplicated virt code
21:56:14 <eglynn> cool
21:56:34 <vishy> since that piece needs to exist regardless
21:56:46 <eglynn> yep
21:56:52 <eglynn> so, provisional good vibes around option #5?
21:56:56 <vishy> we could define that api first before deciding where the daemon will live
21:56:58 <alexpilotti> my 2 cents: 5 seems to leave more control to Nova on the interface / mixin that needs to be defined
21:57:16 <russellb> eglynn: yeah, i think so ...
21:57:28 <alexpilotti> in Hyper-V we have quite cool APIs handled via WMI, so both 4a and 5 are not an issue
21:57:31 <russellb> pending letting it simmer a bit, and going into more detail on what it would look like, etc
21:57:58 <eglynn> cool
21:58:05 <russellb> eglynn: thanks for bringing that up
21:58:10 <russellb> anything else?  we have a whole 2 minutes
21:58:16 <eglynn> OK, so we'll look at fleshing out that API definition on the ceilo side to keep the ball rolling
21:58:47 <alexpilotti> eglynn: what's the deadline for the ceilo / nova  integration?
21:59:13 <eglynn> alexpilotti: probably G-2
21:59:21 <eglynn> (we're sitting out G-1)
21:59:33 <russellb> seems reasonable
21:59:37 <eglynn> cool
21:59:48 <alexpilotti> wow, cool. Can't wait to have it done.
22:00:04 <russellb> alright, then.  thanks everyone
22:00:07 <russellb> #endmeeting