21:02:06 <russellb> #startmeeting nova
21:02:06 <openstack> Meeting started Thu Jun 20 21:02:06 2013 UTC.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:02:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:02:07 <russellb> hello everyone!  who's around to talk about nova?
21:02:09 <openstack> The meeting name has been set to 'nova'
21:02:10 <mriedem> hi
21:02:12 <mrodden> here
21:02:15 <cyeoh> hi!
21:02:20 <dripton> hi
21:02:20 <dansmith> o/
21:02:24 <alaski> o/
21:02:28 <melwitt> hi
21:02:34 <russellb> #link https://wiki.openstack.org/wiki/Meetings/Nova
21:02:42 <russellb> #topic blueprints
21:02:43 <n0ano> o/
21:02:51 <devananda> \o
21:02:53 <hartsocks> \o
21:02:58 <russellb> #link https://launchpad.net/nova/+milestone/havana-2
21:03:11 <russellb> I think we're actually in decent shape given how much is on the havana-2 list
21:03:16 <russellb> lots of stuff already up for review, which is good
21:03:31 <russellb> but I figured it would be a good time to remind everyone of the schedule ...
21:03:44 <russellb> we're roughly half-way through havana-2 (which means roughly half way through dev time for havana!)
21:03:52 <russellb> #link https://wiki.openstack.org/wiki/Havana_Release_Schedule
21:05:03 <russellb> sooner the better for putting some review time into the bigger stuff for these blueprints, so it's not so much of a rush at the end of this milestone
21:05:19 <russellb> any comments/questions about release status / blueprints?
21:06:02 <mrodden> i am somewhat concerned about the API locale one I have up
21:06:10 <russellb> mrodden: link?
21:06:14 <mrodden> https://blueprints.launchpad.net/nova/+spec/user-locale-api
21:06:21 <russellb> and what's the concern?
21:06:27 <mrodden> it's not very glorious code
21:06:39 <mrodden> and hasn't had any feedback from any nova cores yet
21:07:05 <russellb> yeah, don't think i've seen the nova code yet
21:07:31 <mrodden> the oslo stuff got merged yesterday (?)
21:07:38 <mrodden> so that's cool
21:07:40 <russellb> well right, i saw that
21:07:51 <russellb> is there a nova patch up?
21:07:57 <mrodden> yep... second
21:08:08 <mriedem> https://review.openstack.org/#/c/30459/
21:08:19 <mrodden> yep that one
21:08:25 <russellb> #help could use feedback on user-locale-api review
21:08:26 <mriedem> says it's merged
21:08:35 <mrodden> https://review.openstack.org/#/c/30479/
21:08:36 <mrodden> oh
21:08:37 <mrodden> that one
21:08:38 <mriedem> yeah
21:09:11 <russellb> i can't pull it up right now ... on edge internet :-/
21:09:36 <russellb> blueprint set to "Needs Code Review" ?
21:09:50 <mrodden> my main concern is that it took a lot of iterations to get it ready for g3 but then was too late to merge
21:09:56 <mrodden> yeah should be
21:10:00 <russellb> ok
21:10:15 <russellb> well i'll try to look soon
21:10:18 <mrodden> ok thanks
21:10:29 <russellb> (and others too i hope)
21:10:34 <russellb> i've been trying to do a better job of looking at older stuff first
21:10:43 <russellb> using the next-review script, and looking at my review stats
21:11:05 <russellb> cyeoh: how's the v3 api stuff coming?
21:11:27 <cyeoh> I think it's progressing fairly well, but nearly all of it is still waiting for review
21:11:35 <cyeoh> (what we've submitted anyway)
21:11:38 <russellb> and are review times causing you lots of pain?
21:12:04 <russellb> i was afraid of that
21:12:05 <cyeoh> I'm getting concerned because I know we are going to get merge conflicts because of setup.cfg changes
21:12:27 <russellb> i like the new process you've started using, step 1/2
21:12:31 <dripton> The step 1 patches are just copies of files to new places so they should sail through review; it's just getting 2 core people to look at them.
21:12:43 <cyeoh> last time I checked the v3 api patches were about 30% of the review queue (if only taking into account changes waiting for reviewer rather than submitter)
21:12:50 <russellb> yeah, but the step 1 patch shouldn't be approved before step 2
21:13:29 <russellb> #help really need reviews on v3 API changes, they're easier to review than they seem, i promise :-)
21:13:52 <cyeoh> that would be very much appreciated and lower my stress levels :-)
21:14:03 <sdague> I think cyeoh just wants to race with dansmith on depleting all the devstack nodes :)
21:14:05 <russellb> a lot of core reviewers seem busy with other things ... been slowing things down a bit i think
21:14:14 <dansmith> sdague: good luck to him
21:14:19 <cyeoh> sdague :-)
21:14:57 <russellb> all good stuff :)
21:15:31 <russellb> i'll see what i can do to encourage more review time
21:15:33 <russellb> maybe i'll bake cookies
21:15:48 <devananda> mmm!
21:15:50 <russellb> we can come back to blueprints and such in open discussion if needed
21:15:51 <cyeoh> :-)
21:15:54 <russellb> #topic bugs
21:16:04 <russellb> so, bug triage
21:16:07 <russellb> #link https://wiki.openstack.org/wiki/Nova/BugTriage
21:16:13 <russellb> we have this handy process to split up the triage work
21:16:19 <russellb> there are 68 new bugs right now, most of them tagged
21:16:34 <russellb> so please check your queue if you signed up to help
21:16:43 <russellb> and if you didn't sign up, please do
21:16:56 <dripton> I triaged a bug as WONTFIX today and felt bad about it, but it was an SQLAlchemy bug so there's not much we can do.
21:17:14 <russellb> 68 is a bit too high
21:17:19 <russellb> heh, you shouldn't feel bad :)
21:17:19 <russellb> was it the ipv6 one?
21:17:23 <dripton> yes
21:17:30 <russellb> cool, i figured
21:17:49 <russellb> i close bugs sometimes just because i think the bug report isn't good enough
21:17:53 <russellb> "it doesn't work"
21:18:02 <russellb> so i'm way more harsh :)
21:18:06 <dripton> that's why I felt bad about that one; it was totally valid and well written, just not ours
21:18:17 <russellb> ah, yeah ... well as long as you made all of that clear
21:18:41 <melwitt> is there any prereq for signing up for a tag i.e. is there additional permission needed to triage past tagging?
21:18:56 <russellb> melwitt: you have to be a member of the nova-bugs team on launchpad
21:19:01 <russellb> but it's an open team (anyone can join)
21:19:11 <russellb> so, just willingness and time to give
21:19:15 <melwitt> russellb: ok, thanks
21:19:20 <russellb> sure
21:19:37 <dripton> it doesn't take *that* much time.  it's like milking cows: it only takes a few minutes but you have to do it every day or the cows kick you
21:19:44 <russellb> ha
21:19:51 <russellb> yeah, doesn't take much to do a few
21:20:07 <russellb> takes a lot if you're the one or two people trying to do most of it (which is what we kinda had before)
21:20:35 <dripton> right, but if you're only triaging one tag it's not too bad.
21:20:55 <russellb> i've been tagging, but not triaging as much, but happy to help on tricky ones
21:21:04 <russellb> dripton: yeah hope so, spread the pain :)
21:21:32 <russellb> let's talk subteams
21:21:35 <russellb> #topic subteams
21:21:44 <harlowja> hi!!
21:21:46 <russellb> devananda: what's up with baremetal / ironic ?
21:23:39 <russellb> ok, can come back to that one
21:23:39 <russellb> harlowja:
21:23:46 <harlowja> howdy
21:24:05 <russellb> or hartsocks ?
21:24:09 <russellb> (sorry if it's just my internet sucking here)
21:24:13 <hartsocks> hey.
21:24:21 <harlowja> so one flow in cinder is working and awaiting the taskflow library to be released (so that it can be integrated; jenkins is right now complaining about a missing dep)
21:24:23 <russellb> who wants to give an update :)
21:24:35 <russellb> cool.
21:24:42 <harlowja> sure, so we are also working with heat folks on their desired requirements
21:24:49 <harlowja> it'd be interesting to have a nova equivalent
21:24:53 <harlowja> *even if its just random thoughts*
21:25:02 <devananda> russellb: sorry, looked away at an email for a minute... back now :)
21:25:03 <harlowja> #link https://wiki.openstack.org/wiki/Heat/TaskSystemRequirements
21:25:14 <russellb> harlowja: k, i'd post to the ML
21:25:15 <harlowja> it'd be interesting to start collecting nova 'ideas/requirements'
21:25:26 <harlowja> russellb sounds good
21:25:32 <russellb> anything else?
21:25:36 <harlowja> so otherwise, just heads down working
21:25:41 <russellb> cool
21:25:42 <harlowja> that's about it :)
21:25:47 <russellb> hartsocks: alright, you're up
21:25:51 <russellb> hartsocks: i like the weekly emails
21:25:51 <hartsocks> okee dokee
21:25:57 <hartsocks> Thanks.
21:26:07 <russellb> guess you can just link to those here, heh ... but highlights are good too
21:26:08 <hartsocks> I appreciate the response we got off of that.
21:26:32 <hartsocks> So… I'll just put out a list of stuff we think is good for core-review
21:26:42 <russellb> ok
21:26:44 <hartsocks> I'll send that around on fridays.
21:26:47 <hartsocks> Otherwise...
21:26:48 <hartsocks> So we VMwareAPI folks are drilling down on our Blueprints now.
21:27:01 <hartsocks> We've got most of the critical/high priority stuff assigned to people
21:27:14 <hartsocks> There's only one outstanding bug that needs attention.
21:27:29 <hartsocks> We've seen a potential problem with one of the blueprints slated for H2
21:27:41 <hartsocks> We may have to move it to H3.
21:28:02 <hartsocks> We're heads down now and H2 deadlines will be close I feel.
21:28:02 <russellb> ok, assignee should be able to update that if needed
21:28:11 <russellb> or ping me (or anyone on nova-drivers) otherwise
21:28:25 <russellb> ok, to get stuff reviewed in time, try to have code up for review early
21:28:34 <russellb> stuff that goes up in the last week will almost certainly slip
21:28:45 <russellb> just based on past experience
21:28:53 <russellb> sooner the better, of course
21:28:58 <hartsocks> I've told folks to get their code up by July 11th if they really want it to get in.
21:29:09 <hartsocks> Perhaps I should say the 4th
21:29:13 <russellb> perfect
21:29:31 <russellb> 11th hard deadline, depending on size, there's still risk
21:29:34 <hartsocks> We really only have 2 working weeks in this release then.
21:29:38 <russellb> but it's not the havana feature freeze, so not a *huge* deal
21:29:45 <hartsocks> Yep.
21:29:51 <hartsocks> Just if you want it in H2.
21:29:56 <russellb> yep.
21:30:06 <russellb> and hopefully we can merge as much as we can in h2
21:30:09 <russellb> so it's not so heavy in h3
21:30:15 <hartsocks> I'm trying to push all the bug fix and put-the-fires-out work to H2 so we can save H3 work for new hotness.
21:30:25 <russellb> heh
21:30:30 <hartsocks> So we're rolling along.
21:30:38 <russellb> cool, thanks for the update.
21:31:03 <russellb> devananda: alright, how's ironic / baremetal ?
21:31:08 <devananda> hi!
21:31:23 <devananda> prepared a summary, pasting...
21:31:28 <russellb> wooo
21:31:33 <devananda> Many open bugs in baremetal still. I've continued to focus on ironic rather than fixing bugs, except for when the bugs are quick to fix.
21:31:41 <devananda> dprince and pleia2 have been doing some interesting things testing bare metal with the TOCI project and fixing blocking bugs for their work. some bug fixes have trickled over to ironic from their work.
21:31:57 <devananda> GheRivero seems to be making good progress porting image utils to glanceclient. Could probably use a few more reviews:
21:32:06 <devananda> #link https://review.openstack.org/#/c/33327/
21:32:19 <devananda> Once that's done, we'll be able to move on implementing the PXE driver in Ironic
21:32:31 <devananda> that's the last driver we're missing (IPMI and SSH/virtualpowerdriver are done)
21:32:40 <devananda> I'm addressing the present lack of API and RPC functionality
21:32:50 <devananda> and NobodyCam is working on integration with keystone, and scripting the installation of ironic services with diskimage-builder.
21:32:53 <devananda> [EOF]
21:33:10 <russellb> speaking of API, are you looking at pecan/wsme for your API?
21:33:15 <devananda> yep
21:33:18 <russellb> awesome
21:33:22 <devananda> the basic components are already there and working
21:33:27 <russellb> cool
21:33:30 <devananda> i landed api unit tests today
21:33:39 <devananda> need to actually implement all the handlers for things :)
21:33:42 <devananda> and RPC layer
21:33:45 <russellb> boris-42 was wanting to help get those used more throughout openstack, i recommended he check with you to see if you could use help
21:33:57 <russellb> those == pecan/wsme
21:34:04 <devananda> great
21:34:17 <devananda> one of his guys, romcheg, has been working on porting dansmith's object code
21:34:20 <devananda> so we are using that, too
21:34:25 <russellb> awesome
21:34:37 <russellb> which means once it settles we should be looking at oslo-ifying it
21:34:40 <devananda> yep
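[For context on the pecan/wsme mention above: a minimal, purely illustrative pecan controller, a hypothetical example rather than Ironic's actual API code. WSME, which layers typed request/response definitions on top of controllers like this, is omitted.]

```python
# Hypothetical example of a pecan-style REST controller (not Ironic code).
import pecan
from pecan import expose


class RootController(object):
    @expose('json')
    def index(self):
        # pecan routes GET / to this method and renders the dict as JSON
        return {'name': 'example-api', 'status': 'ok'}


# Build the WSGI application from the root controller; in a real service
# this would be handed to a WSGI server rather than run ad hoc.
application = pecan.Pecan(RootController())
```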
21:35:27 <russellb> cool, anything else on baremetal / ironic?
21:35:35 <devananda> nope
21:35:37 <russellb> any other subteams wanna give a report?
21:35:47 <n0ano> scheduler
21:35:54 <russellb> n0ano: great, go for it
21:36:13 <n0ano> went over 2 things, scaling - I've started a thread on the mailing list, we'll see how that works out...
21:36:24 <russellb> ah yes
21:36:44 <russellb> sounds like consensus was to kill off all of the fanout_cast stuff, which i'm actually happy to hear
21:37:11 <n0ano> the other issue talked about was the BP for multiple default AZs (https://blueprints.launchpad.net/nova/+spec/schedule-set-availability-zones), looks like a reasonable idea, he'll be working on implementing it.
21:37:20 <russellb> beyond scale, fanout also causes some problems with trusted messaging
21:37:44 <russellb> yep, sounds reasonable to me
21:38:06 <n0ano> russellb, I still don't understand, why do the fanout messages have more latency than the DB updates, they should be faster
21:38:55 <russellb> anything else?
21:38:56 <russellb> #topic open discussion
21:39:08 <dansmith> I have something for open discussion
21:39:17 <jog0> q on the multiple default AZs
21:39:17 <dansmith> https://review.openstack.org/#/c/33888/
21:39:41 <dansmith> tox appears broken in anything but CI after a recent reqs change, that's the revert patch ^^
21:39:50 <jog0> I am a little confused on what the goal is there.
21:40:14 <russellb> oops
21:40:14 <russellb> n0ano: because the db updates are "instant"
21:40:15 <russellb> n0ano: while the fanout stuff is periodic
21:40:17 <russellb> dansmith: whose patch was it?
21:40:23 <jog0> n0ano: ^
21:40:26 <dansmith> russellb: geekinutah
21:40:54 <mrodden> dansmith: checking to see if that solves my tox issue...
21:41:07 <n0ano> I would think we just replace the DB updates with a fanout message, then they are no longer periodic
21:41:07 <dansmith> mrodden: if it's a failure to get oslo.config, then, yeah
21:41:18 <russellb> ok, +2
21:41:19 <dansmith> it fails the nova-requirements test, which I don't know the content of
21:41:31 <n0ano> the DB can't be `instant', you have to send a message to the DB server
21:41:33 <dansmith> is there a gate that prevents going backwards or something?
21:41:43 <russellb> n0ano: right, we could, but that's a *ton* of messages
21:41:52 <jog0> dansmith: talk to the infra team about the requirements test
21:41:57 <n0ano> versus a ton of DB update messages
21:42:07 <russellb> pretty much
21:42:22 <russellb> but thing is, right now we're using the db
21:42:30 <russellb> the fanout stuff is just wasting resources
21:42:35 <devananda> iiuc, db updates will grow linearly with # of nodes, whereas fanout will grow exponentially....
21:42:54 <devananda> (it's possible I am thinking of a different conversation, too)
21:43:11 <russellb> devananda: basically just talking about how the scheduler gets the current state of compute nodes
21:43:18 <russellb> resource usage and whatnot.
21:43:23 <n0ano> devananda, I think so, the fanout message is one from each node, adding a new node only adds one message (per state change)
21:43:23 <jog0> devananda: AFAIK fanout grows with number of schedulers*compute-nodes and db is number of compute-nodes
21:43:36 <devananda> jog0: right.
21:44:00 <russellb> i don't think people are running many schedulers, but yeah.
21:44:06 <jog0> n0ano: we already have the 'use the DB for record keeping' paradigm; fanout generally breaks that.
21:44:34 <jog0> russellb: one of the problems is a single-threaded scheduler processing all the fanouts; adding another scheduler doesn't make things better, just the same
21:44:44 <russellb> fanout also generally kills our ability to move to peer-to-peer messaging, which is much more scalable
21:45:37 <n0ano> wouldn't the message from the compute node to the scheduler be considered a peer-peer message, shouldn't that fit in?
21:45:42 <russellb> and fanout isn't as good for trusted messaging, because you don't know the specific endpoint you're talking to
21:45:59 <russellb> it's one message broadcasted to all schedulers
21:46:08 <n0ano> the end point is the scheduler, that
21:46:16 <n0ano> that's a trusted end point
21:46:38 <russellb> but (at least with amqp) you're not talking to one thing
21:46:42 <russellb> you're sending 1 message to N things
21:47:31 <russellb> guess we should cover it more on the ML
21:47:35 <russellb> easier to go into detailed discussion there
21:47:50 <jog0> n0ano: if we didn't already have the notion of a central DB, fanout would have more benefits, but we already have that concept, so why not use it
21:47:53 <n0ano> yeah, I think the ML is the proper place to discuss this
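[A minimal, purely illustrative sketch of the scaling argument jog0 and devananda make above: per-node DB updates grow with the number of compute nodes, while fanout traffic grows with compute nodes times schedulers. All names and numbers here are made up for illustration; this is not Nova code.]

```python
# Back-of-the-envelope comparison of the two scheduler-state update paths
# discussed above (hypothetical figures, not Nova code).

def db_updates_per_interval(num_compute_nodes):
    # Each compute node writes its resource usage to the central DB once
    # per periodic update: cost grows linearly with the number of nodes.
    return num_compute_nodes


def fanout_messages_per_interval(num_compute_nodes, num_schedulers):
    # With fanout_cast, each compute node's periodic update is delivered to
    # every scheduler, so the broker handles nodes * schedulers messages.
    return num_compute_nodes * num_schedulers


if __name__ == "__main__":
    nodes = 1000
    for schedulers in (1, 2, 4):
        print("nodes=%d schedulers=%d db=%d fanout=%d" % (
            nodes, schedulers,
            db_updates_per_interval(nodes),
            fanout_messages_per_interval(nodes, schedulers)))
```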
21:47:56 <jog0> n0ano: question on https://blueprints.launchpad.net/nova/+spec/schedule-set-availability-zones how do you propose having several default AZs?
21:48:09 <russellb> isn't it a config option now?
21:48:11 <lifeless> jog0: because central db's are evil! evil!
21:48:39 <jog0> lifeless: perhaps but that is the bigger debate that we should have
21:48:46 <lifeless> jog0: I was trolling.
21:48:47 <n0ano> jog0, we already have one default, the BP is about allowing multiple defaults if the user doesn't specify one
21:49:08 <russellb> choose a host from any of these AZs, as opposed to just this one default AZ
21:49:13 * russellb shrugs
21:49:14 <russellb> seems ok
21:49:16 <jog0> n0ano: oh for where VMs go, not for where compute-nodes go?
21:49:18 <n0ano> russellb, +1
21:49:27 <n0ano> jog0, correct
21:49:32 <jog0> that makes a lot more sense
21:49:49 <russellb> jog0: right :)
21:49:59 <n0ano> it took a long discussion to understand what the BP was proposing but we got it at the end :-)
21:50:00 <jog0> that wasn't clear to me from the BP
21:50:05 <russellb> jog0: but btw ... don't host aggregates allow you to technically put a compute node in more than 1 AZ?
21:50:19 <jog0> russellb: yeah
21:50:20 <russellb> n0ano: should update the blueprint then with clarification
21:50:37 <n0ano> russellb, that was the suggestion at the meeting
21:50:41 <russellb> jog0: i don't think our API really deals with that
21:50:55 <jog0> russellb: yeah, that is currently a don't do for deployers
21:51:01 <jog0> russellb: they don't
21:51:08 * russellb nods
21:51:19 <russellb> so you just have plenty of rope
21:51:26 <jog0> yeah ...
21:51:30 <russellb> be careful, don't hurt yourself
21:51:31 <russellb> k
21:51:34 <russellb> i guess we could put in a safeguard ...
21:52:04 <russellb> block adding a host to an aggregate with an AZ set, if it's already in another aggregate with an AZ set
21:52:15 <jog0> russellb: not against the idea, didn't seem worth it at the time though. took the 'deployers should be careful' approach initially
21:52:26 <russellb> yeah
21:52:37 <russellb> i think it works from a scheduling point of view
21:52:44 <jog0> russellb: that isn't enough, because you can turn an aggregate into an AZ afterwards too
21:52:45 <russellb> just reflecting data back through the API doesn't account for it
21:52:51 <russellb> ah, yeah.
21:53:12 <russellb> well,  guess we'll leave it alone for now then :)
21:53:20 <jog0> russellb: API just lists all AZs with commas in between
21:53:30 <russellb> oh it does?
21:53:41 <russellb> well then, it's fine
21:53:42 <jog0> last time I checked at least
21:53:52 <russellb> i just misread it then
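[A minimal sketch of the safeguard russellb floats above: refuse to add a host to an aggregate that has an availability zone set if the host is already in another aggregate with a different AZ. Helper names and data shapes are hypothetical, not Nova's actual aggregate API.]

```python
# Hypothetical validation helper; aggregates are modeled as plain dicts here.

class ConflictingAvailabilityZone(Exception):
    pass


def check_az_conflict(host, target_aggregate, all_aggregates):
    """Raise if adding `host` to `target_aggregate` would put it in two AZs."""
    target_az = target_aggregate.get("availability_zone")
    if not target_az:
        return  # target aggregate has no AZ; nothing to conflict with
    for agg in all_aggregates:
        az = agg.get("availability_zone")
        if az and az != target_az and host in agg.get("hosts", []):
            raise ConflictingAvailabilityZone(
                "host %s is already in AZ %s via aggregate %s"
                % (host, az, agg.get("name")))

# As jog0 points out above, this check alone isn't enough: an aggregate with
# no AZ can be given one later, so the same validation would also have to run
# when availability_zone is set on an existing aggregate.
```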
21:54:11 <russellb> alrighty, coming up on time ...
21:54:21 <russellb> any last minute comments/questions/concerns?
21:55:13 <russellb> alright, feel free to stop by #openstack-nova any other time you want to chat.
21:55:15 <russellb> thanks!
21:55:16 <russellb> #endmeeting