21:00:38 <dansmith> #startmeeting nova
21:00:38 <openstack> Meeting started Thu Jun  6 21:00:38 2013 UTC.  The chair is dansmith. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:39 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:41 <openstack> The meeting name has been set to 'nova'
21:00:43 <dansmith> #link https://wiki.openstack.org/wiki/Meetings/Nova
21:00:51 <dansmith> russellb asked me to run this in his absence
21:00:53 <dansmith> who is around?
21:00:57 <dripton> hi
21:00:58 <cyeoh> hi
21:00:59 <devananda> \o
21:01:00 <n0ano> o/
21:01:02 <melwitt> hi
21:01:02 <krtaylor> o/
21:01:16 <ken1ohmichi> hi
21:01:22 <mriedem1> hi
21:01:30 <hartsocks> \o
21:01:41 <dansmith> cool, not sure we'll have all that much to talk about, but we'll run through the same ol' agenda items and see
21:01:49 <dansmith> #topic havana blueprints
21:01:56 <dansmith> blueprints: we've got 'em
21:02:04 <dansmith> anything anyone needs to bring up about a blueprint?
21:02:18 <dansmith> any blueprints needing (re-)targeting or anything?
21:02:24 <vishy> o/
21:02:29 <dansmith> vishy: go!
21:02:36 <alaski> o/
21:02:53 <vishy> well i have two small blueprints i will be creating soon
21:03:09 <vishy> one was the ml post and poc i made earlier about volume-swap
21:03:18 <vishy> the other is about live-snapshots that i mentioned yesterday
21:03:33 * dansmith thinks live snapshots == cool
21:04:15 <dansmith> vishy: are you expecting these to be targeted to H2?
21:04:22 <vishy> possibly
21:04:25 <vishy> i think yes
21:04:30 <vishy> since i don't think either is very big
21:04:35 <dansmith> okay, cool
21:04:44 <vishy> the outstanding question is: is someone going to implement them for other hypervisors?
21:05:29 <dansmith> hartsocks: ?
21:05:44 <hartsocks> @dansmith I'll look at it.
21:05:50 <dansmith> okay
21:06:14 <dansmith> I don't think we have representation from any of the other hypervisors here, but I assume they would have spoken up on the ML
21:06:23 <alaski> I can bring it up with johnthetubaguy re: xen
21:06:28 <dansmith> alaski: cool
21:06:28 <alaski> just to get his thoughts
21:06:35 <mriedem1> dansmith: i can look at it for powervm
21:06:46 <dansmith> #action alaski to talk to johnthetubaguy about live snapshots and volume swap
21:07:04 <dansmith> #action mriedem1 to look at live snapshots and volume swap for powervm
21:07:18 <dansmith> vishy: anything else on that?
21:07:31 <hartsocks> @dansmith I'll put it on the vmwareapi team agenda to discuss
21:07:52 <dansmith> #action hartsocks to bring up volume swap and live snapshots at the vmwareapi team meeting
21:08:06 <vishy> nope
21:08:16 <dansmith> okay, alaski: did you have a blueprint to bring up?
21:08:47 <alaski> dansmith: that was my hello wave.  Though I do have a quick question on the scheduling blueprint that I can ask
21:08:58 <dansmith> cool
21:09:37 <alaski> So in order to break up the tasks to prepare for orchestration, things like allocate_networks are looking like they need a new compute rpcapi call
21:10:03 <alaski> But for that one in particular, the only reason it can't be done in conductor is the get_mac_* call which differs based on virt driver
21:10:40 <alaski> I guess I'm wondering what the thought is on more back-and-forth RPC calls, and if I should worry about trying to move that up a level
21:11:05 <alaski> even though conceptually it fits on the compute node.  It just doesn't actually need to run there based on the logic I've seen
21:11:21 <dansmith> but it might someday, right?
21:11:29 <dansmith> seems like a very virt-driver-ish sort of thing
21:11:49 <alaski> it is
21:12:11 <alaski> And I kind of like it staying there.  It's just that spawn would be moving from one RPC call to probably 4
21:12:16 <dansmith> yeah
21:12:25 <alaski> just wanted to make sure no one is concerned about that
21:12:26 <dansmith> interestingly,
21:12:40 <dansmith> this would apply rather neatly to objects, although kinda different than the other cases
21:12:51 <dansmith> where we could ask for a driver info object from compute once,
21:13:01 <dansmith> which could implement the mac function locally, never needing an rpc call back,
21:13:15 <dansmith> or it could make it remoted and then every call would go back to compute if needed
21:13:23 <dansmith> but that's just craziness since I have objects on the brain :)
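(A hypothetical sketch of the idea dansmith describes above — the names are illustrative, not nova's actual object API: conductor fetches a small "driver info" object from compute once; a method whose logic is driver-specific but not host-specific can then run locally on the conductor side with no further RPC round trips, while anything that depends on live host state would still be remoted back to compute.)

    class DriverInfo(object):
        """Hypothetical object handed from compute to conductor once."""

        def __init__(self, hypervisor_type):
            self.hypervisor_type = hypervisor_type

        def macs_for_instance(self, instance_uuid):
            # Driver-specific but not host-specific, so it can run anywhere
            # without calling back to the compute host.
            if self.hypervisor_type == 'libvirt':
                return None  # let the platform pick MACs
            return ['fa:16:3e:00:00:01']  # placeholder driver policy

        def host_capabilities(self):
            # A call like this depends on live host state, so it would have
            # to be remoted back to the compute host over RPC.
            raise NotImplementedError('would be proxied over RPC')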
21:13:35 <dansmith> comstud: any thoughts on the extra rpc traffic alaski is asking about?
21:14:03 <alaski> :)  that sounds great actually, the never needing an RPC call.  Since in this case the logic is driver specific, but not host specific
21:14:03 <comstud> the problem is expanding instances right?
21:14:11 <comstud> sorry, i need to scroll back
21:14:18 <alaski> more RPC calls
21:14:22 <dansmith> the problem is moving things out of compute to conductor,
21:14:30 <comstud> Why is the message larger ?
21:14:34 <dansmith> and then needing access to things that compute has, like asking the virt driver to generate a mac
21:14:41 <comstud> Is this the qpid problem we're talking about?
21:14:45 <comstud> It doesn't support > 64K msg size
21:14:49 <dansmith> no
21:14:52 <comstud> Oh
21:14:52 <alaski> The message isn't larger, just more of them
21:15:03 <alaski> also, the qpid fix was merged yesterday
21:15:07 <dansmith> hah
21:15:23 <comstud> I am in favor of more calls from conductor to compute
21:15:28 <dansmith> we should have woken up ol' man bearhands earlier
21:15:28 * johnthetubaguy has stopped playing the tuba
21:15:28 <comstud> if that's what's being asked
21:15:38 <comstud> Yeah, I forgot about this mtg :)
21:16:10 <alaski> comstud: cool.  Just wanted to get some positive vibes before I do all that plumbing
21:16:29 <comstud> Splitting up the work into more calls makes for better error recovery and retrying and whatever you may need to do
21:16:32 <comstud> IMO
21:16:53 <johnthetubaguy> sorry, I missed stuff, conductor is gonna start making all the RPC calls to compute still right?
21:16:55 <comstud> I'm fine with it on the backend... but not in the API :)
21:17:01 <dansmith> alaski: sounds like you're good then
21:17:10 <johnthetubaguy> comstud +1 hoping migrate improves due to that
21:17:12 <comstud> (ie, not when a caller is blocking)
21:17:15 <alaski> dansmith: cool
21:17:24 <mriedem1> just an fyi, i'm planning on submitting a blueprint to support config drive and attach/detach volumes for the powervm driver sometime here (hopefully next week)
21:17:25 <alaski> johnthetubaguy: conductor to compute manager
21:17:37 <johnthetubaguy> alaski: cool, all good
21:17:47 <dansmith> okay, so we're good on that it sounds like
21:17:50 <comstud> +1
21:17:53 <alaski> I'm good
21:18:05 <comstud> dansmith: 'for now'
21:18:07 <dansmith> mriedem1: did you have anything more to say about that, or just a warning to the group?
21:18:14 <mriedem1> just giving a warning
21:18:24 <dansmith> comstud: right, until you change your mind. it's always implied
21:18:27 <dansmith> mriedem1: okay
21:18:29 <comstud> correcto
21:18:31 <dansmith> any other blueprint discussion?
21:18:54 <ken1ohmichi> can i bring api-validation up?
21:18:59 <dansmith> sure
21:19:35 <ken1ohmichi> now we are discussing whether to use WSME or jsonschema for validation.
21:19:45 <hartsocks> neat
21:21:02 <ken1ohmichi> https://review.openstack.org/#/c/25358/
21:21:23 <hartsocks> thank you. I couldn't find it.
21:22:04 <cyeoh> ken1ohmichi: I don't think we are getting WSME in Nova for a while? afaik no one is working on it.
21:22:17 <ken1ohmichi> In these patches, WSME is not used for nova-v3-api, so I'd like to discuss whether jsonschema should be used for nova-v3-api.
21:22:38 <ken1ohmichi> cyeoh: yes, right.
21:24:02 <dansmith> is there some discussion to have here?
21:24:53 <dansmith> any other blueprint items to bring up?
21:25:23 <dansmith> alright
21:25:25 <dansmith> #topic bugs
21:25:32 <dansmith> we have those too!
21:25:46 <whenry_> is this the nova meeting? is it still active? how do I make a comment/discuss a topic?
21:25:54 <jog0> dansmith: https://blueprints.launchpad.net/nova/+spec/flavor-instance-type-dedup needs a review
21:26:07 <dansmith> I don't have the link to the bug plot, does anyone have it handy?
21:26:10 <jog0> or two or three
21:26:14 <dansmith> 58 new bugs right now
21:26:15 <dripton> whenry_: yes, it's the right time to talk about the qpid leak
21:26:48 <dansmith> #link  https://blueprints.launchpad.net/nova/+spec/flavor-instance-type-dedup
21:27:08 * whenry_ has identified the qpidd leak problem. it's a bad nasty one. tross and I have been looking at it and we think we have fixed it and simplified dramatically
21:27:31 <dansmith> whenry_: okay, is there a link to a review?
21:27:57 <whenry_> dansmith, I don't have a link right now. we are still testing but it looks promising.
21:28:07 <whenry_> where do I post the link for review when it's time?
21:28:15 <dansmith> okay, is this different than the one alaski mentioned a minute ago?
21:28:22 <dripton> yes
21:28:35 <dansmith> okay
21:28:36 <dripton> alaski's qpid bug was the 65535 limit on a message
21:28:40 <dansmith> gotcha
21:28:43 <dripton> this one is qpid leaking lots of exchanges
21:29:04 <dansmith> okay
21:29:08 <dripton> whenry_: is https://review.openstack.org/#/c/29617/ helpful or unrelated?
21:29:23 <dansmith> do we need to discuss it? sounds like until it hits gerrit, there's not much else to do, is that right?
21:29:25 <dripton> (It at least lets us turn durable on and off for qpid like we can for rabbit.)
21:29:32 <tedross> dripton: 65535 limit on what?
21:29:44 <dripton> size of a message sent across qpid
21:29:55 <whenry_> there is no such limit
21:29:59 <dripton> see commit 781a8f908cd3e5e69ff8b88d998fa93c48532e15 from yesterday
21:30:04 <alaski> I think it was just the size of a string within a message, but not exactly sure
21:30:05 <dansmith> dripton: it's size of a single string, IIRC
21:30:09 <dansmith> right
21:30:12 <tedross> dripton: there's no such limit unless you're carrying the content in a header
21:30:20 <tedross> rather than the message body
21:30:55 <dripton> well, the commit went in yesterday, so you're a bit late, but feel free to go back and see if there's a better way.
21:31:55 <whenry_> dripton, who was the "feel free" comment addressed to?
21:32:13 <whenry_> dragondm, is this re the string size limit?
21:32:21 <dripton> you and tedross.  Anyone who thinks that wasn't a real qpid problem that alaski fixed.
21:32:27 <whenry_> dripton, is this re the string size limit?
21:32:34 <dripton> whenry_: right
21:32:49 <whenry_> dripton, ack I'll let tedross look at that
21:32:50 <alaski> well, I didn't fix it.  I just copied the fix from oslo
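(A rough sketch of the kind of workaround being discussed — not the actual oslo patch alaski copied, and the cause is assumed here: if qpid's map content encodes each string value with a 16-bit length, any single value over 65535 bytes breaks, and JSON-encoding the whole payload into one opaque body sidesteps the per-string limit.)

    import json

    def encode_for_qpid(msg_dict):
        # Send one JSON string as the message body instead of a qpid map
        # whose individual string entries would be capped at 65535 bytes.
        return json.dumps(msg_dict)

    def decode_from_qpid(body):
        return json.loads(body)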
21:33:45 <dansmith> okay, so I think we're good on these two issues for now, anything else?
21:34:03 <whenry_> so re https://review.openstack.org/#/c/29617/ when do we discuss this?
21:34:14 <dansmith> jog0: shall we get back to your blueprint link in open discussion?
21:34:19 <whenry_> what use case is begging for durable?
21:34:45 <jog0> dansmith: I hope there isn't too much to discuss about it, it just needs review
21:34:48 <dripton> whenry_: I think the leak was begging for a way to turn durable off
21:34:51 <dragondm> Hrrrm?
21:35:05 <dansmith> jog0: okay
21:35:10 <whenry_> dripton, oh well that's fixed now ;-) when we send the code for review
21:35:15 <mriedem1> dansmith: the only thing related to bugs i'd bring up is asking for help from core reviewers to check this out, i think it's ready: https://review.openstack.org/#/c/30694/
21:35:26 <whenry_> dragondm, sorry about that .. go back
21:36:22 <dansmith> okay, anything else on bugs?
21:36:34 <hartsocks> heh, if we're soliciting reviews, I've got one: https://review.openstack.org/#/c/30036/
21:37:04 <dansmith> everyone has a pet review; let's bring up critical bugs needing discussion here and save the others for the end, okay? :)
21:37:15 <hartsocks> sorry.
21:37:22 <dansmith> #topic subteam reports
21:37:30 <dripton> db
21:37:31 <dansmith> who has a subteam report to make?
21:37:37 <n0ano> scheduler
21:38:04 <johnthetubaguy> xenapi: smokestack (was) back testing on XenServer 6.1 EOM
21:38:09 <dansmith> okay, dripton: go for db
21:38:13 <dansmith> johnthetubaguy: nice :)
21:38:32 <dripton> DB meeting failed to happen again.  Need a new meeting time that's more Europe friendly since that's where everyone is.
21:38:39 <dripton> But we got a bunch of good patches in
21:38:50 <dripton> And maurosr fixed the postgresql gate failure, which is awesome.
21:38:52 <dripton> that's all
21:38:55 <johnthetubaguy> dansmith: credit to danprince for his good work
21:39:12 <dansmith> johnthetubaguy: sorry, you said EOM, so you have to save it until next week :)
21:39:17 <dansmith> dripton: okay, cool
21:39:30 <dansmith> n0ano: scheduler, go for it
21:39:33 <johnthetubaguy> dansmith: lol, oK
21:39:38 <devananda> ironic
21:39:51 <n0ano> long discussion on the isolation filter, hope to coalesce 3 BPs into one ...
21:40:18 <n0ano> jog0 brought up the issue of BlueHost creating a 16K OpenStack cluster and having to throw out the scheduler (it didn't scale)...
21:40:31 <n0ano> expect to see a lively discussion over that issue at the meeting next Tues.
21:40:44 <n0ano> that's about it, read the IRC log for details
21:40:49 <dansmith> hah, okay, cool
21:41:04 <dansmith> hartsocks: vmware
21:41:19 <hartsocks> So, we're starting to get some motion on blueprints.
21:41:29 <hartsocks> There are several blueprint related patches up for review.
21:41:54 <hartsocks> We're having some in-depth design discussions on some of these because a few conflict with each other.
21:42:19 <hartsocks> Also, reviews are taking a long time.
21:42:59 <hartsocks> We're looking at getting together a group of people who are really good with VMwareAPI to kind of "pre-review" these patches ahead of the core team; that way y'all can focus on the OpenStack bits of the patches.
21:43:29 <hartsocks> That… and we've got guys in about 5 timezones now… so yeah. That's fun.
21:43:50 <hartsocks> That's it for us. See the Wiki for logs.
21:45:04 <dansmith> welcome to the club :)
21:45:04 <dansmith> it's really scary when you get five guys in six timezones, but..
21:45:04 <dansmith> okay, cool
21:45:04 <dansmith> devananda: ironic
21:45:10 <devananda> hi!
21:45:26 <devananda> wanted to say thanks to dansmith - we've ported some of his work with unified objects
21:45:42 <devananda> and are moving with that now, hoping to see it get into oslo :)
21:45:46 <dansmith> woo!
21:46:07 <devananda> other than that, i think there's not a lot of nova-facing news.
21:46:32 <devananda> [eol]
21:46:37 <dansmith> heh, okay, cool
21:46:50 <dansmith> comstud: you want to (or want me to) talk about objects a little?
21:46:50 <devananda> oh!
21:47:00 <devananda> i nearly forgot - one other thing.
21:47:17 <devananda> gherivero is porting a lot of the image manipulation from nova to glance-client
21:47:21 * devananda looks for the link
21:47:54 <devananda> #link https://review.openstack.org/#/c/31473/
21:48:08 <devananda> not sure how directly that affects you guys, but worth knowing about
21:48:14 <devananda> really done this time :)
21:48:20 <dansmith> cool
21:48:31 <dansmith> well, I'll talk about objects real quick then
21:48:47 <dansmith> so, we're making lots of progress on getting objects into the tree
21:49:07 <comstud> well
21:49:11 <dansmith> we've got a lot of base stuff integrated, and we keep refining that with the patches that actually start using them floating ever so slightly above the cut line
21:49:12 <comstud> we need reviewers.
21:49:17 <comstud> other than me
21:49:20 <comstud> and dan
21:49:21 <comstud> :)
21:49:24 <dansmith> heh, yes :)
21:49:26 <dansmith> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/unified-object-model,n,z
21:49:37 <vishy> devananda: what image manip?
21:49:49 <comstud> dansmith has about 100 patches up already
21:49:49 <dansmith> er,
21:49:53 <dansmith> #link https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/unified-object-model,n,z
21:49:54 <dansmith> heh
21:50:09 <dripton> this meeting was a great break from gerrit emailing me constantly with more dansmith patches
21:50:17 <dansmith> haha
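(For anyone not following the review topic above, a minimal self-contained sketch of the unified-objects idea — simplified, not the actual NovaObject base class in the series: an object tracks which fields changed so it can be passed over RPC and saved through conductor without shipping raw dicts and full-row updates around.)

    class ObjectSketch(object):
        fields = ('uuid', 'host')

        def __init__(self):
            self._changed = set()

        def __setattr__(self, name, value):
            # Record field changes so save() can persist only what changed.
            if name in self.fields:
                self._changed.add(name)
            super(ObjectSketch, self).__setattr__(name, value)

        def obj_what_changed(self):
            return set(self._changed)

        def save(self, context=None):
            # In the real series this would be a remotable call routed
            # through conductor, persisting only the changed fields.
            self._changed.clear()

    # e.g. obj = ObjectSketch(); obj.host = 'compute1'
    # obj.obj_what_changed() -> {'host'}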
21:50:30 <devananda> vishy: comments on that review explain it pretty thoroughly. tl;dr - nova and cinder duplicate a bunch of the code to download and manipulate glance images. ironic needs to do the same stuff
21:50:49 <devananda> vishy: and we figure it's time to share that code instead of copying it all over ... :)
21:50:53 <dripton> devananda: does that mean the code comes out of nova after it goes into glanceclient?
21:51:08 <vishy> i guess i'm not clear on "manipulate"
21:51:18 <devananda> dripton: i think that's the plan, yes.
21:51:23 <comstud> vishy: nova's image_service code.
21:51:24 <dripton> devananda: ++
21:51:37 <comstud> nova/image/glance.py
21:51:53 <vishy> yeah I'm wondering what manipulation it did
21:52:00 <devananda> dripton: i'm not sure if ghe is planning on doing that (would love it if any nova folks want to help out)
21:52:04 <devananda> vishy: file injection
21:52:06 <vishy> just curious if it overlaps with the stuff going into brick
21:52:10 <comstud> i think that's just his word for create/update/delete.. which it all wraps
21:52:13 <comstud> :)
21:53:30 <comstud> if we need this much wrapping of glanceclient...
21:53:36 <comstud> it does feel like there's a problem with glanceclient
21:53:43 <comstud> or something.
21:54:06 <vishy> ok i don't see anything in there for qcow or lvm handling
21:54:09 <vishy> seems ok then
21:55:37 <dansmith> okay, anything else on subteam reports?
21:56:18 <hartsocks> well...
21:56:35 <hartsocks> some of us vmware folks might need to be schooled on what "ephemeral" disks are...
21:57:02 <dansmith> sounds like a perfect segue into the open discussion/whining portion
21:57:07 <dansmith> #topic open discussion
21:57:12 <hartsocks> I keep forgetting to mention we also have #openstack-vmware
21:57:33 <hartsocks> for *developer* discussion tho
21:57:49 <hartsocks> Please don't send disgruntled Admins in there...
21:58:09 <hartsocks> :-p
21:59:53 <dansmith> okay, time
22:00:02 <dansmith> #endmeeting