21:01:27 #startmeeting nova
21:01:28 Meeting started Thu Jun 5 21:01:27 2014 UTC and is due to finish in 60 minutes. The chair is mikal. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:31 The meeting name has been set to 'nova'
21:01:34 o/
21:01:38 o/
21:01:38 o/
21:01:40 o/
21:01:40 So, who is around?
21:01:41 hi
21:01:42 o/
21:01:45 o/
21:01:46 hi
21:02:00 Cool!
21:02:09 .
21:02:11 The agenda for today is at https://wiki.openstack.org/wiki/Meetings/Nova as always
21:02:16 o/
21:02:29 #topic Juno mid-cycle meetup
21:02:45 So while I slept last week, John asked people to fill in a survey for the mi cycle meetup
21:02:49 mid even
21:03:05 The winning dates were the weekdays after OSCON
21:03:13 the weekend?
21:03:17 oh weekdays
21:03:17 So July 28 - 30
21:03:23 russellb: no, Monday thru Wednesday
21:03:27 so my cash bribes worked
21:03:27 great.
21:03:30 Heh
21:03:45 Intel has now confirmed they can host on those dates, so I think we're good to announce
21:03:54 I'll send an email to openstack-dev after this meeting
21:04:00 o/
21:04:10 I intend to try and arrange a block hotel booking, but that hasn't been done yet
21:04:19 But at least this way people can start negotiating with their managers
21:04:47 where will it be, portland?
21:04:48 Is there anything else we need to cover there apart from my promise of an email soon?
21:04:58 jogo: Beaverton, at the Intel campus
21:05:02 jogo, technically beaverton
21:05:12 jogo: which is a 20 minute drive from the airport IIRC
21:05:20 neato
21:05:20 main portland airport would be best i presume?
21:05:22 ah.
21:05:28 dansmith will give everyone a ride
21:05:29 pfft
21:05:31 russellb: I think so, but I shall confirm
21:05:34 not even 20 minutes at midnight
21:05:39 mikal, you drive fast :-)
21:05:42 I think I was told there is a train as well, but that might be a lie
21:06:06 n0ano: which campus?
21:06:21 dansmith, jones farm is the current site
21:06:28 okay
21:06:41 Oh, I see
21:06:46 I didn't realize there was more than one campus
21:06:50 I was looking at maps for Aloha
21:07:04 there are lots of them
21:07:07 all over the damned place
21:07:08 there's about 5, maybe more
21:07:14 *shrug*
21:07:14 five that we know about
21:07:17 We'll work it out
21:07:23 There are secret hidden campuses?
21:07:25 wiki-ify it
21:07:28 it’s intel
21:07:33 with the deets
21:07:33 everything is a secret
21:07:38 russellb: yep, that's the plan
21:07:40 all the campuses have privacy berms in front
21:07:43 they're all relatively close together (<10 min by car)
21:07:45 k
21:07:58 “look not beyond yonder wall”
21:08:04 Heh
21:08:13 Ok, so, we're having the meetup in a lair
21:08:17 In other news...
21:08:23 Are we done with this topic for now?
21:08:30 will there be cake?
21:08:33 yes.
21:08:37 i mean yes, i'm done.
21:08:54 #action mikal to wiki page up the mid cycle details
21:09:04 #topic Gate breakage
21:09:12 This one isn't on the agenda because it's new
21:09:24 I've woken up to emails saying everything is busted
21:09:31 one of the breakers was force merged
21:09:34 the resize one
21:09:35 Does someone have more details on whether there are things nova needs to do to fix the world?
21:09:53 there was a get-pip one that sdague fixed yesterday
21:10:02 sounds like things are still backed up, not sure if nova is the main issue now
21:10:14 Ok, but we should hold off on approving things, right?
21:10:15 neutron has the ssh issue everyone knows and loves as #1
21:10:25 i think that's what they were asking, unless they fix race bugs
21:10:36 Ok
21:10:44 But we're not aware of any more nova bugs that need looking at?
21:10:57 look at http://status.openstack.org/elastic-recheck/
21:10:58 well let's see
21:10:59 yeah
21:11:00 that
21:11:08 Bug 1254890 - "Timed out waiting for thing ... to become ACTIVE" causes tempest-dsvm-* failures
21:11:09 Launchpad bug 1254890 in nova ""Timed out waiting for thing ... to become ACTIVE" causes tempest-dsvm-* failures" [High,Confirmed] https://launchpad.net/bugs/1254890
21:11:28 that looks like the biggest nova one
21:11:39 which has been around forever
21:11:46 jogo: that's probably not a nova bug though right?
21:11:48 yup
21:11:50 sure
21:11:51 it may be
21:11:53 jogo: that's nova waiting on other services?
21:11:54 compute api tests can timeout all over
21:11:58 that bug tends to catch a number of underlying bugs
21:11:59 not necessarily
21:12:00 o/ late to the meeting.
21:12:02 need to dig into specific failures
21:12:10 e.g. timeout waiting for snapshot
21:12:12 something like that
21:12:15 qemu-img taking too long
21:12:15 yeah, so it's not uncommon as we dig into bugs to find they are 5 or 6 bugs
21:12:24 in which case we can split the bug up and add separate e-r queries etc
21:12:30 that one in particular ... a bunch has come out of that in the past, it's been on there a long time
21:12:33 jogo: do you split them out into separate bugs at that point?
21:12:39 Cool
21:12:49 we want this: https://review.openstack.org/#/c/97812/
21:13:04 i think we need to help get better diags when things fail to help with a lot of these timeouts
21:13:17 mikal: yeah. we just say this bug covered several underlying issues ...
21:13:27 Ok
21:13:28 there are probably several different similar timeout bugs
21:13:37 But I'm not seeing a specific call to action here for nova, is that fair?
21:13:40 ?
21:13:41 not sure if there are some that hit more than others, we haven't dug into that
21:13:49 except please don't approve stuff
21:13:53 also getting rid of stacktraces in the logs really helps with these things
21:13:56 Yeah, except that
21:14:15 we still have a lot
21:14:17 jogo: is there a list of bogus stacktraces in logs somewhere?
21:14:29 jogo: how would someone wanting to help with that proceed?
21:14:39 there is a whitelist dump at the end of the runs
21:14:44 of all the errors that aren't whitelisted from the logs
21:14:50 mikal: what mriedem said
21:14:51 used to gate on that but couldn't keep up
21:15:01 i think the only thing we gate on is no errors in n-cond
21:15:14 right, afaik
21:15:16 and alaski has a fix up for one of those
21:15:28 would be this https://review.openstack.org/#/c/96955/
21:15:37 #action We need to help remove bogus stack traces from our tempest logs
21:15:50 i thought, maybe that's not the one
21:16:01 mriedem: 97942 maybe?
21:16:01 there was a bogus info cache one
21:16:25 alaski: doesn't look right
21:16:29 That second one is approved
21:16:30 i think there is a specific bug
21:16:40 mriedem: oh, the one you're thinking of merged
21:16:56 mriedem: https://review.openstack.org/#/c/96824/
21:17:06 that's the one
21:17:09 mikal: sample output http://logs.openstack.org/98/96998/1/check/check-tempest-dsvm-full/95f0c01/console.html#_2014-06-03_04_42_16_638
21:17:20 from most recent nova patch that merged
21:17:25 yeah
21:17:33 there was a sec group list race bug too that was merged yesterday
21:17:40 fix merged i mean
21:18:01 mriedem: correct me if I am wrong, but a lot of the instability this time isn't from nova
21:18:13 besides the resize thing reverted this morning, i think that's correct
21:18:18 mostly neutron looks like ...
21:18:19 Cool
21:18:19 lots of infra issues
21:18:22 and neutron yeah
21:18:25 I didn't mean to imply it was
21:18:31 Just making sure we're pulling in the right direction
21:18:32 i'm sure those teams wouldn't mind help on those issues though :)
21:18:32 ceilometer UT is shitting the bed also
21:18:38 It sounds like we're off the hook mostly
21:18:44 they copied our test timeouts and started timing out :)
21:19:07 Is there anything else here or should we move on?
21:19:10 move on
21:19:18 #topic juno-1
21:19:36 Over the last week johnthetubaguy has been pushing things from juno-1 to juno-2 that look like they won't land in time
21:19:44 juno-1 being 12 June, which is real soon now
21:20:06 We should not be approving specs at the moment, but instead should be trying to review bps targeted to juno-1 (and bug fixes, more on that later)
21:20:12 #link https://launchpad.net/nova/+milestone/juno-1
21:20:27 Obviously the gate thing will slow us down there
21:21:01 So this is mostly a reminder that juno-1 is just around the corner
21:21:11 johnthetubaguy: you around?
21:21:18 (I suspect not)
21:21:52 Moving on
21:22:01 #topic Bugs
21:22:04 i suspect compute-manager-objects-juno will be a moving target
21:22:18 mriedem: yeah, that seems likely to me too
21:22:19 yeah, not much point in that being anything other than j3 I think
21:22:58 bugs!
21:23:01 Ok, I changed that one
21:23:02 Bugs
21:23:02 any bug day analysis?
21:23:09 tjones ran a bug day the other day
21:23:16 we had about 8ish bugs merged yesterday
21:23:16 The early analysis is "it sucked"
21:23:25 Well, that's my analysis at least
21:23:27 people are continuing to fix and review bugs today as well
21:23:34 Ok cool
21:23:42 I think tjones did a good job
21:23:43 my 1st bug day so not sure what to expect
21:23:46 We just didn't fix enough
21:23:52 Given that we have 1,200 bugs
21:23:54 closing invalid bugs == fixing
21:23:58 imo
21:24:00 true
21:24:02 mriedem: agreed
21:24:04 there was quite a bit of that yesterday
21:24:10 1,200 is so many we just don't know what's there
21:24:14 a bunch set to "need more info" too
21:24:16 why wasn't the bug day stuff done in the nova room?
21:24:19 open bug counts went down about 30ish
21:24:20 #link http://status.openstack.org/bugday/
21:24:29 jogo: it was...
21:24:31 jogo: i was in there chasing bugs
21:24:42 mriedem: ohh I thought there was an email saying it wasn't, ignore me
21:24:44 i was going through a lot of abandoned things
21:24:45 as mriedem said yesterday - hard to see the granularity of this chart
21:25:15 The bit that worries me is that it feels to me like we're ignoring our users. They try to get our attention and we don't keep up.
21:25:35 I don't know how to fix that, except for asking people to be consistent about trying to close bugs
21:25:41 That's really a huge list to try and burn down
21:25:54 I'm 100% sure there's dupes etc in there, but I don't know how we find them when the list is so long
21:26:11 one by one
21:26:14 have we closed bugs older than a year?
21:26:22 to get us down to a sane list etc?
21:26:30 jogo: no we haven't
21:26:33 jogo: the bug day email has a link to in progress bugs but a lot of those are old and abandoned patches
21:26:37 We probably want to query bugs open for 6 months with no activity as well.
21:26:41 jogo: I think we'd want to discuss that on the mailing list before doing it
21:26:42 jogo: does that necessarily mean they are not valid any longer?
21:26:46 jogo: i went through a lot of those one by one and closed as invalid, incomplete or dupes
21:26:53 or moved back to triaged and removed assignee
21:26:57 i'm sure there are bugs >1yr old for nova-bm which are still valid, for example
21:27:10 yes
21:27:18 which brings up a point i raised yesterday about nova-bm bugs
21:27:20 there were 40+
21:27:21 I did find one which had been fixed and never marked as a duplicate of the closed issue.
21:27:28 i'm not sure those baremetal bugs are tagged for ironic
21:27:29 not as a blanket rule but as a guideline
21:27:29 but they should be
21:27:33 mriedem: ++
21:27:38 also our bug management tools need help badly. currently i get all bugs and stick them in excel to see what is going on
21:27:47 mriedem: some of them i may have untagged specifically (but there'd be a comment trail)
21:27:52 mriedem: because they didn't apply
21:28:01 I don't expect we can solve this today
21:28:08 But I would like people to ponder it
21:28:08 wish we used bugzilla
21:28:19 tjones: the foundation is writing a new thing, but it's not ready
21:28:20 how often do we do bug days?
21:28:26 When is the next bug day?
21:28:26 tjones: i'm not sure i've ever heard someone say that
21:28:27 mriedem: in general, once per release
21:28:33 mikal: we should do them more often then
21:28:34 food for thought -- is there a way to make it easier for users to submit drive-by bug fixes (and for those to get accepted/merged)
21:28:36 once a month
21:28:39 it would be better than what we have
21:28:39 tjones: pleia2 does our bugdays, do you want to chat with her about tools?
21:28:40 Last time we tried to do a second no one showed up
21:28:41 that might apply more systemically, not just to nova
21:28:49 no one showed up yesterday
21:28:51 imo
21:28:54 sure thanks anteaya
21:29:00 np
21:29:07 no one == very few compared to the number of people in the room
21:29:11 mriedem: that's kind of how I feel. I know it's unfair on the people who did, but not enough people showed up.
21:29:28 Do we think people would get bored if we did something monthly?
21:29:28 i think counting on a catch-up bug day is always going to fail long term
21:29:37 and i think a regular bug team (what tjones has been trying to do) is the best bet
21:30:00 russellb: I agree, but I also think that every dev has to help out
21:30:00 no one is showing up to the bug meetings as well. i use that time to triage bugs
21:30:20 mikal: right, and tjones has been trying to keep the work organized and broken up for devs to pitch in
21:30:30 using the tagging, and a coordinated time for people to meet and triage
21:30:30 I personally feel that some of this is because stackalytics doesn't track bug fixes, so people aren't competing based on them
21:30:38 i doubt we can get any real numbers, but the sense i have is that companies are fixing bugs internally as they hack things into product/ion, and the benefit of that isn't often flowing upstream
21:30:57 that and honestly, the bugs i care about most are ones that come from my customers
21:31:00 I've filed a bug against stackalytics to ask for that to change
21:31:01 i'm sure other people work that way too
21:31:07 it's just how it ends up happening
21:31:28 so this isn't a good solution, but at the very least we should just discuss this issue more
21:31:36 to at least raise awareness
21:31:41 Yeah, I think this will be on the agenda for the mid cycle
21:31:46 I can't see it being fully solved by then
21:31:52 tjones: you keep a list of bugs which have proposed fixes in play?
21:32:09 I feel a bit like many of the bugs with fixes out there are for bugs which were just filed
21:32:14 our procedure for infra is pretty simple, I just have a launchpadlib script that pulls a list that we drop into an etherpad and go to town https://github.com/pleia2/openstack-infra-scripts/blob/master/infra_bugday.py
21:32:17 i.e. dev files bug to track work, fixes it
21:32:19 i have a list of bugs, their age, and when last updated
21:32:36 I don't have data on that though, perhaps it's unfair
21:32:40 also having tools to keep track of stale bugs (assigned but not being worked on, patch proposed and then bit rotted)
21:32:44 lplib could be better documented, but you can grab a fair amount from the api
21:32:45 might help
21:33:11 pleia2: that is what i use - grab everything and stick in excel for analysis
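(For reference, the kind of launchpadlib pull pleia2 and tjones describe above can be quite short. The sketch below is illustrative only and is not the linked infra_bugday.py: the consumer name, the anonymous read-only login, and the status filter are assumptions, but it emits the bug number, status, age, and last-updated columns tjones mentions.)

```python
# Minimal sketch: dump open nova bugs with their age and last update,
# in a tab-separated form that pastes into an etherpad or spreadsheet.
# Assumes launchpadlib is installed; 'nova-bugday' is an illustrative
# consumer name, and the status list is one plausible triage filter.
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('nova-bugday', 'production')
nova = lp.projects['nova']

# searchTasks() returns the project's bug tasks for the given states.
for task in nova.searchTasks(status=['New', 'Confirmed', 'Triaged',
                                     'In Progress']):
    bug = task.bug
    print('%s\t%s\t%s\t%s\t%s' % (bug.id, task.status,
                                  bug.date_created.date(),
                                  bug.date_last_updated.date(),
                                  bug.title))
```

Anonymous access is enough for read-only reports like this; sorting the output by the last-updated column is one cheap way to surface the stale bugs discussed below.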
21:33:23 So what I did for bug day is trolled around looking for an interesting bug, then searched for similar ones. I think I found six related bugs to work on. I fixed three.
21:33:32 So that is a technique which might work for other people as well
21:34:16 automatically marking a bug as no longer in progress if the patch was abandoned over 6 months ago might help
21:34:21 that's come up before
21:34:28 mriedem: I think shorter than that
21:34:31 sure
21:34:33 abandoned for more than a couple of weeks
21:34:34 but automating it
21:34:48 we probably have hundreds of those
21:35:03 yep, it really needs to be automated
21:35:05 what do you mark it as??
21:35:11 depends
21:35:15 usually triaged
21:35:21 but depends on why it was abandoned
21:35:23 tjones: how often do you ask people for more info on new bugs? What release they're running, stuff like that?
21:35:38 yeah sometimes i mark those as incomplete
21:35:47 when i triage, if it is not clear to me i ask for more info and mark incomplete
21:35:57 tjones: I think we're just talking about removing the assignee and reverting from "In progress" to "Confirmed"
21:36:05 something like that
21:36:10 Would automating asking for more info help?
21:36:17 Like sending a survey to everyone who files a new bug?
21:36:31 by "triage" i mean tagging untagged bugs. I am not reading each new bug. if they come in tagged i leave them
21:36:32 That might annoy our frequent bug filers
21:37:04 We could also auto-close bugs which have been incomplete for more than six months
21:37:14 do we have a wiki page on what info is very useful for a bug report to contain? Because lots of the API ones I see are very vague
21:37:17 mriedem: ++ to an automated tool for that. would help several projects, i bet
21:37:31 and I end up setting a lot to incomplete because there simply isn't enough info to replicate the issue
21:37:54 cyeoh: I can't think of one off the top of my head
21:38:18 cyeoh: do you require a bug report to have enough info to reproduce it?
21:38:32 if there was one, i'd think we'd find the template here https://wiki.openstack.org/wiki/Bugs
21:38:35 or linked from there
21:38:53 devananda: well often it's info I know they have, they just probably didn't think to include it but would have if they'd known it would be useful
21:39:14 devananda: even if it's just simple things like whether they were running against master, or icehouse or havana etc.
21:39:18 which release, which commit, basic setup/topology, steps, stacktrace
21:39:35 #action Everyone to think about how to improve our bug state, there are some ideas for automation if people want a coding task
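(A possible starting point for that coding task: a hedged sketch of the janitor discussed above, which drops the assignee and moves a stale "In Progress" nova bug back to "Confirmed". It uses time since the bug last changed as a rough stand-in for "the proposed patch was abandoned"; a real tool would check Gerrit for the abandoned review instead. It assumes authenticated, write-capable Launchpad credentials, and the two-week window is the "couple of weeks" floated in the discussion.)

```python
# Hypothetical sketch only, not an existing tool. login_with() prompts
# for OAuth credentials with write access on first run.
from datetime import datetime, timedelta

import pytz
from launchpadlib.launchpad import Launchpad

STALE_AFTER = timedelta(weeks=2)  # "abandoned for more than a couple of weeks"

lp = Launchpad.login_with('nova-bug-janitor', 'production')
nova = lp.projects['nova']
cutoff = datetime.now(pytz.utc) - STALE_AFTER

for task in nova.searchTasks(status=['In Progress']):
    # Rough proxy: nothing has touched the bug since the cutoff. A real
    # janitor would confirm the linked review was actually abandoned.
    if task.bug.date_last_updated < cutoff:
        task.assignee = None
        task.status = 'Confirmed'
        task.lp_save()  # push both changes back to Launchpad
```

The same loop, pointed at status=['Incomplete'] with a six-month cutoff, would cover the auto-close idea above; whether the reset target should be Confirmed or Triaged was left open in the discussion.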
21:39:47 cyeoh: I think making a wiki page is a good idea, can you have a go at it please?
21:40:03 mikal: sure
21:40:08 cyeoh: thanks
21:40:18 This is really important to me, but I feel we need to move on
21:40:41 #topic Sub team reports
21:40:50 o/
21:40:56 So what sub teams do we have hanging around?
21:41:02 adrian_otto: you have a containers report?
21:41:05 * n0ano gantt
21:41:06 Containers
21:41:09 #link http://eavesdrop.openstack.org/meetings/containers/2014/containers.2014-06-03-22.00.html Containers Sub-Team Meeting Minutes
21:41:10 Top takeaway is determining how cinder support can be added and what DefCore requirements should be relaxed (if any).
21:41:10 o/
21:41:10 * devananda wonders if he qualifies as a subteam :)
21:41:35 next discussion will be about how to support operating specific needs
21:41:51 adrian_otto: I assume that's something you guys are progressing... Do you need anything from nova itself?
21:42:01 adrian_otto: or is it all in progress?
21:42:02 next meeting is 1600 UTC Tuesday. I will be at a conference and need a backup chair to start the meeting if I am unable to get online.
21:42:30 we will follow up with those two topics on the ML, requesting input
21:42:37 so please keep an eye out.
21:42:41 Cool
21:42:43 /end
21:42:54 n0ano: gantt report?
21:42:59 please let me know if you can back me up as chair (anyone)
21:43:28 sure, forklift effort still on-going, cleaning up the interfaces is the main job right now
21:43:47 n0ano: what about the refactor bp that was originally in juno-1?
21:43:57 n0ano: is that a gantt thing or being done by different people?
21:44:29 which refactor bp are you referring to, that's probably the same as what we are calling forklift these days
21:44:33 #link https://blueprints.launchpad.net/nova/+spec/scheduler-lib
21:44:40 Ahhh, I see
21:44:44 yeah, that's the one we're working on
21:44:57 Ok, cool
21:45:05 n0ano: anything else you need to raise or should we move on?
21:45:07 we seem to have scared the guy working on the no-db work, he's abandoned the BP for it
21:45:20 n0ano: you mean boris-42?
21:45:39 actually yoriksar took over from boris-42 but it's the same work
21:45:49 boris-42 was also on vacation for a few weeks post summit
21:45:49 Ok
21:45:59 i don't have the sense he's abandoning that effort, last time we talked
21:46:03 He was around yesterday, but I suspect he is very busy
21:46:16 though i hope he's rethinking it :)
21:46:37 devananda, that was the message from the BP but I agree, I don't want to give up entirely
21:46:49 We should move on, given the time
21:46:59 OK, that's the big things
21:47:01 devananda: how is the ironic nova drive going?
21:47:05 driver even
21:48:26 mikal: aside from being broken for ~36 hours, i need to get back to the specs
21:48:39 mikal: they haven't gotten a lot of feedback
21:49:04 mikal: shrews is spinning up some unit tests that will watch the internal APIs we're depending on
21:49:10 devananda: fair point
21:49:14 so that should avoid the kind of gate breakage we got two days ago
21:49:27 #action nova-drivers please take a look at the ironic driver specs
21:49:30 but as far as the driver itself, i'm not sure how best to proceed -- we need eyes on the specs
21:49:45 Yep, let's see if we can improve that over the next week
21:49:51 i'm not going to put the time into slicing up the driver itself until those are at least looking close to approved
21:49:54 thanks :)
21:50:12 Any other sub teams I missed?
21:50:16 docker.
21:50:20 vmwareapi
21:50:24 nova api
21:50:30 Oh, I suck
21:50:33 ewindisch: go!
21:50:37 LOL
21:50:47 we have pause/unpause merged and we have a patch to be merged for soft-deletes
21:51:10 mikal: fwiw, here are the links for ironic specs: https://review.openstack.org/95024 and https://review.openstack.org/95025
21:51:15 I have an AI to speak more with mikal and mark about our glance integration… so we can fix snapshots
21:51:18 ewindisch: I assume you're part of this cinder conversation with adrian_otto ?
21:51:36 mikal: yes.
21:51:44 Cool
21:52:00 Perhaps a mail thread about glance is the way to go. It might be easier than getting us all paying attention at the same time
21:52:15 Going quickly because of time...
21:52:18 tjones: go!
21:52:47 ok very quick - last phase 1 refactor patch is up for review, has a +2 from mriedem. https://review.openstack.org/92691 phase 2 will be posted later on today.
21:52:48 done
21:53:10 Excellent!
21:53:14 cyeoh: go!
21:53:17 s/has/had/
21:53:19 rebase
21:53:39 ok, so just mostly wanting more eyes on spec reviews - we're a bit blocked on doing anything at all until we get some approved
21:53:54 https://review.openstack.org/#/c/84695/ - is the v2.1 on v3 api one
21:54:18 the microversion one could do with more eyes as well - even if it's just people saying they don't care which route we take (we just need to choose one!)
21:54:24 https://review.openstack.org/#/c/96139/1/specs/juno/api-microversions.rst
21:54:34 can we be sure to talk about the microversion stuff at the meetup?
21:54:41 Ok, so that's another thing we can try and look at over the next week then
21:54:47 dansmith: yes, but we might not get everyone there
21:54:47 because I think we decided we’d punt on how that works,
21:54:49 but we need to do that
21:54:56 dansmith: we can do a hangout if needed
21:54:59 having policy implemented in the REST API I think is a lot less controversial: https://review.openstack.org/#/c/92005/
21:55:07 #action Specs for ironic and nova api need review
21:55:11 yeah unfortunately I won't be able to make the midcycle but happy to attend remotely
21:55:25 #action Discuss microversions at the mid cycle
21:55:34 cyeoh: yeah, we can work something out
21:55:42 we can proceed with v2.1 on v3 quite a way without microversions bedded down
21:55:42 we were able to get alaski in last time
21:55:47 which worked okay I think
21:55:49 we had a hangout last time, but it was unplanned and just with a cheap tablet
21:55:54 it worked pretty well
21:55:55 we could probably plan ahead and have a nicer setup
21:55:59 I think it's actually easier than at the summit
21:56:01 And we did ok there
21:56:12 Agreed we can make it a bit fancier though
21:56:12 alaski: we heard you *really* well
21:56:20 yeah, kind of amazing
21:56:26 better than people in the room
21:56:28 it was epic
21:56:29 #action Determine the AV facilities at Intel and how that works for hangouts
21:56:33 heh
21:56:40 Ok, moving on though
21:56:47 #topic Open Discussion
21:57:01 Enjoy your three minutes of open discussion
21:57:11 I want to bring up https://review.openstack.org/#/c/64769/ with everyone
21:57:32 ports are apparently being leaked at times, causing issues for some CI systems
21:57:47 I'm not a big fan of this solution, but something needs to be done I think
21:58:09 yeah, this sucks
21:58:11 not this patch,
21:58:13 the bug was filed by one of the hp cloud ops
21:58:25 who is trying to help us fix testing on hp cloud
21:58:25 :/
21:58:25 the problem of trying to maintain this synchronized state
21:58:35 it's not clear who owns the ports in this situation
21:58:38 so cleanup is a mess
21:58:44 do we need more data on the bug report?
21:58:47 s/mess/disaster/
21:59:04 can't wait until we kill nova auto-creating ports
21:59:23 I really think we should just do it
21:59:24 anteaya: the bug report is pretty comprehensive as i recall
21:59:29 kk
22:00:12 if this is coming from hp cloud ops then I think we should consider it high priority
22:00:27 the new hp 1.1 cloud (running trunk) is a big part of why things got bad in the gate
22:00:27 that is where it is coming from, yes
22:00:28 So, we should all promise to review that change?
22:00:40 that would help
22:01:01 Ok
22:01:07 nova not creating ports is the long-term solution, but we could use a stopgap
22:01:08 essentially we aren't able to run much on hp cloud right now
22:01:10 alaski: didn't you propose a patch that would help this situation on friday?
22:01:19 alaski: ++
22:01:27 That's us out of time unfortunately
22:01:41 tjones: I did. it's getting reviews, but just failed jenkins it looks like
22:02:49 https://review.openstack.org/#/c/96955/
22:03:12 Ok, we better end given we're over
22:03:16 Thanks everyone for coming
22:03:28 Please keep talking in openstack-nova if there's more to discuss
22:03:36 #endmeeting