19:00:25 #startmeeting ironic
19:00:26 Meeting started Mon Feb 17 19:00:25 2014 UTC and is due to finish in 60 minutes. The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:28 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:30 Hi NobodyCam, nice to be back.
19:00:31 The meeting name has been set to 'ironic'
19:00:38 it's great to see lots of familiar faces :)
19:00:47 as usual, our rough agenda is here
19:00:49 #link https://wiki.openstack.org/wiki/Meetings/Ironic
19:01:06 #topic announcements
19:01:14 one brief announcement
19:01:26 i emailed the list this morning but want to call it out
19:01:38 if you submitted a patch over the weekend and jenkins -1'd it for a py26 failure
19:01:41 that's fixed now :)
19:02:02 NobodyCam: any other announcements?
19:02:14 i'd like to announce that we have start Review jams on Mondays and Thursdays
19:02:48 s/start/started/
19:02:49 ah, right
19:02:59 in an effort to get our review queue moving faster...
19:03:00 what are jams?
19:03:02 so NobodyCam, what's a review jam?
19:03:18 as many -core folks as can make it all get online together and focus just on doing code reviews for a few hours
19:03:19 sounds good
19:03:34 ya that ^^
19:03:41 ah
19:03:45 we'll note the time(s) of them on the ML, and if you're in channel you'll probably see a lot of chatter
19:04:00 great!
19:04:05 and we're stashing notes as we go, follow-up concerns, etc, here
19:04:07 #link https://etherpad.openstack.org/p/IronicReviewDay
19:04:36 ok, moving on
19:04:59 i'm actually going to skip our regular topics for the moment (they'll get covered by other topics though)
19:05:06 #topic Icehouse-3 planning
19:05:16 ahh yes
19:05:21 I sent a rather lengthy email to the list last week
19:05:32 someone want to dig up the link? (I don't have it handy)
19:05:52 lemme find it
19:06:03 tl;dr - I outlined the various technical deficits between where we are today and what we need to graduate in Icehouse and become an integrated project
19:06:25 some of them are well under way...
19:06:33 and some of them look likely to prevent our graduation
19:06:40 #link http://lists.openstack.org/pipermail/openstack-dev/2014-February/026962.html
19:06:43 such as docs, the nova driver, and integration tests
19:06:46 lucasagomes: thanks
19:07:26 we should be seeing some doc reviews shortly (this week I'm hoping)
19:07:50 so by nova driver you also mean a 3rd party ci in place?
19:07:50 russellb's feedback there is that we need to have the nova driver landed and fully CI'd (devstack + tempest doing functional tests of it)
19:07:53 before we can land
19:07:56 mrda: no
19:08:13 mrda: in this context, third party CI means things like the SeaMicro and HP iLO drivers in Ironic
19:08:22 those are not part of the graduation requirement
19:08:38 so we're planning on using infra ci all the way except for driver testing, right?
19:08:39 for nova driver we will need to start poking people??
19:08:59 mrda: we can rely on infra for CI of the SSH and PXE drivers
19:09:20 mrda: we can't do either IPMI, or the 3rd party drivers, in infra's CI today -- and that's OK
19:09:34 we'll need it eventually (probably by Juno) and we should get a lot of that from tripleo-ci
19:09:35 NobodyCam, +1 I haven't seen many reviews on the nova ironic driver patches
19:09:50 NobodyCam: yes, start poking people in nova
19:09:57 #link https://review.openstack.org/#/c/70348/
19:10:09 ^^^^ needed for devstack CI
19:10:10 russellb: anyone in particular we should start nagging for feedback on our nova driver patches?
19:10:11 I think that's a chicken and egg problem actually.
19:10:35 I don't think we'll get the review focus we'd like until there's confidence that the patches will work.
19:11:16 russellb: or do we need to wait even on getting review feedback until we have CI in place?
19:12:02 for CI, we need devstack able to create the necessary environment (net bridge, bunch of VMs, SSH key, etc)
19:12:39 devstack should accept such a patch before the driver's in Nova. we'll then create an infra job in the experimental pipe to exercise that part of devstack
19:12:56 and trigger it only on the nova review that is the tail of our driver patch chain
19:13:01 (yea, it's a chain, not just one patch)
19:13:11 I think the nova driver won't get review until the ci is ready, so I think we need to focus there, FWIW.
19:13:16 mrda: I agree
19:13:26 mrda: ++
19:13:30 which is why i'm bringing it up at the start of our meeting :)
19:13:45 cause if we don't get that ASAP, we won't graduate, even if the rest of Ironic is working
19:13:58 devananda, what type of net bridge is needed for devstack? is there a patch for that?
19:14:00 hmmm, where's romcheg ....
19:14:00 (was in UT last week and chatted to some Nova cores informally)
19:14:29 agordeev2: hi!
19:14:48 devananda: hi!
19:14:56 agordeev2: are you around / still working on the devstack patch for Ironic? see the conversation ^^ ? :)
19:15:17 #link https://review.openstack.org/#/c/70348/
19:15:27 that ^ is the start of getting devstack to do what we need
19:15:49 devananda: yes, i do. I'm going to pay more attention to it tomorrow
19:15:57 great
19:15:58 and during that week too
19:15:58 agordeev2: great
19:16:14 * lucasagomes adds that link to his todo list to review later
19:16:24 i'm going to start testing / hacking on it this week
19:17:21 for now only one question. What sort of linux distro is preferred for the infra CI?
19:17:24 if anyone's familiar with devstack or CI already, this'd be a great place to help out right now. it's a critical path item for the next month or so
19:17:49 is GheRivero here?
19:17:57 clarkb: what distro is tempest run on?
19:17:57 GheRivero: ^^^^ ?????
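For context, a minimal sketch of the kind of devstack localrc settings the change under review (70348) was working toward — the service names and variables below are assumptions drawn from the in-flight patch, not a merged interface:

```
# Hypothetical localrc fragment for an Ironic devstack run
# (variable names assumed, based on the devstack change under review).
enable_service ironic ir-api ir-cond

# Have Nova use the (not yet merged) Ironic virt driver.
VIRT_DRIVER=ironic

# Fake "bare metal" environment: local VMs driven over SSH,
# i.e. the SSH power driver + PXE deploy driver combination
# that infra CI can exercise without real IPMI hardware.
IRONIC_DEPLOY_DRIVER=pxe_ssh
IRONIC_VM_COUNT=1
IRONIC_VM_SPECS_RAM=1024
```

This matches the discussion above: only the SSH/PXE path is testable in infra's ephemeral VMs; IPMI and third-party drivers need real hardware and are out of scope for graduation.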
19:18:01 devananda: precise
19:18:08 clarkb: thanks
19:18:31 tripleo-incubator script heavily relies on the --persistent virsh option which is not possible on precise.
19:18:50 agordeev2: which script?
19:19:09 ah, setup-network
19:19:20 create-node also
19:19:48 agordeev2, I usually test things on fedora, e.g. to write that pad with the steps to deploy a machine etc... (note I work for rh)
19:20:14 agordeev2: infra's tempest nodes are ephemeral (destroyed after one test run)
19:20:38 agordeev2: do we need --persistent? if not, it should be easy to patch tripleo-incubator to remove that option when run on precise
19:21:17 devananda: there was something about it in hipchat this morning ... I did not read all of it
19:21:46 lucasagomes: for the CI stuff, any chance you can work on precise so we're all focusing on the same issues (at least for now, until we have some CI, then add fedora support)
19:22:28 devananda, yeah sure I can do that
19:22:43 lucasagomes: thanks :)
19:23:03 any more questions/concerns on CI?
19:23:20 concerns: neutron integration
19:23:23 devananda: yes. What about neutron support?
19:23:29 ahh
19:23:51 yea, well, we depend on neutron to set the DHCP BOOT option
19:24:13 by "we" i mean the PXE driver, which is our reference deploy driver implementation
19:24:34 so yea, to do CI with the Nova driver, we need to enable Neutron in those tempest tests
19:24:45 and yea, we're kinda tied to Neutron's gate issues ....
19:24:47 right, btw lemme ask something... we have this integration with neutron but not with nova network right?
19:24:55 lucasagomes: right
19:25:26 cause nova network doesn't know how to handle pxe boots?
19:25:47 nova-net doesn't have an API for setting the DHCP BOOT option
19:26:07 ah ack... cause I was looking at the code to see if nova bm does have a nova net integration
19:26:17 thanks
19:26:22 and we don't have a nova-net client linked in to ironic/drivers/modules/pxe
19:26:44 now that nova-net is no longer feature frozen, i suppose it might be possible to add it there
19:26:53 hmmm, devananda, is it an option to retrofit DHCP BOOT into nova-network? I'm not sure we want to be tied to neutron.
19:27:01 mrda: see ^ :)
19:27:43 that's going to be really tough to retrofit in a month
19:27:51 yup
19:27:52 devananda, ahh that's interesting, yea we should keep our eyes on that, cause if someone adds an integration to nova bm and nova net we would have to add it to ironic too in order to have feature parity with nova bm
19:28:04 lucasagomes: yep
19:28:35 ok, moving on before we run out of time :)
19:28:42 #topic feature freeze
19:28:52 so, two things
19:29:37 1. lots of projects agreed to stop accepting new code submissions after the 18th. we probably can't follow suit since we have so much to do
19:29:50 2. I3 is supposed to be global feature freeze, and we need to adhere to that
19:30:03 caveat is FFE's - feature freeze exceptions
19:30:18 what is the date for I3?
19:30:37 do we have ffe for nova driver bits already?
19:30:45 linggao, march 6th (I think, lemme search)
19:30:56 we have a lot to do before Icehouse... i'd like to ask that folks prioritize reviews and new code based on what is in the critical path to graduation
19:31:36 lucasagomes, thanks
19:31:47 I3 is March 6, although string freeze is 2 days earlier
19:32:02 mrda: thanks
19:32:07 (and string freeze also means feature freeze)
19:32:21 so March 4
19:32:25 so that week, a lot of us will be at the code sprint in SJC
19:32:26 So I'll talk to Sun Jing to make sure we finish the console bp by that date.
19:32:51 linggao: Are you folks working on that? I had started to take a look at the console BP as well.
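The Neutron dependency discussed above — the PXE driver needing Neutron to set the DHCP boot option on a port — works through Neutron's `extra_dhcp_opts` port attribute. A rough sketch of the idea (the helper below is hypothetical; the `opt_name` values follow Neutron's dnsmasq-style option names, and the addresses are illustrative placeholders):

```python
def build_pxe_dhcp_opts(bootfile="pxelinux.0", tftp_server="192.0.2.1"):
    """Build the extra_dhcp_opts list for a Neutron port update.

    bootfile corresponds to DHCP option 67 (bootfile-name) and
    tftp_server to the TFTP server address handed to the NIC's PXE ROM.
    """
    return [
        {"opt_name": "bootfile-name", "opt_value": bootfile},
        {"opt_name": "server-ip-address", "opt_value": tftp_server},
    ]

# With python-neutronclient, the driver would then do something like:
#   neutron.update_port(port_id,
#       {"port": {"extra_dhcp_opts": build_pxe_dhcp_opts()}})
# nova-network exposes no equivalent API, which is the gap discussed above.
```

This is why the PXE deploy driver is tied to Neutron (and its gate), and why supporting nova-network would require retrofitting such an API there first.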
19:32:57 the weekend before (Mar 2) I'll go through and block (-2) any large patches that aren't critical to graduation
19:33:06 I will stand down if you are doing it. Or I'm happy to help.
19:33:06 linggao: before that date... needs to land by the 6th
19:33:16 linggao, ack, you guys might want to take a look at https://review.openstack.org/#/c/72998/
19:34:15 matty_dubs, yes. Sun Jing is still working on 64100
19:34:32 NobodyCam, yes.
19:34:37 :)
19:34:37 I'll also bump any BP's that aren't implemented by March 2
19:34:37 linggao: OK, excellent. I was worried she had been preoccupied. Let me know if I can lend a hand.
19:35:30 matty_dubs, thanks.
19:35:33 awesome thank you matty_dubs
19:35:38 :)
19:35:40 linggao, matty_dubs: if you guys can get the console done by then, that's great -- if it's mostly done, I think a FFE would be fine, too
19:35:55 NobodyCam: yes, we should file an FFE with Nova for our driver as soon as Nova starts accepting them
19:36:08 devananda: just saw your message. i'm not sure of anyone in particular. If we think it's going to miss Icehouse, probably best to just put off review while we focus on Icehouse items, honestly
19:36:36 devananda, I'll talk to Sun Jing tonight and let you know by tomorrow.
19:36:59 russellb: that's the largest item // most likely to cause us to miss graduation at this point, so I'm going to focus on it for the next few weeks
19:37:13 OK
19:37:25 sounds like consensus is to block on CI
19:37:37 russellb: if you know that there's no chance of it landing, let me know soon so i don't kill myself trying :)
19:37:50 i think CI is the sticking point
19:38:17 i definitely don't think you should kill yourself over it
19:38:33 but I think CI in place will then get people reviewing it
19:38:48 russellb: ack. my concern is the ramifications of ironic not graduating, eg. for tripleo // projects based on nova being able to provision physical machines
19:38:56 understood
19:39:03 russellb: many of which are counting on functionality in Ironic and likely to start using it even if it doesn't get integrated
19:39:20 not that that's a reason for Nova to accept something without CI :)
19:39:25 right..
19:40:11 but those projects have nova-baremetal in the meantime, in theory
19:40:40 which has some significant limitations (no HA, no support for vendors)
19:41:05 anyhow, i'll see how much progress we can make on CI in a very short time
19:41:08 thanks :)
19:41:45 ok, moving on
19:41:50 #topic code cleanup
19:42:03 there've been several patches by folks doing code cleanup
19:42:27 i'd like to know how folks feel about this -- should we review? or postpone in light of the upcoming feature freeze?
19:43:21 we need to prioritize the more important patches
19:43:24 I think if we have time to review - it's better to spend it on critical items
19:43:54 and cleanup is low priority
19:43:54 if it's a _small_ patch fixing some cosmetic problems it's fine, but larger patches or series might be postponed
19:43:56 I feel small cleanup patches are good, but larger ones that remove functionality are tougher
19:44:15 lucasagomes: +
19:44:24 lucasagomes: ++
19:44:35 I thought we talked about this at the last meeting, noting that they should be merged very soon if it's going to happen
19:44:41 Or maybe that was your email, devananda?
19:44:50 (I don't have a strong opinion either way, though)
19:45:09 matty_dubs: yea, we did briefly, but folks are still proposing more
19:45:17 matty_dubs: i'm hesitating on -2'ing until there's consensus
19:45:30 Ah, okay.
19:45:56 example of small fix: https://review.openstack.org/#/c/74114/
19:46:07 example of big code cleanup: https://review.openstack.org/#/c/73223/
19:46:16 devananda: yea !
19:46:39 so, cores, please vote
19:46:44 (let's see if i get the syntax right)
19:47:12 74114 is a follow-up one
19:47:15 :)
19:47:16 #startvote Should we block any further large code cleanups until after Juno opens? (+1 == yes, -1 == no)
19:47:17 Begin voting on: Should we block any further large code cleanups until after Juno opens? Valid vote options are , +1, yes, -1, no, .
19:47:18 Vote using '#vote OPTION'. Only your last vote counts.
19:47:21 #vote for small comestic clean up patchs only
19:47:22 NobodyCam: for small comestic clean up patchs only is not a valid option. Valid options are , +1, yes, -1, no, .
19:47:51 #vote +1
19:47:53 #vote +1
19:48:05 #vote +1
19:48:13 #vote +1
19:48:28 4/6, two not present
19:48:29 but just to make clear, I don't see a problem in people proposing new patches
19:48:30 this means we still tend to merge all that we have till now, right?
19:48:46 but they will hang in the queue for a while
19:48:55 at least those ones that we already looked at
19:48:56 ya, is -2 needed?
19:48:58 max_lobur: ah, just to be clear, i'll go -2 existing large code refactorings that aren't functional changes
19:48:58 new patches fixing cosmetic problems I mean... and larger ones
19:49:10 devananda: +
19:49:18 need to clean up the queue
19:49:20 just a no-vote comment should hold other cores from approving
19:49:27 so that's the appropriate way I think
19:49:36 or we may ask to abandon the change
19:49:51 so it can be restored later
19:49:59 ok. does that change anyone's vote? last chance :)
19:50:03 that's the same basically :)
19:50:06 right
19:50:18 heh I gotta think more about -2'ing it tho
19:50:26 I mean, I don't see the problem in leaving it in the queue
19:50:34 other people might want to review them
19:50:47 lucasagomes: the folks who proposed it will continue to spend cycles maintaining it
19:50:58 leaving it also means someone new won't re-propose it
19:51:03 if we're not going to merge them any time soon, they will go stale
19:51:04 in a new patch
19:51:10 right
19:51:10 and reviews will go stale too
19:51:11 right hmm
19:51:22 it blocks up the review queue and gives the impression it will be reviewed
19:51:35 we need a /deferred/ status
19:51:42 let someone who has time to review spend it on our critical patches :)
19:51:53 heh
19:51:56 yea makes sense
19:51:58 ok agreed
19:52:20 alternative is we all go land them now, and start refactoring all the in-flight critical patches to work with the new cleaned-up state of things
19:52:23 ok :)
19:52:25 #endvote
19:52:26 Voted on "Should we block any further large code cleanups until after Juno opens?" Results are
19:52:27 +1 (4): max_lobur, NobodyCam, devananda, lucasagomes
19:52:37 thanks guys
19:52:37 I will agree to -2 as long as it comes with a good comment to the dev
19:52:43 NobodyCam: absolutely
19:52:46 :)
19:52:50 NobodyCam, +2
19:52:51 :)
19:53:00 #action devananda to post to ML regarding large code cleanups prior to -2'ing them
19:53:02 NobodyCam: fair
19:53:03 +1
19:53:21 ok, ~7 min left
19:53:24 #topic Open Discussion
19:53:49 Any reviews on SeaMicro blueprints?
19:54:19 lucasagomes: are you going to be able to get the rebased nova driver patches up today or is it too late for you?
19:54:35 k4n0: given that it's not critical to icehouse graduation, i wouldn't count on core reviewers having a lot of time for it until Juno opens
19:54:45 NobodyCam, I started it today, but I would finish tomorrow
19:54:52 :)
19:54:58 devananda: ok
19:54:58 NobodyCam, if you need it I can try to finish up
19:55:02 folks, have you seen the https://review.openstack.org/#/c/74063/ comments?
19:55:03 k4n0: that said, I'll continue to try to look at all the 3rd party drivers from time to time and give feedback
19:55:14 ack: no, it's a holiday here today
19:55:24 quick note since I see someone added "Functional/Integration testing of vendor drivers (Tempest?)" to the agenda
19:55:24 devananda: yes, the feedback is more important, they can land whenever time permits
19:56:01 NobodyCam, right, tomorrow morning I'll finish it up :) (and it will be pretty early there in the US so it's grand)
19:56:17 NobodyCam, thanks
19:56:17 I want to discuss some sort of driver model to extend the chassis object for vendor-specific purposes. Any comments?
19:56:18 3rd party CI isn't needed for Icehouse. we'll want to discuss it in depth at the Summit, though
19:56:21 lucasagomes: Awesome TY :)
19:56:52 k4n0: my guess is that you're looking for a way for drivers to do $thing without needing a node to act on
19:57:06 k4n0: eg, discovery, or something
19:57:07 devananda: yes :)
19:57:29 k4n0: i'm not sure chassis is the right place for that, but in general, yep, we'll need that. maybe propose something for the summit? :)
19:58:19 devananda: from a vendor's pov, chassis actions can be exposed from the chassis object, right?
19:58:45 k4n0: from an API perspective, yes. but there is also an API endpoint for drivers
19:59:04 - One Minute -
19:59:13 can we continue in-channel?
19:59:17 +
19:59:23 k4n0: so eg. discovery may make more sense as POST /v1/drivers/seamicro/discover {'range': '10.0.0.0/24'}
19:59:29 yep
19:59:41 ok, let's discuss in-channel
19:59:42 Great meeting. Thank you all
19:59:45 thanks
19:59:49 cheers, thanks everyone
19:59:50 thanks Everyone!
19:59:53 thanks!
19:59:56 thanks
19:59:58 #endmeeting
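As a postscript, the driver-level endpoint devananda sketched near the end (POST /v1/drivers/seamicro/discover) might look like this from a client's point of view. Everything here is hypothetical — the URL shape and payload come straight from the chat, and are a proposal for a future summit, not an existing Ironic API; nothing is actually sent:

```python
import json

# Placeholder API endpoint; a real deployment would use its own address.
IRONIC_API = "http://ironic.example.com:6385"

def build_discover_request(driver, cidr):
    """Assemble the URL and JSON body for the proposed driver-level
    discovery call (hypothetical vendor-passthru-style endpoint)."""
    url = "%s/v1/drivers/%s/discover" % (IRONIC_API, driver)
    body = json.dumps({"range": cidr})
    return url, body

url, body = build_discover_request("seamicro", "10.0.0.0/24")
# e.g. with python-requests one would then issue:
#   requests.post(url, data=body,
#                 headers={"Content-Type": "application/json"})
```

The point of the design, per the discussion, is that discovery acts on a driver rather than on an existing node, so it hangs off /v1/drivers/... instead of the chassis or node resources.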