16:00:43 #startmeeting ironic_neutron
16:00:47 Meeting started Mon Jun 29 16:00:43 2015 UTC and is due to finish in 60 minutes. The chair is Sukhdev. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:48 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:50 The meeting name has been set to 'ironic_neutron'
16:01:06 Welcome to our weekly meeting
16:01:11 who is out there?
16:01:35 jroll…are you here?
16:01:40 hi
16:01:55 #topic: Agenda
16:02:05 #link: https://wiki.openstack.org/wiki/Meetings/Ironic-neutron
16:02:16 #topic: Announcements
16:02:34 I was at the neutron mid-cycle sprint last week
16:03:07 had some discussion about ironic-neutron integration - I will cover it under the Open Discussion topic
16:03:24 Would anybody like to announce anything?
16:03:48 well, let's dive into the agenda..
16:03:59 #topic: Spec Reviews
16:04:23 #link: https://review.openstack.org/#/c/187829/
16:04:48 there are a few new comments on this - jroll you may want to answer
16:04:49 so dmitry has a couple of valid points here
16:04:59 I'll be responding / updating here later on today or tomorrow
16:05:13 * jroll just got back from vacation hence the lag :)
16:05:13 jroll: cool - thanks
16:05:33 jroll: wonderful….so, all relaxed :-)
16:05:42 something like that :)
16:06:17 jroll: you were going to discuss with Ironic cores as well as some nova folks?
16:06:27 jroll: any update on that front?
16:07:00 Sukhdev: nova folks seem fine without a spec, I put up some WIP patches there
16:07:09 and still hoping to get ironic spec reviews :)
16:07:53 * devananda lurks
16:07:56 jroll: Isn't the ironic weekly meeting after this meeting? Perhaps bring it up there
16:08:03 devananda: welcome
16:08:13 Sukhdev: it's been brought up in the meeting, spec cores are busy
16:08:19 devananda: we could use your blessing on a couple of specs
16:08:37 .... we need reviews from all cores, not just the PTL
16:08:54 it'll get done, just takes time
16:09:03 jroll: ah I see.. with Liberty-1 behind us, we may be able to get some cycles now :-)
16:09:10 Sukhdev: I bless a spec by sprinkling water on my screen while it's rendered, right?
16:09:13 :)
16:09:37 devananda: something like that, yeah :-)
16:10:10 jroll: thanks for being on top of things
16:10:19 The next spec
16:10:32 np
16:10:33 #link: https://review.openstack.org/#/c/188528/
16:10:46 lauramoore: thanks for updating it
16:10:58 lauramoore: I reviewed it - it looks good as well
16:11:02 sukhdev: thanks
16:11:26 i don't think many others have looked at it yet
16:12:03 lauramoore: hopefully within the next few days - it is a short week in the US
16:12:05 I need to review that, will do early this week
16:12:12 yes laura, it would be great to give it a day or two
16:12:41 jroll and viveknarasimhan: thanks, that'd be good
16:12:49 anything on the specs - any questions? before we move to the next topic
16:13:38 #topic: Bare metal physical connectivity Scenarios
16:13:57 Huge thanks to viveknarasimhan for putting together a document
16:14:07 Sukhdev: thanks
16:14:10 #link: https://drive.google.com/file/d/0B501-UCM_VGvVnB5LXJ4a3hhdE0/view?usp=sharing
16:14:33 viveknarasimhan listed all the scenarios that we discussed during the design summit
16:14:42 That covers the Unsupported scenarios as well
16:14:48 Did folks have time to review this document?
16:15:05 Updated link here: https://docs.google.com/document/d/1a-DX4FQZoX1SdTOd9w_Ug6kCKdY1wfrDcR3SKVhWlcQ/view?usp=sharing
16:15:24 I reviewed and provided feedback - and noticed viveknarasimhan addressed my comments
16:15:26 * jroll looking now
16:15:39 viveknarasimhan: is this a different version?
16:16:20 viveknarasimhan: I reviewed the one which is on the agenda
16:17:06 viveknarasimhan: I do not see scenarios 3 & 4 in the link that you provided
16:17:29 I don't see why 3 and 4 shouldn't be supported, seems trivial
16:17:41 viveknarasimhan: never mind - you moved them to the bottom
16:17:53 jroll: agree - I think we can support them
16:18:10 I don't think we'll need to do anything special to support them
16:18:25 same with 10 and 11, honestly
16:18:41 maybe 11 needs work, idk
16:18:54 ok and 10
16:18:56 fair.
16:19:01 * Sukhdev looking
16:19:11 it's the problem with matching NIC to network
16:19:51 jroll: yes i agree with your points, i think 3 and 4 should be ok but 10 and 11 need the NIC to network matching
16:20:50 jroll lauramoore: I agree with your assessment
16:20:54 laura: scenario 3 and 4
16:22:40 During the design summit we agreed that scenario 3 will be considered the same as scenario 2 - i.e. two ports going from the same BM to the same switch will be treated as LAG'ed ports
16:23:18 therefore, i am thinking we will treat scenarios 2 and 3 as the same
16:23:30 are there repercussions to that?
16:23:48 sukhdev, viveknarasimhan: i think what sukhdev says will be the best to focus on for a start
16:24:04 the part i'm not too sure on is that currently Nova puts a network on one port at random
16:24:35 jroll: I can't think of any - perhaps cloud operators can comment
16:25:02 Sukhdev: I'm thinking in the "guest"
16:25:16 I'm not sure if LAG vs non-LAG behaves differently or whatever
16:25:36 Sukhdev: I think we'll probably want to treat them differently
16:25:51 idk how it looks switch side either, I may need to talk to some folks
16:26:23 * jroll does it
16:26:30 lauramoore: Do you know how you plan on deploying it?
16:27:01 sukhdev: our use case was for LAG'ed ports
16:27:21 lauramoore: that is what I thought
16:27:29 I'm chatting with someone now
16:27:35 laura: so can we ignore scenarios 3 and 4 for now
16:27:43 jroll: cool
16:28:02 vivekn: your ID changed - did you get disconnected?
16:28:07 laura: as we said, we cover only LAG'ed ports
16:28:22 Sukhdev: got disconnected, logged back in as 'vivekn'
16:28:31 vivekn: thought so :-)
16:28:41 the way i was seeing it was the port-group represented a LAG'ed port (as discussed at the summit). If the ports are not LAG'ed then they'd be represented as 2 ports
16:29:21 so what I'm hearing, and this sounds sane to me, is "you can never have two interfaces on the same physical network without crazy routing shenanigans"
16:29:38 so maybe scenario 3 is mostly invalid
16:30:18 jroll: how about scenario 4
16:30:18 jroll: agree scenario 3 is not useful and would be void
16:30:32 would that be void as well (wouldn't HA for the physical net be a consideration here though)?
16:30:44 jroll: my hunch was along the same lines - why would one connect two ports from the same source to the same destination and not configure them as LAG'ed
16:30:58 vivekn: you would do MLAG if you wanted HA
16:31:05 bonding etc
16:31:17 vivekn: scenario 4 is valid
16:31:22 so yeah, 3 and 4 seem invalid to me
16:31:25 oh?
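
[Editor's note: for readers following along, a minimal Python sketch of the classification the group converges on in this exchange: two ports from the same bare metal node to the same switch are treated as one LAG'ed port-group (so scenario 3 folds into scenario 2), while two ports to different switches are the MLAG case (scenario 4, resolved just below). The function and field names are hypothetical illustrations, not from the spec.]

    # Hypothetical helper: decide how a bare metal node's ports are modeled,
    # based on how many distinct switches they are cabled to. Each port is
    # assumed to be a dict such as {"switch_id": "00:1c:73:aa:bb:01"}.
    def classify_ports(ports):
        if len(ports) == 1:
            return "single"                    # one NIC to one switch
        switch_ids = {p["switch_id"] for p in ports}
        if len(switch_ids) == 1:
            return "lag"                       # scenarios 2/3: same switch
        return "mlag"                          # scenario 4: two switches
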
16:32:14 jroll: why is scenario 4 invalid - I am confused - this is the MLAG case
16:32:51 Sukhdev: right, sorry, it didn't call out MLAG so I thought this was somehow doing that without MLAG
16:33:04 might be nice to call that out specifically
16:33:16 jroll: got you….
16:33:30 yes, I am thinking scenario 4 is MLAG -
16:33:36 +1
16:33:41 +1
16:33:45 jroll: so how do we handle scenario 4 as per the spec?
16:33:59 we treat that as a LAG port itself
16:34:05 vivekn: can you post a document where we can post comments on it? that will allow reviewers to add comments/clarifications
16:34:26 vivekn: I mean change the permissions, etc… so that we can edit it as well
16:34:46 Sukhdev: i have made the doc public
16:35:08 vivekn: I haven't completely read the spec yet but surely we can work it out?
16:35:09 Sukhdev: let me check and give away permissions
16:35:12 i would think with 2 ports in 1 port-group, the binding profile would contain 2 local link infos, each with a different switch_id
16:35:18 Sukhdev: thought everybody could edit
16:35:19 vivekn: for whatever reason, I could not add a comment - perhaps a screw-up on my side then
16:35:22 lauramoore: yeah, seems sane
16:35:57 laura: thanks for the clarification
16:36:00 vivekn: let me answer your question on scenario 4
16:37:01 laura: a variation of scenario 2 then, just with a different switch_id in the list element
16:37:03 vivekn: So, from the Ironic side, the port-group will be created with those two ports shown in scenario 4, and this port-group will be used to invoke neutron port-create()
16:37:33 vivekn: yes that's how i see it
16:37:53 lauramoore: +1
16:37:54 Sukhdev and laura: thanks for the clarification
16:38:11 Sukhdev: I will move Scenario 4 to Supported
16:38:20 vivekn: thanks
16:38:39 Hope we are in sync that Scenario 3 is void, and scenarios 10 and 11 are Unsupported due to the NIC to network mapping issue
16:38:57 sounds good to me.
16:38:57 vivekn: +1
16:39:00 Thanks!
16:39:07 +1
16:39:27 vivekn: Thanks for taking the time to get this clarified
16:39:39 Sukhdev: Thanks !
16:39:42 sukhdev: +1 thanks vivekn
16:40:04 anything else on these scenarios? Looks like all the others are OK
16:40:30 I have one more item to discuss - shall we move on?
16:40:43 #topic: Open Discussion
16:40:46 Sukhdev: yes, please
16:41:25 As I mentioned, I was at the neutron mid-cycle sprint and had a detailed chat with armax about ironic-neutron integration
16:41:45 as we were discussing the filtering issue that vivekn had brought up on the spec
16:42:03 Sukhdev: I am still going through the spec, I am terribly slow but I am going to finish it today no matter what
16:42:12 armax mentioned that we could possibly use "compute:ironic"
16:42:33 armax: no worries - please chime in, your input is welcome
16:42:46 So, for background purposes
16:42:57 Sukhdev: we’d need to run this idea by someone who is familiar with the use of that device_owner
16:43:22 Sukhdev: kevin and I found out that this is also used to track cells
16:43:23 yeah, let's back up, I have zero context here
16:43:26 presently Nova sets device_owner for the BM server port to "compute:none"
16:43:46 jroll: I am giving the context, stay with me
16:44:30 in fact nova sets the port's device_owner for all compute-related ports, VMs as well as BMs, to "compute:none"
16:45:27 so, the thought which armax and I were discussing is that perhaps we can set it to "compute:ironic" for BM servers and leave it alone for VMs
16:46:07 what is device_owner used for?
16:46:13 and why do we want to set that?
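
[Editor's note: a sketch of the binding profile lauramoore describes at 16:35:12 above for the scenario 4 (MLAG) port-group: two local link info entries, each with a different switch_id. The local_link_information field names follow the proposed spec; the values are made-up examples.]

    # Hypothetical payload the Ironic side might pass when invoking neutron
    # port-create() for a two-port MLAG port-group (scenario 4).
    binding_profile = {
        "local_link_information": [
            {"switch_id": "00:1c:73:aa:bb:01",   # first ToR switch
             "port_id": "Ethernet1/1",
             "switch_info": "tor-1"},
            {"switch_id": "00:1c:73:cc:dd:02",   # MLAG peer switch
             "port_id": "Ethernet1/1",
             "switch_info": "tor-2"},
        ]
    }
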
16:46:58 neutron uses it to identify who the port belongs to - e.g. device_owners are compute, dhcp, router, etc.
16:47:46 jroll: in ML2 drivers we often use device_owner to filter the compute ports from other ports
16:48:13 ok
16:48:14 sukhdev: do you know where nova sets this?
16:48:24 jroll: we usually use the filter "starts with compute" as it is always set to compute:none
16:48:26 jroll: in the spec, it was mentioned that binding:HOST_ID will contain '-ironic-'. Sukhdev's point is in lieu of that
16:48:53 Sukhdev: and why do we need to be able to filter for ironic ports? just to be able to know that it's baremetal?
16:48:58 vivekn: correct - we could use host or device_owner
16:49:12 jroll: correct
16:49:39 so
16:49:42 jroll: if somebody wants to filter ironic ports from VM ports
16:49:57 nova actually sets it to 'compute:%s' % instance.availability_zone
16:50:35 lauramoore: https://github.com/openstack/nova/blob/master/nova/network/neutronv2/api.py#L644-646
16:50:41 ah ok i see it
16:50:41 but, both armax and I felt it may be hard to push it through nova - though neither of us is familiar with it
16:51:14 so
16:51:22 I still like host_id or whatever
16:51:38 because we can put the actual ironic node id there
16:52:24 jroll: I wanted to throw it out there so that we can debate it
16:52:49 yeah, I don't see any benefit to this, and it seems non-trivial
16:52:53 jroll: both armax and I felt that we should proceed with our plan (which is to use host_id), but keep other options open
16:53:04 yeah, I agree
16:53:48 I will add it to the long-term list so that we do not forget about it
16:53:48 whichever way we’re proceeding, we’re simply abusing fields whose role is not the one intended
16:54:38 armax: currently with BM deployments, the host_id field has a wrong/useless value
16:55:03 I mean
16:55:12 host_id is meant to be the host for the instance, right?
16:55:15 which is the ironic node.
16:55:16 armax: while I agree with you that we are overloading this, in a way we are fixing it too :-)
16:55:37 no we aren’t …we’re only adding stuff to the mess
16:55:46 host_id has a specific meaning
16:56:02 as jroll said, it’s the hypervisor
16:56:21 armax: well, the ironic node is the equivalent of the hypervisor in this case
16:56:31 so 'ironic-%s' % instance.node doesn't seem horrible
16:56:37 to me, anyway
16:56:47 what’s the ironic- prefix for?
16:57:09 right, so that's the weird part
16:57:16 jroll: correct,
16:57:17 but it's so ML2 things can know it's baremetal
16:57:18 which sucks
16:57:21 that’s what I dislike
16:57:26 btw, WIP patch here https://review.openstack.org/#/c/194413/1
16:57:27 yeah
16:57:31 open to other options.
16:57:32 we’re conflating two things in one
16:57:48 I agree
16:57:52 I'd rather have a new field
16:58:02 or better yet, I'd rather the ML2 things just figure it out
16:58:22 figure out how
16:58:23 ?
16:58:25 jroll: easy to say ML2 things should just figure it out :)
16:58:28 I have no clue :)
16:58:36 from an unstable/undefined field format?
16:58:42 that’s just great!
16:58:54 armax: I hear ya.
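
[Editor's note: a rough sketch of the two overloads debated above, for a hypothetical ML2 driver that wants to tell bare metal ports apart from VM ports. Nova sets device_owner to 'compute:%s' % instance.availability_zone today; the options discussed are (a) a dedicated "compute:ironic" device_owner and (b) an 'ironic-' prefix in binding:host_id. The function names are illustrative only.]

    # Option (a): filter on device_owner. Today drivers can only match the
    # "compute:" prefix, which catches VM and bare metal ports alike.
    def is_compute_port(port):
        return (port.get("device_owner") or "").startswith("compute:")

    def is_baremetal_port_by_owner(port):
        # Would require a Nova change to set this value for Ironic instances.
        return port.get("device_owner") == "compute:ironic"

    # Option (b): filter on binding:host_id carrying 'ironic-<node uuid>',
    # per jroll's 'ironic-%s' % instance.node suggestion above.
    def is_baremetal_port_by_host(port):
        return (port.get("binding:host_id") or "").startswith("ironic-")
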
16:59:16 jroll armax: folks, that is why I brought up the device_owner discussion -
16:59:30 Sukhdev: same problem there, you're just overloading an existing thing
16:59:52 IMO, the device_owner is the most sensible place
17:00:11 jroll: kinda - but it seems cleaner
17:00:15 because, as its name says, it's holding what infrastructure is using the port
17:00:41 well, it's currently the availability zone for the instance
17:00:47 I do see your point though
17:01:06 now we have compute:
17:01:06 seems the meeting is over :/
17:01:08 today
17:01:31 folks, we are out of time….let's all sleep on it - we can discuss it further in our next meeting
17:01:35 ok
17:01:35 bye
17:01:38 ok, thanks sukhdev
17:01:43 armax: happy to continue in another channel if you'd like
17:01:44 Sukhdev: ok thanks :)
17:01:52 in the meantime, let's look at the code a bit and see if it is trivial to implement
17:02:02 thanks for attending the meeting
17:02:05 Sukhdev: it's approximately the same amount of work
17:02:07 #endmeeting