Tuesday, 2016-06-14

*** Ashana has quit IRC00:02
*** svenkat has joined #openstack-powervm00:13
*** thorst has joined #openstack-powervm00:19
*** jwcroppe_ has quit IRC00:23
<thorst> efried: you still around?  00:31
*** jwcroppe has joined #openstack-powervm00:33
*** thorst has quit IRC00:40
*** thorst has joined #openstack-powervm00:41
*** thorst has quit IRC00:49
*** thorst has joined #openstack-powervm01:34
*** arnoldje has joined #openstack-powervm01:58
*** thorst has quit IRC03:00
*** thorst has joined #openstack-powervm03:00
*** thorst has quit IRC03:09
*** openstackgerrit has quit IRC03:11
*** openstackgerrit has joined #openstack-powervm03:11
*** apearson_ has quit IRC03:22
*** svenkat has quit IRC03:50
*** thorst has joined #openstack-powervm04:07
*** thorst has quit IRC04:14
*** tlian has joined #openstack-powervm04:48
*** thorst has joined #openstack-powervm05:12
*** thorst has quit IRC05:19
*** arnoldje has quit IRC05:20
*** erlarese has quit IRC05:29
*** erlarese has joined #openstack-powervm05:31
*** tlian has quit IRC06:00
*** thorst has joined #openstack-powervm06:16
*** thorst has quit IRC06:24
*** jwcroppe has quit IRC06:38
*** jwcroppe has joined #openstack-powervm06:47
*** openstackgerrit has quit IRC06:48
*** openstackgerrit has joined #openstack-powervm06:48
*** Cartoon has joined #openstack-powervm06:59
*** thorst has joined #openstack-powervm07:22
*** k0da has joined #openstack-powervm07:24
*** thorst has quit IRC07:29
*** Cartoon_ has joined #openstack-powervm07:32
*** Cartoon has quit IRC07:35
*** thorst has joined #openstack-powervm08:26
*** thorst has quit IRC08:34
*** Cartoon_ has quit IRC09:25
*** thorst has joined #openstack-powervm09:32
*** thorst has quit IRC09:39
*** thorst has joined #openstack-powervm10:35
*** thorst has quit IRC10:44
*** jwcroppe has quit IRC10:51
*** jwcroppe has joined #openstack-powervm10:51
*** jwcroppe has quit IRC11:14
*** jwcroppe has joined #openstack-powervm11:23
<efried> esberglu, yt?  11:41
*** thorst has joined #openstack-powervm11:41
<efried> thorst, did you need something?  11:42
*** thorst_ has joined #openstack-powervm11:43
<efried> thorst_, did you need something?  11:44
*** thorst has quit IRC11:47
<thorst_> efried: I think I was pinging last night for the latest pypowervm update I pushed out  11:47
<thorst_> not urgent  11:47
*** thorst_ is now known as thorst11:49
<efried> thorst, +1.  Did you want adreznec to give it a final nod, or you want my +2 now?  11:50
<thorst> efried: I'm actually going to just patch it on and see what happens.  11:50
<thorst> I clearly also need a nova-powervm change to take advantage of it  11:50
<thorst> I'm going to change all migration flows to pass both overrides  11:50
<efried> Did you see https://review.openstack.org/#/c/329205/1 ?  First attempt to fix up the hostname regex problem.  11:51
<thorst> no, not yet  11:51
<thorst> let me take a peek  11:51
<thorst> I like your commit message tho  11:51
<efried> It's not quiiite working.  I can't figure out where else we set/retrieve the hostname of the system.  11:51
<thorst> efried: yeah, I was never 100% sure that was right?  11:51
<thorst> remember there is the CONF.host setting  11:52
<efried> that could be it  11:52
<thorst> efried: let me find it  11:52
<thorst> hmmm...is it just me or did our lab network go out?  11:54
<thorst> no, worse.  Our openstack host went out  11:55
<thorst> esberglu: this isn't going to be fun  11:55
<thorst> efried: CI is out until we get that sorted...  11:57
<efried> Fine.  But wouldn't mind getting closer to that fix.  11:58
<thorst> efried: yep...I'm still searching through the code...  11:59
<thorst> efried: I can't find it anywhere...  12:05
<efried> I couldn't either.  12:06
<thorst> I want to hop on a ready node...look there  12:06
<thorst> but we don't really have one of those  12:06
*** svenkat has joined #openstack-powervm12:07
<thorst> efried: I at least know why we're having trouble with the ssh command to the VM.  We're still booting the 100 MB 'zeros' image instead of a real bootable image  12:09
<efried> That'd do it.  12:09
<efried> For my part, I can't figure out wtf this test is doing.  12:09
<efried> Indirection through dynamic impls, across the REST API, through a database.  12:09
<efried> No idea where the actual code is.  12:09
<thorst> efried: same.  I can tell it's not in the local.conf.aio (only variable there is HOSTNAME, which is just the 'hostname')  12:10
*** seroyer has joined #openstack-powervm12:11
<thorst> seroyer: Had a question for you  12:11
<thorst> those overrides for 'OVS'  12:11
<thorst> can I turn that on, but still use 'SEA'?  12:12
<seroyer> thorst, I think so.  12:12
<thorst> I'm looking to just have nova-powervm blanket always turn that override on, because the migration call doesn't really have the network info  12:12
<openstackgerrit> Sridhar Venkat proposed openstack/nova-powervm: Blueprint for nova-powervm SR-IOV VIFs  https://review.openstack.org/322203  12:21
*** smatzek has joined #openstack-powervm12:27
*** Ashana has joined #openstack-powervm12:28
*** Ashana has quit IRC12:30
*** Ashana has joined #openstack-powervm12:30
*** tlian has joined #openstack-powervm12:31
*** seroyer has quit IRC13:01
*** mdrabe has joined #openstack-powervm13:10
*** kylek3h has joined #openstack-powervm13:19
<efried> thorst, svenkat: I believe I've sussed what needs to be done in order to make pci_passthrough_whitelist work for us.  (svenkat, get your "add to blueprint" scratchpad ready)  13:21
<svenkat> efried: sure.  13:21
<efried> First, initialization.  We can avoid all the Linux special file checks in one of two ways.  13:21
<efried> For starters, we can't use "devname" at all.  There's an unavoidable special file check if we specify that guy.  13:22
<efried> So we either have to use "address" (with caveats - forthcoming) or our own new field.  13:22
<efried> If we use "address", we can avoid special file checks by ensuring that there's at least one wildcard in the value.  13:23
<thorst> efried: following so far  13:25
<efried> That is: domain:bus:slot.func <= at least one of these must be '*'  13:26
<efried> MAX_FUNC = 0x7  13:26
<efried> MAX_DOMAIN = 0xFFFF  13:26
<efried> MAX_BUS = 0xFF  13:26
<efried> MAX_SLOT = 0x1F  13:26
<efried> So for example, if we use *:bus:slot.func, we've got 0x7 + 0xFF + 0x1F bits to work with.  13:26
<efried> If we cheat a little more and use domain:bus:slot.*, then we have a bunch more bits to work with.  13:26
<efried> Would have to check the DRC index spec with seroyer to see how many bits we actually need.  13:26
<svenkat> efried: ok…  13:27
<thorst> FUNC, Domain, Bus, slot...  13:27
<efried> thorst wha?  13:27
<thorst> I don't know what domain and bus really mean there...but we probably don't care?  13:28
<efried> The format of the address, according to the commentary in pci.py, is  13:28
<efried> ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]"  13:28
<efried> thorst, agree, since we won't be using a real PCI address anyway.  13:28
<efried> It would have been nice to make the "bus" and "slot" encoded into the DRC index match up with the "bus" and "slot" fields of the PCI address; but if we need more bits, we can break that.  13:29
<efried> The other option is to avoid both "devname" and "address" (the code will default the address to *:*:*.*), and introduce our own brand new key.  13:30
<efried> If we want to do this *without* changing nova code, it will simply require us to parse the conf value afresh in our vif driver, because the existing Whitelist/PciDeviceSpec code won't know about our key.  13:31
<efried> But said existing code will accept an entry thus presented.  13:31
<efried> Let's come back to that decision ("address" or new key) in a sec.  13:32
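[Editor's note: the bit budget efried walks through above can be sanity-checked with a few lines of Python; the constants below are the maxima as quoted in the discussion, and the helper is illustrative, not from any nova code.]

```python
# Field maxima as quoted above (16-bit domain, 8-bit bus, 5-bit slot,
# 3-bit function -- the standard PCI address fields).
MAX_DOMAIN, MAX_BUS, MAX_SLOT, MAX_FUNC = 0xFFFF, 0xFF, 0x1F, 0x7

def bits(maxval):
    # Number of bits needed to hold values up to maxval.
    return maxval.bit_length()

# Wildcarding one field leaves the remaining fields to carry the payload:
# '*:bus:slot.func' leaves 8 + 5 + 3 = 16 usable bits, while
# 'domain:bus:slot.*' leaves 16 + 8 + 5 = 29 (efried's later correction).
print(bits(MAX_BUS) + bits(MAX_SLOT) + bits(MAX_FUNC))    # 16
print(bits(MAX_DOMAIN) + bits(MAX_BUS) + bits(MAX_SLOT))  # 29
```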
*** seroyer has joined #openstack-powervm13:32
<efried> The way we avoid having this stuff processed at all after the fact is simply to keep our driver's impl of get_available_resource (including our existing "build_host_resource_from_ms" method) from including a "pci_passthrough_devices" key.  So no-op there.  13:34
<efried> And I *think* that's it.  13:37
<efried> seroyer: how many bits do I actually need out of the DRC index to make sure I can reference the phys port uniquely on the system?  13:38
<seroyer> DRC index is 32 bits.  You need all 32 bits.  13:38
<seroyer> But I need to verify we have a DRC index per "physical port" if that's really what you need.  13:39
<efried> Well, the physical port definitely has a DRC index.  13:39
<seroyer> I know we have one per adapter and per logical port.  13:39
<efried> Are you questioning whether that index is different for each pport?  13:39
*** smatzek has quit IRC13:40
<seroyer> Correct.  I don't see one per physical port using lshwres today.  13:40
<efried> Well, doesn't matter, cause we don't have 32 bits to work with.  13:41
<efried> But I think I have another solution if we still want to go the "address" route.  13:41
*** edmondsw has joined #openstack-powervm13:41
<efried> Let me check something...  13:41
<seroyer> adapter ID + port ID, but I don't know how big those fields are.  13:41
<efried> seroyer, that's what I was just about to ask.  13:42
<seroyer> I would guess port ID should be smallish (like 8 bits).  But not sure about adapter ID.  13:42
<efried> According to the schema, the SRIOV Adapter ID is 'int', and the Physical Port ID is byte.  13:42
<seroyer> How many bits do you have to work with?  13:43
<efried> Max of 25  13:43
<seroyer> I'll have to check with some folks.  13:43
*** seroyer has quit IRC13:43
<efried> seroyer: Correction, 29 bits.  13:45
<efried> svenkat, thorst, I had been thinking we wanted to avoid using the SRIOV Adapter ID to build the "address", because that ID doesn't exist until you configure the card in SRIOV mode.  13:49
<efried> But we need it to be configured thus anyway because otherwise the pports don't even show up.  13:49
<efried> Which brings up another topic: what does the user need to do (particularly outside the auspices of nova) to get set up?  13:50
*** apearson_ has joined #openstack-powervm13:52
<efried> 1) Set SRIOV mode, e.g.  13:55
<efried> pvmctl sriov list --where mode=Dedicated -d physloc | while read physloc; do pvmctl sriov update -i physloc=$physloc -s mode=Sriov; done  13:55
<efried> 2) Glean addresses (probably in relation to physlocs, which is how the customer will be thinking of which cards/ports are physically located/cabled where), e.g.  13:55
<efried> pvmctl sriov list -d physloc address  13:55
<efried> 3) Set up pci_passthrough_whitelist in nova.conf:  13:55
<efried> pci_passthrough_whitelist = [ { "address": <address_from_above>, "physical_network": <network_name> }, ... ]  13:55
<svenkat> efried: ok...  13:56
<efried> That's assuming we do end up going with "address".  If not, it'll just be "physloc" in steps 2 & 3.  13:56
<efried> which I feel is a bit of an easier user experience - but has the disadvantage of deviating from the existing pci_passthrough_whitelist format.  13:57
<svenkat> efried: ok… in the BP I mentioned we will use devname; I will rework the wording  13:57
<efried> svenkat, yeah, devname won't work at all.  (Just discovered that a few minutes ago, though.)  13:58
*** lmtaylor1 has joined #openstack-powervm13:59
<efried> svenkat, may want to wait until we've landed on which route we're going to take.  13:59
<svenkat> efried: yes.. sure.  14:00
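[Editor's note: nova parses each pci_passthrough_whitelist entry as JSON. A minimal sketch of the wildcard rule efried describes, with a made-up address value and a hypothetical helper (not nova's actual parsing code):]

```python
import json

# Hypothetical whitelist entry in the shape efried sketches in step 3;
# the address and network name are invented for illustration.
entry = json.loads('{"address": "*:a:1f.7", "physical_network": "default"}')

def has_wildcard(address):
    # efried's rule: at least one of domain/bus/slot/function must be
    # '*' so the whitelist processing never reaches the Linux special
    # file checks for a device file that doesn't exist on PowerVM.
    domain_bus_slot, _, func = address.rpartition('.')
    return '*' in domain_bus_slot.split(':') or func == '*'

print(has_wildcard(entry['address']))  # True
```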
*** burgerk has joined #openstack-powervm14:02
*** smatzek has joined #openstack-powervm14:03
*** arnoldje has joined #openstack-powervm14:04
*** seroyer has joined #openstack-powervm14:09
*** kriskend_ has joined #openstack-powervm14:14
*** kriskend has joined #openstack-powervm14:14
*** jwcroppe_ has joined #openstack-powervm14:39
*** jwcroppe has quit IRC14:41
*** mdrabe has quit IRC14:42
*** jwcroppe_ has quit IRC14:45
*** jwcroppe_ has joined #openstack-powervm14:47
*** jwcroppe has joined #openstack-powervm14:50
*** jwcroppe_ has quit IRC14:52
*** mdrabe has joined #openstack-powervm14:58
*** jwcroppe has quit IRC15:00
*** jwcroppe has joined #openstack-powervm15:01
*** jwcroppe_ has joined #openstack-powervm15:01
<seroyer> efried, adapter ID is actually 16 bits, port number is 8 bits.  The two added together are 24 bits.  15:03
<efried> seroyer, beautiful!  15:03
*** jwcroppe has quit IRC15:05
<efried> svenkat, thorst ^^  15:09
<efried> Domain is 16 bits, bus is 8; so the simplest would be xx:xx:*.*  15:10
*** jwcroppe has joined #openstack-powervm15:11
<efried> Technically, adapter should be "slot" and port should be "function".  But we would have to do some creative bit manipulation and put the extra bits into "domain".  15:12
<svenkat> efried: good.  15:12
<efried> Which would be xx:*:xx.xx  15:12
<efried> Not sure how y'all feel about that.  15:12
*** jwcroppe_ has quit IRC15:13
<svenkat> efried: can you give an example for xx:*:xx.xx  15:14
<efried> I believe *usually* the domain would be 00 in that case, because the port IDs count off monotonically from 1 on the card (I believe) and the adapter IDs should do nearly the same.  15:14
<efried> svenkat, yeah, hold on a tick.  15:15
<efried> d = { 'slot': sriov_adap.sriov_adap_id & 0x1F, 'func': pport.port_id & 0x7, 'domain': <leftover bits from the other two - let me get back to you on this> }  15:19
<efried> address = "%(domain)x:*:%(slot)x.%(func)x" % d  15:19
<efried> domain = (adapid & 0xFFE0) | (pport_id >> 3)  15:21
<efried> I think that might work.  15:21
<efried> Something like that, anyway.  15:21
<efried> (svenkat ^^)  15:22
<svenkat> efried: thanks  15:22
<efried> Whereas if we went the other route, it would be  15:23
<efried> address = "%(domain)x:%(bus)x:*.*" % { 'domain': adapid, 'bus': pport_id }  15:23
<efried> They're both ugly for different reasons.  thorst, opinion?  15:23
<svenkat> I will replace the discussion of devname with address, with these details, in the BP.  15:23
<efried> svenkat, not yet, wait until we've decided which route we're actually going to go.  15:23
<efried> Inventing a new key is still on the table too.  15:23
<svenkat> after agreement from all. I am not making changes right away...  15:23
<thorst> efried: trying to catch up...guilty of doing three things at once  15:24
<thorst> I prefer the first, but just because it captures the function in it  15:25
<efried> thorst, what do you mean, "captures the function"?  15:25
<efried> You mean, "the function field of the address actually has (usually all of, but at least most of) the function in it"?  15:26
<thorst> the VF  15:27
<openstackgerrit> Kris Kendall proposed openstack/nova-powervm: Initial LB VIF Type  https://review.openstack.org/302447  15:28
<efried> Both mechanisms have the VF in them.  Realistically, they'll almost always both have the same digits in them.  15:30
<thorst> yeah, I honestly don't know that I understand it enough to have a full vote in the matter  15:30
<efried> That is, in the vast majority of cases, it would be "0:*:x.y" vs. "x:y:*.*"  15:30
*** kriskend has quit IRC15:31
<efried> It's just sometimes the former might look like "z:*:a.b", where one or both of 'a' and 'b' aren't actually the adapter/port ID.  15:31
<efried> Which is a definite con.  15:32
<efried> But it makes the adapter ID be in the "slot" field and the port ID be in the "function" field.  15:32
<efried> thorst, the VF shows up nowhere, ever.  15:32
<thorst> efried: Was just looking at this one: address = "%(domain)x:*:%(slot)x.%(func)x" % d  15:33
<efried> thorst, right.  As I say, *usually* domain will be 0, slot will be adapter ID, and func will be port ID.  15:33
<efried> But sometimes, domain will be (unrecognizable garble of bits), and one or both of slot/func will be (just the lower bits of adapter ID / port ID)  15:34
<efried> (From what I can tell, in the KVM context, 'func' corresponds to "physical function", not VF.  So using 'func' for pport ID is a good parallel.)  15:35
<thorst> I missed that  15:35
<efried> (Just don't tell kriskend_)  15:35
<efried> Really, the question is, do we care more about the numbers matching up most closely with the "proper" chunks of the address; or about simplicity of code and guaranteed ability to correlate adapter & port ID at a glance?  15:37
<efried> (thorst, also remember we still have the other option on the table: using our own custom key e.g. "physloc")  15:39
<kriskend_> efried I am actually ok with it in this case :-)  15:40
<efried> kriskend_, it's closer than using domain for the adapter ID and bus for the port ID, anyway.  15:41
<efried> thorst, this is still your call as far as I'm concerned.  Want me to bring in another opinion?  adreznec?  15:54
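[Editor's note: the two "address" encodings efried sketches above can be made runnable. The bit layout is exactly as he describes; the helper names are invented for illustration, not from any actual patch.]

```python
def encode_slot_func(adap_id, pport_id):
    # Option 1: adapter ID lands (mostly) in 'slot' and port ID in
    # 'func', with the overflow bits of both packed into 'domain'.
    d = {
        'slot': adap_id & 0x1F,
        'func': pport_id & 0x7,
        'domain': (adap_id & 0xFFE0) | (pport_id >> 3),
    }
    return "%(domain)x:*:%(slot)x.%(func)x" % d

def decode_slot_func(address):
    # Round-trip: recover the adapter and port IDs from the address.
    domain, _, slot_func = address.split(':')
    slot, func = slot_func.split('.')
    domain, slot, func = int(domain, 16), int(slot, 16), int(func, 16)
    return (domain & 0xFFE0) | slot, ((domain & 0x1F) << 3) | func

def encode_domain_bus(adap_id, pport_id):
    # Option 2: adapter ID as 'domain', port ID as 'bus', rest wildcarded.
    return "%(domain)x:%(bus)x:*.*" % {'domain': adap_id, 'bus': pport_id}

# A 16-bit adapter ID and 8-bit port ID round-trip cleanly in option 1.
assert decode_slot_func(encode_slot_func(0xBEEF, 0xA5)) == (0xBEEF, 0xA5)
```

Option 1 keeps the adapter/port semantics in the "proper" slot/function fields at the cost of garbled-looking domains; option 2 keeps the IDs readable at a glance but abuses domain/bus.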
*** jwcroppe_ has joined #openstack-powervm15:54
<thorst> efried: Can we discuss in scrum later?  15:54
<thorst> I'm not heads down enough to make a valid call at the moment  15:54
<adreznec> efried: Reading backscroll, but I think a live discussion would be good  15:54
<efried> I will compose a concise summary and email it.  15:55
<adreznec> I've been in and out as the discussion has jumped between here and slack  15:55
*** tjakobs has joined #openstack-powervm15:55
*** jwcroppe has quit IRC15:55
*** k0da has quit IRC15:57
<thorst> efried: Just live tested https://review.openstack.org/#/c/312240/14  16:00
<thorst> failed...can you take a look at my comment?  16:00
*** thorst is now known as thorst_afk16:01
<efried> thorst_afk, done.  Should be an easy fix... for someone who understands what's called for, which I don't.  16:08
*** kriskend_ has quit IRC16:49
*** tblakeslee has joined #openstack-powervm17:01
*** tblakeslee_ has joined #openstack-powervm17:02
*** tblakeslee has quit IRC17:06
*** tblakeslee_ is now known as tblakeslee17:06
*** thorst_afk is now known as thorst17:07
*** tblakeslee has quit IRC17:09
<openstackgerrit> Drew Thorstensen proposed openstack/nova-powervm: Support override migration flags  https://review.openstack.org/329592  17:30
*** jwcroppe has joined #openstack-powervm17:37
*** jwcroppe_ has quit IRC17:39
*** jwcroppe_ has joined #openstack-powervm17:41
*** jwcroppe has quit IRC17:42
<openstackgerrit> Drew Thorstensen proposed openstack/nova-powervm: Initial LB VIF Type  https://review.openstack.org/302447  17:48
<thorst> efried: can you see if you're OK with my responses above?  17:48
<thorst> I know seroyer really wants that one in today  17:48
<efried> thorst, ack  17:48
*** jwcroppe has joined #openstack-powervm17:53
<efried> thorst, done.  Who's the +2?  17:54
<thorst> adreznec: Can you be the +2 on 302447?  17:54
<thorst> and maybe also 329592?  17:54
*** jwcroppe_ has quit IRC17:55
*** kriskend_ has joined #openstack-powervm17:59
<efried> thorst: "the glance image metadata can override which VIF type you use" -- it can?  18:17
<efried> So we could have any number of vif drivers in play.  18:18
<thorst> efried: 'kinda'  18:22
<thorst> there is a difference between the edge connection technology (vNIC versus say VF) and the bridging type (ex. SEA to OVS)  18:22
<efried> Yet we're using PvmVifDrivers for all of 'em.  18:23
<thorst> true...true.  That's because we don't have the same differences that say libvirt does  18:23
<thorst> where you can pass in the card 'type'  18:23
<thorst> I see VF and vNIC as effectively different 'card types'  18:24
<thorst> but, I also recognize I'm stretching the truth a bit  18:24
<efried> Do you (or svenkat) have some idea of how one specifies these in the configuration?  I'm getting confused by the wording in the blueprint, need some context.  18:24
<thorst> the card type?  18:24
<efried> ...whatever tells us which PvmVifDriver to load up.  18:25
<efried> Which, as far as I can tell, is all that matters.  18:25
<thorst> heh...well right now it's just the vif['type']  18:26
<thorst> which is pvm_sea, ovs or bridge  18:26
<thorst> which is provided solely from the neutron agent  18:26
<svenkat> thorst: default is in nova conf and overwrite is in glance metadata - from your review comments.  18:26
<thorst> well, in libvirt the glance metadata specifies the card type...not necessarily the vif driver.  But I was proposing that vNIC versus VF direct would be different vif drivers...  18:27
<efried> They definitely need to be different vif drivers, because their plug/unplug methods operate totally differently.  18:28
<efried> What's vif['type']?  Where is that specified?  18:28
<thorst> comes back from neutron  18:28
<efried> And neutron gets it from....?  18:28
<thorst> it's a maze...I can try walking you through it after I get coffee  18:28
<svenkat> can the vif type be specified when a port is pre-created, and does it drive which vif driver to load for plug/unplug?  18:28
<efried> At some point the user has to tell us, right?  18:28
<thorst> neutron gets it from the mechanism driver  18:28
<thorst> svenkat: No, the closest you get to specifying it is the 'direct' thing you get to pass in  18:29
<thorst> efried: Not really...but kinda  18:29
<svenkat> thorst: is that the vnic_type in the port you are referring to? (direct)  18:29
<thorst> svenkat: yep  18:29
<svenkat> vs normal for sea currently (we do not set this today, it comes in as default)  18:30
*** k0da has joined #openstack-powervm18:30
<efried> ml2_conf.ini: mechanism_drivers  18:30
<thorst> pretty much  18:30
<svenkat> so how about we use direct as vnic_type for vf direct vif and macvtap as vnic_type for vnic.  this can drive which vif driver to load during plug  18:30
<thorst> based on the vnic_type, I think one of three agents can handle it.  Bare metal, 'standard' (ovs, linux bridge, sea) or SR-IOV  18:31
<thorst> which is why you can't mix/match ovs with SEA  18:31
<efried> k, so getcher coffee.  I need to get a better understanding of this.  18:31
<thorst> efried: see step 3:  https://wiki.openstack.org/wiki/Nova-neutron-sriov#3:_create_a_neutron_port  18:52
<thorst> their term vnic != powervm vnic  18:52
<thorst> but there are various vnic types.  And the standard (not sure what that is) basically says 'let the standard agent decide'  18:52
<thorst> which is one of "OVS, Linux Bridge, or SEA"  18:52
<thorst> BEHIND the scenes, that agent generates a "VIF" type.  So the SEA agent's VIF type is 'pvm_sea', the OVS agent's VIF type is 'openvswitch' and the Linux Bridge agent's vif type is 'bridge'  18:53
<efried> thorst, we don't actually need to do this neutron port-create step for any of our stuff (old or new), right?  18:55
<efried> That said, does it work if we do?  Then instead of choosing a network for my instance, I could choose an existing port instead?  18:56
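[Editor's note: the agent-to-vif_type mapping thorst describes above is what ultimately selects a vif driver. A hypothetical dispatch sketch; the driver class names are assumptions modeled on nova-powervm's naming, not the project's actual selection code.]

```python
# Illustrative sketch: pick a PvmVifDriver name from the vif_type that
# the neutron agent set at bind time (per thorst's mapping above).
VIF_TYPE_TO_DRIVER = {
    'pvm_sea': 'PvmSeaVifDriver',      # SEA agent
    'openvswitch': 'PvmOvsVifDriver',  # OVS agent
    'bridge': 'PvmLBVifDriver',        # Linux Bridge agent
}

def driver_for(vif):
    # vif is the dict-like element from spawn's network_info.
    try:
        return VIF_TYPE_TO_DRIVER[vif['type']]
    except KeyError:
        raise NotImplementedError(
            "No PowerVM VIF driver for vif_type %s" % vif.get('type'))

print(driver_for({'type': 'pvm_sea'}))  # PvmSeaVifDriver
```

The SR-IOV proposal discussed later in this log would amount to adding new vif_types (e.g. pvm_vf, pvm_vnic) as keys in a table like this.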
*** kriskend_ has quit IRC18:58
<thorst> I think we do...otherwise it'll default to not being a direct VIF  18:59
<thorst> nova will create a neutron port on your behalf...sure.  But I don't think it'll create an SR-IOV neutron port on your behalf  18:59
<svenkat> when you deploy a vm, use "networks":[{"port":"df547f3c-d75b-427d-afdf-1159b98ca1a5"}] in the body to use a precreated port; use "networks":[{"uuid":"ddc24826-504d-422b-b758-a84bcbe77992","fixed_ip":""}] to use an ip directly  19:00
<svenkat> if the second option is used, a 'normal' port will be created under the covers, which is used for sea only (currently)  19:01
<svenkat> I mean vnic_type normal.  19:01
<svenkat> the uuid in the second case is the network id  19:01
<thorst> svenkat: just to confirm my understanding.  To use SR-IOV, you MUST pre-create the neutron port?  19:03
*** seroyer has quit IRC19:04
<efried> thorst, in kvm, yes.  In power, no!  19:04
<efried> Power win!  19:04
<svenkat> thorst: that is one option. we need to find a way to specify an ip and not a port id for sr-iov. I cannot think of any automatic way, as we need to support sea as well.  19:05
<svenkat> how does it work for linux bridge, for example? is that available along with sea in a given environment?  19:06
<svenkat> and also - what's wrong with precreating a port and using it to deploy?  19:07
<thorst> svenkat: why do we NEED to do that  19:07
<thorst> I see no problem with pre-creating the neutron port  19:07
<svenkat> in that case I do not have an issue. we can always pre-create a port for sriov… I thought you were leaning towards specifying an ip being an exception for sriov  19:08
<svenkat> so we precreate a port for sriov (use direct as vnic type - say). then how do we differentiate between vf direct vs vf-vnic?  19:09
<efried> thorst, what do you mean by pre-creating the neutron port?  Pre as in "during plug"?  Or pre as in "before you start up the driver, issue a neutron port-create command"?  19:09
<thorst> before you spawn, it is very common to pre-create a neutron port.  Then pass that in on the spawn  19:10
<svenkat> pre as in - you create it using the openstack cli or curl or rest… for powervc, the UI can precreate the port and then kick off the deploy - in two steps, sequentially  19:10
<efried> That's going to be a problem  19:12
<efried> In REST/core, a VF belongs to a partition, period.  You can't just create one and then assign it.  19:12
<svenkat> when a neutron port is created it is not bound to anything. it can be bound later on…  19:13
<efried> So what I'm saying is there's no way to do that with SRIOV as implemented in PowerVM, as far as I know.  19:13
<efried> And I'm not sure whether you could create it (e.g. attached to the NL partition) and then move it.  19:13
<svenkat> when a neutron port is created, it is not attached to any nova instance  19:14
<svenkat> when a port is created it looks like:  19:15
<svenkat> | Field               | Value                                                                    |
<svenkat> | admin_state_up      | True                                                                     |
<svenkat> | binding:host_id     |                                                                          |
<svenkat> | binding:profile     | {}                                                                       |
<svenkat> | binding:vif_details | {}                                                                       |
<svenkat> | binding:vif_type    | unbound                                                                  |
<svenkat> | binding:vnic_type   | normal                                                                   |
<svenkat> | created_at          | 2016-05-10T17:27:54                                                      |
<svenkat> | description         |                                                                          |
<svenkat> | device_id           |                                                                          |
<svenkat> | device_owner        |                                                                          |
<svenkat> | extra_dhcp_opts     |                                                                          |
<svenkat> | fixed_ips           | {"subnet_id": "0cf7a2f5-49f7-43a7-a456-0fe4adafc1ab", "ip_address": ""}  |
<svenkat> | id                  | df547f3c-d75b-427d-afdf-1159b98ca1a5                                     |
<svenkat> | mac_address         | fa:16:3e:dc:5e:db                                                        |
<svenkat> | name                |                                                                          |
<svenkat> | network_id          | 8622fe68-afce-435f-be65-cdaa66ef44ea                                     |
<svenkat> | status              | DOWN                                                                     |
<svenkat> | tenant_id           | f14fb833dd764ad7808390e52c83728f                                         |
<svenkat> | updated_at          | 2016-05-10T17:27:54                                                      |
<svenkat> a port like this can be used while deploying a vm - currently for sea  19:15
<svenkat> when it is bound it looks like:  19:16
<thorst> the vif_type is what will get set on binding...  19:16
<svenkat> | Field               | Value                                                                    |  19:16
<svenkat> | admin_state_up      | True                                                                     |
<svenkat> | binding:host_id     | 828642A_21C1B6V                                                          |
<svenkat> | binding:profile     | {}                                                                       |
<svenkat> | binding:vif_details | {"port_filter": false, "ovs_hybrid_plug": false}                         |
<svenkat> | binding:vif_type    | pvc_sea                                                                  |
<svenkat> | binding:vnic_type   | normal                                                                   |
<svenkat> | created_at          | 2016-05-10T18:39:18                                                      |
<svenkat> | description         |                                                                          |
<svenkat> | device_id           | 68fd8a57-ad84-4a09-b114-e4d234f98d26                                     |
<svenkat> | device_owner        | compute:None                                                             |
<svenkat> | extra_dhcp_opts     |                                                                          |
<svenkat> | fixed_ips           | {"subnet_id": "34459206-1ab9-4388-84ee-3f28d924cad6", "ip_address": ""}  |
<svenkat> | id                  | 393cc099-b9ef-41ad-a8f9-79d60308715b                                     |
<svenkat> | mac_address         | fa:71:4c:bd:2a:20                                                        |
<svenkat> | name                |                                                                          |
<svenkat> | network_id          | ddc24826-504d-422b-b758-a84bcbe77992                                     |
<svenkat> | status              | ACTIVE                                                                   |
<svenkat> | tenant_id           | f14fb833dd764ad7808390e52c83728f                                         |
<svenkat> | updated_at          | 2016-05-10T18:41:32                                                      |
<svenkat> notice the mac is also updated  19:16
<adreznec> svenkat: You should really use pastebin for that kind of stuff...  19:17
<efried> A port can only belong to a single VM?  19:17
<svenkat> yes. a single vm  19:18
<svenkat> the device id above is the nova instance id  19:19
<efried> So I reiterate: There's no way to create an unassociated VF or vNIC.  Is that a show-stopper?  19:20
<svenkat> when you create a port it is not for a VF or vNIC. it is just an 'empty' port. it can be updated later on  19:20
<svenkat> as part of the deploy process (spawn or build_and_run_instance, etc.)  19:21
<svenkat> I am talking based on what's happening today for SEA  19:21
<efried> I see.  So creating the neutron port itself doesn't actually even hit our code.  19:22
<efried> Until you plug it in.  19:22
<svenkat> that's right…  19:22
*** jwcroppe has quit IRC19:22
<adreznec> Yep, until you plug the VIF all that code is handled in Neutron  19:22
<efried> Okay.  Today the drop-down for VNIC Type in port creation has values Normal, Direct, and MacVTap.  Where do those values come from?  Are we talking about adding one, e.g. PowerVNIC, that will appear on that list?  19:26
*** seroyer has joined #openstack-powervm19:26
<svenkat> direct and macvtap are for sriov.. these are vnic_types in the neutron port  19:27
<svenkat> pvm_sea is a vif_type, and the proposal is to add pvm_vf and pvm_vnic as vif_types for sriov  19:27
<svenkat> so to create an sriov port in devstack, I think you will end up picking direct (for vf-direct)  19:28
<svenkat> I am thinking you should pick macvtap for vf-vnic  19:28
<svenkat> or use direct for that as well, and somehow differentiate between vf-direct vs vf-vnic under the covers. that was my earlier question in this channel  19:29
*** tblakeslee has joined #openstack-powervm19:29
<efried> I don't agree with overriding macvtap for power vnic.  19:30
<efried> direct makes sense  19:31
<efried> for direct  19:31
<svenkat> isn't macvtap conceptually the same as vnic? an intermediary on the host between sriov and the vm.  19:31
<efried> Perhaps conceptually, but macvtap refers to an extremely specific technology.  19:32
<efried> Which our technology doesn't resemble at all.  19:32
<efried> At least, that's my understanding.  I could be wrong.  19:32
<svenkat> efried: when you said 'direct makes sense, for direct' did you mean for vf-direct only or for vf-vnic also?  19:33
<svenkat> just to be clear  19:33
<efried> I'm certain that 'direct' makes sense for "direct VF to VM".  19:33
<efried> I'm not certain (up or down) whether 'direct' makes sense for Power vNIC.  19:33
<efried> It's not really direct, is it?  19:34
<svenkat> ok… well, once it is set up, the vm talks to the vf directly, doesn't it? (LRDMA)  19:34
<efried> Dangerous to use that as a criterion.  Theoretically any of our technologies could do that.  19:35
<efried> That's semantics, though, which I might be convinced to overlook.  Right now I'm more interested in the practical aspects.  19:36
<svenkat> I agree..  19:36
<svenkat> sounds like we need something similar to 'macvtap' for vnic  19:36
<efried> Yes, that's what I was assuming - we would add an entry to that list: Normal, Direct, MacVTap, PowerVNIC  19:37
<svenkat> makes sense.  19:38
<svenkat> I guess that will be mechanism driver work  19:38
<efried> This would mean that the list element in spawn's network_info would be a dict which would have a 'vif_type' key whose value would be set to whatever our new value is?  19:39
<thorst> efried svenkat: one other option is we use direct for both, and a nova.conf setting indicates vnic or VF direct to VM  19:40
<svenkat> could be. I have to run through spawn in pdb mode and look at the NetworkRequest object  19:40
<thorst> and you're all one way  19:40
<svenkat> and glance metadata will have an overwrite at runtime if the other option is needed?  19:41
<efried> thorst, seems reasonable, as generally I imagine the customer would want to be all vNIC.  If there's some reason to use direct, it would likely be a technical one that *forces* them to do so, whereupon it would have to be across the board.  19:43
<svenkat> PowerVC will use vf-vnic only.  19:44
<efried> Presumably direct vf performs better.  19:44
<svenkat> yes. but with no failover and migration support. trade-off.  19:45
<efried> Remind me why we need to support direct in the community?  19:45
<efried> thorst ^^  19:46
*** burgerk has quit IRC19:46
<svenkat> efried: I think it is because it is closer to what we have in the community today.  19:48
<svenkat> I will be back.  19:48
<efried> If that's the only reason, I'd like to put it back on the table and challenge it.  19:48
<efried> Our vNIC solution looks *enough* like KVM's direct attach, but it leapfrogs a lot of the limitations.  It's simply a better technology.  And it's not like there's some special extra requirement; if we can do direct, we've got everything we need to do vNIC.  19:50
<efried> The inputs (phys ports on phys nets) are the same.  The outputs (vif appears on VM) are the same.  19:51
*** burgerk has joined #openstack-powervm19:58
*** ManojK has joined #openstack-powervm20:27
<openstackgerrit> Taylor Jakobson proposed openstack/nova-powervm: Create trunk on target of LPM  https://review.openstack.org/312240  20:41
*** ManojK has quit IRC20:41
*** smatzek has quit IRC20:49
*** svenkat has quit IRC20:49
<openstackgerrit> Taylor Jakobson proposed openstack/nova-powervm: Create trunk on target of LPM  https://review.openstack.org/312240  20:52
*** edmondsw has quit IRC20:52
*** thorst has quit IRC20:54
*** thorst has joined #openstack-powervm20:55
*** ManojK has joined #openstack-powervm20:58
*** thorst has quit IRC20:59
*** smatzek has joined #openstack-powervm21:09
<openstackgerrit> Taylor Jakobson proposed openstack/nova-powervm: Create trunk on target of LPM  https://review.openstack.org/312240  21:12
*** smatzek has quit IRC21:22
*** burgerk has quit IRC21:26
*** jwcroppe has joined #openstack-powervm21:38
*** lmtaylor1 has quit IRC21:53
*** kylek3h has quit IRC21:56
*** tjakobs has quit IRC22:08
*** ManojK has quit IRC22:09
*** mdrabe has quit IRC22:11
*** arnoldje has quit IRC22:16
*** jwcroppe has quit IRC22:20
*** openstackstatus has quit IRC22:25
*** openstack has joined #openstack-powervm22:27
*** catintheroof has quit IRC22:30
*** k0da has quit IRC23:08
*** Ashana has quit IRC23:20
*** seroyer has quit IRC23:32
*** apearson_ has quit IRC23:56

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!