Thursday, 2016-06-16

*** openstack has joined #openstack-powervm05:59
*** thorst_ has joined #openstack-powervm06:44
*** thorst_ has quit IRC06:52
*** thorst_ has joined #openstack-powervm07:49
*** thorst_ has quit IRC07:56
*** k0da has joined #openstack-powervm08:09
*** thorst_ has joined #openstack-powervm08:54
*** thorst_ has quit IRC09:01
*** thorst_ has joined #openstack-powervm09:59
*** thorst_ has quit IRC10:06
*** smatzek has joined #openstack-powervm10:36
*** thorst_ has joined #openstack-powervm11:03
*** thorst_ has quit IRC11:11
*** thorst_ has joined #openstack-powervm11:53
*** thorst__ has joined #openstack-powervm11:55
*** thorst_ has quit IRC11:58
*** svenkat has joined #openstack-powervm12:12
*** smatzek has quit IRC12:13
*** smatzek has joined #openstack-powervm12:34
*** kylek3h has joined #openstack-powervm12:41
*** edmondsw has joined #openstack-powervm12:45
*** kylek3h has quit IRC12:47
thorst__: svenkat: Do you know if you can increase the size of an SSP volume?  12:58
*** thorst__ is now known as thorst_12:58
svenkat: you can resize a volume..  13:00
svenkat: let me see if there are restrictions on ssp volume.. one moment  13:00
thorst_: svenkat: I was looking at this:
thorst_: but it didn't seem to work for SSPs  13:02
*** apearson has joined #openstack-powervm13:08
*** mdrabe has joined #openstack-powervm13:10
thorst_: adreznec: there?  13:28
adreznec: thorst_: Sup  13:28
thorst_: looking at Ashana's container.  It just looks like nothing is in the keystone container at all  13:28
*** Ashana has joined #openstack-powervm13:28
adreznec: thorst_: That's... odd. Did it run the keystone role steps during container setup?  13:30
thorst_: Ashana: do you know?  13:31
Ashana: let me check  13:31
Ashana: in the lxc-container-create.yml it doesn't have that step. But when I ran the openstack-ansible setup-openstack.yml it does run the os-keystone-install.yml  13:34
thorst_: yeah.  So the setup-hosts I think just creates the blank containers.  But then the setup-openstack is supposed to put the content in them  13:35
thorst_: the setup-openstack is what failed...right?  13:35
adreznec: thorst_: Yeah, you're right there. setup-hosts only creates the container shells and configures their networking, etc. It doesn't install any of the OS services into them  13:36
Ashana: yep, that's setup-openstack that fails  13:37
thorst_: Ashana: can you run that in the VNC and I'll monitor  13:37
*** tjakobs has joined #openstack-powervm13:39
thorst_: Ashana: well, that fails fast  13:40
adreznec: Ashana: thorst_: How fast is fast?  13:40
thorst_: adreznec: 3 seconds  13:40
thorst_: I think I see the problem though  13:40
adreznec: Wow yep, fast  13:40
thorst_: did you run 'setup-infrastructure' before 'setup-openstack'?  13:41
Ashana: yes i did  13:42
Ashana: I just ran it again in VNC  13:43
adreznec: Ashana: Not connected, but I assume it succeeded?  13:44
adreznec: When you ran it the first time, that is  13:44
Ashana: yes it did  13:44
thorst_: I see it running now...let's let that finish and see what happens  13:45
thorst_: it looks like this one could take a while  13:45
*** tjakobs has quit IRC13:48
thorst_: Ashana: do you know roughly how much time setup-infrastructure took the first time?  It looks like it's at that 'requirement wheels' step right now...which is where it'll build the x86 ones (I think) and we need it to also include the ppc64le ones  13:59
thorst_: esberglu: let's for now shut off the ssh test?  14:03
thorst_: and handle that one in the staging env?  14:03
esberglu: I have a change up already for that.  14:03
openstackgerrit: Drew Thorstensen proposed openstack/nova-powervm: Initial LB VIF Type
thorst_: ahh, let me go +2 that one  14:04
openstackgerrit: Merged openstack/nova-powervm: Support override migration flags
svenkat: thorst: ssp volume resize is supported.  14:08
thorst_: svenkat: know the vios command for it?  14:08
svenkat: i will get it …  14:08
svenkat: thorst: looking into ssp cinder driver…  14:09
svenkat: thorst: i see the code called _extend_lu.  it is done via k2.  14:12
thorst_: svenkat: thx.  I'll find the vios command.  14:13
*** smatzek has quit IRC14:14
*** tjakobs has joined #openstack-powervm14:15
thorst_: svenkat: for future reference:
svenkat: thorst: thanks.  14:16
thorst_: Ashana: so I saw you highlight a mariadb install error.  14:21
*** burgerk has joined #openstack-powervm14:21
Ashana: Yea I was wondering why that happened and why it was ignored  14:22
thorst_: maybe because it's already there or something  14:22
thorst_: but that is odd  14:22
thorst_: Ashana: so I just kicked off the setup-openstack  14:25
thorst_: and it is definitely not failing fast...we may be through it yet?  14:26
*** tblakeslee has joined #openstack-powervm14:26
Ashana: yea it looks like it's going good  14:26
*** erlarese has quit IRC14:28
*** arnoldje has joined #openstack-powervm14:36
thorst_: Ashana: so I do eventually expect this to fail  14:38
thorst_: cause it SHOULD fail on the compute node  14:38
*** kotra03 has joined #openstack-powervm14:39
Ashana: yep it just failed on the infra01_glance_container-8587e81d  14:39
*** efried has joined #openstack-powervm14:41
thorst_: I'm not super sure what is going on there  14:43
thorst_: but I can say...I'm a bit worried cause the disk we made was only 40 GB....  14:43
adreznec: thorst_: The controller disk?  14:43
thorst_: adreznec: yeah...  14:43
thorst_: that's not THIS error  14:43
adreznec: Yeah... that might be too small  14:43
adreznec: I know minimum AIO is 60GB  14:43
adreznec: And this is relatively close to AIO  14:44
thorst_: hmm...well we're OK for now  14:44
thorst_: let's work through this, then figure out what we may need to do there...  14:44
thorst_: Ashana: I think you can work through this by lxc-attach to the glance container  14:45
adreznec: Fair enough  14:45
thorst_: and run that apt-get install command  14:45
adreznec: What failed?  14:45
thorst_: but be sure to use the --allow-unauthenticated option  14:45
Ashana: alrighty, what am I installing  14:46
thorst_: the glance container tried to install a bunch of stuff  14:46
thorst_: I'll ping you the VNC to look at  14:46
adreznec: Ah yeah, I bet it's because they're coming from that external repo  14:49
adreznec: I thought they were adding a key for that repo though, similar to what gets done for the novalink repo for example  14:49
adreznec: Ashana: We should double check that the apt keys are getting added properly  14:50
adreznec: That should have taken care of this package auth issue  14:50
*** kylek3h has joined #openstack-powervm14:52
*** efried has quit IRC14:53
thorst_: adreznec: where does that auth key get added?  14:54
thorst_: is that something we should propose  14:54
*** efried has joined #openstack-powervm14:54
*** tblakeslee has quit IRC15:01
*** tblakeslee has joined #openstack-powervm15:04
adreznec: thorst_: Sorry, stepped away to talk to Taylor quick  15:08
adreznec: It should be getting done as part of I believe  15:08
*** tjakobs has quit IRC15:09
thorst_: adreznec: but does that run on every container?  Remember this was the glance container that failed  15:09
adreznec: That's just the general role, so it should run in any container that's calling into the mariadb install  15:09
adreznec: Unless they're not using the generic role in the glance container for some reason  15:09
adreznec: Wait, ignore my previous link. I forgot that they're using Galera to provide mariadb with OSA iirc  15:11
adreznec: I need to look at the glance role quick...  15:11
*** kotra03 has quit IRC15:16
adreznec: Yeah, so the repo comes out of by default, but I'm not seeing any apt-key setup within that role  15:16
adreznec: I'm wondering if that's supposed to be handled as part of the galera_client_gpg_keys in there  15:19
adreznec: But it's not happening for some reason  15:19
thorst_: adreznec: wonder if we need to propose a change up for that?  15:27
adreznec: thorst_: Yeah idk... I mean this is obviously working in the gate, so I'm not sure if this is just something that's broken in our environment or what  15:28
adreznec: Or if the gate image somehow gets that key another way  15:28
Ashana: So I think I did all the unauthenticated packages. Should I try running it again? The packages were mysql-common, libmysqlclient18, mariadb-common, libmariadbclient18, libmariadbclient-dev, mariadb-client-core-10.0, mariadb-client-10.0, mariadb-client  15:31
*** openstackgerrit has quit IRC15:34
*** openstackgerrit has joined #openstack-powervm15:34
thorst_: Ashana: yea, try it again  15:41
*** mdrabe has quit IRC15:49
Ashana: @thorst_ I have to install those packages on compute  15:51
thorst_: Ashana: really?  where does it say that?  15:52
Ashana: the compute node just failed in the VNC saying those packages are unauthenticated  15:53
thorst_: ahh, yeah  15:54
thorst_: but compute won't have containers (at least it SHOULDN'T)  15:54
Ashana: that's true. And this task is still hanging on  15:54
*** smatzek has joined #openstack-powervm16:00
*** mdrabe has joined #openstack-powervm16:11
*** seroyer has joined #openstack-powervm16:12
*** k0da has quit IRC16:19
*** tblakeslee has quit IRC16:22
*** tblakeslee has joined #openstack-powervm16:23
thorst_: Ashana: sorry .... had to step away a bit.  Should I be looking at anything for the environment now?  16:35
*** tblakeslee has quit IRC16:37
*** kotra03 has joined #openstack-powervm16:43
efried: thorst_, svenkat: Care to talk through some of the design details of vNIC creation methods in pypowervm?  17:02
svenkat: sure.. i was able to create a couple of vnic ports on an active lpar using hmc on our dev novalink.  17:08
Ashana: thorst_: No, it was just that the compute node failed, so I will install those packages on there - or should I do that?  17:09
efried: svenkat, Can you get an XML dump?  17:09
svenkat: xml dump using pvmctl lpar list --xml?  17:10
svenkat: in that, vnic is not showing up  17:10
svenkat: was talking to Nilam about that and he said there is no code yet to pull vnic details in pvmctl  17:12
efried: svenkat, right - we'll have to do it via the K2 REST API.  17:13
efried: (Can I say "K2"?  The HMC REST API.)  17:14
thorst_: Ashana: Yeah, go ahead and do those installs on the compute node  17:15
thorst_: sorry, was AFK  17:15
*** seroyer has quit IRC17:16
efried: thorst_, svenkat: I've been walking through some of the models for vNIC management in pypowervm.  Some interesting questions come out of this.  17:18
efried: The data model informs a lot of what we can do  17:19
efried: It's like this:  17:19
efried: You create a vNIC via PUT /rest/api/uom/LogicalPartition/{uuid}/VirtualNICDedicated  17:19
efried: The payload, a <VirtualNICDedicated/>, has certain 1:1 properties, and then a list of backing devices.  17:20
efried: The assignable 1:1 properties include: slot designation (specific, or "use next"); PVID & PVID priority; Allowed VLAN IDs; MAC address  17:21
svenkat: ok… looking at vnic gui on hmc simultaneously  17:22
efried: A backing device comprises 1) pointers to (VIOS, adapter, phys port), and 2) capacity %.  17:22
svenkat: ok… the list is complete now.. matches with what we have in hmc ui  17:23
efried: So there are several ways we could allow the user to set up a vNIC in pypowervm.  17:23
efried: Generally all of our wrappers have a .bld() method that takes required and optional params according to all the things you are required/able to set *only* on creation.  17:24
svenkat: ok.. agreed.  17:25
efried: In this case, the slot designation would be the obvious one  17:25
svenkat: should it be driven by last byte of mac as we do now? and range of 32-64, etc?  17:26
svenkat: and pick next for subsequent ports  17:26
thorst_: svenkat: no, we shouldn't do any of that mac-to-slot stuff  17:26
thorst_: that was for P6 way back when  17:26
efried: I'm not worried about that aspect of things.  17:27
thorst_: we don't do any of that anymore...  17:27
efried: Right now I'm just trying to figure out how to allow/require the user to set up which parts of the vNIC.  17:27
efried: I believe the other (non-slot) stuff - PVID/VLAN/MAC stuff - is all settable after the fact; so would not be necessary in .bld().  17:28
efried: Question is, do we want to allow/require backing devs to be assigned via bld()?  17:28
svenkat: sure. agree. Capacity is needed at build?  17:28
efried: Capacity, as far as I can tell from the schema, is a property on the *backing* devs, not on the outer vNIC thingy.  17:28
efried: I see DesiredCapacityPercentage on the vNIC, but it's designated as ROO (read-only optional).  17:29
efried: This means it can't be set - at least if the schema designation is correct.  17:29
thorst_: efried: Just because it's settable after the fact doesn't mean that we don't require it on the create  17:29
thorst_: PVID we should require.  Mac, VLANs, Slot should all be optional?  17:29
efried: thorst_, fair enough.  We can hash out those details later.  I'm going broader strokes right now.  17:30
svenkat: ok… i think backing device must be mandatory to create vnic. how do you do it otherwise.  17:30
thorst_: same with capacity...but figuring out how much capacity you have left is kinda a big deal  17:30
efried: To create, yes - you can't PUT until you've got that stuff set up.  But I need to be able to set up the backing devs after bld().  The reasons will become apparent in a bit...  17:30
efried: Back to capacity: On the backing device, I see CurrentCapacityPercentage, which is designated COD (Create-Only Defaulting).  17:31
thorst_: back in a sec.  17:32
efried: So - again assuming the schema is correct - if you want to designate capacities, it must be done on a per-backing-dev basis (which makes sense).  If not specified, it defaults (to the MinimumEthernetCapacityGranularity on the physical port).  17:32
efried: Which, to me, means that it doesn't make a whole lot of sense to have a 'capacity' param in VNIC.bld().  17:33
efried: The only way that could possibly work is if you *also* specified the backing devs to bld() (whereupon we would use that capacity for all of 'em) - otherwise it would be ignored, which would be confusing.  So I say we do *not* have a capacity param in bld.  17:34
efried: So before we decide whether to *allow* backing devices to be specified to bld() - let's walk through the reasons you must at least have the *ability* to specify them after the fact.  17:35
efried: Sums up easily as: automatic/dynamic anti-affinity.  17:35
svenkat: ok.. so set up vnic using bld() and then pick which adapter/pp it's attached to…  17:36
svenkat: but all of these are supposed to happen in plug itself, isn't it?  17:36
efried: plug is not the only usage scenario.  17:36
efried: I believe plug will want to use the automatic/dynamic anti-affinity every time.  17:37
efried: I believe that algorithm will accept (vnic_wrapper, pports, vioses, min_redundancy, max_redundancy)  17:38
efried: it will update the vnic_wrapper's backing devices.  17:38
efried: If it can't find enough VF-ness to satisfy min_redundancy, error.  17:38
svenkat: ok… but to be clear, what other scenario other than plug is involved here to create vnic?  17:39
efried: And it will assign at most min(max_redundancy, len(pports)) backing devs.  17:39
efried: So the anti-affinity algorithm will try to figure out the optimal distribution of pports across cards & VIOSes, whereupon it will create the backing dev element wrappers and stuff 'em onto the vnic_wrapper.  17:40
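[The anti-affinity selection efried describes above might look roughly like the toy sketch below. All names here (PPort, pick_backing_devs) are invented for illustration - this is not the pypowervm API, just the "spread backing devs across cards and VIOSes, up to min(max_redundancy, len(pports))" idea.]

```python
# Toy sketch of anti-affinity backing-device selection. Hypothetical names;
# not the pypowervm implementation.
from collections import namedtuple
from itertools import groupby

# A candidate physical port: which VIOS owns it and which adapter card it's on.
PPort = namedtuple('PPort', ['name', 'vios', 'card'])


def pick_backing_devs(pports, min_redundancy, max_redundancy):
    """Pick up to min(max_redundancy, len(pports)) ports, spreading the
    choices across distinct (vios, card) pairs; raise if min_redundancy
    cannot be satisfied."""
    want = min(max_redundancy, len(pports))
    if want < min_redundancy:
        raise ValueError('Not enough physical ports for min_redundancy=%d'
                         % min_redundancy)
    # Group candidates by (vios, card) so we round-robin across groups,
    # taking one port from each group before taking a second from any.
    keyfunc = lambda p: (p.vios, p.card)
    groups = [list(g) for _, g in
              groupby(sorted(pports, key=keyfunc), keyfunc)]
    chosen = []
    while len(chosen) < want:
        for grp in groups:
            if grp and len(chosen) < want:
                chosen.append(grp.pop(0))
    return chosen
```

With two ports on one (VIOS, card) pair and one on another, asking for two backing devs picks one from each pair rather than two from the same card.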
svenkat: pvmctl will create vnic without backing device, why? (i guess there will be a pvmctl to attach backing device to a vnic). but.  17:40
svenkat: ok… so the algorithm will expect vnics to be created already.  17:41
efried: no, pvmctl will need to be able to create the whole vnic in one whack.  17:41
efried: You can't create a vnic on the server without specifying all of this stuff.  17:41
efried: Though I'm actually not sure whether you can modify the list of backing devices on the fly - can you see any indication of that one way or another in the HMC GUI?  17:41
svenkat: let me look to see if there is any edit facility  17:42
efried: I'm asking Nilam too.  17:42
svenkat: vnic modify lets me update only port vlan id  17:42
svenkat: everything else is readonly  17:43
efried: According to seroyer, it is possible.  17:44
efried: Which makes for some interesting possibilities.  17:45
efried: Like: creating with one backing device, then adding others after the fact.  17:45
svenkat: ok.. adding will result in redundancy scenario only (like moving from 1 backing device to multiple)  17:46
efried: svenkat, thorst_: So back to the pypowervm usage model: we certainly need the ability to modify the backing dev list on an existing vnic wrapper.  17:54
Ashana: thorst_: so I installed the packages on the compute node, but I get "No package matching 'libmariadbclient-dev'" - it's not available on 16.04 ubuntu, so the compute node keeps failing since the package isn't available  17:55
efried: I don't see any harm in allowing a list of backing devs to be passed to VNIC.bld().  17:55
svenkat: ok… this is mainly due to the need for redundancy support, is that right.  17:55
efried: svenkat, for sure.  17:55
efried: But I've been saying that I'm not planning to treat redundancy as a separate usage model in pypowervm.  17:55
efried: You have a list of backing devices.  17:55
efried: If it's a list of one, no redundancy.  17:56
svenkat: yes.. i agree.  17:56
efried: More than one, redundancy.  17:56
efried: If you create the list yourself, you get whatever anti-affinity you can come up with.  17:56
*** seroyer has joined #openstack-powervm17:56
svenkat: so you start with no backing device, then add one. so far this is not redundancy..  17:57
efried: If you let our special algorithm get after it, we figure out the optimal anti-affinity and set up the backing devs for you.  17:57
svenkat: then you add one more, it becomes a redundant vnic  17:57
svenkat: did i say it right  17:57
efried: svenkat, that's for the (potential, brainstorm-level, proposed) pvmctl model.  17:57
efried: I don't necessarily think that's the right model for the community code.  17:57
efried: Because we generally want to avoid making more REST calls than we have to.  17:57
seroyer: efried: I missed part of this discussion, but you can't start with no backing device.  You must start with one backing device.  There is no concept of a vNIC without a backing device.  17:58
efried: we have to consider out-of-band changes, e.g. saturation of a pport we wanted to use as a backing device.  17:58
efried: (seroyer, understood)  17:58
efried: That would cause the whole PUT op to fail.  17:58
efried: So we would have to redrive from the start.  17:58
efried: Which may be fine  17:58
efried: Or we may prefer to add one at a time, absorbing failures and moving on to the next candidate, until we've reached the desired redundancy level.  17:59
efried: It's going to be a matter of whether we want to try to assess the port saturation stuff on the client side (would still need to handle failures from the server), or just abdicate all of the validation to the server.  17:59
thorst_: efried: I think we do have to know about other usage...but we don't have to deal with an admin adding a pport to one of our VMs' vnics?  18:00
*** apearson has quit IRC18:01
efried: thorst_, I'm not so much concerned about that as about multiple nova threads going on at once.  18:02
efried: My thread looks at capacities and thinks we've got enough to create a vnic with a particular set of ports.  18:03
efried: But meanwhile, seventeen other VMs got created and saturated that port.  18:03
efried: So our PUT will fail because we have stale info.  18:03
efried: But this isn't a situation where etags work for us.  18:03
efried: Because we're not PUTting on the pport.  18:03
efried: So no matter how recently we did our GET, we could still have out-of-date information.  18:03
efried: So we need to handle server-side errors in some consistent and reliable way.  18:04
efried: We basically have two choices: all-or-nothing and one-at-a-time.  18:04
thorst_: efried: Agree...  18:05
thorst_: but I think we can put that against REST somehow  18:05
efried: In all-or-nothing, we put together the whole package of backing devs at once, and if we get back a server failure (aside: need to be able to identify that failure *specifically* as "you tried to exceed the capacity of a pport") then we need to rebuild the whole package and try again.  18:05
efried: One-at-a-time is simpler: We create with the first one and then add one at a time; and on each iteration, if we get a failure (aside: requires same error identification as above), we move on to the next possible candidate; iterate until we've satisfied our redundancy requirement.  18:06
efried: Strike "is simpler".  There are some edge cases that could be pretty hairy there.  Like if we run out of pport candidates, do we go back and try to iterate over the list again?  After all, they could have freed up some capacity while we've been diddling around.  18:07
thorst_: efried: KISS  18:08
efried: I feel like all-or-nothing could allow us to take advantage of our @retry decorator (though not, alas, our FeedTask infra).  18:08
thorst_: fail the whole op  18:08
thorst_: let's see how often this fails  18:08
*** tblakeslee has joined #openstack-powervm18:08
thorst_: I doubt it'll be often...  18:08
thorst_: this seems 'edge'  18:08
efried: thorst_, but at least make an attempt to use unsaturated ports, presumably.  18:09
efried: But I agree, definitely KISS to begin with.  We can screw with the fine details later.  18:10
efried: Ergo I suppose we don't need the deterministically-identifiable REST failure, at least at the start - it'll just blow through as a generic 400/500.  18:11
efried: Though it wouldn't be a bad idea to ask for that up front in anticipation, so we don't end up behind.  18:11
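[The "one-at-a-time" strategy efried described at 18:06 can be sketched as a short loop. This is a hypothetical illustration, not real pypowervm code: attach_fn stands in for the REST call that attaches one backing device, returning True on success and False on a capacity rejection.]

```python
# Toy sketch of one-at-a-time backing-device attachment: absorb per-port
# failures (e.g. a pport saturated by a concurrent deploy) and move on to
# the next candidate. Names are hypothetical.

def build_redundancy(candidates, min_redundancy, attach_fn):
    """Try candidates in order until min_redundancy backing devs attach.

    Returns the list of attached candidates; raises RuntimeError if the
    candidate list is exhausted first (the hairy edge case noted above -
    this sketch does not re-iterate over the list).
    """
    attached = []
    for cand in candidates:
        if len(attached) >= min_redundancy:
            break
        if attach_fn(cand):      # server may reject: our capacity info is stale
            attached.append(cand)
    if len(attached) < min_redundancy:
        raise RuntimeError('Only %d of %d backing devs attached'
                           % (len(attached), min_redundancy))
    return attached
```

The all-or-nothing alternative would instead wrap a single PUT of the whole backing-dev package in a retry loop, rebuilding the package on each capacity failure.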
*** apearson has joined #openstack-powervm18:13
thorst_: chongshi could probably add that  18:14
efried: thorst_, not sure that's his bailiwick - might be nvcastet's.  18:18
efried: Or mbuttard.  18:19
svenkat: efried: on another topic, for pci_passthrough_whitelist, we decided to go with a new field physloc.  18:24
svenkat: will it be only the location code?  18:24
svenkat: what will be the format of it? adapterid - location code and comma separated?  18:25
efried: I think just the physloc  18:25
svenkat: so a comma separated physloc into one value  18:25
efried: One whitelist entry per port  18:25
svenkat: in nova.conf, separate physloc=<loc code> entries?  18:26
efried: pci_passthrough_whitelist = [{"physloc": "xxxx", "physical_network": "foo_net"}, {"physloc": "yyyy", "physical_network": "foo_net"},  18:27
efried: {"physloc": "aaaa", "physical_network": "bar_net"}, {"physloc": "bbbb", "physical_network": "bar_net"}, ...]  18:27
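[nova's pci_passthrough_whitelist is a multi-valued option, so the entries efried pasted could also appear as repeated lines in nova.conf. A sketch of the format under discussion - note "physloc" was a proposed new field in this conversation, not an existing nova option, and the location codes are placeholders:]

```ini
[DEFAULT]
# One whitelist entry per SR-IOV physical port; "physloc" is the proposed
# physical location code field, "physical_network" maps it to a neutron net.
pci_passthrough_whitelist = {"physloc": "xxxx", "physical_network": "foo_net"}
pci_passthrough_whitelist = {"physloc": "yyyy", "physical_network": "foo_net"}
pci_passthrough_whitelist = {"physloc": "aaaa", "physical_network": "bar_net"}
pci_passthrough_whitelist = {"physloc": "bbbb", "physical_network": "bar_net"}
```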
svenkat: and about vif types for vf direct and vnic vif types, last night you pinged us about no free form in glance metadata  18:28
efried: Right, which doesn't matter for now because we're not supporting direct VF in the community code for Newton.  18:28
svenkat: ok.. so vnic is the only path - as in PowerVC.  18:29
efried: What's PowerVC?  18:29
efried: I don't think we need to diddle with the glance metadata at all for the community code.  18:29
efried: Because I *think* there will only be the one possibility for a given setup.  18:30
efried: thorst_ may want to weigh in here.  18:30
efried: It would be technically possible for us to have the same network set up to allow either SEA or SRIOV at the same time.  18:30
efried: Whereupon we would conceivably make the decision on the fly based on the glance metadata.  18:31
efried: But not sure we want to support all of that out of the gate.  18:32
efried: Something to put in the blueprint, though, svenkat - assuming thorst_ agrees it is technically possible.  18:36
efried: Actually, SRIOV-backed SEA support is a definite topic for the blueprint.  This should definitely be supported.  But as with existing SEA-backed support, you would need to set it up beforehand.  18:38
svenkat: yes. agree.  18:38
efried: The setup would look like: Configure VF promisc ports (via pvmctl) and assign to VIOS(es); create SEAs using those VFs.  And from that point on, it's the same as today.  18:39
efried: Should be no code changes to support that.  18:40
svenkat: yes. agree.  18:40
efried: (no code changes in community - obviously pvmctl will have to support VF create-and-assign-to-VIOS.)  18:40
efried: (But we knew that anyway - for installer etc.)  18:41
thorst_: we should tolerate SEA being backed by SR-IOV...but not provision it from OpenStack  18:47
openstackgerrit: Eric Berglund proposed openstack/nova-powervm: DNM: CI Check2
*** kotra03 has quit IRC19:02
*** apearson has quit IRC19:04
*** seroyer has quit IRC19:06
*** apearson has joined #openstack-powervm19:06
*** seroyer has joined #openstack-powervm19:06
thorst_: adreznec: I think efried wants you to be the +2 here -
efried: thorst_, adreznec: for sure.  19:11
efried: I +1ed.  19:12
thorst_: and I'm definitely looking for a +2 today if possible  :-)  19:12
efried: thorst_, tested?  19:12
thorst_: efried: yep  19:12
thorst_: more iterations later...but it's become a dependency for other things  19:13
*** tblakeslee has quit IRC19:47
*** apearson has quit IRC20:01
*** apearson has joined #openstack-powervm20:04
*** apearson has quit IRC20:11
*** apearson has joined #openstack-powervm20:11
*** tblakeslee has joined #openstack-powervm20:14
*** openstackstatus has joined #openstack-powervm20:19
*** ChanServ sets mode: +v openstackstatus20:19
*** k0da has joined #openstack-powervm20:42
*** k0da has quit IRC20:52
*** svenkat has quit IRC20:53
*** k0da has joined #openstack-powervm20:56
tpeoples: thorst_: "This code requires that it is run on the PowerVM Compute Host directly." for the ceilometer_powervm inspector. Would you be opposed to a patch that either allows the powervm inspector to take a session object or allows it to use the  21:02
tpeoples: CONF settings instead of assuming localhost for the session?  21:02
tpeoples: (FWIW, I have it running outside of the compute host directly and it works fine)  21:02
thorst_: tpeoples: hmm...we have not done that elsewhere  21:10
thorst_: let me think it through a little bit...we've been against that generally because you should be on the node itself  21:10
thorst_: but that may be more nova/neutron?  21:10
thorst_: I need to run...will get back to you  21:11
*** thorst_ has quit IRC21:11
*** thorst_ has joined #openstack-powervm21:11
*** smatzek has quit IRC21:14
*** thorst_ has quit IRC21:16
*** thorst_ has joined #openstack-powervm21:23
*** mdrabe has quit IRC21:25
*** thorst_ has quit IRC21:28
*** Ashana has quit IRC21:31
*** tblakeslee has quit IRC21:32
*** tblakeslee has joined #openstack-powervm21:35
*** Ashana has joined #openstack-powervm21:38
openstackgerrit: Eric Berglund proposed openstack/nova-powervm: DNM1
openstackgerrit: Eric Berglund proposed openstack/nova-powervm: DNM: Test Change Set 2
*** Ashana has quit IRC21:42
*** thorst_ has joined #openstack-powervm21:44
*** Ashana has joined #openstack-powervm21:44
*** thorst__ has joined #openstack-powervm21:46
*** burgerk has quit IRC21:46
*** Ashana has quit IRC21:48
*** thorst_ has quit IRC21:49
*** Ashana has joined #openstack-powervm21:50
*** thorst__ has quit IRC21:50
*** Ashana has quit IRC21:55
*** Ashana has joined #openstack-powervm21:55
*** Ashana has quit IRC22:00
*** Ashana has joined #openstack-powervm22:07
*** Ashana has quit IRC22:12
*** Ashana has joined #openstack-powervm22:13
*** kylek3h has quit IRC22:16
*** seroyer has quit IRC22:18
*** Ashana has quit IRC22:18
*** Ashana has joined #openstack-powervm22:19
*** Ashana has quit IRC22:23
*** Ashana has joined #openstack-powervm22:25
*** Ashana has quit IRC22:30
*** Ashana has joined #openstack-powervm22:31
*** Ashana has quit IRC22:35
*** Ashana has joined #openstack-powervm22:37
*** arnoldje has quit IRC22:38
*** Ashana has quit IRC22:42
*** Ashana has joined #openstack-powervm22:43
*** Ashana has quit IRC22:47
*** Ashana has joined #openstack-powervm22:49
*** k0da has quit IRC22:49
*** Ashana has quit IRC22:53
*** Ashana has joined #openstack-powervm22:55
*** Ashana has quit IRC22:59
*** Ashana has joined #openstack-powervm23:01
*** Ashana has quit IRC23:05
*** Ashana has joined #openstack-powervm23:07
*** edmondsw has quit IRC23:07
*** Ashana has quit IRC23:11
*** Ashana has joined #openstack-powervm23:12
*** Ashana has quit IRC23:17
*** Ashana has joined #openstack-powervm23:18
*** Ashana has quit IRC23:23
*** Ashana has joined #openstack-powervm23:24
*** Ashana has quit IRC23:29
*** Ashana has joined #openstack-powervm23:30
*** Ashana has quit IRC23:34
*** Ashana has joined #openstack-powervm23:38
*** Ashana has quit IRC23:42
*** Ashana has joined #openstack-powervm23:44
*** Ashana has quit IRC23:48
*** Ashana has joined #openstack-powervm23:49
*** tblakeslee has quit IRC23:52
*** Ashana has quit IRC23:53
*** Ashana has joined #openstack-powervm23:55

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt/irclog2html/!