Tuesday, 2017-09-05

*** thorst_afk has joined #openstack-powervm 00:24
*** edmondsw has quit IRC 00:26
*** chas has joined #openstack-powervm 00:49
*** chas has quit IRC 00:53
*** thorst_afk has quit IRC 00:55
*** thorst_afk has joined #openstack-powervm 01:39
*** thorst_afk has quit IRC 01:39
*** thorst_afk has joined #openstack-powervm 02:17
*** thorst_afk has quit IRC 02:17
*** chas has joined #openstack-powervm 02:49
*** chas has quit IRC 02:54
*** thorst_afk has joined #openstack-powervm 03:18
*** thorst_afk has quit IRC 03:23
*** thorst_afk has joined #openstack-powervm 04:19
*** thorst_afk has quit IRC 04:23
*** kairo has quit IRC 04:49
*** chas has joined #openstack-powervm 04:50
*** chas has quit IRC 04:55
*** thorst_afk has joined #openstack-powervm 05:20
*** thorst_afk has quit IRC 05:24
*** kairo has joined #openstack-powervm 05:54
*** kairo has quit IRC 06:17
*** thorst_afk has joined #openstack-powervm 06:20
*** chas has joined #openstack-powervm 06:22
*** thorst_afk has quit IRC 06:25
*** thorst_afk has joined #openstack-powervm 07:22
*** thorst_afk has quit IRC 07:26
*** k0da has joined #openstack-powervm 07:27
*** k0da has quit IRC 07:34
*** k0da has joined #openstack-powervm 07:48
*** k0da has quit IRC 08:17
*** thorst_afk has joined #openstack-powervm 08:22
*** thorst_afk has quit IRC 08:27
*** k0da has joined #openstack-powervm 08:34
*** kairo has joined #openstack-powervm 09:16
*** thorst_afk has joined #openstack-powervm 09:23
*** thorst_afk has quit IRC 09:28
*** k0da has quit IRC 10:05
*** thorst_afk has joined #openstack-powervm 10:24
*** thorst_afk has quit IRC 10:28
*** smatzek has joined #openstack-powervm 11:21
*** thorst_afk has joined #openstack-powervm 11:25
*** fried_rice is now known as efried 11:29
*** thorst_afk has quit IRC 11:33
*** thorst_afk has joined #openstack-powervm 12:00
*** edmondsw has joined #openstack-powervm 12:21
*** edmondsw has quit IRC 12:26
*** edmondsw has joined #openstack-powervm 12:27
*** edmondsw has quit IRC 12:31
*** apearson has joined #openstack-powervm 12:53
*** esberglu has joined #openstack-powervm 12:57
*** kylek3h has joined #openstack-powervm 12:58
*** edmondsw has joined #openstack-powervm 12:58
<esberglu> #startmeeting powervm_driver_meeting 13:00
<openstack> Meeting started Tue Sep  5 13:00:09 2017 UTC and is due to finish in 60 minutes.  The chair is esberglu. Information about MeetBot at http://wiki.debian.org/MeetBot. 13:00
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. 13:00
<openstack> The meeting name has been set to 'powervm_driver_meeting' 13:00
<esberglu> #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda 13:01
<esberglu> #topic In-Tree Driver 13:01
<esberglu> Planning on starting the pike spec today in the background 13:02
<esberglu> Might have some questions 13:02
<edmondsw> efried did you mark the pike one implemented? 13:02
<esberglu> queens spec 13:02
<efried> I think mriedem did 13:02
<edmondsw> cool... I thought he had, but then you were talking about it on Friday and I assumed I'd missed something 13:03
<efried> oh, I didn't wind up moving it from approved/ to implemented/ because it appears there's a script to do that for all of them, and I don't think that's my responsibility. 13:03
<efried> well, to rephrase, I'm not sure it would be appreciated if I proposed that. 13:03
<edmondsw> oh, interesting 13:03
<edmondsw> as long as we're in sync with everything else 13:04
<efried> Yeah, nothing has been moved yet. 13:04
<edmondsw> esberglu how is config drive going? 13:04
<esberglu> edmondsw: Good, I think I'm ready for the first wave of reviews on the IT patch 13:05
<edmondsw> what have you done to test it? 13:05
<edmondsw> I assume there are functional tests that we can start running in the CI related to this... have you tried those with it? 13:06
<esberglu> Still need to finish up the UT on the pypowervm side for removal of the host_uuid, which is needed for that 13:06
<esberglu> Yeah, there is a tempest conf option FORCE_CONFIG_DRIVE 13:06
<esberglu> I have done a couple manual runs with that set to true 13:06
<esberglu> And have looked through the logs to make sure that it is hitting the right code paths 13:06
<esberglu> Everything looked fine for those 13:07
<efried> Manually you can also do a spawn and then look to make sure the device and its SCSI mapping exist. 13:07
<esberglu> I have not done any testing of spawns from the CLI 13:07
<esberglu> Only through tempest so far 13:07
<edmondsw> k, let's try the CLI as well just to be safe 13:08
<esberglu> #action: esberglu: Test config drive patch from CLI 13:08
<edmondsw> and make sure that what you tried to get config drive to do was actually done on the guest OS 13:08
<edmondsw> e.g. setting the hostname 13:09
<esberglu> Yep, will do 13:09
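[Editor's note: the FORCE_CONFIG_DRIVE toggle discussed above is, in devstack-based CI setups, typically a local.conf variable that forces nova to attach a config drive to every instance. A minimal sketch, assuming a devstack-style environment; the exact wiring in this CI's own job scripts may differ.]

```ini
# Sketch only -- assumes a devstack-style local.conf; the PowerVM CI may
# pass FORCE_CONFIG_DRIVE through its own scripts instead.
[[local|localrc]]
FORCE_CONFIG_DRIVE=True
```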
<thorst_afk> efried: you started posting in nova 13:09
<efried> thorst_afk Eh? 13:09
<efried> Posting what? 13:09
<mdrabe> Does IT support LPM yet? 13:09
<thorst_afk> nm... I misread :-) 13:10
<edmondsw> mdrabe not yet... that's one of our TODOs for queens 13:10
<mdrabe> K, was just thinking about the problems that vopt causes 13:10
<efried> LPM might be ambitious for Q 13:10
<edmondsw> efried oh, you're right... that was NOT a TODO for queens... 13:11
<efried> mdrabe We have that stuff solved for OOT; do you see any reason the solution would be different IT? 13:11
<mdrabe> I don't think so 13:11
<edmondsw> anybody else have something to discuss IT? 13:13
<esberglu> #topic Out-of-Tree Driver 13:14
<esberglu> Anything to discuss here? 13:14
<efried> Pursuant to above, the rename from approved/ to implemented/ is already proposed here: https://review.openstack.org/#/c/500369/2 13:14
<efried> (for the pike spec) 13:14
<edmondsw> efried cool 13:15
<edmondsw> mdrabe any updates on what you were working on? 13:15
<mdrabe> The PPT ratio stuff has been delivered in pypowervm 13:16
*** k0da has joined #openstack-powervm 13:16
<mdrabe> I've tested the teeny nova-powervm bits, but I wanted to get with Satish on merging that 13:16
<edmondsw> mdrabe and nova-powervm? 13:17
<mdrabe> (Also the nova-powervm side has been merged internally) 13:17
<edmondsw> anything else OOT? 13:17
<efried> I think I went through on Friday and cleaned up some oldies. 13:18
<edmondsw> there is an iSCSI-related pypowervm change from tjakobs that I need to review 13:18
<edmondsw> thorst_afk what are we doing with 5531? 13:18
<thorst_afk> edmondsw: looking 13:19
<edmondsw> been sitting a while 13:19
<thorst_afk> I don't think we need that 13:19
<thorst_afk> and if we want it, we can punt to a later pypowervm 13:20
<edmondsw> thorst_afk abandon? 13:20
<thorst_afk> but the OVS update proposed doesn't require it 13:20
<thorst_afk> can do 13:20
<edmondsw> anything else? 13:20
<edmondsw> esberglu next... 13:21
<esberglu> #topic PCI Passthrough 13:21
<efried> Which should be renamed "device passthrough" 13:22
<efried> Because it's not going to be limited to PCI devices. 13:22
<edmondsw> what else? 13:22
<efried> No significant update from last week; been doing a brain dump in prep for the PTG here: https://etherpad.openstack.org/p/nova-ptg-queens-generic-device-management but it's not really ready for anyone else to read yet. 13:22
<efried> At the end of last week I think I started to get the idea of how it's really going to end up working. 13:23
<efried> And I think there's only going to be a couple of things that will be device-specific about it (as distinguishable from any other type of resource) 13:24
<efried> One will be how to transition away from the existing PCI device management setup (see L62 of that etherpad) 13:24
<efried> The other will be how network attachments will be associated with devices when they're generic resources. 13:25
<efried> I'm going to spend the rest of this week thinking through various scenarios and populating the section at L47... 13:25
<efried> ...and possibly putting those into a nice readable RST that we can put up on the screen in Denver. 13:25
<edmondsw> that sounds great 13:26
<edmondsw> even just throwing up the etherpad would be great 13:26
<efried> The premise is that I believe we can handle devices just like any other resource, with some careful (and occasionally creative) modeling of traits etc. 13:26
<efried> So the goal is to enumerate the scenarios and try to describe how each one would fit into that picture. 13:26
<efried> Now, getting this right relies *completely* on nested resource providers. 13:27
<efried> Which aren't done yet, but which I think will be a focus for Q. 13:27
<efried> If they aren't already, the need for device passthrough will be a push in that direction. 13:27
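[Editor's note: to make the "devices as nested resource providers with traits" idea above concrete, here is a toy, dict-based sketch. It is illustrative only: these are not real placement API calls, and the resource class and trait names (CUSTOM_VF, CUSTOM_PHYSNET_DATA) as well as the host name are hypothetical.]

```python
# Toy model of a host whose passthrough-capable devices are nested
# resource providers, each with its own inventory and traits.
# NOT the real placement API -- names and shapes are illustrative only.

def build_device_provider_tree(host_name):
    """Model a compute host; each physical device is a child provider."""
    return {
        "name": host_name,
        # Host-level inventory stays on the root provider.
        "inventories": {"VCPU": 16, "MEMORY_MB": 65536},
        "children": [
            {
                # A physical port exposing virtual functions as inventory,
                # with traits describing qualitative capabilities.
                "name": "%s_pport_1" % host_name,
                "inventories": {"CUSTOM_VF": 4},
                "traits": ["CUSTOM_PHYSNET_DATA"],
            },
        ],
    }

def find_candidates(tree, resource_class, amount, required_trait):
    """Toy allocation-candidate query: children that could satisfy a request."""
    return [
        child["name"]
        for child in tree["children"]
        if child["inventories"].get(resource_class, 0) >= amount
        and required_trait in child["traits"]
    ]

tree = build_device_provider_tree("neo19")
print(find_candidates(tree, "CUSTOM_VF", 1, "CUSTOM_PHYSNET_DATA"))
# prints ['neo19_pport_1']
```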
<edmondsw> so are we being pushed away from doing things first with the current state of things and then again later moving to resource providers? 13:28
<efried> Once that framework is all in place, the onus will be on individual virt drivers to do most of the work as far as inventory reporting, creation of resource classes and traits, etc. 13:28
<efried> What do you mean pushed? 13:28
<edmondsw> is this our choice, or are comments from jaypipes and others making us have to go that way? 13:29
<efried> So as far as the powervm driver is concerned (both in and out of tree), I believe the appropriate plan is for us to implement our hacked-up PCI passthrough using the existing PCI device management subsystem.  Basically clean up the PoCs I've got already proposed. 13:30
<efried> and have that be our baseline for Q. 13:30
<efried> Then whenever the generic resource provider and placement stuff is ready, we transition.  Whether that be Queens or Rocky or whatever. 13:30
<edmondsw> ok, I misunderstood your intentions then 13:30
<edmondsw> sounds good 13:31
<edmondsw> I like doing what we can under the current system in case the resource provider work is delayed 13:31
<edmondsw> but moving to that as soon as we can 13:31
<efried> Right; and to answer the other part of your question: yes, it's Jay et al. (Dan, Ed, Chris, etc.) pushing for the way things are going to be. 13:31
*** zerick has quit IRC 13:31
<efried> I'm tracking it very closely, and imposing myself in the process, to make sure our particular axes are appropriately ground. 13:32
<edmondsw> great, as long as they're not resisting patches that will get things working under the current system 13:32
<efried> But so far the direction seems sane and correct and generic enough to accommodate everyone. 13:32
<efried> Oh, I have no idea about that. 13:32
*** zerick has joined #openstack-powervm 13:33
<efried> We'll have to throw those at the wall and see if they stick. 13:33
<efried> But we can at least get it done for OOT. 13:33
<efried> Which is the important thing for us. 13:33
<edmondsw> mdrabe I need you to look at this resource provider future and assess PowerVC impacts 13:33
<edmondsw> we can talk more about that offline 13:34
<esberglu> Ready to move on? 13:34
<edmondsw> I think so 13:34
<esberglu> #topic PowerVM CI 13:34
<efried> Noticed things are somewhat unhealthy at the moment. 13:34
<efried> At least OOT. 13:35
<efried> https://review.openstack.org/#/c/500099/ https://review.openstack.org/#/c/466425/ 13:35
<esberglu> neo19 is failing to start the compute service with this error 13:35
<edmondsw> http://ci-watch.tintri.com/project?project=nova says the last 7 OOT have passed 13:36
<esberglu> This persisted through an unstack and a stack. Anyone know what that's about? I asked in novalink with no response 13:36
<esberglu> As far as actual tempest runs 13:36
<esberglu> The fix for the REST serialization stuff doesn't appear to have solved the issue 13:37
<edmondsw> esberglu anytime we see HTTP 500 we will have to look at pvm-rest logs 13:37
<esberglu> Still seeing "The physical location code "U8247.22L.2125D5A-V2-C4" either does not exist on the system or the device associated with it is not in AVAILABLE state." 13:37
<esberglu> Need to link up with hsien again for that 13:37
<esberglu> There is this other issue that has been popping up lately 13:38
<esberglu> Other than that, I spent some time looking at the networking-related tempest failures. It seems to be an issue with multiple tests trying to interact with the same network 13:39
<esberglu> I haven't wrapped my head around exactly what's going on there 13:39
<efried> esberglu That rootwrap one - is it occasional or consistent? 13:39
<esberglu> efried: Occasional 13:39
<efried> that's really weird.  If there's no filter for `tee`, there's no filter for `tee`. 13:39
<edmondsw> could be using wildcards that sometimes match and sometimes don't 13:40
<edmondsw> but that is really weird 13:40
<edmondsw> maybe part of the rootwrap setup is sometimes failing? 13:41
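[Editor's note: for context, an oslo.rootwrap filter permitting `tee` would normally look something like the entry below; the file path and exact filter line are illustrative, and the real nova filters file may differ. If such an entry were intermittently absent, or the filters directory misconfigured on some nodes, the sporadic "no filter matched" failure described above would be the symptom.]

```ini
# Hypothetical rootwrap filter entry, e.g. in /etc/nova/rootwrap.d/compute.filters.
# CommandFilter allows the named command (`tee`) to be run as root.
[Filters]
tee: CommandFilter, tee, root
```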
<esberglu> I haven't been keeping this page as up to date as I should, but I started transitioning some local notes to it 13:42
<edmondsw> esberglu I'll try to help you with that offline 13:42
<edmondsw> esberglu could you stop taking local notes and just work out of that etherpad? 13:43
<esberglu> edmondsw: Yeah, that's my plan. The formatting options aren't as robust as I would like, but I can deal 13:44
<edmondsw> just as much as possible 13:44
<esberglu> Anyone know what that neo19 issue is about? Might just reinstall unless anyone has an idea 13:45
<esberglu> (IIRC this error message has been seen before and fixed via reinstall) 13:45
<efried> Seems like we should open a defect and have the VIOS team look at that. 13:45
<esberglu> efried: k 13:46
<esberglu> That's all for CI 13:46
<efried> Can we bump neo19 out of the pool in the meantime? 13:46
<esberglu> efried: Yeah, it already is 13:46
<esberglu> #topic Driver Testing 13:46
<esberglu> Haven't heard from Jay in a while, anyone know where that testing is at? 13:47
<edmondsw> yeah, I've got updates here 13:47
<edmondsw> we have lost Jay 13:47
<edmondsw> he's been pulled off to other things 13:47
<edmondsw> We may have someone else that can help here, or we may not... I will be figuring that out this week 13:47
<edmondsw> long term we can probably assume that testing will be whatever we can do as a dev team via tempest, with no specific tester assigned 13:48
<edmondsw> any questions? 13:49
<esberglu> Not atm 13:49
<esberglu> #topic Open Discussion 13:50
<esberglu> Anything else this week? 13:51
<edmondsw> we should all start thinking about tempest coverage and improving it where we can / should / have time 13:51
<edmondsw> I think that's it for me 13:51
<edmondsw> we have the PTG next week, so probably no meeting 13:51
<esberglu> Yep, I'll cancel 13:52
<esberglu> Have a good week all 13:52
<openstack> Meeting ended Tue Sep  5 13:52:55 2017 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 13:52
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-09-05-13.00.html 13:52
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-09-05-13.00.txt 13:52
<openstack> Log:            http://eavesdrop.openstack.org/meetings/powervm_driver_meeting/2017/powervm_driver_meeting.2017-09-05-13.00.log.html 13:53
*** kairo has quit IRC 14:06
*** miltonm has joined #openstack-powervm 14:17
*** kairo has joined #openstack-powervm 14:48
*** kairo has quit IRC 14:48
*** kairo has joined #openstack-powervm 14:48
<edmondsw> efried have you seen the ML post from Andrey? 15:31
<edmondsw> correct me if I'm wrong, but I thought the number of PFs == number of ports, and if that's the case I don't think we should need a trait for port number 15:31
<efried> edmondsw I was just responding to that. 15:32
<edmondsw> am I right about PFs == ports? 15:33
<efried> edmondsw not sure what you're saying with "number of". 15:36
<efried> A PF is a physical port 15:36
<efried> Andrey is saying he basically wants a way to do HA/anti-affinity across switches and pports. 15:36
<efried> I think he's *almost* there.  But I think he may not quite grasp that placement isn't going to return him *specific* VFs. 15:37
<efried> edmondsw He just popped on at -nova -- going to follow up with him there if you want to listen in. 15:38
*** k0da has quit IRC 15:56
*** chas has quit IRC 18:20
<esberglu> efried: edmondsw: I was able to spawn using the CLI with --config-drive true. Everything looks good at first glance. 18:42
<esberglu> Not sure what else I can verify without networking 18:42
<edmondsw> esberglu better start working on networking... ;) 18:43
<efried> esberglu pvmctl scsi list and make sure the sucker is mapped.  Log into the vterm and see if the hostname is set. 18:43
<efried> If you do those things, I don't know of anything you would need to check that would require networking. 18:43
<efried> and with those two things, I would be satisfied. 18:43
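[Editor's note: the manual verification described above could look roughly like the following from the nova CLI and the NovaLink host. This is a sketch only: flavor and image names are placeholders, and the console-access command varies by environment.]

```shell
# Boot a VM with a config drive attached (flavor/image names are placeholders):
nova boot --flavor m1.tiny --image cirros --config-drive true cfgdrive-test

# On the NovaLink host, confirm the config-drive device and its SCSI mapping:
pvmctl scsi list

# Then open the partition's console (command availability varies by setup)
# and verify the guest actually applied the config-drive data (hostname, etc.).
```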
<edmondsw> esberglu we need to make sure to remember to add FORCE_CONFIG_DRIVE to the CI once this merges, so I hope you've got a TODO for that 18:47
<edmondsw> maybe even go ahead and add that for OOT... 18:47
<esberglu> edmondsw: Already is for OOT 18:48
<edmondsw> ok, good 18:48
*** thorst_afk has quit IRC 18:59
*** thorst_afk has joined #openstack-powervm 18:59
*** thorst_afk has quit IRC 19:04
*** thorst_afk has joined #openstack-powervm 19:11
*** thorst_afk has quit IRC 19:15
*** k0da has joined #openstack-powervm 19:24
*** thorst has joined #openstack-powervm 19:27
*** apearson has quit IRC 19:37
*** esberglu has quit IRC 19:50
*** esberglu has joined #openstack-powervm 19:50
*** esberglu has quit IRC 19:55
*** esberglu has joined #openstack-powervm 20:03
*** esberglu has quit IRC 20:07
*** smatzek has quit IRC 20:14
*** efried is now known as efried_bbiab 20:15
*** edmondsw has quit IRC 20:36
*** edmondsw has joined #openstack-powervm 20:37
*** edmondsw has quit IRC 20:42
*** efried_bbiab is now known as efried 20:52
*** smatzek has joined #openstack-powervm 21:02
*** edmondsw has joined #openstack-powervm 21:16
*** thorst has quit IRC 21:18
*** smatzek has quit IRC 21:19
*** thorst has joined #openstack-powervm 21:37
*** thorst has quit IRC 21:42
*** thorst has joined #openstack-powervm 22:07
*** thorst has quit IRC 22:10
*** edmondsw has quit IRC 22:47
*** edmondsw has joined #openstack-powervm 22:48
*** esberglu has joined #openstack-powervm 22:50
*** edmondsw has quit IRC 22:52
*** esberglu has quit IRC 22:55
*** k0da has quit IRC 23:04
*** efried is now known as efried_zzz 23:07
*** thorst has joined #openstack-powervm 23:11
*** kairo_ has joined #openstack-powervm 23:13
*** kairo has quit IRC 23:15
*** thorst has quit IRC 23:16
*** thorst has joined #openstack-powervm 23:21
*** thorst has quit IRC 23:25
*** kairo_ has quit IRC 23:40
*** kairo has joined #openstack-powervm 23:44
