14:00:48 #startmeeting PowerVM Driver Meeting
14:00:48 Meeting started Tue Jun 19 14:00:48 2018 UTC and is due to finish in 60 minutes. The chair is edmondsw. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:49 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:52 The meeting name has been set to 'powervm_driver_meeting'
14:00:55 o/
14:01:04 #link https://etherpad.openstack.org/p/powervm_driver_meeting_agenda
14:01:17 ping esberglu gman-tx mdrabe mujahidali chhagarw
14:01:31 #topic In-Tree Driver
14:01:35 #link https://etherpad.openstack.org/p/powervm-in-tree-todos
14:02:04 I don't think we have any in-tree updates this week that I can think of... anyone else?
14:02:21 not I
14:02:55 #topic Out-of-Tree Driver
14:03:08 #link https://etherpad.openstack.org/p/powervm-oot-todos
14:03:24 mdrabe I saw you were starting to look at MSP stuff
14:03:33 are you working on nova-powervm changes for that?
14:03:43 Yeah, gonna whip that up in nova-powervm soon
14:03:59 efried sounds like you're off the hook :)
14:04:00 I'm assuming we'll want tests for that in devstack
14:04:03 woot
14:04:27 I'll probably need help with that
14:04:30 mdrabe anything we put into nova-powervm should be tested with devstack
14:04:46 mdrabe I'm going to be writing up a wiki to help folks
14:05:02 mdrabe if you're working on it before that email goes out, ping me
14:05:13 Yep thnx
14:05:43 In reviewing py3-first changes I noticed our tox.inis - at least for networking-powervm, probably the other -powervms as well - are totally borqué. I started looking into it a little bit, but mostly don't really know what I'm doing. We should probably move to stestr before we try too hard to fix the existing unused envs in there.
14:06:18 efried I know essentially nothing about stestr... does it seem like it will be hard to transition?
14:06:53 Probably not. There's an email thread somewhere or other from mtreinish, which is where I would start. And if help is needed, he's the one I would ask.
14:07:08 i.e., you're not taking that further at the moment
14:07:23 priorities :)
14:07:37 I added it to the TODO list
14:07:41 k
14:08:32 esberglu, I think you said you're done porting changes from IT reviews back to OOT, correct?
14:08:43 I removed that work item from the TODO etherpad
14:08:47 edmondsw: No, I'm done with IT follow-ups
14:09:01 I haven't ported yet; I have a couple of local changes that aren't quite done
14:09:08 oh, misunderstood... I'll put it back then
14:09:42 For snapshot and the base disk adapter; haven't looked at the localdisk or vSCSI backports yet, other than stuff I noted while developing
14:10:01 so are you planning to finish that out, or are you done?
14:10:19 if you have a list of things that need to be done there, you could throw it in the etherpad under that item
14:10:46 edmondsw: I'm planning on finishing that. It's all pretty minor changes, so I've just been poking at it when I have a chance
14:10:59 +1 and thank you
14:11:58 chhagarw doesn't seem to be online today, but real quick on iSCSI...
14:12:06 https://review.openstack.org/#/c/567575/ merged
14:12:18 and she is now working on https://review.openstack.org/#/c/576034/
14:12:44 I'm not sure the approach she started with is the best, and I've commented accordingly
14:13:00 would love others' thoughts there
14:13:05 efried mdrabe etc.
14:14:03 oh, I was waiting until you were +2 to even look.
14:15:16 hmm... I can't get to http://review.openstack.org/ now...
14:15:33 just came back
14:15:38 ok yep
14:15:55 I'm trying to set up an env to test https://review.openstack.org/#/c/567964/
14:17:12 tjakobs has been working on a couple of rbd changes that I think we can merge pretty easily once NovaLink and pypowervm support is in
14:17:45 and the change to use the loop backstore type is in a similar boat
14:18:06 I've also been working on docs a bit when I can spare a minute (so not often)
14:18:13 I think that's all for OOT
14:18:28 #topic Device Passthrough
14:18:39 efried ^
14:19:33 Spec for reshape-provider-tree merged. The nrp-in-alloc-cands series hasn't moved much in the last week. Consumer gens getting close, but not there yet.
14:19:51 "consumer gens"?
14:19:55 consumer generations
14:20:02 oh, right
14:20:23 Finally been reviewing cyborg specs; had some back-and-forth with Sundar about os-acc, getting closer on that.
14:20:43 what's os-acc?
14:20:54 is this like os-vif but for accelerators?
14:20:56 like os-vif but for accelerators
14:20:57 yeah.
14:21:00 jinx
14:21:33 tx for commenting on the vTPM spec
14:22:05 #topic PowerVM CI
14:22:10 Not sure if I mentioned last week, but the libvirt update_provider_tree and shared DISK_GB stuff merged, which is a big milestone.
14:22:24 awesome
14:22:28 #link https://etherpad.openstack.org/p/powervm_ci_todos
14:22:41 efried sorry, thought you were done
14:22:52 now I am
14:23:02 mujahidali where do we stand with the CI?
14:23:45 esberglu said yesterday that it was broken and he was working with you to redeploy
14:23:56 currently I am redeploying the compute nodes
14:24:23 esberglu I'm not even seeing powervm in http://ci-watch.tintri.com/project?project=nova anymore
14:24:37 does it get removed if we don't report on anything for a while?
14:25:01 edmondsw: Yeah, it only goes back 24 hours, so if we haven't voted in 24 hours it disappears
14:25:17 mujahidali think we'll be back up today?
14:25:28 mujahidali was having some issues running the Ansible scripts. Are you still stuck on that same issue?
14:25:50 edmondsw: most probably
14:26:18 esberglu: the problem was resolved
14:26:29 What was the issue?
14:27:04 I restarted neo-7
14:27:18 it was pingable, but I wasn't able to ssh in
14:27:48 and most of the VMs are in an error state
14:28:06 I thought you were having another issue with the playbook failing?
14:28:18 Was that error just because neo7 was down?
14:28:50 You sent me a log where the "Update resolv.conf file" task failed
14:29:13 yeah, that was due to the virtual env; ansible was asking for "python netaddr" and it was already installed
14:29:43 so I googled it and did a pip install for the same
14:29:58 mujahidali: Okay cool, let me know if you hit any other issues while deploying
14:30:02 google for the win!
14:30:23 esberglu: thanks for the quick help :)
14:30:33 I've given mujahidali some notes for where we are with multinode CI and the next steps to take there
14:30:54 I'm assuming we want to give that priority over the stable branch vSCSI CI he was doing?
14:31:22 esberglu I thought we were really close to done on the vSCSI stuff
14:31:43 edmondsw: Yeah, I think we had all branches except ocata working
14:31:46 working for queens and pike now, just ocata left?
14:31:49 yeah
14:32:28 esberglu: I am also looking into multinode; will come back to you after trying the steps mentioned in your mail.
14:32:33 if we're going to put the ocata work on hold to focus on multinode, which makes sense to me, let's remove it from the changeset and update the commit to be specific to pike/queens
14:32:43 edmondsw: Honestly, it might not be worth the trouble of porting to ocata for now
14:32:44 so we can go ahead and merge that for those releases and work on ocata separately
14:32:47 Yeah, what you said
14:32:49 esberglu right
14:33:16 you got that, mujahidali?
14:33:27 makes sense
14:33:41 should be a quick/easy change, and then you can focus on multinode (after you get the CI back up, of course)
14:34:14 anything else for CI?
14:34:17 The only other CI task is getting those additional systems added to the CI pool
14:35:07 yeah, will look into that
14:35:23 mujahidali FYI, I hit an issue trying to do a greenfield NovaLink install over the weekend
14:35:34 you will hit the same issue until we update the bootp server
14:35:41 I will try to do that today
14:36:06 ping me if I forget to update you when that's fixed
14:36:10 :)
14:36:13 sure
14:36:30 one other thing I remembered
14:36:41 we had a Slack conversation last week while you were out, mujahidali
14:36:56 about an issue that came up where something updated a file and broke the CI
14:37:09 go back and read that thread when you get a chance
14:37:49 next time we redeploy the CI for a non-emergency (i.e. not what you're doing now), we should talk about updating some things there to prevent that issue in the future
14:38:21 sure, will look into that; I think it's the openstack-ci channel
14:38:58 power-openstack-ci
14:39:16 #topic Open Discussion
14:39:23 anything else before we close?
14:41:33 alright, tx everyone
14:41:36 #endmeeting
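
For reference on the stestr move efried raised at 14:05:43: switching a project from testr to stestr is usually just a matter of adding a .stestr.conf and pointing the tox test command at stestr. A minimal sketch of what that might look like for networking-powervm follows; the test_path value is an assumption about the repo layout (not verified against the actual tree), and mtreinish's mailing-list thread on the community-wide switch remains the authoritative guide.

    # .stestr.conf -- replaces the old .testr.conf
    # (test_path is an assumed location for the unit tests)
    [DEFAULT]
    test_path=./networking_powervm/tests
    top_dir=./

    # tox.ini (relevant fragment) -- commands call stestr directly
    # instead of the old testr wrapper
    [testenv]
    deps = -r{toxinidir}/test-requirements.txt
    commands = stestr run {posargs}

With that in place, a unit test env (e.g. tox -e py35) would invoke stestr directly, and the unused/broken envs in the existing tox.inis could then be pruned or repointed at the same command.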