Monday, 2018-01-08

01:21 *** edmondsw has joined #openstack-powervm
01:26 *** edmondsw has quit IRC
03:09 *** edmondsw has joined #openstack-powervm
03:14 *** edmondsw has quit IRC
04:08 *** chhavi has joined #openstack-powervm
04:39 *** chhavi has quit IRC
04:58 *** edmondsw has joined #openstack-powervm
05:02 *** edmondsw has quit IRC
06:08 *** efried has quit IRC
06:18 *** efried has joined #openstack-powervm
06:33 *** chhavi has joined #openstack-powervm
06:46 *** edmondsw has joined #openstack-powervm
06:50 *** edmondsw has quit IRC
08:34 *** edmondsw has joined #openstack-powervm
08:39 *** edmondsw has quit IRC
10:22 *** edmondsw has joined #openstack-powervm
10:25 -openstackstatus- NOTICE: zuul has been restarted, all queues have been reset. please recheck your patches when appropriate
10:27 *** edmondsw has quit IRC
11:38 *** chhavi has quit IRC
11:39 *** chhavi has joined #openstack-powervm
12:10 *** edmondsw has joined #openstack-powervm
12:15 *** edmondsw has quit IRC
13:08 *** chhavi has quit IRC
13:10 *** chhavi has joined #openstack-powervm
13:22 *** chhavi has quit IRC
13:23 *** chhavi has joined #openstack-powervm
13:51 *** edmondsw has joined #openstack-powervm
14:03 *** esberglu has joined #openstack-powervm
14:17 *** esberglu has quit IRC
14:17 *** esberglu has joined #openstack-powervm
14:22 *** esberglu has quit IRC
14:32 *** esberglu has joined #openstack-powervm
14:33 *** esberglu_ has joined #openstack-powervm
14:34 *** esberglu_ is now known as esberglu__
14:37 *** esberglu has quit IRC
14:53 *** AlexeyAbashkin has joined #openstack-powervm
14:54 <edmondsw> esberglu__ having connection issues?
14:54 *** esberglu__ is now known as esberglu
14:55 <esberglu> edmondsw: I was, good now though
15:13 *** AlexeyAbashkin has quit IRC
15:17 <esberglu> edmondsw: efried: I'm looking at attaching a vscsi volume on spawn, I'm pretty sure the OOT driver is broken
15:17 <efried> esberglu Do you have a test bed?
15:18 <esberglu> I'm just messing around with the CLI spawns right now
15:18 <esberglu> But that doesn't work
15:19 <esberglu> Well, the IT equivalent doesn't work
15:19 <efried> What's the error?
15:20 <esberglu> So the bdm parameter is a dictionary, roughly in the form in the comment here
15:20 <esberglu> So 1st of all, self.bdm.volume_id isn't a thing
15:21 *** gman-tx has joined #openstack-powervm
15:21 <esberglu> It has to be self.bdm['connection_info']['data']['volume_id']
15:21 <esberglu> And calling save on a dict doesn't work
15:21 <efried> "has to be"?
15:21 <efried> Are you sure it's a dict?
15:21 <esberglu> efried: Yeah
15:22 <esberglu> logged it out
15:22 <efried> Nova has lots of objects that are dict-ish but also support attribute indexing.
15:22 <efried> IIRC, the class behind bdm is actually a subclass of dict. So it would print out looking as if it's just a dict.
15:23 <efried> but may actually have .volume_id
15:23 <esberglu> efried: I got an "attribute does not exist" error trying to use .volume_id
15:24 <esberglu> And an error about not being able to call save() on a dict after I changed the code to get the vol id as above
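[Editor's note: the dict-vs-object confusion in the exchange above can be illustrated with a small standalone sketch. `DriverBlockDeviceLike` and its keys are hypothetical stand-ins, not nova's actual BlockDeviceMapping code: nova's block-device wrappers subclass dict, so they log like plain dicts but still support attribute access, while a genuine plain dict raises AttributeError for `.volume_id` and has no `save()` method.]

```python
# Hypothetical sketch of the dict-vs-object distinction discussed above.
# "DriverBlockDeviceLike" is an illustrative stand-in, not nova's real class.

class DriverBlockDeviceLike(dict):
    """A dict subclass that also exposes its keys as attributes, the way
    nova's block-device wrappers do. repr() shows a plain dict, which is
    why logging the value out can be misleading."""
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def save(self):
        # Real nova objects persist to the DB here; this is a placeholder.
        pass

plain = {'connection_info': {'data': {'volume_id': 'vol-123'}}}
wrapped = DriverBlockDeviceLike(plain, volume_id='vol-123')

# A plain dict only supports key access:
print(plain['connection_info']['data']['volume_id'])  # vol-123
# and raises AttributeError for attribute access:
try:
    plain.volume_id
except AttributeError:
    print('plain dict has no .volume_id')

# The wrapper prints like a dict but supports both styles:
print(wrapped.volume_id)  # vol-123
wrapped.save()            # works; a plain dict has no save()
```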
15:24 <edmondsw> I asked gman-tx to join since an OOT issue should affect PowerVC
15:47 <esberglu> efried: edmondsw: Maybe we aren't extracting the bdm correctly? From what I'm looking at in nova.objects.block_device.BlockDeviceMapping we should be able to use .volume_id
15:48 <edmondsw> esberglu where do we extract it?
15:51 <edmondsw> the bdm you're working with isn't [] is it?
15:51 <esberglu> No, it's roughly the dict in the comment
15:52 <edmondsw> I'm not seeing an issue there... pretty straightforward
15:56 <edmondsw> esberglu there are a whole bunch of different classes and forms for bdm... seems like we're mixing them inappropriately
<edmondsw> esberglu you'll enjoy what jaypipes says about them here:
16:02 *** AlexeyAbashkin has joined #openstack-powervm
16:06 *** AlexeyAbashkin has quit IRC
16:20 <esberglu> edmondsw: K, now I'm more confused than before
16:22 <edmondsw> bdms are always confusing
16:38 *** chhavi has quit IRC
16:43 <esberglu> edmondsw: efried: Why do we SaveBDM when attaching volumes on spawn, but not when attaching them after?
16:44 <efried> esberglu Haven't a clue.
16:44 <efried> I can't even spell BDM
16:44 <efried> If you can find thorst, he might be able to shed some light.
16:44 <efried> You could also tap gfm, he might know (or know who knows).
16:45 <esberglu> Here's the block_device_mapping getting passed into SaveBDM, still trying to piece together where it's coming from
16:56 <efried> esberglu What makes you say we don't save BDMs except in spawn?
16:57 <efried> esberglu Isn't everything funneled through _add_volume_connection_tasks?
16:58 <esberglu> spawn calls _add_volume_connection_tasks
16:58 <efried> Yup, got it.
16:58 <esberglu> attach_volume adds the ConnectVolume task directly
16:58 <efried> You should check with those people I mentioned, but I suspect that is indeed an oversight.
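[Editor's note: the asymmetry esberglu describes can be sketched roughly as follows. The function and task names are simplified stand-ins modeled on nova-powervm's TaskFlow-based storage tasks, not the actual code: the spawn path builds its flow through a helper that appends both a connect task and a save task, while the post-spawn attach path appends only the connect task.]

```python
# Simplified sketch (hypothetical stand-ins, not the real nova-powervm code)
# of the asymmetry discussed above between spawn and attach_volume.

def _add_volume_connection_tasks(flow):
    # spawn funnels through this helper, which adds BOTH tasks:
    flow.append('ConnectVolume')
    flow.append('SaveBDM')

def attach_volume(flow):
    # the post-spawn attach path adds ConnectVolume directly,
    # with no SaveBDM -- the suspected oversight
    flow.append('ConnectVolume')

spawn_flow, attach_flow = [], []
_add_volume_connection_tasks(spawn_flow)
attach_volume(attach_flow)
print(spawn_flow)   # ['ConnectVolume', 'SaveBDM']
print(attach_flow)  # ['ConnectVolume']
```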
17:19 <gman-tx> For BDMs we need to also pull in Ken
17:54 *** AlexeyAbashkin has joined #openstack-powervm
17:58 *** AlexeyAbashkin has quit IRC
18:00 *** burgerk has joined #openstack-powervm
18:07 <edmondsw> esberglu gman-tx we have Ken now
18:07 <edmondsw> burgerk any insights here?
18:10 <esberglu> ^ conversation starts there if you don't have context
18:15 <burgerk> so is the question about the BDM itself or if the right calls are being made on it?
18:16 <esberglu> 1st question: Is there a reason the bdm is only being saved when attaching volumes on spawn?
18:16 <esberglu> And not being saved when attaching a volume after the spawn
18:16 <esberglu> Or is that an oversight?
18:19 <burgerk> I would think the BDM should be updated whenever it is changed.
18:20 <burgerk> so my guess would be oversight
18:21 <burgerk> unless it is being done elsewhere that isn't obvious here
18:26 *** openstackgerrit has joined #openstack-powervm
<openstackgerrit> Matthew Edmonds proposed openstack/nova-powervm master: cleanup private bdm methods
18:28 <esberglu> burgerk: Okay, the other issue. During my testing of the in-tree nova driver, the SaveBDM storage task is failing
18:29 <esberglu> I haven't had a chance to confirm, but I suspect that nova-powervm is also failing there
18:29 <esberglu> The nova block device mapping is pretty convoluted so I haven't been able to figure out where we're going wrong
<esberglu> but I am getting attribute errors trying to access self.bdm.volume_id and call
18:33 <edmondsw> esberglu my patch above shouldn't really help here, but I noticed it looking at the code and the messiness annoyed me...
18:34 <esberglu> edmondsw: Sounds good, I will port to the IT implementation
18:35 <esberglu> ^ that's an example of the bdm parameter
18:35 <esberglu> I've been looking through this info to try to see where that is coming from
<edmondsw> esberglu see also
18:48 <esberglu> edmondsw: I was using the openstack cli to spawn, not the nova cli, which only has the legacy version documented. I'll try bdm v2 with the nova cli, see if that makes a difference
19:04 <esberglu> Same result
19:05 <esberglu> burgerk: Do you know what the bdm is used for after the attach? Do we even need to save it?
19:06 <esberglu> Without the SaveBDM method I haven't been having any issues with the in-tree nova driver, but that just does basic spawns/deletes
19:06 <esberglu> Not sure if it's needed in the more complex cases
19:09 <burgerk> if you detach, the BDM would contain the information to find the disk to detach
19:10 <esberglu> burgerk: I was attaching and detaching volumes after spawning with no problems though. And the bdm doesn't get saved in that case
19:10 <esberglu> At least not by our driver; maybe nova is doing the save?
19:11 <burgerk> when you say it doesn't get saved, have you looked in the DB to see there is no entry?
19:13 <esberglu> burgerk: No I haven't, just meant that the SaveBDM task isn't added
19:13 <burgerk> i would guess it is being done.... somewhere
19:21 <esberglu> burgerk: Yeah, it's in the database after the volume attach
19:21 <esberglu> So perhaps we don't actually need the SaveBDM task at all?
19:25 <burgerk> probably not, as long as what is in the DB is correct for your attach
19:29 <esberglu> edmondsw: burgerk: For now I'm going to remove the SaveBDM task from the WIP IT driver implementation and do some further testing
19:29 <esberglu> Once I get a chance I can get a stack set up with the OOT driver and confirm that
19:29 <esberglu> a) SaveBDM is broken there as well
19:29 <esberglu> b) We can remove it from nova-powervm
19:30 <edmondsw> esberglu sounds good. From what I've looked at I also suspect that it is broken / not needed
19:32 <esberglu> edmondsw: Idk if you saw, but the pypowervm u-c is W+1 now. As soon as that is in I'm going to do one more manual run with the IT SEA CI patch
19:32 <edmondsw> esberglu just test both a) attach during spawn and then detach and b) attach after spawn and then detach
19:32 <esberglu> edmondsw: Yep
19:32 <edmondsw> esberglu I did see that. Thanks
19:33 <esberglu> If that manual run looks good, we can merge and recheck OVS and SEA, then bug the cores for reviews
<edmondsw> esberglu I think I finally tracked down what actually changes about the BDM during the ConnectVolume task...
19:37 <edmondsw> sorry, wrong link...
19:37 <esberglu> edmondsw: You had a mistake in the bdms patch, easy fix
19:38 <edmondsw> so if a save doesn't happen, that is what would be missed. Maybe you can check that in the db
19:39 <esberglu> edmondsw: The target_UDID is set in the db
19:39 <edmondsw> then something else must be saving it
<openstackgerrit> Matthew Edmonds proposed openstack/nova-powervm master: cleanup private bdm methods
19:54 <edmondsw> esberglu fyi nova feature freeze is 1/18
19:56 <edmondsw> esberglu and mriedem is on vacation that week, so if we can get things in this week it would be better
20:00 *** tjakobs has joined #openstack-powervm
20:05 <esberglu> edmondsw: 1/25, no?
20:06 <edmondsw> esberglu 1/25 for novaclient but 1/18 for non-client per mriedem
20:09 <edmondsw> esberglu fair to say "Should have 2 of the 3 remaining patches ready for review in the next day or two, and the last later in the week."?
20:09 <edmondsw> I'm replying to that note
20:10 <edmondsw> reasoning that SEA and OVS will be ready as soon as the global req and CI changes merge today/tomorrow, and then vSCSI is close
20:10 <esberglu> edmondsw: Yep, I'm good with that
20:19 <edmondsw> esberglu they're big patches, so I went ahead and told mriedem and sdague they could start looking at OVS and SEA
20:20 <esberglu> edmondsw: Yep, good call
20:21 <edmondsw> and I put status in the etherpad from mriedem's note... one of us should remember to update that etherpad when the global req change has merged and the CI is working
20:25 <esberglu> edmondsw: Added it to my list
20:26 *** edmondsw has quit IRC
20:38 *** openstack has quit IRC
20:40 *** openstack has joined #openstack-powervm
20:40 *** ChanServ sets mode: +o openstack
21:03 *** openstackgerrit has quit IRC
21:28 *** burgerk has quit IRC
21:53 *** esberglu has quit IRC
22:04 *** esberglu has joined #openstack-powervm
22:34 *** chhavi has joined #openstack-powervm
22:38 *** chhavi has quit IRC
22:42 *** tjakobs has quit IRC
23:44 *** apearson__ has joined #openstack-powervm
23:46 *** apearson_ has quit IRC
23:52 *** gman-tx has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas