Tuesday, 2019-07-23

*** _hemna has joined #openstack-cinder00:05
*** tkajinam has quit IRC00:10
*** lixiaoy1 has joined #openstack-cinder00:32
*** jdillaman has quit IRC00:37
*** jdillaman has joined #openstack-cinder00:38
*** spatel has joined #openstack-cinder00:46
*** Liang__ has joined #openstack-cinder00:50
*** _hemna has quit IRC00:55
*** eharney has quit IRC00:57
*** TxGirlGeek has joined #openstack-cinder00:58
*** enriquetaso has quit IRC01:17
*** deiter has quit IRC01:18
*** imacdonn has quit IRC01:18
*** imacdonn has joined #openstack-cinder01:18
*** TxGirlGeek has quit IRC01:32
*** whoami-rajat has joined #openstack-cinder01:43
*** baojg has joined #openstack-cinder01:45
*** _erlon_ has quit IRC01:46
*** TxGirlGeek has joined #openstack-cinder01:50
*** TxGirlGeek has quit IRC01:54
*** Kuirong has joined #openstack-cinder01:57
*** ruffian_sheep has joined #openstack-cinder02:09
ruffian_sheep: whoami-rajat: I am sorry to be asking you every day. However, for a number of reasons, getting this driver integrated is very important to our company, and I need to report the status of its progress into the cinder mainline daily.  02:09
Kuirong: @ruffian_sheep I think they cannot reply immediately because they live in different time zones.  02:15
whoami-rajat: ruffian_sheep: hey, I got involved with some other task so I couldn't fully review the patch. Apologies for that. I will surely provide feedback by EOD.  02:17
Kuirong: The deadline is 7/26, right?  02:19
*** spatel has quit IRC02:34
ruffian_sheep: whoami-rajat: Thank you very much for the quick reply.  02:42
ruffian_sheep: Kuirong: I also waited for him to come online before asking him. lol  02:43
ruffian_sheep: Does that mean all mainline merges stop on 7/26 this time?  02:47
Kuirong: I think so.  02:50
ruffian_sheep: Kuirong: Is the window for mainline driver integration semi-annual?  02:53
ruffian_sheep: Kuirong: Thx  03:06
*** psachin has joined #openstack-cinder03:27
*** gkadam has joined #openstack-cinder03:49
*** gkadam has quit IRC03:50
*** ruffian_sheep has quit IRC03:51
*** dviroel has quit IRC03:52
*** tejdeep has quit IRC03:55
*** udesale has joined #openstack-cinder04:01
*** rcernin has quit IRC04:13
*** rcernin has joined #openstack-cinder04:14
*** rcernin has quit IRC04:20
*** pcaruana has joined #openstack-cinder04:43
*** Luzi has joined #openstack-cinder05:05
*** m75abrams has joined #openstack-cinder05:11
*** TxGirlGeek has joined #openstack-cinder05:16
*** TxGirlGeek has quit IRC05:43
*** dpawlik has joined #openstack-cinder05:44
openstackgerrit: Brin Zhang proposed openstack/cinder master: Rollback the quota_usages table when failed to create a incremental backup  https://review.opendev.org/626790  06:24
*** tejdeep has joined #openstack-cinder06:38
*** bhagyashris has joined #openstack-cinder06:39
*** e0ne has joined #openstack-cinder06:40
*** e0ne has quit IRC06:41
*** georgeakahiron has joined #openstack-cinder06:47
*** sahid has joined #openstack-cinder07:02
*** tesseract has joined #openstack-cinder07:09
*** irclogbot_1 has quit IRC07:20
*** kaisers has quit IRC07:20
*** openstackstatus has quit IRC07:20
*** irclogbot_2 has joined #openstack-cinder07:21
*** kaisers has joined #openstack-cinder07:21
*** dansmith has quit IRC07:23
*** sahid has quit IRC07:23
*** dansmith has joined #openstack-cinder07:24
*** sahid has joined #openstack-cinder07:24
*** Anticimex has quit IRC07:24
*** tosky has joined #openstack-cinder07:28
*** Anticimex has joined #openstack-cinder07:29
*** tejdeep has quit IRC07:42
*** tejdeep_ has joined #openstack-cinder07:42
*** tejdeep_ has quit IRC08:03
*** sahid has quit IRC08:06
*** sahid has joined #openstack-cinder08:09
*** lixiaoy1 has quit IRC08:28
*** e0ne has joined #openstack-cinder08:46
openstackgerrit: Brin Zhang proposed openstack/cinder master: Rollback the quota_usages table when failed to create a incremental backup  https://review.opendev.org/626790  08:47
*** sapd1_x has joined #openstack-cinder08:55
openstackgerrit: Merged openstack/cinder master: Update api-ref location  https://review.opendev.org/672087  09:10
*** _hemna has joined #openstack-cinder09:12
*** _hemna has quit IRC09:16
*** Liang__ has quit IRC09:19
*** psachin has quit IRC09:20
*** ociuhandu has joined #openstack-cinder09:31
*** psachin has joined #openstack-cinder09:35
*** georgeakahiron has quit IRC09:40
*** jojoda has quit IRC09:45
*** ociuhandu has quit IRC09:48
openstackgerrit: Brin Zhang proposed openstack/cinder master: Rollback the quota_usages table when failed to create a incremental backup  https://review.opendev.org/626790  09:48
*** ociuhandu has joined #openstack-cinder09:50
*** bhagyashris has quit IRC09:54
*** dpawlik has quit IRC10:02
*** dpawlik has joined #openstack-cinder10:04
*** sahid has quit IRC10:10
*** lpetrut has joined #openstack-cinder10:21
*** brinzhang_ has joined #openstack-cinder10:26
*** brinzhang has quit IRC10:30
*** sahid has joined #openstack-cinder10:47
*** spatel has joined #openstack-cinder11:20
*** spatel has quit IRC11:25
openstackgerrit: Gorka Eguileor proposed openstack/cinder master: QNAP: Fix login on Python3  https://review.opendev.org/672265  11:25
openstackgerrit: Gorka Eguileor proposed openstack/cinder master: QNAP: Avoid unnecessary sleeps  https://review.opendev.org/672269  11:46
*** carloss has joined #openstack-cinder11:47
*** markvoelker has quit IRC11:58
*** eharney has joined #openstack-cinder12:01
*** udesale has quit IRC12:04
*** udesale has joined #openstack-cinder12:04
*** raghavendrat has joined #openstack-cinder12:11
*** dviroel has joined #openstack-cinder12:14
*** markvoelker has joined #openstack-cinder12:16
*** henriqueof has joined #openstack-cinder12:33
*** mriedem has joined #openstack-cinder12:38
*** deiter has joined #openstack-cinder12:53
*** enriquetaso has joined #openstack-cinder12:57
*** raghavendrat has quit IRC12:59
*** openstackstatus has joined #openstack-cinder13:21
*** ChanServ sets mode: +v openstackstatus13:21
openstackgerrit: Pawel Kaminski proposed openstack/os-brick master: connectors/nvme: Wait utill nvme device show up in kernel  https://review.opendev.org/672031  13:27
*** davidsha has joined #openstack-cinder13:28
*** mriedem has quit IRC13:38
*** lemko has joined #openstack-cinder13:38
openstackgerrit: Pawel Kaminski proposed openstack/cinder master: target/spdknvmf: Add configuration parameter  https://review.opendev.org/672064  13:39
*** tosky_ has joined #openstack-cinder13:40
*** tosky has quit IRC13:42
*** Luzi has quit IRC13:51
geguileo: mnaser: I forgot to mention yesterday that for Active-Active you'll also have to configure TooZ in Cinder to use a DLM (like etcd with a gateway)  13:54
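The coordination setup geguileo describes would look roughly like this in cinder.conf; this is a hedged sketch, not a full Active-Active guide (the "controller" hostname, the port, and the etcd3 gateway scheme are illustrative assumptions to adapt to your deployment):

```ini
# Sketch: point Cinder's Tooz coordination at a DLM backend.
[coordination]
backend_url = etcd3+http://controller:2379
```

For Active-Active, each cinder-volume service in the group also needs to be configured with the same cluster name so they coordinate on the same resources.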
*** tosky_ is now known as tosky14:08
*** jrubenst has joined #openstack-cinder14:10
*** baojg has quit IRC14:18
*** baojg has joined #openstack-cinder14:19
*** mriedem has joined #openstack-cinder14:36
*** dosaboy has joined #openstack-cinder14:41
openstackgerrit: Bhaa Shakur proposed openstack/cinder master: Zadara VPSA: Move to API access key authentication  https://review.opendev.org/670715  14:43
*** _hemna has joined #openstack-cinder14:47
*** _hemna has quit IRC14:52
*** TxGirlGeek has joined #openstack-cinder14:52
*** lseki has joined #openstack-cinder15:06
*** enriquetaso has quit IRC15:09
*** enriquetaso has joined #openstack-cinder15:09
*** jc_ has joined #openstack-cinder15:11
jc_: Hi, I have installed OpenStack with one controller and two compute nodes. I am trying to install block storage on both compute nodes (compute + block). Is this possible, or do I need to separate them?  15:13
*** enriquetaso has quit IRC15:14
*** trident has quit IRC15:18
*** _hemna has joined #openstack-cinder15:18
*** vanou has joined #openstack-cinder15:19
hemna: heh, huawei laid off 2/3 of US employees  15:19
*** trident has joined #openstack-cinder15:20
jungleboyj: On top of all the ones that left?  15:20
jungleboyj: Yikes. Not totally surprised, I guess.  15:21
* jungleboyj will not get political  15:21
jungleboyj: Glad I work for Lenovo.  15:21
jungleboyj: hemna: Any word on your Mid-Cycle attendance?  15:22
BLZbubba: jc_: do you mean you're trying to converge a cinder-lvm instance onto each compute node?  15:23
smcginnis: Oh wow.  15:23
smcginnis: Hadn't heard that yet.  15:23
hemna: jungleboyj: I should be able to attend  15:24
jc_: BLZbubba: yes, but I have not been successful just following the cinder documentation.  15:25
BLZbubba: i don't see why it wouldn't work...  15:26
jungleboyj: hemna: Awesome. We will at least have you, me, eharney and rosmaita there then.  15:26
BLZbubba: personally i would suggest making a VM for cinder and passing the storage & networking in via virtual functions... this should reduce the chance of weird nova/cinder conflicts  15:27
jc_: as per the cinder install doc (https://docs.openstack.org/cinder/stein/install/cinder-storage-install-ubuntu.html), I have installed cinder, but I am not able to see my complete storage under local storage in the admin panel  15:28
BLZbubba: i run a ceph osd as a vm like this on my openstack nodes  15:28
BLZbubba: socket 0 for ceph, socket 1 for qemu (nova)  15:28
*** dpawlik has quit IRC15:30
*** m75abrams has quit IRC15:32
smcginnis: jc_: Does "cinder service-list" show your two cinder-volume services?  15:32
jc_: smcginnis: yes, on the controller node it shows that both nodes are up.  15:34
smcginnis: jc_: Do you have a volume type for each backend, and can you create a volume from each?  15:34
jc_: smcginnis: you mean I need to create a volume on each server? I tried with a VG and lvextend over all the physical disks, but I did not see the complete local storage in horizon  15:37
*** e0ne has quit IRC15:39
smcginnis: jc_: Not sure what you mean now. You should have a volume group on each host that you configured cinder to use when installing the service there. You shouldn't need to run any other lvm commands after that initial setup of the space you want to use.  15:40
*** david-lyle is now known as dklyle15:40
jc_: smcginnis: https://paste.ubuntu.com/p/C2cNmmxQB6/ (I have added the details from the two servers to the pastebin)  15:42
*** sapd1_x has quit IRC15:44
jc_: I have created a volume group on each host. But my problem is that I am not able to see the added storage in the neutron. When I create VMs, I THINK the created VMs are not using the cinder storage space.  15:45
vanou: jungleboyj: hello. I want to confirm your comment on 3rd-party CI from the last weekly meeting.  15:46
vanou: you said:  15:46
vanou: "will be going through the CI results in the next couple of weeks to see who all is running with Py3.7. I have seen at least one py3.5. I am not going to mark a driver unsupported for that, but by the end of Train it will need to be py3.7 or it will get marked."  15:46
vanou: Does this mean 'failing to get CI running py3.7 by milestone 2 doesn't mark a driver unsupported, but failing to pass the py37 tests and to run py3.7 in the CI environment by 12th Sep does mark a driver unsupported'?  15:47
vanou: Forgive my long posting; this is my first time on IRC.  15:48
smcginnis: jc_: You mean horizon, not neutron, right?  15:49
smcginnis: Are you trying to boot a VM from a Cinder volume or add a volume to a VM?  15:49
jungleboyj: vanou: Hey, let me read here.  15:50
smcginnis: vanou: Yes, all third-party CI will need to be running services under py3.7. If that's not done by that later date, it will be marked unsupported.  15:50
vanou: jungleboyj: thanks!  15:51
jungleboyj: vanou: So, if you are trying to run py37 but it isn't passing yet, a patch will not be pushed up to mark you unsupported.  15:51
jungleboyj: It will be marked unsupported at milestone 3 if py37 isn't passing by then.  15:51
vanou: smcginnis: Thanks for your comment! Does 'that later date' mean 12th Sep?  15:51
*** trident has quit IRC15:52
jungleboyj: If py3 isn't running at all, then it will be marked unsupported after this week.  15:52
*** _hemna has quit IRC15:53
jc_: smcginnis: sorry, I meant in Horizon. I want the VMs to use the cinder volume when they are created. As I am not seeing the total storage in horizon, I think the volume added to cinder-volumes is not available for VM creation, or there is something I am missing or don't know at the moment.  15:53
smcginnis: jc_: So you have one cinder node configured to use h018-vg in its cinder.conf, and the other configured to use cinder-volumes?  15:53
vanou: jungleboy: Sorry. What is the difference between "if you are trying to run py37 but it isn't passing yet" and "if py3 isn't running at all"?  15:55
*** trident has joined #openstack-cinder15:55
smcginnis: jc_: Then you created two volume types, one for each backend? In horizon you can either create the volume first from the Volumes section, or create the volume when you create the VM. The drawback with the latter is that you can't specify which volume type to use, so you need to have a default volume type set so nova can create the cinder volume.  15:56
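The default volume type smcginnis mentions is set in cinder.conf on the API node; a minimal sketch (the type name "lvm" is just an example and must match a type created with "openstack volume type create"):

```ini
[DEFAULT]
default_volume_type = lvm
```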
jc_: smcginnis: I can change the volume-group names to be the same on both hosts, but how can I fix the storage available for usage?  16:00
vanou: jungleboyj: Sorry, I misspelled your nick. What is the difference between "if you are trying to run py37 but it isn't passing yet" and "if py3 isn't running at all"? I read 'this week' in "If py3 isn't running at all, then it will be marked unsupported after this week" as the week of Jul 21-27, 2019.  16:00
smcginnis: jc_: What volume names are you changing? You shouldn't be modifying any volumes created via cinder.  16:01
*** e0ne has joined #openstack-cinder16:01
jc_: smcginnis: those names (h018-vg and cinder-volumes) were created by me as described in https://docs.openstack.org/cinder/stein/install/cinder-storage-install-ubuntu.html. They were not created by cinder.  16:06
jungleboyj: vanou: If you have a job producing logs showing you are trying to test py37, you are good. If you aren't running any tests with py37, then it will be unsupported.  16:06
jungleboyj: So, if for patches you are running py27 and it is passing CI, and you are doing a separate run with py37 that is still failing, that is ok until Milestone 3.  16:07
smcginnis: jc_: The name of the volume group doesn't matter as long as it's what you've configured in that node's cinder.conf.  16:07
jungleboyj: It is probably easiest to just switch to using py37, though.  16:07
*** enriquetaso has joined #openstack-cinder16:07
*** tesseract has quit IRC16:07
*** lemko has quit IRC16:08
openstackgerrit: Chris M proposed openstack/cinder master: Fix dothill multiattach support  https://review.opendev.org/671942  16:08
openstackgerrit: Chris M proposed openstack/cinder master: Create Seagate driver from dothill driver  https://review.opendev.org/671195  16:08
vanou: jungleboyj: I understand. Should I post logs to the mailing list or somewhere to show we are trying to run py37?  16:10
jc_: smcginnis: ok, how can I allocate more space for VM creation (i.e. have more volume to allocate when a VM is created from horizon)?  16:10
jungleboyj: Are you able to have the logs show up with your CI run?  16:11
jungleboyj: vanou: ^^  16:11
smcginnis: It looks like you should have 50+ TB available on each node. Does it tell you there is no space on either one?  16:11
smcginnis: jc_: Oh wait, according to that earlier paste you've created your own volume in both of those volume groups, using all the space by the looks of it.  16:12
*** tejdeep has joined #openstack-cinder16:12
vanou: jungleboyj: Not yet...  16:12
vanou: But I'll...  16:12
jungleboyj: Which driver is this for?  16:15
jc_: smcginnis: I have around 60 TB + 60 TB across the two hosts, but in horizon I can only see 350 + 350 GB (where the OS is installed on both hosts). Everything else shows up in the volume groups, but I am not able to add more storage for VM creation.  16:15
vanou: Fujitsu Eternus  16:15
geguileo: hemna: how is your customer setting up the iSCSI boot?  16:16
openstackgerrit: Luigi Toscano proposed openstack/cinder master: Port the legacy multibackend jobs to Zuul v3  https://review.opendev.org/671945  16:16
geguileo: hemna: the customer with the problem of the iSCSI boot volume and os-brick  16:16
pots: jungleboyj: smcginnis: if I have two patches submitted (multi-attach fix and seagate driver) and they conflict with each other, should I make one dependent on the other, or just make them both relative to master and rebase them as needed until they are all merged? It seems like zuul won't apply the dependencies in order to validate the dependent patch  16:16
smcginnis: jc_: You're using all the space with the volumes you've created from those volume groups.  16:17
*** sahid has quit IRC16:17
* hemna looks over the bug  16:17
smcginnis: pots: I would just rebase as needed.  16:17
jungleboyj: smcginnis: ++  16:18
*** lpetrut has quit IRC16:19
*** enriquetaso has quit IRC16:19
jc_: smcginnis: so I should not add all the drives to LVM, so that I can mount some of the HDDs and use them for instances?  16:21
whoami-rajat: geguileo: Hi, could you take a look at the fix https://review.opendev.org/#/c/670887/ ? Thanks!  16:22
smcginnis: jc_: No, that part is right. All drives (at least any that you want to use) should be added to the volume group.  16:22
smcginnis: jc_: But the volumes you've created from the volume groups are using most of your space, so I'm not sure why you did that. If that was a mistake, you should just delete those logical volumes.  16:23
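The cleanup smcginnis suggests would look something like the following on each storage node. This is an illustrative sketch only; the VG name matches jc_'s setup but the LV name is a placeholder, and lvremove is destructive, so double-check the lvs output first:

```shell
# Inspect what is consuming the volume group's space.
lvs cinder-volumes            # list logical volumes in the VG
# Remove a manually created LV to return its space to the VG (destructive!).
lvremove cinder-volumes/<lv_name>
# VFree should now show the reclaimed capacity for cinder to use.
vgs cinder-volumes
```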
*** baojg has quit IRC16:26
vanou: jungleboyj: Sorry. Fujitsu Eternus  16:28
jungleboyj: vanou: Ok. Cool. Thank you for keeping up to date on this.  16:28
jc_: smcginnis: so I should not use all the HDDs for vgcreate; rather, I should choose only a few disks for the cinder volume group? What should I do with the remaining HDDs? Should they be mounted at /var/lib/nova/instances to be used for VM instances?  16:29
smcginnis: jc_: No. All drives you want to use in cinder should be added to the VG.  16:30
vanou: jungleboyj: Thanks for your kindness & patience! It is midnight in my timezone, so I'll leave. Thanks!  16:31
jungleboyj: Ok. Thanks for keeping us updated.  16:31
jungleboyj: Have a good night.  16:31
*** davidsha has quit IRC16:32
*** vanou has quit IRC16:34
jc_: smcginnis: but the drives added to cinder are not shown in Horizon. In a different setup, I had mounted some of the HDDs at /var/lib/nova/instances by adding them to a VG, and in that scenario I saw the local storage in Horizon. (That setup doesn't exist now.)  16:34
smcginnis: jc_: Because you've created a logical volume from the volume group, using up most of your space.  16:35
jc_: smcginnis: sorry, but I have checked all my commands. On one of my hosts I did not create the logical volume; it seems cinder created it. On the other host, I created it myself.  16:40
*** deiter has quit IRC16:40
jc_: smcginnis: I am not sure how to fix this problem.  16:40
smcginnis: Cinder created sda on h018-vg? That doesn't seem right. I don't think cinder would set a name like that.  16:41
geguileo: whoami-rajat: reviewed  16:43
smcginnis: Wow, our LVM documentation appears to really be lacking any kind of useful detail.  16:44
jc_: smcginnis: I created h018; that is the old host. I installed on a different host today where I just ran pvcreate /dev/sdX ... and then vgcreate cinder-volumes /dev/sdX ... Here is the lsblk output from the host where cinder (maybe) created the LV: https://paste.ubuntu.com/p/4wfn6Xd4DV/  16:45
smcginnis: jc_: Ah, I see, the one is a thin pool. So make sure the cinder.conf on that node is pointing to that pool, not the VG as I had previously said.  16:50
rosmaita: smcginnis: jungleboyj: eharney: geguileo: e0ne: jgriffith: and anyone else interested in stable branch maintenance policy  16:50
smcginnis: jc_: h018-vg/sda does not appear to be a thin pool, though.  16:50
rosmaita: this is a follow-up to last week's discussion about the stable/rocky release situation  16:50
rosmaita: we were deciding what to do about https://review.opendev.org/#/c/639867/ "Declare multiattach support for HPE MSA"  16:50
rosmaita: pots agreed to do some testing to see if multiattach does work for HPE MSA  16:50
rosmaita: turns out that there are some bugfixes needed  16:50
rosmaita: for example, https://review.opendev.org/#/c/671942/5  16:51
rosmaita: so I think declaring multiattach is a bit premature  16:51
rosmaita: my concern is that there are 21 unreleased changes in stable/queens that are being blocked  16:51
rosmaita: so my proposal is to restore https://review.opendev.org/#/c/670086/ to revert the declaration in stable/rocky  16:51
rosmaita: then release 13.0.6 for rocky and 12.0.8 for queens  16:51
rosmaita: after that, we can continue backporting the dothill fixes, definitely to stein and perhaps farther  16:51
rosmaita: (pots should probably hold off on the renaming to seagate until we have this stuff working and backported)  16:51
rosmaita: so let's discuss here, or I can write this up and send it to the ML for discussion if people are busy now  16:51
smcginnis: rosmaita: I think we should probably just revert that patch then. If it was just a flag being returned, that was fine for stable. If it required some additional code changes, then I don't think we should have done it.  16:51
whoami-rajat: geguileo: Thanks. I thought that was the base structure for implementing tests and the db/memory test files were helpers.  16:52
whoami-rajat: Also, I think I tried setting the PERSISTENCE_CFG to memory but ran into failures; see PS3.  16:52
whoami-rajat: Will try again after the refactoring.  16:52
rosmaita: smcginnis: that's what I was thinking too  16:52
geguileo: whoami-rajat: it's the base class for testing persistence plugins  16:52
geguileo: whoami-rajat: but in your patch you want to test the helper methods  16:52
eharney: rosmaita: reverting sounds like a good idea to me  16:53
geguileo: whoami-rajat: aka, I did a poor job with the classes :-(  16:53
pots: I'd like to continue with the dothill->seagate rename, though. Is there any reason not to?  16:53
geguileo: whoami-rajat: and now you are paying for it, sorry  16:53
e0ne: smcginnis: +1 to revert it  16:53
rosmaita: pots: the only reason would be if it complicates backports  16:53
rosmaita: I think we still need to have the discussion about driver backports at the midcycle  16:54
jungleboyj: I am ok with doing the revert given the additional info.  16:54
rosmaita: because I think it's important to backport bugfixes for these  16:54
*** enriquetaso has joined #openstack-cinder16:54
jungleboyj: rosmaita: ++  16:54
whoami-rajat: geguileo: oh, no issues. Also it makes the addition of future tests easier. Thanks for the help. :)  16:55
jungleboyj: pots: Yeah, that is the concern with the driver rename.  16:55
geguileo: whoami-rajat: thank you for working on the fix  16:55
*** gnufied has joined #openstack-cinder16:55
rosmaita: we just need to formulate a coherent policy that the stable-maint team will accept  16:55
jungleboyj: If others agree, I think we could allow delaying the rename a bit to let you do the fix backport more easily.  16:55
pots: I'm not sure what the difference is, though, whether we rename the driver now or in the future?  16:55
jc_: smcginnis: should I change it in [DEFAULT] or [lvm]? Should it be changed to "volume_group = cinder-volumes-pool"? Is that correct?  16:56
jungleboyj: Backporting the changes from master is more complicated once the name has changed.  16:56
*** tejdeep has quit IRC16:56
rosmaita: I think you will have a merge conflict on every backport to stein  16:57
smcginnis: jc_: The LVM pool is set under the backend section, so if you've named that section [lvm], that's where. Not under [DEFAULT].  16:57
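For reference, the stock backend section from the install guide jc_ linked looks like the following; the lvm_type line is an optional extra for thin provisioning (with it set, cinder manages its own thin pool inside the VG), and enabled_backends must name this section in [DEFAULT]:

```ini
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = tgtadm
# Optional: have cinder allocate volumes thinly from a pool in the VG.
lvm_type = thin
```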
rosmaita: ok, so as far as the stable/rocky release goes, it looks like we have a majority of cinder stable-maint cores saying revert the change  16:59
rosmaita: I will revise the patches  16:59
rosmaita: if anyone has second thoughts, you can leave comments on the patches  16:59
pots: ok, in that case should the seagate driver just be another subclass of the dothill driver? Or should it be a new standalone driver, which the other drivers can subclass in the future, at which point we can just remove the old dothill code? That way there are no renames to complicate backports.  17:01
jungleboyj: pots: Well, if you subclass it, then you don't complicate things for backporting changes.  17:01
jungleboyj: That, or you go forward with the rename and deal with the merge conflicts any time you need to backport a fix.  17:02
rosmaita: I would be against subclassing it  17:03
pots: yes, but won't there always be an argument that we shouldn't rename the dothill code because of the merge conflicts with backports? If we introduce the new driver first, there won't be any merge conflicts with backports, just occasionally duplicate patches if we have to patch both drivers.  17:03
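The rebrand-as-subclass idea being weighed here can be sketched in plain Python. These are hypothetical stand-in classes, not the actual cinder.volume.drivers modules; the point is only that a bugfix lands once in the base class and is inherited by the rebrand, keeping backports clean:

```python
# Hypothetical sketch of the subclassing approach discussed above.

class DotHillISCSIDriver:
    """Stands in for the existing dothill driver."""
    VENDOR = "DotHill"

    def do_setup(self, context):
        # In the real driver this would talk to the storage array.
        return "%s backend initialized" % self.VENDOR


class SeagateISCSIDriver(DotHillISCSIDriver):
    """Rebrand: only the identity changes; all logic is inherited."""
    VENDOR = "Seagate"


if __name__ == "__main__":
    # A fix to DotHillISCSIDriver.do_setup is picked up here automatically.
    print(SeagateISCSIDriver().do_setup(None))
```

The trade-off raised by rosmaita still applies: the subclass couples the new name to the old module, so a later full rename is still needed to retire the dothill code.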
*** udesale has quit IRC17:03
jungleboyj: I think the argument is always going to be with the subclassing.  17:04
rosmaita: yeah, it's one of those things where if we have a bunch of bugfixes that need to be done right now, let's fix first & then rename  17:04
rosmaita: but if it's just this one patch, then it's probably not a big deal  17:04
jungleboyj: That was why I was proposing we let you delay the rename to get the patch in and then go from there.  17:05
pots: my concern is missing the deadline for adding the seagate driver.  17:07
*** trident has quit IRC17:07
rosmaita: that is a legitimate concern  17:08
pots: I have a seagate-ci ready to start posting results with py37, but for some problems getting the multiattach tests to pass  17:08
jc_: smcginnis: changed it and restarted nova-compute and cinder-volume. It still shows the same.  17:08
jungleboyj: pots: Right, that was why I was trying to see if the team would be ok with letting that slide a bit given the situation.  17:08
*** trident has joined #openstack-cinder17:10
jungleboyj: I can't make that decision alone, though. :-)  17:11
pots: I'm fine with whatever works for the group, I just need you to point me in the right direction.  17:11
rosmaita: jungleboyj: what are your thoughts exactly on "sliding"?  17:12
jungleboyj: I am saying that we can slip the requirement to get the Seagate change in by a week or two while he gets this other issue resolved.  17:13
*** ociuhandu has quit IRC17:13
jungleboyj: Given that it isn't a NEW driver. It is a rebrand.  17:13
jungleboyj: We have allowed those to go in later in the past, I think.  17:13
rosmaita: I personally think that is fine -- it doesn't set much of a precedent because it's a pretty unique case  17:14
rosmaita: I think as far as backporting fixes go, reviews are easier if the backports are clean  17:14
rosmaita: so I think it would be a good tradeoff  17:14
jungleboyj: Okie dokie.  17:16
*** lseki has quit IRC17:16
jungleboyj: pots: You ok with that?  17:19
pots: sure. Shall I submit a seagate driver as a subclass for now, and we can hold off on renaming the underlying dothill driver until the next cycle?  17:20
*** tejdeep has joined #openstack-cinder17:20
jungleboyj: No. Let's just get the patches to fix multi-attach merged now and backported.  17:20
*** tejdeep has quit IRC17:21
jungleboyj: Once that is done, push up the rename. As long as we get that all done in the next couple of weeks, it is fine.  17:21
*** tejdeep has joined #openstack-cinder17:21
pots: ok, that works for me.  17:23
*** enriquetaso has quit IRC17:23
jungleboyj: pots: Thank you!  17:23
*** _erlon_ has joined #openstack-cinder17:25
pots: thank you all for your patience  17:25
jungleboyj: pots: Thanks for keeping us in the loop as to what is going on.  17:27
*** enriquetaso has joined #openstack-cinder17:31
*** e0ne has quit IRC17:33
*** senrique_ has joined #openstack-cinder17:35
*** enriquetaso has quit IRC17:37
*** _hemna has joined #openstack-cinder17:49
geguileo: hemna: I figured out a solution to the problem; we just need to create a new iSCSI interface with a different initiator name :-)  17:50
geguileo: hemna: https://gorka.eguileor.com/host-iscsi-devices  17:51
*** ociuhandu has joined #openstack-cinder18:16
hemna: so we get iSCSI session isolation by creating a separate initiator interface and initiator IQN  18:17
hemna: which can be used by the host attaches  18:17
hemna: and then cinder/nova/os-brick attaches will use the default IQN  18:17
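Sketched as iscsiadm calls, the approach hemna summarizes looks roughly like the following. The interface name, IQN, target, and portal below are illustrative placeholders, not values from the log; see geguileo's write-up linked above for the full details:

```shell
# Create a dedicated iSCSI interface for host-managed sessions (e.g. the
# boot volume) with its own initiator name, so sessions managed by
# os-brick, which uses the default IQN, stay isolated from them.
iscsiadm -m iface -I host-boot --op new
iscsiadm -m iface -I host-boot --op update \
    -n iface.initiatorname -v iqn.2019-07.com.example:host-boot

# Log in to the host's target through the new interface.
iscsiadm -m node -T <target-iqn> -p <portal-ip> -I host-boot --login
```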
*** psachin has quit IRC18:19
hemna: we should document this as part of the cinder documentation  18:20
hemna: I can see others running into this  18:20
*** _hemna has quit IRC18:22
*** ociuhandu has quit IRC18:34
*** senrique__ has joined #openstack-cinder18:34
*** senrique__ is now known as enriquetaso18:34
*** senrique_ has quit IRC18:38
*** e0ne has joined #openstack-cinder18:42
*** mriedem has quit IRC18:42
*** enriquetaso has quit IRC18:49
*** ociuhandu has joined #openstack-cinder19:04
*** ociuhandu_ has joined #openstack-cinder19:09
*** ociuhandu has quit IRC19:09
*** whoami-rajat has quit IRC19:22
*** enriquetaso has joined #openstack-cinder19:35
*** henriqueof has quit IRC19:40
*** tosky has quit IRC19:47
*** mriedem has joined #openstack-cinder20:00
*** obi12341 has quit IRC20:02
*** jsquare has quit IRC20:03
*** jsquare has joined #openstack-cinder20:05
*** e0ne has quit IRC20:06
openstackgerrit: Pawel Kaminski proposed openstack/os-brick master: connectors/nvme: Wait utill nvme device show up in kernel  https://review.opendev.org/672031  20:11
*** _hemna has joined #openstack-cinder20:19
*** deiter has joined #openstack-cinder20:25
*** ociuhandu_ has quit IRC20:32
*** ociuhandu has joined #openstack-cinder20:33
*** pcaruana has quit IRC20:50
*** _hemna has quit IRC20:53
*** ociuhandu has quit IRC21:02
*** jrubenst has quit IRC21:15
*** gnufied has quit IRC21:21
*** lpetrut has joined #openstack-cinder21:22
*** e0ne has joined #openstack-cinder21:22
*** lpetrut has quit IRC21:22
*** lpetrut has joined #openstack-cinder21:23
*** lpetrut has quit IRC21:30
*** irclogbot_2 has quit IRC21:32
*** altlogbot_0 has quit IRC21:33
*** altlogbot_0 has joined #openstack-cinder21:33
*** irclogbot_2 has joined #openstack-cinder21:33
*** rosmaita has left #openstack-cinder21:39
*** enriquetaso has quit IRC21:44
*** e0ne has quit IRC21:50
*** gnufied has joined #openstack-cinder21:58
*** irclogbot_2 has quit IRC21:59
*** altlogbot_0 has quit IRC22:01
*** altlogbot_1 has joined #openstack-cinder22:21
*** TxGirlGeek has quit IRC22:25
*** altlogbot_1 has quit IRC22:27
*** _hemna has joined #openstack-cinder22:49
*** tkajinam has joined #openstack-cinder22:51
*** mriedem has quit IRC22:55
*** altlogbot_0 has joined #openstack-cinder23:13
*** carloss has quit IRC23:16
*** rcernin has joined #openstack-cinder23:16
*** altlogbot_0 has quit IRC23:19
*** _hemna has quit IRC23:24
*** altlogbot_0 has joined #openstack-cinder23:27
*** irclogbot_3 has joined #openstack-cinder23:31

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!