Monday, 2019-07-22

*** tkajinam has quit IRC00:12
*** tejdeep has quit IRC00:16
*** takamatsu has quit IRC00:18
*** dklyle has joined #openstack-cinder00:28
*** lixiaoy1 has joined #openstack-cinder00:33
*** brinzhang has joined #openstack-cinder00:38
*** brinzhang_ has joined #openstack-cinder00:45
*** brinzhang has quit IRC00:47
*** brinzhang_ has quit IRC00:52
*** brinzhang has joined #openstack-cinder00:54
*** Liang__ has joined #openstack-cinder01:02
*** spatel has joined #openstack-cinder01:16
*** imacdonn has quit IRC01:17
*** imacdonn has joined #openstack-cinder01:18
*** spatel has quit IRC01:20
*** Kuirong has joined #openstack-cinder01:21
*** ruffian_sheep has joined #openstack-cinder01:25
ruffian_sheepwhoami-rajat:Hello, the driver changes submitted last week haven't received any review feedback. Is the driver being considered for merging into the main line yet? ;)01:32
openstackgerritChris M proposed openstack/cinder master: Fix dothill multiattach support  https://review.opendev.org/67194201:40
*** ruffian_sheep has quit IRC01:41
*** ruffian_sheep has joined #openstack-cinder01:46
*** ruffian_sheep has quit IRC02:08
*** zul has quit IRC02:09
*** tejdeep has joined #openstack-cinder02:17
*** ruffian_sheep has joined #openstack-cinder02:21
*** bhagyashris has joined #openstack-cinder02:27
*** tejdeep has quit IRC02:30
*** ruffian_sheep has quit IRC02:31
*** ruffian_sheep has joined #openstack-cinder02:37
openstackgerritChris M proposed openstack/cinder master: Fix dothill multiattach support  https://review.opendev.org/67194202:38
openstackgerritChris M proposed openstack/cinder master: WIP: Create Seagate driver from dothill driver  https://review.opendev.org/67119502:38
*** ruffian_sheep27 has joined #openstack-cinder02:38
*** ruffian_sheep has quit IRC02:41
*** baojg has quit IRC03:01
*** m75abrams has joined #openstack-cinder03:05
*** baojg has joined #openstack-cinder03:07
*** psachin has joined #openstack-cinder03:32
*** ruffian_sheep27 is now known as ruffian_sheep03:36
*** baojg has quit IRC03:47
*** baojg has joined #openstack-cinder03:58
*** udesale has joined #openstack-cinder04:02
*** baojg has quit IRC04:28
*** tejdeep has joined #openstack-cinder04:55
*** ircuser-1 has quit IRC04:57
*** threestrands has joined #openstack-cinder05:00
*** davee_ has quit IRC05:03
*** davee_ has joined #openstack-cinder05:03
*** Luzi has joined #openstack-cinder05:03
openstackgerritChris M proposed openstack/cinder master: Create Seagate driver from dothill driver  https://review.opendev.org/67119505:44
-openstackstatus- NOTICE: Due to a failure on the logs.openstack.org volume, old logs are unavailable while partition is recovered. New logs are being stored. ETA for restoration probably ~Mon Jul 22 12:00 UTC 201906:03
*** ChanServ changes topic to "Due to a failure on the logs.openstack.org volume, old logs are unavailable while partition is recovered. New logs are being stored. ETA for restoration probably ~Mon Jul 22 12:00 UTC 2019"06:03
*** whoami-rajat has joined #openstack-cinder06:05
*** vishalmanchanda has joined #openstack-cinder06:05
*** udesale has quit IRC06:16
*** ChanServ changes topic to "The Block Storage Project | https://wiki.openstack.org/wiki/Cinder | https://tiny.cc/CinderPriorities"06:22
-openstackstatus- NOTICE: logs.openstack.org volume has been restored. please report any issues in #openstack-infra06:22
*** markvoelker has quit IRC06:32
*** pcaruana has joined #openstack-cinder06:32
*** baojg has joined #openstack-cinder06:41
ruffian_sheepwhoami-rajat:Hello, the driver changes submitted last week haven't received any review feedback. Is the driver being considered for merging into the main line yet? ;)06:44
ruffian_sheephttps://review.opendev.org/#/c/612311/06:45
*** udesale has joined #openstack-cinder06:47
*** boxiang has joined #openstack-cinder06:57
*** sahid has joined #openstack-cinder06:58
*** tesseract has joined #openstack-cinder07:01
*** markvoelker has joined #openstack-cinder07:04
*** sahid has quit IRC07:05
*** rcernin has quit IRC07:07
*** tkajinam has joined #openstack-cinder07:08
*** tosky has joined #openstack-cinder07:21
*** tkajinam has quit IRC07:36
*** tejdeep has quit IRC07:52
*** nikeshm has joined #openstack-cinder08:01
*** davidsha has joined #openstack-cinder08:06
*** m75abrams has quit IRC08:12
*** sahid has joined #openstack-cinder08:14
*** threestrands has quit IRC08:14
*** sahid has quit IRC08:24
*** sahid has joined #openstack-cinder08:24
*** m75abrams has joined #openstack-cinder08:28
*** ociuhandu has joined #openstack-cinder08:37
*** ociuhandu has quit IRC08:39
*** ociuhandu has joined #openstack-cinder08:39
*** lixiaoy1 has quit IRC08:53
*** sahid has quit IRC08:55
*** sahid has joined #openstack-cinder08:56
*** e0ne has joined #openstack-cinder09:04
openstackgerritye proposed openstack/cinder master: This fix let the delete err-info more precisely  https://review.opendev.org/67199609:06
*** sahid has quit IRC09:17
*** lemko has joined #openstack-cinder09:17
*** ruffian_sheep has quit IRC09:24
*** Liang__ has quit IRC09:37
*** udesale has quit IRC09:40
*** udesale has joined #openstack-cinder09:40
*** udesale has quit IRC09:42
*** udesale has joined #openstack-cinder09:42
*** FlorianFa has joined #openstack-cinder09:42
*** ociuhandu has quit IRC09:45
*** ociuhandu has joined #openstack-cinder09:48
*** m75abrams has quit IRC09:54
openstackgerritRaghavendra Tilay proposed openstack/cinder master: 3PAR: Provide new option to specify NSP for single path attachments  https://review.opendev.org/65758509:58
*** raghavendrat has joined #openstack-cinder09:59
*** nikeshm has quit IRC10:11
*** udesale has quit IRC10:28
*** sahid has joined #openstack-cinder10:31
*** nikeshm has joined #openstack-cinder10:42
openstackgerritRaghavendra Tilay proposed openstack/cinder master: 3PAR: Provide new option to specify NSP for single path attachments  https://review.opendev.org/65758510:46
openstackgerritBrin Zhang proposed openstack/cinder master: Rollback the quota_usages table when failed to create a incremental backup  https://review.opendev.org/62679010:50
openstackgerritMerged openstack/cinder master: Cleanup api-ref sample files  https://review.opendev.org/66897210:58
*** bhagyashris has quit IRC10:59
*** ruffian_sheep has joined #openstack-cinder11:02
*** rosmaita has joined #openstack-cinder11:07
*** ruffian_sheep has quit IRC11:08
*** kaisers has quit IRC11:35
*** kaisers has joined #openstack-cinder11:36
*** tejdeep has joined #openstack-cinder11:42
*** sahid has quit IRC11:49
*** psachin has quit IRC11:50
*** spatel has joined #openstack-cinder11:54
*** sahid has joined #openstack-cinder12:03
*** carloss has joined #openstack-cinder12:06
*** udesale has joined #openstack-cinder12:07
*** Kuirong has quit IRC12:09
*** spatel has quit IRC12:13
*** tejdeep has quit IRC12:14
*** raghavendrat has quit IRC12:17
*** _erlon_ has joined #openstack-cinder12:21
*** raghavendrat has joined #openstack-cinder12:23
*** dklyle has quit IRC12:36
*** david-lyle has joined #openstack-cinder12:36
openstackgerritPawel Kaminski proposed openstack/os-brick master: connectors/nvme: Wait utill nvme device show up in kernel  https://review.opendev.org/67203112:39
*** irclogbot_0 has quit IRC12:39
*** irclogbot_1 has joined #openstack-cinder12:42
*** sahid has quit IRC12:45
*** sahid has joined #openstack-cinder12:46
*** boxiang has quit IRC12:50
*** beraldo has joined #openstack-cinder12:51
beraldoHi all, hope you guys can help me here. I'm trying to deploy openstack with two AZs and one cinder server in each AZ. It looks like it works fine when I create a volume: if I choose the proper AZ, the volume is deployed correctly. The problem is when I try to launch an instance and choose any zone. The instance is not deployed because it looks like the filter is returning an empty12:56
beraldoresource list, since by default it goes to the "nova" AZ. Can anyone here help? I already did some debugging in the code but without success... Please let me know if this is not the right channel. Thanks in advance12:56
*** mchlumsky has quit IRC13:01
*** mchlumsky has joined #openstack-cinder13:01
raghavendratberaldo: as per my knowledge, generic queries can be posted at #openstack-qa13:07
beraldothanks raghavendrat13:07
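[Editor's note: the AZ behavior beraldo describes is usually governed by a handful of real cinder and nova options; a hedged sketch of the relevant knobs follows. The AZ names are illustrative, and this may not be the exact root cause of his deployment issue.]

```ini
# cinder.conf on each cinder-volume host: the AZ this backend reports.
# If left unset it defaults to "nova", which matches the symptom above.
[DEFAULT]
storage_availability_zone = az1
# Optionally fall back to default_availability_zone instead of failing
# when a volume create requests an AZ that does not exist:
# allow_availability_zone_fallback = true

# nova.conf on the compute hosts: whether an instance may attach a
# volume that lives in a different AZ than the instance.
# [cinder]
# cross_az_attach = false
```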
openstackgerritPawel Kaminski proposed openstack/os-brick master: connectors/nvme: Wait utill nvme device show up in kernel  https://review.opendev.org/67203113:10
*** eharney has joined #openstack-cinder13:10
*** mriedem has joined #openstack-cinder13:11
*** sahid has quit IRC13:22
*** sahid has joined #openstack-cinder13:25
openstackgerritChris M proposed openstack/cinder master: Fix dothill multiattach support  https://review.opendev.org/67194213:26
*** sahid has quit IRC13:30
*** sahid has joined #openstack-cinder13:33
*** sahid has quit IRC13:38
*** sahid has joined #openstack-cinder13:38
*** spatel has joined #openstack-cinder13:40
*** Luzi has quit IRC13:48
*** raghavendrat has quit IRC13:49
*** nikeshm has quit IRC13:49
*** TxGirlGeek has joined #openstack-cinder13:49
*** TxGirlGeek has quit IRC13:52
*** tesseract has quit IRC13:55
*** tesseract has joined #openstack-cinder14:00
mnaserso i think i'm running into this again -- https://bugs.launchpad.net/cinder/+bug/164131214:03
openstackLaunchpad bug 1641312 in Cinder "CleanableInUse error in a deployment with just one cinder-volume" [High,Fix released] - Assigned to Gorka Eguileor (gorka)14:03
*** senrique_ has joined #openstack-cinder14:04
mnaserok, so doing a little bit of research, is there any reason why the worker for create_volume is created inside the scheduler rpcapi layer14:11
mnaserbut the rest are not?14:11
mnaseri'm trying to avoid this race condition by trying to bubble up the worker creation code to be somewhere where no race conditions are possible (like api layer or scheduler)14:12
openstackgerritMohammed Naser proposed openstack/cinder master: docs: fix incorrect reference in ha docs  https://review.opendev.org/67205414:19
mnaserwell, now i understand it14:27
mnaseri'm wondering if this is because there are multiple cinder-volumes with the same `host` value14:28
mnaserso doing a cast means multiple workers are picking it up14:28
mnaserand they just kinda race there14:28
mnaseri wonder why create_snapshot() needs to do a cast, shouldn't it be a call because we know the c-vol we're hitting anyways?14:29
*** m75abrams has joined #openstack-cinder14:47
mnaserok so the more i look at this, the more i feel like this should be a log message, and not an exception -- https://github.com/openstack/cinder/blob/33fa38b0350ed17c87858563898ff2f55b9f368b/cinder/objects/cleanable.py#L152-L15514:53
mnaserbecause if we reach that part of the code, it means that a worker *was* created under another service_id14:53
mnaserwhich means, we raced, but lost, so nothing *bad* happened14:54
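[Editor's note: the pattern mnaser is describing - two services with the same `host` value racing to claim the same worker row, where the loser raises `CleanableInUse` even though nothing bad happened - can be sketched as a generic compare-and-swap claim against a database row. This is a simplified illustration, not Cinder's actual `cleanable.py` code; the table and function names are hypothetical.]

```python
import sqlite3

def claim_worker(conn, resource_id, service_id):
    # Conditional UPDATE: only succeeds if no service has claimed the
    # row yet. The race loser matches 0 rows instead of corrupting state.
    cur = conn.execute(
        "UPDATE workers SET service_id = ? "
        "WHERE resource_id = ? AND service_id IS NULL",
        (service_id, resource_id),
    )
    conn.commit()
    return cur.rowcount == 1  # True only for the service that won

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workers (resource_id TEXT, service_id INTEGER)")
conn.execute("INSERT INTO workers VALUES ('vol-1', NULL)")

won_a = claim_worker(conn, "vol-1", 101)  # first service to arrive wins
won_b = claim_worker(conn, "vol-1", 102)  # second one loses the race
print(won_a, won_b)  # True False
```

Whether the loser should raise or just log, as mnaser argues above, is exactly the judgment call: losing the race means another service already owns the work, so no state is left inconsistent.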
*** spatel has quit IRC14:55
*** ircuser-1 has joined #openstack-cinder14:57
geguileomnaser: what PyMySQL version are you using?15:00
mnasergeguileo: this is an OSA deployment, so we use whatever upper constraints / requirements ships15:01
mnaserlet em verify15:01
openstackgerritPawel Kaminski proposed openstack/cinder master: target/spdknvmf: Add configuration parameter  https://review.opendev.org/67206415:03
mnasergeguileo: according to upper constraints, PyMySQL===0.8.015:04
* geguileo looks if they messed it up again...15:05
mnaseri'm double checking the actual environment15:05
mnaserthat was just based on reading upper-constraints15:05
mnaser /me looks at queens CI deploys15:05
hemnamep15:07
mnaseryeah we're doing pymysql-0.8.015:07
geguileomnaser: on what operation are you seeing the error?15:07
mnasergeguileo: most commonly, snapshot create and volume deletes15:08
mnaserwhich are the ones that the worker is created inside the c-vol instead of c-sch15:08
mnaser(im not sure why we don't create the worker all the way up at the http api layer)15:08
mnaserbut then i'd imagine we'd have a different type of race..15:09
mnasercontext: this is one of those deployments where there is multiple c-vol against a single ceph cluster with the same `host` value15:09
geguileowait, wait, wait15:09
geguileomnaser: are multiple c-vol services running with the same host value?15:10
mnasergeguileo: yes, so cinder service-list reports a single host in this case.  i thought at some point that this was an ok thing to do15:10
mnaseri dont remember where/how/when but i recall that15:10
geguileoNOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO15:10
mnaser:(15:10
geguileoYOU CANNOT DO THAT15:10
mnaserwe recently landed a change in osa that did it for everyone15:10
geguileoand that is the problem15:10
mnaserha15:11
mnaserhttps://review.opendev.org/#/c/623490/15:11
mnaserso this is a bad idea(tm) ?15:11
geguileothat is how you do it if you are running Active-Passive15:12
geguileoBut only 1 c-vol service can be running at a time15:12
mnaserso that implies that only 1 c-vol at a time15:12
mnaserah ok15:12
mnaseri wonder how tripleo handles this15:12
geguileodeploying active-passive15:13
geguileoXD15:13
mnaserusing pacemaker ?15:13
geguileoyup15:13
mnasersigh15:13
mnaserhm15:13
geguileothere is an ongoing effort to start adding active-active15:13
geguileowhich means that you start using cluster15:13
geguileowhich is probably what you want to do15:13
geguileoleave the host alone15:13
geguileoand set cluster15:13
mnaserbut that hasn't landed yet? so that means really what needs to be done is: revert that commit15:13
geguileowhat hasn't landed?15:14
mnaseractive-active15:14
geguileothe cluster stuff?15:14
geguileoit has15:14
mnaseroh, master or?15:14
geguileojust replace backend_host with cluster and it should be fine15:14
mnaserhow long has it been around for?15:15
geguileoI don't remember15:15
geguileothe cluster feature a while15:15
mnaserok, ill look for it15:15
mnaserand then fix osa15:15
geguileoRBD supporting active-active less time15:15
mnaserok ill check out the rbd driver logs15:15
geguileonot all drivers support A/A deployments15:15
geguileomnaser: there's a class attribute saying that it is supported15:15
geguileommmmm, I don't see it there anymore...15:17
eharneySUPPORTS_ACTIVE_ACTIVE is in RBDDriver15:18
geguileoeharney: I was looking in github...15:19
geguileomnaser: https://opendev.org/openstack/cinder/src/branch/master/cinder/volume/drivers/rbd.py#L22315:19
geguileoeharney: and apparently that oaf15:19
hemnamorning15:19
geguileoeharney: oh, I hit rsd instead of rbd (facepalm)15:19
geguileomnaser: looks like it was added in Rocky https://github.com/openstack/cinder/commit/245a488c36003764e3550c2c95fa4bef6119e0ea15:21
hemnageguileo: so that issue with changing the scan mode to manual is still around15:22
jbernardmnaser: i did test it when i submitted that patch, appeared to work at the time15:22
hemnathe customer can't isolate the iscsi sessions to just 1 or a subset of their target portals15:22
hemnadue to needing multipath attachments15:22
hemnaso they are basically hosed with os-brick forcing it to manual15:23
hemnarebooting the host loses drives15:23
hemnasince the scan mode is set to manual15:23
hemnaso I think we need a config option to allow folks to force os-brick to leave it to auto15:23
hemnabut adding that config in nova is a major pita15:24
geguileoI think we need to look for an alternative way to make both be able to coexist15:24
hemnaI'm not sure how to solve it15:24
geguileoin os-brick itself15:24
hemnaboth settings having problems15:24
geguileoI'll finish with what I'm working on and see if I can come up with something...15:25
mnaserok great, so we'll fix it in openstack-ansible, thanks jbernard and geguileo.15:26
*** lpetrut has joined #openstack-cinder15:30
mnasergeguileo: well i guess this is still relevant https://review.opendev.org/#/c/672054/ ?15:32
geguileomnaser: that sentence is now also incorrect, because snapshot creation goes through the scheduler as well15:34
geguileoand I don't know if we are creating the worker in the scheduler as we should or not15:34
mnasergeguileo: we are def creating it in the volume rpcapi in what i see15:35
mnasercreate_worker() happens only for new volume creation in scheduler15:35
geguileomnaser: then that's probably a bug as well15:35
mnaserfor delete vol and create snapshot, it happens in vol api15:35
mnaserok, let me fix that then15:35
geguileomnaser: but I'm not 100% sure, it needs to be checked15:36
*** lemko has quit IRC15:37
openstackgerritMohammed Naser proposed openstack/cinder master: workers: create snapshot worker in scheduler rpcapi  https://review.opendev.org/67205415:39
mnasergeguileo: ^ thats to be checked by whomever15:39
geguileomnaser: you'll need to update the unittests15:40
mnaseroh yeah15:40
openstackgerritMohammed Naser proposed openstack/cinder master: workers: create snapshot worker in scheduler rpcapi  https://review.opendev.org/67205415:42
*** henriqueof has quit IRC15:43
*** henriqueof has joined #openstack-cinder15:54
*** sahid has quit IRC15:54
*** henriqueof has quit IRC15:55
mnaseri cant seem to find any docs on how to add/configure active-active15:57
mnaseram i not googliing the right terms?15:57
*** henriqueof has joined #openstack-cinder15:57
jungleboyjmnaser:  I thought geguileo had a blog on best practices for setting up an active/active env.15:57
geguileojungleboyj: he's looking into how it should be deployed for osa15:58
geguileoand I'm not sure we have that doc anywhere15:58
jungleboyjgeguileo:  Ah, ok.  Yeah, we probably don't have that documented yet.15:58
jungleboyj:-(15:59
mnasernono15:59
mnaseri will make OSA support it :)15:59
mnaserso docs about how its done in cinder period are good enough15:59
geguileomnaser: yeah, I know15:59
geguileobut you are looking for how it should be deployed so you can add it to osa15:59
mnaseryep.. i cant find any docs right now16:00
*** sahid has joined #openstack-cinder16:00
geguileothat's what I meant to say (and I think jungleboyj knows me enough to understand the meaning from my weird sentences)16:00
geguileomnaser: https://docs.openstack.org/cinder/latest/contributor/high_availability.html16:00
geguileomnaser: that's what I wrote for developers16:01
geguileobut it doesn't cover the deployment16:01
jungleboyjYeah, I knew what geguileo meant.16:01
*** tejdeep has joined #openstack-cinder16:01
geguileojeje16:01
jungleboyjThat was the documentation I found as well.16:01
mnaseryeah that seems awesomely detailed :P16:01
geguileomnaser: iirc you only need to set the cluster conf option16:01
mnaser"For Active-Active configurations we need to include the Volume services that will be managing the same backends on the cluster. To include a node in a cluster, we need to define its name in the [DEFAULT] section using the cluster configuration option, and start or restart the service."16:02
geguileoand not set the host field or backend_host fields16:02
mnaser"The name of the cluster must be unique and cannot match any of the host or backend_host values. Non unique values will generate duplicated names for message queues."16:02
mnaserthat pretty much covers it i guess16:02
geguileoyup16:02
geguileoand NOT defining the same host conf on multiple c-vol services16:02
geguileoor c-scheduler services16:03
mnasercan i put this inside the backend settings?16:03
mnaseror is it a c-vol top level config16:03
mnaserhttps://github.com/openstack/cinder16:03
mnasero no is broken16:03
geguileoI may have set it to be a global thing...16:04
* geguileo looks16:04
geguileomnaser: it's global, like "host"16:05
mnaserah ack16:05
mnaserok will implement in osa16:05
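[Editor's note: putting geguileo's advice and the doc passage quoted above together, the cinder.conf change for active-active amounts to something like the following on every c-vol node managing the shared backend. This is a sketch; the cluster name and backend section are illustrative.]

```ini
[DEFAULT]
# Unique cluster name shared by all c-vol services managing the same
# backends. Per the docs, it must NOT match any host or backend_host
# value, or message queue names will collide. Do not set a shared
# host/backend_host in an A/A deployment.
cluster = rbd-cluster-1

[rbd-backend]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd
# backend_host deliberately left unset
```

Driver support matters too: as noted above, not all drivers set SUPPORTS_ACTIVE_ACTIVE, and RBD only gained it in Rocky.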
*** e0ne has quit IRC16:05
*** tejdeep has quit IRC16:06
*** udesale has quit IRC16:08
openstackgerritPawel Kaminski proposed openstack/os-brick master: connectors/nvme: Wait utill nvme device show up in kernel  https://review.opendev.org/67203116:08
mnasergeguileo: https://review.opendev.org/#/c/672078/16:14
*** lpetrut has quit IRC16:16
*** senrique_ is now known as enriquetaso16:21
*** sahid has quit IRC16:23
*** davidsha has quit IRC16:24
openstackgerritAndreas Jaeger proposed openstack/cinder master: Update api-ref location  https://review.opendev.org/67208716:28
*** deiter has joined #openstack-cinder16:29
*** baojg has quit IRC16:42
*** baojg has joined #openstack-cinder16:43
*** dviroel has joined #openstack-cinder16:43
*** baojg has quit IRC16:43
*** baojg has joined #openstack-cinder16:43
*** baojg has quit IRC16:44
*** baojg has joined #openstack-cinder16:44
*** baojg has quit IRC16:44
*** baojg has joined #openstack-cinder16:45
*** baojg has quit IRC16:45
*** baojg has joined #openstack-cinder16:46
*** baojg has quit IRC16:46
*** jrubenst has joined #openstack-cinder16:46
*** baojg has joined #openstack-cinder16:47
*** baojg has quit IRC16:47
*** baojg has joined #openstack-cinder16:47
*** baojg has quit IRC16:48
*** baojg has joined #openstack-cinder16:48
*** baojg has quit IRC16:48
*** baojg has joined #openstack-cinder16:49
*** baojg has quit IRC16:49
*** baojg has joined #openstack-cinder16:50
*** baojg has quit IRC16:50
*** baojg has joined #openstack-cinder16:51
*** baojg has quit IRC16:51
*** baojg has joined #openstack-cinder16:51
*** baojg has quit IRC16:52
*** baojg has joined #openstack-cinder16:52
*** baojg has quit IRC16:52
*** baojg has joined #openstack-cinder16:53
*** baojg has quit IRC16:53
*** baojg has joined #openstack-cinder16:54
*** baojg has quit IRC16:54
*** baojg has joined #openstack-cinder16:55
*** baojg has quit IRC16:55
*** baojg has joined #openstack-cinder16:55
*** baojg has quit IRC16:56
*** baojg has joined #openstack-cinder16:56
*** baojg has quit IRC16:56
*** baojg has joined #openstack-cinder16:57
*** baojg has quit IRC16:57
*** baojg has joined #openstack-cinder16:58
*** henriqueof has quit IRC16:58
*** baojg has quit IRC16:58
*** baojg has joined #openstack-cinder16:58
*** baojg has quit IRC16:59
*** baojg has joined #openstack-cinder16:59
*** baojg has quit IRC16:59
*** e0ne has joined #openstack-cinder17:05
*** mvkr_ has quit IRC17:18
jungleboyje0ne:  Are you still around?17:22
e0nejungleboyj: hi. yes17:30
jungleboyje0ne:  Hey, I think I got the problem resolved.  Had Horizon throwing a 500 error.  For some reason it thought the logs were inaccessible.17:31
jungleboyjI restarted the container and then it was fine.17:31
jungleboyjEver seen anything like that?17:31
e0neneed to see logs17:32
*** jmlowe has quit IRC17:34
openstackgerritSofia Enriquez proposed openstack/cinder master: Support Incremental Backup Completion In RBD  https://review.opendev.org/62794117:35
jungleboyjOk.17:38
jungleboyjhttps://www.irccloud.com/pastebin/S0Z5MlYO/17:39
jungleboyje0ne: ^^^ That was what I was seeing.17:39
e0nejungleboyj: looks like you don't have permissions to write logs:(17:46
*** ociuhandu_ has joined #openstack-cinder17:47
BLZbubbahi guys, i'm doing some benchmarks here on a cinder-lvm/iscsi volume and the effective queue depth on the 4 RAID10 devices is < 1, but the guest OS fio process has a queue depth of 128 (iostat shows effectively it is well over 100).  obviously this is killing performance on the SSDs which like a queue depth of 50+ if possible17:47
BLZbubbablock size is 1M17:48
jungleboyjYep, that was what I determined too.  Restarting the container fixed it.17:48
BLZbubbais there a good way to determine who is limiting the number of simultaneous operations?  i am actually starting to suspect mdadm17:49
e0nehm... did you deploy your env via openstack-helm?17:49
jungleboyjProbably should have poked around in the container more first.17:49
jungleboyje0ne: This was an RHOSP deployment17:49
*** ociuhandu has quit IRC17:49
*** ociuhandu_ has quit IRC17:52
e0neBLZbubba: what target driver do you use? if it's tgtd you can try to change its configuration via the 'iscsi_target_flags' config opt17:52
e0nejungleboyj: I didn't try that distro17:56
BLZbubbayes tgtd, but I was using a non-raid local hard drive earlier and it didn't seem to have the same problem.  guess i should get a second local drive & RAID 1 and see what happens17:56
jungleboyjOk.17:57
BLZbubbathe sad thing is i was trying this method because ceph kept nuking my effective queue depth to below 117:57
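[Editor's note: to make the benchmark reproducible, the workload BLZbubba describes (1M block size, guest-side queue depth of 128 via fio) can be expressed as an fio job file along these lines. A sketch: the device path, runtime, and read/write mix are assumptions.]

```ini
[global]
ioengine=libaio   ; async engine, so iodepth is actually honored
direct=1          ; bypass the guest page cache
bs=1M
iodepth=128
runtime=60
time_based=1

[randread]
rw=randread
filename=/dev/vdb ; the attached cinder volume inside the guest
```

Running `iostat -x 1` on the guest and on the c-vol host while this runs shows whether the queue depth survives the iSCSI target and mdadm layers, which is the comparison being made above.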
*** jmlowe has joined #openstack-cinder17:59
*** tesseract has quit IRC18:01
*** tejdeep has joined #openstack-cinder18:13
*** deiter has quit IRC18:25
*** m75abrams has quit IRC18:26
*** jmlowe has quit IRC18:39
*** henriqueof has joined #openstack-cinder18:43
*** baojg has joined #openstack-cinder19:01
*** e0ne has quit IRC19:01
*** baojg has quit IRC19:05
openstackgerritBrian Rosmaita proposed openstack/cinder master: Add options to [service_user] sample config  https://review.opendev.org/67214519:09
openstackgerritEric Harney proposed openstack/cinder master: Prevent double-attachment race in attachment_reserve  https://review.opendev.org/67137019:17
*** deiter has joined #openstack-cinder19:19
deiterHello, please review https://review.opendev.org/#/c/669736/ thank you!19:22
*** eharney has quit IRC19:23
*** Roamer` has quit IRC19:30
*** beraldo has quit IRC19:32
*** e0ne has joined #openstack-cinder19:50
*** eharney has joined #openstack-cinder20:02
*** e0ne has quit IRC20:04
*** jmlowe has joined #openstack-cinder20:10
*** whoami-rajat has quit IRC21:01
*** mvkr_ has joined #openstack-cinder21:04
*** enriquetaso has quit IRC21:05
*** pcaruana has quit IRC21:10
*** jrubenst has quit IRC21:11
*** jrubenst has joined #openstack-cinder21:31
*** jrubenst has quit IRC21:44
*** jrubenst has joined #openstack-cinder21:54
*** jrubenst has quit IRC22:02
*** jrubenst has joined #openstack-cinder22:05
*** jrubenst has quit IRC22:10
*** jrubenst has joined #openstack-cinder22:11
*** Kuirong has joined #openstack-cinder22:13
*** jrubenst has quit IRC22:19
*** rcernin has joined #openstack-cinder22:30
*** mriedem has quit IRC22:39
*** tkajinam has joined #openstack-cinder22:39
*** tosky has quit IRC22:49
*** enriquetaso has joined #openstack-cinder22:58
*** baojg has joined #openstack-cinder23:02
*** baojg has quit IRC23:07
*** henriqueof has quit IRC23:14
*** carloss has quit IRC23:15
*** tejdeep has quit IRC23:34
*** Kuirong has quit IRC23:53
*** TxGirlGeek has joined #openstack-cinder23:54
*** tejdeep has joined #openstack-cinder23:56
*** TxGirlGeek has quit IRC23:56

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!