Monday, 2019-04-08

01:19 *** whoami-rajat has joined #openstack-manila
01:44 *** irclogbot_1 has quit IRC
03:49 *** lpetrut has joined #openstack-manila
04:21 *** lpetrut has quit IRC
04:49 *** pcaruana has joined #openstack-manila
04:55 *** pcaruana has quit IRC
05:31 *** e0ne has joined #openstack-manila
05:39 *** e0ne has quit IRC
05:56 *** e0ne has joined #openstack-manila
06:04 *** e0ne has quit IRC
06:08 *** lpetrut has joined #openstack-manila
06:30 *** pcaruana has joined #openstack-manila
06:50 *** kopecmartin|off is now known as kopecmartin
07:19 *** tosky has joined #openstack-manila
07:24 *** e0ne has joined #openstack-manila
07:52 *** e0ne has quit IRC
08:36 *** e0ne has joined #openstack-manila
08:40 <openstackgerrit> Nir Gilboa proposed openstack/manila-tempest-plugin master: Scenario test: Create/extend share and write data  https://review.openstack.org/531568
08:46 *** zigo_ has joined #openstack-manila
08:50 *** zigo_ is now known as zigo
09:13 *** tbarron_ has joined #openstack-manila
09:27 *** e0ne has quit IRC
09:34 *** e0ne has joined #openstack-manila
10:06 *** e0ne has quit IRC
10:13 *** e0ne has joined #openstack-manila
11:02 *** e0ne has quit IRC
11:09 *** e0ne has joined #openstack-manila
11:17 *** carloss has joined #openstack-manila
11:46 *** e0ne has quit IRC
11:47 *** pcaruana has quit IRC
11:51 *** e0ne has joined #openstack-manila
11:53 *** e0ne has quit IRC
12:38 *** pcaruana has joined #openstack-manila
12:45 *** mmethot has joined #openstack-manila
12:59 *** enriquetaso has joined #openstack-manila
13:26 *** dviroel has joined #openstack-manila
13:28 *** whoami-rajat has quit IRC
13:31 *** pcaruana has quit IRC
13:36 *** pcaruana has joined #openstack-manila
13:38 *** whoami-rajat has joined #openstack-manila
13:52 *** eharney has joined #openstack-manila
13:52 *** jmlowe has quit IRC
14:07 *** luizbag has quit IRC
14:09 *** lpetrut has quit IRC
14:10 *** kaisers_ has joined #openstack-manila
14:28 *** jmlowe has joined #openstack-manila
14:36 *** kaisers_ is now known as kaisers_away
14:54 *** kaisers_away is now known as kaisers_
15:12 *** esker has joined #openstack-manila
15:14 *** e0ne has joined #openstack-manila
15:22 *** esker has quit IRC
15:57 *** kaisers_ is now known as kaisers_away
16:00 *** kaisers_away is now known as kaisers_
16:00 *** kaisers_ is now known as kaisers_away
16:02 *** e0ne has quit IRC
16:03 <gouthamr> Darkl0rd!!780135
16:04 * gouthamr mondays
16:04 <vkmc> :o
16:04 * vkmc tries gouthamr's bank account
16:05 *** e0ne has joined #openstack-manila
16:05 <gouthamr> vkmc: :P i secure that little better than IRC, try passw0rd_dumbledoreIsNotDeaD.com
16:05 <bswartz> gouthamr: spoiler alert!
16:05 <vkmc> gouthamr, nooooo
16:06 <gouthamr> bswartz vkmc: he's not, how could you believe it :(
16:06 <gouthamr> wait
16:16 *** erlon has joined #openstack-manila
16:17 *** e0ne has quit IRC
16:36 *** kaisers_away is now known as kaisers_
16:37 *** kaisers_ is now known as kaisers_away
16:46 *** e0ne has joined #openstack-manila
16:54 *** ociuhandu has quit IRC
17:08 *** luizbag has joined #openstack-manila
17:13 *** erlon has quit IRC
17:15 *** erlon has joined #openstack-manila
17:16 *** kopecmartin is now known as kopecmartin|off
17:16 *** e0ne has quit IRC
17:31 *** erlon has quit IRC
17:37 *** lseki has joined #openstack-manila
17:38 *** luizbag has quit IRC
18:06 *** kaisers_away is now known as kaisers_
18:11 *** kaisers_ has quit IRC
18:19 *** patrickeast has quit IRC
18:19 *** dviroel has quit IRC
18:20 *** dviroel has joined #openstack-manila
18:20 *** carloss has quit IRC
18:20 *** patrickeast has joined #openstack-manila
18:20 *** amito has quit IRC
18:20 *** amito has joined #openstack-manila
18:21 *** carloss has joined #openstack-manila
18:33 *** erlon has joined #openstack-manila
18:33 *** e0ne has joined #openstack-manila
18:40 *** luizbag has joined #openstack-manila
18:40 *** thgcorrea has joined #openstack-manila
18:55 *** e0ne has quit IRC
18:56 *** jmlowe has quit IRC
19:59 *** luizbag has quit IRC
20:00 *** jmlowe has joined #openstack-manila
20:05 *** thgcorrea has quit IRC
20:08 *** whoami-rajat has quit IRC
20:25 *** pcaruana has quit IRC
20:30 *** eharney has quit IRC
20:38 <openstackgerrit> Carlos Eduardo proposed openstack/manila master: DNM - Fix manila pagination speed  https://review.openstack.org/650986
21:13 <lseki> hi folks
21:13 <lseki> Could someone give me advice on investigating this bug? https://bugs.launchpad.net/manila/+bug/1804208
21:13 <openstack> Launchpad bug 1804208 in Manila "scheduler falsely reports share service down" [High,Triaged] - Assigned to Lucio Seki (lseki)
21:14 <lseki> it's a bug reported by carthaca. Seems that under a heavily loaded environment, the manila-share service is listed as `down`
21:17 <lseki> but in my environment it appears as `up` all the time, even when `_update_host_state_map` is reporting `Share service is down`
21:18 <lseki> it only appears as `down` when I restart the manila-share service while it's exporting the shares and that takes too long
21:23 <lseki> my guess is to perform live checks while re-exporting the shares under `ShareManager#ensure_driver_resources`, but I'm not sure if it's a good idea...
21:31 <tbarron_> lseki: i think carthaca intended to report a steady-state scale issue, not a restart issue
21:31 <tbarron_> lseki: both would be legitimate issues, but it's less surprising that the service would show down during a restart :)
21:32 <tbarron_> lseki: if there's only one instance of the service and it's down during a restart, I'd think that's expected
21:33 <lseki> tbarron_ > there's only one instance of the service and it's down during a restart
21:33 <lseki> yeah, that's exactly what's happening
21:33 <lseki> so it seems I haven't been able to reproduce the error yet
21:33 <tbarron_> lseki: today the manila-share service runs active-passive, and you'd need more than a devstack setup to run multiple instances under pacemaker, and even then it may be "down" for a brief period
21:34 <tbarron_> lseki: right, carthaca is claiming that at SAP the service shows as down (from the vantage point of the scheduler) w/o any manila-share failover, just due to scale issues
21:34 <tbarron_> lseki: now it may be that they are running an older manila version and that we don't have the bug anymore, but
21:35 <tbarron_> lseki: i kinda doubt it, I don't think we've changed much in that area
21:35 <tbarron_> lseki: so there's an interesting detective problem here: how to reproduce it
21:36 <tbarron_> lseki: some of our mutual downstream customers want to run about a hundred "edge" manila-share (and cinder-volume) services
21:36 <tbarron_> lseki: from a core of three manila-api and scheduler services
21:37 <tbarron_> lseki: so that's the scale issue: will the services show as down just b/c of this scale, even without any failover?
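
For context on the "up"/"down" decision discussed above: the scheduler does not probe the share service directly, it compares the service's last heartbeat timestamp in the database against a configured timeout (service_down_time). The sketch below is illustrative rather than manila's actual code; the 60-second default and the variable names are assumptions based on the usual OpenStack heartbeat pattern, but it shows why a manila-share process that is too busy to send its periodic report will be shown as down even though it never crashed.

    # Illustrative sketch (not manila's actual implementation) of a
    # heartbeat-based liveness check as used by OpenStack-style schedulers.
    from datetime import datetime, timedelta, timezone

    SERVICE_DOWN_TIME = timedelta(seconds=60)  # assumed default; configurable

    def service_is_up(last_heartbeat, now=None):
        """Return True if the last heartbeat is recent enough to count as 'up'."""
        now = now or datetime.now(timezone.utc)
        return (now - last_heartbeat) <= SERVICE_DOWN_TIME

    # Example: a service whose last report is three minutes old shows as down,
    # even if the process is merely busy (e.g. re-exporting many shares).
    stale = datetime.now(timezone.utc) - timedelta(minutes=3)
    print(service_is_up(stale))  # False

Seen this way, the restart case lseki reproduced and the steady-state scale case carthaca reported are the same symptom with different causes: in both, the heartbeat simply arrives too late.
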
21:38 <tbarron_> gouthamr: no rush, but the gating for csi features in next-gen okd is painful, so
21:38 <tbarron_> gouthamr: i've backed off for now to pure k8s
21:39 <tbarron_> gouthamr: running one master and three nodes, i'm running a minimal
21:39 <lseki> tbarron_: hmm, seems more complex than I thought
21:39 <tbarron_> gouthamr: `manila+keystone+rabbit+myself on one of the nodes
21:39 <tbarron_> lseki: maybe not, but i think it's a steady-state scale issue that carthaca is targeting, not a failover issue
21:40 <gouthamr> tbarron_: ack, nice... local deployment? with kubeadm?
21:42 <tbarron_> gouthamr: yup, https://paste.fedoraproject.org/paste/bk9yQyUXB36n~EendYxinA
21:42 <gouthamr> tbarron_: sweet!
21:42 <tbarron_> gouthamr: running devstack on a worker; would you recommend the master or an independent node instead?
21:43 <tbarron_> gouthamr: what's the better test/demo?
21:43 <tbarron_> gouthamr: this is local on libvirt but we might be able to do it on openstack
21:43 <gouthamr> tbarron_: i used the dsvm as the master; the only issue being the etcd clash
21:45 *** whoami-rajat has joined #openstack-manila
21:45 <tbarron_> gouthamr: maybe they fixed devstack so that etcd isn't always on; i picked the worker for that reason
21:46 <tbarron_> gouthamr: but devstack didn't start it
21:46 <tbarron_> gouthamr: I thought it used to, no matter what
21:47 <tbarron_> gouthamr: local.conf has: ENABLED_SERVICES=mysql,rabbit,keystone
21:47 <gouthamr> tbarron_: yes, you can "disable_service etcd3" i think
21:47 <tbarron_> plus the plugin
21:47 <tbarron_> for manila and ceph
21:48 <tbarron_> gouthamr: so given that we can run without it anyways, i'm wondering whether running on a worker or the master is better to "prove" service decoupling
21:48 <tbarron_> gouthamr: or maybe just another VM
21:48 <tbarron_> gouthamr: theoretically I don't think it matters, maybe just dramatically :)
21:49 <gouthamr> tbarron_: true, as long as the AUTH_URL is reachable, it should work
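
The minimal devstack control plane tbarron_ describes boils down to a local.conf along these lines. This is a sketch reconstructed from the conversation rather than his actual file: the ENABLED_SERVICES line and the etcd3 hint come straight from the log above, while the plugin repo URLs and anything else are assumptions.

    [[local|localrc]]
    # Minimal control-plane services, as quoted above
    ENABLED_SERVICES=mysql,rabbit,keystone
    # gouthamr's suggestion, to avoid the etcd clash with the k8s control plane
    disable_service etcd3
    # "plus the plugin ... for manila and ceph"; the repo URLs here are assumptions
    enable_plugin manila https://opendev.org/openstack/manila
    enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph

As gouthamr notes, whether this lands on the k8s master, a worker, or a separate VM shouldn't matter for the demo, as long as the keystone endpoint (AUTH_URL) is reachable from the cluster nodes.
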
22:02 <lseki> Alright, now I know that being "down" for a while after a restart is not a problem (at least not the reported issue), and I haven't managed to reproduce his issue yet.
22:03 <lseki> tbarron_: thanks, I'll keep trying to figure out how to reproduce the issue
22:05 *** erlon has quit IRC
22:56 *** dviroel has quit IRC
23:35 <tbarron_> lseki++
23:58 *** whoami-rajat has quit IRC
23:58 *** tosky has quit IRC
