Wednesday, 2015-04-15

<openstackgerrit> xing-yang proposed openstack/python-cinderclient: Add support to incremental backups in cinder
<openstackgerrit> Patrick East proposed openstack/cinder: Add locking to PureISCSIDriver around creating Purity Host objects.
<openstackgerrit> Jay Bryant proposed openstack/os-brick: Sync loopingcall from oslo-incubator for os-brick
<openstackgerrit> Jay Bryant proposed openstack/os-brick: Use oslo_log instead of openstack.common.log
<thingee> jungleboyj: is this still going to happen?
<jungleboyj> thingee: Technically that is happening.  03:52
<jungleboyj> thingee: Don't have the unittests working yet.  03:53
<thingee> jungleboyj: ok  03:54
<thingee> so liberty?  03:54
<jungleboyj> thingee: Yeah, I should make time to finish that up.  03:57
<patrickeast> thingee: winston-d: thanks for the super quick reviews on !  04:18
<thingee> patrickeast: np, just waiting on the pure ci  04:19
<patrickeast> thingee: hmm, it should have finished by now...  04:22
* patrickeast goes to check  04:22
<patrickeast> thingee: oh derp  04:24
<patrickeast> thingee: my zuul-merger was stuck, had like 3 things in the queue from the last couple of hours  04:25
<patrickeast> thingee: ~45 min and it should be posted  04:25
<patrickeast> thingee: although, this brings up an interesting point... this bug wouldn't actually have been found from the CI since we don't have CHAP enabled in the tests... I've already added a task to figure out which combinations of config variables we need to have setup to get better coverage  04:26
<patrickeast> things like multipathing on/off, chap, etc  04:26
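The coverage matrix patrickeast describes can be sketched by enumerating the combinations of config knobs a driver CI would need to exercise. A hedged sketch; the two option names are shown only as illustrative examples of boolean knobs, not a definitive list:

```python
from itertools import product

# Sketch of the CI coverage matrix: one config dict per combination of the
# config knobs under discussion (CHAP on/off, multipath on/off, ...).
OPTIONS = {
    "use_chap_auth": [True, False],
    "use_multipath_for_image_xfer": [True, False],
}

def ci_matrix(options):
    """Yield one config dict per combination of option values."""
    keys = sorted(options)
    for values in product(*(options[k] for k in keys)):
        yield dict(zip(keys, values))

combos = list(ci_matrix(OPTIONS))
```

Each dict would correspond to one CI job; two boolean knobs already mean four jobs, which is the combinatorial cost being weighed here.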
<patrickeast> thingee: the ci just finished  05:02
<thingee> patrickeast: thanks  05:04
<thingee> patrickeast: +2/a  05:05
<patrickeast> thingee: what would you say the odds are of being able to get that into kilo?  05:05
<patrickeast> i'm still kicking myself that we didn't find it until today... went through like 4 people banging on this and no one noticed until today when i was looking at some new stuff for L >.<  05:06
<openstackgerrit> Mike Perez proposed openstack/cinder-specs: Open Liberty for specs
<thingee> patrickeast: We have to do an rc-2 no matter what, so I think good :)  05:07
<patrickeast> thingee: awesome  05:08
<thingee> I'll review the kilo propose patch  05:08
<patrickeast> thingee: thanks!  05:08
<thingee> patrickeast: need to get my voting rights on that branch adjusted. +1 for now  05:09
<thingee> patrickeast: have a good night. I'm going to head to bed early for some much needed sleep.  05:10
<patrickeast> thingee: night! thanks again for the reviews  05:11
<openstackgerrit> Daisuke Fujita proposed openstack/cinder: Fix a wrong argument of create method
<nikesh> thingee: hi  05:40
<zerda> hemna: hello, I wonder whether there are any plans to re-introduce the HP MSA driver?  09:11
<openstackgerrit> yogeshprasad proposed openstack/cinder: Add chap support to CloudByte cinder driver
<DuncanT> zerda: Do you have a desire to use an MSA with Openstack?  10:37
<zerda> DuncanT, we're considering this now  10:40
<DuncanT> zerda: Thanks. Hemna is probably your best source of info, but he's west coast US so won't be on for a while. As far as I know there's no current plan, simply due to lack of demand  10:41
<zerda> DuncanT, OK, got it, thank you. I'm 13 hours away from him; guess it will be better to write an email :)  10:43
<openstackgerrit> Duncan Thomas proposed openstack/cinder: Add CA cert option to backups swift driver
<nikesh> zerda, DuncanT: hi, we are working with HP to reintroduce the HP MSA driver for the Liberty release  11:06
<DuncanT> I'm confused, can anybody see why the nimble CI failed on ?  11:11
<zerda> nikesh, glad to hear that  11:14
<sdague> does anyone know if this statement is still true?  11:20
<sdague> "Currently cinderclient needs you to specify the *volume api* version."  11:21
<sdague> because some code in devstack's openrc completely masks out openstackclient's volume support because of it  11:21
<openstackgerrit> Rushi Agrawal proposed openstack/cinder-specs: Snapshot sharing
<openstackgerrit> Rushi Agrawal proposed openstack/cinder-specs: Snapshot sharing
<smcginnis> sdague: I know thingee has a patch out for automatic version discovery.  12:44
<smcginnis> sdague: I don't know enough, but I thought it just defaulted to v1.  12:44
<sdague> smcginnis: no idea, but in all devstack configs we force it to v2, which causes openstack client heartburn  12:45
<smcginnis> sdague: AFAIK, v2 is where we want to be, but v1 is still the client default.  12:45
<sdague> ok, so I feel like we should not be overriding that default in devstack  12:49
<sdague> devstack tries, when possible, to keep the defaults of what the projects are doing  12:50
<smcginnis> sdague: That does seem like it would be safer.  12:50
<sdague> it would be good to know what the cinder plan is here to change that default  12:50
<smcginnis> I believe the plan is to have the default be v2, but I will let other more knowledgeable folks confirm that. :)  12:51
<e0ne> sdague: i think we'll change the default to v2 in L because we'll remove v1 soon :)  12:53
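The openrc override being discussed comes down to one environment variable: python-cinderclient reads OS_VOLUME_API_VERSION to pick the volume API version. A sketch of the devstack-style pin; dropping it keeps the client's built-in default (v1 at the time):

```shell
# Devstack-style pin of the cinderclient volume API version in openrc;
# removing this line leaves the client's own default (v1 as of Kilo).
export OS_VOLUME_API_VERSION=2

# One-off equivalent on the command line:
#   cinder --os-volume-api-version 2 list
```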
<smcginnis> hemna: Preso looks nice. It's a great start.  12:57
<smcginnis> hemna: I wish dropbox would allow shared editing. :\  12:58
<smcginnis> hemna: I thought it did, but maybe not odp format?  12:58
<e0ne> winston-d: hi. are you around?  14:00
<smcginnis> jgriffith: No coffee yet? :P  14:45
<jgriffith> smcginnis: hehe.. nope  14:45
<jgriffith> smcginnis: It's brewing though :)  14:46
<nikesh> thingee: hi, could you please approve the BPs and target them for Liberty  14:46
<nikesh> i am planning to submit patches for review for that soon  14:47
<smcginnis> nikesh: Probably better not to keep pinging him on that. He will get to it.  14:47
<smcginnis> nikesh: Just post your code when ready. If the bp isn't approved by then, it will be before merging.  14:47
<nikesh> smcginnis: ok thanks  14:50
<DuncanT> nikesh: You're aware you'll need working CI before we merge new drivers, right?  14:58
<DuncanT> Hmmm, nothing at all on the agenda yet  15:01
<DuncanT> Shortest weekly meeting ever  15:01
<smcginnis> DuncanT: Should be quick.  15:01
<smcginnis> But I'm sure we'll find something to fill the time.  15:01
<smcginnis> Should probably talk Summit.  15:01
<Swanson> Ballpark timelines for liberty would be nice.  15:02
<jungleboyj> Swanson: There was a mailing list thread from ttx on that this week.  15:03
<jungleboyj> At least from a project level.  15:03
<Swanson> I need to stop sorting the mailing list into "cinder" and "other" folders.  15:04
<jungleboyj> Swanson: :-)  The e-mail is a never ending challenge.  15:04
<e0ne> - interesting.. i've missed it :(  15:08
<e0ne> winston-d: short update about BlockDeviceDriver  15:10
<e0ne> winston-d: it was implemented for use in Sahara  15:10
<e0ne> winston-d: sahara uses it now, and our sahara team has contacts with people who use it with sahara  15:11
<e0ne> winston-d: feel free to ask about it in #openstack-sahara  15:11
<e0ne> winston-d: or ask me if you need some driver support/maintenance/fixing  15:12
<e0ne> winston-d: i haven't got any testing results for this driver :(  15:12
<nikesh> jgriffith: how can we get sos-ci working before merging drivers  15:12
<DuncanT> nikesh: Cherry-pick your driver on top of cinder; there's a devstack config option to do so  15:13
<luv> hi, I'm writing a volume driver for cinder here (just messing around at this stage) and I would like to propagate an error all the way back to the user behind horizon (or cli). I see that raised exceptions in cinder are not propagated via rabbitmq (as opposed to nova?).  15:13
<DuncanT> luv: Most operations are async, so the user is gone before the exception is raised  15:14
<luv> DuncanT: that might not be the case when the user is using horizon  15:16
<DuncanT> luv: The 'user' from a cinder PoV is whatever is at the end of the socket  15:17
<jgriffith> nikesh: modify the local.conf template to fetch your changes from your own github repo  15:17
<luv> DuncanT: i see no issue with reporting errors via rabbitmq and horizon processing them and showing them to the user  15:18
<luv> all async and clean  15:18
<DuncanT> luv: I believe you can configure the cinder logger to send all ERROR or EXCEPTION to rabbit  15:18
<luv> nova has a wrap_exception decorator for reporting exceptions via rabbitmq (or other supported messaging system, ofc)  15:19
<e0ne> luv: did you see the related conversation in the openstack-dev ML?  15:20
<luv> DuncanT: that might be something to look at, would you mind pointing me at the correct docs?  15:20
<luv> e0ne: nope ... sounds interesting tho - will check it out  15:20
<luv> e0ne: yup, looks exactly like my problem. will check it out tomorrow. afk now :)  15:22
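The nova pattern luv mentions can be sketched as a decorator that publishes the exception to a notifier before re-raising it. In OpenStack the notification would go over rabbitmq via the oslo notifier; the in-memory `notify`/`events` pair below is a hypothetical stand-in, as is `create_volume`:

```python
import functools

# Stand-in for the rabbitmq-backed notifier: just records events in memory.
events = []

def notify(event_type, payload):
    events.append((event_type, payload))

def wrap_exception(func):
    """Publish any exception from func to the notifier, then re-raise."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            notify("volume.error", {"method": func.__name__,
                                    "exception": str(exc)})
            raise
    return wrapper

@wrap_exception
def create_volume(size):
    # Hypothetical driver entry point used only to exercise the decorator.
    if size <= 0:
        raise ValueError("invalid size")
    return {"size": size}
```

Because the exception is re-raised, normal error handling (setting the volume to an error state) still runs; the notification is an extra side channel a dashboard could consume.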
<nikesh> jgriffith: so in sos-ci/sos-ci/ansible/tasks/install_devstack.yml, do we have to modify cinder_branch with our patch ref  15:26
<nikesh> meaning on a valid event we can catch only our patch ref  15:27
<nikesh> and it will update the cinder branch  15:27
<e0ne> jbernard: hi. are you around?  15:28
<jbernard> e0ne: i am, how goes it?  15:28
<e0ne> jbernard: i'm reading your comment for ... you didn't add a proposal to the design session, did you?  15:29
<jbernard> i did  15:29
<jgriffith> nikesh: so the idea is to just fetch/cherry-pick your driver in  15:29
<jbernard> e0ne: i added a placeholder in the etherpad  15:29
<jbernard> e0ne: but i haven't had a chance to post any proposed contents yet  15:30
<jgriffith> nikesh: check the wiki, thingee wrote up a deal on it under the FAQ  15:30
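The local.conf tweak jgriffith and DuncanT describe amounts to pointing devstack at a fork carrying the unmerged driver. CINDER_REPO and CINDER_BRANCH are standard devstack variables; the fork URL and branch name below are placeholders:

```ini
[[local|localrc]]
# Make devstack pull cinder from your own fork so the CI exercises the
# unmerged driver; URL and branch are placeholders.
CINDER_REPO=https://github.com/example/cinder.git
CINDER_BRANCH=my-driver-branch
```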
<DuncanT> luv: I don't know if it is documented, sorry  15:33
<e0ne> jbernard: fyi, i'm looking into this issue too. i hope i'll be able to provide some fix for it later this week  15:36
<jbernard> e0ne: great, i'll remove the session proposal  15:36
<e0ne> jbernard: wait, please  15:37
<jbernard> e0ne: no problem  15:37
<e0ne> jbernard: 'When eventlet isn't enough; alternatives for long-running tasks' - is that yours, about the current bug?  15:37
<jbernard> that's the one i added, yes  15:37
<e0ne> i'm confused a bit. the problem is not about long-running tasks.  15:38
<e0ne> the problem is in eventlet-blocking tasks :(  15:38
<jbernard> volume delete can be a long running task  15:39
<jbernard> which causes eventlet to not be able to schedule other tasks  15:39
<jbernard> thereby blocking forward progress  15:39
<e0ne> but in general, that's a problem for short-running tasks too  15:40
<jgriffith> eventlet doesn't block  15:40
<e0ne> jgriffith: a C lib blocks the eventlet thread  15:40
<jbernard> right, that's what we mean  15:41
<jgriffith> jbernard: e0ne ok, not going down this rat hole again :)  15:41
<jbernard> jgriffith: smart man :)  15:41
* jgriffith should've saved the code snippet to demonstrate the eventlet stuff  15:42
<jgriffith> may be different than what you two are looking at, but probably still valuable  15:42
<jbernard> jgriffith: yeah, post it if you find it  15:42
<jbernard> the basic problem in rbd is that volume delete may take a long time  15:43
<jbernard> spawning a subprocess for each delete operation is not good  15:43
<jgriffith> jbernard: kk... I'll see if I can find it  15:43
<jbernard> but waiting for one to return causes havoc for the other eventlet threads  15:43
<e0ne> jgriffith: just to clarify. eventlet doesn't monkey-patch C libs  15:43
<jgriffith> e0ne: jbernard so are you looking at internal usage of eventlet in RBD?  15:44
<jbernard> internal, as in making librbd eventlet-aware?  15:44
<jgriffith> e0ne: jbernard Ahh.. got ya  15:44
<e0ne> jbernard: long-running ceph jobs are one part of the issue  15:45
<jbernard> e0ne: agreed  15:45
<e0ne> jbernard: in my case, it's blocked when librados doesn't respond to calls, and we disable the connect timeout by default  15:46
<DuncanT> e0ne: Is patching librbd to be eventlet-aware the solution?  15:46
<jbernard> DuncanT: unlikely  15:47
<e0ne> DuncanT: i'm looking at it. it uses threads in a strange way  15:47
<jbernard> DuncanT: i spoke with josh about this, i don't think they want it upstream  15:47
<DuncanT> jbernard: Any reason given?  15:47
<jbernard> DuncanT: i believe the thread model complicates things, but i need to go dig through my logs  15:48
<DuncanT> jbernard: exec might be the only sensible answer in the short term then  15:48
<jbernard> DuncanT: that would work, but then you have the problem with large volume numbers  15:49
<DuncanT> jbernard: Why? It is only doing an operation on them one at a time  15:49
<jbernard> DuncanT: what happens when a second volume is requested for deletion?  15:50
<jbernard> DuncanT: will it queue?  15:50
<DuncanT> jbernard: By default, no, you can put a lock in the driver if needed, but generally it will fork up to n_workers copies in parallel - about 50 I believe  15:51
<DuncanT> jbernard: You can add locking to make it queue, but the ceph cmd line client supports a fair amount of parallelism  15:51
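The "add locking to make it queue" suggestion can be sketched as a lock around the delete path. Cinder drivers typically use the @utils.synchronized decorator for this; a plain threading.Lock stands in below, and delete_volume is a hypothetical driver method. Note this only serializes deletes within one cinder-volume process, not across several services:

```python
import threading

# Module-level lock standing in for cinder's @utils.synchronized("...").
_delete_lock = threading.Lock()
deleted = []

def delete_volume(volume_id):
    # Concurrent callers block here, so deletes effectively queue and run
    # one at a time within this process.
    with _delete_lock:
        deleted.append(volume_id)

# Simulate several parallel delete requests hitting the driver.
threads = [threading.Thread(target=delete_volume, args=(n,)) for n in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

As noted in the discussion, this is a crude rate limit more than anything else; the ceph command-line client itself handles a fair amount of parallelism.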
<e0ne> DuncanT, jbernard: if we do a queue, we need to remember that several cinder-volume services could be available  15:53
<jbernard> fair points  15:54
<jbernard> i think the major concern is that deleting large rbd volumes can be CPU intensive, so system load can overtake the cinder node  15:54
<DuncanT> e0ne: ceph seems quite happy with parallel operations AFAICT... the only reason to queue them in the driver would be as a crude rate limit  15:54
<e0ne> DuncanT: agree  15:55
<e0ne> i hope a simple thread could resolve this issue, but i haven't tested it yet  15:56
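The "simple thread" idea amounts to pushing the blocking C-library call (librados/librbd) into a native OS thread so the event loop keeps servicing other work; eventlet ships this as tpool.execute(). The sketch below uses the stdlib ThreadPoolExecutor as a stand-in for the same offloading pattern, and blocking_delete is a placeholder for a long librbd remove() call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_delete(volume_id):
    # Placeholder for a long, hub-blocking librbd remove() call.
    time.sleep(0.01)
    return ("deleted", volume_id)

# Offload the blocking call to a real OS thread; in an eventlet-based
# service the equivalent is eventlet.tpool.execute(blocking_delete, "vol-1").
with ThreadPoolExecutor(max_workers=4) as pool:
    future = pool.submit(blocking_delete, "vol-1")
    # ... the service would keep handling other requests here ...
    result = future.result()
```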
<smcginnis> join #openstack-meeting  15:59
<smcginnis> Oops. :)  15:59
<winston-d_> smcginnis: yes sir  15:59
<openstackgerrit> Walter A. Boring IV (hemna) proposed openstack/os-brick: Brick: Fix race in removing iSCSI device
<e0ne> winston-d_: did you see my messages about block device driver? ^^  16:29
<winston-d_> e0ne: sorry, i'm on a different computer, I need to log on to the other one to see the history  16:30
<e0ne> winston-d_: np. anyway, you can find it in irc logs too  16:31
<winston-d_> e0ne: sure, yeah, we have logs now  16:31
<smcginnis> Zuul's looking pretty backed up again.  16:38
<rasoto_> hello all, I am trying to find information on how I can create Cinder volume schedules to take snapshots, can someone point me in the right direction?  16:48
<bswartz> thingee: you still around?  16:50
<bswartz> ...looks like he bailed  16:50
<smcginnis> rasoto_: Cron job?  16:51
<e0ne> smcginnis: +1 :)  16:51
<rasoto_> there is no way to have a user/tenant create the schedules via the dashboard?  16:52
<smcginnis> rasoto_: There's nothing in horizon that I'm aware of.  16:53
<rasoto_> smcginnis: and I am assuming there is nothing in cinder either, right?  16:54
<smcginnis> rasoto_: Right, there's no snapshot scheduling service in cinder.  16:55
<rasoto_> I see  16:55
<e0ne> rasoto_: you're right. cinder doesn't take care of it  16:55
<rasoto_> well thank you everyone for the help :)  16:55
<smcginnis> rasoto_: np. Doesn't sound like the answer you were looking for, but at least it's an answer. :)  16:55
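In concrete form, the cron-job answer might look like the crontab entry below. The sourced credentials file and VOLUME_ID are placeholders; --force True lets the CLI snapshot an attached (in-use) volume:

```text
# Crontab sketch: snapshot VOLUME_ID nightly at 02:00.
# m  h  dom mon dow  command
  0  2  *   *   *    . /home/stack/openrc && cinder snapshot-create --force True VOLUME_ID
```

As noted below, array-side scheduled snapshots exist too, but they are not host-integrated, so the data may not be in a consistent state.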
<rasoto_> smcginnis: yeah, I wonder if there is something on the NetApp side since that is the service/driver I am using  16:56
<smcginnis> rasoto_: I think most arrays have some sort of scheduled snapshot functionality.  16:57
<smcginnis> rasoto_: The problem is it's usually not integrated on the host side, so there's no guarantee the data is in a good state.  16:57
<smcginnis> tbarron: You around? ^^  16:58
<tbarron> smcginnis: yessir  16:58
<smcginnis> tbarron: Sorry, you're my go-to NetApp guy. :)  16:58
*** avishay has quit IRC16:58
tbarronsmcginnis: wazzup?16:58
*** bswartz has quit IRC16:59
smcginnistbarron: rasoto_ was looking for info on snapshot scheduling. Wondering if NetApp does anything special.16:59
*** ozialien has quit IRC17:00
rasoto_tbarron: I can move this to the openstack-netapp channel if you want17:01
tbarronrasoto_: you can certainly set up a DOT snapshot schedule for NetApp backends.17:02
tbarronThis is external to Cinder.17:02
tbarronAn administrator might want to do that.17:03
akerrsmcginnis: I believe we have special extra-specs to define a snapshot policy in Manila, but not currently in Cinder.  But even then it's a backend snapshot and not a Manila-facing snapshot17:03
tbarronMight be a good idea.17:03
vilobhmm11DuncanT : just saw your msg17:03
smcginnisYeah, might be worth thinking through for a bit.17:04
rasoto_tbarron: can a FlexVol have multiple schedules? and can a user choose which one?17:04
tbarronIt provides administrative backup as opposed to cinder backup, which is self-service backup for tenants.17:04
akerrmight be best to move to #openstack-netapp so we don't spam all of cinder about our netapp specific implmentations17:04
rasoto_akerr: sure17:04
tbarronakerr: +117:04
smcginnisxyang has an interesting proposal for snapshot used for backup.17:05
smcginnisI hope we can do something there.17:05
*** Mandell has joined #openstack-cinder17:05
*** bswartz has joined #openstack-cinder17:06
tbarronsmcginnis: yes, that's an interesting proposal.  I kinda like it.  Cinder snapshots are per-volume, NetApp backend snapshots are per-flexvol, so it can be confusing ...17:06
DuncanTAnybody interested in adding scheduled snapshots / scheduled backups? It's a clean use-case AFAICT17:07
smcginnistbarron: Yea. :)17:07
smcginnisDuncanT: Does it belong in cinder is my question.17:07
smcginnisDuncanT: But agree there is a use case for having it.17:07
smcginnisakerr: Thanks for pointing that out? Do you know where in the code that's handled? I'd like to take a look.17:09
smcginnisNot sure why I swapped punctuation on that. :)17:10
DuncanTsmcginnis: Not sure if it belongs or not. Lots of arguments both ways17:11
smcginnisDuncanT: Yeah, that's how I feel.17:11
xyangDuncanT, smcginnis: that is an interesting and valid use case17:11
DuncanTsmcginnis: Saying 'use cron' is not a good argument, since you need somewhere reliable to host the cron17:12
smcginnisDuncanT: I know from experience with other vendors solutions, they usually provide the means for snapping, but not scheduling.17:12
smcginnisDuncanT: At least it's not a very good end user experience.17:12
xyangsmcginnis: backup product usually has scheduling17:12
DuncanTsmcginnis: A time-based scheduling service for openstack might be a valid argument, but seems unlikely to get much traction17:12
smcginnisxyang: Yeah, that's why I'm wondering if it really belongs. Seems kind of like it should be handled by something else like a real backup solution.17:13
smcginnisDuncanT: Agreed.17:13
smcginnisakerr: Thank you!!17:13
xyangsmcginnis: right, it is debatable.  cinder has backup service17:14
*** ndipanov has quit IRC17:15
smcginnisakerr: Do you know, is that an option specific to that driver, or is that common for any driver that chooses to implement it.17:15
smcginnisakerr: Not assuming you are the expert though. :)17:15
akerrsmcginnis: I'm pretty sure its specific to the netapp driver, but I'll let bswartz correct me17:15
xyangakerr: snapshot created by using the policy are not exposed to Manila, correct?17:15
akerrxyang: correct17:15
xyangakerr: thanks17:16
DuncanTIf we add it to cinder, we definitely want it to produce real cinder snapshots, not some hidden thing17:16
smcginnisSo we could conceivably have an extra spec that sets up a snapshot schedule on the array for cinder.17:16
tbarronDuncanT: I can see use cases here, but all of them require getting the backup service to scale better as a prerequisite to fancier stuff.17:16
DuncanTSame behaviour as a cron job would have17:16
smcginnisDuncanT: +117:16
akerrDuncanT: +117:16
smcginnisDuncanT: Any requests for this in your public cloud?17:16
DuncanTtbarron: Improving the speed/throughput of backup or something else?17:17
akerrsmcginnis: I've requested it from Rackspace cloud :)17:17
DuncanTsmcginnis: A few people asked if it is supported, but didn't seem that desparate for it17:17
tbarronDuncanT: that, and probably also not having to load all the volume backends on the backup cinder node17:17
*** Mandell has quit IRC17:17
smcginnisakerr, DuncanT: Thanks17:17
smcginnisbbl, lunch train is leaving. :)17:17
tbarronDuncanT: should the cron-like service be part of Cinder or should it be from some OpenStack automation external to Cinder itself?17:18
DuncanTAny chance of a core who isn't Mike taking a quick look at please, I try not to +A things from my team17:19
DuncanTtbarron: I'm not sure there'll be much buy-in for an openstack-cron-aaS - what are the usecases? Nova instance snapshots maybe? cinder & manilla snaps, cinder backups, anything else?17:20
DuncanTtbarron: It is arguably useful just for those I guess... be a fun thing to design, not too big17:20
tbarronDuncanT: I'd be happy with doing it in cinder, but in a way that it could be pulled out and generalized.17:21
DuncanTWithin the cinder umbrella for sure is a good way to bootstrap... I'd start with a clean git tree, unique endpoint etc from day one if we are going to support more than just cinder though17:22
*** aix has quit IRC17:22
tbarronDuncanT: I suspect there are other use cases that would emerge if there were a general service, but I certainly don't want to be another advocate of a service looking for problems to solve.17:22
*** asselin has joined #openstack-cinder17:23
DuncanTtbarron: If the list of verbs that can be scheduled is extensible, then I'd say there's enough in the above to justify the existance17:23
*** Mandell has joined #openstack-cinder17:24
*** bkopilov has quit IRC17:26
*** Mandell has quit IRC17:26
hemnaDuncanT, isn't the scheduled snapshots accomplished by using Mistral?17:27
hemnascheduling workflows, etc.17:27
hemnanot that I've used mistral17:27
DuncanTNever heard of it!17:27
hemnabut still17:27
*** Mandell has joined #openstack-cinder17:27
DuncanThemna: Looks like a superset of the thing I was suggesting we design. Thanks for the pointer!17:28
DuncanTrasoto_: Are you still about?17:28
*** Mandell has quit IRC17:29
vilobhmm11DuncanT : mistral , taskflow can do the same thing…mistral can be the underlying service using the taskflow lib or something like that17:30
*** Mandell has joined #openstack-cinder17:30
*** bkopilov has joined #openstack-cinder17:30
hemnataskflow isn't for scheduling repeating jobs in the future17:31
hemnaor even 1 time jobs17:31
vilobhmm11DuncanT : tooz can also be used for distributed locking have you ever thought of using it for the problem(A-A) we discussed in the meeting today..17:31
vilobhmm11hemna : ^^17:31
geguileovilobhmm11: I've had a look into Tooz17:32
vilobhmm11ok what are you findings geguileo ?17:32
DuncanTvilobhmm11: The A-A problem has many, many facets, and I really want to avoid jumping into a solution before we have a good grasp of at least most of the problems17:32
geguileovilobhmm11: When looking into solutions for the atomic state changes17:32
hemnaDuncanT, +117:32
DuncanTvilobhmm11: State is far from the only problem17:32
vilobhmm11DuncanT : agree17:32
geguileoDuncanT: But it's one of them  ;)17:32
DuncanTvilobhmm11: We can start with a big, clunky state lock once we fix nova17:33
DuncanTgeguileo: It might be possible to make it not a problem though, by fixing other thigns17:33
vilobhmm11DuncanT : my 2 cents for it17:33
geguileoDuncanT: You sure?17:33
geguileoDuncanT: From what I could gather it is a problem17:33
DuncanTgeguileo: No, I'm not sure, but I know some parts of it can definitely go away17:34
geguileoDuncanT: But you still have the problem with transitions17:34
DuncanTgeguileo: Once those parts have gone away, maybe the number of states and changes is small enough that 'update state=$new where id=$id and state=$state' will be enough17:34
DuncanTJust look at the count returned from that... if it is 1 then you just did a (low cost) atomic check-and-set transition17:35
DuncanTNothing external needed17:35
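[Editor's note] DuncanT's check-and-set transition can be sketched in a few lines. sqlite3 stands in for the real database here, and the `volumes` table/columns are illustrative, not Cinder's actual schema; the point is only that `UPDATE ... WHERE id = ? AND status = ?` plus a rowcount check gives an atomic transition with no external lock service.

```python
# Compare-and-set state transition: the UPDATE only matches if the row is
# still in the expected state, so rowcount tells us whether we "won".
import sqlite3


def try_transition(conn, volume_id, expected, new):
    """Return True iff this caller atomically moved the row expected->new."""
    cur = conn.execute(
        "UPDATE volumes SET status = ? WHERE id = ? AND status = ?",
        (new, volume_id, expected),
    )
    conn.commit()
    return cur.rowcount == 1


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO volumes VALUES ('vol-1', 'available')")

print(try_transition(conn, 'vol-1', 'available', 'deleting'))  # True
print(try_transition(conn, 'vol-1', 'available', 'deleting'))  # False: a racer already moved it
```

A rowcount of 0 is the "somebody else got there first" signal; the caller backs off or reports a conflict instead of proceeding.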
geguileoDuncanT: That plus a retry on deadlock works17:35
geguileoDuncanT: Although a select for update plus retry on deadlocks also works17:35
DuncanT\select for update breaks in galera, and is slower17:36
geguileoDuncanT: It doesn't break in Galera, I have tested it  :)17:36
geguileoDuncanT: It's not supported17:36
*** harlowja_away is now known as harlowja17:36
DuncanTNot supported == broken17:36
geguileoDuncanT: In the sense that it doesn't lock row on all nodes17:36
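[Editor's note] The "retry on deadlock" half of both proposals looks roughly like the decorator below. oslo.db ships a real version of this (`DBDeadlock` and a retry wrapper); this standalone sketch with a stand-in exception just shows the shape of the pattern being discussed.

```python
# Retry-on-deadlock decorator with jittered exponential backoff, so that
# colliding workers de-synchronize instead of deadlocking again in lockstep.
import functools
import random
import time


class DBDeadlock(Exception):
    """Stand-in for oslo.db's DBDeadlock exception."""


def retry_on_deadlock(max_retries=5, base_delay=0.01):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except DBDeadlock:
                    if attempt == max_retries - 1:
                        raise
                    time.sleep(base_delay * (2 ** attempt) * random.random())
        return wrapper
    return decorator


attempts = {'n': 0}


@retry_on_deadlock()
def flaky_update():
    """Simulate a transaction that deadlocks twice before committing."""
    attempts['n'] += 1
    if attempts['n'] < 3:
        raise DBDeadlock()
    return 'committed'


print(flaky_update())  # 'committed', on the third attempt
```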
hemnatooz looks interesting17:36
geguileoTooz doesn't solve the underlying problem17:36
*** deepakcs has quit IRC17:37
hemnafor locks it might work17:37
*** e0ne has joined #openstack-cinder17:37
geguileoDuncanT: It's not broken, you would get the lock in one of the DBs and for the others collisions would be controlled by the retry on deadlock17:37
hemnawould have to play with it though17:37
hemnatooz does locking in a db17:37
geguileohemna: I have played with the 3 solutions17:37
hemnaand has drivers for several db's, it looks like17:37
geguileohemna: Yes, but it doesn't take into account the galera situation17:38
hemnawhich is ?17:38
geguileohemna: It only locks in one DB, not in all the DB nodes17:38
geguileohemna: So you need to watch out for deadlocks17:38
*** bkopilov has quit IRC17:39
*** bkopilov has joined #openstack-cinder17:39
hemnais that really a problem Cinder needs to deal with though17:39
hemnaI really don't think that's a use case that Cinder has to worry about17:40
geguileoI have tested 5 workers doing 100 state transitions (available-deleting-available) simultaneously against an HAProxy, round-robin sending requests to 3 different nodes (each new request is sent to the next server)17:40
hemnaI dunno.17:40
geguileohemna: I think it is17:40
geguileohemna: Otherwise you could have race conditions17:40
hemnafrankly, I think using something like tooz would be far better off than we are today.17:40
geguileohemna: Tooz alone doesn't solve it, I have checked it17:41
hemnatooz will move the locks from local to central17:41
geguileoIt reduces to almost 0 the number of DB deadlocks, but these happen from time to time17:41
vilobhmm11hemna : +1 thats why i brought that up :)17:41
geguileohemna: Yes, and that could be fine for locks on the creation workflow (I still have to confirm)17:42
vilobhmm11but one thing to be cautious is only locking is supported by the tooz db driver and other concepts like group membership etc are not supported17:42
geguileovilobhmm11: I have tested tooz with 3 different backends and it's still not enough to avoid issues17:42
DuncanThemna: It absolutely is a problem cinder has to deal with IMO; as is, deploying against galera requires some haproxy madness to get a working install. We can fix those (quota and something else) but introducing more is madness17:42
vilobhmm11hemna, DuncanT details :
*** cbits has joined #openstack-cinder17:43
geguileoI have done the locking with Redis17:43
geguileoas backend17:43
vilobhmm11geguileo : I would suggest you to please file bug against tooz if you are consistently seeing these issues.17:43
geguileovilobhmm11: What issues?17:43
DuncanTtooz might well be a good solution, but I want to have the problem(s) spelt out very clearly first17:44
vilobhmm11the issues you are discussing about the various tooz drivers you tested against17:44
vilobhmm11DuncanT : +217:44
geguileovilobhmm11: You mean that DB would not assure us a solution with galera configuration?17:44
vilobhmm11agree DuncanT : ^^17:44
*** bkopilov has quit IRC17:44
geguileovilobhmm11: The issues are not specific to the driver, but the Galera cluster environment17:45
geguileovilobhmm11: Where you have replication delays17:45
DuncanTAt the moment there's no clear definition of what the state issue(s) are. My idea of what the state machine work was solving is definitely different to others17:45
*** leeantho has quit IRC17:45
hemnaI'd just hate to see us give up because it doesn't solve ever single <1% use case17:45
*** leeantho has joined #openstack-cinder17:45
geguileohemna: Solution proposed by DuncanT plus deadlock retry solves 100% of cases17:46
geguileohemna: So does select for update + deadlock retry17:46
DuncanTI don't know how well my solution scales17:46
geguileohemna: So does Tooz + any of above 2 solutions17:46
vilobhmm11DuncanT : since i was the one working on the state machine work would like to answer questions if you have any ? But agree with hemna i would also hate to give up on this17:46
DuncanT'select for update' does *not* work with galera. You don't get atomicity17:46
hemnayah, but that doesn't work against every db.  which is why tooz is an attempt at db locking which is db agnostic17:46
geguileoDuncanT: 5 workers fighting for the same row with 100 transitions (available-deleting-available) worked fine17:47
hemnaso, maybe an option is to talk to the tooz dev and ping them about it?17:47
*** markvoelker has joined #openstack-cinder17:47
DuncanTgeguileo: We need to see 500 workers fighting for the same row, or 5x100 workers fighting for the same 5 rows17:47
geguileoDuncanT: You don't get atomicity in all cluster, but when there's a collision you get a Deadlock situation17:47
hemnasee what they think if they even support the galera deployment17:47
vilobhmm11hemna : +1 ;17:47
hemnaI'm not trying to say that tooz is the panacea, but it's worth looking into17:48
geguileohemna: I agree, that's why I've been looking into it  :)17:48
geguileoDuncanT: What do you mean 5x100 workers?17:48
DuncanTDefinitely worth investigating. I want to start by documenting what situation(s) the locks are there to prevent, though17:48
*** leeantho has quit IRC17:49
vilobhmm11hemna : agree…DuncanT : +117:49
DuncanTgeguileo: 100 workers fighting for row 1, 100 fighting for row 2, .... row 517:49
geguileoDuncanT: You mean locks for state changes or the other local locks we currently have?17:49
geguileoDuncanT: I can test that17:50
DuncanTgeguileo: Both17:50
geguileoDuncanT: What numbers would you like to see regarding that test?17:50
*** e0ne is now known as e0ne_17:50
*** erlon has joined #openstack-cinder17:50
DuncanTgeguileo: That test would be great. We should test an order of magnitude bigger than the biggest system we expect, so 100 workers per row, 5 rows should be good17:50
*** Mandell has quit IRC17:51
geguileoDuncanT: I'll do that, but I want to make sure I get the right information so we can review it17:51
*** Longgeek has quit IRC17:51
DuncanTSome missing info on tooz docs, like do we need to clean locks up or is that done on release?17:51
*** markvoelker has quit IRC17:52
geguileoDuncanT: As far as I could see locks don't get cleaned up17:52
geguileoDuncanT: For example if you use a file, the file stays there17:52
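[Editor's note] The cleanup concern geguileo describes can be shown with a plain stdlib file lock, which behaves like tooz's file driver in this respect: releasing the lock does not remove the lock file, so a service that creates one lock per resource name accumulates files over time. This sketch is not tooz code, just the same mechanism.

```python
# File-based lock: acquire/release works, but the file is left behind
# after release -- there is no automatic cleanup step.
import fcntl
import os
import tempfile

lock_path = os.path.join(tempfile.mkdtemp(), 'volume-vol1.lock')

fd = os.open(lock_path, os.O_CREAT | os.O_RDWR)
fcntl.flock(fd, fcntl.LOCK_EX)      # acquire exclusive lock
# ... critical section would go here ...
fcntl.flock(fd, fcntl.LOCK_UN)      # release the lock
os.close(fd)

print(os.path.exists(lock_path))    # True: lock file survives release
```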
DuncanTgeguileo: First thing is does it fall over in a heap, second thing is how long does lock acquire take (min, max, mean, stddev)17:52
DuncanTgeguileo: There isn't an API to clean them up AFAICT.... that is bad17:53
hemnaon a heavily loaded DB no less17:53
hemnathe time when you least want locks to fail, but are most likely to fail, is under heavy load17:53
DuncanTthird thing is how much do other db accesses slow down while the test is running17:53
geguileohemna: Mmmm, my tests are with no load on the DB since I deployed the HAProxy + galera proxy just for this test17:53
*** Yogi11 has quit IRC17:53
*** Apoorva has quit IRC17:54
geguileoWhat kind of additional load would you suggest?17:54
DuncanTgeguileo: Can you write a really simple load generator to run at the same time? Just tight loop some selects and inserts in an unrelated table17:54
DuncanTSay 10 selects and 5 updates a second17:54
DuncanTOf an arbitary table17:54
geguileoDuncanT: Cool17:55
geguileoDuncanT: I can do that  :)17:55
geguileoDuncanT: How many workers of that load?17:55
DuncanTgeguileo: I dunno, 50 to start with? That represents a fairly hefty load for cinder, but might be realistic for a shared openstack db17:56
*** Apoorva has joined #openstack-cinder17:56
geguileoDuncanT: Ok17:56
DuncanTgeguileo: If you can put the test script(s) on github or something then other people can examine them if they want to17:56
*** annashen has quit IRC17:57
geguileoDuncanT: I'll start modifying my environment and code to accommodate those 3 changes (number of workers, numbers to report, load generator)17:57
rasoto_DuncanT: sorry, stepped away from the computer, I am here now17:58
*** timcl1 has joined #openstack-cinder17:58
geguileoDuncanT: Ok, once I've made the changes I'll upload the code so others can review it with the results17:58
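[Editor's note] The background load generator DuncanT specifies ("10 selects and 5 updates a second" against an unrelated table) is small enough to sketch. sqlite3 stands in for the Galera cluster so the shape is runnable; the `noise` table and the once-per-second batching are assumptions, not the test code geguileo later published.

```python
# Tight-loop load generator: issue a fixed mix of selects/updates each
# second against an arbitrary table, padding each batch out to ~1 second.
import sqlite3
import time


def load_loop(conn, seconds, selects_per_sec=10, updates_per_sec=5):
    """Run for `seconds` whole-second batches; return (selects, updates) issued."""
    selects = updates = 0
    for _ in range(seconds):
        start = time.time()
        for _ in range(selects_per_sec):
            conn.execute("SELECT val FROM noise WHERE id = 1").fetchone()
            selects += 1
        for _ in range(updates_per_sec):
            conn.execute("UPDATE noise SET val = val + 1 WHERE id = 1")
            updates += 1
        conn.commit()
        # crude pacing: sleep whatever is left of this one-second batch
        time.sleep(max(0.0, 1.0 - (time.time() - start)))
    return selects, updates


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE noise (id INTEGER PRIMARY KEY, val INTEGER)")
conn.execute("INSERT INTO noise VALUES (1, 0)")
print(load_loop(conn, 1))  # (10, 5)
```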
harlowjahemna tooz dev here :-P17:58
* harlowja is tooz dev, haha17:58
*** openstackstatus has quit IRC17:58
harlowjau can't escape me!17:58
*** openstackstatus has joined #openstack-cinder17:59
*** ChanServ sets mode: +v openstackstatus17:59
*** fanyaohong has quit IRC17:59
DuncanTharlowja: Any feedback on lock cleanup?17:59
harlowjawhat means :-P18:00
*** timcl has quit IRC18:00
DuncanTharlowja: If I create a lock called '1'. lock it, unlock it, create a lock called '2', etc, for a billion locks, does that leave a billion db rows/files/etc lying around?18:00
hemnaharlowja, Tooz!18:01
hemnaheh awesome.18:01
harlowjau can run but u can't hide18:01
harlowjathe josh monster eat u18:01
harlowjaDuncanT so this is assuming a DB lock?18:01
harlowjathe tooz one is just using inbuilt mysql/postgres locks, not its own table18:01
harlowja*at least currently18:01
harlowjabb, meeting18:02
-openstackstatus- NOTICE: Gerrit has stopped emitting events so Zuul is not alerted to changes. We will restart Gerrit shortly to correct the problem.18:02
*** ChanServ changes topic to "Gerrit has stopped emitting events so Zuul is not alerted to changes. We will restart Gerrit shortly to correct the problem."18:02
*** ozialien has joined #openstack-cinder18:02
geguileoYes, it's using SELECT GET_LOCK18:02
*** annashen has joined #openstack-cinder18:02
geguileoThat's why it's not "Galera safe"18:02
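[Editor's note] Why `SELECT GET_LOCK()` is not "Galera safe": MySQL named locks live in a single server's memory and are not replicated, so each Galera node hands out its own copy of the "same" lock. This toy model (plain Python, no MySQL) just illustrates that failure mode.

```python
# Two simulated MySQL nodes behind a load balancer, each with its own
# in-memory named-lock table, as GET_LOCK actually behaves per server.
class Node:
    """One server's named-lock namespace."""
    def __init__(self):
        self.locks = set()

    def get_lock(self, name):
        """Models SELECT GET_LOCK(name, 0): 1 on success, 0 if held here."""
        if name in self.locks:
            return 0
        self.locks.add(name)
        return 1


node_a, node_b = Node(), Node()

# Two workers load-balanced to different nodes both "win" the same lock:
print(node_a.get_lock('vol-1'))  # 1
print(node_b.get_lock('vol-1'))  # 1 -- no cluster-wide mutual exclusion
```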
DuncanTAh, ok, thanks18:03
geguileoRedis is probably the best backend18:03
*** cbits has quit IRC18:03
*** patrickeast has joined #openstack-cinder18:03
*** annegentle has joined #openstack-cinder18:04
DuncanTWe can't expect people to deploy redis though :-(18:04
geguileoThat's what Ceilometer is using18:04
geguileoTooz + Redis18:04
DuncanTI've got to head off now, sorry, cab home just turned up18:04
geguileoDuncanT: See ya18:05
*** e0ne_ is now known as e0ne18:05
DuncanTLots of people aren't interested in ceiliometer18:05
smcginnisrasoto_: I think the ping was to point out the mistral project.18:07
*** bswartz has quit IRC18:07
smcginnisrasoto_: Might be an answer to scheduling snapshots.18:07
smcginnisJust catching up after getting back from lunch though...18:07
hemnais gerrit down ?18:07
hemnahehe...I guess I can't read.  nm18:07
vilobhmm11got notification18:07
rasoto_smcginnis: ok, ty :) I will take a look a it18:08
vilobhmm11ChanServ has changed the topic to: Gerrit has stopped emitting events so Zuul is not alerted to changes. We will restart Gerrit shortly to correct the problem.18:08
*** Mandell has joined #openstack-cinder18:13
*** dosaboy has quit IRC18:13
*** dosaboy has joined #openstack-cinder18:14
*** dannywilson has joined #openstack-cinder18:15
*** dosaboy has quit IRC18:20
*** dosaboy has joined #openstack-cinder18:21
*** EmilienM|afk is now known as EmilienM18:22
*** wormwood has joined #openstack-cinder18:22
*** dosaboy has quit IRC18:22
*** dosaboy has joined #openstack-cinder18:24
wormwoodgreetings all, I'm writing a simple script that uses the cinderclient lib and was wondering how I can force a new volume object to reload its stale data. Horizon is showing me that it's attached to the instance, but new_volume.status still shows it as available18:24
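[Editor's note] wormwood's question goes unanswered in the log; the usual pattern is that cinderclient objects are snapshots of one API response, so you re-fetch with `client.volumes.get(id)` rather than re-reading the old object. The polling helper below is generic; the cinderclient usage in the comment is a sketch, not quoted from the channel.

```python
# Poll a fetch function until the returned object reaches a wanted status.
import time


def wait_for_status(fetch, wanted, timeout=60, interval=2):
    """Call fetch() until .status is in `wanted`, or raise after `timeout`s."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        obj = fetch()
        if obj.status in wanted:
            return obj
        time.sleep(interval)
    raise RuntimeError('timed out waiting for status in %s' % (wanted,))


# Usage against a real client (not executed here):
#   vol = wait_for_status(lambda: client.volumes.get(new_volume.id),
#                         ('in-use',))
```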
*** dosaboy has quit IRC18:24
*** dosaboy has joined #openstack-cinder18:24
*** ChanServ changes topic to "The OpenStack Block Storage Project Cinder | The New Kids On the Block |"18:24
-openstackstatus- NOTICE: Gerrit has been restarted. New patches, approvals, and rechecks between 17:30 and 18:20 UTC may have been missed by Zuul and will need rechecks or new approvals added.18:24
*** bswartz has joined #openstack-cinder18:28
*** e0ne has quit IRC18:30
patrickeastasselin: hey, i wanted to follow up with you about having multiple nic’s on nodepool nodes for iscsi networks18:31
patrickeastasselin: what issues did you run into with it?18:31
*** angela-s has joined #openstack-cinder18:31
*** winston-d_ has quit IRC18:34
*** thingee has joined #openstack-cinder18:34
*** tshefi has quit IRC18:34
*** annegentle has quit IRC18:35
*** annegentle has joined #openstack-cinder18:35
*** eharney has quit IRC18:36
*** Yogi1 has joined #openstack-cinder18:39
smcginnisJenkins appears to be chugging along again.18:40
*** bswartz has quit IRC18:44
*** bswartz has joined #openstack-cinder18:44
*** Mandell has quit IRC18:45
*** rushil has joined #openstack-cinder18:46
harlowjaDuncanT geguileo is correct, without tooz having a db migration/schema (and its own db...) the other drivers are better suited for distributed locking18:46
harlowjahaving its own db in a library is not exactly common (taskflow does it, but it's not super-optimal there either)18:47
asselinpatrickeast, hi, the issue is on the openstack side. how to configure it18:51
asselinpatrickeast, provider side18:51
patrickeastasselin: gotcha18:51
asselinwhat I was trying to do (and failed) was to have a local.conf devstack able to preconfigure it to 'just work'18:52
*** ho has joined #openstack-cinder18:53
patrickeastasselin: so we ended up doing 3 flat networks, public_mgmt, public_iscsi1, public_iscsi2, each with a bridge to a physical adapter on an initiator and a range of ip’s to use18:53
patrickeastasselin: then a private_mgmt network which uses floating ip’s for internet access (from the public_mgmt net)18:54
patrickeastasselin: and we directly attach nodes to the public_iscsi ones18:54
patrickeastwhere we let neutron do dhcp18:54
asselinpatrickeast, do you have scripts to setup? or manual?18:55
patrickeastasselin: for the provider setup its manual (we have a small number of big hosts), but once the networking is set up all we did was add them to the nodepool image setup as additonal networks and everything just worked18:55
asselinpatrickeast, we're still using nova-network. I tried neutron but gave up there too....18:56
rhe00patrickeast: that's the set up I have as well18:56
*** Mandell has joined #openstack-cinder18:57
asselinpatrickeast, I'm not so familar with networking so the manual setup has been a struggle :/18:57
patrickeastheh yea, we used RDO to setup our nodes and i was excited thinking the networking would be taken care of18:58
patrickeastthe first step on our wiki of how to set up another provider is ‘delete all the existing network configuration'18:58
patrickeasttook us a while to figure out how to make it work18:58
asselinpatrickeast, would be great if there was a devstack plugin to automate it....19:00
asselinbtw, we're using devstack to setup the providers19:00
*** bswartz has quit IRC19:00
asselinI figured maybe when neutron become default in devstack we can switch and go from there19:01
patrickeastah yea, thats a good idea19:01
patrickeasti assume you can do this same setup with nova-net, but i don’t know how19:01
asselinmy original one worked on nova-net19:01
asselinback in icehouse19:02
*** bswartz has joined #openstack-cinder19:03
patrickeastheres some copy-pasta from our internal wiki that describes how to set it up
*** crose has quit IRC19:03
* asselin looks19:03
patrickeastformatting didn’t carry over well, but it shows the gist of how the networking got initialized19:03
*** annashen has quit IRC19:04
*** eharney has joined #openstack-cinder19:04
* patrickeast notices a few typos in the configs19:05
asselinpatrickeast, this is really great. thank you for sharing19:07
openstackgerritJay Bryant proposed openstack/os-brick: Use oslo_log instead of openstack.common.log
jgriffithpatrickeast: asselin if you're ever interested... might be useful:
asselinpatrickeast, when I get the courage to try again, I will follow your tips :)19:07
patrickeastasselin: np, figure others can benefit from me and dannywilson banging our heads on the wall for a week or so19:07
jgriffithbut if you're forging bravely into neutron you're golden :)19:07
patrickeast:o nova network looks so much easier to use19:08
jgriffithI don't use DHCP on the second nic because it got kinda wonky in my ansible stuff19:08
jgriffithpatrickeast: yeah... my big complaint about neutron19:08
asselinjgriffith, thanks will look at that too19:09
jgriffithpatrickeast: it's just "harder"19:09
jgriffithasselin: you bet19:09
* dannywilson remembers crying into my beer trying to get those steps finalized. :)19:09
openstackgerritJay Bryant proposed openstack/os-brick: Use oslo_log instead of openstack.common.log
*** Yogi1 has quit IRC19:17
*** jaypipes has quit IRC19:18
*** jaypipes has joined #openstack-cinder19:21
*** jwcroppe has quit IRC19:21
*** jwcroppe has joined #openstack-cinder19:22
*** sgotliv has joined #openstack-cinder19:24
*** ronis has quit IRC19:26
*** jwcroppe has quit IRC19:26
*** xyang has quit IRC19:27
*** david-lyle has joined #openstack-cinder19:30
*** winston-d_ has joined #openstack-cinder19:30
*** e0ne has joined #openstack-cinder19:30
*** Mandell has quit IRC19:35
*** winston-d_ has quit IRC19:35
*** Mandell has joined #openstack-cinder19:38
openstackgerritxing-yang proposed openstack/cinder: Fixed issue with mismatched config in VMAX driver
*** e0ne has quit IRC19:41
*** timcl1 has quit IRC19:42
*** markvoelker has joined #openstack-cinder19:50
*** ozialien has quit IRC19:50
*** kmartin has quit IRC19:51
*** jwcroppe has joined #openstack-cinder19:51
*** Rockyg has joined #openstack-cinder19:51
*** rushil has quit IRC19:52
*** jwcroppe has quit IRC19:53
*** bkopilov has joined #openstack-cinder19:54
*** markvoelker has quit IRC19:55
*** sgotliv has quit IRC19:56
thingeemtreinish: so volume boot pattern issue still exists. I guess we have to skip?19:59
thingeemtreinish: just been having people with third party ci asking19:59
*** dustins has quit IRC20:00
mtreinishthingee: it works fine in the gate, and ceph fixed the bug they were having with it20:00
*** Yogi1 has joined #openstack-cinder20:00
mtreinishI still contend people are hitting issues with their ci configs or backends20:00
mtreinishbecause it's the first test many of them have run which actually logs into a guest and does any verification of the volume20:01
thingeeyeah I'm even hitting issues. was working fine since april 7ish20:02
thingeemtreinish: ^20:02
mtreinishthingee: hmm, I don't think we merged anything new then which would have effected that test20:03
mtreinish(well except for a temporary ceph skip which the revert is up for)20:03
*** nkrinner has joined #openstack-cinder20:03
thingeemriedem: I thought when we spoke at the cinder meeting there was a general issue with volumebootpattern, not just ceph?20:04
*** annashen has joined #openstack-cinder20:04
*** Yogi1 has quit IRC20:05
mtreinishthingee: so the only thing I've seen about that was:
mtreinishwhich jgriffith had some issue with, which is why I've held off on it20:05
mtreinishthingee: oh and gluster complained that it's impossible for the test to work with them20:06
*** rushiagr is now known as rushiagr_away20:06
mtreinishbut that's different :)20:06
mriedemthingee: there is, with glusterfs and gpfs also20:06
mriedemthingee: mtreinish:
mriedemwe also need this to stop running the ceph gate job on stable
mriedemsince it won't work on stable icehouse/juno20:07
*** ronis has joined #openstack-cinder20:07
mriedemmtreinish: you want to pull some infra strings there? :)20:07
mtreinishwell that regex is wrong20:07
mtreinishoh the stable thing sure we can poke people on that20:07
*** jwcroppe has joined #openstack-cinder20:07
*** annashen has quit IRC20:08
*** jwcroppe has quit IRC20:08
mriedemyeah i already took a karma hit yesterday for an infra review request20:08
*** annashen has joined #openstack-cinder20:08
mriedemsave throw failed, etc20:08
*** dustins has joined #openstack-cinder20:09
*** ronis has quit IRC20:11
mtreinishthingee: if that patch (171569) I linked before seems valid to you feel free to leave your feedback on it20:12
mtreinishI was just defering to jgriffith20:12
*** timcl has joined #openstack-cinder20:14
thingeemtreinish, mriedem
thingeeI fail both volumebootpattern and volumebootpatternv220:16
mtreinishthingee: oh fun failure to schedule the server boot20:16
*** rushil has joined #openstack-cinder20:16
thingeemtreinish: bb sorry20:16
mtreinishthingee: is there a log file index I can look at?20:16
mtreinishI need to see the nova logs to figure out why it didn't think it could boot the server20:16
thingeemtreinish: I can upload it when I get back, sorry.20:17
thingeemtreinish: and thank you20:17
mtreinishthingee: no worries, thanks20:18
*** rushiagr_away is now known as rushiagr20:24
*** kmartin has joined #openstack-cinder20:25
*** Yogi1 has joined #openstack-cinder20:26
rmstarhi guys.   just a quick question about cinder create...when it is creating the volume from an image, is there a way to see the percentage / progress?  i.e. how far along is the "download"?20:28
*** bswartz has quit IRC20:28
hemnajungleboyj, ping20:31
ameadeHey folks, I have two really simple driver patches i want to speed up merging. Been up since Feb20:31
ameadeif anyone has a chance, should be an easy review but the churn is starting to get in the way WRT rebases20:32
ameadethanks in advance20:32
jungleboyjhemna: What up?20:33
hemnaquestion for you in there I think20:33
jungleboyjhemna: :-)  Thank you.20:35
jungleboyjSwamped today.20:35
hemnaah crap sorry20:35
hemnaI'm trying to get theses brick patches done20:35
jungleboyjhemna: Don't apologize.20:35
jgriffithrmstar: nope20:36
jgriffithmtreinish: hey... so I don't necessarily have an "issue" with it per se20:36
rmstarthanks jgriffith20:36
jgriffithmtreinish: It's procedurally correct I think20:36
jgriffithmtreinish: I was just bothered by the fact that something else seems to "handle" it for the majority of the drivers20:37
jgriffithmtreinish: LVM, SolidFire etc20:37
jgriffithmtreinish: and now with the jbernard 's fix, RBD20:37
jgriffithmtreinish: that being said, I don't see any reason NOT to merge it either.20:37
jungleboyjhemna: Updated.20:38
vilobhmm11jungelboyj : thanks20:39
rmstarjgriffith: just out of curiosity.  Can anyone just register a wishlist? or do i have to go through you guys?20:40
vilobhmm11jungelboyj : out of curiosity, so should every service start using oslo_log than using the logging module ?20:41
jgriffithrmstar: You are more than welcome to register one20:41
vilobhmm11jungleboyj : ^^20:43
mtreinishjgriffith: ok fair enough, yeah I couldnt figure out why it seems to work for most cases either20:45
mtreinishjgriffith: I think I was probably confusing that discussion with the one around the gluster stuff20:45
mtreinishthey did happen at the same time...20:45
jungleboyjvilobhmm11: Yes, all everything outside of oslo-incubator should be using oslo_log .20:45
jgriffithmtreinish: yeah, that was the one I objected to skipping the test :)20:46
jgriffithmtreinish: which given another 24 hours got fixed in the driver as it should have :)20:46
hemnaxyang1, ping20:47
*** rasoto_ has left #openstack-cinder20:47
mtreinishyeah, we definitely agreed on that :)20:47
*** dustins has quit IRC20:48
anteayathingee: are you about?20:53
anteayathingee: I'm reading the logs from last week's cinder meeting regarding ceph and want to hear from you what your current understanding is20:53
anteayaI want to make sure I understand what you are thinking20:54
openstackgerritJay Bryant proposed openstack/os-brick: Sync latest _i18n module for os_brick
jungleboyjCaptain Oslo .... away!20:54
hemnastrikes again!20:55
*** rushil has quit IRC20:57
*** jungleboyj has quit IRC21:04
*** Yogi1 has quit IRC21:05
*** akerr has quit IRC21:05
*** Mandell has quit IRC21:06
*** Mandell has joined #openstack-cinder21:08
*** jwcroppe has joined #openstack-cinder21:10
*** jwcroppe has quit IRC21:10
*** Lee1092 has quit IRC21:10
*** esker has quit IRC21:12
*** esker has joined #openstack-cinder21:15
*** jkraj has quit IRC21:16
*** rushiagr is now known as rushiagr_away21:24
xyang1hemna: Hi21:24
openstackLaunchpad bug 1382440 in OpenStack Compute (nova) "Detaching multipath volume doesn't work properly when using different targets with same portal for each multipath device" [High,Fix committed] - Assigned to Hiroyuki Eguchi (h-eguchi)21:26
hemnado you remember that one ?21:26
xyang1Ya, is it a backport?21:27
xyang1I have to fix it for vnx multipath to work21:27
xyang1What is the question?21:28
*** nkrinner has quit IRC21:28
xyang1 That fix is also in brick21:29
*** lpetrut has quit IRC21:29
xyang1hemna: I'll have to take a closer look later.21:32
thingeemtreinish: hey back21:33
hemnaxyang1, so I'm looking through the libvirt volume code for iSCSI21:33
thingeemtreinish: I need to disappear again21:33
hemnaand trying to reconcile the differences with os-brick21:33
thingeeanteaya: I'll get back to you. have a meeting.21:33
xyang1hemna: If they only test with LVM, that will break other drivers21:33
anteayathingee: thanks21:33
*** jwcroppe has joined #openstack-cinder21:36
thingeemtreinish: back21:41
thingeemtreinish: let me know if you need anymore information.21:41
thingeeanteaya: just continuing the convo in #openstack-infra21:42
anteayathingee: yup thanks21:42
anteayathingee: also if you can look at this thread:
anteayathingee: do you need anything from infra at this point? I'm not sure who this person is and how much time you have already spent on/with them21:43
*** Mandell has quit IRC21:43
mtreinishthingee: so cinder is returning 500 during test volume boot pattern, check the n-cpu log around: 2015-04-15 20:01:50.79721:45
mtreinishthingee: I'll check the cinder logs next to see why21:45
*** dims_ has joined #openstack-cinder21:46
*** jamielennox|away is now known as jamielennox21:46
mtreinishthingee: oh actually there were failures before that in the log21:46
*** Rockyg has quit IRC21:46
mtreinish(for that test)21:46
mtreinishthingee: like Stderr: u'iscsiadm: No session found.\n'21:46
thingeemtreinish: right21:47
mtreinishyeah that looks like the cause of the failure to boot (which results in the tempest error)21:47
thingeeanteaya: I don't think we've discussed this case yet. We wrote this for people that are trying to test a driver not in master yet
*** annegentle has quit IRC21:47
thingeemtreinish: yea21:48
*** dims has quit IRC21:48
thingeemtreinish: so I'm not sure what changed there21:49
anteayathingee: thanks I didn't know about that portion of the wikipage21:49
mtreinishthingee: yeah dunno. My gut feeling is something happened to the the tgtd config or something21:51
mtreinishbut I don't think we've made devstack changes or anything there lately21:52
*** annegentle has joined #openstack-cinder21:52
hemnathingee, when you are free21:52
*** vilobhmm11 has quit IRC21:53
mtreinishthingee: I doubt you guys would have landed a cinder change to break that considering the point in the cycle21:53
thingeemtreinish: yeah21:53
thingeeok, I'll do a little more looking and let you know21:53
*** markvoelker has joined #openstack-cinder21:53
*** nikesh has quit IRC21:54
thingeebummer I really thought this was related to the other issue ceph was hitting and was hoping it would disappear21:54
thingeeI've had the ci disabled since then21:54
xyang1hemna: Hi, the fix is probably fine.  It preserved my change for the case when each iqn is associated with a portal21:55
asselinthingee, which issue?21:55
hemnaxyang1, so I have 2 bugs that have been 'fixed' in nova's libvirt volume drivers that aren't in cinder's brick (and obviously os-brick)21:55
hemnaI'm wondering if I should file new bugs against os-brick for this ?21:55
openstackLaunchpad bug 1367189 in Cinder "multipath not working with Storwize backend if CHAP enabled" [High,Fix released] - Assigned to TaoBai (baitao2020)21:55
openstackLaunchpad bug 1382440 in OpenStack Compute (nova) "Detaching multipath volume doesn't work properly when using different targets with same portal for each multipath device" [High,Fix committed] - Assigned to Hiroyuki Eguchi (h-eguchi)21:56
hemnaneither of those made it into cinder/brick21:56
hemnaI realized it after I started working on a nova patch to use os-brick21:56
xyang1hemna: ok, sure, need a bug for them21:56
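Bug 1382440 above is about detach tearing down multipath devices by portal alone; when different targets share one portal, that also removes devices that belong to other volumes. A minimal sketch of the idea (hypothetical data model, not the actual Nova/brick code) is to select devices by the (portal, iqn) pair rather than the portal alone:

```python
# Sketch only: the device records here are a made-up model for
# illustration, not Nova's real bookkeeping. The point of the fix is
# that the detach selection must include the iqn, because a portal can
# serve several distinct targets at once.
def devices_to_detach(devices, portal, iqn):
    """Return paths of devices belonging to exactly this (portal, iqn)."""
    return [d['path'] for d in devices
            if d['portal'] == portal and d['iqn'] == iqn]

devices = [
    {'portal': '10.0.2.15:3260',
     'iqn': 'iqn.2010-10.org.openstack:volume-a', 'path': '/dev/sdb'},
    {'portal': '10.0.2.15:3260',
     'iqn': 'iqn.2010-10.org.openstack:volume-b', 'path': '/dev/sdc'},
]

# Keying on the portal alone would return both paths; keying on the
# pair returns only the device for the volume actually being detached.
print(devices_to_detach(devices, '10.0.2.15:3260',
                        'iqn.2010-10.org.openstack:volume-a'))
```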
thingeeasselin: I started noticing all the volumebootpatterns failing for me around the 7th
* asselin looks21:57
*** leeantho has joined #openstack-cinder21:57
asselinthingee,  consistently or intermittently?21:57
mtreinishthingee: digging more through the logs it looks like cinder isn't having any issue talking to that target like 2 sec before nova fails21:57
asselinthingee, we're seeing that issue recently intermittently21:58
asselin"Details: {u'code': 400, u'message': u'Security group is still in use'21:58
hemnaxyang1, it doesn't look like os-brick is a project in launchpad21:58
thingeeasselin: consistent...I've tried by hand a few times21:58
*** markvoelker has quit IRC21:58
thingeeasselin: can do another couple of runs to verify21:58
hemnaanteaya, ping21:59
*** Mandell has joined #openstack-cinder21:59
*** primechuck has quit IRC21:59
anteayahello hemna22:00
anteayahow can I help?22:00
hemnaso I'm not sure if you are the right person to ask22:00
anteayalet's find out22:00
hemnabut it seems that the os-brick (Subproject of cinder) isn't an official launchpad project22:01
hemnaso I can't assign bugs to it22:01
hemnapython-cinderclient is a project in launchpad22:01
thingeehemna: hi22:01
hemnaI think os-brick should be a project in launchpad, so we can assign bugs to it22:01
hemnaI'm not sure how that happens22:01
anteayatake a look at that part of the infra manual22:01
hemnaok thanks22:02
anteayaif you are still unhappy ping again22:02
hemnathingee, do you agree ?22:02
*** esker has quit IRC22:02
thingeehemna: yup no problem with that22:02
hemna(I have 2 bugs that I need to append to mark them as bugs in os-brick)22:02
hemnathe 2 I mentioned above22:02
*** annegentle has quit IRC22:03
anteayayou see that cinder has a group already:
anteayado you want os-brick to have its own bugs page or should they be folded into cinder bugs?22:03
anteayamost folks reporting a bug won't be able to discern the difference and will put them in the wrong place anyhow22:04
jgriffiththingee: what.. you don't like my ci stuff any more :)22:04
*** patrickeast has quit IRC22:04
hemnaI think it should be the same as however python-cinderclient works22:04
anteayalooks like they have their own group22:05
thingeejgriffith: no, this was just something I forgot to do after discussion at the midcycle meetup.22:07
thingeejgriffith: if you don't remember, I can probably find the notes from the discussion. You were there.22:07
jgriffiththingee: I'm sure you could, and that's really not necessary at all22:08
thingeejgriffith: but I didn't mean for it to come across like that.22:08
jgriffiththingee: I really don't care, so we're even :)22:08
thingeejgriffith: ok...22:08
thingeejgriffith: by the way have you seen this volumeboot pattern issue?
anteayahemna: you are on your way then, I have to step afk for a bit22:09
hemnaok thanks, I'm walking through creating it22:09
anteayaokay great22:09
thingeehemna: thanks22:10
jgriffiththingee: nope, do you have the rest of the logs?22:10
thingeejgriffith: don't worry about it, I'll be diving into it when I get a chance22:11
hemnaDoesn't look like it's grouped under OpenStack though fwiw22:11
hemnaand I can't edit it now, since I'm not the mx of it :P22:12
jgriffiththingee: your iscsi connection failed22:13
jgriffiththingee: look in n-cpu.log; 2015-04-15 19:58:49.97122:13
thingeejgriffith: yeah so as mtreinish mentioned, the connection is known to cinder one minute and then fails with nova later.22:13
jgriffiththingee: ?22:14
mtreinishjgriffith: oh, that time stamp is around the same time when cinder was doing iscsi "things"22:15
mtreinishthe tb I noticed was right below that at like 20:01 iirc22:15
jgriffithmtreinish: it's when nova tries to make the iscsi connection22:15
asselinthingee, we see that issue in icehouse too22:16
jgriffithmtreinish: but there's no target exported from Datera22:16
thingeeasselin, jgriffith: whew..ok..well I just started noticing this around april 7th22:16
asselinwhich client was released april 7th?22:17
thingeesame storage backend I've been using all the other times my ci has been successful.22:17
hemnaok both of those bugs have been added to os-brick22:17
jgriffithwhy do you guys think it's related to a client?22:18
asselinjgriffith, b/c client releases have a reputation of breaking the world22:19
jgriffithasselin: well... seems like a stretch to say "client release" broke iscsiadm session22:20
jgriffithasselin: but could be22:20
jgriffithI dunno22:20
*** annegentle has joined #openstack-cinder22:20
*** patrickeast has joined #openstack-cinder22:21
asselinjgriffith, you're right...I should say some external dependency.22:21
thingeejgriffith: I haven't blamed anyone. Just been trying to figure out why things suddenly stopped working.22:21
jgriffiththingee: well.. not sure. Can't trace it too easily with your iqn change22:22
jgriffiththingee: is there a direct way to tie in the volume-id with that now?22:22
*** julim has quit IRC22:22
jgriffiththingee: ahh... I get it22:22
jgriffithfirst segment22:22
*** annegentle has quit IRC22:24
*** _cjones_ has quit IRC22:24
jgriffiththingee: asselin you guys using nova-net or neutron in your deployments?22:25
asselinjgriffith, neutron22:25
thingeenova net22:25
jgriffithhehe.. that eliminates those two :)22:25
jgriffithso I see the init connection in the manager22:26
jgriffiththingee: but...22:27
jgriffiththingee: you vomit later when they try the detach22:27
thingeejgriffith: yeah I might just try pausing things in the detach part and verify things myself.22:29
jgriffiththingee: well...22:29
jgriffiththingee: the problem is that the open iscsi session query is what's failing22:30
jgriffiththingee: so it never actually opened a session and attached22:30
*** annashen has quit IRC22:30
jgriffiththingee: in other words, that response to the detach isn't surprising IMHO22:30
jgriffiththingee: the trick is, why nova couldn't get an iscsi session to the iqn you provided it22:30
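The failing step jgriffith describes (the open-session query, before any attach happens) can be sketched as follows. Assuming the usual `iscsiadm -m session` output format (`tcp: [sid] portal,tpgt iqn`), deciding whether a session exists for a target reduces to parsing those lines; `iscsiadm: No session found.` on stderr means the list is empty, so the later detach error is just a consequence:

```python
# Sketch (assumed output format, not the Nova/brick implementation):
# check whether an open iSCSI session exists for a given target iqn by
# parsing `iscsiadm -m session` output lines such as
#   tcp: [1] 10.0.2.15:3260,1 iqn.2010-10.org.openstack:volume-1234
# possibly followed by a driver tag like "(non-flash)".
def session_exists(iscsiadm_output, iqn):
    for line in iscsiadm_output.splitlines():
        fields = line.split()
        # fields: transport, [sid], portal,tpgt, iqn, [driver]
        if len(fields) >= 4 and fields[3] == iqn:
            return True
    return False

out = ("tcp: [1] 10.0.2.15:3260,1 iqn.2010-10.org.openstack:volume-1234\n"
       "tcp: [2] 10.0.2.16:3260,1 iqn.2010-10.org.openstack:volume-5678 (non-flash)\n")
print(session_exists(out, 'iqn.2010-10.org.openstack:volume-1234'))  # True
```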
jgriffithasselin: curious what your failure looks like?22:31
hemnaanish, ping22:31
jgriffithasselin: is it the same iscsi session connect failure?22:31
*** annashen has joined #openstack-cinder22:31
asselinjgriffith, no...same error message, but seems to be caused for a different reason22:32
jgriffithasselin: meh... tempest dump is useful; but what's in nova-compute log for the attach?22:32
jgriffithasselin: so it's tricky when comparing tempest output because of the cascade22:33
anishhemna: sup22:33
hemnaanish, heyas22:33
asselinjgriffith, yeah...realize that "Security group is still in use" is just likely cleanup failing...22:34
hemnaanish, so I'm trying to work on getting os-brick ready for Nova22:34
anishtransport changes ?22:34
hemnaanish, and I noticed 3 issues that would prevent nova from using it currently.22:34
hemnaanish, one of them is an issue you had fixed in libvirtISCSIVolumeDriver, but not in cinder/brick or os-brick22:34
openstackLaunchpad bug 1370226 in os-brick "LibvirtISCSIVolumeDriver cannot find volumes that include pci-* in the /dev/disk/by-path device" [Undecided,New]22:34
hemnaanish, any chance you can take a look at that, since you are most familiar with it.22:35
asselinjgriffith, so in our icehouse case..seem to be related to:
hemnaanish, that would be awesome.22:35
hemnaanish, I can try but I'm not as familiar22:35
anishno worries, I was planning to take a look at it anyways22:35
anishthere's also the iser component with the same issue22:35
hemnaanish, ok great22:35
hemnaanish, can I assign the os-brick portion to you then ?22:36
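Bug 1370226 above can be illustrated with a sketch (assumed name formats, not the actual os-brick fix): on some platforms the /dev/disk/by-path entry carries a leading `pci-<address>-` segment, so a lookup that matches only the bare `ip-...` name misses the device. Tolerating the optional prefix fixes the lookup:

```python
import re

# Illustrative sketch only: match a /dev/disk/by-path iSCSI entry for a
# given portal/iqn/lun, tolerating an optional "pci-<addr>-" prefix
# (e.g. pci-0000:00:04.0-) that some platforms prepend.
BY_PATH_TEMPLATE = (
    r'^(?:pci-[0-9a-fA-F:.]+-)?'   # optional PCI root segment
    r'ip-{portal}-iscsi-{iqn}-lun-{lun}$'
)

def find_by_path_entry(entries, portal, iqn, lun):
    """Return the first by-path name matching the target, or None."""
    pattern = re.compile(BY_PATH_TEMPLATE.format(
        portal=re.escape(portal), iqn=re.escape(iqn), lun=lun))
    for name in entries:
        if pattern.match(name):
            return name
    return None

entries = [
    'ip-10.0.2.15:3260-iscsi-iqn.2010-10.org.openstack:volume-1234-lun-1',
    'pci-0000:00:04.0-ip-10.0.2.15:3260-iscsi-'
    'iqn.2010-10.org.openstack:volume-5678-lun-1',
]
# Finds the pci-prefixed entry that a bare "ip-..." match would miss.
print(find_by_path_entry(entries, '10.0.2.15:3260',
                         'iqn.2010-10.org.openstack:volume-5678', 1))
```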
jgriffithasselin: hehe... that's a whole different can of worms22:36
*** annegentle has joined #openstack-cinder22:36
jgriffithasselin: bad stuff... the 500 coming back from the client is not cool22:36
jgriffithasselin: given it's icehouse makes me suspect of API versions that you're using22:36
jgriffithbut that would be a 100% thing I think22:37
hemnaanish, thanks a bunch!  really appreciate it22:37
hemnaanish, if you need any help, let me know22:37
*** _cjones_ has joined #openstack-cinder22:37
anishhemna: is there any documentation on how to use os-brick with nova ?22:37
asselinjgriffith, and c-api shows
*** mriedem has quit IRC22:38
asselinand I've been noticing some amq issues...that's oslo22:39
hemnaanish, hehe no.22:39
hemnaanish, I'm working on a patch to get nova to use os-brick now22:39
hemnaanish, which is what led me to see the differences22:40
asselinjgriffith, oslo versions:
hemnaanish, I have a WIP for cinder to use os-brick22:40
anishI have something slightly related
hemnaanish, and you can get the os-brick code working with cinder upload to image22:40
anishspec that addresses that bug22:40
anishI should update that spec to include the bug22:40
hemnaanish, yah that'd be a good idea22:41
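Since there is no documentation yet on using os-brick from Nova, here is a rough illustration of the caller-facing shape. The `FakeISCSIConnector` class is a stand-in for demonstration only; the property keys (`target_portal`, `target_iqn`, `target_lun`) follow the connection_info a Cinder driver returns from initialize_connection, and a real connector exposes the same connect/disconnect pair:

```python
# Stand-in sketch of the attach/detach contract a consumer like Nova
# codes against; real os-brick connectors do the actual iscsiadm work.
class FakeISCSIConnector:
    def connect_volume(self, connection_properties):
        # Would log in to the target; returns the local block device info.
        path = '/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s' % (
            connection_properties['target_portal'],
            connection_properties['target_iqn'],
            connection_properties['target_lun'])
        return {'type': 'block', 'path': path}

    def disconnect_volume(self, connection_properties, device_info):
        # Would log out of the target and clean up the device.
        return None

props = {'target_portal': '10.0.2.15:3260',
         'target_iqn': 'iqn.2010-10.org.openstack:volume-1234',
         'target_lun': 1}
conn = FakeISCSIConnector()
info = conn.connect_volume(props)
print(info['path'])
```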
*** annegentle has quit IRC22:41
*** bswartz has joined #openstack-cinder22:42
patrickeastjgriffith: asselin: dunno if it's related to what you guys are seeing, but we were recently seeing some failures on the volume_boot_pattern tests, they show up way more frequently with higher concurrency in the testing, and from our (small) sample set it failed 100% of the time when the V2 and V1 versions of the tests were running at the same time.. we had planned to dig into it more in our next sprint22:43
hemnaso, at this point, I think it might make sense to do a drop of os-brick now22:45
hemnaget cinder using it22:45
hemnaand then get the 3 os-brick bugs in, then another drop and then nova can be based off of the 2nd drop22:45
hemnaor we'll miss our L window with nova at this point.22:45
hemnaos-brick as it is now, is just like cinder/brick22:45
hemnaso, once the 2nd drop happens, cinder will get the benefit of the 3 fixed bugs22:46
asselinjgriffith, patrickeast thingee I'm seeing other rabbit mq issues:
hemnain the mean time we need a strategy for testing Cinder against os-brick as well22:46
*** Liu has quit IRC22:46
*** Yogi1 has joined #openstack-cinder22:46
* asselin steps away for a bit22:47
*** jwcroppe has quit IRC22:48
*** patrickeast has quit IRC22:48
*** Liu has joined #openstack-cinder22:52
*** markvoelker_ has joined #openstack-cinder22:53
hemnaanish, so I can try and just ignore the differences in libvirt's iscsi volume driver and put up a WIP that uses os-brick22:58
hemnait'll be missing the 3 bugs/features, but should function22:58
hemnaanish, I can try and do that tomorrow22:59
anishI can take it up from there, should not be an issue22:59
hemnafor now the cinder WIP should provide you with a mechanism for testing23:00
anishsounds good, looking at the review patch23:00
jgriffithasselin: that's no good!23:01
*** erlon has quit IRC23:01
*** Rockyg has joined #openstack-cinder23:01
*** winston-d_ has joined #openstack-cinder23:03
*** hemna is now known as hemnafk23:05
*** winston-1_ has joined #openstack-cinder23:08
*** jaypipes has quit IRC23:08
*** winston-d_ has quit IRC23:08
*** chlong has joined #openstack-cinder23:11
*** Yogi1 has quit IRC23:15
*** Yogi1 has joined #openstack-cinder23:16
*** patrickeast has joined #openstack-cinder23:16
*** fanyaohong has joined #openstack-cinder23:18
*** annashen has quit IRC23:21
*** jaypipes has joined #openstack-cinder23:22
*** jwcroppe has joined #openstack-cinder23:30
*** drjones has joined #openstack-cinder23:31
*** kmartin_ has joined #openstack-cinder23:34
*** j_king_ has joined #openstack-cinder23:34
*** Mandell has quit IRC23:37
*** Mandell has joined #openstack-cinder23:37
*** luv has quit IRC23:38
*** _cjones_ has quit IRC23:38
*** mdbooth has quit IRC23:38
*** j_king has quit IRC23:38
*** timcl has quit IRC23:38
*** kmartin has quit IRC23:38
*** bkopilov has quit IRC23:38
*** timcl1 has joined #openstack-cinder23:38
*** Tross has joined #openstack-cinder23:38
*** luv_ has joined #openstack-cinder23:38
*** aarefiev_ has joined #openstack-cinder23:38
*** mdbooth_ has joined #openstack-cinder23:38
*** luv_ is now known as luv23:38
*** bkopilov has joined #openstack-cinder23:38
*** lan_ has joined #openstack-cinder23:38
*** reed_ has joined #openstack-cinder23:38
*** cburgess_ has joined #openstack-cinder23:38
*** mdbooth_ is now known as mdbooth23:38
*** markvoelker has joined #openstack-cinder23:39
*** Tross1 has quit IRC23:39
*** lan has quit IRC23:39
*** aarefiev has quit IRC23:39
*** reed has quit IRC23:39
*** cburgess has quit IRC23:39
*** garthb has joined #openstack-cinder23:39
*** garthb_ has joined #openstack-cinder23:39
*** markvoelker_ has quit IRC23:40
*** reed_ is now known as reed23:40
*** markvoelker has quit IRC23:40
*** markvoelker has joined #openstack-cinder23:40
*** markvoelker_ has joined #openstack-cinder23:41
*** Yogi1 has quit IRC23:42
*** emagana has quit IRC23:43
*** boichev has quit IRC23:43
*** esker has joined #openstack-cinder23:44
*** markvoelker has quit IRC23:45
thingeemtreinish: yeah can definitely reproduce it with running one of those tests alone23:51
*** nestorf has joined #openstack-cinder23:55
asselinjgriffith, no....I wish I had logstash internally....23:58
asselinanother occurrence:
*** wolsen_ is now known as wolsen23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas