Monday, 2015-11-02

*** haomaiwang has quit IRC00:01
*** haomaiwang has joined #openstack-cinder00:01
*** lprice has quit IRC00:03
*** jamielennox is now known as jamielennox|away00:04
*** salv-orlando has quit IRC00:06
*** jamielennox|away is now known as jamielennox00:07
*** p0rtal has joined #openstack-cinder00:13
*** stevemar_ has joined #openstack-cinder00:16
*** p0rtal has quit IRC00:17
*** stevemar_ has quit IRC00:20
*** apoorvad has quit IRC00:23
*** EinstCrazy has quit IRC00:25
*** takedakn has quit IRC00:27
*** takedakn has joined #openstack-cinder00:28
*** takedakn has quit IRC00:32
*** jerrygb has joined #openstack-cinder00:34
*** jerrygb has quit IRC00:39
*** jamielennox is now known as jamielennox|away00:45
<openstackgerrit> Thang Pham proposed openstack/cinder: WIP: Backwards compatible update to create_volume
*** takedakn has joined #openstack-cinder00:49
*** apoorvad has joined #openstack-cinder00:52
*** rhagarty has quit IRC00:54
*** apoorvad has quit IRC00:56
*** apoorvad has joined #openstack-cinder00:56
*** jamielennox|away is now known as jamielennox00:57
*** chlong has quit IRC00:59
*** haomaiwang has quit IRC01:01
*** haomaiwang has joined #openstack-cinder01:01
*** jamielennox is now known as jamielennox|away01:04
*** salv-orlando has joined #openstack-cinder01:07
*** lixiaoy1 has joined #openstack-cinder01:09
*** jerrygb has joined #openstack-cinder01:10
*** takedakn has quit IRC01:11
*** apoorvad has quit IRC01:12
*** chlong has joined #openstack-cinder01:12
*** EinstCrazy has joined #openstack-cinder01:13
*** jamielennox|away is now known as jamielennox01:13
*** jerrygb has quit IRC01:15
*** yangxi has joined #openstack-cinder01:17
*** zhenguo has joined #openstack-cinder01:20
*** hemna has joined #openstack-cinder01:27
*** haomaiwang has quit IRC01:28
*** Lee1092 has joined #openstack-cinder01:31
*** chenying has joined #openstack-cinder01:31
*** terryyao has joined #openstack-cinder01:36
<openstackgerrit> chenying proposed openstack/cinder: Fix can not adding a new zone with Cisco FC zone driver
*** chenying has quit IRC01:36
*** smatzek has joined #openstack-cinder01:47
*** stevemar_ has joined #openstack-cinder01:52
*** rhagarty has joined #openstack-cinder01:54
*** sileht has quit IRC01:54
*** stevemar_ has quit IRC01:56
*** yangxi has quit IRC01:59
*** yangxi has joined #openstack-cinder02:03
*** p0rtal has joined #openstack-cinder02:05
*** haomaiwang has joined #openstack-cinder02:10
*** p0rtal has quit IRC02:10
*** jerrygb has joined #openstack-cinder02:18
*** yangxi has quit IRC02:19
*** jerrygb has quit IRC02:23
*** yangxi has joined #openstack-cinder02:23
*** jamielennox is now known as jamielennox|away02:23
*** hemna has quit IRC02:24
*** hemna has joined #openstack-cinder02:25
*** salv-orlando has quit IRC02:28
*** hemna has quit IRC02:29
*** jamielennox|away is now known as jamielennox02:31
<openstackgerrit> XinXiaohui proposed openstack/cinder: Calculate virtual free capacity and notify
*** rex_lee_ has quit IRC02:35
*** smatzek has quit IRC02:36
*** haomaiwang has quit IRC03:01
*** 7F1AAWWJJ has joined #openstack-cinder03:01
*** terryyao has quit IRC03:15
*** jerrygb has joined #openstack-cinder03:19
*** salv-orlando has joined #openstack-cinder03:27
*** daneyon has joined #openstack-cinder03:35
*** jerrygb has quit IRC03:38
*** daneyon has quit IRC03:41
<openstackgerrit> Cedric Zhuang proposed openstack/cinder: Add retype logic in manage_existing for VNX
*** hemna has joined #openstack-cinder03:46
*** dongc has joined #openstack-cinder03:47
*** gouthamr has quit IRC03:54
*** yangxi has quit IRC03:59
*** 7F1AAWWJJ has quit IRC04:01
*** haomaiwang has joined #openstack-cinder04:01
*** yangxi has joined #openstack-cinder04:01
*** yangxi has quit IRC04:03
*** hideme_ has quit IRC04:11
*** p0rtal has joined #openstack-cinder04:11
*** hideme_ has joined #openstack-cinder04:11
*** dims has quit IRC04:13
*** jerrygb has joined #openstack-cinder04:15
*** links has joined #openstack-cinder04:15
*** lixiaoy1 has quit IRC04:15
*** hemna has quit IRC04:38
*** yangxi has joined #openstack-cinder04:38
*** jerrygb has quit IRC04:39
*** p0rtal has quit IRC04:57
*** p0rtal has joined #openstack-cinder04:58
*** haomaiwang has quit IRC05:01
*** haomaiwang has joined #openstack-cinder05:01
*** salv-orlando has quit IRC05:02
*** p0rtal has quit IRC05:02
*** lixiaoy1 has joined #openstack-cinder05:06
<openstackgerrit> xing-yang proposed openstack/cinder-specs: Integrate replication with consistency group
*** lixiaoy11 has joined #openstack-cinder05:15
*** lixiaoy1 has quit IRC05:17
*** deepakcs has joined #openstack-cinder05:18
*** sgotliv has quit IRC05:19
*** sgotliv has joined #openstack-cinder05:19
*** lixiaoy1 has joined #openstack-cinder05:22
*** lixiaoy11 has quit IRC05:24
*** lixiaoy11 has joined #openstack-cinder05:25
*** lixiaoy1 has quit IRC05:27
*** boris-42 has joined #openstack-cinder05:28
*** salv-orlando has joined #openstack-cinder05:40
*** yangxi has quit IRC05:41
*** shausy has joined #openstack-cinder05:43
*** zenpac has quit IRC05:48
*** yangxi has joined #openstack-cinder05:55
*** dongc has quit IRC05:55
*** haomaiwang has quit IRC06:01
*** haomaiwang has joined #openstack-cinder06:01
*** lprice has joined #openstack-cinder06:04
*** jamielennox is now known as jamielennox|away06:06
*** links has quit IRC06:07
*** sileht has joined #openstack-cinder06:07
*** salv-orlando has quit IRC06:20
*** cfriesen_ has quit IRC06:22
*** sgotliv has quit IRC06:26
*** zerda has joined #openstack-cinder06:32
*** takedakn has joined #openstack-cinder06:41
*** takedakn has quit IRC06:41
*** links has joined #openstack-cinder06:44
*** zhangjn has joined #openstack-cinder06:45
*** chlong has quit IRC06:48
*** nkrinner has joined #openstack-cinder07:00
*** jamielennox|away is now known as jamielennox07:01
*** haomaiwang has quit IRC07:01
*** 7F1AAWXGU has joined #openstack-cinder07:01
*** jith_ has joined #openstack-cinder07:12
<jith_> hi all.. if i configure ceph as backend for cinder... is it possible to use the same ceph backend for other purpose like swift?? or cinder should have a dedicated ceph storage? 07:14
<nikeshm> jith_: i think you can use same ceph for both ceph volume driver and ceph backup driver 07:26
*** salv-orlando has joined #openstack-cinder07:30
<nikeshm> jith_: you can use this for single node ceph devstack and play with ceph volume and backup driver 07:34
<nikeshm> modify it little bit 07:34
<jith_> nikeshm: thanks 07:34
*** salv-orlando has quit IRC07:35
*** kmartin has quit IRC07:35
*** flaper87 has quit IRC07:35
*** kmartin has joined #openstack-cinder07:36
*** NightKhaos has quit IRC07:36
*** links has quit IRC07:37
*** flaper87 has joined #openstack-cinder07:37
*** NightKhaos has joined #openstack-cinder07:37
*** links has joined #openstack-cinder07:39
<jith_> nikeshm: i am a newbie in ceph.. ceph volume driver and backup driver for cinder volumes??? because i saw a reference like for configuring swift.. i dont know about the internal operations in ceph 07:40
*** jwcroppe has joined #openstack-cinder07:45
*** anshul has joined #openstack-cinder07:46
*** salv-orlando has joined #openstack-cinder07:54
*** alexschm has joined #openstack-cinder07:58
*** 7F1AAWXGU has quit IRC08:01
*** haomaiwang has joined #openstack-cinder08:01
*** shausy has quit IRC08:13
*** shausy has joined #openstack-cinder08:14
*** haomaiwang has quit IRC08:14
*** haomaiwang has joined #openstack-cinder08:14
*** haomaiwang has quit IRC08:15
*** haomaiwa_ has joined #openstack-cinder08:15
*** bapalm has quit IRC08:19
*** markus_z has joined #openstack-cinder08:20
*** bapalm has joined #openstack-cinder08:24
*** apoorvad has joined #openstack-cinder08:24
*** chenying1 has joined #openstack-cinder08:25
*** stevemar_ has joined #openstack-cinder08:25
*** yrabl has joined #openstack-cinder08:26
*** aarefiev has joined #openstack-cinder08:29
*** salv-orlando has quit IRC08:30
*** yrabl has quit IRC08:33
*** yrabl has joined #openstack-cinder08:34
<openstackgerrit> Hao Li proposed openstack/cinder: return backup parent_id when list/show backups
*** vignesh1 has joined #openstack-cinder08:38
*** ZZelle has joined #openstack-cinder08:39
*** salv-orlando has joined #openstack-cinder08:40
*** sgotliv has joined #openstack-cinder08:46
*** bluex has joined #openstack-cinder08:48
*** bluex has quit IRC08:48
*** bluex has joined #openstack-cinder08:49
*** haomaiwa_ has quit IRC09:01
*** haomaiwang has joined #openstack-cinder09:01
<nikeshm> jiith_ :
*** jordanP has joined #openstack-cinder09:10
<nikeshm> jiith_: this should give most of thing
<nikeshm> may be you have to change cinder.conf for enabling ceph backup driver 09:11
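For readers following the thread: a minimal sketch of the cinder.conf changes nikeshm alludes to, wiring one Ceph cluster to both the RBD volume backend and the Ceph backup driver. The pool, user, and backend names here are illustrative assumptions, not taken from the log:

```ini
# Hypothetical cinder.conf fragment (Liberty-era option names): one Ceph
# cluster serving volumes and backups. Pool/user names are examples only.
[DEFAULT]
enabled_backends = ceph
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_pool = backups

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_pool = volumes
```

Using separate pools (and separate cephx users) for volumes and backups, as sketched here, is what keeps one cluster usable for both purposes without the drivers stepping on each other.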
<openstackgerrit> Hao Li proposed openstack/cinder-specs: List backups return "parent_id" field
*** manas has joined #openstack-cinder09:14
*** chenying1 has quit IRC09:16
*** chenying1 has joined #openstack-cinder09:17
<openstackgerrit> Hao Li proposed openstack/cinder-specs: List backups return "parent_id" field
*** manas has quit IRC09:17
*** apoorvad has quit IRC09:28
*** jistr has joined #openstack-cinder09:28
*** stevemar_ has quit IRC09:46
*** lprice has quit IRC09:48
*** lixiaoy11 has quit IRC09:53
*** vgridnev has joined #openstack-cinder09:57
*** zhenguo has quit IRC09:57
*** haomaiwang has quit IRC10:01
*** haomaiwang has joined #openstack-cinder10:01
*** yuriy_n17 has joined #openstack-cinder10:03
*** yangxi has quit IRC10:14
*** jaypipes has joined #openstack-cinder10:15
*** yangxi has joined #openstack-cinder10:16
*** deepakcs has quit IRC10:19
*** deepakcs has joined #openstack-cinder10:19
*** yangxi has quit IRC10:22
<openstackgerrit> Thang Pham proposed openstack/cinder: Update create_volume API to use versionedobjects
<jith_> nikeshm: thanks.. i got it 10:32
*** jith_ has quit IRC10:34
<openstackgerrit> zhangsong proposed openstack/cinder: SheepdogDriver: Improve get_volume_stats operation
*** salv-orlando has quit IRC10:36
*** salv-orlando has joined #openstack-cinder10:36
*** jwcroppe has quit IRC10:37
*** jwcroppe has joined #openstack-cinder10:38
*** haomaiwang has quit IRC10:38
*** haomaiwang has joined #openstack-cinder10:39
*** zhangjn has quit IRC10:42
*** jwcroppe has quit IRC10:43
*** haomaiwang has quit IRC10:43
*** EinstCrazy has quit IRC10:45
*** haomaiwang has joined #openstack-cinder10:45
*** salv-orlando has quit IRC10:47
*** salv-orlando has joined #openstack-cinder10:48
*** subscope has joined #openstack-cinder10:48
*** haomaiwang has quit IRC11:01
*** haomaiwang has joined #openstack-cinder11:01
*** links has quit IRC11:03
*** yusuke has quit IRC11:07
*** links has joined #openstack-cinder11:09
*** dims has joined #openstack-cinder11:12
*** EinstCrazy has joined #openstack-cinder11:14
*** Zhongjun has joined #openstack-cinder11:18
<openstackgerrit> Wilson Liu proposed openstack/cinder: Huawei: add manage/unmanage volume support
*** dims has quit IRC11:21
*** salv-orlando has quit IRC11:23
*** haomaiwang has quit IRC11:24
*** salv-orlando has joined #openstack-cinder11:25
*** smatzek has joined #openstack-cinder11:32
*** cdelatte has joined #openstack-cinder11:33
*** geguileo is now known as geguileo_onPTO11:35
*** flaper87 has quit IRC11:41
*** flaper87 has joined #openstack-cinder11:41
*** dave-mccowan has joined #openstack-cinder11:47
*** zhangjn has joined #openstack-cinder11:47
*** zhangjn has quit IRC11:47
*** zhangjn has joined #openstack-cinder11:48
*** stevemar_ has joined #openstack-cinder11:58
*** subscope has quit IRC11:59
*** subscope has joined #openstack-cinder12:00
*** subscope has quit IRC12:02
*** subscope has joined #openstack-cinder12:02
*** stevemar_ has quit IRC12:02
*** stevemar_ has joined #openstack-cinder12:11
*** delattec has joined #openstack-cinder12:22
*** cdelatte has quit IRC12:22
*** ondergetekende has quit IRC12:31
*** onder has joined #openstack-cinder12:32
*** deepakcs has quit IRC12:40
*** zao has joined #openstack-cinder12:40
*** kevincarr1991 has joined #openstack-cinder12:54
*** porrua has joined #openstack-cinder12:56
*** salv-orlando has quit IRC12:58
*** subscope has quit IRC12:59
*** jerrygb has joined #openstack-cinder13:13
*** zhangjn has quit IRC13:16
*** jerrygb has quit IRC13:18
<openstackgerrit> Szymon Wróblewski proposed openstack/cinder: Add some missing fields to Volume object
*** zhangjn has joined #openstack-cinder13:19
*** jerrygb has joined #openstack-cinder13:20
*** edmondsw has joined #openstack-cinder13:28
*** subscope has joined #openstack-cinder13:29
*** dims has joined #openstack-cinder13:33
*** rzerda has joined #openstack-cinder13:36
*** timcl has joined #openstack-cinder13:37
*** Zhongjun has quit IRC13:37
*** zerda has quit IRC13:38
*** sgotliv has quit IRC13:38
*** dims has quit IRC13:39
*** bswartz has quit IRC13:39
*** diablo_rojo has joined #openstack-cinder13:39
*** krtaylor has quit IRC13:43
*** stevemar_ has quit IRC13:43
*** salv-orlando has joined #openstack-cinder13:45
*** jerrygb has quit IRC13:45
*** jerrygb has joined #openstack-cinder13:46
*** ociuhandu has joined #openstack-cinder13:51
*** jerrygb has quit IRC13:51
<openstackgerrit> Hao Li proposed openstack/cinder-specs: List backups return "parent_id" field
*** sgotliv has joined #openstack-cinder13:54
<openstackgerrit> Hao Li proposed openstack/cinder-specs: List backups return "parent_id" field
<openstackgerrit> Hao Li proposed openstack/cinder-specs: List backups return "parent_id" field
*** gouthamr has joined #openstack-cinder13:59
*** links has quit IRC14:00
*** mc_nair has joined #openstack-cinder14:01
*** akerr has joined #openstack-cinder14:05
*** akerr_ has joined #openstack-cinder14:06
*** xyang1 has joined #openstack-cinder14:08
*** cfriesen_ has joined #openstack-cinder14:10
*** dims has joined #openstack-cinder14:10
*** xyang1 has quit IRC14:10
*** yangxi has joined #openstack-cinder14:10
*** akerr has quit IRC14:10
*** xyang1 has joined #openstack-cinder14:10
*** sileht has quit IRC14:14
*** dustins has joined #openstack-cinder14:14
*** Yogi1 has joined #openstack-cinder14:16
<openstackgerrit> Hao Li proposed openstack/cinder-specs: List backups return "parent_id" field
*** breitz has joined #openstack-cinder14:19
*** yangxi has quit IRC14:20
*** jgregor has joined #openstack-cinder14:21
*** alejandrito has joined #openstack-cinder14:22
*** julim has joined #openstack-cinder14:22
*** julim has quit IRC14:23
*** rzerda has quit IRC14:24
*** subscope has quit IRC14:26
*** vincent_hou has joined #openstack-cinder14:27
*** hideme_ has quit IRC14:27
*** jerrygb has joined #openstack-cinder14:28
*** hideme_ has joined #openstack-cinder14:28
<openstackgerrit> Szymon Wróblewski proposed openstack/cinder: Add some missing fields to Volume object
*** mriedem has joined #openstack-cinder14:29
*** scottda has joined #openstack-cinder14:32
*** bswartz has joined #openstack-cinder14:32
*** zz_john5223 is now known as john5223 14:32
<vincent_hou> jbernard: hi 14:36
<jbernard> vincent_hou: yi 14:36
<jbernard> vincent_hou: hi 14:36
<vincent_hou> jbernard: how are you? 14:36
*** zerda has joined #openstack-cinder14:37
<jbernard> vincent_hou: im doing great, you? 14:37
<vincent_hou> jbernard: I am fine too. 14:37
*** markvoelker has joined #openstack-cinder14:40
*** zerda has quit IRC14:41
*** subscope has joined #openstack-cinder14:41
*** Trident has joined #openstack-cinder14:48
*** subscope has quit IRC14:54
*** zao has left #openstack-cinder14:55
*** willsama has joined #openstack-cinder14:55
*** subscope has joined #openstack-cinder14:55
*** subscope has quit IRC14:57
*** dustins has quit IRC14:59
*** yrabl has quit IRC15:01
*** ociuhandu has quit IRC15:01
*** ociuhandu has joined #openstack-cinder15:01
<openstackgerrit> Thang Pham proposed openstack/cinder: Update get/delete_volume API to use versionedobjects
*** baumann has joined #openstack-cinder15:03
*** krtaylor has joined #openstack-cinder15:06
*** jerrygb has quit IRC15:06
*** jerrygb has joined #openstack-cinder15:06
*** haomaiwang has joined #openstack-cinder15:07
*** jerrygb has quit IRC15:07
*** jerrygb has joined #openstack-cinder15:07
*** zhangjn has quit IRC15:08
*** aix has quit IRC15:09
*** subscope has joined #openstack-cinder15:09
*** delattec has quit IRC15:10
*** delattec has joined #openstack-cinder15:11
<openstackgerrit> Hao Li proposed openstack/cinder-specs: List backups return "parent_id" field
<openstackgerrit> Hao Li proposed openstack/cinder-specs: List backups return "parent_id" field
<openstackgerrit> abhiram moturi proposed openstack/cinder: Zfssaiscsi driver should not use 'default' initiator group
*** anshul has quit IRC15:14
*** vincent_hou has quit IRC15:20
*** ntpttr_ has quit IRC15:20
*** vgridnev has quit IRC15:24
*** cvstealt1 has joined #openstack-cinder15:27
*** pckizer_ has joined #openstack-cinder15:27
*** jgregor1 has joined #openstack-cinder15:28
*** sage has joined #openstack-cinder15:31
*** kragniz_ has joined #openstack-cinder15:32
*** kmartin_ has joined #openstack-cinder15:34
*** leseb_ has joined #openstack-cinder15:34
*** jgregor has quit IRC15:35
*** onder has quit IRC15:35
*** jaypipes has quit IRC15:35
*** flaper87 has quit IRC15:35
*** kmartin has quit IRC15:35
*** nkrinner has quit IRC15:35
*** geguileo_onPTO has quit IRC15:35
*** zul has quit IRC15:35
*** DuncanT has quit IRC15:35
*** briancurtin has quit IRC15:35
*** toabctl has quit IRC15:35
*** liewegas has quit IRC15:35
*** dhellmann has quit IRC15:35
*** jbernard has quit IRC15:35
*** leseb- has quit IRC15:35
*** pckizer has quit IRC15:35
*** kragniz has quit IRC15:35
*** cvstealth has quit IRC15:35
*** lyarwood has quit IRC15:35
*** git-harry has quit IRC15:35
*** dhellmann has joined #openstack-cinder15:36
*** toabctl has joined #openstack-cinder15:36
*** flaper87 has joined #openstack-cinder15:36
*** flaper87 has quit IRC15:36
*** flaper87 has joined #openstack-cinder15:36
*** jaypipes has joined #openstack-cinder15:38
*** haomaiwang has quit IRC15:38
*** p0rtal has joined #openstack-cinder15:41
*** nkrinner has joined #openstack-cinder15:41
*** geguileo_onPTO has joined #openstack-cinder15:41
*** bluex has quit IRC15:42
*** stevemar_ has joined #openstack-cinder15:43
*** briancurtin has joined #openstack-cinder15:44
*** p0rtal has quit IRC15:45
*** zul has joined #openstack-cinder15:47
*** kevincar_ has joined #openstack-cinder15:48
*** stevemar_ has quit IRC15:49
*** DuncanT has joined #openstack-cinder15:49
*** breitz has quit IRC15:50
*** Swanson has quit IRC15:50
*** jbernard has joined #openstack-cinder15:50
*** kevincarr1991 has quit IRC15:51
*** Swanson has joined #openstack-cinder15:53
*** breitz has joined #openstack-cinder15:53
*** cebruns has joined #openstack-cinder15:56
*** jgregor1 has quit IRC15:56
*** lprice has joined #openstack-cinder15:57
*** jungleboyj has joined #openstack-cinder15:57
*** baumann has quit IRC15:58
*** jgregor has joined #openstack-cinder15:58
*** jwcroppe has joined #openstack-cinder16:03
*** martyturner has joined #openstack-cinder16:03
*** baumann has joined #openstack-cinder16:04
*** shausy has quit IRC16:05
*** alexschm has quit IRC16:06
*** vignesh has joined #openstack-cinder16:07
*** onder has joined #openstack-cinder16:08
*** delatte has joined #openstack-cinder16:08
*** baumann1 has joined #openstack-cinder16:08
*** baumann has quit IRC16:08
*** dustins has joined #openstack-cinder16:08
*** edtubill has joined #openstack-cinder16:08
*** vignesh1 has quit IRC16:09
*** delattec has quit IRC16:09
*** merooney has joined #openstack-cinder16:11
*** mtanino has joined #openstack-cinder16:16
*** sgotliv has quit IRC16:17
*** vgridnev has joined #openstack-cinder16:19
*** subscope has quit IRC16:20
*** kevincar_ has quit IRC16:24
*** leeantho has joined #openstack-cinder16:26
*** hemnafk is now known as hemna16:28
<hemna> it's like Monday and stuff or something. 16:29
<tbarron> yeah, I heard it was Monday too. Doesn't seem right though. 16:33
*** kevincarr1991 has joined #openstack-cinder16:34
*** yuriy_n17 has quit IRC16:34
<diablo_rojo> tbarron: hemna yeah yeah yeah :P 16:35
<tbarron> diablo_rojo: :P 16:35
<diablo_rojo> tbarron: We totally trashed Jay's office while he was gone. 16:36
<hemna> picts or it didn't happen. 16:36
<tbarron> hemna: +1 16:36
*** pckizer_ is now known as pckizer16:37
<diablo_rojo> hemna: I just fb messaged you a picture. 16:37
*** sileht has joined #openstack-cinder16:37
<diablo_rojo> hemna: tbarron Also we screwed with this mouse.. 16:38
*** jungleboyj has quit IRC16:38
<diablo_rojo> hemna: tbarron popped the ball out and put a troll face covering the sensor. And turned it off.. And put another troll face in with the batteries.. 16:39
<openstackgerrit> Scott DAngelo proposed openstack/cinder-specs: cinder-api-microversions
<openstackgerrit> Scott DAngelo proposed openstack/cinder-specs: cinder-api-microversions
<diablo_rojo> tbarron: Here you go
*** stevemar_ has joined #openstack-cinder16:45
<openstackgerrit> Scott DAngelo proposed openstack/cinder-specs: cinder-api-microversions
*** diogogmt has joined #openstack-cinder16:46
<tbarron> diablo_rojo: nice! you guys should "fix" the storwize driver next. 16:47
*** stevemar_ has quit IRC16:49
*** jungleboyj has joined #openstack-cinder16:50
<diablo_rojo> tbarron: jgregor is working on it with baumann1 16:54
*** sileht has quit IRC16:55
<jgriffith> mc_nair: hey... thanks for the feedback BTW 16:55
<smcginnis> diablo_rojo: Nice! 16:55
<openstackgerrit> xing-yang proposed openstack/cinder: Implement update_migrated_volume for the ScaleIO driver
*** kevincarr1991 has quit IRC16:56
<smcginnis> mc_nair: Did you still have the question about LOG.warning? 16:56
*** kevincarr1991 has joined #openstack-cinder16:56
*** sileht has joined #openstack-cinder16:57
<mc_nair> jgriffith: sure thing. Thanks for the responses on those - a few of the things were as much me trying to see why you did something a certain way so I could learn. 16:57
<mc_nair> smatzek: yea, that 16:57
<mc_nair> ^ sorry - shaky hands + keyboard 16:58
<jgriffith> mc_nair: yeah, I appreciate the questions. 16:58
<jgriffith> I'll turn some of those around here later today 16:58
<jgriffith> good to make me think :) 16:58
<mc_nair> smcginnis: sure, that'd be great if you could explain the benefit of doing "LOG.warning(_LW("some message"))" vs. "LOG.warning(_("some message"))" 16:59
<smcginnis> mc_nair: So the correct way should be LOG.warning(_LW('xxx')) 16:59
*** nkrinner has quit IRC17:00
<smcginnis> mc_nair: There shouldn't be any doing LOG.warning(_('xxx')). If there are, then that should be fixed. 17:00
<smcginnis> mc_nair: From what I understand, _Lx categorizes the strings so the folks working on translations know the level being used and prioritize accordingly. 17:01
*** garthb has joined #openstack-cinder17:02
<smcginnis> mc_nair: The only time you wouldn't use _LW for a LOG.warning is if you assign to a string, then use that string both for logging and for setting an exception message. 17:03
<smcginnis> mc_nair: A little more detail here:
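The convention smcginnis describes can be sketched with self-contained stand-ins; in the real Cinder tree `_` and `_LW` come from `cinder.i18n` (built on oslo.i18n) and perform translation lookups, while the no-op functions below only mimic their shape:

```python
import logging

LOG = logging.getLogger(__name__)

# Stand-ins for the oslo.i18n translation markers; in Cinder these are
# imported from cinder.i18n. Here they just return the string unchanged
# so the example runs on its own.
def _(msg):
    """Primary marker: user-facing / exception message text."""
    return msg

def _LW(msg):
    """Warning-level marker: flags the string for translators by log level."""
    return msg

def delete_snapshot(snapshot_id, busy):
    if not busy:
        # Plain warning text gets the level-specific _LW marker.
        LOG.warning(_LW("Snapshot %s already gone; nothing to delete."),
                    snapshot_id)
        return "skipped"
    # The one exception smcginnis notes: a string reused for both a log
    # line and an exception message keeps the plain _ marker, and the
    # already-marked variable is NOT wrapped again at the LOG call.
    msg = _("Snapshot %s is busy.") % snapshot_id
    LOG.warning(msg)
    raise RuntimeError(msg)
```

The last branch also illustrates hemna's later point: a variable passed to `LOG.*` is not wrapped in a marker a second time.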
*** daneyon has joined #openstack-cinder17:03
<openstackgerrit> Scott DAngelo proposed openstack/cinder-specs: cinder-api-microversions
*** markus_z has quit IRC17:11
<openstackgerrit> Nate Potter proposed openstack/cinder: Remove 'refresh' parameter from driver get_stats
<mc_nair> smcginnis: thanks for all the info. Couldn't come up with a good answer for the "why" but the prioritizing makes sense. Was also wondering why we couldn't just move the "_LW" logic into the LOG.warning method itself, since I think all warning messages get translated? But I guess that might just complicate things since we'd still want _LE for the cases where we save the translated message to a variable so we can log and throw it. 17:11
<hemna> mc_nair, you don't want to do that 17:12
*** vignesh has quit IRC17:12
<hemna> not every call to the LOG.* should have a marker wrapped 17:13
*** cbader has joined #openstack-cinder17:13
<hemna> which is all part of the madness of the markers 17:15
<Swanson> jgriffith, get_replication_updates question. Is there a way to know what volumes I need to return information about without doing a db call to get all volumes associated with my backend? 17:15
<hemna> Swanson, I think the get_replication_updates is a v1 api 17:15
<jgriffith> mc_nair: that's the same path I went down when all that marker stuff started :( 17:15
*** kragniz_ is now known as kragniz17:16
<Swanson> hemna, Ah. I thought everything on main was v2. 17:16
<hemna> no, :( 17:16
<hemna> there is some v1 stuff still around 17:17
<hemna> which will get removed once IBM updates their drivers to v2 17:17
<jungleboyj> hemna: Yeah, I don't like how not everything has a marker. 17:17
<jungleboyj> hemna: Yes, in fact baumann1 and jgregor were talking about that this morning in our weekly meeting. 17:18
<mc_nair> jgriffith: yea, I can see how it would lead to its own complications, as you'd still need the _LE marker because there will be cases where we want to hold onto the message in a var so we can log and then also raise an error. Though I guess that could be its own helper function 17:18
<hemna> heh, don't get me started on the markers. 17:18
<jungleboyj> hemna: But you love the subject so! 17:18
<mc_nair> haha, I seem to be poking a wound 17:19
<mc_nair> hemna: just trying to understand, so not every call to LOG.* should have a marker, but shouldn't every call to LOG.warning take the _LW marker? 17:20
<mc_nair> I just read that link but I definitely may still be missing something 17:20
<hemna> the guidelines provide examples when you should/shouldn't use them 17:20
<hemna> if you have a var that is passed in, instead of a string, don't use the marker 17:21
<hemna> this is part of the insanity of the markers and logging. 17:21
<jungleboyj> mc_nair: Not so much a wound ... we did a lot of work to get this all in place a couple of releases ago. The way it was decided to implement it isn't anyone's favorite. 17:22
<Swanson> jgriffith, any outstanding patches for replication I should be aware of? 17:22
<hemna> mc_nair, it's just one of those things that since other OS projects jumped off the cliff with this, we had to as well. 17:22
*** kevincarr1991 has quit IRC17:25
*** kevincarr1991 has joined #openstack-cinder17:26
<mc_nair> alright, still don't 100% get it, but more than I did before. Will try to understand the nuances so I can get bitter about it too :) 17:26
<mc_nair> jgriffith, hemna - thanks for the background 17:26
<hemna> mc_nair, welcome to the club. almost no one 'gets it' wrt the markers, which is why they are so awesome. 17:26
<jgriffith> Swanson: not really :) 17:29
*** Yogi1 has quit IRC17:29
<jgriffith> hemna: do you have a link to our youtube channel with the sessions? 17:30
<jgriffith> Swanson: the only thing I would like to propose is that things like failover are removed :) 17:30
<hemna> yah that :) 17:30
<jgriffith> diablo_rojo: thanks! 17:30
*** jdurgin has joined #openstack-cinder17:30
<diablo_rojo> jgriffith: hemna: I got you :) 17:30
<Swanson> jgriffith, I can only pledge a +1 to that but that +1 I will pledge. 17:31
*** dobson has quit IRC17:31
*** hemna is now known as hemnafk17:31
<jgriffith> Swanson: I think the only one we have to convince is DuncanT :) 17:32
*** hemnafk is now known as hemna17:33
<Swanson> jgriffith, He isn't in my hemisphere, I don't think. So I can't send anyone to persuade him. 17:34
<jgriffith> sighh... I really wish I would've been at the meetup-1, there's some misunderstandings about replication 17:34
<kevincarr1991> is there a way to uninstall cinder? 17:34
<jgriffith> and attempts to boil the ocean :( 17:34
<jgriffith> kevincarr1991: why would you want to do such a thing :) 17:34
<jgriffith> kevincarr1991: yes 17:35
<jgriffith> apt-get remove, or yum 17:35
<kevincarr1991> ha i have made a few errors 17:35
<jgriffith> whatever package manager you use 17:35
<jgriffith> kevincarr1991: tricky part is cleaning up the database and message queue 17:35
*** akerr_ has quit IRC17:35
*** jistr has quit IRC17:36
*** salv-orlando has quit IRC17:39
<jungleboyj> smcginnis: Kilo is security fix only now. Right? 17:39
*** lcurtis has joined #openstack-cinder17:43
*** Yogi1 has joined #openstack-cinder17:45
*** jordanP has quit IRC17:46
<openstackgerrit> Walter A. Boring IV (hemna) proposed openstack/os-brick: Add new Connector APIs for path validation
*** bswartz has quit IRC17:55
*** bswartz has joined #openstack-cinder17:56
*** p0rtal has joined #openstack-cinder17:56
*** willsama has quit IRC17:56
*** willsama has joined #openstack-cinder17:57
*** jgregor has quit IRC17:58
*** jwcroppe has quit IRC18:01
*** jwcroppe has joined #openstack-cinder18:01
*** kevincar_ has joined #openstack-cinder18:02
<hemna> patrickeast, ping 18:04
*** kevincarr1991 has quit IRC18:05
*** jwcroppe has quit IRC18:05
*** salv-orlando has joined #openstack-cinder18:08
*** sghanekar_ has joined #openstack-cinder18:09
*** akerr has joined #openstack-cinder18:13
<xyang1> tbarron: ping 18:14
<tbarron> xyang1: pong 18:14
<xyang1> tbarron: question for you, are you using SnapMirror for replication 18:15
<tbarron> xyang1: well, I'd like to :-) 18:15
<tbarron> xyang1: why? 18:16
<xyang1> tbarron: I am trying to understand your use cases :) 18:16
<tbarron> xyang1: makes sense. 18:16
<xyang1> tbarron: can snapmirror make sure data consistency 18:17
<tbarron> xyang1: yes, the destination has a crash-consistent snapshot of the source filesystem. 18:18
<tbarron> xyang1: in cinder's case, that means of all the volumes in that pool. 18:19
<tbarron> and of all snapshots of those volumes. 18:19
<xyang1> tbarron: so it is probably ok to use consistencygroup? Just make sure it is in the same pool 18:19
<tbarron> xyang1: consistency group is actually more general, can span pools. 18:20
<tbarron> xyang1: and it can be a proper subset of a pool. 18:21
<xyang1> tbarron: we can make it to work for a pool or across 18:21
<tbarron> xyang1: and for some of our platforms we don't have this snap mirror model. 18:21
<xyang1> tbarron: not everyone can support cg span pools 18:21
<tbarron> so replication groups are a distinct concept from consistency groups. 18:21
<xyang1> tbarron: so you have two models, in one pool or span pools? 18:22
*** hodos has joined #openstack-cinder18:22
*** harlowja has joined #openstack-cinder18:22
<tbarron> our CG approach is still getting designed :-) as we have multiple platforms and want to implement in a way that makes sense across them if possible. 18:23
*** timcl1 has joined #openstack-cinder18:23
<tbarron> there are several things we could do to implement CGs. Our pools are already trivially CGs, just not exposed as such. 18:24
<tbarron> But adding/removing individual volumes from them may not make much sense, etc. 18:24
<hemna> tbarron, so isn't that your driver's issue then? 18:24
*** timcl has quit IRC18:25
<tbarron> hemna: it's not just our driver's issue that CGs and Replication Groups are distinct concepts. 18:25
<tbarron> I was just talking about our driver to answer xyang's questions about same. 18:26
<xyang1> tbarron: there is cg for replication and cg for snapshot 18:26
<hemna> so I guess this is one of the issues with trying to create a general purpose SDS API in Cinder that has to work with many different backends 18:26
<tbarron> hemna: xyang1: my insistence that replication groups are distinct from CGs conceptually isn't really 18:26
<tbarron> driven by just our drivers. 18:26
<xyang1> tbarron: for lots of drivers they are two groups 18:27
<hemna> we all have to figure out how to map our backend capabilities with the Cinder API concepts 18:27
<tbarron> glusterfs, zfs, probably others are difft. 18:27
<xyang1> tbarron: for pure, i think they are the same 18:27
<tbarron> xyang1: patrickeast: when I look at the pure implementation, there are these things called Pgroups. 18:28
<xyang1> tbarron: however the consistencygroups db in cinder is not specifically for snapshot or replication 18:28
<xyang1> It is general 18:28
<tbarron> Pgroups can get replication type attributes, or consistency group type attributes, or both. 18:28
<xyang1> tbarron: yes pure can be both 18:28
<xyang1> tbarron: it is most straightforward for them I think 18:29
<hemna> we have replication groups as well that's not the same as a cg 18:29
<tbarron> xyang1: and I have no objection whatsoever to any vendor having replication groups and consistency groups coincide. 18:29
<xyang1> tbarron: for others, we can still use the cg table because it is not just for snapshot 18:29
<hemna> but it doesn't preclude a CG from replicating as well 18:29
<tbarron> hemna: ++ 18:29
<tbarron> I have no objection to replicating CGs. Hope that's clear. 18:30
<xyang1> tbarron: hemna the cgsnapshot table is for cg snapshots 18:30
<hemna> we shouldn't mix the 2 18:31
<tbarron> Just don't want the only kind of replication groups to be CGs. 18:31
<hemna> if we want replication groups in cinder, then that's a separate feature IMHO 18:31
<tbarron> hemna: ++ 18:31
<hemna> that being said, we need to get replication for volumes done/working and implemented by many drivers first. 18:31
<tbarron> hemna: xyang1: And I think it's a pretty simple extension to jgriffith's replication 2.0 to get there. 18:32
<hemna> I don't think we should be forging ahead as quickly with these without more drivers implementing them. 18:32
<xyang1> hemna: tbarron I am just trying to understand netapp's case because they can't do volume replication 18:33
*** jwcroppe has joined #openstack-cinder18:33
tbarronhemna: my concern ATM isn't to rush in an implementation of replication groups but rather to map out the problem space18:33
tbarronxyang1: we replicate the whole pool with all its volumes.18:34
hemnatbarron, you can't replicate a single volume ?18:34
tbarronxyang1: others might have other sorts of groups.18:34
tbarronhemna: xyang1: so if we failover, we failover the whole pool.18:35
xyang1tbarron: that is fine, data in your pool is still consistent?18:36
tbarronhemna: xyang1: the RepV2 spec says "that's cool" - there's a paragraph that shouts out to us and others with this kind of setup.18:36
tbarronxyang1: hemna: but the methods in that spec currently all target individual volumes.18:36
tbarronxyang1: yes, the data is consistent when replicated.18:36
hemnalooks like we only have 6 drivers that do CG ?18:37
hemnamaybe my search isn't finding all of them18:37
hemnaack-grep ConsistencyGroupVD18:37
hemnaok 818:37
xyang1hemna: 6 vendors18:37
hemnaI don't see EMC in that list18:37
xyang1hemna: we didn't do VD18:38
hemnareally ?18:38
tbarronxyang1: hemna: part of the problem with ConsistencyGroupVD is it isn't 100% clear which methods are to be implemented for CGs.18:38
tbarronadd/remove vol from CG for example.18:39
hemnathe VD has 4 methods18:39
xyang1tbarron: that is the problem with VD18:39
* tbarron is setting Cory up :-)18:39
hemnacreate/delete_cg, create/delete_cgsnap18:40
xyang1hemna: unless every one implements it, we cannot add to VD18:40
hemnathen why are we even bothering with the VD mess then?18:40
xyang1hemna: new methods are update, create from src18:40
hemnaI don't get this at all18:40
xyang1hemna: we could add them and just pass18:41
hemnaI dunno, this whole VD thing is broken IMHO18:41
xyang1hemna: that is the problem.  It is too perfect18:41
hemnaif they aren't required, then don't make them abc18:41
xyang1hemna: once you add a new method, it breaks18:41
tbarronbe back in 10-1518:42
xyang1hemna: you can't make anything required until everyone implements it18:42
hemnaI'm with jgriffith on this one.  I really don't like the whole VD mess that we have in driver.py18:42
hemnait's not helping clean anything up and it greatly complicates what is supposed to be done by a driver18:43
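The ABC problem xyang1 and hemna describe above can be reduced to a few lines: once a method is declared abstract on a driver interface, any pre-existing subclass that doesn't implement it can no longer be instantiated, which is why nothing can be added to the VD until every driver implements it. A minimal sketch, with class and method names illustrative rather than the real driver.py code:

```python
import abc


class ConsistencyGroupVD(metaclass=abc.ABCMeta):
    """Toy stand-in for a Cinder volume-driver ABC."""

    @abc.abstractmethod
    def create_consistencygroup(self, context, group):
        pass

    # Adding a new abstract method to the interface...
    @abc.abstractmethod
    def update_consistencygroup(self, context, group):
        pass


class OldDriver(ConsistencyGroupVD):
    """A driver written before update_consistencygroup existed."""

    def create_consistencygroup(self, context, group):
        return {'status': 'available'}
    # No update_consistencygroup -> instantiation now fails.


try:
    OldDriver()
except TypeError as exc:
    # TypeError: Can't instantiate abstract class OldDriver ...
    print('broken:', exc)
```

This is xyang1's "once you add a new method, it breaks": the alternative of adding a concrete method that just passes avoids the breakage but defeats the point of the ABC.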
Swanson"replication groups"?18:43
*** stevemar_ has joined #openstack-cinder18:46
*** setmason has joined #openstack-cinder18:47
* tbarron is back18:48
*** akerr has quit IRC18:49
*** stevemar_ has quit IRC18:50
tbarronSwanson: the idea is that often it makes sense to replicate a set of volumes to a common destination, and failover the same.18:50
*** rady has joined #openstack-cinder18:51
smcginnisjungleboyj: Yep, security only now.18:52
smcginnisjgriffith: +2 for removing failover18:52
*** kevincar_ has quit IRC18:52
*** kevincarr1991 has joined #openstack-cinder18:53
Swansontbarron, failover is a nightmare for replication.  Anyways, my consistency groups are only replicated in that the volumes in the groups are replicated.  The backend has no knowledge of the cg on the front end.18:54
smcginnisSwanson, tbarron: My problem is - we're not actually failing anything over.18:55
smcginnisThe admin still needs to redo things on the destination. It would be easier and safer in a disaster scenario to just have them manage existing and set things up again.18:56
smcginnisAt least until we can make failover do something useful other than potentially F all their data.18:56
tbarronsmcginnis: Swanson: I can *do* a failover, i.e. change the destination to be active, but that happens for the whole group of volumes (in my case, the whole storage pool).18:57
xyang1Swanson: are you testing failover CG already?18:57
smcginnistbarron: Sounds even less useful then. :)18:57
tbarronsmcginnis: well, in true DR I think it's what one wants to do :)  AZ1 is on fire, so I want to move everything to AZ2.18:58
Swansonxyang1, nope.  I haven't worried about it yet.  Just trying to make sure I'm not missing something.18:59
smcginnistbarron: At least for us, there's not much to do. If the source is gone, just start using the secondary copy. I suppose that is different for other arrays.18:59
tbarronsmcginnis: Swanson: but I have no problem with pulling failover method out for now since there doesn't appear to be18:59
*** crose has joined #openstack-cinder18:59
tbarrona useful common denominator.18:59
smcginnistbarron: +118:59
Swansontbarron, smcginnis: We can do that, too, but without Nova knowing that we moved the volumes....18:59
openstackgerritMichal Dulko proposed openstack/cinder-specs: Fix syntax error in Mitaka specs toc
openstackgerritMichal Dulko proposed openstack/cinder-specs: Fix syntax error in Mitaka specs toc
tbarronSwanson: the vms are going to crash.  When they re-attach they should get what was the secondary now though.19:00
xyang1What's point of doing replication if you can never failover when needed?:)19:00
smcginnisIt's a manual recovery process anyway.19:00
Swansontbarron, smcginnis, jgriffith didn't much care for the failover method and didn't have it in the original spec.  OTOH if it is going to expand to something more useful...19:01
smcginnisI would rather crawl first, then walk. What it means to "failover" seems to require more discussion.19:02
tbarronsmcginnis: ++19:02
*** akerr has joined #openstack-cinder19:02
*** akerr has quit IRC19:03
tbarronmaybe right now we should just be able to say 'replication = True' as a capability if we have it, but not mandate that we all do it the same way.19:03
tbarronI think that was the intent of jgriffith's spec, it just evolved through reviews to be more prescriptive.19:03
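tbarron's suggestion would amount to drivers simply advertising replication in their stats and leaving the mechanics backend-defined. A rough sketch of what that reporting could look like; the field values here are assumed for illustration, not taken from the RepV2 spec:

```python
class ExampleDriver(object):
    """Hypothetical driver advertising replication as a capability."""

    def get_volume_stats(self, refresh=False):
        # The scheduler matches these capabilities against
        # volume-type extra specs; 'replication' is just a boolean
        # flag here, with how replication works left to the driver.
        return {
            'volume_backend_name': 'example',
            'vendor_name': 'Example',
            'driver_version': '1.0',
            'storage_protocol': 'iSCSI',
            'replication': True,
        }
```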
xyang1I don't agree failover should be removed, but may be I am the minority here19:03
Swansonxyang1, Well all I'm doing is breaking replication and renaming the volume on the new backend so it can be picked up.19:04
*** merooney has quit IRC19:04
smcginnisAssuming the source side is there for the replication to be broken, right? Or you unmap? I forget now.19:04
hemnainitiating the failover seems ok to me.  the revert after the failover is what's problematic19:04
*** akerr has joined #openstack-cinder19:04
SwansonAt that point you have a volume that required replication on a backend without replication.  You still need to do a lot of manual work to get things back where they were.19:05
Swansonsmcginnis, if the source side is there I delete the replication.  If the source side isn't I go looking for a volume.  If I should find one (one!) I unmap it from the source and rename.19:06
Swanson(Actually I have a todo on the rename.)19:06
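Swanson's flow above (delete the replication if the source is reachable; otherwise find the single replica, unmap it from the source, and rename it) can be sketched roughly as follows. Every backend method here is a hypothetical stand-in, not the real Dell SC driver API:

```python
def failover_volume(backend, volume):
    """Rough sketch of the failover flow Swanson describes."""
    if backend.source_alive():
        # Source is reachable: cleanly tear down the replication.
        backend.delete_replication(volume)
    else:
        # Source is gone: locate the (single) replica on the
        # destination, unmap it from the dead source, and rename it
        # so it can be picked up as the original volume.
        replicas = backend.find_replicas(volume)
        if len(replicas) != 1:
            raise RuntimeError('expected exactly one replica')
        replica = replicas[0]
        backend.unmap_from_source(replica)
        backend.rename(replica, volume['name'])
    return volume
```

As Swanson notes afterwards, the result is a volume that required replication sitting on a backend without it, so manual cleanup still follows.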
*** jgregor has joined #openstack-cinder19:06
*** zul_ has joined #openstack-cinder19:08
*** zul has quit IRC19:08
*** akerr has quit IRC19:09
SwansonWhat I like about failover is that it is still the same volume in the DB.19:10
hemnaSwanson, +119:10
hemnaand cinder still has access to it19:11
tbarronassuming that when AZ-1 got fried c-vol is still running :-)19:12
tbarronhemna: I agree though.19:12
jungleboyjsmcginnis: Yeah, I found the website to confirm it.  For some reason I couldn't make it sound right in my head.19:12
Swansonhemna, Yeah, but if you just used manage_existing you get the volume up and can do whatever import magic you need to do.  Maybe to AZ1's replacement.19:13
jungleboyjThough, that seems the pattern today.19:13
Swansonxyang1, do you have your cg replication spec url handy?19:13
smcginnisjungleboyj: Inhaling too much helium from all those balloons? :P19:13
smcginnisSwanson: And then the volume is actually on the backend it should be on.19:13
jungleboyjNo, helium. I started popping them and got jgregor's spit all over.  Eeeew.19:14
smcginnisjungleboyj: That's... disturbing.19:15
*** timcl has joined #openstack-cinder19:16
jungleboyjsmcginnis: :-)  Welcome to my world.19:16
*** timcl1 has quit IRC19:16
Swansonsmcginnis, yeah.  Of course the thing is people can still import a replication.  Actually can they?  If I go to do a manage_existing on a repl dest is that an error?19:17
Swansonjgriffith, ^^19:17
jungleboyjsmcginnis: Go on much of a bug cleaning spree?19:19
*** baumann has joined #openstack-cinder19:20
smcginnisjungleboyj: I took a look at a few. ;)19:20
smcginnisHopefully I didn't blow up anyones inbox.19:20
* jungleboyj is crying at the state of my inbox.19:21
*** baumann1 has quit IRC19:21
patrickeasthemna: pong19:23
patrickeasttbarron: xyang1: just reading the scrollback, but yea for pure we use protection groups (pgroups) which are always consistent for snapshots and stuff (so like 1:1 mapping with a CG), and they are the thing we can replicate19:25
*** clayg has left #openstack-cinder19:25
patrickeastso our replication strategy for cinder is to create a pgroup that is replicated and add/remove volumes from that group when enabling/disabling replication for the volume19:25
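The strategy patrickeast describes, one replicated protection group with volumes added or removed as replication is enabled or disabled, can be modeled in a few lines. This is a toy in-memory sketch, not the actual Pure driver code:

```python
class PgroupReplication(object):
    """Toy model of the pgroup-based strategy: the pgroup itself
    carries the replication attribute, and membership determines
    which volumes are replicated."""

    def __init__(self, name='cinder-replication-pgroup'):
        self.name = name
        self.replicated = True   # attribute of the group, not the volume
        self.members = set()

    def enable_replication(self, volume_name):
        self.members.add(volume_name)

    def disable_replication(self, volume_name):
        self.members.discard(volume_name)
```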
openstackgerritNate Potter proposed openstack/cinder: Remove 'refresh' parameter from driver get_stats
tbarronpatrickeast: thanks, that fits with what I see in your code.19:27
jgriffithSwanson: as I pointed out in Tokyo, people will make this as complicated as they want19:29
jgriffithSwanson: IMO this should be much more restrictive than what a lot of people are proposing lately19:29
xyang1patrickeast: thanks, I think it maps well to your pgroup19:29
smcginnisjgriffith: Agreed19:30
*** julim has joined #openstack-cinder19:32
*** merooney has joined #openstack-cinder19:33
*** Yogi1 has quit IRC19:35
j_kingjgriffith: are you advocating for simplicity? ;)19:40
*** sgotliv has joined #openstack-cinder19:41
openstackgerritTom Barron proposed openstack/cinder-specs: Scaling backup service blueprint spec
*** salv-orl_ has joined #openstack-cinder19:46
jgriffithj_king: yes please19:48
j_kingjgriffith: heresy!19:49
*** salv-orlando has quit IRC19:49
*** DericHorn-HP has joined #openstack-cinder19:49
*** salv-orl_ has quit IRC19:51
jgriffithj_king: indeed, these days it could be seen that way19:51
jgriffithso it's funny, I just saw this article... from Bill Gates of all people19:52
jgriffithbut he listed 5 keys of success:
j_kingjgriffith: i'm with ya. i was disappointed at a few sessions where new abstractions were the default answer to most technical challenges.19:52
jgriffithI find number 1 and number 5 the most compelling19:52
jgriffithj_king: yeah, 10 pounds of crap in a 5 pound bag anyone :)19:53
j_kingnot that abstractions can't make life more simple... but yeah.19:53
j_kingthe *right* abstractions tend to have that effect.19:53
*** Yogi1 has joined #openstack-cinder19:53
hemnabut if you reverse 5 and 1 you get: "The ability to focus on a"19:53
jgriffithhemna: ?19:53
*** jwcroppe has quit IRC19:54
jgriffithitem 1 being most important IMO19:54
*** jwcroppe has joined #openstack-cinder19:54
*** Yogi11 has joined #openstack-cinder19:59
*** Yogi1 has quit IRC20:01
*** kevincar_ has joined #openstack-cinder20:02
openstackgerritTom Barron proposed openstack/cinder-specs: Scaling backup service blueprint spec
*** DericHorn-HP has quit IRC20:02
*** kevincarr1991 has quit IRC20:05
*** merooney has quit IRC20:11
kevincar_Has anyone gotten this error for cinder? WARNING keystonemiddleware.auth_token [-] Authorization failed for token20:15
openstackgerritScott DAngelo proposed openstack/cinder-specs: cinder-api-microversions
openstackgerritScott DAngelo proposed openstack/cinder-specs: cinder-api-microversions
*** dims has quit IRC20:31
*** crose has quit IRC20:33
*** sghanekar_ has quit IRC20:41
*** stevemar_ has joined #openstack-cinder20:46
*** sghanekar_ has joined #openstack-cinder20:47
*** kevincar_ has quit IRC20:49
smcginnisscottda: I just needed to officially accept it for M, but I set a priority too to make sure it gets visibility.20:50
*** stevemar_ has quit IRC20:52
*** baumann has quit IRC20:55
*** merooney has joined #openstack-cinder20:56
*** merooney has quit IRC20:57
*** salv-orlando has joined #openstack-cinder20:59
*** porrua has quit IRC21:06
*** salv-orlando has quit IRC21:08
scottdasmcginnis: Thanks21:09
smcginnisscottda: No problem. Looks like LP is kind of screwy on that depending if I target then accept, or accept then target. Not the best tool, IMO. :)21:10
scottdaI'm glad I'm not the only one who gets confused.21:10
*** jamielennox is now known as jamielennox|away21:16
*** dustins has quit IRC21:20
*** timcl has quit IRC21:20
*** kevincarr1991 has joined #openstack-cinder21:21
rluciohey guys, @vmem we want to drop driver support for our 6000 series products, can I just submit a review to remove them, or do I need to follow some sort of deprecation route?21:22
*** salv-orlando has joined #openstack-cinder21:23
rlucioby "them" i mean the related driver files21:23
*** e0ne has joined #openstack-cinder21:23
hemnarlucio, I think that's up to you.   I presume you are the maintainer of the drivers ?21:23
smcginnisrlucio: You should probably deprecate it for a release.21:23
rluciohemna: yea i am21:23
hemnarlucio, we have deprecated one of our drivers for a few releases21:23
smcginnisrlucio: Unless you're sure you don't have any customers using it anymore.21:23
hemnasmcginnis, the downside to deprecation is that CI has to continue to be running21:24
rluciosmcginnis: for deprecation, is there anything to do in the code?  I already have it marked in the docs for deprecation21:24
smcginnishemna: True21:24
hemnarlucio, might want to output a LOG.warning at do_setup() time21:24
rluciohemna: exactly, from my seat, less CI maintenance, the better!21:24
*** Lee1092 has quit IRC21:24
smcginnisrlucio: Well, they're your customers, so if you don't want to give a migration window then feel free to just submit a patch to remove them.21:25
*** Lindis has joined #openstack-cinder21:25
smcginnisrlucio: Or stop your CI and I'll do it eventually. ;)21:25
*** Lindis has left #openstack-cinder21:25
*** kevincarr1991 has quit IRC21:25
rluciosmcginnis: well..  i'll double check before removal but before i get there i just wanted to be sure there wasn't a formal process21:26
*** Lindis has joined #openstack-cinder21:26
rluciosmcginnis: haha i like that, kind of a stealth driver removal... i just stop the ci, and the problem takes care of itself :)21:27
smcginnisrlucio: Normally we deprecate things for a cycle before removing them, but since this is your driver I think you have a little leeway in how it's done.21:27
rluciohemna, smcginnis: ok maybe for starters i'll add the deprecation warning log msgs, and see what the powers-that-be want to do on the files, given the general method for deprecating for a release21:28
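hemna's suggestion of a LOG.warning at do_setup() time might look something like the following; the class name and message text are illustrative (Cinder uses oslo.log, which wraps the stdlib logging shown here):

```python
import logging

LOG = logging.getLogger(__name__)


class Violin6000Driver(object):
    """Sketch only: a driver emitting its deprecation notice."""

    def do_setup(self, context):
        # Emit the deprecation notice once, at service startup, so
        # operators see it well before the driver is removed.
        LOG.warning('The Violin 6000 series drivers are deprecated '
                    'and will be removed in a future release. Please '
                    'migrate volumes to a supported backend.')
```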
openstackgerritWalter A. Boring IV (hemna) proposed openstack/cinder: Move get_by_id to CinderObject
*** baumann has joined #openstack-cinder21:31
*** kevincarr1991 has joined #openstack-cinder21:31
*** Lindis has quit IRC21:32
*** apoorvad has joined #openstack-cinder21:35
*** bswartz has quit IRC21:38
*** apoorvad has quit IRC21:38
sghanekar_For a documentation merge in the default branch of openstack-manual, when are the changes visible on
*** dims has joined #openstack-cinder21:41
*** rady has quit IRC21:43
*** gouthamr has quit IRC21:45
*** can8dnSix has joined #openstack-cinder21:48
*** sgotliv has quit IRC21:49
*** garthb_ has joined #openstack-cinder21:50
*** garthb has quit IRC21:52
*** can8dnSix has quit IRC21:52
*** can8dnSix has joined #openstack-cinder21:52
*** dobson has joined #openstack-cinder21:52
*** salv-orlando has quit IRC21:55
*** rady has joined #openstack-cinder21:56
mtaninosghanekar_: When I posted a patch, the website was updated soon after the patch was merged.21:58
hemnascottda, ping22:00
scottdaGood morning22:00
hemnaso it looks like the only way to get the initiator info saved in the volume attachment22:01
*** e0ne has quit IRC22:01
hemnais to modify nova22:01
hemnaand the cinder api22:01
hemnait has to be done a volume_manager.attach_volume time22:01
hemnaand that doesn't accept the connection_info22:01
scottdaYeah, I was thinking that there won't be a clean way to detach from the cinder side unless we detach all attachments for a volume...which is NOT clean.22:02
*** aix has joined #openstack-cinder22:03
scottdaThere might be a case for that API. Do a force_detach_all from the cinder side in the case where nova has already deleted the VM22:03
hemnaso I'll have to add a new column to the attachments table22:03
hemnaand then do the song and dance with the API, then client, then nova.22:03
*** edmondsw has quit IRC22:04
*** salv-orlando has joined #openstack-cinder22:04
hemnawill be a microversion bump22:05
scottdaYeah, will definitely take some orchestration.22:05
scottdaBut, how would the admin know which attachment to delete using only Cinder if the Nova VM is gone?22:06
hemnait can only be found w/ the instance uuid22:07
scottdaDoes cinder currently do anything with instance uuid?22:07
hemnathe attachment_id is tied to the volume_id and either instance_uuid or host_name22:07
scottdaI guess the admin/user might have that, or at least the host_name is in the cinder DB.22:07
*** diablo_rojo has quit IRC22:07
hemnathat's how it currently finds the attachment22:08
hemnaeither by instance_uuid or host_name22:08
hemna+ volume_id22:08
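The lookup hemna describes, volume_id plus either instance_uuid (VM attach) or host_name (bare-metal / cinder-node attach), can be sketched over a list of attachment rows. The row layout here is illustrative, not the real volume_attachment schema:

```python
def find_attachment(attachments, volume_id, instance_uuid=None,
                    host_name=None):
    """Return the attachment row matching volume_id plus either
    instance_uuid or host_name, or None if no row matches."""
    for att in attachments:
        if att['volume_id'] != volume_id:
            continue
        if instance_uuid and att.get('instance_uuid') == instance_uuid:
            return att
        if host_name and att.get('attached_host') == host_name:
            return att
    return None
```

Note scottda's point just below: once Nova multi-attach lands, two attachments of the same volume to the same host make the host_name path ambiguous, which is why the instance_uuid (or an attachment id) matters.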
scottdaIt would kinda work except in the case where there are 2 of the same volume (multi-attach) to the same host22:08
scottdaSo when we land Nova multi-attach that case will be problematic22:09
scottdaThis is if we don't have the instance_id22:09
hemnayou can't attach a volume to the same host_name more than once currently22:09
hemnaand by host, I mean bare metal or cinder node22:09
scottdaright, but with multi-attach we'll be able to.22:09
hemnaattachments to nova compute hosts are done via instance_uuid22:09
scottdano? ok22:10
scottdaShouldn't you be able to?22:10
hemnathat's the current limitation22:10
scottdaright, I think ildikov had mentioned that bug as a blocker22:10
scottdafor implementing nova side of multi-attach22:10
hemnaI haven't seen/heard that22:11
scottdano, I was wrong. Her issue is
hemnayah and that one is wrong22:11
*** Yogi11 has quit IRC22:11
hodoshi guys. I've noticed remotefs has a _fallocate method, but it seems like it's not used anywhere22:12
hemnasee my comments on that one22:12
hodosbesides glusterfs22:12
scottdaI think when we were talking to J pipes a while back during a hangout we'd glossed over the issue of multi-attach to the same host. But we should make sure it works.22:12
scottdaFor sure will be needed with ironic.22:13
hodosis it there specifically for glusterfs ?22:13
hodosi mean _fallocate22:13
hodosi mean _fallocate()22:13
hemnascottda, you can't attach a volume more than once to the same host.22:13
hemnayou can attach it to more than 1 instance22:13
hemnaand by attaching to a host, I mean to a cinder node, or a bare metal node22:13
scottdaRight, but more than one instance on the same host should work.22:13
hemnayes, that's not the same thing.22:13
scottdayou mean a c-vol node.22:14
jgriffithhemna: just to make sure I follow:  Multi-attach is an Instance construct only?22:14
hemnaif you have 1 compute host with 10 vms on it, you can attach the same volume to all 10 vms.22:14
jgriffithhemna: it's not "really" multiple iSCSI attachements, it's just one iSCSI to one compute node?22:14
hemnajgriffith, it can multi-attach to instances or hosts.22:14
jgriffithhemna: sorry, but you're contradicting yourself22:14
*** salv-orlando has quit IRC22:14
hemnait seems like it, but I'm not22:14
*** diablo_rojo has joined #openstack-cinder22:15
hemnaI'm just not explaining it well22:15
hemnathere are 2 use cases22:15
jgriffithhemna: a difference without a distinction :)22:15
jgriffithhemna: but I think I'm with you22:15
*** salv-orlando has joined #openstack-cinder22:15
hemna1) attaching a volume to multiple instances.   Those instances can be on the same compute host or multiple compute hosts.22:15
*** mdenny has joined #openstack-cinder22:15
hemna2) attaching a volume to a host.22:15
hemnaand by host, I mean c-vol or bare metal node.22:16
jgriffithhemna: then say "node"22:16
jgriffithhemna: but how is 2 relevant to multi-attach discussions?22:16
hemnasure.  unfortunately, the db table column is attached_host22:17
hemnajgriffith, for allowing attachments to BM nodes, via the brick-client22:17
hemnaeither way22:17
jgriffithhemna: I seem to recall recommending attached_hosts and making that a list... but regardless, still not seeing your point22:17
hemnawe have to save the initiator connector info in the attachment row in the db.22:18
hemnamy point is we can't do it currently w/o api changes22:18
jgriffithhemna: can't do "what" ?22:18
hemnato allow passing in the connector info at volume manager attach_volume()22:18
hemnaas it doesn't have that param currently22:18
jgriffithhemna: why don't you just use the connector from initialize_connection?22:20
jgriffithhemna: still another tangential thing22:20
scottdajgriffith: We were hoping for a way to force_detach a volume from only the cinder side, used in the case when the nova instance has been deleted but the cinder volume and connection has not been cleaned up.22:20
jgriffithscottda: understood22:20
scottdaBut cinder does not keep the connection info...22:20
hemnabecause initialize_connection is called outside of the work flow of attaching a volume22:20
hemnare: live migration time.22:21
jgriffithhemna: huh?22:21
jgriffithhemna: I beg to differ22:21
hemnanova calls initialize_connection several times in the process of live migration22:21
jgriffithhemna: as there are drivers that require the initiator iqn to do attachments22:21
jgriffithhemna: I'm now completely confused22:22
hemnaand nova isn't asking cinder to attach the volume, but to ask it for the response22:22
jgriffithhemna: none of this has anything to do with the question that was asked earlier that I can see regarding what one can and can't multi-attach too22:22
hemnaok, well I guess we are talking about 2 things then22:22
hemnaI'm trying to find out how to get the connector info stuffed into the volume attachment table22:23
hemnaso we can do force detach22:23
hemnaand baremetal attaches w/ brick-client22:23
hemnaas we discussed in the meetup in tokyo22:23
hemnaand the only way to do it is to update the cinder api to accept connector in attach_volume time22:23
jgriffithhemna: well, now "we", I just asked what the deal was when you said "you can only multi-attach to instances on the same compute node" then said "you can attach to multiple compute nodes" then said "something blah blah about bare-metal"22:23
* jgriffith is rather confused22:23
openstackgerritSonia Ghanekar proposed openstack/cinder: Using extra-specs in cloned vols for Nimble driver
jgriffithhemna: do you want to talk about that or the original question scottda asked?22:24
jgriffithhemna: I'm happy to discuss either :)22:24
scottdaI think the main problem is that the clock says it is afternoon, but in Tokyo it is 7:24 AM. Perhaps we should all sleep on it.22:24
jgriffithscottda: ok.. but I strongly urge folks to reconsider the question around initiator data22:25
smcginnishemna, scottda: Remind me again why we need to know the initiator to force a delete?22:25
hemnajgriffith, what do you mean ?22:25
scottdaWe need the connector.22:26
smcginnisscottda: And why?22:26
hemnasmcginnis, because in order to correctly call terminate_connection in the driver, you have to have the connector22:26
jgriffithhemna: I mean I don't think this problem is as difficult as it's being made out to be, and scottda we have the connector22:26
scottdaAnd I might have said delete. That would be a mistake. I meant detach22:26
hemnasmcginnis, or the driver can't know what to unexport from the array.22:26
smcginnishemna: So we aren't deleting, just detaching?22:26
hemnasmcginnis, correct, this is force detach22:27
smcginnishemna: OK, thanks. So we need to know where it's exported to in the case of multiple attach?22:27
hemnajgriffith, it's not difficult, just needs an API bump to add the connector to attach_volume22:27
smcginnisSo we don't detach from something else that is still there and using it?22:27
jgriffithhemna: and I'm arguing that I don't think that's necessary22:27
scottdaAny detach requires a connector. It is passed in by nova, but only if nova has the info. It does not if the nova instance has already been deleted, which sometimes happens22:28
hemnajgriffith, ok, well that's the only way I see it working correctly22:28
smcginnisscottda: So if we're forcing the detach, do we care where it was? Why not just remove everything for that volume?22:29
hemnathe problem is for detaching when the vm instance is gone, we don't have the connector anywhere.22:29
*** mriedem is now known as mriedem_away22:29
hemnanova doesn't have it, and cinder doesn't have it22:29
*** julim has quit IRC22:29
scottdajgriffith: That is a problem with multi-attach. YOu might not want to detach from all instances....22:29
hemnasmcginnis, we can't find which export to remove unless we have the initiator information.22:29
*** baumann has left #openstack-cinder22:29
jgriffithhemna: scottda umm, but I think we do22:29
scottdabut you could do a force_detach_all. It might be the only way in the case of multi-attach, and would require the changes hemna has proposed (keeping the info)22:30
jgriffithhemna: scottda IIRC patrickeast added something like this a while back22:30
*** vgridnev has quit IRC22:30
hemnascottda, correct, we don't want to remove every attachment for that volume every time.22:30
jgriffithhemna: scottda because he needed initiator info in the db for his access groups to make sure he didn't recreate/duplicate22:30
hemnascottda, +122:30
smcginnisMaybe it's a difference in how arrays work. But if I'm told to force detach a volume, I could just remove whatever is configured for that volume without needing to care.22:30
smcginnisMultiattach I could see be an issue though.22:30
hemnasmcginnis, and when that volume is attached to more than one thing ?22:30
*** jamielennox|away is now known as jamielennox22:31
hemnaso the connector is what the volume is exported to, and to not wipe out all exports for that volume, you need that connector (initiator info)22:31
scottdaYeah, so the question is "do we want a fine-grained force-detach for a single atttachment for multi-attach cases?" and "can we even do this?"22:31
patrickeastjgriffith: this thing ?22:31
hemnascottda, yes22:31
jgriffithpatrickeast: ahh, yeah, that's what I was thinking of22:31
jgriffithpatrickeast: IIRC you're already doing what scottda and hemna are proposing no?22:32
hemnapatrickeast, in the case of multi-attach you will have different connector info for each attachment22:32
jgriffithpatrickeast: scottda hemna I also think that provider data is a decent enough place for this sort of thing22:32
hemnathe connector needs to be stored with the specific attachment in the attachments table to be correct.22:33
jgriffithpatrickeast: scottda hemna when the volume is attached return a model update that populates the initiators22:33
jgriffithbut anyway22:33
jgriffithhemna: why?  Why not just store it with the volume?22:33
jgriffithhemna: if you just want it for force detach/detach all, then why do you care?22:33
hemnabecause the connector will be different for every attachment22:34
jgriffithhemna: as long as you have all of them, why make it harder and introduce yet another source of truth regarding attachment info22:34
jgriffithhemna: yes, I understand that22:34
hemnathere is 1 source of truth, it's the volume_attachment table22:34
hemnathe connector should go in there for each attachment22:34
jgriffithhemna: since you mention that.. I'm also confused by that still22:35
hemnainitialize_connection doesn't have the instance_uuid and attach_volume doesn't have the connector22:36
hemnaso, I'm proposing updating attach_volume to add the connector22:36
jgriffithhemna: So are you just proposing that we add initiator to the VolumeAttachment table?22:36
hemnaattach_volume already has the code to find the correct attachment.22:36
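hemna's proposal, roughly: persist the connector with each attachment row so a later force-detach can hand the driver the right connector, even after the Nova instance is gone, without wiping out the volume's other exports. A sketch with a plain list standing in for the DB and hypothetical helper names:

```python
def attach_volume(db, volume_id, instance_uuid, mountpoint, connector):
    """Record an attachment, keeping the initiator connector with it.

    'db' is a stand-in for the volume_attachment table, not the
    real Cinder DB API.
    """
    attachment = {
        'volume_id': volume_id,
        'instance_uuid': instance_uuid,
        'mountpoint': mountpoint,
        # New column: the connector dict (initiator iqn, wwpns, host...)
        'connector': connector,
        'attach_status': 'attached',
    }
    db.append(attachment)
    return attachment


def force_detach(db, driver, volume_id, instance_uuid):
    """Targeted detach: only the matching attachment's export is
    removed, leaving the volume's other attachments intact."""
    for att in list(db):
        if (att['volume_id'] == volume_id
                and att['instance_uuid'] == instance_uuid):
            driver.terminate_connection(volume_id, att['connector'])
            db.remove(att)
            return True
    return False
```

This is the multi-attach case hemna and scottda raise: each attachment carries a different connector, so storing one connector per volume (rather than per attachment) would not be enough to tear down the right export.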
jgriffithhemna: and I'm saying I don't understand why you would do that, honestly.  But really I guess I don't care22:37
hemnabecause that's the correct attachment for that connector22:37
jgriffithhemna: the adding the connector to attach call22:37
jgriffithhemna: ok22:37
jgriffithhemna: I won't frustrate you any more22:37
jgriffithhemna: just state that I don't think it's necessary and move on :)22:37
scottdaWe need to talk it all through anyway, and  we'll probably do it again in the spec :)22:38
hemnaI'm ok with talking it through22:38
hemnaI want to understand your viewpoint22:38
hemnausually these discussions end up in having 'aha!' moments for someone either way22:38
hemnausually me.22:38
hemnaso, I'd like to be able to store the data at initialize_connection time22:39
jgriffithhemna: yeah, so why not do that?22:39
patrickeastso, after reading the backscroll and what i remember from the ironic session the proposal is simply that we keep the initiator info in the cinder db whenever we do an attach, right?22:39
jgriffithpatrickeast: correct22:39
hemnabut 1) initialize_connection is called by nova outside of the workflow for attaching a volume22:39
patrickeastthen we can look it up and clean up connections22:39
patrickeastor detach ironic22:39
patrickeastor whatever needs to be done22:39
patrickeastis that it?22:39
hemnaand 2) we don't have a way to find the right volume_attachment table entry22:39
jgriffithpatrickeast: yes... unless hemna and scottda have a case I'm not thinking of22:40
hemnapatrickeast, yes and the only place we have of storing that connector with the right attachment is at attach_volume time currently.22:40
jgriffithhemna: wait... let's back up to item 122:40
jgriffithhemna: so you say multiple initialize_connections... but so?22:40
scottdaI can only think of the 2 cases 1) for ironic detach and 2) force-detach for pathological case when nova instance has been deleted but detach fails for cinder volume22:40
hemnajgriffith, yes, live migration calls initialize_connection several times outside of attaching a volume.22:41
jgriffithhemna: and?  The problem is?22:41
jgriffithhemna: if you already have the data, then don't store it, if it's "new" store it22:41
hemnathat can update the connector22:41
hemnaand incorrectly22:41
*** _cjones_ has joined #openstack-cinder22:41
hemnawe had a case where nova was doing a detach against the wrong connector info22:42
jgriffithhemna: right, but you don't have to update, just append22:42
jgriffithhemna: and IMO what you describe is just a bug, if that's what' going on22:42
hemnaso, if we knew what instance_uuid/host was being attached at initialize_connection time, then we could find the volume_attachment entry and update it.22:42
hemnajgriffith, well it's not a bug now, we fixed that in nova in L22:43
jgriffithhemna: sadly it's been a while since I've looked at Nova-LiveMigration to remember why it's calling initialize_connection outside of an attach.22:43
hemnabut it made me aware that initialize_connection is being called many places in nova22:43
*** edtubill has quit IRC22:43
jgriffithhemna: have you documented any of this or written up the flow that you're concerned about?22:43
jgriffithhemna: because when I look at the code it seems like the calls are in fact for attaching back to new dest instance22:45
hemnaI was just thinking out loud in here, to get feedback before I went ahead w/ a spec22:45
jgriffithpost_live_migration and init_volume_connection22:46
*** lprice has quit IRC22:46
hemnaI believe that's the only place that nova calls cinder's attach api22:47
hemnato finish the attachment22:48
jgriffithhemna: I suppose the refresh might be your concern?22:48
hemnajust need to add connector to that call22:48
openstackgerritNate Potter proposed openstack/cinder: Added more options while uploading volume as image
jgriffithhemna: oh... well, that's the only explicit call to cinders attach22:48
jgriffithhemna: oh.. this goes back to your device swapping targets on people?22:48
jgriffithhemna: I mean, you don't reuse or use the same target for the life of the volume?22:49
*** can8dnSix has quit IRC22:49
jgriffithhemna: which was why you changed the workflow before right?22:49
hemnafor 3PAR, we export a new target for every attachment22:50
hemnafor lefthand 1 target per host22:50
jgriffithhemna: yeah, that's what I'm talking about22:50
hemnait's just how they work.  /me shrugs22:50
hemnaso I can understand the confusion I guess22:51
jgriffithhemna: so the problem is that 3par when it gets another initialize_connection it creates a new target and the compute node doesn't have that because it expects the target to be the same right?22:51
hemnathat was one of our problems yes.22:51
hemnawe fixed our driver to be careful with initialize_connection calls22:51
jgriffithhemna: so rather than monkey around too much, shouldn't we just have Nova explicitly detach/reattach during live migration for devices that don't keep targets?22:52
jgriffithhemna: that could be encapsulated in provider info22:52
jgriffithhemna: and for 3par nova could explicitly detach/reattach rather than just trying to use the same target info22:53
hemnahonestly, I don't know why nova does what it does during live migration22:53
jgriffithhemna: I'd also be curious to look at Pure and LVM with LIO22:53
hemnaI tested LVM with live migration and it didn't have the same problem22:54
hemnasame w/ lefthand22:54
jgriffithhemna: because they'll have some funny stuff with initiator/access groups22:54
hemnabecause the target was the same basically.22:54
jgriffithhemna: right... but use LIO :)22:54
hemnaI haven't messed w/ LIO much22:54
hemnabut maybe I should22:54
hemnaI was curious how LIO did FC22:54
jgriffithhemna: LIO is going to use the concept of initiator access groups IIRC22:54
patrickeastour initialize connection is safe to call multiple times, we handle the case where we get the same initiator more than once22:54
jgriffithhemna: meh, not even getting to that... just LIO iSCSI22:55
jgriffithpatrickeast: yeah, that's what I am wondering, why the fix isn't in the driver22:55
hemnapatrickeast, coolio.  we had a bug in our driver wrt to that.  we had to fix in L.22:55
patrickeastkind of seems like making drivers be able to handle it is the easiest approach22:55
*** jgregor has quit IRC22:55
* patrickeast says that knowing nothing about how hard it might be for other vendors :p22:56
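The idempotent initialize_connection that patrickeast describes could be sketched roughly as below. This is a hypothetical illustration of the pattern, not the actual Pure (or any vendor) driver code; the names `_exports` and `_make_target` are invented stand-ins for the real export-tracking and array calls.

```python
# Sketch of an initialize_connection that is safe to call multiple times:
# a repeated call with the same initiator returns the existing export
# instead of creating a second target. Names here are illustrative.
class SketchDriver:
    def __init__(self):
        # (volume_id, initiator) -> connection info already handed out
        self._exports = {}

    def _make_target(self, volume_id, initiator):
        # Stand-in for the backend call that creates a target/LUN mapping.
        return {'target_iqn': 'iqn.2010-10.org.example:%s' % volume_id,
                'target_lun': len(self._exports)}

    def initialize_connection(self, volume_id, connector):
        key = (volume_id, connector['initiator'])
        if key in self._exports:
            # Same initiator asked again (e.g. live migration): reuse.
            return self._exports[key]
        info = self._make_target(volume_id, connector['initiator'])
        self._exports[key] = info
        return info
```

Handled this way, the multiple initialize_connection calls Nova makes during live migration are harmless for the driver, which is the "fix it in the driver" approach jgriffith and patrickeast favor above.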
hemnathe other side of it, in the live migration case, is that the target info wasn't always the same between src host and dest host.22:56
jgriffithpatrickeast: seems like it, the other thing is maybe hemna you should just throw up some code that does what you're talking about?22:56
hemnayah I could do that22:56
jgriffithhemna: since chatting and reading specs is always full of peril :)22:56
jgriffithhemna: maybe if I saw the code I'd just be like "Ohhhh SNAP, now I get what you're saying"22:57
hemnaI could make this a dep on scottda's microversion patch22:57
patrickeastwhat i'm wondering though is what live migration has to do with storing connector info in the cinder db for doing force clean-ups22:57
hemnaor not, and just put up a WIP POS patch22:57
patrickeastwouldn't we need it either way, live migration or not?22:57
jgriffithpatrickeast: yeah, I'm honestly really confused by all of this now TBH22:57
hemnapatrickeast, well only tangentially22:57
*** jungleboyj has quit IRC22:58
hemnabecause of the calling of initialize_connection multiple times throughout the process.22:58
patrickeastyea i mean, its the gun that was used to do the foot-shooting that requires having a force cleanup22:58
hemnanormally folks think of initialize_connection being called in the context of doing an attachment22:58
hemnabut live migration doesn't22:58
patrickeastgotcha, so for the db entry we need to know if we update it or not22:59
*** gouthamr has joined #openstack-cinder22:59
hemnaand if I add the connector to manager.attach_volume() then it's all good.22:59
hemnaas nova only calls that in the process of attaching a volume.22:59
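The change hemna is proposing, passing the connector into attach_volume and persisting it with the attachment, could look something like this. It is a rough sketch only: the in-memory "db" and field names are illustrative, not Cinder's actual volume_attachment schema or manager API.

```python
# Sketch: accept the connector in attach_volume and store it on the
# attachment record, so a later force-detach (ironic, or a deleted nova
# instance) can rebuild the connection from Cinder's own records.
class AttachmentDB:
    def __init__(self):
        self.attachments = {}

    def attach_volume(self, volume_id, instance_uuid, host, mountpoint,
                      connector=None):
        # Persist the connector alongside the attachment; attach_volume is
        # only called as part of a real attach, so this avoids the ambiguity
        # of the many initialize_connection call sites.
        self.attachments[volume_id] = {
            'instance_uuid': instance_uuid,
            'attached_host': host,
            'mountpoint': mountpoint,
            'connector': connector,
        }

    def get_connector(self, volume_id):
        # What a force-detach path would read back to call
        # terminate_connection without asking Nova.
        return self.attachments[volume_id]['connector']
```

Because only the attach path writes the record, live migration's extra initialize_connection calls can no longer overwrite the stored connector with the wrong host's info, which was the bug hemna hit.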
*** gouthamr_ has joined #openstack-cinder23:00
jgriffithhemna: I've seen this movie before :)23:00
*** daneyon has quit IRC23:01
hodoshas _update_volume_stats changed for iSCSI???23:02
jgriffithhodos: go to github and look23:02
hodosbut this is odd23:03
*** gouthamr has quit IRC23:03
*** diablo_rojo has quit IRC23:04
jgriffithhemna: so you just want to obsolete initialize_connection ?23:04
*** alejandrito has quit IRC23:04
hemnaI just want to add the connector to attach_volume23:05
jgriffithhemna: it seems weird that we're going to end up passing the connector in to all of these method calls23:05
hemnathe other way to do it23:05
jgriffithhemna: sure.... but you've already added it to create_export, now attach and it's already in initialize_connection23:05
hemnais to add instance_uuid/host to initialize_connection23:05
*** chlong has joined #openstack-cinder23:06
jgriffithhemna: Yeah, I just don't get it... but like I said; I shouldn't waste any more of your cycles on it23:06
*** smatzek has quit IRC23:06
hemnacoolsvap, I'll just post up a patch then23:06
jgriffithhemna: I'm sure you've looked at this more than I have at this point23:07
hemnabad xchat23:07
jgriffithhemna: and know what you need23:07
hodosjgriffith: so this is inconsistent now for remotefs and block_device23:11
*** jwcroppe has quit IRC23:11
hodosjgriffith: get_volume_stats i mean23:12
hodosjgriffith: remotefs check for self._stats23:12
hodosjgriffith: block_device - doesn't23:12
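The inconsistency hodos points out is between drivers that cache stats in self._stats and those that recompute every call. A consistent cached form might be sketched as follows; this is illustrative of the pattern, not the actual remotefs or block_device driver code.

```python
# Sketch of the self._stats caching pattern: get_volume_stats only hits the
# backend on the first call or when refresh=True is passed explicitly.
class StatsDriver:
    def __init__(self):
        self._stats = None
        self.backend_calls = 0  # instrumentation for the example only

    def _update_volume_stats(self):
        # Stand-in for the (possibly expensive) backend capacity query.
        self.backend_calls += 1
        self._stats = {'total_capacity_gb': 200, 'free_capacity_gb': 100}

    def get_volume_stats(self, refresh=False):
        if refresh or self._stats is None:
            self._update_volume_stats()
        return self._stats
```

A driver that skips the `refresh`/`self._stats` check queries the backend on every scheduler poll, which is the behavioral difference hodos is seeing between remotefs and block_device.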
openstackgerritAnthony Lee proposed openstack/cinder: Refactor HP 3PAR drivers to now be HPE
*** ociuhandu has quit IRC23:26
*** bswartz has joined #openstack-cinder23:27
openstackgerritAnthony Lee proposed openstack/cinder: Refactor HP LeftHand driver to now be HPE
openstackgerritAnthony Lee proposed openstack/cinder: Refactor HP 3PAR drivers to now be HPE
*** r-daneel has joined #openstack-cinder23:41
hodosjgriffith: i think this review is breaking it23:42
*** martyturner has quit IRC23:45
patrickeasthodos: whats broken with it?23:46
tbarronhodos: just saw your question re. fallocate in the backlog.  It does appear to be used only by glusterfs currently.23:46
hodospatrickeast: nothing, sorry, just need to update my driver23:46
tbarronhodos: but I don't see why nfs, etc. couldn't also use it when creating un-sparsed files.  Faster than 'dd'.23:47
tbarroneharney is on vacation this week or I'd ask him right now.23:47
hodospatrickeast: actually it's the first time when CI tests showed that i have something wrong in the driver code23:47
tbarronhodos: but why are you asking?  (go ahead and finish the other conversation first though :-)23:48
*** sileht has quit IRC23:48
* tbarron goes off to eat dinner.23:49
hodostbarron: exactly, it looks like we could improve _create_regular_file in remotefs driver23:49
hodostbarron: maybe not all OS have it?23:50
tbarronhodos: +1 unless there's something I don't see.23:50
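The _create_regular_file improvement tbarron and hodos are discussing, preallocating with fallocate and falling back to zero-fill where the OS doesn't support it, could be sketched like this. It is an illustration of the idea, not the actual remotefs driver code; `create_regular_file` is a hypothetical helper name.

```python
# Sketch: try posix_fallocate (one fast syscall, as the glusterfs driver
# effectively does via the fallocate CLI), and fall back to writing zeros
# (the dd-style path) if the platform or filesystem doesn't support it.
import os

def create_regular_file(path, size_bytes, chunk=1024 * 1024):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        try:
            # Fast path: allocate the blocks in one call.
            os.posix_fallocate(fd, 0, size_bytes)
            return
        except (AttributeError, OSError):
            # posix_fallocate missing or unsupported here: zero-fill instead.
            pass
        zeros = b'\0' * chunk
        written = 0
        while written < size_bytes:
            n = min(chunk, size_bytes - written)
            os.write(fd, zeros[:n])
            written += n
    finally:
        os.close(fd)
```

The fallback answers hodos's "maybe not all OS have it?" concern: platforms without fallocate support still get a fully allocated (un-sparsed) file, just more slowly.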
jgriffithhodos: why do you think that patch breaks something for you?23:50
jgriffithhodos: that isn't even merged yet, so not sure what you're running in to23:50
hodosjgriffith: i run into this during CI23:50
jgriffithhodos: oh, is your CI testing it?23:51
jgriffithhodos: or I should ask "which" CI is yours :)23:51
hodosjgriffith: yes, my driver is not passing it, i think it's a bug in my driver23:51
jgriffithhodos: which driver is yours?23:51
hodosjgriffith: nexenta/iscsi23:51
jgriffithI see IBM failed, but that one always fails23:51
hodosjgriffith: nexenta ci23:51
hodosjgriffith: yes there's useless code in my iscsi driver23:52
jgriffithhodos: oh... ouch!23:52
jgriffithhodos: ahh... yeah, you're getting the dreaded "No host found" :(23:53
hodosjgriffith: yes, because the driver got kicked out, it hasn't been updated in the patch23:54
jgriffiththat's a bummer (and a pain)23:55
*** garthb_ has quit IRC23:55
*** kevincar_ has joined #openstack-cinder23:56
*** willsama has quit IRC23:56
hodosjgriffith: well at least CI proved to do some good23:57
jgriffithhodos: indeed!23:57
*** garthb_ has joined #openstack-cinder23:59
*** kevincarr1991 has quit IRC23:59

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at!