Monday, 2015-12-07

*** alonma has joined #openstack-cinder00:03
*** alonma has quit IRC00:07
*** takedakn has quit IRC00:09
*** alonma has joined #openstack-cinder00:11
*** chlong has quit IRC00:13
*** jerrygb has quit IRC00:15
*** takedakn has joined #openstack-cinder00:15
*** alonma has quit IRC00:15
*** alonma has joined #openstack-cinder00:19
*** mylu_ has quit IRC00:21
*** mylu has joined #openstack-cinder00:22
*** alonma has quit IRC00:23
*** dims has quit IRC00:25
*** dims has joined #openstack-cinder00:26
*** mylu has quit IRC00:27
*** chlong has joined #openstack-cinder00:27
*** mylu has joined #openstack-cinder00:28
*** fmccrthy has quit IRC00:34
*** fmccrthy has joined #openstack-cinder00:34
*** dims has quit IRC00:35
*** salv-orlando has joined #openstack-cinder00:37
*** salv-orlando has quit IRC00:44
*** takedakn has quit IRC00:47
*** smoriya has quit IRC00:50
*** smoriya_afk has joined #openstack-cinder00:52
*** smoriya_afk is now known as smoriya00:53
*** chlong has quit IRC01:01
*** EinstCrazy has joined #openstack-cinder01:04
*** zhangjn has joined #openstack-cinder01:08
openstackgerritwanghao proposed openstack/cinder: Get the replica volume ref for DR  https://review.openstack.org/25225001:18
openstackgerritwanghao proposed openstack/cinder-specs: Get the replica volume ref for DR  https://review.openstack.org/18215001:27
openstackgerritwanghao proposed openstack/cinder-specs: Get the replica volume ref for DR  https://review.openstack.org/18215001:31
*** haomaiwang has joined #openstack-cinder01:32
*** ircuser-1 has joined #openstack-cinder01:33
*** sweston has quit IRC01:35
*** haomaiwang has quit IRC01:37
*** sweston has joined #openstack-cinder01:40
*** Thelo has quit IRC01:41
*** bardia has quit IRC01:43
*** tmzhang has joined #openstack-cinder01:47
*** mudassirlatif has joined #openstack-cinder01:48
*** cvstealth has quit IRC01:53
*** cvstealth has joined #openstack-cinder01:53
*** mudassirlatif has quit IRC01:59
*** smoriya has quit IRC02:00
*** cvstealth has quit IRC02:00
*** haomaiwang has joined #openstack-cinder02:03
*** cvstealth has joined #openstack-cinder02:04
*** alonma has joined #openstack-cinder02:07
*** haomaiwang has quit IRC02:09
*** cvstealth has quit IRC02:10
*** tmzhang has quit IRC02:10
*** zhangjn has quit IRC02:10
*** alonma has quit IRC02:11
*** alonma has joined #openstack-cinder02:13
*** alonma has quit IRC02:17
*** tianmingZhang has joined #openstack-cinder02:19
*** alonma has joined #openstack-cinder02:21
*** 7JTABCRUQ has joined #openstack-cinder02:22
*** Lee1092 has joined #openstack-cinder02:22
*** alonma has quit IRC02:26
*** diegows has joined #openstack-cinder02:27
*** gouthamr has joined #openstack-cinder02:28
*** alonma has joined #openstack-cinder02:30
*** dave-mccowan has quit IRC02:33
*** alonma has quit IRC02:34
*** chlong has joined #openstack-cinder02:36
*** alonma has joined #openstack-cinder02:37
*** alonma has quit IRC02:42
openstackgerritTakashi NATSUME proposed openstack/cinder: Enable volume owners to execute migrate_volume_completion  https://review.openstack.org/25336302:43
*** gcb has joined #openstack-cinder02:45
*** links has joined #openstack-cinder02:45
*** alonma has joined #openstack-cinder02:45
*** dave-mccowan has joined #openstack-cinder02:46
*** mylu has quit IRC02:47
*** diegows has quit IRC02:50
*** alonma has quit IRC02:50
*** klkumar has joined #openstack-cinder02:53
*** houming has joined #openstack-cinder02:53
*** alonma has joined #openstack-cinder02:57
*** smoriya_afk has joined #openstack-cinder03:00
*** smoriya_afk is now known as smoriya03:00
*** 7JTABCRUQ has quit IRC03:01
*** haomaiwang has joined #openstack-cinder03:01
*** alonma has quit IRC03:01
*** cvstealth has joined #openstack-cinder03:02
*** zhangjn has joined #openstack-cinder03:03
*** zhangjn has quit IRC03:03
*** zhangjn has joined #openstack-cinder03:05
*** cvstealth has quit IRC03:07
*** cvstealth has joined #openstack-cinder03:14
*** zhangjn has quit IRC03:15
*** cvstealth has quit IRC03:18
*** gcb has quit IRC03:38
*** tianmingZhang has quit IRC03:39
*** gcb has joined #openstack-cinder03:42
*** gcb has quit IRC03:48
*** gcb has joined #openstack-cinder03:51
*** houming_ has joined #openstack-cinder03:55
*** houming has quit IRC03:56
*** houming_ is now known as houming03:56
*** gcb has quit IRC03:56
*** mylu has joined #openstack-cinder03:56
*** alonma has joined #openstack-cinder04:00
*** ianbrown has joined #openstack-cinder04:00
*** haomaiwang has quit IRC04:01
*** haomaiwa_ has joined #openstack-cinder04:01
*** dims has joined #openstack-cinder04:03
*** alonma has quit IRC04:04
*** gcb has joined #openstack-cinder04:09
*** dave-mccowan has quit IRC04:12
*** ianbrown has quit IRC04:15
*** ianbrown_ has joined #openstack-cinder04:15
*** cvstealth has joined #openstack-cinder04:25
*** ianbrown_ is now known as ianbrown04:28
*** zhangjn has joined #openstack-cinder04:32
*** dims has quit IRC04:34
*** ianbrown has quit IRC04:42
*** ianbrown has joined #openstack-cinder04:42
*** gouthamr has quit IRC04:45
*** chirag has joined #openstack-cinder04:51
*** chirag has quit IRC04:54
*** chirag has joined #openstack-cinder04:57
<chirag> Can anyone help me with starting an incremental backup in the Kilo release? I am getting the error "unrecognized arguments: --incr volumebackups". Do we need to make any changes in the conf file?  [04:59]
<chirag> I am executing: cinder backup-create "volume-id" --incr volumebackups  [05:00]
*** alonma has joined #openstack-cinder05:00
*** haomaiwa_ has quit IRC05:01
*** haomaiwang has joined #openstack-cinder05:01
*** alonma has quit IRC05:04
*** chlong has quit IRC05:05
<lixiaoy1> chirag: https://review.openstack.org/#/c/216567/ it seems incremental is not supported  [05:09]
*** alonma has joined #openstack-cinder05:09
<lixiaoy1> chirag: sorry, the link is https://github.com/openstack/python-cinderclient/blob/stable/kilo/cinderclient/v2/shell.py  [05:10]
<lixiaoy1> chirag: line 1005  [05:11]
*** bardia has joined #openstack-cinder05:12
*** alonma has quit IRC05:14
*** alonma has joined #openstack-cinder05:15
*** alonma has quit IRC05:20
<chirag> lixiaoy1: I read many articles; one of them is https://community.emc.com/community/connect/everything-openstack/blog/2015/04/24/what-s-interesting-with-openstack-cinder-in-kilo  [05:20]
<chirag> another is http://content.mirantis.com/whats-new-in-openstack-kilo-webcast-landing-page.html  [05:22]
*** chlong has joined #openstack-cinder05:23
<chirag> But as per the link you provided, incremental backup is not supported.  [05:23]
*** alonma has joined #openstack-cinder05:23
*** dims has joined #openstack-cinder05:24
<lixiaoy1> chirag: https://github.com/openstack/cinder/blob/stable/kilo/cinder/api/contrib/backups.py#L240  [05:26]
<lixiaoy1> this function was implemented in cinder, not in python-cinderclient  [05:26]
<lixiaoy1> chirag: as a result you can't run this from the command line, but it seems you can do it through the REST interface  [05:27]
<lixiaoy1> :q  [05:27]
*** shausy has joined #openstack-cinder05:28
*** alonma has quit IRC05:28
*** shausy has quit IRC05:28
*** shausy has joined #openstack-cinder05:28
*** ianbrown has quit IRC05:29
*** ianbrown has joined #openstack-cinder05:29
<chirag> Could you provide a link with some examples?  [05:30]
*** mylu has quit IRC05:32
<lixiaoy1> chirag: http://developer.openstack.org/api-ref-blockstorage-v2.html (create backup)  [05:32]
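(A minimal sketch of the REST route suggested above, assuming a Kilo-era Block Storage v2 endpoint; the endpoint, token, and IDs below are placeholders. The "incremental" flag in the request body is the option the Kilo CLI cannot pass.)

    import json
    import requests

    # Placeholders -- substitute a real API endpoint, project id, token and volume id.
    CINDER_V2 = "http://controller:8776/v2/<project_id>"
    TOKEN = "<keystone-token>"
    VOLUME_ID = "<volume-uuid>"

    body = {"backup": {"volume_id": VOLUME_ID,
                       "container": "volumebackups",
                       "name": "my-incremental-backup",
                       "incremental": True}}  # the flag the Kilo CLI cannot pass

    resp = requests.post(CINDER_V2 + "/backups",
                         headers={"X-Auth-Token": TOKEN,
                                  "Content-Type": "application/json"},
                         data=json.dumps(body))
    print(resp.status_code, resp.json())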
*** zhangjn has quit IRC05:33
*** mylu has joined #openstack-cinder05:34
*** deepakcs has joined #openstack-cinder05:36
openstackgerritBardia Keyoumarsi proposed openstack/cinder: Volume driver for Coho Data storage solutions  https://review.openstack.org/24669005:39
*** bardia has quit IRC05:41
*** lixiaoy1 has quit IRC05:43
*** lprice1 has joined #openstack-cinder05:44
*** dims has quit IRC05:46
*** lprice has quit IRC05:46
*** mudassirlatif has joined #openstack-cinder05:48
*** zhangjn has joined #openstack-cinder05:49
*** mudassirlatif has quit IRC05:57
*** mudassirlatif has joined #openstack-cinder05:59
*** mylu has quit IRC05:59
*** mylu has joined #openstack-cinder05:59
*** haomaiwang has quit IRC06:01
*** mudassirlatif has quit IRC06:01
*** haomaiwa_ has joined #openstack-cinder06:01
*** zhangjn has quit IRC06:05
*** zhangjn has joined #openstack-cinder06:07
*** ChubYann has quit IRC06:11
*** yangyapeng has joined #openstack-cinder06:22
*** alonma has joined #openstack-cinder06:25
*** lpetrut has joined #openstack-cinder06:26
*** ianbrown has quit IRC06:26
*** zhonghua is now known as zhonghua-lee06:26
*** ianbrown has joined #openstack-cinder06:26
*** alonma has quit IRC06:29
*** ianbrown has quit IRC06:30
*** ianbrown has joined #openstack-cinder06:30
*** alonma has joined #openstack-cinder06:31
*** ianbrown has quit IRC06:31
*** dims has joined #openstack-cinder06:32
*** ianbrown has joined #openstack-cinder06:32
*** boris-42_ has quit IRC06:33
*** jwcroppe has quit IRC06:34
*** alonma has quit IRC06:36
*** chirag has quit IRC06:38
*** lpetrut has quit IRC06:44
*** dims has quit IRC06:45
*** markus_z has joined #openstack-cinder06:49
*** openstackgerrit_ has joined #openstack-cinder06:52
*** dims has joined #openstack-cinder06:52
openstackgerritTakashi NATSUME proposed openstack/cinder: Enable volume owners to execute migrate_volume_completion  https://review.openstack.org/25336306:52
*** markus_z has quit IRC06:54
*** alonma has joined #openstack-cinder06:54
*** alonma has quit IRC06:59
*** haomaiwa_ has quit IRC07:01
*** p0rtal_ has quit IRC07:01
*** haomaiwang has joined #openstack-cinder07:01
*** alonma has joined #openstack-cinder07:01
*** mylu has quit IRC07:03
*** alonma has quit IRC07:05
*** p0rtal has joined #openstack-cinder07:09
*** alonma has joined #openstack-cinder07:11
*** nkrinner has joined #openstack-cinder07:13
*** alonma has quit IRC07:15
*** alonma has joined #openstack-cinder07:18
*** jaypipes has joined #openstack-cinder07:18
*** alonma has quit IRC07:22
*** p0rtal has quit IRC07:24
*** alonma has joined #openstack-cinder07:24
*** alexschm has joined #openstack-cinder07:26
*** alonma has quit IRC07:28
*** chlong has quit IRC07:29
*** alonma has joined #openstack-cinder07:32
*** alonma has quit IRC07:37
*** alonma has joined #openstack-cinder07:47
*** alonma has quit IRC07:51
*** alonma has joined #openstack-cinder07:53
*** klkumar has quit IRC07:54
*** klkumar has joined #openstack-cinder07:55
*** zhangjn has quit IRC07:55
*** ianbrown_ has joined #openstack-cinder07:55
*** alonma has quit IRC07:58
*** ianbrown has quit IRC07:58
*** egonzalez has joined #openstack-cinder08:00
*** haomaiwang has quit IRC08:01
*** haomaiwang has joined #openstack-cinder08:01
*** alonma has joined #openstack-cinder08:02
*** haomaiwang has quit IRC08:05
*** liverpooler has joined #openstack-cinder08:05
*** haomaiwa_ has joined #openstack-cinder08:05
openstackgerritLisaLi proposed openstack/cinder: Update retype limitation in volume/api  https://review.openstack.org/24546008:06
*** alonma has quit IRC08:06
*** alonma has joined #openstack-cinder08:08
*** salv-orlando has joined #openstack-cinder08:09
*** lixiaoy1 has joined #openstack-cinder08:10
openstackgerritVincent Hou proposed openstack/cinder: Storwize: Implement v2 replication (split IO)  https://review.openstack.org/23738708:10
*** salv-orlando has quit IRC08:12
*** alonma has quit IRC08:13
*** shakamunyi has quit IRC08:14
*** alonma has joined #openstack-cinder08:15
*** zhangjn has joined #openstack-cinder08:16
*** alonma has quit IRC08:19
openstackgerritVincent Hou proposed openstack/cinder: Storwize: Implement v2 replication (mirror)  https://review.openstack.org/24904208:19
openstackgerritVincent Hou proposed openstack/cinder: Storwize: Implement v2 replication (split IO)  https://review.openstack.org/23738708:19
*** ianbrown_ has quit IRC08:19
*** alonma has joined #openstack-cinder08:22
*** liverpoo1er has joined #openstack-cinder08:22
*** ianbrown_ has joined #openstack-cinder08:23
*** p0rtal has joined #openstack-cinder08:24
*** alonma has quit IRC08:27
*** shz has quit IRC08:29
*** p0rtal has quit IRC08:29
*** shz has joined #openstack-cinder08:30
*** e0ne has joined #openstack-cinder08:31
*** klkumar has quit IRC08:31
*** zhangjn has quit IRC08:33
openstackgerritwanghao proposed openstack/cinder-specs: Add pagination support to other resources  https://review.openstack.org/24728408:34
*** klkumar has joined #openstack-cinder08:34
openstackgerritVincent Hou proposed openstack/cinder: Storwize: Implement v2 replication (mirror)  https://review.openstack.org/24904208:34
openstackgerritVincent Hou proposed openstack/cinder: Storwize: Implement v2 replication (split IO)  https://review.openstack.org/23738708:34
*** sgotliv_ has quit IRC08:37
*** alonma has joined #openstack-cinder08:42
*** ildikov has quit IRC08:42
*** zhangjn has joined #openstack-cinder08:46
*** zhangjn has quit IRC08:46
*** alonma has quit IRC08:47
*** ianbrown_ has quit IRC08:51
*** jordanP has joined #openstack-cinder08:53
*** alonma has joined #openstack-cinder08:55
*** e0ne has quit IRC08:55
*** ianbrown has joined #openstack-cinder08:56
*** alonma has quit IRC08:58
*** alonma has joined #openstack-cinder08:58
*** mkoderer has quit IRC08:59
*** zhangjn has joined #openstack-cinder09:00
*** e0ne has joined #openstack-cinder09:00
*** haomaiwa_ has quit IRC09:01
*** mkoderer has joined #openstack-cinder09:01
*** haomaiwang has joined #openstack-cinder09:02
*** e0ne has quit IRC09:03
*** zhangjn has quit IRC09:04
*** shausy has quit IRC09:08
*** shausy has joined #openstack-cinder09:08
*** zhangjn has joined #openstack-cinder09:09
*** zhangjn has quit IRC09:10
*** lpetrut has joined #openstack-cinder09:11
*** zhangjn has joined #openstack-cinder09:13
*** zhangjn has quit IRC09:13
*** vinayp has joined #openstack-cinder09:15
*** alonma has quit IRC09:19
*** alonma has joined #openstack-cinder09:20
*** ankit_ag has joined #openstack-cinder09:21
*** salv-orlando has joined #openstack-cinder09:22
openstackgerritzhangsong proposed openstack/os-brick: Fix the bug of devices list is always none  https://review.openstack.org/25409109:23
*** alonma has quit IRC09:24
*** alonma has joined #openstack-cinder09:26
*** alonma has quit IRC09:31
*** zhangjn has joined #openstack-cinder09:32
*** alonma has joined #openstack-cinder09:32
*** jistr has joined #openstack-cinder09:36
*** alonma has quit IRC09:37
*** alonma has joined #openstack-cinder09:39
*** alonma has quit IRC09:39
*** alonma has joined #openstack-cinder09:39
*** zhangjn has quit IRC09:40
*** ianbrown has quit IRC09:40
*** ianbrown has joined #openstack-cinder09:45
*** houming has quit IRC09:47
*** zhangjn has joined #openstack-cinder09:47
*** pschaef has joined #openstack-cinder09:47
*** zhangjn has quit IRC09:47
*** pschaef has quit IRC09:47
*** dims has quit IRC09:48
*** houming has joined #openstack-cinder09:49
*** ildikov has joined #openstack-cinder09:49
*** sgotliv_ has joined #openstack-cinder09:54
*** e0ne has joined #openstack-cinder09:56
*** haomaiwang has quit IRC10:01
*** haomaiwang has joined #openstack-cinder10:01
*** zhenguo has quit IRC10:03
*** vgridnev has joined #openstack-cinder10:05
*** yhayashi has quit IRC10:08
openstackgerritHelen Walsh proposed openstack/cinder: Error handling for invalid SLO/Workload combo  https://review.openstack.org/24383710:11
*** ianbrown has quit IRC10:13
*** p0rtal has joined #openstack-cinder10:13
*** ianbrown has joined #openstack-cinder10:13
*** p0rtal has quit IRC10:17
*** aix has joined #openstack-cinder10:19
*** ildikov has quit IRC10:19
*** cburgess has quit IRC10:22
*** openstackgerrit_ has quit IRC10:23
*** salv-orlando has quit IRC10:23
*** cburgess has joined #openstack-cinder10:27
*** jistr has quit IRC10:29
*** jistr has joined #openstack-cinder10:30
*** ociuhandu has quit IRC10:30
*** cburgess has quit IRC10:31
*** alonma has quit IRC10:31
*** alonma has joined #openstack-cinder10:31
*** cburgess has joined #openstack-cinder10:32
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - getting iscsi ip from port in existing masking view  https://review.openstack.org/24599710:32
*** Thelo has joined #openstack-cinder10:32
*** klkumar has quit IRC10:34
*** ildikov has joined #openstack-cinder10:35
*** alonma has quit IRC10:36
*** alonma has joined #openstack-cinder10:37
*** jaypipes has quit IRC10:39
*** jaypipes has joined #openstack-cinder10:40
*** alonma has quit IRC10:41
*** cburgess has quit IRC10:41
*** kambiz has quit IRC10:42
*** cburgess has joined #openstack-cinder10:43
*** alonma has joined #openstack-cinder10:43
*** johnthetubaguy has quit IRC10:43
*** nikeshm has joined #openstack-cinder10:44
*** kambiz has joined #openstack-cinder10:45
*** klkumar has joined #openstack-cinder10:46
*** eduardo___ has joined #openstack-cinder10:46
*** haomaiwang has quit IRC10:47
*** alonma has quit IRC10:48
*** johnthetubaguy has joined #openstack-cinder10:49
*** alonma has joined #openstack-cinder10:49
*** haomaiwang has joined #openstack-cinder10:50
*** ociuhandu has joined #openstack-cinder10:54
*** alonma has quit IRC10:54
*** alonma has joined #openstack-cinder10:56
*** cburgess has quit IRC10:56
*** cburgess has joined #openstack-cinder10:59
*** alonma has quit IRC11:00
*** alonma has joined #openstack-cinder11:00
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - VMAX driver failing to remove zones  https://review.openstack.org/24493311:05
*** fmccrthy has quit IRC11:07
*** fmccrthy has joined #openstack-cinder11:07
*** yangyapeng has quit IRC11:17
*** EinstCrazy has quit IRC11:17
*** johnthetubaguy has quit IRC11:21
*** alonma has quit IRC11:21
*** johnthetubaguy has joined #openstack-cinder11:21
*** alonma has joined #openstack-cinder11:22
*** alonma has quit IRC11:26
openstackgerritzhangsong proposed openstack/os-brick: Fix the bug of devices list is always none  https://review.openstack.org/25409111:31
*** alonma has joined #openstack-cinder11:33
*** dave-mccowan has joined #openstack-cinder11:35
openstackgerritzhangsong proposed openstack/os-brick: Fix the bug of devices list is always none  https://review.openstack.org/25409111:35
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - Fix for last volume in VMAX3 storage group  https://review.openstack.org/24433111:35
openstackgerritzhangsong proposed openstack/os-brick: Fix the bug of devices list is always none  https://review.openstack.org/25409111:36
*** alonma has quit IRC11:37
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - Changing PercentSynced to CopyState in isSynched  https://review.openstack.org/24699211:39
*** alonma has joined #openstack-cinder11:39
*** liverpoo1er has quit IRC11:42
*** zhonghua-lee has quit IRC11:42
*** liverpooler has quit IRC11:42
*** liverpooler has joined #openstack-cinder11:42
*** liverpoo1er has joined #openstack-cinder11:42
*** zhonghua-lee has joined #openstack-cinder11:42
*** EinstCrazy has joined #openstack-cinder11:43
*** alonma has quit IRC11:43
*** liverpoo1er has quit IRC11:43
*** liverpooler has quit IRC11:44
*** liverpooler has joined #openstack-cinder11:44
*** alonma has joined #openstack-cinder11:45
*** deepakcs has quit IRC11:46
*** noqa_v_q1ovnie is now known as noqa_v_qoovnie11:48
*** alonma has quit IRC11:49
*** alonma has joined #openstack-cinder11:49
*** klkumar has quit IRC11:51
*** klkumar has joined #openstack-cinder11:53
*** [1]Thelo has joined #openstack-cinder11:53
*** Thelo has quit IRC11:55
*** [1]Thelo is now known as Thelo11:55
*** p0rtal has joined #openstack-cinder12:01
*** salv-orlando has joined #openstack-cinder12:03
*** salv-orlando has quit IRC12:03
*** salv-orlando has joined #openstack-cinder12:04
*** p0rtal has quit IRC12:05
*** dims has joined #openstack-cinder12:08
openstackgerritjaveme proposed openstack/cinder: encode the url parameters  https://review.openstack.org/25415712:12
*** jaypipes has quit IRC12:12
*** raildo-afk is now known as raildo12:14
*** lpetrut has quit IRC12:15
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - Extend Volume for VMAX3  https://review.openstack.org/24894112:15
*** links has quit IRC12:15
*** lpetrut has joined #openstack-cinder12:17
*** isaacb has joined #openstack-cinder12:20
*** houming has quit IRC12:21
*** cdelatte has joined #openstack-cinder12:22
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - Incorrect storage group selected on an VMAX3 attach  https://review.openstack.org/25044312:27
*** chlong has joined #openstack-cinder12:30
*** andymaier has joined #openstack-cinder12:42
*** andymaier has quit IRC12:46
*** andymaier has joined #openstack-cinder12:47
*** alonma has quit IRC12:48
*** mylu has joined #openstack-cinder12:48
*** alonma has joined #openstack-cinder12:49
*** alonma has quit IRC12:53
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - Fix for randomly selecting a portgroup  https://review.openstack.org/24330412:54
*** alonma has joined #openstack-cinder12:55
*** mylu has quit IRC12:55
*** haomaiwang has quit IRC12:56
*** gcb has quit IRC12:58
*** alonma has quit IRC12:59
*** smoriya_ has quit IRC13:01
*** alonma has joined #openstack-cinder13:01
*** dustins has joined #openstack-cinder13:02
*** akerr has quit IRC13:05
*** alonma has quit IRC13:05
*** alonma has joined #openstack-cinder13:07
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - get_short_host_name not called in find_device_number  https://review.openstack.org/25209313:11
*** alonma has quit IRC13:11
*** alonma has joined #openstack-cinder13:13
*** zhonghua-lee has quit IRC13:14
*** zhonghua-lee has joined #openstack-cinder13:14
*** gouthamr has joined #openstack-cinder13:17
*** alonma has quit IRC13:18
*** alonma has joined #openstack-cinder13:19
*** gouthamr_ has joined #openstack-cinder13:22
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - Replacing deprecated API EMCGetTargetEndpoints  https://review.openstack.org/24432813:23
*** gouthamr has quit IRC13:23
*** adrianofr has joined #openstack-cinder13:23
*** alonma has quit IRC13:24
*** alonma has joined #openstack-cinder13:26
*** jaypipes has joined #openstack-cinder13:30
*** alonma has quit IRC13:30
*** timcl has joined #openstack-cinder13:31
openstackgerritMerged openstack/python-cinderclient: Remove py26 support  https://review.openstack.org/25223213:32
*** alonma has joined #openstack-cinder13:34
*** fthiagogv has joined #openstack-cinder13:34
*** haomaiwang has joined #openstack-cinder13:36
*** fthiagogv has quit IRC13:36
*** fthiagogv has joined #openstack-cinder13:37
*** fthiagogv has quit IRC13:38
openstackgerritCyril Roelandt proposed openstack/cinder: Python 3: fix a lot of tests  https://review.openstack.org/25419813:38
*** diablo_rojo has joined #openstack-cinder13:38
openstackgerritHelen Walsh proposed openstack/cinder: EMC VMAX - _remove_last_vol_and_delete_sg not being called for V3  https://review.openstack.org/25206613:38
*** alonma has quit IRC13:38
*** fthiagogv has joined #openstack-cinder13:39
*** edmondsw has joined #openstack-cinder13:39
*** klkumar has quit IRC13:40
*** alonma has joined #openstack-cinder13:40
*** klkumar has joined #openstack-cinder13:40
*** akerr has joined #openstack-cinder13:42
*** alonma has quit IRC13:45
*** alonma has joined #openstack-cinder13:46
*** p0rtal has joined #openstack-cinder13:49
*** diablo_rojo has quit IRC13:49
*** alonma has quit IRC13:51
openstackgerritSzymon WrĂłblewski proposed openstack/cinder: Tooz locks  https://review.openstack.org/18353713:52
*** boris-42_ has joined #openstack-cinder13:52
*** diablo_rojo has joined #openstack-cinder13:52
*** alonma has joined #openstack-cinder13:53
*** p0rtal has quit IRC13:53
*** jaypipes_ has joined #openstack-cinder13:55
*** alonma has quit IRC13:57
*** diablo_rojo1 has joined #openstack-cinder13:57
*** jaypipes_ has quit IRC13:58
*** diablo_rojo has quit IRC13:59
*** alonma has joined #openstack-cinder13:59
*** julim has joined #openstack-cinder13:59
*** alonma has quit IRC14:03
*** lprice1 has quit IRC14:04
*** jerrygb has joined #openstack-cinder14:04
*** alonma has joined #openstack-cinder14:05
*** dustins has quit IRC14:07
*** alonma has quit IRC14:10
*** alonma has joined #openstack-cinder14:11
*** Thelo has quit IRC14:13
*** [1]Thelo has joined #openstack-cinder14:14
*** alonma has quit IRC14:15
*** shz has quit IRC14:16
*** dims has quit IRC14:16
*** shz has joined #openstack-cinder14:17
*** hogepodge has quit IRC14:17
*** thingee has quit IRC14:17
*** alonma has joined #openstack-cinder14:17
*** breitz has joined #openstack-cinder14:18
*** klkumar has quit IRC14:21
*** klkumar has joined #openstack-cinder14:22
*** mdbooth_ has joined #openstack-cinder14:22
*** alonma has quit IRC14:22
*** merooney has joined #openstack-cinder14:23
*** mdbooth has quit IRC14:23
*** grumpycatt has quit IRC14:23
*** grumpycatt has joined #openstack-cinder14:23
*** mdbooth_ is now known as mdbooth14:23
openstackgerritSean McGinnis proposed openstack/python-cinderclient: Update minimum tox version to 1.8  https://review.openstack.org/24143414:23
*** alonma has joined #openstack-cinder14:24
*** baumann has joined #openstack-cinder14:25
*** jungleboyj has joined #openstack-cinder14:26
*** alonma has quit IRC14:28
*** timcl has quit IRC14:28
*** superdan is now known as dansmith14:32
*** dims has joined #openstack-cinder14:34
openstackgerritCyril Roelandt proposed openstack/cinder: Python 3: fix a lot of tests  https://review.openstack.org/25419814:34
*** baumann has quit IRC14:36
*** jgregor has joined #openstack-cinder14:37
*** alonma has joined #openstack-cinder14:40
*** akshai has joined #openstack-cinder14:43
*** alonma has quit IRC14:45
*** Thelo_ has joined #openstack-cinder14:46
openstackgerritSean McGinnis proposed openstack/os-brick: Add reno for release notes management  https://review.openstack.org/25320714:47
*** alonma has joined #openstack-cinder14:47
*** baumann has joined #openstack-cinder14:47
*** Yogi11 has joined #openstack-cinder14:47
openstackgerritSean McGinnis proposed openstack/python-cinderclient: Add reno for release notes management  https://review.openstack.org/25320614:48
*** gcb has joined #openstack-cinder14:50
*** alonma has quit IRC14:51
*** shausy has quit IRC14:52
*** alonma has joined #openstack-cinder14:53
*** smoriya has quit IRC14:54
*** crose has joined #openstack-cinder14:55
*** mriedem has joined #openstack-cinder14:55
*** smoriya_afk has joined #openstack-cinder14:56
*** smoriya_afk is now known as smoriya14:56
*** alonma has quit IRC14:57
*** martyturner has joined #openstack-cinder14:58
*** alonma has joined #openstack-cinder14:59
*** pots has joined #openstack-cinder15:00
*** diablo_rojo has joined #openstack-cinder15:01
*** ctina has joined #openstack-cinder15:02
*** dims has quit IRC15:02
*** timcl has joined #openstack-cinder15:02
*** diablo_rojo1 has quit IRC15:02
*** mc_nair has joined #openstack-cinder15:03
*** anshul has quit IRC15:03
*** alonma has quit IRC15:03
*** jungleboyj has quit IRC15:03
*** alonma has joined #openstack-cinder15:05
*** thangp has joined #openstack-cinder15:06
*** Thelo_ has quit IRC15:09
*** alonma has quit IRC15:10
openstackgerritSean McGinnis proposed openstack/os-brick: Add reno for release notes management  https://review.openstack.org/25320715:10
*** lprice has joined #openstack-cinder15:12
*** davechen has joined #openstack-cinder15:13
*** hogepodge has joined #openstack-cinder15:14
<tbarron> DuncanT: thanks to lixiaoy1 and dulek I think https://review.openstack.org/#/c/240978/ is ready for your scrutiny when you have a chance.  [15:15]
*** dustins has joined #openstack-cinder15:15
*** ctina has quit IRC15:16
*** ctina has joined #openstack-cinder15:16
*** alonma has joined #openstack-cinder15:17
*** davechen1 has joined #openstack-cinder15:19
*** ctina_ has joined #openstack-cinder15:19
openstackgerritNate Potter proposed openstack/cinder: Check backup service before backup delete  https://review.openstack.org/24649815:20
*** thingee has joined #openstack-cinder15:20
*** crose has quit IRC15:21
*** cbader has joined #openstack-cinder15:21
*** davechen has quit IRC15:22
*** ctina has quit IRC15:22
*** alonma has quit IRC15:22
*** alonma has joined #openstack-cinder15:23
*** delattec has joined #openstack-cinder15:23
*** zhonghua-lee has quit IRC15:25
*** zhonghua-lee has joined #openstack-cinder15:26
*** cdelatte has quit IRC15:26
*** mtanino has joined #openstack-cinder15:27
*** alonma has quit IRC15:28
*** mylu has joined #openstack-cinder15:32
*** kfarr has joined #openstack-cinder15:33
*** alonma has joined #openstack-cinder15:34
<e0ne> smcginnis: thanks for fixing the empty .placeholders in your patches! let's keep our code as clean as possible  [15:34]
<smcginnis> e0ne: Yeah, it's kind of the convention on all of those reno patches, but might as well skip it.  [15:36]
*** julim has quit IRC15:37
*** p0rtal has joined #openstack-cinder15:37
*** nkrinner has quit IRC15:37
*** mdenny has joined #openstack-cinder15:38
*** haomaiwang has quit IRC15:39
*** alonma has quit IRC15:39
*** liverpooler has quit IRC15:39
*** smoriya_ has joined #openstack-cinder15:40
*** julim has joined #openstack-cinder15:40
*** alonma has joined #openstack-cinder15:40
*** haomaiwang has joined #openstack-cinder15:40
*** jaypipes has quit IRC15:41
*** p0rtal has quit IRC15:42
*** changbl has quit IRC15:42
*** isaacb has quit IRC15:42
*** timcl has quit IRC15:43
*** mylu has quit IRC15:43
*** xyang1 has joined #openstack-cinder15:44
*** vgridnev has quit IRC15:44
*** alonma has quit IRC15:45
*** changbl has joined #openstack-cinder15:46
*** alonma has joined #openstack-cinder15:46
*** dave-mccowan has quit IRC15:47
*** jgregor has quit IRC15:51
*** alonma has quit IRC15:51
*** rushiagr_away is now known as rushiagr15:52
*** alonma has joined #openstack-cinder15:53
*** jgregor has joined #openstack-cinder15:53
*** changbl has quit IRC15:53
*** rushiagr is now known as rushiagr_away15:54
*** alonma has quit IRC15:57
*** harlowja_at_home has joined #openstack-cinder15:58
*** alonma has joined #openstack-cinder15:59
*** mriedem is now known as mriedem_meeting16:00
*** dave-mccowan has joined #openstack-cinder16:01
*** haomaiwang has quit IRC16:01
*** haomaiwang has joined #openstack-cinder16:01
openstackgerritNate Potter proposed openstack/cinder: Move snapshot and volume quota checks to API  https://review.openstack.org/24938816:02
*** alonma has quit IRC16:03
*** alonma has joined #openstack-cinder16:05
*** changbl has joined #openstack-cinder16:06
*** davechen has joined #openstack-cinder16:08
*** daneyon_ has joined #openstack-cinder16:09
*** jungleboyj has joined #openstack-cinder16:09
*** Guest69903 is now known as cfouts16:09
*** alonma has quit IRC16:09
*** davechen1 has quit IRC16:10
*** alonma has joined #openstack-cinder16:11
*** daneyon has quit IRC16:12
*** xyang1 has quit IRC16:13
*** cdelatte has joined #openstack-cinder16:14
*** delattec has quit IRC16:15
*** jaypipes has joined #openstack-cinder16:15
*** alonma has quit IRC16:16
*** alonma has joined #openstack-cinder16:17
*** dustins_ has joined #openstack-cinder16:18
*** dustins has quit IRC16:19
openstackgerritNate Potter proposed openstack/cinder: Move consistency group quota checks to API  https://review.openstack.org/24944116:20
*** alonma has quit IRC16:22
*** dustins_ is now known as dustins16:23
*** p0rtal has joined #openstack-cinder16:24
openstackgerritNate Potter proposed openstack/cinder: Move consistency group quota checks to API  https://review.openstack.org/24944116:24
*** alonma has joined #openstack-cinder16:24
*** dustins has quit IRC16:25
openstackgerritIvan Kolodyazhny proposed openstack/cinder: Use Cinder API v2 for Rally scenarios  https://review.openstack.org/25428516:25
*** EinstCrazy has quit IRC16:26
*** alexschm has quit IRC16:26
*** dustins has joined #openstack-cinder16:27
*** aix has quit IRC16:28
*** alonma has quit IRC16:29
*** sgotliv_ has quit IRC16:30
*** jdurgin1 has joined #openstack-cinder16:30
*** mriedem_meeting is now known as mriedem16:30
*** dustins has quit IRC16:31
*** dustins has joined #openstack-cinder16:32
*** r-daneel has joined #openstack-cinder16:34
*** vgridnev has joined #openstack-cinder16:36
*** davechen has quit IRC16:39
<hemna> vinayp: ping  [16:40]
*** mylu has joined #openstack-cinder16:41
*** mylu has quit IRC16:42
*** ctina_ has quit IRC16:44
*** jordanP has quit IRC16:45
*** timcl has joined #openstack-cinder16:45
*** davechen has joined #openstack-cinder16:45
*** apoorvad has joined #openstack-cinder16:46
openstackgerritYuriy Nesenenko proposed openstack/cinder: Add synchronization in Block Device driver  https://review.openstack.org/25297916:48
*** david-lyle has joined #openstack-cinder16:49
*** haomaiwang has quit IRC16:51
openstackgerritYuriy Nesenenko proposed openstack/cinder: Implement snapshots-related features for Block Device Driver  https://review.openstack.org/25311116:51
*** rhedlind has quit IRC16:52
openstackgerritMerged openstack/cinder-specs: Scaling backup service blueprint spec  https://review.openstack.org/24097816:52
openstackgerritSean McGinnis proposed openstack/cinder: Update bandit config file for new plugins  https://review.openstack.org/21254116:53
*** links has joined #openstack-cinder16:53
*** ctina_ has joined #openstack-cinder16:58
*** jdurgin1 has quit IRC16:58
*** baumann has quit IRC17:02
*** lpetrut has quit IRC17:02
*** baumann has joined #openstack-cinder17:02
*** davechen1 has joined #openstack-cinder17:09
*** harlowja_at_home has quit IRC17:10
*** jistr has quit IRC17:11
*** davechen has quit IRC17:12
*** links has quit IRC17:14
*** gcb has quit IRC17:15
*** leeantho has joined #openstack-cinder17:19
*** baumann has left #openstack-cinder17:22
*** baumann has joined #openstack-cinder17:23
openstackgerritYuriy Nesenenko proposed openstack/cinder: Add synchronization in Block Device driver  https://review.openstack.org/25297917:24
*** zz_john5223 is now known as john522317:25
*** salv-orl_ has joined #openstack-cinder17:25
openstackgerritYuriy Nesenenko proposed openstack/cinder: Implement snapshots-related features for Block Device Driver  https://review.openstack.org/25311117:26
*** e0ne has quit IRC17:27
*** salv-orlando has quit IRC17:28
*** egonzalez has quit IRC17:28
*** p0rtal has quit IRC17:30
*** davechen1 is now known as davechen17:34
*** klkumar has quit IRC17:36
*** pots has quit IRC17:36
*** kfarr has quit IRC17:39
openstackgerritYuriy Nesenenko proposed openstack/cinder: Add synchronization in Block Device driver  https://review.openstack.org/25297917:43
*** alonma has joined #openstack-cinder17:49
*** ildikov has quit IRC17:51
*** alonma has quit IRC17:53
*** edmondsw has quit IRC17:53
*** alonma has joined #openstack-cinder17:55
*** p0rtal has joined #openstack-cinder17:55
*** jistr has joined #openstack-cinder17:59
*** alonma has quit IRC17:59
*** sgotliv_ has joined #openstack-cinder18:00
*** alonma has joined #openstack-cinder18:03
*** fuentess has joined #openstack-cinder18:04
*** alonma has quit IRC18:07
*** eduardo___ has quit IRC18:08
*** julim has quit IRC18:08
*** alonma has joined #openstack-cinder18:10
*** smoriya_ has quit IRC18:12
*** alonma has quit IRC18:14
*** dims has joined #openstack-cinder18:16
*** andymaier has quit IRC18:16
*** alonma has joined #openstack-cinder18:17
*** lpetrut has joined #openstack-cinder18:18
*** alonma has quit IRC18:21
*** e0ne has joined #openstack-cinder18:24
*** alonma has joined #openstack-cinder18:25
*** bardia has joined #openstack-cinder18:25
*** davechen1 has joined #openstack-cinder18:28
*** EinstCrazy has joined #openstack-cinder18:28
*** alonma has quit IRC18:29
*** davechen has quit IRC18:30
*** EinstCrazy has quit IRC18:34
*** salv-orlando has joined #openstack-cinder18:35
*** salv-orl_ has quit IRC18:35
<mriedem> when a snapshot is created in cinder, does it create a new volume?  [18:36]
<mriedem> i'm trying to sort out https://bugs.launchpad.net/cinder/+bug/1520296  [18:36]
<openstack> Launchpad bug 1520296 in tempest "tempest test failed test_create_ebs_image_and_check_boot" [Undecided,New]  [18:36]
<jgriffith> mriedem: nope  [18:36]
<jgriffith> mriedem: you need to clarify though, Ceph backend?  [18:36]
<mriedem> jgriffith: yes  [18:37]
<jgriffith> mriedem: based on the bug yes...  [18:37]
<mriedem> http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-c-vol.txt.gz#_2015-12-07_05_51_07_645  [18:37]
<jgriffith> mriedem: so ceph does some things "differently"  [18:37]
<mriedem> Volume 19efe5d8-fa71-4569-b806-dfe2e0080b7f: being created as snap with specification: {'status': u'creating', 'volume_size': 1, 'volume_name': 'volume-19efe5d8-fa71-4569-b806-dfe2e0080b7f', 'snapshot_id': '366fab34-8494-47ee-925e-1505b7521744'}  [18:37]
<mriedem> eventually when we go to delete the snapshot volume, it fails because this internal volume thing it created during the snapshot operation is still around http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-c-vol.txt.gz#_2015-12-07_05_51_17_808  [18:38]
<jgriffith> mriedem: yes, I looked at that briefly with jbernard last week  [18:38]
<jgriffith> mriedem: he was digging in and might have some more info...  [18:38]
<jgriffith> jbernard: around?  [18:38]
*** edmondsw has joined #openstack-cinder  [18:39]
<jgriffith> mriedem: Ceph as I recall has some tricky things around "guest" assisted snapshots or something like that  [18:39]
<jgriffith> mriedem: unless that was just Gluster  [18:39]
<jgriffith> mriedem: regardless I think eharney and russellb put that guest assisted stuff in place  [18:40]
<smcginnis> jgriffith, mriedem: I thought preliminary findings on that were that ceph does some things async in the background that the tempest test was expecting to be complete by the next call.  [18:40]
<mriedem> so,  [18:40]
<jgriffith> it may or may not have anything to do with this bug though  [18:40]
<mriedem> the test goes like this  [18:40]
<mriedem> 1. create volume from image  [18:40]
<mriedem> 2. boot server from volume  [18:40]
<mriedem> 3. snapshot server (creates image and volume snapshots)  [18:40]
<mriedem> 4. delete server  [18:40]
<mriedem> 5. boot new server from image snapshot  [18:41]
<jgriffith> mriedem: BUT  [18:41]
<mriedem> then cleanup  [18:41]
<smcginnis> mriedem: I think the issue is in the cleanup.  [18:41]
<mriedem> the snapshot deletion fails because this secondary volume created during the snapshot isn't deleted before the snapshot  [18:41]
<smcginnis> mriedem: The snapshot is still there when the cleanup tries to delete the volume.  [18:41]
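(The flow mriedem lists above, sketched as rough pseudo-code; the client objects and method names are illustrative stand-ins, not tempest's real interfaces, and only the ordering matters.)

    # Illustrative sketch of the scenario and its cleanup; `volumes`, `servers`
    # and `images` are duck-typed stand-ins, not tempest's actual clients.
    def ebs_image_boot_scenario(volumes, servers, images):
        volume = volumes.create_from_image("cirros")    # 1.
        server = servers.boot_from_volume(volume)       # 2.
        image = servers.snapshot(server)                # 3. also snapshots the boot volume
        servers.delete(server)                          # 4.
        server2 = servers.boot_from_image(image)        # 5. nova builds a *new* volume from
                                                        #    the volume snapshot and attaches it
        # Cleanup only covers what the test created explicitly, so the implicit
        # second volume is never deleted here; on a backend that links clones to
        # their parent snapshot, the snapshot delete below then fails or times out.
        servers.delete(server2)
        for snap in volumes.list_snapshots(volume):
            volumes.delete_snapshot(snap)
        volumes.delete(volume)
        images.delete(image)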
<jgriffith> mriedem: the problem is when you configure ceph, its notion of an Instance snapshot is "different" because it uses itself for Glance images as well  [18:41]
<smcginnis> jgriffith: Ah!  [18:41]
<smcginnis> That could explain some things.  [18:41]
<mriedem> the secondary volume is also automatically attached to the 2nd server that you booted from an image  [18:42]
<jgriffith> mriedem: smcginnis in the case of Ceph it uses some trickery because it IS the image backend  [18:42]
<jgriffith> mriedem: smcginnis in other words, when it comes to Instance snapshots that are boot from volume it's completely unlike other backends  [18:43]
*** jordant has joined #openstack-cinder  [18:43]
<jgriffith> The confusion of overloading the term "snapshot" :(  [18:43]
*** jordant has left #openstack-cinder  [18:43]
<smcginnis> Hopefully jbernard has found something with this.  [18:44]
<jgriffith> although, snapshotting a boot-from-volume instance will also create a Cinder snapshot as well  [18:44]
<mriedem> yeah, i almost commented on that in tempest this morning b/c they have a snapshot client but you have to know it's volume snapshots  [18:44]
<jgriffith> mriedem: yeah, it's a bit annoying  [18:45]
<jgriffith> mriedem: it's caused confusion for a number of operators over the years  [18:45]
*** ildikov has joined #openstack-cinder  [18:45]
<mriedem> so it looks like b/c the image snapshot metadata has the bdm in it with the volume snapshot id, when we create the 2nd server from that image, we see the snapshot volume id, map that back to the volume id related to it, and then attach that to the 2nd instance  [18:46]
<mriedem> i have to dig into nova's code to see where that is happening  [18:47]
<jgriffith> mriedem: yes, I believe your interpretation is accurate  [18:47]
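(What mriedem describes is the block_device_mapping carried in the snapshot image's metadata. A rough illustration of its shape follows; field names follow nova's bdm-v2 format, the snapshot id is the one from the c-vol log above, and the rest is made up, not copied from the actual image in this job.)

    # Illustrative shape only.
    image_properties = {
        "bdm_v2": True,
        "block_device_mapping": [{
            "boot_index": 0,
            "source_type": "snapshot",      # points at the cinder snapshot...
            "destination_type": "volume",   # ...so nova creates a fresh volume from it
            "snapshot_id": "366fab34-8494-47ee-925e-1505b7521744",
            "volume_size": 1,
            "delete_on_termination": True,
            "device_name": "/dev/vda",
        }],
    }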
<mriedem> that's what i'm seeing from the logs anyway  [18:47]
<mriedem> and b/c tempest never knows about that 2nd volume, it doesn't try to delete it before the snapshot volume  [18:48]
<mriedem> nova would know about it...  [18:48]
<mriedem> but since the 2nd server wasn't booted from volume, i'm assuming it just detaches the volume on delete  [18:49]
<mriedem> it doesn't tell cinder to delete that volume  [18:49]
<jgriffith> mriedem: sounds about right  [18:49]
<mriedem> oh snap, but it should b/c the image meta has delete_on_termination=True  [18:49]
<jgriffith> mriedem: hmmm  [18:49]
<jgriffith> mriedem: but that will fail because of the snap then  [18:50]
<mriedem> true  [18:50]
<mriedem> so i'm wondering why this isn't a 100% failure  [18:50]
<mriedem> in the ceph case  [18:50]
<jgriffith> mriedem: the part that I got to was that we actually issued the request to delete the snapshot and the driver responded "success", but then when we went to the driver again and said "delete the volume" it puked because it said the snapshot was still present  [18:51]
<jgriffith> mriedem: which is what smcginnis was alluding to earlier WRT the async call  [18:51]
<mriedem> so when cinder is told to delete the snapshot volume, it will orchestrate cleaning up any volumes created when the snapshot was created?  [18:52]
*** jaypipes has quit IRC  [18:52]
<jgriffith> mriedem: so the problem is that Ceph uses cow files on top of each other  [18:53]
<jgriffith> mriedem: all of that cleanup is up to them, Cinder doesn't have that concept  [18:53]
*** julim has joined #openstack-cinder  [18:53]
<jgriffith> mriedem: ie "linked snaps"  [18:53]
<jgriffith> mriedem: but my earlier point was much simpler than that  [18:53]
<jgriffith> mriedem: the Ceph driver isn't actually blocking on the delete  [18:53]
<jgriffith> mriedem: so it says "ok, deleted" but it may or may not be deleted off the backend yet  [18:54]
<mriedem> hrm, ok, at one point a long time ago i mentioned putting a retry check in the ceph volume driver  [18:54]
<jgriffith> mriedem: so you get a race, where sometimes it's clear, but other times.. it's actually still on the backend so it fails  [18:54]
<mriedem> that was shot down for some reason  [18:54]
<jgriffith> mriedem: I don't know anything about that.. but IMHO delete should be blocking on the device  [18:54]
<mriedem> https://bugs.launchpad.net/cinder/+bug/1464259/comments/5  [18:55]
<openstack> Launchpad bug 1464259 in Cinder "Volumes tests fails often with rbd backend" [High,Triaged] - Assigned to Jon Bernard (jbernard)  [18:55]
<mriedem> i think jbernard was confusing what tempest was doing and in what order  [18:55]
<jgriffith> mriedem: we shouldn't respond with successful deletion until it's actually confirmed as being gone in this case  [18:55]
<mriedem> tempest has no concept of this 2nd volume  [18:55]
<jgriffith> for some drivers it doesn't matter, but if you do linked volumes and snaps it is an issue  [18:55]
<mriedem> and it's a race because sometimes the thing is gone when it says it's deleted i guess  [18:55]
<mriedem> just fast enough  [18:55]
<jgriffith> mriedem: yeah  [18:56]
<jgriffith> mriedem: FWIW I think that should be addressed by modifying Ceph's delete call  [18:56]
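(A minimal sketch of the blocking delete jgriffith argues for: do not report success until the backend confirms the object is gone. `delete_fn` and `exists_fn` are hypothetical stand-ins for whatever calls a driver actually makes; this is not cinder's or the RBD driver's real code.)

    import time

    def delete_and_confirm(name, delete_fn, exists_fn, timeout=300, interval=2):
        """Issue the delete, then poll until the backend says it is really gone."""
        delete_fn(name)
        deadline = time.time() + timeout
        while exists_fn(name):
            if time.time() > deadline:
                raise RuntimeError("%s still present after %ss" % (name, timeout))
            time.sleep(interval)
        # Only now report success to the caller, so a later "delete the volume"
        # request cannot race against a snapshot that is still on the backend.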
<jbernard> mriedem: i dont follow how tempest doesn't know about the 2nd volume  [18:56]
<mriedem> jbernard: tempest never created the 2nd volume, or attached it to the 2nd server  [18:57]
<mriedem> that was all done automatically  [18:57]
<mriedem> tempest only adds cleanups for resources it creates  [18:57]
<mriedem> i guess tempest could know that there is a 2nd volume attached to the 2nd server b/c of the boot from snapshot (where the image snapshot has bdm info about the vol snapshot)  [18:58]
<mriedem> but then tempest (and therefore a user of the api) has to know to check for this kind of thing when deleting resources, which is gross  [18:58]
<jgriffith> mriedem: jbernard and the problem I have with that is that it's a special case for a single driver  [18:59]
<jbernard> from what i see locally, tempest's delete of the volume takes longer than expected  [18:59]
<mriedem> jbernard: so why isn't there something like this in the ceph rbd driver? https://review.openstack.org/#/c/169446/  [18:59]
<jgriffith> mriedem: jbernard which would be fine if it was handled by the driver  [18:59]
<jbernard> mriedem: there can be  [19:01]
<jbernard> the local failure that i see is that the volume delete takes longer than expected  [19:01]
<jbernard> and tempest continues to try to clean up  [19:01]
<jgriffith> jbernard: takes longer on the backend you mean?  [19:01]
<jbernard> by deleting the snapshot, which is busy because the volume delete hasn't completed yet  [19:02]
<jgriffith> jbernard: so the driver reports 'deleted' but it's not?  [19:02]
*** p0rtal has quit IRC  [19:02]
<jbernard> i dont think so, i think it's just the timing of the delete operation  [19:02]
<jgriffith> jbernard: in other words, "where" is it taking longer than expected?  [19:02]
<jgriffith> jbernard: Ok, well I'm thoroughly confused then :)  [19:02]
<jgriffith> jbernard: and don't understand the log messages any more :)  [19:02]
<jbernard> jgriffith: it's possible that the failure im seeing is not the same as yours  [19:03]
*** p0rtal has joined #openstack-cinder  [19:03]
<jbernard> jgriffith: what is the link to your log again?  [19:03]
*** anshul has joined #openstack-cinder  [19:04]
<mriedem> http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/  [19:04]
*** lpetrut has quit IRC  [19:04]
<mriedem> http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-c-vol.txt.gz#_2015-12-07_05_51_17_808  [19:04]
*** ociuhandu has quit IRC  [19:04]
<jbernard> ok, same as what im seeing  [19:05]
<jbernard> it looks like in traceback-2 that the volume delete took longer than expected  [19:05]
<jbernard> before it completed, tempest tried to continue cleaning up the snapshot  [19:05]
<jbernard> which failed  [19:05]
<jbernard> because it has a dependent volume  [19:05]
<jbernard> mriedem: so a retry loop would ease this  [19:06]
<jbernard> mriedem: either we wait more in the driver, or we wait more in the tempest test  [19:07]
<jbernard> mriedem: but we cannot delete a snapshot with dependent children  [19:07]
<jbernard> without first deleting or flattening those children  [19:07]
<jbernard> does this make sense? i could be missing something  [19:08]
<jbernard> the test is completing successfully, and failing in the cleanup… so there is that ;)  [19:08]
<mriedem> jbernard: i think this belongs in the driver  [19:08]
<mriedem> you can't work around this in tempest, because that means we work around it for our ci system but everyone in the wild hitting this doesn't get that fix  [19:09]
<jbernard> mriedem: yeah, i see your point  [19:09]
<jbernard> we could verify that children are either in a flattening or deleting state, and then wait for them to complete within the driver  [19:10]
<jbernard> and return only after everything settles  [19:10]
<jbernard> mriedem: ^ i think that's what we want  [19:10]
<jbernard> jgriffith: ^ agree?  [19:10]
<mriedem> i think so  [19:11]
<mriedem> yeah  [19:11]
<mriedem> delete is a cast (async), so that's fine  [19:11]
<mriedem> the only way you'd time out is in tempest  [19:11]
<mriedem> if the delete is taking longer than 300 seconds or whatever it is  [19:11]
<jbernard> that i wont be able to control  [19:11]
<mriedem> it shouldn't take 5 minutes to complete this in a ci run  [19:12]
<jbernard> i should say, i cannot control the time it takes from a driver context  [19:12]
<jbernard> but i agree, it should not be taking so long  [19:12]
<jbernard> mriedem: there are a few things that need to be addressed, ill start on the driver loop first  [19:13]
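(A sketch of the driver-side loop jbernard describes: before removing a snapshot, wait until its dependent clones have finished deleting or flattening. `list_children_fn` and `remove_snapshot_fn` are hypothetical stand-ins, not the RBD driver's actual helpers.)

    import time

    def delete_snapshot_when_settled(snap, list_children_fn, remove_snapshot_fn,
                                     timeout=300, interval=2):
        """Wait until clones of `snap` have been deleted or flattened away,
        then remove the snapshot itself."""
        deadline = time.time() + timeout
        while list_children_fn(snap):
            # Clones still deleting or flattening keep the snapshot busy.
            if time.time() > deadline:
                raise RuntimeError("snapshot %s still has dependent clones" % snap)
            time.sleep(interval)
        remove_snapshot_fn(snap)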
<jbernard> mriedem: would it help to skip this test temporarily for the ceph job?  [19:14]
<mriedem> mmm  [19:14]
<mriedem> 79 fails in 10 days  [19:14]
<mriedem> in the check queue  [19:14]
<mriedem> idk  [19:14]
<jbernard> well, it's always an option  [19:15]
<mriedem> jbernard: btw, track with https://bugs.launchpad.net/cinder/+bug/1464259  [19:15]
<openstack> Launchpad bug 1464259 in Cinder "Volumes tests fails often with rbd backend" [High,Triaged] - Assigned to Jon Bernard (jbernard)  [19:15]
<mriedem> i duped the other bug against that one  [19:15]
<mriedem> since i think the root causes are the same  [19:15]
<jbernard> ok thanks  [19:16]
*** garthb has joined #openstack-cinder  [19:16]
<openstackgerrit> Yuriy Nesenenko proposed openstack/cinder: Implement snapshots-related features for Block Device Driver  https://review.openstack.org/253111  [19:20]
<jbernard> mriedem: the retry will allow the cleanup routine to continue, but the long volume delete is still going to upset tempest  [19:20]
<jbernard> mriedem: i need to figure out what's going on there  [19:20]
<mriedem> long volume delete from the ceph backend?  [19:21]
*** davechen has joined #openstack-cinder  [19:21]
<jbernard> mriedem: i believe so  [19:21]
*** davechen1 has quit IRC  [19:21]
<mriedem> how long is it taking?  [19:21]
*** fthiagogv has quit IRC  [19:22]
<jbernard> mriedem: it seems to be exacerbated in an all-in-one vm  [19:22]
<mriedem> with concurrent test runners  [19:22]
*** ebalduf has joined #openstack-cinder  [19:22]
<jbernard> mriedem: more than 196 seconds  [19:23]
<mriedem> i guess if it's still a flaky issue, we can skip the test in the ceph job until https://review.openstack.org/#/c/205282/ lands  [19:23]
<jbernard> mriedem: but it does complete  [19:23]
<jbernard> that's an option as well  [19:23]
<mriedem> oh i see http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/console.html#_2015-12-07_05_59_41_992  [19:24]
<mriedem> Details: (TestVolumeBootPatternV2:_run_cleanups) Failed to delete volume-snapshot 366fab34-8494-47ee-925e-1505b7521744 within the required time (196 s).  [19:24]
<mriedem> so the snapshot delete will time out if the backing volume delete takes too long  [19:24]
<jbernard> exactly  [19:24]
<mriedem> we can always increase the volume delete timeouts for the ceph job too  [19:24]
<jbernard> mriedem: can that be done for a specific job?  [19:25]
<jbernard> mriedem: at the least, it would be a good data point  [19:25]
<mriedem> yeah, it's just env vars  [19:25]
*** jistr has quit IRC  [19:26]
<jbernard> ok, that combined with smarter retry logic in snapshot delete might work  [19:26]
<mriedem> we should actually be failing faster in tempest right now i'd think  [19:28]
<mriedem> the volume delete times out because it won't delete, because of the backing volume snapshot that isn't gone  [19:28]
<jbernard> no, the volume delete does eventually succeed, the snapshot delete fails because it has a dependent volume  [19:29]
<jbernard> actually, i think we're saying the same thing  [19:30]
<mriedem> yeah,  [19:30]
<mriedem> http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/console.html#_2015-12-07_05_59_42_421  [19:30]
<mriedem> tempest should puke on that 400  [19:30]
<mriedem> i'd think  [19:30]
<mriedem> but instead it just starts waiting for the volume to be deleted,  [19:30]
<mriedem> but b/c of that 400, we shouldn't expect it to  [19:30]
<mriedem> self.validate_response(schema.delete_volume, resp, body)  [19:31]
<mriedem> that only validates the response if it was successful  [19:32]
<mriedem> for the 400 it just keeps trucking  [19:32]
<jbernard> right, and so keeps failing  [19:32]
<mriedem> mtreinish: ^ that seems like a bug in tempest  [19:32]
<jbernard> at least for ceph, because of the dependency chain, not waiting for one link to complete means all other delete operations are going to fail  [19:33]
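(The gap mriedem points at: the delete call comes back with a 400, only successful responses are schema-validated, and the test then waits for a deletion that will never happen. A generic sketch of failing fast instead, with `delete_fn` and `is_gone_fn` as hypothetical callables; this is not tempest's actual code.)

    import time

    def delete_snapshot_and_wait(snapshot_id, delete_fn, is_gone_fn,
                                 timeout=196, interval=2):
        """Request the delete, fail fast if the API refuses it, then wait."""
        status = delete_fn(snapshot_id)        # e.g. 202 accepted, 400 busy
        if status >= 400:
            # A 400 ("snapshot is busy") means waiting is pointless; surface
            # the error immediately instead of burning the whole timeout.
            raise RuntimeError("delete of %s refused with %s" % (snapshot_id, status))
        deadline = time.time() + timeout
        while not is_gone_fn(snapshot_id):
            if time.time() > deadline:
                raise RuntimeError("snapshot %s not deleted within %ss"
                                   % (snapshot_id, timeout))
            time.sleep(interval)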
<openstackgerrit> Falk Reimann proposed openstack/cinder: Add Auth Version 3 support in Swift Backup Driver  https://review.openstack.org/247977  [19:33]
<jbernard> mriedem: that was what i tried to say in the bug a while back  [19:33]
<mriedem> yeah, but this isn't the root cause, it just makes the job run longer than it needs to  [19:34]
<mriedem> waiting for a thing that won't happen  [19:34]
<jbernard> the original volume delete does succeed eventually  [19:34]
<jbernard> which is why the test sometimes passes  [19:34]
<jbernard> sometimes it makes it under the timeout, and sometimes not  [19:34]
<mriedem> i'm lost, and hungry (2pm and no lunch yet)  [19:37]
<mriedem> there are 2 volumes in this case,  [19:38]
<mriedem> the 'original' volume that tempest created, and fails to delete  [19:38]
<mriedem> and the 2nd volume that is created when creating the volume snapshot (which tempest doesn't explicitly try to delete)  [19:38]
<mriedem> jbernard: i think you're saying the 2nd volume eventually deletes  [19:38]
<jbernard> mriedem: yes, at least locally it does succeed, but sometimes not within the allotted time  [19:39]
<jbernard> mriedem: and when the timer expires, tempest carries on  [19:39]
<jbernard> mriedem: and this is where everything falls down  [19:39]
<openstackgerrit> Dave McCowan proposed openstack/cinder: Check current context before returning cached key manager client  https://review.openstack.org/254357  [19:40]
*** ctina_ has quit IRC  [19:40]
*** ChubYann has joined #openstack-cinder  [19:40]
<jbernard> mriedem: because of the dependency chain, waiting must occur (somewhere)  [19:41]
<mriedem> jbernard: so umm, why don't we just set rbd_flatten_volume_from_snapshot=True?  [19:43]
<mriedem> https://review.openstack.org/#/c/32490/  [19:43]
<jbernard> mriedem: i think that might work also, i assumed tempest had a reason, but that assumption is certainly misguided  [19:44]
<mriedem> tempest would have 0 idea about this  [19:44]
<jbernard> yeah, that's what i just remembered  [19:44]
<mriedem> this is something we'd set in cinder.conf via devstack  [19:44]
<mriedem> and we'd set the devstack flag from the job config in the project-config repo  [19:45]
<jbernard> i think i follow what you're saying  [19:45]
<mriedem> if the intent of rbd_flatten_volume_from_snapshot is to disassociate the volume from the snapshot in the backend,  [19:45]
*** e0ne has quit IRC  [19:45]
<mriedem> and deleting the volume associated with the snapshot in the backend is what's taking too long,  [19:45]
<mriedem> then it seems rbd_flatten_volume_from_snapshot=True would fix that  [19:45]
<mriedem> the blueprint is super sparse and old, so i don't even know if this code works  [19:46]
<jbernard> yeah, if I'm understanding the failure correctly, then it should work  [19:46]
<mriedem> it doesn't even have unit tests  [19:46]
<jbernard> mriedem: get some food, ill try it out in my local env  [19:46]
<mriedem> that is something i can do  [19:47]
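(For reference, the option under discussion is set as `rbd_flatten_volume_from_snapshot = True` in the RBD backend section of cinder.conf. A sketch of the underlying Ceph operation it would trigger, using the rados/rbd Python bindings; the pool and image names are made up, and this is only an illustration of why flattening removes the snapshot dependency, not the driver's real code.)

    import rados
    import rbd

    # Connect to the cluster and the volumes pool (names are made up).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("volumes")
    try:
        image = rbd.Image(ioctx, "volume-created-from-snapshot")
        try:
            # Copy the parent data into the clone and drop the child->parent
            # link; afterwards the parent snapshot has no dependent clones and
            # can be deleted without waiting for this volume to go away.
            image.flatten()
        finally:
            image.close()
    finally:
        ioctx.close()
        cluster.shutdown()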
*** TravT has quit IRC19:49
<openstackgerrit> Dave McCowan proposed openstack/cinder: Check current context before returning cached key manager client  https://review.openstack.org/254357  [19:50]
<openstackgerrit> Kendall Nelson proposed openstack/cinder: Dynamically Pull Out Option Sections  https://review.openstack.org/253277  [19:52]
*** dave-mccowan has quit IRC19:53
*** davechen1 has joined #openstack-cinder19:55
*** jordant has joined #openstack-cinder19:55
*** davechen has quit IRC19:56
*** e0ne has joined #openstack-cinder19:59
*** kfarr has joined #openstack-cinder20:00
*** Lee1092 has quit IRC20:01
*** martyturner has left #openstack-cinder20:02
*** e0ne has quit IRC20:02
<mtreinish> mriedem: it does: http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/console.html#_2015-12-07_05_59_42_588  [20:03]
<mtreinish> mriedem: that call is during cleanup, it's not the real failure  [20:04]
<mtreinish> the snapshot delete timeout is where things fall apart  [20:04]
<mriedem> oh yar http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/console.html#_2015-12-07_05_59_42_186  [20:04]
<mriedem> snapshot delete gives a 202  [20:04]
<mriedem> so tempest waits but it's not going to complete  [20:05]
<mriedem> does cinder have any concept like instance faults in nova?  [20:06]
<mriedem> jgriffith: ^?  [20:06]
*** dave-mccowan has joined #openstack-cinder  [20:06]
<mriedem> like we won't put the snapshot in error state when the delete fails, but wondering if there is some other resource that we can query for details  [20:06]
<jgriffith> mriedem: sorry.. snuck away to lunch :)  [20:06]
<jgriffith> mriedem: no, I don't think we do.  But TBH I'm not overly familiar with Instance faults  [20:07]
<mriedem> it's a resource associated with an instance when something fails in an async operation  [20:07]
<jgriffith> mriedem: yeah, we don't have anything like that  [20:08]
<mriedem> like does the task stuff in cinder stash errors when async ops, like snapshot delete, fail  [20:08]
<mriedem> ok  [20:08]
<jgriffith> mriedem: would be awfully handy though  [20:08]
<mriedem> tell jungleboyj's people to work on it  [20:08]
<jgriffith> LOL  [20:08]
*** xyang has joined #openstack-cinder  [20:11]
<jungleboyj> mriedem: :-)  [20:11]
<jungleboyj> mriedem: We could have someone look at it if jgriffith and others think it could be useful.  [20:11]
<mriedem> i guess i'd ask ops people  [20:12]
<mriedem> if they like server actions in nova and would like the same thing in cinder  [20:12]
<jgriffith> jungleboyj: it's certainly useful, but we've also proposed other ways of doing similar things over the years  [20:12]
<jgriffith> jungleboyj: remember the sub-states discussions... and the driver-states talks  [20:12]
<mriedem> also note that nova at some point is supposed to replace all of that with its own tasks engine  [20:12]
<jgriffith> jungleboyj: would probably be good to talk about at mid-cycle maybe?  [20:13]
*** alonma has joined #openstack-cinder  [20:13]
<jgriffith> mriedem: yeah... and there was a proposal to let taskflow do some of this too  [20:13]
<jgriffith> for "Cinder"  [20:13]
<jgriffith> lots of ideas, but nothing really materialized  [20:13]
<jgriffith> mriedem: jungleboyj I would be more inclined to look at the future of Nova and task-states  [20:14]
*** ociuhandu has joined #openstack-cinder  [20:14]
<mriedem> which is https://review.openstack.org/#/c/221280/  [20:14]
<jgriffith> rather than put forth a bunch of effort to be on an outgoing paradigm :)  [20:14]
<jgriffith> Yar!!!  Manifestos!!  [20:15]
<mriedem> well, "outgoing" in nova could mean 10 years  [20:15]
<jgriffith> mriedem: that was assumed :)  [20:15]
*** ntpttr has joined #openstack-cinder  [20:15]
<jgriffith> mriedem: I wouldn't bet on anything less than 5 :)  [20:15]
<jgriffith> just kidding  [20:15]
<openstackgerrit> Kendall Nelson proposed openstack/cinder: Dynamically Pull Out Option Sections  https://review.openstack.org/253277  [20:15]
<jungleboyj> mriedem: jgriffith I will make a note to put it on the list of things to discuss at the meetup.  [20:17]
*** alonma has quit IRC20:17
*** e0ne has joined #openstack-cinder20:19
*** e0ne has quit IRC20:19
*** e0ne has joined #openstack-cinder20:19
*** e0ne has quit IRC20:20
*** EinstCrazy has joined #openstack-cinder20:22
smcginnisjungleboyj: https://etherpad.openstack.org/p/mitaka-cinder-midcycle20:24
smcginnis:)20:25
jungleboyjsmcginnis: Should have known you would be on top of that.20:26
hemnabooked my flight/hotel for Raleigh20:26
smcginnishemna: Woot woot20:26
hemnathe airline ticket was crazy cheap20:27
*** EinstCrazy has quit IRC20:27
mriedemoh fun, found a bug in the ceph backend script in devstack20:27
mriedemfor cinder_backends20:27
hemnait cost me more to fly to LAX and back20:27
smcginnishemna: When I first checked, mine would have been too. But ended up being a little more for a direct flight. :]20:27
mriedemit's setting the rbd options in the wrong group20:27
smcginnismriedem: Oooh, good find.20:27
mriedemapparently it doesn't matter20:28
mriedemthe job is running with the defaults20:28
hemnasmcginnis, yah I'm flying in via O'Hare.  /me crosses fingers for favorable wx in Chi town.20:28
smcginnishemna: Ew, layover in Chicago in January? We'll see you Wednesday night.20:29
smcginnis:P20:29
diablo_rojohemna: What day are you getting to Raleigh?20:30
*** bardia has quit IRC20:31
*** changbl has quit IRC20:35
mriedemsmcginnis: i think i've confused how cinder loads up the backend config options20:36
smcginnismriedem: You're probably not the only one. :/20:37
smcginnismriedem: You're talking in regards to the rbd options?20:37
mriedemyeah20:37
mriedemi noticed in the normal gate jobs, the backend name and config group for lvm is lvmdriver-120:38
mriedemwhich isn't a thing in the code, but cinder knows how to load that up and run it20:38
smcginnismriedem: I _think_ that's a devstack thing.20:38
smcginnishttps://github.com/openstack-dev/devstack/blob/4300f83acf06ce1b6b7976a604a756b9f28f57a1/lib/cinder#L8220:39
mriedemright but http://logs.openstack.org/31/253931/1/check/gate-tempest-dsvm-full/8b5e2d6/logs/etc/cinder/cinder.conf.txt.gz20:39
mriedem[lvmdriver-1]20:40
mriedemthat's not actually a config option group in the cinder code is it?20:40
smcginnisThe [lvmdriver-1] section corresponds to whatever is named in enabled_backends.20:40
smcginnisSo you could have enabled_backends = crap and then have a section called [crap]20:41
mriedemyeah but cinder must be processing [crap]20:43
smcginnismriedem: Yes. I've not dug in to that area though.20:43
smcginnismriedem: From my understanding, it's somehow just handled by oslo_config.20:44
smcginnisBut really don't know the specifics.20:44
jgriffithsmcginnis: mriedem it's actually handled in service startup for us20:44
jgriffithsmcginnis: mriedem those are in fact arbitrary names20:44
jgriffithsmcginnis: mriedem and we iterate through enabled_backends, look for a corresponding ini section20:45
jgriffithsmcginnis: mriedem and launch a c-vol service specifically for that section20:45
jgriffithor "enabled_backend"20:45
jgriffithit confuses people because there are actually "n" c-vol services running under the single parent c-vol service20:46
smcginnisAh, makes sense. Hence service-list including that arbitrary name.20:46
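(Editor's note: a simplified sketch of the startup pattern jgriffith describes, not Cinder's actual code. Each name in enabled_backends is an arbitrary label, a cinder.conf section with the same name supplies that backend's options, and one child c-vol service is launched per section; launch_cvol here is a hypothetical callable.)

    # cinder.conf shape being discussed:
    #   [DEFAULT]
    #   enabled_backends = lvmdriver-1
    #   [lvmdriver-1]
    #   volume_backend_name = lvmdriver-1
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.ListOpt('enabled_backends', default=[])])

    def launch_backend_services(launch_cvol):
        for backend in CONF.enabled_backends:
            # Register the per-backend options under the group named after the
            # backend so oslo.config resolves CONF[backend].<option> from the
            # matching ini section.
            CONF.register_opts([cfg.StrOpt('volume_backend_name'),
                                cfg.StrOpt('volume_driver')],
                               group=backend)
            launch_cvol(config_group=backend)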
jgriffithsmcginnis: mriedem now... one other thing devstack does is throw in a default type setup, which makes things a bit more picky20:47
jgriffithmriedem: I thought you were the one that actually rewrote all of the devstack CINDER_ENABLED_BACKENDS business a year or so ago?20:47
mriedemidk20:48
jgriffithhehe20:48
mriedemi remember dabbling in there at one point20:48
jgriffithmriedem: you're far too young to not remember things :)20:48
mriedemi'm just trying to figure out how to set rbd_flatten_volume_from_snapshot=True in cinder.conf, i think i've got it20:49
jgriffithmriedem: Ohhh20:49
*** xyang1 has joined #openstack-cinder20:49
mriedemhttps://github.com/openstack-dev/devstack/blob/master/lib/cinder_backends/ceph#L5220:49
mriedemi'm just adding a var for that20:49
mriedemso we can set it in the project-config job20:49
jgriffithmriedem: I have examples if you need20:49
mriedemi think i have this20:50
mriedemwaiting to hear from jbernard if setting that helps at all20:50
jgriffithmriedem: well, if not:  https://gist.github.com/1b2d8ce085d31e5eb08120:50
mriedemyeah enabled_backends = lvmdriver-120:51
mriedemhttp://logs.openstack.org/31/253931/1/check/gate-tempest-dsvm-full/8b5e2d6/logs/etc/cinder/cinder.conf.txt.gz20:51
jgriffithmriedem: obviously you need to modify a few things20:51
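(Editor's note: roughly, the knob mriedem is wiring up just has to land in the backend's section of cinder.conf and is read through that config group. A hedged illustration follows; the group name and helper are made up, though rbd_flatten_volume_from_snapshot is the real RBD driver option.)

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.BoolOpt('rbd_flatten_volume_from_snapshot', default=False,
                     help='Flatten volumes created from snapshots to remove '
                          'the dependency on the snapshot')],
        group='ceph')  # illustrative backend/group name

    def should_flatten(backend='ceph'):
        return CONF[backend].rbd_flatten_volume_from_snapshot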
*** baumann1 has joined #openstack-cinder20:53
*** baumann has quit IRC20:55
ameadegates busted, ImportError: No module named netifaces20:57
ameadeoslo_utils added a new unittest dependency20:57
ameadehttps://jenkins03.openstack.org/job/gate-cinder-python27/2241/console20:59
openstackgerritYuriy Nesenenko proposed openstack/cinder: Implement snapshots-related features for Block Device Driver  https://review.openstack.org/25311120:59
smcginnisameade: Oh joy.21:00
*** martyturner has joined #openstack-cinder21:00
ameadesmcginnis: I'm stuck in meetings but I imagine we just add it as a new dep?21:02
ameadehelp21:03
ameadelol wrong channel21:03
*** davechen has joined #openstack-cinder21:04
jbernardmriedem: it's running in a loop, 3 passes so far no failures… but i need several more to feel any level of confidence21:04
mriedemjbernard: so i guess my other question was, with the flatten, does that volume created from the snapshot just get abandoned?21:05
mriedemorphaned21:05
mriedembecause that could also be bad if we just start piling up volumes that aren't getting deleted21:05
jbernardmriedem: im not sure, i thought the delete succeeded eventually, but i think mtreinish was saying that's not the case21:05
jbernardmriedem: i guess ill find out if/when it fails again21:06
mtreinishjbernard: the snapshot delete might succeed eventually, but it didn't happen in the tempest timeout window21:06
mtreinishwhich is like 196 secs or something iirc21:06
jbernardok, then im still on the same page21:06
*** davechen1 has quit IRC21:06
mriedemthe snapshot delete failed b/c the related volume, created when the snapshot was created, was not deleted21:06
mriedemat least by the time the snapshot delete request came in21:06
jbernardwith flatten, the dependencies are removed, and so the slow delete should not fault the other operations21:07
mriedembut as i said, tempest didn't create that 2nd volume so isn't trying to delete it or wait for it to be gone21:07
jbernardand everything *should* be okay in the end21:07
mriedembut is the dependency then orphaned21:07
mriedemthis 2nd volume21:07
jbernardwho is the 2nd volume creator?21:07
jbernardi thought it was tempest21:07
mriedemnope21:07
mriedemoooo, it's probably nova21:08
*** harlowja has quit IRC21:08
*** harlowja has joined #openstack-cinder21:08
mriedemhttps://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L39121:08
jbernardok, and shouldn't the instance delete clean it up?21:09
mriedemnot if the bdm isn't created with delete_on_termination=True21:10
jbernardahh21:10
mriedemwhich it is on the initial server create21:10
mriedemand should be in the image snapshot metadata21:10
jbernardwelp, unstack.sh will get it :)21:10
jbernardso then i should have volumes piling up… lemme check21:11
mriedemhttp://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/console.html#_2015-12-07_05_59_42_10921:11
mriedem'x-image-meta-property-block_device_mapping': '[{"guest_format": null, "boot_index": 0, "delete_on_termination": true, "no_device": null, "snapshot_id": "366fab34-8494-47ee-925e-1505b7521744", "device_name": "/dev/vda", "disk_bus": "virtio", "image_id": null, "source_type": "snapshot", "device_type": "disk", "volume_id": null, "destination_type": "volume", "volume_size": 1}]'21:11
mriedemso the bdm has snapshot_id and delete_on_termination=True21:11
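(Editor's note: the x-image-meta-property-block_device_mapping value above is just a JSON-encoded list of BDM dicts, so the fields being called out can be read straight off it; a quick illustration using the string from the log.)

    import json

    prop = ('[{"guest_format": null, "boot_index": 0, "delete_on_termination": true, '
            '"no_device": null, "snapshot_id": "366fab34-8494-47ee-925e-1505b7521744", '
            '"device_name": "/dev/vda", "disk_bus": "virtio", "image_id": null, '
            '"source_type": "snapshot", "device_type": "disk", "volume_id": null, '
            '"destination_type": "volume", "volume_size": 1}]')
    bdm = json.loads(prop)[0]
    print(bdm['source_type'], bdm['snapshot_id'], bdm['delete_on_termination'])
    # -> snapshot 366fab34-8494-47ee-925e-1505b7521744 True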
*** jordant has quit IRC21:12
mriedemso i think when we create the 2nd server from that image snapshot, we get that snapshot bdm and create the 2nd volume in cinder from that snapshot id21:12
mriedemand attach it21:12
jbernardbut with delete_on_termination=True, it should be cleaned up at delete time, right?21:12
mriedemyeah, it should21:13
jbernardhmm, i do have a couple of orphaned volumes21:13
mriedemhttps://github.com/openstack/nova/blob/master/nova/compute/manager.py#L234221:13
jbernardthat could be from previous failures21:13
mriedemyeah, i figured that was the risk with that flatten option21:13
mriedemoh21:13
jbernardim not sure honestly21:13
mriedemnova doesn't know about that option anyway21:14
mriedemso nova will continue to create that volume from the snapshot bdm21:14
jbernardright, but the resulting volume will stand alone21:14
jbernardand could be deleted at any point21:14
*** jordant has joined #openstack-cinder21:14
*** alonma has joined #openstack-cinder21:15
jbernardmriedem: it's either nova or tempest that should be deleting the volume, correct?21:15
mriedemshould be nova21:16
mriedemi'm checking the logs21:16
jbernard6 more runs, still no failures21:16
mriedemthis is where the compute api gets the bdms from the image meta21:17
mriedemhttps://github.com/openstack/nova/blob/master/nova/compute/api.py#L80321:17
mriedemand this is where we should create the vol from snapshot and attach that vol on boot of the 2nd server https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L172721:18
*** alonma has quit IRC21:19
mriedemyar, this is the 2nd server boot:21:20
mriedemhttp://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-n-cpu.txt.gz#_2015-12-07_05_51_06_77621:20
mriedemBooting with volume None at /dev/vda21:20
mriedemvolume is None because this isn't a volume bdm, it's a snapshot bdm21:20
*** changbl has joined #openstack-cinder21:21
jbernardi thought a volume was created from the snapshot here https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L39121:21
mriedemit is21:22
mriedemhttps://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L50621:22
mriedemthat log message is from right above it21:22
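(Editor's note: a hypothetical sketch of the path being walked through here; the real logic lives in nova/virt/block_device.py and the names and signatures below are illustrative, not nova's API. For a source_type='snapshot' BDM there is no volume yet, hence "Booting with volume None", so a volume is created from the snapshot, recorded on the BDM, and then attached.)

    def attach_snapshot_bdm(context, volume_api, instance, bdm, attach_volume):
        # No volume exists yet for a snapshot-type BDM ("Booting with volume None").
        snapshot = volume_api.get_snapshot(context, bdm['snapshot_id'])
        volume = volume_api.create(context, bdm['volume_size'],
                                   name='', description='', snapshot=snapshot)
        # Record the new volume id on the BDM so later teardown can find it.
        bdm['volume_id'] = volume['id']
        attach_volume(context, instance, volume['id'], bdm['device_name'])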
jbernardso that instance is failing to boot?21:22
mriedemthis is all of the super hard to follow bdm code in nova21:22
mriedemno, it boots fine21:22
jbernardok, then im completely lost :21:22
mriedemcoffee time, bbiab21:22
jbernard)21:22
jbernardkk21:22
mriedemyeah man, it's the nova bdm code21:23
mriedemsuper confusing21:23
mriedemlots of smoke and mirrors21:23
jbernardi'd say it's working21:23
*** timcl has quit IRC21:23
jbernard(the confusing bit)21:23
mriedemyeah so it definitely boots the volume from the snapshot21:23
mriedemthis is the volume that's created21:25
mriedemhttp://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-n-cpu.txt.gz#_2015-12-07_05_51_07_46721:25
*** jungleboyj has quit IRC21:27
*** timcl has joined #openstack-cinder21:27
jbernardok, so that seems correct logic21:27
*** thangp has quit IRC21:27
jbernardim missing why nova doesn't delete the created volume later on21:27
*** timcl has quit IRC21:29
jbernardso i think what you're saying is that the root cause begins in nova?21:29
jbernardand the flatten option masks the problem21:30
jbernardby allowing tempest to keep rolling21:30
jbernard(which it's doing, still no failures)21:30
* notmorgan makes plans to hit cinder with the keystoneauth bat soon. :)21:30
jbernardmriedem: i don't appear to have accumulating volumes, same # as first check21:35
merooneysmcginnis Are you looking into the gate issue?21:35
*** alejandrito has joined #openstack-cinder21:36
mriedemjbernard: with the flatten=True option you mean?21:37
jbernardmriedem: right21:37
jbernardmriedem: 16 consecutive passes so far21:37
*** xyang1 has quit IRC21:38
jbernardmriedem: i do have three orphaned volumes, but they may have been there before i started21:38
*** dustins is now known as dustins|gone21:38
openstackgerritRyan McNair proposed openstack/cinder: WIP - Move QoS_Specs to be a VersionedObject  https://review.openstack.org/25198921:38
*** ianbrown has quit IRC21:39
*** ianbrown has joined #openstack-cinder21:39
*** xyang1 has joined #openstack-cinder21:41
mriedemjbernard: http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-n-cpu.txt.gz#_2015-12-07_05_51_16_60021:43
mriedemduh duh duh21:43
*** pv has joined #openstack-cinder21:43
mriedem[instance: 91817c91-6305-4e44-9f53-0eca5a27aa8d] Ignoring VolumeNotFound: Volume None could not be found. _shutdown_instance /opt/stack/new/nova/nova/compute/manager.py:231721:43
pvhey quick question, im trying to test my consistency group create method and I keep getting 'ERROR: Policy doesn't allow consistencygroup:get to be performed.'21:44
pvany help?21:44
ameadehttps://review.openstack.org/#/c/254379/21:44
ameademerooney: ^^21:44
mriedemjbernard: so it's a nova bug21:47
pvi checked /etc/cinder/policy.json and removed the group:nobody rule21:47
mriedemnova doesn't attempt to cleanup the volume it created (from the snapshot bdm)21:47
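(Editor's note: a hedged sketch of the cleanup gap mriedem describes, illustrative only and not nova's code. At shutdown, volumes are deleted for BDMs marked delete_on_termination, but per the "Ignoring VolumeNotFound: Volume None" message above the BDM in hand has no volume id, the NotFound is swallowed, and the volume created from the snapshot is left behind.)

    class VolumeNotFound(Exception):
        """Stand-in for the exception nova ignores in the log above."""

    def shutdown_cleanup(context, volume_api, bdms, log):
        for bdm in bdms:
            if not bdm.get('delete_on_termination'):
                continue
            try:
                # If bdm['volume_id'] is still None, this lookup fails ...
                volume_api.delete(context, bdm.get('volume_id'))
            except VolumeNotFound:
                # ... and the error is ignored, orphaning the real volume
                # that was created from the snapshot.
                log.debug('Ignoring VolumeNotFound for %s', bdm.get('volume_id'))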
pvso i changed it to "consistencygroup:create" : "",21:47
pvbut im still getting the same error when i try running the command21:47
*** xyang1 has quit IRC21:47
jbernardmriedem: but not every time?21:49
jbernardmriedem: so.. what changes when it fails?21:49
mriedemnot really sure yet21:49
mriedemupdating the bug with notes21:49
*** vgridnev has quit IRC21:49
jbernardi did just get one more orphaned volume21:49
mriedemwe'll likely need ndipanov to help look tomorrow21:49
jbernardmriedem: tomorrow is very close for him21:50
jbernardmriedem: so flatten is a hack21:51
jbernardmriedem: we should wait for ndipanov 's thoughts21:51
*** [2]Thelo has joined #openstack-cinder21:53
*** [1]Thelo has quit IRC21:55
mriedemjbernard: oh another thing,21:56
mriedemwhich might be contributing to the race21:56
smcginnismerooney: Sorry, back to back meetings. ameade, thanks for the pointer. Looks like that should fix it.21:57
mriedemthe 2nd server created in tempest, it doesn't wait for it to be ACTIVE before it sends the delete request21:57
jbernardmriedem: that could be buggy21:58
mriedemi'm sure21:58
mriedemb/c we're creating a 2nd volume and attaching that to the 2nd server21:58
*** changbl has quit IRC21:59
mriedemlike this http://logs.openstack.org/20/218120/3/check/gate-tempest-dsvm-full-ceph/2349f2d/logs/screen-n-cpu.txt.gz#_2015-12-07_05_51_15_28022:00
jbernardmriedem: is nova expected to handle the transition correctly?22:01
jbernardmriedem: i suppose so22:01
mriedemexpected? i suppose so22:01
mriedemi guess this is why scottda wants force-detach of volumes in nova :)22:01
jbernardso even if it races, the end result should be consistent22:01
mriedemthere are definitely some assumptions made in the code here that the bdm is a volume (and not handling that it's a vol snapshot)22:02
*** davechen1 has joined #openstack-cinder22:05
mriedemjbernard: my findings: https://bugs.launchpad.net/cinder/+bug/1464259/comments/1322:05
openstackLaunchpad bug 1464259 in OpenStack Compute (nova) "Volumes tests fails often with rbd backend" [High,Triaged]22:05
*** davechen has quit IRC22:06
jbernardmriedem: nicely done22:07
mriedemthe question i have for ndipanov is when we create the volume bdm for the instance after creating it from the snapshot22:08
mriedemin the nova db22:08
mriedemi think that's supposed to be this: https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L462322:08
*** martyturner has quit IRC22:09
*** bardia has joined #openstack-cinder22:10
*** merooney has quit IRC22:11
mriedemthe bdm should have its volume_id set here https://github.com/openstack/nova/blob/master/nova/virt/block_device.py#L39622:13
mriedemand then after the attach is done it should update the bdm table in the db22:14
*** martyturner has joined #openstack-cinder22:14
*** ntpttr has quit IRC22:15
*** e0ne has joined #openstack-cinder22:15
*** Yogi11 has quit IRC22:15
*** alonma has joined #openstack-cinder22:16
*** alonma has quit IRC22:20
*** gouthamr_ has quit IRC22:20
*** alonma has joined #openstack-cinder22:22
*** davechen1 has left #openstack-cinder22:23
*** akerr has quit IRC22:25
openstackgerritMitsuhiro Tanino proposed openstack/cinder: The copy_volume_to_image returns invalid volume attribute  https://review.openstack.org/25441422:26
*** alonma has quit IRC22:27
*** alonma has joined #openstack-cinder22:28
*** diablo_rojo has quit IRC22:29
openstackgerritAlex O'Rourke proposed openstack/cinder: 3PAR: Implement v2 replication (managed)  https://review.openstack.org/23120122:31
*** diogogmt has joined #openstack-cinder22:32
openstackgerritAlex O'Rourke proposed openstack/cinder: 3PAR: Implement v2 replication (unmanaged)  https://review.openstack.org/23124522:32
*** jungleboyj has joined #openstack-cinder22:32
*** alonma has quit IRC22:32
*** xyang has quit IRC22:33
*** alonma has joined #openstack-cinder22:36
*** baumann1 has quit IRC22:38
*** kfarr has quit IRC22:39
*** akshai has quit IRC22:39
*** alonma has quit IRC22:41
*** akshai has joined #openstack-cinder22:41
*** alonma has joined #openstack-cinder22:43
*** changbl has joined #openstack-cinder22:44
pvmy cinder scheduler log shows this error 'Could not find a host for consistency group e9e88fcc-80a4-4495-805d-b274348ac4a2' when im trying to create a consistency group22:44
*** baumann has joined #openstack-cinder22:45
pvi can make volumes and everything else just fine and the scheduler logs don't show the error when I'm doing all of the other API calls22:45
*** jgregor has quit IRC22:45
pvany idea where I can find more information on why consistency groups can't be made on my host22:45
openstackgerritAlex O'Rourke proposed openstack/cinder: 3PAR: Implement v2 replication (managed)  https://review.openstack.org/23120122:46
openstackgerritAlex O'Rourke proposed openstack/cinder: 3PAR: Implement v2 replication (unmanaged)  https://review.openstack.org/23124522:47
*** alonma has quit IRC22:48
*** mriedem has quit IRC22:54
*** e0ne has quit IRC23:04
*** fuentess has quit IRC23:07
openstackgerritAlex O'Rourke proposed openstack/cinder: 3PAR: Implement v2 replication (managed)  https://review.openstack.org/23120123:10
openstackgerritAlex O'Rourke proposed openstack/cinder: 3PAR: Implement v2 replication (unmanaged)  https://review.openstack.org/23124523:10
pvany help on why the cinder scheduler would be throwing the 'Could not find a host for consistency group' error?23:11
*** ianbrown has quit IRC23:12
*** ianbrown has joined #openstack-cinder23:12
openstackgerritJay Bryant proposed openstack/cinder: Dynamically Pull Out Option Sections  https://review.openstack.org/25327723:15
*** jordant has quit IRC23:18
*** apoorvad has quit IRC23:18
*** jamielennox is now known as jamielennox|away23:18
jgriffithpatrickeast: ping23:22
*** boris-42_ has quit IRC23:23
*** salv-orl_ has joined #openstack-cinder23:25
patrickeastjgriffith: pong23:25
jgriffithpatrickeast: hey ya... we were just looking at CG implementations... pure seemed like a good one to check out :)23:26
jgriffithpatrickeast: but can't seem to find your library/client anywhere?23:26
jgriffithpatrickeast: the interface is well.. nothing special23:26
patrickeastthe python module? I think it's on github, some documentation is linked from the pypi page too23:27
jgriffithpatrickeast: cool.. lemme look on pypi23:27
jgriffithpatrickeast: OH DERP23:27
jgriffithfound it23:27
patrickeasthttps://github.com/purestorage/rest-client23:27
jgriffith:)23:27
jgriffithpatrickeast: thank you kind sir23:27
jgriffithpatrickeast: and there's my answer already :)23:28
jgriffiththanks man23:28
patrickeastLemme know if you have any questions; iirc the docs kind of rely on knowing how Purity works23:28
patrickeastnp23:28
jgriffithpatrickeast: nope, just wanted to confirm an interpretation via code23:28
*** salv-orlando has quit IRC23:28
jgriffithpatrickeast: everybody that's implemented at this point appears to be using a lib23:28
patrickeasthaha guess it's what all the cool kids are doing :P23:30
jgriffithpatrickeast: no doubt! :)23:33
*** apoorvad has joined #openstack-cinder23:35
*** ianbrown has quit IRC23:41
*** ianbrown has joined #openstack-cinder23:42
*** yhayashi has joined #openstack-cinder23:46
*** lprice has quit IRC23:47
*** lprice has joined #openstack-cinder23:48
*** alonma has joined #openstack-cinder23:51
*** boris-42_ has joined #openstack-cinder23:52
*** takedakn has joined #openstack-cinder23:53
*** alonma has quit IRC23:56
*** lprice has quit IRC23:56
