16:00:39 #startmeeting cinder
16:00:40 Meeting started Wed Jan 16 16:00:39 2019 UTC and is due to finish in 60 minutes. The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:41 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:43 The meeting name has been set to 'cinder'
16:00:49 o/
16:00:51 Hi
16:00:51 o/
16:00:56 courtesy ping: jungleboyj diablo_rojo, diablo_rojo_phon, rajinir tbarron xyang xyang1 e0ne gouthamr thingee erlon tpsilva ganso patrickeast tommylikehu eharney geguileo smcginnis lhx_ lhx__ aspiers jgriffith moshele hwalsh felipemonteiro lpetrut lseki _alastor_ whoami-rajat yikun rosmaita enriquetaso
16:01:01 hi! o/
16:01:02 hello
16:01:05 hi
16:01:08 hello
16:01:16 hello
16:01:42 o.
16:01:49 @!
16:01:49 <_pewp_> jungleboyj (^o^)/
16:01:50 hi
16:02:21 o/
16:02:50 Ok. Looks like we have good representation so we should get started.
16:03:01 #topic announcements
16:03:26 We are now past milestone 2 so no more driver merges and we should have all the specs merged.
16:03:50 One exception is for the encryption spec given that we are still waiting on core review of that I believe.
16:04:18 https://review.openstack.org/#/c/621465/ still has review priority set and is close?
16:04:19 LisaLi commented but we were waiting for eharney to also respond.
16:04:48 eharney: That is a separate discussion we will get to later.
16:04:54 ok
16:05:26 eharney: Can you take a look at the volume encryption spec and see if your concerns have been addressed?
16:05:47 image encryption?
16:05:55 #link https://review.openstack.org/#/c/608663/
16:05:57 * eharney is still coming back online from vacation
16:06:06 will do
16:06:06 eharney: Yes.
16:06:16 eharney: Thank you and welcome back.
16:06:58 Ok, so I think that wraps up announcements
16:07:55 #topic interest in attending meet-ups while in Raleigh for the mid-cycle
16:08:17 So, there are a couple of meet-ups happening Monday and Tuesday night during the mid-cycle in RTP.
16:08:32 Wanted to gauge interest in attending those as a team while we are there.
16:09:08 I added the links to the meetup info to the planning etherpad.
16:09:27 Who is all planning on attending the midcycle in person?
16:09:28 #link https://etherpad.openstack.org/p/cinder-stein-mid-cycle-planning
16:09:31 o/
16:09:35 o/
16:10:00 Just going to be jungleboyj and I sitting in a room together?
16:10:02 hemna: Is planning to attend.
16:10:06 i should be there tuesday and wednesday
16:10:14 (haven't gotten official approval yet)
16:10:14 Awesome
16:10:16 smcginnis: He he, we can do that here.
16:10:27 eharney: Are you going to make it?
16:10:32 eharney: Is on the list as well as jbernard
16:10:35 yes
16:10:53 6 Cinder guys sit in a room...
16:11:04 :)
16:11:10 Anyone not on the list so far?
16:11:49 Has anyone tried getting their management on board?
16:12:36 Beuhler ... ... Beuhler ?
16:13:25 how do you mean?
16:13:36 If anyone else has asked if they can attend.
16:14:06 Anyway, it looks like we have the attendance list.
16:14:12 i think it will be an easier sell next time with the PTG no longer existing as an independent event
16:14:24 0/
16:14:45 I do hope it's productive enough for there to be a next time.
16:14:49 So, smcginnis and I are going to try to join the meetup on Monday night. Would any of you like to join us? rosmaita eharney hemna jbernard ?
16:14:54 smcginnis: ++
16:15:17 We will just have to make the most of it.
16:15:43 sorry about the response lag, i would be interested in that as well
16:15:55 If anyone wants to join us at the meetup, please sign up and add your name to the list.
16:16:02 Then we can coordinate getting there.
16:16:29 rosmaita: smcginnis eharney hemna Would you guys like to also do the one on Tuesday night or should we do our own thing?
16:16:48 i haven't looked at any of this stuff
16:17:14 The Tuesday night one looks potentially more interesting to me. But I would also be fine just going out for a beer with folks too.
16:17:25 eharney: Ok, take a look and update the etherpad.
16:17:31 smcginnis: I had that thought too.
16:17:46 rosmaita: Thoughts?
16:18:09 win 20
16:18:12 i'm easy, i will do whatever
16:18:29 Ok.
16:18:44 smcginnis: And I will figure it out and update the etherpad.
16:18:44 i just want to spend facetime with openstack folks
16:18:53 ++
16:18:58 ++
16:19:46 Ok. So lets tentatively plan to attend the Tuesday one as well. If we change our minds after Monday night, we can.
16:20:19 Ok. Lets move on then.
16:20:21 works for me
16:20:25 Seems like everyone needs some DeathWish coffee today. :)
16:20:39 :-)
16:21:01 #topic Consider leveraging a hardware accelerator in image conversion before uploading and after downloading
16:21:07 LiangFang: Are you here?
16:21:14 yes
16:21:44 The floor is yours.
16:21:54 currently the new server platform may contain a compression hardware accelerator
16:22:05 Right.
16:22:25 so we may be able to leverage that kind of hardware to do the image conversion
16:22:54 in order to do this, we may need to introduce a new image format, such as zip
16:22:56 what does the hardware do exactly?
16:23:27 e.g.
16:23:43 currently, if you upload a volume to an image
16:24:15 the cpu does the format change, from raw to qcow2, for example
16:24:56 at this time, the server cpu is heavily used, and the response is slow for other users
16:25:13 So you are thinking that the image conversion could be offloaded to the specialized hardware?
16:25:20 yes
16:25:33 if the hardware is there, then use it
16:25:42 LiangFang: Do you have a proposed change for this? Or just asking if it makes sense to pursue something like this?
16:25:43 but only for certain image formats, which maybe aren't used currently?
16:26:25 smcginnis: I want to do this; I'm asking here for your opinion
16:26:57 It is interesting and also has impacts on RSD applications.
16:27:04 eharney: yes, the format should be some standard compression format, such as zip
16:27:16 In general, taking advantage of FPGAs sounds great. I guess I would have to understand better how that would be implemented though.
16:27:24 smcginnis: ++
16:27:51 I there's no objection here, I will go ahead to prepare spec
16:27:57 if
16:28:12 Could picture this being interesting to telcos where they are moving images to edge sites. They may have FPGAs there that could then be used to speed up image expansion.
16:28:29 LiangFang: I think there is a lot to consider and a spec would be the best place to start.
16:28:47 yes, so I will prepare a spec
16:29:00 Anyone have an objection to that approach?
16:29:16 would need more details to have an objection :)
16:29:22 eharney: :-)
16:29:26 eharney: ++
16:29:38 Ok, so it sounds like it is worth your time to propose the spec with additional details.
16:29:47 LiangFang: ^
16:29:50 ok ok
16:30:27 #action LiangFang to write a spec proposing the functionality.
16:30:47 Anything else, LiangFang?
16:30:56 nothing more
16:30:58 thanks
16:31:12 Thank you, and thanks for waiting since last week. Sorry we ran out of time.
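(A rough illustration of the offload LiangFang describes, sketched under assumptions: find_accelerator() and the accelerator's compress() method are hypothetical placeholders, not an existing Cinder or platform API; only the qemu-img fallback reflects the current CPU path.)

```python
# Sketch only: dispatch volume-to-image conversion to a compression
# accelerator when one is present, otherwise fall back to the CPU-based
# qemu-img path used today.  find_accelerator() and compress() are
# hypothetical placeholders.
import subprocess


def _qemu_img_convert(src, dst, src_fmt='raw', dst_fmt='qcow2'):
    """CPU path: plain qemu-img conversion."""
    subprocess.check_call(
        ['qemu-img', 'convert', '-f', src_fmt, '-O', dst_fmt, src, dst])


def find_accelerator():
    """Placeholder probe for a compression accelerator.

    A real implementation might query Cyborg or a platform driver; this
    sketch simply reports that no device was found, so the CPU fallback
    is always taken.
    """
    return None


def convert_volume_to_image(src, dst, dst_fmt='qcow2'):
    """Offload compression to hardware when available, else use qemu-img."""
    accel = find_accelerator()
    if accel is not None and dst_fmt == 'zip':
        # 'zip' stands in for the new compressed image format the spec
        # would introduce; the heavy compression runs on the accelerator
        # so host CPUs stay available for other tenants.
        accel.compress(src, dst)
    else:
        _qemu_img_convert(src, dst, dst_fmt=dst_fmt)
```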
16:31:27 jungleboyj: np
16:31:36 #topic Gate Failure eventlet.tpool
16:31:40 whoami-rajat: Your turn.
16:32:34 whoami-rajat: Did we lose you?
16:32:38 So recently a lot of gate failures occurred, and one of the most common was the test checking the eventlet.tpool thread list. I've listed my findings in the bug about what might have caused it.
16:32:44 jungleboyj: no no, just typing :)
16:32:53 Ok. Cool.
16:33:03 any feedback on the bug is appreciated.
16:33:31 Based on the notes in the bug report, sounds like those unit tests should probably just be removed.
16:34:01 looks like an issue with some mock or cleanUp
16:34:44 Yeah. Is geguileo here to chime in?
16:34:52 He is our tpool expert.
16:34:57 I am here
16:35:03 Yay!
16:35:05 sorry, I wasn't paying attention :-(
16:35:07 Thoughts?
16:35:18 #link https://bugs.launchpad.net/cinder/+bug/1811663
16:35:19 Launchpad bug 1811663 in Cinder "Gate failure : AssertionError: Lists differ: [] != []" [Undecided,In progress] - Assigned to Rajat Dhasmana (whoami-rajat)
16:36:53 It appears that removing the test for now is the appropriate first step.
16:37:08 could be related to https://review.openstack.org/#/c/630971/
16:37:18 But I'd have to read the bug more carefully
16:37:28 Sorry, not that one
16:37:29 XD
16:37:34 https://review.openstack.org/#/c/615934/
16:37:36 ^ that one
16:38:04 we could just modify the test to assert that the pool is at least X big, instead of exactly X, right?
16:38:14 #link https://review.openstack.org/#/c/615934/
16:38:34 eharney: That could work too.
16:38:42 smcginnis: I think we should fix these tests instead of removing them
16:38:42 eharney: Not sure if that would work if it was already initialized with a smaller number though.
16:38:58 eharney: we can just mock all tpool calls in these tests
16:39:31 it could be that we are not clearing the threads between tests...
16:39:37 eharney: i think it's not caused by the size but rather by the list of threads getting started by some other test.
16:39:42 geguileo: +1
16:40:12 But if we have concurrent tests running, we would still have this issue even if the threads were cleared, right?
16:40:45 geguileo: it would still be a task to find the tests triggering the threads to start, as the tpool.execute method is called in multiple places in the code.
16:41:13 smcginnis: do we use threads in tests? we should use processes for concurrent runs
16:41:16 smcginnis: are the threads mocked out in the unit tests?
16:41:18 smcginnis: +1. The async backup tests failing being the example.
16:41:44 e0ne: Not sure how that is handled in tox.
16:42:00 davidsha: I'm not sure if they are everywhere.
16:42:16 looks like somebody removed code from cinder/test.py
16:42:28 and that broke it
16:42:28 geguileo: What was that?
16:42:56 * geguileo probably wrong, I'll look more carefully at the issue
16:42:58 smcginnis: kk, you could try purging whatever package you're using for threads and see which tests throw the error?
16:43:22 Ok, geguileo. So you will work with whoami-rajat on this?
16:43:26 setUp() in test.py does do a tpool.killall to try to start clean for each run
16:44:05 eharney: but maybe eventlet has changed the code and now they don't do what they used to? r:-??
16:44:09 anything not mocked would show up.
16:44:17 because I added that in there when I introduced the tests
16:45:02 Ok. We have other topics we should get to. If you could help look into this, geguileo, it would be appreciated.
16:45:30 smcginnis: http://paste.openstack.org/show/742778/
16:46:35 Can we take this to the channel afterwards?
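(A minimal sketch of the mocking approach suggested above, assuming nothing about the actual failing tests: patching eventlet.tpool.execute keeps a test from starting real native threads, and tpool.killall() in setUp mirrors the cleanup already done in cinder/test.py. The test body is a stand-in, not the real backup test.)

```python
# Sketch: isolate a test from the eventlet.tpool native thread pool by
# mocking tpool.execute and clearing any leftover pool threads in setUp.
from unittest import mock
import unittest

from eventlet import tpool


class TpoolIsolationTest(unittest.TestCase):
    def setUp(self):
        super().setUp()
        # Start every test with an empty native thread pool, mirroring
        # the tpool.killall() call in cinder/test.py's setUp().
        tpool.killall()

    @mock.patch('eventlet.tpool.execute')
    def test_call_goes_through_tpool(self, mock_execute):
        mock_execute.return_value = 'done'

        # Stand-in for the code under test; a real test would invoke the
        # backup/volume code path that wraps blocking calls in tpool.
        result = tpool.execute(sorted, [3, 1, 2])

        self.assertEqual('done', result)
        mock_execute.assert_called_once_with(sorted, [3, 1, 2])


if __name__ == '__main__':
    unittest.main()
```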
16:46:47 Sure
16:46:52 Ok.
16:47:05 #action whoami-rajat to follow up with geguileo for more information.
16:47:18 #topic follow-up on "remove policy checks at DB layer?" from 5 December meeting
16:47:25 rosmaita: Your turn.
16:47:28 this will be fast
16:47:40 the quick recap is that our DB layer policy checks were making it difficult to do a commonly requested policy configuration (to have a read-only admin)
16:48:00 we decided that since there are so many of these checks currently in there, it would be too destabilizing to mess with them until we have more tests in place
16:48:13 plus, there *is* a way to do this via the policy configuration file, *and* the workaround is valid in all the stable branches (and beyond)
16:48:26 so i wrote up a "howto" for the cinder docs and revised it after i got some feedback from lbragstad
16:48:36 it's available for your reviewing pleasure: https://review.openstack.org/#/c/624424/
16:48:48 should be a pretty quick read, just explains some concepts and then walks through how to configure the policy file
16:49:00 that's all i got
16:49:06 #link https://review.openstack.org/#/c/624424/
16:49:32 rosmaita: Ok. Thank you.
16:49:37 np
16:49:54 Ok. Last topic.
16:50:32 #topic RSD driver
16:50:43 So, eharney asked about this earlier:
16:50:48 #link https://review.openstack.org/#/c/621465/
16:51:10 The driver has missed the official deadline for inclusion.
16:51:19 They have continued to work on resolving the issues.
16:51:26 Getting a little closer each day.
16:51:35 We need to decide how to proceed.
16:51:49 Still code issues being found as recently as yesterday.
16:52:10 Last third party CI run failed.
16:52:11 smcginnis: Yeah, that is concerning.
16:52:21 They are down to 7 tests failing.
16:52:38 #link http://52.27.155.124/cinder/rsd/621465/31/testr_result.html.gz
16:53:10 I would say with the state it's in now and how far we are past the driver freeze, it's missed Stein.
16:53:28 It would be one thing if there was a small issue that got resolved in a day or two after the deadline.
16:53:33 These are issues in the backend rather than the driver; we've been engaging with the backend team to help resolve these.
16:53:35 But it still doesn't appear ready.
16:54:04 So these are problems being found in the storage itself?
16:54:39 Yup
16:55:30 eharney: e0ne_ Other core team thoughts here?
16:55:31 It's still being worked on atm, and we're providing the feedback from the CI to the backend team to help resolve it.
16:56:33 If the backend itself is having issues it seems it needs some more burn-in time and could merge as soon as Train opens if things get resolved.
16:56:39 if the backend itself is unstable and the deadline is missed, I recommend moving it to the T release
16:56:57 eharney: You brought the topic up first.
16:57:37 There's nothing preventing this driver being available out of tree for Stein users, then added in the Stein release.
16:57:40 yeah, was asking about it because i was just reviewing it and noticed it was about done
16:58:00 smcginnis: You mean Train release?
16:58:05 smcginnis: +1
16:58:56 We have the backend team working on a patch we think might address the last 7 issues; it should be redeployed by COB tomorrow.
16:59:50 I think that we should defer acceptance. The patch will be out there for people to pull it down to use out of tree and we can merge in Train once we have seen the CI running stably for a while.
17:00:18 It is consistent with what we have done for other drivers in the past.
17:00:35 I think the driver will be a great addition.
But we are past the deadline and we've enforced that in the past and made other new drivers adhere to it.
17:00:43 smcginnis: ++
17:01:01 Ok. We are out of time. If we need to discuss further, we can do it in the cinder channel.
17:01:18 Thanks to everyone for joining!
17:01:27 #endmeeting