16:00:18 #startmeeting blazar
16:00:19 Meeting started Thu Jul 18 16:00:18 2019 UTC and is due to finish in 60 minutes. The chair is priteau. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:22 The meeting name has been set to 'blazar'
16:00:52 #topic Roll call
16:03:30 Hi jakecoll
16:04:27 good morning
16:05:43 Looks like it may be just us today
16:07:04 Jason's here now
16:07:05 #topic Upstream contributions
16:07:15 o/
16:07:21 Hi diurnalist
16:07:33 freenode web UI is completely different, eyes still adjusting...
16:07:47 We skipped the last meeting (it was the 4th of July), although I forgot to formally cancel, my apologies
16:08:10 Yea, jumped up a decade in style
16:08:12 So I'd like to catch up on the status of possible upstream contributions
16:08:26 You should try a real IRC client one of these days ;-)
16:08:27 :+1:
16:09:48 I saw that jakecoll pushed the implementation of network segments, thanks a lot
16:10:18 #link https://review.opendev.org/#/c/668749/
16:10:40 sure, a lot of the tests are failing on gerrit, but passed locally
16:11:05 As I mentioned on Gerrit, we really ought to have a spec first, particularly to agree on the API and DB schema
16:11:25 I think I've got the start of a draft spec somewhere, I will try to dig it up
16:13:05 jakecoll: looking at the tox test failures, it complains about ironicclient, which indeed still appears in parts of the patch
16:13:25 Would you be able to take out the remaining mentions and upload a new patch?
16:13:29 alright, I'll look into those
16:14:25 grep finds ironic in blazar/tests/plugins/networks/test_network_plugin.py and lower-constraints.txt
16:14:55 how does blazar handle specs?
16:16:02 There is a blazar-specs repo: https://opendev.org/openstack/blazar-specs
16:16:20 A spec template for the Train release is at https://opendev.org/openstack/blazar-specs/src/branch/master/doc/source/specs/train/train-template.rst
16:16:40 When merged, specs show up at https://specs.openstack.org/openstack/blazar-specs/
16:17:43 #action jakecoll remove ironic code from network segment patch and resubmit to get tests to pass
16:18:09 If you send me the draft spec, I can finish it up for you
16:18:18 That would be fab, I will be in touch
16:18:24 #action priteau find draft network segment spec and share with jakecoll
16:20:53 Were you able to get the calendar updated to use the resource allocation API?
16:21:46 Yes and no. I did, but the query in blazar_manager is so inefficient that it takes forever to load.
16:22:27 Are there changes we could make to speed it up?
16:23:09 Yep. It was implemented as one big join between three tables, but if you break it up into two then the speedup is pretty dramatic.
16:23:45 I saw you have this new branch, does it include the fix? https://github.com/ChameleonCloud/blazar/tree/optimize-queries
16:24:01 ... which is something that frankly makes very little sense to me, but I'm not sure how the ORM layer works for most openstack stuff
16:24:37 Yes
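As an aside, the query split described at 16:23:09 is roughly the pattern sketched below. The models (Host, Allocation, Reservation) are placeholders for illustration only; this is not Blazar's actual schema nor the code on the optimize-queries branch.

```python
# Illustrative sketch only: placeholder models, not Blazar's actual schema.
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()


class Host(Base):
    __tablename__ = "hosts"
    id = Column(Integer, primary_key=True)
    hostname = Column(String(255))


class Reservation(Base):
    __tablename__ = "reservations"
    id = Column(Integer, primary_key=True)
    status = Column(String(32))


class Allocation(Base):
    __tablename__ = "allocations"
    id = Column(Integer, primary_key=True)
    host_id = Column(Integer, ForeignKey("hosts.id"))
    reservation_id = Column(Integer, ForeignKey("reservations.id"))


def allocations_joined(session: Session):
    # Roughly the shape of the slow query: one large three-way join,
    # materialised row by row through the ORM.
    return (
        session.query(Host, Allocation, Reservation)
        .join(Allocation, Allocation.host_id == Host.id)
        .join(Reservation, Reservation.id == Allocation.reservation_id)
        .all()
    )


def allocations_split(session: Session):
    # The split variant: one smaller join for hosts and their allocations,
    # then a single keyed query to resolve the reservations in memory.
    host_allocs = (
        session.query(Host, Allocation)
        .join(Allocation, Allocation.host_id == Host.id)
        .all()
    )
    reservation_ids = {a.reservation_id for _, a in host_allocs}
    reservations = {
        r.id: r
        for r in session.query(Reservation).filter(
            Reservation.id.in_(reservation_ids))
    }
    return [
        (host, alloc, reservations[alloc.reservation_id])
        for host, alloc in host_allocs
    ]
```

The second form issues two simple queries and joins the results in memory, which matches the two-query idea described above at least in spirit.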
16:25:08 There should be a way to see exactly which queries are being made
16:26:37 this is also a good resource for some gotchas https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy
16:26:38 I imagine you'd need to change the logging to debug or something to get it
16:26:53 https://docs.sqlalchemy.org/en/13/core/engines.html#configuring-logging
16:27:02 oh interesting
16:27:24 i would bet it's not the query per se, rather it's a lot of other stuff that sqlalchemy is trying to do
16:27:46 And on the server side, you can enable the slow query log: https://mariadb.com/kb/en/library/slow-query-log-overview/
16:27:50 there is no way that a join across two tables with ~1000 rows should take 20-40 seconds as jake experienced. it has to be sqlalchemy doing plumbing in a single thread
16:28:12 three tables
16:28:20 but yes, still
16:29:36 I've seen other slow operations in Blazar before, which may be linked to the database. Maybe we need a more general investigation of ORM performance.
16:30:05 i recommend reading that openstack wiki page, i just breezed through it (never read it before) but it has good details and looks to identify some real problems
16:30:32 in particular, this section https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#ORM_Quick_Wins_Proof_of_Concept
16:31:23 It hasn't been changed since 2014 though, we need to carefully check whether the content is still relevant
16:32:14 fair
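The logging approach mentioned at 16:26:38 and in the SQLAlchemy docs link is only a few lines of setup. A generic sketch follows; the connection URL is a placeholder (an in-memory SQLite database so the snippet runs standalone), and in a deployed service oslo.db's connection_debug option in the [database] section gives similar output.

```python
# Minimal sketch of enabling SQLAlchemy query logging, per the docs link
# above. The connection URL is a placeholder, not Blazar's configuration.
import logging

from sqlalchemy import create_engine, text

logging.basicConfig()
# INFO logs each SQL statement; DEBUG additionally logs result rows.
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)

# Equivalent shortcut: create_engine(..., echo=True) or echo="debug".
engine = create_engine("sqlite:///:memory:")

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))  # the emitted SQL now appears in the log
```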
16:33:10 Were there other patches that you were working on?
16:33:40 I did the floatingip update for us. I'm not sure if you have heard back from Masahito
16:34:41 I haven't heard from him, and he was not at the IRC meeting earlier this week.
16:35:01 Is it in a state where it can be submitted for review?
16:35:14 I think you did soft-delete way back. I saw a blueprint for it that is still pending
16:35:40 Yes. I can submit it.
16:36:09 Please do.
16:36:44 nice to contribute all of these back :)
16:36:47 For soft-delete, there are some changes required, I'll have to check.
16:36:55 #action jakecoll submit floating IP update patch
16:37:09 #action priteau find required soft-delete changes
16:38:42 It would be good if you could push more patches when you upgrade to Train
16:39:02 That's a few months away though
16:39:56 floatingips could use allocations. Networks as well once approved.
16:42:09 I think that was on masahito's todo list
16:44:12 I think we've covered everything for active upstream contributions
16:44:17 #topic AOB
16:44:32 A small update about Ironic support
16:45:01 I am working with Tetsuro to test the existing instance reservation code in Stein and check what is missing for supporting bare-metal instances
16:46:02 This requires placement to be running Stein or later as well, as the necessary feature is not available in Rocky
16:46:16 I will keep you posted once I have finished this analysis
16:46:31 what is the missing feature? and, does this mean that bare metal reservations will take a similar form (a new flavor tied to the reservation)?
16:48:13 Sorry, I made a mistake
16:48:25 It requires *Nova* Stein
16:48:39 In Rocky, placement added support for nested resource providers: https://docs.openstack.org/nova/rocky/user/placement.html#support-allocation-candidates-with-nested-resource-providers
16:48:45 In Stein, Nova started to use it
16:48:54 The instance reservation code relies on this
16:48:56 right, makes sense
16:49:08 one thing i want to raise: however we do it, we may want to keep this spec in mind https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/deploy-templates.html
16:49:23 The goal is still for host reservation to support bare metal as well, but tetsuro is starting with instance reservation as this is more integrated with the placement API already
16:51:29 diurnalist: are you interested in deploy templates for Chameleon?
16:52:08 we are interested in supporting users modifying the BIOS in a controlled manner
16:52:16 i think deploy templates offer a path forward there
16:52:56 and, i bring it up because currently these would be tied to a flavor
16:52:59 though, the spec states:
16:53:01 > Longer term Nova is expected to offer the option of a user specifying an override trait on a boot request, based on what the flavor says is possible.
16:53:21 I have heard discussions about this for a long time
16:54:00 yes, also not sure how realistic any timeline is, i just bring it up because both of these ideas are centered around flavors
16:55:06 Blazar creates the flavor in instance reservation mode, so it should be easy to add an API parameter for the user to provide flags to enable/disable hardware options
16:55:37 Thanks for reminding me :-)
16:56:12 yes, what blazar does isn't inherently incompatible
16:58:01 We're almost out of time, anything else to discuss?
16:59:36 That's all folks!
16:59:46 Thanks a lot for joining
16:59:48 #endmeeting
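To make the 16:55:06 idea concrete: a lease-create request could carry an extra per-reservation field that Blazar would turn into extra specs or traits on the flavor it creates. The sketch below uses the existing instance reservation fields; the hardware_options key, the endpoint URL, and the token are purely hypothetical and are not part of today's Blazar API.

```python
# Illustration only: a Blazar lease-create request with a hypothetical
# "hardware_options" key. The endpoint, token, and that key are assumptions,
# not part of the current Blazar API.
import requests

BLAZAR_ENDPOINT = "http://controller:1234/v1"  # placeholder endpoint
TOKEN = "gAAAAAB..."                           # placeholder Keystone token

lease = {
    "name": "reserved-instances",
    "start_date": "2019-08-01 12:00",
    "end_date": "2019-08-02 12:00",
    "events": [],
    "reservations": [{
        "resource_type": "virtual:instance",
        "amount": 1,
        "vcpus": 4,
        "memory_mb": 8192,
        "disk_gb": 40,
        "affinity": False,
        # Hypothetical flag sketching the 16:55:06 idea; Blazar could map
        # it to extra specs/traits on the flavor it creates for the lease.
        "hardware_options": {"hyperthreading": False},
    }],
}

resp = requests.post(
    f"{BLAZAR_ENDPOINT}/leases",
    json=lease,
    headers={"X-Auth-Token": TOKEN},
)
resp.raise_for_status()
print(resp.json())
```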