16:00:18 <priteau> #startmeeting blazar
16:00:19 <openstack> Meeting started Thu Jul 18 16:00:18 2019 UTC and is due to finish in 60 minutes.  The chair is priteau. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:22 <openstack> The meeting name has been set to 'blazar'
16:00:52 <priteau> #topic Roll call
16:03:30 <priteau> Hi jakecoll
16:04:27 <jakecoll> good morning
16:05:43 <priteau> Looks like it may be just us today
16:07:04 <jakecoll> Jason's here now
16:07:05 <priteau> #topic Upstream contributions
16:07:15 <diurnalist> o/
16:07:21 <priteau> Hi diurnalist
16:07:33 <diurnalist> freenode web UI is completely different, eyes still adjusting...
16:07:47 <priteau> We skipped the last meeting (it was 4th of July), although I forgot to formally cancel, my apologies
16:08:10 <jakecoll> Yea, jumped up a decade in style
16:08:12 <priteau> So I'd like to catch up on the status of possible upstream contributions
16:08:26 <priteau> You should try a real IRC client one of these days ;-)
16:08:27 <jakecoll> :+1:
16:09:48 <priteau> I saw that jakecoll pushed the implementation of network segments, thanks a lot
16:10:18 <priteau> #link https://review.opendev.org/#/c/668749/
16:10:40 <jakecoll> sure, a lot of the tests are failing on Gerrit, but they passed locally.
16:11:05 <priteau> As I mentioned on Gerrit, we really ought to have a spec first, particularly to agree on the API and DB schema
16:11:25 <priteau> I think I've got the start of a draft spec somewhere, I will try to dig it up
16:13:05 <priteau> jakecoll: looking at the tox tests failure, it complains about ironicclient, which indeed still appears in parts of the patch
16:13:25 <priteau> Would you be able to take out the remaining mentions and upload a new patch?
16:13:29 <jakecoll> alright, I'll look into those
16:14:25 <priteau> grep finds ironic in blazar/tests/plugins/networks/test_network_plugin.py and lower-constraints.txt
16:14:55 <jakecoll> how does blazar handle specs?
16:16:02 <priteau> There is a blazar-specs repo: https://opendev.org/openstack/blazar-specs
16:16:20 <priteau> A spec template for the train release is at https://opendev.org/openstack/blazar-specs/src/branch/master/doc/source/specs/train/train-template.rst
16:16:40 <priteau> When merged, specs show up at https://specs.openstack.org/openstack/blazar-specs/
16:17:43 <priteau> #action jakecoll remove ironic code from network segment patch and resubmit to get tests to pass
16:18:09 <jakecoll> If you send me the draft spec, I can finish it up for you
16:18:18 <priteau> That would be fab, I will be in touch
16:18:24 <priteau> #action priteau find draft network segment spec and share with jakecoll
16:20:53 <priteau> Were you able to get the calendar updated to use the resource allocation API?
16:21:46 <jakecoll> Yes and no. I did, but the query in blazar_manager is so inefficient that it takes forever to load.
16:22:27 <priteau> Are there changes we could do to make it faster?
16:23:09 <jakecoll> Yep. It was implemented as one big join between three tables, but if you break it up into two then the speed-up is pretty dramatic.
16:23:45 <priteau> I saw you have this new branch, does it include the fix? https://github.com/ChameleonCloud/blazar/tree/optimize-queries
16:24:01 <diurnalist> ... which is something that frankly makes very little sense to me, but I'm not sure how the ORM layer works for most openstack stuff
16:24:37 <jakecoll> Yes
16:25:08 <priteau> There should be a way to see exactly which queries are being made
16:26:37 <diurnalist> this is also a good resource for some gotchas https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy
16:26:38 <jakecoll> I imagine you'd need to change the logging to debug or something to get it
16:26:53 <priteau> https://docs.sqlalchemy.org/en/13/core/engines.html#configuring-logging
16:27:02 <jakecoll> oh interesting
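[Editor's note: per the SQLAlchemy logging documentation linked above, statement logging can be turned on either with `echo=True` on `create_engine()` or by raising the level of the `sqlalchemy.engine` logger. A minimal stdlib-only sketch of the second approach, as one might apply it to the blazar-manager process:]

```python
import logging

# Route SQLAlchemy's engine logger to stderr at INFO level: every SQL
# statement (and its parameters) the process emits is then logged.
# This is equivalent to passing echo=True to create_engine().
logging.basicConfig(level=logging.WARNING)
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)

# Confirm the level took effect before chasing a performance problem:
level = logging.getLogger("sqlalchemy.engine").getEffectiveLevel()
```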
16:27:24 <diurnalist> i would bet it's not the query per se, rather it's a lot of other stuff that sqlalchemy is trying to do
16:27:46 <priteau> And on the server side, you can enable the slow query log: https://mariadb.com/kb/en/library/slow-query-log-overview/
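[Editor's note: following the MariaDB page linked above, the slow query log can be enabled at runtime without a restart; a minimal sketch (threshold and file path are illustrative choices):]

```sql
-- Log any query taking longer than 1 second; no server restart needed.
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1.0;
SET GLOBAL slow_query_log_file = '/var/log/mysql/mariadb-slow.log';
```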
16:27:50 <diurnalist> there is no way that a join across two tables with ~1000 rows should take 20-40 seconds as jake experienced. it has to be sqlalchemy doing plumbing in a single thread
16:28:12 <jakecoll> three tables
16:28:20 <jakecoll> but yes, still
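[Editor's note: the optimization jakecoll describes — replacing one three-way join with two smaller queries — can be illustrated with stdlib sqlite3. The schema below is a hypothetical stand-in for Blazar's lease/reservation/allocation tables, not the actual Blazar models:]

```python
import sqlite3

# Toy schema standing in for the three joined tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE leases (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE reservations (id INTEGER PRIMARY KEY, lease_id INTEGER);
    CREATE TABLE allocations (id INTEGER PRIMARY KEY,
                              reservation_id INTEGER, host_id INTEGER);
    INSERT INTO leases VALUES (1, 'lease-a');
    INSERT INTO reservations VALUES (10, 1);
    INSERT INTO allocations VALUES (100, 10, 7);
""")

# Original approach: one big three-way join.
joined = conn.execute("""
    SELECT l.name, a.host_id
    FROM leases l
    JOIN reservations r ON r.lease_id = l.id
    JOIN allocations a ON a.reservation_id = r.id
""").fetchall()

# Split approach: join two tables, then resolve the third by key in
# Python, sidestepping a pathological plan or ORM overhead on the join.
pairs = conn.execute("""
    SELECT r.lease_id, a.host_id
    FROM reservations r
    JOIN allocations a ON a.reservation_id = r.id
""").fetchall()
lease_names = dict(conn.execute("SELECT id, name FROM leases"))
split = [(lease_names[lid], host) for lid, host in pairs]

assert joined == split  # both strategies return the same rows
```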
16:29:36 <priteau> I've seen other slow operations in Blazar before, which may be linked to the database. Maybe we need a more general investigation of ORM performance.
16:30:05 <diurnalist> i recommend reading that openstack wiki page, i just breezed through it (never read it before) but it has good details and looks to identify some real problems
16:30:32 <diurnalist> in particular, this section https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#ORM_Quick_Wins_Proof_of_Concept
16:31:23 <priteau> It hasn't been changed since 2014 though, need to carefully check if the content is still relevant
16:32:14 <diurnalist> fair
16:33:10 <priteau> Were there other patches that you were working on?
16:33:40 <jakecoll> I did the floatingip update for us. I'm not sure if you have heard back from Masahito
16:34:41 <priteau> I haven't heard from him, and he was not at the IRC meeting earlier this week.
16:35:01 <priteau> Is it in a state where it can be submitted for review?
16:35:14 <jakecoll> I think you did soft-delete way back. I saw a blueprint for it that is still pending
16:35:40 <jakecoll> Yes. I can submit it.
16:36:09 <priteau> Please do.
16:36:44 <diurnalist> nice to contribute all of these back :)
16:36:47 <priteau> For soft-delete, there are some changes required, I'll have to check.
16:36:55 <priteau> #action jakecoll submit floating IP update patch
16:37:09 <priteau> #action priteau find required soft-delete changes
16:38:42 <priteau> It will be good if you can drop more patches when you upgrade to Train
16:39:02 <priteau> That's a few months away though
16:39:56 <jakecoll> floatingips could use allocations. Networks as well once approved.
16:42:09 <priteau> I think that was on masahito's todo list
16:44:12 <priteau> I think we've covered everything for active upstream contributions
16:44:17 <priteau> #topic AOB
16:44:32 <priteau> A small update about Ironic support
16:45:01 <priteau> I am working with Tetsuro to test the existing instance reservation code in Stein and check what is missing for supporting bare-metal instances
16:46:02 <priteau> This requires placement to be running Stein or later as well, as the necessary feature is not available in Rocky
16:46:16 <priteau> I will keep you posted once I have finished this analysis
16:46:31 <diurnalist> what is the missing feature? and, does this mean that bare metal reservations will take a similar form (a new flavor tied to the reservation)
16:48:13 <priteau> Sorry, I made a mistake
16:48:25 <priteau> It requires *Nova* Stein
16:48:39 <priteau> In Rocky placement added support for nested resource providers: https://docs.openstack.org/nova/rocky/user/placement.html#support-allocation-candidates-with-nested-resource-providers
16:48:45 <priteau> In Stein nova started to use it
16:48:54 <priteau> The instance reservation code relies on this
16:48:56 <diurnalist> right, makes sense
16:49:08 <diurnalist> one thing i want to raise, however we do it, we may want to keep this spec in mind https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/deploy-templates.html
16:49:23 <priteau> The goal is still for host reservation to support bare-metal as well, but tetsuro is starting with instance reservation as this is more integrated with the placement API already
16:51:29 <priteau> diurnalist: are you interested in deploy templates for Chameleon?
16:52:08 <diurnalist> we are interested in supporting users modifying bios in a controlled manner
16:52:16 <diurnalist> i think deploy templates offer a path forward there
16:52:56 <diurnalist> and, i bring it up because currently these would be tied to a flavor
16:52:59 <diurnalist> though, the spec states:
16:53:01 <diurnalist> > Longer term Nova is expected to offer the option of a user specifying an override trait on a boot request, based on what the flavor says is possible.
16:53:21 <priteau> I have heard discussions about this for a long time
16:54:00 <diurnalist> yes, also not sure how realistic any timeline is, i just bring it up because both of these ideas are centered around flavors
16:55:06 <priteau> Blazar creates the flavor in instance reservation mode, so it should be easy to have an additional API parameter for the user to provide flags to enable/disable hardware options
16:55:37 <priteau> Thanks for reminding me :-)
16:56:12 <diurnalist> yes, what blazar does isn't inherently incompatible
16:58:01 <priteau> We're almost out of time, anything else to discuss?
16:59:36 <priteau> That's all folks!
16:59:46 <priteau> Thanks a lot for joining
16:59:48 <priteau> #endmeeting