Wednesday, 2020-02-26

*** ociuhandu has joined #openstack-nova00:04
*** ociuhandu has quit IRC00:09
*** eharney has quit IRC00:15
*** nicolasbock has quit IRC00:18
*** eharney has joined #openstack-nova00:27
*** brinzhang has joined #openstack-nova00:29
*** nweinber has quit IRC00:33
*** takamatsu has quit IRC00:35
*** macz_ has quit IRC00:36
*** sean-k-mooney has joined #openstack-nova00:39
*** tbachman has joined #openstack-nova00:43
sean-k-mooneydansmith: efried: just heading to bed, but my multinode cyborg tempest full job https://review.opendev.org/#/c/709641/5 just completed https://48ef08cde8cc22034a1d-8011a2266d21f0c09baf1c83d6d5002e.ssl.cf5.rackcdn.com/709641/5/check/cyborg-multinode-tempest-full/e4d260f/testr_results.html00:48
sean-k-mooneyi have only skimmed them quickly but i believe it shows that vms can be booted with a cyborg flavor initially, since tempest.api.compute.servers.test_create_server.ServersTestJSON.test_host_name_is_same_as_server_name passed; i changed the default flavor to request a fake device00:49
sean-k-mooneyit also reproduced the 401 issue where the keystone middleware eventually starts rejecting the token because it thinks it has expired00:50
sean-k-mooneyand i can see the live migration operations fail with Details: {'code': 400, 'message': 'No valid host was found. Unable to move instance 229ab495-b151-4522-b8dd-fa5818e302dd to host ubuntu-bionic-vexxhost-sjc1-0014809343. The instance has complex allocations on the source host so move cannot be forced.'}00:51
sean-k-mooneyso i think that implies that 1) the vm was created with the cyborg resources and 2) live migration is being rejected before we get to the driver, which is good00:52
sean-k-mooneythat was from test_live_block_migration00:52
*** takamatsu has joined #openstack-nova00:52
sean-k-mooneyanyway im calling it a night o/ hopefully that will be useful as it is reproducing the same issues i was seeing, and it's now automated so it's easy to rerun00:54
*** sean-k-mooney has quit IRC00:59
*** brinzhang_ has joined #openstack-nova01:04
*** brinzhang_ has quit IRC01:05
*** brinzhang_ has joined #openstack-nova01:06
*** brinzhang has quit IRC01:08
*** ileixe has joined #openstack-nova01:09
openstackgerritBrin Zhang proposed openstack/nova master: Add test coverage of existing os-volumes-attachments policies  https://review.opendev.org/70992901:55
openstackgerritBrin Zhang proposed openstack/nova master: Introduce scope_types in os-volumes-attachments policy  https://review.opendev.org/70938802:00
*** ociuhandu has joined #openstack-nova02:06
*** ociuhandu has quit IRC02:11
*** gyee has quit IRC02:13
*** brinzhang has joined #openstack-nova02:33
*** kaisers_away has quit IRC02:34
*** brinzhang_ has quit IRC02:36
*** ileixe has quit IRC02:46
*** mkrai has joined #openstack-nova02:46
*** mkrai has quit IRC02:52
*** mkrai_ has joined #openstack-nova02:52
*** Nel1x has quit IRC03:04
*** xiaolin has joined #openstack-nova03:07
*** nweinber has joined #openstack-nova03:08
*** zhanglong has joined #openstack-nova03:13
*** nweinber has quit IRC03:17
*** Nel1x has joined #openstack-nova03:22
*** zhanglong has quit IRC03:36
*** zhanglong has joined #openstack-nova03:37
*** hongbin has joined #openstack-nova03:49
*** ileixe has joined #openstack-nova03:53
*** chenhaw has joined #openstack-nova03:55
*** chenhaw has quit IRC03:55
*** chenhaw has joined #openstack-nova03:57
*** chenhaw has quit IRC03:58
*** nweinber has joined #openstack-nova03:59
*** brinzhang_ has joined #openstack-nova03:59
*** nweinber has quit IRC04:09
*** yedongcan has quit IRC04:10
*** mkrai_ has quit IRC04:17
*** mkrai has joined #openstack-nova04:17
*** brinzhang has joined #openstack-nova04:23
*** brinzhang has quit IRC04:25
*** brinzhang has joined #openstack-nova04:25
*** brinzhang_ has quit IRC04:26
*** factor has quit IRC04:31
*** factor has joined #openstack-nova04:31
*** mkrai has quit IRC04:37
*** mkrai_ has joined #openstack-nova04:38
*** hongbin has quit IRC04:39
*** yaawang has joined #openstack-nova04:44
*** imacdonn has quit IRC04:47
*** imacdonn has joined #openstack-nova04:47
*** brinzhang_ has joined #openstack-nova04:48
*** Nel1x has quit IRC04:50
*** zhanglong has quit IRC04:53
*** zhanglong has joined #openstack-nova04:55
*** tetsuro has quit IRC05:20
*** tetsuro has joined #openstack-nova05:22
*** links has joined #openstack-nova05:25
*** vishalmanchanda has joined #openstack-nova05:34
*** evrardjp has quit IRC05:34
*** evrardjp has joined #openstack-nova05:35
*** tetsuro has quit IRC05:39
*** udesale has joined #openstack-nova05:41
*** udesale has quit IRC05:41
*** udesale has joined #openstack-nova05:41
*** larainema has joined #openstack-nova05:51
*** zhanglong has quit IRC05:58
*** zhanglong has joined #openstack-nova06:01
*** ratailor has joined #openstack-nova06:06
*** kozhukalov has joined #openstack-nova06:08
*** ociuhandu has joined #openstack-nova06:08
*** ociuhandu has quit IRC06:13
*** ratailor has quit IRC06:18
*** zhanglong has quit IRC06:20
*** brinzhang_ has quit IRC06:20
*** brinzhang_ has joined #openstack-nova06:21
*** ratailor has joined #openstack-nova06:21
*** zhanglong has joined #openstack-nova06:22
*** brinzhang_ has quit IRC06:22
*** brinzhang_ has joined #openstack-nova06:23
*** brinzhang has joined #openstack-nova06:24
*** udesale has quit IRC06:24
*** brinzhang has quit IRC06:25
*** udesale has joined #openstack-nova06:25
*** tetsuro has joined #openstack-nova06:31
*** rcernin has quit IRC06:47
openstackgerritBrin Zhang proposed openstack/nova master: Fix os-volumes-attachments policy to be admin_or_owner  https://review.opendev.org/70995506:51
*** ociuhandu has joined #openstack-nova07:02
*** ociuhandu has quit IRC07:07
*** mkrai_ has quit IRC07:10
*** udesale has quit IRC07:12
*** udesale has joined #openstack-nova07:12
*** mkrai has joined #openstack-nova07:27
*** xiaolin has quit IRC07:33
brinzhangmelwitt, gmann, stephenfin: is bug 1864776 real? can you all check?07:35
openstackbug 1864776 in OpenStack Compute (nova) "os-volumes-attachments API policy is allowed for everyone even policy defaults is admin_or_owner" [Undecided,In progress] https://launchpad.net/bugs/1864776 - Assigned to Brin Zhang (zhangbailin)07:35
brinzhanggmann: the new policy of admin_api is {self.legacy_admin_context, self.system_admin_context, self.project_admin_context}, and new policy of the admin_or_owner is {self.legacy_admin_context, self.system_admin_context, self.project_admin_context, self.project_member_context, self.system_member_context, self.system_reader_context, self.system_foo_context, self.project_foo_context, self.project_reader_context, self.other_project_member_context},07:40
brinzhangit's all of the auth contexts.07:40
brinzhanggmann: Is my understanding of this right?07:40
*** ociuhandu has joined #openstack-nova07:42
*** slaweq_ has joined #openstack-nova07:42
*** slaweq_ is now known as slaweq07:45
*** ociuhandu has quit IRC07:48
*** brinzhang_ has joined #openstack-nova08:00
*** brinzhang_ has quit IRC08:02
*** brinzhang_ has joined #openstack-nova08:02
*** brinzhang has quit IRC08:03
*** ociuhandu has joined #openstack-nova08:05
*** tesseract has joined #openstack-nova08:06
*** jawad_axd has joined #openstack-nova08:07
*** mkrai has quit IRC08:10
*** mkrai has joined #openstack-nova08:10
*** ociuhandu has quit IRC08:11
*** mkrai has quit IRC08:16
*** mkrai_ has joined #openstack-nova08:16
*** udesale has quit IRC08:18
*** brinzhang has joined #openstack-nova08:21
*** yaawang has quit IRC08:21
*** yaawang has joined #openstack-nova08:21
*** iurygregory has joined #openstack-nova08:21
*** brinzhang has quit IRC08:21
*** brinzhang has joined #openstack-nova08:22
*** rcernin has joined #openstack-nova08:22
*** maciejjozefczyk has joined #openstack-nova08:22
*** damien_r has joined #openstack-nova08:28
*** brinzhang has joined #openstack-nova08:29
*** amoralej|off is now known as amoralej08:31
*** udesale has joined #openstack-nova08:39
*** brinzhang has quit IRC08:40
*** brinzhang has joined #openstack-nova08:41
*** mkrai_ has quit IRC08:42
*** mkrai__ has joined #openstack-nova08:42
*** mlycka has joined #openstack-nova08:52
*** ralonsoh has joined #openstack-nova08:53
*** tkajinam has quit IRC08:57
*** lennyb has quit IRC09:06
*** lennyb has joined #openstack-nova09:07
*** ociuhandu has joined #openstack-nova09:08
*** xek_ has joined #openstack-nova09:19
*** ociuhandu has quit IRC09:22
*** psachin has joined #openstack-nova09:25
*** tesseract has quit IRC09:35
*** tesseract has joined #openstack-nova09:36
*** ociuhandu has joined #openstack-nova09:42
*** martinkennelly has joined #openstack-nova09:49
*** jangutter has joined #openstack-nova09:57
*** jangutter has quit IRC09:57
*** jangutter has joined #openstack-nova09:57
*** jangutter has quit IRC09:58
*** jangutter has joined #openstack-nova09:59
openstackgerritKevin Zhao proposed openstack/nova master: Add default cpu model for aarch64  https://review.opendev.org/70949410:04
*** ociuhandu has quit IRC10:11
openstackgerritLee Yarwood proposed openstack/nova master: virt: Provide block_device_info during rescue  https://review.opendev.org/70081110:16
openstackgerritLee Yarwood proposed openstack/nova master: libvirt: Add support for stable device rescue  https://review.opendev.org/70081210:16
openstackgerritLee Yarwood proposed openstack/nova master: compute: Report COMPUTE_RESCUE_BFV and check during rescue  https://review.opendev.org/70142910:16
openstackgerritLee Yarwood proposed openstack/nova master: api: Introduce microverion 2.82 allowing boot from volume rescue  https://review.opendev.org/70143010:16
openstackgerritLee Yarwood proposed openstack/nova master: compute: Extract _get_bdm_image_metadata into nova.utils  https://review.opendev.org/70521210:16
openstackgerritLee Yarwood proposed openstack/nova master: libvirt: Support boot from volume stable device instance rescue  https://review.opendev.org/70143110:16
gibirm_work: responded in https://review.opendev.org/#/c/709280 I'm happy to help with technicalities but I'm missing what the feature is supposed to achieve and how the existing impl in neutron works10:18
rm_workgibi: just read it, thanks!10:19
rm_workso, the key is that it's not really a feature for end users at all -- it's for service integration cases10:20
rm_workthe octavia use-case is here: https://review.opendev.org/#/c/706153/10:20
rm_workgibi: as it is, neutron has been creating aggregates for each segment_id in a routed network automatically, for like... 3 cycles already maybe?10:21
*** ociuhandu has joined #openstack-nova10:21
rm_workso for example, if i take a look at my Staging environment's aggregate list, it looks like this: http://paste.openstack.org/show/7YyK5FtL2OiXgErAjMbM/10:24
rm_workthat aggregate was automatically created by neutron because it was configured as a routed network with a segment and those HVs were assigned to that segment in neutron10:24
rm_workgibi: i don't think there's really any further work to be done -- this filter *just works* already for the intended purpose -- allowing services to hint to nova which aggregate to schedule to, based on network segment10:25
rm_workthe real challenge is just getting it merged :)10:26
rm_workI know of at least two companies running this filter on live clouds already, one of them for many years10:26
rm_workthe second use-case is live-migrate, and I already have the basic patch for it (again, has been running for quite a while in some clouds) but I'm not spending the effort to get it all prettified yet until I have some confidence that this filter might be accepted10:27
gibirm_work: reading back ...10:28
rm_workthis is it though: http://paste.openstack.org/show/Z8ZYksr4uisFUddFoyOu/10:29
rm_workbasically all I could ask from you is a +1 instead of a -1 if you understand what I'm going for here10:29
rm_worktrying to unlock useful features that have been stuck downstream because people didn't have the energy to do what I'm doing and post them upstream with tests and docs, and convince people to review :D10:30
*** ociuhandu has quit IRC10:31
gibirm_work: so the new scheduler hint routed_segments is a list of segment ids10:31
rm_workyes10:32
gibirm_work: and the filter tries to find an aggregate that has a name that ends with such a segment id10:32
rm_workusers can't actually see segment_ids, so it's not really useful for the end-user side -- it's designed for service interactions10:32
rm_workyes, as that's the neutron spec to auto-create aggregates with segment_ids with that name format10:32
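For reference, the name-suffix match described above can be sketched in a few lines of Python. This is an illustrative sketch only; the function and argument names are hypothetical, not the actual filter code from the patch under review:

```python
# Sketch of the proposed filter's matching rule: a host passes when
# every segment id in the routed_segments scheduler hint is the suffix
# of the name of some aggregate the host belongs to (per the neutron
# routed-networks aggregate naming). Names here are illustrative.

def host_passes(host_aggregate_names, hinted_segment_ids):
    """True if each hinted segment id matches one of the host's aggregates."""
    return all(
        any(name.endswith(seg_id) for name in host_aggregate_names)
        for seg_id in hinted_segment_ids
    )
```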
gibirm_work: do you have a link to that neutron spec?10:33
rm_worklet me look10:33
gibiso how does the 'service' that calls nova know the segment id it needs to pass in the scheduler hint?10:33
rm_worknova side: https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/neutron-routed-networks.html#proposed-change10:36
rm_workneutron side: https://specs.openstack.org/openstack/neutron-specs/specs/newton/routed-networks.html10:36
gibithanks10:36
gibiI have to read up10:37
rm_workgibi: so for the octavia case, the issue is that we need both of the service-vms we build to be in the same segment as the VIP (an unbound neutron port) we create10:37
rm_workso, we simply look at that port, then look at the subnet it's on, and get the segment_id, then boot using that10:37
gibirm_work: thanks ^^10:37
rm_workfor the live-migrate case in nova, it's basically the same -- all the ports need to be pluggable to the new VM, so the new VM needs to have access to the same segments -- so it looks at all the ports on the old VM, gets the subnets->segments from them, and then does the migrate to a host that schedules with all of the relevant segments10:38
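The port-to-segment walk rm_work outlines could look roughly like this. The dicts stand in for neutron API responses (fixed_ips, subnet_id, and segment_id mirror neutron's field names), and the helper itself is a hypothetical sketch, not code from the linked paste:

```python
# Hypothetical sketch: collect the segment id behind every port on an
# instance, so the whole set can be passed as the routed_segments hint
# when scheduling the migration. Ports and subnets are plain dicts
# standing in for neutron API responses.

def segment_ids_for_ports(ports, subnets_by_id):
    """Return the set of segment ids reachable from the ports' subnets."""
    seg_ids = set()
    for port in ports:
        for fixed_ip in port.get("fixed_ips", []):
            subnet = subnets_by_id[fixed_ip["subnet_id"]]
            if subnet.get("segment_id"):
                seg_ids.add(subnet["segment_id"])
    return seg_ids
```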
rm_workyou probably don't have to read the ENTIRETY of both specs to get the idea, and I don't know whether they ended up matching the implementation EXACTLY, but I can answer questions10:39
ileixerm_work: gibi: sorry to hijack, may I ask one more question?10:40
rm_worko/10:40
gibirm_work: thanks. This description helped me a lot10:41
ileixeYou said normal users were unaware of segments, does that mean a normal user does not make a VM using a routed network?10:41
rm_workwhen routed-networks are in use, the user is really not even aware of it10:41
rm_workit's all handled inside nova/neutron10:41
rm_workas far as the user is aware, they simply ask for a VM on a network, and it happens10:42
ileixeYes, so what happens to the normal user?10:42
ileixethen they are not using routed network, right?10:42
gibirm_work: can we trade? :) I will support your way forward to either have the filter or the placement based solution in ussuri, in return you can help me review the tempest patch?10:42
rm_workso in the normal workflow, the user will do a nova boot, and it will go for scheduling, schedule to any HV, and then network plugging will be deferred to neutron instead of nova picking the IP10:42
rm_workthen neutron will look at the HV it was scheduled to, pick an appropriate segment, and do the IP assignment/plug on the neutron side10:42
rm_workrouted networks is kinda just... something that a cloud either needs (and thus uses) or doesn't. in a cloud that's routed-network enabled, there really aren't *non-routed* networks, usually (at least, not that i've seen)10:44
gibiso for first boot neutron will select the segment, but for live migrate nova should select a host that supports the same segment10:44
rm_workit's really a network implementation choice10:44
rm_workgibi: correct10:44
rm_workcurrently, live-migrate cannot work in a routed-network enabled cloud10:44
gibirm_work: wooot! Finally I understood it :)10:44
rm_workbecause the scheduling doesn't understand it, and will pick some random HV based on other scheduling criteria, and then that HV may not actually be in the correct aggregate for plugging the old port :)10:45
*** sean-k-mooney has joined #openstack-nova10:45
gibiplugging the old port in the live migrate case fails because the port is already assigned to a segment? while in the normal boot case the port is in a deferred state so the plug won't fail10:45
rm_workyes10:46
* gibi gaining speed :)10:46
rm_worklive-migrate needs to keep the old address (port)10:46
rm_worka segment is a subnet attribute10:46
rm_workport has an address (which is part of a subnet)10:46
sean-k-mooneygibi: plugging the port on live migration would work if we live migrated to the same network segment10:46
rm_workso one Network can have many Subnets, each which is a separate segment10:46
gibiok, so during cold migrate if the port needs to be plugged to another segment then it will change IP address, but it is not a problem as the guest is rebooted anyhow10:47
ileixe"neutron will look at the HV it was scheduled to, pick an appropriate segment, and do the IP assignment/plug on the neutron side" <- that's the point that I was unaware of10:47
rm_worksean-k-mooney: right, so that's what we're talking about -- this simple filter allows for that10:47
ileixeThanks for explanation I will look at it10:47
sean-k-mooneygibi: it should not change ip10:47
gibisean-k-mooney: but if it changes segment then it will change IP10:47
rm_workI actually haven't even paid attention to cold-migrate, we don't do it either, or care :D it may have the same issue as live-migrate10:48
gibiahh I see10:48
rm_workI am not sure10:48
sean-k-mooneythat's my point: it's not valid for the port to change its ip on a move operation so it cannot change segment10:48
rm_workright10:48
sean-k-mooneywhich is why it fails today10:48
rm_worksean-k-mooney: https://review.opendev.org/#/c/709280/10:48
gibirm_work: look, I understand you don't care but I as a nova maintainer need to care. Hence my previous -1, as I did not understand the feature10:48
rm_workthis is a very very simple way to get live-migrate and other service-to-service use-cases unblocked10:49
sean-k-mooneywe use ip_allocation=defer to allow the ip to be allocated when we bind the port but it cannot change again after that point10:49
rm_workgibi: right, sorry, I didn't mean that I don't care if all features work -- i mean i didn't care personally so hadn't checked whether it was the same issue -- this would solve it anyway if it is, and not affect it if it isn't10:49
rm_worki'm not going to propose a feature that breaks other stuff :)10:49
gibisean-k-mooney: I rely on you to define the expected behavior of a port during cold migrate in the current case, so if you say it should not change IP then I accept that10:50
rm_worksure, and in that case, this would allow to solve for cold-migrate as well10:50
rm_workin the same way10:50
gibirm_work: no worries. I need to act as a guardian not you. I just explained why I was -110:51
sean-k-mooneyrm_work: just looking at it now but how is the segment id passed to nova? a scheduler hint or something like that10:51
rm_worksean-k-mooney: scheduler hint, yes10:51
sean-k-mooney--hint routed_segments=96a9316a-54cb-4043-9ccc-b9cacd0d4d5210:52
* gibi needs to read mriedem's patch as well as that was an alternative solution10:52
sean-k-mooneyso is that uuid a placement aggregate uuid10:52
sean-k-mooneyso we convert that to a member_of ?10:52
gibiright now I like the simplicity of the filter, but I do not like matching the _name_ of the aggregate as that feels hackish10:52
rm_worksean-k-mooney: this is a neutron segment_id10:52
rm_workgibi: yeah I am not a huge fan of name-comparison as a matcher either, but per the spec it DOES work10:53
rm_workso, this stuff is also in Placement as well10:53
rm_workand in Placement it has a little tighter mapping10:53
sean-k-mooneyright so filters are not allowed to call other services' rest apis so im wondering how nova knows if a host is connected to the segment10:53
gibirm_work: yeah I got it that this aggregate naming thing was how it is speced and implemented10:53
rm_worksean-k-mooney: nova aggregates10:53
gibisean-k-mooney: ^^10:53
sean-k-mooneyif we have nova host aggregates we dont need a new filter10:54
rm_worksean-k-mooney: see for example http://paste.openstack.org/show/7YyK5FtL2OiXgErAjMbM/10:54
rm_workwe don't know the correct nova aggregate10:54
rm_workother services would know the segment_id10:54
rm_worknova understands the host-aggregates internally and how they map to segments10:55
rm_worksorry, when I say "we" in this case I mean other services10:55
rm_work(I am speaking from the Octavia point of view, for example)10:55
rm_workWhat Octavia has is a port, with a subnet and thus a segment_id10:56
rm_workOctavia needs to tell Nova to schedule to the aggregate that has that segment_id10:56
*** brinzhang_ has joined #openstack-nova10:57
rm_workI guess you're saying that there exists a filter already that would allow us to specify an aggregate for scheduling, and we could just look that up?10:57
sean-k-mooneyi think there is one that would allow that yes10:58
sean-k-mooneyi thought we could use the json filter for that but it looks like the answer is no10:58
gibisean-k-mooney: AggregateInstanceExtraSpec filter works based on flavors and aggregate extra spec. but we only have the aggregate name to match10:58
rm_worksean-k-mooney: so here is the example patch for how to make live-migrate work with this: http://paste.openstack.org/show/Z8ZYksr4uisFUddFoyOu/10:59
*** brinzhang_ has quit IRC10:59
rm_workgibi: i guess TECHNICALLY it's possible for other services to query nova for aggregate list, and do the matchup on their own10:59
sean-k-mooneygibi: ya the aggregate image and extra spec filters are not the right granularity10:59
*** brinzhang_ has joined #openstack-nova10:59
sean-k-mooneyi was considering the compute capabilities filter10:59
rm_workassuming we could pass an aggregate_id?10:59
sean-k-mooneybut setting the segment on each host would be a pain11:00
gibirm_work: you could not pass agg_id either, you can specify agg extra_spec in the flavor11:00
rm_workin the *flavor*?11:00
rm_workyeah, that's not workable11:00
sean-k-mooneythe issue with the filter based approach is if you change the default result set size from placement it can end up returning only hosts that can't pass the filter11:00
*** brinzhang has quit IRC11:00
sean-k-mooneyit's the issue that CERN had with only 10 results11:01
rm_workhmm11:01
gibirm_work: yeah in the flavor, so I getting to grasp why you are in pain11:01
sean-k-mooneys/results/allocation_candidates/11:01
*** priteau has joined #openstack-nova11:01
rm_workyeah we don't want to lock one flavor to one aggregate T_T11:01
rm_worksean-k-mooney: hmm interesting...11:01
sean-k-mooneyi dont think we should do this as a filter. we could do it as a pre-filter11:02
gibisean-k-mooney: that is a valid point, but limit is configurable so the deployer can ask for a lot of candidates via a big limit to avoid this11:02
sean-k-mooneyand transform the hint into a member_of11:02
rm_workso if we had, say, 100 HVs and 10 segments... it's possible that the filter would only even have 10 hosts as candidates, none of which are in the target filters?11:02
gibisean-k-mooney: pre-filter is mriedem's approach11:02
rm_work*target segments?11:02
gibirm_work: the ac limit is 1000 by default but yes11:02
rm_workahaha ok11:02
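The limit discussed just above is nova's [scheduler]/max_placement_results configuration option, which defaults to 1000. A deployer concerned about a post-placement filter starving could raise it in nova.conf; the value 5000 below is an arbitrary example:

```ini
[scheduler]
# Default is 1000. Placement returns at most this many allocation
# candidates; a post-placement filter can starve if none of the
# returned candidates satisfy it, so large deployments may raise this.
max_placement_results = 5000
```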
sean-k-mooneygibi: yes although i think we need a much more holistic approach as i said in my comment on his patch11:03
gibisean-k-mooney: I'm totally up for a holistic approach I just did not understand mriedem's way of doing this yet11:03
gibibut this current discussion will help about that11:03
sean-k-mooneyack11:04
gibisean-k-mooney: so your approach would be to take the hint and use it as a member_of query. As these aggregates are nova aggregates they are expected to be mirrored in placement too, so member_of would work11:06
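The hint-to-member_of translation being discussed can be sketched as follows. It assumes each hinted segment maps to a mirrored placement aggregate uuid, and relies on placement's query semantics: repeating the member_of parameter ANDs the constraints, while member_of=in:&lt;a&gt;,&lt;b&gt; would OR them. The helper name is hypothetical:

```python
# Illustrative pre-filter step: emit one member_of query parameter per
# required aggregate so placement only returns hosts that are members
# of every one of them (repeated member_of params are ANDed by the
# placement API's GET /allocation_candidates).

def member_of_params(aggregate_uuids):
    """Build (key, value) query pairs for GET /allocation_candidates."""
    return [("member_of", uuid) for uuid in sorted(aggregate_uuids)]
```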
rm_workyeah so the point of doing it this way was to try to make it LESS controversial :D11:06
sean-k-mooneywell my short term hack yes11:06
*** mlycka has quit IRC11:06
sean-k-mooneyi dont think we should need a hint11:07
rm_worksimplify the use-case target (admin, not users) and make it an optional filter, not in normal code-path11:07
gibiand this way we could avoid the issue with the a_c limit11:07
rm_workso yeah, they're in placement11:07
sean-k-mooneyi want neutron to pass the placement aggregate and not require the creation of any nova aggregates11:07
*** ociuhandu has joined #openstack-nova11:07
sean-k-mooneythe ip segments should be mirrored by neutron into placement as aggregates11:07
gibisean-k-mooney: and since the qos nova support we have a way to pass traits and RCs but not aggregates from neutron to nova11:07
sean-k-mooneyyes so we would have to extend that11:08
sean-k-mooneybut that would be next cycle at the earliest11:08
*** tesseract has quit IRC11:08
gibiso the full solution needs a neutron change. I guess rm_work will hate that delay :)11:08
sean-k-mooneyso if we ignore the approach i would like to do for now11:08
gibisean-k-mooney: agree11:08
openstackgerritStephen Finucane proposed openstack/nova master: api: Add framework for extra spec validation  https://review.opendev.org/70464311:09
openstackgerritStephen Finucane proposed openstack/nova master: api: Add microversion 2.82, extra spec validation  https://review.opendev.org/70843611:09
sean-k-mooneyi would be semi ok with an experimental pre-filter with the understanding that we will probably remove it next cycle11:09
sean-k-mooneyas a FFE11:09
sean-k-mooneyif we were to do it as a post filter they are pluggable and i dont see why it should be in tree11:09
gibisean-k-mooney: that pre-filter will be based on a scheduler hint?11:09
sean-k-mooneypre-filters are not pluggable so they have to be in tree11:09
sean-k-mooneyi think that is the shortest path yes11:10
gibiI see11:11
sean-k-mooneyso you dont set ip_allocation=defer and you precreate the neutron port. neutron assigns it an ip from a segment, then you boot the vm with that port and pass the segment as a hint11:11
*** tesseract has joined #openstack-nova11:11
sean-k-mooneysince the hint will be stored in the request spec it should lock the vm to that segment from then on11:12
sean-k-mooneythis would also work if you just passed the network to nova and let it create the port11:12
*** ociuhandu has quit IRC11:12
gibimriedem's approach assumes that the segment_id == placement aggregate id so nova can read the segment id from neutron and construct the placement query without any extra hack11:13
sean-k-mooneyyes11:14
gibithat would remove the need of a hint11:14
sean-k-mooneyyes11:14
gibiso it would remove the ~ API impact due to the hint11:14
rm_worksean-k-mooney: so, a post-filter that isn't in-tree, can't be a dependency of other services? I think?11:15
sean-k-mooneyit does require us to look up the segment of every neutron port however11:15
rm_workI mean...11:15
sean-k-mooneyrm_work: well not normally no11:15
sean-k-mooneyrm_work: what other service are you thinking of11:15
rm_workso, I'm trying to merge code in octavia that would use this11:15
rm_workI linked the change earlier... let me find it again11:15
gibisean-k-mooney: we do lookup of neutron port information anyhow due to qos so while it is an extra query it is not a totally new thing in the boot and migrate code path11:15
sean-k-mooneywell you could package the filter in octavia11:15
sean-k-mooneythen i think it would be fine11:16
sean-k-mooneybut really im not sure there is time to do this this cycle11:16
rm_workhttps://review.opendev.org/#/c/706153/11:16
rm_worksean-k-mooney: i was hoping an "optional filter" that was essentially just a few lines would be easier to digest, and we could merge it with less controversy T_T11:17
sean-k-mooneywell from a paperwork point of view that is not really allowed but being pragmatic maybe. this technically requires a spec11:17
gibirm_work: the problem is the new hint, if we merge the filter now, we cannot really remove it later when a better approach is made as the hint becomes an API11:17
rm_workit's fine, we are already running this filter internally (it's very simple to drop in, as you say) so it's not a big deal. I am trying to open up some of the features we're using to more folks by putting them upstream11:18
sean-k-mooneyrm_work: yep which is nice to see11:18
rm_workfor example: we have octavia working in a routed-network setup, and we have live-migrate working11:19
gibirm_work: and by being here and helping me understand the feature you actually made it a bit more likely that there will be upstream support for this11:19
rm_worksorry, the "we" in this case is actually two orgs, but I've worked for both of them, heh11:19
sean-k-mooneyrm_work: for what it's worth you could get the same effect by having an availability zone per ip segment11:19
gibirm_work: so you made a good step forward here. sorry that this is not as straightforward as it could be, but I think we are on a good track11:19
rm_work(I am working with jroll currently)11:20
rm_worksean-k-mooney: unfortunately that's not an option, heh11:20
sean-k-mooneythat is what i know people have done in the past11:20
openstackgerritLee Yarwood proposed openstack/nova master: libvirt: Remove native LUKS compat code  https://review.opendev.org/66912111:20
openstackgerritBrin Zhang proposed openstack/nova master: Introduce scope_types in os-volumes-attachments policy  https://review.opendev.org/70938811:20
sean-k-mooneyso there are a few paths forward.11:21
sean-k-mooney1.) experimental prefilter that is disabled by default which would look up the segment id of each port and add a member_of11:21
sean-k-mooney2.) post filter that would use a scheduler hint (packaged in octavia)11:21
sean-k-mooney3.) wait for full solution next cycle11:22
gibiI think I can give some help in 1.)11:22
sean-k-mooneyand start on the spec now to have agreement11:22
rm_workwell, we'd also like to use the same filter for nova live-migrate, and ironic11:22
rm_workso maybe 1 is ok, and we can continue to use the filter for now11:22
rm_workand swap over when this is ready upstream11:23
gibirm_work, sean-k-mooney: I will write a summary for the ML about this discussion to see what others think11:23
gibiI'm not jumping into writing a full spec yet11:24
sean-k-mooneyso the only thing i really care about here is ensuring that whatever we decide does not tie our hands next cycle, which is why i want it to be experimental11:24
gibirm_work: I would appreciate if you could review the currently proposed tempest tests https://review.opendev.org/#/c/665155/16/tempest/scenario/test_routed_provider_network.py11:24
rm_workok11:26
gibirm_work: thanks11:26
gibisean-k-mooney: sure, this is why I don't like the in-tree hint idea, as that would become an API11:26
gibisean-k-mooney: a pre-filter can be a much more fluid implementation, and later we can make neutron provide the aggregate ids in the resource_request11:27
sean-k-mooneygibi: ya i could live with the performance hit if it's disabled by default, meaning we have to do the api query for the port info in the prefilter11:27
sean-k-mooneygibi: yep and even the allocations in some cases11:28
sean-k-mooneywhen you create a port that is consuming an ip it should really have already created a placement allocation for that11:28
gibisean-k-mooney: agreed, this pre-filter is costly so it should be disabled by default11:29
sean-k-mooneyso we would want to pass the aggregate and the allocation unless you defer the ip assignment11:29
gibisean-k-mooney: agreed that allocating IPs needs to be handled as resources in placement, but that is really far in the future given our current amount of people working on this11:29
sean-k-mooneyif we put a concerted effort in i think it could be done next cycle but this would require a reasonable amount of work in neutron11:30
rm_worki thought neutron already DID all the work in placement11:31
rm_workthere's more?11:31
sean-k-mooneyit does not do it the way we want it to11:31
sean-k-mooneyit is reporting segments as placement aggregates11:31
sean-k-mooneybut it's not reporting inventories of ips via sharing aggregates11:32
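Since neutron already mirrors segments as placement aggregates, the scheduler side could then constrain allocation candidates with the placement API's `member_of` query parameter. A minimal sketch that only builds the request path (endpoint, auth, and microversion negotiation omitted; `member_of=in:` requires placement microversion 1.21 or later):

```python
from urllib.parse import urlencode

def allocation_candidates_query(resources, aggregate_uuids):
    """Build a GET /allocation_candidates query string.

    resources: mapping of resource class name -> amount.
    aggregate_uuids: aggregates the provider must belong to;
    member_of=in:<a>,<b> means "in any of these aggregates".
    """
    params = {
        "resources": ",".join(
            "%s:%d" % (rc, amount) for rc, amount in sorted(resources.items())),
        "member_of": "in:" + ",".join(sorted(aggregate_uuids)),
    }
    return "/allocation_candidates?" + urlencode(params)
```

The aggregate UUIDs here would come from the segment lookup; the sharing-aggregate IP inventories sean-k-mooney mentions are what neutron is not yet reporting.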
gibisean-k-mooney: at the moment this is not prioritized on my end so I won't commit to this as a spec for the next cycle. I will try to ease the immediate pain with a pre-filter11:32
*** jaosorior has joined #openstack-nova11:32
sean-k-mooneynor is it tracking segmentation ids per segmentation_type (vlan,vxlan,...) and mapping those to additional aggregates11:32
gibirm_work: there is always more work :)11:33
sean-k-mooneyyep11:33
sean-k-mooneyit's frustrating because i have wanted to solve this problem for longer than placement has existed11:34
sean-k-mooneygibi: did you come to intel shannon with jay after the briton mid cycle11:34
gibisean-k-mooney: nope, I wasn't there11:35
rm_workyeah that was kinda why i like the simplicity of the filter... it just... works? there's not really many complications11:35
rm_workbut yeah, it's not the absolute most optimal11:35
openstackgerritStephen Finucane proposed openstack/python-novaclient master: Don't print user_data for 'nova show'  https://review.opendev.org/70885011:35
rm_workthe question is, would we be able to get anything else in the forseeable future? lol11:36
rm_workbecause this filter was technically mergable like ... in newton11:36
sean-k-mooneyah ok well i pitched the idea of a json topology api for neutron that basically exposed a tree-based view of the resources and capabilities of a network backend. this was pre-placement and was a static view, but ya i have been wanting to fix this and not had time to for years.11:36
*** mkrai__ has quit IRC11:36
sean-k-mooneyrm_work: actually the prefilter will not just work, it will increase the likelihood of NoValidHost on a moderately full cloud11:37
sean-k-mooneyat least if you don't use ip_allocation defer11:38
sean-k-mooneyif you use the default ip_allocation=immediate11:38
rm_workyeah we use defer11:38
rm_workthis is not for nova to use for initial scheduling really11:39
sean-k-mooneythen when the neutron port is created it will get an ip and a segment and then we will only consider that segment11:39
sean-k-mooneyya with defer it will work, as we will schedule to any host and then only assign an ip when we bind it to the host11:39
sean-k-mooneyfrom that point on it will not migrate outside of that segment11:40
rm_workright, not intending to mess with the initial scheduling stuff11:40
rm_workit's for re-scheduling tasks11:40
rm_worklike what octavia or live-migrate does11:40
sean-k-mooneythe same would be true for nova-created ports actually, since we do that on the compute node11:41
rm_workwhen we're already locked to a segment due to an existing port11:41
sean-k-mooneyya11:41
sean-k-mooneythe only case that breaks is a precreated port where you have not set ip_allocation=defer11:42
rm_workerr no, that's exactly one of our cases11:43
rm_workoctavia creates a port, it gets an IP... then we need to boot nova VMs in that segment11:43
sean-k-mooneycan octavia use ip_allocation=defer in that case11:45
rm_workno11:45
sean-k-mooneyok then it will work until the cloud starts to get full11:46
rm_workthe whole point of us creating that port is to pre-allocate an IP up-front11:46
sean-k-mooneyor that aggregate11:46
rm_workyes, that's true11:46
*** brinzhang has joined #openstack-nova11:46
sean-k-mooneyright, if you preallocate the ip up front then that determines the segment, and with the prefilter we will not consider any other host11:46
sean-k-mooneywith all the pros and cons that brings11:47
rm_workyeah that is why this filter is fine for us, we always know the segment we want11:47
sean-k-mooneyfundamentally there is no way around that11:47
sean-k-mooneybut you don't know if there are enough compute resources in that segment to run the vm11:47
sean-k-mooneyso you will have to handle the NoValidHost case11:48
rm_workwe have no choice11:48
sean-k-mooneyand either choose a different segment or give up11:48
rm_workusually we are doing this on an object that has existed for a long time11:48
rm_workwe have a VIP port, and two VMs that serve HAProxy behind it11:48
*** martinkennelly has quit IRC11:48
rm_workthey are in a keepalived pair11:48
rm_workwhen one fails, we need to boot a new one in the same segment11:49
*** brinzhang_ has quit IRC11:49
rm_workthere is no choice11:49
sean-k-mooneyfor routed networks yes11:49
sean-k-mooneyalthough11:49
sean-k-mooneyonly the vip needs to stay the same11:49
sean-k-mooneysorry11:49
sean-k-mooneyin octavia the vip is assigned to the vm you are booting11:49
sean-k-mooneyrm_work: the vip is the public load balancer ip right?11:50
sean-k-mooneythe one that applications will use to connect to the services that are behind it11:51
*** ociuhandu has joined #openstack-nova11:51
rm_workyes11:51
rm_workthe VIP is static and is known to the user11:51
sean-k-mooneyyep11:52
rm_workand will always persist (it is unbound, and uses allowed-address-pairs)11:52
sean-k-mooneyya11:52
sean-k-mooneyso even in a routed network you could use a /32 route to make that available, right, even if the haproxy vm ip is not in the same subnet range11:53
rm_workso, i'm aware we will have problems if the aggregate fills up, but we have no choice :(11:53
rm_workthat'd be possible if we had a different network architecture, maybe11:53
*** mlycka has joined #openstack-nova11:53
rm_workas it is, we cannot11:53
sean-k-mooneywell bgp would be able to support that11:54
sean-k-mooneybut ya ok11:54
mlyckaHello. Is there a specific person responsible for adding new version templates and folders to nova-spec?11:54
sean-k-mooneynot really we have scripts that do it11:54
*** ociuhandu has quit IRC11:55
mlyckaRight right...when are you likely to run them for V?11:55
sean-k-mooneymlycka: but you did remind me i need to go update my patch to move implemented specs11:55
sean-k-mooneymlycka: usually not until after m311:55
rm_workyeah I would kill for working BGP :D11:55
mlyckam3?11:55
sean-k-mooneymilestone 3 so i think start of april11:56
mlyckaCrud11:56
sean-k-mooneywe just passed the spec freeze for U so we usually don't want to start reviewing new specs right away, however you could put up a review against the backlog folder and just update it when the new folder is created11:57
mlyckaI need to move a blueprint proposal to V from U, 'cause I managed to miss my window by being busy and I need to restore it11:57
sean-k-mooneyah ok well let me check something11:58
mlyckaSure, thanks.11:58
sean-k-mooneyok we don't have a script for the new folder, although code is welcome. anyone can do it, so you have two options: 1 propose a patch that creates the folder and copies the ussuri template and renames it. 2 restore your spec and propose it to the backlog12:00
sean-k-mooneyif you go with option 1 just rebase your current spec on top12:00
sean-k-mooneyit just probably won't get looked at for a little bit12:01
mlyckaYeah, that's fair enough. Is the template going to be the same for Victoria then?12:01
*** brinzhang_ has joined #openstack-nova12:02
*** nicolasbock has joined #openstack-nova12:03
mlyckaAlso, do I need to file a bug for that or is there an existing one or do I just propose a patch without a bug?12:03
*** TristanSullivan has quit IRC12:03
*** brinzhang_ has quit IRC12:03
*** brinzhang_ has joined #openstack-nova12:04
*** brinzhang has quit IRC12:05
openstackgerritMerged openstack/nova master: trivial: Remove FakeScheduler  https://review.opendev.org/70722412:07
openstackgerritMerged openstack/nova master: conf: Deprecate '[scheduler] driver'  https://review.opendev.org/70722512:07
openstackgerritMerged openstack/nova master: docs: Improve documentation on writing custom scheduler filters  https://review.opendev.org/70722612:08
openstackgerritMerged openstack/nova master: trivial: Use recognized extra specs in tests  https://review.opendev.org/70843512:08
gibisean-k-mooney, rm_work: mail is up on the ML http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012846.html12:08
rm_workok12:10
rm_workso basically i guess i just keep running this filter and my patches as-is downstream, and try to participate in getting this moving forward upstream so eventually we can drop it :D12:10
gibirm_work: good strategy12:11
*** ratailor has quit IRC12:14
sean-k-mooneygibi: thanks ill take a look at it in a while12:16
sean-k-mooneygibi: before that however im going to try and go through https://review.opendev.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/provider-config-file today and address some/all of your comments12:18
sean-k-mooneyassuming they are not really hard12:18
gibisean-k-mooney: cool. ping me if you need some clarification. honestly I lost the context on that so I dont remember if my comments were hard or not12:20
sean-k-mooneywell i have not looked at the code before so you know better than i do :) but so far they look ok. i think dustinc is not really around much at the moment/foreseeable future so im going to try and help move it along a bit12:21
*** maciejjozefczyk has quit IRC12:23
openstackgerritBalazs Gibizer proposed openstack/nova stable/rocky: Avoid circular reference during serialization  https://review.opendev.org/70979812:24
gibithis is a stable-only bugfix so it needs some non-stable core to look at ^^12:24
gibibauzas, stephenfin: ^^12:25
*** psachin has quit IRC12:25
sean-k-mooneywhy is jsonutils.dumps being used instead of to_primitive?12:29
openstackgerritMarek Lyčka proposed openstack/nova-specs master: Adds spec infrastructure for Victoria  https://review.opendev.org/71002312:29
*** ociuhandu has joined #openstack-nova12:30
sean-k-mooneyoh i guess you are trying to convert it to a dictionary12:30
sean-k-mooneyby round tripping the data through json12:30
gibisean-k-mooney: the legacy_spec is a dict form of a RequestSpec object12:31
sean-k-mooneyya so we cant just use obj_to_primitive12:31
gibibut it is a special dict form12:31
sean-k-mooneybecause that won't give us what we want12:31
gibiit would give us a different dict12:32
sean-k-mooneyyep so it wont work here12:32
sean-k-mooneywhy do we need to use legacy request specs in this case?12:32
sean-k-mooneyi thought we had already moved to the ovo form in rocky but i guess not fully12:33
*** udesale_ has joined #openstack-nova12:34
sean-k-mooneyoh it's because of the workaround for bug #152908412:36
openstackbug 1529084 in oslo.messaging "RPC fake driver should accept datetime items for data" [Undecided,Fix released] https://launchpad.net/bugs/1529084 - Assigned to Balazs Gibizer (balazs-gibizer)12:36
*** udesale has quit IRC12:36
sean-k-mooneythat introduced the serialisation and deserialisation but missed the fact that it won't recurse into the nested ovos, which is what you are fixing12:37
sean-k-mooneyanother fix would be to fix the rpc fake driver to accept datetimes in dicts and remove the jsonutils calls entirely12:39
gibisean-k-mooney: since stein we don't use the legacy dict but pass around the ovo, but that is an RPC change so it is not backportable12:39
gibisean-k-mooney: but yeah, if we can somehow remove the need for the json.loads(json.dumps(..)) stuff that would also help12:40
*** maciejjozefczyk has joined #openstack-nova12:40
*** ociuhandu has quit IRC12:40
sean-k-mooneywell the comment says it's done because the fake rpc driver does not support datetimes in dicts12:41
sean-k-mooneyso if we fix that then we could remove it, right12:41
sean-k-mooneyi assume there was a reason bauzas didn't do that originally12:41
sean-k-mooneyother than this was faster12:41
gibiI'm not sure what will happen if the dump and load are removed and the rpc then tries to serialize the dict again with ovos inside12:42
sean-k-mooneygibi: anyway i have not checked that setting default=... will work but your change looks reasonable to me12:42
gibisean-k-mooney: the solution is basically to change how to_primitive works during dumps so it triggers the ovo serialization instead of getting stuck in an infinite loop12:43
sean-k-mooneyya i was assuming "default=jsonutils.to_primitive" if not set12:44
gibiI also filed a bug to oslo.serialization https://bugs.launchpad.net/oslo.serialization/+bug/186467812:44
openstackLaunchpad bug 1864678 in oslo.serialization "jsonutils.to_primitive does not follow the protocol required by json.dump" [Medium,Triaged]12:44
sean-k-mooneyso you are just passing an extra argument to it with functools.partial12:44
gibisean-k-mooney: yeas, that is the default by default (sic)12:44
gibisean-k-mooney: yes12:44
sean-k-mooneyyep so that all makes sense to me. it's not the cleanest thing long term but it's the minimal backportable change, which is more important12:45
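The mechanism under discussion, stripped to the standard library: `json.dumps(..., default=...)` calls the supplied function for any object it cannot serialize natively, which is how passing `default=jsonutils.to_primitive` lets the datetime-stripping round-trip cope with nested non-primitive values. A self-contained sketch (the `to_primitive` here is a minimal stand-in, not oslo's):

```python
import datetime
import json

def to_primitive(value):
    # Minimal stand-in for oslo's jsonutils.to_primitive: invoked by
    # json.dumps for any object it cannot serialize on its own.
    if isinstance(value, datetime.datetime):
        return value.isoformat()
    raise TypeError("unserializable: %r" % (value,))

# A toy "legacy spec" dict containing a datetime, like the legacy
# RequestSpec dict that trips up the fake RPC driver.
legacy_spec = {"created_at": datetime.datetime(2020, 2, 26, 12, 0),
               "num_instances": 1}

# The workaround round-trips through JSON to flatten everything to
# primitives; without default= this dumps() call raises TypeError.
primitive_spec = json.loads(json.dumps(legacy_spec, default=to_primitive))
```

In the actual fix the default handler is the recursion hook, so nested ovos get serialized instead of sending dumps into the infinite loop gibi describes.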
gibiI'm not sure that the fix of https://bugs.launchpad.net/oslo.messaging/+bug/1529084 can be backported to pike12:46
openstackLaunchpad bug 1529084 in oslo.messaging "RPC fake driver should accept datetime items for data" [Undecided,Fix released] - Assigned to Balazs Gibizer (balazs-gibizer)12:46
gibiand then we can bump the oslo.messaging version on the stable branches12:46
gibithen remove the loads(dumps(..)) part from nova, and backport it to pike12:47
sean-k-mooneyyou mean to rocky12:47
gibisean-k-mooney: rocky is the first stable that is affected12:47
gibiI have to backport the whole thing to pike12:47
sean-k-mooneyah you have other cherry picks12:47
gibias we have a failure downstream12:47
gibiin pike12:47
sean-k-mooneygotch ya12:47
gibiso my target is to fix pike, and sure I will fix every broken stable along the way, but I want to do this with minimal change12:48
gibia new oslo.messaging version would be a lot harder business12:48
sean-k-mooneynot a core but i +1'd it anyway since it makes sense after our discussion. stephenfin: it's pretty quick if you can take a look12:48
gibisean-k-mooney: thanks!12:48
sean-k-mooneyhehe i somehow don't think the intel pmem ci is going to work on rocky12:53
sean-k-mooneythey might want to fix that at some point12:53
*** brinzhang has joined #openstack-nova12:54
brinzhanggmann: is that true of bug 1864776?12:55
openstackbug 1864776 in OpenStack Compute (nova) "os-volumes-attachments API policy is allowed for everyone even policy defaults is admin_or_owner" [Undecided,In progress] https://launchpad.net/bugs/1864776 - Assigned to Brin Zhang (zhangbailin)12:55
brinzhanggmann: I pushed the fix patch https://review.opendev.org/#/c/709955/, and after rebasing I tested this patch https://review.opendev.org/#/c/709929/1/nova/tests/unit/policies/test_volumes.py; it also needs the everyone contexts to authorize the admin_or_owner role13:01
*** jangutter has quit IRC13:01
*** jangutter_ has joined #openstack-nova13:01
brinzhanggmann: I am not sure this bug is valid; when you are free, please review, thanks13:02
*** psachin has joined #openstack-nova13:05
*** ociuhandu has joined #openstack-nova13:06
*** amoralej is now known as amoralej|lunch13:14
*** nweinber has joined #openstack-nova13:27
*** brinzhang has quit IRC13:27
*** brinzhang has joined #openstack-nova13:29
*** brinzhang has quit IRC13:30
*** brinzhang__ has joined #openstack-nova13:30
*** brinzhang has joined #openstack-nova13:31
*** brinzhang_ has quit IRC13:32
*** brinzhang has quit IRC13:32
*** brinzhang has joined #openstack-nova13:33
*** brinzhang has quit IRC13:34
*** brinzhang has joined #openstack-nova13:35
*** ltomasbo has joined #openstack-nova13:36
*** ltomasbo has left #openstack-nova13:36
*** brinzhang has quit IRC13:36
*** brinzhang has joined #openstack-nova13:37
*** nicolasbock has quit IRC13:40
*** nicolasbock has joined #openstack-nova13:40
*** tbachman has quit IRC13:42
*** brinzhang has quit IRC13:46
*** nweinber has quit IRC13:46
*** nweinber has joined #openstack-nova13:47
openstackgerritStephen Finucane proposed openstack/python-novaclient master: Don't print user_data for 'nova show'  https://review.opendev.org/70885013:48
openstackgerritStephen Finucane proposed openstack/nova master: api: Add framework for extra spec validation  https://review.opendev.org/70464313:50
openstackgerritStephen Finucane proposed openstack/nova master: api: Add microversion 2.82, extra spec validation  https://review.opendev.org/70843613:50
openstackgerritStephen Finucane proposed openstack/nova master: docs: Add documentation for flavor extra specs  https://review.opendev.org/71003713:50
*** brinzhang has joined #openstack-nova13:52
*** ileixe has quit IRC13:52
*** ileixe has joined #openstack-nova13:53
*** Liang__ has joined #openstack-nova13:53
*** Liang__ is now known as LiangFang13:54
openstackgerritBalazs Gibizer proposed openstack/nova master: WIP: Hey let's support routed networks y'all!  https://review.opendev.org/65688513:54
brinzhang__stephenfin: do you have time to check this bug? It is blocking my next work  https://bugs.launchpad.net/nova/+bug/186477613:55
openstackLaunchpad bug 1864776 in OpenStack Compute (nova) "os-volumes-attachments API policy is allowed for everyone even policy defaults is admin_or_owner" [Undecided,In progress] - Assigned to Brin Zhang (zhangbailin)13:55
brinzhang__that's the default policy refresh patch13:56
brinzhang__stephenfin: if it's valid, I will rebase the others on the fix patch; otherwise I will abandon this fix and push a new patch to continue13:57
*** ileixe has quit IRC13:58
*** amoralej|lunch is now known as amoralej13:59
openstackgerritLee Yarwood proposed openstack/nova master: WIP libvirt: Reintroduce volume based LM tests  https://review.opendev.org/53610514:00
*** mkrai has joined #openstack-nova14:00
*** canori01 has joined #openstack-nova14:04
*** slaweq has quit IRC14:05
canori01hey guys, is it possible to update the root disk for an in-use flavor in the database (such that new instances using that flavor pick up the change) or would that break things?14:05
*** zhanglong has quit IRC14:06
*** zhanglong has joined #openstack-nova14:07
*** dasp has quit IRC14:08
*** mkrai has quit IRC14:08
*** jmlowe has quit IRC14:09
*** slaweq has joined #openstack-nova14:09
*** jmlowe has joined #openstack-nova14:09
brinzhang__canori01: What would you like to change about the root disk?14:09
brinzhang__canori01: It is not recommended to change the configuration of the root disk directly in the db.14:11
brinzhang__It may cause strange behavior due to an incomplete modification.14:12
*** zhanglong has quit IRC14:14
*** zhanglong has joined #openstack-nova14:16
canori01brinzhang: Only thing I want to change is the root disk size  from 0 to something like 20 or 50G14:21
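The safe alternative to editing the flavor row in the database is to replace the flavor. A hedged CLI sketch (flavor names and sizes here are placeholders, not from the conversation):

```shell
# Create a replacement flavor with the desired 20G root disk
# (name, RAM and vCPU values are hypothetical).
openstack flavor create --ram 4096 --vcpus 2 --disk 20 m1.example.disk20

# Retire the old zero-disk flavor; running instances are unaffected
# because nova embeds a copy of the flavor in each instance record.
openstack flavor delete m1.example.disk0
```

Flavors are effectively immutable in nova, which is why the create-and-retire pattern is preferred over direct DB edits.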
*** jmlowe has quit IRC14:29
*** jmlowe has joined #openstack-nova14:32
openstackgerritStephen Finucane proposed openstack/nova master: Unplug VIFs as part of cleanup of networks  https://review.opendev.org/66338214:33
openstackgerritStephen Finucane proposed openstack/nova master: Fix incorrect vm and task state after build failure race  https://review.opendev.org/68938814:33
*** links has quit IRC14:36
*** ociuhandu has quit IRC14:37
*** zhanglong has quit IRC14:38
*** sean-k-mooney has quit IRC14:38
*** iurygregory has quit IRC14:49
openstackgerritLee Yarwood proposed openstack/nova master: api: Introduce microverion 2.82 allowing boot from volume rescue  https://review.opendev.org/70143014:50
openstackgerritLee Yarwood proposed openstack/nova master: compute: Extract _get_bdm_image_metadata into nova.utils  https://review.opendev.org/70521214:50
openstackgerritLee Yarwood proposed openstack/nova master: libvirt: Support boot from volume stable device instance rescue  https://review.opendev.org/70143114:50
*** mlycka has quit IRC14:55
openstackgerritLee Yarwood proposed openstack/nova master: DNM - Test stable device rescue tests with BFV instances  https://review.opendev.org/71005015:01
*** psachin has quit IRC15:05
*** tbachman has joined #openstack-nova15:07
*** ociuhandu has joined #openstack-nova15:11
*** sean-k-mooney has joined #openstack-nova15:14
*** ociuhandu has quit IRC15:16
*** mugsie has quit IRC15:20
*** mugsie has joined #openstack-nova15:22
*** brinzhang has quit IRC15:22
*** brinzhang has joined #openstack-nova15:23
*** iurygregory has joined #openstack-nova15:34
*** ociuhandu has joined #openstack-nova15:41
*** psachin has joined #openstack-nova15:46
*** mkrai has joined #openstack-nova15:47
*** dasp has joined #openstack-nova15:52
*** jawad_axd has quit IRC15:56
*** psachin has quit IRC15:57
*** iurygregory has quit IRC16:05
efriedsean-k-mooney: Re token expiring on that cyborg job: is the conf set up with a service token?16:06
sean-k-mooneyi am not sure but the confs are in the job logs16:12
sean-k-mooneyit should not fail in either case16:12
sean-k-mooneyonce it starts happening you have to restart all the nova services to fix it16:12
*** eharney has quit IRC16:19
*** maciejjozefczyk has quit IRC16:21
sean-k-mooneyefried: so no https://48ef08cde8cc22034a1d-8011a2266d21f0c09baf1c83d6d5002e.ssl.cf5.rackcdn.com/709641/5/check/cyborg-multinode-tempest-full/e4d260f/controller/logs/etc/nova/nova_conf.txt16:22
sean-k-mooneythe service user section https://docs.openstack.org/nova/latest/configuration/config.html#service-user16:23
sean-k-mooneyis not configured but it should not need to be confitured for it to work16:24
*** udesale_ has quit IRC16:24
sean-k-mooneyit might allow you to mask the issue16:24
efriedsean-k-mooney: so the failure is happening from a later operation, not from the middle of a long-running operation?16:27
*** brinzhang has quit IRC16:35
*** brinzhang has joined #openstack-nova16:36
*** psachin has joined #openstack-nova16:36
*** brinzhang has quit IRC16:37
*** brinzhang has joined #openstack-nova16:38
*** brinzhang has quit IRC16:38
gmannbrinzhang__: sure, sorry for late response. I will review your patches today16:46
*** xek__ has joined #openstack-nova16:48
*** tesseract has quit IRC16:50
efriedsean-k-mooney: can I get your nod on https://review.opendev.org/#/c/709902/ (rocky EM patch) for os-vif please?16:50
*** xek_ has quit IRC16:51
*** maysams has quit IRC16:52
*** gyee has joined #openstack-nova16:52
*** eharney has joined #openstack-nova16:55
*** jaosorior has quit IRC16:57
*** mkrai has quit IRC17:02
*** ociuhandu has quit IRC17:03
efriedlyarwood: https://review.opendev.org/#/c/709902/1/deliverables/rocky/nova.yaml lgty?17:06
lyarwoodefried: ack yes, elod ^?17:12
*** psachin has quit IRC17:12
*** damien_r has quit IRC17:13
*** TristanSullivan has joined #openstack-nova17:14
*** ociuhandu has joined #openstack-nova17:30
sean-k-mooneyefried: it can start failing with a 401 in the middle of an operation and once it has, it will continue to fail for separate operations17:33
sean-k-mooneyand yes ill look at the em patch now17:33
*** evrardjp has quit IRC17:34
*** evrardjp has joined #openstack-nova17:35
sean-k-mooneythere is one pending bugfix i want to backport to all affected branches in os-vif, but rocky predates the issue so yes that commit looks correct17:35
sean-k-mooneyill +1 the review17:35
sean-k-mooneyefried: basically the way i first hit the cyborg issue was i booted a vm, did a bunch of lifecycle operations on it and then tried to delete it, and that failed because the token was rejected.17:38
sean-k-mooneywhen i tried to do the operation (deleting the arq) myself it worked17:38
sean-k-mooneyif i listed device profiles myself it also worked, but when i tried to boot another vm after that point it failed because the nova api was not able to retrieve the device profile info17:39
sean-k-mooneyso if i use osc to query cyborg directly everything is fine. if i use osc to boot a vm and nova tries to query cyborg on my behalf it was failing, but only after the services had been running for a while, like an hour or so17:40
*** vishalmanchanda has quit IRC17:47
*** jangutter_ has quit IRC17:47
*** jangutter has joined #openstack-nova17:48
*** jangutter has quit IRC17:50
*** tbachman has quit IRC17:51
*** ociuhandu has quit IRC18:05
*** ociuhandu has joined #openstack-nova18:05
*** ociuhandu has quit IRC18:11
*** lucidguy has joined #openstack-nova18:23
lucidguyAnyone recall me asking for assistance with >1tb ram instances? I FIGURED IT OUT!!! only took days.18:24
openstackgerritLee Yarwood proposed openstack/nova master: libvirt: Provide the backing file format when creating qcow2 disks  https://review.opendev.org/70874518:25
lyarwoodlucidguy: what was it?18:27
openstackgerritLee Yarwood proposed openstack/nova master: libvirt: Provide the backing file format when creating qcow2 disks  https://review.opendev.org/70874518:27
*** nweinber_ has joined #openstack-nova18:29
*** nweinber has quit IRC18:31
*** priteau has quit IRC18:32
sean-k-mooneyupdating other people's code when you have never reviewed it and are just following gerrit comments is hard18:32
*** amoralej is now known as amoralej|off18:33
sean-k-mooneythe changes aren't hard, but the mental load to make sure what you are doing is correct is way higher than when it's your code18:33
*** igordc has joined #openstack-nova18:37
melwittlucidguy: don't leave us hanging18:39
*** maciejjozefczyk has joined #openstack-nova18:42
*** mriedem has joined #openstack-nova18:42
*** igordc has quit IRC18:43
sean-k-mooneylucidguy: was it the alignment of jupiter18:44
*** ociuhandu has joined #openstack-nova18:46
*** N3l1x has joined #openstack-nova18:48
*** ociuhandu has quit IRC18:51
*** maciejjozefczyk has quit IRC18:57
*** dpawlik has quit IRC19:01
openstackgerritMerged openstack/python-novaclient master: Don't print user_data for 'nova show'  https://review.opendev.org/70885019:15
*** dpawlik has joined #openstack-nova19:18
*** TristanSullivan has quit IRC19:31
*** ociuhandu has joined #openstack-nova19:34
*** nweinber__ has joined #openstack-nova19:35
*** nweinber_ has quit IRC19:38
*** eharney has quit IRC19:38
*** ociuhandu has quit IRC19:40
*** TristanSullivan has joined #openstack-nova19:49
lucidguysean-k-mooney?19:51
openstackgerritLee Yarwood proposed openstack/nova master: DNM - Test stable device rescue tests with BFV instances  https://review.opendev.org/71005019:51
*** eharney has joined #openstack-nova19:51
lucidguysean-k-mooney:  By default instances are launched with a 40-bit cpu memory address space, which does not allow for >1tb of memory in an instance.  qemu on Ubuntu 18.04 allows choosing a machine configuration that maps the instance's address space to the local HV's, which is 46 bits.  At the end of the day all I had to do was upgrade to 18.04 (Bionic) and add one line to nova.conf on the HV.19:53
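lucidguy doesn't say which line he added, so this is an assumption rather than his confirmed fix: one plausible single-line nova.conf change that gives guests the host's physical address width (instead of QEMU's default 40 bits) is host CPU passthrough in the `[libvirt]` section:

```ini
[libvirt]
cpu_mode = host-passthrough
```

With passthrough the guest CPU model mirrors the host, including its 46-bit physical address space, which is what >1TB guests need.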
*** tbachman has joined #openstack-nova20:05
openstackgerritsean mooney proposed openstack/nova master: Provider Config File: YAML file loading and schema validation  https://review.opendev.org/67334120:08
*** ociuhandu has joined #openstack-nova20:08
*** ociuhandu has quit IRC20:12
sean-k-mooneylucidguy: ah yes that makes sense20:20
sean-k-mooneyintel cpus only recently went to a 48-bit address space20:20
sean-k-mooneyso im not surprised the same limitation of a reduced address space was present for vms20:21
sean-k-mooneylucidguy: did you fix it by changing the machine type to q35?20:21
sean-k-mooneylucidguy: or did you add something else to the nova.conf20:22
sean-k-mooneyfor example the cpu_model?20:22
sean-k-mooneyalso a stack dump is not an appropriate way of telling the end user that you need to change something like that20:23
sean-k-mooneyi hope they have addressed that in a future version of qemu/kvm20:23
sean-k-mooneygibi: efried: i tried to keep my changes in https://review.opendev.org/#/c/673341/ as minimal as possible while addressing the comments.20:24
sean-k-mooneyi will try to get through the other patches in the series tomorrow. there were some race conditions in the test code that took me a while to figure out20:25
sean-k-mooneythey are fixed now20:25
*** ociuhandu has joined #openstack-nova20:28
*** kozhukalov has quit IRC20:33
*** kozhukalov has joined #openstack-nova20:36
*** ileixe has joined #openstack-nova20:36
*** sean-k-mooney has quit IRC20:41
*** tbachman has quit IRC20:42
*** mgariepy has quit IRC20:45
*** tbachman has joined #openstack-nova20:56
*** nweinber__ has quit IRC20:59
*** eharney has quit IRC21:00
*** larainema has quit IRC21:03
*** rcernin has quit IRC21:21
*** ralonsoh has quit IRC21:22
*** bnemec has quit IRC21:23
*** priteau has joined #openstack-nova21:28
*** kozhukalov has quit IRC21:29
*** slaweq has quit IRC21:32
*** ociuhandu has quit IRC21:36
*** ociuhandu has joined #openstack-nova21:37
*** ociuhandu has quit IRC21:41
*** ociuhandu has joined #openstack-nova21:50
*** ociuhandu has quit IRC22:02
*** ociuhandu has joined #openstack-nova22:02
*** ociuhandu has quit IRC22:07
*** tbachman has quit IRC22:11
*** priteau has quit IRC22:11
*** mriedem has quit IRC22:14
*** mriedem has joined #openstack-nova22:15
*** mriedem has left #openstack-nova22:19
*** tbachman has joined #openstack-nova22:27
*** eharney has joined #openstack-nova22:31
*** tbachman has quit IRC22:34
*** ociuhandu has joined #openstack-nova22:40
*** nweinber__ has joined #openstack-nova22:45
*** ociuhandu has quit IRC22:45
*** nweinber__ has quit IRC22:47
*** rcernin has joined #openstack-nova22:49
*** tkajinam has joined #openstack-nova22:53
*** rchurch has quit IRC22:59
*** rchurch has joined #openstack-nova23:00
*** N3l1x has quit IRC23:02
*** brinzhang__ has quit IRC23:05
efriedsean-k-mooney: Earlier, we were toying with cutting cyborg over to using sdk instead of ksa. We held off because there were a couple more quirks to be ironed out. It's possible sdk would automatically refresh the token for us -- mordred?23:11
efriedsean-k-mooney: It's been a minute, but I think on the code side you just have to s/get_ksa_adapter/get_sdk_adapter/ to make the switch. If so, perhaps we could stuff that change in between the series and your tester somehow and see if it fixes the problem.23:15
efriedIf so, then unit/functional tests would just need small tweaks to make it go.23:16
*** bnemec has joined #openstack-nova23:18
*** tbachman has joined #openstack-nova23:31
mordredefried: sdk in general should refresh tokens23:36
mordredefried: however, it's possible there are specifics I should page in23:37
mordredalso - we just landed cyborg support in sdk23:37
*** xek__ has quit IRC23:37
mordredefried: I'm EOD today - but I'd be happy to interact with folks on it tomorrow to see if we need to do anything23:38
*** TristanSullivan has quit IRC23:44

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!