Tuesday, 2018-03-27

<openstackgerrit> Matt Riedemann proposed openstack/nova master: Create/lookup API services in cell0 on WSGI app startup  https://review.openstack.org/556670  00:55
<openstackgerrit> melanie witt proposed openstack/nova master: rbd: use MAX_AVAIL stat for reporting bytes available  https://review.openstack.org/556692  01:28
<alex_xu_> efried: I replied again at https://review.openstack.org/#/c/554305/2 -- a long version; hope I explained it clearly  01:33
<openstackgerrit> Yikun Jiang (Kero) proposed openstack/nova-specs master: Complex (Anti)-Affinity Policies  https://review.openstack.org/546925  01:35
<phuongnh> join #openstack-dib  02:09
<openstackgerrit> jichenjc proposed openstack/nova master: Avoid live migrate to same host  https://review.openstack.org/542689  02:28
<openstackgerrit> Artom Lifshitz proposed openstack/nova-specs master: NUMA-aware live migration  https://review.openstack.org/552722  02:45
<openstackgerrit> Artom Lifshitz proposed openstack/nova-specs master: NUMA-aware live migration  https://review.openstack.org/552722  03:23
<openstackgerrit> Yikun Jiang (Kero) proposed openstack/nova master: Add host to API and Conductor  https://review.openstack.org/556513  04:10
<openstackgerrit> Yikun Jiang (Kero) proposed openstack/nova master: [WIP] Record the host info in EventReporter  https://review.openstack.org/556746  04:11
<openstackgerrit> Jake Yip proposed openstack/nova master: Add --until_deleted_at to db archive_deleted_rows  https://review.openstack.org/556751  04:29
<openstackgerrit> Tetsuro Nakamura proposed openstack/nova master: Consider nested RPs in get_all_with_shared  https://review.openstack.org/556450  05:10
<openstackgerrit> Tetsuro Nakamura proposed openstack/nova master: support shared and nested allocation candidates  https://review.openstack.org/556514  05:10
<openstackgerrit> Naichuan Sun proposed openstack/nova master: xenapi(N-R-P): Get vgpu info from `allocations`  https://review.openstack.org/521717  06:23
<openstackgerrit> Roman Dobosz proposed openstack/nova master: Remove server group sched filter support caching  https://review.openstack.org/529200  06:39
<openstackgerrit> Roman Dobosz proposed openstack/nova master: get instance group's aggregate associations  https://review.openstack.org/531243  06:39
<openstackgerrit> Roman Dobosz proposed openstack/nova master: Support aggregate affinity filters  https://review.openstack.org/529201  06:39
<openstackgerrit> Roman Dobosz proposed openstack/nova master: Add nodes to group hosts to be checked against aggregation  https://review.openstack.org/556761  06:39
<openstackgerrit> Roman Dobosz proposed openstack/nova master: Added weight for aggregate soft (anti) affinity.  https://review.openstack.org/556762  06:39
<openstackgerrit> sahid proposed openstack/nova-specs master: update: isolate guests emulthreads on CONF.cpu_shared_set  https://review.openstack.org/511188  06:42
<openstackgerrit> jichenjc proposed openstack/nova master: z/VM Driver: Initial change set of z/VM driver  https://review.openstack.org/523387  06:50
<openstackgerrit> jichenjc proposed openstack/nova master: z/VM Driver: Spawn and destroy function of z/VM driver  https://review.openstack.org/527658  06:50
<openstackgerrit> jichenjc proposed openstack/nova master: z/VM Driver: add snapshot function  https://review.openstack.org/534240  06:50
<openstackgerrit> jichenjc proposed openstack/nova master: z/VM Driver: add power actions  https://review.openstack.org/543340  06:50
<openstackgerrit> jichenjc proposed openstack/nova master: z/VM Driver: add get console output  https://review.openstack.org/543344  06:50
<yikun> jichen: https://review.openstack.org/#/c/556513/ -- some replies on the patch; take a look when you have time. :) thanks  06:59
<openstackgerrit> Kashyap Chamarthy proposed openstack/nova master: libvirt: Allow to specify granular CPU feature flags  https://review.openstack.org/534384  07:05
<jichen> yikun: yes, I am looking at that patch now; thanks for the notification  07:10
<openstackgerrit> OpenStack Proposal Bot proposed openstack/nova master: Imported Translations from Zanata  https://review.openstack.org/548772  07:12
<yikun> jichen: OK, thanks  07:12
<kashyap> johnthetubaguy: Morning, when you're around, I'd appreciate it if you could take a gander: https://review.openstack.org/#/c/534384/  07:17
<kashyap> johnthetubaguy: Tests pass (locally; Zuul has yet to give its ACK), backport concerns addressed, release note in place.  07:18
<openstackgerrit> Zhenyu Zheng proposed openstack/nova master: Noauth should also use request_id from compute_req_id.py  https://review.openstack.org/555266  07:18
<openstackgerrit> jichenjc proposed openstack/nova master: Avoid live migrate to same host  https://review.openstack.org/542689  07:25
<openstackgerrit> Takashi NATSUME proposed openstack/nova-specs master: Change a validation in creating a server group  https://review.openstack.org/546484  07:26
<openstackgerrit> Chris Dent proposed openstack/nova master: [placement] Filter allocation candidates by forbidden traits in db  https://review.openstack.org/556660  07:41
<openstackgerrit> Tetsuro Nakamura proposed openstack/nova master: Consider nested RPs in get_all_with_shared  https://review.openstack.org/556450  07:44
<openstackgerrit> Tetsuro Nakamura proposed openstack/nova master: Support shared and nested allocation candidates  https://review.openstack.org/556514  07:44
<bauzas> good morning Stackers  07:47
<bauzas> remember, today is a specs review day  07:47
<openstackgerrit> Konstantinos Samaras-Tsakiris proposed openstack/nova master: Add `hide_hypervisor_id` flavor extra_spec  https://review.openstack.org/555861  07:53
<openstackgerrit> Naichuan Sun proposed openstack/nova master: (WIP) xenapi(N-R-P): Add API to support compute node resource provider update and create  https://review.openstack.org/521041  07:56
<openstackgerrit> Zhenyu Zheng proposed openstack/nova master: Noauth should also use request_id from compute_req_id.py  https://review.openstack.org/555266  07:57
<openstackgerrit> Naichuan Sun proposed openstack/nova master: (WIP) xenapi(N-R-P): Add API to support compute node resource provider update and create  https://review.openstack.org/521041  07:58
<kholkina> jichen: hi! could you please take a look at https://review.openstack.org/#/c/547964/  08:01
<jichen> kholkina: ok, right now  08:02
<bauzas> jianghuaw_, naichuans: around?  08:03
<openstackgerrit> Takashi NATSUME proposed openstack/nova master: api-ref: Parameter verification for servers.inc (1/3)  https://review.openstack.org/528201  08:07
<openstackgerrit> jichenjc proposed openstack/nova master: z/VM Driver: Initial change set of z/VM driver  https://review.openstack.org/523387  08:14
<openstackgerrit> jichenjc proposed openstack/nova master: z/VM Driver: Spawn and destroy function of z/VM driver  https://review.openstack.org/527658  08:14
<openstackgerrit> jichenjc proposed openstack/nova master: z/VM Driver: add snapshot function  https://review.openstack.org/534240  08:14
<openstackgerrit> jichenjc proposed openstack/nova master: z/VM Driver: add power actions  https://review.openstack.org/543340  08:14
<openstackgerrit> jichenjc proposed openstack/nova master: z/VM Driver: add get console output  https://review.openstack.org/543344  08:14
<jichen> hi sahid, can you take a look at https://review.openstack.org/#/c/523387 and see whether it's what you want? thanks  08:15
<sahid> jichen: hello, yes, thanks -- that looks like what I had in my head; I will check it more deeply soon  08:24
<jichen> sahid: thanks for the thorough review; I'd appreciate your further comments  08:25
<sahid> jichen: i made a simple request  08:27
<sahid> why didn't you move the zVMconnectorRequestHandler code into Hypervisor?  08:27
<gibi> bauzas: I read your discussion about matching the specific NUMA node selected by placement to the one that will be selected by the virt driver  08:29
<gibi> bauzas: and it sounds very similar to what I have with the network RP selected by placement vs. the network selected by the neutron agent at the physical level  08:30
<jianghuaw_> bauzas, hi.  08:31
<jianghuaw_> bauzas, what's up?  08:33
<gibi> bauzas: in the neutron case we are thinking about communicating the RP selection to neutron during port binding, to ensure neutron has the necessary information  08:33
<bauzas> gibi: for the moment, we agreed on not having a problem  08:33
<bauzas> gibi: because the NUMA filter would get the same resources  08:34
<bauzas> jianghuaw_: just a question about vGPUs  08:35
<bauzas> jianghuaw_: given each pGPU has multiple types, and each type has a specific vGPU count, I think using nested RPs for us would be something like:  08:36
<openstackgerrit> Yikun Jiang (Kero) proposed openstack/nova master: Record the host info in EventReporter  https://review.openstack.org/556746  08:36
<bauzas> jianghuaw_: root RP (compute) -> child RP (pGPU) -> child RP (GPU type)  08:36
<bauzas> jianghuaw_: WDYF?  08:36
<jianghuaw_> I think we still have to restrict each pGPU to only support one vGPU type.  08:37
<openstackgerrit> Silvan Kaiser proposed openstack/nova master: Exec systemd-run with privileges in Quobyte driver  https://review.openstack.org/554195  08:37
<gibi> bauzas: OK, then good for you  08:37
<jianghuaw_> so root RP (compute) -> child RP (pGPU) <vgpu inventory>  08:37
<bauzas> jianghuaw_: if so, we should be asking operators to say which type to use for each PCI device, right?  08:38
<bauzas> if we want to support multiple types  08:38
<jianghuaw_> I guess yes.  08:38
<bauzas> and also, I'm not sure Intel has the same constraint as nVidia, where only one type is possible per pGPU  08:38
<jianghuaw_> For xen, it's root RP (compute) -> child RP (pGPU group) <vgpu inventory>  08:39
<bauzas> but each group only has one type, right?  08:39
<bauzas> if so, how do you say which type for each group?  08:39
<jianghuaw_> all of the pGPUs of the same type will be put into the same group.  08:41
<bauzas> not sure I understand  08:41
<bauzas> each physical device has multiple possible types  08:41
<jianghuaw_> so it still depends on the config option to restrict each pGPU group to only one enabled type.  08:41
<jianghuaw_> But if there are multiple types of pGPUs in a single compute, it can support different types.  08:42
<bauzas> say I'm doing enabled_vgpu_types=nvidia-1,nvidia-2  08:43
<bauzas> (or the names you use in Xen)  08:43
<bauzas> if I have 2 pGPUs, each one supporting both types, what will Xen have?  08:43
<bauzas> two pGPU groups?  08:43
<jianghuaw_> Also we can ask operators to customize the pGPU groups. Then we can put some pGPUs into group1 and the others into group2.  08:43
<jianghuaw_> and each can enable different vGPU types.  08:43
<jianghuaw_> switching to a meeting.  08:45
<jichen> sahid: sorry for the delay; I replied in the patch and saw your question here  08:47
<bauzas> jianghuaw_: mmmm, not sure I like that  08:47
<kashyap> melwitt: When you're back: the ListOpt() trick works -- https://review.openstack.org/#/c/534384/  08:55
<openstackgerrit> Theodoros Tsioutsias proposed openstack/nova-specs master: Add PENDING vm state  https://review.openstack.org/554212  09:04
<openstackgerrit> Sylvain Bauza proposed openstack/nova-specs master: Proposes NUMA topology with RPs  https://review.openstack.org/552924  09:10
<openstackgerrit> Konstantinos Samaras-Tsakiris proposed openstack/nova master: Add `hide_hypervisor_id` flavor extra_spec  https://review.openstack.org/555861  09:18
<ktibi> Hi, how can I disable the compatibility check for live migration? I have two CPU models, Nehalem & Nehalem-IBRS, and migration fails :/  09:24
<openstackgerrit> Tetiana Lashchova proposed openstack/nova-specs master: Allow modification of user-data via the server update  https://review.openstack.org/547964  09:36
<jianghuaw_> bauzas, back from a call. I understood your concern. The problem is that we can't handle dynamically changing vGPU capacities.  09:41
<openstackgerrit> Bhagyashri Shewale proposed openstack/nova-specs master: Disallow rotation parameter 0 for 'createBackup' API  https://review.openstack.org/511825  09:41
<jianghuaw_> I mean the available vGPUs for one type will be impacted by the other types belonging to the same pGPU.  09:41
<jianghuaw_> that's why we have to keep the restriction that each pGPU (or pGPU group) can only expose one vGPU type.  09:42
<kaisers1> mikal: I'm not sure which direction to go in https://review.openstack.org/#/c/554195/; could you please review regarding Stephen's comments?  09:42
<bauzas> jianghuaw_: sure, but I don't think it's a problem  09:43
<bauzas> jianghuaw_: at least for libvirt, what I know is that if I provide multiple inventories, then when I create the first mdev, it'll automatically update the inventories of the other types to total=0  09:44
<jianghuaw_> bauzas, will that cause conflicts?  09:45
<jianghuaw_> in the case where multiple vGPUs have been allocated from multiple inventories  09:46
<jianghuaw_> Before creating the first mdev, all inventories will have >0 vGPUs available. Right?  09:47
<jianghuaw_> If yes, it's possible to allocate vGPUs from different inventories.  09:48
<jianghuaw_> Then the first mdev creation will result in the other inventories' totals being 0; the allocation for the other vGPUs will fail, right?  09:49
<openstackgerrit> Matthew Booth proposed openstack/nova-specs master: Add serial numbers for local disks  https://review.openstack.org/556565  09:50
<openstackgerrit> Silvan Kaiser proposed openstack/nova master: Exec systemd-run with privileges in Quobyte driver  https://review.openstack.org/554195  09:52
<stephenfin> kashyap: This of any interest? Abandoning if not: https://review.openstack.org/#/c/348394/  09:55
* kashyap clicks  09:55
<kashyap> stephenfin: That's another piecemeal way of fixing the current mess of handling different OVMF binaries  09:56
<kashyap> stephenfin: Can be abandoned, IMHO. And we should handle it globally for all distributions  09:57
* kashyap proposed a spec for that  09:57
<kashyap> https://review.openstack.org/#/c/506720/  09:57
<kashyap> stephenfin: Which is in turn a bit predicated on this RFC I started for libvirt and QEMU: [RFC] Defining firmware (OVMF, et al) metadata format & file  09:58
<kashyap> https://lists.nongnu.org/archive/html/qemu-devel/2018-03/msg01978.html  09:58
<stephenfin> kashyap: Cool, done  09:58
* kashyap goes to look at bumping the min libvirt / QEMU versions for the 'Solar' release  09:58
<stephenfin> sahid: Any chance you could abandon these, now that bauzas' vGPU series has merged? https://review.openstack.org/#/q/topic:pci-mdev-support-compaq+(status:open+OR+status:merged)  10:01
<sahid> stephenfin: i'm expecting gerrit to abandon them automatically at some point  10:15
<stephenfin> Ah, it doesn't do that. Someone (typically sdague in the past) had to run a script or something  10:16
<kashyap> stephenfin: Question for you here: https://review.openstack.org/#/c/530924/  10:27
<kashyap> Also, sigh (existing fault), naming nuisance: 'disk_cachemodes' (should be 'disk_cache_modes')  10:28
<kashyap> Changing such things at this point would be futile (for OCD's sake), as it might break scripts & other tools, etc. :-(  10:29
<openstackgerrit> Chris Dent proposed openstack/nova master: [placement] Filter resource providers by forbidden traits in db  https://review.openstack.org/556472  10:56
<openstackgerrit> Chris Dent proposed openstack/nova master: [placement] Filter allocation candidates by forbidden traits in db  https://review.openstack.org/556660  10:56
<openstackgerrit> Chris Dent proposed openstack/nova master: [placement] Parse forbidden traits in query strings  https://review.openstack.org/556819  10:56
<openstackgerrit> Chris Dent proposed openstack/nova master: [placement] Support forbidden traits in API  https://review.openstack.org/556820  10:56
<cdent> Okay, with that done, I can brew up to start the spec dive  10:57
<openstackgerrit> Surya Seetharaman proposed openstack/nova master: Scheduling Optimization: Remove cell0 from the list of candidates  https://review.openstack.org/556821  11:03
<sean-k-mooney> melwitt: not that my vote counts in the actual ballot, but I had assumed efried had a +/-1.5 for a while, so I'd also be +1 on the proposal to add you to the core team.  11:05
<kashyap> sean-k-mooney: What do you mean, 1.5?  11:11
<kashyap> (Also: IMHO, even if your vote doesn't "count" by some measure, active contributors such as you expressing their views is equally important, in my books.)  11:12
<sean-k-mooney> kashyap: that while efried does not have +2 rights currently, there are some areas of the code where his +/-1 is treated as +/-2, due to the experience he has demonstrated in that area  11:12
<kashyap> I see. I guessed as much, just wanted to double-confirm.  11:12
<kashyap> sean-k-mooney: Unrelated bait (apologies): I think this is ready: https://review.openstack.org/#/c/534384/  11:13
<sean-k-mooney> kashyap: it does not "count" because the vote is between members of the core team rather than an open vote. but yes, I know it's still valuable for active members to express their support/dissent  11:13
<sean-k-mooney> kashyap: oh yes, I skimmed it yesterday  11:14
<openstackgerrit> Yikun Jiang (Kero) proposed openstack/nova master: Record the host info in EventReporter  https://review.openstack.org/556746  11:14
<kashyap> sean-k-mooney: Thanks! Tests pass, backport concerns addressed, rel note in place (removed needless verbosity per dansmith's review)  11:14
<sean-k-mooney> i see you got the release notes job passing  11:15
<kashyap> Yep! I got the rST dark magic right  11:15
<sean-k-mooney> ill quickly review again, but last time I looked it seemed fine to me  11:15
<kashyap> sean-k-mooney: Thanks, the core change is quite small: I limit the 'choices' keyword arg to one value (PCID)  11:16
<sean-k-mooney> i saw dan's comment and kind of agreed, but not strongly enough to ask for a respin  11:16
<kashyap> sean-k-mooney: Yeah; I reworded it and moved the CPU models info into the config file  11:16
<kashyap> As Operators can assume that it can be applied to all models.  11:16
<kashyap> As we know, some virtual CPU models include it (like the Haswell variants), some don't (Nehalem, etc.).  11:16
<sean-k-mooney> item_type=types.String(  11:19
<sean-k-mooney>     choices=['pcid']  11:19
<sean-k-mooney> )  11:19
<sean-k-mooney> thats new to me  11:19
<sean-k-mooney> kashyap: is that syntax documented in oslo somewhere?  11:20
<kashyap> sean-k-mooney: Not quite; I found it from stephenfin's use here: https://github.com/openstack/nova/blob/cd15c3d/nova/conf/vnc.py#L226,L232  11:20
<kashyap> sean-k-mooney: I only skimmed oslo_config/cfg.py, though  11:20
<kashyap> https://github.com/openstack/oslo.config/blob/master/oslo_config/cfg.py  11:21
<sean-k-mooney> kashyap: well, I have used choices for string fields before, just never for lists  11:21
<kashyap> sean-k-mooney: It's actually documented here: https://github.com/openstack/oslo.config/blob/master/oslo_config/cfg.py#L52  11:21
<sean-k-mooney> ya, so it's not that you are using a ListOpt that has a choices field; you are instead creating a list of string enums, each of which has the choices element  11:24
<sean-k-mooney> but it works  11:24
<kashyap> Exactly :P It's still using the 'String' type  11:24
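
For reference, the full shape of the pattern being discussed looks roughly like this -- a minimal sketch assuming oslo.config's ListOpt and String types (the option name and help text here are illustrative, not copied from the patch):

    from oslo_config import cfg
    from oslo_config import types

    # A list option whose items are strings constrained to a fixed set of
    # choices: each element of the list is validated against choices=['pcid'].
    opt = cfg.ListOpt(
        'cpu_model_extra_flags',  # assumed name, for illustration only
        item_type=types.String(choices=['pcid']),
        default=[],
        help="Extra CPU feature flags; only 'pcid' is currently allowed.")
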
<openstackgerrit> Elod Illes proposed openstack/nova stable/ocata: WIP: Functional test for regression bug #1713783  https://review.openstack.org/505160  11:26
<openstack> bug 1713783 in OpenStack Compute (nova) ocata "After failed evacuation the recovered source compute tries to delete the instance" [High,In progress] https://launchpad.net/bugs/1713783 - Assigned to Balazs Gibizer (balazs-gibizer)  11:26
<kashyap> sean-k-mooney: Thanks for the review. Much appreciated  11:32
<sean-k-mooney> kashyap: no worries. I don't get as much time to review as I'd like, but always feel free to ping me with a patch if you have one  11:33
<kashyap> sean-k-mooney: Understood. I don't spend 100% of my time here either. I think I mostly know what topics pique your interest; will do :-)  11:33
<openstackgerrit> Zhenyu Zheng proposed openstack/nova master: Noauth should also use request_id from compute_req_id.py  https://review.openstack.org/555266  11:48
<openstackgerrit> Surya Seetharaman proposed openstack/nova master: Allow scheduling only to enabled cells (Filter Scheduler)  https://review.openstack.org/550527  12:09
<openstackgerrit> Surya Seetharaman proposed openstack/nova master: Add --enable and --disable options to nova-manage update_cell  https://review.openstack.org/555416  12:11
<openstackgerrit> Surya Seetharaman proposed openstack/nova master: Update the cells FAQs and scheduler maintenance docs.  https://review.openstack.org/556459  12:12
<openstackgerrit> Surya Seetharaman proposed openstack/nova master: Update the cells FAQs and scheduler maintenance docs.  https://review.openstack.org/556459  12:13
<bhagyashris> johnthetubaguy: Hi, I have proposed the revised spec (per the discussion at the Dublin PTG): https://review.openstack.org/#/c/511825/2 (Disallow rotation parameter 0 for 'createBackup' API). Please review when you can.  12:35
* efried waves  12:46
* efried prepares to put on blinders to code reviews and focus only on specs.  12:47
* gibi waves back to efried  12:51
<efried> alex_xu_: Responded.  12:54
<bauzas> sorry folks, was at sports  12:54
<bauzas> sahid: thanks for having reviewed https://review.openstack.org/#/c/552924/!  12:57
<bauzas> sahid: you mean that page memories are different from the main memory?  12:58
<efried> stephenfin: Just noticed you seem to have a split personality according to stackalytics. It looks like your reviews go to one ID, your commits to the other.  12:58
<stephenfin> efried: Oh, really?  12:59
<efried> http://stackalytics.com/report/contribution/nova/90 -- sort by name and find thyself.  12:59
<bauzas> sahid: sorry, I'm not an expert on that; I just looked at https://docs.openstack.org/nova/pike/admin/huge-pages.html  12:59
<sahid> bauzas: i imagine when you are talking about main memory, you are talking about small pages  12:59
<sahid> so if you allocate some huge pages, the small pages available are going to decrease  13:00
<bauzas> sahid: okay, thanks  13:00
<bauzas> sahid: so, main memory and page memories are different?  13:00
<stephenfin> efried: Ah, one is using my email address, the other my launchpad ID. Guess I need to sync those somehow  13:01
<stephenfin> Or not. Meh  13:01
<bauzas> I thought it was just a size for each  13:01
<bauzas> stephenfin: efried: meh, when I see stackalytics, I'm sad :(  13:01
<sahid> bauzas: not sure what you mean, but on a system that uses hugepages it does not look right to talk about "main memory"  13:02
<bauzas> because I don't have a lot of time for reviewing :(  13:02
<efried> I was like, wait, stephenfin has one commit in the last 90 days?? I *know* that ain't right.  13:02
<sahid> there are small pages and huge pages  13:02
<openstackgerrit> Theodoros Tsioutsias proposed openstack/nova-specs master: Enable rebuild for instances in cell0  https://review.openstack.org/554218  13:02
<bauzas> sahid: okay, just to be clear: say I'm asking for a 1G page -- is that different from asking to use 1GB of main memory?  13:02
<openstackgerrit> Artom Lifshitz proposed openstack/nova-specs master: NUMA-aware live migration  https://review.openstack.org/552722  13:02
<bauzas> a different resource?  13:03
<sahid> bauzas: yes  13:03
<bauzas> haha, thanks!  13:03
<bauzas> okay, I'll modify that then  13:03
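
To make sahid's point concrete: hugepages are reserved out of the same physical RAM, so carving out 1G pages shrinks what is left for small (4K) pages, and the two are then tracked as separate resources. A minimal sketch, assuming a Linux host with the usual /proc/meminfo fields:

    def meminfo():
        """Parse /proc/meminfo into a dict of field name -> value string."""
        info = {}
        with open('/proc/meminfo') as f:
            for line in f:
                key, _, value = line.partition(':')
                info[key] = value.strip()
        return info

    m = meminfo()
    # MemFree (small pages) shrinks as HugePages_Total * Hugepagesize is
    # reserved; a 1G hugepage is not the same resource as 1GB of "main" memory.
    print(m['MemTotal'], m['MemFree'], m['HugePages_Total'], m['Hugepagesize'])
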
<artom> bauzas: only ^^^ for me  13:05
<Spaz-Home> Good luck on your specs day, my friends.  13:06
<stephenfin> sahid: RE: your concerns on NUMA-aware vSwitches, I totally agree. It's a horrible hack  13:08
<stephenfin> sahid: However, it's the best we can get right now. As far as I can tell, there's no deterministic way to get this information from every vSwitch (at the moment, at least), and adding that would take some neutron plugins into compute-driver territory  13:09
<stephenfin> NUMA affinity for the OVS case is mostly driven from the PCI devices used, as from what I can tell OVS doesn't expose a "give me the NUMA affinity for this PCI device I've attached to my bridge" API. We'd need to implement this ourselves  13:10
<stephenfin> Which is something we really don't want to be doing in an ML2 driver, IMO  13:11
<stephenfin> The better model, which is what jaypipes, gibi and I discussed, was to use placement for this and collaboratively build up the model between nova and neutron. However, placement isn't there yet  13:11
<stephenfin> sahid: So this is making the best of a bad situation. If I've missed something, though, definitely let me know.  13:12
<stephenfin> I'll put all of the above in the review  13:12
<openstackgerrit> Sylvain Bauza proposed openstack/nova-specs master: Proposes NUMA topology with RPs  https://review.openstack.org/552924  13:16
<sahid> stephenfin: we don't need to ask OVS that "give me the NUMA affinity for this PCI device I've attached to my bridge" question  13:21
<sahid> the operator is going to configure OVS, and actually DPDK, based on where the device is located  13:22
<sahid> so basically what we need is just to retrieve where the vhu ports are created  13:22
<sahid> this can be done by asking OVS  13:22
<sahid> so the neutron agent can return such information to nova in the binding details of the port  13:23
<stephenfin> So just query the PMD pinning information for a bridge?  13:23
<sahid> have I missed something?  13:23
<sahid> hum... not sure I understand "the PMD pinning for a bridge"?  13:24
<stephenfin> It seemed like a big assumption to make (that PMD threads would be affined with the NIC), especially given that they don't have to be with recent releases  13:25
<sahid> well, if operators want the best performance they have to do that  13:25
<stephenfin> Sorry, not the bridge. I'm referring to this: https://developers.redhat.com/blog/2017/06/28/ovs-dpdk-parameters-dealing-with-multi-numa/  13:26
<sahid> it's not our responsibility (i don't think so)  13:26
<stephenfin> e.g. 'ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xF0'  13:26
<sahid> so that is configured by the operator  13:26
<sahid> we are expecting the PMD to run on the same NUMA node as the phy NIC, right?  13:27
<stephenfin> Yes. There can be multiple NICs  13:27
<stephenfin> ...too  13:27
<stephenfin> So NIC X is on NUMA node 0, while NIC Y is on NUMA node 1  13:28
<sahid> but NIC X and NIC Y have different networks  13:28
<sahid> right?  13:28
<stephenfin> and NIC X is connected to/tagged with physnet_x, and NIC Y to physnet_y  13:28
<stephenfin> right  13:28
<sahid> so when neutron is asked to create a port for X  13:29
<sahid> the agent can query OVS to know where that vhu is located?  13:29
<stephenfin> Can they?  13:29
<sahid> yep  13:30
<sahid> give me a sec to find the command  13:30
<stephenfin> Right, I didn't know that :D  13:30
* stephenfin waits eagerly  13:30
<alex_xu_> efried: I missed one thing. If resource provider X is the compute node, RP X provides VCPU and memory also. Each instance will consume VCPU and memory. In that case, RP X will be returned  13:31
<sahid> stephenfin: https://software.intel.com/en-us/articles/vhost-user-numa-awareness-in-open-vswitch-with-dpdk  13:31
<sahid> if we can know where a port is located  13:31
<stephenfin> mikal: Just the man I'm looking for. Fancy jotting down your thoughts on https://review.openstack.org/554195  13:31
<sahid> the agent can return this information to nova, no?  13:31
<stephenfin> (random aside: the OVS(-DPDK) documentation is hands-down awful. It sucks that we have to resort to random blogs for this critical information)  13:32
<stephenfin> sahid: If it's what we want then I don't see why not. Lemme check  13:32
<stephenfin> ooh, so 'pmd-rxq-show' does seem to be exactly what I wanted  13:33
<stephenfin> ...and we'd just assume that OVS was configured correctly, so that there are PMD threads on all NUMA nodes, which is Red Hat's guidance at least  13:35
<stephenfin> sean-k-mooney[m]: If you're about, any thoughts on ^  13:35
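
A rough sketch of the kind of query sahid is describing, assuming the usual `ovs-appctl dpif-netdev/pmd-rxq-show` output (header lines like "pmd thread numa_id 0 core_id 2:" followed by "port:" lines); the parsing here is illustrative, not what the neutron agent actually does:

    import re
    import subprocess

    def port_numa_map():
        """Map each DPDK/vhost-user port to the NUMA node of the PMD
        thread that polls it, per 'ovs-appctl dpif-netdev/pmd-rxq-show'."""
        out = subprocess.check_output(
            ['ovs-appctl', 'dpif-netdev/pmd-rxq-show']).decode()
        mapping = {}
        numa_id = None
        for line in out.splitlines():
            header = re.match(r'pmd thread numa_id (\d+)', line)
            if header:
                numa_id = int(header.group(1))
                continue
            port = re.search(r'port:\s+(\S+)', line)
            if port and numa_id is not None:
                mapping[port.group(1)] = numa_id
        return mapping

    # e.g. {'dpdk0': 0, 'vhu-abc123': 0, 'dpdk1': 1}
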
<efried> alex_xu_: Ah, then that's a mistake in modeling.  13:36
*** moshele has quit IRC13:37
efriedalex_xu_: The RP tree should not be set up such that the compute host provides inventory of FPGA.13:37
alex_xu_efried: yes, I realized that also, that isn't the FPGA case now13:37
<bauzas> alex_xu_: efried: which specific spec are you discussing?  13:37
<stephenfin> efried: See what you made me do? https://review.openstack.org/556850  13:37
<alex_xu_> bauzas: here https://review.openstack.org/#/c/554305/  13:38
* stephenfin busts into some Tay-Tay  13:38
<alex_xu_> efried: basically, it can only happen on the compute node RP  13:38
<bauzas> alex_xu_: ack, thanks  13:39
<efried> alex_xu_: Well, really any RP in the tree which provides inventory in multiple RCs that are "unrelated" to each other.  13:39
*** moshele has joined #openstack-nova13:39
efriedalex_xu_: This is actually what I wrote a bug about last year.13:39
* efried finds...  13:39
<alex_xu_> efried: ha, currently we have VGPU, but that will change soon :)  13:39
kashyap"the operator is going to configure OVS and actually DPDK based on where the device is located" --> Hope the Operator has the necessary PhDs to configure DPDK et al13:40
<alex_xu_> efried: but I agree with that  13:40
<alex_xu_> I feel it will be a rare case in the future  13:40
<bauzas> alex_xu_: efried: FWIW, I discussed this morning with jianghuaw_ about some possible quirks with vGPUs  13:41
<bauzas> and I'd love your thoughts  13:41
<bauzas> because it would need to be specced up  13:41
<bauzas> the fact is that physical GPUs support multiple types  13:41
<bauzas> depending on the type, you can have a different number of vGPUs  13:41
<bauzas> so, guess what?  13:41
<efried> alex_xu_: I can't find the bug right offhand, but I think the scenario I described was: the compute RP has HDD disk, a sharing provider (or in fact another RP in the tree) has SSD disk. I ask for disk & VCPU, and I ask for the HDD trait. I will get back candidates which include the SSD, because the compute RP is satisfying the HDD trait even though it's not providing the disk resource.  13:42
<alex_xu_> we need to update inventory based on the user request?  13:42
<bauzas> alex_xu_: efried: in theory, a nested RP that would be a pGPU could have different inventories for the same resource class  13:42
<bauzas> alex_xu_: efried: but the trick is, once a vGPU is created, then all the types but the one picked for that vGPU will have total=0 for their inventories  13:43
<efried> bauzas: I think we've discussed similar issues in the past. Trying to recall what we came up with as a solution...  13:43
<bauzas> alex_xu_: efried: so there are 2 ways to consider that  13:43
<bauzas> either we ask the operator to define which type he wants to support per GPU  13:43
<alex_xu_> efried: ah, I see, but I guess that bug is killed by the reality that we don't support two different storage types on the same host  13:44
<bauzas> and in that case, the enabled_vgpu_types option needs to be changed  13:44
<efried> alex_xu_: Oh, but we do :)  13:44
<bauzas> or we magically provide a 2nd level of the tree with nested RPs, meaning that a pGPU would have multiple children, each one being something like pGPU_thistype  13:44
<efried> bauzas: That only helps if you want to lock it down (predefine and preallocate)  13:45
<bauzas> the second solution makes things less impactful for operators, but it adds a second level to the tree just for vGPUs  13:45
<bauzas> efried: not really  13:45
<bauzas> efried: because at start, each inventory will provide all the possible vGPUs  13:46
<efried> bauzas: But you'd still be counting on an allocation from *here* affecting the inventory over *there*  13:46
<efried> which we can't do.  13:46
<bauzas> efried: in theory the allocation would be against the pGPU_type  13:46
<bauzas> not the pGPU itself  13:46
<efried> Yeah, I get that, but you would have had to pre-designate how many of each type you start with.  13:47
<bauzas> efried: that's the config option  13:47
<efried> You can't show "full" inventory for all types.  13:47
<efried> Okay, right, so what I'm saying is, it's locked down ahead of time.  13:47
<efried> via the config option  13:47
<bauzas> for the moment, the config option is a ListOpt supporting types  13:47
<bauzas> but honestly, I think we should do something less crazy and just have an option that works like PCI passthrough  13:48
<bauzas> i.e. "for that PCI device, here is the type"  13:48
<bauzas> jaypipes: thoughts on that? ^  13:48
<efried> bauzas: If you want ultimate flexibility, I think you need to provide inventory of some RC that lets you represent "units of vGPU-ness", where different types consume a different number of that resource. And then your request would have to be well-behaved and ask for resources=VGPU:1,VGPU_UNITY_THINGY:4  13:48
<efried> ...where the request for VGPU_UNITY_THINGY has to be correct for the type you're requesting.  13:49
<openstackgerrit> Merged openstack/os-traits master: Updated from global requirements  https://review.openstack.org/551599  13:49
<bauzas> efried: that's where I think we're over-engineering  13:50
<efried> That would be clearer as  13:50
<efried> resources=VGPU_TYPE_X:1,VGPU_UNITY_THINGY:4  13:50
<efried> or  13:50
<efried> resources=VGPU:1,VGPU_UNITY_THINGY:4&required=VGPU_TYPE_X  13:50
<bauzas> efried: operators want flavors like VGPU=2&trait=MY_TYPE  13:50
<bauzas> yeah, honestly, I feel that for the short term a config option that does a whitelist seems the most acceptable solution  13:50
<efried> bauzas: Yeah, I get that. This isn't the only place we've seen where it would be useful to provide a translation layer from the flavor to the actual placement request.  13:51
<efried> bauzas: I think perhaps we're oversimplifying/idealizing by thinking that a direct mapping of flavor to placement request is going to allow us to cover everything.  13:51
<bauzas> let's incrementally try to resolve the use case  13:52
<bauzas> for queens, we supported a single type  13:52
<efried> bauzas: Basically, requiring the admin to understand the nuances and quirks of both the provider tree as modeled by the virt driver, and the syntax and semantics of the placement API query.  13:52
<bauzas> efried: that's where I think a whitelist could be more understandable  13:52
<bauzas> efried: we could have enabled_vgpu_types keep the existing types we agree on  13:53
<bauzas> and then a second option that would tell, for each PCI ID, which type to use  13:53
<efried> bauzas: I'm okay with that idea in general, but I'm going to be watching like a hawk to make sure we retain the separation of platform-specific syntax.  13:53
<bauzas> in that case, the pGPU inventory would be of one type, problem solved.  13:53
<efried> E.g. "NO PCI ID!"  13:53
<bauzas> efried: the PCI ID thingy is just within the driver  13:54
* jaypipes reads back  13:54
<bauzas> no crazypants about PCI tracking  13:54
<bauzas> literally ask libvirt to pick that type for that pGPU  13:54
<efried> bauzas: If there's a PCI ID in the file, then we have to say that the file gets parsed by virt alone. Kind of thing.  13:55
<bauzas> efried: yeah, exactly that  13:55
<efried> bauzas: So... you want to do something like this for Rocky?  13:56
<bauzas> I guess  13:56
<bauzas> if nested RPs is a thing :)  13:56
<efried> bauzas: Cause this is edging unequivocally into Generic Device Management territory.  13:56
<alex_xu_> bauzas: when you request VGPU_thistype, how do you change total=0 for VGPU_thosetype?  13:56
<bauzas> alex_xu_: that's magically done by sysfs  13:56
<bauzas> alex_xu_: which I'm using for getting the inventory  13:57
<alex_xu_> bauzas: but how does placement know that?  13:57
<bauzas> alex_xu_: because we provide inventories of VGPU resources as of queens :)  13:57
<efried> bauzas: Wait, you're talking about doing that at setup time, not at allocation time, right?  13:57
<bauzas> that are populated based on sysfs :)  13:57
<bauzas> efried: alex_xu_ is talking about the case where we would have pGPU types as children  13:58
<alex_xu_> bauzas: if there are two requests at the same time, one for VGPU_thistype and another for VGPU_thosetype, there will be a race; only one can succeed on the host in the end  13:58
<efried> bauzas: Bricks will be shat by several people if you start talking about modifying inventory of Y because X got allocated.  13:58
<bauzas> alex_xu_: yeah, I considered the race condition  13:58
<bauzas> alex_xu_: and that's actually a good call for not doing that  13:58
<bauzas> but rather doing a whitelist in the virt driver directly  13:58
<alex_xu_> bauzas: so limit to only one type per host?  13:59
<bauzas> alex_xu_: no, one type per physical GPU  14:00
<alex_xu_> ah, i got it  14:00
<bauzas> one type per host is already here  14:00
<bauzas> except the fact we don't use traits atm  14:00
<bauzas> anyway, will just propose something soon so we can see if that requires a spec  14:01
<bauzas> or at least some spec amendment  14:01
<efried> bauzas: Right, so you're setting up the types & inventories the first time you load up. The first time update_provider_tree runs, it reads that config file and sets up the provider tree with types & inventories according to what the admin put in there. Right?  14:01
<bauzas> ++  14:01
<efried> Cool cool  14:02
<alex_xu_> the race condition only happens one time; if people accept that, it's also OK  14:02
<bauzas> efried: https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/add-support-for-vgpu.html describes that partially  14:02
<jaypipes> guh, it would have been nice to have more than a few hours of sleep before spec sprint day... :(  14:02
<efried> bauzas: I can also see a solution that goes halfway in the future...  14:02
<bauzas> jaypipes: take caffeine or drugs, either.  14:02
<bauzas> jaypipes: FWIW, https://review.openstack.org/#/c/541290/7/specs/rocky/approved/numa-aware-vswitches.rst will require me some paracetamol  14:03
<bauzas> stephenfin: ^ ;)  14:03
<cdent> jaypipes: start on an easy one: https://review.openstack.org/#/c/418393/  14:03
<edleafe> jaypipes: have you tried the blue meth?  14:03
<bauzas> always take the blue pill  14:04
<stephenfin> I hear the blue meth is real good  14:04
<bauzas> never choose the red one  14:04
<efried> bauzas: In a fashion similar to neutron port creation, the user could create the VGPU resource before doing the boot request. The VGPU creation would eventually hit vendor-specific code (maybe virt, maybe some agent -- cyborg?) which actually *would* tweak the inventories and available types. Then you would attach that VGPU to your boot request, just like you would a port.  14:04
<alex_xu_> efried: any legal drugs in Florida?  14:05
<efried> alex_xu_: Legal schmegal. But I'm in Texas.  14:05
<bauzas> efried: creating the mediated devices in advance is already a possibility  14:05
<efried> In Florida, they wash up on the beach.  14:05
<kashyap> alex_xu_: Hey there  14:08
<kashyap> alex_xu_: About your comment here: https://review.openstack.org/#/c/534384/16/nova/tests/unit/virt/libvirt/test_config.py  14:09
<efried> bauzas: But having that creation impact placement inventories would be new.  14:09
<bauzas> efried: no, it's already the case  14:09
<kashyap> alex_xu_: The test you pointed out is slightly different, in that it is testing 'obj.extra_flags'  14:09
<efried> oh?  14:09
<bauzas> efried: if you create a mediated device, libvirt will just add that to the total  14:09
<bauzas> I should write a blogpost on this...  14:10
<bauzas> because creating a mediated device doesn't mean you *use* it for a guest  14:10
<bauzas> so placement is getting an inventory of 'available+created' as the total already  14:10
<efried> cool  14:11
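
For context, the sysfs layout bauzas is referring to looks roughly like the following; a minimal sketch assuming the standard Linux mdev tree under /sys/class/mdev_bus (illustrative, not the nova driver code):

    import os

    MDEV_BUS = '/sys/class/mdev_bus'

    def vgpu_totals():
        """Total vGPUs per (pGPU, type): instances still available plus
        mdevs already created, i.e. the 'available+created' total above."""
        totals = {}
        for pci_addr in os.listdir(MDEV_BUS):
            types_dir = os.path.join(MDEV_BUS, pci_addr, 'mdev_supported_types')
            for mdev_type in os.listdir(types_dir):
                type_dir = os.path.join(types_dir, mdev_type)
                with open(os.path.join(type_dir, 'available_instances')) as f:
                    available = int(f.read())
                created = len(os.listdir(os.path.join(type_dir, 'devices')))
                totals[(pci_addr, mdev_type)] = available + created
        return totals
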
<jaypipes> stephenfin: and ack on the OVS documentation being awful.  14:12
<jaypipes> OVS-DPDK, that is.  14:12
*** archit has joined #openstack-nova14:13
dansmithefried: mriedem: I can never remember the moving target of translations.. we're translating exceptions now, but not translating logs _at all_ (even warning) ?14:14
bauzasdansmith: right IIRC14:15
efrieddansmith: correct14:15
bauzasbecause translating logs is too much for the i18n team14:15
bauzasthey aren't able to scale14:15
efriedNice patch for a new contributor: remove _Lx from nova.i18n, see what breaks, fix it.14:16
dansmithwtfever14:17
dansmithnot marking them means they _can't_ be translated, which seems like a stupid step back, but whatever14:18
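
For reference, the state of play dansmith and efried are confirming looks like this in the code: exception messages keep the _() translation marker, while log messages carry no marker at all (the old _LW/_LE-style markers in nova.i18n are the ones efried suggests a new contributor could remove). A minimal illustration, assuming the usual LOG and exception module are in scope:

    from nova.i18n import _

    # exception messages are still marked for translation
    raise exception.Invalid(_('This message can be translated.'))

    # log messages, even warnings, are no longer marked at all
    LOG.warning('This message stays untranslated.')
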
*** gjayavelu has quit IRC14:19
openstackgerritPatricia Domingues proposed openstack/nova master: load up the volume drivers by checking architecture  https://review.openstack.org/54139314:21
jaypipesefried: "The first time update_provider_tree runs, it reads that config file and sets up the provider tree with types & inventories according to what the admin put in there. Right?" <-- you mean, exactly like my provider-config-file spec enabled?14:23
efriedjaypipes: could be, could be.  I'm not sure if I read that before it was declared moot, but if I did, I've purged it :(14:24
*** yangyapeng has quit IRC14:25
efriedjaypipes: Anyway, it sounds like a lovely idea you had :)14:25
*** gjayavelu has joined #openstack-nova14:25
jaypipesefried, bauzas: I'm the person that would shit a brick if you start adding dynamic inventory creation based on whatever a user requested.14:26
efriedYes, but not the only person.14:26
bauzasjaypipes: I'm entering the danger timezone of me needing to go get my children at school14:26
*** gjayavelu has quit IRC14:27
* efried now has Top Gun soundtrack stuck in head. Thanks a lot bauzas14:27
bauzasjaypipes: tl;dr I just want the inventory for that specific physical GPU to be config-driven14:27
*** artom has quit IRC14:27
bauzasjaypipes: if that sounds crazy, tell me more in 20 mins :)14:27
jaypipesbauzas: yet another whitelist conf option, yes, I know.14:27
openstackgerritDan Smith proposed openstack/nova master: Make get_allocation_candidates() honor aggregate restrictions  https://review.openstack.org/54799014:28
openstackgerritDan Smith proposed openstack/nova master: Add an index on aggregate_metadata.value  https://review.openstack.org/55585114:28
openstackgerritDan Smith proposed openstack/nova master: Add AggregateList.get_by_metadata() query method  https://review.openstack.org/54472814:28
openstackgerritDan Smith proposed openstack/nova master: Add require_tenant_aggregate request filter  https://review.openstack.org/54500214:28
openstackgerritDan Smith proposed openstack/nova master: WIP: Honor availability_zone hint via placement  https://review.openstack.org/54628214:28
sahidjaypipes: if you can enqueue this https://review.openstack.org/#/c/511188/ :)14:29
mriedemefried: can we abandon https://review.openstack.org/#/c/497978/ or do you plan on updating it?14:29
efriedmriedem: Eventually.  But I can abandon for now and resurrect at that time.14:30
mriedemok. i'm just starting backward for specs, oldest to newest and cleaning out the cruft.14:30
openstackgerritBalazs Gibizer proposed openstack/nova-specs master: Network bandwidth resource provider  https://review.openstack.org/50230614:30
*** eharney has quit IRC14:31
mriedemjohnthetubaguy: should we abandon https://review.openstack.org/#/c/438134/ for service-protected-servers? does that align with anything keystone is working on with RBAC?14:31
mriedemedmondsw: ^14:31
edmondswin a mtg, will look in a few14:32
mriedemdansmith: this was the thing that prompted my ML thread about volume type proxy https://review.openstack.org/#/c/466595/14:32
mriedemi think...14:32
dansmithack14:33
*** mchlumsky has quit IRC14:34
mriedembauzas: i'm going to start an ops list thread about https://review.openstack.org/#/c/446446/ since if it's just a bug fix for broken behavior, we don't need a spec or a microversion14:35
bhagyashrismriedem, alex_xu_:  Hi, I have proposed revised (as per the discussion in Dublin PTG) spec: https://review.openstack.org/#/c/511825/2 (14:37
bhagyashrisDisallow rotation parameter 0 for 'createBackup' API) Request you to review the same.14:37
mriedembhagyashris: cool i'll review today14:37
alex_xu_kashyap: strange... I don't see an extra_flags field in the LibvirtConfigGuestCPU obj14:38
alex_xu_bhagyashris: cool, will try to get to that14:38
kashyapalex_xu_: But, thanks to your comment, I could actually remove another line in the test14:38
alex_xu_kashyap: np14:39
*** ratailor has quit IRC14:39
kashyapalex_xu_: I'll upload a new version once I re-run my local tests.14:39
kashyapalex_xu_: As I caught another bug from a functional test.14:39
kashyapThe casing of the config options was lost, after I moved to Oslo StrOpt().  Glad I tested locally w/ both casings14:39
kashyap(Now fixed it)14:39
*** mchlumsky has joined #openstack-nova14:41
jaypipesgibi: "The backend information is needed for the NeutronBackendWeigher that tries to simulate the Neutron backend selection mechanism by implementing a preference order between backends." <-- this is what concerns me about the introduction of those NET_BACKEND_XXX traits. I don't see those traits as being germane to scheduling. Rather, I just see them as being a part of the port configuration information that we send down to os-vif. And we have14:41
jaypipesthe port binding information in Neutron for that. I still don't see why those should be traits.14:41
alex_xu_kashyap: actually I mean I still can't find an extra_flags field on LibvirtConfigGuestCPU even now...14:42
gibijaypipes: the backend traits are not needed for the placement query or for the scheduler filters14:42
efriedjaypipes: Do you have a spec queued up for NRP-in-a_c yet?14:43
*** lpetrut has joined #openstack-nova14:43
gibijaypipes: but as the filter scheduler makes the allocation, it implicitly decides about the backend as well14:43
gibijaypipes: and before the bandwidth feature this decision was made by neutron14:43
gibijaypipes: but after it, the decision is made by the filter scheduler14:44
bhagyashrismriedem, alex_xu_: thank you :)14:44
gibijaypipes: so we want to preserve some of Neutron's freedom here14:44
kashyapalex_xu_: It's not in that class.  But take a look at LibvirtConfigCPUFeature()14:44
gibijaypipes: by adding a weigher that can express backend preference order14:44
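
A rough sketch of the weigher gibi is proposing, assuming (as the spec draft does) per-backend traits like NET_BACKEND_XXX on the compute providers plus a configured preference list. The conf option and the traits attribute on HostState are illustrative assumptions; BaseHostWeigher and _weigh_object() are nova's real weigher API.

    import nova.conf
    from nova.scheduler import weights

    CONF = nova.conf.CONF

    class NeutronBackendWeigher(weights.BaseHostWeigher):
        """Prefer hosts whose backend trait appears earliest in a
        configured list, emulating Neutron's mechanism driver order."""

        def _weigh_object(self, host_state, weight_properties):
            # e.g. ['CUSTOM_NET_BACKEND_OVS', 'CUSTOM_NET_BACKEND_SRIOV']
            preference = CONF.filter_scheduler.network_backend_order  # hypothetical option
            for rank, trait in enumerate(preference):
                if trait in getattr(host_state, 'traits', ()):  # assumed attribute
                    return len(preference) - rank  # earlier => heavier
            return 0
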
*** eharney has joined #openstack-nova14:45
kashyapalex_xu_: Typo, actually this one: LibvirtConfigGuestCPUFeature()14:45
jaypipesefried: crap. haven't finished it.14:45
jaypipesefried: I can push what I have.14:45
*** yamahata has joined #openstack-nova14:46
efriedjaypipes: as you see fit.  Just thought I'd ask, it being spec review day and all.14:46
openstackgerritMatthew Booth proposed openstack/nova-specs master: Add serial numbers for local disks  https://review.openstack.org/55656514:47
jaypipesefried: ack, thx for the reminder.14:47
sean-k-mooney[m]stephenfin:  hi sorry was in a meeting you wanted me to comment on some ovs-dpdk stuff14:49
openstackgerritDan Smith proposed openstack/nova-specs master: Amend the member_of spec for multiple query sets  https://review.openstack.org/55541314:49
efrieddansmith: remove -2 from ^ ?14:49
dansmithefried: yep, I just need to fix a pep8 thing I just noticed14:50
efriedo14:50
efrieddansmith: Two spots14:50
* bauzas is back from school14:51
stephenfinsean-k-mooney[m]: Indeed. sahid was suggesting we could simply rely on the NUMA affinity of a vhost-user interface's PMD queues to determine where to place a guest14:51
bauzasmriedem: ack, thanks14:51
bauzasmriedem: honestly, I didn't know what to do with this14:51
efrieddansmith: While you're at it, "in in"14:51
*** yangyapeng has joined #openstack-nova14:52
dansmithefried: ah yeah saw that before and had forgottten it14:52
dansmithtttt14:52
*** yangyapeng has quit IRC14:52
*** yangyapeng has joined #openstack-nova14:53
efrieddansmith: Please clarify whether the multiple-member_of thing will also be implemented in GET /resource_providers14:53
dansmithefried: ah, I was going to ask if we should do that too, but I expected it would be outside this spec, no?14:53
efrieddansmith: Whether it is or not, that should be stated.  I could go either way.  But slight preference for including it.  For a couple of reasons...14:54
dansmithack14:54
efrieddansmith: First, a single microversion introducing multiple-member_of syntax.  I like that better than one microversion introducing it for one URI, another for introducing what's effectively the same feature to another URI.14:55
openstackgerritDan Smith proposed openstack/nova-specs master: Amend the member_of spec for multiple query sets  https://review.openstack.org/55541314:55
efrieddansmith: Second, it's actually going to make the code easier to write.  Because we currently do the processing of member_of in common code for both; so we can continue to do that.14:55
dansmithefried: sure14:56
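
Concretely, the syntax under discussion would look roughly like this on both URIs, in a single microversion (aggregate UUIDs are placeholders): repeated member_of parameters AND together, while the in: prefix ORs the aggregates within one set.

    GET /allocation_candidates?resources=VCPU:1&member_of=in:<agg1>,<agg2>&member_of=<agg3>
    GET /resource_providers?member_of=in:<agg1>,<agg2>&member_of=<agg3>
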
*** kholkina has quit IRC14:56
sean-k-mooney[m]stephenfin:  that won't work, as OVS has no idea if there is enough memory on the NUMA node for the VM; also, the memory is not allocated until the vhost-user frontend in QEMU connects to the vhost-user backend in OVS14:56
efrieddansmith: Technically, the Work Items section should be updated...14:56
dansmithefried: for /rps?14:57
efrieddansmith: On rereading, it's sufficiently vague to be acceptable as is.14:57
dansmithyeah, I was going to say..14:57
efrieddansmith: Which is fine by me; I've always thought that section was pretty much redundant anyway.14:57
efrieddansmith: +1.  One spec down!14:58
sean-k-mooney[m]stephenfin: the other thing is that when the vhost-user frontend connects to the vhost-user backend in OVS, it tries to allocate memory and a PMD for the vhost-user interface from one of the NUMA nodes of the guest automatically14:58
* dansmith does fist pump14:58
openstackgerritJay Pipes proposed openstack/nova-specs master: Handle nested providers for allocation candidates  https://review.openstack.org/55687314:58
*** yamahata has quit IRC14:59
jaypipesefried: ^14:59
efriedjaypipes: ack14:59
mriedembhagyashris: comments inline https://review.openstack.org/#/c/511825/15:00
*** hamzy has quit IRC15:02
jaypipesgibi: maybe I'm just being thick... I still don't get it. You will have multiple backends on the same compute host supporting the same physical networks that support the same vNIC types and you want to be able to choose which backend to allocate a piece of bandwidth from?15:04
bauzasjaypipes: I'm not sure we need a spec for https://review.openstack.org/#/c/556873/1/specs/rocky/approved/nested-resource-providers-allocation-candidates.rst15:04
bauzasjaypipes: it's just fixes we need to merge IMHO15:04
*** felipemonteiro_ has joined #openstack-nova15:04
mriedembauzas: it's an api change and microversion right?15:05
mriedemso spec is required yeah15:05
mriedem?15:05
gibijaypipes: exactly. Neutron today does it in the following way:15:05
gibijaypipes: Neutron has a mechanism driver config15:05
bauzasmriedem: from the spec itself, looks like it's not changing the API15:05
stephenfinsean-k-mooney[m]: Potentially dumb question, but in what way would lack of guest memory be related?15:05
gibijaypipes: Neutron iterates through that list and tries binding the port with the given driver15:05
jaypipesmriedem: no API change, no.15:05
stephenfinI figured if you could do 'ovs-appctl dpif-netdev/pmd-rxq-show' to show the affinity for a given interface and just return that15:06
gibijaypipes: the first driver that returns a positive result from that bind call will be the one Neutron uses15:06
*** hamzy has joined #openstack-nova15:06
gibijaypipes: so that config option defines a preference order between backends supporting the same physnet15:06
mriedemjaypipes: bauzas: it's a behavior change for the alloc candidates api though15:06
stephenfinOr does that even exist before the port is attached to the interface?15:06
*** voelzmo has joined #openstack-nova15:06
bhagyashrismriedem: thank you for the review, I will look into it; as I am working in the IST time zone, it's end of day for me :)15:06
stephenfinthis would be so much easier if I had a machine to experiment on. Stupid fried motherboard is killing me :(15:06
jaypipesmriedem: if "behaviour change" means "it will work when there are nested providers", then yes. :)15:07
bauzasmriedem: technically, alloc-candidates doesn't work yet with nested RPs15:07
jaypipesstephenfin: at least it's not an efried motherboard.15:07
bauzasmriedem: it's not changing the existing15:07
mriedembauzas: technically volume-backed rebuild with a new image doesn't work either,15:07
* jaypipes goes back into the bad-joke cave.15:07
mriedembut if we make that work, it's an api change15:07
mriedemeven if the request params don't change15:07
jaypipesdoes it even matter guys? the spec is up.15:08
bauzasI agree it's a signal15:08
mriedemyeah i'm saying it's a spec,15:08
mriedemand likely a microversion bump15:08
bauzasokay, I'll comment that too then15:08
*** bhujay has quit IRC15:09
bauzasmriedem: jaypipes: speaking of specs15:09
bauzasjaypipes: now I'm back, can we discuss about my point with vGPU types ?15:10
*** pcaruana has quit IRC15:10
bauzaslooks like you were sad about that15:10
sahidstephenfin, sean-k-mooney[m] I think the issue is in scheduling: we first have to select a host, and only then can we create the ports and query them15:11
*** dikonoo has quit IRC15:11
stephenfinsahid: I'd be perfectly fine saying we need pre-created ports for this thing15:11
stephenfinThat's already a requirement for SR-IOV and afaik is the plan for gibi's bandwidth-aware scheduling spec15:11
*** jaosorior has quit IRC15:12
sahidstephenfin: the problem is then you need to raise that re-schedule exception if the resources are not enough15:12
jaypipesbauzas: "sad" would be one way to put it, yes.15:12
sahidyes i think we do that somewhere but i can't really remember15:12
stephenfinsame issue with SR-IOV though, right?15:12
bauzasjaypipes: let's be gentlemen :p15:12
bauzasjaypipes: so, before discussing about a solution, do you understand the problem ?15:13
sahidstephenfin: yes probably you are right15:13
*** germs has joined #openstack-nova15:13
*** germs has quit IRC15:13
*** germs has joined #openstack-nova15:13
jaypipesbauzas: yes, I fully understand the problem.15:13
bauzasjaypipes: in https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/add-support-for-vgpu.html we don't mention that a single pGPU can have multiple types and we just suppose a single inventory of VGPU for each15:13
jaypipesbauzas: I had numerous conversations with jianghuaw_ about this.15:14
bauzasthe fact is, once *one* mediated device is created, then *all* the other types are not possible for that pGPU14:14
stephenfinsahid, sean-k-mooney[m]: The bigger concern I have is that a vhost-user port is configured to a given PMD based on the guest - not the routes15:15
bauzasjaypipes: and what are your thoughts on that ?15:15
bauzasjaypipes: we already somehow set inventories based on config option thru enabled_gpu_types tho15:15
jaypipesbauzas: the only thing I really don't want is on-the-fly re-configuration of providers based on *what the user requested*.15:16
stephenfinsahid, sean-k-mooney[m]: e.g. all vhost-user ports will be handled by a random PMD until the guest is attached, when that reallocation happens15:16
bauzasjaypipes: it's not that, if I understand correctly your concern15:16
stephenfinsahid, sean-k-mooney[m]: Assuming vhost-user ports that aren't associated with a guest even appear in output of 'ovs-appctl dpif-netdev/pmd-rxq-show' (I can't test it, grrr)15:16
*** hemna_ has joined #openstack-nova15:17
jaypipesbauzas: if we want to pre-define configuration of multiple supported vGPU types using a CONF option, so be it. I would prefer to stop adding yet more CONF options and instead handle inventory of providers using a provider-config YAML file format, but that ain't gonna happen apparently, so be it.15:17
bauzasjaypipes: if we go in a direction where each pGPU has multiple inventories, one per GPU type, then yes, they would be dynamically modified on instance creation15:17
*** lyan has joined #openstack-nova15:17
kashyapalex_xu_: You're right; I can remove that extra test.  The main test in test_driver.py takes care of the full config.  Thanks for catching.15:17
bauzasjaypipes: I can try to spec it, you know15:17
jaypipesbauzas: that's exactly what I *don't* want.15:18
bauzasjaypipes: the YAML file15:18
*** lyan is now known as Guest925615:18
bauzasjaypipes: okay cool, so we're aligned15:18
edleafebauzas: multiple inventories is not going to work15:18
*** germs has quit IRC15:18
bauzasedleafe: if each inventory is behind a child RP15:18
bauzaseither way, sounds we're in agreement15:18
edleafeyeah15:18
bauzasI *don't* want to make things complicated15:18
bauzasthe problem is more about the specific config option15:19
bauzasI understand jaypipes on that15:19
jaypipesbauzas: it would be the same inventory of VGPU. But different child providers would be tagged with specific VGPU_TYPE_XXX traits, right?15:19
*** burt has joined #openstack-nova15:19
sean-k-mooney[m]stephenfin:  if they are not associated with a guest, i'm not sure; when they do get added to a guest they might also change.15:19
bauzasjaypipes: you're talking of the possibility to have children, each of them being a type ?15:19
bauzasjaypipes: if so, each inventory would be different15:19
jaypipesbauzas: no15:20
bauzasbecause the total number of vGPUs you can create depends on your type15:20
*** eharney has quit IRC15:20
jaypipesbauzas: I'm saying the resource class would all be "VGPU"15:20
bauzasoh, with the conf opt solution ?15:20
stephenfinsean-k-mooney[m]: Yeah, it seems like a lot of magic (even for ovs-dpdk) to say "we have this route that uses this NIC, therefore the vhost-user port should be processed by this PMD thread"15:20
*** sambetts|afk is now known as sambetts15:20
bauzasyeah, it's still VGPU resource class and a trait for that type15:20
*** eharney has joined #openstack-nova15:20
jaypipesbauzas: right.15:20
bauzasso each PGPU will be a leaf15:20
sean-k-mooney[m]stephenfin: this won't work in general, however, as it would only work with vhost-user. The approach you need to enable has to work for any switch backend15:21
bauzaswith one inventory about the total number of vGPUs it can create *for the type defined by the opt*15:21
jaypipesbauzas: each pGPU group, but yes.15:21
bauzasplus the trait telling which type it is15:21
bauzasjaypipes: libvirt doesn't have the notion of groups15:21
jaypipesbauzas: right.15:21
jaypipesbauzas: I know, but xen does.15:21
bauzassure, but from a placement perspective, a "PGPU" RP is, from a libvirt perspective, a PCI device, and from a xen perspective, a PGPU group15:22
stephenfinsean-k-mooney[m]: Sure, but vhost-user would be a start. Once we have a way to expose this information, we can extend other backends15:22
bauzasbut both are reconciled15:22
jaypipesack15:22
bauzasit's just a driver-only thing15:22
bauzasokay, now question15:23
bauzasdoes that need to be spec'd up ?15:23
bauzasjaypipes:^15:23
sean-k-mooney[m]stephenfin:  if we wanted to do anything regarding the interface rx queues, it should likely be an os-vif thing where we calculate the best PMD ourselves and set it, not the other way around15:23
jaypipesbauzas: yes. new CONF option, new way of behaving for the virt drivers. I would say yes.15:24
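
One hypothetical shape for the new option being discussed (section and option names are placeholders only; the spec bauzas is about to write will define the real ones): each enabled type is bound to specific pGPU PCI addresses, making the per-pGPU inventory config-driven rather than driven by what a user requests.

    [devices]
    enabled_vgpu_types = nvidia-35, nvidia-36

    [vgpu_nvidia-35]
    device_addresses = 0000:84:00.0

    [vgpu_nvidia-36]
    device_addresses = 0000:85:00.0
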
openstackgerritMerged openstack/nova master: tox: Remove unnecessary configuration  https://review.openstack.org/55654415:24
sean-k-mooney[m]stephenfin:  most backends won't have this info, so vhost-user is not something we should build on15:26
stephenfinsean-k-mooney[m]: What other backends would there be?15:26
*** imacdonn has quit IRC15:26
*** imacdonn has joined #openstack-nova15:26
openstackgerritKashyap Chamarthy proposed openstack/nova master: libvirt: Allow to specify granular CPU feature flags  https://review.openstack.org/53438415:26
sean-k-mooney[m]stephenfin:  for example, kernel OVS with kernel vhost has kernel vhost threads it spawns per interface; it does not have PMDs, so while we can taskset those threads, we can't affinitize them by looking at queues15:26
*** artom has joined #openstack-nova15:26
bauzasjaypipes: fine by me15:27
bauzasartom: had some concerns about the virt driver "calling" the conductor in https://review.openstack.org/#/c/552722/615:27
*** damien_r has quit IRC15:27
*** tianhui_ has quit IRC15:28
artombauzas, replied on the patch, but we can continue here if you want. Basically, no new calls are added, it's just existing methods/returns with new data in them15:28
artomShould I clarify the spec?15:28
bauzasartom: maybe I misunderstood but you mean that the compute service would call back the conductor ?15:28
alex_xu_kashyap: np15:28
*** tianhui has joined #openstack-nova15:28
bauzasartom: that would help15:28
edmondswmriedem I don't remember ever discussing a service-protected-server case in our RBAC discussions, but I don't think you need to use service tokens there... just have a role that's given permission to manage these15:29
sean-k-mooney[m]stephenfin:  in any case, queue mapping or tasksetting of vhost threads is, i think, out of scope for nova15:29
edmondswthat spec needs a lot of work if anything's to happen there15:29
bauzasartom: also, on the fact we would check the compute version15:30
bauzasartom: does a live migration work with two different compute versions already ?15:30
openstackgerritMerged openstack/nova master: Move placement test cases from db to placement  https://review.openstack.org/55314915:30
mriedembauzas: yes15:30
mriedemwe support mixed version computes for live migration15:30
artombauzas, I didn't see anything in the code that would suggest it wouldn't15:30
bauzasmriedem: context is https://review.openstack.org/#/c/552722/6/specs/rocky/approved/numa-aware-live-migration.rst@24415:30
mriedemwe have a grenade + live migratoin job that also does this back and forth15:30
sean-k-mooney[m]Linux bridge, SR-IOV, hardware-offloaded OVS, VPP, IOVisor (eBPF), MidoNet, Calico and macvtap are the main ones used beyond OVS/OVS-DPDK15:30
bauzasmriedem: then, I was claiming we could microversion the behavioural change artom is going to introduce15:31
sean-k-mooney[m]oh there is also snabb switch15:31
mriedemthat's a bit drastic15:31
artombauzas, also, we'd only fail it if the instance has "NUMA" characteristics15:31
mriedemsec15:31
artomWhich is currently broken anyways, so not as drastic as it sounds15:31
bauzasartom: I agree, and we don't test that AFAIK15:31
mriedemsee https://review.openstack.org/#/c/522537/13/nova/conductor/tasks/live_migrate.py15:31
mriedemyou only need to know if the source and dest computes can do the new hotness15:32
mriedemotherwise it's a novalidhost if you can't find a pair15:32
bauzasmriedem: yeah I remember that change15:32
*** moshele has joined #openstack-nova15:32
sean-k-mooney[m]stephenfin: oh, and NTT have a DPDK soft patch panel thing; the point is nova can't have special-case code for all of these15:33
bauzasmriedem: if the implementation of artom's spec would go into that direction, then yes we wouldn't need a microversion15:35
mriedemi think we can agree that we don't want to add subnet to the requested network turducken in the compute api https://review.openstack.org/#/c/518227/15:35
*** moshele has quit IRC15:35
bauzasmriedem: oh please, yes15:37
bauzasjust create the port in neutron and pass it to nova15:37
bauzasaha, jinxed by mriedem's comment15:37
artommriedem, that link you posted in my NUMA migration spec, that's basically just checking min compute version, right?15:37
artomSo the same thing I was suggesting?15:38
artomJust for clarity in my head, because I feel like we're talking about the same thing as though they were different15:38
gibijaypipes: I'm open to debate to simulate or not the neutron backend ordering preference via a scheduler weigher but for that debate we need some neutron heavy guys, like mlavalle15:39
openstackgerritJay Pipes proposed openstack/nova-specs master: Handle nested providers for allocation candidates  https://review.openstack.org/55687315:39
bauzasartom: yeah basically15:39
bauzasartom: it's for a different usage tho15:39
jaypipesmriedem, bauzas: ^^ added note about microversion.15:39
artombauzas, right, but the mechanism is the same15:40
kashyapDarn, lost edits to  wiki.openstack.org, as it logged me out mid-way.  And didn't give me a way to get the changes back.15:40
stephenfinsean-k-mooney[m]: Well damn. That's unfortunate15:40
bauzasartom: yup, the pattern should be the same15:40
artom(We might eventually standardise that in a utils somewhere, since we seem to be doing a lot of it, btw)15:40
mriedemartom: kind of, but it depends on where you're using it,15:40
mriedemif it's the API and you're checking min compute service version across the entire cell, that's different than just comparing the source and candidate dest host15:41
bauzasat least, we can assume we run a pre-flight check on the conductor that checks both compute versions15:41
bauzasthat could be generalized15:41
artommriedem, ah, so you're saying we should be more granular and just check the (source, dest) pair15:41
artomHrmm15:41
artomCould we do that in the scheduler?15:41
mriedemartom: yes, see that patch i linked15:41
bauzasbut that's an implementation detail, IHMO15:41
artomSo that straight away we have a dest that supports it?15:41
mriedemartom: we could if we exposed the capability as a trait on the compute and added a pre-request filter15:42
bauzasartom: the scheduler doesn't know whether it's a migration or a boot15:42
bauzasbut what mriedem wrote15:42
mriedemartom: that would be an optimization imo15:42
mriedemworth noting in the spec though15:42
bauzaszactly15:42
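
A condensed sketch of the optimization mriedem describes: a pre-request filter (in the style of nova/scheduler/request_filter.py, like the require_tenant_aggregate filter shown earlier in the day) that requires a capability trait so only hosts supporting the new behaviour come back from placement. The trait name and helper are hypothetical.

    def require_numa_live_migration(ctxt, request_spec):
        # hypothetical helper deciding this is a NUMA-sensitive migration
        if not _is_numa_live_migration(request_spec):
            return False
        # one possible mechanism: require a (hypothetical) capability
        # trait via the flavor so placement only returns capable hosts
        request_spec.flavor.extra_specs[
            'trait:COMPUTE_NUMA_LIVE_MIGRATION'] = 'required'
        return True
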
openstackgerritChris Dent proposed openstack/nova master: [placement] Filter resource providers by forbidden traits in db  https://review.openstack.org/55647215:42
openstackgerritChris Dent proposed openstack/nova master: [placement] Filter allocation candidates by forbidden traits in db  https://review.openstack.org/55666015:42
openstackgerritChris Dent proposed openstack/nova master: [placement] Parse forbidden traits in query strings  https://review.openstack.org/55681915:42
openstackgerritChris Dent proposed openstack/nova master: [placement] Support forbidden traits in API  https://review.openstack.org/55682015:42
artomMmmm, forbidden traits15:42
artom/homer15:42
gibimriedem: I will be available during the notification subteam meeting timeslot or before that but I have nothing to really talk about so we can simply skip the meeting if you agree15:42
SamYapleso i just ran into an annoying, time-consuming issue. i was doing an upgrade and my nova-osapi_compute service changed to reporting its hostname to the database. this created new nova-osapi_compute services in the services table. the old service entries had version 9 and this was causing all GETs to instances to fail with instance not found15:43
SamYaplecode in question https://github.com/openstack/nova/blob/ed55dcad83d5db2fa7e43fc3d5465df1550b554c/nova/compute/api.py#L226815:43
SamYapleno logs anywhere describing the issue :/15:43
mriedemgibi: yes let's skip15:43
SamYaplewould a check for old osapi_compute service versions be appropriate to add to `nova-status upgrade check` ?15:44
gibimriedem: ack, let's focuse on spec reviews15:44
*** andreas_s has quit IRC15:44
artommriedem, bauzas, the optimization would be the scheduler doing the version checking? And for now we let the conductor do it?15:44
*** andreas_s has joined #openstack-nova15:44
bauzasjaypipes: should we also discuss in https://review.openstack.org/#/c/556873/2/specs/rocky/approved/nested-resource-providers-allocation-candidates.rst about how the scheduler would pass the root RP ?15:44
bauzasjaypipes: because atm, it just says "which compute node is having that UUID ?"15:45
openstackgerritChris Dent proposed openstack/nova master: Optional separate database for placement API  https://review.openstack.org/36276615:45
openstackgerritChris Dent proposed openstack/nova master: Isolate placement database config  https://review.openstack.org/54143515:45
openstackgerritChris Dent proposed openstack/nova master: WIP: Ensure that os-traits sync is attempted only at start of process  https://review.openstack.org/55385715:45
jaypipesgibi: I'm happy to speak with mlavalle or anyone else about this. I just feel like the network-bandwidth spec has gotten way over-engineered.15:45
mriedemSamYaple: a bit confused,15:45
mriedemyou said it started creating service entries in the db,15:45
mriedembut that there were also old entries?15:45
mriedemSamYaple: also, this sounds vaguely familiar to https://review.openstack.org/#/c/556670/15:46
gibijaypipes: I understand that it is a long spec and this part of it seems minor15:46
mlavallejaypipes, gibi: I am planning to go over the spec today15:46
* bauzas goes to spec up15:46
openstackgerritStephen Finucane proposed openstack/nova master: tox: Speed things up and document them  https://review.openstack.org/53438215:46
openstackgerritStephen Finucane proposed openstack/nova master: trivial: Remove 'tools/releasenotes_tox.sh'  https://review.openstack.org/53438315:46
openstackgerritStephen Finucane proposed openstack/nova master: tox: Make everything work with Python 3  https://review.openstack.org/55689415:46
jaypipesbauzas: the scheduler simply does a ComputeNode.get_all_by_uuid(), passing all the UUIDs of all providers it sees in the provider_summaries section of the allocation_candidates response. Nothing about that will be changing.15:46
*** eharney has quit IRC15:46
mlavalleI also have the impression that we can / should simplify a bit, jaypipes15:47
*** yamamoto has quit IRC15:47
mriedemSamYaple: at this point, if not CONF.cells.enable and we're in here https://github.com/openstack/nova/blob/ed55dcad83d5db2fa7e43fc3d5465df1550b554c/nova/compute/api.py#L2268 we likely should be puking out a warning15:47
jaypipesgibi: no, it's not the length of the spec that I have issues with.15:47
bauzasjaypipes: sec15:47
jaypipesmlavalle: it's the crossing of scope boundaries that I have issue with.15:47
SamYaplemriedem: http://paste.openstack.org/show/715421/15:47
gibimlavalle: cool, then could you please add your view about the backend selection in https://review.openstack.org/#/c/502306/21/specs/rocky/approved/bandwidth-resource-provider.rst@61015:47
SamYaplethe first three services in that list were the originals. i deleted them and everything started working15:48
bauzasjaypipes: say some child RP is accepting the query, the placement API call to allocation_candidates will return the root RP UUIDs in the provider_summaries ?15:48
*** yamamoto has joined #openstack-nova15:48
bauzasjaypipes: is that already the case or is that requiring some implementation change ?15:48
jaypipesgibi: the thing we are attempting to consume is a chunk of network (ingress or egress) bandwidth for a physical network. I don't really see how vNIC type nor network "backend" selection is relevant to that.15:48
bauzasbecause I want to understand what's missing15:48
gibimlavalle: if we want to have what is described in that subsection then we have to convince jaypipes to have backend specific traits in the RP tree15:48
jaypipesbauzas: yes. and it already does that.15:48
SamYaplemriedem: with the old services not deleted, GETs on instances (nova show's) would fail15:48
bauzas\o/15:48
bauzasI was out of nested RPs for a while15:49
mriedemSamYaple: ok so before this, the api code just relied on getting the instances from the nova db configured in your nova-api nova.conf [database]/connection field right?15:49
gibijaypipes: by claiming bandwidth we implicitly select the backend, that is why backend comes into the picture15:49
bauzasbut that's a very nice point15:49
*** lajoskatona has quit IRC15:49
mriedemSamYaple: oh are you saying the new version-15 services are fixing the problem for you then?15:49
mriedembut you didn't know about those until you had figured out the problem with the older service versions15:49
SamYapleyes15:49
mriedemok15:50
jaypipesgibi: but why should the nova scheduler care about that? shouldn't the nova scheduler just claim resources against the provider of network bandwidth for a specific physical network and then leave it to os-vif and the compute node to pick whatever network backend it wants?15:50
gibijaypipes: we failed to find a RP model that allows claiming bandwidth but not selecting a specific backend that bandwidth belongs to15:50
mriedemSamYaple: did you just upgrade to newton? or ocata/pike?15:50
SamYaplemriedem: newton -> ocata15:51
mriedemok and now you have a cell1 mapping for the nova db15:51
SamYapleoh yea its all working now15:51
mriedemso you're pulling instances from https://github.com/openstack/nova/blob/ed55dcad83d5db2fa7e43fc3d5465df1550b554c/nova/compute/api.py#L227115:51
mriedemok15:51
mriedemat this point i'm not sure what the nova-status version check would look for, nova-osapi_compute services with version < 15 across all cells?15:52
mriedemand then backport that to queens, pike and ocata?15:52
SamYaplemriedem: i think so15:52
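
The check SamYaple is suggesting would presumably boil down to something like this against each cell's database (a sketch only; the threshold of 15 is the service version mriedem mentions above, and the column names come from nova's services table):

    SELECT id, host, version
      FROM services
     WHERE `binary` = 'nova-osapi_compute'
       AND deleted = 0
       AND version < 15;
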
gibijaypipes: could be a problem with our model15:52
SamYaplethere are no logs indicating any issues15:52
SamYaplemriedem: i had to walk the code and db (which i'm not super familiar with) to find this15:53
*** yamamoto has quit IRC15:53
mriedemSamYaple: yeah like i said at this point that should be a warning, but a warning in rocky doesn't help you in ocata15:53
mriedemSamYaple: can you report a bug and i can bring it up in the cells meeting?15:53
*** derekh has quit IRC15:53
*** yingjun has quit IRC15:53
SamYapleim looking to the future :)15:53
*** tianhui has quit IRC15:53
SamYaplesure will do15:53
*** derekh has joined #openstack-nova15:53
*** eharney has joined #openstack-nova15:53
mriedemwhatever we do can at least help the next poor soul that runs into this15:53
SamYapleexactly15:53
mriedemthanks15:53
*** tianhui has joined #openstack-nova15:54
SamYaplenova uses launchpad still, right?15:54
mriedemhell yeah15:54
SamYaple:)15:55
* gibi needs a new brain15:55
Spaz-HomeTrade me plz.15:55
jaypipesgibi, mlavalle: please move convo to https://etherpad.openstack.org/p/X0RboWOe7C15:56
gibijaypipes: ack15:56
*** ralonsoh has quit IRC15:57
*** andreas_s has quit IRC15:58
*** fragatina has quit IRC15:58
*** eharney has quit IRC15:59
*** gyee has joined #openstack-nova16:00
*** germs has joined #openstack-nova16:01
*** germs has quit IRC16:01
*** germs has joined #openstack-nova16:01
*** chyka has joined #openstack-nova16:01
*** ktibi has quit IRC16:03
*** yamamoto has joined #openstack-nova16:03
*** trozet has joined #openstack-nova16:04
SamYaplemriedem: https://bugs.launchpad.net/nova/+bug/175931616:05
openstackLaunchpad bug 1759316 in OpenStack Compute (nova) "pre-cells_v2 nova-osapi_compute service in database breaks instance lookup" [Undecided,New]16:05
*** sree has joined #openstack-nova16:05
jaypipesmlavalle: are you light green on etherpad?16:06
openstackgerritSurya Seetharaman proposed openstack/nova master: Scheduling Optimization: Remove cell0 from the list of candidates  https://review.openstack.org/55682116:06
openstackgerritSurya Seetharaman proposed openstack/nova master: Allow scheduling only to enabled cells (Filter Scheduler)  https://review.openstack.org/55052716:06
openstackgerritSurya Seetharaman proposed openstack/nova master: Add --enable and --disable options to  nova-manage update_cell  https://review.openstack.org/55541616:06
openstackgerritSurya Seetharaman proposed openstack/nova master: Update the cells FAQs and scheduler maintenance docs.  https://review.openstack.org/55645916:06
mlavallejaypipes: I am green16:06
jaypipesmlavalle: cool, thx :)16:07
*** voelzmo has quit IRC16:08
*** germs_ has joined #openstack-nova16:08
sahidgibi: i responded to your comments on the emulthreads spec, if you can let me know your rhinking on how to handle that specific point16:08
*** yamamoto has quit IRC16:08
*** sree has quit IRC16:09
*** germs has quit IRC16:10
*** dave-mccowan has quit IRC16:10
*** ragiman has quit IRC16:11
kashyapmriedem: Just to note, after last 30 mins of looking around, I've updated min versions for libvirt / QEMU / libguestfs for Debian, Fedora and RHEL: https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Distro_minimum_versions16:11
mriedemSamYaple: ack16:11
*** mvk has quit IRC16:11
*** germs_ has quit IRC16:11
*** germs has joined #openstack-nova16:12
*** germs has quit IRC16:12
*** germs has joined #openstack-nova16:12
kashyapOnce I do for openSUSE & SLES, then it gives a somewhat representative sample to update the code & remove backward compat cruft.16:12
mriedemkashyap: ok. you could maybe get imacdonn or stvnoyes1 to help out with the oracle linux entry, and AJaeger or toabctl to help out with suse16:12
*** andreas_s has joined #openstack-nova16:13
mriedemkashyap: well, the proposed 'next' version is generally communicated on the ops mailing list for feedback16:13
kashyapThanks; I was just looking up things myself for the past bit.  And for Debian at least I got double-checked w/ a Debian person16:13
kashyapmriedem: Yes, that too, how could I forget that :-)16:13
kashyap(I myself added a note about mail to Operators in the wiki.  Won't miss that.)16:13
kashyaps/Wiki/PTG Etherpad/16:13
*** Guest9256 has quit IRC16:15
sahidbtw mriedem or perhaps dansmith, i know that you are not so interested in the emulator threads thing, but if one of you can have a look at https://review.openstack.org/#/c/511188/16:16
*** rmart04 has quit IRC16:17
sahidjaypipes looks to be the one to review it, but at some point i will need another core16:17
*** trinaths has joined #openstack-nova16:17
openstackgerritArtom Lifshitz proposed openstack/nova-specs master: NUMA-aware live migration  https://review.openstack.org/55272216:18
*** yamamoto has joined #openstack-nova16:18
sahidthanks in advance16:19
mriedemsahid: that is wayyyy outside of my wheelhouse16:19
bauzasmriedem: mmmm, we approved https://blueprints.launchpad.net/nova/+spec/vgpu-rocky but I discussed with jaypipes about how we would support multiple VGPU types by using a new conf opt and we agreed to provide a spec for that16:20
bauzasmriedem: fine if I'm still using that blueprint ?16:20
mriedembauzas: ask melwitt16:20
sahidyes yes no worries16:20
bauzasmelwitt: arouuuuund ?16:20
toabctlkashyap, mriedem the (open)SUSE entries seem to be ok. the upcoming versions are not yet there but SLES12SP3 is the version that we need support for imo16:20
mriedemtoabctl: as the next min version you mean?16:21
bauzassahid: https://review.openstack.org/#/c/511188/ is on my list16:21
mriedemfor rocky, we already have the next min versions in plae16:21
mriedem*plcae16:21
mriedemdamn it16:21
bauzasplace FTW16:21
*** derekh has quit IRC16:21
sahidbauzas: ah cool, thanks16:21
mriedemhttps://github.com/openstack/nova/blob/ed55dcad83d5db2fa7e43fc3d5465df1550b554c/nova/virt/libvirt/driver.py#L20716:21
mriedemtoabctl: ^16:21
*** germs has quit IRC16:22
mriedemkashyap: andreas_s is the person to ask for zkvm16:22
toabctlmriedem, I'm saying the versions are up-to-date. so Rocky seems to be fine16:22
*** sahid has quit IRC16:23
mriedemyeah, we're talking about what the next min versions should be in S16:23
*** yamamoto has quit IRC16:23
*** artom has quit IRC16:23
toabctlmriedem, so do you plan to use a higher version than currently supported on SLES/openSUSE? looks like most other distros have lower versions currently16:24
*** pcaruana has joined #openstack-nova16:25
toabctlmriedem, but there will be SLES15 and openSUSE Leap 15 soon. both will have newer versions. let me check these16:25
mriedemtoabctl: likely not no, the next min version should be compatible with what the various distros can support in S16:25
mriedemthe point of the next min version isn't to be the bleeding edge16:25
mriedemit's to raise the bar, but still be a minimum that can be supported across distros16:25
imacdonnmriedem kashyap toabctl Do keep stvnoyes1 and I in the loop, please .... QEMU/KVM version for Oracle Linux could be a bit tricky ... I think somenew newer is in the works there, but I can't make any public statements at the moment16:26
*** yassine has quit IRC16:26
imacdonnsomething* newer16:26
*** andreas_s has quit IRC16:26
openstackgerritMerged openstack/nova master: add check before adding cpus to cpuset_reserved  https://review.openstack.org/53986516:27
openstackgerritMerged openstack/nova master: Docs: modernise links  https://review.openstack.org/55602416:27
*** derekh has joined #openstack-nova16:27
gibisahid: responded in https://review.openstack.org/#/c/511188/1316:29
*** germs has joined #openstack-nova16:29
*** germs has quit IRC16:29
*** germs has joined #openstack-nova16:29
cfriesenkashyap: just added a comment to your cpu feature flag review, but might make sense to discuss it here....seems like libvirt does allow you to set additional features when using host-model, do we want to artificially restrict that?16:29
*** germs has quit IRC16:32
*** moshele has joined #openstack-nova16:33
*** lucasagomes is now known as lucas-afk16:33
*** yamamoto has joined #openstack-nova16:33
*** andreas_s has joined #openstack-nova16:36
jaypipesgibi, mlavalle: gotta love etherpad conversations. even better than IRC conversations... ;)16:36
mlavallejaypipes: it was a good idea16:37
mlavalleif there is a lot to discuss, it seems more orderly16:37
*** yamamoto has quit IRC16:37
openstackgerritEd Leafe proposed openstack/nova-specs master: Add Generation to Consumers  https://review.openstack.org/55697116:39
edleafecdent: jaypipes: efried: ^^16:39
efriededleafe: noyce16:39
openstackgerritMatt Riedemann proposed openstack/nova-specs master: Few correction in the server filter/sort spec  https://review.openstack.org/52701916:39
cdentedleafe: roger that16:39
*** lyan has joined #openstack-nova16:40
*** lyan is now known as Guest5924516:40
*** udesale has quit IRC16:41
efriedbauzas: ^ I think you wanted to refer to that from your spec16:42
efriedbauzas: Add Generation to Consumers  https://review.openstack.org/556971 that is16:42
bauzasmmm ?16:42
bauzasit's 18:42 here and my brain dropped16:42
bauzasand I still have to write a spec :p16:43
*** mdbooth has quit IRC16:43
*** germs has joined #openstack-nova16:43
*** germs has quit IRC16:43
*** germs has joined #openstack-nova16:43
*** moshele has quit IRC16:44
*** andreas_s has quit IRC16:45
*** felipemonteiro_ has quit IRC16:45
*** david-lyle has joined #openstack-nova16:46
*** yamamoto has joined #openstack-nova16:48
*** janki has quit IRC16:49
dansmithbauzas: on this: https://review.openstack.org/#/c/554134/316:50
dansmithbauzas: I get the desire for consistency, but it seems to me that on POST, extra_specs would always be empty, and on PUT you can't modify them anyway (right?)16:50
dansmithI guess maybe that's a reason to do it for PUT, but POST seems completely trivial16:51
* bauzas looks16:51
bauzasdansmith: sec, verifying16:52
bauzasdansmith: because IIRC, you can *create* a flavor and pass a spec16:52
dansmithbauzas: see the comments in the spec that say no16:52
bauzasin the same novaclient call16:52
dansmithmaybe, but not on the actual POST of /flavors16:53
*** tssurya has quit IRC16:53
dansmithconfirmed in flavor_manage16:53
bauzashah16:53
*** yamamoto has quit IRC16:53
*** sree has joined #openstack-nova16:53
*** Guest59245 has quit IRC16:53
bauzasthen I trampled my brain16:53
dansmithit's not that it's a bad thing,16:53
bauzasif so, I agree with you16:53
*** dikonoo has joined #openstack-nova16:53
dansmithit's just like.. I'm not sure I get the point of doing a microversion for that16:53
dansmithmriedem: know anything about this?16:54
gibijaypipes, mlavalle: this etherpad discussion was really useful, thanks16:54
mlavallegibi, jaypipes: Thank You16:54
bauzasdansmith: I guess the main concern is "should we accept that ?"16:54
bauzasI mean, accepting to create a flavor with some extra specs16:54
dansmithwell, that's a different question/change than proposed by this spec addition :)16:55
*** germs_ has joined #openstack-nova16:55
bauzasright16:55
bauzashonestly, that would be a separate spec then16:55
bauzas-116:55
*** vivsoni has joined #openstack-nova16:56
cfriesenwhen asking placement for candidates, does nova-scheduler ask placement to limit how many candidates are returned, or does it get *all* of the possible candidates?16:56
mlavallegibi: so I won't review the Nova spec as is. I will wait for the next iteration16:56
*** vivsoni_ has quit IRC16:56
*** germs has quit IRC16:56
*** fragatina has joined #openstack-nova16:56
mriedemuh16:56
mriedemnever seen that spec16:56
mriedemoh wait,16:57
*** dave-mccowan has joined #openstack-nova16:57
gibimlavalle: sure, I will try to do it right now16:57
mriedemumm, that bp is already approved16:57
mriedemdansmith: bauzas16:57
bauzasmriedem: because the spec was approved for just *getting* extra specs16:57
mlavallegibi: ah ok, then I will take a look at it later today16:57
dansmithright, so I guess this is already going to bring in a microversion for the GET changes,16:57
mriedemoh this is an amendment16:57
gibimlavalle: I will ping you16:57
*** bkopilov has quit IRC16:57
mlavalleperfect16:57
bauzasmriedem: now, it asks to pass extra specs when creating16:57
dansmithso we just do these few extra bits in the same microversion16:57
dansmithbauzas: no it doesn't16:58
dansmithbauzas: it's about the return of those calls16:58
mriedemi need to read this change first16:58
dansmithso that seems okay then16:58
dansmithI was reading this as "a new microversion to return the must-be-empty extra specs from POST"16:58
bauzaswell, then it's impossible to get extra specs from a POST16:58
bauzassorry, it's late here16:58
dansmithbut if it's just done with everything else, then it's not such a big deal16:58
dansmithbauzas: right, like I said, it's good for consistency,16:58
mriedemyou can't create a flavor today with extra specs in a single call16:59
dansmithjust not worth its own microversion IMHO, but if this is in with the useful GET changes then not a big deal16:59
dansmithmriedem: right and this isn't aiming to change that16:59
*** derekh has quit IRC16:59
mriedemso if this is just saying, return extra_specs in the POST response, but it will be empty, fine16:59
dansmithjust make the return of POST consistent16:59
*** oomichi has joined #openstack-nova16:59
*** trinaths has quit IRC16:59
dansmithI just didn't see a point in doing a whole new microversion for that because.. always empty,16:59
bauzasdansmith: yeah that's what I understood when reviewing first the change16:59
dansmithbut if it's just alongside the GET changes16:59
dansmiththen that's cool16:59
bauzasdansmith: yeah, sorry, I got confused tonight17:01
bauzasI didn't remember it was just the returned fields17:01
mriedemhttps://review.openstack.org/#/c/554134/3/specs/rocky/approved/add-extra-specs-to-flavor-list.rst@17717:01
mriedemexactly ^17:01
bauzashonestly, passing the extra specs on both POST/PUT and GET with the same microversion seems harmless to me17:02
mriedempassing in the request or the response?17:02
mriedemthis is just the response17:02
bauzasno, in the response17:02
mriedemyes, it is fine17:02
bauzashence my original +217:03
mriedempresumably yikun hit this during implementation17:03
bauzasbut I was trampled by POST tonight17:03
bauzasand me thinking I was misunderstanding the spec17:03
* bauzas hides17:03
*** yamamoto has joined #openstack-nova17:03
openstackgerritMatt Riedemann proposed openstack/nova-specs master: Amend the "add extra-specs to flavor" for create and update API  https://review.openstack.org/55413417:03
mriedemhttp://trampledbyturtles.com/ ?17:04
dansmithmriedem: I was +W before you hit that so you send it when you're ready17:04
mriedemalready did17:04
bauzasnice band name17:04
efriedjaypipes: Here's one for ya.  I sent in separate granular resource requests for the same resource class, e.g. resources1=VF:1,BW:200&resources2=VF:1,BW:300.  Per design, it's possible to get back candidates where all of that comes from one RP.  So in that candidate you'll get back a *single* RP { VF: 2, BW: 500 }.  Now the virt driver (or neutron, or whatever) needs to go grab/create the actual resources...17:06
mriedemwhere are the john hopkins people when you need them?17:06
dansmithman I really should have reserved my +1 on efried until the end of the waiting period, just to keep him worry'n17:06
efriedjaypipes: I guess it has to look at the flavor to figure out that the BW should be split up 200 for one and 300 for the other?17:06
*** liangy has joined #openstack-nova17:06
*** Swami has joined #openstack-nova17:07
*** yamamoto has quit IRC17:08
*** fragatina has quit IRC17:09
*** yangyapeng has quit IRC17:09
efrieddansmith: I would have totally seen through that.17:09
*** yangyapeng has joined #openstack-nova17:09
mriedemthere is a 1 week period right?17:10
mriedembecause efried has been nit pickin the shit out of my patches lately17:11
*** bkopilov has joined #openstack-nova17:11
efriedmriedem: You like the abuse.17:11
efriedmriedem: I'm nitpicking your patches so dansmith will +1 me.17:11
mriedemthat reminds me,17:11
mriedemyou bastard,17:11
mriedemi have to switch back to KSA17:12
efriedWell, we should be getting rid of neutronclient eventually anyway.17:12
mriedemha17:12
efriedand using raw ksa.17:12
mriedemi will be dead and buried before that happens17:12
efriedlike we're doing with ironic17:12
cfriesenefried: why does placement combine the two requests?17:12
efriedBut you don't have to do that.  You can catch exceptions for error paths instead.17:12
mriedemefried: i'd rather just say don't raise errors and check the response status code17:13
efriedmriedem: You could shim the neutronclient...17:13
efriedkidding17:13
efriedcfriesen: This is code not yet written.  But if nothing else, it's because (I think) the syntax of the response has a dict keyed by resource class.  So there would be no way to split them up even if we wanted to (without some new syntax).17:13
*** felipemonteiro_ has joined #openstack-nova17:14
efriedcfriesen: But you raise a good point: since it's not written yet, we could still consider new syntax in the new microversion that introduces granular.17:14
efriedcfriesen: But that spec is already approved, so sorry, too late :P17:14
jaypipesefried: yes17:15
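
Spelled out, the case efried raises looks like this (illustrative request, and an illustrative fragment of one merged allocation request in the response, using his placeholder resource classes):

    GET /allocation_candidates?resources1=VF:1,BW:200&resources2=VF:1,BW:300

    # one possible candidate merges both groups onto a single provider:
    {"allocations": {"<rp_uuid>": {"resources": {"VF": 2, "BW": 500}}}}

So the consumer can no longer tell from the candidate alone that the bandwidth splits 200/300; it has to re-derive the split from the flavor, as jaypipes confirms above.
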
mriedemanyone know what is up with this sriov bond thing https://review.openstack.org/#/c/463526/ ?17:15
mriedemsahid might but he's gone now17:15
openstackgerritMerged openstack/nova-specs master: Few correction in the server filter/sort spec  https://review.openstack.org/52701917:17
mriedemdansmith: on the volume-backed rebuild + new image spec, i think i want to just say, add an api to cinder to re-image the volume and once that is in place, nova will use it17:17
mriedembecause cinder also has things it needs to update about the image in the volume, i.e. some image metadata stuff17:17
dansmithmriedem: that would certainly be the nicest way yeah17:17
mriedembecause i don't think we want to do the volume create / delete swap thing, it's too messy17:17
mriedemquotas, types, etc17:17
dansmithI don't think create/swap/delete would be the way anyway,17:18
dansmithwe'd just lay the image down on the volume ourselves I think,17:18
dansmithbut that's definitely a whole big thing17:18
sean-k-mooney[m]mriedem:  is this yet another attempt at this, or is it carrying on from the previous attempts17:18
mriedemand the volume image meta would be out of date17:18
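
One possible shape for the Cinder re-image API mriedem is proposing (entirely hypothetical at this stage; the action name and body are placeholders for whatever the Cinder spec would define):

    POST /v3/{project_id}/volumes/{volume_id}/action
    {"os-reimage": {"image_id": "<new image UUID>"}}

Having Cinder own this call would also let it update its own image metadata on the volume, addressing the staleness mriedem just pointed out.
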
cfriesenefried: if you send in two granular resource requests for resources1 and resources2, wouldn't it make sense to get back a dict of {resource1:<rp>, resource2:<rp>} or similar?  They could still be the same RP.17:18
*** yamamoto has joined #openstack-nova17:18
mriedemsean-k-mooney[m]: the sriov bond spec? it's old17:18
mriedemand looks stale/abandoned17:19
*** dave-mccowan has quit IRC17:19
sean-k-mooney[m]mriedem: oh that is the bond spec that wanted to use a neutron config element to configure bonds. ya, i hated that part of it17:20
sean-k-mooney[m]mriedem: ya there was also https://review.openstack.org/#/c/182242/ and there are 2-3 other ones dating back to mitaka17:20
*** AlexeyAbashkin has quit IRC17:21
mriedemdansmith: ok done https://review.openstack.org/#/c/532407/17:21
efriedgibi: In case you're in the middle, just finished review of https://review.openstack.org/#/c/502306/2117:21
sean-k-mooney[m]mriedem: but yes, no one has touched it since december so i think https://review.openstack.org/#/c/463526/43 is abandoned. it's still proposed against queens17:22
*** elmaciej has joined #openstack-nova17:22
*** sambetts is now known as sambetts|afk17:23
dansmithmriedem: so, we'll need to detach the volume in order for cinder to be able to do that, which is basically equivalent to root-detach, which we already failed to do in the past.. just.. sayin'17:23
mriedemi abandoned https://review.openstack.org/#/c/182242/ today17:23
gibiefried: thanks, I'm currently working on an update. I will try to address your comments in that as well17:23
*** yamamoto has quit IRC17:23
mriedemdansmith: detach or disconnect from the host?17:23
mriedemwe need to at least keep the volume reserved for the instance17:24
dansmithmriedem: can we do those separately?17:24
mriedemyes17:24
dansmithokay17:24
dansmiththen, disconnect I guess17:24
mriedemit's basically shelve17:24
dansmithas long as cinder doesn't have a fit with that internally17:24
mriedemshelve the root volume17:24
dansmithmriedem: ...which you can't do with bfv17:24
mriedemcinder would only care about the attachments17:24
dansmithwell,17:24
dansmithyou can maybe,17:24
dansmithbut not without detaching the root, which I guess is your point17:25
*** artom has joined #openstack-nova17:25
mriedemhonestly i don't know if volume-backed shelve works17:25
dansmithI guess the root detach thing was mostly hard because of the desire to attach it to something else in the mean time,17:25
mriedemsince snapshot for a volume-backed instance is different in the api17:25
mriedemand the shelve snapshot happens in hte compute17:25
dansmithwhich wouldn't be the same problem here17:25
dansmithso I think that makes more sense then yeah17:26
sean-k-mooney[m]mriedem:  for a volume-backed instance i would not expect you to snapshot it at all, just keep the volume17:26
dansmithjust want to make sure we're not sending them off on a mission that, upon completion, leaves more hard nova problems we might punt on17:26
mriedemshelve supports volume-backed instances, it just casts directly to offload17:26
mriedemdansmith: feel free to comment on the spec17:26
sean-k-mooney[m]mriedem:  for an image-backed guest we have to, so we can free up the space on the compute node, but for volumes there is no reason to clean it up in the backing store just to put it into an image17:27
mriedemi just can't imagine this is all better done inside nova17:27
dansmithmriedem: no definitely not, just thinking through it17:28
dansmithmriedem: you know, given all the many quagmires that came from nova-cinder interaction in the recent past17:28
sean-k-mooney[m]mriedem:  dansmith  shelve for a volume-backed instance should be just: shut down the instance, detach the volume and clean up host resources for the VM, no? then unshelve is just: select a host to boot on, set up networking etc., attach the volume and boot from it?17:29
mriedemyeah so for volume-backed shelve, there is no snapshot, on unshelve we just re-attach the volumes to the instance on the new host17:30
dansmithmriedem: you're saying that works today right?17:30
mriedemwe don't detach the volume, it stays reserved so someone else can't take it while the instance is shelved17:30
mriedemdansmith: looks like it should from the code,17:30
*** fragatina has joined #openstack-nova17:30
mriedemwould need to test it of course17:30
dansmithit's the disconnect that is the important bit for detaching it from the current host17:30
dansmithyeah17:30
sean-k-mooney[m]mriedem: well i don't know if that's what happens today, but i think that is what we should be doing17:30
mriedemon shelve offload we delete the instance on the host so that does the disconnect17:31
mriedemsean-k-mooney[m]: yes that's what happens today17:31
mriedemwe terminate connections on shelve offload, and then re-attach on unshelve17:31
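
Put together, the volume-backed shelve/unshelve flow mriedem describes is roughly the following (a sketch; the method names are illustrative, not the exact compute manager code, though terminate_connection is nova's real cinder API wrapper):

    def shelve_offload(self, context, instance, bdms):
        self._power_off_instance(instance)
        for bdm in bdms:
            # drop the connection to this host, but leave the volume
            # attached/reserved so nobody else can take it while shelved
            self.volume_api.terminate_connection(
                context, bdm.volume_id, self._get_connector())
        self._cleanup_instance_on_host(instance)  # frees host resources

    def unshelve(self, context, instance, bdms, dest_host):
        for bdm in bdms:
            # re-establish the connection and reattach on the new host
            self._connect_volume(context, bdm, instance)
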
kashyapimacdonn: I'll write an e-mail to the Operators-List / Dev this week, it's better if we discuss the version stuff there.17:33
*** yamamoto has joined #openstack-nova17:33
*** dave-mccowan has joined #openstack-nova17:34
imacdonnkashyap: OK17:34
*** mdnadeem has quit IRC17:34
mriedemdansmith: left a note in https://review.openstack.org/#/c/532407/ about the order of operations so we don't forget17:34
kashyapcfriesen: Hey, was AFK (and I will be again in 10 mins).  About your question --17:34
dansmithmriedem: cool17:35
mriedemsame guys are pushing for volume-backed rescue https://review.openstack.org/#/c/532410/17:35
kashyapcfriesen: As it stands, we are allowing this one choice to alleviate the existing problem.  The allowing flags for 'host-model' thing, we can work it out when we lift the restriction17:35
openstackgerritMerged openstack/nova-specs master: Amend the "add extra-specs to flavor" for create and update API  https://review.openstack.org/55413417:35
sean-k-mooney[m]mriedem:  this is a different topic so let's not rat-hole on it, but if i shelve an instance with an active multiattach volume will one of the other attachments become active? i guess you can't boot from a multiattach volume either, right?17:35
kashyapcfriesen: Does that sound reasonable to you?17:35
mriedemsean-k-mooney[m]: you can boot from a multiattach volume17:36
mriedemvolume attachments aren't like port bindings, there isn't an active one and an inactive one17:36
mriedemthey do have attach modes, so r/o and r/w17:36
openstackgerritArtom Lifshitz proposed openstack/nova-specs master: NUMA-aware live migration  https://review.openstack.org/55272217:37
mriedemwhich is related to https://review.openstack.org/#/c/552078/17:37
*** yamamoto has quit IRC17:38
mriedemildikov: heh look familiar? https://github.com/openstack/nova/blob/master/nova/compute/api.py#L355417:38
kashyapcfriesen: I'll respond on the review.17:38
sean-k-mooney[m]mriedem:  yes it's the modes that i was thinking of. i was wondering if one of the other attachments would become r/w, but i guess that would be an explicit call if you wanted that to happen, not something that magically happens if you shut down or shelve an r/w instance17:38
mriedemcorrect, there is no auto-change of the mode when one attachment goes away17:39
*** elmaciej has quit IRC17:39
mriedemthat is discussed as an option for attachment counting in https://review.openstack.org/#/c/552078/17:39
*** yamamoto has joined #openstack-nova17:40
*** yamamoto has quit IRC17:40
sean-k-mooney[m]mriedem:  lol how many volume specs do we have for this cycle :)17:40
*** tesseract has quit IRC17:40
*** yangyapeng has quit IRC17:40
sean-k-mooney[m]i guess that's only 2, it just feels like more17:41
*** artom has quit IRC17:41
mriedemvolume-backed rebuild, rescue, and backup17:41
mriedemfrom the same company17:41
mriedemplus mine17:41
mriedemthat's nowhere near the number of placement specs17:41
cfriesenkashyap: sorry, distracted by local stuff.  yeah, I'm fine with it for the backport.  I don't really care personally (mostly use specific cpu models for live migration) but wanted to bring it up just so it was explicitly considered.17:43
sean-k-mooney[m]mriedem:  true. regarding https://review.openstack.org/#/c/552078/1/specs/rocky/approved/volume-multiattach-enhancements.rst in general, do we want to add more multiboot apis to nova? and if so, what is the main delta between X servers with volume Y and X servers with Y volumes each?17:43
*** arvindn05 has quit IRC17:43
sean-k-mooney[m]mriedem: that was the other main discussion we had with cinder, right: should nova provide a way to consume the fact that several hardware backends support creating multiple volumes at once, to create many servers each with volumes in one call17:44
ildikovmriedem: heh, I guess that's more of a workaround than a leftover...17:45
kashyapcfriesen: No problem.  So quick point: 'host-model' + PCID doesn't make sense anyway:17:46
kashyapcfriesen: If QEMU already supports PCID, it would be enabled by 'host-model'.  And if it's not supported, adding it doesn't make it magically appear :-)17:47
*** suresh12 has joined #openstack-nova17:48
efriedcdent: "Steal an extra space from efried and put it here."  Dick.17:49
*** dikonoo has quit IRC17:49
edleafeefried: I wouldn't *dream* of ever breaking up your double spaces!17:50
efriededleafe: That's right.  They're MINE.17:50
* jroll applauds cdent17:50
*** pcaruana has quit IRC17:51
*** tssurya has joined #openstack-nova17:51
edleafeefried: Hey, I don't want to be the only one who people think grew up on typewriters instead of computers17:52
efriedThese young whippersnappers don't understand us edleafe17:53
mriedemsean-k-mooney[m]: i don't know what you're saying17:54
mriedemsean-k-mooney[m]: this isn't nova creating multiple servers and multiple multiattach volumes, nova doesn't create multiattach volumes,17:54
sean-k-mooney[m]mriedem: your spec is suggesting allowing booting 10 instances with this multiattach volume17:54
mriedemit's boot from volume with an existing multiattach volume that can be attached to more than one server17:55
mriedemsean-k-mooney[m]: if an admin doesn't want to allow multiattach bfv, they can control that via policy volume:multiattach_bootable_volume17:56
sean-k-mooney[m]mriedem:  in the cinder cross-project meeting there was also a request for "can i create 10 instances with 1 volume each"17:56
mriedemsean-k-mooney[m]: that was multicreate17:56
mriedemas far as i know, you can create 10 volume-backed instances in a single server create request today17:57
mriedemas long as nova is creating the volumes17:57
mriedemif it's a pre-existing volume, we don't allow that17:57
sean-k-mooney[m]mriedem:  for your spec you would create the multiattach volume in cinder first, then ask nova to boot 10 instances each using that multiattach volume17:57
mriedembecause that flow does not (yet) support multiattach volumes17:57
mriedemyes17:58
sean-k-mooney[m]mriedem: yes the other usecase was multi create17:58
mriedemso with attach_mode in the bdm,17:58
mriedemwell, nvm17:58
mriedembdms aren't indexed per server in the create request17:59
mriedembut i was going to say, i could create 3 instances in a single request, with 3 bdms to the same multiattach volume, but only one has the r/w attachment, and the other two are r/o17:59
sean-k-mooney[m]ya ok, well it looks like a parity thing to me, not a really large change in the semantics of the api: you are just allowing setting the volume and attachment mode when making the multi-boot request18:00
mriedemany boot request, including multi18:00
mriedemyes18:00
mriedembut as noted ^ the bdms aren't indexed in a multicreate request,18:00
mriedemso the bdm attach_mode would be the same for all of them18:01
sean-k-mooney[m]mriedem:  is that what the spec wants to allow, or what you can do today? today you would have to do that with 3 nova boot requests, correct?18:01
mriedemif you wanted them to each have different attach modes, yes18:02
mriedemi'm not proposing that we change the bdm semantics in multicreate server to be indexed18:02
sean-k-mooney[m]mriedem:  and do you want to allow something like boot 10 instances and have 4 r/w and 6 r/o, or is that out of scope18:02
mriedemthat's out of scope18:02
sean-k-mooney[m]ah they would all have the same mode18:02
sean-k-mooney[m]ya, mixing modes would be messy on the command line and in the api18:02
mriedemthe list of bdms in the server create request are copied per instance that gets created18:02
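A sketch of what that per-instance copying means in practice. The request shape follows the compute API's block_device_mapping_v2; the copy loop is a simplification of what the API does for multicreate, and the uuid is a placeholder.

```python
import copy

# One bdm list in the create request; with multicreate (min_count > 1)
# the same list is applied to every instance, so a hypothetical per-bdm
# attach_mode would necessarily be identical across all of them.
request = {
    "server": {
        "name": "multiattach-bfv",
        "min_count": 3,
        "block_device_mapping_v2": [{
            "uuid": "pre-existing-multiattach-volume-uuid",
            "source_type": "volume",
            "destination_type": "volume",
            "boot_index": 0,
        }],
    }
}

bdms = request["server"]["block_device_mapping_v2"]
for i in range(request["server"]["min_count"]):
    instance_bdms = copy.deepcopy(bdms)  # copied per instance, not indexed
    print(f"instance {i}: {instance_bdms}")
```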
sean-k-mooney[m]so is it basically just a check in the api that says you can pass a volume on multi boot today?18:04
efriededleafe: commented on https://review.openstack.org/#/c/556971/18:04
mriedemsean-k-mooney[m]: can't18:04
mriedembut yes18:04
mriedemit's linked from the spec i believe18:04
openstackgerritMatt Riedemann proposed openstack/nova master: api-ref: add a note about volume-backed rescue not being supported  https://review.openstack.org/55699618:04
sean-k-mooney[m]mriedem:  cool, i'll read it properly later. it makes sense to me how you describe it18:06
edleafeefried: thanks. Will update shortly18:08
*** suresh12 has quit IRC18:09
*** ttsiouts_ has joined #openstack-nova18:09
*** psachin has joined #openstack-nova18:10
*** suresh12 has joined #openstack-nova18:10
mriedemstephenfin: we don't need this mypy spec do we? https://review.openstack.org/#/c/538217/ i thought you already started making those changes, much to my chagrin18:11
mriedems/chagrin/vexation/18:12
mriedemthanks google18:12
*** gjayavelu has joined #openstack-nova18:13
efriedcfriesen: "efried: if you send in two granular resource requests for resources1 and resources2, wouldn't it make sense to get back a dict of {resource1:<rp>, resource2:<rp>} or similar?  They could still be the same RP." Sorry, I let this scroll by...18:14
efriedcfriesen: Yes, something like that would make sense.  But we don't have that in the plan at the moment.18:14
efriedcdent, edleafe, jaypipes: How do you feel about the granular microversion changing the response payload to something like ^ to preserve the division of the requests?18:16
*** rmart04 has joined #openstack-nova18:16
*** rmart04 has quit IRC18:16
efriedgibi has already identified a place where that would be useful.18:17
cdentefried: which response payload are you talking about?18:17
efriedcdent: The response from GET /a_c18:17
edleafeefried: the purpose of the granular request is to ensure that each of those requirements is met by a single RP. What would be the advantage of splitting the out if it turns out that they are the same RP?18:17
*** elmaciej has joined #openstack-nova18:17
cdenta_r or p_s?18:17
efriedcdent: The format of an allocation_request, I think.18:17
efrieda_r, definitely not p_s18:18
*** voelzmo has joined #openstack-nova18:18
cdentefried: I'm not visualizing what you're really suggesting, could you paste something somewhere, or link me to some context?18:18
cdent(I'm stuck on "how is this different from what we already do")18:18
efriedcdent: edleafe: The example I gave above was: resources1=VF:1,BW:200&resources2=VF:1,BW:300 may result in an allocation_request like RP_X: { VF: 2, BW: 500 }.  Without consulting the original request (which, as in gibi's neutron case, may not be available), the caller can't tell that one VF should get BW:200 and the other should get 300.18:19
*** voelzmo_ has joined #openstack-nova18:20
efriedcdent, edleafe: So the suggestion is that the response should instead look like:  { resources1: { RP_X: { VF: 1, BW: 200} }, resources2: { RP_X: { VF: 1, BW: 300 } } }18:20
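For concreteness, the two response shapes being compared. The per-group form is only efried's proposal here, not the actual GET /allocation_candidates payload, and both shapes below are simplified.

```python
# Today: groups landing on the same provider come back merged.
merged = {
    "allocations": {"RP_X": {"resources": {"VF": 1 + 1, "BW": 200 + 300}}}
}

# Proposed: keep the request-group division so the caller can still tell
# which VF was meant to get which bandwidth.
per_group = {
    "resources1": {"RP_X": {"VF": 1, "BW": 200}},
    "resources2": {"RP_X": {"VF": 1, "BW": 300}},
}

print(merged)
print(per_group)
```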
cdentefried: and in this case we're on the same pf, and it is the pf that is being the resource provider, yes?18:21
*** felipemonteiro__ has joined #openstack-nova18:21
efriedcdent: yes18:21
efriedRP_X is a PF provider of VF and BW resources.18:21
cdentefried: does the caller still know the request it made?18:21
cdentor still have whatever info it used to make the call?18:21
efriedcdent: That depends on the scenario.  But in general ima say no.18:22
sean-k-mooney[m]mriedem:  regarding https://review.openstack.org/#/c/532410/4/specs/rocky/approved/volume-backed-server-rescue.rst i think they just want to do a normal image-based nova rescue but for boot-from-volume instances. so the image size bits of the spec are just saying the rescue image is not empty18:22
gibicdent: neutron provide resource request pieces via the neutron port API, nova combines them to one big a_c request18:22
*** fragatina has quit IRC18:22
*** voelzmo has quit IRC18:22
cdentIn general ima say that changing the allocation format again would make me sad, especially to represent granular stuff which seems icky somehow (will have to think a bit longer on why). Which is not "god no, I hate that" but rather "hmm, I'm not immediately in love with this"18:23
efriedcdent: I think the compute manager is the thing that builds the request, and the report client sends it down and gets the response, but the thing that needs to correlate the allocations to real resources is the virt driver (or in gibi's case, neutron (maybe a neutron agent)), and we're not giving that information to those guys.18:23
cdentI'm uncomfortable with placement being use as state manager for the interaction between nova and neutron18:24
efriedeh, state manager?  I don't see that.18:24
efriedAll we're doing is providing the caller of GET /a_c with more detailed information about how the allocation_request was fulfilled.18:24
sean-k-mooney[m]cdent: well it's not state management. we just need to know how to correlate placement RP inventory to physical resources on the host18:24
efriedI.e. I took *this* request group to make *this* chunk of allocations.18:25
*** psachin has quit IRC18:25
*** felipemonteiro_ has quit IRC18:25
*** dtantsur is now known as dtantsur|afk18:25
cdentsounds like communicating state by proxy to me but maybe I'm peculiar18:25
efriedcdent: In the case of the virt driver, he can *probably* glean that information by looking back at the flavor... but now we also have resources/traits in the image, and gibi will also have them in the port, and future us may have...  So asking virt to reconstruct all of that, which we currently do in nova, seems like a really bad idea.18:25
*** suresh12 has quit IRC18:26
* cdent scrunches his brow18:26
efriedcdent: It's not state.  I see it as the difference between, "You asked for X and Y.  Here's XY," and "You asked for X and Y.  Here's X, Y."18:26
cdentwho is making the request for allocations in the example you gave?18:27
cdentsorry candidates18:27
efriedthe scheduler report client is the thing making the placement API call.18:27
cdentright so nova is asking placement to construct data in a particular way so it can communicate something to nova18:28
cdents/nova$/neutron/18:28
efriedNo, if that's the part that's bothering you, let's remove neutron from the equation.18:28
efriedJust looking at nova talking to nova.18:28
efriednova synthesizes resource+trait info from flavor, image, request spec, whatever sources.  It comes up with a ResourceRequest (which is a set of RequestGroup) by the time it's ready to ask for allocation candidates.18:29
sean-k-mooney[m]cdent: we have the same problem with numa in nova alone18:29
sean-k-mooney[m]cdent: placement has 2 numa RPs, both with cpus. placement claims against one of the RPs; how do i know which physical numa node that allocation is against18:30
*** suresh12 has joined #openstack-nova18:30
*** suresh12 has quit IRC18:30
cdentso you have some state in the flavor, you use it to make a request and because you don't want to look back at that state in the flavor, you want to extend placement to transmit that state (again I'm not saying this is the worst thing ever, just trying to identify what's being done)18:30
efriedsean-k-mooney[m]: Not that, no.18:30
*** suresh12 has joined #openstack-nova18:30
openstackgerritBalazs Gibizer proposed openstack/nova-specs master: Network bandwidth resource provider  https://review.openstack.org/50230618:30
efriedHow is what's in the flavor "state"?  It's a set of resource requests.18:31
efriedAs currently conceived, we'd be summing up resource requests that happen to land in the same RP.18:31
cfriesendoes anyone know why we plug vifs at nova-compute startup, but then also plug them again (but don't wait for them) when powering on the instance? (for libvirt anyway)18:31
efriedThat takes away information.18:31
sean-k-mooney[m]cfriesen: linux bridge18:32
efriedIf they happened to be assigned from different RPs, we would *have* that information.18:32
efriedBut since they happened to be assigned from the same RP, they get rolled together, and we lose that information.18:32
*** suresh12 has quit IRC18:32
sean-k-mooney[m]cfriesen:  the plug on startup is because on a reboot the tap won't be added to the linux bridge, because linux bridge does not persist its state. we then need to do it as part of boot a second time18:33
efriedCertainly when the virt driver is trying to grab real resources corresponding to an allocation, it needs to be able to correlate the RC+RP back to something real.18:33
gibimlavalle, jaypipes: I updated the bandwidth spec based on our discussion.18:34
cdentI think may be using the term "state" far more generally than you are, but that's probably not all that germane18:34
jaypipesgibi: ty gibi.18:34
*** eharney has joined #openstack-nova18:35
*** AlexeyAbashkin has joined #openstack-nova18:35
gibiefried: I don't have the brainpower any more to think through you mapping comments in the bandwidth spec today. I will get back to that tomorrow18:35
efriedcdent: looklook, here's another example.  Today I have no way of requesting two different disks from placement.  With granular, I could.  I want a 1G disk and a 2G disk, so I would say: resources1=DISK_GB:1024&resources2=DISK_GB:2048.18:35
gibiefried: thanks for pushing the discussion forward with cdent right now :)18:35
efriedcdent: In a scenario where I have sharing providers, or maybe multiple disk providers in my tree, or whatever, I may get back a candidate like { STOR_RP1: { DISK_GB: 1024 }, STOR_RP2: {DISK_GB: 2048 } }18:35
* gibi goes home18:35
* efried waves at gibi18:35
* gibi waves back18:36
cdentefried: I'm still trying to think this through, but it's slow going because I need to get past a fundamental issue for me: I really don't want to expose granularity in allocations...18:36
edleafeefried: in that scenario, there isn't anything preventing placement from getting both "disks" from the same disk RP18:36
efriedcdent: And that would be fine, cause now my virt driver can tell it needs to get 1024 from STOR_RP1 and 2048 from STOR_RP2.18:36
efriededleafe: Exactly.  In which case, as currently conceived, we would report a candidate like: { STOR_RP1: { DISK_GB: 3072 } }18:36
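A toy illustration of the information loss being described: once per-group amounts are summed into one candidate per provider, the split cannot be recovered.

```python
from collections import defaultdict

# Two granular groups that happen to be satisfied by the same provider.
groups = {
    "resources1": {"DISK_GB": 1024},
    "resources2": {"DISK_GB": 2048},
}

# Merging per provider, as the candidate is reported today:
candidate = defaultdict(int)
for amounts in groups.values():
    for rc, amount in amounts.items():
        candidate[rc] += amount

print({"STOR_RP1": dict(candidate)})  # {'STOR_RP1': {'DISK_GB': 3072}}
# From 3072 alone, the consumer cannot recover "one 1G disk + one 2G disk".
```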
cfriesensean-k-mooney[m]: I get the boot-time one...why do we need to do it again at instance boot if vifs_already_plugged=True ?18:36
cdentdidnt we have a discussion (or was it in my mind) recently about the difference between contiguous and non-contiguous resource providers?18:37
efriedcdent: Somehow virt needs to figure out that that's really a 1G disk and a 2G disk.  How does he figure that out?  Where does he get that information?18:37
efriedcdent: Yes, this is related to that discussion.  It's why we couldn't e.g. split a VCPU:2 request across two numa nodes.18:37
edleafeefried: what I'm saying is that there is no way to express "these must be two separate RPs"18:37
efriededleafe: Yes, but in this case that's not something I need to express.18:38
edleafewhat granular gets you is "everything in this group must be on the same RP tree"18:38
efriededleafe: What I *am* trying to express is that "these must be two separate *disks*".18:38
*** ttsiouts_ has quit IRC18:38
sean-k-mooney[m]cfriesen:  i don't think vifs_already_plugged will be true in the boot case, just on soft reboot18:38
mriedemare people ok with me just self approving this spec update to match the actual multiattach implementation? https://review.openstack.org/#/c/544152/18:38
efriededleafe: ...which I expressed by putting them into separate granular groups.18:38
sean-k-mooney[m]cfriesen: are you using hybrid plug?18:38
sean-k-mooney[m]cfriesen:  e.g. are you using the iptables firewall driver or conntrack18:39
efriedmriedem: Given the +1s that are on it, fine by me.18:39
cfriesensean-k-mooney[m]: for power-on vifs_already_plugged is true18:39
*** ttsiouts_ has joined #openstack-nova18:39
edleafeefried: that's not what granular is for, is it? It's to ensure that all the requirements in a single group come from the same RP tree18:39
*** AlexeyAbashkin has quit IRC18:39
edleafenot that every group must come from different trees18:39
mriedemefried: done18:39
mriedemthnaks18:39
mriedemthanks even18:39
efriededleafe: "what granular gets you is "everything in this group must be on the same RP tree"" -- no, "everything in this group must be on the same RP".  But also it allows you to request different resources of the same class.18:40
openstackgerritKashyap Chamarthy proposed openstack/nova master: libvirt: Allow to specify granular CPU feature flags  https://review.openstack.org/53438418:40
sean-k-mooney[m]if vifs_already_plugged is true then we should not be calling plug at least in the os-vif case18:40
*** yamamoto has joined #openstack-nova18:40
efriededleafe: The canonical example being two VFs on different networks.  resources1=VF:1&required1=PHYSNET_A&resources2=VF:1&required2=PHYSNET_B18:40
efriededleafe: No way to do that in a single group.18:40
sean-k-mooney[m]cfriesen: if libvirt is doing the plugging (kernel ovs with conntrack or the noop security group driver) then it's likely a side effect of libvirt's ovs code18:41
edleafeefried: of course18:41
efriededleafe: And in that case, you would be assured that the resources came from different providers (because presumably one PF can't be on two physnets at once), so you would be fine.18:41
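The canonical two-physnet request spelled out as a query string, following the granular spec's numbered-group syntax; the trait names below are illustrative.

```python
from urllib.parse import urlencode

# Two VFs forced onto different providers by distinct required traits;
# presumably one PF cannot be on two physnets at once.
params = [
    ("resources1", "SRIOV_NET_VF:1"),
    ("required1", "CUSTOM_PHYSNET_A"),
    ("resources2", "SRIOV_NET_VF:1"),
    ("required2", "CUSTOM_PHYSNET_B"),
]
print("GET /allocation_candidates?" + urlencode(params, safe=":"))
```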
cdentat one point I thought we had decided that if we wanted to do that the vf's had to be resource providers18:41
edleafeefried: but if you have resources1=VF:1&required1=PHYSNET_A&resources2=VF:1&required2=PHYSNET_A you don't necessarily get different PFs18:41
efriedcdent: No, the PFs can still be providers.18:41
cfriesensean-k-mooney[m]: LibvirtDriver._create_domain_and_network() plugs the vifs18:41
efriededleafe: Yes, correct.18:41
efriededleafe: In the case of VF resources, where one VF is one resource, that's okay.18:42
edleafeefried: but you're claiming that separate groups return separate RPs18:42
efriededleafe: But now let's add bandwidth to the mix.18:42
*** oomichi has quit IRC18:42
cfriesensean-k-mooney[m]: unconditionally, then decides whether or not to wait for them based on vifs_already_plugged18:42
efriededleafe: No, only if traits make them split.18:42
cdentefried: I know they _can_ be, but I thought we said that for the use case you are describing, making it work requires cn .... pf -> vf all as providers18:42
mriedemsean-k-mooney[m]: it appears the sriov bond port stuff moved to neutron specs https://review.openstack.org/#/c/506066/ and talks about new configs in nova.conf for how that bonding actually gets configured on the compute host...18:42
*** suresh12 has joined #openstack-nova18:42
sean-k-mooney[m]cfriesen: yes. for a new instance that will be the only time the vif is plugged18:43
*** voelzmo_ has quit IRC18:43
*** ttsiouts_ has quit IRC18:43
cfriesensean-k-mooney[m]:  right, but that happens on a resize, or a poweron, so we're plugging the vif again unnecessarily18:43
sean-k-mooney[m]mriedem:  :( really that just makes me sad. we should not have any config options needed for bonding18:43
edleafeefried: you get separate RPs if the requirements cannot be met by a single RP. But if a single RP can satify all groups, there is nothing preventing that from being selected18:43
*** ttsiouts_ has joined #openstack-nova18:43
efriedcdent: That helps a little, but still leaves us no way to model bandwidth.18:43
edleafeThat's what I'm trying to say18:43
efriededleafe: Yes, you are correct.18:44
*** avolkov has quit IRC18:44
efriededleafe: And in that case, in the VF example, if you add bandwidth - which is, how did cdent put it, a "contiguous" resource? - then we can no longer figure out how much bandwidth each VF should get just from the allocation_request.18:44
sean-k-mooney[m]cfriesen: well not necessarily. on power-on we should not need to call LibvirtDriver._create_domain_and_network(); the domain should still exist. it should just be "virsh start" effectively18:45
*** fragatina has joined #openstack-nova18:46
cfriesensean-k-mooney[m]: power_on calls hard_reboot which calls create_domain_and_network18:46
sean-k-mooney[m]cfriesen:  on resize we will be resizing to a new host, no? so we need to set up the vif there18:46
cfriesensean-k-mooney[m]: can resize to same host depending on config option18:47
cdentefried: I think I'm too tapped out to really form any solid opinion on this today. Can we rejoin in progress another time, or perhaps do it in writing?18:47
*** yamamoto has quit IRC18:47
efriedcdent: Yeah, I think I'll write it up as a delta to the granular spec, gods help me.18:47
cfriesensean-k-mooney[m]: _hard_reboot() undefines the domain18:47
sean-k-mooney[m]cfriesen:  uh, why? i'm sure there is a reason but that's wrong18:47
efriedcdent: Maybe a ML post would be more expeditious.18:47
*** pchavva has quit IRC18:48
sean-k-mooney[m]cfriesen: but for resize we go via the scheduler so we can't assume we will land on the same host18:48
cdentI think email would be an easier starting point perhaps?18:48
efriedight18:48
sean-k-mooney[m]cfriesen:  yes it does, but we should not be hard rebooting when we resize or power on a vm18:49
cfriesensean-k-mooney[m]: hmm, does the fact that we undefine and then define a new domain affect the need to plug the vifs again?18:49
*** suresh12 has quit IRC18:49
sean-k-mooney[m]cfriesen: actually, so again for linux bridge we would need to plug again18:49
sean-k-mooney[m]when qemu exits the tap would be removed from the kernel and the linux bridge18:50
cfriesensean-k-mooney[m]: from the code comments:  "We use _hard_reboot here to ensure that all backing files, network, and block device connections, etc. are established and available before we attempt to start the instance."18:50
cfriesensean-k-mooney[m]: okay, that explains it then18:50
sean-k-mooney[m]so when we power on we have to add it back18:50
sean-k-mooney[m]for ovs this is persisted in the db and not needed18:50
*** moshele has joined #openstack-nova18:51
cfriesensean-k-mooney[m]: persisted over a compute node reboot?18:51
sean-k-mooney[m]cfriesen: is this causing a bug for you or are you just interested in why it's working this way?18:51
cfriesensean-k-mooney[m]: we're having performance issues and races on compute node startup in a small edge node.18:51
cfriesensean-k-mooney[m]: trying to figure out what my options are18:52
sean-k-mooney[m]cfriesen:  ya, the port is stored in the ovs db, so when the tap shows up the revalidator threads in ovs-vswitchd will detect it and add it18:52
openstackgerritMerged openstack/nova-specs master: Amend volume multi-attach spec  https://review.openstack.org/54415218:52
sean-k-mooney[m]cfriesen:  i assume you have configured nova to start all instances on server reboot18:53
cfriesensean-k-mooney[m]: nope, there's another component that's in charge of that.  but nova will plug all the vifs on nova-compute startup even if the instances are powered off.18:53
*** ttsiouts_ has quit IRC18:54
*** moshele has quit IRC18:54
cfriesenthat slows down neutron, which causes it to be delayed responding to something else, which times out and does exponential backoff...and the end result is that one or two instances take a long time to become pingable18:54
*** trozet has quit IRC18:55
sean-k-mooney[m]cfriesen:  ah ok. i still think it's a bug that nova plugs the vifs on compute agent startup. we should probably revisit whether it's really needed. as you said, we unconditionally plug the vifs when starting a vm, so i'm not sure what edge case still requires it18:55
sean-k-mooney[m]the current behavior dates back to nova-network with linuxbridge, but we do several things differently now18:56
cfriesensean-k-mooney[m]: if we did get rid of the vif plugging at startup, would we have to wait for the network events in _create_domain_and_network() when powering on?18:57
sean-k-mooney[m]cfriesen:  we can't wait because only ovs with the neutron agents sends those events correctly18:58
sean-k-mooney[m]cfriesen:  linux bridge does not send them reliably, and odl only sends them when the port is first bound, not when ports are plugged18:59
cfriesensean-k-mooney[m]: we do wait on instance spawn though, is that different?19:00
*** germs_ has quit IRC19:00
sean-k-mooney[m]cfriesen: we have no idea what happens for all the rest of the backends, but that is why we don't wait currently19:01
*** yamamoto has joined #openstack-nova19:01
*** suresh12 has joined #openstack-nova19:01
cfriesensean-k-mooney[m]: kay, thanks for the info.19:02
sean-k-mooney[m]on instance spawn you mean on first boot? and yes, that would be different, as all backends send a vif-plugged event in that case19:02
sean-k-mooney[m]cfriesen: by the way it's okay for us to wait for os-vif to plug the vif; we just can't wait for neutron to say it's wired up. we would like to, but all the ml2 drivers do different things19:03
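The plug-then-maybe-wait pattern under discussion, as a generic sketch. Nova's real version goes through the virt driver and neutron's network-vif-plugged external events; the names below are illustrative only.

```python
import threading

vif_plugged = threading.Event()

def on_network_vif_plugged(port_id):
    # stand-in for neutron's network-vif-plugged external event
    print(f"got vif-plugged for {port_id}")
    vif_plugged.set()

def plug_and_maybe_wait(port_id, backend_sends_events):
    print(f"plugging {port_id}")  # the os-vif / libvirt plug itself
    if backend_sends_events:
        # safe for ovs + the neutron agent; not for linuxbridge or ODL
        if not vif_plugged.wait(timeout=5):
            raise TimeoutError(f"no vif-plugged event for {port_id}")
    # otherwise proceed without waiting, as described above

on_network_vif_plugged("port-1")  # simulate the event arriving
plug_and_maybe_wait("port-1", backend_sends_events=True)
```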
mlavallethanks gibi :-)19:03
*** voelzmo has joined #openstack-nova19:03
*** tssurya has quit IRC19:07
*** yamamoto has quit IRC19:08
*** lpetrut has quit IRC19:11
openstackgerritMatt Riedemann proposed openstack/nova-specs master: Non-unique network names in Servers IPs API response  https://review.openstack.org/52139219:15
mriedemsean-k-mooney[m]: that reminds me, you know how for the new port binding live migration spec we talked about adding a wait on the source host for the vifs to be plugged on the dest host during pre_live_migration?19:16
mriedemi think that is likely not possible now because of ODL19:16
mriedemsince ODL won't send a vif plugged event until the port binding changes, right?19:16
mriedemwhich currently happens in post live migration19:16
sean-k-mooney[m]mriedem:  yes, didn't you already remove that in your code? we were talking about this last week i think19:17
mriedemi'm almost inclined to add a [workarounds] config option to disable that wait for vif plugged, but enable the wait by default19:17
mriedemit's a todo in the code right now19:17
mriedemso for ovs and LB we'd default to wait,19:17
mriedembut if you're using ODL, you disable the wait19:17
sean-k-mooney[m]ya, odl won't send the event until there is a neutron port update, which will only happen when we activate the new binding as a result of the change to the host-id in the vif binding details dict19:18
sean-k-mooney[m]mriedem:  we can't tell if it's odl or ovs from nova, so we can't wait19:19
*** voelzmo has quit IRC19:19
sean-k-mooney[m]we can wait for the compute to tell us it has plugged the interface; we just can't wait for neutron to say it's wired up19:19
*** voelzmo has joined #openstack-nova19:19
mriedemsean-k-mooney[m]: i know we can't19:19
mriedembut the operator can19:19
*** voelzmo_ has joined #openstack-nova19:19
mriedemso for live migration, we can default to wait, assuming you're using a networking backend that doesn't suck19:20
mriedemand if you are, then you need to configure nova-compute to not wait for vif plugged events during live migration19:20
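What such a knob could look like with oslo.config. The option name, group, and default below are hypothetical; the conversation hadn't settled either the name or whether to wait by default (the conservative default here follows the upgrade concern raised just after).

```python
from oslo_config import cfg

CONF = cfg.CONF

# Hypothetical option sketch: operators on backends that never send the
# event for the dest host (e.g. ODL) could turn the wait off.
opts = [
    cfg.BoolOpt("live_migration_wait_for_vif_plug",
                default=False,
                help="Wait for network-vif-plugged events from the "
                     "destination host before starting a live migration."),
]
CONF.register_opts(opts, group="workarounds")

if __name__ == "__main__":
    CONF([])
    print(CONF.workarounds.live_migration_wait_for_vif_plug)
```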
*** voelzmo_ has quit IRC19:20
mriedemto other specs cores, i think https://review.openstack.org/#/c/521392/ is a no-brainer19:20
*** voelzmo has quit IRC19:20
sean-k-mooney[m]oh yes, we could have a per-compute config option, or i guess a cloud-wide one for the conductor19:20
mriedemconductor doesn't wait19:21
mriedemcompute does19:21
*** gouthamr has quit IRC19:21
sean-k-mooney[m]mriedem:  so in pre_live_migration would we have the dest read the config value and stick it in the migration data19:21
mriedemumm19:22
mriedemi figured the source would read the config since the source host is what's waiting19:22
mriedemif the wait has to depend on the dest, then yeah maybe that has to go into the migrate data object19:22
mriedemif the dest host is using ODL but the source is using OVS19:22
sean-k-mooney[m]well since the dest might have a different network backend to the source, we should get the dest value, no?19:23
mriedemprobably19:23
sean-k-mooney[m]we probably also want to default to not waiting, to not break upgrades, as that is the default behavior today19:24
sean-k-mooney[m]we can then change the default after 1 release19:24
mriedemoh i suppose19:25
sean-k-mooney[m]or put something in the neutron port binding so neutron can tell us if we should wait or not in the future19:25
mriedemolder dest computes wouldn't send the value anyway19:25
mriedemneutron telling us if waiting is safe would be great, but i'm not driving that on the neutron side19:26
*** itlinux has joined #openstack-nova19:26
sean-k-mooney[m]well the spec said we would only use the new flow if they both supported it, but ya, an old dest would not send it, so it's safer to assume don't wait19:27
sean-k-mooney[m]mriedem:  honestly i don't think that would be too hard to enable in neutron. i don't know if i have time to look at it but it should be a minor enough api extension19:28
mriedemi mention the old dest compute thing because we had also talked about doing this separately from the blueprint as a bug fix so we could backport it19:28
mriedembut if it requires changes to the versioned object, we can't really backport it19:28
mriedemthat was also before we knew that ODL didn't send events19:28
sean-k-mooney[m]basically i think we could have a single key in the port binding dict, vif_plugged_event_policy, that could be never, on_bind or on_plug, then just hard-code it for each of the ml2 drivers19:29
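sean-k-mooney[m]'s idea as data: a hypothetical key in the port binding details that nova could consult. Nothing below exists in neutron; it is exactly the RFE being floated.

```python
# Hypothetical binding details with the proposed key; neutron would
# hard-code the value per ml2 driver (e.g. ovs: on_plug, odl: on_bind).
port_binding = {
    "host_id": "dest-host",
    "vif_type": "ovs",
    "vif_plugged_event_policy": "on_plug",  # never | on_bind | on_plug
}

should_wait = port_binding.get("vif_plugged_event_policy", "never") == "on_plug"
print(f"wait for vif-plugged: {should_wait}")
```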
sean-k-mooney[m]mriedem:  ah ya that makes sense19:30
*** vivsoni has quit IRC19:32
sean-k-mooney[m]mriedem:  do you want me to open an rfe bug for the event stuff and we can see what the neutron team says? it would not be a hard dependency but it would allow us to know in the future if we can wait19:32
cdentedleafe: responded to your comments on forbidden traits in rp in db. I think you're missing a point or I've explained things very poorly19:33
*** vivsoni has joined #openstack-nova19:33
openstackgerritMerged openstack/nova-specs master: Non-unique network names in Servers IPs API response  https://review.openstack.org/52139219:33
mriedemsean-k-mooney[m]: sure19:34
*** harlowja has joined #openstack-nova19:34
*** mvk has joined #openstack-nova19:34
efriedjaypipes, cdent, edleafe: Is there anything preventing two records in the allocations table having the same consumer+rp+rc?19:37
cdentefried: in the table itself, no, but the interface on the object does a full wipe before it does any set (for a given consumer)19:39
efriedcdent: duly noted.19:40
mriedemdo we have a bp or anything anymore for supporting shared storage pools in nova? there was a bp but i can't find it now, maybe it was marked obsolete?19:40
edleafecdent: ok, but it seems like overkill. No one would be sending !TRAIT now, so I don't quite get the need to support it19:41
efriedmriedem: Since we descoped it in Q, and didn't bring it back up in Dublin...19:41
cdentonce that full stack is in place, someone can accidentally use the wrong microversion, and they need an appropriate error19:41
mriedemefried: yeah i'm trying to list it as an alternative in https://review.openstack.org/#/c/551927/19:42
*** gouthamr has joined #openstack-nova19:42
cdentefried, mriedem: tetsuro is still diligently trying to make shared correct, yes?19:42
efriedmriedem: IIRC, the shared storage provider spec was approved even before Q as one of those vague "we're headed in this direction" things, which led to the inception of aggregates and MISC_SHARES_VIA_AGGREGATE trait, but was never fully tied off.19:43
efriedmriedem: And after that we never (re)proposed anything on it.19:43
cdentedleafe: did you see [t 18ze]?19:45
purplerbot<cdent> once that full stack is in place, someone can accidentally use the wrong microversion, and they need an appropriate error [2018-03-27 19:41:56.981348] [n 18ze]19:45
cdentedleafe: because I don't understand why you think it is unnecessary, and would like to19:45
mriedemefried: ok, replied in https://review.openstack.org/#/c/551927/19:46
mriedemdansmith: jaypipes: ^ you might have other ideas19:46
mriedemit's the age old "how can i support migration w/o globally shared ssh keys"19:46
mriedemif you know a set of computes are in a shared storage pool, you could do the shared storage aggregate thing and then we could look that up from nova-compute rather than do the ssh check19:47
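For reference, a test-file handshake of the kind nova's drivers use for shared-storage checks, reduced to a sketch: the dest drops a marker file and the source checks whether it can see it. Paths and names below are illustrative, not nova's actual code.

```python
import os
import tempfile

def dest_create_test_file(instances_path):
    # destination host: drop a marker file on the instances path
    fd, path = tempfile.mkstemp(dir=instances_path, prefix=".shared-check-")
    os.close(fd)
    return path

def source_sees_test_file(path):
    # source host: if the marker is visible, both hosts share the same
    # backing storage, so no image copy (and no ssh) is needed
    return os.path.exists(path)

shared_dir = tempfile.mkdtemp()  # stand-in for a shared mount
marker = dest_create_test_file(shared_dir)
print("shared storage:", source_sees_test_file(marker))
os.remove(marker)
```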
*** gouthamr has quit IRC19:47
efriedmriedem: I think this is the closest we came to a shared storage pool spec https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/generic-resource-pools.html19:47
*** gouthamr has joined #openstack-nova19:48
mriedembefore that, we had https://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/migrate-libvirt-volumes.html19:48
openstackgerritMerged openstack/nova master: api-ref: Parameter verification for servers.inc (1/3)  https://review.openstack.org/52820119:48
mriedemthat got held up on the imagebackend refactor o' doom19:49
*** lajoskatona has joined #openstack-nova19:50
*** lajoskatona has quit IRC19:50
*** awaugama has quit IRC19:51
jaypipesefried: yes, the primary key is on rp+rc+consumer.19:54
jaypipesefried: so the table itself is preventing that.19:54
efriedjaypipes: How hard to change?19:54
jaypipesefried: why would we want to change that?19:54
efriedTo preserve granular groups in allocation records.19:54
efriedjaypipes: Composing an email on it.19:55
jaypipesefried: honestly, I'm not interested in doing that.19:55
dansmiththat's going to be a problem19:55
dansmithunless placement handles all the atomicity of working that into a consistent view from the api19:55
*** germs has joined #openstack-nova19:55
*** germs has quit IRC19:55
*** germs has joined #openstack-nova19:55
dansmithwe depend on allocations being per consumer19:55
jaypipesright19:55
efriedper consumer isn't a problem.19:55
*** rmart04 has joined #openstack-nova19:55
dansmithpresumably you want multiple allocations per consumer against a single RP right?19:56
efrieddansmith: Yes.  I'm saying two records with e.g.19:57
efriedCONSUMER_A, RP_1, DISK_GB, 102419:57
efriedCONSUMER_A, RP_1, DISK_GB, 204819:57
efriedmeaning "one 1G disk, one 2G disk, both from the same provider, allocated to the same consumer"19:57
cdentjaypipes: you sure about that primary key? I see it as just on the id column?19:57
cdentand no uniq anywhere19:57
dansmithefried: how do you know which is which?19:57
efrieddansmith: I contend it doesn't matter which is which.19:57
dansmithefried: and what is the difference between that and one 3G allocation19:57
efriedditto19:57
efriedoh, sorry19:57
efrieddansmith: The difference is between "one 1G disk, one 2G disk" and "one 3G disk"19:58
*** edmondsw has quit IRC19:58
dansmithunless you add a group name into that key I don't see how you can reconstruct the original structure19:58
dansmithoh I see19:58
*** vivsoni has quit IRC19:58
dansmithif it's the same RP, I don't think that's really placement's job to keep track of that19:58
*** vivsoni_ has joined #openstack-nova19:58
efrieddansmith: Then whose?19:58
jaypipesefried: if there isn't one, that's a terrible mistake.19:59
dansmiththat'd be a nova problem, looking at the fact that we have 3G of allocation and, oh hey, the flavor said two disks"19:59
dansmithplacement needs only to know that you got 3G of storage from that provider19:59
efrieddansmith: It was placement that took in a request of the form resources1=DISK_GB:1024&resources2=DISK_GB:2048 and responded with DISK_GB:3072.19:59
dansmithif we decide to cut it up that's a VM management thing (i.e. nova)19:59
*** edmondsw has joined #openstack-nova19:59
efrieddansmith: Yup, that'll work, but only for a little while.19:59
dansmithencoding that you have two disks into an allocation in placement is teaching placement too much about nova, IMHO20:00
*** edmondsw_ has joined #openstack-nova20:01
dansmithefried: if you didn't ask (or imply) placement for those disks to be on different providers, then I think replying with a merged allocation or rejecting it as antithetical is legit20:01
*** trozet has joined #openstack-nova20:01
sean-k-mooney[m]dansmith:  you will have to do that for things like bandwidth requests on two different nics that could be allocated from the same pf20:01
*** bkopilov has quit IRC20:01
sean-k-mooney[m]or tor20:01
sean-k-mooney[m]so it's not just a nova thing20:01
efrieddansmith: Well, that's a separate issue - asking for them to be on different providers can't be expressed currently (assuming traits are the same etc.)20:01
*** germs has quit IRC20:02
efrieddansmith: The problem moving forward is that we're going to have agents other than nova responsible for mapping allocation into actual resources.  Plus the fact that we're going to be synthesizing that request string from not just the flavor, but also from image metadata, neutron, eventually cyborg, cinder...20:02
sean-k-mooney[m]dansmith:  so are you saying we should be explicit and have some way of saying it's ok to merge?20:02
dansmithsean-k-mooney[m]: if the resource comes from a different pile, then it needs to be a different RP (even if the same pf)20:02
*** germs has joined #openstack-nova20:02
*** germs has quit IRC20:02
*** germs has joined #openstack-nova20:02
dansmithsean-k-mooney[m]: and if it's not then placement doesn't need to know about it20:02
efrieddansmith: That doesn't let you express bandwidth on the PFs though.20:03
dansmithsean-k-mooney[m]: rather decide whether not stating them as separate means they're merge-able or if it means it's a 40020:03
edleafecdent: So you see a case where someone would create flavors with traits that are prefixed with '!', but not specify a microversion that supports it? I guess I can't see that.20:03
*** lpetrut has joined #openstack-nova20:03
edleafeBut if it's necessary, so be it20:03
sean-k-mooney[m]efried: well it kind of does20:04
cdentedleafe: no I see a non-nova client of the placement api make requests to get resource providers filtered by traits, forgetting to set the microversion20:04
*** edmondsw has quit IRC20:04
jaypipesefried, sean-k-mooney[m]: I think you are both over-thinking this.20:04
efrieddansmith: Example that sean-k-mooney[m] is talking about: resources1=VF:1,NET_EGRESS_BW:200&resources2=VF:1,NET_EGRESS_BW:300 -- you can get back a candidate like { PF_RP1: { VF: 2, NET_EGRESS_BW: 500 } }20:04
*** edmondsw_ has quit IRC20:04
cdentedleafe: as in: a human20:04
*** edmondsw has joined #openstack-nova20:04
sean-k-mooney[m]the case that is not covered today would be anti-affinity of PFs for VFs for ha bonds20:04
*** edmondsw has quit IRC20:04
dansmithefried: right, it's still nova asking neutron for the fine-grained quota on each port though20:04
efriedneutron is responsible for carving out two PFs.  How does it know that one needs BW:200 and the other needs 300?20:05
dansmithefried: placement doesn't need more information than that 500 is committed to that consumer20:05
*** yamamoto has joined #openstack-nova20:05
dansmithefried: it doesn't imply it from placement, IMHO20:05
jaypipesdansmith: zactly.20:05
efriedthen from where?20:05
dansmithfrom where what?20:05
dansmitheither in the port you created ahead of time, or from the flavor details when nova goes to create it for you20:06
jaypipesefried: the allocation is the amount of that resource that is being provided by that specific resourc eprovider. it's 500.20:06
efriedWhere does it get the information that that request entails one VF with BW:200 and one with BW:300?20:06
efriedneutron has access to the flavor?  And the image?20:06
sean-k-mooney[m]efried: if there are no different traits between resources1 and resources2 it's ok to merge20:06
dansmithefried: are you being difficult now or what?20:06
dansmithefried: neutron ports don't just magically get created20:06
dansmithefried: normally nova creates ports for you based on what you asked for in terms of --nic arguments, presumably influenced by the flavor if this sort of bw quota was encoded there20:07
dansmithnova uses that to decide what to ask placement for,20:07
dansmithand then uses it to talk to neutron to create the ports for you20:07
dansmithplacement is not the Windows Registry of information about instances for other services to mine :)20:07
dansmithjaypipes: quote of the day for you  ^20:07
*** shaohe_feng has quit IRC20:08
efrieddansmith: Okay, so for this case we get it from here, for that case we get it from there, for this other case maybe we have to invent something convoluted to pass it through... Whereas if we split up the allocations in placement, it could all come from the same place in a consistent way.20:08
efriedand no, I'm not *trying* to be difficult.20:08
*** liverpooler has quit IRC20:08
dansmithefried: the answer to the above leading question is not "mine it from placement"20:08
efriedI'm predicting that we will suffer a death of a thousand cuts by solving this problem differently every time we encounter it.20:08
sean-k-mooney[m]efried: neutron has access to neither the flavor nor the image20:08
dansmithefried: haaaaave you met us?20:08
dansmitheither way,20:09
dansmithplacement is not the database of things for neutron to imply random things about your quota or policy from20:09
jaypipesWHAT dansmith SAID.20:09
sean-k-mooney[m]the neutron port case i was thinking of is: i create two sriov ports in neutron, tell nova to use both, and want to express anti-affinity for those at the pf level, so in the granular request to placement we need some way to tell placement they cannot come from the same RP even though they have no other differences20:10
jaypipesthis is precisely the conversation I had with gibi and mlavalle earlier today.20:10
dansmithnova could, for example, be configured to ask placement for bandwidth resource, but never tell neutron about it, if we want general recordkeeping and kinda soft quotas for distribution, but we don't need actual administrative limiting of bandwidth, for example20:10
dansmithif you mine it out of placement, you have no idea the semantics of the data you're examining20:10
efriedif this was a neutron-specific or nova-specific thing, okay.  I don't see it that way, though.20:10
jaypipessean-k-mooney[m]: that is what granular request groups are for.20:10
efriedjaypipes: granular request groups don't solve it.20:10
dansmithefried: the port case is a nova/neutron conversation and placement should C its way out20:10
dansmithefried: for disks, it's nova/nova or nova/cinder or whatever20:11
*** yamamoto has quit IRC20:11
jaypipesefried: how is granular request groups not for solving the case of "gimme 2 of these resources from different providers"?20:11
sean-k-mooney[m]placement is the thing to model all quantitative resources and qualitative traits with, however, so i may want to express affinities between resources in the future. for now i guess a filter could be used, but i would be somewhat concerned that we would get no valid host because of the limit we pass into placement20:11
efriedjaypipes: We have no way to say "from different providers".20:11
openstackgerritMatt Riedemann proposed openstack/nova-specs master: tox.ini: remove the stale 'minversion = 1.4'  https://review.openstack.org/53077620:12
efriedjaypipes: Unless there happen to be traits we can exploit.20:12
dansmithefried: that's what I was saying,20:12
jaypipesthat's the whole dang point behind granular request groups.20:12
dansmithefried: I think it's implied that if you ask for two DISK_GB allocations you expect them to be from different RPs20:12
sean-k-mooney[m]jaypipes: the thing that granular request groups are missing is that i can't say resources1 and resources2 can be from the same RP, or the inverse, that they must be from the same RP20:12
efrieddansmith: Oh, definitely not that.20:12
jaypipesdansmith: zactly my thought.20:12
dansmithefried: and thus we should decide whether that is the case and reject, or if we should expect those to come back merged20:12
dansmithefried: well, I would have thought exactly that, fwiw20:12
efriedWhat if you only have one disk provider?  and still want two disks?20:12
dansmithefried: that's not a placement thing dude20:13
dansmithefried: that's a nova thing20:13
dansmithefried: it will ask for how much resource from a specific provider it needs,20:13
dansmithand it may cut it up or not20:13
dansmithplacement has no bidness knowing or caring about that20:13
jaypipessean-k-mooney[m]: that was the whole point of granular request groups. to allow the caller to say "give me a total of 2 of these things, but make sure I get 1 each from different providers".20:13
efriedjaypipes: No, not that.20:13
jaypipesyes, yes that20:13
dansmithefried: yes that20:13
dansmithexactly that20:13
efriedjaypipes: "Give me two of these things, but each one has different traits" I'll grant you.20:14
efried...which has the effect of dividing them across two providers.20:14
dansmithefried: if you ask for two things with no trait difference, I think the assumption is you want them from different places20:14
efriedBut if they *don't* have different traits, putting them in different groups does *not* give you different RPs.20:14
dansmithyou're saying that,20:14
dansmithbut I disagree that that is how it should be20:14
dansmithso if you're talking about your patches as of now, then okay.. but then we should find and -2 those :)20:15
*** bkopilov has joined #openstack-nova20:15
efrieddansmith: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html#semantics third bullet.20:15
jaypipesagree wholeheartedly with dansmith. I don't see any logic in granular request groups *ever* meaning "give me two of these exact same resources, but allow them both to be from the same provider"20:15
*** shaohe_feng has joined #openstack-nova20:15
dansmithefried: okay so that's what I was saying above.. we either decide if they should be different (what I've been assuming) or they're not and they're effectively merged20:16
dansmithefried: your bullet says the latter20:16
dansmithwhich, if we've decided, then cool, but I think that was/is a mistake20:16
efriedjaypipes: You would never want two VFs from the same PF?20:16
jaypipesagreed it was a mistake.20:16
dansmithefried: two VFs from one PF is VF=220:16
jaypipesefried: if you did, you would say resources:SRIOV_NET_VF=220:16
dansmithefried: not VF=1,VF=120:16
sean-k-mooney[m]jaypipes:  it does simplify one case, which is merging resource requests from the image/flavor/port/volume20:16
sean-k-mooney[m]jaypipes:  it also has implications for your cpu spec20:17
dansmithwhat jaypipes said20:17
efriedHow do you say, "Give me 8 VCPUs, but I don't care how you split them across NUMA nodes"?20:17
sean-k-mooney[m]today if i have a guest with no numa topology and it can't fit on one numa node it will be spread.20:17
jaypipesIMO, that third bullet should absolutely be amended to say "requesting granular request groups of the same resource means that the resulting candidates will be from different providers"20:17
dansmithefried: if I just want multiple VFs and ensure they're on different PFs, but I don't care about any other reason, then VF=1,VF=1 means that, different from VF=220:17
dansmithefried: VCPUS=820:18
sean-k-mooney[m]if we say the non-numbered resource request must also be from one RP we cannot support that now20:18
efrieddansmith: VCPUS=8 will get you all 8 from the same RP.  That's not even related to this discussion.20:18
jaypipessean-k-mooney[m]: the non-numbered request group specifically says "use_same_provider=False"20:19
dansmithefried: how does VPUS=4,VCPUS=4 make it more flexible?20:19
*** ttsiouts_ has joined #openstack-nova20:19
dansmithefried: that's not saying "do as you with"20:19
dansmith*wish20:19
efrieddansmith: It doesn't.  But VCPUS=1,VCPUS=1,VCPUS=1,VCPUS=1,VCPUS=1,VCPUS=1,VCPUS=1,VCPUS=1 does.20:19
sean-k-mooney[m]dansmith:  what about the case where all your servers have 2 numa nodes with 10 cores each and i want 16 cores but don't care about numa in my guest20:19
efrieddansmith: Now we can spread 'em any which way they fit.20:20
dansmithefried: you can't be serious20:20
dansmithefried: and if I want 32G of memory allocated anywhere? 32*24 query params? :)20:20
jaypipessean-k-mooney[m]: then you would request VCPU=1620:20
sean-k-mooney[m]jaypipes:  so non-granular can be from multiple RPs, and for granular requests each request is guaranteed to be from different RPs20:20
efrieddansmith: I imagine step sizes come into play there20:20
jaypipessean-k-mooney[m]: yes.20:21
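efried's "generic spreadability" request from above, built programmatically. Note the two readings in play: under the approved spec's wording separate groups may share a provider, while dansmith and jaypipes argue they should mean distinct providers.

```python
from urllib.parse import urlencode

# Eight groups of one VCPU each: "I want 8 VCPUs, split them however."
params = [(f"resources{i}", "VCPU:1") for i in range(1, 9)]
print("GET /allocation_candidates?" + urlencode(params, safe=":"))

# Under the approved spec (groups *may* share a provider) this can be
# satisfied as 8, 4+4, 3+1+2+2, etc. Under the "separate groups mean
# separate providers" reading, it would demand eight distinct providers.
```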
dansmithefried: okay but that's still a damn lot of params for silly reasons20:21
dansmithtbh, I'm kinda frustrated by this conversation at this stage in the game and I'm late leaving for something else20:21
efrieddansmith: How else do you ensure generic spreadability?  Is that a silly reason?20:21
*** rmart04 has quit IRC20:21
efriedOkay.  Y'all have made your position clear.20:21
dansmithso I shall bow out, but jaypipes I agree about that third bullet change, or at least putting a stake in the ground about needing to revisit it20:21
dansmithback later20:22
jaypipesefried: what is generic spreadability? is that like I can't believe it's not butter?20:22
efriedOh, you better believe it.20:22
jaypipesheh20:22
*** yassine has joined #openstack-nova20:22
*** moshele has joined #openstack-nova20:23
*** vivsoni_ has quit IRC20:23
efriedjaypipes: It means, "I know I want 4 VCPUs.  I don't care about NUMA affinity.  I just want my instance."  Some hosts have no NUMA (all VCPU on the compute RP).  Some have two NUMA nodes, some have four.  It's a fullish cloud.  I sure would like it if I could land my instance, whether there's a host with 4 VCPUs in one NUMA node, or one with 3,1, or one with 2,1,1,0, etc.20:24
*** gouthamr has quit IRC20:25
*** vivsoni has joined #openstack-nova20:25
sean-k-mooney[m]jaypipes: so just to confirm: non-granular resource requests can come from multiple RPs, each granular resource request of the same resource class is guaranteed to come from different RPs, and a single granular resource request can or cannot spread across multiple RPs, i'm assuming cannot?20:25
jaypipesefried: resources:VCPU=420:25
jaypipessimply as that.20:25
efriedjaypipes: Then we're treating VCPU special.20:25
efriedjaypipes: Cause you definitely can't treat DISK_GB=1024 the same way.20:25
cdentefried: are you saying in the case where the vcpu inventory is registered on the numa nested provider, or something else?20:26
jaypipessean-k-mooney[m]: a single granular request group of an amount of a resource class cannot spread that bucket across multiple providers, no.20:26
efriedsean-k-mooney[m]: As currently designed [1], separate request groups with the same resource class *may* or *may not* come from the same RP.20:26
efried[1] http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html#semantics20:26
sean-k-mooney[m]efried: that would be covered by just Resource:VCPU=420:26
efriedsean-k-mooney[m]: Oh, also what jaypipes says.20:26
efriedsean-k-mooney[m]: No - see above about DISK_GB.20:27
jaypipesefried: how so? all of those resource providers have 4 VCPU available.20:27
sean-k-mooney[m]efried: why?20:27
efriedjaypipes: No, 3,1 means "3 in NUMA node 0, 1 in NUMA node 1" etc.20:27
efriedsean-k-mooney[m]: If VCPU=4 means I can spread VCPU across multiple numa node RPs, then DISK_GB=1024 means I can spread... individual gigabytes? across multiple storage RPs.20:28
jaypipesoh, in that case then no, no providers would be returned (since none have 4 VCPU available)20:28
jaypipesefried: ^20:28
jaypipesefried: and that is A-OK in my book.20:28
efriedjaypipes: Exactly.  I get NoValidHosts.  But there *were* hosts with available procs.20:28
jaypipesno there were not.20:28
jaypipesefried: there were no resource providers that had 4 VCPU.20:28
efriedBut there were *hosts* that had 4 VCPu.20:29
jaypipesefried: the host isn't the provider of the VCPU in your scenario, though.20:29
efriedwhy not just say you can't have the same RC in different RPs then?20:29
sean-k-mooney[m]jaypipes:  well in his case there were 3 cpus on 1 numa node and 1 on the second20:30
efriedjaypipes: But I didn't ask for my instance to land on a RP.  I asked for it to land on a host.  Which has a tree of RPs.  But I don't know (and shouldn't care in this case) where the VCPUs are in that tree.20:30
*** moshele has quit IRC20:30
jaypipessean-k-mooney[m]: exactly. there were no providers that had 4 VCPU available.20:30
sean-k-mooney[m]but there was no host with 1 RP with 4 VCPUs20:30
*** germs has quit IRC20:30
efriedtbc, I'm agreeing that VCPU=4 should give you no hits in this case.20:31
efriedI'm saying we need a way to express a request that *does* land this instance.20:31
jaypipesefried: I pretty specifically remember you telling me that my proposed "sum the inventories for like resource classes within a provider tree" was absolutely the wrong way to handle nested providers.20:31
sean-k-mooney[m]jaypipes: right, so today, before placement, i can have a host with 2 sockets each with 10 cores and i can boot a vm with no numa topology with 16 cores20:31
jaypipesefried: and I'm saying I don't care about that.20:32
efriedjaypipes: Yes, exactly, because the DISK_GB case breaks it unequivocally.20:32
sean-k-mooney[m]that would now break20:32
efried^^ this.20:32
jaypipesI really don't care.20:32
efriedWith the granular syntax as designed, we have a way to ask for this ^ that will work in *any* scenario.20:32
efriednamely: resources1=VCPU:1,...,resources16=VCPU:120:33
sean-k-mooney[m]efried: well DISK_GB only breaks in some cases20:33
efriedI'll grant you that I don't want the admin/operator to have to say that.20:33
jaypipesefried: that is just over-engineering IMHO.20:33
jaypipesfor a use case that just isn't particularly attractive to me.20:33
openstackgerritMerged openstack/nova-specs master: tox.ini: remove the stale 'minversion = 1.4'  https://review.openstack.org/53077620:33
sean-k-mooney[m]efried: that has the opposite problem20:34
efriedsean-k-mooney[m]: Not if separate granular groups can land on the same RP.20:35
sean-k-mooney[m]now i can't say these have to come from different RPs20:35
efriedcorrect.20:35
efriedwithout traits.20:35
sean-k-mooney[m]efried: even with traits20:36
sean-k-mooney[m]traits would artificially narrow your selection20:36
sean-k-mooney[m]you basically need a way to express "use_same_provider=True/False"  relationships between Resource1...20:37
sean-k-mooney[m]e.g. Resource1,Resource2:use_same_provider=True20:37
* jaypipes wonders whether resource requests in kubernetes support this level of crazy.20:37
efriedsean-k-mooney[m]: Agree we don't want to restrict artificially via traits.  Which is why that's not a solution, just a side effect.20:38
jaypipesoh wait, no, no they don't. at all.20:38
efriedjaypipes: kubernetes is just a babe.  Give it a couple of years.20:38
efriedIt'll either be dead, or supporting this level of crazy.20:38
jaypipesefried: intel has been trying desperately for 2+ years to add this level of crazy to resource management in k8s.20:38
sean-k-mooney[m]jaypipes:  not yet. give intel time :P20:38
jaypipessorry sean-k-mooney[m], but it's true.20:38
sean-k-mooney[m]jaypipes:  haha i know one of our k8s teams sits 10 feet from me20:39
jaypipesconnor doyle?20:40
sean-k-mooney[m]jaypipes:  em, i don't recognise that name but they have been working on multus and the multi nic support + cpu pinning, hugepages and something else20:42
cfriesenjaypipes: sean-k-mooney[m]: in the VCPU and 4KB pages case we currently let the instance float across the whole compute node....I'm of the opinion that we *shouldn't* let it, and *should* limit it to a single host NUMA node.20:43
mriedemi saw in the latest k8s release notes that they now support cpu pinning and huge pages20:44
jaypipescfriesen: ok. nothing about the proposals would prevent that.20:44
efriedcfriesen: Fine by me, but you're going to bounce a lot of spawn requests that way.20:44
mriedemgood for them20:44
cfriesenefried: if we don't restrict it, we have no idea how many 4KB pages are left on each host numa node.20:44
sean-k-mooney[m]cfriesen: that breaks existing behavior where it can't fit in one numa node20:44
cfriesensean-k-mooney[m]: yes. and I think we have no option.20:45
efriedcfriesen: except splitting into individual pages, one per request group.20:45
efriedor... inventing something new.20:45
cfriesenefried: I'm not sure we can do that with qemu.20:46
efriedcfriesen: I'm not talking about qemu splitting.  I'm talking about the request being split.  Then placement will give you back summed-up allocations per RP.20:46
efriedNUMA_0: PAGES=64, NUMA_1: PAGES=1024 or whatever20:47
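
That is, placement would hand back one summed allocation per NUMA-node provider, shaped roughly like this (the resource class name and field names are illustrative approximations, not a real placement payload):

    {
      "allocations": [
        {"resource_provider": {"uuid": "<numa0 rp uuid>"},
         "resources": {"PAGES": 64}},
        {"resource_provider": {"uuid": "<numa1 rp uuid>"},
         "resources": {"PAGES": 1024}}
      ]
    }
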
efriedcfriesen: because btw, jaypipes and dansmith came down hard on the idea of placement tracking separate request groups in any way.20:47
cfriesenefried: when you start up qemu and it's allowed to float across the whole host, we do not know how much it will consume from each host numa node20:47
sean-k-mooney[m]jaypipes:  so how would you feel about something like Resource:VCPU:use_same_provider:true and then just define that all resource provider groups will not overlap with others20:48
efriedcfriesen: Oh, you have to let qemu have its head completely?  Bogus.20:48
cfriesenefried: at least the way we do it now, yes.   for hugepages we map a file and tell it to use that, but for 4KB pages we just say "you're allowed to use up to X memory"20:48
*** archit has quit IRC20:48
sean-k-mooney[m]efried:  well no, they said separate request groups guaranteed different RPs20:49
efriedcfriesen: Then wouldn't the hugepages be inventory on the compute RP?20:49
sean-k-mooney[m]efried: that means the allocation candidates for those request groups will have to be reported separately20:49
efriedsean-k-mooney[m]: Which I don't agree with.  Which was already discussed and decided in the original spec in Q.20:49
cfriesenefried: hugepages are fine, they imply that we're limited to a numa node.   instances with "shared" cpus and 4KB pages are allowed to float across the whole compute node currently20:49
*** archit has joined #openstack-nova20:49
sean-k-mooney[m]cfriesen:  for 4k we can map a file too if we want20:49
cfriesensean-k-mooney[m]: can we map multiple files for an instance with a single virtual numa node?20:50
sean-k-mooney[m]cfriesen: we just don't, but we can numa-affinitize 4k pages20:50
cfriesensean-k-mooney[m]: yes, but can we allocate 3GB of 4K pages from host numa node 0 and 1GB from host numa node 1 for an instance with a single virtual numa node?20:51
sean-k-mooney[m]cfriesen:  i think so. why would you want to?20:52
sean-k-mooney[m]to allow the memory to come from multiple host numa nodes20:52
cfriesensean-k-mooney[m]: If I have only 3GB memory free on one numa node and 1GB on the other, and I want to keep the current behaviour of letting the instance float across the whole compute node.20:53
*** elmaciej has quit IRC20:53
jaypipessean-k-mooney[m]: again, I think that granular request groups should mean that the resources in each request group are provided by different resource providers.20:54
sean-k-mooney[m]i would have to check. we added the ability to use file-descriptor memory by setting the source element of this https://libvirt.org/formatdomain.html#elementsMemoryBacking20:54
jaypipessean-k-mooney[m]: I do not care about the use case of "general spreadability".20:54
efriedjaypipes: but only if they're the same resource class20:54
jaypipesefried: yes.20:54
sean-k-mooney[m]i know that backing file can be affinitized but i don't know if we can create two backing files attached to the same guest numa node20:54
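
The memoryBacking element referenced above looks roughly like this in the libvirt domain XML (a minimal sketch; whether source type="file" is available depends on the libvirt version):

    <memoryBacking>
      <source type="file"/>
    </memoryBacking>
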
efriedjaypipes: That's gonna be tough to implement, just for starters.20:55
jaypipesefried: though I don't see a reason why you would separate request groups where one request group does *not* contain a resource class...20:55
efriedjaypipes: So that they aren't forced to land on the *same* RP.20:55
sean-k-mooney[m]cfriesen: i think it would work at the qemu level but i need to dig deeper20:55
jaypipesefried: say wha?20:55
cfriesensean-k-mooney[m]: unless we can do that, we can't let a single-numa-node guest use memory from multiple host numa nodes.20:56
sean-k-mooney[m]jaypipes: ya i actually prefer that each group is a different RP. that is what i had originally assumed20:56
jaypipesefried: I mean, I don't see a use case for doing, for example, this: resources1=VCPU:1,required2=HW_NIC_OFFLOAD_GENEVE20:56
efriedjaypipes: It wouldn't make sense for me to say DISK_GB:1024,VF:1.  Maybe I misunderstood your statement.20:56
jaypipesefried: that doesn't make sense to me.20:56
sean-k-mooney[m]cfriesen: well that's the thing, they are not really single numa guests20:57
efriedjaypipes: resources1=VF:1,BW:200&required1=PHYSNET_A&resources2=VF:1,BW:300&required2=REALLY_FAST20:57
sean-k-mooney[m]they are guests that did not specify a numa topology, so there is no reason nova could not make them multi numa guests20:58
jaypipesefried: ok, and?20:58
jaypipesefried: that makes total sense to me.20:58
cfriesensean-k-mooney[m]: ooh, fun.  that won't surprise anyone. :)20:58
efriedContrived example.  I don't care which physnet that second VF is on.  I just want it to be fast.  Why can't it land on the same RP as the first one?20:58
cfriesensean-k-mooney[m]: don't forget we'd have to preserve the numa topology over live migration20:58
sean-k-mooney[m]cfriesen:  if we made them multi numa guests when their resources were split across host numa nodes it would fix the accounting issue and improve the performance of the guest20:59
*** edmondsw has joined #openstack-nova20:59
jaypipesefried: because it's just crazy to reason about for the user, frankly.20:59
sean-k-mooney[m]cfriesen:  you're relying on an implementation detail that is virt-driver specific20:59
cfriesensean-k-mooney[m]: it could also make the guest slower if its OS isn't numa-aware.20:59
jaypipesefried: if a user requests two groups of a resource, the user expects those groups to be separate.21:00
jaypipesefried: and "separate" means "provided by different providers" in my and dansmith's book.21:00
sean-k-mooney[m]cfriesen:  the guest os21:00
*** edmondsw_ has joined #openstack-nova21:00
cfriesensean-k-mooney[m]: yep21:00
efriedjaypipes: disagree completely.  IMO it's way easier for the user to reason about the groups individually, without regard for their interrelations.21:00
sean-k-mooney[m]cfriesen: how would it make it slower, when before it would have been spread across the physical host but the guest never would have known?21:01
*** dave-mccowan has quit IRC21:01
jaypipesefried: I'm afraid we'll just have to agree to disagree.21:01
efriedjaypipes: Fact is, either way we decide this will leave a hole, cases we can't express.21:01
cfriesensean-k-mooney[m]: code running in the guest on two separate cpus that wants to share a lot of memory between the two CPUs.  now you've got cross-NUMA latencies21:01
jaypipesefried: true.21:01
efriedjaypipes: The way it's currently written is simpler, more flexible, and easier to implement; and IMO easier to reason about.  But that may just be because it's what I've had in my head since early Q.21:02
cfriesensean-k-mooney[m]: you're correct that it's no worse than it is now21:03
sean-k-mooney[m]cfriesen: yes, and today that can happen too. the guest sees 1 numa node but on the host its ram can be allocated from multiple because it floats21:03
jaypipesMy brain is, frankly, pretty fried.21:03
efriedI've thought through all the things you can't express with it, and it boils down to just one thing: you can't express "separate these request groups".21:03
jaypipesI really need to eat and refresh my brain juices.21:03
efried...but adding that feature would be relatively straightforward with the addition of some new syntax.21:03
*** edmondsw has quit IRC21:04
efriedYeah, for my part I really ought to go review some more specs that aren't already approved.21:04
*** edmondsw_ has quit IRC21:05
*** moshele has joined #openstack-nova21:05
*** salv-orl_ has joined #openstack-nova21:06
sean-k-mooney[m]cfriesen:  :)  so what i was suggesting was: if you did not specify a numa topology then we leave it up to the virt-driver, but it then has to reflect the topology to the guest. it would be no worse than today but would actually fix the 4k pages issue, and technically i don't think we are breaking any guarantees, as i don't think we define that if numa_nodes is not set it's 1 numa node21:06
*** salv-orlando has quit IRC21:06
*** yamamoto has joined #openstack-nova21:07
sean-k-mooney[m]cfriesen: anyway, that's an issue for another day21:07
*** eharney has quit IRC21:08
*** shaohe_feng has quit IRC21:11
openstackgerritHongbin Lu proposed openstack/nova-specs master: Choose default network on ambiguity  https://review.openstack.org/52024721:11
openstackgerritEd Leafe proposed openstack/nova-specs master: Add Generation to Consumers  https://review.openstack.org/55697121:11
edleafecdent: efried: jaypipes: ^^ updated with your suggestions21:12
*** yamamoto has quit IRC21:12
efriededleafe: Check yer whitespace21:14
edleafeefried: ugh, do I have to? <whine>21:15
efriededleafe: But hold, there's another typo21:15
*** itlinux has quit IRC21:15
efriededleafe: And another thing that needs to be said.21:16
*** liangy has quit IRC21:16
edleafewell, I wasn't gonna push a rev for whitespace until everyone had a crack at it21:16
*** archit is now known as amodi21:19
*** itlinux has joined #openstack-nova21:20
efriededleafe: Never mind that last thing.  I was gonna ask if the generation in PUT /allocations/{c} was going to quit being ignored.  But that's a RP generation, and not really relevant here.21:20
*** amodi_ has joined #openstack-nova21:21
efriededleafe: Consider me cracked.21:21
*** amodi has quit IRC21:21
*** amodi_ is now known as amodi21:21
*** vivsoni has quit IRC21:21
sean-k-mooney[m]you know that feeling when you find the source of a bug you hit and you wish you had not...21:22
sean-k-mooney[m]here we convert from the disk bus the user asked for to a prefix https://github.com/openstack/nova/blob/ef0ce4d692d28a7f5a0079e24acdbfe7d2767e8b/nova/virt/libvirt/blockinfo.py#L124-L14321:23
cdentcracked21:23
sean-k-mooney[m]here we convert from the prefix to the disk bus https://github.com/openstack/nova/blob/ef0ce4d692d28a7f5a0079e24acdbfe7d2767e8b/nova/virt/libvirt/blockinfo.py#L297-L30421:23
*** lpetrut has quit IRC21:23
sean-k-mooney[m]we map sata, scsi and usb to sd, then hard code sd = scsi21:23
edleafeefried: I always have21:24
sean-k-mooney[m]that means on kvm you can't request a sata or usb disk, which is why i had to go to IDE to fix my centos image21:24
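
A condensed sketch of the lossy round-trip in the two blockinfo.py helpers linked above (illustrative only, not the actual nova code):

    def get_dev_prefix_for_disk_bus(disk_bus):
        # sata, scsi and usb all collapse to the same 'sd' prefix...
        return {'ide': 'hd', 'sata': 'sd', 'scsi': 'sd',
                'usb': 'sd', 'virtio': 'vd'}.get(disk_bus)

    def get_disk_bus_for_disk_dev(disk_dev):
        # ...but the reverse mapping hard-codes the 'sd' prefix back to
        # scsi, so a requested sata or usb bus silently comes back as scsi.
        return {'hd': 'ide', 'sd': 'scsi', 'vd': 'virtio'}.get(disk_dev[:2])
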
*** r-daneel has quit IRC21:25
openstackgerritHongbin Lu proposed openstack/nova-specs master: Choose default network on ambiguity  https://review.openstack.org/52024721:26
*** vivsoni has joined #openstack-nova21:26
sean-k-mooney[m]setting the disk_bus for a kvm host has apparently been broken for 5 years since https://github.com/openstack/nova/commit/7be531fe9462f2b07d4a1abf6687f649d1dfbb89 was merged ...21:27
*** felipemonteiro__ has quit IRC21:28
openstackgerritEd Leafe proposed openstack/nova-specs master: Add Generation to Consumers  https://review.openstack.org/55697121:28
openstackgerritMerged openstack/nova-specs master: Amend the migration paging spec for uuid in server migrations response  https://review.openstack.org/53290421:29
edleafecdent: efried: jaypipes: ^^ fixed those ghastly errors21:31
efriededleafe: +121:31
*** edmondsw has joined #openstack-nova21:35
mriedemjaypipes: i'll come back to the mirror aggregates in placement spec, but not today, burned out21:35
*** fragatina has quit IRC21:35
*** fragatina has joined #openstack-nova21:35
*** moshele has quit IRC21:36
*** edmondsw has quit IRC21:36
openstackgerritChris Dent proposed openstack/nova master: WIP: Ensure that os-traits sync is attempted only at start of process  https://review.openstack.org/55385721:37
cdentthanks for putting some eyes on the error code spec jaypipes21:39
*** shaohe_feng has joined #openstack-nova21:39
openstackgerritSylvain Bauza proposed openstack/nova-specs master: Proposes Multiple GPU types  https://review.openstack.org/55706521:41
bauzasjaypipes: mriedem: just created the spec we discussed for multiple types21:41
bauzasit's a very simple spec, but needs agreement21:42
*** cdent has quit IRC21:42
openstackgerritMerged openstack/nova-specs master: Address review comments from afdc828db3c9d0205b6ded268db24f5cdf857fa6  https://review.openstack.org/55425121:43
*** burt has quit IRC21:45
mriedembauzas: i don't remember talking about that at all21:46
melwittbauzas: on https://blueprints.launchpad.net/nova/+spec/vgpu-rocky it looks like you can just use that bp to link with your spec since it was meant to cover the multiple vgpu types part anyway. the spec is just needed to facilitate discussion on the details, right?21:46
mriedemi might have been in ireland when it happened, but didn't talk about it21:46
*** mriedem is now known as mriedem_afk21:47
*** esberglu has quit IRC21:47
*** felipemonteiro has joined #openstack-nova21:47
melwittyeah, I don't remember particulars about the vgpu discussion other than, there's more work to do this cycle and we're agreed to do it and review it21:47
*** esberglu has joined #openstack-nova21:48
bauzasmriedem: no worries, I pinged you a while ago just about whether I should use another BP for it, and you told me to ping melwitt21:48
*** esberglu has quit IRC21:48
bauzasmelwitt: yup, zactly21:48
bauzasmelwitt: I linked that BP to the spec21:48
melwittokay, cool. thanks21:48
bauzasso, I just want to make clear that the spec only covers the problem stated in it21:49
*** esberglu has joined #openstack-nova21:49
bauzaswhich requires a consensus, hence a spec21:49
bauzasother feature patches won't need it21:49
melwittright, just have to agree how to implement the multiple types21:49
melwittyeah21:49
bauzasI'd love jianghuaw_ to voice on that spec too21:50
*** esberglu has quit IRC21:53
bauzasanyway, calling it a day \o21:54
*** gouthamr has joined #openstack-nova21:58
*** trozet has quit IRC21:59
*** amodi has quit IRC22:00
*** salv-orl_ has quit IRC22:01
*** esberglu has joined #openstack-nova22:01
*** salv-orlando has joined #openstack-nova22:01
*** gouthamr has quit IRC22:04
*** itlinux has quit IRC22:05
*** esberglu has quit IRC22:05
*** salv-orlando has quit IRC22:06
*** yamamoto has joined #openstack-nova22:08
jaypipesmriedem_afk: you and me both, man :)22:11
*** ttsiouts_ has quit IRC22:12
*** mchlumsky has quit IRC22:12
*** yamamoto has quit IRC22:14
*** hemna_ has quit IRC22:14
*** mikal has joined #openstack-nova22:15
*** trozet has joined #openstack-nova22:15
*** lbragstad has quit IRC22:15
*** rcernin has joined #openstack-nova22:16
*** mikal_ has quit IRC22:18
openstackgerritEd Leafe proposed openstack/nova-specs master: Add Generation to Consumers  https://review.openstack.org/55697122:21
edleafe^^ damn rST double colons!22:21
*** fragatina has quit IRC22:21
*** fragatina has joined #openstack-nova22:21
*** artom has joined #openstack-nova22:24
*** germs has joined #openstack-nova22:31
*** germs has quit IRC22:31
*** germs has joined #openstack-nova22:31
*** yamahata has joined #openstack-nova22:31
*** lbragstad has joined #openstack-nova22:32
*** lbragstad has quit IRC22:32
*** germs has quit IRC22:36
sean-k-mooney[m]bauzas:  can you review the bug fix i have up for the libvirt mtu here https://review.openstack.org/#/c/553072/ when you have time. i think this is also a good backport candidate.22:37
sean-k-mooney[m]i also found another lovely bug today https://bugs.launchpad.net/nova/+bug/175942022:38
openstackLaunchpad bug 1759420 in OpenStack Compute (nova) "nova does not correctly support HW_DISK_BUS=sata or usb for kvm/qemu" [Undecided,New]22:38
*** felipemonteiro has quit IRC22:38
sean-k-mooney[m]because of ^ i had to force a centos vm that was given to me to run with disks attached to ide, because it was freaking out with scsi or virtio disks. the virtio_blk and virtio_scsi drivers were causing the kernel to lock up22:40
*** andreas_s has joined #openstack-nova22:47
openstackgerritEric Fried proposed openstack/nova master: Remove usage of [placement]os_region_name  https://review.openstack.org/55708622:47
efriededleafe: ^22:47
*** andreas_s has quit IRC22:52
openstackgerritEric Fried proposed openstack/nova master: Slugification utilities for placement names  https://review.openstack.org/55662822:56
*** yassine has quit IRC22:56
*** claudiub|2 has quit IRC22:57
openstackgerritMichael Still proposed openstack/nova master: Move configurable mkfs to privsep.  https://review.openstack.org/55192122:58
openstackgerritMichael Still proposed openstack/nova master: Move xenapi xenstore_read's to privsep.  https://review.openstack.org/55224122:58
openstackgerritMichael Still proposed openstack/nova master: Move xenapi disk resizing to privsep.  https://review.openstack.org/55224222:58
openstackgerritMichael Still proposed openstack/nova master: Move xenapi partition copies to privsep.  https://review.openstack.org/55360522:58
openstackgerritMichael Still proposed openstack/nova master: Move image conversion to privsep.  https://review.openstack.org/55443722:58
openstackgerritMichael Still proposed openstack/nova master: We no longer need rootwrap.  https://review.openstack.org/55443822:58
openstackgerritMichael Still proposed openstack/nova master: We don't need utils.trycmd any more.  https://review.openstack.org/55443922:58
*** fragatina has quit IRC22:58
*** fragatina has joined #openstack-nova22:58
*** hongbin has quit IRC22:58
*** gouthamr has joined #openstack-nova22:59
mikalefried: I've replied to your comments on the first patch, but the only changes were in comments. I'm not sure that was worth a -1.22:59
sean-k-mooney[m]mikal:  no more rootwrap will be quite nice to see :)23:01
sean-k-mooney[m]mikal: also i like your blueprint url23:01
mikalsean-k-mooney[m]: ta, I do try. Other blueprints you might enjoy include execs-ive-had-a-few...23:01
*** salv-orlando has joined #openstack-nova23:02
*** yassine has joined #openstack-nova23:06
mlavallejaypipes: could we structure this RP tree https://review.openstack.org/#/c/502306/22/specs/rocky/approved/bandwidth-resource-provider.rst@432 so an allocation can be satisfied by ANY of the networks in the tree?23:06
*** felipemonteiro has joined #openstack-nova23:07
*** salv-orlando has quit IRC23:08
*** yamamoto has joined #openstack-nova23:10
jaypipesmlavalle: not sure I'm following your question...23:11
*** AlexeyAbashkin has joined #openstack-nova23:12
mlavallejaypipes: I'll pose my question in the spec because I have to run in 15 minutes23:12
mlavallethanks for replying23:12
*** fragatina has quit IRC23:14
*** fragatina has joined #openstack-nova23:14
*** yamamoto has quit IRC23:15
*** AlexeyAbashkin has quit IRC23:17
*** suresh12 has quit IRC23:19
*** awaugama has joined #openstack-nova23:19
*** suresh12 has joined #openstack-nova23:19
*** suresh12 has quit IRC23:24
*** suresh12 has joined #openstack-nova23:25
*** felipemonteiro has quit IRC23:25
*** harlowja has quit IRC23:32
*** mlavalle has quit IRC23:32
*** gouthamr has quit IRC23:35
*** chyka has quit IRC23:35
*** germs has joined #openstack-nova23:35
*** germs has quit IRC23:35
*** germs has joined #openstack-nova23:35
*** chyka has joined #openstack-nova23:35
*** germs has quit IRC23:36
*** germs has joined #openstack-nova23:36
*** germs has quit IRC23:36
*** germs has joined #openstack-nova23:36
*** chyka has quit IRC23:39
*** mriedem_afk has quit IRC23:43
*** takashin has joined #openstack-nova23:44
*** awaugama has quit IRC23:44
*** yamahata has quit IRC23:44
*** hshiina has joined #openstack-nova23:55
*** germs has quit IRC23:58
*** germs has joined #openstack-nova23:59
*** germs has quit IRC23:59
*** germs has joined #openstack-nova23:59
