Wednesday, 2018-05-09

*** username_ has joined #openstack-cyborg01:21
*** username_ is now known as username__01:23
*** dolpher has joined #openstack-cyborg02:29
*** username__ has quit IRC02:35
openstackgerritLi Liu proposed openstack/cyborg master: Added attribute object and its unit tests
*** evin has quit IRC05:47
*** evin has joined #openstack-cyborg06:15
*** xinran__ has joined #openstack-cyborg06:25
xinran__hi, all. does cyborg have api doc?06:26
*** masber has quit IRC07:36
*** mszwed has quit IRC10:05
*** mszwed has joined #openstack-cyborg10:06
*** xinran__ has quit IRC12:05
*** circ-user-pF3Sp has joined #openstack-cyborg13:02
*** Li_Liu has joined #openstack-cyborg13:51
*** zhipeng has joined #openstack-cyborg13:51
*** circ-user-pF3Sp has quit IRC13:54
*** circ-user-IlMPE has joined #openstack-cyborg13:54
*** evin has quit IRC13:55
*** Helloway has joined #openstack-cyborg13:56
*** NokMikeR has joined #openstack-cyborg13:56
*** Helloway has quit IRC13:56
*** Helloway has joined #openstack-cyborg13:57
*** xinran__ has joined #openstack-cyborg13:59
*** shaohe has joined #openstack-cyborg14:01
zhipeng#startmeeting openstack-cyborg14:01
openstackMeeting started Wed May  9 14:01:06 2018 UTC and is due to finish in 60 minutes.  The chair is zhipeng. Information about MeetBot at
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:01
*** openstack changes topic to " (Meeting topic: openstack-cyborg)"14:01
openstackThe meeting name has been set to 'openstack_cyborg'14:01
zhipeng#topic Roll Call14:01
*** openstack changes topic to "Roll Call (Meeting topic: openstack-cyborg)"14:01
zhipeng#info Howard14:01
sum12#info sum1214:01
NokMikeR#info Mike14:02
Helloway#info Helloway14:02
edleafe#info Ed14:02
Li_Liu#info Li_Liu14:02
shaohe#info shaohe14:02
zhipengHi sum12, could you introduce a bit about yourself ?14:03
xinran__#info xinran__14:03
*** sundar has joined #openstack-cyborg14:03
sum12Hey, I am Sumit. I work for SUSE and have been part of some other projects already.14:03
sundarHi Sumit14:03
zhipengwelcome :)14:03
Li_LiuHi Sumit14:03
sum12Thanks everyone :)14:04
sundar#info Sundar14:04
shaohewelcome sum1214:04
sum12glad to be here, thanks sundar zhipeng Li_Liu NokMikeR sundar14:05
zhipengokey let's get into business14:06
zhipeng#topic driver subteam meeting time14:06
*** openstack changes topic to "driver subteam meeting time (Meeting topic: openstack-cyborg)"14:06
zhipengso shaohe has helped start a poll14:07
zhipengand it seems three options are at the top14:07
sundarCould somebody post the poll link here?14:08
zhipengUTC 1:00am Mon, UTC 2:00pm Mon, UTC 8:30am Wed14:08
shaohesundar: any preference on the time?14:10
zhipenglet's pick one :)14:10
Li_LiuI am good with anytime at night(which is morning back in China)14:11
sundarYes, Shaohe. I clicked on the doodle now. Sunday eve or Mon 7 am PST are both good14:12
zhipengok let's do the Mon UTC 2:00pm14:13
zhipengwhich will be 10:00pm in China, 10:00am EST, 7:00am PST14:13
zhipengit's the most popular time14:15
zhipengokey ?14:15
sundarSounds good to me!14:15
zhipeng#agreed driver subteam meeting time every Mon UTC140014:16
Li_Liusounds good14:16
zhipeng#topic new core reviewer promotion14:16
*** openstack changes topic to "new core reviewer promotion (Meeting topic: openstack-cyborg)"14:16
zhipengin order to increase our review bandwidth, i hereby promote Sundar to be the new core reviewer14:17
zhipengSundar has been very active and taking charge of several important specs14:17
zhipengso as usual, we will have one week time for any feedback, and acknowledgment of the promotion next Wed :)14:17
kosamara#info kosamara14:18
sundarThanks, zhipeng14:18
Li_Liugratz :)14:18
shaohegratz :)14:19
zhipengwell not yet :)14:19
zhipenglet's wait for a week for feedback14:19
sundarThanks, Li_Liu and shaohe :). As zhipeng says, one week from now, we will know.14:20
zhipengbut would like to thank Sundar for his great effort so far14:20
zhipengokey moving on14:20
zhipeng#topic KubeCon feedback14:20
*** openstack changes topic to "KubeCon feedback (Meeting topic: openstack-cyborg)"14:20
zhipengokey last week i attended KubeCon and the resource mgmt wg deep dive session14:20
zhipengk8s res mgmt wg is the center in k8s which deals with general and acceleration resources14:21
zhipengMy takeaway is that the support for a general accelerator mgmt, is still not in any shape in k8s14:22
zhipengGoogle is interested in GPU passthrough support for ML, mainly14:22
zhipengso if anyone wants to introduce any feature in-tree14:23
zhipengthat would require a PoC up-front14:24
zhipengmany things we discussed here, like vGPU types, general accelerator support including FPGA and others14:24
zhipengare viewed non-priority14:25
zhipengthe resource class/resource api PRs are also a long shot14:25
zhipengaccording to vishnu14:25
sundarzhipeng: Agreed with your assessments. If we can get just one feature in -- passing annotations to the device plugin API -- that will help us meet most basic FPGA use cases, IMHO14:27
zhipengSasha mentioned Intel team has finished a FPGA DPI PoC14:27
zhipengbut also just pre-programmed FPGAs14:28
sundarWe can couple that with a scheduler extension. However, scheduler extensions are not viewed favorably because the scheduler framework itself may change and the APIs may change along with it.14:28
zhipengyes, and also DPI is designed at the node level14:29
sundarHowever, we can do a POC based on it, including programming support, and revamp it when the APIs change. Just my thought. :)14:29
zhipengand a mostly "reschedule" focused mechanism14:29
sundarYes, we do have a POC that does only pre-programmed use case. That does not show the strength of FPGAs, which is reprogramming14:29
zhipengso DPI is designed to mostly work for hot-plug use case, not scheduling upfront14:30
zhipengthe scheduling will be retriggered once the node discover the DPI Plugin14:30
zhipenganyways it is the current lay of land14:30
zhipengso my thinking is, maybe it is reasonable to introduce a CRD framework for cyborg into k8s community14:31
zhipengso that we could have all of our data model preserved, have leeway on the api and scheduling design14:31
zhipengmaintain a k8s-ish API interface14:31
zhipengand an out-of-band general accelerator mgmt functionality, not bound to DPI development14:32
zhipengi don't know what other team members' thoughts are on this matter ?14:32
sundarThe CRD framework does not allow AFAIK for a nested topology, which OpenStack supports.14:32
zhipengCRD is just an API mechanism right ?14:33
NokMikeRneeds more blinking lights14:33
zhipengnot implementation specific14:33
sundarYes. How do we model regions inside FPGAs, accelerators inside regions, local memory inside either ...14:33
zhipengthat could all be done in Cyborg14:34
zhipengfor example if you look at kubernetes service catalog14:35
Li_Liuk8s will do the scheduling for Cyborg then?14:35
zhipengwe could use a scheduling extension maybe for that14:35
zhipengbut I doubt Google wants to have the k8s core doing scheduling that taking accelerators into consideration14:36
sundarThe Cyborg implementation could relate different resources. Agreed. The CRD discussions also seem to get into resource classes etc., which seem to be a long shot, as you said. Yes, agreed that scheduling core cannot be changed14:37
Li_Liubut the scheduling extension still require some change in k8s main tree right?14:37
*** evin has joined #openstack-cyborg14:37
sundarLi_Liu: scheduler extension is a standard mechanism in K8s today14:38
zhipengyep what sundar said14:38
sundarHowever, the scheduler framework itself may evolve, and the extension APIs may change along with it14:38
sundarLink to proposed K8s scheduler framework:
zhipengsome crd fundamentals14:40
shaohezhipeng: can not open it.14:41
zhipengso as I understand, crd is basically a way that we write a non-core k8s-ish controller14:41
zhipengshaohe you need vpn14:41
zhipengit listens upon the api-server14:42
zhipengand the keyword will trigger the request going to the crd controller, instead of the core k8s controller14:42
zhipengbasically a hat on cyborg, if you will14:42
Li_Liuso it's a subscribe/notify model right?14:44
zhipengin essence, as I understand yes14:44
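As a sketch of that "hat on Cyborg" idea, a CustomResourceDefinition would register an accelerator resource with the api-server, and a controller would then list/watch those objects -- the subscribe/notify model discussed above. Everything below (group, names, fields) is hypothetical, not an agreed Cyborg design:

```yaml
# CRD API group/version as of the K8s 1.10 era
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # must be <plural>.<group>; names here are made up for illustration
  name: accelerators.cyborg.example.org
spec:
  group: cyborg.example.org
  version: v1alpha1
  scope: Namespaced
  names:
    plural: accelerators
    singular: accelerator
    kind: Accelerator
```

Once applied, requests for `Accelerator` objects bypass the core controllers and are reconciled by the custom (Cyborg-backed) controller watching the api-server.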
NokMikeRin go land14:46
*** shaohe has quit IRC14:46
zhipengyep :)14:47
NokMikeRis it trivial to use python from go, or how does the cyborg api interaction work with k8s and go?14:48
zhipengwe could have gRPC clients that abstract away the lang difference14:48
NokMikeRI need that for English to Finnish to English also :)14:49
zhipengyou need google duplex for that :P14:50
sundarExactly, gRPC -- as Zhipeng said. The controller is a separate daemon. Also, kubelet and Cyborg DP will also be separate processes.14:51
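A gRPC bridge like the one described here could start from a small service definition; both sides then generate native stubs (grpcio for Python, google.golang.org/grpc for Go), so neither language calls into the other directly. This .proto is purely illustrative -- the service and message names are made up, not Cyborg's actual API:

```proto
syntax = "proto3";

package cyborg.sketch;

// Hypothetical accelerator-management service shared between the
// Python Cyborg daemons and a Go device-plugin/CRD controller.
service AcceleratorManager {
  rpc ListAccelerators (ListRequest) returns (ListReply);
}

message ListRequest {
  string node = 1;  // host to query
}

message Accelerator {
  string uuid = 1;
  string type = 2;  // e.g. FPGA, GPU
}

message ListReply {
  repeated Accelerator accelerators = 1;
}
```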
zhipenglet's keep discussion alive offline :)14:52
zhipeng#topic bugs and issues14:52
*** openstack changes topic to "bugs and issues (Meeting topic: openstack-cyborg)"14:52
zhipengshaohe a colleague of mine reported that when devstack starts, he could not find cyborg services14:53
zhipenghave you encountered similar problem ?14:53
*** shashaguo has joined #openstack-cyborg14:54
zhipengshaohe dropped i think14:55
zhipengokey let's move on to the next topic then14:55
NokMikeRI reported the same problem many moons ago, but have not tried lately to install it14:55
zhipengNokMikeR i think during some of the past fixes it turns out ok14:56
zhipengi'm not sure if some of the recent patches breaks it14:56
NokMikeRthe mutable config page shows a failure?
*** shaohe_ has joined #openstack-cyborg14:57
zhipenghey welcome back14:58
zhipengNokMikeR that should be already fixed14:58
zhipengso the specific problem is c-cond and c-agent are not running15:00
zhipengshaohe_ that is not normal right ?15:00
zhipengif devstack succeeds15:01
zhipengc-api, c-cond, c-agent should all be running right ?15:01
shaohe_devstack should report error, if c-cond and c-agent are not running15:01
zhipengok i will contact the author15:02
zhipengokey moving on15:02
zhipeng#topic spec review day15:03
*** openstack changes topic to "spec review day (Meeting topic: openstack-cyborg)"15:03
shaohe_it should be cyborg-agent, cyborg-api, cyborg-cond15:03
zhipengnova and other teams all have this custom of spec sprints, or runways15:03
zhipenglet's have one as well15:03
zhipengshaohe_ ok I will let him know15:03
shaohe_is c-xxx cinder process?15:04
zhipengmaybe he just referred wrong15:04
zhipengI should double check with him15:04
zhipengokey back to topic15:05
zhipenglet's start with the "old ones" :)15:06
zhipengwe should be ready to land those15:06
* NokMikeR braces for impact15:06
zhipengfirst up15:07
zhipeng#info python-cyborgclient framework15:07
zhipengshaohe_ mentioned the client code is actually ready, so let's land this one15:07
zhipengany objections ?15:07
Li_Liuagree, I think we can merge that one15:07
sundarI think the syntax is not quite in line?15:07
Li_Liujust finished reviewing15:08
sundarOther commands are like 'openstack server ...' where the 2nd argument is the object on which an action is applied15:08
sundarWhereas we are proposing 'openstack acceleration ...' I saw Shaohe respond to my comment.15:08
sundarIt is not clear to me why we cannot have 'openstack accelerator <list/show/...>'15:09
NokMikeRwondering why its not openstack cyborg list/show ?15:09
zhipengbecause accelerator more like an object that we will choose to act upon15:09
zhipengNokMikeR legal issue15:10
sundarNokMikeR: Just as we have 'openstack server ..' instead of 'openstack nova '15:10
shaohe_cyborg is project name, acceleration is service type.15:10
edleafeyeah - project names should be avoided15:10
NokMikeRclarified thanks.15:11
Li_Liuthat's good to know :)15:11
sundarshaohe: It is not clear to me why we cannot have 'openstack accelerator <list/show/...>'15:11
zhipengsundar see my above comment15:12
zhipengbecause accelerator more like an object that we will choose to act upon15:12
zhipengacceleration is a type of service, like server service, volume service15:12
sundarzhipeng, yes. Like server/image etc. shouldn't it be the 2nd arg?15:12
zhipengwe might have something like openstack acceleration fpga create15:12
zhipengshaohe_ correct me if I'm wrong15:13
shaohe_yes. I have looked at other projects.15:13
shaohe_they do use this syntax15:13
shaohe_some use this syntax15:14
NokMikeRis it command(create) then object(fpga) ?15:14
NokMikeRopenstack [<global-options>] <object-1> <action> [<object-2>] [<command-arguments>]15:15
shaohe_yes as NokMikeR suggestion15:15
zhipengNokMikeR yes the example i mentioned might not be strictly correct :P15:15
shaohe_global-options can be service type15:15
shaohe_for cyborg the service type is acceleration.15:16
shaohe_for cinder the service type is volume15:16
edleafe#link openstackclient guidelines:
shaohe_for glance the service type is image15:16
sundarshaohe: May be I still have a disconnect. :) "openstack [<global-options>] <object-1> <action> [<object-2>] [<command-arguments>]" Where is the service here?15:16
edleafesundar: object-115:17
sundarSo, we will have syntax like 'openstack acceleration create <object-2> ...'15:18
zhipengsomething like that15:18
shaohe_openstack network --help |less15:18
sundarIf everybody else is ok with it, I am ok too :)15:18
edleafeI would suggest s/acceleration/accelerator15:19
shaohe_openstack  network flavor create15:19
zhipengedleafe any reason for that ?15:19
sundarIMHO, the term 'accelerator' is more in line with other usages -- but go ahead :)15:20
edleafe^^ what sundar said15:20
zhipengya I'm just thinking when we actually use accelerator it will need to be more specific (FPGA, GPU, ...)15:21
zhipengacceleration could just represent a service type offered by cyborg in general15:21
shaohe_sundar: edleafe: $ openstack --help |grep volume15:21
xinran__Yes I think accelerator is more reasonable15:21
sundarzhipeng: yes, openstack accelerator create fpga <args> -- here 'fpga' is 'object-2'15:21
NokMikeRacceleration implies more than one device? accelerator is singular or one device. If we debate this to the end we end up somewhere in particle physics... :)15:21
zhipenghaha NokMikeR15:22
zhipengbut I think accelerator has more votes here15:22
sum12i feel accelerator/acceleration are both too complicated as compared to service/image/network/...15:22
zhipengsum12 lol what is your suggestion15:22
sum12I am suggesting to ask whether we have anything easier in our arsenal ?15:23
zhipengmaybe just acc ?15:23
shaohe_sum12: acc for abbreviation?15:23
zhipengshaohe_ man crush15:24
sundarsum12: we use the term 'accelerator' or 'device'. But the term device is too general?15:24
zhipengsundar ya that might give people confusion15:24
sum12accel ?15:24
zhipengaccel a little bit too Xilinx-ish ?15:24
sundarsum12: accel is ok by me. I use that abbrev too15:24
zhipengokey anyone else ?15:24
zhipengif accel could work then accel it is :)15:25
sundarIf we give bash completions15:25
sundarIf we provide bash completions, it may not matter :)15:25
edleafe'accelerator' is a known term for computers.15:25
edleafe'accel' not so much15:25
shaohe_openstack command support bash completion15:25
sundarsum12: with bash completion, do you still see a problem with 'accelerator'?15:26
shaohe_let see cinder's command name15:26
shaohe_$ openstack --help |grep volume15:26
shaohe_  volume type create  Create new volume type
  volume type delete  Delete volume type(s)
  volume type list  List volume types
  volume type set  Set volume type properties
  volume type show  Display volume type details
  volume type unset  Unset volume type properties
  volume unset  Unset volume properties15:27
shaohe_oh, sorry15:27
shaohe_  volume type create  Create new volume type15:27
shaohe_  volume snapshot create  Create new volume snapshot15:27
sum12bash completion is not a problem, but if I were a devops guy I would like something small and easy to remember (for scripting) and not too character-y15:27
NokMikeR they are listed here15:27
sundarFrom the link by NokMikeR: $ volume type list   # 'volume type' is a two-word single object15:28
shaohe_$ openstack volume type create --help15:28
shaohe_usage: openstack volume type create [-h] [-f {json,shell,table,value,yaml}]15:28
edleafesum12: at least it isn't as long as 'application credential' :)15:29
sum12edleafe: :)15:29
zhipengdo we have a consensus now ? :)15:29
shaohe_sundar: yes, it is two-word single object15:29
shaohe_and volume is the service type.15:30
sundarWe have a single-word single object :) which is better15:30
sundarAnyways, I vote for 'openstack accelerator <command> <object-2> ...' That's my 2 cents15:30
zhipengi vote for that as well15:31
NokMikeRsame here if Im allowed.15:32
shaohe_OK, remove the service type.15:32
shaohe_but keep in mind15:32
*** Yumeng__ has joined #openstack-cyborg15:33
kosamaraI would prefer accelerator too15:33
shaohe_if cyborg support a flavor api15:33
shaohe_flavor create/list15:33
shaohe_what should it be?15:33
zhipengsum12 that word looks a bit shorter now ? lol15:33
shaohe_^ sundar:15:33
shaohe_let's show the command:15:34
shaohe_$ openstack --help |grep flavor15:34
shaohe_flavor create  Create new flavor15:34
shaohe_  network flavor create  Create new network flavor15:34
shaohe_there are two flavor,15:34
sundarShaohe: why should Cyborg commands create flavors? Flavors with accelerators should still be under usual command, right? In any case, we can do 'openstack accelerator create flavor ...'15:35
shaohe_first one is nova flavor15:35
shaohe_second one is network flavor.15:35
sundarshaohe: ok. we can do 'openstack accelerator create flavor ...'15:36
shaohe_sundar: but openstack accelerator create flavor is not formal15:36
*** Yumeng__ has left #openstack-cyborg15:36
shaohe_for accelerator is just a collection of our RESTful URLs.15:37
shaohe_and acceleration is the service type we register in keystone15:38
sundarshaohe: Not sure what you mean by 'formal'. If you prefer 'openstack accelerator flavor create', like nova/network, I am fine.15:38
zhipengshaohe_ i think let's just change it to accelerator15:38
zhipengseems like a team consensus at the moment15:38
zhipengafter that update the patch should be good to go :)15:39
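For reference, the agreed 'accelerator' noun would give invocations shaped like the following; the subcommand names are illustrative guesses, not the merged python-cyborgclient interface:

```
openstack accelerator list
openstack accelerator show <uuid>
openstack accelerator deployable list
```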
shaohe_zhipeng: OK, so we register in keystone using accelerator instead of acceleration?15:39
sundarThere was some deprecated code in that patch15:40
shaohe_sundar: I will remove them15:40
sundarThanks, shaohe15:40
zhipengshaohe- yes I guess so15:40
shaohe_OK, let me file a patch to correct the service type firstly.15:41
zhipengmany thx :)15:41
zhipengok given the time in China15:41
*** evin has quit IRC15:42
zhipengcould you provide an update on the quota spec ?15:42
zhipeng#info Quota spec15:42
sundarIs this the Keystone-based quota that we were recommended to use?15:44
xinran__Yes, I have updated the spec. Firstly we should support quota usage in cyborg, and implement the limit part by invoking oslo.limit once the keystone guys finish that15:45
xinran__I have a doubt about resource type. Should we just count the total number of accelerators, or should we count per type, like fpga, gpu, etc.15:46
Li_LiuWe count the number of deployables I think15:47
zhipengLi_Liu shall we count them per type ?15:47
sundarLi_Liu: A FPGA, as well as regions with it, will all be Deployables, right?15:47
shaohe_count granularity15:47
Li_Liuyes, they should be grouped in types for sure15:48
xinran__zhipeng:  seems there is only one resource class(accelerator) for now15:48
sundarSo, would the quota be based on Deployable type? i.e. you can get X regions15:48
Li_Liuthe deployable patch has already been merged15:49
Li_Liuregions are just a type of deployable15:49
sundarI think xinran is right -- quotas are based on resource classes, right?15:49
xinran__sundar:  yes I think so :)15:50
zhipengokey then we could settle upon that :)15:51
shaohe_sundar: only one resource type for quota?15:51
sundarOK, then there is only one resource class in Cyborg -- CUSTOM_ACCELERATOR -- as we agreed with Nova15:51
Li_Liui see, that's what is exposed to nova.15:52
shaohe_for example, there may be spdk software accelerators and vgpu accelerators. do they share the same quota?15:52
sundarshaohe: I think so, but maybe I need to read more on quotas15:52
Li_Liuxinran__ sundar their db existence are deployable just to be clear15:53
sundarLi_Liu: when we get to oslo.limit based on Keystone, as Xinran said, that will be based on resource classes, right?15:53
shaohe_for nova, at present, the granularity is cpu, mem...15:53
edleafesundar: why just one CUSTOM_ class? The whole idea behind CUSTOM_ resource classes is that the service can create what it needs.15:54
shaohe_maybe gpu is also one quota15:54
Li_Liusundar right, deployables are just db existences.15:54
sum12need to drop15:54
sundaredleafe: this is what Nova folks proposed to us, right? :) Are we ok with CUSTOM_ACCELERATOR_FPGA, CUSTOM_ACCELERATOR_GPU, ...?15:55
xinran__I think quota should depends on resource class15:55
shaohe_we should keep in mind, if we use one CUSTOM_ class, we must use nested resource providers.15:55
edleafesundar: I guess I missed that proposal. Was it to keep the Nova flavors simple?15:55
shaohe_or we cannot distinguish the different resources15:56
sundaredleafe: Multiple resource classes would actually be better. I didn't see a specific reason for single RC. Maybe the discussion was centered around vGPU types, and one was enough15:56
shaohe_what's the difference in flavors if we use one CUSTOM_ class or multiple CUSTOM_ classes?15:57
edleafesundar: that sounds more correct. As shaohe_ notes, if I ask for CUSTOM_ACCELERATOR, I might get back an FPGA or a GPU. :)15:57
sundaredleafe: we have traits that distinguish FPGAs of different types, GPUs of different types, ...15:58
sundarSo, the flavor would ask for the traits too, as noted in the spec15:58
sundarBut, for quotas, it would be better to have distinct RCs based on device type (FPGA, GPU, HPTS, ....)15:59
xinran__I also feel a little bit confused why there is only one resource class, but anyway quota should be based on resource class, right?15:59
zhipengLi_Liu any input ?16:00
shaohe_so for one CUSTOM_ class, it should be nested RP first, and the flavor should be: resources:CUSTOM_ACCELERATOR:1, traits: FPGA16:00
shaohe_for multi CUSTOM_ class,  the flavor should be: resources:CUSTOM_FPGA:116:00
shaohe_for multi CUSTOM_ class,  the flavor should be: resources:CUSTOM_GPU:116:00
shaohe_^ edleafe: right?16:01
sundarshaohe: yes, you are right. We don't need nested RPs for this flavor definition.16:01
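The two shapes sketched above, written out as hypothetical Nova flavor extra specs (granular resources:/trait: placement syntax; the CUSTOM_* names are examples only, not registered Cyborg resource classes or traits):

```
# One shared resource class, device type expressed as a required trait:
resources:CUSTOM_ACCELERATOR=1
trait:CUSTOM_FPGA=required

# Per-device-type resource classes, no trait needed for this request:
resources:CUSTOM_FPGA=1
```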
edleafeIt would be better, IMO, to have separate resource classes to distinguish the different devices (GPU vs. FPGA), and use traits to further refine the capabilities of a particular device16:01
sundarBut that will introduce limitations on combining different device types16:02
sundaredleafe: Agreed. Let me amend the spec. Please review it!16:02
shaohe_multi CUSTOM_ class can work without nRP.16:02
edleafesundar: ack16:02
shaohe_one CUSTOM_ class must work with nRP16:02
sundarshaohe: multi CUSTOM_ class without nRPs will also have issues if you combine 2 different FPGAs on same host16:03
Li_Liuzhipeng as far as I'm concerned, using CUSTOM_ACCELERATOR might be a bit too general; if we decide to use it, additional information will be needed to schedule/allocate the resources16:03
xinran__Li_Liu:  like traits?16:03
zhipengwill the way shaohe_ just mentioned work ?16:04
Li_Liuright. coz essentially, we want to guide nova during their scheduling.16:04
shaohe_xinran__: yes, it need traits on sub resource provider.16:04
Li_Liushaohes's way should work16:04
sundarshaohe and all: I have been tracking nRP support in Nova, and apparently we are a week or two away from getting it. edleafe can confirm :) So, maybe we don't have to split hairs over what to do without nRP ;)16:05
sundarEven with CUSTOM_ACCELERATOR_FPGA etc., we still need traits16:06
shaohe_but then we do not need nRP.16:07
sundarSorry, late for my next meeting. I'll do my reviews offline and catch up here. :)16:07
edleafeit does seem that nested RPs may be merged soon16:08
shaohe_edleafe: good. Then one CUSTOM_ class can work.16:09
zhipengsounds very promising :)16:09
edleafeshaohe_: sure, it *can* work, but it isn't really a good design16:09
sundarshaohe: multiple classes are good for quotas16:10
NokMikeRSorry Howard I have to leave physically but will leave this on monitoring. Best of luck sleeping :)16:10
zhipengNokMikeR :) no problem16:11
shaohe_edleafe: I have no objection on one CUSTOM_ class after we support nPR16:11
zhipengokey so since this is a spec review day, Li_Liu sundar plz continue discuss your specs here and I will leave the meeting recorded throughout the day16:11
xinran__Anyway I think quota should accord with RCs... if there is one RC, the quota counts one...16:12
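As an illustration of the per-resource-class counting discussed above, here is a minimal sketch; the deployable records, resource-class names, and limits are made up for the example and are not Cyborg's actual data model:

```python
from collections import Counter

# Hypothetical in-use deployables; resource_class values are examples only.
deployables = [
    {"name": "fpga-0", "resource_class": "CUSTOM_ACCELERATOR_FPGA"},
    {"name": "fpga-1", "resource_class": "CUSTOM_ACCELERATOR_FPGA"},
    {"name": "gpu-0", "resource_class": "CUSTOM_ACCELERATOR_GPU"},
]

def usage_per_resource_class(deployables):
    """Count in-use accelerators grouped by resource class."""
    return Counter(d["resource_class"] for d in deployables)

def check_quota(usage, limits):
    """Return the resource classes whose usage exceeds the project limit."""
    return [rc for rc, used in usage.items() if used > limits.get(rc, 0)]

usage = usage_per_resource_class(deployables)
# With a limit of 1 FPGA, the two in-use FPGAs exceed quota.
print(check_quota(usage, {"CUSTOM_ACCELERATOR_FPGA": 1,
                          "CUSTOM_ACCELERATOR_GPU": 4}))
# prints ['CUSTOM_ACCELERATOR_FPGA']
```

With a single CUSTOM_ACCELERATOR class, the Counter would collapse to one bucket, which is why per-type classes make quota accounting simpler.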
Li_LiuI am gonna grab some lunch, we can discuss this later today16:14
Li_LiuI will keep my session open16:14
Li_Liu@sundar and all16:15
sundarSorry, guys. I am jumping between my meeting and here. I will keep it open too.16:20
*** zhipeng has quit IRC16:23
edleafezhipengh[m]: you forgot to #endmeeting16:28
zhipengh[m]That is on purpose :)16:29
sundarLi_Liu: I am getting customer feedback that some of them want to use function names in Rocky. The problem is that FPGA hardware and bitstreams may expose function IDs, not names.16:34
sundarSo, one possibility is to let the operator apply a Glance property for function name when he onboards a bitstream, and reference that in the flavor. For Cyborg, it is just another Glance property to query -- whether it is function ID or name.16:35
sundarWhat do you think?16:35
*** masber has joined #openstack-cyborg16:40
*** evin has joined #openstack-cyborg16:58
*** shashaguo has quit IRC17:34
*** NokMikeR has quit IRC17:35
Li_Liusundar, I added function_uuid based on your comments for the metadata spec. That uuid should be mapped to a specific function name18:15
*** circ-user-IlMPE has quit IRC18:22
*** Helloway has quit IRC18:23
*** xinran__ has quit IRC18:39
sundarLi_Liu: yes. At least for Rocky, we can leave that mapping to the operator.18:56
sundarIOW, the traits we apply are still based only on UUIDs. No further complexity in Rocky18:59
sundarCan you add a function name as an optional property in your spec?19:00
*** sundar has quit IRC19:04
Li_Liusundar sure will do19:11
*** circ-user-IlMPE has joined #openstack-cyborg19:28
*** circ-user-IlMPE has quit IRC19:32
*** sundar has joined #openstack-cyborg19:40
sundarLi_Liu: Thanks!19:40
*** sundar has quit IRC19:56
*** circ-user-IlMPE has joined #openstack-cyborg20:57
*** circ-user-IlMPE has quit IRC21:01
*** Li_Liu has quit IRC21:12
*** circ-user-IlMPE has joined #openstack-cyborg22:17
*** circ-user-IlMPE has quit IRC22:21
*** circ-user-IlMPE has joined #openstack-cyborg23:26
*** circ-user-IlMPE has quit IRC23:30
*** shaohe_ has quit IRC23:54
