18:02:14 #startmeeting networking_policy
18:02:15 Meeting started Thu Aug 20 18:02:14 2015 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:17 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:19 The meeting name has been set to 'networking_policy'
18:02:36 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Aug_20th_2015
18:03:12 we have released Kilo now!
18:03:36 thanks to the team for the fantastic effort
18:03:56 I think we accomplished quite a lot
18:04:08 SumitNaiksatam: ++
18:04:12 details of which will be posted in the release notes page:
18:04:21 great
18:04:22 #link https://wiki.openstack.org/wiki/GroupBasedPolicy/ReleaseNotes/Kilo
18:04:54 ransari: hi
18:05:01 hi
18:05:07 ransari: we were just discussing that we released Kilo
18:05:20 Good milestone!!
18:05:27 ransari: thanks
18:05:33 tarballs have been posted on Launchpad
18:06:11 as before, we will continue to fix bugs as and when we find them, and backport them
18:06:18 ok
18:06:55 any thoughts or questions about the release (or concerns, if you tested it)?
18:07:31 ok
18:07:33 Any plans for distribution packages? :-)
18:07:50 Yi: okay, good segue
18:08:03 #topic Packaging update
18:08:07 I plan to update the Fedora/RDO packages to the kilo and stable/juno releases, but am not sure when
18:08:42 rkukura: okay :-)
18:09:42 rkukura: any other update on the packaging/distro front?
18:09:55 no
18:10:16 rkukura: okay, thanks
18:10:51 #topic Integration testing
18:10:56 I don't have much of an update on the integration tests
18:11:14 except that the rally job needs to be tweaked a little more
18:11:32 currently it votes +1 even if we have less than 100% success on a test
18:12:02 ideally all concurrent tests in this suite should return 100% success
18:12:10 however I have seen at times that they don't
18:12:26 so we need to decide what is acceptable and what is not, and where the issue is
18:12:55 currently I am running 10 concurrent operations for every test
18:12:59 SumitNaiksatam: +1, I think this is of vital importance
18:13:08 SumitNaiksatam: how many servers/workers?
18:13:11 ivar-la__: yeah
18:13:39 ivar-la__: good question, I believe I am using the default
18:14:00 SumitNaiksatam: so one server with 0 workers, I think
18:14:16 ivar-la__: definitely one server
18:14:18 that is important, one server with 0 workers is basically serial IIUC
18:14:37 ivar-la__: I agree that if there is only one worker it should be serial
18:14:41 you either need more than one server or more than one worker
18:15:01 ivar-la__: yes, we cannot do more than one server in that gate environment
18:15:13 the difference is that external locking (oslo_concurrency) won't work across servers, but it will across workers
18:15:48 SumitNaiksatam: seems legit
18:15:59 ivar-la__: right
18:16:29 assuming it's “0” workers (which is one thread), we are still seeing issues sometimes
18:16:57 we will need to investigate the results of those tests more carefully
18:17:11 #topic Bugs
18:17:25 SumitNaiksatam: the default number of API workers seems to be the number of CPUs
18:17:30 SumitNaiksatam: as for Kilo/Liberty
18:17:40 ivar-la__: ah okay
18:18:09 zero in Juno
18:18:10 so we have had some pending reviews sitting in the queue for a while now
18:18:43 ivar-la__: okay, need to check what that particular devstack is doing, since I did not override the conf
18:18:54 there are a few pending reviews in the review queue
18:19:12 and we should decide how we want to make progress
18:19:48 #link https://review.openstack.org/166424 (Admin or Provider tenant owns service chain instances)
18:20:09 ransari: I believe you mentioned that you prefer the default to be admin?
18:20:23 One clarification I have w.r.t. some of the patches under review. Some introduce DB migrations. If this is backported to Juno, on production deployments, will it require a re-install of the RPM, or will an RPM upgrade work?
18:20:35 My mistake, I prefer the default to be provider
18:20:49 ransari: ah good, so the current patch will work for you?
18:20:58 yes, I believe so.
18:21:09 the provider is already the default
18:21:21 the DB migration reference was this: https://review.openstack.org/#/c/170235/
18:21:37 ransari: nice, not sure if magesh is around, but it will help to get him to review this and make progress
18:21:41 I need to do two things there... First finish the rebase (was a pretty ugly one and it's unfinished)
18:21:43 ransari: very good point to bring up
18:21:51 ivar-la__: yes, sure
18:21:54 ivar-la__: Any update on allowing the tenant to be specified by name?
18:22:06 and then address rkukura's comment about using the admin name... Although it doesn't look very straightforward
18:22:24 rkukura: thanks for bringing that up
18:22:32 We need an extra user (probably the neutron user in Keystone) that can ask Keystone for info about this particular tenant
18:22:40 In my nova testing, I’ve been using the UUID of the “service” tenant.
18:23:02 sumit: we will review https://review.openstack.org/166424 and confirm
18:23:05 But by default, I'm not sure the Neutron service tenant can retrieve information about other tenants
18:23:20 ransari: okay, thanks
18:24:14 The RH default, I believe, does have the admin user added to the services tenant
18:24:26 ivar-la__: That’s the tenant I’ve been using to own the nova VMs, etc. I didn’t mean to imply it could query keystone, but maybe it can.
18:25:31 rkukura: what user are you using?
18:26:00 SumitNaiksatam: “service”
18:26:37 rkukura: so the Nova VMs aren't owned by the service_admin?
18:26:39 Does the "service" user have the admin role?
18:26:46 rkukura: okay
18:26:55 rkukura: that makes sense... I think we don't have any way to authenticate against keystone with that
18:27:12 rkukura: would it work if we used service_tenant_name and service_tenant_password?
18:28:01 ivar-la__: I think that would let us authenticate with keystone
18:28:18 yes, so service_tenant could own the VMs and do "external calls"
18:28:29 and still we should be able to get the UUID for internal ones
18:29:46 ivar-la__: did not quite understand the internal vs external calls, do you mean neutron vs other component calls?
18:30:33 mageshgv: yes... In Neutron we just need a context with the proper tenant_id, not a full-fledged Keystone token
18:30:41 mageshgv: at least for now
18:30:46 ivar-la__: okay
18:32:30 mageshgv: I just noticed you posted the following #link https://bugs.launchpad.net/bugs/1487156
18:32:30 Launchpad bug 1487156 in Group Based Policy "Fetching instance metadata fails for Policy Target VMs" [Undecided,New]
18:32:47 I believe this would be a high priority to fix
18:33:11 SumitNaiksatam: yes, this is a high priority one now
18:33:19 I think this would have crept in when we started applying the egress rules
18:33:45 SumitNaiksatam: right
18:34:05 mageshgv: I am wondering what else we need to open up
18:34:51 SumitNaiksatam: I think we should take a look at the rules added by the L3 agent/Firewall. I notice that this IP is opened up there
18:35:42 mageshgv: which IP?
18:36:21 SumitNaiksatam: The metadata server IP. I meant the ports/IPs opened up by openstack fwaas
18:36:33 should give us an idea
18:36:54 mageshgv: That is a good suggestion
18:37:16 mageshgv: hmmm, I don't recall opening up anything by default in fwaas
18:37:37 mageshgv: since it's applied at the perimeter
18:37:59 anyway, we can investigate
18:38:17 SumitNaiksatam: okay, I will take a look once more
18:39:09 ivar-la__: so the metadata IP can be added to the default SG?
18:39:34 sure
18:39:50 ivar-la__: okay
18:39:54 it will be needed for outgoing traffic
18:40:27 ivar-la__: right
18:40:44 #link https://bugs.launchpad.net/group-based-policy/+bug/1484425
18:40:44 Launchpad bug 1484425 in Group Based Policy "GBP Allows the same PTG to be the provider and consumer of a Contract" [Undecided,Opinion]
18:40:52 mageshgv: you have a patch for this as well
18:41:00 #link https://review.openstack.org/#/c/212676/
18:41:12 however this is currently working as designed
18:41:40 I meant from an API perspective
18:42:07 I’ve wondered about the design on this
18:42:33 Are PRSes supposed to be needed for PTs in the same PTG to talk to each other?
18:42:44 rkukura: no
18:42:47 SumitNaiksatam: yes, we do not restrict it at the API, but what does it mean when we say a group both provides and consumes the same contract
18:42:57 mageshgv: yes
18:43:14 mageshgv: a particular policy driver can choose to not support this
18:44:04 SumitNaiksatam: I am trying to understand what the use case of such a scenario would be
18:45:53 mageshgv: there are certain contracts, like infrastructure services’ contracts
18:46:26 mageshgv: which the providing PTG also needs, but might not necessarily be getting serviced by its own members
18:47:18 mageshgv: was this creating an issue in the resource mapping driver?
18:47:37 SumitNaiksatam: This seems to contradict your answer to my question above.
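[Editor's note on bug 1487156, discussed above: the fix floated in the meeting is adding an egress rule for the metadata server IP to a group's default security group. A minimal sketch follows; the field names mirror Neutron's security-group-rule API, but the helper name and the exact rule (TCP port 80 to the link-local metadata address) are assumptions about the eventual fix, not the merged patch.]

```python
# Hypothetical sketch of the egress rule discussed for bug 1487156:
# allow Policy Target VMs outgoing HTTP access to the Nova metadata
# service. Field names follow Neutron's security-group-rule API;
# that the real fix opens exactly this rule is an assumption.

METADATA_SERVER_IP = "169.254.169.254/32"  # well-known link-local metadata address

def metadata_egress_rule(security_group_id, tenant_id):
    """Build a security-group-rule body opening egress HTTP traffic
    to the metadata server, for adding to a PTG's default SG."""
    return {
        "security_group_rule": {
            "security_group_id": security_group_id,
            "tenant_id": tenant_id,
            "direction": "egress",      # needed for outgoing traffic, per the discussion
            "ethertype": "IPv4",
            "protocol": "tcp",
            "port_range_min": 80,       # metadata service listens on HTTP port 80
            "port_range_max": 80,
            "remote_ip_prefix": METADATA_SERVER_IP,
        }
    }
```

[A resource-mapping driver could pass a body like this to Neutron's create_security_group_rule call when wiring up a group's default security group.]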
18:48:23 SumitNaiksatam: This needs more thought, since we do not have a selector which can choose from a list of providers
18:48:27 rkukura: a PRS is not needed for communicating within the same PTG
18:48:49 mageshgv: that is correct, we had the scope notion but haven't implemented it
18:48:56 SumitNaiksatam: yes, this was causing issues when we have a chain
18:49:23 mageshgv: so I agree with you that this can result in ambiguity with our current implementation
18:49:53 mageshgv: but I am suggesting that we do the validation and raise the error in the driver, as opposed to in the API itself
18:50:14 SumitNaiksatam: So is the PRS needed only to insert a service chain between PTs in a PTG, or am I really confused?
18:51:04 rkukura: no no, by services’ I did not mean the L4-7 services and their chains
18:51:12 providing/consuming at the same time is a peer relationship
18:51:28 so how do you establish the PRSs between two peer groups?
18:52:32 we have 8 mins, so perhaps this is a slightly longer discussion requiring more concrete examples
18:52:39 I will send out an email on this
18:52:41 SumitNaiksatam, rkukura, ivar-la__: I think we should first clear this ambiguity in the model
18:53:06 SumitNaiksatam, mageshgv: I’d appreciate that
18:53:32 #link https://bugs.launchpad.net/group-based-policy/+bug/1460831
18:53:32 Launchpad bug 1460831 in Group Based Policy "API for group update is not clear" [High,Confirmed] - Assigned to Sumit Naiksatam (snaiksat)
18:53:37 mageshgv: what ambiguity?
18:53:45 I was trying to address this in a backward-compatible way
18:54:29 mageshgv: I don't think that is ambiguous... I am a provider of a service, but at the same time I need to consume it from someone else
18:55:01 ivar-la__: correct
18:55:18 so in #link https://review.openstack.org/#/c/209409/ I was trying to allow both the old format and the new format
18:55:20 ivar-la__: right, but we do not have a way of specifying from which provider we will be consuming
18:55:38 however I realized that this creates an issue with the dict that we return
18:55:54 mageshgv: that's the point of having contracts, the "destination" is not specified there
18:56:02 mageshgv: otherwise they would be firewalls
18:56:10 should we return the provided/consumed PRS dict in the new format or the old format?
18:56:26 since we can specify only one
18:56:36 hence I did not push for this patch to get into Kilo
18:56:54 I would like to hear your suggestions on this (since we want to make this a non-disruptive transition in the API)
18:57:16 similar concerns might apply to other API changes that we might want to make
18:57:35 I guess if we introduce microversioning then this is not a concern
18:57:45 ivar-la__: for me, it looks really confusing when I say a particular group provides as well as consumes the same contract, say X
18:57:54 SumitNaiksatam: can you give an example of the new format?
18:58:13 ivar-la__: it's posted in the review commit message
18:58:50 ok, we have a couple of minutes
18:59:10 any other high-priority reviews that you want to bring to the attention of the rest of the team?
18:59:41 SumitNaiksatam: what about UUID:scope, but with scope implicitly set to None or '' when not provided?
18:59:56 so you can POST {'policy_rule_set': UUID1}
19:00:07 and it will become {'policy_rule_set': UUID1:''}
19:00:11 ivar-la__: that's an option, but in the future we might want to add “status” as well to that relationship
19:00:17 that's ugly, but compatible I think
19:00:28 SumitNaiksatam: oh right
19:00:35 ivar-la__: yes, I did think of what you said, that is definitely compatible
19:00:55 #topic Open Discussion
19:01:13 ransari: sorry we did not get time to address your upgrade point
19:01:24 let's go to #openstack-gbp for that
19:01:57 one suggestion - I think a bunch of folks are going to be on vacation next week, so we won't meet next week
19:02:20 okay, I think we are 2 mins over
19:02:27 thanks all for joining today
19:02:28 mageshgv: let's say you have an ICMP PRS, and you want all your groups to be able to ping each other... who do you choose as the provider?
19:02:39 bye!
19:02:42 bye!
19:02:45 bye
19:02:45 bye
19:02:46 ivar-la__: yes, simple example
19:02:48 bye
19:02:53 #endmeeting
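[Editor's note on the backward-compatible PRS format discussed in the closing minutes: the "UUID:scope" idea could be normalized server-side roughly as sketched below. This is illustrative only; the function name and the treatment of a missing scope as '' are assumptions drawn from the discussion, not code from review 209409.]

```python
# Sketch of the "UUID:scope" normalization proposed at the end of the
# meeting: old-style plain UUIDs and new-style "UUID:scope" strings both
# normalize to a {uuid: scope} mapping, with scope defaulting to '' when
# not provided. Names here are illustrative, not from the actual patch.

def normalize_policy_rule_sets(refs):
    """Accept a list like ['uuid1', 'uuid2:scopeA'] and return
    {'uuid1': '', 'uuid2': 'scopeA'}."""
    normalized = {}
    for ref in refs:
        uuid, _sep, scope = ref.partition(":")  # scope is '' when ':' is absent
        normalized[uuid] = scope
    return normalized
```

[Under this scheme, a client POSTing the old plain-UUID form and one POSTing "UUID:" with an empty scope end up with the same stored relationship, which is what makes the transition non-disruptive.]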