Tuesday, 2017-08-22

00:18 *** yamamoto_ has joined #openstack-dragonflow
00:24 *** yamamoto_ has quit IRC
01:21 *** yamamoto_ has joined #openstack-dragonflow
01:26 *** yamamoto_ has quit IRC
02:22 *** yamamoto_ has joined #openstack-dragonflow
02:28 *** yamamoto_ has quit IRC
02:56 *** yamamoto has joined #openstack-dragonflow
03:02 *** yamamoto has quit IRC
03:35 *** afanti has joined #openstack-dragonflow
04:03 *** yamamoto has joined #openstack-dragonflow
04:09 *** yamamoto has quit IRC
05:05 *** yamamoto has joined #openstack-dragonflow
05:10 *** yamamoto has quit IRC
05:31 <oanson> Morning
05:32 <oanson> dimak, Jenkins looks lovely in https://review.openstack.org/#/c/489003/
05:47 <dimak> oanson, good morning, what a surprise
05:47 <dimak> quick, make tempest voting :{
05:47 <dimak> :P
06:03 <dimak> oanson, was the last recheck due to a specific change?
06:03 <oanson> No
06:03 <oanson> And no to the voting tempest thing
06:03 <oanson> :)
06:04 <oanson> dimak, re https://review.openstack.org/#/c/480196/5/dragonflow/controller/df_local_controller.py@346
06:04 <oanson> Do you have a suggestion?
06:06 <openstackgerrit> Omer Anson proposed openstack/dragonflow master: model proxy: Throw custom exception when model not cached  https://review.openstack.org/495865
06:06 *** yamamoto has joined #openstack-dragonflow
06:11 <dimak> oanson, consider you have an n-item-deep ref chain: a->b->...->n. If it gets updated from n to a, that's n^2 DB gets
06:11 <dimak> I'd go for looking at cached copies
06:11 <dimak> and let sync bring updates on the next run
06:11 <dimak> that way you only have to do depth 1
06:12 <oanson> All right. I'll do that
06:12 <dimak> well, not exactly depth 1
06:12 *** yamamoto has quit IRC
06:12 <oanson> Not depth 1 at all :)
06:12 <oanson> But probably linear
06:13 <dimak> if we could assure there are no invalid references in db-store, it'd be depth one :P
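[The trade-off dimak describes, as a minimal Python sketch. None of these names (nb_api, db_store, obj.references, get/get_one) are Dragonflow's real API; they are assumptions for illustration only.]

    def resolve_refs_from_nb(obj, nb_api):
        # Recursive NB DB fetch: an update near the head of an
        # a->b->...->n chain re-fetches the whole tail, so updating
        # every link costs O(n^2) DB gets overall.
        for ref in obj.references:
            child = nb_api.get(ref.model, ref.id)  # remote DB get
            resolve_refs_from_nb(child, nb_api)

    def resolve_refs_from_cache(obj, db_store):
        # Depth-1 lookup against the local cache; a stale or missing
        # entry is tolerated and repaired by the next sync run.
        return [db_store.get_one(ref.model, ref.id)  # local, may be None
                for ref in obj.references]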
06:13 <dimak> also, there's another issue
06:13 <oanson> Shoot
06:13 <dimak> consider you have a single VM port that belongs to tenant a
06:14 <dimak> and it uses a tenant b shared network through its router
06:14 <dimak> so you DFS there and retrieve the network
06:14 <dimak> and store it in db store
06:14 <dimak> next time sync runs it will throw it away
06:15 <dimak> because we're not subscribed to topic b
06:15 <oanson> Yes
06:15 <oanson> Want to implement garbage collection for models?
06:16 <dimak> nope
06:16 <dimak> we just have to make sure that for each model we fetch, we subscribe to its topic
06:16 <oanson> That's what we need. Subscribed models exist in their own right. Referenced models exist only as long as they are referenced
06:16 <oanson> Yes. Otherwise changes in the router won't be intercepted
06:18 <oanson> We also need to unsubscribe when references drop to 0
06:18 <dimak> You need the whole topic because there might be ports on another topic, and you won't be able to reach them without fetching them
06:18 <dimak> Or just stop maintaining the topic list, and generate it each run in topology
06:19 <dimak> start from ports, get topics, get all objects in the topics, get all references, get all topics, ...
06:19 <oanson> Makes sense
06:19 <oanson> Use topic as a coarse-grain filter, but sync/topology work bottom-up
06:19 <dimak> yes
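[The bottom-up fixpoint dimak outlines ("start from ports, get topics, ...") could look roughly like this. get_all, obj.topic and obj.references are hypothetical helpers, not actual Dragonflow code.]

    def compute_relevant_topics(local_ports, nb_api):
        # Alternately expand topics -> objects -> referenced topics
        # until the topic set stops growing (a fixpoint).
        topics = {port.topic for port in local_ports}
        while True:
            objects = [obj for topic in topics
                       for obj in nb_api.get_all(topic=topic)]
            referenced = {ref.topic for obj in objects
                          for ref in obj.references}
            if referenced <= topics:  # no new topics: done
                return topics
            topics |= referenced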
06:19 <oanson> And we need to identify first-class objects - OVS ports?
06:20 <oanson> Provider networks?
06:20 <dimak> what do you mean?
06:20 <oanson> From where do we start spanning the tree in sync/topology?
06:20 <dimak> I'd say start from ovs ports, and see how far that gets us ;)
06:21 <oanson> Anything that isn't referenced by lports isn't covered: e.g. trunk ports, FIPs
06:22 <dimak> as long as they share the topic it's alright
06:22 <dimak> One issue I could see is
06:22 <dimak> you connect to a provider network by a router from another tenant
06:22 <oanson> Hmm... For trunk ports I guess we can assume the ChildPortSegmentation object is on the same topic
06:23 <oanson> For FIP, I guess the same can be applied
06:23 <oanson> That router is identifiable by is_external:true, right?
06:24 <dimak> a network has router:external
06:24 <oanson> Bah
06:25 <oanson> In general, if a router connects two networks from different topics, one topic won't see it
06:25 <dimak> yes
06:26 <oanson> Multi-topic? :)
06:26 <dimak> how are you going to manage it?
06:26 <dimak> routers can be shared?
06:26 <oanson> In the Neutron API plugin layer.
06:27 <oanson> The topic doesn't *have* to be the tenant_id/project_id
06:27 <dimak> i.e. have a shared property
06:27 <oanson> It can be a derived value. For routers - derived from all the tenant_ids it connects
06:27 <dimak> and how exactly are you going to subscribe to that?
06:27 <oanson> Yes, have an is_shared property, but have it fine-grained - shared between whom
06:27 <dimak> or just publish on all relevant topics?
06:27 <oanson> Dunno
06:29 <oanson> I want to say bitmask
06:46 <dimak> oanson, https://bugs.launchpad.net/dragonflow/+bug/1712266
06:46 <openstack> Launchpad bug 1712266 in DragonFlow "Selective topology distribution will not fetch shared objects" [Undecided,New]
06:46 <dimak> Ummm, the title is a bit off
06:47 <oanson> I think you're right - this feature needs a redesign
06:48 <dimak> Let's try again
06:48 <dimak> oanson, https://bugs.launchpad.net/dragonflow/+bug/1712266
06:48 <openstack> Launchpad bug 1712266 in DragonFlow "Selective topology distribution will not fetch reachable objects of other tenants" [Undecided,New]
06:48 <dimak> This one is better
06:51 <oanson> dimak, this looks too big for pike
06:51 <oanson> Let's push it to Queens
06:52 <oanson> Shared objects weren't taken into account, and that's an irreconcilable design flaw.
06:52 <dimak> never thought it was for pike anyway
06:52 <dimak> xiao proposed a spec for that back in the day
06:53 <dimak> https://review.openstack.org/414697
07:00 <oanson> Even after he left, he still contributes :)
07:01 <oanson> But there are some open unanswered questions there
07:38 <oanson> dimak, lihi, irenab, could we schedule a brainstorm for today 13:00ish utc? I want to discuss the selective proactive issue. (Here in the channel)
07:39 <dimak> sure
07:40 <irenab> ok
07:40 <lihi> 14:30?
07:41 <oanson> lihi, utc?
07:41 <lihi> oh utc. OK then (for 13:00)
07:42 <oanson> Cool.
07:43 <openstackgerrit> Omer Anson proposed openstack/dragonflow master: An instance update also sends an update on all referred instances  https://review.openstack.org/480196
07:43 <irenab> oanson, it would be helpful if you could share the context for the discussion prior to the meeting
07:46 <oanson> The issue I want to tackle is selective-proactive not supporting shared objects. There's a bug: https://bugs.launchpad.net/dragonflow/+bug/1712266 and the beginning of a spec: https://review.openstack.org/414697
07:46 <openstack> Launchpad bug 1712266 in DragonFlow "Selective topology distribution will not fetch reachable objects of other tenants" [Undecided,New]
07:47 <oanson> We should discuss how to mitigate this: understand the problems we currently have, design a solution, and find a volunteer to fix it.
07:48 <irenab> oanson, do we have the requirements properly defined?
07:48 <oanson> I doubt it.
07:49 <irenab> Maybe it's worth starting by discussing requirements and not just solving the problems we see now
07:49 <oanson> Makes sense
07:51 <irenab> oanson, thanks for sharing the links
08:18 *** Natanbro has joined #openstack-dragonflow
08:26 <openstackgerrit> Merged openstack/dragonflow master: Tempest gate: Update apparmor with docker permissions  https://review.openstack.org/495775
08:26 <openstackgerrit> Merged openstack/dragonflow master: Move FIP status updates to L3 plugin  https://review.openstack.org/493359
08:29 <dimak> oanson, can we make the etcd/zmq job voting?
08:32 <openstackgerrit> Merged openstack/dragonflow master: Reset in_port for provider ingress packets  https://review.openstack.org/494164
08:35 *** yamamoto has joined #openstack-dragonflow
08:43 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: DNAT: used cached floating lport  https://review.openstack.org/496154
08:44 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: DNAT: use cached floating lport  https://review.openstack.org/496154
08:54 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: [DNM][WIP] Fix tempest  https://review.openstack.org/489003
08:54 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: Disable l3 agent in gate  https://review.openstack.org/483385
08:59 *** yamamoto has quit IRC
08:59 *** yamamoto has joined #openstack-dragonflow
08:59 *** yamamoto has quit IRC
09:05 *** kkxue has joined #openstack-dragonflow
09:09 *** kkxue_ has joined #openstack-dragonflow
09:10 *** kkxue has quit IRC
09:28 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: Remove enable_goto_flows configuration option  https://review.openstack.org/494287
09:28 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: Change DNAT to rely on Provider app for bridge access  https://review.openstack.org/475362
09:46 *** kkxue__ has joined #openstack-dragonflow
09:47 *** kkxue_ has quit IRC
10:00 *** yamamoto has joined #openstack-dragonflow
10:05 *** yamamoto has quit IRC
10:10 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: Remove enable_goto_flows configuration option  https://review.openstack.org/494287
10:10 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: Change DNAT to rely on Provider app for bridge access  https://review.openstack.org/475362
10:10 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: Remove unique_key field from floating IPs  https://review.openstack.org/496186
10:30 *** yamamoto has joined #openstack-dragonflow
10:30 *** kkxue__ has quit IRC
10:44 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: Remove unique_key field from floating IPs  https://review.openstack.org/496186
10:44 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: Remove enable_goto_flows configuration option  https://review.openstack.org/494287
10:44 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: Change DNAT to rely on Provider app for bridge access  https://review.openstack.org/475362
11:01 *** lihi has quit IRC
11:05 *** lihi has joined #openstack-dragonflow
11:09 <openstackgerrit> Omer Anson proposed openstack/dragonflow master: devstack: Add hooks to support deployment with Octavia  https://review.openstack.org/496204
11:10 <oanson> irenab, note ^^^ and https://review.openstack.org/496205 for octavia devstack integration
11:11 <irenab> oanson, great!
11:11 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: Remove enable_goto_flows configuration option  https://review.openstack.org/494287
11:11 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: Change DNAT to rely on Provider app for bridge access  https://review.openstack.org/475362
11:18 *** yamamoto has quit IRC
11:30 *** yamamoto has joined #openstack-dragonflow
11:46 *** yamamoto has quit IRC
11:48 *** yamamoto has joined #openstack-dragonflow
11:59 <dimak> oanson, shall we brainstorm?
11:59 <oanson> I think we said in an hour
12:00 <oanson> I was supposed to have a meeting now, but it may have been canceled
12:00 <oanson> Sure, let's start, and if worst comes to worst I'll take a back seat.
12:00 <oanson> lihi, irenab, you guys want to start now?
12:00 <dimak> oh, I thought it was 12, not 13
12:00 <oanson> Or in an hour like we scheduled?
12:01 <dimak> I don't mind either way
12:01 <irenab> now is better
12:02 <oanson> lihi, you available?
12:03 *** yamamoto has quit IRC
12:04 <oanson> I guess she isn't in. All right, let's wait a bit and try again later.
12:04 *** yamamoto has joined #openstack-dragonflow
12:10 *** yamamoto has quit IRC
12:12 *** yamamoto has joined #openstack-dragonflow
12:35 <lihi> Hi, sorry, I missed the notification. In 10 min?
12:35 <openstackgerrit> Dima Kuznetsov proposed openstack/dragonflow master: Disable l3 agent in gate  https://review.openstack.org/483385
12:37 <oanson> Sure
12:37 <oanson> No worries.
12:38 <oanson> dimak, irenab, lihi, 8 minutes?
12:38 <oanson> 7 now :)
12:38 <dimak> sure
12:39 <irenab> ok
12:45 <oanson> Hi all
12:45 <oanson> dimak, lihi, irenab, shall we start?
12:45 <dimak> o/
12:45 <lihi> 👍
12:45 <oanson> irenab, ?
12:46 <irenab> a min
12:48 <irenab> here
12:48 <oanson> All right!
12:48 <oanson> Let's start
12:48 <oanson> The issue is that selective-proactive works badly with shared objects
12:48 <dimak> not shared per se
12:48 <oanson> e.g. a network or router that's shared across tenants, like the provider network
12:49 <dimak> just anything that connects one tenant to another
12:49 <oanson> Yes.
12:49 <oanson> Shared as in used by objects from more than one tenant
12:49 <oanson> or project, even
12:50 <irenab> let's list the requirements we expect the SPD to provide
12:50 <irenab> oanson, I believe you started some etherpad
12:50 <oanson> I started something here: https://etherpad.openstack.org/p/dragonflow-selective-proactive
12:50 <oanson> irenab, yes
12:51 <irenab> let's just post all the cases that need to be supported
12:52 <oanson> In the end we'll end up with a garbage collection method: when a topic or object is no longer referenced, it should be collected
12:53 <oanson> Which could be done using back-refs, but in-memory, not in the db
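[The in-memory back-ref idea, as a rough refcounting sketch; all names are hypothetical, not Dragonflow code. Each local reference bumps a counter, and when the count drops to zero the object is evicted and its topic unsubscribed.]

    from collections import Counter

    class RefTracker(object):
        def __init__(self, db_store, pub_sub):
            self._refs = Counter()          # (model, id) -> ref count
            self._db_store = db_store
            self._pub_sub = pub_sub

        def ref(self, key):
            self._refs[key] += 1

        def unref(self, key):
            self._refs[key] -= 1
            if self._refs[key] <= 0:        # last reference gone: collect
                del self._refs[key]
                obj = self._db_store.get_one(*key)
                if obj is not None:
                    self._db_store.delete(obj)
                    # naive: assumes no other cached object shares the topic
                    self._pub_sub.unsubscribe(obj.topic)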
12:53 <dimak> but how do we decide to fetch it in the first place?
12:53 <lihi> Maybe only fetch it if needed
12:54 <irenab> some cases may lead to heavy load
12:56 <dimak> a separate service could generate the tenant dependency graph
12:56 <oanson> Yes.
12:56 <dimak> but that's another spof
12:56 <oanson> But that means double retrieval
12:56 <oanson> spof?
12:56 <dimak> single point of failure
12:57 <dimak> and it's a full copy of the database for each service that does this
12:57 <dimak> doesn't have to be one on each compute
12:57 <oanson> Maybe the problem we're solving is too complex. Maybe don't allow ad-hoc object connections
12:57 <dimak> neutron allows it
12:58 <oanson> Shared objects, such as routers and networks, are published to everyone. They always exist
12:58 <irenab> are we putting neutron RBAC aside?
12:59 <oanson> For now - Neutron RBAC is limited to networks currently (according to https://specs.openstack.org/openstack/neutron-specs/specs/liberty/rbac-networks.html ) unless I misunderstand
12:59 <irenab> it works for qos policy too, as far as I remember
12:59 <oanson> And if we construct a dependency graph in the API layer, the effect is the same
12:59 <irenab> it started with networks
12:59 <oanson> I think one of the issues we see is with routers.
13:00 <irenab> so this dependency is clear from the virtual topology view
13:00 <dimak> I don't see that routers have a shared property
13:00 <dimak> just networks
13:01 <oanson> Hmm
13:01 <oanson> No good then
13:01 <dimak> We can distribute routers to everyone
13:02 <irenab> I do not like the fact that we should understand the internals of neutron to make the decision
13:02 <oanson> Or at least we need something from within Neutron to abstract it for us
13:03 <oanson> Distributing routers is problematic - that would cascade to distributing networks and ports
13:03 <oanson> We lose the edge of selective-proactive
13:03 <irenab> opting for the worst case
13:04 <dimak> oanson, why?
13:04 <oanson> dimak, any network that has a router will be pulled. Any port connected to such a network will have to be pulled as well
13:04 <dimak> I think it's just router interfaces + their networks
13:04 <dimak> only if that router is relevant
13:05 <dimak> i.e. there's a route that connects local ports to that network
13:05 <oanson> If you pull the network, you're going to have to pull its ports for the l3 app
13:05 <irenab> oanson, maybe we can look at what information may be required locally, and given this decide when it's enough to stop pulling
13:05 <dimak> I won't
13:05 <oanson> dimak, sorry?
13:06 <oanson> irenab, that means every application has to report the relations it counts on
13:06 <dimak> oanson, you just need routers + interfaces (their networks)
13:06 <irenab> oanson, report to whom?
13:06 <oanson> l3 does l2 lookup within the destination network. Then we probably need to pass it to the tunneling app.
13:06 <oanson> irenab, to the new topology mechanism
13:07 <irenab> I missed the service decomposition phase :-)
13:08 <oanson> I would like to opt for a simpler solution: we do keep a dependency graph, but only between topics, i.e. if a->b, then pulling topic a also pulls b. The bookkeeping is kept to the Neutron APIs, maybe using the Neutron DB to store the extra data. In the NB DB store only the tree.
13:08 <oanson> irenab, it was implicit
13:08 <lihi> oanson, how do you know how to build it?
13:09 <irenab> if topic is tenant/project, I do not like the granularity
13:09 <oanson> I should have all the info on the Neutron API side
13:09 <oanson> irenab, the topic doesn't have to be tenant/project
13:09 <oanson> But I doubt we can do something more fine-grained
13:09 <dimak> lihi, we can map out the API actions that create dependencies
13:09 <dimak> for now it is mostly add/delete router interface
13:10 <irenab> we may have others too, such as qos policies that are shared across projects
13:11 <oanson> Yes.
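[A hypothetical sketch of the topic-level dependency graph oanson proposes: Neutron-side hooks record topic->topic edges (e.g. on add/delete router interface, per dimak above), and a compute node pulls the transitive closure of its own topics. Not actual Dragonflow or Neutron code.]

    class TopicGraph(object):
        def __init__(self):
            self._edges = {}                  # topic -> set of topics

        def add_dependency(self, src, dst):
            # e.g. a router on topic `src` gained an interface on a
            # network whose topic is `dst`
            self._edges.setdefault(src, set()).add(dst)

        def topics_to_pull(self, local_topics):
            # Transitive closure: pulling topic a also pulls b if a->b
            pending, seen = list(local_topics), set()
            while pending:
                topic = pending.pop()
                if topic in seen:
                    continue
                seen.add(topic)
                pending.extend(self._edges.get(topic, ()))
            return seen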
13:12 <irenab> we still may have it lazily loaded, retrieved when needed
13:12 *** yamamoto has quit IRC
13:13 <oanson> irenab, then they need a special topic
13:14 <irenab> oanson, seems you already have some design in mind. Care to share?
13:14 <oanson> So far we have two plausible solutions: 1. Each app reports its dependencies, and the topology service pulls the additional info needed. 2. We build a coarse-grain dependency graph on the Neutron side.
13:14 <oanson> irenab, no, I'm just making it up as I go along
13:14 *** yamamoto has joined #openstack-dragonflow
13:15 <oanson> If we load lazily - we have to listen to updates (e.g. register to the topic for updates for that object)
13:15 <dimak> oanson, does 1 require backrefs?
13:15 <oanson> dimak, possibly, but maybe we can keep them local to the topology service
13:15 <oanson> That means we need a special topic, since we don't want to pull all objects from the second topic. Or maybe we do
13:16 <dimak> if there's an X->Y dependency, and we have only topic Y, how will topology guess topic X?
13:16 <oanson> Yes, that won't work
13:16 <dimak> Somewhere, something needs a global view
13:16 <oanson> Yes. We can't get away from that
13:17 <dimak> or each db update should keep those updated
13:17 <dimak> but that requires a high level of consistency
13:17 <oanson> I don't want to manually add back-refs to the API. It will be hell to maintain
13:18 <irenab> can you give some concrete example for the X->Y case?
13:18 <dimak> shared network, single router
13:18 <dimak> each tenant gets a router interface
13:18 <oanson> Or lports on an lswitch, but of different topics
13:19 <irenab> in the case of shared networks there will be more than one interface on the router?
13:19 <oanson> I think each network is limited to a single router interface
13:20 <irenab> in neutron for sure
13:20 <oanson> We have a way to support the general case, but as long as Neutron is the only API, I don't think it has value
13:21 <dimak> a network can have several router interfaces
13:21 <irenab> let's not overcomplicate if not required for now
13:21 <irenab> dimak, yes, but on different routers. Or is there value in having them on the same one?
13:22 <dimak> different routers
13:22 *** yamamoto has quit IRC
13:24 <irenab> so is there any conclusion?
13:25 <oanson> 1. We will need an orchestrator on the API level
13:25 <oanson> Which will be difficult if we don't want something neutron specific
13:25 <dimak> irenab, what I was trying to describe: https://paste.fedoraproject.org/paste/N9puUzIJ0WztHEwjUN4djw
13:25 <oanson> This is definitely something we need to support
13:26 <irenab> dimak, ok. Seems like a typical neutron deployment
13:26 <oanson> 1. implies that we need a dependency graph to publish to the compute nodes
13:26 <oanson> So that's 2.
13:27 <lihi> Does not being neutron-specific make things so much more complicated?
13:27 <irenab> is the dependency graph topic based or entity based?
13:27 <oanson> irenab, undecided
13:27 <dimak> irenab, I think the more common case is a router-per-tenant
13:28 <oanson> lihi, depends how we do it. I'm afraid it will become too tightly coupled with neutron, and it will be hard to generalise
13:28 <irenab> I think both cases are possible, depends who manages the router
13:28 <irenab> with "give me a network", probably the admin will be responsible for the router
13:28 <dimak> well yes, both are something we should support
13:29 <oanson> provider network is a must - we need it to support hierarchical port binding
13:29 <irenab> oanson, we need to stick to the DF model and topics
13:30 <oanson> irenab, is point 3 on the etherpad sufficient? 'Dependency graph must be DF model based, to avoid Neutron tight-coupling'
13:31 <irenab> yes
13:31 <irenab> do we want to consider selective reactive?
13:32 <oanson> irenab, is that relevant? Performance-wise?
13:32 <irenab> or it fits into lazy loading
13:32 <irenab> this may be on the first hit, then you have it cached
13:33 <oanson> Sorry?
13:33 <irenab> something similar to Midonet, which fetches the relevant model the first time it needs the details; later the model is in the local cache
13:34 <oanson> What about services reporting their required references/back-refs? Do we need it? Possibly in the API layer?
13:34 <oanson> irenab, I think that's what we have with the db_store pattern
13:34 <irenab> I think it should be clear from the models
13:34 <oanson> Not necessarily
13:35 <oanson> The l3 app requires the routers connected to every network. But the network model shouldn't mention routers, since routers are an app/feature/extension that sits on top of networks
13:37 <irenab> l3 includes... maybe we should check case by case, l3 is about routing
13:37 <oanson> Not sure I follow
13:38 <irenab> maybe we should check app by app
13:38 <oanson> We should
13:38 <irenab> for the l3 app it's about routing, so l3 will need router data
13:38 <irenab> + networks that have an interface on routers
13:38 <oanson> Yes
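[A speculative sketch of the "each app reports its dependencies" option (solution 1 above), using the l3 example just discussed: the app declares what it needs per model it consumes, so routers stay out of the network model itself. All names here are invented for illustration.]

    def l3_app_network_deps(lswitch, nb_api):
        # The l3 app needs every router with an interface on a network
        # it holds, even though the network model knows nothing about
        # routers.
        return [router for router in nb_api.get_all('LogicalRouter')
                if any(port.lswitch_id == lswitch.id
                       for port in router.ports)]

    # the topology service would consult these hooks after each fetch
    TOPOLOGY_DEPENDENCY_HOOKS = {'LogicalSwitch': [l3_app_network_deps]}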
13:39 <dimak> note that a network can be a few hops away
13:39 <oanson> We start charting the dependency graph from OVS ports. That's the only thing we know we have.
13:40 <irenab> oanson, does an ovs port represent a neutron port?
13:40 <oanson> dimak, currently we definitely only support one hop. I think in Neutron that's enough
13:40 <oanson> irenab, yes
13:40 <oanson> https://review.openstack.org/#/c/480195/ makes it official
13:40 <irenab> I have 5 mins before I must leave
13:40 <oanson> irenab, no worries. Worst comes to worst we can continue tomorrow :)
13:41 <oanson> There's no rush. This won't be done before pike
13:41 <oanson> All right, new plan
13:41 <oanson> Let's table this discussion. Pick it up again at the vPTG
13:41 <irenab> speaking of pike, do we have SPD disabled by default?
13:42 <oanson> Not yet. Who wants to take it?
13:42 <oanson> For small deployments and development, the current selective proactive solution is good enough. Maybe make shared stuff an 'all topics' model, but that requires in-depth testing
13:43 <dimak> for small deployments you don't need selective
13:43 <oanson> dimak, yes. That's what I meant :)
13:43 <oanson> Second thing is that I don't see the subnet model and dhcp model changes getting in before the tag cutoff, and I don't want to rush us
13:43 <lihi> SPD?
13:44 <irenab> Selective Proactive Distribution
13:44 <oanson> So I want to discuss the upgrade path. Specifically, adding a Grenade test to make sure we can upgrade from pike to queens, and to enforce writing correct db migration code
13:45 <irenab> sorry guys and girls, have to drop the discussion
13:45 <dimak> bye
13:45 <oanson> irenab, no worries. Thanks for your help!
13:45 <lihi> bye
13:46 <oanson> So this is the plan: disable SPD by default, discuss it at the PTG, and prioritize the Grenade test before patches: https://review.openstack.org/#/c/494557 and https://review.openstack.org/#/c/480196
13:46 <oanson> lihi, dimak, is this all right? Do we have consensus?
13:46 <dimak> +1
13:46 <lihi> yes
13:47 <oanson> All right. Any volunteers for the Grenade test, or should I take it?
13:47 <oanson> I am free now that my two big patches are postponed :)
13:47 <oanson> And I'll disable SPD by default now.
13:48 <dimak> I'll take a look and see how hard it is to set up an upgrade job
13:48 <oanson> dimak, not too many plates in the air?
13:49 <dimak> I'll juggle
13:49 <oanson> I can take it. I'm fairly free
13:52 <oanson> dimak, ?
13:53 <dimak> It's yours if you insist
13:53 <oanson> I do :)
13:53 <oanson> My biggest task was just postponed :)
14:02 *** igordc has joined #openstack-dragonflow
14:15 <oanson> dimak, you still here?
14:16 *** mlavalle has joined #openstack-dragonflow
14:23 *** yamamoto has joined #openstack-dragonflow
14:28 <openstackgerrit> Omer Anson proposed openstack/dragonflow master: Disable Selective-Proactive Distribution by default  https://review.openstack.org/496247
14:28 *** yamamoto has quit IRC
14:40 <dimak> oanson, missed your ping, yes
14:41 <oanson> I was trying to remember what we decided about #1708178
14:41 <oanson> I was trying to remember what we decided about bug #1708178
14:41 <openstack> bug 1708178 in DragonFlow "LBaaSv2 with 3rd party provider does not work if L3agent is disabled" [Critical,New] https://launchpad.net/bugs/1708178
14:41 <dimak> irenab and I found it working with HAProxy
14:41 <oanson> If I recall correctly, we agreed to disable it in favor of bug #1712266
14:41 <openstack> bug 1712266 in DragonFlow "Selective topology distribution will not fetch reachable objects of other tenants" [Undecided,New] https://launchpad.net/bugs/1712266
14:42 <dimak> I'm not convinced those 2 are related
14:43 <dimak> was the lbaas port in a different tenant?
14:43 <oanson> You said the first is invalid. We opened the second since the question of SPD exists
14:43 <dimak> even if so, both had local ports
14:43 <oanson> I think there was a cross-tenant issue, since disabling SPD solved the issue
14:44 <dimak> I believe you, but I can't see why that mattered
14:44 <oanson> I didn't see the system. Those are the reports from you, irena and eyal
14:45 <dimak> I think we should try again with SPD on
14:45 <dimak> maybe there's something else at play
14:46 <oanson> Sure. But as long as it's reported working, I want to bump it down to medium
14:46 <dimak> ok
14:46 <oanson> I also want to verify the l3-agent / advanced services before closing the bug completely
14:46 <dimak> speaking of the l3 agent
14:47 <dimak> please review https://review.openstack.org/#/c/483385/ when you get a moment
14:47 <dimak> lihi too :)
14:47 <dimak> (just fixed a typo in the commit message)
14:48 <oanson> Done. Yes - I see it's only rebases
14:48 <dimak> I have to step outside for a while, I'll check in later
14:48 <oanson> No worries. I'm leaving soon, so have a good evening :)
14:50 <lihi> Done
14:50 <lihi> :)
14:52 <openstackgerrit> Omer Anson proposed openstack/dragonflow master: LBaaS spec  https://review.openstack.org/477463
14:52 <openstackgerrit> Omer Anson proposed openstack/dragonflow master: Fix docs warnings in extra_dhcp_opts spec  https://review.openstack.org/496264
15:01 <openstackgerrit> Omer Anson proposed openstack/dragonflow master: devstack: Add hooks to support deployment with Octavia  https://review.openstack.org/496204
15:25 *** yamamoto has joined #openstack-dragonflow
15:30 *** yamamoto has quit IRC
15:41 <openstackgerrit> Merged openstack/dragonflow master: model proxy: Throw custom exception when model not cached  https://review.openstack.org/495865
15:55 *** afanti has quit IRC
16:00 <openstackgerrit> Yuval Brik proposed openstack/dragonflow master: Redis Driver Rewrite [WIP]  https://review.openstack.org/496299
16:03 *** Natanbro has quit IRC
16:26 *** yamamoto has joined #openstack-dragonflow
16:32 *** yamamoto has quit IRC
16:54 <openstackgerrit> Merged openstack/dragonflow master: Disable l3 agent in gate  https://review.openstack.org/483385
17:28 *** yamamoto has joined #openstack-dragonflow
17:35 *** yamamoto has quit IRC
18:31 *** yamamoto has joined #openstack-dragonflow
18:37 *** yamamoto has quit IRC
19:33 *** yamamoto has joined #openstack-dragonflow
19:38 *** yamamoto has quit IRC
20:23 *** igordc has quit IRC
20:35 *** yamamoto has joined #openstack-dragonflow
20:38 *** yamamoto has quit IRC
20:38 *** yamamoto has joined #openstack-dragonflow
20:39 *** yamamoto has quit IRC
21:10 *** yamamoto has joined #openstack-dragonflow
21:15 *** yamamoto has quit IRC
22:11 *** yamamoto_ has joined #openstack-dragonflow
22:17 *** yamamoto_ has quit IRC
23:13 *** yamamoto_ has joined #openstack-dragonflow
23:18 *** yamamoto_ has quit IRC
23:38 *** mlavalle has quit IRC
