Friday, 2021-01-08

*** macz_ has quit IRC00:41
*** rcernin has quit IRC00:44
*** manpreet has joined #openstack-meeting00:45
*** ircuser-1 has quit IRC01:12
*** timburke has quit IRC01:13
*** jamesmcarthur has quit IRC01:21
*** jamesmcarthur has joined #openstack-meeting01:21
*** jmasud has joined #openstack-meeting01:37
*** jmasud has quit IRC01:39
*** rcernin has joined #openstack-meeting01:45
*** jmasud has joined #openstack-meeting01:46
*** jmasud has quit IRC01:48
*** jmasud has joined #openstack-meeting01:50
*** mlavalle has quit IRC02:06
*** _mlavalle_1 has joined #openstack-meeting02:07
*** jmasud has quit IRC02:21
*** njohnston has quit IRC02:37
*** jmasud has joined #openstack-meeting02:37
*** jamesmcarthur has quit IRC02:39
*** jamesmcarthur has joined #openstack-meeting02:41
*** macz_ has joined #openstack-meeting02:42
*** jamesmcarthur has quit IRC02:45
*** macz_ has quit IRC02:47
*** jamesmcarthur has joined #openstack-meeting03:03
*** jamesmcarthur has quit IRC03:04
*** jamesmcarthur has joined #openstack-meeting03:04
*** jamesmcarthur has quit IRC03:31
*** jmasud has quit IRC03:35
*** jamesmcarthur has joined #openstack-meeting03:36
*** jamesmcarthur has quit IRC03:36
*** jamesmcarthur has joined #openstack-meeting03:36
*** rcernin has quit IRC03:39
*** armax has quit IRC03:49
*** rcernin has joined #openstack-meeting03:52
*** rcernin has quit IRC03:52
*** rcernin has joined #openstack-meeting03:52
*** armax has joined #openstack-meeting03:53
*** jmasud has joined #openstack-meeting03:55
*** bbowen_ has joined #openstack-meeting04:06
*** ircuser-1 has joined #openstack-meeting04:07
*** jmasud has quit IRC04:08
*** bbowen has quit IRC04:08
*** armax has quit IRC04:13
*** jmasud has joined #openstack-meeting04:15
*** jamesmcarthur has quit IRC04:19
*** jamesmcarthur has joined #openstack-meeting04:20
*** njohnston has joined #openstack-meeting04:22
*** jamesmcarthur has quit IRC04:25
*** jamesmcarthur has joined #openstack-meeting04:26
*** jamesmcarthur has quit IRC04:31
*** jamesmcarthur has joined #openstack-meeting04:32
*** timburke has joined #openstack-meeting04:43
*** macz_ has joined #openstack-meeting04:43
*** macz_ has quit IRC04:47
*** jmasud has quit IRC05:29
*** evrardjp has quit IRC05:33
*** evrardjp has joined #openstack-meeting05:33
*** jmasud has joined #openstack-meeting05:34
*** rcernin has quit IRC05:36
*** rcernin has joined #openstack-meeting05:44
*** jamesmcarthur has quit IRC05:55
*** rcernin has quit IRC05:56
*** jamesmcarthur has joined #openstack-meeting05:56
*** jamesmcarthur has quit IRC06:01
*** gyee has quit IRC06:11
*** jamesmcarthur has joined #openstack-meeting06:15
*** rcernin has joined #openstack-meeting06:22
*** rcernin has quit IRC06:23
*** rcernin has joined #openstack-meeting06:24
*** rcernin has quit IRC06:39
*** rcernin has joined #openstack-meeting06:44
*** jmasud has quit IRC06:48
*** jmasud has joined #openstack-meeting06:53
*** jmasud has quit IRC07:02
*** ralonsoh has joined #openstack-meeting07:03
*** jmasud has joined #openstack-meeting07:05
*** jmasud has joined #openstack-meeting07:06
*** rcernin has quit IRC07:14
*** timburke has quit IRC07:16
*** jmasud has quit IRC07:26
*** rcernin has joined #openstack-meeting07:28
*** rcernin has quit IRC07:43
*** jgriffit1 has quit IRC07:46
*** dklyle has quit IRC07:51
*** bbowen has joined #openstack-meeting08:01
*** bbowen_ has quit IRC08:02
*** whoami-rajat has joined #openstack-meeting08:03
*** jgriffith has joined #openstack-meeting08:07
*** jamesmcarthur has quit IRC08:16
*** rh-jelabarre has quit IRC08:19
*** rpittau|afk is now known as rpittau08:32
*** jamesmcarthur has joined #openstack-meeting08:47
*** jamesmcarthur has quit IRC08:52
*** tosky has joined #openstack-meeting08:56
*** ttx has quit IRC10:16
*** ttx has joined #openstack-meeting10:17
*** rh-jelabarre has joined #openstack-meeting10:22
*** ociuhandu has joined #openstack-meeting10:31
*** ociuhandu has quit IRC10:39
*** ociuhandu has joined #openstack-meeting10:41
*** jamesmcarthur has joined #openstack-meeting11:04
*** jamesmcarthur has quit IRC11:09
*** e0ne has joined #openstack-meeting11:11
*** ociuhandu has quit IRC11:22
*** whoami-rajat__ has joined #openstack-meeting11:26
*** whoami-rajat has quit IRC11:27
*** xinranwang has joined #openstack-meeting11:47
*** bbowen_ has joined #openstack-meeting12:10
*** bbowen has quit IRC12:10
*** ociuhandu has joined #openstack-meeting12:13
*** ociuhandu has joined #openstack-meeting12:15
*** ociuhandu has quit IRC12:26
*** SotK has quit IRC12:52
*** SotK has joined #openstack-meeting12:52
*** ociuhandu has joined #openstack-meeting12:53
*** ociuhandu has quit IRC12:58
*** ociuhandu has joined #openstack-meeting12:58
*** armax has joined #openstack-meeting13:08
*** rfolco has joined #openstack-meeting13:09
*** rafaelweingartne has joined #openstack-meeting13:52
*** _mlavalle_1 has quit IRC13:58
*** mlavalle has joined #openstack-meeting13:58
slaweq#startmeeting neutron_drivers14:00
openstackMeeting started Fri Jan  8 14:00:18 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.14:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.14:00
*** openstack changes topic to " (Meeting topic: neutron_drivers)"14:00
openstackThe meeting name has been set to 'neutron_drivers'14:00
*** gibi has joined #openstack-meeting14:00
mlavalleo/14:00
rafaelweingartne\o14:00
ralonsohhi14:00
yonglihehi14:00
gibio/14:00
amotokihi14:00
xinranwangHi14:01
slaweqhaleyb: njohnston yamamoto: ping14:01
haleybhi, didn't see reminder14:01
slaweqhello everyone on the first drivers meeting in 2021 :)14:01
slaweqfirst of all Happy New Year!14:01
yongliheHappy New Year14:02
*** lajoskatona has joined #openstack-meeting14:02
lajoskatonao/14:02
amotokihappy new year14:02
ralonsohhny!14:02
slaweqand now let's start as we have a couple of topics to discuss14:02
slaweq#topic RFEs14:02
*** openstack changes topic to "RFEs (Meeting topic: neutron_drivers)"14:02
slaweqfirst one:14:02
slaweqhttps://bugs.launchpad.net/neutron/+bug/190910014:02
openstackLaunchpad bug 1909100 in neutron "[RFE]add new vnic type "cyborg"" [Wishlist,Confirmed] - Assigned to Rodolfo Alonso (rodolfo-alonso-hernandez)14:02
*** jawad_axd has joined #openstack-meeting14:03
ralonsohI think xinranwang could explain this RFE a bit14:03
xinranwangsure14:03
xinranwangWe hope to add a new vnic type for port to indicate that the port has a backend managed by cyborg14:04
xinranwangso that nova can trigger the interaction with cyborg according to this vnic type14:04
slaweqbased on the last comments by gibi and ralonsoh in the LP I'm not sure we really need to add such a new vnic type14:06
ralonsohand amotoki's comment14:06
slaweqright14:06
ralonsohthis port is almost a "direct" port14:06
ralonsohactually this is a PCI device14:06
gibiif there is no new vnic_type then nova will either ignore the vnic_type for these ports or neutron should enforce vnic_type=direct14:07
gibior direct-physical14:07
gibiignoring incoming vnic_type seems hackish14:07
ralonsohagree14:07
yongliheyeah, it should behave like this.14:07
amotokivnic types are used to determine how neutron handles the port. my concern is what happens if there are two or more vnic types which are backed by cyborg.14:08
yonglihemaybe we should limit it so it's not "normal" in neutron? after all, neutron knows what the network should look like.14:08
yonglihefor now, direct is supported, and in the future direct-physical is a candidate.14:09
*** sean-k-mooney has joined #openstack-meeting14:09
amotokiso does it mean you need more vnic types for cyborg-backed ports?14:10
sean-k-mooneythe intent was to have a separate vnic type dedicated to devices that are managed by cyborg and not support the device-profile with other vnic types14:10
ralonsohbut this is not needed on the nova side14:11
sean-k-mooneyon the nova side we wanted a clear way to differentiate between hardware-offloaded ovs and ovs with cyborg ports, or similar for the ml2/sriov nic agent14:11
ralonsohand neutron can limit the device-profile to direct ports14:11
yonglihesean, we had to use the sriov agent14:12
ralonsohthis can be done by reading the port definition, with the "device_profile" extension14:12
sean-k-mooneythen we can't have co-location of hardware-offloaded ovs and cyborg on the same compute14:12
sean-k-mooneyright?14:12
xinranwangwe should limit that only new vnic type should have device -profile filled, if we have new vnic type.14:12
sean-k-mooneywe don't want to assume that any existing ml2 driver that supports ovs will work with cyborg14:12
sean-k-mooney* not "supports ovs", rather "supports vnic type direct"14:13
yonglihesean, sure, only the sriov agent works, ovs-managed VFs are not supported14:13
sean-k-mooneyright, but ml2/ovs supports direct, as does ovn, for hardware-offloaded ovs14:13
sean-k-mooneywe did not want those ml2 drivers to bind the port in that case, correct14:14
ralonsohOVN direct is for external ports (sriov)14:14
sean-k-mooneyralonsoh: that will take the hardware-offloaded ovs codepath in os-vif14:14
yonglihehow does ml2/ovs differentiate it from a normal sriov direct port?14:15
sean-k-mooneydepending on the vif_type that is returned14:15
sean-k-mooneyseparate topic i guess14:15
yongliheso if there is no new vnic, neutron should ensure the backend is not set if it belongs to ovs14:15
yonglihebased on vif_type14:16
ralonsohneutron folks?14:16
sean-k-mooneyyes, so there are 2 things: nova would have to treat the port as a normal sriov port14:16
sean-k-mooneye.g. not attempt to add it to ovs, but if it's bound by the ml2/ovs driver then we would try to add it to ovs14:17
sean-k-mooneythe only thing that would prevent that today would be the check for the switchdev api14:17
sean-k-mooneypresumably the cyborg VFs would not have that enabled but they could in theory14:17
slaweqso IIUC new vnic type will be mostly useful for nova, right? So nova will not need to do various "if's"14:17
slaweqand will know that if vnic_type=='cyborg' then device_profile is needed also14:18
sean-k-mooneyslaweq: it's also useful for neutron so existing ml2 drivers don't have to be modified to filter out cyborg ports if they don't support them14:18
slaweqis that correct?14:18
sean-k-mooneyyes14:18
slaweqsean-k-mooney: right14:18
sean-k-mooneywe could technically make this work without the new vnic type14:19
slaweqand without this new type both nova and neutron will need to do something like: if vnic_type=='direct' and device_profile is not None, then "it's a cyborg port"14:19
sean-k-mooneybut we felt being explicit was simpler14:19
slaweqor something like that14:19
slaweqcorrect?14:19
sean-k-mooneyyes14:19
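(A minimal sketch of the two detection approaches discussed above; the port keys and the new vnic type value are illustrative assumptions, not actual nova or neutron code.)

    # Implicit vs explicit detection of cyborg-backed ports, as discussed above.
    VNIC_DIRECT = "direct"
    VNIC_ACCELERATOR = "accelerator-direct"   # hypothetical new vnic type

    def is_cyborg_port_implicit(port):
        # Without a new vnic type: infer the backend from the combination
        # of vnic_type and the presence of a device_profile.
        return (port.get("binding:vnic_type") == VNIC_DIRECT
                and port.get("device_profile") is not None)

    def is_cyborg_port_explicit(port):
        # With a dedicated vnic type the intent is stated directly, and
        # ml2 drivers that do not support it simply refuse to bind the port.
        return port.get("binding:vnic_type") == VNIC_ACCELERATOR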
yonglihewhat about new vif?14:19
yonglihejust like ml2/ovs does it14:19
slaweqthx sean-k-mooney for confirmation14:20
sean-k-mooneyyonglihe: sorry, I'm not following, can you restate that?14:20
yongliheuse the vif to mark the port as 'cyborg backend' instead of the vnic.14:20
yonglihevif_type vs vnic_type14:21
sean-k-mooneyyou cannot set the vif-type14:21
yongliheok14:21
sean-k-mooneythat is chosen by the driver14:21
sean-k-mooneywe could add a new one i guess14:21
mlavallevif type is an output of the binding process14:21
yongliheso that's not working14:21
yonglihethanks14:21
slaweqso, one more question - about amotoki's concern regarding name of the new vnic_type14:21
sean-k-mooneyso we could use vnic direct with vif-type cyborg14:21
sean-k-mooneythat would allow us to reuse macvtap or direct-physical if we wanted to14:22
slaweqcan it be something else, to reflect "corresponding functionality rather than who implements the functionality"?14:22
amotokiif we assume the 'direct' vnic type with cyborg support, isn't it better to name it direct-cyborg or direct-<....> with a more functional name?14:22
slaweqor "accelerator" maybe?14:23
yongliheok for me14:23
amotokiif so, if you want cyborg support with other vnics, we can name it XXX-cyborg/accelerator.14:23
sean-k-mooneyamotoki: that came up in the ptg and i believe sylvain had a similar concern, basically suggesting we do not include the project name14:23
mlavallethat's a good idea IMO14:23
yongliheaccelerator-x might be nice14:23
sean-k-mooneyslaweq: accelerator and device-profile were both suggested before14:23
slaweq:)14:24
sean-k-mooneyi have no strong feeling either way14:24
yongliheaccelerator-direct accelerator-direct-phy14:24
mlavallelet's not use the project name14:24
yongliheagree14:24
gibiyonglihe: +114:24
amotoki+1 for yonglihe's idea14:24
slaweqthat is fine for me14:24
sean-k-mooneyyep, accelerator-<connection mechanism> sounds good to me14:24
slaweq+114:24
yonglihenice14:24
xinranwangyonglihe: +114:24
mlavalle+114:25
ralonsoh+114:25
ralonsohI'll amend the patch today14:25
slaweqhaleyb: njohnston: any thoughts?14:25
haleyb+1 from me14:25
yongliheralonsoh, thanks, i'm going to verify that patch in 3 days14:26
slaweqI will mark that rfe as approved14:26
sean-k-mooneyby the way we are avoiding the existing smartnic vnic type because that is used for ironic already.14:26
sean-k-mooneyhttps://github.com/openstack/neutron-lib/blob/master/neutron_lib/api/definitions/portbindings.py#L11914:26
slaweqwith note about naming change14:26
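(For reference, a rough sketch of what the agreed naming could look like alongside the existing constants in the neutron_lib portbindings definition linked above; the final names and values depend on the amended patch, so the additions here are assumptions.)

    # Existing values, already defined in portbindings.py, repeated here only
    # so the sketch stands alone.
    VNIC_NORMAL = 'normal'
    VNIC_DIRECT = 'direct'
    VNIC_DIRECT_PHYSICAL = 'direct-physical'

    # Hypothetical additions following the "accelerator-<connection mechanism>"
    # naming agreed above, instead of a project-specific "cyborg" value.
    VNIC_ACCELERATOR_DIRECT = 'accelerator-direct'
    VNIC_ACCELERATOR_DIRECT_PHYSICAL = 'accelerator-direct-physical'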
slaweqnext RFE now14:27
xinranwangslaweq ralonsoh  thanks14:27
slaweqhttps://bugs.launchpad.net/neutron/+bug/190093414:27
openstackLaunchpad bug 1900934 in neutron "[RFE][DHCP][OVS] flow based DHCP" [Wishlist,New] - Assigned to LIU Yulong (dragon889)14:27
slaweqthank You xinranwang for proposal14:27
slaweqregarding LP 1900934 - this was already discussed few times14:27
slaweqliuyulong proposed spec already https://review.opendev.org/c/openstack/neutron-specs/+/76858814:28
slaweqbut rfe is still not decided14:28
slaweqso I think we should decide if we want to go with this solution and continue the discussion about details in the spec review, or if we don't want it in neutron at all14:28
sean-k-mooneyslaweq: i assume this is doing dhcp via openflow rules, similar to ovn, with ml2/ovs?14:29
slaweqsean-k-mooney: yes14:29
slaweqexactly14:29
*** rafaelweingartne has quit IRC14:29
*** lajoskatona has left #openstack-meeting14:29
sean-k-mooneycool, that would be nice especially for routed networks14:29
sean-k-mooneysince each l2 agent could provide dhcp for the segment14:29
mlavalleI also have to say that my employer might be interested on this14:30
sean-k-mooneyassuming it was done as an l2 agent extension rather than in the dhcp agent14:30
ralonsohI'm ok with the RFE, just some comments in the spec (we can move the discussion there)14:30
slaweqsean-k-mooney: that is original proposal IIRC14:30
ralonsohjust wondering about the gaps between the DHCP agent and OVS DHCP14:30
slaweqralonsoh: one of the gaps will for sure be that there will be no dns name resolving in such a case14:31
slaweqonly dhcp14:31
ralonsohyeah14:31
amotokihow about extra dhcp options?14:31
slaweqalso, I'm not sure if all extra-dhcp-options will work14:32
slaweqamotoki++14:32
slaweqprobably some of them may not work, I'm not sure14:32
slaweqbut IMHO that14:32
amotokianyway it can be covered by documentation on feature differences between flow-based dhcp and dhcp-agent14:32
slaweqthat is fine as long as it will be documented14:32
slaweqamotoki: You are faster than me again :P14:32
mlavalleand will serve a lot of "plain vanilla" dhcp cases14:32
amotokiI think it is better to call it "flow-based dhcp agent" rather than distributed dhcp agent. slaweq's rfe covers a distributed agent in some way too.14:33
slaweqamotoki: technically it's not even an "agent" but a dhcp extension14:34
amotokislaweq: correct. I know it is not an agent.14:35
amotokiI spelled "dhcp AGENT" too many times :(14:35
slaweq:)14:35
mlavalleso let's approve it and move on with the RFE14:37
mlavalle+1 from me14:37
ralonsoh+114:37
amotokii'm fine to approve it14:37
haleyb+1 from me14:37
slaweqmlavalle: that is also my vote - lets approve rfe and continue discussion about details in spec review14:37
slaweqso +114:37
slaweqI will mark this rfe as approved14:38
slaweqthx14:38
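(To illustrate the idea behind this RFE, a minimal sketch of how an OVS agent extension might redirect a port's DHCP client traffic to a local flow-based responder; the table number, bridge name and controller handling are assumptions here, not the design in the spec under review.)

    import subprocess

    BRIDGE = "br-int"
    DHCP_TABLE = 77   # hypothetical dedicated table; the real layout is up to the spec

    def install_dhcp_redirect(ofport, priority=100):
        # Match DHCP client packets (UDP 68 -> 67) coming from one VM port and
        # punt them to the local responder instead of relying on a dnsmasq
        # process managed by a DHCP agent on a network node.
        flow = ("table=%d,priority=%d,udp,in_port=%d,tp_src=68,tp_dst=67,"
                "actions=CONTROLLER" % (DHCP_TABLE, priority, ofport))
        subprocess.check_call(["ovs-ofctl", "add-flow", BRIDGE, flow])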
slaweqlast rfe for today14:38
slaweqhttps://bugs.launchpad.net/neutron/+bug/191053314:38
openstackLaunchpad bug 1910533 in neutron "[RFE] New dhcp agents scheduler - use all agents for each network" [Wishlist,New] - Assigned to Slawek Kaplonski (slaweq)14:38
slaweqI proposed that RFE but ralonsoh may have more details about use case14:39
ralonsohI can confirm this is a source of problems in some deployments14:39
slaweqas he was recently "impacted" by this limitation :)14:39
*** jgriffith has quit IRC14:39
ralonsohif you have several leafs in a deployment and networks across those leafs14:39
ralonsohif you don't specify the correct number of DHCP agents, some leafs won't have a DHCP agent running14:40
ralonsohand the VMs won't have an IP14:40
amotokiralonsoh: is the broadcast domain separated?14:40
ralonsohyes14:40
amotokito overcome it we need to use dhcp-relay or deploy dhcp agents per broadcast domain14:41
amotokithis request sounds reasonable to me14:42
haleybralonsoh: so when you add a leaf/site but don't increase the agents it doesn't get an agent?14:42
ralonsohexactly this is the problem14:42
haleyback, thanks14:42
slaweqhaleyb: right, as the number of agents per network isn't related to sites at all14:43
slaweqso such a new scheduler could simply be a "workaround" for that problem14:43
sean-k-mooneyralonsoh: technically you could deploy 1 dhcp instance per network segment14:44
ralonsohyes, that's a possibility14:44
sean-k-mooneyat least for routed networks14:44
ralonsohbut you need to know where each agent is14:44
sean-k-mooneyyou kind of already do14:44
sean-k-mooneyyou know its hosts and the segment mappings14:45
sean-k-mooneybut again only in the routed networks case14:45
*** rpittau is now known as rpittau|afk14:46
sean-k-mooneyincreasing the dhcp agent count would not guarantee it is on the leaf site, right14:47
sean-k-mooneyit could add another instance to the central site in principle14:47
ralonsohok, I was looking for the BZ14:47
ralonsohhttps://bugzilla.redhat.com/show_bug.cgi?id=188662214:47
openstackbugzilla.redhat.com bug 1886622 in openstack-neutron "Floating IP assigned to an instance is not accessible in scale up Multistack env with spine& leaf topology" [High,Assigned] - Assigned to ralonsoh14:47
ralonsohit has public information about this error14:47
amotokiwe can assume a deployment knows which network node belongs to which segment14:48
slaweqsean-k-mooney: right, that's why we propose to add a scheduler which will schedule a network to all dhcp agents14:48
sean-k-mooneyso really when adding a new leaf site today you would have to explicitly add an instance to the new agent deployed at that site14:48
ralonsohamotoki, yes, that's correct14:48
amotokiif we deploy a dhcp agent per segment, scheduling a network to all dhcp agents will be a workaround14:48
sean-k-mooneyamotoki: you could do both. all agents if not a routed network, and per segment if it is. but per segment is just an optimisation really14:49
amotokisean-k-mooney: yes, that's right14:49
slaweqTBH Liu's proposal about distributed dhcp would solve this use case also14:50
ralonsohright (for OVS)14:50
slaweqralonsoh: yep14:50
ralonsohbut it would be desirable to have this dhcp scheduler to avoid the need to set the exact number of DHCP agents14:51
amotokiyeah, agree. deployments can continue to use the DHCP agent they are familiar with too.14:52
amotokiI am not sure we need a new dhcp agent scheduler for this.14:52
amotokiAnother option is to modify the current dhcp agent scheduler to accept an option like dhcp_agent_per_network=all14:52
ralonsohagree14:53
slaweqamotoki: that may be a good idea14:53
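(One way amotoki's suggestion could be expressed with oslo.config, shown only as a sketch; the real dhcp_agents_per_network option is an integer today, and the 'all' handling here is an assumption.)

    from oslo_config import cfg

    # Hypothetical variant of the existing dhcp_agents_per_network option that
    # also accepts the literal value 'all'.
    dhcp_opt = cfg.StrOpt(
        'dhcp_agents_per_network', default='1',
        help="Number of DHCP agents scheduled to host a tenant network, "
             "or 'all' to schedule the network to every alive agent.")
    cfg.CONF.register_opts([dhcp_opt])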
*** e0ne has quit IRC14:54
ralonsohdo you prefer to explore this idea? the change in the code will be smaller14:54
slaweqralonsoh: I can14:54
slaweqand we will get back to that in next weeks14:54
amotokianyway I am okay with the basic idea to assign a network to all agents.14:55
ralonsoh+1 to this idea14:55
slaweqso do You want to vote on approving the rfe as an idea today, or wait for some PoC code?14:55
slaweq(I will not vote as I proposed rfe)14:56
mlavalle+114:56
ralonsohI can wait for a POC14:56
amotokii am okay with either way14:56
mlavallebut we can surely approve the RFE14:56
haleyb+1 from me14:56
mlavalleif the PoC is not satisfactory, we can scrap it14:57
mlavalleI think we can trust slaweq14:57
slaweqthx :)14:57
mlavallecan't we?14:57
ralonsohmaybe...14:57
ralonsohhehehehe14:57
slaweq:P14:57
amotokihehe :)14:57
slaweqI don't trust myself so I'm not sure ;)14:57
slaweqbut thank You14:57
slaweqI will mark this one as approved also14:57
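(A rough sketch of the scheduling behaviour discussed above for RFE 1910533, assuming something like the dhcp_agent_per_network=all option; the class and plugin helper names are illustrative, not the actual neutron scheduler interface.)

    class ScheduleToAllAgentsScheduler(object):
        """Bind a network to every alive DHCP agent, instead of stopping at
        dhcp_agents_per_network, so a newly added leaf site gets a local
        DHCP server as soon as its agent reports in."""

        def schedule(self, plugin, context, network):
            # Both plugin helpers used here are hypothetical stand-ins.
            alive = [a for a in plugin.get_dhcp_agents(context) if a.is_alive]
            hosted = {a.id for a in
                      plugin.get_agents_hosting_network(context, network["id"])}
            for agent in alive:
                if agent.id not in hosted:
                    plugin.bind_network_to_agent(context, network["id"], agent.id)
            return alive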
slaweqthis was a really effective meeting14:57
slaweq3 rfes approved14:58
ralonsohsure14:58
slaweqthank You14:58
mlavalleo/14:58
slaweqI think we can call it a meeting now14:58
ralonsohbye!14:58
slaweqhave a great weekend and see You online14:58
amotokio/14:58
slaweqo/14:58
slaweq#endmeeting14:58
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"14:58
openstackMeeting ended Fri Jan  8 14:58:33 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)14:58
openstackMinutes:        http://eavesdrop.openstack.org/meetings/neutron_drivers/2021/neutron_drivers.2021-01-08-14.00.html14:58
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/neutron_drivers/2021/neutron_drivers.2021-01-08-14.00.txt14:58
openstackLog:            http://eavesdrop.openstack.org/meetings/neutron_drivers/2021/neutron_drivers.2021-01-08-14.00.log.html14:58
*** jamesmcarthur has joined #openstack-meeting15:06
*** jamesmcarthur has quit IRC15:10
*** TrevorV has joined #openstack-meeting15:22
*** e0ne has joined #openstack-meeting15:27
*** macz_ has joined #openstack-meeting15:46
*** macz_ has quit IRC15:46
*** macz_ has joined #openstack-meeting15:47
*** dklyle has joined #openstack-meeting15:49
*** jgriffith has joined #openstack-meeting16:00
*** jamesmcarthur has joined #openstack-meeting16:01
*** trandles has left #openstack-meeting16:12
*** ociuhandu_ has joined #openstack-meeting16:28
*** ircuser-1 has quit IRC16:29
*** ociuhandu has quit IRC16:31
*** ociuhandu_ has quit IRC16:33
*** ociuhandu has joined #openstack-meeting16:38
*** whoami-rajat__ has quit IRC16:39
*** timburke has joined #openstack-meeting16:42
*** ociuhandu has quit IRC16:44
*** gyee has joined #openstack-meeting16:58
*** jamesmcarthur has quit IRC17:04
*** jmasud has joined #openstack-meeting17:31
*** jmasud has quit IRC17:47
*** jmasud has joined #openstack-meeting17:57
*** jawad_axd has quit IRC18:05
*** xinranwang has quit IRC18:16
*** ralonsoh has quit IRC18:17
*** ociuhandu has joined #openstack-meeting18:48
*** ociuhandu has quit IRC18:53
*** jamesmcarthur has joined #openstack-meeting19:23
*** jmasud has quit IRC19:33
*** jawad_axd has joined #openstack-meeting19:38
*** jmasud has joined #openstack-meeting19:58
*** jamesmcarthur has quit IRC20:11
*** jamesmcarthur has joined #openstack-meeting20:22
*** e0ne has quit IRC20:24
*** jamesmcarthur has quit IRC20:39
*** diablo_rojo__ has joined #openstack-meeting20:59
*** rfolco has quit IRC21:11
*** afazekas has quit IRC21:12
*** slaweq has quit IRC21:13
*** jmasud has quit IRC21:20
*** jawad_axd has quit IRC21:20
*** TrevorV has quit IRC21:26
*** jmasud has joined #openstack-meeting21:29
*** jmasud has quit IRC21:31
*** jmasud_ has joined #openstack-meeting21:31
*** jmasud_ has quit IRC21:47
*** jamesmcarthur has joined #openstack-meeting22:14
*** jamesmcarthur has quit IRC22:17
*** timburke_ has joined #openstack-meeting22:31
*** timburke has quit IRC22:34
*** jmasud has joined #openstack-meeting23:50
*** jmasud has quit IRC23:54

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!