15:00:02 <mmedvede> #startmeeting third-party
15:00:03 <openstack> Meeting started Mon Oct 19 15:00:02 2015 UTC and is due to finish in 60 minutes.  The chair is mmedvede. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:06 <openstack> The meeting name has been set to 'third_party'
15:00:41 <mmedvede> hi, anybody here for third-party meeting?
15:00:52 <mmedvede> I am sitting in today for anteaya
15:02:10 <asselin_> o/
15:02:34 <mmedvede> hey asselin_
15:02:48 <asselin_> good morning
15:03:40 <mmedvede> how are things with your CI?
15:04:25 <asselin_> mmedvede, I changed teams, so that work is being done now by cbader
15:05:09 <mmedvede> asselin_: congratulations! Are you still working with OpenStack?
15:05:34 <asselin_> mmedvede, thanks, yes
15:06:10 <cbader> mmedvede, It is going well. I had an issue with the path variable and filed a defect on it.
15:06:33 <cbader> mmedvede, looks like it is fixed; tests are passing now.
15:06:47 <asselin_> and still working on common ci. I'll be helping out cbader, although hopefully the updated docs are sufficient for him to get it working on his own.
15:06:48 <mmedvede> hi cbader
15:06:50 <asselin_> good morning cbader
15:07:18 <mmedvede> cbader: do you have a link to defect? maybe someone else is affected
15:07:27 <cbader> good morning asselin and mmedvede
15:08:10 <qeas> Hi guys. I have a problem running CI for our NFS driver. test_volume_crud_with_volume_type_and_extra_specs fails without any tracebacks in the logs, and I can't figure out what's wrong. http://paste.openstack.org/show/476707/ this is basically all the info I am getting from tox, can anyone help me with it?
15:09:06 <asselin_> qeas, you should look at the screen c-vol logs
15:09:41 <qeas> asselin_ : no errors there
15:10:10 <qeas> as well as in any other logs
15:10:19 <mmedvede> hi qeas, there should be something. Did you try grepping all logs for the volume id?
15:10:23 <qeas> that is my biggest concern
15:10:27 <qeas> yep
15:10:40 <qeas> grep Traceback /opt/stack/screen-logs/*
15:10:42 <qeas> nothing
15:11:01 <qeas> oh
15:11:08 <asselin_> qeas, are the logs public?
15:11:12 <qeas> didn't try for volume id
15:11:15 <cbader> mmedvede, HCD-564. They closed it saying it was not an issue, but someone must have fixed it, since my tests are now passing with the correct path to pypi being passed. It was hard-coded by build-cfg.sh to /snapshot-2015-10-12.
15:11:43 <qeas> not really, I will try grepping for id, maybe will find something
15:12:18 <asselin_> cbader, HCD is not related to upstream openstack, right?
15:12:24 <mmedvede> qeas: if you get a hit on id, then try using associated request-id and trace the request across the logs
15:13:20 <cbader> asselin, yes, I think so.
15:13:20 <mmedvede> cbader: so the defect is internal for you :) I thought you were referring to something upstream
15:14:29 <cbader> mmedvede, no, it was not upstream. Sorry, I'm new to this thread.
15:14:52 <qeas> mmedvede : will try that, thanks for the idea
15:15:01 <mmedvede> cbader: no worries
15:15:32 <mmedvede> cbader: many times third-party CIs are affected by changes upstream. Then it helps to share
15:16:16 <asselin_> qeas, if the logs are available publicly, we can help diagnose. Otherwise that stacktrace is just tempest failing. But the true error is somewhere in the screen log files
15:16:58 <asselin_> the c-* log files are for cinder and you can look there. You should be able to trace the call via tempest IDs from the api to the scheduler to the volume service
15:17:41 <asselin_> and in some cases, you need to go to the nova logs (for volume attaches and detaches) although the test case you mentioned doesn't seem related to that
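The grep-based tracing asselin_ and mmedvede describe above can be sketched as follows. This is only an illustration: the log directory, volume ID, and request ID below are made up, and real screen logs would live somewhere like /opt/stack/screen-logs.

```shell
# Build a throwaway log directory with fake entries so the tracing
# technique can be demonstrated end to end (all values invented).
LOGDIR=$(mktemp -d)
VOLUME_ID="vol-1234"
printf 'req-aaa create volume %s\n' "$VOLUME_ID" > "$LOGDIR/c-api.log"
printf 'req-aaa scheduling volume\n'             > "$LOGDIR/c-sch.log"
printf 'unrelated line\n'                        > "$LOGDIR/c-vol.log"

# Step 1: grep all logs for the volume ID and pull out the request ID
# (OpenStack request IDs have the form req-<uuid>).
REQ=$(grep -rh "$VOLUME_ID" "$LOGDIR" | grep -o 'req-[0-9a-z-]*' | sort -u)
echo "request: $REQ"

# Step 2: follow that request ID across the api/scheduler/volume logs;
# the service whose log stops mentioning it is where the request died.
grep -rl "$REQ" "$LOGDIR"
```

In a real run, a request that shows up in c-api.log and c-sch.log but never in c-vol.log points at the scheduler as the place to dig.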
15:17:45 <cbader> mmedvede, yes thank you for the help
15:18:58 <qeas> asselin_: Logs aren't anywhere public right now, but I can post them
15:20:18 <qeas> or actually I am wrong, I do have them public http://140.174.232.106/master/ns_nfs/2015.10.16-15:13:15/ please take a look and tell me if you find anything useful
15:22:15 <mmedvede> qeas: thank you. I'll take a peek
15:22:27 <qeas> mmedvede: thanks a lot
15:23:54 <mmedvede> anything else on this topic?
15:25:35 <mmedvede> does anybody have something else to discuss today?
15:26:37 <asselin_> qeas, weird I don't see the requests anywhere....let's take it offline
15:27:16 <asselin_> qeas, as a side note, setting up a log server like openstack's helps to navigate the log files.
15:28:02 <eantyshev> Hello! What is the current 'entry point' instruction to create third-party CI? Is https://github.com/rasselin/os-ext-testing information relevant?
15:28:49 <asselin_> eantyshev, please use this: https://review.openstack.org/#/c/227584/
15:29:27 <asselin_> eantyshev, that repo should still work, but will be deprecated very soon, replaced by ^^
15:29:43 <mmedvede> #link new documentation for setting up third-party CI https://review.openstack.org/#/c/227584/
15:30:04 <asselin_> qeas, please take a look at that too ^^
15:30:16 <eantyshev> asselin_: Thanks a lot!
15:30:21 <mmedvede> asselin_: it is your link to review :)
15:31:46 <qeas> asselin_ : didn't really understand about the log server, how should I do it correctly?
15:32:40 <mmedvede> qeas: OpenStack infrastructure uses devstack-gate scripts to run their tests, and it uploads a fairly standard set of artifacts to the log server
15:32:47 <asselin_> qeas, I don't have official upstream docs yet, but this script should work: https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_log_server.sh
15:33:18 <asselin_> qeas, it uses os-loganalyze which helps filter log files by log level, adds color, etc.
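Even without os-loganalyze, the log-level filtering asselin_ mentions can be approximated locally with grep. The log file and its contents below are invented for illustration; the line format loosely follows the usual oslo.log layout (timestamp, pid, level, module).

```shell
# Fake screen log with one INFO and one ERROR line (contents made up).
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2015-10-19 15:00:01.000 1234 INFO cinder.volume.manager [-] volume created
2015-10-19 15:00:02.000 1234 ERROR cinder.volume.manager [-] something broke
EOF

# Show only ERROR and CRITICAL lines, roughly what os-loganalyze's
# level filter does server-side.
grep -E ' (ERROR|CRITICAL) ' "$LOG"
```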
15:33:18 <mmedvede> #link devstack-gate https://github.com/openstack-infra/devstack-gate
15:34:48 <qeas> asselin_, mmedvede : ok thanks, will try that
15:35:16 <mmedvede> qeas: try asselin_ 's suggestion first
15:35:21 <qeas> did you find anything in my logs?
15:36:43 <asselin_> qeas, I couldn't trace the request even.....very strange.
15:37:48 <mmedvede> qeas: I can see the request showing up in c-api.log
15:37:49 <qeas> asselin_: that's what bugs me the most. Grepping volume id only showed c-api and c-sch logs, nothing in c-vol
15:40:49 <mmedvede> qeas: in the log, it says "No valid host was found" when it tries to create volume
15:41:00 <mmedvede> qeas: we can take it offline
15:42:37 <qeas> mmedvede: hmm, so it looks like the scheduler is trying to find a host with some specs that don't match the host
15:43:20 <mmedvede> qeas: hard to tell without digging further. grep for req-78d47268-58b3-4fcb-bd21-6761dc823e39 in the logs you linked
15:43:36 <qeas> mmedvede: maybe I missed some new specs that were added for NFS ?
15:44:54 <qeas> mmedvede: the thing is that error never occurred before, we always passed nfs tests successfully
15:45:30 <qeas> mmedvede: so I thought maybe it was because of some new changes to devstack
15:45:49 <asselin_> qeas, it might also be a new test case
15:45:55 <mmedvede> qeas: I would recommend looking for changes then. If you have the exact moment when you started failing, use it and check changes in projects that could affect you
15:48:57 <qeas> asselin_: I don't think it's new http://140.174.232.106/master/ns_nfs/2015.10.06-10:32:01 , you can see our driver pass it here a month and a half ago
15:50:12 <qeas> asselin_: oh sorry, actually it was just half a month ago
15:50:30 <mmedvede> qeas: looking through the errors further, it seems your error is caused by nova-scheduler not being able to find any valid hosts: one of the filters returns "extra_spec requirement 'iSCSI' does not match 'NFS'"
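The failure mmedvede spotted boils down to the scheduler's capabilities check: the volume type's extra_specs must match what the backend driver reports. A toy shell illustration of that comparison (not cinder's actual filter code; the two values are taken from the error message above):

```shell
# extra_spec from the volume type vs. the capability reported by the
# backend; values copied from the "does not match" error above.
REQUIRED="iSCSI"
REPORTED="NFS"

MSG="extra_spec requirement '$REQUIRED' does not match '$REPORTED'"
if [ "$REQUIRED" = "$REPORTED" ]; then
    MSG="host passes the capabilities filter"
fi
echo "$MSG"
```

So a volume type carrying storage_protocol=iSCSI in its extra_specs will filter out every NFS backend, which matches what qeas sees next.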
15:51:03 <qeas> mmedvede: but why is it looking for iSCSI ?
15:51:48 <mmedvede> qeas: that's not a question for me
15:52:58 <mmedvede> asselin_: do you know a good place to ask? e.g. irc channel?
15:53:38 <asselin_> perhaps the cinder channel is best
15:54:36 <qeas> mmedvede: this is very strange:  u'extra_specs': {u'vendor_name': u'Nexenta', u'storage_protocol': u'iSCSI'} , not sure why it passes wrong storage protocol, but this gives me an idea on where to look at
15:54:49 <qeas> mmedvede: thanks a lot for your help
15:55:08 <mmedvede> qeas: you're welcome. That is as far as I can take you
15:56:13 <qeas> mmedvede:  I guess I will figure it out from here. thanks again
15:57:01 <mmedvede> all right, 5 minutes left. Anything else here? speak up
15:57:46 <asselin_> next week is the summit, mmedvede will you be chairing again?
15:58:08 <mmedvede> asselin_: not sure. I can if necessary
15:58:24 <mmedvede> probably would be a short meeting anyway :)
15:58:32 <asselin_> yes
15:59:20 <mmedvede> have fun at the summit, those of you who are going
15:59:42 <mmedvede> that's a wrap then, thanks for attending!
15:59:45 <mmedvede> #endmeeting