py36 create: /tmp/tmp60tx9frq/pytest_confluence_report-0.0.3-py3-none-any/.tox/py36
py36 installdeps: pytest==6.0.1, pip
py36 installed: attrs==20.3.0,importlib-metadata==2.0.0,iniconfig==1.1.1,more-itertools==8.6.0,packaging==20.4,pluggy==0.13.1,py==1.9.0,pyparsing==2.4.7,pytest==6.0.1,six==1.15.0,toml==0.10.2,zipp==3.4.0
py36 run-test-pre: PYTHONHASHSEED='3659968254'
py36 run-test: commands[0] | pip install ../pytest_confluence_report-0.0.3-py3-none-any.whl
Processing /tmp/tmp60tx9frq/pytest_confluence_report-0.0.3-py3-none-any.whl
Collecting attrs==19.3.0
  Using cached attrs-19.3.0-py2.py3-none-any.whl (39 kB)
Collecting uyaml==0.0.7
  Downloading uyaml-0.0.7-py3-none-any.whl (6.6 kB)
Collecting atlassian-python-api==1.17.5
  Downloading atlassian-python-api-1.17.5.tar.gz (83 kB)
Collecting lxml==4.5.2
  Downloading lxml-4.5.2-cp36-cp36m-manylinux1_x86_64.whl (5.5 MB)
Collecting typer==0.3.2
  Downloading typer-0.3.2-py3-none-any.whl (21 kB)
Collecting junitparser==1.4.1
  Downloading junitparser-1.4.1-py2.py3-none-any.whl (9.9 kB)
Collecting enforce-pep8==0.0.11
  Downloading enforce_pep8-0.0.11-py3-none-any.whl (11 kB)
Processing /home/travis/.cache/pip/wheels/e5/9d/ad/2ee53cf262cba1ffd8afe1487eef788ea3f260b7e6232a80fc/PyYAML-5.3.1-cp36-cp36m-linux_x86_64.whl
Collecting dataclasses==0.6.0
  Downloading dataclasses-0.6-py3-none-any.whl (14 kB)
Collecting requests
  Using cached requests-2.25.0-py2.py3-none-any.whl (61 kB)
Requirement already satisfied: six in ./.tox/py36/lib/python3.6/site-packages (from atlassian-python-api==1.17.5->pytest-confluence-report==0.0.3) (1.15.0)
Collecting oauthlib
  Downloading oauthlib-3.1.0-py2.py3-none-any.whl (147 kB)
Collecting requests_oauthlib
  Downloading requests_oauthlib-1.3.0-py2.py3-none-any.whl (23 kB)
Collecting click<7.2.0,>=7.1.1
  Using cached click-7.1.2-py2.py3-none-any.whl (82 kB)
Processing /home/travis/.cache/pip/wheels/6e/9c/ed/4499c9865ac1002697793e0ae05ba6be33553d098f3347fb94/future-0.18.2-py3-none-any.whl
Collecting chardet<4,>=3.0.2
  Using cached chardet-3.0.4-py2.py3-none-any.whl (133 kB)
Collecting certifi>=2017.4.17
  Using cached certifi-2020.11.8-py2.py3-none-any.whl (155 kB)
Collecting idna<3,>=2.5
  Using cached idna-2.10-py2.py3-none-any.whl (58 kB)
Collecting urllib3<1.27,>=1.21.1
  Using cached urllib3-1.26.2-py2.py3-none-any.whl (136 kB)
Building wheels for collected packages: atlassian-python-api
  Building wheel for atlassian-python-api (setup.py): started
  Building wheel for atlassian-python-api (setup.py): finished with status 'done'
  Created wheel for atlassian-python-api: filename=atlassian_python_api-1.17.5-py3-none-any.whl size=86565 sha256=b689a9f93505a3dbc7755ecfd05dc2ebacd2797e1f6d821669624f5b1ac7231e
  Stored in directory: /home/travis/.cache/pip/wheels/68/d6/56/4302dfa4d7697fba32c596f011428f7a90ca68399ead6173af
Successfully built atlassian-python-api
Installing collected packages: attrs, dataclasses, enforce-pep8, pyyaml, uyaml, chardet, certifi, idna, urllib3, requests, oauthlib, requests-oauthlib, atlassian-python-api, lxml, click, typer, future, junitparser, pytest-confluence-report
  Attempting uninstall: attrs
    Found existing installation: attrs 20.3.0
    Uninstalling attrs-20.3.0:
      Successfully uninstalled attrs-20.3.0
Successfully installed atlassian-python-api-1.17.5 attrs-19.3.0 certifi-2020.11.8 chardet-3.0.4 click-7.1.2 dataclasses-0.6 enforce-pep8-0.0.11 future-0.18.2 idna-2.10 junitparser-1.4.1 lxml-4.5.2 oauthlib-3.1.0 pytest-confluence-report-0.0.3 pyyaml-5.3.1 requests-2.25.0 requests-oauthlib-1.3.0 typer-0.3.2 urllib3-1.26.2 uyaml-0.0.7
WARNING: You are using pip version 20.2.2; however, version 20.2.4 is available.
You should consider upgrading via the '/tmp/tmp60tx9frq/pytest_confluence_report-0.0.3-py3-none-any/.tox/py36/bin/python -m pip install --upgrade pip' command.
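
For readers who want to reproduce this without tox, the phases logged above (py36 create, installdeps, commands[0] and commands[1]) correspond roughly to the manual steps sketched below; the virtualenv invocation and the shortened paths are assumptions, not taken from this log.

    # Sketch only: the tox phases above, approximated by hand (long /tmp paths shortened).
    virtualenv -p python3.6 .tox/py36                                              # py36 create
    .tox/py36/bin/pip install pytest==6.0.1 pip                                    # py36 installdeps
    .tox/py36/bin/pip install ../pytest_confluence_report-0.0.3-py3-none-any.whl   # commands[0]
    .tox/py36/bin/pytest --trace-config --help                                     # commands[1]
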
py36 run-test: commands[1] | pytest --trace-config --help
PLUGIN registered: <_pytest.config.PytestPluginManager object at 0x7f590a030fd0>
PLUGIN registered: <_pytest.config.Config object at 0x7f59051db208>
[... further "PLUGIN registered:" lines whose angle-bracketed object reprs (the builtin plugin modules and the CaptureManager) were stripped when this output was captured ...]
PLUGIN registered: <_pytest.cacheprovider.LFPlugin object at 0x7f59030c9b00>
PLUGIN registered: <_pytest.cacheprovider.NFPlugin object at 0x7f59030c9da0>
PLUGIN registered: <_pytest.faulthandler.FaultHandlerHooks object at 0x7f59030c90f0>
PLUGIN registered: <_pytest.stepwise.StepwisePlugin object at 0x7f59030c9080>
PLUGIN registered: <_pytest.terminal.TerminalReporter object at 0x7f59030c9860>
PLUGIN registered: <_pytest.logging.LoggingPlugin object at 0x7f59030161d0>
usage: pytest [options] [file_or_dir] [file_or_dir] [...]

positional arguments:
  file_or_dir

general:
  -k EXPRESSION         only run tests which match the given substring expression. An expression is a python evaluatable expression where all names are substring-matched against test names and their parent classes. Example: -k 'test_method or test_other' matches all test functions and classes whose name contains 'test_method' or 'test_other', while -k 'not test_method' matches those that don't contain 'test_method' in their names. -k 'not test_method and not test_other' will eliminate the matches. Additionally keywords are matched to classes and functions containing extra names in their 'extra_keyword_matches' set, as well as functions which have names assigned directly to them. The matching is case-insensitive.
  -m MARKEXPR           only run tests matching given mark expression. For example: -m 'mark1 and not mark2'.
  --markers             show markers (builtin, plugin and per-project ones).
  -x, --exitfirst       exit instantly on first error or failed test.
  --maxfail=num         exit after first num failures or errors.
  --strict-config       any warnings encountered while parsing the `pytest` section of the configuration file raise errors.
  --strict-markers, --strict  markers not registered in the `markers` section of the configuration file raise errors.
  -c file               load configuration from `file` instead of trying to locate one of the implicit configuration files.
  --continue-on-collection-errors  Force test execution even if collection errors occur.
  --rootdir=ROOTDIR     Define root directory for tests. Can be relative path: 'root_dir', './root_dir', 'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: '$HOME/root_dir'.
  --fixtures, --funcargs  show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v')
  --fixtures-per-test   show fixtures per test
  --pdb                 start the interactive Python debugger on errors or KeyboardInterrupt.
  --pdbcls=modulename:classname  start a custom interactive Python debugger on errors. For example: --pdbcls=IPython.terminal.debugger:TerminalPdb
  --trace               Immediately break when running each test.
  --capture=method      per-test capturing method: one of fd|sys|no|tee-sys.
  -s                    shortcut for --capture=no.
  --runxfail            report the results of xfail tests as if they were not marked
  --lf, --last-failed   rerun only the tests that failed at the last run (or all if none failed)
  --ff, --failed-first  run all tests, but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown.
  --nf, --new-first     run tests from new files first, then the rest of the tests sorted by file mtime
  --cache-show=[CACHESHOW]  show cache contents, don't perform collection or tests. Optional argument: glob (default: '*').
  --cache-clear         remove all cache contents at start of test run.
  --lfnf={all,none}, --last-failed-no-failures={all,none}  which tests to run with no previously (known) failures.
  --sw, --stepwise      exit on test failure and continue from last failing test next time
  --stepwise-skip       ignore the first failing test but stop on the next failing test

reporting:
  --durations=N         show N slowest setup/test durations (N=0 for all).
  -v, --verbose         increase verbosity.
  --no-header           disable header
  --no-summary          disable summary
  -q, --quiet           decrease verbosity.
  --verbosity=VERBOSE   set verbosity. Default is 0.
  -r chars              show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-warnings), 'N' can be used to reset the list. (default: 'fE').
  --disable-warnings, --disable-pytest-warnings  disable warnings summary
  -l, --showlocals      show locals in tracebacks (disabled by default).
  --tb=style            traceback print mode (auto/long/short/line/native/no).
  --show-capture={no,stdout,stderr,log,all}  Controls how captured stdout/stderr/log is shown on failed tests. Default is 'all'.
  --full-trace          don't cut any tracebacks (default is to cut).
  --color=color         color terminal output (yes/no/auto).
  --code-highlight={yes,no}  Whether code should be highlighted (only if --color is also enabled)
  --pastebin=mode       send failed|all info to bpaste.net pastebin service.
  --junit-xml=path      create junit-xml style report file at given path.
  --junit-prefix=str    prepend prefix to classnames in junit-xml output
  --result-log=path     DEPRECATED path for machine-readable result log.

collection:
  --collect-only, --co  only collect tests, don't execute them.
  --pyargs              try to interpret all arguments as python packages.
  --ignore=path         ignore path during collection (multi-allowed).
  --ignore-glob=path    ignore path pattern during collection (multi-allowed).
  --deselect=nodeid_prefix  deselect item (via node id prefix) during collection (multi-allowed).
  --confcutdir=dir      only load conftest.py's relative to specified dir.
  --noconftest          Don't load any conftest.py files.
  --keep-duplicates     Keep duplicate tests.
  --collect-in-virtualenv  Don't ignore tests in a local virtualenv directory
  --import-mode={prepend,append,importlib}  prepend/append to sys.path when importing test modules and conftest files, default is to prepend.
  --doctest-modules     run doctests in all .py modules
  --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}  choose another output format for diffs on doctest failure
  --doctest-glob=pat    doctests file matching pattern, default: test*.txt
  --doctest-ignore-import-errors  ignore doctest ImportErrors
  --doctest-continue-on-failure  for a given doctest, continue to run after the first failure

test session debugging and configuration:
  --basetemp=dir        base temporary directory for this test run.(warning: this directory is removed if it exists)
  -V, --version         display pytest version and information about plugins.When given twice, also display information about plugins.
  -h, --help            show help message and configuration info
  -p name               early-load given plugin module name or entry point (multi-allowed). To avoid loading of plugins, use the `no:` prefix, e.g. `no:doctest`.
  --trace-config        trace considerations of conftest.py files.
  --debug               store internal tracing debug information in 'pytestdebug.log'.
  -o OVERRIDE_INI, --override-ini=OVERRIDE_INI  override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.
  --assert=MODE         Control assertion debugging tools. 'plain' performs no assertion debugging. 'rewrite' (the default) rewrites assert statements in test modules on import to provide assert expression information.
  --setup-only          only setup fixtures, do not execute tests.
  --setup-show          show setup of fixtures while executing tests.
  --setup-plan          show what fixtures and tests would be executed but don't execute anything.

pytest-warnings:
  -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS  set which warnings to report, see -W option of python itself.

logging:
  --log-level=LEVEL     level of messages to catch/display. Not set by default, so it depends on the root/parent log handler's effective level, where it is "WARNING" by default.
  --log-format=LOG_FORMAT  log format as used by the logging module.
  --log-date-format=LOG_DATE_FORMAT  log date format as used by the logging module.
  --log-cli-level=LOG_CLI_LEVEL  cli logging level.
  --log-cli-format=LOG_CLI_FORMAT  log format as used by the logging module.
  --log-cli-date-format=LOG_CLI_DATE_FORMAT  log date format as used by the logging module.
  --log-file=LOG_FILE   path to a file when logging will be written to.
  --log-file-level=LOG_FILE_LEVEL  log file logging level.
  --log-file-format=LOG_FILE_FORMAT  log format as used by the logging module.
  --log-file-date-format=LOG_FILE_DATE_FORMAT  log date format as used by the logging module.
  --log-auto-indent=LOG_AUTO_INDENT  Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer.

Confluence test report:
  --confluence-upload, --cu  Convert pytest results into Confluence page
  --confluence-settings=CONFLUENCE_SETTINGS, --cs=CONFLUENCE_SETTINGS  Path to Confluence settings file e.g `settings.yml`.

[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:

  markers (linelist):   markers for test functions
  empty_parameter_set_mark (string):  default marker for empty parametersets
  norecursedirs (args): directory patterns to avoid for recursion
  testpaths (args):     directories to search for tests when no files or directories are given in the command line.
  usefixtures (args):   list of default fixtures to be used with this project
  python_files (args):  glob-style file patterns for Python test module discovery
  python_classes (args):  prefixes or glob names for Python test class discovery
  python_functions (args):  prefixes or glob names for Python test function and method discovery
  disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool):  disable string escape non-ascii characters, might cause unwanted side effects(use at your own risk)
  console_output_style (string):  console output: "classic", or with additional progress information ("progress" (percentage) | "count").
  xfail_strict (bool):  default for the strict parameter of xfail markers when not given explicitly (default: False)
  enable_assertion_pass_hook (bool):  Enables the pytest_assertion_pass hook.Make sure to delete any previously generated pyc cache files.
  junit_suite_name (string):  Test suite name for JUnit report
  junit_logging (string):  Write captured log messages to JUnit report: one of no|log|system-out|system-err|out-err|all
  junit_log_passing_tests (bool):  Capture log information for passing tests to JUnit report:
  junit_duration_report (string):  Duration time to report: one of total|call
  junit_family (string):  Emit XML for schema: one of legacy|xunit1|xunit2
  doctest_optionflags (args):  option flags for doctests
  doctest_encoding (string):  encoding used for doctest files
  cache_dir (string):   cache directory path.
  filterwarnings (linelist):  Each line specifies a pattern for warnings.filterwarnings. Processed after -W/--pythonwarnings.
  log_level (string):   default value for --log-level
  log_format (string):  default value for --log-format
  log_date_format (string):  default value for --log-date-format
  log_cli (bool):       enable log display during test run (also known as "live logging").
  log_cli_level (string):  default value for --log-cli-level
  log_cli_format (string):  default value for --log-cli-format
  log_cli_date_format (string):  default value for --log-cli-date-format
  log_file (string):    default value for --log-file
  log_file_level (string):  default value for --log-file-level
  log_file_format (string):  default value for --log-file-format
  log_file_date_format (string):  default value for --log-file-date-format
  log_auto_indent (string):  default value for --log-auto-indent
  faulthandler_timeout (string):  Dump the traceback of all threads if a test takes more than TIMEOUT seconds to finish.
  addopts (args):       extra command line options
  minversion (string):  minimally required pytest version
  required_plugins (args):  plugins that must be present for pytest to run

environment variables:
  PYTEST_ADDOPTS                  extra command line options
  PYTEST_PLUGINS                  comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD  set to disable plugin auto-loading
  PYTEST_DEBUG                    set to enable debug tracing of pytest's internals

to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option

___________________________________ summary ____________________________________
  py36: commands succeeded
  congratulations :)
write json report at: /tmp/tmp60tx9frq/pytest_confluence_report-0.0.3-py3-none-any/result.json
Time: 45.5 seconds
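
The only non-builtin options in the help output above are the two that pytest-confluence-report registers itself: --confluence-upload (alias --cu) and --confluence-settings (alias --cs). A typical invocation would therefore look roughly like the sketch below; settings.yml is the example file name given in the plugin's own help text, and the exact workflow is an assumption based only on that help output.

    # Sketch only: run the suite and publish the results to a Confluence page.
    pytest --confluence-upload --confluence-settings=settings.yml
    # The same call using the short aliases shown in the help text above.
    pytest --cu --cs=settings.yml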