GLOB sdist-make: /tmp/tmpbqada78o/pytest-sunpy-1.0.0/setup.py
py36 create: /tmp/tmpbqada78o/pytest-sunpy-1.0.0/.tox/py36
py36 installdeps: pytest==3.9.1
py36 inst: /tmp/tmpbqada78o/pytest-sunpy-1.0.0/.tox/.tmp/package/1/pytest-sunpy-1.0.0.zip
py36 installed: apipkg==1.5,atomicwrites==1.2.1,attrs==18.2.0,coverage==4.5.2,execnet==1.5.0,hypothesis==4.0.1,more-itertools==5.0.0,numpy==1.16.0,pluggy==0.8.1,psutil==5.4.8,py==1.7.0,pytest==3.9.1,pytest-arraydiff==0.3,pytest-astropy==0.5.0,pytest-cov==2.6.1,pytest-doctestplus==0.2.0,pytest-forked==1.0.1,pytest-mock==1.10.0,pytest-openfiles==0.3.2,pytest-remotedata==0.3.1,pytest-rerunfailures==6.0,pytest-sunpy==1.0.0,pytest-xdist==1.26.0,six==1.12.0
py36 run-test-pre: PYTHONHASHSEED='2578982583'
py36 runtests: commands[0] | pytest --trace-config --help
PLUGIN registered: <_pytest.config.PytestPluginManager object at 0x7f4a3ca54c50>
PLUGIN registered: <_pytest.config.Config object at 0x7f4a36c69550>
[further "PLUGIN registered:" lines followed here; their object reprs were lost in this capture]
PLUGIN registered: <_pytest.capture.CaptureManager object at 0x7f4a2f03a5c0>
PLUGIN registered: <_pytest.cacheprovider.LFPlugin object at 0x7f4a2efd8c50>
PLUGIN registered: <_pytest.cacheprovider.NFPlugin object at 0x7f4a2efd8278>
PLUGIN registered: <_pytest.logging.LoggingPlugin object at 0x7f4a2eea62e8>
PLUGIN registered: <_pytest.terminal.TerminalReporter object at 0x7f4a2efc35c0>

usage: pytest [options] [file_or_dir] [file_or_dir] [...]

positional arguments:
  file_or_dir

general:
  -k EXPRESSION         only run tests which match the given substring expression. An expression is a python evaluatable expression where all names are substring-matched against test names and their parent classes. Example: -k 'test_method or test_other' matches all test functions and classes whose name contains 'test_method' or 'test_other', while -k 'not test_method' matches those that don't contain 'test_method' in their names. Additionally keywords are matched to classes and functions containing extra names in their 'extra_keyword_matches' set, as well as functions which have names assigned directly to them.
  -m MARKEXPR           only run tests matching given mark expression. example: -m 'mark1 and not mark2'.
  --markers             show markers (builtin, plugin and per-project ones).
  -x, --exitfirst       exit instantly on first error or failed test.
  --maxfail=num         exit after first num failures or errors.
  --strict              marks not registered in configuration file raise errors.
  -c file               load configuration from `file` instead of trying to locate one of the implicit configuration files.
  --continue-on-collection-errors
                        Force test execution even if collection errors occur.
  --rootdir=ROOTDIR     Define root directory for tests. Can be relative path: 'root_dir', './root_dir', 'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: '$HOME/root_dir'.
  --fixtures, --funcargs
                        show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v')
  --fixtures-per-test   show fixtures per test
  --import-mode={prepend,append}
                        prepend/append to sys.path when importing test modules, default is to prepend.
  --pdb                 start the interactive Python debugger on errors or KeyboardInterrupt.
  --pdbcls=modulename:classname
                        start a custom interactive Python debugger on errors.
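The `-k EXPRESSION` semantics described above ("a python evaluatable expression where all names are substring-matched against test names") can be sketched in a few lines of plain Python. This is a simplified illustration only, not pytest's actual implementation; the `KeywordMapping` and `matches` names are invented here:

```python
# Simplified sketch of -k expression evaluation: each bare name in the
# expression is truth-tested by substring matching against the test's name.
# NOT pytest's real code -- an illustration of the idea only.

class KeywordMapping:
    """Resolves names in a -k expression to substring-match results."""
    def __init__(self, test_name):
        self.test_name = test_name

    def __getitem__(self, name):
        # A bare name is "true" if it is a substring of the test name.
        return name in self.test_name

def matches(expression, test_name):
    # eval() the expression with bare names resolved through the mapping,
    # mirroring the "python evaluatable expression" wording of the help text.
    return bool(eval(expression, {"__builtins__": {}}, KeywordMapping(test_name)))

print(matches("test_method or test_other", "TestClass.test_method"))  # True
print(matches("not test_method", "TestClass.test_method"))            # False
```

Real pytest additionally matches against parent class names and extra keywords, as the help text notes; the sketch only covers the test-name substring part.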
                        For example: --pdbcls=IPython.terminal.debugger:TerminalPdb
  --trace               Immediately break when running each test.
  --capture=method      per-test capturing method: one of fd|sys|no.
  -s                    shortcut for --capture=no.
  --runxfail            run tests even if they are marked xfail
  --lf, --last-failed   rerun only the tests that failed at the last run (or all if none failed)
  --ff, --failed-first  run all tests but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown
  --nf, --new-first     run tests from new files first, then the rest of the tests sorted by file mtime
  --cache-show          show cache contents, don't perform collection or tests
  --cache-clear         remove all cache contents at start of test run.
  --lfnf={all,none}, --last-failed-no-failures={all,none}
                        change the behavior when no test failed in the last run or no information about the last failures was found in the cache
  --arraydiff           Enable comparison of arrays to reference arrays stored in files
  --arraydiff-generate-path=ARRAYDIFF_GENERATE_PATH
                        directory to generate reference files in, relative to location where py.test is run
  --arraydiff-reference-path=ARRAYDIFF_REFERENCE_PATH
                        directory containing reference files, relative to location where py.test is run
  --arraydiff-default-format=ARRAYDIFF_DEFAULT_FORMAT
                        Default format for the reference arrays (can be 'fits' or 'text' currently)

reporting:
  -v, --verbose         increase verbosity.
  -q, --quiet           decrease verbosity.
  --verbosity=VERBOSE   set verbosity
  -r chars              show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except pP. Warnings are displayed at all times except when --disable-warnings is set
  --disable-warnings, --disable-pytest-warnings
                        disable warnings summary
  -l, --showlocals      show locals in tracebacks (disabled by default).
  --tb=style            traceback print mode (auto/long/short/line/native/no).
  --show-capture={no,stdout,stderr,log,all}
                        Controls how captured stdout/stderr/log is shown on failed tests. Default is 'all'.
  --full-trace          don't cut any tracebacks (default is to cut).
  --color=color         color terminal output (yes/no/auto).
  --durations=N         show N slowest setup/test durations (N=0 for all).
  --pastebin=mode       send failed|all info to bpaste.net pastebin service.
  --junit-xml=path      create junit-xml style report file at given path.
  --junit-prefix=str    prepend prefix to classnames in junit-xml output
  --result-log=path     DEPRECATED path for machine-readable result log.

collection:
  --collect-only        only collect tests, don't execute them.
  --pyargs              try to interpret all arguments as python packages.
  --ignore=path         ignore path during collection (multi-allowed).
  --deselect=nodeid_prefix
                        deselect item during collection (multi-allowed).
  --confcutdir=dir      only load conftest.py's relative to specified dir.
  --noconftest          Don't load any conftest.py files.
  --keep-duplicates     Keep duplicate tests.
  --collect-in-virtualenv
                        Don't ignore tests in a local virtualenv directory
  --doctest-modules     run doctests in all .py modules
  --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
                        choose another output format for diffs on doctest failure
  --doctest-glob=pat    doctests file matching pattern, default: test*.txt
  --doctest-ignore-import-errors
                        ignore doctest ImportErrors
  --doctest-continue-on-failure
                        for a given doctest, continue to run after the first failure

test session debugging and configuration:
  --basetemp=dir        base temporary directory for this test run. (warning: this directory is removed if it exists)
  --version             display pytest lib version and import information.
  -h, --help            show help message and configuration info
  -p name               early-load given plugin (multi-allowed). To avoid loading of plugins, use the `no:` prefix, e.g. `no:doctest`.
  --trace-config        trace considerations of conftest.py files.
  --debug               store internal tracing debug information in 'pytestdebug.log'.
  -o OVERRIDE_INI, --override-ini=OVERRIDE_INI
                        override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.
  --assert=MODE         Control assertion debugging tools. 'plain' performs no assertion debugging. 'rewrite' (the default) rewrites assert statements in test modules on import to provide assert expression information.
  --setup-only          only setup fixtures, do not execute tests.
  --setup-show          show setup of fixtures while executing tests.
  --setup-plan          show what fixtures and tests would be executed but don't execute anything.

pytest-warnings:
  -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
                        set which warnings to report, see -W option of python itself.

logging:
  --no-print-logs       disable printing caught logs on failed tests.
  --log-level=LOG_LEVEL
                        logging level used by the logging module
  --log-format=LOG_FORMAT
                        log format as used by the logging module.
  --log-date-format=LOG_DATE_FORMAT
                        log date format as used by the logging module.
  --log-cli-level=LOG_CLI_LEVEL
                        cli logging level.
  --log-cli-format=LOG_CLI_FORMAT
                        log format as used by the logging module.
  --log-cli-date-format=LOG_CLI_DATE_FORMAT
                        log date format as used by the logging module.
  --log-file=LOG_FILE   path to a file when logging will be written to.
  --log-file-level=LOG_FILE_LEVEL
                        log file logging level.
  --log-file-format=LOG_FILE_FORMAT
                        log format as used by the logging module.
  --log-file-date-format=LOG_FILE_DATE_FORMAT
                        log date format as used by the logging module.
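Most of the logging flags above have ini-file equivalents (listed later in this output). A minimal configuration sketch wiring a few of them together; the concrete values here are illustrative, not recommendations:

```ini
[pytest]
# Live log display during the run (equivalent of --log-cli-level etc.)
log_cli = true
log_cli_level = INFO
log_cli_format = %(asctime)s %(levelname)-8s %(message)s
# Separate, more verbose log written to a file
log_file = pytest.log
log_file_level = DEBUG
```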
distributed and subprocess testing:
  -n numprocesses, --numprocesses=numprocesses
                        shortcut for '--dist=load --tx=NUM*popen', you can use 'auto' here for auto detection CPUs number on host system
  --maxprocesses=maxprocesses
                        limit the maximum number of workers to process the tests when using --numprocesses=auto
  --max-worker-restart=MAXWORKERRESTART, --max-slave-restart=MAXWORKERRESTART
                        maximum number of workers that can be restarted when crashed (set to zero to disable this feature). '--max-slave-restart' option is deprecated and will be removed in a future release
  --dist=distmode       set mode for distributing tests to exec environments. each: send each test to all available environments. load: load balance by sending any pending test to any available environment. loadscope: load balance by sending pending groups of tests in the same scope to any available environment. loadfile: load balance by sending test grouped by file to any available environment. (default) no: run tests inprocess, don't distribute.
  --tx=xspec            add a test execution environment. some examples: --tx popen//python=python2.5 --tx socket=192.168.1.102:8888 --tx ssh=user@codespeak.net//chdir=testcache
  -d                    load-balance tests. shortcut for '--dist=load'
  --rsyncdir=DIR        add directory for rsyncing to remote tx nodes.
  --rsyncignore=GLOB    add expression for ignores when rsyncing to remote tx nodes.
  --boxed               backward compatibility alias for pytest-forked --forked
  -f, --looponfail      run tests in subprocess, wait for modified files and re-run failing test set until all pass.

re-run failing tests to eliminate flaky failures:
  --reruns=RERUNS       number of times to re-run failed tests. defaults to 0.
  --reruns-delay=RERUNS_DELAY
                        add time (seconds) delay between reruns.
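In a tox setup like the one driving this run, the pytest-xdist options above would typically be passed through the testenv commands. A sketch; the specific flag values are illustrative:

```ini
[testenv]
deps =
    pytest==3.9.1
    pytest-xdist
commands =
    # -n auto spawns one worker per detected CPU (shortcut for --dist=load)
    pytest -n auto --max-worker-restart=2 {posargs}
```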
forked subprocess test execution:
  --forked              box each test run in a separate process (unix)

coverage reporting with distributed testing support:
  --cov=[path]          measure coverage for filesystem path (multi-allowed)
  --cov-report=type     type of report to generate: term, term-missing, annotate, html, xml (multi-allowed). term, term-missing may be followed by ":skip-covered". annotate, html and xml may be followed by ":DEST" where DEST specifies the output location. Use --cov-report= to not generate any output.
  --cov-config=path     config file for coverage, default: .coveragerc
  --no-cov-on-fail      do not report coverage if test run fails, default: False
  --no-cov              Disable coverage report completely (useful for debuggers), default: False
  --cov-fail-under=MIN  Fail if the total coverage is less than MIN.
  --cov-append          do not delete coverage but append to current, default: False
  --cov-branch          Enable branch coverage.

Hypothesis:
  --hypothesis-profile=HYPOTHESIS_PROFILE
                        Load in a registered hypothesis.settings profile
  --hypothesis-verbosity={quiet,normal,verbose,debug}
                        Override profile with verbosity setting specified
  --hypothesis-show-statistics
                        Configure when statistics are printed
  --hypothesis-seed=HYPOTHESIS_SEED
                        Set a seed to use for all Hypothesis tests

custom options:
  --remote-data=[REMOTE_DATA]
                        run tests with online data
  --open-files          fail if any test leaves files open
  --doctest-plus        enable running doctests with additional features not found in the normal doctest plugin
  --doctest-rst         enable running doctests in .rst documentation
  --doctest-plus-atol=DOCTEST_PLUS_ATOL
                        set the absolute tolerance for float comparison
  --doctest-plus-rtol=DOCTEST_PLUS_RTOL
                        set the relative tolerance for float comparison

[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:

  markers (linelist)             markers for test functions
  empty_parameter_set_mark (string)
                                 default marker for empty parametersets
  norecursedirs (args)           directory patterns to avoid for recursion
  testpaths (args)               directories to search for tests when no files or dire
  console_output_style (string)  console output: classic or with additional progr
  usefixtures (args)             list of default fixtures to be used with this project
  python_files (args)            glob-style file patterns for Python test module disco
  python_classes (args)          prefixes or glob names for Python test class discover
  python_functions (args)        prefixes or glob names for Python test function and m
  xfail_strict (bool)            default for the strict parameter of xfail markers whe
  junit_suite_name (string)      Test suite name for JUnit report
  junit_logging (string)         Write captured log messages to JUnit report: one of n
  doctest_optionflags (args)     option flags for doctests
  doctest_encoding (string)      encoding used for doctest files
  cache_dir (string)             cache directory path.
  filterwarnings (linelist)      Each line specifies a pattern for warnings.filterwar
  log_print (bool)               default value for --no-print-logs
  log_level (string)             default value for --log-level
  log_format (string)            default value for --log-format
  log_date_format (string)       default value for --log-date-format
  log_cli (bool)                 enable log display during test run (also known as "li
  log_cli_level (string)         default value for --log-cli-level
  log_cli_format (string)        default value for --log-cli-format
  log_cli_date_format (string)   default value for --log-cli-date-format
  log_file (string)              default value for --log-file
  log_file_level (string)        default value for --log-file-level
  log_file_format (string)       default value for --log-file-format
  log_file_date_format (string)  default value for --log-file-date-format
  addopts (args)                 extra command line options
  minversion (string)            minimally required pytest version
  rsyncdirs (pathlist)           list of (relative) paths to be rsynced for remote dis
  rsyncignore (pathlist)         list of (relative) glob-style paths to be ignored for
  looponfailroots (pathlist)     directories to check for changes
  remote_data_strict (bool)      If 'True', tests will fail if they attempt to access
  open_files_ignore (args)       when used with the --open-files option, allows specif
  mock_traceback_monkeypatch (string)
                                 Monkeypatch the mock library to improve re
  mock_use_standalone_module (string)
                                 Use standalone "mock" (from PyPI) instead
  doctest_plus (string)          enable running doctests with additional features not
  doctest_norecursedirs (args)   like the norecursedirs option but applies only to
  doctest_rst (string)           Run the doctests in the rst documentation
  doctest_plus_atol (string)     set the absolute tolerance for float comparison
  doctest_plus_rtol (string)     set the relative tolerance for float comparison

environment variables:
  PYTEST_ADDOPTS           extra command line options
  PYTEST_PLUGINS           comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD
                           set to disable plugin auto-loading
  PYTEST_DEBUG             set to enable debug tracing of pytest's internals

to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option

___________________________________ summary ____________________________________
  py36: commands succeeded
  congratulations :)

wrote json report at: /tmp/tmpbqada78o/pytest-sunpy-1.0.0/result.json
Time: 95.1 seconds
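For reference, several of the ini-options listed in the help output above could be combined in a single pytest.ini. A sketch with illustrative values only (assuming `doctest_plus` takes 'enabled'/'disabled' as in pytest-doctestplus, and `remote_data_strict` is the boolean shown above):

```ini
[pytest]
minversion = 3.9
addopts = -ra --strict
norecursedirs = build docs
doctest_plus = enabled
remote_data_strict = true
```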