Fabric8-Analytics Core API Documentation

The Fabric8-Analytics API is a microservice that is responsible for:

  • Serving generated analysis results to clients through the API
  • Scheduling new analyses based on client requests

Contributing

See our contributing guidelines for more info.

Core API Access Logs

Core API access logs have the following format:

{remote hostname} {remote logname, default '-'} {username, default '-'} {access time} "{first line of request}" {response status} {response length} {referer, default '-'} "{user agent}"

In terms of the httpd LogFormat option, this is %h %l %u %t \"%r\" %>s %b %{Referer}i \"%{User-agent}i\". See the httpd documentation for details.

For example:

172.19.0.1 - - [18/Mar/2016:07:59:17 +0000] "GET /static/patternfly/css/patternfly.min.css HTTP/1.1" 304 - http://localhost:32000/ "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:45.0) Gecko/20100101 Firefox/45.0"
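
If you need to consume these logs programmatically, the format can be parsed with a regular expression. Below is a minimal sketch; the pattern and group names are our own illustration, not part of the project:

import re

# Illustrative pattern for the access-log format above; group names are ours.
LOG_RE = re.compile(
    r'(?P<host>\S+) (?P<logname>\S+) (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<length>\S+) (?P<referer>\S+) "(?P<agent>[^"]*)"')

line = ('172.19.0.1 - - [18/Mar/2016:07:59:17 +0000] '
        '"GET /static/patternfly/css/patternfly.min.css HTTP/1.1" '
        '304 - http://localhost:32000/ "Mozilla/5.0 (X11; Fedora; Linux x86_64; '
        'rv:45.0) Gecko/20100101 Firefox/45.0"')

match = LOG_RE.match(line)
if match:
    print(match.group('status'), match.group('request'))  # 304 GET /static/...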

Docker-based API testing

From the top-level git directory, run the tests in a container using the helper script:

$ ./runtests.sh

(The above command assumes you have passwordless docker invocation configured; if you don't, sudo will be necessary to run it.)

If you're changing dependencies rather than just editing source code locally, the images need to be rebuilt when invoking runtests.sh. Set the environment variable REBUILD=1 to request image rebuilding.
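
For example, to force a rebuild before running the tests:

$ REBUILD=1 ./runtests.sh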

If the virtualenv-based offline tests have been run, then this may complain about mismatched locations in compiled files. Those can be deleted using:

$ find . -name '*.pyc' -delete

NOTE: Running the container-based tests is likely to cause any already running local core API instance launched via Docker Compose to fall over, due to changes in the SELinux labels on mounted volumes, and may also cause spurious test failures.

Virtualenv-based offline testing

Test cases marked with pytest.mark.offline may be executed without having a Docker daemon running locally.
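
Such tests use the standard pytest marker mechanism. As a minimal sketch (the test body is our own illustration, not taken from this repository):

import pytest

@pytest.mark.offline
def test_sanity():
    # Purely local assertion; no Docker daemon or running services needed.
    assert 1 + 1 == 2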

For server testing, the virtualenv should be created using Python 3.4 or later.

To configure a virtualenv (called bayesian in the example) to run these tests, first create and activate it.
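
As a sketch using the standard venv module (any equivalent tool works):

$ python3 -m venv bayesian
$ source bayesian/bin/activate

Then install the dependencies into it: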

(bayesian) $ python -m pip install -e ../lib
(bayesian) $ python -m pip install -r requirements.txt
(bayesian) $ python -m pip install -r tests/requirements.txt

The marked offline tests can then be run as:

(bayesian) $ py.test -m offline tests/

If the Docker container-based tests have been run, then this might complain about mismatched locations in compiled files. Those can be deleted using:

(bayesian) $ sudo find . -name '*.pyc' -delete

Accessing operational data for stack analyses

It's possible to access operational data related to a particular stack analysis request:

/api/v1/stack-analyses/<request-id>/_debug

Note that this endpoint is not part of the public API.
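
As a minimal sketch of fetching this data with requests (the host, port, and the lack of authentication are assumptions about your deployment; the request id is a placeholder):

import requests

# Hypothetical base URL -- substitute your deployment's host and port.
BASE_URL = "http://localhost:32000"
request_id = "REPLACE-WITH-A-REAL-REQUEST-ID"  # placeholder

resp = requests.get("{}/api/v1/stack-analyses/{}/_debug".format(BASE_URL, request_id))
print(resp.status_code)
print(resp.json())  # assuming the endpoint returns a JSON payload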

Footnotes

Coding standards

  • You can use the scripts run-linter.sh and check-docstyle.sh to check whether the code follows the PEP 8 and PEP 257 coding standards. These scripts can be run without any arguments:
./run-linter.sh
./check-docstyle.sh

The first script checks indentation, line lengths, variable names, white space around operators, etc. The second script checks all documentation strings: their presence and format. Please fix any warnings and errors reported by these scripts.
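
As a hedged illustration of what check-docstyle.sh looks for (the snippet is ours, not from this repository), a docstring following the usual PEP 257 conventions is a one-line imperative summary ending with a period:

def normalize(name):
    """Return the package name lower-cased and stripped of surrounding whitespace."""
    return name.strip().lower()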

Code complexity measurement

The scripts measure-cyclomatic-complexity.sh and measure-maintainability-index.sh are used to measure code complexity. These scripts can be run without any arguments:

./measure-cyclomatic-complexity.sh
./measure-maintainability-index.sh

The first script measures the cyclomatic complexity of all Python sources found in the repository. Please see this table for further explanation of how to interpret the results.
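
As a quick reminder of what the metric counts, here is an illustrative function (ours, not from the repository) with cyclomatic complexity 4: one for the function itself, plus one for each if and one for the loop:

def classify(n):
    if n < 0:                  # +1
        return "negative"
    for ch in str(n):          # +1
        if ch == "0":          # +1
            return "contains zero"
    return "other"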

The second script measures the maintainability index of all Python sources found in the repository. Please see the following link for an explanation of this measurement.

You can specify the command line option --fail-on-error if you need to check and use the exit code in your workflow. In this case the script returns 0 when no failures have been found, and a non-zero value otherwise.
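
For example, to gate a CI step on the result (the && echo is just to demonstrate the exit-code behaviour):

$ ./measure-cyclomatic-complexity.sh --fail-on-error && echo "complexity OK"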

Dead code detection

The script detect-dead-code.sh can be used to detect dead code in the repository. This script can be run without any arguments:

./detect-dead-code.sh

Please note that due to Python's dynamic nature, static code analyzers are likely to miss some dead code. Also, code that is only called implicitly may be reported as unused.

Because of these potential problems, only code detected with more than 90% confidence is reported.
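
As a hedged illustration of the second caveat (the snippet is ours, not from this repository): a method reached only through getattr looks unused to a static analyzer, even though it runs:

class Dispatcher:
    def handle_ping(self):
        # Never called by name anywhere, so it may be reported as dead code...
        return "pong"

    def dispatch(self, name):
        # ...yet it is invoked here, dynamically.
        return getattr(self, "handle_" + name)()

print(Dispatcher().dispatch("ping"))  # pong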

Common issues detection

The script detect-common-errors.sh can be used to detect common errors in the repository. This script can be run without any arguments:

./detect-common-errors.sh

Please note that only semantic problems are reported.

Check for scripts written in Bash

The script named check-bashscripts.sh can be used to check all Bash scripts (in fact, all files with the .sh extension) for various possible issues, incompatibilities, and caveats. This script can be run without any arguments:

./check-bashscripts.sh

Please see the following link for further explanation of how ShellCheck works and which issues it can detect.
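
As a hedged illustration of the kind of issue ShellCheck reports (the snippet is ours, not from this repository): an unquoted variable expansion is flagged as SC2086, because it undergoes word splitting and globbing:

path="some dir/file.txt"
cat $path     # SC2086: splits into two arguments; write cat "$path" instead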

Code coverage report

Code coverage is reported via codecov.io. The results can be seen at the following address:

code coverage report
