# Python integration test suite

This document describes the Python integration test suite: how to run the tests, how to write new ones, and how to update the test dependencies. Please consult the `server/CONTRIBUTING.md` document for general information on the overall test setup and the other testing suites.
## Running tests

Tests can be run using `run.sh`, `dev.sh`, or directly using `pytest`. Please note that running the BigQuery tests requires a few manual steps (see below).
### Using `run.sh`

The `run.sh` script is an active work in progress, and will eventually replace the `dev.sh` option below.

The easiest way to run the test suite is to:

1. Run all the Python integration tests with `./server/tests-py/run.sh`.
2. Filter on specific test files with `./server/tests-py/run.sh -- create_async_action_with_nested_output_and_relation.py`.

If you have any issues with `run.sh`, please create a GitHub issue and test via `dev.sh` instead.
### Using `dev.sh`

```sh
scripts/dev.sh test --integration
```

NOTE: this only runs the tests for Postgres. If you want to run tests for a different backend, use:

```sh
scripts/dev.sh test --integration --backend mssql
```

The available options are documented in `scripts/parse-pytest-backend`.
You can filter tests by using `-k <name>`. Note that `<name>` is case-insensitive:

```sh
scripts/dev.sh test --integration --backend mssql -k MSSQL
```

Note that you can also use expressions here, for example:

```sh
scripts/dev.sh test --integration --backend mssql -k "MSSQL and not Permission"
```

See the pytest docs for more details.

If you want to stop after the first test failure, you can pass `-x`:

```sh
scripts/dev.sh test --integration --backend mssql -k MSSQL -x
```

You can increase or decrease the log verbosity by adding `-v` or `-q` to the command.
### Running tests directly

WARNING: running tests manually will force some tests to be skipped. `dev.sh` sets up certain environment variables that decide how, and whether, some of the tests are executed.
To run the Python tests, you'll need to install the necessary Python dependencies first. It is recommended that you do this in a self-contained Python venv, which is supported by Python 3.3+ out of the box. To create one, run:

```sh
python3 -m venv .python-venv
```

(The second argument names a directory where the venv sandbox will be created; it can be anything you like, but `.python-venv` is `.gitignore`d.)
With the venv created, you can enter it in your current shell session by running:

```sh
source .python-venv/bin/activate
```

(Source `.python-venv/bin/activate.fish` instead if you are using fish as your shell.)
Install the necessary Python dependencies into the sandbox:

```sh
pip3 install -r tests-py/requirements.txt
```

Install the dependencies for the Node server used by the remote schema tests:

```sh
(cd tests-py/remote_schemas/nodejs && npm ci)
```
Start an instance of `graphql-engine` for the test suite to use:

```sh
env EVENT_WEBHOOK_HEADER=MyEnvValue \
    EVENT_WEBHOOK_HANDLER=http://localhost:5592 \
    SCHEDULED_TRIGGERS_WEBHOOK_DOMAIN=http://127.0.0.1:5594 \
    cabal new-run -- exe:graphql-engine \
    --database-url='postgres://<user>:<password>@<host>:<port>/<dbname>' \
    serve --stringify-numeric-types
```

The environment variables are needed for a couple of tests, and the `--stringify-numeric-types` option is used to avoid the need to do floating-point comparisons. Optionally, replace the `--database-url` parameter with `--metadata-database-url` to enable testing against multiple sources.
Optionally, add more sources to test against. If the tests include more sources (e.g., by using `-k MSSQL`), you can use the following commands to add those sources to your running graphql-engine instance:

```sh
# Add a Postgres source
curl "$METADATA_URL" \
  --data-raw '{"type":"pg_add_source","args":{"name":"default","configuration":{"connection_info":{"database_url":"'"$POSTGRES_DB_URL"'","pool_settings":{}}}}}'

# Add a SQL Server source
curl "$METADATA_URL" \
  --data-raw '{"type":"mssql_add_source","args":{"name":"mssql","configuration":{"connection_info":{"connection_string":"'"$MSSQL_DB_URL"'","pool_settings":{}}}}}'

# Optionally, verify that the sources have been added
curl "$METADATA_URL" --data-raw '{"type":"export_metadata","args":{}}'
```
With the server running, run the test suite:

```sh
cd tests-py
pytest --hge-urls http://localhost:8080 \
       --pg-urls 'postgres://<user>:<password>@<host>:<port>/<dbname>'
```

This will run all the tests, which can take a couple of minutes (especially since some of the tests are slow). You can configure `pytest` to run only a subset of the tests; see the pytest documentation for more details.
Some other useful points of note:

- It is recommended to use a separate Postgres database for testing, since the tests will drop and recreate the `hdb_catalog` schema, and they may fail if certain tables already exist. (It's also useful to be able to just drop and recreate the entire test database if it somehow gets into a bad state.)
- You can pass the `-v` or `-vv` options to `pytest` to enable more verbose output while running the tests and in test failures. You can also pass the `-l` option to display the current values of Python local variables in test failures.
Tests can be run against a specific backend (defaulting to Postgres) with the `--backend` flag, for example:

```sh
pytest --hge-urls http://localhost:8080 \
       --pg-urls 'postgres://<user>:<password>@<host>:<port>/<dbname>' \
       --backend mssql -k TestGraphQLQueryBasicCommon
```

For more details, please consult `pytest --help`.
### Running BigQuery tests

Running integration tests against a BigQuery data source is a little more involved due to the necessary service account requirements:

```sh
HASURA_BIGQUERY_PROJECT_ID=            # the project ID of the service account
HASURA_BIGQUERY_SERVICE_ACCOUNT_EMAIL= # eg. "<<SERVICE_ACCOUNT_NAME>>@<<PROJECT_NAME>>.iam.gserviceaccount.com"
HASURA_BIGQUERY_SERVICE_KEY=           # the service account key
```

Before running the test suite:

1. Store the service account's project ID in the `HASURA_BIGQUERY_PROJECT_ID` variable, and the service account key in the `HASURA_BIGQUERY_SERVICE_KEY` variable:

   ```sh
   export HASURA_BIGQUERY_SERVICE_KEY=$(cat /path/to/service/account)
   ```

2. Verify the service account credentials:

   ```sh
   source scripts/verify-bigquery-creds.sh $HASURA_BIGQUERY_PROJECT_ID $HASURA_BIGQUERY_SERVICE_KEY $HASURA_BIGQUERY_SERVICE_ACCOUNT_EMAIL
   ```

3. Run the tests with the `HASURA_BIGQUERY_SERVICE_KEY` and `HASURA_BIGQUERY_PROJECT_ID` environment variables set. For example:

   ```sh
   scripts/dev.sh test --integration --backend bigquery -k TestGraphQLQueryBasicBigquery
   ```

Setting `HASURA_DEBUGGING_ASSERT_CORRECT_BIGQUERY_INTROSPECTION=true` enables assertions that verify correct BigQuery introspection behaviour. This is automatically set in CI tests; set it manually for local debugging if needed.

Note to Hasura team: a service account is already set up for internal use; please check the wiki for further details.
## Tests structure

- Tests are grouped as test classes in test modules (module names starting with `test_`).
- The configuration files (if needed) for the tests in a class are usually kept in one folder, named via the `dir` variable or the `dir()` function.
- Some tests (like those in `test_graphql_queries.py`) require a setup and teardown per class. Here we extend the `DefaultTestSelectQueries` class, which defines a fixture that runs the configurations in `setup.yaml` and `teardown.yaml` once per class. The extending test class should define a function named `dir()`, which returns the configuration folder.
- For mutation tests (like those in `test_graphql_mutations.py`), we need a `schema_setup` and `schema_teardown` per class, and a `values_setup` and `values_teardown` per test. Extend the `DefaultTestMutations` class for this; it runs `setup.yaml` and `teardown.yaml` once per class, and `values_setup.yaml` and `values_teardown.yaml` once per test.
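For illustration, here is a minimal sketch of such a per-class setup/teardown test. The base class and the `check_query_f` helper are the ones described above (their import locations are assumptions); the class, folder, and YAML file names are hypothetical:

```python
from validate import check_query_f  # helper that runs a query/response YAML file
from super_classes import DefaultTestSelectQueries  # assumed module for the base class


class TestExampleSelectQueries(DefaultTestSelectQueries):

    def test_select_authors(self, hge_ctx):
        # Runs the GraphQL query in the given YAML file against the running
        # graphql-engine and asserts that the response matches the expected one.
        check_query_f(hge_ctx, self.dir() + '/select_authors.yaml')

    @classmethod
    def dir(cls):
        # Hypothetical folder holding this class's setup.yaml, teardown.yaml,
        # and the query/expected-response files used by the tests above.
        return 'queries/graphql_query/example'
```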
## Writing python tests

First, check whether the test you intend to write already exists in the test suite, so that you don't add a duplicate; an existing test may just need to be modified.

All the tests use setup and teardown. The setup step is used to initialize the graphql-engine and the database in the state in which the tests should be run; after the tests are run, that state needs to be cleared, which is done in the teardown step. The setup and teardown are localised for every Python test class. See `TestCreateAndDelete` in `test_events.py` for reference.
The setup and teardown can be configured to run before and after every test in a test class, or to run once before and after running all the tests in the class. Depending on the use case, there are different fixtures, like `per_class_tests_db_state` and `per_method_tests_db_state`, defined in the `conftest.py` file.
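As a sketch of how these fixtures are used (the fixture name comes from `conftest.py` as noted above; the class, folder, and file names are hypothetical):

```python
import pytest

from validate import check_query_f

# Run this class's setup.yaml/teardown.yaml once for the whole class,
# rather than before and after every test method.
@pytest.mark.usefixtures('per_class_tests_db_state')
class TestExampleEventTriggers:

    def test_create_and_delete(self, hge_ctx):
        check_query_f(hge_ctx, self.dir() + '/create_and_delete.yaml')

    @classmethod
    def dir(cls):
        # Hypothetical folder containing setup.yaml and teardown.yaml.
        return 'queries/event_triggers/example'
```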
Sometimes it's necessary to run the graphql-engine with a different configuration for only a particular set of tests. In this case, those tests should be run only when the graphql-engine is running with that configuration, and should be skipped in other configurations. This can be done by accepting a new command-line flag in the pytest command and running the tests depending on the value or presence of that flag. After adding this kind of test, a new section needs to be added in `test-server.sh`, and that section's name should also be added in the `server-test-names.txt` file; otherwise the test will not be run in CI.
For example, the tests in `test_remote_schema_permissions.py` are only to be run when remote schema permissions are enabled in the graphql-engine, and should be skipped otherwise. To run these tests, we parse a command-line option from pytest (`--enable-remote-schema-permissions`) whose presence means that we need to run them. When the tests are run with this command-line option, it's assumed that the server has remote schema permissions enabled.
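A rough sketch of the mechanics (the flag name is the one discussed above, but the skip fixture and its placement are illustrative; the real `conftest.py` is more involved):

```python
import pytest

# In conftest.py: register the command-line flag with pytest.
def pytest_addoption(parser):
    parser.addoption(
        '--enable-remote-schema-permissions',
        action='store_true',
        default=False,
        help='Run tests that assume remote schema permissions are enabled',
    )

# In the test module: skip these tests unless the flag was passed, since
# they assume graphql-engine was started with remote schema permissions on.
@pytest.fixture(scope='class', autouse=True)
def remote_schema_permissions_flag(request):
    if not request.config.getoption('--enable-remote-schema-permissions'):
        pytest.skip('--enable-remote-schema-permissions is not set')
```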
## Adding a new backend

The current workflow for supporting a new backend in integration tests is as follows:

1. Update `dev.sh` to support the new backend.
2. Add setup and teardown files:
   - `setup_<backend>`: for `v1/query` or metadata queries, such as `<backend>_track_table`.
   - `schema_setup_<backend>`: for `v2/query` queries, such as `<backend>_run_sql`.
   - `teardown_<backend>` and `cleardb_<backend>`.
   - Important: the filename suffixes must be the same as the value passed to `--backend`; that's how the files are looked up.
3. Specify a `backend` parameter for the `per_backend_test_class` and `per_backend_test_function` fixtures, parameterised by backend; a sketch follows the notes below.

Note: unless teardown is disabled via `skip_teardown` (\*), in which case this phase is skipped entirely, `teardown.yaml` always runs before `schema_teardown.yaml`, even if the tests fail. See `setup_and_teardown` in `server/tests-py/conftest.py` for the full logic.

(\*): See `setup_and_teardown_v1q` and `setup_and_teardown_v2q` in `conftest.py` for more details.

This means, for example, that if `teardown.yaml` untracks a table, and `schema_teardown.yaml` runs raw SQL to drop the table, both would succeed (assuming the table is tracked/exists).
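For instance, a backend-parameterised test class might look roughly like this (the fixture names are the ones mentioned in step 3 above; the class, backends list, and file names are hypothetical):

```python
import pytest

from validate import check_query_f

# Parameterising by backend runs the class once per listed backend; the
# per-backend fixture consumes the parameter, and the same value selects
# the setup_<backend>/teardown_<backend> files described above.
@pytest.mark.parametrize('backend', ['mssql', 'postgres'])
@pytest.mark.usefixtures('per_backend_test_class', 'per_class_tests_db_state')
class TestExampleQueryCommon:

    def test_simple_select(self, hge_ctx):
        check_query_f(hge_ctx, self.dir() + '/simple_select.yaml')

    @classmethod
    def dir(cls):
        return 'queries/graphql_query/example'
```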
## Test suite naming convention

The current convention is to indicate the backend(s) the tests can be run against in the class name. For example:

- `TestGraphQLQueryBasicMSSQL` for tests that can only be run against a SQL Server backend
- `TestGraphQLQueryBasicCommon` for tests that can be run against more than one backend
- If a class name is suffixed with neither a backend name nor `Common`, it is likely a test written pre-v2.0 that can only be run on Postgres.

This naming convention enables easier test filtering with the pytest command-line flags. The backend-specific and common test suites are disjoint; for example, run `pytest --integration -k "Common or MSSQL" --backend mssql` to run all MSSQL tests.

Note that `--backend` does not interact with the selection of tests; you will generally have to combine `--backend` with `-k`.
## Updating Python dependencies

The packages/requirements are documented in two files:

- `server/tests-py/requirements-top-level.txt`
- `server/tests-py/requirements.txt`

The `server/tests-py/requirements-top-level.txt` file is the main file. It contains the direct dependencies along with the version requirements we know we should be careful about.

The `server/tests-py/requirements.txt` file is the lock file. It holds version numbers for all direct and transitive dependencies. This file can be re-generated by:
1. Updating the direct dependencies in `server/tests-py/requirements-top-level.txt`
2. Deleting the lock file `server/tests-py/requirements.txt`
3. Running `dev.sh test --integration`
4. Bumping `DEVSH_VERSION` in `scripts/dev.sh` to force a reinstall of these dependencies

Step 3 can also be done manually (inside the venv, from the `server/tests-py` directory):

```sh
pip3 install -r requirements-top-level.txt
pip3 freeze > requirements.txt
```