# Testing in Chromium
Testing is an essential component of software development in Chromium. It ensures that Chrome behaves as we expect, and it is critical for finding bugs and regressions at an early stage.
This document gives a high-level overview of testing in Chromium: the types of tests we have, the purpose of each test type, which tests are needed for new features, and so on.
There are several different types of tests in Chromium that serve different purposes. Some types of tests run on multiple platforms, while others are specific to one platform.
Web Tests work by loading pages in a test renderer (content_shell) and comparing the rendered output or
JavaScript output against an expected output file.
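The comparison step can be sketched as follows. This is a simplified, hypothetical illustration of comparing actual output against an expected (baseline) file, not the actual web test harness, which also handles rebaselining, image diffs, and more:

```python
import tempfile
from pathlib import Path

def check_against_baseline(actual_output: str, expected_file: Path) -> bool:
    """Return True if the test's output matches its baseline file exactly."""
    return actual_output == expected_file.read_text()

# Hypothetical usage with a throwaway baseline file:
with tempfile.TemporaryDirectory() as d:
    baseline = Path(d) / "example-expected.txt"
    baseline.write_text("PASS\n")
    print(check_against_baseline("PASS\n", baseline))  # True -> test passes
    print(check_against_baseline("FAIL\n", baseline))  # False -> test fails
```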
Web Tests are required to launch new W3C API support in Chromium.

The following table shows which types of tests work on which platforms.
| | Linux | Windows | Mac | Android | iOS | CrOS |
|---|---|---|---|---|---|---|
| gtest (C++) | √ | √ | √ | √ | √ | √ |
| Browser Tests (C++) | √ | √ | √ | √ | | |
| Web Tests (HTML, JS) | √ | √ | √ | | | |
| Telemetry (Python) | √ | √ | √ | √ | | √ |
| Robolectric (Java) | | | | √ | | |
| Instrumentation Tests (Java) | | | | √ | | |
| EarlGrey | | | | | √ | |
| Fuzzer Tests (C++) | √ | √ | √ | √ | √ | |
| Tast (Golang) | | | | | | √ |
*** note
**Browser Tests Note**

Only a subset of browser tests is enabled on Android. Other browser tests are not supported on Android yet; crbug/611756 tracks the effort to enable them on Android.
***
*** note
**Web Tests Note**

Web Tests used to be enabled on Android K, but they are disabled on Android now; see this thread for more context.
***
*** note
**Tast Tests Note**

Tast tests are written, maintained, and gardened by ChromeOS engineers.
ChromeOS tests that Chrome engineers support should be (re)written in the following priority order:

When a Tast test fails:
***
Right now, code coverage is the only way we have to measure test coverage. The following are the recommended thresholds for the different code coverage levels:
* Level 1 (improving): >0%
* Level 2 (acceptable): 60%
* Level 3 (commendable): 75%
* Level 4 (exemplary): 90%
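The thresholds above can be sketched as a small classifier. This is an illustrative snippet using this document's numbers, with the assumption that each boundary is inclusive at the lower end:

```python
def coverage_level(percent: float) -> str:
    """Map a coverage percentage to the recommended levels above.

    Thresholds come from this document; treating each boundary as
    inclusive (e.g. exactly 60% counts as acceptable) is an assumption.
    """
    if percent >= 90:
        return "level 4 (exemplary)"
    if percent >= 75:
        return "level 3 (commendable)"
    if percent >= 60:
        return "level 2 (acceptable)"
    if percent > 0:
        return "level 1 (improving)"
    return "no coverage"

print(coverage_level(82.5))  # level 3 (commendable)
```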
Go to the code coverage dashboard to check the code coverage for your project.
TODO: add the link to the instruction about how to enable new tests in CQ and main waterfall
Before you can run a gtest, you need to build the appropriate launcher target
that contains your test, such as blink_unittests:

```shell
autoninja -C out/Default blink_unittests
```
To run specific tests, rather than all tests in a launcher, pass
`--gtest_filter=` with a pattern. The simplest pattern is the full name of a
test (`SuiteOrFixtureName.TestName`), but you can use wildcards:

```shell
out/Default/blink_unittests --gtest_filter='Foo*'
```
Use `--help` for more ways to select and run tests.
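The filter semantics can be modeled in a few lines. This is a simplified Python sketch of how `--gtest_filter` selects tests, based on gtest's documented behavior (positive patterns separated by `:`, an optional `-` followed by negative patterns, with `*` and `?` wildcards); it is not the actual implementation:

```python
from fnmatch import fnmatchcase

def gtest_filter_matches(pattern: str, test_name: str) -> bool:
    """Return True if `test_name` would run under --gtest_filter=`pattern`.

    A test runs if it matches at least one positive pattern and no
    negative pattern. An empty positive part means "match everything".
    """
    positive, _, negative = pattern.partition('-')
    pos = positive.split(':') if positive else ['*']
    neg = negative.split(':') if negative else []
    return (any(fnmatchcase(test_name, p) for p in pos)
            and not any(fnmatchcase(test_name, n) for n in neg))

# Hypothetical test names:
print(gtest_filter_matches('Foo*', 'FooTest.Bar'))                # True
print(gtest_filter_matches('Foo*-FooTest.Slow', 'FooTest.Slow'))  # False
```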
TODO: add the link to the instruction about how to run tests on Swarming.
Go to LUCI Analysis to find reports about flaky tests in your projects.
If you cannot fix a flaky test in a short timeframe, disable it first to reduce development pain for others, and then fix it later. "How do I disable a flaky test" has instructions on how to disable a flaky test.
Tests are not configured to upload metrics, such as UMA, UKM or crash reports.