Expected snapshots are stored in _tests/{spec-name}-snapshots/{project-name}_ folders, where _project-name_ is the browser name.
In most cases, they capture and compare not the whole visible area of the screen but only a single element or section (e.g. a created shape, or the canvas with a created board).
This helps to avoid test failures caused by UI changes such as adding new sections to the Design panel, new buttons to the toolbars, and so on.
However, about 10% of the tests capture and compare the whole visible area of the screen, since in those scenarios it makes sense to check not only the layers/canvas but also the panels, toolbars, etc.
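For illustration, the two snapshot styles look roughly like this in a Playwright spec (the locators and snapshot file names below are hypothetical examples, not taken from the actual test suite):

```typescript
import { test, expect } from '@playwright/test';

test('create rectangle', async ({ page }) => {
  await page.goto('/');
  // Element-level snapshot: only the created shape is captured, so
  // unrelated UI changes (new panels, new buttons) do not break the test.
  await expect(page.locator('.canvas-shape')).toHaveScreenshot('rectangle.png');
});

test('board page layout', async ({ page }) => {
  await page.goto('/');
  // Full-page snapshot: panels, toolbars, and canvas are all compared.
  await expect(page).toHaveScreenshot('board-page.png', { fullPage: true });
});
```

On the first run Playwright records the snapshot into the _-snapshots/_ folder; subsequent runs compare against it and fail on pixel differences.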
To exclude performance tests from the periodic regression run, use the following scripts (`-gv` is Playwright's shorthand for `--grep-invert`, which skips every test whose title matches the given pattern):
- for Chrome: `"test": "npx playwright test --project=chrome -gv 'PERF'"`
- for Firefox: `"firefox": "npx playwright test --project=firefox -gv 'PERF'"`
- for WebKit: `"webkit": "npx playwright test --project=webkit -gv 'PERF'"`
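This filter implies a naming convention: any test whose title contains 'PERF' is treated as a performance test and skipped by the scripts above. A hypothetical example of such a test:

```typescript
import { test } from '@playwright/test';

// 'PERF' in the title is what the -gv 'PERF' filter matches on,
// so this test is excluded from the periodic regression runs.
test('PERF Board with many shapes renders quickly', async ({ page }) => {
  // ...performance measurement steps...
});
```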
**5. Running tests via GitHub Actions.**
On the _Settings > Environments_ page, two environments were created: _PRE_ and _PRO_.
For each environment, the appropriate secrets were added:
- _LOGIN_EMAIL_ (the email of the Penpot account used for the tests)
- _LOGIN_PWD_ (the password of the Penpot account used for the tests)
- _BASE_URL_ (the Penpot URL)
Two _.yml_ files were added to the _.github/workflows_ directory, one per environment:
- tests for the _PRE_ env run on a schedule, every Friday at 8:00 am UTC (they can also be triggered manually)
- tests for the _PRO_ env run on request only and are triggered manually
**Note**:
- The schedule is set up in UTC.
- Scheduled runs may start with a delay of roughly 5-15 minutes.
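A workflow file for the _PRE_ environment might look roughly like the following (a minimal sketch assuming the job runs the npm script shown earlier; the actual files in _.github/workflows_ may differ):

```yml
name: Penpot Regression Tests on PRE env
on:
  schedule:
    - cron: '0 8 * * 5'  # every Friday at 8:00 am UTC
  workflow_dispatch:     # allows manual runs from the Actions tab
jobs:
  test:
    runs-on: ubuntu-latest
    environment: PRE     # makes the PRE secrets available to this job
    env:
      LOGIN_EMAIL: ${{ secrets.LOGIN_EMAIL }}
      LOGIN_PWD: ${{ secrets.LOGIN_PWD }}
      BASE_URL: ${{ secrets.BASE_URL }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npx playwright install --with-deps
      - run: npm run test
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: playwright-report
          path: playwright-report/
```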
There are two workflows on the _Actions_ tab:
- Penpot Regression Tests on PRO env
- Penpot Regression Tests on PRE env
To run a workflow on request, open it from the left sidebar and click _[Run workflow]_ > _[Run workflow]_.
The run should start within a few seconds.
**Tests run results:**
When a run finishes, a status marker appears next to the workflow:
- `green icon` - the workflow passed
- `red icon` - the workflow failed
It is possible to open any workflow run (both passed and failed) and look through the _Summary_ info:
- Status
- Total duration
- Artifacts
The _Artifacts_ section will contain a _playwright-report.zip_ file. You can download it, extract it, and open the _index.html_ file to view the default Playwright report.
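For example, assuming the archive was downloaded to the current directory, the report can be opened like this (`show-report` is Playwright's built-in report viewer):

```shell
unzip playwright-report.zip -d playwright-report
# Open the static report directly in a browser, or serve it locally:
npx playwright show-report playwright-report
```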