# penpotqa

QA tests for Penpot, based on the Playwright framework.

**1. Initial requirements and configuration.**

Prerequisites for a local run:

- Windows OS
- Screen resolution 1920x1080
- Node.js installed
- A "clean" Penpot account (without added files, projects, etc., but with a completed onboarding flow)
- A _.env_ file added to the root of the project with 3 env variables:
  - `LOGIN_EMAIL` (the email for your Penpot account)
  - `LOGIN_PWD` (the password for your Penpot account)
  - `BASE_URL` (the Penpot URL - e.g. http://localhost:9001/ if deployed locally)
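
For example, a minimal _.env_ file could look like this (the values below are placeholders, not real credentials):

```
LOGIN_EMAIL=qa-user@example.com
LOGIN_PWD=your-penpot-password
BASE_URL=http://localhost:9001/
```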

**2. Test run - main notes.**

Upon cloning the repo and trying to run the tests, you may be prompted to install the browsers:

`npx playwright install`

By default, `npm test` runs all tests in the Chrome browser (via the script `"test": "npx playwright test --project=chrome"` in _package.json_).

To run specific tests, change the test script in _package.json_ in one of the following ways (or add a separate script):

- Run a single test (by title) - e.g. `"npx playwright test -g \"CO-154 Transform ellipse to path\" --project=chrome"`
- Run a single test spec (file) - e.g. `"npx playwright test tests/login.spec.js --project=chrome"`
- Run a specific test package (folder) - e.g. `"npx playwright test tests/dashboard --project=chrome"`

To run the tests in Firefox and WebKit browsers, use the `"firefox"` and `"webkit"` scripts respectively:

`"firefox": "npx playwright test --project=firefox"`
`"webkit": "npx playwright test --project=webkit"`

**3. Test run - additional settings.**

Some settings from _playwright.config.js_ may be useful:

- By default, test retries are enabled (a failed test is retried up to 2 times). To disable retries, set the `retries` property to 0
- `timeout` and `expect.timeout` - the maximum execution time of a single test and the maximum time for an expect() condition to be met, respectively
- `use.headless` - change to _false_ to run in headed browser mode
- `use.channel: "chrome"` - comment out to run tests in Chromium instead of Chrome (for the "chrome" project)
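
In context, these options sit in _playwright.config.js_ roughly as follows (a sketch; the timeout values here are illustrative, not the repo's actual ones):

```js
// playwright.config.js (sketch)
const { defineConfig } = require('@playwright/test');

module.exports = defineConfig({
  retries: 2,                 // set to 0 to disable automatic retries
  timeout: 60000,             // max execution time of a single test, in ms
  expect: { timeout: 10000 }, // max wait for an expect() condition, in ms
  use: {
    headless: true,           // change to false for headed browser mode
    channel: 'chrome',        // comment out to run in Chromium instead of Chrome
  },
});
```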

**4. Parallel test execution.**

- All tests should be independent so that they can run in parallel mode
- To run tests in parallel, update the `workers` key in the _playwright.config.js_ file (see the sketch after this list)
- `workers: process.env.CI ? 2 : 2` - by default, 2 workers are used both for local runs and for runs on CI/CD
- To disable parallelism, set `workers` to 1
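
A minimal sketch of that setting in _playwright.config.js_:

```js
// playwright.config.js (sketch): the worker count controls parallelism
const { defineConfig } = require('@playwright/test');

module.exports = defineConfig({
  // 2 workers both on CI and locally (the default quoted above);
  // set this to 1 to disable parallel execution.
  workers: process.env.CI ? 2 : 2,
});
```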

**5. Number of tests and execution time.**

- There are currently 327 tests in the repository
- With parallel execution enabled and the default number of workers (2), the average run time per browser is as follows:
  - Chrome: 43 mins
  - Firefox: 45 mins
  - WebKit: 55 mins

**6. Snapshot comparison.**

Expected snapshots are stored in the _tests/{spec-name}-snapshots/{project-name}_ folders (where project-name is the browser name).

In most cases, a test captures and compares not the whole visible area of the screen but only a single element/section (e.g. a created shape, or the canvas with a created board). This helps avoid test failures caused by UI changes such as new sections added to the Design panel, new buttons on the toolbars, and so on.

Such tests use the pattern:

`await expect(<pageName.elementName>).toHaveScreenshot("snapshotName.png");`

However, about 10% of the tests capture and compare the entire visible area of the screen, since in those scenarios it makes sense to check not only the layers/canvas but also the panels, toolbar, etc.

These tests use the pattern:

`await expect(page).toHaveScreenshot("snapshotName.png", { mask: [pageName.elementName], });`

Masking is used to ignore elements with unpredictable text content or values (e.g. username, timestamp, etc.).
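
Put together, such a test might look like the following sketch (raw locators stand in for the repo's page objects; the selectors `.viewport` and `.user-name` are hypothetical examples, not the actual ones):

```js
// Sketch of both snapshot patterns; selectors are hypothetical.
const { test, expect } = require('@playwright/test');

test('snapshot comparison example', async ({ page }) => {
  await page.goto('/');

  // Element-level comparison: only the canvas area is captured.
  await expect(page.locator('.viewport')).toHaveScreenshot('canvas.png');

  // Full-page comparison, masking elements with unpredictable content.
  await expect(page).toHaveScreenshot('dashboard.png', {
    mask: [page.locator('.user-name')],
  });
});
```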

Although this minimizes the impact of future UI changes on snapshot comparison, such cases cannot be avoided entirely. However, it is rather simple to update snapshots:

- Upon test failure, compare the actual and expected snapshots and verify that the difference occurred due to intended changes in the UI.
- Delete the expected snapshot from its folder.
- Run the test one more time in headless mode; it will capture the actual snapshot and save it as the new expected snapshot in the appropriate folder.
- Commit the new expected snapshot and push.

Note: there is a known issue that Chrome renders differently in headless and headed modes. That is why `expect.toHaveScreenshot.maxDiffPixelRatio: 0.01` is set in _playwright.config.js_ for the "chrome" project, which means that up to 1% of the pixels may differ between the actual and expected screenshots.
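
In the config, that tolerance is a per-project override, roughly like this sketch (project fields other than those mentioned above are omitted):

```js
// playwright.config.js (sketch): per-project screenshot tolerance for "chrome"
{
  name: 'chrome',
  use: { channel: 'chrome' },
  expect: {
    toHaveScreenshot: { maxDiffPixelRatio: 0.01 }, // up to 1% of pixels may differ
  },
}
```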

**7. Performance testing.**

To exclude performance tests from the periodic regression run, use the following commands (`-gv` is the short form of `--grep-invert`, which skips tests whose titles match 'PERF'):

- for Chrome: `npx playwright test --project=chrome -gv 'PERF'`
- for Firefox: `npx playwright test --project=firefox -gv 'PERF'`
- for WebKit: `npx playwright test --project=webkit -gv 'PERF'`

Note: the above commands should be executed via the command line. Do not run them directly from _package.json_, because performance tests are not ignored that way.

**8. Running tests via GitHub Actions.**

On the _Settings > Environments_ page, 2 environments were created: _PRE_ and _PRO_. For each environment, the appropriate secrets were added:

- _LOGIN_EMAIL_ (email for the Penpot account used by the tests)
- _LOGIN_PWD_ (password for the Penpot account used by the tests)
- _BASE_URL_ (Penpot URL)

2 _.yml_ files were added to the _.github/workflows_ directory with settings for the environments (see the sketch after this list):

- tests for the _PRE_ env run on a schedule: every Thursday at 6:00 am UTC (they can also be triggered manually)
- tests for the _PRO_ env run only on request and are triggered manually
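
As a sketch, the trigger section of the _PRE_ workflow file might look like this (the file name and any job details are hypothetical; only the cron expression follows from the schedule above):

```yaml
# .github/workflows/pre.yml (hypothetical file name) - trigger section only
on:
  schedule:
    - cron: '0 6 * * 4' # every Thursday at 6:00 UTC
  workflow_dispatch: # allows triggering the run manually from the Actions tab
```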

**Note**:

- UTC time is used for the schedule setup.
- A scheduled run may start with a delay of roughly 5-15 minutes.

There are 2 workflows on the _Actions_ tab:

- Penpot Regression Tests on PRO env
- Penpot Regression Tests on PRE env

To run a workflow on request, open it from the left sidebar and click _[Run workflow]_ > _[Run workflow]_. The run should start within a few seconds.

**Note**:
Before running tests on the PRO env, you need to manually log in with the test account on the PRO server and close the 'Release Notes' popup.

**Test run results:**

When the run finishes, the appropriate marker appears next to the workflow:

- `green icon` - the workflow passed
- `red icon` - the workflow failed

It is possible to open workflows (both passed and failed) and look through the _Summary_ info:

- Status
- Total duration
- Artifacts

The _Artifacts_ section contains a _playwright-report.zip_ file. You can download it, extract it, and open the _index.html_ file to view the default Playwright report.