
Little rules, big impact: 10 principles I follow to write better tests
TL;DR - Writing tests that last
- One test, one purpose - Keep each test focused on a single goal.
- DRY - Don't repeat code.
- Reuse tests smartly - Loop through datasets when scenarios are the same.
- Separate test data - Keep logic and data apart for clarity.
- Make assertions informative - Write failure messages that explain why.
- Be intentional with waits - Avoid waitForTimeout; use smart conditions.
- Keep tests independent - Don't chain, unless it's E2E logic.
- Keep them simple and short - Each test should tell one clear story.
- Review tests like code - Be consistent, helpful, and collaborative.
- Use stable locators and keep them organized - Get them right, and your suite will stay solid through refactors.
What makes a test good?
I've been thinking a lot about what makes a good test. Is it the tool? The framework? The language? Not really. It's how you write it.
Over time, I've realized that writing tests isn't that different from writing good code. You need standards. Rules. A bit of discipline.
Following a few simple principles helps keep your test suite clean, readable, and stable, even months (or years) later. It also keeps your team aligned and your future self grateful when you revisit old code.
So, here are the core rules I follow before every pull request, the ones that keep my tests consistent, meaningful, and a little less flaky.
1. One test, one purpose
Every test should have one clear reason to exist. It should check one thing, tell one story, and fail for one reason. That doesn't mean you need a separate test for every expect().
It just means that everything inside a test, from setup to assertions, should serve the same goal.
Example (what not to do):
test('user can login and view dashboard', async ({ page }) => {
  // Login
  await page.goto('/login');
  await page.fill('#email', 'user@example.com');
  await page.fill('#password', 'password123');
  await page.click('button[type="submit"]');
  // Check dashboard
  await expect(page.locator('.dashboard')).toBeVisible();
  await expect(page.locator('.welcome-message')).toContainText('Welcome');
});
Why?
- If this test fails, you won't immediately know what broke: login or dashboard?
- You've mixed two different flows in one test.
Better approach:
test('user can login successfully', async ({ page }) => {
  await page.goto('/login');
  await page.fill('#email', 'user@example.com');
  await page.fill('#password', 'password123');
  await page.click('button[type="submit"]');
  await expect(page.locator('.user-menu')).toBeVisible();
});
test('dashboard displays welcome message', async ({ page }) => {
  // Assume user is already logged in via fixture
  await page.goto('/dashboard');
  await expect(page.locator('.welcome-message')).toContainText('Welcome');
});
Why it matters:
- Instantly see what failed without reading logs.
- Re-run just the broken part instead of an entire flow.
- Keep reports clean and meaningful.
2. DRY (Don't Repeat Yourself)
We've all done it: copied a few lines from one test to another "just for now." Three sprints later, you're fixing the same broken selector in 12 different places. That's how flaky code spreads.
Instead, follow the DRY principle - Don't Repeat Yourself. If you use the same sequence of steps in two or more tests, extract them into a function.
Example (the messy way):
test('create product A', async ({ page }) => {
  await page.goto('/products');
  await page.click('button:has-text("New Product")');
  await page.fill('#name', 'Product A');
  await page.fill('#price', '99.99');
  await page.click('button:has-text("Save")');
  await expect(page.locator('.success-message')).toBeVisible();
});
test('create product B', async ({ page }) => {
  await page.goto('/products');
  await page.click('button:has-text("New Product")');
  await page.fill('#name', 'Product B');
  await page.fill('#price', '149.99');
  await page.click('button:has-text("Save")');
  await expect(page.locator('.success-message')).toBeVisible();
});
The DRY way:
async function createProduct(page: Page, name: string, price: string) {
  await page.goto('/products');
  await page.click('button:has-text("New Product")');
  await page.fill('#name', name);
  await page.fill('#price', price);
  await page.click('button:has-text("Save")');
}
test('create product A', async ({ page }) => {
  await createProduct(page, 'Product A', '99.99');
  await expect(page.locator('.success-message')).toBeVisible();
});
test('create product B', async ({ page }) => {
  await createProduct(page, 'Product B', '149.99');
  await expect(page.locator('.success-message')).toBeVisible();
});
Why it matters:
- One fix updates every test.
- Your code looks cleaner and tells a clearer story.
- New team members understand your flow faster.
Just remember: don't over-DRY. Use it only when it makes tests easier to read, not harder.
3. Reuse tests when it makes sense
Sometimes your flow is the same, but your data changes. Maybe you need to log in as different users, or verify the same form with several input sets.
Instead of writing five almost-identical tests, reuse the same logic and loop through your data.
That's data-driven testing - simple, elegant, and powerful.
Without reusing logic (the repetitive way):
test('admin can access admin panel', async ({ page }) => {
  await loginAs(page, 'admin@example.com', 'admin123');
  await expect(page.locator('.admin-panel')).toBeVisible();
});
test('manager can access admin panel', async ({ page }) => {
  await loginAs(page, 'manager@example.com', 'manager123');
  await expect(page.locator('.admin-panel')).toBeVisible();
});
test('supervisor can access admin panel', async ({ page }) => {
  await loginAs(page, 'supervisor@example.com', 'supervisor123');
  await expect(page.locator('.admin-panel')).toBeVisible();
});
Reusable, data-driven version:
const adminUsers = [
  { email: 'admin@example.com', password: 'admin123', role: 'admin' },
  { email: 'manager@example.com', password: 'manager123', role: 'manager' },
  { email: 'supervisor@example.com', password: 'supervisor123', role: 'supervisor' },
];
for (const user of adminUsers) {
  test(`${user.role} can access admin panel`, async ({ page }) => {
    await loginAs(page, user.email, user.password);
    await expect(page.locator('.admin-panel')).toBeVisible();
  });
}
Why it matters:
- Keeps your suite lean - one logic, many data points.
- Reduces maintenance - fix one place, not five.
- Improves coverage without adding clutter.
Just make sure the test name includes the dataset (${user.role}); it'll make your reports easy to read and debug.
4. Keep test data separate
Hardcoding values inside your test might seem harmless, until you need to change them in 20 places.
Test data is the first thing to turn messy if it's not handled properly. Your goal is simple: separate logic from data.
What to avoid: Mixing logic and datasets directly in your test file.
test('create product', async ({ page }) => {
  await page.goto('/products');
  await page.fill('#name', 'Premium Widget');
  await page.fill('#price', '99.99');
  await page.fill('#category', 'Electronics');
  await page.fill('#description', 'A high-quality widget for professionals');
  // ... more hardcoded values
});
or this variant:
const productName = 'Premium Widget';
const productPrice = '99.99';
const productCategory = 'Electronics';
test('create product', async ({ page }) => {
  await page.goto('/products');
  await page.fill('#name', productName);
  // ...
});
The cleaner way: Move data into a dedicated folder (e.g. /test-data/) and import it.
/tests/ui/products/test-data/products.data.ts
export const PRODUCT_DATA = {
  premium: {
    name: 'Premium Widget',
    price: '99.99',
    category: 'Electronics',
    description: 'A high-quality widget for professionals',
  },
  basic: {
    name: 'Basic Widget',
    price: '29.99',
    category: 'Electronics',
    description: 'An affordable widget for everyday use',
  },
};
/tests/ui/products/create-product.spec.ts
import { PRODUCT_DATA } from './test-data/products.data';
test('create premium product', async ({ page }) => {
  await page.goto('/products');
  await page.fill('#name', PRODUCT_DATA.premium.name);
  await page.fill('#price', PRODUCT_DATA.premium.price);
  await page.fill('#category', PRODUCT_DATA.premium.category);
  await page.fill('#description', PRODUCT_DATA.premium.description);
  // ...
});
Why it matters:
- Makes your test logic easy to read and maintain.
- Keeps your test files short and focused.
- Allows you to reuse or switch datasets across environments (e.g., staging vs. prod).
Remember! Keeping datasets inside the same test file works only for quick, self-contained examples. Otherwise, externalizing them keeps tests cleaner and more scalable.
Treat test data as configuration, not code.
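To illustrate that configuration mindset, here's a minimal sketch of switching datasets per environment. The TEST_ENV variable and the staging/prod file names are assumptions for illustration, not part of the setup above:
// test-data/products.data.ts - hypothetical environment-aware data selection
import { PRODUCT_DATA as STAGING_DATA } from './products.staging.data';
import { PRODUCT_DATA as PROD_DATA } from './products.prod.data';
// TEST_ENV is an assumed variable; wire it to whatever your CI exposes
const env = process.env.TEST_ENV ?? 'staging';
// Specs keep importing PRODUCT_DATA and never care which environment it's for
export const PRODUCT_DATA = env === 'prod' ? PROD_DATA : STAGING_DATA;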
5. Make assertions informative
A test that fails should help you, not confuse you.
If your report says:
expect(received).toBeTruthy()
…you've already lost a few minutes figuring out what that even means.
A good assertion is like a well-written error message: it tells you what went wrong and where. That's why I always add context and custom messages to my expectations.
Example (the "mystery fail" version):
await expect(page.locator('.alert')).toBeVisible();
If this fails, what alert was expected? Success? Error? Timeout?
Better version:
await expect(
  page.locator('.alert'),
  'Success message is not visible after form submission'
).toBeVisible();
Why it matters:
- You save time when debugging - no guesswork needed.
- Test reports become self-explanatory for anyone reading them.
- Failures are less stressful because you instantly know what failed and why.
Pro tip: When writing custom messages, think: "If this fails, what will future-me need to know to fix it fast?"
6. Be intentional with waits
If there's one silent killer of stable tests, it's bad waiting. We've all been there: the test fails randomly, and someone adds:
await page.waitForTimeout(3000);
…and it "fixes" it. For a while. Until it doesn't.
Hardcoded waits are like duct tape: they hide timing issues instead of solving them. Playwright gives you better tools, so use them.
The duct tape approach:
test('submit form', async ({ page }) => {
  await page.goto('/form');
  await page.fill('#email', 'test@example.com');
  await page.click('button[type="submit"]');
  await page.waitForTimeout(3000); // Why 3 seconds? Why not 2? Or 5?
  await expect(page.locator('.success')).toBeVisible();
});
Better (explicit and smart):
test('submit form', async ({ page }) => {
  await page.goto('/form');
  await page.fill('#email', 'test@example.com');
  // If you also need to wait for a network request, start listening
  // before the click so the response can't slip past you
  const responsePromise = page.waitForResponse(response =>
    response.url().includes('/api/submit') && response.status() === 200
  );
  await page.click('button[type="submit"]');
  await responsePromise;
  // Wait for the specific condition, not an arbitrary time
  await expect(page.locator('.success')).toBeVisible();
});
Playwright automatically waits for elements to appear, become visible, enabled, and even stable. You almost never need manual timeouts - the rare exception is a quick pause while writing or debugging a test.
If you really do need to wait for something specific, be intentional.
Why it matters:
- Reduces flaky, random failures.
- Keeps test execution fast.
- Builds trust in your automation results.
Pro tip: If you find yourself adding waitForTimeout, ask why your element wasn't ready, maybe it's an async call, animation, or missing loading state in the app itself.
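For instance, if the root cause is a spinner that covers the page during an async call, wait for it to go away instead of guessing a number. The .loading-spinner selector here is a hypothetical example:
// Passes once the (hypothetical) spinner is hidden or removed from the DOM
await expect(page.locator('.loading-spinner')).toBeHidden();
// Only then assert on the result
await expect(page.locator('.success')).toBeVisible();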
7. Keep tests independent
If one test failing causes three others to fail after it, you don't have a test suite, you have a domino line.
Each test should be able to run on its own, in any order, and still pass. No assumptions. No dependencies. No "this test only works after login.spec.ts runs."
Chained tests:
// login.spec.ts
test('user can login', async ({ page }) => {
  await page.goto('/login');
  await page.fill('#email', 'user@example.com');
  await page.fill('#password', 'password123');
  await page.click('button[type="submit"]');
  // Sets a cookie or session that other tests depend on
});
// profile.spec.ts
test('user can view profile', async ({ page }) => {
  // This assumes login.spec.ts ran first!
  await page.goto('/profile');
  await expect(page.locator('.profile-info')).toBeVisible();
});
This works only if the first test runs first, but fails immediately if you run profile.spec.ts alone or in parallel.
Good (self-contained):
// profile.spec.ts
test('user can view profile', async ({ page }) => {
// Login first, independently
await page.goto('/login');
await page.fill('#email', 'user@example.com');
await page.fill('#password', 'password123');
await page.click('button[type="submit"]');
// Now test the profile
await page.goto('/profile');
await expect(page.locator('.profile-info')).toBeVisible();
});Now this test can run independently, in parallel, or even in isolation when debugging, and it still passes.
Why it matters:
- Makes your suite parallel-friendly and CI-stable.
- Simplifies debugging — no "side effects" from previous tests.
- Improves test speed (no need to wait for other suites).
Pro tip: If tests share a common precondition (like "user is logged in"), use fixtures or beforeEach hooks, never rely on previous test runs to set state.
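As a sketch, a "logged-in user" fixture could look like this. The loggedInPage name and the credentials are illustrative, not a prescribed API:
// fixtures.ts
import { test as base, expect, Page } from '@playwright/test';
export { expect };
export const test = base.extend<{ loggedInPage: Page }>({
  loggedInPage: async ({ page }, use) => {
    // Log in before the test body runs - every test gets a fresh session
    await page.goto('/login');
    await page.fill('#email', 'user@example.com');
    await page.fill('#password', 'password123');
    await page.click('button[type="submit"]');
    await expect(page.locator('.user-menu')).toBeVisible();
    await use(page);
  },
});
// profile.spec.ts
import { test, expect } from './fixtures';
test('user can view profile', async ({ loggedInPage }) => {
  await loggedInPage.goto('/profile');
  await expect(loggedInPage.locator('.profile-info')).toBeVisible();
});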
But remember, chaining tests does make sense when you're building E2E flows.
Just be intentional: don't chain for convenience, chain for meaning.
8. Keep tests simple and short
If your test looks like a mini novel, with nested conditions, endless steps, and scrolling for days, it's probably doing too much.
A good test is like a good sentence: clear, focused, and easy to read. Anyone should be able to open it and understand what it's verifying within seconds.
Too complex, tries to test everything:
test('user can manage products', async ({ page }) => {
  // Login
  await page.goto('/login');
  await page.fill('#email', 'admin@example.com');
  await page.fill('#password', 'admin123');
  await page.click('button[type="submit"]');
  // Create product
  await page.goto('/products');
  await page.click('button:has-text("New Product")');
  await page.fill('#name', 'Test Product');
  await page.fill('#price', '99.99');
  await page.click('button:has-text("Save")');
  await expect(page.locator('.success-message')).toBeVisible();
  // Edit product
  await page.click('text=Test Product');
  await page.fill('#name', 'Updated Product');
  await page.click('button:has-text("Update")');
  await expect(page.locator('.success-message')).toBeVisible();
  // Delete product
  await page.click('button:has-text("Delete")');
  await page.click('button:has-text("Confirm")');
  await expect(page.locator('.success-message')).toBeVisible();
});
This kind of test might "work," but it's complicated to debug when something breaks. If one step fails, you don't know which part is the root cause - login, creation, editing, or deletion.
Better version: Split that monster into smaller, meaningful tests:
test('user can create product', async ({ page }) => {
  await loginAsAdmin(page);
  await page.goto('/products');
  await page.click('button:has-text("New Product")');
  await page.fill('#name', 'Test Product');
  await page.fill('#price', '99.99');
  await page.click('button:has-text("Save")');
  await expect(page.locator('.success-message')).toBeVisible();
});
test('user can edit product', async ({ page }) => {
  await loginAsAdmin(page);
  const productId = await createTestProduct(page);
  await page.goto(`/products/${productId}`);
  await page.fill('#name', 'Updated Product');
  await page.click('button:has-text("Update")');
  await expect(page.locator('.success-message')).toBeVisible();
});
test('user can delete product', async ({ page }) => {
  await loginAsAdmin(page);
  const productId = await createTestProduct(page);
  await page.goto(`/products/${productId}`);
  await page.click('button:has-text("Delete")');
  await page.click('button:has-text("Confirm")');
  await expect(page.locator('.success-message')).toBeVisible();
});
Each test now tells its own story: short, focused, and easy to maintain.
Why it matters:
- Small tests are faster to read, debug, and review.
- Failures pinpoint exactly where the issue is.
- You can share steps between tests and still keep things clean.
Pro tip: If your test takes more than a few seconds to explain, split it. And if it takes longer than a minute to read, rewrite it.
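The createTestProduct helper above is left undefined; one possible sketch, assuming a hypothetical POST /api/products endpoint that returns the new product's id (and a baseURL set in your Playwright config), could be:
// helpers/products.ts - hypothetical API-seeding helper
import { Page } from '@playwright/test';
export async function createTestProduct(page: Page): Promise<string> {
  // Seed through the API instead of the UI - faster and less flaky
  const response = await page.request.post('/api/products', {
    data: { name: 'Test Product', price: '99.99' },
  });
  const { id } = await response.json();
  return id;
}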
9. Review tests like you review code
Automation tests are code, and should be treated that way.
A lot of teams skip proper reviews for test scripts because "they're not part of the product." But bad test code can block releases and waste time, just like bad production code.
So review your tests with the same care you'd review an app feature.
What to look for in a test review:
- Is the test readable and easy to follow?
- Are there duplicate steps that could be turned into helpers?
- Are the assertions meaningful and clear?
- Is the naming consistent and descriptive?
- Does the test follow the same structure and conventions as others?
Example checklist to use:
- ✅ One test = one purpose.
- ✅ No hardcoded data inside tests.
- ✅ Clear assertion messages.
- ✅ Descriptive test titles.
- ✅ Helpers or fixtures used where it makes sense.
- ✅ Test runs independently and reliably.
Why it matters:
- Keeps your framework clean and scalable.
- Makes onboarding easier for new QAs.
- Prevents small issues (like hidden waits or poor naming) from growing into maintenance nightmares.
Pro tip: If you're in a team, agree on test review rules just like coding standards, things like naming, data location, fixtures, and other relevant aspects. Consistency isn't just about passing tests, it's about trust in results.
And one more thing:
Code reviews aren't about proving someone wrong, they're about helping each other write better code. The person reviewing your test isn't judging you; they're protecting your time, your releases, and your future self.
10. Use stable locators and keep them organized
Flaky tests often start with one simple mistake: bad locators.
Locators are the foundation of every UI test. Get them right, and your suite will stay solid through refactors. Get them wrong, and you'll spend your nights chasing ghost failures.
Locator hierarchy I always follow:
1. Prefer Playwright's predefined locators - Use semantic, built-in methods like getByRole, getByText, or getByPlaceholder. They're resilient, readable, and mimic how users actually interact with the page.
2. Add data-testid or id attributes wherever possible - When you can influence the product code, do it. data-testid (or id) attributes give you full control and stability. These should be your go-to selectors in any serious automation setup. If they're missing, ask your dev team to add them - it's a small effort with a huge payoff.
3. Use CSS or XPath only as a temporary fallback - Sometimes you'll need to move fast, but make it clear these selectors are temporary (see the combined example below).
Pro tip: Keep a "todo" comment or ticket reference so these selectors get replaced once proper hooks exist. Your future self will thank you.
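Put together, the hierarchy could look like this in practice (the specific fields and selectors are invented for illustration):
// 1. Preferred: semantic built-in locators
await page.getByRole('button', { name: 'Save' }).click();
await page.getByPlaceholder('Product name').fill('Test Product');
// 2. Go-to for serious setups: stable test hooks
await page.getByTestId('product-price').fill('99.99');
// 3. Temporary fallback: raw CSS, flagged for replacement
// TODO: swap for data-testid once the dev team adds one
await page.locator('.product-modal button.submit').click();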
Organize locators outside the test files
Don't scatter them across specs. Keep them in dedicated page objects or locator maps for consistency and easier updates.
// pages/products-page.ts
export const PRODUCTS_PAGE = {
  selectors: {
    newProductButton: 'button:has-text("New Product")',
    productNameInput: '[data-testid="product-name"]',
    productPriceInput: '[data-testid="product-price"]',
    saveButton: '[data-testid="save-product"]',
    successMessage: '[data-testid="success-message"]',
  },
  async createProduct(page: Page, name: string, price: string) {
    await page.click(this.selectors.newProductButton);
    await page.fill(this.selectors.productNameInput, name);
    await page.fill(this.selectors.productPriceInput, price);
    await page.click(this.selectors.saveButton);
  },
};
Then in your test:
import { PRODUCTS_PAGE } from '../pages/products-page';
test('create product', async ({ page }) => {
  await page.goto('/products');
  await PRODUCTS_PAGE.createProduct(page, 'Test Product', '99.99');
  await expect(page.locator(PRODUCTS_PAGE.selectors.successMessage)).toBeVisible();
});
Why it matters:
- Keeps tests stable across UI refactors
- Encourages collaboration between QA and devs
- Reduces locator duplication
- Makes reviews and debugging faster
Pro tip: If your team is still adding test IDs gradually, document it in your QA–Dev agreement or your test strategy.
Final assertion
Writing good tests isn't just about syntax or tools, it's about discipline, thought, and empathy for whoever reads or runs them next (even if that person is future-you).
Clean, stable, and meaningful tests don't happen by chance.
They happen because you follow certain principles: you write with intention, structure with clarity, and review with kindness.
Tests are not just there to "catch bugs." They're part of your product's quality story. They protect your team's confidence to move fast without breaking things.
What's next?
In the next chapter of Playwright Chronicles, I'll talk about something every tester faces sooner or later: What to automate, what to skip, and how to find the right balance.
Because knowing what not to test is just as important as writing the perfect test.


