Merged
2 changes: 2 additions & 0 deletions .cursor/rules/.cursorindexingignore
@@ -0,0 +1,2 @@
# Don't index SpecStory auto-save files, but allow explicit context inclusion via @ references
.specstory/**
169 changes: 169 additions & 0 deletions .cursor/rules/AI-Assistance.mdc
@@ -0,0 +1,169 @@
---
description:
globs:
alwaysApply: true
---
# AI Assistance Guidelines

This document provides guidance for AI assistants working with the Ably CLI codebase. These guidelines are agent-agnostic and designed to help any AI assistant provide high-quality contributions.

## Approaching This Codebase

### Understanding the Project

1. **Start with Key Documentation**:
- [Product Requirements](mdc:docs/Product-Requirements.md)
- [Project Structure](mdc:docs/Project-Structure.md)
- [Testing Strategy](mdc:docs/Testing.md)
- [Mandatory Workflow](mdc:.cursor/rules/Workflow.mdc)

2. **Recognize Context Boundaries**:
- The CLI is focused on Ably services - do not suggest features outside this scope
- Always check if a feature already exists before suggesting implementation

### Common Patterns

1. **Command Structure**:
```typescript
// Commands should follow the oclif structure
import {Command} from '@oclif/core'

export default class MyCommand extends Command {
static description = 'Clear description of what the command does'

static examples = [
'<%= config.bin %> commandName',
'<%= config.bin %> commandName --some-flag',
]

static flags = {
// Flag definitions
}

static args = {
// Argument definitions
}

async run() {
const {args, flags} = await this.parse(MyCommand)
// Implementation
}
}
```

2. **Authorization Pattern**:
```typescript
// Commands should support auth via the standard auth helper
import * as Ably from 'ably'
import { getAuth } from '../../helpers/auth'

// Then in the run method:
const auth = await getAuth(flags)
const client = new Ably.Rest(auth)
```

3. **Error Handling**:
```typescript
try {
// Operation that might fail
} catch (error) {
if (error.code === 40100) {
this.error('Authentication failed. Check your API key or token.')
} else {
this.error(`Failed: ${error.message}`)
}
}
```
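Under TypeScript's `useUnknownInCatchVariables` setting, the caught value is typed `unknown`, so a small type guard keeps checks like the one above type-safe. A minimal sketch; `isAblyError` is a hand-rolled illustration, not an existing helper in the codebase:

```typescript
// Hand-rolled guard for objects carrying an Ably-style numeric error code
function isAblyError(value: unknown): value is { code: number; message: string } {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as { code?: unknown }).code === 'number' &&
    typeof (value as { message?: unknown }).message === 'string'
  )
}

try {
  // Simulate an Ably-style 40100 unauthorized error
  throw Object.assign(new Error('token expired'), { code: 40100 })
} catch (error) {
  if (isAblyError(error) && error.code === 40100) {
    console.log('Authentication failed. Check your API key or token.')
  }
}
```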

### Anti-Patterns to Avoid

1. **❌ Direct HTTP Requests for Data Plane APIs**
```typescript
// WRONG: Using fetch directly for data plane operations
const response = await fetch('https://rest.ably.io/channels/...')
```

**✅ Correct Approach**
```typescript
// CORRECT: Using the Ably SDK
const client = new Ably.Rest(auth)
const channel = client.channels.get('channel-name')
```

2. **❌ Inconsistent Command Structure**
```typescript
// WRONG: Non-standard command structure
export default class {
async execute(args) {
// Implementation
}
}
```

3. **❌ Hardcoded Endpoints**
```typescript
// WRONG: Hardcoded endpoint URLs
const url = 'https://control.ably.net/v1/apps'
```

**✅ Correct Approach**
```typescript
// CORRECT: Using the config
const controlHost = flags.controlHost || config.controlHost || 'control.ably.net'
const url = `https://${controlHost}/v1/apps`
```
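The precedence shown above (flag, then stored config, then default) can be factored into a small pure helper; a sketch under the assumption that `resolveControlHost` is a hypothetical name, not an existing function in the codebase:

```typescript
// Hypothetical helper capturing flag > stored config > default precedence
function resolveControlHost(
  flagValue?: string,
  configValue?: string,
  defaultHost = 'control.ably.net',
): string {
  return flagValue ?? configValue ?? defaultHost
}

console.log(resolveControlHost('custom.ably.net')) // flag wins
console.log(resolveControlHost(undefined, 'stored.ably.net')) // config next
console.log(resolveControlHost()) // falls back to the default
```

Keeping the fallback chain in one place means every Control API command resolves the host identically.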

## Effective Testing

1. **Mocking Ably SDK Example**:
```typescript
// Example of proper Ably SDK mocking
import * as Ably from 'ably'
import * as sinon from 'sinon'

describe('my command', () => {
let mockChannel: any
let mockClient: any

beforeEach(() => {
mockChannel = {
publish: sinon.stub().resolves(),
presence: {
get: sinon.stub().resolves([]),
enter: sinon.stub().resolves(),
},
}

mockClient = {
channels: {
get: sinon.stub().returns(mockChannel),
},
close: sinon.stub().resolves(),
}

// Important: stub the constructor, not just methods
sinon.stub(Ably, 'Rest').returns(mockClient)
})

afterEach(() => {
sinon.restore() // Don't forget cleanup!
})
})
```

2. **Testing Resource Cleanup**:
Always ensure resources are properly closed/cleaned up, especially in tests:
```typescript
// Example of proper resource cleanup
afterEach(async () => {
await client.close()
sinon.restore()
})
```

## Contributing Workflow

All contributions must follow the [Mandatory Workflow](mdc:.cursor/rules/Workflow.mdc):

1. **Build**: Run `pnpm prepare`
2. **Lint**: Run `pnpm exec eslint .`
3. **Test**: Run appropriate tests
4. **Document**: Update relevant docs

See documentation in `.cursor/rules/Workflow.mdc` for details.
File renamed without changes.
File renamed without changes.
File renamed without changes.
35 changes: 35 additions & 0 deletions .cursor/rules/Workflow.mdc
@@ -0,0 +1,35 @@
---
description:
globs:
alwaysApply: true
---
# Mandatory Development Workflow

**IMPORTANT:** Before considering any task complete, you **MUST** perform and verify the following steps in order. Failure to complete these steps means the work is **not finished**.

1. **Run Build:**
* Execute `pnpm prepare`.
* **Purpose:** Ensures TypeScript compiles, `oclif.manifest.json` is updated, and `README.md` reflects command changes.
* **Verification:** Check for build errors in the output. Ensure `oclif.manifest.json` and `README.md` changes (if any) are sensible.

2. **Run Linter:**
* Execute `pnpm exec eslint .` (or `pnpm exec eslint -- [filepath]` for specific files).
* **Purpose:** Ensures code adheres to project style and best practices. Catches potential errors.
* **Verification:** Ensure the command exits with code 0 (no errors). **Do not** use workarounds like prefixing unused variables with `_`; address the root cause (e.g., remove unused code). See `development.mdc` for more details on linting philosophy.

3. **Run Tests:**
* Execute relevant tests locally (e.g., `pnpm test:unit`, `pnpm test:integration`, `pnpm test:e2e`, `pnpm test:playwright`, or specific file paths like `pnpm test test/unit/commands/some-command.test.ts`).
* **Purpose:** Verifies changes haven't introduced regressions and that new features work as expected.
* **Verification:** Ensure all executed tests pass. If tests fail, debug them (see `docs/Debugging.md`). Add or update tests (Unit, Integration, E2E) as appropriate for your changes. Refer to `docs/Testing.md` for the testing strategy.

4. **Update Documentation & Rules:**
* Review and update all relevant documentation and rule files based on your changes.
* **Checklist:**
* `README.md`: Especially the command list if commands were added/changed (often handled by `pnpm prepare`).
* `docs/*.md`: Any file impacted by the changes (e.g., `Product-Requirements.md`, `Project-Structure.md`, `Testing.md`).
* `.cursor/rules/*.mdc`: Any rule file impacted by changes to development practices, Ably usage, project structure, etc.
* `docs/TODO.md`: Update any related tasks.
* **Purpose:** Keeps project knowledge current for both humans and AI agents.
* **Verification:** Manually confirm that documentation accurately reflects the implemented changes.

**Only after successfully completing ALL four steps should you consider your task complete.**
40 changes: 40 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,40 @@
# Contributing to the Ably CLI

Thank you for your interest in contributing to the Ably CLI!

## Development Workflow

All code changes, whether features or bug fixes, **MUST** follow the mandatory workflow outlined in [.cursor/rules/Workflow.mdc](mdc:.cursor/rules/Workflow.mdc).

In summary, this involves:

1. **Build:** Run `pnpm prepare` to compile, update manifests, and update the README.
2. **Lint:** Run `pnpm exec eslint .` and fix all errors/warnings.
3. **Test:** Run relevant tests (`pnpm test:unit`, `pnpm test:integration`, `pnpm test:e2e`, `pnpm test:playwright`, or specific files) and ensure they pass. Add new tests or update existing ones as needed.
4. **Document:** Update all relevant documentation (`docs/`, `.cursor/rules/`, `README.md`) to reflect your changes.

**Pull requests will not be merged unless all these steps are completed and verified.**

## Key Documents

Before starting work, please familiarize yourself with:

* [Product Requirements](./docs/Product-Requirements.md): Understand the goals and features.
* [Project Structure](./docs/Project-Structure.md): Know where different code components live.
* [Testing Strategy](./docs/Testing.md): Understand the different types of tests and how to run them.
* [Development Rules](mdc:.cursor/rules/Development.mdc): Coding standards, linting, dependency management.
* [Ably Rules](mdc:.cursor/rules/Ably.mdc): How to interact with Ably APIs/SDKs.

## Reporting Issues

Please report bugs or suggest features using GitHub Issues.

## Submitting Pull Requests

1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Make your changes, ensuring you follow the **Mandatory Development Workflow** described above.
4. Commit your changes with clear messages.
5. Push your branch to your fork.
6. Create a Pull Request against the `main` branch of the `ably/cli` repository.
7. Ensure all CI checks pass.
5 changes: 5 additions & 0 deletions README.md
@@ -17,6 +17,7 @@
* [Commands](#commands)
* [MCP Server](#mcp-server)
* [Terminal Server](#terminal-server)
* [Contributing](#contributing)
<!-- tocstop -->

# CLI Usage
@@ -4743,3 +4744,7 @@ The MCP server also provides the following resources:
# Terminal Server

This repository includes a terminal server that provides a bash environment over WebSockets for the web based CLI. See [Server-Setup.md](./docs/Server-Setup.md) for more info on how it works and how to deploy it.

# Contributing

Please see the documentation in [`.cursor/rules/Workflow.mdc`](.cursor/rules/Workflow.mdc) for details on how to contribute to this project, including our mandatory development workflow, testing requirements, and code quality standards.
2 changes: 2 additions & 0 deletions docker/README.md
@@ -47,5 +47,7 @@ The security tests are also integrated into the CI pipeline using GitHub Actions

For more information about the security hardening measures implemented, see:

- [Container-Security.md](../docs/Container-Security.md)
- [Security-Hardening.md](../docs/Security-Hardening.md)
- [Security-Testing-Auditing.md](../docs/Security-Testing-Auditing.md)
- [User-Namespace-Remapping.md](../docs/User-Namespace-Remapping.md)
65 changes: 65 additions & 0 deletions docs/Debugging.md
@@ -0,0 +1,65 @@
# Debugging Guide

This guide provides tips for debugging common issues when developing the Ably CLI.

## General Tips

* **Check Logs:** Look for errors or relevant messages in the CLI output, test runner output, server logs (for Web CLI tests), or browser console (for Web CLI tests).
* **Isolate the Issue:** Try to reproduce the problem with the simplest possible command or test case. Comment out parts of the code or test to narrow down the source of the error.
* **Consult Documentation:** Review relevant project docs (`docs/`, `.cursor/rules/`) and Ably documentation (<https://ably.com/docs>).

## Debugging Tests

Refer to [docs/Testing.md](mdc:docs/Testing.md) for how to run specific tests.

### Mocha Tests (Unit, Integration, E2E)

* **Verbose Output:** Run Mocha with increased verbosity if needed (though the default `spec` reporter is usually detailed). Check `scripts/run-tests.sh` for current settings.
* **Debugger:** Use the Node.js inspector:
```bash
# Add --inspect-brk flag
node --inspect-brk --import 'data:text/javascript,...' ./node_modules/mocha/bin/mocha ... [your test path]
```
Then attach your debugger (e.g., Chrome DevTools, VS Code debugger).
* **`console.log`:** Add temporary `console.log` statements in the test or the code being tested.
* **Mocking Issues (Unit/Integration):**
* Verify mocks (`sinon`, `nock`) are correctly set up and restored (`beforeEach`/`afterEach`).
* Ensure stubs match the actual function signatures.
* Check that network requests are being intercepted as expected (e.g., using `nock.recorder`).
* **E2E Failures:**
* **Credentials:** Ensure `E2E_ABLY_API_KEY` (and any other required keys) are correctly set in your environment (`.env` file locally, secrets in CI).
* **Network:** Check for network connectivity issues to Ably services.
* **Resource Conflicts:** Ensure previous test runs cleaned up resources correctly (e.g., unique channel names per test run).
* **CI vs. Local:** If tests fail only in CI, suspect environment differences (Node version, dependencies, permissions, resources). Check CI logs carefully.
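The point about stubs matching real signatures can be seen without any mocking library: callers consume the resolved value's shape, so a stub resolving to the wrong shape fails far from the mock itself. A framework-free sketch with hand-rolled types:

```typescript
// Hand-rolled channel shape mirroring the part of the SDK under test
type PresenceMember = { clientId: string }

interface ChannelLike {
  presence: { get(): Promise<PresenceMember[]> }
}

// A stub matching the real signature: presence.get() resolves to an array
const stubChannel: ChannelLike = {
  presence: { get: async () => [{ clientId: 'alice' }] },
}

// Code under test consumes the array; a stub resolving to, say, a single
// object instead of an array would only fail here, not at setup time
async function countMembers(channel: ChannelLike): Promise<number> {
  const members = await channel.presence.get()
  return members.length
}

countMembers(stubChannel).then((n) => console.log(n)) // prints 1
```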

### Playwright Tests (Web CLI)

* **Test Output:** Playwright provides detailed error messages, including:
* The specific action that failed (e.g., `locator.waitFor`, `expect.toContainText`).
* The expected vs. received values.
* Call logs showing the sequence of actions.
* **Error Context:** Check the linked `error-context.md` file in `test-results/` for screenshots, DOM snapshots, and console logs at the point of failure.
* **Browser Console:** Add `page.on('console', ...)` listeners in your test (as shown in `.specstory` examples) to capture browser logs.
* **Debugging UI Mode:** Run Playwright with the UI for interactive debugging:
```bash
pnpm exec playwright test test/e2e/web-cli/web-cli.test.ts --ui
```
* **Common Issues:**
* **Selector Timeouts:** The element didn't appear within the timeout. Check if the server/app started correctly, if there were errors, or if the selector is correct.
* **Incorrect Text/Assertions:** The expected text doesn't match the actual text in the terminal (check for subtle differences like whitespace, case sensitivity, or ANSI codes if not using `toContainText`).
* **Connection Errors:** Check browser console logs and terminal server logs for WebSocket connection issues (e.g., wrong port, server crash, `ERR_CONNECTION_REFUSED`).
* **Build Artifacts:** Ensure the Web CLI example (`examples/web-cli`) was built successfully (`pnpm --filter ably-web-cli-example build`) before the test runs, especially in CI.
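For the ANSI-code pitfall above, normalising terminal output before asserting avoids false mismatches; a minimal hand-rolled sketch (an illustration, not the `strip-ansi` package):

```typescript
// Strip CSI escape sequences (e.g. \u001B[32m ... \u001B[0m) before comparing
function stripAnsi(input: string): string {
  return input.replace(/\u001B\[[0-9;]*[A-Za-z]/g, '')
}

const raw = '\u001B[32mConnected\u001B[0m to Ably'
console.log(stripAnsi(raw)) // prints "Connected to Ably"
```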

## Debugging the CLI Locally

* **Run with Node Inspector:**
```bash
node --inspect-brk bin/run.js [your command and flags]
```
Attach your debugger.
* **Verbose Flags:** Use the CLI's built-in `--verbose` flag if available for the command.
* **Oclif Debugging:** Set the `DEBUG` environment variable for oclif internal logs:
```bash
DEBUG=oclif* bin/run.js [command]
```
* **Check Configuration:** Use `ably config` (opens the config file) to verify stored credentials or settings.