Help Center

150+ answers about load testing with AccessLoad — from first test to enterprise deployment. Everything you need, nothing you don't.

Getting Started · API Chain Testing · Data-Driven Testing · Correlation & Variables · Protocols · Network Simulation · Metrics & SLA · AI Analysis · Reports · Private Testing · CI/CD Integration · Why AccessLoad · Migration & Switching · Common Workflows · Security · Billing & Plans · Error Debugging · Scalability · Industry Use Cases · Testing Concepts · Limitations & Honesty · Team & Collaboration · Troubleshooting

🚀 Getting Started

Your first test in under 60 seconds.

How do I run my first load test?
It takes under 60 seconds:
  • Sign up free — no credit card required
  • Paste your API URL and select the HTTP method
  • Set virtual users (VUs) — start with 10–50
  • Click Run — results stream in real-time
That's it. No downloads, no plugins, no configuration files. You'll see live throughput, latency percentiles, and error rates immediately.
Do I need to install anything?
For cloud testing: nothing at all. Everything runs in your browser, with no plugins, desktop software, or runtime dependencies. For private testing (testing APIs behind a firewall): a single command starts a lightweight agent inside your network.
What is a Virtual User (VU)?
A Virtual User simulates one real user hitting your API concurrently. If you set 1,000 VUs, that's 1,000 simultaneous connections making requests to your API. Each VU executes the full test scenario independently — including all steps in a chain test, with its own extracted variables and state. VUs are not just "threads" — they simulate real user behavior with proper connection handling.
What is ramp-up and why does it matter?
Ramp-up gradually increases load instead of hitting your server with all VUs at once. For example, with 1,000 VUs and a 60-second ramp-up, AccessLoad adds ~17 users per second. This reveals the exact concurrency at which your system starts degrading, which is far more useful than an instant spike. Most legacy tools default to instant full load, which only tells you "it broke" without revealing the breaking point.
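The ramp-up arithmetic can be sketched as an even pacing schedule. This is a minimal illustration of the idea; AccessLoad's actual scheduler may pace differently:

```python
def ramp_schedule(total_vus: int, ramp_seconds: int) -> list[int]:
    """Active VU count at the end of each second of an even ramp-up."""
    rate = total_vus / ramp_seconds  # VUs added per second
    return [min(total_vus, round(rate * (t + 1))) for t in range(ramp_seconds)]

schedule = ramp_schedule(1000, 60)
print(round(1000 / 60, 1))                  # 16.7 VUs added per second
print(schedule[0], schedule[29], schedule[-1])  # 17 500 1000
```

Watching where errors first appear along this schedule is what pinpoints the breaking concurrency.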
Can I test multiple endpoints in one test?
Yes. Add multiple endpoint paths under the same base URL, and AccessLoad distributes traffic across all of them. Each endpoint gets its own metrics breakdown — latency, error rate, and throughput — so you can pinpoint which endpoint is the bottleneck. For sequential multi-step flows, use API Chain Testing instead.
What happens when I click "Run"? What does the platform do behind the scenes?
When you click Run, AccessLoad's AL-Engine:
  • Compiles your configuration into an optimized test script
  • Distributes it to available execution agents based on capacity
  • Ramps up virtual users according to your settings
  • Streams real-time metrics (TPS, latency, errors) to your dashboard
  • Captures error snapshots for failed requests
  • Analyzes results with AI when the test completes
All of this happens automatically — no infrastructure to manage.
Do I need to know anything about performance testing to get started?
No. If you can call an API (or know your API URL), you can use AccessLoad. The platform provides sensible defaults — VU count, duration, SLA profiles. The AI analysis explains results in plain English. As you get more experienced, you can fine-tune everything. We've designed it so beginners get value immediately while experts have full control.
Can I save my test configuration and re-run it later?
Yes. Your test history shows all previous runs with full configuration. You can re-run any test with the same settings, or modify parameters and run again. This is essential for regression testing — run the same test after every deployment to track performance over time.

🔗 API Chain Testing

Multi-step user journey simulation — the way real users use your API.

What is API chain testing?
API chain testing lets you define sequential, dependent API calls that simulate real user journeys. Instead of hitting one endpoint repeatedly (which is meaningless), you model realistic flows:
  • Step 1: POST /auth/login → extract auth_token
  • Step 2: GET /api/profile (using auth_token) → extract user_id
  • Step 3: POST /api/orders (using auth_token + user_id)
Each virtual user executes the entire chain, with extracted data flowing between steps automatically.
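Conceptually, each VU walks the chain like this. The sketch below uses stubbed responses and illustrative helper names (`get_path`, `render`); it mimics the extract-then-substitute flow, not AccessLoad's internal format:

```python
import re

def get_path(data, path):
    """Walk a dot-notation path like 'data.access_token' through nested dicts."""
    for key in path.split("."):
        data = data[key]
    return data

def render(template, variables):
    """Substitute {{name}} placeholders with previously extracted values."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), template)

# Stubbed responses standing in for the real API (illustrative only)
responses = {
    "POST /auth/login": {"data": {"access_token": "abc123"}},
    "GET /api/profile": {"data": {"user": {"id": 42}}},
}

variables = {}
variables["auth_token"] = get_path(responses["POST /auth/login"], "data.access_token")
variables["user_id"] = get_path(responses["GET /api/profile"], "data.user.id")

# Step 3 reuses both extracted values in its header and body
header = render("Bearer {{auth_token}}", variables)
body = render('{"user_id": {{user_id}}}', variables)
print(header)  # Bearer abc123
print(body)    # {"user_id": 42}
```

Because `variables` is per-VU, two VUs with different login tokens never interfere with each other.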
Why is chain testing more realistic than single-endpoint testing?
Real users don't repeatedly hit one endpoint. They login, browse, search, add to cart, checkout. Each step has different server-side costs — authentication hits the auth service, search queries the database, checkout processes payment. Single-endpoint testing tells you nothing about how your system handles the combination of these operations under load. Chain testing reveals the real bottlenecks.
How does variable extraction work between chain steps?
After any step, you can extract values from the response using simple dot-notation:
  • From response body: set source to "body" and JSON path to data.access_token
  • From response header: set source to "header" and specify the header name (e.g., X-Request-Id)
Extracted variables are available in all subsequent steps using {{variable_name}} syntax in headers or request body. No regex, no complex extractors, no post-processor configuration.
Can each VU maintain its own session state in a chain test?
Yes. Each virtual user runs the chain independently with its own extracted variables. If VU #1 logs in and gets token "abc123" and VU #2 gets "xyz789", each VU carries its own token through subsequent steps. There's no shared state between virtual users — just like real users with separate browser sessions.
How many steps can I have in a chain?
There's no hard limit. You can model complex flows with as many steps as your real user journey requires — login, browse catalog, search, view product, add to cart, apply coupon, checkout, payment, confirmation. Each step supports its own HTTP method (GET, POST, PUT, PATCH, DELETE), custom headers, and request body.
Do I need to write scripts for chain tests?
No. Everything is configured through the UI — select method, enter path, define headers, set body, add extractions. AccessLoad's AL-Engine compiles your chain into an optimized test script automatically. You can also use our AI assistant to generate test scenarios from natural language descriptions.
Can I use a token from Step 1 in Step 5's Authorization header?
Yes. Any variable extracted in any step is available in all subsequent steps. Extract auth_token in Step 1, and use {{auth_token}} in Step 2, 3, 4, 5, or any later step — in headers, body, or even URL path parameters. Variables persist for the entire chain execution within each VU.
What if a chain step fails — does the whole chain stop?
By default, if a step returns an error (4xx/5xx), the chain continues but the failure is recorded. This is intentional — it shows you which subsequent steps also fail when a dependency breaks (cascade failure analysis). The error snapshot captures the exact request/response for debugging.
Can I test an e-commerce checkout flow with chain testing?
Absolutely. A typical e-commerce chain:
  • Step 1: POST /auth/login → extract session_token
  • Step 2: GET /api/products → extract product_id
  • Step 3: POST /api/cart (with product_id) → extract cart_id
  • Step 4: POST /api/checkout (with cart_id) → extract order_id
  • Step 5: GET /api/orders/{{order_id}} (verify order)
Each VU simulates a complete purchase journey. With 1,000 VUs, you're simulating 1,000 users shopping simultaneously.
Can I add delays between chain steps to simulate real user think time?
Yes. You can configure think time between steps to simulate how real users pause between actions — browsing a product page for 5 seconds before clicking "Add to Cart". This makes your test more realistic and avoids unrealistically high request rates that don't match real-world behavior.

📊 Data-Driven Testing

Feed unique data to every virtual user — test like the real world.

Can I test with thousands of unique user credentials?
Yes. AccessLoad supports data-driven testing where each virtual user can be fed unique data from your dataset. If you have 5,000 user credentials, each VU picks a different row — so you're testing with 5,000 different real accounts simultaneously, not hammering login with one set of credentials.
Why is data-driven testing important? Why not just use one test account?
Using one account for all VUs creates unrealistic test conditions:
  • Cache hit rate skyrockets — the same user's data is cached after the first request, hiding real latency
  • Session management is wrong — 1,000 sessions for the same user doesn't match reality
  • Database query patterns differ — accessing one user's data vs. 5,000 users' data exercises completely different DB paths
  • Rate limiting triggers — you'll hit per-user rate limits, not per-system limits
Data-driven testing reveals the real performance your system will deliver in production.
How is this different from legacy tools that need CSV files and complex configuration?
Legacy desktop tools require you to:
  • Create CSV files manually in a specific format
  • Configure a "Data Set" element and wire it to thread groups
  • Map column names to variables with exact syntax
  • Handle file distribution across remote agents yourself
  • Debug "file not found" errors on remote machines
AccessLoad handles parameterization natively — upload your data, reference columns in your request templates, and the platform distributes data across VUs and workers automatically. No file management, no wiring, no syntax errors.
How does data distribution work across virtual users?
Each VU is assigned a unique row from your dataset based on its VU ID. With 5,000 rows and 5,000 VUs, each user tests with completely different data. If you have more VUs than data rows, rows are cycled. If you have more rows than VUs, each VU can iterate through multiple rows across test iterations. The distribution is deterministic — the same VU always gets the same data for reproducibility.
Can I combine data-driven testing with API chains?
Yes — this is where AccessLoad really shines. Reference data columns in any chain step — use {{csv.email}} in your login body and {{csv.product_id}} in a later step's body. Each VU uses its own row of data plus the dynamically extracted variables from previous steps. This lets you simulate thousands of unique users completing unique journeys — the most realistic load test possible.
What if I have 5,000 rows of data but only 500 VUs?
Each VU will cycle through multiple rows across iterations. VU #1 uses row 1 in iteration 1, row 501 in iteration 2, row 1001 in iteration 3, and so on. This means all 5,000 rows get tested even with fewer VUs — it just takes more iterations.
What if I have 100 rows but 1,000 VUs?
Rows are cycled — VU #1 and VU #101 would use the same row. This is fine for most scenarios (like testing with a pool of 100 test accounts). If you need strict uniqueness, match your VU count to your data rows.
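The cycling in both answers above follows one deterministic formula. The exact scheme AccessLoad uses internally is an assumption here; this sketch simply reproduces the examples given:

```python
def assigned_row(vu_id: int, iteration: int, total_vus: int, total_rows: int) -> int:
    """Deterministic 1-based row for a given VU and iteration (both 1-based)."""
    return ((iteration - 1) * total_vus + (vu_id - 1)) % total_rows + 1

# 500 VUs over 5,000 rows: VU #1 walks rows 1, 501, 1001, ...
print([assigned_row(1, it, 500, 5000) for it in (1, 2, 3)])  # [1, 501, 1001]

# 1,000 VUs over 100 rows: VU #1 and VU #101 share row 1
print(assigned_row(1, 1, 1000, 100), assigned_row(101, 1, 1000, 100))  # 1 1
```

Determinism is the point: re-running the test gives every VU the same data, so results are reproducible.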
Does data distribution work across multiple workers in distributed testing?
Yes. The platform ensures unique data distribution even when VUs are split across multiple execution agents. Worker #1 with VUs 1–500 gets rows 1–500, Worker #2 with VUs 501–1000 gets rows 501–1000. No duplicate data, no manual file distribution.

🔄 Correlation & Variables

The #1 pain point with legacy tools — solved.

What is "correlation" in load testing and why is it so painful?
Correlation means extracting dynamic values from server responses and using them in subsequent requests — like extracting a session token from the login response and passing it to API calls. In legacy tools, this requires:
  • Adding a separate "post-processor" element to each request
  • Writing regex patterns or JSON path expressions
  • Wiring extracted variables to the correct requests
  • Debugging when patterns break due to response format changes
It's the single most time-consuming and error-prone part of test creation. AccessLoad eliminates this pain entirely.
How does AccessLoad make correlation simple?
Extraction is built into each chain step — not a separate element:
  • Click "Add Extract" on any step
  • Give it a variable name (e.g., auth_token)
  • Choose source: "body" or "header"
  • Enter the path: data.access_token (dot-notation, not regex)
Use {{auth_token}} in any subsequent step. That's it. No regex, no separate elements, no wiring, no debugging. It works on the first try, every time.
Can I extract values from nested JSON responses?
Yes. Use dot-notation for any depth:
  • data.user.id → extracts from {"data": {"user": {"id": 123}}}
  • result.items.0.product_id → extracts first item's product_id from an array
  • meta.pagination.next_cursor → extracts pagination cursor
No regex patterns, no JSONPath syntax differences, no escaping issues.
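A dot-notation walker that handles both nested objects and numeric array indices takes only a few lines. This is an illustration of the idea, not AccessLoad's implementation:

```python
def extract(response, path):
    """Walk 'result.items.0.product_id'-style paths; numeric segments index lists."""
    node = response
    for segment in path.split("."):
        node = node[int(segment)] if segment.isdigit() else node[segment]
    return node

resp = {
    "result": {"items": [{"product_id": "p-77"}, {"product_id": "p-78"}]},
    "meta": {"pagination": {"next_cursor": "c9"}},
}
print(extract(resp, "result.items.0.product_id"))    # p-77
print(extract(resp, "meta.pagination.next_cursor"))  # c9
```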
Can I extract values from response headers?
Yes. Set the source to "header" and specify the header name — like X-Request-Id, Set-Cookie, or Location. This is useful for extracting CSRF tokens, redirect URLs, rate limit headers, or custom server headers.
What if the response format changes — do my extractions break?
Dot-notation is more resilient than regex. If your response adds new fields, existing paths still work. If a field is renamed or restructured, you update the path in one place — not hunt through regex patterns across multiple post-processors. And because extraction is inline with the step, you can see exactly what each step expects.
Can I use extracted variables in URL paths, not just headers and body?
Yes. If you extract user_id from a login response, you can use it in the next step's path: /api/users/{{user_id}}/orders. Variables work everywhere — URL path, query parameters, headers, and request body.
How do I handle CSRF tokens?
Add a step that fetches the page/endpoint returning the CSRF token. Extract it from the response body or header (e.g., meta.csrf_token or header X-CSRF-Token). Use {{csrf_token}} in subsequent POST/PUT/DELETE requests. Each VU gets its own CSRF token, just like a real browser session.
Can I chain multiple extractions from a single response?
Yes. Each step supports multiple extractions. From a single login response, you could extract access_token, refresh_token, user_id, and session_expiry — all in one step. Use any of them in subsequent steps.

🌐 Protocols & API Types

HTTP, GraphQL, gRPC — all supported natively, no plugins.

What API protocols does AccessLoad support?
  • HTTP/REST: All methods — GET, POST, PUT, PATCH, DELETE. Custom headers and JSON bodies.
  • GraphQL: Paste your query or mutation; variables are supported. The request is automatically wrapped in the correct POST body format.
  • gRPC: Upload your .proto file, select service and method, define the payload. Full binary protocol support with real-time metrics.
All protocols get the same metrics (TPS, P50/P90/P95/P99, error rate), SLA checks, AI analysis, and reporting.
How does GraphQL testing work without complex configuration?
Just paste your GraphQL query or mutation into the query field, optionally add variables as JSON, and run. AccessLoad automatically wraps it in the correct {"query": "...", "variables": {...}} POST body and sets the right Content-Type header. Legacy tools require you to manually construct the POST body, escape JSON strings, and set Content-Type — AccessLoad does it all for you.
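The wrapping amounts to the standard GraphQL-over-HTTP POST format, sketched here (the query is a made-up example):

```python
import json

def graphql_body(query, variables=None):
    """Build headers and POST body for a GraphQL-over-HTTP request."""
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"query": query, "variables": variables or {}})
    return headers, body

headers, body = graphql_body(
    "query($id: ID!) { user(id: $id) { name } }",
    {"id": "42"},
)
print(body)
# {"query": "query($id: ID!) { user(id: $id) { name } }", "variables": {"id": "42"}}
```

This is exactly the body legacy tools make you hand-assemble and JSON-escape yourself.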
Can I test GraphQL subscriptions?
GraphQL subscriptions typically use WebSocket transport. AccessLoad currently supports GraphQL queries and mutations. For subscription testing, you can test the underlying WebSocket endpoint or the REST fallback if your server supports it.
How does gRPC load testing work?
Upload your .proto file through the UI. AccessLoad parses it and shows available services and methods. Select the one you want to test, define the request payload as JSON, and run. The platform handles binary serialization, HTTP/2 framing, and streaming — things that are nearly impossible to configure in legacy GUI-based tools.
Can I test APIs that require custom headers or authentication?
Yes. Add any custom headers (Authorization, API keys, Content-Type, custom tokens) as JSON key-value pairs. For bearer tokens: {"Authorization": "Bearer your-token"}. For API keys: {"X-API-Key": "your-key"}. In chain tests, you can dynamically extract tokens and inject them into subsequent steps using {{variable}} syntax.
Can I test SOAP/XML APIs?
Yes. SOAP is just HTTP POST with XML body. Set the Content-Type header to application/xml or text/xml, paste your SOAP envelope in the request body, add any required SOAPAction header, and run. AccessLoad treats it like any other HTTP POST request.
Can I test APIs with different content types (form data, XML, plain text)?
Yes. Set the Content-Type header to whatever your API expects — application/json, application/xml, application/x-www-form-urlencoded, text/plain, or any custom type. The request body is sent exactly as you provide it.
Can I test APIs that use OAuth 2.0 authentication?
Yes. Use a chain test:
  • Step 1: POST to your OAuth token endpoint with client_id, client_secret, grant_type
  • Step 2: Extract access_token from the response
  • Step 3+: Use {{access_token}} in the Authorization header for all API calls
Each VU gets its own token — exactly like real OAuth flow.
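As a sketch, the token step sends a form-encoded request and later steps carry the extracted token in a header. Endpoint details, credentials, and field names below are hypothetical; the response is stubbed:

```python
import json
from urllib.parse import urlencode

# Step 1: form-encoded body for the OAuth token endpoint (hypothetical credentials)
token_request = urlencode({
    "grant_type": "client_credentials",
    "client_id": "my-client",
    "client_secret": "s3cret",
})

# Step 2: stubbed token response; extract access_token
token_response = json.loads('{"access_token": "tok-123", "token_type": "Bearer"}')
access_token = token_response["access_token"]

# Step 3+: every subsequent call in the chain carries the per-VU token
auth_header = {"Authorization": f"Bearer {access_token}"}
print(token_request)                 # grant_type=client_credentials&client_id=my-client&client_secret=s3cret
print(auth_header["Authorization"])  # Bearer tok-123
```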
Can I send multipart/form-data requests (file uploads)?
For simple file upload testing, you can set the appropriate Content-Type and body. For complex multipart scenarios with actual file data, contact our support team for guidance on advanced configuration.

📱 Network Simulation

See what your users actually experience — not what your data center sees.

Why should I test under simulated network conditions?
Your API may respond in 50ms on a data center connection, but what do your actual users experience on a 4G connection in Mumbai or a 3G connection in a rural area? Network simulation reveals:
  • How timeouts behave under real latency
  • Whether your payload sizes are too large for slow connections
  • How retry logic handles degraded networks
  • Whether your mobile app backend is actually mobile-ready
  • Whether your compression and caching strategy actually helps
What network profiles are available?
  • 3G: 300 kbps bandwidth, 200ms latency — typical emerging-market mobile
  • 4G: 5 Mbps bandwidth, 50ms latency — standard mobile connection
  • Slow WiFi: Simulates congested coffee-shop or hotel WiFi
  • Custom: Set your own bandwidth (kbps) and latency (ms) for any scenario
Network conditions are applied at the test-execution layer, not as a proxy — so results are accurate and there's no infrastructure overhead.
Can I combine network simulation with API chain testing?
Yes. Run your full multi-step user journey under 3G conditions to see how a real mobile user would experience your entire flow — login, data loading, transactions. This reveals bottlenecks that single-endpoint tests on fast connections completely miss. A checkout flow that takes 2 seconds on fiber might take 45 seconds on 3G — that's a lost sale.
Can I test with different network conditions for the same endpoint?
Run the same test multiple times with different network profiles and compare results side-by-side. This gives you a clear picture: "Our API handles 4G fine (P95: 200ms) but degrades badly on 3G (P95: 4,500ms)." This data drives engineering decisions — do you need smaller payloads? Better compression? A CDN?
Is network simulation accurate? How does it work?
Network conditions are applied at the execution layer using kernel-level traffic shaping — the same technique used by mobile carriers to throttle connections. This is not a proxy adding artificial delays (which some legacy tools do). The bandwidth throttling and latency injection happen at the network level, producing results that closely match real-world mobile performance.
Why don't other load testing tools offer network simulation?
Desktop-based tools run on your machine. They can't inject network conditions between your machine and the server without complex proxy setups. Since AccessLoad controls the execution infrastructure, we apply network conditions at the source — between our load generators and your API. It's a fundamental architectural advantage of cloud-native load testing.

📈 Metrics & SLA

What gets measured, what gets managed.

What metrics does AccessLoad capture?
  • Throughput: Transactions per second (TPS) — overall and per endpoint
  • Latency percentiles: P50, P90, P95, P99 — not just averages, which hide outliers
  • Error rate: Percentage and count of failed requests
  • Status codes: Distribution of HTTP response codes (200, 400, 500, etc.)
  • Per-endpoint breakdown: Every metric split by endpoint, so you see exactly which API is the bottleneck
What are SLA profiles and how do they work?
SLA profiles define pass/fail criteria for your tests:
  • Web App: P95 < 500ms, error rate < 1%
  • Backend API: P95 < 1,500ms, error rate < 5%
  • Heavy Processing: P95 < 5,000ms, error rate < 10%
  • Custom: Set your own thresholds for P95 latency and error rate
Tests are evaluated against your chosen SLA. If thresholds are breached, the test is marked as failed — and you can optionally auto-abort the test to save resources.
Why do you show P50/P90/P95/P99 instead of average latency?
Averages lie. If 99 requests take 10ms and 1 request takes 10 seconds, the average is about 110ms, which looks fine. But P99 shows 10,000ms, revealing the real problem. P95 is what your users actually experience. Legacy tools that emphasize average response time give you a false sense of confidence. AccessLoad defaults to percentile-based metrics because they expose tail latency issues, GC pauses, and database lock contention.
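You can verify the arithmetic directly, with latencies in milliseconds:

```python
from statistics import mean, median

latencies = [10] * 99 + [10_000]  # 99 requests at 10ms, one at 10 seconds

print(mean(latencies))    # 109.9 -- looks "fine"
print(median(latencies))  # 10.0  -- the typical user
print(max(latencies))     # 10000 -- the tail the average hides
```

The mean and median are both unremarkable; only the tail exposes the 10-second outlier.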
What is the difference between P50, P90, P95, and P99?
  • P50 (median): 50% of requests are faster than this. Your "typical" response time.
  • P90: 90% of requests are faster. Only 10% are slower.
  • P95: 95% of requests are faster. This is the standard SLA metric — what most of your users experience.
  • P99: 99% of requests are faster. Reveals tail latency — the worst 1% of experiences. If P99 is 10x higher than P50, you have a serious tail latency problem.
Can I compare results across multiple test runs?
Yes. Select any two test runs for a side-by-side comparison showing:
  • TPS delta (regression if >10% drop)
  • Latency delta (regression if >15% increase)
  • Error rate delta (regression if >20% increase)
This is essential for tracking performance over time — deploy a new build, run the same test, and instantly see if you've introduced a regression.
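The regression rules above reduce to percentage deltas against thresholds. This sketch uses the thresholds from the list; the function and field names are illustrative, not AccessLoad's API:

```python
def compare_runs(baseline: dict, current: dict) -> dict:
    """Flag regressions using percentage deltas between two runs."""
    def pct_change(old, new):
        return (new - old) / old * 100

    return {
        "tps_regression": pct_change(baseline["tps"], current["tps"]) < -10,
        "latency_regression": pct_change(baseline["p95_ms"], current["p95_ms"]) > 15,
        "error_regression": pct_change(baseline["error_rate"], current["error_rate"]) > 20,
    }

baseline = {"tps": 500, "p95_ms": 450, "error_rate": 1.0}
current  = {"tps": 480, "p95_ms": 720, "error_rate": 1.1}
print(compare_runs(baseline, current))
# {'tps_regression': False, 'latency_regression': True, 'error_regression': False}
```

Here a 4% TPS dip passes, but P95 jumping from 450ms to 720ms (+60%) is flagged as a latency regression.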
What is the "warmup grace period" for SLA evaluation?
During ramp-up, latency is often higher as connection pools, caches, and JIT compilers warm up. The warmup grace period tells AccessLoad to skip SLA evaluation during the first N seconds of the test. This prevents false SLA failures caused by cold-start effects — you only measure steady-state performance.
What does "auto-abort on SLA breach" do?
When enabled, AccessLoad automatically stops the test if your SLA thresholds are violated. This saves VU quota and time when the system is clearly failing — no point running 1,000 VUs for 30 more minutes when P95 is already 10x over threshold. The test still records all data up to the abort point.
Can I see per-endpoint metrics in a multi-endpoint test?
Yes. Every metric is broken down per endpoint. If you test 5 endpoints, you'll see 5 sets of TPS, latency percentiles, and error rates. This immediately tells you: "The /search endpoint is the bottleneck (P95: 3,200ms) while /health is fine (P95: 12ms)."
How do I know if my test results are statistically significant?
Look at the total number of requests. A test with 100 requests has high variance — P99 might just be one slow request. A test with 100,000 requests gives stable, reliable metrics. Rule of thumb: for P99 to be meaningful, you need at least 10,000 requests. AccessLoad shows total request counts alongside all percentile metrics so you can judge confidence.

🤖 AI-Powered Analysis

Stop staring at graphs. Get answers.

What does the AI analysis actually do?
After every test, AccessLoad's AI engine analyzes your results and detects specific bottleneck patterns:
  • Slow Failure: High latency + high errors → database or downstream service choking under load
  • Fast Failure: Low latency + high errors → rate limiting or connection pool exhaustion
  • Slow but Stable: High latency, no errors → optimization candidate (missing index, N+1 queries)
  • Tail Latency Spike: P50 is fast but P95/P99 are slow → GC pauses, lock contention, or cold cache
  • Elevated Error Rate: Error rate above baseline → configuration or capacity issue
Each pattern comes with specific, actionable recommendations — not generic "improve your code" advice.
Do I need to interpret graphs and charts myself?
No. Legacy tools dump raw graphs and expect you to correlate latency spikes with throughput changes manually — a skill that takes years to develop. AccessLoad's AI engine does this correlation for you and tells you what's wrong and what to do about it. The charts are still there for engineers who want to dig deeper, but the AI summary gives you answers in plain English.
Can the AI generate test scenarios from natural language?
Yes. Describe what you want to test in plain English — e.g., "Test my login API at https://api.example.com/auth with 500 users for 2 minutes" — and the AI engine converts it into a fully configured test. No need to understand test parameters or configuration syntax. This is especially useful for teams new to performance testing.
How accurate is the AI analysis compared to a human performance engineer?
The AI excels at pattern recognition across metrics — correlating latency, throughput, and error rate patterns that humans often miss. It applies battle-tested heuristics from thousands of load test patterns. For 80% of cases, the AI identifies the bottleneck correctly. For complex, multi-layered issues (like cascading failures across microservices), a human performance engineer may still provide deeper root-cause analysis.
Does the AI learn from my previous tests?
The AI engine analyzes each test independently using proven performance engineering heuristics. It doesn't store or learn from your specific data — ensuring privacy and preventing bias from historical anomalies. The patterns it detects are based on universal performance engineering principles that apply across all systems.
Can I ask the AI questions about my test results?
Yes. The AI chat assistant on every page can answer questions about your test results, explain what metrics mean, suggest improvements, and help you configure your next test. Ask things like "Why is my P99 so much higher than P95?" or "What does a 502 error mean?" and get contextual, actionable answers.

📄 Reports & Export

Share results with your team, management, and clients.

What report formats are available?
  • HTML Report: Interactive, full-detail report with all metrics, charts, per-endpoint breakdowns, and AI analysis. Available on all plans.
  • PDF Report: Stakeholder-friendly export for sharing with management, clients, or compliance teams. Available on Pro and Business plans.
  • Dashboard: Real-time metrics visible during and after every test run in the browser.
Can I share test results with non-technical stakeholders?
Yes. Download the PDF report — it includes a clear summary with pass/fail SLA verdict, key metrics (throughput, latency, error rate), and AI-generated insights in plain English. No technical knowledge needed to understand the results. Legacy tools output raw CSV data or complex graphs that require a performance engineer to interpret — AccessLoad makes results accessible to everyone.
How long are test results retained?
  • Free plan: 7 days
  • Pro plan: 30 days
  • Business plan: 90 days
Download HTML or PDF reports anytime to keep permanent copies. Run comparison is available between any two runs within your retention window.
Can I use test reports for compliance or audit purposes?
Yes. PDF reports contain timestamped test configurations, SLA results, and detailed metrics — suitable for compliance documentation. For regulated industries, combine this with private testing to demonstrate that performance testing was conducted without exposing sensitive data to third parties.
Can I track performance trends over time?
Yes. Use run comparison to compare any two test runs. Run the same test configuration weekly or after each deployment and compare: "Last week P95 was 450ms, this week it's 720ms — what changed?" This is how mature teams prevent performance degradation from creeping into production.

🏰 Private Testing (BYOC)

Test internal APIs without exposing them to the internet.

How does private testing work?
Deploy a lightweight test agent inside your network — your VPC, data center, or even your laptop. The agent connects outbound to AccessLoad, receives test instructions, executes them locally against your internal APIs, and uploads results. Your APIs never need to be publicly accessible. No inbound ports, no firewall changes, no VPN tunnels.
Is it really just one command to set up?
Yes. Copy the command from your dashboard, paste it into any machine that has network access to your target APIs, and it starts. The agent registers itself automatically and appears in your dashboard within seconds. Compare this to legacy tools that require installing Java runtimes, configuring RMI ports, setting up master-slave architectures, and managing firewall rules on both ends.
Can I scale private testing across multiple machines?
Yes. Deploy agents on multiple machines and AccessLoad automatically distributes load across all available agents. Run the same command on 10 machines and you have 10 agents sharing the workload. No manual agent coordination, no shared file systems, no complex configuration.
Can I deploy agents in my own cloud (AWS, GCP, Azure)?
Yes. Deploy agents anywhere — AWS EC2, GCP Compute Engine, Azure VMs, Kubernetes pods, or bare metal. The agent only needs outbound HTTPS access to AccessLoad. This is ideal for testing services that live inside a VPC, behind a NAT, or in a private subnet.
Does test traffic stay within my network?
Yes. The agent executes tests locally — test traffic flows directly from the agent to your API within your network. Only test results (metrics and summaries) are sent to AccessLoad. Your actual request payloads, response data, and API credentials never leave your infrastructure.
What data is sent to AccessLoad from a private agent?
Only aggregate performance metrics:
  • Response times (latency percentiles)
  • Request counts and throughput
  • Error counts and status codes
  • Error snapshots (request/response for failed requests)
Your actual API endpoints, authentication tokens, request payloads, and response bodies are processed locally. The only exception is error snapshots, which capture failed request/response pairs for debugging, and you control whether those are sent.
Can I test internal microservices that have no public DNS?
Yes. Since the agent runs inside your network, it can reach any service your machine can reach — http://user-service:8080, http://10.0.1.50:3000, or any internal DNS name. This is a major advantage over cloud-only testing platforms that can only test publicly accessible endpoints.
What happens if the agent loses connection to AccessLoad during a test?
The agent buffers results locally and reconnects automatically. Short network interruptions don't affect the running test. If the connection is lost for an extended period, the test completes locally and results are uploaded when connectivity is restored. Your test data is never lost.
Can I run private agents behind a corporate proxy?
The agent requires outbound HTTPS access to AccessLoad. If your network routes through a proxy, configure standard HTTPS proxy environment variables and the agent will use them. No special configuration needed beyond what any outbound HTTPS application requires.

🐙 CI/CD Integration

Catch performance regressions before they reach production.

How does CI/CD performance testing work?
  • Install: Add the AccessLoad GitHub App to your repository
  • Configure: Add a .accessload.yml file with your target URL, VU count, and SLA thresholds
  • Automatic: Every PR triggers a performance smoke test against your staging environment
  • Results: Pass/fail check appears directly on the PR with detailed metrics table
If the PR introduces a performance regression, the check fails — before it merges to main.
Can I test dynamic staging environments per PR?
Yes. Use URL templates like https://pr-{{PR_NUMBER}}.staging.example.com in your config. AccessLoad automatically substitutes the PR number so each pull request tests against its own staging deployment. This works with any preview deployment system.
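Under the hood this is plain string templating. A minimal sketch of the substitution (illustrative only, not AccessLoad's actual renderer):

```python
def render_target_url(template: str, pr_number: int) -> str:
    """Expand the {{PR_NUMBER}} placeholder into a per-PR staging URL.
    Illustrative sketch; the real platform does the substitution for you."""
    return template.replace("{{PR_NUMBER}}", str(pr_number))

url = render_target_url("https://pr-{{PR_NUMBER}}.staging.example.com", 482)
print(url)  # https://pr-482.staging.example.com
```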
What SLA checks are available in CI/CD?
Configure thresholds for:
  • P95 latency: e.g., must be under 1,500ms
  • P99 latency: tail latency check
  • Error rate: e.g., must be under 5%
  • Throughput minimum: e.g., must sustain 100 req/s
If any threshold is breached, the PR check fails with a detailed report showing which metrics violated which thresholds.
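Conceptually, the pass/fail verdict is a set of comparisons against the run's aggregate metrics. A sketch with illustrative metric and threshold names, not AccessLoad's real config schema:

```python
def check_sla(metrics: dict, thresholds: dict) -> list:
    """Return the list of breached thresholds (empty list = pass).
    Field names here are made up for illustration."""
    breaches = []
    if metrics["p95_ms"] > thresholds.get("p95_ms_max", float("inf")):
        breaches.append(f"P95 {metrics['p95_ms']}ms > {thresholds['p95_ms_max']}ms")
    if metrics["p99_ms"] > thresholds.get("p99_ms_max", float("inf")):
        breaches.append(f"P99 {metrics['p99_ms']}ms > {thresholds['p99_ms_max']}ms")
    if metrics["error_rate"] > thresholds.get("error_rate_max", 1.0):
        breaches.append(f"error rate {metrics['error_rate']:.1%} over limit")
    if metrics["rps"] < thresholds.get("rps_min", 0.0):
        breaches.append(f"throughput {metrics['rps']} req/s under minimum")
    return breaches

breaches = check_sla(
    {"p95_ms": 1800, "p99_ms": 2400, "error_rate": 0.02, "rps": 140},
    {"p95_ms_max": 1500, "error_rate_max": 0.05, "rps_min": 100},
)
print(breaches)  # one breach: P95 exceeded its 1,500ms limit
```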
Does this replace manual load testing?
CI/CD integration runs lightweight smoke tests (quick, low-VU tests) on every PR. They catch obvious regressions early — a new database query that adds 2 seconds, a missing index, a broken endpoint. You should still run full-scale load tests periodically, but CI/CD smoke tests act as an automated safety net for every code change.
How long does a CI/CD performance test take?
Typically 1–3 minutes. CI/CD smoke tests use fewer VUs (10–50) and shorter durations (30–60 seconds) than full load tests. The goal is to detect regressions fast, not simulate full production load. Results appear on the PR within minutes of the staging deployment completing.
Can I block a merge if performance tests fail?
Yes. Configure AccessLoad as a required status check in your GitHub branch protection rules. If the performance test fails, the PR cannot be merged until the issue is fixed and the test passes. This enforces performance standards across your entire team — no more "we'll fix performance later."
What does the PR comment look like?
AccessLoad posts a detailed metrics table directly on the PR:
  • ✅ or ❌ overall verdict
  • Throughput (TPS)
  • P95 and P99 latency with threshold comparison
  • Error rate with threshold comparison
  • Link to full test report
Reviewers see performance impact at a glance — no need to open a separate dashboard.

⚡ Why AccessLoad

Honest comparisons — because you've been burned by marketing before.

I'm used to desktop-based load testing tools. Why should I switch?
Desktop tools were designed 15+ years ago for a different era:
  • Installation: They require 150MB+ Java installs. AccessLoad runs in your browser.
  • Distributed testing: They need a complex master-slave RMI setup. AccessLoad distributes automatically.
  • GUI performance: Their UIs freeze when viewing results from large tests. AccessLoad streams results in real-time.
  • Modern protocols: GraphQL and gRPC support is either missing or requires third-party plugins. AccessLoad supports them natively.
  • Analysis: They dump raw data. AccessLoad's AI tells you what's wrong and how to fix it.
How is your approach to test creation better than proxy recording?
Legacy tools use a proxy recorder — you configure your browser to route through a proxy, browse your app, and the tool captures requests. This approach breaks with:
  • Modern SPAs (React, Angular) that make dynamic API calls
  • HTTPS/TLS (requires certificate manipulation)
  • Mobile apps (can't easily proxy mobile traffic)
  • WebSockets and streaming protocols
  • APIs with token rotation or certificate pinning
AccessLoad takes a direct approach: define your API endpoints explicitly or describe your scenario to our AI, and get a properly structured test. It's faster, more accurate, and doesn't require proxy configuration.
What about open-source alternatives?
Open-source tools give you an engine, but you still need to:
  • Write scripts manually — AccessLoad's UI and AI generate them for you
  • Set up infrastructure — servers, orchestration, result storage
  • Build dashboards — monitoring pipelines for real-time metrics
  • Manage distribution — multi-machine coordination
  • Interpret results — manual correlation and analysis
The total cost of ownership for open-source (infrastructure + engineering time + maintenance) often exceeds a Pro subscription within the first month. You're paying with engineer time instead of money — and engineer time is more expensive.
How does correlation work compared to legacy tools?
Legacy tools require you to add separate "post-processor" elements — Regular Expression Extractors or JSON Extractors — wire them to specific requests, define extraction rules, and debug regex patterns when they break. AccessLoad builds extraction into each chain step. Use simple dot-notation like data.access_token — no regex, no separate elements, no wiring. It works on the first try. See the Correlation & Variables section for details.
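For illustration, dot-notation extraction amounts to a short walk over the parsed JSON response. A minimal sketch (AccessLoad's real extractor may support more, such as array indices or defaults):

```python
def extract(payload, path: str):
    """Walk a parsed JSON object with dot-notation, e.g. "data.access_token".
    Minimal sketch of the idea; no regex, no separate extractor elements."""
    value = payload
    for key in path.split("."):
        value = value[key]
    return value

response = {"data": {"access_token": "abc123", "user": {"id": 42}}}
print(extract(response, "data.access_token"))  # abc123
print(extract(response, "data.user.id"))       # 42
```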
Do I need scripting or programming knowledge?
No. Everything is configurable through the UI. Legacy tools often require learning a proprietary scripting language (or Java/Groovy extensions) to do anything beyond basic tests. AccessLoad's AI can even generate test configs from natural language descriptions. Power users who want full control can access the underlying test configuration, but it's never required.
How does data parameterization compare to legacy tools?
Legacy tools require:
  • Creating CSV files in a specific format
  • Adding "CSV Data Set Config" elements and mapping column names
  • Manually distributing CSV files to all remote load generators
  • Configuring thread affinity and data cycling modes
  • Debugging "file not found" errors on remote machines
AccessLoad handles all of this natively. Upload your data, reference columns in your request templates, and the platform distributes unique data rows across VUs and workers automatically — with zero file management.
What about enterprise tools with annual license fees?
Enterprise load testing tools often cost $50,000–$200,000 per year in licensing alone, plus infrastructure and dedicated personnel. AccessLoad Business is $249/month — a fraction of the cost — with comparable or superior capabilities:
  • 25,000 VUs (many enterprise tools charge per-VU)
  • Distributed testing included (no extra "controller" licenses)
  • AI analysis included (enterprise tools charge for analytics add-ons)
  • No per-seat licensing — your whole team can use it
I've invested years learning my current tool. Is the switching cost worth it?
We understand the investment. Consider what your current tool actually costs:
  • Time spent on correlation: Hours debugging regex extractors → minutes with dot-notation
  • Infrastructure management: Days maintaining distributed setups → zero with AccessLoad
  • Script maintenance: Scripts break with every UI change → UI-based config is resilient
  • Team onboarding: Weeks to train new engineers → productive in 30 minutes
Most teams recover their switching investment within the first sprint. And you don't have to migrate old scripts — just recreate the scenarios you actively use.
Can AccessLoad do everything my current enterprise tool does?
AccessLoad covers 90%+ of what performance teams actually use daily — HTTP/GraphQL/gRPC testing, chain scenarios, data parameterization, distributed load, CI/CD integration, and AI analysis. Some niche enterprise features (like TruClient browser rendering, SAP/Citrix protocol emulation, or Siebel-specific testing) are not in scope — AccessLoad focuses on API and backend performance testing where the majority of modern applications live.
My team is familiar with thread groups and listeners. How does AccessLoad map to these concepts?
  • Thread Group → VU count + ramp-up: Instead of configuring thread groups, just set VU count and ramp-up time
  • HTTP Sampler → Endpoint config: Instead of adding sampler elements, paste your URL and set the method
  • Listener → Real-time dashboard: Results stream automatically — no need to add listener elements
  • CSV Data Set Config → Data upload: Upload your data, reference columns with {{csv.column}}
  • Timer → Think time: Configure delays between chain steps
  • Assertion → SLA profile: Set pass/fail criteria once, applied globally
  • Post-Processor → Chain extraction: Built into each step, no separate element
What about VuGen scripting — can AccessLoad handle complex scripts?
VuGen scripting exists because enterprise tools rely on low-level protocol manipulation. AccessLoad eliminates the need for scripting by providing a visual chain builder with built-in variable extraction. For the 90% of test scenarios that involve HTTP/REST/GraphQL/gRPC API calls with authentication flows, AccessLoad's UI handles everything without writing a single line of code. The AL-Engine generates optimized scripts automatically.

🔄 Migration & Switching

We know switching tools feels risky. Here's the honest truth.

Can I import my existing test scripts from other tools?
AccessLoad uses a different paradigm — visual configuration instead of scripts. Rather than importing old scripts (which often carry years of workarounds and technical debt), recreate your scenarios in the UI. Most test scenarios that took hours to script can be configured in 5–10 minutes through the UI. Think of it as a fresh start without the baggage.
How long does it take to get productive with AccessLoad?
Most users run their first test within 5 minutes of signing up. For chain tests with variable extraction, add 10–15 minutes. For full CI/CD integration, add 30 minutes. Compare this to legacy tools where new engineers typically need 1–2 weeks of training before creating reliable test scripts.
What if my team resists switching from a familiar tool?
Common concerns and how to address them:
  • "I know the old tool" — AccessLoad is simpler. If they can use a web form, they can use AccessLoad. No scripting language to learn.
  • "Our scripts are complex" — Complexity in scripts usually means the tool is fighting you. AccessLoad's chain builder handles complex flows without code.
  • "What about our existing tests?" — Keep the old tool running in parallel while you migrate active scenarios. There's no big-bang migration required.
  • "SaaS isn't secure enough" — Private testing keeps all traffic in your network. Only metrics leave.
Can I run AccessLoad alongside my existing tool during transition?
Absolutely. Many teams run both in parallel — using their legacy tool for existing test suites while building new tests in AccessLoad. As you gain confidence, gradually migrate scenarios. There's no lock-in and no pressure to switch all at once. The Free plan makes this zero-cost to try.
How do I justify the switch to management?
Frame it in business terms:
  • Cost reduction: $249/mo vs $50K–$200K/yr for enterprise licenses
  • Time savings: Test setup drops from hours to minutes. No infrastructure management.
  • Team velocity: No scripting skills needed — any developer can create load tests
  • Risk reduction: CI/CD integration catches regressions automatically — fewer production incidents
  • Modern stack: Native GraphQL, gRPC, API chain support — your current tool needs plugins or doesn't support them
Start with the Free plan, run a proof of concept, and present real results.
Is there vendor lock-in with AccessLoad?
No. Your test configurations are simple (URL, headers, body, steps) — not proprietary scripts. If you ever leave, recreating these tests in any other tool takes minutes. We don't lock your data behind proprietary formats. Download your reports and results anytime.
What's the learning curve for someone who's used enterprise performance tools for years?
Negative. Seriously — experienced performance engineers often say AccessLoad is frustratingly simple because they're used to complexity being the norm. Things that took 30 minutes to configure (correlation, data parameterization, distributed setup) take 30 seconds. The only adjustment is trusting that it's really that simple.

🔧 Common Workflows

How to do specific things you're used to doing.

How do I handle cookies and session management?
Each virtual user maintains its own cookie jar automatically. If your server sets a Set-Cookie header, subsequent requests from that VU include the cookie. For explicit session management, extract the session token in a chain step and pass it via headers. Each VU has isolated cookie state — just like real browser sessions.
How do I test with think time (pacing between requests)?
Configure think time between chain steps to simulate real user behavior. A real user doesn't make 100 API calls per second — they login, pause, browse, pause, click. Think time makes your test realistic and prevents artificially high request rates that don't match production traffic patterns.
How do I handle dynamic CSRF tokens?
Use a chain test:
  • Step 1: GET the page/endpoint that returns the CSRF token
  • Step 2: Extract the CSRF token from the response body or X-CSRF-Token header
  • Step 3+: Include {{csrf_token}} in the X-CSRF-Token header of POST/PUT/DELETE requests
Each VU gets its own CSRF token — exactly like a real browser session.
How do I test pagination (iterating through pages of results)?
Use a chain test with cursor extraction:
  • Step 1: GET /api/items?page=1 → extract next_cursor
  • Step 2: GET /api/items?cursor={{next_cursor}} → extract next cursor
  • Repeat for as many pages as needed
This tests how your pagination performs under load — a common bottleneck for APIs with large datasets.
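The per-VU loop can be pictured like this, with a small in-memory stand-in for the paginated endpoint (the page data and cursor values are made up for illustration):

```python
# Simulated paginated API: each "page" returns items plus a cursor to the
# next page, or None at the end. Stands in for GET /api/items?cursor=...
PAGES = {
    None: {"items": [1, 2, 3], "next_cursor": "c1"},
    "c1": {"items": [4, 5, 6], "next_cursor": "c2"},
    "c2": {"items": [7, 8],    "next_cursor": None},
}

def fetch_all(max_pages: int = 10) -> list:
    """Follow next_cursor page by page, the same loop each VU runs
    in a chain test with cursor extraction."""
    cursor, items, pages = None, [], 0
    while pages < max_pages:
        page = PAGES[cursor]          # in a real test: the HTTP call
        items.extend(page["items"])
        cursor = page["next_cursor"]  # the extracted {{next_cursor}}
        pages += 1
        if cursor is None:
            break
    return items

print(fetch_all())  # [1, 2, 3, 4, 5, 6, 7, 8]
```

The `max_pages` cap mirrors why bounded iteration matters in a load test: without it, a buggy cursor that never terminates would run forever.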
How do I test a login flow with token refresh?
Model the full OAuth token lifecycle:
  • Step 1: POST /auth/login → extract access_token and refresh_token
  • Step 2: GET /api/data (with access_token) — normal API usage
  • Step 3: POST /auth/refresh (with refresh_token) → extract new access_token
  • Step 4: GET /api/data (with new access_token) — verify refresh works under load
How do I test WebSocket connections?
AccessLoad's AL-Engine supports WebSocket testing at the protocol level. For most real-time applications, you can test the initial HTTP upgrade request and the WebSocket message patterns. For complex bidirectional streaming scenarios, contact our support team for advanced configuration guidance.
How do I test APIs that require client certificates (mTLS)?
For private testing, deploy an agent in your network that has the client certificates installed. The agent uses the local certificate store for mTLS authentication. This keeps your certificates secure within your infrastructure while still getting full AccessLoad reporting and analysis.
How do I simulate different user roles (admin, regular user, guest)?
Use data-driven testing with a role column in your dataset. Each row has different credentials for different roles:
  • Row 1: admin@company.com / admin_password
  • Row 2: user@company.com / user_password
  • Row 3: guest@company.com / guest_password
With 1,000 VUs and this data, you'll have ~333 admin sessions, ~333 user sessions, and ~334 guest sessions — matching your real-world user distribution.
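One plausible way to picture the split is a round-robin cycle of data rows across VUs, which is easy to verify for the 1,000-VU example above (a sketch, not necessarily AccessLoad's exact distribution policy):

```python
def assign_rows(num_vus: int, rows: list) -> dict:
    """Cycle data rows across VUs round-robin and count how many
    VUs end up on each row. Illustrative sketch only."""
    counts = {row: 0 for row in rows}
    for vu in range(num_vus):
        counts[rows[vu % len(rows)]] += 1
    return counts

roles = ["admin@company.com", "user@company.com", "guest@company.com"]
print(assign_rows(1000, roles))
# 1,000 VUs over 3 rows: a 334/333/333 split, matching the ~333 per role above
```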
How do I test API rate limiting?
Run a test with gradually increasing VUs and monitor when your error rate jumps. Rate limiting typically manifests as "Fast Failure" — low latency + high errors (usually 429 Too Many Requests). The AI analysis will identify this pattern and confirm it's rate limiting, not a system failure. Use per-VU unique API keys (via data-driven testing) to test per-key vs per-system rate limits.
How do I do spike testing (sudden burst of traffic)?
Set ramp-up time to 0 or very low (1–5 seconds) with your full VU count. This simulates a flash sale, marketing campaign launch, or DDoS-like traffic spike. Compare results with your normal ramp-up test to see how your system handles sudden load vs. gradual load. Many systems handle 10,000 gradual users but crash with 10,000 simultaneous users.
How do I do soak testing (long-duration endurance test)?
Set a moderate VU count with a long duration (Business plan supports up to 2 hours). Soak tests reveal:
  • Memory leaks that only appear after sustained load
  • Connection pool exhaustion over time
  • Database connection leaks
  • Log file disk fill-up
  • Token expiration issues during long sessions
For tests longer than 2 hours, contact sales for enterprise options.

🔐 Security & Compliance

Because "trust us" isn't an answer.

Is my test data secure?
  • Encryption in transit: All communication uses TLS 1.2+
  • Encryption at rest: Test results are stored encrypted
  • Private testing: Test traffic never touches our servers — only results are uploaded
  • No plaintext credential storage: API keys and tokens in your test config are used at execution time and never stored in plaintext
Can I use AccessLoad for testing in regulated industries?
Yes. With private testing, all test traffic stays within your infrastructure — ideal for HIPAA, PCI-DSS, and SOC2 environments. Your sensitive data (patient records, financial transactions) never leaves your network. Only aggregate performance metrics (latency, throughput, error counts) are sent to AccessLoad.
What authentication methods does the platform support?
  • Email + password — with email verification and password reset
  • Google OAuth — one-click sign-in
  • GitHub OAuth — for developer teams
  • API keys — for programmatic access and private agent authentication
What if I accidentally put production credentials in a test?
With private testing, credentials never leave your network. For cloud testing, credentials are used only during test execution and not persisted in plaintext. We recommend using dedicated test accounts, not production credentials. For regulated environments, always use private testing to ensure credentials stay within your infrastructure.
Is there SOC2 compliance?
SOC2 compliance is on our roadmap. In the meantime, our architecture is designed with security best practices: TLS encryption, encrypted storage, private testing for data isolation, and role-based access control. For compliance-critical testing, private testing ensures your data never leaves your infrastructure.
Can I restrict who on my team can run tests or view results?
Yes. User roles control access — standard users, admins, and billing admins have different permissions. Only authorized team members can configure tests, view results, or manage billing. API keys are scoped per user with capacity limits.
Do you have a DPA (Data Processing Agreement)?
Yes. Contact our sales team at accessload.com/contact for our DPA, which covers GDPR and data protection requirements for enterprise customers.

💰 Billing & Plans

Transparent pricing. No per-seat, no per-VU, no hidden fees.

What plans are available?
  • Free: 200 VUs/month, 1 concurrent run, 7-day retention. No credit card.
  • Pro ($99/mo ≈ ₹8,299): 10,000 VUs/month, 1-hour max duration, 3 concurrent runs, 30-day retention, PDF reports, CI/CD integration, email support.
  • Business ($249/mo ≈ ₹20,899): 25,000 VUs/month, 2-hour max duration, 5 concurrent runs, 90-day retention, private testing (BYOC), AI analysis, priority support.
See full comparison at accessload.com/#pricing.
What happens if I exceed my monthly VU limit?
You'll see a friendly upgrade prompt before starting the test — we never cut you off mid-run. On Business plans, overage billing is available so tests aren't blocked. You can always upgrade your plan instantly from the billing dashboard.
Do you support GST invoicing for Indian businesses?
Yes. Indian clients are charged 18% GST (CGST + SGST for Maharashtra, IGST for other states). You can add your GSTIN in billing settings for proper B2B tax invoices. All invoices are available as downloadable PDFs from your billing dashboard.
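The split described above works out like this (a simplified sketch; real invoice rounding and place-of-supply rules may differ):

```python
def gst_invoice(base_inr: float, state: str) -> dict:
    """Apply the 18% GST split: intra-state (Maharashtra) invoices
    split into 9% CGST + 9% SGST, other states get 18% IGST.
    Simplified illustration, not the actual billing engine."""
    if state == "Maharashtra":
        taxes = {"CGST (9%)": round(base_inr * 0.09, 2),
                 "SGST (9%)": round(base_inr * 0.09, 2)}
    else:
        taxes = {"IGST (18%)": round(base_inr * 0.18, 2)}
    return {"base": base_inr, **taxes, "total": round(base_inr * 1.18, 2)}

print(gst_invoice(8299.0, "Maharashtra"))
# base 8299.0, CGST 746.91, SGST 746.91, total 9792.82
```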
Is there a free trial for paid plans?
Yes. Pro plans come with a 14-day free trial — full access to all Pro features with no credit card required to start. The Free plan is permanently free with 200 VUs/month, so you can always keep using AccessLoad at no cost.
Why should I pay when there are free open-source alternatives?
Open-source tools are "free" the way a puppy is "free." You still need to:
  • Host infrastructure: Load generation servers, result databases, dashboards — $200–$500/mo in cloud costs
  • Engineer time: Setting up, maintaining, and troubleshooting — 10+ hours/month at $100+/hr
  • Build features: Distributed testing, CI/CD integration, reporting — weeks of development
AccessLoad Pro at $99/mo is cheaper than the infrastructure alone, and you get AI analysis, CI/CD, and distributed testing included. Your engineers spend time building your product, not maintaining test infrastructure.
Are there per-seat or per-user charges?
No. Your entire team can use AccessLoad under one subscription. No per-seat licensing, no "additional user" fees. This is a fundamental difference from enterprise tools that charge $5,000+ per user seat.
What is the total cost of ownership compared to running my own solution?
Self-managed load testing typically costs:
  • Infrastructure: $200–$500/mo for load generators + monitoring stack
  • Engineer time: 10–20 hrs/mo for maintenance at $100+/hr = $1,000–$2,000/mo
  • Software licenses: $0 (open-source) to $50K+/yr (enterprise)
  • Opportunity cost: Engineers maintaining test infra instead of building product
AccessLoad Business at $249/mo gives you everything — and your engineers stay focused on your product.
Can I get a custom enterprise plan?
Yes. For teams needing more than 25,000 VUs, longer test durations, dedicated infrastructure, custom SLAs, or volume discounts, contact our sales team at accessload.com/contact.
What payment methods do you accept?
We accept all major credit/debit cards and UPI payments through our secure payment processor. For enterprise plans, we also support bank transfers and purchase orders. All payment processing is PCI-compliant.

🔍 Error Debugging

When things break under load — find out exactly why.

How do I debug failed requests in a load test?
AccessLoad captures up to 200 error snapshots per test — full request/response pairs for failed requests. Each snapshot includes:
  • Full request headers and body (what was sent)
  • Full response headers and body (what came back)
  • Status code and error type
  • Timing breakdown (DNS, TLS, connection, server wait)
  • Which VU and iteration triggered the failure
Filter by status code, endpoint, or error type to quickly isolate the problem.
What's the difference between error snapshots and server logs?
Server logs show what your server saw. Error snapshots show what the client experienced. This catches issues that don't appear in server logs:
  • Timeouts (the server never received the request)
  • TLS handshake failures
  • Connection refused / pool exhaustion
  • Load balancer errors (502, 503, 504)
  • Responses with a 200 status but an incorrect body
What do common error codes mean in a load test?
  • 429 Too Many Requests: Rate limiting kicked in — your API is throttling requests
  • 500 Internal Server Error: Your server crashed — check server logs for stack trace
  • 502 Bad Gateway: Your load balancer couldn't reach the backend — backend overloaded or crashed
  • 503 Service Unavailable: Server is overwhelmed — capacity limit reached
  • 504 Gateway Timeout: Request took too long — backend is too slow under load
  • Connection refused: The server isn't accepting new connections; its listen backlog is full or the process is down
  • Timeout: Server didn't respond within the timeout window — severe performance issue
Why am I seeing errors only at high VU counts?
This is normal — it means you've found your system's breaking point. Common causes:
  • Database connection pool exhaustion: Fixed pool size can't handle concurrent queries
  • Thread pool saturation: Web server runs out of worker threads
  • Memory pressure: Each connection uses memory — too many connections = OOM
  • Downstream service timeout: A dependency (payment gateway, email service) can't handle the load
The AI analysis will identify which pattern you're hitting and recommend specific fixes.
How do I use the timing breakdown to diagnose issues?
Each error snapshot shows where time was spent:
  • DNS high: DNS resolution is slow — consider DNS caching
  • TLS high: SSL handshake is expensive — check certificate chain or enable session resumption
  • Connecting high: Server is slow to accept connections — connection pool or socket backlog issue
  • Waiting high: Server is processing slowly — database queries, business logic, or downstream calls
  • Receiving high: Large response payload — consider compression or pagination
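The table above amounts to a lookup from the dominant phase to a likely cause. A sketch with illustrative phase names (not AccessLoad's exact field names):

```python
# Map each timing phase to its usual suspect, per the diagnosis table above.
HINTS = {
    "dns":        "slow DNS resolution; consider DNS caching",
    "tls":        "expensive TLS handshake; check cert chain or session resumption",
    "connecting": "slow connection accept; connection pool or socket backlog issue",
    "waiting":    "slow server processing; queries, business logic, downstream calls",
    "receiving":  "large response payload; consider compression or pagination",
}

def dominant_phase(breakdown_ms: dict) -> str:
    """Name the phase that consumed the most time and its likely cause."""
    phase = max(breakdown_ms, key=breakdown_ms.get)
    return f"{phase}: {HINTS[phase]}"

snapshot = {"dns": 4, "tls": 18, "connecting": 7, "waiting": 1240, "receiving": 31}
print(dominant_phase(snapshot))  # "waiting" dominates: server-side slowness
```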

🏗️ Scalability & Architecture

Built for serious load from day one.

How many virtual users can AccessLoad handle?
Up to 25,000 VUs on the Business plan, with enterprise options for higher loads. Load is automatically distributed across multiple execution agents — you don't need to think about infrastructure. For comparison, legacy desktop tools typically start lagging above 500–1,000 VUs on a single machine and require complex distributed setups beyond that.
How does distributed load generation work?
AccessLoad automatically distributes your VU count across available execution agents based on their capacity. If you request 10,000 VUs and 5 agents are available with 2,000 VU capacity each, the platform assigns 2,000 VUs to each agent. Results are aggregated centrally in real-time. No manual coordination, no shared file systems, no agent configuration.
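A proportional split with a per-agent capacity cap is one plausible way to picture the assignment (a sketch, not necessarily AccessLoad's exact algorithm):

```python
def distribute_vus(total_vus: int, capacities: list) -> list:
    """Split a VU request across agents proportionally to capacity,
    never exceeding any agent's limit. Illustrative scheme only."""
    available = sum(capacities)
    if total_vus > available:
        raise ValueError(f"requested {total_vus} VUs, only {available} available")
    shares = [total_vus * c // available for c in capacities]
    # Hand out any rounding remainder one VU at a time to agents with headroom
    i = 0
    while sum(shares) < total_vus:
        if shares[i] < capacities[i]:
            shares[i] += 1
        i = (i + 1) % len(shares)
    return shares

print(distribute_vus(10_000, [2_000] * 5))           # [2000, 2000, 2000, 2000, 2000]
print(distribute_vus(7_000, [2_000, 2_000, 4_000]))  # proportional: [1750, 1750, 3500]
```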
Can I run multiple tests at the same time?
Yes. Concurrent test limits by plan:
  • Free: 1 concurrent run
  • Pro: 3 concurrent runs
  • Business: 5 concurrent runs
Each run gets its own isolated resources and metrics. This lets you test multiple services or environments in parallel.
What is the maximum test duration?
  • Free: No hard limit (fair use)
  • Pro: 1 hour per test
  • Business: 2 hours per test
For endurance/soak testing beyond 2 hours, contact our sales team for enterprise options.
What happens if one execution agent fails during a distributed test?
The remaining agents continue executing. Results from the failed agent up to the failure point are included in the report. Agent health is monitored via heartbeats — if an agent stops responding, it's marked as unhealthy and excluded from future test assignments. Your test doesn't stop because of an infrastructure issue.
Can I generate load from different geographic regions?
With private testing (BYOC), deploy agents in any region — AWS us-east-1, GCP asia-south1, Azure westeurope. The load comes from wherever you deploy agents. This lets you test how your system performs for users in different geographies, including latency from distant regions.

🏢 Industry Use Cases

How different industries use AccessLoad.

How do e-commerce companies use AccessLoad?
E-commerce teams test:
  • Flash sale readiness: Spike test with thousands of VUs hitting product pages simultaneously
  • Checkout flow: Chain test: browse → cart → checkout → payment — find which step breaks first
  • Search performance: Test search API with diverse query patterns under load
  • Seasonal capacity: Can your system handle Black Friday traffic?
The difference between 2-second checkout and 8-second checkout is millions in lost revenue.
How do fintech/banking companies use AccessLoad?
Financial services teams use:
  • Private testing: Test internal banking APIs without exposing them — mandatory for PCI-DSS
  • Transaction flow testing: Chain: authenticate → check balance → transfer → verify — each step must be fast under load
  • Peak hour simulation: Test with 5,000 unique accounts simultaneously (data-driven) to simulate market opening or salary day
  • Compliance reporting: PDF reports with SLA verdicts for regulatory documentation
How do healthcare companies use AccessLoad?
Healthcare teams prioritize data security:
  • Private testing mandatory: Patient data (PHI) never leaves the hospital network
  • EHR API testing: Test patient record retrieval under load — slow EHR kills clinical workflow
  • Telemedicine: Test video call initiation APIs under peak demand
  • HIPAA compliance: All test traffic stays within HIPAA-compliant infrastructure using BYOC
How do SaaS companies use AccessLoad?
SaaS teams integrate load testing into their development workflow:
  • CI/CD integration: Every PR gets a performance smoke test — regressions caught before merge
  • Multi-tenant testing: Data-driven tests with different tenant IDs to verify isolation
  • API versioning: Compare v1 vs v2 performance side-by-side
  • Scale planning: "We have 1,000 users now, will our API handle 10,000?"
How do mobile app companies use AccessLoad?
Mobile-focused teams leverage:
  • Network simulation: Test APIs under 3G/4G conditions — what users in emerging markets experience
  • API chain for user flows: Signup → verify email → set profile → browse feed — the full mobile onboarding journey
  • Push notification APIs: Can your notification service handle 100K simultaneous deliveries?
  • Offline sync APIs: Test sync endpoints when 10,000 users come online simultaneously
Visit accessload.com/mobile for more details.
How do gaming companies use AccessLoad?
Gaming backends need extreme performance:
  • Launch day simulation: Spike test with maximum VUs — can your matchmaking API handle 50K simultaneous players?
  • Real-time APIs: Test leaderboard, inventory, and chat APIs under sustained load
  • In-app purchase flow: Chain: authenticate → browse store → purchase → verify — every step must be sub-second
  • Network conditions: Test under mobile network simulation — many gamers play on mobile
How do government/public sector organizations use AccessLoad?
Government teams need security and scale:
  • Private testing: Mandatory for government networks — no data leaves the secure zone
  • Citizen-facing portals: Tax filing season, license renewals — spiky traffic patterns
  • Compliance: PDF reports for audit documentation with SLA pass/fail verdicts
  • Air-gapped networks: Deploy private agents in isolated networks with no internet access (results stored locally)

📚 Performance Testing Concepts

New to load testing? Start here.

What is load testing and why does it matter?
Load testing simulates real-world traffic against your application to see how it performs under stress. Without it, you're flying blind:
  • You don't know if your API can handle 1,000 concurrent users
  • You don't know if a new deployment introduced a performance regression
  • You don't know where the bottleneck is (database? API gateway? business logic?)
  • You don't know if your system will survive peak traffic (Black Friday, launch day)
Load testing answers these questions before your users discover the answers in production.
What is the difference between load testing, stress testing, and performance testing?
  • Performance testing: The umbrella term — any test that measures speed, responsiveness, or stability
  • Load testing: Test under expected production load — "Can we handle our normal traffic?"
  • Stress testing: Push beyond expected load — "At what point does the system break?"
  • Spike testing: Sudden burst of traffic — "What happens during a flash sale?"
  • Soak testing: Sustained load over hours — "Are there memory leaks or resource exhaustion?"
  • Scalability testing: Increasing load to find scaling limits — "How far can we scale horizontally?"
AccessLoad supports all of these by adjusting VU count, ramp-up, and duration.
What is throughput (TPS) and why does it matter?
Throughput (Transactions Per Second) measures how many requests your system can process per second. It's the "speed limit" of your API. If your TPS plateaus while VUs increase, your system is at capacity. If TPS drops as you add VUs, your system is overloaded. A healthy system shows TPS increasing linearly with VUs until it reaches a plateau.
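One way to make "TPS plateaus while VUs increase" concrete: scan consecutive (VU, TPS) samples from a ramped test and flag where throughput growth stalls. The 5% growth floor below is an arbitrary illustrative heuristic:

```python
def find_capacity_point(samples, growth_floor=0.05):
    """Given (vus, tps) pairs from a ramped test, return the VU level
    where throughput stopped growing meaningfully (the plateau)."""
    for (v0, t0), (v1, t1) in zip(samples, samples[1:]):
        if t1 < t0 * (1 + growth_floor):
            return v0  # adding VUs past this point bought under 5% more TPS
    return samples[-1][0]  # still scaling at the last sample

ramp = [(100, 480), (200, 950), (400, 1850), (800, 1900), (1600, 1880)]
print(find_capacity_point(ramp))  # 400: TPS plateaus around 1,900 req/s
```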
What is latency and how is it different from response time?
In load testing, latency and response time are often used interchangeably. Technically:
  • Latency: Time from sending the request to receiving the first byte of the response
  • Response time: Time from sending the request to receiving the complete response
For API testing, the difference is minimal unless responses are very large. AccessLoad measures full response time, which is what your users actually experience.
What is a "percentile" and why should I care about P95/P99?
A percentile tells you the response time that a given percentage of requests stayed under:
  • P95 = 500ms means 95% of requests were faster than 500ms, and 5% were slower
  • P99 = 2,000ms means 1% of your users waited over 2 seconds
If you have 1 million daily users, P99 = 2,000ms means 10,000 users per day have a terrible experience. Averages hide this. P95 and P99 expose it.
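To see why averages hide the tail, here is a minimal sketch computing percentiles from raw latency samples (using the nearest-rank method; real tools may interpolate between samples):

```python
def percentile(samples, p):
    """Nearest-rank percentile: the value that p% of samples fall at or below."""
    ordered = sorted(samples)
    # ceil(p/100 * n) gives the 1-based rank of the percentile value
    rank = max(1, -(-len(ordered) * p // 100))  # ceiling division
    return ordered[int(rank) - 1]

# 100 requests: 95 fast ones at 100 ms, 5 slow ones at 2000 ms
latencies = [100] * 95 + [2000] * 5
print(sum(latencies) / len(latencies))  # average: 195.0 ms, looks healthy
print(percentile(latencies, 95))        # P95: 100 ms
print(percentile(latencies, 99))        # P99: 2000 ms, exposes the slow tail
```

The average (195 ms) looks fine, yet 1 in 20 requests took 2 seconds. That is exactly the failure mode P95/P99 exist to catch.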
What is a "bottleneck" and how do I find it?
A bottleneck is the weakest component in your system — the one that limits overall performance. Common bottlenecks:
  • Database: Slow queries, missing indexes, connection pool exhaustion
  • Application server: CPU-bound processing, memory pressure, thread pool saturation
  • Network: Bandwidth limits, DNS resolution, TLS overhead
  • Downstream services: Payment gateway, email service, third-party APIs
AccessLoad's AI analysis identifies which component is the bottleneck and recommends specific fixes.
How many virtual users should I use for my first test?
Start small and increase:
  • First test: 10–50 VUs — verify your test configuration works
  • Baseline: 100–500 VUs — establish normal performance metrics
  • Load test: Expected production concurrency (check your analytics for peak concurrent users)
  • Stress test: 2–3x your expected load — find the breaking point
Don't start with 10,000 VUs — you'll overwhelm your system before getting useful data. Build up gradually.
How do I estimate the right number of VUs for my application?
Use Little's Law: concurrent users ≈ arrival rate × average session duration (in the same time unit)
  • If you get 10,000 users/hour and sessions last 5 minutes: 10,000 × (5/60) ≈ 833 concurrent users
  • Multiply by 2–3x for a stress-test margin: ~2,500 VUs
If you don't have analytics data, start with 100 VUs and double until you find the breaking point. The breaking point tells you your current capacity.
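The arithmetic above (Little's Law: concurrency = arrival rate × time in system) as a small helper, where the 3x stress factor is the example's assumption, not a universal default:

```python
def estimate_vus(users_per_hour, session_minutes, stress_factor=3):
    """Estimate concurrent users via Little's Law, plus a stress margin."""
    session_hours = session_minutes / 60
    baseline = users_per_hour * session_hours  # concurrent users at peak
    return round(baseline), round(baseline * stress_factor)

baseline, stress = estimate_vus(10_000, session_minutes=5)
print(baseline)  # 833 concurrent users
print(stress)    # 2500 VUs for a stress test
```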

🎯 Limitations & Honesty

What we can't do — because trust is earned through honesty.

What can't AccessLoad do?
We believe in transparency:
  • Browser rendering: We don't render JavaScript or simulate real browsers (that's browser-level testing, not API testing)
  • Desktop app protocols: No Citrix, SAP GUI, or RDP protocol emulation
  • Proprietary protocols: No Siebel, PeopleSoft, or mainframe-specific protocols
  • File upload testing: Limited support for multipart/form-data with large files
  • WebSocket bidirectional streaming: Basic support — complex pub/sub patterns may need custom configuration
AccessLoad focuses on API and backend performance testing — which is where 90%+ of modern applications live.
Can AccessLoad test a full website with JavaScript rendering?
AccessLoad tests API endpoints, not browser rendering. If your website is an SPA (React, Angular, Vue), the performance bottleneck is almost always the API backend, not the browser rendering. Test your API endpoints under load with AccessLoad — that's where server-side bottlenecks live. For front-end rendering performance, use browser-based tools like Lighthouse or WebPageTest.
Is AccessLoad suitable for testing a monolithic application with server-rendered pages?
Yes. Server-rendered pages are just HTTP endpoints that return HTML instead of JSON. AccessLoad can test them the same way — hit the URL, measure response time. You won't see how the HTML renders in a browser, but you'll see how fast your server generates the page under load — which is the critical metric for server-side performance.
Can AccessLoad guarantee my system won't fail in production?
No tool can guarantee that. Load testing reduces risk by exposing problems before production — but production has variables that are hard to replicate exactly (network conditions, user behavior patterns, third-party service availability). What AccessLoad does is give you confidence: "Our system handles 5,000 concurrent users with P95 under 500ms." That's actionable intelligence.
What if I need more than 25,000 VUs?
Contact our sales team for enterprise plans. With BYOC (private testing), you can scale by deploying more agents — each agent adds capacity. There's no architectural limit to the number of agents. For cloud testing beyond 25K VUs, we can provision dedicated infrastructure for your tests.
Can AccessLoad test non-HTTP services (message queues, databases directly)?
AccessLoad is designed for HTTP-based APIs (REST, GraphQL, gRPC). It doesn't directly test message queues (RabbitMQ, Kafka), databases (SQL queries), or non-HTTP services. However, if your message queue or database is accessed through an HTTP API, you can test that API layer with AccessLoad.

👥 Team & Collaboration

Load testing isn't a solo sport.

Can my whole team use AccessLoad?
Yes. There's no per-seat licensing. Your entire team — developers, QA engineers, DevOps, managers — can sign up and use AccessLoad. Everyone sees the same test results, reports, and AI analysis. Enterprise tools charge $5,000+ per seat — AccessLoad charges per plan, not per person.
Can different team members run different tests simultaneously?
Yes, subject to your plan's concurrent run limit (1/3/5 for Free/Pro/Business). Multiple team members can configure, run, and analyze tests independently. Each test run is attributed to the user who started it.
How do I onboard new team members who've never done load testing?
Send them this help page and have them sign up for the Free plan. Within 30 minutes they'll:
  • Run their first test (5 minutes)
  • Understand TPS, P95, error rate (this help page)
  • Create a chain test (15 minutes)
  • Know how to interpret AI analysis (10 minutes)
Compare this to legacy tools where onboarding takes 1–2 weeks of training. AccessLoad is designed so anyone who can use a web form can run a load test.
Can I share test results with a client or external stakeholder?
Yes. Download the PDF or HTML report and share via email, Slack, or any channel. The report is self-contained — no login required to view it. The PDF includes AI analysis in plain English, SLA verdicts, and key metrics — suitable for non-technical stakeholders.
Is there an API for automating test runs programmatically?
Yes. Use API keys for programmatic access. Trigger tests, retrieve results, and integrate with your automation workflows. This is how CI/CD integration works under the hood — and you can use the same API for custom automation scripts, scheduled tests, or integration with your monitoring tools.
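A hedged sketch of what such automation might look like. The host, endpoint path, payload fields, and auth header below are illustrative assumptions, not AccessLoad's documented API, so check the API reference for the real contract:

```python
import json
import urllib.request

API_KEY = "your-api-key"                      # assumption: bearer-token auth
BASE_URL = "https://accessload.example/api"   # placeholder, not the real host

def build_run_request(target_url, vus, duration_s):
    """Construct (but don't send) a hypothetical 'start test run' request."""
    payload = {"target": target_url, "vus": vus, "duration": duration_s}
    return urllib.request.Request(
        f"{BASE_URL}/runs",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request("https://api.example.com/health", vus=50, duration_s=60)
print(req.full_url)      # https://accessload.example/api/runs
print(req.get_method())  # POST
```

Sending the request with `urllib.request.urlopen(req)` (or scheduling it from cron or a CI job) is then a one-liner.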

🛠️ Troubleshooting

When something doesn't work as expected.

My test shows 100% errors — what's wrong?
Common causes:
  • Wrong URL: Check that the base URL is correct and the endpoint exists
  • Missing headers: Your API might require Content-Type, Authorization, or other headers
  • Firewall/WAF: Your server might be blocking requests from AccessLoad's IP addresses — use private testing (note: CORS only affects browsers, not load generators)
  • SSL issue: Self-signed certificates may cause TLS errors
  • API not running: Verify the API is actually accessible from outside your network
Check the error snapshots for the specific error message and status code.
My chain test's variable extraction isn't working. How do I debug it?
  • Check the response format: Run a single VU test first and look at the error snapshot to see the actual response body
  • Verify the JSON path: If the response is {"data": {"token": "abc"}}, the path should be data.token
  • Check for null responses: If the previous step returned an error, the extraction will fail — look at that step's errors first
  • Header name case: Header matching should be case-insensitive, but if extraction still fails, copy the header name exactly as it appears in the response
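The dot-path lookup described above can be reproduced locally to sanity-check an extraction expression before re-running the chain. This is a minimal sketch, not AccessLoad's actual extractor:

```python
import json

def extract(body, path):
    """Walk a dot-separated path (e.g. 'data.token') through a JSON body."""
    value = json.loads(body) if isinstance(body, str) else body
    for key in path.split("."):
        if not isinstance(value, dict) or key not in value:
            return None  # mirrors an extraction miss: inspect the real response
        value = value[key]
    return value

body = '{"data": {"token": "abc"}}'
print(extract(body, "data.token"))   # abc
print(extract(body, "data.userId"))  # None (wrong path, extraction fails)
```

If the lookup returns None here, the path is wrong or the response shape differs from what you expected, which is exactly what the single-VU snapshot check above will reveal.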
My test results seem unrealistic — response times are too low. Why?
If response times seem impossibly fast (sub-1ms), check:
  • Cached responses: Your CDN or API gateway might be serving cached responses
  • Health check endpoint: You might be testing a /health endpoint that returns instantly
  • Wrong environment: You might be testing a mock server instead of the real API
Run with more VUs to see if latency increases (real server) or stays flat (cached/mock).
My private agent isn't connecting. What should I check?
  • Outbound HTTPS: The agent needs outbound access to AccessLoad on port 443
  • API key: Verify the API key in the agent command is correct and active
  • DNS resolution: Ensure the machine can resolve accessload.com
  • Proxy: If behind a corporate proxy, set HTTPS_PROXY environment variable
  • Firewall: Check that outbound HTTPS isn't blocked by security software
My test ran successfully but the dashboard shows no data. Why?
This usually means the test completed too quickly (under 1–2 seconds) to stream metrics. Try:
  • Increase duration to at least 30 seconds
  • Increase VU count — more VUs generate more data points
  • Check if your API is responding instantly (cached/mock)
If the problem persists, check the test run's status in your history — if it shows "completed" with 0 requests, there may be a configuration issue.
How do I test a localhost/local development server?
Cloud testing can't reach localhost. Use private testing:
  • Deploy a private agent on your development machine
  • The agent can reach http://localhost:3000 or any local address
  • Results are uploaded to AccessLoad for full dashboard and AI analysis
This is also how you test APIs running in local containers, Kubernetes clusters, or dev VMs.

Still have questions?

Talk to our AI assistant (bottom-right corner on any page) or reach out to our team.

Start Free → Contact Us