Is Claude down right now?
This page shows the current operational status of Claude, Anthropic's AI assistant available at claude.ai and through the Anthropic API. The status indicator above is refreshed every 60 seconds against Anthropic's official status feed, combined with live user reports. Green means the chat product, the API, and console.anthropic.com are all responding normally. Amber means one of those components is degraded — usually elevated latency or partial regional issues. Red means a major incident is in progress.
Anthropic publishes incidents to its own status page, but in practice that page can lag the actual outage by twenty to forty minutes during a fast-moving event. The user-submitted report count on this page is intentionally faster — if you are seeing errors and the report counter is climbing, that is your signal regardless of what the official dot says.
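For readers who want to poll the official feed themselves, here is a minimal sketch of turning a status payload into the green/amber/red signal this page uses. It assumes the common Statuspage-style JSON shape (a top-level "status" object with an "indicator" field), which status.anthropic.com follows at the time of writing; verify the actual response before depending on it.

```python
import json

# Map Statuspage-style indicators to the traffic-light colors used on
# this page. The indicator values ("none", "minor", "major", "critical")
# are an assumption based on the common Statuspage JSON format.
INDICATOR_TO_COLOR = {
    "none": "green",
    "minor": "amber",
    "major": "red",
    "critical": "red",
}

def status_color(payload: str) -> str:
    """Return "green"/"amber"/"red" from a raw status JSON payload.
    Unknown or missing indicators map to amber as a cautious default."""
    indicator = json.loads(payload).get("status", {}).get("indicator", "")
    return INDICATOR_TO_COLOR.get(indicator, "amber")
```

Because the official feed can lag, a practical monitor would combine this with its own request-level error counts rather than trusting the indicator alone.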
Claude outage history and common causes
Claude runs on a multi-region serving architecture, so outages tend to fall into a few categories. Capacity-related incidents happen during periods of unusually high demand, often after a new model release or when a large enterprise customer rolls out at scale. These typically present as slow responses or 529 "overloaded" errors rather than full unreachability. Deployment-related incidents happen when a new version of the inference stack ships with regressions; these are often confined to a single model variant (e.g., Sonnet works while Opus fails).
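Because overload errors are transient by nature, the standard client-side response is exponential backoff with jitter rather than immediate retries. Below is a minimal, SDK-agnostic sketch; `request_fn` is a hypothetical callable returning a `(status_code, body)` pair, and the set of retryable status codes is an assumption based on the error classes described above, not an official list.

```python
import random
import time

# HTTP statuses worth retrying: 529 is the "overloaded" code mentioned
# above; 429 (rate limit) and 503 are also typically transient.
# (Assumed values, not an official list.)
RETRYABLE = {429, 503, 529}

def call_with_backoff(request_fn, max_attempts=5, base_delay=1.0):
    """Retry request_fn (a hypothetical callable returning
    (status_code, body)) with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        status, body = request_fn()
        if status not in RETRYABLE:
            return status, body
        # Exponential backoff: 1s, 2s, 4s, ... plus random jitter so
        # many clients don't retry in lockstep during an outage.
        delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
        time.sleep(delay)
    return status, body
```

The jitter matters during a real incident: without it, every client that failed at the same moment retries at the same moment, prolonging the overload.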
Other recurring causes include: upstream provider issues at AWS — Anthropic's infrastructure leans heavily on AWS, so us-east-1 problems propagate quickly; authentication issues that lock users out of claude.ai while the API works fine; and tool-use failures where the underlying model is fine but features like web search, file uploads, or computer use are broken. Claude's API uptime is generally a notch above the chat product's because the chat frontend is a more complex moving target.
Is Claude reliable?
Claude's reliability has improved meaningfully over the past year. Independent monitoring tracks 30-day availability in the 99.5 to 99.8 percent range for the API, with the chat product slightly behind. That is competitive with the rest of the AI assistant category and noticeably better than its early-2024 numbers, when capacity issues were a recurring complaint.
For most professional and consumer use cases Claude is reliable enough to be a primary tool. The pragmatic advice is the same advice that applies to every AI service: do not bet a real workflow on a single provider. Major chat assistants do not all go down at the same time. If you depend on AI for income-producing work, keep ChatGPT or Gemini open as a fallback. If you build on the API, the Anthropic SDK and the OpenAI SDK have similar enough interfaces that you can swap providers in a few lines of code when you need to.
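The provider-fallback pattern can be sketched in a few lines. This is a generic outline, not either vendor's API: each provider is a `(name, send_fn)` pair, where `send_fn` is a hypothetical callable that takes a prompt and either returns a reply string or raises.

```python
def ask_with_fallback(prompt, providers):
    """Try each provider in order until one succeeds.

    providers: list of (name, send_fn) pairs, where send_fn is a
    hypothetical callable that returns a reply string or raises on
    failure. Returns (provider_name, reply)."""
    errors = []
    for name, send in providers:
        try:
            return name, send(prompt)
        except Exception as exc:  # a real client would catch narrower error types
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")
```

In a real integration, each `send_fn` would wrap a vendor SDK call and normalize its response shape, so the rest of your workflow never needs to know which provider answered.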
What to do when Claude is down
First, check the status indicator at the top of this page. If it is red, the issue is on Anthropic's side and refreshing will not help. If it is green but you are seeing errors, the problem is more likely your network, your account, or a specific feature rather than the whole product.
Second, try a different surface. The Anthropic API frequently works when claude.ai does not, because the chat frontend is a separate deployment. The mobile app and the desktop app have separate release pipelines too. If you are using Claude through a third-party tool (Cursor, Zed, Perplexity Pro), that integration has its own failure modes that are not Anthropic's fault.
Third, switch tools. ChatGPT and Gemini can handle most of the same tasks. Perplexity is good for research and citations specifically. If you are stuck on coding, GitHub Copilot uses different infrastructure than the consumer chat assistants and tends to fail independently.
Finally, report the outage here. The user report counter is the trust signal that lets the next person who lands on this page know instantly whether to wait or switch tools.