Is Perplexity down right now?
This page tracks the current status of Perplexity, the AI-powered answer engine at perplexity.ai. The indicator above is refreshed every 60 seconds. Green means the public site is responding normally. Amber means it is degraded — usually slow answers, missing citations, or partial feature outages. Red means a major incident is in progress.
Perplexity is a layer on top of multiple model providers and a real-time web index, which gives it failure modes that a single-vendor service does not have. The frontend can be perfectly healthy while the answer engine underneath is broken because one of its upstream model providers (OpenAI, Anthropic, or its own Sonar models) is down. In those cases, the user report counter on this page is often the fastest signal that something is wrong.
Perplexity outage history and common causes
Perplexity's architecture is unusual: it routes queries to a mix of language models and a real-time web crawler, then synthesizes an answer with citations. Any of those components can fail independently. The most common Perplexity outages of the past year have been silent ones — the site loads, queries return, but the answers are clearly broken (missing citations, repeating results, or refusing to use the web index). These do not register on uptime monitors but are the failure mode users complain about most.
Specific recurring issues include:
- cascading failures when one of Perplexity's upstream model providers (OpenAI's API, Anthropic's API) is itself down
- indexing failures that cause stale or irrelevant results
- authentication problems that lock Pro users out
- mobile app outages tied to specific app versions
- feature-level breakage, where Pages, Spaces, or specific search modes (Pro Search, Deep Research) fail without taking down the rest of the product
Is Perplexity reliable?
Perplexity's basic uptime is competitive with the rest of the AI category — the site itself is reachable nearly all the time. The trickier reliability question is answer quality, which depends on upstream services Perplexity does not control. When OpenAI or Anthropic have a bad day, Perplexity's answer quality drops too because it is routing queries to those providers under the hood.
For research-grade work, the practical advice is to verify important answers against their source citations, which Perplexity makes easy by design. For casual use, Perplexity is reliable enough to be a primary tool. If you do depend on it, keep a fallback: ChatGPT with browsing enabled or Gemini with grounding both cover similar ground.
What to do when Perplexity is down
First, check the indicator at the top of this page. If it is red, the issue is on Perplexity's side. If it is green but you are seeing strange answers, the issue is more likely with one of Perplexity's upstream model providers — check the ChatGPT and Claude status pages on this site.
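If you would rather check reachability yourself than trust any status page, a raw HTTP probe separates "site unreachable" from "site up but answers broken". This is a minimal sketch using only the Python standard library; it assumes the perplexity.ai homepage answers a plain GET, and the green/amber/red mapping below is an illustrative choice mirroring the indicator described above, not Perplexity's own logic.

```python
from urllib import request, error

def classify(code):
    """Map an HTTP status code to an illustrative traffic-light label."""
    if 200 <= code < 300:
        return "green"          # site responding normally
    if code in (429, 503):
        return "amber"          # rate-limited or briefly overloaded
    return "red"                # anything else suggests a real incident

def probe(url="https://www.perplexity.ai", timeout=5):
    """Do one GET against the public site and return (label, detail)."""
    # Some CDNs reject urllib's default User-Agent, so send a browser-like one.
    req = request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with request.urlopen(req, timeout=timeout) as resp:
            return classify(resp.status), resp.status
    except error.HTTPError as exc:
        return classify(exc.code), exc.code
    except (error.URLError, TimeoutError) as exc:
        return "red", str(exc)  # DNS failure, refused connection, timeout
```

Note that a green result here only means the frontend is serving pages; the silent failure modes described earlier will still pass this check.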
Second, try a different surface. The Perplexity mobile app and the desktop site share a backend but have separate frontends, so issues sometimes hit only one. The Comet browser also has Perplexity integration and can serve as a fallback.
Third, switch tools. For web-grounded research questions, ChatGPT with browsing and Gemini with grounding both cover the same use case. Google Search itself is the lowest-friction fallback if you just need an answer fast and Perplexity is not cooperating.
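If you script against these tools, the "switch tools" step can be automated as a simple ordered-fallback pattern: try Perplexity first, and on failure move to the next provider. The sketch below is generic glue code; the provider names and `ask` callables are hypothetical stand-ins for whatever API clients you actually use.

```python
def ask_with_fallback(query, providers):
    """Try each (name, ask) provider in order; return the first answer.

    Each `ask` callable takes the query string and either returns an
    answer or raises. Failures are collected so the final error explains
    what went wrong at every step.
    """
    failures = {}
    for name, ask in providers:
        try:
            return name, ask(query)
        except Exception as exc:  # a real client would catch narrower errors
            failures[name] = str(exc)
    raise RuntimeError(f"all providers failed: {failures}")

# Hypothetical usage: the callables here are placeholders, not real clients.
# providers = [("perplexity", perplexity_ask),
#              ("chatgpt",    chatgpt_browse),
#              ("gemini",     gemini_grounded)]
# name, answer = ask_with_fallback("is perplexity down?", providers)
```

Keeping the provider list in priority order means a Perplexity outage degrades your pipeline rather than breaking it outright.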
Finally, report the issue on this page. The user report counter is the trust signal that helps the next person who lands here decide whether to wait or move on.