Incidents/2025-03-12 ExternalStorage Database Cluster Overload
Summary
During the scheduled PHP 8.1 Scap rollout for mw-{api-int,parsoid,jobrunner} on March 12th and 13th 2025, the affected deployments ended up running without access to the Memcached cluster and without a valid Prometheus StatsD address.
Without Memcached access, MediaWiki had to query the databases for objects that would typically be cached, leading to increased database load. At the time, MediaWiki was pushing stats to both Graphite and prometheus-statsd-exporter, so the lack of a valid Prometheus StatsD address had no impact.
During the incidents, we observed high load and a sharp increase in connections to the External Storage cluster, while the majority of the MediaWiki errors (channel: error) visible at the time were database related. The External Storage hosts store the compressed text content of wiki page revisions.
However, there was an overwhelming volume of memcached error events (channel: memcached, ~2 million/min) which Logstash was unable to process quickly enough, resulting in an observability gap:
- Logstash itself was probably unable to render the channel:memcached Logstash dashboard until after the incident, once it had consumed all of its backlog.
- Delays in the logstash-prometheus pipeline caused the MediaWikiMemcachedHighErrorRate alert to be missing, while the corresponding Grafana graph displayed no data during the incidents.
On both days, after reverting the patches in question and running helmfile or re-running scap, everything went back to normal.
Timeline
All times in UTC.
12th March 2025
- 10:44 Template:Ircnick runs scap to deploy 1126607[puppet] and 1126650[deployment-charts] for the mw-{api-int,parsoid,jobrunner} PHP 8.1 rollout
- 10:58 UTC: alerts for high backend response times started coming in
- 13:19 Template:Ircnick and Template:Ircnick deploy a patch in cirrus-streaming-updater to reduce SUP parallelism 1126988[deployment-charts]
- 13:33 Template:Ircnick reduces the concurrency of the categoryMembershipChange job in changeprop-jobqueue 1127000[deployment-charts]
- 13:44 Template:Ircnick re-runs Scap after reverting 1126607[puppet] and 1126650[deployment-charts]
13th March 2025
- 11:15 Template:Ircnick begins a scap deployment (yes, again) to deploy 1127476[puppet] and 1127478[deployment-charts] for the mw-{api-int,parsoid,jobrunner} PHP 8.1 rollout
- 11:34 Template:Ircnick performs a rolling restart on changeprop-jobqueue
- 11:43 Template:Ircnick performs a rolling restart of mw-api-int
- 11:44 Template:Ircnick disables the categoryMembershipChange job in changeprop-jobqueue 1127500[deployment-charts]
- 11:46 Template:Ircnick reverts 1127476[puppet] and 1127478[deployment-charts]
- 11:58 Template:Ircnick redeploys mw-api-int to pick up the revert
- 12:07 Template:Ircnick bumps the number of replicas for mw-api-int
- 12:28 Template:Ircnick redeploys mw-jobrunner to pick up the revert
- 14:06 Template:Ircnick redeploys mw-parsoid to pick up the revert
Detection
TBA
Contributing Factors
Key factors that contributed to causing this incident, as well as to delaying its root-causing, were: Scap, the php-fpm envvars.inc include file, and the logstash-prometheus pipeline.
What does Scap do?
Scap is our deployment tool for MediaWiki. Scap takes care of 3 very important steps:
- Builds and pushes MediaWiki container images
- Updates helmfile-defaults with the latest image version tag per deployment and per release
  - The image flavour (whether it is a 7.4 or an 8.1 image) of each deployment-release combination is defined in puppet, in kubernetes.yaml
- Runs helmfile on all deployments running MediaWiki
To provide a visual example, the latest scap run updated the helmfile-defaults for the main release of mw-parsoid (aka mw-parsoid-main) as follows:
docker:
  registry: docker-registry.discovery.wmnet
main_app:
  image: restricted/mediawiki-multiversion:2025-03-18-101751-publish-81
mw:
  httpd:
    image_tag: restricted/mediawiki-webserver:2025-03-18-101751-webserver
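For completeness, the flavour selection mentioned above lives in puppet's kubernetes.yaml. A rough sketch of what such a stanza might look like is shown below; the key names (mw_flavour, web_flavour) and the layout are assumptions for illustration only, not the exact hieradata structure:
# hypothetical hieradata stanza; key names and structure are illustrative only
mw-parsoid:
  namespace: mw-parsoid
  releases:
    main:
      mw_flavour: publish-81   # would select the PHP 8.1 MediaWiki image flavour
      web_flavour: webserver   # would select the httpd image flavour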
What is this envvars.inc include file in php-fpm?
We export two very important environment variables to php-fpm:
- MCROUTER_SERVER: a static IP address defined in deployment-charts, essentially the memcached/mcrouter address; defaults to 127.0.0.1:11213
- STATSD_EXPORTER_PROMETHEUS_SERVICE_HOST: populated and injected into pods by the Kubernetes API; unset by default
In Kubernetes, we put both variables in a ConfigMap called mediawiki-main-php-envvars, which is in turn mounted in the container as /etc/php/<X.Y>/fpm/env/envvars.inc.
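As an illustration, here is a minimal sketch of what the rendered ConfigMap could look like, assuming placeholder values (the real mcrouter IP and chart-rendered comments differ); the env[...] lines use php-fpm's pool environment syntax:
# illustrative sketch of the mediawiki-main-php-envvars ConfigMap; values are placeholders
apiVersion: v1
kind: ConfigMap
metadata:
  name: mediawiki-main-php-envvars
data:
  envvars.inc: |
    ; environment variables handed to php-fpm workers (placeholder values)
    env[MCROUTER_SERVER] = 10.x.x.x:11213
    env[STATSD_EXPORTER_PROMETHEUS_SERVICE_HOST] = 10.x.x.x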
PHP-FPM reads these environment variables from a hardcoded include directory, whose exact location depends on the PHP version.
In the publish-74 container image, that would be:
[www]
listen = ${FCGI_URL}
<snip>
; MediaWiki helm chart via the php.envvars value.
include = /etc/php/7.4/fpm/env/*.inc
In the publish-81 container image, that would be:
[www]
listen = ${FCGI_URL}
<snip>
; MediaWiki helm chart via the php.envvars value.
include = /etc/php/8.1/fpm/env/*.inc
So, what was broken then?
Our PHP 8.1 mw-{api-int,parsoid,jobrunner} rollout consisted of two sister patches:
- 1126607[puppet], switching the MediaWiki image flavour of mw-{api-int,parsoid,jobrunner} from publish-74 to publish-81
- 1126650[deployment-charts], which in practice would change the mount location of envvars.inc from /etc/php/7.4/fpm/env/envvars.inc to /etc/php/8.1/fpm/env/envvars.inc
We performed a scap deployment to roll out the above. Our expectation was that after the deployment we would have:
- mw-{api-int,parsoid,jobrunner} running the mediawiki-multiversion publish-81 image, and
- the mediawiki-main-php-envvars ConfigMap mounted as /etc/php/8.1/fpm/env/envvars.inc
Due to an unexpected Scap behavior, explained below, what was actually rolled out in production was:
- mw-{api-int,parsoid,jobrunner} running the mediawiki-multiversion publish-74 image
- the mediawiki-main-php-envvars ConfigMap mounted at /etc/php/8.1/fpm/env/envvars.inc
As the PHP 7.4 image (publish-74) was in use, php-fpm only read the includes under /etc/php/7.4/fpm/env/*.inc, which contained default values; the ConfigMap mounted at the 8.1 path was never read, so both variables fell back to the defaults described above.
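To visualise the mismatch, here is a simplified, hypothetical slice of the resulting pod spec (container and volume names are illustrative, not the chart's exact ones): the ConfigMap sits at the 8.1 path while the image is still the 7.4 flavour, whose php-fpm only includes /etc/php/7.4/fpm/env/*.inc:
containers:
  - name: mediawiki-main-app                                    # illustrative container name
    image: restricted/mediawiki-multiversion:<tag>-publish-74   # still the 7.4 flavour
    volumeMounts:
      - name: php-envvars                  # illustrative volume name
        mountPath: /etc/php/8.1/fpm/env    # 1126650 moved the mount to the 8.1 path
volumes:
  - name: php-envvars
    configMap:
      name: mediawiki-main-php-envvars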
How did we break this?
During the scheduled deployment, the Scap command was executed with the flag -Dbuild_mw_container_image:False. This flag is commonly utilised by Site Reliability Engineers (SREs) as, in most cases, our changes do not necessitate rebuilding container images. Specifically, transitioning the main release of mw-{api-int,parsoid,jobrunner} to the publish-81 image would not require an image rebuild, as we already have publish-81 built and cached.
However, this transition would necessitate updates to the helmfile-defaults of the main releases for mw-{api-int,parsoid,jobrunner}, so as to replace the latest -publish-74 image tag with the latest -publish-81 one. Unfortunately, it was not immediately apparent that using the flag -Dbuild_mw_container_image:False would additionally cause scap to skip the helmfile-defaults update.
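In practice that means the helmfile-defaults for, e.g., mw-parsoid-main kept pointing at the previous publish-74 tag, along these lines (the timestamp tag is an illustrative placeholder, not the actual value):
docker:
  registry: docker-registry.discovery.wmnet
main_app:
  image: restricted/mediawiki-multiversion:<previous-timestamp>-publish-74   # stale tag, never swapped to -publish-81
mw:
  httpd:
    image_tag: restricted/mediawiki-webserver:<previous-timestamp>-webserver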
Conclusions
OPTIONAL: General conclusions (bullet points or narrative)
What went well?
- …
OPTIONAL: (Use bullet points) for example: automated monitoring detected the incident, outage was root-caused quickly, etc
What went poorly?
- …
OPTIONAL: (Use bullet points) for example: documentation on the affected service was unhelpful, communication difficulties, etc
Where did we get lucky?
- …
OPTIONAL: (Use bullet points) for example: user's error report was exceptionally detailed, incident occurred when the most people were online to assist, etc
Links to relevant documentation
- …
Add links to information that someone responding to this alert should have (runbook, plus supporting docs). If that documentation does not exist, add an action item to create it.
Actionables
- …
Create a list of action items that will help prevent this from happening again as much as possible. Link to or create a Phabricator task for every step.
Add the #Sustainability (Incident Followup) and the #SRE-OnFire Phabricator tag to these tasks.
Scorecard
| | Question | Answer (yes/no) | Notes |
|---|---|---|---|
| People | Were the people responding to this incident sufficiently different than the previous five incidents? | | |
| | Were the people who responded prepared enough to respond effectively? | | |
| | Were fewer than five people paged? | | |
| | Were pages routed to the correct sub-team(s)? | | |
| | Were pages routed to online (business hours) engineers? Answer “no” if engineers were paged after business hours. | | |
| Process | Was the "Incident status" section atop the Google Doc kept up-to-date during the incident? | | |
| | Was a public wikimediastatus.net entry created? | | |
| | Is there a phabricator task for the incident? | | |
| | Are the documented action items assigned? | | |
| | Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence? | | |
| Tooling | To the best of your knowledge was the open task queue free of any tasks that would have prevented this incident? Answer “no” if there are open tasks that would prevent this incident or make mitigation easier if implemented. | | |
| | Were the people responding able to communicate effectively during the incident with the existing tooling? | | |
| | Did existing monitoring notify the initial responders? | | |
| | Were the engineering tools that were to be used during the incident available and in service? | | |
| | Were the steps taken to mitigate guided by an existing runbook? | | |
| | Total score (count of all “yes” answers above) | | |