Incidents/2025-03-12 ExternalStorage Database Cluster Overload


Template:Irdoc

Summary

Template:Incident scorecard

During the scheduled PHP 8.1 Scap rollout for mw-{api-int,parsoid,jobrunner} on March 12th & 13th 2025, the affected deployments ended up running without access to the Memcached cluster and without a valid prometheus StatsD address.

Without Memcached access, MediaWiki had to query the databases for objects that would typically be cached, leading to increased database load. At the time, MediaWiki was pushing stats to both Graphite and prometheus-statsd-exporter, thus the lack of a valid prometheus statsd address had no impact.  

During the incidents, we observed high load and a sharp increase in connections to the External Storage cluster, while the majority of MediaWiki errors (channel: error) visible at the time were database related. The External Storage hosts are used to store the compressed text content of wiki page revisions.

However, there was an overwhelming volume of memcached error events (channel: memcached, ~2 million/min), which logstash was unable to process quickly enough, resulting in an observability gap where:

  • The channel:memcached logstash dashboard probably could not be rendered until after the incident, once Logstash had consumed its entire backlog
  • Delays in the mw → logstash → prometheus-es-exporter → prometheus pipeline prevented the MediaWikiMemcachedHighErrorRate alert from firing, while the corresponding Grafana graph displayed no data during the incidents.

On both days, after reverting said patches and running helmfile or re-running scap, everything went back to normal.

Template:TOC

Timeline

All times in UTC.

12th March 2025

13th March 2025

Detection

TBA

Contributing Factors

Key factors that contributed to causing this incident, as well as to delaying its root-cause analysis, were: scap, the php-fpm envvars.inc include file, and the logstash-prometheus pipeline.

What does Scap do?

Scap is our deployment tool for MediaWiki. Scap takes care of three very important steps:

  • Build and push MediaWiki images
  • Update helmfile-defaults with the latest image version tag per deployment and per release.
    • The image flavour (whether it is a 7.4 or an 8.1 image) of each deployment-release combination is defined in puppet in kubernetes.yaml (see the sketch after the helmfile-defaults example below)
  • Run helmfile on all deployments running MediaWiki

To provide a visual example, the latest scap run updated the helmfile-defaults for the main release of mw-parsoid (aka mw-parsoid-main) as follows:

docker:
  registry: docker-registry.discovery.wmnet
main_app:
  image: restricted/mediawiki-multiversion:2025-03-18-101751-publish-81
mw:
  httpd:
    image_tag: restricted/mediawiki-webserver:2025-03-18-101751-webserver
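The image flavour mapping mentioned in the list above lives in puppet. The snippet below is a rough sketch only: the key names and layout are assumptions rather than a copy of the real kubernetes.yaml, and are meant to convey that each deployment-release combination carries its own flavour:

# Sketch only: the structure and key names of the real kubernetes.yaml in
# puppet may differ; the point is one image flavour per deployment/release.
mw-parsoid:
  releases:
    main:
      mw_flavour: publish-81      # previously publish-74, switched by the puppet patch
mw-api-int:
  releases:
    main:
      mw_flavour: publish-81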

What is this envvars.inc include file in php-fpm?

We export two very important environment variables to php-fpm:

  • MCROUTER_SERVER: static IP address defined in deployment-charts, essentially the memcached/mcrouter address, defaults to 127.0.0.1:11213
  • STATSD_EXPORTER_PROMETHEUS_SERVICE_HOST: populated and injected into pods by the k8s api, unset by default


In kubernetes, we put both variables in a ConfigMap called mediawiki-main-php-envvars, which is in turn mounted in the container as /etc/php/<X.Y>/fpm/env/envvars.inc
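As a rough sketch of how this fits together (illustrative only: the values, the pass-through of the StatsD variable, and the volume name are assumptions, not copied from the chart or production):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mediawiki-main-php-envvars
data:
  envvars.inc: |
    ; illustrative values: MCROUTER_SERVER is a static value from
    ; deployment-charts, the StatsD variable is assumed to be passed
    ; through from the pod environment
    env[MCROUTER_SERVER] = 127.0.0.1:11213
    env[STATSD_EXPORTER_PROMETHEUS_SERVICE_HOST] = $STATSD_EXPORTER_PROMETHEUS_SERVICE_HOST

# Mounted into the php-fpm container so that the envvars.inc key becomes a
# file under the version-specific include directory (volume name assumed):
#   volumeMounts:
#     - name: php-envvars
#       mountPath: /etc/php/8.1/fpm/env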

PHP-FPM reads the environment variables from a hardcoded include directory, whose exact location depends on the PHP version.

In the publish-74 container image, that would be:

[www]
listen = ${FCGI_URL}
<snip>
; MediaWiki helm chart via the php.envvars value.
include = /etc/php/7.4/fpm/env/*.inc

In the publish-81 container image, that would be:

[www]
listen = ${FCGI_URL}
<snip>
; MediaWiki helm chart via the php.envvars value.
include = /etc/php/8.1/fpm/env/*.inc

So, what was broken then?

Our PHP 8.1 mw-{api-int,parsoid,jobrunner} rollout consisted of two sister patches:

  • 1126607 [puppet], switching the MediaWiki image flavour of mw-{api-int,parsoid,jobrunner} from publish-74 to publish-81
  • 1126650 [deployment-charts], which in practice would change the mount location of envvars.inc from /etc/php/7.4/fpm/env/envvars.inc to /etc/php/8.1/fpm/env/envvars.inc

We performed a scap deployment to deploy the above. Our expectation was that after the deployment we would have:

  • mw-{api-int,parsoid,jobrunner} running the mediawiki-multiversion publish-81 image, and
  • The mediawiki-main-php-envvars ConfigMap mounted as /etc/php/8.1/fpm/env/envvars.inc

Due to an unexpected Scap behavior, explained below, what was actually rolled out in production was:

  • mw-{api-int,parsoid,jobrunner} running the mediawiki-multiversion publish-74 image
  • The mediawiki-main-php-envvars ConfigMap mounted at /etc/php/8.1/fpm/env/envvars.inc

As the PHP 7.4 image (publish-74) was in use, PHP-FPM read its includes from /etc/php/7.4/fpm/env/*.inc, which contained only default values, so the mounted variables were never picked up.
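Put side by side, the mismatch looked roughly like this (include paths are taken from the pool configs above; the image tags are placeholders):

# Expected state after the deployment:
#   image:   mediawiki-multiversion:<new-timestamp>-publish-81
#   include: /etc/php/8.1/fpm/env/*.inc   <- envvars.inc mounted here, values picked up
#
# Actual state rolled out:
#   image:   mediawiki-multiversion:<old-timestamp>-publish-74
#   include: /etc/php/7.4/fpm/env/*.inc   <- only defaults here; the real
#            envvars.inc sat unread at /etc/php/8.1/fpm/env/envvars.inc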

How did we break this?

During the scheduled deployment, the Scap command was executed with the flag -Dbuild_mw_container_image:False. This flag is commonly used by Site Reliability Engineers (SREs) because, in most cases, our changes do not necessitate rebuilding container images. Specifically, transitioning the main release of mw-{api-int,parsoid,jobrunner} to the publish-81 image would not require an image rebuild, as publish-81 was already built and cached.

However, this transition would necessitate updates to the helmfile-defaults of the main releases for mw-{api-int,parsoid,jobrunner}, so as to replace the latest -publish-74 image tag with the latest -publish-81 one. Unfortunately, it was not immediately apparent that using the flag -Dbuild_mw_container_image:False would additionally cause scap to skip the helmfile-defaults update.
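To illustrate what that skipped step meant in practice, the helmfile-defaults for e.g. mw-parsoid-main kept pointing at the previous -publish-74 image tag, roughly along these lines (the tag below is a placeholder, not an actual build):

docker:
  registry: docker-registry.discovery.wmnet
main_app:
  # stale: with -Dbuild_mw_container_image:False, scap skipped the
  # helmfile-defaults update, so this tag was never bumped to a -publish-81 build
  image: restricted/mediawiki-multiversion:<previous-build-timestamp>-publish-74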

Conclusions

OPTIONAL: General conclusions (bullet points or narrative)

What went well?

OPTIONAL: (Use bullet points) for example: automated monitoring detected the incident, outage was root-caused quickly, etc

What went poorly?

OPTIONAL: (Use bullet points) for example: documentation on the affected service was unhelpful, communication difficulties, etc

Where did we get lucky?

OPTIONAL: (Use bullet points) for example: user's error report was exceptionally detailed, incident occurred when the most people were online to assist, etc

Add links to information that someone responding to this alert should have (runbook, plus supporting docs). If that documentation does not exist, add an action item to create it.

Actionables

Create a list of action items that will help prevent this from happening again as much as possible. Link to or create a Phabricator task for every step.

Add the #Sustainability (Incident Followup) and the #SRE-OnFire Phabricator tag to these tasks.

Scorecard

Incident Engagement ScoreCard

Each question is answered yes/no, with optional notes.

People
  • Were the people responding to this incident sufficiently different than the previous five incidents?
  • Were the people who responded prepared enough to respond effectively?
  • Were fewer than five people paged?
  • Were pages routed to the correct sub-team(s)?
  • Were pages routed to online (business hours) engineers? Answer “no” if engineers were paged after business hours.

Process
  • Was the "Incident status" section atop the Google Doc kept up-to-date during the incident?
  • Was a public wikimediastatus.net entry created?
  • Is there a phabricator task for the incident?
  • Are the documented action items assigned?
  • Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence?

Tooling
  • To the best of your knowledge was the open task queue free of any tasks that would have prevented this incident? Answer “no” if there are open tasks that would prevent this incident or make mitigation easier if implemented.
  • Were the people responding able to communicate effectively during the incident with the existing tooling?
  • Did existing monitoring notify the initial responders?
  • Were the engineering tools that were to be used during the incident available and in service?
  • Were the steps taken to mitigate guided by an existing runbook?

Total score (count of all “yes” answers above):