2020-03-04
09:13 <_joe_> removing nginx from servers where it was just used for service proxying. [production]
09:09 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Set es2 as RO - T246072 (duration: 01m 14s) [production]
08:58 <akosiaris> released Giant Puppet Lock across the fleet. https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/ has made its way to all PoPs and most of codfw without issues, will make it to the rest of the fleet in the next 30 mins [production]
08:54 <addshore> START warm cache for db1111 & db1126 for Q20-25 million T219123 (pass 2 today) [production]
08:45 <akosiaris> running puppet on first mw host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, mw2269, rescheduling icinga checks as well [production]
08:41 <akosiaris> running puppet on first es host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, es2019, rescheduling icinga checks as well (correction) [production]
08:41 <akosiaris> running puppet on first es host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, db2019, rescheduling icinga checks as well [production]
08:41 <joal> Kill-restart mediawiki-history-reduced-coord [analytics]
08:41 <akosiaris> running puppet on first db host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, db2086, rescheduling icinga checks as well [production]
08:38 <joal> Kill-restart mediawiki-history-dumps-coord [analytics]
08:13 <addshore> START warm cache for db1111 & db1126 for Q20-25 million T219123 (pass 1 today) [production]
07:37 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10608 and previous config saved to /var/cache/conftool/dbconfig/20200304-073721-marostegui.json [production]
07:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10607 and previous config saved to /var/cache/conftool/dbconfig/20200304-071443-marostegui.json [production]
07:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10606 and previous config saved to /var/cache/conftool/dbconfig/20200304-070048-marostegui.json [production]
06:45 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10605 and previous config saved to /var/cache/conftool/dbconfig/20200304-064520-marostegui.json [production]
06:30 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
06:28 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime [production]
06:22 <cdanis> ✔️ cdanis@prometheus2004.codfw.wmnet ~ 🕝☕ sudo systemctl restart prometheus@ops [production]
06:21 <cdanis> ✔️ cdanis@prometheus2004.codfw.wmnet ~ 🕝☕ sudo systemctl reload prometheus@ops [production]
06:10 <marostegui> Stop MySQL on db1098:3316, db1098:3317 for upgrade - T246604 [production]
01:56 <mutante> mw2178 - systemctl reset-failed to clear (CRITICAL: Status of the systemd unit php7.2-fpm_check_restart) [production]
01:55 <mutante> mw2290 - systemctl reset-failed to clear (CRITICAL: Status of the systemd unit php7.2-fpm_check_restart) [production]
01:48 <mutante> mw1315 - restarted php-fpm and apache (was alerting in Icinga with 503 for 12 hours), log showed failed coredumps, restarts recovered it [production]
01:35 <wm-bot> <bd808> Updated to b58a6dc (Filter using keyword subfields) [tools.csp-report]
01:34 <James_F> Beta Cluster: Created deployment-parsoid11 in Horizon T246854 to test 576493. [releng]
01:31 <mutante> ganeti2003 - DRAC reset failed with "ipmi_cmd_cold_reset: BMC busy" [production]
01:30 <mutante> ganeti2003 - mgmt interface stopped responding on SSH, resetting DRAC via bmc-device from the host [production]
01:27 <wm-bot> <bd808> Updated to 91c10d8 (Update for change in search results) and switch to elastic7 backend cluster [tools.csp-report]
01:23 <Krenair> Made James a deployment-prep projectadmin [releng]
00:25 <ebernhardson@deploy1001> Synchronized php-1.35.0-wmf.21/extensions/WikimediaEvents/modules/ext.wikimediaEvents/searchSatisfaction.js: [cirrus] Match fallback config key with the one used in cirrus (duration: 01m 03s) [production]
00:23 <ebernhardson@deploy1001> Synchronized php-1.35.0-wmf.22/extensions/WikimediaEvents/modules/ext.wikimediaEvents/searchSatisfaction.js: [cirrus] Match fallback config key with the one used in cirrus (duration: 01m 04s) [production]
00:15 <ebernhardson@deploy1001> Synchronized wmf-config/SearchSettingsForWikibase.php: [cirrus] move similarity settings to IS.php (duration: 01m 05s) [production]
00:13 <ebernhardson@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [cirrus] move similarity settings to IS.php (duration: 01m 04s) [production]
00:06 <ebernhardson@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [cirrus] configure wgCirrusSearchMaxShardsPerNode per cluster (duration: 01m 05s) [production]
00:06 <ebernhardson> post-deployment restart mjolnir-kafka-bulk-daemon across eqiad and codfw [production]
00:05 <ebernhardson@deploy1001> Finished deploy [search/mjolnir/deploy@1c97543]: Bump mjolnir to master: Revert stream gzip decompression (duration: 05m 25s) [production]
00:00 <ebernhardson@deploy1001> Started deploy [search/mjolnir/deploy@1c97543]: Bump mjolnir to master: Revert stream gzip decompression [production]
2020-03-03
23:13 <wm-bot> <bd808> Updated to f249e23c (Update for ElasticSearch 7 backend) and switched to http://elasticsearch.svc.tools.eqiad1.wikimedia.cloud backend [tools.bash]
22:51 <James_F> Zuul: Add gabrielchihonglee to CI allow list [releng]
22:43 <James_F> Docker: Publish mediawiki-phan-testrun 0.1.4 [releng]
22:28 <wm-bot> <zppix1> add wikibugs to ignore list and restart bot to deploy [tools.zppixbot]
22:23 <bstorm_> deleted all resources on the old cluster T246519 [tools.paws-public]
22:20 <bd808> Added bstorm_, chicocvenancio, and bd808 as co-maintainers [tools.paws-public]
22:20 <bstorm_> created ingress for paws-public.wmflabs.org and deleted old service object T246519 [tools.paws-public]
22:07 <bstorm_> removing the service object on the old cluster again T246519 [tools.paws-public]
22:00 <thcipriani> reloading zuul to deploy https://gerrit.wikimedia.org/r/575575 [releng]
21:57 <bstorm_> recreated the service on the old cluster because it didn't work right away? T246519 [tools.paws-public]
21:55 <bstorm_> launched ingress on the new cluster, removing the service object on the old cluster T246519 [tools.paws-public]
21:48 <jforrester@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Touch and secondary sync of IS for cache-busting (duration: 01m 04s) [production]
21:46 <jforrester@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [wikidatawiki] Note that MostRevisions and MostLinked have been disabled (duration: 01m 05s) [production]