2020-03-04
10:05 <marostegui> es2 maintenance window over T246072 [production]
09:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Give some weight to es2 master es1015 and es2016, now standalone - T246072', diff saved to https://phabricator.wikimedia.org/P10609 and previous config saved to /var/cache/conftool/dbconfig/20200304-095919-marostegui.json [production]
09:55 <marostegui> Reset replication on es2 hosts - T246072 [production]
09:44 <moritzm> installing python-bleach security updates [production]
09:43 <marostegui> Set es1015 (es2 master) on read_only - T246072 [production]
09:38 <addshore> START warm cache for db1111 & db1126 for Q20-25 million T219123 (pass 3 today) [production]
09:21 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Set es2 as RO - T246072 (duration: 01m 04s) [production]
09:13 <_joe_> removing nginx from servers where it was just used for service proxying. [production]
09:09 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Set es2 as RO - T246072 (duration: 01m 14s) [production]
08:58 <akosiaris> release Giant Puppet Lock across the fleet. https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/ has made its way to all PoPs and most of codfw without issues, will make it in the rest of the fleet in the next 30mins [production]
08:54 <addshore> START warm cache for db1111 & db1126 for Q20-25 million T219123 (pass 2 today) [production]
08:45 <akosiaris> running puppet on first mw host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, mw2269, rescheduling icinga checks as well [production]
08:41 <akosiaris> running puppet on first es host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, es2019, rescheduling icinga checks as well (correction) [production]
08:41 <akosiaris> running puppet on first es host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, db2019, rescheduling icinga checks as well [production]
08:41 <akosiaris> running puppet on first db host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, db2086, rescheduling icinga checks as well [production]
08:13 <addshore> START warm cache for db1111 & db1126 for Q20-25 million T219123 (pass 1 today) [production]
07:37 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10608 and previous config saved to /var/cache/conftool/dbconfig/20200304-073721-marostegui.json [production]
07:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10607 and previous config saved to /var/cache/conftool/dbconfig/20200304-071443-marostegui.json [production]
07:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10606 and previous config saved to /var/cache/conftool/dbconfig/20200304-070048-marostegui.json [production]
06:45 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10605 and previous config saved to /var/cache/conftool/dbconfig/20200304-064520-marostegui.json [production]
06:30 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
06:28 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime [production]
06:22 <cdanis> cdanis@prometheus2004.codfw.wmnet ~ sudo systemctl restart prometheus@ops [production]
06:21 <cdanis> cdanis@prometheus2004.codfw.wmnet ~ sudo systemctl reload prometheus@ops [production]
06:10 <marostegui> Stop MySQL on db1098:3316, db1098:3317 for upgrade - T246604 [production]
01:56 <mutante> mw2178 - systemctl reset-failed to clear (CRITICAL: Status of the systemd unit php7.2-fpm_check_restart) [production]
01:55 <mutante> mw2290 - systemctl reset-failed to clear (CRITICAL: Status of the systemd unit php7.2-fpm_check_restart) [production]
01:48 <mutante> mw1315 - restarted php-fpm and apache (was alerting in Icinga with 503 for 12 hours), log showed failed coredumps, restarts recovered it [production]
01:31 <mutante> ganeti2003 - DRAC reset failed with "ipmi_cmd_cold_reset: BMC busy" [production]
01:30 <mutante> ganeti2003 - mgmt interface stopped responding on SSH, resetting DRAC via bmc-device from the host [production]
00:25 <ebernhardson@deploy1001> Synchronized php-1.35.0-wmf.21/extensions/WikimediaEvents/modules/ext.wikimediaEvents/searchSatisfaction.js: [cirrus] Match fallback config key with the one used in cirrus (duration: 01m 03s) [production]
00:23 <ebernhardson@deploy1001> Synchronized php-1.35.0-wmf.22/extensions/WikimediaEvents/modules/ext.wikimediaEvents/searchSatisfaction.js: [cirrus] Match fallback config key with the one used in cirrus (duration: 01m 04s) [production]
00:15 <ebernhardson@deploy1001> Synchronized wmf-config/SearchSettingsForWikibase.php: [cirrus] move similarity settings to IS.php (duration: 01m 05s) [production]
00:13 <ebernhardson@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [cirrus] move similarity settings to IS.php (duration: 01m 04s) [production]
00:06 <ebernhardson@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [cirrus] configure wgCirrusSearchMaxShardsPerNode per cluster (duration: 01m 05s) [production]
00:06 <ebernhardson> post-deployment restart mjolnir-kafka-bulk-daemon across eqiad and codfw [production]
00:05 <ebernhardson@deploy1001> Finished deploy [search/mjolnir/deploy@1c97543]: Bump mjolnir to master: Revert stream gzip decompression (duration: 05m 25s) [production]
00:00 <ebernhardson@deploy1001> Started deploy [search/mjolnir/deploy@1c97543]: Bump mjolnir to master: Revert stream gzip decompression [production]
2020-03-03
21:48 <jforrester@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Touch and secondary sync of IS for cache-busting (duration: 01m 04s) [production]
21:46 <jforrester@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [wikidatawiki] Note that MostRevisions and MostLinked have been disabled (duration: 01m 05s) [production]
21:33 <otto@deploy1001> helmfile [EQIAD] Ran 'apply' command on namespace 'eventgate-logging-external' for release 'canary' . [production]
21:33 <otto@deploy1001> helmfile [EQIAD] Ran 'apply' command on namespace 'eventgate-logging-external' for release 'production' . [production]
21:13 <thcipriani@deploy1001> Synchronized php-1.35.0-wmf.22/includes/Defines.php: [[gerrit:576439|Update MW_VERSION to 1.35.0-wmf.22]] (duration: 01m 06s) [production]
20:59 <vgutierrez> Starting pybal on lvs1013 [production]
20:54 <vgutierrez> rebooting lvs1013 [production]
20:44 <joal@deploy1001> Finished deploy [analytics/refinery@264c7ec] (thin): Regular weekly analytics deploy (duration: 00m 07s) [production]
20:44 <joal@deploy1001> Started deploy [analytics/refinery@264c7ec] (thin): Regular weekly analytics deploy [production]
20:43 <joal@deploy1001> Finished deploy [analytics/refinery@264c7ec]: Regular (duration: 13m 05s) [production]
20:42 <vgutierrez> stopping pybal on lvs1013 [production]
20:42 <otto@deploy1001> helmfile [CODFW] Ran 'apply' command on namespace 'eventgate-logging-external' for release 'canary' . [production]