2020-03-04
10:41 <addshore> START warm cache for db1111 & db1126 for Q25-30 million T219123 (pass 1) [production]
10:38 <vgutierrez> upload trafficserver 8.0.6-1wm1 to apt.wm.o (buster) [production]
10:38 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q25M for the new term store everywhere (was Q20M) + warm db1126 & db1111 caches (T219123) cache bust (duration: 01m 04s) [production]
10:36 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q25M for the new term store everywhere (was Q20M) + warm db1126 & db1111 caches (T219123) (duration: 01m 05s) [production]
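Note: the "Synchronized wmf-config/..." entries above are what scap logs when a config file is synced from the deployment host; a minimal sketch of such an invocation, with the message text purely illustrative:
  scap sync-file wmf-config/InitialiseSettings.php 'Reading up to Q25M for the new term store everywhere (T219123)'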
10:20 <marostegui> Remove es2 eqiad and codfw from zarcillo.masters table - T246072 [production]
10:10 <marostegui> Update shards table to set es2 display=0 - T246072 [production]
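Note: a rough sketch of the two zarcillo metadata updates above. The table names come from the entries themselves, but the column names and WHERE clauses here are assumptions:
  # column names below are assumptions; the real zarcillo schema may differ
  sudo mysql zarcillo -e "DELETE FROM masters WHERE section = 'es2';"
  sudo mysql zarcillo -e "UPDATE shards SET display = 0 WHERE shard = 'es2';"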
10:05 <marostegui> es2 maintenance window over T246072 [production]
09:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Give some weight to es2 master es1015 and es2016, now standalone - T246072', diff saved to https://phabricator.wikimedia.org/P10609 and previous config saved to /var/cache/conftool/dbconfig/20200304-095919-marostegui.json [production]
09:55 <marostegui> Reset replication on es2 hosts - T246072 [production]
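Note: "Reset replication on es2 hosts" corresponds to the standard MariaDB statements below; a sketch, not necessarily the exact commands used:
  # on each es2 host: stop and discard the old replication configuration
  sudo mysql -e "STOP SLAVE; RESET SLAVE ALL;"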
09:44 <moritzm> installing python-bleach security updates [production]
09:43 <marostegui> Set es1015 (es2 master) on read_only - T246072 [production]
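Note: setting a master read-only is a single global flag in MariaDB; a sketch of the step logged above:
  # on es1015, the es2 eqiad master
  sudo mysql -e "SET GLOBAL read_only = 1;"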
09:38 <addshore> START warm cache for db1111 & db1126 for Q20-25 million T219123 (pass 3 today) [production]
09:21 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Set es2 as RO - T246072 (duration: 01m 04s) [production]
09:13 <_joe_> removing nginx from servers where it was just used for service proxying. [production]
09:09 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Set es2 as RO - T246072 (duration: 01m 14s) [production]
08:58 <akosiaris> release Giant Puppet Lock across the fleet. https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/ has made its way to all PoPs and most of codfw without issues, will make it to the rest of the fleet in the next 30 mins [production]
08:54 <addshore> START warm cache for db1111 & db1126 for Q20-25 million T219123 (pass 2 today) [production]
08:45 <akosiaris> running puppet on first mw host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, mw2269, rescheduling icinga checks as well [production]
08:41 <akosiaris> running puppet on first es host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, es2019, rescheduling icinga checks as well (correction) [production]
08:41 <akosiaris> running puppet on first es host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, db2019, rescheduling icinga checks as well [production]
08:41 <akosiaris> running puppet on first db host after merge of https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/464601/, db2086, rescheduling icinga checks as well [production]
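Note: each of these per-host canary checks boils down to a manual Puppet agent run on the target (the Icinga recheck is handled separately); a minimal sketch assuming the stock agent command rather than any site-specific wrapper:
  sudo puppet agent --test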
08:13 <addshore> START warm cache for db1111 & db1126 for Q20-25 million T219123 (pass 1 today) [production]
07:37 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10608 and previous config saved to /var/cache/conftool/dbconfig/20200304-073721-marostegui.json [production]
07:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10607 and previous config saved to /var/cache/conftool/dbconfig/20200304-071443-marostegui.json [production]
07:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10606 and previous config saved to /var/cache/conftool/dbconfig/20200304-070048-marostegui.json [production]
06:45 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604', diff saved to https://phabricator.wikimedia.org/P10605 and previous config saved to /var/cache/conftool/dbconfig/20200304-064520-marostegui.json [production]
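Note: the dbctl commits above follow the usual stage-then-commit pattern: adjust the instance pooling, review the pending diff, then commit. A sketch of one step in the gradual repool; the percentage and flags here are illustrative, not the exact values used:
  dbctl instance db1098:3316 pool -p 25    # stage a partial repool (percentage illustrative)
  dbctl instance db1098:3317 pool -p 25
  dbctl config diff                        # review what would change
  dbctl config commit -m 'Slowly repool db1098:3316 and db1098:3317 after reimage to buster T246604'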
06:30 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
06:28 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime [production]
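Note: the START/END pair above is the sre.hosts.downtime cookbook wrapping the db1098 maintenance in Icinga downtime; a sketch of an invocation from the cumin host, where the flag names, duration, and host query are assumptions:
  sudo cookbook sre.hosts.downtime --hours 2 -r 'MySQL upgrade T246604' 'db1098*'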
06:22 <cdanis> cdanis@prometheus2004.codfw.wmnet ~ sudo systemctl restart prometheus@ops [production]
06:21 <cdanis> cdanis@prometheus2004.codfw.wmnet ~ sudo systemctl reload prometheus@ops [production]
06:10 <marostegui> Stop MySQL on db1098:3316, db1098:3317 for upgrade - T246604 [production]
01:56 <mutante> mw2178 - systemctl reset-failed to clear (CRITICAL: Status of the systemd unit php7.2-fpm_check_restart) [production]
01:55 <mutante> mw2290 - systemctl reset-failed to clear (CRITICAL: Status of the systemd unit php7.2-fpm_check_restart) [production]
01:48 <mutante> mw1315 - restarted php-fpm and apache (was alerting in Icinga with 503 for 12 hours), log showed failed coredumps, restarts recovered it [production]
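Note: the entries above are routine systemd cleanup, clearing a stale failed unit so the Icinga check recovers, or bouncing php-fpm and apache when a host is serving 503s. A sketch of the corresponding commands, with unit names taken from the entries:
  sudo systemctl reset-failed php7.2-fpm_check_restart.service
  sudo systemctl restart php7.2-fpm apache2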
01:31 <mutante> ganeti2003 - DRAC reset failed with "ipmi_cmd_cold_reset: BMC busy" [production]
01:30 <mutante> ganeti2003 - mgmt interface stopped responding on SSH, resetting DRAC via bmc-device from the host [production]
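Note: resetting a DRAC/BMC from the host itself can be done with FreeIPMI's bmc-device; a sketch of the kind of command behind the entries above, which in this case failed with the "BMC busy" error:
  sudo bmc-device --cold-reset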
00:25 <ebernhardson@deploy1001> Synchronized php-1.35.0-wmf.21/extensions/WikimediaEvents/modules/ext.wikimediaEvents/searchSatisfaction.js: [cirrus] Match fallback config key with the one used in cirrus (duration: 01m 03s) [production]
00:23 <ebernhardson@deploy1001> Synchronized php-1.35.0-wmf.22/extensions/WikimediaEvents/modules/ext.wikimediaEvents/searchSatisfaction.js: [cirrus] Match fallback config key with the one used in cirrus (duration: 01m 04s) [production]
00:15 <ebernhardson@deploy1001> Synchronized wmf-config/SearchSettingsForWikibase.php: [cirrus] move similarity settings to IS.php (duration: 01m 05s) [production]
00:13 <ebernhardson@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [cirrus] move similarity settings to IS.php (duration: 01m 04s) [production]
00:06 <ebernhardson@deploy1001> Synchronized wmf-config/InitialiseSettings.php: [cirrus] configure wgCirrusSearchMaxShardsPerNode per cluster (duration: 01m 05s) [production]
00:06 <ebernhardson> post-deployment restart mjolnir-kafka-bulk-daemon across eqiad and codfw [production]
00:05 <ebernhardson@deploy1001> Finished deploy [search/mjolnir/deploy@1c97543]: Bump mjolnir to master: Revert stream gzip decompression (duration: 05m 25s) [production]
00:00 <ebernhardson@deploy1001> Started deploy [search/mjolnir/deploy@1c97543]: Bump mjolnir to master: Revert stream gzip decompression [production]
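Note: the Started/Finished pair above is a scap3 deployment of the search/mjolnir/deploy repo from the deployment host; a minimal sketch of the invocation, assuming the standard checkout location for deploy repos:
  cd /srv/deployment/search/mjolnir/deploy   # assumed checkout path
  scap deploy 'Bump mjolnir to master: Revert stream gzip decompression'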