2021-02-26
09:33 <aborrero@cumin2001> START - Cookbook sre.hosts.reboot-single for host cloudcontrol2001-dev.wikimedia.org [production]
09:28 <root@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
09:24 <root@cumin1001> START - Cookbook sre.dns.netbox [production]
09:22 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 50%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14501 and previous config saved to /var/cache/conftool/dbconfig/20210226-092240-root.json [production]
09:13 <jbond42> puppet enabled post sudoers fix, running puppet fleet wide with cumin -b 15 '*' 'run-puppet-agent' [production]
09:07 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 40%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14500 and previous config saved to /var/cache/conftool/dbconfig/20210226-090736-root.json [production]
08:55 <jbond42> disabled puppet pending rollback of https://gerrit.wikimedia.org/r/666899 [production]
08:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 25%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14498 and previous config saved to /var/cache/conftool/dbconfig/20210226-085233-root.json [production]
08:37 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 15%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14497 and previous config saved to /var/cache/conftool/dbconfig/20210226-083729-root.json [production]
08:22 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 10%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14496 and previous config saved to /var/cache/conftool/dbconfig/20210226-082226-root.json [production]
08:19 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on analytics1058.eqiad.wmnet with reason: REIMAGE [production]
08:17 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on analytics1058.eqiad.wmnet with reason: REIMAGE [production]
08:07 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 5%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14495 and previous config saved to /var/cache/conftool/dbconfig/20210226-080722-root.json [production]
08:04 <elukey> run ipmi mc reset cold for analytics1058 - mgmt responding to pings and ipmi, but not to ssh [production]
07:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 1%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14494 and previous config saved to /var/cache/conftool/dbconfig/20210226-075219-root.json [production]
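The staggered `(re)pooling` entries above (1% → 5% → 10% → 15% → 25% → 40% → 50%) follow a gradual repool ramp after cloning. A minimal dry-run sketch of that loop; the exact `dbctl` subcommands and flags here are assumptions, and `echo` keeps everything a no-op:

```shell
# Hypothetical dry-run sketch of the gradual repool ramp logged above.
# The dbctl subcommand/flag spelling is an assumption, not taken from the
# log; "echo" prints the commands instead of executing them.
host=db1169
reason="Repool ${host} after cloning db1134"
for pct in 1 5 10 15 25 40 50; do
  echo dbctl instance "$host" pool -p "$pct"
  echo dbctl config commit -m "${host} (re)pooling @ ${pct}%: ${reason}"
  # In production each step is followed by a wait (the log shows roughly
  # 15-minute gaps) to watch replication lag and error rates before
  # raising the percentage again.
done
```

Note how each commit in the log also produced a Phabricator paste of the diff and a saved previous config under /var/cache/conftool/dbconfig/, giving a ready rollback point at every step of the ramp.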
07:02 <marostegui> Stop MySQL on db2106 to clone db2147 T275633 [production]
07:01 <elukey> reboot an-worker1099 to clear out kernel soft lockup errors [production]
06:59 <elukey> restart datanode on an-worker1099 - soft lockup kernel errors [production]
06:53 <kartik@deploy1001> Synchronized php-1.36.0-wmf.32/extensions/ContentTranslation: Bump ContentTranslation to e6b1a7c to include lost {{gerrit|666327}} backport (duration: 00m 58s) [production]
06:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db1092 from dbctl T275019', diff saved to https://phabricator.wikimedia.org/P14492 and previous config saved to /var/cache/conftool/dbconfig/20210226-063914-marostegui.json [production]
06:32 <kartik@deploy1001> Synchronized php-1.36.0-wmf.32/extensions/ContentTranslation: Resync ContentTranslation for {{gerrit|666327}} (duration: 01m 16s) [production]
06:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1169 to clone db1134 T275343', diff saved to https://phabricator.wikimedia.org/P14490 and previous config saved to /var/cache/conftool/dbconfig/20210226-061705-marostegui.json [production]
05:29 <ryankemper@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2045.codfw.wmnet with reason: REIMAGE [production]
05:27 <ryankemper@cumin2001> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2045.codfw.wmnet with reason: REIMAGE [production]
05:25 <ryankemper> [relforge] Downtimed `relforge1004` until `2021-03-02 07:23:36` (https://phabricator.wikimedia.org/T275658 is in flight to fix broken `kibana.service`) [production]
05:07 <ryankemper> T275345 `sudo -i wmf-auto-reimage-host --conftool -p T275345 elastic2045.codfw.wmnet` on `ryankemper@cumin2001` tmux session `elastic_reimage_elastic1065` [production]
04:23 <ryankemper> T267927 [WDQS Data Reload] `sudo -i cookbook sre.wdqs.data-reload wdqs2008.codfw.wmnet --task-id T267927 --reload-data wikidata --reason 'T267927: Reload wikidata jnl from fresh dumps' --reuse-downloaded-dump --depool` on `ryankemper@cumin2001` tmux session `wdqs_data_reload_2008` [production]
04:21 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
00:14 <urbanecm@deploy1001> Synchronized php-1.36.0-wmf.32/extensions/Graph/: 9d5cf348f5dda32f8889d5160bb1fe34a4e07f8c: Do not log graph errors to WMF servers (T274557) (duration: 01m 36s) [production]
2021-02-25
23:55 <mutante> deploy1002, deploy2002 - scap-master-sync deploy1001.eqiad.wmnet (T265963) [production]
23:41 <mutante> deploy2001 2/2 - because rsync is --delete but also --exclude="**/cache/l10n/*.cdb" --exclude="*.swp", you can't expect /srv/mediawiki-staging to be the same size on 2 servers [production]
23:39 <mutante> deploy2001 - scap-master-sync from deploy1001 runs and attempts to --delete files to stay in sync, but fails to do so because *.cdb files are in cache dirs and rsync does not want to delete non-empty directories; this leads to /srv/mediawiki-staging building up to 10 times its size in eqiad [production]
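The interaction described in the two entries above can be reproduced in miniature: with `--delete` plus `--exclude`, rsync also protects the excluded files from deletion on the target, so their parent directories stay non-empty and cannot be removed. A small sketch with throwaway temp dirs (the paths are illustrative, not the real deploy layout):

```shell
# Reproduce the --delete/--exclude interaction described above using
# throwaway directories (paths are illustrative, not /srv/mediawiki-staging).
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/wmf"                      # source has no cache dir at all
mkdir -p "$dst/wmf/cache/l10n"
touch "$dst/wmf/cache/l10n/en.cdb"       # target-only file matching the exclude
# rsync warns that it cannot delete the non-empty l10n directory, because the
# excluded .cdb inside it is protected from --delete:
rsync -a --delete --exclude='**/cache/l10n/*.cdb' "$src/" "$dst/" || true
ls "$dst/wmf/cache/l10n"                 # en.cdb survives the sync
```

Passing `--delete-excluded` instead would remove the excluded files too, which is why the size discrepancy between the two deploy servers is expected behaviour rather than a sync failure.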
23:34 <mutante> deploy2001 - scap-master-sync from deploy1001 [production]
23:13 <mutante> deploy1002 - /usr/local/bin/scap-master-sync deploy1001.eqiad.wmnet [production]
23:01 <dduvall@deploy1001> Pruned MediaWiki: 1.36.0-wmf.30 (duration: 04m 20s) [production]
21:38 <legoktm> pushed new version of docker-registry.discovery.wmnet/wikimedia-buster image [production]
21:20 <mutante> deploy2001 - rsynced /srv/deployment from deploy1001 after gerrit:666757 [production]
20:57 <eileen> civicrm revision changed from 604d07c859 to f07390ff87, config revision is 643477b35d [production]
20:35 <jhuneidi@deploy1001> rebuilt and synchronized wikiversions files: all wikis to 1.36.0-wmf.32 refs T274936 [production]
20:17 <tgr@deploy1001> Synchronized php-1.36.0-wmf.31/extensions/GrowthExperiments/: Backport: [[gerrit:666704|Impact module: Add "not rendered" state (T270294, T275615)]] (duration: 01m 08s) [production]
19:40 <tgr@deploy1001> Synchronized php-1.36.0-wmf.32/extensions/GrowthExperiments/: Backport: [[gerrit:666704|Impact module: Add "not rendered" state (T270294, T275615)]] (duration: 01m 26s) [production]
19:16 <ryankemper> T267927 Downloading dumps: `sudo https_proxy=webproxy.codfw.wmnet:8080 wget https://dumps.wikimedia.org/wikidatawiki/entities/latest-all.ttl.bz2 -O /srv/wdqs/latest-all.ttl.bz2 && sudo https_proxy=webproxy.codfw.wmnet:8080 wget https://dumps.wikimedia.org/wikidatawiki/entities/latest-lexemes.ttl.bz2 -O /srv/wdqs/latest-lexemes.ttl.bz2` on `ryankemper@wdqs2008` tmux session `download_latest_dumps` [production]
18:59 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:59 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:59 <ryankemper> T267927 Manual puppet run got `wdqs2008` present in puppetdb again. Now blocked by a missing host key for `wdqs2008` on `cumin2001`, so I'm running puppet on `cumin2001` to get the latest state of `/etc/ssh/ssh_known_hosts` [production]
18:57 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:57 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:56 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:56 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:50 <ryankemper> T267927 Trying to kick off the data reload on `wdqs2008` from `cumin2001` fails with `spicerack.remote.RemoteError: No hosts provided`. Some spelunking through IRC history suggests this happens when a host is not present in puppetDB. I've confirmed `wdqs2008` is absent from puppetboard, so I'm running puppet agent to get it re-registered (hopefully) [production]
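The failure mode in the entry above (`RemoteError: No hosts provided` because the target host dropped out of PuppetDB) can be checked directly before retrying a cookbook. A hedged sketch, assuming the stock PuppetDB query API on its default port (both are assumptions, not taken from the log); the query is echoed rather than executed:

```shell
# Hypothetical pre-flight check mirroring the debugging above: ask PuppetDB
# whether a host is registered before running a cookbook against it.
# Endpoint and port are PuppetDB defaults and may differ per deployment.
host=wdqs2008.codfw.wmnet
puppetdb=localhost:8080
url="http://${puppetdb}/pdb/query/v4/nodes/${host}"
echo "curl -s ${url}   # 404 => not registered; remote selection finds no hosts"
```

A 404 from the nodes endpoint would explain an empty host selection, and (as in the log) a successful puppet agent run on the target re-registers it.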