2021-02-26
08:04 <elukey> run ipmi mc reset cold for analytics1058 - mgmt responding to pings and ipmi, but not to ssh [production]
07:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1169 (re)pooling @ 1%: Repool db1169 after cloning db1134', diff saved to https://phabricator.wikimedia.org/P14494 and previous config saved to /var/cache/conftool/dbconfig/20210226-075219-root.json [production]
07:02 <marostegui> Stop MySQL on db2106 to clone db2147 T275633 [production]
07:01 <elukey> reboot an-worker1099 to clear out kernel soft lockup errors [production]
06:59 <elukey> restart datanode on an-worker1099 - soft lockup kernel errors [production]
06:53 <kartik@deploy1001> Synchronized php-1.36.0-wmf.32/extensions/ContentTranslation: Bump ContentTranslation to e6b1a7c to include lost {{gerrit|666327}} backport (duration: 00m 58s) [production]
06:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db1092 from dbctl T275019', diff saved to https://phabricator.wikimedia.org/P14492 and previous config saved to /var/cache/conftool/dbconfig/20210226-063914-marostegui.json [production]
06:32 <kartik@deploy1001> Synchronized php-1.36.0-wmf.32/extensions/ContentTranslation: Resync ContentTranslation for {{gerrit|666327}} (duration: 01m 16s) [production]
06:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1169 to clone db1134 T275343', diff saved to https://phabricator.wikimedia.org/P14490 and previous config saved to /var/cache/conftool/dbconfig/20210226-061705-marostegui.json [production]
05:29 <ryankemper@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2045.codfw.wmnet with reason: REIMAGE [production]
05:27 <ryankemper@cumin2001> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2045.codfw.wmnet with reason: REIMAGE [production]
05:25 <ryankemper> [relforge] Downtimed `relforge1004` until `2021-03-02 07:23:36` (https://phabricator.wikimedia.org/T275658 is in flight to fix broken `kibana.service`) [production]
05:07 <ryankemper> T275345 `sudo -i wmf-auto-reimage-host --conftool -p T275345 elastic2045.codfw.wmnet` on `ryankemper@cumin2001` tmux session `elastic_reimage_elastic1065` [production]
04:23 <ryankemper> T267927 [WDQS Data Reload] `sudo -i cookbook sre.wdqs.data-reload wdqs2008.codfw.wmnet --task-id T267927 --reload-data wikidata --reason 'T267927: Reload wikidata jnl from fresh dumps' --reuse-downloaded-dump --depool` on `ryankemper@cumin2001` tmux session `wdqs_data_reload_2008` [production]
04:21 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
00:14 <urbanecm@deploy1001> Synchronized php-1.36.0-wmf.32/extensions/Graph/: 9d5cf348f5dda32f8889d5160bb1fe34a4e07f8c: Do not log graph errors to WMF servers (T274557) (duration: 01m 36s) [production]
2021-02-25
23:55 <mutante> deploy1002, deploy2002 - scap-master-sync deploy1001.eqiad.wmnet (T265963) [production]
23:41 <mutante> deploy2001 2/2 - because rsync uses --delete but also --exclude="**/cache/l10n/*.cdb" --exclude="*.swp", you can't expect /srv/mediawiki-staging to be the same size on 2 servers [production]
23:39 <mutante> deploy2001 - scap-master-sync from deploy1001 runs and attempts to --delete files to stay in sync but fails to do so because *.cdb files are in cache dirs and rsync does not want to delete non-empty directories; this lets /srv/mediawiki-staging grow to 10 times its size in eqiad [production]
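The rsync behavior described in the entry above can be reproduced in miniature. This is a sketch with throwaway temp directories, not the actual scap-master-sync invocation: with --delete, rsync protects excluded paths on the receiver unless --delete-excluded is also given, and it then refuses to remove the non-empty directories that contain them.

```shell
#!/bin/sh
# Sketch: a *.cdb file under a cache dir exists only on the receiver.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$dst/wmf/cache/l10n"
touch "$dst/wmf/cache/l10n/stale.cdb"

# --delete should remove receiver-only files, but the exclude pattern
# protects stale.cdb, so its parent dirs cannot be deleted either.
rsync -a --delete --exclude='**/cache/l10n/*.cdb' "$src/" "$dst/" || true

ls -l "$dst/wmf/cache/l10n/"   # stale.cdb survives the sync
```

Adding --delete-excluded would remove the protected files, at the cost of also deleting the l10n caches the exclude was meant to preserve.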
23:34 <mutante> deploy2001 - scap-master-sync from deploy1001 [production]
23:13 <mutante> deploy1002 - /usr/local/bin/scap-master-sync deploy1001.eqiad.wmnet [production]
23:01 <dduvall@deploy1001> Pruned MediaWiki: 1.36.0-wmf.30 (duration: 04m 20s) [production]
21:38 <legoktm> pushed new version of docker-registry.discovery.wmnet/wikimedia-buster image [production]
21:20 <mutante> deploy2001 - rsynced /srv/deployment from deploy1001 after gerrit:666757 [production]
20:57 <eileen> civicrm revision changed from 604d07c859 to f07390ff87, config revision is 643477b35d [production]
20:35 <jhuneidi@deploy1001> rebuilt and synchronized wikiversions files: all wikis to 1.36.0-wmf.32 refs T274936 [production]
20:17 <tgr@deploy1001> Synchronized php-1.36.0-wmf.31/extensions/GrowthExperiments/: Backport: [[gerrit:666704|Impact module: Add "not rendered" state (T270294, T275615)]] (duration: 01m 08s) [production]
19:40 <tgr@deploy1001> Synchronized php-1.36.0-wmf.32/extensions/GrowthExperiments/: Backport: [[gerrit:666704|Impact module: Add "not rendered" state (T270294, T275615)]] (duration: 01m 26s) [production]
19:16 <ryankemper> T267927 Downloading dumps: `sudo https_proxy=webproxy.codfw.wmnet:8080 wget https://dumps.wikimedia.org/wikidatawiki/entities/latest-all.ttl.bz2 -O /srv/wdqs/latest-all.ttl.bz2 && sudo https_proxy=webproxy.codfw.wmnet:8080 wget https://dumps.wikimedia.org/wikidatawiki/entities/latest-lexemes.ttl.bz2 -O /srv/wdqs/latest-lexemes.ttl.bz2` on `ryankemper@wdqs2008` tmux session `download_latest_dumps` [production]
18:59 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:59 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:59 <ryankemper> T267927 Manual puppet run got `wdqs2008` present in puppetdb again. Now blocked by the lack of a host key for `wdqs2008` on `cumin2001`, so I'm running puppet on `cumin2001` to get the latest state of `/etc/ssh/ssh_known_hosts` [production]
18:57 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:57 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:56 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:56 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:50 <ryankemper> T267927 Trying to kick off data reload on `wdqs2008` from `cumin2001` fails because of `spicerack.remote.RemoteError: No hosts provided`. Some spelunking through IRC history suggests this happens when a host is not present in puppetDB. I've confirmed `wdqs2008` is absent on puppetboard, so running puppet agent to get it re-registered (hopefully) [production]
18:38 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:38 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:37 <pt1979@cumin2001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:37 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:37 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:36 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:36 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:31 <pt1979@cumin2001> START - Cookbook sre.dns.netbox [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'zotero' for release 'staging' . [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'zotero' for release 'production' . [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'wikifeeds' for release 'staging' . [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'wikifeeds' for release 'production' . [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'termbox' for release 'production' . [production]