2021-02-26
05:29 <ryankemper@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2045.codfw.wmnet with reason: REIMAGE [production]
05:27 <ryankemper@cumin2001> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2045.codfw.wmnet with reason: REIMAGE [production]
05:25 <ryankemper> [relforge] Downtimed `relforge1004` until `2021-03-02 07:23:36` (https://phabricator.wikimedia.org/T275658 is in flight to fix broken `kibana.service`) [production]
05:07 <ryankemper> T275345 `sudo -i wmf-auto-reimage-host --conftool -p T275345 elastic2045.codfw.wmnet` on `ryankemper@cumin2001` tmux session `elastic_reimage_elastic1065` [production]
04:23 <ryankemper> T267927 [WDQS Data Reload] `sudo -i cookbook sre.wdqs.data-reload wdqs2008.codfw.wmnet --task-id T267927 --reload-data wikidata --reason 'T267927: Reload wikidata jnl from fresh dumps' --reuse-downloaded-dump --depool` on `ryankemper@cumin2001` tmux session `wdqs_data_reload_2008` [production]
04:21 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
00:14 <urbanecm@deploy1001> Synchronized php-1.36.0-wmf.32/extensions/Graph/: 9d5cf348f5dda32f8889d5160bb1fe34a4e07f8c: Do not log graph errors to WMF servers (T274557) (duration: 01m 36s) [production]
2021-02-25
23:55 <mutante> deploy1002, deploy2002 - scap-master-sync deploy1001.eqiad.wmnet (T265963) [production]
23:41 <mutante> deploy2001 2/2 - because rsync runs with --delete but also --exclude="**/cache/l10n/*.cdb" --exclude="*.swp", you can't expect /srv/mediawiki-staging to be the same size on the two servers [production]
23:39 <mutante> deploy2001 - scap-master-sync from deploy1001 runs and attempts to --delete files to stay in sync, but fails to do so because the *.cdb files live in cache dirs and rsync will not delete non-empty directories; this leads to /srv/mediawiki-staging growing to 10 times the size of the eqiad copy [production]
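(Illustrative sketch of the rsync flag interaction described in the two entries above; the exact scap-master-sync invocation and paths are assumptions, not copied from the script.)
```bash
# Approximation of the master-sync rsync call (source host and exclude patterns
# are from the log; the remaining flags and paths are assumed):
rsync -a --delete \
  --exclude="**/cache/l10n/*.cdb" \
  --exclude="*.swp" \
  deploy1001.eqiad.wmnet:/srv/mediawiki-staging/ /srv/mediawiki-staging/
# --delete removes receiver-side files that no longer exist on the sender, but files
# matching --exclude are protected from deletion by default, so stale *.cdb files
# under cache/l10n/ accumulate and rsync refuses to remove the now non-empty
# directories. --delete-excluded would clear them, at the cost of also deleting
# the local l10n cache.
```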
23:34 <mutante> deploy2001 - scap-master-sync from deploy1001 [production]
23:13 <mutante> deploy1002 - /usr/local/bin/scap-master-sync deploy1001.eqiad.wmnet [production]
23:01 <dduvall@deploy1001> Pruned MediaWiki: 1.36.0-wmf.30 (duration: 04m 20s) [production]
21:38 <legoktm> pushed new version of docker-registry.discovery.wmnet/wikimedia-buster image [production]
21:20 <mutante> deploy2001 - rsynced /srv/deployment from deploy1001 after gerrit:666757 [production]
20:57 <eileen> civicrm revision changed from 604d07c859 to f07390ff87, config revision is 643477b35d [production]
20:35 <jhuneidi@deploy1001> rebuilt and synchronized wikiversions files: all wikis to 1.36.0-wmf.32 refs T274936 [production]
20:17 <tgr@deploy1001> Synchronized php-1.36.0-wmf.31/extensions/GrowthExperiments/: Backport: [[gerrit:666704|Impact module: Add "not rendered" state (T270294, T275615)]] (duration: 01m 08s) [production]
19:40 <tgr@deploy1001> Synchronized php-1.36.0-wmf.32/extensions/GrowthExperiments/: Backport: [[gerrit:666704|Impact module: Add "not rendered" state (T270294, T275615)]] (duration: 01m 26s) [production]
19:16 <ryankemper> T267927 Downloading dumps: `sudo https_proxy=webproxy.codfw.wmnet:8080 wget https://dumps.wikimedia.org/wikidatawiki/entities/latest-all.ttl.bz2 -O /srv/wdqs/latest-all.ttl.bz2 && sudo https_proxy=webproxy.codfw.wmnet:8080 wget https://dumps.wikimedia.org/wikidatawiki/entities/latest-lexemes.ttl.bz2 -O /srv/wdqs/latest-lexemes.ttl.bz2` on `ryankemper@wdqs2008` tmux session `download_latest_dumps` [production]
18:59 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:59 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:59 <ryankemper> T267927 Manual puppet run got `wdqs2008` present in PuppetDB again. Now blocked by the lack of a host key for `wdqs2008` on `cumin2001`, so I'm running puppet on `cumin2001` to get the latest state of `/etc/ssh/ssh_known_hosts` [production]
18:57 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:57 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:56 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:56 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:50 <ryankemper> T267927 Trying to kick off the data reload on `wdqs2008` from `cumin2001` fails with `spicerack.remote.RemoteError: No hosts provided`. Some spelunking through IRC history suggests this happens when a host is not present in PuppetDB. I've confirmed `wdqs2008` is absent from puppetboard, so I'm running puppet agent to get it re-registered (hopefully) [production]
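(Hedged sketch of the recovery sequence described in the 18:50 and 18:59 entries above; the plain puppet-agent invocations are assumptions, not the exact commands that were run.)
```bash
# On wdqs2008: re-register the host so cumin/spicerack can resolve it from PuppetDB again
sudo puppet agent -t
# On cumin2001: pick up the freshly exported host key into /etc/ssh/ssh_known_hosts
sudo puppet agent -t
# Then retry the sre.wdqs.data-reload cookbook (full invocation shown in the
# 04:23 entry on 2021-02-26 above)
```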
18:38 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:38 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:37 <pt1979@cumin2001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:37 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:37 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:36 <ryankemper@cumin2001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:36 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-reload [production]
18:31 <pt1979@cumin2001> START - Cookbook sre.dns.netbox [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'zotero' for release 'staging' . [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'zotero' for release 'production' . [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'wikifeeds' for release 'staging' . [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'wikifeeds' for release 'production' . [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'termbox' for release 'production' . [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'termbox' for release 'staging' . [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'termbox' for release 'test' . [production]
18:30 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'similar-users' for release 'main' . [production]
18:27 <oblivian@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
18:25 <oblivian@deploy1001> helmfile [eqiad] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
18:23 <bblack> dns[1235]002 - upgrade gdnsd to 3.6.0 (dns4002 and authdns2001 already running it for some time!) [production]
18:21 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'sessionstore' for release 'staging' . [production]
18:20 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'sessionstore' for release 'production' . [production]
18:19 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'recommendation-api' for release 'production' . [production]