2021-07-28 §
17:14 <bd808> mwv-builder-03: added cinder volume, migrated /srv to it, fixed broken apt state, forced puppet run to ensure things are better now that the / partition has some free space. [mediawiki-vagrant]
16:59 <bd808> Adding cinder volume to hold /srv data. [mediawiki-vagrant]
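A rough sketch of the /srv migration described in the two mwv-builder-03 entries above, assuming the new cinder volume shows up on the instance as /dev/sdb; the device name, filesystem, and mount options are assumptions, not taken from the log:

```
# Identify the newly attached cinder volume (device name is an assumption).
lsblk

# Create a filesystem and mount it temporarily.
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/newsrv
sudo mount /dev/sdb /mnt/newsrv

# Copy the existing /srv contents, preserving ownership and permissions.
sudo rsync -aHAX /srv/ /mnt/newsrv/

# Move the old data aside, remount the volume at /srv, and persist it.
sudo umount /mnt/newsrv
sudo mv /srv /srv.old && sudo mkdir /srv
echo '/dev/sdb /srv ext4 defaults 0 2' | sudo tee -a /etc/fstab
sudo mount /srv
```

Freeing /srv is what gives the root partition the breathing room mentioned above for the subsequent apt and puppet fixes.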
16:54 <andrewbogott> rebooting relforge-search; it's unreachable and hanging [search]
16:52 <andrewbogott> rebooting VM 'backend', unreachable and hanging [wikicommunityhealth]
16:49 <bd808> Hard reboot mwv-builder-03. Unresponsive to ssh and http. [mediawiki-vagrant]
16:47 <andrewbogott> rebooting nehpets VM, OOM [reading-web-staging]
16:39 <andrewbogott> rebooting gerrit-prod-1001; seemingly unreachable [devtools]
16:37 <ryankemper> [WDQS Deploy] Deploy complete. Successful test query placed on query.wikidata.org, there are no relevant criticals in Icinga, and Grafana looks good [production]
16:18 <wm-bot> `sudo docker restart 873f4b18478d` Restarting wikibase-registry_wdqs-frontend_1 to hopefully fix all wdqs queries returning 502s [wikibase-registry]
16:17 <wm-bot> `sudo docker restart 6e997bf4a59e` Restarting wikibase-registry_wdqs-proxy_1 to hopefully fix all wdqs queries returning 502s [wikibase-registry]
16:11 <wm-bot> `sudo docker restart b6f6d2d0dd7a` Restarting wikibase-registry_wdqs-0310_1 to hopefully fix all wdqs queries returning 502s [wikibase-registry]
16:00 <ryankemper> [WDQS Deploy] Restarting `wdqs-categories` across lvs-managed hosts, one node at a time: `sudo -E cumin -b 1 'A:wdqs-all and not A:wdqs-test' 'depool && sleep 45 && systemctl restart wdqs-categories && sleep 45 && pool'` [production]
15:59 <ryankemper> [WDQS Deploy] Restarted `wdqs-categories` across all test hosts simultaneously: `sudo -E cumin 'A:wdqs-test' 'systemctl restart wdqs-categories'` [production]
15:59 <ryankemper> [WDQS Deploy] Restarted `wdqs-updater` across all hosts, 4 hosts at a time: `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-updater'` [production]
15:58 <ryankemper> T287112 [WDQS] Re-pooled `wdqs2002` [production]
15:57 <ryankemper@deploy1002> Finished deploy [wdqs/wdqs@26273d8]: 0.3.77 (duration: 08m 55s) [production]
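The `Started deploy`/`Finished deploy` lines from deploy1002 are produced by scap's deploy workflow. A minimal sketch of what such a deploy roughly looks like from the deployment host, assuming the wdqs deploy repository is checked out under /srv/deployment (the path and the git step are assumptions; canary and post-deploy checks come from the repository's scap configuration, not from these commands):

```
# From the deployment host, in the repository's deploy checkout (path assumed).
cd /srv/deployment/wdqs/wdqs

# Point the checkout at the revision to ship, then let scap do the rollout.
git fetch origin && git checkout 26273d8
scap deploy '0.3.77'
```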
15:53 <mutante> mw1434, mw1435, mw1436: scap pull, repooled, reimaged, converted from API to appserver for balancing (T279309) [production]
15:53 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw143[4-6].eqiad.wmnet [production]
15:52 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw143[4-6].eqiad.wmnet [production]
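The two conftool lines above are the logged output of confctl changing pool state around the reimage. A minimal sketch of equivalent invocations from a cluster-management host; the exact CLI form is an assumption reconstructed from the logged selectors:

```
# Depool the three appservers before reimaging (confctl selectors accept regexes).
sudo confctl select 'name=mw143[4-6].eqiad.wmnet' set/pooled=no

# ...reimage, scap pull, sanity checks...

# Repool once they are serving traffic again.
sudo confctl select 'name=mw143[4-6].eqiad.wmnet' set/pooled=yes
```

The `set/pooled=inactive` entry further down goes one step further and drops the hosts from the generated load-balancer configuration entirely, rather than just marking them as depooled.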
15:51 <ryankemper> [WDQS Deploy] Tests passing following deploy of `0.3.77` on canary `wdqs1003`; proceeding to rest of fleet [production]
15:48 <ryankemper@deploy1002> Started deploy [wdqs/wdqs@26273d8]: 0.3.77 [production]
15:47 <ryankemper> [WDQS Deploy] Gearing up for deploy of wdqs `0.3.77`. Pre-deploy tests passing on canary `wdqs1003` [production]
15:47 <jgiannelos@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'tegola-vector-tiles' for release 'main' . [production]
15:08 <jmm@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:05 <jmm@cumin2002> START - Cookbook sre.dns.netbox [production]
14:58 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1434.eqiad.wmnet with reason: REIMAGE [production]
14:57 <wm-bot> `sudo docker image prune --all` removing all unused docker images. Freed up 5.14 GB. T287492 [wikibase-registry]
14:56 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1434.eqiad.wmnet with reason: REIMAGE [production]
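The downtime START/END pairs in this log come from the spicerack cookbook sre.hosts.downtime. A rough sketch of how such a run is typically kicked off from a cumin host; the flag names here are from memory and should be treated as assumptions:

```
# Silence Icinga alerts for 2 hours on one host while it is reimaged
# (flag names are assumed; check `cookbook sre.hosts.downtime --help`).
sudo cookbook sre.hosts.downtime --hours 2 -r "REIMAGE" 'mw1434.eqiad.wmnet'
```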
14:54 <andrewbogott> rebooting a11y.reading-web-staging.eqiad1.wikimedia.cloud; seems hung [reading-web-staging]
14:44 <hashar> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/708536 [releng]
14:39 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
14:33 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on mw1434.eqiad.wmnet with reason: known issue [production]
14:33 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on mw1434.eqiad.wmnet with reason: known issue [production]
14:19 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
14:06 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1436.eqiad.wmnet with reason: REIMAGE [production]
14:06 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
14:06 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
14:06 <dcausse@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'rdf-streaming-updater' for release 'main' . [production]
14:04 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1435.eqiad.wmnet with reason: REIMAGE [production]
14:03 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1436.eqiad.wmnet with reason: REIMAGE [production]
14:01 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1435.eqiad.wmnet with reason: REIMAGE [production]
13:53 <wm-bot> `sudo docker image prune` remove dangling images. Freed up 117 MB. T287492 [wikibase-registry]
13:53 <wm-bot> `sudo docker container prune` remove dangling container metadata. T287492 [wikibase-registry]
13:50 <wm-bot> `sudo docker kill wikibase-registry_wikibase-update_run_1` kill a container not listed in /root/wikibase-registry/docker-compose.yml; no idea where it came from. T287492 [wikibase-registry]
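Read bottom-up, the wikibase-registry entries between 13:50 and 14:57 form one cleanup pass. A minimal sketch of that sequence in chronological order; the `-f` flags are an addition to skip the interactive confirmation prompts and are an assumption, not part of the logged commands:

```
# Stop the stray container that is not defined in docker-compose.yml.
sudo docker kill wikibase-registry_wikibase-update_run_1

# Remove stopped containers, then dangling (untagged) images.
sudo docker container prune -f
sudo docker image prune -f

# Finally drop every image not referenced by a running container;
# this is the step that freed ~5.14 GB in the 14:57 entry above.
sudo docker image prune --all -f
```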
13:32 <dzahn@cumin1001> conftool action : set/pooled=inactive; selector: name=mw143[4-6].eqiad.wmnet [production]
13:29 <moritzm> installing python2.7 security updates on stretch [production]
13:08 <moritzm> installing python3.5 security updates on stretch [production]
12:27 <dcausse@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'rdf-streaming-updater' for release 'main' . [production]
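The helmfile entries from deploy1002 record 'sync' runs against a single release in the staging environment. A minimal sketch of the underlying helmfile call, assuming a helmfile.yaml that defines a release named `main` for the rdf-streaming-updater service; the working directory and the use of a label selector are assumptions, since the log only shows the wrapper's summary line:

```
# In the service's helmfile directory (path assumed), sync only the 'main'
# release in the staging environment.
helmfile -e staging --selector name=main sync
```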
11:26 <moritzm> installing nginx security updates on thumbor* [production]
11:18 <moritzm> installing nginx security updates on sodium (mirrors.wikimedia.org) [production]
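Package security updates like the python and nginx ones above are rolled out fleet-wide; in practice this is usually driven by dedicated tooling such as debdeploy, so the plain cumin + apt sketch below is a simplified assumption, as are the host glob and the package names:

```
# Check which thumbor hosts still carry the old nginx packages.
sudo cumin 'thumbor*' 'apt-cache policy nginx-common'

# Upgrade just the nginx packages, two hosts at a time.
sudo cumin -b 2 'thumbor*' 'apt-get -y install --only-upgrade nginx-common nginx-light'
```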