2021-03-15
07:48 <joal> Manually start mediawiki-history-drop-snapshot.service to check the run succeeds [analytics]
07:47 <joal> Drop hive wmf.mediawiki_wikitext_history snapshot partitions (2020-08, 2020-09, 2020-10, 2020-11) [analytics]
07:22 <elukey> powercycle ms-be1038 - no ssh, no tty available in mgmt serial console, irrecoverable error saved in ilo's system logs [production]
2021-03-14
20:49 <joal> Manually clean some data (mediawiki-history-drop-snapshot.service seems not to be working) [analytics]
20:46 <joal> Force a run of mediawiki-history-drop-snapshot.service to clean up some data [analytics]
17:57 <marostegui@cumin1001> dbctl commit (dc=all): 'db1146:3314 (re)pooling @ 100%: Repool db1146:3314', diff saved to https://phabricator.wikimedia.org/P14827 and previous config saved to /var/cache/conftool/dbconfig/20210314-175751-root.json [production]
17:42 <marostegui@cumin1001> dbctl commit (dc=all): 'db1146:3314 (re)pooling @ 75%: Repool db1146:3314', diff saved to https://phabricator.wikimedia.org/P14826 and previous config saved to /var/cache/conftool/dbconfig/20210314-174248-root.json [production]
17:27 <marostegui@cumin1001> dbctl commit (dc=all): 'db1146:3314 (re)pooling @ 50%: Repool db1146:3314', diff saved to https://phabricator.wikimedia.org/P14825 and previous config saved to /var/cache/conftool/dbconfig/20210314-172744-root.json [production]
17:12 <marostegui@cumin1001> dbctl commit (dc=all): 'db1146:3314 (re)pooling @ 25%: Repool db1146:3314', diff saved to https://phabricator.wikimedia.org/P14824 and previous config saved to /var/cache/conftool/dbconfig/20210314-171240-root.json [production]
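The four entries above show the usual gradual-repool pattern: the instance is brought back at 25%, 50%, 75%, then 100%, with roughly fifteen minutes between steps so replication and load can be watched. A minimal sketch of that loop follows; the log records only the commit messages, so the exact `dbctl` subcommand syntax here is an assumption, and the commands are echoed rather than executed:

```shell
# Hedged sketch of the stepwise repool seen in the log entries above.
# The dbctl invocations are assumptions (only the commit messages appear
# in the log); they are echoed as a plan, not run.
for pct in 25 50 75 100; do
  echo "dbctl instance db1146:3314 pool -p ${pct}"
  echo "dbctl config commit -m 'Repool db1146:3314 @ ${pct}%'"
  # In the real runs above, each step was followed by a ~15 minute pause
  # to confirm the instance handled the added traffic before continuing.
done
```

The point of the staircase is that a cold instance (empty buffer pool, stale caches) can be overwhelmed by a full share of traffic, so weight is restored in steps.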
14:43 <gehel> depool wdqs1005 and restart blazegraph - will keep it depooled until this server has caught up on lag [production]
2021-03-13
19:01 <Amir1> change default charset of all core tables in labstestwiki to binary (T269348) [production]
18:53 <Amir1> run schema changes for varbinary on wikitech (T269348) [production]
17:38 <twentyafterfour> restarted apache on gerrit1001 to resolve apache worker exhaustion see T277127 [production]
17:10 <twentyafterfour> restart apache on gerrit1001 [releng]
16:57 <Reedy> gerrit web interface is slow/timing out [production]
16:31 <wm-bot> <lucaswerkmeister> deployed f389caf9b2 (gender i18n improvements, should be a no-op) [tools.lexeme-forms]
01:18 <ryankemper> T266470 Re-enabled icinga service notifications for `Check no envoy runtime configuration is left persistent` on `wdqs100[9,10]` [production]
01:04 <ryankemper> T266470 merged https://gerrit.wikimedia.org/r/c/operations/dns/+/668255 && `ryankemper@authdns1001:~$ sudo authdns-update` [production]
00:55 <mutante> [wdqs1009:/etc/envoy] $ sudo /usr/local/sbin/build-envoy-config -c /etc/envoy/ [production]
2021-03-12
23:13 <bstorm> cleared error state for all grid queues [tools]
22:57 <marxarelli> reloading zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/671295 [releng]
22:53 <ryankemper> T266470 Manually disabled service notifications for `Check no envoy runtime configuration is left persistent`, will need to circle back on Monday to restore notifications [production]
22:28 <marxarelli> running `tox -e jenkins-jobs -- --conf jenkins_jobs.ini update ./jjb '*-pipeline-*'` to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/668199 [releng]
22:10 <legoktm> imported mailman-puppetmaster.mailman.eqiad1.wikimedia.cloud facts to puppet-compiler [production]
21:52 <mutante> puppetmaster1001 sudo puppet cert clean testreduce.discovery.wmnet (T266509) [production]
21:15 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts mw2219.codfw.wmnet [production]
20:49 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission for hosts mw2219.codfw.wmnet [production]
20:48 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts mw2218.codfw.wmnet [production]
20:32 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission for hosts mw2218.codfw.wmnet [production]
20:32 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts mw2217.codfw.wmnet [production]
20:22 <eevans@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'sessionstore' for release 'staging' . [production]
20:18 <wm-bot> <lucaswerkmeister> deployed 9500beeed4 (three new translations) – should be a no-op but I didn’t want to leave it lying around without a webservice restart either [tools.lexeme-forms]
20:15 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission for hosts mw2217.codfw.wmnet [production]
20:14 <dzahn@cumin1001> conftool action : set/pooled=inactive; selector: name=mw2219.codfw.wmnet [production]
20:14 <dzahn@cumin1001> conftool action : set/pooled=inactive; selector: name=mw2218.codfw.wmnet [production]
20:14 <dzahn@cumin1001> conftool action : set/pooled=inactive; selector: name=mw2217.codfw.wmnet [production]
19:47 <dzahn@cumin1001> conftool action : set/weight=1; selector: name=mw2376.codfw.wmnet,service=canary [production]
19:47 <dzahn@cumin1001> conftool action : set/weight=1; selector: name=mw2374.codfw.wmnet,service=canary [production]
19:47 <ebernhardson> start in-place reindex testwiki in eqiad, codfw, cloudelastic cirrus clusters for T269493 [production]
19:45 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw2374.codfw.wmnet [production]
19:41 <mutante> mw2374, mw2376 - depooling to turn them into canaries [production]
19:41 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw2376.codfw.wmnet [production]
19:41 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw2374.codfw.wmnet [production]
19:28 <wm-bot> <lucaswerkmeister> deployed aa07bef3bd (i18n update) – also, previous SAL message mentioned 712d262475 but that’s still in <code>git log @..@{u}</code>, so I think I forgot to rebase last time [tools.lexeme-forms]
19:13 <Majavah> taavi@deployment-cumin:~$ sudo cumin -b 1 -s 5 'wdqs2*' 'run-puppet-agent -q' [releng]
19:09 <cstone> tools revision changed from 532f8ecb33 to b7b4060c30 [production]
19:01 <legoktm> legoktm@deployment-puppetmaster04:/var/lib/git/labs$ sudo mv private-back /root/private-back-2020-06 [releng]
18:28 <bblack> authdns1001.wikimedia.org,dns2001.wikimedia.org - upgrade gdnsd to 3.6.0 (half the servers have been on this for a couple weeks now, just finishing up the rollout) [production]
18:24 <bblack> dns[15]001.wikimedia.org - upgrade gdnsd to 3.6.0 (half the servers have been on this for a couple weeks now, just finishing up the rollout) [production]
18:21 <bblack> dns[34]001.wikimedia.org - upgrade gdnsd to 3.6.0 (half the servers have been on this for a couple weeks now, just finishing up the rollout) [production]