2020-03-16
11:52 <XioNoX> manually fix prometheus squid exporter on install1003 [production]
11:04 <Amir1> ... for Q30M-Q35M of the new term store [production]
11:04 <Amir1> Warming up InnoDB buffer pool cache in db1111, db1126, db1104, db1092 (T219123) [production]
10:55 <Amir1> warming up db1026 for up to Q35M for the new term store (T219123) [production]
10:47 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es1015', diff saved to https://phabricator.wikimedia.org/P10705 and previous config saved to /var/cache/conftool/dbconfig/20200316-104723-marostegui.json [production]
10:45 <ladsgroup@deploy1001> Synchronized wmf-config/InitialiseSettings.php: "Set term store to WRITE_BOTH for all of Wikidata" (T219123), take II (duration: 01m 07s) [production]
10:43 <ladsgroup@deploy1001> Synchronized wmf-config/InitialiseSettings.php: "Set term store to WRITE_BOTH for all of Wikidata" (T219123) (duration: 01m 13s) [production]
10:40 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es1015', diff saved to https://phabricator.wikimedia.org/P10704 and previous config saved to /var/cache/conftool/dbconfig/20200316-104002-marostegui.json [production]
10:36 <elukey> roll restart of recommendation service on scb* as attempt to fix the flapping alerts - T247732 [production]
10:28 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es1015', diff saved to https://phabricator.wikimedia.org/P10703 and previous config saved to /var/cache/conftool/dbconfig/20200316-102829-marostegui.json [production]
10:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es1015', diff saved to https://phabricator.wikimedia.org/P10702 and previous config saved to /var/cache/conftool/dbconfig/20200316-101707-marostegui.json [production]
10:10 <marostegui> Stop mysql for upgrade on es1015 T239791 [production]
10:02 <Amir1> start of ladsgroup@mwmaint1002:~$ mwscript extensions/Wikibase/repo/maintenance/rebuildItemTerms.php --wiki=wikidatawiki --batch-size=50 --sleep=0 --file=15march2217-holes-nulls.list on screen (T219123) [production]
09:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool es1015 for upgrade and restart T239791', diff saved to https://phabricator.wikimedia.org/P10701 and previous config saved to /var/cache/conftool/dbconfig/20200316-093228-marostegui.json [production]
09:30 <marostegui@cumin1001> dbctl commit (dc=all): 'Promote es1011 to es2 master, this is a NOOP T239791', diff saved to https://phabricator.wikimedia.org/P10700 and previous config saved to /var/cache/conftool/dbconfig/20200316-093048-marostegui.json [production]
08:15 <marostegui> Review and enable events on recently migrated 10.4 hosts - T247728 [production]
08:02 <ema> cp4025 restart trafficserver-tls to clear 'tls process restarted' alert T241593 T185968 [production]
07:57 <moritzm> installing libxslt security updates [production]
07:52 <ema> cp4025: restart varnish-fe to clear 'child restarted' alert T185968 [production]
07:47 <moritzm> installing lxml security updates [production]
07:14 <moritzm> installing libgd2 security updates on jessie [production]
06:54 <moritzm> removing some library packages from jessie/stretch after labstore1006/1007 dist-upgrade to buster [production]
06:38 <_joe_> restart envoy with 10 requests per connection on mw2231, T247484 [production]
2020-03-15
23:20 <jynus> removed oldest snapshots on dbprov1001 [production]
13:27 <dcausse> restarting blazegraph on wdqs1005 T242453 [production]
07:01 <marostegui> Restart logrotate on db1107 [production]
2020-03-14
08:33 <elukey> run kafka preferred-replica-election on kafka-jumbo1001 - T247561 [production]
08:32 <elukey> run systemctl restart systemd-timedated.service on stat1008 [production]
01:06 <mutante> planet1001 - copying /etc/apt/sources.list from planet2001 to planet1001 - apt-get update - apt-get install openssh-server T247592 [production]
2020-03-13
23:12 <bstorm_> rebooting labstore1006 for upgrade to stretch T224583 [production]
22:49 <herron@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
22:45 <herron@cumin1001> START - Cookbook sre.hosts.downtime [production]
22:27 <bstorm_> rebooting labstore1006 T224583 [production]
22:21 <bstorm_> downtimed labstore1006 for upgrades T224583 [production]
20:02 <mutante> stat1005 - ip link set en01 down ; ip link set en01 up (T247561) [production]
19:30 <bstorm_> rebooting labstore1007 for upgrade to buster T224583 [production]
18:51 <shdubsh> test increase fs.inotify.max_user_watches on prometheus2004 [production]
17:58 <hnowlan@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'changeprop' for release 'staging' . [production]
17:21 <mutante> removed squid from install1002/install2002 (formerly webproxy.(eqiad|codfw).wmnet until 2 days ago, replaced by install1003/install2003) T224576 [production]
17:20 <elukey@cumin1001> END (PASS) - Cookbook sre.kafka.roll-restart-mirror-maker (exit_code=0) [production]
17:09 <hnowlan@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'changeprop' for release 'staging' . [production]
17:08 <elukey@cumin1001> START - Cookbook sre.kafka.roll-restart-mirror-maker [production]
17:00 <krinkle@deploy1001> Synchronized dblists/: If4d17082f, Iadba5b01b, Ibe16d5f09 (duration: 01m 07s) [production]
16:58 <krinkle@deploy1001> Synchronized wmf-config/config/: Ibe16d5f09 (duration: 01m 10s) [production]
16:51 <bstorm_> rebooting labstore1007 for stretch upgrade T224583 [production]
16:37 <krinkle@deploy1001> Synchronized wmf-config/config/: If4d17082f, Iadba5b01b (duration: 01m 11s) [production]
16:18 <herron@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
16:15 <herron@cumin1001> START - Cookbook sre.hosts.downtime [production]
16:04 <bstorm_> rebooting labstore1007 for first cycle of upgrades T224583 [production]
16:02 <elukey> powercycle kafka-jumbo1006 after switch port changed - T247561 [production]