2019-02-20
16:19 <twentyafterfour> stopped phd on phab1002 [production]
16:03 <ottomata> removing spark 1 from Analytics cluster - T212134 [production]
15:55 <bblack> authdns2001: upgrade gdnsd to 3.0.0-1~wmf1 [production]
15:37 <fsero> restarting docker-registry service on systemd [production]
15:35 <moritzm> temporarily stop prometheus instances on prometheus1004 for systemd upgrade/journald restart [production]
14:43 <gehel@cumin2001> END (FAIL) - Cookbook sre.elasticsearch.rolling-upgrade (exit_code=99) [production]
14:35 <gehel@cumin2001> START - Cookbook sre.elasticsearch.rolling-upgrade [production]
14:35 <volans> upgraded spicerack to 0.0.18 on cumin[12]001 [production]
14:34 <volans> uploaded spicerack_0.0.18-1_amd64.deb to apt.wikimedia.org stretch-wikimedia [production]
14:00 <gehel@cumin2001> END (ERROR) - Cookbook sre.elasticsearch.rolling-upgrade (exit_code=97) [production]
14:00 <gehel@cumin2001> START - Cookbook sre.elasticsearch.rolling-upgrade [production]
13:59 <gehel> rolling upgrade of elasticsearch / cirrus / codfw to 5.6.14 - T215931 [production]
13:51 <godog> prometheus on prometheus2004 crashed/exited after journald upgrade -- starting up again now [production]
13:00 <jbond42> rolling restarts for hhvm in eqiad [production]
12:28 <volans> upgraded spicerack to 0.0.17 on cumin[12]001 [production]
12:25 <volans> uploaded spicerack_0.0.17-1_amd64.deb to apt.wikimedia.org stretch-wikimedia [production]
12:08 <moritzm> restarted ircecho on kraz.wikimedia.org [production]
11:46 <jbond42> rolling restarts for hhvm in codfw [production]
11:28 <akosiaris> rebuilt and re-uploaded rsyslog_8.38.0-1~bpo9+1wmf1_amd64.changes to apt.wikimedia.org/stretch-wikimedia to include the mmkubernetes package [production]
10:36 <marostegui> Deploy schema change on db1095:3313 - T210713 [production]
10:04 <marostegui> Deploy schema change on dbstore1004:3313 - T210713 [production]
09:57 <moritzm> installing systemd security updates on jessie hosts [production]
09:33 <marostegui> Deploy schema change on db2043 (s3 codfw master), lag will be generated on s3 codfw - T210713 [production]
09:06 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1109 (duration: 00m 52s) [production]
08:48 <moritzm> powercycling rdb1001 for a test [production]
07:45 <moritzm> installing gnupg2 updates on stretch [production]
07:14 <marostegui> Deploy schema change on s1 primary master (db1067) - T210713 [production]
07:13 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1080 T210713 (duration: 00m 52s) [production]
07:09 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1109 after kernel upgrade (duration: 00m 52s) [production]
06:54 <oblivian@deploy1001> Synchronized wmf-config/profiler.php: Fix the tideways setup (duration: 00m 52s) [production]
06:50 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1109 after kernel upgrade (duration: 00m 52s) [production]
06:47 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1080 T210713 (duration: 00m 51s) [production]
06:44 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1119 T210713 (duration: 00m 51s) [production]
06:38 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1109 after kernel upgrade (duration: 00m 52s) [production]
06:28 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1109 after kernel upgrade (duration: 00m 52s) [production]
06:18 <marostegui> Stop MySQL on db1109 for kernel and mysql upgrade [production]
06:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1109 for kernel and mysql upgrade (duration: 00m 52s) [production]
06:12 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1119 T210713 (duration: 01m 05s) [production]
04:45 <XioNoX> add avoid-paths WIRESTAR-OPTICALTEL to cr2-eqdfw [production]
02:15 <mobrovac@deploy1001> Finished deploy [restbase/deploy@751dc5c]: Temporarily collect VE request logs for T215956 (duration: 22m 37s) [production]
01:52 <mobrovac@deploy1001> Started deploy [restbase/deploy@751dc5c]: Temporarily collect VE request logs for T215956 [production]
00:24 <ebernhardson@deploy1001> Synchronized php-1.33.0-wmf.17/skins/MinervaNeue/resources/skins.minerva.content.styles/lists.less: Revert switch to outside list style from ordered lists (duration: 00m 52s) [production]
00:23 <ebernhardson@deploy1001> Synchronized php-1.33.0-wmf.18/skins/MinervaNeue/resources/skins.minerva.content.styles/lists.less: Revert switch to outside list style from ordered lists (duration: 00m 59s) [production]
00:05 <ebernhardson@deploy1001> Synchronized wmf-config/CirrusSearch-production.php: SWAT T215969 Return cirrussearch master timeout back to the default value (duration: 00m 57s) [production]
2019-02-19
23:51 <ebernhardson> restarted ferm on relforge1001 [production]
23:50 <ebernhardson> temporarily stop ferm on relforge1001 to test where a connection is being blocked [production]
20:49 <thcipriani@deploy1001> rebuilt and synchronized wikiversions files: group0 to 1.33.0-wmf.18 [production]
20:34 <thcipriani@deploy1001> Finished scap: testwiki to php-1.33.0-wmf.18 and rebuild l10n cache (duration: 30m 31s) [production]
20:07 <gehel@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-upgrade (exit_code=0) [production]
20:04 <thcipriani@deploy1001> Started scap: testwiki to php-1.33.0-wmf.18 and rebuild l10n cache [production]