2019-02-28
15:29 <jbond42> rebooting labstore2001 [production]
15:23 <filippo@puppetmaster1001> conftool action : set/pooled=no; selector: name=ms-fe1005.eqiad.wmnet [production]
15:19 <jbond42> rebooting rhodium [production]
15:15 <cmjohnson1> powering off db1114 to replace motherboard T214720 [production]
15:14 <_joe_> uploading scap 3.9.1-1 to {stretch,jessie}-wikimedia [production]
14:50 <jbond42> reboot cloudnet2001-dev.codfw.wmnet [production]
14:47 <hashar> mw1272 fixed by running "scap sync-l10n" from deploy host [production]
14:46 <hashar> mw1272 had /srv/mediawiki/php-1.33.0-wmf.19/includes/cache/localisation/LocalisationCache.php:475) No localisation cache found for English. Please run maintenance/rebuildLocalisationCache.php. [production]
14:46 <hashar@deploy1001> scap sync-l10n completed (1.33.0-wmf.19) (duration: 03m 33s) [production]
14:42 <jbond@cumin1001> conftool action : set/pooled=no; selector: name=rhodium.eqiad.wmnet [production]
14:41 <hashar@deploy1001> Synchronized php: group1 wikis to 1.33.0-wmf.19 (duration: 00m 53s) [production]
14:40 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.33.0-wmf.19 [production]
14:34 <milimetric@deploy1001> Finished deploy [analytics/refinery@f605fad]: New sqoop logic that uses the sharded replicas (duration: 10m 00s) [production]
14:30 <akosiaris@deploy1001> scap-helm citoid finished [production]
14:30 <akosiaris@deploy1001> scap-helm citoid cluster staging completed [production]
14:30 <akosiaris@deploy1001> scap-helm citoid upgrade -f citoid-staging-values.yaml staging stable/citoid [namespace: citoid, clusters: staging] [production]
14:28 <hashar@deploy1001> Synchronized php-1.33.0-wmf.19/extensions/WikibaseMediaInfo: Move up checks to test if we should construct depicts widgets - T217285 (duration: 00m 58s) [production]
14:24 <milimetric@deploy1001> Started deploy [analytics/refinery@f605fad]: New sqoop logic that uses the sharded replicas [production]
13:56 <elukey> re-start cleanup of 20k+ zookeeper nodes on conf100[4-6] (old Hadoop Yarn state) - T216952 [production]
13:52 <filippo@puppetmaster1001> conftool action : set/pooled=no; selector: name=prometheus1003.eqiad.wmnet [production]
13:43 <godog> depool prometheus1003.eqiad.wmnet to take a data snapshot [production]
13:34 <filippo@puppetmaster1001> conftool action : set/pooled=yes; selector: name=prometheus2003.codfw.wmnet [production]
12:36 <zeljkof> EU SWAT finished [production]
12:35 <zfilipin@deploy1001> Synchronized wmf-config/throttle.php: SWAT: [[gerrit:493383|Add throttle rule for Day of Digital Service (T217155)]] (duration: 00m 52s) [production]
12:31 <zfilipin@deploy1001> Synchronized wmf-config/throttle.php: SWAT: [[gerrit:493382|New throttle rule for Czech Wikigap 2019 (T217270)]] (duration: 00m 53s) [production]
12:18 <zfilipin@deploy1001> Synchronized wmf-config/: SWAT: [[gerrit:491959|Show referencePreviews on group0 wikis as beta feature (T214905)]] (duration: 00m 56s) [production]
11:59 <jbond42> rolling openssl security updates to jessie systems [production]
11:32 <akosiaris> remove sca1003, sca1004, sca2003, sca2004 from the fleet. Celebrate!!!! [production]
11:28 <elukey> pause cleanup of 20k+ zookeeper nodes on conf100[4-6] (old Hadoop Yarn state) - T216952 [production]
10:00 <_joe_> executing a rolling puppet run (2 server at a time per cluster, per dc) in eqiad,codfw as an HHVM restart will be triggered [production]
09:37 <gilles@deploy1001> Synchronized php-1.33.0-wmf.19/extensions/NavigationTiming/modules/ext.navigationTiming.js: T217210 Don't assume PerformanceObserver entry types are supported (duration: 00m 54s) [production]
09:30 <elukey> start cleanup of 20k+ zookeeper nodes on conf100[4-6] (old Hadoop Yarn state) - T216952 [production]
09:26 <moritzm> installed php security updates on netmon1002 and people1001 [production]
09:22 <marostegui> Stop MySQL on db1125 (sanitarium) to upgrade, this will generate lag on labs on: s2, s4, s6,s7 [production]
09:21 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1121 (duration: 00m 54s) [production]
09:08 <marostegui> Stop MySQL on db1121 for upgrade, this will generate lag on labsdb:s4 [production]
09:08 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1121 (duration: 00m 53s) [production]
08:59 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1079 (duration: 00m 53s) [production]
08:32 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase API traffic db1079 after mysql upgrade (duration: 00m 53s) [production]
08:31 <elukey> roll restart of Yarn Resource Managers on an-master100[1,2] to pick up new settings [production]
08:22 <marostegui> Change abuse_filter_log indexes on s3 codfw, lag will appear on codfw - T187295 [production]
08:12 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1079 after mysql upgrade (duration: 00m 54s) [production]
08:06 <moritzm> installing glibc security updates for stretch [production]
07:47 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1079 in API after mysql upgrade (duration: 00m 53s) [production]
07:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1079 after mysql upgrade (duration: 00m 56s) [production]
07:08 <marostegui> Stop MySQL on db1079 for mysql upgrade [production]
06:50 <marostegui> Deploy schema change on db1079, this will generate lag on s7 on labs - T86342 [production]
06:23 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1079 (duration: 00m 55s) [production]
06:18 <kart_> Finished manual run of unpublished ContentTranslation draft purge script (T216983) [production]
05:56 <marostegui> Upgrade MySQL on db1124 (Sanitarium) lag will be generated on s1,s3,s5,s8 [production]