2018-11-13
16:32 <thcipriani@deploy1001> Finished deploy [gerrit/gerrit@d2763c6]: v2.15.6 to cobalt (duration: 00m 10s) [production]
16:32 <thcipriani@deploy1001> Started deploy [gerrit/gerrit@d2763c6]: v2.15.6 to cobalt [production]
16:29 <thcipriani@deploy1001> Finished deploy [gerrit/gerrit@d2763c6]: v2.15.6 to gerrit2001 (duration: 00m 11s) [production]
16:29 <thcipriani@deploy1001> Started deploy [gerrit/gerrit@d2763c6]: v2.15.6 to gerrit2001 [production]
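The four gerrit entries above are scap3 deployments: scap deploy runs from the repository checkout on the deployment host and logs a Started/Finished pair per target (gerrit2001, then cobalt). A minimal sketch of the equivalent commands; the checkout path is an assumption, not something the log confirms:

    # On deploy1001, from the gerrit deploy checkout (path illustrative):
    cd /srv/deployment/gerrit/gerrit
    git fetch
    git checkout d2763c6    # the v2.15.6 commit named in the log
    scap deploy 'v2.15.6'   # the message becomes the SAL annotation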
16:22 <anomie@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Setting actor migration to write-both/read-old on test wikis and mediawikiwiki (T188327) (duration: 00m 54s) [production]
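The actor-migration entry above toggles a per-wiki setting in wmf-config and ships it with scap sync-file. In the MediaWiki 1.32/1.33 era the write-both/read-old stage was the MIGRATION_WRITE_BOTH value of $wgActorTableSchemaMigrationStage; the wmg setting name below is an assumption about the wmf-config naming of the day:

    # wmf-config/InitialiseSettings.php (PHP, shown as context):
    #   'wmgActorTableSchemaMigrationStage' => [
    #       'default'       => MIGRATION_OLD,
    #       'testwiki'      => MIGRATION_WRITE_BOTH,  # write both, read old
    #       'mediawikiwiki' => MIGRATION_WRITE_BOTH,
    #   ],
    # Push only the changed file from the deployment host:
    scap sync-file wmf-config/InitialiseSettings.php \
        'Setting actor migration to write-both/read-old on test wikis and mediawikiwiki (T188327)'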
16:07 <anomie@mwmaint1002> Running refreshExternallinksIndex.php on labtestwiki for T209373 [production]
16:07 <anomie@mwmaint1002> Running refreshExternallinksIndex.php on section 3 wikis in group 0 for T209373 [production]
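Maintenance scripts on the mwmaint host run through the WMF wrappers: mwscript for a single wiki, foreachwikiindblist to loop over a database list. A sketch of the two runs above; the dblist expression is an assumption about how "section 3 wikis in group 0" was selected:

    # One wiki:
    mwscript refreshExternallinksIndex.php --wiki=labtestwiki
    # All wikis that are in both the s3 and group0 dblists:
    foreachwikiindblist 's3 & group0' refreshExternallinksIndex.php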
15:48 <_joe_> upgrading PHP extensions on all appservers / jobrunners while upgrading to php 7.2 [production]
15:45 <gehel> restart tilerator on maps1004 [production]
15:21 <moritzm> draining ganeti1006 for reboot/kernel security update [production]
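Draining a Ganeti node (here and again at 14:27 and 13:30) means marking it unallocatable and live-migrating its primary instances away before the reboot. With stock Ganeti tooling that is roughly the following (FQDN assumed):

    # Keep the allocator from placing anything new on the node:
    gnt-node modify --drained=yes ganeti1006.eqiad.wmnet
    # Live-migrate all primary instances over to their secondaries:
    gnt-node migrate -f ganeti1006.eqiad.wmnet
    # After the kernel update and reboot, return it to service:
    gnt-node modify --drained=no ganeti1006.eqiad.wmnet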
15:18 <marostegui> Restore replication consistency options on dbstore2002:3313 as it has caught up - T208320 [production]
14:59 <akosiaris> increase the migration downtime for kafkamon1001. It should make live migration of these VMs easier, without the need for manual fiddling [production]
14:54 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group to 1.33.0-wmf.4 | T206658 [production]
14:40 <hashar@deploy1001> Finished scap: testwiki to php-1.33.0-wmf.4 | T206658 (duration: 19m 34s) [production]
14:27 <moritzm> draining ganeti1007 for reboot/kernel security update [production]
14:20 <hashar@deploy1001> Started scap: testwiki to php-1.33.0-wmf.4 | T206658 [production]
14:20 <akosiaris> reboot logstash1007, logstash1008, logstash1009 with 500 secs of sleep between them for the migration_downtime ganeti setting to be applied [production]
14:18 <akosiaris> increase the migration downtime for logstash1007, logstash1008, logstash1009. It should make live migration of these VMs easier, without the need for manual fiddling [production]
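migration_downtime is Ganeti's KVM hypervisor parameter for the maximum time, in milliseconds, a guest may stay paused during the final phase of a live migration; raising it lets busy VMs converge without manual intervention. A sketch matching the kafkamon and logstash entries above (the value is illustrative, and as the 14:20 entry notes, the new setting only takes effect after the instance restarts):

    # Allow a longer freeze window during live migration (value illustrative):
    gnt-instance modify -H migration_downtime=2000 logstash1007.eqiad.wmnet
    # Restart the instance so the new hypervisor parameter is applied:
    gnt-instance reboot logstash1007.eqiad.wmnet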
14:15 <hashar@deploy1001> Pruned MediaWiki: 1.32.0-wmf.24 (duration: 08m 55s) [production]
14:03 <hashar> Applied security patches to 1.33.0-wmf.4 | T206658 [production]
14:03 <gehel> start plugin and JVM upgrade on elasticsearch / cirrus / codfw - T209293 [production]
14:00 <hashar> scap prep 1.33.0-wmf.4 # T206658 [production]
13:58 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Pool pc2007 to replace pc2004 (duration: 00m 48s) [production]
13:41 <marostegui> Deploy schema change on s8 codfw master (db2045), this will generate lag on s8 codfw - T203709 [production]
13:40 <hashar> Cutting wmf/1.33.0-wmf.4 branch | T206658 [production]
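The entries from 13:40 through 14:54 are one iteration of the MediaWiki deployment train: cut the wmf/1.33.0-wmf.4 branch, stage it, patch it, move testwiki with a full scap, switch the next group by syncing only the wikiversions files, and prune the oldest branch. Reordered chronologically, the scap commands behind those lines are approximately:

    # Stage the freshly cut branch on the deployment host:
    scap prep 1.33.0-wmf.4
    # (security patches are applied to the staged checkout by hand)
    # Drop the now-unused old branch from the deployment and web hosts:
    scap clean 1.32.0-wmf.24
    # Full scap: rebuild localisation caches and sync everything:
    scap sync 'testwiki to php-1.33.0-wmf.4 | T206658'
    # Point the next group at the new version; only wikiversions files are synced:
    scap sync-wikiversions 'group to 1.33.0-wmf.4 | T206658'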
13:30 <moritzm> draining ganeti1008 for reboot/kernel security update [production]
12:51 <phuedx> European Mid-day SWAT finished [production]
12:50 <phuedx@deploy1001> Finished scap: SWAT: [[gerrit:473164|Define WikimediaMessages for Wikibase SEO change]] i18n refresh (duration: 21m 43s) [production]
12:28 <phuedx@deploy1001> Started scap: SWAT: [[gerrit:473164|Define WikimediaMessages for Wikibase SEO change]] i18n refresh [production]
12:22 <phuedx@deploy1001> Synchronized php-1.33.0-wmf.3/extensions/WikimediaMessages/: SWAT: [[gerrit:473164|Define WikimediaMessages for Wikibase SEO change (T208755)]] (duration: 00m 56s) [production]
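The SWAT above shows a common two-step: the changed extension directory is synced first, but new message keys only appear once the localisation cache is rebuilt, hence the full scap labelled as the i18n refresh. Roughly:

    # Fast path: sync just the changed extension directory (no l10n rebuild):
    scap sync-file php-1.33.0-wmf.3/extensions/WikimediaMessages/ \
        'SWAT: Define WikimediaMessages for Wikibase SEO change (T208755)'
    # New i18n messages need the localisation cache rebuilt, so follow with a full scap:
    scap sync 'SWAT: Define WikimediaMessages for Wikibase SEO change i18n refresh'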
10:57 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1092 (duration: 00m 52s) [production]
10:47 <marostegui> Deploy schema change on db1116:3318 T203709 [production]
10:40 <godog> stop sending metrics to old graphite hardware [production]
10:15 <gehel> restart elasticsearch on relforge for plugin upgrade - T209293 [production]
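Elasticsearch only loads plugins at JVM startup, so a plugin upgrade implies a rolling restart: relforge (the test cluster) first, then codfw per the 14:03 entry. A per-node sketch; the plugin package name is an assumption, not taken from the log:

    # Upgrade the search plugin bundle (package name illustrative):
    sudo apt-get install -y wmf-elasticsearch-search-plugins
    # Plugins load only at startup, so restart the node:
    sudo systemctl restart elasticsearch
    # Block until the cluster is green again before touching the next node:
    curl -s 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=10m'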
09:54 <moritzm> restarting jenkins on releases1001 to pick up Java security update [production]
09:25 <_joe_> uploading new versions of php-msgpack, php-geoip compatible with both php 7.0 and php 7.2 to thirdparty/php72 T208433 [production]
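Packages on apt.wikimedia.org are managed with reprepro; importing the rebuilt PHP extension packages into the thirdparty/php72 component would look roughly like this (run on the apt host; distribution name and file names are assumptions):

    reprepro -C thirdparty/php72 includedeb stretch-wikimedia php-msgpack_*_amd64.deb
    reprepro -C thirdparty/php72 includedeb stretch-wikimedia php-geoip_*_amd64.deb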
09:23 <marostegui> Deploy schema change on db1092 T203709 [production]
09:23 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1092 (duration: 00m 52s) [production]
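The db1092 lines (10:57, 09:23, 09:23) are one pass of the replica schema-change loop repeated all day for T203709: depool in db-eqiad.php, sync, ALTER, repool. A sketch; the database name and ALTER statement are placeholders, since the log does not spell them out:

    # 1. Depool: comment the host out of its section's load array in
    #    wmf-config/db-eqiad.php, e.g.  # 'db1092' => 100,
    scap sync-file wmf-config/db-eqiad.php 'Depool db1092'
    # 2. Run the schema change directly on the depooled replica:
    mysql -h db1092.eqiad.wmnet <database> -e 'ALTER TABLE ...'
    # 3. Once the ALTER is done and replication is caught up, put it back:
    scap sync-file wmf-config/db-eqiad.php 'Repool db1092'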
09:20 <elukey> rollout new prometheus-mcrouter-exporter to mw* - previous rollout didn't work as expected [production]
09:11 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1104 (duration: 00m 55s) [production]
08:37 <moritzm> updating rsyslog on the remaining stretch hosts to 8.38.0-1~bpo9+1wmf1 [production]
07:21 <marostegui> Deploy schema change on db1104 T203709 [production]
07:20 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1104 (duration: 00m 53s) [production]
07:16 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1109 (duration: 00m 54s) [production]
07:05 <elukey> powercycle lvs2006 - mgmt/serial console blank, not responsive for hours [production]
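With the serial console dead, the power cycle goes out of band to the management controller, typically with ipmitool over lanplus; the mgmt hostname below follows the usual naming convention and is an assumption:

    # Out-of-band power cycle via the BMC (-a prompts for the management password):
    ipmitool -I lanplus -H lvs2006.mgmt.codfw.wmnet -U root -a chassis power cycle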
06:02 <marostegui> Add ipb_sitewide column to db1073:labtestwiki [production]
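ipb_sitewide is the flag column the partial-blocks work added to MediaWiki's ipblocks table (1 = classic sitewide block). Applied by hand on one wiki it is roughly the following; the column definition mirrors the core patch of that era and should be checked against tables.sql before reuse:

    mysql -h db1073.eqiad.wmnet labtestwiki -e \
        'ALTER TABLE ipblocks ADD COLUMN ipb_sitewide TINYINT(1) NOT NULL DEFAULT 1'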
05:43 <marostegui> Stop MySQL on pc2004 to transfer its data to pc2007 - T208383 [production]
05:42 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Depool pc2004 - T208383 (duration: 00m 53s) [production]
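Replacing a parser-cache host means copying the whole datadir; with MySQL stopped on the source, a common pattern is streaming it over netcat with compression instead of taking a logical dump. Purely illustrative (tools, port and paths are assumptions, not from the log):

    # On the receiving host (pc2007): listen, decompress, unpack:
    nc -l -p 4444 | pigz -d | tar -x -C /srv/sqldata
    # On the source host (pc2004), with MySQL already stopped:
    tar -c -C /srv/sqldata . | pigz | nc pc2007.codfw.wmnet 4444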
05:39 <marostegui> Deploy schema change on db2048 (s1 codfw master), this will create lag on s1 codfw - T114117 [production]
05:34 <marostegui> Deploy schema change on db1109 T203709 [production]