2019-08-08
08:44 <moritzm> installing OpenJDK security updates on elastic* servers [production]
08:36 <marostegui> Remove math table from s5 T196055 [production]
08:13 <marostegui> Stop MySQL on db2065 to test dbproxy2003 [production]
07:48 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Promote db2096 as codfw x1 master T220170 (duration: 00m 57s) [production]
07:39 <marostegui> Switchover x1 codfw master db2069 -> db2096 T220170 [production]
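The x1 codfw master switchover above (db2069 -> db2096, T220170) pairs a replication topology change with a db-codfw.php sync. As a rough illustration only, a post-switchover sanity check could look like the sketch below; it assumes pymysql is available, and the user/password and direct connection path are placeholders, not the actual production access method.

    # Hypothetical post-switchover check: confirm the promoted host is writable
    # and still has replicas attached. Credentials and access path are placeholders.
    import pymysql

    def check_new_master(host="db2096.codfw.wmnet", user="check", password="***"):
        conn = pymysql.connect(host=host, user=user, password=password)
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT @@read_only, @@hostname")
                read_only, hostname = cur.fetchone()
                cur.execute("SHOW SLAVE HOSTS")
                replicas = cur.fetchall()  # one row per attached replica
            print(f"{hostname}: read_only={read_only}, {len(replicas)} replicas attached")
            return read_only == 0 and len(replicas) > 0
        finally:
            conn.close()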
06:40 <_joe_> restarting php-fpm on the application servers to pick up the change [production]
05:54 <marostegui> Stop MySQL on db2035 for decommissioning T229784 [production]
05:52 <marostegui> Remove db2035 from tendril and zarcillo T229784 [production]
00:48 <mutante> mwdebug2002 - sudo -i restart-php7.2-fpm [production]
00:20 <ejegg> re-enabled both recurring charge jobs [production]
00:02 <tstarling@deploy1001> Synchronized wmf-config/CommonSettings.php: hack for Parsoid testing on scandium (duration: 00m 55s) [production]
2019-08-07
23:58 <tstarling@deploy1001> Synchronized w/rest.php: Creating rest.php endpoint disabled by default (duration: 00m 55s) [production]
23:46 <ejegg> disabled newer recurring charge job to test one at a time on existing recur records [production]
23:22 <mutante> elastic2054 - powercycling after it went down unexpectedly and Icinga alerted, this happened before in T227298 [production]
23:08 <XioNoX> set virtual-chassis vcp-snmp-statistics on asw2-ulsfo - T228824 [production]
23:07 <ebernhardson@deploy1001> Synchronized wmf-config/InitialiseSettings.php: T220625: Send writes for all non-private wikis to cloudelastic (duration: 01m 02s) [production]
23:03 <XioNoX> set virtual-chassis vcp-snmp-statistics on asw-a-codfw - T228824 [production]
22:50 <ebernhardson> mwmaint start cirrussearch saneitize.php against all non-private group1 wikis for cloudelastic cluster [production]
22:48 <mutante> mwmaint1002 - manually running the purgeOldData cron command to verify it with PHP 7.2 for 528730 (T195392) [production]
22:12 <jgleeson> switched on all fundraising process-control except ingenico_recurring_charge [production]
21:50 <ppchelko@deploy1001> Finished deploy [cpjobqueue/deploy@a151f4e]: Prepare for eventgate transition T230049 T230048 (duration: 00m 59s) [production]
21:49 <ppchelko@deploy1001> Started deploy [cpjobqueue/deploy@a151f4e]: Prepare for eventgate transition T230049 T230048 [production]
21:25 <mutante> restarting gerrit service to apply config change (528769) [production]
21:00 <ebernhardson> apply transient logger settings from prod search clusters to cloudelastic [production]
20:34 <reedy@deploy1001> rebuilt and synchronized wikiversions files: labswiki back to .17 [production]
20:34 <jgleeson> updated civicrm from 727a2c193b to be5b5a150b [production]
20:32 <reedy@deploy1001> rebuilt and synchronized wikiversions files: labswiki back to .16 temporarily [production]
20:28 <jgleeson> switched off fundraising process-control jobs [production]
19:36 <brennen@deploy1001> Synchronized php: group1 wikis to 1.34.0-wmf.17 (duration: 00m 54s) [production]
19:35 <brennen@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.34.0-wmf.17 [production]
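The group1 rollout above works by rebuilding the wikiversions map (wiki dbname -> MediaWiki branch directory such as php-1.34.0-wmf.17) and syncing it to the fleet. A minimal sketch of inspecting that map, assuming the compiled wikiversions.json layout and path used on the deployment hosts:

    # Hypothetical reader for the compiled wikiversions map; path and key format
    # follow the convention in the log entries above but are assumptions here.
    import json
    from collections import Counter

    def version_breakdown(path="/srv/mediawiki/wikiversions.json"):
        with open(path) as f:
            wikiversions = json.load(f)  # {"enwiki": "php-1.34.0-wmf.16", ...}
        counts = Counter(wikiversions.values())
        for version, n in counts.most_common():
            print(f"{version}: {n} wikis")
        return counts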
19:16 <reedy@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Revert Switch property terms migration to WRITE_NEW on client wikis T225053 (duration: 00m 58s) [production]
18:15 <jijiki> Restart hhvm and php-fpm on canary mw hosts [production]
17:54 <shdubsh> install2002 add fstab entry for /srv mount - T229997 [production]
17:46 <shdubsh> install2002 stop nginx and squid for resync /srv to spare disk and restore mount - T229997 [production]
17:42 <otto@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Retry - Revert "Switch high-traffic jobs to eventgate." (duration: 00m 58s) [production]
16:40 <mobrovac@deploy1001> Synchronized wmf-config/InitialiseSettings.php: JobQueue: Revert switching high-traffic jobs to eventgate (duration: 00m 55s) [production]
16:34 <mobrovac@deploy1001> scap failed: average error rate on 6/11 canaries increased by 10x (rerun with --force to override this check, see https://logstash.wikimedia.org/goto/db09a36be5ed3e81155041f7d46ad040 for details) [production]
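The aborted sync above tripped scap's canary check: error rates on the canary app servers rose roughly 10x after the change hit them, so the deploy stopped before reaching the rest of the fleet. The sketch below only illustrates the idea of that check (compare per-host error counts before and after the sync and abort above a threshold); it is not scap's actual implementation, and the logstash endpoint, index pattern, and field names are assumptions.

    # Hypothetical sketch of a canary error-rate comparison (not scap's real code).
    import requests

    def error_count(es_url, host, start, end):
        query = {"query": {"bool": {"must": [
            {"term": {"host": host}},
            {"term": {"level": "ERROR"}},
            {"range": {"@timestamp": {"gte": start, "lt": end}}},
        ]}}}
        resp = requests.get(f"{es_url}/logstash-*/_count", json=query, timeout=10)
        resp.raise_for_status()
        return resp.json()["count"]

    def canary_check(es_url, canaries, before, after, threshold=10.0):
        """Return True if the average error-rate increase stays below `threshold`x."""
        increases = []
        for host in canaries:
            old = error_count(es_url, host, *before) or 1  # avoid division by zero
            new = error_count(es_url, host, *after)
            increases.append(new / old)
        return sum(increases) / len(increases) < threshold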
16:00 <thcipriani> restarting jenkins for update [production]
15:58 <jijiki> restart nrpe on stat1004 [production]
15:08 <_joe_> freeing APCu on mw1270, which has degraded performance [production]
14:24 <marostegui> Reboot dbproxy2003 for kernel upgrades [production]
14:16 <jbond42> puppet *now* re-enabled [production]
14:16 <jbond42> puppet not re-enabled [production]
14:01 <jbond42> disable puppet fleet wide for puppetdb restart [production]
13:57 <marostegui> Remove labsdb1004 and labsdb1005 from zarcillo database (instance table), as those hosts were decommissioned months ago [production]
13:55 <marostegui> Remove labsdb1004 and labsdb1005 from zarcillo database, as those hosts were decommissioned months ago [production]
13:48 <marostegui> Apply grants for dbproxy1003 on m3 - T202367 [production]
13:22 <elukey> roll restart aqs on aqs100[4-9] to pick up new Druid backend settings [production]
11:48 <Amir1> EU SWAT is done [production]
11:37 <kart_> Updated cxserver to 2019-08-06-100812-production (T227571) [production]