2016-03-02
16:31 <mobrovac> restbase deploy of fb66dbf continuing on the rest of the nodes [production]
16:30 <thcipriani@tin> Synchronized php-1.27.0-wmf.15/extensions/ContentTranslation/includes/TranslationStorageManager.php: SWAT: Use correct timestamp for updates [[gerrit:274363]] (duration: 00m 59s) [production]
16:28 <urandom> starting post-bootstrap (1009-b) cleanup on restbase100{5,6,9-a}.eqiad.wmnet : T95253 [production]
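A rough sketch of what this kind of post-bootstrap cleanup involves, assuming plain nodetool on each affected host (the host list is taken from the entry above; WMF's per-instance nodetool wrappers for multi-instance hosts are not shown):

    # Hedged sketch: after the new instance (restbase1009-b) has joined the
    # ring, drop the replicas the existing instances no longer own.
    for host in restbase1005 restbase1006 restbase1009; do
        ssh "${host}.eqiad.wmnet" nodetool cleanup   # removes data for ranges handed off to the new node
    done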
16:25 <thcipriani@tin> Synchronized php-1.27.0-wmf.15/extensions/ContentTranslation/modules/widgets/translator/ext.cx.translator.js: SWAT: Translator widget: Fix js error if translator does not have recent contributions [[gerrit:274340]] (duration: 01m 05s) [production]
16:07 <thcipriani@tin> Synchronized wmf-config/InitialiseSettings.php: SWAT: Do not send Referer from private wikis [[gerrit:274414]] (duration: 01m 18s) [production]
15:53 <apergos> extending maintenance window for dataset1001 by one hour to 5 pm UTC [production]
15:53 <mobrovac> restbase deploy start of fb66dbf on restbase1001 [production]
15:44 <apergos> may extend the maintenance window for dataset1001 upgrade if headway can be made on PXE boot issues... 15 minutes left to decide [production]
15:16 <andrewbogott> rebooting californium just to make sure dist-upgrade didn’t mess up grub [production]
15:15 <gehel> elastic1007.eqiad.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
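The elastic10xx entries above and below record a one-node-at-a-time upgrade. A hedged sketch of a single step, assuming the stock Elasticsearch 1.x APIs on localhost:9200 and a Debian package (the exact package/version string is an assumption):

    # Keep the cluster from reshuffling shards while this node is down.
    curl -s -XPUT localhost:9200/_cluster/settings \
         -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'
    sudo service elasticsearch stop
    sudo apt-get install elasticsearch=1.7.5      # assumed package name/version
    sudo service elasticsearch start
    # Re-enable allocation and wait for green before moving to the next node.
    curl -s -XPUT localhost:9200/_cluster/settings \
         -d '{"transient":{"cluster.routing.allocation.enable":"all"}}'
    curl -s 'localhost:9200/_cluster/health?wait_for_status=green&timeout=30m'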
15:06 <andrewbogott> running apt-get dist-upgrade to upgrade californium packages to OpenStack Liberty [production]
15:02 <mobrovac> restbase reverting to fa1207e95, problems spotted in logstash [production]
14:58 <mobrovac> restbase deploy start of 5def2f8 on restbase1001 [production]
14:32 <gehel> elastic1006.eqiad.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
14:06 <godog> bootstrap restbase1010-a T128107 [production]
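For context on the bootstrap of restbase1010-a itself, a hedged sketch of how the join is usually watched (again omitting any per-instance nodetool wrapper):

    # A bootstrapping node shows up as UJ (Up/Joining) until it has streamed
    # its token ranges from the existing members.
    nodetool status
    nodetool netstats   # lists the active streams feeding the new instance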
14:03 <apergos> web service for dumps.wikimedia.org and download.wikimedia.org is now unavailable (upgrade of server to jessie) [production]
13:32 <apergos> nfs service for dataset1001 disabled (impacts users of stat100{2,3}) in prep for jessie upgrade [production]
13:23 <gehel> elastic1005.eqiad.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
13:13 <_joe_> re-enabled puppet on scb1002, repooled scb1001 for mobileapps [production]
13:10 <mobrovac> mobileapps re-deploying d384f1ba for T113542 [production]
12:33 <bblack> restarted logstash on logstash1002 [production]
12:32 <mobrovac> mobileapps stopping (again) the service on scb1001 for debugging, T113542 [production]
12:29 <bblack> restarted logstash on logstash1001 [production]
12:27 <_joe_> puppet disabled on both scb1001/2, depooled scb1001 for mobrovac to test, and config manually patched on scb1002 so that it runs correctly with the old code [production]
12:25 <mobrovac> mobileapps rolling back to 68e38ec7, problems found in the latest deploy for T113542 [production]
12:00 <mobrovac> mobileapps stopping the service on scb1001 for debug purposes, T113542 [production]
11:56 <_joe_> stopped puppet on scb1002, depooled scb1001 from mobileapps [production]
11:36 <mobrovac> mobileapps deploying d384f1ba [production]
11:09 <jynus> profiling db1023 and db1061 for 24 hours; 1/20th of the queries will be slightly slower [production]
10:42 <moritzm> restarting graphite-web on graphite1001 (for django security update) [production]
10:42 <hashar> Zuul should no longer be caught in a death loop due to Depends-On on an event-schemas change. Hole filled with https://gerrit.wikimedia.org/r/#/c/274356/ T128569 [production]
10:36 <elukey> stopped Redis multi-instance on rdb1006 (Job Queue slave) as pre-step for Debian re-image [production]
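A hedged sketch of the pre-image step above, assuming one redis-server unit per port (the port list and the service names are assumptions, not the actual WMF unit names):

    for port in 6378 6379 6380 6381; do
        sudo service "redis-instance-tcp_${port}" stop
    done
    pgrep -a redis-server || echo "all Redis instances stopped"

As a job-queue slave, rdb1006 can be drained this way without interrupting the queue masters while the host is re-imaged.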
10:16 <gehel> elastic1004.eqiad.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
09:43 <volans> Cloning es2005->es2014, es2007->es2016, es2009->es2018, see T127330 [production]
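A generic, hedged sketch of cloning a depooled external-storage MariaDB host onto a new one (the paths, the port, and the tar-over-netcat transport are assumptions, not the exact WMF tooling):

    # On the destination (es2014), receive the datadir:
    nc -l -p 9210 | tar -xf - -C /srv/sqldata    # netcat-traditional listen syntax
    # On the source (es2005), with mysqld stopped and the host depooled:
    tar -cf - -C /srv/sqldata . | nc es2014.codfw.wmnet 9210

The 09:16 db-codfw.php sync below is the depool that makes it safe to stop mysqld on the source hosts.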
09:30 <moritzm> installing nodejs updates on restbase* [production]
09:19 <elukey> redis multi-instance stopped on rdb1004 (jobqueue slave) as pre-step for Debian re-image [production]
09:16 <volans@tin> Synchronized wmf-config/db-codfw.php: Depooling external storage DBs in codfw for migration: T127330 (duration: 01m 24s) [production]
09:13 <hashar> Zuul went crazy / got caught in a loop of doom, same as on Saturday. It came back on its own at 08:32 UTC T128569 [production]
08:48 <gehel> elastic1003.eqiad.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
08:33 <moritzm> installing Django security updates [production]
08:17 <_joe_> disabling puppet on all memcached hosts in preparation for enabling ipsec [production]
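A hedged sketch of how a fleet-wide puppet disable like this is typically issued from the salt master (the 'mc1*' target glob is an assumption about the memcached host naming):

    # puppet agent --disable records a lock with a reason, so other operators
    # can see why runs are paused on these hosts.
    sudo salt 'mc1*' cmd.run \
        "puppet agent --disable 'prep for enabling ipsec'"

Re-enabling later is the same call with puppet agent --enable.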
07:35 <legoktm@tin> Synchronized wmf-config/InitialiseSettings.php: Disable $wgReferrerPolicy on private wikis (duration: 01m 01s) [production]
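This entry and the 16:07 re-sync above come from scap's file-level sync on the deployment host. A hedged sketch of the operator side, assuming the usual staging path on tin:

    # Pull the merged wmf-config change and push that one file to the fleet;
    # sync-file is what writes the "Synchronized ..." line to this log.
    cd /srv/mediawiki-staging
    git pull
    sync-file wmf-config/InitialiseSettings.php \
        'Disable $wgReferrerPolicy on private wikis'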
06:45 <_joe_> rebooting serpens [production]
03:04 <l10nupdate@tin> ResourceLoader cache refresh completed at Wed Mar 2 03:04:14 UTC 2016 (duration 8m 49s) [production]
02:55 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.15) (duration: 09m 31s) [production]
02:29 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.14) (duration: 12m 32s) [production]
00:45 <krenair@tin> Synchronized portals: https://gerrit.wikimedia.org/r/#/c/274316/ - try #2, this time with the submodule update (duration: 01m 17s) [production]
00:44 <krenair@tin> Synchronized portals/prod/wikipedia.org/assets: https://gerrit.wikimedia.org/r/#/c/274316/ - try #2, this time with the submodule update (duration: 01m 16s) [production]
00:31 <krenair@tin> Synchronized portals: https://gerrit.wikimedia.org/r/#/c/274316/ (duration: 01m 18s) [production]
00:30 <krenair@tin> Synchronized portals/prod/wikipedia.org/assets: https://gerrit.wikimedia.org/r/#/c/274316/ (duration: 01m 18s) [production]