2016-02-20
03:10 <l10nupdate@tin> ResourceLoader cache refresh completed at Sat Feb 20 03:10:55 UTC 2016 (duration 8m 53s) [production]
03:02 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.14) (duration: 10m 48s) [production]
02:32 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.13) (duration: 13m 16s) [production]
00:45 <ori@tin> Finished scap: I2a66b40e4c6: add a dependency on xhprof/xhgui (duration: 50m 12s) [production]
00:32 <ori> Restarted HHVM on mw1248 (locked up, T89912). [production]
2016-02-19
23:55 <ori@tin> Started scap: I2a66b40e4c6: add a dependency on xhprof/xhgui [production]
23:43 <catrope@tin> Synchronized wmf-config/InitialiseSettings.php: Enable SecurePoll poll creation on officewiki (duration: 02m 21s) [production]
23:11 <ori@mira> scap failed: ValueError /srv/mediawiki-staging/multiversion/vendor/slim/slim/tests/templates/test.php has content before opening <?php tag (duration: 00m 17s) [production]
23:11 <ori@mira> Started scap: I2a66b40e4c6: add a dependency on xhprof/xhgui [production]
20:23 <chasemp> reboot labvirt1002 [production]
17:01 <andrewbogott> reenabling puppet on labservices1001 and restoring original dns settings, pending merge of https://gerrit.wikimedia.org/r/#/c/271797/ [production]
16:55 <chasemp> reboot labvirt1011 [production]
16:09 <elukey> rebooted kafka2001.codfw.wmnet for kernel upgrade [production]
15:59 <elukey> rebooted kafka2002.codfw for kernel upgrade [production]
15:59 <moritzm> rebooting osmium [production]
15:49 <elukey> added kafka1002 back to eventbus pool via confctl [production]
15:48 <moritzm> rolling restart of maps cluster for glibc update [production]
15:42 <elukey> removed kafka1002.eqiad.wmnet from eventbus pool via confctl [production]
15:38 <elukey> re-added kafka1001.eqiad.wmnet back to eventbus' pool via confctl [production]
15:34 <elukey> rebooting kafka1001 for kernel upgrade [production]
15:29 <elukey> removed kafka1001 from eventbus via confctl [production]
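The kafka1001/kafka1002 entries above (read bottom-up: depool, reboot, repool) follow a standard confctl depool/repool cycle around a kernel-upgrade reboot. A minimal sketch, assuming conftool's `confctl select … set/pooled=…` form; the exact selector keys (`name=`, `service=eventbus`) are assumptions inferred from the log wording, not a confirmed invocation:

```shell
# Hypothetical sketch of the depool -> reboot -> repool cycle logged above.
# Selector keys are assumptions based on the log entries, not verified syntax.
confctl select 'name=kafka1001.eqiad.wmnet,service=eventbus' set/pooled=no
# ...wait for in-flight traffic to drain, then reboot for the kernel upgrade...
sudo reboot
# ...once the host is back and healthy, re-add it to the eventbus pool:
confctl select 'name=kafka1001.eqiad.wmnet,service=eventbus' set/pooled=yes
```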
15:14 <andrewbogott> restarting pdns on labservices1001 again [production]
15:02 <andrewbogott> restarting pdns on labservices1001 [production]
15:00 <paravoid> setting up (e)BGP sessions between ulsfo-codfw [production]
14:52 <apergos> labstore1001 issues were (again) cluebot writing to its error log. I chowned that log to root and left a README file in the directory with an explanation plus a pointer to us here if they have questions/need help. [production]
14:48 <moritzm> rolling restart of swift-proxy in eqiad [production]
14:27 <volans> Restarting MariaDB on es2001 (still depooled) [T127330] [production]
14:27 <paravoid> cr2-knams: re-activating BGP with 1257 [production]
14:12 <jynus> restarting mysql at dbstore1001 [production]
13:59 <moritzm> restarting salt-master on neodymium [production]
13:49 <elukey> puppet re-enabled on analytics1027 [production]
13:37 <moritzm> rebooting mira [production]
13:06 <moritzm> restarting slapd on dubnium/pollux [production]
12:48 <moritzm> restarting apache on uranium [production]
12:44 <elukey> puppet stopped on analytics1027 for issues with the cluster [production]
11:40 <godog> start a new ^global- swift container replication eqiad -> codfw [production]
11:39 <akosiaris> reboot cygnus, stuck in 100% IOwait [production]
11:13 <moritzm> rolling restart of aqs cluster for glibc update [production]
11:08 <mark> correction: Reduced sync_speed_max to 100000 (half) for md125 on labstore1001 [production]
11:07 <mark> Reduced sync_speed_max to 100000 (half) for md126 on labstore1001 [production]
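The two `sync_speed_max` entries above correspond to the Linux md driver's per-array sysfs throttle (value in KiB/s). A hedged sketch of how the logged change might be applied on labstore1001, assuming the standard md sysfs layout; the "(half)" in the log implies the previous cap was 200000:

```shell
# Inspect the current resync/check speed cap for md126 (Linux md sysfs knob).
cat /sys/block/md126/md/sync_speed_max
# Halve it to 100000 KiB/s as logged; writing here requires root.
echo 100000 > /sys/block/md126/md/sync_speed_max
# The 11:08 correction entry applied the same change to md125.
echo 100000 > /sys/block/md125/md/sync_speed_max
```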
11:02 <moritzm> restarted gerrit on ytterbium (actual restart happened ten minutes earlier than this log entry, though) [production]
10:22 <moritzm> rolling restart of cassandra on restbase/eqiad [production]
10:04 <jynus> restarting db021 slave, will be testing/depooled for a while [production]
10:00 <jynus> purging requested rows on eventlogging Edit table (m4) at db1046, db1047, dbstore1002 and dbstore2002 [production]
09:59 <moritzm> rolling restart of cassandra on restbase/codfw [production]
09:03 <_joe_> idled the raid check for md123 on labstore1001 [production]
08:50 <_joe_> killed the backup rsync on labstore1001 to alleviate the high io load [production]
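Idling the RAID check on md123 (logged at 09:03) is likewise done through the md sysfs interface; a sketch under the assumption that the standard `sync_action` control file was used:

```shell
# Show what the array is currently doing ("check", "resync", "idle", ...).
cat /sys/block/md123/md/sync_action
# Writing "idle" aborts the in-progress check, relieving I/O load.
echo idle > /sys/block/md123/md/sync_action
```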
03:00 <l10nupdate@tin> ResourceLoader cache refresh completed at Fri Feb 19 03:00:47 UTC 2016 (duration 9m 10s) [production]
02:51 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.14) (duration: 11m 18s) [production]
02:43 <mutante> powercycle cp2017 [production]