2016-05-04
12:10 <bblack> cache_upload HTTP/2 switch process complete [production]
12:08 <gehel> restarting elasticsearch server elastic2021.codfw.wmnet (T110236) [production]
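The elastic20xx entries throughout this section are one rolling restart of the codfw Elasticsearch cluster. A minimal sketch of a single iteration, assuming direct curl access to the cluster API (the exact tooling behind T110236 is not shown in the log):

    # disable shard allocation so the cluster doesn't rebalance during the bounce
    curl -s -XPUT http://elastic2021.codfw.wmnet:9200/_cluster/settings \
        -d '{"transient": {"cluster.routing.allocation.enable": "none"}}'

    sudo service elasticsearch restart

    # re-enable allocation, then wait for green before moving to the next node
    curl -s -XPUT http://elastic2021.codfw.wmnet:9200/_cluster/settings \
        -d '{"transient": {"cluster.routing.allocation.enable": "all"}}'
    until curl -s http://elastic2021.codfw.wmnet:9200/_cluster/health | grep -q '"status":"green"'; do
        sleep 10
    done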
12:00 <bblack> starting cache_upload HTTP/2 switch process [production]
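At the nginx level, the HTTP/2 switch is a one-word change to the TLS listener once a new enough package (>= 1.9.5) is installed. A sketch of the idea, with an illustrative config path (the real change was deployed via puppet):

    # replace the SPDY listener with HTTP/2, validate, and reload
    sudo sed -i 's/listen 443 ssl spdy;/listen 443 ssl http2;/' \
        /etc/nginx/sites-enabled/upload.wikimedia.org
    sudo nginx -t && sudo service nginx reload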
11:51 <gehel> restarting elasticsearch server elastic2020.codfw.wmnet (T110236) [production]
11:16 <elukey> updating Pybal/LVS for codfw eventbus on lvs2003 [production]
11:09 <moritzm> removed obsolete mediawiki-math-texvc/imagemagick from nobelium [production]
11:03 <gehel> restarting elasticsearch server elastic2019.codfw.wmnet (T110236) [production]
10:33 <gehel> restarting elasticsearch server elastic2018.codfw.wmnet (T110236) [production]
10:29 <moritzm> rolling restart of parsoid in eqiad to pick up openssl update [production]
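A sketch of how such a fleet-wide rolling restart can be driven, assuming salt (the orchestration tool in use at WMF at the time); the target glob and batch size are illustrative:

    # restart parsoid across the eqiad fleet, 10% of hosts at a time
    salt --batch-size 10% 'wtp1*' service.restart parsoid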
10:25 <elukey> updating pybal/LVS with codfw eventbus config on lvs2006 [production]
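One way to sanity-check such a change, assuming PyBal's instrumentation HTTP interface is enabled (port 9090 is its WMF default; the pool name here is a guess):

    # list configured pools, then inspect the new eventbus pool's backends
    curl -s http://lvs2006.codfw.wmnet:9090/pools
    curl -s http://lvs2006.codfw.wmnet:9090/pools/eventbus_8085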
10:23 <jynus> restarting db1058 for reimaging to jessie T125028 [production]
10:05 <gehel> restarting elasticsearch server elastic2017.codfw.wmnet (T110236) [production]
09:59 <godog> root@tin:/# lvresize -r -v --size +30G /dev/mapper/tin--vg-root [production]
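The -r flag resizes the filesystem together with the logical volume (lvresize hands off to fsadm internally), so no separate resize2fs step is needed. A quick before/after check around that command:

    sudo vgs tin-vg    # VFree must show at least 30G before the resize
    sudo lvresize -r -v --size +30G /dev/mapper/tin--vg-root
    df -h /            # root filesystem should now be ~30G larger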
09:44 <kart_> Updated cxserver to 45596ac [production]
09:15 <jynus> restarting enwiki-labs reimports (lag could happen temporarily) [production]
08:42 <gehel> restarting elasticsearch server elastic2016.codfw.wmnet (T110236) [production]
08:19 <jynus> stopping db1058 mysql for backup in preparation for reimage [production]
08:14 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Depool db1058 for reimage (duration: 00m 36s) [production]
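The depool itself is a mediawiki-config edit pushed from tin with scap's sync-file; a sketch of the workflow, assuming the usual weighted server lists in db-eqiad.php:

    # after merging a wmf-config commit that removes (or zero-weights) the
    # 'db1058' => <weight> entry in its section of wmf-config/db-eqiad.php:
    sync-file wmf-config/db-eqiad.php 'Depool db1058 for reimage'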
06:20 <gehel> restarting elasticsearch server elastic2015.codfw.wmnet (T110236) [production]
03:06 <l10nupdate@tin> ResourceLoader cache refresh completed at Wed May 4 03:06:41 UTC 2016 (duration 9m 32s) [production]
02:57 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.23) (duration: 17m 12s) [production]
02:23 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.22) (duration: 09m 19s) [production]
00:23 <Dereckson> mwscript namespaceDupes.php aswikisource --merge --fix (T133505) [production]
00:21 <kaldari> ran mwscript maintenance/updateCollation.php --wiki=ruwiktionary --force [production]
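Both of these go through mwscript, the wrapper that runs a MediaWiki maintenance script against a named wiki. namespaceDupes.php reports only unless told otherwise, so a dry run is cheap:

    # report-only pass: lists colliding titles without changing anything
    mwscript namespaceDupes.php aswikisource
    # the logged run: merge colliding pages where possible, fix the rest
    mwscript namespaceDupes.php aswikisource --merge --fix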
00:02 <dereckson@tin> Synchronized php-1.27.0-wmf.23/extensions/Graph/extension.json: Graph: match modern module loading in core (3/3) (duration: 00m 26s) [production]
00:02 <dereckson@tin> Synchronized php-1.27.0-wmf.23/extensions/Graph/lib/topojson-global.js: Graph: match modern module loading in core (2/3) (duration: 00m 26s) [production]
00:01 <dereckson@tin> Synchronized php-1.27.0-wmf.23/extensions/Graph/lib/d3-global.js: Graph: match modern module loading in core (1/3) (duration: 00m 26s) [production]
2016-05-03
23:55 <dereckson@tin> Synchronized php-1.27.0-wmf.22/extensions/Graph/extension.json: Graph: match modern module loading in core (3/3) (duration: 00m 25s) [production]
23:55 <dereckson@tin> Synchronized php-1.27.0-wmf.22/extensions/Graph/lib/topojson-global.js: Graph: match modern module loading in core (2/3) (duration: 00m 25s) [production]
23:54 <dereckson@tin> Synchronized php-1.27.0-wmf.22/extensions/Graph/lib/d3-global.js: Graph: match modern module loading in core (1/3) (duration: 00m 26s) [production]
23:40 <bblack> slow, depooled, staggered restart of varnish frontends on text and upload clusters commencing [production]
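"Slow, depooled, staggered" means each frontend leaves the load balancer before its varnish is bounced and rejoins afterwards. A sketch of one iteration; depool/pool stand in for whatever pooling mechanism was actually used (hypothetical helper names):

    depool                                  # take this frontend out of LVS
    sleep 300                               # let in-flight requests drain
    sudo service varnish-frontend restart
    pool                                    # return to rotation
    sleep 60                                # stagger before the next host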
23:23 <dereckson@tin> Synchronized wmf-config/CirrusSearch-production.php: Cirrus: only use pooled curl in hhvm / [[Gerrit:286485]] (duration: 00m 34s) [production]
23:09 <dereckson@tin> Synchronized wmf-config/CommonSettings-labs.php: Revert Don't yet allow wikidatasparql graph urls (no op in prod) (duration: 00m 25s) [production]
23:08 <dereckson@tin> Synchronized wmf-config/CommonSettings.php: Revert Don't yet allow wikidatasparql graph urls (T126741) (duration: 00m 26s) [production]
22:47 <bblack> upgrading varnish3 package on cache_text ... [production]
22:46 <andrewbogott> restarting rabbitmq on labcontrol1001 to pick up a new ulimit [production]
21:55 <bblack> stopped varnishkafka on all cache_upload, and wiped out the spammy junk it fills the disk with in /var/cache/varnishkafka/ [production]
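Roughly what that cleanup amounts to on each cache_upload host:

    sudo service varnishkafka stop
    # wipe the on-disk spool that was filling the disk
    sudo rm -rf /var/cache/varnishkafka/*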
21:45 <ebernhardson@tin> Synchronized php-1.27.0-wmf.23/includes/search/SearchEngine.php: T134305 Fix invalid namespace handling in wmf.23 (duration: 00m 38s) [production]
21:37 <bblack> cache_upload: rolling varnishd (backend only) restarts for package update [production]
21:31 <bblack> upgrading varnish3 package on cache_upload [production]
21:11 <gehel> restarting wdqs1002 (T134238) [production]
20:02 <hashar> Restarting Jenkins [production]
19:47 <gehel> restarting elasticsearch server elastic2014.codfw.wmnet (T110236) [production]
19:11 <hashar> group0 to 1.27.0-wmf.23 is complete. [production]
19:01 <hashar@tin> rebuilt wikiversions.php and synchronized wikiversions files: group0 to 1.27.0-wmf.23 T131557 [production]
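That log line is the signature of scap's wikiversions sync, which regenerates wikiversions.php from wikiversions.json and rsyncs the result to the app servers; invoked on tin roughly as:

    sync-wikiversions 'group0 to 1.27.0-wmf.23 T131557'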
19:01 <gehel> restarting elasticsearch server elastic2013.codfw.wmnet (T110236) [production]
18:26 <bblack> cache_misc: rolling varnishd restarts for package update [production]
18:23 <bblack> upgrading varnish3 package on cache_misc [production]
18:17 <bblack> HTTP/2 enable for cache_misc (nginx upgrade - T96848) [production]
17:59 <gehel> restarting elasticsearch server elastic2012.codfw.wmnet (T110236) [production]