2016-05-04
14:12 <gehel> restarting elasticsearch server elastic2024.codfw.wmnet (T110236) [production]
13:49 <jmm@palladium> conftool action : set/pooled=yes; selector: sca1002.eqiad.wmnet [production]
13:49 <jmm@palladium> conftool action : set/pooled=no; selector: sca1002.eqiad.wmnet [production]
13:42 <jmm@palladium> conftool action : set/pooled=yes; selector: sca1001.eqiad.wmnet [production]
13:41 <jmm@palladium> conftool action : set/pooled=no; selector: sca1001.eqiad.wmnet [production]
13:40 <moritzm> rolling restart of apertium in sca1* for openssl update [production]
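    (The 13:40–13:49 entries above follow the usual conftool pattern for a rolling restart: depool a host, restart the service, repool it, then move to the next host. A minimal sketch of that sequence, assuming the standard confctl front-end to conftool; only the hostnames come from the log, the service unit name is an assumption:)
      # depool sca1001 from the load balancer (confctl front-end assumed)
      confctl select 'name=sca1001.eqiad.wmnet' set/pooled=no
      # restart the service to pick up the openssl update
      # (unit name "apertium-apy" is an assumption, not taken from the log)
      sudo service apertium-apy restart
      # repool once the service is healthy, then repeat for sca1002.eqiad.wmnet
      confctl select 'name=sca1001.eqiad.wmnet' set/pooled=yes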
13:24 <gehel> restarting elasticsearch server elastic2023.codfw.wmnet (T110236) [production]
12:50 <gehel> restarting elasticsearch server elastic2022.codfw.wmnet (T110236) [production]
12:34 <gehel> restarting blazegraph (T134238) [production]
12:23 <bblack> cache_text HTTP/2 switch process complete [production]
12:12 <bblack> starting cache_text HTTP/2 switch process [production]
12:10 <bblack> cache_upload HTTP/2 switch process complete [production]
12:08 <gehel> restarting elasticsearch server elastic2021.codfw.wmnet (T110236) [production]
12:00 <bblack> starting cache_upload HTTP/2 switch process [production]
11:51 <gehel> restarting elasticsearch server elastic2020.codfw.wmnet (T110236) [production]
11:16 <elukey> updating Pybal/LVS for codfw eventbus on lvs2003 [production]
11:09 <moritzm> removed obsolete mediawiki-math-texvc/imagemagick from nobelium [production]
11:03 <gehel> restarting elasticsearch server elastic2019.codfw.wmnet (T110236) [production]
10:33 <gehel> restarting elasticsearch server elastic2018.codfw.wmnet (T110236) [production]
10:29 <moritzm> rolling restart of parsoid in eqiad to pick up openssl update [production]
10:25 <elukey> updating pybal/LVS with codfw eventbus config on lvs2006 [production]
10:23 <jynus> restarting db1058 for reimaging to jessie T125028 [production]
10:05 <gehel> restarting elasticsearch server elastic2017.codfw.wmnet (T110236) [production]
09:59 <godog> root@tin:/# lvresize -r -v --size +30G /dev/mapper/tin--vg-root [production]
09:44 <kart_> Updated cxserver to 45596ac [production]
09:15 <jynus> restarting enwiki-labs reimports (lag could happen temporarily) [production]
08:42 <gehel> restarting elasticsearch server elastic2016.codfw.wmnet (T110236) [production]
08:19 <jynus> stopping db1058 mysql for backup in preparation for reimage [production]
08:14 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Depool db1058 for reimage (duration: 00m 36s) [production]
06:20 <gehel> restarting elasticsearch server elastic2015.codfw.wmnet (T110236) [production]
03:06 <l10nupdate@tin> ResourceLoader cache refresh completed at Wed May 4 03:06:41 UTC 2016 (duration 9m 32s) [production]
02:57 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.23) (duration: 17m 12s) [production]
02:23 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.22) (duration: 09m 19s) [production]
00:23 <Dereckson> mwscript namespaceDupes.php aswikisource --merge --fix (T133505) [production]
00:21 <kaldari> ran mwscript maintenance/updateCollation.php --wiki=ruwiktionary --force [production]
00:02 <dereckson@tin> Synchronized php-1.27.0-wmf.23/extensions/Graph/extension.json: Graph: match modern module loading in core (3/3) (duration: 00m 26s) [production]
00:02 <dereckson@tin> Synchronized php-1.27.0-wmf.23/extensions/Graph/lib/topojson-global.js: Graph: match modern module loading in core (2/3) (duration: 00m 26s) [production]
00:01 <dereckson@tin> Synchronized php-1.27.0-wmf.23/extensions/Graph/lib/d3-global.js: Graph: match modern module loading in core (1/3) (duration: 00m 26s) [production]
2016-05-03
23:55 <dereckson@tin> Synchronized php-1.27.0-wmf.22/extensions/Graph/extension.json: Graph: match modern module loading in core (3/3) (duration: 00m 25s) [production]
23:55 <dereckson@tin> Synchronized php-1.27.0-wmf.22/extensions/Graph/lib/topojson-global.js: Graph: match modern module loading in core (2/3) (duration: 00m 25s) [production]
23:54 <dereckson@tin> Synchronized php-1.27.0-wmf.22/extensions/Graph/lib/d3-global.js: Graph: match modern module loading in core (1/3) (duration: 00m 26s) [production]
23:40 <bblack> slow, depooled, staggered restart of varnish frontends on text and upload clusters commencing [production]
23:23 <dereckson@tin> Synchronized wmf-config/CirrusSearch-production.php: Cirrus: only use pooled curl in hhvm / [[Gerrit:286485]] (duration: 00m 34s) [production]
23:09 <dereckson@tin> Synchronized wmf-config/CommonSettings-labs.php: Revert Don't yet allow wikidatasparql graph urls (no op in prod) (duration: 00m 25s) [production]
23:08 <dereckson@tin> Synchronized wmf-config/CommonSettings.php: Revert Don't yet allow wikidatasparql graph urls (T126741) (duration: 00m 26s) [production]
22:47 <bblack> upgrading varnish3 package on cache_text ... [production]
22:46 <andrewbogott> restarting rabbitmq on labcontrol1001 to pick up a new ulimit [production]
21:55 <bblack> stopped varnishkafka on all cache_upload, and wiped out the spammy junk it fills the disk with in /var/cache/varnishkafka/ [production]
21:45 <ebernhardson@tin> Synchronized php-1.27.0-wmf.23/includes/search/SearchEngine.php: T134305 Fix invalid namespace handling in wmf.23 (duration: 00m 38s) [production]
21:37 <bblack> cache_upload: rolling varnishd (backend only) restarts for package update [production]