2016-04-28
21:41 <twentyafterfour> added usleep(200000); to slow down the phabricator import even further. [production]
21:32 <twentyafterfour> reduced phabricator taskmaster processes to 1 [production]
21:08 <gehel> restarting elasticsearch server elastic1012.eqiad.wmnet (T110236) [production]
19:47 <gehel> restarting elasticsearch server elastic1011.eqiad.wmnet (T110236) [production]
19:15 <jynus> manually rotating db1038's error log [production]
19:10 <hashar> 1.27.0-wmf.22 deployed. Uneventful. [production]
19:00 <hashar@tin> rebuilt wikiversions.php and synchronized wikiversions files: all wikis to 1.27.0-wmf.22 [production]
18:42 <catrope@tin> Synchronized php-1.27.0-wmf.22/extensions/Echo/: Fix fatal T133921 (duration: 00m 32s) [production]
18:19 <gehel> restarting elasticsearch server elastic1010.eqiad.wmnet (T110236) [production]
18:08 <mattflaschen@tin> Synchronized wmf-config/db-labs.php: Beta Cluster change (duration: 00m 37s) [production]
17:40 <yurik> deployed and restarted kartotherian & tilerator [production]
16:57 <gehel> restarting elasticsearch server elastic1009.eqiad.wmnet (T110236) [production]
16:41 <ejegg> updated payments-wiki from 16ed5af8c8544ea1c8d837ae16585eba4cbbfd4e to c502ab2f6b6ff914d67503a664d36076fdc32dcf [production]
16:26 <twentyafterfour> further reduced the queue worker count on phabricator, to relieve stress on mysql m3 db1048 [production]
16:17 <bblack> starting SPDY stats sample on 8x caches for 24H - T96848 [production]
16:15 <gehel> restarting elasticsearch server elastic1008.eqiad.wmnet (T110236) [production]
15:35 <elukey> installed memcached 1.4.25-2 (Debian sid/testing) in mc2009 as part of performance test (T129963) [production]
15:27 <thcipriani@tin> Synchronized wmf-config/CommonSettings.php: SWAT: Math: increase the number of concurrent connections to 150 [[gerrit:283269]] (duration: 00m 35s) [production]
15:27 <gehel@palladium> conftool action : get/pooled; selector: elastic1001.eqiad.wmnet [production]
15:23 <elukey> puppet disabled on mc2009 as preparation step for https://gerrit.wikimedia.org/r/#/c/284907 [production]
15:12 <gehel> restarting elasticsearch server elastic1007.eqiad.wmnet (T110236) [production]
15:05 <jynus> restarting db1038 for reimage to jessie [production]
14:32 <gehel> wdqs-updater started on wdqs1002 (T133566) [production]
14:25 <bblack> started SPDY stats sample on 8x caches - T96848#2248582 [production]
14:25 <elukey> deployed new zookeeper nodes in codfw (conf200[123]) [production]
13:59 <gehel> restarting elasticsearch server elastic1006.eqiad.wmnet (T110236) [production]
13:23 <bblack> rebooting cp1008 [production]
12:50 <gehel> restarting elasticsearch server elastic1005.eqiad.wmnet (T110236) [production]
12:33 <moritzm> upgrade/rolling restart of mediawiki canaries for pcre upgrade [production]
12:31 <volans> Increase eqiad masters expire_logs_days (according to available space) T133333 [production]
12:31 <jynus> restarting sanitarium:s3 instance- query stuck again [production]
12:04 <gehel> restarting elasticsearch server elastic1004.eqiad.wmnet (T110236) [production]
11:25 <moritzm> uploaded varnish 3.0.6plus-wm9 to carbon for jessie-wikimedia [production]
11:19 <volans> cleaning up some space on puppet-compiler host [production]
11:14 <moritzm> upgraded varnish on cp1008 to 3.0.7 (except one patch) [production]
11:14 <gehel> restarting elasticsearch server elastic1003.eqiad.wmnet (T110236) [production]
11:03 <jynus> backing up db1038 data to dbstore1002 [production]
10:50 <jynus> stopping and restarting db1038 for backup and upgrade T125028 [production]
10:41 <jynus> running update table on eventlogging database on the master (db1046) T108856 [production]
10:39 <elukey@palladium> conftool action : set/pooled=yes; selector: aqs1001.eqiad.wmnet [production]
10:32 <hoo> Set new email for global user "Sebschlicht" per https://meta.wikimedia.org/w/index.php?oldid=15564713#Sebschlicht2.40global and private communication [production]
10:31 <moritzm> installing PHP updates for jessie [production]
09:46 <gehel> restarting elasticsearch server elastic1002.eqiad.wmnet (T110236) [production]
09:23 <jynus> removing unused mysql-server-5.5 from holmium (keeping database just in case) T128737 [production]
09:10 <elukey@palladium> conftool action : set/pooled=no; selector: aqs1001.eqiad.wmnet [production]
09:03 <moritzm> remove obsolete mysql 5.5 installations from mw1022, mw1023, mw1024, mw1025, mw1114 and mw1163 [production]
09:00 <gehel> restarting elasticsearch server elastic1001.eqiad.wmnet (T110236) [production]
08:59 <gehel> starting rolling restart of elasticsearch cluster in eqiad (T110236) [production]
08:58 <oblivian@palladium> conftool action : set/weight=10; selector: name=mw2018.codfw.wmnet [production]
08:57 <oblivian@palladium> conftool action : set/weight=12; selector: name=mw2018.codfw.wmnet [production]