2016-04-28
12:33 <moritzm> upgrade/rolling restart of mediawiki canaries for pcre upgrade [production]
12:31 <volans> Increase eqiad masters expire_logs_days (according to available space) T133333 [production]
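    (A binlog-retention change like the one above is normally applied on each master with something along these lines; the host and the retention value here are illustrative, not the ones actually used.)
        mysql -h MASTER_HOST.eqiad.wmnet -e "SET GLOBAL expire_logs_days = 10;"
        # and persisted in the instance's my.cnf (expire_logs_days = 10) so it survives a restart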
12:31 <jynus> restarting sanitarium:s3 instance - query stuck again [production]
12:04 <gehel> restarting elasticsearch server elastic1004.eqiad.wmnet (T110236) [production]
11:25 <moritzm> uploaded varnish 3.0.6plus-wm9 to carbon for jessie-wikimedia [production]
11:19 <volans> cleaning up some space on puppet-compiler host [production]
11:14 <moritzm> upgraded varnish on cp1008 to 3.0.7 (except one patch) [production]
11:14 <gehel> restarting elasticsearch server elastic1003.eqiad.wmnet (T110236) [production]
11:03 <jynus> backing up db1038 data to dbstore1002 [production]
10:50 <jynus> stopping and restarting db1038 for backup and upgrade T125028 [production]
10:44 <joal> deployed aqs on all three nodes (Thanks elukey !!!!) [analytics]
10:41 <jynus> running update table on eventlogging database on the master (db1046) T108856 [production]
10:39 <elukey@palladium> conftool action : set/pooled=yes; selector: aqs1001.eqiad.wmnet [production]
10:32 <hoo> Set new email for global user "Sebschlicht" per https://meta.wikimedia.org/w/index.php?oldid=15564713#Sebschlicht2.40global and private communication [production]
10:31 <moritzm> installing PHP updates for jessie [production]
09:46 <gehel> restarting elasticsearch server elastic1002.eqiad.wmnet (T110236) [production]
09:23 <jynus> removing unused mysql-server-5.5 from holmium (keeping database just in case) T128737 [production]
09:10 <elukey@palladium> conftool action : set/pooled=no; selector: aqs1001.eqiad.wmnet [production]
09:03 <moritzm> remove obsolete mysql 5.5 installations from mw1022, mw1023, mw1024, mw1025, mw1114 and mw1163 [production]
09:03 <joal> Deploying aqs on aqs1001 [analytics]
09:00 <gehel> restarting elasticsearch server elastic1001.eqiad.wmnet (T110236) [production]
08:59 <gehel> starting rolling restart of elasticsearch cluster in eqiad (T110236) [production]
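    (The elasticsearch restarts logged above follow the usual rolling-restart pattern; a minimal per-node sketch, in which the hostname, port and exact settings are assumptions rather than the commands actually run:)
        # disable replica re-allocation before stopping the node
        curl -XPUT 'http://elastic1001.eqiad.wmnet:9200/_cluster/settings' \
            -d '{"transient": {"cluster.routing.allocation.enable": "none"}}'
        sudo service elasticsearch restart
        # once the node has rejoined, re-enable allocation and wait for the cluster to go green
        curl -XPUT 'http://elastic1001.eqiad.wmnet:9200/_cluster/settings' \
            -d '{"transient": {"cluster.routing.allocation.enable": "all"}}'
        curl 'http://elastic1001.eqiad.wmnet:9200/_cluster/health?wait_for_status=green&timeout=30m'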
08:58 <oblivian@palladium> conftool action : set/weight=10; selector: name=mw2018.codfw.wmnet [production]
08:57 <oblivian@palladium> conftool action : set/weight=12; selector: name=mw2018.codfw.wmnet [production]
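    (The "conftool action" entries above, both the aqs1001 pooled changes and the mw2018 weight changes, correspond to confctl invocations of roughly this shape; the select syntax and quoting are assumptions:)
        confctl select 'name=mw2018.codfw.wmnet' set/weight=10
        confctl select 'name=aqs1001.eqiad.wmnet' set/pooled=no    # depool before deploy
        confctl select 'name=aqs1001.eqiad.wmnet' set/pooled=yes   # repool afterwards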
08:14 <elukey> restarting kafka on kafka{1012,1014,1022,1020,2001,2002} for Java upgrades. EL will be restarted as well (sigh) [analytics]
08:12 <elukey> restarting kafka on kafka{1012,1014,1022,1020,2001,2002} for Java upgrades. Will probably trigger some EventLogging alarms due to a bug (T133779) [production]
07:51 <twentyafterfour> applied a hotfix to phabricator repository import job so that autoclose will not apply to unmerged refs/changes [production]
07:50 <twentyafterfour> reduced the number of phabricator worker processes to hopefully stop exhausting mysql connections. [production]
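    (Phabricator's worker pool size is controlled by the phd.taskmasters setting; a hedged sketch of the kind of change referred to above, with the install path and the value chosen purely for illustration:)
        /srv/phab/phabricator/bin/config set phd.taskmasters 4
        /srv/phab/phabricator/bin/phd restart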
05:37 <mutante> lvs1012 - puppet failing: it tries to upgrade the tcpdump package, but the package cannot be authenticated [production]
05:34 <mutante> mw1146 - hhvm restart [production]
05:27 <mutante> krypton remove RT packages, remnants from testing [production]
04:15 <YuviPanda> delete half of the trusty webservice jobs [tools]
04:00 <YuviPanda> deleted all precise webservice jobs, waiting for webservicemonitor to bring them back up [tools]
03:04 <catrope@tin> Synchronized php-1.27.0-wmf.22/extensions/Echo: Fix T133817 (originally scheduled for SWAT) (duration: 00m 34s) [production]
03:03 <catrope@tin> Synchronized php-1.27.0-wmf.21/extensions/Echo: Fix T133817 (originally scheduled for SWAT) (duration: 00m 39s) [production]
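    (Log lines like the two Echo syncs above are emitted by the deployment tooling on tin; the command behind them is roughly the following, with the exact scap syntax of the time assumed:)
        sync-dir php-1.27.0-wmf.21/extensions/Echo 'Fix T133817 (originally scheduled for SWAT)'
        sync-dir php-1.27.0-wmf.22/extensions/Echo 'Fix T133817 (originally scheduled for SWAT)'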
02:41 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.22) (duration: 09m 24s) [production]
02:24 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.21) (duration: 10m 38s) [production]
02:12 <twentyafterfour> manually edited crontab on iridium and killed multiple instances of public_task_dump.py (the cronjob was defined as * 2 * * * instead of 0 2 * * *) [production]
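    (For reference on the schedule bug above: "* 2 * * *" matches every minute from 02:00 to 02:59, so the job was launched roughly 60 times a night, while "0 2 * * *" runs it once at 02:00. The corrected crontab line looks like this, with the script path being illustrative:)
        # before: * 2 * * * /path/to/public_task_dump.py   (every minute of the 2 o'clock hour)
        0 2 * * * /path/to/public_task_dump.py             # once per day at 02:00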
00:48 <twentyafterfour> Phabricator's back online, everything seems to have gone smoothly. [production]
00:29 <twentyafterfour> Preparing to take phabricator offline for maintenance. [production]
2016-04-27
23:57 <thcipriani> nodepool instances running again after an openstack rabbitmq restart by andrewbogott [releng]
22:51 <legoktm> also ran openstack server delete ci-jessie-wikimedia-85342 [releng]
22:42 <legoktm> nodepool delete 85342 [releng]
22:41 <matt_flaschen> Deployed https://gerrit.wikimedia.org/r/#/c/285765/ to enable External Store everywhere on Beta Cluster [releng]
22:38 <legoktm> stop/started nodepool [releng]
22:36 <thcipriani> I don't have permission to restart nodepool [releng]
22:35 <thcipriani> restarting nodepool [releng]
22:18 <matt_flaschen> Deployed https://gerrit.wikimedia.org/r/#/c/282440/ to switch Beta Cluster to use External Store for new testwiki writes [releng]
22:18 <mattflaschen@tin> Synchronized wmf-config/db-labs.php: Beta Cluster change (duration: 00m 29s) [production]
22:04 <bblack> banned req.url ~ "^/w/load.php.*choiceData" on cache_text [production]
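    (A ban like the one above is typically issued through the Varnish CLI on each affected cache host; a minimal sketch, the actual rollout across cache_text was presumably scripted:)
        varnishadm ban req.url '~' '^/w/load.php.*choiceData'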