2016-04-28
07:50 <twentyafterfour> reduced the number of phabricator worker processes to hopefully stop exhausting mysql connections. [production]
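For context, a minimal sketch of how the taskmaster count is typically lowered on a Phabricator install (the install path and the value 4 are illustrative, not the actual change made here); phd.taskmasters caps how many worker daemons run, and each daemon holds its own MySQL connections:
    /srv/phab/phabricator/bin/config set phd.taskmasters 4
    /srv/phab/phabricator/bin/phd restart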
05:37 <mutante> lvs1012 - puppet fail, tries to upgrade tcpdump package and cannot be authenticated [production]
05:34 <mutante> mw1146 - hhvm restart [production]
05:27 <mutante> krypton remove RT packages, remnants from testing [production]
04:15 <YuviPanda> deleted half of the trusty webservice jobs [tools]
04:00 <YuviPanda> deleted all precise webservice jobs, waiting for webservicemonitor to bring them back up [tools]
03:04 <catrope@tin> Synchronized php-1.27.0-wmf.22/extensions/Echo: Fix T133817 (originally scheduled for SWAT) (duration: 00m 34s) [production]
03:03 <catrope@tin> Synchronized php-1.27.0-wmf.21/extensions/Echo: Fix T133817 (originally scheduled for SWAT) (duration: 00m 39s) [production]
02:41 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.22) (duration: 09m 24s) [production]
02:24 <mwdeploy@tin> sync-l10n completed (1.27.0-wmf.21) (duration: 10m 38s) [production]
02:12 <twentyafterfour> manually edited crontab on iridium and killed multiple instances of public_task_dump.py (the cronjob was defined as * 2 * * * instead of 0 2 * * *) [production]
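For context: "* 2 * * *" fires every minute from 02:00 through 02:59 (roughly 60 overlapping runs per night), while "0 2 * * *" fires once at 02:00. A sketch of the two crontab lines, with an illustrative script path (the real crontab entry on iridium is not shown in the log):
    * 2 * * * /usr/local/bin/public_task_dump.py   # every minute of the 02:00 hour
    0 2 * * * /usr/local/bin/public_task_dump.py   # once per day at 02:00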
00:48 <twentyafterfour> Phabricator's back online, everything seems to have gone smoothly. [production]
00:29 <twentyafterfour> Preparing to take phabricator offline for maintenance. [production]
2016-04-27
23:57 <thcipriani> nodepool instances running again after an openstack rabbitmq restart by andrewbogott [releng]
22:51 <legoktm> also ran openstack server delete ci-jessie-wikimedia-85342 [releng]
22:42 <legoktm> nodepool delete 85342 [releng]
22:41 <matt_flaschen> Deployed https://gerrit.wikimedia.org/r/#/c/285765/ to enable External Store everywhere on Beta Cluster [releng]
22:38 <legoktm> stop/started nodepool [releng]
22:36 <thcipriani> I don't have permission to restart nodepool [releng]
22:35 <thcipriani> restarting nodepool [releng]
22:18 <matt_flaschen> Deployed https://gerrit.wikimedia.org/r/#/c/282440/ to switch Beta Cluster to use External Store for new testwiki writes [releng]
22:18 <mattflaschen@tin> Synchronized wmf-config/db-labs.php: Beta Cluster change (duration: 00m 29s) [production]
22:04 <bblack> banned req.url ~ "^/w/load.php.*choiceData" on cache_text [production]
22:00 <bblack> banned req.url ~ "^/load.php.*choiceData" on cache_text [production]
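For context, a minimal sketch of how such bans are typically issued with varnishadm on a cache host (the exact tooling used on the cache_text cluster may differ):
    varnishadm ban req.url '~' '^/load.php.*choiceData'
    varnishadm ban req.url '~' '^/w/load.php.*choiceData'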
21:22 <cwd> updated civicrm from 15a0086eef78f16110eba358a28ef78b51a385e1 to 777a91b8f9f6003a3eebdb8f2c73e45cc2bfb4a4 [production]
21:03 <bblack> rebooting cp1065 [production]
21:01 <ebernhardson@tin> Synchronized wmf-config/InitialiseSettings.php: Restore codfw to elasticsearch config T133784 (duration: 00m 31s) [production]
21:00 <ebernhardson@tin> Synchronized wmf-config/CirrusSearch-production.php: Restore codfw to elasticsearch config T133784 (duration: 00m 37s) [production]
20:59 <hashar> thcipriani downgraded the git plugins successfully (we wanted to rule out their upgrade as the cause of a weird issue) [releng]
20:48 <thcipriani> restarting jenkins after plugin downgrade [production]
20:48 <halfak> deployed ores-wikimedia-config:6453fe5 [ores]
20:41 <hashar> 1.27.0-wmf.22 to group1 has been completed without incident. Deployment is open! [production]
20:41 <ebernhardson> Enabled cirrussearch writes to codfw only on mw1165 w/ live hack [production]
20:32 <gehel> switching wdqs1002 to maintenance and reimporting data (T133566) [production]
20:28 <cscott> updated OCG to version e39e06570083877d5498da577758cf8d162c1af4 [production]
20:20 <yurik> deployed kartotherian & tilerator services [production]
20:13 <cscott> updated OCG to version e39e06570083877d5498da577758cf8d162c1af4 [releng]
20:09 <gehel> adding back wdqs1001 to varnish configuration after reinstall (T133566) [production]
20:06 <mutante> language-dev running puppet, still fails due to an issue with MW-singlenode and gitclone (but hey, the kernels got installed) [language]
20:01 <mutante> language-dev ran dpkg --configure -a to fix borked dpkg (manually interrupted dist-upgrade?), manually removing ganglia (T115330) [language]
20:00 <mutante> language-dev puppet fail / broken dpkg, dpkg was interrupted [language]
19:58 <mutante> IP address 10.68.16.66 has 26 names, 25 are in contintcloud, one is sm-puppetmaster-trusty2.servermon.eqiad.wmflabs. [integration]
19:46 <mutante> - gptest1.catgraph manually stopping ganglia (T115330) [catgraph]
19:45 <mutante> - gptest1.catgraph many puppet errors due to failed mysql-server-5.5 install , broken dpkg/puppet [catgraph]
19:32 <mutante> integration-raita "Could not find class role::ci::raita" puppet error. manually stopping ganglia-monitor [integration]
19:24 <Pchelolo> update restbase to e9fbdfe [production]
19:18 <Pchelolo> update restbase to e9fbdfe: canary on restbase1007 [production]
19:11 <Pchelolo> update restbase to e9fbdfe: staging [production]
19:09 <hashar@tin> rebuilt wikiversions.php and synchronized wikiversions files: group1 wikis to 1.27.0-wmf.22 [production]
19:00 <dcausse> restarting elastic on elastic2007.codfw.wmnet (master) [production]