2014-09-16
15:20 <manybubbles> SWAT complete [production]
15:16 <manybubbles> Synchronized php-1.24wmf20/extensions/VisualEditor/: swat update for wmf20 (duration: 00m 25s) [production]
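The "Synchronized ..." lines in this log come from scap's sync tooling on the deployment host. A minimal sketch of the kind of invocation behind a SWAT sync like the one above, assuming the standard /srv/mediawiki-staging checkout; the log records only the resulting message:

    # Run from the staging tree on the deployment host (paths are assumptions).
    cd /srv/mediawiki-staging
    sync-dir php-1.24wmf20/extensions/VisualEditor 'swat update for wmf20'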
15:13 <hashar> Jenkins: mediawiki extensions phpunit jobs should pass more or less until the CI system is sent into orbit and dies horribly. In such a case, ping me / phone. [production]
15:08 <manybubbles> Synchronized php-1.24wmf21/extensions/VisualEditor/: SWAT visual editor update wmf21 (duration: 00m 07s) [production]
14:52 <ottomata> set vm.dirty_expire_centisecs to 10000 (was 30000) on analytics1021 to experiment with paging and kafka-zookeeper timeouts [production]
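For reference, a tunable like this can be changed at runtime with sysctl; a minimal sketch, though the exact method used here isn't recorded:

    # Let dirty pages age at most 100s (value is in centiseconds) before writeback.
    sysctl -w vm.dirty_expire_centisecs=10000
    # Confirm the new value took effect.
    sysctl vm.dirty_expire_centisecs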
14:36 <godog> stopped htcp-purger on ms1004 RT #8358 [production]
14:32 <godog> silenced ms-be1014 until tomorrow, pending forced reboot [production]
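Silencing a host for a period is typically done by scheduling downtime through Icinga's external command file; a hedged sketch, assuming a stock Icinga layout rather than whatever wrapper was actually used:

    # Format: SCHEDULE_HOST_DOWNTIME;host;start;end;fixed;trigger_id;duration;author;comment
    now=$(date +%s)
    end=$(date -d tomorrow +%s)
    printf '[%s] SCHEDULE_HOST_DOWNTIME;ms-be1014;%s;%s;1;0;0;godog;pending forced reboot\n' \
      "$now" "$now" "$end" > /var/lib/icinga/rw/icinga.cmd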
14:28 <hashar> Jenkins: breaking continuous integration for MediaWiki repositories. Extensions are now tested with mediawiki/vendor, and mediawiki/core is checked out to the patch's branch if it exists. {{gerrit|160656}} [production]
14:20 <akosiaris_> restarted apache on fenari; it was leaking memory. Situation back to normal, cause unknown yet [production]
14:12 <akosiaris_> stopped apache on fenari. It was in swap; investigating [production]
12:35 <springle> Synchronized wmf-config/db-eqiad.php: repool s2 db1054, s3 db1027, s4 db1056, s5 db1037 (duration: 00m 10s) [production]
12:26 <godog> reboot ms-be1014, xfs issues [production]
12:22 <godog> temporarily chgrp wikidev /var/log/hhvm/error.log on mw1018 [production]
12:21 <reedy> Synchronized php-1.24wmf20/LocalSettings.php: Fix path to be /srv based (duration: 00m 32s) [production]
11:25 <reedy> Synchronized docroot and w: (no message) (duration: 00m 35s) [production]
11:12 <reedy> Purged l10n cache for 1.24wmf19 [production]
11:12 <reedy> Purged l10n cache for 1.24wmf18 [production]
11:10 <reedy> Purged l10n cache for 1.24wmf15 [production]
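Purging a branch's l10n cache amounts to removing its generated CDB files so the space is reclaimed; a sketch assuming the usual per-branch cache directory, since the log doesn't record the exact command and the path here is an assumption:

    # Hypothetical path: per-branch localisation CDBs under the deployment tree.
    rm -f /srv/mediawiki/php-1.24wmf15/cache/l10n/l10n_cache-*.cdb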
09:21 <_joe_> reimaging mw1018 and mw1021 with HAT: removing from pybal, etc. [production]
06:29 <springle> xtrabackup clone db1037 to db2023 [production]
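An "xtrabackup clone" copies a replica's datadir onto a new host by streaming a hot backup over the network. A minimal sketch using innobackupex with xbstream and netcat; the actual wrapper used isn't recorded, and the port and paths are assumptions:

    # On the target (db2023): listen and unpack into the empty datadir.
    nc -l 9210 | xbstream -x -C /srv/sqldata
    # On the source (db1037): take a consistent copy and stream it across.
    innobackupex --stream=xbstream /tmp | nc db2023 9210
    # On the target afterwards: apply the redo log before starting mysqld.
    innobackupex --apply-log /srv/sqldata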
05:31 <springle> xtrabackup clone db1056 to db2019 [production]
04:01 <LocalisationUpdate> ResourceLoader cache refresh completed at Tue Sep 16 04:01:05 UTC 2014 (duration 1m 4s) [production]
03:11 <springle> xtrabackup clone db1027 to db2018 [production]
03:04 <LocalisationUpdate> completed (1.24wmf21) at 2014-09-16 03:04:46+00:00 [production]
02:53 <springle> xtrabackup clone db1054 to db2017 [production]
02:50 <springle> Synchronized wmf-config/db-eqiad.php: depool s2 db1054, s3 db1027, s4 db1056, s5 db1037 for codfw cloning (duration: 01m 12s) [production]
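Depooling a replica is an edit to the per-section load arrays in wmf-config/db-eqiad.php followed by a sync; a sketch of the pattern, with the array shape and weight shown only as illustration:

    # In wmf-config/db-eqiad.php, comment the host out of its section's loads:
    #   's2' => array( ..., 'db1054' => 200, ... )  -->  // 'db1054' => 200,
    # Then push the config out from the staging tree:
    cd /srv/mediawiki-staging
    sync-file wmf-config/db-eqiad.php 'depool s2 db1054 for codfw cloning'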
02:39 <springle> Synchronized wmf-config/db-eqiad.php: repool db1036, depool db1002 (duration: 00m 07s) [production]
02:31 <LocalisationUpdate> completed (1.24wmf20) at 2014-09-16 02:31:16+00:00 [production]
2014-09-15
23:32 <maxsem> Synchronized php-1.24wmf21/resources/: SWAT: https://gerrit.wikimedia.org/r/#/c/160488/1 https://gerrit.wikimedia.org/r/#/c/160543/ (duration: 00m 06s) [production]
23:26 <bblack> restarting lvs1001 for HT disable + kernel upgrade [production]
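Hyper-threading is normally disabled in firmware, which is why the reboot is needed anyway. Purely as an illustration of the runtime alternative, sibling threads can also be taken offline via sysfs; this is an assumption about technique, not necessarily what was done here:

    # Offline every CPU that is not the first thread of its core.
    for f in /sys/devices/system/cpu/cpu*/topology/thread_siblings_list; do
        id=${f#/sys/devices/system/cpu/cpu}; id=${id%%/*}
        first=$(sed 's/[,-].*//' "$f")   # first sibling in "0,12" or "0-1" form
        [ "$id" != "$first" ] && echo 0 > "/sys/devices/system/cpu/cpu$id/online"
    done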
23:19 <maxsem> Synchronized php-1.24wmf21/extensions/VisualEditor/: SWAT: https://gerrit.wikimedia.org/r/#/c/160554/ (duration: 00m 07s) [production]
23:12 <bblack> restarting lvs1002 for HT disable + kernel upgrade [production]
23:07 <greg-g> Running sample job on integration-slave1006 and warming up npmjs.org cache [production]
22:56 <Krinkle> Running sample job on integration-slave1008 and warming up npmjs.org cache [production]
22:49 <Krinkle> Running sample job on integration-slave1007 and warming up npmjs.org cache [production]
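Warming the npm cache on a new slave is just pre-fetching the dependencies the jobs will ask for, so later builds hit ~/.npm instead of the network; a thin sketch with a hypothetical workspace path:

    # Hypothetical job workspace; any representative package.json will do.
    cd /srv/jenkins/workspace/sample-job
    npm install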
22:48 <Krinkle> Pooling the newly set up Trusty-based Jenkins slaves (integration-slave1006, integration-slave1007 and integration-slave1008) [production]
22:42 <bblack> dropping static routes for 2620:0:861:ed1a::[d,f,10,11] -> lvs1005 from cr[12]-eqiad (only ::11, misc-web-lb, is of any consequence; the routes are also advertised by BGP, and the statics were preventing failover to lvs1002) [production]
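cr1/cr2-eqiad are Juniper routers, so dropping the statics would look roughly like the Junos session below; the configuration hierarchy is an assumption, and only one of the four routes is shown:

    ssh cr1-eqiad.wikimedia.org <<'EOF'
    configure
    delete routing-options rib inet6.0 static route 2620:0:861:ed1a::11/128
    commit confirmed 5
    EOF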
21:28 <cscott> updated OCG to version 188a3c221d927bd0601ef5e1b0c0f4a9d1cdbd31 [production]
20:46 <subbu> deployed Parsoid version b845bff9 [production]
18:49 <ejegg> Synchronized php-1.24wmf20/extensions/CentralNotice/: Update CentralNotice to remove jquery.json dependency (duration: 00m 23s) [production]
18:46 <hoo> Sync to tmh100[12] failed, according to awight [production]
18:44 <ejegg> Synchronized php-1.24wmf21/extensions/CentralNotice/: Update CentralNotice to remove jquery.json dependency (duration: 00m 09s) [production]
18:43 <manybubbles> performance tests show Cirrus should handle jawiki with no problem, but if load spirals out of control and I'm not around, revert https://gerrit.wikimedia.org/r/#/c/160465/ [production]
18:40 <hoo> Local part of the global rename of Gnumarcoo => .avgas fatally timed out on itwiki. This needs to be fixed by hand. [production]
18:40 <manybubbles> Setting Cirrus as jawiki's primary search backend went well, but Japan is mostly asleep. If Elasticsearch load takes a turn for the worse in four or five hours, then we'll know how it went. [production]
17:14 <bd808> Restarted elasticsearch on logstash1003; 2014-09-14T09:33:57Z java.lang.OutOfMemoryError [production]
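A hedged sketch of the check-and-restart, assuming the default log location, the init script of the era, and the stock HTTP port:

    # Confirm which logs recorded the OOM, then bounce the service.
    grep -l OutOfMemoryError /var/log/elasticsearch/*.log
    service elasticsearch restart
    # Watch cluster health recover.
    curl -s localhost:9200/_cluster/health?pretty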
17:09 <_joe_> killing salt-call on all mediawiki hosts [production]
17:06 <bd808> Restarted elasticsearch on logstash1001; 2014-09-15T06:12:09Z java.lang.OutOfMemoryError [production]
17:04 <bblack> using salt to kill salt-minion everywhere... [production]
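Using salt to kill salt-minion itself works because the pkill fires before the minion process dies; a sketch of the sort of one-liner involved, with the exact flags being an assumption:

    # From the salt master: every minion kills its own salt processes.
    salt '*' cmd.run 'pkill -9 -f salt-minion'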
17:02 <bd808> Restarted logstash on logstash1001. I hoped this would fix the dashboards, but it looks like the backing elasticsearch cluster is too sad for them to work at the moment. [production]