2014-09-24
23:21 <ejegg> Updated paymentswiki from 3ac5dd1c3fade37b6f3a4879aef8ea71b3bbbf08 to 83464deed3b66da655ca5d1086852237c4793b71 [production]
23:18 <catrope> Synchronized php-1.24wmf22/extensions/VisualEditor: SWAT (duration: 00m 04s) [production]
23:14 <catrope> Synchronized php-1.24wmf22/resources/lib/oojs-ui/: SWAT (duration: 00m 05s) [production]
23:12 <greg-g> restarted jouncebot, it wasn't announcing deploy windows [production]
23:00 <mutante> OCG - scheduled downtime/disabled notifications for LVS check [production]
22:44 <andrewbogott> salted a bash update on labs instances; they turned out to be updated already [production]
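(For illustration: a fleet-wide package update like the one above is typically pushed with Salt's package module from the salt master. A minimal sketch; the '*' target and the use of pkg.install are assumptions, not the exact command that was run.)

    # Minimal sketch of salting a package update across minions.
    # Assumes it runs on the salt master; target glob is illustrative.
    import salt.client

    local = salt.client.LocalClient()

    # Refresh package metadata, then install/upgrade bash everywhere.
    # pkg.install returns a dict of changes per minion; an empty dict
    # means the package was already current (as turned out to be the
    # case here).
    results = local.cmd('*', 'pkg.install', ['bash'], kwarg={'refresh': True})

    for minion, changes in sorted(results.items()):
        print(minion, 'already up to date' if not changes else changes)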
20:56 <cscott> updated OCG to version 48acb8a2031863e35fad9960e48af60a3618def9 [production]
20:43 <aaron> Synchronized php-1.24wmf22/includes/cache/bloom: ad8a7a761d5f3bd086bbd6c88870e83c701e59e3 (duration: 00m 04s) [production]
20:00 <reedy> Synchronized wmf-config/: (no message) (duration: 00m 15s) [production]
19:47 <yurik> Synchronized php-1.24wmf22/extensions/ZeroBanner/: Updating to master (duration: 01m 10s) [production]
19:46 <yurik> Synchronized php-1.24wmf21/extensions/ZeroBanner/: Updating to master (duration: 01m 07s) [production]
19:15 <yurik> Finished scap: updating Graph, JsonConfig, ZeroBanner & ZeroPortal to master for 21 & 22 (duration: 07m 46s) [production]
19:07 <yurik> Started scap: updating Graph, JsonConfig, ZeroBanner & ZeroPortal to master for 21 & 22 [production]
18:55 <reedy> Synchronized wmf-config/interwiki.cdb: Updating interwiki cache (duration: 00m 14s) [production]
18:53 <reedy> Synchronized php-1.24wmf22/extensions/WikimediaMaintenance: (no message) (duration: 00m 14s) [production]
17:13 <manybubbles> lowered the speed throttle on Elasticsearch index transfers from one node to another because I hate excitement [production]
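(For illustration: node-to-node index transfer speed in Elasticsearch is capped by the recovery throttle, which can be changed at runtime through the cluster settings API. A minimal sketch, assuming the cluster answers on localhost:9200; the 20mb figure is an assumption, not the value actually set.)

    # Minimal sketch of lowering the Elasticsearch recovery throttle at
    # runtime. Cluster URL and the 20mb cap are assumptions; the setting
    # name is Elasticsearch's standard recovery bandwidth knob.
    import json
    import urllib.request

    body = json.dumps({
        'transient': {
            # Caps bandwidth used when shards move between nodes.
            'indices.recovery.max_bytes_per_sec': '20mb',
        }
    }).encode('utf-8')

    req = urllib.request.Request(
        'http://localhost:9200/_cluster/settings',
        data=body,
        headers={'Content-Type': 'application/json'},
        method='PUT',
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode('utf-8'))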
15:38 <Nemo_bis> "<cscott> i'm working on the OCG health issue above. i'll let you know when i know what's going on." "<icinga-wm> PROBLEM - OCG health on ocg1002 is CRITICAL" [production]
15:37 <demon> Synchronized php-1.24wmf22/extensions/CentralAuth: (no message) (duration: 00m 05s) [production]
15:21 <demon> Synchronized php-1.24wmf22/extensions/CirrusSearch/maintenance/updateOneSearchIndexConfig.php: (no message) (duration: 00m 05s) [production]
15:01 <demon> Synchronized wmf-config/Wikibase.php: (no message) (duration: 00m 06s) [production]
14:57 <Jeff_Green> restarted service ocg on ocg1001 [production]
14:40 <manybubbles> finished deployment - load spikes look to be gone. yay [production]
14:22 <manybubbles> Synchronized php-1.24wmf21/extensions/CirrusSearch/: Switch implementation of Cirrus link counting jobs to hopefully lower overall load. (duration: 00m 04s) [production]
14:21 <manybubbles> Synchronized wmf-config: More cirrus config to lower load (duration: 00m 04s) [production]
14:17 <manybubbles> Synchronized wmf-config: Cirrus config to lower load (duration: 00m 04s) [production]
14:14 <manybubbles> Synchronized php-1.24wmf22/extensions/CirrusSearch/: Switch implementation of Cirrus link counting jobs to hopefully lower overall load. (duration: 00m 06s) [production]
14:08 <manybubbles> starting deployment to lower cirrus load spikes [production]
13:19 <manybubbles> *disabled* [production]
13:17 <manybubbles> disable row awareness on Cirrus's elasticsearch cluster - might help balance load better. too much load was on one row [production]
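(For illustration: "row awareness" here is Elasticsearch shard allocation awareness keyed on a node attribute for the datacenter row; clearing the awareness attribute list at runtime disables it, letting shards balance without regard to rows. A minimal sketch, assuming the attribute was named 'row' and the cluster answers on localhost:9200.)

    # Minimal sketch of disabling shard allocation awareness at runtime.
    # The attribute name 'row' and the cluster URL are assumptions; an
    # empty attribute list turns awareness off cluster-wide.
    import json
    import urllib.request

    body = json.dumps({
        'transient': {
            'cluster.routing.allocation.awareness.attributes': '',
        }
    }).encode('utf-8')

    req = urllib.request.Request(
        'http://localhost:9200/_cluster/settings',
        data=body,
        headers={'Content-Type': 'application/json'},
        method='PUT',
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode('utf-8'))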
13:04 <hashar> Zuul processing its queue again [production]
13:00 <hashar> Jenkins: disconnecting Gearman client from Zuul and reconnecting [production]
12:59 <hashar> Zuul / Jenkins stuck [production]
09:33 <hashar_> Jenkins: switched mwext-UploadWizard-qunit back to Zuul cloner by applying pending change {{gerrit|161459}} [production]
09:33 <hashar_> restarting zuul-merger [production]
09:32 <hashar_> restarting zuul [production]
09:19 <hashar_> Upgrading Zuul to f0e3688, cherry-picking https://review.openstack.org/#/c/123437/1, which fixes {{bug|71133}} ''Zuul cloner: fails on extension jobs against a wmf branch'' [production]
05:41 <legoktm> ran script to backpopulate bug 70620 on metawiki (/home/legoktm/ca/populateBug70620.php on terbium) [production]
04:29 <LocalisationUpdate> ResourceLoader cache refresh completed at Wed Sep 24 04:29:53 UTC 2014 (duration 29m 52s) [production]
03:34 <tstarling> Finished scap: (no message) (duration: 12m 09s) [production]
03:22 <tstarling> Started scap: (no message) [production]
03:21 <tstarling> scap failed: RuntimeError scap requires SSH agent forwarding (duration: 00m 00s) [production]
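(For illustration: scap needs a forwarded SSH agent to fan out to target hosts, and an agent is visible to a process through the SSH_AUTH_SOCK environment variable. A minimal sketch of that kind of guard, based on the error text above rather than scap's actual source.)

    # Minimal sketch of a guard like the one behind the error above.
    # An SSH agent forwarded into the session shows up as SSH_AUTH_SOCK;
    # without it, fail fast before trying to reach any target host.
    # Illustration only, not scap's real implementation.
    import os


    def require_ssh_agent():
        sock = os.environ.get('SSH_AUTH_SOCK')
        if not sock or not os.path.exists(sock):
            raise RuntimeError('scap requires SSH agent forwarding')


    require_ssh_agent()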
03:12 <LocalisationUpdate> completed (1.24wmf22) at 2014-09-24 03:12:54+00:00 [production]
02:39 <LocalisationUpdate> completed (1.24wmf21) at 2014-09-24 02:39:39+00:00 [production]
02:10 <springle> Synchronized wmf-config/db-eqiad.php: repool db1062 (duration: 00m 06s) [production]
01:25 <mutante> tridge - shutting down [production]