2014-09-24
23:12 <greg-g> restarted jouncebot, he wasn't announcing deploy windows [production]
23:00 <mutante> OCG - scheduled downtime/disabled notifications for LVS check [production]
22:44 <andrewbogott> salted a bash update on labs instances, which turned out to be updated already. [production]
20:56 <cscott> updated OCG to version 48acb8a2031863e35fad9960e48af60a3618def9 [production]
20:43 <aaron> Synchronized php-1.24wmf22/includes/cache/bloom: ad8a7a761d5f3bd086bbd6c88870e83c701e59e3 (duration: 00m 04s) [production]
20:00 <reedy> Synchronized wmf-config/: (no message) (duration: 00m 15s) [production]
19:47 <yurik> Synchronized php-1.24wmf22/extensions/ZeroBanner/: Updating to master (duration: 01m 10s) [production]
19:46 <yurik> Synchronized php-1.24wmf21/extensions/ZeroBanner/: Updating to master (duration: 01m 07s) [production]
19:15 <yurik> Finished scap: updating Graph, JsonConfig, ZeroBanner & ZeroPortal to master for 21 & 22 (duration: 07m 46s) [production]
19:07 <yurik> Started scap: updating Graph, JsonConfig, ZeroBanner & ZeroPortal to master for 21 & 22 [production]
18:55 <reedy> Synchronized wmf-config/interwiki.cdb: Updating interwiki cache (duration: 00m 14s) [production]
18:53 <reedy> Synchronized php-1.24wmf22/extensions/WikimediaMaintenance: (no message) (duration: 00m 14s) [production]
17:13 <manybubbles> lowered throttling on Elasticsearch index transfer speed from one node to another because I hate excitement [production]
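For reference, Elasticsearch exposes that recovery throttle through its cluster settings API. A minimal sketch of that kind of change, assuming the stock indices.recovery.max_bytes_per_sec setting, a transient update, and a local endpoint; it is not the exact command that was run:

    # Sketch: cap the bandwidth used for shard recovery (index transfer)
    # between nodes. The endpoint, the "20mb" value, and the use of a
    # transient setting are illustrative assumptions based on the log
    # entry above, not the actual change that was deployed.
    import requests

    resp = requests.put(
        "http://localhost:9200/_cluster/settings",
        json={"transient": {"indices.recovery.max_bytes_per_sec": "20mb"}},
    )
    resp.raise_for_status()
    print(resp.json())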
15:38 <Nemo_bis> <cscott> i'm working on the OCG health issue above. i'll let you know when i know what's going on. <icinga-wm> PROBLEM - OCG health on ocg1002 is CRITICAL [production]
15:37 <demon> Synchronized php-1.24wmf22/extensions/CentralAuth: (no message) (duration: 00m 05s) [production]
15:21 <demon> Synchronized php-1.24wmf22/extensions/CirrusSearch/maintenance/updateOneSearchIndexConfig.php: (no message) (duration: 00m 05s) [production]
15:01 <demon> Synchronized wmf-config/Wikibase.php: (no message) (duration: 00m 06s) [production]
14:57 <Jeff_Green> restarted service ocg on ocg1001 [production]
14:40 <manybubbles> finished deployment - load spikes look to be gone. yay [production]
14:22 <manybubbles> Synchronized php-1.24wmf21/extensions/CirrusSearch/: Switch implementation of Cirrus link counting jobs to hopefully lower overall load. (duration: 00m 04s) [production]
14:21 <manybubbles> Synchronized wmf-config: More cirrus config to lower load (duration: 00m 04s) [production]
14:17 <manybubbles> Synchronized wmf-config: Cirrus config to lower load (duration: 00m 04s) [production]
14:14 <manybubbles> Synchronized php-1.24wmf22/extensions/CirrusSearch/: Switch implementation of Cirrus link counting jobs to hopefully lower overall load. (duration: 00m 06s) [production]
14:08 <manybubbles> starting deployment to lower cirrus load spikes [production]
13:19 <manybubbles> *disabled* [production]
13:17 <manybubbles> disable row awareness on Cirrus's elasticsearch cluster - might help balance load better. too much load was on one row [production]
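Row awareness here refers to Elasticsearch's shard allocation awareness, which spreads shard copies across a node attribute (a datacenter row in this case). A minimal sketch of turning it off via the cluster settings API, assuming a transient setting and a local endpoint; the attribute name and exact mechanism used that day are assumptions drawn from the entry above:

    # Sketch: clear the allocation-awareness attribute list so Elasticsearch
    # balances shards purely per node rather than per row. Endpoint and
    # transient/persistent choice are illustrative assumptions.
    import requests

    resp = requests.put(
        "http://localhost:9200/_cluster/settings",
        json={"transient": {"cluster.routing.allocation.awareness.attributes": ""}},
    )
    resp.raise_for_status()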
13:04 <hashar> Zuul processing queue again [production]
13:00 <hashar> Jenkins: disconnecting Gearman client from Zuul and reconnecting [production]
12:59 <hashar> Zuul / Jenkins stuck [production]
09:33 <hashar_> Jenkins switched mwext-UploadWizard-qunit back to Zuul cloner by applying pending change {{gerrit|161459}} [production]
09:33 <hashar_> restarting zuul-merger [production]
09:32 <hashar_> restarting zuul [production]
09:19 <hashar_> Upgrading Zuul to f0e3688; cherry-picked https://review.openstack.org/#/c/123437/1 which fixes {{bug|71133}} ''Zuul cloner: fails on extension jobs against a wmf branch'' [production]
05:41 <legoktm> ran script to back populate bug 70620 on metawiki (/home/legoktm/ca/populateBug70620.php on terbium) [production]
04:29 <LocalisationUpdate> ResourceLoader cache refresh completed at Wed Sep 24 04:29:53 UTC 2014 (duration 29m 52s) [production]
03:34 <tstarling> Finished scap: (no message) (duration: 12m 09s) [production]
03:22 <tstarling> Started scap: (no message) [production]
03:21 <tstarling> scap failed: RuntimeError scap requires SSH agent forwarding (duration: 00m 00s) [production]
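The failure above comes from a pre-flight check in scap, Wikimedia's deployment tool: a forwarded SSH agent shows up on the deploy host as the SSH_AUTH_SOCK environment variable pointing at a live socket. A small sketch of that kind of check, as an illustrative re-creation rather than scap's actual code:

    # Sketch: refuse to deploy unless an SSH agent has been forwarded,
    # mirroring the error message logged above. This is an assumption about
    # how such a check could look, not scap's implementation.
    import os

    sock = os.environ.get("SSH_AUTH_SOCK")
    if not sock or not os.path.exists(sock):
        raise RuntimeError("scap requires SSH agent forwarding")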
03:12 <LocalisationUpdate> completed (1.24wmf22) at 2014-09-24 03:12:54+00:00 [production]
02:39 <LocalisationUpdate> completed (1.24wmf21) at 2014-09-24 02:39:39+00:00 [production]
02:10 <springle> Synchronized wmf-config/db-eqiad.php: repool db1062 (duration: 00m 06s) [production]
01:25 <mutante> tridge - shutting down [production]
2014-09-23
23:47 <maxsem> Synchronized php-1.24wmf22/extensions/MobileFrontend/: (no message) (duration: 00m 04s) [production]
23:15 <maxsem> Synchronized wmf-config/CommonSettings.php: fail! (duration: 00m 04s) [production]
23:13 <maxsem> Synchronized wmf-config/CommonSettings.php: https://gerrit.wikimedia.org/r/#/c/162297/ (duration: 00m 03s) [production]
23:06 <maxsem> Synchronized php-1.24wmf21/extensions/MassMessage/: https://gerrit.wikimedia.org/r/#/c/161002/ (duration: 00m 03s) [production]
22:04 <aaron> Synchronized php-1.24wmf22/includes/jobqueue/JobRunner.php: f23f1ad35f02f6a17c9b5842aa6d8c152a273639 (duration: 00m 04s) [production]
21:54 <ebernhardson> Finished scap: Bump flow submodule (and change an i18n message) in 1.24wmf21 and 1.24wmf22 (duration: 28m 14s) [production]
21:25 <ebernhardson> Started scap: Bump flow submodule (and change an i18n message) in 1.24wmf21 and 1.24wmf22 [production]
20:25 <cscott> updated OCG to version 1cf9281ec3e01d6cbb27053de9f2423582fcc156 [production]