2015-02-05
02:35 <LocalisationUpdate> completed (1.25wmf15) at 2015-02-05 02:34:02+00:00 [production]
02:34 <l10nupdate> Synchronized php-1.25wmf15/cache/l10n: (no message) (duration: 00m 01s) [production]
02:20 <LocalisationUpdate> completed (1.25wmf14) at 2015-02-05 02:19:31+00:00 [production]
02:19 <l10nupdate> Synchronized php-1.25wmf14/cache/l10n: (no message) (duration: 00m 02s) [production]
01:41 <ori> Synchronized wmf-config/CommonSettings.php: I7b270eb8a: Set $wgUDPProfilerHost to service alias rather than hard-code IP (duration: 00m 05s) [production]
00:54 <bd808> truncated redis input queues for logstash on all 3 hosts to see if cluster can keep up now with 3 elasticsearch writer threads [production]
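Truncating the Logstash input queues as in the entry above amounts to emptying the Redis lists the log shippers write to. A minimal sketch, assuming the queue key is named `logstash` and the hosts are `logstash1001`–`logstash1003` (neither is stated in the log):

```shell
# For each of the three logstash hosts, check the backlog and then drop it.
# LLEN reports how many events are queued; DEL discards the backlog entirely.
for host in logstash1001 logstash1002 logstash1003; do
  redis-cli -h "$host" LLEN logstash
  redis-cli -h "$host" DEL  logstash
done
```

Dropping the queue loses the buffered events, but lets the Elasticsearch writers catch up from a clean slate, which is what the entry is testing.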
00:08 <Krinkle> Added 'dduvall' to integration group ACL on Gerrit [production]
00:06 <springle> xtrabackup clone virt1000 to silver [production]
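A streaming clone like the one above is commonly done with Percona XtraBackup piped over SSH; this is a sketch only, and the datadir paths and staging directory are assumptions:

```shell
# On virt1000: stream a hot backup of the running MySQL datadir to silver.
innobackupex --stream=xbstream /tmp \
  | ssh silver 'xbstream -x -C /srv/sqldata.clone'

# On silver: apply the redo log so the clone is consistent before starting MySQL.
ssh silver 'innobackupex --apply-log /srv/sqldata.clone'
```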
2015-02-04 §
23:38 <mutante> starting memcached on virt1000 [production]
23:21 <qchris> Manual failover of Hadoop namenode from analytics1001 to analytics1002, as analytics1001 had Heap space errors [production]
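A manual NameNode failover like the one above is typically driven with `hdfs haadmin`; the service IDs below are hypothetical placeholders, since the log does not name them:

```shell
# Check which NameNode is currently active (service IDs are assumptions).
sudo -u hdfs hdfs haadmin -getServiceState analytics1001-eqiad
sudo -u hdfs hdfs haadmin -getServiceState analytics1002-eqiad

# Fail over from the NameNode on analytics1001 to the one on analytics1002.
sudo -u hdfs hdfs haadmin -failover analytics1001-eqiad analytics1002-eqiad
```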
22:50 <ejegg> updated payments from 1e9b78e9a8bf557a710988620bd6f1a335787173 to cbaf66e7705789f37117ec6edc4d936c6174d511 [production]
22:49 <manybubbles> this is certainly a bug in Elasticsearch, but I imagine it's one solved in newer versions. I hope so, anyway. [production]
22:49 <manybubbles> not sure what happened, but now space is freeing up on 1001. The disk was never in danger of filling up, but it was full enough that no more shards would be allocated to it. Now that stuff is allocating elsewhere, Elasticsearch is clearing the used space. [production]
22:41 <manybubbles> looks like elastic1001 doesn't have much free space left. I think that might have something to do with this... [production]
22:38 <manybubbles> Elasticsearch wasn't initializing shards on elastic1001 after its restart. Didn't check why. Set allocation to primaries, then back to all, and that unstuck it. [production]
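The allocation toggle described in the 22:38 entry can be sketched with the Elasticsearch 1.x cluster-settings API (the host and port below are assumptions): setting `cluster.routing.allocation.enable` to `primaries` and then back to `all` forces the cluster to re-evaluate unassigned shards.

```shell
# Restrict shard allocation to primaries only (transient setting:
# it is cleared on a full cluster restart).
curl -s -XPUT 'http://elastic1001:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "primaries" }
}'

# Re-enable allocation of all shards; this nudges the master to retry
# the unassigned shards that were stuck after the node restart.
curl -s -XPUT 'http://elastic1001:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'
```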
21:16 <arlolra> updated Parsoid to version dd4721f4 [production]
20:33 <ori> rebuilt wikiversions.cdb and synchronized wikiversions files: I4fb67945b: Revert "[Regression] Revert "Non wikipedias to 1.25wmf15"" [production]
20:18 <aude> Synchronized wmf-config/Wikibase.php: set useLegacyChangesSubscription to true for Wikidata (duration: 00m 07s) [production]
18:30 <godog> bounce txstatsd on cache hosts in eqiad [production]
18:17 <godog> bounce txstatsd on cache hosts in ulsfo [production]
18:08 <godog> bounce txstatsd on cache hosts in esams [production]
17:30 <marktraceur> Synchronized php-1.25wmf14/extensions/UploadWizard/: Touching pretty much everything in UploadWizard, maybe it will help (duration: 00m 07s) [production]
17:22 <marktraceur> Synchronized php-1.25wmf14/extensions/UploadWizard/resources/mw.UploadWizard.js: Touch an UploadWizard file to try and fix caching (duration: 00m 07s) [production]
16:58 <robh> replacing the intermediary cert on dumps.w.o (so nginx will flap on it shortly) [production]
16:56 <godog> restart ES on elastic1001 [production]
15:43 <marktraceur> Synchronized php-1.25wmf14/extensions/UploadWizard/resources/controller/uw.controller.Upload.js: Touch an UploadWizard file to try and fix caching (duration: 00m 07s) [production]
15:25 <marktraceur> Synchronized php-1.25wmf14/extensions/UploadWizard/resources/controller/uw.controller.Upload.js: Touch an UploadWizard file to try and fix caching (duration: 00m 05s) [production]
15:22 <godog> graphite move close to completion, updating dashboards [production]
15:16 <godog> bounce diamond in batches in eqiad [production]
14:50 <marktraceur> Synchronized php-1.25wmf15/extensions/UploadWizard/resources/controller/uw.controller.Upload.js: Touch an UploadWizard file to try and fix caching (duration: 00m 05s) [production]
14:14 <godog> bounce webperf-related services on hafnium too: ve, statsd-mw-js-deprecate, statsv, asset-check [production]
14:10 <godog> bounce navtiming on hafnium to pick up dns changes [production]
12:42 <godog> stop bacula-fd on tungsten, backups running during migration [production]
12:41 <_joe_> installing the new HHVM package on jobrunners [production]
12:28 <godog> bounce txstatsd on ms-fe* [production]
12:28 <godog> bounce txstatsd on ms-be* [production]
12:00 <godog> bounce diamond in batches in ulsfo [production]
11:57 <godog> bounce diamond in batches in esams [production]
11:51 <godog> bounce mwprof on tungsten to force picking up dns changes [production]
11:35 <_joe_> installing the new hhvm package on api, one at a time [production]
11:23 <godog> start migrating graphite from tungsten to graphite1001 https://gerrit.wikimedia.org/r/#/c/188036/1 https://gerrit.wikimedia.org/r/#/c/188035/1 https://phabricator.wikimedia.org/T85909 [production]
10:14 <ori> Finished scap: I78446aacb: [Regression] Revert "Non wikipedias to 1.25wmf15" (duration: 31m 34s) [production]
10:06 <godog> start migrating graphite from tungsten to graphite1001 https://gerrit.wikimedia.org/r/#/c/188036/1 https://gerrit.wikimedia.org/r/#/c/188035/1 https://phabricator.wikimedia.org/T85909 [production]
10:06 <ori> restarted hung HHVM on mw1039 [production]
09:42 <ori> Started scap: I78446aacb: [Regression] Revert "Non wikipedias to 1.25wmf15" [production]
09:42 <ori> scap aborted: I78446aacb: [Regression] Revert "Non wikipedias to 1.25wmf15" (duration: 00m 02s) [production]
09:42 <ori> Started scap: I78446aacb: [Regression] Revert "Non wikipedias to 1.25wmf15" [production]
08:57 <_joe_> installing the new hhvm package on all appservers, one at a time [production]
07:49 <qchris> Manual failover of Hadoop namenode from analytics1002 to analytics1001, as analytics1002 had Heap space errors [production]
05:16 <springle> Synchronized wmf-config/db-eqiad.php: depool db1057 (duration: 00m 06s) [production]