2015-05-14
00:03 <ebernhardson> Synchronized php-1.26wmf5/extensions/Gather/: SWAT Submodule bump for Gather extension (duration: 00m 12s) [production]
2015-05-13
23:52 <awight> payments config: correct memcache location [production]
23:40 <ebernhardson> Synchronized wmf-config/CirrusSearch-common.php: SWAT deploy cirrus config change (duration: 00m 12s) [production]
22:26 <twentyafterfour> Purged l10n cache for 1.26wmf4 [production]
22:25 <twentyafterfour> rebuilt wikiversions.cdb and synchronized wikiversions files: Group 0 to 1.26wmf6 [production]
22:21 <twentyafterfour> rebuilt wikiversions.cdb and synchronized wikiversions files: Wikipedias to 1.26wmf5 [production]
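(The two entries above are the deployment train: wikiversions maps each wiki's database name to a MediaWiki branch directory, and the rebuilt wikiversions.cdb is what the app servers consult on each request. A minimal Python sketch of the idea, assuming a flat JSON layout and an invented group-0 membership; the real mechanism lives in the scap tooling.)

```python
import json

# Hedged sketch: wikiversions is conceptually a dbname -> branch map.
# The file name and group membership below are assumptions, not scap source.
GROUP0 = ["testwiki", "test2wiki", "mediawikiwiki"]

def bump(path, wikis, version):
    """Point each dbname in `wikis` at a new php-<version> branch."""
    with open(path) as f:
        versions = json.load(f)
    for dbname in wikis:
        versions[dbname] = "php-" + version
    with open(path, "w") as f:
        json.dump(versions, f, indent=1, sort_keys=True)

bump("wikiversions.json", GROUP0, "1.26wmf6")  # group 0 -> 1.26wmf6, as above
```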
22:17 <twentyafterfour> restarted phd on iridium (phabricator) to sync the daemons' configuration [production]
21:28 <manybubbles> restarting elasticsearch on elastic1005 [production]
21:12 <cscott> updated OCG to version c7c75e5b03ad9096571dc6dbfcb7022c924ccb4f [production]
21:03 <awight> updated payments from f97f8f99268974cfdb0182f178955bd627137842 to e89d18ee20abcb1ca3c455e6a298bf8a6aa84442 [production]
20:28 <subbu> deployed parsoid version a8108fe6 [production]
20:15 <manybubbles> restarted elasticsearch on elastic1004 [production]
20:12 <twentyafterfour> Finished scap: testwiki to php-1.26wmf6 and rebuild l10n cache (duration: 47m 24s) [production]
20:11 <manybubbles> cancel that - I just realized I can't do that. [production]
20:10 <manybubbles> elastic1003 restarted elasticsearch just fine. the cluster restart is going awesome. I'm going to rig the other 28 to restart via a script, one after the other. Expect nagios to complain about them some. [production]
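(The plan here, which the 20:11 entry above walks back, is a serialized rolling restart: restart one node, wait for the cluster to recover, then move on to the next. A hedged Python sketch of that loop using the Elasticsearch cluster health API; the host list and the ssh restart command are assumptions, not the script actually used.)

```python
import json
import subprocess
import time
import urllib.request

HOSTS = ["elastic1%03d" % n for n in range(4, 32)]  # "the other 28" (assumed names)

def wait_for_health(host, status="green", timeout="30m"):
    """Block until the cluster reaches at least `status` (ES health API)."""
    url = ("http://%s:9200/_cluster/health?wait_for_status=%s&timeout=%s"
           % (host, status, timeout))
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

for host in HOSTS:
    # Assumed restart mechanism; adjust to local service tooling.
    subprocess.check_call(["ssh", host, "sudo service elasticsearch restart"])
    time.sleep(60)            # give the node time to drop out and rejoin
    wait_for_health(host)     # don't touch the next node until recovered
    print(host, "restarted and recovered")
```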
20:03 <bblack> restarting hhvm on mw1190 [production]
19:25 <twentyafterfour> Started scap: testwiki to php-1.26wmf6 and rebuild l10n cache [production]
19:11 <awight> payments rolled back to f97f8f99268974cfdb0182f178955bd627137842 [production]
19:10 <awight> payments updated from f97f8f99268974cfdb0182f178955bd627137842 to 5c326a521120a904a2012654e9287757dc5a8ca2 [production]
19:00 <manybubbles> elastic1002 restart went well - starting elastic1003 [production]
18:45 <awight> rolled back payments to f97f8f99268974cfdb0182f178955bd627137842 [production]
18:43 <awight> update payments from f97f8f99268974cfdb0182f178955bd627137842 to 5c326a521120a904a2012654e9287757dc5a8ca2 [production]
18:05 <demon> Synchronized wmf-config/CommonSettings.php: undo all the nostalgia (duration: 00m 10s) [production]
17:21 <demon> Synchronized wmf-config/CommonSettings.php: something something skins are broken (duration: 00m 11s) [production]
17:14 <demon> Synchronized wmf-config/CommonSettings.php: because sometimes moving code helps (duration: 00m 15s) [production]
17:10 <manybub|lunch> elastic1002 restarted and rejoined the cluster - now the cluster is repairing. hurray. [production]
17:08 <manybub|lunch> elastic1001 restarted and rejoined the cluster happily while I was at lunch. it looks good - no errors beyond the ones we have fixes in flight for. So I'm going to do elastic1002 [production]
17:03 <hashar> Zuul clone failures solved. Was due to network traffic being interrupted between labs and prod. [production]
16:53 <krenair> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/209967/ (duration: 00m 14s) [production]
16:51 <hashar> Zuul clone failure https://phabricator.wikimedia.org/T98980 [production]
16:49 <andrewbogott> re-enabling puppet on labnet1001 [production]
16:46 <mutante> es2010 failed disk, reopening ticket for last fail in January [production]
16:41 <jynus> Enabling puppet agent in db1009.eqiad after reinstall [production]
16:40 <ori> Synchronized php-1.26wmf4/includes/resourceloader/ResourceLoader.php: I30b490e5b: ResourceLoader::filter: use APC when running under HHVM (duration: 00m 11s) [production]
16:38 <ori> Synchronized php-1.26wmf5/includes/resourceloader/ResourceLoader.php: I30b490e5b: ResourceLoader::filter: use APC when running under HHVM (duration: 00m 14s) [production]
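(The change ori synced to both branches makes ResourceLoader::filter cache its minified output in APC when running under HHVM, so identical module content is not re-minified on every request. The real code is PHP against the APC extension; below is a Python sketch of the same cache-by-content-hash pattern, with an in-process dict standing in for APC's shared store.)

```python
import hashlib

_cache = {}  # stands in for APC's shared in-memory store

def rl_filter(minify, source):
    """Mirror of the ResourceLoader::filter idea: reuse the cached
    result whenever the input content is byte-identical."""
    key = "rl-filter:" + hashlib.sha1(source.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = minify(source)
    return _cache[key]
```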
16:28 <andrewbogott> disabling puppet on labnet1001 to tinker with nova config [production]
15:44 <mark> Disregard cr2-knams:xe-0/0/0; we're working on it [production]
15:21 <manybubbles> I think the elasticsearch cluster got stuck with allocation disabled after the rolling restart. Funky. Haven't seen that one before. Probably a problem with our instructions. Anyway, unstuck it and recovery is going faster now [production]
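(Disabling shard allocation around each node restart is part of the standard Elasticsearch rolling-restart recipe, and forgetting to flip it back leaves recovery stuck exactly as described above. Unsticking it is a single cluster-settings call; a hedged sketch, with the setting name per the ES 1.x API and the host assumed.)

```python
import json
import urllib.request

def set_allocation(host, enable):
    """Flip cluster.routing.allocation.enable via the cluster settings API."""
    body = json.dumps(
        {"transient": {"cluster.routing.allocation.enable": enable}}
    ).encode("utf-8")
    req = urllib.request.Request(
        "http://%s:9200/_cluster/settings" % host, data=body, method="PUT"
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

set_allocation("elastic1001", "none")  # before restarting a node
set_allocation("elastic1001", "all")   # afterwards; the step that got missed
```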
15:17 <demon> Synchronized wmf-config/InitialiseSettings.php: didn't work, undoing previous sync (duration: 00m 12s) [production]
15:15 <demon> Synchronized wmf-config/InitialiseSettings.php: trying something (duration: 00m 12s) [production]
14:53 <manybubbles> elasticsearch restart on elastic1001 going well. cluster still in recovering state as expected. I'll give it an hour to soak. [production]
14:48 <manybubbles> ok - time to start the rolling restart. I'm going to do elastic1001 first non-automated and watch it [production]
14:36 <manybubbles> s/gitfit/gitfat/ oh well [production]
14:35 <manybubbles> first attempt at syncing elasticsearch plugins didn't work 100%. syncing again. gitfit/gitdeploy is betraying me [production]
14:32 <manybubbles> syncing new versions of elasticsearch plugins to prod. no restarts yet. [production]
14:04 <aude> Synchronized wmf-config/InitialiseSettings.php: Enable usage tracking for Wikisource (duration: 00m 14s) [production]
13:57 <aude> added wbc_entity_usage table on all Wikibase Client wikis [production]
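(Creating a table "on all Wikibase Client wikis" typically means iterating the client wiki list and applying the same DDL through MediaWiki's maintenance layer. A hedged Python sketch follows; the dblist name and the patch file name are hypothetical, though mwscript and sql.php are real WMF/MediaWiki entry points.)

```python
import subprocess

DBLIST = "wikidataclient.dblist"  # assumed list of Wikibase Client wikis
PATCH = "wbc_entity_usage.sql"    # hypothetical patch file name

with open(DBLIST) as f:
    wikis = [line.strip() for line in f if line.strip() and not line.startswith("#")]

for wiki in wikis:
    # mwscript is the multiversion maintenance wrapper; sql.php applies a SQL file.
    subprocess.check_call(["mwscript", "sql.php", "--wiki=" + wiki, PATCH])
```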
13:56 <jynus> jcrespo Disabling puppet agent in db1009.eqiad in preparation for reinstall [production]
13:45 <aude> Synchronized php-1.26wmf5/extensions/Wikidata: Update maintenance script (duration: 00m 20s) [production]
12:45 <springle> xtrabackup clone db1060 to db1018 [production]
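(An xtrabackup clone such as db1060 to db1018 is usually done as a streaming copy: the backup is piped straight into xbstream on the target so nothing is staged on local disk in between. A hedged Python sketch of that pipeline; hostnames, paths, and the ssh transport are assumptions, while the innobackupex/xbstream flags are the standard Percona ones.)

```python
import subprocess

TARGET = "db1018.eqiad.wmnet"  # assumed FQDN
DATADIR = "/srv/sqldata"       # assumed target datadir

# Stream a Percona XtraBackup of the local MySQL into xbstream on the target.
backup = subprocess.Popen(
    ["innobackupex", "--stream=xbstream", "/tmp"],
    stdout=subprocess.PIPE,
)
unpack = subprocess.Popen(
    ["ssh", TARGET, "xbstream -x -C %s" % DATADIR],
    stdin=backup.stdout,
)
backup.stdout.close()  # let SIGPIPE reach innobackupex if the unpacker dies
unpack.wait()
backup.wait()
# The target still needs "innobackupex --apply-log" before MySQL can start.
```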