2015-05-14
02:51 <LocalisationUpdate> completed (1.26wmf6) at 2015-05-14 02:49:53+00:00 [production]
02:47 <l10nupdate> Synchronized php-1.26wmf6/cache/l10n: (no message) (duration: 04m 16s) [production]
02:44 <springle> xtrabackup clone db1056 to db1019 [production]
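The clone above refers to Percona XtraBackup's ability to stream a consistent copy of a live MySQL datadir to another host. A minimal sketch of how such a clone is commonly done (hostnames, paths, and exact flags are illustrative and depend on the XtraBackup version in use; this is not necessarily the script springle ran):

```shell
# On the source host (db1056): stream the datadir as an xbstream
# archive over ssh, unpacking it on the target (db1019).
# /tmp here is only xtrabackup's scratch/tmpdir argument.
innobackupex --stream=xbstream /tmp \
    | ssh db1019 'xbstream -x -C /srv/sqldata.new'

# On the target: apply the redo log captured during the copy so the
# datadir is consistent, then mysqld can be started against it.
ssh db1019 'innobackupex --apply-log /srv/sqldata.new'
```

Streaming avoids staging a full backup on the source's disk, which matters when cloning multi-hundred-gigabyte database hosts.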
02:29 <LocalisationUpdate> completed (1.26wmf5) at 2015-05-14 02:28:02+00:00 [production]
02:24 <l10nupdate> Synchronized php-1.26wmf5/cache/l10n: (no message) (duration: 05m 51s) [production]
01:48 <manybubbles> sorry - restarting elasticsearch on elastic1007 [production]
01:48 <manybubbles> restarting elastic1007 [production]
01:33 <springle> Synchronized wmf-config/db-codfw.php: pool new codfw slaves (duration: 00m 11s) [production]
01:28 <springle> Synchronized wmf-config/db-eqiad.php: repool db1060, warm up (duration: 00m 14s) [production]
00:49 <manybubbles> restarting elasticsearch on elastic1006 [production]
00:03 <ebernhardson> Synchronized php-1.26wmf5/extensions/Gather/: SWAT Submodule bump for Gather extension (duration: 00m 12s) [production]
2015-05-13
23:52 <awight> payments config: correct memcache location [production]
23:40 <ebernhardson> Synchronized wmf-config/CirrusSearch-common.php: SWAT deploy cirrus config change (duration: 00m 12s) [production]
22:26 <twentyafterfour> Purged l10n cache for 1.26wmf4 [production]
22:25 <twentyafterfour> rebuilt wikiversions.cdb and synchronized wikiversions files: Group 0 to 1.26wmf6 [production]
22:21 <twentyafterfour> rebuilt wikiversions.cdb and synchronized wikiversions files: Wikipedias to 1.26wmf5 [production]
22:17 <twentyafterfour> restarted phd on iridium (phabricator) to sync the daemons' configuration [production]
21:28 <manybubbles> restarting elasticsearch on elastic1005 [production]
21:12 <cscott> updated OCG to version c7c75e5b03ad9096571dc6dbfcb7022c924ccb4f [production]
21:03 <awight> updated payments from f97f8f99268974cfdb0182f178955bd627137842 to e89d18ee20abcb1ca3c455e6a298bf8a6aa84442 [production]
20:28 <subbu> deployed parsoid version a8108fe6 [production]
20:15 <manybubbles> restarted elasticsearch on elastic1004 [production]
20:12 <twentyafterfour> Finished scap: testwiki to php-1.26wmf6 and rebuild l10n cache (duration: 47m 24s) [production]
20:11 <manybubbles> cancel that - I just realized I can't do that. [production]
20:10 <manybubbles> elastic1003 restarted elasticsearch just fine. the cluster restart is going awesome. I'm going to rig the other 28 to restart via a script, one after the other. Expect nagios to complain about them some. [production]
20:03 <bblack> restarting hhvm on mw1190 [production]
19:25 <twentyafterfour> Started scap: testwiki to php-1.26wmf6 and rebuild l10n cache [production]
19:11 <awight> payments rolled back to f97f8f99268974cfdb0182f178955bd627137842 [production]
19:10 <awight> payments updated from f97f8f99268974cfdb0182f178955bd627137842 to 5c326a521120a904a2012654e9287757dc5a8ca2 [production]
19:00 <manybubbles> elastic1002 restart went well - starting elastic1003 [production]
18:45 <awight> rolled back payments to f97f8f99268974cfdb0182f178955bd627137842 [production]
18:43 <awight> updated payments from f97f8f99268974cfdb0182f178955bd627137842 to 5c326a521120a904a2012654e9287757dc5a8ca2 [production]
18:05 <demon> Synchronized wmf-config/CommonSettings.php: undo all the nostalgia (duration: 00m 10s) [production]
17:21 <demon> Synchronized wmf-config/CommonSettings.php: something something skins are broken (duration: 00m 11s) [production]
17:14 <demon> Synchronized wmf-config/CommonSettings.php: because sometimes moving code helps (duration: 00m 15s) [production]
17:10 <manybub|lunch> elastic1002 restarted and rejoined the cluster - now the cluster is repairing. hurray. [production]
17:08 <manybub|lunch> elastic1001 restarted and rejoined the cluster happily while I was at lunch. it looks good - no errors beyond the ones we have fixes in flight for. So I'm going to do elastic1002 [production]
17:03 <hashar> Zuul clone failures solved. Was due to network traffic being interrupted between labs and prod. [production]
16:53 <krenair> Synchronized wmf-config/InitialiseSettings.php: https://gerrit.wikimedia.org/r/#/c/209967/ (duration: 00m 14s) [production]
16:51 <hashar> Zuul clone failure https://phabricator.wikimedia.org/T98980 [production]
16:49 <andrewbogott> re-enabling puppet on labnet1001 [production]
16:46 <mutante> es2010 failed disk, reopening ticket for last fail in January [production]
16:41 <jynus> Enabling puppet agent in db1009.eqiad after reinstall [production]
16:40 <ori> Synchronized php-1.26wmf4/includes/resourceloader/ResourceLoader.php: I30b490e5b: ResourceLoader::filter: use APC when running under HHVM (duration: 00m 11s) [production]
16:38 <ori> Synchronized php-1.26wmf5/includes/resourceloader/ResourceLoader.php: I30b490e5b: ResourceLoader::filter: use APC when running under HHVM (duration: 00m 14s) [production]
16:28 <andrewbogott> disabling puppet on labnet1001 to tinker with nova config [production]
15:44 <mark> Disregard cr2-knams:xe-0/0/0; we're working on it [production]
15:21 <manybubbles> I think the elasticsearch cluster got stuck with allocation disabled after the rolling restart. Funky. Haven't seen that one before. Probably a problem with our instructions. Anyway, unstuck it and recovery is going faster now [production]
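"Stuck with allocation disabled" refers to Elasticsearch's cluster-level shard allocation setting: during a rolling restart, allocation is typically disabled so the cluster doesn't rebalance shards every time a node leaves, and it must be re-enabled afterwards or recovery stalls exactly as described above. A hedged sketch of the unsticking step (host and port are illustrative; this is the standard cluster settings API of the Elasticsearch 1.x era, not necessarily the exact command used here):

```shell
# Re-enable shard allocation cluster-wide (transient, so it does not
# survive a full cluster restart).
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'

# Watch recovery progress until the cluster reports green.
curl 'http://localhost:9200/_cluster/health?pretty'
```

The inverse step before restarting a node sets `"cluster.routing.allocation.enable": "none"` (or `"primaries"`), which is easy to forget to undo - the likely "problem with our instructions" mentioned in the entry.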
15:17 <demon> Synchronized wmf-config/InitialiseSettings.php: didn't work, undoing previous sync (duration: 00m 12s) [production]
15:15 <demon> Synchronized wmf-config/InitialiseSettings.php: trying something (duration: 00m 12s) [production]