2015-05-14
14:12 <paravoid> switching ns2 back to eeden [production]
13:56 <cmjohnson1> upgrading tellurium to trusty [production]
13:41 <cmjohnson1> power cycling barium [production]
13:40 <godog> es-tool restart-fast on elastic1011 [production]
13:21 <paravoid> reimaging eeden with jessie [production]
12:59 <paravoid> switching ns2 to multatuli [production]
12:53 <jynus> temporarily disabling Icinga check for MySQL running on db1009 until data is migrated from virt1000 and host sent to production [production]
12:40 <akosiaris> uploaded to apt.wikimedia.org jessie-wikimedia: apertium-pt-gl_0.9.2~r60358-1 [production]
12:36 <godog> es-tool restart-fast on elastic1010 [production]
11:40 <manybubbles> restarting elasticsearch on elastic1009 [production]
05:07 <LocalisationUpdate> ResourceLoader cache refresh completed at Thu May 14 05:06:09 UTC 2015 (duration 6m 8s) [production]
02:55 <manybubbles> restarting elasticsearch on elastic1008 [production]
02:51 <LocalisationUpdate> completed (1.26wmf6) at 2015-05-14 02:49:53+00:00 [production]
02:47 <l10nupdate> Synchronized php-1.26wmf6/cache/l10n: (no message) (duration: 04m 16s) [production]
02:44 <springle> xtrabackup clone db1056 to db1019 [production]
02:29 <LocalisationUpdate> completed (1.26wmf5) at 2015-05-14 02:28:02+00:00 [production]
02:24 <l10nupdate> Synchronized php-1.26wmf5/cache/l10n: (no message) (duration: 05m 51s) [production]
01:48 <manybubbles> sorry - restarting elasticsearch on elastic1007 [production]
01:48 <manybubbles> restarting elastic1007 [production]
01:33 <springle> Synchronized wmf-config/db-codfw.php: pool new codfw slaves (duration: 00m 11s) [production]
01:28 <springle> Synchronized wmf-config/db-eqiad.php: repool db1060, warm up (duration: 00m 14s) [production]
00:49 <manybubbles> restarting elasticsearch on elastic1006 [production]
00:03 <ebernhardson> Synchronized php-1.26wmf5/extensions/Gather/: SWAT Submodule bump for Gather extension (duration: 00m 12s) [production]
2015-05-13
23:52 <awight> payments config: correct memcache location [production]
23:40 <ebernhardson> Synchronized wmf-config/CirrusSearch-common.php: SWAT deploy cirrus config change (duration: 00m 12s) [production]
22:26 <twentyafterfour> Purged l10n cache for 1.26wmf4 [production]
22:25 <twentyafterfour> rebuilt wikiversions.cdb and synchronized wikiversions files: Group 0 to 1.26wmf6 [production]
22:21 <twentyafterfour> rebuilt wikiversions.cdb and synchronized wikiversions files: Wikipedias to 1.26wmf5 [production]
22:17 <twentyafterfour> restarted phd on iridium (phabricator) to sync the daemons' configuration [production]
21:28 <manybubbles> restarting elasticsearch on elastic1005 [production]
21:12 <cscott> updated OCG to version c7c75e5b03ad9096571dc6dbfcb7022c924ccb4f [production]
21:03 <awight> updated payments from f97f8f99268974cfdb0182f178955bd627137842 to e89d18ee20abcb1ca3c455e6a298bf8a6aa84442 [production]
20:28 <subbu> deployed parsoid version a8108fe6 [production]
20:15 <manybubbles> restarted elasticsearch on elastic1004 [production]
20:12 <twentyafterfour> Finished scap: testwiki to php-1.26wmf6 and rebuild l10n cache (duration: 47m 24s) [production]
20:11 <manybubbles> cancel that - I just realized I can't do that. [production]
20:10 <manybubbles> elastic1003 restarted elasticsearch just fine. the cluster restart is going awesome. I'm going to rig the other 28 to restart via a script, one after the other. Expect nagios to complain about them some. [production]
20:03 <bblack> restarting hhvm on mw1190 [production]
19:25 <twentyafterfour> Started scap: testwiki to php-1.26wmf6 and rebuild l10n cache [production]
19:11 <awight> payments rolled back to f97f8f99268974cfdb0182f178955bd627137842 [production]
19:10 <awight> payments updated from f97f8f99268974cfdb0182f178955bd627137842 to 5c326a521120a904a2012654e9287757dc5a8ca2 [production]
19:00 <manybubbles> elastic1002 restart went well - starting elastic1003 [production]
18:45 <awight> rolled back payments to f97f8f99268974cfdb0182f178955bd627137842 [production]
18:43 <awight> updated payments from f97f8f99268974cfdb0182f178955bd627137842 to 5c326a521120a904a2012654e9287757dc5a8ca2 [production]
18:05 <demon> Synchronized wmf-config/CommonSettings.php: undo all the nostalgia (duration: 00m 10s) [production]
17:21 <demon> Synchronized wmf-config/CommonSettings.php: something something skins are broken (duration: 00m 11s) [production]
17:14 <demon> Synchronized wmf-config/CommonSettings.php: because sometimes moving code helps (duration: 00m 15s) [production]
17:10 <manybub|lunch> elastic1002 restarted and rejoined the cluster - now the cluster is repairing. hurray. [production]
17:08 <manybub|lunch> elastic1001 restarted and rejoined the cluster happily while I was at lunch. it looks good - no errors beyond the ones we have fixes in flight for. So I'm going to do elastic1002 [production]
17:03 <hashar> Zuul clone failures solved. Was due to network traffic being interrupted between labs and prod. [production]