2015-06-16
ยง
|
18:03 <godog> bounce statsite on graphite1001, stuck while writing to graphite [production]
17:30 <ejegg> update SmashPig on listener from e1e925c9fc2a60c1e14ef01d8b653dc09512f51f to 258f2c917b1ae50b01231927bcd6f58ecaa8940b [production]
17:23 <krinkle> Synchronized php-1.26wmf9/includes/resourceloader/ResourceLoader.php: undo live hack (duration: 00m 13s) [production]
17:09 <aude> Synchronized arbitraryaccess.dblist: Enable arbitrary access on gomwiki and lrcwiki (duration: 00m 13s) [production]
17:09 <aude> Synchronized usagetracking.dblist: Enable Wikibase usage tracking on second batch of s3 wikis (duration: 00m 13s) [production]
17:03 <bblack> Synchronized wmf-config/InitialiseSettings.php: wgCanonicalServer: HTTPS for all (duration: 00m 15s) [production]
16:44 <krenair> Synchronized wmf-config/interwiki.cdb: Updating interwiki cache (duration: 00m 13s) [production]
16:43 <krenair> Synchronized wmf-config/InitialiseSettings.php: (no message) (duration: 00m 13s) [production]
16:43 <krenair> Synchronized w/static/images/project-logos/gomwiki.png: (no message) (duration: 00m 14s) [production]
16:42 <krenair> Synchronized langlist: gomwiki (duration: 00m 13s) [production]
16:41 <krenair> rebuilt wikiversions.cdb and synchronized wikiversions files: (no message) [production]
16:40 <krenair> Synchronized database lists: (no message) (duration: 00m 13s) [production]
16:29 <krenair> Synchronized wmf-config/interwiki.cdb: Updating interwiki cache (duration: 00m 13s) [production]
16:27 <krenair> Synchronized langlist: (no message) (duration: 00m 14s) [production]
16:25 <krenair> Synchronized w/static/images/project-logos/lrcwiki.png: (no message) (duration: 00m 13s) [production]
16:21 <moritzm> updated copper, oxygen, labstore2001 and labnodepool1001 to the 3.19 kernel [production]
16:11 <krenair> Synchronized wmf-config/interwiki.cdb: Updating interwiki cache (duration: 00m 13s) [production]
16:10 <krenair> Synchronized wmf-config: (no message) (duration: 00m 14s) [production]
16:06 <krenair> rebuilt wikiversions.cdb and synchronized wikiversions files: (no message) [production]
16:05 <krenair> Synchronized database lists: (no message) (duration: 00m 15s) [production]
15:43 <thcipriani> Synchronized wmf-config/InitialiseSettings.php: SWAT: templateeditor: add templateeditor right in hewiki [[gerrit:218426]] (duration: 00m 13s) [production]
15:09 <thcipriani> Synchronized wmf-config/InitialiseSettings.php: SWAT: Turn on wgGenerateThumbnailOnParse for wikitech. [[gerrit:218553]] (duration: 00m 12s) [production]
15:03 <thcipriani> Synchronized wmf-config/InitialiseSettings.php: SWAT: CX: Add wikis for CX deployment on 20150616 [[gerrit:218341]] (duration: 00m 12s) [production]
14:18 <cmjohnson> barium is going down for disk replacement [production]
13:38 <aude> Synchronized usagetracking.dblist: Enable Wikibase usage tracking on dewiki (duration: 00m 15s) [production]
13:18 <akosiaris> rebooted etherpad1001 for kernel upgrades [production]
12:51 <jynus> Synchronized wmf-config/db-codfw.php: Repool es2005, es2006 and es2007 after maintenance (duration: 00m 13s) [production]
12:44 <aude> Synchronized usagetracking.dblist: Enable Wikibase usage tracking on cswiki (duration: 00m 14s) [production]
12:20 <aude> Synchronized usagetracking.dblist: Enable usage tracking on ruwiki (duration: 00m 15s) [production]
11:21 <paravoid> restarting the puppetmaster [production]
11:19 <springle> Synchronized wmf-config/db-eqiad.php: repool db1073, warm up (duration: 00m 13s) [production]
10:36 <akosiaris> rebooting ganeti200{1..6}.codfw.wmnet for kernel upgrades [production]
09:33 <jynus> Synchronized wmf-config/db-codfw.php: Depool es2005, es2006 and es2007 for maintenance (duration: 00m 14s) [production]
09:10 <YuviPanda> deleted huge puppet-master.log on labcontrol1001 [production]
08:05 <jynus> added m5-slave to dns servers [production]
07:52 <paravoid> restarting hhvm on mw1121 [production]
07:39 <jynus> Synchronized wmf-config/db-eqiad.php: Repool es1005 (duration: 00m 14s) [production]
06:24 <LocalisationUpdate> ResourceLoader cache refresh completed at Tue Jun 16 06:24:04 UTC 2015 (duration 24m 3s) [production]
06:18 <godog> restore ES replication throttling to 20mb/s [production]
06:13 <godog> restore ES replication throttling to 40mb/s [production]
06:08 <filippo> Synchronized wmf-config/PoolCounterSettings-common.php: unthrottle ES (duration: 00m 14s) [production]
05:56 <godog> bump ES replication throttling to 60mb/s [production]
05:50 <manybubbles> ok - we're yellow and recovering. ops can take this from here. We have a root cause and we have things I can complain about to the elastic folks I plan to meet with today anyway. I'm going to finish waking up now. [production]
05:49 <manybubbles> reenabling puppet agent on elasticsearch machines [production]
05:46 <manybubbles> I expect them to be red for another few minutes during the initial master recovery [production]
05:46 <manybubbles> started all elasticsearch nodes and now they are recovering. [production]
05:41 <godog> restart gmond on elastic1007 [production]
05:39 <filippo> Synchronized wmf-config/PoolCounterSettings-common.php: throttle ES (duration: 00m 13s) [production]
05:25 <manybubbles> shutting down all the elasticsearch processes on the elasticsearch nodes again - another full cluster restart should fix it like it did last time... [production]
05:11 <godog> restart elasticsearch on elastic1031 [production]