2014-07-16
21:01 <yurik> Synchronized php-1.24wmf13/extensions/: update to JsonConfig, ZeroBanner, ZeroPortal (duration: 04m 53s) [production]
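The "Synchronized ..." entries here and throughout this log are emitted by the MediaWiki deployment tooling. A minimal sketch of the kind of invocation behind one, assuming the sync-dir wrapper that WMF's scap tooling provided at the time (exact command name and flags may differ):

    # Push a directory out to the app servers and log the given message.
    sync-dir php-1.24wmf13/extensions/ 'update to JsonConfig, ZeroBanner, ZeroPortal'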
20:56 <yurik> Synchronized php-1.24wmf12/extensions/: update to JsonConfig, ZeroBanner, ZeroPortal (duration: 04m 54s) [production]
20:22 <subbu> deploy parsoid 060dcb54 [production]
19:56 <ottomata> reenabling puppet on analytics1027 [production]
19:21 <ottomata> temp disabling puppet on analytics1027 [production]
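The disable/re-enable pair above is the standard Puppet agent workflow for doing manual work on a host without the agent reverting it; a sketch (the reason string is optional, but it is shown to other operators who check the host):

    # Stop scheduled Puppet runs on this host, recording why.
    puppet agent --disable 'temp work on analytics1027'
    # ... manual changes ...
    # Allow runs again, and optionally trigger one immediately.
    puppet agent --enable
    puppet agent --test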
17:57 <akosiaris> clean puppet stored config database for osm-db100{1,2}.eqiad.wmnet, updating icinga [production]
16:49 <Reedy> Restarted jenkins again [production]
16:12 <Reedy> Restarted jenkins [production]
16:11 <Reedy> Killed jenkins [production]
14:34 <_joe_> moving the stale conf-enabled directory away on jobrunners; otherwise, when we upgrade to trusty, all hell will break loose [production]
13:06 <oblivian> gracefulled all apaches [production]
12:15 <oblivian> gracefulled all apaches [production]
12:01 <_joe_> removed stale files from /etc/apache2/conf-enabled on all mw hosts [production]
11:25 <manybubbles> Synchronized wmf-config/InitialiseSettings.php: Take Cirrus as default from more wikis while we figure out load issues (duration: 00m 06s) [production]
10:32 <_joe_> releasing a new apache config to all mediawikis [production]
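A "graceful" tells Apache to re-read its configuration and recycle workers without dropping in-flight requests, which is why the config release above is followed by gracefulling all apaches. A minimal sketch, assuming a plain SSH fan-out rather than whatever orchestration was actually used:

    # Validate, then gracefully reload Apache on each MediaWiki app server.
    for host in $(cat mw-hosts.txt); do
      ssh "$host" 'apache2ctl configtest && apache2ctl graceful'
    done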
08:54 <godog> repool ms-fe1004 [production]
08:52 <godog> repool ms-fe1003 and depool ms-fe1004 [production]
08:46 <godog> repool ms-fe1002 and depool ms-fe1003 [production]
08:39 <godog> depool ms-fe1002 for swift upgrade [production]
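Read bottom-up, the four entries above are a rolling upgrade: each Swift frontend is depooled from the load balancer, upgraded, and repooled before the next one is touched, so capacity never drops by more than one host. A sketch of the pattern; depool/repool stand in for whatever the load-balancer tooling actually was, and the upgrade step is hypothetical:

    for host in ms-fe1002 ms-fe1003 ms-fe1004; do
      depool "$host"                 # take it out of rotation
      ssh "$host" 'apt-get -y install swift-proxy && service swift-proxy restart'
      repool "$host"                 # put it back before touching the next one
    done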
05:54 <springle> resuming page content model schema changes; osc_host.sh processes on terbium are OK to kill in an emergency [production]
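osc_host.sh is a local online-schema-change wrapper; the "OK to kill" note works because an online schema change builds a copy of the table in the background and only swaps it in at the end, so aborting mid-run loses nothing. A sketch using the comparable upstream tool pt-online-schema-change, with illustrative database, table, and DDL (the actual change run here is an assumption):

    pt-online-schema-change \
      --alter "ADD page_content_model varbinary(32) DEFAULT NULL" \
      D=enwiki,t=page --execute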
04:23 <springle> restarted gitblit on antimony [production]
03:04 <LocalisationUpdate> ResourceLoader cache refresh completed at Wed Jul 16 03:03:41 UTC 2014 (duration 3m 40s) [production]
02:27 <LocalisationUpdate> completed (1.24wmf13) at 2014-07-16 02:26:12+00:00 [production]
02:15 <LocalisationUpdate> completed (1.24wmf12) at 2014-07-16 02:14:32+00:00 [production]
01:34 <manybubbles> moving shards off of elastic101[789] [production]
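The usual way to drain shards off specific Elasticsearch nodes is to exclude them in the cluster settings and let the allocator migrate everything away on its own; whether this exact call was used is an assumption, and the host queried is arbitrary:

    curl -XPUT 'http://elastic1001:9200/_cluster/settings' -d '{
      "transient": {
        "cluster.routing.allocation.exclude._name": "elastic1017,elastic1018,elastic1019"
      }
    }'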
2014-07-15
23:20 <maxsem> Synchronized wmf-config/: https://gerrit.wikimedia.org/r/#/c/146615/ (duration: 00m 04s) [production]
23:16 <maxsem> Synchronized php-1.24wmf12/extensions/CirrusSearch/: https://gerrit.wikimedia.org/r/#q,146471,n,z (duration: 00m 05s) [production]
23:14 <maxsem> Synchronized php-1.24wmf13/includes/specials/SpecialVersion.php: (no message) (duration: 00m 04s) [production]
23:13 <maxsem> Synchronized php-1.24wmf13/extensions/CirrusSearch/: https://gerrit.wikimedia.org/r/#q,146471,n,z (duration: 00m 04s) [production]
22:35 <K4-713> synchronized payments to afa12be34769000bf8 [production]
21:34 <_joe_> disabling puppet on mw1001, tests [production]
21:26 <aude> Synchronized php-1.24wmf13/extensions/Wikidata: Update submodule to fix entity search issue on Wikidata (duration: 00m 21s) [production]
21:15 <ori> to test r146607, locally modified upstart conf for jobrunner on mw1001 to log to /var/log/mediawiki, and restarted service [production]
20:24 <ori> restarted jobrunner on all jobrunners [production]
20:23 <AaronSchulz> Deployed /srv/jobrunner to 31e54c564d369e89613db48977eec0a5891b6498 [production]
20:21 <reedy> Synchronized docroot and w: (no message) (duration: 00m 21s) [production]
20:18 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: non-Wikipedias to 1.24wmf13 [production]
20:13 <Krinkle> Reloading Zuul to deploy If2312bcf18bdbe8dee [production]
20:12 <bd808> log volume up after logstash restart [production]
20:10 <bd808> restarted logstash on logstash1001; log volume looked to be down from "normal" [production]
19:55 <Reedy> Applied extensions/UploadWizard/UploadWizard.sql to rowiki (re bug 59242) [production]
18:53 <manybubbles> bouncing elastic1018 to pick up the new merge policy; hopefully that'll help with the I/O thrashing [production]
17:58 <ori> _joe_ deployed jobrunner to all job runners [production]
17:40 <manybubbles> my last attempt to lower the concurrent recovery traffic failed; tried again and succeeded. That seems to have fixed the echo service disruption caused by taking elastic1017 out of service [production]
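Recovery concurrency and bandwidth are runtime-tunable cluster settings in Elasticsearch; something along these lines is presumably what "lower the concurrent recovery traffic" means, though the exact settings touched and the values shown are assumptions:

    curl -XPUT 'http://elastic1001:9200/_cluster/settings' -d '{
      "transient": {
        "cluster.routing.allocation.node_concurrent_recoveries": 2,
        "indices.recovery.max_bytes_per_sec": "20mb"
      }
    }'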
17:37 <ori> updated jobrunner to bef32b9120 [production]
17:29 <manybubbles> elastic1017 went nuts again. just shutting elasticsearch off on it for now [production]
16:25 <_joe_> all mw servers updated [production]
16:10 <_joe_> mw1100 and onwards updated [production]
16:00 <_joe_> mw1060-mw1099 updated [production]
15:58 <manybubbles> restarting Elasticsearch on elastic1017 - it's thrashing the disk again. I'm still not 100% sure why [production]