2012-04-25

22:14 <LeslieCarr> restarted swift-container-auditor on ms-be3 [production]
21:55 <RobH> pushing dns update for scs-c1-eqiad and ps1-c#-eqiad [production]
21:22 <LeslieCarr> reloading varnish on mobile caches cp1041 cp1042 cp1043 cp1044 [production]
21:21 <LeslieCarr> clearing mobile varnish cache [production]
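The 21:22/21:21 pair above (VCL reload plus cache clear) matches a standard varnishadm sequence on a single cache host. A minimal sketch, assuming Varnish 3.x, the default admin port, and an illustrative VCL name and path, none of which are stated in the log:

  # Compile, load and activate a new VCL without restarting the daemon.
  varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret vcl.load mobile-new /etc/varnish/mobile.vcl
  varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret vcl.use mobile-new

  # Invalidate every cached object (Varnish 3.x ban syntax).
  varnishadm -T 127.0.0.1:6082 -S /etc/varnish/secret 'ban.url .'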
19:38 <logmsgbot_> catrope synchronized php-1.20wmf1/extensions/ZeroRatedMobileAccess/ZeroRatedMobileAccess.body.php 'Attempted fatal fix' [production]
19:33 <logmsgbot_> catrope synchronized php-1.20wmf1/extensions/Math/ 'Deploying 4c9e7dbe761c798ce15d7e2acef829a1582c058b' [production]
19:14 <notpeter> starting innobackupex from db12 to db59 for new s1 slave, per Mr. Feldman's directions [production]
18:56 <notpeter> starting innobackupex from db1017 to db60 for new s1 slave [production]
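The two innobackupex entries above describe cloning a new s1 slave by streaming a hot backup from a running database to the target host. A minimal sketch of that pattern, assuming tar streaming over ssh; the credentials, paths, and destination layout are illustrative, not taken from the log:

  # On the source host, stream a hot backup straight to the new slave.
  innobackupex --user=backup --password=... --stream=tar /tmp \
    | ssh db59 'tar -ixf - -C /srv/sqldata.new'

  # On the destination, apply the redo log before pointing mysqld at the copy.
  innobackupex --apply-log /srv/sqldata.new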
18:49 <logmsgbot_> aaron synchronized php-1.20wmf1/extensions/FeaturedFeeds/SpecialFeedItem.php 'Deployed 4fb14a7b2ca9be715b820a9847d999f21c7d2cfc' [production]
18:36 <logmsgbot_> aaron synchronized php-1.20wmf1/img_auth.php 'Deployed f7e49bd71bd8356751242c5ce1cbae076a27cf7a' [production]
18:10 <logmsgbot_> aaron rebuilt wikiversions.cdb and synchronized wikiversions files: Moving all remaining wikis to php-1.20wmf1 [production]
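The 18:10 entry above is what the branch-switch workflow logs: a flat file maps every wiki database to a deployment branch, gets compiled into wikiversions.cdb, and is synced to the web servers. A rough sketch, assuming the sync-wikiversions tooling of the period; the file contents and exact invocation are illustrative:

  # wikiversions.dat maps each wiki database to a branch directory, roughly:
  #   enwiki       php-1.20wmf1
  #   commonswiki  php-1.20wmf1

  # Rebuild the CDB lookup and push both files; the message becomes the log entry.
  sync-wikiversions 'Moving all remaining wikis to php-1.20wmf1'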
17:07 <LeslieCarr> reloaded mobile varnish configs [production]
17:06 <LeslieCarr> purging mobile cache [production]
16:40 <LeslieCarr> starting delete script on ms-be3 [production]
16:14 <RobH> done moving mgmt connections and serial connections in s8-eqiad for now [production]
16:05 <RobH> reshuffling cables in eqiad for serial and mgmt connections in a8; this may affect all eqiad mgmt and serial connections for the next 5 minutes [production]
15:29 <hashar> gallium: MySQL had issues most probably because of the mysql configuration snippets. https://gerrit.wikimedia.org/r/5796 might solve that. [production]
14:03 <mutante> gallium - don't start puppet unless the erb template fix for mysql has been merged [production]
13:52 <mutante> gallium - stopped puppet, moved the log_slow_queries config, setting up mysql again [production]
13:41 <mutante> gallium/testswarm - back up after the mysql upgrade and an issue starting the service [production]
13:36 <mutante> gallium - dpkg-reconfigure mysql-server-5.1, mysql does not start correctly [production]
13:27 <mutante> running apt-get upgrade on gallium [production]
12:29 <mark> Sending US, Brazilian, and Indian traffic to upload.eqiad [production]
11:39 <mutante> running authdns-update to add analysis mgmt names [production]
05:35 <paravoid> powercycled lvs6; it was dead and not responding to serial [production]
03:43 <logmsgbot_> asher synchronized wmf-config/db.php 'adding db58 to s7 as a new slave with a low weight' [production]
03:24 <logmsgbot_> asher synchronized wmf-config/db.php 'pulling db58' [production]
03:23 <logmsgbot_> asher synchronized wmf-config/db.php 'adding db58 to s7 as a new slave with a low weight' [production]
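The three db.php entries above show the usual way a new slave enters rotation: add it to its section with a small load weight, pull it if something looks off, then re-add it. A minimal sketch of the deploy step, assuming the sync-file workflow and an illustrative db.php layout; the deployment path, weight, and array names are assumptions, not taken from the log:

  # 1. Edit the load-balancer config on the deployment host; a new slave gets a
  #    small weight in its section, e.g. an entry like 'db58' => 50 under 's7'
  #    in the section load arrays of wmf-config/db.php.
  $EDITOR /home/wikipedia/common/wmf-config/db.php

  # 2. Push the file to the cluster; the quoted message becomes the SAL entry.
  sync-file wmf-config/db.php 'adding db58 to s7 as a new slave with a low weight'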
02:28 <logmsgbot_> LocalisationUpdate completed (1.20wmf1) at Wed Apr 25 02:28:47 UTC 2012 [production]
02:14 <logmsgbot_> LocalisationUpdate completed (1.19) at Wed Apr 25 02:14:46 UTC 2012 [production]
00:02 <binasher> profiling collector was pegged at 100% CPU and graphs were turned to swiss cheese due to a bad stats call in 1.20, now fixed [production]
2012-04-24

23:59 <binasher> powering off db16 [production]
23:55 <binasher> streaming hot backup of db1041 to db58 (building a new s7 slave) [production]
23:48 <logmsgbot_> aaron synchronized php-1.19/includes/Setup.php 'Hacked out session request stats.' [production]
23:46 <logmsgbot_> aaron synchronized php-1.20wmf1/includes/Setup.php 'Deployed 42fcd43299246ecd1b265fcfcdd01a60319cf378' [production]
23:19 <AaronSchulz> Running 'mwscriptwikiset maintenance/populateRevisionSha1.php all.dblist' on hume [production]
22:43 <logmsgbot_> aaron synchronized wmf-config/CommonSettings.php 'Enabled file change journal on wikis using the new backend config.' [production]
22:20 <AaronSchulz> Tables added [production]
22:18 <binasher> rebooting db16 with updated kernel. it's probably still hopeless (DIMM errors) [production]
22:18 <AaronSchulz> Creating the filejournal table on all wikis [production]
21:59 <logmsgbot_> aaron synchronized wmf-config/CommonSettings.php 'Switched commonswiki to the new backend config format.' [production]
21:48 <logmsgbot_> asher synchronized wmf-config/db.php 'pulling db16, memory errors' [production]
20:13 <apergos> re-enabled replication via cron on ms7; it should catch up within an hour or so [production]
20:10 <binasher> reimaged db58 with a fixed RAID setup, imaging db59 [production]
19:51 <notpeter> starting innobackupex from db1034 to db57 for new s2 slave [production]
19:50 <Ryan_Lane> repooling ssl3001 [production]
19:28 <Ryan_Lane> depooling ssl3001 [production]
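Depooling and repooling a backend such as ssl3001 is typically a matter of toggling its line in the pybal pool configuration read by the load balancers. A rough sketch, assuming pybal-managed pools; the file path and exact keys are assumptions, not taken from the log:

  # One Python-literal line per backend; 'enabled': False drains it from the
  # pool, and flipping it back to True repools it once pybal re-reads the file.
  #   { 'host': 'ssl3001.esams.wikimedia.org', 'weight': 10, 'enabled': False }
  $EDITOR pybal/esams/https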
18:18 <LeslieCarr> deploying to frontend [production]