2012-12-11
ยง
|
19:03 <asher> synchronized wmf-config/db.php 'pulling db59 from s1 for test' [production]
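For context, a sync of wmf-config/db.php like the one above amounts to editing the section's load array and pushing the file out. A minimal, hypothetical sketch of what depooling db59 from s1 could look like; the hostnames, weights and array layout are assumptions for illustration, not the real production values:

    <?php
    // Hypothetical sketch only: depooling db59 from the s1 load array in
    // wmf-config/db.php. Hostnames and weights here are illustrative.
    $wgLBFactoryConf['sectionLoads']['s1'] = array(
        'db63' => 0,     // master (weight 0 keeps general reads off it)
        // 'db59' => 200, // commented out: pulled from rotation for the test
        'db60' => 200,
        'db61' => 200,
    );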
18:44 <RoanKattouw> Force-running puppet on celsus and constable [production]
18:25 <RoanKattouw> Force-running puppet on celsus and constable again [production]
18:24 <mutante> giving all RT accounts a real name value to reduce confusion (nickname, unix login, username, language, etc. are optional) [production]
18:21 <RoanKattouw> Edited celsus:/etc/varnish/default.vcl to point to 10.2.1.28:8000 [production]
18:12 <RoanKattouw> Installing Varnish on celsus [production]
17:43 <RoanKattouw> Force-running puppet on constable [production]
16:50 <cmjohnson1> correction: mc1010 and mc1002 replacing 10G NIC [production]
16:43 <cmjohnson1> mc1009 and mc1002 shutting down to replace 10G NIC [production]
16:26 <reedy> synchronized php-1.21wmf6/extensions/ParserFunctions/ [production]
15:32 <demon> synchronized php-1.21wmf6/extensions/Wikibase/client/includes/store/sql/WikiPageEntityLookup.php [production]
15:27 <demon> synchronized php-1.21wmf6/includes/Revision.php 'Syncing I14a7ebb8' [production]
14:44 <maxsem> synchronized wmf-config [production]
14:25 <sbernardin> sr266 going down for main board replacement rt2896 [production]
14:25 <sbernardin> sr266 going down for main board replacement [production]
12:32 <hashar> changing operations-puppet-validate Jenkins job [production]
12:25 <Nemo_bis> All !Wikipedia and !Wikimedia projects are now up and editable again; s3 was read-only for a few hours due to a slow DB http://ur1.ca/bz8v0 [production]
12:07 <reedy> synchronized php-1.21wmf6/extensions/LabeledSectionTransclusion [production]
09:35 <mark> Power cycled nescio [production]
08:43 <hashar> restarting Jenkins, it still knows about jobs that no longer exist :/ [production]
08:40 <apergos> stopped job runners temporarily while we look at still-increasing replag on s3 [production]
08:03 <binasher> restarted all job runners [production]
07:56 <aaron> synchronized php-1.21wmf5/includes/job/JobQueueDB.php 'deployed 6f03ada325f985fc0b14c0d0144a42c94268941f' [production]
07:55 <aaron> synchronized php-1.21wmf6/includes/job/JobQueueDB.php 'deployed eabed08c91538ffbb651ed7d8867f9966afc2ae5' [production]
07:42 <aaron> synchronized php-1.21wmf6/includes/job/JobQueueDB.php 'deployed 26b9a472df430dc3abaead0e6f5a9f640b075b99' [production]
07:34 <binasher> restarting all job runners [production]
07:32 <Aaron|home> Also deployed 919b9d8ba3ee7d6a16214b0853e82fa733a0605a and dcd314e837ccf3d2f7c2ca7dc94b7d1e6accfde4 [production]
07:31 <aaron> synchronized php-1.21wmf5/includes/job/JobQueueDB.php 'reverted live hacks to use slave DB' [production]
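The 'live hacks' reverted here had pointed job-queue reads at a slave during the s3 replication-lag incident. A hedged sketch of that kind of temporary change using the MediaWiki 1.21-era wfGetDB() API; the function name and surrounding code are invented for illustration and are not taken from JobQueueDB.php:

    <?php
    // Hypothetical illustration of a 'use slave DB' live hack and its revert.
    function getQueueDbConnection() {
        // Temporary hack (now reverted): read from a slave to spare the master.
        // return wfGetDB( DB_SLAVE );
        return wfGetDB( DB_MASTER ); // normal behaviour restored by the sync above
    }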
07:02 <catrope> synchronized php-1.21wmf6/extensions/VisualEditor/modules/ve/init/mw/targets/ve.init.mw.ViewPageTarget.js 'touch' [production]
07:02 <catrope> synchronized php-1.21wmf5/extensions/VisualEditor/modules/ve/init/mw/targets/ve.init.mw.ViewPageTarget.js 'touch' [production]
07:00 <olivneh> synchronized php-1.21wmf5/extensions/E3Experiments/Experiments.php [production]
06:56 <olivneh> synchronized php-1.21wmf5/extensions/EventLogging 'Syncing to avoid obsolete CACHE_MEMCACHE use in prod.' [production]
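The EventLogging sync above was about dropping a hard-coded memcached cache type. One hedged guess at the pattern, since the actual change is not shown in this log: let the wiki's configured object cache decide instead of naming memcached explicitly.

    <?php
    // Hypothetical sketch only; the real EventLogging change is not reproduced here.
    // Instead of insisting on memcached:
    //   $cache = wfGetCache( CACHE_MEMCACHED );
    // fall back to whatever object cache production configures:
    $cache = wfGetCache( CACHE_ANYTHING );
    $cache->set( 'eventlogging:example-key', array( 'example' => true ), 300 );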
06:49 <asher> synchronized wmf-config/db.php 'pulling db39 from s3, crazy behind on purging' [production]
05:52 <aaron> synchronized php-1.21wmf5/includes/objectcache/MemcachedPhpBagOStuff.php [production]
05:52 <aaron> synchronized php-1.21wmf5/includes/objectcache/MemcachedPhpBagOStuff.php [production]
05:22 <binasher> killed all wikiadmin threads on db39 [production]
05:11 <catrope> synchronized php-1.21wmf6/extensions/VisualEditor 'Updating VisualEditor to master' [production]
05:11 <catrope> synchronized php-1.21wmf5/extensions/VisualEditor 'Updating VisualEditor to master' [production]
04:41 <binasher> same with db66 [production]
04:38 <binasher> killed 22 long-running FlaggedRevsStats::getEditReviewTimes queries on db39, run from hume against enwikinews [production]
04:05 <RoanKattouw> Updating Parsoid to master [production]
03:53 <mutante> powercycling mexia [production]
03:29 <aaron> synchronized php-1.21wmf5/includes/job/JobQueueDB.php [production]
03:29 <aaron> synchronized php-1.21wmf5/includes/job/jobs/RefreshLinksJob.php [production]
03:29 <RoanKattouw> mexia still hasn't come back up; that, combined with the weird behavior I saw before rebooting it, makes me suspect a hardware issue [production]
03:12 <aaron> synchronized php-1.21wmf6/includes/job/JobQueueDB.php [production]
03:08 <aaron> synchronized php-1.21wmf6/includes/job/jobs/RefreshLinksJob.php [production]
03:02 <RoanKattouw> Rebooting mexia; it seems to have been hit by a kernel bug [production]
02:00 <LocalisationUpdate> failed: git pull of core failed [production]
01:12 <pgehres> synchronized wmf-config/CommonSettings.php 'Putting donation pipeline back to normal' [production]