2012-07-04
13:21 <mutante> svn committing gerrit 987, sync-apache [production]
13:14 <mutante> git pull in /h/w/common/docroot, adding wikidata.org files on fenari, then "sync-docroot" [production]
12:44 <dzahn> synchronized php/cache/interwiki.cdb 'Updating interwiki cache' [production]
12:44 <mutante> updating/syncing interwiki cache [production]
09:32 <hashar> swift-container-auditor seems to go down from time to time. Nagios reporting 0 processes at 8:15am and 9:25am UTC (I guess it gets restarted automatically by puppet) [production]
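For reference, the auditor can be checked and bounced with the stock Swift management tool; running it on the affected storage host is an assumption, since the log does not say which host was involved.

    # Sketch only: see whether the container-auditor is running, restart it if not.
    swift-init container-auditor status
    swift-init container-auditor restart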
07:34 <hashar> updating Jenkins copy of integration/jenkins from 0f069c3 to e264d1b. Bring new ant script + update to testswarm fetcher [production]
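A rough sketch of that kind of update, assuming a plain git checkout on gallium; the checkout path below is hypothetical.

    cd /srv/integration/jenkins   # hypothetical path to the deployed copy
    git fetch origin
    git checkout e264d1b          # target commit named in the log entry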
07:09 <Tim> on srv193: ran dpkg --set-selections to revert holds on php5 packages and ran apt-get upgrade [production]
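The hold-revert step generally looks like the sketch below; the package name is illustrative, since the log does not list which php5 packages were held.

    # Mark a held package back to "install", then let apt upgrade it.
    echo "php5-common install" | sudo dpkg --set-selections
    sudo apt-get update && sudo apt-get upgrade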
07:07 <Tim> on srv193: fixing broken PHP packages causing puppet failure, nothing in the server admin log about them so I assume they were installed by accident [production]
06:21 <Tim> deployed Idb6d9a8b and restarting apaches [production]
06:11 <Tim> deployed Id7008681 and restarting apaches [production]
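The "restarting apaches" half of those deploys is a graceful restart of the web servers, sketched below for a single host; the pool-wide fan-out is shown only with an assumed dsh group name.

    # Graceful restart on one app server:
    sudo apache2ctl graceful
    # Across the pool (the group name "apaches" is an assumption):
    dsh -g apaches -- sudo apache2ctl graceful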
05:45 <Tim> reniced apache processes to level 0 [production]
05:04 <Tim> deploying apache nice level change per RT #664 [production]
03:59 <Tim> on mw1: experimenting with renice methods for RT 664 [production]
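The renice itself is plain util-linux; the worker process name apache2 is an assumption.

    # Sketch: bring all running apache workers to nice level 0.
    for pid in $(pgrep apache2); do
        sudo renice -n 0 -p "$pid"
    done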
02:26 <LocalisationUpdate> completed (1.20wmf6) at Wed Jul 4 02:26:39 UTC 2012 [production]
2012-07-03
23:03 <mlitn> synchronized php-1.20wmf6/extensions/ArticleFeedbackv5/ [production]
22:36 <preilly> synchronized php-1.20wmf6/extensions/MobileFrontend 'weekly update' [production]
22:27 <hashar> updating testswarm submitter on gallium [production]
21:01 <mlitn> Finished syncing Wikimedia installation... : [production]
20:37 <mlitn> Started syncing Wikimedia installation... : [production]
20:18 <asher> synchronized wmf-config/db.php 'lowering db32 weight' [production]
19:34 <asher> synchronized wmf-config/db.php 'lowering db36 weight' [production]
19:31 <asher> synchronized wmf-config/db.php 're-add db36, db32 (low weight), es3 (innodb)' [production]
18:11 <RobH> virt1006 mgmt serial not set correctly, fixed [production]
17:51 <RobH> investigating stat1001 power issue [production]
17:22 <RobH> taking fluorine offline to test disks [production]
17:22 <RobH> pulling helium offline for disk testing with fluorine disks [production]
17:10 <RobH> db1047 disk0 rebuild in progress [production]
17:05 <RobH> replacing bad disk in db1047 [production]
16:45 <reedy> synchronized wmf-config/ 'Various config changes' [production]
16:31 <reedy> synchronized docroot/mediawiki/xml/export-0.7.xsd [production]
16:15 <Jeff_Green> silicon gets dist-upgrade & reboot [production]
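Roughly the usual sequence, sketched here for completeness:

    sudo apt-get update
    sudo apt-get dist-upgrade
    sudo reboot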
16:15 <reedy> synchronized wmf-config/InitialiseSettings.php 'wgCheckSerialized is deaded' [production]
03:53 <tstarling> synchronized wmf-config/CommonSettings.php 're-enable API action=purge on commonswiki' [production]
02:27 <LocalisationUpdate> completed (1.20wmf6) at Tue Jul 3 02:27:02 UTC 2012 [production]
2012-07-02
22:33 <binasher> rebooted db36 for kernel upgrade [production]
22:25 <asher> synchronized wmf-config/db.php 'temp pulling db36' [production]
22:03 <brion> fun with routers in tampa, wikis down [production]
21:48 <maplebed> rebooted emery - it's been unresponsive for 3 days. [production]
18:34 <reedy> synchronized wmf-config/CommonSettings.php [production]
18:30 <hashar> set up ignore file in httpd configuration directory [production]
18:23 <reedy> synchronized wmf-config/ 'Enable WikimediaShopLink on enwiki' [production]
18:11 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: 273 wikipedias to 1.20wmf6 [production]
18:03 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: enwiki to 1.20wmf6 [production]
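A hedged sketch of what such a version bump looks like; the wikiversions.dat file name, its one-wiki-per-line format, and the sync-wikiversions wrapper are assumptions about the deployment tooling of the time.

    # Point enwiki at the new branch, then rebuild wikiversions.cdb and push it out.
    sed -i 's|^enwiki .*|enwiki php-1.20wmf6|' wikiversions.dat
    sync-wikiversions 'enwiki to 1.20wmf6'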
15:21 <hashar> synchronized wmf-config/CommonSettings.php '/etc/wikimedia-realm detection https://gerrit.wikimedia.org/r/13888' [production]
15:18 <hashar> synchronized docroot/bits/static-master '([[bugzilla:37245|bug 37245]]) docroot 'static-master' for beta bits' [production]
15:04 <mutante> authdns-update to switch jobs.wm redirect to wikimedia-lb to fix SSL cert mismatch (RT-3071) [production]
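After a change like that, verification is just standard DNS and HTTPS checks, for example:

    # See where jobs.wikimedia.org resolves now and what certificate/redirect it serves.
    dig +short jobs.wikimedia.org
    curl -sI https://jobs.wikimedia.org | head -n 5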
14:55 <mark> Reboot of cr1-sdtpa did not fix the RE packet loss issue... therefore unlikely to be leap second related [production]
14:41 <mark> Rebooting cr1-sdtpa [production]
14:37 <mark> Shutdown PyBal BGP sessions on cr1-sdtpa [production]
14:34 <mark> Shutdown BGP session to 2828 on cr1-sdtpa [production]