2012-01-09 §
17:16 <jeremyb> the time is now 17:19:30 UTC [production]
02:01 <LocalisationUpdate> completed (1.18) at Mon Jan 9 02:04:47 UTC 2012 [production]
2012-01-08 §
23:20 <Reedy> For some reason cp1001-1042 weren't listed in CommonSettings.php XFF, but (at least) cp1042 was in service, meaning edits were attributed to it [production]
23:18 <reedy> synchronized wmf-config/CommonSettings.php 'Add cp1001-cp1041' [production]
23:10 <reedy> synchronized wmf-config/CommonSettings.php 'Add cp1042 to XFF' [production]
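[Annotation] The two syncs above add the cp10xx cache hosts to the trusted X-Forwarded-For list in CommonSettings.php. The reason a missing entry misattributes edits is the standard XFF resolution rule: walk the chain from the right, skip trusted proxies, and take the first untrusted hop as the client. If the edge cache (e.g. cp1042) is not in the trusted set, that first untrusted hop is the cache itself. A minimal sketch of that rule (function name and IPs are illustrative, not MediaWiki's actual code):

```python
def resolve_client_ip(xff_chain, trusted_proxies):
    """Return the client IP from an X-Forwarded-For chain.

    xff_chain is ordered client-first, nearest-proxy-last. Walk it
    right to left, skipping trusted proxies; the first untrusted
    hop is taken as the client.
    """
    for ip in reversed(xff_chain):
        if ip not in trusted_proxies:
            return ip
    # Whole chain was trusted; fall back to the leftmost (client) entry.
    return xff_chain[0]

# With the cache host trusted, the real client is recovered;
# without it, edits get attributed to the cache's own IP.
print(resolve_client_ip(["203.0.113.7", "10.64.0.42"], {"10.64.0.42"}))
print(resolve_client_ip(["203.0.113.7", "10.64.0.42"], set()))
```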
21:47 <rainman-sr> killed broken search indexer thread on searchidx1 (please note searchidx1 is no longer in use!), and restarted incremental indexing on searchidx2 which was somehow broken [production]
21:43 <rainman-sr> someone started incremental updating on searchidx1 ??!! [production]
14:54 <apergos> removed old puppet lockfile on brewster, ran by hand [production]
14:47 <apergos> cleared out some very large squid logs on brewster, (basically all of them) plus lighty logs, disk was full. restarted squid manually [production]
02:01 <LocalisationUpdate> completed (1.18) at Sun Jan 8 02:05:11 UTC 2012 [production]
00:43 <tfinc> killing long running show_bug.cgi procs on kaulen [production]
2012-01-07 §
22:30 <Reedy> Users reporting slowness while editing. dberror.log shows a few mysql errors for enwiki master and slaves. Few errors on other wikis, mainly enwiki [production]
02:01 <LocalisationUpdate> completed (1.18) at Sat Jan 7 02:05:09 UTC 2012 [production]
2012-01-06 §
23:22 <RobH> working rt1549 lvs1003 may flap, it is presently not in service due to possible hdd failure [production]
22:55 <binasher> db22 is back in s4 [production]
22:55 <asher> synchronized wmf-config/db.php 'adding db22 back to s4' [production]
21:41 <RobH> db1029 powering back up with ssd testing hardware installed [production]
21:35 <RobH> db1029 coming down for ssd testing [production]
21:26 <RobH> cp1014 and cp1019 hdd controller cables replaced (removed for testing controllers), both can be used normally [production]
21:19 <binasher> restoring db22 from a live hotbackup of db1038 [production]
21:18 <RobH> es1002 back ready for service use per #2220: replace original RAID card in es1002 [production]
21:05 <binasher> putting db51 into production as an s4 slave [production]
21:05 <asher> synchronized wmf-config/db.php 'adding db51 as an s4 slave' [production]
20:57 <binasher> started slaving db51 off of db31 [production]
20:21 <RobH> rt2226 - redeploy db22 for asher [production]
20:19 <RobH> db22 reinstalled and booting into OS. No puppet runs yet, now it's Asher's problem ;] [production]
20:04 <RobH> db22 reinstalling [production]
19:24 <binasher> started innodb hot backup of db1038 to db51 [production]
18:43 <maplebed> s4 database rotation complete. outage duration 36 minutes. [production]
18:37 <maplebed> pushed out new db.php setting s4 to read-write [production]
18:37 <ben> synchronized wmf-config/db.php [production]
18:35 <maplebed> db31 made read-write as the new master for s4 [production]
18:31 <maplebed> old master for s4 log file db22-bin.000106 log pos 631618956 [production]
18:30 <maplebed> new master for s4: db31, log file db31-bin.000213 log pos is 205612709 [production]
18:24 <asher> synchronized wmf-config/db.php 'setting s4 to read only, preparing to make db31 master' [production]
18:22 <Reedy> Commons having db issues, db22 (s4 master) has a disk issue [production]
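[Annotation] The 18:22–18:43 entries above are a textbook MySQL master rotation for s4: set the shard read-only, record binlog coordinates on old and new master, repoint replication, then re-enable writes. A sketch of that ordering (hostnames from the log; the SQL placeholders `<file>`/`<pos>` stand for the coordinates logged at 18:30, and a real runbook would also wait for slaves to catch up):

```python
def plan_master_rotation(old_master, new_master, slaves):
    """Return the ordered (host, statement) steps for rotating a
    replica-set master, mirroring the s4 rotation (db22 -> db31).
    Illustration only, not the operators' actual tooling."""
    steps = [(old_master, "SET GLOBAL read_only = 1")]
    # Record the new master's binlog file/position before repointing.
    steps.append((new_master, "SHOW MASTER STATUS"))
    for slave in slaves:
        # <file>/<pos> come from the SHOW MASTER STATUS output above.
        steps.append((slave,
                      f"CHANGE MASTER TO MASTER_HOST='{new_master}', "
                      "MASTER_LOG_FILE=<file>, MASTER_LOG_POS=<pos>"))
    steps.append((new_master, "SET GLOBAL read_only = 0"))
    return steps
```

Making the shard writable again only after every slave is repointed is what bounds the outage to the 36 minutes noted at 18:43.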
16:02 <apergos> restarted lighttpd on dataset2 [production]
16:01 <Reedy> HTTP server (lighttpd?) seems to be down on dataset2 [production]
15:46 <RoanKattouw> Removing gs_* files in /tmp on srv220 that are >30 min old [production]
15:44 <reedy> synchronized wmf-config/InitialiseSettings.php 'Bug 33556 - ArticleFeedback settings on Chinese wikipedia' [production]
15:43 <RoanKattouw> Removed /tmp/mw-cache-1.17 and /tmp/mw-cache-1.17-test on srv220 [production]
15:41 <Reedy> srv220 / is at 100% usage [production]
15:41 <reedy> synchronized wmf-config/InitialiseSettings.php 'Bug 33556 - ArticleFeedback settings on Chinese wikipedia' [production]
14:34 <mutante> saw the log about cp1043/44 being deliberately left broken, but requirement in varnish.pp also broke others, fixed on sq67,68,69 (gerrit change 1802) [production]
02:01 <LocalisationUpdate> completed (1.18) at Fri Jan 6 02:05:01 UTC 2012 [production]
01:25 <binasher> puppet is being deliberately left broken on cp1043 and 1044 until tomorrow [production]
01:23 <binasher> backend varnish instance on cp1042 running 3.0.2 is in production for 1/3 of mobile requests [production]
2012-01-05 §
22:15 <preilly> small fix for iPhone vary support [production]
22:15 <preilly> synchronized php-1.18/extensions/MobileFrontend/MobileFrontend.php [production]
21:39 <Ryan_Lane> rebooting virt1 [production]