2010-08-23
11:36 <mark> Removing all the 2009-08 daily zfs snapshots on ms4 [production]
11:33 <mark> Removed oldest daily thumbs zfs snapshot on ms4 [production]
10:43 <Tim> on ms4: restarting webserver7 with fcgi re-enabled, reduced thread pool count to 4 [production]
10:42 <mark> Added swap2 to /etc/vfstab on ms4 [production]
10:36 <mark> zfs create -V 4gb rpool/swap2; swap -a /dev/zvol/dsk/rpool/swap2 (on ms4) [production]
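For reference, the swap addition recorded above amounts to roughly the following sequence; the commands are from the log entries, while the vfstab line is an assumption based on standard Solaris syntax rather than a copy of the actual entry on ms4:
    # create a 4 GB ZFS volume and add it as a swap device
    zfs create -V 4gb rpool/swap2
    swap -a /dev/zvol/dsk/rpool/swap2
    # assumed /etc/vfstab entry to keep the swap device across reboots:
    /dev/zvol/dsk/rpool/swap2  -  -  swap  -  no  -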
10:11 <Tim> trying to start webserver7 on ms4, will see if it crashes it again [production]
09:56 <mark> svcadm disable puppetd on ms4 [production]
09:11 <Tim> ms4 back up, after some mucking around with /etc/vfstab [production]
08:48 <Tim> ms4 timeout on http, squid serving "cannot forward", will reboot [production]
08:44 <Tim> ms4 not responding to ssh, giving "stub start error" on http, trying serial console, very slow [production]
03:05 <domas> 'ps -ef | grep php-cgi | awk '$3==1 { print $2 }' | xargs kill; rm /tmp/https-ms4-5351d5c9/stub.pid' to recover from ms4 fastcgi death, not sure what the causes are yet [production]
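An annotated sketch of that recovery one-liner (the stub.pid path is specific to this webserver7 instance on ms4):
    # kill php-cgi workers whose parent is init (PPID == 1), i.e. orphaned FastCGI children
    ps -ef | grep php-cgi | awk '$3==1 { print $2 }' | xargs kill
    # remove the stale FastCGI stub pid file so webserver7 can respawn the stub
    rm /tmp/https-ms4-5351d5c9/stub.pid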
00:50 <midom> synchronized php-1.5/wmf-config/db.php 'oops, wrong cluster' [production]
00:49 <midom> synchronized php-1.5/wmf-config/db.php [production]
2010-08-22
20:55 <apergos> should probably debug the ms4 issue, but I will sleep soon. Yet another restart. [production]
18:25 <apergos> restarted webserver7 on ms4 [production]
2010-08-20
05:44 <andrew> synchronized php-1.5/languages/Language.php 'Deploying r71329' [production]
05:43 <Andrew> not deploying r71327, actually deploying r71329 because r71327 does not work on 1.16wmf4 [production]
05:36 <Andrew> deploying r71327 [production]
2010-08-19
22:39 <tfinc> synchronized php-1.5/wmf-config/CommonSettings.php 'fixing case on banner names' [production]
21:42 <tfinc> synchronized php-1.5/wmf-config/CommonSettings.php 'Adding new banners and appeal pages' [production]
21:34 <RobH> bug 24664 for mk chapter done [production]
21:30 <robh> ran sync-common-all [production]
21:20 <RobH> pushed live project ko.wikinews.org, no apache or dns changes needed since ko langcode was already in dns [production]
21:18 <robh> synchronized php-1.5/wmf-config/InitialiseSettings.php [production]
21:17 <robh> ran sync-common-all [production]
20:59 <RobH> created new project frr.wikipedia.org, dns, apache, etc. [production]
20:53 <robh> ran sync-common-all [production]
20:40 <mark> Downpreffed AS16265 transit routes to local-pref 90 [production]
20:28 <RobH> pushed dns changes and apache changes for the bookshelf project url, bug # 24872 [production]
20:28 <mark> Turned up AS157 transit on 10G link e1/3 on br1-knams [production]
14:34 <Tim> killed hung convert on all image scalers [production]
2010-08-18
20:03 <RobH> sq57 drive replaced, but RAID didn't work (seems like GRUB wasn't copied to both drives); leaving offline for now, will investigate later [production]
19:42 <RobH> sq57 set to false in lvs, replacing bad disk. [production]
18:33 <RobH> kicking around db16, trying to fix it [production]
10:37 <mark> Restored VRRP priorities to original state [production]
10:33 <mark> authdns-scenario normal [production]
10:30 <mark> Enabled ve1 on csw1-esams [production]
02:31 <Tim> also edited /etc/gai.conf on fenari to prefer IPv6, to fix ExtensionDistributor [production]
02:28 <Tim> edited /etc/gai.conf on kaulen to avoid broken IPv6 connection to mayflower, so CR will start working again [production]
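The gai.conf edits above tune getaddrinfo() address selection (RFC 3484 rules). A hedged sketch of the kind of line involved for the kaulen case, where IPv4 should win over a broken IPv6 path; the exact values used on fenari and kaulen are not recorded here:
    # /etc/gai.conf: give IPv4-mapped addresses the highest precedence so
    # getaddrinfo() returns IPv4 destinations before IPv6 ones
    precedence ::ffff:0:0/96  100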
2010-08-17
22:57 <mark> Shutdown ve1 on csw1 to force VRRP backup [production]
22:53 <mark> Packet loss, authdns-scenario esams-down [production]
22:48 <mark> authdns-scenario normal [production]
22:43 <mark> Configured all VRRP instances on csw1-esams to have priority 1, to reliably stay in backup mode [production]
22:11 <RobH> dns changed to route traffic to tampa [production]
20:13 <RobH> set srv278 to false in lvs, taking it down for hardware testing per rt#24 [production]
18:06 <rainman-sr> disabling interwiki search on all wikis, not only en.wp, until we figure out what is going on [production]
16:49 <rainman-sr> search11 is fully up with all features, and seems to work fine; will keep an eye on it [production]
16:43 <RobH> srv230 online [production]
16:41 <rainman-sr> all of search is up; still fiddling with search11 to see why it gave strange I/O spikes during the batch2 migration [production]
16:37 <RobH> investigating srv230. [production]