2010-08-23 §
08:48 <Tim> ms4 timeout on http, squid serving "cannot forward", will reboot [production]
08:44 <Tim> ms4 not responding to ssh, giving "stub start error" on http, trying serial console, very slow [production]
03:05 <domas> 'ps -ef | grep php-cgi | awk '$3==1 { print $2 }' | xargs kill; rm /tmp/https-ms4-5351d5c9/stub.pid' to recover from ms4 fastcgi death, not sure what are the causes yet [production]
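The 03:05 one-liner above is a standard orphan-reaper: FastCGI workers whose parent stub has died get re-parented to init (PPID 1), so selecting `php-cgi` processes with PPID 1 finds exactly the strays. A sketch of the same pipeline, run here against canned `ps -ef`-style input rather than a live host (the sample rows and the stub pidfile path handling are illustrative):

```shell
# Sample `ps -ef` output: field 2 is the PID, field 3 the PPID.
# Only the first php-cgi row is orphaned (PPID 1); the second has a live parent.
sample='apache  4100     1  0 02:55 ?  00:00:01 /usr/bin/php-cgi
apache  4101  4000  0 02:55 ?  00:00:01 /usr/bin/php-cgi
root    4000     1  0 02:50 ?  00:00:00 webserver7'

# Select PIDs of orphaned php-cgi workers (dry run: print instead of kill).
echo "$sample" | awk '/php-cgi/ && $3 == 1 { print $2 }'

# On the real host the PID list was piped to `xargs kill`, then the stale
# pidfile removed so the FastCGI stub could be restarted:
#   ... | xargs kill; rm /tmp/https-ms4-5351d5c9/stub.pid
```

Folding the `grep php-cgi` into the awk pattern, as above, also avoids the classic pitfall of the grep process matching itself.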
00:50 <midom> synchronized php-1.5/wmf-config/db.php 'oops, wrong cluster' [production]
00:49 <midom> synchronized php-1.5/wmf-config/db.php [production]
2010-08-22 §
20:55 <apergos> should probably debug the ms4 issue but I will sleep soon. yet another restart. [production]
18:25 <apergos> restarted webserver7 on ms4 [production]
2010-08-20 §
05:44 <andrew> synchronized php-1.5/languages/Language.php 'Deploying r71329' [production]
05:43 <Andrew> not deploying r71327, actually deploying r71329 because r71327 does not work on 1.16wmf4 [production]
05:36 <Andrew> deploying r71327 [production]
2010-08-19 §
22:39 <tfinc> synchronized php-1.5/wmf-config/CommonSettings.php 'fixing case on banner names' [production]
21:42 <tfinc> synchronized php-1.5/wmf-config/CommonSettings.php 'Adding new banners and appeal pages' [production]
21:34 <RobH> bug 24664 for mk chapter done [production]
21:30 <robh> ran sync-common-all [production]
21:20 <RobH> pushed live project ko.wikinews.org, no apache or dns changes needed since ko langcode was already in dns [production]
21:18 <robh> synchronized php-1.5/wmf-config/InitialiseSettings.php [production]
21:17 <robh> ran sync-common-all [production]
20:59 <RobH> created new project frr.wikipedia.org, dns, apache, etc. [production]
20:53 <robh> ran sync-common-all [production]
20:40 <mark> Downpreffed AS16265 transit routes to local-pref 90 [production]
20:28 <RobH> pushed dns changes and apache changes for the bookshelf project url, bug # 24872 [production]
20:28 <mark> Turned up AS157 transit on 10G link e1/3 on br1-knams [production]
14:34 <Tim> killed hung convert on all image scalers [production]
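The 20:40 local-pref entry above is the standard BGP traffic-engineering move: routes learned from the less-preferred transit are tagged with a local-preference below the default of 100, so they lose the best-path decision whenever the same prefix is available from another transit. A purely illustrative Cisco-style fragment (the actual router platform and policy names at esams are not in the log; AS14907 is Wikimedia's ASN, and the neighbor address is a documentation placeholder):

```
! Illustrative only — policy name, neighbor address and syntax are assumptions.
route-map TRANSIT-AS16265-IN permit 10
 set local-preference 90
!
router bgp 14907
 neighbor 192.0.2.1 remote-as 16265
 neighbor 192.0.2.1 route-map TRANSIT-AS16265-IN in
```

Because local-preference is evaluated early in BGP best-path selection, this depreferences the AS16265 paths network-wide without withdrawing them, leaving them as fallback if the other transit fails.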
2010-08-18 §
20:03 <RobH> sq57 drive replaced, but RAID didn't work (seems like grub wasn't copied to both drives); leaving offline for now, will investigate later [production]
19:42 <RobH> sq57 set to false in lvs, replacing bad disk. [production]
18:33 <RobH> kicking around db16, trying to fix it [production]
10:37 <mark> Restored VRRP priorities to original state [production]
10:33 <mark> authdns-scenario normal [production]
10:30 <mark> Enabled ve1 on csw1-esams [production]
02:31 <Tim> also edited /etc/gai.conf on fenari to prefer IPv6, to fix ExtensionDistributor [production]
02:28 <Tim> edited /etc/gai.conf on kaulen to avoid broken IPv6 connection to mayflower, so CR will start working again [production]
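The two gai.conf edits above adjust glibc's RFC 3484 address sorting: `getaddrinfo(3)` orders the addresses it returns by a precedence table, and `/etc/gai.conf` overrides that table per host. An illustrative fragment for the kaulen case (preferring IPv4 to route around a broken IPv6 path; this is not the exact file from kaulen or fenari):

```
# /etc/gai.conf — getaddrinfo(3) address-sorting overrides (glibc, RFC 3484).
# Raising the precedence of the IPv4-mapped prefix makes getaddrinfo return
# A-record addresses ahead of AAAA, so clients sidestep a broken IPv6 path.
precedence ::ffff:0:0/96  100
```

Note that once any `precedence` line is present, glibc discards its built-in default table, so in practice the remaining default precedence entries should be restated alongside the override.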
2010-08-17 §
22:57 <mark> Shutdown ve1 on csw1 to force VRRP backup [production]
22:53 <mark> Packet loss, authdns-scenario esams-down [production]
22:48 <mark> authdns-scenario normal [production]
22:43 <mark> Configured all VRRP instances on csw1-esams to have priority 1, to reliably stay in backup mode [production]
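The 22:39–22:43 entries use a standard VRRP trick: priority 1 is the lowest electable value (255 is reserved for the address owner, and 0 signals that a master is releasing the address), so a router configured at priority 1 loses every master election and reliably stays in backup while any peer is alive. A hypothetical Foundry-style fragment (the exact CLI and vrid on csw1-esams are not in the log):

```
! Illustrative only — interface, vrid and syntax are assumptions.
interface ve 1
 ip vrrp vrid 1
  backup priority 1
  activate
```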
22:11 <RobH> dns changed to route traffic to tampa [production]
20:13 <RobH> set srv278 to false in lvs, taking it down for hardware testing per rt#24 [production]
18:06 <rainman-sr> disabling interwiki search on all wikis, not only en.wp, until we figure out what is going on [production]
16:49 <rainman-sr> search11 is fully up with all features, and seems to work fine .. will keep an eye on it [production]
16:43 <RobH> srv230 online [production]
16:41 <rainman-sr> all of search up, still fiddling with search11 to see why it gave strange I/O spikes during the batch2 migration [production]
16:37 <RobH> investigating srv230. [production]
16:32 <RobH> srv230 back online with memory replacement, synced and back in cluster [production]
16:31 <robh> synchronized php-1.5/wmf-config/lucene.php [production]
16:29 <robh> synchronized php-1.5/wmf-config/lucene.php 'Returning all search values to normal, should restore full search functionality.' [production]
16:22 <rainman-sr> bringing up search5, 12, 13-20 [production]
16:21 <RobH> shutting down srv230 to swap out bad memory [production]
15:50 <RobH> search13-search20 relocated to b3-sdtpa. All servers are online, working to bring search back to full deployment. [production]
14:36 <rainman-sr> search5, 12 will also show as offline because they run parts of the services that are temporarily disabled [production]
14:30 <RobH> search13-search20 will show offline during their relocation, approx until 16:00 if all things go well [production]