2010-10-04
19:52 <catrope> synchronized php-1.5/wmf-config/CommonSettings.php 'Bump style version appendix' [production]
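A style version bump is a cache-buster: once the version string changes, clients re-fetch the site CSS and JavaScript. A minimal sketch of what such a change in CommonSettings.php might look like ($wgStyleVersion is the standard MediaWiki setting; the exact appendix format used in wmf-config is an assumption):

    // $wgStyleVersion normally comes from MediaWiki's DefaultSettings.php;
    // shown here with an illustrative value.
    $wgStyleVersion = '301';
    // Append a site-local suffix ("appendix") so cached stylesheets and
    // scripts are re-requested after this deploy.
    $wgStyleVersion .= '-42';   // hypothetical appendix value, bumped per deploy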
19:52 <catrope> synchronized php-1.5/extensions/ArticleAssessmentPilot/js/ArticleAssessment.combined.min.js 'r74268' [production]
19:51 <catrope> synchronized php-1.5/extensions/UsabilityInitiative/PrefSwitch/SpecialPrefSwitch.php 'r74268' [production]
11:57 <mark> Stopped PyBal on amslvs4 as a test [production]
11:47 <mark> synchronized php-1.5/wmf-config/mc.php 'Replace srv87' [production]
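mc.php carries the memcached server pool for the app servers; replacing a failed box means substituting a spare in the same slot so keys on the remaining slots keep hashing to the same servers. A sketch under that assumption (all hostnames other than srv87 are hypothetical, as is the port):

    // Memcached pool (illustrative excerpt). Keep the failed host's slot
    // position and substitute a spare, leaving the other slots untouched.
    $wgMemCachedServers = array(
        'srv86:11000',   // placeholder neighbour
        'srv90:11000',   // hypothetical replacement for the failed srv87:11000
        'srv88:11000',   // placeholder neighbour
    );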
10:53 <mark> Setup BGP state monitoring for csw5-pmtpa and csw1-sdtpa as well [production]
10:48 <mark> Setup BGP state monitoring for csw1-esams and csw2-esams [production]
10:36 <mark> Restarted Apache on srv223 [production]
10:24 <mark> Fixed Nagios by removing db34 from the listed hosts section in conf.php [production]
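The Nagios configuration is generated from a PHP config, and a stale host entry can break generation for everything else. A rough sketch only, since the real structure of conf.php is an assumption (the neighbouring hostnames are placeholders):

    // Hosts handed to the Nagios config generator (assumed structure).
    $listed_hosts = array(
        'db33',          // placeholder
        // 'db34',       // removed: stale entry was breaking Nagios
        'db35',          // placeholder
    );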
09:47 <mark> Downpreffed AS13030 transit to local-pref 90 on br1-knams [production]
2010-10-03
19:25 <catrope> synchronized php-1.5/wmf-config/checkers.php [production]
18:59 <catrope> synchronized php-1.5/wmf-config/checkers.php 'Block search abuse reported by Robert' [production]
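checkers.php acts as an early request filter in wmf-config, so abusive search traffic can be refused before the wikis do any real work. A hedged sketch of the general shape only, since the file's actual contents are not shown here (the address and the condition are placeholders):

    // Assumed shape of an early block: refuse abusive search traffic with a
    // 403 before the request reaches MediaWiki proper.
    if ( isset( $_GET['search'] ) &&
        $_SERVER['REMOTE_ADDR'] === '192.0.2.10' )   // placeholder (TEST-NET) address
    {
        header( 'HTTP/1.1 403 Forbidden' );
        echo "Request blocked.\n";
        exit;
    }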
17:43 <mark> Restarted Apache on srv219-221, 224 [production]
17:38 <mark> Stopped IPVS backup sync daemon on amslvs3 [production]
17:37 <mark> Rebooting amslvs1 [production]
17:37 <midom> synchronized php-1.5/wmf-config/db.php [production]
17:32 <mark> Stopped PyBal on amslvs1 [production]
17:30 <mark> Stopped IPVS backup sync daemon on amslvs4 [production]
17:27 <mark> Rebooting amslvs2 [production]
17:19 <mark> Stopped PyBal on amslvs2 [production]
17:14 <mark> Drive failed in thistle [production]
17:09 <mark> Started temporary LVS state syncing between amslvs1->amslvs3 and amslvs2->amslvs4, preparing for reboot of amslvs1-2 [production]
16:51 <mark> Rebooted (backup) LVS servers amslvs3 and amslvs4 [production]
16:36 <midom> synchronized php-1.5/wmf-config/db.php 'adding db28 to s5' [production]
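db.php describes the load balancer's section map with per-server read weights, so adding db28 to s5 amounts to a new entry under that section. A sketch of the general shape (the neighbouring hosts and the weights are illustrative, not the values actually deployed):

    // Load balancer section map (illustrative excerpt). The first host in a
    // section is the master; the others take weighted read traffic.
    $wgLBFactoryConf['sectionLoads']['s5'] = array(
        'db35' => 0,     // master (hypothetical host, weight 0 = no general reads)
        'db45' => 100,   // existing slave (hypothetical)
        'db28' => 100,   // newly added slave
    );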
16:20 <mark> Selectively updated pybal/bgp.py on lvs2-4 and restarted PyBal [production]
16:13 <mark> Upgraded PyBal to r0.1+74215 on amslvs1-4 [production]
14:27 <midom> synchronized php-1.5/wmf-config/db.php [production]
14:12 <domas> db22 was the first Wikimedia slave initialized using xtrabackup, hehe [production]
14:08 <midom> synchronized php-1.5/wmf-config/db.php 'adding db22 as commons 5.1 slave' [production]
14:06 <mark> Restored nameserver order for the mobile servers [production]
13:36 <mark> lvs3 service IPs were unreachable from outside the subnet for some reason. Restarting PyBal (and therefore the BGP session to the router) fixed it [production]
13:22 <mark> Started pdns recursor on dobson again [production]
13:19 <mark> Once again stopped pdns-recursor on dobson [production]
13:18 <mark> Restarted Apache on the mobile servers one by one, to put the new resolv.conf in effect [production]
13:10 <mark> Swapped nameservers for the mobile servers, so they don't crash the mobile site while I reinstall the primary recursor [production]
13:02 <catrope> synchronized php-1.5/wmf-config/checkers.php 'Log no-UA requests for exempt IPs to /h/w/l/nouaexempt.log' [production]
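This points at a pre-request check in checkers.php: requests arriving with no User-Agent from IPs on the exemption list get written to the given log file so the exemptions can be reviewed later. A sketch of the general shape only (the exempt addresses are placeholders; the log path is the one given in the entry):

    // Assumed sketch: record no-User-Agent requests from exempt IPs.
    $exemptIPs = array( '192.0.2.20', '192.0.2.21' );   // placeholder addresses
    if ( empty( $_SERVER['HTTP_USER_AGENT'] ) &&
        in_array( $_SERVER['REMOTE_ADDR'], $exemptIPs ) )
    {
        // message type 3 appends the line to the given file
        error_log( date( 'c' ) . ' ' . $_SERVER['REMOTE_ADDR'] . ' ' .
            $_SERVER['REQUEST_URI'] . "\n", 3, '/h/w/l/nouaexempt.log' );
    }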
12:45 <mark> Mobile site broke with the primary DNS recursor missing; restarted it [production]
12:44 <mark> Stopped DNS recursor on dobson again [production]
12:20 <mark> Installed DNS recursors on all LVS servers to reduce SPOFfiness [production]
11:46 <domas> doing some DB maintenance [production]
11:44 <mark> Installed nscd on lvs4 and restarted PyBal [production]
11:30 <mark> Restarted PyBal on lvs4 [production]
11:25 <catrope> synchronized php-1.5/wmf-config/CommonSettings.php 'Bump style version appendix' [production]
11:21 <mark> Shutdown primary pmtpa DNS recursor (dobson) [production]
11:20 <catrope> synchronized php-1.5/skins/common/wikibits.js 'r74198' [production]
2010-10-02
17:01 <apergos> we were hoping that a reboot of dataset1 would clear up the ipv6 + lighty issue that it had after the OS upgrade; no dice. this needs further investigation. ipv6 left disabled in lighty conf for now. [production]
16:43 <RobH> dataset1 relocation and drive work done, system is back online [production]
16:20 <RobH> dataset1 moved and boots back online; still working on replacing its degraded disk [production]
15:55 <RobH> rebooted scs-a1-sdtpa since its web interface isn't accessible; not sure if it's the device or the network [production]
15:06 <RobH> dataset1 downtime window starts now [production]