2010-11-11
16:37 <mark> Powercycled sq59 [production]
16:34 <mark> Powercycled sq57 [production]
15:38 <mark> Removed ex-fedora data on ms2, after backing it up to tridge [production]
10:33 <catrope> synchronized php-1.5/wmf-config/CommonSettings.php 'Bump style version appendix' [production]
10:32 <catrope> synchronized php-1.5/extensions/UsabilityInitiative/Vector/Vector.combined.min.js 'r76511' [production]
09:58 <Ryan_Lane> restarted apache on fenari [production]
04:02 <tfinc> synchronized php-1.5/extensions/ContributionReporting/ContributionReporting.php 'Updating for 2010' [production]
03:50 <tfinc> synchronized php-1.5/wmf-config/CommonSettings.php 'removing test since its in the extension config' [production]
03:46 <tfinc> synchronizing Wikimedia installation... Revision: 76474 [production]
02:30 <atglenn> so another restart of torrus. seriously... [production]
00:43 <domas> what Rob meant was that they went away by themselves, as it was upstream provider issue. [production]
00:39 <RobH> !wikipedia and !wikimedia network issues resolved, all projects should be fine now [production]
00:35 <domas> #network #failwhale #lol [production]
00:31 <RobH> looking into the current slowdown/inaccessibility issues for folks on !Wikipedia and !Wikimedia [production]
00:25 <domas> flapping network in pmtpa [production]
2010-11-10
22:32 <rfaulk> installed "scipy" python package on grosley.wikimedia.org with apt-get - statistical analysis in python [production]
22:23 <atglenn> restarted torrus, it had deadlocked again. is it my imagination or is this happening really often lately? [production]
21:26 <catrope> synchronized php-1.5/wmf-config/CommonSettings.php 'Bump style version appendix' [production]
21:25 <catrope> synchronized php-1.5/extensions/UsabilityInitiative/Vector/Vector.combined.min.js 'r76474' [production]
21:21 <nimishg> synchronized php-1.5/extensions/ContributionReporting/ContributionReporting.i18n.php 'r76472' [production]
21:16 <catrope> synchronized php-1.5/wmf-config/CommonSettings.php 'Bump style version appendix' [production]
21:16 <RoanKattouw> Removed srv124 from mediawiki-installation node group as it's slated to be decommissioned [production]
21:14 <catrope> synchronized php-1.5/extensions/UsabilityInitiative/Vector/Vector.combined.min.js 'r76471' [production]
20:44 <catrope> synchronized php-1.5/wmf-config/CommonSettings.php 'Bump style version appendix' [production]
20:44 <catrope> synchronized php-1.5/extensions/UsabilityInitiative/Vector/Vector.combined.min.js 'r76469' [production]
20:32 <catrope> synchronized php-1.5/wmf-config/CommonSettings.php 'Bump style version appendix' [production]
20:31 <catrope> synchronized php-1.5/extensions/UsabilityInitiative/js/plugins.combined.min.js 'r76467' [production]
19:40 <RobH> singer config restarted, will host download.w.o & dumps.w.o as well as a number of other things that refer to those two entries in dns [production]
19:39 <RobH> changed dns for dumps.wikimedia.org to go to singer instead of dataset1 during its downtime [production]
18:59 <atglenn> someone was polite and didn't name me in the above comment :-P I commented out the script that ships logs to both dammit.lt and dataset1 instead of looking at the script itself [production]
18:58 <domas> unbroke pagecounts shipment (someone broke it and said "yes you can blame me, it was my f*ckup, people should know that") [production]
14:59 <apergos> rebooting dataset1 so we can get web service going over there (can't be restarted in the usual way after kernel panic) [production]
10:38 <RoanKattouw> Published MW 1.16 tarball on noc.wm.o because download.wm.o is still down http://noc.wikimedia.org/mediawiki-1.16.0.tar.gz [production]
06:47 <tfinc> synchronized php-1.5/wmf-config/CommonSettings.php 'Turning cc gateway back on with the sidebar' [production]
05:20 <apergos> stopped rsync of pagecount stats from locke to dataset1 for now til disk/fs issue is resolved [production]
04:59 <apergos> shot all dump processes on dataset1; note a kernel panic in logs from within __destroy_inode, going to reboot and leave rsync of pagecounts and dumps off [production]
00:41 <tfinc> synchronized php-1.5/wmf-config/CommonSettings.php 'Taking outage on cc cluster' [production]
2010-11-09
22:42 <RobH> running puppet on spence to remove all the old apaches that are no longer in any kind of service [production]
22:39 <jeluf> synchronized php-1.5/wmf-config/InitialiseSettings.php '24539 - Transwiki import source for ml.wikisource.org' [production]
22:36 <RobH> didn't log the change I made about 25 minutes ago: switched test.w.o from srv124 to srv193 in squid settings and deployed [production]
22:32 <jeluf> ran sync-common-all [production]
22:27 <jeluf> synchronized closed.dblist [production]
21:32 <robh> synchronized php-1.5/wmf-config/mc.php 'removed srv193 from potential memcached pool as it will shortly become the new test.w.o server' [production]
20:53 <catrope> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Set tenwiki logo to local Wiki.png' [production]
17:26 <robh> synchronized php-1.5/wmf-config/abusefilter.php 'bugzilla#24394' [production]
17:17 <robh> synchronized php-1.5/wmf-config/abusefilter.php [production]
17:15 <RobH> that actually ran 15 minutes ago and was stuck at the end on a broken server; all other hosts had synced [production]
17:15 <robh> synchronized php-1.5/wmf-config/mc.php 'srv230 unresponsive to ssh, needs to reboot, swapped it out for working spare' [production]
15:51 <RobH> removed srv* below srv151 from pybal, left the entry for srv124 as it's test.w.o, even though it's set to false [production]
15:46 <RobH> srv229 puppet was hanging; manually ran apt-get update and reran puppet, now it's happy [production]