2011-08-12
21:10 <maplebed> changing ownership of /usr/local/apache/common-local/php-1.17/cache/l10n from nagios to mwdeploy on all affected srv hosts [production]
21:03 <binasher> deploying new squid frontend.conf - bypass mobile redirector in case of trial opt-in/out pages [production]
20:53 <notpeter> upgrading squid and squid-frontend on amssq51 and amssq52 [production]
20:52 <LocalisationUpdate> completed (1.17) at Fri Aug 12 20:54:15 UTC 2011 [production]
18:44 <maplebed> made ariley an account on rt to submit tickets for email list maintenance [production]
18:24 <JeLuF> DNS: added bugs.mediawiki.org as alias to text.wikimedia.org [production]
17:25 <jeluf> synchronized wmf-config/InitialiseSettings.php '30268 - Point eowp logo to Wiki.png' [production]
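A minimal sketch of what the eowiki logo override amounts to, assuming a direct global assignment; the real change lives in the per-wiki settings array of wmf-config/InitialiseSettings.php, and the URL below is a placeholder rather than the value from bug 30268:
<source lang="php">
<?php
// Hedged sketch only: wmf-config keys per-wiki values by database name ($wgDBname),
// so pointing the Esperanto Wikipedia logo at its local Wiki.png reduces to
// something like this. The URL is a placeholder, not taken from the log.
if ( $wgDBname === 'eowiki' ) {
	$wgLogo = 'http://upload.wikimedia.org/wikipedia/eo/b/bc/Wiki.png'; // placeholder path
}
</source>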
16:25 <mark> synchronized wmf-config/CommonSettings.php 'Raise multicast ttl from 2 to 8' [production]
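The log does not name the setting behind the multicast TTL bump; assuming it refers to MediaWiki's HTCP purge option, the change would look like the sketch below. The TTL is the hop limit on the multicast purge packets sent to the Squids, so raising it lets purges cross more routers.
<source lang="php">
<?php
// Assumption: "multicast ttl" here means $wgHTCPMulticastTTL, the hop limit on
// the multicast HTCP purge packets MediaWiki sends to the Squid caches.
$wgHTCPMulticastTTL = 8; // previously 2; a larger TTL lets purges traverse more router hops
</source>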
16:10 <catrope> synchronized wmf-config/CommonSettings.php 'Set $wgVaryXFPForAPI = true for HTTPS experiment wikis. This splits the API Squid cache between HTTP and HTTPS, fixing cache pollution issues' [production]
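As a rough illustration of that change (the actual wmf-config logic and the list of experiment wikis are not in the log), enabling the flag only for the HTTPS experiment wikis could look like:
<source lang="php">
<?php
// Illustrative sketch, not the deployed diff. $wgVaryXFPForAPI makes api.php
// vary on X-Forwarded-Proto, so the Squids cache HTTP and HTTPS API responses
// separately instead of serving one scheme's cached output to the other.
$httpsExperimentWikis = array( 'testwiki', 'test2wiki' ); // hypothetical list
if ( in_array( $wgDBname, $httpsExperimentWikis ) ) {
	$wgVaryXFPForAPI = true;
}
</source>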
15:08 <mark> Turning off ethernet hw offloading GRO on all lvs servers with Puppet [production]
14:30 <mark> Turned off all forms of hardware segmentation on lvs4, fixing the slow upload problem [production]
14:29 <mark> Turned tcp segment offloading back on on sq51..86 [production]
13:57 <mark> Manually turned off TCP segmentation offloading on sq51..86 [production]
12:59 <catrope> synchronized wmf-config/StartProfiler.php 'Remove upload profiling, hasn't produced any useful data' [production]
12:52 <catrope> synchronized wmf-config/StartProfiler.php 'Profile uploads on officewiki separately, I wanna try something' [production]
11:39 <apergos> reran puppet by hand on spence, sq32 entries in nagios conf files were not recreated, restarted nagios, seems to be running [production]
11:11 <apergos> er... because sq32 is in the decommissioned list but the script to purge resources from nagios is broken right now, which means nagios fails to start [production]
11:11 <apergos> purged sq32 resources and host references from puppet db manually on db9, and from nagios conf files on spence. will run puppet manually on spence shortly [production]
10:52 <Andrew> (screen on hume) [production]
10:52 <Andrew> Running populatePifEditCount.php on all wikis [production]
10:45 <Andrew> Adding pif_edits table to all wikis for personal image filter voter list [production]
10:39 <mark> Enabled cr1-sdtpa:xe-0/0/2; a cross connect has been ordered, expect Nagios to complain [production]
10:24 <apergos> uncommented the monitor_group line in varnish.pp which defines the cache_mobile_eqiad group in puppet (thanks ma rk), will run puppet shortly on spence [production]
08:44 <apergos> revert change to site.pp, try applying to spence [production]
08:29 <apergos> doing repeated manual runs of puppet on spence til we catch up to current config (it is quite out of date) [production]
07:21 <apergos> nagios was failing to start because of unknown host group cache_mobile_eqiad in /etc/nagios/puppet_hosts.cfg; commented out line $nagios_group = "cache_mobile_${site}" in site.pp, waiting for puppet run to complete on spence [production]
02:15 <LocalisationUpdate> completed (1.17) at Fri Aug 12 02:17:38 UTC 2011 [production]
2011-08-11
21:24 <JeLuF> added 'ttf-ubuntu-font-family' to the list of required packages for image scalers in puppet ([[bugzilla:30288|bug 30288]]) [production]
21:07 <JeLuF> virt2 root filesystem has switched to read-only due to a disk failure [production]
21:04 <LocalisationUpdate> completed (1.17) at Thu Aug 11 21:06:09 UTC 2011 [production]
20:24 <binasher> re-pooling mobile2 [production]
20:16 <LocalisationUpdate> failed [production]
18:57 <binasher> depooling mobile2 from lvs for mobile extension opt in proxy conf testing [production]
18:19 <preilly> pushing new mobile frontend changes to production [production]
18:19 <preilly> synchronizing Wikimedia installation... Revision: 94253: [production]
18:12 <mark> Temporarily serving li.wikipedia.org from srv153 (bypassing LVS) as backend on the text squids [production]
18:02 <mark> Test complete, change reverted [production]
17:57 <mark> Temporarily moved squids->apaches LVS traffic from lvs4 to lvs3 for testing [production]
16:48 <Ryan_Lane> upping the nginx upload size to 100m for the https and ipv6 cluster [production]
16:06 <Reedy> SVN and related services should be ok now. High HTTP load causing OOM [production]
15:51 <Reedy> SVN may be unavailable due to issues with Formey [production]
15:32 <catrope> synchronized php/includes/filerepo/LocalFile.php '[[rev:94252|r94252]] by Chad - Try wrapping ss_images update in a transaction' [production]
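The pattern referred to in that revision is the standard MediaWiki explicit-transaction idiom around the site_stats image-count update; the sketch below shows the idiom only, not the actual r94252 diff:
<source lang="php">
<?php
// Sketch of the idiom only; the real change is in includes/filerepo/LocalFile.php.
$dbw = wfGetDB( DB_MASTER );
$dbw->begin();
$dbw->update(
	'site_stats',
	array( 'ss_images = ss_images + 1' ), // raw SET fragment: increment the image count
	array( 'ss_row_id' => 1 ),
	__METHOD__
);
// ...the rest of the upload bookkeeping would run inside the same transaction...
$dbw->commit();
</source>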
15:17 <RoanKattouw> All file uploads were returning HTTP 500 errors between 15:07 and 15:16 UTC, my apologies. It's fixed now [production]
15:16 <catrope> synchronized wmf-config/StartProfiler.php 'Fix it for real this time' [production]
15:15 <catrope> synchronized wmf-config/StartProfiler.php 'Unbreak uploads, oops' [production]
15:07 <catrope> synchronized wmf-config/StartProfiler.php 'Make upload profiling 1:1 instead of 1:50' [production]
15:01 <mark> Set maximum TCP window size on nas1-a [production]
15:00 <RoanKattouw> ...and that worked. Lesson of the day: it seems you can't use dashes in your profile IDs [production]
14:59 <catrope> synchronized wmf-config/StartProfiler.php 'Remove dashes from profile IDs just because I'm paranoid' [production]
14:56 <catrope> synchronized wmf-config/StartProfiler.php 'Add profiling groups upload-commons and upload-other, based on count($_FILES)' [production]
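Taken together, the StartProfiler.php entries above (bucket uploads by count($_FILES), sample them 1:1, keep profile IDs dash-free) amount to a selector roughly like the following sketch; the real wmf-config file and its profiler wiring are not reproduced in the log, and hostnames and IDs here are illustrative.
<source lang="php">
<?php
// Hedged sketch of the selection logic only.
if ( count( $_FILES ) ) {
	// Upload request: profile it 1:1 (every request) rather than sampling 1:50.
	// Profile IDs avoid dashes, which the 15:00/14:59 entries found break the profiler.
	$profileId = ( isset( $_SERVER['HTTP_HOST'] ) && $_SERVER['HTTP_HOST'] === 'commons.wikimedia.org' )
		? 'uploadcommons'
		: 'uploadother';
	// ...hand $profileId to the UDP profiler that wmf-config sets up elsewhere...
}
</source>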