2013-01-22
17:40 <asher> synchronized wmf-config/db-pmtpa.php 'setting pmtpa to readonly' [production]
17:14 <notpeter> dns svn repo on sockpuppet has changes staged and checked in, not yet deployed on dobson [production]
17:05 <notpeter> staging cname switch for pmtpa/eqiad dbs on sockpuppet [production]
15:22 <mark> Changed AS14907->AS43821 routing [production]
12:26 <^demon> restarted ircecho on manganese [production]
10:30 <Tim> also replaced /etc/sudoers on fenari and hume. Hume needs a puppet change which will follow shortly [production]
10:18 <hashar> gallium : restarted puppet [production]
10:14 <hashar> jenkins updated all plugins and restarting [production]
10:09 <Tim> on searchidx2: replaced /etc/sudoers with the distro default so that /etc/sudoers.d/* from puppet can take effect [production]
06:48 <Tim> on eqiad upload varnishes: purged /wikipedia/it/b/bc/Wiki.png at user request. varnishhtcpd appears to be totally broken. [production]
02:48 <LocalisationUpdate> completed (1.21wmf8) at Tue Jan 22 02:48:37 UTC 2013 [production]
02:26 <LocalisationUpdate> completed (1.21wmf7) at Tue Jan 22 02:26:14 UTC 2013 [production]
2013-01-21
23:11 <Tim> on sockpuppet: fixed puppet checkout, switching branch from mikepatch1 to production, and then did fetch&&rebase for good measure [production]
22:52 <Ryan_Lane> adding virt9-11 entries in dns [production]
22:49 <reedy> synchronized wmf-config/CommonSettings.php [production]
20:22 <mark> Rerouted AS43821->AS14907 traffic [production]
19:41 <mark> Restarting knsq* upload frontends manually [production]
19:34 <paravoid> deploying squid config for upload's /monitoring/ [production]
19:14 <mark> Restarting amssq* upload frontends in a slow loop [production]
19:11 <mark> Restarted oversized frontend on amssq50 [production]
18:55 <paravoid> starting knsq19 backend squid [production]
18:50 <mark> Restarted knsq16 backend minus two disks [production]
18:40 <mark> Started amssq56 squid instances [production]
18:38 <mark> Started knsq16 minus one disk [production]
18:37 <mark> Started knsq18 minus one disk [production]
18:32 <mark> power cycled amssq56 [production]
18:28 <mark> Rebooting knsq24 [production]
18:05 <mark> Rerouted AS43821->AS14907 traffic [production]
17:53 <paravoid> stopping knsq19 backend squid [production]
17:46 <paravoid> restarting knsq19 backend squid [production]
17:35 <paravoid> restarting pybal on lvs1002 [production]
17:35 <cmjohnson1> db1038 swapping bad disk (slot 2) with new disk [production]
17:27 <mark> Rerouted AS14907->AS43821 traffic [production]
16:36 <paravoid> depooling, restarting and repooling ms-fe2/3/4 one by one [production]
16:30 <paravoid> repooling ms-fe1 [production]
16:29 <mark> Rerouted AS43821->AS14907 traffic [production]
15:48 <reedy> synchronized php-1.21wmf8/includes/EditPage.php [production]
15:45 <reedy> synchronized php-1.21wmf7/includes/EditPage.php [production]
15:32 <paravoid> depooling ms-fe1 for testing [production]
14:17 <hashar> gallium: manually installed pyflakes {{gerrit|44974}} [production]
11:10 <apergos> nagios was dead over the weekend (config broken), fixed in puppet and on spence, now back in action [production]
09:57 <hashar> relaying Ryan: he restarted ldap on virt0 (was hung after server restart). nslcd was properly falling back to virt1000 but ldap was stuck there too. DNS got restarted. [production]
09:52 <hashar> jenkins: jobs refresh completed. [production]
08:14 <hashar> jenkins: updating all Jenkins jobs based on d31c92e of integration/jenkins-job-builder-config.git [production]
05:51 <Tim> on marmontel: removed MW-specific packages php5-wmerrors, php-luasandbox, php-wikidiff2 [production]
02:50 <LocalisationUpdate> completed (1.21wmf8) at Mon Jan 21 02:50:20 UTC 2013 [production]
02:27 <LocalisationUpdate> completed (1.21wmf7) at Mon Jan 21 02:26:59 UTC 2013 [production]
2013-01-20
23:56 <tstarling> Finished syncing Wikimedia installation... [production]
23:45 <tstarling> Started syncing Wikimedia installation... [production]
23:40 <Ryan_Lane> rebooting virt0 to determine which dimm is bad [production]