2011-01-12
22:13 <RobH> sq42-sq45 down for reinstallation to lucid [production]
20:43 <RobH> sq41 coming down for reinstallation to lucid [production]
18:42 <apergos> that file needs to be left there for a while apparently (so say the Google instructions), so please don't just toss it yet [production]
18:40 <apergos> this is about domain verification for Google Storage. [production]
18:39 <ariel> synchronized docroot/www.wikimedia.org/google126853a33948578b.html [production]
17:48 <RobH> all text squids in pmtpa/sdtpa have been upgraded to lucid [production]
17:48 <RobH> sq75-sq78 back in service, lucid [production]
17:45 <mark> synchronized php-1.5/wmf-config/db.php 'Remove db15 from rotation' [production]
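For context, a hedged sketch of what pulling a slave out of rotation in wmf-config/db.php might look like; the section name, hosts, and load weights below are assumptions, not the real values:
    'sectionLoads' => array(
        's2' => array(
            'db30' => 0,        // master (illustrative)
            'db13' => 200,
            // 'db15' => 200,   // removed from rotation
        ),
    ),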
17:22 <RobH> sq75-sq78 coming down for reinstall to lucid [production]
17:19 <RobH> sq71-sq74 reinstalled to lucid and pushed back into service [production]
16:39 <mark_> Increased CARP weight of amssq31 to 30 [production]
16:33 <mark_> Powercycled amssq38 [production]
16:30 <RobH> sq71-sq74 coming down for reinstallation to lucid [production]
16:27 <RobH> sq65-sq66 reinstalled to lucid and online [production]
16:20 <mark_> Lowered CARP weight of amssq* non-SSD text squids from 10 to 8 to relieve disk and memory pressure [production]
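For reference, CARP weights like these end up on the frontends' cache_peer lines; a rough sketch, with host names taken from the entries above but ports and options assumed rather than copied from the live config:
    cache_peer amssq31.esams.wikimedia.org parent 3128 0 carp weight=30 no-query
    cache_peer amssq38.esams.wikimedia.org parent 3128 0 carp weight=8 no-query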
16:20 <apergos> running cleanup on index.html and md5sums for the XML dump jobs that recorded bogus "recombine" jobs that weren't actually run [production]
16:18 <mark_> Stopped old puppet instance and restarted squid-frontend on amssq37 [production]
16:16 <RobH> amssq31 upgraded to lucid and back in service [production]
16:05 <RobH> both srv182 and srv183 were not responsive to serial console, rebooted both. [production]
15:42 <RobH> sq65 & sq66 coming down for reinstallation to lucid [production]
15:40 <RobH> sq62-sq64 installed and online lucid [production]
15:31 <RobH> reinstalling amssq31 [production]
15:11 <RobH> sq62-sq64 down for reinstall [production]
15:06 <RobH> sq59-sq61 online as lucid hosts [production]
14:59 <RobH> updated wmf repo, copying package for squid from karmic to lucid as well (seems to have been lost from other changes, as frontend was still there) [production]
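Assuming the wmf apt repository is managed with reprepro, the copy described here would look something like the following; the basedir and distribution names are guesses:
    reprepro -b /srv/wikimedia copy lucid-wikimedia karmic-wikimedia squid
    reprepro -b /srv/wikimedia list lucid-wikimedia squid squid-frontend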
14:02 <RobH> sq59-sq61 offline for reinstall [production]
12:03 <mark_> Enabled multicast snooping on csw1-sdtpa [production]
2011-01-11
23:39 <Ryan_Lane> adding cron on nova-controller.tesla to svn up /wiki hourly [production]
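The cron entry itself isn't quoted in the log; an hourly svn update of /wiki dropped into /etc/cron.d might look roughly like this (file name is hypothetical):
    # /etc/cron.d/svn-up-wiki
    0 * * * * root cd /wiki && svn up -q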
22:27 <RobH> have to reinstall the squids, wrong version written into lease [production]
22:24 <awjr> archived hudson build files for 'donations queue consume' and emptied builds directory on grosley to allow the jobs to continue running (donations queue consume job was failing due to hitting max # of files in a dir for ext3 fs) [production]
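A rough sketch of the kind of cleanup described, with the hudson home and job directory names assumed rather than known:
    cd /var/lib/hudson/jobs/donations_queue_consume
    tar czf /root/donations-builds-$(date +%F).tar.gz builds
    find builds -mindepth 1 -maxdepth 1 -exec rm -rf {} +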
22:05 <RobH> correction, sq65, sq66, & sq71 [production]
22:04 <RobH> sq62-sq64 back online, sq65-sq67 coming down for reinstall [production]
21:48 <Ryan_Lane> adding exim::simple-mail-sender to owa1-3 [production]
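In puppet terms that change is roughly the following; the node regex and manifest layout are assumptions about the manifests of the time:
    node /^owa[1-3]\.wikimedia\.org$/ {
        include exim::simple-mail-sender
    }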
21:33 <RobH> sq59-sq61 in service, sq62-sq64 reinstalling [production]
21:27 <Ryan_Lane> powercycling amssq31 [production]
20:52 <RobH> sq59-61 reinstalled and online, pooled, partially up [production]
20:42 <rainman-sr> enwiki.spell index somehow got corrupt, investigating and rebuilding it now on searchidx1 [production]
18:57 <RobH> sq59-sq61 coming back offline, bad partitioning in automated install, need to update squid configuration for these hosts [production]
18:47 <RobH> sq59 having reinstall issues, skipping it and moving on [production]
18:36 <RobH> sq61 reinstalled and back online [production]
18:30 <RobH> sq60 reinstalled, back in service [production]
18:03 <RobH> sq59-sq61 depooled for upgrade [production]
16:15 <RobH> sync-docroot run to push updated tenwiki favicon [production]
14:27 <mark_> Depooled amssq31 and amssq32 for SSD install [production]
2011-01-10
21:01 <Ryan_Lane> patching python-nova on nova-compute*.tesla (see bug #lp700015) [production]
20:59 <Ryan_Lane> err make that #lp681164 for ldap driver [production]
20:59 <Ryan_Lane> patching ldap driver on nova-* (see bug #lp681030) [production]
20:57 <Ryan_Lane> patching ec2 api on nova-controller.tesla (see bug #lp701216) [production]
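The log doesn't show how these patches were applied; doing it by hand against the installed package would look something like this, with the path, strip level, and service name all assumed:
    cd /usr/share/pyshared/nova
    sudo patch -p1 --dry-run < /tmp/lp701216.patch   # check it applies cleanly
    sudo patch -p1 < /tmp/lp701216.patch
    sudo restart nova-api                            # upstart job name assumed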
19:56 <RobH> updated dns with virt1-virt4 mgmt info [production]
16:10 <RobH> singer had crashed; investigating why it suddenly had issues. It was pulling down db9, but it halted before it could damage anything. [production]