2011-01-12
16:39 <mark_> Increased CARP weight of amssq31 to 30 [production]
16:33 <mark_> Powercycled amssq38 [production]
16:30 <RobH> sq71-sq74 coming down for reinstallation to lucid [production]
16:27 <RobH> sq65-sq66 reinstalled to lucid and online [production]
16:20 <mark_> Lowered CARP weight of amssq* non-SSD text squids from 10 to 8 to relieve disk and memory pressure [production]
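(Note: a minimal squid.conf sketch of what such a CARP weight change usually looks like on a frontend squid; the backend hostname, ports, and other cache_peer options here are illustrative assumptions, not taken from the log.)
    # hypothetical frontend squid.conf fragment: CARP parent entry for a
    # non-SSD text backend, weight lowered from 10 to 8
    cache_peer text-backend.example.org parent 3128 0 carp weight=8 no-query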
16:20 <apergos> running cleanup on index.html and md5sums for the XML dump jobs that recorded bogus "recombine" jobs that weren't actually run [production]
16:18 <mark_> Stopped old puppet instance and restarted squid-frontend on amssq37 [production]
16:16 <RobH> amssq31 upgraded to lucid and back in service [production]
16:05 <RobH> both srv182 and srv183 were not responsive to serial console, rebooted both. [production]
15:42 <RobH> sq65 & sq66 coming down for reinstallation to lucid [production]
15:40 <RobH> sq62-sq64 installed and online with lucid [production]
15:31 <RobH> reinstalling amssq31 [production]
15:11 <RobH> sq62-sq64 down for reinstall [production]
15:06 <RobH> sq59-sq61 online as lucid hosts [production]
14:59 <RobH> updated wmf repo, copying package for squid from karmic to lucid as well (seems to have been lost from other changes, as frontend was still there) [production]
14:02 <RobH> sq59-sq61 offline for reinstall [production]
12:03 <mark_> Enabled multicast snooping on csw1-sdtpa [production]
2011-01-11
23:39 <Ryan_Lane> adding cron on nova-controller.tesla to svn up /wiki hourly [production]
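(A minimal sketch of the kind of cron entry described; the file path, user, and exact svn invocation are assumptions.)
    # hypothetical /etc/cron.d/svn-up-wiki on nova-controller.tesla:
    # update the /wiki working copy from Subversion once an hour
    0 * * * * root svn up -q /wiki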
22:27 <RobH> have to reinstall the squids, wrong version written into lease [production]
22:24 <awjr> archived hudson build files for 'donations queue consume' and emptied builds directory on grosley to allow the jobs to continue running (donations queue consume job was failing due to hitting max # of files in a dir for ext3 fs) [production]
22:05 <RobH> correction, sq65, sq66, & sq71 [production]
22:04 <RobH> sq62-sq64 back online, sq65-sq67 coming down for reinstall [production]
21:48 <Ryan_Lane> adding exim::simple-mail-sender to owa1-3 [production]
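(A minimal sketch of what attaching that Puppet class to those hosts could look like in site.pp; only the class name comes from the log, the node regex is an assumption.)
    # hypothetical manifests/site.pp fragment
    node /^owa[123]\.wikimedia\.org$/ {
        include exim::simple-mail-sender
    }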
21:33 <RobH> sq59-sq61 in service, sq62-sq64 reinstalling [production]
21:27 <Ryan_Lane> powercycling amssq31 [production]
20:52 <RobH> sq59-61 reinstalled and online, pooled, partially up [production]
20:42 <rainman-sr> enwiki.spell index somehow got corrupt, investigating and rebuilding it now on searchidx1 [production]
18:57 <RobH> sq59-sq61 coming back offline, bad partitioning in automated install, need to update squid configuration for these hosts [production]
18:47 <RobH> sq59 having reinstall issues, skipping it and moving on [production]
18:36 <RobH> sq61 reinstalled and back online [production]
18:30 <RobH> sq60 reinstalled, back in service [production]
18:03 <RobH> sq59-sq61 depooled for upgrade [production]
16:15 <RobH> sync-docroot run to push updated tenwiki favicon [production]
14:27 <mark_> Depooled amssq31 and amssq32 for SSD install [production]
2011-01-10
21:01 <Ryan_Lane> patching python-nova on nova-compute*.tesla (see bug #lp700015) [production]
20:59 <Ryan_Lane> err make that #lp681164 for ldap driver [production]
20:59 <Ryan_Lane> patching ldap driver on nova-* (see bug #lp681030) [production]
20:57 <Ryan_Lane> patching ec2 api on nova-controller.tesla (see bug #lp701216) [production]
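(The patching entries above record applying upstream Launchpad fixes to the installed nova code. A minimal sketch of applying one such patch in place; the install path, strip level, and patch filename are assumptions.)
    # hypothetical: apply an upstream fix to the installed python-nova tree
    # (path and -p level depend on the packaging and the patch)
    cd /usr/lib/pymodules/python2.6/nova
    sudo patch -p1 < /root/lp701216-ec2-api.patch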
19:56 <RobH> updated dns with virt1-virt4 mgmt info [production]
16:10 <RobH> singer had crashed, investigating why it suddenly had issues. It was pulling down db9, but it halted before it could damage anything. [production]
16:04 <RobH> secure server, as well as blogs, offline, investigating server issue on singer [production]
16:02 <RobH> db9 having issues and singer is as well, taking singer down since it's already crashed [production]
2011-01-09
22:04 <Ryan_Lane> powercycling srv217 [production]
22:03 <Ryan_Lane> powercycling srv271 [production]
22:01 <Ryan_Lane> powercycling srv262 [production]
19:21 <apergos> rebooting amssq61 hoping to clear up its problem. I guess puppet restarts squid instances every so often *sigh* [production]
18:50 <apergos> stopping squid front and back end instances on amssq61, has network issue [production]
14:15 <mark> Reduced Varnish max worker threads from 8000 to 2000 per threadpool [production]
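(A minimal sketch of how that Varnish parameter is typically changed; whether it was set at runtime or in the startup options isn't recorded in the log, and the admin port is an assumption.)
    # hypothetical: lower the per-threadpool worker maximum at runtime...
    varnishadm -T localhost:6082 param.set thread_pool_max 2000
    # ...or persistently via the varnishd startup options:
    # varnishd ... -p thread_pool_max=2000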
13:52 <mark> Pooled knsq6 and knsq7 as bits.esams [production]
13:43 <mark> Converted knsq6 and knsq7 into bits.esams machines [production]