2013-05-20
14:54 <ottomata> rebooting emery to upgrade to 3.2.0-43-generic kernel [production]
14:53 <ottomata> upgraded all linux machines to linux 3.2.0-43-generic kernel [production]
02:29 <LocalisationUpdate> ResourceLoader cache refresh completed at Mon May 20 02:29:25 UTC 2013 [production]
02:13 <LocalisationUpdate> completed (1.22wmf3) at Mon May 20 02:13:42 UTC 2013 [production]
02:05 <LocalisationUpdate> completed (1.22wmf4) at Mon May 20 02:05:04 UTC 2013 [production]
2013-05-19
02:07 <LocalisationUpdate> ResourceLoader cache refresh completed at Sun May 19 02:06:55 UTC 2013 [production]
02:02 <LocalisationUpdate> completed (1.22wmf3) at Sun May 19 02:01:54 UTC 2013 [production]
02:01 <LocalisationUpdate> completed (1.22wmf4) at Sun May 19 02:01:19 UTC 2013 [production]
2013-05-18
02:29 <LocalisationUpdate> ResourceLoader cache refresh completed at Sat May 18 02:29:09 UTC 2013 [production]
02:11 <LocalisationUpdate> completed (1.22wmf3) at Sat May 18 02:11:28 UTC 2013 [production]
02:06 <LocalisationUpdate> completed (1.22wmf4) at Sat May 18 02:06:30 UTC 2013 [production]
01:22 <mutante> singer boot issue - affects service contacts.wm (but no others) [production]
00:12 <mutante> dist-upgrading yvon [production]
00:08 <mutante> dist-upgrading nitrogen [production]
2013-05-17
23:48 <mutante> dist-upgrading singer [production]
23:45 <mutante> dist-upgrading nickel [production]
23:44 <RobH> all Tampa-based apaches have had kernel upgrades [production]
23:40 <mutante> dist-upgrading praseodymium [production]
23:32 <RobH> upgrading all srv*.pmtpa.wmnet via dist-upgrade in salt. [production]
23:27 <mutante> dist-upgrading gurvin [production]
23:13 <RobH> rebooting all pmtpa mw servers [production]
23:12 <mutante> dist-upgrading hydrogen, manutius [production]
23:11 <RobH> still doing pmtpa mw upgrades, ignore all icinga alarms for now [production]
23:04 <mutante> dist-upgrading chromium [production]
23:01 <RobH> using salt to dist-upgrade all the Tampa apaches... nothing could go wrong... right? [production]
22:57 <akosiaris> upgrading and rebooting cp1001-1020 (3 at a time) [production]
22:54 <mutante> dist-upgrading capella ( IPv6 tunnel relay) [production]
22:33 <mutante> shutting down zinc [production]
22:22 <mutante> !!log doesn't log [production]
22:21 <Reedy> [23:18:29] <binasher> !!log running hotbackup of db71 to pre-labsdb for s1 [production]
22:14 <mutante> rebooting spence for upgrades [production]
22:09 <reedy> synchronized w [production]
22:08 <reedy> synchronized docroot [production]
22:07 <mutante> rm -rf php-1.21wmf12 on tin per reedy [production]
22:02 <reedy> synchronized wmf-config/flaggedrevs.php [production]
21:57 <apergos> mw80 memtest failure dimm1, all other image scalers in pmtpa updated [production]
21:52 <mutante> dist-upgrading zhen (mobile vumi) [production]
21:48 <mutante> DNS update - kill storage1 and 2 [production]
21:33 <akosiaris> about to reboot fenari [production]
21:13 <hashar> gallium: manually blackholed some web crawlers (via ip route) [production]
19:54 <mutante> Wikimedia IRC server working again [production]
19:45 <Ryan_Lane> rebooted ssl1/2 ssl3001 ssl1001/1002 [production]
19:42 <hashar> Jenkins restarted successfully. [production]
19:39 <Ryan_Lane> depooling ssl3001 ssl1/2 ssl1001/2 [production]
19:39 <Ryan_Lane> repooling ssl1003/4 ssl3/4 [production]
19:31 <Ryan_Lane> repooling ssl3002/3 [production]
19:29 <hashar> restarted Jenkins [production]
19:29 <Ryan_Lane> rebooted ssl3/4 ssl3002/3 ssl1003/1004 [production]
19:28 <Ryan_Lane> depooled ssl3/4 ssl3002/3 ssl1003/1004 [production]
19:19 <RobH> blog back up, whew. [production]