2012-01-13
00:28 <reedy> synchronized closed.dblist 'Closing en_labswikimedia, de_labswikimedia, liquidthreads_labswikimedia' [production]
00:17 <Ryan_Lane> stopping puppet on all virt nodes [production]
2012-01-12
21:54 <Ryan_Lane> relabeled port at virt0 [production]
21:54 <Ryan_Lane> moved new virt0 from squid vlan to public-services2 [production]
21:43 <Ryan_Lane> rebuilding mobile2 as virt0 [production]
21:43 <Ryan_Lane> Adding back mgmt info for mobile1, changing mobile2 to virt0 [production]
21:11 <Ryan_Lane> rebuilding mobile1 as virt0 [production]
21:08 <Ryan_Lane> renaming mobile1 to virt0 [production]
20:54 <binasher> installing percona-toolkit on the few remaining hardy dbs [production]
20:26 <cmjohnson1> shutting down srv178-189 for decommissioning [production]
20:14 <binasher> granted the "process" priv to nagios@localhost on all production db clusters [production]
20:07 <reedy> synchronized php-1.18/includes/specials/SpecialSearch.php '[[rev:108751|r108751]]' [production]
20:07 <LeslieCarr> reassigning ports on asw-b-sdtpa [production]
17:00 <notpeter> stop sodium to do manual reinstall [production]
16:33 <RobH> adjusting all power strip humidity sensor 2 (floor level) to 12% humidity, as the center rack has the proper levels, floor levels always are low in humidity. [production]
16:04 <mutante> starting nagios-nrpe-server on ALL via dsh to speed up nagios recovery [production]
15:33 <mutante> starting nagios-nrpe-server on srvs via dsh [production]
02:04 <LocalisationUpdate> completed (1.18) at Thu Jan 12 02:04:31 UTC 2012 [production]
00:48 <preilly> pushing quick fix for special random [production]
00:48 <preilly> synchronized php-1.18/extensions/MobileFrontend/MobileFrontend.php 'update to mobile frontend to fix random link' [production]
00:41 <LeslieCarr> added ganglia1002 and ganglia1001 to dns [production]
2012-01-11
23:18 <RobH> searchidx1001 offline and powered down until replacement memory arrives (2012-01-13) rt 2208 [production]
22:56 <RobH> poking searchidx1001 for memory error [production]
22:45 <RobH> mw1108 online and ready for install per rt2253 [production]
22:42 <RobH> mw1099 repaired, ready for os install per rt2252 [production]
22:39 <RobH> mw1081 ready for install rt2251 [production]
22:32 <RobH> no it's not ;] [production]
22:16 <Reedy> lists.wikimedia.org is down [production]
21:53 <reedy> synchronized php-1.18/includes/api/ '[[rev:108683|r108683]]' [production]
21:36 <RobH> psw1-eqiad mgmt connected [production]
21:24 <RobH> leslie is handling the ganglia not starting back up issue even though i caused it to die, yay me [production]
21:22 <RobH> updated dns for neon/cobalt to ganglia1001/1002 [production]
21:17 <RobH> ganglia offline for a moment, sorry folks [production]
21:17 <catrope> synchronizing Wikimedia installation... : Deploying MoodBar changes [production]
21:16 <RobH> i just took nickel offline by mistake [production]
20:58 <reedy> synchronized wmf-config/InitialiseSettings.php 'Change shorturl prefix default' [production]
20:57 <reedy> synchronized php-1.18/extensions/ShortUrl/ '[[rev:108680|r108680]]' [production]
20:50 <RoanKattouw> Applying MoodBar schema changes (index addition and column addition) on all wikis [production]
20:44 <catrope> synchronized php-1.18/extensions/ArticleFeedbackv5/ 'Updating AFTv5 to trunk state' [production]
20:14 <Jeff_Green> adjusted firewall rules on payments* to restore ganglia reporting since we switched to nickel [production]
20:11 <RoanKattouw> Created AFTv5 tables on testwiki [production]
20:09 <catrope> synchronized php-1.18/extensions/ArticleFeedbackv5/modules/jquery.articleFeedbackv5/jquery.articleFeedbackv5.js '[[rev:108666|r108666]]' [production]
19:53 <cmjohnson1> shutting down srv191 for new install [production]
19:52 <cmjohnson1> replaced HDD srv191 [production]
19:47 <catrope> synchronized php-1.18/resources/startup.js 'touch' [production]
19:46 <catrope> synchronized wmf-config/InitialiseSettings.php 'Enable AFTv5 on testwiki' [production]
19:40 <RobH> mw1103 hardware issues, disregard nagios flapping [production]
19:37 <RobH> mw1102 offline due to bad mainboard until replacement arrives tomorrow or the day after [production]
19:30 <RobH> working on mw1102, disregard flapping [production]
18:40 <notpeter> running authdns-update on dobson to pick up new dns temps [production]