2012-03-27
16:58 <reedy> synchronized wmf-config/InitialiseSettings.php 'Config for lezwiki' [production]
16:56 <reedy> synchronized wmf-config/InitialiseSettings.php 'Config for lezwiki' [production]
16:48 <reedy> ran sync-common-all [production]
16:32 <reedy> synchronized wmf-config/InitialiseSettings.php 'prep work for new wikis' [production]
16:08 <reedy> synchronized wmf-config/InitialiseSettings.php 'Bug 34527 - Create an Arbcom namespace on Russian Wikipedia' [production]
16:06 <reedy> synchronized wmf-config/InitialiseSettings.php 'Bug 34527 - Create an Arbcom namespace on Russian Wikipedia' [production]
15:47 <reedy> synchronized wmf-config/InitialiseSettings.php 'Bug 35161 - Incubator configuration updates' [production]
15:11 <reedy> synchronized wmf-config/InitialiseSettings.php 'Bug 32825 - Favicon for siwiki' [production]
14:40 <reedy> synchronized wmf-config/InitialiseSettings.php 'Bug 35516 - Add Skin: namespace to MW.org' [production]
08:15 <apergos> test you silly morebot [production]
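Most of the 2012-03-27 entries above are per-wiki configuration syncs of wmf-config/InitialiseSettings.php: the Arbcom namespace for ruwiki (Bug 34527), the Skin: namespace for mediawiki.org (Bug 35516), and the siwiki favicon (Bug 32825). The following is a minimal sketch of what such per-wiki overrides look like in the InitialiseSettings.php style; the namespace IDs, namespace names, and favicon path are illustrative assumptions, not the values that were actually deployed.

```php
<?php
// Illustrative sketch only: per-wiki overrides in the style of
// wmf-config/InitialiseSettings.php (setting name => database name => value).
// All IDs, names, and paths below are assumptions, not the values synced.
$settingsSketch = array(
	// Bug 34527 / Bug 35516: extra namespaces. Custom IDs must be >= 100;
	// an even ID is the subject namespace, the following odd ID its talk namespace.
	'wgExtraNamespaces' => array(
		'ruwiki' => array(
			102 => 'Арбком',             // hypothetical namespace name
			103 => 'Обсуждение_Арбкома', // hypothetical talk namespace name
		),
		'mediawikiwiki' => array(
			106 => 'Skin',
			107 => 'Skin_talk',
		),
	),
	// Bug 32825: per-wiki favicon (path illustrative).
	'wgFavicon' => array(
		'siwiki' => '//si.wikipedia.org/favicon.ico',
	),
);
```

Once edited, the file is pushed to the cluster with the deployment sync scripts seen in the log (e.g. sync-common-all), which is what produces the "synchronized wmf-config/InitialiseSettings.php" entries.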
2012-03-26
17:14 <notpeter> backing up plwiki.nspart1 index on search7, deleting working copy, and restarting lsearchd. (note: this will probably cause some downtime on some languages while the proc restarts...) [production]
15:18 <RobH> db59 has errors, but as it was a Fusion-io testbed server it is more than likely tweaked for such; it is not in any rotation [production]
14:54 <RobH> db59 shutting down for io card removal per rt 2589 [production]
13:37 <mutante> while on it, installing a whole bunch of package updates on db42 [production]
13:25 <mutante> db42 was out of disk, caused by ~5G citations.csv in /tmp; gzipped the file [production]
09:59 <mutante> ..and on ms-be-3. running puppet on db59 [production]
09:43 <mutante> another corrupted .yaml file on ssl2 [production]
09:33 <mutante> brewster - delete puppet lock file, restart lighttpd, puppet ... [production]
09:05 <mutante> brewster was out of disk - deleted lighttpd access.log.1, gzipped access.log [production]
02:18 <LocalisationUpdate> completed (1.19) at Mon Mar 26 02:18:03 UTC 2012 [production]
2012-03-25
22:26 <RobH> row b servertech firmware in eqiad all updated, should clear alarms as they come back online [production]
22:18 <RobH> firmware updates on servertechs in row b eqiad, disregard alarms [production]
20:14 <RobH> to fellow ops, you can disregard those observium errors, as I caused them [production]
20:13 <RobH> firmware updated on all power strips in row a eqiad. [production]
16:22 <RobH> ps1-a1-sdtpa firmware update complete [production]
16:16 <RobH> updating firmware on ps1-a1-sdtpa [production]
16:14 <RobH> ps1-b1-sdtpa firmware updated successfully [production]
16:14 <RobH> ps1-a1-eqiad firmware updated successfully [production]
16:09 <RobH> updating firmware on ps1-a1-eqiad and ps1-b1-sdtpa [production]
16:07 <RobH> updated firmware successfully on ps1-a8-eqiad, if it has observium alarms now then there are bigger issues. [production]
02:17 <LocalisationUpdate> completed (1.19) at Sun Mar 25 02:17:21 UTC 2012 [production]
00:59 <LeslieCarr> admin down asw-a-eqiad xe-1/1/2 and cr2-eqiad xe-5/0/0 due to framing errors causing packet loss and lacp sporadic timeouts. source of the issue [production]
2012-03-24
19:46 <preilly> synchronized php-1.19/extensions/MobileFrontend/MobileFrontend.body.php 'Following a performance regression reported on wikitech-l, added merciless profiling to ExtMobileFrontend::DOMParse()' [production]
17:35 <mark> Migration from br1-knams to cr2-knams completed. [production]
17:09 <mark> Migrated second knams-esams dark fiber link from br1-knams to cr2-knams [production]
16:36 <mark> Corrected MTU setting on cr2-knams's AMS-IX interface [production]
16:20 <Reedy> Some European users reporting routing issues [production]
16:01 <mark> Cleared OSPF session between csw1-esams and csw2-esams which magically made some internal routes reappear [production]
15:40 <mark> Brought up AMS-IX ipv4 BGP sessions [production]
15:30 <mark> Brought up AMS-IX ipv6 BGP sessions [production]
15:25 <mark> Moved AMS-IX connection to cr2-knams:xe-1/1/0 [production]
15:22 <mark> Shutdown all AMS-IX BGP sessions [production]
15:06 <mark> Disabled BFD on OSPF3 between cr2-knams and csw1-esams [production]
14:49 <mark> Moved AS6908 and AS1257 PIs to cr2-knams [production]
14:18 <mark> Brought up AS13030 and AS1299 BGP sessions on cr2-knams [production]
13:57 <mark> Shutdown AS1299 BGP session on br1-knams [production]
13:14 <mark> Established full iBGP mesh with added router cr2-knams. cr2-knams now has full Internet connectivity. [production]
12:48 <mark> Moved fiber from br1-knams:e1/2 to cr2-knams:xe-0/0/0 [production]
12:44 <mark> Disabled br1-knams:e1/2 (DF leg 1 to esams) [production]
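The 19:46 entry on 2012-03-24 refers to adding profiling to ExtMobileFrontend::DOMParse() after a performance regression was reported on wikitech-l. Below is a minimal sketch of MediaWiki 1.19-era per-section profiling; the method body, parameter list, and section names are assumptions for illustration, and only the class/method names and the wfProfileIn()/wfProfileOut() calls reflect what the log entry describes.

```php
<?php
// Illustrative sketch only: MediaWiki 1.19-style profiling around the hot
// sections of a method. Section names and the body are assumed; only the
// class/method names and wfProfileIn()/wfProfileOut() come from the log.
class ExtMobileFrontend {
	public function DOMParse( DOMDocument $doc ) {
		wfProfileIn( __METHOD__ );

		wfProfileIn( __METHOD__ . '-filter' );
		// ... strip elements not wanted in the mobile view (assumed step) ...
		wfProfileOut( __METHOD__ . '-filter' );

		wfProfileIn( __METHOD__ . '-serialize' );
		$html = $doc->saveHTML();
		wfProfileOut( __METHOD__ . '-serialize' );

		wfProfileOut( __METHOD__ );
		return $html;
	}
}
```

Each wfProfileIn()/wfProfileOut() pair appears as its own row in the profiler output, which is what makes this kind of "merciless" instrumentation useful for narrowing a regression down to the section that actually got slower.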