2010-11-17
10:03 <tomasz_> adding single field indexes to utm_source, utm_medium, and utm_campaign under contribution_tracking table within drupal db on db9 [production]
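The index additions above amount to MySQL DDL along these lines (index names assumed; table and columns as logged):

    ALTER TABLE drupal.contribution_tracking
        ADD INDEX utm_source (utm_source),
        ADD INDEX utm_medium (utm_medium),
        ADD INDEX utm_campaign (utm_campaign);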
03:44 <atglenn> restarted apache on ekrem, many processes hung in "graceful close" state for a long period of time [production]
03:06 <tfinc> synchronized php-1.5/extensions/CentralNotice/SpecialBannerController.php [production]
03:04 <Tim> in puppet, disabled nagios::purge since it breaks puppet entirely on fenari. Removed Aaron's obsolete ssh public key by adding an ensure=>absent to puppet. [production]
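The ensure=>absent pattern Tim mentions keeps the resource declared so Puppet actively removes the key rather than just forgetting about it; a minimal sketch (resource title and user are hypothetical):

    ssh_authorized_key { 'aaron-obsolete-key':
        ensure => absent,
        user   => 'root',
        type   => 'ssh-rsa',
    }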
01:34 <Tim> on ekrem: ran logrotate -f, since log rotation previously failed due to disk full [production]
01:28 <Tim> on ekrem: root partition full, deleted old apache access logs [production]
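A typical recovery sequence for the two ekrem entries above, assuming stock Debian-style paths (not taken from the log):

    df -h /                                  # confirm the full root partition
    rm /var/log/apache2/access.log.*.gz      # reclaim space from old rotated logs
    logrotate -f /etc/logrotate.conf         # force the rotation that failed earlier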
00:20 <tfinc> synchronized php-1.5/wmf-config/CommonSettings.php [production]
2010-11-16
19:05 <catrope> synchronizing Wikimedia installation... Revision: 76812: [production]
19:03 <RoanKattouw> Running scap to deploy UploadWizard backend changes (core only) [production]
17:04 <Ryan_Lane> adding run stages to puppet config; adding apt-get update to first stage, and nagios resource purging to last stage [production]
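Run stages (new in Puppet 2.6) would be declared roughly as follows; the class names here are illustrative, not the actual manifests:

    stage { 'first': before  => Stage['main'] }
    stage { 'last':  require => Stage['main'] }

    class { 'apt::update':   stage => 'first' }   # apt-get update before everything else
    class { 'nagios::purge': stage => 'last'  }   # resource purging after everything else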
17:01 <catrope> synchronized php-1.5/maintenance/nextJobDB.php 'Fix memcached usage for nextJobDB.php, broken since Sep 09. Should speed up job queue processing' [production]
16:39 <RobH> updated dns [production]
15:50 <catrope> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Add deletedtext, deletedhistory rights to eliminator group on hiwiki' [production]
15:04 <catrope> synchronized php-1.5/wmf-config/InitialiseSettings.php 'bug 25374 - Eliminator group for hiwiki' [production]
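In wmf-config, per-wiki group changes like the two eliminator entries above are normally expressed as overrides in InitialiseSettings.php; a sketch, with the exact key layout assumed:

    'groupOverrides' => array(
        'hiwiki' => array(
            'eliminator' => array(
                'deletedtext'    => true,
                'deletedhistory' => true,
            ),
        ),
    ),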
14:36 <JeLuF> 25871 - fixed logo for pflwiki [production]
14:05 <RobH> temp fixed nagios [production]
2010-11-15
22:31 <Ryan_Lane> repooling sq70 [production]
21:43 <Ryan_Lane> pushing change to varnish to send cache-control header for geoip lookup [production]
21:43 <catrope> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Adding categories to $wmgArticleAssessmentCategory' [production]
21:37 <catrope> synchronized php-1.5/extensions/ArticleAssessmentPilot/ArticleAssessmentPilot.hooks.php 'r76709' [production]
21:18 <Ryan_Lane> depooling sq70 [production]
21:17 <jeluf> synchronized php-1.5/wmf-config/InitialiseSettings.php '25569 - Create the Gagauz Wikipedia (wp/gag)' [production]
21:01 <mark> Lowered CARP weight of esams text amssq* squids from 20 to 10, equal to the older knsq* squids [production]
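CARP weights are per-parent options on squid.conf cache_peer lines, so the change amounts to something like this on the esams frontend squids (hostnames illustrative):

    cache_peer amssq31.esams.wikimedia.org parent 3128 0 carp weight=10
    cache_peer knsq1.knams.wikimedia.org   parent 3128 0 carp weight=10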
20:33 <Ryan_Lane> setting authdns-scenario normal [production]
20:05 <RobH> current slowdowns reported for folks hitting AMS squids. Moving traffic to US datacenter should fix major slowdowns on !Wikipedia & !Wikimedia [production]
20:04 <Ryan_Lane> setting authdns-scenario esams-down [production]
19:56 <RobH> fixed nagios again [production]
19:51 <RobH> updating dns for new owa processing nodes [production]
18:54 <RobH> srv298 now online in api pool [production]
18:21 <Ryan_Lane> fixing puppet manually on sq34, sq36, sq37, sq39, sq40, and knsq13 [production]
18:18 <RobH> gilman to secure gateway project stalled, needs network checks done [production]
18:07 <Ryan_Lane> puppetizing /etc/default/puppet, since some hosts had START=no, instead of START=yes [production]
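Managing /etc/default/puppet can be as simple as a file resource pinning the one line that matters (content trimmed to that line):

    file { '/etc/default/puppet':
        owner   => 'root',
        group   => 'root',
        mode    => '0444',
        content => "# Managed by puppet\nSTART=yes\n",
    }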
17:46 <RobH> gilman needed hard reset, ilom responsive now (thx rich!) [production]
17:35 <Ryan_Lane> restarting puppet again on all nodes using -M flag for ddsh to see system names (checking for errors) [production]
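ddsh wraps dsh, whose -M flag prefixes every output line with the originating hostname, which is what makes errors attributable per machine; roughly (node group name hypothetical):

    ddsh -M -g all 'service puppet restart' 2>&1 | grep -i err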
17:23 <Ryan_Lane> restarting puppet on all nodes [production]
17:12 <RobH> sq57 disk replaced, reinstalled, back in service [production]
17:09 <mark> Restarted apache on sockpuppet with concurrency 4 instead of 3 [production]
17:04 <RobH> puppet is now failing to work properly on sq57, why did we upgrade puppet again? [production]
16:59 <RobH> sq57 reinstalled and doing post installation configuration [production]
16:40 <Ryan_Lane> upping configtimeout setting in puppet to 8 minutes, globally [production]
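configtimeout is set in puppet.conf in seconds, so eight minutes reads as below (the section is [puppetd] on 0.25.x, [agent] on 2.6):

    [agent]
    configtimeout = 480    # wait up to 8 minutes for catalog/file-server operations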
16:33 <Ryan_Lane> trying to add puppet.conf to puppet again [production]
16:24 <Ryan_Lane> undoing puppet.conf changes [production]
16:20 <RobH> sq57 coming down for reinstallation [production]
16:19 <RobH> db13 back online, restarted mysql, but it's currently commented out of db.php [production]
16:12 <RobH> not sure why db13 is borked, but it's down, poking at it [production]
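Commenting a slave out of db.php takes it out of the load array so MediaWiki sends it no traffic; a sketch of the idea, with the variable layout and weights assumed:

    $wgLBFactoryConf['sectionLoads']['s1'] = array(
        'db12' => 100,
        // 'db13' => 100,   // depooled pending investigation
    );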
16:09 <Ryan_Lane> added puppet.conf to puppet. pushing change out [production]
16:00 <RobH> torrus is up again [production]
15:59 <richcole> swapped sq57's bad sdb drive [production]
15:56 <RobH> torrus is down, again, restarting and cleaning up its services [production]
15:52 <RobH> manually purged spence nagios, started manually, working until puppet borks it again [production]