2010-11-18
21:40 <JeLuF> started squid manually on sq59 sq61 sq73 sq60 sq62 sq65 sq77 sq63 sq64 sq72 sq75 sq74 sq76 sq71 sq78 sq66, startup script is broken. [production]
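A minimal sketch of the manual start, assuming standard squid paths and ssh access to each host; the exact flags and config path are not recorded in the log:

    for h in sq59 sq60 sq61 sq62 sq63 sq64 sq65 sq66 sq71 sq72 sq73 sq74 sq75 sq76 sq77 sq78; do
        # bypass the broken init script and launch the daemon directly
        ssh $h 'sudo /usr/sbin/squid -D -f /etc/squid/squid.conf'
    done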
21:26 <mark> Fixed puppet on formey [production]
21:24 <mark> Fixed puppet on linne [production]
19:57 <JeLuF> blocked UDP from srv124 on nfs1 aka syslog [production]
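The block was most likely a firewall rule on nfs1; a sketch assuming iptables and the standard syslog UDP port 514 (the actual rule was not logged, and srv124 stands in for the host's resolvable name or IP):

    # on nfs1: drop syslog datagrams coming from srv124
    iptables -A INPUT -p udp -s srv124 --dport 514 -j DROP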
17:00 <JeLuF> restarted puppet on srv215, srv235, srv244, srv257, srv262, srv288 [production]
16:33 <JeLuF> fixed puppet on srv185 and srv200 [production]
15:00 <aaron> synchronized php-1.5/wmf-config/flaggedrevs.php 'Set FR_INCLUDES_CURRENT on mediawikiwiki' [production]
2010-11-17
20:19 <JeLuF> syslog is being spammed with one week old messages from srv124 [production]
20:19 <RobH> owa1/2/3 online with base OS install and puppet updates [production]
17:59 <RobH> updated dns for new database servers [production]
17:15 <richcole> owa1 going down for repair [production]
15:52 <Ryan_Lane> moved the nagios purge stuff out of puppet, and into nagios's init script. Pulled the nagios init script into puppet [production]
10:03 <tomasz_> adding single field index on converted amount under public_reporting within civicrm db on db9 [production]
10:03 <tomasz_> adding single field indexes to utm_source, utm_medium, and utm_campaign under contribution_tracking table within drupal db on db9 [production]
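Roughly what those index additions look like in SQL, assuming the column behind "converted amount" is named converted_amount and the database names match the log (civicrm and drupal on db9); the exact DDL that was run is not recorded:

    mysql -h db9 civicrm -e 'ALTER TABLE public_reporting ADD INDEX (converted_amount)'
    mysql -h db9 drupal  -e 'ALTER TABLE contribution_tracking
                             ADD INDEX (utm_source),
                             ADD INDEX (utm_medium),
                             ADD INDEX (utm_campaign)'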
03:44 <atglenn> restarted apache on ekrem, many processes hung in "graceful close" state for a long period of time [production]
03:06 <tfinc> synchronized php-1.5/extensions/CentralNotice/SpecialBannerController.php [production]
03:04 <Tim> in puppet, disabled nagios::purge since it breaks puppet entirely on fenari. Removed Aaron's obsolete ssh public key by adding an ensure=>absent to puppet. [production]
01:34 <Tim> on ekrem: ran logrotate -f, since log rotation previously failed due to disk full [production]
01:28 <Tim> on ekrem: root partition full, deleted old apache access logs [production]
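The 01:28/01:34 cleanup on ekrem would have looked roughly like this; only the deletion of old apache access logs and the forced logrotate are from the log, the paths are assumptions:

    df -h /                              # confirm the root partition is full
    rm /var/log/apache2/access.log.*.gz  # delete old rotated access logs
    logrotate -f /etc/logrotate.conf     # force rotation now that space is free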
00:20 <tfinc> synchronized php-1.5/wmf-config/CommonSettings.php [production]
2010-11-16
19:05 <catrope> synchronizing Wikimedia installation... Revision: 76812: [production]
19:03 <RoanKattouw> Running scap to deploy UploadWizard backend changes (core only) [production]
17:04 <Ryan_Lane> adding run stages to puppet config; adding apt-get update to first stage, and nagios resource purging to last stage [production]
17:01 <catrope> synchronized php-1.5/maintenance/nextJobDB.php 'Fix memcached usage for nextJobDB.php, broken since Sep 09. Should speed up job queue processing' [production]
16:39 <RobH> updated dns [production]
15:50 <catrope> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Add deletedtext, deletedhistory rights to eliminator group on hiwiki' [production]
15:04 <catrope> synchronized php-1.5/wmf-config/InitialiseSettings.php 'bug 25374 - Eliminator group for hiwiki' [production]
14:36 <JeLuF> 25871 - fixed logo for pflwiki [production]
14:05 <RobH> temp fixed nagios [production]
2010-11-15
22:31 <Ryan_Lane> repooling sq70 [production]
21:43 <Ryan_Lane> pushing change to varnish to send cache-control header for geoip lookup [production]
21:43 <catrope> synchronized php-1.5/wmf-config/InitialiseSettings.php 'Adding categories to $wmgArticleAssessmentCategory' [production]
21:37 <catrope> synchronized php-1.5/extensions/ArticleAssessmentPilot/ArticleAssessmentPilot.hooks.php 'r76709' [production]
21:18 <Ryan_Lane> depooling sq70 [production]
21:17 <jeluf> synchronized php-1.5/wmf-config/InitialiseSettings.php '25569 - Create the Gagauz Wikipedia (wp/gag)' [production]
21:01 <mark> Lowered CARP weight of esams text amssq* squids from 20 to 10, equal to the older knsq* squids [production]
20:33 <Ryan_Lane> setting authdns-scenario normal [production]
20:05 <RobH> current slowdowns reported for folks hitting AMS squids. Moving traffic to US datacenter should fix major slowdowns on !Wikipedia & !Wikimedia [production]
20:04 <Ryan_Lane> setting authdns-scenario esams-down [production]
19:56 <RobH> fixed nagios again [production]
19:51 <RobH> updating dns for new owa processing nodes [production]
18:54 <RobH> srv298 now online in api pool [production]
18:21 <Ryan_Lane> fixing puppet manually on sq34, sq36, sq37, sq39, sq40, and knsq13 [production]
18:18 <RobH> gilman to secure gateway project stalled, needs network checks done [production]
18:07 <Ryan_Lane> puppetizing /etc/default/puppet, since some hosts had START=no, instead of START=yes [production]
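On an affected host, the symptom and a one-off fix look like this (a sketch; the real fix was to manage the file from puppet so it stays correct everywhere):

    grep '^START=' /etc/default/puppet                   # broken hosts showed START=no
    sed -i 's/^START=no/START=yes/' /etc/default/puppet  # what the puppetized file now enforces
    /etc/init.d/puppet start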
17:46 <RobH> gilman needed hard reset, ilom responsive now (thx rich!) [production]
17:35 <Ryan_Lane> restarting puppet again on all nodes using -M flag for ddsh to see system names (checking for errors) [production]
17:23 <Ryan_Lane> restarting puppet on all nodes [production]
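A sketch of the two ddsh passes at 17:23 and 17:35, assuming a node group spanning all hosts; only the -M flag (prefix each output line with the machine name) is taken from the log, the group name and restart command are assumptions:

    ddsh -g all '/etc/init.d/puppet restart'        # first pass across all nodes
    ddsh -M -g all '/etc/init.d/puppet restart'     # second pass, machine names shown to spot errors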
17:12 <RobH> sq57 disk replaced, reinstalled, back in service [production]
17:09 <mark> Restarted apache on sockpuppet with concurrency 4 instead of 3 [production]