2013-03-19
08:37 <hashar> gallium: stopped puppet to deploy Zuul whenever the queue is empty [production]
08:36 <hashar> zuul : giving results processing priority over enqueued jobs. Cherry picked 263fba9 from upstream. That should resolve Zuul being slow to report back to Gerrit {{bug|46176}} [production]
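For reference, pulling a single upstream fix onto a deployed branch like this is normally a fetch plus cherry-pick; the remote name and working-copy details below are assumptions, not the exact commands run:
 # fetch the upstream Zuul repository and apply the one fix on top of the deployed branch
 git fetch upstream
 git cherry-pick 263fba9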
03:52 <reedy> synchronized wmf-config/InitialiseSettings.php [production]
03:50 <reedy> synchronized wmf-config/InitialiseSettings.php [production]
03:11 <Tim> started incremental updater on searchidx1001 and searchidx2, apparently has not been running since March 13 [production]
02:50 <LocalisationUpdate> completed (1.21wmf12) at Tue Mar 19 02:50:19 UTC 2013 [production]
02:29 <LocalisationUpdate> completed (1.21wmf11) at Tue Mar 19 02:29:02 UTC 2013 [production]
01:18 <reedy> synchronized wmf-config/InitialiseSettings.php [production]
00:22 <Ryan_Lane> adding dns entries for labstore systems in eqiad [production]
00:08 <asher> synchronized wmf-config/db-pmtpa.php 'returning servers' [production]
2013-03-18
23:34 <binasher> i <3 running "dpkg -P mysql-server-5.1 mysqlfb-client-5.1 mysqlfb-common mysqlfb-server-5.1 mysqlfb-server-core-5.1 libmysqlfbclient16 libmysqlclient16" [production]
23:11 <asher> synchronized wmf-config/db-eqiad.php 'returning db10[10-11],[26-28] at low weights' [production]
22:45 <binasher> converted rogue s3 tables (moodbar_feedback, trackbacks, transcache) to innodb [production]
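A conversion like the one above boils down to a per-table ALTER; a minimal sketch, assuming direct access to the relevant s3 database (host and database names are illustrative, table names are from the entry above):
 # convert each rogue table in place
 for t in moodbar_feedback trackbacks transcache; do
   mysql -h DB_HOST SOME_DB -e "ALTER TABLE $t ENGINE=InnoDB;"
 done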
22:28 <ottomata> added python-flask-login-0.1.2-1 to apt [production]
22:23 <robh> gracefulled all apaches [production]
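The repeated "gracefulled all apaches" entries in this period are rolling graceful restarts across the app-server group; a minimal sketch, assuming a dsh group file named "apaches" (the actual wrapper script used is not shown in the log):
 # graceful-restart Apache on every host in the apaches group, printing host names
 dsh -g apaches -M -- 'apache2ctl graceful'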
22:20 <LeslieCarr> intra-datacenter link is flapping, switching links, this may cause some higher latency [production]
22:14 <robh> gracefulled all apaches [production]
22:11 <robh> gracefulled all apaches [production]
21:58 <mutante> purging wikimediafoundation.org from squid [production]
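One common way to purge a URL from a Squid cache is an explicit PURGE request against each cache host; the exact mechanism used here is not recorded, so this is only a sketch (host name is illustrative):
 # ask one cache host to drop its copy of the front page
 squidclient -h CACHE_HOST -m PURGE http://wikimediafoundation.org/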
21:57 <dzahn> gracefulled all apaches [production]
21:56 <robh> gracefulled all apaches [production]
21:36 <asher> synchronized wmf-config/db-eqiad.php 'pulling a db from s3-7 for upgrade' [production]
21:30 <paravoid> authdns-update: removing *.ts.wikimedia.org records [production]
21:29 <dzahn> gracefulled all apaches [production]
21:24 <binasher> ran "ddsh -g apaches -cM '/etc/init.d/apache2 start'" [production]
21:24 <RobH> authdns-update rt4683 [production]
21:22 <RobH> gracefulling all apaches resulted in rendering cluster overload, they didn't restart apache, odd [production]
21:21 <robh> synchronized docroot [production]
21:18 <LeslieCarr> service apache2 start on the mw-eqiad group via dsh [production]
21:14 <binasher> started apache on all imagescalers [production]
21:00 <robh> gracefulled all apaches [production]
21:00 <RobH> troubleshooting errors in apache restart script [production]
20:58 <robh> gracefulled all apaches [production]
20:36 <RobH> authdns-update rt4694 [production]
20:20 <RobH> authdns-update [production]
20:04 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: [production]
20:00 <reedy> Finished syncing Wikimedia installation... : Rebuild message cache for 1.21wmf12 [production]
19:44 <^demon> jetty freaked out again, forcing a gerrit restart [production]
19:32 <binasher> upgraded all coredb mariadb replicas to 5.5.30 [production]
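A quick post-upgrade sanity check on each replica would be to confirm the server version and replication state; these are generic commands, not the exact procedure used:
 # verify version and that replication is running and caught up
 mysql -e "SELECT VERSION();"
 mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'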
19:25 <reedy> Started syncing Wikimedia installation... : Rebuild message cache for 1.21wmf12 [production]
19:07 <asher> synchronized wmf-config/db-eqiad.php 'returning db1043 db1009' [production]
19:04 <reedy> synchronized docroot [production]
19:02 <asher> synchronized wmf-config/db-eqiad.php 'db1050 for special s1' [production]
18:56 <asher> synchronized wmf-config/db-eqiad.php 'pulling db1043, db1009 for upgrade' [production]
18:40 <reedy> synchronized php-1.21wmf12 'Initial sync of php-1.21wmf12' [production]
18:16 <awight> payments cluster updated from c19cc66 to fe4fb96b [production]
18:06 <reedy> synchronized wmf-config/CommonSettings.php [production]
18:03 <aaron> synchronized wmf-config/PrivateSettings.php 'Removed old testing cruft' [production]
16:19 <cmjohnson1> removing/replacing disk 4 ms-be1004 [production]
16:01 <hashar> Hard restarted Zuul which was deadlocked again :-( [production]