2013-03-19
18:01 <robh> synchronized docroot [production]
17:21 <mark> Ended bits.esams performance testing [production]
16:29 <mark> Starting load testing with esams bits servers [production]
16:20 <mark> Rebooted cp3019 with hyperthreading and DAPC disabled [production]
16:16 <krinkle> synchronized extract2.php [production]
14:29 <mark> Rebooting arsenic with hyperthreading disabled [production]
12:54 <Tim> on professor: restarted collector [production]
08:46 <hashar> gallium: killed stalled puppet agents. Locked on apt-get and update-java-alt [production]
08:41 <hashar> gallium: restarted puppet and zuul [production]
08:38 <hashar> gallium: stopped zuul and upgrading [production]
08:37 <hashar> gallium: stopped puppet to deploy Zuul whenever the queue is empty [production]
08:36 <hashar> zuul : giving results processing priority over enqueued jobs. Cherry picked 263fba9 from upstream. That should resolve Zuul being slow to report back to Gerrit {{bug|46176}} [production]
03:52 <reedy> synchronized wmf-config/InitialiseSettings.php [production]
03:50 <reedy> synchronized wmf-config/InitialiseSettings.php [production]
03:11 <Tim> started incremental updater on searchidx1001 and searchidx2, apparently has not been running since March 13 [production]
02:50 <LocalisationUpdate> completed (1.21wmf12) at Tue Mar 19 02:50:19 UTC 2013 [production]
02:29 <LocalisationUpdate> completed (1.21wmf11) at Tue Mar 19 02:29:02 UTC 2013 [production]
01:18 <reedy> synchronized wmf-config/InitialiseSettings.php [production]
00:22 <Ryan_Lane> adding dns entries for labstore systems in eqiad [production]
00:08 <asher> synchronized wmf-config/db-pmtpa.php 'returning servers' [production]
2013-03-18
23:34 <binasher> i <3 running "dpkg -P mysql-server-5.1 mysqlfb-client-5.1 mysqlfb-common mysqlfb-server-5.1 mysqlfb-server-core-5.1 libmysqlfbclient16 libmysqlclient16" [production]
23:11 <asher> synchronized wmf-config/db-eqiad.php 'returning db10[10-11],[26-28] at low weights' [production]
22:45 <binasher> converted rogue s3 tables (moodbar_feedback, trackbacks, transcache) to innodb [production]
22:28 <ottomata> added python-flask-login-0.1.2-1 to apt [production]
22:23 <robh> gracefulled all apaches [production]
22:20 <LeslieCarr> intradatacenter link is flapping, switching links, this may cause some higher latency [production]
22:14 <robh> gracefulled all apaches [production]
22:11 <robh> gracefulled all apaches [production]
21:58 <mutante> purging wikimediafoundation.org from squid [production]
21:57 <dzahn> gracefulled all apaches [production]
21:56 <robh> gracefulled all apaches [production]
21:36 <asher> synchronized wmf-config/db-eqiad.php 'pulling a db from s3-7 for upgrade' [production]
21:30 <paravoid> authdns-update: removing *.ts.wikimedia.org records [production]
21:29 <dzahn> gracefulled all apaches [production]
21:24 <binasher> ran "ddsh -g apaches -cM '/etc/init.d/apache2 start'" [production]
21:24 <RobH> authdns-update rt4683 [production]
21:22 <RobH> gracefulling all apaches resulted in rendering cluster overload, they didn't restart apache, odd [production]
21:21 <robh> synchronized docroot [production]
21:18 <LeslieCarr> service apache2 start on the mw-eqiad group via dsh [production]
21:14 <binasher> started apache on all imagescalers [production]
21:00 <robh> gracefulled all apaches [production]
21:00 <RobH> troubleshooting errors in apache restart script [production]
20:58 <robh> gracefulled all apaches [production]
20:36 <RobH> authdns-update rt4694 [production]
20:20 <RobH> authdns-update [production]
20:04 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: [production]
20:00 <reedy> Finished syncing Wikimedia installation... : Rebuild message cache for 1.21wmf12 [production]
19:44 <^demon> jetty freaked out again, forcing a gerrit restart [production]
19:32 <binasher> upgraded all coredb mariadb replicas to 5.5.30 [production]
19:25 <reedy> Started syncing Wikimedia installation... : Rebuild message cache for 1.21wmf12 [production]