2014-02-19
04:58 <demon> synchronized wmf-config/CirrusSearch-common.php 'Turn on checkDelay for cirrus links update secondary jobs' [production]
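The "synchronized wmf-config/…" entries in this log are produced when a config file is pushed from the deployment host to the app servers. A minimal sketch of what such a push looks like, assuming the sync-file wrapper in use at the time (the file path and log message are taken from the entry above; the working directory is an assumption):

    # On the deployment host: push one config file to all app servers
    # and !log the given message to this server admin log.
    cd /a/common
    sync-file wmf-config/CirrusSearch-common.php 'Turn on checkDelay for cirrus links update secondary jobs'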
03:23 <andrewbogott> testing the log by logging a test [production]
02:55 <LocalisationUpdate> ResourceLoader cache refresh completed at 2014-02-19 02:55:36+00:00 [production]
02:27 <LocalisationUpdate> completed (1.23wmf13) at 2014-02-19 02:27:49+00:00 [production]
02:03 <LocalisationUpdate> completed (1.23wmf14) at 2014-02-19 02:03:41+00:00 [production]
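The LocalisationUpdate entries are emitted by an automated nightly job that pulls updated translations for each deployed branch and then refreshes the ResourceLoader message cache. A rough sketch of the manual equivalent for one wiki, assuming the mwscript wrapper and the extension's update.php maintenance script (the wiki name is a placeholder):

    # Pull updated i18n messages for one wiki; the nightly job loops over
    # all wikis on each deployed branch and then refreshes caches.
    mwscript extensions/LocalisationUpdate/update.php --wiki=aawiki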
00:04 <mutante> restarting gitblit on antimony [production]
2014-02-18
23:42 <mutante> shutting down locke - killing 757 days of uptime and one more Tampa classic host [production]
23:38 <mutante> locke - disable puppet, puppetstoredconfigclean on master, revoke puppet cert and salt key.. [production]
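The locke decommission entry above bundles several steps run on different hosts. A hedged sketch of those steps, assuming the Puppet 2.x/3.x and Salt CLIs in use at the time and the FQDN locke.wikimedia.org (the exact hostname form is an assumption):

    # On locke itself: stop puppet from running again.
    puppet agent --disable 'host being decommissioned'
    # On the puppetmaster: purge stored configs, then revoke and remove the cert.
    puppetstoredconfigclean.rb locke.wikimedia.org
    puppet cert clean locke.wikimedia.org
    # On the salt master: delete the minion key.
    salt-key -d locke.wikimedia.org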
22:59 <ebernhardson> synchronized wmf-config/CommonSettings.php [production]
22:35 <ebernhardson> synchronized wmf-config/InitialiseSettings.php [production]
22:25 <ebernhardson> synchronized wmf-config/InitialiseSettings.php [production]
22:23 <ebernhardson> synchronized wmf-config/InitialiseSettings.php [production]
22:20 <ebernhardson> synchronized wmf-config/InitialiseSettings.php [production]
22:18 <ebernhardson> synchronized php-1.23wmf14/extensions/Flow [production]
22:10 <ebernhardson> synchronized php-1.23wmf13/extensions/Flow/includes/Data/RevisionStorage.php [production]
20:35 <demon> synchronized wmf-config/CirrusSearch-common.php 'Elastica is always included now' [production]
20:21 <ottomata> upgraded librdkafka1 to 0.8.3 on cp1056, restarting varnishkafka there [production]
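Upgrading librdkafka under a running varnishkafka requires a daemon restart to pick up the new library. A minimal sketch, assuming Debian/Ubuntu packaging and the init service name of that era (the library version matches the entry; the exact package revision is a placeholder):

    # On cp1056: install the pinned library build, then restart the producer
    # so it links against the new librdkafka.
    apt-get install librdkafka1=0.8.3-1   # Debian revision is a placeholder
    service varnishkafka restart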
19:55 <aaron> synchronized php-1.23wmf14/includes/filebackend/SwiftFileBackend.php '58fa613a75c2730cbf8f60e9e3f283a3f043f00b' [production]
19:45 <ottomata> repooling cp3022 into bits esams. varnishkafka has emptied its outbuf since last night [production]
19:40 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: All non-Wikipedias to 1.23wmf14 [production]
19:39 <reedy> updated /a/common to {{Gerrit|I641a25ef9}}: Symlink in the extension-list files [production]
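The two entries above show the usual pattern for moving wikis to a new MediaWiki branch: pull the merged Gerrit change into /a/common on the deployment host, then rebuild and push the wikiversions files. A rough sketch, assuming git-based config deployment and the sync-wikiversions wrapper of that era:

    cd /a/common
    git pull                       # brings in the reviewed Gerrit change
    # Rebuild wikiversions.cdb from the edited wikiversions list, push it
    # to the app servers, and !log the given message.
    sync-wikiversions 'All non-Wikipedias to 1.23wmf14'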
19:28 <Krinkle> Jenkins jobs for npm are broken because the new integration-slave02 and integration-slave03 instances have SSL issues (different npm version and no certificates). And integration-slave01 (which was working) was deleted. [production]
04:11 <reedy> synchronized docroot/noc/ [production]
03:38 <reedy> updated /a/common to {{Gerrit|Ifd09130b4}}: Ignore PhpStorm files [production]
02:53 <springle> reindexing s1 slaves abuse_filter_log [production]
02:50 <awight> payments updated from 9b936320b797bd01c4e61b1cd7c2e15b0820a24b to fe302a89e718dce7917acefb8c762ddc1c19c028 [production]
02:48 <LocalisationUpdate> ResourceLoader cache refresh completed at 2014-02-18 02:48:14+00:00 [production]
02:42 <awight> rollback payments from 9e4d8b29581e2465d1acde8d7c2377fa6a8522a6 to 9b936320b797bd01c4e61b1cd7c2e15b0820a24b [production]
02:38 <awight> rollback payments from ce6233998f4bc0266c2e027c44620a8ba9984681 to 9e4d8b29581e2465d1acde8d7c2377fa6a8522a6 [production]
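The payments update and rollback entries record moving a git checkout between specific commits on the payments cluster. A hedged sketch of one rollback step, assuming a plain git working copy (the deployment path is a placeholder; the commit hashes are from the log):

    cd /srv/payments               # placeholder path for the payments checkout
    git fetch
    # Roll back from the bad revision to the previously deployed commit.
    git checkout 9b936320b797bd01c4e61b1cd7c2e15b0820a24b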
02:26 <LocalisationUpdate> completed (1.23wmf14) at 2014-02-18 02:26:05+00:00 [production]
02:14 <LocalisationUpdate> completed (1.23wmf13) at 2014-02-18 02:14:08+00:00 [production]
2014-02-17
20:57 <ottomata> depooling cp3022.esams.wikimedia.org to investigate varnishkafka issues [production]
16:15 <hashar> Jenkins: deleting slave integration-slave01 (it had only 2 CPUs) [production]
16:14 <hashar> Jenkins: added two labs slaves with 4 CPUs each: integration-slave02 and integration-slave03 [production]
08:46 <hashar> Upgrading Jenkins, half an hour downtime [production]
03:19 <LocalisationUpdate> ResourceLoader cache refresh completed at 2014-02-17 03:19:28+00:00 [production]
02:37 <LocalisationUpdate> completed (1.23wmf14) at 2014-02-17 02:37:40+00:00 [production]
02:27 <LocalisationUpdate> completed (1.23wmf13) at 2014-02-17 02:27:22+00:00 [production]
2014-02-16
19:32 <aaron> synchronized php-1.23wmf13/includes/filebackend/SwiftFileBackend.php 'e14a87489d9f65fec85347c8e4a7825576f15be6' [production]
16:03 <ottomata> restarted varnishkafka on esams bits varnishes [production]
15:58 <ottomata> restarted varnishkafka on cp3019 [production]
15:48 <ottomata> starting kafka preferred-replica leader election to balance load across both brokers evenly. Not yet sure why analytics1022 was the leader for all toppars… [production]
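Kafka's preferred-replica election re-elects each partition's first-listed replica as leader, which is how leadership (and thus load) is rebalanced across brokers after one broker has accumulated all the leaderships. A minimal sketch, assuming the stock tooling shipped with Kafka 0.8 (the ZooKeeper connect string is a placeholder):

    # Trigger a preferred-replica leader election for all topic-partitions.
    kafka-preferred-replica-election.sh --zookeeper ZK_HOST:2181/kafka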
10:48 <apergos> for the record, after the reboot I added back the 10.0.0.45 and ran start-nfs, still not happy [production]
10:11 <ori> labstore4. dmesg: XFS (dm-0): xfs_log_force: error 5 returned. Rebooting. [production]
08:07 <matanya> Labs NFS issues: cannot open directory .: Stale NFS file handle. XFS seems broken again [production]
02:42 <LocalisationUpdate> ResourceLoader cache refresh completed at 2014-02-16 02:42:25+00:00 [production]
02:21 <LocalisationUpdate> completed (1.23wmf14) at 2014-02-16 02:20:57+00:00 [production]
02:11 <LocalisationUpdate> completed (1.23wmf13) at 2014-02-16 02:11:05+00:00 [production]
2014-02-15
22:30 <reedy> updated /a/common to {{Gerrit|Id33b8287c}}: Remove 1.23wmf1 through 1.23wmf5 [production]
22:27 <reedy> updated /a/common to {{Gerrit|I04d387adf}}: s1 substitute db1034 for db1055 during schema changes [production]