2016-03-01
12:03 <hashar> Jenkins cannot pool back integration-slave-trusty-1003; the Jenkins master has a bunch of blocking threads piling up, with hudson.plugins.sshslaves.SSHLauncher.afterDisconnect() locked somehow [releng]
11:41 <hashar> Rebooting integration-slave-trusty-1003 (does not reply to salt / ssh) [releng]
10:34 <hashar> Image ci-jessie-wikimedia-1456827861 in wmflabs-eqiad is ready [releng]
10:24 <hashar> Refreshing Nodepool snapshot instances [releng]
10:22 <hashar> Refreshing Nodepool base image to speed instances boot time (dropping open-iscsi package https://gerrit.wikimedia.org/r/#/c/273973/ ) [releng]
2016-02-29
16:23 <hashar> salt -v '*slave*' cmd.run 'rm -fR /mnt/jenkins-workspace/workspace/mwext*jslint' T127362 [releng]
16:17 <hashar> Deleting all mwext-.*-jslint jobs from Jenkins. Paladox has migrated all of them to jshint/jsonlint generic jobs T127362 [releng]
16:16 <hashar> Deleting all mwext-.*-jslint jobs from Jenkins. Paladox has migrated all of them to jshint/jsonlint generic jobs [releng]
09:46 <hashar> Jenkins installing Yaml Axis Plugin 0.2.0 [releng]
2016-02-28
01:30 <Krinkle> Rebooting integration-slave-precise-1012 – Might help T109704 (MySQL not running) [releng]
2016-02-27
03:24 <jzerebecki> zuul works again, but lost queued jobs [releng]
03:20 <jzerebecki> that made it worse, restarting zuul [releng]
03:18 <jzerebecki> trying reload [releng]
03:16 <jzerebecki> no luck. different problem. [releng]
03:12 <jzerebecki> trying https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Gearman_deadlock [releng]
00:51 <jzerebecki> salt -v --show-timeout '*slave*' cmd.run "bash -c 'cd /srv/deployment/integration/slave-scripts; git pull'" T128191 [releng]
2016-02-26
15:14 <jzerebecki> salt -v --show-timeout '*slave*' cmd.run "bash -c 'cd /srv/deployment/integration/slave-scripts; git pull'" T128191 [releng]
15:14 <jzerebecki> salt -v --show-timeout '*slave*' cmd.run "bash -c 'cd /srv/deployment/integration/slave-scripts; git pull'" [releng]
14:44 <hashar> (since it started, don't be that scared!) [releng]
14:44 <hashar> Nodepool has triggered 40 000 instances [releng]
11:53 <hashar> Restarted memcached on deployment-memc02 T128177 [releng]
11:53 <hashar> memcached process on deployment-memc02 seems to have a nice leak of socket usage (from lsof) and plainly refuses connections (bunch of CLOSE_WAIT) T128177 [releng]
11:53 <hashar> memcached process on deployment-memc02 seems to have a nice leak of socket usage (from lsof) and plainly refuses connections (bunch of CLOSE_WAIT) [releng]
11:40 <hashar> deployment-memc04 find /etc/apt -name '*proxy' -delete (prevented apt-get update) [releng]
11:26 <hashar> beta: salt -v '*' cmd.run 'apt-get -y install ruby-msgpack' . I am tired of seeing puppet debug messages: "Debug: Failed to load library 'msgpack' for feature 'msgpack'" [releng]
11:24 <hashar> puppet keeps restarting nutcracker apparently T128177 [releng]
11:20 <hashar> Memcached error for key "enwiki:flow_workflow%3Av2%3Apk:63dc3cf6a7184c32477496d63c173f9c:4.8" on server "127.0.0.1:11212": SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY [releng]
2016-02-25
22:38 <hashar> beta: maybe deployment-jobunner01 is processing jobs a bit faster now. Seems like hhvm went wild [releng]
22:23 <hashar> beta: jobrunner01 had apache/hhvm killed somehow... blame me [releng]
21:56 <hashar> beta: stopped jobchron / jobrunner on deployment-jobrunner01 and restarting them by running puppet [releng]
21:49 <hashar> beta: did a git-deploy of jobrunner/jobrunner hoping to fix the puppet run on deployment-jobrunner01, and apparently it did! T126846 [releng]
11:21 <hashar> deleting workspace /mnt/jenkins-workspace/workspace/browsertests-Wikidata-WikidataTests-linux-firefox-sauce on slave-trusty-1015 [releng]
10:08 <hashar> Jenkins upgraded T128006 [releng]
01:44 <legoktm> deploying https://gerrit.wikimedia.org/r/273170 [releng]
01:39 <legoktm> deploying https://gerrit.wikimedia.org/r/272955 (undeployed) and https://gerrit.wikimedia.org/r/273136 [releng]
01:37 <legoktm> deploying https://gerrit.wikimedia.org/r/273136 [releng]
00:31 <thcipriani> running puppet on beta to update scap to latest packaged version: sudo salt -b '10%' -G 'deployment_target:scap/scap' cmd.run 'puppet agent -t' [releng]
00:20 <thcipriani> deployment-tin not accepting jobs for some time, ran through https://www.mediawiki.org/wiki/Continuous_integration/Jenkins#Hung_beta_code.2Fdb_update, is back now [releng]
2016-02-24
19:54 <legoktm> legoktm@deployment-tin:~$ mwscript extensions/ORES/maintenance/PopulateDatabase.php --wiki=enwiki [releng]
18:30 <bd808> "configuration file '/etc/nutcracker/nutcracker.yml' syntax is invalid" [releng]
18:27 <bd808> nutcracker dead on mediawiki01; investigating [releng]
17:20 <hashar> Deleted Nodepool instances so new ones get to use the new snapshot ci-jessie-wikimedia-1456333979 [releng]
17:12 <hashar> Refreshing nodepool snapshot. Been stalled since Feb 15th T127755 [releng]
17:01 <bd808> https://wmflabs.org/sal/releng missing SAL data since 2016-02-20T20:19 due to bot crash; needs to be backfilled from wikitech data (T127981) [releng]
2016-02-20
20:19 <Krinkle> beta-code-update-eqiad job repeatedly stuck at "IRC notifier plugin" [releng]
19:29 <Krinkle> beta-code-update-eqiad broken because deployment-tin:/srv/mediawiki-staging/php-master/extensions/MobileFrontend/includes/MobileFrontend.hooks.php was modified on the server without commit [releng]
19:22 <Krinkle> Various beta-mediawiki-config-update-eqiad jobs have been stuck 'queued' for > 24 hours [releng]
2016-02-19
12:09 <hashar> killed https://integration.wikimedia.org/ci/job/beta-code-update-eqiad/ which had been running for 13 hours; blocked because its slave went offline due to the labs reboots yesterday [releng]
10:15 <hashar> Creating a bunch of repositories in GitHub to fix Gerrit replication errors [releng]
2016-02-18
21:04 <legoktm> deploying https://gerrit.wikimedia.org/r/271600 [releng]