2016-03-04
02:49 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/274889 [releng]
00:11 <Krinkle> salt -v --show-timeout '*slave*' cmd.run "bash -c 'cd /srv/deployment/integration/slave-scripts; git pull'" [releng]
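For context, the salt one-liner above fans the same shell command out to every minion matching '*slave*'. A minimal dry-run sketch of the equivalent per-host loop; the host names here are hypothetical examples, and the command is echoed rather than executed:

```shell
#!/bin/bash
# Dry-run sketch of what the salt one-liner does: run the same command on
# every host matching '*slave*'. Host names below are hypothetical examples.
slaves="integration-slave-trusty-1001 integration-slave-precise-1002"
n=0
for host in $slaves; do
  # salt runs this via cmd.run on each minion; echoed here instead of executed
  echo "$host: bash -c 'cd /srv/deployment/integration/slave-scripts; git pull'"
  n=$((n + 1))
done
```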
2016-03-03
23:37 <legoktm> salt -v --show-timeout '*slave*' cmd.run "bash -c 'cd /srv/deployment/integration/slave-scripts; git pull'" [releng]
22:34 <legoktm> mysql not running on integration-slave-precise-1002, manually starting (T109704) [releng]
22:30 <legoktm> mysql not running on integration-slave-precise-1011, manually starting (T109704) [releng]
22:19 <legoktm> mysql not running on integration-slave-precise-1012, manually starting (T109704) [releng]
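The three entries above are the same manual recovery for T109704 applied to three Precise slaves. A hedged sketch of the check behind it; the process name and the service command are assumptions based on Ubuntu Precise defaults, and the start command is only printed here:

```shell
#!/bin/bash
# Hedged sketch of the manual recovery logged three times above (T109704).
# Process name and service command are assumptions (Precise-era defaults).
if pgrep -x mysqld >/dev/null 2>&1; then
  status="running"
else
  status="down"
  # Operator action from the log, shown as a dry run:
  echo "mysql not running, would run: sudo service mysql start"
fi
echo "mysql status: $status"
```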
22:07 <legoktm> deploying https://gerrit.wikimedia.org/r/274821 [releng]
21:58 <Krinkle> Reloading Zuul to deploy (EventLogging and AdminLinks) https://gerrit.wikimedia.org/r/274821 [releng]
18:49 <thcipriani> killing deployment-bastion since it is no longer used [releng]
14:23 <hashar> https://integration.wikimedia.org/ci/computer/integration-slave-trusty-1011/ is out of disk space [releng]
2016-03-02
16:22 <jzerebecki> reloading zuul for 9398fa1..943f17b [releng]
10:38 <hashar> Zuul should no longer be caught in a death loop due to Depends-On on an event-schemas change. Hole filled with https://gerrit.wikimedia.org/r/#/c/274356/ T128569 [releng]
08:53 <hashar> gerrit set-account Jsahleen --inactive T108854 [releng]
01:19 <thcipriani> force restarting zuul because the queue is very stuck https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Restart [releng]
01:13 <thcipriani> following steps for gearman deadlock: https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Known_issues [releng]
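The two entries above follow the Gearman-deadlock runbook linked in the log. A dry-run sketch of that recovery; the service names and the leftover-geard cleanup are assumptions, and the authoritative steps are on the wiki page:

```shell
#!/bin/bash
# Dry-run sketch of the Gearman-deadlock recovery referenced in the entries
# above. Unit names and the leftover-geard cleanup are assumptions; the
# authoritative procedure is the wiki runbook linked in the log.
steps=(
  "service zuul stop"
  "pkill -f geard || true"
  "service zuul start"
)
for s in "${steps[@]}"; do
  echo "would run: $s"   # echoed only; never executed here
done
```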
2016-03-01
23:10 <Krinkle> Updated Jenkins configuration to also support php5 and hhvm for Console Sections detection of "PHPUnit" [releng]
17:05 <hashar> gerrit: set accounts inactive for Eloquence and Mgrover. Former WMF employees; their mail bounces back [releng]
16:41 <hashar> Restarted Jenkins [releng]
16:32 <hashar> A bunch of Jenkins jobs got stalled because I killed threads in Jenkins to unblock integration-slave-trusty-1003 :-( [releng]
12:14 <hashar> integration-slave-trusty-1003 is back online [releng]
12:13 <hashar> Might have killed the proper Jenkins thread to unlock integration-slave-trusty-1003 [releng]
12:03 <hashar> Jenkins cannot pull integration-slave-trusty-1003 back online. The Jenkins master has a bunch of blocking threads piling up, with hudson.plugins.sshslaves.SSHLauncher.afterDisconnect() locked somehow [releng]
11:41 <hashar> Rebooting integration-slave-trusty-1003 (does not reply to salt / ssh) [releng]
10:34 <hashar> Image ci-jessie-wikimedia-1456827861 in wmflabs-eqiad is ready [releng]
10:24 <hashar> Refreshing Nodepool snapshot instances [releng]
10:22 <hashar> Refreshing Nodepool base image to speed instances boot time (dropping open-iscsi package https://gerrit.wikimedia.org/r/#/c/273973/ ) [releng]
2016-02-29
16:23 <hashar> salt -v '*slave*' cmd.run 'rm -fR /mnt/jenkins-workspace/workspace/mwext*jslint' T127362 [releng]
16:17 <hashar> Deleting all mwext-.*-jslint jobs from Jenkins. Paladox has migrated all of them to jshint/jsonlint generic jobs T127362 [releng]
09:46 <hashar> Jenkins installing Yaml Axis Plugin 0.2.0 [releng]
2016-02-28
01:30 <Krinkle> Rebooting integration-slave-precise-1012 – Might help T109704 (MySQL not running) [releng]
2016-02-27
03:24 <jzerebecki> works again, but lost queued jobs [releng]
03:20 <jzerebecki> that made it worse, restarting zuul [releng]
03:18 <jzerebecki> trying reload [releng]
03:16 <jzerebecki> no luck. different problem. [releng]
03:12 <jzerebecki> trying https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Gearman_deadlock [releng]
00:51 <jzerebecki> salt -v --show-timeout '*slave*' cmd.run "bash -c 'cd /srv/deployment/integration/slave-scripts; git pull'" T128191 [releng]
2016-02-26
15:14 <jzerebecki> salt -v --show-timeout '*slave*' cmd.run "bash -c 'cd /srv/deployment/integration/slave-scripts; git pull'" T128191 [releng]
15:14 <jzerebecki> salt -v --show-timeout '*slave*' cmd.run "bash -c 'cd /srv/deployment/integration/slave-scripts; git pull'" [releng]
14:44 <hashar> (since it started, don't be that scared!) [releng]
14:44 <hashar> Nodepool has triggered 40 000 instances [releng]
11:53 <hashar> Restarted memcached on deployment-memc02 T128177 [releng]
11:53 <hashar> memcached process on deployment-memc02 seems to have a nice leak of socket usage (from lost) and plainly refuses connections (bunch of CLOSE_WAIT) T128177 [releng]
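A pile of CLOSE_WAIT sockets means the remote end closed the connection but the local process never did, which matches the leak described above. A quick way to count them; `ss` is from iproute2, and older hosts may need `netstat -tan` instead:

```shell
#!/bin/bash
# Count sockets stuck in CLOSE_WAIT, the leak symptom described above.
# 'ss' state filters are from iproute2; tail strips the header line.
count=$(ss -tan state close-wait 2>/dev/null | tail -n +2 | wc -l)
echo "CLOSE_WAIT sockets: $count"
```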
11:40 <hashar> deployment-memc04: find /etc/apt -name '*proxy' -delete (proxy config prevented apt-get update) [releng]
11:26 <hashar> beta: salt -v '*' cmd.run 'apt-get -y install ruby-msgpack' . I am tired of seeing puppet debug messages: "Debug: Failed to load library 'msgpack' for feature 'msgpack'" [releng]
11:24 <hashar> puppet keeps restarting nutcracker apparently T128177 [releng]
11:20 <hashar> Memcached error for key "enwiki:flow_workflow%3Av2%3Apk:63dc3cf6a7184c32477496d63c173f9c:4.8" on server "127.0.0.1:11212": SERVER HAS FAILED AND IS DISABLED UNTIL TIMED RETRY [releng]
2016-02-25
22:38 <hashar> beta: maybe deployment-jobrunner01 is processing jobs a bit faster now. Seems like hhvm went wild [releng]
22:23 <hashar> beta: jobrunner01 had apache/hhvm killed somehow .... Blame me [releng]