2015-07-29
23:55 <marxarelli> clearing disk space on integrations-slave-trusty-1012 with `find /mnt/jenkins-workspace/workspace -mindepth 1 -maxdepth 1 -type d -mtime +15 -exec rm -rf {} \;` [releng]
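(For reference, the cleanup one-liner above breaks down as follows; this is the same command as logged, just annotated:)

    # Remove top-level workspace directories untouched for 15+ days:
    #   -mindepth 1 -maxdepth 1   -> only immediate children of the workspace root
    #   -type d                   -> directories only
    #   -mtime +15                -> last modified more than 15 days ago
    #   -exec rm -rf {} \;        -> recursively delete each match
    find /mnt/jenkins-workspace/workspace -mindepth 1 -maxdepth 1 -type d -mtime +15 -exec rm -rf {} \;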
18:15 <bd808> upgraded nutcracker on deployment-jobrunner01 [releng]
18:14 <bd808> upgraded nutcracker on deployment-videoscaler01 [releng]
18:08 <bd808> rm deployment-fluorine:/a/mw-log/archive/*-201506* [releng]
18:08 <bd808> rm deployment-fluorine:/a/mw-log/archive/*-201505* [releng]
18:02 <bd808> rm deployment-videoscaler01:/var/log/atop.log.?* [releng]
16:49 <thcipriani> lots of "Error connecting to 10.68.16.193: Can't connect to MySQL server on '10.68.16.193'" errors, yet deployment-db1 seems up and functional :( [releng]
16:27 <thcipriani> deployment-prep login timeouts, tried restarting apache, hhvm, and nutcracker on mediawiki{01..03} [releng]
14:38 <bblack> cherry-picked https://gerrit.wikimedia.org/r/#/c/215624 (updated to PS8) into deployment-puppetmaster ops/puppet [releng]
14:28 <bblack> cherry-picked https://gerrit.wikimedia.org/r/#/c/215624 into deployment-puppetmaster ops/puppet [releng]
12:38 <hashar_> salt minions are back somehow [releng]
12:36 <hashar_> salt on deployment-salt is missing most of the instances :-((( [releng]
03:00 <ostriches> deployment-bastion: please please someone rebuild me to not have a stupid 2G /var partition [releng]
02:59 <ostriches> deployment-bastion: purged a bunch of atop and pacct logs, plus the apt cache; /var was clogging up again. [releng]
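(A quick way to see what is eating a small /var before purging; a sketch — the paths are the usual Ubuntu locations for the logs named above, not confirmed from the host:)

    # largest consumers directly under /var, staying on one filesystem
    du -xsh /var/* 2>/dev/null | sort -rh | head
    # the offenders named above: rotated atop logs, process accounting, apt cache
    ls -lh /var/log/atop.log.?* /var/log/account/pacct* 2>/dev/null
    sudo apt-get clean    # empties /var/cache/apt/archives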
02:34 <legoktm> deploying https://gerrit.wikimedia.org/r/227640 [releng]
2015-07-28
23:43 <marxarelli> running `jenkins-jobs update config/ 'mwext-mw-selenium'` to deploy I7afa07e9f559bffeeebaf7454cc6b39a37e04063 [releng]
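(The `jenkins-jobs update` invocations throughout this log are standard Jenkins Job Builder usage; a minimal sketch, with the `output/` directory purely illustrative:)

    # render the job XML locally first to sanity-check the change
    jenkins-jobs test config/ 'mwext-mw-selenium' -o output/
    # then push only the named job(s) to the Jenkins master
    jenkins-jobs update config/ 'mwext-mw-selenium'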
21:05 <bd808> upgraded nutcracker on mediawiki03 [releng]
21:04 <bd808> upgraded nutcracker on mediawiki02 [releng]
21:01 <bd808> upgraded nutcracker on mediawiki01 [releng]
19:49 <jzerebecki> reloading zuul b1b2cab..b02830e [releng]
11:18 <hashar> Assigning label "BetaClusterBastion" to https://integration.wikimedia.org/ci/computer/deployment-bastion.eqiad/ [releng]
11:12 <hashar> Jenkins jobs for the beta cluster ended up stuck again. Found a workaround by removing the Jenkins label on deployment-bastion node and reinstating it. Seems to get rid of the deadlock ( ref: https://phabricator.wikimedia.org/T72597#1487801 ) [releng]
09:50 <hashar> deployment-apertium01 is back! The ferm rules were outdated and not maintained by puppet, so ferm was dropped entirely. [releng]
09:40 <hashar> rebooting deployment-apertium01 to ensure its ferm rules are properly loaded on boot ( https://phabricator.wikimedia.org/T106658 ) [releng]
00:46 <legoktm> deploying https://gerrit.wikimedia.org/r/227383 [releng]
2015-07-27
23:04 <marxarelli> running `jenkins-jobs update config/ 'browsertests-*'` to deploy I3c61ff4089791375e21aadfa045d503dfd73ca0e [releng]
13:26 <hashar> Precise slaves had a faulty elasticsearch install; fixed with `apt-get install --reinstall elasticsearch` [releng]
13:21 <hashar> puppet stalled on Precise Jenkins slaves :-( [releng]
08:52 <hashar> upgrading packages on Precise slaves [releng]
08:49 <hashar> rebooting all Trusty jenkins slaves [releng]
08:39 <hashar> upgrading python-pip on Trusty from 1.5.4-1ubuntu1 to 1.5.4-1ubuntu3. Fixes pip silently removing system packages ( https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=771794 ) [releng]
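(Pinning to the fixed build can be done explicitly with apt; a sketch using the versions from the entry above:)

    apt-cache policy python-pip                      # confirm 1.5.4-1ubuntu3 is available
    sudo apt-get install python-pip=1.5.4-1ubuntu3   # install that exact version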
08:12 <hashar> On CI slaves, bumping HHVM from 3.6.1+dfsg1-1+wm3 to 3.6.5+dfsg1-1+wm1 [releng]
08:11 <hashar> apt-get upgrade Trusty Jenkins slaves [releng]
2015-07-24
17:35 <marxarelli> updating integration slave scripts from integration-saltmaster to deploy I6906fadede546ce2205797da1c6b267aed586e17 [releng]
17:17 <marxarelli> running `jenkins-jobs update config/ 'mediawiki-selenium-integration' 'mwext-mw-selenium'` to deploy Ib289d784c7b3985bd4823d967fbc07d5759dc756 [releng]
17:05 <marxarelli> running `jenkins-jobs update config/ 'mediawiki-selenium-integration'` to deploy and test Ib289d784c7b3985bd4823d967fbc07d5759dc756 [releng]
17:04 <hashar> integration-saltmaster, in a `screen`: salt -b 1 '*slave*' cmd.run '/usr/local/sbin/puppet-run' | tee hashar-massrun.log [releng]
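(The 17:03 attempt below was cancelled because `... & && disown` is not valid shell syntax; running the command in the foreground inside a screen session sidesteps backgrounding entirely. The working command, annotated:)

    screen -S massrun    # session survives SSH disconnects; the name is illustrative
    # -b 1      : salt batch mode, run on one matching minion at a time
    # '*slave*' : glob targeting all CI slave minions
    # tee       : keep a local log of all output
    salt -b 1 '*slave*' cmd.run '/usr/local/sbin/puppet-run' | tee hashar-massrun.log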
17:04 <hashar> cancelled last command [releng]
17:03 <hashar> integration-saltmaster : salt -b 1 '*slave*' cmd.run '/usr/local/sbin/puppet-run' & && disown && exit [releng]
16:55 <hashar> Might have fixed the puppet/pip mess on CI slaves by creating a symlink from /usr/bin/pip to /usr/local/bin/pip ( https://gerrit.wikimedia.org/r/#/c/226729/1..2/modules/contint/manifests/packages/python.pp,unified ) [releng]
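(Expressed as plain shell, the effect of the linked puppet change is presumably something like this sketch:)

    sudo ln -sf /usr/local/bin/pip /usr/bin/pip   # distro path now points at the PyPI pip
    readlink /usr/bin/pip                         # -> /usr/local/bin/pip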
16:36 <hashar> puppet on Jenkins slaves might have some intermittent issues due to pip installation https://gerrit.wikimedia.org/r/226729 [releng]
15:29 <hashar> removing pip obsolete download-cache setting ( https://gerrit.wikimedia.org/r/#/c/226730/ ) [releng]
15:27 <hashar> upgrading pip to 7.1.0 via PyPI ( https://gerrit.wikimedia.org/r/#/c/226729/ ). Revert plan: un-cherry-pick the patch on the puppetmaster and run `pip uninstall pip` [releng]
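(As shell, the upgrade and its revert plan; reinstalling the distro package afterwards is an assumption, not stated in the log:)

    sudo pip install --upgrade pip==7.1.0           # pull 7.1.0 from PyPI
    # revert: un-cherry-pick the patch on the puppetmaster, then:
    sudo pip uninstall -y pip
    sudo apt-get install --reinstall python-pip     # assumed: restore the distro pip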
12:46 <hashar> Jenkins: switching gearman plugin from our custom compiled 0.1.1-9-g08e9c42-change_192429_2 to upstream 0.1.2. The two builds are actually identical. [releng]
08:40 <hashar> upgrading zuul to zuul_2.0.0-327-g3ebedde-wmf3precise1 to fix a regression ( https://phabricator.wikimedia.org/T106531 ) [releng]
08:39 <hashar> upgrading zuul [releng]
2015-07-23
23:03 <marxarelli> running `jenkins-jobs update config/ 'browsertests-*'` to deploy I2d0f83d0c6a406d46627578cb8db0706d1b8655d [releng]
16:38 <marxarelli> Reloading Zuul to deploy I96b6218a208f133209452c71bcf01a1088305aea [releng]
15:40 <urandom> applied wip logstash & cassandra changes (https://gerrit.wikimedia.org/r/#/c/226025/) to deployment-prep [releng]
13:24 <hashar> apt-get upgrade integration-puppetmaster and rebooting it [releng]