2015-12-15 §
08:29 <hashar> stopping zuul-merger on gallium for maintenance [releng]
06:19 <legoktm> marked integration-slave-trusty-1017 as offline due to tmpfs issue [releng]
02:38 <Krinkle> beta-mediawiki-config-update-eqiad jobs have been stuck on 'queued' for the past 3 hours [releng]
02:34 <Krinkle> Ran 'sudo rm -rf /mnt/home/jenkins-deploy/tmpfs/jenk*' on ci slaves via salt [releng]
00:57 <thcipriani> marking integration-slave-trusty-1012 offline, strange zuul.cloner behavior. [releng]
2015-12-14 §
20:15 <hashar> scandium restarted zuul-merger [releng]
15:12 <hashar> Stopping zuul-merger daemon on scandium. It lost its disk somehow earlier "DISK CRITICAL - /srv/ssd is not accessible: No such file or directory" https://phabricator.wikimedia.org/T121400#1877725 [releng]
14:09 <hashar> beta and integration: killing redis-servers on Ubuntu instances so they are properly tracked by upstart/puppet ( https://phabricator.wikimedia.org/T121396 ) [releng]
12:59 <hashar> dist-upgrade of all CI slaves [releng]
2015-12-13 §
23:30 <bd808> Ran deployment-bastion:~bd808/cleanup-var-crap.sh and freed 846M on /var [releng]
21:11 <legoktm> deploying https://gerrit.wikimedia.org/r/258784 [releng]
2015-12-11 §
22:18 <hashar> Stopped zuul merger on gallium to have phabricator/extensions populated on scandium (namespacing issue). Restarted zuul-merger on gallium once done. [releng]
22:14 <hashar> On Zuul merger, nuking /srv/ssd/zuul/git/phabricator/extensions so zuul-merger can properly clone phabricator/extensions.git (dir exists because of phabricator/extensions/Sprint.git among others) [releng]
21:55 <hashar> Reloading Zuul to deploy 385ddd9dd906865e7e61c3c5ea85eae0bb522c8d [releng]
15:14 <jzerebecki> ssh integration-slave-trusty-1017.eqiad.wmflabs 'sudo -u jenkins-deploy rm -rf /mnt/home/jenkins-deploy/tmpfs/jenkins-1' [releng]
15:04 <jzerebecki> jenkins-deploy@integration-slave-precise-1011:/mnt/jenkins-workspace/workspace/mwext-Wikibase-client-tests-mysql-zend/src/extensions/Wikibase$ rm .git/refs/heads/mw1.21-wmf6.lock [releng]
10:45 <hashar> salt --show-timeout '*slave*' cmd.run 'rm -fR /mnt/home/jenkins-deploy/tmpfs/jenkins-?/*' [releng]
01:12 <legoktm> deploying https://gerrit.wikimedia.org/r/258395 [releng]
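The 15:04 entry above clears a stale git ref lock by hand. The same cleanup can be sketched generically; the repository path and lock name below are fabricated for illustration (the real fix targeted the Wikibase workspace on integration-slave-precise-1011):

```shell
#!/bin/sh
# Sketch: remove stale .lock files left under .git/refs after an
# interrupted git operation, as in the manual fix logged above.
# Uses a throwaway temp directory standing in for a real workspace.
repo=$(mktemp -d)
mkdir -p "$repo/.git/refs/heads"
touch "$repo/.git/refs/heads/mw1.21-wmf6.lock"   # simulate the stale lock

# Print and delete every leftover lock file under refs/.
find "$repo/.git/refs" -name '*.lock' -print -delete
```

On a real slave this would be run as the job's user (here, jenkins-deploy) so ownership of the refs directory is preserved.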
2015-12-10 §
19:30 <legoktm> marked integration-slave-trusty-1011 as offline, all jobs failing due to tmpfs/lesscache permission denied errors [releng]
15:10 <hashar> deleted nodepool snapshot image ci-jessie-wikimedia-1449740024 [releng]
15:06 <hashar> Image ci-jessie-wikimedia-1449759571 in wmflabs-eqiad is ready [releng]
14:59 <hashar> Refreshing Nodepool snapshot ( doc is https://wikitech.wikimedia.org/wiki/Nodepool#Manually_generate_a_new_snapshot ) [releng]
14:59 <hashar> New image id is 82a708eb-fd1a-4320-a054-6f1d4a319caa [releng]
14:57 <hashar> created new Nodepool base image and pushing it to labs [releng]
09:40 <hashar> Image ci-jessie-wikimedia-1449740024 in wmflabs-eqiad is ready "etcd got upgraded in the snapshot image: etcd (2.0.10-1 => 2.2.1+dfsg-1)" [releng]
09:34 <hashar> Updating Nodepool snapshot image ; setup_node.sh now runs apt-get upgrade ( https://gerrit.wikimedia.org/r/#/c/257940/ ) [releng]
01:27 <legoktm> deploying https://gerrit.wikimedia.org/r/258090 [releng]
2015-12-09 §
23:41 <thcipriani> deleted deployment-kafka03; it doesn't seem to be in use yet and cannot be accessed via salt or ssh by root or anyone [releng]
17:49 <hashar> salt-key --delete deployment-sentry2.eqiad.wmflabs ( already have deployment-sentry2.deployment-prep.eqiad.wmflabs ) [releng]
16:19 <hashar> Image ci-jessie-wikimedia-1449677602 in wmflabs-eqiad is ready ( comes with python-etcd ) [releng]
16:15 <hashar> Refreshing nodepool snapshots will hopefully grab python-etcd ( https://gerrit.wikimedia.org/r/257906 ) [releng]
16:05 <hashar> Image ci-jessie-wikimedia-1449676603 in wmflabs-eqiad is ready [releng]
15:56 <hashar> refreshing nodepool snapshot instance, need a new etcd version [releng]
14:06 <hashar> integration-slave-trusty-1011: sudo rm -fR /mnt/home/jenkins-deploy/tmpfs/jenkins-0 ( https://phabricator.wikimedia.org/T120824 ) [releng]
12:57 <hashar> Upgrading Jenkins Gearman plugin to grab upstream patch https://review.openstack.org/#/c/252768/ 'fix registration for jenkins master' should be noop [releng]
2015-12-08 §
20:31 <hashar> beta cluster instances switching to new ldap configuration [releng]
20:31 <hashar> beta: rebased operations/puppet and locally fixed a conflict [releng]
19:32 <hashar> LDAP got migrated. We might have mwdeploy local users that got created on beta cluster instances :( [releng]
19:29 <hashar> beta: aborted rebase on puppetmaster. [releng]
14:18 <Krinkle> Removed integration-slave-trusty-1012:/mnt/home/jenkins-deploy/tmpfs/jenkins-2 which was left behind by a job. Caused other jobs to fail due to lack of permission to chmod/rm-rf this dir. [releng]
11:48 <hashar> beta: salt-key --delete deployment-cxserver03.eqiad.wmflabs [releng]
11:42 <hashar> running puppet on deployment-restbase01. Catching up on a lot of changes [releng]
11:23 <hashar> puppet catching up a lot of changes on deployment-cache-mobile04 and deployment-cache-text04 [releng]
11:20 <hashar> beta: rebased puppet.git on puppetmaster [releng]
10:58 <hashar> dropped deployment-cache-text04 puppet SSL certificates [releng]
10:44 <hashar> beta: deployment-cache-text04 upgrading openssl libssl1.0.0 [releng]
10:42 <hashar> beta: fixing salt on a bunch of hosts. There are duplicate processes on a few of them. Fix up is: killall salt-minion && rm /var/run/salt-minion.pid && /etc/init.d/salt-minion start [releng]
10:32 <hashar> beta: salt-key --delete=deployment-cache-upload04.eqiad.wmflabs (missing 'deployment-prep' subdomain) [releng]
10:31 <hashar> beta: puppet being fixed on memc04 sentry2 cache-upload04 cxserver03 db1 [releng]
10:27 <hashar> beta: fixing puppet.conf on a bunch of hosts. The [agent] server = deployment-puppetmaster.eqiad.wmflabs is wrong, missing the 'deployment-prep' subdomain [releng]
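The salt-minion fix noted at 10:42 can be wrapped in a small function; this is a minimal sketch, and the DRY_RUN guard is an addition for safe illustration (the logged fix ran the three commands directly as root on each affected host):

```shell
#!/bin/sh
# Sketch of the duplicate salt-minion cleanup from the log entry above:
# kill all minion processes, drop the stale pidfile, restart the service.
# With DRY_RUN=1 the commands are only printed, not executed.
fix_salt_minion() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "killall salt-minion"
        echo "rm /var/run/salt-minion.pid"
        echo "/etc/init.d/salt-minion start"
    else
        killall salt-minion
        rm /var/run/salt-minion.pid
        /etc/init.d/salt-minion start
    fi
}

DRY_RUN=1 fix_salt_minion
```

Removing the pidfile matters because a leftover /var/run/salt-minion.pid lets a second minion start alongside the first, which is how the duplicates arose.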