2015-12-16 §
13:15 <hashar> Gerrit: on mediawiki/services/mathoid, force-pushed the gh-pages branch from GitHub to the Gerrit repo. Attempting to fix a Gerrit replication issue ( https://phabricator.wikimedia.org/T121635 ) [releng]
2015-12-15 §
22:57 <hashar> On scandium, created the /srv/ssd/zuul/git/wikimedia/fundraising/crm repo manually. A namespace conflict with wikimedia/fundraising/crm/civicrm.git prevented zuul-merger from cloning the crm repo [releng]
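A sketch of such a manual clone on the zuul-merger host; the clone URL, the non-bare layout, and the zuul ownership are assumptions rather than details from the log:

    # assumed: zuul-merger keeps plain (non-bare) clones under /srv/ssd/zuul/git
    sudo mkdir -p /srv/ssd/zuul/git/wikimedia/fundraising
    sudo git clone https://gerrit.wikimedia.org/r/wikimedia/fundraising/crm \
        /srv/ssd/zuul/git/wikimedia/fundraising/crm
    sudo chown -R zuul:zuul /srv/ssd/zuul/git/wikimedia/fundraising/crm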
16:33 <hashar> cleared /mnt/home/jenkins-deploy/tmpfs/jenkins-2 on integration-slave-trusty-1017 and added the slave back to the pool [releng]
13:45 <hashar> reverted composer upgrade on CI with https://gerrit.wikimedia.org/r/#/c/259241/ [releng]
13:37 <hashar> bumping composer on CI to 1.0.0-alpha11 https://gerrit.wikimedia.org/r/#/c/258933/ [releng]
08:47 <hashar> restarted zuul-merger on gallium [releng]
08:29 <hashar> stopping zuul-merger on gallium for maintenance [releng]
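For reference, a sketch of a zuul-merger stop/start cycle on a host such as gallium; the service name and init integration are assumptions:

    sudo service zuul-merger stop
    # ... maintenance ...
    sudo service zuul-merger start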
06:19 <legoktm> marked integration-slave-trusty-1017 as offline due to tmpfs issue [releng]
02:38 <Krinkle> beta-mediawiki-config-update-eqiad jobs have been stuck on 'queued' for the past 3 hours [releng]
02:34 <Krinkle> Ran 'sudo rm -rf /mnt/home/jenkins-deploy/tmpfs/jenk*' on ci slaves via salt [releng]
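A sketch of the salt invocation behind this cleanup, modeled on the 2015-12-11 entry further below; the '*slave*' targeting pattern is an assumption:

    # cmd.run already executes as root, so no sudo is needed inside the command
    salt --show-timeout '*slave*' cmd.run 'rm -rf /mnt/home/jenkins-deploy/tmpfs/jenk*'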
00:57 <thcipriani> marking integration-slave-trusty-1012 offline, strange zuul.cloner behavior. [releng]
2015-12-14 §
20:15 <hashar> restarted zuul-merger on scandium [releng]
15:12 <hashar> Stopping zuul-merger daemon on scandium. It lost its disk somehow earlier "DISK CRITICAL - /srv/ssd is not accessible: No such file or directory" https://phabricator.wikimedia.org/T121400#1877725 [releng]
14:09 <hashar> beta and integration: killing redis-servers on Ubuntu instances so they are properly tracked by upstart/puppet ( https://phabricator.wikimedia.org/T121396 ) [releng]
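A sketch of the per-instance steps, assuming the redis daemons were running outside upstart and that puppet brings them back under upstart control; process and service names are assumptions:

    sudo pkill -f redis-server     # stop the untracked daemon
    sudo puppet agent --test       # puppet restarts redis via upstart so it is tracked again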
12:59 <hashar> dist-upgrade of all CI slaves [releng]
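A sketch of driving a fleet-wide dist-upgrade from the salt master; the targeting pattern and non-interactive flags are assumptions:

    salt --show-timeout '*slave*' cmd.run \
        'apt-get -qq update && DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade'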
2015-12-13 §
23:30 <bd808> Ran deployment-bastion:~bd808/cleanup-var-crap.sh and freed 846M on /var [releng]
21:11 <legoktm> deploying https://gerrit.wikimedia.org/r/258784 [releng]
2015-12-11 §
22:18 <hashar> Stopped zuul-merger on gallium so that phabricator/extensions could be populated on scandium (namespacing issue). Restarted zuul-merger on gallium once done. [releng]
22:14 <hashar> On Zuul merger, nuking /srv/ssd/zuul/git/phabricator/extensions so zuul-merger can properly clone phabricator/extensions.git (dir exists because of phabricator/extensions/Sprint.git among others ) [releng]
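Taken together, the two entries above amount to roughly this sequence on a zuul-merger host; the service management commands are assumptions:

    sudo service zuul-merger stop
    sudo rm -rf /srv/ssd/zuul/git/phabricator/extensions
    sudo service zuul-merger start
    # zuul-merger can then clone phabricator/extensions.git into the now-free path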
21:55 <hashar> Reloading Zuul to deploy 385ddd9dd906865e7e61c3c5ea85eae0bb522c8d [releng]
15:14 <jzerebecki> ssh integration-slave-trusty-1017.eqiad.wmflabs 'sudo -u jenkins-deploy rm -rf /mnt/home/jenkins-deploy/tmpfs/jenkins-1' [releng]
15:04 <jzerebecki> jenkins-deploy@integration-slave-precise-1011:/mnt/jenkins-workspace/workspace/mwext-Wikibase-client-tests-mysql-zend/src/extensions/Wikibase$ rm .git/refs/heads/mw1.21-wmf6.lock [releng]
10:45 <hashar> salt --show-timeout '*slave*' cmd.run 'rm -fR /mnt/home/jenkins-deploy/tmpfs/jenkins-?/*' [releng]
01:12 <legoktm> deploying https://gerrit.wikimedia.org/r/258395 [releng]
2015-12-10 §
19:30 <legoktm> marked integration-slave-trusty-1011 as offline, all jobs failing due to tmpfs/lesscache permission denied errors [releng]
15:10 <hashar> deleted all copies of Nodepool snapshot image ci-jessie-wikimedia-1449740024 [releng]
15:06 <hashar> Image ci-jessie-wikimedia-1449759571 in wmflabs-eqiad is ready [releng]
14:59 <hashar> Refreshing Nodepool snapshot ( doc is https://wikitech.wikimedia.org/wiki/Nodepool#Manually_generate_a_new_snapshot ) [releng]
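A sketch of the refresh using the nodepool CLI of that era; the provider/image names follow the log, but the exact subcommand syntax is an assumption (see the linked wikitech page):

    nodepool image-update wmflabs-eqiad ci-jessie-wikimedia   # build and upload a fresh snapshot
    nodepool image-list                                       # wait for the new image to be 'ready'
    nodepool image-delete <old-snapshot-id>                   # then retire the previous snapshot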
14:59 <hashar> New image id is 82a708eb-fd1a-4320-a054-6f1d4a319caa [releng]
14:57 <hashar> created a new Nodepool base image; pushing it to labs [releng]
09:40 <hashar> Image ci-jessie-wikimedia-1449740024 in wmflabs-eqiad is ready "etcd got upgraded in the snapshot image: etcd (2.0.10-1 => 2.2.1+dfsg-1)" [releng]
09:34 <hashar> Updating Nodepool snapshot image ; setup_node.sh now runs apt-get upgrade ( https://gerrit.wikimedia.org/r/#/c/257940/ ) [releng]
01:27 <legoktm> deploying https://gerrit.wikimedia.org/r/258090 [releng]
2015-12-09 §
23:41 <thcipriani> deleting deployment-kafka03: it doesn't seem to be in use yet and cannot be accessed via salt or ssh, by root or anyone else [releng]
17:49 <hashar> salt-key --delete deployment-sentry2.eqiad.wmflabs ( already have deployment-sentry2.deployment-prep.eqiad.wmflabs ) [releng]
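A sketch of pruning a stale salt key on the salt master, using standard salt-key flags:

    salt-key -L                                          # list accepted keys
    salt-key --delete deployment-sentry2.eqiad.wmflabs   # drop the stale duplicate
    salt-key -L | grep deployment-sentry2                # only the .deployment-prep. key should remain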
16:19 <hashar> Image ci-jessie-wikimedia-1449677602 in wmflabs-eqiad is ready ( comes with python-etcd ) [releng]
16:15 <hashar> Refreshing nodepool snapshots; will hopefully grab python-etcd ( https://gerrit.wikimedia.org/r/257906 ) [releng]
16:05 <hashar> Image ci-jessie-wikimedia-1449676603 in wmflabs-eqiad is ready [releng]
15:56 <hashar> refreshing the nodepool snapshot image; need a new etcd version [releng]
14:06 <hashar> integration-slave-trusty-1011: sudo rm -fR /mnt/home/jenkins-deploy/tmpfs/jenkins-0 ( https://phabricator.wikimedia.org/T120824 ) [releng]
12:57 <hashar> Upgrading the Jenkins Gearman plugin to grab upstream patch https://review.openstack.org/#/c/252768/ 'fix registration for jenkins master'; should be a noop [releng]
2015-12-08 §
20:31 <hashar> beta cluster instances switching to new ldap configuration [releng]
20:31 <hashar> beta: rebased operations/puppet and locally fixed a conflict [releng]
19:32 <hashar> LDAP got migrated. We might have mwdeploy local users that got created on beta cluster instances :( [releng]
19:29 <hashar> beta: aborted rebase on puppetmaster. [releng]
14:18 <Krinkle> Removed integration-slave-trusty-1012:/mnt/home/jenkins-deploy/tmpfs/jenkins-2 which was left behind by a job. Caused other jobs to fail due to lack of permission to chmod/rm-rf this dir. [releng]
11:48 <hashar> beta: salt-key --delete deployment-cxserver03.eqiad.wmflabs [releng]
11:42 <hashar> running puppet on deployment-restbase01 to catch up on a lot of changes [releng]
11:23 <hashar> puppet catching up on a lot of changes on deployment-cache-mobile04 and deployment-cache-text04 [releng]
11:20 <hashar> beta: rebased puppet.git on puppetmaster [releng]
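The rebases mentioned in several entries above are typically done on the deployment-prep puppetmaster roughly as follows; the repository path and remote/branch names are assumptions:

    cd /var/lib/git/operations/puppet
    sudo git fetch origin
    sudo git rebase origin/production   # replay local cherry-picks on top of upstream
    # on conflicts: fix and 'git rebase --continue', or 'git rebase --abort' (as on 2015-12-08)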