2016-08-04
17:03 <legoktm> jstart -N qamorebots /usr/lib/adminbot/adminlogbot.py --config ./confs/qa-logbot.py [releng]
16:53 <thcipriani> restarted the puppetmaster service on integration-puppetmaster; evidently it had OOM'd. [releng]
06:05 <Amir1> restarting uwsgi-ores and celery-ores-worker in deployment-sca03 [releng]
06:04 <Amir1> restarting redis-instance-tcp_6379, redis-instance-tcp_6380, and redis-server services in deployment-ores-redis [releng]
06:03 <Amir1> ran puppet agent in deployment-ores-redis [releng]
06:01 <Amir1> ran puppet agent in deployment-sca03 [releng]
05:54 <Amir1> deploying 616707c to ores [releng]
2016-08-03
17:12 <thcipriani> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/#/c/301370 [releng]
09:25 <legoktm> deploying https://gerrit.wikimedia.org/r/302665 [releng]
05:22 <Krinkle> Jenkins job beta-mediawiki-config-update-eqiad has been stuck and has not run for 6 hours [releng]
2016-08-02
21:32 <thcipriani> re-enabling puppet on scap targets [releng]
21:28 <thcipriani> disabling puppet on scap targets briefly to test scap_3.2.2-1_all.deb [releng]
14:02 <gehel> deployment-prep rebooting deployment-elastic06 (unresponsive to SSH and Salt) [releng]
2016-08-01
20:28 <thcipriani> restarting deployment-ms-be01, not responding to ssh, mw-fe01 requests timing out [releng]
08:28 <Amir1> deploying fedd675 to ores in sca03 [releng]
2016-07-29
23:27 <bd808> Rebooting deployment-logstash2; Console showed hung task timeouts (P3606) [releng]
15:55 <hasharAway> pooled Jenkins slave integration-slave-jessie-1003 [10.68.21.145] [releng]
14:02 <hashar> deployment-prep / beta : added addshore to the project [releng]
13:24 <hashar> created integration-slave-jessie-1003 m1.medium to help processing debian-glue jobs [releng]
13:01 <hashar> Upgrading Zuul on jessie slaves using https://people.wikimedia.org/~hashar/debs/zuul_2.1.0-391-gbc58ea3-jessie/zuul_2.1.0-391-gbc58ea3-wmf2jessie1_amd64.deb [releng]
12:53 <hashar> Upgrading Zuul on precise slaves using https://people.wikimedia.org/~hashar/debs/zuul_2.1.0-391-gbc58ea3/zuul_2.1.0-391-gbc58ea3-wmf2precise1_amd64.deb [releng]
09:38 <hashar> Upgrading Zuul to get rid of a forced sleep(300) whenever a patch is merged T93812. zuul_2.1.0-391-gbc58ea3-wmf2precise1 [releng]
2016-07-28
12:18 <hashar> installed 2.1.0-391-gbc58ea3-wmf1jessie1 on zuul-dev-jessie.integration.eqiad.wmflabs T140894 [releng]
09:46 <hashar> Nodepool: Image ci-trusty-wikimedia-1469698821 in wmflabs-eqiad is ready [releng]
09:35 <hashar> Regenerated Nodepool image for Trusty. The snapshot failed while upgrading grub-pc for some reason. Noticed with thcipriani yesterday [releng]
2016-07-27
16:12 <hashar> salt -v '*slave-trusty*' cmd.run 'service mysql start' ( was missing on integration-slave-trusty-1011.integration.eqiad.wmflabs ) [releng]
14:03 <hashar> upgraded zuul on gallium via dpkg -i /root/zuul_2.1.0-391-gbc58ea3-wmf1precise1_amd64.deb (revert is zuul_2.1.0-151-g30a433b-wmf4precise1_amd64.deb ) [releng]
12:43 <hashar> restarted Jenkins for some trivial plugins updates [releng]
12:35 <hashar> hard rebooting integration-slave-trusty-1011 from Horizon. ssh lost, no log in Horizon. [releng]
09:46 <hashar> manually triggered debian-glue on all operations/debs repo that had no jenkins-bot vote. Via zuul enqueue on gallium and list fetched from "gerrit query --current-patch-set 'is:open NOT label:verified=2,jenkins-bot project:^operations/debs/.*'|egrep '(ref|project):'" [releng]
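The entry above builds its re-enqueue list by piping `gerrit query` output through `egrep` to keep only the `project:` and `ref:` lines. A minimal sketch of that filtering step, using made-up sample output (the change id, project, and ref below are illustrative, not real query results):

```shell
# Stand-in for the output of:
#   gerrit query --current-patch-set 'is:open NOT label:verified=2,jenkins-bot project:^operations/debs/.*'
# The values are hypothetical; only the line shapes matter here.
sample_output='change I0000000000000000000000000000000000000000
  project: operations/debs/adminbot
  ref: refs/changes/10/301010/1
status NEW'

# Keep just the project/ref pairs, as in the log entry above.
printf '%s\n' "$sample_output" | egrep '(ref|project):'
```

Each surviving project/ref pair is what would then be fed to `zuul enqueue` on gallium.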
06:20 <TimStarling> created instance deployment-depurate01 for testing of role::html5depurate [releng]
2016-07-26
20:13 <hashar> Zuul deployed https://gerrit.wikimedia.org/r/301093 which adds 'debian-glue' job on all of operations/debs/ repos [releng]
18:10 <ostriches> zuul: reloading to pick up config change [releng]
12:49 <godog> cherry-pick https://gerrit.wikimedia.org/r/#/c/300827/ on deployment-puppetmaster [releng]
11:59 <legoktm> also pulled in I73f01f87b06b995bdd855628006225879a17fee5 [releng]
11:59 <legoktm> deploying https://gerrit.wikimedia.org/r/301109 [releng]
11:37 <hashar> rebased integration puppetmaster git repo [releng]
11:31 <hashar> enable puppet agent on integration-puppetmaster . Had it disabled while hacking on https://gerrit.wikimedia.org/r/#/c/300830/ [releng]
08:42 <hashar> T141269 On integration-slave-trusty-1018 , deleting workspace that has a corrupt git: rm -fR /mnt/jenkins-workspace/workspace/mediawiki-extensions-hhvm* [releng]
01:08 <Amir1> deployed ores a291da1 in sca03, ores-beta.wmflabs.org works as expected [releng]
2016-07-25
22:45 <legoktm> restarting zuul due to depends-on lockup [releng]
14:24 <godog> bounce puppetmaster on deployment-puppetmaster [releng]
13:17 <godog> cherry-pick https://gerrit.wikimedia.org/r/#/c/300827/ on deployment-puppetmaster [releng]
2016-07-23
20:06 <bd808> Cleanup jobrunner01 logs via -- sudo logrotate --force /etc/logrotate.d/mediawiki_jobrunner [releng]
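The forced rotation above relies on the logrotate stanza shipped for the jobrunner. A hypothetical minimal `/etc/logrotate.d/mediawiki_jobrunner` (the real file on jobrunner01 may differ) would look like:

```
# Illustrative config only; paths and retention are assumptions.
/var/log/mediawiki/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
```

`logrotate --force` rotates immediately regardless of the `daily` schedule, which is why it works for one-off log cleanup.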
20:03 <bd808> Deleted jobqueues in redis with no matching wikis: ptwikibooks, labswiki [releng]
19:20 <bd808> jobrunner01 spamming /var/log/mediawiki with attempts to process jobs for wiki=labswiki [releng]
2016-07-22
20:26 <hashar> T141114 upgraded jenkins-debian-glue from v0.13.0 to v0.17.0 on integration-slave-jessie-1001 and integration-slave-jessie-1002 [releng]
19:07 <thcipriani> beta-cluster has successfully used a canary for mediawiki deployments [releng]
16:53 <thcipriani> bumping scap to v3.2.1 on deployment-tin to test canary deploys, again [releng]