2016-08-08
23:33 <TimStarling> deleted instance deployment-depurate01 [releng]
16:19 <bd808> Manually cleaned up root@logstash02 cronjobs related to logstash03 [releng]
14:39 <Amir1> deploying d00159c for ores in sca03 [releng]
10:14 <Amir1> deploying 616707c into sca03 (for ores) [releng]
2016-08-07
12:01 <hashar> Nodepool: can't spawn instances due to: Forbidden: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances (HTTP 403) [releng]
12:01 <hashar> nodepool: deleted servers stuck in "used" states for roughly 4 hours (using: nodepool list , then nodepool delete <id>) [releng]
11:54 <hashar> Nodepool: can't spawn instances due to: Forbidden: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances (HTTP 403) [releng]
11:54 <hashar> nodepool: deleted servers stuck in "used" states for roughly 4 hours (using: nodepool list , then nodepool delete <id>) [releng]
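A minimal sketch of the cleanup described in the entries above, run on the Nodepool host (the host and sudo usage are assumptions; the nodepool subcommands are the ones named in the entry):

    # List all nodes with their state and age
    nodepool list
    # For each node id that has been sitting in the "used" state for hours:
    nodepool delete <id>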
2016-08-06
12:31 <Amir1> restarting uwsgi-ores and celery-ores-worker in deployment-sca03 [releng]
12:28 <Amir1> cherry-picked 303356/1 into the puppetmaster [releng]
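A sketch of how such a cherry-pick is typically applied on the beta cluster's self-hosted puppetmaster; the clone path and the exact refs/changes ref for change 303356 patchset 1 are assumptions:

    cd /var/lib/git/operations/puppet     # assumed location of the puppet clone on the puppetmaster
    sudo git fetch https://gerrit.wikimedia.org/r/operations/puppet refs/changes/56/303356/1
    sudo git cherry-pick FETCH_HEAD       # apply patchset 1 of change 303356 on top of the local branch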
12:00 <Amir1> restarting uwsgi-ores and celery-ores-worker in deployment-sca03 [releng]
2016-08-05
17:54 <bd808> Cherry-picked https://gerrit.wikimedia.org/r/#/c/299825/3 for testing [releng]
17:50 <bd808> Removed stale cherry-picks for https://gerrit.wikimedia.org/r/#/c/302303/ and https://gerrit.wikimedia.org/r/#/c/300458/ that were blocking git rebase [releng]
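A hedged sketch of clearing stale cherry-picks so the local branch can rebase again; the commands are generic git, and the clone path and upstream branch name (production) are assumptions:

    cd /var/lib/git/operations/puppet
    git log --oneline origin/production..HEAD   # list the local cherry-picks sitting on top of upstream
    git rebase -i origin/production             # in the editor, drop the stale picks; the rest are replayed cleanly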
00:41 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/303113 [releng]
00:31 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/300068 [releng]
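The reload step itself is not spelled out in these entries; a plausible sketch, assuming the deployed layout is a checkout of integration/config on the Zuul server and that the init script's reload action sends the Zuul 2.x scheduler its reconfigure signal (path and service name are assumptions):

    cd /etc/zuul/wikimedia && sudo git pull   # assumed location of the deployed integration/config layout
    sudo service zuul reload                  # Zuul 2.x re-reads its layout on reload (SIGHUP to zuul-server)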
2016-08-04
20:07 <marxarelli> Running jenkins-jobs update config/ 'selenium-*' to deploy https://gerrit.wikimedia.org/r/#/c/302775/ [releng]
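For context, a common Jenkins Job Builder flow is to render the affected jobs locally before pushing them; the output directory below is arbitrary and the test step is an assumption, not part of the logged command:

    # Preview the rendered job XML locally (writes one file per job, changes nothing on the master)
    jenkins-jobs test config/ -o /tmp/jjb-output 'selenium-*'
    # Then push the updated definitions to Jenkins, as logged above
    jenkins-jobs update config/ 'selenium-*'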
17:03 <legoktm> jstart -N qamorebots /usr/lib/adminbot/adminlogbot.py --config ./confs/qa-logbot.py [releng]
16:53 <thcipriani> restarted the puppetmaster service on integration-puppetmaster; it had evidently OOM'd. [releng]
06:05 <Amir1> restarting uwsgi-ores and celery-ores-worker in deployment-sca03 [releng]
06:04 <Amir1> restarting redis-instance-tcp_6379, redis-instance-tcp_6380, and redis-server services in deployment-ores-redis [releng]
06:03 <Amir1> ran puppet agent in deployment-ores-redis [releng]
06:01 <Amir1> ran puppet agent in deployment-sca03 [releng]
05:54 <Amir1> deploying 616707c to ores [releng]
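Taken together, the 05:54–06:05 entries above describe a deploy-then-restart cycle; a rough sketch of the equivalent commands on the two instances (the service names are the ones logged, while sudo and the ordering are assumptions):

    # on deployment-sca03 and deployment-ores-redis: pick up the new config
    sudo puppet agent -t
    # on deployment-ores-redis: bounce the redis instances
    sudo service redis-instance-tcp_6379 restart
    sudo service redis-instance-tcp_6380 restart
    sudo service redis-server restart
    # on deployment-sca03: restart the ores web and worker services
    sudo service uwsgi-ores restart
    sudo service celery-ores-worker restart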
2016-08-03
17:12 <thcipriani> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/#/c/301370 [releng]
09:25 <legoktm> deploying https://gerrit.wikimedia.org/r/302665 [releng]
05:22 <Krinkle> Jenkins job beta-mediawiki-config-update-eqiad has been stuck and unrun for 6 hours [releng]
2016-08-02
21:32 <thcipriani> re-enabling puppet on scap targets [releng]
21:28 <thcipriani> disabling puppet on scap targets briefly to test scap_3.2.2-1_all.deb [releng]
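A sketch of the disable/test/re-enable cycle on a single scap target; the package path and the plain dpkg install are assumptions (fanning out with salt, as elsewhere in this log, would also work):

    sudo puppet agent --disable 'testing scap_3.2.2-1_all.deb'   # keep puppet from reverting the test package
    sudo dpkg -i scap_3.2.2-1_all.deb                            # install the candidate build
    # ... exercise scap against the target ...
    sudo puppet agent --enable                                   # hand control back to puppet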
14:02 <gehel> deployment-prep rebooting deployment-elastic06 (unresponsive to SSH and Salt) [releng]
2016-08-01
20:28 <thcipriani> restarting deployment-ms-be01; it is not responding to ssh and requests to mw-fe01 are timing out [releng]
08:28 <Amir1> deploying fedd675 to ores in sca03 [releng]
2016-07-29
23:27 <bd808> Rebooting deployment-logstash2; Console showed hung task timeouts (P3606) [releng]
15:55 <hasharAway> pooled Jenkins slave integration-slave-jessie-1003 [10.68.21.145] [releng]
14:02 <hashar> deployment-prep / beta : added addshore to the project [releng]
13:24 <hashar> created integration-slave-jessie-1003 (m1.medium) to help process debian-glue jobs [releng]
13:01 <hashar> Upgrading Zuul on jessie slaves using https://people.wikimedia.org/~hashar/debs/zuul_2.1.0-391-gbc58ea3-jessie/zuul_2.1.0-391-gbc58ea3-wmf2jessie1_amd64.deb [releng]
12:53 <hashar> Upgrading Zuul on precise slaves using https://people.wikimedia.org/~hashar/debs/zuul_2.1.0-391-gbc58ea3/zuul_2.1.0-391-gbc58ea3-wmf2precise1_amd64.deb [releng]
09:38 <hashar> Upgrading Zuul to get rid of a forced sleep(300) whenever a patch is merged T93812. zuul_2.1.0-391-gbc58ea3-wmf2precise1 [releng]
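A rough sketch of how the per-distro upgrades above could be fanned out with salt; the targeting globs mirror the slave naming used elsewhere in this log, and the wget/dpkg one-liner is an assumption:

    # Jessie slaves
    salt -v '*slave-jessie*' cmd.run 'cd /tmp && wget -q https://people.wikimedia.org/~hashar/debs/zuul_2.1.0-391-gbc58ea3-jessie/zuul_2.1.0-391-gbc58ea3-wmf2jessie1_amd64.deb && dpkg -i zuul_2.1.0-391-gbc58ea3-wmf2jessie1_amd64.deb'
    # Precise slaves
    salt -v '*slave-precise*' cmd.run 'cd /tmp && wget -q https://people.wikimedia.org/~hashar/debs/zuul_2.1.0-391-gbc58ea3/zuul_2.1.0-391-gbc58ea3-wmf2precise1_amd64.deb && dpkg -i zuul_2.1.0-391-gbc58ea3-wmf2precise1_amd64.deb'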
2016-07-28
12:18 <hashar> installed 2.1.0-391-gbc58ea3-wmf1jessie1 on zuul-dev-jessie.integration.eqiad.wmflabs T140894 [releng]
09:46 <hashar> Nodepool: Image ci-trusty-wikimedia-1469698821 in wmflabs-eqiad is ready [releng]
09:35 <hashar> Regenerated Nodepool image for Trusty; the previous snapshot build had failed while upgrading grub-pc for some reason, as noticed with thcipriani yesterday [releng]
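A sketch of regenerating and checking the snapshot, assuming the snapshot-based Nodepool CLI of that era (image-update/image-list subcommands); the provider and image label come from the entries above:

    # Rebuild the Trusty snapshot in the wmflabs-eqiad provider
    nodepool image-update wmflabs-eqiad ci-trusty-wikimedia
    # Confirm the new image reaches the "ready" state
    nodepool image-list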
2016-07-27
16:12 <hashar> salt -v '*slave-trusty*' cmd.run 'service mysql start' ( was missing on integration-slave-trusty-1011.integration.eqiad.wmflabs ) [releng]
14:03 <hashar> upgraded zuul on gallium via dpkg -i /root/zuul_2.1.0-391-gbc58ea3-wmf1precise1_amd64.deb (revert is zuul_2.1.0-151-g30a433b-wmf4precise1_amd64.deb ) [releng]
12:43 <hashar> restarted Jenkins for some trivial plugins updates [releng]
12:35 <hashar> hard rebooting integration-slave-trusty-1011 from Horizon; ssh is unreachable and there is no console log output in Horizon. [releng]
09:46 <hashar> manually triggered debian-glue on all operations/debs repo that had no jenkins-bot vote. Via zuul enqueue on gallium and list fetched from "gerrit query --current-patch-set 'is:open NOT label:verified=2,jenkins-bot project:^operations/debs/.*'|egrep '(ref|project):'" [releng]
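Spelled out a bit more, the procedure above combines a Gerrit query with one zuul enqueue per change; the ssh form of the query, the pipeline name, and the per-change placeholders are assumptions:

    # List open, unverified operations/debs changes (ssh form of the gerrit query quoted in the entry)
    ssh -p 29418 gerrit.wikimedia.org gerrit query --current-patch-set \
        "is:open NOT label:verified=2,jenkins-bot project:^operations/debs/.*" | egrep '(ref|project):'
    # For each project/patchset pair, re-enqueue it on the Zuul server (pipeline name assumed):
    zuul enqueue --trigger gerrit --pipeline check --project operations/debs/<name> --change <number>,<patchset>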
06:20 <TimStarling> created instance deployment-depurate01 for testing of role::html5depurate [releng]
2016-07-26
20:13 <hashar> Zuul deployed https://gerrit.wikimedia.org/r/301093 which adds 'debian-glue' job on all of operations/debs/ repos [releng]