2015-03-06
18:22 <^d> staging: set has_ganglia to false in hiera [releng]
16:57 <legoktm> deployed https://gerrit.wikimedia.org/r/194892 [releng]
16:40 <Krinkle> Jenkins auto-depooled integration-slave1008 due to low /tmp space. Purged /tmp/npm-* to bring back up. [releng]
16:27 <Krinkle> Delete integration-slave1005 [releng]
09:17 <hasharConf> Jenkins: upgrading and restarting. Wish me luck. [releng]
06:29 <Krinkle> Re-creating integration-slave1401 - integration-slave1404 [releng]
02:21 <legoktm> deployed https://gerrit.wikimedia.org/r/194340 [releng]
01:52 <legoktm> deployed https://gerrit.wikimedia.org/r/194461 [releng]
2015-03-05
22:01 <Krinkle> Reloading Zuul to deploy I97c1d639313b [releng]
21:15 <hashar> stopping Jenkins [releng]
21:08 <hashar> killing browser tests running [releng]
20:48 <Krinkle> Re-establishing Gearman connection from Jenkins [releng]
20:44 <Krinkle> Deleting integration-slave1201-integration-slave1204, and integration-slave1401-integration-slave1404. [releng]
20:18 <Krinkle> Finished creation and provisioning of integration-slave1405 [releng]
19:34 <legoktm> deploying https://gerrit.wikimedia.org/r/194461, lots of new jobs [releng]
18:50 <Krinkle> Re-creating integration-slave1405 [releng]
17:52 <twentyafterfour> pushed wmf/1.25wmf20 branch to submodule repos [releng]
16:18 <greg-g> now there are jobs running on the zuul status page [releng]
16:16 <greg-g> getting "/zuul/status.json: Service Temporarily Unavailable" after the zuul restart [releng]
16:12 <^d> restarted zuul [releng]
16:06 <greg-g> jenkins doesn't have anything queued and is processing jobs apparently, not sure why zuul is showing two jobs queued for almost 2 hours (one with all tests passing, the other with nothing tested yet) [releng]
16:04 <greg-g> not sure it helped [releng]
16:02 <greg-g> about to disconnect/reconnect gearman per https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Known_issues [releng]
00:34 <legoktm> deployed https://gerrit.wikimedia.org/r/194421 [releng]
2015-03-04
17:34 <Krinkle> Depooling all new integration-slave12xx and integration-slave14xx instances again [releng]
17:34 <Krinkle> "ImportError: Entry point ('console_scripts', 'tox') not found" on integration-slave12xx instances for operations-puppet-tox-data_admin_lint [releng]
17:33 <Krinkle> "/usr/local/bin/tox: Permission denied" on integration-slave14xx instances [releng]
17:11 <Krinkle> Pooled integration-slave1201, integration-slave1202, integration-slave1203, integration-slave1204 [releng]
17:06 <Krinkle> Pooled integration-slave1402, integration-slave1403, integration-slave1404, integration-slave1405 [releng]
16:56 <Krinkle> Pooled integration-slave1401 [releng]
16:26 <Krinkle> integration-slave14xx are now provisioned and being added to the pool. Old trusty slaves will be depooled later and eventually deleted. [releng]
2015-03-03
22:00 <hashar> reboot integration-puppetmaster in case it solves an NFS mount issue [releng]
20:33 <legoktm> manually created centralauth.users_to_rename table [releng]
18:48 <Krinkle> New builds run fine, but there are 30 stuck builds occupying executors. Their output is finished and Zuul/Gerrit already got the event, but they won't die. [releng]
18:30 <Krinkle> The stuck builds have a "Finished: .." line in them, but are still showing a loading spinner at the bottom of their build log 10 minutes later. [releng]
18:28 <Krinkle> Lots of Jenkins builds are stuck half-way executing. No clear cause. Everything seems up. [releng]
17:18 <Krinkle> Reloading Zuul to deploy Icad0a26dc8 and Icac172b16 [releng]
15:39 <hashar> cancelled logrotate update of all jobs since that seems to kill the Jenkins/Zuul gearman connection. Probably because all jobs are registered on each config change. [releng]
15:31 <hashar> updating all jobs in Jenkins based on PS2 of https://gerrit.wikimedia.org/r/194109 [releng]
10:56 <hashar> Created instance i-000008fb with image "ubuntu-14.04-trusty" and hostname i-000008fb.eqiad.wmflabs. [releng]
10:52 <hashar> deleting integration-puppetmaster to recreate it with a new image (T87484). Will have to reapply I5335ea7cbfba33e84b3ddc6e3dd83a7232b8acfd and I30e5bfeac398e0f88e538c75554439fe82fcc1cf [releng]
03:47 <Krinkle> git-deploy: Deploying integration/slave-scripts 05a5593..1e64ed9 [releng]
01:11 <marxarelli> gzip'd /var/log/account/pacct.0 on deployment-bastion to free space [releng]
2015-03-02
21:35 <twentyafterfour> <Krenair> (per #mediawiki-core, have deleted the job queue key in redis, should get regenerated. also cleared screwed up log and restarted job runner service) [releng]
15:39 <Krinkle> Removing /usr/local/src/zuul from integration-slave12xx and integration-slave14xx to let puppet re-install zuul-cloner (T90984) [releng]
13:39 <Krinkle> integration-slave12xx and integration-slave14xx instances still depooled due to T90984 [releng]
2015-02-27
21:58 <Krinkle> Ragekilled all queued jobs related to beta and force restarted Jenkins slave agent on deployment-bastion.eqiad [releng]
21:56 <Krinkle> Job beta-update-databases-eqiad and node deployment-bastion.eqiad have been stuck for the past 4 hours [releng]
21:49 <marxarelli> Reloading Zuul to deploy I273270295fa5a29422a57af13f9e372bced96af1 and I81f5e785d26e21434cd66dc694b4cfe70c1fa494 [releng]
18:08 <Krenair> Kicked deployment-bastion node in jenkins to try to fix jobs [releng]