2016-07-26
18:10 <ostriches> zuul: reloading to pick up config change [releng]
12:49 <godog> cherry-pick https://gerrit.wikimedia.org/r/#/c/300827/ on deployment-puppetmaster [releng]
11:59 <legoktm> also pulled in I73f01f87b06b995bdd855628006225879a17fee5 [releng]
11:59 <legoktm> deploying https://gerrit.wikimedia.org/r/301109 [releng]
11:37 <hashar> rebased integration puppetmaster git repo [releng]
11:31 <hashar> enable puppet agent on integration-puppetmaster . Had it disabled while hacking on https://gerrit.wikimedia.org/r/#/c/300830/ [releng]
08:42 <hashar> T141269 On integration-slave-trusty-1018 , deleting workspace that has a corrupt git: rm -fR /mnt/jenkins-workspace/workspace/mediawiki-extensions-hhvm* [releng]
01:08 <Amir1> deployed ores a291da1 in sca03, ores-beta.wmflabs.org works as expected [releng]
2016-07-25
22:45 <legoktm> restarting zuul due to depends-on lockup [releng]
14:24 <godog> bounce puppetmaster on deployment-puppetmaster [releng]
13:17 <godog> cherry-pick https://gerrit.wikimedia.org/r/#/c/300827/ on deployment-puppetmaster [releng]
2016-07-23
20:06 <bd808> Cleanup jobrunner01 logs via -- sudo logrotate --force /etc/logrotate.d/mediawiki_jobrunner [releng]
20:03 <bd808> Deleted jobqueues in redis with no matching wikis: ptwikibooks, labswiki [releng]
19:20 <bd808> jobrunner01 spamming /var/log/mediawiki with attempts to process jobs for wiki=labswiki [releng]
2016-07-22
20:26 <hashar> T141114 upgraded jenkins-debian-glue from v0.13.0 to v0.17.0 on integration-slave-jessie-1001 and integration-slave-jessie-1002 [releng]
19:07 <thcipriani> beta-cluster has successfully used a canary for mediawiki deployments [releng]
16:53 <thcipriani> bumping scap to v.3.2.1 on deployment-tin to test canary deploys, again [releng]
16:46 <thcipriani> rolling back scap version to v.3.2.0 [releng]
16:37 <thcipriani> bumping scap to v.3.2.1 on deployment-tin to test canary deploys [releng]
13:02 <hashar> zuul rebased patch queue on tip of upstream branch and force pushed branch. c3d2810...4ddad4e HEAD -> patch-queue/debian/precise-wikimedia (forced update) [releng]
10:32 <hashar> Jenkins restarted and it pooled both integration-slave-jessie-1002 and integration-slave-trusty-1018 [releng]
10:23 <hashar> Jenkins has some random deadlock. Will probably reboot it [releng]
10:17 <hashar> Jenkins can't ssh / add slaves integration-slave-jessie-1002 or integration-slave-trusty-1018 . Apparently due to some Jenkins deadlock in the ssh slave plugin :-/ Lame way to solve it: restart Jenkins [releng]
10:10 <hashar> rebooting integration-slave-jessie-1002 and integration-slave-trusty-1018 . They hung somehow [releng]
10:06 <hashar> T141083 salt -v '*slave-trusty*' cmd.run 'service mysql start' [releng]
09:55 <hashar> integration-slave-trusty-1001 service mysql start [releng]
2016-07-21
16:11 <hashar> Updated our JJB fork cherry picking f74501e781f by madhuvishy. Was made to support the maven release plugin. Branch bump is 10f2bcd..6fcaf39 [releng]
16:04 <hashar> integration/zuul.git : updated upstream branch: bc58ea34125f11eb353abc3e5b96ac1efad06141 finally caught up with upstream \O/ [releng]
15:13 <hashar> integration/zuul.git .Updated upstream branch: 06770a85fcff810fc3e1673120710100fc7b0601:upstream [releng]
14:03 <hashar> integration/zuul.git bumping upstream branch: git push d34e0b4:upstream [releng]
03:18 <greg-g> had to do https://www.mediawiki.org/wiki/Continuous_integration/Jenkins#Hung_beta_code.2Fdb_update twice, seems to be back [releng]
00:12 <bd808> Cherry-picked https://gerrit.wikimedia.org/r/#/c/299825/ to deployment-puppetmaster so wdqs nginx log parsing can be tested [releng]
2016-07-20
13:55 <hashar> beta: switching job beta-scap-eqiad to use 'scap sync' per https://gerrit.wikimedia.org/r/#/c/287951/ (poke thcipriani ) [releng]
12:47 <hashar> integration: enabled unattended upgrade on all instances by adding contint::packages::apt to https://wikitech.wikimedia.org/wiki/Hiera:Integration [releng]
10:28 <hashar> beta: dropped salt-key on deployment-salt02 for the three instances: deployment-upload.deployment-prep.eqiad.wmflabs , deployment-logstash3.deployment-prep.eqiad.wmflabs and deployment-ores-web.deployment-prep.eqiad.wmflabs [releng]
10:26 <hashar> beta: rebased puppetmaster git repo. "Parsoid: Move to service::node" has weird conflict https://gerrit.wikimedia.org/r/#/c/298436/ [releng]
10:15 <hashar> beta: removing puppet cherry pick of https://gerrit.wikimedia.org/r/#/c/258979/ "mediawiki: add conftool-specific credentials and scripts" abandoned/superseded and caused a conflict [releng]
08:17 <hashar> deployment-fluorine : deleting a puppet lock file /var/lib/puppet/state/agent_catalog_run.lock (created at 2016-07-18 19:58:46 UTC) [releng]
01:53 <legoktm> deploying https://gerrit.wikimedia.org/r/299930 [releng]
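The deployment-fluorine entry above removes a leftover puppet agent lock file so puppet runs can resume. A hedged sketch of that cleanup, guarded so the lock is only deleted when no agent run is actually in progress; paths are illustrative (a temp directory stands in for /var/lib/puppet/state):

```shell
# Demo directory standing in for /var/lib/puppet/state
STATE_DIR=$(mktemp -d)
LOCK="$STATE_DIR/agent_catalog_run.lock"
touch "$LOCK"                               # simulate a stale leftover lock

# Only remove the lock if no puppet agent run is currently active,
# otherwise we would clobber a legitimate in-progress run.
if ! pgrep -f 'puppet agent' >/dev/null 2>&1; then
    rm -f "$LOCK"
    echo "removed stale lock"
fi
```

On a real host you would also check the lock's mtime (the log notes the lock dated from two days earlier) before concluding it is stale.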
2016-07-18
20:56 <thcipriani> Deleted deployment-fluorine:/srv/mw-log/archive/*-201605* freed 30 GB [releng]
15:00 <hashar> Upgraded Zuul on the Precise slaves to zuul_2.1.0-151-g30a433b-wmf4precise1 [releng]
12:10 <hashar> (restarted qa-morebots) [releng]
12:10 <hashar> Enabling puppet again on integration-slave-precise-1002 , removing Zuul-server config and adding the slave back in Jenkins pool [releng]
08:32 <hashar> gallium: upgrading Zuul 2.1.0-95-g66c8e52-wmf1precise1 .. zuul_2.1.0-151-g30a433b-wmf3precise1 [releng]
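The 20:56 entry above frees disk space by deleting one month of rotated MediaWiki logs. A hedged sketch of that kind of cleanup, sizing the target files before removing them; a temp directory stands in for deployment-fluorine:/srv/mw-log/archive and the file names are illustrative:

```shell
# Demo directory standing in for /srv/mw-log/archive
ARCHIVE=$(mktemp -d)
touch "$ARCHIVE/api.log-201605.gz" "$ARCHIVE/exception.log-201605.gz" \
      "$ARCHIVE/api.log-201607.gz"

# Report how much space the May 2016 archives occupy, then delete them,
# leaving more recent months untouched.
du -ch "$ARCHIVE"/*-201605* | tail -1
rm -f "$ARCHIVE"/*-201605*
ls "$ARCHIVE"
```

Running `du` first gives the "freed 30 GB" figure recorded in the log entry before anything is deleted.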
2016-07-16
23:19 <paladox> testing morebots [releng]
2016-07-15
08:34 <hashar> Unpooling integration-slave-precise-1002 , will use it as a zuul-server test instance temporarily [releng]
2016-07-14
18:54 <ebernhardson> deployment-prep manually edited elasticsearch.yml on deployment-elastic05 and restarted to get it listening on eth0. Still looking into why puppet wrote out wrong config file [releng]
09:05 <Amir1> rebooting deployment-ores-redis [releng]
08:29 <Amir1> deploying 0e9555f to ores-beta (sca03) [releng]