2015-12-09
12:57 <hashar> Upgrading Jenkins Gearman plugin to grab upstream patch https://review.openstack.org/#/c/252768/ 'fix registration for jenkins master' should be noop [releng]
2015-12-08
20:31 <hashar> beta cluster instances switching to new ldap configuration [releng]
20:31 <hashar> beta: rebased operations/puppet and locally fixed a conflict [releng]
19:32 <hashar> LDAP got migrated. We might have mwdeploy local users that got created on beta cluster instances :( [releng]
19:29 <hashar> beta: aborted rebase on puppetmaster. [releng]
14:18 <Krinkle> Removed integration-slave-trusty-1012:/mnt/home/jenkins-deploy/tmpfs/jenkins-2 which was left behind by a job. Caused other jobs to fail due to lack of permission to chmod/rm-rf this dir. [releng]
11:48 <hashar> beta: salt-key --delete deployment-cxserver03.eqiad.wmflabs [releng]
11:42 <hashar> running puppet on deployment-restbase01. Catching up on a lot of changes [releng]
11:23 <hashar> puppet catching up a lot of changes on deployment-cache-mobile04 and deployment-cache-text04 [releng]
11:20 <hashar> beta: rebased puppet.git on puppetmaster [releng]
10:58 <hashar> dropped deployment-cache-text04 puppet SSL certificates [releng]
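The exact commands were not logged; a typical sequence for dropping and regenerating an agent's certificates against a self-hosted puppetmaster (the default /var/lib/puppet/ssl path and the host's FQDN are assumptions) is:
    # on the agent: throw away its certificates and keys
    sudo rm -rf /var/lib/puppet/ssl
    # on the puppetmaster: revoke and remove the old signed certificate (FQDN assumed)
    sudo puppet cert clean deployment-cache-text04.deployment-prep.eqiad.wmflabs
    # back on the agent: request and receive a fresh certificate
    sudo puppet agent --test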
10:44 <hashar> beta: deployment-cache-text04 upgrading openssl libssl1.0.0 [releng]
10:42 <hashar> beta: fixing salt on a bunch of hosts. There are duplicate processes on a few of them. Fix up is: killall salt-minion && rm /var/run/salt-minion.pid && /etc/init.d/salt-minion start [releng]
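That fix-up has to be applied on each affected host directly (running it through the minion itself would kill the process executing it); a minimal sketch over ssh, with an illustrative host list, is:
    # host names are illustrative; target whichever minions show duplicate processes
    for h in deployment-memc04 deployment-sentry2 deployment-db1; do
        ssh "$h" 'sudo killall salt-minion; sudo rm -f /var/run/salt-minion.pid; sudo /etc/init.d/salt-minion start'
    done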
10:32 <hashar> beta: salt-key --delete=deployment-cache-upload04.eqiad.wmflabs (missing 'deployment-prep' subdomain) [releng]
10:31 <hashar> beta: puppet being fixed on memc04 sentry2 cache-upload04 cxserver03 db1 [releng]
10:27 <hashar> beta: fixing puppet.conf on a bunch of hosts. The [agent] server = deployment-puppetmaster.eqiad.wmflabs is wrong, missing the 'deployment-prep' subdomain [releng]
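For reference, the corrected agent stanza (assuming the standard /etc/puppet/puppet.conf location) should carry the fully-qualified master name:
    [agent]
    server = deployment-puppetmaster.deployment-prep.eqiad.wmflabs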
10:23 <hashar> beta: salt-key --delete=i-000005d2.eqiad.wmflabs [releng]
2015-12-07
16:16 <hashar> Nodepool no longer listens for Jenkins events over ZeroMQ. No TCP connection established on port 8888 [releng]
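A quick way to confirm the symptom from the Nodepool host (commands illustrative; either tool works):
    # look for an established TCP connection to the Jenkins ZeroMQ event publisher on port 8888
    ss -tn | grep ':8888'
    # or: netstat -tn | grep ':8888'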
16:09 <hashar> Nodepool no longer notices when Jenkins slaves go offline, which delays deletions and repooling significantly. Investigating [releng]
15:22 <hashar> labs DNS had some issues. All solved now. [releng]
13:46 <hashar> Reloading Jenkins configuration from disk following mass deletion of jobs directly on gallium [releng]
13:41 <hashar> deleting a bunch of unmanaged Jenkins jobs (no longer in JJB / no longer in Zuul) [releng]
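The usual pattern for such clean-ups (job directory path and CLI invocation are assumptions, not logged here) is to delete the orphaned job directories on the Jenkins master and then reload the configuration from disk:
    # on gallium: remove the orphaned job directories
    sudo rm -rf /var/lib/jenkins/jobs/<obsolete-job-name>
    # then reload the config, e.g. via the Jenkins CLI (or Manage Jenkins -> Reload Configuration from Disk)
    java -jar jenkins-cli.jar -s http://localhost:8080/ reload-configuration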
04:24 <bd808> The IP address in Jenkins for ci-jessie-wikimedia-10306 now belongs to an instance named future-wikipedia.reading-web-staging.eqiad.wmflabs (obviously the config is wrong) [releng]
04:12 <bd808> ci-jessie-wikimedia-10306 down and blocking many zuul queues [releng]
2015-12-04
19:24 <MaxSem> bumped portals [releng]
09:15 <hashar> salt --show-timeout '*' cmd.run 'rm -fR /mnt/jenkins-workspace/workspace/mwext-qunit/src/skins/*' ( https://phabricator.wikimedia.org/T120349 ) [releng]
2015-12-03
23:53 <marxarelli> Reloading Zuul to deploy If60f720995dfc7859e53cf33043b5a21b1a4b085 [releng]
23:39 <jzerebecki> reloading zuul for c078000..f934379 [releng]
17:46 <jzerebecki> reloading zuul for e4d3745..c078000 [releng]
16:25 <jzerebecki> reloading zuul for 58b5486..e4d3745 [releng]
11:00 <hashar> reenabled puppet on integration slaves [releng]
10:13 <hashar> integration disabling puppet agent to test xvfb https://gerrit.wikimedia.org/r/#/c/256643/ [releng]
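The per-slave commands were not logged; a typical disable/re-enable cycle with a reason string (so other operators can see why the agent is stopped) looks like:
    sudo puppet agent --disable 'testing xvfb change https://gerrit.wikimedia.org/r/#/c/256643/'
    # ... apply and test the change ...
    sudo puppet agent --enable && sudo puppet agent --test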
08:52 <hashar> apt-get upgrade integration-raita.integration.wmflabs.org [releng]
2015-12-02
11:16 <hashar> configure wmf-insecte to join #wikimedia-android-ci ( https://gerrit.wikimedia.org/r/#/c/254905/3/jjb/mobile.yaml,unified ) [releng]
11:06 <hashar> restarting nodepool [releng]
11:05 <hashar> manually refreshed nodepool snapshot ( Image ci-jessie-wikimedia-1449053701 in wmflabs-eqiad is ready ) while investigating for https://phabricator.wikimedia.org/T120076 [releng]
09:24 <hashar> keyholder rearmed (hopefully); doc at https://wikitech.wikimedia.org/wiki/Keyholder [releng]
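A sketch of the rearming procedure, assuming the keyholder CLI described on the linked wiki page:
    # on the deployment bastion
    sudo keyholder status   # check which keys the agent holds and whether it is armed
    sudo keyholder arm      # prompts for the key passphrase(s) to (re)arm the shared agent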
09:19 <hashar> beta-scap-eqiad is broken: Permission denied (publickey). [releng]
2015-12-01
16:06 <hashar> split mediawiki core parser tests under Zend to their own job https://gerrit.wikimedia.org/r/#/c/256006/ [releng]
14:55 <hashar> salt --show-timeout '*' cmd.run 'cd /srv/deployment/integration/slave-scripts; git pull' [releng]
14:49 <hashar> mw-phpunit.sh error is fixed via https://gerrit.wikimedia.org/r/256222 [releng]
14:36 <hashar> bin/mw-phpunit.sh: line 31: phpunit_args[@]: unbound variable [releng]
10:37 <hashar> kicking puppetmaster on integration-puppetmaster: out of memory [releng]
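"Kicking" here likely means a service restart to reclaim memory; assuming the usual Apache/Passenger-backed puppetmaster, that would be along the lines of:
    sudo service apache2 restart     # or: sudo service puppetmaster restart, depending on how it is run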
10:30 <hashar> Upgrading Zuul on Trusty and Jessie labs slaves to 2.1.0-60-g1cc37f7-wmf4... [releng]
2015-11-29
23:23 <bd808> updated cherry-pick of https://gerrit.wikimedia.org/r/#/c/255916/ to PS2 [releng]
05:34 <bd808> cherry-picked https://gerrit.wikimedia.org/r/#/c/255916/ for testing [releng]
05:25 <bd808> trebuchet is wack and not getting returner results from any hosts; see T119765 [releng]
05:23 <bd808> updated scap to 1879fd4 Add sync-l10n command for l10nupdate [releng]
05:22 <bd808> stashed uncommitted scap3 changes found on deployment-bastion [releng]
2015-11-25
19:53 <legoktm> ran mwscript sql.php --wiki=enwiki --wikidb=wikishared /srv/mediawiki-staging/php-master/extensions/Echo/db_patches/echo_unread_wikis.sql [releng]