2014-11-21 §
15:19 <hashar> I have revoked the deployment-salt certificates. All puppet agents are thus broken! [releng]
15:01 <hashar> deployment-salt cleaning certs with puppet cert clean [releng]
14:52 <hashar> manually switching restbase01 puppet master from virt1000 to deployment-salt.eqiad.wmflabs [releng]
14:50 <hashar> deployment-restbase01 has a puppet error: Error 400 on SERVER: Must provide non empty value. on node i-00000727.eqiad.wmflabs . That is due to the puppet pickle() function being given an empty variable [releng]
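For reference, the 15:01 and 14:52 entries above are the usual self-hosted puppetmaster recovery steps on labs instances; a minimal sketch, assuming the standard Puppet 3 paths (hostnames and the sed pattern are illustrative, not taken from the log):

    # On the puppetmaster (deployment-salt): revoke and remove the stale agent certificate
    puppet cert clean i-00000727.eqiad.wmflabs

    # On the agent (deployment-restbase01): point it at the new master, drop old certs, re-request
    sed -i 's/^server = .*/server = deployment-salt.eqiad.wmflabs/' /etc/puppet/puppet.conf
    rm -rf /var/lib/puppet/ssl
    puppet agent --test --waitforcert 60

    # Back on the master: sign the new request (unless autosigning is enabled)
    puppet cert sign i-00000727.eqiad.wmflabs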
2014-11-20 §
15:25 <hashar> 15:01 Restarted Jenkins AND Zuul. Beta cluster jobs are still deadlocked. [releng]
13:21 <hashar> for integration, set puppet master report retention to 360 minutes ( https://wikitech.wikimedia.org/wiki/Hiera:Integration , see https://bugzilla.wikimedia.org/show_bug.cgi?id=73472#c14 ) [releng]
13:20 <hashar> rebased puppet master on integration project [releng]
13:20 <hashar> rebased puppet master [releng]
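The 13:21 entry sets the puppetmaster report retention on the integration project to 360 minutes via Hiera; the exact Hiera key isn't quoted in the log entry, but the effect amounts to periodically purging old agent reports on the puppetmaster, roughly (a sketch, assuming the default report directory):

    # Delete puppet agent reports older than 360 minutes on the puppetmaster
    find /var/lib/puppet/reports -type f -name '*.yaml' -mmin +360 -delete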
2014-11-19 §
21:27 <bd808> Ran `GIT_SSH=/var/lib/git/ssh git pull --rebase` in deployment-salt:/srv/var-lib/git/labs/private [releng]
21:19 <anomie> Cherry-picked https://gerrit.wikimedia.org/r/#/c/173336/3 to Beta [releng]
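Cherry-picking a Gerrit change onto a labs puppetmaster, as in the 21:19 entry and several entries further down, generally follows the standard Gerrit workflow; a sketch, with the repository and its path on the puppetmaster being assumptions:

    cd /var/lib/git/operations/puppet   # checkout path is an assumption
    # Fetch patch set 3 of change 173336 and apply it on top of the local branch
    git fetch https://gerrit.wikimedia.org/r/operations/puppet refs/changes/36/173336/3
    git cherry-pick FETCH_HEAD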
2014-11-18 §
15:32 <hashar> Deleting job https://integration.wikimedia.org/ci/job/mediawiki-vendor-integration/ replaced by mediawiki-phpunit. Clearing out workspaces {{bug|73515}} [releng]
2014-11-17 §
20:37 <YuviPanda> cleaned out logs on deployment-bastion [releng]
16:48 <YuviPanda> delete deployment-analytics01, a tortoise from an ancient time. [releng]
09:24 <YuviPanda> moved *old* /var/log/eventlogging into /home/yuvipanda so puppet can run without bitching [releng]
05:17 <YuviPanda> forced apt-get install -f to unstick puppet [releng]
04:57 <YuviPanda> cleaned up coredump on mediawiki02 on deployment-prep [releng]
04:49 <YuviPanda> clean up coredump on deployment-prep [releng]
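The 05:17 entry unblocks puppet by letting apt finish a half-configured package; a sketch of the usual sequence (not the exact commands run):

    # Finish any interrupted package configuration, then resolve broken dependencies
    dpkg --configure -a
    apt-get -f install -y
    # Re-run the agent to confirm puppet is no longer stuck
    puppet agent --test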
2014-11-16 §
00:38 <YuviPanda> uncherrypick https://gerrit.wikimedia.org/r/#/c/173634/ because OMG CODE [releng]
00:14 <YuviPanda> cherry-pick https://gerrit.wikimedia.org/r/#/c/173634/ on deployment-salt [releng]
00:01 <YuviPanda> cherry-pick https://gerrit.wikimedia.org/r/#/c/173510/ on deployment-prep to make memc03 run puppet [releng]
2014-11-14 §
21:03 <marxarelli> loaded and re-saved Jenkins configuration to get it back to English [releng]
20:02 <anomie> Cherry-picking https://gerrit.wikimedia.org/r/#/c/173336/ for testing in logstash [releng]
17:27 <bd808> /var full on deployment-mediawiki02. Adjusted ~bd808/cleanup-hhvm-cores for core found in /var/tmp/core rather than the expected /var/tmp/hhvm [releng]
11:14 <hashar> Recreated a labs Gerrit setup on integration-zuul-server . Available from http://integration.wmflabs.org/gerrit/ using OpenID for authentication. [releng]
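The 17:27 entry adjusts a private cleanup script so it also looks for HHVM core dumps in /var/tmp/core rather than only /var/tmp/hhvm; the actual ~bd808/cleanup-hhvm-cores is not in the log, so the following is only an illustrative sketch:

    #!/bin/bash
    # Remove HHVM core dumps older than a day from both known locations
    for dir in /var/tmp/hhvm /var/tmp/core; do
        [ -d "$dir" ] && find "$dir" -type f -name 'core*' -mtime +1 -delete
    done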
2014-11-13 §
11:13 <hashar> apt-get upgrade / maintenance on all slaves [releng]
11:02 <hashar> bringing back integration-slave1008 to the pool. The label had a typo. https://integration.wikimedia.org/ci/computer/integration-slave1008/ [releng]
10:11 <YuviPanda> cherry pick https://gerrit.wikimedia.org/r/#/c/172967/1 to test https://bugzilla.wikimedia.org/show_bug.cgi?id=73263 [releng]
2014-11-12 §
21:03 <hashar> Restarted Jenkins due to a deadlock with deployment-bastion slave [releng]
18:16 <YuviPanda> cherry picking https://gerrit.wikimedia.org/r/#/c/172776/ on labs puppetmaster to see if it fixes issues in the cache machines [releng]
2014-11-11 §
17:13 <cscott> removed old ocg cronjobs on deployment-pdf0x; see https://bugzilla.wikimedia.org/show_bug.cgi?id=73166 [releng]
2014-11-10 §
22:37 <cscott> rsync'ed .git from pdf01 to pdf02 to resolve git-deploy issues on pdf02 (git fsck on pdf02 reported lots of errors) [releng]
21:41 <cscott> updated OCG to version d9855961b18f550f62c0b20da70f95847a215805 (skipping deployment-pdf02) [releng]
21:39 <cscott> deployment-pdf02 is not responding to git-deploy for OCG [releng]
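The 22:37 entry repairs a corrupt git repository on deployment-pdf02 by copying the healthy .git from deployment-pdf01; a sketch, with the OCG deploy directory path being an assumption:

    # On deployment-pdf02: confirm the repository is corrupt
    cd /srv/deployment/ocg/ocg && git fsck

    # Replace the broken .git with a copy from the healthy host, then re-check
    rsync -a --delete deployment-pdf01:/srv/deployment/ocg/ocg/.git/ /srv/deployment/ocg/ocg/.git/
    git fsck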
2014-11-09 §
16:51 <bd808> Running `chmod -R =rwX .` in /data/project/upload7 [releng]
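The `=rwX` mode in the 16:51 entry sets read and write for the classes the umask allows, while `X` grants execute only on directories (and on files that already have an execute bit), so data files stay non-executable but directories remain traversable:

    # "=" with no ugoa prefix applies to all classes not masked by the umask;
    # "X" adds execute only for directories and already-executable files.
    chmod -R =rwX .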
2014-11-08 §
08:07 <YuviPanda> that fixed it [releng]
08:04 <YuviPanda> disabling/enabling gearman [releng]
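The 08:04 entry toggles the Gearman plugin in Jenkins off and on, which re-registers the Jenkins executors with Zuul's Gearman server; whether the workers came back can be checked with Gearman's plain-text admin protocol, for example (host and port are the usual defaults and an assumption for this setup):

    # List registered functions and available workers on the Zuul gearman server
    echo status | nc -w 2 localhost 4730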
2014-11-06 §
23:43 <bd808> https://integration.wikimedia.org/ci/job/mwext-MobileFrontend-qunit-mobile/ happier after I deleted the clone of mw/core that was somehow corrupted [releng]
21:01 <cscott> bounced zuul, jobs seem to be running again [releng]
20:58 <cscott> about to restart zuul as per https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Known_issues [releng]
00:53 <bd808> HHVM not installed on integration-slave1009? "/srv/deployment/integration/slave-scripts/bin/mw-run-phpunit-hhvm.sh: line 42: hhvm: command not found" -- https://integration.wikimedia.org/ci/job/mediawiki-core-regression-hhvm-master/2542/console [releng]
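The 00:53 entry shows a job failing because the hhvm binary is missing on the slave; a defensive check like the following near the top of the runner script would fail fast with a clearer message (a sketch, not the actual slave-scripts code):

    #!/bin/bash
    # Fail early with an explicit message if HHVM is not installed on this slave
    if ! command -v hhvm >/dev/null 2>&1; then
        echo "hhvm binary not found on $(hostname); is the hhvm package installed?" >&2
        exit 1
    fi
    hhvm --version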
2014-11-05 §
16:14 <bd808> Updated scap to include Ic4574b7fed679434097be28c061927ac459a86fc (Revert "Make scap restart HHVM") [releng]
06:14 <ori> restarted hhvm on beta app servers [releng]
2014-11-03 §
22:07 <cscott> updated OCG to version 5834af97ae80382f3368dc61b9d119cef0fe129b [releng]
2014-10-31 §
17:13 <godog> bouncing zuul in jenkins as per https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Known_issues [releng]
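This entry and the 2014-11-06 20:58 one follow the restart procedure documented on the Continuous_integration/Zuul page; the broad shape is a service bounce followed by a check that events are flowing again (a sketch, with the service name and log path assumed):

    # Restart the Zuul scheduler, then watch its log for newly processed events
    service zuul restart
    tail -f /var/log/zuul/zuul.log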
2014-10-30 §
16:34 <hashar> cleared out /var/ on integration-puppetmaster [releng]
16:34 <bd808> Upgraded kibana to v3.1.1 [releng]
15:54 <hashar> Zuul: merging in https://review.openstack.org/#/c/128921/3 which should fix jobs being stuck in queue on merge/gearman failures. {{bug|72113}} [releng]
15:45 <hashar> Upgrading Zuul reference copy from upstream c9d11ab..1f4f8e1 [releng]
15:43 <hashar> Going to upgrade Zuul and monitor the result over the next hour. [releng]
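The 15:54 and 15:45 entries pull an upstream OpenStack review into the local reference copy of Zuul before deploying the upgrade; a sketch of that kind of merge, with the local path and remote URL assumed:

    cd /srv/zuul          # local reference copy of upstream zuul (path assumed)
    git fetch origin
    # Fetch patch set 3 of upstream change 128921 and merge it on top of the local branch
    git fetch https://review.openstack.org/openstack-infra/zuul refs/changes/21/128921/3
    git merge --no-ff FETCH_HEAD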
2014-10-29 §
22:58 <bd808> Stopped udp2log and started udp2log-mw on deployment-bastion [releng]