2014-11-14 §
17:27 <bd808> /var full on deployment-mediawiki02. Adjusted ~bd808/cleanup-hhvm-cores to look for cores in /var/tmp/core rather than the expected /var/tmp/hhvm [releng]
11:14 <hashar> Recreated a labs Gerrit setup on integration-zuul-server. Available from http://integration.wmflabs.org/gerrit/ using OpenID for authentication. [releng]
2014-11-13 §
11:13 <hashar> apt-get upgrade / maintenance on all slaves [releng]
11:02 <hashar> Bringing integration-slave1008 back into the pool; its label had a typo. https://integration.wikimedia.org/ci/computer/integration-slave1008/ [releng]
10:11 <YuviPanda> cherry pick https://gerrit.wikimedia.org/r/#/c/172967/1 to test https://bugzilla.wikimedia.org/show_bug.cgi?id=73263 [releng]
2014-11-12 §
21:03 <hashar> Restarted Jenkins due to a deadlock with deployment-bastion slave [releng]
18:16 <YuviPanda> cherry picking https://gerrit.wikimedia.org/r/#/c/172776/ on labs puppetmaster to see if it fixes issues in the cache machines [releng]
2014-11-11 §
17:13 <cscott> removed old ocg cronjobs on deployment-pdf0x; see https://bugzilla.wikimedia.org/show_bug.cgi?id=73166 [releng]
2014-11-10 §
22:37 <cscott> rsync'ed .git from pdf01 to pdf02 to resolve git-deploy issues on pdf02 (git fsck on pdf02 reported lots of errors) [releng]
21:41 <cscott> updated OCG to version d9855961b18f550f62c0b20da70f95847a215805 (skipping deployment-pdf02) [releng]
21:39 <cscott> deployment-pdf02 is not responding to git-deploy for OCG [releng]
2014-11-09 §
16:51 <bd808> Running `chmod -R =rwX .` in /data/project/upload7 [releng]
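A note on the symbolic mode above: with no who prefix, `=` applies to user/group/other while leaving bits set in the umask untouched, and capital `X` grants execute only to directories and to files that already had an execute bit. A throwaway illustration (the paths are made up, not taken from the log):

    # hypothetical illustration of =rwX; /tmp/demo is not a path from the log
    mkdir -p /tmp/demo/subdir && touch /tmp/demo/data.txt /tmp/demo/script.sh
    chmod +x /tmp/demo/script.sh
    chmod -R =rwX /tmp/demo
    # data.txt ends up read/write with no execute bit;
    # subdir and script.sh keep execute, since X only applies to dirs and already-executable files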
2014-11-08 §
08:07 <YuviPanda> that fixed it [releng]
08:04 <YuviPanda> disabling/enabling gearman [releng]
2014-11-06 §
23:43 <bd808> https://integration.wikimedia.org/ci/job/mwext-MobileFrontend-qunit-mobile/ happier after I deleted the clone of mw/core that was somehow corrupted [releng]
21:01 <cscott> bounced zuul, jobs seem to be running again [releng]
20:58 <cscott> about to restart zuul as per https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Known_issues [releng]
00:53 <bd808> HHVM not installed on integration-slave1009? "/srv/deployment/integration/slave-scripts/bin/mw-run-phpunit-hhvm.sh: line 42: hhvm: command not found" -- https://integration.wikimedia.org/ci/job/mediawiki-core-regression-hhvm-master/2542/console [releng]
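That error means the job got as far as invoking `hhvm` on a slave where the package was missing. A guard along these lines (a sketch only, not the actual slave-scripts code) would make the failure explicit up front:

    # hypothetical check for a runner script; not the real mw-run-phpunit-hhvm.sh logic
    if ! command -v hhvm >/dev/null 2>&1; then
        echo "hhvm is not installed on this slave; aborting" >&2
        exit 1
    fi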
2014-11-05 §
16:14 <bd808> Updated scap to include Ic4574b7fed679434097be28c061927ac459a86fc (Revert "Make scap restart HHVM") [releng]
06:14 <ori> restarted hhvm on beta app servers [releng]
2014-11-03 §
22:07 <cscott> updated OCG to version 5834af97ae80382f3368dc61b9d119cef0fe129b [releng]
2014-10-31 §
17:13 <godog> bouncing zuul in jenkins as per https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Known_issues [releng]
2014-10-30 §
16:34 <hashar> cleared out /var/ on integration-puppetmaster [releng]
16:34 <bd808> Upgraded kibana to v3.1.1 [releng]
15:54 <hashar> Zuul: merging in https://review.openstack.org/#/c/128921/3 which should fix jobs being stuck in queue on merge/gearman failures. {{bug|72113}} [releng]
15:45 <hashar> Upgrading Zuul reference copy from upstream c9d11ab..1f4f8e1 [releng]
15:43 <hashar> Going to upgrade Zuul and monitor the result over the next hour. [releng]
2014-10-29 §
22:58 <bd808> Stopped udp2log and started udp2log-mw on deployment-bastion [releng]
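The swap above boils down to stopping one init-managed service and starting the other; a sketch, assuming both services exist under their logged names on deployment-bastion:

    # sketch only; assumes udp2log and udp2log-mw are both init-managed services on this host
    sudo service udp2log stop
    sudo service udp2log-mw start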
19:46 <bd808> Logging seems broken following merge of https://gerrit.wikimedia.org/r/#/c/119941/24. Investigating [releng]
18:55 <ori> upgraded hhvm on beta labs to 3.3.0+dfsg1-1+wm1 [releng]
2014-10-28 §
23:47 <RoanKattouw> ...which was a no-op [releng]
23:46 <RoanKattouw> Updating puppet repo on deployment-salt puppet master [releng]
21:39 <bd808> RoanKattouw creating deployment-parsoid05 as a replacement for the totally broken deployment-parsoid04 [releng]
21:36 <RoanKattouw> Creating deployment-parsoid05 as a replacement for the totally broken deployment-parsoid04 (also as a trusty instance rather than precise) [releng]
21:06 <RoanKattouw> Rebooting deployment-parsoid04, wasn't responding to ssh [releng]
2014-10-27 §
20:23 <cscott> updated OCG to version 60b15d9985f881aadaa5fdf7c945298c3d7ebeac [releng]
2014-10-24 §
13:36 <hashar> That bumps hhvm on contint from 3.3.0-20140925+wmf2 to 3.3.0-20140925+wmf3 [releng]
13:36 <hashar> apt-get upgrade on Trusty Jenkins slaves [releng]
2014-10-23 §
22:43 <hashar> Jenkins resumed activity. Beta cluster code is being updated [releng]
21:36 <hashar> Jenkins: disconnected / reconnected slave node deployment-bastion.eqiad [releng]
2014-10-22 §
21:10 <arlolra> updated OCG to version e977e2c8ecacea2b4dee837933cc2ffdc6b214cb [releng]
20:54 <bd808> Enabled puppet on deployment-logstash1 [releng]
09:07 <hashar> Jenkins: upgrading gearman-plugin from 0.0.7-1-g3811bb8 to 0.1.0-1-gfa5f083, i.e. the latest version plus one commit [releng]
2014-10-21 §
21:10 <hashar> contint: refreshed slave-scripts 0b85d48..8c3f228; sqlite files will now be cleared out after 20 minutes (instead of 60 minutes) {{bug|71128}} [releng]
20:51 <cscott> deployment-prep _joe_ promises to fix this properly tomorrow am [releng]
20:51 <cscott> deployment-prep turned off puppet on deployment-pdf01, manually fixed broken /etc/ocg/mw-ocg-service.js [releng]
20:50 <cscott> deployment-prep updated OCG to version 523c8123cd826c75240837c42aff6301032d8ff1 [releng]
10:55 <hashar> deleted salt master key on deployment-elastic{06,07}, restarted salt-minion and reran puppet. It is now passing on both instances \o/ [releng]
10:48 <hashar> rerunning puppet manually on deployment-elastic{06,07} [releng]
10:48 <hashar> beta: signing puppet cert for deployment-elastic{06,07}. On deployment-salt ran: puppet ca sign i-000006b6.eqiad.wmflabs; puppet ca sign i-000006b7.eqiad.wmflabs [releng]
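Taken together, the three 2014-10-21 morning entries are the usual bring-up sequence for fresh instances. A rough sketch of that flow (the salt pki path is the Debian/Ubuntu default and is an assumption; the instance IDs are the ones logged above):

    # on deployment-salt (the puppet CA), sign the waiting agent certs
    puppet ca sign i-000006b6.eqiad.wmflabs
    puppet ca sign i-000006b7.eqiad.wmflabs
    # on each deployment-elastic{06,07} instance: drop the stale cached master key,
    # restart salt-minion, then re-run puppet (default pki path assumed)
    sudo rm /etc/salt/pki/minion/minion_master.pub
    sudo service salt-minion restart
    sudo puppet agent --test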