2014-11-08 §
08:07 <YuviPanda> that fixed it [releng]
08:04 <YuviPanda> disabling/enabling gearman [releng]
2014-11-06 §
23:43 <bd808> https://integration.wikimedia.org/ci/job/mwext-MobileFrontend-qunit-mobile/ happier after I deleted the clone of mw/core that was somehow corrupted [releng]
21:01 <cscott> bounced zuul, jobs seem to be running again [releng]
20:58 <cscott> about to restart zuul as per https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Known_issues [releng]
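(For reference, the restart in the two entries above follows the documented known-issues procedure; a minimal sketch, assuming Zuul runs under an init script named zuul on the contint host — the authoritative steps are on the wiki page linked in the entry:)
    sudo /etc/init.d/zuul stop
    ps aux | grep zuul    # confirm no zuul-server process survived the stop
    sudo /etc/init.d/zuul start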
00:53 <bd808> HHVM not installed on integration-slave1009? "/srv/deployment/integration/slave-scripts/bin/mw-run-phpunit-hhvm.sh: line 42: hhvm: command not found" -- https://integration.wikimedia.org/ci/job/mediawiki-core-regression-hhvm-master/2542/console [releng]
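(A quick way to confirm the missing binary on the slave; a sketch, assuming nothing beyond standard Debian/Ubuntu tooling:)
    command -v hhvm || echo "hhvm not on PATH"
    dpkg -l hhvm    # shows whether the hhvm package is installed at all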
2014-11-05 §
16:14 <bd808> Updated scap to include Ic4574b7fed679434097be28c061927ac459a86fc (Revert "Make scap restart HHVM") [releng]
06:14 <ori> restarted hhvm on beta app servers [releng]
2014-11-03 §
22:07 <cscott> updated OCG to version 5834af97ae80382f3368dc61b9d119cef0fe129b [releng]
2014-10-31 §
17:13 <godog> bouncing zuul in jenkins as per https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Known_issues [releng]
2014-10-30 §
16:34 <hashar> cleared out /var/ on integration-puppetmaster [releng]
16:34 <bd808> Upgraded kibana to v3.1.1 [releng]
15:54 <hashar> Zuul: merging in https://review.openstack.org/#/c/128921/3 which should fix jobs being stuck in queue on merge/gearman failures. {{bug|72113}} [releng]
15:45 <hashar> Upgrading Zuul reference copy from upstream c9d11ab..1f4f8e1 [releng]
15:43 <hashar> Going to upgrade Zuul and monitor the result over the next hour. [releng]
2014-10-29 §
22:58 <bd808> Stopped udp2log and started udp2log-mw on deployment-bastion [releng]
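(A sketch of the swap above, assuming both daemons are managed as system services under the names used in the entry:)
    sudo service udp2log stop
    sudo service udp2log-mw start
    sudo service udp2log-mw status    # confirm the MediaWiki-specific instance came up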
19:46 <bd808> Logging seems broken following merge of https://gerrit.wikimedia.org/r/#/c/119941/24. Investigating [releng]
18:55 <ori> upgraded hhvm on beta labs to 3.3.0+dfsg1-1+wm1 [releng]
2014-10-28 §
23:47 <RoanKattouw> ...which was a no-op [releng]
23:46 <RoanKattouw> Updating puppet repo on deployment-salt puppet master [releng]
21:39 <bd808> RoanKattouw creating deployment-parsoid05 as a replacement for the totally broken deployment-parsoid04 [releng]
21:36 <RoanKattouw> Creating deployment-parsoid05 as a replacement for the totally broken deployment-parsoid04 (also as a trusty instance rather than precise) [releng]
21:06 <RoanKattouw> Rebooting deployment-parsoid04, wasn't responding to ssh [releng]
2014-10-27 §
20:23 <cscott> updated OCG to version 60b15d9985f881aadaa5fdf7c945298c3d7ebeac [releng]
2014-10-24 §
13:36 <hashar> That bumps hhvm on contint from 3.3.0-20140925+wmf2 to 3.3.0-20140925+wmf3 [releng]
13:36 <hashar> apt-get upgrade on Trusty Jenkins slaves [releng]
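(A sketch of the upgrade as run on each Trusty slave, assuming the Wikimedia apt repository already carries the newer hhvm build:)
    sudo apt-get update
    sudo apt-get upgrade    # pulls hhvm 3.3.0-20140925+wmf3 along with other pending updates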
2014-10-23 §
22:43 <hashar> Jenkins resumed activity. Beta cluster code is being updated [releng]
21:36 <hashar> Jenkins: disconnected / reconnected slave node deployment-bastion.eqiad [releng]
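(The disconnect/reconnect was presumably done from the Jenkins web UI; an equivalent via the Jenkins CLI, with the server URL and node name taken from the entry, would look roughly like:)
    java -jar jenkins-cli.jar -s https://integration.wikimedia.org/ci/ disconnect-node deployment-bastion.eqiad
    java -jar jenkins-cli.jar -s https://integration.wikimedia.org/ci/ connect-node deployment-bastion.eqiad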
2014-10-22 §
21:10 <arlolra> updated OCG to version e977e2c8ecacea2b4dee837933cc2ffdc6b214cb [releng]
20:54 <bd808> Enabled puppet on deployment-logstash1 [releng]
09:07 <hashar> Jenkins: upgrading gearman-plugin from 0.0.7-1-g3811bb8 to 0.1.0-1-gfa5f083, i.e. to the latest version + 1 commit [releng]
2014-10-21 §
21:10 <hashar> contint: refreshed slave-scripts 0b85d48..8c3f228; sqlite files will now be cleared out after 20 minutes (instead of 60 minutes) {{bug|71128}} [releng]
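(The cleanup is presumably a periodic find-and-delete; a hypothetical sketch of what the tightened 20-minute window amounts to — the actual path and mechanism live in the slave-scripts repo, and the workspace path below is made up for illustration:)
    # hypothetical: delete leftover sqlite fixtures older than 20 minutes
    find /mnt/jenkins-workspace -name '*.sqlite' -mmin +20 -delete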
20:51 <cscott> deployment-prep _joe_ promises to fix this properly tomorrow am [releng]
20:51 <cscott> deployment-prep turned off puppet on deployment-pdf01, manually fixed broken /etc/ocg/mw-ocg-service.js [releng]
20:50 <cscott> deployment-prep updated OCG to version 523c8123cd826c75240837c42aff6301032d8ff1 [releng]
10:55 <hashar> deleted salt master key on deployment-elastic{06,07}, restarted salt-minion and reran puppet. It is now passing on both instances \o/ [releng]
10:48 <hashar> rerunning puppet manually on deployment-elastic{06,07} [releng]
10:48 <hashar> beta: signing puppet cert for deployment-elastic{06,07}. On deployment-salt ran: puppet ca sign i-000006b6.eqiad.wmflabs; puppet ca sign i-000006b7.eqiad.wmflabs [releng]
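(Taken together, the three entries above are the usual new-instance key dance; a sketch, assuming the default salt key paths — the entries don't say exactly which key was deleted, but the stale cached master key on the minion is the usual culprit:)
    # on deployment-salt, sign the new puppet certs:
    puppet ca sign i-000006b6.eqiad.wmflabs
    puppet ca sign i-000006b7.eqiad.wmflabs
    # on each deployment-elastic instance, drop the cached master key and retry:
    sudo rm /etc/salt/pki/minion/minion_master.pub
    sudo service salt-minion restart
    sudo puppet agent --test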
09:29 <hashar> disregard my previous entry: deployment-logstash1 has a puppet agent error, but only because the agent is disabled ('debugging logstash config') [releng]
09:28 <hashar> deployment-logstash1 disk full [releng]
2014-10-20 §
17:41 <bd808> Disabled redis input plugin and restarted logstash on deployment-logstash1 [releng]
17:39 <bd808> Disabled puppet on deployment-logstash1 for some live hacking of logstash config [releng]
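(A sketch of the live-hack workflow in the two entries above, assuming a Debian-style logstash service and a conf.d config layout — the exact file names aren't given:)
    sudo puppet agent --disable 'debugging logstash config'
    # comment out the redis {} input stanza under /etc/logstash/conf.d/ (file name assumed)
    sudo service logstash restart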
15:27 <apergos> upgraded salt-master on virt1000 (master for labs) [releng]
2014-10-17 §
22:34 <subbu> live fixed bad logger config in /srv/deployment/parsoid/deploy/conf/wmf/betalabs.localsettings.js and verified that parsoid doesn't crash anymore -- fix now on gerrit and being merged [releng]
20:48 <hashar> qa-morebots is back [releng]
2014-10-15 §
01:08 <Krinkle> Pooled integration-slave1009 [releng]
01:08 <Krinkle> Setting up integration-slave1009 ({{bug|72014}} fixed) [releng]
01:00 <Krinkle> integration-publisher and integration-zuul-server were rebooted by me yesterday. Seems they only show up in graphite now; maybe they were shut down or had puppet stuck. [releng]
2014-10-14 §
21:00 <JohnLewis> icinga says deployment-sca01 is good (yay) [releng]
20:42 <JohnLewis> deleted and recreated deployment-sca01 (still needs puppet set up) [releng]