2015-05-08 §
20:05 <bd808> Cherry-picked https://gerrit.wikimedia.org/r/#/c/209801 [releng]
18:15 <bd808> Cherry-picked https://gerrit.wikimedia.org/r/#/c/209769/ [releng]
05:14 <bd808> apache2 access logs are now only kept locally on instances in /var/log/apache2/other_vhosts_access.log; the error log is in /var/log/apache2.log and is still relayed to deployment-bastion and logstash (works like production now) [releng]
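For reference, the shared per-vhost access log at that path is the stock Debian/Ubuntu apache2 arrangement; a minimal sketch of how to inspect it on an instance (file names follow the distribution defaults, not the actual beta cluster puppetization):
    # Debian/Ubuntu ships the combined per-vhost access log as a conf snippet
    cat /etc/apache2/conf-enabled/other-vhosts-access-log.conf
    #   CustomLog ${APACHE_LOG_DIR}/other_vhosts_access.log vhost_combined
    # follow requests for all vhosts locally on the instance
    tail -f /var/log/apache2/other_vhosts_access.log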
04:49 <bd808> Symbolic link not allowed or link target not accessible: /srv/mediawiki/docroot/bits/static/master/extensions [releng]
04:47 <bd808> cherry-picked https://gerrit.wikimedia.org/r/#/c/209680/ [releng]
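The cherry-pick entries above apply a Gerrit change on top of a local checkout; a rough sketch of the usual pattern, assuming the change targets operations/puppet and patchset 1 (the real ref depends on the repository, change, and patchset number):
    # refs/changes/<last two digits of change>/<change number>/<patchset>
    git fetch https://gerrit.wikimedia.org/r/operations/puppet refs/changes/80/209680/1
    git cherry-pick FETCH_HEAD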
2015-05-07 §
20:48 <bd808> Updated kibana to bb9fcf6 (Merge remote-tracking branch 'upstream/kibana3') [releng]
18:00 <greg-g> brought deployment-bastion.eqiad back online in Jenkins (after Krinkle disconnected it some hours ago). Jobs are processing [releng]
16:05 <bd808> Updated scap to 5d681af (Better handling for php lint checks) [releng]
14:05 <Krinkle> deployment-bastion.eqiad has been stuck for 10 hours. [releng]
14:05 <Krinkle> For two days now, Jenkins has always returned the Wikimedia 503 error page after logging in. The login session itself is fine. [releng]
05:02 <legoktm> slaves are going up/down likely due to automated labs migration script [releng]
2015-05-06 §
15:13 <bd808> Updated scap to 57036d2 (Update statsd events) [releng]
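The "Updated scap to <sha>" entries record moving a deployed checkout to a specific commit; a sketch assuming a plain git checkout (the path is an assumption, and the real update may go through the deployment tooling instead):
    cd /srv/deployment/scap/scap   # assumed checkout location
    git fetch origin
    git reset --hard 57036d2       # pin the working copy to the logged commit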
2015-05-05 §
19:06 <jzerebecki> integration-slave-trusty-1015:~$ sudo -u jenkins-deploy rm -rf /mnt/jenkins-workspace/workspace/mwext-Wikibase-qunit/src/node_modules [releng]
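A sketch of how the same stale-node_modules problem can be located on other slaves (the find invocation is illustrative; removal should still run as jenkins-deploy, as in the entry above):
    # list node_modules directories left behind under Jenkins workspaces
    sudo find /mnt/jenkins-workspace/workspace -maxdepth 3 -type d -name node_modules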
15:42 <legoktm> deploying https://gerrit.wikimedia.org/r/208975 & https://gerrit.wikimedia.org/r/208976 [releng]
04:36 <legoktm> deploying https://gerrit.wikimedia.org/r/208899 [releng]
04:04 <legoktm> deploying https://gerrit.wikimedia.org/r/208889,90,91,92 [releng]
2015-05-04 §
23:50 <hashar> restarted Jenkins (deadlock with deployment-bastion) [releng]
23:49 <hashar> restarted Jenkins [releng]
22:50 <hashar> Manually retriggering last change of operations/mediawiki-config.git with: <tt>zuul enqueue --trigger gerrit --pipeline postmerge --project operations/mediawiki-config --change 208822,1</tt> [releng]
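For reference, the general form of that Zuul client invocation, with placeholders for the values (run on the Zuul server):
    zuul enqueue --trigger gerrit \
        --pipeline <pipeline> \
        --project <project> \
        --change <change-number>,<patchset>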
22:49 <hashar> restarted Zuul to clear out a bunch of operations/mediawiki-config.git jobs [releng]
22:20 <hashar> restarting Jenkins from gallium :/ [releng]
22:18 <thcipriani> jenkins restarted [releng]
22:12 <thcipriani> preparing jenkins for shutdown [releng]
21:59 <hashar> disconnected and reconnected the Jenkins Gearman client [releng]
21:41 <thcipriani> deployment-bastion still not accepting jobs from jenkins [releng]
21:35 <thcipriani> disconnecting deployment-bastion and reconnecting, again [releng]
20:54 <thcipriani> marking node deployment-bastion offline due to a stuck Jenkins execution lock [releng]
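Taking a node offline/online is normally done from the Jenkins web UI; an equivalent sketch via the Jenkins CLI, assuming the integration server URL and suitable credentials (the actual steps here may well have been the UI):
    java -jar jenkins-cli.jar -s https://integration.wikimedia.org/ci/ \
        offline-node deployment-bastion.eqiad -m "stuck Jenkins execution lock"
    # once the lock clears, bring the node back
    java -jar jenkins-cli.jar -s https://integration.wikimedia.org/ci/ \
        online-node deployment-bastion.eqiad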
19:03 <legoktm> deploying https://gerrit.wikimedia.org/r/208339 [releng]
17:46 <bd808> integration-slave-precise-1014 died trying to clone mediawiki/core.git with "fatal: destination path 'src' already exists and is not an empty directory." [releng]
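The usual fix for that clone failure is to clear the half-populated workspace so the next build starts from an empty directory; a sketch (the job name in the path is hypothetical):
    # on integration-slave-precise-1014, remove the stale src checkout
    sudo -u jenkins-deploy rm -rf /mnt/jenkins-workspace/workspace/<job-name>/src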
2015-05-02 §
06:53 <legoktm> deploying https://gerrit.wikimedia.org/r/208366 [releng]
06:45 <legoktm> deploying https://gerrit.wikimedia.org/r/208364 [releng]
05:49 <legoktm> deploying https://gerrit.wikimedia.org/r/208358 [releng]
05:26 <legoktm> deploying https://gerrit.wikimedia.org/r/207132 [releng]
04:18 <legoktm> deploying https://gerrit.wikimedia.org/r/208342 and https://gerrit.wikimedia.org/r/208340 [releng]
03:56 <legoktm> reset mediawiki-extensions-hhvm workspace on integration-slave-trusty-1015 (bad .git lock) [releng]
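A "bad .git lock" is typically an index.lock left behind by an interrupted git process; a sketch of the two common remedies (the workspace path is an assumption based on the slave layout seen above):
    # remove just the stale lock file...
    rm -f /mnt/jenkins-workspace/workspace/mediawiki-extensions-hhvm/src/.git/index.lock
    # ...or wipe the workspace so the job re-clones from scratch
    sudo -u jenkins-deploy rm -rf /mnt/jenkins-workspace/workspace/mediawiki-extensions-hhvm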
00:51 <yuvipanda> created deployment-boomboom to test [releng]
2015-04-30 §
19:26 <Krinkle> Repooled integration-slave-trusty-1013. IP unchanged. [releng]
19:00 <Krinkle> Depooled integration-slave-trusty-1013 for labs maintenance (per andrewbogott) [releng]
14:17 <hashar> Jenkins: properly downgraded IRC plugin from 2.26 to 2.25 [releng]
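Downgrading a Jenkins plugin generally means installing the older .hpi over the newer one and restarting Jenkins; a rough sketch (the plugin artifact name and paths are assumptions):
    # fetch the 2.25 release of the IRC plugin and drop it into the plugins dir
    wget https://updates.jenkins-ci.org/download/plugins/ircbot/2.25/ircbot.hpi
    sudo cp ircbot.hpi /var/lib/jenkins/plugins/
    # restart Jenkins to load the downgraded plugin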
13:40 <hashar> Jenkins: downgrading IRC plugin from 2.26 to 2.25 [releng]
12:09 <hashar> restarting Jenkins https://phabricator.wikimedia.org/T96183 [releng]
2015-04-29 §
21:03 <andrewbogott> suspending and shrinking disks of many instances [releng]
17:15 <thcipriani> removed l10nupdate user from /etc/passwd on deployment-bastion [releng]
15:00 <hashar> Instances are being moved out from labvirt1005 which has some faulty memory. List of instances at https://phabricator.wikimedia.org/T97521#1245217 [releng]
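Listing which instances are scheduled on a given labvirt host can be done with the OpenStack client; a sketch, assuming admin credentials and the nova CLI of that era:
    # show all instances on labvirt1005, across projects
    nova list --all-tenants --host labvirt1005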
14:25 <hashar> upgrading zuul on integration-slave-precise-1011 for https://phabricator.wikimedia.org/T97106 [releng]
14:11 <hashar> reboot of integration-saltmaster stalled. [releng]
13:11 <hashar> Rebooting deployment-parsoid05 via wikitech interface. [releng]
13:02 <hashar> labvirt1005 seems to have hardware issue. Impacts a bunch of beta cluster / integration instances as listed on https://phabricator.wikimedia.org/T97521#1245217 [releng]
12:22 <hashar> deployment-parsoid05 slow down is https://phabricator.wikimedia.org/T97421 . Running apt-get upgrade and rebooting it, but the slowness might be with the underlying hardware [releng]
12:13 <hashar> killing puppet on deployment-parsoid05; it eats all the CPU for some reason [releng]
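When a puppet agent is misbehaving like this, a sketch of stopping it cleanly before digging in (the disable message is illustrative):
    # record why puppet runs are disabled, then stop the agent service
    sudo puppet agent --disable "investigating CPU usage on deployment-parsoid05"
    sudo service puppet stop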