2016-03-25
22:14 <marxarelli> creating new jenkins node for integration-slave-trusty-1024 [releng]
22:11 <marxarelli> rebooting integration-slave-trusty-{1024,1025} before pooling as replacements for trusty-1002 and trusty-1005 [releng]
21:06 <marxarelli> repooling integration-slave-trusty-{1005,1002} to help with load while replacement instances are provisioning [releng]
16:59 <marxarelli> depooling integration-slave-trusty-1002 until its DNS resolution issue is fixed. still investigating the disk space issue [releng]
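(For reference, depooling/repooling a slave like this can also be done from the command line with the Jenkins CLI instead of the web UI; the jar location and server URL below are assumptions for illustration, not taken from the log:)
    # depool: mark the node temporarily offline with a reason
    java -jar jenkins-cli.jar -s https://integration.wikimedia.org/ci/ \
        offline-node integration-slave-trusty-1002 -m "DNS resolution issue"
    # repool once it is healthy again
    java -jar jenkins-cli.jar -s https://integration.wikimedia.org/ci/ \
        online-node integration-slave-trusty-1002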
2016-03-24
16:39 <thcipriani> restarted rsync service on deployment-tin [releng]
13:45 <thcipriani|afk> rearmed keyholder on deployment-tin [releng]
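(Re-arming keyholder after a deployment host restart is roughly the following; "status" and "arm" are the standard keyholder subcommands, but treat exact paths and key names as assumptions:)
    # on deployment-tin: check whether the ssh-agent proxy still holds armed keys
    sudo keyholder status
    # re-add the deploy key(s); this prompts for the key passphrase
    sudo keyholder arm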
11:20 <hashar> Restarted Jenkins; jobs got blocked while notifying IRC [releng]
04:41 <Krinkle> beta-update-databases-eqiad and beta-scap-eqiad stuck for over 8 hours (IRC notifier plugin deadlock) [releng]
03:28 <Krinkle> beta-mediawiki-config-update-eqiad has been stuck in the queue for over 5 hours. [releng]
2016-03-23
23:00 <Krinkle> rm -rf integration-slave-trusty-1013:/mnt/home/jenkins-deploy/tmpfs/jenkins-2/karma-54925082/ (bad permissions, caused Karma issues) [releng]
19:02 <legoktm> restarted zuul [releng]
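(A hedged sketch of what "restarted zuul" typically amounts to on the Zuul server; the init service name is an assumption:)
    # restart zuul-server; note that with Zuul v2 the change queues are not
    # preserved across a full restart, so pending changes may need re-enqueueing
    sudo service zuul restart
    sudo service zuul status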
2016-03-22
17:40 <legoktm> deploying https://gerrit.wikimedia.org/r/278926 [releng]
2016-03-21
21:55 <hashar> zuul: almost all MediaWiki extensions migrated to run the npm job on Nodepool (with Node.js 4.3) T119143. All tested. Will check the results of the overnight builds tomorrow [releng]
20:28 <hashar> Mass running npm-node-4.3 jobs against MediaWiki extensions to make sure they all pass ( https://gerrit.wikimedia.org/r/#/c/278004/ | T119143 ) [releng]
17:40 <elukey> executed git rebase --interactive on deployment-puppetmaster.deployment-prep.eqiad.wmflabs to remove https://gerrit.wikimedia.org/r/#/c/278713/ [releng]
15:46 <elukey> manually hacked the cdh puppet submodule on deployment-puppetmaster.deployment-prep.eqiad.wmflabs - please let me know if it interferes with anybody's tests [releng]
14:24 <elukey> executed git submodule update --init on deployment-puppetmaster.deployment-prep.eqiad.wmflabs [releng]
11:25 <elukey> beta: cherry picked https://gerrit.wikimedia.org/r/#/c/278713/ to test an update to the cdh module (analytics) [releng]
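(The cherry-pick/test/remove cycle used on the beta puppetmaster in the entries above boils down to the following git workflow; the checkout path and the patchset number in the refspec are assumptions for illustration:)
    cd /var/lib/git/operations/puppet        # assumed location of the puppet clone on deployment-puppetmaster
    # fetch the change under test from Gerrit and apply it on top of the local branch
    git fetch https://gerrit.wikimedia.org/r/operations/puppet refs/changes/13/278713/1
    git cherry-pick FETCH_HEAD
    # ...test...
    # later, drop the cherry-pick again by deleting its line in the interactive rebase todo list
    git rebase --interactive HEAD~5          # depth of 5 is arbitrary; go back past the cherry-pick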
11:13 <hashar> beta: rebased the puppet master, which had a conflict with https://gerrit.wikimedia.org/r/#/c/274711/ (merged in the meantime); saves Elukey some work [releng]
11:02 <hashar> beta: added Elukey (wikimedia ops) to the project as member and admin [releng]
2016-03-19
13:04 <hashar> Jenkins: added ldap-labs-codfw.wikimedia.org as a fallback LDAP server T130446 [releng]
2016-03-18
17:16 <jzerebecki> reloading zuul for e33494f..89a9659 [releng]
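(A "reloading zuul for <old>..<new>" entry normally means the deployed integration/config checkout was advanced to the new commit and zuul-server was asked to re-read its layout; the paths and service name below are assumptions:)
    cd /etc/zuul/wikimedia                 # assumed location of the deployed integration/config clone
    git log --oneline e33494f..89a9659     # review what is about to be deployed
    git pull
    # a reload re-reads the layout without dropping the running queues
    sudo service zuul reload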
2016-03-17
21:10 <thcipriani> updating scap on deployment-tin to test D133 [releng]
18:31 <cscott> updated OCG to version c1a8232594fe846bd2374efd8f7c20d7e97ac449 [releng]
09:34 <hashar> deployment-jobrunner01 deleted /var/log/apache/*.gz T130179 [releng]
09:04 <hashar> Upgrading hhvm and related extensions on jobrunner01 T130179 [releng]
2016-03-16
14:28 <hashar> Updated jobs that use the package manager cache system (castor) via https://gerrit.wikimedia.org/r/#/c/277774/ [releng]
2016-03-15
15:17 <jzerebecki> added wikidata.beta.wmflabs.org in https://wikitech.wikimedia.org/wiki/Special:NovaAddress to deployment-cache-text04.deployment-prep.eqiad.wmflabs [releng]
14:19 <hashar> Image ci-jessie-wikimedia-1458051246 in wmflabs-eqiad is ready T124447 [releng]
14:14 <hashar> Refreshing Nodepool snapshot images so they get a fresh copy of slave-scripts T124447 [releng]
14:08 <hashar> Deploying slave script change https://gerrit.wikimedia.org/r/#/c/277508/ "npm-install-dev.py: Use config.dev.yaml instead of config.yaml" for T124447 [releng]
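(Refreshing a Nodepool snapshot image, as in the entries above, is roughly the following on the nodepool host; the provider and image names come from the log, while the CLI form is the old nodepool one and should be treated as an assumption:)
    # rebuild the snapshot so newly launched instances pick up the updated slave-scripts
    nodepool image-update wmflabs-eqiad ci-jessie-wikimedia
    # watch for the fresh image to reach the "ready" state
    nodepool image-list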
2016-03-14
22:17 <greg-g> new jobs weren't being processed in Zuul; lego fixed it and blamed Reedy [releng]
20:13 <hashar> Updating Jenkins jobs mwext-Wikibase-* so they no longer rely on --with-phpunit ( ping @hoo https://gerrit.wikimedia.org/r/#/c/277330/ ) [releng]
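(These jobs are managed with Jenkins Job Builder from the integration/config repository; a minimal sketch of regenerating them, with the jjb/ directory and config file names as assumptions:)
    # dry run: render the job XML without touching Jenkins
    jenkins-jobs --conf jenkins_jobs.ini test jjb/ 'mwext-Wikibase-*'
    # push the regenerated configuration to the Jenkins master
    jenkins-jobs --conf jenkins_jobs.ini update jjb/ 'mwext-Wikibase-*'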
17:02 <Krinkle> Doing full Zuul restart due to deadlock (T128569) [releng]
10:18 <moritzm> re-enabled systemd unit for logstash on deployment-logstash2 [releng]
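(Re-enabling a systemd unit amounts to the following; the unit name is taken from the log entry above:)
    # on deployment-logstash2
    sudo systemctl enable logstash
    sudo systemctl start logstash
    systemctl status logstash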
2016-03-11
22:42 <legoktm> deploying https://gerrit.wikimedia.org/r/276901 [releng]
19:40 <legoktm> legoktm@integration-slave-trusty-1001:/mnt/jenkins-workspace/workspace$ sudo rm -rf mwext-Echo-testextension-* # because it was broken [releng]
2016-03-10
20:22 <hashar> Nodepool Image ci-jessie-wikimedia-1457641052 in wmflabs-eqiad is ready [releng]
20:19 <hashar> Refreshing Nodepool to include the 'varnish' package T128188 [releng]
20:05 <hashar> apt-get upgrade integration-slave-jessie1001 (bring in ffmpeg update and nodejs among other things) [releng]
12:22 <hashar> Nodepool Image ci-jessie-wikimedia-1457612269 in wmflabs-eqiad is ready [releng]
12:18 <hashar> Nodepool: rebuilding image to get mathoid/graphoid packages included (hopefully) T119693 T128280 [releng]
2016-03-09
17:56 <bd808> Cleaned up git clone state in deployment-tin.deployment-prep:/srv/mediawiki-staging/php-master and queued beta-code-update-eqiad to try again (T129371) [releng]
17:48 <bd808> Git clone at deployment-tin.deployment-prep:/srv/mediawiki-staging/php-master is in a completely horrible state. Investigating [releng]
17:22 <bd808> Fixed https://integration.wikimedia.org/ci/job/beta-mediawiki-config-update-eqiad/4452/ [releng]
17:19 <bd808> Manually cleaning up broken rebase in deployment-tin.deployment-prep:/srv/mediawiki-staging [releng]
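(Cleaning up a broken rebase in a staging clone is usually some variant of the following; the remote and branch names are assumptions, and the hard reset is a last resort since /srv/mediawiki-staging can carry local cherry-picks:)
    cd /srv/mediawiki-staging
    git status                      # see what state the clone is stuck in
    git rebase --abort              # back out of the half-finished rebase
    git fetch origin
    git reset --hard origin/master  # last resort: discards any local commits/cherry-picks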
16:27 <bd808> Removed cherry-pick of https://gerrit.wikimedia.org/r/#/c/274696 ; manually cleaned up systemd unit and restarted logstash on deployment-logstash2 [releng]
14:59 <hashar> Image ci-jessie-wikimedia-1457535250 in wmflabs-eqiad is ready T129345 [releng]
14:57 <hashar> Rebuilding snapshot image to get Xvfb enabled at boot time T129345 [releng]
13:04 <moritzm> cherrypicked patch to deployment-prep which provides a systemd unit for logstash [releng]