2016-07-18
12:10 <hashar> (restarted qa-morebots) [releng]
12:10 <hashar> Enabling puppet again on integration-slave-precise-1002, removing the Zuul-server config, and adding the slave back to the Jenkins pool [releng]
08:32 <hashar> gallium: upgrading Zuul from 2.1.0-95-g66c8e52-wmf1precise1 to zuul_2.1.0-151-g30a433b-wmf3precise1 [releng]
2016-07-16
23:19 <paladox> testing morebots [releng]
2016-07-15
08:34 <hashar> Unpooling integration-slave-precise-1002; will temporarily use it as a zuul-server test instance [releng]
2016-07-14
18:54 <ebernhardson> deployment-prep: manually edited elasticsearch.yml on deployment-elastic05 and restarted Elasticsearch to get it listening on eth0. Still looking into why puppet wrote out the wrong config file [releng]
09:05 <Amir1> rebooting deployment-ores-redis [releng]
08:29 <Amir1> deploying 0e9555f to ores-beta (sca03) [releng]
2016-07-13
16:05 <urandom> Installing Cassandra 2.2.6-wmf1 on deployment-restbase0[1-2].deployment-prep.eqiad.wmflabs: T126629 [releng]
13:58 <hashar> T137525 reverted Zuul back to zuul_2.1.0-95-g66c8e52-wmf1precise1_amd64.deb. It could not connect to Gerrit reliably [releng]
13:46 <hashar> T137525 Stopped the zuul process that was running in a terminal (with -d); started it via the init script. [releng]
12:36 <hashar> T137525 Upgrading Zuul from 2.1.0-95-g66c8e52-wmf1precise1 to zuul_2.1.0-151-g30a433b-wmf1precise1_amd64.deb [releng]
11:37 <hashar> apt-get upgrade on deployment-mediawiki02 [releng]
08:33 <hashar> removing deployment-parsoid05 from the Jenkins slaves T140218 [releng]
2016-07-12
20:29 <hashar> integration: force-running unattended upgrades on all instances: salt --batch 4 -v '*' cmd.run 'unattended-upgrade'. That upgrades diamond and hhvm, among others; imagemagick-common has a prompt, though [releng]
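The log does not record how the imagemagick-common prompt was handled. A common way to keep a salt-driven upgrade fully non-interactive, offered here only as a hedged sketch (not what was necessarily run), is to force the noninteractive debconf frontend and the standard dpkg conffile defaults:

    # Sketch only: suppress apt/dpkg prompts (e.g. conffile questions) during a
    # fleet-wide upgrade pushed through salt. The flags are standard apt/dpkg options.
    salt --batch 4 -v '*' cmd.run \
      "DEBIAN_FRONTEND=noninteractive apt-get -y \
        -o Dpkg::Options::='--force-confdef' \
        -o Dpkg::Options::='--force-confold' \
        dist-upgrade"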
20:22 <hashar> CI: force-running puppet on all instances: salt --batch 5 -v '*' puppet.run [releng]
20:04 <hashar> Maybe fixed unattended upgrades on the CI slaves via https://gerrit.wikimedia.org/r/298568 [releng]
16:43 <Amir1> deploying f472f65 to ores-beta [releng]
10:11 <hashar> GitHub: created repos operations-debs-contenttranslation-apertium-mk-en and operations-docker-images-toollabs-images for Gerrit replication [releng]
2016-07-11
14:24 <hashar> Removing ZeroMQ config from the Jenkins jobs. It is now enabled globally. T139923 [releng]
10:15 <hashar> T136188: on Trusty slaves, upgrading Chromium from v49 to v51: salt -v '*slave-trusty-*' cmd.run 'apt-get -y install chromium-browser chromium-chromedriver chromium-codecs-ffmpeg-extra' [releng]
10:13 <hashar> T136188: salt -v '*slave-trusty*' cmd.run 'rm /etc/apt/preferences.d/chromium-*' [releng]
10:09 <hashar> Unpinning Chromium v49 from the Trusty slaves and upgrading to v51 for T136188 [releng]
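For context, the pins removed in the 10:13 entry were apt preferences files. Their actual contents are not in the log; a file of that kind would have looked roughly like this hypothetical reconstruction:

    # Hypothetical reconstruction of a removed /etc/apt/preferences.d/ pin;
    # the real file name and contents were not logged.
    cat <<'EOF' > /etc/apt/preferences.d/chromium-pin
    Package: chromium-browser chromium-chromedriver chromium-codecs-ffmpeg-extra
    Pin: version 49.*
    Pin-Priority: 1001
    EOF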
09:34 <zeljkof> Enabled ZMQ Event Publisher on all Jobs in Jenkins [releng]
2016-07-09
18:57 <legoktm> deploying https://gerrit.wikimedia.org/r/297731 and https://gerrit.wikimedia.org/r/298142 [releng]
14:07 <bd808> Testing logstash change https://gerrit.wikimedia.org/r/#/c/298115/ via cherry-pick [releng]
2016-07-08
16:08 <hashar> scandium: git -C /srv/ssd/zuul/git/mediawiki/services/graphoid remote set-head origin --auto [releng]
16:06 <hashar> scandium: git -C /srv/ssd/zuul/git/mediawiki/services/graphoid init && git -C /srv/ssd/zuul/git/mediawiki/services/graphoid remote add origin ssh://jenkins-bot@ytterbium.wikimedia.org:29418/mediawiki/services/graphoid [releng]
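Read bottom-up (the log is reverse-chronological), the two scandium entries above bootstrap a local repository for the Zuul merger. In chronological order the sequence is roughly:

    # The same steps in chronological order; REPO is introduced here for brevity,
    # and the mkdir is an assumption (git -C needs the directory to exist).
    REPO=/srv/ssd/zuul/git/mediawiki/services/graphoid
    mkdir -p "$REPO"
    git -C "$REPO" init
    git -C "$REPO" remote add origin \
      ssh://jenkins-bot@ytterbium.wikimedia.org:29418/mediawiki/services/graphoid
    git -C "$REPO" remote set-head origin --auto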
14:59 <hashar> nodepool: rebuilt the Trusty image from scratch; Image ci-trusty-wikimedia-1467989709 in wmflabs-eqiad is ready [releng]
12:35 <hashar> beta: find /data/project/upload7/*/*/thumb -type f -atime +30 -delete [releng]
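An editorial aside, not part of the original log: the same find invocation can be previewed before deleting by running it with -print first:

    # Dry run first, then delete thumbnails not accessed in the last 30 days.
    find /data/project/upload7/*/*/thumb -type f -atime +30 -print
    find /data/project/upload7/*/*/thumb -type f -atime +30 -delete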
10:31 <hashar> beta: mass-deleting files in http://commons.wikimedia.beta.wmflabs.org/wiki/Category:GWToolset_Batch_Upload T64835 [releng]
2016-07-07
21:41 <MaxSem> Chowned php-master/vendor back to jenkins-deploy [releng]
13:10 <hashar> deleting integration-slave-trusty-1024 and integration-slave-trusty-1025 to free up some RAM. We have enough permanent Trusty slaves. T139535 [releng]
02:43 <MaxSem> started redis-server on deployment-stream [releng]
01:14 <bd808> Restarted logstash on deployment-logstash2 [releng]
01:12 <MaxSem> Leaving my hacks in place overnight to collect data; if needed, revert with: cd /srv/mediawiki-staging/php-master/vendor && sudo git reset --hard HEAD && sudo chown -hR jenkins-deploy:wikidev . [releng]
00:50 <bd808> Rebooting deployment-logstash3.eqiad.wmflabs; console was full of hung-process messages from the kernel [releng]
00:27 <MaxSem> Initialized ORES on all wikis where it's enabled; the missing initialization was causing job failures [releng]
00:13 <MaxSem> Debugging a fatal in betalabs; it might cause syncs to fail [releng]
2016-07-06
20:30 <hashar> beta: restarted mysql on both db1 and db2 so they take into account the --syslog setting T119370 [releng]
20:08 <hashar> beta: on db1 and db2, moved the MariaDB 'syslog' setting under the [mysqld_safe] section. Cherry-picked https://gerrit.wikimedia.org/r/#/c/296713/3 and reloaded mysql on both instances. T119370 [releng]
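In my.cnf terms, the cherry-picked change amounts to the fragment below. This is a sketch only: the authoritative diff is the Gerrit change above, and puppet manages the actual file rather than an append like this.

    # mysqld_safe logs to syslog when the bare 'syslog' option sits in its section.
    cat <<'EOF' >> /etc/mysql/my.cnf
    [mysqld_safe]
    syslog
    EOF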
14:54 <hashar> Image ci-jessie-wikimedia-1467816381 in wmflabs-eqiad is ready T133779 [releng]
14:47 <hashar_> attempting to refresh ci-jessie-wikimedia image to get librdkafka-dev included for T133779 [releng]
2016-07-05
21:54 <hasharAway> CI has drained the gate-and-submit queue [releng]
21:37 <hasharAway> Nodepool: ran nodepool delete on a few instances that would never spawn / had been stuck for ~40 minutes [releng]
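The node IDs from that evening are not logged. The usual nodepool CLI flow for clearing stuck nodes looks like this sketch:

    # List nodes, spot the ones stuck in 'building', then delete them by ID.
    nodepool list
    nodepool delete 123456   # 123456 is a hypothetical node ID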
2016-07-04
18:58 <hashar> Upgrading arcanist on permanent CI slaves since xhpast was broken T137770 [releng]
12:50 <yuvipanda> migrating deployment-tin to labvirt1011 [releng]
2016-07-03
13:10 <paladox> phabricator: updated phab-01, phab-05 (phab-02), and phab-03 to fix a security bug in Phabricator (did the update last night but forgot to log it) [releng]