2016-07-13 §
13:46 <hashar> T137525 Stopped zuul that ran in a terminal (with -d). Started it with the init script. [releng]
12:36 <hashar> T137525 Upgrading Zuul 2.1.0-95-g66c8e52-wmf1precise1 ... zuul_2.1.0-151-g30a433b-wmf1precise1_amd64.deb [releng]
11:37 <hashar> apt-get upgrade on deployment-mediawiki02 [releng]
08:33 <hashar> removing deployment-parsoid05 from the Jenkins slaves T140218 [releng]
2016-07-12 §
20:29 <hashar> integration: force running unattended upgrade on all instances: salt --batch 4 -v '*' cmd.run 'unattended-upgrade' . That upgrades diamond and hhvm among others. imagemagick-common has a prompt though [releng]
20:22 <hashar> CI force running puppet on all instances: salt --batch 5 -v '*' puppet.run [releng]
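The two entries above fan a command out across all instances with a bounded batch size (`salt --batch 4/5`). A local analogue of that bounded fan-out, using `xargs -P` and placeholder host names instead of real minions:

```shell
#!/bin/sh
set -e
# Local sketch of `salt --batch 4 -v '*' cmd.run '...'`: run a command for a
# list of "hosts" at most 4 at a time. Host names and log path are placeholders.
rm -f /tmp/salt-batch-demo.log
printf '%s\n' host1 host2 host3 host4 host5 host6 \
  | xargs -P4 -I{} sh -c 'echo "{}: upgraded" >> /tmp/salt-batch-demo.log'
sort /tmp/salt-batch-demo.log
```

The `-P4` flag caps concurrency the way `--batch 4` does on the salt side; the real commands were `cmd.run 'unattended-upgrade'` and `puppet.run`.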
20:04 <hashar> Maybe fix unattended upgrade on the CI slaves via https://gerrit.wikimedia.org/r/298568 [releng]
16:43 <Amir1> deploying f472f65 to ores-beta [releng]
10:11 <hashar> Github created repos operations-debs-contenttranslation-apertium-mk-en and operations-docker-images-toollabs-images for Gerrit replication [releng]
2016-07-11 §
14:24 <hashar> Removing ZeroMQ config from the Jenkins jobs. It is now enabled globally. T139923 [releng]
10:15 <hashar> T136188: on Trusty slaves, upgrading Chromium from v49 to v51: salt -v '*slave-trusty-*' cmd.run 'apt-get -y install chromium-browser chromium-chromedriver chromium-codecs-ffmpeg-extra' [releng]
10:13 <hashar> T136188: salt -v '*slave-trusty*' cmd.run 'rm /etc/apt/preferences.d/chromium-*' [releng]
10:09 <hashar> Unpinning Chromium v49 from the Trusty slaves and upgrading to v51 for T136188 [releng]
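For context on the unpinning above: the removed `/etc/apt/preferences.d/chromium-*` files would have held Chromium at v49 with an apt preferences stanza roughly like this (a hypothetical reconstruction of the shape, not the actual file):

```
# /etc/apt/preferences.d/chromium-49 (hypothetical reconstruction)
Package: chromium-browser chromium-chromedriver chromium-codecs-ffmpeg-extra
Pin: version 49.*
Pin-Priority: 1001
```

Deleting the pin lets `apt-get install` pull the newest candidate (v51) on the next run, which is what the 10:15 entry does.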
09:34 <zeljkof> Enabled ZMQ Event Publisher on all Jobs in Jenkins [releng]
2016-07-09 §
18:57 <legoktm> deploying https://gerrit.wikimedia.org/r/297731 and https://gerrit.wikimedia.org/r/298142 [releng]
14:07 <bd808> Testing logstash change https://gerrit.wikimedia.org/r/#/c/298115/ via cherry-pick [releng]
2016-07-08 §
16:08 <hashar> scandium: git -C /srv/ssd/zuul/git/mediawiki/services/graphoid remote set-head origin --auto [releng]
16:06 <hashar> scandium: git -C /srv/ssd/zuul/git/mediawiki/services/graphoid init && git -C /srv/ssd/zuul/git/mediawiki/services/graphoid remote add origin ssh://jenkins-bot@ytterbium.wikimedia.org:29418/mediawiki/services/graphoid [releng]
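The two scandium entries above seed a fresh git checkout for Zuul's merger: `init`, `remote add origin`, then `remote set-head origin --auto`. A self-contained local sketch of the same pattern, with a throwaway path and a local bare repo standing in for the Gerrit SSH URL:

```shell
#!/bin/sh
set -e
# Sketch of seeding a zuul-merger checkout. REPO and ORIGIN_URL are
# placeholders; the real origin was the ssh://jenkins-bot@... Gerrit URL.
rm -rf /tmp/zuul-git-demo
REPO=/tmp/zuul-git-demo/mediawiki-services-graphoid
ORIGIN_URL=/tmp/zuul-git-demo/origin.git

git init --bare "$ORIGIN_URL"       # fake upstream so the sketch is runnable offline
git init "$REPO"
git -C "$REPO" remote add origin "$ORIGIN_URL"
# On the real host the follow-up was: git -C "$REPO" remote set-head origin --auto
git -C "$REPO" remote get-url origin
```

`set-head --auto` needs a reachable remote, so it is only shown as a comment here.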
14:59 <hashar> nodepool: rebuilt Trusty image from scratch; Image ci-trusty-wikimedia-1467989709 in wmflabs-eqiad is ready [releng]
12:35 <hashar> beta: find /data/project/upload7/*/*/thumb -type f -atime +30 -delete [releng]
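The cleanup above deletes thumbnail files whose access time is older than 30 days. A minimal sketch of the same `find -atime +30 -delete` pattern against a scratch directory (the real path was under /data/project/upload7):

```shell
#!/bin/sh
set -e
# Sketch of the atime-based thumbnail purge; BASE is a placeholder directory.
rm -rf /tmp/thumb-purge-demo
BASE=/tmp/thumb-purge-demo
mkdir -p "$BASE/thumb"
touch "$BASE/thumb/fresh.jpg"
touch -a -d '2000-01-01' "$BASE/thumb/stale.jpg"   # fake a very old access time
# Same shape as the beta cleanup: delete files not read in the last 30 days.
find "$BASE"/thumb -type f -atime +30 -delete
ls "$BASE/thumb"
```

Note that on filesystems mounted with `noatime` the access time never advances, so in practice such purges may delete more than expected; the sketch sets atimes explicitly to stay deterministic.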
10:31 <hashar> beta: mass delete http://commons.wikimedia.beta.wmflabs.org/wiki/Category:GWToolset_Batch_Upload files T64835 [releng]
10:26 <hashar> beta: mass delete http://commons.wikimedia.beta.wmflabs.org/wiki/Category:GWToolset_Batch_Upload files [releng]
2016-07-07 §
21:41 <MaxSem> Chowned php-master/vendor back to jenkins-deploy [releng]
13:10 <hashar> deleting integration-slave-trusty-1024 and integration-slave-trusty-1025 to free up some RAM. We have enough permanent Trusty slaves. T139535 [releng]
02:43 <MaxSem> started redis-server on deployment-stream [releng]
01:14 <bd808> Restarted logstash on deployment-logstash2 [releng]
01:12 <MaxSem> Leaving my hacks for the night to collect data, if needed revert with cd /srv/mediawiki-staging/php-master/vendor && sudo git reset --hard HEAD && sudo chown -hR jenkins-deploy:wikidev . [releng]
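The revert recipe above combines `git reset --hard HEAD` (drop uncommitted local hacks) with a `chown` back to jenkins-deploy. A self-contained sketch of the reset half in a throwaway repo; the chown needs root and the real path, so it appears only as a comment:

```shell
#!/bin/sh
set -e
# Sketch of reverting uncommitted local hacks with git reset --hard.
rm -rf /tmp/vendor-revert-demo
git init /tmp/vendor-revert-demo
cd /tmp/vendor-revert-demo
echo 'hack' > composer.json            # simulate a committed state
git add composer.json
git -c user.email=demo@example.org -c user.name=demo commit -m base
echo 'more hacks' >> composer.json     # simulate an uncommitted debugging edit
git reset --hard HEAD                  # drops the uncommitted edit, keeps HEAD
# Real host also needed: sudo chown -hR jenkins-deploy:wikidev .
cat composer.json
```

`reset --hard HEAD` only discards changes to tracked files; untracked debris would additionally need `git clean`.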
00:50 <bd808> Rebooting deployment-logstash3.eqiad.wmflabs; console full of hung process messages from kernel [releng]
00:27 <MaxSem> Initialized ORES on all wikis where it's enabled; the missing initialization was causing job failures [releng]
00:13 <MaxSem> Debugging a fatal in betalabs, might cause syncs to fail [releng]
2016-07-06 §
20:30 <hashar> beta: restarted mysql on both db1 and db2 so it takes into account the --syslog setting T119370 [releng]
20:08 <hashar> beta: on db1 and db2 move the MariaDB 'syslog' setting under [mysqld_safe] section. Cherry picked https://gerrit.wikimedia.org/r/#/c/296713/3 and reloaded mysql on both instances. T119370 [releng]
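The config move above matters because the `syslog` option is read by the `mysqld_safe` wrapper, not by `mysqld` itself; under the wrong section it is silently ignored. The relevant my.cnf fragment would look roughly like this (a hypothetical sketch of the shape, not the actual cherry-picked file):

```
# my.cnf fragment (hypothetical): 'syslog' must live under [mysqld_safe]
# for the wrapper to redirect the error log to syslog.
[mysqld_safe]
syslog
```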
14:54 <hashar> Image ci-jessie-wikimedia-1467816381 in wmflabs-eqiad is ready T133779 [releng]
14:47 <hashar_> attempting to refresh ci-jessie-wikimedia image to get librdkafka-dev included for T133779 [releng]
2016-07-05 §
21:54 <hasharAway> CI has drained the gate-and-submit queue [releng]
21:37 <hasharAway> Nodepool: ran nodepool delete on a few instances that would never spawn / had been stuck for ~40 minutes [releng]
2016-07-04 §
18:58 <hashar> Upgrading arcanist on permanent CI slaves since xhpast was broken T137770 [releng]
12:50 <yuvipanda> migrating deployment-tin to labvirt1011 [releng]
2016-07-03 §
13:10 <paladox> phabricator: updated phab-01, phab-05 (phab-02) and phab-03 to fix a security bug in Phabricator (did the update last night but forgot to log it) [releng]
12:04 <jzerebecki> reloading zuul for 7e6a2e2..13ea50f [releng]
2016-07-02 §
13:37 <jzerebecki> reloading zuul for 15127b2..7e6a2e2 [releng]
2016-06-30 §
10:31 <hashar> Deleting integration-slave-trusty-1015: cannot bring up mysql (T138074) and the ssh slave connection would not hold anyway; must be broken somehow [releng]
10:04 <hashar> Attempting to refresh Nodepool image for Jessie ( ci-jessie-wikimedia ). It had been stalled for 284 hours (12 days) [releng]
09:36 <hashar> Trusty is missing the package arcanist ... :( [releng]
09:35 <hashar> Attempting to refresh Nodepool image for Trusty ( ci-trusty-wikimedia ). It had been stalled for 283 hours (12 days) [releng]
2016-06-28 §
21:33 <halfak> deploying ores beec291 [releng]
21:15 <halfak> deploying ores 6979a98 [releng]
2016-06-27 §
22:32 <eberhardson> deployment-prep deployed gerrit.wikimedia.org/r/296279 to puppetmaster to test kibana4 role [releng]
19:41 <bd808> Rebooting deployment-logstash3.eqiad.wmflabs via wikitech. Console log full of blocked kworker messages, ssh non-responsive, and logstash records were blocked from being recorded. [releng]
18:20 <thcipriani> deployment-puppetmaster.deployment-prep:/var/lib/git/labs/private modules/secret/secrets/keyholder keys conflicts resolved [releng]