2015-10-29
10:21 <hashar> restarting Jenkins (java upgrade) [releng]
03:18 <thcipriani|afk> beta-scap-eqiad is broken again. looks like /srv/mediawiki-staging on mira should be owned by mwdeploy [releng]
02:50 <thcipriani|afk> hooray, fixed! [releng]
02:40 <thcipriani|afk> beta-scap-eqiad failing due to rsync-created mira:/srv/mediawiki-staging/.~tmp~ directory being owned by mwdeploy but with a uid of 993 instead of 603 (local mwdeploy) [releng]
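A minimal sketch of the fix implied by the entry above, assuming shell access on mira (the uid numbers come from the entry; the exact commands used are assumed):

    # show the numeric owner of the rsync temp directory (993 per the log)
    stat -c '%u %g %n' /srv/mediawiki-staging/.~tmp~

    # either re-own everything carrying the stale uid to the local mwdeploy (uid 603)...
    sudo find /srv/mediawiki-staging -uid 993 -exec chown 603:603 {} +

    # ...or simply remove the rsync temp directory and let the next scap run recreate it
    sudo rm -rf /srv/mediawiki-staging/.~tmp~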
00:12 <MaxSem> Manually fixed permissions on mw-config/portals, reinitialized submodule and synced [releng]
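A plausible reconstruction of that fix (the staging path, ownership, and sync invocation are assumptions; scap's sync-dir is named on the assumption it was the sync tool in use):

    cd /srv/mediawiki-staging                  # staging checkout, path assumed
    sudo chown -R mwdeploy:mwdeploy portals    # fix the bad ownership
    git submodule deinit -f portals            # discard the broken checkout
    git submodule update --init portals        # re-clone the submodule
    sync-dir portals 'fix portals permissions' # sync out, invocation assumed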
2015-10-28
19:06 <legoktm> deploying https://gerrit.wikimedia.org/r/249476 [releng]
16:00 <hashar> for integration/zuul.git , created branch labs-tox-deployment to be used to deploy Zuul with pip on labs instances [releng]
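Such a deployment branch can be consumed straight from git with pip; one plausible invocation on a labs instance (the /r/p/ anonymous mirror path is an assumption):

    pip install 'git+https://gerrit.wikimedia.org/r/p/integration/zuul.git@labs-tox-deployment#egg=zuul'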
15:23 <hashar> beta: deleting old deployment-parsoidcache02 (trusty) replaced by deployment-cache-parsoid05 (Jessie) [releng]
15:22 <hashar> beta: moved web proxy parsoid-beta.wmflabs.org to use http://deployment-cache-parsoid05.eqiad.wmflabs:80 [releng]
15:13 <hashar> zuul-merger will now use ZUUL_URL=git://gallium.wikimedia.org instead of ZUUL_URL=git://zuul.eqiad.wmnet ( https://gerrit.wikimedia.org/r/#/c/249389/ ) [releng]
14:47 <hashar> never mind, NFS is under maintenance [releng]
14:46 <hashar> rebooting deployment-parsoid05; seems NFS is flapping [releng]
10:01 <hashar> applying role::cache::parsoid to deployment-cache-parsoid05 [releng]
09:58 <hashar> Deleting deployment-parsoidcache02 (Trusty) 10.68.16.145 to be replaced with deployment-cache-parsoid05 10.68.20.102 (Jessie) [releng]
09:54 <hashar> beta: deleting deployment-cache-parsoid04 not enough disk space for /srv/ ( https://phabricator.wikimedia.org/T103660 ) [releng]
05:26 <legoktm> deploying https://gerrit.wikimedia.org/r/249349 [releng]
03:43 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/249341 [releng]
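For context, a Zuul reload re-reads the layout without aborting running jobs; the usual sequence on the Zuul server looks roughly like this (the config path and service name are assumptions):

    # pull the merged layout change onto the server
    cd /etc/zuul/wikimedia && sudo git pull

    # 'reload' sends SIGHUP to zuul-server, which re-reads layout.yaml
    sudo service zuul reload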
2015-10-27
20:42 <hashar> Nuking Nodepool instances using the previous snapshot ( ci-jessie-wikimedia-1445955240 ) [releng]
20:40 <hashar> Nodepool snapshot ci-jessie-wikimedia-1445977928 generated. Includes /usr/bin/rake ( puppet: https://gerrit.wikimedia.org/r/#/c/249219/ ) [releng]
20:31 <hashar> Generating new Nodepool snapshot ( https://wikitech.wikimedia.org/wiki/Nodepool#Manually_generate_a_new_snapshot ) to have 'rake' included ( puppet: https://gerrit.wikimedia.org/r/#/c/249219/ ) [releng]
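The three entries above are the usual refresh cycle: build a new snapshot image, then delete the instances still booted from the old one so Nodepool replaces them. With the Nodepool CLI of that era it looks roughly like this (the provider name is an assumption):

    # rebuild the snapshot image on the provider
    nodepool image-update wmflabs-eqiad ci-jessie-wikimedia

    # list nodes, then delete any still tied to the previous snapshot
    nodepool list
    nodepool delete <node-id>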
15:08 <ostriches> deploying master of scap to beta [releng]
2015-10-26
23:04 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/249015 [releng]
18:37 <legoktm> deploying https://gerrit.wikimedia.org/r/248929 [releng]
18:22 <MaxSem> Cherrypicked https://gerrit.wikimedia.org/r/#/c/248374/ on beta [releng]
16:03 <hashar> reenabling puppet on integration-slave-trusty-1013 [releng]
15:13 <hashar> Disabling puppet on integration-slave-trusty-1013 to apply the slave scripts local hack https://gerrit.wikimedia.org/r/#/c/248883/ . Should fix some weird qunit failures ( https://phabricator.wikimedia.org/T116565 ) [releng]
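The 15:13/16:03 pair above is the standard disable/hack/re-enable cycle; spelled out as commands it might look like this (the checkout path and fetch ref are assumptions; the reason string comes from the entry):

    # keep puppet from reverting the local hack
    sudo puppet agent --disable "slave scripts local hack https://gerrit.wikimedia.org/r/#/c/248883/"

    # cherry-pick the change into the local checkout (path and ref assumed)
    cd /srv/deployment/integration/slave-scripts
    sudo git fetch https://gerrit.wikimedia.org/r/integration/jenkins refs/changes/83/248883/1
    sudo git cherry-pick FETCH_HEAD

    # once verified, hand the host back to puppet (the 16:03 entry above)
    sudo puppet agent --enable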
05:11 <legoktm> deploying https://gerrit.wikimedia.org/r/248811 & https://gerrit.wikimedia.org/r/248812 [releng]
2015-10-24
05:58 <marxarel_> deployment-db2 data restored, replication working [releng]
03:54 <marxarelli> restoring deployment-db2 again ... [releng]
03:18 <marxarelli> finished restoring data on deployment-db2. replication is working once again [releng]
02:15 <marxarelli> restoring data on deployment-db2 [releng]
00:19 <legoktm> deploying https://gerrit.wikimedia.org/r/248576 [releng]
00:15 <legoktm> deploying https://gerrit.wikimedia.org/r/248562 [releng]
2015-10-23
23:05 <marxarelli> restoring deployment-db2 from dump [releng]
23:03 <legoktm> deploying https://gerrit.wikimedia.org/r/248551 [releng]
22:18 <marxarelli> dump of deployment-db1 failed due to "View 'labswiki.bounce_records' references invalid table(s)" [releng]
21:37 <marxarelli> dumping databases on deployment-db1 for restore of deployment-db2 [releng]
21:13 <marxarelli> deployment-db1 binlog deployment-db1-bin.000062 appears corrupt [releng]
20:54 <marxarelli> deployment-db2 shows slave io but slave sql failed on duplicate key [releng]
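Read bottom-up, the entries above are a classic re-seed: the slave's SQL thread died on a duplicate key (likely fallout from the corrupt binlog), so deployment-db2 was rebuilt from a dump of deployment-db1. A minimal sketch of that procedure, assuming a standard single-master MySQL/MariaDB setup (credentials, dump path, and options are assumptions):

    # on deployment-db2: confirm the breakage
    mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running|Last_SQL_Error'

    # on deployment-db1: dump with binlog coordinates embedded as a comment;
    # --force skips broken objects such as the labswiki.bounce_records view (22:18 entry)
    mysqldump --all-databases --single-transaction --master-data=2 --force > /srv/db1-dump.sql

    # on deployment-db2: stop replication, reload, re-point, restart
    mysql -e 'STOP SLAVE'
    mysql < /srv/db1-dump.sql
    mysql -e "CHANGE MASTER TO MASTER_LOG_FILE='<file from the dump comment>', MASTER_LOG_POS=<pos>; START SLAVE"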
18:59 <twentyafterfour> deleted atop.log.* files on deployment-bastion. when are we going to enlarge /var on this instance? grr [releng]
18:58 <marxarelli> Killed mysql process 15840440 on account of its gargantuan temp file filling up /mnt [releng]
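Finding and killing such a runaway thread is straightforward (the tmpdir location is an assumption; the thread id comes from the entry):

    # spot the long-running query behind the giant temp file
    mysql -e 'SHOW FULL PROCESSLIST'

    # large on-disk filesorts/temp tables land in tmpdir (here assumed under /mnt)
    sudo ls -lSh /mnt/tmp | head

    # terminate the offending thread
    mysql -e 'KILL 15840440'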
2015-10-22
10:36 <hashar> integration: cherry picked https://gerrit.wikimedia.org/r/#/c/244748/ "contint: install npm/grunt-cli with npm", giving it a try one host at a time [releng]
10:31 <hashar> integration: disabling puppet: salt --show-timeout --timeout=10 '*' cmd.run 'puppet agent --disable "install npm/grunt-cli via puppet https://gerrit.wikimedia.org/r/#/c/244748/"' [releng]
10:05 <hashar> salt-key -d deployment-logstash2.eqiad.wmflabs [releng]
10:05 <hashar> salt-key -d deployment-urldownloader.eqiad.wmflabs [releng]
10:04 <hashar> integration: clean up downloaded apt packages which are filling /var/cache/apt/archives on a few instances: salt --show-timeout '*' cmd.run 'apt-get clean' [releng]
10:03 <hashar> beta: clean up downloaded apt packages which are filling /var/cache/apt/archives on a few instances (e.g. 4 GB on mediawiki02): salt --show-timeout '*' cmd.run 'apt-get clean' [releng]
09:46 <hashar> beta-cluster: I have deleted some incorrect salt minions with: salt-key -d i-0000* [releng]